Description

Develop data pipelines to ingest data and implement ETL using tools such as NiFi, Kafka, Spark, and Hive. Store, retrieve, and manipulate data to analyze system capabilities and requirements. Design, develop, and modify software systems to predict and measure the outcomes and consequences of design decisions. Build frameworks for data storage and computation using AWS services such as S3, EC2, Lambda, EMR, and Redshift. Develop and maintain mapping logic for Hadoop data sets drawn from a variety of source systems to support data integration and the creation of consumption models. Automate the data migration process using tools such as Sqoop and Oozie, and automate routine tasks with shell scripting.

Requires a Master's degree in Computer Science, Engineering, or a related field and 1 year of experience, or a Bachelor's degree in Computer Science, Engineering, or a related field and 5 years of progressive experience.

Education

Any graduate