Description

Must-Have

Working experience in PySpark and Scala
Experience working in Big Data environments
Experience with Hadoop, Hive, and Sqoop
Experience with Python and Big Data applications
Experience building PySpark applications on Linux
Experience developing API services using Big Data libraries
Experience working on AWS Cloud
Extensive experience in Big Data projects, particularly data ingestion and transformation projects
Total experience of 5 years or more, including 3+ years of relevant experience on Scala/Spark-based Big Data projects
Experience coding in Scala and Spark, with a strong programming background
Good knowledge of the DataFrame API and Spark SQL (illustrated in the sketch after this list)
Experience working with AWS Glue / AWS EMR / CDH / Databricks
Experience working with Cloud Services
Knowledge of Git and Maven/SBT
Knowledge of Agile methodology and an understanding of DevOps practices
Excellent communication skills and the ability to work independently or as part of a team
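
To give a sense of the DataFrame API and Spark SQL work involved in the ingestion and transformation projects described above, here is a minimal Scala sketch; the S3 paths, column names, and view name are hypothetical placeholders, not details of any actual project.

// Minimal sketch of a Spark ingestion-and-transformation job in Scala.
// The S3 paths, column names, and view name are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object IngestAndTransform {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("IngestAndTransform")
      .getOrCreate()

    // Ingest: read raw CSV data into a DataFrame.
    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/orders/")

    // Transform with the DataFrame API: cast the amount column
    // and aggregate totals per customer.
    val totals = raw
      .withColumn("amount", col("amount").cast("double"))
      .groupBy("customer_id")
      .agg(sum("amount").as("total_amount"))

    // The same aggregation expressed in Spark SQL.
    raw.createOrReplaceTempView("orders")
    val totalsSql = spark.sql(
      """SELECT customer_id, SUM(CAST(amount AS DOUBLE)) AS total_amount
        |FROM orders
        |GROUP BY customer_id""".stripMargin)

    // Persist the curated output as Parquet.
    totals.write.mode("overwrite").parquet("s3://example-bucket/curated/order_totals/")

    spark.stop()
  }
}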

Good-to-Have

Familiarity with the Agile/Scrum development cycle, Jira, Veracode, and SonarQube

Education

Any Graduate