Job Description:
• At least 8 years of experience developing data ingestion, data processing, and analytical pipelines for big data, relational database, NoSQL, and data warehouse solutions.
• Extensive experience providing practical direction within the AWS-native and Hadoop ecosystems.
• Experience with private and public cloud architectures, their pros and cons, and migration considerations.
• Minimum of 5 years of hands-on experience with AWS and Big Data technologies such as Java, Node.js, C#, Python, SQL, EC2, S3, Glue, Lambda, Spark/SparkSQL, Hive/MR, Pig, and Oozie, and streaming technologies such as Kafka, Kinesis, NiFi, etc.
• 5+ years of hands-on experience with programming languages and tools such as Java, C#, Node.js, Python, PySpark, Spark, SQL, and Unix shell/Perl scripting, etc.
• Experience working with DevOps tools such as GitLab, Jenkins, CodeBuild, CodePipeline, CodeDeploy, etc.
Must have at least one of the following AWS certifications:
• AWS Certified Developer - Associate
• AWS Certified DevOps Engineer - Professional
• AWS Certified Big Data - Specialty
Any Graduate