Job Description:
Requirements:
• BS degree in computer science, computer engineering, or equivalent
• 7-8 years of experience delivering enterprise software solutions
• Proficient in Spark, Scala, Python, and AWS cloud technologies
• 3+ years of experience across multiple Hadoop/Spark ecosystem technologies such as MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, and Scala
• Flair for data, schemas, and data modeling, and for bringing efficiency to the big data life cycle
• Must be able to quickly understand technical and business requirements and translate them into technical implementations
• Experience with Agile Development methodologies
• Experience with data ingestion and transformation
• Solid understanding of secure application development methodologies
• Experience developing microservices using the Spring Framework is a plus
• Experience with Airflow and Python is preferred
• Understanding of automated QA needs related to big data
• Strong object-oriented design and analysis skills