Role Description
• Utilize the Hadoop ecosystem to land, transform, and store data, making it available for analytics.
• Make use of real-time streaming architecture to move and process data.
• Meet with business customers to design and propose solutions that fulfill critical business needs.
• Implement solutions that meet IT standards, procedures, and security requirements, with quality.
• Act as a full stack developer by working with many disparate and diverse technologies.
• Actively participate in all agile ceremonies, such as backlog refinement, standup, iteration closure, and iteration retrospective.
• Review ongoing production software operations and troubleshoot production issues.
• Utilize technical knowledge and connected vehicle architecture to suggest, design, and implement optimal Big Data solutions.
• Utilize Continuous Integration / Continuous Delivery and Test Driven Development to deliver software with quality.
• Lead architecture and design discussions to devise optimal solutions.
• Guide and coach other software engineers on best practices.
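The MapReduce model named in this role's skill set can be illustrated in plain Java. The sketch below is purely conceptual: it uses `java.util.stream` as a stand-in for the Hadoop API, showing the map (tokenize), shuffle (group by key), and reduce (count per key) phases on a word-count example.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Conceptual sketch of the map/reduce pattern using stdlib streams,
// not the Hadoop MapReduce API itself.
public class WordCount {
    static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+")) // "map": tokenize
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(w -> w,          // "shuffle": group by key
                        Collectors.counting()));                // "reduce": count per key
    }

    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```

In Hadoop, the same three phases run distributed across HDFS blocks, with mappers and reducers scheduled per node rather than within one JVM.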
Required Skills & Experience
o Firm understanding of the following big data technologies: MapReduce, Oozie, Hive, HBase, HDFS, Spark, Storm, Kafka, and NiFi.
o Additional technical experience required: PCF, Spring Boot, Java, and Linux.
o Experience with CI/CD systems such as Jenkins.
o Understanding of various big data batch and streaming architectures and design.
o Ability to utilize real-time streaming architecture to interact with, and land data from, streaming sources.
o Experience with GitHub and AccuRev SCM systems.
o Experience with Agile practices.
o Self-starter and good communicator.
o Knowledge of analytics customer use cases.
o Minimum of 5 years of experience in the following big data technologies: MapReduce, Oozie, Hive, HBase, HDFS, Spark, Storm, Kafka, and NiFi.
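The "interact with, and land data from, streaming sources" skill above follows a consume-transform-land loop. The sketch below is a hypothetical, self-contained illustration: an in-memory `BlockingQueue` stands in for a Kafka topic, and a plain list stands in for the landing sink (HDFS or HBase in a real pipeline).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical consume-transform-land loop. The queue stands in for a
// Kafka topic; the returned list stands in for the landing zone.
public class StreamLander {
    static List<String> drain(BlockingQueue<String> topic) throws InterruptedException {
        List<String> sink = new ArrayList<>();
        String record;
        // Poll until the topic goes quiet, transforming each record before landing it.
        while ((record = topic.poll(100, TimeUnit.MILLISECONDS)) != null) {
            sink.add(record.trim().toUpperCase()); // "transform" step
        }
        return sink;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> topic = new ArrayBlockingQueue<>(16);
        topic.put(" vin-123 ");
        topic.put("speed:42");
        System.out.println(drain(topic));
    }
}
```

A production version would replace the queue with a `KafkaConsumer` poll loop and the list with writes to durable storage, with offset commits after each successful land.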
Any Graduate