Responsibilities:
· Collaborate with product teams, data analysts and data scientists to design and build data-forward solutions
· Drive and maintain a culture of quality, innovation and experimentation
· Provide prescriptive point-solution architectures and guide descriptive architectures within assigned modules
· Be accountable for the availability, stability, scalability, security, and recoverability enabled by the designs
· Own technical decisions for the solution and guide application developers in creating architectural decisions and artifacts
Qualifications:
· Passion for producing high-quality software and solutions, readiness to jump in and solve complex problems, and willingness to interact with users to understand requirements and deliver solutions as a design-build-run engineer
· Bachelor’s or Master’s Degree in Computer Science/related field or equivalent working experience
· 5+ years of hands-on experience with Hadoop, streaming technologies, and the broader Hadoop ecosystem
· Experience designing, developing, and maintaining software frameworks using Spark, Hadoop MapReduce, Kafka, and Java/Scala/Python
· Experience utilizing distributed computing architectures
· Strong understanding of storage and compute separation
· Hands-on experience developing data pipeline and data-prep modules in Java, Scala, or Python on MapReduce or Spark (see the illustrative sketch below)
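For illustration only, a minimal sketch of the kind of data-prep module named in the last qualification: a small Spark batch job in Scala that reads raw events from object storage, cleans them, and writes partitioned Parquet, with compute (the Spark cluster) kept separate from storage (the object store). The EventPrepJob name, paths, and column names are hypothetical, not part of the role description.

// Minimal, illustrative Spark data-prep job (hypothetical names and paths)
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventPrepJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-prep")
      .getOrCreate()

    // Storage/compute separation: data lives in external object storage,
    // the Spark cluster only provides compute (bucket path is hypothetical)
    val raw = spark.read.json("s3a://example-bucket/raw/events/")

    val cleaned = raw
      .filter(col("event_id").isNotNull)                    // drop malformed records
      .withColumn("event_date", to_date(col("event_ts")))   // derive partition column
      .dropDuplicates("event_id")                           // de-duplicate by key

    // Write prepared data back to object storage, partitioned by date
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/prepared/events/")

    spark.stop()
  }
}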