Requirements:
The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
A minimum of 10 years of overall experience in either Java/J2EE or the Data Warehousing domain
A minimum of 5 years of hands-on experience with Big Data tools such as Hive, Spark, and the HDFS ecosystem
Programming knowledge of Python or Scala is a must.
Excellent knowledge of SQL, Linux shell scripting, and Control-M scheduling.
Experience with DevOps-based deployment and knowledge of Git, Jenkins, and Rundeck
Excellent analytical and problem-solving skills
Knowledge of cloud platforms (AWS, Azure, GCP) is an added advantage
Excellent experience with analytics-related activities on Lily
Bachelor's degree
Responsibilities:
Responsible for the documentation, design, development, and testing of Hadoop reporting and analytical applications
Convert functional requirements into detailed technical designs
Adhere to the Scrum timeline and deliver accordingly
Prepare unit/SIT/UAT test cases and log the results
Coordinate SIT and UAT testing; gather feedback and provide necessary remediation/recommendations in a timely manner.
Drive small projects individually.
Coordinate changes and deployments in a timely manner