Job Description

Must-have skills: Java, Spark, and AWS

Snowflake: good knowledge/experience preferred.

• Design, develop, and maintain scalable data pipelines using Apache Spark and Java.

• Implement data processing workflows and ETL processes to ingest, transform, and store large volumes of data.

• Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.

• Optimize and tune data processing jobs for performance and cost efficiency.

• Ensure data quality, integrity, and security across all data pipelines and storage solutions.

• Develop and maintain data models, schemas, and documentation.

• Monitor and troubleshoot data pipeline issues, ensuring high availability and reliability.

• Hands-on experience with AWS services, including S3, EMR, Lambda, and Glue, as well as Snowflake.

• Experience with SQL and NoSQL databases.

• Experience with CI/CD tooling such as Jules and Spinnaker.


Education

Any Graduate