Job Description:
·        Design, develop, and maintain scalable data pipelines using Apache Spark and Java (a brief sketch follows this list).
·        Implement data processing workflows and ETL processes to ingest, transform, and store large volumes of data.
·        Optimize and tune data processing jobs for performance and cost-efficiency.
·        Ensure data quality, integrity, and security across all data pipelines and storage solutions.
·        Develop and maintain data models, schemas, and documentation.
·        Hands-on experience with AWS services, including S3, EMR, Lambda, and Glue.
·        Experience with SQL and NoSQL databases.
·        Experience with CI/CD pipelines using Jules and Spinnaker.
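
As a rough illustration of the Spark-and-Java pipeline work described above, here is a minimal sketch only; the S3 paths, column names, and app name are placeholders, not details from the posting:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

// Minimal ETL job: ingest raw JSON from S3, filter it, and write curated Parquet back.
public class OrdersEtl {
    public static void main(String[] args) {
        // On EMR, the session picks up cluster configuration automatically.
        SparkSession spark = SparkSession.builder()
                .appName("orders-etl")
                .getOrCreate();

        // Ingest: read raw JSON landed in S3 (placeholder path).
        Dataset<Row> raw = spark.read().json("s3://example-bucket/raw/orders/");

        // Transform: drop rows missing a key and keep only completed orders.
        Dataset<Row> curated = raw
                .filter(col("order_id").isNotNull())
                .filter(col("status").equalTo("COMPLETED"));

        // Store: partitioned Parquet for downstream consumers
        // (e.g., queried via Glue/Athena or loaded into Snowflake).
        curated.write()
                .mode("overwrite")
                .partitionBy("order_date")
                .parquet("s3://example-bucket/curated/orders/");

        spark.stop();
    }
}
```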

Snowflake: good working knowledge or hands-on experience preferred.
Long-term contract; W2 or C2C.

Education

Any Graduate