Job Description:
We are seeking a highly experienced Data Engineer to join our client on a long-term contract in the banking sector.
- Lead Snowflake implementations and optimizations.
- Process real-time data from AWS S3 data lakes and mainframe (COBOL) data sources.
- Perform performance tuning and query optimization in Snowflake.
- Handle high-volume data pipelines, ensuring reliability and efficiency.
- Transform COBOL data sources and convert JCL jobs to Python-based ETL processes.
- Utilize AWS Glue ETL, AWS Lake Formation, or compatible ETL tools for data transformation.
- Implement Snowflake features such as Snowpipe, bulk COPY, Tasks, Streams, Stored Procedures, and UDFs.
- Integrate data from multiple sources into the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers.
- Bring at least 4 years of hands-on experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Deploy code to AWS services and Snowflake solutions via CI/CD pipelines, and manage repositories on platforms such as GitLab and GitHub.
- Provision infrastructure as code (IaC) for AWS services and Snowflake solutions using Terraform or equivalent tools.
- Build and manage APIs using Python/PySpark, integrated with Snowflake and cloud platforms (AWS/Azure).
- Utilize Snowpark for advanced data processing and analytics within Snowflake.
- Apply experience in Finance and Treasury projects.
Education: Any graduate