Description

"Daily Tasks Perfomed

Develop and Maintain Data Integration Solutions

o Design and implement data integration workflows using AWS Glue, EMR, Lambda, and Redshift

o Demonstrate proficiency in PySpark, Apache Spark, and Python for processing large datasets

o Ensure data is accurately and efficiently extracted, transformed, and loaded into target systems (sketched below)
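
As a rough illustration only (not this team's actual code), a minimal PySpark sketch of such an extract-transform-load workflow; the bucket paths, column names, and job name are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical locations; a real Glue/EMR job would take these as parameters
    SOURCE_PATH = "s3://example-bucket/raw/orders/"
    TARGET_PATH = "s3://example-bucket/curated/orders/"

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw JSON files landed in S3
    raw = spark.read.json(SOURCE_PATH)

    # Transform: enforce types and derive a partitioning column
    curated = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
           .withColumn("load_date", F.to_date("order_ts"))
    )

    # Load: write partitioned Parquet; a Redshift COPY or connector would
    # typically load the warehouse from this curated layer
    curated.write.mode("overwrite").partitionBy("load_date").parquet(TARGET_PATH)

    spark.stop()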

Ensure Data Quality and Integrity

o Validate and cleanse data to maintain high data quality

o Ensure data quality and integrity by implementing monitoring, validation, and error-handling mechanisms within data pipelines (see the sketch below)
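
As a hedged illustration of such checks, a minimal PySpark sketch; the dataset paths and validation rules are hypothetical, chosen only to show the pattern of splitting valid rows from quarantined ones:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()

    # Hypothetical output of an upstream pipeline stage
    df = spark.read.parquet("s3://example-bucket/curated/orders/")

    # Validation rules: required key present, amount present and non-negative
    is_valid = (
        F.col("order_id").isNotNull()
        & F.col("amount").isNotNull()
        & (F.col("amount") >= 0)
    )

    valid = df.filter(is_valid).dropDuplicates(["order_id"])
    rejected = df.filter(~is_valid)

    # Quarantine failures and emit a simple metric for monitoring/alerting
    rejected.write.mode("append").parquet("s3://example-bucket/quarantine/orders/")
    print(f"rejected {rejected.count()} of {df.count()} rows")

    valid.write.mode("overwrite").parquet("s3://example-bucket/validated/orders/")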

Optimize Data Integration Processes

o Optimize the performance of data workflows to meet SLAs, and improve the scalability and cost-efficiency of data integration processes on AWS cloud infrastructure

o Identify and resolve performance bottlenecks, fine-tuning queries and optimizing data processing to enhance Redshift's performance (one tuning pattern is sketched after this list)

o Regularly review and refine integration processes to improve efficiency
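
One common tuning pattern, sketched in PySpark under hypothetical paths and schema: filter on the partition column so whole partitions are pruned rather than scanned, and broadcast the small side of a join to avoid a full shuffle:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("perf-tuning").getOrCreate()

    # Filtering on the partition column lets Spark prune whole S3 prefixes
    # instead of scanning the full table
    orders = (
        spark.read.parquet("s3://example-bucket/curated/orders/")
             .filter(F.col("load_date") == "2024-01-15")
    )

    # Broadcasting the small dimension avoids shuffling the large fact table
    customers = spark.read.parquet("s3://example-bucket/curated/customers/")
    joined = orders.join(F.broadcast(customers), "customer_id")

    # explain() prints the physical plan, useful for spotting bottlenecks
    joined.explain()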

Support Business Intelligence and Analytics

o Translate business requirements into technical specifications and coded data pipelines

o Ensure timely availability of integrated data for business intelligence and analytics

o Collaborate with data analysts and business stakeholders to meet their data requirements

Maintain Documentation and Compliance

o Document all data integration processes, workflows, and technical system specifications

o Ensure compliance with data governance policies, industry standards, and regulatory requirements

 

 

What will this person be working on?

The IT Data Integration Engineer / Developer is tasked with the design, development, and management of data integration processes to ensure seamless data flow and accessibility across the organization. This role is pivotal in integrating data from diverse sources, transforming it to meet business requirements, and loading it into target systems such as data warehouses or data lakes. The aim is to support the organization's data-driven decision-making by providing high-quality, consistent, and accessible data.

 

Position Success Criteria (Desired / WANTS)

 

Bachelor's degree in computer science, information technology, or a related field; a master's degree can be advantageous

7-10 years of experience in data engineering, database design, and ETL processes

5 years of experience in programming languages such as PySpark and Python

5 years of experience with AWS tools and technologies: S3, EMR, Glue, Athena, Redshift, Postgres RDS, Lambda, PySpark

3 years of experience working with databases, data marts, and data warehouses

Proven experience in ETL development, system integration, and CI/CD implementation

Experience building complex database objects to move changed data across multiple environments

Solid understanding of data security, privacy, and compliance

Excellent problem-solving and communication skills

Good communication skills to collaborate effectively with cross-functional teams

Participate in agile development processes, including sprint planning, stand-ups, and retrospectives

Provide technical guidance and mentorship to junior developers

Attention to detail and a commitment to data quality

Continuous learning mindset to keep up with evolving technologies and best practices in data engineering

Education

Bachelor's degree in Computer Science