Description

Responsibilities:

🚀 Design, develop, and implement data pipelines using modern orchestration tools such as Apache Airflow.
⚙️ Monitor and optimize ELT data pipelines for peak performance, reliability, and scalability.
🔍 Troubleshoot and resolve issues in data pipelines, ensuring data integrity and quality.
💡 Use dbt for data transformation, optimizing its use and operation while establishing standards and best practices.
🌐 Implement and maintain data governance practices, including data lineage and metadata management.
👥 Collaborate closely with the DevOps team to ensure seamless integration and deployment of data pipelines using GitLab.
📰 Stay up to date with industry trends and best practices in data engineering and orchestration.

Requirements:

🎓 At least 10 years of proven, hands-on experience as a DataOps Engineer or in a similar role.
🚀 Strong proficiency in data orchestration tools, particularly Apache Airflow.
💡 Solid understanding of data engineering concepts and best practices.
💻 Proficiency in Git, Python, and SQL.
🔄 Experience with data transformation tools, especially dbt.
☁️ Experience with the AWS cloud platform (Redshift, MWAA, Glue, Lambda, S3).
📊 Familiarity with data quality and data observability tools.
🛠 Familiarity with DevOps tools (e.g., GitLab) and concepts (CI/CD, containerization).
🤖 Strong problem-solving and troubleshooting skills.
🗣 Excellent communication and collaboration abilities.

Education

Bachelor's degree in Computer Science