When you work with us, you will:
- Collaborate with cross-functional teams to understand data requirements and develop scalable and efficient data solutions.
- Design, implement, and optimize data pipelines and ETL processes to ingest, transform, and load large volumes of data from various sources into our data warehouse.
- Develop and maintain data models, schemas, and databases to support data analysis, reporting, and visualization needs.
- Perform data cleansing, validation, and quality assurance processes to ensure data accuracy, completeness, and consistency.
- Implement data governance policies and procedures to ensure data security, privacy, and compliance with regulatory requirements.
- Monitor and optimize the performance, scalability, and reliability of data infrastructure and systems.
- Troubleshoot data issues, diagnose root causes, and implement timely solutions to minimize data downtime and disruptions.
- Document data engineering processes, architectures, and best practices to facilitate knowledge sharing and team collaboration.
- Stay updated on industry trends, emerging technologies, and best practices in data engineering and analytics.
You can get in if you bring:
- A Bachelor’s degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or similar role, with a focus on designing and implementing data solutions.
- Strong programming skills in languages such as Python and SQL.
- Deep understanding of data warehousing concepts and technologies (e.g., Amazon Redshift, Google BigQuery, Snowflake, Azure Synapse Analytics).
- Hands-on experience with ETL tools and frameworks (e.g., SSIS, Azure Data Factory, Apache Spark, Talend, Informatica).
- Experience with data modeling, database design, and SQL query optimization.
- Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.