Description

When you work with us, you will:

  • Work closely with data scientists, analysts, and other stakeholders to understand data requirements and develop data solutions to meet business needs.
  • Design, implement, and optimize data pipelines and ETL processes to ingest, transform, and load data from various sources into our data warehouse.
  • Develop and maintain data models, schemas, and databases to support data analysis and reporting requirements.
  • Perform data cleansing, validation, and quality assurance processes to ensure data accuracy, completeness, and consistency.
  • Collaborate with IT teams to ensure the scalability, reliability, and security of data infrastructure and systems.
  • Monitor data pipelines and proactively identify and resolve data issues or performance bottlenecks.
  • Document data engineering processes, procedures, and best practices to facilitate knowledge sharing and team collaboration.
  • Stay updated on industry trends, emerging technologies, and best practices in data engineering and analytics.
You can get in if you can show that you have:

  • A Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Strong programming skills in languages such as Python and SQL.
  • Experience with data modeling, database design, and SQL query optimization.
  • Familiarity with data warehousing concepts and technologies (e.g., AWS Redshift, Google BigQuery, Snowflake, Azure Synapse Analytics).
  • Hands-on experience with ETL tools and frameworks (e.g., SSIS, Azure Data Factory, Apache Spark, Talend, Informatica).
  • Knowledge of data visualization tools (e.g., Tableau, Power BI) is a plus.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.

Education

Any Graduate