Description

Take on the role of Data Engineer for a multinational Fortune 500 project in Canada. Drive innovation and contribute to technological excellence. Apply today to become a key player in a dynamic team.

Responsibilities 

You will need a product-focused mindset: it is essential to understand business requirements and architect systems that scale and extend to accommodate those needs
Break down complex problems, document technical solutions, and sequence work to make fast, iterative improvements 
Build and scale data infrastructure that powers batch and real-time data processing of billions of records  
Automate cloud infrastructure, services, and observability 
Develop CI/CD pipelines and testing automation 
Interface with data engineers, data scientists, product managers, and all data stakeholders to understand their needs and promote best practices
You have a growth mindset. You will identify business challenges and opportunities for improvement and address them using data analysis and data mining to make strategic or tactical recommendations.
You will support analytics and provide critical insights around product usage, campaign performance, funnel metrics, segmentation, conversion, and revenue growth.
You will deliver ad-hoc analyses, long-term projects, reports, and dashboards to uncover new insights and measure progress on key initiatives.
You will work closely with business stakeholders to understand and maintain focus on their analytical needs, including identifying critical metrics and KPIs.
You will partner with different teams within the organization to understand business needs and requirements.
You will deliver presentations that distill complex problems into clear insights
 

Minimum Qualifications

Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience
4-7 years of relevant industry experience in big data systems, data processing, and SQL databases 
3+ years of coding experience with Spark DataFrames, Spark SQL, and PySpark (a brief illustrative sketch follows this list)
3+ years of hands-on programming experience, with the ability to write modular, maintainable code, preferably in Python and SQL
Good understanding of SQL, dimensional modeling, and analytical big data warehouses like Hive and Snowflake 
Familiarity with ETL workflow management tools like Airflow
2+ years of experience building reports and dashboards in BI tools such as Looker
Experience with version control and CI/CD tools like Git and Jenkins
Experience working with and analyzing data in notebook environments like Jupyter, EMR Notebooks, and Apache Zeppelin
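
For illustration only, here is a minimal sketch of the kind of PySpark DataFrame and Spark SQL work named above. The storage paths and the table and column names (events, event_ts, user_id, product_id) are hypothetical placeholders, not details of the actual project.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_usage_rollup").getOrCreate()

# Read raw event data (hypothetical path and schema).
events = spark.read.parquet("s3://example-bucket/events/")

# Aggregate product-usage metrics per day with the DataFrame API.
daily_usage = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "product_id")
    .agg(
        F.countDistinct("user_id").alias("active_users"),
        F.count("*").alias("event_count"),
    )
)

# The same rollup expressed in Spark SQL.
events.createOrReplaceTempView("events")
daily_usage_sql = spark.sql("""
    SELECT to_date(event_ts) AS event_date,
           product_id,
           COUNT(DISTINCT user_id) AS active_users,
           COUNT(*) AS event_count
    FROM events
    GROUP BY to_date(event_ts), product_id
""")

# Write the result to a partitioned analytical table (hypothetical location).
daily_usage.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/marts/daily_usage/"
)
```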

Education

Bachelor’s degree in Computer Science