Job Description

Role: AWS Databricks Engineer

Type: Contract

Location: Remote (Canada)

Skills

· 6+ years of total experience, including 2-3+ years designing, developing, deploying, and/or supporting data pipelines using Databricks

· Expertise in designing and deploying data applications on cloud solutions (AWS or Teradata SaaS)

· Hands-on experience in performance tuning and optimizing code running in the Databricks environment

· Proficient in Python and PySpark

· Good understanding of SQL, T-SQL, and/or PL/SQL

· Demonstrated analytical and problem-solving skills, particularly those that apply to a big data environment

· Knowledge of Teradata is an advantage

 

Responsibilities

· Designing and implementing data ingestion pipelines from multiple sources

· Developing scalable and reusable frameworks for ingesting data sets

· Integrating the end-to-end data pipeline to move data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times

· Working with event-based and streaming technologies to ingest and process data

· Working with other members of the project team to support delivery of additional project components (API interfaces, search)

· Evaluating the performance and applicability of multiple tools against customer requirements


 

Education

Any Graduate