Description

Key Skills:

  • Database modelling experience
  • R programming experience
  • Knowledge of Snowflake and/or AWS (Redshift), Azure, GCP
  • Knowledge of biomarker high-dimensional (omics) data
  • Research and implement MLOps tools, frameworks and platforms for our Data Science projects.
  • Work on a backlog of activities to raise MLOps maturity in the organization.
  • Proactively introduce a modern, agile and automated approach to Data Science.
  • Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:

  • Experience with Kubernetes
  • Expertise in ETL and scheduling tools
  • Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow or AWS SageMaker).
  • Good understanding of ML and AI concepts. Hands-on experience in ML model development.
  • Proficiency in Python used both for ML and automation tasks. Good knowledge of Bash and Unix command line toolkit.
  • Experience with DevOps and CI/CD/CT pipeline implementation
  • Experience with AWS (knowledge of other cloud providers is a plus)
  • Experience with LLMOps and generative AI (GenAI)
  • Experience in running project teams
  • Oracle is the database platform; the developer needs to create and update complex SQL queries/scripts.

Secondary Skills:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • Proven experience (5+ years) in data engineering with a strong background in software engineering.
  • Excellent communication skills.
  • Design, implement, and maintain scalable data architectures for large volumes of data in the cloud.
  • Expertise in data modeling, ETL processes, and building scalable data architectures.

Education

Bachelor’s or Master’s degree