Job Description


  • Strong knowledge of Python programming and libraries such as Pandas and NumPy
  • Experience with GCP/AWS/Azure cloud platforms, Docker/Kubernetes, and scaling containerized applications
  • Experience with Apache Spark or PySpark, and with Kafka
  • Good knowledge of SQL queries and any RDBMS
  • Understanding of CI/CD tools such as Jenkins and SonarQube
  • Implement and deploy models into production by leveraging MLOps best practices
  • Strong communication skills in a collaborative environment
  • Experience with big data technologies such as Hadoop
  • Experience with MLflow and DVC
  • Experience working on ETL jobs and building pipelines using UC4/Airflow
  • Collaborate with data scientists, data engineers, cloud platform teams, and application engineers to create and implement inferencing pipelines and governance for the ML/DL model lifecycle
  • Collaborate with the data science team and other dependent teams to design and implement the required solutions
  • Participate in formal and informal code reviews to ensure code quality
  • Actively contribute to the automated test suites to enable continuous integration

Education

Any degree