Description

The role is for a self-motivated individual with software engineering skills and expertise in Big Data and cloud technologies. The candidate will be extensively involved in hands-on activities including POCs, design, documentation, development, and testing of new functionality. The candidate must be agile and flexible as priorities change based on the team's needs.

 

NOTE:

 

1. The job is on a contract basis for 6 months to 1 year, with the possibility of extension or full-time conversion depending on the client's requirements.

 

2. The client is planning to open their Bangalore office by the end of February 2022, so employees will need to work from office (WFO) after that; a short extension can be granted for relocation, etc.


 

Qualification & Experience:

 

  • CS fundamentals: You have earned at least a B.S./M.S. in Computer Science or a related degree, and you have a strong ethos of continuous learning.
  • Software engineering & Architecture: 6+ years of professional software development experience with languages and systems such as Python/Java, REST APIs, PySpark, Apache Beam, and version control (Git), with strong analytical and debugging skills.
  • Big data: You have extensive experience with data analytics and working knowledge of big data infrastructure such as Google Cloud, BigQuery, Dataflow, AWS/Azure, the Hadoop ecosystem, HDFS, and Spark. You have routinely built data pipelines over gigabytes to terabytes of data and understand the challenges of manipulating such large datasets (a minimal pipeline sketch follows this list).
  • Data Science/ML Ops: Experience operationalizing data science projects (ML Ops) using at least one of the popular frameworks or platforms, e.g. Terraform, Ansible, Kubeflow, Google AI Platform (see the pipeline-definition sketch after this list).
  • Data Modeling: A flair for data, schemas, data models, and SQL; knows how to design data models for efficient analytical querying; understands the criticality of TDD and develops data validation techniques (see the table-design sketch after this list).
  • Real Time Systems: Understands the evolution of in-memory, NoSQL, and indexing database technologies, with experience in real-time and stream-processing systems such as Google Pub/Sub and other GCP technologies, Kafka, AWS/Azure streaming technologies, Storm, and Spark Streaming (see the subscriber sketch after this list).
  • Strong design skills: A proven track record of success on large, highly complex projects, preferably in enterprise apps and integration.
  • Project management: You demonstrate excellent project and time management skills, with exposure to Scrum or other agile practices in JIRA.
  • Excellent verbal and written communication skills: You can communicate and work effectively with fellow team members and other functional teams to coordinate and meet deliverables.
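
To give a flavor of the pipeline work above, here is a minimal sketch of an Apache Beam batch pipeline in Python. The bucket paths, field layout, and aggregation are illustrative assumptions, not the client's actual pipeline; on GCP the same code would run on the Dataflow runner.

    # Minimal Beam batch pipeline; paths and field names are hypothetical.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_csv(line):
        # Assumes rows like "user_id,event,bytes"; production pipelines
        # need schema-aware parsing and dead-letter handling for bad rows.
        user_id, event, size = line.split(",")
        return (event, int(size))

    options = PipelineOptions(runner="DirectRunner")  # DataflowRunner on GCP
    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
         | "Parse" >> beam.Map(parse_csv)
         | "SumBytesPerEvent" >> beam.CombinePerKey(sum)
         | "Format" >> beam.MapTuple(lambda event, total: f"{event},{total}")
         | "Write" >> beam.io.WriteToText("gs://example-bucket/out/totals"))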
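
For the ML Ops requirement, here is one way a pipeline definition can look with the Kubeflow Pipelines SDK (kfp v2). The component body is a stub and all names are hypothetical; this sketches the workflow style, not a prescribed setup.

    # Toy Kubeflow pipeline (kfp v2); the training step is a placeholder.
    from kfp import compiler, dsl

    @dsl.component(base_image="python:3.10")
    def train(lr: float) -> str:
        # A real component would load data, fit a model, and push
        # artifacts to a model registry.
        return f"trained with lr={lr}"

    @dsl.pipeline(name="toy-training-pipeline")
    def toy_pipeline(lr: float = 0.01):
        train(lr=lr)

    # Compile to a package that Kubeflow / Vertex AI Pipelines can run.
    compiler.Compiler().compile(toy_pipeline, package_path="pipeline.json")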
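
For the data modeling requirement, a sketch of an efficiency-minded BigQuery table design issued through the google-cloud-bigquery client. The dataset, table, and column names are assumptions; the point is choosing partitioning and clustering to match the dominant query filters, plus a small validation check in the TDD spirit.

    # Hypothetical BigQuery table tuned for time-bounded, per-user queries.
    from google.cloud import bigquery

    client = bigquery.Client()  # assumes an `analytics` dataset exists
    ddl = """
    CREATE TABLE IF NOT EXISTS analytics.events (
      event_ts   TIMESTAMP NOT NULL,
      user_id    STRING,
      event_name STRING,
      payload    STRING
    )
    PARTITION BY DATE(event_ts)      -- prune scans to the dates queried
    CLUSTER BY user_id, event_name   -- co-locate rows hit by common filters
    """
    client.query(ddl).result()

    # Lightweight data validation: fail fast if required fields are null.
    row = list(client.query(
        "SELECT COUNTIF(user_id IS NULL) AS null_users FROM analytics.events"
    ).result())[0]
    assert row.null_users == 0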
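
For the real-time systems requirement, a minimal streaming consumer using the google-cloud-pubsub client, following the library's standard subscribe pattern; the project and subscription IDs are placeholders.

    # Hypothetical Pub/Sub subscriber; IDs are placeholders.
    from concurrent.futures import TimeoutError
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path("my-project", "events-sub")

    def callback(message: pubsub_v1.subscriber.message.Message) -> None:
        # Parse/route the payload here; ack only once the work is durable.
        print(f"received: {message.data!r}")
        message.ack()

    future = subscriber.subscribe(subscription_path, callback=callback)
    try:
        future.result(timeout=30)  # a long-running service would block forever
    except TimeoutError:
        future.cancel()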

 

Desired Candidate Profile

 

Must Have

2+ years of experience on the GCP platform

Google Cloud certifications in Data Engineering, Architecture, or DevOps

 

Software engineering & Architecture

Python, version control (Git), REST APIs, analytical & debugging skills

PySpark, Apache Beam

 

Big Data

Google Cloud Platform, BigQuery, Dataflow, Composer, Cloud Functions, Stackdriver

AWS/Azure, Hadoop ecosystem, HDFS, Spark

 

Dev/ML Ops

Terraform, Ansible, Cloud Build, Container Registry, Kubernetes

Kubeflow, Google AI Platform

 

Data Modeling

Data modeling, SQL, in-memory database, data catalog

NoSQL & indexing technologies

 

Real Time Systems

Google Pub/Sub, GCP technologies

 

AWS/Azure streaming technologies, Storm, Spark Streaming, Kafka

 

Tools

Tableau, PowerBI

 

Interested candidates can send their resumes to nilesh@iitjobs.com

Thanks and regards,
Nilesh Naikwadi
Technical Recruiter
9762635601 / 9356016824
nilesh@iitjobs.com

Education

M.S. / M.Sc. (Science) in Computers

Salary

INR 15,00,000 - 30,00,000