Description

Job Title: Senior Data Engineering Manager - GCP

Experience Required: 14-16 years (Level 5 - Senior Manager)

Location: Bangalore, Chennai, Kolkata, Gurugram, Pune

Mandatory Skills: GCP, GCS, BigQuery, SQL, Dataflow, Dataproc with PySpark, Pub/Sub, Airflow, Python, Spark

Job Description:

We are looking for an analytical, big-picture thinker who is driven to advance the client's mission by delivering technology to internal business and functional stakeholders. You will serve as a leader who drives IT strategy to create value across the organization. As a Senior Data Engineering Manager, you will lead engagements focused on implementing both innovative, low-level solutions and the day-to-day tactics that drive efficiency, effectiveness, and value.

You will play a critical role in creating and analyzing deliverables that provide essential content for fact-based decision-making, facilitation, and successful collaboration with business stakeholders. You will analyze, design, and develop best practices for delivering business change through technology solutions.

Technical Requirements:

Hands-on experience implementing and architecting solutions on Google Cloud Platform using GCP components.

Experience with Apache Beam/Google Dataflow/Apache Spark in creating end-to-end data pipelines (a minimal example follows this list).

Experience in some of the following: Python, Hadoop, Spark, SQL, BigQuery, Bigtable, Cloud Storage, Datastore, Spanner, Cloud SQL, Machine Learning.

Experience programming in Java, Python, etc.

Expertise in at least two of the following: relational databases, analytical databases, NoSQL databases.

Google Professional Data Engineer or Professional Cloud Architect certification is a major advantage.
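
To illustrate what "end-to-end data pipelines" means in this role, below is a minimal sketch of a streaming Apache Beam pipeline targeting the Dataflow runner: it reads JSON events from Pub/Sub, filters out malformed records, and appends the rest to an existing BigQuery table. All project, topic, bucket, and table names are hypothetical placeholders, not part of this posting.

    # Minimal Beam/Dataflow pipeline sketch: Pub/Sub -> filter -> BigQuery.
    # All project, topic, bucket, and table names are hypothetical.
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def run():
        options = PipelineOptions(
            streaming=True,
            project="my-gcp-project",            # hypothetical project ID
            runner="DataflowRunner",
            region="asia-south1",
            temp_location="gs://my-bucket/tmp",  # hypothetical GCS bucket
        )
        with beam.Pipeline(options=options) as pipeline:
            (
                pipeline
                # Read raw event bytes from a Pub/Sub topic.
                | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                    topic="projects/my-gcp-project/topics/events")
                # Decode each message as JSON.
                | "ParseJson" >> beam.Map(json.loads)
                # Drop records that are missing the key field.
                | "KeepValidEvents" >> beam.Filter(lambda event: "user_id" in event)
                # Append to an existing BigQuery table.
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "my-gcp-project:analytics.events",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                )
            )

    if __name__ == "__main__":
        run()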

Roles & Responsibilities:

Experience: 14-16 years in IT or professional services, with a focus on IT delivery or large-scale IT analytics projects.

Expert knowledge of Google Cloud Platform; experience with other cloud platforms is a plus.

Expert in SQL development.

Expertise in building data integration and preparation tools using cloud technologies (e.g., SnapLogic, Google Dataflow, Cloud Dataprep, Python).

Ability to identify downstream implications of data loads/migration (e.g., data quality, regulatory compliance).

Implement data pipelines to automate data source ingestion, transformation, and augmentation, and provide best practices for pipeline operations (see the Airflow sketch after this list).

Capable of working in a rapidly changing business environment and enabling simplified user access to massive datasets by building scalable data solutions.

Advanced SQL writing skills, with experience in data mining (SQL, ETL, data warehousing) and in using databases with large, complex datasets in a business environment.
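
As an illustration of the pipeline-automation responsibility above, here is a minimal Airflow DAG sketch that loads daily CSV drops from Cloud Storage into BigQuery using the Google provider package. The DAG name, bucket, dataset, and table are hypothetical placeholders, and the sketch assumes Airflow 2.4+ (which uses the `schedule` argument) with the apache-airflow-providers-google package installed.

    # Minimal Airflow DAG sketch: daily GCS -> BigQuery ingestion.
    # DAG, bucket, dataset, and table names are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
        GCSToBigQueryOperator,
    )

    with DAG(
        dag_id="daily_sales_ingestion",      # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                   # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        # Load the day's CSV files from a landing bucket into BigQuery,
        # appending rows and letting BigQuery infer the schema.
        load_sales = GCSToBigQueryOperator(
            task_id="load_sales_to_bigquery",
            bucket="my-landing-bucket",      # hypothetical GCS bucket
            source_objects=["sales/{{ ds }}/*.csv"],
            destination_project_dataset_table="my-gcp-project.analytics.sales",
            source_format="CSV",
            skip_leading_rows=1,
            write_disposition="WRITE_APPEND",
            autodetect=True,
        )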

Education

Any Graduate