Description

This is a remote position.

Responsibilities:

Design, develop, and deploy data pipelines on GCP using tools such as Google Cloud Dataflow, Apache Beam, or Apache Spark.
Build and maintain data storage solutions on GCP, including BigQuery, Cloud Storage, and Cloud Bigtable.
Collaborate with data scientists and analysts to implement data models and algorithms for analytics and machine learning.
Optimize data pipelines and workflows for performance, reliability, and cost-effectiveness.
Implement data security and compliance best practices to ensure data privacy and integrity.
Monitor and troubleshoot data pipelines to identify and resolve issues proactively.
Stay up-to-date with the latest trends and technologies in data engineering and cloud computing.

Qualifications:

Bachelor's degree in Computer Science, Engineering, or related field.
Experience in data engineering with a focus on Google Cloud Platform (GCP).
Proficiency in programming languages such as Python, Java, or Scala.
Hands-on experience with GCP services such as BigQuery, Cloud Storage, Dataflow, Pub/Sub, and Data Catalog.
Experience with data modeling, ETL/ELT processes, and SQL.
Strong understanding of distributed computing, parallel processing, and data integration concepts.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.

Education

Any Graduate