Description

Qualifications:

• 5+ years of relevant work experience in data engineering or equivalent software engineering experience.

• 3+ years of experience implementing big data processing technologies: AWS / Azure / GCP, Apache Spark, Python.

• Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets.

• Working knowledge of higher-abstraction ETL tooling (e.g., AWS Glue Studio, Talend, Informatica).

• Detailed knowledge of databases such as Oracle, DB2, and SQL Server; data warehouse concepts; technical architecture; infrastructure components; ETL; and reporting/analytics tools and environments.

• Hands-on experience with cloud technologies (AWS, Google Cloud, Azure) for data ingestion (both real-time and batch), CI/CD processes, cloud architecture, and big data implementation.

Preferred Skills:

• AWS certification or experience with cloud technologies.

• Working knowledge of Glue, Lambda, S3, Athena, Redshift, and Snowflake.

• Strong verbal and written communication skills; excellent organizational and prioritization skills.

• Strong analytical and problem-solving skills.

• Databricks and Azure certifications are a must.

Education

Any Graduate