Description

Job Description:
Developer experience with a specific focus on data engineering
Hands-on development experience using Python and PySpark for ETL
Experience with AWS services such as Glue, Lambda, MSK (Kafka), S3, Step Functions, RDS, and EKS
Experience with databases such as Postgres, SQL Server, Oracle, and Sybase
Experience with SQL database programming, SQL performance tuning, relational model analysis, queries, stored procedures, views, functions, and triggers
Strong technical experience in design (mapping specifications, HLD, LLD) and development (coding, unit testing)
Knowledge of developing UNIX scripts and Oracle SQL/PL-SQL
Experience with data models, data mining, data analysis, and data profiling; working knowledge of ERWIN is a plus
Experience with reporting tools such as Tableau and Power BI is a plus
Experience working with REST APIs
Experience with workload automation tools such as Control-M, Autosys, etc.
Good knowledge of CI/CD DevOps processes and tools such as Bitbucket, GitHub, and Jenkins
Strong experience with Agile/Scrum methodology
Experience with other ETL tools (DataStage, Informatica, Pentaho, etc.)
Knowledge of MDM, data warehousing, and data analytics
Working knowledge of data science concepts is a plus

Education

Bachelor's degree