Job Description
· 9+ years of strong hands-on experience in data warehousing, data engineering, and dimensional modelling.
· Able to work independently with minimal guidance; excellent problem-solving and analytical skills.
Required Skills
· Experience building and maintaining ETL pipelines with large data sets using services such as AWS Glue, EMR, Kinesis, or Kafka
· Strong Python development experience with proficiency in Spark or PySpark and in using APIs
· Strong SQL query-writing and performance-tuning skills in AWS Redshift and other industry-leading RDBMS such as MS SQL Server and Postgres
· Proficient with AWS services such as AWS Lambda, EventBridge, Step Functions, SNS, and SQS
· Familiar with how IAM roles and policies work
Preferred Skills
· Experience with workflow management tools such as Airflow
· Familiar with infrastructure as code, such as CloudFormation
· Experience with CI/CD pipelines and agile methodologies.
ANY GRADUATE