Responsibilities
We are looking for a Senior Cloud Data Engineer with at least 10 years of industry experience building enterprise data warehouse solutions using Spark with Scala or Python, plus at least 8 years of investment banking domain knowledge. The candidate is expected to be in the office (in person) at least three times a week.
This person will be responsible for developing the framework required to load and transform data into a data vault model, consuming gigabytes of historical data for daily reporting purposes.
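To illustrate the kind of framework work involved: data vault loads conventionally key hubs by a hash of the normalized business key and detect satellite changes via a hash-diff over descriptive attributes. The sketch below shows that convention in plain Python; in practice these would be Spark column expressions, and the field names (`trade_id`, `price`, `qty`) are hypothetical examples, not part of this role's actual schema.

```python
import hashlib


def hash_key(*business_keys: str) -> str:
    """Hub hash key: trim and uppercase each business key, join with '||', MD5.

    Normalizing before hashing makes the key deterministic across source
    systems that differ only in case or padding.
    """
    normalized = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()


def hash_diff(record: dict, attrs: list) -> str:
    """Satellite hash-diff over descriptive attributes.

    A changed hash-diff signals that a new satellite row must be inserted
    for an existing hub key (change detection without column-by-column compares).
    """
    payload = "||".join(str(record.get(a, "")).strip() for a in attrs)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()


# Hypothetical trade record for illustration only.
trade = {"trade_id": "T-1001", "price": "101.25", "qty": "500"}
hub_hk = hash_key(trade["trade_id"])
sat_hd = hash_diff(trade, ["price", "qty"])
```

The same normalization rules must be applied identically in every loading job, or hub keys will fail to match across sources.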
Skills
Must have
Bachelor’s and/or Master’s degree in Computer Science, Computer Engineering or related technical discipline
Total IT experience: 15 years.
At least 10 years of experience working in large global teams, especially on high-performance, large-scale data warehouse solutions using Databricks and Spark (Python/Scala)
Microsoft Azure experience is a must
Spark/Databricks batch and streaming solutions (Delta Lake, Lakehouse)
Knowledge of Azure Data Factory and Cosmos DB
Knowledge of Kafka, Event Hubs and ADLS Gen2 on Microsoft Azure
Understanding of DevOps tools and engineering practices, including microservices, CI/CD pipelines and test automation
Cloud technologies & design patterns
Building and optimizing "big data" pipelines, architectures, and data sets
Proven skills in performance tuning and quality improvements
Strong in Algorithms and Data Structures
Infrastructure-as-code, using tools such as Terraform, ARM, Bicep or CloudFormation
Azure CLI, setting up ADO pipelines, Terraform Enterprise, etc
Nice to have
Experience in the finance/banking domain
Any Graduate