Project description
In our agile operating model, crews are aligned to larger products and services fulfilling clients' needs and encompass multiple autonomous pods.
You will be part of the Global Markets Financing stream within the Investment Banking division in the US. The team is responsible for feature development, modernization, and transformation of a platform used by both internal and external users.
Responsibilities
We are looking for a Senior Cloud Data Engineer with at least 15 years of industry experience building enterprise data warehouse solutions using Spark with Scala or Python.
This person will be responsible for developing the framework required to load and transform data into a data vault model, consuming gigabytes of historical data for daily reporting purposes.
This person will also help the team learn and design a next-generation data warehouse solution using cloud technology in line with the latest industry standards.
Must-have skills
Bachelor’s and/or Master’s degree in Computer Science, Computer Engineering or related technical discipline
8–10 years of experience working in large global teams, especially on high-performance, large-scale data warehouse solutions using Databricks and Spark (Python/Scala)
Microsoft Azure experience
Spark/Databricks batch and streaming solutions (Delta Lake, Lakehouse)
Knowledge of Azure Data Factory
Knowledge of Kafka, Event Hubs, and ADLS Gen2 on Microsoft Azure
Understanding of DevOps tools and engineering practices, including microservices, CI/CD pipelines, and test automation
Cloud technologies and design patterns
Experience building and optimizing "big data" pipelines, architectures, and data sets
Proven skills in performance tuning and quality improvement
Strong grasp of algorithms and data structures
Infrastructure-as-code, using tools such as Terraform, ARM templates, Bicep, or CloudFormation
Azure CLI, setting up ADO pipelines, Terraform Enterprise, etc.