Position Summary:
The Senior Data Engineer is responsible for the design, development, implementation, and support of data initiatives throughout Gallagher, ensuring that an optimal data delivery architecture is consistent across ongoing projects. You will support data analysts and data scientists, as well as the data needs of multiple teams, systems, and products. If the prospect of optimizing or even re-designing our company’s integration and data architecture to support our next generation of products and data initiatives excites you, we should explore it together.
Essential Duties and Responsibilities:
• Drive requirements, scope, and technical design of integration workflows, ensuring the build is accurate and to specification. Develop and maintain requirements, design documentation, and test plans.
• Seek out, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
• Coordinate with BI Engineers, Financial Applications, and Oracle HR teams on data management, including schemas, failure conditions, reconciliation, test-data setup, etc.
• Build the infrastructure required for optimal ETL/ELT pipelines to ingest data from a wide variety of data sources using Microsoft Azure technologies such as Azure Data Factory and Databricks.
• Construct and maintain enterprise-level integrations using the Snowflake platform, Azure Synapse, Azure SQL, and SQL Server.
• Create data tools for data analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
• Design analytics tools that utilize the data pipeline to deliver actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
• Troubleshoot issues, helping to drive root-cause analysis, and work with infrastructure teams to resolve incidents and arrive at a permanent resolution.
• Partner with data and analytics teams to strive for greater functionality in our data systems.
• Provide direction and coordination for development, and support teams, including globally located resources.
• Understand the layout and working of existing integrations that send and receive data between Oracle, Concur, JDE, Corporate Data Platform and other systems.
Required:
• A relevant technical BS degree in Information Technology.
• 5+ years writing SQL queries against any RDBMS with query optimization.
• 5+ years of data engineering experience leveraging technologies such as Snowflake, Azure Data Factory, ADLS Gen 2, Logic Apps, Azure Functions, Databricks, Apache Spark, Scala, Synapse, and SQL Server.
• Experience with scripting tools such as PowerShell, Python, Scala, Java, and XML.
• Understanding of the pros, cons, and best practices of implementing a data lake using Microsoft Azure Data Lake Storage.
• Experience structuring a data lake for reliability, security, and performance.
• Experience implementing ETL for data warehouse and business intelligence solutions.
• Ability to read and write effective, modular, dynamic, parameterized, and robust code, and to follow established coding standards and ETL frameworks.
• Strong analytical, problem solving, and troubleshooting abilities.
• Good understanding of unit testing, software change management, and software release management.
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code fundamentals.
• Experience performing root cause analysis on data and processes to answer specific business questions and identify opportunities for improvement.
• Experience working within an agile team.
• Excellent communication skills.