Description

Requirements

Develop and oversee a comprehensive data architecture, aligning with business goals and integrating technologies such as Azure, Databricks, and Palantir to craft a forward-looking data management and analytics landscape

Lead the design of enterprise-grade data platforms addressing needs across Data Engineering, Data Science, and Data Analysis, capitalizing on the capabilities of Azure Databricks

Architect, develop, and document scalable data architecture patterns, ETL frameworks, and governance policies, adhering to Databricks' best practices to support future and unknown use cases with minimal rework

Define cloud data standards and DevOps and Continuous Integration / Continuous Delivery (CI/CD) processes, and contribute to the growth of the corporate metadata repository

Offer hands-on technical guidance and leadership across teams, driving the development of KPIs for effective platform cost management and the creation of repeatable data patterns for data integrity and governance

Direct the strategic implementation of Databricks-based solutions, aligning them with business objectives and data governance standards while optimizing performance and efficiency

Promote a culture of teamwork by leading evaluations of design, code, data assets, and security features, and by working with key external technology partners such as Databricks and Microsoft to follow best practices

Create and deliver training materials, such as data flow diagrams, conceptual diagrams, UML diagrams, and entity-relationship (ER) diagrams, to explain data model meaning and usage clearly to a diverse audience of technical and non-technical users

Communicate complex technical concepts effectively to both technical and non-technical stakeholders, ensuring clear understanding and alignment across the organization

Implement robust audit and monitoring solutions, design effective security controls, and collaborate closely with operations teams to ensure data platform stability and reliability

What You'll Need

Bachelor's or master's degree in Computer Science, Information Technology, or a related field

8+ years of experience in technical roles, with expertise in Software/Data Engineering, development tools, and data applications engineering

Proficiency in SQL, Python, Scala, or Java. Experience with big data technologies (e.g., Spark, Hadoop, Kafka), MPP databases, and cloud infrastructure

Strong background in data modeling, ETL/ELT workloads, and enterprise data architecture on platforms like Azure Databricks

Experience with data governance tools, BI tools (e.g., Tableau, Power BI), version control systems, and CI/CD tools

Relevant certifications in Databricks, cloud technologies (AWS or Azure), or related fields are a plus

Essentially, we need someone really strong at implementing Databricks in a new environment.

Education

Bachelor's or master's degree in Computer Science, Information Technology, or a related field