Job Description
Publicis Sapient is looking for a Senior Associate Data Engineer (Azure) to be part of our team of top-notch technologists. You will lead and deliver technical solutions for large-scale digital transformation projects. Working with the latest data technologies in the industry, you will be instrumental in helping our clients evolve for a more digital future.
Your Impact:
Combine your technical expertise and passion for problem-solving to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients' businesses
Translate clients' requirements into system designs and develop solutions that deliver business value
Lead, design, develop, and deliver large-scale data systems, data processing, and data transformation projects
Automate data platform operations and manage the post-production system and processes
Conduct technical feasibility assessments and provide project estimates for the design and development of the solution
Mentor and help grow junior team members
Set Yourself Apart With:
Developer certifications in Azure cloud services
Understanding of development and project methodologies
Willingness to travel
Qualifications
Your Technical Skills & Experience:
Demonstrable experience with data platforms, including the implementation of end-to-end data pipelines
Hands-on experience with at least one of the leading public cloud data platforms (Azure, AWS or Google Cloud)
Implementation experience with column-oriented database technologies (e.g., BigQuery, Redshift, Vertica), NoSQL database technologies (e.g., DynamoDB, Bigtable, Cosmos DB) and traditional database systems (e.g., SQL Server, Oracle, MySQL)
Experience in implementing data pipelines for both streaming and batch integrations using tools/frameworks like Azure Data Factory, Glue ETL, Lambda, Spark, Spark Streaming, etc.
Ability to handle module- or track-level responsibilities and contribute to tasks hands-on
Experience in data modeling, warehouse design and fact/dimension implementations
Experience working with code repositories and continuous integration
Data modeling, querying, and optimization for relational, NoSQL, time-series, and graph databases, as well as data warehouses and data lakes
Data processing programming using SQL, dbt, Python, and similar tools
Logical programming in Python, Spark, PySpark, Java, JavaScript, and/or Scala
Data ingestion, validation, and enrichment pipeline design and implementation
Cloud-native data platform design with a focus on streaming and event-driven architectures
Test programming using automated testing frameworks, data validation and quality frameworks, and data lineage frameworks
Metadata definition and management via data catalogs, service catalogs, and stewardship tools such as OpenMetadata, DataHub, Alation, AWS Glue Catalog, Google Data Catalog, and similar
Code review and mentorship
Bachelor’s degree in Computer Science, Engineering or related field.