Description

Data Architect
Atlanta, GA (onsite); local candidates only
Client: Evoke Technologies
Contract: C2C
Visa: Any

Notes:
Looking for someone who can support the end-to-end data lifecycle and verify designs developed by the offshore team. Must be strong in data modeling for data warehouses/data lakes, as well as data pipelines/ETL.

Role and Responsibilities:
As a Data Architect, you will play a pivotal role in shaping our data infrastructure and ensuring that data flows seamlessly through our organization. Your primary responsibilities will include:
Designing, developing, and maintaining end-to-end master data pipelines that efficiently collect, process, transform, and load data from diverse sources into our data ecosystem.
Collaborating with cross-functional teams to understand business requirements and translate them into effective data architecture solutions.
Ensuring data quality, integrity, and security throughout the data lifecycle by implementing best practices and standards.
Optimizing data pipelines for performance, scalability, and reliability, and identifying opportunities for automation and process improvement.
Utilizing your expertise in SQL, Unix/Shell scripting, Python, and data processing frameworks to create and manage data transformations and integrations.
Applying concepts such as Data Warehouse, Data Lakehouse, and Data Mesh to shape and communicate enterprise data strategies.
Working closely with DevOps and Engineering teams to implement continuous integration and continuous deployment (CI/CD) pipelines for data-related processes.
Staying current with industry trends and emerging technologies in data architecture and applying that knowledge to drive innovation within the organization.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (Master's degree preferred).
A minimum of 5 years of experience in data architecture, with a proven track record of designing and deploying master data pipelines.
Proficiency in SQL for data manipulation and retrieval from relational databases.
Strong scripting skills in Unix/Shell and Python for automating data processes and transformations.
Experience with data processing frameworks such as Apache Spark, Apache Flink, or similar technologies.
Familiarity with CI/CD tools and practices for automating deployment and monitoring of data pipelines.
Excellent problem-solving skills and the ability to optimize data workflows for performance and efficiency.
Solid understanding of data modeling, data warehousing concepts, and ETL processes.
Strong communication skills to collaborate effectively with cross-functional teams and articulate complex technical concepts to non-technical stakeholders.


Thanks & Regards

Education

Any Graduate