Description
As a PySpark & Azure Databricks Data Engineer, you will design, implement, and maintain data engineering solutions: building robust data pipelines, optimizing data processing, and ensuring data availability and reliability. This role requires a deep understanding of big data technologies and data integration.
Key Responsibilities:
1. Data Pipeline Development
2. Performance Optimization
3. Data Architecture
4. Documentation and Reporting
5. Collaboration
6. Continuous Learning
Qualifications:
Proven experience in data engineering, with a focus on PySpark and Azure Databricks.
Proficiency in programming languages like Python and SQL.
Strong understanding of big data concepts, data lakes, and data warehouses.
Experience with cloud platforms, particularly Microsoft Azure.
Knowledge of data integration and ETL processes.
Excellent problem-solving and debugging skills.
Strong communication and teamwork skills.
Relevant certifications in PySpark, Azure, or related areas are a plus.
In this role, you will support the organization's data-driven decision-making by building and maintaining efficient data pipelines and contributing to data analytics initiatives.
Any Graduate