Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Proven experience (10+ years) as a Spark Developer or in a similar big data engineering role
Strong proficiency in Apache Spark, including Spark SQL, Spark Streaming, and Spark MLlib
Proficiency in programming languages such as Scala or Python for Spark development
Experience with data processing and ETL concepts, data warehousing, and data modeling
Solid understanding of distributed computing principles and cluster management
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization (e.g., Docker, Kubernetes) is a plus
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment
Strong communication skills to effectively interact with technical and non-technical stakeholders
Experience with version control systems (e.g., Git) and agile development methodologies