Required:
5+ years of experience with Apache Spark
Strong programming skills in Scala, Java, or Python
Experience with big data tools and technologies such as Hadoop, Hive, Kafka, and HDFS
Proficiency in SQL and experience with relational databases
Familiarity with cloud platforms like AWS, Azure, or Google Cloud
Strong problem-solving skills and attention to detail
Excellent communication and teamwork skills
Preferred:
Master’s degree in Computer Science or related field
Experience with streaming technologies such as Spark Streaming or Kafka Streams
Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes)
Knowledge of data warehousing and data modeling
Experience with machine learning frameworks and libraries