Required Skills:
🔹 Expertise in distributed data processing frameworks such as Apache Spark or Hadoop, and streaming platforms like Kafka
🔹 Strong proficiency in SQL and working with both relational and NoSQL databases
🔹 Experience building and maintaining ETL pipelines
🔹 Familiarity with cloud platforms such as AWS, Azure, or GCP
🔹 Knowledge of Python, Scala, or Java for data processing
🔹 Hands-on experience with data warehouse solutions like Snowflake, Redshift, or BigQuery
🔹 Proficiency in data modeling and experience with pipeline orchestration tools (e.g., Apache Airflow, Prefect)
🔹 Excellent problem-solving skills and a passion for data-driven insights
🔹 Bachelor's degree in Computer Science