Skills Required:
- AWS: Glue, Step Functions, Lambda, S3 (including Intelligent-Tiering), EMR, Lake Formation, Athena.
- Python: strong knowledge of Spark (PySpark) and common analytics libraries such as NumPy and Pandas.
- Shell Scripting.
Project Experience:
- Should have a working background in Python-based technologies, including PySpark, Pandas, and other data-handling libraries.
- Should have worked on at least one big data project using Spark as the core processing engine (see the illustrative sketch below).
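To give a sense of the kind of work these requirements imply, here is a minimal, purely illustrative PySpark sketch that reads a dataset, performs an aggregation in Spark, and hands a small result to Pandas. The S3 path, column names, and application name are hypothetical, not taken from any specific project.

```python
# Illustrative only: hypothetical paths and column names.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-summary").getOrCreate()

# Read raw data (hypothetical S3 location) into a Spark DataFrame.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Core processing in Spark: aggregate revenue per customer.
revenue = (
    orders
    .groupBy("customer_id")
    .agg(F.sum("order_total").alias("total_revenue"))
)

# Hand a small aggregated result to Pandas for downstream analysis.
revenue_pd = revenue.limit(1000).toPandas()
print(revenue_pd.describe())

spark.stop()
```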
Bachelor's degree