Description

Job Description:

5 - 7 years of overall experience.
4+ years of experience building and supporting scalable Big Data applications, working in a software product development organisation to build modern, scalable pipelines and deliver data promptly in a collaborative team environment.
Proficiency in Hadoop and Big Data processing technologies, e.g. Spark, YARN, HDFS, Oozie, Hive, Airflow. Shell scripting.
Strong knowledge of the Spark engine, Spark APIs, and Scala.
Hands-on experience with data processing technologies, ETL processes, and feature engineering.
Expertise in Data Analytics, PL/SQL, NoSQL.
Strong analytical and troubleshooting skills.
Strong interpersonal skills and ability to work effectively across multiple business and technical teams.
Excellent oral and written communication skills.
Ability to independently learn new technologies.
Passionate team player and fast learner.

Additional Nice to Have:
Experience in commonly used cloud services.
Expertise in columnar storage formats such as Parquet and open table formats such as Iceberg.
Knowledge of deep learning models.
Experience working with reputable clients and a degree from a well-regarded university preferred.

Education

Any Graduate