Job Description
Hands-on experience with Hadoop / Big Data technologies covering the storage, querying, processing, and analysis of data.
Strong knowledge of PySpark programming and Spark-based applications that load streaming data with minimal latency (see the streaming sketch after this list).
Hands-on experience with an AWS cloud data warehouse and AWS S3 buckets for integrating data from multiple source systems.
Work experience with Scrum/Agile and Waterfall project execution methodologies.
Expert in writing complex SQL queries to check data integrity and perform database testing (an example follows this list).
Ability to communicate with clients, gather requirements, and convey them to offshore resources.
Experience in developing and optimizing Hive queries (see the partitioning sketch after this list).
Experience in importing and exporting data between relational databases such as MySQL and HDFS using Sqoop (a sample invocation follows this list).
Experience in deploying and managing multi-node development and production clusters.
Ability to monitor and schedule workflows using Oozie.
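
The kind of low-latency streaming load described above might look like the following minimal PySpark Structured Streaming sketch. The Kafka broker, topic, and S3 paths are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: load streaming data with low latency using PySpark
# Structured Streaming. Broker, topic, and bucket names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("streaming-load").getOrCreate()

# Read a stream of events from Kafka (assumed source for illustration).
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

# Write micro-batches to S3 as Parquet; a short trigger keeps latency low.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/events/")    # placeholder bucket
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .trigger(processingTime="10 seconds")
    .start()
)
```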
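A data-integrity check of the sort mentioned above could be expressed as SQL run through Spark; the tables and columns (orders, customers) are hypothetical.

```python
# Illustrative data-quality checks via SQL; table/column names are assumed.
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("dq-checks")
         .enableHiveSupport().getOrCreate())

# Orphaned foreign keys: orders that reference a non-existent customer.
orphans = spark.sql("""
    SELECT o.order_id
    FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.customer_id
    WHERE c.customer_id IS NULL
""")

# Duplicate primary keys in the target table.
dupes = spark.sql("""
    SELECT order_id, COUNT(*) AS n
    FROM orders
    GROUP BY order_id
    HAVING COUNT(*) > 1
""")

assert orphans.count() == 0, "referential integrity violated"
assert dupes.count() == 0, "duplicate primary keys found"
```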
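One common Hive optimization is partitioning a table so queries that filter on the partition column scan only the partitions they need. The sketch below assumes a hypothetical sales_by_day table partitioned by date.

```python
# Sketch of Hive partition pruning; table and column names are assumed.
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("hive-opt")
         .enableHiveSupport().getOrCreate())

spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_by_day (
        order_id BIGINT,
        amount   DOUBLE
    )
    PARTITIONED BY (ds STRING)
    STORED AS PARQUET
""")

# Filtering on the partition column (ds) lets the engine read a single
# partition instead of the whole table.
daily = spark.sql(
    "SELECT SUM(amount) AS total FROM sales_by_day WHERE ds = '2024-01-01'"
)
```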
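A Sqoop import from MySQL into HDFS is normally a command-line invocation; here it is wrapped in Python only to keep one language throughout. The connection string, credentials, table, and target directory are placeholders.

```python
# Hedged sketch of a Sqoop import from MySQL into HDFS; all connection
# details, credentials, and paths below are placeholders.
import subprocess

subprocess.run(
    [
        "sqoop", "import",
        "--connect", "jdbc:mysql://db-host:3306/sales",  # placeholder database
        "--username", "etl_user",                        # placeholder user
        "--password-file", "/user/etl/.sqoop-password",  # avoids plain-text passwords
        "--table", "orders",                             # placeholder table
        "--target-dir", "/data/raw/orders",              # HDFS landing directory
        "--num-mappers", "4",                            # parallel import tasks
    ],
    check=True,
)
```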

Education

Any Graduate