JOB DUTIES:
Analyzing large datasets and finding patterns and insights within structured and unstructured data. Installing, configuring, supporting, and managing Big Data applications and the underlying infrastructure of a Hadoop cluster. Analyzing data using HiveQL, Pig Latin, HBase, and custom MapReduce programs in Java. Creating NiFi templates for ingesting real-time data. Consuming real-time data from Kafka and processing it using Spark and Scala. Parsing complex JSON objects using NiFi and storing the flattened JSON attributes in Hive tables. Working with the Oozie workflow engine to run job workflows with MapReduce, shell, Pig, and Hive actions. Writing shell scripts for cleansing and preparing data before it is used for analysis and report building. Migrating data using Sqoop from HDFS to relational database systems and vice versa. Travel and/or relocation to unanticipated client sites is required.
EDUCATION REQUIRED:
Master’s degree in Computer Science/Technology/Engineering (any)/Business/IT or a related field with six (6) months of experience in the job offered or as an IT Consultant/IT Analyst/Developer/Programmer/IT Engineer or closely related fields. Employer also accepts a Bachelor’s degree in Computer Science/Technology/Engineering (any)/Business/IT or a related field plus five (5) years of progressive work experience in a related field.
EXPERIENCE REQUIRED:
Experience should include six (6) months of working with either Hadoop or Big Data applications. Travel and/or relocation to unanticipated client sites throughout the USA is required.