Must-Have Skills:
Hadoop
Python/Scala
Spark SQL
Job Details:
Strong SQL skills: one or more of MySQL, Hive, Impala, Spark SQL
Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats such as JSON, CSV, and XML
Experience working with Spark Structured Streaming (a minimal sketch follows this list)
Experience working with Hadoop/Big Data and Distributed Systems
Working experience with Spark, Sqoop, Kafka, MapReduce, NoSQL databases like HBase, Solr, CDP or HDP (Cloudera or Hortonworks), Elasticsearch, Kibana, etc.
Hands-on programming experience in at least one of Scala, Python, PHP, or shell scripting, among others
Performance-tuning experience with Spark/MapReduce or SQL jobs
Experience and proficiency with the Linux operating system is a must
Experience with the end-to-end design and build of near-real-time and batch data pipelines
Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle
Experience using source code and version control systems like SVN, Git, Bitbucket, etc.
Experience working with Jenkins and JAR management
Self-starter who works with minimal supervision and can collaborate in a team with diverse skill sets
Ability to comprehend customer requests and provide the correct solution
Strong analytical mind to help take on complicated problems
Desire to resolve issues and dig into potential problems
Ability to adapt and keep learning new technologies
Any graduate
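For illustration, a minimal sketch of the kind of near-real-time pipeline these requirements describe: JSON events ingested from a Kafka topic with Spark Structured Streaming and aggregated with Spark SQL. The broker address, topic name, and event schema below are assumptions for the sketch, not details from this posting.

// Minimal Spark Structured Streaming sketch: Kafka -> JSON parsing -> windowed aggregation.
// Requires the spark-sql-kafka connector on the classpath (an assumption about the build setup).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-sketch")
      .getOrCreate()

    // Assumed shape of the incoming JSON events.
    val eventSchema = new StructType()
      .add("event_id", StringType)
      .add("event_type", StringType)
      .add("event_time", TimestampType)

    // Ingest from Kafka (hypothetical broker and topic) and parse the JSON payload.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // assumption
      .option("subscribe", "events")                    // assumption
      .load()
      .select(from_json(col("value").cast("string"), eventSchema).as("e"))
      .select("e.*")

    // Spark SQL aggregation over a sliding event-time window with a watermark for late data.
    val counts = events
      .withWatermark("event_time", "10 minutes")
      .groupBy(window(col("event_time"), "5 minutes"), col("event_type"))
      .count()

    // Write the running counts to the console for demonstration.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}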