Developed programs using the Spark Java APIs over Hadoop YARN to perform analytics on data.
Implemented Spark applications using Clojure (functional programming), Spark Core, Spark Streaming, and Spark SQL for faster data processing and testing.
Worked with compressed HDFS file formats such as Apache Parquet, as well as various flat files.
Developed Spark jobs for parsing XML and JSON data.
Performed in-memory computations by loading data into Spark Datasets to generate output responses.
Analyzed SQL queries/scripts and designed solutions using Spark Datasets and RDDs in Clojure and Java. Basic understanding of Python and Scala scripts.
Validated dataset contents against business requirements using clojure.spec.
Worked with technology architects to design Spark models for applications.
Working knowledge of integrating HDFS applications with legacy systems and using RDDs for data processing.
Communicated and collaborated regularly with the development team to understand the design and performance specifications of the software.
Any graduate.