Description

Role & Responsibilities 
12+ years of IT experience in Data Engineering, Data Quality, Data Migrations, Data Architecture, Data Lake formation and Data Analytics. 
5+ years of solid hands-on experience with AWS services such as S3, EMR, VPC, EC2, IAM, EBS, RDS, Glue, Lambda, Lake Formation, etc. 
Must have experience producing architecture documents for small to large solution implementations. 
In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, Spark MLlib, etc. Experience handling very high-volume streaming data in various formats such as JSON, XML, Avro, Snappy, etc. 
Good exposure to Kafka, including future capacity planning, partition planning, and read/write design. 
Must have worked with Big Data and should have good knowledge of MapReduce and Spark. 
Must have very good working exposure to different kinds of databases, such as RDBMS, NoSQL (columnar and document), distributed databases, cloud databases, in-memory databases, etc. 
Python exposure is an added advantage.
 

Education

Any Graduate