Description:
Designs, develops, and implements Hadoop ecosystem-based applications to support business requirements. Follows approved life cycle methodologies, creates design documents, and performs program coding and testing. Resolves technical issues through debugging, research, and investigation.

Experience/Skills Required:
1. Bachelor's degree in Computer Science, Information Technology, or a related field, and 5 years of experience in computer programming, software development, or a related area
2. 3+ years of solid Java experience and 2+ years of experience in the design, implementation, and support of big data solutions in Hadoop using Hive, Spark, Drill, Impala, and HBase
3. Hands-on experience with Unix, Teradata, and other relational databases. Experience with @Scale a plus.
4. Strong communication and problem-solving skills.
Note:
Remote is fine, but hybrid is preferred
Data engineer with more than 4-5 years of experience
Required qualifications – experience with Python, PySpark, Scala, Spark, GCP, Airflow DAGs, Hive, data pipeline technologies, and any NoSQL database
Nice to haves – Hadoop, MapReduce, BigQuery, COSMOS
Any Graduate