Job Duties: Design and develop software applications using the Hadoop ecosystem (Hive, HDFS, Apache Spark, PySpark), Python, and Unix/shell scripting. Load and manipulate large data sets using Spark and Hive into Hadoop and GCP. Analyze business needs, profile large data sets, and build custom data models and applications to drive Adobe's business decision-making and customer experience. Develop and extend design patterns, processes, standards, frameworks, and reusable components for various data engineering functions/areas.
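A minimal PySpark sketch of the Spark-and-Hive loading described above; the application name, Hive table, HDFS path, and GCS bucket are illustrative placeholders, and writing to GCS assumes the GCS connector is available to Spark:

# Illustrative only: load a Hive table with Spark, aggregate it, and write the
# result to Hadoop (HDFS) and GCP (GCS). All names below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("data-load-example")   # hypothetical application name
    .enableHiveSupport()            # enables reading managed Hive tables
    .getOrCreate()
)

# Read a large data set from Hive and apply a simple profiling/clean-up step.
events = spark.table("analytics.web_events")   # hypothetical Hive table
daily_counts = (
    events
    .filter(F.col("event_date").isNotNull())
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Persist the result to HDFS and to a GCS bucket.
daily_counts.write.mode("overwrite").parquet("hdfs:///warehouse/daily_counts")
daily_counts.write.mode("overwrite").parquet("gs://example-bucket/daily_counts")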
Requirements: Must have a Master’s degree, or its foreign equivalent, in Computer Science, Engineering (any field), Information Technology, or a related field. Employer will accept any suitable combination of education, training, and/or experience.