Responsibilities:
- Possess extensive analysis, design, and development experience with Hadoop and AWS Big Data platforms
- Able to critically inspect and analyze large, complex, multi-dimensional data sets in Big Data platforms
- Work with Big Data technologies and distributed file systems, including Hadoop, HDFS, Hive, and HBase
- Define and execute appropriate steps to validate various data feeds to and from the organization
- Collaborate with business partners to gain in-depth understanding of data requirements and desired business outcomes
- Create scripts to extract, transfer, transform, load, and analyze data residing in Hadoop and in RDBMSs, including Oracle and Teradata
- Design, implement, and load table structures in Hadoop and in RDBMSs, including Oracle and Teradata, to facilitate detailed data analysis
- Participate in user acceptance testing in a fast-paced Agile development environment
- Troubleshoot data issues and work creatively and analytically to solve problems and design solutions
- Create documentation to clearly articulate designs, use cases, test results, and deliverables to varied audiences
- Create executive-level presentations and status reports
- Under general supervision, manage priorities for multiple concurrent projects while meeting published deadlines
Degree Requirement:
Bachelor’s degree in computer science, computer information systems, information technology, or a closely related IT field, or a combination of education and experience equating to the U.S. equivalent of a bachelor’s degree in one of the aforementioned subjects