Job Description

  • Analyze business requirement documents and implement technical solutions for privacy-related applications.
  • Develop ETL processes supporting data extraction, transformation, and loading.
  • Perform data conversions and aggregations using transformations such as Merge, Merge Join, Union All, Conditional Split, Sort, Derived Column, convert and cast (Data Conversion), Row Count, Lookup, and Fuzzy Lookup (a PySpark sketch of equivalent logic follows this list).
  • Develop UNIX scripts to load data from source servers to Teradata and to validate files transferred between servers (see the file-validation sketch after this list).
  • Develop new processes on the big data platform to implement state-level privacy regulations based on each state's law.
  • Create temporary and fact tables, load them with data, and write Teradata and Spark SQL queries (see the Spark SQL sketch after this list).
  • Optimize and tune ETL objects, indexing, and partitioning for better performance and efficiency.
  • Validate performance metrics and tune SQL, HQL, and Spark SQL queries.
  • Perform testing and provide test support across testing phases such as unit, user acceptance, regression, parallel, and system testing.
  • Promote components to the production environment through a CI/CD process using GitHub.
  • Use Script Tasks and Execute SQL Tasks to run SQL code, and use For Loop and Foreach Loop containers to group tasks and run them repeatedly (see the looping sketch after this list).
  • Create data flows that extract data from OLE DB, Excel, XML, and flat file sources into a SQL data warehouse destination.
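
The transformation names above match SQL Server Integration Services (SSIS) data-flow components; equivalent conversion and aggregation logic can be expressed in PySpark. The sketch below is illustrative only: the input paths, column names, and threshold are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-transforms-sketch").getOrCreate()

    # Hypothetical inputs standing in for the ETL sources.
    orders = spark.read.parquet("/data/orders")
    customers = spark.read.parquet("/data/customers")

    # Merge Join / Lookup equivalent: join the inputs on a shared key.
    joined = orders.join(customers, on="customer_id", how="inner")

    # Derived Column plus convert/cast: compute a column and change its type.
    joined = joined.withColumn("amount_usd", F.col("amount").cast("decimal(18,2)"))

    # Conditional Split equivalent: route rows by a predicate.
    high_value = joined.filter(F.col("amount_usd") >= 1000)
    low_value = joined.filter(F.col("amount_usd") < 1000)

    # Union All plus Sort equivalents: recombine the streams and order the output.
    recombined = high_value.unionByName(low_value).orderBy("order_date")

    # Row Count equivalent: capture a simple audit metric.
    print(f"rows processed: {recombined.count()}")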
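
File validation between servers usually reduces to comparing sizes and checksums after a transfer. A minimal sketch, assuming both copies are reachable as local paths (for example over mounted filesystems):

    import hashlib
    from pathlib import Path

    def file_checksum(path: Path) -> str:
        # Read in chunks so large extract files do not exhaust memory.
        digest = hashlib.md5()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def validate_transfer(source: Path, target: Path) -> bool:
        # A transfer passes only when both size and checksum match.
        if source.stat().st_size != target.stat().st_size:
            return False
        return file_checksum(source) == file_checksum(target)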
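
Continuing the PySpark sketch above, creating and loading a fact table with Spark SQL might look like the following; the table and column names are placeholders, and partitioning by the date column is one common tuning choice rather than a prescribed design.

    # Register transformed data as a temporary view, then load a fact table.
    recombined.createOrReplaceTempView("stg_orders")

    spark.sql("""
        CREATE TABLE IF NOT EXISTS fact_orders (
            order_id BIGINT,
            customer_id BIGINT,
            amount_usd DECIMAL(18,2),
            order_date DATE
        )
        USING parquet
        PARTITIONED BY (order_date)
    """)

    spark.sql("""
        INSERT INTO fact_orders
        SELECT order_id, customer_id, amount_usd, order_date
        FROM stg_orders
    """)

    # A common first step in query tuning: inspect the physical plan.
    spark.sql("SELECT * FROM fact_orders WHERE order_date = '2024-01-01'").explain()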
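
The Script Task, Execute SQL Task, and loop containers are likewise SSIS constructs. The analogous pattern in plain Python is to loop over extract files, stage each into the warehouse, and then run set-based SQL; the connection string, file pattern, and procedure name below are hypothetical.

    import glob

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical warehouse connection; server, credentials, and driver are placeholders.
    engine = create_engine(
        "mssql+pyodbc://user:password@dw-server/warehouse"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    # Foreach Loop equivalent: repeat the same load task for every extract file.
    for path in glob.glob("/landing/extracts/*.csv"):
        frame = pd.read_csv(path)
        frame.to_sql("stg_extract", engine, if_exists="append", index=False)

    # Execute SQL Task equivalent: run set-based SQL after the staging load.
    with engine.begin() as conn:
        conn.exec_driver_sql("EXEC dbo.usp_merge_stg_extract")  # hypothetical procedure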

Minimum Education Required: The responsibilities above require, at a minimum, a Bachelor’s degree in computer science, computer information systems, or technology management, or a combination of education and experience equating to the U.S. equivalent of a Bachelor’s degree in one of those subjects.

Education

Any Graduate