Description

Preferred Qualifications:
Bachelor of Science degree in Computer Science or equivalent
Vantage Master 2.0 certification and/or Vantage Data Science Master certification
At least 7 years of post-degree professional experience
5+ years of airline industry experience
4+ years development experience building and maintaining ETL pipelines
3+ years of experience with database and data development technologies such as Python, PL/SQL, etc.
Experience mentoring junior team members through code reviews and recommending adherence to best practices
Deep understanding of writing test cases to ensure data quality, reliability, and a high level of confidence
Track record of advancing new technologies to improve data quality and reliability
Ability to continuously improve the quality, efficiency, and scalability of data pipelines
Expert skills in working with queries and applications, including performance tuning and the use of indexes and materialized views to improve query performance
Ability to identify necessary business rules for extracting data, along with functional or technical risks related to data sources (e.g., data latency, frequency, etc.)
Ability to develop initial queries for profiling data, validating analysis, testing assumptions, driving data quality assessment specifications, and defining a path to deployment (a brief illustrative sketch follows this list)
Familiarity with best practices for data ingestion and data design
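
As a rough illustration of the data profiling and quality testing described above, the sketch below uses the Teradata SQL Driver for Python (teradatasql) to profile a hypothetical staging table and fail fast when a null-rate threshold is exceeded. The host, credentials, table, column, and threshold are placeholders for illustration only, not details of this role.

    # Minimal sketch, assuming the teradatasql driver (Teradata SQL Driver for Python).
    # Host, credentials, table, column, and threshold are hypothetical placeholders.
    import teradatasql

    TABLE = "stage.flight_bookings"   # hypothetical staging table
    MAX_NULL_RATE = 0.01              # hypothetical data quality threshold

    with teradatasql.connect(host="tdhost", user="etl_user", password="***") as con:
        cur = con.cursor()
        # Profile the table: total rows and NULL rows for a key column
        cur.execute(f"SELECT COUNT(*), COUNT(*) - COUNT(booking_id) FROM {TABLE}")
        total, nulls = cur.fetchone()

    null_rate = (nulls / total) if total else 0.0
    print(f"rows={total}, null rate for booking_id={null_rate:.4f}")

    # Fail fast if the profile violates the agreed data quality rule
    if null_rate > MAX_NULL_RATE:
        raise ValueError(f"null rate {null_rate:.4%} exceeds {MAX_NULL_RATE:.2%}")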

Key Responsibilities and Skill Sets:
Database management and ETL development using tools such as Teradata utilities (e.g., BTEQ, FastLoad, MultiLoad, TPT); data modeling; database performance tuning; data security and governance; cloud integration and management; cloud infrastructure; automation and DevOps; data warehousing and analytics; security and compliance
In-depth knowledge of Teradata architecture and components (e.g., AMP, Nodes, Parsing Engines, etc.).
Proficiency in Teradata SQL and Teradata utilities.
Strong SQL skills for querying and managing relational databases.
Scripting experience (e.g., Shell, Python) for automating tasks.
Knowledge of query optimization techniques, indexing, partitioning, and workload management in Teradata.
Proficiency with core AWS services: EC2, S3, RDS, Lambda, CloudWatch, CloudFormation, etc.
Familiarity with AWS storage options, such as S3, EFS, and Glacier, in the context of large-scale data storage and retrieval.
Knowledge of AWS services that enhance Teradata performance, such as using S3 for staging data or integrating with Redshift for analytics.
Hands-on experience with AWS CloudFormation, Terraform, or similar tools for automating infrastructure deployment.
Proficiency in automating ETL pipelines and data processing workflows using AWS Lambda, Step Functions, or AWS Glue (a brief illustrative sketch follows below)
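
As a rough illustration of the ETL automation mentioned in the last item, the sketch below shows an AWS Lambda handler (using boto3) that starts a Glue job whenever a new object lands in a staging S3 bucket. The Glue job name and the --source_path argument are hypothetical placeholders, not part of this role's actual environment.

    # Minimal sketch, assuming an S3 put-event trigger and a pre-existing Glue job.
    # The Glue job name and the --source_path argument are hypothetical placeholders.
    import boto3

    glue = boto3.client("glue")

    def lambda_handler(event, context):
        started = []
        for record in event.get("Records", []):        # S3 event notification records
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Start the downstream Glue ETL job, passing the new object's location
            run = glue.start_job_run(
                JobName="stage-to-warehouse-etl",       # hypothetical job name
                Arguments={"--source_path": f"s3://{bucket}/{key}"},
            )
            started.append(run["JobRunId"])
        return {"started_job_runs": started}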

Education

Bachelor's degree in Computer Science