Job Description:
Role Description
We are looking for a data engineer to design, develop, and maintain data pipelines and workflows, and to create analytics that digitally transform the current NA Accident and Health (A&H) reporting and business decision processes.
Mandatory:
- Candidate must have experience with Palantir Foundry (https://www.palantir.com/platforms/foundry/).
- Candidate must have experience with Stop Loss Insurance (https://www.spencerjamesgroup.com/blog/what-is-stop-loss-insurance).
Responsibilities
- Develop and support data pipelines that produce data assets for various A&H workstreams, including UW, UA, Actuarial, and Claims
- Build and test data pipelines that curate, parse, clean, transform, and enrich data
- Apply fundamentals of data processing, data pipelines, data lineage, and ETL (Extract, Transform, Load) methodologies
- Implement projects according to the Software Development Life Cycle (SDLC) using a fast-paced Agile methodology, including task completion and user stories
- Apply knowledge of database management system software, object-oriented programming, system architecture and components, and various programming languages
- Review and analyze business workflows and user data needs
- Design and implement business performance dashboards
- Write customized queries and programs to generate automated periodic reports highlighting key performance indicators (KPIs)
- Build applications using SQL and/or Python scripts to manipulate data and to monitor and improve data quality (a minimal illustrative sketch follows this list)
- Design, build and maintain end-to-end data solutions supporting our processes with the right data architecture
- Apply working knowledge of Apache Spark, big data processing, and building products on a distributed cluster-computing framework
- Construct workflow charts and diagrams and write specifications
- Document the end-to-end data pipeline process
- Document data assets for information management purposes
- Provide ad hoc team and business support as needed
- Explore and evaluate new technologies and platforms
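For illustration only (not a requirement of this posting): the SQL/Python pipeline and data-quality work described above might look like the following minimal PySpark sketch. All column names, file paths, and the application name are hypothetical assumptions, not details taken from this role.

# Minimal sketch: curate, clean, and quality-check a raw claims extract with PySpark.
# Column names ("claim_id", "loss_date", "paid_amount") and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ah_claims_pipeline").getOrCreate()

# Ingest the raw extract (curation/parsing step).
raw = spark.read.option("header", True).csv("/data/raw/ah_claims.csv")

# Clean and enrich: normalize identifiers, parse dates, cast amounts, drop duplicates.
clean = (
    raw.withColumn("claim_id", F.trim(F.col("claim_id")))
       .withColumn("loss_date", F.to_date(F.col("loss_date"), "yyyy-MM-dd"))
       .withColumn("paid_amount", F.col("paid_amount").cast("double"))
       .dropDuplicates(["claim_id"])
)

# Simple data-quality check: reject the load if key fields are missing.
bad_rows = clean.filter(
    F.col("claim_id").isNull() | F.col("loss_date").isNull() | F.col("paid_amount").isNull()
).count()
if bad_rows > 0:
    raise ValueError(f"Data-quality check failed: {bad_rows} rows with missing key fields")

# Publish the curated data asset for downstream UW / Actuarial / Claims use.
clean.write.mode("overwrite").parquet("/data/curated/ah_claims")

In Palantir Foundry, comparable logic would typically be authored as a pipeline transform; the sketch above only illustrates the general shape of the work.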
Requirements:
- Bachelor’s or equivalent degree in Computer Science, Data Science, Statistics, or another relevant quantitative field
- 5+ years as a data engineer
- Sound Python and SQL skills, with the ability to query and analyze data and to understand complexity and data structures.
- Experience with data and analytics technologies, including but not limited to Hadoop, Spark, Java, Python, R, and Elasticsearch.
- Familiarity with relational database concepts
- Detail-oriented, analytical, and inquisitive
- Good communication skills
- Highly organized with strong time-management skills
- Ability to work independently and collaborate well with others.
- Ability to effect smooth organizational transformations.