Description

Must Haves
Hands-on experience with Spark SQL
Experience working in a Hadoop environment
Python and PySpark scripting
ETL experience
Finance experience
Data Aggregation
Performance tuning and optimization in Spark
A data engineering background is preferred.
Work within an in-house framework that abstracts the Hadoop engineering aspects.
Build robust data pipelines.
Previous Java experience is preferred; the application has some Java components.

Qualification notes:
Part of the data solutions group.
The team works with different organizations across the enterprise, delivering data assets.
This role supports one of those assets, specifically for data transformation.
The asset aggregates enterprise data for downstream consumers in finance and risk.
High-visibility role.

Nice to have:
Experience in a hybrid shop working with Ab Initio and migrating to Spark.
Knowledge of Spark tuning and how to work with Spark processes.

Education

Any Graduate