AWS Data Engineer
Remote Job | 2023-02-06 09:41:10
Job Code: Apex163
Hi All,
Role: AWS Data Engineer
Location: Remote (must work EST or CST hours)
Duration: 1+ year
Type: C2C/W2
Client: Apex/Fidelity Investments
Job Description:
We talked to Fidelity this morning, and they gave us good insight into their expectations for this position and what we need to look for. Please see the detailed notes below. They are looking for a true Cloud Data Engineer, specifically on AWS. Let me know if you have any further questions about this.
AWS Data Engineer
- Interview feedback: candidates have not been able to detail their experience with AWS data pipelining. They mention it on their resumes but can't speak to the experience beyond a surface level.
- Services they're using and looking for: AWS Glue, Lambda, SQS and SNS, IAM, CloudFormation templates, AWS security and networking. They are currently deploying pipelines with CloudFormation and running them in Lambda, with scheduling through SQS (see the sketch after this list). Need someone who has worked the deployment side as well as the build side, so CloudFormation, Lambda, and pipeline builds in PySpark and Glue.
- Need senior-level candidates with Python experience. They have hired several junior-level candidates recently and are training them up; the remaining openings need people who can come in and immediately contribute to the project. They do not have bandwidth to train.
- NOT looking for a database-focused candidate (e.g., Redshift, DynamoDB, Aurora with EMR). This is more of an AWS Infra/DevOps + Data Engineer profile, a true Cloud Data Engineer role.
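To make the stack above concrete, here is a minimal sketch of the pattern described: an SQS-triggered Lambda that starts a Glue (PySpark) job run through boto3. The job name, message shape, and argument names are hypothetical placeholders, not the client's actual resources.

```python
# Minimal sketch, assuming an SQS-triggered Lambda that launches a Glue
# PySpark job via boto3. All names here are hypothetical placeholders.
import json

import boto3

glue = boto3.client("glue")

GLUE_JOB_NAME = "example-pyspark-pipeline"  # placeholder job name


def handler(event, context):
    """Entry point invoked by the SQS event source mapping."""
    for record in event["Records"]:  # SQS delivers messages in batches
        message = json.loads(record["body"])
        # Forward message fields to the Glue job as run arguments.
        run = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--input_path": message.get("input_path", "")},
        )
        print(f"Started Glue job run {run['JobRunId']}")
```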
Updated Requirements:
- Python
- Spark/PySpark
- AWS Glue (basic familiarity)
- Lambda
- CloudFormation
- Understanding of AWS cloud systems; need to know how the deployment of a pipeline works from build through go-live and production (a rough deployment sketch follows this list).
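As a rough illustration of that build-to-production flow, the sketch below deploys a pipeline stack from a CloudFormation template using boto3 and waits for it to go live. The template file and stack name are assumptions for illustration only.

```python
# Minimal deployment sketch, assuming a CloudFormation template that
# defines the pipeline (Lambda, SQS queue, Glue job, IAM roles).
# The template path and stack name are hypothetical placeholders.
import boto3

cf = boto3.client("cloudformation")

with open("pipeline-stack.yaml") as f:  # placeholder template file
    template_body = f.read()

cf.create_stack(
    StackName="example-data-pipeline",  # placeholder stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM roles
)

# Block until every resource in the stack is created and the pipeline is live.
cf.get_waiter("stack_create_complete").wait(StackName="example-data-pipeline")
print("Stack deployed; pipeline is live.")
```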
Required Skills:
AWS: CloudFormation templates, Lambda, IAM, S3, Glue
Need strong experience in Python
Need Spark/PySpark experience