Description

Bachelor's degree or equivalent
4+ years of experience with Big Data Hadoop clusters (HDFS, YARN, Hive, MapReduce), Spark, and AWS EMR
4+ years of recent experience building and deploying applications in AWS (S3, Hive, Glue, AWS Batch, DynamoDB, Redshift, AWS EMR, CloudWatch, RDS, Lambda, SNS, SQS, etc.)
4+ years of experience with Python, SQL, Spark SQL, and PySpark
Excellent problem-solving skills and strong verbal and written communication skills
Ability to work independently as well as part of an Agile team (Scrum/Kanban)
Skilled in discovering patterns in large data sets using relevant software such as Oracle Data Mining or Informatica
Skilled in documentation and database reporting for the purposes of analysis, data discovery, and decision-making, using relevant software such as Crystal Reports, Excel, or SSRS
Skilled in cloud technologies and cloud computing, particularly AWS
Skilled in creating and managing databases using relevant systems such as MySQL, Hadoop, or MongoDB

Education

Bachelor's degree