Minimum 3 years of experience in designing and building large-scale on-premises data warehouse solutions.
Minimum 1 year of hands-on experience in designing, building, and operationalizing large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party technologies such as Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue, DMS, S3, RDS, and Snowflake.
Proven experience in analyzing, re-architecting, and migrating on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services.
Good knowledge of designing and building data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala.
Database and application architecture experience.
Expert-level skills in writing and optimizing SQL.
Experience with big data technologies such as Hadoop, Hive, and Spark.
Experience with very large data warehouses or data lakes.
Hands-on experience in streaming data processing.
Solid experience with Amazon Redshift and Amazon RDS.
Good-to-have skills:
Knowledge of BI tools
Experience in on-premises-to-cloud migration projects
Hands-on experience in implementing DevOps for data applications
Operating systems knowledge, especially UNIX, Linux, and Solaris
Any graduate