• At least 5 years of experience developing in Python and SQL (PostgreSQL/Snowflake preferred).
• Bachelor’s degree or equivalent work experience in computer science, data science, or a related field.
• Experience working with a variety of databases and an understanding of data concepts (including data warehousing, data lake patterns, and structured and unstructured data).
• Experience with data storage/Hadoop platform implementation, including 3+ years of hands-on experience implementing and performance-tuning Hadoop/Spark deployments.
• Implementation and tuning experience with Amazon Elastic MapReduce (EMR).
• Experience implementing AWS services in a variety of distributed computing and enterprise environments.
• Experience writing automated unit, integration, regression, performance, and acceptance tests.