Job role: Architect - Data
Job location: Mumbai/Bangalore (Hybrid)
Experience: 8 to 13 Years
Job Responsibilities
- Design and build hybrid data solutions using a combination of on-premises and cloud services
- Build tools to ingest and process terabyte-scale data daily
- Communicate with a wide range of teams, including product owners, business analysts, data and enterprise architects, platform engineers, and cloud vendors
- Compile detailed design documents, technical specification documents, and test plans as required
- Independently manage development tasks per technical specification documents, production deployment, and post-production support
- Manage support for unit testing, SIT, and UAT
- Effectively articulate technical issues to the tech lead and business team, and facilitate their resolution
Skills Required
- Strong troubleshooting skills to identify the root cause of performance bottlenecks and other issues
- Problem-solving mindset and experience working in an agile environment
- Strong communication skills, including the ability to convey analytic insights effectively to both IT and business audiences
- Strong knowledge of the Hadoop ecosystem and its core frameworks, including HDFS, YARN, Spark, MapReduce, Pig, Hive, Flume, Sqoop, Oozie, Impala, ZooKeeper, and Kafka
- Proficient in SQL-based technologies (MySQL, Oracle DB, SQL Server, etc.) as well as NoSQL technologies (Cassandra and MongoDB)
- Proficient in writing SQL, performance tuning and optimization, and basic data modeling
- Proficiency in handling both ETL and data warehousing solutions
- Experience in large-scale data migration projects, including history loads and incremental loads
- Experience working with multiple programming languages, such as Python, R, Java, and Scala
- Good to have: knowledge of AWS services such as EMR, Glue, Lambda, DMS, Kinesis, and DynamoDB
- Good to have: knowledge of Redshift/Snowflake
- (1) Big Data + Redshift: must-have skills are Spark/PySpark, Hive, and Redshift
- (2) Big Data + Snowflake: must-have skills are Spark/PySpark, Hive, and Snowflake