Requirements
Expertise in Azure Data Factory V2
Expertise in other Azure components such as Data Lake Store, SQL Database, and Databricks
Experience in building Power BI reports
Must have working knowledge of Spark programming
Good exposure to data projects involving data design and source-to-target documentation, including defining transformation rules
Strong knowledge of CI/CD processes
Understanding of Azure Data Factory components such as pipelines, activities, datasets, and linked services
Exposure to dynamic configuration of pipelines using datasets and linked services
Experience in designing, developing, and deploying pipelines to higher environments
Good knowledge of file formats for flexible usage and of file location objects (SFTP, FTP, local, HDFS, ADLS, Blob Storage, Amazon S3, etc.)
Strong knowledge of SQL queries
Must have worked in full life-cycle development from functional design to deployment
Should have working knowledge of Git and SVN
Good experience establishing connections to heterogeneous sources such as Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, various databases, etc.
Should have working knowledge of Azure resources such as Storage Accounts, Synapse, Azure SQL Server, Azure Databricks, and Azure Purview
Experience with metadata management, data modelling, and related tools (Erwin, ER Studio, or others) is preferred
Any Graduate