- Minimum of 6 years of relevant experience, with strong proficiency in SQL, Python, and PySpark for building and optimizing data workflows.
- In-depth knowledge of Azure Databricks for data engineering, including proficiency in Apache Spark, PySpark, and Delta Lake.
- Familiarity with Databricks components such as Workspace, Runtime, Clusters, Workflows, Delta Live Tables (DLT), functions, Hive Metastore, SQL Warehouse, Delta Sharing, and Unity Catalog.
- Knowledge of the insurance industry is preferred.
- Experience in ETL/ELT and data integration, with an understanding of enterprise data models such as CDM and departmental data marts.
- Proficiency with Azure Data Factory (ADF) for building complex data pipelines and integrating data from multiple sources.
- Familiarity with Azure Purview, Azure Key Vault, Azure Active Directory, and role-based access control (RBAC) for managing security and compliance in data platforms.