We are looking for a seasoned Data Engineer with extensive experience designing and implementing data pipelines using the Medallion Architecture, Azure Databricks, and Snowflake. You will build scalable ETL pipelines, optimize data flows, and ensure data quality across large-scale data platforms.
Key Responsibilities:
- Design, develop, and optimize data pipelines in Azure Databricks following the Medallion Architecture (Bronze, Silver, Gold layers).
- Implement and maintain ETL pipelines using Azure Data Factory (ADF), Databricks, and Python/PySpark.
- Leverage Snowflake for data warehousing, including schema design, data loading, and performance tuning.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver robust data solutions.
- Develop data models and manage data lakes for structured and unstructured data.
- Implement data governance and security best practices.
- Monitor and troubleshoot data pipelines for performance and reliability.
- Stay up to date with industry trends and best practices in data engineering and cloud technologies.
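To illustrate the layered pattern the role centers on, here is a minimal sketch of a Medallion-style flow. All names and data are hypothetical; in production these stages would typically be PySpark DataFrames and Delta tables on Databricks rather than plain Python lists.

```python
# Hypothetical Medallion-style pipeline: Bronze (raw) -> Silver (cleaned)
# -> Gold (business aggregate). Illustrative only; real pipelines would
# use PySpark/Delta Lake on Databricks.

def bronze_ingest(raw_records):
    # Bronze: land raw data as-is, tagging each record with its source.
    return [dict(r, _source="orders_api") for r in raw_records]

def silver_clean(bronze):
    # Silver: validate and standardize -- drop records with a missing
    # order_id and cast amounts to floats.
    return [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in bronze
        if r.get("order_id") is not None
    ]

def gold_aggregate(silver):
    # Gold: business-ready aggregate -- total revenue across orders.
    return {"total_revenue": sum(r["amount"] for r in silver)}

raw = [
    {"order_id": 1, "amount": "10.5"},
    {"order_id": None, "amount": "3.0"},   # invalid: filtered out at Silver
    {"order_id": 2, "amount": "4.5"},
]
result = gold_aggregate(silver_clean(bronze_ingest(raw)))
# result == {"total_revenue": 15.0}
```

Each layer only reads from the layer before it, so data quality checks and reprocessing can be applied at a well-defined boundary.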