Data Engineer

Posted on January 12, 2026


Job Description

Job Overview

We are currently seeking experienced Data Engineers (5–7 years of experience) with strong expertise in Databricks, PySpark, and Data Fabric concepts to contribute to an ongoing enterprise data transformation initiative. The ideal candidates will have solid hands-on engineering skills, a good understanding of modern data architectures, and the ability to work collaboratively within cross-functional teams.

Key Responsibilities

  • Strong experience in understanding and translating data transformation logic written in T-SQL and implementing equivalent, efficient transformations in Databricks using PySpark, aligned with Data Fabric design principles.
  • Hands-on experience in designing and implementing data ingestion pipelines using Azure Data Factory, enabling reliable data movement from source systems to the RAW and curated data layers within a Data Fabric ecosystem.
  • Working knowledge of Data Fabric concepts, including metadata-driven pipelines, data integration, orchestration, data lineage, and governance, with the ability to apply these principles in day-to-day engineering tasks.
  • Experience in monitoring, collecting, and analyzing pipeline performance metrics to identify inefficiencies and support optimization of data ingestion and processing workflows.
  • Practical experience in performance tuning and optimization of Databricks read and write operations, including partitioning, file formats, and query optimization techniques.
  • Ability to collaborate closely with senior engineers and architects, contribute to design discussions, follow best practices, and support the continuous improvement of the data platform.
  • Strong problem-solving skills, eagerness to learn, and the ability to work effectively with cross-functional teams, including data analysts, data scientists, and business stakeholders.

Screening Checklist

  • Proficiency in interpreting data transformation logic written in T-SQL and implementing equivalent processes within Databricks.
  • Ability to design and implement data ingestion pipelines using Azure Data Factory (from source to RAW layer).
  • Basic reading knowledge of C and SQL (the ability to read code is sufficient; writing is not required).
  • Experience in collecting and analyzing performance metrics to optimize data ingestion pipelines.
  • Competence in performing performance optimizations for Databricks read/write queries as needed.

Required Skills

T-SQL, Azure Data Factory, SQL, Databricks, PySpark
