Data Engineer

Posted on May 21, 2025


Job Description

  • Position: Data Engineer
  • Experience: 8 years
  • *Remote*
  • Send Bench Candidates only
  • Mandatory Skills: Azure Databricks, PySpark, Data Modelling & Architecture, Microsoft Fabric, Data Migration
  • Job Description:
  • We are seeking a highly skilled and experienced Azure Data Engineer with over 8 years of expertise in designing, developing, and maintaining scalable data solutions on Microsoft Azure. The ideal candidate should have strong proficiency in SQL, Python (or a similar language), and extensive hands-on experience with PySpark, ADF, Azure Databricks, and Data Lake. Experience with Microsoft Fabric and Power BI is a plus, along with domain knowledge in Finance, Procurement, or Human Capital.
  • Key Responsibilities:
  • Design, develop, and maintain robust and scalable data pipelines using Azure Data Factory, Azure Databricks, and PySpark.
  • Perform advanced data manipulation and scripting using Python or equivalent programming languages.
  • Implement and maintain large-scale data solutions leveraging Azure Data Lake and Azure Synapse Analytics.
  • Apply data warehousing concepts and methodologies to design and develop data warehouse systems.
  • Collaborate with cross-functional teams to understand business requirements and translate them into data models and architecture.
  • Ensure data quality, integrity, and governance by following best practices in data engineering.
  • Optimize data workflows and manage performance tuning of big data applications.
  • Create and manage ETL processes using tools such as Apache Spark.
  • Support reporting and analytics needs by integrating with tools like Power BI.
  • Stay updated with emerging Azure technologies and contribute to the improvement of existing data solutions.
  • Required Skills and Qualifications:
  • 8+ years of experience in data engineering, specifically within the Azure ecosystem.
  • Proficiency in SQL and scripting/programming in Python or similar languages.
  • Strong experience with Azure Data Factory (ADF), PySpark, Azure Databricks, Data Lake, and Azure Synapse.
  • Solid understanding of data warehousing concepts, data integration, and ETL processes.
  • Experience with Apache Spark or similar big data platforms.
  • Knowledge of data modelling, data governance, and metadata management.
  • Excellent analytical, problem-solving, and communication skills.
  • Preferred Qualifications:
  • Experience with Microsoft Fabric.
  • Familiarity with Power BI for data visualization and dashboarding.
  • Domain knowledge in Finance, Procurement, or Human Capital.

Required Skills

Azure Databricks, PySpark, Data Modelling & Architecture, Microsoft Fabric, Data Migration