Data Engineers
Posted on April 11, 2025
Job Description
- Experience: 7+ years
- The candidate must have PySpark and SQL data analytics experience
- Key Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using PySpark.
- Write complex and efficient SQL queries for data extraction, transformation, and reporting.
- Analyze structured and semi-structured data from various sources to support business requirements.
- Collaborate with data engineers, data scientists, and business stakeholders to understand data needs and deliver accurate insights.
- Ensure data quality and integrity across all analytics and reporting processes.
- Create and maintain clear documentation of data workflows, queries, and analytics logic.
- Identify opportunities to automate and streamline data processing and reporting.
- Required Skills:
- Strong hands-on experience with PySpark for data processing and transformation.
- Expert-level knowledge in SQL (preferably on platforms like PostgreSQL, MySQL, or Hive).
- Proficient in Python for data manipulation and scripting.
- Experience with data warehousing concepts and working with large datasets.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data tools like Databricks, Snowflake, or Hive is a plus.
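
As a rough illustration of the PySpark and SQL work described above, the sketch below builds a small aggregation pipeline and then expresses the reporting step as a SQL query. The dataset, column names, and storage paths (orders, order_amount, s3://example-bucket/...) are assumptions for illustration, not details from this role.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_analytics").getOrCreate()

# Load semi-structured source data (path is hypothetical)
orders = spark.read.json("s3://example-bucket/raw/orders/")

# DataFrame-based transformation: filter, derive a date, and aggregate
daily_revenue = (
    orders
    .filter(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.sum("order_amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

# Reporting step expressed in SQL against a temporary view
daily_revenue.createOrReplaceTempView("daily_revenue")
report = spark.sql("""
    SELECT order_date, revenue, customers
    FROM daily_revenue
    WHERE revenue > 0
    ORDER BY order_date
""")

# Persist results for downstream reporting (output path is hypothetical)
report.write.mode("overwrite").parquet("s3://example-bucket/analytics/daily_revenue/")
```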