Data Engineer
Posted on August 21, 2025
Job Description
- Data Engineer
- Job Overview
- Location
- Gurgaon, India
- Job Type
- Contract
- Work From
- Onsite
- Job Requirements
- Experience
- 8 - 10 years
- No. of Positions
- 2
- Duration
- 3-6 months
- Skills
- S3
- Glue
- Athena
- EMR
- Kinesis
- Flink
- Kafka
- Data lake
- data warehousing
- Apache
- Job Description
- Data Engineer
- About the Job:
- Join our team as a Data Engineer and contribute to our mission of leveraging data to drive meaningful insights and deliver innovative solutions for our clients.
- What will you do?
- Design, develop and own robust data pipelines, ensuring optimal performance, scalability, and maintainability.
- Design and implement Data Lake, Data Warehouse, and Lakehouse solutions using different architecture patterns.
- Ensure data quality, integrity, and governance across all stages of the data lifecycle.
- Monitor and optimize the performance of data engineering pipelines.
- Contribute to design principles, best practices, and documentation.
- Collaborate closely with cross-functional teams to understand business requirements in depth and translate them into effective technical designs and implementations that support the organization's data-driven initiatives.
- Provide mentorship and guidance to other members of the data engineering team, promoting knowledge transfer and a culture of continuous learning and skills development.
- We are looking for:
- Bachelor's degree in Computer Science, Information Systems, or a related field; a Master's degree is a plus.
- A seasoned Data Engineer with at least 8 years of experience.
- Deep experience in designing and building robust, scalable data pipelines, both batch and real-time, using modern data engineering tools and frameworks.
- Proficiency in AWS Data Services (S3, Glue, Athena, EMR, Kinesis etc.).
- Strong command of SQL, of open table and file formats such as Apache Parquet, Delta Lake, Apache Iceberg, or Apache Hudi, and of CDC (change data capture) patterns.
- Experience with stream processing frameworks such as Apache Flink or Kafka Streams, and with distributed data processing frameworks such as PySpark.
- Expertise in workflow orchestration using Apache Airflow.
- Strong analytical and problem-solving skills, with the ability to work independently in a fast-paced environment.
- In-depth knowledge of database systems (both relational and NoSQL) and experience with data warehousing concepts.
- Hands-on experience with data integration tools and strong familiarity with cloud-based data warehousing and processing are highly desirable.
- Excellent communication and interpersonal skills, facilitating effective collaboration with both technical and non-technical stakeholders.
- A strong desire to stay current with emerging technologies and industry best practices in the data landscape.
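To give candidates a flavor of the CDC patterns mentioned above, here is a purely illustrative toy sketch (not part of this role's actual stack; the `apply_cdc` function and its event shape are invented for this example). Real pipelines would implement this with Spark, Flink, or a lakehouse table format's merge support:

```python
# Toy sketch: applying CDC (change data capture) events to a table snapshot.
# Illustrative only; production systems would use Delta/Iceberg/Hudi MERGE semantics.

def apply_cdc(snapshot: dict, events: list) -> dict:
    """Apply insert/update/delete events keyed by primary key and return the new snapshot."""
    table = dict(snapshot)  # copy so the input snapshot is left untouched
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            table[key] = ev["row"]      # upsert the changed row
        elif op == "delete":
            table.pop(key, None)        # tolerate deletes for already-absent keys
    return table

snapshot = {1: {"name": "a"}, 2: {"name": "b"}}
events = [
    {"op": "update", "key": 1, "row": {"name": "a2"}},
    {"op": "delete", "key": 2},
    {"op": "insert", "key": 3, "row": {"name": "c"}},
]
print(apply_cdc(snapshot, events))  # → {1: {'name': 'a2'}, 3: {'name': 'c'}}
```

The same upsert/delete apply step is what a MERGE INTO statement performs at scale over a lakehouse table.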
Required Skills
S3, Glue, Athena, EMR, Kinesis, Flink, Kafka, Data Lake, Data Warehousing, Apache