Lead Data Engineer (must have experience with Fivetran)
Posted on July 9, 2025
Job Description
- Strong hands-on expertise in SQL, dbt, and Python for data processing and transformation.
- Lead the design and development of batch and real-time data pipelines using modern cloud-native technologies (Azure, Snowflake, dbt, Python).
- Expertise in Azure data services (e.g., Azure Data Factory, Synapse, Event Hub) and orchestration tools.
- Translate business and data requirements into scalable data integration designs.
- Strong experience with Snowflake, including schema design, performance tuning, and its security model.
- Guide and review development work across data engineering team members (onshore and offshore).
- Good understanding of dbt for the transformation layer and modular pipeline design (see the model sketch after this list).
- Define and enforce best practices for coding, testing, version control, CI/CD, data quality, and pipeline monitoring.
- Hands-on with Git and version control practices: branching, pull requests, and code reviews.
- Collaborate with data analysts, architects, and business stakeholders to ensure data solutions are aligned with business goals.
- Understanding of DevOps/DataOps principles: CI/CD for data pipelines, testing, and monitoring.
- Own and drive end-to-end data engineering workstreams, from design to production deployment and support.
- Knowledge of data modeling techniques: star schema, Data Vault, and normalization/denormalization.
- Provide architectural and technical guidance on platform setup, performance tuning, cost optimization, and data security.
- Experience with real-time data processing architectures is a strong plus.
- Drive data engineering standards and reusable patterns across projects to ensure scalability, maintainability, and reusability of code and data assets.
- Proven leadership experience: should be able to mentor team members, take ownership, and make design decisions independently.
- Define and oversee data quality frameworks to proactively detect, report, and resolve data issues across ingestion, transformation, and consumption layers.
- Strong sense of ownership, accountability, and solution-oriented mindset.
- Act as the go-to technical resource for complex design, performance, or integration issues across multiple teams and tools (e.g., dbt + Snowflake + Azure pipelines).
- Ability to handle ambiguity and work independently with minimal supervision.
- Contribute hands-on development for end-to-end integration pipelines and workflows.
- Clear and confident communication (written and verbal): must be able to present design and architecture decisions.
- Document designs and decisions using Excel, Word, or tools like Confluence.
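To make the dbt expectation above concrete, the sketch below shows the kind of modular staging model in scope: raw-table access isolated in one place, renames and casts in another, so downstream models only ever use ref(). The model and source names (stg_orders, raw.orders) are hypothetical, not part of this posting.

```sql
-- models/staging/stg_orders.sql -- hypothetical model; assumes a 'raw.orders'
-- source declared in a dbt sources.yml file.
with source as (

    -- isolate the raw-table reference so downstream logic never touches raw directly
    select * from {{ source('raw', 'orders') }}

),

renamed as (

    -- one place for renames and casts; downstream models use ref('stg_orders')
    select
        order_id,
        customer_id,
        cast(order_date as date)      as order_date,
        cast(amount as number(12, 2)) as order_amount
    from source

)

select * from renamed
```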
Core Technical Expertise
- Deep architectural understanding of Snowflake (see the warehouse sketch after this list), including:
  - Compute management (virtual warehouses, scaling policies, resource optimization)
  - Multi-cluster warehouse design and evaluation
  - Performance tuning and cost optimization strategies
- Proficient use of Snowflake stages, with clear design rationale (see the stage sketch after this list):
  - When to use internal vs. external stages
  - Data loading/unloading strategies
- Advanced Python for data engineering:
  - Writing production-ready, modular, and scalable code
  - Data transformation, orchestration, and API integrations
- Hands-on experience with CI/CD pipelines:
  - Git-based workflows
  - Deployment strategies for data pipelines
- Azure Data Factory and Databricks:
  - End-to-end pipeline development and orchestration
  - Integration with Snowflake and other systems
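For the compute-management bullet, a minimal warehouse definition of the kind a lead should be able to reason about. The name transform_wh and all sizing values are assumptions for illustration, not a prescribed configuration.

```sql
-- Hypothetical warehouse; sizing values are illustrative, not recommendations.
create warehouse if not exists transform_wh
  warehouse_size    = 'MEDIUM'
  min_cluster_count = 1
  max_cluster_count = 3           -- multi-cluster: add clusters under concurrency
  scaling_policy    = 'STANDARD'  -- favor starting clusters over queuing queries
  auto_suspend      = 60          -- suspend after 60 idle seconds (cost control)
  auto_resume       = true;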
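For the staging bullet, a sketch of the external-stage pattern; every object name here (landing_orders_stage, my_s3_int, the bucket URL, raw.orders) is hypothetical. An internal stage would drop the URL and storage integration and load files via PUT instead.

```sql
-- External stage: files stay in cloud storage the account controls; suits
-- large, shared landing zones fed by upstream tools.
create stage if not exists landing_orders_stage
  url = 's3://example-bucket/landing/orders/'
  storage_integration = my_s3_int          -- assumes a pre-created integration
  file_format = (type = parquet);

-- Bulk load into a raw table; PATTERN filters files, ON_ERROR sets tolerance.
copy into raw.orders
  from @landing_orders_stage
  pattern = '.*[.]parquet'
  on_error = 'abort_statement';
```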