AWS Cloud Data Engineering

Posted on March 4, 2025

Job Description

Key Responsibilities:
  • Design, develop, and maintain scalable and efficient data pipelines on AWS.
  • Implement data ingestion, transformation, and processing using AWS services such as Glue, Lambda, Kinesis, S3, and Step Functions.
  • Work with Redshift, Athena, RDS, DynamoDB, and Snowflake to optimize data storage and retrieval.
  • Develop and maintain ETL workflows using AWS Glue, Apache Spark, or Python.
  • Ensure data integrity, security, and compliance following best practices.
  • Implement and manage CI/CD pipelines for data engineering workloads.
  • Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements.
  • Optimize performance and cost of data pipelines and cloud storage solutions.
  • Monitor and troubleshoot data workflows using CloudWatch, AWS X-Ray, and logging frameworks.
Required Skills & Experience:
  • 5-6 years of experience in AWS Cloud Data Engineering.
  • Hands-on experience with AWS services like Glue, Redshift, S3, Lambda, Athena, Kinesis, and Step Functions.
  • Strong programming skills in Python, PySpark, or SQL.
  • Experience with data modeling, schema design, and query optimization.
  • Familiarity with big data technologies such as Apache Spark, Hadoop, or Kafka.
  • Knowledge of data security best practices (IAM roles, encryption, VPCs, etc.).
  • Experience in Terraform or CloudFormation for Infrastructure as Code (IaC).
  • Working knowledge of Git, Docker, and Kubernetes for deployment and version control.
  • Strong problem-solving skills with a focus on automation and scalability.
Preferred Qualifications:
  • AWS Certified Data Analytics - Specialty or Solutions Architect - Associate.
  • Contract: 6 months, extendable.
  • Location: Remote.
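
The ingestion-and-transformation duties listed above typically reduce to small, testable transform steps invoked by a Lambda function or Glue job. Below is a minimal sketch, assuming a hypothetical Lambda-style handler that cleans a batch of raw event records before they are written to S3; the field names (`user_id`, `ts`, `amount`) and the `transform_records` function are illustrative, not part of the posting.

```python
import json
from datetime import datetime, timezone

def transform_records(raw_records):
    """Clean raw event dicts: drop records missing a user_id,
    normalise epoch timestamps to ISO-8601 UTC, and cast amounts
    to float. Returns the cleaned list, ready to serialise to S3."""
    cleaned = []
    for rec in raw_records:
        if not rec.get("user_id"):
            continue  # skip records that cannot be attributed to a user
        ts = datetime.fromtimestamp(rec["ts"], tz=timezone.utc)
        cleaned.append({
            "user_id": rec["user_id"],
            "event_time": ts.isoformat(),
            "amount": float(rec.get("amount", 0)),
        })
    return cleaned

def handler(event, context=None):
    # Lambda-style entry point: `event` carries a batch of raw records,
    # e.g. decoded from a Kinesis trigger.
    body = transform_records(event.get("records", []))
    return {"statusCode": 200, "body": json.dumps(body)}
```

In a production pipeline, the handler would be wired to a Kinesis or S3 event trigger and write its output to S3 via boto3, with failures surfaced through CloudWatch logging.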

Required Skills

Python, PySpark, or SQL. AWS services such as Glue, Redshift, S3, Lambda, Athena, Kinesis, and Step Functions.