
Sr. Data Engineer - AWS

Job Title: Sr. Data Engineer (AWS)
Location: Ahmedabad, Gujarat
Job Type: Full Time
Department: Data Engineering
 

Job Summary

We are looking for a Sr. Data Engineer (AWS) who is passionate about managing large-scale data, eager to take on challenges, and committed to delivering exceptional results. This role requires expertise in AWS data services, ETL pipelines, and database management. If you are a proactive problem solver with a strong technical background and a team player with a positive attitude, we want to hear from you.

Key Responsibilities

  • Design, develop, monitor, and maintain end-to-end data pipelines on AWS.
  • Work with AWS services such as AWS Glue, AWS Kinesis, Redshift, and S3 for data ingestion, processing, and analytics.
  • Develop and optimize data ingestion, transformation, and analytical workflows for structured and unstructured data.
  • Design efficient data models to ensure optimal performance in data processing and storage.
  • Implement large-scale data ingestion pipelines capable of handling 100GB+ datasets.
  • Develop scalable and high-throughput distributed batch or real-time data solutions using Apache Kafka, Apache Flink, Spark, and Apache Airflow.
  • Build and maintain ETL/ELT pipelines for data integration and migration.
  • Work extensively with relational databases (PostgreSQL, MySQL, SQL Server) and NoSQL databases (MongoDB, Cassandra, Neptune).
  • Optimize SQL queries and apply performance tuning, indexing, partitioning, and denormalization strategies.
  • Collaborate with cross-functional teams, including data scientists and software engineers, to integrate data solutions into production environments.
  • Ensure data quality, integrity, and compliance with security best practices.
  • Participate in client interactions and stakeholder meetings to gather requirements and provide technical insights.

Required Skills & Qualifications

  • Bachelor’s/Master’s degree in Computer Science, Data Engineering, or a related field.
  • 3–7 years of hands-on experience in data engineering, ETL development, and data pipeline implementation.
  • Proficiency in Python and SQL for data processing and analysis.
  • Strong expertise in AWS data services (AWS Glue, Redshift, Kinesis, S3, etc.).
  • Experience with big data processing frameworks like Spark, Flink, and Kafka.
  • Working knowledge of data warehouse solutions such as Redshift, Snowflake, or BigQuery.
  • Experience with NoSQL databases (MongoDB, Cassandra, Neptune) and relational databases.
  • Strong analytical and problem-solving skills.
  • Ability to work independently, mentor peers, and meet tight deadlines.
  • Excellent interpersonal and communication skills.
  • Experience with cloud-based data architecture and security best practices.
  • AWS certification is a plus.

Preferred Qualifications (Nice to Have)

  • Experience with data lake architectures.
  • Knowledge of machine learning model deployment and data processing for AI applications.
  • Prior experience in automated data pipeline deployment using CI/CD workflows.

Benefits of Joining Us

  • Flat hierarchy with a friendly, engineering-oriented, and growth-focused culture.
  • Flexible work timings, leave for life events, and work-from-home options.
  • Free health insurance.
  • Office facility with a fully-equipped game zone, in-office kitchen with affordable lunch service, and free snacks.
  • Sponsorship for certifications/events and library service.
 
