AWS Data Engineer (R-1511)
1,500,000.00–1,900,000.00 per annum
IT (Information Technology)
Data Engineer, Python, SQL, PySpark, AWS services (Glue, Lambda), ETL/ELT, Data warehousing
Job Description
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python, SQL, and PySpark
- Work with AWS services such as AWS Glue, Lambda, Step Functions, and DMS for data processing and orchestration
- Build and optimize ETL/ELT workflows for structured and unstructured data
- Integrate data from multiple sources while ensuring data quality, consistency, and reliability
- Collaborate with data analysts, data scientists, and stakeholders to understand data requirements
- Monitor and troubleshoot data pipelines to ensure high performance and availability
- Implement best practices for data governance, security, and compliance
- Optimize query performance and data storage solutions
Required Skills:
- Strong programming skills in Python
- Expertise in SQL for data manipulation and querying
- Hands-on experience with PySpark and distributed data processing
- Strong working knowledge of AWS services: Glue, Lambda, Step Functions, DMS
- Experience with ETL/ELT processes and data pipeline architecture
- Familiarity with data warehousing concepts and big data technologies
- Understanding of cloud-based data solutions and architecture
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Relevant AWS certifications (preferred but not mandatory)