Data Engineer - AWS
Your Responsibilities:
• Independently analyse, refine, develop and deliver technical data pipelines.
• Analyse data sets and prepare data models / data flows.
• Follow data lineage to understand ETL logic.
• Design and develop ETL processes in AWS Glue with S3, the Glue Data Catalog and databases as sources.

Must Have:
• 8+ years of overall experience on BI/data platforms.
• 2+ years of working experience on the AWS platform using its data services.
• Expert knowledge of data extraction, aggregation and consolidation (ETL pipelines), including writing complex SQL.
• 4+ years of experience writing and performance-tuning SQL, with an understanding of queries, query plans and distribution/partition keys.
• Big data ecosystems: S3, Redshift, Glue and at least one ingestion service such as DMS, AppFlow or Data Transfer/DataSync.
• Exposure to Python, including basic data frames and embedding SQL code.
• Excellent analytical and problem-solving skills.
• Experience working in Agile/Scrum.

Good to Have:
• Exposure to data ecosystems (ETL tools, AWS data services, analytical databases, data modelling tools).
• Scripting languages: PySpark.
• Exposure to Alembic.
• Exposure to CI/CD pipelines (GitLab).

