
AWS Data Engineer

  • Posted on: Jan 29, 2026
  • Qode
  • Селиська, Colorado
  • Salary: Not Available
  • Full-time

AWS Data Engineer

Job Title: AWS Data Engineer

Job Type: Full-time

Job Location: Селиська, Colorado, United States

Remote: No

Job Description:

Job Brief
As an AWS Data Engineer, your role will be to design, develop, and maintain scalable data pipelines on AWS. You will work closely with technical analysts, client stakeholders, data scientists, and other team members to ensure data quality and integrity while optimizing data storage solutions for performance and cost-efficiency. This role requires leveraging AWS native technologies and Databricks for data transformations and scalable data processing.

Responsibilities
• Lead and support the delivery of data platform modernization projects.
• Design and develop robust and scalable data pipelines leveraging AWS native services.
• Optimize ETL processes, ensuring efficient data transformation.
• Migrate workflows from on-premises systems to the AWS cloud, ensuring data quality and consistency.
• Design automations and integrations to resolve data inconsistencies and quality issues.
• Perform system testing and validation to ensure successful integration and functionality.
• Implement security and compliance controls in the cloud environment.
• Ensure data quality pre- and post-migration through validation checks, addressing issues of completeness, consistency, and accuracy in data sets.
• Collaborate with data architects and lead developers to identify and document manual data movement workflows and design automation strategies.
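The pre- and post-migration validation responsibility above can be illustrated with a minimal sketch. This is not the employer's actual tooling; the function, dataset, and field names below are hypothetical, and in practice the same completeness, consistency, and accuracy checks would run against query results from the on-premises source and the AWS target (e.g. Athena or Redshift).

```python
# Illustrative pre/post-migration data-quality check (pure Python, hypothetical data).
def validate_migration(source_rows, target_rows, key="id"):
    """Compare source and target row sets on a key column and report
    completeness (missing rows), consistency (unexpected rows), and
    accuracy (field-level mismatches)."""
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}

    missing = sorted(set(src) - set(tgt))      # completeness: rows lost in migration
    unexpected = sorted(set(tgt) - set(src))   # consistency: rows with no source
    mismatched = sorted(                       # accuracy: same key, different values
        k for k in src.keys() & tgt.keys() if src[k] != tgt[k]
    )

    return {
        "source_count": len(src),
        "target_count": len(tgt),
        "missing_keys": missing,
        "unexpected_keys": unexpected,
        "mismatched_keys": mismatched,
    }


if __name__ == "__main__":
    source = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
    target = [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}]
    print(validate_migration(source, target))
```

A real pipeline would wire checks like these into the orchestration layer (e.g. as a post-load validation step) so that discrepancies fail the run rather than silently propagate.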
Skills and Requirements
• 10+ years’ experience with a core data engineering skillset leveraging AWS native technologies (AWS Glue, Python, Snowflake, S3, Redshift).
• Experience designing and developing robust, scalable data pipelines leveraging AWS native services.
• Proficiency in leveraging Snowflake for data transformations, ETL pipeline optimization, and scalable data processing.
• Experience with streaming and batch data pipeline/engineering architectures.
• Familiarity with DataOps concepts and tooling for source control and setting up CI/CD pipelines on AWS.
• Hands-on experience with Databricks and a willingness to grow capabilities.
• Experience with data engineering and storage solutions (AWS Glue, EMR, Lambda, Redshift, S3).
• Strong problem-solving and analytical skills.
• Knowledge of Dataiku is required.
• Graduate/Post-Graduate degree in Computer Science or a related field.
• AWS S3 (data storage, export, recall)
• Athena (querying data lakes)
• Data pipelines (batch and near-real-time)
• Integration with external systems (FHIR)
• Secure data handling (KMS, Macie)
• Cloud-native analytics
• Multi-account, multi-region data architecture
• BI integrations: Power BI, Tableau, QuickSight

Position Details

Posted: Jan 29, 2026

Employment: Full-time

Salary: Not Available

City: Селиська

Job Origin: WORKABLE_ORGANIC_FEED

