

AWS Data Engineer

In United States


JOB TITLE:

AWS Data Engineer

JOB TYPE:

JOB SKILLS:

JOB LOCATION:

Dallas, United States

JOB DESCRIPTION:

Location: Onsite 2+ days per week out of the McLean, Atlanta, Dallas, or NY offices.

Must haves:
  • Experience building data pipelines using Spark, Scala, and Python.
  • Experience with S3, EC2, Docker, EKS, and EMR.

Job responsibilities:
  • Design and develop reusable data pipelines to support batch and stream processing of large volumes of structured, semi-structured, and unstructured data.
  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements, including manual-process automation, optimized data delivery, and infrastructure redesign with a focus on scalability.
  • Develop data integration/engineering workflows using Scala and/or Python, leveraging AWS, Snowflake, and other cloud data stores.
  • Apply best practices, design, and architecture that align with Enterprise Data and Application architecture standards.
  • Leverage microservices for aggregating data streams/pipelines where needed.
  • Maintain and enhance data engineering architecture, design principles, and CI/CD deployment procedures.
  • Design, develop, and unit test new or existing data integration solutions to meet business requirements.

Qualifications:
  • Bachelor's degree in computer science or IT; advanced degree preferred.
  • 8+ years of overall experience.
  • 3+ years of relevant large-scale enterprise transformation experience as it relates to data engineering, preferred.
  • 3+ years of experience developing RESTful APIs, microservices, containers, and frameworks.
  • 3+ years with database technologies: Oracle, DB2, MongoDB, NoSQL databases, PostgreSQL, Snowflake.
  • Experience with cloud technologies such as AWS S3, EC2, Docker, OpenShift, Kubernetes/Amazon EKS, EMR, and Hadoop.
  • Experience with Attunity, Kafka, AMQ, or similar streaming tools is a plus.
  • Strong programming skills in one or more modern languages such as Python, Java, or Spark.
  • Prior experience in the primary or secondary mortgage industry is desirable.
  • Deep knowledge of various enterprise-level platforms widely used in the industry is desirable.
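To illustrate the "reusable data pipelines to support batch and stream processing" responsibility above, here is a minimal, dependency-free Python sketch. It is illustrative only: the role itself calls for Spark/Scala/Python on AWS (S3, EMR, EKS), and the stage names and record format below are hypothetical. The point is that generator-based stages consume any iterable, so the same stages run unchanged over a finite batch or an unbounded stream.

```python
def parse(records):
    """Stage 1: turn raw CSV-like lines into structured dicts (hypothetical format)."""
    for line in records:
        city, amount = line.split(",")
        yield {"city": city.strip(), "amount": float(amount)}

def drop_invalid(records):
    """Stage 2: filter out records with negative amounts."""
    for rec in records:
        if rec["amount"] >= 0:
            yield rec

def build_pipeline(source, *stages):
    """Compose stages over any iterable source -- an in-memory batch,
    a file object, or a live stream all work the same way."""
    it = source
    for stage in stages:
        it = stage(it)
    return it

# Batch usage: the composed stages run over an in-memory list.
batch = ["Dallas, 10.5", "Atlanta, -1", "McLean, 3.0"]
result = list(build_pipeline(batch, parse, drop_invalid))
# result -> [{'city': 'Dallas', 'amount': 10.5}, {'city': 'McLean', 'amount': 3.0}]
```

Because the stages are lazy, pointing `build_pipeline` at an unbounded source (e.g. lines arriving from a socket) gives streaming behavior without changing the stage code; frameworks like Spark apply the same unification at cluster scale.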

Position Details

POSTED:

Jan 17, 2023

EMPLOYMENT:

INDUSTRY:

SNAPRECRUIT ID:

S166250893519112381

LOCATION:

United States

CITY:

Dallas

JOB ORIGIN:

OORWIN_ORGANIC_FEED



