Data Engineer Modelling Design Python Spark

  • Company: ZnA Inc
  • Location: Alpharetta, Georgia
  • Job Type: CTC
  • Salary: 90 per hour
  • Posted on: Sep 04, 2024


JOB TITLE: Data Engineer Modelling Design Python Spark
JOB TYPE: CTC
JOB LOCATION: Alpharetta, Georgia, United States
REMOTE: No

JOB DESCRIPTION:

Job Id: 3261 L3

Duration: 12 Months

Location: Alpharetta GA

Title: Data Engineer, Modelling, design, integration, Python, Spark, DB2, NoSQL, Jenkins 12 mths Alpharetta GA

Description:

Hybrid 3 days a week onsite

Potential to convert

The Data System Engineer will be responsible for tasks such as data engineering, data modeling, ETL processes, data warehousing, and data analytics & science. Our platform runs both on premises and in the cloud (AWS/Azure).
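
The posting itself contains no code, but as a rough illustration of the batch ETL work described above, here is a minimal PySpark sketch. The storage paths, column names, and aggregation are hypothetical and not taken from the role:

  # Illustrative only: a minimal PySpark batch ETL job of the kind this role
  # describes. Paths and columns are hypothetical.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("example-batch-etl").getOrCreate()

  # Extract: read raw, semi-structured order events (hypothetical location;
  # any filesystem Spark supports would work the same way).
  orders = spark.read.json("s3a://example-bucket/raw/orders/")

  # Transform: filter, derive a partition column, and aggregate per customer per day.
  daily_totals = (
      orders
      .filter(F.col("status") == "COMPLETED")
      .withColumn("order_date", F.to_date("order_ts"))
      .groupBy("order_date", "customer_id")
      .agg(F.sum("amount").alias("daily_amount"))
  )

  # Load: write the curated result as partitioned Parquet.
  (daily_totals.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://example-bucket/curated/daily_totals/"))

  spark.stop()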

Knowledge/Skills:

  • Able to establish, modify or maintain data structures and associated components according to design
  • Understands and documents business data requirements
  • Able to produce conceptual and logical data models at the enterprise and business unit/domain level
  • Understands XML/JSON and schema development/reuse, database concepts, database designs, Open Source and NoSQL concepts
  • Partners with Sr. Data Engineers and Sr. Data architects to create platform level data models and database designs
  • Takes part in reviews of own work and reviews of colleagues' work
  • Has working knowledge of the core tools used in the planning, analyzing, designing, building, testing, configuring and maintaining of assigned application(s)
  • Able to participate in assigned team's software delivery methodology (Agile, Scrum, Test-Driven Development, Waterfall, etc.) in support of data engineering pipeline development
  • Understands infrastructure technologies and components like servers, databases, and networking concepts
  • Writes code to develop, maintain, and optimize batch and event-driven pipelines for storing, managing, and analyzing large volumes of both structured and unstructured data
  • Integrates metadata into data pipelines (see the sketch after this list)
  • Automate build and deployment processes using Jenkins across all environments to enable faster, high-quality releases
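
Purely as an illustration of the metadata-integration bullet above, the following is a small, hypothetical Python sketch that wraps a pipeline step and emits a run-level metadata record; the step and field names are invented for the example:

  # Illustrative only: attach run-level metadata to a pipeline step.
  import json
  import uuid
  from datetime import datetime, timezone

  def run_step(step_name, func, **kwargs):
      """Run a pipeline step and return a metadata record describing the run."""
      started = datetime.now(timezone.utc)
      result = func(**kwargs)
      finished = datetime.now(timezone.utc)
      return {
          "run_id": str(uuid.uuid4()),
          "step": step_name,
          "started_at": started.isoformat(),
          "finished_at": finished.isoformat(),
          "row_count": len(result) if hasattr(result, "__len__") else None,
      }

  def load_orders():
      # Placeholder extract step; a real job would read from DB2, S3, etc.
      return [{"order_id": 1, "amount": 42.0}]

  if __name__ == "__main__":
      record = run_step("load_orders", load_orders)
      print(json.dumps(record, indent=2))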

Qualification:

Up to 4 years of software development experience in a professional environment and/or comparable experience such as:

  • Understanding of Agile or other rapid application development methods
  • Exposure to design and development across one or more database management systems (DB2, Sybase IQ, Snowflake) as appropriate
  • Exposure to methods relating to application and database design, development, and automated testing
  • Understanding of big data technology and NoSQL design and development with a variety of data stores (document, column family, graph, etc.); a brief document-model sketch follows this list
  • General knowledge of distributed (multi-tiered) systems, algorithms, and relational & non-relational databases
  • Experience with Linux and Python scripting as well as large scale data processing technology such as Spark
  • Exposure to big data technology and NoSQL design and coding with a variety of data stores (document, column family, graph, etc.)
  • Experience with cloud technologies such as AWS and Azure, including deployment, management, and optimization of data analytics & science pipelines
  • Nice to have: Collibra, Terraform, Java, Golang, Ruby, Machine Learning Operations (MLOps) deployment
  • Bachelor's degree in computer science, computer science engineering, or related field required
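
As a rough illustration of the NoSQL items above, this sketch contrasts a document-store shape with a normalized relational shape for the same hypothetical order data; all field names are invented for the example:

  # Illustrative only: document-store shape vs. normalized relational shape.
  import json

  # Document shape: the order and its line items travel together,
  # so a single read returns everything needed to render the order.
  order_document = {
      "_id": "order-1001",
      "customer": {"id": "cust-42", "name": "Acme Corp"},
      "items": [
          {"sku": "SKU-1", "qty": 2, "price": 19.99},
          {"sku": "SKU-2", "qty": 1, "price": 5.00},
      ],
  }

  # Relational shape: the same data split across normalized tables,
  # joined at query time on order_id / customer_id.
  orders_row = {"order_id": 1001, "customer_id": 42}
  order_items_rows = [
      {"order_id": 1001, "sku": "SKU-1", "qty": 2, "price": 19.99},
      {"order_id": 1001, "sku": "SKU-2", "qty": 1, "price": 5.00},
  ]

  print(json.dumps(order_document, indent=2))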

Position Details

POSTED: Sep 04, 2024
EMPLOYMENT: CTC
SALARY: 90 per hour
SNAPRECRUIT ID: SD-da7e8bc2a9580df138311f5315a247364a481ebb4b9c85f0e77350e3fbeeee26
CITY: Alpharetta
JOB ORIGIN: CIEPAL_ORGANIC_FEED
