Find AWS Python Developer Job in Columbus, Ohio | Snaprecruit


AWS Python Developer

  • Posted on: Sep 03, 2024
  • Conch Technologies Inc
  • Columbus, Ohio
  • Salary: Not Available
  • Full-time

AWS Python Developer

Job Title: AWS Python Developer
Job Type: Full-time
Job Location: Columbus, Ohio, United States
Remote: No

Job Description:


Responsibilities:
  • Handle migration to PySpark on AWS.
  • Design and implement data pipelines.
  • Work with AWS and Big Data technologies.
  • Produce unit tests for Spark transformations and helper methods.
  • Create Scala/Spark jobs for data transformation and aggregation.
  • Write Scaladoc-style documentation for code.
  • Optimize Spark queries for performance.
  • Integrate with SQL databases (e.g., Microsoft SQL Server, Oracle, PostgreSQL, MySQL).
  • Understand distributed systems concepts (CAP theorem, partitioning, replication, consistency, and consensus).
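To illustrate the "unit tests for Spark transformations and helper methods" responsibility: a common pattern is to keep transformation logic in pure-Python helpers that can be tested without a Spark cluster, then wrap them as UDFs in the pipeline. The function below is a hypothetical example, not from the posting.

```python
# Hypothetical helper for a PySpark pipeline. Pure-Python functions like
# this can be unit-tested locally, with no Spark cluster required.

def normalize_amount(raw):
    """Parse a currency string like '$1,234.50' into a float, or None."""
    if raw is None:
        return None
    cleaned = raw.strip().lstrip("$").replace(",", "")
    try:
        return float(cleaned)
    except ValueError:
        return None

# In the actual pipeline this would be wrapped as a UDF, e.g.:
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import DoubleType
#   df = df.withColumn("amount", udf(normalize_amount, DoubleType())("raw_amount"))

# Unit tests for the helper (would live in a pytest suite in a real project).
assert normalize_amount("$1,234.50") == 1234.5
assert normalize_amount("  42 ") == 42.0
assert normalize_amount("n/a") is None
assert normalize_amount(None) is None
```

Keeping the parsing logic out of the Spark job itself makes the test fast and deterministic; only the thin UDF wiring needs an integration test.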

Skills:
  • Proficiency in Python, Scala (with a focus on functional programming), and Spark.
  • Familiarity with Spark APIs, including RDD, DataFrame, MLlib, GraphX, and Streaming.
  • Experience working with HDFS, S3, Cassandra, and/or DynamoDB.
  • Deep understanding of distributed systems.
  • Experience building or maintaining cloud-native applications.
  • Familiarity with serverless approaches using AWS Lambda is a plus.
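On the "serverless approaches using AWS Lambda" point: a Python Lambda is just a handler function that receives an event and a context object. The sketch below is a minimal, hypothetical handler (the event shape and field names are assumptions for illustration), invoked locally the way Lambda would call it.

```python
import json

# Minimal sketch of an AWS Lambda handler in Python. The "records" event
# field is a hypothetical shape chosen for this example; in production the
# event structure depends on the trigger (S3, API Gateway, SQS, etc.).

def lambda_handler(event, context):
    """Return the record count for a batch of pipeline records."""
    records = event.get("records", [])
    return {
        "statusCode": 200,
        "body": json.dumps({"count": len(records)}),
    }

# Local invocation; Lambda supplies event and context at runtime.
result = lambda_handler({"records": [1, 2, 3]}, None)
```

Because the handler is an ordinary function, it can be exercised in unit tests exactly like the Spark helpers above, with the AWS runtime mocked or omitted.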



Position Details

Posted: Sep 03, 2024
Employment: Full-time
Salary: Not Available
Snaprecruit ID: SD-CIE-c70da1c528dfb6ee639b59b09b48eea773c92a97cfea25b9a25c3eddfbada042
City: Columbus
Job Origin: CIEPAL_ORGANIC_FEED

