Big Data Developer

In Georgia / United States

JOB TITLE:
Big Data Developer
JOB TYPE:

JOB SKILLS:
JOB LOCATION:
Atlanta, Georgia / United States

JOB DESCRIPTION:

Technical/Functional Skills

Experience with Spark, working with RDDs and the DataFrames/Datasets API (with emphasis on DataFrames) to query and manipulate data (see the sketch after this list)
Experience building large-scale Spark applications, ideally with batch and/or streaming processing
Experience with Spark SQL (broadcast joins)
Scala is ideal, but solid knowledge of Java is also acceptable
Knowledge of cloud computing platforms; we use AWS S3, Lambda, and Redshift
Experience with ANSI SQL relational databases (Redshift, Oracle, MySQL)
Common working knowledge of Linux, including navigating the file system and simple bash scripting
General knowledge of distributed systems and distributed data processing frameworks
Experience with Python and Airflow is a plus
Knowledge of agile software processes
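
For context only, a minimal sketch of the kind of DataFrame manipulation and broadcast join named in the skills above, using Spark's Scala API. The bucket, table, and column names are hypothetical and not taken from this posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{broadcast, col, sum}

object OrdersReport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-report")
      .getOrCreate()

    // Hypothetical S3 inputs: a large fact table and a small dimension table.
    val orders    = spark.read.parquet("s3a://example-bucket/orders/")
    val customers = spark.read.parquet("s3a://example-bucket/customers/")

    // Broadcast the small dimension table so the join avoids shuffling the
    // large fact table (the "broadcast join" called out in the skills list).
    val revenueByRegion = orders
      .join(broadcast(customers), Seq("customer_id"))
      .filter(col("order_status") === "COMPLETED")
      .groupBy(col("customer_region"))
      .agg(sum(col("order_total")).as("total_revenue"))

    revenueByRegion.write.mode("overwrite").parquet("s3a://example-bucket/reports/revenue/")

    spark.stop()
  }
}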

Experience Required

5 to 7 years

Roles & Responsibilities

  1. Extract data from various sources and form DataFrames
  2. Should be able to create Spark applications using either batch programming or stream processing (a minimal streaming sketch follows this list)
  3. Should be able to use Spark SQL
  4. Should be familiar with SQL queries, stored procedures, and prepared statements
  5. Should be familiar with the Linux OS and the AWS cloud (hands-on)
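
As a point of reference for item 2, a minimal Spark Structured Streaming sketch in Scala, assuming a Kafka source; the topic name, event schema, and output paths are hypothetical and not specified in the posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json, window}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType, TimestampType}

object ClickStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clickstream-aggregation")
      .getOrCreate()

    // Hypothetical event schema and Kafka topic, for illustration only.
    val schema = new StructType()
      .add("user_id", StringType)
      .add("event_time", TimestampType)
      .add("amount", DoubleType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Ten-minute tumbling-window counts per user, written out as Parquet micro-batches.
    val query = events
      .withWatermark("event_time", "15 minutes")
      .groupBy(window(col("event_time"), "10 minutes"), col("user_id"))
      .count()
      .writeStream
      .outputMode("append")
      .format("parquet")
      .option("path", "s3a://example-bucket/aggregates/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/aggregates/")
      .start()

    query.awaitTermination()
  }
}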

Position Details

Apr 12, 2021
S16174661458803534
Atlanta, Georgia / United States