Hadoop Developer

Contract in Virginia / United States

JOB TITLE:

Hadoop Developer

JOB TYPE:

Contract

JOB SKILLS:

Big Data, Teradata, ETL/ELT, PySpark, Spark SQL, Hadoop, SQL

JOB LOCATION:

Ashburn, Virginia / United States

JOB DESCRIPTION:

Description: Migrate RDBMS (Teradata) SQL into PySpark workloads to enable Big Data solutions for VBG (Verizon Business Group) analytics groups, and decommission the on-premises Hadoop Data Lake in favor of a private cloud.

JOB DUTIES:

1. Understand business requirements and help assess them with the development teams.
2. Create high-quality documentation supporting the design and coding tasks.
3. Participate in architecture and design discussions and develop the ETL/ELT using PySpark and Spark SQL.
4. Conduct code and design reviews and provide review feedback.
5. Identify areas of improvement in the framework and processes and strive to make them better.

Qualifications:

1. At least 3 years of working experience in a Big Data environment.
2. Knowledge of design and development best practices in data warehouse environments.
3. Experience developing large-scale distributed computing systems.
4. Knowledge of the Hadoop ecosystem and its components: HBase, Pig, Hive, Sqoop, Flume, Oozie, etc.
5. Experience with PySpark and Spark SQL.
6. Experience with integration of data from multiple data sources.
7. Implement ETL processes in Hadoop (develop big data ETL jobs that ingest, integrate, and export data), converting Teradata SQL to PySpark SQL (a short conversion sketch follows these lists).
8. Experience with Presto, Kafka, NiFi.

Desired Skills:

1. Airflow
2. Understanding of object-oriented programming
3. DevOps implementation knowledge
4. Git commands
5. Python modules such as Sphinx, pandas, SQLAlchemy, mccabe, unittest, etc.
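
As a rough idea of the Teradata-to-PySpark conversion work described above, here is a minimal sketch. The table, columns, paths, and the original Teradata query are hypothetical placeholders invented for illustration; a real migration would follow the team's own framework, naming conventions, and data-lake layout.

    # Hypothetical example: converting a Teradata aggregation query into a PySpark job.
    # Illustrative Teradata SQL being migrated:
    #   SELECT cust_id, SUM(order_amt) AS total_amt
    #   FROM sales.orders
    #   WHERE order_dt >= DATE '2023-01-01'
    #   GROUP BY cust_id;

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("teradata_orders_migration").getOrCreate()

    # Read the extracted source data from the data lake (path is a placeholder).
    orders = spark.read.parquet("/data/lake/sales/orders")

    # Option 1: express the same logic with Spark SQL against a temp view.
    orders.createOrReplaceTempView("orders")
    total_by_customer = spark.sql("""
        SELECT cust_id, SUM(order_amt) AS total_amt
        FROM orders
        WHERE order_dt >= DATE '2023-01-01'
        GROUP BY cust_id
    """)

    # Option 2: the equivalent DataFrame API version.
    total_by_customer_df = (
        orders
        .filter(F.col("order_dt") >= "2023-01-01")
        .groupBy("cust_id")
        .agg(F.sum("order_amt").alias("total_amt"))
    )

    # Write the result back to a curated zone of the lake (path is a placeholder).
    total_by_customer.write.mode("overwrite").parquet("/data/lake/curated/total_by_customer")

    spark.stop()

Either form works; Spark SQL keeps the migrated code close to the original Teradata statement, while the DataFrame API version tends to be easier to compose and unit-test inside a larger PySpark framework.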

Position Details

POSTED:

Mar 25, 2023

EMPLOYMENT:

Contract

INDUSTRY:

Information Technology (IT)

SNAPRECRUIT ID:

S871551175482157762

LOCATION:

Virginia / United States

CITY:

Ashburn

JOB ORIGIN:

Snaprecruit Job

