
Senior Data Engineer PySpark and Databricks

  • Posted on: Dec 08, 2025
  • Stellent IT LLC
  • New York City, New York
  • Salary: Not Available
  • Full-time

Senior Data Engineer PySpark and Databricks

Job Title: Senior Data Engineer PySpark and Databricks

Job Type: Full-time

Job Location: New York City, New York, United States

Remote: No

Job Description:

Senior Data Engineer PySpark and Databricks (Hybrid)
Location: New York City, NY
Visa: U.S. citizens only
Mode of Interview: Virtual
Job Description:

We are seeking a highly experienced Senior Data Engineer with strong hands-on expertise in PySpark and Databricks to support large-scale data engineering initiatives for the Federal Reserve Board (FRB). The ideal candidate will have a deep background in building, optimizing, and modernizing enterprise data pipelines and distributed processing systems in a cloud environment.
This role requires 12+ years of experience in data engineering, strong technical leadership, and the ability to collaborate with cross-functional teams. Candidates must be able to work onsite 3 days each week in NYC.

Responsibilities:

Data Engineering & Development

  • Design, build, and maintain scalable, high-performance data pipelines using PySpark and Databricks.
  • Develop and optimize ETL/ELT processes for structured and unstructured datasets.
  • Build and enhance data ingestion frameworks, streaming pipelines, and batch workflows.

Databricks & Spark Optimization

  • Utilize Databricks notebooks, Delta Lake, and Spark SQL for data transformations.
  • Optimize PySpark jobs for performance, cost-efficiency, and scalability.
  • Troubleshoot Spark performance issues and implement best practices.

Data Architecture & Modeling

  • Work with architects to design data lake/lakehouse solutions.
  • Implement data modeling standards, schema management, and data quality frameworks.
  • Maintain and improve data governance, metadata, and lineage processes.

Collaboration & Delivery

  • Partner with data scientists, analysts, and business teams to support analytical requirements.
  • Translate business needs into technical solutions and deliver production-ready datasets.
  • Participate in Agile ceremonies, sprint planning, and code reviews.
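For candidates unfamiliar with the ETL/ELT work described above, it follows the classic extract, transform, load pattern. The sketch below is purely illustrative and not part of the posting; it uses only the Python standard library (a real pipeline of the kind this role covers would use PySpark DataFrames and Delta Lake writes instead), and all names in it are invented for the example:

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dicts (stand-in for a source read)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop rows that fail a simple quality check."""
    out = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality gate: skip malformed records
        out.append({"id": row["id"], "amount": round(amount, 2)})
    return out

def load(rows: list[dict]) -> str:
    """Load: serialize to newline-delimited JSON (stand-in for a warehouse write)."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in rows)

raw = "id,amount\n1,10.5\n2,bad\n3,7.25\n"
print(load(transform(extract(raw))))  # malformed row "2,bad" is filtered out
```

In a PySpark setting, each stage maps onto a DataFrame operation (read, filter/cast, write), with the data-quality gate typically expressed as a schema check or filter rather than a try/except.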


Required Skills & Qualifications

Mandatory Requirements

  • 12+ years of professional experience in Data Engineering (no exceptions).
  • Strong hands-on expertise in PySpark (advanced level).
  • Deep proficiency with Databricks (development + optimization).
  • Strong knowledge of Spark SQL, Delta Lake, and distributed data processing.
  • Solid experience in ETL/ELT design, large-scale data pipelines, and performance tuning.
  • Experience working in cloud environments (AWS, Azure, or GCP).
  • Excellent communication and documentation skills.

LinkedIn ID required for client submission.

Preferred Skills

  • Prior experience in banking, finance, or federal organizations.
  • Experience with CI/CD tools (Git, Jenkins, Azure DevOps).
  • Knowledge of data governance, security, and compliance frameworks.

Additional Information

  • Work Mode: Hybrid; 3 days onsite in NYC (mandatory).
  • Only local or nearby candidates will be considered due to onsite requirements.
  • Excellent opportunity to work with a major federal client on high-impact data engineering initiatives.

Position Details

Posted: Dec 08, 2025

Employment: Full-time

Salary: Not Available

Snaprecruit ID: SD-CIE-2040e8602c1f3aa32db4d335c12727aad4a101311401587a1728be9c97d05734

City: New York City

Job Origin: CIEPAL_ORGANIC_FEED
