Senior Data Engineer (PySpark and Databricks)
We are seeking a highly experienced Senior Data Engineer with strong hands-on expertise in PySpark and Databricks to support large-scale data engineering initiatives for the Federal Reserve Board (FRB). The ideal candidate will have a deep background in building, optimizing, and modernizing enterprise data pipelines and distributed processing systems in a cloud environment.
This role requires 12+ years of experience in data engineering, strong technical leadership, and the ability to collaborate with cross-functional teams. Candidates must be able to work onsite 3 days each week in NYC.
Responsibilities:
Data Engineering & Development
- Design, build, and maintain scalable, high-performance data pipelines using PySpark and Databricks.
- Develop and optimize ETL/ELT processes for structured and unstructured datasets.
- Build and enhance data ingestion frameworks, streaming pipelines, and batch workflows.
Databricks & Spark Optimization
- Utilize Databricks notebooks, Delta Lake, and Spark SQL for data transformations.
- Optimize PySpark jobs for performance, cost-efficiency, and scalability.
- Troubleshoot Spark performance issues and implement best practices.
Data Architecture & Modeling
- Work with architects to design data lake/lakehouse solutions.
- Implement data modeling standards, schema management, and data quality frameworks.
- Maintain and improve data governance, metadata, and lineage processes.
Collaboration & Delivery
- Partner with data scientists, analysts, and business teams to support analytical requirements.
- Translate business needs into technical solutions and deliver production-ready datasets.
- Participate in Agile ceremonies, sprint planning, and code reviews.
Required Skills & Qualifications
Mandatory Requirements
- 12+ years of professional experience in Data Engineering (no exceptions).
- Strong hands-on expertise in PySpark (advanced level).
- Deep proficiency with Databricks (development + optimization).
- Strong knowledge of Spark SQL, Delta Lake, and distributed data processing.
- Solid experience in ETL/ELT design, large-scale data pipelines, and performance tuning.
- Experience working in cloud environments (AWS, Azure, or GCP).
- Excellent communication and documentation skills.
- LinkedIn ID required for client submission.
Preferred Skills
- Prior experience in banking, finance, or federal organizations.
- Experience with CI/CD tools (Git, Jenkins, Azure DevOps).
- Knowledge of data governance, security, and compliance frameworks.
Additional Information
Work Mode: Hybrid, 3 days onsite per week in NYC (mandatory).
Only local or nearby candidates will be considered due to onsite requirements.
Excellent opportunity to work with a major federal client on high-impact data engineering initiatives.