
Databricks Architect

  • Posted on: Feb 27, 2026
  • FourCi
  • Grinnell, Iowa
  • Salary: Not Available
  • Full-time

Job Title: Databricks Architect

Job Type: Full-time

Job Location: Grinnell, Iowa, United States

Remote: No

Job Description:

Job Title: Databricks Architect

Location: Grimes, IA / Remote

Duration: Long-term

Job ID: 4CI 7030

Required Qualifications:

  • Design and develop scalable ETL pipelines using Databricks (PySpark, Delta Lake, Workflows).
  • Migrate data from legacy databases, VSAM files, and multiple source systems to PostgreSQL.
  • Architect and configure Databricks environments including landing and staging zones.
  • Optimize load performance, load balancing, and manage large-volume data cutovers.
  • Implement data validation, transformation rules, and data masking.
  • Configure job automation, monitoring, and performance tuning.
  • Support defect resolution and ensure high data accuracy and reliability.
  • Collaborate with cross-functional teams on data mapping, technical design, and testing.
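To give a concrete sense of the pipeline work listed above, the snippet below is a minimal, illustrative sketch of a landing-to-staging step on Databricks using PySpark and Delta Lake. The paths, table columns, validation checks, and masking rule are all hypothetical, not details taken from this posting.

```python
# Illustrative sketch: a minimal landing -> staging ETL step on Databricks,
# assuming a Spark session with Delta Lake support and hypothetical paths/columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

LANDING_PATH = "/mnt/datalake/landing/customers"   # hypothetical landing zone
STAGING_PATH = "/mnt/datalake/staging/customers"   # hypothetical staging zone

# Read raw records from the landing zone (Delta format).
raw = spark.read.format("delta").load(LANDING_PATH)

# Example validation and transformation rules: drop rows missing a key,
# normalize names, and mask a sensitive column before staging.
staged = (
    raw.filter(F.col("customer_id").isNotNull())
       .withColumn("full_name", F.initcap(F.trim(F.col("full_name"))))
       .withColumn("ssn_masked", F.regexp_replace(F.col("ssn"), r"\d(?=\d{4})", "*"))
       .drop("ssn")
)

# Write the curated data to the staging zone as a Delta table.
staged.write.format("delta").mode("overwrite").save(STAGING_PATH)
```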

Position Details

Posted: Feb 27, 2026

Reference Number: 7131-19796

Employment: Full-time

Salary: Not Available

City: Grinnell

Job Origin: CIEPAL_ORGANIC_FEED
