
Databricks Engineer

  • Posted on: Dec 01, 2025
  • Company: Argyllinfotech
  • Location: University of Maryland, Maryland
  • Salary: Not Available
  • Employment Type: CTC

Job Title: Databricks Engineer

Job Type: CTC

Job Location: University of Maryland, Maryland, United States

Remote: No

Job Description:

Job Role: Databricks Engineer
Location: Maryland
Client: University of Maryland Global Campus
We are seeking a Databricks Engineer to design, build, and operate a Data & AI platform with a strong
foundation in the Medallion Architecture (raw/bronze, curated/silver, and mart/gold layers). This
platform will orchestrate complex data workflows and scalable ELT pipelines to integrate data from
enterprise systems such as PeopleSoft, D2L, and Salesforce, delivering high-quality, governed data
for machine learning, AI/BI, and analytics at scale.
You will play a critical role in engineering the infrastructure and workflows that enable seamless data
flow across the enterprise, ensure operational excellence, and provide the backbone for strategic
decision-making, predictive modeling, and innovation.
Responsibilities:

1. Data & AI Platform Engineering (Databricks-Centric):
  • Design, implement, and optimize end-to-end data pipelines on Databricks, following Medallion Architecture principles.
  • Build robust, scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers.
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation.
  • Apply schema evolution and data versioning to support agile data development.
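The bronze-to-silver-to-gold flow above can be illustrated with a minimal pure-Python sketch. On Databricks this would be PySpark and Delta Lake; the record fields and cleaning rules here are hypothetical, chosen only to show what each layer is responsible for.

```python
def bronze_to_silver(raw_records):
    """Clean raw (bronze) records into a trusted curated (silver) set:
    drop malformed rows, normalize types, deduplicate by id."""
    seen = set()
    silver = []
    for rec in raw_records:
        if "id" not in rec or rec.get("amount") is None:
            continue  # drop malformed rows
        if rec["id"] in seen:
            continue  # deduplicate on the business key
        seen.add(rec["id"])
        silver.append({"id": rec["id"], "amount": float(rec["amount"])})
    return silver

def silver_to_gold(silver_records):
    """Aggregate curated (silver) data into an analytics-ready (gold) mart."""
    total = sum(r["amount"] for r in silver_records)
    return {"row_count": len(silver_records), "total_amount": total}

bronze = [
    {"id": 1, "amount": "10.5"},
    {"id": 1, "amount": "10.5"},   # duplicate row
    {"id": 2, "amount": None},     # malformed row
    {"id": 3, "amount": "4.5"},
]
gold = silver_to_gold(bronze_to_silver(bronze))
```

The key design point the Medallion Architecture enforces is that raw data is never mutated in place: each layer is derived from the one below it, so the bronze layer remains a replayable source of truth.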
2. Platform Integration & Data Ingestion:
  • Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks.
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data.
  • Design standardized data ingestion processes with automated error handling, retries, and alerting.
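The "automated error handling, retries, and alerting" pattern can be sketched as a small retry wrapper. This is a hedged illustration: `fetch` and `alert` are hypothetical stand-ins for a real source connector (e.g., a Salesforce API client) and a real alert channel.

```python
import time

def ingest_with_retries(fetch, max_attempts=3, base_delay=0.01, alert=print):
    """Run a fetch callable with automated retries, exponential backoff,
    and an alerting hook on final failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception as exc:
            if attempt == max_attempts:
                alert(f"ingestion failed after {attempt} attempts: {exc}")
                raise
            # back off exponentially before the next attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated flaky source that succeeds on the third call.
calls = {"n": 0}
def flaky_source():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient API error")
    return [{"id": 1}]

rows = ingest_with_retries(flaky_source)
```

Standardizing this wrapper across connectors is what makes ingestion failures uniform to monitor, rather than each pipeline inventing its own error handling.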
3. Data Quality, Monitoring, and Governance:
  • Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers.
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures.
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement.
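A minimal sketch of rule-based data quality checks, assuming hypothetical field names; in a Databricks pipeline this role is typically filled by declarative expectations evaluated per batch, but the underlying shape is the same: named predicates applied to each record, with failures collected for reporting.

```python
def run_quality_checks(records, rules):
    """Apply named validation rules to each record; return (index, rule) failures."""
    failures = []
    for i, rec in enumerate(records):
        for name, rule in rules.items():
            if not rule(rec):
                failures.append((i, name))
    return failures

rules = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: (r.get("amount") or 0) >= 0,
}
records = [
    {"id": 1, "amount": 5.0},
    {"id": None, "amount": 2.0},
    {"id": 3, "amount": -1.0},
]
failures = run_quality_checks(records, rules)
```

Emitting failures as structured data (rather than just failing the job) is what lets them feed a monitoring dashboard and drive quarantine-versus-fail decisions per layer.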
4. Security, Privacy, and Compliance:
  • Enforce data security best practices, including row-level security, encryption at rest and in transit, and fine-grained access control via Unity Catalog.
  • Design and implement data masking, tokenization, and anonymization for compliance with privacy regulations (e.g., GDPR, FERPA).
  • Work with security teams to audit and certify compliance controls.
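Masking and tokenization can be sketched with the standard library. This is an illustrative assumption, not the platform's actual mechanism (in practice the key would come from a secret store and the functions would be applied as column transforms or masking policies):

```python
import hashlib
import hmac

SECRET = b"hypothetical-key"  # assumption: sourced from a managed secret store

def tokenize(value):
    """Deterministic keyed tokenization (HMAC-SHA256): the same input always
    maps to the same opaque token, so joins still work, but the original
    value cannot be recovered without the key."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_email(email):
    """Partial masking for display: keep the first character and the domain."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

token = tokenize("student-12345")
masked = mask_email("jane.doe@example.edu")
```

Keyed tokenization (as opposed to plain hashing) matters for FERPA-style data because it resists dictionary attacks on low-entropy identifiers like student IDs.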

