Fraud Data Engineer

  • Posted on: Nov 12, 2025
  • MW Partners
  • San Jose, California
  • Salary: Not Available
  • Full-time

Fraud Data Engineer   

Job Title:

Fraud Data Engineer

Job Type:

Full-time

Job Location:

San Jose, California, United States

Remote:

No

Job Description:

MW Partners is currently seeking a Fraud Data Engineer to work for our client, a global leader in multimedia and creativity software products.

Responsibilities and duties:

  • Design, build, and maintain robust ETL/ELT pipelines (batch and streaming) for structured and unstructured data using SQL and Python/PySpark (see the PySpark sketch after this list).
  • Collaborate with data scientists and business stakeholders to model, query, and visualize complex entity relationships using Neo4j.
  • Optimize Neo4j data models and Cypher queries for scalability and performance.
  • Build and manage large-scale ML feature stores and integrate them with AI and agentic workflows.
  • Develop and maintain integrations across AWS (S3), Azure (Blob Storage, VMs), and third-party threat intelligence APIs to enrich fraud detection and investigation workflows.
  • Automate workflows using Apache Airflow or equivalent orchestration tools.
  • Apply DataOps best practices, including version control (Git), CI/CD, and monitoring for reliability and maintainability.
  • Implement and enforce data quality, lineage, and governance standards across all data assets.
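
As an illustration of the pipeline work in the first bullet, here is a minimal batch ETL sketch in PySpark: read raw transaction events, normalize a few fields, and write partitioned Parquet for downstream feature engineering. The bucket paths, column names, and the simple high-value flag are assumptions for illustration, not details taken from the role.

```python
# Minimal batch ETL sketch in PySpark.
# Bucket paths, column names, and the derived flag are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-batch-etl").getOrCreate()

# Extract: read raw, semi-structured transaction events from object storage.
raw = spark.read.json("s3a://fraud-raw/transactions/2025/11/")

# Transform: normalize types, derive a simple risk flag, drop malformed rows.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("high_value", F.col("amount") > 10000)
       .dropna(subset=["account_id", "amount", "event_ts"])
)

# Load: write partitioned Parquet for downstream feature engineering.
(cleaned.write
        .mode("overwrite")
        .partitionBy("high_value")
        .parquet("s3a://fraud-curated/transactions/"))
```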

Requirements:

  • Master's degree in Statistics, Mathematics, Computer Science, or a related field (or a bachelor's degree with equivalent experience).
  • 8+ years of experience in data engineering or a related field.
  • Proven success in ETL/ELT design and implementation, including batch and streaming pipelines.
  • Strong proficiency in SQL, Python, and PySpark.
  • Hands-on experience with Neo4j (data modeling, Cypher, query optimization); see the Cypher sketch after this list.
  • Experience building ML feature stores and integrating with AI/ML pipelines.
  • Working knowledge of AWS and Azure data services.
  • Familiarity with Apache Airflow or similar orchestration tools.
  • Proficient in Git and CI/CD workflows.
  • Strong understanding of data quality, lineage, and governance.
  • Nice to Have: Experience with Databricks.
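
To make the Neo4j expectation concrete, the sketch below runs a Cypher query through the official Python driver to surface accounts that share a device with an already-flagged account, a common shape when exploring fraud rings. The connection details, node labels, properties, and relationship type are assumptions, not part of the posting.

```python
# Cypher-over-Python sketch using the official neo4j driver.
# Connection details, labels, properties, and relationship types are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Accounts that share a device with an already-flagged account.
QUERY = """
MATCH (a:Account {flagged: true})-[:USED]->(d:Device)<-[:USED]-(b:Account)
WHERE a <> b
RETURN b.id AS related_account, d.id AS shared_device
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["related_account"], record["shared_device"])

driver.close()
```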

For a confidential discussion or to find out more, contact Amit Kumar at 909-206-4330 or apply now.

Position Details

Posted:

Nov 12, 2025

Employment:

Full-time

Salary:

Not Available

Snaprecruit ID:

SD-CIE-aa9916505a5cc547f165b06b82279a856a49cc873a8e6325247258942d7efd14

City:

San Jose

Job Origin:

CIEPAL_ORGANIC_FEED
