Data Engineer

  • Posted on: Jan 10, 2025
  • VeeRteq Solutions Inc
  • Carteret, New Jersey
  • Salary: Not Available
  • CTC

Job Title: Data Engineer
Job Type: CTC
Job Location: Carteret, New Jersey, United States
Remote: No

Job Description:

Senior Data Engineer

New York, NY / Iselin, NJ

Our challenge

This position is for a Cloud Data Engineer with a background in Python, PySpark, SQL, and data warehousing for enterprise-level systems. It calls for someone who is comfortable working with business users and who also brings business-analyst expertise.

The Role

Responsibilities:

  • Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
  • Design, develop, and deploy Spark programs in the Databricks environment to process and analyze large volumes of data.
  • Experience with Delta Lake, data warehousing, data integration, cloud, design, and data modeling.
  • Proficient in developing programs in Python and SQL.
  • Experience with data warehouse dimensional data modeling.
  • Work with event-based/streaming technologies to ingest and process data.
  • Work with structured, semi-structured, and unstructured data.
  • Optimize Databricks jobs for performance and scalability to handle big-data workloads.
  • Monitor and troubleshoot Databricks jobs; identify and resolve issues or bottlenecks.
  • Implement best practices for data management, security, and governance within the Databricks environment. Experience designing and developing Enterprise Data Warehouse solutions.
  • Proficient in writing SQL queries and programming, including stored procedures and reverse-engineering existing processes.
  • Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
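
As a rough illustration of the dimensional data modeling mentioned above, the sketch below splits denormalized order records into a customer dimension and an order fact table, the core move in a star schema. It is a minimal plain-Python example; all field names, keys, and values are invented for illustration and do not come from the posting.

```python
# Hypothetical sketch: split denormalized rows into a dimension table and a
# fact table (star-schema style). All names and fields are invented examples.

def build_star_schema(raw_rows):
    """Return (customer_dim, order_facts) built from denormalized order rows."""
    customer_dim = {}  # natural key -> dimension row with a surrogate key
    order_facts = []
    for row in raw_rows:
        key = row["customer_id"]
        if key not in customer_dim:
            customer_dim[key] = {
                "customer_sk": len(customer_dim) + 1,  # surrogate key
                "customer_id": key,
                "customer_name": row["customer_name"],
            }
        # Facts reference the dimension via the surrogate key, not raw attributes.
        order_facts.append({
            "order_id": row["order_id"],
            "customer_sk": customer_dim[key]["customer_sk"],
            "amount": row["amount"],
        })
    return list(customer_dim.values()), order_facts

raw = [
    {"order_id": 1, "customer_id": "C1", "customer_name": "Acme", "amount": 100.0},
    {"order_id": 2, "customer_id": "C1", "customer_name": "Acme", "amount": 50.0},
    {"order_id": 3, "customer_id": "C2", "customer_name": "Globex", "amount": 75.0},
]
dims, facts = build_star_schema(raw)
print(len(dims), len(facts))  # 2 dimension rows, 3 fact rows
```

In a Databricks/PySpark setting the same split would typically be expressed with DataFrame joins and written out as Delta tables, but the modeling idea is the same.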

Requirements:

You are:

  • Minimum of a bachelor's degree in an engineering or computer science discipline
  • Master's degree strongly preferred
  • 5+ years of Python coding experience
  • 5+ years of SQL Server-based development with large datasets
  • 5+ years of experience developing and deploying ETL pipelines using Databricks and PySpark
  • Experience with a cloud data warehouse such as Synapse, BigQuery, Redshift, or Snowflake
  • Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling
  • Previous experience leading an enterprise-wide cloud data platform migration, with strong architectural and design skills
  • Experience with cloud-based data architectures, messaging, and analytics
  • Cloud certification(s)
  • Any experience with Airflow is a plus

Position Details

Posted: Jan 10, 2025
Employment: CTC
Salary: Not Available
Snaprecruit ID: SD-CIE-961491bca4a53b4623240c467a69570d19c49e2109abfa7de97a60e94f3d4877
City: Carteret
Job Origin: CIEPAL_ORGANIC_FEED

