
Data Engineer With Python And Scala

  • Posted on: Oct 28, 2025
  • Saransh Inc
  • Brighton, Massachusetts
  • Salary: Not Available
  • CTC


Job Title: Data Engineer With Python And Scala

Job Type: CTC

Job Location: Brighton, Massachusetts, United States

Remote: No

Job Description:

Data Engineer
Brighton, MA / NYC, NY (Day 1 Onsite)
Long Term
Note: Local candidates are strongly preferred, as the client interview must be conducted in person.

Key Responsibilities:

  • Design, develop, and implement end-to-end big data solutions across the enterprise.
  • Develop and maintain big data applications using Apache Spark, Scala, AWS Glue, AWS Lambda, SNS/SQS, and CloudWatch.
  • Build and optimize data pipelines, ensuring high performance, scalability, and reliability.
  • Collaborate with cross-functional teams to design and document integration and application technical designs.
  • Conduct peer reviews of functional and design documentation to maintain high-quality standards.
  • Develop, configure, and perform unit testing and code reviews to ensure coding best practices.
  • Troubleshoot and resolve complex issues during testing phases and identify root causes efficiently.
  • Perform performance testing and optimize system performance.
  • Manage and maintain SQL-based databases, preferably Amazon Redshift.
  • Utilize Snowflake for data warehousing (experience in Snowflake is an added advantage).
  • Implement ETL/ELT processes and ensure data quality across various systems.
  • Work with Git repositories and manage CI/CD deployment pipelines for continuous integration and delivery.
  • Provide production support, including troubleshooting and environment tuning.
  • Ensure adherence to best practices and technical standards throughout the project lifecycle.
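To make the ETL and data-quality responsibilities above concrete, here is a minimal, illustrative sketch of an extract-transform-load step with a validation rule, using only the Python standard library. The data, field names, and quality rule are invented for illustration; a pipeline at this role's scale would run on Spark or AWS Glue rather than plain Python.

```python
# Illustrative ETL sketch: extract raw CSV, enforce a data-quality
# rule (no missing amounts), and load into a stand-in "warehouse".
import csv
import io

RAW_CSV = """id,amount,region
1,120.50,NE
2,,NE
3,87.25,SE
"""

def extract(text):
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Apply a data-quality rule: skip rows with a missing amount,
    and cast surviving amounts to float."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # quarantine/skip records failing validation
        clean.append({**row, "amount": float(row["amount"])})
    return clean

def load(rows):
    """Stand-in for a warehouse load: index rows by id."""
    return {row["id"]: row for row in rows}

warehouse = load(transform(extract(RAW_CSV)))
print(sorted(warehouse))  # only rows that passed validation: ['1', '3']
```

In a real deployment the `load` step would write to Redshift or Snowflake, and rejected rows would typically be routed to a quarantine table for review rather than silently dropped.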

Required Skills & Experience:

  • 6+ years of relevant experience in big data design and development.
  • Proficiency in Scala and/or Python for application development.
  • Strong expertise in Spark, AWS Glue, Lambda, SNS/SQS, and CloudWatch.
  • Hands-on experience with ETL/ELT frameworks and data integration.
  • Advanced knowledge of SQL, with experience in Redshift preferred.
  • Familiarity with Snowflake and cloud-based data solutions is advantageous.
  • Experience with CI/CD processes, Git, and production support in large-scale environments.
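As an illustration of the advanced-SQL skill listed above, the sketch below runs a typical warehouse-style aggregation against an in-memory SQLite database. The table and column names are made up; in this role the equivalent query would target Redshift or Snowflake.

```python
# Illustrative SQL aggregation (revenue per region), run on SQLite
# purely so the example is self-contained and runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL);
INSERT INTO orders VALUES
    (1, 'NE', 120.50),
    (2, 'NE', 30.00),
    (3, 'SE', 87.25);
""")

# Aggregate revenue per region, largest first.
rows = conn.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('NE', 150.5), ('SE', 87.25)]
```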

Position Details

Posted: Oct 28, 2025

Employment: CTC

Salary: Not Available

Snaprecruit ID: SD-CIE-e129b07c6593c4ae090272148be43fc62490376dc770097f2692fb3e02ca74a8

City: Brighton

Job Origin: CIEPAL_ORGANIC_FEED

