Data Engineer With Python And Scala
Data Engineer
Brighton, MA / NYC, NY (Day 1 Onsite)
Long Term
Note: Local candidates are strongly preferred, as the client interview is conducted in person.
Key Responsibilities:
- Design, develop, and implement end-to-end big data solutions across the enterprise.
- Develop and maintain big data applications using Apache Spark, Scala, AWS Glue, AWS Lambda, SNS/SQS, and CloudWatch.
- Build and optimize data pipelines, ensuring high performance, scalability, and reliability.
- Collaborate with cross-functional teams to design and document integration and application technical designs.
- Conduct peer reviews of functional and design documentation to maintain high-quality standards.
- Write and execute unit tests and conduct code reviews to ensure adherence to coding best practices.
- Troubleshoot and resolve complex issues during testing phases and identify root causes efficiently.
- Conduct performance testing and tune system performance.
- Manage and maintain SQL-based databases, preferably Amazon Redshift.
- Utilize Snowflake for data warehousing (experience in Snowflake is an added advantage).
- Implement ETL/ELT processes and ensure data quality across various systems.
- Work with Git repositories and manage CI/CD deployment pipelines.
- Provide production support, including troubleshooting and environment tuning.
- Ensure adherence to best practices and technical standards throughout the project lifecycle.
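To illustrate the data-quality duty listed above, here is a minimal, illustrative sketch in plain Python of the kind of cleansing rules (required-field checks, type normalization, deduplication) an ETL/ELT step applies. The record fields and rules are hypothetical examples, not part of the role description; in this stack the equivalent logic would typically run in Spark/Scala or AWS Glue.

```python
from typing import Iterable


def clean_records(rows: Iterable[dict]) -> list[dict]:
    """Drop rows missing required fields, normalize types, dedupe on 'id'."""
    seen = set()
    cleaned = []
    for row in rows:
        # Rule 1: required fields must be present and non-null.
        if row.get("id") is None or row.get("amount") is None:
            continue
        # Rule 2: normalize 'amount' to a float, skipping unparseable values.
        try:
            amount = float(row["amount"])
        except (TypeError, ValueError):
            continue
        # Rule 3: deduplicate on the primary key, keeping the first occurrence.
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        cleaned.append({"id": row["id"], "amount": amount})
    return cleaned


raw = [
    {"id": 1, "amount": "10.5"},
    {"id": 1, "amount": "10.5"},   # duplicate key
    {"id": 2, "amount": None},     # null amount
    {"id": 3, "amount": "oops"},   # unparseable amount
    {"id": 4, "amount": 7},
]
print(clean_records(raw))  # [{'id': 1, 'amount': 10.5}, {'id': 4, 'amount': 7.0}]
```

In a Spark job the same rules would be expressed as DataFrame filters and a `dropDuplicates` on the key column.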
Required Skills & Experience:
- 6+ years of relevant experience in big data design and development.
- Proficiency in Scala and/or Python for application development.
- Strong expertise in Spark, AWS Glue, Lambda, SNS/SQS, and CloudWatch.
- Hands-on experience with ETL/ELT frameworks and data integration.
- Advanced knowledge of SQL, with experience in Redshift preferred.
- Familiarity with Snowflake and cloud-based data solutions is advantageous.
- Experience with CI/CD processes, Git, and production support in large-scale environments.

