Data Engineer II
Overview:
TekWissen is a global workforce management provider headquartered in Ann Arbor, Michigan, that offers strategic talent solutions to our clients worldwide. Our client is a company operating a marketplace for consumers, sellers, and content creators. It offers merchandise and content purchased for resale from vendors as well as items offered by third-party sellers.
Job Title: Data Engineer II
Location: North Reading, MA 01864
Duration: 6 Months
Job Type: Contract
Work Type: Onsite
Job Description:
- We are looking for a Data Engineer to join the Infrastructure and Operations team within the Research Organization to scale our research across our Robotic Solutions.
Responsibilities:
- As a Data Engineer, you will work in one of the world's largest and most complex data warehouse environments.
- You will develop and support the analytic technologies that give our customers timely, flexible, and structured access to their data.
- You will be responsible for designing and implementing complex data models to scale research and simulations.
- You will work with business customers to understand business requirements and implement solutions that support analytical and reporting needs.
Required Skills & Experience:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
- Master's degree in computer science, engineering, analytics, mathematics, statistics, IT, or equivalent
- Experience with SQL
- Experience working on and delivering end-to-end projects independently
Preferred Qualifications:
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
- Experience with Apache Spark / Elastic MapReduce (EMR)
- Familiarity and comfort with Python, SQL, Docker, and shell scripting; Java is preferred but not required
- Experience with continuous delivery, infrastructure as code, and microservices, as well as designing and implementing automated data solutions using Apache Airflow, AWS Step Functions, or equivalent
Story Behind the Need: Business Group & Key Projects
- Business group and team purpose: the research and applied science organization provides research ideas and new concepts for the software; this engineering team provides the infrastructure to scale that research.
- Team projects: designing and implementing complex data models to scale research and simulations.
Typical Day in the Role:
- As a Data Engineer, you will work in one of the world's largest and most complex data warehouse environments.
- You will develop and support the analytic technologies that give our customers timely, flexible, and structured access to their data.
- You will be responsible for designing and implementing complex data models to scale research and simulations.
- You will work with business customers to understand business requirements and implement solutions that support analytical and reporting needs.
- Partnering with scientists to assist with algorithms and research
- Creating data modules
Candidate Requirements:
Client Leadership Principles:
- Deliver Results, Ownership, Learn and Be Curious
TekWissen Group is an equal opportunity employer supporting workforce diversity.