Find Data Architect Job in Columbus, Ohio | Snaprecruit

Find Data Architect Jobs in Columbus

Data Architect

  • Company: Stellent IT LLC
  • Location: Columbus, Ohio
  • Employment: Full-time
  • Salary: $60 per hour
  • Posted on: Sep 04, 2024

Data Architect   

JOB TITLE: Data Architect

JOB TYPE: Full-time

JOB LOCATION: Columbus, Ohio, United States

REMOTE: No

JOB DESCRIPTION:

- PO client

- Proper LinkedIn

- No H-1B

- Must be based in Columbus, OH or Minnetonka, MN

- $60/hr max

Title: Data Architect I (Junior) x2 - MLOps Engineers with API experience

Job ID: 6465

Type: Contract to hire

Conversion Salary: 90-95K

Location: Hybrid/Columbus, OH or Minneapolis, MN

JOB DESCRIPTION

Job Title: MLOps Engineer with API experience (Contract-to-Hire)

Location: Hybrid (Minneapolis, MN or Columbus, OH). The role is remote for the duration of the 90-day contract and transitions to a hybrid model once hired full-time, with an expectation of three days in the office per week. Candidates must be based in, or willing to relocate to, the Minneapolis, MN or Columbus, OH areas. The position is not eligible for sponsorship now or in the future. Please make sure answers to the screening questions are provided.

Contract Duration: 90 days (Contract-to-Hire)

About Us:

We are a forward-thinking team within a large enterprise bank, deeply invested in leveraging machine learning and artificial intelligence to drive impactful business outcomes. Our team is responsible for ensuring the smooth, scalable and secure deployment of machine learning models into production, handling both real-time and batch processing workloads. We offer a unique opportunity to work closely with data scientists and engineers, focusing on large language models and cutting-edge MLOps practices.

Job Summary:

As an MLOps Engineer, you will be responsible for the end-to-end productionization and deployment of machine learning models at scale. You will work closely with data scientists to refine models and ensure they are optimized for production. Additionally, you will be responsible for maintaining and improving our MLOps infrastructure, automating deployment pipelines, and ensuring compliance with IT and security standards. You will play a critical role in image management, vulnerability remediation, and the deployment of ML models using modern infrastructure-as-code practices.

API Experience: The MLOps Engineer will be responsible for developing and maintaining APIs and data pipelines that facilitate the seamless integration of machine learning model outputs into our Kafka-based event hub platform. This role requires a strong background in Python, API development (batch and real-time), Kafka, FTP/SFTP automation, and familiarity with Linux operating systems. You will work closely with data scientists to understand and finalize the schema of model outputs, document schemas using Swagger, collaborate with event hub architects, and ensure that data is accurately and reliably published, whether through Kafka APIs, automated FTP processes, or custom-developed APIs for real-time integration.
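As a purely illustrative sketch of the Kafka-publishing side of this work, the snippet below uses the kafka-python client to publish one model-output record as JSON. The broker address, topic name, and output fields are hypothetical stand-ins, not details from this posting; in practice the schema would be agreed with the data scientists and documented in Swagger first.

```python
import json

from kafka import KafkaProducer  # kafka-python client, assumed to be installed

# Hypothetical broker address and topic name, for illustration only.
BOOTSTRAP_SERVERS = ["eventhub-broker.internal:9092"]
TOPIC = "ml.model-output.v1"

producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP_SERVERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # publish as JSON
)

# Example record shaped like a hypothetical model-output schema.
record = {
    "request_id": "abc-123",
    "score": 0.87,
    "label": "approve",
    "generated_at": "2024-09-04T12:00:00Z",
}

producer.send(TOPIC, value=record)
producer.flush()  # block until the broker acknowledges the record
```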

Key Responsibilities:

1) Vulnerability Remediation & Image Management:

- Manage and update Docker images, ensuring they are secure and optimized.

- Collaborate with data scientists to validate that models run effectively on updated images.

- Address security vulnerabilities by updating and patching Docker images.

2) AWS & Terraform Expertise:

- Deploy, manage, and scale AWS services (SageMaker, S3, Lambda) using Terraform.

- Automate the spin-up and spin-down of AWS infrastructure using Terraform scripts.

- Monitor and optimize AWS resources to ensure cost-effectiveness and efficiency.

3) DevOps & CI/CD Pipeline Management:

- Design, implement, and maintain CI/CD pipelines in Azure DevOps (ADO).

- Integrate CI/CD practices with model deployment processes, ensuring smooth productionization of ML models.

- Strong experience with Git for code versioning and collaboration.

4) Model Productionization:

- Participate in the end-to-end process of productionizing machine learning models, from model deployment to monitoring and maintaining their performance.

- Work with large language models, focusing on implementing near real-time and batch inferences.

- Address data drift and model drift in production environments.

5) Collaboration & Continuous Learning:

- Work closely with data scientists, DevOps engineers, and other MLOps professionals to ensure seamless integration and deployment of ML models.

- Stay updated on the latest trends and technologies in MLOps, especially related to AWS and Docker.

6) API-Related Responsibilities:

- Schema Documentation: Collaborate with data scientists to refine and document model output schemas using Swagger for downstream API development.

- Data Transfer & API Development: Automate data transfers (data pipelines) to Kafka using FTP/SFTP or Kafka APIs. Develop and maintain batch and real-time APIs for model output integration (a minimal sketch follows this list).

- Event Hub Integration: Work with Kafka engineers to ensure accurate data publishing and monitor for reliability.
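Item 6 above asks for model-output schemas documented with Swagger and for real-time APIs. One minimal sketch of how that could look is shown below; it assumes FastAPI and Pydantic (which generate an OpenAPI/Swagger document automatically), and the endpoint path and field names are hypothetical, not taken from this posting.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Model Output API (illustrative)")


class PredictionOut(BaseModel):
    """Hypothetical model-output schema; real field names come from the data scientists."""
    request_id: str
    score: float
    label: str
    generated_at: str


@app.post("/v1/predictions", response_model=PredictionOut)
def publish_prediction(prediction: PredictionOut) -> PredictionOut:
    # In a real service this is where the record would be forwarded to the
    # Kafka-based event hub; here it is simply echoed back.
    return prediction

# Serving this app (e.g. `uvicorn main:app`) exposes interactive Swagger docs
# at /docs, generated automatically from the Pydantic schema above.
```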

Required Skills & Qualifications:

- Python: Deep expertise in Python for scripting and automation.

- AWS: Strong experience with AWS services, particularly SageMaker, S3, and Lambda.

- Terraform: Proficiency in using Terraform for infrastructure-as-code on AWS.

- Docker: Extensive experience with Docker, including building, managing, and securing Docker images.

- Linux: Strong command-line skills in Linux, especially for Docker and system management.

- Azure DevOps (ADO): Significant experience in setting up and managing CI/CD pipelines in ADO.

- Git: Proficient in using Git for version control and collaboration.

- Proven experience in developing and managing both batch and real-time APIs, preferably in a Kafka-based event-driven architecture.

- Expertise in API development, including both batch and real-time data processing. Exposure to API documentation tools like Swagger. Strong understanding of schema design and data serialization formats such as JSON.

- Additional DevOps Tools: Experience with Jenkins or other CI/CD tools is a plus.

- Experience & Education: 4 years of experience across a combination of MLOps, DevOps, and Data Engineering; Bachelor's degree in Computer Science, Engineering, or a related discipline.

Preferred Qualifications:

- Experience with large language models and productionizing ML models in a cloud environment.

- Exposure to near real-time inference systems and batch processing in ML.

- Familiarity with data drift and model drift management.
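The posting mentions addressing data drift and model drift in production. As a rough illustration of one common check (not necessarily this team's approach), the sketch below compares a training-time feature sample to recent production values with a two-sample Kolmogorov-Smirnov test; it assumes NumPy and SciPy are available, and the 0.05 threshold is an arbitrary example.

```python
import numpy as np
from scipy.stats import ks_2samp


def drift_detected(train_values: np.ndarray, live_values: np.ndarray,
                   alpha: float = 0.05) -> bool:
    """Flag drift when the two samples are unlikely to share a distribution."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha


# Illustrative usage with synthetic data: the live sample has shifted upward.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
recent = rng.normal(loc=0.4, scale=1.0, size=5_000)
print(drift_detected(baseline, recent))  # True for this synthetic shift
```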

Misty Kozekwa

Position Details

POSTED: Sep 04, 2024

EMPLOYMENT: Full-time

SALARY: $60 per hour

SNAPRECRUIT ID: SD-eb42de7e4a6f5d61b522b6faf21cf75790a13136aaf3117a20b462bdc4175d59

CITY: Columbus

JOB ORIGIN: CIEPAL_ORGANIC_FEED
