Ai Quality Engineer Apply
Job Title-AI Quality Engineer
Location - Saint Louis, MO
Number of days onsite-3 Days
Rate :: 45-50/hr on w2 , $57-60/hr on C@C Max
8210942368_1 and VMS #: 21179-1
Must Have Skills
Drift Detection
Java
Selenium
Cucumber
Cloud
Google Apps
UiPath
AI/ML,
Google kubernates Engineering
Recommended Qualification and Experience:
- Experience - 5+ years in Software Quality Engineering, with at least 2 years focused on AI/ML, UiPath automation testing.
- Understanding of OCR (Optical Character Recognition), NLP (Natural Language Processing), and LLM architectures if Value Add Experience
- Possesses prior experience in Java OR Python scripting, as well as experience with Salesforce and GCP (Google Cloud Platform)
Proficient in system integrations utilizing technologies such as SQL, REST APIs, and Google Kubernetes Engine
Role : AI Quality Engineer
- We are looking for a highly experienced and certified AI Quality Engineer who will drive transformation within our team.
- The ideal candidate will oversee the validation and lifecycle of our generative and predictive models.
- Your mission is to ensure our AI systems are not just smart but are enough for production.
- What You'll DoThe responsibilities of an AI Quality Engineer are split between testing the AI itself, AI-Driven Automation and using effective Prompt engineering to improve testing and Quality processes.
- This role also involves engaging and collaborating with cross-functional teams
- Your key responsibilities will include
- Define and execute end-to-end testing and validation frameworks for AI services, encompassing machine learning models, agentic workflows, and automation solutions.
- Focus on rigorous model and data validation, real-time observability, and the implementation of autonomous test generation
- Data Quality Auditing: Validating training and testing datasets for bias, noise, and completeness
- Model Performance Testing: Defining and tracking metrics beyond simple such as Precision-Recall curves, F1-score, and Mean Absolute Error (MAE).
- Adversarial Testing: Intentionally providing &apos prompt injections or perturbed images to see if the model breaks or leaks sensitive data
- Drift Detection: Monitoring production models for concept drift (changes in real-world data patterns) and data
- Fairness Monitoring: Ensuring the model's outputs remain ethical and do not discriminate against specific demographics over time
- You will be the primary gatekeeper for AI Governance and compliance, ensuring the model adheres to Equifax AI standards.
- Design comprehensive test cases and strategic test plans to validate RPA solutions (UiPath) for both new implementations and existing systems.
- Document detected defects and manage their lifecycle through to resolution if !supportListsValidate the Hand-off points where an AI Agent triggers an PRA bot to perform a structural task.

