
AI Challenge Reviewer - Best Practice Guidance

Best practices and guidance for using the AI Challenge Reviewer

Written by Linsey (WorkRamp Support)
Updated today

Purpose of this Article

This article provides best practices and a practical framework for writing effective grading criteria for WorkRamp's Auto Review feature for Video and Written Challenges. The goal is to help subject matter experts (SMEs) create clear, objective instructions that enable the AI to grade Challenge submissions accurately and consistently.


Core Principle: Treat the AI as a Human Reviewer

Think of the AI as a human reviewer who needs a very clear, objective rubric. It doesn't understand implicit knowledge, tone, or nuance without specific guidance. The more specific and detailed you are in your instructions, the more accurately the AI will be able to grade.

Best Practices for Writing Grading Criteria

1. Be Explicit and Specific: Avoid subjective or vague terms. Instead of writing "excellent communication skills," define what that looks like. For example: "The learner clearly and concisely articulated three key benefits of our product and used a conversational tone throughout the video."

2. Define a Clear Rubric: Establish a simple and consistent grading scale (e.g., 1-5, Pass/Fail, or a simple rubric with defined levels). For each point on the scale, provide a detailed description of what constitutes that level of performance for each criterion.


3. Break Down the Challenge: For more complex video or written submissions, break them into smaller, scorable components. For a video sales pitch, instead of one overall grade, you might have separate criteria for:

  • Introduction: Did the learner greet the customer and state the purpose of the call?

  • Problem Statement: Did the learner identify a specific pain point?

  • Solution: Did the learner present a solution that directly addressed the pain point?

  • Call to Action: Did the learner clearly state the next steps?
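
As an illustration only (WorkRamp's Auto Review is configured through its UI, not through code), the breakdown above can be sketched as a set of separately scorable components, each phrased as an objective yes/no question, with a hypothetical scoring rule that averages the component results:

```python
# Illustrative sketch, not WorkRamp's product API: each component of the video
# sales pitch becomes its own yes/no grading criterion.
RUBRIC = {
    "Introduction": "Did the learner greet the customer and state the purpose of the call?",
    "Problem Statement": "Did the learner identify a specific pain point?",
    "Solution": "Did the learner present a solution that directly addressed the pain point?",
    "Call to Action": "Did the learner clearly state the next steps?",
}

def overall_score(component_results):
    """Average the per-component pass/fail results (hypothetical scoring rule)."""
    return sum(component_results.values()) / len(component_results)

# A submission that hits 3 of 4 components scores 0.75 under this rule.
results = {"Introduction": 1, "Problem Statement": 1, "Solution": 1, "Call to Action": 0}
print(overall_score(results))  # 0.75
```

Scoring each component independently also makes the AI's feedback more actionable: a learner can see which part of the pitch fell short rather than receiving one opaque overall grade.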


4. Provide Training Examples: The most effective way to improve the AI's grading is to train it with human-reviewed data. If you have existing, manually graded assignments, use them as a baseline. This teaches the AI what a "good" versus a "poor" response looks like in practice.


5. Use a "Human-in-the-Loop" Approach: When you first launch Challenges with AI Reviewers, we don't recommend relying on the AI for 100% of the grading. Start with an AI-assisted or hybrid approach where the AI provides a suggested score and a human reviewer has the final say. This allows you to review the AI's suggestions and refine your criteria over time.
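
The hybrid flow described above can be sketched in a few lines. This is a conceptual illustration, not WorkRamp functionality: the function name and parameters are hypothetical, and the only point being made is that the human decision, when present, always wins.

```python
# Hypothetical sketch of a "human-in-the-loop" review flow: the AI proposes a
# score, and the human reviewer can accept it or override it with their own.
def final_grade(ai_suggested_score, human_override=None):
    """Return the human's score when one is given; otherwise accept the AI suggestion."""
    return human_override if human_override is not None else ai_suggested_score

print(final_grade(4))     # 4  (human accepted the AI's suggestion)
print(final_grade(4, 2))  # 2  (human reviewer had the final say)
```

Comparing where human overrides cluster against the AI's suggestions is also a practical signal for which grading criteria need refinement.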


Practical Framework for SMEs

Use this simple framework as a launch point with your SMEs to structure and define your grading criteria for each challenge question.


Challenge Question: "Record a video explaining how our product addresses a specific customer pain point."
