Validate Definitions


Early Access

This article was published prior to aji’s GA date. Content may be updated before the official release.

Overview

Validation is an important stage in aji’s Review process: it takes the Definition you refined on a small dataset during Calibration and tests its accuracy against a larger random set of documents drawn from your full dataset. While this step is optional and you can move directly from Calibration to Hybrid or Full Review, Validation helps you see how your Definitions perform against a large number of documents.

Note

In Reveal, there is no explicit “Validation” modal to fill out. Instead, you perform a Calibration Run on a sample set of random documents.

To perform Validation, follow these steps:

  1. Create Your Random Set – Create a sample set of random documents, sourced from your whole target data population.

  2. Start a Calibration Run – Execute a Calibration Run on your Random Set.

  3. Review Your Random Set – Code all of the documents in your Random Set.

  4. Apply Validation Findings – View your Validation results and determine next steps.

I. Create Your Random Set

Important

When sampling documents from your target data population using the instructions below, keep in mind that Reveal may pick up documents that were already rated in a previous Calibration Run.
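If you export document IDs from Reveal, a quick way to check for this overlap is a simple set comparison, as in the sketch below. This is a general-purpose illustration only; the file names and ID format are hypothetical, and nothing here is part of Reveal itself.

```python
# Minimal sketch: flag sampled documents that were already rated in a
# previous Calibration Run. Assumes two exported lists of document IDs,
# one ID per line; both file names are hypothetical placeholders.

def load_ids(path: str) -> set[str]:
    """Read one document ID per line, skipping blank lines."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

random_set = load_ids("random_set_ids.txt")            # new Random Set
already_rated = load_ids("calibration_rated_ids.txt")  # prior Calibration Run

overlap = random_set & already_rated
print(f"{len(overlap)} of {len(random_set)} sampled documents "
      f"were already rated in a previous Calibration Run.")
```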

  1. In the Review Grid, find your work folder that contains your target data population (described in our aji Environment and Data Requirements article).

  2. Click the work folder to create a new pill in the search bar.

  3. Click Sample in the Review Grid toolbar, and fill out the modal as described in Create a Sample Document Set.

    1. Copy the samples into a new work folder, and give it a name that makes sense to you and can be easily referenced later.

Note

Larger Random Sets provide more accurate results against your target data population, but require coding more documents during Review.
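As a rough rule of thumb from standard sampling statistics (not a Reveal-specific requirement), the margin of error of a random sample shrinks as the sample grows. A minimal sketch of the conventional sample-size formula for a proportion:

```python
# Minimal sketch of the standard sample-size formula for a proportion:
#     n = z^2 * p * (1 - p) / e^2
# This is general sampling statistics, not a Reveal-specific rule.

import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Documents needed for a given margin of error.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about the underlying rate.
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.05))  # 385 documents for +/-5% at 95% confidence
print(sample_size(0.02))  # 2401 documents for +/-2% at 95% confidence
```

For example, tightening the margin of error from +/-5% to +/-2% requires roughly six times as many documents, which is why larger Random Sets give more trustworthy Validation results.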

II. Start a Calibration Run

  1. Fill out the Run a Calibration Test modal to start a new Calibration Run. Instructions on this modal can be found in our Calibrate aji article.

    1. When selecting Documents to Review, make sure to choose Create a New Document Selection, then find your Random Set work folder.

III. Review Your Random Set

  1. In the Review Grid, find your Random Set work folder.

  2. Code all of the documents in your Random Set.

    • Make sure you’re using the right AI Tag associated with your Calibration Run, which can be found in the Calibration Results tab (see Calibrate aji for more information).

IV. Apply Validation Findings

  1. Using the Calibration Data Visualizations, interpret your Validation results. The Calibration Data Visualizations are discussed in detail in our Calibrate aji article.

    1. Pay close attention to the Evaluation pane, which contains the Agreement Rate and the true/false agreement outcomes.

    2. You can also calculate Precision, Recall, and F1 for more insight into your Validation results; a worked sketch appears at the end of this article.

  2. Decide next steps.

    • If you are happy with your Agreement Rate, proceed to either a Hybrid Review or a Full Review.

    • If you think your Agreement Rate can be improved, return to the Define stage, then:

      • Edit your Definitions.

      • Perform Calibration Runs.

      • Validate your Definitions (optional).
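To make the Validation metrics concrete, below is a minimal sketch of how Agreement Rate, Precision, Recall, and F1 are conventionally computed from true/false agreement outcomes. The counts are placeholder values; substitute the numbers from your own Evaluation pane, and consult the Calibrate aji article for aji’s authoritative definitions.

```python
# Minimal sketch: conventional definitions of Agreement Rate, Precision,
# Recall, and F1. The counts below are placeholders; substitute the
# true/false agreement outcomes from your Evaluation pane.

tp = 120  # true positive: aji and the reviewer both rated positive
tn = 300  # true negative: aji and the reviewer both rated negative
fp = 25   # false positive: aji rated positive, the reviewer negative
fn = 15   # false negative: aji rated negative, the reviewer positive

total = tp + tn + fp + fn
agreement_rate = (tp + tn) / total  # how often aji agrees with the reviewer
precision = tp / (tp + fp)          # of aji's positives, how many were correct
recall = tp / (tp + fn)             # of all true positives, how many aji found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"Agreement Rate: {agreement_rate:.1%}")
print(f"Precision: {precision:.1%}  Recall: {recall:.1%}  F1: {f1:.1%}")
```

A high Agreement Rate paired with low Recall, for example, can signal that your Definition is too narrow, which is a good reason to return to the Define stage.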