Early Access
This article has been published prior to aji’s GA date. Content may be subject to updates before its official release.
Introducing aji: your expertise, magnified. Quickly launch and intelligently fine-tune your reviews with full context and traceable logic.
aji is Reveal’s GenAI Review engine, tailored to attorneys and built for precision, bringing speed and control to your reviews. aji doesn’t just process documents at staggering scale; it helps legal teams deliver outcomes they can trust.
Designed with transparency and flexibility in mind, aji provides several ways to create an informative and personalized AI-assisted review experience for all kinds of users.
aji offers real-time feedback to refine and strengthen your Definitions so they deliver the documents you’re looking for.
aji tests your Definitions against a small sample of your dataset, so you can refine them before running aji at scale.
aji includes in-text citations and reasoning for its ratings, opening the AI black box.
aji provides a Hybrid GenAI Review mode that significantly reduces the cost of using GenAI for document review.
With all of the above capabilities, review can become a much quicker step in the eDiscovery workflow and yield valuable insight into your datasets, all at a more affordable cost than manual review.
Before you use aji, we recommend reading:
aji Environment and Data Requirements: Outlines the requirements that must be met before you can use aji.
aji Billing: Describes how Reveal keeps track of your GenAI usage through aji Units.
aji Unit Allocation – Instance Admin: Details how to assign aji Units to projects, which is necessary before users can perform Calibration, Hybrid Reviews, or Full Reviews.
Five Stages of aji Review
A complete aji review follows the sequence of stages below.
See each stage’s corresponding reference articles for in-depth instructions, terminology, and methodology covering aji as a whole.
I. Define
Create Definitions (written prompts based on your RFPs) to describe what you’re looking for in your dataset. Use the AI Advisor to help refine and strengthen your Definitions, or the Definition Auto-Tuner to suggest a new Definition that takes your manually tagged documents into account.
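For illustration only, a Definition for a hypothetical breach-of-contract matter might read: “A document is responsive if it discusses the negotiation, execution, amendment, or alleged breach of the 2021 supply agreement between Acme Corp. and Widgetry Inc., including drafts, internal commentary, and related correspondence.” The parties and subject matter above are invented; your own Definitions should track the language of your RFPs.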
Please reference the following articles during your Define stage:
II. Calibrate
Manually review a small subset of documents according to your Definition, then compare your coding decisions with aji’s ratings on that same set. If the Agreement Rate is high (aji’s ratings match your coding decisions), move to Validate. If the Agreement Rate is low, refine your Definition and run Calibration again.
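As a simple worked example: if aji’s ratings match your coding decisions on 45 of 50 Calibration documents, the Agreement Rate is 90% (45 ÷ 50). The sample size and threshold here are illustrative only; see the Calibrate reference articles for the specifics used in your project.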
Each document reviewed by aji includes descriptive reasoning and in-text citations that you can use to understand aji’s thought process and further refine your Definitions.
Please reference the following articles during your Calibrate stage:
III. Validate
With your Definitions finalized, run aji against a larger, random set of documents outside your Calibration set. Here, you confirm that aji’s ratings remain consistently accurate across a large sample. This stage is optional; you may skip to Run if you are satisfied with your Definitions and their performance in the Calibrate stage.
The Validate stage follows a workflow similar to the one you used in the Calibrate stage. Please reference the following articles during your Validate stage:
IV. Run
Run aji against your full dataset. There are two modes to choose from:
Full Review: GenAI rates every document, providing reasoning and citations for each.
Hybrid Review: Combines GenAI with Reveal’s supervised learning Classifiers (SVM) for a faster and more affordable review; see the note after this list.
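In broad terms, a supervised learning Classifier learns from documents that have already been rated and can then score similar documents on its own, so GenAI does not need to rate every document in the dataset. This is a simplification, offered to show why the combination can reduce both time and cost; the exact mechanics are described in the Hybrid Review reference article.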
Please reference the following articles during your Run stage:
V. Utilize
aji fully integrates with Reveal’s unified suite of eDiscovery tools, allowing you to combine GenAI Review with data visualizations, supervised learning, and more. You can leverage your aji-reviewed documents by:
Filtering, sorting, and prioritizing documents based on aji’s ratings.
Reading aji’s explanations, with in-text citations, of why documents were rated responsive or non-responsive.
Using aji’s review results to guide the next steps of your eDiscovery workflow.
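For example, a team might batch likely responsive documents to reviewers first, then use aji’s reasoning and citations to spot-check borderline ratings before production. This is one illustration, not a prescribed workflow; your next steps will depend on your matter and review protocol.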