Technical Architecture

CBRP Judgment Engine

How CounselBrief captures, computes, and operationalizes the professional judgment of in-house General Counsel — turning daily engagement into a structured intelligence layer that no transaction-based tool can replicate.

1
Input

Daily Engagement

Every morning, the GC receives 3 practical applications drawn from 15 practice areas. Each is a real-world legal scenario: a framework analysis of the issue, its implications, and a judgment question designed to reveal what matters to this GC in this company.

Not hypotheticals. Not news summaries. Concrete scenarios calibrated to force a prioritization signal.

Corporate Governance
Board diversity disclosure rules just expanded. Your company has 14 months to comply — or explain why not. What's your materiality threshold?
Framework: SEC mandate · Compliance timeline · Disclosure strategy
IP & Technology
Your engineering team just shipped a feature built on a fine-tuned open-source model. Who owns the output? The answer depends on your license stack.
Framework: OSS licensing · AI-generated IP · Derivative works
Data Privacy
A vendor's sub-processor just got acquired by a company in a non-adequate jurisdiction. Your DPA doesn't cover this. How fast do you need to move?
Framework: Cross-border transfer · Sub-processor risk · DPA gaps

~1,000 unique scenarios per year across the full practice area taxonomy. No GC sees the same sequence. Scenario selection adapts over time based on accumulated profile data.

2
Capture

Three-Signal Scoring

After reading each practical application, the GC taps one of three signals. That's the entire interaction — a single judgment call, repeated ~500 times per year.

Matters: "This is consistently important to my company and role."
Might Matter: "I'm not sure yet — this could become important."
Doesn't Matter: "This is noise for my context."
"Might Matter" is the most valuable signal. It captures active uncertainty — the frontier where professional judgment is forming. Transaction tools never see this state; they only encounter the GC after she's already decided.
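The capture step can be sketched as a minimal event record. The names below (`Signal`, `ScoringEvent`, the field layout) are illustrative assumptions, not the product's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Signal(Enum):
    MATTERS = "matters"
    MIGHT_MATTER = "might_matter"    # the most valuable signal: active uncertainty
    DOESNT_MATTER = "doesnt_matter"

@dataclass(frozen=True)
class ScoringEvent:
    gc_id: str
    practice_area: str    # one of the 15 practice areas
    scenario_id: str
    signal: Signal
    scored_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# One tap = one immutable event; ~500 of these accumulate per GC per year.
event = ScoringEvent("gc-001", "Data Privacy", "scn-4821", Signal.MIGHT_MATTER)
assert event.signal is Signal.MIGHT_MATTER
```

Keeping events immutable and timestamped is what later makes the temporal trends in the Judgment Profile computable.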
EXAMPLE

Scoring Reveals a Hidden Priority

A SaaS GC scores 8 data privacy practical applications as "Matters" over 4 months — but consistently scores employment data handling as "Doesn't Matter."

The engine learns: this GC's data priority is customer data, not employee data. The company's risk posture is external-facing.

Result: When reviewing a vendor agreement, the engine flags a data processing addendum for customer analytics data — but deprioritizes an employee background check processing clause. The GC's own scoring taught the system where her company's exposure actually lives.
EXAMPLE

Behavioral Signals Beyond the Tap

The three-signal tap is the primary capture mechanism — but it's not the only one. Every interaction generates implicit behavioral data that enriches the profile:

Time-on-article: 6 minutes on a capital markets app vs. 45 seconds on employment — attention is a signal, even when the tap is the same.
Dig Deeper saves: Saving an application for later review is a stronger signal than tapping "Matters." It indicates active intent to engage.
Explain clicks: Requesting deeper explanation signals unfamiliarity — the GC is encountering this topic for the first time. Critical for identifying development edges.
Return visits: Coming back to an application hours or days later reveals sustained interest — the topic is occupying mental real estate.

~500 explicit scoring events per year per GC, plus hundreds of implicit behavioral signals. Continuous, organic, non-survey. The GC never fills out a form — she just reads and reacts.
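One way to blend the implicit signals with dwell time is a simple weighted score. The weights and the dwell-time cap below are assumptions for illustration (the source names the signals and their relative strength, not their numeric weights):

```python
# Hypothetical weights; the text says only that a "Dig Deeper" save is a
# stronger signal than a "Matters" tap and that explain clicks signal
# unfamiliarity rather than priority.
IMPLICIT_WEIGHTS = {
    "dig_deeper_save": 1.5,
    "explain_click": 0.5,
    "return_visit": 1.0,
}

def engagement_score(seconds_on_article: float, implicit_events: list[str]) -> float:
    """Blend time-on-article with discrete behavioral events into one score."""
    # Cap the dwell-time contribution so one long read doesn't dominate.
    dwell_minutes = min(seconds_on_article / 60.0, 5.0)
    return dwell_minutes + sum(IMPLICIT_WEIGHTS.get(e, 0.0) for e in implicit_events)

# A 6-minute read plus a save outweighs a 45-second skim with no interaction.
assert engagement_score(360, ["dig_deeper_save"]) > engagement_score(45, [])
```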

3
Computation

The Judgment Profile

Accumulated signals compute into a structured profile — a living, evolving model of how this GC thinks about legal risk.

Per-Area Signal Weights

[Chart: distribution of Matters / Might Matter / Doesn't Matter signals across Data Privacy, IP & Tech, Employment, Corp Gov, and Securities]

Temporal Trends

IP & Tech — rising from Might → Matters over 6 months
AI Governance — new "Matters" cluster emerging (Q3)
Employment — stable low priority (18 months)
Data Privacy — consistently high (12 months)

Confidence Levels

Data Privacy · 47 scores
IP & Tech · 34 scores
Securities · 22 scores
Antitrust · 5 scores

50+ scores = high confidence. Under 10 = exploratory. The engine knows what it knows — and what it doesn't.
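The banding rule above maps directly to a threshold function. The two cut-offs come from the text; the "medium" label for the range in between is an assumption:

```python
def confidence_band(score_count: int) -> str:
    """Map a per-area score count to a confidence band (50+ and <10 per the text)."""
    if score_count >= 50:
        return "high"
    if score_count < 10:
        return "exploratory"
    return "medium"  # assumed label for the intermediate range

counts = {"Data Privacy": 47, "IP & Tech": 34, "Securities": 22, "Antitrust": 5}
bands = {area: confidence_band(n) for area, n in counts.items()}
assert bands["Antitrust"] == "exploratory"
assert bands["Data Privacy"] == "medium"  # 47 is still below the 50-score bar
```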

"Might Matter" Inventory

AI-Generated IP · 3 scores, trending →
ESG Reporting · 4 scores, new
Cross-Border M&A · 2 scores, early

The active edge of professional development. These are the areas where the GC is still calibrating — they represent where the company may be headed next.

Key Insight "Harvey sees the GC at the negotiating table, after judgment has formed. CounselBrief sees the judgment forming."
EXAMPLE

"Might Matter" Resolves into Conviction

A GC scores AI-generated IP practical applications as "Might Matter" three times over two months. At the time, the company doesn't use AI tools in production.

Then the company licenses a code-generation AI tool. The next two AI-IP applications are scored "Matters."

Result: The profile captures the full uncertainty → conviction arc, and when it crystallized. This transition signals an undisclosed business event (a new tool adoption) without the GC ever reporting it. The judgment data reveals the company's strategic direction before any contract is signed.
4
Intelligence

Three Levels of Inference

The engine doesn't just record signals — it reasons over them. Three inference levels extract increasing value from the same underlying data.

Level 1 · Direct Match

Explicit Signal → Direct Flag

The GC scored IP assignment practical applications as "Matters" 12 times over 14 months. When reviewing any contract, flag IP assignment clauses. High confidence. Straightforward.

IP Assignment · 12× Matters → Flag IP Assignment Clauses

This is table stakes — the minimum viable intelligence from signal data. Necessary, but not where the real value lives.

Level 2 · Adjacent Inference

Implicit Position → Inferred Flag

The GC has never directly scored "derivative works." But she scored IP assignment, trade secret protection, and open-source licensing as "Matters" — and those three topics triangulate around derivative works doctrine.

The engine infers: derivative works clauses matter to this GC, even though she never said so.

IP Assignment · Matters (12×)
Trade Secret · Matters (8×)
OSS Licensing · Matters (6×)
→ Derivative Works Clauses · Inferred — Medium Confidence
Key Insight This surfaces positions the GC holds implicitly — ones she may not have articulated to herself. No intake form or configuration panel can capture what the GC hasn't yet thought to declare.
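Adjacent inference can be sketched as a lookup over a topic-adjacency graph. The hand-coded `ADJACENCY` map below is a stand-in; the real engine would presumably learn which scored topics triangulate onto which unscored ones:

```python
# Hypothetical adjacency graph: an unscored topic keyed to the scored
# topics that triangulate onto it (hand-coded here for illustration).
ADJACENCY = {
    "Derivative Works": {"IP Assignment", "Trade Secret", "OSS Licensing"},
}

matters_counts = {"IP Assignment": 12, "Trade Secret": 8, "OSS Licensing": 6}

def infer_adjacent(matters: dict[str, int], min_neighbors: int = 3) -> dict[str, str]:
    """Flag unscored topics whose scored neighbors register as Matters."""
    inferred = {}
    for topic, neighbors in ADJACENCY.items():
        hits = [n for n in neighbors if matters.get(n, 0) > 0]
        if len(hits) >= min_neighbors:
            inferred[topic] = "medium"  # inferred flags carry lower confidence
    return inferred

assert infer_adjacent(matters_counts) == {"Derivative Works": "medium"}
```

The point of the sketch: the output topic never appears in the input — the GC said nothing about derivative works, yet the flag still fires.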
EXAMPLE

Adjacent Inference Catches What the GC Missed

A PE-backed healthcare GC has never scored a False Claims Act practical application. She's never thought about FCA exposure as a standalone priority.

But her profile shows: Healthcare regulatory compliance → Matters (9×). Whistleblower protections → Matters (6×). Government contract compliance → Matters (4×).

Those three topics triangulate directly onto False Claims Act liability.

Result: When the company acquires a Medicare billing provider, the engine flags FCA exposure in the acquisition documents — a risk the GC hadn't considered because she'd never framed it as a standalone concern. The profile knew before she did.
Level 3 · Pattern Inference

Meta-Priority → Cross-Practice Flag

The GC scored enforcement-related topics as "Matters" across four unrelated practice areas. Not a single area of focus — a pattern that cuts across the entire practice area taxonomy.

FTC · Privacy: Enforcement ×5
EEOC · Employment: Enforcement ×4
EPA · Environmental: Enforcement ×3
SEC · Securities: Enforcement ×4
→ Meta-Priority: Enforcement-Sensitive Org (pattern-inferred · all contracts flagged)
EXAMPLE

Pattern Inference Reveals an Organizational Trait

No single practice area would reveal this. The GC didn't score "enforcement" as a category — the category doesn't exist in the taxonomy. The engine discovered it by observing a consistent theme across:

FTC enforcement (privacy) · EEOC enforcement (employment) · EPA enforcement (environmental) · SEC enforcement (securities)

Result: The engine identifies one meta-priority: this organization is enforcement-sensitive. Going forward, enforcement-related language gets flagged in every contract type — including practice areas the GC hasn't scored yet. The profile reveals institutional character, not just individual preference.
Key Insight Cross-practice-area pattern recognition cannot be replicated by any configuration panel or intake form. You cannot check a box for a pattern you haven't recognized in yourself.
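Pattern inference reduces to counting how many distinct practice areas share a theme. The theme tags below are an assumption — the text notes "enforcement" is not a taxonomy category, so the engine must derive the theme from scenario content rather than read it from a label:

```python
# (practice_area, theme) pairs for scenarios scored "Matters".
matters_events = (
    [("Privacy", "enforcement")] * 5
    + [("Employment", "enforcement")] * 4
    + [("Environmental", "enforcement")] * 3
    + [("Securities", "enforcement")] * 4
    + [("Privacy", "disclosure")] * 2   # a theme confined to one area
)

def meta_priorities(events: list[tuple[str, str]], min_areas: int = 3) -> set[str]:
    """A theme recurring across several unrelated practice areas is a meta-priority."""
    areas_per_theme: dict[str, set] = {}
    for area, theme in events:
        areas_per_theme.setdefault(theme, set()).add(area)
    return {t for t, areas in areas_per_theme.items() if len(areas) >= min_areas}

# "enforcement" spans four areas; "disclosure" stays a single-area signal.
assert meta_priorities(matters_events) == {"enforcement"}
```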
5
Output

Integration Layer

The Judgment Profile becomes a structured API that plugs directly into the execution tools the GC already uses. CounselBrief doesn't replace Harvey, Ironclad, or Robin AI — it makes them smarter.

CB Judgment Profile → API → Harvey · Ironclad · Robin AI

Each execution tool receives contextualized flags with full reasoning chains:

// Flag delivered to execution tool via API
{
  "clause": "§4.2 — Derivative Works Assignment",
  "priority": "high",
  "basis": "adjacent_inference",
  "reasoning": "GC has scored IP assignment (12×), trade secret (8×), and OSS licensing (6×) as Matters. These triangulate onto derivative works doctrine.",
  "confidence": 0.78,
  "context_warning": "Profile built on SaaS context; this transaction involves hardware licensing — verify applicability."
}

Every flag includes: what was found, why it was flagged (basis + reasoning), how confident the engine is, and context warnings when scoring context diverges from transaction context.

"The profile is indicative, not definitive. It is a lens, not a rulebook."

The engine augments human judgment — it never replaces it. Every flag is a starting point for the GC's decision, not the decision itself.
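On the consuming side, a simple routing rule keeps the human in the loop. The field names follow the API example above; the review policy and the `needs_human_review` helper are assumptions:

```python
import json

# A flag payload mirroring the API example (text simplified, fields kept).
payload = json.loads("""{
  "clause": "Derivative Works Assignment",
  "priority": "high",
  "basis": "adjacent_inference",
  "confidence": 0.78,
  "context_warning": "Profile built on SaaS context; verify applicability."
}""")

def needs_human_review(flag: dict, threshold: float = 0.85) -> bool:
    """Route any flag that is inferred, low-confidence, or context-mismatched to the GC."""
    return (
        flag["basis"] != "direct_match"
        or flag["confidence"] < threshold
        or bool(flag.get("context_warning"))
    )

assert needs_human_review(payload)  # inferred basis + context warning: the GC decides
```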

6
Moat

The Compound Effect

Every scoring event makes the next inference more accurate. This creates a compounding data flywheel that cannot be cold-started by a competitor.

More engagement → richer profiles
Richer profiles → better inference
Better inference → more valuable integrations
More valuable integrations → stickier product
Stickier product → more engagement
Longitudinal data that compounds — this cannot be cold-started.
~500 scores / year / GC · 15 practice areas · 3 inference levels → compound value
Key Insight "Harvey sees a snapshot. CounselBrief sees the time-lapse."
EXAMPLE

Cohort Intelligence: The Network Effect

Individual profiles are valuable. Aggregated profiles are transformative.

73% of PE-backed SaaS GCs score the SEC cybersecurity disclosure rule as "Matters" — vs. only 12% of manufacturing GCs.

This is revealed preference, not survey data. No one asked these GCs to rank cybersecurity. Their daily engagement produced this signal organically.

Result: Cohort intelligence feeds three outputs: (1) content strategy — surface emerging topics before they trend; (2) topic detection — identify regulatory shifts as they form; (3) publishable indices — the "CounselBrief GC Priority Index" becomes a market signal in itself. The aggregated judgment of 1,000 GCs is a dataset that didn't previously exist.
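The cohort rollup is a straightforward group-by-and-average over per-GC flags. The cohort labels and the synthetic counts below simply reproduce the source's illustrative figures:

```python
def cohort_rate(scores: list[tuple[str, bool]], cohort: str) -> float:
    """Share of GCs in a cohort who scored a given topic as Matters."""
    rows = [matters for c, matters in scores if c == cohort]
    return sum(rows) / len(rows) if rows else 0.0

# Synthetic per-GC flags matching the 73% vs. 12% example above.
scores = (
    [("pe_saas", True)] * 73 + [("pe_saas", False)] * 27
    + [("manufacturing", True)] * 12 + [("manufacturing", False)] * 88
)
assert cohort_rate(scores, "pe_saas") == 0.73
assert cohort_rate(scores, "manufacturing") == 0.12
```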
Summary

Key Differentiators

Breadth

Covers topics the GC hasn't transacted on yet. Harvey and Ironclad only see active deals — CounselBrief captures judgment across the full 15-area taxonomy, including areas where no contract has been signed.

Depth

Captures uncertainty via "Might Matter." Transactions only reveal final decisions — they never show the GC's evolving thinking or the priorities she hasn't yet committed to. The richest signal is the one still forming.

Continuity

Daily engagement vs. episodic deal-cycle data. A GC who hasn't signed a contract in 3 months is invisible to transaction tools. She's been scoring with CounselBrief every morning — her profile is sharper than ever.