Technical Architecture

CBRP Judgment Engine

How CounselBrief captures, computes, and operationalizes the professional judgment of in-house General Counsel — turning daily engagement into a structured intelligence layer that no transaction-based tool can replicate.

1
Input

Daily Engagement

Every morning, the GC receives 3 practical applications drawn from 15 practice areas. Each is a real-world legal scenario: a framework analysis of the issue, its implications, and a judgment question designed to reveal what matters to this GC in this company.

Not hypotheticals. Not news summaries. Concrete scenarios calibrated to force a prioritization signal.

Corporate Governance
Board diversity disclosure rules just expanded. Your company has 14 months to comply — or explain why not. What's your materiality threshold?
Framework: SEC mandate · Compliance timeline · Disclosure strategy
IP & Technology
Your engineering team just shipped a feature built on a fine-tuned open-source model. Who owns the output? The answer depends on your license stack.
Framework: OSS licensing · AI-generated IP · Derivative works
Data Privacy
A vendor's sub-processor just got acquired by a company in a non-adequate jurisdiction. Your DPA doesn't cover this. How fast do you need to move?
Framework: Cross-border transfer · Sub-processor risk · DPA gaps

~1,000 unique scenarios per year across the full practice area taxonomy. No GC sees the same sequence. Scenario selection adapts over time based on accumulated profile data.

2
Capture

Three-Signal Scoring

After reading each practical application, the GC taps one of three signals. That's the entire interaction — a single judgment call, repeated ~500 times per year.

Matters: "This is consistently important to my company and role."
Might Matter: "I'm not sure yet — this could become important."
Doesn't Matter: "This is noise for my context."
"Might Matter" is the most valuable signal. It captures active uncertainty — the frontier where professional judgment is forming. Transaction tools never see this state; they only encounter the GC after she's already decided.
EXAMPLE

Scoring Reveals a Hidden Priority

A SaaS GC scores 8 data privacy practical applications as "Matters" over 4 months — but consistently scores employment data handling as "Doesn't Matter."

The engine learns: this GC's data priority is customer data, not employee data. The company's risk posture is external-facing.

Result: When reviewing a vendor agreement, the engine flags a data processing addendum for customer analytics data — but deprioritizes an employee background check processing clause. The GC's own scoring taught the system where her company's exposure actually lives.
EXAMPLE

Behavioral Signals Beyond the Tap

The three-signal tap is the primary capture mechanism — but it's not the only one. Every interaction generates implicit behavioral data that enriches the profile:

Time-on-article: 6 minutes on a capital markets app vs. 45 seconds on employment. Attention is a signal, even when the tap is the same.
Dig Deeper saves: Saving an application for later review is a stronger signal than tapping "Matters." It indicates active intent to engage.
Explain clicks: Requesting a deeper explanation signals unfamiliarity; the GC is encountering this topic for the first time. Critical for identifying development edges.
Return visits: Coming back to an application hours or days later reveals sustained interest; the topic is occupying mental real estate.

~500 explicit scoring events per year per GC, plus hundreds of implicit behavioral signals. Continuous, organic, non-survey. The GC never fills out a form — she just reads and reacts.
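How these implicit signals combine into a single engagement weight is not specified in the text. A minimal sketch, with a hypothetical `ImplicitSignals` record and purely illustrative weights, might look like this:

```python
from dataclasses import dataclass

@dataclass
class ImplicitSignals:
    """Implicit behavioral data captured alongside the explicit tap."""
    seconds_on_article: float
    dig_deeper_saved: bool
    explain_clicked: bool
    return_visits: int

def engagement_weight(s: ImplicitSignals, median_seconds: float = 90.0) -> float:
    """Illustrative composite weight: dwell time relative to the GC's own
    median, plus bonuses for save, explain, and return behavior."""
    dwell = min(s.seconds_on_article / median_seconds, 2.0)  # cap outliers
    weight = 0.5 * dwell
    if s.dig_deeper_saved:
        weight += 1.0   # active intent to engage, stronger than the tap alone
    if s.explain_clicked:
        weight += 0.5   # unfamiliarity, a development edge
    weight += 0.25 * min(s.return_visits, 2)
    return round(weight, 2)
```

A six-minute read that was saved and revisited scores far above a 45-second skim, even when the explicit tap is identical.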

3
Computation

The Judgment Profile

Accumulated signals compute into a structured profile — a living, evolving model of how this GC thinks about legal risk.

Per-Area Signal Weights

[Chart: per-area distribution of Matters / Might Matter / Doesn't Matter signals across Data Privacy, IP & Tech, Employment, Corp Gov, and Securities]

Temporal Trends

IP & Tech — rising from Might → Matters over 6 months
AI Governance — new "Matters" cluster emerging (Q3)
Employment — stable low priority (18 months)
Data Privacy — consistently high (12 months)

Confidence Levels

Data Privacy: 47 scores
IP & Tech: 34 scores
Securities: 22 scores
Antitrust: 5 scores

50+ scores = high confidence. Under 10 = exploratory. The engine knows what it knows — and what it doesn't.
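The tiering rule stated above reduces to two thresholds. A sketch, where only the endpoints (50+ = high, under 10 = exploratory) come from the text and the middle band's label is an assumption:

```python
def confidence_tier(n_scores: int) -> str:
    """Map a per-area score count to a confidence tier. The 50 and 10
    thresholds are from the text; 'moderate' is an assumed middle label."""
    if n_scores >= 50:
        return "high"
    if n_scores >= 10:
        return "moderate"
    return "exploratory"
```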

"Might Matter" Inventory

AI-Generated IP: 3 scores, trending →
ESG Reporting: 4 scores, new
Cross-Border M&A: 2 scores, early

The active edge of professional development. These are the areas where the GC is still calibrating — they represent where the company may be headed next.

Key Insight "Harvey sees the GC at the negotiating table, after judgment has formed. CounselBrief sees the judgment forming."
EXAMPLE

"Might Matter" Resolves into Conviction

A GC scores AI-generated IP practical applications as "Might Matter" three times over two months. At the time, the company doesn't use AI tools in production.

Then the company licenses a code-generation AI tool. The next two AI-IP applications are scored "Matters."

Result: The profile captures the full uncertainty → conviction arc, and when it crystallized. This transition signals an undisclosed business event (a new tool adoption) without the GC ever reporting it. The judgment data reveals the company's strategic direction before any contract is signed.
4
Intelligence

Three Levels of Inference

The engine doesn't just record signals — it reasons over them. Three inference levels extract increasing value from the same underlying data.

Level 1 · Direct Match

Explicit Signal → Direct Flag

The GC scored IP assignment practical applications as "Matters" 12 times over 14 months. When reviewing any contract, flag IP assignment clauses. High confidence. Straightforward.

IP Assignment (12× Matters) → Flag IP Assignment Clauses

This is table stakes — the minimum viable intelligence from signal data. Necessary, but not where the real value lives.

Level 2 · Adjacent Inference

Implicit Position → Inferred Flag

The GC has never directly scored "derivative works." But she scored IP assignment, trade secret protection, and open-source licensing as "Matters" — and those three topics triangulate around derivative works doctrine.

The engine infers: derivative works clauses matter to this GC, even though she never said so.

IP Assignment · Matters (12×)
Trade Secret · Matters (8×)
OSS Licensing · Matters (6×)
→ Derivative Works Clauses · Inferred — Medium Confidence
Key Insight This surfaces positions the GC holds implicitly — ones she may not have articulated to herself. No intake form or configuration panel can capture what the GC hasn't yet thought to declare.
EXAMPLE

Adjacent Inference Catches What the GC Missed

A PE-backed healthcare GC has never scored a False Claims Act practical application. She's never thought about FCA exposure as a standalone priority.

But her profile shows: Healthcare regulatory compliance → Matters (9×). Whistleblower protections → Matters (6×). Government contract compliance → Matters (4×).

Those three topics triangulate directly onto False Claims Act liability.

Result: When the company acquires a Medicare billing provider, the engine flags FCA exposure in the acquisition documents — a risk the GC hadn't considered because she'd never framed it as a standalone concern. The profile knew before she did.
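The triangulation behind both examples can be sketched as a lookup over an adjacency map. Everything here is illustrative: the topic keys, the map itself, and the three-neighbor threshold are assumptions, not production values:

```python
# Hypothetical adjacency map: each never-scored topic lists the scored
# topics that triangulate onto it.
ADJACENCY = {
    "derivative_works": ["ip_assignment", "trade_secret", "oss_licensing"],
    "fca_liability": ["healthcare_compliance", "whistleblower", "gov_contracts"],
}

def infer_adjacent(matters_counts: dict[str, int],
                   min_neighbors: int = 3) -> dict[str, str]:
    """Return {topic: confidence} for topics the GC never scored directly
    but which are surrounded by enough 'Matters' neighbors."""
    inferred = {}
    for topic, neighbors in ADJACENCY.items():
        if topic in matters_counts:
            continue  # directly scored: Level 1 handles it
        hits = [n for n in neighbors if matters_counts.get(n, 0) > 0]
        if len(hits) >= min_neighbors:
            inferred[topic] = "medium"  # inferred, never explicit
    return inferred
```

Given the IP profile above ({"ip_assignment": 12, "trade_secret": 8, "oss_licensing": 6}), the sketch yields a medium-confidence inference on derivative works, the clause the GC never named.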
Level 3 · Pattern Inference

Meta-Priority → Cross-Practice Flag

The GC scored enforcement-related topics as "Matters" across four unrelated practice areas. Not a single area of focus — a pattern that cuts across the entire practice area taxonomy.

FTC · Privacy · Enforcement ×5
EEOC · Employment · Enforcement ×4
EPA · Environmental · Enforcement ×3
SEC · Securities · Enforcement ×4
→ Meta-Priority: Enforcement-Sensitive Org (pattern-inferred · all contracts flagged)
EXAMPLE

Pattern Inference Reveals an Organizational Trait

No single practice area would reveal this. The GC didn't score "enforcement" as a category — the category doesn't exist in the taxonomy. The engine discovered it by observing a consistent theme across:

FTC enforcement (privacy) · EEOC enforcement (employment) · EPA enforcement (environmental) · SEC enforcement (securities)

Result: The engine identifies one meta-priority: this organization is enforcement-sensitive. Going forward, enforcement-related language gets flagged in every contract type — including practice areas the GC hasn't scored yet. The profile reveals institutional character, not just individual preference.
Key Insight Cross-practice-area pattern recognition cannot be replicated by any configuration panel or intake form. You cannot check a box for a pattern you haven't recognized in yourself.
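One way to sketch this cross-practice detection: tag each scoring event with a theme and count how many distinct practice areas share it. The theme tagging itself is an assumption here; as noted above, "enforcement" is not a category in the taxonomy:

```python
def detect_meta_priorities(scores: list[tuple[str, str, str]],
                           min_areas: int = 3) -> list[str]:
    """Find themes scored 'matters' in at least `min_areas` distinct
    practice areas. Each score is (practice_area, theme, signal);
    lowercase signal names and the 3-area threshold are assumptions."""
    areas_per_theme: dict[str, set[str]] = {}
    for area, theme, signal in scores:
        if signal == "matters":
            areas_per_theme.setdefault(theme, set()).add(area)
    return [t for t, areas in areas_per_theme.items() if len(areas) >= min_areas]
```

A theme that recurs in one area is a focus; the same theme recurring across four unrelated areas is an organizational trait.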
5
Output

Integration Layer

The Judgment Profile becomes a structured API that plugs directly into the execution tools the GC already uses. CounselBrief doesn't replace Harvey, Ironclad, or Robin AI — it makes them smarter.

CB Judgment Profile → API → Harvey · Ironclad · Robin AI

Each execution tool receives contextualized flags with full reasoning chains:

// Flag delivered to execution tool via API
{
  "clause": "§4.2 — Derivative Works Assignment",
  "priority": "high",
  "basis": "adjacent_inference",
  "reasoning": "GC has scored IP assignment (12×), trade secret (8×),
    and OSS licensing (6×) as Matters. These triangulate onto
    derivative works doctrine.",
  "confidence": 0.78,
  "context_warning": "Profile built on SaaS context; this transaction
    involves hardware licensing — verify applicability."
}

Every flag includes: what was found, why it was flagged (basis + reasoning), how confident the engine is, and context warnings when scoring context diverges from transaction context.

"The profile is indicative, not definitive. It is a lens, not a rulebook."

The engine augments human judgment — it never replaces it. Every flag is a starting point for the GC's decision, not the decision itself.

6
Evolution

Temporal Intelligence

A static profile is a photograph. The Judgment Engine builds a time-lapse — one that weights recent judgment more heavily, detects sudden shifts in attention, and distinguishes between decisions that hardened over time and ones that reversed overnight.

Temporal Decay

A score from yesterday carries more weight than a score from last year. The engine applies exponential decay with a configurable half-life — recent judgments shape the profile more than historical ones, but nothing is ever fully erased. When a GC changes roles, companies, or industries, the profile self-corrects without anyone resetting anything.

Today's score: 100% weight
6 months ago: 50% weight
12 months ago: 25% weight
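The half-life weighting reduces to a one-line exponential. A sketch, assuming a 6-month (≈182.5-day) half-life, which reproduces the figures above exactly:

```python
def decay_weight(age_days: float, half_life_days: float = 182.5) -> float:
    """Exponential decay: a score's weight halves every half-life.
    The 6-month default matches the table; the half-life is configurable."""
    return 0.5 ** (age_days / half_life_days)

def decayed_total(ages_days: list[float], half_life_days: float = 182.5) -> float:
    """Decay-weighted count of scoring events: recent judgments dominate
    the profile, but older ones are never fully erased."""
    return sum(decay_weight(a, half_life_days) for a in ages_days)
```

A score from 12 months ago (365 days) carries weight 0.25, and a sudden change of role or industry washes out of the profile within a few half-lives without any manual reset.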

The GC who cared about SPAC litigation in 2025 but now focuses on AI governance has a profile that reflects today, not an outdated historical average.

Scoring Velocity

Five "Matters" scores on employment law in ten days is a fundamentally different signal than five over a year. The first means something just hit the GC's radar — a new deal, a regulatory development, a board directive. The engine detects these velocity spikes in real time.

EXAMPLE

Velocity Detects a Deal Before the Contract Arrives

A GC who typically scores 1 employment item per month suddenly scores 5 in 10 days — all as "Matters." Her baseline rate is roughly 0.25 scores/week; her current rate is 3.5 — a 14x velocity spike.

Result: The engine surfaces 3 employment items she hasn't seen yet and flags the velocity to integration partners: "Elevated engagement in employment law — current attention rate is 14x baseline." When Harvey receives this signal alongside a stock purchase agreement, it knows to weight employment provisions more heavily — before anyone told it to.

Score Revision Tracking

When a GC changes "Matters" to "Doesn't Matter," that reversal is the highest-signal event in the entire system. It means judgment actively shifted. The engine tracks every revision — its type, its direction, and how long the original position was held.

Crystallization: "Might Matter" resolves to conviction. Deliberation complete.
Reversal: "Matters" flips to "Doesn't Matter." Complete judgment shift. 2x weight.
Upgrade: Something new hit the radar. Awareness increasing.
Downgrade: The threat passed or the deal closed. Deprioritizing.

When a reversal occurs, the engine prompts: "You changed your position on non-solicitation. What changed?" That optional response — the GC articulating why her judgment shifted — is the richest signal in the system.
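The four revision types map cleanly onto (old, new) signal pairs. A sketch, assuming the snake_case signal names used in the API payloads:

```python
ORDER = {"doesnt_matter": 0, "might_matter": 1, "matters": 2}

def classify_revision(old: str, new: str) -> str:
    """Classify a score revision into the four types above. Extra
    weighting (e.g. 2x for reversals) is applied downstream."""
    if old == new:
        return "no_change"
    if {old, new} == {"matters", "doesnt_matter"}:
        return "reversal"         # complete judgment shift
    if old == "might_matter":
        return "crystallization"  # uncertainty resolved either way
    return "upgrade" if ORDER[new] > ORDER[old] else "downgrade"
```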

7
Network Intelligence

Cohort Divergence & Contextual Inference

Individual profiles are valuable. The network of profiles creates intelligence that no single GC's engagement could produce.

Cohort Divergence Detection

The engine identifies where a GC's judgment significantly diverges from their peer cohort — not to pressure conformity, but to prompt reflection. The most interesting signal isn't what everyone agrees on. It's where this GC sees differently.

Your peers see this differently

78% of GCs at PE-backed SaaS companies score regulatory enforcement exposure as Matters. You've consistently scored it Doesn't Matter.

This might reflect informed judgment about your specific context — or it might be worth another look.

[I've thought about it] · [Worth reviewing] · [Dismiss]
Key Insight "I've thought about it" is also high-value data. A confirmed divergence — the GC has considered the peer signal and maintained her position — means she has a specific, conscious reason for differing. That's a stronger signal than the original score.

Guardrail: never more than 1 divergence surfaced per week. Never framed as "you're wrong." Always: "your peers see this differently — that might be worth knowing."
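A sketch of the surfacing rule itself: prompt only when the GC's settled position contradicts a strong cohort majority. The 70% threshold is illustrative, and the once-per-week cap would be enforced separately:

```python
def detect_divergence(gc_signal: str, cohort_matters_pct: float,
                      threshold: float = 0.7) -> bool:
    """True when the GC's signal contradicts a strong cohort majority.
    Threshold is an assumed value, not the production setting."""
    if cohort_matters_pct >= threshold and gc_signal == "doesnt_matter":
        return True  # cohort says Matters, GC says noise
    if cohort_matters_pct <= 1 - threshold and gc_signal == "matters":
        return True  # GC prioritizes what the cohort ignores
    return False
```

The 78%-Matters enforcement example above trips the first branch; a GC whose "Matters" agrees with the cohort trips neither.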

Negative Adjacency

Topics can be inversely correlated. A GC who prioritizes trade secret protection and deprioritizes open-source licensing is revealing a tension — proprietary vs. open. The adjacency map captures these inverse relationships, so the engine can infer not just what a GC cares about, but what she actively deprioritizes as a consequence of her priorities.

Trade Secret: Matters (14×)
Open Source Contrib.: Doesn't Matter (9×)

The inverse signal: if the GC prioritizes trade secrets, derivative works clauses that intersect with open-source exposure are deprioritized — saving review time where it doesn't matter.

Contextualized Inference API

The static profile tells an execution tool what the GC cares about. The contextualized API tells it what the GC should care about in this specific transaction. The integration partner sends the deal context — industry, deal size, jurisdictions, counterparty type — and the engine returns flags calibrated to both the profile and the transaction.

// Contextualized flag with deal context
{
  "clause": "§12.3 — Non-Solicitation",
  "profile_signal": "doesnt_matter",
  "adjusted_signal": "review_recommended",
  "context_adjustment": "context_mismatch",
  "reasoning": "Profile deprioritizes non-solicitation (11 scores,
    SaaS context). However, this pharma acquisition involves
    scientific workforce — non-solicitation may be critical for
    key scientist retention.",
  "novel_context": "First international deal. No scoring history
    for cross-border regulatory. Full human review recommended."
}

Three context adjustments: amplification (transaction reinforces profile), mismatch warning (profile formed in different context), and novelty detection (GC has never encountered this dimension). The engine knows what it doesn't know.
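The three adjustments can be sketched as a decision over deal context versus profile context. The precedence (novelty checked first) is an assumption:

```python
def context_adjustment(profile_context: str, deal_context: str,
                       has_scoring_history: bool) -> str:
    """Pick one of the three context adjustments described above.
    Contexts are simplified to single labels for this sketch."""
    if not has_scoring_history:
        return "novelty_detection"  # GC has never seen this dimension
    if profile_context != deal_context:
        return "context_mismatch"   # profile formed elsewhere: warn
    return "amplification"          # transaction reinforces profile
```

The pharma payload above is the middle branch: a SaaS-trained profile meeting a pharma deal yields a mismatch warning rather than a silent pass-through.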

"The static profile informs. The contextualized endpoint advises."

From "here's what the GC cares about" to "here's what the GC should care about in this specific deal."

8
Moat

The Compound Effect

Every scoring event makes the next inference more accurate. This creates a compounding data flywheel that cannot be cold-started by a competitor.

More engagement → richer profiles
Richer profiles → better inference
Better inference → more valuable integrations
More valuable integrations → stickier product
Stickier product → more engagement
Longitudinal data that compounds — this cannot be cold-started.
~500 scores / year / GC · 15 practice areas · 3 inference levels → compound value
Key Insight "Harvey sees a snapshot. CounselBrief sees the time-lapse."
EXAMPLE

Cohort Intelligence: The Network Effect

Individual profiles are valuable. Aggregated profiles are transformative.

73% of PE-backed SaaS GCs score the SEC cybersecurity disclosure rule as "Matters" — vs. only 12% of manufacturing GCs.

This is revealed preference, not survey data. No one asked these GCs to rank cybersecurity. Their daily engagement produced this signal organically.

Result: Cohort intelligence feeds three outputs: (1) content strategy — surface emerging topics before they trend; (2) topic detection — identify regulatory shifts as they form; (3) publishable indices — the "CounselBrief GC Priority Index" becomes a market signal in itself. The aggregated judgment of 1,000 GCs is a dataset that didn't previously exist.
Summary

Key Differentiators

Breadth

Covers topics the GC hasn't transacted on yet. Harvey and Ironclad only see active deals — CounselBrief captures judgment across the full 15-area taxonomy, including areas where no contract has been signed.

Depth

Captures uncertainty via "Might Matter." Transactions only reveal final decisions — they never show the GC's evolving thinking or the priorities she hasn't yet committed to. The richest signal is the one still forming.

Continuity

Daily engagement vs. episodic deal-cycle data. A GC who hasn't signed a contract in 3 months is invisible to transaction tools. She's been scoring with CounselBrief every morning — her profile is sharper than ever.

Network

Cohort divergence detection turns the population into a mirror. The engine tells the GC where she differs from her peers — not to conform, but to reflect. Confirmed divergences are the strongest signals in the system.

Temporality

Decay, velocity, and revision tracking mean the profile is a living system. It knows what the GC cares about now, what just spiked this week, and where she changed her mind — and why.

Context

The contextualized API doesn't just report the profile — it advises per-transaction. It catches when a SaaS-trained profile meets a pharma deal and flags what the GC hasn't seen before.