Daily Engagement
Every morning, the GC receives 3 practical applications drawn from 15 practice areas. Each is a real-world legal scenario: a framework analysis of the issue, its implications, and a judgment question designed to reveal what matters to this GC in this company.
Not hypotheticals. Not news summaries. Concrete scenarios calibrated to force a prioritization signal.
~1,000 unique scenarios per year across the full practice area taxonomy. No GC sees the same sequence. Scenario selection adapts over time based on accumulated profile data.
Three-Signal Scoring
After reading each practical application, the GC taps one of three signals. That's the entire interaction — a single judgment call, repeated ~500 times per year.
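The scoring vocabulary can be sketched as a minimal data model. All identifiers below (Signal, ScoringEvent, the field names) are illustrative assumptions, not the product's actual schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Signal(Enum):
    """The three-tap vocabulary (names assumed for illustration)."""
    MATTERS = "matters"
    MIGHT_MATTER = "might_matter"
    DOESNT_MATTER = "doesnt_matter"

@dataclass
class ScoringEvent:
    """One judgment call: a GC taps one signal on one scenario."""
    gc_id: str
    practice_area: str
    scenario_id: str
    signal: Signal
    scored_on: date

# One of the ~500 events captured per GC per year
event = ScoringEvent("gc-001", "data_privacy", "scn-4411",
                     Signal.MATTERS, date(2025, 3, 14))
print(event.signal.value)  # matters
```

Everything downstream — profiles, inference, decay — operates on streams of these events.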
Scoring Reveals a Hidden Priority
A SaaS GC scores 8 data privacy practical applications as "Matters" over 4 months — but consistently scores employment data handling as "Doesn't Matter."
The engine learns: this GC's data priority is customer data, not employee data. The company's risk posture is external-facing.
Behavioral Signals Beyond the Tap
The three-signal tap is the primary capture mechanism — but it's not the only one. Every interaction generates implicit behavioral data that enriches the profile:
~500 explicit scoring events per year per GC, plus hundreds of implicit behavioral signals. Continuous, organic, non-survey. The GC never fills out a form — she just reads and reacts.
The Judgment Profile
Accumulated signals compute into a structured profile — a living, evolving model of how this GC thinks about legal risk.
Per-Area Signal Weights
Matters · Might Matter · Doesn't Matter
Temporal Trends
Confidence Levels
50+ scores = high confidence. Under 10 = exploratory. The engine knows what it knows — and what it doesn't.
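The confidence tiers above can be expressed as a simple threshold function. The two cutoffs (50+ and under 10) come from the text; the middle "moderate" band is an assumed label:

```python
def confidence_tier(score_count: int) -> str:
    """Map a per-area score count to a confidence tier.
    50+ = high and <10 = exploratory are from the text;
    the 'moderate' middle band is an assumption."""
    if score_count >= 50:
        return "high"
    if score_count < 10:
        return "exploratory"
    return "moderate"

print(confidence_tier(62))  # high
print(confidence_tier(7))   # exploratory
print(confidence_tier(23))  # moderate
```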
"Might Matter" Inventory
The active edge of professional development. These are the areas where the GC is still calibrating — they represent where the company may be headed next.
"Might Matter" Resolves into Conviction
A GC scores AI-generated IP practical applications as "Might Matter" three times over two months. At the time, the company doesn't use AI tools in production.
Then the company licenses a code-generation AI tool. The next two AI-IP applications are scored "Matters."
Three Levels of Inference
The engine doesn't just record signals — it reasons over them. Three inference levels extract increasing value from the same underlying data.
Explicit Signal → Direct Flag
The GC scored IP assignment practical applications as "Matters" 12 times over 14 months. When reviewing any contract, flag IP assignment clauses. High confidence. Straightforward.
IP assignment: Matters (12×)
This is table stakes — the minimum viable intelligence from signal data. Necessary, but not where the real value lives.
Implicit Position → Inferred Flag
The GC has never directly scored "derivative works." But she scored IP assignment, trade secret protection, and open-source licensing as "Matters" — and those three topics triangulate around derivative works doctrine.
The engine infers: derivative works clauses matter to this GC, even though she never said so.
IP assignment: Matters (12×)
Trade secret protection: Matters (8×)
Open-source licensing: Matters (6×)
Derivative works: Inferred — Medium Confidence
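Adjacent inference can be sketched as a lookup over a curated adjacency map. The map, function names, and the three-neighbor threshold below are all illustrative assumptions:

```python
# Hypothetical adjacency map: each unscored topic lists the scored
# topics that triangulate onto it.
ADJACENCY = {
    "derivative_works": ["ip_assignment", "trade_secret", "oss_licensing"],
}

# This GC's accumulated "Matters" counts (from the example above)
scores = {"ip_assignment": 12, "trade_secret": 8, "oss_licensing": 6}

def infer(topic, scores, adjacency, min_neighbors=3):
    """Flag an unscored topic when enough adjacent topics are scored."""
    neighbors = adjacency.get(topic, [])
    hits = [n for n in neighbors if scores.get(n, 0) > 0]
    if len(hits) >= min_neighbors:
        return {"topic": topic, "flag": "matters",
                "basis": "adjacent_inference", "confidence": "medium"}
    return None  # not enough adjacent evidence

print(infer("derivative_works", scores, ADJACENCY))
```

Note the output carries a lower confidence than a direct flag would — the GC never said so herself.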
Adjacent Inference Catches What the GC Missed
A PE-backed healthcare GC has never scored a False Claims Act practical application. She's never thought about FCA exposure as a standalone priority.
But her profile shows: Healthcare regulatory compliance → Matters (9×). Whistleblower protections → Matters (6×). Government contract compliance → Matters (4×).
Those three topics triangulate directly onto False Claims Act liability.
Meta-Priority → Cross-Practice Flag
The GC scored enforcement-related topics as "Matters" across four unrelated practice areas. Not a single area of focus — a pattern that cuts across the entire practice area taxonomy.
FTC (privacy): Enforcement ×5
EEOC (employment): Enforcement ×4
EPA (environmental): Enforcement ×3
SEC (securities): Enforcement ×4
Pattern-inferred · All contracts flagged
Pattern Inference Reveals an Organizational Trait
No single practice area would reveal this. The GC didn't score "enforcement" as a category — the category doesn't exist in the taxonomy. The engine discovered it by observing a consistent theme across:
FTC enforcement (privacy) · EEOC enforcement (employment) · EPA enforcement (environmental) · SEC enforcement (securities)
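A minimal sketch of this cross-practice detection: tag each "Matters" event with a cross-cutting theme, then surface any theme that recurs across enough distinct areas. The tags, data, and three-area threshold are assumptions for illustration:

```python
from collections import defaultdict

# Each "Matters" event tagged with its practice area and a cross-cutting
# theme; "enforcement" is not a taxonomy category, just a tag.
matters_events = [
    ("privacy", "enforcement"), ("privacy", "enforcement"),
    ("employment", "enforcement"),
    ("environmental", "enforcement"),
    ("securities", "enforcement"),
]

def meta_priorities(events, min_areas=3):
    """A theme recurring across enough unrelated areas is a meta-priority."""
    areas_by_theme = defaultdict(set)
    for area, theme in events:
        areas_by_theme[theme].add(area)
    return {t: sorted(a) for t, a in areas_by_theme.items()
            if len(a) >= min_areas}

print(meta_priorities(matters_events))
# {'enforcement': ['employment', 'environmental', 'privacy', 'securities']}
```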
Integration Layer
The Judgment Profile becomes a structured API that plugs directly into the execution tools the GC already uses. CounselBrief doesn't replace Harvey, Ironclad, or Robin AI — it makes them smarter.
Each execution tool receives contextualized flags with full reasoning chains:
{
  "clause": "§4.2 — Derivative Works Assignment",
  "priority": "high",
  "basis": "adjacent_inference",
  "reasoning": "GC has scored IP assignment (12×), trade secret (8×), and OSS licensing (6×) as Matters. These triangulate onto derivative works doctrine.",
  "confidence": 0.78,
  "context_warning": "Profile built on SaaS context; this transaction involves hardware licensing — verify applicability."
}
Every flag includes: what was found, why it was flagged (basis + reasoning), how confident the engine is, and context warnings when scoring context diverges from transaction context.
"The profile is indicative, not definitive. It is a lens, not a rulebook."
The engine augments human judgment — it never replaces it. Every flag is a starting point for the GC's decision, not the decision itself.
Temporal Intelligence
A static profile is a photograph. The Judgment Engine builds a time-lapse — one that weights recent judgment more heavily, detects sudden shifts in attention, and distinguishes between decisions that hardened over time and ones that reversed overnight.
Temporal Decay
A score from yesterday carries more weight than a score from last year. The engine applies exponential decay with a configurable half-life — recent judgments shape the profile more than historical ones, but nothing is ever fully erased. When a GC changes roles, companies, or industries, the profile self-corrects without anyone resetting anything.
Today: 100% weight
One half-life ago: 50% weight
Two half-lives ago: 25% weight
The GC who cared about SPAC litigation in 2025 but now focuses on AI governance has a profile that reflects today, not an outdated historical average.
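The decay described above is one line of math. The 180-day half-life below is an illustrative default, not a product value:

```python
def decayed_weight(age_days: float, half_life_days: float = 180.0) -> float:
    """Exponential decay: a score loses half its weight every half-life.
    Nothing reaches zero, so history is downweighted, never erased."""
    return 0.5 ** (age_days / half_life_days)

print(decayed_weight(0))    # 1.0  — today: full weight
print(decayed_weight(180))  # 0.5  — one half-life ago
print(decayed_weight(360))  # 0.25 — two half-lives ago
```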
Scoring Velocity
Five "Matters" scores on employment law in ten days is a fundamentally different signal than five over a year. The first means something just hit the GC's radar — a new deal, a regulatory development, a board directive. The engine detects these velocity spikes in real time.
Velocity Detects a Deal Before the Contract Arrives
A GC who typically scores about 1 employment item per month suddenly scores 5 in 10 days — all as "Matters." Her baseline rate is ~0.3 scores per week; her current rate is 3.5 — a nearly 12x velocity spike.
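The spike calculation is a ratio of rates. Function and parameter names are assumptions; the numbers reproduce the example above:

```python
def velocity_spike(recent_scores: int, window_days: int,
                   baseline_per_week: float) -> float:
    """Ratio of the current scoring rate to the GC's baseline rate."""
    current_per_week = recent_scores / window_days * 7
    return current_per_week / baseline_per_week

# 5 employment scores in 10 days vs a 0.3/week baseline
spike = velocity_spike(recent_scores=5, window_days=10, baseline_per_week=0.3)
print(round(spike, 1))  # 11.7 — well above, say, a 3x spike threshold
```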
Score Revision Tracking
When a GC changes "Matters" to "Doesn't Matter," that reversal is the highest-signal event in the entire system. It means judgment actively shifted. The engine tracks every revision — its type, its direction, and how long the original position was held.
"Might Matter" resolves to conviction. Deliberation complete.
"Matters" flips to "Doesn't Matter." Complete judgment shift. 2x weight.
Something new hit the radar. Awareness increasing.
The threat passed or the deal closed. Deprioritizing.
When a reversal occurs, the engine prompts: "You changed your position on non-solicitation. What changed?" That optional response — the GC articulating why her judgment shifted — is the richest signal in the system.
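The revision taxonomy above can be classified mechanically. The 2x weight for reversals is from the text; the other multipliers and all names are illustrative assumptions:

```python
# Weight multipliers per revision type. reversal = 2.0 is from the text;
# the other values are assumed for illustration.
REVISION_WEIGHTS = {
    "reversal": 2.0,       # Matters <-> Doesn't Matter: full judgment shift
    "resolution": 1.5,     # Might Matter resolved to conviction
    "escalation": 1.25,    # moved up: something new on the radar
    "de_escalation": 1.25, # moved down: threat passed / deal closed
}

RANK = {"doesnt_matter": 0, "might_matter": 1, "matters": 2}

def classify_revision(old: str, new: str) -> str:
    """Type a signal change by its endpoints and direction."""
    if {old, new} == {"matters", "doesnt_matter"}:
        return "reversal"
    if old == "might_matter":
        return "resolution"
    return "escalation" if RANK[new] > RANK[old] else "de_escalation"

kind = classify_revision("matters", "doesnt_matter")
print(kind, REVISION_WEIGHTS[kind])  # reversal 2.0
```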
Cohort Divergence & Contextual Inference
Individual profiles are valuable. The network of profiles creates intelligence that no single GC's engagement could produce.
Cohort Divergence Detection
The engine identifies where a GC's judgment significantly diverges from their peer cohort — not to pressure conformity, but to prompt reflection. The most interesting signal isn't what everyone agrees on. It's where this GC sees differently.
78% of GCs at PE-backed SaaS companies score regulatory enforcement exposure as Matters. You've consistently scored it Doesn't Matter.
This might reflect informed judgment about your specific context — or it might be worth another look.
Guardrail: never more than 1 divergence surfaced per week. Never framed as "you're wrong." Always: "your peers see this differently — that might be worth knowing."
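Divergence surfacing, including the one-per-week guardrail, can be sketched as a single gate. The 0.5 divergence cutoff and all names are assumptions; the weekly cap is from the text:

```python
from datetime import date, timedelta

def should_surface(gc_rate: float, cohort_rate: float,
                   last_surfaced: date, today: date,
                   threshold: float = 0.5) -> bool:
    """Surface a divergence only if the GC's 'Matters' rate differs from
    the cohort's by more than `threshold` AND none was surfaced this week
    (guardrail: never more than 1 divergence per week)."""
    diverges = abs(gc_rate - cohort_rate) > threshold
    cooled_down = today - last_surfaced >= timedelta(days=7)
    return diverges and cooled_down

# 78% of peers score enforcement as Matters; this GC: 0%
print(should_surface(0.0, 0.78, date(2025, 3, 1), date(2025, 3, 14)))   # True
print(should_surface(0.0, 0.78, date(2025, 3, 12), date(2025, 3, 14)))  # False
```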
Negative Adjacency
Topics can be inversely correlated. A GC who prioritizes trade secret protection and deprioritizes open-source licensing is revealing a tension — proprietary vs. open. The adjacency map captures these inverse relationships, so the engine can infer not just what a GC cares about, but what she actively deprioritizes as a consequence of her priorities.
Trade secret protection: Matters (14×)
Open-source licensing: Doesn't Matter (9×)
The inverse signal: if the GC prioritizes trade secrets, derivative works clauses that intersect with open-source exposure are deprioritized — saving review time where it doesn't matter.
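One way to sketch this inverse relationship: a hypothetical inverse-adjacency map plus a strength threshold. Every name and number below is an illustrative assumption:

```python
# Hypothetical inverse-adjacency map: a strong "Matters" signal on the
# key topic implies deprioritizing the listed topics.
INVERSE = {"trade_secret": ["oss_licensing_exposure"]}

def deprioritized(scores: dict, inverse: dict, min_matters: int = 10):
    """Collect topics deprioritized as a consequence of strong priorities."""
    out = set()
    for topic, count in scores.items():
        if count >= min_matters:                # strong "Matters" signal...
            out.update(inverse.get(topic, []))  # ...deprioritizes inverses
    return out

print(deprioritized({"trade_secret": 14}, INVERSE))
# {'oss_licensing_exposure'} — skip review where it doesn't matter
```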
Contextualized Inference API
The static profile tells an execution tool what the GC cares about. The contextualized API tells it what the GC should care about in this specific transaction. The integration partner sends the deal context — industry, deal size, jurisdictions, counterparty type — and the engine returns flags calibrated to both the profile and the transaction.
{
  "clause": "§12.3 — Non-Solicitation",
  "profile_signal": "doesnt_matter",
  "adjusted_signal": "review_recommended",
  "context_adjustment": "context_mismatch",
  "reasoning": "Profile deprioritizes non-solicitation (11 scores, SaaS context). However, this pharma acquisition involves scientific workforce — non-solicitation may be critical for key scientist retention.",
  "novel_context": "First international deal. No scoring history for cross-border regulatory. Full human review recommended."
}
Three context adjustments: amplification (transaction reinforces profile), mismatch warning (profile formed in different context), and novelty detection (GC has never encountered this dimension). The engine knows what it doesn't know.
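The three adjustments reduce to an ordered decision: novelty first, then mismatch, then amplification. The function, signals, and context labels below are assumptions sketching that ordering:

```python
def adjust(profile_signal: str, profile_context: str,
           deal_context: str, has_history: bool) -> str:
    """Apply the three context adjustments, most conservative first."""
    if not has_history:
        return "full_human_review"        # novelty: never seen this dimension
    if deal_context != profile_context:
        return "review_recommended"       # mismatch: profile formed elsewhere
    return f"amplified_{profile_signal}"  # amplification: deal reinforces it

# A SaaS-trained profile meets a pharma acquisition:
print(adjust("doesnt_matter", "saas", "pharma", has_history=True))
# review_recommended
```

Ordering matters: a novel dimension overrides everything, because no profile signal applies to it at all.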
"The static profile informs. The contextualized endpoint advises."
From "here's what the GC cares about" to "here's what the GC should care about in this specific deal."
The Compound Effect
Every scoring event makes the next inference more accurate. This creates a compounding data flywheel that cannot be cold-started by a competitor.
Cohort Intelligence: The Network Effect
Individual profiles are valuable. Aggregated profiles are transformative.
73% of PE-backed SaaS GCs score the SEC cybersecurity disclosure rule as "Matters" — vs. only 12% of manufacturing GCs.
This is revealed preference, not survey data. No one asked these GCs to rank cybersecurity. Their daily engagement produced this signal organically.
Key Differentiators
Breadth
Covers topics the GC hasn't transacted on yet. Harvey and Ironclad only see active deals — CounselBrief captures judgment across the full 15-area taxonomy, including areas where no contract has been signed.
Depth
Captures uncertainty via "Might Matter." Transactions only reveal final decisions — they never show the GC's evolving thinking or the priorities she hasn't yet committed to. The richest signal is the one still forming.
Continuity
Daily engagement vs. episodic deal-cycle data. A GC who hasn't signed a contract in 3 months is invisible to transaction tools. She's been scoring with CounselBrief every morning — her profile is sharper than ever.
Network
Cohort divergence detection turns the population into a mirror. The engine tells the GC where she differs from her peers — not to conform, but to reflect. Confirmed divergences are the strongest signals in the system.
Temporality
Decay, velocity, and revision tracking mean the profile is a living system. It knows what the GC cares about now, what just spiked this week, and where she changed her mind — and why.
Context
The contextualized API doesn't just report the profile — it advises per-transaction. It catches when a SaaS-trained profile meets a pharma deal and flags what the GC hasn't seen before.