The document review problem
In a typical commercial litigation case, document review accounts for 60-80% of total discovery costs, according to the RAND Institute for Civil Justice. For a mid-size firm handling a case with 50,000 documents, that means associates are spending weeks — sometimes months — reading through contracts, emails, memos, and filings to identify what's relevant.
The cost is staggering. Thomson Reuters estimates that document review runs $1.50-$2.50 per document when done by associates billing at standard rates. For that 50,000-document case, you're looking at $75,000-$125,000 in review costs alone. And that's before anyone starts analyzing the relevant documents or building arguments.
For law firms between $2M and $20M in revenue, this isn't an abstract problem. It's the reason your associates are working late, your margins on litigation matters are thin, and your clients are starting to push back on discovery bills.
How AI document review actually works
AI document review isn't new — technology-assisted review (TAR) has been accepted by courts since 2012, when Judge Andrew Peck ruled in Da Silva Moore v. Publicis Groupe that TAR was an acceptable way to handle e-discovery. What's changed in the last two years is the quality of the technology.
Modern AI review systems work through a process called continuous active learning. Here's the practical version:
- You train the system with a seed set. A senior attorney reviews 200-500 documents and tags them — relevant, not relevant, privileged, confidential. This takes a few hours, not weeks.
- AI reviews the remaining documents. The system reads every document, compares it against the patterns it learned from the seed set, and assigns a relevance score. It flags documents that are potentially responsive, potentially privileged, or that contain key terms.
- The system gets smarter as you go. As attorneys review the AI's flagged documents and make corrections, the model refines itself. After a few rounds, accuracy rates typically hit 90-95% — which is actually higher than purely manual review. (Studies from the TREC Legal Track, run by NIST, have consistently shown that manual review by humans averages about 60-75% recall.)
- Attorneys review the flagged subset. Instead of reading 50,000 documents, your associates are reviewing 3,000-5,000 flagged documents. They're applying legal judgment — is this actually privileged? Does this pattern support our theory? — rather than just reading and sorting.
The time savings are dramatic. What used to take a team of 4-5 associates several weeks now takes 1-2 associates a few days, with higher accuracy.
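The loop above can be sketched in a few lines of Python. This is an illustrative toy only — simple per-term weights and relevance ranking over a made-up four-document corpus — not any vendor's actual model, which would be far more sophisticated. The document texts, function names, and scoring scheme here are all hypothetical.

```python
# Toy sketch of continuous active learning (CAL) for document review.
# Illustration only -- real review platforms use far richer models.

def train(labeled):
    """Learn per-term relevance weights from attorney-tagged documents."""
    weights = {}
    for text, relevant in labeled:
        for term in set(text.lower().split()):
            weights[term] = weights.get(term, 0) + (1 if relevant else -1)
    return weights

def score(weights, text):
    """Relevance score: average learned weight of the document's terms."""
    terms = text.lower().split()
    return sum(weights.get(t, 0) for t in terms) / max(len(terms), 1)

def review_round(weights, unreviewed, batch_size=2):
    """Surface the highest-scoring documents for attorney review."""
    ranked = sorted(unreviewed, key=lambda d: score(weights, d), reverse=True)
    return ranked[:batch_size]

# 1. Seed set: a senior attorney tags a small sample (hypothetical docs).
seed = [
    ("merger agreement termination fee", True),
    ("quarterly earnings guidance draft", True),
    ("office holiday party schedule", False),
    ("cafeteria menu update", False),
]
weights = train(seed)

# 2. The system scores the remaining corpus and flags top candidates.
corpus = [
    "draft merger agreement with revised termination fee",
    "holiday party rsvp list",
    "earnings guidance revision memo",
    "parking garage maintenance notice",
]
flagged = review_round(weights, corpus)

# 3. Attorney corrections feed back in and the model retrains --
#    the "continuous" part of continuous active learning.
corrections = [(flagged[0], True)]
weights = train(seed + corrections)
```

Each pass through steps 2 and 3 narrows the pool: attorneys correct the model's calls on the flagged batch, the model retrains, and the next batch is better targeted.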
What associates do instead
This is the part that matters to firm leadership. When you free associates from hundreds of hours of document review per year, where does that time go?
More substantive analysis. Instead of reading documents to find the relevant ones, associates are analyzing the relevant documents. They're identifying patterns, building timelines, spotting inconsistencies. The work that actually requires a law degree.
Better client relationships. Associates who aren't buried in document review have time to draft client updates, prepare for calls, and engage more deeply with the case strategy. Partners notice. Clients notice.
Higher-value billable work. This is the financial argument. An associate billing $350/hour for document review generates revenue, but it's the kind of work that's increasingly difficult to bill at full rates. An associate billing $350/hour for legal analysis, motion drafting, or deposition preparation is doing work clients willingly pay for.
Faster matter resolution. When document review takes days instead of weeks, your overall case timelines compress. That's good for clients, good for cash flow, and good for the attorneys who don't have cases dragging on for months longer than necessary.
Beyond document review: 4 other wins
Document review is the most obvious use case, but it's not the only one. Here's where else AI is making a measurable difference in mid-size firms:
1. Contract analysis and due diligence
In M&A transactions and corporate deals, associates spend days reviewing stacks of contracts for key terms, obligations, and risk factors. AI reads through hundreds of contracts in hours, flagging non-standard clauses, change-of-control provisions, termination rights, and liability caps. Associates then review the flagged items instead of reading every page of every contract.
2. Legal research acceleration
Traditional legal research involves searching Westlaw or Lexis, reading through case summaries, pulling relevant statutes, and synthesizing findings. AI research tools don't replace this process, but they accelerate the gathering phase — finding relevant cases, identifying key holdings, and organizing citations. What used to take 4 hours takes 45 minutes.
3. Intake and conflict checking
Every new matter requires a conflict check. For firms handling hundreds of matters, this involves cross-referencing clients, opposing parties, related entities, and prior representations. AI handles this search instantly, flagging potential conflicts for attorney review instead of requiring a paralegal to manually search the firm's database.
4. Brief and memo drafting
AI can generate first drafts of standard legal documents — demand letters, motions to compel, discovery requests — using your firm's templates, prior work product, and the facts of the current case. The attorney reviews and edits rather than writing from scratch. For routine filings, this cuts drafting time by 40-60%.
The ethics question (answered directly)
Every legal technology conversation eventually hits the ethics question, and it should. Attorneys have ethical obligations around competence, confidentiality, and supervision that can't be hand-waved away.
Here's where things stand:
Competence (Rule 1.1). Comment 8 to ABA Model Rule 1.1 makes clear that attorneys have a duty to understand the benefits and risks of the technology they use, and the ABA's Formal Opinion 512 (2024) applies that duty directly to generative AI. This doesn't mean partners need to understand machine learning algorithms. It means someone at the firm needs to understand what the AI tool does, how it works at a functional level, and where its limitations are. Treating AI output as final work product without review would be a competence issue. Using AI as a research and review tool with attorney oversight is well within the rules.
Confidentiality (Rule 1.6). This is the real concern. Client data going into any AI system needs to be protected. The practical requirement: use AI tools that keep your data within your firm's environment (on-premise or private cloud), don't train on your documents, and have clear data handling policies. Most enterprise legal AI providers offer these guarantees. Consumer tools like ChatGPT, where data might be used for model training, aren't appropriate for client work.
Supervision (Rules 5.1, 5.3). AI output requires the same level of supervision you'd apply to work by a junior associate. A partner or senior attorney reviews the AI's work before it goes to a client or a court. The AI is a tool, not a practitioner. Every court that has addressed this — and several have, after attorneys filed AI-generated briefs without checking them — has been clear: the attorney is responsible for the work product, regardless of how it was created.
The bottom line: AI in legal practice isn't an ethics gray area anymore. It's a well-mapped space with clear guidelines. The ethical risk isn't in using AI — it's in using it without understanding it.
Getting started without disrupting your practice
The firms that adopt AI successfully don't try to change everything at once. They pick one pain point, prove it works, and expand from there.
Here's the path we recommend:
Week 1-2: Assessment. Identify where your firm's time is going. Pull data on hours spent by task type — document review, research, drafting, admin. Find the single biggest time sink that doesn't require senior attorney judgment.
Week 3-4: Pilot setup. Implement an AI tool for that one task, with one practice group. Set up proper data handling, train the attorneys who'll use it, and establish a review protocol.
Week 5-8: Measure results. Track time savings, accuracy, and attorney satisfaction. Compare the AI-assisted workflow to the previous process with real numbers, not impressions.
Month 3+: Expand or adjust. If the pilot works, roll it out to additional practice groups. If it doesn't, you've spent a few weeks testing, not six months on a firm-wide rollout that didn't deliver.
The firms getting the most from AI right now are the ones that started small and measured carefully. If you want help identifying where to start, our AI assessment is designed for exactly that — mapping your firm's workflows and finding the highest-ROI starting point.
Want to see where AI fits in your firm?
We'll map your workflows, identify the biggest time sinks, and show you what's possible. Thirty minutes. No pitch. Attorney-to-attorney.
Book Your AI Assessment