The advent of AI is being billed as an upgrade on the scale of the Industrial Revolution. Lawyers have begun to implement AI not only for research or to summarize voluminous documents, but also to brainstorm arguments and draft legal filings.
There have been hiccups, but guardrails are being introduced. Hallucinations that may result from generative AI can be addressed through human verification; confidentiality concerns may be remedied by working only in controlled proprietary AI environments. The legal industry appears to be digesting AI tools more comfortably every day.
We nevertheless urge caution. Any new technology can have unintended side effects. Smartphones, for instance, devour our attention. We all knew watching too much TV was bad for us, but smartphones (especially when paired with social media) are like TV screens on steroids. Like the mental-handicap radio worn in the ear by Harrison Bergeron’s father in Vonnegut’s 1961 short story, smartphones displace deep thought.
AI could be worse.
“De-skilling” was the process by which the Industrial Revolution replaced skilled artisans with unskilled workers. Dividing a manufacturing process into simplified (i.e., de-skilled) steps allowed goods to be cheaply mass-produced. Nowadays, fewer people specialize in making shoes or chairs by hand.
AI offers to de-skill tasks that typically require mental work. Of course, so do calculators. We don’t mind pushing a few buttons instead of doing long division because it frees up our mental energy for other tasks. That feels like a benefit with a cost we are willing to pay. But is that the case with every tool that offers to reduce cognitive load?
The effects of using AI to handle tasks requiring critical thinking are still being studied, but there are reasons for concern. A January 2025 study published in Societies found “a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.” A 2025 study published in The Lancet Gastroenterology & Hepatology found that endoscopists who learned to rely on AI to interpret medical images were subsequently 6 percentage points less effective at detecting precancerous growths without AI assistance than they had been before they began relying on AI. It appears at least possible that, by reducing mental struggle, AI could allow critical thinking skills to atrophy, even those of experienced professionals.
Is this really a concern in law, though? If a law partner uses AI (instead of a first-year associate) to draft a legal document, is anything lost? The partner gets the draft back in minutes, not hours, and can focus on editing. The associate may be asked to verify assertions (to protect against hallucinations) and even to pre-edit AI output before it goes to the partner.
But is the struggle of editing AI output enough to teach the skill of editing? The law partner whose academic career preceded the AI era will have struggled through the writing and editing process hundreds or thousands of times. What, though, of the associate whose academic career was AI-aided?
Students, it appears, are embracing generative AI. In Youth and Government, a program designed to train future leaders in the arts of argument and policymaking, many students use AI to write their bills. Some even use it to prepare their speeches — both canned and “extemporaneous.”
At the University of Texas, students often enter the Writing Center having used AI to generate ideas, polish their essays, or simply write them in the first place. Most concerning, AI seems especially prevalent in personal statements, likely because of how much pressure students feel for those statements to succeed. The resulting documents tend to lack a personal voice, for obvious reasons. When asked to work on the text with a consultant, students are increasingly helpless; they seem to think of themselves as generators of raw material for the AI to refine rather than as writers making creative decisions. At other university writing centers, so much AI output is being presented for editing that, as one midwestern university writing professor puts it, “the joy of helping someone learn to write is gone.”
Evidence that AI is beginning to do the writing for us is not just anecdotal. According to the College Board, the majority of high school students use generative AI for schoolwork. A 2025 Inside Higher Ed survey of over 1,000 students at 166 colleges and universities found that 85 percent of respondents had used generative AI for coursework in the last year, 25 percent had used AI to complete assignments, and 19 percent had used AI to write full essays.
The apparent effects are not attractive. According to a June 2025 study reported on by Education Week, students who write with assistance from ChatGPT exhibit less brain activity, are less able to recall what they’ve written, feel less ownership of their work, and produce less creative output.
To the extent that AI relieves students, law students, and/or young lawyers of the struggle to knit words into persuasive argument, we are concerned that precious skills may be lost. As Annabel V. Teiling wrote for the New York State Bar Association in October 2025,
“AI automation is taking away … fundamental learning experiences, and there is a risk that younger attorneys are developing an overreliance on these tools without fully grasping the underlying legal concepts, the context of the information, or the critical thinking necessary to identify AI’s inherent limitations, such as hallucinations or biased outputs. This could lead to a generation of lawyers with diminished independent judgment and competence and an inability to ‘dig deeper’ when AI falls short, potentially compromising the quality of legal advice and advocacy.”
We don’t mind that we don’t know how to hand-build shoes or chairs. We don’t mind leaning on calculators for simple math. We do wish to avoid a world where legal “writing” consists of prompting AI, and “editing” consists of AI output being reviewed by people who have never (or seldom) written without AI assistance.
Avoiding such an outcome will require discipline in the face of overwhelming convenience. As Daniel Kahneman articulated in Thinking, Fast and Slow,
“A general ‘law of least effort’ applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.”
Today, AI appears to serve primarily as a time-saver in argument preparation, replacing initial drafting and supplementing brainstorming. As AI models continue to advance and AI becomes more embedded in legal practice, though, the “least demanding course of action” may become using AI for even the most thinking-intensive steps of writing. Any efficiency or quality that results may come at the cost of a lawyer’s underlying ability. As discussed, this is already a concern in schools. (Law firms are not schools, but (1) both lawyers and students are human, and (2) tomorrow’s lawyers are students today.)
To the extent that AI eliminates the struggle that would have built a skill we want to retain, it may be worth constructing guardrails that require at least enough struggle to preserve learning. In schools, for instance, students can be required (to the extent possible) to compose essays without AI assistance. In law firms, because a lawyer’s education is not complete the moment the bar is passed, similar guardrails may be necessary. The New York-based law firm Debevoise & Plimpton has published content arguing that AI should be avoided “when learning the subject is as important as the content being created.” We agree.
Our ability to think critically and express those thoughts in language is what separates humans from other animals. Let’s not rush to outsource it.
Disclaimer: The views expressed here are strictly our own, and may evolve as we digest new information.
Robert Manz, CPA/CFF, CFA, CFE is a partner at BVA Group, where he specializes in the application of financial forensic analysis to complex business disputes and investigations. He has testified over 50 times as an expert witness on financial matters.
Katherine Manz is a sophomore English and History major in the Liberal Arts Honors program at the University of Texas at Austin. Outside of her work at the University Writing Center, she writes for the Texas Undergraduate Law Journal and SPARK Magazine.
