
Jessica Erickson
@ProfJErickson
Law Prof at the University of Richmond. Obsessed with corporate law, shareholder litigation, and law school pedagogy.

I sent this to my local law review, which is wondering if they should have an AI attestation for authors. ~~~~~~~

Fundamentally, the purpose of academia is to seek truth. This would have the opposite effect. First, it cannot be enforced, so asking the question does not yield reliable information about who did or did not use AI. It reveals only who is honest. Second, what is the intent of asking? If it is not intended to affect your decisions, then it has no benefit. If it is intended to affect your decisions, it's likely to be against folks using AI. That would be bad for a few reasons.

First, as noted above, you aren't punishing those who used AI; you're punishing those who admit to using AI. This taxes honesty and promotes dishonesty, which is contrary to the purpose of promoting truth.

Second, if AI is helpful at generating knowledge, we should celebrate it. Every week AI finds new math proofs and new cures for cancer. That is to be celebrated, not shamed. If we believe that law review articles improve the practice of law, we shouldn't stigmatize something that might advance our field. That is especially true now, when we are almost at the point where it is irresponsible not to ask AI to evaluate your arguments. We should want authors doing that because it will sharpen ideas and increase truth. (We might say this is just about transparency, but we all know that means people will be punished for using AI and that it will count less for tenure and promotion. The same plague has effectively halted co-written articles before tenure, which is counterproductive to seeking truth.)

The best counterargument is that law reviews serve a prestige-allocation function, and by granting prestige to an article written using AI, they are distorting the process. On this view, AI is cheating; it is valor stolen through chatbots. A few thoughts on that.

First, prestige allocation is a byproduct of seeking truth; it is not the goal. The only reason law reviews have prestige to allocate is past editors' commitment to seeking truth. That is the vitality of an academic journal. If the journal acts to promote or discourage some social practice, rather than to seek truth, it will have little prestige to allocate.

Second, AI has not replaced the core skills of scholarship. Most scholars have many ideas per day; scholarly work is about knowing which ideas to pursue and which to discard. Great scholars are defined not by their idea generation but by their exceptional judgment and discernment among their ideas. Ideas have little value on their own; they advance to a fully formed article only after a scholar with judgment discerns their worth and pursues them.

Third, much academic work is pointless drudgery for the sake of lay editors. To be gentle about it, folks in my field already know the first 20 pages of every article in the field. Yet those 20 pages still take a hundred hours to produce. If those hundreds of hours can be redirected to accumulating new knowledge, the entire field gains.

Fourth, the legal profession too often equates hours expended with value. By stigmatizing tools that reduce the grind, we further entrench the belief that the grind is the product. Prestige should be allocated to those who advance the field, not those who grind the most.

Even if the goal is not to stigmatize AI, every author will understand it that way. The polling is strongly against AI, and the assumption will be that you intend to take a luddite position.

In summary, asking about AI use stigmatizes a technology with great potential to advance knowledge, while providing no useful benefit because it cannot be enforced. Liars will benefit, the honest will suffer, and everyone will hesitate to use better tools.


An attorney writes to me about the mostly AI-written law review article he had accepted this spring, now forthcoming in the flagship law review of a Top 50 law school. A draft of the article is now up on SSRN. According to the attorney: "Last month I used Claude to assist in drafting a new article . . . . I drafted this article in about 15 hours. In 2022 I published an article of similar length that took around 150 hours." The attorney adds: "I used Claude the way I'd use a junior associate—as a first drafter, sounding board, and research assistant. Most of the article, including the entirety of the title, abstract, and intro, is mine from the keyboard up. And anything Claude contributed that made it to the final version is there because I reviewed it, agreed with it, and chose to sign my name to it. This is no different than how I'd review an associate's draft and then take responsibility for the finished product." He continues: "That first draft was by no means file ready, but it was better than what I would've received from the vast majority of BigLaw associates. I was blown away, and have since started my own appellate and litigation practice in an effort to replicate these productivity gains for client work." Your thoughts? I know the attorney's name and the journal, and I have checked out the article, but I figured that, at least for now, I would hold that back.
