washpost20250604
Summary of Washington Post 6/4/2025 Article
The Washington Post “challenged AI helpers to decode legal contracts, simplify medical research, speed-read a novel, and make sense of Trump speeches.” They tested ChatGPT, Claude, Copilot, Meta AI, and Gemini; responses varied from good to bad. Scores are out of 10.
Literature
- [7.8] ChatGPT. Best summary. Failed to address slavery and the Civil War (as did most AI bots).
- [7.3] Claude. Got the facts right.
- [4.3] Meta AI.
- [3.5] Copilot.
- [2.3] Gemini. Inaccurate, misleading, and sloppy. Like when Costanza watches the “Breakfast at Tiffany's” movie instead of reading the book.
Law
- [6.9] Claude. Gave the most consistently decent answers and did well suggesting changes to the test rental agreement.
- [6.1] Gemini
- [5.4] Copilot
- [5.3] ChatGPT. Tried to reduce complex parts of the contract to one-line summaries and missed important points (key clauses).
- [2.6] Meta AI. Tried to reduce complex parts of the contract to one-line summaries; skipped several sections and important points.