Associate Professor of English, University of Michigan-Flint. I research and teach rhetoric and writing.
5104 stories · 43 followers

oh the irony


Americans are reading less. Is that poisoning our politics? | Vox

As America’s test scores fall and its screen time rises, narratives of cultural decline become hard to dismiss outright.

Yet it’s worth remembering the perennial appeal of such pessimism. More than 2,000 years ago, Socrates decried the novel media technology of his day — the written word — in much the same terms that many condemn social media and AI in 2025. Addressing himself to the inventor of writing, the Greek philosopher declared, “You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing.”

Lovely! Here’s an essay about the decline of reading that features either a misreading or non-reading of a passage from Plato’s Phaedrus. Remember, Plato wrote dialogues, and in this one Socrates is on a walk with Phaedrus, having a discussion about the written and spoken word. Thus: 

Soc. At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality. 

So, no: Socrates does not speak to the inventor of writing; he tells a story in which a divine Egyptian king speaks to the inventor of writing. And this isn’t hard to discover, nor is the passage hard to understand.

Bless me, what do they teach journalists these days? It’s all in Plato — all in Plato! 

And if you keep reading the dialogue it gets more curious. Phaedrus says that he agrees with Thamus, but Socrates does not, not exactly. He too has concerns about writing, but they are rather different from Thamus’s. For Socrates, writing shares a problem with several other modes of expression:

I cannot help feeling, Phaedrus, that writing is unfortunately like painting; for the creations of the painter have the attitude of life, and yet if you ask them a question they preserve a solemn silence. And the same may be said of speeches. You would imagine that they had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not: and, if they are maltreated or abused, they have no parent to protect them; and they cannot protect or defend themselves. 

Socrates believes that writing, painting, and declamatory rhetoric all have the same problem: they are non-dialectical. This is also, Socrates shows in other dialogues, the problem with many versions of what people call “philosophy.” Genuine philosophy, Socrates believes, is dialectical, that is, it proceeds when people physically present to one another put one another to the question in a strenuous encounter that elicits anamnesis — recollection (literally unforgetting) of the knowledge that one’s spirit had before being tossed into this world of flux. Nothing else counts as philosophy; nothing else — not painting; not poetry or speeches, whether in spoken or written form — is productive of genuine knowledge. The critique of Socrates is far more unbendingly radical than that of Thamus. 

Shared by betajames (21 hours ago, Michigan)

AI coders think they’re 20% faster — but they’re actually 19% slower


Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisations, but they don’t seem strident about it and they like to show their working. So good, I guess.

METR funded 16 experienced open-source developers with “moderate AI experience” to do what they do. They set the devs to work in projects they knew well, fixing 246 real issues across those projects — not synthetic tasks like in your typical AI coding benchmark.

The projects were large and reasonably popular — over a million lines of code, average 22,000 stars on GitHub. It was real work on real software.

For each task, METR randomly told the dev to either use a chatbot helper — Cursor Pro with Claude 3.5/3.7 Sonnet — or use no assistance.

The developers predicted they’d go 24% faster with AI. After they’d done the work, the developers said they’d done it 20% faster. But they’d actually been slowed down by 19%. They thought they were faster, they were actually slower. [blog post; paper, PDF]
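
If the percentages feel slippery, here’s a back-of-the-envelope sketch of what they mean in wall-clock time. It’s a Python illustration only; the 60-minute baseline task is an assumed round number, not a figure from the study.

# Back-of-the-envelope: what "24% faster" vs "19% slower" means in minutes.
# Assumption for illustration: a 60-minute baseline task (not a study figure).

baseline_minutes = 60.0

def with_speedup(minutes, percent_faster):
    # A task done "X% faster" takes minutes / (1 + X/100).
    return minutes / (1 + percent_faster / 100)

predicted = with_speedup(baseline_minutes, 24)      # devs' forecast: ~48.4 min
self_reported = with_speedup(baseline_minutes, 20)  # devs' estimate afterwards: 50.0 min
measured = baseline_minutes * 1.19                  # METR's measurement: 19% more time, 71.4 min

print(f"predicted:     {predicted:.1f} min")
print(f"self-reported: {self_reported:.1f} min")
print(f"measured:      {measured:.1f} min")

On an hour-long task, that’s roughly a 20-minute gap between how fast the devs felt and how long they actually took.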

When the devs use the AI, they’re spending less time looking for information and writing code — and instead they’re prompting the AI, they’re reviewing the AI, or they’re doing nothing while they’re waiting for the AI.

Even the devs who liked the AI found it was bad at large, complex code bases like these, and over half the AI suggestions were not usable. Even the suggestions they accepted needed a lot of fixing up.

The number of people tested in the study was n=16. That’s a small number. But it’s a lot better than the usual AI coding promotion, where n=1 ’cos it’s just one guy saying “I’m so much faster now, trust me bro. No, I didn’t measure it.”

The interesting thing is how the devs thought these tools were rocket fuel — then the numbers showed the opposite.

When you’re developing software, you never claim you’ve optimised your program without measuring the program’s performance. That should apply to claims about dev tools. How many AI advocates measure their performance numbers properly, not just self-reporting? It’s all vibe advocacy.
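
That discipline is easy to practice in miniature. This has nothing to do with the study itself; it’s just the principle, as a toy Python comparison using only the standard library:

# Toy example of "measure, don't vibe": time two string-building approaches
# before claiming one is faster.
import timeit

def join_concat(items):
    return "".join(items)

def plus_concat(items):
    out = ""
    for s in items:
        out += s
    return out

data = ["x"] * 10_000

for fn in (join_concat, plus_concat):
    seconds = timeit.timeit(lambda: fn(data), number=200)
    print(f"{fn.__name__}: {seconds:.3f}s")

Whatever the result says, at least it’s a number you can show someone, which is more than most AI speedup claims manage.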

Devs are famously bad at estimating how long a software project will take. And so it is here.

How you feel about your job is important. If an experienced dev enjoys using the AI tab-complete and their stuff is still good, I mean, fine. But if someone claims a speedup, that’s a number, and they should show that number.

The promptfondlers instantly had a zillion excuses for this study. Usually “it’s so much better with this month’s model.” They don’t provide any numbers. Or “but they just need to learn Cursor properly.” Which is another way of saying “it can’t be that stupid, you must be prompting it wrong.” Though METR specifically picked devs with “moderate AI experience.” And the promptfondlers never say how you’d prompt the magic AI roulette wheel right.

If someone’s promoting AI for coding, get their numbers. If they don’t have numbers, you should consider whether they’re sounding like addicts who someone threatened with taking their cocaine away.

Shared by betajames (1 day ago, Michigan)

Are Students Making Good Choices on AI?


AI has changed what it means when students dodge an assignment.

Shared by betajames (1 day ago, Michigan), with the comment: “lol no”

‘AI is here to stay’ — is it, though? What do you mean, ‘stay’?


When some huge and stupid public chatbot disaster hits the news, the AI pumpers will Kramer into the mentions to say stuff like “you have to admit, AI is here to stay.”

Well, no, I don’t. Not unless you say what you actually mean. What’s the claim you’re making? Herpes is here to stay too, but you probably wouldn’t brag about it.

What they’re really saying is “give in and do what I tell you.” They’re saying AI like it is in the bubble is a permanent force that will reshape society in Sam Altman’s image. It’s a new paradigm! So you have to give in to it. And give me everything I’m demanding.

Here’s an egregious example from the Washington State school system. It starts with the sentence “AI is here to stay,” then there’s a list of AI stuff to force on the kids, on the assumption that all of this will work forever just like the biggest hype in the bubble. And that’s not true! [OSPI]

If you ask these guys why AI is here to stay, they’ll just recite promotional talking points. So ask them some really pointy questions about the details.

Remember that a lot of these people were super convinced by just one really impressive demo that blew their minds. We have computers you can just talk to naturally now and have a conversation! That’s legit amazing, actually! The whole field of natural language processing is 80% solved!

The other 20% is where the computer is a lying idiot — and it probably can’t be fixed. That’s a bit of a problem in practice. Generative AI is all like that — it’s genuinely impressive demos with unfixable problems.

Sometimes they’ll claim chatbots are forever because machine learning works for X-ray scans. If they say that, they don’t know enough about the details to make a coherent claim, and you’d have to teach them the difference between medical machine learning systems and chatbots before they could.

Grifters will try to use gotchas. Photoshop has AI in it, so you should let me post image slop to your forum! Office 365 has AI in it, so if you use Word then you’re using AI! Spell check’s a kind of AI! These are all real examples. These guys are lying weasels and the correct answer is “go away”. Or maybe something stronger.

Are they saying the technology will surely get better because all technology just improves? Will the hallucinating stop? Then they need evidence of that, because it sure looks like the tech of generative AI is stuck at the top of its S-curve at 80% useful and has not made any major breakthroughs in a couple of years or more.

The guy’s probably seen an impressive demo, but he’s going to have to bring proper evidence that chatbots are going to make it to being any sort of reliable product. And we have no reason to think they will.

Are they saying that OpenAI and its friends, all setting money on fire, will be around forever? Ha, no. That is not economically possible. OpenAI alone needs tens of billions of fresh dollars every year. Look through Ed Zitron’s numbers if you think numbers will do any good to reply to this one. [Ed Zitron]

The big venture-funded AI companies are machines for taking money from venture capitalists and setting it on fire. The chatbots are just the excuse for them to do that. The companies are simply not sustainable standalone businesses.

Maybe after the collapse, there’ll be a company that buys the name “OpenAI” and dances around wearing its skin. The name “Fyre Festival” just went on sale. [eBay]

Are they saying there’s a market for generative AI, so it’ll surely keep going when the bubble pops? There may well be some market — the vibe coders are addicts. But the prices will be at least five or ten times what they are now if the chatbot has to pay its way as a standalone business.

But chatbots are useful to me personally! Sure, they do some useful things. Large language models are based on transformers, so anything a transformer does well, a chatbot will do okay if it’s trained. Translation, transcription, grammar checking, a chatbot can at least muddle through. And right now, the chatbot is convenient. Will you pay ten times the price for that? I’m not so sure.

Are they saying you can always run a local model at home? Sure you can, and about 0.0% of chatbot users do that. In 2025, the home models are painfully slow toys for nerd enthusiasts, even on a high-end box. No normal people are going to do this to get what they get from a casual chatbot now.

I’ve seen the people saying “AI is here to stay” get called on it and back down to, well, the technology will still exist. Sure, mathematics is here to stay. The transformer architecture is actually useful for stuff. But just existing isn’t much of a claim either. Technologies have their heyday then the last dregs of them linger forever.

Crypto is still around, serving the important “crime is legal” market, but nothing else is happening, and it’s radioactive for normal people. If you search for “AI is here to stay” on Twitter, you’ll see the guys who still have Bored Ape NFT icons.

Generative AI has a good chance of becoming as radioactive to the general public as crypto is. They’ll have to start calling the stuff that works “machine learning” again.

So. If someone says “AI is here to stay,” nail them down on what the precise claim is they’re making. Details. Numbers. What do you mean by being here? What would failure mean? Get them to make their claim properly.

I mean, they won’t answer. They never answer. They never had a claim in mind. They were just making promotional mouth noises.

I’ll make a prediction for you, to give an example:

The AI bubble will last at least two, maybe three more years, because the venture capitalists really need it to. When, not if, the VCs and their money pipeline go home and the chatbot prices multiply by ten or more, the market for generative AI will collapse.

There will be some small providers left. Gen-AI will technically be not dead yet! But the bubble will be extremely over. The number of people running an LLM at home will still be negligible.

It’s possible there will be something left after the bubble pops. AI boosters like saying it’s just like the dot-com bubble!! But I’ve never really been convinced by the argument “Amazon lost money for years, so if OpenAI just sets enough money on fire then it must be Amazon.” It’s not a good argument.

Will inference costs — the real cost of each query, which are 80%-90% of compute load — come down? Sure, they’ll come down at some point. Will it be soon enough? Well, Nvidia’s Blackwell hasn’t been a good chip generation, so Nvidia is putting out more of their old generation chips while they try to get Blackwell production volumes up. So more efficient chips won’t fill out the market very soon.

So there you go. I might be wrong about any of that — but at least I’ve given reasons for what I’m saying.

If you want to say “but AI is here to stay!” then tell us what you mean in detail. Stick your neck out. Give your reasons. You might be wrong about parts of it, but at least you’ll have made a checkable claim.

Shared by betajames (2 days ago, Michigan)

The Texas Flash Flood Is a Preview of the Chaos to Come


ProPublica is a nonprofit newsroom that investigates abuses of power.

On July 4, the broken remnants of a powerful tropical storm spun off the warm waters of the Gulf of Mexico so heavy with moisture that it seemed to stagger under its load. Then, colliding with another soggy system sliding north off the Pacific, the storm wobbled and its clouds tipped, waterboarding south central Texas with an extraordinary 20 inches of rain. In the predawn blackness, the Guadalupe River, which drains from the Hill Country, rose by more than 26 vertical feet in just 45 minutes, jumping its banks and hurtling downstream, killing 109 people, including at least 27 children at a summer camp located inside a federally designated floodway.

Over the days and weeks to come there will be tireless — and warranted — analysis of who is to blame for this heart-wrenching loss. Should Kerr County, where most of the deaths occurred, have installed warning sirens along that stretch of the waterway, and why were children allowed to sleep in an area prone to high-velocity flash flooding? Why were urgent updates apparently only conveyed by cellphone and online in a rural area with limited connectivity? Did the National Weather Service, enduring steep budget cuts under the current administration, adequately forecast this storm?

Those questions are critical. But so is a far larger concern: The rapid onset of disruptive climate change — driven by the burning of oil, gasoline and coal — is making disasters like this one more common, more deadly and far more costly to Americans, even as the federal government is running away from the policies and research that might begin to address it.

President Lyndon B. Johnson was briefed in 1965 that a climate crisis was being caused by burning fossil fuels and was warned that it would create the conditions for intensifying storms and extreme events, and this country — including 10 more presidents — has debated how to respond to that warning ever since. Still, it took decades for the slow-motion change to grow large enough to affect people’s everyday lives and safety and for the world to reach the stage it is in now: an age of climate-driven chaos, where the past is no longer prologue and the specific challenges of the future might be foreseeable but are less predictable.

Climate change doesn’t chart a linear path where each day is warmer than the last. Rather, science suggests that we’re now in an age of discontinuity, with heat one day and hail the next and with more dramatic extremes. Across the planet, dry places are getting drier while wet places are getting wetter. The jet stream — the band of air that circulates through the Northern Hemisphere — is slowing to a near stall at times, weaving off its tracks, causing unprecedented events like polar vortexes drawing arctic air far south. Meanwhile the heat is sucking moisture from the drought-plagued plains of Kansas only to dump it over Spain, contributing to last year’s cataclysmic floods.

We saw something similar when Hurricane Harvey dumped as much as 60 inches of rain on parts of Texas in 2017 and when Hurricane Helene devastated North Carolina last year — and countless times in between. We witnessed it again in Texas this past weekend. Warmer oceans evaporate faster, and warmer air holds more water, transporting it in the form of humidity across the atmosphere, until it can’t hold it any longer and it falls. Meteorologists estimate that the atmosphere had reached its capacity for moisture before the storm struck.

The disaster comes during a week in which extreme heat and extreme weather have battered the planet. Parts of northern Spain and southern France are burning out of control, as are parts of California. In the past 72 hours, storms have torn the roofs off of five-story apartment buildings in Slovakia, while intense rainfall has turned streets into rivers in southern Italy. Same story in Lombok, Indonesia, where cars floated like buoys, and in eastern China, where an inland typhoon-like storm sent furniture blowing down the streets like so many sheaves of paper. León, Mexico, was battered by hail so thick on Monday it covered the city in white. And North Carolina is, again, enduring 10 inches of rainfall.

There is no longer much debate that climate change is making many of these events demonstrably worse. Scientists conducting a rapid analysis of last week’s extreme heat wave that spread across Europe have concluded that human-caused warming killed roughly 1,500 more people than might have otherwise perished. Early reports suggest that the flooding in Texas, too, was substantially influenced by climate change. According to a preliminary analysis by ClimaMeter, a joint project of the European Union and the French National Centre for Scientific Research, the weather in Texas was 7% wetter on July 4 than it was before climate change warmed that part of the state, and natural variability alone cannot explain “this very exceptional meteorological condition.”

That the United States once again is reeling from familiar but alarming headlines and body counts should not be a surprise by now. According to the World Meteorological Organization, the number of extreme weather disasters has jumped fivefold worldwide over the past 50 years, and the number of deaths has nearly tripled. In the United States, which prefers to measure its losses in dollars, the damage from major storms was more than $180 billion last year, nearly 10 times the average annual toll during the 1980s, after accounting for inflation. These storms have now cost Americans nearly $3 trillion. Meanwhile, the number of annual major disasters has grown sevenfold. Fatalities in billion-dollar storms last year alone were nearly equal to the number of such deaths counted by the federal government in the 20 years between 1980 and 2000.

The most worrisome fact, though, may be that the warming of the planet has scarcely begun. Just as each step up on the Richter scale represents a massive increase in the force of an earthquake, the damage caused by the next 1 or 2 degrees Celsius of warming stands to be far greater than that caused by the 1.5 degrees we have so far endured. The world’s leading scientists, the United Nations panel on climate change and even many global energy experts warn that we face something akin to our last chance before it is too late to curtail a runaway crisis. It’s one reason our predictions and modeling capabilities are becoming an essential, lifesaving mechanism of national defense.

What is extraordinary is that at such a volatile moment, President Donald Trump’s administration would choose not just to minimize the climate danger — and thus the suffering of the people affected by it — but to revoke funding for the very data collection and research that would help the country better understand and prepare for this moment.

Over the past couple of months, the administration has defunded much of the operations of the National Oceanic and Atmospheric Administration, the nation’s chief climate and scientific agency responsible for weather forecasting, as well as the cutting-edge earth systems research at places like Princeton University, which is essential to modeling an aberrant future. It has canceled the nation’s seminal scientific assessment of climate change and risk. The administration has defunded the Federal Emergency Management Agency’s core program paying for infrastructure projects meant to prevent major disasters from causing harm, and it has threatened to eliminate FEMA itself, the main federal agency charged with helping Americans after a climate emergency like the Texas floods. It has — as of last week — signed legislation that unravels the federal programs meant to slow warming by helping the country’s industries transition to cleaner energy. And it has even stopped the reporting of the cost of disasters, stating that doing so is “in alignment with evolving priorities” of the administration. It is as if the administration hopes that making the price tag for the Kerr County flooding invisible would make the events unfolding there seem less devastating.

Given the abandonment of policy that might forestall more severe events like the Texas floods by reducing the emissions that cause them, Americans are left to the daunting task of adapting. In Texas, it is critical to ask whether the protocols in place at the time of the storm were good enough. This week is not the first time that children have died in a flash flood along the Guadalupe River, and reports suggest county officials struggled to raise money and then declined to install a warning system in 2018 in order to save approximately $1 million. But the country faces a larger and more daunting challenge, because this disaster — like the firestorms in Los Angeles and the hurricanes repeatedly pummeling Florida and the southeast — once again raises the question of where people can continue to safely live. It might be that in an era of what researchers are calling “mega rain” events, a flood plain should now be off-limits.

Shared by betajames (3 days ago, Michigan)

Trial Court Decides Case Based On AI-Hallucinated Caselaw - Above the Law


Every time a lawyer cites a fake case spit out by generative AI, an angel gets its wings. When the lawyers in Mata v. Avianca infamously earned a rebuke for citing an AI-imagined alternate history of the Montreal Convention, many of us assumed the high-profile embarrassment would mark the end of fake cases working their way into filings. Instead, new cases crop up with alarming frequency, ensnaring everyone from Trump’s former fixer to Biglaw to — almost certainly — the DOJ. It seems no amount of public embarrassment can overcome laziness.

But so far, the system has stood up to these errors. Between opposing counsel and diligent judges, fake cases keep getting caught before they result in real mischief. That said, it was always only a matter of time before a poor litigant representing themselves failed to know enough to sniff out and flag Beavis v. Butthead, and a busy or apathetic judge rubberstamped one side’s proposed order without probing the cites for verification. Hallucinations are all fun and games until they work their way into the orders.

It finally happened with a trial judge issuing an order based off fake cases (flagged by Rob Freund). While the appellate court put a stop to the matter, the fact that it got this far should terrify everyone.

Shahid v. Esaam, out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases. The trial judge accepted the husband’s argument, issuing an order based in part on the fake cases. On appeal, the husband did not respond to the fake case claim, but….

Undeterred by Wife’s argument that the order (which appears to have been prepared by Husband’s attorney, Diana Lynch) is “void on its face” because it relies on two non-existent cases, Husband cites to 11 additional cites in response that are either hallucinated or have nothing to do with the propositions for which they are cited. Appellee’s Brief further adds insult to injury by requesting “Attorney’s Fees on Appeal” and supports this “request” with one of the new hallucinated cases.

They cited MORE fake cases to defend their first set of fake cases. Epic. A perpetual motion machine of bullshit, if you will. Seeking attorney’s fees based on a fake case was a nice touch. Probably should’ve thought of that at the trial court level; it might’ve worked.

The appellate court could not make the factual leap to blame AI for the fake cases, but laid out its theory of the case:

As noted above, the irregularities in these filings suggest that they were drafted using generative AI. In his 2023 Year-End Report on the Federal Judiciary, Chief Justice John Roberts warned that “any use of AI requires caution and humility.” Roberts specifically noted that commonly used AI applications can be prone to “hallucinations,” which caused lawyers using those programs to submit briefs with cites to non-existent cases.

Well, there you go! Someone finally found a use for the Chief Justice’s infamous typewriter report. Now it almost seems like a useful expenditure of official resources instead of a cynical opportunity to dodge addressing that his proposed solution to the Court’s deepening ethical cesspool is… JAZZ HANDS!

But there’s a critical line between submitting fake cases and judges acting on fake cases. The urgency the courts feel for stamping out fake citations stems in part from the “there but for the grace of my clerks go I” fear that the judge might bless a fake argument. Now that this has happened to a trial judge out there, the high-profile embarrassment should mark the end of fake cases working their way into orders.

Where have I heard something like that before? *Re-reads first paragraph.*

We’re screwed.

Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.

Shared by betajames (3 days ago, Michigan) and acdha (4 days ago, Washington, DC)