Associate Professor of English, University of Michigan-Flint. I research and teach rhetoric and writing.

AI Is Supercharging the War on Libraries, Education, and Human Knowledge


"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."

Image: Steve Johnson via Unsplash

AI gets 45% of news wrong — but readers still trust it


The BBC and the European Broadcasting Union have produced a large study of how well AI chatbots handle summarising the news. In short: badly. [BBC; EBU]

The researchers asked ChatGPT, Copilot, Gemini, and Perplexity about current events. 45% of the chatbot answers had at least one significant issue: 31% had serious sourcing problems, and 20% contained major inaccuracies, from hallucinated details to outdated information. This held across multiple languages and multiple countries. [EBU, PDF]

The AI distortions are “significant and systemic in nature.”

Google Gemini was by far the worst. It would make up an authoritative-sounding summary with completely fake and wrong references, far more often than the other chatbots. It also used a satire site as a news source. Pity Gemini’s been forced into every Android phone, hey.

Chatbots fail most with current news stories that are moving fast. They’re also really prone to making up quotes. Anything a chatbot puts in quotation marks probably isn’t what the person actually said.

7% of news consumers get their news from a chatbot, rising to 15% of readers under 25. Just over a third (the report doesn’t give the exact percentage) say they trust AI summaries, rising to about half of those under 35. People pick convenience first. [BBC, PDF]

Peter Archer is the BBC’s Programme Director for Generative AI — what a job title — and is quoted in the EBU press release. Archer put forward these results even though they were quite bad. So full points for that.

Unfortunately, Archer also says in the press release: “We’re excited about AI and how it can help us bring even more value to audiences.”

Archer sees his task here as promoting the chatbots: “We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.”

Anyone whose title is “Programme Director for Generative AI” is never going to sign off on a conclusion that this stuff is poison to accurate news and public discourse and that the BBC needs it gone, even though that’s what this study makes clear. The job description is not to assess generative AI; it’s to promote generative AI. [job description]

So what happens next? The broadcasters have no plan to address the chatbot problem. The report doesn’t even offer ways forward. There are no action points, except to do more studies!

They’re just going to cross their fingers and hope the chatbot vendors can be shamed into giving a hoot — the approach that hasn’t worked so far, and isn’t going to work.

Unless the vendors can cure chatbot hallucinations. And they can’t do that, because that’s how chatbots work. Everything a chatbot outputs is a hallucination, and some of the hallucinations are just closer to accurate.

The actual answer is to stop using chatbots for news, stop creating jobs inside the broadcasters whose purpose is to befoul the information stream with generative AI, and attach actual liability to the chatbot vendors when they output complete lies. Imagine a chatbot vendor having to take responsibility for what the lying chatbot spits out.


Zohran Mamdani’s Win Is a Rare and Beautiful Moment In the Class War

Can good things happen? Last night's victory in New York City suggests they can.

Details


As the U.S. tariff act of June 6, 1872, was being drafted, planners intended to exempt “Fruit plants, tropical and semi-tropical for the purpose of propagation or cultivation.”

Unfortunately, as the language was being copied, a comma was inadvertently moved one word to the left, producing the phrase “Fruit, plants tropical and semi-tropical for the purpose of propagation or cultivation.”

Importers pounced, claiming that the new phrase exempted all tropical and semi-tropical fruit, not just the plants on which it grew.

The Treasury eventually had to agree that this was indeed what the language now said, opening a loophole for fruit importers that deprived the U.S. government of an estimated $1 million in revenue. Subsequent tariffs restored the comma to its intended position.


AI makes you think you’re a genius when you’re an idiot


Today’s paper is “AI Makes You Smarter, But None the Wiser: The Disconnect between Performance and Metacognition”. AI users wildly overestimate how brilliant they actually are: [Elsevier, paywalled; SSRN preprint, PDF; press release]

All users show a significant inability to assess their performance accurately when using ChatGPT. In fact, across the board, people overestimated their performance.

The researchers tested about 500 people on the LSAT. One group had ChatGPT with GPT-4o, and one just used their brains. The researchers then asked the users how they thought they’d done.

The chatbot users did better — which is not surprising, since past LSATs are very much in all the chatbots’ training data, and they regurgitate them just fine.

The AI users did not question the chatbot at length; they just asked it once what the answer was and used whatever the chatbot regurgitated.

But also, the chatbot users estimated their results as being even better than they actually were. In fact, the more “AI literate” the subjects measured as, the more wrongly overconfident they were.

Problems with this paper: it credits the improved LSAT performance to better thinking rather than to the AI regurgitating its training data, and it suggests ways to use the AI better rather than suggesting not using it and actually studying. But the main result seems reasonably well supported.

If you think you’re a hotshot promptfondler, you’re wildly overconfident and you’re badly wrong. Your ego is vastly ahead of your ability. Just ask your coworkers. Democratising arrogant incompetence!


Bob Dylan’s Nobel essay (again)

Ron Rosenbaum, writing about Bob Dylan’s Nobel Prize essay:
It’s sad that this amazing essay has been almost entirely overlooked by Dylanologists, because it offers a skeleton key to something in my opinion quite essential about Bob Dylan.
The title of this book excerpt: “Bob Dylan’s Superpower Is That He Doesn’t Get Embarrassed.” Indeed, no.

Dylan’s comments on the Odyssey, Moby-Dick, and All Quiet on the Western Front in the “amazing essay” bear unmistakable traces of CliffsNotes and SparkNotes. Andrea Pitzer’s 2017 article “The Freewheelin’ Bob Dylan” offers ample evidence from the SparkNotes for Moby-Dick and cites one phrase about All Quiet on the Western Front from CliffsNotes. She also provides links for anyone interested in Dylan’s practice of appropriation in music and painting. My modest contribution: a post in 2017 with what I see as clear evidence that Dylan plagiarized from the CliffsNotes for the Odyssey: “Dylan, Homer, and Cliff.”

Pretty pathetic stuff. You’d have to have a superpower not to be embarrassed by it. Or to ignore it.

[There’s ample room in art and music and writing for the use of found materials and for the transformation of preexisting works. But swiping from CliffsNotes and SparkNotes ain’t that.]