Investigating a Possible Scammer in Journalism’s AI Era | The Local

In late September, I put out a call for pitches from freelance journalists. As an editor at The Local, an online magazine in Toronto that wins awards for its long-form journalism, I have a stable of dependable writers I like to work with. But we’re always trying to bring new voices onto our site, and an open call for pitches on a specific theme has, in the past, been a good way to find them.

The last time I’d put out an open call was more than a year ago, when we’d received the usual stream of ideas—some intriguing, most not quite right for us, but all recognizably human. A year later, things were very different.

My request this time was for stories about health care privatization, which has become a fraught topic in Canada. Over the next week, I got a flood of story ideas from people around the world. Some, from writers in Africa, India, and the U.S., obviously weren’t right for a Toronto publication. But many had the sound, at least, of plausible Local stories.

One pitch in particular seemed promising. The writer, Victoria Goldiee, introduced herself as having written for The Globe and Mail, The Walrus, and Maisonneuve—Canadian outlets that publish the same kind of feature writing we do. The pitch tackled the idea of privatization with a catchy angle about the rise of “membership medicine.”

“The story would track how these plans transform health care into something resembling Netflix or Amazon Prime, and what this means for a public system that has long prided itself on universality,” it read.

What set the pitch apart from other emails suggesting similar stories was the amount of reporting the author had already done, as well as her collection of bylines. Victoria said she’d already spoken with a number of people—a 42-year-old consultant in Vancouver, a 58-year-old construction worker in Hamilton, and health care experts like Toronto physician Danielle Martin, whom she quoted as saying “membership medicine is a creeping form of privatization.”

When I googled her, I saw that Victoria had written stories for a set of publications that collectively painted the picture of an ambitious young freelancer on the rise—short pieces in prestigious outlets like The Cut and The Guardian, lifestyle features in titles like Architectural Digest and Dwell, and in-depth reporting in non-profit and industry publications like Outrider and the Journal of the Law Society of Scotland. Her headshot was of a youthful Black woman. She was, according to her author bio, “a writer with a keen focus on sharing the untold stories of underrepresented communities in the media.”

At the next editorial story meeting, we decided to take a shot on Victoria and assign the story. Then I began looking more closely at her work.

There were some red flags. The first question I had was whether she was actually in Toronto when so many of her bylines were in New York magazines and British newspapers. And how had she managed to do so many interviews already? Doing so much reporting without the guarantee of pay felt like a big gamble.

When I googled “Victoria Goldiee” with the names of the Canadian publications she said she’d written for, there were no results. We reached out to Danielle Martin, one of the doctors Victoria claimed she’d interviewed. Martin said she’d never heard of her.

I emailed Victoria back: “Are those quotes from your own interviews? And do you mind sending along some clippings, perhaps from your Walrus or Maisonneuve stories?”

She sent a lengthy reply the next day. “The quotes I included in the pitch are from original interviews I’ve conducted over the past few weeks,” she insisted. “In terms of previous work, I write a regular newsletter for The Walrus, which gives a good sense of my ability to balance accessibility with depth while speaking to a broad audience.” She attached a link to The Walrus’s “Lab Insider” newsletter that did not have her byline.

“I can 100% confirm that they do not write the Lab Insider newsletter,” wrote Tracie Jones from The Walrus when I emailed. “How odd to say they do!”

Victoria’s stilted email, and a closer read of the original pitch, revealed what should have been clear from the start: with its rote phrasing (“This story matters because of… It is timely because of… It fits your readership because of…”), it had all the hallmarks of an AI-generated piece of writing.

I was embarrassed. I had been naively operating with a pre-ChatGPT mindset, still assuming a pitch’s ideas and prose were actually connected to the person who sent it. Worse, the reason the pitch had been appealing to me to begin with was likely because a large language model somewhere was remixing my own prompt asking for stories where “health and money collide,” flattering me by sending me back what I wanted to hear.

But if Victoria’s pitch appeared to be an AI-generated fabrication, and if she was making up interviews and bylines, what to make of her long list of publications?

Since 2022, the byline “Victoria Goldiee” has been attached to dozens of articles. There is a series of “as-told-to” stories in Business Insider. (“I’m a 22-year-old Amazon delivery driver. The cameras in my truck keep me on high alert, but it’s my dream job and the flexible hours are great,” is a novel take on Amazon’s labour practices.) There was an interview with the comic actor Nico Santos in Vogue Philippines, a feature on Afrobeats in Rolling Stone Africa (no longer on the site), a product recommendation for a DVD drive in New York Magazine’s The Strategist, and, in the past two years, a move away from culture writing to meatier features.

A 2024 story about climate change memes from the non-profit Outrider quotes “Juliet Pinto, a Professor of Psychology” at Pennsylvania State University. I emailed Pinto, who is in fact a communications professor at Pennsylvania State University. “I have not spoken with any reporter about that piece of research, and I am not a professor of psychology,” she wrote back. The piece also quotes “Terry Collins, a climate scientist and professor of environmental science at the University of California.” I could not find anyone matching that description, but Terry Collins, the director of the Institute for Green Science at Carnegie Mellon, said he has never communicated with Goldiee.

Victoria’s online portfolio featured a pair of stories from the Vox Media publication PS (formerly Pop Sugar). When I clicked her links, however, I found each had been replaced with an editor’s note explaining the article had been “removed because it did not meet our editorial standards.”

“As I recall, the articles bylined by Victoria borrowed far too heavily from articles published elsewhere,” wrote former PS editor Nancy Einhart when I asked why the stories had been taken down. “I remember feeling disappointed because I really liked Victoria’s pitches.”

She added: “You are actually the third editor to contact me about this writer in the past couple of months! She is clearly on a pitch tour.”

Indeed, over the past several months, Goldiee’s production has ramped up, with a series of articles, in a wide range of publications, that seem to be growing bolder in their ambitions.

In September, the Journal of the Law Society of Scotland published a story about rural law firms that includes quotes from regular Scots whom I could not find, a lawyer who appears to be fictitious, a professor who told me she did not speak with the reporter, and even the Cabinet Secretary for Justice and Home Affairs, who did not respond to my email.

“The quotation did not come from me and, to the best of my recollection, I have never met or spoken to Victoria Goldiee,” Elaine Sutherland, professor emerita at the University of Stirling, wrote me. What was even more unsettling, though, was that the sentiments in the soundbite reflected her real beliefs. “The quotation attributed to me is the sort of thing I might say,” she wrote.

A month after that article was published, a Victoria Goldiee story in the design publication Dwell—“How to Turn Your Home’s Neglected Corners Into Design Gold”—featured a series of quotes purported to be from a wide array of international designers and architects, from Japan to England to California. A cursory read raised questions that editors probably should have asked to begin with. Namely, had a freelancer writing an affiliate-link-laden article about putting credenzas in your living room’s corners actually interviewed 10 of the world’s top designers and architects?

“Beata hasn’t heard of the journalist,” wrote a representative of designer Beata Heuman in response to my email.

“I did not speak with this reporter and did not give this quote,” wrote designer Young Huh, who is quoted as saying “corners are like little secrets in a home.”

“We definitely did not talk to her,” said a representative from architect Barbara Bestor. “So that’s kind of crazy.”

The stories had the characteristic weirdness of articles written by a large language model—invented anecdotes from regular people who didn’t appear to exist, accompanied by expert commentary from public figures who do exist, their biographical details sometimes mangled, made to voice “quotes” that sound, broadly, like something they might say.

When I asked one of the architects quoted in the Dwell piece, Sally Augustin, if she had ever spoken with Victoria, she was careful in her response. “I don’t actually remember speaking with her,” she wrote, and there was no sign of Victoria in her inbox. But she couldn’t be totally sure. And she wasn’t particularly bothered by her appearance in the article anyway. “The material attributed to me sounds exactly like something that I would say and I am fine with that material being out there,” she wrote.

Two weeks after she first pitched me, and after spending far too many hours trailing the path she’d cut through the internet, I emailed Victoria Goldiee asking if we could talk about her story.

By that point I suspected she was making things up in publications around the world, but I was hoping a conversation could get me closer to some answers.

We set up a video call for later that week. Ten minutes before it was set to begin, she emailed to change plans. “I look forward to chatting in a bit, I’ll be joining via phone so it’ll be a voice call on my end,” she wrote, signing off with, “xx.”

Moments later she was on the line. “Hi Nick!” she said, chipper and upbeat, speaking through a crackling phone line. To my ear, she sounded like a young woman with an African accent.

I asked her where in Toronto she was based. “Bloor,” she said cheerfully, naming one of the city’s busiest commercial thoroughfares without missing a beat. If she was just naming one of the first streets that comes up when you google “Toronto streets,” I couldn’t help but appreciate the effort.

I asked her to tell me a little more about the story she had in mind. She talked her way through it using more or less the same language as the pitch, as if paraphrasing a document in front of her. “It’s basically as if more people, like Canadians, are paying for health care as kind of their own version of Netflix or Amazon, kind of how we pay monthly subscriptions for Netflix. Now, health care is the same way.”

I asked her about the interviews she said she’d already done. “You said you spoke with Danielle Martin. Is that right?”

“Yes, yes,” she said.

“We know her at The Local, and she said she doesn’t remember speaking with you,” I said, trying to keep my tone more curious than accusatory. “Did you actually speak with her?”

“Oh yeah, I did,” she said quickly. “I did have my personal assistant talk to her.”

I did not linger on the idea of a freelance writer with a personal assistant, instead pushing on, not wanting to spook her.

If the person on the other end of the line was put off by my line of questioning, she didn’t show it. Victoria remained cheerful-sounding and upbeat, providing quick, if implausible, answers to every question. Why couldn’t I find the stories she said she’d written for Canadian publications? “Most of them are in print,” she explained. (Editors from The Globe and Mail, Maisonneuve, and The Walrus say they do not believe Victoria Goldiee has ever written for those publications.)

She said she was in Toronto, but I had noticed she’d written a lot for British publications. Had she just come to Canada recently? “I did, recently, like this past year,” she said.

Did she know why those Pop Sugar articles were no longer online? “I think the editor who published the story left the publication,” she responded. “So that’s why they deleted all the pieces that she covered.”

I don’t think I’ve ever spoken to someone who I suspected was lying to me with each and every response. I also don’t know if I’ve interviewed anyone I so desperately wanted to hear the truth from.

I had so many questions. Was the person on the phone even the same person whose writing was online? Where did she actually live—if not “Bloor,” was it the States? The U.K.? Or, as suggested by some of her writing, Nigeria? Was she a writer with genuine ambitions, who had gotten in way over her head and was now taking some truly outrageous risks? I was ready to be sympathetic. Was there some other explanation for the wild inconsistencies I had found? Or was she a simple scammer who had found easy marks in the overworked, credulous editors of the journalism world?

In my fantasy version of this phone call, after I’d gently led her toward more and more severe inconsistencies, Victoria would be forced to admit to the deceptions, and then we would really talk. As the conversation continued, though, I realized how foolish that hope had been. Whoever was on the other end of that line—caught in a nightmarish call that was transforming from a work chat into an audit of their professional life—was not going to somehow open up and offer a full explanation.

Five minutes into the call, I turned the conversation to what I’d discovered. “So, I started looking at some other clippings of your work,” I said. “I saw a piece that you did recently for the Journal of the Law Society of Scotland, I think?”

“Yeah,” she said, and I heard the slightest crack in her voice.

“I was looking at some of the quotes within those stories, and some of the people you spoke with,” I continued. “For example, in the Law Society of Scotland piece, you quote this professor.”

Victoria was quiet on the other end of the line. I actually emailed the professor, I explained. “And she said she never spoke with you.”

In the silence that followed I realized Victoria had hung up. She has not responded to my emails since.

Every media era gets the fabulists it deserves. If Stephen Glass, Jayson Blair, and the other late-20th-century fakers were looking for the prestige and power that came with journalism in that moment, then this generation’s internet scammers are scavenging in the wreckage of a degraded media environment. They’re taking advantage of an ecosystem uniquely susceptible to fraud—where publications with prestigious names publish rickety journalism under their brands, where fact-checkers have been axed and editors are overworked, where technology has made falsifying pitches and entire articles trivially easy, and where decades of devaluing journalism as simply more “content” have blurred the lines so much it can be difficult to remember where they were to begin with.

Freelance journalism in 2025 is an incredibly difficult place to build a career. But, it turns out, it’s a decent enough arena for a scam. On its website, Outrider says it pays $1,000 per article. Dwell’s rates start at 50 cents a word—a fee that’s difficult to justify if you actually want to interview 10 of the world’s top designers, but a healthy payday if you only need to enter a few words into ChatGPT.

Not every Victoria Goldiee story I looked at raised the same questions. A writer by that name had, in fact, spoken with actor Nico Santos for a story in Vogue Philippines, according to his publicist. Others were impossible to debunk with certainty. Had Victoria actually interviewed an incredibly elusive Korean production designer for a story in Architectural Digest about how people were, apparently, redesigning their rooms to match K-dramas? I can’t say for sure, and Architectural Digest did not respond to questions about the story. A story she published in October headlined “20 iconic slang words from Black Twitter that shaped pop culture”—which was syndicated across dozens of small-town American newspapers desperate for content, from northeast Mississippi to Waynesville, North Carolina—contains lines like “‘Brazy’ is another word for ‘crazy,’ replacing the ‘c’ with a ‘b.’” Was that story written by AI? It’s impossible to know and, frankly, impossible to say whether it even matters.

My favourite “Victoria Goldiee” story is a piece she published in The Guardian just last month. It’s a first-person essay without quotes, and thus difficult to fact-check. In it, Goldiee—who told me she lives in Toronto, writes as an American in other work, enthuses about the daily jollof specials at a restaurant in Ghana in yet other writing, and lists herself as based in Nigeria elsewhere—vividly describes discovering underground music as she moves through life in 21st-century England. It follows her from a Somali football league in east London to “Morley’s fried chicken shops lit up after midnight” and “community centres that smell of carpet cleaner and curry.” It’s a rousing argument that real culture happens in real spaces, between real human beings, not in some cold, computer-generated reality. “The future of our music,” it reads, “is not written by algorithm.”

“Wonderful article,” reads one of many approving comments.

“Beautiful message that a lot of people aren’t trudging wide-eyed and brain-dead through this increasingly soulless, corporate-heavy… modern world,” reads another. “They are socialising, communicating, loving and laughing and making culture like real, thinking, feeling human beings.”

In the days after our conversation, Victoria’s online writer’s portfolio vanished. Her Muck Rack page (a listing of a journalist’s published works) was switched to private. An X account with her handle that had shared previous stories disappeared.

As I emailed the editors of the publications I’d been investigating, one by one Victoria’s articles came down.

The climate story at Outrider disappeared, replaced by a 404 error. Outrider did not respond to questions about their editorial process.

The Guardian story came down, with a note saying it had been “removed pending review.”

The story at Dwell was removed. “An investigation concluded that the article, ‘How to Turn Your Home’s Neglected Corners Into Design Gold,’ did not meet Dwell’s editorial standards, and as such, we’ve retracted it,” said the editor’s note in its place.

The Journal of the Law Society of Scotland removed Goldiee’s article and editor-in-chief Joshua King published an apology to the journal’s readers. “On the balance of the evidence available, it is now my belief these quotations were falsely attributed to the interviewees and are likely to be fabricated,” he wrote. “This is professionally embarrassing and this apology is an article I am disappointed I have to publish.”

In an email, King explained that the quotes in the piece “raised no red flags because they were, in all honesty, what I would have expected those quoted to have said.” He added: “Sadly I think editors and publications are at risk from bad actors.”

Those bad actors are already here. This summer, the Chicago Sun-Times published an AI-generated “summer reading list” filled with books that didn’t exist. Here in Toronto, independent publication The Grind was forced to postpone an issue after they took a chance on some new writers and were inundated with “scammers trying to pawn off AI-generated stories about fictional places and people.” Earlier this year, at least six publications, including Wired, removed stories after it was discovered that the articles, allegedly written by a freelancer named “Margaux Blanchard,” were likely AI inventions. The suspected fraud was discovered only after Jacob Furedi, the editor of the independent publication Dispatch, received a suspicious pitch and began digging into the writer’s work. According to reporting from The Daily Beast, after the revelations, Business Insider quietly removed at least 34 essays under 13 different bylines.

After weeks of trudging through Goldiee’s online mess, I went back to my inbox to deal with the rest of the pitches that were still sitting there waiting for me. I was a freelance writer for most of my career, so as an editor, I’ve always done my best to respond to every thoughtful pitch I get. Looking at them now, though, all I could see was the synthetic sheen of artificial intelligence. There were probably some promising young writers buried in there somewhere. But I couldn’t bear to dig through the bullshit to try to find them.

I idly googled the authors of a few of the pitches that looked most blatantly written by AI. I saw their bylines across the internet—a web of lies and uncanny half-truths entrenched so deeply in the information ecosystem that no one could possibly have the energy to dislodge them—and I was struck by a brief but genuine moment of bone-deep despair. Then I closed my laptop.

I do not know who “Victoria Goldiee” is. I suspected, for a time, that she might just be a name unattached to any specific human being—perhaps one of a dozen bylines used by a content farm somewhere, maybe London, maybe Lagos. There is nothing in any of the editorial processes I’ve seen that would prevent that.

But after sifting through what’s left of her online presence and squinting at her collected writing, trying to discern what’s real, I now see her story as more mundane. The author, I believe, is either from or still lives in Nigeria. She likes Korean dramas. She lives online, like the rest of us. It makes her miserable, like it does the rest of us.

One of the earliest Victoria Goldiee stories I could find, published on a small website in May 2022, is about the experience of logging on to the internet and finding yourself bombarded with the images and stories of people doing better than you are. It was, notably, published six months before ChatGPT was released—the demarcation line after which you could no longer assume a sentence had been produced by a human.

The piece is far less polished than the stories that would come later, with awkward phrasing and ungrammatical sentences. But it seems, to my eyes at least, to express the real feelings of a real human being.

Online, she writes, people are “inclined to curate the best parts of their lives.” They refuse to show their imperfections, instead presenting a false image of positivity and success.

“There’s this immense pressure to be productive,” she writes. “Most people like me are tired and trying to survive day-by-day.”

AI Is Supercharging the War on Libraries, Education, and Human Knowledge

"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."

AI gets 45% of news wrong — but readers still trust it

The BBC and the European Broadcasting Union have produced a large study of how well AI chatbots handle summarising the news. In short: badly. [BBC; EBU]

The researchers asked ChatGPT, Copilot, Gemini, and Perplexity about current events. 45% of the chatbot answers had at least one significant issue: 31% had serious sourcing problems, and 20% had major accuracy issues, from hallucinations or outdated sources. This is across multiple languages and multiple countries. [EBU, PDF]

The AI distortions are “significant and systemic in nature.”

Google Gemini was by far the worst. It would make up an authoritative-sounding summary with completely fake and wrong references — much more than the other chatbots. It also used a satire source as a news source. Pity Gemini’s been forced into every Android phone, hey.

Chatbots fail most with current news stories that are moving fast. They’re also really prone to making up quotes. Anything in quotes probably isn’t the words the person actually said.

7% of news consumers ask a chatbot for their news, and that’s 15% of readers under 25. Just over a third — though they don’t give the actual percentage — say they trust AI summaries, as do about half of those under 35. People pick convenience first. [BBC, PDF]

Peter Archer is the BBC’s Programme Director for Generative AI — what a job title — and is quoted in the EBU press release. Archer put forward these results even though they were quite bad. So full points for that.

Unfortunately, Archer also says in the press release: “We’re excited about AI and how it can help us bring even more value to audiences.”

Archer sees his task here as promoting the chatbots: “We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.”

Anyone whose title is “Programme Director for Generative AI” is never going to sign off on a result that this stuff is poison to accurate news and the public discourse, and the BBC needs it gone — as this study makes clear. Because the job description is not to assess generative AI — it’s to promote generative AI. [job description]

So what happens next? The broadcasters have no plan to address the chatbot problem. The report doesn’t even offer ways forward. There are no action points! Except do more studies!

They’re just going to cross their fingers and hope the chatbot vendors can be shamed into giving a hoot — the approach that hasn’t worked so far, and isn’t going to work.

Unless the vendors can cure chatbot hallucinations. And they can’t do that, because that’s how chatbots work. Everything a chatbot outputs is a hallucination, and some of the hallucinations are just closer to accurate.

The actual answer is to stop using chatbots for news, stop creating jobs inside the broadcasters whose purpose is to befoul the information stream with generative AI, and attach actual liability to the chatbot vendors when they output complete lies. Imagine a chatbot vendor having to take responsibility for what the lying chatbot spits out.

Zohran Mamdani’s Win Is a Rare and Beautiful Moment In the Class War

Can good things happen? Last night's victory in New York City suggests they can.

Details

As the U.S. tariff act of June 6, 1872, was being drafted, planners intended to exempt “Fruit plants, tropical and semi-tropical for the purpose of propagation or cultivation.”

Unfortunately, as the language was being copied, a comma was inadvertently moved one word to the left, producing the phrase “Fruit, plants tropical and semi-tropical for the purpose of propagation or cultivation.”

Importers pounced, claiming that the new phrase exempted all tropical and semi-tropical fruit, not just the plants on which it grew.

The Treasury eventually had to agree that this was indeed what the language now said, opening a loophole for fruit importers that deprived the U.S. government of an estimated $1 million in revenue. Subsequent tariffs restored the comma to its intended position.

AI makes you think you’re a genius when you’re an idiot

Today’s paper is “AI Makes You Smarter, But None the Wiser: The Disconnect between Performance and Metacognition”. AI users wildly overestimate how brilliant they actually are: [Elsevier, paywalled; SSRN preprint, PDF; press release]

All users show a significant inability to assess their performance accurately when using ChatGPT. In fact, across the board, people overestimated their performance.

The researchers tested about 500 people on the LSAT. One group had ChatGPT with GPT-4o, and one just used their brains. The researchers then asked the users how they thought they’d done.

The chatbot users did better — which is not surprising, since past LSATs are very much in all the chatbots’ training data, and they regurgitate them just fine.

The AI users did not question the chatbot at length — they just asked it once what the answer was and used whatever the chatbot regurgitated.

But also, the chatbot users estimated their results as being even better than they actually were. In fact, the more “AI literate” the subjects measured as, the more wrongly overconfident they were.

Problems with this paper: it credits the LSAT performance to improved thinking, rather than to the AI regurgitating its training data, and it suggests ways to use the AI better rather than suggesting not using it and actually studying. But the main result seems to have been reached reasonably.

If you think you’re a hotshot promptfondler, you’re wildly overconfident and you’re badly wrong. Your ego is vastly ahead of your ability. Just ask your coworkers. Democratising arrogant incompetence!
