
The Fallout From Our AI Freelancer Investigation | The Local


Last week, we published the story of “Victoria Goldiee,” a freelancer with a promising pitch and bylines at a series of impressive outlets who wasn’t what she seemed. After we dug into her published work—finding fabricated quotations from real people, possibly created using AI—four separate publications, including The Guardian and Dwell, took down her stories.

In the days since, I’ve heard from editors around the world who’ve had their own run-ins with “Goldiee.” The editor of a small Indigenous-led newsroom on the West Coast emailed, explaining she had just assigned the freelancer a story earlier that week. An editor in the States had been in the middle of editing a draft Goldiee had submitted about people squatting on public land, apparently drawing on the author’s own experience living in a van for years, when she “decided to Google Goldiee’s name one more time” and found my article. (You can read that account here). Countless more reached out to say they’d been pitched by her—for stories in The Globe and Mail and a local paper in Tulsa, niche publications on everything from military service to Grand Prix racing, a brand new books publication in Oakland, and the 150-year-old scientific journal Nature. Truly, the range was breathtaking.

But the story here is not just of a single prolific pitcher who appears to be using AI to try to dupe the editors of the world. The bigger question in my inbox, and in a conversation that has grown across the wider journalism world, is: How widespread is this kind of thing? And, more pressingly, in a universe where AI text can be produced with the click of a button and a pitch is no longer intrinsically connected to the human who sends it, what do publications need to do to protect themselves and their readers?

We’ve been having these discussions at The Local as well, and you can see the results in our AI policy, which we are announcing today. The core of the policy is pretty simple. The Local’s stories are researched, written and edited by human beings; our photographs are taken by human beings; our illustrations are drawn by human beings.

After we published our piece last week, The Walrus’s editor-in-chief Carmine Starnino got me on Zoom to talk. The conversation, first published at The Walrus, touches on modern fabulists, the role of AI in the newsroom, and what all this means for journalism.

Carmine Starnino: Two years ago, Sports Illustrated published a batch of fictitious stories by non-existent writers. All of it was AI-generated—even the author portraits. They swiftly took the content down. It isn’t hard to find similar examples of generative fakery. So, in one sense, there’s nothing fundamentally new in what you reported. The increasingly synthetic nature of journalism is something we have to deal with and worry about all the time. But your story feels different. Why do you think the reaction to it has been so dramatic?

Nicholas Hune-Brown: That’s a good question. The example of the Sports Illustrated stories is, in my mind, pretty reprehensible. But in that case, the shell of a once great magazine decided to use AI to create stories. This is more a situation where you see publications being duped by bad actors who, aided by technology, are sneaking in stuff that appears totally made up. And if you’re an editor, that’s a scarier prospect. Because you’re not choosing to do that. You’re getting tricked. And I think that element of deception makes it different.

CS: And why? To be clear, what makes it scarier?

NHB: Whenever I’ve gone into journalism classes for the past 15 years, I’m always saying that an amazing pitch can be this key that opens doors you never thought could be opened. That’s how I got my career, just writing good pitches. Someone doesn’t have to know you, but they can see your ability on the page. Now, as an editor, I open my inbox and I understand there doesn’t need to be any link between the pitch and the person. They can be completely separate things. And I don’t know how you work in that kind of world.

CS: That brings me to the next part: why do it? Sure, journalism has always had its fabricators. Stephen Glass, who you cite, being an obvious example. But in those cases, the deceptions were the act of an otherwise gifted writer working in deranged fealty to a very traditional idea of reporting. They lied, yes, but they came up with the lies themselves! It was still work! But to hand over the entire commissioning cycle to AI—from pitch to draft—feels like a win with no victory. Why go through the effort of making no effort?

NHB: This is what I was obsessing about for months. I don’t have any clear answers. At certain points, I thought Goldiee was one name out of many bylines that a content farm was using the same way they might be writing other email scams. I don’t think that’s the case anymore. I think it’s a real individual. I can speculate about psychology or whatever, but one thing to remember is what this technology enables in a global world of wild inequalities. AI can make gaps in geography or cultural understanding disappear. Based on what this individual has written, and some elements in their social media before it disappeared, I believe they’re either from or still live in Nigeria. And if that’s the case, a terrible word rate at a publication in the United States could actually be a pretty decent payday—especially if you just have to enter it into ChatGPT.

CS: Fascinating. That almost makes it less a story about deception and more one about economic conditions.

NHB: I think this is something we’re going to see across different industries. After I published the story, I heard from a social scientist who said they encountered the same thing when trying to do qualitative studies. This is where you put out a call and offer a small honorarium, right? And in the last year, they’ve looked at the IP addresses from participants when they come online, and it seems many are in Nigeria. I don’t know why Nigeria exactly. But these people were using AI to give generic responses, and then, in the Zoom call, turning off the camera and, between long pauses, presumably ChatGPTing answers to the researcher’s questions. So, she was kind of at a loss about how to keep doing the qualitative studies that they usually do.

CS: Some of the places that published her were not fly-by-night places. As you hint, maybe they’re under pressure to feed an endless demand for content, maybe the editorial teams are leaner and just don’t have the resources to vet stuff. Or the staff is being crowded into revenue-generating activities that make it harder to do the one thing that might have helped: edit properly. What jolted me about your piece, to be honest, is that it’s more than a judgment on the Victoria Goldiees of our time. It’s also a judgment on an industry that found her writing credible enough to let through.

NHB: Like I wrote, this is only possible in an incredibly degraded media environment. Some of the pieces that went up, of course, weren’t fact-checked, but could not even have been read with a critical eye.

CS: Let’s flip this around. Does it really matter if there’s no human behind the text? Does it fundamentally change the meaning of what’s there? Me, I like to think that there’s a type of irreducible value created by the knowledge that someone wrote something—that it wasn’t a machine. But is that just sentimental? Do you feel that it’s vital the texts we publish are written by actual people?

NHB: I see no value in a piece of writing that, structurally, looks like it’s done by a human but does not actually convey any human experience. It has no value to me, even if I might not be able to spot it at this point, unfortunately. That piece in The Guardian that I mentioned in my story, about experiencing underground music and moving through all these spaces in England. It was very vividly written and moving in a way that stirred some commenters. But it didn’t express anything real. I mean, I’m not sure Goldiee ever lived in England.

CS: But you were impressed by it.

NHB: It was a good ChatGPT piece. It was impressive, and I could see why anyone would be fooled by it. I could see why they would enjoy it. But it has no value to me if it’s not created by a person. When I think about most of the work we do at The Local, I can’t see any way a computer can do it—phone someone up, talk to them, discover new things. That has value. You’re asking an interesting philosophical question. I’m just not into any of it right now.

CS: My editor-in-chief role at The Walrus has overlapped almost perfectly with the ChatGPT era. And so, I’ve been obsessing about it a fair bit. I find it’s shifting how I think about pitches, moving away from questions like, “Is this interesting enough to report out?” and towards, “Is there a mind at work here that readers want to spend time with?” We’re inching toward more seasoned writers—writers who can turn a phrase, and have a vocal print. And we know that because they’ve published other things. They have a voice and have built a career around that voice.

Chatbots can replicate the skin of their sentences, but can’t incarnate the substance. That is probably too weird for what we’re hoping for in this conversation!

NHB: No, that’s interesting. I probably thought that way a year ago. I thought there was something innately human that a chatbot can never synthetically recreate. And I don’t think so anymore. I don’t know if there’s a way anymore to see a piece of writing and recognize the human soul behind its structure. Maybe there are still some stilted ChatGPTisms that give it away—people always flag em dashes, which I take such offence to as someone that uses a lot of them! But in a few months even those tells might be gone. So, my instinct is almost the opposite. I think the thing that cannot be replaced at all is the reporting. I think I value the person who has sat through events, who has picked up the phone and spoken to real people, sometimes people no one else has spoken to. I’m thinking that is the differentiating quality.

CS: Maybe we need a marriage of both? Writers who write in a way that’s inherent to them and who can couch that writing in reporting that’s also original.

NHB: The other thing you said that struck me, and is kind of tragic, is that the easiest way to tell that someone’s legit and that the pitch is good is their body of work. If they’ve written for you, and you know them, that is a safe way to do things. Part of what we want to do at The Local, part of what I’m always trying to do, is develop new writers. I know that’s what you folks try to do as well. That’s literally in our mandate. If I’m trying to assign seven stories, I want two of them to be to new and emerging writers who maybe haven’t done this before, but that you trust that you can work with them, train them up. I don’t know how that trust can happen now. I don’t know how young freelancers get their foot in the door in this new world. We’re not going to give up on it and just work with people we know, but it’s definitely something we have to figure out.

CS: Is this our equivalent of doping in sports—something we have to stay vigilant against?

NHB: Honestly, I think you guys are okay. I don’t think the Victoria stories that got published on other websites would get published on The Walrus or The Local. I think you have robust editing and have a great fact-checking process. I’m not worried about that. I’m more worried about the front end being inundated with garbage. How do you find what’s good, right? I doubt those publications that got burned are going to hire back their copy desk and fact checkers, but that is the most obvious move. That is, if you actually care about this stuff. If you don’t care—which I think is the case with some of these publications who are increasingly in the content business and don’t mind where it comes from—then that’s a different story.

CS: But that’s exactly it. I do wonder if more of the industry is becoming tolerant of this kind of synthetic content, especially if they’re already doing away with the quality-control measures. My worry isn’t just the front end. Look at what AI has done to search. Readers are already being trained to see summaries as being perfectly adequate proxies for journalism itself.

NHB: And you can see the numbers already, right? Places now see, say, 70 percent less traffic.

CS: Yes. My worry is that it could have an effect on newsrooms—encourage them to empty their room of news. I worry that operations whose sense of journalism is maybe not as acute as ours, not as ethically rooted as ours, are starting to feel that if, well, readers don’t care, maybe it becomes an opportunity to make things a lot easier on themselves. What if The Walrus and The Local end up among pockets of rearguard thinking in an industry that will hand over a lot of its content to large language models. I mean, if you’re running a media org, couldn’t you just fire everybody and chatbot your way into clicks? Ten articles a day, 20 an hour, 30 every 15 minutes—in different voices, on every subject under the sun, conjured instantaneously and all for the price of a subscription to Claude or ChatGPT. Isn’t that a reality we might be heading towards? What if Sports Illustrated is our future and not a blunder? If there isn’t a desk somewhere deep in Condé Nast mulling this, I’d be surprised.

NHB: That would be a way to kill what remains of the journalism industry. You can’t give an inch to it. You can’t have a mix of things that are true and not true in your news publication or the whole thing is destroyed. There’s no space for that at all. That’s where I’m at right now. And I think, economically, it doesn’t make any sense. It might in the short term, but as search gets degraded, as this fake stuff gets proliferated across the internet, there is going to be real value in being able to go to a place and know the stuff there is true and human. I hope, and trust, it will be something readers find valuable.


Investigating a Possible Scammer in Journalism’s AI Era | The Local


In late September, I put out a call for pitches from freelance journalists. As an editor at The Local, an online magazine in Toronto that wins awards for its long-form journalism, I have a stable of dependable writers I like to work with. But we’re always trying to bring new voices onto our site, and an open call for pitches on a specific theme has, in the past, been a good way to find them.

The last time I’d put out an open call was more than a year ago, when we’d received the usual stream of ideas—some intriguing, most not quite right for us, but all recognizably human. A year later, things were very different.

My request this time was for stories about health care privatization, which has become a fraught topic in Canada. Over the next week, I got a flood of story ideas from people around the world. Some, from writers in Africa, India, and the U.S., obviously weren’t right for a Toronto publication. But many had the sound, at least, of plausible Local stories.

One pitch in particular seemed promising. The writer, Victoria Goldiee, introduced herself as having written for The Globe and Mail, The Walrus, and Maisonneuve—Canadian outlets that publish the same kind of feature writing we do. The pitch tackled the idea of privatization with a catchy angle about the rise of “membership medicine.”

“The story would track how these plans transform health care into something resembling Netflix or Amazon Prime, and what this means for a public system that has long prided itself on universality,” it read.

What set the pitch apart from other emails suggesting similar stories was the amount of reporting the author had already done, as well as her collection of bylines. Victoria said she’d already spoken with a number of people—a 42-year-old consultant in Vancouver, a 58-year-old construction worker in Hamilton, and health care experts like Toronto physician Danielle Martin, who she quoted as saying “membership medicine is a creeping form of privatization.”

When I googled her, I saw that Victoria had written stories for a set of publications that collectively painted the picture of an ambitious young freelancer on the rise—short pieces in prestigious outlets like The Cut and The Guardian, lifestyle features in titles like Architectural Digest and Dwell, and in-depth reporting in non-profit and industry publications like Outrider and the Journal of the Law Society of Scotland. Her headshot was of a youthful Black woman. She was, according to her author bio, “a writer with a keen focus on sharing the untold stories of underrepresented communities in the media.”

At the next editorial story meeting, we decided to take a shot on Victoria and assign the story. Then I began looking more closely at her work.

There were some red flags. The first question I had was whether she was actually in Toronto when so many of her bylines were in New York magazines and British newspapers. And how had she managed to do so many interviews already? Doing so much reporting without the guarantee of pay felt like a big gamble.

When I googled “Victoria Goldiee” with the names of the Canadian publications she said she’d written for, there were no results. We reached out to Danielle Martin, one of the doctors Victoria claimed she’d interviewed. Martin said she’d never heard of her.

I emailed Victoria back: “Are those quotes from your own interviews? And do you mind sending along some clippings, perhaps from your Walrus or Maisonneuve stories?”

She sent a lengthy reply the next day. “The quotes I included in the pitch are from original interviews I’ve conducted over the past few weeks,” she insisted. “In terms of previous work, I write a regular newsletter for The Walrus, which gives a good sense of my ability to balance accessibility with depth while speaking to a broad audience.” She attached a link to The Walrus’s “Lab Insider” newsletter that did not have her byline.

“I can 100% confirm that they do not write the Lab Insider newsletter,” wrote Tracie Jones from The Walrus when I emailed. “How odd to say they do!”

Victoria’s stilted email, and a closer read of the original pitch, revealed what should have been clear from the start: with its rote phrasing (“This story matters because of… It is timely because of… It fits your readership because of…”), it had all the hallmarks of an AI-generated piece of writing.

I was embarrassed. I had been naively operating with a pre-ChatGPT mindset, still assuming a pitch’s ideas and prose were actually connected to the person who sent it. Worse, the reason the pitch had been appealing to me to begin with was likely because a large language model somewhere was remixing my own prompt asking for stories where “health and money collide,” flattering me by sending me back what I wanted to hear.

But if Victoria’s pitch appeared to be an AI-generated fabrication, and if she was making up interviews and bylines, what to make of her long list of publications?

Since 2022, the byline “Victoria Goldiee” has been attached to dozens of articles. There are a series of “as-told to” stories in Business Insider. (“I’m a 22-year-old Amazon delivery driver. The cameras in my truck keep me on high alert, but it’s my dream job and the flexible hours are great,” is a novel take on Amazon’s labour practices). There had been an interview with the comic actor Nico Santos in Vogue Philippines, a feature on Afrobeats in RollingStone Africa (no longer on the site), a product recommendation for a DVD drive in New York Magazine’s The Strategist, and, in the past two years, a move away from culture writing to meatier features.

A 2024 story about climate change memes from the non-profit Outrider quotes “Juliet Pinto, a Professor of Psychology” at Pennsylvania State University. I emailed Pinto, who is in fact a communications professor at Pennsylvania State University. “I have not spoken with any reporter about that piece of research, and I am not a professor of psychology,” she wrote back. The piece also quotes “Terry Collins, a climate scientist and professor of environmental science at the University of California.” I could not find anyone of that description, but Terry Collins, the director of the Institute of Green Science at Carnegie Mellon, said he’s never communicated with Goldiee.

Victoria’s online portfolio featured a pair of stories from the Vox Media publication PS (formerly Pop Sugar). When I clicked her links, however, I found each had been replaced with an editor’s note explaining the article had been “removed because it did not meet our editorial standards.”

“As I recall, the articles bylined by Victoria borrowed far too heavily from articles published elsewhere,” wrote former PS editor Nancy Einhart when I asked why the stories had been taken down. “I remember feeling disappointed because I really liked Victoria’s pitches.”

She added: “You are actually the third editor to contact me about this writer in the past couple of months! She is clearly on a pitch tour.”

Indeed, over the past months, Goldiee’s production has ramped up, with a series of articles in a wide range of publications that seem to be growing bolder in their ambitions.


In September, the Journal of the Law Society of Scotland published a story about rural law firms that includes quotes from regular Scots whom I could not find, a lawyer who appears to be fictitious, a professor who told me she did not speak with the reporter, and even the Cabinet Secretary for Justice and Home Affairs, who did not respond to my email.

“The quotation did not come from me and, to the best of my recollection, I have never met or spoken to Victoria Goldiee,” Elaine Sutherland, professor emerita at the University of Stirling, wrote me. What was even more unsettling, though, was that the sentiments in the soundbite reflected her real beliefs. “The quotation attributed to me is the sort of thing I might say,” she wrote.

A month after that article, a Victoria Goldiee story in the design publication Dwell—“How to Turn Your Home’s Neglected Corners Into Design Gold”—featured a series of quotes purported to be from a wide array of international designers and architects, from Japan to England to California. A cursory read raised questions that probably should have been asked by editors to begin with. Namely, had a freelancer writing an affiliate-link-laden article about putting credenzas in your living room’s corners actually interviewed 10 of the world’s top designers and architects?

“Beata hasn’t heard of the journalist,” wrote a representative of designer Beata Heuman in response to my email.

“I did not speak with this reporter and did not give this quote,” wrote designer Young Huh, who is quoted as saying “corners are like little secrets in a home.”

“We definitely did not talk to her,” said a representative from architect Barbara Bestor. “So that’s kind of crazy.”

The stories had the characteristic weirdness of articles written by a large language model—invented anecdotes from regular people who didn’t appear to exist accompanied by expert commentary from public figures who do, with some biographical details mangled, who are made to voice “quotes” that sound, broadly, like something they might say.

When I asked one of the architects quoted in the Dwell piece, Sally Augustin, if she had ever spoken with Victoria, she was careful in her response. “I don’t actually remember speaking with her,” she wrote, and there was no sign of Victoria in her inbox. But she couldn’t be totally sure. And she wasn’t particularly bothered by her appearance in the article anyway. “The material attributed to me sounds exactly like something that I would say and I am fine with that material being out there,” she wrote.

Two weeks after she first pitched me, and after spending far too many hours trailing the path she’d cut through the internet, I emailed Victoria Goldiee asking if we could talk about her story.

By that point I suspected she was making things up in publications around the world, but I was hoping a conversation could get me closer to some answers.

We set up a video call for later that week. Ten minutes before it was set to begin, she emailed to change plans. “I look forward to chatting in a bit, I’ll be joining via phone so it’ll be a voice call on my end,” she wrote, signing off with, “xx.”

Moments later she was on the line. “Hi Nick!” she said, chipper and upbeat, speaking through a crackling phone line. To my ear, she sounded like a young woman with an African accent.

I asked her where in Toronto she was based. “Bloor,” she said cheerfully, naming one of the city’s busiest commercial thoroughfares without missing a beat. If she was just naming one of the first streets that comes up when you google “Toronto streets,” I couldn’t help but appreciate the effort.


I asked her to tell me a little more about the story she had in mind. She talked her way through it using more or less the same language as the pitch, as if paraphrasing a document in front of her. “It’s basically as if more people, like Canadians, are paying for health care as kind of their own version of Netflix or Amazon, kind of how we pay monthly subscriptions for Netflix. Now, health care is the same way.”

I asked her about the interviews she said she’d already done. “You said you spoke with Danielle Martin. Is that right?”

“Yes, yes,” she said.

“We know her at The Local, and she said she doesn’t remember speaking with you,” I said, trying to keep my tone more curious than accusatory. “Did you actually speak with her?”

“Oh yeah, I did,” she said quickly. “I did have my personal assistant talk to her.”

I did not linger on the idea of a freelance writer with a personal assistant, instead pushing on, not wanting to spook her.

If the person on the other line was put off by my line of questioning, she didn’t show it. Victoria remained cheerful sounding and upbeat, providing quick, if implausible, answers to every question. Why couldn’t I find the stories she said she’d written for Canadian publications? “Most of them are in print,” she explained. (Editors from The Globe and Mail, Maisonneuve, and The Walrus say they do not believe Victoria Goldiee has ever written for those publications).

She said she was in Toronto, but I had noticed she’d written a lot for British publications. Had she just come to Canada recently? “I did, recently, like this past year,” she said.

Did she know why those Pop Sugar articles were no longer online? “I think the editor who published the story left the publication,” she responded. “So that’s why they deleted all the pieces that she covered.”

I don’t think I’ve ever spoken to someone who I suspected was lying to me with each and every response. I also don’t know if I’ve interviewed anyone I so desperately wanted to hear the truth from.

I had so many questions. Was the person on the phone even the same person whose writing was online? Where did she actually live—if not “Bloor,” was it the States? The U.K.? Or, as suggested by some of her writing, Nigeria? Was she a writer with genuine ambitions, who had gotten in way over her head and was now taking some truly outrageous risks? I was ready to be sympathetic. Was there some other explanation for the wild inconsistencies I had found? Or was she a simple scammer who had found easy marks in the overworked, credulous editors of the journalism world?


In my fantasy version of this phone call, after I’d gently led her toward more and more severe inconsistencies, Victoria would be forced to admit to the deceptions, and then we would really talk. As the conversation continued, though, I realized how foolish that hope had been. Whoever was on the other end of that line—caught in a nightmarish call that was transforming from a work chat into an audit of their professional life—was not going to somehow open up and offer a full explanation.

Five minutes into the call, I turned the conversation to what I’d discovered. “So, I started looking at some other clippings of your work,” I said. “I saw a piece that you did recently for the Journal of the Law Society of Scotland, I think?”

“Yeah,” she said, and I heard the slightest crack in her voice.

“I was looking at some of the quotes within those stories, and some of the people you spoke with,” I continued. “For example, in the Law Society of Scotland piece, you quote this professor.”

Victoria was quiet on the other end of the line. I actually emailed the professor, I explained. “And she said she never spoke with you.”

In the silence that followed I realized Victoria had hung up. She has not responded to my emails since.

Every media era gets the fabulists it deserves. If Stephen Glass, Jayson Blair and the other late 20th century fakers were looking for the prestige and power that came with journalism in that moment, then this generation’s internet scammers are scavenging in the wreckage of a degraded media environment. They’re taking advantage of an ecosystem uniquely susceptible to fraud—where publications with prestigious names publish rickety journalism under their brands, where fact-checkers have been axed and editors are overworked, where technology has made falsifying pitches and entire articles trivially easy, and where decades of devaluing journalism as simply more “content” have blurred the lines so much it can be difficult to remember where they were to begin with.

Freelance journalism in 2025 is an incredibly difficult place to build a career. But, it turns out, it’s a decent enough arena for a scam. On their website, Outrider says they pay $1,000 per article. Dwell’s rates start at 50 cents a word—a fee that’s difficult to justify if you actually want to interview 10 of the top designers in the world, but a healthy payday if you only need to enter a few words into ChatGPT.

Not every Victoria Goldiee story I looked at raised the same questions. A writer by that name had, in fact, spoken with actor Nico Santos for a story in Vogue Philippines, according to his publicist. Others were impossible to debunk with certainty. Had Victoria actually interviewed an incredibly elusive Korean production designer for a story in Architectural Digest about how people were, apparently, redesigning their rooms to match K-dramas? I can’t say for sure, and Architectural Digest did not respond to questions about the story. A story she published in October headlined “20 iconic slang words from Black Twitter that shaped pop culture”—which was syndicated across dozens of small-town American newspapers desperate for content, from northeast Mississippi to Waynesville, North Carolina—contains lines like “‘Brazy’ is another word for ‘crazy,’ replacing the ‘c’ with a ‘b.’” Is that story written by AI? It’s impossible to know and, frankly, impossible to say if it even matters.

My favourite “Victoria Goldiee” story is a piece she published in The Guardian just last month. It’s a first-person essay without quotes, and thus difficult to fact-check. In it, Goldiee—who told me she lives in Toronto, writes as an American in other work, enthuses about the daily jollof specials at a restaurant in Ghana in yet other writing, and lists herself as based in Nigeria elsewhere—vividly describes discovering underground music as she moves through life in 21st-century England. It follows her from a Somali football league in east London to “Morley’s fried chicken shops lit up after midnight” and “community centres that smell of carpet cleaner and curry.” It’s a rousing argument that real culture happens in real spaces, between real human beings, not in some cold, computer-generated reality. “The future of our music,” it reads, “is not written by algorithm.”

“Wonderful article,” reads one of many approving comments.

“Beautiful message that a lot of people aren’t trudging wide-eyed and brain-dead through this increasingly soulless, corporate-heavy… modern world,” reads another. “They are socialising, communicating, loving and laughing and making culture like real, thinking, feeling human beings.”

In the days after our conversation, Victoria’s online writer’s portfolio vanished. Her Muck Rack page (a listing of a journalist’s published works) was switched to private. An X account with her handle that had shared previous stories disappeared.

As I emailed the editors of the publications I’d been investigating, one by one Victoria’s articles came down.

The climate story at Outrider disappeared, replaced by a 404 error. Outrider did not respond to questions about their editorial process.

The Guardian story came down, with a note saying it had been “removed pending review.”

The story at Dwell was removed. “An investigation concluded that the article, ‘How to Turn Your Home’s Neglected Corners Into Design Gold,’ did not meet Dwell’s editorial standards, and as such, we’ve retracted it,” said the editor’s note in its place.

The Journal of the Law Society of Scotland removed Goldiee’s article and editor-in-chief Joshua King published an apology to the journal’s readers. “On the balance of the evidence available, it is now my belief these quotations were falsely attributed to the interviewees and are likely to be fabricated,” he wrote. “This is professionally embarrassing and this apology is an article I am disappointed I have to publish.”

In an email, King explained that the quotes in the piece “raised no red flags because they were, in all honesty, what I would have expected those quoted to have said.” He added: “Sadly I think editors and publications are at risk from bad actors.”

Those bad actors are already here. This summer, the Chicago Sun-Times published an AI-generated “summer reading list” filled with books that didn’t exist. Here in Toronto, independent publication The Grind was forced to postpone an issue after they took a chance on some new writers and were inundated with “scammers trying to pawn off AI-generated stories about fictional places and people.” Earlier this year, at least six publications, including Wired, removed stories after it was discovered that the articles, allegedly written by a freelancer named “Margaux Blanchard,” were likely AI inventions. The suspected fraud was discovered only after Jacob Furedi, the editor of the independent publication Dispatch, received a suspicious pitch and began digging into the writer’s work. According to reporting from The Daily Beast, after the revelations, Business Insider quietly removed at least 34 essays under 13 different bylines.

After weeks of trudging through Goldiee’s online mess, I went back to my inbox to deal with the rest of the pitches that were still sitting there waiting for me. I was a freelance writer for most of my career, so as an editor, I’ve always done my best to respond to every thoughtful pitch I get. Looking at them now, though, all I could see was the synthetic sheen of artificial intelligence. There were probably some promising young writers buried in there somewhere. But I couldn’t bear to dig through the bullshit to try to find them.

I idly googled the authors of a few of the pitches that looked most blatantly written by AI. I saw their bylines across the internet—a web of lies and uncanny half-truths entrenched so deeply in the information ecosystem that no one could possibly have the energy to dislodge them—and I was struck by a brief but genuine moment of bone-deep despair. Then I closed my laptop.

I do not know who “Victoria Goldiee” is. I suspected, for a time, that she might just be a name unattached to any specific human being—perhaps one of a dozen bylines used by a content farm somewhere, maybe London, maybe Lagos. There is nothing in any of the editorial processes I’ve seen that would prevent that.

But after sifting through what’s left of her online presence and squinting at her collected writing, trying to discern what’s real, I now see her story as more mundane. The author, I believe, is either from or still lives in Nigeria. She likes Korean dramas. She lives online, like the rest of us. It makes her miserable, like it does the rest of us.

One of the earliest Victoria Goldiee stories I could find, published on a small website in May 2022, is about the experience of logging on to the internet and finding yourself bombarded with the images and stories of people doing better than you are. It was, notably, published six months before ChatGPT was released—the demarcation line after which you could no longer assume a sentence had been produced by a human.

The piece is far less polished than the stories that would come later, with awkward phrasing and ungrammatical sentences. But it seems, to my eyes at least, to express the real feelings of a real human being.

Online, she writes, people are “inclined to curate the best parts of their lives.” They refuse to show their imperfections, instead presenting a false image of positivity and success.

“There’s this immense pressure to be productive,” she writes. “Most people like me are tired and trying to survive day-by-day.”


AI Is Supercharging the War on Libraries, Education, and Human Knowledge


"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."

Image: Steve Johnson via Unsplash

AI gets 45% of news wrong — but readers still trust it


The BBC and the European Broadcasting Union have produced a large study of how well AI chatbots handle summarising the news. In short: badly. [BBC; EBU]

The researchers asked ChatGPT, Copilot, Gemini, and Perplexity about current events. 45% of the chatbot answers had at least one major issue. 31% were seriously wrong and 20% had major inaccuracies, from hallucinations or outdated sources. This is across multiple languages and multiple countries. [EBU, PDF]

The AI distortions are “significant and systemic in nature.”

Google Gemini was by far the worst. It would make up an authoritative-sounding summary with completely fake and wrong references — much more than the other chatbots. It also used a satire source as a news source. Pity Gemini’s been forced into every Android phone, hey.

Chatbots fail most with current news stories that are moving fast. They’re also really prone to making up quotes. Anything in quotes probably isn’t what the person actually said.

7% of news consumers ask a chatbot for their news, rising to 15% of readers under 25. And just over a third (the report doesn’t give the exact percentage) say they trust AI summaries, including about half of those under 35. People pick convenience first. [BBC, PDF]

Peter Archer is the BBC’s Programme Director for Generative AI — what a job title — and is quoted in the EBU press release. Archer put forward these results even though they were quite bad. So full points for that.

Unfortunately, Archer also says in the press release: “We’re excited about AI and how it can help us bring even more value to audiences.”

Archer sees his task here as promoting the chatbots: “We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.”

Anyone whose title is “Programme Director for Generative AI” is never going to sign off on a result that this stuff is poison to accurate news and the public discourse, and the BBC needs it gone — as this study makes clear. Because the job description is not to assess generative AI — it’s to promote generative AI. [job description]

So what happens next? The broadcasters have no plan to address the chatbot problem. The report doesn’t even offer ways forward. There are no action points! Except do more studies!

They’re just going to cross their fingers and hope the chatbot vendors can be shamed into giving a hoot — the approach that hasn’t worked so far, and isn’t going to work.

Unless the vendors can cure chatbot hallucinations. And they can’t do that, because that’s how chatbots work. Everything a chatbot outputs is a hallucination, and some of the hallucinations are just closer to accurate.

The actual answer is to stop using chatbots for news, stop creating jobs inside the broadcasters whose purpose is to befoul the information stream with generative AI, and attach actual liability to the chatbot vendors when they output complete lies. Imagine a chatbot vendor having to take responsibility for what the lying chatbot spits out.


Zohran Mamdani’s Win Is a Rare and Beautiful Moment In the Class War

Can good things happen? Last night's victory in New York City suggests they can.

Details


As the U.S. tariff act of June 6, 1872, was being drafted, planners intended to exempt “Fruit plants, tropical and semi-tropical for the purpose of propagation or cultivation.”

Unfortunately, as the language was being copied, a comma was inadvertently moved one word to the left, producing the phrase “Fruit, plants tropical and semi-tropical for the purpose of propagation or cultivation.”

Importers pounced, claiming that the new phrase exempted all tropical and semi-tropical fruit, not just the plants on which it grew.

The Treasury eventually had to agree that this was indeed what the language now said, opening a loophole for fruit importers that deprived the U.S. government of an estimated $1 million in revenue. Subsequent tariffs restored the comma to its intended position.
