Associate Professor of English, University of Michigan-Flint. I research and teach rhetoric and writing.
5119 stories · 43 followers

AI doesn’t belong in journaling | The Verge

1 Comment

At my demo, Google told me the idea was to make journaling easier — much in the way that Gemini simplifies other writing tasks, like emails and document summaries. Sometimes, I was told, it can be hard to know what you should journal about. Looking back can also be difficult. The point of Gemini in this instance was to make life a little more convenient and helpful.

That’s nice, except journaling isn’t supposed to be easy or convenient.

Ask any writer: a blank page is meant to be wrestled with. And in journaling, the only prompt you ever need is “What happened today and how do I feel about that?”


betajames
5 days ago
wtf is the point jfc
Michigan

Will Smith’s concert crowds are real, but AI is blurring the lines

1 Comment and 2 Shares

This minute-long clip of a Will Smith concert is blowing up online for all the wrong reasons, with people accusing him of using AI to generate fake crowds filled with fake fans carrying fake signs. The story’s blown up a bit, with coverage in Rolling Stone, NME, The Independent, and Consequence of Sound.

And it definitely looks terrible! The faces have all the hallmarks of AI slop, with familiar artifacts like uncanny features, smeared faces, extra fingers and limbs, and nonsensical signage. “From West Philly to West Swig̴̙̕g̷̤̔͜y”?

It gets worse the more you look at it.

But here’s where things get complicated.

The crowds are real. Every person you see in the video above started out as real footage of real fans, pulled from video of multiple Will Smith concerts during his recent European tour.

Real Crowds, Real Signs

The main Will Smith performance footage in the clip is from the Positiv Festival, held last month at the Théâtre Antique d’Orange in Orange, France. (Here’s a phone recording from the audience of the first half of the performance.) It’s intercut with shots of audiences from Gurtenfestival and Paléo in Switzerland and the Ronquières Festival in Belgium, among others.

This slideshow of photos from the Paléo festival in Nyon, Switzerland, includes professionally shot photos of the same audience seen in the video.

The signs that appear distorted in the video are legible in the photos, like this one, which actually reads “From West Philly to West Swizzy.” (Short for Switzerland, if you’re wondering.)

One of the most egregious examples is the couple holding the sign thanking Will Smith for helping them survive cancer which — if it was AI-generated slop — would be pretty disgusting: a gross attempt to drum up sympathy with fake people.

In an article posted by The Independent today, music editor Roisin O’Connor points to the couple as clear evidence of AI generation:

“Another shot shows a man’s knuckle appear to blur along with his sign, which reads ‘“You Can Make It” helped me survive cancer. THX Will.’ Meanwhile, the woman in front of him is seemingly holding his hand, but the headband of the woman behind her is somehow over her wrist.”

But the couple is real. There are two good photos of them on Will Smith’s Instagram, in a slideshow of photos and videos from Gurtenfestival in Bern last month.

You can see them in this video from Will Smith’s Instagram post, which I clipped below.

Two Levels of AI Enhancement

So if these fans aren’t AI-generated fakes, what’s going on here?

The video features real performances and real audiences, but I believe they were manipulated on two levels:

  1. Will Smith’s team generated several short AI image-to-video clips from professionally-shot audience photos
  2. YouTube post-processed the resulting Shorts montage, making everything look so much worse

Let’s start with YouTube.

YouTube’s Shorts “Experiment”

Will Smith’s team also uploaded this same video to Instagram and Facebook, where it looks considerably better than the copy on YouTube, without the smeary sheen of uncanny detail.

I put them side-by-side below. Try going full-screen and pause at any point to see the difference. The Instagram footage is noticeably better throughout, though some of the audience clips still have issues.

It turns out that for the last two months, YouTube has been quietly experimenting with post-processing YouTube Shorts videos, unblurring and denoising them with often-unpleasant results.

I first heard about this ten days ago, when guitarist Rhett Shull posted a great video about the issue, which now has over 700k views.

Five days ago, YouTube finally confirmed it was happening. YouTube’s Creator Liaison Rene Ritchie posted on X about the experiment.

In a followup reply, Ritchie clarified the difference, as he saw it:

GenAI typically refers to technologies like transformers and large language models, which are relatively new. Upscaling typically refers to taking one resolution (like SD/480p) and making it look good at a higher resolution (like HD/1080p). This isn’t using GenAI or doing any upscaling. It’s using the kind of machine learning you experience with computational photography on smartphones, for example, and it’s not changing the resolution.
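For readers wondering what non-generative “unblur and denoise” filtering looks like in practice, here is a minimal pure-NumPy sketch of a classical unsharp mask, the textbook sharpening technique in the category Ritchie describes. The function names are mine, and this is an illustration of the category, not a claim about YouTube’s actual pipeline:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Average each pixel with its k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Classical sharpening: original + amount * (original - blurred).
    Purely deterministic filtering: it can only amplify contrast that is
    already in the pixels, unlike a generative model, which can invent
    detail that was never there."""
    img = img.astype(float)
    return np.clip(img + amount * (img - box_blur(img)), 0.0, 255.0)
```

The point of the distinction: a filter like this only exaggerates edges already present in the signal, while a diffusion model hallucinates plausible new texture, which is why faces come out looking “enhanced” in such an uncanny way.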

On Friday, Alex Reisner wrote about “YouTube’s Sneaky AI ‘Experiment’” in The Atlantic, and got another official statement from Google:

When I asked Google, YouTube’s parent company, about what’s happening to these videos, the spokesperson Allison Toh wrote, “We’re running an experiment on select YouTube Shorts that uses image enhancement technology to sharpen content. These enhancements are not done with generative AI.” But this is a tricky statement: “Generative AI” has no strict technical definition, and “image enhancement technology” could be anything. I asked for more detail about which technologies are being employed, and to what end. YouTube is “using traditional machine learning to unblur, denoise, and improve clarity in videos,” she told me.

I agree with Reisner that it’s likely that a diffusion model is at work here and Google is trying to split hairs over the definition of “generative AI” because of how divisive it is.

Will Smith’s Generated Videos

That explains why the entire YouTube Shorts video has that smeary look to it that isn’t present throughout the copy posted on Instagram, but both versions have those terrible audience shots with AI artifacts and garbled signage.

After looking at it, I believe that Will Smith’s team was using a generative video model, but not to create entirely new audience footage, like most people suspect.

Instead, they started with photos shot by their official tour photographers, and used those photos in Runway, Veo 3, or a similar image-to-video model to create a short animated clip suitable for a concert montage.

Let’s go back to the crowd photo from Paléo in Switzerland:

I believe this is the exact photo that the crowd shot in the video was generated with. Here it is as a two-frame animation, with the first frame from the AI video overlaid on the original photo.
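An overlay comparison like the one above can also be checked numerically. Here is a toy sketch, assuming the first frame of the clip and the candidate source photo have been loaded as same-sized grayscale NumPy arrays; the function name is mine:

```python
import numpy as np

def mean_abs_diff(frame: np.ndarray, photo: np.ndarray) -> float:
    """Mean absolute pixel difference between a video frame and a
    candidate source photo (same-sized grayscale arrays, 0-255 range).
    A score near zero suggests the frame was derived from the photo;
    two unrelated crowd shots would score far higher."""
    return float(np.abs(frame.astype(float) - photo.astype(float)).mean())
```

In practice you would need to align crop, scale, and exposure first, but a near-zero score on the first frame is exactly what you would expect if the clip was generated directly from the photo.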

Here’s another example. The photo below was taken at Ronquieres Festival in Belgium, posted to Will Smith’s Instagram three weeks ago.

And here’s the AI-generated clip that it was turned into.

Conclusion

Virtually all of the commenters on YouTube, Reddit, and X believe this was fake footage of fake fans, generated by Will Smith’s team to prop up a lackluster tour.

Like the faces in the video, the truth is blurry.

The crowds were real, but the videos were manipulated: first by Will Smith’s team, and then without asking, by YouTube.

We can debate the ethics of using an image-to-video model to animate photos in this way, but I think it’s meaningfully different than what most people were accusing Will Smith of doing here: using generative AI video to fake a sold-out crowd of passionate fans.

betajames
5 days ago
Michigan
1 public comment
istoner
5 days ago
Potentially an interesting case study to offer students who are convinced that AI can "polish" their final drafts to a sheen they could not themselves achieve
Saint Paul, MN, USA

James Dobson Is Dead, Was A Monster

1 Comment and 2 Shares

James Dobson was a nasty dude. He liked to beat children and dogs with a belt and to rain misery and punishment on the vulnerable; we know all of this about him because he said as much in public, repeatedly, over a long and rancid public life. He enlisted a whole bunch of Ideology—patriarchy, social conservatism, utterly fake upside-down Christianity—in service of those basic motivations, not only to justify his own appetite for and personal acts of sadism and domination, but to cast punishment and predation as far out into the world as he could manage. He studied psychology and the Bible so that he could borrow their authority and instrumentalize them to do widespread cruelty more effectively. He was oriented to evil, at vast scale, by continual lifelong choice. It was his calling, and he made it his job.

What a guy like James Dobson does, and what James Dobson did for his whole adult life, is offer people—white men primarily, but not exclusively—a rhetorical framework for doing evil and feeling good about it. Stand right here and look exactly there, he said, and psychology says it's OK for you to beat your children, that when they cry for more than two minutes of the beating, it is because they are bad and not because you are hurting them; you should beat them harder for crying until they stop. Stand right here and look exactly there, and tradition says your wife should have no will of her own. Stand right here and look exactly there, and love of country says society should press its boot onto the poor and marginalized and crush them until they die. Didn't you always hate them? Sure you did. Religion says right here that you are right to. He blew softly on a stupid and seething population's resentments, its will to power, its lust to punish those who complicate their desires by having lives of their own, and watched those appetites stick up like the hairs on your arm, or glow like charcoal in a fire. It feels good. He tempts you with the promise that every cruel, fearful, punitive impulse you have aligns with The Way Things Are Supposed To Be, and that it is even your grim duty to indulge them. In this respect, James Dobson was very much like Satan.



rocketo
10 days ago
rest in piss, jackass
seattle, wa
betajames
9 days ago
Michigan

“i fed chatgpt identification notes to see how it would handle–” well i fed the wise gnarled oak…

3 Shares

siberiantrap:

“i fed chatgpt identification notes to see how it would handle–” well i fed the wise gnarled oak that protects my village a sample of your blood to see how sie would handle you, and sie told me that the nitrogen of your corpse is unworthy of fertilizing hir sandy soil and showed me a vision of the grass withering and dying in your footsteps

betajames
12 days ago
Michigan

a word to my students

1 Share

[Image: Craiyon-generated Bowser wearing the Sorting Hat]

The first thing to know is that I don’t call it AI. When those of us in the humanities talk about “AI in education” what we almost always mean is “chat interfaces to large language models.” There are many other kinds of machine-learning endeavors but they’re not immediately relevant to most of us. And anyway, whether they’re “intelligent” is up for debate. So the word I’ll use here is “chatbot,” and the question is: What’s my policy? What do I think about your using chatbots for work in my class?

I’ll start to answer that by turning it around: Would my stated policy have any effect whatsoever on your actions? Pause and think about it for a moment: Would it?

For some of you the answer will be: No. And to you I say: thanks for the candor.

Others among you will reply: Yes. And probably you mean it … or think you mean it. But will your compliance survive a challenge? When you’re sitting around with friends and every single one of them except you is using a chatbot to get work done, will you be able to resist the temptation to join them? When they copy and paste and then head merrily out for tacos, will you stay in your room and grind? Maybe you will, once, or twice, or even three times, but … eventually…. I mean, come on: we all know how this story ends.

So let’s be clear about three things. The first is that if I make assignments which you can get chatbots to do for you, that’s what you’ll do. The second is that if I have a “no chatbot” policy and you use chatbots, you’re cheating. The third is that cheating is lying: it is saying (either implicitly or explicitly) that you’ve done something you have not done. You are claiming and presenting to me as your work what is not your work.

Now, this has several consequences, and one of them — if I don’t catch you — is that I will end up affirming that you have certain skills and abilities that you do not in fact have. Which makes me, however unintentionally, complicit in your lie. That reflects badly on me.

But that makes a problem for you, too, because sooner or later the time will come — perhaps in a job interview, or an interview for a place in a graduate program, or your second week in a new job that doesn’t have you in front of a computer all day — when your lack of the skills you claim to have will become evident, to your great embarrassment and frustration. You’re probably not worried about that now, because one of the most universal of human tendencies is — I use the technical term — Kicking The Can Down The Road. Almost all human beings will put off dealing with a problem if they possibly can; the only ones among us who don’t are those who have learned through painful experience the costs of can-kicking. (This is in fact one of the very few ways in which we Olds are superior to you Youngs: we’ve been there, we know.)

And then, you know, I’m a Christian, and I’ve read the parable of the talents. I want to see you multiply your gifts, not leave you exactly as you were when you came to my class, only with a little more experience in writing chatbot prompts. (Would a personal trainer be happy if you instructed a robot to do pull-ups and crunches for you? Would he think he had done his job?)

Perhaps the most worrisome consequence of this whole ridiculous circus in which (a) you’re trying not to get caught cheating and (b) your professors are trying to catch you cheating is how thoroughly dehumanizing it is to all of us. All of us end up acting like we’re in a video-game boss fight. Modern education, with its emphasis on credentialing and therefore on grades, is already dehumanizing: as my friend Tal Brewer from UVA says, we’re not teachers, we’re the Sorting Hat. The chatbot world makes all that crap so much worse. Now we’re Bowser and the Sorting Hat.

But me, I just want to help you to be a better reader, a better writer, and a better thinker. If you can learn these skills, and the habits that enable them, I believe you will be a better person — not in every way, maybe not even in the ways that matter most, but in significant ways. You’ll be a little more alert, a little more aware; you’ll make more nuanced judgments and will be able to express those judgments more clearly. You may even increase your self-knowledge. I want to do what I can to encourage those virtues. 

I don’t want to be trying to outwit you and avoid being outwitted. I don’t want to enable your can-kicking. I don’t want to affirm that you have skills you don’t have. I don’t want to have to say, at the end of the day, that the only thing I taught you was better prompt engineering. Above all, I don’t want to make assignments that become a proximate occasion of sin for you: I don’t want to be your tempter. So I simply must — I am obliged as a teacher and a Christian — keep the chatbots out of our class, as best I can. If you pray, please pray for me.

betajames
12 days ago
Michigan

Our perceptual relationship with the world works because we trust prior stories | Umberto Eco

4 Shares

Our perceptual relationship with the world works because we trust prior stories. We could not fully perceive a tree if we did not know (because others have told us) that it is the product of a long growth process and that it does not grow overnight. This certainty is part of our “understanding” that a tree is a tree, and not a flower. We accept a story that our ancestors have handed down to us as being true, even though today we call these ancestors scientists.

No one lives in the immediate present; we link things and events thanks to the adhesive function of memory, both personal and collective (history and myth). We rely upon a previous tale when, in saying “I,” we do not question that we are the natural continuation of an individual who (according to our parents or the registry office) was born at that precise time, on that precise day, in that precise year, and in that precise place. Living with two memories (our individual memory, which enables us to relate what we did yesterday, and the collective memory, which tells us when and where our mother was born), we often tend to confuse them, as if we had witnessed the birth of our mother (and also Julius Caesar’s) in the same way we “witnessed” the scenes of our own past experiences.

This tangle of individual and collective memory prolongs our life, by extending it back through time, and appears to us as a promise of immortality. When we partake of this collective memory (through the tales of our elders or through books), we are like Borges gazing at the magical Aleph—the point that contains the entire universe: in the course of our lifetime we can, in a way, shiver along with Napoleon as a sudden gust of cold wind sweeps over Saint Helena, rejoice with Henry V over the victory at Agincourt, and suffer with Caesar as a result of Brutus’ betrayal.

And so it is easy to understand why fiction fascinates us so. It offers us the opportunity to employ limitlessly our faculties for perceiving the world and reconstructing the past. Fiction has the same function that games have. In playing, children learn to live, because they simulate situations in which they may find themselves as adults. And it is through fiction that we adults train our ability to structure our past and present experience.

From Umberto Eco’s lecture “Fictional Protocols,” part of Six Walks in the Fictional Woods.



betajames
13 days ago
Michigan
rocketo
14 days ago
seattle, wa