American drivers are now even more distracted by their phones. Pedestrian deaths are soaring.

A driver is captured using a phone behind the wheel in New York in this 2016 photo. The problem of distracted drivers has only gotten worse, one set of data from millions of drivers shows. | Spencer Platt/Getty Images

During the pandemic, distracted driving increased, and it hasn’t gone down since.

Until relatively recently, good data on the problem of distracted driving has been hard to find. The government estimates that 3,522 people died because of it in 2021, but experts say the official number probably undercounts deaths significantly, in part because police are rarely able to definitively prove that a driver was distracted right before a crash.

In the last few years, though, the data on distracted driving has gotten better. Cambridge Mobile Telematics is a company that partners with major insurance companies to offer downloadable apps that drivers can use to save money on their rates. Via the apps, Cambridge Mobile Telematics (CMT) uses mobile phone sensors to measure driving behavior, including whether a person is speeding, holding their phone, or interacting with an unlocked screen while driving (the company says it doesn’t collect information on what the drivers are doing on their phones). Its work gives the company insight into the driving behaviors of more than 10 million people.
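
CMT hasn’t said exactly how its apps turn raw sensor readings into the distraction metrics cited below, but the aggregation step is easy to picture. The Python sketch that follows is a hypothetical illustration, not CMT’s code: the sample fields, the one-sample-per-second model, and the 50 mph cutoff are all assumptions drawn from the behaviors the article describes.

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    t: float             # seconds since trip start
    speed_mph: float     # derived from GPS (assumed)
    phone_moving: bool   # inferred from accelerometer/gyroscope (assumed)
    screen_active: bool  # unlocked screen being interacted with (assumed)

def summarize_trip(samples: list[SensorSample], hz: float = 1.0) -> dict:
    """Aggregate per-sample flags into per-trip metrics like those the
    article cites: phone motion time, screen interaction time, and
    phone motion at speeds over 50 mph. Illustrative only."""
    if not samples:
        return {}
    motion_sec = sum(s.phone_moving for s in samples) / hz
    screen_sec = sum(s.screen_active for s in samples) / hz
    fast_motion_sec = sum(
        s.phone_moving and s.speed_mph > 50 for s in samples) / hz
    hours = max((samples[-1].t - samples[0].t) / 3600, 1e-9)
    return {
        "phone_motion_sec": motion_sec,
        "screen_interaction_sec": screen_sec,
        "motion_over_50mph_sec": fast_motion_sec,
        "phone_motion_sec_per_hour": motion_sec / hours,
    }
```

A production pipeline would also need to smooth sensor noise and distinguish a phone mounted for navigation from one held in the hand, which is presumably where most of the real engineering lives.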

CMT recently analyzed driver behavior during millions of car trips. What it found should be troubling to anyone who uses a road in the US: During the pandemic, American drivers got even more distracted by their phones while driving. The amount of distracted driving hasn’t receded, even as life has mostly stabilized.

The company found that both phone motion and screen interaction while driving went up roughly 20 percent between 2020 and 2022. “By almost every metric CMT measures, distracted driving is more present than ever on US roadways. Drivers are spending more time using their phones while driving and doing it on more trips. Drivers interacted with their phones on nearly 58% of trips in 2022,” a recent report by the company concludes. More than a third of that phone motion distraction happens at over 50 mph.

We’re also spending nearly three times as much time distracted by our phones as drivers in the United Kingdom and several other European countries. US drivers spent an average of 2 minutes and 11 seconds per hour on their phones while driving, compared to 44 seconds per hour for UK drivers, CMT found. The company compared the driving behaviors of US and European drivers because road fatalities in the United States surged during the pandemic while European fatalities did not. In 2020, 38,824 people died on US roads. In 2021, that number rose to 42,915, and more pedestrians were killed than in any year in the previous four decades. In 2022, overall deaths stayed high at around 42,795, among them 7,508 pedestrians.

The United States is increasingly an outlier when it comes to traffic fatalities, with rates 50 percent higher than those of its peer countries. The CMT findings suggest that the way Americans use their phones while driving could be one important reason why, along with road and vehicle design and a lack of consistent traffic safety enforcement.

“The way individuals are driving their vehicles in the US is distinct from the way they’re driving in Europe,” says Ryan McMahon, senior vice president of strategy for Cambridge Mobile Telematics. That extra time Americans are spending on their phones while driving increases risk: In more than a third of crashes the company analyzed, McMahon says, the driver had their phone in their hand a minute prior to collision.

The large increase in risky driving behaviors in the US started basically as soon as the pandemic began. “We saw this incredible increase in distracted driving. You could almost track it by the day schools started to shut down,” McMahon says. “When mobility changed, risk increased dramatically.”

The individual and collective consequences of our cellphone compulsions are stark: The most distracted drivers are over 240 percent more likely to crash than the safest drivers, according to the report.

The report also notes how the rise of smartphone use roughly corresponds to the rise in pedestrian fatalities: About 4,600 people were killed while walking in 2007, the year the iPhone was introduced. By 2021, with 85 percent of Americans owning smartphones, the number rose to 7,485.

Why American drivers got more distracted during the pandemic

McMahon and other experts on distracted driving have some theories. Culture may play a role: Americans have shifted to working from home, they work longer hours and take less vacation, and they feel expected to be available to colleagues even while driving, a notion many Europeans would scoff at. (As someone currently living in Europe after a lifetime in the United States, my highly subjective observation is that people really do seem less work-crazed in Europe. And while phone-checking while driving definitely happens, I see it a lot less than I do in the US.)

“I do think this notion of work in our country, and [the idea that] you have to be available 24-7 has also exacerbated it,” says Pam Shadel Fischer, a senior director at the Governors Highway Safety Association, who’s been working for decades to reduce risky and impaired driving. “It’s absolutely a cultural issue.”

The most compelling theories, though, are structural and psychological. In the United States, infrastructure is built around cars, and Americans generally have fewer public transportation options than Europeans do. They spend more time in their cars, commuting, doing chores, and taking children to school than people in European countries, and they are far more likely to make these daily trips on roads that are straight, flat, and built for easy car travel: a perfect recipe for boredom.

While it’s difficult to generalize too much about Europe, anyone who’s lived or visited there can attest to the differences inherent in roads built before the age of the auto, where pedestrians are considered important road users. The road design, the topography, and the presence of people on foot demand drivers’ attention. “You see more distraction happening when people are more familiar with roads,” McMahon says.

The type of car may also matter a lot. In the US, the CMT analysis notes, 94 percent of car drivers said they were driving cars with automatic transmissions. Only 33 percent of UK drivers said the same. Manual shifting requires more active engagement with the vehicle.

It’s also possible that Americans are getting more comfortable with risk precisely because their vehicles keep getting safer for the people driving them (if not for people outside of them). “We’ve got all these safety features,” Shadel Fischer says. They convince drivers that “‘everything is fine! The car will take care of me, no big deal.’ They overestimate what those safety features are designed to do.”

Another challenge is that there are frequently no negative consequences for using your phone while driving. It’s easy for people to do because they’ve done it before with no problem — until it causes a crash.

What could make drivers put down their phones?

A tech industry giant genuinely interested in improving lives and mitigating the harm and disruption caused by its products could find a way to disable distracting devices, leaving them available only for, say, GPS and emergency calls. “I’m convinced that the solution is the technology,” Shadel Fischer says. “We shouldn’t have to do anything.”

Apple introduced a feature to the iPhone in 2017 that automatically puts the phone in “do not disturb” mode while driving, but it’s extremely easy to turn off. So we are left mostly with interventions into individual behavior.

On the policy front, activists like Jennifer Smith, whose mother was killed in Oklahoma in 2008 by a driver talking on his cellphone, have been working with states to pass laws to end distracted driving. Forty-four states have some sort of distracted driving laws on the books, and 27 states have bans on hand-held cellphone use.

In the distracted driving report, Cambridge Mobile Telematics looked at how driver behavior changed after a state passed a “hands-free” law and found a 13 percent reduction in phone motion while driving in the first three months after a law took effect. But those gains tended to diminish over time, and states vary widely in both public awareness of the laws and traffic enforcement, which declined in some states during the pandemic. Enforcement is also hard to do well, because it relies on officers witnessing the distracted driving and applying the laws equitably; without it, or without high public awareness, good compliance is difficult to sustain.

That’s why policymakers are taking a multi-pronged approach to the issue, trying to find ways to educate the public and make the laws enforceable. “It will take a long, sustained effort to change driver behavior if we want to have fewer deaths in this country,” says Michelle May, manager of the Highway Safety Program at Ohio’s Department of Transportation.

Ohio’s “hands-free” law went into effect last year; since then, May says, the state has used telematics data and tracked an 8.1 percent decline in driver distractions. But May expects reducing phone use while driving to be a long-term project, akin to the campaigns against drunk driving and for seatbelt use.

Financial incentives can also help. Car insurance rates have skyrocketed in recent years, becoming a notable driver of inflation and adding to the financial burdens of car ownership, which fall disproportionately on low-income and working-class Americans. The use of telematics-based apps by insurance companies offers drivers an opportunity to save money on their rates. Research into the use of the apps suggests that drivers who regularly receive feedback on their driving habits tend to use their phones less while driving. The data can also help state departments of transportation locate areas with more distracted driving, which could in turn inform road design. “We’ve found that simple things like using paint to narrow the lanes gives people the illusion that they’ve got to slow down,” Shadel Fischer says. “The road design does play a role in how we act.”

In other words, there are ways to address the problem, but they rely heavily on a bunch of solutions working with one another, at a time when our road safety system appears to be breaking down. “Every single piece has to work in concert with the other or it won’t be successful because we’re up against such a huge scale of a behavioral problem,” Smith says.

The stakes couldn’t be higher, though, and getting this right will undoubtedly take some combination of policy intervention, industry investment, and a willingness among drivers to put their phones down and pay attention.

Until then, the status quo is an advanced country with incredibly high rates of road death, where, over time, almost everyone will know someone who lost their life in a car crash. “Somehow, we’re just accepting 42,000 to 45,000 people in the US dying in this manner every year,” McMahon says. “It’s preventable.”


Literacy crisis in college students: Essay from a professor on students who don’t read.


Recent years have seen successive waves of book bans in Republican-controlled states, aimed at pulling any text with “woke” themes from classrooms and library shelves. Though the results sometimes seem farcical, as with the banning of Art Spiegelman’s Maus due to its inclusion of “cuss words” and explicit rodent nudity, the book-banning agenda is no laughing matter. Motivated by bigotry, it has already done demonstrable harm and promises to do more. But at the same time, the appropriate response is, in principle, simple. Named individuals have advanced explicit policies with clear goals and outcomes, and we can replace those individuals with people who want to reverse those policies. That is already beginning to happen in many places, and I hope those successes will continue until every banned book is restored.

If and when that happens, however, we will not be able to declare victory quite yet. Defeating the open conspiracy to deprive students of physical access to books will do little to counteract the more diffuse confluence of forces that are depriving students of the skills needed to meaningfully engage with those books in the first place. As a college educator, I am confronted daily with the results of that conspiracy-without-conspirators. I have been teaching in small liberal arts colleges for over 15 years now, and in the past five years, it’s as though someone flipped a switch. For most of my career, I assigned around 30 pages of reading per class meeting as a baseline expectation—sometimes scaling up for purely expository readings or pulling back for more difficult texts. (No human being can read 30 pages of Hegel in one sitting, for example.) Now students are intimidated by anything over 10 pages and seem to walk away from readings of as little as 20 pages with no real understanding. Even smart and motivated students struggle to do more with written texts than extract decontextualized takeaways. Considerable class time is taken up simply establishing what happened in a story or the basic steps of an argument—skills I used to be able to take for granted.

Since this development very directly affects my ability to do my job as I understand it, I talk about it a lot. And when I talk about it with nonacademics, certain predictable responses inevitably arise, all questioning the reality of the trend I describe. Hasn’t every generation felt that the younger cohort is going to hell in a handbasket? Haven’t professors always complained that educators at earlier levels are not adequately equipping their students? And haven’t students from time immemorial skipped the readings?

The response of my fellow academics, however, reassures me that I’m not simply indulging in intergenerational grousing. Anecdotally, I have literally never met a professor who did not share my experience. Professors are also discussing the issue in academic trade publications, from a variety of perspectives. What we almost all seem to agree on is that we are facing new obstacles in structuring and delivering our courses, requiring us to ratchet down expectations in the face of a ratcheting down of preparation. Yes, there were always students who skipped the readings, but we are in new territory when even highly motivated honors students struggle to grasp the basic argument of a 20-page article. Yes, professors never feel satisfied that high school teachers have done enough, but not every generation of professors has had to deal with the fallout of No Child Left Behind and Common Core. Finally, yes, every generation thinks the younger generation is failing to make the grade—except for the current cohort of professors, who are by and large more invested in their students’ success and mental health and more responsive to student needs than any group of educators in human history. We are not complaining about our students. We are complaining about what has been taken from them.

If we ask what has caused this change, there are some obvious culprits. The first is the same thing that has taken away almost everyone’s ability to focus—the ubiquitous smartphone. Even as a career academic who studies the Quran in Arabic for fun, I have noticed my reading endurance flagging. I once found myself boasting at a faculty meeting that I had read through my entire hourlong train ride without looking at my phone. My colleagues agreed this was a major feat, one they had not achieved recently. Even if I rarely attain that high level of focus, though, I am able to “turn it on” when demanded, for instance to plow through a big novel during a holiday break. That’s because I was able to develop and practice those skills of extended concentration and attentive reading before the intervention of the smartphone. For children who were raised with smartphones, by contrast, that foundation is missing. It is probably no coincidence that the iPhone itself, originally released in 2007, is approaching college age, meaning that professors are increasingly dealing with students who would have become addicted to the dopamine hit of the omnipresent screen long before they were introduced to the more subtle pleasures of the page.

The second go-to explanation is the massive disruption of school closures during COVID-19. There is still some debate about the necessity of those measures, but what is not up for debate any longer is the very real learning loss that students suffered at every level. The impact will inevitably continue to be felt for the next decade or more, until the last cohort affected by the mass “pivot to online” finally graduates. I doubt that the pandemic closures were the decisive factor in themselves, however. Not only did the marked decline in reading resilience start before the pandemic, but the students I am seeing would have already been in high school during the school closures. Hence they would be better equipped to get something out of the online format and, more importantly, their basic reading competence would have already been established.

Less discussed than these broader cultural trends over which educators have little control are the major changes in reading pedagogy that have occurred in recent decades—some motivated by the ever-increasing demand to “teach to the test” and some by fads coming out of schools of education. In the latter category is the widely discussed decline in phonics education in favor of the “balanced literacy” approach advocated by education expert Lucy Calkins (who has more recently come to accept the need for more phonics instruction). I started to see the results of this ill-advised change several years ago, when students abruptly stopped attempting to sound out unfamiliar words and instead paused until they recognized the whole word as a unit. (In a recent class session, a smart, capable student was caught short by the word circumstances when reading a text out loud.) The result of this vibes-based literacy is that students never attain genuine fluency in reading. Even aside from the impact of smartphones, their experience of reading is constantly interrupted by their intentionally cultivated inability to process unfamiliar words.

For all the flaws of the balanced literacy method, it was presumably implemented by people who thought it would help. It is hard to see a similar motivation in the growing trend toward assigning students only the kind of short passages that can be included in a standardized test. Due in part to changes driven by the infamous Common Core standards, teachers now have to fight to assign their students longer readings, much less entire books, because those activities won’t feed directly into students getting higher test scores, which leads to schools getting more funding. The emphasis on standardized tests was always a distraction at best, but we have reached the point where it is actively cannibalizing students’ educational experience—an outcome no one intended or planned, and for which there is no possible justification.

We can’t go back in time and do the pandemic differently at this point, nor is there any realistic path to putting the smartphone genie back in the bottle. (Though I will note that we as a society do at least attempt to keep other addictive products out of the hands of children.) But I have to think that we can, at the very least, stop actively preventing young people from developing the ability to follow extended narratives and arguments in the classroom. Regardless of their profession or ultimate educational level, they will need those skills. The world is a complicated place. People—their histories and identities, their institutions and work processes, their fears and desires—are simply too complex to be captured in a worksheet with a paragraph and some reading comprehension questions. Large-scale prose writing is the best medium we have for capturing that complexity, and the education system should not be in the business of keeping students from learning how to engage effectively with it.

This is a matter not of snobbery, but of basic justice. I recognize that not everyone centers their lives on books as much as a humanities professor does. I think they’re missing out, but they’re adults and they can choose how to spend their time. What’s happening with the current generation is not that they are simply choosing TikTok over Jane Austen. They are being deprived of the ability to choose—for no real reason or benefit. We can and must stop perpetrating this crime on our young people.


The Verge’s Mia Sato on the rise of AI-generated obituary spam

scumbag SEOs turning personal tragedies into cheap hallucinated content for easy ad revenue

The Information Environment: Toward a Deeper Enshittification Thesis


This morning I published an article on Slate about the decline of reading comprehension and resilience among college-age students. In the piece, I try to defuse the knee-jerk reaction that I must be indulging in a typical “kids these days” criticism before assessing the most commonly discussed potential causes of this phenomenon (smartphones and covid lockdowns) and concluding that changes in reading pedagogy are the primary culprit. Even within just a few hours, the response has been gratifyingly positive — meaning that I have been basking in the repeated social media dopamine hits rather than doing what I intended to do once the article was published: follow up with a broader reflection on my concerns about the degradation of people’s ability to understand and engage with the world around them in a realistic way.

Cory Doctorow’s enshittification thesis has been getting a lot of attention lately, and rightly so. As Google in particular cannibalizes its own flagship product, simple access to information has declined precipitously. But beyond the asset-stripping enshittification Doctorow identifies, we need to acknowledge that there was another, broader enshittification of our relationship with information already implicit in the shift to the “information age.” A key canary in the coal mine for me here is Google Books, on which I rely very heavily as the translator of a philosopher who cites 20 million random things in every one of his books. One persistent annoyance is that when I search for a book’s title and then click on the book, Google Books initially displays search results for the book’s title, within the book itself. We could dismiss this as a dumb glitch, but I think it’s symptomatic, because it shows the designers of the product don’t understand that people would use a database of every book that ever existed to search for books, nor do they understand how people use and interact with books (for instance, by scanning the table of contents or index).

In short, Google Books assumes that what one wants out of their huge pile of books is not books, but isolated strings of information. That’s the same assumption that stands behind their disastrous attempt to revamp their search engine so that it doesn’t take you primarily to a website, but instead tries to directly present you with the answer. And arguably the culmination of that is ChatGPT, which promises to give you exactly the information you’re looking for in an unobtrusive “neutral” prose style. Much as I hate ChatGPT — and I do hate it with all my heart, unconditionally, unchangeably, eternally — I do get why that fantasy is attractive. But it is a fantasy, because as the man says, there is nothing outside the text. There is no such thing as the raw information devoid of presentation and context. We can’t get at that raw information, and we certainly can’t program computers to do so, because it does not exist. It is a fantasy, and it is increasingly a willful lie.

Our interaction with information has also been enshittified in more visceral ways. The internet may have given us access to the sum of human knowledge, but it gave it to us in an actively user-hostile form. Reading on the screen sucks. It actively injures our eyes. And there is no one alive who doesn’t realize that their screen reading is much less attentive and rigorous than their print reading. There are studies that show this, but I won’t insult your intelligence by linking them (if I could even find them on Google anyway). We all know it. Screen reading is inferior to print reading, even before we add in the distraction factor. We don’t remember what we read, and we have no tactile memory of where to find specific bits (“I know it’s on the lower left-hand page….”).

Print was a perfected technology, an unsurpassable way of sharing information and ideas and stories — and we are all in the process of throwing it away for something we know is worse, out of inertia and laziness and cheapness and convenience. Think, for instance, of the difference between a printed newspaper page and a website. Even before we factor in the annoying pop-ups, etc., the printed paper is a much more dynamic and flexible format. You don’t have to click through to each article — at least the beginning is right there, and it has been carefully structured so that the amount of time you spend reading will broadly reward your particular level of interest. If you just want the overall scenario, read a couple paragraphs. If you’re deeply invested, stick with it to the bitter end. We complain so much about people only reading the headlines rather than the article, but the reason that occurs is that you need to make a firm, specific commitment to that article, which may festoon your screen with popovers and demands to sign up for a newsletter and solicitation of donations before you even read one word. With the newspaper, it was all right there. If you had any interest in the article’s content, you would naturally keep reading; it didn’t take some extra choice or effort.

I could say the same for the simple act of flipping through a magazine (an unparalleled browsing experience that is not replicable through the ill-named web “browser”) or even paging through a book to get a sense of what’s in it. All those affordances of the printed text — together with the pleasure of its distinctive heft in your hand, the precisely calibrated tint of the paper, that new book smell, even the sound of opening a book for the first time — are sacrificed in favor of a format that assaults our senses without making a durable impression. It’s a pure lose-lose. That screen format gives rise naturally, even necessarily, to the decontextualized tweet or the brief video — because that’s all our small screens can accommodate, and it’s all we can stand to focus on at any given time.

The near-total context collapse we are now experiencing was already baked into the workings of the Mosaic web browser and the dream of the “information age” that it encapsulates. Information does want to be free, as it turns out — free of context, free of pleasure, free of empathy, even free of comprehension. The effort to just cut to the chase and give us the information has actively destroyed the conditions for understanding and using that information in an intelligent way. The Dewey Decimal System and the Reader’s Guide to Periodical Literature gave us access to information in a much more authentic way than any means commonly available now — and indeed, even library databases are being cannibalized from within by the informatization of search (so that reviews of a book come up, sometimes in the dozens, before the book itself, for instance).

Computers cannot give us the information because they cannot, and likely never will, understand meaning and context. Only we can — though we must take the time to learn and cultivate the skills necessary, the habits of mind and body that allow us to engage with the marvellous and unsurpassable human technology known as the printed word.


It Was a Pleasure Just to Watch Carl Weathers Move


Carl Weathers’ physicality was so intense that when he ran, jumped, or fought, the borders of the screen seemed to tremble. The pleasure he took in performing was infectious, whether he was throwing body-blows as heavyweight champ Apollo Creed opposite Sylvester Stallone in the first four Rocky movies, clinging to the roof of a careening taxicab in Action Jackson, incinerating acres of Central American jungle in the sci-fi thriller Predator, or playing himself on Arrested Development as a man so miserly that he buys all his cars at police auctions, volunteers to get bumped from flights to collect refund vouchers, and never throws out leftovers. (If you have even a shred of meat, he advises his student Tobias Fünke, “Baby, you got a stew goin’!”)

Weathers’ story, like that of so many Black actors, is unfortunately a familiar one: he did everything that white action stars of the same era did, just as well and arguably better, but never got the same opportunities or reaped comparable rewards. But he survived and persevered, branching out into directing (including episodes of Silk Stalkings and The Mandalorian, on which he played ex-magistrate turned bounty hunter Greef Karga) and purely comedic roles (including the golf pro Chubbs in the Adam Sandler hit Happy Gilmore) that showcased his stopwatch timing.

He was a day laborer’s son who was born and raised in New Orleans. Forged by American college and pro football, he was an athlete turned actor in the mold of many such film performers, including two Burts, Reynolds and Lancaster (the former was also a football player, the latter a trapeze artist and acrobat). (Weathers had a supporting role in the Reynolds football picture Semi-Tough.) Well into his seventies, he was a regular sight at his local gym, working out with his pal Dolph Lundgren or riding a treadmill for hours while reading a newspaper from first page to last. Weathers had a black belt in Shotokan karate and became an avid golfer. He got his first girlfriend through football, he said, and hoped to make it to the pros and stay there. It didn’t happen (he went undrafted in 1970, right after college, and only briefly played with the Oakland Raiders as a free agent). But it was for the best, because he’d always possessed an instinct for creative expression that was not valued by his peers growing up.

“It was not a happy time,” Weathers told The Washington Post in a 1979 profile. “In fact, when I’m feeling sorry for myself, I remember it as a perfectly miserable time. I think I was consciously trying to escape from a very early age. In the streets they’ll kill you, literally. It was my curse to be a sensitive kid. Certain things I was drawn to — like doing a little acting or singing in the choir — had no credibility on the street. And I was too ignorant and intimidated to try to explain why they seemed valuable to me.”

It would prove to be a pleasure just to watch Weathers move, whether he was taking a gun away from a minor thug in the intriguingly nasty 1992 exploitation picture Hurricane Smith or making you believe that after his Predator character’s gun-arm was blasted off by an extraterrestrial’s raygun, his trigger finger would remain clenched and the weapon would continue firing (a shot that Arrested Development re-used to great comedic effect, pairing it with Tobias’ dialogue about Weathers’ acting). However genial and upbeat he was on sets (everyone seemed to adore him), the actor’s wattage was so bright it practically burned a hole in the screen. Weathers ultimately became so beloved, via the fan convention circuit as well as his acting and filmmaking, that he arguably finished his career in the 2020s more famous than he’d been at the previous peak of his fame in the mid-1980s, when the Rocky franchise shifted its Hero’s Journey.

The heart of the original Rocky’s publicity campaign conflated the Cinderella-like story of Philadelphia brawler Rocky Balboa (modeled on Chuck Wepner, the “Bayonne Bleeder” who went 15 rounds with Muhammad Ali in 1975) with that of the man who created him. Stallone turned down a million-dollar offer for the first Rocky script, which would have handed the role to an established actor, because he’d written it to show his range and get out of playing hoodlums and doing porn. It was a bet on his own talent that paid off beyond anyone’s expectations. Pushing against the downbeat trends of American cinema in the ‘70s, and channeling white ethnic anxiety after the civil rights movement (Apollo started out as a reactionary caricature of Muhammad Ali, losing the lived experience of racism but keeping the mouthiness), Rocky became a global megahit that let Stallone write, direct, produce and star in many more films and franchises.

No such boost for Weathers. For almost a decade after the original Rocky, Weathers had to be content with playing Apollo Creed again and again, despite being a vastly superior athlete and a more innately charismatic performer than Stallone. Apollo was a villain in the first Rocky film, a more nuanced antagonist in the second, a best friend and guru in the third, and a pretext for revenge and the expiation of guilt in the fourth. But Apollo was always in the margins of the saga, even though Black fighters were at the center of the sport in reality, and had been for decades. One of Roger Ebert’s most insightful bits of feature writing was his Chicago Sun-Times piece about watching Rocky II with Ali. The champ praised Rocky and its sequel’s psychological insights into a boxer’s mind and heart, but ridiculed their depiction of training and fighting, and generally treated the franchise’s popularity with the world-weary resignation of an iconic Black boxer whose own amazing story was not represented on film until Ali played himself in 1977’s The Greatest, released a year after Rocky.

“For the black man to come out superior would be against America’s teachings,” Ali told Ebert. “I have been so great in boxing they had to create an image like Rocky, a white image on the screen, to counteract my image in the ring. America has to have its white images, no matter where it gets them.”

Stallone’s writing of Apollo became more nuanced as the series stretched on, which Stallone has said was a direct result of getting to know Weathers and appreciating his subtlety. Apollo transformed from heel to ally in Rocky III (to annihilate Mr. T’s Clubber Lang, a Mohawked, up-from-the-streets monstrosity who epitomized Reagan-era fears of a Black underclass), only to get his brains pulped in Rocky IV, in large part because he couldn’t stand being away from the spotlight that fed his ego. By then, New Yorker film critic Pauline Kael had singled out Weathers as the secret MVP of the series, writing that he gave “a likable, unaffected performance” in Rocky III and possessed “a physique that makes Rocky look like a lump.”

The character didn’t grow and improve in the way that Rocky did, and he was never “likable” in the same easy-to-digest manner. But the tragedy of Apollo’s exit — martyred at the hands of glowering Soviet boxer Ivan Drago (Lundgren) in a Cold War perversion of Joe Louis’ matchup with German champ Max Schmeling — was thrilling in a more primordial way, like a comic book answer to Shakespeare or the Greeks. Weathers’ most powerful reactive closeup in the series comes near the end of the fight in Rocky IV, when his wife cries out from the stands, realizing that her husband is overmatched, and Apollo stares back at her helplessly, a prisoner of his neediness and machismo. The same impulses that drove him to scale the heights of his profession are what doomed him.

He may not have been the main character of the Rocky films, but he was the star, illuminating the series even after he’d ceased to be a part of it. Thirty years after Apollo’s death came Creed, the first of three sequels starring Michael B. Jordan as Apollo’s son Adonis “Donnie” Creed. Donnie is an extension of Apollo and Rocky but also perhaps an empathetic sketch of Carl Weathers, the bruiser who was educated and ambitious and curious about the world but had to hide it from people he grew up with. Adonis seems as much of a reincarnation and reinvention of Apollo by way of Weathers as a genetic successor — the alternate universe Apollo that we might’ve gotten way back when, in a country that was willing to put a Black fighter at the center of a franchise.

Reviewing the original Creed, Odie Henderson wrote about how director Ryan Coogler got “an achingly beautiful and subtle commentary out of brief shots of young, brown faces looking at and admiring Donnie as he trains. Like Rocky, Donnie may be a hero for all races, but these shots of young Black children add an extra dimension by showing us rare instances of African-American admiration of a hero onscreen.” Donnie’s dad never got those kinds of adoring closeups, except very briefly during the rematch between Rocky and Apollo — and there, the attention was more about the stakes for both characters than a larger sense of empowerment. The Creed movies give Apollo the adoration he craved after that first almost-defeat by Rocky, and that he probably earned outright when he was younger, before the Italian Stallion entered his life and the story of the franchise officially began.

The inspirational peaks of the first Creed are its training montages, as in the other Rocky pictures; but the most haunting moment of characterization is fleeting and eerily quiet: a battered Donnie turns on a wall projector in his apartment, calls up video of one of his dad’s fights on YouTube, and shadow-boxes with him, briefly bringing Apollo back to life, affirming him as the star of the series, and dramatizing Donnie’s anxiety about living up to the achievements of the dad he never knew. Much of Creed and its first sequel Creed II (which brings back Drago and gives him a boxer son who fights Donnie twice) amounts to a reapportionment of pop culture glory for Weathers and a reclamation of Apollo’s importance not just to the series as a whole, but to generations of moviegoers who wanted more representation than Hollywood was willing to provide. (The third Creed omitted Donnie’s coach and “unc” Rocky Balboa entirely, achieving narrative autonomy for the Creed name and annoying Stallone.)

Part of what makes Creed so touching and unexpected is how it certifies Rocky’s matches with Apollo as fabulous flukes. Coogler even gives the Philly palooka stray bits of dialogue confirming that he always knew Apollo was the superior athlete. The final scene of Rocky III shows Rocky and Apollo slipping into a gym late at night to have an off-the-books third fight to determine who the heavyweight champion really is. The outcome isn’t shown, even though the freeze-frame of the boxers was turned into a LeRoy Neiman painting. But 35 years later, when Donnie asks Rocky who won, Rocky replies, simply, “he did.”


We Need Your Email Address


AI stealing our work. The collapse of social networks. The need to pay journalists to produce impactful journalism. Here is why we are asking for your email address to read 404 Media.
