
2 May 2023

AI Is Eating the World

JONATHAN V. LAST

In a post-AI world will OnlyFans even need real girls? (Shutterstock)

Every week I highlight three newsletters.

If you find value in this project, do two things for me: (1) Hit the Like button, and (2) Share this with someone.

Most of what we do in Bulwark+ is only for our members, but this email will always be free for everyone. Sign up to get it now. (Just select the “free” option at the bottom.)

1. Social Media News

Parker Molloy has a long and very smart post about the end of an era of journalism:

Facebook used to be a great place to get referral traffic; Twitter, not so much. Still, it’s only gotten worse as the platforms underwent the enshittification process.

In just the past few weeks, Twitter’s Elon Musk stripped journalists and news outlets of verification badges in an effort to turn the site into a pay-to-play hellhole. Verification used to mean that the person posting something was who they claimed to be. When my account was verified in 2015 or so, it involved me having to send a scan of my driver’s license to Twitter. Now, anyone willing to pay Musk $8 per month gets "verified" (they do not actually verify your identity).

For whatever reason, Twitter’s Elon Musk has decided to implement platform policies that punish users for linking to outside websites (like this newsletter, for instance!) in what will ultimately be a doomed effort to turn Twitter into an “everything” app where content simply lives on Twitter. That… is not what journalists/bloggers/content creators/whatever want.

This isn’t about Musk Twitter so much as it’s about the referral-traffic economic model of journalism.[1] For the last 15 years or so, the model for media companies has been to drive enormous amounts of traffic to individual stories via referrals from social media (and search). That model was never sustainable. It is currently dying.

Also: It’s about to get a lot worse. Let me give you an even darker view of the future of . . . well, not journalism, but content.

Because forget the end of social media news. AI is going to screw up the entire ecosystem of content.

In the same way that some giant portion of the views and comments on YouTube are generated by bots, some large portion of the internet that purports to be “news” is generated by content mills.

These mills are fairly sophisticated; certainly they’re more sophisticated than most media organizations. A newspaper sends out a reporter, who writes a story that is then edited, published, and finally promoted in an attempt to find an audience.

A content mill identifies a trending topic, writes a piece of content designed to produce clicks, optimizes it for both search and social, then A/B tests several versions.

The most labor-intensive step for the content mill is the content generation itself. This doesn’t take as much effort as reporting, or writing a real story. But it still requires a human sitting at a keyboard.

Well, not no more it don’t!

AI makes it possible for content mills to create a nearly infinite amount of content, which other machines then optimize, which still other programs A/B test. Automatically.

For now, the only thing a human has to do is identify the hot topic, plug in the prompts, and then do some copy-pasting. But that’ll change soon enough. Soon the AIs will be able to identify the most promising trending topics and write the optimized copy themselves.
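To make that loop concrete, here’s a minimal sketch, in Python, of what a fully automated content mill might look like. Nothing here is real code from any actual mill; the topics, the generate_article stub, and the click counts are all hypothetical placeholders standing in for a text-generation model and live engagement data. The point is how little of the pipeline still needs a person.

```python
import random

# Hypothetical trending topics a mill might pull from social and search dashboards.
TRENDING_TOPICS = ["celebrity feud", "miracle diet", "viral gadget"]


def generate_article(topic: str, angle: str) -> str:
    """Placeholder for a call to whatever text-generation model the mill uses."""
    return f"[{angle}] You won't believe what's happening with {topic}..."


def simulate_clicks(article: str) -> int:
    """Placeholder for real engagement numbers from an A/B test."""
    return random.randint(100, 10_000)


def run_pipeline() -> None:
    for topic in TRENDING_TOPICS:
        # Generate several variants of the same story, each optimized for a different hook.
        variants = [generate_article(topic, angle)
                    for angle in ("outrage", "curiosity gap", "listicle")]
        # A/B test: publish whichever variant draws the most (simulated) clicks.
        winner = max(variants, key=simulate_clicks)
        print(f"Publishing for '{topic}': {winner}")


if __name__ == "__main__":
    run_pipeline()
```

Swap the stub for a real language-model call and the human’s remaining job is exactly what’s described above: pick the topic and hit run.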

The flood of content we’re going to see—and the pressure it will put on actual media organizations—is going to be like nothing in the history of the written word.

There’s a storm coming.


And it won’t just be shady, fly-by-night content mills, either. Lots of soft-news sources are going to turn to AI to produce content.

Remember: This whole conversation was started because BuzzFeed eliminated its news division.

It is not a coincidence that less than a month ago the non-news sections of BuzzFeed started publishing AI-generated “articles.”

2. Inside ChatGPT

David Epstein has a conversation with Georgetown comp-sci professor Cal Newport, who is much more sanguine about AI than I am:

David Epstein: Earlier this month, I saw a talk by economist Tyler Cowen in which he displayed a chart showing all the tests that GPT-4 had aced. It included medical exams and bar exams. If I recall correctly, he said that one of his colleagues gave an exam he gives to students to GPT-3.5 and it was mediocre, but then GPT-4 was his top performer. I believe Cowen also said it was doing some diagnosis as well as or better than current doctors. Outside the room after his talk, every nook of the hotel lobby was occupied by someone on a phone talking about how they’re about to be replaced. In the talk, Cowen said that if you’re competing with GPT-4 rather than working with it, you’re done for. Apropos of writers, he said that if your job primarily involves the production of words, you’re essentially replaceable immediately, it just takes some time for GPT-4 integration.

But you argue that while ChatGPT can “generate attention-catching examples, the technology is unlikely in its current form to significantly disrupt the job market.” What makes you say that, given that it can pass all these tests?

Cal Newport: If you read OpenAI’s main research paper about GPT-3, you discover that what they’re most proud of was not any one individual ability of the model, but instead the breadth of its abilities. They demonstrated that it could do well on many different well-known tests of natural language processing. The key, however, is that it didn’t do *better* than the state of the art programs for most of these tasks, but instead that it could do similarly to the state of the art in many different areas without having to specialize. This is critical because it reminds us that in many of these areas (processing medical records, producing natural text on a specific topic), we have already had specialized language models that do very well, in many cases better than recent GPT models. But these have failed to fundamentally disrupt these fields.

These models do particularly well on tests because tests are very well suited to their word-guessing approach. They have been specifically trained to be good at responding to questions – as this is what is expected in the chat interface. To respond to a test question, all the model needs is to have seen a similar question, and it will push toward words from relevant answers.

Ultimately, however, the best summary of what these models can do is the following: in response to a user request, write natural text on arbitrary combinations of known subjects in arbitrary combinations of known styles, where “known” means it encountered them enough during training. In doing so, it has no ability to actually check if what it’s saying is true or not. The key question to ask is: How much of your current job could be replaced by this ability?

Newport’s thesis is that large language models function not by “thinking,” but by simply guessing the next word based on the preceding words, according to the data set they’ve been trained on.
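To see what Newport means, here’s a deliberately tiny sketch of next-word guessing built on bigram counts. It is nothing like GPT-4’s actual architecture (real models use neural networks trained on billions of tokens), but it shows the basic move: pick a statistically likely continuation of the words so far, with no machinery anywhere for checking whether the output is true.

```python
from collections import Counter, defaultdict

# A toy "training set." Real models train on vast corpora; this just illustrates
# the mechanism: learn which word tends to follow which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1


def guess_next(word: str) -> str:
    """Return the continuation seen most often in training; no notion of truth involved."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"


# Generate a few words by repeatedly guessing the next one.
word = "the"
sentence = [word]
for _ in range(4):
    word = guess_next(word)
    sentence.append(word)
print(" ".join(sentence))  # prints "the cat sat on the"
```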

I find this distinction less comforting than he does.

3. AI Porn

I am sorry, but we have to talk about this because it gets at what “replaceability” really means. Also, it’s fascinating. This is from a newsletter called Unreality, which describes itself as an “online subscription for existential dread masquerading as cultural analysis.”

AI threatens to replace all aspects of the porn industry, save for its consumption. But will it? Wired recently cast its vote on the subject, declaring, “When It Comes to OnlyFans, Humans Can Outcompete AI”. . . .

It should be abundantly clear that our question is not ‘can AI make porn’. We’re already there. Neither is the question ‘can AI make video porn’, as artificially generated video looks pretty stunningly competent already. But pornography is not all images and sounds. As the Wired article explains, consumers of subscription ‘e-girl’ pornography are not going to emotionally invest in pixels, no matter how alluringly arranged. To be a successful pornographer, one becomes a celebrity to their devotees:

“People do not subscribe to my OnlyFans because they want to see a random naked woman, they subscribe to my OnlyFans because they want to see me naked, specifically, based on a parasocial connection formed by following ME on other social media platforms,” Laura Lux—an OnlyFans model who manages a free and paid subscription account—pointed out on Twitter.

Lux Alptraum, “When It Comes to OnlyFans, Humans Can Outcompete AI,” Wired

That all said, we’re not just dealing with artificially generated media, we’re also contending with the absurdly rapid development of neural language models – as previously discussed. These chatbots may be intelligent and conversational, but they’re still not real, and people aren’t going to emotionally invest in something as unsexy as a ‘neural language model’, let alone want to know them in the Biblical sense, right?

Guys?

I share the writer’s skepticism. But there’s more.

The porn-industrial complex has been transformed in recent years by OnlyFans.

The question in a post-AI world is whether or not OnlyFans needs real girls.

OnlyFans, like any digital economy, must operate at scale. To be successful on the platform requires the upkeep of potentially hundreds of parasocial relationships at once, but an individual just doesn’t have the bandwidth to do so. As a result, many pornographers farm out the time and effort of messaging their clientele to their staff, ‘many of which are men’, according to those interviewed. The question must therefore be asked, is it a more ‘genuine’ experience to trade dirty messages with a personal chatbot or with Kevin from finance?

To make serious cash on the site, they need to be “available and online 24/7” to deliver custom videos and chat with fans, as well as to schedule shoots and create enough content to keep up with consumer demands. “Once you get over a certain number of subscribers, it’s an impossible task to handle the messaging alone,” she explains.

Mel Magazine

Digging into the economics of subscription pornography exposes the limits inherent in the mass replication of connection. Suddenly, the dilemma of artificial pornography boils down to just one question. Can artificial intelligence convincingly emulate that which we already pretend to do? Smart money is on ‘yes’, in fact, it’s on a resounding ‘yes and better’. . . .

So, the cam girl pimp of tomorrow won’t be some Romanian mogul, but instead just a software executive commanding a legion of erotic AI companions.

Yeah, we’re like five minutes from Blade Runner.

For most of this year I’ve spent every Saturday ringing the bell about AI because I believe it’s incredibly important—a development more consequential than the mobile revolution.

It’s going to touch every part of our society—economics, politics, art. Even porn. And we all have to start figuring out how we’re going to conform our lives to it.

Because the history of technology suggests that the technology won’t conform to us.

If you find this newsletter valuable, please hit the like button and share it with a friend. And if you want to get the Newsletter of Newsletters every week, sign up below. It’s free.

[1] Though Molloy does have one anecdote from recent Musk Twitter that shocked me, even though it shouldn’t have:

“Oh, and also, Musk's really been feeding into a lot of really right-wing, anti-trans stuff lately. Just the other day, a former MMA fighter named Jake Shields posted a tweet asking if there should be public executions of people who support transition-related health care for minors. The top responses from ‘verified’ accounts were filled with people responding, ‘Yes!’ and ‘I’ll volunteer as executioner!’ Musk and his team didn’t suspend the accounts involved but did try to hide evidence.”

What. The. Fuck.

I know. That should be my tagline.
