This is a continuation of the previous blog post, "Habits: What makes a smartphone so hard to put down?" In that post, I looked at smartphone use through a behavior analytic lens, including negative reinforcement and intermittent positive reinforcement, especially the powerful variable ratio schedule of reinforcement. In this post, I look more broadly at problematic technology design and enshittification.
Have you noticed that Amazon sucks now, filled with fake reviews for cheap, misleadingly advertised products branded with some random combination of letters? Have you noticed that Google search sucks now, filled to the brim with ads and SEO content farms rather than immediately pointing you to the source you actually want? Have you noticed that social media like Facebook and Instagram are now less likely to show you your friends and those you follow, and more likely to insert their own algorithmic suggestions between the more-frequent-than-ever ads?
Why is everything getting worse?
Designing for users or against them
Imagine it's the mid-2000s and you're working at Apple on the design of a new product, what they'll eventually call an iPhone. It combines cellular phone and texting functions with internet browsing, a camera, a calculator, and so much more!
As one of the designers, you might be thinking about tradeoffs like battery life versus device size, or processing power versus unit cost. You certainly can't max out every spec a user might want, because then most users couldn't afford the device.
That said, when you decide to limit or lower some spec like battery life, it's not because you don't want to give the user more of that thing. You as the designer want the user experience to be positive. Your motivations fundamentally align with those of the user -- at least insofar as the user can afford your device and finds it useful. When you add a function to the design, like an app that serves as a calculator or compass or map, your primary motivation is to provide a good, desirable product to the user.
But if you stuck around at Apple for a few years, you might notice something start to change around 2009, when the App Store began allowing in-app purchases, from which Apple gets a significant cut. You watch as third-party creators flood the store with apps that use dark patterns and casino-inspired psychological tricks to build an addictive-feeling experience that milks users via microtransactions (Flayelle et al., 2023; Lewis, 2014).
Because Apple gets a cut of the profits, their incentives are no longer aligned with those of the user. When Apple gets a cut from every time-waster like Candy Crush, they no longer care primarily about making your phone-using experience pleasurable or useful. They now benefit more from aligning with app creators -- business customers -- at the expense of users.
The App Store turned the iPhone from just a device into a platform. And by making Apple's incentives misaligned with -- even adversarial to -- the user's interests, that platform became a little bit shittier.
Years later, in early 2023, the inimitable Cory Doctorow captured a feeling we'd all been having when he coined the term enshittification to describe a three-stage process he had observed in the evolution of tech platforms like this one.
Enshittification Stage 1: Provide good value to users to capture them
Consider Amazon: they exploded in popularity initially because they provided a great user experience. Searching on the site was great; it took you right to what you were looking for.
They even operated at a loss for years, subsidizing both user purchases and shipping, to get us hooked. And it worked! Today more than half of Americans start their product searches on Amazon.
Sure, you're kind of locked into their ecosystem when your ebooks and audiobooks only exist as a license on their platform that can be revoked at will. But we got hooked on the convenience and on things like the "free" shipping that comes with a Prime membership (i.e., pre-paying for your shipping so you're more committed to buying from them).
Meanwhile, they choked a lot of brick-and-mortar stores out of business.
Enshittification Stage 2: Shift value to business customers to capture them
But, as Doctorow points out, Amazon didn't stay user-friendly. As it exploded in popularity it also pulled in business customers who could find a large audience for their goods if they were only willing to give a small cut to the platform hosting them (and initially the cut was indeed small).
These Marketplace sellers (business customers) were now the ones getting special treatment and subsidies, and pretty soon if you were a small business -- or even a large business -- you had to have a presence on Amazon or risk invisibility to customers.
Meanwhile, the user experience got worse as the search stopped surfacing the products that best fit your search terms and started prioritizing businesses that paid to be placed up top.
Enshittification Stage 3: Shift value to the platform and shareholders
Eventually, once Marketplace sellers were also trapped on Amazon, the platform shifted value away from business customers just as it had earlier shifted value away from users. Amazon increased the cut it took from every purchase, and businesses couldn't do anything about it, since leaving Amazon now meant the death of their business.
Oh, and Marketplace sellers have to bribe Amazon to be placed high in the search or risk being relegated so far down no shoppers see the product.
Amazon can change the rules, change the cut they take, change the algorithm at any time and the Marketplace sellers have no real power to stop them.
Worse, Amazon started to clone the products of businesses on their platform and sell their own version to undercut and outcompete those businesses. For example, a small business owner tells how he created an all-natural lip balm and was doing well on Amazon right up until the day he searched Amazon for his product and saw it listed there, not under his own business, but sold by Amazon itself. They had poached his product, priced it cheaper, listed it above his, and stolen his business. Amazon could sell their version much cheaper, since he had to price in the big cut Amazon took for sales on their platform.
This is the third stage of enshittification, where both users and business customers are trapped and the platform begins making everything worse to squeeze out every bit of value for itself. As Doctorow describes the process: "surpluses are first directed to users; then, once they're locked in, surpluses go to suppliers; then once they're locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit."
Facebook's enshittification
Doctorow also discusses Facebook as a case study. Early on, Facebook was good to users. You saw posts from your friends and family, from those you wanted to hear from. Meanwhile, the platform grew and grew, capturing more and more people. Once a critical mass of your loved ones were all on Facebook, it became really hard to leave for a new, better platform, because you'd be leaving them all behind and missing out on all the updates from your grandparents, your old college buddies, your exes, your crushes. You're stuck.
And once users are hooked, the platform can start to extract more value from them and prioritize business customers until they, too, are trapped and can be squeezed by the platform as well:
[Facebook] started to cram your feed full of posts from accounts you didn't follow. At first, it was media companies, who Facebook preferentially crammed down its users' throats so that they would click on articles and send traffic to newspapers, magazines and blogs.
Then, once those publications were dependent on Facebook for their traffic, it dialed down their traffic. First, it choked off traffic to publications that used Facebook to run excerpts with links to their own sites, as a way of driving publications into supplying fulltext feeds inside Facebook's walled garden.
This made publications truly dependent on Facebook – their readers no longer visited the publications' websites, they just tuned into them on Facebook. The publications were hostage to those readers, who were hostage to each other. Facebook stopped showing readers the articles publications ran, tuning The Algorithm to suppress posts from publications unless they paid to "boost" their articles to the readers who had explicitly subscribed to them and asked Facebook to put them in their feeds.
Now, Facebook started to cram more ads into the feed, mixing payola from people you wanted to hear from with payola from strangers who wanted to commandeer your eyeballs. It gave those advertisers a great deal, charging a pittance to target their ads based on the dossiers of nonconsensually harvested personal data they'd stolen from you.
Sellers became dependent on Facebook, too, unable to carry on business without access to those targeted pitches. That was Facebook's cue to jack up ad prices, stop worrying so much about ad fraud, and to collude with Google to rig the ad market [...].
Enshittification isn't just technology getting worse: it's a specific, three-stage process. First, platforms design to benefit users. Then, once users are captured, the design pivots to prioritizing -- and capturing -- business customers at the expense of users. Finally, once business customers are also stuck, the platform screws them over too, maximizing value for itself and its shareholders. Make it as bad as needed to maximize profit, users be damned. And we all end up trapped on these enshittified platforms, stuck with a crappier Google search, a horrible time finding the best product on Amazon, and a social media experience that keeps us scrolling but hating it.[1]

Why don't we leave at stage 2? Boiling frogs, pecking pigeons, and swiping humans
Digital platforms are particularly prone to enshittification because the platform can adjust the dials quickly and at scale, changing the rules on a dime and getting immediate feedback. They can run A/B tests to see just how many ads can be crammed into a user's feed while still keeping them swiping and scrolling (Quin et al., 2024; Yan et al., 2020).
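To make that concrete, here's a deliberately toy sketch of the kind of experiment a platform might run. Everything in it is invented for illustration -- the quit-probability model, the numbers, the variants -- but it shows what such a test optimizes: not user enjoyment, but total ad impressions extracted before the user gives up.

```python
import random

random.seed(42)

def simulate_session(ads_per_10_posts: int) -> int:
    """Return how many posts a simulated user scrolls before quitting.

    Toy model: each extra ad slightly raises the chance the user quits,
    but users tolerate a surprising amount before leaving.
    """
    quit_prob = 0.01 + 0.004 * ads_per_10_posts  # more ads -> quicker exits
    posts = 0
    while random.random() > quit_prob:
        posts += 1
    return posts

def run_experiment(variant_a: int, variant_b: int, n_users: int = 10_000) -> None:
    a = sum(simulate_session(variant_a) for _ in range(n_users)) / n_users
    b = sum(simulate_session(variant_b) for _ in range(n_users)) / n_users
    # What the platform actually cares about: ad impressions per user.
    print(f"A: {variant_a} ads/10 posts -> {a:.0f} posts, {a * variant_a / 10:.1f} ads seen")
    print(f"B: {variant_b} ads/10 posts -> {b:.0f} posts, {b * variant_b / 10:.1f} ads seen")

run_experiment(variant_a=2, variant_b=3)
```

In this toy run, the heavier ad load "wins" on impressions even though users leave sooner -- exactly the tradeoff such experiments are built to exploit.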
And unfortunately, there are some behavioral principles working against us as users that make it harder to give up on a behavior like scrolling social media even as the experience gets worse and worse.
In my previous post, I talked about things like variable ratio reinforcement schedules, where you get rewarded every N or so times you do a behavior. Every few swipes on social media, you see something you genuinely like: a post from a friend or an account you follow, a cute or informative video snippet, whatever. But the timing is unpredictable; that's the "variable" in variable ratio reinforcement. Sometimes the payoff comes after 5 swipes, sometimes after 12, sometimes after 2, so your brain is constantly primed for the upcoming reinforcement. It feels like you're always on the verge of getting that payoff with the next swipe (i.e., the next time you do the behavior).
As I talked about previously, variable ratio reinforcement leads to a high and consistent rate of behavior, often with minimal pausing. This is part of why you can open an app like TikTok thinking you'll just spend a few moments, then find yourself looking up 40 minutes later wondering how so much time went by.
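If you're curious just how lumpy that unpredictability is, here's a minimal simulation. I'm modeling the variable ratio as a random-ratio process -- each swipe independently pays off with probability 1/N, a standard way to approximate VR schedules -- and the numbers are otherwise arbitrary.

```python
import random

random.seed(1)

def vr_gaps(mean_ratio: int, n_rewards: int):
    """Yield the number of swipes between successive payoffs on a
    variable-ratio schedule: each swipe pays off with probability 1/mean_ratio."""
    for _ in range(n_rewards):
        swipes = 0
        while random.random() >= 1 / mean_ratio:  # unrewarded swipes
            swipes += 1
        yield swipes + 1  # count the swipe that finally paid off

gaps = list(vr_gaps(mean_ratio=8, n_rewards=10))
print(gaps)                   # wildly uneven gaps between payoffs
print(sum(gaps) / len(gaps))  # yet hovering around the mean of 8
```

The gaps bounce all over the place, but the long-run average sits near 8: enough payoff to keep the swiping going, never enough predictability to let you feel done.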
But there's something more insidious about variable ratio schedules: unlike continuous reinforcement (where you get the payoff every time you do the behavior), variable schedules make the behavior more resistant to extinction.
Extinction is what happens when a behavior that was getting reinforced is no longer reinforced. Eventually the behavior stops happening because there's no contingent reward maintaining it. Our brains are quick to notice when a continuous reinforcement schedule goes into extinction. When the lever that used to work every time suddenly stops working, it becomes clear immediately. Likewise for fixed ratio (FR) setups: when you used to get rewarded every 30 behaviors, it becomes obvious pretty soon that something has changed if the reward doesn't show up at 30, 60, or 90.
But when the reinforcement has been more variable, less predictable -- when the schedule of reinforcement already included normalizing the sense of "this one didn't pay off, but that's okay, the next one might!" -- it's harder for our brain to pick up the new extinction pattern when the reward contingency is removed. It's harder to get enough clear signals that the lever is broken, so to speak (after all, this might just be one of those times it takes a lot more behaviors to get the reward!).
That means with a variable ratio schedule we're less likely to notice if the reinforcement ends or, for that matter, if it thins out (e.g., when the reinforcer starts showing up every 12-ish behaviors on average rather than every 8 or so, and then every 15-ish or 20-ish behaviors, and so forth). With variable reinforcement, we're more likely to keep doing the behavior even as the payoffs become fewer and further between.
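Here's a toy way to see that asymmetry. Suppose -- purely for illustration; this is my own crude model, not one from the behavioral literature -- that you only conclude the rewards are truly gone after a dry spell longer than any you've experienced before.

```python
import random

random.seed(7)

def longest_dry_spell(mean_ratio: float, n_swipes: int = 5000) -> int:
    """Longest run of unrewarded swipes over a history of n_swipes on a
    variable-ratio schedule averaging one payoff per mean_ratio swipes."""
    longest = current = 0
    for _ in range(n_swipes):
        if random.random() < 1 / mean_ratio:  # this swipe paid off
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

# Crude stand-in for "noticing the lever is broken": keep going until a dry
# spell exceeds everything in your reinforcement history.
for label, ratio in [("continuous", 1), ("VR-8", 8), ("VR-20", 20)]:
    print(f"{label:>10}: gives up after ~{longest_dry_spell(ratio) + 1} unrewarded swipes")
```

On continuous reinforcement, a single miss is unprecedented, so the change registers immediately; on a lean variable schedule, you might plausibly swipe through a hundred-plus dead responses before anything feels wrong.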

Platforms know this. They utilize "reinforcement thinning" to stuff more and more ads in front of our eyeballs while still providing juuuust enough reinforcement to keep us performing that scrolling or swiping behavior.
B.F. Skinner found that he could get a pigeon to peck a key hundreds of times for a single little reinforcer but only if they first got used to being rewarded at a higher rate and then over time were shifted to a more and more thinned-out schedule (Ferster & Skinner, 1957).
It's like the apologue of the frog being boiled alive without hopping out, because the pot of water started cool and only slowly got hotter. When enshittifying an app or social media feed, you don't jump straight to a feed packed full of ads; you get people used to a higher payoff rate for their swiping and then slowly make the rate worse and worse over time.
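Here's what that thinning might look like expressed as schedules. The specific steps below are made up, but the shape -- stretch the ratio a little at a time -- follows the Ferster and Skinner procedure.

```python
import random

random.seed(3)

# Hypothetical thinning sequence: mean swipes per payoff in each phase.
thinning_steps = [4, 8, 15, 30, 60, 120]
swipes_per_phase = 400

for mean_ratio in thinning_steps:
    payoffs = sum(random.random() < 1 / mean_ratio for _ in range(swipes_per_phase))
    print(f"VR-{mean_ratio:>3}: {payoffs:>3} payoffs in {swipes_per_phase} swipes")
```

Introduced gradually, each leaner phase feels like a plausible dry spell rather than a broken lever -- the digital version of slowly turning up the heat.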
Digital platforms have the power to change the reinforcement schedules for users with the same ease Skinner could adjust the reinforcement contingencies for his pigeons. Meanwhile, our human brains follow the same behavioral principles as theirs: variable ratio reinforcement keeps us maintaining a high and consistent rate of behavior that's resistant to extinction and tolerant of a thinning reinforcement rate.
Of course, companies also have powerful tools at their disposal to measure and adjust what they present to us in order to maximize how much of our time and attention they take. For example, in 2023 Facebook started doubling the amount of AI-recommended "suggested" content it inserted into user feeds in place of friends or accounts users had actually subscribed to. They can twist the dials and fine-tune the experience to find whatever manipulates our behavior in the direction they want (usually: seeing more ads).
Switching costs
On top of that, enshittified platforms generally have high switching costs. It's hard to move elsewhere. A platform like Instagram or Twitter isn't designed for portability, so if you leave, you lose your data, your past, your connections, your network. Try convincing others to leave with you and, well, it's a classic collective action problem: it's nearly impossible to get enough people to leave and move to a new platform together.[2]
So it's easy to see how users can feel stuck on a platform even as they watch the experience degrade and the platform enshittify. By the time you notice what's happening, it's hard to stop or migrate.
Enshittification is coming for AI
One reason I bring this up now is news from May of OpenAI's latest hire: they are bringing in Fidji Simo as chief executive of the business and operations team. She comes most recently from Instacart, but before that she ran Facebook's News Feed, was an architect of their advertising business, and monetized mobile after their 2012 IPO. In other words, she presided over some of the biggest steps of enshittification at Facebook.
Right now, companies like OpenAI are operating at a loss in order to draw in users. Gemini and ChatGPT are offering free membership trials to college students (right around finals time!) to get them familiar with, and reliant on, the company's LLM. The companies are providing a great deal to users, who get an incredibly powerful tool for much less than it costs the platform to create and run it. Meanwhile, a lot of users are getting hooked on AI, with some researchers even arguing it can be 'addictive' (Zhou & Zhang, 2024).
It sure feels like we're in the first stage of consumer AI, where value is directed to users.
Appointing Simo suggests things won't stay there. Consumer AI (e.g., LLMs like ChatGPT, Claude, and Gemini) eventually needs to turn a profit -- and then to turn ever more profit, sustaining that infinite growth for shareholders.
If we're not careful, we may find ourselves locked into and hooked on an AI ecosystem (perhaps OpenAI's, perhaps Google's?) that can then enshittify the user experience to extract more and more while providing a worse and worse experience.
Imagine AI pivoting away from providing the best or most helpful answer and toward attention-harvesting mechanisms that maximize your time on the system (in order to maximize how many embedded ads you're exposed to, whether or not you realize they are ads).
Imagine AI that changes its answer to your query depending on which business has paid it the most to have the priority answer slot for any user request relating to a given topic.
Right now AI might feel like it's working for you, in your court, doing your bidding, but that's how social media originally felt. That's how Amazon and Google search originally felt.
References
Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1-7), 107-117. https://doi.org/10.1016/S0169-7552(98)00110-X
Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. Appleton-Century-Crofts.
Flayelle, M., Brevers, D., King, D. L., Maurage, P., Perales, J. C., & Billieux, J. (2023). A taxonomy of technology design features that promote potentially addictive online behaviors. Nature Reviews Psychology, 2, 136-150. https://doi.org/10.1038/s44159-023-00153-4
Lewis, C. (2014). Irresistible apps: Motivational design patterns for apps, games, and web-based communities. Apress.
Quin, F., Weyns, D., Galster, M., & Costa Silva, C. (2024). A/B testing: A systematic literature review. Journal of Systems and Software, 211, 112011. https://doi.org/10.1016/j.jss.2024.112011
Reviglio, U., & Agosti, C. (2020). Thinking outside the black-box: The case of 'algorithmic sovereignty' in social media. Social Media + Society, 6(2), 1-12. https://doi.org/10.1177/2056305120915613
Yan, J., Xu, Z., Tiwana, B., & Chatterjee, S. (2020). Ads allocation in feed via constrained optimization. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 3386-3394. https://doi.org/10.1145/3394486.3403391
Zhou, T., & Zhang, C. (2024). Examining generative AI user addiction from a C-A-C perspective. Technology in Society, 78, 102653. https://doi.org/10.1016/j.techsoc.2024.102653
Footnotes

[1] Doctorow also tackles the enshittification of Google search in the first of four episodes of the CBC podcast "Who Broke the Internet". They describe the early internet before ads were everywhere (Usenet, etc.) and how Google's PageRank algorithm revolutionized internet search and brought the company its extreme popularity. In a 1998 paper, Google founders Brin and Page (1998) were saying things like "Advertising funded search engines will be inherently biased towards the advertisers and away from the needs of consumers". Two years later, they started AdWords, but ads back then were less intrusive and sneaky, more clearly marked, compared to the enshittified search experience today. The podcast episode brings to light a 2019 internal debate at Google over whether to deliberately make search worse to increase ad revenue and hit the quarter's targets. Around this time, Google started undoing past improvements to search, like earlier fixes that had deranked spammy results. They started making ads look just like real search results -- very hard to tell apart unless you consciously look closely. In 2020 the head of ads was appointed to also be the head of search(!). Uh oh. For the Google platform, it worked: despite already being a two-decade-old giant, Google doubled its overall revenue in just five years (and ad revenue specifically doubled in the time since that 2019 internal debate, which the ad men won). But as the platform benefited, users suffered.
[2] Doctorow has long been an advocate for solutions to the switching-cost problem, such as interoperability. When it comes to social media like Twitter, there are alternatives like Mastodon and -- to a lesser extent perhaps -- Bluesky that are built on protocols that make user data portable and ease the switching of platforms. They also give users more control over the algorithms that dictate what they see (a big piece of what some researchers have called algorithmic sovereignty; Reviglio & Agosti, 2020). Of course, an even more classic way to control your algorithmic feed is an RSS reader.