
Saturday Morning Breakfast Cereal - History



Click here to go see the bonus panel!

Hovertext:
You won't experience the ice cream in the powerful, meaningful way that I would, but you'll have a great big smile and it'll be so dear.


Today's News:

OmniPlan 4.8 for Mac, iPad, iPhone, and Apple Vision Pro


On February 2, 2024, the first day Apple Vision Pro shipped, we released OmniPlan 4.7.2 running natively on Apple Vision Pro. Today, we’re releasing the first feature update since that special release, universal across four Apple platforms. That’s right: OmniPlan 4.8 for Mac, iPad, iPhone, and Apple Vision Pro is now available!

OmniPlan 4.8 introduces a beautiful new app icon on the Mac, iPad, and iPhone, bringing visual consistency across all supported platforms.

For anyone running the Pro edition of OmniPlan, this release also adds support for Omni Automation “Install Links,” which simplify Omni Automation plug-in installation. First introduced with OmniFocus 4.1 on Apple Vision Pro, Install Links bring a simple “Look, Tap, and Approve!” mechanism for installing plug-ins to OmniPlan 4.8 on all platforms. It’s a prime example of how innovation and development efforts for Apple Vision Pro extend to benefit the other platforms as well. We’re updating our plug-in collections to take advantage of this new feature.

But wait, there’s more! OmniPlan 4.8 also introduces support for custom data on iPad, iPhone, and Apple Vision Pro for the first time. Previously available only in OmniPlan for Mac, custom data support lets you display custom data in the project outline and track tasks accordingly. For example, if there’s a particular bit of data, say a part number or item key, that you’ve configured in OmniPlan for Mac, it can now be viewed and edited in OmniPlan on all synced devices. Naturally, OmniPlan 4.8 also includes fixes for a variety of bugs. See the full Mac, iPad and iPhone, and Apple Vision Pro release notes for the complete rundown of the changes in OmniPlan 4.8.

If OmniPlan 4 has empowered you, leaving a review is a great way to help others discover OmniPlan in the App Store! We always like to hear from you directly, too, so if you’d like to share feedback about OmniPlan 4, we would love to hear from you!


Pluralistic: The specific process by which Google enshittified its search (24 Apr 2024)



Today's links



A collection of 1950s white, suited boardroom executives seated around a table, staring at its center. The original has been altered. In the center of the table stands a stylized stick figure cartoon mascot whose head is a poop emoji rendered in the colors of the Google logo. The various memos on the boardroom table repeat this poop Google image. On the wall behind the executives is the original Google logo in an ornate gilt frame.

The specific process by which Google enshittified its search (permalink)

All digital businesses have the technical capacity to enshittify: the ability to change the underlying functions of the business from moment to moment and user to user, allowing for the rapid transfer of value between business customers, end users and shareholders:

https://pluralistic.net/2023/02/19/twiddler/

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

Which raises an important question: why do companies enshittify at a specific moment, after refraining from enshittifying before? After all, a company always has the potential to benefit by treating its business customers and end users worse, by giving them a worse deal. If you charge more for your product and pay your suppliers less, that leaves more money on the table for your investors.

Of course, it's not that simple. While cheating, price-gouging, and degrading your product can produce gains, these tactics also threaten losses. You might lose customers to a rival, or get punished by a regulator, or face mass resignations from your employees who really believe in your product.

Companies choose not to enshittify their products…until they choose to do so. One theory to explain this is that companies are engaged in a process of continuous assessment, gathering data about their competitive risks, their regulators' mettle, their employees' boldness. When these assessments indicate that the conditions are favorable to enshittification, the CEO walks over to the big "enshittification" lever on the wall and yanks it all the way to MAX.

Some companies have certainly done this – and paid the price. Think of Myspace or Yahoo: companies that made themselves worse by reducing quality and gouging on price (be it measured in dollars or attention – that is, ads) before sinking into obscure senescence. These companies made a bet that they could get richer while getting worse, and they were wrong, and they lost out.

But this model doesn't explain the Great Enshittening, in which all the tech companies are enshittifying at the same time. Maybe all these companies are subscribing to the same business newsletter (or, more likely, buying advice from the same management consultancy) (cough McKinsey cough) that is a kind of industry-wide starter pistol for enshittification.

I think it's something else. I think the main job of a CEO is to show up for work every morning and yank on the enshittification lever as hard as you can, in hopes that you can eke out some incremental gains in your company's cost-basis and/or income by shifting value away from your suppliers and customers to yourself.

We get good digital services when the enshittification lever doesn't budge – when it is constrained: by competition, by regulation, by interoperable mods and hacks that undo enshittification (like alternative clients and ad-blockers) and by workers who have bargaining power thanks to a tight labor market or a powerful union:

https://pluralistic.net/2023/11/09/lead-me-not-into-temptation/#chamberlain

When Google ordered its staff to build a secret Chinese search engine that would censor search results and rat out dissidents to the Chinese secret police, googlers revolted and refused, and the project died:

https://en.wikipedia.org/wiki/Dragonfly_(search_engine)

When Google tried to win a US government contract to build AI for drones used to target and murder civilians far from the battlefield, googlers revolted and refused, and the project died:

https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html

What's happened since – what's behind all the tech companies enshittifying all at once – is that tech worker power has been smashed, especially at Google, where 12,000 workers were fired just months after a $80b stock buyback that would have paid their wages for the next 27 years. Likewise, competition has receded from tech bosses' worries, thanks to lax antitrust enforcement that saw most credible competitors merged into behemoths, or neutralized with predatory pricing schemes. Lax enforcement of other policies – privacy, labor and consumer protection – loosened up the enshittification lever even more. And the expansion of IP rights, which criminalize most kinds of reverse engineering and aftermarket modification, means that interoperability no longer applies friction to the enshittification lever.
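That buyback arithmetic checks out, by the way. Here's a minimal back-of-the-envelope sketch in Python; the per-worker figure is derived from the numbers above, not independently reported:

    # Check: could an $80b buyback pay 12,000 workers for 27 years?
    buyback = 80e9            # $80 billion stock buyback
    workers = 12_000          # laid-off workers
    years = 27                # claimed coverage period

    implied_annual_cost = buyback / (workers * years)
    print(f"${implied_annual_cost:,.0f} per worker per year")
    # -> about $247,000/year: a plausible fully-loaded cost
    #    for a Google employee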

Now that every tech boss has an enshittification lever that moves very freely, they can show up for work, yank the enshittification lever, and it goes all the way to MAX. When googlers protested the company's complicity in the genocide in Gaza, Google didn't kill the project – it mass-fired the workers:

https://medium.com/@notechforapartheid/statement-from-google-workers-with-the-no-tech-for-apartheid-campaign-on-googles-indiscriminate-28ba4c9b7ce8

Enshittification is a macroeconomic phenomenon, determined by the regulatory environment for competition, privacy, labor, consumer protection and IP. But enshittification is also a microeconomic phenomenon, the result of innumerable boardroom and product-planning fights within companies, in which would-be enshittifiers – people trying to make the company's products and services shittier – wrestle with rivals who want to keep things as they are, or make them better, whether out of principle or fear of the consequences.

Those microeconomic wrestling-matches are where we find enshittification's heroes and villains – the people who fight for the user or stand up for a fair deal, versus the people who want to cheat and wreck things for the company's benefit, winning bonuses and promotions for themselves:

https://locusmag.com/2023/11/commentary-by-cory-doctorow-dont-be-evil/

These microeconomic struggles are usually obscure, because companies are secretive institutions and our glimpses into their deliberations are normally limited to the odd leaked memo, whistleblower tell-all, or spectacular worker revolt. But when a company gets dragged into court, a new window opens into the company's internal operations. That's especially true when the plaintiff is the US government.

Which brings me back to Google, the poster-child for enshittification: a company that revolutionized the internet a quarter of a century ago with a search engine so good it felt like magic, and that has since decayed so badly and so rapidly that whole sections of the internet are disappearing from view for the 90% of users who rely on it as their gateway to the internet.

Google is being sued by the DOJ's Antitrust Division, and that means we are getting a very deep look into the company, as its internal emails and memos come to light:

https://pluralistic.net/2023/10/03/not-feeling-lucky/#fundamental-laws-of-economics

Google is a tech company, and tech companies have literary cultures – they run on email and other forms of written communication, even for casual speech, which is more likely to take place in a chat program than at a water-cooler. This means that tech companies have giant databases full of confessions to every crime they've ever committed:

https://pluralistic.net/2023/09/03/big-tech-cant-stop-telling-on-itself/

Large pieces of Google's database-of-crimes are now on display – so much material, in fact, that it's hard for anyone to parse it all and understand what it means. But some people are trying, and coming up with gold. One of those successful prospectors is Ed Zitron, who has produced a staggering account of the precise moment at which Google search tipped over into enshittification, one that names the executives at the very heart of the rot:

https://www.wheresyoured.at/the-men-who-killed-google/

Zitron tells the story of a boardroom struggle over search quality, in which Ben Gomes – a long-tenured googler who helped define the company during its best years – lost a fight with Prabhakar Raghavan, a computer scientist turned manager whose tactic for increasing the number of search queries (and thus the number of ads the company could show to searchers) was to decrease the quality of search. That way, searchers would have to spend more time on Google before they found what they were looking for.

Zitron contrasts the backgrounds of these two figures. Gomes, the hero, worked at Google for 19 years, solving fantastically hard technical scaling problems and eventually becoming the company's "search czar." Raghavan, the villain, "failed upwards" through his career, including a stint as Yahoo's head of search from 2005-12, presiding over the collapse of Yahoo's search business. Under Raghavan's leadership, Yahoo's search market-share fell from 30.4% to 14%, and in the end, Yahoo jettisoned its search altogether and replaced it with Bing.

For Zitron, the memos show how Raghavan engineered the ouster of Gomes, with help from the company CEO, the ex-McKinseyite Sundar Pichai. It was a triumph for enshittification, a deliberate decision to make the product worse in order to make it more profitable, under the (correct) belief that the company's exclusivity deals to provide search everywhere from iPhones and Samsungs to Mozilla would mean that the business would face no consequences for doing so.

It's a picture of a company that isn't just too big to fail – it's (as FTC Chair Lina Khan put it on The Daily Show) too big to care:

https://www.youtube.com/watch?v=oaDTiWaYfcM

Zitron's done excellent sleuthing through the court exhibits here, and his writeup is incandescently brilliant. But there's one point I quibble with him on. Zitron writes that "It’s because the people running the tech industry are no longer those that built it."

I think that gets it backwards. I think that there were always enshittifiers in the C-suites of these companies. When Page and Brin brought in the war criminal Eric Schmidt to run the company, he surely started every day with a ritual, ferocious tug at that enshittification lever. The difference wasn't who was in the C-suite – the difference was how freely the lever moved.

On Saturday, I wrote:

The platforms used to treat us well and now treat us badly. That's not because they were setting a patient trap, luring us in with good treatment in the expectation of locking us in and turning on us. Tech bosses do not have the executive function to lie in wait for years and years.

https://pluralistic.net/2024/04/22/kargo-kult-kaptialism/#dont-buy-it

Someone on Hacker News called that "silly," adding that "tech bosses do in fact have the executive function to lie in wait for years and years. That's literally the business model of most startups":

https://news.ycombinator.com/item?id=40114339

That's not quite right, though. The business-model of the startup is to yank on the enshittification lever every day. Tech bosses don't lie in wait for the perfect moment to claw away all the value from their employees, users, business customers, and suppliers – they're always trying to get that value. It's only when they become too big to care that they succeed. That's the definition of being too big to care.

In antitrust circles, they sometimes say that "the process is the punishment." No matter what happens to the DOJ's case against Google, its internal workings have been made visible to the public. The secrecy surrounding the Google trial while it was underway meant that a lot of this stuff flew under the radar when it first appeared. But as Zitron's work shows, there is plenty of treasure to be found in that trove of documents, which is now permanently in the public domain.

When future scholars study the enshittocene, they will look to accounts like Zitron's to mark the turning points from the old, good internet to the enshitternet. Let's hope those future scholars have a new, good internet on which to publish their findings.


Hey look at this (permalink)




This day in history (permalink)

#15yrsago London cop’s Facebook: “Can’t wait to bash” G20 protestors http://news.bbc.co.uk/2/hi/uk_news/england/london/8016620.stm

#15yrsago Dangerous terrorists arrested in the UK weren’t http://news.bbc.co.uk/2/hi/uk_news/8011955.stm

#10yrsago Muslims sue FBI: kept on no-fly list because they wouldn’t turn informant https://arstechnica.com/tech-policy/2014/04/suit-claims-muslims-put-on-no-fly-list-for-refusing-to-become-informants/

#10yrsago Lost Warhol originals extracted from decaying Amiga floppies https://web.archive.org/web/20140424093724/https://studioforcreativeinquiry.org/events/warhol-discovery

#10yrsago Making a planetary-scale sandwich https://www.reddit.com/r/pics/comments/23symq/me_located_in_iceland_and_my_friend_located_in/

#10yrsago Drunk 18 year old girl rushed to hospital from Canadian PM Stephen Harper’s residence https://nationalpost.com/news/canada/intoxicated-18-year-old-girl-reportedly-rushed-to-hospital-from-prime-minister-harpers-residence

#5yrsago Nest’s “ease of use” imperative plus poor integration with Google security has turned it into a hacker’s playground https://memex.craphound.com/2019/04/24/nests-ease-of-use-imperative-plus-poor-integration-with-google-security-has-turned-it-into-a-hackers-playground/

#1yrago How Goldman Sachs's "tax-loss harvesting" lets the ultra-rich rake in billions tax-free https://pluralistic.net/2023/04/24/tax-loss-harvesting/#mego


Upcoming appearances (permalink)





Recent appearances (permalink)




Latest books (permalink)




Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025

  • Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025



Colophon (permalink)

Today's top sources:

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: Capitalists Hate Capitalism https://craphound.com/news/2024/04/14/capitalists-hate-capitalism/


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla


Pluralistic: "Humans in the loop" must detect the hardest-to-spot errors, at superhuman speed (23 Apr 2024)



Today's links



A vintage Readers' Digest 'What's Wrong With This Picture' puzzle, featuring a subtly distorted domestic scene in which a man sits in an easy chair, reading a newspaper, while a woman in a pinafore vacuums the rug. The window has been altered such that it is filled with the staring red eye of HAL 9000 from Kubrick's '2001: A Space Odyssey.' The image is rendered in black and white, except for HAL's eye. It blinks erratically, switching to a false-color version that is momentarily visible and then disappears.

"Humans in the loop" must detect the hardest-to-spot errors, at superhuman speed (permalink)

If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:

https://news.ycombinator.com/item?id=39883571

A company that pays 0.36-1 cent/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":

https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
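To make that burn rate concrete, here's a minimal sketch using the per-query cost range cited above; the daily query volume is a hypothetical, not a figure from the source:

    # Monthly cost of giving away queries at the cited per-query rates.
    low, high = 0.0036, 0.01        # dollars/query (0.36-1 cent, from the text)
    queries_per_day = 10_000_000    # hypothetical free-query volume

    for cost_per_query in (low, high):
        monthly = cost_per_query * queries_per_day * 30
        print(f"${cost_per_query:.4f}/query -> ${monthly:,.0f}/month")
    # -> $1.08M to $3M a month in inference costs alone, before payroll
    #    or training runs, for just ten million free queries a day.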

Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).

Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.

There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.

For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.

Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:

https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454

There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.

Air Canada shows that, for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.

I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:

https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029

But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:

https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby

Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.

The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.

Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.

Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.

That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.

Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.

An AI-assisted radiologist processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
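The difference between the two arrangements comes down to who sets the pace and who gets the final say. Here's a minimal sketch of the two control loops; every function name in it is a hypothetical stub for illustration, not anyone's real API:

    # Centaur: the human decides; the model can only add scrutiny.
    def centaur_read(xray, radiologist, model):
        diagnosis = radiologist(xray)
        if model(xray) != diagnosis:
            diagnosis = radiologist(xray)  # human takes a second look
        return diagnosis                   # throughput set by the human

    # Reverse-centaur: the model decides at machine speed; the human
    # must veto its errors inside a workload they don't control.
    def reverse_centaur_read(xrays, model, human_veto):
        return [d for d in map(model, xrays) if not human_veto(d)]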

This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).

Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:

https://twitter.com/qntm/status/1773779967521780169
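A toy bit of arithmetic shows why this is brutal. All the rates below are hypothetical, chosen only to illustrate the shape of the problem:

    # Expected bugs that slip past a human-in-the-loop code reviewer.
    suggestions = 10_000   # AI-generated changes reviewed per year
    error_rate = 0.01      # 1% contain a subtle bug (hypothetical)
    catch_rate = 0.95      # reviewer catches 95% while fully vigilant

    escaped = suggestions * error_rate * (1 - catch_rate)
    print(f"{escaped:.0f} subtle bugs shipped per year")  # -> 5
    # And 99% of what the reviewer reads is fine, which is exactly the
    # condition under which human vigilance decays.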

But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.

This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:

https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle

This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaratized, low-paid human driver:

https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
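The staffing math alone is damning. A quick sketch, taking the 1.5-monitors-per-car ratio from the text and plugging in hypothetical wage figures:

    # Per-vehicle labor cost: Cruise-style robotaxi vs. traditional taxi.
    monitor_wage = 45    # $/hour, skilled remote monitor (hypothetical)
    driver_wage = 18     # $/hour, precaratized driver (hypothetical)

    robotaxi_staffing = 1.5 * monitor_wage   # per robotaxi on the road
    taxi_staffing = 1.0 * driver_wage        # per conventional cab

    print(f"robotaxi ${robotaxi_staffing:.2f}/hr vs taxi ${taxi_staffing:.2f}/hr")
    # -> $67.50/hr vs $18.00/hr: at these rates the "driverless" car
    #    carries nearly 4x the labor cost, before the robot's own costs.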

The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.

Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:

https://dl.acm.org/doi/10.1145/3442188.3445922
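You can see the whole statistical move in miniature. Here's a toy next-word guesser in Python: the same trick a chatbot performs, with a ten-word "training corpus" standing in for billions of documents:

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which in the training data.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def guess_next(word):
        # Return the statistically likeliest continuation seen in training.
        return bigrams[word].most_common(1)[0][0]

    print(guess_next("the"))  # -> 'cat' (seen twice; 'mat' and 'fish' once each)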

This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:

https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead

But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:

https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
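One cheap (and partial) defense is to verify that every dependency an AI suggests actually exists in the package index before installing it. Here's a minimal sketch against PyPI's real JSON API; note that existence alone proves nothing, since the attack described above worked precisely by registering the hallucinated name:

    import sys
    import urllib.error
    import urllib.request

    def exists_on_pypi(package: str) -> bool:
        # PyPI's JSON API returns 200 for real packages, 404 otherwise.
        url = f"https://pypi.org/pypi/{package}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.URLError:
            return False

    for pkg in sys.argv[1:]:
        print(pkg, "exists" if exists_on_pypi(pkg) else "NOT FOUND on PyPI")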

For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.

These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.

This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.

This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.

However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":

https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/

It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":

https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/

As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:

https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors

The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says


Hey look at this (permalink)




This day in history (permalink)

#15yrsago EU Parliament passes copyright term extension, rejects proposal to give the additional funds to artists https://www.openrightsgroup.org/blog/parliament-buckles-copyright-extension-goes-through-to-council-of-ministers/

#10yrsago How science fiction influences thinking about the future https://www.smithsonianmag.com/arts-culture/how-americas-leading-science-fiction-authors-are-shaping-your-future-180951169/?no-ist

#10yrsago Obama official responsible for copyright chapters of TPP & ACTA gets a job at MPAA; his replacement is another copyright lobbyist https://www.vox.com/2014/4/22/5636466/hollywood-just-hired-another-white-house-trade-official

#10yrsago Having leisure time is now a marker for poverty, not riches https://www.economist.com/finance-and-economics/2014/04/22/nice-work-if-you-can-get-out

#10yrsago Eternal vigilance app for social networks: treating privacy vulnerabilities like other security risks https://freedom-to-tinker.com/2014/04/21/eternal-vigilance-is-a-solvable-technology-problem-a-proposal-for-streamlined-privacy-alerts/

#10yrsago How the Russian surveillance state works https://web.archive.org/web/20140206154124/http://www.worldpolicy.org/journal/fall2013/Russia-surveillance

#5yrsago Political candidate’s kids use his election flyers to fool his laptop’s facial recognition lock https://twitter.com/mattcarthy/status/1120641557886058496

#5yrsago Fool me twice: New York State commutes Charter’s death sentence after Charter promises to stop breaking its promises https://arstechnica.com/tech-policy/2019/04/charter-avoids-getting-kicked-out-of-new-york-agrees-to-new-merger-conditions/

#5yrsago Greta Thunberg attributes her ability to focus on climate change to her Asperger’s https://www.youtube.com/watch?v=hKMX8WRw3fc

#5yrsago A Sanders candidacy would make 2020 a referendum on the future, not a referendum on Trump https://www.theguardian.com/us-news/2019/apr/22/bernie-sanders-democrats-trump-2020

#5yrsago EU to create 350m person biometric database for borders, migration and law enforcement https://www.zdnet.com/article/eu-votes-to-create-gigantic-biometrics-database/

#1yrago A Collective Bargain https://pluralistic.net/2023/04/23/a-collective-bargain/




Saturday Morning Breakfast Cereal - Picture



Click here to go see the bonus panel!

Hovertext:
Writing a book to convince a child they're special is like writing a book to convince a fish it can swim.


Today's News:
1 public comment, from LeMadChef (Denver, CO): "Any time someone tells me they are writing a children's book, I wonder why they think they can."

Saturday Morning Breakfast Cereal - Immortal



Click here to go see the bonus panel!

Hovertext:
When you add in the Stalin potential it gets really dicey.


Today's News: