
OpenAI needs its $40 billion funding deal — so it’s threatening Microsoft!


OpenAI must finish its transition from a weird charity to some sort of for-profit company. Then it can get $40 billion from SoftBank! OpenAI absolutely needs to get the stakeholders on side — particularly Microsoft.

It’s not going so well. OpenAI is discussing a nuclear option — formally accusing Microsoft of anticompetitive behaviour. [WSJ, archive]

Firstly, Microsoft has exclusive reseller rights to all OpenAI software — but OpenAI wants to open that up so they can go out through other providers.

Secondly, Microsoft currently owns some percentage of the nonexistent profits from OpenAI’s commercial entity. If OpenAI goes for-profit, then Microsoft wants a bigger share than OpenAI wants to give it.

Thirdly, Microsoft is annoyed at OpenAI buying AI vibe-coding tool Windsurf. This is just a tweaked ChatGPT plugin for VS Code — the software is trivial.

But OpenAI bought Windsurf to get its customers and its deals as they prepare for their own enterprise push — so they can tread firmly on Microsoft’s turf.

And lastly, Microsoft’s deal for all of OpenAI’s software lasts only until the company makes $100 billion in profit. So that’s never going to happen. But still, Microsoft now wants rights to OpenAI’s software even after that dizzying number has been achieved.

The two companies made a joint statement to the Wall Street Journal:

We have a long-term, productive partnership that has delivered amazing AI tools for everyone. Talks are ongoing and we are optimistic we will continue to build together for years to come.

Of course they will.


Sincerity Wins The War


Hello Where’s Your Ed At Subscribers! I’ve started a premium version of this newsletter with a weekly Friday column where I go over the most meaningful news and give my views, which I guess is what you’d expect. Anyway, it’s $7 a month or $70 a year, and helps support the newsletter. I will continue to do my big free column too! Thanks.


What wins the war is sincerity.

What wins the war is accountability.

And we do not have to buy into the inevitability of this movement.

Nor do we have to cover it in the way it has always been covered. Why not mix emotion and honesty with business reporting? Why not pry apart the narrative as you tell the story rather than hoping the audience works it out? Forget “hanging them with their own rope” — describe what’s happening and hold these people accountable in the way you would be held accountable at your job. 

Your job is not to report “the facts” and let the readers work it out. To quote my buddy Kasey, if you're not reporting the context, you're not reporting the story. Facts without context aren’t really facts. Blandly repeating what an executive or politician says and thinking that appending it with “...said [person]” is sufficient to communicate their biases or intentions isn’t just irresponsible, it’s actively rejecting your position as a journalist.

You don’t even have to say somebody is lying when they say they’re going to do something — but the word “allegedly” is powerful, reasonable and honest, and is an objective way of calling into question a narrative. 

Let me give you a few examples.

A few weeks ago, multiple outlets reported that Meta would partner with Anduril, the military contractor founded by Palmer Luckey, who also founded VR company Oculus, which Meta acquired in 2014 only to oust Luckey four years later for donating $10,000 to an anti-Hillary Clinton group. In 2024, Meta CTO Andrew “Boz” Bosworth, famous for saying that Facebook’s growth is necessary and good even if it leads to bad things like cyberbullying and terror attacks, publicly apologized to Luckey.

Now the circle is completing, with Luckey sort-of-returning to Meta to work with the company on some sort of helmet called “Eagle Eye.” 

One might think at this point the media would be a little more hesitant in how they cover anything Zuckerberg-related after he completely lied to them about the metaverse, and one would be wrong.

The Washington Post reported that, and I quote:

To aid the collaboration, Meta will draw on its hefty investments in AI models known as Llama and its virtual reality division, Reality Labs. The company has built several iterations of immersive headsets aimed at blending the physical and virtual worlds — a concept known as the metaverse.

Are you fucking kidding me?

The metaverse was a joke! It never existed! Meta bought a company that made VR headsets — a technology so old it featured in an episode of Murder, She Wrote — and an online game that could best be described as “Second Life, but sadder.” Here’s a piece from the Washington Post agreeing with me! The metaverse never really had a product of any kind, and lost tens of billions of dollars for no reason! Here’s a whole thing I wrote about it years ago! To still bring up the metaverse in the year of our lord 2025 is ridiculous!

But even putting that aside… wait, Meta’s going to put its AI inside of this headset? Palmer Luckey claims that, according to the Post, this headset will be “combining an AI assistant with communications and other functions.” Llama? That assistant? 

You mean the one that it had to rig to cheat on LLM benchmarking tests? The one that will, as reported by the Wall Street Journal, participate in vivid and gratuitous sexual fantasies with children? The one using generative AI models that hallucinate, like every other LLM? That’s the one that you’re gonna put in the helmet for the military? How is the helmet going to do that exactly? What will an LLM — an inconsistent and unreliable generative AI system — do in a combat situation, and will a soldier trust it again after its first fuckup?

Just to be clear, and I quote Palmer Luckey, the helmet will feature an “ever-present companion who can operate systems, who can communicate with others, who you can off-load tasks onto … that is looking out for you with more eyes than you could ever look out for yourself right there right there in your helmet.” This is all going to be powered by Llama?

Really? Are we all really going to accept that? Does nobody actually think about the words they’re writing down?

Here’s the thing about military tech: the US DOD tends to be fairly conservative when it comes to the software it uses, and has high requirements for reliability and safety. I could talk about these for hours — from coding guidelines, to the Ada programming language, which was designed to be highly crash-resistant and powers everything from guided missiles to F-15 fighter jets — but suffice it to say that it’s highly doubtful that the military is going to rely on an LLM that hallucinates a significant portion of the time.

To be clear, I’m not saying we have to reject every single announcement that comes along, but can we just for one second think critically about what it is we are writing down?

We do not have to buy into every narrative, nor do we have to report it as if we do so. We do not have to accept anything based on the fact someone says it emphatically, or because they throw a number at us to make it sound respectable. 

Here’s another example. A few weeks ago, Axios had a miniature shitfit after Anthropic CEO Dario Amodei said that “AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20% in the next one to five years.”

What data did Mr. Amodei use to make this point? Who knows! Axios simply accepted that he said something and wrote it down, because why think when you could write.

This is extremely stupid! This is so unbelievably stupid that it makes me question the intelligence of literally anybody that quotes it! Dario Amodei provided no sourcing, no data, nothing other than a vibes-based fib specifically engineered to alarm hapless journalists. Amodei hasn’t done any kind of study or research. He’s just saying stuff, and that’s all it takes to get a headline when you’re the CEO of one of the top two big AI companies.

It is, by the way, easy to cover this ethically, as proven by Allison Morrow of CNN, who, engaging her critical thinking, correctly stated that “Amodei didn’t cite any research or evidence for that 50% estimate,” that “Amodei is a salesman, and it’s in his interest to make his product appear inevitable and so powerful it’s scary,” and that “little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic’s work.”

Morrow’s work is compelling because it’s sincere, and is proof that there is absolutely nothing stopping mainstream press from covering this industry honestly. Instead, Business Insider (which just laid off a ton of people and lazily recommended their workers read books that don’t exist because they can’t even write their own emails without AI), Fortune, Mashable and many other outlets blandly covered a man’s completely made up figure as if it was fact. 

This isn’t a story. It is “guy said thing,” and “guy” happens to be “billionaire behind multi-billion dollar Large Language Model company,” and said company has made exactly jack shit as far as software that can actually replace workers. 

While there are absolutely some jobs being taken by AI, there is, to this point, little or no research that suggests it’s happening at scale, mostly because Large Language Models don’t really do the things that you need them to do to take someone’s job at scale. Nor is it clear whether those jobs were lost because AI — specifically genAI — can actually do them as well as, or better than, a person, or because an imbecile CEO bought into the hype and decided to fire up the pink slip printer, and when those LLMs inevitably shit the bed, those people will be hired back.

You know, like Klarna literally just had to. 

These scare tactics exist to do one thing: increase the value of companies like Anthropic, OpenAI, Microsoft, Salesforce, and anybody else outright lying about how “agents” will do our jobs, and to make it easier for the startups making these models to raise funds, kind of how a pump-and-dump scammer will hype up a doomed penny stock by saying it’s going to the moon, not disclosing that they themselves own a stake in the business.

Let’s look at another example. A recent report from Oxford Economics talked about how entry-level workers were facing a job crisis, and vaguely mentioned in the preview of the report that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.” 

One might think the report says much more than that, and one would be wrong. On the very first page, it says that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.” On page 3, it claims that the “high adoption rate by information companies along with the sheer employment declines in [some roles] since 2022 suggested some displacement effect from AI…[and] digging deeper, the largest displacement seems to be entry-level jobs normally filled by recent graduates.” 

In fact, fuck it, take a look.

[screenshot of the relevant passage from the Oxford Economics report]

That’s it! That’s the entire extent of its proof! The argument is that because companies are getting AI software and there’s employment declines, it must be AI. There you go! Case closed. 

This report has now been quoted as gospel. Axios claimed that Oxford Economics’ report provided “hard evidence” that “AI is displacing white-collar workers.” USA Today said that “positions in computer and mathematical sciences have been the first affected as companies increasingly adopt artificial intelligence systems.”

And Anthropic marketing intern/New York Times columnist Kevin Roose claimed that this was only the tip of the iceberg, because, and I shit you not, he had talked to some guys who said some stuff.

No, really.

In interview after interview, I’m hearing that firms are making rapid progress toward automating entry-level work, and that A.I. companies are racing to build “virtual workers” that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become “A.I.-first,” testing whether a given task can be done by A.I. before hiring a human to do it.

One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by A.I. coding tools. Another told me that his start-up now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company.

Yet Roose’s most egregious bullshit came after he admitted that these don’t prove anything:

Anecdotes like these don’t add up to mass joblessness, of course. Most economists believe there are multiple factors behind the rise in unemployment for college graduates, including a hiring slowdown by big tech companies and broader uncertainty about President Trump’s economic policies.

But among people who pay close attention to what’s happening in A.I., alarms are starting to go off.

That’s right, anecdotes don’t prove his point, but what if other anecdotes proved his point? Because Roose goes on to quote Amodei’s 50% figure, and to note that Anthropic now claims its Claude Opus 4 model can “code for several hours without stopping,” a statement that Roose calls “a tantalizing possibility if you’re a company accustomed to paying six-figure engineer salaries for that kind of productivity” without thinking “does that mean the code is good?” or “what does it do for those hours?”

Roose spends the rest of the article clearing his throat, adding that “even if AI doesn’t take all entry-level jobs right away” that “two trends concern [him],” namely that he worries companies are “turning to AI too early, before the tools are robust enough to handle full entry-level workloads,” and that executives believing that entry-level jobs are short-lived will “underinvest in job training, mentorship and other programs aimed at entry-level workers.” 

Kevin, have you ever considered checking whether that actually happens?

Nah! Why would he? Kevin’s job is to be a greasy pawn of the AI industry and the markets at large. An interesting — and sincere! — version of this piece would’ve intelligently humoured the idea then attempted to actually prove it, and then failed because there is no proof that this is actually happening other than that which the media drums up.

It’s the same craven, insincere crap we saw with the return to office “debate,” which was far more about bosses pretending that the office was good than it was about productivity or any kind of work. I wrote about this almost every week for several years, and every single media outlet participated, on some level, in pushing a completely fictitious world where in-office work was “better” due to “serendipity,” that the boss was right, and that we all had to come back to the office.

Did they check with the boss about how often they were in the office? Nope! Did they give equal weight to those who disagreed with management — namely those doing the actual work? No. But they did get really concerned about quiet quitting for some reason, even though it wasn’t real, because the bosses that don’t seem to actually do any work had demanded that it was.

Anyway, Kevin Roose was super ahead of the curve on that one. In March 2020, he wrote that “working from home is overrated” and that “home-cooked lunches and no commuting…can’t compensate for what’s lost in creativity.” My favourite quote is when he says “...research also shows that what remote workers gain in productivity, they often miss in harder-to-measure benefits like creativity and innovative thinking,” before mentioning some studies about “team cohesion.” He links to a 2017 article from The Atlantic that does not appear to include any study other than the Nicholas Bloom study Roose himself linked (which showed remote work was productive) and another about “proximity boosting productivity” that it does not link to at all, then adds that “the data tend to talk past each other.”

I swear to god I am not trying to personally vilify Kevin Roose — it’s just that he appears to have backed up every single boss-coddling market-driven hype cycle with a big smile, every single time. If he starts writing about Quantum Computing, it’s tits up for AI.

This is the same thing that happened when corporations were raising prices and the media steadfastly claimed that inflation had nothing to do with corporate greed (once again, CNN’s Allison Morrow was one of the few mainstream media reporters willing to just say “yeah corporations actually are raising prices and blaming it on inflation”), desperately clinging to whatever flimsy data might prove that corporations weren’t price gouging even as corporations talked about doing so publicly.

It’s all so deeply insincere, and all so deeply ugly — a view from nowhere, one that seeks not to tell anyone anything other than that whatever the rich and powerful are worried or excited about is true, and that the evidence, no matter how flimsy, always points in the way they want it to.

It’s lazy, brainless, and suggests either a complete rot at the top of editorial across the entire business and tech media or a consistent failure by writers to do basic journalism, and as forgiving as I want to be, there are enough of these egregious issues that I have to begin asking if anybody is actually fucking trying.

It’s the same thing every time the powerful have an idea — remote work is bad for companies and we must return to the office, the metaverse is here and we’re all gonna work in it, prices are higher and it’s due to inflation rather than anything else, AI is so powerful and strong and will take all of our jobs, or whatever it is — and that idea immediately becomes the media’s talking points. Real people in the real world, experiencing a different reality, watch as the media repeatedly tells them that their own experiences are wrong. Companies can raise their prices specifically to raise their profits, Meta can literally fail to make a metaverse, AI can do very little to actually automate your real job, and the media will still tell you to shut the fuck up and eat their truth-slop.

You want an actual conspiracy theory? How about a real one: that the media works together with the rich and powerful to directly craft “the truth,” even if it runs contrary to reality. The Business Idiots that rule our economy — work-shy executives and investors with no real connection to any kind of actual production — are the true architects of what’s “real” in our world, and their demands are simple: “make the news read like we want it to.”

Yet when I say “works together,” I don’t even mean that they get together in a big room and agree on what’s going to be said. Editors — and writers — eagerly await the chance to write something following a trend or a concept that their bosses (or other writers’ bosses) come up with and are ready to go. I don’t want to pillory too many people here, but go and look at who covered the metaverse, cryptocurrency, remote work, NFTs and now generative AI in gushing terms.

Okay, but seriously, how is it every time with Casey and Kevin?

The Illuminati doesn’t need to exist. We don’t need to talk about the Bilderberg Group, or Skull and Bones, or reptilians, or wheel out David Icke and his turquoise shellsuit. The media has become more than willing to follow whatever it needs to once everybody agrees on the latest fad or campaign, to the point that they’ll repeat nonsensical claim after nonsensical claim.

The cycle repeats because our society — and yes, our editorial class too — is controlled by people who don’t actually interact with it. They have beliefs that they want affirmed, ideas that they want spread, and they don’t even need to work that hard to do so, because the editorial rails are already in place to accept whatever the next big idea is. They’ve created editorial class structures to make sure writers will only write what’s assigned, pushing back on anything that steps too far out of everybody’s agreed-upon comfort zone.

The “AI is going to eliminate half of white collar jobs” story is one that’s taken hold because it gets clicks and appeals to a fear that everyone, particularly those in the knowledge economy who have long enjoyed protection from automation, has. Nobody wants to be destitute. Nobody with six figures of college debt wants to be stood in a dole queue.  

It’s a sexy headline, one that scares the reader into clicking, and when you’re doing a half-assed job at covering a study, you can very easily just say “there’s evidence this is happening.” It’s scary. People are scared, and want to know more about the scary subject, so reporters keep covering it again and again, repeating a blatant lie sourced using flimsy data, pandering to those fears rather than addressing them with reality.

It feels like the easiest way to push back on these stories is fairly simple: ask reporters to show the companies that have actually done this.

No, I don’t mean “show me a company that did layoffs and claims they’re bringing in new efficiencies with AI.” I mean actually show me a company that has laid off, say, 10 people, and how those people have been replaced by AI. What does the AI do? How does it work? How do you quantify the work it’s replaced? How does it compare in quality? Surely with all these headlines there’s got to be one company that can show you, right?

No, no, I really don’t mean “we’re saying this is the reason,” I mean show me the actual job replacement happening and how it works. We’re three years in and we’ve got headlines talking about AI replacing jobs. Where? Christopher Mims of the Wall Street Journal had a story from June 2024 that talked about freelance copy editors and concept artists being replaced by generative AI, but I can find no stories about companies replacing employees. 

To be clear, I am not advocating for this to happen. I am simply asking that the media, which seems obsessed with — even excited by — the prospect of imminent large-scale job loss, goes out and finds a business (not a freelancer who has lost work, not a company that has laid people off with a statement about AI) that has replaced workers with generative AI. 

They can’t, because it isn’t happening at scale, because generative AI does not have the capabilities that people like Dario Amodei and Sam Altman repeatedly act like they do, yet the media continues to prop up the story because they don’t have the basic fucking curiosity to learn about what they’re talking about.

Hell, I’ll make it easier for you. Why don’t you find me the product, the actual thing, that can do someone’s job? Can you replace an accountant? No. A doctor? No. A writer? Not if you want good writing. An artist? Not if you want to actually copyright the artwork, and that’s before you get to how weird and soulless the art itself feels. Walk into your place of work tomorrow and look around you and start telling me how you would replace each and every person in there with the technology that exists today, not the imaginary stuff that Dario Amodei and Sam Altman want you to think about.

Outside of coding — which, by the way, is not the majority of a software engineer’s fucking job, if you’d take the god damn time to actually talk to one! — what are the actual capabilities of a Large Language Model today? What can it actually do? 

You’re gonna say “it can do deep research,” by which you mean a product that doesn’t really work. What else? Generate videos that sometimes look okay? “Vibe code”? Bet you’re gonna say something about AI being used in the sciences to “discover new materials,” a study that supposedly proved AI’s productivity benefits. Well, MIT announced that it has “no confidence in the provenance, reliability or validity of the data, and [has] no confidence in the validity of the research contained in the paper.”

I’m not even being facetious: show me something! Show me something that actually matters. Show me the thing that will replace white collar workers — or even, honestly, “reduce the need for them.” Find me someone who said “with a tool like this I won’t need this many people” who actually fired them and then replaced them with the tool and the business keeps functioning. Then find me two or three more. Actually, make it ten, because this is apparently replacing half the white collar workforce.

There are some answers, by the way. Generative AI has sped up transcription and translation, which are useful for quick references but can cause genuine legal risk. Generative AI-based video editing tools are gaining in popularity, though it’s unclear by how much. Seemingly every app that connects to generative AI can summarise a message. Software engineers using LLM tools — as I talked about on a recent episode of Better Offline — are finding some advantages, but LLMs are far from a panacea. Generative AI chatbots are driving people insane by providing them an endlessly-configurable pseudo-conversation too, though that’s less of a “use case” and more of a “text-based video game launched at scale without anybody thinking about what might happen.” 

Let’s be real: none of this is transformative. None of this is futuristic. It’s stuff we already do, done faster, though “faster” doesn’t mean better, or even that the task is done properly, and obviously, it doesn’t mean removing the human from the picture. Generative AI is best at, it seems, doing very specific things in a very generic way, none of which are truly life-changing. Yet that’s how the media discusses it. 

An aside about software engineering: I actually believe LLMs have some value here. LLMs can generate and evaluate code, as well as handle distinct functions within a software engineering environment. It’s pretty exciting for some software engineers - they’re able to get a lot of things done much faster! - though they’d never trust it with things launched in production. These LLMs also have “agents” - but for the sake of argument, I’d like to call them “bots.” Bots, because the term “agent” is bullshit and used to make things sound like they can do more than they can. Anyway, bots can, to quote Thomas Ptacek, “poke around your codebase on their own…author files directly…run tools…compile code…run tests…and iterate on the results,” to name a few things. These are all things - under the watchful eye of an actual person - that can speed up some software engineers’ work.

(A note from my editor, Matt Hughes, who has been a software engineer for a long time: I’m not sure how persuasive this stuff is. Coders have been automating things like tests, code compilation, and the general mechanics of software engineering long before AI and LLMs were the hot thing du jour. You can do so many of the things that Ptacek mentioned with cronjobs and shell scripts — and, undoubtedly, with greater consistency and reliability.)

Ptacek also adds that “if truly mediocre code is all we ever get from LLM, that’s still huge, [as] it’s that much less mediocre code humans have to write.”
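A concrete example of the kind of pre-LLM automation Matt means is a single scheduled job. A minimal sketch, where the project path and the “make test” target are placeholders rather than anything from a real codebase:

    # crontab entry: rebuild and run the test suite every night at 2am,
    # appending the results to a log. No LLM required.
    0 2 * * * cd /home/dev/project && make test >> /var/tmp/nightly-tests.log 2>&1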

Back to Ed: In a conversation with veteran software engineer Carl Brown of The Internet of Bugs as I was writing this newsletter, he recommended I exercise caution with how I discussed LLMs and software engineering, saying that “...there are situations at the moment (unusual problems, or little-used programming languages or frameworks) where the stuff is absolutely useless, and is likely to be for a long time.”

In a previous draft, I’d written that mediocre code was “fine if you knew what to look for,” but even then, Brown added that “...the idea that a human can ‘know what code is supposed to look like’ is truly problematic. A lot of programmers believe that they can spot bugs by visual inspection, but I know I can't, and I'd bet large sums of money they can't either — and I have a ton of evidence I would win that bet.”

Brown continued: “In an offline environment, mediocre code may be fine when you know what good code looks like, but if the code might be exposed to hackers, or you don't know what to look for, you're gonna cause bugs, and there are more bugs than ever in today's software, and that is making everyone on the Internet less secure.”

He also told me the story of the famed Heartbleed bug, a massive vulnerability in a common encryption library that countless smart, professional security experts and developers looked at for over two years before someone spotted a single error — one unchecked statement — that set off a massive, internet-wide panic and left hundreds of millions of websites vulnerable.
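To show just how small “one unchecked statement” can be, here is a minimal sketch of the bug class (illustrative only, not the literal OpenSSL code, with function and variable names of my own invention). The handler trusts a length field that the attacker supplies in their own message:

    /* Sketch of the Heartbleed pattern (CVE-2014-0160), illustrative only. */
    #include <string.h>

    /* msg: attacker-supplied heartbeat record; msg_len: bytes actually received. */
    void handle_heartbeat(const unsigned char *msg, size_t msg_len,
                          unsigned char *reply)
    {
        /* The first two bytes of the message claim how long the payload is. */
        size_t payload_len = ((size_t)msg[0] << 8) | msg[1];

        /* The absent guard, the one statement nobody checked:
           if (payload_len + 2 > msg_len) return;              */

        /* Without it, this copies up to 64KB of adjacent server memory,
           private keys and session cookies included, back to the attacker. */
        memcpy(reply, msg + 2, payload_len);
    }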

So, yeah, I dunno man. On one hand, there are clearly software developers that benefit from using LLMs, but it’s complicated, much like software engineering itself. You cannot just “replace a coder,” because “coder” isn’t really the job, and while this might affect entry-level software engineers at some point, there’s yet to be proof it’s actually happening, or that AI’s taking these jobs and not, say, outsourcing.

Perhaps there’s a simpler way to put it: software engineering is not just writing code, and if you think that’s the case, you do not write software or talk to software engineers about what it is they do. 

Seriously, put aside the money, the hype, the pressure, the media campaigns, the emotions you have, everything, and just focus on the product as it is today. What is it that generative AI does, today, for you? Don’t say “AI could” or “AI will,” tell me what “AI does.” Tell me what has changed about your life, your job, your friends’ jobs, or the world around you, other than that you heard a bunch of people got rich.

Yet the media continually calls it “powerful AI.” Powerful how? Explain the power! What is the power? The word “powerful” is a marketing term that the media has adopted to describe something it doesn’t understand, along with the word “agent,” which means “autonomous AI that can do things for you” but is used, at this point, to describe any Large Language Model doing anything. 

But the intention is to frame these models as “powerful” and to use the term “agents” to make this technology seem bigger than it is, and the people that control those terms are the AI companies themselves.

It’s at best lazy and at worst actively deceitful, a failure of modern journalism to successfully describe the moment outside of what they’re told to, or the “industry standards” they accept, such as “a Large Language Model is powerful and whatever Anthropic or OpenAI tells me is true.”

It’s a disgrace, and I believe it either creates distrust in the media or drives people insane as they look at reality, where generative AI doesn’t really seem to be doing much, and get told something entirely different by the media.


When I read a lot of modern journalism, I genuinely wonder what it is the reporter wants to convey. A thought? A narrative? A story? Some sort of regurgitated version of “the truth” as justified by what everybody else is writing and how your editor feels, or what the markets are currently interested in? What is it that writers want readers to come away with, exactly?

It reminds me a lot of a term that Defector’s David Roth once used to describe CNN’s Chris Cillizza — “politics, noticed”:

This feels, from one frothy burble to the next, like a very specific type of fashion writing, not of the kind that an astute critic or academic or even competent industry-facing journalist might write, but of the kind that you find on social media in the threaded comments attached to photos of Rihanna. Cillizza does not really appear to follow any policy issue at all, and evinces no real insight into electoral trends or political tactics. He just sort of notices whatever is happening and cheerfully announces that it is very exciting and that he is here for it. The slugline for his blog at CNN—it is, in a typical moment of uncanny poker-faced maybe-trolling, called The Point—is “Politics, Explained.” That is definitely not accurate, but it does look better than the more accurate “Politics, Noticed.”

Whether Roth would agree or not, I believe that this paragraph applies to a great deal of modern journalism. Oh! Anthropic launched a new model! Delightful. What does it do? Oh they told me, great, I can write it down. It’s even better at coding now! Wow! Also, Anthropic’s CEO said something, which I will also write down. The end!

I’ll be blunt: making no attempt to give actual context or scale or consideration to the larger meaning of the things said makes the purpose of journalism moot. Business and tech journalism has become “technology, noticed.” While there are forays out of this cul-de-sac of credulity — and exceptions at many mainstream outlets — there are so many more people who will simply hear that there’s a guy who said a thing, and that guy is rich and runs a company people respect, and thus that statement is now news to be reported without commentary or consideration.

Much of this can be blamed on the editorial upper crust that continually refuses to let writers critique their subject matter, and wants to “play it safe” by basically doing what everybody else does. What’s crazy to me is that many of the problems with the AI bubble — as with the metaverse, as with the return to office, as with inflation and price gouging — are obvious if you actually use the things or participate in reality, but such things do not always fit with the editorial message.

But honestly, there are plenty of writers who just don’t give a shit. They don’t really care to find out what AI can (or can’t) do. They’ve come to their conclusion (it’s powerful, inevitable, and already doing amazing things) and thus will write from that perspective. It’s actually pretty nefarious to continually refer to this stuff as “powerful,” because you know their public justification is how this stuff uses a bunch of GPUs, and you know their private justification is that they have never checked and don’t really care to. It’s much easier to follow the pack, because everybody “needs to cover AI” and AI stories, I assume, get clicks.

That, and their bosses, who don’t really know anything other than that “AI will be big,” don’t want to see anything else. Why argue with the powerful? They have all the money.

But even then…can you try using it? Or talking to people that use it? Not “AI experts” or “AI scientists,” but real people in the real world? Talk to some of those software engineers! Or I dunno, learn about LLMs yourself and try them out? 

Ultimately, a business or tech reporter should ask themselves: what is your job? Who do you serve? It’s perfectly fine to write relatively straightforward and positive stuff, but you have to be clear that that’s what you’re doing and why you’re doing it. 

And you know what, if all you want to do is report what a company does, fine! I have no problem with that, but at least report it truthfully. If you’re going to do an opinion piece suggesting that AI will take our jobs, at least live in reality, and put even the smallest amount of thought into what you’re saying and what it actually means. 

This isn’t even about opinion or ideology, this is basic fucking work. 

And it is fundamentally insincere. Is any of this what you truly believe? Do you know what you believe? I don’t mean this as a judgment or an attack — many people go through their whole lives with relatively flimsy reasons for the things they believe, especially in the case of commonly-held beliefs like “AI is going to be big” or “Meta is a successful company.” 

If I’m honest, I really don’t mind if you don’t agree with something I say, as long as you have a fundamentally-sound reason for doing so. My CoreWeave analysis may seem silly to some because the stock has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock. Its success does not say much about the AI bubble other than that it continues, and even if I am somehow wrong in the long term, at least I was wrong for reasons I could argue, versus the general-purpose sense that “AI is the biggest thing ever.”

I understand formats can be constraining — many outlets demand an objective tone — but this is where words like “allegedly” come in. For example, The Wall Street Journal recently said that Sam Altman had claimed, in a leaked recording, that buying Jony Ive’s pre-product hardware startup would add “$1 trillion in market value” to OpenAI. As it stands, a reader — especially a Business Idiot — could be forgiven for thinking that OpenAI was now worth, or could be worth, over a trillion dollars, which is an egregious editorial failure.

One could easily add that “...to this date, there have been no consumer hardware launches at this scale outside of major manufacturers like Apple and Google, and these companies had significantly larger research and development budgets and already-existent infrastructure relationships that OpenAI lacks.”

Nothing about what I just said is opinion. Nothing about what I just said is an attack, or a slight, and if you think it’s “undermining” the story, you yourself are not thinking objectively. These are all true statements, and are necessary to give the full context of the story.

That, to me, is sincerity. Constrained by an entirely objective format, a reporter makes the effort to get across the context in which a story is happening, rather than just reporting exactly the story and what the company has said about it. By not including the context, you are, on some level, not being objective: you are saying that everything that’s happening here isn’t just possible, but rational, despite the ridiculous nature of Altman’s comment. 

Note that “possible” and “rational” are subjective statements. They are also the implication of simply stating that Sam Altman believes acquiring Jony Ive’s company will add $1 trillion in value to OpenAI. By not saying how unlikely it is — again, without even saying the word “unlikely,” but allowing the audience to come to that conclusion by having the whole story — you give the audience the truth.

It really is that simple.


The problem, ultimately, is that everybody is aware that they’re being constantly conned, but they can’t always see where and why. Their news oscillates from aggressively dogmatic to a kind of sludge-like objectivity, and oftentimes feels entirely disconnected from their own experiences other than in the most tangential sense, giving them the feeling that their actual lives don’t really matter to the world at large. 

On top of that, the basic experience of interacting with technology, if not the world at large, kind of fucking sucks now. We go on Instagram or Facebook to see our friends and battle through a few ads and recommended content, we see things from days ago until we click stories, and we hammer past a few more ads to get a few glimpses of our friends. We log onto Microsoft Teams, it takes a few seconds to go through after each click, and then it asks why we’re not logged in, a thing that we don’t need to be able to do to make a video call. 

Our email accounts are clogged with legal spam — marketing missives, newsletters, summaries from news outlets, notifications from UPS that require us to log in, notifications that our data has been leaked, payment reminders, receipts, and even occasionally emails from real people. Google Search is broken, but then again, so is searching on basically any platform, be it our emails, workspaces or social networks. 

At scale, we as human beings are continually reminded that we do not matter, that any experiences of ours outside of what the news says make us “different” or a “cynic,” that our pain points are only as relevant as those that match recent studies or reports, and that the people who actually matter are the powerful, or those considered worthy of attention. News rarely feels like it appeals to the actual listener, reader or viewer, just to an amorphous, generalized “thing” of a person imagined in the mind of a Business Idiot. The news doesn’t feel the need to explain why AI is powerful, just that it is, in the same way that “we all knew” that being back in the office was better, even if far more people disagreed than agreed.

As a result of all of these things, people are desperate for sincerity. They’re desperate to be talked to as human beings, their struggles validated, their pain points confronted and taken seriously. They’re desperate to have things explained to them with clarity, and to have it done by somebody who doesn’t feel chained by an outlet. 

This is something that right wing media caught onto and exploited, leading to the rise of Donald Trump and the obsession with creating the “Joe Rogan of the Left,” an inherently ridiculous idea based on Rogan’s popularity with young men (which recent reports call into question) and on a total misunderstanding of what actually makes his kind of media popular.

However you may feel about Rogan, what his show sells is that he’s a kind of sincere, pliant and amiable oaf. He does not seem condescending or judgmental to his audience, because he himself sits, slack-jawed, saying “yeah I knew a guy who did that,” and he genuinely seems to like his guests. While you (as I do) may deeply dislike everything on that show, you can’t deny that the people on it seem to at least enjoy themselves, or feel engaged and accepted.

The same goes for Theo Von (real name: Theodor Capitani von Kurnatowski III, and no, really!), whose whole affable doofus motif disarms guests and listeners. 

It works! And he’s got a whole machine that supports him, just like Rogan: money, real promotion, and real production value. They are given the bankroll and the resources to make a high-end production, a studio space, and infrastructural support, and then they get a bunch of marketing and social push too. There are entire operations behind them, beyond the literal stuff they do on set, because, shocker, the audience actually wants to see them not have a boxed lunch with “THE THINGS TO BELIEVE” written on it by a management consultant.

This is in no way a political statement, because my answer to this entire vacuous debate is to “give a diverse group of people whose beliefs you agree with the actual promotional and financial backing, and then let them create something with their honest-to-god friendships.” Bearing witness to actual love and solidarity is what will change the hearts of young people, not endless McKinsey gargoyles with multi-million-dollar budgets for “data.”

I should be clear that this isn’t to say every single podcast should be in the format I suggest, but that if you want whatever “The Joe Rogan Of The Left” is, the answer is “a podcast with a big audience where the people like the person speaking and as a result are compelled by their message.” 

It isn’t even about politics, it’s that when you cram a bunch of fucking money into something it tends to get big, and if that thing you create is a big boring piece of shit that’s clearly built to be — and even signposted in the news as built to be — manipulative, it is in and of itself sickening.

I’m gonna continue clearing my throat: the trick here is not to lean right, nor has it ever been. Find a group of people who are compelling, diverse and genuinely enjoy being around each other and shove a whole bunch of advertising dollars into it and give it good production values to make it big, and then watch in awe as suddenly lots of people see it and your message spreads. Put a fucking trans person in there — give Western Kabuki real money, for example — and watch as people suddenly get used to seeing a trans person because you intentionally chose to do so, but didn’t make it weird or get upset when they don’t immediately vote your way. 

Because guess what — what people are hurting for right now is actual, real sincerity. Everybody feels like something is wrong. The products they use every day are increasingly-broken, pumped full of generative AI features that literally get in the way of what they’re trying to do, which already was made more difficult because companies like Meta and Google intentionally make their products harder to use as a means of making more money.  And, let’s be clear, people are well aware of the billions in profits that these companies make at the customer’s expense. 

They feel talked down to, tricked, conned, abused and abandoned, both parties’ representatives operating in terms almost as selfish as the markets that they also profit from. They read articles that blandly report illegal or fantastical things as permissible and rational and think, for a second, “am I wrong? Is this really the case? This doesn’t feel like the case?” while somebody tells them that despite the fact that they have less money, and said money doesn’t go as far, they’re actually experiencing the highest standard of living in history.

Ultimately, regular people are repeatedly made to feel like they don’t matter. Their products are overstuffed with confusing menus, random microtransactions, the websites they read full of advertisements disguised as stories and actual advertisements built to trick them, their social networks intentionally separating them from the things they want to see. 

And when you feel like you don’t matter, you look to other human beings, and other human beings are terrified of sincerity. They’re terrified of saying they’re scared, they’re angry, they’re sad, they’re lonely, they’re hurting, they’re constantly on a fucking tightrope. Every day feels like something weird or bad is going to happen on the news (which, for no reason other than that it helps rich people, constantly tries to scare them that AI will take their jobs), and they just want someone to talk to, but everybody else is fucking unwilling to let their guard down after a decade-plus of media that valorized snark and sarcasm, because the lesson they learned about being emotionally honest was that it’s weird, or they’re too much, or it’s feminine for guys or it’s too feminine for women.

Of course people feel like shit, so of course they’re going to turn to media that feels like real people made it, and they’ll turn to whatever media is easiest to find: what the algorithm serves them, what advertising puts in front of them, or, of course, what arrives by word of mouth. And if someone describes a show in terms that sound like hanging out with a friend, you’d probably give it a shot.

Outside of podcasting, people’s options for mainstream (and an alarming amount of industry) news are somewhere between “I’m smarter than you,” “something happened!” “sneering contempt,” “a trip to the principal’s office,” or “here’s who you should be mad at,” which I realize also describes the majority of the New York Times opinion page. 

While “normies” of whatever political alignment might want exactly the slop they get on TV, that slop is only slop because the people behind it believe that regular people will only accept the exact median person’s version of the world, even if they can’t really articulate it beyond “whatever is the least-threatening opinion” (or the opposite in Fox News’ case).

Really, I don’t have a panacea for what ails media, but what I do know is that in my own life I have found great joy in sincerity and love. In the last year I have made — and will continue to make, as it’s my honour to — tremendous effort to get to know the people closest to me, to be there for them if I can, to try and understand them better and to be my authentic and honest self around them, and accept and encourage them doing the same. Doing so has improved my life significantly, made me a better, more confident and more loving person, and I can only hope I provide the same level of love and acceptance to them as they do to me.

Even writing that paragraph I felt the urge to pare it back, for fear that someone would accuse me of being insincere, of “speaking in therapy language,” of “trying to sound like a hero,” not that I am doing so, but because there are far more people concerned with policing how emotional and sincere others are than there are people willing to stop actual societal harms.

I think it’s partly because people see emotions as weakness. I don’t agree. I have never felt stronger and more emboldened than I have as I feel more love and solidarity with my friends, a group that I try to expand at any time I can. I am bolder, stronger (both physically and mentally), and far happier, as these friendships have given me the confidence to be who I am, and I offer the same aggressive advocacy to my friends in being who they are as they do to me. 

None of what I am saying is a one-size-fits-all solution. There is so much room for smaller, more niche projects, and I both encourage and delight in them. There is also so much more attention that can be given to these niche projects, and things are only “niche” until they are given the time in the light to become otherwise. There is also so much more that can be done within the mainstream power structures, if only there is the boldness to do so.

Objective reporting is necessary — crucial, in fact! — to democracy, but said objectivity cannot come at the cost of context, and every time it does so, the reader is failed and the truth is suffocated. And I don’t believe objective reporting should be separated from actual commentary. In fact, if someone is a reporter on a particular beat, their opinion is likely significantly more-informed than that of someone “objective” and “outside of the coverage,” based on stuff like “domain expertise.” 

The true solution, perhaps, is more solidarity and more sincerity. It’s media outlets that back up their workers, with editorial missions that aggressively fight those who would con their readers or abuse their writers, focusing on the incentives and power of those they’re discussing rather than whether or not “the markets” agree with their sentiment.

In any case, the last 15+ years of media have led to a flattening of journalism, constantly swerving toward whatever the next big trend is — the pivot to video, contorting content to “go viral” on social media, SEO, or whatever big coverage area (AI, for example) everybody is chasing instead of focusing on making good shit people love. Years later, social networks have effectively given up on sending traffic to news, and now Google’s AI summaries are ripping away large chunks of the traffic of major media outlets that decided the smartest way to do their jobs was “make content for machines to promote,” never thinking for a second that those who owned the machines were never to be trusted.

Worse still, outlets have drained the voices from their reporters, punishing them for having opinions, ripping out anything that might resemble a personality from their writing to meet some sort of vague “editorial voice” despite readers and viewers again and again showing that they want to read the news from a human being not an outlet.

I maintain that things can change for the better, and it starts with a fundamental acceptance that those running the vast majority of media outlets aren’t doing so for their readers’ benefit. Once that happens, we can rebuild around distinct voices, meaningful coverage and a sense of sincerity that the mainstream media seems to consider the enemy. 


AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums


AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline, according to a new survey published today. While the impact of AI bots on open collections has been reported anecdotally, the survey is the first attempt at measuring the problem, which in the worst cases can make valuable, public resources unavailable to humans because the servers they’re hosted on are being swamped by bots scraping the internet for AI training data. 

“I'm confident in saying that this problem is widespread, and there are a lot of people and institutions who are worried about it and trying to think about what it means for the sustainability of these resources,” the author of the report, Michael Weinberg, told me. “A lot of people have invested a lot of time not only in making these resources available online, but building the community around institutions that do it. And this is a moment where that community feels collectively under threat and isn't sure what the process is for solving the problem.”

The report, titled “Are AI Bots Knocking Cultural Heritage Offline?” was written by Weinberg of the GLAM-E Lab, a joint initiative between the Centre for Science, Culture and the Law at the University of Exeter and the Engelberg Center on Innovation Law & Policy at NYU Law, which works with smaller cultural institutions and community organizations to build open access capacity and expertise. GLAM is an acronym for galleries, libraries, archives, and museums. The report is based on a survey of 43 institutions with open online resources and collections in Europe, North America, and Oceania. Respondents also shared data and analytics, and some followed up with individual interviews. The data is anonymized so institutions could share information more freely, and to prevent AI bot operators from undermining their countermeasures.  

💡
Do you know anything else about AI scrapers? I would love to hear from you. Using a non-work device, you can message me securely on Signal at ‪@emanuel.404‬. Otherwise, send me an email at emanuel@404media.co.

Of the 43 respondents, 39 said they had experienced a recent increase in traffic. Twenty-seven of those 39 attributed the increase in traffic to AI training data bots, with an additional seven saying the AI bots could be contributing to the increase. 

“Multiple respondents compared the behavior of the swarming bots to more traditional online behavior such as Distributed Denial of Service (DDoS) attacks designed to maliciously drive unsustainable levels of traffic to a server, effectively taking it offline,” the report said. “Like a DDoS incident, the swarms quickly overwhelm the collections, knocking servers offline and forcing administrators to scramble to implement countermeasures. As one respondent noted, ‘If they wanted us dead, we’d be dead.’”

One respondent estimated that their collection experienced one DDoS-style incident every day that lasted about three minutes, saying this was highly disruptive but not fatal for the collection. 

“The impact of bots on the collections can also be uneven. Sometimes, bot traffic knocks entire collections offline,” the report said. “Other times, it impacts smaller portions of the collection. For example, one respondent’s online collection included a semi-private archive that normally received a handful of visitors per day. That archive was discovered by bots and immediately overwhelmed by the traffic, even though other parts of the system were able to handle similar volumes of traffic.”

Thirty-two respondents said they are taking active measures to prevent bots. Seven indicated that they are not taking measures at this time, and four were either unsure or currently reviewing potential options. 

The report makes clear that it can’t provide a comprehensive picture of the AI scraping bot issue, but the problem is clearly widespread, though not universal. The report notes that one inherent issue in measuring the problem is that organizations are often unaware bots are scraping their collections until they are flooded with enough traffic to degrade the performance of their site.

“In practice, this meant that many respondents woke up one morning to an unexpected stream of emails from users that the collection was suddenly, fully offline, or alerts that their servers had been overloaded,” the report said. “For many respondents, especially those that started experiencing bot traffic earlier, this system failure was their first indication that something had changed about the online environment.”

Just last week, the University of North Carolina at Chapel Hill (UNC) published a blog post that described how it handled this exact scenario, which it attributed to AI bot scrapers. On December 2, 2024, the University Libraries’ online catalog “was receiving so much traffic that it was periodically shutting out students, faculty and staff, including the head of User Experience,” according to the school. “It took a team of seven people and more working almost a full week to figure out how to stop this stuff in the first instance,” said Tim Shearer, an associate University librarian for Digital Strategies & Information Technology. “There are lots of institutions that do not have the dedicated and brilliant staff that we have, and a lot of them are much more vulnerable.”

According to the report, one major problem is that AI scraping bots ignore robots.txt, a voluntary protocol that sites can use to tell automated tools, including these bots, not to scrape them.

“The protocol has not proven to be as effective in the context of bots building AI training datasets,” the report said. “Respondents reported that robots.txt is being ignored by many (although not necessarily all) AI scraping bots. This was widely viewed as breaking the norms of the internet, and not playing fair online.”
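
For readers who haven’t dealt with it, robots.txt is just a plain-text file served from a site’s root that lists which crawlers may fetch which paths. A minimal example aimed at AI training crawlers might look like the sketch below; the user-agent tokens shown are ones that crawler operators such as OpenAI, Anthropic, Common Crawl, and Google publish, and, as the report stresses, honoring them is entirely voluntary:

```
# Served at https://example-museum.org/robots.txt (hypothetical site).
# Each token below is published by a crawler operator; whether the
# crawler obeys these rules is voluntary, which is the report's point.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everything else may crawl, except the collection's search endpoint.
User-agent: *
Disallow: /search
```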

We’ve previously reported that robots.txt is not a perfect method for stopping bots, despite more sites than ever using the tool because of AI scraping. UNC, for example, said it deployed a new, “AI-based” firewall to handle the scrapers. 

Making this problem worse is that many of the organizations that are being swamped by bot traffic are reluctant to require users to log in, or complete CAPTCHA tests to prove they’re human before accessing resources, because that added friction will make people less likely to access the materials. In other cases, even if institutions did want to implement some kind of friction, it might not have the resources to do so. 
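
The report doesn’t endorse a specific countermeasure, but one lower-friction option than logins or CAPTCHAs is rate limiting, which slows swarming clients without asking humans to prove anything. Below is a minimal sketch as WSGI middleware in Python; the window and request limit are placeholder assumptions, and a real deployment would need proxy-aware client identification and shared state across servers:

```python
# Illustrative sketch, not taken from the report: a naive per-IP
# rate limiter as WSGI middleware. Values below are assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 120  # per IP per window; tune for real human usage


class RateLimiter:
    def __init__(self, app):
        self.app = app
        self.hits = defaultdict(deque)  # ip -> recent request timestamps

    def __call__(self, environ, start_response):
        ip = environ.get("REMOTE_ADDR", "unknown")
        now = time.monotonic()
        q = self.hits[ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_REQUESTS:
            start_response("429 Too Many Requests",
                           [("Content-Type", "text/plain"),
                            ("Retry-After", str(WINDOW_SECONDS))])
            return [b"Too many requests; please slow down.\n"]
        q.append(now)
        return self.app(environ, start_response)

# Usage (hypothetical): wrap an existing WSGI app.
# app = RateLimiter(app)
```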

“I don't think that people appreciate how few people are working to keep these collections online, even at huge institutions,” Weinberg told me. “It's usually an incredibly small team, one person, half a person, half a person, plus, like their web person who is sympathetic to what's going on. GLAM-E Lab's mission is to work with small and medium sized institutions to get this stuff online, but as people start raising concerns about scraping on the infrastructure, it's another reason that an institution can say no to this.”




I Tried Pre-Ordering the Trump Phone. The Page Failed and It Charged My Credit Card the Wrong Amount


On Monday the Trump Organization announced its own mobile service plan and the “T1 Phone,” a customized all-gold mobile phone that its creators say will be made in America.

I tried to pre-order the phone and pay the $100 downpayment, hoping to test the phone to see what apps come pre-installed, how secure it really is, and what components it includes when it comes out. The website failed, went to an error page, and then charged my credit card the wrong amount of $64.70. I received a confirmation email saying I’ll receive a confirmation when my order has been shipped, but I haven’t provided a shipping address or paid the full $499 price tag. It is the worst experience I’ve ever faced buying a consumer electronic product and I have no idea whether or how I’ll receive the phone.

“Trump Mobile is going to change the game, we’re building on the movement to put America first, and we will deliver the highest levels of quality and service. Our company is based right here in the United States because we know it’s what our customers want and deserve,” Donald Trump Jr., EVP of the Trump Organization, and obviously one of President Trump’s sons, said in a press release announcing Trump Mobile.


RNC Sued Over WinRed's Constant 'ALL HELL JUST BROKE LOOSE!' Fundraising Texts


This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

A family in Utah is suing the Republican National Committee for sending unhinged text messages soliciting donations to Donald Trump’s campaign and continuing to text them even after they tried to unsubscribe.

“From Trump: ALL HELL JUST BROKE LOOSE! I WAS CONVICTED IN A RIGGED TRIAL!” one example text message in the complaint says. “I need you to read this NOW” followed by a link to a donation page.


The complaint, seeking to become a class-action lawsuit and brought by Utah residents Samantha and Cari Johnson, claims that the RNC, through the affiliated small-donations platform WinRed, violates the Utah Telephone and Facsimile Solicitation Act because the law states “[a] telephone solicitor may not make or cause to be made a telephone solicitation to a person who has informed the telephone solicitor, either in writing or orally, that the person does not wish to receive a telephone call from the telephone solicitor.”

The Johnsons claim that the RNC sent Samantha 17 messages from 16 different phone numbers, nine of them arriving after she had demanded 12 times that the messages stop. Cari received 27 messages from 25 numbers, they claim, and sent 20 stop requests. The National Republican Senatorial Committee, National Republican Congressional Committee, and Congressional Leadership Fund also sent a slew of texts and similarly didn’t stop after multiple requests, the complaint says.

On its website, WinRed says it’s an “online fundraising platform supported by a united front of the Trump campaign, RNC, NRSC, and NRCC.” 

A chart from the complaint showing the number of times the RNC and others have texted the plaintiffs.

“Defendants’ conduct is not accidental. They knowingly disregard stop requests and purposefully use different phone numbers to make it impossible to block new messages,” the complaint says.

The complaint also cites posts other people have made on X.com complaining about WinRed’s texts. A quick search for WinRed on X today shows many more people complaining about the same issues. 


“I’m seriously considering filing a class action lawsuit against @WINRED. The sheer amount of campaign txts I receive is astounding,” one person wrote on X. “I’ve unsubscribed from probably thousands of campaign texts to no avail. The scam is, if you call Winred, they say it’s campaign initiated. Call campaign, they say it’s Winred initiated. I can’t be the only one!”

Last month, Democrats on the House Judiciary, Oversight and Administration Committees asked the Treasury Department to provide evidence of “suspicious transactions connected to a wide range of Republican and President Donald Trump-aligned fundraising platforms” including WinRed, Politico reported.   

In July 2024, a day after an assassination attempt on Trump during a rally in Pennsylvania, WinRed changed its landing page to all-black with the Trump campaign logo and a black-and-white photograph of Trump raising his fist with blood on his face. “I am Donald J. Trump,” text on the page said. “FEAR NOT! I will always love you for supporting me.”

CNN investigated campaign donation text messaging schemes including WinRed in 2024, and found that the elderly were especially vulnerable to the inflammatory, constant messaging from politicians through text messages begging for donations. And Al Jazeera uncovered FEC records showing people were repeatedly overcharged by WinRed, with one person the outlet spoke to claiming he was charged almost $90,000 across six different credit cards despite thinking he’d only donated small amounts occasionally. “Every single text link goes to WinRed, has the option to ‘repeat your donation’ automatically selected, and uses shady tactics and lies to trick you into clicking on the link,” another donor told Al Jazeera in 2024. “Let’s just say I’m very upset with WinRed. In my view, they are deceitful money-grabbing liars.” 

And in 2020, a class action lawsuit against WinRed made similar claims, but was later dismissed.


Emails Reveal the Casual Surveillance Alliance Between ICE and Local Police

📄 This article was primarily reported using public records requests. We are making it available to all readers as a public service. FOIA reporting can be expensive; please consider subscribing to 404 Media to support this work. Or send us a one-time donation via our tip jar here.

Local police in Oregon casually offered various surveillance services to federal law enforcement officials from the FBI and ICE, and to other state and local police departments, as part of an informal email and meetup group of crime analysts, internal emails shared with 404 Media show. 

In the email thread, crime analysts from several local police departments and the FBI introduced themselves to each other and made lists of surveillance tools and tactics they have access to and felt comfortable using, and in some cases offered to perform surveillance for their colleagues in other departments. The thread also includes a member of ICE’s Homeland Security Investigations (HSI) and members of Oregon’s State Police. In the thread, called the “Southern Oregon Analyst Group,” some members talked about making fake social media profiles to surveil people, and others discussed being excited to learn and try new surveillance techniques. The emails show both the wide array of surveillance tools available to even small police departments in the United States and the informal collaboration between local police departments and federal agencies, when ordinarily agencies like ICE are expected to follow their own legal processes to carry out such surveillance.

In one case, a police analyst for the city of Medford, Oregon, performed Flock automated license plate reader (ALPR) lookups for a member of ICE’s HSI; later, that same police analyst asked the HSI agent to search for specific license plates in DHS’s own border crossing license plate database. The emails show how casual and informal these partnerships between police departments and federal law enforcement can be, which may help explain the mechanics of how local police around the country are performing Flock automated license plate reader lookups for ICE and HSI even though neither agency has a contract to use the technology, which 404 Media reported last month.

An email showing HSI asking for a license plate lookup from police in Medford, Oregon

Kelly Simon, the legal director for the American Civil Liberties Union of Oregon, told 404 Media, “I think it’s a really concerning thread to see, in such a black-and-white way. I have certainly never seen such informal, free-flowing of information that seems to be suggested in these emails.”

In that case, in 2021, a crime analyst with HSI emailed an analyst at the Medford Police Department with the subject line “LPR Check.” The email from the HSI analyst, who is also based in Medford, said they were told to “contact you and request a LPR check on (2) vehicles,” and then listed the license plates of two vehicles. “Here you go,” the Medford Police Department analyst responded with details of the license plate reader lookup. “I only went back to 1/1/19, let me know if you want me to check further back.” In 2024, the Medford police analyst emailed the same HSI agent and told him that she was assisting another police department with a suspected sex crime and asked him to “run plates through the border crossing system,” meaning the federal ALPR system at the Canada-US border. “Yes, I can do that. Let me know what you need and I’ll take a look,” the HSI agent said. 

More broadly, the emails, obtained using a public records request by Information for Public Use, an anonymous group of researchers in Oregon who have repeatedly uncovered documents about government surveillance, reveal the existence of the “Southern Oregon Analyst Group.” The emails span between 2021 and 2024 and show local police eagerly offering various surveillance services to each other as part of their own professional development. 

In a 2023 email thread where different police analysts introduced themselves, they explained to each other what types of surveillance software they had access to, which ones they use the most often, and at times expressed an eagerness to try new techniques. 


“This is my first role in Law Enforcement, and I've been with the Josephine County Sheriff's Office for 6 months, so I'm new to the game,” an email from a former Pinkerton security contractor to officials at 10 different police departments, the FBI, and ICE, reads. “Some tools I use are Flock, TLO, Leads online, WSIN, Carfax for police, VIN Decoding, LEDS, and sock puppet social media accounts. In my role I build pre-raid intelligence packages, find information on suspects and vehicles, and build link charts showing connections within crime syndicates. My role with [Josephine Marijuana Enforcement Team] is very intelligence and research heavy, but I will do the occasional product with stats. I would love to be able to meet everyone at a Southern Oregon analyst meet-up in the near future. If there is anything I can ever provide anyone from Josephine County, please do not hesitate to reach out!” The surveillance tools listed here include automatic license plate reading technology, social media monitoring tools, people search databases, and car ownership history tools. 

An investigations specialist with the Ashland Police Department messaged the group, said she was relatively new to performing online investigations, and said she was seeking additional experience. “I love being in a support role but worry patrol doesn't have confidence in me. I feel confident with searching through our local cad portal, RMS, Evidence.com, LeadsOnline, carfax and TLO. Even though we don't have cameras in our city, I love any opportunity to search for something through Flock,” she said. “I have much to learn with sneaking around in social media, and collecting accurate reports from what is inputted by our department.”


A crime analyst with the Medford Police Department introduced themselves to the group by saying “The Medford Police Department utilizes the license plate reader systems, Vigilant and Flock. In the next couple months, we will be starting our transition to the Axon Fleet 3 cameras. These cameras will have LPR as well. If you need any LPR searches done, please reach out to me or one of the other analysts here at MPD. Some other tools/programs that we have here at MPD are: ESRI, Penlink PLX, CellHawk, TLO, LeadsOnline, CyberCheck, Vector Scheduling/CrewSense & Guardian Tracking, Milestone XProtect city cameras, AXON fleet and body cams, Lexipol, HeadSpace, and our RMS is Central Square (in case your agency is looking into purchasing any of these or want more information on them).”

A fourth analyst said “my agency uses Tulip, GeoShield, Flock LPR, LeadsOnline, TLO, Axon fleet and body cams, Lexipol, LEEP, ODMap, DMV2U, RISS/WSIN, Crystal Reports, SSRS Report Builder, Central Square Enterprise RMS, Laserfiche for fillable forms and archiving, and occasionally Hawk Toolbox.” Several of these tools are enterprise software solutions for police departments, which include things like police report management software, report creation software, and stress management and wellbeing software, but many of them are surveillance tools.  

At one point in the 2023 thread, an intelligence analyst in the FBI’s Portland office chimed in, introduced himself, and said, “I think I’ve been in contact with most folks on this email at some point in the past […] I look forward to further collaboration with you all.”

The email thread was also used to plan in-person meetups and a “mini-conference” last year that featured a demo from CrimeiX, a company that makes a police information-sharing tool.

A member of Information for Public Use told 404 Media, “it’s concerning to me to see them building a network of mass surveillance.”

“Automated license plate recognition software technology is something that in and of itself, communities are really concerned about,” the member of Information for Public Use said. “So I think when we combine this very obvious mass surveillance technology with a network of interagency crime analysts that includes local police who are using sock puppet accounts to spy on anyone and their mother and then that information is being pretty freely shared with federal agents, you know, including Homeland Security Investigations, and we see the FBI in the emails as well. It's pretty disturbing.” They added, as we have reported before, that many of these technologies were deployed under previous administrations but have become even more alarming when combined with the fact that the Trump administration has changed the priorities of ICE and Homeland Security Investigations. 

“The whims of the federal administration change, and this technology can be pointed in any direction,” they said. “Local law enforcement might be justifying this under the auspices of we're fighting some form of organized crime, but one of the crimes HSI investigates is work site enforcement investigations, which sound exactly like the kind of raids on workplaces that like the country is so upset about right now.”

Simon, of ACLU Oregon, said that such informal collaboration is not supposed to be happening in Oregon.

“We have, in Oregon, a lot of really strong protections that ensure that our state resources, including at the local level, are not going to support things that Oregonians disagree with or have different values around,” she said. “Oregon has really strong firewalls between local resources, and federal resources or other state resources when it comes to things like reproductive justice or immigrant justice. We have really strong shield laws, we have really strong sanctuary laws, and when I see exchanges like this, I’m very concerned that our firewalls are more like sieves because of this kind of behind-the-scenes, lax approach to protecting the data and privacy of Oregonians.”

Simon said that collaboration between federal and local cops on surveillance should happen “with the oversight of the court. Getting a warrant to request data from a local agency seems appropriate to me, and it ensures there’s probable cause, that the person whose information is being sought is sufficiently suspected of a crime, and that there are limits to the scope and amount of information that’s being sought and specifics about what information is being sought. That’s the whole purpose of a warrant.”

Over the last several weeks, our reporting has led multiple municipalities to reconsider how the license plate reading technology Flock is used, and it has spurred an investigation by the Illinois Secretary of State’s office into the legality of using Flock cameras in the state for immigration-related searches, because Illinois specifically forbids local police from assisting federal police on immigration matters.

404 Media contacted all of the police departments on the Southern Oregon Analyst Group for comment and to ask them about any guardrails they have for the sharing of surveillance tools across departments or with the federal government. Geoffrey Kirkpatrick, a lieutenant with the Medford Police Department, said the group is “for professional networking and sharing professional expertise with each other as they serve their respective agencies.” 

“The Medford Police Department’s stance on resource-sharing with ICE is consistent with both state law and federal law,” Kirkpatrick said. “The emails retrieved for that 2025 public records request showed one single instance of running LPR information for a Department of Homeland Security analyst in November 2021. Retrieving those files from that single 2021 matter to determine whether it was a DHS case unrelated to immigration, whether a criminal warrant existed, etc would take more time than your publication deadline would allow, and the specifics of that one case may not be appropriate for public disclosure regardless.” (404 Media reached out to Medford Police Department a week before this article was published.)

A spokesperson for the Central Point Police Department said it “utilizes technology as part of investigations, we follow all federal, state, and local law regarding use of such technology and sharing of any such information. Typically we do not use our tools on behalf of other agencies.”

A spokesperson for Oregon’s Department of Justice said it did not have comment and does not participate in the group. The other police departments in the group did not respond to our request for comment.
