
Quoting Tom Dale


I don't know why this week became the tipping point, but nearly every software engineer I've talked to is experiencing some degree of mental health crisis.

[...] Many people assuming I meant job loss anxiety but that's just one presentation. I'm seeing near-manic episodes triggered by watching software shift from scarce to abundant. Compulsive behaviors around agent usage. Dissociative awe at the temporal compression of change. It's not fear necessarily just the cognitive overload from living in an inflection point.

Tom Dale

Tags: ai-ethics, careers, coding-agents, generative-ai, ai, llms


As Space Tourism Looms, Scientists Ask: Should We Have Sex In Orbit?


Welcome back to the Abstract! Here are the studies this week that had off-Earth offspring, took stock of a mortal threat, productively slept, and sought out old friends.

First, what to expect when you’re expecting a star child. Then: how to fight cancer, the nap-plications of lucid dreaming, and why old rats don’t make new friends.

As always, for more of my work, check out my book First Contact: The Story of Our Obsession with Aliens, or subscribe to my personal newsletter, the BeX Files.

How to make babies in space (Don’t)

Palmer, Giles Anthony et al. “Reproductive biomedicine in space: implications for gametogenesis, fertility and ethical considerations in the era of commercial spaceflight.” Reproductive BioMedicine Online.

It’s hard enough to have babies on Earth, let alone off it. But if humans ever do expand beyond our planet to live in orbital outposts or on other planets, we would presumably want to build healthy families there. Even in the near term, it is conceivable that space will be flooded by rich tourists eager to join the 250-mile-high club, raising questions about how to practice safe space sex (or if that is even possible).

In a new study, scientists review the medical and ethical challenges of space reproduction, noting that while space sex is “often overshadowed by sensationalized or speculative portrayals, the topic…nonetheless demands serious attention.”

“Space is toxic to terrestrial life. It is an inherently hostile environment for terrestrial biology to thrive,” said researchers led by Giles Anthony Palmer of the International IVF Initiative Inc. “The microgravity, cosmic radiation, circadian disruption, pressure differentials, and extreme temperatures found in orbit or beyond present unique and multifactorial stressors to the human body.”

“As we enter a new era of space exploration, defined by longer missions, broader participation, and eventual human settlement beyond Earth, the question is not simply whether reproduction can occur in space, but whether human fertility can be preserved, protected and comprehensively understood in an environment fundamentally different from that in which our species evolved,” the team added.

The study provides a comprehensive review of how various space environments might impact fertility, pregnancy, labor, and health outcomes of children. For example, studies of rodent reproduction in space show higher risks of abnormal cell division and impaired development; meanwhile, the inherent dangers of pregnancy and labor are significantly amplified in space environments.   

“The question of whether humanity should reproduce beyond Earth is no longer hypothetical—it is a pressing ethical frontier,” the team concluded. “In the context of commercial spaceflight, where ambition often outpaces caution, the stakes are higher than ever. Without robust frameworks, rigorous research, and a deeply human commitment to ethical principles, there is a risk of exporting not just life but injustice, exploitation and harm into the cosmos. To be worthy of the stars, we must earn our place, not only through technological prowess, but through ethical wisdom.”

In other news…

Let’s get cancer’s ass

Fink, Hanna et al. “Global and regional cancer burden attributable to modifiable risk factors to inform prevention.” Nature Medicine.

Roughly ten million people die from cancer each year, making it a leading cause of death worldwide. While many cancers are not preventable, scientists set out to estimate just how much of the global cancer burden can be attributed to “modifiable risk factors,” meaning behavioral, environmental, or occupational factors that influence the odds of developing cancer.

The results revealed that “nearly 4 in 10 cancer cases worldwide in 2022 could have been prevented by eliminating exposure to the risk factors considered in this study,” which include smoking, alcohol consumption, and contaminated environments, said researchers led by Hanna Fink of the World Health Organization's International Agency for Research on Cancer.

“Smoking (15.1%), infections (10.2%) and alcohol consumption (3.2%) were the leading contributors to cancer burden,” the team added. “Lung, stomach, and cervical cancers represented nearly half of preventable cancers. Strengthening efforts to reduce modifiable exposures remains central to global cancer prevention.”

The researchers also found “obvious gendered patterns in causes of cancer” such as higher rates of smoking and alcohol consumption in men, and higher BMI in women. While there is an enduring allure to the idea of a cancer cure-all, this study underscores that the disease emerges from a complex interplay of factors, only some of which are under our control.

To sleep, perchance to lucid dream

Konkoly, Karen R. et al. “Creative problem-solving after experimentally provoking dreams of unsolved puzzles during REM sleep.” Neuroscience of Consciousness.

Scientists have gone ahead and done an Inception. In a new study, 20 experienced lucid dreamers were presented with puzzles matched with sound cues, which were then played as the participants slept to help them crack unsolved tasks in their dreams.  

Figure illustrating the experiment design. Image: Konkoly, Karen R. et al. 

“Whereas dream content is notoriously difficult to control experimentally, here we induced dreams about specific puzzles by presenting associated sounds during REM sleep,” said researchers led by Karen R. Konkoly of Northwestern University. “We preferentially recruited experienced lucid dreamers, intending for them to receive our real-time instructions in their dreams about which puzzles to volitionally attempt to solve.”

“Although many participants did not experience lucid dreams, we nevertheless found that cues successfully influenced dream content, biasing dreaming toward specific puzzles,” the team added. “Moreover, when puzzles were incorporated into dreams, they were more likely to be solved the next morning.”  

Yet more evidence for the most broadly applicable advice to humanity: sleep on it. 

Despite all my rage I am still just a rat in a maze

Gupta, Subhadeep Dutta et al. “When Familiar Faces Feel Better: A Framework for Social Neurocognitive Aging in a Rat Model.” eNeuro.

People get set in their ways as they get older—and that’s apparently true for rats, according to this new research. To probe the effects of age on mammalian social behavior, researchers obtained 169 male rats in two age cohorts: “young adults” at six months old and “aged” rats that were way over the hill at two years old.  

A series of rat mixers in water mazes revealed that the rodent elders were just as likely to interact with other rats as the youngsters were, but nearly half of them preferred to mingle with rats they already knew rather than socializing with new faces.

“Results for the aged rats were strikingly different from young in two ways,” said researchers led by Subhadeep Dutta Gupta of the National Institute on Aging in Baltimore.  “First, as a group, aged rats failed to display a reliable social novelty preference overall” and “second, inter-individual variability was significantly greater among old animals, with nearly half exhibiting a phenotype not seen in the young group, comprising an apparent social bias for the familiar conspecific.”

I think we can all relate to an occasional social bias for familiar conspecifics. To that end, the study concludes with a truth bomb: “It is important to recognize that a brief session of social interaction with a stranger inevitably falls short in matching the depth of familiarity established through enduring human social relationships.”

In the words of the ultimate rat elder, Master Splinter: “Help each other, draw upon one another, and always remember the true force that binds you.” 

Thanks for reading! See you next week.


AI coding makes you worse at learning — and not even any faster


Here’s a new preprint from Anthropic: “How AI Impacts Skill Formation”. AI coding bots make you bad at learning, and don’t even speed you up. [arXiv]

The researchers ran 50 test subjects through five basic coding tasks using the Trio library in Python. Some subjects were given an AI assistant, some were not.

The subjects coded in an online interview platform; those in the AI group also had the assistant available there.

The researchers used screen and keystroke recording to see what the test subjects did — including those no-AI test subjects who tried using an AI bot anyway.

Afterwards, the researchers tested the subjects on coding skills — debugging, code reading, code writing, and the concepts of Trio.
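
For readers who haven't touched it, Trio is an async concurrency library for Python. A minimal sketch of the kind of basic task the library is used for might look like the following; this is a hypothetical illustration, not one of the study's actual exercises.

```python
# Hypothetical example of a basic Trio task -- not taken from the study.
import trio

async def worker(name, delay):
    # Simulate a small unit of async work.
    await trio.sleep(delay)
    print(f"{name} finished after {delay}s")

async def main():
    # A nursery runs child tasks concurrently and waits for all of them to finish.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(worker, "task-a", 0.1)
        nursery.start_soon(worker, "task-b", 0.2)

if __name__ == "__main__":
    trio.run(main)
```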

The coders in the AI group were slightly faster, but it was not statistically significant. The main thing was that the AI group were 17% worse in their understanding:

The erosion of conceptual understanding, code reading, and debugging skills that we measured among participants using AI assistance suggests that workers acquiring new skills should be mindful of their reliance on AI during the learning process.

It’s just a single study and quite limited. You should expect to see AI bros dismiss the study saying it’s one library, it’s not enough coders, it’s an old model — and not to do better studies addressing their own objections.

If you don’t do the work, you don’t learn, and you don’t remember. Watching a bot do your job teaches you nothing. You end up incompetent. And you won’t work faster anyway.


Darkness, democracy, and locking it down


Friday, finally. Time for the weekly roundup.

On the podcast this week: the latest Epstein dump, how it’s really a disaster in a lot of ways, and Moltbot and its terrible security. In the section for subscribers at the Supporter level, two recent stories about a fundamental issue exposing a bunch of very sensitive data.

And in this week’s interview, Joseph talks to Samuel Bagg, assistant professor of political science at the University of South Carolina. Bagg recently wrote a fascinating essay about how the problem with lots of things might be knowledge-based (people believing stuff that’s wrong or dangerous) but the solution is not more knowledge. It’s all about social identity.


EpsteIn—as in, Epstein and LinkedIn—searches your connections on the social network for names that match those in the released files. 404 Media's Joseph Cox tested it, and it appears it works—with some caveats. “I found myself wondering whether anyone had mapped Epstein's network in the style of LinkedIn—how many people are 1st/2nd/3rd degree connections of Jeffrey Epstein?” Christopher Finke, the creator of the tool, told 404 Media in an email. “Smarter programmers than me have already built tools to visualize that, but I couldn't find anything that would show the overlap between my network and his.” “Thankfully the overlap is zero, but I did find that a previous co-worker who I purposefully chose not to keep in touch with appears in the files, and not in an incidental way. Trusting my gut on him paid off, I suppose,” he added. @Evy Kwong has more. Go to 404media.co to read more.

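The core of Finke's idea is simple name matching between a LinkedIn connections export and the names that appear in the files. A toy sketch of that overlap check, assuming a standard Connections.csv export and a plain-text list of names pulled from the documents (and not Finke's actual implementation), might look like this:

```python
# Toy sketch of the overlap idea, not Finke's actual tool.
# Assumes a LinkedIn "Connections.csv" export with "First Name"/"Last Name" columns
# and a plain-text file with one name per line extracted from the released files.
import csv

def normalize(name: str) -> str:
    # Crude normalization; real matching would have to handle initials, middle names, etc.
    return " ".join(name.lower().split())

def load_connections(path: str) -> set[str]:
    with open(path, newline="", encoding="utf-8") as f:
        return {
            normalize(f"{row.get('First Name', '')} {row.get('Last Name', '')}")
            for row in csv.DictReader(f)
        }

def load_file_names(path: str) -> set[str]:
    with open(path, encoding="utf-8") as f:
        return {normalize(line) for line in f if line.strip()}

if __name__ == "__main__":
    overlap = load_connections("Connections.csv") & load_file_names("epstein_names.txt")
    print(f"{len(overlap)} possible matches")
    for name in sorted(overlap):
        print(" -", name)
```

Name collisions are inevitable at this scale, which is presumably part of the caveats Cox mentions.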

Subscribers at the Supporter level get early access to interview episodes. Next week Emanuel talks to Patrick Klepek of Remap! Listen to the weekly podcasts on Apple Podcasts, Spotify, or YouTube.

In other news: If you missed getting a physical copy of the zine, we got you. Our zine about ICE surveillance tactics is now available as a PDF! Read more about why we’re releasing it free in the digital realm, and get it here.

LOCK IT DOWN

The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records. The court record shows what devices and data the FBI was ultimately able to access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it is before the FBI tries other techniques to access the device.

Image: Ian Muttoo via Flickr

TOTAL MESS

The Department of Justice left multiple unredacted photos of fully nude women or girls exposed as part of Friday’s dump of more than 3.5 million pages of files related to the investigations and prosecutions of Jeffrey Epstein and Ghislaine Maxwell. Unlike the majority of the images in the released files, these had neither the nudity nor the faces of the people redacted, making them easy to identify. In some of the photos, the women or girls were either fully nude or partially undressed, posed for cameras, and exposed their genitals. The DOJ removed the photos after 404 Media requested comment.

File photo / Unsplash

BAD VIBES

According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS) and it’s happening faster than anyone predicted. Thanks to vibe coding, a colloquialism for the practice of quickly writing code with the assistance of an LLM, anyone with a small amount of technical knowledge can churn out computer code and deploy software, even if they don't fully review or understand all the code they produce. But there’s a hidden cost. Vibe coding relies on vast amounts of open-source software, a trove of libraries, databases, and user knowledge that’s been built up over decades.

Photo by Daniil Komov / Unsplash

DEMOCRACY DIES

The Washington Post has been a critical institution in the lives of millions of people. What we’re seeing, though, is not a mistake. Unlike the Graham family in the late 1990s, Jeff Bezos has no reason to try to make his newspaper better or to try to best serve its readers. The newspaper's finances are barely a rounding error compared to Bezos's wealth, but what its journalists do—accountability journalism about the rich and powerful—does not serve someone who is rich and powerful. The Washington Post and many of its reporters are no longer useful to Bezos, and so he has decided to get rid of them. The Washington Post’s journalists, many of whom lost their jobs this week, have continued to do critical work, but Bezos has been systematically making the paper worse for years. 

Image: Seattle City Council


404 MEDIA IN THE WILD

I went on Science Friday to talk about deepfakes and the Grok debacle, and if you're an Aussie you might have heard me discussing it there, too.

The English version of the AI documentary Emanuel appeared in, "AI: The Death of the Internet," is out now!

Joseph joined Jon Stewart to talk about ICE surveillance tactics, and also appeared on PBS News Hour.

And this morning, Jason was on WNYC talking ICE and surveillance as well.

If you'd like us to come on your show, podcast, or panel, contact us.

COMMENTS OF THE WEEK

Replying to DOJ Released Unredacted Nude Images in Epstein Files, Rob writes:

“Inexcusable. I worked in ediscovery for a bit and I would be so ashamed if this happened on anything on my watch. Like, it is a shitty job to spend 12+ hours scanning/formatting/bates-stamping/printing documents + doing the redactions and having to see disturbing images, but part of why you put up with the boredom and the horror is because at the end of the day, you are playing your part in helping people get justice.” 

And in response to Our Zine About ICE Surveillance Is Here, Cam writes:

“Fantastic. Got mine in the mail yesterday. Phenomenal labor of love - excited to pass this around and share the PDFs as well. Keep doing what you're doing.”

We will with your support! Thank you! 


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss AI bubble hysteria, "just go independent," and more.

JOSEPH: This week we reported how the FBI has been unable to get into a Washington Post reporter’s iPhone because it was in Lockdown Mode. Side note, I wonder how the insane cuts at The Post are going to impact its digital or physical protection of journalists, if at all. This court record was very, very interesting in that it’s a quite rare admission of why exactly authorities were unable to access a device. 

I don’t think there’s an area of cybersecurity, which we have a lot of reporting on, that is as constantly in flux as mobile forensics. Nothing stays still, even for what feels like five minutes. There are constant tech developments, both on the side of Apple and Google, and then from the companies trying to break into those phones, like Cellebrite and Grayshift, the creator of Graykey.

As you probably remember, this dynamic really started back in 2016 after the San Bernardino terrorist attack. Authorities couldn’t get into an iPhone linked to the attack; the DOJ tried to legally compel Apple to build a backdoor to facilitate brute-forcing the PIN; Apple declined to do so, saying it would fundamentally lower security for all users; the DOJ backed off when the FBI had a third party break into the phone, which was later revealed to be Azimuth Security (as I’ve said before, I had one source on that but The Washington Post had more, so they managed to publish. It sucks they are gutting their journalists).

There have been some other high profile cases of authorities not being able to get into phones, but nothing quite like that Apple vs. FBI case. After Azimuth unlocked the phone, you had other companies largely emulate the capability of being able to unlock modern-ish iPhones. Probably the first of those was Grayshift, which Forbes first reported the existence of. Oh my god, a company has a little box that can just unlock iPhones even with their brute force protections? It was pretty nuts at the time but looks quaint now.

Then you get into what I usually refer to as the cat-and-mouse dynamic. Grayshift, and then Cellebrite, had the tools to break into recent iPhones. So then Apple introduced some other features. There was USB Restricted Mode, which turned the Lightning port into a charge-only interface, meaning forensic tools couldn’t connect to it. Grayshift then said it had defeated the feature. Some cops also explored skipping the warrant so they could download data more quickly and circumvent the feature.

The world kept spinning and both sides of the fight kept doing their thing. As we saw from Cellebrite and Graykey related leaks, generally these tools could get into older or even recent phones, but might have an issue with the latest device running the latest operating system. Then they’d find a way in and the cycle would continue.

FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties from unlocking someone’s device. At least for now.

The next major development was the iPhone rebooting feature we revealed in 2024, which returns iPhones that haven’t been unlocked (presumably by the user) for a few days to a state that makes them harder to unlock. I’m not sure what the latest on that is regarding mitigations.

My point is that this story will never end, really. There will always be some sort of development in the mobile forensic space. Always some little setting or tweak or new attack that, unless you’re following closely, you’re probably not going to know about. Which makes it hard to know whether your phone is really secure.

I suppose that’s the attraction of Lockdown Mode: it is supposed to stop connections between the phone and a forensic device completely, so users don’t have to worry about niche software idiosyncrasies they probably have no idea exist.

Cellebrite Unlocked This Journalist’s Phone. Cops Then Infected it With Malware
A new report from Amnesty International reveals multiple cases where Serbian authorities used Cellebrite devices to access targets’ mobile phones before loading them with spyware.

I mentioned this in passing on Bluesky when I posted the article, but I think Apple has done a pretty bad job of explaining that Lockdown Mode can, seemingly, protect against mobile forensic tools. Much of the marketing and stuff on the company’s site is about protecting users from mercenary spyware (read: NSO Group, Paragon, etc). There’s no mention of mobile forensics tech like Cellebrite or Graykey. Maybe that’s for a couple of reasons: Cellebrite and Graykey absolutely have legitimate uses, and are used to combat serious crime every single day. They are abused, absolutely, but they’re also used constantly in all manner of child abuse, financial fraud, murder, and kidnapping investigations. Basically, any crime, really. So, having Apple on its website saying ‘we defeat the tool that lets cops collect evidence on murderers’ is probably not a look they want. Spyware is much easier to publicly push back against. That industry is saturated with abuse.

But, now we know that Lockdown Mode can protect against these tools if you’re at risk of your device being seized and searched. That is obviously very useful information for journalists, activists, protesters, and others to know. 

JASON: It has been a brutal week for journalism, a brutal year, a brutal decade. For journalism and for the world more broadly. It has been hard to pay attention to much of anything besides ICE, and I know many people who can’t think about anything else at all right now, and I completely understand that. I have done that at times in my life and it turns me extremely defeatist and useless, so over the last few years I have really focused on working hard and doing things that I feel are meaningful, using my journalism skills and my platform, and then either logging off or explicitly focusing on being with my friends and family, exercising, or otherwise doing things that bring me joy. This is a really lucky place to be in, which I don’t take for granted, but I figure I am more useful energized and not fully miserable all the time, and so I make sure that I have some sort of balance in my life.

That’s a bit of a non sequitur preamble before I get to my real thought, which is about independent journalism, starting a business, “just going independent” and things of this nature. Whenever there are mass layoffs like we saw at the Washington Post this week, there’s understandably an online debate about the sustainability of journalism, and also a debate about whether going independent can work, who can go independent, how to do it, etc. The ones I’ve seen in the last few days feel pretty pessimistic to me. And it’s true that there are far fewer journalism jobs, there are now a tiny number of traditional publications hiring, and it’s getting harder to stand out amongst a sea of substacks and independent sites, especially considering the additional pressures of competing against AI slop, etc. I also see a lot of people saying that there is subscription fatigue, debating the ethics of paywalls, that there are concerns about legal resources, healthcare, running a business, editing help, etc. These are all real, and everyone’s situation is different. 

I understand the impulse to have these conversations but I also never really know what to say about them, and so I usually don’t participate, because honestly the discourse on this topic feels extremely fraught. We are talking about people’s livelihoods, their life’s work, their personal appetite for being an entrepreneur, their healthcare situation. And this always happens immediately after a bunch of people lose their jobs, so it always happens during a very raw situation. 

So again, deep breath, knowing I’m coming from a place of unimaginable privilege having been a part of 404 Media: Going independent is the best thing I have ever done in my life. I did not know or ever hope to dream that anything like this could have happened to me. I am a happier person in every conceivable possible way having gone independent. I work a lot, but I also have more balance in my life than I have ever had. I know this is not the case for everyone, but it is possible to do this and make a living. It is still possible. And for many people I think it is better to at least try to start something new than it is to try to hitch yourself to another dying business. (This is the reason for my preamble: It sometimes feels weird/bad/wrong to feel somewhat secure when so many people do not.)

If you are a journalist and you are thinking of trying this, talk to me first. I am happy to talk to you. A lot of the hurdles, problems, and fears expressed by people about going independent are real, but they are also not insurmountable and often they are not as big of a deal as you would expect. Legal help is available. Editing help is available. Healthcare … healthcare is the hardest thing, it’s a big thing, and I don’t have a good answer there. Running a very basic business does not take that much time, and much of it is automated through platforms like Ghost. Subscription fatigue, I’m sorry, is fake. Well, it’s real on an individual level, but the amount of people that you need to subscribe to something to approximate what a journalism job pays is not that many. There are hundreds and hundreds of millions of people who speak English and you need to convince a few hundred of them that your work is worth supporting. This is possible. It’s doable. You need to post a lot and you may need to learn to do a few new things. You need to be kind of shameless, which didn’t come easy to me and still doesn’t. But we have learned a lot in the last few years. If this is something you want to do, email me.

EMANUEL: It’s time once again to talk about the big AI picture: Bubble or no bubble, the end of all knowledge-based work or a useless tool, a civilization-shifting technology or a slop machine?

To be honest, I’m not going to satisfactorily answer any of these questions, but I see all the same ridiculous, shocking, scary claims about AI you’re seeing, and I want to talk through some of the ways I processed them this week.

As people were losing their minds over Moltbook this week and discussing how powerful the latest LLMs are at coding, I was reporting a story about a company that heavily relies on generative AI, and how it’s failing that company’s workers and users. My reporting in this case requires sifting through a massive amount of text without much of a direction, so while everyone was talking about how powerful AI is right now, I thought: why not use one of these LLMs to do some of that work for me?

Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site
‘It exploded before anyone thought to check whether the database was properly secured.’

This idea never got off the ground for technical reasons, but it made me think a lot about how I could incorporate AI into my workflow. A lot of reporting is pretty tedious because it requires sifting through a ton of boring material in order to maybe find something important without having any idea what it might be, and I can easily imagine AI being helpful for that task. AI currently has the ability to sift through video, transcripts, PDFs, social media accounts, etc. The problem I kept coming back to is that if I used AI to do any of that sifting for me, I would have no idea what it may have missed. Maybe it could find useful leads much faster, but so often what happens during this process is that I’ll read through a document and see something that’s only tangentially related, or a name I didn’t recognize, and follow those leads not because it makes logical sense, but because I’m curious and bored of looking at the same document and need a change of pace. Sometimes, that’s how I find some of the most interesting stuff in my reporting. As far as I’m aware, no current LLM can do that, and even if it did, I would have to trust that it didn’t miss any of those opportunities because of an error.

Then I thought, while all of that may be true, I could still stick to my manual scanning process but use AI for a first pass. But I felt the overwhelming desire to be lazy begin to take hold before I even finished the thought. As numerous studies have shown, reliance on automated tools leads to overreliance on automated tools and, ultimately, deskilling. I could feel myself atrophy just by entertaining the idea. Ultimately I’m still open to the possibility of using LLMs in some similar fashion, but at the moment it seems like more trouble than it’s worth. 
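
For concreteness, a first pass of this kind could be wired up in a few lines. The sketch below uses the OpenAI Python client over a folder of exported text files; the model, prompt, and file layout are illustrative assumptions, not 404 Media's actual workflow.

```python
# Hypothetical sketch of an AI "first pass" over a folder of documents.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; paths, prompt, and model are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are helping a reporter triage documents. Summarize this document in two "
    "sentences and list any names, dates, or claims that might deserve a closer manual read."
)

def triage(folder: str) -> None:
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(errors="ignore")[:12000]  # keep each request small
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": text},
            ],
        )
        print(f"--- {path.name} ---")
        print(response.choices[0].message.content)

if __name__ == "__main__":
    triage("documents")  # hypothetical folder of exported text files
```

Even then, the objection above stands: a triage pass like this only tells you what the model flagged, not what it skipped.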

When I looked back over at X, I saw both AI boosters and skeptics agree that something has changed in the last few months. People who used to think the entire thing was a bubble now say they see AI embedding itself into tech company workflows in a way that’s irreversible. At the same time, Moltbook, the social media site for AI agents that was driving much of this hype, was revealed to be a sham and a security nightmare.

I’m tired of saying it and I’m sure you’re tired of reading it but my position remains that AI can be both an overhyped tech bubble, and, at the same time, a technology that is here to stay and that will fundamentally change our lives in many ways. 

It’s wild to me that people who were old enough to live through or at least understand the history of the dot com boom can’t hold this thought in their heads. The internet was new and ‘overhyped’ and a lot of companies raised way too much money, so the market had a very dramatic correction, but obviously the internet was here to stay and its impact can’t be overstated. 

I’m not predicting the future but it certainly feels like we’re on a similar path with AI right now. 

Finally, these are my two takeaways from yet another week of AI hysteria:

  1. We will continue to focus on what AI is actively doing right now rather than speculating on how powerful it can be theoretically and will be in the future. 
  2. Our philosophy has always been to ground our reporting in first hand experience with the technology we’re reporting on, and I think that I have fallen a little behind in that respect, and need to experiment with some of the more recent AI tools. 

SAM: Last night I decided to write a short blog in a category I’d call “check this shit out,” where the purpose isn’t to solve a mystery or break news, but to just point at something everyone is talking about and add context to it. I saw people on Bluesky and Reddit posting an image from the Epstein files of the Mona Lisa with a redaction over the portrait’s face. The image itself is an instant classic, but the context behind it is that thousands of instances of victims’ personal information, including faces and full names, have been exposed as part of this Epstein data dump disaster. So redacting the face of a 500-year-old painting seemed patently absurd.

The DOJ Redacted a Photo of the Mona Lisa in the Epstein Files
While Epstein’s victims endure the fallout of their photos and names being exposed in the Department of Justice’s latest tranche of files, investigators redacted a photo of the Mona Lisa. Now we know why.

I sent a request for comment to the DOJ specifically asking why it was redacted and also whether AI was used in redactions, because that’s another piece of context to this story: the people making the image go triple platinum on every social media platform were also speculating (or straight up declaring) that a facial recognition system was redacting images in the files, and that’s why an unrelated, centuries-old female face was caught in the net. This isn’t the craziest theory ever; AI systems similarly overindex for things like nudity, sexual speech, and terms-violating content across all social media platforms, and it’s a huge problem. Sending overzealous bots to moderate complex, nuanced user generated content is messy and requires a lot of human oversight, and usually puts the onus on users to appeal and attempt to correct (or abide by) rules that aren’t even made explicit. AI catching the Mona Lisa and not catching real victims' faces is not the wildest theory in the world. But I don't know, and don't want to speculate, about whether the DOJ used AI to do redactions. If they did, that's bad. If they didn't, the situation is still messy and terrible.

I published the story with a note that the DOJ did not immediately respond to a request for comment, and about an hour later — incredibly speedy, considering they took a day and a half to remove sexually explicit images of victims when we flagged them last weekend — someone at the DOJ responded saying they redacted the Mona Lisa because it’s actually a victim’s face in the photo. And now, looking closely at the image itself (which is already cropped tightly to exclude any background), I can see what seems like a thumb or something along the edge, as if it’s a person holding a printed photo. Maybe it’s from a novelty shop outside the Louvre, or maybe it’s one of those cutout photo ops where you stand behind an image and put your face in the hole. Either way, it’s not just a photo of the painting hanging in the Louvre like everyone (including myself) assumed. The story went from “wow the DOJ incompetently left so many images unredacted of real women while protecting a painting, how absurd” to “wow this image is actually another tiny piece of evidence in the most harrowing criminal investigation of our lifetime.” I literally said WHOA WHOA WHOA out loud alone in my apartment when I got the DOJ’s email. They did not answer the AI question.

This is simply a BTB and I don’t have any grand lesson to end with, but I do want to say that I don’t think — and I don’t think anyone’s said or thinks this, either, but just to be clear about it — that the redaction being genuine and correct changes the context of the larger story, and why I blogged it in the first place, which is that the process of protecting victims while releasing these files has been a disaster. Their lawyers have said their phones are ringing off the hook with victims realizing their information was made public in these files. We talked more about this on the podcast this week, if you're interested.


Inspector General Investigating Whether ICE's Surveillance Tech Breaks the Law


The Department of Homeland Security’s Inspector General is investigating potential privacy abuses associated with Immigration and Customs Enforcement’s surveillance and biometric data programs, according to a letter sent to two senators.

Last week, we reported that Senators Mark Warner and Tim Kaine demanded that DHS inspector general Joseph Cuffari investigate immigration-related surveillance programs across DHS, Customs and Border Protection, and ICE. On Thursday, Cuffari said his office had launched an audit called “DHS’ Security of Biometric Data and Personally Identifiable Information.”

“The objective of the audit is to determine how DHS and its components collect or obtain PII and biometric data related to immigration enforcement efforts and the extent to which that data is managed, shared, and secured in accordance with law, regulation, and Departmental policy,” Cuffari’s letter reads. He adds that one of the purposes of the investigation will be to “determine whether they have led to violations of federal law and other regulations that maintain privacy and defend against unlawful searches.”

Kaine and Warner’s initial letter specifically focused on many of the technologies and programs 404 Media has been reporting on, including DHS’s contracts with Palantir, facial recognition company Clearview AI, its side-door access to Flock’s license plate scanning technology, its social media monitoring through a company called Penlink, its phone hacking contract through a company called Paragon, its face-scanning mobile app, as well as its use of various government biometric databases in immigration enforcement. 

“DHS’ reported disregard for adhering to the law and its proven ambivalence toward observing and upholding constitutionally-guaranteed freedoms of Americans and noncitizens, including freedom of speech and equal protection under the law, leaves us with little confidence that these new and powerful tools are being used responsibly,” the senators wrote. “Coupled with DHS’ propensity to detain people regardless of their circumstances, it is reasonable to question whether DHS can be trusted with powerful surveillance tools and if in doing so, DHS is subjecting Americans to surveillance under the pretext of immigration enforcement.”


The OpenAI and Nvidia $100b not-a-deal is off


Late last year, OpenAI was frantically signing deals for hundreds of billions of dollars! Six gigawatts of GPU chips from AMD! A hundred billion dollar deal with Nvidia for ten gigawatts of GPU chips!

The press went wild! Stock prices got quite the boost! Line went up!

It turns out this impossible data centre deal wasn’t possible. The Wall Street Journal broke the news that “The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice.” Jensen Huang of Nvidia was not impressed with OpenAI: [WSJ]

At the time, the ChatGPT-maker expected the deal negotiations to be completed in the coming weeks, people familiar with the plans said. But the talks haven’t progressed beyond the early stages.

… Huang has privately emphasized to industry associates in recent months that the original $100 billion agreement was nonbinding and not finalized … He has also privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic.

Jensen Huang is now playing down the non-deal and only wants to talk about the future, where he loves OpenAI and definitely wants to invest! [Reuters]

Jim Cramer of CNBC asked Jensen about this $100 billion not-a-deal. Jensen says everything is fine: [YouTube]

No, there’s no controversy at all. It’s complete nonsense. We love working with OpenAI. We are incredibly honored and delighted to be able to invest in their next round. And so we’re privileged that they’re inviting us to invest for each one of their rounds. We would love to be invited and we would consider of course investing in it.

Hope that makes everything clear.

OpenAI didn’t lash out in Reuters; their unnamed sources did: “OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say”. [Reuters]

Sam Altman tweeted: [Twitter, archive]

We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don’t get where all this insanity is coming from.

First answer was from noted computer scientist Grady Booch: [Twitter, archive]

Hands Sam a mirror.

These were never going to be deals. The announcements were to get big numbers into the headlines. That made the stock numbers go up. Job number one!

But the wheels are falling off the vaporware deals. The wheels were always going to fall off, they couldn’t not fall off. But now the bubble investors are getting a glimmering that perhaps this is … vapor.
