
AI coding makes you worse at learning — and not even any faster


Here’s a new preprint from Anthropic: “How AI Impacts Skill Formation”. AI coding bots make you bad at learning, and don’t even speed you up. [arXiv]

The researchers ran 50 test subjects through five basic coding tasks using the Trio library in Python, working in an online interview platform. Some subjects were given an AI assistant; some were not.

The researchers used screen and keystroke recording to see what the test subjects did — including those no-AI test subjects who tried using an AI bot anyway.

Afterwards, the researchers tested the subjects on coding skills — debugging, code reading, code writing, and the concepts of Trio.

The coders in the AI group were slightly faster, but the difference was not statistically significant. The main finding was that the AI group scored 17% worse on understanding:

The erosion of conceptual understanding, code reading, and debugging skills that we measured among participants using AI assistance suggests that workers acquiring new skills should be mindful of their reliance on AI during the learning process.

It’s just a single study and quite limited. You should expect AI bros to dismiss it as covering only one library, with too few coders, on an old model. Don’t expect them to run better studies addressing their own objections.

If you don’t do the work, you don’t learn, and you don’t remember. Watching a bot do your job teaches you nothing. You end up incompetent. And you won’t work faster anyway.


Darkness, democracy, and locking it down


Friday, finally. Time for the weekly roundup.

On the podcast this week: the latest Epstein dump, how it’s really a disaster in a lot of ways, and Moltbot and its terrible security. In the section for subscribers at the Supporter level, two recent stories about a fundamental issue exposing a bunch of very sensitive data.

And in this week’s interview, Joseph talks to Samuel Bagg, assistant professor of political science at the University of South Carolina. Bagg recently wrote a fascinating essay about how the problem with lots of things might be knowledge-based (people believing stuff that’s wrong or dangerous) but the solution is not more knowledge. It’s all about social identity.

@404.media

EpsteIn—as in, Epstein and LinkedIn—searches your connections on the social network for names that match those in the released files. 404 Media's Joseph Cox tested it, and it appears it works—with some caveats.

“I found myself wondering whether anyone had mapped Epstein's network in the style of LinkedIn—how many people are 1st/2nd/3rd degree connections of Jeffrey Epstein?” Christopher Finke, the creator of the tool, told 404 Media in an email. “Smarter programmers than me have already built tools to visualize that, but I couldn't find anything that would show the overlap between my network and his.”

“Thankfully the overlap is zero, but I did find that a previous co-worker who I purposefully chose not to keep in touch with appears in the files, and not in an incidental way. Trusting my gut on him paid off, I suppose,” he added. @Evy Kwong has more. Go to 404media.co to read more.


Subscribers at the Supporter level get early access to interview episodes. Next week Emanuel talks to Patrick Klepek of Remap! Listen to the weekly podcasts on Apple Podcasts, Spotify, or YouTube.

In other news: If you missed getting a physical copy of the zine, we got you. Our zine about ICE surveillance tactics is now available as a PDF! Read more about why we’re releasing it free in the digital realm, and get it here.

LOCK IT DOWN

The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records. The court record shows which devices and data the FBI was ultimately able to access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it is until the FBI tries other techniques to access the device.

Image: Ian Muttoo via Flickr

TOTAL MESS

The Department of Justice left multiple unredacted photos of fully nude women or girls exposed as part of Friday’s dump of more than 3.5 million pages of files related to the investigations and prosecutions of Jeffrey Epstein and Ghislaine Maxwell. Unlike in the majority of the images in the released files, neither the nudity nor the faces of the people were redacted, making them easy to identify. In some of the photos, the women or girls were fully nude or partially undressed, posed for cameras, and exposed their genitals. The DOJ removed the photos after 404 Media requested comment.

File photo / Unsplash

BAD VIBES

According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS), and it’s happening faster than anyone predicted. Thanks to vibe coding, a colloquialism for the practice of quickly writing code with the assistance of an LLM, anyone with a small amount of technical knowledge can churn out computer code and deploy software, even without fully reviewing or understanding the code they produce. But there’s a hidden cost. Vibe coding relies on vast amounts of open-source software, a trove of libraries, databases, and user knowledge that’s been built up over decades.

Photo by Daniil Komov / Unsplash

DEMOCRACY DIES

The Washington Post has been a critical institution in the lives of millions of people. What we’re seeing, though, is not a mistake. Unlike the Graham family in the late 1990s, Jeff Bezos has no reason to try to make his newspaper better or to try to best serve its readers. The newspaper's finances are barely a rounding error compared to Bezos's wealth, but what its journalists do—accountability journalism about the rich and powerful—does not serve someone who is rich and powerful. The Washington Post and many of its reporters are no longer useful to Bezos, and so he has decided to get rid of them. The Washington Post’s journalists, many of whom lost their jobs this week, have continued to do critical work, but Bezos has been systematically making the paper worse for years. 

Image: Seattle City Council

READ MORE

404 MEDIA IN THE WILD

I went on Science Friday to talk about deepfakes and the Grok debacle, and if you're an Aussie you might have heard me discussing it there, too.

The English version of the documentary about AI that Emanuel appeared in, "AI: The Death of the Internet," is out now!

Joseph joined Jon Stewart to talk about ICE surveillance tactics, and appeared on PBS News Hour as well.

And this morning, Jason was on WNYC talking ICE and surveillance as well.

If you'd like us to come on your show, podcast, or panel, contact us.

COMMENTS OF THE WEEK

Replying to DOJ Released Unredacted Nude Images in Epstein Files, Rob writes:

“Inexcusable. I worked in ediscovery for a bit and I would be so ashamed if this happened on anything on my watch. Like, it is a shitty job to spend 12+ hours scanning/formatting/bates-stamping/printing documents + doing the redactions and having to see disturbing images, but part of why you put up with the boredom and the horror is because at the end of the day, you are playing your part in helping people get justice.” 

And in response to Our Zine About ICE Surveillance Is Here, Cam writes:

“Fantastic. Got mine in the mail yesterday. Phenomenal labor of love - excited to pass this around and share the PDFs as well. Keep doing what you're doing.”

We will with your support! Thank you! 


This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss AI bubble hysteria, "just go independent," and more.

JOSEPH: This week we reported how the FBI has been unable to get into a Washington Post reporter’s iPhone because it was in Lockdown Mode. Side note, I wonder how the insane cuts at The Post are going to impact its digital or physical protection of journalists, if at all. This court record was very, very interesting in that it’s a quite rare admission of why exactly authorities were unable to access a device. 

I don’t think there’s an area of cybersecurity, which we have a lot of reporting on, that is as constantly in flux as mobile forensics. Nothing stays still, even for what feels like five minutes. There are constant tech developments, both on the side of Apple and Google, and then from the companies trying to break into those phones, like Cellebrite and Grayshift, the creator of Graykey.

As you probably remember, this dynamic really started back in 2016 after the San Bernardino terrorist attack. Authorities couldn’t get into an iPhone linked to the attack; the DOJ tried to legally compel Apple to build a backdoor to facilitate brute-forcing the PIN; Apple declined, saying it would fundamentally lower security for all users; the DOJ backed off when the FBI had a third party break into the phone, which was later revealed to be Azimuth Security (as I’ve said before, I had one source on that, but The Washington Post had more, so they managed to publish. It sucks they are gutting their journalists).

There have been some other high profile cases of authorities not being able to get into phones, but nothing quite like that Apple vs. FBI case. After Azimuth unlocked the phone, you had other companies largely emulate the capability of being able to unlock modern-ish iPhones. Probably the first of those was Grayshift, which Forbes first reported the existence of. Oh my god, a company has a little box that can just unlock iPhones even with their brute force protections? It was pretty nuts at the time but looks quaint now.

Then you get into what I usually refer to as the cat-and-mouse dynamic. Grayshift, and then Cellebrite, had the tools to break into recent iPhones. So Apple introduced some other features. There was USB Restricted Mode, which turned the Lightning port into a charge-only interface, meaning forensic tools couldn’t connect to it. Grayshift then said it had defeated the feature. Some cops also explored skipping the warrant so they could download data more quickly and circumvent the feature.

The world kept spinning and both sides of the fight kept doing their thing. As we saw from Cellebrite and Graykey related leaks, generally these tools could get into older or even recent phones, but might have an issue with the latest device running the latest operating system. Then they’d find a way in and the cycle would continue.

FBI Couldn’t Get into WaPo Reporter’s iPhone Because It Had Lockdown Mode Enabled
Lockdown Mode is a sometimes overlooked feature of Apple devices that broadly makes them harder to hack. A court record indicates the feature might be effective at stopping third parties from unlocking someone’s device. At least for now.

The next major development was the iPhone rebooting behavior we revealed in 2024. That returned iPhones that hadn’t been unlocked for a few days (presumably by the user) to a state that makes them harder to unlock. I’m not sure what the latest on that is regarding mitigations.

My point is that this story will never end, really. There will always be some sort of development in the mobile forensic space. Always some little setting or tweak or new attack that, unless you’re following closely, you’re probably not going to know about. Which makes it hard to really know when your phone is really secure.

I suppose that’s the attraction of Lockdown Mode: it is supposed to stop connections between the phone and a forensic device completely, so users don’t have to worry about niche software idiosyncrasies they probably have no idea exist.

Cellebrite Unlocked This Journalist’s Phone. Cops Then Infected it With Malware
A new report from Amnesty International reveals multiple cases where Serbian authorities used Cellebrite devices to access targets’ mobile phones before loading them with spyware.

I mentioned this in passing on Bluesky when I posted the article, but I think Apple has done a pretty bad job of explaining that Lockdown Mode can, seemingly, protect against mobile forensic tools. Much of the marketing and stuff on the company’s site is about protecting users from mercenary spyware (read: NSO Group, Paragon, etc.). There’s no mention of mobile forensics tech like Cellebrite or Graykey. Maybe that’s for a couple of reasons: Cellebrite and Graykey absolutely have legitimate uses, and are used to combat serious crime every single day. They are abused, absolutely, but they’re also used constantly in all manner of child abuse, financial fraud, murder, and kidnapping investigations. Basically, any crime, really. So having Apple say on its website ‘we defeat the tool that lets cops collect evidence on murderers’ is probably not a look it wants. Spyware is much easier to publicly push back against. That industry is saturated with abuse.

But now we know that Lockdown Mode can protect against these tools if you’re at risk of your device being seized and searched. That is obviously very useful information for journalists, activists, protesters, and others to know.

JASON: It has been a brutal week for journalism, a brutal year, a brutal decade. For journalism and for the world more broadly. It has been hard to pay attention to much of anything besides ICE, and I know many people who can’t think about anything else at all right now, and I completely understand that. I have done that at times in my life and it turns me extremely defeatist and useless, so over the last few years I have really focused on working hard and doing things that I feel are meaningful, using my journalism skills and my platform, and then either logging off or explicitly focusing on being with my friends and family, exercising, or otherwise doing things that bring me joy. This is a really lucky place to be in, which I don’t take for granted, but I figure I am more useful energized and not fully miserable all the time, and so I make sure that I have some sort of balance in my life.

That’s a bit of a non sequitur preamble before I get to my real thought, which is about independent journalism, starting a business, “just going independent” and things of this nature. Whenever there are mass layoffs like we saw at the Washington Post this week, there’s understandably an online debate about the sustainability of journalism, and also a debate about whether going independent can work, who can go independent, how to do it, etc. The ones I’ve seen in the last few days feel pretty pessimistic to me. And it’s true that there are far fewer journalism jobs, there are now a tiny number of traditional publications hiring, and it’s getting harder to stand out amongst a sea of substacks and independent sites, especially considering the additional pressures of competing against AI slop, etc. I also see a lot of people saying that there is subscription fatigue, debating the ethics of paywalls, that there are concerns about legal resources, healthcare, running a business, editing help, etc. These are all real, and everyone’s situation is different. 

I understand the impulse to have these conversations but I also never really know what to say about them, and so I usually don’t participate, because honestly the discourse on this topic feels extremely fraught. We are talking about people’s livelihoods, their life’s work, their personal appetite for being an entrepreneur, their healthcare situation. And this always happens immediately after a bunch of people lose their jobs, so it always happens during a very raw situation. 

So again, deep breath, knowing I’m coming from a place of unimaginable privilege having been a part of 404 Media: Going independent is the best thing I have ever done in my life. I did not know or ever hope to dream that anything like this could have happened to me. I am a happier person in every conceivable way having gone independent. I work a lot, but I also have more balance in my life than I have ever had. I know this is not the case for everyone, but it is possible to do this and make a living. It is still possible. And for many people I think it is better to at least try to start something new than it is to try to hitch yourself to another dying business. (This is the reason for my preamble: It sometimes feels weird/bad/wrong to feel somewhat secure when so many people do not.)

If you are a journalist and you are thinking of trying this, talk to me first. I am happy to talk to you. A lot of the hurdles, problems, and fears expressed by people about going independent are real, but they are also not insurmountable and often they are not as big of a deal as you would expect. Legal help is available. Editing help is available. Healthcare … healthcare is the hardest thing, it’s a big thing, and I don’t have a good answer there. Running a very basic business does not take that much time, and much of it is automated through platforms like Ghost. Subscription fatigue, I’m sorry, is fake. Well, it’s real on an individual level, but the number of people you need to subscribe to something to approximate what a journalism job pays is not that many. There are hundreds and hundreds of millions of people who speak English and you need to convince a few hundred of them that your work is worth supporting. This is possible. It’s doable. You need to post a lot and you may need to learn to do a few new things. You need to be kind of shameless, which didn’t come easy to me and still doesn’t. But we have learned a lot in the last few years. If this is something you want to do, email me.

EMANUEL: It’s time once again to talk about the big AI picture: Bubble or no bubble, the end of all knowledge based work or a useless tool, a civilization shifting technology or a slop machine?

To be honest, I’m not going to satisfactorily answer any of these questions, but I see all the same ridiculous, shocking, scary claims about AI you’re seeing, and I want to talk through some of the ways I processed them this week.

As people were losing their minds over Moltbook this week and discussing how powerful the latest LLMs are at coding, I was reporting a story about a company that heavily relies on generative AI, and how it’s failing that company’s workers and users. My reporting in this case required sifting through a massive amount of text without much of a direction, so while everyone was talking about how powerful AI is right now, I thought: why not use one of these LLMs to do some of that work for me?

Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site
‘It exploded before anyone thought to check whether the database was properly secured.’

This idea never got off the ground for technical reasons, but it made me think a lot about how I could incorporate AI into my workflow. A lot of reporting is pretty tedious because it requires sifting through a ton of boring material in order to maybe find something important without having any idea what it might be, and I can easily imagine AI being helpful for that task. AI currently has the ability to sift through video, transcripts, PDFs, social media accounts, etc. The problem I kept coming back to is that if I used AI to do any of that sifting for me, I would have no idea what it may have missed. Maybe it could find useful leads much faster, but so often what happens during this process is that I’ll read through a document and see something that’s only tangentially related, or a name I didn’t recognize, and follow those leads not because it makes logical sense, but because I’m curious and bored of looking at the same document and need a change of pace. Sometimes, that’s how I find some of the most interesting stuff in my reporting. As far as I’m aware, no current LLM can do that, and even if it did, I would have to trust that it didn’t miss any of those opportunities because of an error.

Then I thought, while all of that may be true, I could still stick to my manual scanning process but use AI for a first pass. But I felt the overwhelming desire to be lazy begin to take hold before I even finished the thought. As numerous studies have shown, reliance on automated tools leads to overreliance on automated tools and, ultimately, deskilling. I could feel myself atrophy just by entertaining the idea. Ultimately I’m still open to the possibility of using LLMs in some similar fashion, but at the moment it seems like more trouble than it’s worth. 

When I looked back over at X, I saw both AI boosters and skeptics agree that something has changed in the last few months. People who used to think the entire thing was a bubble now say they see AI embed itself into tech company workflows in a way that’s irreversible. At the same time, Moltbook, the social media site for AI agents that was driving much of this hype, was revealed to be a sham and a security nightmare.

I’m tired of saying it and I’m sure you’re tired of reading it but my position remains that AI can be both an overhyped tech bubble, and, at the same time, a technology that is here to stay and that will fundamentally change our lives in many ways. 

It’s wild to me that people who were old enough to live through or at least understand the history of the dot com boom can’t hold this thought in their heads. The internet was new and ‘overhyped’ and a lot of companies raised way too much money, so the market had a very dramatic correction, but obviously the internet was here to stay and its impact can’t be overstated. 

I’m not predicting the future but it certainly feels like we’re on a similar path with AI right now. 

Finally, these are my two takeaways from yet another week of AI hysteria:

  1. We will continue to focus on what AI is actively doing right now rather than speculating on how powerful it can be theoretically and will be in the future. 
  2. Our philosophy has always been to ground our reporting in first hand experience with the technology we’re reporting on, and I think that I have fallen a little behind in that respect, and need to experiment with some of the more recent AI tools. 

SAM: Last night I decided to write a short blog in a category I’d call “check this shit out,” where the purpose isn’t to solve a mystery or break news, but to just point at something everyone is talking about and add context to it. I saw people on Bluesky and Reddit posting an image from the Epstein files of the Mona Lisa with a redaction over the portrait’s face. The image itself is an instant classic, but the context behind it is that thousands of instances of victims’ personal information, including faces and full names, have been exposed as part of this Epstein data dump disaster. So redacting the face of a 500-year-old painting seemed patently absurd.

The DOJ Redacted a Photo of the Mona Lisa in the Epstein Files
While Epstein’s victims endure the fallout of their photos and names being exposed in the Department of Justice’s latest tranche of files, investigators redacted a photo of the Mona Lisa. Now we know why.

I sent a request for comment to the DOJ specifically asking why it was redacted and also whether AI was used in redactions, because that’s another piece of context to this story: the people making the image go triple platinum on every social media platform were also speculating (or straight up declaring) that a facial recognition system was redacting images in the files, and that’s why an unrelated, centuries-old female face was caught in the net. This isn’t the craziest theory ever; AI systems similarly overindex for things like nudity, sexual speech, and terms-violating content across all social media platforms, and it’s a huge problem. Sending overzealous bots to moderate complex, nuanced user generated content is messy and requires a lot of human oversight, and usually puts the onus on users to appeal and attempt to correct (or abide by) rules that aren’t even made explicit. AI catching the Mona Lisa and not catching real victims' faces is not the wildest theory in the world. But I don't know, and don't want to speculate, about whether the DOJ used AI to do redactions. If they did, that's bad. If they didn't, the situation is still messy and terrible.

I published the story with a note that the DOJ did not immediately respond to a request for comment, and about an hour later — incredibly speedy, considering they took a day and a half to remove sexually explicit images of victims when we flagged them last weekend — someone at the DOJ responded saying they redacted the Mona Lisa because there is actually a victim’s face in the photo. And now, looking closely at the image itself (which is already cropped tightly to exclude any background), I can see what seems like a thumb or something along the edge, as if it’s a person holding a printed photo. Maybe it’s from a novelty shop outside the Louvre, or maybe it’s one of those cutout photo ops where you stand behind an image and put your face in the hole. Either way, it’s not just a photo of the painting hanging in the Louvre like everyone (including myself) assumed. The story went from “wow the DOJ incompetently left so many images unredacted of real women while protecting a painting, how absurd” to “wow this image is actually another tiny piece of evidence in the most harrowing criminal investigation of our lifetime.” I literally said WHOA WHOA WHOA out loud alone in my apartment when I got the DOJ’s email. They did not answer the AI question.

This is simply a BTB and I don’t have any grand lesson to end with, but I do want to say that I don’t think — and I don’t think anyone’s said or thinks this, either, but just to be clear about it — that the redaction being genuine and correct changes the context of the larger story, and why I blogged it in the first place, which is that the process of protecting victims while releasing these files has been a disaster. Their lawyers have said their phones are ringing off the hook with victims realizing their information was made public in these files. We talked more about this on the podcast this week, if you're interested.


Inspector General Investigating Whether ICE's Surveillance Tech Breaks the Law


The Department of Homeland Security’s Inspector General is investigating potential privacy abuses associated with Immigration and Customs Enforcement’s surveillance and biometric data programs, according to a letter sent to two senators.

Last week, we reported that Senators Mark Warner and Tim Kaine demanded that DHS inspector general Joseph Cuffari investigate immigration-related surveillance programs across DHS, Customs and Border Protection, and ICE. On Thursday, Cuffari said his office had launched an audit called “DHS’ Security of Biometric Data and Personally Identifiable Information.”

“The objective of the audit is to determine how DHS and its components collect or obtain PII and biometric data related to immigration enforcement efforts and the extent to which that data is managed, shared, and secured in accordance with law, regulation, and Departmental policy,” Cuffari’s letter reads. He adds that one of the purposes of the investigation will be to “determine whether they have led to violations of federal law and other regulations that maintain privacy and defend against unlawful searches.”

Kaine and Warner’s initial letter specifically focused on many of the technologies and programs 404 Media has been reporting on, including DHS’s contracts with Palantir, facial recognition company Clearview AI, its side-door access to Flock’s license plate scanning technology, its social media monitoring through a company called Penlink, its phone hacking contract through a company called Paragon, its face-scanning mobile app, as well as its use of various government biometric databases in immigration enforcement. 

“DHS’ reported disregard for adhering to the law and its proven ambivalence toward observing and upholding constitutionally-guaranteed freedoms of Americans and noncitizens, including freedom of speech and equal protection under the law, leaves us with little confidence that these new and powerful tools are being used responsibly,” the senators wrote. “Coupled with DHS’ propensity to detain people regardless of their circumstances, it is reasonable to question whether DHS can be trusted with powerful surveillance tools and if in doing so, DHS is subjecting Americans to surveillance under the pretext of immigration enforcement.”


The OpenAI and Nvidia $100b not-a-deal is off


Late last year, OpenAI was frantically signing deals for hundreds of billions of dollars! Six gigawatts of GPU chips from AMD! A hundred billion dollar deal with Nvidia for ten gigawatts of GPU chips!

The press went wild! Stock prices got quite the boost! Line went up!

It turns out this impossible data centre deal wasn’t possible. The Wall Street Journal broke the news that “The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice.” Jensen Huang of Nvidia was not impressed with OpenAI: [WSJ]

At the time, the ChatGPT-maker expected the deal negotiations to be completed in the coming weeks, people familiar with the plans said. But the talks haven’t progressed beyond the early stages.

… Huang has privately emphasized to industry associates in recent months that the original $100 billion agreement was nonbinding and not finalized … He has also privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic.

Jensen Huang is now playing down the non-deal and only wants to talk about the future, where he loves OpenAI and definitely wants to invest! [Reuters]

Jim Cramer of CNBC asked Jensen about this $100 billion not a deal. Jensen says everything is fine: [YouTube]

No, there’s no controversy at all. It’s complete nonsense. We love working with OpenAI. We are incredibly honored and delighted to be able to invest in their next round. And so we’re privileged that they’re inviting us to invest for each one of their rounds. We would love to be invited and we would consider of course investing in it.

Hope that makes everything clear.

OpenAI didn’t lash out in Reuters; their unnamed sources did: “OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say”. [Reuters]

Sam Altman tweeted: [Twitter, archive]

We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don’t get where all this insanity is coming from.

The first answer was from noted computer scientist Grady Booch: [Twitter, archive]

Hands Sam a mirror.

These were never going to be deals. The announcements were to get big numbers into the headlines. That made the stock numbers go up. Job number one!

But the wheels are falling off the vaporware deals. The wheels were always going to fall off, they couldn’t not fall off. But now the bubble investors are getting a glimmering that perhaps this is … vapor.

Read the whole story
mkalus
10 hours ago
reply
iPhone: 49.287476,-123.142136
Share this story
Delete

The Knit One Chair Rethinks Comfort, Trading Foam for Air

When it comes to furniture design, lightness is usually a metaphor. For Isomi’s Knit One Chair, designed by Paul Crofts, it’s a material reality. Gone are the layers of foam and heavy upholstery. In their place, a 3D-knitted skin is all that’s needed to balance comfort, structure, and sustainability in equal measure.

Removing the materials that traditionally add plushness might sound counterintuitive, but Knit One proves that comfort doesn’t rely on excess – it comes from smart design. “With the Knit One chair, we wanted to break away from wasteful, resource-heavy upholstery,” shares Paul Crofts, Design Director at Isomi. “The frame simply bolts together on site, while the knit sleeve, woven with mono-filament structural fibers, drops into place – minimal waste, maximum impact.” It’s proof that comfort doesn’t depend on layers, but on the integrity of materials and the innovation behind them.

The chair’s knitted sleeve is made from Camira’s SEAQUAL® Collection, a textile crafted from post-consumer marine plastic waste – up to 35 recycled bottles per meter. The material is shaped using advanced 3D knitting technology, which eliminates excess fabric waste and ensures precise construction. Fully recyclable and replaceable, the sleeve extends the chair’s lifespan while removing the need for adhesives or foam. A lightweight metal frame supports the knit structure and allows the chair to be shipped flat-pack, further reducing the overall carbon footprint and making local assembly effortless – a rarity when it comes to large-scale furniture.

The Knit One Chair is part of a modular seating system that includes a lounge chair, straight ottoman-style module, angled module, and a solid wood side table. Together, they form a flexible arrangement that adapts to any space, from open-plan offices to relaxed residential interiors. Each piece is fully reversible, allowing endless configurations that shift from solo lounging to group seating with ease, reflecting the same thoughtful versatility that defines Isomi’s approach to design.

Sustainable design can take many forms. Sometimes it requires redefining what is “excess,” removing it entirely, and reimagining how we can still experience comfort through simplicity. The Knit One Chair embodies a less is more philosophy, where designing with minimal materiality creates more space – for innovation, longevity, and a lighter impact on the planet.

To learn more about the Knit One Chair designed by Paul Crofts for Isomi, visit isomi.com.


Microsoft walks back AI in Windows 11! Yeah, right

Microsoft hasn’t been having the greatest month or two.

In November, Pavan Davuluri, President of Windows and Devices, proclaimed Microsoft’s Agentic Operating System future! Users told him nobody wanted this, and what they wanted was a Windows that worked properly. Davuluri disabled replies on the tweet. [Twitter, archive]

Davuluri tweeted a few days later: “We know we have work to do.” Sure do, mate. [Twitter, archive]

Windows 11 had a teensy problem in January where you’d do a system update and your PC wouldn’t even boot any more. Microsoft released not one but two out-of-band patches which they hoped would fix the problem. Eventually they figured out the booting problem happened if the December update hadn’t installed properly. [Bleeping Computer; Bleeping Computer]

Then the Microsoft stock price crashed last week after they issued quarterly numbers full of AI squirrels and confetti.

Time for a new Microsoft marketing perception initiative! Davuluri vibe-marketed a press release into the Verge: [Verge]

Microsoft is redirecting engineers to urgently fix Windows 11’s performance and reliability issues, aiming to halt the operating system’s death by a thousand cuts.

Even better — they’re talking about winding back on the AI spam! [Windows Central]

Copilot integrations like those found in Notepad and Paint are under review.

Probably because those two in particular are the most ridiculous AI integrations in history.

To be clear, Microsoft’s actual plans are to “streamline” the AI experience — not remove it. They don’t really want to do a single thing differently. This is only about perception.

All of this is a reaction to the user backlash, the fact that gamers are even talking about Linux, and the stock price going down. I would believe any of what Microsoft’s babbling as and when I see it. And not one moment before.

One actual change to AI in Windows is that Windows 11 now has an option to remove Copilot in the group policy editor! It’s only for Pro, Enterprise or Education versions. They started rolling this out to the beta channel in January and it looks like it’s live now. So there you go, a tiny bit of change we can believe in. [Bleeping Computer; Bluesky]
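For reference, group policy settings like this generally work by writing a policy value into the registry. The value below is the documented one that disabled the earlier integrated Windows Copilot; whether the new app-removal option uses the same key is an assumption on my part, so treat this as a sketch of the mechanism rather than the confirmed setting:

```shell
# Policy value documented for the earlier integrated Windows Copilot.
# The newer app-based removal policy may use a different key entirely.
# Group policy is only honored on Pro, Enterprise, or Education editions.
reg add "HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot" /v TurnOffWindowsCopilot /t REG_DWORD /d 1 /f
```

The group policy editor (gpedit.msc) exposed this same value as “Turn off Windows Copilot” under Administrative Templates, which is part of why Home editions, which lack gpedit, miss out.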

The problem, though, is that Windows 11 doesn’t … work. I’m no fan of Windows, but I’ve used Windows 10 and it basically works? If you need a Windows, 10 is fine. Windows 11 is buggy trash.

We don’t know that Windows 11 was vibe coded. There were a lot of headlines last year that 30% of Microsoft code was AI now! Based on something the CEO, Satya Nadella, said in a podcast. But of course, what he actually said was a carefully hedged claim in CEO speak: [YouTube, 45:00-45:08]

maybe 20 to 30 percent of the code that is inside of our repos today in some of our projects are probably all written by software.

“Maybe 20 to 30 percent”? In “some projects”? “Probably”? I think that means not really.

So we don’t have smoking gun evidence that Windows 11 is broken trash literally because of vibe coding. But Windows 11 feels like the most vibe coded thing ever. Nobody cared about Windows 11 working. Microsoft, where quality is job number 55 or so!
