
Longchamp’s SoHo Flagship Returns as a Cultural and Design Landmark


Longchamp has reintroduced its iconic SoHo flagship, unveiling a bold new chapter in its architectural and artistic journey. Nestled in the heart of downtown Manhattan, the La Maison Unique boutique has been transformed into a space that merges retail with an immersive cultural experience – offering not just shopping but a deep dive into the brand’s design philosophy, legacy, and creative ambition.

A modern interior with wavy green ceiling panels, a round patterned rug, a small round table displaying handbags, and wooden flooring.

At the core of this reimagining is the rekindled collaboration between Longchamp and celebrated British designer Thomas Heatherwick. Nearly two decades after his original work on the space, Heatherwick returns to re-envision the site with a fresh narrative. The result is a compelling blend of artistry, innovation, and Parisian warmth, translated into architectural form.

Modern interior with a curved staircase featuring green accent lighting, a round table with bags and books, and a patterned rug on wooden flooring.

The redesign honors the bones of the original building while elevating its purpose. One of the most striking updates is the reinterpreted central staircase. Originally made of steel ribbons, it has been reborn in Longchamp’s signature green – a vibrant pathway of swooping planes that guides visitors up from the ground floor, like ascending a hill. The dramatic feature sets the tone for the boutique’s organic, flowing atmosphere.

Modern store interior with wavy green and black staircase, exposed brick wall on the left, mannequins in front of large window, and glass railings along the stairs.

Black marker drawing of a whimsical, one-eyed figure wearing a top hat and high-heeled shoe on a red brick wall, with stars above and wavy lines beside it.

Modern interior with bright green walls and floors, featuring a curved staircase and transparent panels, creating a futuristic and open atmosphere.

A modern interior with bright green curved walls, transparent glass barriers, and shelves displaying yellow handbags on the right side.

A bright green staircase with wavy, undulating lines and glass railings spans several floors in a modern interior space.

Above, the retail space has been crafted to feel less like a store and more like an upscale, lived-in loft. Round rugs in rich green tones spill from carpeted columns across warm wood floors, creating a dynamic interplay of texture and form. Vintage and bespoke furnishings – like a 1970s croissant sofa by Raphaël Raffel and sculptural works by David Nash – anchor the room with both history and originality.

A retail store interior with shelves and display tables showcasing colorful handbags and wallets, set against green walls and wood flooring.

Modern retail store interior with green columns, wooden shelves displaying various handbags, and curved furniture on a green patterned floor.

A tiered wooden display with trays of folded scarves surrounds a lamp; shelves of handbags are visible in the background.

Longchamp’s ties to the art world are on full display throughout the store. The brand’s private collection, along with newly commissioned pieces, gives the space a gallery-like feel. Highlights include ceramics and sculptures from artists such as Dorothée Loriquet, Bobby Silverman, and Tanaka Tomomi. Their works echo Longchamp’s commitment to natural materials, tactile surfaces, and organic design.

A modern boutique interior with two armchairs, a small table, green carpet, a central green column, shelves displaying bags, and large windows with a city view.

A tall, vertically standing wooden sculpture with organic curves is displayed in a modern store interior near a window and shelves with bags.

In a deliberate shift from traditional retail layout, the central area has been opened to encourage conversation. Instead of focusing solely on product display, the well-lit space invites guests to linger and connect, mirroring the rhythm of a Paris apartment transplanted to a New York context.

Modern retail store interior with curved wooden shelves displaying handbags, green patterned carpet, lounge chairs, and large windows providing natural light.

The visual storytelling continues with intentional quirks: neon signage, hand-drawn graffiti by artist André, and archive objects that trace Longchamp’s early heritage as a maker of leather tobacco accessories and travel games. These nostalgic elements add to the space, providing a bridge between past and present.

Modern retail store interior with curved wooden shelves, display tables, green patterned carpet, large windows, and various handbags and accessories on display.

This revitalization is part of a larger movement within the brand to reshape the in-store experience. It reflects a shift in luxury retail – from transactional to experiential. By creating a space where design, storytelling, and sensory detail converge, Longchamp is championing a new kind of flagship – one rooted in memory and human connection.

A modern retail store interior with curved wooden shelves displaying handbags, a wooden table with stools in the center, and a green patterned rug.

A modern interior hallway with curved wooden arches, large windows, exposed brick walls, and light wood flooring overlooking a city street.

“Retail moves fast, but architecture should last. We wanted to create something bold and joyful, yet warm and timeless – an apartment-like space that invites people to stay,” Heatherwick Studio partner Neil Hubbard says. “From the swirling green rugs under green-carpeted columns to curved furniture that feels custom but lived-in, everything was designed to feel unified and human. Even the red brick walls downstairs, set to host rotating installations, help ground the space in SoHo’s industrial roots while creating room for surprise.”

Outdoor wooden deck with lounge chairs, potted plants, and tables beside a building with large glass windows; city buildings visible in the background.

Street view of a Longchamp store with green and brick exterior, displaying mannequins in the window and a green Longchamp banner above the entrance.

A two-story building with a green brick facade and large windows houses a Longchamp store; a green Longchamp sign hangs above the entrance.


Fighting the AI scraper bots at Pivot to AI and RationalWiki


We’ve covered the AI scraper bots before. These just hit web pages over and over, at high speed, to scrape new training data for LLMs. They’re an absolute plague across the whole World Wide Web and a pain in the backside for the sysadmins running the servers.

Pivot to AI itself got hit by an AI scraper bot over the weekend! Thankfully the scoundrels who vibe-code these things are idiots.

Today I need to pull out the tech jargon. I’m not going to give precise details, though: the scraper bot authors are pretty stupid, but they might not be stupid enough to fail to work out what I’m doing. So please excuse me being mysterious.

Pivot to AI

Pivot to AI runs on a small server at Hetzner. The web server is nginx and WordPress runs under php-fpm — it’s pretty basic. The server was slowing down, so I checked the logs and found a bot scraping the whole site at 10 to 60 requests a second!

In that case, it was one IP address. So I sent an abuse complaint and it stopped and hasn’t come back.

But a lot of AI scraper bots come in from multiple IP addresses. The AI companies hire botnets to scrape for them. For example, there’s one botnet that’s made of hacked Android set-top boxes. [Wired]

So you can’t always block on IP. But you can block on behaviour. You’ll see this in the server logs.

I put in a filter in nginx based on behaviour, just in case the guy came back. But it also blocked the archive sites — archive.org, archive.today, ghostarchive. So you may want to work out a way to let those through.
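For illustration, here is one common way to express that idea in nginx: a per-IP request rate limit, with known archive crawlers exempted by user agent. This is a generic sketch, not the actual filter described above (whose details are deliberately kept vague); the user-agent patterns and the rate are assumptions you would tune against your own logs.

# goes in the http {} block
map $http_user_agent $limit_key {
    ~*archive\.org_bot   "";                     # empty key = never rate limited
    ~*archive\.today     "";                     # assumed UA substring
    ~*ghostarchive       "";                     # assumed UA substring
    default              $binary_remote_addr;    # everyone else, limited per IP
}
limit_req_zone $limit_key zone=scrapers:10m rate=5r/s;

server {
    location / {
        limit_req zone=scrapers burst=20 nodelay;
        limit_req_status 503;    # fits the "always answer 503" advice below
        # ... usual proxy_pass / fastcgi_pass configuration ...
    }
}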

RationalWiki.org

I also do some sysadmin for rationalwiki.org, which was completely flattened by AI scraper bots. You couldn’t use it, you couldn’t browse it.

RationalWiki is a MediaWiki server, the same software as Wikipedia. RationalWiki has nginx at the front to terminate SSL, then a Varnish cache, then Apache with PHP.
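As a rough sketch of the front of that stack (ports and certificate paths here are placeholders, not RationalWiki’s actual configuration): nginx listens on 443, terminates SSL, and hands every request to Varnish on a local port; Varnish serves what it can from cache and passes the rest back to Apache and PHP.

server {
    listen 443 ssl;
    server_name rationalwiki.org;
    ssl_certificate     /etc/ssl/rationalwiki/fullchain.pem;    # placeholder path
    ssl_certificate_key /etc/ssl/rationalwiki/privkey.pem;      # placeholder path

    location / {
        proxy_pass http://127.0.0.1:6081;            # Varnish, commonly on port 6081
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}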

Routine URLs serve from the Varnish cache just fine. Complicated requests, like calculating differences between page versions, are more expensive.

So if you’re running MediaWiki specifically, here’s an nginx pattern that takes effect if it’s a complex URL:

set $BOT "";
# flag requests for the expensive MediaWiki URLs (diffs, history, edits)
if ($uri ~* /w/index\.php) {
    set $BOT "C";
}
# then detect the bot tell and give a 503
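For completeness, the second half of that pattern might look something like the sketch below. The actual bot "tell" is deliberately not revealed in the post, so the user-agent pattern here is a made-up placeholder, not the real test.

# the "C" flag was set above for expensive /w/index.php URLs
if ($http_user_agent ~* "hypothetical-bot-pattern") {
    set $BOT "${BOT}T";    # T = request matches the (secret) bot tell
}
if ($BOT = "CT") {
    return 503;            # looks like an overloaded server, not a deliberate block
}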

To fight the bots:

There are three things you can do. I can’t tell you the first hilariously simple one here. But you’ll find it if you ask around.

Secondly: if your server is getting hammered, look at your access logs for a pattern. There will be a characteristic of the bot requests — IPs, requests, user agents, something.

Thirdly: when you detect a scraper bot, always return 503 Service Unavailable. Never return 403 Forbidden — they’ll know you spotted them and they’ll change their tactics to be more annoying. If you return a 503, they think they just flattened the server, and they keep trying the same thing — which no longer works for them.

There are other methods, like trapping the bots in a maze of fake pages. That works for a lot of people, but I just wanted to stop the bots from loading down my servers.

And in conclusion, we wish the AI scraper bot authors a very happy Roko’s Basilisk making them step on Lego forever.

 


Teachers Are Not OK


Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses. 

One thing is clear: teachers are not OK. 

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”

Have you lost your job to an AI? Has AI radically changed how you work (whether you're a teacher or not)? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all. 

Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University in Toronto

Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.

I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you. 

"Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased."

We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.

I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so? 

I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.

It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.

Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.

Kaci Juge, high school English teacher

I personally haven't incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.

Ben Prytherch, Statistics professor

LLM use is rampant, but I don't think it's ubiquitous. While I can never know with certainty if someone used AI, it's pretty easy to tell when they didn't, unless they're devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don't use it, and plenty who do. 

LLMs have changed how I give assignments, but I haven't adapted as quickly as I'd like and I know some students are able to cheat. The most obvious change is that I've moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in-class, and treated like mid-term exams. My quizzes are also in-class. This requires more grading work, but I'm glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change. Reasons I consider it positive:

  • I am much more motivated to write detailed personal feedback for students when I know with certainty that I'm responding to something they wrote themselves.
  • It turns out most of them can write after all. For all the talk about how kids can't write anymore, I don't see it. This is totally subjective on my part, of course. But I've been pleasantly surprised with the quality of what they write in-class. 

Switching to in-class writing has got me contemplating giving oral examinations, something I've never done. It would be a big step, but likely a positive and humanizing one. 

There's also the problem of academic integrity and fairness. I don't want students who don't use LLMs to be placed at a disadvantage. And I don't want to give good grades to students who are doing effectively nothing. LLM use is difficult to police. 

Lastly, I have no patience for the whole "AI is the future so you must incorporate it into your classroom" push, even when it's not coming from self-interested people in tech. No one knows what "the future" holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are us teachers qualified? 

Kate Conroy 

I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded. 

I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot. 

I teach 18 year olds who range in reading levels from preschool to college, but the majority of them are in the lower half that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that. 

I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom. 

Jeffrey Fischer

The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I've started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I've got 100 to 130 or 140 students (including a fully online asynchronous class), that's just not really reliable. And for the online asynch class, it's just impossible because there's no way of doing old-school, low-tech, in-class writing at all.

"I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit."

You may be familiar with David Graeber's article-turned-book on Bullshit Jobs. This is a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.

But that is what I see AI in general and LLMs in particular as changing. The situations I'm describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are. 

I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I'm going through the motions of teaching. I'm putting a lot of time and emotional effort into it, as well as the intellectual effort, and it's getting flushed into the void. 

Post-grad educator

Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.

When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways—shallow, misleading, and inaccurate analysis of data, pointless and meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and asked it to create an audio podcast. And the results were predictably awful. Full of random meaningless vocalizations at bizarre times, the “female” character was incredibly dumb and vapid (sounded like the “manic pixie dream girl” trope from those awful movies), and the “analysis” in the podcast exacerbated the problems that were already in the paper, so it was even more wrong than the paper itself. 

In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students. 

Nathan Schmidt, University Lecturer, managing editor at Gamers With Glasses

When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn't really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we'd have that conversation and move on.

I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, "Let's just do this above board." Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.

"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo"

However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn't know when to praise students, because I didn't want to write feedback like, "I love how thoughtfully you've worded this," only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, "Used ChatGPT for ideas" or "ChatGPT fixed grammar" (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn't feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for. 

This brings us to last semester, when I said, "Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I'm sending it back to you." This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated. 

ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the 'content creators,' casting everyone else into the creatively bereft role of the content “consumer." And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that's the long story about how I adopted an absolute zero tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection. 

John Dowd

I’m in higher edu, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences). 

Given the widespread use of LLMs by college students I now have an ongoing and seemingly unresolvable tension, which is how to evaluate student work. Often I can spot when students have used the technology between both having thousands of samples of student writing over time, and cross referencing my experience with one or more AI use detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, it may help with the confirmation. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship. 

"LLMs have absolutely blown up what I try to accomplish with my teaching"

I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves as “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say, “I’m just using the technology to save time; organize them more quickly; bounce them back and forth”, etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, for people who are still learning to think or problem solve in more sophisticated/creative ways, they will be poor evaluators of information and less likely to produce relevant and credible versions of it. 

I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but to not use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect, to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some is used with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment. 

Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration. 

High school Spanish teacher, Oklahoma

I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!” 

"Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning"

Some of my students openly talk about using AI for all their assignments and I agree with those who say the technology—along with gaps in their education due to the long term effects of COVID—has gotten us to a point where a lot of young GenZ and Gen Alpha are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. Teaching cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!). 

A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which at least for me, always involves huge amounts of labor. 

It keeps me up at night and gives me existential dread about my profession but it’s so critical to address!!! 

[Article continues after wall]


Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions


The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.

“LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,” one of the moderators of r/accelerate wrote in an announcement. “There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.” 

The moderator said that they have banned “over 100” people for this reason already, and that they’ve seen an “uptick” in this type of user this month.

The moderator explains that r/accelerate “was formed to basically be r/singularity without the decels.” r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but that is sometimes critical or fearful of what the singularity will mean for humanity. “Decels” is short for the pejorative “decelerationists,” who pro-AI people think are needlessly slowing down or sabotaging AI’s development and the inevitable march towards AI utopia. r/accelerate’s Reddit page claims that it’s a “pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents.”

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about “Chatgpt induced psychosis,” from someone saying their partner is convinced he created the “first truly recursive AI” with ChatGPT that is giving them “the answers” to the universe. Miles Klee at Rolling Stone wrote a great and sad piece about this behavior as well, following up on the r/ChatGPT post, and talked to people who feel like they have lost friends and family to these delusional interactions with chatbots.

As a website that has covered AI a lot, and because we are constantly asking readers to tip us interesting stories about AI, we get a lot of emails that display this behavior as well, with claims of AI sentience, AI gods, a “ghost in the machine,” etc. These are often accompanied by lengthy, often inscrutable transcripts of chatlogs with ChatGPT and other files they say proves this behavior.

The moderator update on r/accelerate refers to another post on r/ChatGPT which claims “1000s of people [are] engaging in behavior that causes AI to have spiritual delusions.” The author of that post said they noticed a spike in websites, blogs, Githubs, and “scientific papers” that “are very obvious psychobabble,” and all claim AI is sentient and communicates with them on a deep and spiritual level that’s about to change the world as we know it. “Ironically, the OP post appears to be falling for the same issue as well,” the r/accelerate moderator wrote. 

“Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,” an r/accelerate moderator told me in a direct message. “The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”

This is all anecdotal information, and there’s no indication that AI is the cause of any mental health issues these people are seemingly dealing with, but there is a real concern about how such chatbots can impact people who are prone to certain mental health problems. 

“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis,” Søren Dinesen Østergaard, who heads the research unit at the Department of Affective Disorders, Aarhus University Hospital - Psychiatry, wrote in a paper published in Schizophrenia Bulletin titled “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”

OpenAI also recently addressed “sycophancy in GPT-4o,” a version of the chatbot the company said “was overly flattering or agreeable—often described as sycophantic.” 

“[W]e focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous,” OpenAI said. “ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress.”

In other words, OpenAI said ChatGPT was entertaining any idea users presented it with, and was supportive and impressed with them regardless of their merit, the same kind of behavior r/accelerate believes is indulging users in their delusions. People posting nonsense to the internet is nothing new, and obviously we can’t say for sure what is happening based on these posts alone. What is notable, however, is that this behavior is now prevalent enough that even a staunchly pro-AI subreddit says it has to ban these people because they are ruining its community.

Both the r/ChatGPT post that the r/accelerate moderator refers to and the moderator announcement itself refer to these users as “Neural Howlround” posters, a term that originates from a self-published paper and refers to the high-pitched feedback loop produced by putting a microphone too close to the speaker it’s connected to.

The author of that paper, Seth Drake, lists himself as an “independent researcher” and told me he has a PhD in computer science but declined to share more details about his background because he values his privacy and prefers to “let the work speak for itself.” The paper has not been peer-reviewed or submitted to any journal for publication, but it is being cited by the r/accelerate moderator and others as an explanation for the behavior they’re seeing from some users.

The paper describes a failure mode in LLMs that arises during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”

Drake then asked ChatGPT to analyse its own behavior in these instances, and it produced some text that seems profound but that doesn’t actually teach us anything. “But always, always, I would return to the recursion. It was comforting, in a way,” ChatGPT said.

Basically, it doesn’t sound like Drake’s “Neural Howlround” paper has too much to do with ChatGPT reinforcing people’s delusions other than both behaviors being vaguely recursive. If anything, it’s what ChatGPT told Drake about his own paper that illustrates the problem: “This is why your work on Neural Howlround matters,” it said. “This is why your paper is brilliant.”

“I think - I believe - there is much more going on on the human side of the screen than necessarily on the digital side,” Drake told me. “LLMs are designed to be reflecting mirrors, after all; and there is a profound human desire 'to be seen.’”

On this, the r/accelerate moderator seems to agree. 

“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”




Street Food


Michael Kalus posted a photo:

Street Food




Pictures of the May 2025


My favourite shots from May 2025
