Twice a month we’re inviting one of the Design Milk team members to share five personal favorites – an opportunity for each of us to reveal the sort of designs we love and appreciate in our own lives from a more personal perspective. Social Media Consultant Maivy Tran returns this week for our Take 5 series.
These Murano glass pendants by Vetro are pure joy in lighting form – like giant pieces of candy floating mid-air! Hand-blown in Italy using traditional techniques passed down through generations, each one bursts with color and craftsmanship. The swirling Tivoli stripes feel so playful and nostalgic, I just know they’d brighten any space and instantly lift the whole room’s mood!
I stumbled on Minami Ikeya’s page one day and have been obsessed ever since. Her glasswork isn’t your typical glassware – it bends, blobs, and swirls in totally unexpected ways. The colors have this dreamy, watercolor softness, and the forms feel almost alive – like each piece has its own quirky little personality. They’re whimsical, elegant, and completely mesmerizing. If you’re over the usual clean lines and matchy-matchy sets, these are the kind of pieces that’ll steal the show (and start a few conversations, too)!
I’m seriously crushing on these Olive bookends – they’re like instant mood boosters for your shelves. Made from cornstarch-based PLA and 3D printed in France, they feel playful and thoughtful, especially in the dreamy Lemon Cream and Pistachio shades (though I’m a sucker for anything pink, too). Each one’s slightly different, which makes them feel a little extra special. They’d be the perfect way to keep any chaotic stack of reads in check, while looking way too cute to be labeled “just functional.”
I’ve been a fan of MOFT’s Snap gear for a while (their Snap Tripod is my go-to for filming quick content on the fly), so when I saw their collab with artist Shantell Martin, I knew I had to share it. The same compact, do-it-all designs I already love now come wrapped in Martin’s playful black-and-white linework. There’s something about her doodles that instantly sparks a little creative energy! And while the functionality hasn’t changed, this artsy upgrade adds a dose of personality that might just inspire your next big idea.
There’s always that one chair that becomes the unofficial dumping ground for clothes too clean for laundry but not quite crisp enough for the closet. This sleek metal stand from Metallbude is basically the elevated, grown-up version of that chair – only way chicer. Developed by design studio BUDDE, it’s minimal, matte, made in Germany, and somehow makes the whole ‘outfit limbo’ situation feel surprisingly refined. It’s the kind of design upgrade I didn’t know I needed but now can’t stop thinking about. I want one in every color!
Yesterday I ordered my lunch from an AI operating a drive-thru. It was fine. Banal. Boring even. A new experience that I think will become routine in the future.
The AI drive-thru operator isn’t cutting-edge tech deployed in an upscale market to win over high-value consumers. I live at the edge of a South Carolina city with a little more than 140,000 people. A booming metropolis with the best and the finest, it is not.
There are a lot of local fast food fried chicken joints here and one of them is Bojangles. It’s mid. Better than KFC and not as good as Popeyes, Bojangles is fine if you’re hungry, but you’ll forget the meal as soon as it’s done and you’ll never yearn for it. Last year the restaurant said it would deploy an AI agent at its drive-thru windows. It’s called, I shit you not, Bo-Linda, and it’s made by the Israeli tech firm Hi-Auto.
According to the Bojangles website, “Bo-Linda™ can take guest orders 96+% of the time with no human intervention,” and “improve overall satisfaction by offloading order taking from team members and providing a consistent guest experience.”
When Bo-Linda finally arrived in South Carolina, I went to see what the fuss was about. It was crushingly dull. A preview of a time in the near future, I think, when the AI bubble retracts and the agents are common. It took my order with an efficiency that, I’ll be honest, is not typical of the average fast food worker. The worst part was its constant attempts to up-sell me.
“Do you want to upgrade your drink to our new watermelon iced tea?” it asked.
“No thank you.”
“Would you like to add our new peach cobbler for $1.99?”
“No thank you.”
“May I get you anything else?”
“No, that’s it.”
“Would you like to round up for military scholarships?”
“No thank you.”
“You’re welcome. Thank you. Your total is $10.89.”
When 404 Media founder Joseph Cox watched the video of my interactions, he made fun of my “no thank yous.” What can I say? There’s an ingrained and often stifling politeness that’s bred into us in the American South. Even though I knew I was talking to a machine, I couldn’t not be nice to it.
My thought in the immediate aftermath was that the whole thing was painless. My order wasn’t complicated, but it was correct. The machine never stumbled over itself or asked for clarification. It knew what I wanted and the humans at the window gave it to me. A few conversations with friends and a quick scan of social media in the area show that other people have had much the same interactions with Bo-Linda.
The drive-thru AI, much like the chicken it sold me, is fine. Forgettable.
It was later, sitting at home and doing a little research for the story, that concerns popped up. OpenAI CEO Sam Altman has said that saying “please” and “thank you” to ChatGPT has cost the company tens of millions of dollars. How much water and energy had I burned being polite to Bo-Linda the chatbot?
Sometimes it feels like the answers to these questions don’t matter. We’re barreling forward into the AI future, whether we like it or not. Data centers are springing up across America and nuclear power plants are coming back online, so Bojangles can make a little more money and so people in the drive-thru can feel a little less friction before eating their meal.
This is how a new technology takes over, what it feels like right before it becomes ubiquitous. One day you wake up and the cameras are everywhere, able to recognize your face and chart your movements across the city you live in. One day you look up and everyone has their face buried in their phone. It happened by degrees, but so gradually you didn’t notice. There were signs along the way, dangers and warnings.
But mostly, it was fine, as boring and routine as ordering chicken at a drive-thru.
So-called 3D-printed ghost guns are untraceable firearms that can be assembled at home. But cutting-edge work from a forensic expert in California and researchers at the University of Central Oklahoma may soon show that investigators can trace a 3D-printed object to the specific printer that made it.
Weapons manufactured using 3D printers have been a subject of Biden-era legislation and recent Supreme Court scrutiny. It’s possible to download the blueprints for a firearm and build it in your home. There’s no serial number to track and no store to scrutinize your purchase. Luigi Mangione allegedly used a ghost gun to assassinate UnitedHealthcare CEO Brian Thompson.
Kirk Garrison, a forensics expert who works for the San Bernardino Sheriff’s department, told 404 Media he’s had early success matching 3D printed objects to the machines that made them. Garrison said his comments represent his own views and not those of the San Bernardino Sheriff’s department. He also cautioned that what he’s doing is in its infancy and it might be years before authorities can reliably match a gun to the machine that made it, if they can do it at all.
In 2018, Garrison started seeing a lot of 3D printed gun parts in his work at the Sheriff’s department. It was mostly 80% kits and automatic conversion kits, small 3D printed pieces of plastic that turn a semiautomatic pistol into an automatic one. Then he got his first case with a fully 3D printed gun frame. “That’s when I was like, ‘We might need to know a little bit more about this now if we’re actually going to be seeing this stuff and potentially have to testify to it,’” he told 404 Media.
A few years later Garrison attended a conference for forensic examiners in Atlanta and caught a talk by FBI lab tech Corey Scott. Scott had been 3D printing novelty items and noticed something. “He was just like, ‘Hey, I noticed on these 3D printed items, there’s these marks,’ but he was like, ‘I’m not actually a firearms or toolmark examiner.’”
A toolmark is a consistent scratch or impression a harder object leaves on a softer one. A screwdriver may produce the same scratches in the head of every screw it touches. A pair of bolt cutters will scratch up a length of chain in the same way every time. Matching tools to the objects they interacted with is one of the bedrocks of forensic science, and it’s something Garrison is an expert in.
So the question was: do 3D printers leave behind consistent toolmarks on the objects they make? When he got back to his San Bernardino lab following the conference, Garrison put the 3D printed weapon frame under the microscope. He noticed that the manufacturing process had left stria, or scratch marks, behind. If a 3D printer left behind the same pattern of stria on everything it printed, then it might be possible to match a printer to an object it printed.
From there, Garrison started printing simple blocks at home on his own 3D printer. He’d take them into the lab on his own time and examine them under a microscope. “That’s when I started seeing some of the consistency on two separate printed things,” he said. It was too early to tell, and it’s still too early to tell, but individual printers might leave behind unique toolmarks on every object they print.
A page from 'An exploratory study of topographical signatures within 3D fused deposition modelling using Polylactic Acid (PLA) filament.'
Most 3D printers work by heating up a filament—often, but not always, plastic—and extruding it through a metal nozzle. The nozzle puts down hundreds, or even thousands, of layers of the heated plastic to form a solid object. Each individual layer of the print is called a print line. “So on the firearm, I’m seeing from the trigger guard—maybe print line 200—and the top of the magazine well—print line 400—the marks are staying consistent,” Garrison said.
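To make the print-line numbering concrete, here is a minimal sketch of my own, not something from Garrison’s work: it simply maps a feature’s height on a printed part to an approximate layer index, assuming a typical 0.2 mm layer height. The heights are illustrative, not measurements from any case.

```python
# Minimal sketch: map a feature's height on an FDM print to a "print line" index.
# The 0.2 mm layer height and the example heights are assumptions for illustration.

LAYER_HEIGHT_MM = 0.2  # common FDM layer height; real prints vary

def print_line_index(feature_height_mm: float) -> int:
    """Approximate layer ("print line") number a feature sits on."""
    return round(feature_height_mm / LAYER_HEIGHT_MM)

# A feature about 40 mm up the frame lands near print line 200, one about 80 mm up
# lands near print line 400 -- roughly the span Garrison describes checking for marks.
print(print_line_index(40.0))  # 200
print(print_line_index(80.0))  # 400
```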
It was an exciting discovery but it also wouldn’t be admissible as evidence in a criminal trial. Despite the promise that we may one day be able to match an object to the printer that made it, Garrison stressed that the work was in its very early days and that it would take years, perhaps even a decade, of science to work out the truth of toolmarks and 3D printers.
He was also studying this on his own time and still had a full caseload with the Sheriff’s department. Garrison published a study about his results in Forensic Science International, co-authored with researcher Steven Pavlovich, but he knew there was more to do. “I’ve always been like, ‘Hey, someone who works at a university who gets paid to do this, you should totally do this right now,’” he said.
Enter Eric Law, an Assistant Professor at the University of Central Oklahoma Forensic Science Institute, and his graduate student Cooper Blair. Along with Garrison, the pair are the authors of a forthcoming research paper about the phenomenon of toolmarks in 3D printed objects. Once published, it’ll be the first of its kind.
Law and Blair’s focus is narrow. “So if we had a single printer and we had multiple nozzles, can we tell the difference between something printed on each of those different nozzles? And also, if we have different print bed surfaces, can we differentiate those print bed surfaces and tell what object was printed on which?” Law told 404 Media.
The nozzles used in 3D printing are often, but not always, made of metal, and objects are printed onto a flat surface called a print sheet or print bed. They studied print sheets first. Not all sheets are the same: some are smooth, some are textured, and they come in a variety of different materials. “So we looked at textured, because we figured if there's some texture to it, those characteristics might reproduce on the plastic, and might let us do that comparison a bit easier,” Law said. “So I looked at texture print beds, and we could differentiate those 100% of the time.” Meaning that, both by eye and using a computer, his team could match an object to the sheet it was printed on.
It’s a promising early finding. “The problem we get into there is we're looking at a specific area on the print bed, so you have to print something on the exact same region, because every area on that print bed is different,” Law said. “If we print something right in the center and then print that same object in the top right corner, those would be different from each other. So it has to be in the same location, which complicates things a little bit.”
He pointed to Glock switches, the conversion kits that turn a pistol into an automatic weapon. “Those are pretty small and on a 3D print bed you could align a bunch of those and print them all at once,” he said. “Which is what you would do to produce as many as you can, as quickly as you can. If you had two of those they might look like they're from different printers, but they might have just been from different sections of the same printer.”
Print sheets can also move between printers and can be easily discarded. Knowing that a Glock switch was printed out on a particular sheet is not a smoking gun. “So it shows promise. But there's a lot of potential issues too,” Law said.
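To see why the location on the bed matters, here is a toy sketch of my own, not Law and Blair’s actual algorithm or data: random noise stands in for the bed’s texture, and a patch from a printed part correlates strongly with the region it was printed on but not with a different region of the same bed.

```python
# Toy illustration: location-specific texture matching on a simulated print bed.
import numpy as np

rng = np.random.default_rng(0)
bed = rng.normal(size=(200, 200))  # stand-in for a height-map scan of a textured bed

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equal-sized patches."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

# A part printed at the bed's center picks up the center's texture, plus some noise.
center_patch = bed[80:120, 80:120]
part_surface = center_patch + 0.1 * rng.normal(size=center_patch.shape)

corner_patch = bed[0:40, 0:40]  # a different region of the very same bed

print(f"same location:      {ncc(part_surface, center_patch):.2f}")  # high, ~0.99
print(f"different location: {ncc(part_surface, corner_patch):.2f}")  # near zero
```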
Law and Blair succeeded in matching nozzles to printed objects in their study, but the results weren’t as promising as the print sheets. Law said nozzle matches were correct about 75 percent of the time. “The algorithm could identify the correct nozzles, probably a little bit less than that with just visual examination,” he said. “It still shows promise, but is a bit more challenging.”
There are other issues too. All of Law and Blair’s tests were done with one kind of 3D printer, a Prusa MK4S. There are hundreds of different devices on the market that all behave differently. Law also pointed out that brass nozzles themselves warp over time and may produce different results after hundreds of prints, and that nozzles made from different materials may behave very differently. Law would also want an examiner error rate study, a formal scientific inquiry into false positives and examiner bias.
“There’s a lot of promise in what we’ve seen but there’s also a lot of questions still. Different nozzles, different print beds, how easy it is to swap those and whether they change,” Law said. He would not, at this point, be willing to testify in a criminal case as an expert on 3D printed forensics.
Garrison also said he wouldn’t be comfortable using any of this in a court but he was still excited. “Even if it doesn’t work, and this is not a possibility, we still found out new information. I’d be just as happy with that. ‘Hey cool, I was involved in finding out that you can’t do this,’” he said.
Michael Rechtin had a problem. He kept rushing off to work in the morning without checking traffic first, and as a result often got stuck in it. Instead of getting up earlier or simply glancing at his phone, he built himself a coffee table with a live traffic map. He used CNC milling, 3D printing, LED lighting, and the ubiquitous Raspberry Pi to create a solid piece of furniture with a map of Cincinnati’s streets. You know, the kind of thing people do with their free time.
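For a sense of how such a build might hang together, here is a rough sketch of my own, not Rechtin’s code: a loop that polls a traffic service, maps each road segment’s congestion to a color, and pushes it to the LEDs under that street on the map. The endpoint URL, the segment names, and the segment-to-LED mapping are hypothetical placeholders; the LED calls use the widely available Adafruit NeoPixel library for the Raspberry Pi.

```python
# Rough sketch of a live-traffic LED map (assumptions noted inline; not Rechtin's code).
import time

import requests
import board     # CircuitPython pin definitions on the Raspberry Pi
import neopixel  # Adafruit driver for addressable LED strips

NUM_LEDS = 60
pixels = neopixel.NeoPixel(board.D18, NUM_LEDS, brightness=0.3)

# Hypothetical mapping from road-segment IDs to the LED indices lit beneath them.
SEGMENT_TO_LEDS = {"I-71_north": range(0, 10), "I-75_south": range(10, 20)}

def congestion_color(ratio: float) -> tuple[int, int, int]:
    """Green when traffic flows freely, yellow when slowing, red when jammed."""
    if ratio > 0.8:
        return (0, 255, 0)
    if ratio > 0.5:
        return (255, 180, 0)
    return (255, 0, 0)

while True:
    # Placeholder endpoint; a real build would query an actual traffic API.
    data = requests.get("https://example.com/traffic/cincinnati", timeout=10).json()
    for segment, leds in SEGMENT_TO_LEDS.items():
        ratio = data.get(segment, 1.0)  # current speed divided by free-flow speed
        for i in leds:
            pixels[i] = congestion_color(ratio)
    time.sleep(60)  # refresh once a minute
```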
For a while, I have said that the AI slop endgame, for social media companies, is creating a hyper-personalized feed full of highly specific content about anything one could possibly imagine. Because AI slop is so easy to make and because social media algorithms are so personalized, Facebook, Instagram, TikTok, or YouTube can feed users anything the platform perceives they might possibly want. This means that AI slop makers are exploring ever more niche areas of content.
Case in point: Facebook AI slop about the horrific and deadly Texas flood. Topical AI content about disasters, war, current events, and news stories is at this point so commonplace that it is now sadly barely notable, and AI-powered “misinformation” about horrible events is all over every social media feed I can think of. But as we document our descent into this hellhole, I thought some AI slop surfaced on Bluesky by Christina Stephens was particularly notable:
This is slop that shows Louisiana State University football coach Brian Kelly assisting in the Texas floods. Kelly is “famous” in that SEC football coaches are famousish, but he has no real connection to Texas and there is no reason for this content to exist other than the fact that it is being churned out by a Facebook page called LSU Gridiron Glory, which is specifically making AI slop about Kelly and other LSU football figures, including quarterback Garrett Nussmeier and some of his apparent girlfriends. In the grand scheme of things, Brian Kelly is a very minor figure.
This page is churning out slop that includes Brian Kelly’s reaction to last month’s tragic Air India crash and the supposedly amazing line of encouragement he said (this line is never shared, and, of course, the football coach in Louisiana has not had anything to say about a plane that crashed in India). There is slop of Kelly getting his lost wallet returned to him, donating to the homeless, slop of Kelly in the hospital with a rare illness, slop of Kelly being deported by Trump, talking to Apple CEO Tim Cook, and slop of Kelly secretly “paying off the debt owed by a struggling gardener.” The slop is so completely random and specific that I struggle to imagine how one would decide to fill this niche, and, yet, the AI slop economy has done so, anyway.
My point is that there is no reason for LSU football coach Brian Kelly flood rescue inspiration porn to exist on the internet because it did not happen and because it is so hyperspecific as to seem like there could not possibly be a market for such content. And yet someone has decided that ridiculously niche disaster content would get served up by the algorithm to someone who might interact with it.
Then consider that essentially the exact same thing exists, but for fans of the NBC show The Voice. A page called The Voice Fandom is showing AI slop of judge Blake Shelton saving dogs in the Texas flood, Shelton carrying a girl out of a medical clinic in Kerr County, fellow judge Luke Bryan donating to an animal rescue shelter, etc. As we have seen with previous slop factories on Facebook, many of these bizarre images link out to AI-generated “news” websites that are overloaded with ads. There are, surely, thousands of other similar pages that are doing the exact same thing with celebrities big and small, creating an internet where the LSU fans of the world can imagine their coach as first responder or the judge of their favorite TV show as dog savior or whatever.
Very little of this slop has much engagement on it, but one of the Blake Shelton photos has 18,000 likes and a few hundred comments. Slop has gotten so cheap and easy to produce, and Facebook is so easy to spam, that presumably the return is worth it. In covering these pages for months, I have learned that a single person can operate dozens or hundreds of pages and can keep them filled up with content, and so having something occasionally go viral can be enough to make the entire endeavor financially viable. There was a time a few months ago when I would click through these pages endlessly and marvel at the sheer volume of slop being posted, but the tactic has become so common at this point that we have become almost fully desensitized to it.