Perched dramatically atop one of Lombok’s highest slopes, Villa Boë by Alexis Dornier is a topographical marvel. Spanning over 12,390 square feet, the villa doesn’t just sit on the steep hillside – it emerges from it, blending architecture, landscape, and art. At Tampah Hills – a community known for its commitment to sustainable luxury – Villa Boë feels like a natural extension of the landscape rather than an intrusion upon it.
Steep and raw, the terrain required ingenuity and sensitivity, resulting in Dornier creating a layered design that works with the contours of the hillside. At the base, a discreet garage and entrance are carved into the land. Moving upward, the spaces unfold with open living, dining, and kitchen areas connected by a series of steps and platforms. The private quarters are divided into two wings, each designed for a family, ensuring privacy without isolation. At the top is a circular yoga and contemplation pavilion, a quiet space with tranquil views of the lush hills and ocean beyond.
Villa Boë’s floor plan mirrors the site with a system of concentric circles and radial lines defining how the roofs open up and how spaces work together. The approach gives the home a sense of flow – instead of a stack of rooms, it becomes a continuous unveiling, almost like a piece of art that slowly reveals itself. The roofs fan out, allowing the oceanside rooms to enjoy more natural light through floor-to-ceiling windows. Cutouts in the roofs on the top two floors create sunlit patios for the occupants to use throughout the day when they desire quiet time away from the main outdoor space below.
Every room opens to a view, highlighting the indoor/outdoor connection that the tropical location is known for. The pool, for instance, doesn’t stand apart from the house but extends through it, weaving together terraces and gardens in a seamless progression.
Material restraint plays a vital role in maintaining Villa Boë’s overall aesthetic. Dornier and his collaborators – Somewhere Concept for interiors and Bali Landscape Company for the grounds – chose materials that tie the project to its location. Teak wood ceilings and soffits, off-white walls, and white Palimanan stone floors that cool the feet while reflecting the tropical light, echo the tones and textures of Lombok’s natural environment.
Hints of American architect John Lautner emerge in the way the villa’s rooflines shape the views and anchor the building to its environment. The architecture acts as a frame – capturing fragments of sky, hillside, and horizon – so that every moment inside feels like part of an ever-evolving painting.
The infinity pool, which follows the same curves as the roofline, visually connects to the ocean on the horizon, making swimmers feel as if they’re floating above it.
For more information on Villa Boë and Alexis Dornier, please visit alexisdornier.com.
It’s early November! And you know what that means? Coca-Cola’s done another of its terrible AI Christmas TV ads! Where they take their 1995 ad “Holidays Are Coming” and remake it with a slop generator.
We covered last year’s bad AI Coke ad. No shot ran longer than two or three seconds, animals and snowmen visibly warped their proportions within those seconds, and the trucks’ wheels didn’t actually move.
This year’s ad is still slop, but it’s a bit less slapdash. The trucks’ wheels turn now. Except when the AI forgets to put back wheels on the truck’s prime mover. The rendered animals don’t warp quite as badly over the course of three seconds, though the trucks do. It’s not clear why there’s a sloth in snow in what looks like Canada. Near the end, the truck nearly ploughs down the pedestrians, but it magically slows down instantly. In snow. [YouTube]
AI video hasn’t actually improved over the past year. You can’t direct the AI video generator — you just press the button and hope it spits out a good clip this time.
This sixty-second ad was assembled from seventy thousand individual rendered clips, three seconds each. Then they went through three days’ worth of generated video, desperately hoping they had 20 to 30 clips they could actually use. [WSJ]
This is not an ad for Coke — it’s an ad for AI video slop generators, and its job is to put the fear into the writers and animators.
AI still can’t render text, so the Coca-Cola logos are hand-composited onto the trucks, and you can see the logos moving around. Except when they didn’t bother and left in the messed-up AI renderings. Of their trademark.
The Coca-Cola Company is desperately trying to talk up this mediocre demo as the best demo ever. That’s how AI works now — AI companies don’t give you an impressive demo that can’t be turned into a product, they give you a garbage demo and loudly insist it’s actually super cool: [THR]
The company believes enough has changed in a year, in both the tech and society, to evoke a different response.
Yeah, that didn’t happen. Every video comment is negative. People hate AI slop more than they did a year ago.
Times are tough, the real economy where people live is way down, the recession is biting, and the normal folk know the ones promoting AI want them out of a job. If you push AI, you are the enemy of ordinary people. And the ordinary people know it.
This is the best ad that Pepsi never paid for. Coca-Cola: it’s the fake thing.
Edit: The video isn’t even AI, it’s CGI and 3D modelling. In the official behind-the-scenes video, the 3D model of the Coke bottle appears at 0:44, and from 0:53 onwards you can see the artists at work. This is an ordinary animated production they put an AI gloss over. And it still looks like slop. [YouTube]
Customs and Border Protection (CBP) has publicly released an app that sheriff’s offices, police departments, and other local or regional law enforcement agencies can use to scan someone’s face as part of immigration enforcement, 404 Media has learned.
The news follows Immigration and Customs Enforcement’s (ICE) use of another internal Department of Homeland Security (DHS) app called Mobile Fortify that uses facial recognition to nearly instantly bring up someone’s name, date of birth, alien number, and whether they’ve been given an order of deportation. The new local law enforcement-focused app, called Mobile Identify, crystallizes one of the exact criticisms of DHS’s facial recognition app from privacy and surveillance experts: that this sort of powerful technology would trickle down to local enforcement, some of which have a history of making anti-immigrant comments or supporting inhumane treatment of detainees.
Handing “this powerful tech to police is like asking a 16-year old who just failed their drivers exams to pick a dozen classmates to hand car keys to,” Jake Laperruque, deputy director of the Center for Democracy & Technology's Security and Surveillance Project, told 404 Media. “These careless and cavalier uses of facial recognition are going to lead to U.S. citizens and lawful residents being grabbed off the street and placed in ICE detention.”
Do you know anything else about this app or others that CBP and ICE are using? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Mobile Identify is designed “to identify and process individuals who may be in the country unlawfully,” according to its page on the Google Play Store. The app was published on Monday.
A source with knowledge of the app told 404 Media that it doesn’t return names after a face search. Instead, depending on the result, it tells users to contact ICE and provides a reference number, or tells them not to detain the person. 404 Media granted the source anonymity because they weren’t permitted to speak to the press.
404 Media downloaded a copy of the app and decompiled its code, a common practice among security researchers and technology journalists. Although the Play Store page does not mention facial recognition, multiple parts of the app’s code make clear references to scanning faces. One package is called “facescanner.” Other parts mention “FacePresence” and “No facial image found.”
A screenshot from the app's Google Play Store page.
Screenshots of the app on the Play Store page show the app requires users to log in with their Login.gov account, and that the app “requires camera access to take photos of subjects.” At the time of writing the app has “1+” downloads, according to the Play Store page.
The Play Store page does not say exactly how the app processes scanned faces, such as what images it compares them to, or what data the app returns upon a hit. In statements to 404 Media, DHS and CBP did not provide any specifics.
The app is for agencies that are part of the 287(g) program, the Play Store page says. This program lets ICE delegate certain immigration-related authorities and powers to local and state agencies. Members of the 287(g) Task Force Model (TFM), for instance, are allowed to enforce certain immigration authorities during their police duties, ICE’s website explains. At the time of writing, 555 agencies in 34 states are part of the TFM program, according to data published by ICE.
The American Civil Liberties Union (ACLU) has criticized the 287(g) program because a large number of participating sheriffs have made anti-immigrant statements, supported inhumane immigration and border enforcement policies, and have a pattern of racial profiling and other civil rights violations.
Cooper Quintin, senior staff technologist at the Electronic Frontier Foundation (EFF), told 404 Media “Face surveillance in general, and this tool specifically, was already a dangerous infringement of civil liberties when in the hands of ICE agents. Putting a powerful surveillance tool like this in the hands of state and local law enforcement officials around the country will only further erode peoples’ Fourth Amendment rights, for citizens and non-citizens alike. This will further erode due process, and subject even more Americans to omnipresent surveillance and unjust detainment.”
Screenshots from the app's Google Play Store page.
Mobile Fortify—the facial recognition app used by ICE which 404 Media first revealed in June—uses the CBP Traveler Verification Service (TVS) ordinarily designed for when people enter the U.S. The app took those systems and an unprecedented collection of U.S. government databases and turned them inwards, letting officers in the field reveal a person’s identity and immigration status. The app also uses data from the State Department, FBI, and state databases, and uses a bank of 200 million images.
404 Media reported in October that multiple social media videos show Border Patrol and ICE officers scanning peoples’ faces on the street.
“I’m an American citizen so leave me alone,” a person stopped by ICE says in one video.
“Alright, we just got to verify that,” one of the officers replies.
404 Media also obtained an internal DHS document which says ICE does not let people decline or consent to being scanned by the app. The document, called a Privacy Threshold Analysis, said photos taken by the app will be stored for 15 years, including those of U.S. citizens.
Ranking member of the House Homeland Security Committee Bennie G. Thompson previously told 404 Media in a statement that ICE will prioritize the results of the Mobile Fortify app over birth certificates. “ICE officials have told us that an apparent biometric match by Mobile Fortify is a ‘definitive’ determination of a person’s status and that an ICE officer may ignore evidence of American citizenship—including a birth certificate—if the app says the person is an alien,” he said. “ICE using a mobile biometrics app in ways its developers at CBP never intended or tested is a frightening, repugnant, and unconstitutional attack on Americans’ rights and freedoms.”
In response to questions about the new app for sheriff’s offices and other local law enforcement, a DHS spokesperson told 404 Media in an email “While the Department does not discuss specific vendors or operational tools, any technology used by DHS Components must comply with the requirements and oversight framework.”
CBP responded with a statement primarily discussing Mobile Fortify. “Biometric data used to identify individuals through TVS are collected by government authorities consistent with the law, including issuing documents or processing illegal aliens. The Mobile Fortify Application provides a mobile capability that uses facial comparison as well as fingerprint matching to verify the identity of individuals against specific immigration related holdings,” the statement said. CBP added it built the Mobile Fortify application to support ICE, and confirmed ICE has used the app in its operations around the U.S.
Most people probably have no idea that when you book a flight through major travel websites, a data broker owned by U.S. airlines then sells details about your flight to the government, including your name, the credit card used, and where you’re flying. The data broker has compiled billions of ticketing records the government can search without a warrant or court order. The data broker is called the Airlines Reporting Corporation (ARC), and, as 404 Media has shown, it sells flight data to multiple parts of the Department of Homeland Security (DHS) and a host of other government agencies, while contractually demanding those agencies not reveal where the data came from.
It turns out, it is possible to opt-out of this data selling, including to government agencies. At least, that’s what I found when I ran through the steps to tell ARC to stop selling my personal data. Here’s how I did that:
I emailed privacy@arccorp.com and, not yet knowing the details of the process, simply said I wish to delete my personal data held by ARC.
A few hours later the company replied with some information and instructions on what I needed to do. ARC said it needed my full name (including middle name, if applicable), the last four digits of the credit card number used to purchase air travel, and my residential address.
I provided that information. The following month, ARC said it was unable to delete my data because “we and our service providers require it for legitimate business purposes.” The company did say it would not sell my data to any third parties, though. “However, even though we cannot delete your data, we can confirm that we will not sell your personal data to any third party for any reason, including, but not limited to, for profiling, direct marketing, statistical, scientific, or historical research purposes,” ARC said in an email.
I then followed up with ARC to ask specifically whether this included selling my travel data to the government. “Does the not selling of my data include not selling to government agencies as part of ARC’s Travel Intelligence Program or any other forms?” I wrote. The Travel Intelligence Program, or TIP, is the program ARC launched to sell data to the government. ARC updates it every day with the previous day’s ticket sales and it can show a person’s paid intent to travel.
A few days later, ARC replied. “Yes, we can confirm that not selling your data includes not selling to any third party, including, but not limited to, any government agency as part of ARC’s Travel Intelligence Program,” the company said.
Do you know anything else about ARC or other data being sold to government agencies? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Honestly, I was quite surprised at how smooth and clear this process was. ARC only registered as a data broker with the state of California—a legal requirement—in June, despite selling data for years.
What I did was not a formal request under a specific piece of privacy legislation, such as the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). Maybe a request to delete information under the CCPA would have more success; that law says California residents have the legal right to ask to have their personal data deleted “subject to certain exceptions (such as if the business is legally required to keep the information),” according to the California Department of Justice’s website.
ARC is owned and operated by at least eight major U.S. airlines, according to publicly released documents. Its board includes representatives from Delta, United, American Airlines, JetBlue, Alaska Airlines, Canada’s Air Canada, and European airlines Air France and Lufthansa.
Public procurement records show agencies such as ICE, CBP, ATF, TSA, the SEC, the Secret Service, the State Department, the U.S. Marshals, and the IRS have purchased ARC data. Agencies have given no indication they use a search warrant or other legal mechanism to search the data. In response to inquiries from 404 Media, ATF said it follows “DOJ policy and appropriate legal processes” and the Secret Service declined to answer.
An ARC spokesperson previously told 404 Media in an email that TIP “was established by ARC after the September 11, 2001, terrorist attacks and has since been used by the U.S. intelligence and law enforcement community to support national security and prevent criminal activity with bipartisan support. Over the years, TIP has likely contributed to the prevention and apprehension of criminals involved in human trafficking, drug trafficking, money laundering, sex trafficking, national security threats, terrorism and other imminent threats of harm to the United States.” At the time, the spokesperson added “Pursuant to ARC’s privacy policy, consumers may ask ARC to refrain from selling their personal data.”
There’s a remarkable paper from MIT’s Sloan School of Management, co-written with the security vendor Safe Security: “Rethinking the Cybersecurity Arms Race: When 80% of Ransomware Attacks are AI-Driven”:
Our recent analysis of over 2800 ransomware incidents has revealed an alarming trend: AI plays an increasingly significant role in these attacks. In 2024, 80.83% of recorded ransomware events were attributed to threat actors utilizing AI.
That’s quite a remarkable claim. Especially when the actual number of attacks by AI-generated ransomware is zero. [Socket]
The paper came from CAMS — Cybersecurity at MIT Sloan — which operates as a corporate consortium. Companies pay CAMS to get themselves a nice academic paper. This is somehow proper academic research, and not just a paper mill selling massive conflicts of interest, which the companies can and do just promote as “MIT.” [CAMS]
The “Advisory Member” level of contribution to CAMS is $120,000 per year for three years. This grants you “participation in CAMS research projects of mutual interest.”
Safe Security — the customer for this paper — has spent the months since April touting the paper as solid science from MIT you can totally rely on. It turns out the paper’s got a few problems.
The estimable Kevin Beaumont noted the paper’s problems in a thread on Mastodon last Wednesday, and in a blog post today: [Mastodon; Double Pulsar]
The paper is absolutely ridiculous. It describes almost every major ransomware group as using AI — without any evidence (it’s also not true, I monitor many of them). It even talks about Emotet (which hasn’t existed for many years) as being AI driven. It cites things like CISA reports for GenAI usage … but CISA never said AI anywhere.
Safe Security just happen to sell an agentic AI product, which they tout as being developed with MIT, and they wave this paper around as evidence of the imaginary AI ransomware problem they claim their product can totally fix. [Safe, archive]
Kevin notes that a pile of MIT academics, including Michael Siegel, director of CAMS and lead author on this paper, happen to be on the Safe Security advisory board. This conflict of interest is at no point disclosed in the paper. [Safe]
The paper cites the NotPetya and WannaCry ransomware from 2017 as “AI” attacks. Even if this is just a “working paper,” whoever wrote this is literally just incompetent. Even if they’re the director of a pay-for-play academic paper mill at MIT.
The paper finishes by recommending “embracing AI in cyber risk management”. Safe Security marketing material is cited in the references for the paper!
After Kevin’s thread, MIT took the paper down. But they also silently edited a pile of web pages pointing to the paper to make it look like they hadn’t been promoting the paper as hard as possible! [MIT, current version, archive of 11 September]
MIT’s copy of the paper has been removed, and they replaced it with the following text: [MIT, PDF]
You have reached the Early Research Papers section of our website. The Working Paper you have requested is being updated based on some recent reviews. We expect the updated version to appear here shortly.
Fortunately, there’s still a copy of the paper in the Internet Archive. [MIT, PDF, archive]
MIT also seems to be reaching out to people to say this was only a working paper, not a real paper, and it’s so unfair to take it seriously. You know, like when Safe Security pushed the paper as hard as possible in its marketing for the past six months. Or when MIT academics promoted the paper at conferences.
Kodak quietly acknowledged Monday that it will begin selling two famous types of film stock—Kodak Gold 200 and Kodak Ultramax 400—directly to retailers and distributors in the U.S., another indication that the historic company is taking back control over how people buy its film.
The release comes on the heels of Kodak’s announcement in October that it would make and sell two new film stocks, Kodacolor 100 and Kodacolor 200. On Monday, both Kodak Gold and Kodak Ultramax showed back up on Kodak’s website as film stocks that it makes and sells. When asked by 404 Media, a company spokesperson said that it has “launched” these film stocks and will begin to “sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market.”
Unlike Kodacolor, both Kodak Gold and Kodak Ultramax have been widely available to consumers for years, but the way they were distributed made little sense and was an artifact of Kodak’s 2012 bankruptcy. Coming out of that bankruptcy, Eastman Kodak (the 133-year-old company) would continue to make film, but the exclusive rights to distribute and sell it were owned by a completely separate, UK-based company called Kodak Alaris. For the last decade, Kodak Alaris has sold Kodak Gold and Ultramax (as well as Portra and a few other film stocks made by Eastman Kodak). This setup has been confusing for consumers and perhaps served as a disincentive for Eastman Kodak to experiment with the types of film it makes, considering that it would have to license distribution out to another company.
That all seemed to change with the recent announcement of Kodacolor 100 and Kodacolor 200, Kodak’s first new still film stocks in many years. Monday’s acknowledgement that both Kodak Gold and Ultramax — now in rebranded, redesigned boxes — will be sold directly by Eastman Kodak suggests the company has figured out how to wrest some control of its distribution away from Kodak Alaris. Eastman Kodak told 404 Media in a statement that it has “launched” these films and that they are “Kodak-marketed versions of existing films.”
"Kodak will sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market,” a Kodak spokesperson said in an email. “This direct channel will provide distributors, retailers and consumers with a broader, more reliable supply and help create greater stability in a market where prices have often fluctuated.”
The company called it an “extension of Kodak’s film portfolio,” which it said “is made possible by our recent investments that increased our film manufacturing capacity and, along with the introduction of our KODAK Super 8 Camera and KODAK EKTACHROME 100D Color Reversal Film, reflects Kodak’s ongoing commitment to meeting growing demand and supporting the long-term health of the film industry.”
It is probably too soon to say how big of a deal this is, but it is at least exciting for people who are in the resurgent film photography hobby, who are desperate for any sign that companies are interested in launching new products, creating new types of film, or building more production capacity in an industry where film shortages and price increases have been the norm for a few years.