How to Opt-Out of Airlines Selling Your Travel Data to the Government

Most people probably have no idea that when you book a flight through major travel websites, a data broker owned by U.S. airlines then sells details about your flight, including your name, the credit card you used, and where you’re flying, to the government. The data broker has compiled billions of ticketing records that the government can search without a warrant or court order. That broker is called the Airlines Reporting Corporation (ARC), and, as 404 Media has shown, it sells flight data to multiple parts of the Department of Homeland Security (DHS) and a host of other government agencies, while contractually demanding those agencies not reveal where the data came from.

It turns out, it is possible to opt-out of this data selling, including to government agencies. At least, that’s what I found when I ran through the steps to tell ARC to stop selling my personal data. Here’s how I did that:

  1. I emailed privacy@arccorp.com and, not yet knowing the details of the process, simply said I wish to delete my personal data held by ARC.
  2. A few hours later the company replied with some information and what I needed to do. ARC said it needed my full name (including middle name if applicable), the last four digits of the credit card number used to purchase air travel, and my residential address. 
  3. I provided that information. The following month, ARC said it was unable to delete my data because “we and our service providers require it for legitimate business purposes.” The company did say it would not sell my data to any third parties, though. “However, even though we cannot delete your data, we can confirm that we will not sell your personal data to any third party for any reason, including, but not limited to, for profiling, direct marketing, statistical, scientific, or historical research purposes,” ARC said in an email.
  4. I then followed up with ARC to ask specifically whether this included selling my travel data to the government. “Does the not selling of my data include not selling to government agencies as part of ARC’s Travel Intelligence Program or any other forms?” I wrote. The Travel Intelligence Program, or TIP, is the program ARC launched to sell data to the government. ARC updates it every day with the previous day’s ticket sales and it can show a person’s paid intent to travel.
  5. A few days later, ARC replied. “Yes, we can confirm that not selling your data includes not selling to any third party, including, but not limited to, any government agency as part of ARC’s Travel Intelligence Program,” the company said.
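The request in step 1 is just a plain email with the three details ARC asked for in step 2. Here is a minimal sketch using Python's standard library: the privacy@arccorp.com address is from the article, but the name, card digits, and street address are placeholders, and actually sending the message (SMTP server, credentials) is left out.

```python
# Builds the opt-out/deletion email described in steps 1-2.
# Only message construction is shown; sending is out of scope.
from email.message import EmailMessage

def build_optout_request(full_name: str, card_last4: str, address: str) -> EmailMessage:
    """Build a data-deletion request with the details ARC asks for."""
    msg = EmailMessage()
    msg["To"] = "privacy@arccorp.com"
    msg["Subject"] = "Request to delete my personal data held by ARC"
    msg.set_content(
        f"Full name: {full_name}\n"
        f"Last four digits of card used to purchase air travel: {card_last4}\n"
        f"Residential address: {address}\n\n"
        "Please delete my personal data. In any case, do not sell my data "
        "to any third party, including government agencies via the "
        "Travel Intelligence Program."
    )
    return msg

# Placeholder details for illustration:
msg = build_optout_request("Jane Q. Example", "1234", "123 Example St, Example City")
print(msg["To"])  # privacy@arccorp.com
```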
💡
Do you know anything else about ARC or other data being sold to government agencies? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

Honestly, I was quite surprised at how smooth and clear this process was. ARC only registered as a data broker with the state of California—a legal requirement—in June, despite selling data for years. 

What I did was not a formal request under a specific piece of privacy legislation, such as the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). Maybe a request to delete information under the CCPA would have more success; that law says California residents have the legal right to ask to have their personal data deleted “subject to certain exceptions (such as if the business is legally required to keep the information),” according to the California Department of Justice’s website.

ARC is owned and operated by at least eight major U.S. airlines, according to publicly released documents. Its board includes representatives from Delta, United, American Airlines, JetBlue, Alaska Airlines, Canada’s Air Canada, and European airlines Air France and Lufthansa. 

Public procurement records show agencies such as ICE, CBP, ATF, TSA, the SEC, the Secret Service, the State Department, the U.S. Marshals, and the IRS have purchased ARC data. Agencies have given no indication they use a search warrant or other legal mechanism to search the data. In response to inquiries from 404 Media, ATF said it follows “DOJ policy and appropriate legal processes” and the Secret Service declined to answer.

An ARC spokesperson previously told 404 Media in an email that TIP “was established by ARC after the September 11, 2001, terrorist attacks and has since been used by the U.S. intelligence and law enforcement community to support national security and prevent criminal activity with bipartisan support. Over the years, TIP has likely contributed to the prevention and apprehension of criminals involved in human trafficking, drug trafficking, money laundering, sex trafficking, national security threats, terrorism and other imminent threats of harm to the United States.” At the time, the spokesperson added “Pursuant to ARC’s privacy policy, consumers may ask ARC to refrain from selling their personal data.”


MIT releases, then quietly removes, nonsense AI cybersecurity paper

There’s a remarkable paper from MIT’s Sloan School of Management, co-written with the security vendor Safe Security: “Rethinking the Cybersecurity Arms Race: When 80% of Ransomware Attacks are AI-Driven”:

Our recent analysis of over 2800 ransomware incidents has revealed an alarming trend: AI plays an increasingly significant role in these attacks. In 2024, 80.83% of recorded ransomware events were attributed to threat actors utilizing AI.

That’s quite a remarkable claim. Especially when the actual number of attacks by AI-generated ransomware is zero. [Socket]

The paper came from CAMS — Cybersecurity at MIT Sloan — which operates as a corporate consortium. Companies pay CAMS to get themselves a nice academic paper. This is somehow proper academic research, and not just a paper mill selling massive conflicts of interest, which the companies can and do just promote as “MIT.” [CAMS]

The “Advisory Member” level of contribution to CAMS is $120,000 per year for three years. This grants you “participation in CAMS research projects of mutual interest.”

Safe Security — the customer for this paper — has spent the months since April touting the paper as solid science from MIT you can totally rely on. It turns out the paper’s got a few problems.

The estimable Kevin Beaumont noted the paper’s problems in a thread on Mastodon last Wednesday, and in a blog post today: [Mastodon; Double Pulsar]

The paper is absolutely ridiculous. It describes almost every major ransomware group as using AI — without any evidence (it’s also not true, I monitor many of them). It even talks about Emotet (which hasn’t existed for many years) as being AI driven. It cites things like CISA reports for GenAI usage … but CISA never said AI anywhere.

Safe Security just happen to sell an agentic AI product, which they tout as being developed with MIT, and they wave this paper around as evidence of the imaginary AI ransomware problem they claim their product can totally fix. [Safe, archive]

Kevin notes that a pile of MIT academics, including Michael Siegel, director of CAMS and lead author on this paper, happen to be on the Safe Security advisory board. This conflict of interest is at no point disclosed in the paper. [Safe]

The paper cites the NotPetya and WannaCry ransomware from 2017 as “AI” attacks. Even if this is just a “working paper,” whoever wrote this is literally just incompetent. Even if they’re the director of a pay-for-play academic paper mill at MIT.

The paper finishes by recommending “embracing AI in cyber risk management”. Safe Security marketing material is cited in the references for the paper!

After Kevin’s thread, MIT took the paper down. But they also silently edited a pile of web pages pointing to the paper to make it look like they hadn’t been promoting the paper as hard as possible! [MIT, current version, archive of 11 September]

MIT’s copy of the paper has been removed, and they replaced it with the following text: [MIT, PDF]

You have reached the Early Research Papers section of our website. The Working Paper you have requested is being updated based on some recent reviews. We expect the updated version to appear here shortly.

Fortunately, there’s still a copy of the paper in the Internet Archive. [MIT, PDF, archive]

MIT also seems to be reaching out to people to post that this was only a working paper, not a real paper, and it’s so unfair to take it seriously. You know, like when Safe Security spent the past six months pushing the paper as hard as possible in its marketing. Or when MIT academics promoted the paper at conferences.


Kodak Quietly Begins Directly Selling Kodak Gold and Ultramax Film Again


Kodak quietly acknowledged Monday that it will begin selling two famous types of film stock—Kodak Gold 200 and Kodak Ultramax 400—directly to retailers and distributors in the U.S., another indication that the historic company is taking back control over how people buy its film.

The release comes on the heels of Kodak announcing that it would make and sell two new stocks of film called Kodacolor 100 and Kodacolor 200 in October. On Monday, both Kodak Gold and Kodak Ultramax showed back up on Kodak’s website as film stocks that it makes and sells. When asked by 404 Media, a company spokesperson said that it has “launched” these film stocks and will begin to “sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market.”

Unlike Kodacolor, both Kodak Gold and Kodak Ultramax have been widely available to consumers for years, but the way they were distributed made little sense and was an artifact of Kodak’s 2012 bankruptcy. Coming out of that bankruptcy, Eastman Kodak (the 133-year-old company) would continue to make film, but the exclusive rights to distribute and sell it were owned by a completely separate, UK-based company called Kodak Alaris. For the last decade, Kodak Alaris has sold Kodak Gold and Ultramax (as well as Portra and a few other film stocks made by Eastman Kodak). This setup has been confusing for consumers and perhaps served as a disincentive for Eastman Kodak to experiment with the types of films it makes, considering that it would have to license distribution out to another company.

That all seemed to have changed with the recent announcement of Kodacolor 100 and Kodacolor 200, Kodak’s first new still film stocks in many years. Monday’s acknowledgement that both Kodak Gold and Ultramax will be sold directly by Eastman Kodak, in rebranded and redesigned boxes, suggests that the company has figured out how to wrest some control of its distribution away from Kodak Alaris. Eastman Kodak told 404 Media in a statement that it has “launched” these films and that they are “Kodak-marketed versions of existing films.”

 "Kodak will sell the films directly to distributors in the U.S. and Canada, giving Kodak greater control over our participation in the consumer film market,” a Kodak spokesperson said in an email. “This direct channel will provide distributors, retailers and consumers with a broader, more reliable supply and help create greater stability in a market where prices have often fluctuated.”

 The company called it an “extension of Kodak’s film portfolio,” which it said “is made possible by our recent investments that increased our film manufacturing capacity and, along with the introduction of our KODAK Super 8 Camera and KODAK EKTACHROME 100D Color Reversal Film, reflects Kodak’s ongoing commitment to meeting growing demand and supporting the long-term health of the film industry.”

It is probably too soon to say how big of a deal this is, but it is at least exciting for people who are in the resurgent film photography hobby, who are desperate for any sign that companies are interested in launching new products, creating new types of film, or building more production capacity in an industry where film shortages and price increases have been the norm for a few years.


arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers


arXiv, a preprint publication for academic research that has become particularly important for AI research, has announced it will no longer accept computer science review articles and position papers. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are “little more than annotated bibliographies, with no substantial discussion of open research issues,” according to a press release about the change.

arXiv has become a critical place for preprint and open access scientific research to be published. Many major scientific discoveries are published on arXiv before they finish the peer review process and are published in other, peer-reviewed journals. For that reason, it’s become an important place for new breaking discoveries and has become particularly important for research in fast-moving fields such as AI and machine learning (though there are also sometimes preprint, non-peer-reviewed papers there that get hyped but ultimately don’t pass peer review muster). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science and the research is vetted by moderators who are subject matter experts.

Review articles are overviews of a given topic that tend to be a summary of current research. Position papers are the academic equivalent of an opinion piece. It’s these two types of articles that arXiv is cracking down on.

Because of an onslaught of AI-generated research, specifically in the computer science (CS) section, arXiv is going to limit which papers can be published. “In the past few years, arXiv has been flooded with papers,” arXiv said in a press release. “Generative AI / large language models have added to this flood by making papers—especially papers not introducing new research results—fast and easy to write.”

The site noted that this was less a policy change and more about stepping up enforcement of old rules. “When submitting review articles or position papers, authors must include documentation of successful peer review to receive full consideration,” it said. “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv.”

According to the press release, arXiv has been inundated with articles, and CS was the worst category. “We now receive hundreds of review articles every month,” arXiv said. “The advent of large language models have made this type of content relatively easy to churn out on demand.”

The plan is to enforce a blanket ban on review articles and position papers in the CS category and free up moderators to look at more substantive submissions. arXiv stressed that it has never accepted many review articles, but had been doing so when they were of academic interest and came from known researchers. “If other categories see a similar rise in LLM-written review articles and position papers, they may choose to change their moderation practices in a similar manner to better serve arXiv authors and readers,” arXiv said.
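The enforcement rule, as described, amounts to a simple gate: in the CS category, review articles and position papers need documented peer review, while everything else proceeds to normal moderation. A hypothetical sketch (the field names and structure are invented for illustration; arXiv’s actual moderation pipeline is not public):

```python
# Toy model of the stepped-up CS moderation rule described above.
# All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Submission:
    category: str            # e.g. "cs.AI" or "math.CO"
    article_type: str        # "research", "review", or "position"
    has_peer_review_docs: bool

def passes_initial_check(sub: Submission) -> bool:
    """Reject CS review/position papers lacking peer-review documentation."""
    if sub.category.startswith("cs") and sub.article_type in ("review", "position"):
        return sub.has_peer_review_docs
    # Everything else goes on to normal subject-matter moderation.
    return True

assert passes_initial_check(Submission("cs.AI", "review", False)) is False
assert passes_initial_check(Submission("cs.AI", "review", True)) is True
assert passes_initial_check(Submission("math.CO", "review", False)) is True
```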

AI-generated research articles are a pressing problem in the scientific community. Scam academic journals that run pay-to-publish schemes are an issue that plagued academic publishing long before AI, but the advent of LLMs has supercharged it. But scam journals aren’t the only ones affected. Last year, a serious scientific journal had to retract a paper that included an AI-generated image of a giant rat penis. Peer reviewers, the people who are supposed to vet scientific papers for accuracy, have also been caught cutting corners using ChatGPT in part because of the large demands placed on their time.

Update: The original version of this article made it appear that arXiv had stopped accepting CS articles that were under peer review. It's a narrow ban on article reviews and position papers. We've updated the story and subtitle to reflect this and regret the error.


Photos of October 2025


AI makes you think you’re a genius when you’re an idiot

Today’s paper is “AI Makes You Smarter, But None the Wiser: The Disconnect between Performance and Metacognition”. AI users wildly overestimate how brilliant they actually are: [Elsevier, paywalled; SSRN preprint, PDF; press release]

All users show a significant inability to assess their performance accurately when using ChatGPT. In fact, across the board, people overestimated their performance.

The researchers tested about 500 people on the LSAT. One group had ChatGPT with GPT-4o, and one just used their brains. The researchers then asked the users how they thought they’d done.

The chatbot users did better — which is not surprising, since past LSATs are very much in all the chatbots’ training data, and they regurgitate them just fine.

The AI users did not question the chatbot at length — they just asked it once what the answer was and used whatever the chatbot regurgitated.

But also, the chatbot users estimated their results as being even better than they actually were. In fact, the more “AI literate” the subjects measured as, the more wrongly overconfident they were.
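The overconfidence finding boils down to a calibration gap: mean self-estimated score minus mean actual score, positive when people overrate themselves. A toy illustration with made-up numbers (the paper’s data is paywalled, so these figures are purely hypothetical):

```python
# Calibration gap: how far self-assessment exceeds actual performance.
def calibration_gap(estimated: list[float], actual: list[float]) -> float:
    """Mean estimated score minus mean actual score; positive = overconfident."""
    assert len(estimated) == len(actual) and len(actual) > 0
    n = len(actual)
    return sum(estimated) / n - sum(actual) / n

# Hypothetical scores out of 100 for five AI-assisted test takers:
est = [90.0, 85.0, 88.0, 92.0, 80.0]
act = [75.0, 70.0, 80.0, 78.0, 72.0]
print(calibration_gap(est, act))  # 12.0 -- overestimating by 12 points on average
```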

Problems with this paper: it credits the improved LSAT performance to better thinking rather than to the AI regurgitating its training data, and it suggests ways to use the AI better rather than suggesting not using it and actually studying. But the main result seems to have been reached reasonably.

If you think you’re a hotshot promptfondler, you’re wildly overconfident and you’re badly wrong. Your ego is vastly ahead of your ability. Just ask your coworkers. Democratising arrogant incompetence!
