Civil and digital rights experts are horrified by a proposed rule change that would allow the Department of Homeland Security to collect a wide range of sensitive biometric data on all immigrants, without age restrictions, and store that data throughout each person's "lifecycle" in the immigration system.
If adopted, the rule change would allow DHS agencies, including Immigration and Customs Enforcement (ICE), to broadly collect facial imagery, finger and palm prints, iris scans, and voice prints. They may also request DNA, which DHS claimed "would only be collected in limited circumstances," like to verify family relations.
[...]
Alarming critics, the update would allow DHS for the first time to collect biometric data of children under 14, which DHS claimed would help reduce human trafficking and other harms by making it easier to identify kids crossing the border unaccompanied or with a stranger.

Jennifer Lynch, general counsel for a digital rights nonprofit called the Electronic Frontier Foundation, told Ars that EFF joined Democratic senators in opposing a prior attempt by DHS to expand biometric data collection in 2020.
[...]
By maintaining a database, the US also risks chilling speech, as immigrants weigh risks of social media comments—which DHS already monitors—possibly triggering removals or arrests.

"People will be less likely to speak out on any issue for fear of being tracked and facing severe reprisals, like detention and deportation, that we've already seen from this administration," Lynch told Ars.
[...]
EFF previously noted that DHS's biometric database was already the second largest in the world. DHS estimated that the expansion would mean collecting "about 1.12 million more biometrics submissions" annually, raising the current baseline to about 3.19 million.
As the data pool expands, DHS plans to hold onto the data until an immigrant who has requested benefits or otherwise engaged with DHS agencies is either granted citizenship or removed.
[...]
DHS said it "recognizes" that its sweeping data collection plans that remove age restrictions don't conform with Department of Justice policies. But the agency claimed there was no conflict since "DHS regulatory provisions control all DHS biometrics collections" and "DHS is not authorized to operate or collect biometrics under DOJ authorities."
[...]
Currently, DHS is seeking public comments on the rule change, which can be submitted over the next 60 days ahead of a deadline on January 2, 2026. The agency suggests it "welcomes" comments, particularly on the types of biometric data DHS wants to collect, including concerns about the "reliability of technology."
[...]
However, DHS claims that's now appropriate, including in cases where children were trafficked or are seeking benefits under the Violence Against Women Act and, therefore, are expected to prove "good moral character."

"Generally, DHS plans to use the biometric information collected from children for identity management in the immigration lifecycle only, but will retain the authority for other uses in its discretion, such as background checks and for law enforcement purposes," DHS's proposal said.
The changes will also help protect kids from removals, DHS claimed, by making it easier for an ICE attorney to complete required "identity, law enforcement, or security investigations or examinations."
[...]
It's possible that some DHS agencies may establish an age threshold for some data collection, the rule change noted.

A day after the rule change was proposed, 42 comments had been submitted. Most were critical, but as Lynch warned, speaking out seemed risky, with many choosing to anonymously criticize the initiative as violating people's civil rights and making the US appear more authoritarian.
One anonymous user cited guidance from the ACLU and the Electronic Privacy Information Center, while warning that "what starts as a 'biometrics update' could turn into widespread privacy erosion for immigrants and citizens alike."
[...]
"You pitch it as a tool against child trafficking, which is a real issue, but does swabbing a newborn really help, or does it just create a lifelong digital profile starting at day one?" the commenter asked. "Accuracy for growing kids is questionable, and the [ACLU] has pointed out how this disproportionately burdens families. Imagine the hassle for parents—it's not protection; it's preemptively treating every child like a data point in a government file."
Wired has a story about the growing resistance to data center deployment. It seems that data centers have exceptionally bad track records when it comes to adverse effects on the local communities upon which they are inflicted.
The new report was released by Data Center Watch, a project run by AI security company 10a Labs that tracks community opposition to data centers across the country. The company has been keeping an eye on this topic since 2023, and released its first public findings earlier this year. (While 10a Labs does offer risk analysis for AI companies, report author Miquel Vila says that the Data Center Watch project is separate from the company's main work, and is not paid for by any clients.) But this week's report finds that the tide has turned sharply in the months since the group's first public output. The second quarter of this year, the new report finds, represented "a sharp escalation" in data center opposition across the country.
Data Center Watch's first report covered a period from May 2024 to March of 2025; in that period, it found, local opposition had blocked or delayed a total of $64 billion in data center projects (six projects were blocked entirely, while 10 were delayed). But Data Center Watch's new report found that opposition blocked or delayed $98 billion in projects from March to June of 2025 alone—eight projects, including two in Indiana and Kentucky, were blocked in those three months, while nine were delayed. One of those projects, a $17 billion development in the Atlanta suburbs, was put on hold in May after the county imposed a 180-day moratorium on data center development, following significant pushback.
Are data centers in any way useful or are they just another layer riding on top of the LLM tulipomania?
Previously:
(2025) China Submerges a Data Center in the Ocean to Conserve Water, is That Even a Good Idea?
(2025) How AI is Subsidized by Your Utility Bills and Drives Copper Prices
(2025) 'a Black Hole of Energy Use': Meta's Massive AI Data Center is Stressing Out a Louisiana Community
(2024) The True Cost of Data Centers
(2024) AI Demand Is Fueling A Data Center Development Boom In North America
(2022) Amazon and Microsoft Want to Go Big on Data Centres, but the Power Grid Can't Support Them
(2020) Private Equity Firms are Gobbling Up Data Centers
(2015) Why is Google Opening a New Data Center in a Former Coal-Fired Power Plant?
and many more ...
Google has spent the last few years waging a losing battle against Epic Games, which accused the Android maker of illegally stifling competition in mobile apps.
[...]
Late last month, Google was forced to make the first round of mandated changes to the Play Store to comply with the court's ruling. It grudgingly began allowing developers to direct users to alternative payment options and app downloads outside of Google's ecosystem.
[...]
These changes were only mandated for three years and in the United States. The new agreement includes a different vision for third-party stores on Android—one that Google finds more palatable and that still gives Epic what it wants. If approved, the settlement will lower Google's standard fee for developers. There will also be new support in Android for third-party app stores that will reduce the friction of leaving the Google bubble. Under the terms of the settlement, Google will support these changes through at least June 2032.

Google's Android chief, Sameer Samat, and Epic CEO Tim Sweeney announced the deal late on November 4. Sweeney calls it an "awesome proposal" that "genuinely doubles down on Android's original vision as an open platform."
[...]
The changes detailed in the settlement are not as wide-ranging as Judge Donato's original order but still mark a shift toward openness. Third-party app stores are getting a boost, developers will enjoy lower fees, and Google won't drag the process out for years. The parties claim in their joint motion that the agreement does not seek to undo the jury verdict or sidestep the court's previous order. Rather, it aims to reinforce the court's intent while eliminating potential delays in realigning the app market.

Google and Epic are going to court on Thursday to ask Judge Donato to approve the settlement, and Google could put the billing changes into practice by late this year.
Previously on SoylentNews:
After Two Rejections, Apple Approves Epic Games Store App for iOS - 20240716
Epic's Proposed Google Reforms to End Android App Market Monopoly - 20240414
"You a—Holes": Court Docs Reveal Epic CEO's Anger at Steam's 30% Fees - 20240316
In a blunt assessment that sent shockwaves through the tech and policy worlds, Nvidia CEO Jensen Huang has warned that China is poised to dominate the artificial intelligence (AI) race – not because of superior technology, but due to crippling energy costs and regulatory burdens hobbling Western competitors:
The prolific tech leader was speaking on the sidelines of the FT's Future of AI Summit, where he warned that China would beat the U.S. in artificial intelligence thanks to lower energy costs and looser regulations.
The comments, which CNBC could not verify independently, would represent Huang's starkest warning yet that the U.S. is at risk of losing its global lead in advanced AI technologies.
After the FT published their report, the Nvidia chief softened his tone on X:
"As I have long said, China is nanoseconds behind America in AI. It's vital that America wins by racing ahead and winning developers worldwide."
About 170 Starshield satellites built by SpaceX for the US government's National Reconnaissance Office (NRO) have been sending signals in the wrong direction, a satellite researcher found.
The SpaceX-built spy satellites are helping the NRO greatly expand its satellite surveillance capabilities, but the purpose of these signals is unknown. The signals are sent from space to Earth in a frequency band that's allocated internationally for Earth-to-space and space-to-space transmissions.
There have been no public complaints of interference caused by the surprising Starshield emissions. But the researcher who found them says they highlight a troubling lack of transparency in how the US government manages the use of spectrum and a failure to coordinate spectrum usage with other countries.
Scott Tilley, an engineering technologist and amateur radio astronomer in British Columbia, discovered the signals in late September or early October while working on another project. He found them in various parts of the 2025–2110 MHz band, and from his location, he was able to confirm that 170 satellites were emitting the signals over Canada, the United States, and Mexico. Given the global nature of the Starshield constellation, the signals may be emitted over other countries as well.
"This particular band is allocated by the ITU [International Telecommunication Union], the United States, and Canada primarily as an uplink band to spacecraft on orbit—in other words, things in space, so satellite receivers will be listening on these frequencies," Tilley told Ars. "If you've got a loud constellation of signals blasting away on the same frequencies, it has the potential to interfere with the reception of ground station signals being directed at satellites on orbit."
In the US, users of the 2025–2110 MHz portion of the S-Band include NASA and the National Oceanic and Atmospheric Administration (NOAA), as well as nongovernmental users like TV news broadcasters that have vehicles equipped with satellite dishes to broadcast from remote locations.
Experts told Ars that the NRO likely coordinated with the US National Telecommunications and Information Administration (NTIA) to ensure that signals wouldn't interfere with other spectrum users. A decision to allow the emissions wouldn't necessarily be made public, they said. But conflicts with other governments are still possible, especially if the signals are found to interfere with users of the frequencies in other countries.
Tilley previously made headlines in 2018 when he located a satellite that NASA had lost contact with in 2005. For his new discovery, Tilley published data and a technical paper describing the "strong wideband S-band emissions," and his work was featured by NPR on October 17.
Tilley's technical paper said emissions were detected from 170 satellites out of the 193 known Starshield satellites. Emissions have since been detected from one more satellite, making it 171 out of 193, he told Ars. "The apparent downlink use of an uplink-allocated band, if confirmed by authorities, warrants prompt technical and regulatory review to assess interference risk and ensure compliance" with ITU regulations, Tilley's paper said.
Tilley said he uses a mix of omnidirectional antennas and dish antennas at his home to receive signals, along with "software-defined radios and quite a bit of proprietary software I've written or open source software that I use for analysis work." The signals did not stop when the paper was published. Tilley said the emissions are powerful enough to be received by "relatively small ground stations."
Tilley's paper said that Starshield satellites emit signals with a width of 9 MHz and signal-to-noise (SNR) ratios of 10 to 15 decibels. "A 10 dB SNR means the received signal power is ten times greater than the noise power in the same bandwidth," while "20 dB means one hundred times," Tilley told Ars.
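For readers unfamiliar with decibel notation, the conversion is simple arithmetic; this minimal Python sketch (ours, not from Tilley's paper) shows how those dB figures map onto power ratios:

    import math

    def snr_db(signal_power: float, noise_power: float) -> float:
        # Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)
        return 10 * math.log10(signal_power / noise_power)

    print(snr_db(10, 1))   # 10.0 dB: signal power ten times the noise power
    print(snr_db(100, 1))  # 20.0 dB: signal power one hundred times the noise power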
Other Starshield signals that were 4 or 5 MHz wide "have been observed to change frequency from day to day with SNR exceeding 20dB," his paper said. "Also observed from time to time are other weaker wide signals from 2025–2110 MHz [that] may be artifacts or actual intentional emissions."
The 2025–2110 MHz band is used by NASA for science missions and by other countries for similar missions, Tilley noted. "Any other radio activity that's occurring on this band is intentionally limited to avoid causing disruption to its primary purpose," he said.
The band is used for some fully terrestrial, non-space purposes. Mobile service is allowed in 2025–2110 MHz, but ITU rules say that "administrations shall not introduce high-density mobile systems" in these frequencies. The band is also licensed in the US for non-federal terrestrial services, including the Broadcast Auxiliary Service, Cable Television Relay Service, and Local Television Transmission Service.
While Earth-based systems using the band, such as TV links from mobile studios, have legal protection against interference, Tilley noted that "they normally use highly directional and local signals to link a field crew with a studio... they're not aimed into space but at a terrestrial target with a very directional antenna." A trade group representing the US broadcast industry told Ars that it hasn't observed any interference from Starshield satellites.
[...] While Tilley doesn't know exactly what the emissions are for, his paper said the "signal characteristics—strong, coherent, and highly predictable carriers from a large constellation—create the technical conditions under which opportunistic or deliberate PNT exploitation could occur."
PNT refers to Positioning, Navigation, and Timing applications. "While it is not suggested that the system was designed for that role, the combination of wideband data channels and persistent carrier tones in a globally distributed or even regionally operated network represents a practical foundation for such use, either by friendly forces in contested environments or by third parties seeking situational awareness," the paper said.
Much more information in the linked source.
Microsoft CEO Satya Nadella said during an interview alongside OpenAI CEO Sam Altman that the problem in the AI industry is not an excess supply of compute, but rather a lack of power to accommodate all those GPUs. In fact, Nadella said that the company currently has a problem of not having enough power to plug in some of the AI GPUs the firm has in inventory. He said this on YouTube in response to Brad Gerstner, the host of Bg2 Pod, when asked whether Nadella and Altman agreed with Nvidia CEO Jensen Huang, who said there is no chance of a compute glut in the next two to three years.
"I think the cycles of demand and supply in this particular case, you can't really predict, right? The point is: what's the secular trend? The secular trend is what Sam (OpenAI CEO) said, which is, at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it's power — it's sort of the ability to get the builds done fast enough close to power," Satya said in the podcast. "So, if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips; it's actually the fact that I don't have warm shells to plug into." [Emphasis added]
Nadella's mention of 'shells' refers to a data center shell, which is effectively an empty building with all of the necessary ingredients, such as power and water, needed to immediately begin production.
AI's power consumption has been a topic many experts have discussed since last year. This came to the forefront as soon as Nvidia fixed the GPU shortage, and many tech companies are now investing in research in small modular nuclear reactors to help scale their power sources as they build increasingly large data centers.
This has already caused consumer energy bills to skyrocket, showing how the AI infrastructure being built out is negatively affecting the average American. OpenAI has even called on the federal government to build 100 gigawatts of power generation annually, saying that it's a strategic asset in the U.S.'s push for supremacy in its AI race with China. This comes after some experts said Beijing is miles ahead in electricity supply due to its massive investments in hydropower and nuclear power.
Aside from the lack of power, they also discussed the possibility of more advanced consumer hardware hitting the market. "Someday, we will make a[n] incredible consumer device that can run a GPT-5 or GPT-6-capable model completely locally at a low power draw — and this is like so hard to wrap my head around," Altman said. Gerstner then commented, "That will be incredible, and that's the type of thing that scares some of the people who are building, obviously, these large, centralized compute stacks."
This highlights another risk that companies must bear as they bet billions of dollars on massive AI data centers. While you would still need the infrastructure to train new models, the data center demand that many estimate will come from the widespread use of AI might not materialize if semiconductor advancements enable us to run models locally.
This could hasten the popping of the AI bubble, which some experts like Pat Gelsinger say is still several years away. But if and when that happens, we will be in for a shock as even non-tech companies would be hit by this collapse, exposing nearly $20 trillion in market cap.
We did the math on AI's energy footprint. Here's the story you haven't heard:
[...] Now that we have an estimate of the total energy required to run an AI model to produce text, images, and videos, we can work out what that means in terms of emissions that cause climate change.
First, a data center humming away isn't necessarily a bad thing. If all data centers were hooked up to solar panels and ran only when the sun was shining, the world would be talking a lot less about AI's energy consumption. That's not the case. Most electrical grids around the world are still heavily reliant on fossil fuels. So electricity use comes with a climate toll attached.
"AI data centers need constant power, 24-7, 365 days a year," says Rahul Mewawalla, the CEO of Mawson Infrastructure Group, which builds and maintains high-energy data centers that support AI.
That means data centers can't rely on intermittent technologies like wind and solar power, and on average, they tend to use dirtier electricity. One preprint study from Harvard's T.H. Chan School of Public Health found that the carbon intensity of electricity used by data centers was 48% higher than the US average. Part of the reason is that data centers currently happen to be clustered in places that have dirtier grids on average, like the coal-heavy grid in the mid-Atlantic region that includes Virginia, West Virginia, and Pennsylvania. They also run constantly, including when cleaner sources may not be available.
Tech companies like Meta, Amazon, and Google have responded to this fossil fuel issue by announcing goals to use more nuclear power. Those three have joined a pledge to triple the world's nuclear capacity by 2050. But today, nuclear energy only accounts for 20% of electricity supply in the US, and powers a fraction of AI data centers' operations—natural gas accounts for more than half of electricity generated in Virginia, which has more data centers than any other US state, for example. What's more, new nuclear operations will take years, perhaps decades, to materialize.
In 2024, fossil fuels including natural gas and coal made up just under 60% of electricity supply in the US. Nuclear accounted for about 20%, and a mix of renewables accounted for most of the remaining 20%.
Gaps in power supply, combined with the rush to build data centers to power AI, often mean shortsighted energy plans. In April, satellite imagery revealed that Elon Musk's X supercomputing center near Memphis was using dozens of methane gas generators to supplement grid power, generators that the Southern Environmental Law Center alleges are not approved by energy regulators and are violating the Clean Air Act.
The key metric used to quantify the emissions from these data centers is called the carbon intensity: how many grams of carbon dioxide emissions are produced for each kilowatt-hour of electricity consumed. Nailing down the carbon intensity of a given grid requires understanding the emissions produced by each individual power plant in operation, along with the amount of energy each is contributing to the grid at any given time. Utilities, government agencies, and researchers use estimates of average emissions, as well as real-time measurements, to track pollution from power plants.
This intensity varies widely across regions. The US grid is fragmented, and the mixes of coal, gas, renewables, or nuclear vary widely. California's grid is far cleaner than West Virginia's, for example.
Time of day matters too. For instance, data from April 2024 shows that California's grid can swing from under 70 grams per kilowatt-hour in the afternoon when there's a lot of solar power available to over 300 grams per kilowatt-hour in the middle of the night.
This variability means that the same activity may have very different climate impacts, depending on your location and the time you make a request. Take that charity marathon runner, for example. The text, image, and video responses they requested add up to 2.9 kilowatt-hours of electricity. In California, generating that amount of electricity would produce about 650 grams of carbon dioxide pollution on average. But generating that electricity in West Virginia might inflate the total to more than 1,150 grams.
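As a rough check on those figures, here is the underlying arithmetic in a short Python sketch; the per-kWh carbon intensities are assumptions back-derived from the article's totals, not published grid data:

    # Emissions in grams of CO2 = energy (kWh) * grid carbon intensity (g CO2/kWh)
    energy_kwh = 2.9
    assumed_intensity = {"California": 225, "West Virginia": 400}  # g CO2/kWh (assumed)

    for region, g_per_kwh in assumed_intensity.items():
        print(f"{region}: {energy_kwh * g_per_kwh:.0f} g CO2")
    # California: 652 g CO2
    # West Virginia: 1160 g CO2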
What we've seen so far is that the energy required to respond to a query can be relatively small, but it can vary a lot, depending on the type of query and the model being used. The emissions associated with that given amount of electricity will also depend on where and when a query is handled. But what does this all add up to?
ChatGPT is now estimated to be the fifth-most visited website in the world, just after Instagram and ahead of X. In December, OpenAI said that ChatGPT receives 1 billion messages every day, and after the company launched a new image generator in March, it said that people were using it to generate 78 million images per day, from Studio Ghibli–style portraits to pictures of themselves as Barbie dolls.
Given the direction AI is headed—more personalized, able to reason and solve complex problems on our behalf, and everywhere we look—it's likely that our AI footprint today is the smallest it will ever be.
One can do some very rough math to estimate the energy impact. In February the AI research firm Epoch AI published an estimate of how much energy is used for a single ChatGPT query—an estimate that, as discussed, makes lots of assumptions that can't be verified. Still, they calculated about 0.3 watt-hours, or 1,080 joules, per message. This falls in between our estimates for the smallest and largest Meta Llama models (and experts we consulted say that if anything, the real number is likely higher, not lower).
One billion of these every day for a year would mean over 109 gigawatt-hours of electricity, enough to power 10,400 US homes for a year. If we add images and imagine that generating each one requires as much energy as it does with our high-quality image models, it'd mean an additional 35 gigawatt-hours, enough to power another 3,300 homes for a year. This is on top of the energy demands of OpenAI's other products, like video generators, and those of all the other AI companies and startups.
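Those totals are straightforward to reproduce. A minimal sketch, with the per-image energy and the average US-household consumption assumed so as to match the article's round numbers:

    # Reproducing the article's rough annual-energy arithmetic.
    messages_per_day = 1_000_000_000
    wh_per_message = 0.3                    # Epoch AI's per-query estimate
    text_gwh = messages_per_day * wh_per_message * 365 / 1e9
    print(f"Text queries: {text_gwh:.1f} GWh/year")            # ~109.5 GWh

    images_per_day = 78_000_000
    wh_per_image = 1.22                     # assumed to match the 35 GWh figure
    image_gwh = images_per_day * wh_per_image * 365 / 1e9
    print(f"Images: {image_gwh:.1f} GWh/year")                 # ~34.7 GWh

    home_kwh_per_year = 10_500              # assumed average US household usage
    print(f"Equivalent homes: {text_gwh * 1e6 / home_kwh_per_year:,.0f}")  # ~10,400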
But here's the problem: These estimates don't capture the near future of how we'll use AI. In that future, we won't simply ping AI models with a question or two throughout the day, or have them generate a photo. Instead, leading labs are racing us toward a world where AI "agents" perform tasks for us without our supervising their every move. We will speak to models in voice mode, chat with companions for 2 hours a day, and point our phone cameras at our surroundings in video mode. We will give complex tasks to so-called "reasoning models" that work through tasks logically but have been found to require 43 times more energy for simple problems, or "deep research" models that spend hours creating reports for us. We will have AI models that are "personalized" by training on our data and preferences.
This future is around the corner: OpenAI will reportedly offer agents for $20,000 per month and will use reasoning capabilities in all of its models moving forward, and DeepSeek catapulted "chain of thought" reasoning into the mainstream with a model that often generates nine pages of text for each response. AI models are being added to everything from customer service phone lines to doctor's offices, rapidly increasing AI's share of national energy consumption.
"The precious few numbers that we have may shed a tiny sliver of light on where we stand right now, but all bets are off in the coming years," says Luccioni.
Every researcher we spoke to said that we cannot understand the energy demands of this future by simply extrapolating from the energy used in AI queries today. And indeed, the moves by leading AI companies to fire up nuclear power plants and create data centers of unprecedented scale suggest that their vision for the future would consume far more energy than even a large number of these individual queries.
"The precious few numbers that we have may shed a tiny sliver of light on where we stand right now, but all bets are off in the coming years," says Luccioni. "Generative AI tools are getting practically shoved down our throats and it's getting harder and harder to opt out, or to make informed choices when it comes to energy and climate."
To understand how much power this AI revolution will need, and where it will come from, we have to read between the lines.
Remember when you thought age verification laws couldn't get any worse? Well, lawmakers in Wisconsin, Michigan, and beyond are about to blow you away.
It's unfortunately no longer enough to force websites to check your government-issued ID before you can access certain content, because politicians have now discovered that people are using Virtual Private Networks (VPNs) to protect their privacy and bypass these invasive laws. Their solution? Entirely ban the use of VPNs.
Yes, really.
As of this writing, Wisconsin lawmakers are escalating their war on privacy by targeting VPNs in the name of "protecting children" in A.B. 105/S.B. 130. It's an age verification bill that requires all websites distributing material that could conceivably be deemed "sexual content" to both implement an age verification system and also to block the access of users connected via VPN. The bill seeks to broadly expand the definition of materials that are "harmful to minors" beyond the type of speech that states can prohibit minors from accessing—potentially encompassing things like depictions and discussions of human anatomy, sexuality, and reproduction.
This follows a notable pattern: As we've explained previously, lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of "harmful to minors" to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature.
Wisconsin's bill has already passed the State Assembly and is now moving through the Senate. If it becomes law, Wisconsin could become the first state where using a VPN to access certain content is banned. Michigan lawmakers have proposed similar legislation that has not advanced through the legislature but would, among other things, force internet providers to actively monitor and block VPN connections. And in the UK, officials are calling VPNs "a loophole that needs closing."
This is actually happening. And it's going to be a disaster for everyone.
VPNs mask your real location by routing your internet traffic through a server somewhere else. When you visit a website through a VPN, that website only sees the VPN server's IP address, not your actual location. It's like sending a letter through a P.O. box so the recipient doesn't know where you really live.
So when Wisconsin demands that websites "block VPN users from Wisconsin," they're asking for something that's technically impossible. Websites have no way to tell if a VPN connection is coming from Milwaukee, Michigan, or Mumbai. The technology just doesn't work that way.
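You can verify this yourself by asking an IP-echo service what address it observes for your connection (ifconfig.me is one such public service; any equivalent works). A minimal Python sketch:

    # Prints the IP address that websites see for this connection.
    import urllib.request

    with urllib.request.urlopen("https://ifconfig.me/ip") as response:
        print(response.read().decode().strip())
    # With the VPN off, this is your ISP-assigned address; with it on, it is
    # the VPN server's address, possibly in another state or country entirely.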
Websites subject to this proposed law are left with this choice: either cease operation in Wisconsin, or block all VPN users, everywhere, just to avoid legal liability in the state. One state's terrible law is attempting to break VPN access for the entire internet, and the unintended consequences of this provision could far outweigh any theoretical benefit.
Almost Everyone Uses VPNs

Let's talk about who lawmakers are hurting with these bills, because it sure isn't just people trying to watch porn without handing over their driver's license.
- Businesses run on VPNs. Every company with remote employees uses VPNs. Every business traveler connecting through sketchy hotel Wi-Fi needs one. Companies use VPNs to protect client and employee data, secure internal communications, and prevent cyberattacks.
- Students need VPNs for school. Universities require students to use VPNs to access research databases, course materials, and library resources. These aren't optional, and many professors literally assign work that can only be accessed through the school VPN. The University of Wisconsin-Madison's WiscVPN, for example, "allows UW–Madison faculty, staff and students to access University resources even when they are using a commercial Internet Service Provider (ISP)."
- Vulnerable people rely on VPNs for safety. Domestic abuse survivors use VPNs to hide their location from their abusers. Journalists use them to protect their sources. Activists use them to organize without government surveillance. LGBTQ+ people in hostile environments—both in the US and around the world—use them to access health resources, support groups, and community. For people living under censorship regimes, VPNs are often their only connection to vital resources and information their governments have banned.
- Regular people just want privacy. Maybe you don't want every website you visit tracking your location and selling that data to advertisers. Maybe you don't want your internet service provider (ISP) building a complete profile of your browsing history. Maybe you just think it's creepy that corporations know everywhere you go online. VPNs can protect everyday users from everyday tracking and surveillance.
Here's what happens if VPNs get blocked: everyone has to verify their age by submitting government IDs, biometric data, or credit card information directly to websites—stripped of the privacy layer a VPN provides.
We already know how this story ends. Companies get hacked. Data gets breached. And suddenly your real name is attached to the websites you visited, stored in some poorly secured database waiting for the inevitable leak. This has already happened, and it will happen again: the next breach is not a matter of if but when. And when it comes, the repercussions will be huge.
Forcing people to give up their privacy to access legal content is the exact opposite of good policy. It's surveillance dressed up as safety.
"Harmful to Minors" Is Not a Catch-AllHere's another fun feature of these laws: they're trying to broaden the definition of "harmful to minors" to sweep in a host of speech that is protected for both young people and adults.
Historically, states can prohibit people under 18 years old from accessing sexual materials that an adult can access under the First Amendment. But the definition of what constitutes "harmful to minors" is narrow — it generally requires that the materials have almost no social value to minors and that they, taken as a whole, appeal to minors' "prurient sexual interests."
Wisconsin's bill defines "harmful to minors" much more broadly. It applies to materials that merely describe sex or feature descriptions/depictions of human anatomy. This definition would likely encompass a wide range of literature, music, television, and films that are protected under the First Amendment for both adults and young people, not to mention basic scientific and medical content.
Additionally, the bill's definition would apply to any websites where more than one third of the site's material is "harmful to minors." Given the breadth of the definition and its one-third trigger, we anticipate that Wisconsin could argue that the law applies to most social media websites. And it's not hard to imagine, as these topics become politicised, Wisconsin claiming it applies to websites containing LGBTQ+ health resources, basic sexual education resources, and reproductive healthcare information.
This breadth of the bill's definition isn't a bug, it's a feature. It gives the state a vast amount of discretion to decide which speech is "harmful" to young people, and the power to decide what's "appropriate" and what isn't. History shows us those decisions most often harm marginalized communities.
Let's say Wisconsin somehow manages to pass this law. Here's what will actually happen:
People who want to bypass it will use non-commercial VPNs, open proxies, or cheap virtual private servers that the law doesn't cover. They'll find workarounds within hours. The internet always routes around censorship.
Even in a fantasy world where every website successfully blocked all commercial VPNs, people would just make their own. You can route traffic through cloud services like AWS or DigitalOcean, tunnel through someone else's home internet connection, use open proxies, or spin up a cheap server for less than a dollar.
Meanwhile, everyone else (businesses, students, journalists, abuse survivors, regular people who just want privacy) will have their VPN access impacted. The law will accomplish nothing except making the internet less safe and less private for users.
Nonetheless, as we've mentioned previously, while VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. Like the larger age verification legislation they are a part of, VPN-blocking provisions simply don't work. They harm millions of people and they set a terrifying precedent for government control of the internet. More fundamentally, legislators need to recognize that age verification laws themselves are the problem. They don't work, they violate privacy, they're trivially easy to circumvent, and they create far more harm than they prevent.
A False Dilemma

People have (predictably) turned to VPNs to protect their privacy as they watched age verification mandates proliferate around the world. Instead of taking this as a sign that maybe mass surveillance isn't popular, lawmakers have decided the real problem is that these privacy tools exist at all, and are trying to ban the tools that let people maintain their privacy.
Let's be clear: lawmakers need to abandon this entire approach.
The answer to "how do we keep kids safe online" isn't "destroy everyone's privacy." It's not "force people to hand over their IDs to access legal content." And it's certainly not "ban access to the tools that protect journalists, activists, and abuse survivors."
If lawmakers genuinely care about young people's well-being, they should invest in education, support parents with better tools, and address the actual root causes of harm online. What they shouldn't do is wage war on privacy itself. Attacks on VPNs are attacks on digital privacy and digital freedom. And this battle is being fought by people who clearly have no idea how any of this technology actually works.
If you live in Wisconsin—reach out to your Senator and urge them to kill A.B. 105/S.B. 130. Our privacy matters. VPNs matter. And politicians who can't tell the difference between a security tool and a "loophole" shouldn't be writing laws about the internet.
The new variable allows users to disable the default behavior of boosting their NVIDIA GPU to a higher power state when running CUDA apps.
NVIDIA released today the NVIDIA 580.105.08 graphics drivers for NVIDIA GPUs on Linux, BSD, and Solaris systems as a new update in the latest NVIDIA 580 series.
NVIDIA 580.105.08 is here to introduce a new environment variable, CUDA_DISABLE_PERF_BOOST, which allows users to disable the default behavior of boosting their NVIDIA GPU to a higher power state when running CUDA apps. Setting this environment variable to '1' will disable the boost.
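For CUDA workloads launched from Python (a common case with frameworks such as PyTorch), usage might look like the sketch below. It assumes the driver reads the variable when CUDA is first initialized, so it is set before any CUDA library is imported:

    import os

    # Opt out of the default boost to a higher GPU power state for CUDA apps.
    os.environ["CUDA_DISABLE_PERF_BOOST"] = "1"

    # Shell users can instead prefix the command:
    #   CUDA_DISABLE_PERF_BOOST=1 ./my_cuda_app
    import torch  # example workload; any CUDA application applies
    print(torch.cuda.is_available())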
This release also fixes an issue that caused the vfio-pci module to soft lock up after powering off a virtual machine with passed-through NVIDIA GPUs, a bug that caused the Rage 2 video game to crash when loading the game menu, and a bug that caused the Metro Exodus EE (Enhanced Edition) video game to crash.
On top of that, NVIDIA 580.105.08 fixes a bug that could cause some HDMI displays to remain blank after unplugging and re-plugging them, as well as a bug that allowed VRR (Variable Refresh Rate) to be enabled on some modes where it isn't actually possible, leading to a black screen.
Also fixed in this release is a recent regression that prevented HDMI FRL (Fixed Rate Link) from working after hot unplugging and replugging a display, and an issue that could prevent large resolution or high refresh rate modes like 7680x2160p@240hz from being available when using HDMI FRL or DisplayPort.
Check out the changelog for more details about the changes implemented in the NVIDIA 580.105.08 graphics driver, which is available for download from the same page as a binary installer for 64-bit and AArch64 (ARM64) GNU/Linux distributions.
Binaries are also available for 64-bit FreeBSD systems, as well as 32-bit and 64-bit Solaris systems. NVIDIA 580.105.08 is considered the latest production branch version, and it is the recommended version for all users.
https://www.npr.org/2025/09/03/g-s1-86846/spiked-dinosaur-discovered-spicomellus
A dinosaur that roamed modern-day Morocco more than 165 million years ago had a neck covered in three-foot-long spikes, a weapon on its tail and bony body armor, according to researchers who unearthed the curious beast's remains.
The discovery of the animal Spicomellus in the Moroccan town of Boulemane painted a clearer picture of the bizarre, spiked ankylosaur, which was first described in 2021 based on the discovery of a single rib bone.
Researchers now understand that the four-legged herbivore, which was about the size of a small car, was much more elaborately armored than originally believed, according to research published last month in the journal Nature.
"Spicomellus had a diversity of plates and spikes extending from all over its body, including metre-long neck spikes, huge upwards-projecting spikes over the hips, and a whole range of long, blade-like spikes, pieces of armour made up of two long spikes, and plates down the shoulder," research co-lead Susannah Maidment said in a statement to London's Natural History Museum.
"We've never seen anything like this in any animal before."
The Spicomellus' ribs were lined with fused spikes projecting outward — a feature never witnessed before in any other vertebrate, living or extinct.
Co-lead of the project Richard Butler, a paleobiology professor at the University of Birmingham, described seeing the fossil for the first time as "spine-tingling."
"We just couldn't believe how weird it was and how unlike any other dinosaur, or indeed any other animal we know of, alive or extinct," Butler told the Natural History Museum.
"It turns much of what we thought we knew about ankylosaurs and their evolution on its head and demonstrates just how much there still is to learn about dinosaurs," he added.
Researchers suggest that the Spicomellus' complex bone structure was used both to attract mates and deter rivals.
Discovering that the dinosaur had such elaborate armor that possibly prioritized form as much as function set the animal apart from its later relatives, which had simpler, more defensive covering on their bodies.
In addition to showy barbs along Spicomellus' exterior, remains of the animal's tail also provided a stunning new detail for scientists.
Fused vertebrae running down into its tail formed a "handle," likely leading to a club-like weapon at the end — a feature scientists had previously believed ankylosaurs did not evolve until the Cretaceous period, millions of years later.
"To find such elaborate armour in an early ankylosaur changes our understanding of how these dinosaurs evolved," Maidment said.
"It shows just how significant Africa's dinosaurs are, and how important it is to improve our understanding of them," she said.
[Ed. note: The link to the Nature article in the summary contains an embedded sharing token so that one can read the journal article. The link in the citation below is the clean link to the paywalled article page.]
Journal Reference: Maidment, S.C.R., Ouarhache, D., Ech-charay, K. et al. Extreme armour in the world's oldest ankylosaur. Nature 647, 121–126 (2025). https://doi.org/10.1038/s41586-025-09453-6
https://itsfoss.com/news/devuan-6-release/
Devuan is a Linux distribution that takes a different approach from most popular distros in the market. It is based on Debian but offers users complete freedom from systemd.
The project emerged in 2014 when a group of developers decided to offer init freedom. Devuan maintains compatibility with Debian packages while providing alternative init systems like SysVinit and OpenRC.
With a recent announcement, a new Devuan release has arrived with some important quality-of-life upgrades.

Codenamed "Excalibur", this release arrives after extensive testing by the Devuan community. It is based on Debian 13 "Trixie" and inherits most of its improvements and package upgrades.
Devuan 6.0 ships with Linux kernel 6.12, an LTS kernel that brings real-time PREEMPT_RT support for time-critical applications and improved hardware compatibility.
On the desktop environment side of things, Xfce 4.20 is offered as the default one for the live desktop image, with additional options like KDE Plasma, MATE, Cinnamon, LXQt, and LXDE.
The package management system gets a major upgrade with APT 3.0 and its new Solver3 dependency resolver. This backtracking algorithm handles complex package installations more efficiently than previous versions. Combined with the color-coded output, the package management experience is more intuitive now.
This Devuan release also makes the merged-/usr filesystem layout compulsory for all installations. Users upgrading from Daedalus (Devuan 5.0) must install the usrmerge package before attempting the upgrade.
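On a Daedalus system, that pre-upgrade step is a single package install; a minimal sketch (consult the official upgrade guide for the full procedure):

    sudo apt update
    sudo apt install usrmerge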
Similarly, new installations now use tmpfs for the /tmp directory, storing temporary files in RAM instead of on disk. This improves performance through faster read and write operations.
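In practice this amounts to an /etc/fstab entry along the following lines (illustrative only; the exact mount options Devuan ships may differ):

    tmpfs   /tmp   tmpfs   defaults,nosuid,nodev   0   0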
And, following Debian's lead, Devuan 6.0 does not include an i386 installer ISO. The shift away from 32-bit support is now pretty much standard across major distributions. That said, i386 packages are still available in the repositories.
The next release, Devuan 7, is codenamed "Freia". Repositories are already available for those adventurous enough to be early testers.
This release supports multiple CPU architectures, including amd64, arm64, armhf, armel, and ppc64el. You will find the relevant installation media on the official website, which lists HTTP mirrors and torrents.
Existing Devuan 5.0 "Daedalus" users can follow the official upgrade guide.
I have been playing YouTube videos, despite the obvious risk to my mental health.
I am using Firefox on Linux and tend to have the "volume control" on my desktop because I use an external sound card to record or drive headphones.
I notice that each time an ad comes on, the volume setting jumps up. It's not that the ad sound level is higher (although it IS).
The actual volume setting is bumped up and remains so after I have skipped the advert.
Is this not illegal interference with my computer? An offence against some law?
[Editor's Comment: Has anyone else witnessed this? I watch YouTube but rarely see any ads in the videos that I watch. As for 'legal advice' - if it is happening, we have probably signed our lives away somewhere that permits it. ]
https://edition.cnn.com/travel/mad-honey-deli-bal-turkey-black-sea
In the little wooden hut perched high on metal-wrapped stilts, the drone is high, loud and insistent.
With his beekeeping suit on, but hands uncovered, Hasan Kutluata squeezes the bellows on his pine-filled bee smoker. Pale wreaths swirl in the air, mirroring the mist that drifts over the slopes of the densely forested Kaçkar mountains outside.
The smoke is to calm the bees, masking the pheromone they release when they sense danger and which warns other bees to attack.
When Kutluata lifts the lid off the round lindenwood hives, the hum rises to a crescendo — but these bees aren't angry, it's just their honey that's mad.
We're here to harvest deli bal — bal means "honey" and deli means "crazy" or "mad" — and Turkey's Black Sea region is one of only two places in the world to produce it, the other being Nepal's Hindu Kush Himalayan mountain range.
"In our untouched forests, the purple rhododendron blooms in spring," Kutluata tells CNN. "The bees collect nectar from those flowers, and that's how we get the mad honey."
The nectar contains a naturally occurring toxin called grayanotoxin. The amount that makes it into the honey varies per season and what other flowers the bees have been feasting on, but a spoonful can pack enough buzz to deliver a gently soporific high — while a jar would land you in a hospital.
For millennia, deli bal has been used as folk medicine, a spoonful taken daily to lower blood pressure or used as a sexual stimulant. Today, this potentially dangerous delicacy sells at a premium price.
[...] Deli bal is a dark amber red and its scent is sharp. The taste is earthy with subtle barnyard notes. There are telltale sensations that announce the presence of grayanotoxin: A herbal bitterness underlies the sweetness of the honey and a burning heat catches the back of the throat.
[...] This is a food that has felled armies. In the 4th century BCE, the Greek military leader Xenophon wrote of soldiers traveling near Trabzon on the Black Sea coast who overindulged on the sweet treat: "Not one of them could stand up, but those who had eaten a little were like people exceedingly drunk, while those who had eaten a great deal seemed like crazy, or even, in some cases, dying men. So they lay there in great numbers as though the army had suffered a defeat, and great despondency prevailed."
[...] "The longer the honey stays in the hive, the higher its quality becomes. The quality is determined by the promille value," he explains. Promille refers to the concentration of the honey. "The higher the promille value, the higher the quality."
"Chestnut honey can be found everywhere, but it really makes a difference," adds Emine. "In terms of the promille value, it can be 600, 700, 800, but elsewhere, it might be 500 in terms of quality."
[...] To Emine, honey "represents health. If my throat is sore, I turn to honey. If I'm coughing, I turn to honey. If I'm feeling weak, I turn to honey again."
[...] Deli bal can be sold legally in Turkey and is legal in many countries. However, the US Food and Drug Administration does not recommend its consumption.
"Consumers should check labeling of honey to ensure it is not labeled as 'mad honey' or marketed for intoxicating qualities," an FDA spokesperson told CNN.
"Eating honey with a high amount of this toxin can lead to 'mad honey' poisoning, with symptoms such as nausea, vomiting, or dizziness. This type of poisoning is rare."
AI resistance: Who says no to AI and why? – Digital Society Blog:
A poisoned dataset. A writers' strike that froze Hollywood for 148 days. Street protests against data centres. Behind each of these acts lies a growing global pushback against artificial intelligence. Drawing on the recent report, "From Rejection to Regulation: Mapping the Landscape of AI Resistance," by Can Simsek and Ayse Gizem Yasar, this article examines how artists, workers, activists, and scholars challenge the design, deployment, and governance of AI systems. It explores the drivers behind AI resistance and outlines a research agenda that treats these acts not as obstacles, but as vital contributions to democratic AI governance.
Artificial intelligence is catalysing a radical sociotechnical transformation, reshaping not only our technological infrastructures but also the institutions that organise society. In the midst of this shift, crucial questions arise: Who determines the direction of this change and the future we want to build? Who remains unheard in the conversation? Are we passive observers of increasingly deployed powerful algorithms, or do we have the agency and responsibility to challenge and reshape them?
Acts of pushback are already unfolding across diverse domains and geographies. While heterogeneous in form and motivation, these interventions share a critical orientation towards the pace, purpose, and underlying power structures of contemporary AI development. Rather than isolated incidents, they constitute elements of a broader landscape of AI resistance that demands closer attention.
To see today's pushback against AI in context, it helps to remember that resistance to new technology is nothing new. Technological paradigm shifts have consistently triggered societal concern and resistance, from the 19th century Luddites who opposed textile machinery due to labor displacement, to current debates on digital surveillance and algorithmic bias. As artificial intelligence emerges as a major transformative force, public reactions continue to alternate between optimism and concern. On the one hand, governments and private firms are committing unprecedented levels of investment in AI development; on the other, a growing amount of "AI resistance" raises fundamental objections to how these technologies are being designed, produced, deployed, and governed. But what exactly is AI resistance?
The concept of "resistance" in the context of AI encompasses a wide spectrum of actions and discourses that may be overt or subtle, organised or diffuse, individual or collective, oppositional or reformist. Drawing on insights from critical theory and science and technology studies, resistance to artificial intelligence can be understood as a form of agency exercised within existing systems of power. In this framing, the object of resistance is not technology per se, but the sociotechnical arrangements and asymmetries that both shape and are shaped by the development and application of AI.
Such resistance can manifest in diverse forms, including public protest, legal action, digital subversion, scholarly critique, and grassroots advocacy. Comparable to civil disobedience, these practices reflect a principled commitment to ethical, legal, or democratic norms perceived to be undermined by the development or deployment of certain AI systems. The term "AI resistance" therefore covers a broad range of actions and is open to multiple interpretations, given that both "resistance" and "artificial intelligence" are expansive and inherently abstract concepts. But what does AI resistance look like in practice?
In the report, we recorded numerous instances of AI resistance, including protests against the environmental impacts of data centers, opposition from big tech employees over military applications of AI, and public outcry over the UK's A-level grading fiasco. While not intended to be exhaustive, we surveyed six key areas where such resistance has been particularly active:
- creative industries
- migration and border control
- medical AI
- higher education
- defense and security sectors and
- environmental activism
In doing so, we highlighted key actors in AI resistance, with particular emphasis on the role of civil society in mobilising public opposition. The report also looks at how governments have turned some forms of resistance into law. One example is the EU AI Act, which prohibits certain AI practices, such as deliberately manipulative systems.
The report also points to five main reasons why people push back against AI, each illustrated with real-world examples:
- First, there are socio-economic concerns, visible for example in the creative industries, where the 2023 Writers Guild of America strike took aim at AI's potential to replace human jobs
- Second, ethical issues arise when AI systems are opaque or biased, such as migration risk-assessment tools that can unfairly influence decisions about people's futures
- Third, safety risks are a concern, especially in healthcare, where flawed AI diagnostic results have led medical professionals to speak out
- Fourth, there are threats to democracy and sovereignty, including the use of AI for large-scale societal manipulation
- And finally, there's the environmental impact: climate-focused NGOs have highlighted research showing the significant carbon footprint of training large AI models
Journal Reference: Şimşek and Yasar (2025). From Rejection to Regulation: Mapping the Landscape of AI Resistance. Available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5287068
A new scheme will help to share the benefits of solar power in the daytime:
Australian households will be able to access free electricity for three hours every day, in an effort to encourage energy use when excess solar power is being fed into the grid.
The federal government scheme will require retailers to offer free electricity to households for at least three hours in the middle of the day, when there is often more electricity generated than is being used, leading to very cheap or even negative wholesale prices.
The Solar Sharer scheme will initially be introduced to consumers in default market offer regions like NSW, south-east Queensland and South Australia from July next year, with consultation to extend the scheme to other jurisdictions by 2027.
Households with smart meters will be able to run washers and dryers, air conditioning or any other appliances for free within the three-hour window.
Climate Change Minister Chris Bowen said the scheme would spread the benefits of solar panels around, including to people without panels and to those who rent their homes.
"There is so much power in the middle of the day now that often the prices are very cheap or negative and this should be something, by our analysis, that energy companies can incorporate and offer," Mr Bowen told the ABC.
"It's not a silver bullet, and it is part of a suite of measures, but it's a good one. No one would claim that one particular policy solves all the challenges in the energy market."
Mr Bowen added that modern technology had made it easier for people to schedule appliances to start in the middle of the day, when electricity would be free.
"We want to see the benefits of renewable energy flow to all, even those without solar panels or batteries," he said.
But retailers have reacted with surprise to the announcement, saying it had not been raised in consultations on reforms to the network.
"This lack of consultation risks damaging industry confidence, as well as creating the potential for unintended consequences," the Australian Energy Council's chief executive Louisa Kinnear said in a statement.
[...] The government said the shift in demand was expected to lower costs for everyone by reducing peak demand in the evening, which would also minimise the need for "costly" network upgrades to ensure grid stability.
The federal government has been under pressure to address power price concerns as state and federal rebates wind down, a shift that has contributed to a recent uptick in inflation.
Akaysha Energy bags AU$460 million for 1,244MWh BESS in Victoria, Australia:
The financing is underpinned by a 15-year virtual tolling agreement with Snowy Hydro, representing the state-owned generator's first battery offtake agreement.
With a contracted capacity of 220MW, the arrangement constitutes the largest four-hour virtual toll agreement in the Australian market. Snowy Hydro has been active in securing battery storage capacity, with the company signing multiple offtake deals for over 2GWh of battery energy storage across Australia.
Located in southwest Victoria, the Elaine BESS will connect to the National Electricity Market (NEM) through existing transmission infrastructure. The strategic positioning will enable the battery system to manage transmission outage risks and support the integration of renewable energy sources, particularly wind and solar generation, into the grid.
[...] Akaysha Energy has established itself as a leading developer and operator of utility-scale battery storage systems in Australia. The company recently achieved commercial operation of Stage 1 of the 850MW/1,680MWh Waratah Super Battery.
See also:
https://hackaday.com/2025/11/11/moving-from-windows-to-freebsd-as-the-linux-chaos-alternative/
Back in the innocent days of Windows 98 SE, I nearly switched to Linux on account of how satisfied I was with my Windows experience. This began with the Year of the Linux Desktop in 1999, which started with me purchasing a boxed copy of SuSE Linux and ended with me switching to Windows 2000. After this I continued tinkering with non-Windows OSes, including QNX, BeOS, various BSDs, as well as Linux distributions that promised a 'Windows-like' desktop experience, such as Lindows.
Now that Windows 2000's proud legacy has seen itself reduced to a rusting wreck resting on cinderblocks on Microsoft's dying front lawn, the quiet discomfort that many Windows users have felt since Windows 7 was forcefully End-Of-Life-d has only increased. With it comes the uncomfortable notion that Windows as a viable desktop OS may be nearing its demise. Yet where to from here?
Although the recommendations from the peanut gallery seem to coalesce around Linux or Apple's MacOS (formerly OS X), there are a few dissenting voices extolling the virtues of FreeBSD over both. There are definitely compelling reasons to pick FreeBSD over Linux, in addition to it being effectively MacOS's cousin. Best of all is not having to deal with the Chaos Vortex that spawns whenever you dare to utter the question of 'which Linux distro?'. Within the world of FreeBSD there is just FreeBSD, which makes for a remarkably coherent experience.
[...] In case you're more into the 'just add water' level of desktop OS installation, the GhostBSD project provides a ready-to-go option for a zero-fuss installation like you would see with Linux Mint, Manjaro Linux, and kin. Although I have previously done the hard-mode path with FreeBSD virtual machines, to save myself the time and bother I opted for the GhostBSD experience here.
[...] Since any open source software of note that runs on Linux tends to have a native FreeBSD build, the experience here is rather same-ish. Where things can get interesting is anything related to the GPU, especially gaming. These days that of course means getting Steam and ideally the GoG Galaxy client running, which cracks open a pretty big can of proprietary worms.
[...] The two available options here are either to take one's chances with the linuxulator-steam-utils workarounds, which try to stuff the Linux Steam client into a chroot, or to go Wine all the way with the Windows Steam client and add more Windows to your OSS.
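Both routes ultimately lean on FreeBSD's Linux binary-compatibility layer (the Linuxulator). As a rough sketch of what enabling it looks like (these are the standard FreeBSD Handbook steps, not commands from the article, and package names may differ depending on your ports tree):

    # Enable the Linux compatibility layer at boot, and start it now
    sysrc linux_enable="YES"
    service linux start

    # Install a Linux userland; linux_base-rl9 (Rocky Linux 9) is the
    # base in recent ports trees, older setups used linux_base-c7
    pkg install linux_base-rl9

    # The Steam workaround mentioned above lives in the ports tree;
    # confirm the exact package name before installing
    pkg search steam-utils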
[...] As it turns out, the low-fuss method to get Steam and GoG Galaxy working is via the Mizutamari Wine GUI frontend. Simply install it with pkg install mizuma or via the package center, open it from the Games folder in the start menu, select the desired application's name, and hit the Install button. Within minutes I had both Steam and the 'classic' GoG Galaxy clients installed and running. The only glitch was that the current GoG Galaxy client didn't want to work, but that might have been a temporary issue. Since I only ever use the GoG Galaxy 1.x client on Windows, this was fine for me.
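Condensed into commands, that process looks something like the following (the package name is taken from the article; it's worth confirming with pkg search before relying on it):

    # Install the Mizutamari Wine frontend (name as given in the article)
    pkg install mizuma
    # Then launch it from the Games folder in the start menu, pick the
    # desired client (Steam, GoG Galaxy) from the list, and click Install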
[...] Aside from gaming, there are many possible qualifications for what might make a 'Windows desktop replacement'. As far as FreeBSD goes, the primary annoyance is having to constantly lean on the Linux or Windows versions of software. This is also true for things like DaVinci Resolve for video editing: since there's no official FreeBSD version, you have to stuff the Linux version into a chroot once again to run it via the Linux compatibility layer.
Although following the requisite steps isn't rocket science for advanced users, it would simply be nice if a native version existed and you could just install the package. Based on my own experiences porting a non-trivial application like the FFmpeg- and SDL-based NymphCast to FreeBSD – among other OSes – such porting isn't complicated at all, assuming your code doesn't insist on going around POSIX and doing pretty wild Linux-specific things.
The FreeBSD Foundation is pleased to announce that it has completed work to build FreeBSD without requiring root privilege. We have implemented support for all source release builds to use no-root infrastructure, eliminating the need for root privileges across the FreeBSD release pipeline. This work was completed as part of the program commissioned by the Sovereign Tech Agency.
This is great news in and of itself, but there's more: FreeBSD has also improved build reproducibility. This means that given the same source input, you should end up with the same binary output, which is an important part of building a verifiable chain of trust. These two improvements combined further add to making FreeBSD a trustworthy, secure option – something it already is anyway.
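As a minimal sketch of what reproducibility means in practice: build the same sources twice, on different machines or at different times, and compare checksums of the resulting artifacts. (The file names below are illustrative, not the project's actual release pipeline.)

    # Two independent builds of identical sources should yield
    # bit-identical artifacts (paths and names are illustrative)
    sha256 -q build-a/base.txz
    sha256 -q build-b/base.txz
    # Matching digests mean the build is reproducible; a mismatch means
    # something nondeterministic (timestamps, build paths, host names)
    # leaked into the output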
In case you haven't noticed, the FreeBSD project and its countless contributors have been making a ton of tangible progress lately on a wide variety of topics, from improving desktop use, to solidifying Wi-Fi support, to improving the chain of trust. I think the time is quite right for FreeBSD to make some inroads in the desktop UNIX-y space, especially for people who feel desktop Linux has strayed too far from the traditional UNIX philosophy (whatever that means).
- https://www.osnews.com/story/143733/freebsd-now-builds-reproducibly-and-without-root-privilege/