The polish registers as touch by disrupting the screen's electric field:
A newly formulated nail polish could one day let people activate touchscreens with their fingernails.
When pressed to a screen, the polish disrupts the screen’s electric field, which the device registers as touch. While the formula isn’t commercially viable yet, it could allow people with long nails to use them like styluses.
“This is huge, because it shows that functional behavior can be embedded invisibly into everyday cosmetic materials,” says Shuyi Sun, a computer scientist who has studied cosmetic biosensors and now works at the Association of California Nurse Leaders in Sacramento.
Touchscreens, such as on smartphones and tablets, are typically made of glass coated with a thin, transparent layer of electrically conductive material. That layer creates a small electric field across the screen. When another conductive object, such as a fingertip, contacts the screen, it disturbs the electric field. The device registers that disturbance as a touch and can detect the point on the screen where it occurred.
But nonconductive materials — like a fingernail or the fabric of a glove — don’t distort the field, so they don’t register on the screen. People with long nails must use the pads of their fingers to type because they can’t use their nails.
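To make the sensing idea concrete, here is a minimal, purely illustrative Python sketch of the decision a capacitive controller makes: compare each electrode's reading against its baseline and report the strongest disturbance above a threshold. The grid size, threshold, and readings are invented for illustration; real controllers do far more filtering and interpolation.

```python
import numpy as np

# Purely illustrative: a toy model of how a capacitive controller might flag a touch.
# Real controllers scan row/column electrodes and interpolate sub-pixel positions;
# here we just compare readings against a stored baseline. All numbers are made up.

rng = np.random.default_rng(42)
GRID = (8, 6)                     # electrode rows x columns (invented)
THRESHOLD = 5.0                   # minimum change counted as a touch (invented units)
baseline = np.full(GRID, 100.0)   # readings for an untouched screen

def detect_touch(readings):
    """Return (row, col) of the strongest disturbance above threshold, or None."""
    delta = np.abs(readings - baseline)    # conductive objects perturb the field
    if delta.max() < THRESHOLD:
        return None                        # e.g. a plain, nonconductive fingernail
    return tuple(np.unravel_index(delta.argmax(), delta.shape))

# A bare nail barely changes the readings, so nothing registers...
print(detect_touch(baseline + rng.normal(0, 0.5, GRID)))   # -> None

# ...while a fingertip (or, per the story, a conductive polish) produces a clear shift.
touched = baseline.copy()
touched[3, 2] -= 12.0
print(detect_touch(touched))                                # -> (3, 2)
```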
“It’s really hard to use your phone,” says Manasi Desai, an undergraduate student studying chemistry and biology at Centenary College of Louisiana in Shreveport. Changing which part of the finger people type with can cause typing errors, at least until users adjust to the new angle.
To remedy this common inconvenience, Desai and her research adviser, organometallic chemist Joshua Lawrence, mixed several different additives into commercially available clear nail polish. Two of those additives, ethanolamine and taurine, each resulted in a clear polish formulation that, in a blob held with tweezers, could activate the touchscreen. While ethanolamine has some toxicity, taurine is a common dietary supplement that occurs naturally in the human body.
“One of our major goals was to make it clear and colorless, so that you could apply it over any manicure or even on your bare nails,” Desai says. Desai shared the findings on March 23 at the American Chemical Society’s spring meeting in Atlanta.
The modified nail polish uses acid-base chemistry to activate the touchscreen, the team suspects, though more research is needed to confirm. When in contact with the screen’s electric field, the added molecules probably shuffle protons between themselves, moving just enough charge to affect the field and register as touch.
The lacquer isn’t ready to hit the shelves just yet, Lawrence says. Right now, painting the polish on a fingernail doesn’t leave enough additive behind to activate the screen. In future work, the duo plans to focus on improving the formula’s performance in thin coats on fingernails, possibly by getting more taurine into the polish.
Journal Reference: M. Desai and J. Lawrence. Modification of nail polish formulations for conductivity to operate capacitive touchscreens. ACS Spring 2026, Atlanta, GA. Presented March 23, 2026.
You might already use a VPN to keep your online activity private, and you may naturally assume you’re adding an extra layer of protection by doing so. But lawmakers are now raising the possibility that, in some cases, it could actually affect your rights against government spying on your data.
As Wired reports, in a letter sent to the Director of National Intelligence, Tulsi Gabbard, several Democratic lawmakers are asking whether Americans who use VPNs could be treated as foreigners under US law. If so, that could mean losing certain protections against warrantless surveillance.
The issue comes down to how VPNs handle your connection. By routing traffic through servers that are often located overseas, your activity can appear to originate from another country. For some users, that’s the point. But under US intelligence rules, communications from an unknown location may be treated as foreign, which carries fewer safeguards. Section 702 of the Foreign Intelligence Surveillance Act allows agencies to collect large amounts of data targeting people outside the US. In that context, an American using a VPN server abroad could, at least in theory, look no different from a foreign user.
The lawmakers are not alleging that this is already happening, but argue that the lack of transparency is the concern. Americans spend billions each year on VPN services, many of which route traffic internationally, yet there is little public guidance on how that might affect their rights. It would also create a bit of a contradiction, since agencies like the FBI and NSA have previously encouraged VPN use as a way to improve privacy. The question now is whether that advice could come with some serious trade-offs.
We’ll have to wait to find out if the government responds to the letter. Until then, if you live in the land of the free, it’s worth being mindful of how routing your traffic through an overseas VPN server could affect how your data is treated.
Six Democratic lawmakers are pressing the nation's top intelligence official to publicly disclose whether Americans who use commercial VPN services risk being treated as foreigners under United States surveillance law—a classification that would strip them of constitutional protections against warrantless government spying.
In a letter sent Thursday to Director of National Intelligence Tulsi Gabbard, the lawmakers say that because VPNs obscure a user's true location, and because intelligence agencies presume that communications of unknown origin are foreign, Americans may be inadvertently waiving the privacy protections they're entitled to under the law.
Several federal agencies, including the FBI, the National Security Agency, and the Federal Trade Commission, have recommended that consumers use VPNs to protect their privacy. But following that advice may inadvertently cost Americans the very protections they're seeking.
The letter was signed by members of the Democratic Party's progressive flank: Senators Ron Wyden, Elizabeth Warren, Edward Markey, and Alex Padilla, along with Representatives Pramila Jayapal and Sara Jacobs.
The concern centers on how intelligence agencies treat internet traffic routed through commercial VPN servers, which may be located anywhere in the world. Millions of Americans use these services routinely, whether to access region-restricted content like overseas sports broadcasts or to protect their privacy on public Wi-Fi networks. Because VPN servers commingle traffic from users in many countries, a single server—even one located in the United States—may carry communications from foreigners, potentially making it a target for surveillance under authorities that allow the government to secretly compel service from US service providers.
Under a controversial warrantless surveillance program, the US government intercepts vast quantities of electronic communications belonging to people overseas. The program also sweeps in enormous volumes of private messages belonging to Americans, which the FBI may search without a warrant, even though it is authorized to target only foreigners abroad.
The program, authorized under Section 702 of the Foreign Intelligence Surveillance Act, is set to expire next month and has become the subject of a fierce battle in Congress over whether it should be renewed without significant reforms to protect Americans' privacy.
Thursday's letter points to declassified intelligence community guidelines that establish a default presumption at the heart of the lawmakers' concern: Under the NSA's targeting procedures, a person whose location is unknown is presumed to be a non-US person unless there is specific information to the contrary. Department of Defense procedures governing signals intelligence activities contain the same presumption.
[...] Americans spend billions of dollars each year on commercial VPN services, many offered by foreign-headquartered companies that route traffic through servers located overseas. The letter notes that these services are widely advertised as privacy tools, including by elements of the US government itself.
Despite the scale of the market, the letter suggests consumers have been given no meaningful guidance on how to protect themselves.
The lawmakers urge Gabbard to "clarify what, if anything, American consumers can do to ensure they receive the privacy protections they are entitled to under the law and the US Constitution."
AI accountability is worryingly low:
Despite rapid AI adoption, new research from ISACA suggests many businesses might be going in blindly – more than half (59%) of UK businesses wouldn't even know how quickly they could stop AI during a crisis.
Only around one in five (21%) say they feel confident stopping an AI system within 30 minutes, highlighting major safety gaps.
And it's not just shutting them down that's a problem – not even half (42%) say they could explain an AI failure to leadership or regulators.
ISACA explained that the gaps aren't just a concern for business operations and reputation, but also from a legislative standpoint: the EU AI Act requires explainability and accountability.
Part of the failure comes down to unclear accountability, with 20% of workers unsure of who is responsible for AI failures. Poor visibility is also a contributing factor, with one in three organizations not requiring AI use at work to be disclosed, which ISACA says creates a nightmare of blind spots.
The report explains that businesses are currently treating this as a technical problem, when they should instead treat it as an organization-wide governance challenge. "Truly closing the gap can’t be done by process changes alone," Chief Global Strategy Officer Chris Dimitriadis wrote. "Rather, it will require professionals who have the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle."
Looking ahead, businesses are being urged to define accountability at the senior level and to start rolling out better visibility and auditing. Besides this, they must also build AI incident response into their strategies and factor it into their broader cybersecurity postures.
With only 38% of respondents identifying the board or an exec as being accountable in the event of an AI incident, it's clear more needs to be done to disseminate information and processes through the workforce.
A hidden feature of water, long submerged, has finally been brought to the surface.
New experiments have revealed supercooled water’s critical point — a specific pressure and temperature at which two distinct phases of water turn into one. The critical point appears at about 210 kelvins (around –63° Celsius) and about 1,000 times the pressure exerted by Earth’s atmosphere at sea level, researchers report in the March 26 Science. The discovery may help explain certain odd properties of the ubiquitous, all-important liquid.
Water is already known to have a critical point at high temperature. At about 374° C and 218 times atmospheric pressure, the distinction between the liquid and gas phases is erased. Beyond that critical point, water is what’s called a supercritical fluid.
Scientists had long predicted a second critical point existed at low temperature, in water that is supercooled, meaning that it temporarily remains liquid below its normal freezing point. “For 20 years or more, many people were waiting to see direct evidence … based on experiments,” says physicist Nicolas Giovambattista of Brooklyn College in New York, who was not involved with the research. “It’s amazing that it finally came.”
Certain odd properties of water tipped scientists off to this possibility. For example, most liquids increase in density upon cooling. But water increases in density down to about 4° C where it reaches a maximum. Then it reverses course: Further cooling makes water less dense. And water’s heat capacity, the amount of energy required to increase its temperature a given amount, does a similar about-face.
Scientists suspected the flip-flopping properties could be a sign of a critical point lurking at lower temperature.
[...] Experiments at pressures and temperatures close to the predicted critical point are extremely challenging. That realm is known as “no man’s land” because supercooled water freezes almost instantaneously there. So chemical physicist Anders Nilsson of Stockholm University and colleagues turned to sophisticated tactics. “We have to do everything very quickly,” Nilsson says.
[...] The researchers’ results are impressive, says physicist Greg Kimmel of Pacific Northwest National Laboratory in Richland, Wash. “The data they present shows a pretty clear picture,” matching the critical point hypothesis. He notes, however, that the work assumes that the liquid has reached a state of equilibrium, meaning that flows of matter and energy have settled down. And since the measurements are taken so quickly, it’s unclear if that’s the case.
For Giovambattista, who has spent his career performing computer simulations of water and this critical point, just seeing it in the real world is a relief. “It’s kind of inner peace.”
Journal Reference: Seonju You, Marjorie Ladd-Parada, Kyeongmin Nam, et al., Experimental evidence of a liquid-liquid critical point in supercooled water, Science, Vol. 391, No. 6792, 2026. https://doi.org/10.1126/science.aec0018
https://blog.thereallo.dev/blog/decompiling-the-white-house-app
The official White House Android app has a cookie/paywall bypass injector, tracks your GPS every 4.5 minutes, and loads JavaScript from some guy's GitHub Pages.
The White House released an app on the App Store and Google Play. They posted a blog about it. "Unparalleled access to the Trump Administration."
It took a few minutes to pull the APKs with ADB and throw them into JADX.
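For anyone wanting to retrace that first step, here is a rough sketch of the pull-and-decompile workflow wrapped in Python. The package name is a placeholder (the post doesn't give it here), and it assumes adb and jadx are installed and a device or emulator with the app is attached.

```python
import subprocess
from pathlib import Path

# Hypothetical package name -- substitute the real one from `adb shell pm list packages`.
PACKAGE = "gov.whitehouse.app"
OUT_DIR = Path("decompiled")

def run(*cmd: str) -> str:
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Ask the device where the APK(s) for the package live.
paths = [line.removeprefix("package:").strip()
         for line in run("adb", "shell", "pm", "path", PACKAGE).splitlines()]

# 2. Pull each APK (base plus any split APKs) to the current directory.
local_apks = []
for remote in paths:
    run("adb", "pull", remote, ".")
    local_apks.append(Path(remote).name)

# 3. Decompile with JADX; in a React Native app the logic mostly lives in the
#    assets/ Hermes bytecode bundle rather than the Java sources.
for apk in local_apks:
    run("jadx", "-d", str(OUT_DIR / Path(apk).stem), apk)

print(f"Decompiled {len(local_apks)} APK(s) into {OUT_DIR}/")
```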
Here is everything I found.
It's a React Native app built with Expo (SDK 54), running on the Hermes JavaScript engine. The backend is WordPress with a custom REST API. The app was built by an entity called "forty-five-press" according to the Expo config.
The actual app logic is compiled into a 5.5 MB Hermes bytecode bundle. The native Java side is just a thin wrapper.
The Drone Swarm is Coming, and NATO Air Defenses Are Too Expensive to Cope
NATO is unprepared to deal with attacks by cheap, mass-produced drones and urgently needs layered, affordable air defense systems to counter the threat, taking a cue from the experience gained by Ukrainian forces over the past four years.
Experts at the Center for European Policy Analysis (CEPA) recently held a debate on the lessons armed forces should take from the ongoing conflict in the Middle East, highlighting that low-cost drones are reshaping how wars are fought.
CEPA describes itself as a nonpartisan, public policy institution, headquartered in Washington, DC.
The takeaway from Iran's tactics is that adversaries are likely to combine precision weapons with cheap, mass-produced drones to overwhelm air defense systems so that the precision weapons can get through. Managing this threat means developing low-cost defensive weapons, produced and used at scale, to complement the interceptor missiles costing millions that are built to target aircraft and ballistic missiles.
"The question is no longer how just to defeat a threat. The question is how to do so to sustainable cost and scale," said Gordon "Skip" Davis, former deputy assistant secretary general for NATO and previously director of operations for US European Command.
He noted a decisive shift in the character of war: Iran has shown that relatively unsophisticated weapons like the Shahed-type drones, which cost $20,000 each, can impose real operational stress on even the most advanced forces such as the US and its regional allies.
Ukraine is ahead of NATO in one critical area – the ability to produce and deploy low-cost systems at scale. It is manufacturing tens of thousands of interceptor drones annually, and delivering them to frontline units at rates exceeding 1,500 per day.
Instead of relying solely on expensive interceptors, Ukraine has built a layered system in which cheap one-way interceptor drones - costing as little as $2,000 - now account for the majority of drone takedowns across the country. This is typified by the small Bullet model produced by defense firm General Cherry (General Chereshnya), which can reach speeds of up to 310 km per hour (192 mph), engage targets at a distance of up to 20 km (12 miles), and operate at altitudes of up to 6 km (about 4 miles).
Davis said NATO should take several lessons from this: integrated air and missile defenses must be layered and cost-effective, not reliant purely on high-end interceptors. NATO must field attritable and autonomous systems en masse, not in niche roles, and this means having the industrial capacity to produce them and "magazine depth" – meaning having stockpiles available.
"The overarching conclusion, in my view, is that NATO must move from a model built around technological superiority to one built around integrated systems, scalable production and rapid adaptation," he stated.
Jason Israel, senior fellow for Defense Technology Initiative at CEPA, said software and interoperability were another vital piece of the puzzle. By this he means the various drones operated will need to integrate with command-and-control (C2) systems to coordinate operations.
"That drone that you're using, or the unmanned system that you're using, what software is behind it? Does the software allow it to be interoperable with headquarters?" he asked.
"As we've seen on the US side, the scale of the hardware has not quite gotten there yet, and software, as we know, is relatively easy to scale, but we're not seeing interoperability between the systems to the point that we would need in order to fight as an alliance in the future, and I think that's one of the big questions that I have."
"We can't have 200 different types of drones in the future that don't speak to each other," he added.
Humans also remain a key part of the command chain, and Federico Borsari, CEPA Fellow for Transatlantic Defense and Security, made the point that operators need the right training to respond appropriately.
"The operator is an important task, but needs to be very prepared for any kind of contingency. And so training and rehearsal of realistic situations is increasingly important, and I think this aspect is often overlooked."
Borsari noted that NATO countries are "very interested" in integrating Ukrainian technologies, but even more interested in benefiting from Ukrainian experience.
"Ukrainian forces started to use commercially derived unmanned systems around 2015, when volunteer organizations were helping Ukraine's depleted forces to resist Russian aggression in the Donbas region," he said.
"Over the years they have developed extremely sophisticated and effective tactics, techniques, and procedures, and also concepts of operations that are really the treasure trove at this point for NATO countries."
However, Davis warned that there does not seem to be any great sense of urgency for all this at the political level in many Western nations.
In terms of doctrine, NATO countries also need to be thinking about where the big adversaries, Russia and China, are going with respect to autonomous systems.
"We've got to think about, how do we enable a force that can employ systems that are integrated, that have the right kinds of algorithms, the right kind of computing support, to be able to do the right kinds of targeting with minimal human intervention, and have the capability for rapid in-the-field software changes like we see going on in Ukraine right now," Davis said.
The conclusion is that NATO countries need to radically overhaul and scale up their drone defenses, taking lessons from Ukraine. This doesn't just apply to frontline forces, as the Ukraine and Iran conflicts demonstrated that some nations have no qualms about targeting civilian infrastructure.
Last month, the UK and a handful of European allies launched a program to develop low-cost air defense systems. Low-Cost Effectors & Autonomous Platforms (LEAP) will initially focus on an affordable surface-to-air weapon to counter the threat of drones and missiles, and is aiming to produce something by 2027.
The UK last year beefed up its meager air defenses with the purchase of six new Land Ceptor anti-aircraft missile systems, capable of intercepting cruise missiles, aircraft, and drones.
[Source]: TechCrunch
If Google's AI researchers had a sense of humor, they would have called TurboQuant, the new, ultra-efficient AI memory compression algorithm announced Tuesday, "Pied Piper" — or, at least that's what the internet thinks.
The joke is a reference to the fictional startup Pied Piper that was the focus of HBO's "Silicon Valley" TV series that ran from 2014 to 2019.
The show followed the startup's founders as they navigated the tech ecosystem, facing challenges like competition from larger companies, fundraising, technology and product issues, and even (much to our delight) wowing the judges at a fictional version of TechCrunch Disrupt.
Pied Piper's breakthrough technology on the TV show was a compression algorithm that greatly reduced file sizes with near-lossless compression. Google Research's new TurboQuant is also about extreme compression without quality loss, but applied to a core bottleneck in AI systems. Hence, the comparisons.
Google Research described the technology as a novel way to shrink AI's working memory without impacting performance. The compression method, which uses a form of vector quantization to clear cache bottlenecks in AI processing, would essentially allow AI to remember more information while taking up less space and maintaining accuracy, according to the researchers.
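Google hasn't spelled out TurboQuant's internals here, but the general idea of vector quantization is easy to sketch: learn a small codebook of representative vectors, then store each cached vector as the index of its nearest codeword instead of the full floating-point values. The toy below (plain k-means on random data) illustrates that trade-off only; it is not Google's method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for cached activation vectors (e.g. a KV-cache slice): 1,024 vectors of 64 floats.
cache = rng.normal(size=(1024, 64)).astype(np.float32)

def train_codebook(x, k=256, iters=10):
    """Plain k-means (Lloyd's algorithm): learn k representative codewords."""
    codebook = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = x[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

codebook = train_codebook(cache)

# Encode: each 256-byte vector (64 x float32) collapses to a single uint8 codeword index.
dists = np.linalg.norm(cache[:, None, :] - codebook[None, :, :], axis=-1)
codes = dists.argmin(axis=1).astype(np.uint8)

# Decode: look the codeword back up -- a lossy but much smaller representation.
restored = codebook[codes]
mse = float(np.mean((cache - restored) ** 2))
print(f"{cache.nbytes} bytes -> {codes.nbytes} bytes (plus a {codebook.nbytes}-byte codebook), MSE {mse:.3f}")
```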
GNU inetutils Telnetd CVE-2026-32746 Pre-Auth RCE:
A long, long time ago, in a land free of binary exploit mitigations, when Unix still roamed the Earth, there lived a pre-authentication Telnetd vulnerability.
In fact, this vulnerability was born so long ago (way back in 1994) that it may even be older than you. To put the timespan in perspective: it came into existence the year before the seminal movie Hackers hit theaters.
That was so long ago that RISC-V was still a distant dream.
Come to think of it, maybe it was even the product of Zero Cool himself?
Anyway. Recently, this vulnerability was brutally put to rest.
[...] CVE-2026-32746, discovered by the DREAM Security Research Team, is a BSS-based buffer overflow that allows an attacker to corrupt roughly 400 bytes of adjacent variables.
It resides in the LINEMODE SLC (Set Linemode Characters) negotiation handler. While strictly speaking it affects 'just' GNU inetutils, most vendors have based their Telnetd implementations on the same code, making the blast radius vast and somewhat difficult to estimate. It definitely includes all the major Linux distributions (we checked).
With a vulnerability like this, we expected the Internet to explode with excitement - yet it's been almost a week now with no good analysis. We thought we might as well publish where we got to.
We'll go through a few things - how we isolated the vulnerability, what it enables attackers to do (and under what circumstances), and we'll talk about why this particular vulnerability is more of a Pandora's box to exploit than you might think.
Google has begun testing a feature that changes the headlines of published articles without notifying publishers, sparking concerns among media executives:
Google is courting fresh controversy after starting to test a feature that rewrites article headlines without seeking permission or even notifying the publishers. The trial expands on earlier artificial intelligence (AI) tools, such as AI Overviews, which condense articles into short summaries.
Media leaders voiced outrage at the complete lack of communication or approval, with one executive calling it "another overreach by Google taking liberties with content without permission." They regard headlines as a core part of "editorial judgment" and essential to journalistic integrity. Changing them without disclosure could create serious problems, including a loss of reader confidence if the new versions turn out to be inaccurate or misleading.
Marc McCollum of Raptive, which partners with thousands of publishers, questioned how far the practice might go. "Would they also test changing the lead that shows up in Google? Would they consider imagery that didn't come from the original publisher?" McCollum asked, expressing concern that Google is altering original work excessively.
From The Verge:
What we are seeing is a "small" and "narrow" experiment, one that's not yet approved for a fuller launch, Google spokespeople Jennifer Kutz, Mallory De Leon, and Ned Adriance tell The Verge. They would not say how "small" that experiment actually is. Over the past few months, multiple Verge staffers have seen examples of headlines that we never wrote appear in Google Search results — headlines that do not follow our editorial style, and without any indication that Google replaced the words we chose. And Google says it's tweaking how other websites show up in search, too, not just news.
https://mashable.com/article/nasa-x-59-supersonic-jet-test-problem
Nothing seemed amiss as NASA's experimental X-59 supersonic jet touched down after its second test in the air, smoothly coasting onto the runway.
But the sleek, needle-nosed airplane had completed only nine minutes in the air on Friday, March 20, before a cockpit warning light forced an early landing. That warning was separate from a caution light that occurred during an earlier takeoff attempt just before 10 a.m. PT, said Cathy Bahm, project manager at NASA's Armstrong Flight Research Center.
The brief flight, which left Edwards Air Force Base in California at 10:54 a.m. PT, marked only the second time the aircraft had flown. While the team originally planned for about an hour, leaders stressed that even short flights provide new data for moving the project forward.
[...] "Sometimes it's easy to forget that building this kind of experimental aircraft means creating something that never existed before," Pearce said during a news conference. "As far as X-planes go, it's not unusual."
The X-59 is part of a long-term effort to change how fast commercial airplanes fly over land. Traditional supersonic aircraft create a loud boom when they break the sound barrier, which is why the U.S. government bans routine supersonic passenger flights over populated areas. NASA and its contractor, Lockheed Martin, built the X-59 to fly faster than sound while producing only a "thump," with the goal of providing regulators and the industry with the evidence needed to reconsider the restrictions.
[...] Residents below didn't hear the X-59's thump during either of the first two test flights — and they weren't supposed to. The plane never flew fast enough either time to make it. Both flights intentionally stayed at subsonic speeds. NASA is using these early tests to shake out systems and watch how the plane handles.
[...] He described the aircraft as handling just like its simulators. Over hundreds of hours of test runs in the simulator, Less and other test pilots had practiced with the unconventional vision system that combines images from cameras into a high-definition display. But this was his first time flying without the traditional front window.
The long nose shape that helps soften the sonic boom doesn't leave room for a standard cockpit windscreen. But in some cases, the system offers better visibility than the naked eye, he said. If a pilot is facing into the sun, for example, image processing can reduce glare and improve contrast.
"It really felt comfortable," he said. "Even though I wasn't seeing out the front, I could see out the sides and match that up."
More than 100 test flights are planned. NASA intends to gradually push toward higher, faster flights before testing those muffled booms over towns.
The fork removes the birthDate field that systemd added last week in response to age verification laws.
systemd's latest move has not helped its reputation among the skeptics. Last week, developers merged a pull request adding a birthDate field to its user records, tied to age verification laws in California, Colorado, and Brazil.
Earlier, we covered what that actually means, but to recap, the field is optional, can only be set by an administrator, and systemd itself does nothing with the data. It is simply a standardized field in the user record file that other projects like xdg-desktop-portal can build age verification compliance on top of—distros that do not need it can ignore it entirely.
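For the curious, a systemd JSON user record is just a small JSON document, and the new field is one more optional key inside it. The sketch below shows, purely as a hypothetical illustration, what a record carrying a birth date might look like and how a downstream consumer could read it; the birthDate key comes from the PR title, but the surrounding keys and the date format shown are assumptions, not the project's documented schema.

```python
import json
from datetime import date

# Hypothetical example of a JSON user record carrying a birth date.
# Field names other than "birthDate" are assumptions about the schema,
# and the date format shown here is likewise an assumption.
record_text = """
{
    "userName": "alice",
    "realName": "Alice Example",
    "birthDate": "1990-04-01"
}
"""

record = json.loads(record_text)

# A consumer (say, a desktop portal doing age checks) might do something like this:
birth = record.get("birthDate")          # optional -- may simply be absent
if birth is None:
    print("no birth date recorded; nothing to enforce")
else:
    y, m, d = map(int, birth.split("-"))
    today = date.today()
    age = today.year - y - ((today.month, today.day) < (m, d))
    print(f"recorded age is {age}; systemd itself does nothing with this value")
```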
But "optional" has not been enough to stop people from treating it as a line being crossed, and now a solo developer has responded the way the open source community usually reacts: by forking.
[...] Compared to mainline systemd, the fork changes 12 files across 5 commits, all focused on scrubbing out everything related to the birthDate addition. That means not just the field itself but also the option to set a birth date via homectl, the relevant man page entries, display code, and tests.
Though, as of this writing, it was 37 commits behind systemd, so that is something to keep in mind if you are hoping to deploy it on a general-use or production system.
Jeffrey also maintains a companion repository, systemd-suite, which is meant for testing the fork. So, while this is very much a one-person project, there seems to be at least some technical groundwork behind it beyond the birthDate revert.
[Source]: IT'S FOSS
Previously: Age Checks Creep Into Linux, systemd Locks It in, Developer Defends Himself
He then seemed to slightly walk back the claim.
On a Monday episode of the Lex Fridman podcast, Nvidia CEO Jensen Huang made a hot-button statement: "I think we've achieved AGI."
AGI, or artificial general intelligence, is a vaguely defined term that has incited a lot of discussion by tech CEOs, tech workers, and the general public in recent years, as it typically denotes AI that's equal to or surpasses human intelligence. In recent months, tech leaders have tried to distance themselves from the term and create their own terminology that they view as less over-hyped, more useful, and more clearly defined (although the new phrases they've come up with essentially mean the same thing as AGI). The term has also been the subject of key clauses in big-ticket contracts between companies like OpenAI and Microsoft, upon which a significant amount of money may hinge.
[...] But Huang then seemed to slightly walk back his earlier claims, saying, "A lot of people use it for a couple of months and it kind of dies away. Now, the odds of 100,000 of those agents building Nvidia is zero percent."
[Source]: The Verge
https://www.theregister.com/2026/03/24/foss_age_verification/?td=rt-3a
From TFA:
After weeks of debate, code to record user age was finally merged into the Linux world's favorite system management daemon.
Pull request #40954 to the systemd project is titled "userdb: add birthDate field to JSON user records." It's a new function for the existing userdb service, which adds a field to hold the user's date of birth.
[...] The change comes after the recent release of systemd 260 but unless it is reverted for some reason, it will be part of systemd 261. One of the justifications is to facilitate the new parental controls in Flatpak, which are still in the draft stage.
[...] The TBOTE findings suggest that Meta is the biggest donor behind the lobbying for these age-verification laws and the App Store Accountability Act. TBOTE claims it has directly traced more than $25 million, and that Meta could have spent upward of $2 billion on this over the last year. It also points to €10 million-plus spent lobbying in Europe.
In the US, the main group pushing for these laws is the relatively young Digital Childhood Alliance (DCA). As right-wing think tank the "Institute for Family Studies" reported a year ago, this was assembled by over 50 conservative groups. Six months later, in July 2025, Bloomberg also reported that Meta was funding the DCA. For such a young and small organization, the DCA certainly seems to have had a rapid and almost disproportionate impact.
Nuff said.
In March of 2026, systemd, the init system that boots most modern Linux distributions, merged a pull request adding a birthDate field to its user database. The stated purpose was compliance with California's AB-1043, Colorado's SB26-051, and Brazil's Lei 15.211/2025, a wave of age verification laws requiring operating systems to collect birth dates from users at account setup, then feed that data to app stores via a real-time API. The PR was submitted by a contributor using the GitHub handle dylanmtaylor. Within days it had 945 comments and was locked by maintainers. Someone opened a revert PR. Lennart Poettering closed it without merging on March 19th, saying the field is optional and systemd "enforces zero policy." The birthDate field is still in systemd (systemd PR #40954; revert PR #41179).
The lasting damage was knowing it could happen at all: that a single contributor with no stated organizational backing could submit compliance infrastructure for surveillance law directly into the software that boots your computer, get it merged by two Microsoft employees, and have the creator of systemd personally block the removal.
[...] Nobody paid him to do this. He's a cloud engineer who read the law and decided someone needed to implement it.
The same week, he submitted draft pull requests to Canonical's ubuntu-desktop-provision repository, with PR #1338 to add a birthDate field to Ubuntu's user provisioning and PR #1339 to write that birth date into AccountsService on installation. Canonical's VP of Engineering Jon Seager publicly distanced the company, saying there are "no concrete plans" to change Ubuntu in response to AB-1043. A separate Ubuntu developer, Aaron Rainbolt, proposed a different approach on the Ubuntu mailing list: an optional D-Bus interface called `org.freedesktop.AgeVerification1` that distros could implement however they choose, rather than storing a raw birthdate in userdb. The PRs remain as drafts (9to5Linux has coverage).
Taylor also opened PR #4290 on the archinstall repository proposing a required birthDate prompt during user creation, stored as a systemd userdb JSON drop-in. Arch Linux maintainer Torxed locked the discussion, said he was waiting for an official organizational stance and legal counsel, and left it open. As of this writing it has not been merged (archinstall PR #4290).
Here is the freedesktop merge request with lots of back-and-forth in the comments.
Dylan M. Taylor is not a household name in the Linux world. At least, he wasn't until recently.
The software engineer and longtime open source contributor has quietly built a respectable track record over the years: writing Python code for the Arch Linux installer, maintaining packages for NixOS, and contributing CI/CD pipelines to various FOSS projects.
But a recent change he made to systemd has pushed him into the spotlight, along with a wave of intense debate.
At the center of the controversy is a seemingly simple addition Dylan made: an optional birthDate field in systemd's user database.
The change, intended to give Linux distributions a lightweight, optional mechanism to comply with emerging US state laws on age verification, was immediately met with fierce resistance from parts of the Linux community. Critics saw it not merely as a technical addition, but as a symbolic capitulation to government overreach. A crack in the philosophical foundation of freedom that Linux is built on.
What followed went far beyond civil disagreement. Dylan revealed that he faced harassment, doxxing, death threats, and a flood of hate mail. He was forced to disable issues and pull request tabs across his GitHub repositories.
In a blog post, he has argued that the change is not "age verification":
A common misconception about this change is that it introduces "age verification" to Linux. It doesn't. None of the PRs I submitted involve ID checks, facial recognition, or third-party verification services. You can enter any value, including January 1st, 1900.
So we corresponded with Dylan over email to ask him about the controversy, the code change, and the personal toll it has taken.
What do you Soylentils think: a moral purist? Compliance warrior? Microsoft or Meta spy? A useful idiot? Or some linear combination of the above?
https://go.theregister.com/feed/www.theregister.com/2026/03/22/cern_eggheads_burn_ai_into/
Like the major league pitcher who comes to his kid's take-your-parent-to-school day, CERN's Thea Aarrestad gave a presentation at the virtual Monster Scale Summit earlier this month about meeting a set of ultra-stringent requirements that few of her peers may ever experience.
Aarrestad is an assistant professor of particle physics at ETH Zurich. At CERN (European Organization for Nuclear Research), she uses machine learning to optimize data collection from the Large Hadron Collider (LHC). Her specialty is anomaly detection, a core component of any proper observability system.
Each year the LHC produces 40,000 exabytes of unfiltered sensor data alone, or about a fourth of the size of the entire Internet, Aarrestad estimated. CERN can't store all that data. As a result, "We have to reduce that data in real time to something we can afford to keep."
By "real time," she means extreme real time. The LHC detector systems process data at speeds up to hundreds of terabytes per second, far more than Google or Netflix, whose latency requirements are also far easier to hit as well.
Algorithms processing this data must be extremely fast," Aarrestad said. So fast that decisions must be burned into the chip design itself.
Contained in a 27-kilometer ring located a hundred meters underground, straddling the border between Switzerland and France, the LHC smashes subatomic particles together at near-light speeds. The resulting collisions are expected to produce new types of matter that fill out our understanding of the Standard Model of particle physics — the operating system of the universe.
At any given time, there are about 2,800 bunches of protons whizzing around the ring at nearly the speed of light, separated by 25-nanosecond intervals. Just before they reach one of the four underground detectors, specialized magnets squeeze these bunches together to increase the odds of an interaction. Nonetheless, a direct hit is incredibly rare: out of the billions of protons in each bunch, only about 60 pairs actually collide during a crossing.
When particles do collide, their energy is converted into a mass of new outgoing particles (E=mc² in the house!). These new particles "shower" through CERN's detectors, making traces "which we try to reconstruct," she said, in order to identify any new particles produced in the ensuing melee.
Each collision produces a few megabytes of data, and there are roughly a billion collisions per second, resulting in about a petabyte of data every second (roughly the size of the entire Netflix library).
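As a sanity check on those figures, a quick back-of-the-envelope calculation (using the rounded numbers from the talk) shows that a couple of megabytes per collision at a billion collisions per second does land in petabyte-per-second territory, and over a year it adds up to the tens-of-thousands-of-exabytes scale quoted earlier.

```python
# Back-of-the-envelope check on the quoted LHC data rates (rough, rounded figures).
MB = 1e6          # bytes
PB = 1e15
EB = 1e18

bytes_per_collision = 1.5 * MB      # "a few megabytes" per collision
collisions_per_sec = 1e9            # "roughly a billion collisions per second"

rate = bytes_per_collision * collisions_per_sec        # bytes per second
print(f"raw rate  ~ {rate / PB:.1f} PB/s")             # about a petabyte-plus per second

seconds_per_year = 3.15e7
yearly = rate * seconds_per_year
print(f"per year  ~ {yearly / EB:,.0f} EB")            # tens of thousands of exabytes,
                                                       # the same ballpark as the 40,000 EB figure
```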
Rather than try to transport all this data up to ground level, CERN found it more feasible to create a monster-sized edge compute system to sort out the interesting bits at the detector-level instead.
"If we had infinite compute we could look at all of it," Aarrestad said. But less than 0.02% of this data actually gets saved and analyzed. It is up to the detectors themselves to pick out the action scenes.
The detectors, built on ASICs, buffer the captured data for up to 4 microseconds, after which the data "falls over the cliff," forever lost to history if it is not saved.
Making that decision is the "Level One Trigger," an aggregate of about 1,000 FPGAs that digitally reconstruct each event from the reduced event information provided by the detector over fiber optic lines at about 10 TB/sec. The trigger produces a single value: either "accept" (1) or "reject" (0).
Making the decision to keep or lose a collision is the job of the anomaly-detection algorithm. It has to be incredibly selective, rejecting more than 99.7 percent of the input outright. The algo, affectionately named AXOL1TL, is trained on the "background" — the areas of the Standard Model that have largely been sussed out already. It knows the typical topology of a standard collision, allowing it to instantly flag events that fall outside those boundaries. As Aarrestad put it, it's hunting for "rare physics."
The algorithm must make a decision within 50 nanoseconds. Only about 0.02% of all collision data, or about 110,000 events per second, make the cut, and are subsequently saved and transported to ground level. Even this slimmed-down throughput results in terabytes per second being sent up to the on-ground servers.
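The selection itself boils down to a hard rate budget: score every crossing, then keep only the tiny fraction of the highest-scoring ones the downstream systems can absorb. The sketch below mimics that accept/reject decision offline with made-up anomaly scores; the real AXOL1TL does this in FPGA logic within tens of nanoseconds, so treat it purely as an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up anomaly scores for a batch of bunch crossings; higher = stranger.
scores = rng.exponential(scale=1.0, size=1_000_000)

TARGET_ACCEPT = 2e-4   # keep roughly 0.02% of events, per the figures above

# Calibrate a cut on background-like data so only the target fraction passes...
threshold = np.quantile(scores, 1.0 - TARGET_ACCEPT)

# ...then the per-event decision is a single comparison: accept (1) or reject (0).
decisions = (scores > threshold).astype(np.uint8)
print(f"threshold = {threshold:.2f}, accepted {decisions.sum()} of {len(scores)} events "
      f"({decisions.mean():.4%})")
```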
Once on the surface, the data goes through a second round of filtering, called the "High Level Trigger," which again discards the vast majority of captured collisions, identifying only about 1,000 interesting collisions from the 100,000 events per second that come through the pipe. This system uses 25,600 CPUs and 400 GPUs to reproduce the original collisions and analyze the results, and it produces about a petabyte a day.
"This is the data we will actually analyze," Aarrestad said.
From there the data is replicated across 170 sites in 42 countries, where it can be analyzed by researchers worldwide, with an aggregate power of 1.4 million computer cores.
The LHC detectors are a hothouse environment rarely encountered by AI. So much so that the CERN engineers had to create their own toolbox.
Sure, there are already plenty of real-time ML frameworks and benchmarks for consumer applications such as noise-cancelling headphones – things like MLPerf Mobile and MLPerf Tiny. But they don't come anywhere close to supporting the streaming data rates and ultra-low latencies CERN requires.
So CERN trained machine learning models "to be small from the get-go," she said. They were quantized, pruned, parallelized, and distilled down to the essential knowledge only. Every operation on an FPGA is quantized. Unique bitwidths were defined for each parameter, and the models were made differentiable so they could be optimized using gradient descent.
The engineering team developed a transpiler, HLS4ML, that rewrites the model as C++ code targeted at specific platforms, so it can run on an accelerator or system-on-a-chip, on a custom FPGA, or even be used to "print silicon" as an ASIC.
The detector architecture breaks from the traditional Von Neumann model of memory-processor-I/O. Nothing is sequentially-driven. Rather it is based on the "availability of data," she said. "As soon as this data becomes available, the next process will start."
Most crucially, decisions must be made on-chip – nothing can be handed off to even very fast memory. Every piece of hardware is tailored for a specific model. Decisions take place at design time. Each layer of FPGAs is a separate compute unit.
A good chunk of the on-chip silicon is taken up by pre-calculations, which save the hardware from having to redo each calculation anew. The output of every possible input is referenced in a lookup table.
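The lookup-table trick is easy to show in miniature: if an input is quantized to, say, 8 bits, the function's output can be precomputed once for all 256 possible codes, and run-time evaluation becomes a single indexed load rather than arithmetic. The bit width and the function below are arbitrary examples, not CERN's actual design.

```python
import numpy as np

BITS = 8                                   # arbitrary example bit width
LEVELS = 2 ** BITS

# Map each 8-bit code to a real value in [-4, 4), then precompute tanh once.
grid = np.linspace(-4.0, 4.0, LEVELS, endpoint=False)
tanh_lut = np.tanh(grid)                   # 256 entries, computed at "design time"

def quantize(x: np.ndarray) -> np.ndarray:
    """Map real inputs to the nearest 8-bit code."""
    idx = np.round((x + 4.0) / 8.0 * (LEVELS - 1)).astype(int)
    return np.clip(idx, 0, LEVELS - 1)

x = np.array([-2.5, 0.0, 1.3, 3.9])
codes = quantize(x)
print(tanh_lut[codes])                     # run-time cost: just an indexed load
print(np.tanh(x))                          # reference values for comparison
```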
Naturally, you can't put huge models on these slivers of silicon. There is no room for huge transformer-style deep learning models here. This is where CERN found that tree-based models are very powerful compared with deep learning ones.
In CERN's experience, tree-based models offer the same performance at a fraction of the cost of deep learning models. This is not surprising, given that the data can be viewed as tabular: for each collision, the LHC spits out a structured set of discrete measurements.
CERN is trying to measure all of the parameters of collisions to the 5-sigma level – a confidence of about 99.99997 percent, the gold standard for claiming a discovery. The Higgs boson subatomic particle was found using this standard.
The LHC has found at least 80 other hadrons, or particles held together by the strong nuclear force (including one just last week).
The hunt is on for new processes that occur in fewer than one in a trillion collisions.
At the end of this year, the LHC is shutting down to make way for the High Luminosity LHC, due to become operational in 2031. It will provide more of the sweet, sweet data particle physicists crave.
It will have more powerful magnets to focus the beams on very tiny spots. The bunches of protons will be doubled in size ("so there is more of a probability that those protons will talk to each other").
That means a lot more collisions and a 10-fold increase in data, leading to a much denser "event complexity." The event size jumps from 2MB to 8MB, and the resulting stream of data will jump from 4 Tb/sec to 63 Tb/sec.
The detectors are being upgraded to identify each collision, then track each particle-pairing back to its original collision point – all within a few microseconds.
While the frontier AI labs build ever-larger models, CERN is, in many ways, heading in the opposite direction, embracing aggressive anomaly detection, heterogeneously-quantized transformers and other tricks to make the AI smaller and faster than ever. When building our understanding of the universe, it is sometimes better to know what information to throw away.
https://go.theregister.com/feed/www.theregister.com/2026/03/23/musk_terafab/
Elon Musk has put Tesla, SpaceX, and xAI in harness to build a chip fabrication outfit called "Terafab" capable of producing a terawatt's worth of computing power each year, then send most of it into space.
In a Sunday afternoon presentation, Musk said the world's chipmakers currently produce 20 gigawatts' worth of compute power each year, and that whatever new capacity his key suppliers Nvidia, Samsung, and Micron produce, he will buy.
But he can't see how they could produce the terawatt of compute power he wants each year, so he has built an "advanced fab" in Austin, Texas, that he says can produce "any kind of chip," as well as lithography masks.
Musk said his companies have developed a recursive process that allows rapid chip production, plus frequent redesigns to improve performance.
He mentioned "some very interesting new physics" that he is "confident will work. It's just a question of when."
"We are going to push the limits of physics in compute and do some wild and crazy things," he said.
He plans to produce two chips. One will be dedicated to inference and used on Earth, mostly in humanoid robots that he thinks will sell in volumes of one to ten billion a year. At the upper end, a single year's production would outnumber the human population.
The second chip will power orbiting computers that ride in satellites packing just 100 kW of compute power – about the energy consumption of a rack packed full of high-end AI gear. In time, Musk expects to launch megawatt-scale satellites.
He also mentioned building a bigger version of SpaceX's Starship that can carry 200 tons into space and shared his back-of-the-envelope math that suggests putting a terawatt of compute into space, along with all the necessary solar power and other infrastructure, means launching 10 million tons into space every year.
Our back-of-the-envelope math suggests that means Musk needs to launch 50,000 Starships a year, or 135 a day at a rate of one giant rocket every ten minutes.
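For the record, the envelope math is easy to reproduce from the quoted figures of 10 million tons a year and 200 tons per launch; dividing them out lands in the same range The Register cites.

```python
# Reproduce the back-of-the-envelope launch-cadence math from the quoted figures.
tons_per_year = 10_000_000        # mass Musk says must reach orbit annually
tons_per_launch = 200             # payload of the proposed larger Starship

launches_per_year = tons_per_year / tons_per_launch          # 50,000
launches_per_day = launches_per_year / 365                   # ~137
minutes_between = 24 * 60 / launches_per_day                 # ~10.5

print(f"{launches_per_year:,.0f} launches a year, about {launches_per_day:.0f} a day, "
      f"one roughly every {minutes_between:.1f} minutes")
```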
The reason for doing this, Musk said, is to ensure humans find a home among the stars and a future that will be "like the best science fiction you have ever read. Like Star Trek, Iain Banks, Asimov, or Heinlein."
Don't mention the Borg, R. Daneel Olivaw, Mule, hegemonizing swarms, or the soup at the end of Stranger in a Strange Land.
Musk didn't explain how he will find sufficient resources to make any of this happen, a question that's especially important at this moment given the war in Iran has seen production of helium – an essential component in semiconductor manufacturing – fall by 30 percent.
Musk challenged doubters by pointing out Tesla and SpaceX defied critics who predicted electric cars and reusable rockets would not be feasible or economical.
"I think it's important to consider the grandness of the universe and what we can do that is much greater than what we've done before, as opposed to worrying about sort of small squabbles on Earth."
Might that have been a reference to his unproductive time at the head of the so-called Department of Government Efficiency? Or perhaps it was earthly spats alone that prevented Musk from delivering on his 2019 prediction that Tesla would deploy one million self-driving taxis in 2020? Robocab-watchers estimate about 200 self-driving Tesla taxis are currently undergoing tests.
As his appreciative audience cheered him on, Musk discussed his vision for launching a petawatt of computing power each year, made on the Moon and sent out into the solar system on a gadget he called an "electromagnetic mass driver" that looks like a kind of railgun.
"I want to live long enough to see the mass driver on the Moon," the 54-year-old said.
US government data suggests he's got 22 years in which to make it happen.
[Ed's Comment: One of our reviewing editors, quoting the line "He mentioned 'some very interesting new physics' that he is 'confident will work. It's just a question of when,'" added: "So magic. Got it."]