
SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 9 submissions in the queue.



posted by jelizondo on Saturday April 11, @07:18PM   Printer-friendly
from the de-minimis dept.

After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now:

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.
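The "less than 3%" claim checks out with a quick back-of-the-envelope calculation from the figures quoted above (using the midpoints of the quoted ranges, which is our assumption):

```python
# Back-of-the-envelope check of the impressions-per-post decline, using the
# figures quoted above; midpoints of the quoted ranges are our assumption.
posts_2018 = 7.5 * 30             # ~5-10 posts/day, midpoint, over a month
views_2018 = 75e6 / posts_2018    # ~50-100M impressions/month, midpoint
views_2024 = 13e6 / 1500          # last year: 1,500 posts, ~13M impressions total
ratio = views_2024 / views_2018
print(f"{views_2018:,.0f} vs {views_2024:,.0f} impressions per post ({ratio:.1%})")
```

That works out to roughly 2.6 percent, consistent with the article's figure.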

When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.

We called for:

  • Transparent content moderation: Publicly shared policies, clear appeals processes, and renewed commitment to the Santa Clara Principles
  • Real security improvements: Including genuine end-to-end encryption for direct messages
  • Greater user control: Giving users and third-party developers the means to control the user experience through filters and interoperability.

Twitter was never a utopia. We've criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them.

TFA goes on to explain why they're remaining on Facebook and TikTok. They have vowed to keep fighting to protect digital rights.


Original Submission

posted by jelizondo on Saturday April 11, @02:35PM   Printer-friendly

Experiments with hydrogen confirm fundamental quantum theory up to the 13th decimal place and solve proton radius puzzle:

Researchers at the Max Planck Institute for Quantum Optics (MPQ), Garching, in collaboration with Prof. Dr. Randolf Pohl from the Institute for Physics at Johannes Gutenberg University Mainz (JGU), have successfully conducted experiments on hydrogen atoms which allow testing of the Standard Model of particle physics up to the 13th decimal place. When it comes to measurements using hydrogen atoms, this is the most exact result to date. It allows researchers to, among other things, test predictions in hydrogen and solve the so-called proton radius puzzle. This puzzle has existed since measurements on two types of hydrogen indicated different proton radii. The new research results have recently been published in the journal Nature.

The Standard Model of particle physics encompasses the smallest-scale physics in a model consisting of particles and forces. One of its foundational components is quantum electrodynamics (QED), which describes how light and matter fundamentally interact. "Because hydrogen is relatively simple, it is well-suited for calculation. This means we can use it to test QED, and thus the Standard Model", explains Prof. Randolf Pohl. For their experiment, the researchers analyzed hydrogen's energetic structure using high-precision laser spectroscopy. They examined two different energy levels and determined the energy needed to transition from one level to the other, or, more specifically, their transition frequency. The measured transition frequency confirms the Standard Model with a deviation of less than one trillionth (0.7 parts per trillion). With this, the researchers have set a new benchmark in measuring the energy levels of hydrogen atoms. "This measurement is as good as the anomalous magnetic moment of the electron – the current gold standard for the confirmation of the Standard Model", says Pohl.

Thanks to this precision, the measurements taken confirm predictions made through the Standard Model which have never been confirmed in ordinary hydrogen before. "We are able to see very small, extremely interesting contributions that arise from the interaction with more complex particles called hadrons", says Dr. Lothar Maisenbacher from the MPQ, lead author of the study. Dr. Vitaly Wirthl, co-author and also from the MPQ, expands on this: "In the contributions to the transition frequency, we see muons in the electronic hydrogen for the first time. In theory, muon-antimuon particle pairs contribute to vacuum polarization, which is relevant for the precision of our measurement."

In addition to testing the Standard Model and QED, the scientists also used the hydrogen measurement to investigate the inconsistency with earlier measurements in muonic hydrogen. Those measurements, led by Prof. Pohl, use muonic hydrogen, which possesses a muon instead of an electron. This elementary particle is similar to an electron, since it carries the same charge. However, it is more than 200 times heavier and has a lifespan of just two microseconds. The new measurement data means that a discrepancy between the two hydrogen types can be largely ruled out for the first time. Both types yield a proton radius of 0.8406 femtometers. However, it remains unclear how the discrepancy measured earlier can be explained.

Journal Reference: Maisenbacher, L., Wirthl, V., Matveev, A. et al. Sub-part-per-trillion test of the Standard Model with atomic hydrogen. Nature 650, 845–851 (2026). https://doi.org/10.1038/s41586-026-10124-3


Original Submission

posted by jelizondo on Saturday April 11, @09:50AM   Printer-friendly

https://news.mit.edu/2026/toward-cheaper-cleaner-hydrogen-production-0403

Hydrogen sits at the center of some of the world's most important industrial processes, but its production still comes with a heavy environmental cost. Today, most hydrogen is produced through high-emissions processes like steam methane reforming and coal gasification.

But hydrogen can also be made by splitting water molecules using renewable electricity, eliminating fossil fuel emissions and other toxic byproducts. Such "green hydrogen" is made by running an electric current through water in an electrolyzer.

Green hydrogen won't scale through decarbonization alone. It also has to be cost-competitive with the traditional methods of production.

1s1 Energy thinks it has the technology to finally make green hydrogen go mainstream. The company says its boron-based membrane material unlocks previously unachievable performance and durability in electrolyzers.

In tests with partners, 1s1 says, electrolyzers with its membranes needed just 70 percent of the energy to produce each kilogram of hydrogen, compared to incumbent devices.

"Green hydrogen has been a hard industry to have success in so far," acknowledges 1s1 co-founder Dan Sobek '88, SM '92, PhD '97. "The difference with us is we've done very targeted customer discovery. We have a very strong value proposition that's not just about decarbonization. We have a pipeline of potential customers that see around a 60 percent reduction in operating costs with our technology. That's a nice point of entry."

Although 1s1 is focused on hydrogen production now, its technology could also be used in fuel cells and solid-state batteries, and to extract critical metals from mining waste. The company is beginning trials in some of those applications, and it is working with a large materials company to scale up production of its membranes for hydrogen production.

"We're at an inflection point for the company," Sobek says. "The plan is, by 2030, to have a solid business in several segments: electrolyzers, mineral extraction, and in collaborations with several large companies. But right now, we have to be judicious and focused."

Sobek was born and raised in Argentina, but he also grew up at MIT over the course of three degrees and more than a decade. He first studied aeronautics and astronautics at MIT, then jumped to mechanical engineering as a graduate student, then moved to the Department of Electrical Engineering and Computer Science, where he worked under PhD advisors and MIT professors Martha Gray and Stephen Senturia. His thesis focused on a technique for quickly measuring optical properties of large numbers of biological cells.

"A lot of my learnings around microfabrication and materials chemistry ended up being really relevant for 1s1," Sobek says. "A class that was very important to me was taught by Professor Amar Bose. I was a teaching assistant for him for a couple of semesters, and that had an incredible influence on my thinking."

Following graduation, Sobek worked in microelectronics and microfluidics before founding his own company, Zymera, in 2004. The company developed deep-tissue imaging technology for detecting cancer and other serious diseases.

Around 2013, Sobek started talking to his Zymera co-founder, Sukanta Bhattacharyya, about making electrolysis more efficient, focusing on "proton exchange membrane" electrolyzers. Such electrolyzers use a large amount of electricity to split water into hydrogen and oxygen. At their center is a membrane that can lose efficiency through voltage resistance.

On top of the efficiency challenge, electricity is more expensive than fossil fuels in many parts of the world. Traditional hydrogen production also has the benefit of existing infrastructure, making it that much more difficult for green hydrogen production to scale.

Sobek and Bhattacharyya knew the most important part of such electrolyzers is their proton-conducting membrane, which shuttles hydrogen ions from the anode to the cathode in the electrolyzer's electrochemical cell.

"I asked Sukanta how we could improve the efficiency and durability of that element," Sobek recalls. "He gave me a one-word answer: boron."

Boron can be given a negative charge, which makes hydrogen ions, or protons, bond to it more quickly. The hydrogen ions can then be filtered through the membrane and released as they move through the cell. Boron-based materials are also more stable and resistant to corrosion, further improving the long-term performance of electrolyzers.

The company was officially founded in late 2019. After years of development, today 1s1 attaches a chemically tailored version of boron onto polymer materials to create its membranes for exchanging protons.

"These are first-of-a-kind membranes with stable and durable, super-acid proton exchange groups that do not poison catalysts," Sobek says.

In 2021, the U.S. Department of Energy set a goal for proton exchange membrane electrolysis to achieve 77 percent electrical efficiency by 2031. Sobek says 1s1 is already reaching that milestone in tests.
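For a sense of scale, here is a hedged sketch of what that 77 percent target implies per kilogram of hydrogen. It assumes efficiency is defined against hydrogen's higher heating value of roughly 39.4 kWh/kg (the HHV basis is our assumption, not stated in the article) and combines the target with the 70 percent energy figure quoted earlier:

```python
# What the DOE's 77% electrical-efficiency target implies per kilogram of
# hydrogen. Assumes efficiency is defined against hydrogen's higher heating
# value (~39.4 kWh/kg); that HHV basis is our assumption, not the article's.
HHV_KWH_PER_KG = 39.4

target_energy = HHV_KWH_PER_KG / 0.77   # kWh per kg at the 77% target
# If, per the article, 1s1's membranes need 70% of an incumbent's energy,
# an electrolyzer at the target would imply this incumbent figure:
incumbent_energy = target_energy / 0.70
implied_incumbent_eff = HHV_KWH_PER_KG / incumbent_energy

print(round(target_energy, 1), round(incumbent_energy, 1),
      round(implied_incumbent_eff, 2))
```

On those assumptions, the target corresponds to roughly 51 kWh per kilogram of hydrogen, against an implied incumbent figure in the low 70s.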

"It's not just the technology, but the way we're applying it," Sobek says. "We're making hydrogen viable for use in the production of different industrial chemicals."

1s1 is currently conducting pilots with partners, including an electrical utility owned by a large steel company in Brazil. The company is also actively exploring other applications for its technology. Last year, 1s1 announced a project to produce green ammonia with the company Nitrofix through joint funding from the U.S. Department of Energy and the Israeli Ministry of Energy and Infrastructure. It's also working with a large mine in Brazil to extract a material called niobium, which is useful for high-strength steel as well as fast-charging batteries. A similar process could even be used to extract gold.

"We can do that without using harsh chemicals, because the standard processes used to extract niobium and gold use extremely strong acids at high temperatures or extremely toxic chemicals," Sobek says. "It's gratifying for me because my home country of Argentina has had a lot of problems with the use of toxic chemicals to extract gold. We're trying to enable low-cost, responsible mining."

As 1s1 scales its membrane technology, Sobek says the goal is to deploy wherever the technology can improve processes.

"We have a large number of potential customers because this technology is really foundational," Sobek says. "Creating high-impact technologies is always fun."


Original Submission

posted by jelizondo on Saturday April 11, @05:09AM   Printer-friendly
from the all-charged-up dept.

Honda is deepening its retreat from an aggressive electric vehicle rollout, canceling three U.S.-bound EVs and warning that the shifting market could result in major financial losses as it pivots toward hybrids:

The automaker recently announced it will halt development and launch plans for the 0 Series SUV, 0 Series Saloon and the Acura RSX. Those models had been slated for U.S. production as early as this year following factory retooling tied to Honda’s next-generation EV strategy.

Honda said the decision reflects a rapidly changing business environment, including slower EV demand, tariff pressures and weaker-than-expected product performance in key markets.

“Honda determined that starting production and sales of these three models in the current business environment where the demand for EVs is declining significantly would likely result in further losses over the long term,” the company said in a statement.

The financial impact is substantial. Honda said total losses tied to the move could reach as much as $15.8 billion. That includes operating expenses projected between roughly $5.2 billion and $7.1 billion in the current fiscal year, reversing what had been an operating profit forecast just one month ago into an expected operating loss.

[...] Honda’s challenges extend beyond North America. The automaker acknowledged it has fallen behind competitors in China, particularly in EV cost competitiveness and in-vehicle software. Sales in China dropped sharply last year, further weighing on overall performance.



Original Submission

posted by jelizondo on Saturday April 11, @12:27AM   Printer-friendly

New analyses of the planet's oldest minerals suggest a diversity of tectonic settings not previously expected more than 4 billion years ago:

Parts of the ancient Earth may have formed continents and recycled crust through subduction far earlier than previously thought.

New research led by scientists at the University of Wisconsin–Madison has uncovered chemical signatures in zircons, the planet's oldest minerals, that are consistent with subduction and extensive continental crust during the Hadean Eon, more than 4 billion years ago. The findings challenge models that have long considered Earth's earliest times as dominated by a rigid, unmoving "stagnant lid" and no continental crust, with potential implications for the timing of the origin of life on the planet.

The study, published Feb. 4 in the journal Nature, is based on chemical analyses of ancient zircons found in the Jack Hills of Western Australia. These sand-sized grains preserve the only direct records of Earth's first 500 million years and offer rare insight into how the planet's surface and interior interacted as continents first formed.

[...] These elements are essentially fingerprints of the environments where the zircons formed, allowing the scientists to distinguish zircons that formed in magmas that originated in the Earth's mantle beneath Earth's crust from those associated with subduction and continental crust. Because zircons lock in their chemistry when they crystallize and are highly resistant to alteration, they preserve uniquely reliable records of early Earth processes, even after several billion years.

"They're tiny time capsules and they carry an enormous amount of information," says John Valley, a professor emeritus of geoscience at UW–Madison who led the research.

Valley says that the chemistry of zircons found in the Jack Hills clearly shows that they originated from a much different source than other Hadean zircons found in South Africa, which carry a chemical signature typical of more primitive rocks originating within the Earth's mantle.

"What we found in the Jack Hills is that most of our zircons don't look like they came from the mantle," says Valley. "They look like continental crust. They look like they formed above a subduction zone."

Together, the two groups of zircons suggest that early Earth was not dominated by a single tectonic style, according to Valley.

[...] The oldest accepted microfossils are about 3.5 billion years old, but the Jack Hills zircons push evidence for potentially habitable surface conditions much earlier.

"We propose that there was about 800 million years of Earth history where the surface was habitable, but we don't have fossil evidence and don't know when life first emerged on Earth," Valley says.

As scientists continue to hunt for evidence of what the earliest Earth was like, Valley says the latest results are an example of the power of improving and refining laboratory techniques.

"Our new analytical capabilities opened a window into these amazing samples," he says. "The Hadean zircons are literally so small you can't see them without a lens, and yet they tell us about the otherwise unknown story of the earliest Earth."

Journal Reference: Valley, J.W., Blum, T.B., Kitajima, K. et al. Contemporaneous mobile- and stagnant-lid tectonics on the Hadean Earth. Nature 650, 636–641 (2026). https://doi.org/10.1038/s41586-025-10066-2


Original Submission

posted by janrinok on Friday April 10, @07:36PM   Printer-friendly

https://www.osnews.com/story/144737/adobe-secretly-modifies-your-hosts-file-for-the-stupidest-reason/

If you're using Windows or macOS and have Adobe Creative Cloud installed, you may want to take a peek at your hosts file. It turns out Adobe adds a bunch of entries into the hosts file, for a very stupid reason.

        They're using this to detect if you have Creative Cloud already installed when you visit their website.

        When you visit https://www.adobe.com/home, they load this image using JavaScript:

        https://detect-ccd.creativecloud.adobe.com/cc.png

        If the DNS entry in your hosts file is present, your browser will therefore connect to their server, so they know you have Creative Cloud installed, otherwise the load fails, which they detect.

They used to just hit http://localhost:/cc.png which connected to your Creative Cloud app directly, but then Chrome started blocking Local Network Access, so they had to do this hosts file hack instead.
        ↫ thenickdude at Reddit
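If you want to check your own machine, a minimal sketch like the following can scan a hosts file for Adobe-related entries (on macOS and Linux the file lives at /etc/hosts, on Windows at C:\Windows\System32\drivers\etc\hosts; the helper below just operates on the file's text):

```python
# Sketch: scan hosts-file text for Adobe-related entries. Point it at your
# own hosts file; the sample below is a made-up illustration.
def find_adobe_entries(hosts_text: str) -> list[str]:
    entries = []
    for line in hosts_text.splitlines():
        stripped = line.strip()
        # Skip blanks and comments; flag any active entry mentioning adobe.
        if stripped and not stripped.startswith("#") and "adobe" in stripped.lower():
            entries.append(stripped)
    return entries

sample = """127.0.0.1 localhost
127.0.0.1 detect-ccd.creativecloud.adobe.com
# comment mentioning adobe is ignored
"""
print(find_adobe_entries(sample))
# → ['127.0.0.1 detect-ccd.creativecloud.adobe.com']
```

To inspect the real file, pass it `open("/etc/hosts").read()` (or the Windows path above).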

At what point does a commercial software suite become malware?


Original Submission

posted by janrinok on Friday April 10, @02:53PM   Printer-friendly

https://9to5linux.com/debians-apt-3-2-released-with-history-undo-redo-and-rollback-support

This release will be part of the upcoming Debian 14 "Forky" operating system series, due out in June-July 2027.

The Debian Project today tagged APT 3.2 as the latest stable release of this package manager for Debian-based distributions, which lets you install, update, and remove packages from your system.

The biggest new feature in the APT 3.2 release is the long-anticipated rollback and history functionality that other package managers, such as DNF on Red Hat-based distros, already offer. This change was actually implemented in the development version 3.1.7, but it's now part of the stable APT 3.2 release.

The native rollback features have been implemented as the following commands: history-list to list past transactions, history-info to show details of a specific transaction, history-redo to redo a transaction, history-undo to undo one, and history-rollback to roll the system back to an earlier point.

The history/undo features work pretty much the same as with DNF if you used Fedora Linux or a similar distro before. With apt history-list you can see all the transactions, then you can use apt history-info ID to see what packages were installed, and you can revert the change with the apt history-undo ID command.
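To illustrate the idea behind history-undo, here is a toy Python model of transaction-based package history. This is purely illustrative and not how APT implements it:

```python
# Toy model of transaction-based package history with undo, illustrating the
# idea behind apt history-undo; NOT APT's actual implementation.
def apply_transaction(installed: set, tx: dict) -> set:
    return (installed - tx["removed"]) | tx["installed"]

def undo_transaction(installed: set, tx: dict) -> set:
    # Undo inverts the transaction: drop what it added, restore what it removed.
    return (installed - tx["installed"]) | tx["removed"]

history = [
    {"installed": {"htop"}, "removed": set()},           # transaction 1
    {"installed": {"nginx"}, "removed": {"apache2"}},    # transaction 2
]

state = {"apache2"}
for tx in history:
    state = apply_transaction(state, tx)
print(sorted(state))                       # state after both transactions

state = undo_transaction(state, history[1])
print(sorted(state))                       # transaction 2 reverted
```

The real commands work against a persisted transaction log rather than in-memory sets, but the undo relation is the same.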

Compared to APT 3.1, this release introduces a much-improved solver, the internal engine responsible for resolving package dependencies. It now supports upgrading by source package and can prevent your system from accidentally deleting essential software during an update on setups where binaries aren't published together for all architectures.

APT's solver is now also capable of sorting dependency targets against the current alternative, as well as allowing the removal of manually installed packages. Moreover, APT 3.2 introduces JSONL performance counter logging and the ability to prevent your computer from entering sleep while running the dpkg command.

You won't have to wait for Debian 14 to try it, though: the native rollback functionality will also ship in the forthcoming Ubuntu 26.04 LTS (Resolute Racoon), which Canonical plans to officially release later this month, on April 23rd, 2026.

Meanwhile, Debian Sid (Unstable) users can already enjoy the APT 3.2 release just by updating their installations with the sudo apt update && sudo apt install apt commands. More details on today's APT 3.2 release can be found on tracker.debian.org.


Original Submission

posted by janrinok on Friday April 10, @10:07AM   Printer-friendly

A UCLA-led research team demonstrated that minuscule wires made from two unconventional materials can potentially reduce noise below the lowest level possible in traditional electronics:

That low-frequency fuzz that can bedevil cellphone calls has to do with how electrons move through and interact in materials at the smallest scale. The electronic flicker noise is often caused by interruptions in the flow of electrons by various scattering processes in the metals that conduct them.

The same sort of noise hampers the detecting powers of advanced sensors. It also creates hurdles for the development of quantum computers — devices expected to yield unbreakable cybersecurity, process large-scale calculations and simulate nature in ways that are currently impossible.

A much quieter, brighter future may be on the way for these technologies, thanks to a new study led by UCLA. The research team demonstrated prototype devices that, above a certain voltage, conducted electricity with lower noise than the normal flow of electrons.

These experimental devices used unconventional materials to form nanowires, ribbons so thin that it would take a thousand or more to match the width of a strand of hair. In contrast to conventional electronics — in which noise levels tend to remain constant — the nanowires displayed a surprising property: Noise dropped as the electrical current increased.

The behavior of the materials was driven by a quantum phenomenon in which electrons move in concert with phonons, temperature-driven vibrations that can cause flicker noise. Importantly, one of the materials in the study dampened noise at room temperature and above.

"Normally we think about phonons as the bad guys that are scattering electrons," said corresponding author Alexander Balandin, holder of the Fang Lu Endowed Chair in Engineering at the UCLA Samueli School of Engineering, distinguished professor of materials science and engineering and a member of the California NanoSystems Institute at UCLA (CNSI). "In this particular case, we found the phonons allowed electrons to jointly move along. This weird, unique property with respect to noise could allow us to improve signal-to-noise ratio."

When voltage is applied to a metallic wire, electrons travel under the action of the electric field, constantly being bumped off-path by phonons and various defects in materials, which results in noisy current. The researchers took advantage of an additional mode for electrons to move, under very specific circumstances induced by the counterintuitive rules of quantum mechanics. In this mode, electrons tend to clump together in periodic patterns that are enabled by interactions with phonons and largely synchronized with phonons.

By analogy, electrons can be pictured as surfers traveling the ocean of a conducting material, with waves of phonons flowing through it.

In the usual mode, electrons act like newbie surfers, occasionally getting knocked off their boards by phonon waves. Electrons in the quantum-based mode are like expert surfers, catching phonon waves and using their energy to move along smoothly.

With the motion of phonons and electrons so closely connected, the materials that unlock expert-surfer mode are called "strongly correlated materials."

[...] Balandin envisions a future in which strongly correlated materials can be used as conductors for connecting components on computer chips. He thinks these materials may even support a fundamental change in circuit architecture.

"All good things come to an end," he said. "With the demand for high-end, high-power computation for artificial intelligence, we have to look at materials that, 10-plus years from now, can give us an alternative means for sending electrical signals and processing them."

The researchers plan to further investigate the materials from this study, while also seeking other materials that carry charge density waves even more efficiently at room temperature.

"Perhaps there are materials that are even better," Balandin said. "The search is on."

Journal Reference: Ghosh, S., Sesing, N., Nataj, Z.E. et al. A quieter state of charge and ultra-low-noise of the collective current in quasi-1D charge-density-wave nanowires. Nat Commun 17, 116 (2026). https://doi.org/10.1038/s41467-025-67567-x


Original Submission

posted by janrinok on Friday April 10, @05:23AM   Printer-friendly

https://go.theregister.com/feed/www.theregister.com/2026/04/07/nasa_budget/

First, the good news: the Artemis II crew has successfully swung around the far side of the Moon and surpassed Apollo 13's record for the farthest humans have traveled from Earth. Now the bad news: the White House is sharpening the budget blade once again.

The US administration celebrated Artemis II's success while simultaneously proposing a FY 2027 budget [PDF] that would slash NASA's overall spending allowance from $24.4 billion to $18.8 billion [PDF].

If enacted, the request would gut science funding from $7.3 billion in 2026 to $3.9 billion. Space Operations (which includes the International Space Station) would drop from $4.2 billion to $3 billion, and Safety, Security, and Mission Services from $3 billion to $2 billion.

One bright spot is Exploration (including human missions to the Moon), which would get a bump from $7.8 billion to $8.5 billion.

Reaction has been swift and grim. One source close to NASA's Jet Propulsion Laboratory (JPL) told The Register the budget proposal was as "dismal as expected," and that "JPL is hoping that Congress will again dismiss it. We can only hope."

The Planetary Society was blunter in its response, saying: "This proposal needlessly resurrects an existential threat to US leadership in space science and exploration."

"The President has stated his desire that NASA remain the world's premier space agency. The White House's budgeting office is out of step with this broad, bipartisan consensus," it added.

In a message to the NASA workforce, obtained by NASAWatch, administrator Jared Isaacman put a positive spin on the request, saying: "The requested funding levels are sufficient for NASA to meet the Nation's high expectations and deliver on all mission priorities.

"As we saw in last year's budget request, it [the FY2027 request] calls on agencies to find efficiencies, focus resources, and do more to meet the moment."

The request ushers in another year of uncertainty for NASA. Despite the fanfare surrounding the Artemis II mission, the budget proposal describes the Space Launch System (SLS) – used to send astronauts around the Moon – as "grossly expensive and delayed" and calls for replacing the SLS and Orion – currently housing the Artemis II crew – with something "more cost-effective."

What that replacement might be remains unclear, particularly given that SpaceX's Starship, critical to NASA's lunar landing plans, suffered yet another delay on April 3 when boss Elon Musk pushed its next test flight to "4 to 6 weeks away," so no earlier than May.

This is familiar territory. The White House proposed comparable funding cuts for FY 2026, only for Congress to reject them and hold funding roughly flat year-over-year (a real-terms cut once inflation is factored in, but nothing like the scale now proposed). Lawmakers also added almost $10 billion earmarked largely for human spaceflight through 2032, including $2.6 billion for the Gateway space station, which Isaacman subsequently paused in favor of a moonbase.

This time, however, the cuts are proposed against a darker backdrop of rising US defense spending, "which," our source said, "will further reduce the money available for science... This is a worrying time."


Original Submission

posted by janrinok on Friday April 10, @12:37AM   Printer-friendly

Phys.org has an interesting report on the reason the slower car many times catches up:

Many drivers will know the feeling: you pull ahead of the slower car you've been stuck behind and cruise the open road at your own, faster speed. By the time you reach the next stop light, you're sure you've left the slower car far behind you, but to your surprise, you see that same car cruise up right behind you in the mirror. Horror buffs might even recall scenes from "Friday the 13th," where masked villain Jason Voorhees always catches up to his sprinting victims despite walking at a leisurely pace.

In a new study published in Royal Society Open Science, Conor Boland at Dublin City University shows that this unsettlingly common phenomenon can be explained with simple mathematics. His model reveals precisely when and why a slower vehicle catches up after being overtaken, offering fresh insights into how individual vehicles interact with traffic signals.

In simple terms, if two objects move along the same path at different constant speeds, we expect them to reach any given point at different times. So far, however, traffic models haven't yet accounted for what happens when an overtaking event collides with the random timing of a traffic signal.

Rather than describing the average flow of many vehicles, Boland focused on pairwise interactions between just two cars. His approach treats the traffic signal as a random event: at the moment the overtaking driver gains a time advantage, there is no way of knowing how far the signal sits through its red-green cycle.

Using a straightforward probability framework, and assuming the driver arrives at the signal at a random point in its red-green cycle, Boland derived a formula for the probability that the slower car catches up at the next red light.

This probability turns out to depend on just three quantities: the time advantage gained by overtaking; the total length of the signal's red-green cycle; and the fraction of that cycle spent on red.

If a driver's time advantage is large relative to the red-light fraction of the cycle, the slower car almost certainly won't reappear. But as that advantage shrinks, as it often does during brief, more risky overtakes on busy roads, the catch-up probability climbs significantly. This could finally explain what Boland has dubbed the "Voorhees law of traffic."
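Based on that description, the setup can be formalized simply (our sketch of the model, not necessarily the paper's exact formula): the faster car arrives at a uniformly random point in the signal cycle, and the slower car, trailing by the time advantage t, catches up exactly when the faster car hits a red with at least t seconds still remaining:

```python
# Sketch of the catch-up probability from the three quantities named above.
# t: time advantage gained by overtaking (s)
# T: full red-green cycle length (s)
# f: fraction of the cycle spent on red
# Our reconstruction of the setup, not necessarily the paper's exact formula.
def catch_up_probability(t: float, T: float, f: float) -> float:
    # Red lasts f*T seconds; a uniformly random arrival leaves the faster car
    # stopped with >= t seconds of red remaining with probability (f*T - t)/T.
    return max(0.0, f * T - t) / T

print(catch_up_probability(5, 90, 0.5))   # slim advantage: reunion is likely
print(catch_up_probability(50, 90, 0.5))  # big advantage: never caught
```

As the formula shows, once the time advantage exceeds the red phase (t ≥ fT), the probability drops to zero, matching the behavior described above.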

On a psychological level, the model could also help to explain why we remember catch-up moments so vividly. By showing that catch-up events are statistically common, the Voorhees law confirms that the jarring reappearance of slower cars isn't just in your head.


Original Submission

posted by janrinok on Thursday April 09, @07:52PM   Printer-friendly

Is 90 percent accuracy good enough for a search robot?

Looking up information on Google today means confronting AI Overviews, the Gemini-powered search robot that appears at the top of the results page. AI Overviews has had a rough time since its 2024 launch, attracting user ire over its scattershot accuracy, but it's getting better and usually provides the right answer. That's a low bar, though. A new analysis from The New York Times attempted to assess the accuracy of AI Overviews, finding it's right 90 percent of the time. The flip side is that 1 in 10 AI answers is wrong, and for Google, that means hundreds of thousands of lies going out every minute of the day.

The Times conducted this analysis with the help of a startup called Oumi, which itself is deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI.
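The mechanics of a SimpleQA-style evaluation are straightforward: pose each question to the model, compare its answer against the verified gold answer, and report the fraction correct. The sketch below is a toy harness with made-up questions and a string-match grader; the real benchmark uses thousands of vetted items and typically an LLM-based grader that labels responses correct, incorrect, or not attempted.

```python
def grade(predicted, gold):
    # Toy grader: normalized exact match. The real SimpleQA pipeline
    # uses an LLM judge rather than string comparison.
    return predicted.strip().lower() == gold.strip().lower()

# Stand-in dataset; the real SimpleQA has 4,000+ vetted items.
dataset = [
    {"question": "Who wrote 'On the Origin of Species'?",
     "answer": "Charles Darwin"},
    {"question": "In what year did Apollo 11 land on the Moon?",
     "answer": "1969"},
]

def evaluate(model_fn, dataset):
    """Return the fraction of questions the model answers correctly."""
    correct = sum(grade(model_fn(item["question"]), item["answer"])
                  for item in dataset)
    return correct / len(dataset)

# A stub "model" (a lookup table) so the harness runs end to end.
answers = {d["question"]: d["answer"] for d in dataset}
accuracy = evaluate(lambda q: answers.get(q, ""), dataset)
print(f"accuracy: {accuracy:.0%}")
```

Swapping the stub lambda for a real API call to Gemini or any other model is all the harness needs to produce the kind of accuracy figure the Times reported.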

[...] Google doesn't much like this test. Google spokesperson Ned Adriance tells the Times that Google believes SimpleQA contains incorrect information. Its model evaluations often rely on a similar test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted. "This study has serious holes," Adriance told the Times. "It doesn't reflect what people are actually searching on Google."

Evaluating new AI models sometimes feels more like art than science, which is part of the problem. Every company has its own preferred way of demonstrating what a model can do, and the non-deterministic nature of gen AI can make it hard to verify anything. These robots can get a factual question right and then completely miss it if you rerun the query immediately. Oumi even uses AI tools to run its assessments, and those models can hallucinate, too.

The other wrinkle is that AI Overviews isn't a single monolithic model. Google told Ars Technica that it uses the "right model" for each query. While AI Overviews would get the best answers from always running Gemini 3.1 Pro, that's slow and expensive. To load things promptly on a search page, the overview uses faster Gemini Flash models when possible (which appears to be most of the time).

[...] While Google says the Times' results don't match what people see, you have to wonder how the company could even know that. You've probably seen mistakes in AI Overviews—we all have because that's just how generative AI works. As Google itself reminds you at the bottom of every overview: "AI can make mistakes, so double-check responses."


Original Submission

posted by janrinok on Thursday April 09, @03:05PM   Printer-friendly
from the step-1)-post-to-social-media,-step-2)-????,-step-3)-PROFIT!!!!!! dept.

Nate Silver, formerly of FiveThirtyEight, recently published an article about the decline of social media in driving traffic to external websites. Silver describes the impact of social media on traffic to FiveThirtyEight when it relaunched under new ownership from Disney in March 2014:

You will believe what happened next: it didn't work. The whole period was like the Underwear Gnomes meme come to life. Phase 1: Collect lots of low-quality traffic from Facebook. Phase 2: ???. Phase 3: Pivot to video.

It didn't help that Facebook was constantly tinkering with News Feed, and grossly exaggerating metrics like average time spent watching videos. But more fundamentally, it was locked into a zero-sum, adversarial relationship with publishers. Facebook wanted readers to stay within its walled garden, to spend as much time as possible on Facebook. Publishers, meanwhile, regarded Facebook as the equivalent of the Port Authority Bus Terminal: a miserable, liminal space where you'd hopefully spend as little time as possible before booking a one-way ticket out of town.

Although Silver reports that FiveThirtyEight received more traffic from posting on Twitter at the time, it also declined within a few years. Silver's analysis of the content currently receiving the most engagement on Twitter shows that it is dominated by low-quality and highly partisan accounts. As he writes in regard to a chart in his article:

It's not hard to notice that Twitter has become extremely right-leaning. But I'd argue there's an equally important trend: the top accounts are of incredibly low quality. Elon, with the algorithmic boost he built in for himself, is at the eye of the storm, of course. But "Catturd" literally gets far more engagement than the New York Times, for instance.

Without really wanting to comment on individual accounts — there are some exceptions — the liberal-leaning accounts that remain prominent on Twitter aren't much better. They're partisan and combative, sometimes peddling misinformation. They're almost like a dark-mirror-world, Waluigi version of the conservative "influencers", crafted in Elon's jaded image of what liberals are like. It's no coincidence that one of the most successful ones is the Gavin Newsom Press Office account, which literally mimics President Trump's style in a sometimes funny, sometimes cringeworthy way.

Silver's analysis describes Twitter as prioritizing low-quality rage bait designed to maximize engagement and sell ads rather than showing users links to higher-quality articles outside the walled garden:

And "siloed" is on a good day: at other times, Twitter feels like a ghost town. It's still useful for some topics: the AI discourse on the platform is often relatively robust, for instance. But for something like the war in Iran, it's next to useless. Links to external websites are substantially punished, and none of the workarounds are particularly helpful. So the tangible rewards from still having 3 million followers can be surprisingly marginal. However, my account is hardly alone in this regard. The New York Times has 53 million followers, and yet its tweets often produce only a few hundred likes, retweets, and replies even when they reveal urgent, breaking news.

After reading Silver's article, I believe there are three important questions and comments:

  1. When social media platforms actively penalize content that links to external sites, they pressure content creators to stay within the confines of the walled garden. This looks a lot like a potential violation of antitrust laws.
  2. Social media prioritizes engagement to sell ads, and engagement appears to be maximized by increasing viewership of clickbait and rage bait over insightful content. This certainly lowers the quality of political discourse and helps to drive polarization.
  3. If you're a content creator looking to grow an audience with thoughtful content, there is no longer much value in promoting it on most of the largest social media outlets.

Perhaps there are two paths forward. One option is that independent blogs providing in-depth content decline in traffic and go dark due to an inability to draw revenue while low-quality rage bait continues to drive discourse. The other option is that we accept that social media has become nearly useless for many types of thoughtful discussion and move back to blogs and other platforms that reward quality over engagement.


Original Submission

posted by hubie on Thursday April 09, @10:19AM   Printer-friendly

https://www.osnews.com/story/144752/plan-9-is-a-uniquely-complete-operating-system/

Thom Holwerda 2026-04-07

From 2024, but still accurate and interesting:

Plan 9 is unique in this sense that everything the system needs is covered by the base install. This includes the compilers, graphical environment, window manager, text editors, ssh client, torrent client, web server, and the list goes on. Nearly everything a user can do with the system is available right from the get go.
        ↫ moody

This is definitely something that sets Plan 9 apart from everything else, but as moody – 9front developer – notes, this also has a downside in that development isn't as fast, and Plan 9 variants of tools lack features upstream has had for a long time. He further adds that he thinks this is why Plan 9 has remained mostly a hobbyist curiosity, but I'm not entirely sure that's the main reason. The cold and harsh truth is that Plan 9 is really weird, and while that weirdness is a huge part of its appeal and I hope it never loses it, it also means learning Plan 9 is really hard.

I firmly believe Plan 9 has the potential to attract more users, but to get there, it's going to need an onboarding process that's more approachable than reading 9front's frequently questioned answers, excellent though they are. After installing 9front and loading it up for the first time, you basically hit a brick wall that's going to be rough to climb. It would be amazing if 9front could somehow add some climbing tools for first-time users, without actually giving up on its uniqueness. Sometimes, Plan 9 feels more like an experimental art project instead of the capable operating system that it is, and I feel like that chases people away.

Which is a real shame.


Original Submission

posted by Fnord666 on Thursday April 09, @05:38AM   Printer-friendly
from the mo'-money dept.

Artificial intelligence executives and government officials warned that tech companies such as Anthropic and OpenAI are slated to deploy advanced models that are highly effective at hacking complex systems:

Anthropic is privately cautioning senior government officials that its upcoming model, presently known as “Mythos,” will increase the likelihood of massive cyberattacks in 2026, Axios reported. Axios CEO Jim VandeHei also reported that a source familiar with the upcoming models asserted a large-scale cyberattack may occur in 2026, with businesses being vulnerable targets.

Fortune also obtained a draft blog post from Anthropic characterizing “Mythos” as “currently far ahead of any other AI model in cyber capabilities.” The post further suggested that the model “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

Moreover, Axios co-founder Mike Allen also asked OpenAI CEO Sam Altman whether he agreed there was a likelihood of a “world-shaking cyberattack” in 2026 during a Monday interview.

“I think that’s totally possible, yes,” Altman told Allen. “I think to avoid that, it will require a tremendous amount of work.”

Furthermore, OpenAI on Monday released a blueprint for how the government should handle AI, titled, “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” The blueprint warns of cyberattacks resulting from advanced and prevalent AI models.

“As AI systems become more capable and more embedded across the economy, they may introduce new vulnerabilities alongside new abundance,” the blueprint states. “Some systems may be misused for cyber or biological harm.”

Related: A Hacker Used Claude to Breach Mexico's Government and Steal 150GB of Data


Original Submission

posted by Fnord666 on Thursday April 09, @12:56AM   Printer-friendly
from the picture-this dept.

The regulatory price for handing three million people's dating photos to a facial recognition startup turned out to be a promise to behave:

Nearly three million people uploaded photos to OkCupid expecting those images would stay on a dating app. Instead, the photos ended up training facial recognition software, handed over by the company’s own founders to an AI firm they’d personally invested in.

Match Group settled a Federal Trade Commission lawsuit last week over the transfer, which the agency says violated OkCupid’s privacy policy and was actively covered up for years. The consent decree permanently bars Match Group and OkCupid from misrepresenting their data practices and puts them under compliance reporting for a decade.

The settlement carries no financial penalty.

[...] The data transfer happened in September 2014. Clarifai, an AI company building image recognition systems, asked OkCupid for a large dataset of user photos.

The request wasn’t routed through a business development team or vetted by legal. OkCupid’s founders were financially invested in Clarifai, and the ask came on that basis, one investor helping out another. OkCupid’s president and chief technology officer were directly involved in the data transfer, and one of the founders allegedly sent the photos from his personal email account, bypassing any corporate oversight or audit trail.

No contract governed the handoff. No restrictions were placed on what Clarifai could do with the data. Clarifai never provided any business services to OkCupid.

[...] When The New York Times reported on the arrangement in 2019, OkCupid’s response was carefully evasive. The company told the paper that Clarifai had contacted OkCupid about a possible collaboration and that no commercial agreement had been entered into. That framing was technically true and functionally misleading. There was no commercial agreement because the data was given away for free, a favor between a company and its founders’ investment. The FTC alleged that OkCupid did not address whether Clarifai had gained access to photos without consent, and described the response as part of a broader pattern of concealment. The agency said it ultimately had to enforce its Civil Investigative Demand in federal court after OkCupid obstructed the investigation.

[...] The settlement, filed March 30, 2026 in the US District Court for the Northern District of Texas, permanently prohibits misrepresenting data collection, use, and disclosure practices. Match Group did not admit wrongdoing. The Commission vote was 2-0.

Also at Yahoo! and The Verge.


Original Submission