
posted by janrinok on Friday April 10, @02:53PM

https://9to5linux.com/debians-apt-3-2-released-with-history-undo-redo-and-rollback-support

This release will be part of the upcoming Debian 14 "Forky" operating system series, due out in June-July 2027.

The Debian Project today tagged APT 3.2 as the latest stable release of the package manager for Debian-based distributions that lets you install, update, and remove packages from your system.

The biggest new feature in the APT 3.2 release is the long-anticipated rollback and history functionality already offered by other package managers like DNF for Red Hat-based distros. The change was actually implemented in development version 3.1.7, but it's now part of the stable APT 3.2 release.

The native rollback features have been implemented in the form of the following commands: history-list to show the transaction history, history-info to show details of a specific transaction, history-redo to redo transactions, history-undo to undo transactions, and history-rollback to roll back transactions.

The history/undo features work pretty much the same as with DNF if you used Fedora Linux or a similar distro before. With apt history-list you can see all the transactions, then you can use apt history-info ID to see what packages were installed, and you can revert the change with the apt history-undo ID command.
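As a minimal sketch of that workflow (assuming a system with APT 3.2 installed), the following Python snippet simply shells out to the new subcommands; the transaction ID "42" is a placeholder you would take from the history-list output, and undoing normally requires root.

```python
import subprocess

def apt(*args):
    """Run an apt subcommand and return its text output (assumes APT >= 3.2)."""
    return subprocess.run(["apt", *args], capture_output=True, text=True, check=True).stdout

# List past transactions, inspect one, then revert it.
# "42" is a placeholder transaction ID taken from the history-list output.
print(apt("history-list"))
print(apt("history-info", "42"))
subprocess.run(["sudo", "apt", "history-undo", "42"], check=True)
```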

Compared to APT 3.1, this release introduces a much-improved solver (the internal engine responsible for resolving package dependencies) that now supports upgrading by source package and can prevent your system from accidentally deleting essential software during an update on setups where binaries aren't published together for all architectures.

APT's solver is now also capable of sorting dependency targets against the current alternative, as well as allowing the removal of manually installed packages. Moreover, APT 3.2 introduces JSONL performance counter logging and the ability to prevent your computer from entering sleep while running the dpkg command.

APT 3.2 will be part of the upcoming Debian 14 "Forky" release, due out in June-July 2027, but you will be able to enjoy the native rollback functionality in the forthcoming Ubuntu 26.04 LTS (Resolute Racoon) operating system that Canonical plans to officially release later this month, on April 23rd, 2026.

Meanwhile, Debian Sid (Unstable) users can already enjoy the APT 3.2 release just by updating their installations with the sudo apt update && sudo apt install apt commands. More details on today's APT 3.2 release can be found on tracker.debian.org.


Original Submission

posted by janrinok on Friday April 10, @10:07AM

A UCLA-led research team demonstrated that minuscule wires made from two unconventional materials can potentially reduce noise below the lowest level possible in traditional electronics:

That low-frequency fuzz that can bedevil cellphone calls has to do with how electrons move through and interact in materials at the smallest scale. The electronic flicker noise is often caused by interruptions in the flow of electrons by various scattering processes in the metals that conduct them.

The same sort of noise hampers the detecting powers of advanced sensors. It also creates hurdles for the development of quantum computers — devices expected to yield unbreakable cybersecurity, process large-scale calculations and simulate nature in ways that are currently impossible.

A much quieter, brighter future may be on the way for these technologies, thanks to a new study led by UCLA. The research team demonstrated prototype devices that, above a certain voltage, conducted electricity with lower noise than the normal flow of electrons.

These experimental devices used unconventional materials to form nanowires, ribbons so thin that it would take a thousand or more to match the width of a strand of hair. In contrast to conventional electronics — in which noise levels tend to remain constant — the nanowires displayed a surprising property: Noise dropped as the electrical current increased.

The behavior of the materials was driven by a quantum phenomenon in which electrons move in concert with phonons, temperature-driven vibrations that can cause flicker noise. Importantly, one of the materials in the study dampened noise at room temperature and above.

"Normally we think about phonons as the bad guys that are scattering electrons," said corresponding author Alexander Balandin, holder of the Fang Lu Endowed Chair in Engineering at the UCLA Samueli School of Engineering, distinguished professor of materials science and engineering and a member of the California NanoSystems Institute at UCLA (CNSI). "In this particular case, we found the phonons allowed electrons to jointly move along. This weird, unique property with respect to noise could allow us to improve signal-to-noise ratio."

When voltage is applied to a metallic wire, electrons travel under the action of the electric field, constantly being bumped off-path by phonons and various defects in materials, which results in noisy current. The researchers took advantage of an additional mode for electrons to move, under very specific circumstances induced by the counterintuitive rules of quantum mechanics. In this mode, electrons tend to clump together in periodic patterns that are enabled by interactions with phonons and largely synchronized with phonons.

By analogy, electrons can be pictured as surfers traveling the ocean of a conducting material, with waves of phonons flowing through it.

In the usual mode, electrons act like newbie surfers, occasionally getting knocked off their boards by phonon waves. Electrons in the quantum-based mode are like expert surfers, catching phonon waves and using their energy to move along smoothly.

With the motion of phonons and electrons so closely connected, the materials that unlock expert-surfer mode are called "strongly correlated materials."

[...] Balandin envisions a future in which strongly correlated materials can be used as conductors for connecting components on computer chips. He thinks these materials may even support a fundamental change in circuit architecture.

"All good things come to an end," he said. "With the demand for high-end, high-power computation for artificial intelligence, we have to look at materials that, 10-plus years from now, can give us an alternative means for sending electrical signals and processing them."

The researchers plan to further investigate the materials from this study, while also seeking other materials that carry charge density waves even more efficiently at room temperature.

"Perhaps there are materials that are even better," Balandin said. "The search is on."

Journal Reference: Ghosh, S., Sesing, N., Nataj, Z.E. et al. A quieter state of charge and ultra-low-noise of the collective current in quasi-1D charge-density-wave nanowires. Nat Commun 17, 116 (2026). https://doi.org/10.1038/s41467-025-67567-x


Original Submission

posted by janrinok on Friday April 10, @05:23AM

https://go.theregister.com/feed/www.theregister.com/2026/04/07/nasa_budget/

First, the good news: the Artemis II crew has successfully swung around the far side of the Moon and surpassed Apollo 13's record for the farthest humans have traveled from Earth. Now the bad news: the White House is sharpening the budget blade once again.

The US administration celebrated Artemis II's success while simultaneously proposing a FY 2027 budget [PDF] that would slash NASA's overall spending allowance from $24.4 billion to $18.8 billion [PDF].

If enacted, the request would gut science funding from $7.3 billion in 2026 to $3.9 billion. Space Operations (which includes the International Space Station) would drop from $4.2 billion to $3 billion, and Safety, Security, and Mission Services from $3 billion to $2 billion.

One bright spot is Exploration (including human missions to the Moon), which would get a bump from $7.8 billion to $8.5 billion.

Reaction has been swift and grim. One source close to NASA's Jet Propulsion Laboratory (JPL) told The Register the budget proposal was as "dismal as expected," and that "JPL is hoping that Congress will again dismiss it. We can only hope."

The Planetary Society was blunter in its response, saying: "This proposal needlessly resurrects an existential threat to US leadership in space science and exploration."

"The President has stated his desire that NASA remain the world's premier space agency. The White House's budgeting office is out of step with this broad, bipartisan consensus," it added.

In a message to the NASA workforce, obtained by NASAWatch, administrator Jared Isaacman put a positive spin on the request, saying: "The requested funding levels are sufficient for NASA to meet the Nation's high expectations and deliver on all mission priorities.

"As we saw in last year's budget request, it [the FY2027 request] calls on agencies to find efficiencies, focus resources, and do more to meet the moment."

The request ushers in another year of uncertainty for NASA. Despite the fanfare surrounding the Artemis II mission, the budget proposal describes the Space Launch System (SLS) – used to send astronauts around the Moon – as "grossly expensive and delayed" and calls for replacing the SLS and Orion – currently housing the Artemis II crew – with something "more cost-effective."

What that replacement might be remains unclear, particularly given that SpaceX's Starship, critical to NASA's lunar landing plans, suffered yet another delay on April 3 when boss Elon Musk pushed its next test flight to "4 to 6 weeks away," so no earlier than May.

This is familiar territory. The White House proposed comparable funding cuts for FY 2026, only for Congress to reject them, holding funding roughly flat year-over-year, albeit a real-terms cut once inflation is factored in, but nothing like the scale now proposed. Lawmakers also added almost $10 billion earmarked largely for human spaceflight through 2032, including $2.6 billion for the Gateway space station, which Isaacman subsequently paused in favor of a moonbase.

This time, however, the cuts are proposed against a darker backdrop of rising US defense spending, "which," our source said, "will further reduce the money available for science... This is a worrying time."


Original Submission

posted by janrinok on Friday April 10, @12:37AM

Phys.org has an interesting report on why the slower car so often catches up:

Many drivers will know the feeling: you pull ahead of the slower car you've been stuck behind and cruise the open road ahead at your own, faster speed. By the time you reach the next stop light, you're sure that you've left the slower car far behind you—but to your surprise, you see that same car cruise up right behind you in the mirror. Horror buffs might even recall scenes from "Friday the 13th," where masked villain Jason Voorhees always catches up to his sprinting victims—despite himself walking at a leisurely pace.

In a new study published in Royal Society Open Science, Conor Boland at Dublin City University shows that this unsettlingly common phenomenon can be explained with simple mathematics. His model reveals precisely when and why a slower vehicle catches up after being overtaken, offering fresh insights into how individual vehicles interact with traffic signals.

In simple terms, if two objects move along the same path at different constant speeds, we expect them to reach any given point at different times. So far, however, traffic models haven't yet accounted for what happens when an overtaking event collides with the random timing of a traffic signal.

Rather than describing the average flow of many vehicles, Boland focused on pairwise interactions between just two cars. His approach treats the traffic signal as a random event: at the moment the overtaking driver gains a time advantage, there is no way of knowing how far the signal sits through its red-green cycle.

Using a straightforward probability framework, and assuming the driver arrives at the signal at a random point in its red-green cycle, Boland derived a formula for the probability that the slower car catches up at the next red light.

This probability turns out to depend on just three quantities: the time advantage gained by overtaking; the total length of the signal's red-green cycle; and the fraction of that cycle spent on red.

If a driver's time advantage is large relative to the red portion of the cycle, the slower car almost certainly won't reappear. But as that advantage shrinks, as it often does during brief, riskier overtakes on busy roads, the catch-up probability climbs significantly. This could finally explain what Boland has dubbed the "Voorhees law of traffic."
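A quick way to see that dependence is a toy Monte Carlo simulation. The sketch below is not Boland's formula, just a minimal model built on the same assumptions described above (the faster car reaches the signal at a uniformly random point in its cycle and is released the moment the light turns green); all parameter names are invented for the example.

```python
import random

def catch_up_probability(time_advantage, cycle_length, red_fraction, trials=100_000):
    """Estimate the chance the overtaken (slower) car pulls up behind you at the
    next light, given a uniformly random arrival phase in the signal cycle."""
    red_duration = red_fraction * cycle_length
    caught = 0
    for _ in range(trials):
        phase = random.uniform(0.0, cycle_length)   # where in the cycle the faster car arrives
        wait = max(0.0, red_duration - phase)       # how long it sits at the red light
        if wait >= time_advantage:                  # slower car arrives before the light releases it
            caught += 1
    return caught / trials

# A 10-second overtaking gain against a 60-second cycle that is 40% red:
print(catch_up_probability(time_advantage=10, cycle_length=60, red_fraction=0.4))
```

With those numbers the toy model gives roughly a 23% chance of seeing the slower car again, and the probability drops to zero once the time advantage exceeds the full red duration, matching the behavior the article describes.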

On a psychological level, the model could also help to explain why we remember catch-up moments so vividly. In proving that catch-up events are statistically common, the Voorhees law shows that the frequency of the jarring reappearances of slower cars isn't just in your head.


Original Submission

posted by janrinok on Thursday April 09, @07:52PM

Is 90 percent accuracy good enough for a search robot?

Looking up information on Google today means confronting AI Overviews, the Gemini-powered search robot that appears at the top of the results page. AI Overviews has had a rough time since its 2024 launch, attracting user ire over its scattershot accuracy, but it's getting better and usually provides the right answer. That's a low bar, though. A new analysis from The New York Times attempted to assess the accuracy of AI Overviews, finding it's right 90 percent of the time. The flip side is that 1 in 10 AI answers is wrong, and for Google, that means hundreds of thousands of lies going out every minute of the day.

The Times conducted this analysis with the help of a startup called Oumi, which itself is deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI.
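Conceptually, that kind of evaluation is just a scored question-and-answer loop. The sketch below is a heavily simplified illustration, not OpenAI's actual SimpleQA harness: `ask_model` is a hypothetical stand-in for whatever system is under test, and the real benchmark uses a grader model rather than naive string matching.

```python
from typing import Callable

def simpleqa_style_accuracy(qa_pairs: list[tuple[str, str]],
                            ask_model: Callable[[str], str]) -> float:
    """Fraction of questions answered correctly, by exact (case-insensitive) match."""
    correct = sum(
        1 for question, reference in qa_pairs
        if ask_model(question).strip().lower() == reference.strip().lower()
    )
    return correct / len(qa_pairs)

# Trivial usage with a placeholder "model" that always answers "Mars":
sample = [("Which planet is known as the Red Planet?", "Mars"),
          ("What is the chemical symbol for gold?", "Au")]
print(simpleqa_style_accuracy(sample, lambda q: "Mars"))  # 0.5
```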

[...] Google doesn't much like this test. Google spokesperson Ned Adriance tells the Times that Google believes SimpleQA contains incorrect information. Its model evaluations often rely on a similar test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted. "This study has serious holes," Adriance told the Times. "It doesn't reflect what people are actually searching on Google."

Evaluating new AI models sometimes feels more like art than science, which is part of the problem. Every company has its own preferred way of demonstrating what a model can do, and the non-deterministic nature of gen AI can make it hard to verify anything. These robots can get a factual question right and then completely miss it if you rerun the query immediately. Oumi even uses AI tools to run its assessments, and those models can hallucinate, too.

The other wrinkle is that AI Overviews isn't a single monolithic model. Google told Ars Technica that it uses the "right model" for each query. While AI Overviews would get the best answers from always running Gemini 3.1 Pro, that's slow and expensive. To load things promptly on a search page, the overview uses faster Gemini Flash models when possible (which appears to be most of the time).

[...] While Google says the Times' results don't match what people see, you have to wonder how the company could even know that. You've probably seen mistakes in AI Overviews—we all have because that's just how generative AI works. As Google itself reminds you at the bottom of every overview: "AI can make mistakes, so double-check responses."


Original Submission

posted by janrinok on Thursday April 09, @03:05PM
from the step-1)-post-to-social-media,-step-2)-????,-step-3)-PROFIT!!!!!! dept.

Nate Silver, formerly of FiveThirtyEight, recently published an article about the decline of social media in driving traffic to external websites. Silver describes the impact of social media on traffic to FiveThirtyEight when it relaunched under new ownership from Disney in March 2014:

You will believe what happened next: it didn't work. The whole period was like the Underwear Gnomes meme come to life. Phase 1: Collect lots of low-quality traffic from Facebook. Phase 2: ???. Phase 3: Pivot to video.

It didn't help that Facebook was constantly tinkering with News Feed, and grossly exaggerating metrics like average time spent watching videos. But more fundamentally, it was locked into a zero-sum, adversarial relationship with publishers. Facebook wanted readers to stay within its walled garden, to spend as much time as possible on Facebook. Publishers, meanwhile, regarded Facebook as the equivalent of the Port Authority Bus Terminal: a miserable, liminal space where you'd hopefully spend as little time as possible before booking a one-way ticket out of town.

Although Silver reports that FiveThirtyEight received more traffic from posting on Twitter at the time, it also declined within a few years. Silver's analysis of the content currently receiving the most engagement on Twitter shows that it is dominated by low-quality and highly partisan accounts. As he writes in regard to a chart in his article:

It's not hard to notice that Twitter has become extremely right-leaning. But I'd argue there's an equally important trend: the top accounts are of incredibly low quality. Elon, with the algorithmic boost he built in for himself, is at the eye of the storm, of course. But "Catturd" literally gets far more engagement than the New York Times, for instance.

Without really wanting to comment on individual accounts — there are some exceptions — the liberal-leaning accounts that remain prominent on Twitter aren't much better. They're partisan and combative, sometimes peddling misinformation. They're almost like a dark-mirror-world, Waluigi version of the conservative "influencers", crafted in Elon's jaded image of what liberals are like. It's no coincidence that one of the most successful ones is the Gavin Newsom Press Office account, which literally mimics President Trump's style in a sometimes funny, sometimes cringeworthy way.

Silver's analysis describes Twitter as prioritizing low-quality rage bait designed to maximize engagement and sell ads rather than showing users links to higher-quality articles outside the walled garden:

And "siloed" is on a good day: at other times, Twitter feels like a ghost town. It's still useful for some topics: the AI discourse on the platform is often relatively robust, for instance. But for something like the war in Iran, it's next to useless. Links to external websites are substantially punished, and none of the workarounds are particularly helpful. So the tangible rewards from still having 3 million followers can be surprisingly marginal. However, my account is hardly alone in this regard. The New York Times has 53 million followers, and yet its tweets often produce only a few hundred likes, retweets, and replies even when they reveal urgent, breaking news.

After reading Silver's article, I believe there are three important points:

  1. When social media platforms actively penalize content that links to external sites, that pressures content creators to stay within the confines of the walled garden. This looks a lot like a potential violation of antitrust laws.
  2. Social media prioritizes engagement to sell ads, and engagement appears to be maximized by increasing viewership of clickbait and rage bait over insightful content. This certainly lowers the quality of political discourse and helps to drive polarization.
  3. If you're a content creator looking to grow an audience with thoughtful content, there is no longer much value in promoting it on most of the largest social media outlets.

Perhaps there are two paths forward. One option is that independent blogs providing in-depth content decline in traffic and go dark due to an inability to draw revenue while low-quality rage bait continues to drive discourse. The other option is that we accept that social media has become nearly useless for many types of thoughtful discussion and move back to blogs and other platforms that reward quality over engagement.


Original Submission

posted by hubie on Thursday April 09, @10:19AM

https://www.osnews.com/story/144752/plan-9-is-a-uniquely-complete-operating-system/

Thom Holwerda 2026-04-07

From 2024, but still accurate and interesting:

Plan 9 is unique in this sense that everything the system needs is covered by the base install. This includes the compilers, graphical environment, window manager, text editors, ssh client, torrent client, web server, and the list goes on. Nearly everything a user can do with the system is available right from the get go.
        ↫ moody

This is definitely something that sets Plan 9 apart from everything else, but as moody – 9front developer – notes, this also has a downside in that development isn't as fast, and Plan 9 variants of tools lack features that upstream has had for a long time. He further adds that he thinks this is why Plan 9 has remained mostly a hobbyist curiosity, but I'm not entirely sure that's the main reason. The cold and harsh truth is that Plan 9 is really weird, and while that weirdness is a huge part of its appeal and I hope it never loses it, it also means learning Plan 9 is really hard.

I firmly believe Plan 9 has the potential to attract more users, but to get there, it's going to need an onboarding process that's more approachable than reading 9front's frequently questioned answers, excellent though they are. After installing 9front and loading it up for the first time, you basically hit a brick wall that's going to be rough to climb. It would be amazing if 9front could somehow add some climbing tools for first-time users, without actually giving up on its uniqueness. Sometimes, Plan 9 feels more like an experimental art project instead of the capable operating system that it is, and I feel like that chases people away.

Which is a real shame.


Original Submission

posted by Fnord666 on Thursday April 09, @05:38AM
from the mo'-money dept.

Artificial intelligence executives and government officials warned that tech companies such as Anthropic and OpenAI are slated to deploy advanced models that are highly effective at hacking complex systems:

Anthropic is privately cautioning senior government officials that its upcoming model, presently known as “Mythos,” will increase the likelihood of massive cyberattacks in 2026, Axios reported. Axios CEO Jim VandeHei also reported that a source familiar with the upcoming models asserted a large-scale cyberattack may occur in 2026, with businesses being vulnerable targets.

Fortune also obtained a draft blog post from Anthropic characterizing “Mythos” as “currently far ahead of any other AI model in cyber capabilities.” The post further suggested that the model “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

In a Monday interview, Axios co-founder Mike Allen also asked OpenAI CEO Sam Altman whether he agreed there was a likelihood of a “world-shaking cyberattack” in 2026.

“I think that’s totally possible, yes,” Altman told Allen. “I think to avoid that, it will require a tremendous amount of work.”

Furthermore, OpenAI on Monday released a blueprint for how the government should handle AI, titled, “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” The blueprint warns of cyberattacks resulting from advanced and prevalent AI models.

“As AI systems become more capable and more embedded across the economy, they may introduce new vulnerabilities alongside new abundance,” the blueprint states. “Some systems may be misused for cyber or biological harm.”

Related: A Hacker Used Claude to Breach Mexico's Government and Steal 150GB of Data


Original Submission

posted by Fnord666 on Thursday April 09, @12:56AM
from the picture-this dept.

The regulatory price for handing three million people's dating photos to a facial recognition startup turned out to be a promise to behave:

Nearly three million people uploaded photos to OkCupid expecting those images would stay on a dating app. Instead, the photos ended up training facial recognition software, handed over by the company’s own founders to an AI firm they’d personally invested in.

Match Group settled a Federal Trade Commission lawsuit last week over the transfer, which the agency says violated OkCupid’s privacy policy and was actively covered up for years. The consent decree permanently bars Match Group and OkCupid from misrepresenting their data practices and puts them under compliance reporting for a decade.

The settlement carries no financial penalty.

[...] The data transfer happened in September 2014. Clarifai, an AI company building image recognition systems, asked OkCupid for a large dataset of user photos.

The request wasn’t routed through a business development team or vetted by legal. OkCupid’s founders were financially invested in Clarifai, and the ask came on that basis, one investor helping out another. OkCupid’s president and chief technology officer were directly involved in the data transfer, and one of the founders allegedly sent the photos from his personal email account, bypassing any corporate oversight or audit trail.

No contract governed the handoff. No restrictions were placed on what Clarifai could do with the data. Clarifai never provided any business services to OkCupid.

[...] When The New York Times reported on the arrangement in 2019, OkCupid’s response was carefully evasive. The company told the paper that Clarifai had contacted OkCupid about a possible collaboration and that no commercial agreement had been entered into. That framing was technically true and functionally misleading. There was no commercial agreement because the data was given away for free, a favor between a company and its founders’ investment. The FTC alleged that OkCupid did not address whether Clarifai had gained access to photos without consent, and described the response as part of a broader pattern of concealment. The agency said it ultimately had to enforce its Civil Investigative Demand in federal court after OkCupid obstructed the investigation.

[...] The settlement, filed March 30, 2026 in the US District Court for the Northern District of Texas, permanently prohibits misrepresenting data collection, use, and disclosure practices. Match Group did not admit wrongdoing. The Commission vote was 2-0.

Also at Yahoo! and The Verge.


Original Submission

posted by jelizondo on Wednesday April 08, @08:11PM

Sweden is bringing back books amid declining test scores:

In 2023, the Swedish government announced that the country's schools would be going back to basics, emphasizing skills such as reading and writing, particularly in early grades. After mostly being sidelined, physical books are now being reintroduced into classrooms, and students are learning to write the old-fashioned way: by hand, with a pencil or pen, on sheets of paper. The Swedish government also plans to make schools cellphone-free throughout the country.

Educational authorities have been investing heavily. Last year alone, the education ministry allocated $83 million to purchase textbooks and teachers' guides. In a country with about 11 million people, the aim is for every student to have a physical textbook for each subject. The government also put $54 million towards the purchase of fiction and non-fiction books for students.

These moves represent a dramatic pivot from previous decades, during which Sweden—and many other nations—moved away from physical books in favor of tablets and digital resources in an effort to prepare students for life in an online world. Perhaps unsurprisingly, the Nordic country's efforts have sparked a debate on the role of digital technology in education, one that extends well beyond the country's borders. US parents in districts that have adopted digital technology to a great extent may be wondering if educators will reverse course, too.

So why did Sweden pivot? In an email to Undark, Linda Fälth, a researcher in teacher education at Linnaeus University, wrote that the "decision to reinvest in physical textbooks and reduce the emphasis on digital devices" was prompted by several factors, including questions around whether the digitalization of classrooms had been evidence-based. "There was also a broader cultural reassessment," Fälth wrote. "Sweden had positioned itself as a frontrunner in digital education, but over time concerns emerged about screen time, distraction, reduced deep reading, and the erosion of foundational skills such as sustained attention and handwriting."

Fälth noted that proponents of reform believe that "basic skills—especially reading, writing, and numeracy—must be firmly established first, and that physical textbooks are often better suited for that purpose."

[...] Swedish officials emphasize that digital technology isn't being removed from schools altogether. Rather, digital aids "should only be introduced in teaching at an age when they encourage, rather than hinder, pupils' learning." Achieving digital competence remains an important objective, particularly in higher grades.

[...] If US educational leaders were to consult their Swedish colleagues, the advice they'd likely get is not to remove digital technology whole cloth. "The goal is recalibration rather than reversal," wrote Fälth. This was echoed in a statement sent to Undark by the Swedish Ministry of Education and Research: The "Swedish government believes that digitalization is fundamentally important and beneficial, but the use of digital tools in schools must be carried out carefully and thoughtfully."

In other words, the objective is not to reject digitalization. It's more nuanced than that. The goal is to judiciously establish boundaries around technology's selective and sequential use over stages of a pupil's educational development. This means introducing digital technology at later ages after basic reading and other skills have been achieved.


Original Submission

posted by jelizondo on Wednesday April 08, @10:48AM
from the engage-with-your-surroundings dept.

Technology doesn't just make life easier; it changes how we think, how we act, and what we come to expect from the world around us. The biggest shifts show up slowly, fold into everyday life, and eventually become invisible. Over time, a tool or system starts shaping behavior:

Smartphones - Smartphones didn't just improve communication; they removed its boundaries. Messages became instant, information became constant, and waiting became optional.

Before smartphones, there were natural gaps in the day. Time between conversations. Time without updates. Time where nothing was happening. Those gaps have largely disappeared.

Now, attention is continuously pulled in multiple directions. Notifications interrupt focus, and moments of silence are often filled automatically.

[...] GPS Navigation - Finding your way used to require memory, awareness, and decision-making. People learned routes, recognized landmarks, and built mental maps of the places they lived and traveled through. GPS replaced much of that process: following instructions rather than remembering directions.

[...] In fact, studies have suggested that reliance on GPS can weaken spatial memory over time, as the brain outsources that responsibility.

[...] Social Media Algorithms - Social media introduced systems that decide what you see. Early platforms showed content in chronological order, but over time algorithms began prioritizing posts based on engagement, predicting what would keep you scrolling the longest.

This changed behavior on both sides. Users consume what is most attention-grabbing, and creators adapt by producing content that performs well within the system. Over time, this creates feedback loops where certain types of content are amplified while others disappear. What you see is heavily filtered and shaped, yet it feels like a reflection of reality.
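As a toy illustration of that shift (not any platform's real ranking code; the posts and engagement scores here are invented), compare a chronological feed with one sorted by a predicted-engagement signal:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # larger means newer
    predicted_engagement: float  # whatever signal the platform optimizes for

posts = [
    Post("friend", timestamp=100, predicted_engagement=0.2),
    Post("outrage_account", timestamp=10, predicted_engagement=0.9),
    Post("newspaper", timestamp=90, predicted_engagement=0.3),
]

chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)
engagement_ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])      # ['friend', 'newspaper', 'outrage_account']
print([p.author for p in engagement_ranked])  # ['outrage_account', 'newspaper', 'friend']
```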



Original Submission

posted by jelizondo on Wednesday April 08, @05:56AM

Researchers use archaeological-textual tool to uncover global spread of democracies—and autocracies—in early societies:

It is a common belief that democratic forms of government began in Greece and Rome. However, a newly published global study on ancient societies upends this perception, rewriting our understanding of democracy's origins.

An international team of researchers analyzed archaeological and historical evidence from 31 ancient societies across Europe, Asia, and the Americas and found that shared, inclusive governance was far more common than was once believed.

The study, which appears in the journal Science Advances, is the first comprehensive effort to use archaeological evidence to assess the types of government that existed in early societies.

"People often assume that democratic practices started in Greece and Rome," says Gary Feinman, the study's lead author and the MacArthur Curator of Mesoamerican and Central American Anthropology at the Field Museum's Negaunee Integrative Research Center. "But our research shows that many societies around the world developed ways to limit the power of rulers and give ordinary people a voice."

The researchers, drawing upon art, architecture, and other artifacts, also found evidence of autocratic governments.

"These findings show that both democracy and autocracy were widespread in the ancient world," observes New York University Professor David Stasavage, author of The Decline and Rise of Democracy: A Global History from Antiquity to Today and a co-author of the paper. "Significantly, we now have a deeper appreciation of the many factors that affect how governments form and change over time—knowledge that can guide understanding of present-day geopolitical developments."

The study's authors note that both types of governments come in different forms. In an autocracy, one person or a small group holds all the power; examples of autocracy can include absolute monarchies and dictatorships. In a democracy, decision-making power is shared among the people. Elections often go hand-in-hand with democracy, but not always.

"Elections aren't exactly the greatest metric for what counts as a democracy, so with this study, we tried to draw on historical examples of human political organization," says Feinman. "We defined two key dimensions of governance. One of them is the degree to which power is concentrated in just one individual or just one institution. The other is the degree of inclusiveness—how much the bulk of the citizens have access to power and can participate in some aspects of governance."

[...] The researchers found that population size and the number of political levels did not account for whether a society would be autocratic—contrary to conventional wisdom, populous or geographically expansive societies were not always autocratic. Instead, says Feinman, "the strongest factor shaping how much power rulers held was how they financed their authority."

Societies that depended heavily on revenue that was controlled or monopolized by leaders—such as mines, long-distance trade routes, slave labor, or war plunder—tended to become more autocratic. In contrast, societies funded mainly through broad internal taxes or community labor were more likely to distribute power and maintain systems of shared governance.

The study also shows that societies with more inclusive political systems generally had lower levels of economic inequality.

Journal Reference: https://doi.org/10.1126/sciadv.aec1426


Original Submission

posted by jelizondo on Wednesday April 08, @01:12AM

https://gizmodo.com/astronomers-say-recent-rash-of-meteor-sightings-warrants-serious-investigation-2000738638

Astronomers are still searching for answers behind this year’s unusual wave of loud and fiery meteor sightings. Over 3,000 people witnessed a slowly disintegrating daytime fireball over Western Europe. Hundreds more reported the sight—and sonic boom—of a 7-ton, 6-foot (2-meter) asteroid screeching above Ohio. March alone has already seen over 40 meteor cases, with yet another ripping through the sky over Texas last Saturday, breaking the sound barrier, before a fragment crashed into a north Houston home and ricocheted around one bedroom like a pinball.

Now, a new analysis published by the American Meteor Society (AMS) on Wednesday has confirmed just how much of a statistical outlier this 2026 barrage has been—as well as early indications of where all these rocks in our solar system might have come from.

“After years of stable baseline activity, something appears to have shifted,” according to AMS researcher Mike Hankey, who manages the society’s fireball reporting tools. “The signal is consistent across multiple metrics.”

According to those metrics—including total witness figures, the number of cases involving sonic booms, and the duration of the sightings—Hankey said, “Fireball activity has increased.”

Fireballs from outer space, loud enough to produce a sonic boom and witnessed by 50 or more people, have blitzed a trail through Earth’s atmosphere approximately once every three days since this year began, based on reports to the AMS.

“What makes 2026 unique is the combination,” Hankey wrote. “Prior high-sound years like 2021 and 2023 had elevated percentages but moderate event counts. In 2026, both the rate and the absolute count are high.”

Looking at meteor events with the highest number of witnesses—meaning 50 reports or more—30 out of 38 were meteors that were big, tough, and fast enough to produce a sonic boom (79%), which already makes the first quarter of 2026 an outlier historically. But Hankey also determined that the total number of mass sighting events and the volume of those witness reports were outliers, too. Excluding the phenomenal March 8, 2026 case over Western Europe, in which a whopping 3,229 people all reported the same fireball, the remaining 41 episodes so far this March still averaged about 67 witnesses per meteor, “more than double the historical norm,” Hankey noted.

In other words, while the total number of meteor cases has not deviated from researchers’ statistical expectations, the percentage of loud and well-documented cases did.

“Almost half of all March 2026 events with 10+ reports were seen by 50 or more people,” according to Hankey. “Events that would normally draw 25 [to] 49 witnesses instead drew 50, 100, or even 200+ witnesses. The distribution didn’t broaden—it shifted upward.”

Hankey cautioned that the AMS data for 2026’s meteor bombardment can only help develop witness-based trajectory estimates, not the more precise trajectories based on instrument data. But the sheer volume of witnesses does help us learn a bit about where these rocks came from.

Activity from a region of space known as the “Anthelion sporadic source,” defined as objects that hit Earth on their way deeper into our solar system toward the Sun, roughly doubled in 2026. A total of 12 meteors have been traced back to this Anthelion slice of the sky in 2026, with nearly 10 of those events apparently emanating from a single 1,000 square-degree patch.

Several of the biggest meteor events this month were traced back to this Anthelion region—including a March 9 fireball spotted by 282 people across the U.S. eastern seaboard and two fireballs that were reported 381 times over France across the following two days.

For now, Hankey believes that this current data can rule out a few hypotheses for what’s causing this uptick in meteors, or at least meteor sightings.

First, the Anthelion trajectories indicate that there’s no new cluster of asteroids entering Earth’s transit around the Sun—the sort of drifting space rocks that produce predictable annual meteor showers, like the Perseids every August.

Second, early material analyses of the fragments recovered in Ohio and Germany have shown the mineral makeup of achondritic HEDs, one of the most common categories of meteorites on record. Hankey concluded that, for these reasons, it’s highly unlikely that any of these fireballs were crashing extraterrestrial spacecraft: “There is no evidence of anomalous trajectory behavior, controlled flight or non-natural composition,” he wrote in the AMS report. (Although, who’s to say aliens wouldn’t want to throw rocks at Earth.)

Hankey speculated that AI-chatbot advice might have helped more people report their sightings to AMS (one potentially very mundane explanation for the volume of reports), but there’s more than enough mystery left to warrant “serious investigation,” in his opinion.

“Whether this represents normal statistical variance,” he said, “an uncharacterized debris population, or something else entirely will require continued monitoring and further analysis.”


Original Submission

posted by janrinok on Tuesday April 07, @08:28PM

https://techtoday.co/googles-new-compression-drastically-shrinks-ai-memory-use-while-quietly-speeding-up-performance-across-demanding-workloads-and-modern-hardware-environments/

As models scale, the memory demand of the key-value cache becomes increasingly difficult to manage without compromising speed or accessibility in modern LLM deployments. Traditional approaches attempt to reduce this burden through quantization, a method that compresses numerical precision. However, these techniques often introduce trade-offs, particularly reduced output quality or additional memory overhead from stored constants.

This tension between efficiency and accuracy remains unresolved in many existing systems that rely on AI tools for large-scale processing.

Google’s TurboQuant introduces a two-stage process intended to address these long-standing limitations.

The first stage relies on PolarQuant, which transforms vectors from standard Cartesian coordinates into polar representations. Instead of storing multiple directional components, the system condenses the information into radius and angle values, creating a compact shorthand that reduces the need for repeated normalization steps and limits the overhead that typically accompanies conventional quantization methods.

The second stage applies Quantized Johnson-Lindenstrauss, or QJL, which functions as a corrective layer. While PolarQuant handles most of the compression, it can leave small residual errors; QJL reduces each vector element to a single bit, either positive or negative, while preserving the essential relationships between data points.

This additional step refines attention scores, which determine how models prioritize information during processing.
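To make the two stages concrete, here is a minimal NumPy sketch in the spirit of the description above. It is emphatically not Google's TurboQuant implementation: the pairing of coordinates into (radius, angle) form and the sign-only random projection are toy stand-ins for PolarQuant and QJL, and every function name and parameter is invented for illustration.

```python
import numpy as np

def polar_pairs(v):
    """Toy PolarQuant-like step: re-express consecutive coordinate pairs (x, y)
    as a radius and an angle instead of two Cartesian values."""
    x, y = v[0::2], v[1::2]
    return np.hypot(x, y), np.arctan2(y, x)

def sign_bit_projection(v, out_dim, seed=0):
    """Toy QJL-like step: random projection followed by keeping only the sign
    of each output coordinate, i.e. one bit per dimension."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((out_dim, v.shape[0]))
    return np.sign(proj @ v).astype(np.int8)

# Two nearly identical vectors keep very similar sign patterns after projection,
# which is what lets one-bit codes still approximate similarity (attention) scores.
a = np.random.default_rng(1).standard_normal(128)
b = a + 0.01 * np.random.default_rng(2).standard_normal(128)
radii, angles = polar_pairs(a)
print("radius/angle shapes:", radii.shape, angles.shape)
print("matching sign bits:",
      np.mean(sign_bit_projection(a, 256) == sign_bit_projection(b, 256)))
```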

According to reported testing, TurboQuant achieves efficiency gains across several long-context benchmarks using open models.

The system reportedly reduces key-value cache memory usage by a factor of six while maintaining consistent downstream results. It also enables quantization to as little as three bits without requiring retraining, which suggests compatibility with existing model architectures.

The reported results also include gains in processing speed, with attention computations running up to eight times faster than standard 32-bit operations on high-end hardware. These results indicate that compression does not necessarily degrade performance under controlled conditions, although such outcomes depend on benchmark design and evaluation scope.

This system could also lower operation costs by reducing memory demands, while making it easier to deploy models on constrained devices where processing resources remain limited. At the same time, freed resources may instead be redirected toward running more complex models, rather than reducing infrastructure demands.

While the reported results appear consistent across multiple tests, they remain tied to specific experimental conditions. The broader impact will depend on real-world implementation, where variability in workloads and architectures may produce different outcomes.


Original Submission

posted by janrinok on Tuesday April 07, @03:43PM
from the flip-flop-it-was-doing-the-bop dept.

Sediment cores from North Atlantic reveal pole reversal dragged on for 70,000 years—far longer than previously known:

Earth's magnetic field is generated by the churn of its liquid nickel-iron outer core, but it is not a constant feature.

Every so often, the magnetic north and south poles swap places in what are called geomagnetic reversals, and the record of these flips is preserved in rocks and sediments, including those from the ocean floor. These reversals don't happen suddenly but over several thousand years, during which the magnetic field fades and wobbles while the two poles wander before finally settling at opposite positions on the globe.

Over the past 170 million years, the magnetic poles have reversed 540 times, with the reversal process typically taking around 10,000 years to complete each time, according to years of research. Now, a new study by a University of Utah geoscientist and colleagues from France and Japan has upended this scenario after documenting instances 40 million years ago where the process took far longer to complete, upwards of 70,000 years. These findings offer a new perspective on the geomagnetic phenomenon that envelops our planet and shields it from solar radiation and harmful particles from space.

Extended periods of reduced geomagnetic shielding likely influenced atmospheric chemistry, climate processes and the evolution of living organisms, according to co-author Peter Lippert, an associate professor in the U Department of Geology & Geophysics.

"The amazing thing about the magnetic field is that it provides the safety net against radiation from outer space, and that radiation is observed and hypothesized to do all sorts of things. If you are getting more solar radiation coming into the planet, it'll change organisms' ability to navigate," said Lippert, who heads the Utah Paleomagnetic Center. "It's basically saying we are exposing higher latitudes in particular, but also the entire planet, to greater rates and greater durations of this cosmic radiation and therefore it's logical to expect that there would be higher rates of genetic mutation. There could be atmospheric erosion."

[...] "This finding unveiled an extraordinarily prolonged reversal process, challenging conventional understanding and leaving us genuinely astonished," Yamamoto wrote in a summary posted by Springer Nature.

[...] While the finding was a surprise, it may not have been unexpected, according to the study. Computer models of Earth's geodynamo—in the swirling outer core that generates the electrical currents supporting the magnetic field—had indicated reversals' durations vary, with many short ones, but also occasional long, drawn-out transitions, some lasting up to 130,000 years.

In other words, Earth's geomagnetism may have always had this unpredictable streak, but scientists hadn't caught it in the rocks until now.

Journal Reference: Yamamoto, Y., Boulila, S., Takahashi, F. et al. Extraordinarily long duration of Eocene geomagnetic polarity reversals. Commun Earth Environ 7, 180 (2026). https://doi.org/10.1038/s43247-026-03205-8


Original Submission