
posted by janrinok on Thursday April 09, @03:05PM   Printer-friendly
from the step-1)-post-to-social-media,-step-2)-????,-step-3)-PROFIT!!!!!! dept.

Nate Silver, formerly of FiveThirtyEight, recently published an article about the decline of social media in driving traffic to external websites. Silver describes the impact of social media on FiveThirtyEight's traffic when it relaunched in March 2014 under its new owner, Disney:

You will believe what happened next: it didn't work. The whole period was like the Underwear Gnomes meme come to life. Phase 1: Collect lots of low-quality traffic from Facebook. Phase 2: ???. Phase 3: Pivot to video.

It didn't help that Facebook was constantly tinkering with News Feed, and grossly exaggerating metrics like average time spent watching videos. But more fundamentally, it was locked into a zero-sum, adversarial relationship with publishers. Facebook wanted readers to stay within its walled garden, to spend as much time as possible on Facebook. Publishers, meanwhile, regarded Facebook as the equivalent of the Port Authority Bus Terminal: a miserable, liminal space where you'd hopefully spend as little time as possible before booking a one-way ticket out of town.

Although Silver reports that FiveThirtyEight received more traffic from posting on Twitter at the time, that traffic also declined within a few years. Silver's analysis of the content currently receiving the most engagement on Twitter shows that it is dominated by low-quality and highly partisan accounts. As he writes in regard to a chart in his article:

It's not hard to notice that Twitter has become extremely right-leaning. But I'd argue there's an equally important trend: the top accounts are of incredibly low quality. Elon, with the algorithmic boost he built in for himself, is at the eye of the storm, of course. But "Catturd" literally gets far more engagement than the New York Times, for instance.

Without really wanting to comment on individual accounts — there are some exceptions — the liberal-leaning accounts that remain prominent on Twitter aren't much better. They're partisan and combative, sometimes peddling misinformation. They're almost like a dark-mirror-world, Waluigi version of the conservative "influencers", crafted in Elon's jaded image of what liberals are like. It's no coincidence that one of the most successful ones is the Gavin Newsom Press Office account, which literally mimics President Trump's style in a sometimes funny, sometimes cringeworthy way.

Silver's analysis describes Twitter as prioritizing low-quality rage bait designed to maximize engagement and sell ads rather than showing users links to higher-quality articles outside the walled garden:

And "siloed" is on a good day: at other times, Twitter feels like a ghost town. It's still useful for some topics: the AI discourse on the platform is often relatively robust, for instance. But for something like the war in Iran, it's next to useless. Links to external websites are substantially punished, and none of the workarounds are particularly helpful. So the tangible rewards from still having 3 million followers can be surprisingly marginal. However, my account is hardly alone in this regard. The New York Times has 53 million followers, and yet its tweets often produce only a few hundred likes, retweets, and replies even when they reveal urgent, breaking news.

After reading Silver's article, I believe it raises three important points:

  1. When social media platforms actively penalize content that links to external sites, that pressures content creators to stay within the confines of the walled garden. This looks a lot like a potential violation of antitrust laws.
  2. Social media prioritizes engagement to sell ads, and engagement appears to be maximized by increasing viewership of clickbait and rage bait over insightful content. This certainly lowers the quality of political discourse and helps to drive polarization.
  3. If you're a content creator looking to grow an audience with thoughtful content, there is no longer much value in promoting it on most of the largest social media outlets.

Perhaps there are two paths forward. One option is that independent blogs providing in-depth content decline in traffic and go dark due to an inability to draw revenue, while low-quality rage bait continues to drive discourse. The other option is that we accept that social media has become nearly useless for many types of thoughtful discussion and move back to blogs and other platforms that reward quality over engagement.


Original Submission

posted by hubie on Thursday April 09, @10:19AM   Printer-friendly

https://www.osnews.com/story/144752/plan-9-is-a-uniquely-complete-operating-system/

Thom Holwerda 2026-04-07

From 2024, but still accurate and interesting:

Plan 9 is unique in this sense that everything the system needs is covered by the base install. This includes the compilers, graphical environment, window manager, text editors, ssh client, torrent client, web server, and the list goes on. Nearly everything a user can do with the system is available right from the get go.
        ↫ moody

This is definitely something that sets Plan 9 apart from everything else, but as moody – 9front developer – notes, this also has a downside in that development isn't as fast, and Plan 9 variants of tools lack features that upstream has had for a long time. He further adds that he thinks this is why Plan 9 has remained mostly a hobbyist curiosity, but I'm not entirely sure that's the main reason. The cold and harsh truth is that Plan 9 is really weird, and while that weirdness is a huge part of its appeal and I hope it never loses it, it also means learning Plan 9 is really hard.

I firmly believe Plan 9 has the potential to attract more users, but to get there, it's going to need an onboarding process that's more approachable than reading 9front's frequently questioned answers, excellent though they are. After installing 9front and loading it up for the first time, you basically hit a brick wall that's going to be rough to climb. It would be amazing if 9front could somehow add some climbing tools for first-time users, without actually giving up on its uniqueness. Sometimes, Plan 9 feels more like an experimental art project instead of the capable operating system that it is, and I feel like that chases people away.

Which is a real shame.


Original Submission

posted by Fnord666 on Thursday April 09, @05:38AM   Printer-friendly
from the mo'-money dept.

Artificial intelligence company executives and government officials warned that tech companies such as Anthropic and OpenAI are slated to deploy advanced models that are highly effective at hacking complex systems:

Anthropic is privately cautioning senior government officials that its upcoming model, presently known as “Mythos,” will increase the likelihood of massive cyberattacks in 2026, Axios reported. Axios CEO Jim VandeHei also reported that a source familiar with the upcoming models asserted that a large-scale cyberattack may occur in 2026, with businesses being vulnerable targets.

Fortune also obtained a draft blog post from Anthropic characterizing “Mythos” as “currently far ahead of any other AI model in cyber capabilities.” The post further suggested that the model “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

Moreover, Axios co-founder Mike Allen also asked OpenAI CEO Sam Altman whether he agreed there was a likelihood of a “world-shaking cyberattack” in 2026 during a Monday interview.

“I think that’s totally possible, yes,” Altman told Allen. “I think to avoid that, it will require a tremendous amount of work.”

Furthermore, OpenAI on Monday released a blueprint for how the government should handle AI, titled, “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” The blueprint warns of cyberattacks resulting from advanced and prevalent AI models.

“As AI systems become more capable and more embedded across the economy, they may introduce new vulnerabilities alongside new abundance,” the blueprint states. “Some systems may be misused for cyber or biological harm.”

Related: A Hacker Used Claude to Breach Mexico's Government and Steal 150GB of Data


Original Submission

posted by Fnord666 on Thursday April 09, @12:56AM   Printer-friendly
from the picture-this dept.

The regulatory price for handing three million people's dating photos to a facial recognition startup turned out to be a promise to behave:

Nearly three million people uploaded photos to OkCupid expecting those images would stay on a dating app. Instead, the photos ended up training facial recognition software, handed over by the company’s own founders to an AI firm they’d personally invested in.

Match Group settled a Federal Trade Commission lawsuit last week over the transfer, which the agency says violated OkCupid’s privacy policy and was actively covered up for years. The consent decree permanently bars Match Group and OkCupid from misrepresenting their data practices and puts them under compliance reporting for a decade.

The settlement carries no financial penalty.

[...] The data transfer happened in September 2014. Clarifai, an AI company building image recognition systems, asked OkCupid for a large dataset of user photos.

The request wasn’t routed through a business development team or vetted by legal. OkCupid’s founders were financially invested in Clarifai, and the ask came on that basis, one investor helping out another. OkCupid’s president and chief technology officer were directly involved in the data transfer, and one of the founders allegedly sent the photos from his personal email account, bypassing any corporate oversight or audit trail.

No contract governed the handoff. No restrictions were placed on what Clarifai could do with the data. Clarifai never provided any business services to OkCupid.

[...] When The New York Times reported on the arrangement in 2019, OkCupid’s response was carefully evasive. The company told the paper that Clarifai had contacted OkCupid about a possible collaboration and that no commercial agreement had been entered into. That framing was technically true and functionally misleading. There was no commercial agreement because the data was given away for free, a favor between a company and its founders’ investment. The FTC alleged that OkCupid did not address whether Clarifai had gained access to photos without consent, and described the response as part of a broader pattern of concealment. The agency said it ultimately had to enforce its Civil Investigative Demand in federal court after OkCupid obstructed the investigation.

[...] The settlement, filed March 30, 2026 in the US District Court for the Northern District of Texas, permanently prohibits misrepresenting data collection, use, and disclosure practices. Match Group did not admit wrongdoing. The Commission vote was 2-0.

Also at Yahoo! and The Verge.


Original Submission

posted by jelizondo on Wednesday April 08, @08:11PM   Printer-friendly

Sweden is bringing back books amid declining test scores:

In 2023, the Swedish government announced that the country's schools would be going back to basics, emphasizing skills such as reading and writing, particularly in early grades. After mostly being sidelined, physical books are now being reintroduced into classrooms, and students are learning to write the old-fashioned way: by hand, with a pencil or pen, on sheets of paper. The Swedish government also plans to make schools cellphone-free throughout the country.

Educational authorities have been investing heavily. Last year alone, the education ministry allocated $83 million to purchase textbooks and teachers' guides. In a country with about 11 million people, the aim is for every student to have a physical textbook for each subject. The government also put $54 million towards the purchase of fiction and non-fiction books for students.

These moves represent a dramatic pivot from previous decades, during which Sweden—and many other nations—moved away from physical books in favor of tablets and digital resources in an effort to prepare students for life in an online world. Perhaps unsurprisingly, the Nordic country's efforts have sparked a debate on the role of digital technology in education, one that extends well beyond the country's borders. US parents in districts that have adopted digital technology to a great extent may be wondering if educators will reverse course, too.

So why did Sweden pivot? In an email to Undark, Linda Fälth, a researcher in teacher education at Linnaeus University, wrote that the "decision to reinvest in physical textbooks and reduce the emphasis on digital devices" was prompted by several factors, including questions around whether the digitalization of classrooms had been evidence-based. "There was also a broader cultural reassessment," Fälth wrote. "Sweden had positioned itself as a frontrunner in digital education, but over time concerns emerged about screen time, distraction, reduced deep reading, and the erosion of foundational skills such as sustained attention and handwriting."

Fälth noted that proponents of reform believe that "basic skills—especially reading, writing, and numeracy—must be firmly established first, and that physical textbooks are often better suited for that purpose."

[...] Swedish officials emphasize that digital technology isn't being removed from schools altogether. Rather, digital aids "should only be introduced in teaching at an age when they encourage, rather than hinder, pupils' learning." Achieving digital competence remains an important objective, particularly in higher grades.

[...] If US educational leaders were to consult their Swedish colleagues, the advice they'd likely get is not to remove digital technology wholesale. "The goal is recalibration rather than reversal," wrote Fälth. This was echoed in a statement sent to Undark by the Swedish Ministry of Education and Research: The "Swedish government believes that digitalization is fundamentally important and beneficial, but the use of digital tools in schools must be carried out carefully and thoughtfully."

In other words, the objective is not to reject digitalization. It's more nuanced than that: the goal is to set boundaries judiciously, introducing technology selectively and sequentially across the stages of a pupil's educational development. In practice, this means introducing digital technology at later ages, after basic reading and other skills have been acquired.


Original Submission

posted by jelizondo on Wednesday April 08, @10:48AM   Printer-friendly
from the engage-with-your-surroundings dept.

Technology doesn't just make life easier, it changes how we think, how we act, and what we come to expect from the world around us. The biggest shifts show up slowly, fold into everyday life, and eventually become invisible. Over time, a tool or system starts shaping behavior:

Smartphones - Smartphones didn't just improve communication, they removed its boundaries. Messages became instant, information became constant, and waiting became optional.

Before smartphones, there were natural gaps in the day. Time between conversations. Time without updates. Time where nothing was happening. Those gaps have largely disappeared.

Now, attention is continuously pulled in multiple directions. Notifications interrupt focus, and moments of silence are often filled automatically.

[...] GPS Navigation - Finding your way used to require memory, awareness, and decision-making. People learned routes, recognized landmarks, and built mental maps of the places they lived and traveled through. GPS replaced much of that process: following instructions rather than remembering directions.

[...] In fact, studies have suggested that reliance on GPS can weaken spatial memory over time, as the brain outsources that responsibility.

[...] Social Media Algorithms - Social media introduced systems that decide what you see. Early platforms showed content in chronological order, but over time algorithms began prioritizing posts based on engagement, predicting what would keep you scrolling the longest.

This changed behavior on both sides. Users consume what is most attention-grabbing, and creators adapt by producing content that performs well within the system. Over time, this creates feedback loops, where certain types of content are amplified while others disappear. What you see is heavily filtered and shaped, yet it feels like a reflection of reality.
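
That feedback loop is easy to see in a toy simulation. The Python sketch below is purely illustrative and reflects assumptions, not any platform's actual ranking code: each post gets an invented "quality" and "outrage" score, engagement is assumed to track outrage far more than quality, and the feed repeatedly re-promotes whatever has earned the most engagement so far.

    import random

    random.seed(1)

    # Toy engagement-ranked feed; all scores and weights are invented.
    posts = [{"quality": random.random(),
              "outrage": random.random(),
              "engagement": 0.0} for _ in range(100)]

    def engage(post):
        # Assumption: outrage drives clicks far more than quality does.
        post["engagement"] += 0.9 * post["outrage"] + 0.1 * post["quality"]

    # Round 0: every post gets one exposure (the chronological baseline).
    for post in posts:
        engage(post)

    # Later rounds: only the current top ten are shown again.
    for _ in range(10):
        feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:10]
        for post in feed:
            engage(post)

    top = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:10]
    print("mean outrage, top 10 :", sum(p["outrage"] for p in top) / 10)
    print("mean outrage, overall:", sum(p["outrage"] for p in posts) / len(posts))

With these assumed weights, the posts dominating the feed after a few rounds are almost exclusively the highest-outrage ones, while low-outrage posts never resurface: the amplification-and-disappearance loop the paragraph describes.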



Original Submission

posted by jelizondo on Wednesday April 08, @05:56AM   Printer-friendly

Researchers use archaeological-textual tool to uncover global spread of democracies—and autocracies—in early societies:

It is a common belief that democratic forms of government began in Greece and Rome. However, a newly published global study on ancient societies upends this perception, rewriting our understanding of democracy's origins.

An international team of researchers analyzed archaeological and historical evidence from 31 ancient societies across Europe, Asia, and the Americas and found that shared, inclusive governance was far more common than was once believed.

The study, which appears in the journal Science Advances, is the first comprehensive effort to use archaeological evidence to assess the types of government that existed in early societies.

"People often assume that democratic practices started in Greece and Rome," says Gary Feinman, the study's lead author and the MacArthur Curator of Mesoamerican and Central American Anthropology at the Field Museum's Negaunee Integrative Research Center. "But our research shows that many societies around the world developed ways to limit the power of rulers and give ordinary people a voice."

The researchers, drawing upon art, architecture, and other artifacts, also found evidence of autocratic governments.

"These findings show that both democracy and autocracy were widespread in the ancient world," observes New York University Professor David Stasavage, author of The Decline and Rise of Democracy: A Global History from Antiquity to Today and a co-author of the paper. "Significantly, we now have a deeper appreciation of the many factors that affect how governments form and change over time—knowledge that can guide understanding of present-day geopolitical developments."

The study's authors note that both types of governments come in different forms. In an autocracy, one person or a small group holds all the power; examples of autocracy can include absolute monarchies and dictatorships. In a democracy, decision-making power is shared among the people. Elections often go hand-in-hand with democracy, but not always.

"Elections aren't exactly the greatest metric for what counts as a democracy, so with this study, we tried to draw on historical examples of human political organization," says Feinman. "We defined two key dimensions of governance. One of them is the degree to which power is concentrated in just one individual or just one institution. The other is the degree of inclusiveness—how much the bulk of the citizens have access to power and can participate in some aspects of governance."

[...] The researchers found that population size and the number of political levels did not account for whether a society would be autocratic—contrary to conventional wisdom, populous or geographically expansive societies were not always autocratic. Instead, says Feinman, "the strongest factor shaping how much power rulers held was how they financed their authority."

Societies that depended heavily on revenue that was controlled or monopolized by leaders—such as mines, long-distance trade routes, slave labor, or war plunder—tended to become more autocratic. In contrast, societies funded mainly through broad internal taxes or community labor were more likely to distribute power and maintain systems of shared governance.

The study also shows that societies with more inclusive political systems generally had lower levels of economic inequality.

Journal Reference: https://doi.org/10.1126/sciadv.aec1426


Original Submission

posted by jelizondo on Wednesday April 08, @01:12AM   Printer-friendly

https://gizmodo.com/astronomers-say-recent-rash-of-meteor-sightings-warrants-serious-investigation-2000738638

Astronomers are still searching for answers behind this year’s unusual wave of loud and fiery meteor sightings. Over 3,000 people witnessed a slowly disintegrating daytime fireball over Western Europe. Hundreds more reported the sight—and sonic boom—of a 7-ton, 6-foot (2-meter) asteroid screeching above Ohio. March alone has already seen over 40 meteor cases, with yet another ripping through the sky over Texas last Saturday, breaking the sound barrier, before a fragment crashed into a north Houston home and ricocheted around one bedroom like a pinball.

Now, a new analysis published by the American Meteor Society (AMS) on Wednesday has confirmed just how much of a statistical outlier this 2026 barrage has been—as well as early indications of where all these rocks in our solar system might have come from.

“After years of stable baseline activity, something appears to have shifted,” according to AMS researcher Mike Hankey, who manages the society’s fireball reporting tools. “The signal is consistent across multiple metrics.”

According to those metrics—including total witness figures, the number of cases involving sonic booms, and the duration of the sightings—Hankey said, “Fireball activity has increased.”

Fireballs from outer space, loud enough to produce a sonic boom and witnessed by 50 or more people, have blitzed a trail through Earth’s atmosphere approximately once every three days since this year began, based on reports to the AMS.

“What makes 2026 unique is the combination,” Hankey wrote. “Prior high-sound years like 2021 and 2023 had elevated percentages but moderate event counts. In 2026, both the rate and the absolute count are high.”

Looking at meteor events with the highest number of witnesses—meaning 50 reports or more—30 out of 38 were meteors that were big, tough, and fast enough to produce a sonic boom (79%), which already makes the first quarter of 2026 an outlier historically. But Hankey also determined that the total number of mass sighting events and the volume of those witness reports were outliers, too. Excluding the phenomenal March 8, 2026 case over Western Europe, in which a whopping 3,229 people all reported the same fireball, the remaining 41 episodes so far this March still averaged about 67 witnesses per meteor, “more than double the historical norm,” Hankey noted.

In other words, while the total number of meteor cases has not deviated from researchers’ statistical expectations, the percentage of loud and well-documented cases did.

“Almost half of all March 2026 events with 10+ reports were seen by 50 or more people,” according to Hankey. “Events that would normally draw 25 [to] 49 witnesses instead drew 50, 100, or even 200+ witnesses. The distribution didn’t broaden—it shifted upward.”
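
For readers who want to check the quoted figures, the arithmetic is straightforward. A quick sanity check in Python, using only the numbers reported above:

    # Sanity-check the quoted AMS figures (all inputs are from the article).
    events_50plus = 38      # March 2026 events with 50 or more witnesses
    boom_events = 30        # of those, events producing a sonic boom
    print(f"sonic-boom share: {boom_events / events_50plus:.1%}")  # ~78.9%

    # Excluding the 3,229-witness March 8 fireball, the remaining 41 March
    # events averaged about 67 witnesses, implying roughly this many reports:
    print("implied March reports (excluding outlier):", 41 * 67)  # 2747

    # "Approximately once every three days since this year began" suggests
    # about 30 loud, well-witnessed fireballs across January through March.
    print("implied fireballs year to date:", 90 // 3)  # 30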

Hankey cautioned that the AMS data for 2026’s meteor bombardment can only help develop witness-based trajectory estimates, not the more precise trajectories based on instrument data. But the sheer volume of witnesses does help us learn a bit about where these rocks came from.

Activity from a region of space known as the “Anthelion sporadic source,” defined as objects that hit Earth on their way deeper into our solar system toward the Sun, roughly doubled in 2026. A total of 12 meteors have been traced back to this Anthelion slice of the sky in 2026, with nearly 10 of those events apparently emanating from a single 1,000 square-degree patch.

Several of the biggest meteor events this month were traced back to this Anthelion region—including a March 9 fireball spotted by 282 people across the U.S. eastern seaboard and two fireballs that were reported 381 times over France across the following two days.

For now, Hankey believes that this current data can rule out a few hypotheses for what’s causing this uptick in meteors, or at least meteor sightings.

First, the Anthelion trajectories indicate that there’s no new cluster of asteroids entering Earth’s transit around the Sun—the sort of drifting space rocks that produce predictable annual meteor showers, like the Perseids every August.

Second, early material analyses of the fragments recovered in Ohio and Germany have matched the mineral makeup of achondritic HEDs, one of the most common categories of meteorites on record. Hankey concluded that, for these reasons, it’s highly unlikely that any of these fireballs were crashing extraterrestrial spacecraft: “There is no evidence of anomalous trajectory behavior, controlled flight or non-natural composition,” he wrote in the AMS report. (Although, who’s to say aliens wouldn’t want to throw rocks at Earth.)

Hankey speculated that AI-chatbot advice might have helped more people report their sightings to AMS (one potentially very mundane explanation for the volume of reports), but there’s more than enough mystery left to warrant “serious investigation,” in his opinion.

“Whether this represents normal statistical variance,” he said, “an uncharacterized debris population, or something else entirely will require continued monitoring and further analysis.”


Original Submission

posted by janrinok on Tuesday April 07, @08:28PM   Printer-friendly

https://techtoday.co/googles-new-compression-drastically-shrinks-ai-memory-use-while-quietly-speeding-up-performance-across-demanding-workloads-and-modern-hardware-environments/

As models scale, this memory demand becomes increasingly difficult to manage without compromising speed or accessibility in modern LLM deployments. Traditional approaches attempt to reduce this burden through quantization, a method that compresses numerical precision. However, these techniques often introduce trade-offs, particularly reduced output quality or additional memory overhead from stored constants.

This tension between efficiency and accuracy remains unresolved in many existing systems that rely on AI tools for large-scale processing.

Google’s TurboQuant introduces a two-stage process intended to address these long-standing limitations.

The first stage relies on PolarQuant, which transforms vectors from standard Cartesian coordinates into polar representations. Instead of storing multiple directional components, the system condenses information into radius and angle values, creating a compact shorthand that reduces the need for repeated normalization steps and limits the overhead that typically accompanies conventional quantization methods.

The second stage applies Quantized Johnson-Lindenstrauss, or QJL, which functions as a corrective layer. While PolarQuant handles most of the compression, it can leave small residual errors; QJL addresses these by reducing each vector element to a single bit, either positive or negative, while preserving essential relationships between data points.

This additional step refines attention scores, which determine how models prioritize information during processing.
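
To make the two stages concrete, here is a minimal sketch in Python. It illustrates the general ideas only and is not Google's implementation: the pairing of vector components into 2-D polar codes, the 4-bit angle width, and the sign-bit Johnson-Lindenstrauss sketch of the residual are all assumptions based on the description above.

    import numpy as np

    rng = np.random.default_rng(0)

    def polar_quantize(v, angle_bits=4):
        """Stage 1 sketch: store consecutive component pairs as (radius, angle)."""
        pairs = v.reshape(-1, 2)
        radius = np.linalg.norm(pairs, axis=1)
        angle = np.arctan2(pairs[:, 1], pairs[:, 0])        # range [-pi, pi]
        levels = 2 ** angle_bits
        code = np.round((angle + np.pi) / (2 * np.pi) * (levels - 1))
        return radius, code.astype(np.uint8)

    def polar_dequantize(radius, code, angle_bits=4):
        levels = 2 ** angle_bits
        angle = code / (levels - 1) * 2 * np.pi - np.pi
        pairs = np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)
        return pairs.reshape(-1)

    def sign_sketch(v, proj):
        """Stage 2 sketch: one sign bit per random projection (a 1-bit JL map)."""
        return np.sign(proj @ v)

    d, m = 64, 256                      # vector size and projection count (made up)
    proj = rng.normal(size=(m, d)) / np.sqrt(m)
    x = rng.normal(size=d)

    radius, code = polar_quantize(x)    # compact polar representation
    x_hat = polar_dequantize(radius, code)
    residual_bits = sign_sketch(x - x_hat, proj)  # 1 bit per projection of the residue

    print("polar reconstruction error :", np.linalg.norm(x - x_hat))
    print("residual sketch size (bits):", residual_bits.size)

Even this toy version shows where the savings come from: each pair of full-precision components collapses to one radius plus a few angle bits, and the corrective layer costs only a single bit per projection.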

According to reported testing, TurboQuant achieves efficiency gains across several long-context benchmarks using open models.

The system reportedly reduces key-value cache memory usage by a factor of six while maintaining consistent downstream results. It also enables quantization to as little as three bits without requiring retraining, which suggests compatibility with existing model architectures.

The reported results also include gains in processing speed, with attention computations running up to eight times faster than standard 32-bit operations on high-end hardware. These results indicate that compression does not necessarily degrade performance under controlled conditions, although such outcomes depend on benchmark design and evaluation scope.

This system could also lower operating costs by reducing memory demands, while making it easier to deploy models on constrained devices where processing resources remain limited. At the same time, freed resources may instead be redirected toward running more complex models, rather than reducing infrastructure demands.

While the reported results appear consistent across multiple tests, they remain tied to specific experimental conditions. The broader impact will depend on real-world implementation, where variability in workloads and architectures may produce different outcomes.


Original Submission

posted by janrinok on Tuesday April 07, @03:43PM   Printer-friendly
from the flip-flop-it-was-doing-the-bop dept.

Sediment cores from North Atlantic reveal pole reversal dragged on for 70,000 years—far longer than previously known:

Earth's magnetic field is generated by the churn of its liquid nickel-iron outer core, but it is not a constant feature.

Every so often, the magnetic north and south poles swap places in what are called geomagnetic reversals, and the record of these flips is preserved in rocks and sediments, including those from the ocean floor. These reversals don't happen suddenly but over several thousand years, during which the magnetic field fades and wobbles while the two poles wander before finally settling at opposite ends of the globe.

Over the past 170 million years, the magnetic poles have reversed 540 times, with the reversal process typically taking around 10,000 years to complete each time, according to years of research. Now, a new study by a University of Utah geoscientist and colleagues from France and Japan has upended this scenario by documenting instances around 40 million years ago in which the process took far longer to complete, upwards of 70,000 years. These findings offer a new perspective on the geomagnetic phenomenon that envelops our planet and shields it from solar radiation and harmful particles from space.

Extended periods of reduced geomagnetic shielding likely influenced atmospheric chemistry, climate processes and the evolution of living organisms, according to co-author Peter Lippert, an associate professor in the U Department of Geology & Geophysics.

"The amazing thing about the magnetic field is that it provides the safety net against radiation from outer space, and that radiation is observed and hypothesized to do all sorts of things. If you are getting more solar radiation coming into the planet, it'll change organisms' ability to navigate," said Lippert, who heads the Utah Paleomagnetic Center. "It's basically saying we are exposing higher latitudes in particular, but also the entire planet, to greater rates and greater durations of this cosmic radiation and therefore it's logical to expect that there would be higher rates of genetic mutation. There could be atmospheric erosion."

[...] "This finding unveiled an extraordinarily prolonged reversal process, challenging conventional understanding and leaving us genuinely astonished," Yamamoto wrote in a summary posted by Springer Nature.

[...] While the finding was a surprise, it may not have been unexpected, according to the study. Computer models of Earth's geodynamo—which operates in the swirling outer core that generates the electrical currents supporting the magnetic field—had indicated that reversal durations vary, with many short ones but also occasional long, drawn-out transitions, some lasting up to 130,000 years.

In other words, Earth's geomagnetism may have always had this unpredictable streak, but scientists hadn't caught it in the rocks until now.

Journal Reference: Yamamoto, Y., Boulila, S., Takahashi, F. et al. Extraordinarily long duration of Eocene geomagnetic polarity reversals. Commun Earth Environ 7, 180 (2026). https://doi.org/10.1038/s43247-026-03205-8


Original Submission

posted by janrinok on Tuesday April 07, @11:01AM   Printer-friendly

$500 fiber optic HDMI cable delivers flawless 48 Gbps performance across a staggering 990 feet — crushes 8K at 60 Hz and 4K at 120 Hz over long distances

An expensive HDMI cable that's not snake oil?:

Now, these specs aren't special in a vacuum, but the fact that the cable can enable them over (up to) 990 feet — that's the impressive bit. The "entry-level" $116 version is only 3 feet long, and for that, it's quite expensive because you don't need fiber optic for this length. The best deal here is probably the 100-foot cable priced at $150, so only about $30 more for an extra 97 feet of fiber-optic goodness.

Ruipro has made the HDMI connectors on both ends removable, so you won't have to replace the entire cable if a plug breaks. When removed, the end of the cable can slot into keystone jacks and wall plates as well for easy storage. The cable itself is relatively thin for its size, and the connectors are made entirely of metal to ensure durability.

Another benefit of fiber optic is its resistance to electromagnetic interference, though that's not a huge issue to begin with for HDMI, and EMI is notoriously used as the bait to sell those aforementioned miracle cures. Regardless, this is still a solid HDMI 2.1 cable for those who value signal integrity, and even though the starting price is certainly not enticing, the subsequent options are priced rather fairly.


Original Submission

posted by janrinok on Tuesday April 07, @06:18AM   Printer-friendly

Contrary to long-standing beliefs, motion from eye movements helps the brain perceive depth—a finding that could enhance virtual reality:

When you go for a walk, how does your brain know the difference between a parked car and a moving car? This seemingly simple distinction is challenging because eye movements, such as the ones we make when watching a car pass by, make even stationary objects move across the retina—motion that has long been thought of as visual "noise" the brain must subtract out.

Now, researchers at the University of Rochester have discovered that instead of being meaningless interference, the visual motion of an image caused by eye movements helps us understand the world. The specific patterns of visual motion created by eye movements are useful to the brain for figuring out how objects move and where they are located in 3D space.

"The conventional idea has been that the brain needs to somehow discount, or subtract off, the image motion that is produced by eye movements, as this motion has been thought to be a nuisance," says Greg DeAngelis, [...] "But we found that the visual motion produced by our eye movements is not just a nuisance variable to be subtracted off; rather, our brains analyze these global patterns of image motion and use this to infer how our eyes have moved relative to the world."

[...] "We show that the brain considers many pieces of information to understand the 3D structure of the world through vision, including the patterns of image motion caused by eye movements," says DeAngelis. "Contrary to conventional ideas, the brain doesn't ignore or suppress image motion produced by eye movement. Instead, it uses this image motion to understand a scene and accurately estimate an object's motion and depth."

This research has important implications for understanding visual perception, which informs how the brain interprets everyday activities like reading and recognizing faces. But it could also provide insight and new applications for visual technologies, such as virtual reality headsets.

"VR headsets don't factor in how the eyes are moving relative to the scene when they compute the images to show to each eye. There may be a stark mismatch between the image motion that is shown to the observer in VR and what the brain is expecting to receive based on the eye movements that the observer is making," says DeAngelis. This could be what causes some people to experience motion sickness while using a VR headset.

Journal Reference: Xu, ZX., Pang, J., Anzai, A. et al. Flexible computation of object motion and depth based on viewing geometry inferred from optic flow. Nat Commun 17, 1092 (2026). https://doi.org/10.1038/s41467-025-67857-4


Original Submission

posted by hubie on Tuesday April 07, @01:33AM   Printer-friendly

Apple has finally discontinued its tower workstation:

While Apple is celebrating its upcoming 50th anniversary and looks forward to another 50 years, there’s one major product that has come to an end. The Mac Pro, as confirmed by Apple with Macworld, has been discontinued by the company. The Mac Pro section of Apple.com has been removed from the website, though Mac Pros are still available through Apple’s Certified Refurbished store.

It’s a quiet end for a product that was last updated in 2023 with an M2 Ultra chip. But it wasn’t a surprise; Bloomberg’s Mark Gurman reported last November that Apple had “largely written off” the Mac Pro, believing that the Mac Studio is a better product. Why it took so long to finally pull the plug isn’t clear, but Apple hadn’t done any updates to the hardware since the M2 Ultra upgrade nearly three years ago.

Apple has been rumored to have an update to the Mac Studio in the works, with an announcement likely between now and WWDC26. Apple positions the Mac Studio as the machine for production environments that demand workstation performance, and seemingly feels confident that the Mac Studio can fill the Mac Pro’s shoes.

The discontinuation of the Mac Pro leaves Apple without a modular tower computer, but it’s been moving away from those types of machines for a while. In response to those who think an expandable tower is a gaping hole in the Mac lineup, Apple often counters with confidence that its silicon can make up for the need for expansion cards, and Thunderbolt can handle storage needs just as well.

Apple introduced the Mac Pro in 2006, the same time Apple completed its transition from PowerPC chips to Intel. It had two 64-bit Intel Xeon 5100 (Woodcrest) processors, four hard drive bays, eight RAM slots, and started at $2,499.


Original Submission

posted by hubie on Monday April 06, @08:52PM   Printer-friendly

The Pentagon is spending $13.4 billion on AI this year alone:

The designation enters Maven into the Future Years Defense Program as a protected line item, giving it visibility and stability across budget cycles that experimental programs lack. The U.S. Army will manage all Maven contracts going forward, and oversight will transfer from the National Geospatial-Intelligence Agency to the Chief Digital and AI Officer within 30 days, with program-of-record status expected before the close of fiscal year 2026 on September 30.

Palantir took over and built a full command-and-control platform that ingests data from more than 150 sources, according to Palantir's public demonstrations: satellite imagery, drone video, radar, infrared sensors, signals intelligence, and geolocation data. Computer vision algorithms trained on millions of labeled images automatically detect and classify battlefield objects, with yellow-outlined boxes marking potential targets, blue outlines flagging friendly forces and no-strike zones, and an ‘AI Asset Tasking Recommender’ proposing which weapons platforms and munitions should be assigned to each target.

NGA Director Vice Admiral Frank Whitworth stated at Palantir's AIPCON 9 conference in March that Maven can generate 1,000 targeting recommendations per hour, as reported by The Register, with the 18th Airborne Corps reportedly achieving comparable targeting output to the 2,000-person cell used during Operation Iraqi Freedom with roughly 20 people. Maven now has more than 20,000 active users, a figure that has quadrupled since March 2024. The platform was used during the 2021 Kabul airlift, to supply target coordinates to Ukrainian forces in 2022, and most recently during Operation Epic Fury against Iran in 2026, where it reportedly enabled processing of 1,000 targets within the first 24 hours, according to SpaceNews. NATO acquired a version in March 2025.

Meanwhile, the FY2026 defense budget reached $1.01 trillion, representing a 13% increase over FY2025, and for the first time included a dedicated AI and autonomy budget line of $13.4 billion, according to MeriTalk's analysis of the Pentagon budget request. That allocation covers unmanned aerial vehicles ($9.4 billion), maritime autonomous systems ($1.7 billion), and supporting AI software ($1.2 billion). The Pentagon now oversees more than 685 AI-related projects tied to weapons systems, per Congressional Research Service tracking.

[...] The Brennan Center for Justice, in a March 2026 report titled "The Business of Military AI," documented that Hegseth halved staffing at the Office of the Director of Operational Test and Evaluation and shuttered the Civilian Protection Center of Excellence. The center's researchers wrote that "the accelerating use of AI in warfighting has not been met with commensurate urgency to reckon with its dangers."

CSIS research has quantified AI-assisted targeting error propagation at 25% under variable conditions, according to a January 2026 analysis. Whitworth stated that by June 2026, Maven will begin transmitting "100 percent machine-generated" intelligence to combatant commanders. “No human hands actually participate in that particular template and that particular dissemination,” he added. “We want to use it for everything, not just targeting.”

Senator Elissa Slotkin introduced the AI Guardrails Act this month, which would prohibit the DoD from using autonomous weapons to kill without human authorization and bar AI use for domestic mass surveillance, The Hill reported. The FY2026 NDAA already declares targeting and launch authorization "inherently governmental" functions and requires reporting of autonomous weapons directive waivers to Congress.

[...] Meanwhile, a recent CSIS analysis documented Russian forces striking approximately 300 targets per day using unmanned systems in Ukraine, with data collection feeding AI platforms designated Platform-GNS and Avtomat. Russia voted against the December 2024 UN General Assembly draft resolution on lethal autonomous weapons alongside only North Korea and Belarus. That resolution passed 166-3 but remains non-binding; no international treaty currently governs lethal autonomous weapons systems. With AI reshaping the technology industry, its influence has now begun to extend into military use, and the implications of such deals remain to be seen.


Original Submission

posted by hubie on Monday April 06, @04:11PM   Printer-friendly

Claude source code leaked?

The date makes it suspicious, but both the accidental publishing of source and the takedown sound all too plausible.

https://neuromatch.social/@jonny/116324676116121930

  • Claude code source "leaks" in a mapfile
  • people immediately use the code laundering machines to code launder the code laundering frontend
  • now many dubious open source-ish knockoffs in python and rust being derived directly from the source

What's Anthropic going to do, sue them? Insist in court that an LLM recreating copyrighted code is a violation of copyright???

The 1 Apr Download of 'Leaked' Claude Code Source Contains Malware

Source code with a side of Vidar stealer and GhostSocks

Tens of thousands of people eagerly downloaded the leaked Claude Code source code this week, and some of those downloads came with a side of credential-stealing malware.

A malicious GitHub repository published by idbzoomh uses the Claude Code exposure as a lure to trick people into downloading malware, including Vidar, an infostealer that snarfs account credentials, credit card data, and browser history; and GhostSocks, which is used to proxy network traffic. 

Zscaler's ThreatLabz researchers came across the repo while monitoring GitHub for threats, and said it's disguised as leaked TypeScript source code for Anthropic's Claude Code CLI.

"The README file even claims the code was exposed through a .map file in the npm package and then rebuilt into a working fork with 'unlocked' enterprise features and no message limits," the security sleuths said in a Thursday blog.

They added that the GitHub repository link appeared near the top of Google results for searches like "leaked Claude Code." While that was no longer the case at The Register's time of publication, at least two of the developer's trojanized Claude Code source leak repos remained on GitHub, and one of them had 793 forks and 564 stars.

[...] In March, security shop Huntress warned about a similar malware campaign using OpenClaw, the already risky AI agent platform, as a GitHub lure to deliver the same two payloads.

Both of these illustrate how quickly criminals move to take a buzzy new product or news event (like OpenClaw and the Claude Code leak) and then abuse it for online scams and financial gain. "That kind of rapid movement increases the chance of opportunistic compromise, especially through trojanized repositories," the Zscaler team wrote.

The blog also includes a list of indicators of compromise, including the GitHub repositories with the trojanized Claude Code leak and malware hashes to help defenders in their threat-hunting efforts, so be sure to check that out - and, as always, be careful what you download. ®
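
On the threat-hunting point: comparing a downloaded archive against published malware hashes takes only a few lines. A minimal Python sketch follows; the hash set below is a placeholder, not a real indicator, so substitute the actual SHA-256 IoCs from the Zscaler blog.

    import hashlib
    import sys

    # Placeholder value only; replace with the SHA-256 indicators of
    # compromise from the Zscaler ThreatLabz post. Not a real malware hash.
    KNOWN_BAD_SHA256 = {
        "0" * 64,
    }

    def sha256_of(path):
        """Hash a file in 1 MiB chunks so large downloads fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    for path in sys.argv[1:]:
        digest = sha256_of(path)
        status = "MATCHES KNOWN IoC" if digest in KNOWN_BAD_SHA256 else "no match"
        print(f"{path}: {digest} ({status})")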


Original Submission #1 | Original Submission #2