

posted by hubie on Monday December 08, @03:45PM   Printer-friendly

A boycott is unlikely to work:

The chaotic state of RAM prices continues to impact the industry. According to a new report, motherboard sales have fallen by as much as 50% as a result of the crisis. It has also led to gamers calling for a RAM boycott in hopes of easing the situation, but the reality is that such a move is unlikely to work.

We've covered the memory-pricing crisis since it began, including this deep dive into the problem and how it's caused by demand from AI data centers that require massive amounts of DRAM.

A new report from Japanese outlet Gazlog [site in Japanese] states that out-of-control DDR5 prices are impacting motherboard sales, forcing manufacturers such as Asus, MSI, and Gigabyte to significantly lower their sales targets.

DDR5 prices are simply stupid right now. A 64GB kit is now more expensive than a PS5 console or an RTX 5070. We've even seen several stores remove fixed pricing signs from DDR5 displays, relying on market rates because costs are changing so rapidly each day.

The problem for motherboard makers is that people upgrading from DDR4 or older systems – along with first-time builders – need DDR5 to pair with their shiny new boards. But with prices so high, it's a bad time to buy.

The result is a 40-50% decrease in motherboard sales compared to the same period a year earlier, writes Gazlog, leading manufacturers to lower their sales targets. CPU sales are expected to eventually suffer a similar decline due to the RAM situation.

In an attempt to fight back, there are now calls on Reddit for gamers to boycott RAM completely in the hope that prices will return to normal.

Unfortunately, the rallying cry is likely to have very little, if any, effect. The biggest issue, as we know, is that DRAM supply and future manufacturing capacity have already been bought out by companies to support their aggressive data center-building plans, causing the shortage.

Most memory manufacturers' sales come from industry, enterprise, data-centre, and other segments that aren't consumer PCs. And while it's true that a mass boycott would have some impact on their bottom lines, there's the other issue: not everyone will take part.

As we saw during Covid with graphics cards, calling for the public to boycott something for the greater good – regardless of whether it would work anyway – rarely succeeds when there are always people willing to pay any price. And that's before mentioning the scalpers, who are always ready to take advantage of other people's misery.

The memory crisis is also affecting graphics cards. AMD looks set to raise prices by 10%, while both Lisa Su's firm and rival Nvidia are rumored to be considering axing some low- and mid-range cards.


Original Submission

posted by hubie on Monday December 08, @11:04AM   Printer-friendly

A Debian developer broke the news: Julian Andres Klode wrote on the Debian mailing lists that APT (Debian's package manager) will begin requiring a Rust compiler:

"I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing."

source: https://lists.debian.org/deity/2025/10/msg00071.html

I admit I don't know enough about the hows and gotchas of Debian's development to give an educated opinion on this news. But overall, from what I've seen lately, anything Rust has felt a little too eager to wedge itself into every possible cog and bolt of Linuxlandia.

My dormant prophetic eye almost sees a vague vision of something resembling the Unix Wars of the 90s, but this time happening to Linux, as soon as Torvalds eventually retires from his role as the benevolent emperor of Linux. The commercial vultures will come from all sides to "improve" it in whatever way suits their business, injecting endless bloat and useless features that no one asked for, and there will be no one there to stop them with a firm "no!" (and a middle finger up when deserved) the way Torvalds has guarded Linux from such assaults so far.


Original Submission

posted by hubie on Monday December 08, @06:24AM   Printer-friendly
from the let's-see-more-of-this-kind-of-thing dept.

The Linux phone features 12GB RAM, up to 2TB storage, a 6.36-inch FullHD AMOLED display, and a user-replaceable 5,500mAh battery:

Jolla kicked off a campaign for a new Jolla Phone, which they call the independent European Do It Together (DIT) Linux phone, shaped by the people who use it.

The new Jolla Phone is powered by a high-performing Mediatek 5G SoC, and features 12GB RAM, 256GB storage that can be expanded to up to 2TB with a microSDXC card, a 6.36-inch FullHD AMOLED display with ~390ppi, 20:9 aspect ratio, and Gorilla Glass, and a user-replaceable 5,500mAh battery.

The Linux phone also features 4G/5G support with dual nano-SIM and a global roaming modem configuration, Wi-Fi 6 wireless, Bluetooth 5.4, NFC, 50MP Wide and 13MP Ultrawide main cameras, a front-facing wide-lens selfie camera, a fingerprint reader on the power key, a user-changeable back cover, and an RGB indication LED.

On top of that, the new Jolla Phone promises a user-configurable physical Privacy Switch that lets you turn off the microphone, Bluetooth, Android apps, or whatever you wish.

The device will be available in three colors: Snow White, Kaamos Black, and The Orange. All the specs of the new Jolla Phone were voted on by Sailfish OS community members over the past few months.

Honouring the original Jolla Phone form factor and design, the new model ships with the Sailfish OS (with support for Android apps), a Linux-based European alternative to dominating mobile operating systems, and promises a minimum of 5 years of support, no tracking, no calling home, and no hidden analytics.

"Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all. Sailfish OS stays silent unless you explicitly allow connections," said Jolla.

The new Jolla Phone is now available for pre-order for 99 EUR and will only be produced if at least 2,000 pre-orders are reached within one month from today [goal met on 7 December -- Ed.], by January 4th, 2026. The full price of the Linux phone will be 499 EUR (incl. local VAT), and the 99 EUR pre-order price will be fully refundable and deducted from the full price.

The device will be manufactured and sold in Europe, but Jolla says that it will design the cellular band configuration to enable global travelling as much as possible, including e.g. roaming in the U.S. carrier networks. The initial sales markets are the EU, the UK, Switzerland, and Norway.

TECH SPECS:

        SoC: High-performance Mediatek 5G platform
        RAM: 12GB
        Storage: 256GB + expandable with microSDXC
        Cellular: 4G + 5G with dual nano-SIM and global roaming modem configuration
        Display: 6.36" ~390ppi FullHD AMOLED, aspect ratio 20:9, Gorilla Glass
        Cameras: 50MP Wide + 13MP Ultrawide main cameras, front-facing wide-lens selfie camera
        Battery: approx. 5,500mAh, user replaceable
        Connectivity: WiFi 6, BT 5.4, NFC
        Dimensions: ~158 x 74 x 9mm
        Other: Power key fingerprint reader, user changeable backcover, RGB indication LED, Privacy Switch

Privacy by Design

        No tracking, no calling home, no hidden analytics
        User-configurable physical Privacy Switch - turn off your microphone, Bluetooth, Android apps, or whatever you wish

Scandinavian styling in its pure form

        Honouring the original Jolla Phone form factor and design
        Replaceable back cover
        Available in three distinct colours inspired by Nordic nature

Performance Meets Privacy

        5G with dual nano-SIM
        12GB RAM and 256GB storage expandable up to 2TB
        Sailfish OS 5
        Support for Android apps with Jolla AppSupport
        User replaceable back cover with colour options
        User replaceable battery
        Physical Privacy Switch


Original Submission

posted by jelizondo on Monday December 08, @02:06AM   Printer-friendly
from the first-GUI-for-IBM-PC dept.

In 1983, before the release of Microsoft Windows, Digital Research GEM, or Apple Macintosh, the office software giant VisiCorp released a graphical multitasking operating system for the IBM PC called Visi On.

It was an "open system", so anyone could make programs for it. Well, if they owned an expensive VAX computer and were prepared to shell out $7,000 on the Software Development Kit.

42 years later, although the mainframe-based development environment has been lost to time, enthusiast Nina Kalinina has pulled apart VisiCorp's Visi On to reveal some of its strange and curious internals.

https://git.sr.ht/~nkali/vision-sdk/tree/main/note/index.md

In this article, they document some of the internals, clear up some marketing misconceptions, discover some interesting Visi On quirks, and even provide a new application for it.


Original Submission

posted by hubie on Monday December 08, @01:38AM   Printer-friendly
from the Robocop dept.

https://boingboing.net/2025/12/03/waymo-drives-straight-into-active-police-scene-ignores-chaos.html

Waymo self-driving cars may know how to get you from point A to point B most of the time, but there are some things Waymos still don't understand. One of these things is that you shouldn't drive directly into an intense scene where multiple police cars are surrounding a vehicle. Unfortunately, that's precisely what Waymo did in this video [Instagram].[Not reviewed --JE]

As the Waymo cruises by, a gentleman gets out of his car and onto the ground at police command. The Waymo doesn't care for this chaos — it simply wants to keep truckin'. I really wish we could see what the passenger was doing during this whole ordeal.

Videos like these are why I refuse to ride in a Waymo. These cars seem to lack critical thinking when it's most needed. Luckily, this kind of situation (when it ends safely) makes for a hilarious video.


Original Submission

posted by hubie on Sunday December 07, @08:50PM   Printer-friendly
from the Shaka-when-the-walls-fell dept.

New research offers clues about why some prompt injection attacks may succeed:

Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available.

The team, led by Chantal Shaib and Vinith M. Suriyakumar, tested this by asking models questions with preserved grammatical patterns but nonsensical words. For example, when prompted with "Quickly sit Paris clouded?" (mimicking the structure of "Where is Paris located?"), models still answered "France."

This suggests models absorb both meaning and syntactic patterns, but can overrely on structural shortcuts when they strongly correlate with specific domains in training data, which sometimes allows patterns to override semantic understanding in edge cases. The team plans to present these findings at NeurIPS later this month.

As a refresher, syntax describes sentence structure—how words are arranged grammatically and what parts of speech they use. Semantics describes the actual meaning those words convey, which can vary even when the grammatical structure stays the same.

Semantics depends heavily on context, and navigating context is what makes LLMs work. The process of turning an input (your prompt) into an output (an LLM's answer) involves a complex chain of pattern matching against encoded training data.

To investigate when and how this pattern-matching can go wrong, the researchers designed a controlled experiment. They created a synthetic dataset by designing prompts in which each subject area had a unique grammatical template based on part-of-speech patterns. For instance, geography questions followed one structural pattern while questions about creative works followed another. They then trained Allen AI's Olmo models on this data and tested whether the models could distinguish between syntax and semantics.
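
The templated setup can be sketched in a few lines. The templates and vocabulary below are invented for illustration, not taken from the paper's actual dataset; the point is only that each domain gets its own fixed part-of-speech "shape," so syntax alone predicts the domain:

```python
# Sketch of a syntax-predicts-domain dataset. Templates and vocab are
# illustrative assumptions, not the researchers' real data.
import random

# Hypothetical POS templates: one fixed grammatical shape per domain.
TEMPLATES = {
    "geography": ["ADV", "VERB", "PROPN", "VERB"],      # "Where is Paris located?"
    "creative_works": ["PRON", "VERB", "DET", "NOUN"],  # "Who wrote the novel?"
}

VOCAB = {
    "ADV": ["Where", "Quickly"],
    "VERB": ["is", "sit", "located", "clouded", "wrote"],
    "PROPN": ["Paris", "Tokyo"],
    "PRON": ["Who"],
    "DET": ["the"],
    "NOUN": ["novel", "song"],
}

def make_prompt(domain: str, rng: random.Random) -> str:
    """Fill the domain's template with random words of the right POS.

    Because each template is unique to its domain, a model trained on this
    data can learn to predict the domain from syntax alone -- even when the
    words are nonsense, as in "Quickly sit Paris clouded?".
    """
    words = [rng.choice(VOCAB[pos]) for pos in TEMPLATES[domain]]
    return " ".join(words) + "?"

rng = random.Random(0)
for domain in TEMPLATES:
    for _ in range(3):
        print(f"{domain:15s} {make_prompt(domain, rng)}")
```

A model trained on such data can score perfectly on semantics while actually keying on the template, which is exactly the spurious correlation the probe with nonsense words is designed to expose.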

The analysis revealed a "spurious correlation" where models in these edge cases treated syntax as a proxy for the domain. When patterns and semantics conflict, the research suggests, the AI's memorization of specific grammatical "shapes" can override semantic parsing, leading to incorrect responses based on structural cues rather than actual meaning.

In layperson terms, the research shows that AI language models can become overly fixated on the style of a question rather than its actual meaning. Imagine if someone learned that questions starting with "Where is..." are always about geography, so when you ask "Where is the best pizza in Chicago?", they respond with "Illinois" instead of recommending restaurants based on some other criteria. They're responding to the grammatical pattern ("Where is...") rather than understanding you're asking about food.

This creates two risks: models giving wrong answers in unfamiliar contexts (a form of confabulation), and bad actors exploiting these patterns to bypass safety conditioning by wrapping harmful requests in "safe" grammatical styles. It's a form of domain switching that can reframe an input, linking it into a different context to get a different result.

[...] The findings come with several caveats. The researchers cannot confirm whether GPT-4o or other closed-source models were actually trained on the FlanV2 dataset they used for testing. Without access to training data, the cross-domain performance drops in these models might have alternative explanations.

The benchmarking method also faces a potential circularity issue. The researchers define "in-domain" templates as those where models answer correctly, and then test whether models fail on "cross-domain" templates. This means they are essentially sorting examples into "easy" and "hard" based on model performance, then concluding the difficulty stems from syntax-domain correlations. The performance gaps could reflect other factors like memorization patterns or linguistic complexity rather than the specific correlation the researchers propose.

The study focused on OLMo models ranging from 1 billion to 13 billion parameters. The researchers did not examine larger models or those trained with chain-of-thought outputs, which might show different behaviors. Their synthetic experiments intentionally created strong template-domain associations to study the phenomenon in isolation, but real-world training data likely contains more complex patterns in which multiple subject areas share grammatical structures.

Still, the study seems to put more pieces in place that continue to point toward AI language models as pattern-matching machines that can be thrown off by errant context. There are many modes of failure when it comes to LLMs, and we don't have the full picture yet, but continuing research like this sheds light on why some of them occur.


Original Submission

posted by hubie on Sunday December 07, @04:07PM   Printer-friendly
from the has-Netcraft-confirmed-it? dept.

By my count, Linux has over 11% of the desktop market. Here's how I got that number - and why people are making the leap:

My colleague Jack Wallen and I have been telling you for a while now that you should switch from Windows to the Linux desktop. Sounds like some of you have been listening.

The proof of the pudding comes from various sources. First, with Windows 10 nearing the end of its supported life, we told you to consider switching from Windows to Linux Mint or another Windows-like Linux distribution. What do we find now?

Zorin OS, an excellent Linux desktop, reports that its latest release, "Zorin OS 18 has amassed 1 million downloads in just over a month since its release." What makes it especially interesting is that over "78% of these downloads came from Windows" users.

[...] Many have already been making the leap. By May 2025, StatCounter data showed the Linux desktop had grown from a minute 1.5% global desktop share in 2020 to above 4% in 2024, and was at a new American high of above 5% by 2025.

In StatCounter's latest US numbers, which cover through October, Linux shows up as only 3.49%. But if you look closer, "unknown" accounts for 4.21%. Allow me to make an educated guess here: I suspect those unknown desktops are actually running Linux. What else could it be? FreeBSD? Unix? OS/2? Unlikely.

In addition, ChromeOS comes in at 3.67%, which strikes me as much too low. Leaving that aside, ChromeOS is a Linux variant. It just uses the Chrome web browser for its interface rather than KDE Plasma, Cinnamon, or another Linux desktop environment. Put all these together, and you get a Linux desktop market share of 11.37%. Now we're talking.
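
The 11.37% figure is plain addition of the three StatCounter buckets quoted above, resting on the author's guess that "unknown" is mostly Linux:

```python
# The three US StatCounter buckets the author combines (October figures
# quoted in the article). Treating "unknown" as Linux is his guess, not data.
statcounter_us = {
    "Linux": 3.49,
    "unknown (assumed Linux)": 4.21,
    "ChromeOS": 3.67,
}
desktop_linux_share = sum(statcounter_us.values())
print(f"{desktop_linux_share:.2f}%")  # → 11.37%
```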

If you want to look at the broader world of end-user operating systems, including phones and tablets, Linux comes out even better. In the US, where we love our Apple iPhones, Android -- yes, another Linux distro -- boasts 41.71% of the market share, according to StatCounter's latest numbers. Globally, however, Android rules with 72.55% of the market.

[...] Now, of course, StatCounter's numbers, as Ed Bott has pointed out, have their problems. So I also looked at my preferred data source for operating system numbers: the US federal government's Digital Analytics Program (DAP).

This site gives a running count of US government website visits and an analysis. On average, there are 1.6 billion sessions over the last 30 days, with millions of users per day. In short, DAP gives a detailed view of what people use without massaging the data.

DAP gets its raw data from a Google Analytics account. DAP has open-sourced the code, which displays the data on the web, and its data-collection code. You can download its data in JavaScript Object Notation (JSON) format so you can analyze the raw numbers yourself.
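
To give the flavor of that kind of analysis, here is a hypothetical sketch. The JSON field names (`os`, `visits`) and the embedded sample values are made up to mirror the article's percentages; check DAP's actual schema before reusing this against the real download:

```python
# Hypothetical DAP-style payload: field names and numbers are illustrative
# assumptions chosen so the Linux-kernel share comes out at the article's 23.3%.
import json

sample = json.loads("""
{"data": [
  {"os": "Windows",   "visits": 500},
  {"os": "iOS",       "visits": 200},
  {"os": "Android",   "visits": 158},
  {"os": "Chrome OS", "visits": 17},
  {"os": "Linux",     "visits": 58},
  {"os": "Other",     "visits": 67}
]}
""")

total = sum(row["visits"] for row in sample["data"])
kernel_families = {"Linux", "Android", "Chrome OS"}  # all Linux-kernel based
linux_share = 100 * sum(
    r["visits"] for r in sample["data"] if r["os"] in kernel_families
) / total
print(f"Linux-kernel share: {linux_share:.1f}%")  # → 23.3%
```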

By DAP's count, the Linux desktop now has a 5.8% market share. That may not sound impressive, but when I started looking at DAP's numbers a decade ago, the Linux desktop had a mere 0.67% share. We've come a long way.

If you add Chrome OS (1.7%) and Android (15.8%), 23.3% of all people accessing the US government's websites are Linux users. The Linux kernel's user-facing footprint is much larger than the "desktop Linux" label suggests.

[...] But wait, there's more data. According to Lansweeper, an IT asset discovery and inventory company, in its analysis of over 15 million identified consumer desktop operating systems, Linux desktops currently account for just over 6% of PC market share.

Earlier this year, I identified five drivers for people switching from Windows to Linux. These are: Microsoft's shift of focus from Windows as a product to Microsoft 365 and cloud services, the increased viability of gaming via Steam and Proton, drastically improved ease of use in mainstream distros, broader hardware support, and rising concern about privacy and data control.

Three others have emerged since then. One is that many companies and users still have perfectly good Windows 10 machines that can't "upgrade" to Windows 11. ControlUp, a company that would love to help you move to Windows 11, has found that about 25% of consumer and business Windows 10 PCs can't be moved to Windows 11.

[...] Another is that many people really, really don't want to move to Windows 11. A UK survey by consumer group Which? in September 2025 found that 26% of respondents intended to keep using Windows 10 even after updates stopped. Interestingly, 6% plan to go to an alternative operating system such as Linux.

[...] Finally, not everyone is thrilled with Windows 11 being turned into an AI-agentic operating system. Despite all the AI hype, some people don't want AI second-guessing their every move or reporting on their work to Microsoft.

After Microsoft president Pavan Davuluri tweeted on Nov. 10 that "Windows is evolving into an agentic OS, connecting devices, cloud, and AI to unlock intelligent productivity and secure work anywhere," he probably expected Windows users to be happy with this vision. They weren't.

[...] My last reason for people looking to Linux from Windows doesn't matter much to users in the US, but it matters a lot to people outside the US. You see, the European Union (EU) governments don't trust Microsoft to deliver on its service promises under potential US political pressure.

This has resulted in the rise of Digital Sovereignty initiatives, where EU companies and not American tech giants are seen as much more trustworthy. As a result, many EU states have dropped Microsoft programs and have switched to open-source software.

That includes the desktop. Indeed, one EU group has created EU OS. This is a proof-of-concept Linux desktop for a Fedora-based distro that uses the KDE Plasma desktop environment.

It's not just the EU. The UK also no longer trusts Microsoft with its data. A 2024 Computer Weekly report revealed that Microsoft told Scottish police it could not guarantee that data in Microsoft 365 and Azure would remain in the UK.

[...] Taken together, all these shifts make Linux less of a tinker's special and more of a pragmatic option for people who want out of the Windows upgrade treadmill or subscription model.

Desktop Linux is moving from perennial underdog to a small but meaningful slice of everyday computing, especially among technically inclined users, non-American public-sector agencies, and ordinary consumer and business users who want a cheaper, more trustworthy desktop.


Original Submission

posted by jelizondo on Sunday December 07, @11:26AM   Printer-friendly

https://www.geekwire.com/2025/uw-nobel-winners-lab-releases-most-powerful-protein-design-tool-yet/

David Baker's lab at the University of Washington is announcing two major leaps in the field of AI-powered protein design. The first is a souped-up version of its existing RFdiffusion2 tool that can now design enzymes with performance nearly on par with those found in nature. The second is the release of a new, general-purpose version of its model, named RFdiffusion3, which the researchers are calling their most powerful and versatile protein engineering technology to date.

Last year, Baker received the Nobel Prize in Chemistry for his pioneering work in protein science, which includes a deep-learning model called RFdiffusion. The tool allows scientists to design novel proteins that have never existed. These machine-made proteins hold immense promise, from developing medicines for previously untreatable diseases to solving knotty environmental challenges.

Baker leads the UW's Institute for Protein Design, which released the first version of the core technology in 2023, followed by RFdiffusion2 earlier this year. The second model was fine-tuned for creating enzymes — proteins that orchestrate the transformation of molecules and dramatically speed up chemical reactions.

The latest accomplishments are being shared today in publications in the leading scientific journals Nature and Nature Methods, as well as a preprint last month on bioRxiv.

In the improved version of RFdiffusion2, the researchers took a more hands-off approach to guiding the technology, giving it a specific enzymatic task to perform but not specifying other features. Or as the team described it in a press release, the tool produces "blueprints for physical nanomachines that must obey the laws of chemistry and physics to function."

"You basically let the model have all this space to explore and ... you really allow it to search a really wide space and come up with great, great solutions," said Seth Woodbury, a graduate student in Baker's lab and author on both papers publishing today.

In addition to UW scientists, researchers from MIT and Switzerland's ETH Zurich contributed to the work.

The new approach is remarkable for quickly generating higher-performing enzymes. In a test of the tool, it was able to solve 41 out of 41 difficult enzyme design challenges, compared to only 16 for the previous version.

"When we designed enzymes, they're always an order of magnitude worse than native enzymes that evolution has taken billions of years to find," said Rohith Krishna, a postdoctoral fellow and lead developer of RFdiffusion2. "This is one of the first times that we're not one of the best enzymes ever, but we're in the ballpark of native enzymes."

The researchers successfully used the model to create proteins called metallohydrolases, which accelerate difficult reactions using a precisely positioned metal ion and an activated water molecule. The engineered enzymes could have important applications, including the destruction of pollutants.

The promise of rapidly designed catalytic enzymes could unleash wide-ranging applications, Baker said.

"The first problem we really tackled with AI, it was largely therapeutics, making binders to drug targets," he said. "But now with catalysis, it really opens up sustainability."

The researchers are also working with the Gates Foundation to figure out lower-cost ways to build what are known as small molecule drugs, which interact with proteins and enzymes inside cells, often by blocking or enhancing their function to affect biological processes.

The Nature paper, titled "Computational design of metallohydrolases," was authored by Donghyo Kim, Seth Woodbury, Woody Ahern, Doug Tischer, Alex Kang, Emily Joyce, Asim Bera, Nikita Hanikel, Saman Salike, Rohith Krishna, Jason Yim, Samuel Pellock, Anna Lauko, Indrek Kalvet, Donald Hilvert and David Baker.

The Nature Methods paper, titled "Atom-level enzyme active site scaffolding using RFdiffusion2," was authored by Woody Ahern, Jason Yim, Doug Tischer, Saman Salike, Seth Woodbury, Donghyo Kim, Indrek Kalvet, Yakov Kipnis, Brian Coventry, Han Raut Altae-Tran, Magnus Bauer, Regina Barzilay, Tommi Jaakkola, Rohith Krishna and David Baker.


Original Submission

posted by jelizondo on Sunday December 07, @06:42AM   Printer-friendly

https://www.osnews.com/story/143942/freebsd-15-0-released-with-pkgbase/

The FreeBSD team has released FreeBSD 15.0, and with it come several major changes, one of which you will surely want to know more about if you're a FreeBSD user. Since this change will eventually drastically change the way you use FreeBSD, we should get right into it.

Up until now, a full, system-wide update for FreeBSD – as in, updating both the base operating system as well as any packages you have installed on top of it – would use two separate tools: freebsd-update and the pkg package manager. You used the former to update the base operating system, which was installed as file sets, and the latter to update everything you had installed on top of it in the form of packages.

With FreeBSD 15.0, this is starting to change. Instead of using two separate tools, in 15.0 you can opt to leave freebsd-update and file sets behind and rely entirely on pkg for updating both the base operating system and any packages you have installed, because with this new method the base system moves from file sets to packages. When installing FreeBSD 15.0, the installer will ask you to choose between the old method and the new pkg-only method.

Packages (pkgbase / New Method): The base system is installed as a set of packages from the "FreeBSD-base" repository. Systems installed this way are managed entirely using the pkg(8) tool. This method is used by default for all VM images and images published in public clouds. In FreeBSD 15.0, pkgbase is offered as a technology preview, but it is expected to become the standard method for managing base system installations and upgrades in future releases.

↫ FreeBSD 15.0 release announcement

As the release announcement notes, the new method is optional in FreeBSD 15 and will remain optional during the entire 15.x release cycle, but the plan is to deprecate freebsd-update and file sets entirely in FreeBSD 16.0. If you have an existing installation you wish to convert to pkgbase, there's a tool called pkgbasify to do just that. It's sponsored by the FreeBSD Foundation, so it's not some random script.

Of course, there's way more in this release than just pkgbase. Of note is that the 32-bit platforms i386, armv6, and powerpc have been retired, though 32-bit code will continue to run on their 64-bit counterparts. FreeBSD 15.0 also brings a native inotify implementation, a ton of improvements to the audio components, improved Intel Wi-Fi drivers, and so, so much more.


Original Submission

posted by jelizondo on Sunday December 07, @01:59AM   Printer-friendly

https://www.tomshardware.com/tech-industry/ibm-ceo-warns-trillion-dollar-ai-boom-unsustainable-at-current-infrastructure-costs

...says there is 'no way' that infrastructure costs can turn a profit

IBM CEO Arvind Krishna used an appearance on The Verge's Decoder podcast to question whether the capital spending now underway in pursuit of AGI can ever pay for itself. Krishna said today's figures for constructing and populating large AI data centers place the industry on a trajectory where roughly $8 trillion of cumulative commitments would require around $800 billion of annual profit simply to service the cost of capital.

The claim was tied directly to assumptions about current hardware, its depreciation, and energy, rather than any solid long-term forecasts, but it comes at a time when we've seen several companies one-upping one another with unprecedented, multi-year infrastructure projects.

Krishna estimated that filling a one-gigawatt AI facility with compute hardware requires around $80 billion. The issue is that deployments of this scale are moving from the drawing board and into practical planning stages, with leading AI companies proposing deployments with tens of gigawatts — and in some cases, beyond 100 gigawatts — each. Krishna said that, taken together, public and private announcements point to roughly one hundred gigawatts of currently planned capacity dedicated to AGI-class workloads.

At $80 billion per gigawatt, the total reaches $8 trillion. He tied those figures to the five-year refresh cycles common across accelerator fleets, arguing that the need to replace most of the hardware inside those data centers within that window creates a compounding effect on long-term capex requirements. He also placed the likelihood that current LLM-centric architectures reach AGI at between zero and 1% without new forms of knowledge integration.
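
Krishna's arithmetic can be checked on the back of an envelope. Note that the 10% cost-of-capital rate below is inferred from his own $800 billion-on-$8 trillion figures rather than something he stated separately, and the refresh line is just an illustration of the five-year depreciation he describes:

```python
# Back-of-the-envelope check of Krishna's figures. The 10% rate is inferred
# from 800B / 8T, not quoted; straight-line depreciation is an assumption.
capex_per_gw = 80e9                      # ~$80B of compute per 1 GW facility
planned_gw = 100                         # ~100 GW of announced AGI-class capacity
total_capex = capex_per_gw * planned_gw  # $8 trillion in cumulative commitments

implied_rate = 800e9 / total_capex       # annual profit needed / capex = 10%

# Five-year accelerator refresh: the hardware itself must also be earned back.
annual_depreciation = total_capex / 5    # $1.6 trillion per year

print(f"Total capex:               ${total_capex / 1e12:.1f}T")
print(f"Implied cost-of-capital:   {implied_rate:.0%}")
print(f"Annualized 5-yr refresh:   ${annual_depreciation / 1e12:.1f}T/yr")
```

On these assumptions, the refresh burden alone exceeds the $800 billion needed to service the capital, which is the "compounding effect" on long-term capex that Krishna points to.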

Krishna pointed to depreciation as the part of the calculation most underappreciated by investors. AI accelerators are typically written down over five years, and he argued that the pace of architectural change means fleets must be replaced rather than extended. "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said.

Recent financial-market criticism has centered on similar concerns. Investor Michael Burry, for example, has raised questions about whether hyperscalers can continue stretching useful-life assumptions if performance gains and model sizes force accelerated retirement of older GPUs.

The IBM chief said that ultimately, he expects generative-AI tools in their current form to drive substantial enterprise productivity, but that his concern is the relationship between the physical scale of next-gen AI infrastructure and the economics required to support it. Companies committing to these huge, multi-gigawatt campuses and compressed refresh schedules must therefore demonstrate returns that match the unprecedented capital expenditure that Krishna outlined.


Original Submission

posted by jelizondo on Saturday December 06, @09:11PM   Printer-friendly

OpenAI desperate to avoid explaining why it deleted pirated book datasets:

OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.

At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI's decision to delete the datasets could end up being a deciding factor that gives the authors the win.

It's undisputed that OpenAI deleted the datasets, known as "Books 1" and "Books 2," prior to ChatGPT's release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web and seizing the bulk of its data from a shadow library called Library Genesis (LibGen).

As OpenAI tells it, the datasets fell out of use within that same year, prompting an internal decision to delete them.

But the authors suspect there's more to the story than that. They noted that OpenAI appeared to flip-flop by retracting its claim that the datasets' "non-use" was a reason for deletion, then later claiming that all reasons for deletion, including "non-use," should be shielded under attorney-client privilege.

To the authors, it seemed like OpenAI was quickly backtracking after the court granted the authors' discovery requests to review OpenAI's internal messages on the firm's "non-use."

In fact, OpenAI's reversal only made authors more eager to see how OpenAI discussed "non-use," and now they may get to find out all the reasons why OpenAI deleted the datasets.

Last week, US magistrate judge Ona Wang ordered OpenAI to share all communications with in-house lawyers about deleting the datasets, as well as "all internal references to LibGen that OpenAI has redacted or withheld on the basis of attorney-client privilege."

According to Wang, OpenAI slipped up by arguing that "non-use" was not a "reason" for deleting the datasets, while simultaneously claiming that it should also be deemed a "reason" considered privileged.

Either way, the judge ruled that OpenAI couldn't block discovery on "non-use" just by deleting a few words from prior filings that had been on the docket for more than a year.

"OpenAI has gone back-and-forth on whether 'non-use' as a 'reason' for the deletion of Books1 and Books2 is privileged at all," Wang wrote. "OpenAI cannot state a 'reason' (which implies it is not privileged) and then later assert that the 'reason' is privileged to avoid discovery."

Additionally, OpenAI's claim that all reasons for deleting the datasets are privileged "strains credulity," she concluded, ordering OpenAI to produce a wide range of potentially revealing internal messages by December 8. OpenAI must also make its in-house lawyers available for deposition by December 19.

OpenAI has argued that it never flip-flopped or retracted anything. It simply used vague phrasing that led to confusion over whether any of the reasons for deleting the datasets were considered non-privileged. But Wang didn't buy into that, concluding that "even if a 'reason' like 'non-use' could be privileged, OpenAI has waived privilege by making a moving target of its privilege assertions."

Asked for comment, OpenAI told Ars that "we disagree with the ruling and intend to appeal."

So far, OpenAI has avoided disclosing its rationale, claiming that all the reasons it had for deleting the datasets are privileged. In-house lawyers weighed in on the decision to delete and were even copied on a Slack channel initially called "excise-libgen."

But Wang reviewed those Slack messages and found that "the vast majority of these communications were not privileged because they were 'plainly devoid of any request for legal advice and counsel [did] not once weigh in.'"

In a particularly non-privileged batch of messages, one OpenAI lawyer, Jason Kwon, only weighed in once, the judge noted, to recommend the channel name be changed to "project-clear." Wang reminded OpenAI that "the entirety of the Slack channel and all messages contained therein is not privileged simply because it was created at the direction of an attorney and/or the fact that a lawyer was copied on the communications."

The authors believe that exposing OpenAI's rationale may help prove that the ChatGPT maker willfully infringed on copyrights when pirating the book data. As Wang explained, OpenAI's retraction risked putting the AI firm's "good faith and state of mind at issue," which could increase fines in a loss.

"In a copyright case, a court can increase the award of statutory damages up to $150,000 per infringed work if the infringement was willful, meaning the defendant 'was actually aware of the infringing activity' or the 'defendant's actions were the result of reckless disregard for, or willful blindness to, the copyright holder's rights,'" Wang wrote.

In a court transcript, a lawyer representing some of the authors suing OpenAI, Christopher Young, noted that OpenAI could be in trouble if evidence showed that it decided against using the datasets for later models due to legal risks. He also suggested that OpenAI could be using the datasets under different names to mask further infringement.

Wang also found it contradictory that OpenAI continued to argue in a recent filing that it acted in good faith, while "artfully" removing "its good faith affirmative defense and key words such as 'innocent,' 'reasonably believed,' and 'good faith.'" These changes only strengthened discovery requests to explore authors' willfulness theory, Wang wrote, noting the sought-after internal messages would now be critical for the court's review.

"A jury is entitled to know the basis for OpenAI's purported good faith," Wang wrote.

The judge appeared particularly frustrated by OpenAI seemingly twisting the Anthropic ruling to defend against the authors' request to learn more about the deletion of the datasets.

In a footnote, Wang called out OpenAI for "bizarrely" citing an Anthropic ruling that "grossly" misrepresented Judge William Alsup's decision by claiming that he found that "downloading pirated copies of books is lawful as long as they are subsequently used for training an LLM."

Instead, Alsup wrote that he doubted that "any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use."

If anything, Wang wrote, OpenAI's decision to pirate book data—then delete it—seemed "to fall squarely into the category of activities proscribed by" Alsup. For emphasis, she quoted Alsup's order, which said, "such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded."

For the authors, getting hold of OpenAI's privileged communications could tip the scales in their favor, the Hollywood Reporter suggested. Some authors believe the key to winning could be testimony from Anthropic CEO Dario Amodei, who is accused of creating the controversial datasets while he was still at OpenAI. The authors think Amodei also possesses information on the destruction of the datasets, court filings show.

OpenAI tried to fight the authors' motion to depose Amodei, but a judge sided with the authors in March, compelling Amodei to answer their biggest questions on his involvement.

Whether Amodei's testimony is a bombshell remains to be seen, but it's clear that OpenAI may struggle to overcome claims of willful infringement. Wang noted there is a "fundamental conflict" in circumstances "where a party asserts a good faith defense based on advice of counsel but then blocks inquiry into their state of mind by asserting attorney-client privilege," suggesting that OpenAI may have substantially weakened its defense.

The outcome of the dispute over the deletions could influence OpenAI's calculus on whether it should ultimately settle the lawsuit. Ahead of the Anthropic settlement—the largest publicly reported copyright class action settlement in history—authors suing pointed to evidence that Anthropic became "not so gung ho about" training on pirated books "for legal reasons." That seems to be the type of smoking-gun evidence that authors hope will emerge from OpenAI's withheld Slack messages.


Original Submission

posted by jelizondo on Saturday December 06, @04:29PM   Printer-friendly

https://www.extremetech.com/computing/new-ddr5-memory-overclocking-world-record-set-at-13530-mts

Intel, Gigabyte, and Corsair have once again broken the memory overclocking world record, pushing the limit past 13,500 megatransfers per second (MT/s). Using a Gigabyte Z890 Aorus Tachyon Ice board and Corsair's Vengeance memory, the overclockers pulled out all the stops to set this new record, wielding liquid nitrogen, special chip-and-stick binning, and hours upon hours of testing.

Most DDR5 kits are designed to operate at around 5,000 to 6,000 MT/s, with the very latest CUDIMM kits offering speeds in excess of 8,000 or even 9,000 MT/s, though those can only be used with the newest Intel CPUs. But overclocking records are not about mainstream or even viable performance; they're just about pushing the limits, which this team of overclockers well and truly did.

Overclockers Sergmann and HiCookie screened around 50 CPUs and over 20 different memory kits to find the winning combination to break the record, according to VideoCardz. Then they paired them with buckets of LN2, which lowered the chip's operating temperature to -196 degrees Celsius. The motherboard was insulated, though it was already designed for extreme overclocking, with physical controls, limited features, and memory placement specifically adjusted to maximize cooling and overclocking potential.

The result is a brand-new world record: 13,530 MT/s. It wasn't benchmark-stable, as it required disabling most CPU cores, running just a single DDR5 stick by itself, and a very minimal operating system build. All in the name of just getting the OS to boot long enough to validate the overclock through CPU-Z.
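For context, the record figure converts to theoretical peak bandwidth as follows. This is a back-of-envelope sketch assuming a standard 64-bit DDR5 DIMM data bus; real-world throughput on a barely-bootable suicide run would be far lower:

```python
# Convert the record transfer rate into theoretical peak bandwidth.
record_mts = 13_530          # megatransfers per second (the new record)
bus_width_bytes = 8          # 64-bit DIMM data bus = 8 bytes per transfer

peak_bandwidth_gbs = record_mts * 1e6 * bus_width_bytes / 1e9
print(f"{peak_bandwidth_gbs:.1f} GB/s theoretical peak")
```

That works out to roughly 108 GB/s from a single stick, versus about 48 GB/s for a stock 6,000 MT/s module.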

Don't expect your next XMP profile to have anything like this, but it shows just how capable DDR5 can be under the right circumstances. Expect to see these sorts of speeds when DDR6 memory debuts with future generation CPUs and socket designs—think AM6 in 2027 and beyond.


Original Submission

posted by janrinok on Saturday December 06, @11:41AM   Printer-friendly

Let's Encrypt to Reduce Certificate Validity from 90 Days to 45 Days:

Let's Encrypt has officially announced plans to reduce the maximum validity period of its SSL/TLS certificates from 90 days to 45 days.

The transition, which will be completed by 2028, aligns with broader industry shifts mandated by the CA/Browser Forum Baseline Requirements.

This move is designed to enhance internet security by limiting the window of compromise for stolen credentials and improving the efficiency of certificate revocation technologies.

In addition to shortening certificate lifespans, the Certificate Authority (CA) will drastically reduce the "authorization reuse period," the duration for which a validated domain control remains active before re-verification is required.

Currently set at 30 days, this period will shrink to just 7 hours by the final rollout phase in 2028.

To minimize service disruption for millions of websites, Let's Encrypt is using ACME Profiles to stagger deployments. The changes will first be introduced via opt-in profiles before becoming the default standard for all users.

While most automated environments will handle these changes seamlessly, the shortened validity period necessitates a review of current renewal configurations.

Administrators relying on hardcoded renewal intervals, such as a cron job running every 60 days, will face outages, as certificates will expire before the renewal triggers.

Let's Encrypt advises that acceptable client behavior involves renewing certificates approximately two-thirds of the way through their lifetime.
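The two-thirds guidance is straightforward to compute. A minimal sketch (the dates are illustrative):

```python
from datetime import datetime, timedelta

def renewal_time(not_before: datetime, not_after: datetime) -> datetime:
    """Renew roughly two-thirds of the way through the certificate's lifetime."""
    lifetime = not_after - not_before
    return not_before + (2 * lifetime) / 3

# A 45-day certificate should therefore be renewed after ~30 days.
issued = datetime(2028, 1, 1)
expires = issued + timedelta(days=45)
when = renewal_time(issued, expires)
print(when)  # 30 days after issuance: 2028-01-31 00:00:00
```

This is exactly why a hardcoded 60-day cron interval fails under the new regime: the certificate expires a full 15 days before the job ever fires.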

To facilitate this, the organization recommends enabling ACME Renewal Information (ARI), a feature that allows the CA to signal precisely when a client should renew.

Manual certificate management is strongly discouraged, as the administrative burden of renewing every few weeks increases the likelihood of human error and expired certificates.

The reduction in authorization reuse means clients must prove domain control more frequently. To address the friction this causes for users who cannot easily automate DNS updates, Let's Encrypt is collaborating with the IETF to standardize a new validation method: DNS-PERSIST-01.

Expected to launch in 2026, this protocol allows for a static DNS TXT entry. Unlike the current DNS-01 challenge, which requires a new token for every renewal, DNS-PERSIST-01 permits the initial verification record to remain unchanged.

This development will enable automated renewals for infrastructure where dynamic DNS updates are restricted or technically difficult, reducing the reliance on cached authorizations.


Original Submission

posted by janrinok on Saturday December 06, @06:53AM   Printer-friendly
from the so-long-and-thanks-for-all-the-fish dept.

Micron cites AI data center demand as reason for killing DIY upgrade brand:

On Wednesday, Micron Technology announced it will exit the consumer RAM business in 2026, ending 29 years of selling RAM and SSDs to PC builders and enthusiasts under the Crucial brand. The company cited heavy demand from AI data centers as the reason for abandoning its consumer brand, a move that will remove one of the most recognizable names in the do-it-yourself PC upgrade market.

"The AI-driven growth in the data center has led to a surge in demand for memory and storage," Sumit Sadana, EVP and chief business officer at Micron Technology, said in a statement. "Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments."

Micron said it will continue shipping Crucial consumer products through the end of its fiscal second quarter in February 2026 and will honor warranties on existing products. The company will continue selling Micron-branded enterprise products to commercial customers and plans to redeploy affected employees to other positions within the company.

[...] The surprise announcement from Micron follows a period of rapidly escalating memory prices, as we reported in November. A typical 32GB DDR5 RAM kit that cost around $82 in August now sells for about $310, and higher-capacity kits have seen even steeper increases.

DRAM contract prices have increased 171 percent year over year, according to industry data. Gerry Chen, general manager of memory manufacturer TeamGroup, warned that the situation will worsen in the first half of 2026 once distributors exhaust their remaining inventory. He expects supply constraints to persist through late 2027 or beyond.

The fault lies squarely at the feet of AI mania in the tech industry. The construction of new AI infrastructure has created unprecedented demand for high-bandwidth memory (HBM), the specialized DRAM used in AI accelerators from Nvidia and AMD. Memory manufacturers have been reallocating production capacity away from consumer products toward these more profitable enterprise components, and Micron has presold its entire HBM output through 2026.

[...] For Micron, the calculus is clear: Enterprise customers pay more and buy in bulk. But for the DIY PC community, the decision will leave PC builders with one fewer option when reaching for the RAM sticks. In his statement, Sadana reflected on the brand's 29-year run.

"Thanks to a passionate community of consumers, the Crucial brand has become synonymous with technical leadership, quality and reliability of leading-edge memory and storage products," Sadana said. "We would like to thank our millions of customers, hundreds of partners and all of the Micron team members who have supported the Crucial journey for the last 29 years."

Also see: Micron ditches consumer memory brand Crucial to chase AI riches


Original Submission

posted by janrinok on Saturday December 06, @02:12AM   Printer-friendly

https://www.extremetech.com/computing/raspberry-pi-launches-1gb-model-at-45-temporarily-raises-prices-on-higher

Once AI-driven DRAM pricing normalizes, Raspberry Pi says it intends to reduce prices back down to align with its low-cost computing mission.

On Monday, Raspberry Pi launched a new 1GB variant of Raspberry Pi 5 priced at $45. The brand simultaneously raised prices on most higher-capacity boards due to surging LPDDR4 memory costs driven by AI demand.

The company announced that beginning this week, quite a few Raspberry Pi 4 and 5 models will cost more:

Raspberry Pi 4

        4GB model: $55 to $60

        8GB model: $75 to $85

Raspberry Pi 5

        1GB entry-level model: launches at $45

        2GB model: $50 to $55

        4GB model: $60 to $70

        8GB model: $80 to $95

        16GB model: $120 to $145

Compute Module 5

        16GB variants: increase of $20

Lower-capacity Raspberry Pi 4 boards, all Raspberry Pi 3+ and earlier models, and Raspberry Pi Zero products will maintain their existing prices. The classic 1GB Raspberry Pi 4 remains at $35.

Raspberry Pi says the price increases help secure memory supplies through a constrained market in 2026. LPDDR4 memory costs are rising sharply as manufacturers shift production to AI-supporting memory types, such as HBM and newer generations. Industry analysts report that commodity memory prices have climbed by more than 100% year-on-year and could roughly double again by mid-2026.

The company frames these hikes as temporary. When (or if) AI-driven DRAM pricing normalizes, Raspberry Pi will reportedly bring prices back down to align with its low-cost computing mission.


Original Submission