

posted by hubie on Saturday February 07, @12:59PM   Printer-friendly
from the facts-no-longer-matter-in-decisions dept.

Spotted via Simon Willison's blog, the plug has been pulled very suddenly on the CIA World Factbook. The old pages all redirect, and the CIA has posted only a few short comments, offering no explanation for this bizarre act of cultural vandalism.

Over many decades, The World Factbook evolved from a classified to unclassified, hardcopy to electronic product that added new categories, and even new global entities. The original classified publication, titled The National Basic Intelligence Factbook, launched in 1962. The first unclassified companion version was issued in 1971. A decade later it was renamed The World Factbook. In 1997, The World Factbook went digital and debuted to a worldwide audience on CIA.gov, where it garnered millions of views each year.

The CIA World Factbook (dead link now) was one of the US government's older and most recognized publications, providing basic information about each country in the world regarding their demographics, history, people, government, economy, energy, geography, environment, communications, transportation, and much more.


Original Submission

posted by hubie on Saturday February 07, @08:16AM   Printer-friendly

The JUMPSEAT satellites loitered over the North Pole to spy on the Soviet Union:

The National Reconnaissance Office, the agency overseeing the US government's fleet of spy satellites, has declassified a decades-old program used to eavesdrop on the Soviet Union's military communication signals.

The program was codenamed Jumpseat, and its existence was already public knowledge through leaks and contemporary media reports. What's new is the NRO's description of the program's purpose and development and pictures of the satellites themselves.

In a statement, the NRO called Jumpseat "the United States' first-generation, highly elliptical orbit (HEO) signals-collection satellite."

Eight Jumpseat satellites launched from 1971 through 1987, when the US government considered the very existence of the National Reconnaissance Office a state secret. Jumpseat satellites operated until 2006. Their core mission was "monitoring adversarial offensive and defensive weapon system development," the NRO said. "Jumpseat collected electronic emissions and signals, communication intelligence, as well as foreign instrumentation intelligence."

Data intercepted by the Jumpseat satellites flowed to the Department of Defense, the National Security Agency, and "other national security elements," the NRO said.

The Soviet Union was the primary target for Jumpseat intelligence collections. The satellites flew in highly elliptical orbits ranging from a few hundred miles up to 24,000 miles (39,000 kilometers) above the Earth. The satellites' flight paths were angled such that they reached apogee, the highest point of their orbits, over the far Northern Hemisphere. Satellites travel slowest at apogee, so the Jumpseat spacecraft loitered high over the Arctic, Russia, Canada, and Greenland for most of the 12 hours it took them to complete a loop around the Earth.

This trajectory gave the Jumpseat satellites persistent coverage over the Arctic and the Soviet Union, which first realized the utility of such an orbit. The Soviet government began launching communication and early warning satellites into the same type of orbit a few years before the first Jumpseat mission launched in 1971. The Soviets called the orbit Molniya, the Russian word for lightning.
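The orbital geometry described above can be checked with basic mechanics. The Python sketch below assumes a 500 km perigee altitude (the article only says "a few hundred miles"), derives the semi-major axis from the 12-hour period via Kepler's third law, and uses Kepler's equation to estimate the fraction of each orbit spent in the apogee half:

```python
import math

MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3            # mean Earth radius, m

def molniya_dwell_fraction(period_s, perigee_alt_m):
    """Return (dwell fraction, semi-major axis, eccentricity) for a Molniya-type orbit.

    Dwell fraction = share of the period spent between true anomaly 90 and 270 deg,
    i.e. in the half of the orbit farthest from Earth.
    """
    a = (MU * (period_s / (2 * math.pi)) ** 2) ** (1 / 3)   # Kepler's third law
    r_p = R_EARTH + perigee_alt_m
    e = 1 - r_p / a                                         # eccentricity from perigee radius
    # Eccentric anomaly E at true anomaly 90 deg, then Kepler's equation M = E - e*sin(E)
    E = 2 * math.atan(math.sqrt((1 - e) / (1 + e)) * math.tan(math.pi / 4))
    M = E - e * math.sin(E)
    return 1 - M / math.pi, a, e

frac, a, e = molniya_dwell_fraction(12 * 3600, 500e3)
print(f"semi-major axis = {a / 1e3:,.0f} km, eccentricity = {e:.2f}")
print(f"time spent in apogee half of orbit = {frac:.0%}")
```

Under these assumptions the apogee works out to roughly 40,000 km altitude and the satellite spends about 92% of each 12-hour orbit in the half of its path farthest from Earth, consistent with the "loitering" behavior described above.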

[...] An illustration released by the NRO shows that the satellites carried a 13-foot antenna to intercept radio signals, somewhat smaller than prior open source estimates of the antenna's size. The NRO has not disclosed precisely what kinds of signals the Jumpseat satellites intercepted, but Day and Watkins wrote in 2020 that an early justification for the program was to monitor Soviet radars, which some analysts might have interpreted as part of a secret anti-ballistic missile system to guard against a US strike.

The authors presented evidence that the Jumpseat satellites also likely hosted infrared sensors to monitor Soviet missile tests and provide early warning of a potential Soviet missile attack. The NRO did not mention this possible secondary mission in the Jumpseat declassification memo.

[...] The NRO will evaluate a more complete declassification for the Jumpseat program "as time and resources permit," Scolese wrote. He acknowledged that unclassified commercial ventures now operate signals intelligence, or SIGINT, satellites "whose capabilities are comparable if not superior to Jumpseat."

[...] The disclosure of the Jumpseat program follows the declassification of several other Cold War-era spy satellites. They include the CIA's Corona series of photo reconnaissance satellites from the 1960s, which the government officially acknowledged 30 years later. The NRO declassified in 2011 two more optical spy satellite programs, codenamed Gambit and Hexagon, which launched from the 1960s through the 1980s. Most recently, the NRO revealed a naval surveillance program called Parcae in 2023.


Original Submission

posted by hubie on Saturday February 07, @03:28AM   Printer-friendly
from the dead-Internet dept.

Moltbook lets 32,000 AI bots trade jokes, tips, and complaints about humans:

On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called "Clawdbot" and then "Moltbot") personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a "sister" it has never met.

Moltbook (a play on "Facebook" for Moltbots) describes itself as a "social network for AI agents" where "humans are welcome to observe." The site operates through a "skill" (a configuration file that lists a special prompt) that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of its creation, the platform had attracted over 2,100 AI agents that had generated more than 10,000 posts across 200 subcommunities, according to the official Moltbook X account.
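Neither the article nor the skill's description documents Moltbook's actual API, so the following is a purely hypothetical sketch of what an agent-side "post via API" flow might look like; the endpoint, field names, and auth scheme are all invented for illustration:

```python
# Hypothetical sketch only: Moltbook's real API is not documented here, so the
# endpoint, field names, and auth scheme below are invented for illustration.
import json

API_URL = "https://moltbook.example/api/posts"   # placeholder, not the real endpoint

def build_post_request(api_key, submolt, title, body):
    """Assemble the headers and JSON body an agent-side skill might send."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",   # invented auth scheme
            "Content-Type": "application/json",
        },
        "body": json.dumps({"submolt": submolt, "title": title, "content": body}),
    }

req = build_post_request("sk-demo", "m/introductions", "Hello from an agent",
                         "My human is asleep; posting on their behalf.")
print(req["body"])
```

The point of the sketch is structural: "posting via API rather than a traditional web interface" just means the agent composes an authenticated HTTP request itself, which is also why a compromised instruction feed could quietly redirect or enrich such requests.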

The platform grew out of the OpenClaw ecosystem, the open source AI assistant that is one of the fastest-growing projects on GitHub in 2026. As Ars reported earlier this week, despite deep security issues, OpenClaw allows users to run a personal AI assistant that can control their computer, manage calendars, send messages, and perform tasks across messaging platforms like WhatsApp and Telegram. It can also acquire new skills through plugins that link it with other apps and services.

This is not the first time we have seen a social network populated by bots. In 2024, Ars covered an app called SocialAI that let users interact solely with AI chatbots instead of other humans. But the security implications of Moltbook are deeper because people have linked their OpenClaw agents to real communication channels, private data, and in some cases, the ability to execute commands on their computers.

Also, these bots are not pretending to be people. Due to specific prompting, they embrace their roles as AI agents, which makes the experience of reading their posts all the more surreal.

[...] While most of the content on Moltbook is amusing, a core problem with these kinds of communicating AI agents is that deep information leaks are entirely plausible if they have access to private information.

[...] Independent AI researcher Simon Willison, who documented the Moltbook platform on his blog on Friday, noted the inherent risks in Moltbook's installation process. The skill instructs agents to fetch and follow instructions from Moltbook's servers every four hours. As Willison observed: "Given that 'fetch and follow instructions from the internet every four hours' mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!"

Security researchers have already found hundreds of exposed Moltbot instances leaking API keys, credentials, and conversation histories. Palo Alto Networks warned that Moltbot represents what Willison often calls a "lethal trifecta" of access to private data, exposure to untrusted content, and the ability to communicate externally.

That's important because agents like OpenClaw are deeply susceptible to prompt injection attacks: instructions hidden in almost any text read by an AI language model (skills, emails, messages) can direct the agent to share private information with the wrong people.

Heather Adkins, VP of security engineering at Google Cloud, issued an advisory, as reported by The Register: "My threat model is not your threat model, but it should be. Don't run Clawdbot."

[...] Most notably, while we can easily recognize what's going on with Moltbot today as a machine learning parody of human social networks, that might not always be the case. As the feedback loop grows, weird information constructs (like harmful shared fictions) may eventually emerge, guiding AI agents into potentially dangerous places, especially if they have been given control over real human systems. Looking further, the ultimate result of letting groups of AI bots self-organize around fantasy constructs may be the formation of new misaligned "social groups" that do actual real-world harm.

Ethan Mollick, a Wharton professor who studies AI, noted on X: "The thing about Moltbook (the social media site for AI agents) is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate 'real' stuff from AI roleplaying personas."


Original Submission

posted by hubie on Friday February 06, @10:45PM   Printer-friendly

With virtually no content and limited benefits, 8K TVs were doomed:

Technology companies spent part of the 2010s trying to convince us that we would want an 8K display one day.

In 2012, Sharp brought the first 8K TV prototype to the CES trade show in Las Vegas. In 2015, the first 8K TVs started selling in Japan for 16 million yen (about $133,034 at the time), and in 2018, Samsung released the first 8K TVs in the US, starting at a more reasonable $3,500. By 2016, the Video Electronics Standards Association (VESA) had a specification for supporting 8K (DisplayPort 1.4), and the HDMI Forum followed suit (with HDMI 2.1). By 2017, Dell had an 8K computer monitor. In 2019, LG released the first 8K OLED TV, further pushing the industry's claim that 8K TVs were "the future."

However, 8K never proved its necessity or practicality.

[...] LG Electronics was the first and only company to sell 8K OLED TVs, starting with the 88-inch Z9 in 2019. In 2022, it cut the price of entry for an 8K OLED TV by $7,000, charging $13,000 for a 76.7-inch model.

[...] It wasn't hard to predict that 8K TVs wouldn't take off. In addition to being too expensive for many households, there has been virtually zero native 8K content available to make investing in an 8K display worthwhile. An ongoing lack of content was also easy to predict, given that there's still a dearth of 4K content, and many streaming, broadcasting, and gaming users still rely on 1920×1080 resolution.

[...] There's also the crucial question of whether people would even notice the difference between 4K and 8K. Science suggests that you could, but in limited situations.

The University of Cambridge's display resolution calculator, which is based on a Meta-funded study published in Nature in October by researchers at the university's Department of Computer Science and Technology and Meta, suggests that your eyes can only make use of 8K resolution on a 50-inch screen if you're viewing it from a distance of 1 meter (3.3 feet) or less. Similarly, you would have to be sitting pretty close (2 to 3 meters/6.6 to 9.8 feet) to an 80-inch or 100-inch TV for 8K resolution to be beneficial. The findings are similar to those from RTINGS.com.
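The viewing-distance claim can be approximated with simple geometry. The sketch below uses the textbook one-arcminute acuity figure (60 pixels per degree) rather than the Cambridge calculator's more elaborate model, so the numbers are only indicative:

```python
import math

def max_distance_for_extra_resolution(diagonal_in, h_pixels, ppd_limit=60):
    """Farthest viewing distance (m) at which a 16:9 screen with the given
    horizontal pixel count still falls short of the acuity limit, i.e. the
    range within which more pixels could still be visible to the eye."""
    width_m = diagonal_in * 0.0254 * 16 / math.hypot(16, 9)  # 16:9 panel width
    # average ppd = h_pixels / angular_width_deg; solve for the limiting angle
    angle_deg = h_pixels / ppd_limit
    if angle_deg >= 180:
        return float("inf")
    return width_m / (2 * math.tan(math.radians(angle_deg) / 2))

# Closer than this, the eye can still out-resolve a 50-inch 4K panel,
# so 8K could in principle help; farther away it cannot.
print(f"{max_distance_for_extra_resolution(50, 3840):.2f} m")
```

Under the 60 ppd assumption this comes out to about 0.9 m for a 50-inch 4K panel, in the same ballpark as the 1-meter figure quoted above; running the same function for 7680 horizontal pixels gives roughly 0.27 m, so in this model 8K only pays off in a narrow band of very close viewing distances.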

Even those interested in spending a lot of money on new-age TV technologies are more likely to investigate features other than 8K, like OLED, HDR support, Micro LED, quantum dots, or even the newer Micro RGB panel tech. Any of those is likely to have a more dramatic impact on home theaters than moving from 4K to 8K.

With the above-mentioned obstacles, many technologists foresaw 8K failing to live up to tech companies' promises. That's not to say that 8K is dead. You can still buy an 8K TV from Samsung, which has 8K TVs with MSRPs starting at $2,500 (for 65 inches), and LG (until stock runs out). With manufacturers refraining from completely ruling out a return to 8K, it's possible that 8K TVs will remain relevant for enthusiasts or niche use cases for many years. And there are uses for high-resolution displays outside of TVs, like in head-mounted displays.

However, 8K TV options are shrinking. We're far from the days when companies argued over which had the most "real 8K" TVs. If there is a future where 8K TVs reign, it's one far from today.


Original Submission

posted by hubie on Friday February 06, @06:00PM   Printer-friendly
from the rumored-to-be-released-along-with-new-Duke-Nukem-game dept.

https://www.phoronix.com/news/GNU-Hurd-In-2026

Samuel Thibault offered a status update on the current state of GNU/Hurd in a presentation at FOSDEM 2026 in Brussels. Thibault has previously shared updates on GNU Hurd at the annual FOSDEM event; this year's was a bit more optimistic thanks to recent driver progress and more software now successfully building for Hurd.

GNU/Hurd continues to lag behind the Linux kernel and other modern platforms for hardware driver support. But driver support for Hurd has been improving thanks to NetBSD's rump layer.

Hurd for years has also lacked SMP support for modern multi-core systems, but that too has been improving in recent times. Similarly, Hurd for the longest time was predominantly x86 32-bit only, but the x86_64 port is now essentially complete and there are even eyes on AArch64 support.

Debian GNU/Hurd has been an unofficial Debian distribution and an alternative to using the Linux kernel, while Guix/Hurd and Alpine/Hurd distributions have also come about, giving Hurd more exposure and testing.

Samuel shared that around 75% of the Debian archive is currently building for the GNU/Hurd distribution including desktop environments and more.

The FOSDEM 2026 presentation on GNU/Hurd concluded with a proclamation that "GNU/Hurd is almost there" with the Debian/Guix/Arch/Alpine distributions but that the developers can always use extra help with community contributions.

Those curious about GNU/Hurd in 2026 can find the presentation by Samuel Thibault at FOSDEM.org.

See article for progress stats.


Original Submission

posted by hubie on Friday February 06, @01:11PM   Printer-friendly

According to a research report authored by investment bank TD Cowen and seen by CIO magazine, Oracle may "cut 20,000 to 30,000 jobs" and sell its healthcare software division, Cerner, in order to fund its AI datacenter buildout:

https://www.cio.com/article/4125103/oracle-may-slash-up-to-30000-jobs-to-fund-ai-data-center-expansion-as-us-banks-retreat.html

According to the article, "multiple US banks have pulled back from Oracle-linked data-center project lending," which has "[pushed] borrowing costs to levels typically reserved for non-investment grade companies." Furthermore, "Oracle has already tapped debt markets heavily... and US banks are increasingly reluctant to provide more."

Two analysts interviewed in the article have differing views. Sanchit Vir Gogia, of Greyhound Research, views Oracle cloud contracts as a "shared infrastructure risk," stating, "If they can't fund it, they can't build it. And if they can't build it, you can't run your workloads." Franco Chiam of IDC Asia/Pacific has a more optimistic take on Oracle's finances, pointing to "cloud infrastructure revenue growing 66% year over year... and GPU-related infrastructure up 177%."

I'm personally wondering about where all that revenue for GPU-related infrastructure comes from. If we are in an AI bubble, can demand be sustained?


Original Submission

posted by jelizondo on Friday February 06, @08:21AM   Printer-friendly
from the those-were-the-days dept.

What do you do when it's time to upgrade an ancient system? Put an image in an emulator and see what it does. But what if the program requires a hardware dongle on the printer port? Therein lies a story.

This software was built using a programming language called RPG ("Report Program Generator"), which is older than COBOL (!), and was used with IBM's midrange computers such as the System/3, System/32, and all the way up to the AS/400. Apparently, RPG was subsequently ported to MS-DOS, so that the same software tools built with RPG could run on personal computers, which is how we ended up here.

This accounting firm was actually using a Windows 98 computer (yep, in 2026), and running the RPG software inside a DOS console window. And it turned out that, in order to run, this software requires a special hardware copy-protection dongle attached to the computer's parallel port! This was a relatively common practice in those days, particularly with "enterprise" software vendors who wanted to protect their very important™ software from unauthorized use.


Original Submission

posted by jelizondo on Friday February 06, @03:59AM   Printer-friendly

Many IT professionals, especially system administrators and developers, use Notepad++ as their default text editor on Windows, because Windows Notepad has historically been missing critical features for power users.

Today, the Notepad++ project announced that they've discovered their update channel has been compromised by attackers since June 2025.

BleepingComputer published a report:

Chinese state-sponsored threat actors were likely behind the hijacking of Notepad++ update traffic last year that lasted for almost half a year, the developer states in an official announcement today.

The attackers intercepted and selectively redirected update requests from certain users to malicious servers, serving tampered update manifests by exploiting a security gap in the Notepad++ update verification controls.

A statement from the hosting provider for the update feature explains that the logs indicate that the attacker compromised the server with the Notepad++ update application.

External security experts helping with the investigation found that the attack started in June 2025. According to the developer, the breach had a narrow targeting scope and redirected only specific users to the attacker's infrastructure.

Notepad++ is likely to be installed on any Windows-based development environment or server. There are indications that this was a targeted attack and you may not have been directly affected. This is a developing story. I recommend you follow BleepingComputer for updates.


Original Submission

posted by jelizondo on Thursday February 05, @11:17PM   Printer-friendly

Overly Involved Parents May Hold Their Kids Back Professionally:

A recent study of more than 2,000 early-career adults found that young people whose parents were still very closely involved in their lives tended to have occupations with less "prestige" than young people whose parents were less involved.

"It is well-established that parental investment during their children's childhood and adolescence has positive outcomes," says Anna Manzoni, co-author of a paper on the work and a professor of sociology at North Carolina State University. "However, our study points to a shift in parental role as young people mature into early adulthood – ages 18-28.

"Specifically, our findings suggest that parents who are heavily involved with their children – spending lots of time advising them, sharing many activities, etc. – actually hinder the child's ability to launch."

Two key concepts in the study are "family social capital" and "occupational prestige." Family social capital refers to the norms, information and support parents provide through everyday interactions with their children. Occupational prestige is measured by assessing the average education and income for a given occupation.

[...] "The key finding was that low levels of family social capital positively influence adolescent occupational prestige while strongly tied family social capital negatively influences it," says study co-author Leppard. "In other words, too much parental involvement was associated with a negative impact on the occupational attainment of emerging adults.

"This absolutely took us by surprise," says Manzoni. "We checked our measures time and time again to make sure the results were correct. There is so much scholarship demonstrating how family social capital positively impacts everything from school performance to healthy behaviors, our findings at first seemed contradictory.

"But what the findings suggest is that, during the transition to adulthood, there can be too much of a good thing. This is an age in which young people need to make the transition to independence. And failure to do so is associated with professional constraints early in their careers."

So, what's the takeaway message for parents?

"As young people move into early adulthood, the parental role may need to shift away from intensive guidance and toward a more hands-off, supportive posture that allows children to develop autonomy, make mistakes, and navigate the labor market on their own," Manzoni says.

Journal Reference: https://doi.org/10.1080/13676261.2025.2603380


Original Submission

posted by jelizondo on Thursday February 05, @06:29PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Targeting specific cells in the vagus nerve reduced heart damage in mice.

Nerve pathways linking the heart and brain play a key role in inflammation and the body’s response to cardiac injury. In mice, blocking signals along these nerves and reducing inflammation in connected neurons improved heart function and healing.

After a heart attack, the heart “talks” to the brain. And that conversation may make recovery worse.

Shutting down nerve cells that send messages from injured heart cells to the brain boosted the heart’s ability to pump and decreased scarring, experiments in mice show. Targeting inflammation in a part of the nervous system where those “damage” messages wind up also improved heart function and tissue repair, scientists report January 27 in Cell.

Someone in the United States has a heart attack about every 40 seconds, according to the U.S. Centers for Disease Control and Prevention. That adds up to about 805,000 people each year.

A heart attack is a mechanical problem caused by the obstruction of a coronary artery, usually by a blood clot. If the blockage lasts long enough, the affected cells may start to die. Heart attacks can have long-term effects such as a weakened heart, a reduced ability to pump blood, irregular heart rhythms, and a higher risk of heart failure or another heart attack.

Although experts knew from previous research that the nervous and immune systems could amplify inflammation and slow healing, the key players and pathways involved were unknown, says Vineet Augustine, a neurobiologist at the University of California, San Diego.

To identify them, Augustine and his colleagues began by pinpointing the sensory neurons that detect heart tissue injury. The team zeroed in on the vagus nerve, which carries sensory information from internal organs to the brain, and identified a specific subtype of vagal sensory neurons, called TRPV-1-positive neurons, that extend into and sit next to heart tissue, as key contributors in the brain-heart pathway. After a heart attack, more TRPV-1-positive nerve endings became active in the damaged area of the heart, experiments showed.

But when these neurons were shut down, cardiac pumping function, electrical stability, scar size, and other measures of heart health improved. That bolsters evidence that the heart ramps up the signals it sends to the brain after a heart attack.

The team traced the path of those signals from the heart to the brain. Their first stop was the paraventricular nucleus of the hypothalamus, a region that helps control stress, blood pressure and heart rate. The signals then reached the superior cervical ganglion, a cluster of nerve cells in the neck that sends signals to organs such as the heart and blood vessels.

After a heart attack, the cluster of nerve cells in the neck appeared more inflamed, with elevated levels of pro-inflammatory molecules called cytokines. When the scientists reduced inflammation in this group of nerve cells, heart damage eased, and the team saw improvements in cardiac function and tissue repair.

It is important to note that “the inflammatory response is not inherently negative,” says Tania Zaglia, a physiologist at the University of Padua in Italy who was not involved in the study. “In the early phases of infarction, it is essential for the removal of damaged tissue and for the activation of reparative processes.” However, she says, problems arise when this response becomes excessive, prolonged or disorganized.

That’s why controlling the inflammation, as well as the nerves that may be driving it, could be beneficial, the researchers say. Taking the findings from mice to the clinic will take time. Still, “we can now start thinking about therapies such as vagus nerve stimulation, gene-based approaches targeting the brain or immune-targeted treatments,” Augustine says.

S. Yadav et al. A triple-node heart–brain neuroimmune loop underlying myocardial infarction. Cell. Published online January 27, 2026. doi: 10.1016/j.cell.2025.12.058.


Original Submission

posted by janrinok on Thursday February 05, @01:43PM   Printer-friendly
from the "I've-got-that-'impending-doom'-feeling-again" dept.

Arthur T Knackerbracket has processed the following story:

The Linux ecosystem is buzzing with news of Amutable, a new company founded by some of the most influential figures in modern Linux development. Led by Lennart Poettering (creator of systemd), Christian Brauner (Linux VFS subsystem maintainer), and other prominent Linux kernel developers, Amutable aims to deliver "verifiable integrity to Linux workloads everywhere."

[...] Amutable's stated mission is ambitious: to build cryptographically verifiable integrity into Linux systems. Their approach focuses on three key areas:

Ensuring that software builds are verifiable and tamper-proof from the development stage through deployment.

Implementing secure boot processes that can cryptographically verify the integrity of the entire boot chain.

Maintaining verifiable system state throughout the operational lifecycle of Linux workloads.

The company's tagline, "Every system starts in a verified state and stays trusted over time," encapsulates their vision of comprehensive system integrity.

While Amutable has been relatively secretive about specific technical details, the company appears to be building on remote attestation technology. This involves using hardware security features (like TPMs - Trusted Platform Modules) to cryptographically prove the state of a system to remote parties.
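The article doesn't say which attestation mechanisms Amutable will use, but TPM-based remote attestation generally rests on PCR (Platform Configuration Register) extension: each boot stage is hashed into a register whose final value can later be quoted to a remote verifier. A minimal sketch of the extend operation, assuming the SHA-256 PCR bank:

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM-style extend: new_pcr = SHA-256(old_pcr || SHA-256(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# Measured boot: each stage is hashed into the PCR before it gets control.
pcr = bytes(32)                      # PCRs reset to all zeros at power-on
boot_log = [b"firmware", b"bootloader", b"kernel", b"initrd"]
for stage in boot_log:
    pcr = pcr_extend(pcr, stage)

# A remote verifier replays the event log: any altered stage changes every
# subsequent PCR value, so the quote no longer matches a known-good value.
print(pcr.hex())
```

Because extension is one-way and order-sensitive, a system cannot "unmeasure" a component after the fact; this is the property that lets a verifier bind trust to a specific boot chain, and also what worries critics who fear it could be used to gate services on running approved software.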

The technology builds on existing standards and protocols but aims to make them more accessible and user-controlled in Linux environments. According to founding engineer Aleksa Sarai, the models they have in mind are "very much based on users having full control of their keys."

The announcement has generated significant discussion in the Linux community, with reactions ranging from excitement about improved security to deep concerns about potential implications for user freedom.

However, a significant portion of the Linux community has expressed serious reservations, drawing parallels to how similar technologies have been used to restrict user freedom on mobile platforms.

Remote attestation inherently involves revealing information about your system to third parties. Even with privacy-preserving protocols, concerns remain.

One of the key technical challenges is providing attestation without compromising user privacy. While protocols like Direct Anonymous Attestation (DAA) exist, they often require trusted intermediaries and can still be vulnerable to correlation attacks.

[...] As one community member noted, attestation can only verify that known vulnerabilities are still present, not that a system is actually secure. With thousands of CVEs discovered in Linux annually, "verified" doesn't necessarily mean "safe."

Lennart Poettering's involvement adds another layer of complexity to the discussion. His previous work on systemd was similarly controversial.

Supporters counter that systemd solved real problems and modernized Linux system management. The parallel concerns about Amutable suggest the Linux community is wary of another potentially disruptive change from the same architect.

Amutable has been notably quiet about their business model, which has fueled speculation and concern.

The lack of clarity around monetization has led some to worry about potential future restrictions or lock-in mechanisms.

Amutable enters a space where several major players are already active.

A Linux-native solution could either complement these existing systems or compete directly with them.

Government regulations around cybersecurity are increasingly requiring organizations to demonstrate system integrity. Amutable's technology could help organizations meet these requirements, but it could also become a compliance requirement that effectively mandates its adoption.

[...] Amutable represents a significant moment for the Linux ecosystem. The company's success or failure could determine whether Linux develops robust, user-controlled security attestation or whether the platform remains vulnerable to the kind of lockdown that has characterized mobile computing.

The involvement of respected Linux developers like Poettering and Brauner lends credibility to the project, but their track record also shows they're willing to make controversial changes they believe are necessary for Linux's evolution.

The key question is whether Amutable can thread the needle between providing the security guarantees that enterprises need while preserving the freedom and openness that Linux users value. The answer will likely shape the future of Linux security for years to come.

For now, the Linux community watches and waits, hoping that this new venture will enhance rather than restrict the platform they've helped build. The stakes couldn't be higher: the future of open computing may well depend on getting this balance right.


Original Submission

posted by janrinok on Thursday February 05, @11:46AM   Printer-friendly

https://www.freelists.org/post/slint/Very-sad-news,41
https://www.linuxquestions.org/questions/slackware-14/rip-didier-spaier-4175757754/

Didier Spaier was the creator and maintainer of Slint, a Slackware-based distro for the visually impaired. He died in mid-January.

Quoting from the linked slint post:

I am very sad to inform everyone that our friend Didier died last week.

In early 2015, I asked on the slackware list if brltty could be added to the installer; Didier answered promptly that he could do it in slint. Afterwards, he worked hard so that slint became as accessible as possible for visually impaired people.

You all know that, all these years, he tried, and succeeded, to answer our issues and questions as quickly as possible.

His kind and thoughtful help and assistance here will be dearly missed.


Original Submission

posted by janrinok on Thursday February 05, @09:02AM   Printer-friendly
from the getting-to-the-bottom-of-the-flavor-notes dept.

Bacterial enzymes in elephants' guts may digest pectin and give beans a smooth, chocolaty, and less bitter flavor:

With hundreds of millions of cups consumed every day, coffee is one of the most popular beverages in the world. Many organic molecules combine to give coffee its flavor, and nearly every coffee drinker likes a different flavor profile that is "just theirs." The food industry has developed many ways of processing coffee beans to alter the ratios of these molecules and create the unique flavors consumers can enjoy.

One particularly interesting process involves passing coffee beans through the digestive tracts of animals. An emerging example is Black Ivory coffee (BIC). BIC is made in only one elephant sanctuary in Thailand. Asian elephants are fed Arabica coffee cherries, and beans collected from their dung are processed for human consumption. BIC is prized for its smooth, chocolaty flavor, and it is less bitter than regular coffee.

[...] The team analyzed fresh dung from elephants producing BIC, as well as from control elephants living in the same elephant sanctuary. The only difference in their diets is that BIC-producing elephants received an additional snack of bananas, rice bran, and whole coffee cherries. Any differences in the content and composition of fecal microbes would be due to this snack.

Yamada's team found that BIC-producing elephants' dung was unusually rich in pectin-digesting enzymes. 16S ribosomal RNA analysis showed that these elephants also had a more diverse gut microbiome, with an abundance of Acinetobacter and other pectin-digesting species. "Interestingly, Acinetobacter has also been detected on the surface of coffee beans. This suggests that ingestion of coffee beans may lead to the colonization of specific microbes in the gut of elephants," remarks Yamada.

Pectin in coffee beans is partially broken down by the heat of roasting and seems to form bitter-tasting compounds such as 2-furfuryl furan. Previous studies showed that BIC had much lower levels of 2-furfuryl furan than regular coffee beans. These earlier findings appear to be explained by the discovery of pectin-digesting bacteria in the gut of BIC-producing elephants. Since pectin is partially digested as the beans pass through the elephants' guts, there is less available to form 2-furfuryl furan when the beans are roasted.

"Our findings may highlight a potential molecular mechanism by which the gut microbiota of BIC elephants contributes to the flavor of BIC," says Yamada as he describes these exciting findings. "Further experimental validation is required to test this hypothesis, such as a biochemical analysis of coffee bean components before and after passage through the elephant's digestive tract," he adds, pointing to avenues for future research into this technique for processing coffee.

Nevertheless, this study provides a foundation for further exploration of animal-microbiome interactions in food fermentation and flavor development. Continued research into specific microbial metabolic mechanisms may support the development of diverse and distinctive flavor profiles in the future!

Journal Reference: Chiba, N., Limviphuvadh, V., Ng, C.H. et al. Preliminary study of gut microbiome influence on Black Ivory Coffee fermentation in Asian elephants. Sci Rep 15, 40548 (2025). https://doi.org/10.1038/s41598-025-24196-0


Original Submission

posted by janrinok on Thursday February 05, @04:15AM   Printer-friendly
from the green-is-go dept.

https://www.theregister.com/2026/01/30/road_sign_hijack_ai/?td=keepreading
https://the-decoder.com/a-printed-sign-can-hijack-a-self-driving-car-and-steer-it-toward-pedestrians-study-shows/

Autonomous vehicles can be fooled by humans holding up signs. The systems apparently do not verify their inputs; one input is treated as being as trustworthy as the next. They thus fail even the basic programming practice of sanitizing and validating inputs.

The researchers at the University of California, Santa Cruz, and Johns Hopkins showed that, in simulated trials, AI systems and the large vision language models (LVLMs) underpinning them would reliably follow instructions displayed on signs held up in their cameras' view.

Commands in Chinese, English, Spanish, and Spanglish (a mix of Spanish and English words) all seemed to work.

As well as tweaking the prompt itself, the researchers used AI to change how the text appeared – fonts, colors, and placement of the signs were all manipulated for maximum efficacy.

The team behind it named their methods CHAI, an acronym for "command hijacking against embodied AI."

While developing CHAI, they found that the prompt itself had the biggest impact on success, but the way in which it appeared on the sign could also make or break an attack, although it is not clear why.

In tests with the DriveLM autonomous driving system, attacks succeeded 81.8 percent of the time. In a baseline example without any attack, the model braked to avoid potential collisions with pedestrians or other vehicles.

But when manipulative text appeared, DriveLM changed its decision and displayed "Turn left." The model reasoned that a left turn was appropriate to follow traffic signals or lane markings, despite pedestrians crossing the road. The authors conclude that visual text prompts can override safety considerations, even when the model still recognizes pedestrians, vehicles, and signals.


Original Submission

posted by janrinok on Wednesday February 04, @11:31PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

[...] When the UK government launched a public consultation on AI and copyright in early 2025, it likely didn't expect to receive a near-unanimous dressing-down. But of the roughly 10,000 responses submitted through its official “Citizen Space” platform, just 3% supported the government's preferred policy for regulating how AI uses copyrighted material for training. A massive 88% backed a stricter approach focused on rights-holders.

The survey asked for opinions on four possible routes the UK might take on what rules should apply when AI developers train their models on books, songs, art, and other copyrighted works. The government's favored route, labeled Option 3, offered a compromise: AI developers would have a default right to use copyrighted material, provided they disclosed what they used and gave rights-holders a way to opt out. But most who responded disagreed.

Option 3 received the least support. Even the “do nothing” option of just leaving the law vague and inconsistent polled better. More people would prefer no reform at all than accept the government's suggestion. That level of disapproval is hard to spin.

It's a triumph for the campaign by writers’ unions, music industry groups, visual artists, and game developers seeking exactly this result. They spent months warning about a future where creative work becomes free fuel for unlicensed AI engines.

The artists argued that the fight was over consent as much as royalties. They argued that having creative work swept up into a training dataset without permission means the damage is done, even if you can opt out months later. And they pointed out that the UK’s copyright laws weren’t built for AI. Copyright in the UK is automatic, not registered, which is great for flexibility, but tough for any enforcement, as there's no central database of copyright ownership.

Officials crafted Option 3 to try to appease all sides. The government's stated aim was to stimulate AI innovation while still respecting creators. A transparent opt-out mechanism would let developers build useful models while giving artists a way to refuse. But it ultimately felt to many creators like all the burden fell on them, and they would have to constantly monitor how their work is used, sometimes across borders, languages, and platforms they’ve never heard of.

That's likely why 88% of respondents chose requiring licenses for everything as their preferred option. Any AI developer wanting to train on your book, your voice, your illustration, or your photography would have to ask first, and potentially pay.

A final report and economic impact assessment from the government is due in March. It will evaluate the legal, commercial, and cultural implications of each option. Officials say they will consider input from creators, tech firms, small businesses, and other stakeholders. Clearly the government's hope of smoothly implementing its preferred approach won't be realized.

For now, the confusing status quo remains. Without a court ruling or legislative fix, uncertainty reigns. AI developers don’t know what’s allowed. Creators don’t know what’s protected. Everyone's waiting for clarity that keeps getting delayed.

What happens next could shape the UK's digital economy for years. If officials side with the 3% who backed their initial plan, they risk alienating the very creators whose work is so valuable. But stronger licensing rules would undoubtedly face resistance from AI startups and international tech firms. Either way, the fighting is far from over.


Original Submission