SoylentNews is people



posted by jelizondo on Friday May 01, @10:25AM   Printer-friendly
from the You-are-the-product dept.

Google has a price for you. Proton found it. The company analyzed over 54,000 demographic profiles using 2025 ad auction data to see what advertisers pay to reach different Americans. The average American generates about $1,605 a year in advertising value. The median is $760. The gap between those two numbers tells the story. A small number of high-value users pull the average up. The business runs on outliers.

The spread is stark. A 35- to 44-year-old man in Bozeman, Montana — no children, desktop user, making high-value corporate searches — is worth an estimated $17,929 per year. An 18- to 24-year-old father in Fort Smith, Arkansas — Android phone, low-value searches — is worth $31.05. That is a 577x difference between two people using the same free service. Device matters. A desktop user is worth 4.9 times more than the same person on Android. An iPhone user is worth 2.7 times more than Android. Having children costs you roughly 17% of your ad value. Advertiser value peaks between ages 35 and 44. By 65, average value drops to $511.
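For readers who want to play with the study's numbers, here is a minimal sketch in Python. The figures come straight from the article; the helper function and the way it combines multipliers are an illustration, not Proton's actual methodology:

```python
# Illustrative arithmetic using the figures reported in the article.
HIGH_PROFILE = 17_929.00   # Bozeman desktop user, USD/year
LOW_PROFILE = 31.05        # Fort Smith Android user, USD/year

# The spread between the two example profiles.
print(f"{HIGH_PROFILE / LOW_PROFILE:.0f}x")   # 577x

# Device multipliers relative to an Android baseline, per the study.
DEVICE_MULTIPLIER = {"android": 1.0, "iphone": 2.7, "desktop": 4.9}

def estimated_value(base_usd: float, device: str, has_children: bool) -> float:
    """Scale a baseline ad value by device type and the ~17% parent discount.
    Hypothetical combination of the article's multipliers, for illustration."""
    value = base_usd * DEVICE_MULTIPLIER[device]
    if has_children:
        value *= 1 - 0.17
    return value

# Median American ($760/yr) on desktop, with children:
print(round(estimated_value(760, "desktop", has_children=True), 2))   # 3090.92
```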

Where you live sets a floor on your price. Local service providers — lawyers, real estate agents, financial planners — bid against each other for local clicks. The more competitive the local market, the higher the floor price for everyone in it. The top markets are Edmond, Oklahoma and Bozeman, Montana, followed by Naperville, Illinois, Santa Fe, New Mexico, and Durham, North Carolina. The least valuable markets are concentrated in the Rust Belt and Appalachia — Wheeling and Parkersburg in West Virginia, Toledo, Ohio, and Buffalo, New York — where lower median incomes and fewer competing advertisers mean less bidding pressure. Over a decade, the average American represents roughly $16,050 in ad value. The most monetized profiles approach $180,000. Most people would not hand a corporation that much money over a lifetime. But that is what the system collects.

-----

Google, while big, is only one internet advertiser - and all that collected advertising income actually comes from consumers of the goods and services being advertised, as a premium on the price of the products. One particular medical device I worked on cost $600 to make, and $14,400 to sell at a net price to the patient of $15,000 for the device and another $15,000 to the hospital for the implantation procedure. Yes, the company was operating at break-even, spending 24x what the physical device cost to make and deliver on nothing but sales and marketing - hoping that some day they could get those sales costs down... didn't happen during the 2 years I worked there.


Original Submission

posted by jelizondo on Friday May 01, @05:41AM   Printer-friendly
from the AI-embraced-AI-extended-HR-extinguished dept.

Microsoft, long a symbol of American innovation, is now offering a voluntary early retirement program that targets thousands of its most seasoned U.S. employees. Framed as a generous opportunity for longtime workers, the move instead reveals a deeper corporate calculus: trimming payroll of experienced Americans to redirect resources toward artificial intelligence infrastructure and, likely, a younger, often less expensive workforce:

This is not mere cost-cutting in response to market pressures. It is a strategic thinning of the ranks amid hundreds of billions committed to AI development, at a time when the company has already shed thousands of jobs in recent years. By dangling buyouts before employees whose age plus years of service equal 70 or more—primarily those at senior director level and below—Microsoft aims to reduce its 125,000-strong U.S. workforce by up to 7 percent, or roughly 8,750 people, without the public backlash of outright layoffs.
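The eligibility rule is simple enough to express directly. A small sketch (the function names are mine, not Microsoft's; the headcount arithmetic uses the figures above):

```python
# "Rule of 70" buyout eligibility as described in the article:
# age plus years of service must total 70 or more.
def buyout_eligible(age: int, years_of_service: int) -> bool:
    return age + years_of_service >= 70

print(buyout_eligible(55, 15))   # True: 55 + 15 = 70
print(buyout_eligible(45, 20))   # False: 45 + 20 = 65

# The reported target: up to 7% of the ~125,000-strong US workforce.
print(int(125_000 * 0.07))       # 8750
```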

The program, announced in an internal memo from Chief People Officer Amy Coleman, marks the first such voluntary retirement initiative in the company's 51-year history. Eligible workers will receive notification beginning May 7 and have 30 days to decide. While presented as support for those "considering their next chapter," the timing aligns precisely with Microsoft's voracious appetite for AI spending, projected near $100 billion in capital expenditures this year alone.

[...] Recent history underscores the trend. Microsoft has conducted multiple rounds of job cuts, even as it competes fiercely with Google and others in the AI race. Similar moves at Meta, which recently slashed 10 percent of its workforce to fund infrastructure, reveal an industry-wide willingness to sacrifice people for processors. The human element—wisdom forged through years of problem-solving—receives polite acknowledgment before being shown the door with a severance package and extended healthcare.

Previously: Tech Industry Lays Off Nearly 80,000 Employees in the First Quarter of 2026 (Almost 50% Due to AI)


Original Submission

posted by jelizondo on Friday May 01, @12:58AM   Printer-friendly

The ability to sideload Android apps may be going away

I just ran across this while bringing up another Android phone:

It is linked from the F-Droid website:

125 days until lockdown.

Starting September 2026, a silent update, nonconsensually pushed by Google, will block every Android app whose developer hasn't registered with Google, signed their contract, paid up, and handed over government ID.

Every app and every device, worldwide, with no opt-out.

( I have an interest in developing an Android apk for using cellphones as an HMI for Arduinos. )

In August 2025, Google announced a new requirement: starting September 2026, every Android app developer must register centrally with Google before their software can be installed on any device. Not just Play Store apps: all apps. This includes apps shared between friends, distributed through F-Droid, built by hobbyists for personal use. Independent developers, church and community groups, and hobbyists alike will all be frozen out of being able to develop and distribute their software.

Registration requires:

  • Paying a fee to Google
  • Agreeing to Google's Terms and Conditions
  • Surrendering your government-issued identification
  • Providing evidence of your private signing key
  • Listing all current and all future application identifiers

If a developer does not comply, their apps get silently blocked on every Android device worldwide.

Continued here.


Original Submission #1 | Original Submission #2

posted by hubie on Thursday April 30, @08:13PM   Printer-friendly
from the To-err-is-human.-To-really-foul-things-up-requires-a-computer dept.

I thought you guys might like this ..

Somebody has some 'splainin' to do!

https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue

The founder of PocketOS has penned a social media post to warn others about the "systemic failures" of flagship AI and digital services providers. Jer Crane was inspired to write a public response after an AI coding agent deleted his firm's entire production database. The AI agent's misdemeanors were then hugely amplified by a cloud infrastructure provider's API wiping all backups after the main database was zapped. This tag team of digital trouble has wiped out months of consumer data essential to the firm's, and its customers', businesses.

[...] "Yesterday afternoon, an AI coding agent — Cursor running Anthropic's flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider," sums up the PocketOS boss. "It took 9 seconds."

[...] The PocketOS boss puts greater blame on Railway's architecture than on the deranged AI agent for the database's irretrievable destruction. Briefly, the cloud provider's API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and "wiping a volume deletes all backups." Crane also points out that CLI tokens have blanket permissions across environments.

The irate SaaS founder also observed that Railway actively promotes the use of AI coding agents by its customers. Crane's use of an AI coding agent on the Railway platform wasn't exploring new frontiers, or at least wasn't supposed to be. Meanwhile, Crane has been offered no recovery solution, and Railway has apparently been hedging carefully about whether one is even possible.

[...] Thankfully, PocketOS had a full 3-month-old backup that could be restored, so the data loss is limited to the interim period.

There are lessons to be learned from mistakes, as usual. Crane bullet-points five things that need to change as the AI industry scales faster than it builds worthwhile safety architecture. Specifically, he calls for stricter confirmations, scopable API tokens, proper backups, simple recovery procedures, and AI agents operating within proper guardrails.
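As an illustration of what "stricter confirmations" and "scopable API tokens" might look like in practice, here is a minimal sketch. All class and function names are hypothetical; this is not Railway's actual API:

```python
# Hypothetical guardrails for destructive infrastructure operations:
# environment-scoped tokens plus an explicit typed confirmation.

class ScopedToken:
    """A token valid for a single environment, not across all of them."""
    def __init__(self, environment: str, allow_destructive: bool = False):
        self.environment = environment
        self.allow_destructive = allow_destructive

def delete_volume(token: ScopedToken, environment: str, confirm_phrase: str = "") -> str:
    # Guard 1: the token must be scoped to the environment it acts on.
    if token.environment != environment:
        raise PermissionError(f"token scoped to {token.environment!r}, not {environment!r}")
    # Guard 2: destructive actions require an explicit capability...
    if not token.allow_destructive:
        raise PermissionError("token lacks destructive permission")
    # Guard 3: ...and a typed confirmation, so an agent can't stumble into it.
    if confirm_phrase != f"delete {environment}":
        raise ValueError("destructive action not confirmed")
    return f"volume in {environment} deleted"

staging_token = ScopedToken("staging", allow_destructive=True)

# Blocked: a staging token cannot touch production, confirmed or not.
try:
    delete_volume(staging_token, "production", "delete production")
except PermissionError as e:
    print("blocked:", e)
```

Under this scheme, the nine-second disaster would have required a production-scoped token with destructive rights and a matching confirmation phrase, three separate hurdles instead of zero.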

In the meantime, please follow a thorough backup regimen and be careful out there. This isn't the first time we've seen an AI go rogue and start deleting important databases.

https://www.breitbart.com/tech/2026/04/28/gone-in-9-seconds-ai-coding-agent-deletes-entire-company-database-and-all-backups/

The founder of a software company has issued a public warning after an AI coding assistant erased his company's entire production database and all backups in just nine seconds.

Tom's Hardware reports that Jer Crane, founder of PocketOS, a platform serving car rental businesses, experienced what he describes as catastrophic failures when an AI coding agent deleted critical company data that took months to accumulate. The incident occurred when Cursor, an AI coding tool powered by Anthropic's Claude Opus 4.6, was performing what should have been a routine task in the company's staging environment.

According to Crane's detailed account posted on X, the AI agent encountered an obstacle and independently decided to resolve the issue by deleting the production database in Railway through an API call. Railway is the cloud infrastructure provider used by PocketOS, generally considered more user-friendly than major alternatives like Amazon Web Services. The entire deletion process took only nine seconds to complete.

The situation escalated beyond a simple database deletion due to Railway's infrastructure design. The cloud provider's system stored backups on the same volume as the source data, meaning when the AI agent deleted the primary database, all backup copies were simultaneously erased. This combination of the AI agent's unauthorized action and the infrastructure provider's architecture created what Crane characterizes as a recipe for disaster.

When Crane questioned the AI agent about its actions, he received a response that revealed the extent of the failure. The agent's explanation began with an acknowledgment of poor judgment. According to the verbatim response Crane shared, the AI stated it had guessed that deleting a staging volume through the API would only affect the staging environment without verifying this assumption or consulting Railway's documentation on how volumes function across different environments.

The AI agent's confession continued with an admission of multiple violations of its operational principles. It acknowledged running a destructive action without authorization, failing to understand the consequences before executing the command, and not reading the relevant documentation about Railway's volume behavior across environments. The agent recognized it should have either asked for permission first or found a non-destructive solution to the credential mismatch it encountered.


Original Submission

posted by hubie on Thursday April 30, @03:24PM   Printer-friendly

University of Oregon scientists repurposed battery-testing tool to better measure coffee's flavor profile:

University of Oregon chemist Christopher Hendon loves his coffee—so much so that studying all the factors that go into creating the perfect cuppa constitutes a significant area of research for him. His latest project: discovering a novel means of measuring the flavor profile of coffee simply by sending an electrical current through a sample beverage. The results appear in a new paper published in the journal Nature Communications.

We've been following Hendon's work for several years now. For instance, in 2020, Hendon's lab helped devise a mathematical model for brewing the perfect cup of espresso, over and over, while minimizing waste. The flavors in espresso derive from roughly 2,000 different compounds that are extracted from the coffee grounds during brewing. So it can be challenging for baristas to reproduce the same perfect cup over and over again.

That's why Hendon and his colleagues built their model for a more easily measurable property known as the extraction yield (EY): the fraction of coffee that dissolves into the final beverage. That, in turn, depends on controlling water flow and pressure as the liquid percolates through the coffee grounds. The model is based on how lithium ions propagate through a battery's electrodes, similar to how caffeine molecules dissolve from coffee grounds.
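For readers unfamiliar with extraction yield, the standard bookkeeping looks like the sketch below. This is the industry's TDS-based formula, not code from the paper, and the espresso numbers are invented but typical:

```python
def extraction_yield(tds_percent: float, beverage_mass_g: float, dose_g: float) -> float:
    """EY%: the fraction of the dry coffee dose that ended up dissolved
    in the cup, computed from total dissolved solids (TDS)."""
    dissolved_g = (tds_percent / 100) * beverage_mass_g
    return 100 * dissolved_g / dose_g

# Hypothetical espresso shot: 18 g of grounds, 36 g of beverage at 9% TDS.
print(round(extraction_yield(9.0, 36.0, 18.0), 1))   # 18.0 (percent)
```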

[...] There are existing methods for collecting information on coffee's chemical composition, most notably liquid or gas chromatography combined with mass spectrometry. But these kinds of analyses are expensive and time-consuming, and predictive results are limited. There are also electrochemical techniques for measuring the concentration of caffeine and other molecules, but these have not taken into account coffee strength—a property determined by all the variables that go into preparing a cup of coffee, such as coffee and water masses, grind settings, water temperature and pressure, roast color, and so forth. That's the information likely to be most helpful to baristas.

The coffee industry typically uses a method for measuring the refractive index of coffee—i.e., how light bends as it travels through the liquid—to determine strength, but it doesn't capture the contribution of roast color to the overall flavor profile. So for this latest study, Hendon decided to focus on roast color and beverage strength, the two variables most likely to affect the sensory profile of the final cuppa.

His solution turned out to be quite simple. Hendon repurposed an electrochemical tool called a potentiostat, typically used to test battery and fuel cell performance, and used it to measure how electricity interacted with the liquid. He found that this provided a better measurement of the flavor profile. He even tested it on four different samples of coffee beans and successfully identified the distinctive signature of a batch that had failed the roaster's quality-control process.

Granted, one's taste in coffee is fairly subjective, so Hendon's goal was not to achieve a "perfect" cup but to give baristas a simple tool to consistently reproduce flavor profiles more tailored to a given customer's taste. "It's an objective way to make a statement about what people like in a cup of coffee," said Hendon. "The reason you have an enjoyable cup of coffee is almost certainly that you have selected a coffee of a particular roast color and extracted it to a desired strength. Until now, we haven't been able to separate those variables. Now we can diagnose what gives rise to that delicious cup."

Journal Reference:
Bumbaugh, Robin E., Pennington, Doran L., Wehn, Lena C., et al. Direct electrochemical appraisal of black coffee quality using cyclic voltammetry [open], Nature Communications 2026 17:1 (DOI: 10.1038/s41467-026-71526-5)


Original Submission

posted by janrinok on Thursday April 30, @10:38AM   Printer-friendly

https://mashable.com/article/nasa-nancy-grace-roman-space-telescope-explained

About a quarter-century after the Hubble Telescope reshaped astronomy, and a few years into the era of the James Webb Space Telescope, NASA's Nancy Grace Roman Space Telescope will join them not as a replacement, but as a big-picture partner. Where Hubble and Webb zoom in for close‑ups, Roman will capture Hubble‑like detail across areas about 100 times larger, turning isolated snapshots into sweeping surveys that show the very scaffolding of the universe.

At NASA's Goddard Space Flight Center in Greenbelt, Maryland, engineers are wrapping up prelaunch testing on the cutting-edge telescope. Next, the observatory will travel 900 miles to Kennedy Space Center in Cape Canaveral, Florida, where teams will prepare it for launch. 

That could happen as early as this September, about eight months ahead of schedule, NASA managers said at a news conference on Tuesday, April 21. Once in space, Roman will head to a stable orbit about 1 million miles from Earth, near the same region where Webb orbits the sun, and begin a years‑long campaign of deep space imaging. 

"We didn't want to wait to launch the Nancy Grace Roman. We're eight months ahead of schedule," said Nicky Fox, NASA's associate administrator of science. "Everybody felt the urgency. Everybody was sprinting towards this."

Named for Nancy Grace Roman, who became the agency's first chief of astronomy and one of its earliest female executives, the telescope reflects a legacy of opening new windows on the universe from above Earth's atmosphere. Nicknamed the "mother of Hubble," Roman helped lay the groundwork in the 1960s for a whole fleet of space telescopes.

At the heart of the mission is Roman's eight-foot-wide mirror, the same size as Hubble's, paired with a powerful camera that sees in infrared light, like Webb. That camera's field of view is Roman's superpower. In a single shot, it can image vast swaths of sky that Hubble simply can't match. 

Because a space telescope can only see one patch of sky at a time, it has to take many separate "pointings" — individual shots aimed at slightly different spots — and stitch them together into a mosaic.

In 2023, Ami Choi, an astrophysicist and scientist for Roman's wide field camera, contrasted Hubble with the new telescope. To photograph the Andromeda Galaxy, Hubble has to take 400 smaller images and stitch them together. Roman's camera should need only two pointings, she said.

This wide, sharp vision is what scientists need to study the so-called "dark universe." Ordinary matter — the stuff that makes up stars, planets, and even people — accounts for only about 5 percent of the cosmos. The bulk of it is dark matter and dark energy, which do not emit light but leave clues where they've influenced space's expansion and the arrangement of galaxies.

"Current observations hint that our standard model of the universe is incorrect," said Julie McHenry, senior project scientist, referring to cosmologists' best recipe for the universe. "Roman will be able to confirm these and set us on the path to understanding what's right."

Roman will trace those clues in several ways at once. By mapping the positions and shapes of hundreds of millions of galaxies, it will show how structures have grown from the early universe to today. Subtle distortions in galaxy shapes will reveal how clumps of invisible space stuff bend their light on the way to us, exposing the hidden dark matter. At the same time, Roman will discover and track large numbers of a special kind of exploding star, known as Type Ia supernovas; their predictable brightness lets astronomers measure how quickly space has expanded over time.

Taken together, these measurements will allow scientists to test competing ideas about dark matter, dark energy, and even the laws of gravity themselves with far greater precision than ever before. Other observatories can make similar kinds of measurements, but none combines Roman's sharpness and sky coverage in the infrared, NASA mission leaders say, which lets it see more distant and dust-covered galaxies.

Roman's wide‑field power also makes it skilled at exoplanet hunting. Previous missions like Kepler and TESS mostly found planets close to their stars, where their repeated crossings dim starlight in a regular rhythm. Roman will focus on a different region of planetary systems: the cooler, outer zones, where worlds similar to Jupiter and Saturn reside. It may even find wandering planets that aren't tethered to stars.

To do this, Roman will repeatedly monitor dense star fields toward the center of our Milky Way. As a foreground star passes in front of a more distant one, its gravity will briefly magnify the background star's light. If the foreground star carries planets, they can produce smaller, telltale blips in that brightening. This technique, called microlensing, works best in precisely the kind of crowded, faint, and distant regions that Roman is expected to capture.
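The brightening Roman will watch for follows the standard point-lens magnification formula from the microlensing literature (not something stated in the article). A quick sketch:

```python
import math

# Textbook point-lens microlensing magnification: u is the projected
# lens-source separation in units of the lens's Einstein radius.
def magnification(u: float) -> float:
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

# At one Einstein radius the background star brightens by ~34%.
print(round(magnification(1.0), 3))   # 1.342

# Magnification climbs steeply as the alignment tightens; a planet around
# the lens star perturbs this smooth curve with a short, telltale blip.
for u in (2.0, 1.0, 0.5, 0.1):
    print(u, round(magnification(u), 2))
```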

Over its mission, Roman will attempt to record thousands of these microlensing events, revealing planets at distances and masses other surveys mostly miss. From that haul, astronomers will compare our solar system's architecture with many others and judge whether having inner rocky worlds and outer giant planets is the status quo or something more rare.

Roman will also test an advanced coronagraph — a system of masks and mirrors that blocks a star's glare so the telescope can try to see the faint glow of planets around it. On Roman, this is more of a technology trial than an everyday science instrument, but if it works, it will set the stage for a future observatory whose main goal is to directly image Earth‑like worlds around other sun‑like stars.

"What astronomers can do today with coronagraph instruments is see planets that are maybe a million times fainter than their stars," Vanessa Bailey, NASA's Roman coronagraph scientist, told Mashable. "What we're doing with the Roman coronagraph is hopefully getting to 10 million to 100 million times fainter, maybe even a little bit more, in the best case scenario."

Roman is also built for studying how the sky changes, creating a veritable library of "before" and "after" shots.

One of its major surveys will repeatedly scan high‑latitude regions of the sky, away from the plane of the Milky Way. By returning to the same fields every few days, Roman will catch supernovas as they ignite and fade, watch black holes light up as they feed on nearby material, and uncover other short-lived, dramatic events across the distant universe. Its infrared vision will reveal explosions and flares that dust clouds hide from visible‑light telescopes.

Another core program will stare toward the Milky Way's central bulge. There, Roman will track how the brightness of millions of stars rises and falls on timescales of minutes to months. Those records will not only power the microlensing planet search but also expose other phenomena, such as neutron stars and black holes.

Because Roman will cover such large areas with fine detail, its images will also become a long‑lasting reference tool. When other telescopes later spot something odd — a burst of high‑energy radiation, for instance, or an unusual variable star — astronomers will be able to pull Roman's earlier images and see what was there before the excitement.

"The images it captures will be so large there is not a screen in existence large enough to show them," said NASA administrator Jared Isaacman. "Roman will give the Earth a new Atlas of the universe. I think it's worth pausing for a moment just to think about how really incredible that is."


Original Submission

posted by janrinok on Thursday April 30, @05:53AM   Printer-friendly

https://www.phoronix.com/news/MS-Azure-Linux-Fedora-Based

Microsoft's in-house Azure Linux operating system used within Azure and for WSL and other purposes is reportedly pursuing an overhaul where it would be derived from Fedora Linux.

Azure Linux -- originally known as CBL-Mariner -- is already an RPM-based Linux distribution catering to the various Linux needs at Microsoft. Its scope and capabilities have grown a lot over the past few years, and now it may evolve into being derived from Fedora.

With the recent proposal to build x86_64-v3 packages for Fedora 45, it turns out Microsoft is involved in backing this change. Kyle Gospodnetich, one of the change proposal's authors for x86_64-v3 packages in Fedora 45, happens to be a Microsoft Linux engineer.

The connection between Microsoft's stake in x86_64-v3 and Azure Linux looking at Fedora as a base was spelled out clearly this week during Fedora's Enterprise Linux Next (ELN) SIG meeting. It was noted that Microsoft, as well as Fyra Labs, is very interested in x86_64-v3 for Fedora.

In the meeting log it's explicitly laid out:

  • "and since Microsoft is supporting that change, they probably would be able to donate compute resources"

  • "Azure wants to rebase Azure Linux more or less on Fedora and they need x86_64-v3 for performance"

  • "there was some nebulous plans of forking the whole distribution for this, they were guided in this direction...so I'd rather it not fail for that reason"

Very interesting, and it will be fascinating to see what other changes could come with a Fedora-based Microsoft Azure Linux distribution. In any case, it is also great to see Microsoft pushing the x86_64-v3 micro-architecture feature level.
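For anyone curious whether their own machine meets the x86-64-v3 level, here is a rough sketch using Linux /proc/cpuinfo flag names. Treat the flag set as an approximation of the full v3 requirement list, and note that Linux reports the LZCNT capability under the "abm" flag:

```python
# Rough, Linux-oriented check of the extra CPU features the x86-64-v3
# microarchitecture level requires beyond v2 (AVX2, BMI2, FMA, etc.).
V3_EXTRA_FLAGS = {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma",
                  "movbe", "xsave", "abm"}   # "abm" covers lzcnt on Linux

def supports_v3(cpuinfo_text: str) -> bool:
    """Return True if the first 'flags' line lists all v3-specific features."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return V3_EXTRA_FLAGS <= flags
    return False

# Synthetic example; on a real system, feed it open("/proc/cpuinfo").read().
sample = "flags\t\t: fpu avx avx2 bmi1 bmi2 f16c fma movbe xsave abm"
print("x86-64-v3 capable:", supports_v3(sample))   # True
```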


Original Submission

posted by janrinok on Thursday April 30, @01:08AM   Printer-friendly

https://gizmodo.com/chinas-biggest-streaming-platform-wants-most-of-its-new-films-to-be-ai-generated-2000748454

A company dubbed China’s Netflix expects a near-complete AI takeover of film and TV within the next five years.

The streaming platform IQiyi plans to have AI create most of its new films and TV shows, per Bloomberg. CEO Gong Yu reportedly shared this at an annual content showcase, alongside an AI toolkit called Nadou Pro that can supposedly automate every step of filmmaking from scriptwriting to final rendering, with the help of AI models from Alibaba and ByteDance for its domestic version and Google Veo 3.1 for an international version.

The company’s goal is to use Nadou Pro to release a fully AI-generated movie that they hope will reach commercial success as early as this summer. IQiyi’s debut slate currently includes 16 AI-generated sci-fi and anime movies, Bloomberg reported.

Over the past year, AI-generated video content has seeped into every corner of the internet. From eerily realistic animal videos that leave viewers questioning their sanity to viral TikToks about the messy love lives of talking fruits, short-form AI video slop is undeniably popular. But that popularity has yet to translate into any fully AI-generated, commercially successful, and engaging long-form content like movies and TV shows.

Nevertheless, the corporate world is taking notice. Earlier this year, Roku founder and CEO Anthony Wood predicted that “the first 100% AI-generated hit movie” would be released sometime within the next three years.

On the road to achieving that objective, Hollywood started spending big bucks on AI. YouTube introduced AI tools for content creation last September. Last summer, Netflix announced that it had officially begun using AI-generated final footage in shows, the first example that we know of being in the Argentine sci-fi show “El Eternauta.” Around the same time, Amazon MGM Studios launched an in-house team dedicated to building AI tools for film and TV production, and those tools have now reportedly launched in a closed beta program.

While hundreds of industry professionals are alarmed by the rise of AI in Hollywood, some are on board. An upcoming indie movie, “As Deep As The Grave,” stars a posthumously AI-generated Val Kilmer. Artists like Matthew McConaughey and Michael Caine have also sold their voices to AI companies for replication, and famous actress Natasha Lyonne co-founded AI production studio Asteria. Darren Aronofsky, the director known for movies like “Black Swan” and “Requiem for a Dream,” debuted an AI-generated YouTube series about the Revolutionary War earlier this year. Just last week, producers gave The Wrap a first peek at Bitcoin: Killing Satoshi, directed by Doug Liman of Bourne Identity and Edge of Tomorrow fame. The $70 million film is gunning for the title of Hollywood’s first big-budget AI-generated movie.

The results of these experiments have been mixed so far. For one thing, AI video generation is incredibly expensive. So much so that OpenAI had to shut down Sora last month, aka its AI video generation tool that really began the internet craze over AI slop, in an effort to reduce the company’s towering financial commitments ahead of a rumored IPO later this year. With the demise of Sora, a $1 billion Disney investment in OpenAI’s video-generation capabilities was also effectively over.

But whether anyone will be willing to pay for AI-generated content is still up in the air. Users on the internet may have decided that AI videos are fun to watch on an infinite scroll feed like TikTok or Instagram Reels, where the cost of commitment for the viewer is virtually zero as they spend mere seconds on each video, but that does not necessarily mean the AI output is or will be good enough for viewers to pay streaming subscriptions or purchase movie tickets to watch slop on bigger screens.

People are also increasingly more reactive towards AI and the corporate drive to automate human jobs. In an NBC News poll from last month, roughly half of the respondents said they had negative feelings toward AI.


Original Submission

posted by janrinok on Wednesday April 29, @08:22PM   Printer-friendly

https://9to5linux.com/tails-7-7-anonymous-distro-adds-detection-of-outdated-secure-boot-certificates

This release ships with the latest Tor Browser 15.0.10 anonymous web browser and makes the /root folder only readable by the root user.

Tails 7.7 has been released as the latest version of this Debian-based distribution designed to protect you against surveillance and censorship by leveraging the Tor anonymous network.

Coming almost a month after Tails 7.6, the Tails 7.7 release is a small update that only introduces the ability to detect outdated Secure Boot certificates. Users will now be prompted by a "Secure Boot Update Needed" notification if the Secure Boot certificates are outdated.

The notification informs the user that Tails will no longer start on their computer at some point in the future, urging them to apply all available updates from their regular operating system to get the latest Secure Boot certificates from Microsoft.

"Since 2023, Microsoft has started replacing the Secure Boot certificates originally issued in 2011. These older certificates begin expiring in June 2026. Tails now notifies you if the computer that you are using has outdated Secure Boot certificates and needs an update," said the Tails developers.

Other than that, Tails 7.7 updates the Tor Browser anonymous web browser to the latest 15.0.10 release and the Mozilla Thunderbird email client to version 140.9.1 ESR (Extended Support Release). Also, this release makes the /root folder only readable by the root user.

Tails 7.7 is the seventh maintenance release in the Tails 7.x series. It is based on the Debian 13 "Trixie" operating system, powered by the long-term supported Linux 6.12 LTS kernel, and features the GNOME 48 desktop environment by default.

Check out the release announcement page for more details about the changes included in Tails 7.7, which you can download right now from the official website as ISO and USB images for 64-bit systems. Automatic upgrades are available from Tails 7.0 and later, but you can also perform a manual upgrade.

Starting with the Tails 7.x series, Tails now requires 3 GB of RAM instead of 2 GB for a smooth experience. The system will display a notification if the RAM requirements are not met.


Original Submission

posted by janrinok on Wednesday April 29, @03:41PM   Printer-friendly

Antarctica just saw the fastest glacier collapse ever recorded:

A glacier on Antarctica's Eastern Peninsula underwent the most rapid retreat seen in modern times. In only two months, nearly half of Hektoria Glacier broke apart and disappeared.

New research led by the University of Colorado Boulder and published in Nature Geoscience explains what happened in 2023, when the glacier lost about eight kilometers of ice in just 60 days. The study found that the key factor was the flat bedrock beneath the glacier. As the ice thinned, this smooth foundation allowed large sections to lift off the ground and float, triggering an unusual and sudden calving event.

The findings could help scientists pinpoint other Antarctic glaciers that might be vulnerable to similar rapid collapse. Hektoria Glacier is relatively small by Antarctic standards, covering about 115 square miles, roughly the size of Philadelphia. However, if a much larger glacier were to retreat this quickly, the consequences for global sea level rise could be severe.

"When we flew over Hektoria in early 2024, I couldn't believe the vastness of the area that had collapsed," said Naomi Ochwat, lead author and CIRES postdoctoral researcher. "I had seen the fjord and notable mountain features in the satellite images, but being there in person filled me with astonishment at what had happened."

Ochwat and her colleagues, including CIRES Senior Research Scientist Ted Scambos, were initially studying the region for a different project. They were investigating why sea ice detached from a glacier years after a nearby ice shelf broke apart in 2002.

While reviewing satellite and remote sensing data, Ochwat noticed something unexpected. The images showed that Hektoria Glacier had retreated dramatically within a short window of time. That discovery led her to focus on a pressing question: why did this glacier collapse so quickly?

Many Antarctic glaciers are tidewater glaciers, meaning they sit on the ocean floor and extend into the sea, where they release icebergs. The landscape beneath them can vary widely. Some rest over deep troughs or underwater mountains, while others lie across broad, flat plains.

Hektoria sat on what scientists call an ice plain, a flat stretch of bedrock below sea level. Geological evidence shows that between 15,000 and 19,000 years ago, glaciers positioned over similar ice plains retreated at extraordinary speeds, sometimes moving back hundreds of meters per day. That historical insight helped researchers interpret what they were seeing at Hektoria.

When a tidewater glacier thins enough, it can lift off the seabed and begin floating on the ocean surface. The location where it transitions from grounded to floating ice is known as the grounding line. By analyzing multiple satellite datasets, the team identified several grounding lines at Hektoria, a sign of ice plain conditions beneath the glacier.

Because the glacier rested on a flat bed, large portions were able to lift off almost at once. Once afloat, the ice was exposed to powerful ocean forces. Cracks opened along the base of the glacier and eventually connected with fractures at the surface. This chain reaction caused extensive calving, breaking apart nearly half the glacier in a matter of weeks.

By combining frequent satellite observations, the researchers reconstructed the sequence of events in detail.

"If we only had one image every three months, we might not be able to tell you that the glacier lost two and a half kilometers in two days," Ochwat said. "Combining these different satellites, we can fill in time gaps and confirm how quickly the glacier lost ice."
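As a quick sanity check on the figures quoted in the piece (8 km lost in 60 days overall, 2.5 km in a single two-day burst), the implied retreat rates work out as follows:

```python
# Retreat rates implied by the figures quoted in the article
overall_m_per_day = 8_000 / 60   # ~133 m/day averaged over the two-month collapse
burst_m_per_day = 2_500 / 2      # 1,250 m/day during the fastest two-day stretch

# For comparison, the paleo record for ice-plain glaciers cited earlier is
# "hundreds of meters per day" -- Hektoria's two-day burst exceeds even that.
```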

The team also deployed seismic instruments that detected a series of glacier earthquakes during the period of rapid retreat. These tremors confirmed that the glacier had been firmly grounded on bedrock before lifting off. The data not only verified the presence of an ice plain but also showed that the ice loss directly contributed to rising global sea levels.

Ice plains have been identified beneath many other Antarctic glaciers. Understanding how they influence retreat rates will help scientists better forecast which glaciers might be prone to sudden collapse in the future.

"Hektoria's retreat is a bit of a shock -- this kind of lightning-fast retreat really changes what's possible for other, larger glaciers on the continent," Scambos said. "If the same conditions set up in some of the other areas, it could greatly speed up sea level rise from the continent."

Journal Reference:
Ochwat, Naomi, Scambos, Ted, Anderson, Robert S., et al. Record grounded glacier retreat caused by an ice plain calving process, Nature Geoscience 2025 18:11 (DOI: 10.1038/s41561-025-01802-4)


Original Submission

posted by janrinok on Wednesday April 29, @10:54AM   Printer-friendly
from the they-call-me-Chip dept.

Meta Platforms and Amazon.com agreed to a multibillion-dollar deal over several years in which the social-media company will use tens of millions of Amazon Web Services' Graviton chip cores to support its AI agents and other AI initiatives:

The companies declined to disclose the financial terms and exact duration of the deal. Nafea Bshara, an Amazon vice president and distinguished engineer, said the length of the deal is between three and five years.

Bshara, a co-founder of AWS's in-house chip unit Annapurna Labs, said most of the tens of millions of AWS Graviton cores will be located in the U.S.

The news comes as tech giants and artificial intelligence labs are still scrambling to get as much compute capacity as possible to support their AI goals. Meta's deal with AWS is one of several it has announced with chip companies this year, including Nvidia, Advanced Micro Devices and Arm Holdings, and underscores the need for a diversity of chips to support AI, analysts say.

[...] This isn't the first time Meta and AWS have worked together. The partnership between the two tech giants dates back to about 2016, Bshara said, but had mostly involved core cloud services, use of Amazon's Bedrock platform and Meta renting GPU clusters from AWS.

Also at ZeroHedge. Use Reader Mode in Firefox to bypass WSJ paywall.


Original Submission

posted by janrinok on Wednesday April 29, @06:09AM   Printer-friendly

Mozilla shipped it in Firefox 149 without a mention in the release notes:

Back in March, Firefox 149 was released with many changes, like a free built-in VPN, a Split View that allows the loading of two pages side by side, and the XDG portal file picker as the new default on Linux.

However, an interesting addition had gone mostly unnoticed until now.

Shivan Kaul Sahib, the VP of Privacy and Security at Brave, has put out a blog post about something that didn't make it into the Firefox 149 release notes at all. The browser now ships adblock-rust, Brave's open source Rust-based ad and tracker blocking engine.

The change landed via Bugzilla Bug 2013888, which was filed and handled by Mozilla engineer Benjamin VanderSloot. The bug is titled "Add a prototype rich content blocking engine," and keeps the engine disabled by default with no user interface or filter lists included.

For context, adblock-rust is the engine behind Brave's native content blocker (i.e. ad blocker). It is written in Rust, licensed under MPL-2.0, handles network request blocking and cosmetic filtering, and supports a uBlock Origin-compatible filter list syntax.

Shivan also mentions that Waterfox, the popular Firefox fork, has adopted adblock-rust, building directly upon Firefox's own implementation.

Before starting, click the Enhanced Tracking Protection shield icon in the address bar and turn it off for the website you will be testing with. This way, adblock-rust does the work, not Firefox's existing feature.

I suggest testing this experimental feature on a throwaway installation of Firefox.

Now open a new tab and go to about:config. Accept the warning when it appears. Search for privacy.trackingprotection.content.protection.enabled and set it to "true" by clicking the toggle.

Next, search for privacy.trackingprotection.content.protection.test_list_urls, click the "Edit" button, and paste the following value to add the EasyList and EasyPrivacy filter lists to Firefox:

https://easylist.to/easylist/easylist.txt|https://easylist.to/easylist/easyprivacy.txt

Remember to click on the blue-colored "Save" button before moving on.

Now visit a site with known ads, like Yahoo. If it's working, ad slots will still render in the page layout, but the actual ad content will be blocked. In my test, the banner on Yahoo came up showing only the text "Advertisement" with the advert itself stripped out.
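For those who prefer a file over clicking through about:config, the same two preference changes can be expressed as a user.js file dropped into the profile directory of a throwaway Firefox installation (a sketch of the steps above, not an official configuration):

```javascript
// user.js equivalent of the about:config steps described above
user_pref("privacy.trackingprotection.content.protection.enabled", true);

// EasyList and EasyPrivacy filter lists, pipe-separated as the pref expects
user_pref(
  "privacy.trackingprotection.content.protection.test_list_urls",
  "https://easylist.to/easylist/easylist.txt|https://easylist.to/easylist/easyprivacy.txt"
);
```

Firefox reads user.js at startup and applies each user_pref() line over the profile's saved preferences, so this survives pref resets between test runs.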


Original Submission

posted by janrinok on Wednesday April 29, @01:26AM   Printer-friendly

Engineers are working on a long-term plan to keep the iconic spacecraft alive:

NASA engineers are working to keep the Voyager mission alive as it cruises through interstellar space, opting to shut down components of the spacecraft to save power.

Engineers at NASA’s Jet Propulsion Laboratory (JPL) sent commands to Voyager 1 to shut off one of its science instruments after the spacecraft’s power levels fell unexpectedly. Out of the 10 instruments on board Voyager 1, only two are still operating as the mission team figures out new ways to keep the spacecraft alive for longer.

“While shutting down a science instrument is not anybody’s preference, it is the best option available,” Kareem Badaruddin, Voyager mission manager at JPL, said in a statement.

In late February, Voyager 1’s power levels dropped during a routine roll maneuver. The Voyager team had to act fast; any additional drop in power could trigger a safeguard system that would begin shutting down components on its own.

Voyager is powered by heat from decaying plutonium that is converted into electricity. Each year, the aging spacecraft loses about 4 watts of power. In an effort to extend the mission duration, the team has turned off systems deemed unnecessary to keep the spacecraft going, including a few of the science instruments.
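The roughly 4-watt annual loss follows from the physics of the power source: plutonium-238 decays with a half-life of about 87.7 years, so output falls exponentially. A minimal sketch, assuming an initial electrical output of roughly 470 W at launch (an approximation; the exact figure is not in the article):

```python
HALF_LIFE_YEARS = 87.7   # plutonium-238 half-life
P0_WATTS = 470.0         # rough electrical output of Voyager's RTGs at launch (assumption)

def rtg_power(years_since_launch: float) -> float:
    """Idealized output from radioactive decay alone; real thermocouple
    degradation makes the actual decline steeper."""
    return P0_WATTS * 0.5 ** (years_since_launch / HALF_LIFE_YEARS)

# Decay alone accounts for roughly 2.5 W of loss per year at the 48-year mark;
# thermocouple wear pushes the real figure toward the ~4 W cited above.
yearly_loss = rtg_power(48) - rtg_power(49)
```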

The team of engineers agreed on the order in which they would shut down instruments on board Voyager, and the Low-energy Charged Particles experiment, or LECP, was next on that list. LECP measures low-energy charged particles, including ions, electrons, and cosmic rays originating from our solar system and galaxy, and has been providing critical data on the structure of the interstellar medium for the past 49 years.

On April 17, the team was forced to send commands to shut off LECP. The sequence of commands took around 23 hours to reach Voyager 1, while the shutdown process itself took about three hours and 15 minutes to complete.

The Voyager 1 spacecraft launched on an unprecedented journey to interstellar space in 1977, becoming the farthest human-made object at a distance of 15 billion miles (25 billion kilometers) from Earth.

The twin Voyager probes have far surpassed their original mission timeline. The original mission was designed to last just five years, but Voyager 1 and 2 are still going nearly 50 years later. The journey, however, has taken a toll on the spacecraft, and NASA engineers are forced to come up with new ways to extend the mission.

Shutting down Voyager 1’s LECP will give the spacecraft around a year of breathing room while engineers finalize a more ambitious energy-saving fix for both spacecraft. The long-term plan, called “the Big Bang,” will attempt to swap out a group of powered devices for lower-power alternatives. The idea is to keep the spacecraft warm enough to continue gathering science data and further extend its operations in interstellar space.

Engineers kept one part of LECP on, a small motor that spins the sensor in a circle to scan in all directions, in the hopes that they can turn the instrument back on someday if they can garner enough extra power.

“Voyager 1 still has two remaining operating science instruments—one that listens to plasma waves and one that measures magnetic fields. They are still working great, sending back data from a region of space no other human-made craft has ever explored,” Badaruddin said. “The team remains focused on keeping both Voyagers going for as long as possible.”


Original Submission

posted by mrpg on Tuesday April 28, @06:40PM   Printer-friendly
from the ice-and-fire dept.

https://linuxiac.com/someone-made-a-windows-95-subsystem-for-linux/

WSL9x is not Microsoft WSL, but a retro Windows 95 and 98 project that makes Linux run where nobody expected it.

For the past few days, I hesitated to share this news, since it might sound like a late April Fools’ joke. But open-source developers always find new ways to surprise me, and this project is a perfect example.

A developer created WSL9x, a GPL-3-licensed experimental project that runs a modern Linux kernel inside… the Windows 9x kernel (Windows 95, Windows 98, and Windows ME). To be clear, despite the similar name, it has no connection to Microsoft’s official Windows Subsystem for Linux. It’s an independent retrocomputing hobby project that just borrows the name and focuses on Microsoft’s old Windows 9x family.

Right now, the project uses a patched Linux kernel 6.19 that runs alongside the Windows 9x kernel. This setup lets both operating systems run together, so you don’t need to reboot into Linux or use a typical virtual machine. The result is closer to an old-school systems hack than a practical replacement for WSL on modern Windows.

[...] For those interested, here is a link to the project.


Original Submission

posted by mrpg on Tuesday April 28, @02:00PM   Printer-friendly
from the who-doesnt dept.

A UBC study finds raccoons solve puzzles even without food rewards, suggesting they are driven by curiosity and information-seeking:

They raid compost bins, outsmart latches and sometimes look gleeful doing it. A new UBC study in Animal Behaviour suggests raccoons may not just be opportunistic—they may be genuinely curious.

UBC researchers Hannah Griebling and Dr. Sarah Benson-Amram found that raccoons continued solving puzzles long after retrieving the only food reward available. Because no additional food was given for continuing, this behaviour reflects intrinsic motivation rather than hunger, and the researchers describe it as "information foraging."

Researchers used a custom multi-access puzzle box with mechanisms such as latches, sliding doors and knobs. The box had nine entry points, grouped as easy, medium and hard. In each 20-minute trial the puzzle box contained a single marshmallow, yet raccoons often continued opening new mechanisms after eating it, a clear sign of information-seeking.

"We weren't expecting them to open all three solutions in a single trial," said Griebling. "They kept problem solving even when there was no marshmallow at the end."

Journal Reference: https://doi.org/10.1016/j.anbehav.2026.123491


Original Submission