

posted by hubie on Friday December 12, @07:36PM

https://www.npr.org/2025/12/08/nx-s1-5631826/iceblock-app-lawsuit-trump-bondi

The developer of ICEBlock, an iPhone app that anonymously tracks the presence of Immigration and Customs Enforcement agents, has sued the Trump administration for free speech violations after Apple removed the service from its app store under demands from the White House.

The suit, filed on Monday in federal court in Washington, asks a judge to declare that the administration violated the First Amendment when it threatened to criminally prosecute the app's developer and pressured Apple to make the app unavailable for download, which the tech company did in October.

After Apple pulled ICEBlock, Attorney General Pam Bondi said in a statement that "we reached out to Apple today demanding they remove the ICEBlock app from their App Store — and Apple did so."

Lawyer Noam Biale, who filed the suit against the administration, said Bondi's remarks show the government illegally pressuring a private company to suppress free speech.

"We view that as an admission that she engaged in coercion in her official role as a government official to get Apple to remove this app," Biale said in an interview with NPR.

The Justice Department did not respond to a request for comment, but Trump administration officials have said the app puts the lives of ICE agents in danger.

Apple also did not respond to a request for comment. The lawsuit, which does not name Apple, says the tech giant bowed in the face of political pressure.

"For what appears to be the first time in Apple's nearly fifty-year history, Apple removed a U.S.-based app in response to the U.S. government's demands," according to the suit.

[...] To First Amendment advocates, the White House's pressure campaign targeting ICEBlock is the latest example of what's known as "jawboning," when government officials wield state power to suppress speech. The Cato Institute calls the practice "censorship by proxy."

ABC's suspension of Jimmy Kimmel after FCC Chair Brendan Carr threatened regulatory action, and Bondi's promise of a crackdown on hate speech following the killing of conservative activist Charlie Kirk, are two other prominent instances.

"The use of a high-level government threat to force a private platform to suppress speech fundamentally undermines the public's right to access information about government activities," said Spence Purnell, a resident senior fellow at R Street, a center-right think tank. "If high-level officials can successfully silence political opposition, it sets a dangerous precedent for the future of free expression in this country."

Genevieve Lakier, a First Amendment scholar at the University of Chicago Law School, said the White House's campaign against ICEBlock shows the administration using what has become a familiar playbook: "To use threats of adverse legal and financial consequences, sometimes vague sometimes not so vague, to pressure universities, media companies, law firms, you name it, into not speaking in the ways they like," she said.

One potential weak spot for the lawsuit, however, is a lack of direct evidence that Attorney General Bondi, or other administration officials, made threats against Apple to have the app removed, rather than merely convinced the tech company to do so.

Previously: Apple Removes ICE Tracking Apps After Pressure by Trump Administration


Original Submission

posted by hubie on Friday December 12, @02:54PM
from the polish-off-those-transformers dept.

https://distrowatch.com/dwres.php?resource=showheadline&story=20106

Canonical has announced it will provide AMD driver packages for working with hardware-accelerated AI tools.

"Canonical is pleased to announce an expanded collaboration with AMD to package and maintain AMD ROCm software directly in Ubuntu. AMD ROCm is an open software ecosystem to enable hardware-accelerated AI/ML and HPC workloads on AMD Instinct and AMD Radeon GPUs, simplifying the deployment of AI infrastructure with long term support from Canonical."

ROCm is the software that enables hardware-accelerated processing on AMD hardware: "For AMD, the software that enables hardware-accelerated AI processing is called ROCm. It is an open software platform that includes runtimes, compilers, libraries, kernel components, and drivers that together accelerate industry standard frameworks such as PyTorch, Tensorflow, Jax, and more on supported AMD GPUs and APUs."
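
Once the ROCm-enabled packages are installed, a quick sanity check from Python shows whether the stack can see an AMD GPU. This is a minimal sketch assuming a ROCm build of PyTorch; the exact package names and versions that will ship in Ubuntu 26.04 LTS are not confirmed here.

    # Minimal check that a ROCm-enabled PyTorch build can see an AMD GPU.
    import torch

    # ROCm builds of PyTorch report the HIP runtime version here; it is None on
    # CUDA-only or CPU-only builds.
    print("HIP runtime:", torch.version.hip)

    # The torch.cuda.* device API maps to HIP devices on ROCm builds.
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
        x = torch.randn(1024, 1024, device="cuda")
        print("Matmul OK:", (x @ x).shape)
    else:
        print("No ROCm-visible GPU found; falling back to CPU.")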

Additional information on the new packages can be found in Canonical's announcement:

This work will simplify the delivery of AMD AI solutions in data centers, workstations, laptops, Windows Subsystem for Linux, and edge environments. AMD ROCm software will be available as a dependency for any Debian package, snap, or Docker image (OCI) build. Performance fixes and security patches will automatically be available to production systems.

This collaboration aims to make AMD ROCm software available in Ubuntu starting with Ubuntu 26.04 LTS, with updates available in every subsequent Ubuntu release.

[...] "We are delighted to work alongside AMD and the community to package AMD ROCm libraries directly into Ubuntu," said Cindy Goldberg, SVP of Silicon and Cloud Alliances at Canonical. "This will simplify the use of AMD hardware and software for AI workloads, and enable organizations to meet security and maintenance requirements for production use at scale."


Original Submission

posted by jelizondo on Friday December 12, @10:07AM

https://www.theregister.com/2025/12/09/porsche_bricked_russia/

Hundreds of Porsches in Russia were rendered immobile last week, raising speculation of a hack, but the German carmaker tells The Register that its vehicles are secure.

According to reports, local dealership chain Rolf traced the problem to a loss of satellite connectivity to their Vehicle Tracking Systems (VTS). This meant the systems thought a theft attempt was in progress, triggering the vehicle's engine immobilizer.

Porsche HQ was unable to help or diagnose the nature of the problem. It's understood that systems like VTS are operated by local Porsche subsidiaries or dealer networks.

But following Russia's invasion of Ukraine and the imposition of sanctions, Porsche no longer exports to the country or provides after-sales service.

In a statement to The Register, a Porsche spokesperson said no other markets were affected by the issue.

"The cybersecurity of our vehicles is a central concern for Porsche," the spokesperson told us. "Protection against cybersecurity attacks is ensured by comprehensive security processes and technical measures over the entire life cycle of our vehicles. The measures include, among other things, secure software updates, protected communication channels, and regular security tests for the early detection of suspicious activity," they added.

Resourceful Russian owners have reportedly resorted to workarounds to overcome the problem, including disabling or rebooting the VTS, or removing it entirely.

Others have claimed that disconnecting their car's batteries for ten hours does the trick. These have worked in some but not all cases, apparently.

The issue sparked speculation of a cyberattack, but security and privacy experts we spoke with were dubious.

Cian Heasley, principal consultant at Acumen Cyber, said the wave of shutdowns could be well within the capabilities of a hacktivist group, but said there had been no chatter indicating this was the case.

"If this were a coordinated cyberattack, I would have expected one of the larger pro-Ukraine groups to have claimed this attack by now and posted some sort of evidence, similar to what we saw when Russian airline Aeroflot was attacked in July of this year."

Rik Ferguson, VP Security Intelligence at Forescout, said: "Modern immobilizers don't react only to what happens around the vehicle, they depend on a constant 'trust heartbeat' signal from cloud or satellite backends. From the outside, a deliberate hack and an intentional backend shutdown can look almost identical: the tracking service disappears, the car interprets that as theft, and the immobilizer kicks in."

High-end vehicles rely on a long tail of services outside the owner's control, Ferguson said, spanning the cloud, satellite operators, and regional partners.

"Sanctions, contract disputes, misconfigurations, or attackers can all break that chain, and when they do, a six-figure car is suddenly just a very expensive ornament."

Bugcrowd CSO Trey Ford added: "It sounds like the system design has a fail-safe where if there is a loss of satellite service (platform issues, military, etc.) can lead to a lockout of the vehicle to help mitigate theft – which makes sense."

Otherwise, a criminal could create a Faraday cage to block the antenna and prevent tracking.

He continued: "It also stands to reason that a platform with the ability to lock down vehicles could inadvertently do that." This could be down to an engineering issue, failed update, a database problem, "or something as trivial as a service plan accounting error impacting satellite communication services."
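
The failure mode both experts describe maps onto a simple watchdog pattern. The sketch below is purely illustrative (it is not based on Porsche's or any tracking vendor's actual software) and shows how a naive fail-safe that treats loss of backend contact as a theft signal will immobilize the car whether the cause is a Faraday cage, a hack, or the service simply being switched off:

    import time

    HEARTBEAT_TIMEOUT_S = 3600  # hypothetical threshold, not a real vendor value

    class NaiveImmobilizerWatchdog:
        """Toy model of a fail-safe that cannot tell jamming from a dead backend."""

        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.immobilized = False

        def on_heartbeat(self):
            # Called whenever the tracking backend confirms the vehicle is trusted.
            self.last_heartbeat = time.monotonic()

        def tick(self):
            # Called periodically by the vehicle. A missing heartbeat looks the same
            # regardless of why it is missing, so the immobilizer engages either way.
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
                self.immobilized = True

A more forgiving design would distinguish "no signal" from an explicit theft flag, or degrade to a warning rather than a lockout when the backend simply stops answering.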

The issue highlights broader concerns around connected vehicles.

Chris Hauk, consumer privacy advocate at Pixel Privacy, said engine kill systems were pushed as an anti-theft device. But "the technology could also be used by hackers to cause havoc and could also be used by totalitarian governments to shut down vehicles belonging to 'enemies of the state.'"

Paul Bischoff, consumer privacy advocate at Comparitech, added: "Any feature that requires a network connection should not affect the basic functionality of the vehicle."

"Besides remote hacks, drivers also have to worry about privacy. Newer cars collect and share a lot of information about their users, often without explicit informed consent."

It's worth noting that most Russian Porsche owners were probably not stranded without wheels, as no other brands have been affected – Russia's elite are also enthusiastic fans of Bentleys, Aston Martins, and other luxury marques.


Original Submission

posted by jelizondo on Friday December 12, @05:22AM

https://linuxiac.com/pebble-index-01-arrives-as-a-private-open-source-voice-capture-ring/

Pebble Index 01 is an open-source smart ring that captures quick voice notes instantly and privately, with on-device processing and years of battery life.

The world of wearable gadgets never stops surprising us, and open-source ones make it even more exciting. Pebble, best known for its smartwatches, has introduced Index 01, an open-source smart ring designed as a compact voice-input device for capturing short thoughts, reminders, and notes.

The ring houses a button, a microphone, Bluetooth hardware, local storage, and a long-life silver-oxide battery that lasts for years.

When pressed, it records a brief voice memo that is transferred to the Pebble mobile app once the paired phone is in range. All processing—including speech-to-text and the on-device language model that determines whether to create a note, reminder, or other action—runs locally within the open-source Pebble app.

The ring supports single- and double-click actions for tasks such as controlling music playback or triggering home automation tools. Voice inputs can be routed through MCP actions, which run locally in WASM, or directly forwarded to custom applications via webhook.
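
Linuxiac does not document the webhook payload, so the following is only a hypothetical sketch of the kind of endpoint a custom application might expose to receive a forwarded note; the JSON field names are assumptions, not Pebble's actual schema.

    # Hypothetical webhook receiver for a forwarded voice note (fields are assumed).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class NoteReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            print("Note:", payload.get("transcript"), "at", payload.get("timestamp"))
            self.send_response(204)  # acknowledge with no body
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), NoteReceiver).serve_forever()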

On top of that, Pebble plans to expand capabilities with optional double-click-and-hold routing to general voice agents, including integrations with web search or smartwatch-displayed responses.

One of the main selling points of Index 01 is privacy. It records only while the button is pressed, stores audio securely, and requires no internet connection or subscription. Raw audio can be reviewed in the app, and higher-quality cloud speech-to-text is optional.

The design avoids rechargeability to preserve size, durability, and multi-year uptime, with recycling handled by Pebble. So, once the battery reaches the end of its life, it's simply time to get a new ring. A key advantage is that you never have to take the ring off for the usual, annoying recharging. It stays on your finger at all times, ready to capture any idea the moment it strikes.

The ring itself contains no speaker or haptics to minimize distractions and power usage. Up to five minutes of audio can be stored on-ring before syncing, allowing operation even when the phone is out of range.

The device is made of stainless steel, features an LSR button, and is water-resistant to 1 meter. It ships in eight sizes and three finishes (polished silver, polished gold, and matte black), with worldwide pre-orders open at $75 before rising to $99 when shipments begin in March 2026.

Pebble confirmed full functionality on both iOS and Android, including mobile and smartwatch integration for reviewing captured notes or simple query responses on the watch display.

Development is currently in the DVT phase, with broader alpha testing scheduled for January ahead of mass production.


Original Submission

posted by jelizondo on Friday December 12, @12:39AM

Russia says it might build its own Linux community after removal of several kernel maintainers:

Russia has called Linux's recent delisting of several Russian kernel maintainers "an act of discrimination" and pledged to establish an independent development community for the open-source operating system.

"We will strengthen cooperation and establish a dialogue with those countries that are ready to work with us," Russia's digital ministry said in a comment [Website in Russian] to local media, adding that they plan to build their own "alternative structure."

"It is important to create conditions for cooperation, which can help develop a unique product," the ministry's representative added. It is unclear whether the creation of an alternative Linux community has been discussed with other countries and whether it is even possible. Leaders within the Linux project have not publicly commented on the Russian statement.

Russia's response came after the Linux community blocked 11 Russians from maintaining the Linux kernel — the operating system's core code — citing "various compliance requirements." Linux creator Linus Torvalds stated that this decision "is not getting reverted," adding that as a Finn, he will not "support Russian aggression."

One of the Linux maintainers later explained that the restrictions would apply to developers whose companies are owned or controlled by entities on the U.S. Office of Foreign Assets Control list, designated as involved in activities that "threaten the national security, foreign policy, or economy" of the country.

Most of the delisted Russian maintainers were indeed associated with sanctioned Russian companies or organizations controlled or backed by the Russian government.

Russian cyber experts have criticized Linux's latest decision, saying it will negatively affect trust within Linux's developer community and the quality of the product.

An expert from the Russian cybersecurity company Kaspersky, which has been sanctioned by the U.S., said in an interview with Russian media RBK that the level of suspicion toward patches from Russian developers may increase, complicating the process of integrating changes into the main version of the software, which is important for maintaining Linux distributions.

"The contribution of Russian developers to the Linux kernel is not that significant, so nothing critical will happen in this regard," said another Russian expert, Ivan Panchenko, co-founder of a Russian company that develops an open-source database management system.

He added that Russian patches for general software issues will likely continue to be accepted. Many Linux developers work on parts of the operating system that are outside the kernel. However, Panchenko said there may be new, separate versions of the kernel created by Russian developers — so-called forks.

This is not the first time that associations with Russia have caused problems for local developers. Last year, a Russian coder's account was blocked on GitHub, and his repositories were marked as "archived." The affected developer reportedly worked for a Russian tech equipment manufacturer sanctioned by Canada and the U.S.


Original Submission

posted by jelizondo on Thursday December 11, @07:56PM
from the feelin'-groovy dept.

https://scitechdaily.com/er-doctors-are-sounding-the-alarm-on-a-fast-growing-cannabis-illness/

Hospitals are seeing a striking rise in people with sudden bouts of intense vomiting linked to long-term cannabis use, a condition now formally classified as cannabis hyperemesis syndrome.

With the new medical code, doctors and researchers can better track how often it occurs and how it develops. Many patients are surprised to learn cannabis may be triggering their symptoms, especially since it is commonly used to ease nausea.

Over the past ten years, emergency departments have treated a growing number of people seeking help for abdominal pain accompanied by severe or persistent vomiting. A shared characteristic among many of these patients is long-term cannabis use.

Clinicians only recently gained a standardized way to document this issue. Last month, a diagnostic code for "cannabis hyperemesis syndrome" became available, describing a gastrointestinal condition that begins within 24 hours of the most recent use and can continue for several days. People who develop the syndrome often face three or four bouts of symptoms each year.

On October 1, the World Health Organization added the new code, R11.16, to its International Classification of Diseases manual (currently ICD-10). The Centers for Disease Control and Prevention also incorporated the update for U.S. health care providers.

The change offers several practical benefits. With a single, specific billing code, clinicians no longer need to rely on multiple less accurate codes to describe the condition. The new entry also allows providers to recognize repeat episodes more easily by checking a patient's medical records.

For researchers, the update is especially valuable. It provides a clearer picture of how often the condition appears and who is most affected, helping investigators such as Beatriz Carlini track trends that were previously hard to pinpoint.

"It helps us count and monitor these cases," said Carlini, a research associate professor at the University of Washington School of Medicine who studies adverse health effects of cannabis use. "In studying addiction and other public health concerns, we have three sources of data: what clinicians tell us, what people in the communities tell us, and what health records tell us. A new code for cannabis hyperemesis syndrome will supply important hard evidence on cannabis-adverse events, which physicians tell us is a growing problem."

Although cases are increasing, many clinicians remain unfamiliar with the syndrome because it is still relatively new in medical practice.

"A person often will have multiple [emergency department] visits until it is correctly recognized, costing thousands of dollars each time," Carlini said.

Even with a proper diagnosis, patients are sometimes reluctant to accept that cannabis is causing their symptoms, said Dr. Chris Buresh, an emergency medicine specialist with UW Medicine and Seattle Children's. Cannabis is widely used to reduce nausea for chemotherapy patients or those managing HIV or migraines, which makes the connection more difficult for some to understand.

"Some people say they've used cannabis without a problem for decades. Or they smoke pot because they think it treats their nausea," he said. "It seems like there's a threshold when people can become vulnerable to this condition, and that threshold is different for everyone. Even using in small amounts can make these people start throwing up."

What causes the condition to affect some individuals but not others remains unclear.

"We don't know if it's related to the greater general availability of cannabis or the higher THC potency of some products or something else," Buresh said.

The syndrome is also challenging to treat. Standard anti-nausea medications often do not work well, he said, leading physicians to try second- or third-line options such as Haldol, which is usually prescribed for psychotic episodes.

Some patients find temporary relief from capsaicin cream, an over-the-counter product that creates a warming sensation. Others report that hot showers ease their discomfort.

"That's something that can clinch the diagnosis for me, when someone says they're better with a hot shower. Patients describe going through all the hot water in their house," he said.

Several factors can slow a patient's recovery. Because the syndrome appears intermittently, some cannabis users may believe a recent episode was unrelated and continue using cannabis without problems until they suddenly become very sick again. For those who accept the diagnosis and try to stop using cannabis to improve their symptoms, addiction can make that process challenging, Carlini said.


Original Submission

posted by hubie on Thursday December 11, @03:07PM

AI favors texts written by other AIs, even when they're worse than human ones:

As many of you already know, I'm a university professor. Specifically, I teach artificial intelligence at UPC.

Each semester, students must complete several projects in which they develop different AI systems to solve specific problems. Along with the code, they must submit a report explaining what they did, the decisions they made, and a critical analysis of their results.

Obviously, most of my students use ChatGPT to write their reports.

So this semester, for the first time, I decided to use a language model myself to grade their reports.

The results were catastrophic, in two ways:

  1. The LLM wasn't able to follow my grading criteria. It applied whatever criteria it felt like, ignoring my prompts. So it wasn't very helpful.
  2. The LLM loved the reports clearly written with ChatGPT, rating them higher than the higher-quality reports written by students.

In this post, I'll share my thoughts on both points. The first one is quite practical; if you're a teacher, you'll find it useful. I'll include some strategies and tricks to encourage good use of LLMs, detect misuse, and grade more accurately.

The second one... is harder to categorize and would probably require a deeper study, but I think my preliminary observations are fascinating on their own.

[...] If you're a teacher and you're thinking of using LLMs to grade assignments or exams, it's worth understanding their limitations.

We should think of a language model as a "very smart intern": fresh out of college, with plenty of knowledge, but not yet sure how to apply it in the real world to solve problems. So we must be extremely detailed in our prompts and patient in correcting its mistakes—just as we would be if we asked a real person to help us grade.

In my tests, I included the full project description, a detailed grading rubric, and several elements of my personal judgment to help it understand what I look for in an evaluation.

[...] The usual hallucinations began—the kind I thought were mostly solved in newer model versions. But apparently not: it was completely making up citations from the reports.

[...] Soon after, it started inventing its own grading criteria. I couldn't get it to follow my rubric at all. I gave up and decided to treat its feedback simply as an extra pair of eyes, to make sure I wasn't missing anything.

[...] Instead of asking the LLM to identify AI-written texts, which it doesn't do very well, I decided to compare my own quality ratings of each project with the LLM's ratings. Basically, I wanted to see how aligned our criteria were.
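
A minimal way to quantify that alignment is sketched below with made-up numbers (the author does not publish his exact method): correlate the two sets of scores, then look at the human/LLM gap split by whether the report was flagged as likely AI-written. The data and the flag are assumptions for illustration.

    from scipy.stats import pearsonr

    # Made-up scores for illustration: (human grade, LLM grade, looks_ai_written)
    reports = [
        (9.0, 6.5, False), (8.5, 7.0, False), (7.0, 6.0, False),
        (5.0, 8.5, True),  (4.5, 9.0, True),  (6.0, 8.0, True),
    ]

    human = [r[0] for r in reports]
    llm = [r[1] for r in reports]
    r, _ = pearsonr(human, llm)
    print("Overall human/LLM correlation:", round(r, 2))

    # Average LLM-minus-human gap by suspected authorship.
    for flag in (True, False):
        gaps = [l - h for h, l, ai in reports if ai == flag]
        label = "AI-written" if flag else "Human-written"
        print(label, "mean gap:", round(sum(gaps) / len(gaps), 2))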

And I found a fascinating pattern: the AI gives artificially high scores to reports written with AI.

The models perceive LLM-written reports as more professional and of higher quality. They prioritize form over substance.

And I'm not saying that style isn't important, because it is, in the real world. But it was giving very high marks to poorly reasoned, error-filled work simply because it was elegantly written. Too elegantly... Clearly written with ChatGPT.

When I asked the model what it based its evaluation on, it said things like: "Well, the students didn't literally write [something]... I inferred it from their abstract, which was very well written."

In other words, good writing produced by one LLM leads to a good evaluation by another LLM, even if the content is wrong.

Meanwhile, good writing by a student doesn't necessarily lead to a good evaluation by an LLM.

This phenomenon has a name: corporatism.

[...] This situation gives me chills, because we have totally normalized using LLMs to filter résumés, proposals, or reports.

I don't even want to imagine how many users are accepting these evaluations without supervision and without a hint of critical thought.

If we, as humans, abdicate our responsibility as critical evaluators, we'll end up in a world dominated by AI corporatism.

A world where machines reward laziness and punish real human effort.

[...] To make sure students haven't overused ChatGPT, professors conduct short face-to-face interviews to discuss their projects.

It's the only way to ensure they've actually learned, and also, to be fair. If they've used the model to write more clearly and effectively but still achieved the learning objectives and understood their work, we don't penalize them.

In general, when a report smells a lot like ChatGPT, it usually means the students didn't learn much. But there are always surprises, in both directions.

Sometimes, it's legitimate use of ChatGPT as a writing assistant, which I actually encourage in class. Other times, I find reports that seem AI-written, but the students swear up and down they weren't, even after I tell them it won't affect their grade.

Maybe it's that humans are starting to write like machines.

Of course, machines have learned to write like humans—but current models still have a rigid, recognizable, and rather bland style. You can spot the overuse of bullet-pointed infinitives packed with adjectives, endless summary paragraphs, and phrasing or structures no human would naturally use.


Original Submission

posted by jelizondo on Thursday December 11, @10:25AM

https://www.extremetech.com/computing/proton-launches-encrypted-sheets-as-a-privacy-first-alternative-to-google

Anyone with a Proton Drive account can make a spreadsheet via the "New" menu in the web interface.

Proton has launched Proton Sheets, a Proton Drive cloud spreadsheet tool with stronger privacy controls than Google Sheets or Microsoft Excel. The company positions Sheets as a way to plan budgets, track projects, and manage business data without exposing information to advertising systems or AI training pipelines.

Proton Sheets uses end-to-end encryption for all content, including cell values, formulas, and filenames. Proton stores data in Switzerland and says only the account holder and their collaborators can access it. The company says it cannot read spreadsheets stored on its servers because the encryption keys remain with users.
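
Proton has not published the cryptographic details here, so the snippet below is only a toy illustration of the general end-to-end idea (encrypt cell data with a key that never leaves the client) using Python's cryptography library; it is not Proton's actual scheme.

    # Toy illustration of client-side spreadsheet encryption; NOT Proton's scheme.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # stays on the client; the server never sees it
    f = Fernet(key)

    cells = {"A1": "Salary", "B1": "=SUM(B2:B13)", "B2": "4200"}
    ciphertext = f.encrypt(json.dumps(cells).encode())  # all the server would ever store

    # Only someone holding the key can recover the values or formulas.
    print(json.loads(f.decrypt(ciphertext)))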

The app supports modern spreadsheet features, including formulas, charts, and real-time collaboration. Teams can edit the same sheet together, see changes as they happen, share files through Proton Drive links, and control who can view or edit each spreadsheet. Users can import existing CSV and XLS files into Proton Drive to encrypt them, and then export them when they need to work in other office suites.

Proton highlights how its approach differs from "big tech" productivity services. While those services rely on monetizing user data and integrating AI assistants directly into documents and spreadsheets, Proton focuses on other methods. The company points to Google's Gemini in Sheets and notes that Google warns users not to share confidential information with the assistant, as humans may review it or use it to improve AI systems. Proton says it does not use customer data for advertising or to train AI models.

Sheets is rolling out gradually to Proton Drive. Anyone with a Proton Drive account can start using it from the "New" menu in the web interface once the option appears. The tool is included in the free Proton Drive tier, with paid storage plans for larger teams and workloads.


Original Submission

posted by jelizondo on Thursday December 11, @05:48AM

https://itsfoss.com/news/german-state-ditch-microsoft/

Schleswig-Holstein's migration to LibreOffice reaches 80% completion, with a one-time €9 million investment on the cards for 2026.

European governments are pushing back against Big Tech's grip on public infrastructure. Denmark announced earlier this year that its Ministry of Digital Affairs was switching from Microsoft to LibreOffice. In more recent news, Switzerland's data protection authorities declared international cloud services unsuitable for handling personal data.

One German state has been leading this charge for quite some time. Schleswig-Holstein started its open source journey early, becoming something of a vanguard in Europe's move away from proprietary software.

Now, Dirk Schrödter, the state's Minister for Digital Transformation, has shared some remarkable numbers (in German) that make the financial case for open source in government.

According to Schrödter's ministry, Schleswig-Holstein will save over €15 million in license costs in 2026. This is money the state previously paid Microsoft for Office 365 and related services.

The savings come from nearly completing the migration to LibreOffice. Outside the tax administration, almost 80% of workplaces in the state government are said to have made the switch.

The remaining 20% of workplaces still depend on Microsoft programs. Technical dependencies in certain specialized applications keep these systems tied to Word or Excel for now. But converting these remaining computers is the end goal.

There is also a one-time €9 million investment set in motion for 2026, which would be used to complete the migration and further develop the open source solutions for the ministry.
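
Taking the reported figures at face value, the one-time spend pays for itself quickly. A rough back-of-envelope calculation (indicative only, since the €15 million is a 2026 estimate):

    # Back-of-envelope payback from the figures reported above (indicative only).
    one_time_investment = 9_000_000      # EUR, planned for 2026
    annual_license_savings = 15_000_000  # EUR per year, 2026 estimate

    print("Payback:", round(one_time_investment / annual_license_savings, 2), "years")  # ~0.6
    print("Net over five years:", 5 * annual_license_savings - one_time_investment)     # 66,000,000 EUR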

These numbers deserve attention from governments worldwide. Schleswig-Holstein proves that breaking free from proprietary software isn't just ideologically appealing but financially smart.

The €15 million annual savings will compound year after year. That is public money staying in the economy instead of flowing to a user data-hungry tech giant based overseas.

More importantly, this is about data sovereignty. Why should governments hand sensitive government data to companies subject to foreign surveillance laws? Open source alternatives keep data in-house, under local control, without forced cloud uploads.


Original Submission

posted by jelizondo on Thursday December 11, @12:52AM

Chattanooga's Municipal Fiber Network Has Delivered $5.3 Billion in Community Benefits, New Study Finds:

For years, it's been known as "America's first Gig City," thanks to its city-owned fiber network.

That same infrastructure has positioned Chattanooga to potentially become the nation's first "Quantum City," according to a new economic impact analysis showing EPB Fiber and the utility's smart-grid systems have generated $5.3 billion in net community benefits for Hamilton County since 2011.

The city is now poised to enter the Quantum realm.

The research builds on an earlier 10‑year return‑on‑investment analysis – published in August 2020 [PDF] – that showed the city's publicly-owned fiber network had delivered $2.69 billion in value over its first decade.

The new follow-up study – From Gig City to Quantum City: The Value of Fiber Optic Infrastructure in Hamilton County, TN 2011-2035 [PDF] – expands the time horizon and finds that over 15 years the total community benefit has grown to $5.3 billion, illuminating how the long‑term value of municipal broadband can really pay off.

Conducted by researchers at the University of Tennessee at Chattanooga, the study finds that the municipal fiber network has dramatically reshaped the regional economy, supporting 10,420 jobs from 2011 to 2024 – about 31 percent of all net new jobs created locally over the past decade.

The return on investment has been extraordinary: the network has delivered 6.4 times the value of its original $396 million investment, the study indicates.

For years, opponents of municipal broadband have argued that networks like EPB's require taxpayer subsidies, distort markets, and undermine competition. But the new study directly contradicts that narrative.

[...] The researchers conclude that EPB's fiber division has done the reverse – generating surplus revenue that "reverse subsidizes" electric ratepayers by strengthening the utility's financial position, reducing the pressure for rate hikes, and avoiding high borrowing costs.

The study adds that municipal fiber also made the local broadband market more competitive, not less.

[...] For policymakers, the study reinforces a central reality: broadband networks are core infrastructure, with a long-term value that extends far beyond the balance sheet.


Original Submission

posted by jelizondo on Wednesday December 10, @08:11PM

Planned orbital observatories would see satellites cross nearly all of their images:

On Wednesday, three NASA astronomers released an analysis showing that several planned orbital telescopes would see their images criss-crossed by planned satellite constellations, such as a fully expanded Starlink and its competitors. While the impact of these constellations on ground-based astronomy has been widely considered, orbital hardware was thought to be relatively immune to their interference. But the planned expansion of constellations, coupled with some of the features of upcoming missions, will mean that at least one proposed observatory will see an average of nearly 100 satellite tracks in every exposure.

Making matters worse, some of the planned measures meant to minimize the impact on ground-based telescopes will make things worse for those in orbit.

Satellite constellations are a relatively new threat to astronomy; prior to the drop in launch costs driven by SpaceX's reusable rockets, the largest constellations in orbit consisted of a few dozen satellites. But the rapid growth of the Starlink system caused problems for ground-based astronomy that are not easy to solve.

Unfortunately, even if we had an infinite budget, we couldn't just solve this by increasing our reliance on space-based hardware. While orbiting telescopes may be above some of the problem-causing constellations, enough of the new hardware is orbiting at altitudes where it can interfere with observations. A check of the image archive of the Hubble Space Telescope, for example, shows that over four percent of recent images contain a satellite track, a significant increase from earlier in the century.

(There are some space-based telescopes that aren't orbiting the Earth, like the James Webb Space Telescope, that will remain worry-free. But these require expensive launches and are too far from Earth for the sort of regular servicing that something like the Hubble has received.)

And the problem will only get worse, according to three astronomers at NASA's Ames Research Center in California (Alejandro Borlaff, Pamela Marcum, and Steve Howell). Based on filings made with the Federal Communications Commission, they found that the current total of satellites represents only 3 percent of what will be in orbit a decade from now if everybody's planned launches take place.

To estimate the impact that this massive population of satellites might have on astronomy, the three researchers focused on several key orbiting observatories. One is the Hubble. Another is the recently launched SPHEREx, which will perform an all-sky survey in the infrared. The Chinese are developing a telescope called Xuntian that will operate in conjunction with their orbiting space station, and the ESA is preparing a mission called ARRAKIHS meant to characterize the dark matter halos of nearby galaxies.

The impact of satellites on observations depends on many factors. Many satellites have constant infrared and radio emissions and thus always have the potential to interfere with imaging at those wavelengths. They can also reflect sunlight, but they're most likely to do that when they're near the horizon (meaning what someone on the satellite would see as the dawn or dusk edges of the Earth). While it's possible to prioritize observations that avoid the horizon, that becomes difficult when longer exposures are required. Surveys meant to identify asteroids that cross Earth's orbit will always need to image near the horizon.

Another key factor is the altitude of the observatory. Something like Xuntian, which requires an orbit that takes it to a space station, will necessarily be at a relatively low altitude and therefore below more of the constellations. Something like SPHEREx, a smaller satellite that operates independently, can potentially be lifted to a higher orbit.

So the risk the constellations pose to observatories can vary greatly based on where your observatory is, what wavelengths it's sensitive to, and what you're doing with it. That's why Borlaff, Marcum, and Howell looked at several very different pieces of hardware, although they did limit their analysis to interference from satellites that would be sunlit as they traversed across the observatory's field of view.

If constellations are built out to their planned extents, there will be roughly 550,000 satellites in orbit. At that point, the researchers estimate that the average image captured by Hubble would have two satellite tracks, while SPHEREx would have five. Things get much worse from there: 69 for ARRAKIHS and 92 for Xuntian. Over a third of the Hubble images would see at least one track, while almost all the images from the other telescopes would have at least one.

Xuntian's problems are largely the product of its low altitude. ARRAKIHS is higher, but it has a wider field of view and is expected to take long (600-second) exposures, increasing the chance that a satellite will wander by. Hubble, in contrast, has a narrow field of view, which limits how often satellites cross it.
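
The paper's detailed modeling is not reproduced here, but the scaling the article describes can be illustrated with a crude back-of-envelope estimate: the expected number of streaks per exposure grows with the sky density of sunlit satellites, the width of the field of view, the satellites' apparent angular speed, and the exposure length. Every number below is a rough assumption for illustration, not a value from the study.

    # Crude estimate of satellite streaks per exposure (all inputs are assumptions).
    n_sunlit_visible = 5000        # sunlit satellites above the local horizon (assumed)
    sky_area_deg2 = 20626          # roughly half the celestial sphere, in square degrees
    density = n_sunlit_visible / sky_area_deg2   # satellites per square degree

    fov_width_deg = 1.0            # field-of-view width (assumed)
    fov_height_deg = 1.0           # field-of-view height (assumed)
    ang_speed_deg_s = 0.5          # apparent angular speed of a low-orbit satellite (assumed)
    exposure_s = 600               # a long exposure, comparable to ARRAKIHS plans

    # Satellites already in the field, plus those sweeping in through one edge
    # during the exposure.
    streaks = density * fov_width_deg * (fov_height_deg + ang_speed_deg_s * exposure_s)
    print(round(streaks, 1), "streaks per exposure (very rough)")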

To validate their estimates, the researchers modeled the impact of the present population of satellites on Hubble and came up with a rate of satellite tracks similar to the rate that has actually been observed.

There's no obvious way to deal with this. Right now, best practices involve orienting satellites to limit their ability to reflect light toward ground-based telescopes. But that orientation actually increases the odds that they'll reflect light toward space-based hardware. In addition, satellites will orient their solar panels toward the Sun, which means they're more likely to be face-on and maximally reflective toward any telescopes pointed away from the Sun.

Lowering the orbits of constellations will get them out of the way of more of our observatories, but it will mean the satellites experience more atmospheric friction, so they'll have a shorter lifetime in orbit—something the companies putting them there are unlikely to accept. Nevertheless, that's the best solution the astronomers have, as the researchers write that it is "critical to designate safe and limited orbit layers for a sustainable use of space."

Nature, 2025. DOI: 10.1038/s41586-025-09759-5 (About DOIs).


Original Submission

posted by janrinok on Wednesday December 10, @03:25PM

Germany bets billions on nuclear fusion for energy future – DW – 10/29/2025:

Germany consumes vast amounts of energy to sustain its manufacturing might and energy-intensive sectors like the automotive and chemical industries.

The country, Europe's largest economy, still relies heavily on fossil fuels for its energy needs, even though the share of renewable sources like wind and solar has risen steadily over the past two decades.

The German government has been implementing an ambitious energy transition plan to achieve net-zero greenhouse gas emissions by 2045. It completely phased out nuclear power in 2023, and plans to wean itself off coal by 2038.

To balance the energy and environmental commitments, Berlin is also betting on new technologies such as green hydrogen and nuclear fusion.

Chancellor Friedrich Merz's Cabinet this month unveiled an action plan [PDF in German] to accelerate the development of nuclear fusion technology. It wants Germany to build the world's first fusion reactor, allocating €1.7 billion ($1.98 billion) in funding for the project.

Berlin hopes the technology will provide abundant clean, safe and reliable energy in the future.

Sarah Klein, commissioner for fusion research at the Fraunhofer Institute for Laser Technology in Aachen, Germany, says investing in fusion technology is a "smart long‑term strategic bet."

"[It] keeps Germany at the forefront of a global technology race and — alongside renewables — is crucial for ensuring energy sovereignty after the phaseout of fossil fuels," she told DW.

Sibylle Günter, scientific director of the Max Planck Institute for Plasma Physics, agreed, noting that German energy demand is "rising steadily."

"Nuclear fusion is a technology that can help us secure our energy supply without CO2 emissions in the long term and remain competitive as an industrial nation," Günter told DW.

Scientists have for decades sought to harness nuclear fusion to generate energy.

It involves smashing together two light atomic nuclei at such high temperatures and pressures that they fuse and release energy. It's the same basic process that converts hydrogen in the sun into helium, generating sunlight and making life on Earth possible.
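
For reference, the specific reaction most reactor designs aim to exploit, whether laser-driven or magnetically confined, is deuterium-tritium fusion, which releases about 17.6 MeV per event, most of it carried away by the neutron:

    % Deuterium-tritium fusion, the reaction most reactor concepts target
    {}^{2}_{1}\mathrm{D} + {}^{3}_{1}\mathrm{T} \longrightarrow {}^{4}_{2}\mathrm{He}\,(3.5~\mathrm{MeV}) + n\,(14.1~\mathrm{MeV})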

Fusion is the reverse of what happens in today's nuclear power plants — nuclear fission — where heavy atoms are split in a chain reaction to release energy.

Unlike nuclear fission, nuclear fusion leaves behind no long-lived radioactive waste, thus holding the promise of delivering abundant, climate-friendly energy.

Germany is not alone in betting big on nuclear fusion. Countries like the US, China, Japan and the UK have been pumping billions into accelerating the development of the technology. In addition, dozens of private startups have joined the fray.

"The most innovative economies in the world are already making substantial investments in fusion. Therefore, investing in fusion is a vital future strategy for Germany's high-tech sector," Klein said.

The Fraunhofer scientist underlines that the investment is crucial for the country to remain competitive on the global stage and secure technological sovereignty.

"Beyond the science, fusion acts as a catalyst for innovation," she said, pointing to other critical technologies such as superconducting magnets, high‑power systems, advanced materials, robotics and artificial intelligence (AI).

"It is vital to involve industry stakeholders early to initiate and leverage spillover effects into other markets," she added.

Critics, however, view the spending of vast sums on the pursuit of nuclear fusion as misguided and a waste of resources. They argue that the money could be better spent on scaling up other renewable projects.

But Sibylle Günter is convinced there mustn't be a "conflict between renewables and fusion energy" as the two can "complement each other."

"Wind and solar power cannot supply electricity continuously, but fusion can. Fusion can also provide process heat for industry and energy for the production of synthetic fuels such as hydrogen," she said.

After decades of research, scientists first managed to achieve a net energy gain — meaning the energy delivered by the fusion reaction was higher than what was used to make the atomic nuclei fuse — at the end of 2022.

The experiment used high-powered lasers to achieve the feat. Other concepts use strong magnetic fields to confine super-hot plasma particles that combine and fuse to release energy.

The 2022 breakthrough and subsequent experiments have raised hopes of unlocking fusion's full potential in the near future.

Daniel Kammen, Bloomberg distinguished professor of energy and climate justice at Johns Hopkins University, thinks the "old adage" that nuclear fusion is five decades away, and has been five decades away for many decades is "no longer true."

"Advances in the diversity of approaches, in the use of machine learning and AI to control issues like magnetic (tokamak) confinement, and in system operation have all radically changed the situation," he told DW in an emailed statement.

"I have forecast that fusion prototypes will be in the pilot phase on the grid within a decade, and possibly sooner."

But other experts, including Sarah Klein, say it will take longer for commercially viable fusion power to materialize. "It's true that commercial fusion remains a long‑term prospect with significant technical and economic uncertainty. So it cannot substitute for the urgent deployment of renewables and storage today."

Klein's view is shared by Sibylle Günter, who expects the first fusion-power plants to go on the grid "in about two decades," but only if the necessary efforts are made now.

"The question is: Are we prepared today to invest in a technology so that it will be available when we need it to meet our growing energy needs?"


Original Submission

posted by janrinok on Wednesday December 10, @10:43AM

https://www.sciencenews.org/article/therapeutic-hpv-vaccine-cervical-cancer

An experimental nasal vaccine could one day serve as a treatment for cervical cancer.

In mice, the vaccine unleashed cancer-fighting immune cells in the cervix and ultimately shrank tumors, researchers report November 12 in Science Translational Medicine.

The vaccine targets a cancer protein made by the human papillomavirus, or HPV, and takes a therapeutic approach rather than a preventative one, says Rika Nakahashi-Ouchida, an immunologist at Chiba University in Japan. Treatments like this are urgently needed for people who have already been infected with HPV and now have precancerous lesions growing in their cervix, she says. "We believe these vaccines could expand treatment options for patients."

Globally, some 660,000 new cases of cervical cancer are diagnosed each year, with most caused by HPV. Current HPV vaccines like Gardasil-9 are preventative, blocking the virus from infecting the body and stamping out new cases of cervical cancer. A large 2024 study in Scotland, for instance, reported zero cases of cervical cancer among women vaccinated at age 12 or 13 since the country began its vaccination program in 2008.

But preventative vaccines can't quash existing infections. Infected people who develop cancer must rely on treatments such as surgery, radiation and chemotherapy.

Therapeutic vaccines, which direct the immune system to attack cancer, offer a potential new treatment. While many groups are developing such vaccines for cervical cancer — most as injectable shots — none are available for use. Nakahashi-Ouchida's team tried a different approach: a nanogel vaccine squirted into the nose.

The team's gel carries a protein from a cancer-causing HPV strain; researchers modified the protein to be harmless. In mice with cervical tumors, vaccination prompted an immune response to migrate from mucosal tissue in the nose to tumor tissue in the cervix, the team found. Those tumors then started to shrink.

"I was very excited to see that," Nakahashi-Ouchida says. She hadn't been sure that nasal vaccination could spark a response in tissue as distant as the cervix. In other experiments in macaques, the vaccine also spurred cancer protein-targeting immune cells to beeline to cervical tissue.

Nakahashi-Ouchida says there's much to do before such a vaccine is ready for clinical use. She'd like the vaccine to include cancer proteins from other HPV strains, for one. With such tweaks and further testing, she estimates a nasal vaccine for cervical cancer could be available in about five years.


Original Submission

posted by janrinok on Wednesday December 10, @05:52AM

Zig prez complains about 'vibe-scheduling' after safe sleep bug goes unaddressed for eons:

The Foundation that promotes the Zig programming language has quit GitHub due to what its leadership perceives as the code sharing site's decline.

The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled "safe_sleep.sh rarely hangs indefinitely." GitHub addressed the problem in August, but didn't reveal that in the thread, which remained open until Monday.

That timing appears notable. Last week, Andrew Kelley, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.

One piece of evidence he offered for that assessment was the "safe_sleep.sh rarely hangs indefinitely" thread.

"Most importantly, Actions has inexcusable bugs while being completely neglected," Kelly wrote. "After the CEO of GitHub said to 'embrace AI or get out', it seems the lackeys at Microsoft took the hint, because GitHub Actions started 'vibe-scheduling' – choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked."

Kelley's gripe seems justified, as the bug discussed in the thread appears to have popped up following a code change in February 2022 that users flagged in prior bug reports.

The code change replaced instances of the posix "sleep" command with a "safe_sleep" script that failed to work as advertised. It was supposed to allow the GitHub Actions runner – the application that runs a job from a GitHub Actions workflow – to pause execution safely.

"The bug in this 'safe sleep' script is obvious from looking at it: if the process is not scheduled for the one-second interval in which the loop would return (due to $SECONDS having the correct value), then it simply spins forever," wrote Zig core developer Matthew Lugg in a comment appended to the April bug thread.

"That can easily happen on a CI machine under extreme load. When this happens, it's pretty bad: it completely breaks a runner until manual intervention. On Zig's CI runner machines, we observed multiple of these processes which had been running for hundreds of hours, silently taking down two runner services for weeks."

[...] While Kelley has gone on to apologize for the incendiary nature of his post, Zig is not the only software project publicly parting ways with GitHub.

Over the weekend, Rodrigo Arias Mallo, creator of the Dillo browser project, said he's planning to move away from GitHub owing to concerns about over-reliance on JavaScript, GitHub's ability to deny service, declining usability, inadequate moderation tools, and "over-focusing on LLMs and generative AI, which are destroying the open web (or what remains of it) among other problems."

Zig is a general-purpose programming language and toolchain for maintaining robust, optimal and reusable software.


Original Submission

posted by jelizondo on Wednesday December 10, @01:09AM
from the it's-about-biology-not-your-mobile-phone dept.

https://scitechdaily.com/this-cellular-trick-helps-cancer-spread-but-could-also-stop-it/

The tale of the princess and the pea describes someone so sensitive that she can detect a single pea beneath layers of bedding. In the biological world, certain cells show a similar heightened sensitivity. Cancer cells, in particular, have an extraordinary ability to detect and respond to their surroundings far beyond their immediate environment.

Now, scientists have discovered that even normal cells can achieve this extended sensing ability—by working together.

A study published in PNAS by engineers at Washington University in St. Louis reveals new insights into how cells perceive what lies beyond their immediate environment. These findings deepen understanding of cancer cell migration and could help identify new molecular targets to halt tumor spread.

Amit Pathak, professor of mechanical engineering and materials science at the McKelvey School of Engineering, described this process as "depth mechano-sensing"—the ability of cells to perceive structures beneath the surfaces they adhere to. In earlier research, Pathak and his team discovered that abnormal cells with "high front-rear polarity" (a feature typical of cells in motion) can detect their surroundings up to 10 microns beyond the layer they are attached to.

This extended sensing occurs as the cell reshapes the surrounding fibrous collagen, allowing it to probe deeper into the extracellular matrix (ECM) and "feel" what lies ahead—whether a dense tumor, soft tissue, or bone. A single abnormal cell can gauge the stiffness of the ECM and navigate based on this mechanical feedback.

The new research shows that a collective of epithelial cells, found on the surface of tissue, can do the same and then some, working together to muster enough force to "feel" through the fibrous collagen layer as far as 100 microns away.

"Because it's a collective of cells, they are generating higher forces," said Pathak, who authored the research along with PhD student Hongsheng Yu.

According to their models, this occurs in two distinct phases of cell clustering and migration. What those clustering cells "feel" will impact migration and dispersal.

Implications for cancer research and treatment

The extra sensing power of cancer cells means that they can get out of the tumor environment and evade detection, migrating freely thanks to their enhanced sense of what's ahead, even in a soft environment. Researchers' next step will be understanding how that works, and if certain regulators allow for the range. Those regulators could be potential targets for cancer therapy. If a cancer cell can't "feel" its way forward, its toxic spread may be put in check.

Reference: “Emergent depth-mechanosensing of epithelial collectives regulates cell clustering and dispersal on layered matrices” by Hongsheng Yu and Amit Pathak, 11 September 2025, Proceedings of the National Academy of Sciences.

DOI: 10.1073/pnas.2423875122


Original Submission