
posted by jelizondo on Thursday December 11, @12:52AM

Chattanooga's Municipal Fiber Network Has Delivered $5.3 Billion in Community Benefits, New Study Finds:

For years, Chattanooga has been known as "America's first Gig City," thanks to its city-owned fiber network.

That same infrastructure has positioned Chattanooga to potentially become the nation's first "Quantum City," according to a new economic impact analysis showing that EPB Fiber and the utility's smart-grid systems have generated $5.3 billion in net community benefits for Hamilton County since 2011.

The city is now poised to enter the Quantum realm.

The research builds on an earlier 10‑year return‑on‑investment analysis – published in August 2020 [PDF] – that showed the city's publicly-owned fiber network had delivered $2.69 billion in value over its first decade.

The new follow-up study – From Gig City to Quantum City: The Value of Fiber Optic Infrastructure in Hamilton County, TN 2011-2035 [PDF] – expands the time horizon and finds that over 15 years the total community benefit has grown to $5.3 billion, illuminating how the long‑term value of municipal broadband can really pay off.

Conducted by researchers at the University of Tennessee at Chattanooga, the study finds that the municipal fiber network has dramatically reshaped the regional economy, supporting 10,420 jobs from 2011 to 2024 – about 31 percent of all net new jobs created locally over the past decade.

The return on investment has been extraordinary: the network has delivered 6.4 times the value of its original $396 million investment, the study indicates.

For years, opponents of municipal broadband have argued that networks like EPB's require taxpayer subsidies, distort markets, and undermine competition. But the new study directly contradicts that narrative.

[...] The researchers conclude that EPB's fiber division has done the reverse – generating surplus revenue that "reverse subsidizes" electric ratepayers by strengthening the utility's financial position, reducing the pressure for rate hikes, and avoiding high borrowing costs.

The study adds that municipal fiber also made the local broadband market more competitive, not less.

[...] For policymakers, the study reinforces a central reality: broadband networks are core infrastructure, with a long-term value that extends far beyond the balance sheet.


Original Submission

posted by jelizondo on Wednesday December 10, @08:11PM

Planned orbital observatories would see satellites cross nearly all of their images:

On Wednesday, three NASA astronomers released an analysis showing that several planned orbital telescopes would see their images criss-crossed by satellite constellations such as a fully expanded Starlink and its competitors. While the impact of these constellations on ground-based astronomy has been widely considered, orbital hardware was thought to be relatively immune from their interference. But the planned expansion of constellations, coupled with some of the features of upcoming missions, means that at least one proposed observatory would see an average of nearly 100 satellite tracks in every exposure.

Making matters worse, some of the measures meant to minimize the impact on ground-based telescopes will actually exacerbate it for telescopes in orbit.

Satellite constellations are a relatively new threat to astronomy; prior to the drop in launch costs driven by SpaceX's reusable rockets, the largest constellations in orbit consisted of a few dozen satellites. But the rapid growth of the Starlink system caused problems for ground-based astronomy that are not easy to solve.

Unfortunately, even if we had an infinite budget, we couldn't just solve this by increasing our reliance on space-based hardware. While orbiting telescopes may be above some of the problem-causing constellations, enough of the new hardware is at altitudes where it can interfere with observations. A check of the image archive of the Hubble Space Telescope, for example, shows that over four percent of recent images contain a satellite track, a significant increase from earlier in the century.

(There are some space-based telescopes that aren't orbiting the Earth, like the James Webb Space Telescope, that will remain worry-free. But these require expensive launches and are too far from Earth for the sort of regular servicing that something like the Hubble has received.)

And the problem will only get worse, according to three astronomers at NASA's Ames Research Center in California (Alejandro Borlaff, Pamela Marcum, and Steve Howell). Based on filings made with the Federal Communications Commission, they found that the current total of satellites represents only 3 percent of what will be in orbit a decade from now if everybody's planned launches take place.

To estimate the impact that this massive population of satellites might have on astronomy, the three researchers focused on several key orbiting observatories. One is the Hubble. Another is the recently launched SPHEREx, which will perform an all-sky survey in the infrared. The Chinese are developing a telescope called Xuntian that will operate in conjunction with their orbiting space station, and the ESA is preparing a mission called ARRAKIHS meant to characterize the dark matter halos of nearby galaxies.

The impact of satellites on observations depends on many factors. Many satellites have constant infrared and radio emissions and thus always have the potential to interfere with imaging at those wavelengths. They can also reflect sunlight, but they're most likely to do that when they're near the horizon (meaning what someone on the satellite would see as the dawn or dusk edges of the Earth). While it's possible to prioritize observations that avoid the horizon, that becomes difficult when longer exposures are required. Surveys meant to identify asteroids that cross Earth's orbit will always need to image near the horizon.

Another key factor is the altitude of the observatory. Something like Xuntian, which requires an orbit that takes it to a space station, will necessarily be at a relatively low altitude and therefore below more of the constellations. Something like SPHEREx, a smaller satellite that operates independently, can potentially be lifted to a higher orbit.

So the risk the constellations pose to observatories can vary greatly based on where your observatory is, what wavelengths it's sensitive to, and what you're doing with it. That's why Borlaff, Marcum, and Howell looked at several very different pieces of hardware, although they did limit their analysis to interference from satellites that would be sunlit as they traversed the observatory's field of view.

If constellations are built out to their planned extents, there will be roughly 550,000 satellites in orbit. At that point, the researchers estimate that the average image captured by Hubble would have two satellite tracks, while SPHEREx would have five. Things get much worse from there: 69 for ARRAKIHS and 92 for Xuntian. Over a third of the Hubble images would see at least one track, while almost all the images from the other telescopes would have at least one.

Xuntian's problems are largely the product of its low altitude. ARRAKIHS is higher, but it has a wider field of view and is expected to take long (600-second) exposures, increasing the chance that a satellite will wander by. Hubble, in contrast, has a narrow field of view, which limits how often satellites cross it.
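
The scaling at work here can be sketched with a simple sweep-area estimate: the expected number of streaks grows with the sky density of visible satellites, the width of the field of view, and the exposure time. Here is a minimal sketch in Python, where every parameter is an illustrative assumption rather than a figure from the paper:

    import math

    # Back-of-envelope streak estimate (illustrative assumptions, not the
    # authors' model). During an exposure, the field of view sweeps a
    # "stadium" of sky relative to each moving satellite: the FOV disc
    # itself plus the band it covers as satellites drift through it.
    n_visible = 5000                      # assumed sunlit satellites above horizon
    density = n_visible / (2 * math.pi)   # satellites per steradian of sky

    fov_deg = 1.0        # assumed field-of-view diameter, degrees
    omega_deg_s = 0.5    # assumed apparent angular speed, degrees/second
    t_exp = 600.0        # exposure length, seconds (ARRAKIHS-like)

    deg = math.pi / 180.0
    fov = fov_deg * deg
    swept = math.pi * (fov / 2) ** 2 + fov * (omega_deg_s * deg) * t_exp
    print(density * swept)   # expected streaks per exposure, roughly 70 here

Doubling the exposure time or the field width roughly doubles the expected count, which is why a wide-field, long-exposure mission like ARRAKIHS fares so much worse than the narrow-field Hubble.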

To validate their estimates, the researchers modeled the impact of the present population of satellites on Hubble and came up with a rate of satellite tracks similar to the rate that has actually been observed.

There's no obvious way to deal with this. Right now, best practices involve orienting satellites to limit their ability to reflect light toward ground-based telescopes. But that orientation actually increases the odds that they'll reflect light toward space-based hardware. In addition, satellites will orient their solar panels toward the Sun, which means they're more likely to be face-on and maximally reflective toward any telescopes pointed away from the Sun.

Lowering the orbits of constellations will get them out of the way of more of our observatories, but it will mean the satellites experience more atmospheric friction, so they'll have a shorter lifetime in orbit—something the companies putting them there are unlikely to accept. Nevertheless, that's the best solution the astronomers have, as the researchers write that it is "critical to designate safe and limited orbit layers for a sustainable use of space."

Nature, 2025. DOI: 10.1038/s41586-025-09759-5


Original Submission

posted by janrinok on Wednesday December 10, @03:25PM

Germany bets billions on nuclear fusion for energy future – DW – 10/29/2025:

Germany consumes vast amounts of energy to sustain its manufacturing might and energy-intensive sectors like the automotive and chemical industries.

The country, Europe's largest economy, still relies heavily on fossil fuels for its energy needs, even though the share of renewable sources like wind and solar has risen steadily over the past two decades.

The German government has been implementing an ambitious energy transition plan to achieve net-zero greenhouse gas emissions by 2045. It completely phased out nuclear power in 2023, and plans to wean itself off coal by 2038.

To balance the energy and environmental commitments, Berlin is also betting on new technologies such as green hydrogen and nuclear fusion.

Chancellor Friedrich Merz's Cabinet this month unveiled an action plan [PDF in German] to accelerate the development of nuclear fusion technology. It wants Germany to build the world's first fusion reactor, allocating €1.7 billion ($1.98 billion) in funding for the project.

Berlin hopes the technology will provide abundant clean, safe and reliable energy in the future.

Sarah Klein, commissioner for fusion research at the Fraunhofer Institute for Laser Technology in Aachen, Germany, says investing in fusion technology is a "smart long‑term strategic bet."

"[It] keeps Germany at the forefront of a global technology race and — alongside renewables — is crucial for ensuring energy sovereignty after the phaseout of fossil fuels," she told DW.

Sibylle Günter, scientific director of the Max Planck Institute for Plasma Physics, agreed, noting that German energy demand is "rising steadily."

"Nuclear fusion is a technology that can help us secure our energy supply without CO2 emissions in the long term and remain competitive as an industrial nation," Günter told DW.

Scientists have for decades sought to harness nuclear fusion to generate energy.

It involves smashing together two light atomic nuclei at such high temperatures and pressures that they fuse and release energy. It's the same basic process that sees hydrogen in the sun converted into helium, generating sunlight and making life on Earth possible.
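
The textbook example, and the fuel cycle most reactor concepts target, is deuterium-tritium fusion:

    D + T  ->  He-4 (3.5 MeV) + n (14.1 MeV)

Each reaction releases about 17.6 MeV, millions of times more energy per unit of fuel mass than burning fossil fuels.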

Fusion is the reverse of what happens in today's nuclear power plants — nuclear fission — where large atoms are split in a chain reaction to release energy.

Unlike nuclear fission, nuclear fusion leaves behind no long-lived radioactive waste, thus holding the promise of delivering abundant, climate-friendly energy.

Germany is not alone in betting big on nuclear fusion. Countries like the US, China, Japan and the UK have been pumping billions into accelerating the development of the technology. In addition, dozens of private startups have joined the fray.

"The most innovative economies in the world are already making substantial investments in fusion. Therefore, investing in fusion is a vital future strategy for Germany's high-tech sector," Klein said.

The Fraunhofer scientist underlines that the investment is crucial for the country to remain competitive on the global stage and secure technological sovereignty.

"Beyond the science, fusion acts as a catalyst for innovation," she said, pointing to other critical technologies such as superconducting magnets, high‑power systems, advanced materials, robotics and artificial intelligence (AI).

"It is vital to involve industry stakeholders early to initiate and leverage spillover effects into other markets," she added.

Critics, however, view the spending of vast sums on the pursuit of nuclear fusion as misguided and a waste of resources. They argue that the money could be better spent on scaling up other renewable projects.

But Sibylle Günter is convinced there mustn't be a "conflict between renewables and fusion energy" as the two can "complement each other."

"Wind and solar power cannot supply electricity continuously, but fusion can. Fusion can also provide process heat for industry and energy for the production of synthetic fuels such as hydrogen," she said.

After decades of research, scientists first managed to achieve a net energy gain — meaning the energy delivered by the fusion reaction was higher than what was used to make the atomic nuclei fuse — at the end of 2022.

The experiment used high-powered lasers to achieve the feat. Other concepts use strong magnetic fields to confine super-hot plasma particles that combine and fuse to release energy.

The 2022 breakthrough and subsequent experiments have raised hopes of unlocking fusion's full potential in the near future.

Daniel Kammen, Bloomberg distinguished professor of energy and climate justice at Johns Hopkins University, thinks the "old adage" that nuclear fusion is five decades away, and has been five decades away for many decades, is "no longer true."

"Advances in the diversity of approaches, in the use of machine learning and AI to control issues like magnetic (tokamak) confinement, and in system operation have all radically changed the situation," he told DW in an emailed statement.

"I have forecast that fusion prototypes will be in the pilot phase on the grid within a decade, and possibly sooner."

But other experts, including Sarah Klein, say it will take longer for commercially viable fusion power to materialize. "It's true that commercial fusion remains a long‑term prospect with significant technical and economic uncertainty. So it cannot substitute for the urgent deployment of renewables and storage today."

Klein's view is shared by Sibylle Günter, who expects the first fusion-power plants to go on the grid "in about two decades," but only if the necessary efforts are made now.

"The question is: Are we prepared today to invest in a technology so that it will be available when we need it to meet our growing energy needs?"


Original Submission

posted by janrinok on Wednesday December 10, @10:43AM

https://www.sciencenews.org/article/therapeutic-hpv-vaccine-cervical-cancer

An experimental nasal vaccine could one day serve as a treatment for cervical cancer.

In mice, the vaccine unleashed cancer-fighting immune cells in the cervix and ultimately shrank tumors, researchers report November 12 in Science Translational Medicine.

The vaccine targets a cancer protein made by the human papillomavirus, or HPV, and takes a therapeutic approach rather than a preventative one, says Rika Nakahashi-Ouchida, an immunologist at Chiba University in Japan. Treatments like this are urgently needed for people who have already been infected with HPV and now have precancerous lesions growing in their cervix, she says. "We believe these vaccines could expand treatment options for patients."

Globally, some 660,000 new cases of cervical cancer are diagnosed each year, with most caused by HPV. Current HPV vaccines like Gardasil-9 are preventative, blocking the virus from infecting the body and stamping out new cases of cervical cancer. A large 2024 study in Scotland, for instance, reported zero cases of cervical cancer among women vaccinated at age 12 or 13 since the country began its vaccination program in 2008.

But preventative vaccines can't quash existing infections. Infected people who develop cancer must rely on treatments such as surgery, radiation and chemotherapy.

Therapeutic vaccines, which direct the immune system to attack cancer, offer a potential new treatment. While many groups are developing such vaccines for cervical cancer — most as injectable shots — none are available for use. Nakahashi-Ouchida's team tried a different approach: a nanogel vaccine squirted into the nose.

The team's gel carries a protein from a cancer-causing HPV strain; researchers modified the protein to be harmless. In mice with cervical tumors, vaccination prompted an immune response to migrate from mucosal tissue in the nose to tumor tissue in the cervix, the team found. Those tumors then started to shrink.

"I was very excited to see that," Nakahashi-Ouchida says. She hadn't been sure that nasal vaccination could spark a response in tissue as distant as the cervix. In other experiments in macaques, the vaccine also spurred cancer protein-targeting immune cells to beeline to cervical tissue.

Nakahashi-Ouchida says there's much to do before such a vaccine is ready for clinical use. She'd like the vaccine to include cancer proteins from other HPV strains, for one. With such tweaks and further testing, she estimates a nasal vaccine for cervical cancer could be available in about five years.


Original Submission

posted by janrinok on Wednesday December 10, @05:52AM

Zig prez complains about 'vibe-scheduling' after safe sleep bug goes unaddressed for eons:

The Foundation that promotes the Zig programming language has quit GitHub due to what its leadership perceives as the code sharing site's decline.

The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled "safe_sleep.sh rarely hangs indefinitely." GitHub addressed the problem in August, but didn't reveal that in the thread, which remained open until Monday.

That timing appears notable. Last week, Andrew Kelley, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.

One piece of evidence he offered for that assessment was the "safe_sleep.sh rarely hangs indefinitely" thread.

"Most importantly, Actions has inexcusable bugs while being completely neglected," Kelley wrote. "After the CEO of GitHub said to 'embrace AI or get out', it seems the lackeys at Microsoft took the hint, because GitHub Actions started 'vibe-scheduling' – choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked."

Kelley's gripe seems justified, as the bug discussed in the thread appears to have popped up following a code change in February 2022 that users flagged in prior bug reports.

The code change replaced instances of the posix "sleep" command with a "safe_sleep" script that failed to work as advertised. It was supposed to allow the GitHub Actions runner – the application that runs a job from a GitHub Actions workflow – to pause execution safely.

"The bug in this 'safe sleep' script is obvious from looking at it: if the process is not scheduled for the one-second interval in which the loop would return (due to $SECONDS having the correct value), then it simply spins forever," wrote Zig core developer Matthew Lugg in a comment appended to the April bug thread.

"That can easily happen on a CI machine under extreme load. When this happens, it's pretty bad: it completely breaks a runner until manual intervention. On Zig's CI runner machines, we observed multiple of these processes which had been running for hundreds of hours, silently taking down two runner services for weeks."

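The failure mode is a classic one and not specific to shell scripts: polling a clock with an equality test can be skipped entirely under scheduler pressure. A minimal Python sketch of the same pattern (a hypothetical illustration, not the actual runner code):

    import time

    # Hypothetical sketch of the race described above (the real script is
    # bash). If the process isn't scheduled during the one interval in
    # which the equality test would succeed, the loop never terminates.
    def unsafe_sleep(seconds):
        target = int(time.monotonic()) + seconds
        while int(time.monotonic()) != target:   # equality: can be missed
            pass                                 # busy-wait

    # An ordering comparison cannot be skipped past, however the process
    # gets scheduled:
    def robust_sleep(seconds):
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            time.sleep(0.1)

Replacing the equality test with an ordering test (or simply calling the system sleep) removes the hang entirely.
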
[...] While Kelley has gone on to apologize for the incendiary nature of his post, Zig is not the only software project publicly parting ways with GitHub.

Over the weekend, Rodrigo Arias Mallo, creator of the Dillo browser project, said he's planning to move away from GitHub owing to concerns about over-reliance on JavaScript, GitHub's ability to deny service, declining usability, inadequate moderation tools, and "over-focusing on LLMs and generative AI, which are destroying the open web (or what remains of it) among other problems."

Zig is a general-purpose programming language and toolchain for maintaining robust, optimal and reusable software.


Original Submission

posted by jelizondo on Wednesday December 10, @01:09AM
from the it's-about-biology-not-your-mobile-phone dept.

https://scitechdaily.com/this-cellular-trick-helps-cancer-spread-but-could-also-stop-it/

The tale of the princess and the pea describes someone so sensitive that she can detect a single pea beneath layers of bedding. In the biological world, certain cells show a similar heightened sensitivity. Cancer cells, in particular, have an extraordinary ability to detect and respond to their surroundings far beyond their immediate environment.

Now, scientists have discovered that even normal cells can achieve this extended sensing ability—by working together.

A study published in PNAS by engineers at Washington University in St. Louis reveals new insights into how cells perceive what lies beyond their immediate environment. These findings deepen understanding of cancer cell migration and could help identify new molecular targets to halt tumor spread.

Amit Pathak, professor of mechanical engineering and materials science at the McKelvey School of Engineering, described this process as "depth mechano-sensing"—the ability of cells to perceive structures beneath the surfaces they adhere to. In earlier research, Pathak and his team discovered that abnormal cells with "high front-rear polarity" (a feature typical of cells in motion) can detect their surroundings up to 10 microns beyond the layer they are attached to.

This extended sensing occurs as the cell reshapes the surrounding fibrous collagen, allowing it to probe deeper into the extracellular matrix (ECM) and "feel" what lies ahead—whether a dense tumor, soft tissue, or bone. A single abnormal cell can gauge the stiffness of the ECM and navigate based on this mechanical feedback.

The new research shows that a collective of epithelial cells, found on the surface of tissue, can do the same and then some, working together to muster enough force to "feel" through the fibrous collagen to layers as far as 100 microns away.

"Because it's a collective of cells, they are generating higher forces," said Pathak, who authored the research along with PhD student Hongsheng Yu.

According to their models, this occurs in two distinct phases of cell clustering and migration. What those clustering cells "feel" will impact migration and dispersal.

Implications for cancer research and treatment

The extra sensing power of cancer cells means that they can get out of the tumor environment and evade detection, migrating freely thanks to their enhanced sense of what's ahead, even in a soft environment. Researchers' next step will be understanding how that works, and if certain regulators allow for the range. Those regulators could be potential targets for cancer therapy. If a cancer cell can't "feel" its way forward, its toxic spread may be put in check.

Reference: “Emergent depth-mechanosensing of epithelial collectives regulates cell clustering and dispersal on layered matrices” by Hongsheng Yu and Amit Pathak, 11 September 2025, Proceedings of the National Academy of Sciences.

DOI: 10.1073/pnas.2423875122


Original Submission

posted by jelizondo on Tuesday December 09, @08:23PM
from the is-that-a-tablet-in-your-pocket? dept.

https://www.extremetech.com/mobile/samsungs-first-tri-fold-phone-is-here-everything-you-need-to-know


Samsung has officially unveiled the Galaxy Z TriFold, the company's first tri-fold smartphone. The device comes after Huawei launched its Mate XT, but Samsung's approach has a distinctive inward-folding design that keeps the large main screen fully protected when closed.

The Galaxy Z TriFold has a vast, 10-inch screen when fully unfolded, turning the phone into a tablet. When folded, it has a standard 6.5-inch front screen. The design uses a "Flex G" architecture with both hinges folding inward, contrasting with Huawei's outward-folding approach.

The device contains a Snapdragon 8 Elite for Galaxy processor, 16GB of RAM, and storage options ranging from 512GB to 1TB. The 5,600mAh battery supports 45W wired charging and 15W wireless charging. It features a 200MP main sensor, a 12MP ultra-wide lens, and a 10MP telephoto lens with 3x optical zoom.

The tri-fold weighs 309 grams and measures just 3.9mm thick when fully unfolded. It is rated IP48, meaning it is water-resistant to 1.5 meters but isn't dust-tight.

Pre-orders in South Korea begin Dec. 12, 2025, with a price of around $2,450. The US launch is confirmed for Q1 2026, with availability also coming to China, Singapore, Taiwan, and the UAE. The device launches with Android 16 and One UI 8, featuring multitasking modes that allow three full-sized apps to run simultaneously.


Original Submission

posted by jelizondo on Tuesday December 09, @03:39PM

New research indicates that complex life began forming almost a billion years earlier than previously thought:

New research has uncovered that complex life began forming much earlier, and across a longer timeframe, than scientists had previously assumed. The findings offer fresh insight into the environmental conditions that shaped early evolution and call into question several longstanding scientific ideas in this field.

Led by the University of Bristol and published today in Nature (December 3), the study reports that complex organisms arose well before oxygen became abundant in Earth's atmosphere. Oxygen had long been thought to be essential for the development of advanced life, but the results indicate that this requirement may not hold for the earliest stages of evolution.

"The Earth is approximately 4.5 billion years old, with the first microbial life forms appearing over 4 billion years ago. These organisms consisted of two groups – bacteria and the distinct but related archaea, collectively known as prokaryotes," said co-author Anja Spang, from the Department of Microbiology & Biogeochemistry at the Royal Netherlands Institute for Sea Research.

Prokaryotes dominated the planet for hundreds of millions of years before more complex eukaryotic cells emerged. This latter group includes algae, fungi, plants, and animals.

Davide Pisani, Professor of Phylogenomics in the School of Biological Sciences at the University of Bristol and co-author, explained: "Previous ideas on how and when early prokaryotes transformed into complex eukaryotes have largely been in the realm of speculation. Estimates have spanned a billion years, as no intermediate forms exist and definitive fossil evidence has been lacking."

To address these uncertainties, the international team expanded upon the existing 'molecular clocks' technique, which estimates when species last shared a common ancestor.
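
The core arithmetic behind a molecular clock is straightforward: after two lineages split, each accumulates substitutions independently, so the observed genetic distance between them is roughly twice the substitution rate multiplied by the divergence time. A toy calculation in Python with invented numbers (the actual study uses relaxed clocks calibrated against fossils, which is considerably more involved):

    # Toy molecular-clock arithmetic (invented numbers, not from the study).
    # After divergence, each lineage evolves independently, so:
    #   distance = 2 * rate * time   =>   time = distance / (2 * rate)
    distance = 0.58   # substitutions per site between the two lineages
    rate = 0.10       # assumed substitutions per site per billion years
    time_byr = distance / (2 * rate)
    print(round(time_byr, 2))   # ~2.9 (billion years)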

"The approach was two-fold: by collecting sequence data from hundreds of species and combining this with known fossil evidence, we were able to create a time-resolved tree of life. We could then apply this framework to better resolve the timing of historical events within individual gene families," added co-lead author Professor Tom Williams in the Department of Life Sciences at the University of Bath.

By comparing more than 100 gene families across multiple biological systems and focusing on traits that differentiate eukaryotes from prokaryotes, the researchers began reconstructing the sequence of events that shaped the rise of complex life.

The team found that the transition toward complexity began nearly 2.9 billion years ago, almost a billion years earlier than some prior estimates. Their results also indicate that the nucleus and other internal cellular structures formed well before mitochondria.

"The process of cumulative complexification took place over a much longer time period than previously thought," said author Gergely Szöllősi, head of the Model-Based Evolutionary Genomics Unit at the Okinawa Institute of Science and Technology (OIST).

These findings enabled the researchers to rule out several existing hypotheses for eukaryogenesis (the evolution of complex life). Because their results did not fully match any current model, they introduced a new scenario called 'CALM' – Complex Archaeon, Late Mitochondrion.

Lead author Dr. Christopher Kay, Research Associate in the School of Biological Sciences at the University of Bristol, explained: "What sets this study apart is looking into detail about what these gene families actually do – and which proteins interact with which – all in absolute time. It has required the combination of a number of disciplines to do this: palaeontology to inform the timeline, phylogenetics to create faithful and useful trees, and molecular biology to give these gene families a context. It was a big job."

"One of our most significant findings was that the mitochondria arose significantly later than expected. The timing coincides with the first substantial rise in atmospheric oxygen," said author Philip Donoghue, Professor of Palaeobiology in the School of Earth Sciences at the University of Bristol.

"This insight ties evolutionary biology directly to Earth's geochemical history. The archaeal ancestor of eukaryotes began evolving complex features roughly a billion years before oxygen became abundant, in oceans that were entirely anoxic."

Reference: "Dated gene duplications elucidate the evolutionary assembly of eukaryotes" 3 December 2025, Nature.


Original Submission

posted by jelizondo on Tuesday December 09, @10:51AM
from the think-of-the-children dept.

https://appleinsider.com/articles/25/12/01/us-wants-laws-to-force-app-store-age-checks-despite-apples-existing-protections

The United States wants big tech companies like Apple to protect children online by adding age verification safeguards to the App Store. It's a political push that completely ignores what protections Apple already provides to parents and children.

Lawmakers have been particularly keen to protect children from online dangers, and have repeatedly demanded big tech companies like Apple and Google do more to help. In the latest attempt to make big tech bend to its demands, the U.S. government is going after the App Store.

The App Store Accountability Act (ASA) was introduced in May as a way for parents to get more tools to protect their children online. In late November, the ASA was brought up in Congress as part of a raft of measures to keep kids safe online, led by the Kids Online Safety Act (KOSA).

It's also due to be discussed as part of a House Energy and Commerce Committee hearing. A report on Monday by The Verge says that the discussion will look at a nine-bill package of measures.

Under the ASA, introduced by Sen. Mike Lee (R-UT) and Rep. John James (R-MI), app storefronts, like Apple's App Store and Google Play, will be required to verify the age of all users in a privacy-protecting way. The result can then be used to limit what apps would be accessible to the user, if they are deemed too young.

Accounts that are used by minors must be linked to a parental account, which would need to provide parental consent for downloading apps or making purchases. App Stores also have to meet standards like providing secure age verification and accurate app age ratings.

While there are some measures in place, such as California's age check law, the intention of the bills is to make things uniform across the United States. Instead of dealing with various laws and measures in different states, there would be one set of overriding rules offering close to the same protections.

As with many attempts to legislate technology, there is a gap between intention and reality. Had lawmakers looked closer, they would have seen that Apple already has something in place that does just what they're asking.

Apple's Family Sharing system allows for a parent or guardian to create an Apple Account for a child under 13 years old. Child accounts can be managed by the parent account in various ways, including Screen Time limits and "Content & Privacy Restrictions" affecting what they can see.

This can limit how much mature or explicit content a child can see across apps and services, including podcasts, music videos, and even Apple Books. In the App Store itself, controls allow parents to require the child account to request access to an app or to make a purchase.

After releasing a whitepaper in February, Apple said in July that it would change the experience for child accounts in iOS 26, including a simplified setup process. The age of the child would also be shared with app developers in the format of an age range, so content can be further tailored to them.

Apple also wanted to expand its age ratings in the App Store to five categories, including new levels for 13+, 16+, and 18+.

A lot of this covers the main thrust of the bill, though age verification is the difficult part. Unlike an adult, a child is unlikely to have much in the way of computer-readable proof of their age.

However, as shown when Utah's version of the bill was implemented, Apple can use a credit card check against the connected adult account when creating a child account.

It's not just Apple that has this in place. Google also has its own parental management system, which will also face scrutiny due to the discussion of the bill.

One element of the bill that won't sit well with Apple is that it effectively shifts the blame for any lapses to it and Google. This is ridiculous.

Currently, the onus is placed on the apps and services themselves to protect children from harmful content, which, frankly, is how it should be. However, by requiring Apple and Google to make the checks in their respective storefronts, there is less need for those services to worry about being attacked by parent groups and critics.

It therefore won't be Facebook's fault when its moderation system lets explicit material slip through, as it constantly does now, nor a web browser's fault for providing access to an adult website.

The blame will instead fall on Apple or Google rather than whoever is actually responsible, because their app stores had to perform the first set of age checks.

[...] The UK's attempt to make the Internet safer for children damaged areas for other users in the country, and created the perfect conditions for even more damaging privacy breaches.

The attempt by the United States to limit child access to damaging material is a just one. That much certainly cannot be argued against.

However, accomplishing it requires care and thought from the content hosts, and from feds who want to regulate without knowing what they're talking about, because the remedy could easily become worse than the disease. The UK has certainly demonstrated that.

S.1586 - App Store Accountability Act


Original Submission

posted by jelizondo on Tuesday December 09, @06:03AM   Printer-friendly

https://www.phoronix.com/news/CDE-2.5.3-Desktop

Two years and one week since the prior point release, Common Desktop Environment 2.5.3 is now available as the latest iteration of this Unix desktop environment built around the Motif toolkit. CDE has been open-source for more than a decade now, but its development is not exactly brisk. Still, for those resisting the likes of Wayland and other modern display tech -- especially with KDE announcing today that Plasma 6.8 will be Wayland-exclusive -- CDE 2.5.3 is now available.

CDE 2.5.3 ships with various bug fixes, support in dtwm for more mouse buttons, compiler fixes and warning cleanups, a new systemd service file for dtlogin, and other mostly minor changes. Besides the dtlogin systemd service file, perhaps most notable are the fixes for satisfying the GCC 15 compiler.

CDE 2.5.3 downloads and more details on this new point release can be found via SourceForge. Yes, another sign of its times.

While on the topic of CDE, you may also recall the separate NxCDE as a modern CDE effort. NxCDE hasn't seen a release since June 2023, nor any commits to its GitHub repository in two years.


Original Submission

posted by hubie on Tuesday December 09, @01:21AM
from the what-an-entangled-web-we-weave? dept.

Physicists have developed a "physics shortcut" that allows ordinary laptops to solve complex quantum dynamics problems, a feat previously reserved for supercomputers and AI models (Live Science). The breakthrough, from the University at Buffalo, is an extension of a decades-old method called the truncated Wigner approximation (TWA).

TWA is a semiclassical approach that simplifies quantum math by retaining necessary quantum behavior while discarding less critical details. Historically, applying TWA required re-deriving complicated math for every new problem, making it inaccessible. The team transformed this into a user-friendly "conversion table" that translates a quantum problem into solvable equations, allowing physicists to get usable results on a consumer laptop within hours (University at Buffalo).
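
To give a flavor of the method: TWA swaps full quantum evolution for an ensemble of classical trajectories whose initial conditions are drawn from the quantum state's Wigner distribution, then averages observables over the ensemble. Below is a minimal sketch in Python for a single Kerr-type oscillator, an illustrative toy under textbook TWA assumptions rather than the Buffalo group's formulation:

    import numpy as np

    # Truncated-Wigner sketch (illustrative toy). Quantum noise enters
    # only through the sampled initial conditions; each sample then
    # follows the classical equation of motion, and observables are
    # ensemble averages over the trajectories.
    rng = np.random.default_rng(0)
    n_samples, chi, alpha0 = 20_000, 0.1, 2.0

    # Wigner function of a coherent state: a Gaussian with standard
    # deviation 1/2 in each quadrature of the complex amplitude.
    a0 = alpha0 + 0.5 * (rng.normal(size=n_samples)
                         + 1j * rng.normal(size=n_samples))

    t = np.linspace(0.0, 20.0, 200)[:, None]
    # Kerr oscillator: each trajectory's phase rotates at a rate set by
    # its own intensity, so the ensemble dephases -- reproducing the
    # decay of <x>(t) that no single classical trajectory shows.
    traj = a0[None, :] * np.exp(-1j * chi * np.abs(a0[None, :]) ** 2 * t)
    x_mean = traj.real.mean(axis=1)   # TWA estimate of <x>(t)
    print(x_mean[:3])

Because each sample is an ordinary classical trajectory (here even solvable in closed form), the cost scales with the number of samples rather than with the exponentially large quantum state space, which is what puts such simulations within reach of a laptop.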

This new, practical approach significantly lowers the computational cost and makes exploring certain quantum phenomena much easier. It's hoped that this will save supercomputing resources for the truly intractable quantum systems, while allowing more common quantum dynamics to be studied efficiently on accessible consumer-grade computers (ScienceDaily).


Original Submission

posted by hubie on Monday December 08, @08:34PM

https://www.npr.org/2025/12/03/g-s1-100212/san-francisco-sues-manufacturers-ultraprocessed-foods

The city of San Francisco filed a lawsuit against some of the nation's top food manufacturers on Tuesday, arguing that ultraprocessed foods from the likes of Coca-Cola and Nestle are responsible for a public health crisis.

City Attorney David Chiu named 10 companies in the lawsuit, including the makers of such popular foods as Oreo cookies, Sour Patch Kids, Kit Kat, Cheerios and Lunchables. The lawsuit argues that ultraprocessed foods are linked to diseases such as Type 2 diabetes, fatty liver disease and cancer.

"They took food and made it unrecognizable and harmful to the human body," Chiu said in a news release. "These companies engineered a public health crisis, they profited handsomely, and now they need to take responsibility for the harm they have caused."

Ultraprocessed foods include candy, chips, processed meats, sodas, energy drinks, breakfast cereals and other foods that are designed to "stimulate cravings and encourage overconsumption," Chiu's office said in the release. Such foods are "formulations of often chemically manipulated cheap ingredients with little if any whole food added," Chiu wrote in the lawsuit.

[...] U.S. Health Secretary Robert F. Kennedy Jr. has been vocal about the negative impact of ultraprocessed foods and their links to chronic disease and has targeted them in his Make America Healthy Again campaign. Kennedy has pushed to ban such foods from the Supplemental Nutrition Assistance Program for low-income families.

An August report by the U.S. Centers for Disease Control and Prevention found that most Americans get more than half their calories from ultraprocessed foods.

[...] "Mounting research now links these products to serious diseases—including Type 2 diabetes, fatty liver disease, heart disease, colorectal cancer, and even depression at younger ages," University of California, San Francisco, professor Kim Newell-Green said in the news release.

The lawsuit argues that by producing and promoting ultraprocessed foods, the companies violate California's Unfair Competition Law and public nuisance statute. It seeks a court order preventing the companies from "deceptive marketing" and requiring them to take actions such as consumer education on the health risks of ultraprocessed foods and limiting advertising and marketing of ultraprocessed foods to children.

It also asks for financial penalties to help local governments with health care costs caused by the consumption of ultraprocessed foods.


Original Submission

posted by hubie on Monday December 08, @03:45PM

A boycott is unlikely to work:

The chaotic state of RAM prices continues to impact the industry. According to a new report, motherboard sales have fallen by as much as 50% as a result of the crisis. It has also led to gamers calling for a RAM boycott in hopes of easing the situation, but the reality is that such a move is unlikely to work.

We've covered the memory-pricing crisis since it began, including this deep dive into the problem and how it's caused by demand from AI data centers that require massive amounts of DRAM.

A new report from Japanese outlet Gazlog [site in Japanese] states that out-of-control DDR5 prices are impacting motherboard sales, forcing manufacturers such as Asus, MSI, and Gigabyte to significantly lower their sales targets.

DDR5 prices are simply stupid right now. A 64GB kit is now more expensive than a PS5 console or an RTX 5070. We've even seen several stores remove fixed pricing signs from DDR5 displays, relying on market rates because costs are changing so rapidly each day.

The problem for motherboard makers is that people upgrading from DDR4 or older systems – along with first-time builders – need DDR5 to pair with their shiny new boards. But with prices so high, it's a bad time to buy.

The result is a 40-50% decrease in motherboard sales compared to the same period a year earlier, writes Gazlog, leading to lowered sales targets. CPU sales are expected to eventually suffer a similar fall due to the RAM situation.

In an attempt to fight back, there are now calls on Reddit for gamers to boycott RAM completely in the hope that prices will return to normal.

Unfortunately, the rallying cry is likely to have very little, if any, effect. The biggest issue, as we know, is that DRAM supply and future manufacturing capacity have already been bought out by companies to support their aggressive data center-building plans, causing the shortage.

Most memory manufacturers' sales come from industry, enterprise, data center, and other segments that aren't consumer PCs. And while it's true that a mass boycott would have some impact on their bottom lines, there's the other issue: not everyone will take part.

As we saw during Covid with graphics cards, calling for the public to boycott something for the greater good – regardless of whether it would work anyway – rarely succeeds when there are always people willing to pay any price. And that's not to mention the scalpers who are always ready to take advantage of other people's misery.

The memory crisis is also affecting graphics cards. AMD looks set to raise prices by 10%, while both Lisa Su's firm and rival Nvidia are rumored to be considering axing some low- and mid-range cards.


Original Submission

posted by hubie on Monday December 08, @11:04AM

A Debian developer has announced the news: Julian Andres Klode wrote on the Debian mailing lists that APT (Debian's package manager) will begin requiring a Rust compiler. The announcement reads:

"I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing."

source: https://lists.debian.org/deity/2025/10/msg00071.html

I admit to not knowing enough of the "how"s and "gotcha!"s of Debian's development to give an educated opinion about this news. But overall, from what I have seen happening of late, anything Rust has felt a little too eager to be wedged into every possible cog and bolt of Linuxlandia.

My dormant prophetic eye almost sees a vague vision of something resembling the Unix Wars of the 90s, but this time happening to Linux, once Torvalds eventually retires from his role as the benevolent emperor of Linux. The commercial vultures will come from all sides to "improve" it in whatever ways suit their businesses, injecting endless bloat and useless features that no one asked for, and there will be no one left to stop it with a firm "no!" (and a middle finger up when deserved) the way Torvalds has guarded Linux from such assaults so far.


Original Submission

posted by hubie on Monday December 08, @06:24AM
from the let's-see-more-of-this-kind-of-thing dept.

The Linux phone features 12GB RAM, up to 2TB storage, a 6.36-inch FullHD AMOLED display, and a user-replaceable 5,500mAh battery:

Jolla kicked off a campaign for a new Jolla Phone, which they call the independent European Do It Together (DIT) Linux phone, shaped by the people who use it.

The new Jolla Phone is powered by a high-performing MediaTek 5G SoC, and features 12GB RAM, 256GB storage that can be expanded to up to 2TB with a microSDXC card, a 6.36-inch FullHD AMOLED display with ~390ppi, a 20:9 aspect ratio, and Gorilla Glass, and a user-replaceable 5,500mAh battery.

The Linux phone also features 4G/5G support with dual nano-SIM and a global roaming modem configuration, Wi-Fi 6 wireless, Bluetooth 5.4, NFC, 50MP Wide and 13MP Ultrawide main cameras, a front-facing wide-lens selfie camera, a fingerprint reader on the power key, a user-changeable back cover, and an RGB indication LED.

On top of that, the new Jolla Phone promises a user-configurable physical Privacy Switch that lets you turn off the microphone, Bluetooth, Android apps, or whatever you wish.

The device will be available in three colors, including Snow White, Kaamos Black, and The Orange. All the specs of the new Jolla Phone were voted on by Sailfish OS community members over the past few months.

Honouring the original Jolla Phone form factor and design, the new model ships with the Sailfish OS (with support for Android apps), a Linux-based European alternative to dominating mobile operating systems, and promises a minimum of 5 years of support, no tracking, no calling home, and no hidden analytics.

"Mainstream phones send vast amounts of background data. A common Android phone sends megabytes of data per day to Google even if the device is not used at all. Sailfish OS stays silent unless you explicitly allow connections," said Jolla.

The new Jolla Phone is now available for pre-order for 99 EUR, and it will only be produced if at least 2,000 pre-orders are received within one month of the announcement, by January 4th, 2026 [goal met on 7 December -- Ed.]. The full price of the Linux phone will be 499 EUR (incl. local VAT); the 99 EUR pre-order price is fully refundable and will be deducted from the full price.

The device will be manufactured and sold in Europe, but Jolla says it will design the cellular band configuration to enable global travel as much as possible, including, for example, roaming on U.S. carrier networks. The initial sales markets are the EU, the UK, Switzerland, and Norway.

TECH SPECS:

        SoC: High-performance MediaTek 5G platform
        RAM: 12GB
        Storage: 256GB + expandable with microSDXC
        Cellular: 4G + 5G with dual nano-SIM and global roaming modem configuration
        Display: 6.36" ~390ppi FullHD AMOLED, aspect ratio 20:9, Gorilla Glass
        Cameras: 50MP Wide + 13MP Ultrawide main cameras, front-facing wide-lens selfie camera
        Battery: approx. 5,500mAh, user replaceable
        Connectivity: WiFi 6, BT 5.4, NFC
        Dimensions: ~158 x 74 x 9mm
        Other: Power key fingerprint reader, user-changeable back cover, RGB indication LED, Privacy Switch

Privacy by Design

        No tracking, no calling home, no hidden analytics
        User-configurable physical Privacy Switch - turn off your microphone, Bluetooth, Android apps, or whatever you wish

Scandinavian styling in its pure form

        Honouring the original Jolla Phone form factor and design
        Replaceable back cover
        Available in three distinct colours inspired by Nordic nature

Performance Meets Privacy

        5G with dual nano-SIM
        12GB RAM and 256GB storage expandable up to 2TB
        Sailfish OS 5
        Support for Android apps with Jolla AppSupport
        User replaceable back cover with colour options
        User replaceable battery
        Physical Privacy Switch


Original Submission