https://www.theregister.com/2025/11/04/dhs_wants_to_collect_biometric_data/
If you're filing an immigration form - or helping someone who is - the Feds may soon want to look in your eyes, swab your cheek, and scan your face. The US Department of Homeland Security wants to greatly expand biometric data collection for immigration applications, covering immigrants and even some US citizens tied to those cases.
DHS, through its component agency US Citizenship and Immigration Services, on Monday proposed a sweeping expansion of the agency's collection of biometric data. While ostensibly about verifying identities and preventing fraud in immigration benefit applications, the proposed rule goes much further than simply ensuring applicants are who they claim to be.
First off, the rule proposes expanding when DHS can collect biometric data from immigration benefit applicants, as "submission of biometrics is currently only mandatory for certain benefit requests and enforcement actions." DHS wants to change that, including by requiring practically everyone an immigrant is associated with to submit their biometric data.
"DHS proposes in this rule that any applicant, petitioner, sponsor, supporter, derivative, dependent, beneficiary, or individual filing or associated with a benefit request or other request or collection of information, including U.S. citizens, U.S. nationals and lawful permanent residents, and without regard to age, must submit biometrics unless DHS otherwise exempts the requirement," the rule proposal said.
DHS also wants to require the collection of biometric data from "any alien apprehended, arrested or encountered by DHS."
It's not explicitly stated in the rule proposal why US citizens associated with immigrants who are applying for benefits would have to have their biometric data collected. DHS didn't answer questions to that end, though the rule stated that US citizens would also be required to submit biometric data "when they submit a family-based visa petition."
In addition to expanded collection, the proposed rule also changes the definition of what DHS considers to be valid biometric data.
"Government agencies have grouped together identifying features and actions, such as fingerprints, photographs, and signatures under the broad term, biometrics," the proposal states. "DHS proposes to define the term 'biometrics' to mean 'measurable biological (anatomical, physiological or molecular structure) or behavioral characteristics of an individual,'" thus giving DHS broad leeway to begin collecting new types of biometric data as new technologies are developed.
The proposal mentions several new biometric technologies DHS wants the option to use, including ocular imagery, voice prints and DNA, all on the table per the new rule.
"The rule proposes to grant DHS express authority to require, request, or accept raw DNA or DNA test results," DHS said, including "to prove or disprove ... biological sex" in situations where that can affect benefit eligibility.
DHS wants to use all that data for identity enrollment, verification and management of the immigration lifecycle, national security and criminal history checks, "the production of secure identity documents," to prove familial relationships, and to perform other administrative functions, the rule states.
As we noted in our story last week about DHS' new rule expanding biometric data collection on entry into and exit from the US, biometric technology - especially the often-used facial recognition scan - is ripe for misuse and prone to errors.
This new proposed rule goes far beyond subjecting immigrants to algorithmic identification tech prone to misidentifying non-white individuals, however, and reaches a new level of surveillance, with DHS seeking to collect and keep DNA test results - including partial profiles - from immigrants and some US citizens to verify family ties or biological sex when relevant. It's not much more reassuring that DHS also wants to collect new forms of biometric data like voice records, which are increasingly easy to spoof with AI.
When we asked DHS questions about its biometric expansion proposal, it only sent us a statement identical to the one it sent last week when we inquired about the new entry/exit biometric requirements. The agency didn't respond when we asked for a statement pertaining to this latest proposed rule.
DHS is taking comments on the proposal until January 2; so far the submissions are nearly entirely negative, with posters decrying the plan as government overreach, comparing the proposal to communist China, and calling it a violation of Constitutional guarantees against unreasonable search and seizure.
Study concludes cybersecurity training doesn't work:
It was a big sample group. The researchers examined nearly 20,000 employees at UC San Diego Health. People who got cybersecurity training were compared to those who got none.
Some people with training were slightly less likely to click on a phishing lure than the untrained. But some trained people were more likely to click.
"And we found that there was no relation to time and your cybersecurity annual training. And so that means even if you had just recently taken it, you are just as likely to click as someone who had taken it 8, 10, 12 months ago," said Ariana Mirian, one of the co-authors of the study done at UC San Diego.
Phishing aims to gain access to your online information, including passwords, banking information, or medical records.
The study found some phishing lures worked better than others. For instance, a fake message that claimed to be from Human Resources asked recipients to click on an update to their company's dress code policy. Lots of people fell for that one.
Even more people fell for a fake message asking recipients to click on an update to their company's vacation policy.
The UCSD study kept track of cumulative lure clicks over several months, and it suggested that even if you don't click on the first one you get, pretty soon one of them is likely to get you.
"So what this is showing is that each month, a new set of users is failing," Mirian said as she pointed to a graph in the study. "So you can imagine if this goes on forever, eventually most people will fail at least one phishing lure."
Mirian works for the cybersecurity company Censys, and she was completing her Ph.D. at UCSD when she co-authored the study, which was presented at the Black Hat USA convention in Las Vegas this year.
She said given how ineffective cybersecurity training is, it might be better to build more effective security into workplace computer systems.
"Should we as a security community be putting all the time and energy and money into other defenses like multifactor authentication or maybe email spam detection? Things that remove the responsibility from the end user and put it on the system itself," she said.
Because that training just doesn't seem to stick with people.
Users are urged to download only from the official GitHub page:
With Windows 10 reaching its end of life (EOL) last month, a free third-party tool called Flyoobe (which we have covered here) has been gaining popularity for enabling safe upgrades to Windows 11 on unsupported systems by bypassing the system requirements. Additionally, it includes various options to customize the OS further, such as removing AI features and unwanted apps. However, there is a suspected bogus build of this tool downloadable via an official-looking domain, which is causing concern.
The developer of Flyoobe recently issued a warning about a potentially malicious copy of the tool being distributed through a website that is not directly affiliated with the project. According to a notice marked "SECURITY ALERT" on the official GitHub page, an unofficial mirror is being hosted at https://flyoobe.net/ (do not visit), which may contain malware or a tampered build of Flyoobe. The developer also urges users to download only from the official GitHub releases, emphasizing that the mentioned website has no connection with the developer or the project's official pages.
WindowsForum has detailed procedures for downloading and verifying the official version.
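WindowsForum's exact procedure isn't reproduced here, but the usual approach is to compare the downloaded file's SHA-256 hash against a checksum published alongside the official GitHub release (assuming one is provided). A minimal sketch, with a placeholder file name and hash:

```python
# Generic checksum-verification sketch, not WindowsForum's specific procedure.
# The file name and expected hash are placeholders; use the values from the
# official GitHub releases page.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

expected = "0123abcd..."                 # placeholder: checksum from the release notes
actual = sha256_of("Flyoobe.zip")        # placeholder: the file you downloaded
print("OK" if actual == expected else f"MISMATCH: {actual}")
```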
Related: Microsoft's Killing Script Used to Avoid Microsoft Account in Windows 11
LM8560, the eternal chip from the 1980s:
Almost everybody has owned one, or still has one at home. And almost everybody who had one kept it within arm's reach while sleeping...
The LM8560 is the integrated circuit (IC) built into almost all the digital alarm clocks and clock radios with numeric LED displays produced from around 1985, or even the early 1980s, up to the 2010s. A very few, final models were still being produced in 2023, but they are fading out. Alarm clocks and clock radios with numeric LED displays are disappearing today, replaced by LCD displays, largely as a matter of fashion. The red color of the LED displays is also becoming rare for the same reason, giving way to green or, worse, blue LEDs. It didn't matter whether the clock was from a major brand like Sony or a no-name brand sold for 10€ at a discount store: the brain of the alarm clock was almost always the same, the now 'obsolete' but timeless LM8560, originally made by Japan's Sanyo. The LM8560 is a low-power-consumption MOS integrated circuit.
[...] An electronic clock consists essentially of two parts: a generator that produces a stable frequency, and a counter that counts the waves coming from that generator. By counting the waves as they pass, the passing of seconds is counted. In quartz clocks, the frequency generator is a quartz oscillator. To obtain very good accuracy in a quartz clock, it is also necessary to add components that make the quartz frequency adjustable, and a calibration process has to be performed individually during the manufacture of each clock. This increases the time and labor cost of manufacturing. The LM8560 needs no calibration: as soon as it is assembled, it is ready to use with no errors. It contains only the counter and the logic to manage the alarm and clock functions; it has no frequency generator. The LM8560 simply counts the waves coming from the AC power outlet, which can be at 50 Hz or 60 Hz depending on the country. In my case it is 50 Hz, meaning 50 waves arrive per second. Every 50 counted waves, one second is counted. Every 60 counted seconds, the minute digits increase by one. Every 60 minutes, the hour digits increase by one. In addition to the counter and the clock/alarm logic, the LM8560 contains the circuitry to drive the LED display directly.
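The counting chain described above is simple enough to model in a few lines of code. The sketch below only illustrates the divide-by-50 (or 60) logic; it is not a description of the LM8560's actual internals:

```python
# Illustrative model of mains-frequency timekeeping (not the LM8560's silicon):
# each incoming AC cycle is one count; 50 counts = 1 second (60 in 60 Hz
# countries), 60 seconds = 1 minute, 60 minutes = 1 hour.
MAINS_HZ = 50  # use 60 for North America

class MainsClock:
    def __init__(self):
        self.cycles = self.seconds = self.minutes = self.hours = 0

    def on_mains_cycle(self):
        """Called once per incoming AC cycle."""
        self.cycles += 1
        if self.cycles == MAINS_HZ:
            self.cycles = 0
            self.seconds += 1
            if self.seconds == 60:
                self.seconds = 0
                self.minutes += 1
                if self.minutes == 60:
                    self.minutes = 0
                    self.hours = (self.hours + 1) % 24

clock = MainsClock()
for _ in range(MAINS_HZ * 3600):   # feed it one hour's worth of mains cycles
    clock.on_mains_cycle()
print(f"{clock.hours:02d}:{clock.minutes:02d}:{clock.seconds:02d}")  # 01:00:00
```

The long-term accuracy of such a clock is simply the long-term accuracy of the grid frequency, which utilities have historically kept very close to nominal, so no per-unit calibration is needed.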
People who know how to manage Ubuntu systems and would like to be able to prove this with official credentials can now get certified through Canonical Academy.
"With Canonical Academy, successful candidates receive verifiable digital badges that demonstrate open source competence. Backed by Canonical, these credentials provide credible evidence of technical ability in a competitive job market."
Additional information on the courses offered can be found in this blog post.
To the hiring managers out there: how relevant are these kinds of certifications these days?
An Anonymous Coward writes:
New law makes electric vehicles safer for pedestrians in Australia
With claims that it 'will save lives', a new Australian law will require all new electric vehicles to emit sound at low speed. Vision Australia claims that 35 percent of people who are blind or have low vision 'have had a collision with a silent vehicle'. Of the 27.2 million people in Australia, an estimated 453 thousand have low vision, of whom 66 thousand are blind, with more than 70% of those affected being over 65 years old. Public opinion on the matter spans both ends of the spectrum, with many people asking why a better solution could not be found than adding more noise pollution.
Australia's electric cars will no longer move in silence, with a long-awaited safety law now in effect requiring all new EVs to emit an audible sound when travelling at low speeds.
From November 1, every new electric vehicle sold in Australia must be fitted with an Acoustic Vehicle Alerting System (AVAS), which produces a recognisable sound when the car is moving at or under 25km/h, a move advocates say will save lives.
Vision Australia's general manager of corporate affairs and advocacy, Chris Edwards, said the new rule marked a major victory after years of campaigning.
"Vision Australia has been calling for an acoustic vehicle alerting system to be introduced in Australia since 2018," Mr Edwards said.
He said their research found that 35 per cent of people who are blind or have low vision have had a collision with a silent vehicle.
"Further reporting shows that pedestrian road crashes cost the Australian community over $1.2bn each year," he said.
"With electric vehicles predicted to make up 90 per cent of Australia's vehicle fleet by 2050, we knew there couldn't be any further delay to mandating AVAS.
"All pedestrians should have the right to feel safe and confident when navigating public spaces, and this mandate will ensure they will. AVAS will save lives."
The new legislation brings Australia in line with other major jurisdictions including the European Union, the United States and Japan, where similar sound requirements have been in place for several years.
7 basic science discoveries that changed the world:
The National Institutes of Health has cut almost US$2 billion of grants that were already approved, and the National Science Foundation has terminated more than 1,400 grants. And the president has even bigger plans to eviscerate science. His proposed budget for fiscal year 2026 would cut non-defence-related research and development by 36%.
"They have cancelled wholesale a wide variety of research efforts in midstream," says John Holdren at Harvard University in Cambridge, Massachusetts, who was the science adviser to former president Barack Obama during both his terms. "They intend to now solidify it with cuts in the budgets."
The cancelled and threatened research is a mixture of 'applied' work that has stated applications, which can be commercial in nature, and 'basic' or 'blue skies' research, intended to develop new knowledge.
[...] Basic research is easily mocked because it can seem impractical, but, in fact, it is a major driver of economic growth. "The return on investment in basic research — the return to society — is very high, typically multiple dollars back per dollar invested," says Holdren.
The US funding cuts will hit basic research particularly hard because the government has historically been the prime supporter of fundamental research. The private sector will never invest enough in such research, says Holdren. "The timescale for returns is too long and the ability of the funder to capture those returns too uncertain," he says. "That's the reason that the funding of fundamental research is, at its base, a responsibility of government."
Although it's impossible to estimate how the reductions in federal support might limit future discoveries, scientists point to a long list of findings that emerged out of fundamental research and went on to change the world. Here are a few examples.
In the summer of 1966, while he was an undergraduate at Indiana University, Hudson Freeze went to live in a cabin on the edge of Yellowstone National Park. He was working for microbiologist Thomas Brock, who was convinced that certain microorganisms were living at surprisingly high temperatures. Dodging bears, and the traffic jams they caused, Freeze visited the hot springs every day to sample their bacteria.
On 19 September, Freeze succeeded in growing a sample of yellowish microbes from Mushroom Spring. Under a microscope, he found an array of cells collected from the near-boiling fluids. "I was seeing something that nobody had ever seen before," says Freeze, now at the Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California. "I still get goosebumps when I remember looking into the microscope."
Three years later, Freeze and Brock described one of the bacteria, which they called Thermus aquaticus. It grew best at 70 °C. Then, in 1970, they isolated an enzyme from T. aquaticus that performed sugar metabolism at an optimal temperature of 95 °C. By then, Freeze had left for graduate studies and shifted his priorities to slime moulds. However, other researchers kept studying T. aquaticus, and in 1976, a team at the University of Cincinnati in Ohio isolated another enzyme: a 'DNA polymerase' that could synthesize new DNA at 80 °C.
Seven years later, this Taq polymerase would prove to be just what biochemist Kary Mullis needed to create the polymerase chain reaction (PCR), a method for rapidly making thousands upon thousands of copies of a single fragment of DNA. Mullis needed high temperatures to break DNA molecules apart, so he also needed a polymerase that could withstand that repeated heating.
Today, PCR is an indispensable tool in everything from medicine — where it's used to match organ donors with recipients and in the diagnosis of cancers — to DNA fingerprinting that can help police to identify killers.
Magnetic resonance imaging (MRI) is a mainstay of modern medicine. It can generate detailed images of a person's internal anatomy — revealing, for instance, abnormal structures in the heart or whether a tumour has grown or shrunk. A variant called functional MRI (fMRI) tracks changing blood flow in the brain, which has enabled researchers to discover fundamental insights about how the brain works. What's more, MRI is non-invasive and doesn't require the use of radioactive substances or ionizing radiation, unlike many other imaging methods.
MRI emerged from research in the 1930s into the physical properties of atomic nuclei, and of the fundamental particles within them. It was "pretty esoteric stuff" with "no applications really in sight or in mind", says Carmen Giunta, a chemist at Le Moyne College in Syracuse, New York.
One key discovery on the path to MRI machines involved studies of protons and neutrons, which make up atomic nuclei. Such particles have a property called spin that describes their angular momentum.
In the 1930s, physicist Isidor Rabi and his colleagues were investigating spin by firing beams of atomic nuclei through magnetic fields. Protons and neutrons, depending on the orientations of their spins, have slightly different energy levels when exposed to magnetic fields. "The resonance method that he developed was a way of detecting when these spins change their orientation in the presence of a magnetic field," says Giunta. For this work, Rabi won the Nobel Prize in Physics in 1944.
Nuclear magnetic resonance found its first uses in chemical laboratories. Because atomic nuclei are sensitive to their surroundings, careful measurements of magnetic resonance could be used to determine how atoms were connected in large molecules. From the 1970s onwards, it was fashioned into a tool for imaging biological tissues. Paul Lauterbur and Peter Mansfield shared the Nobel Prize in Physiology or Medicine in 2003 for their work in developing MRI.
It started in Prague in early 1888, when botanist Friedrich Reinitzer extracted chemicals called cholesterol esters from carrot roots. One of the substances — crystals of cholesteryl benzoate — did something unexpected. Normal crystals, when heated, lose both their solidity and colour at the same temperature, but these did not. "They lost their solidity at 145 °C but retained their bluish colour, which they only lost at 178 °C," says Michel Mitov at the University of the Côte d'Azur in Nice, France.
Other researchers had seen similar behaviours before, but Reinitzer recognized that there might be an important new phenomenon at work. Unsure how to explain it, on 14 March he wrote a long letter to physicist Otto Lehmann in Aachen, in what is now Germany. "Lehmann was the perfect colleague for continuing and reproducing the observation," says Mitov, because he had built a microscope with a heated stage, which meant he could observe the crystals' behaviour in real time. The two exchanged letters and samples for weeks, and Reinitzer presented their initial results at a meeting in Vienna in May.
Lehmann's key observation was that, when the crystals lost their solidity, they still retained some properties of a crystal. Yet in other respects they were liquid. At a molecular level, they were composed of long molecules that remained in an ordered orientation (as in a crystal) but could also move freely (as in a liquid). Lehmann called them liquid crystals.
For decades, many researchers refused to accept this, because it flew in the face of the system that physicists and chemists used to categorize matter. Substances were either solid, liquid or gaseous. Liquid crystals blurred the lines, and acceptance of this came at "a very high intellectual cost", says Mitov.
In the first half of the twentieth century the evidence became undeniable, but research into liquid crystals dried up, out of a belief that they would never be of any use. The field was revived by US chemists in the late 1950s, and in 1968, engineers developed the first flat screens based on liquid crystals, which ultimately gave rise to flat-screen televisions. However, their applications go far beyond screens, says Mitov, including uses in cameras, microscopes, smart materials, robotics and even anti-counterfeiting technology.
"My head explodes every time I see a new application of CRISPR, or that CRISPR has cured someone," says microbiologist Francisco Mojica at the University of Alicante in Spain.
CRISPR, short for Clustered Regularly Interspaced Short Palindromic Repeats, is a tool that can edit genomes in a very precise way. It opened up vast opportunities for basic research and has paved the way for curing genetic disorders, including sickle-cell disease, immune dysfunction and life-threatening metabolic conditions. Emmanuelle Charpentier and Jennifer Doudna shared the 2020 Nobel Prize in Chemistry for developing the tool.
The discoveries that led to this revolution happened decades earlier. In 1989, Mojica was a PhD student investigating Haloferax mediterranei R-4: a single-celled organism called an archaean, found in salt-producing ponds near Alicante. He tried to find out how the microbe could survive in such briny conditions. Having identified some promising regions of the microbe's genome, Mojica sequenced them and was surprised to find short segments that were repeated at regular intervals. He and other researchers gave these repeated segments different names but eventually settled on the acronym CRISPR. As he investigated them, Mojica proposed some potential functions for the repeating segments: ideas that were, he says wryly, "absolutely wrong".
https://www.phoronix.com/news/GNOME-Mutter-Drops-X11
The merge request that "completely drops" the X11 back-end from GNOME Mutter has finally landed, making GNOME strictly focused on Wayland-based environments.
The four-month-old merge request by Bilal Elmoussaoui to drop the X11 back-end was merged a short time ago. The merge request sums it up as:
"Drop the X11 backend
Completely drops the whole x11 backend."
After the X11 path was disabled by default in the GNOME 49 release, the code is being outright removed for the GNOME 50 cycle.
Following on that was this merge to better adapt Mutter to the dropped X11 backend.
GNOME 50 will continue supporting XWayland clients (apps/games), but moving forward it is strictly for Wayland-based desktop sessions.
Previously:
• Think Twice Before Abandoning X11. Wayland Breaks Everything!
• Ubuntu Dropping GNOME's X11 Session
• Fedora Considers Dropping GNOME X11 Session From Repositories
• Wayland May Soon Overtake X11 in Linux GUIs
• When Will the Death Watch for the X Window System (aka X11) Begin?
Tiny electric motor is as powerful as four Tesla motors put together and outperforms record holder by 40%
UK-based YASA has just built a tiny electric motor that makes Tesla motors look like slackers, and this invention could potentially reshape the future of EVs. The company has unveiled a new prototype that's breaking records for power and performance density. It's smaller and lighter than traditional motors, yet it's somehow more powerful. Perhaps the best part is that it's a fully functional motor, rather than some lab-only concept.
This tiny electric motor can produce more than 1,000 horsepower. The new YASA axial flux motor weighs just 28 pounds, or about the same as a small dog. Yet it delivers a jaw-dropping 750 kilowatts of power, the equivalent of 1,005 horsepower. That's about the same as two Tesla Model 3 Performance cars combined, or four individual Tesla motors. In comparison, the previous record holder, also produced by the same company, weighed 28.8 pounds and achieved a peak power of 550 kilowatts (737 horsepower), making the new motor's power density roughly 40 percent higher than its predecessor's. It can also sustain between 350 and 400 kilowatts (469–536 horsepower) continuously, meaning it's not just built for short bursts; it can deliver massive power all day long.
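A quick back-of-the-envelope check with the article's own numbers confirms the roughly 40 percent power-density improvement:

```python
# Quick arithmetic check of the figures quoted above (values from the article).
LB_TO_KG = 0.4536

new_kw, new_lb = 750.0, 28.0        # new YASA prototype
old_kw, old_lb = 550.0, 28.8        # previous record holder (also YASA)

new_density = new_kw / (new_lb * LB_TO_KG)   # ~59 kW/kg
old_density = old_kw / (old_lb * LB_TO_KG)   # ~42 kW/kg

print(f"new: {new_density:.1f} kW/kg, old: {old_density:.1f} kW/kg")
print(f"power-density improvement: {new_density / old_density - 1:.0%}")  # ~40%
```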
[...] A lighter motor means a lighter car, which means better efficiency, faster acceleration, and longer range from the same battery.
For EVs, every pound matters, so saving weight without compromising performance could be a gamechanger.
YASA, which is a wholly owned subsidiary of Mercedes-Benz, already produces motors that power some of the world's fastest and most expensive cars.
Like the year of Linux, will this be the year of Electric Vehicles?
Canada says hacktivists breached water and energy facilities:
The Canadian Centre for Cyber Security warned today that hacktivists have breached critical infrastructure systems multiple times across the country, allowing them to modify industrial controls that could have led to dangerous conditions.
The authorities issued the warning to raise awareness of the elevated malicious activity targeting internet-exposed Industrial Control Systems (ICS) and the need to adopt stronger security measures to block the attacks.
The alert shares three recent incidents in which so-called hacktivists tampered with critical systems at a water treatment facility, an oil & gas firm, and an agricultural facility, causing disruptions, false alarms, and a risk of dangerous conditions.
"One incident affected a water facility, tampering with water pressure values and resulting in degraded service for its community," describes the bulletin.
"Another involved a Canadian oil and gas company, where an Automated Tank Gauge (ATG) was manipulated, triggering false alarms."
"A third one involved a grain drying silo on a Canadian farm, where temperature and humidity levels were manipulated, resulting in potentially unsafe conditions if not caught on time."
The Canadian authorities believe that these attacks weren't planned or sophisticated, but rather opportunistic, aimed at causing a media stir, undermining trust in the country's authorities, and harming its reputation.
Sowing fear in societies and creating a sense of threat are primary goals for hacktivists, who are often joined by sophisticated APTs in this effort.
The U.S. government has repeatedly confirmed that foreign hacktivists have attempted to manipulate industrial system settings. Earlier this month, a Russian group called TwoNet was caught in the act against a decoy plant.
Although none of the recently targeted entities in Canada suffered catastrophic consequences, the attacks highlight the risk of poorly protected ICS components such as PLCs, SCADA systems, HMIs, and industrial IoTs.
https://hackaday.com/2025/10/22/what-happened-to-running-what-you-wanted-on-your-own-machine/
https://archive.ph/6i4vr
When the microcomputer first landed in homes some forty years ago, it came with a simple freedom—you could run whatever software you could get your hands on. Floppy disk from a friend? Pop it in. Shareware demo downloaded from a BBS? Go ahead! Dodgy code you wrote yourself at 2 AM? Absolutely. The computer you bought was yours. It would run whatever you told it to run, and ask no questions.
Today, that freedom is dying. What's worse is that it's happening so gradually that most people haven't noticed we're already halfway into the coffin.
The latest broadside in the war against platform freedom has been fired. Google recently announced new upcoming restrictions on APK installations. Starting in 2026, Google will tighten the screws on sideloading, making it increasingly difficult to install applications that haven't been blessed by the Play Store's approval process. It's being sold as a security measure, but it will make it far more difficult for users to run apps outside the official ecosystem. There is a security argument to be made, of course, because suspect code can cause all kinds of havoc on a device loaded with a user's personal data. At the same time, security concerns have a funny way of aligning perfectly with ulterior corporate motives.
[...] The walled garden concept didn't start with smartphones. Indeed, video game consoles were a bit of a trailblazer in this space, with manufacturers taking this approach decades ago. The moment gaming became genuinely profitable, console manufacturers realized they could control their entire ecosystem. Proprietary formats, region systems, and lockout chips were all valid ways to ensure companies could levy hefty licensing fees from developers. They locked down their hardware tighter than a bank vault, and they did it for one simple reason—money. As long as the manufacturer could ensure the console wouldn't run unapproved games, developers would have to give them a kickback for every unit sold.
[...] Then came the iPhone, and with it, the App Store. Apple took the locked-down model and applied it to a computer you carry in your pocket. The promise was that you'd only get apps that were approved by Apple, with the implicit guarantee of a certain level of quality and functionality.
[...] Apple sold the walled garden as a feature. It wasn't ashamed or hiding the fact—it was proud of it. It promised apps with no viruses and no risks; a place where everything was curated and safe. The iPhone's locked-down nature wasn't a restriction; it was a selling point.
But it also meant Apple controlled everything. Every app paid Apple's tax, and every update needed Apple's permission. You couldn't run software Apple didn't approve, full stop. You might have paid for the device in your pocket, but you had no right to run what you wanted on it. Someone in Cupertino had the final say over that, not you.
When Android arrived on the scene, it offered the complete opposite concept to Apple's control. It was open source, and based on Linux. You could load your own apps, install your own ROMs and even get root access to your device if you wanted. For a certain kind of user, that was appealing. Android would still offer an application catalogue of its own, curated by Google, but there was nothing stopping you just downloading other apps off the web, or running your own code.
Sadly, over the years, Android has been steadily walking back that openness. The justifications are always reasonable on their face. Security updates need to be mandatory because users are terrible at remembering to update. Sideloading apps need to come with warnings because users will absolutely install malware if you let them just click a button. Root access is too dangerous because it puts the security of the whole system and other apps at risk. But inch by inch, it gets harder to run what you want on the device you paid for.
[...] Microsoft hasn't pulled the trigger on fully locking down Windows. It's flirted with the idea, but has seen little success. Windows RT and Windows 10 S were both locked to only run software signed by Microsoft—each found few takers. Desktop Windows remains stubbornly open, capable of running whatever executable you throw at it, even if it throws up a few more dialog boxes and question marks with every installer you run these days.
[...] Here's what bothers me most: we're losing the idea that you can just try things with computers. That you can experiment. That you can learn by doing. That you can take a risk on some weird little program someone made in their spare time. All that goes away with the walled garden. Your neighbour can't just whip up some fun gadget and share it with you without signing up for an SDK and paying developer fees. Your obscure game community can't just write mods and share content because everything's locked down. So much creativity gets squashed before it even hits the drawing board because it's just not feasible to do it.
It's hard to know how to fight this battle. So much ground has been lost already, and big companies are reluctant to listen to the esoteric wishes of the hackers and makers who actually care about the freedom to squirt whatever they like through their own CPUs. Ultimately, though, you can still vote with your wallet. Don't let Personal Computing become Consumer Computing, where you're only allowed to run code that paid the corporate toll. Make sure the computers you're paying for are doing what you want, not just what the executives approved of for their own gain. It's your computer; it should run what you want it to!
The Neoliner Origin is the largest cargo ship powered primarily by wind:
The Neoliner Origin, the world's largest cargo ship to use wind as its primary propulsion, has officially touched the water for the first time.
Launched from the RMK Marine shipyard in Tuzla, Turkey, it marks a major milestone in the journey towards decarbonising global maritime transport.
[...] Designed to slash carbon emissions by up to 80% compared to conventional cargo vessels, the Neoliner Origin is part of a broader effort to provide sustainable, low-carbon shipping options for major global brands.
Companies including Renault, Hennessy and Clarins are already on board, integrating this eco-friendly vessel into their supply chains as part of their sustainability commitments.
Measuring 136 metres (446 feet) in length, the Neoliner Origin is primarily a roll-on/roll-off (ro-ro) cargo ship, specifically designed to carry outsize cargo that can be wheeled on and off the vessel.
Its cargo capacity includes space for 5,300 tonnes or up to 265 containers.
[...] The vessel is equipped to carry refrigerated (reefer) cargo, ensuring perishable goods stay fresh throughout its 13-day transatlantic crossings. While its primary role is cargo transport, the Neoliner Origin also has space to accommodate up to 12 passengers comfortably, offering a unique maritime experience.
Powering this impressive ship are two 90-metre (295-foot) masts and an expansive 3,000 square metres (32,300 square feet) of sails. Wind will provide 60–70% of the vessel's propulsion, supported by hybrid diesel-electric engines when needed.
To further boost efficiency, the ship employs slow steaming—sailing at a reduced speed of 11 knots—to conserve fuel and reduce emissions. It even generates energy from its own wake, maximising sustainability at every turn.
The Neoliner Origin is slated to return to Baltimore in December:
The world's largest sailing cargo ship crept up the Chesapeake Bay in the quiet, rainy hours of Thursday morning, squeezed under the Bay Bridge and berthed at the Port of Baltimore.
It was there, chiefly, to unload goods — but also to get some repairs.
One of the sails on the wind-powered ship, a rare but growing breed in the world of maritime commerce, was damaged during a spate of bad weather while crossing the notoriously rough North Atlantic Ocean. While in port at the Dundalk Marine Terminal, workers were scheduled to patch up the Neoliner Origin vessel for its two-week return to France.
"The panels will be reinstalled during the Baltimore stopover so that the ship can make full use of its sails on the return trip," Gabriella Paulet, a spokesperson for the ship owner, Neoline, said in an email Thursday.
The 450-foot-long vessel, the first of its kind ever to call on Baltimore, is on its maiden voyage. It was built in Turkey and departed France in mid-October, first stopping in the French territory of Saint Pierre and Miquelon, off the coast of Canada.
There, technicians boarded and repaired the sail panels as the ship continued to Baltimore.
[...] It's the largest wind-powered cargo ship in the world, but it is just a "pilot" for Neoline, which expects to see larger ships in the coming years — both from itself and other companies.
"Hopefully, it will be surpassed soon," co-founder Jean Zanuttini said in an interview last week.
Neural organics lead to lower energy costs, faster calculation speeds:
Fungal networks may be a promising alternative to tiny metal devices used in processing and storing digital memories and other computer data, according to a new study.
Mushrooms have long been recognized for their extreme resilience and unique properties. Their innate abilities make them perfect specimens for bioelectronics, an emerging field that could help develop exciting new materials for next-gen computing.
As one example, researchers from The Ohio State University recently discovered that common edible fungi, such as shiitake mushrooms, can be grown and trained to act as organic memristors, a type of data processor that can remember past electrical states.
Their findings showed that these shiitake-based devices not only demonstrated similar reproducible memory effects to semiconductor-based chips but could also be used to create other types of low-cost, environmentally friendly, brain-inspired computing components.
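For readers unfamiliar with memristors, the toy model below shows what 'remembering past electrical states' means: the device's resistance depends on the history of voltages applied to it. It is a generic, linear-drift-style sketch for illustration only, not the model used in the paper:

```python
# Toy memristor (generic linear-drift style), purely to illustrate the idea of
# a resistance that depends on past voltages. Not the paper's device model.
class ToyMemristor:
    def __init__(self, r_on=100.0, r_off=16_000.0, mobility=1e4):
        self.r_on, self.r_off = r_on, r_off   # low- and high-resistance limits (ohms)
        self.x = 0.5                          # internal state in [0, 1]: the "memory"
        self.mobility = mobility              # arbitrary units, tuned so the drift is visible

    def resistance(self) -> float:
        return self.r_on * self.x + self.r_off * (1.0 - self.x)

    def apply_voltage(self, volts: float, dt: float) -> float:
        """Apply a voltage for dt seconds; the state (and thus resistance) drifts."""
        current = volts / self.resistance()
        self.x = min(1.0, max(0.0, self.x + self.mobility * current * dt))
        return current

m = ToyMemristor()
print(f"start:                 {m.resistance():.0f} ohms")
for _ in range(100):
    m.apply_voltage(+1.0, 0.01)   # a train of positive pulses drives resistance down...
print(f"after positive pulses: {m.resistance():.0f} ohms")
for _ in range(100):
    m.apply_voltage(-1.0, 0.01)   # ...and negative pulses drive it back up
print(f"after negative pulses: {m.resistance():.0f} ohms")
```

Reading back the device's resistance reveals which pulse train it saw last; that retained state is the memory effect such devices exploit.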
"Being able to develop microchips that mimic actual neural activity means you don't need a lot of power for standby or when the machine isn't being used," said John LaRocco, lead author of the study and a research scientist in psychiatry at Ohio State's College of Medicine. "That's something that can be a huge potential computational and economic advantage."
Fungal electronics aren't a new concept, but they have become ideal candidates for developing sustainable computing systems, said LaRocco. This is because they minimize electrical waste by being biodegradable and cheaper to fabricate than conventional memristors and semiconductors, which often require costly rare-earth minerals and high amounts of energy from data centers.
"Mycelium as a computing substrate has been explored before in less intuitive setups, but our work tries to push one of these memristive systems to its limits," he said.
To explore the new memristors' capabilities, researchers cultured samples of shiitake and button mushrooms. Once mature, they were dehydrated to ensure long-term viability, connected to special electronic circuits, and then electrocuted at various voltages and frequencies.
"We would connect electrical wires and probes at different points on the mushrooms because distinct parts of it have different electrical properties," said LaRocco. "Depending on the voltage and connectivity, we were seeing different performances."
After two months, the team discovered that when used as RAM – the computer memory that stores data – their mushroom memristor was able to switch between electrical states at up to 5,850 signals per second, with about 90% accuracy. Performance dropped as the frequency of the electrical signals increased, but, much like in an actual brain, this could be remedied by connecting more mushrooms to the circuit.
[...] Building on the flexibility mushrooms offer also suggests there are possibilities for scaling up fungal computing, said Tahmina. For instance, larger mushroom systems may be useful in edge computing and aerospace exploration; smaller ones in enhancing the performance of autonomous systems and wearable devices.
Organic memristors are still in early development, but future work could optimize the production process by improving cultivation techniques and miniaturizing the devices, as viable fungal memristors would need to be far smaller than what researchers achieved in this work.
"Everything you'd need to start exploring fungi and computing could be as small as a compost heap and some homemade electronics, or as big as a culturing factory with pre-made templates," said LaRocco. "All of them are viable with the resources we have in front of us now."
Journal Reference: LaRocco J, Tahmina Q, Petreaca R, Simonis J, Hill J (2025) Sustainable memristors from shiitake mycelium for high-frequency bioelectronics [OPEN]. PLoS One 20(10): e0328965. https://doi.org/10.1371/journal.pone.0328965
Phoronix reports that APT, the package-management utility core to Debian and its derivative distributions, will be adding Rust dependencies.
Debian developer Julian Andres Klode sent out a message on Halloween that may give some Debian Linux users and developers a spook: the APT packaging tool will begin requiring a Rust compiler next year. This makes Rust support a hard requirement for all Debian architectures. Debian CPU architectures that currently have ports but lack Rust support will either need to see support worked on or be sunset.
Julian sent out the message on Friday that he plans to introduce hard Rust dependencies on APT no earlier than May 2026.
In some areas of the APT codebase there are benefits to using the memory-safe Rust programming language, thus warranting a hard requirement for Rust in the Debian world:
"I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing."
This puts some of the more obscure Debian ports, such as m68k, Hewlett Packard Precision Architecture (HPPA), SuperH/SH4, and Alpha, in a tough position, as they lack proper Rust support right now. They will need to work on Rust support or face the sunsetting of their Debian ports:
"If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.It's important for the project as whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices."
Julian's announcement in full can be read on the mailing list.
"You don't have to claim that they're aliens to make these exciting":
[...] Anyone who studies planetary formation would relish the opportunity to get a close-up look at an interstellar object. Sending a mission to one would undoubtedly yield a scientific payoff. There's a good chance that many of these interlopers have been around longer than our own 4.5 billion-year-old Solar System.
One study from the University of Oxford suggests that 3I/ATLAS came from the "thick disk" of the Milky Way, which is home to a dense population of ancient stars. This origin story would mean the comet is probably more than 7 billion years old, holding clues about cosmic history that are simply inaccessible among the planets, comets, and asteroids that formed with the birth of the Sun.
This is enough reason to mount a mission to explore one of these objects, scientists said. It doesn't need justification from unfounded theories that 3I/ATLAS might be an artifact of alien technology, as proposed by Harvard University astrophysicist Avi Loeb. The scientific consensus is that the object is of natural origin.
Loeb shared a similar theory about the first interstellar object found wandering through our Solar System. His statements have sparked questions in popular media about why the world's space agencies don't send a probe to actually visit one. Loeb himself proposed redirecting NASA's Juno spacecraft in orbit around Jupiter on a mission to fly by 3I/ATLAS, and his writings prompted at least one member of Congress to write a letter to NASA to "rejuvenate" the Juno mission by breaking out of Jupiter's orbit and taking aim at 3I/ATLAS for a close-up inspection.
The problem is that Juno simply doesn't have enough fuel to reach the comet, and its main engine is broken. In fact, the total boost required to send Juno from Jupiter to 3I/ATLAS (roughly 5,800 mph or 2.6 kilometers per second) would surpass the fuel capacity of most interplanetary probes.
Ars asked Scott Bolton, lead scientist on the Juno mission, and he confirmed that the spacecraft lacks the oomph required for the kind of maneuvers proposed by Loeb. "We had no role in that paper," Bolton told Ars. "He assumed propellant that we don't really have."
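To see why 2.6 km/s is so demanding, plug it into the Tsiolkovsky rocket equation. The sketch below assumes a typical bipropellant specific impulse of about 320 seconds; that value is an assumption for illustration, not Juno's actual figure:

```python
# Back-of-the-envelope check of the delta-v requirement using the Tsiolkovsky
# rocket equation. The 320 s specific impulse is an assumed, typical value for
# a bipropellant engine, not Juno's real number.
import math

delta_v = 2600.0        # m/s, the ~2.6 km/s quoted above
isp = 320.0             # s, assumed specific impulse
g0 = 9.80665            # m/s^2, standard gravity

mass_ratio = math.exp(delta_v / (isp * g0))          # wet mass / dry mass
propellant_fraction = 1.0 - 1.0 / mass_ratio

print(f"required mass ratio:  {mass_ratio:.2f}")
print(f"propellant fraction:  {propellant_fraction:.0%} of total spacecraft mass")
```

Under those assumptions, well over half of the spacecraft's total mass would have to be propellant for this single maneuver, far more than a probe years into its mission has left in its tanks.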
[...] Loeb's calculations also help illustrate the difficulty of pulling off a mission to an interstellar object. So far, we've only known about an incoming interstellar intruder a few months before it comes closest to Earth. That's not to mention the enormous speeds at which these objects move through the Solar System. It's just not feasible to build a spacecraft and launch it on such short notice.
Now, some scientists are working on ways to overcome these limitations.
One of these people is Colin Snodgrass, an astronomer and planetary scientist at the University of Edinburgh. A few years ago, he helped propose to the European Space Agency a mission concept that would have very likely been laughed out of the room a generation ago. Snodgrass and his team wanted a commitment from ESA of up to $175 million (150 million euros) to launch a mission with no idea of where it would go.
ESA officials called Snodgrass in 2019 to say the agency would fund his mission, named Comet Interceptor, for launch in the late 2020s. The goal of the mission is to perform the first detailed observations of a long-period comet. So far, spacecraft have only visited short-period comets that routinely dip into the inner part of the Solar System.
[...] Long-period comets are typically discovered a year or two before coming near the Sun, still not enough time to develop a mission from scratch. With Comet Interceptor, ESA will launch a probe to loiter in space a million miles from Earth, wait for the right comet to come along, then fire its engines to pursue it.
Odds are good that the right comet will come from within the Solar System. "That is the point of the mission," Snodgrass told Ars.
[...] "You don't have to claim that they're aliens to make these exciting," Snodgrass said. "They're interesting because they are a bit of another solar system that you can actually feasibly get an up-close view of, even the sort of telescopic views we're getting now."
[...] Snodgrass sees Comet Interceptor as a proof of concept for scientists to propose a future mission specially designed to travel to an interstellar object. "You need to figure out how do you build the souped-up version that could really get to an interstellar object? I think that's five or 10 years away, but [it's] entirely realistic."
Scientists in the United States are working on just such a proposal. A team from the Southwest Research Institute completed a concept study showing how a mission could fly by one of these interstellar visitors. What's more, the US scientists say their proposed mission could have actually reached 3I/ATLAS had it already been in space.
The American concept is similar to Europe's Comet Interceptor in that it will park a spacecraft somewhere in deep space and wait for the right target to come along. The study was led by Alan Stern, the chief scientist on NASA's New Horizons mission that flew by Pluto a decade ago. "These new kinds of objects offer humankind the first feasible opportunity to closely explore bodies formed in other star systems," he said.
It's impossible with current technology to send a spacecraft to match orbits and rendezvous with a high-speed interstellar comet. "We don't have to catch it," Stern recently told Ars. "We just have to cross its orbit. So it does carry a fair amount of fuel in order to get out of Earth's orbit and onto the comet's path to cross that path."
[...] A mission to encounter an interstellar comet requires no new technologies, Stern said. Hopes for such a mission are bolstered by the activation of the US-funded Vera Rubin Observatory, a state-of-the-art facility high in the mountains of Chile set to begin deep surveys of the entire southern sky later this year. Stern predicts Rubin will discover "one or two" interstellar objects per year. The new observatory should be able to detect the faint light from incoming interstellar bodies sooner, providing missions with more advance warning.
"If we put a spacecraft like this in space for a few years, while it's waiting, there should be five or 10 to choose from," he said.
[...] "Each time that ESA has done a comet mission, it's done something very ambitious and very new," Snodgrass said. "The Giotto mission was the first time ESA really tried to do anything interplanetary... And then, Rosetta, putting this thing in orbit and landing on a comet was a crazy difficult thing to attempt to do."
"They really do push the envelope a bit, which is good because ESA can be quite risk averse, I think it's fair to say, with what they do with missions," he said. "But the comet missions, they are things where they've really gone for that next step, and Comet Interceptor is the same. The whole idea of trying to design a space mission before you know where you're going is a slightly crazy way of doing things. But it's the only way to do this mission. And it's great that we're trying it."