

posted by hubie on Friday November 21, @07:10PM

https://www.theregister.com/2025/11/18/google_chrome_seventh_0_day/

Seventh Chrome 0-day this year

Google pushed an emergency patch on Monday for a high-severity Chrome bug that attackers have already found and exploited in the wild.

The vulnerability, tracked as CVE-2025-13223, is a type confusion flaw in the V8 JavaScript engine, and it's the seventh Chrome zero-day this year. All have since been patched. But if you use Chrome as your web browser, make sure you are running the most recent version - or risk full system compromise.

A type confusion vulnerability occurs when the engine misinterprets a block of memory, treating an object as a different type than it actually is. This can lead to system crashes and arbitrary code execution, and, if chained with other bugs, can potentially lead to full system compromise via a crafted HTML page.
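
As a rough sketch of the idea (using Python's ctypes as a stand-in for V8's internals, with invented structure names), type confusion amounts to reading one object's bytes as if they were another type:

    import ctypes

    class Float64Box(ctypes.Structure):   # stands in for a "number" object
        _fields_ = [("value", ctypes.c_double)]

    class PointerBox(ctypes.Structure):   # stands in for a "reference" object
        _fields_ = [("ptr", ctypes.c_size_t)]

    num = Float64Box(3.14)
    # The confusion: the same 8 bytes, read back as the wrong type.
    confused = ctypes.cast(ctypes.pointer(num), ctypes.POINTER(PointerBox)).contents
    print(hex(confused.ptr))  # float bits masquerading as an "address"

In a JIT engine the confusion happens in native code, where a forged "pointer" like this can be dereferenced rather than just printed, which is what turns the bug into an exploit primitive.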

"Google is aware that an exploit for CVE-2025-13223 exists in the wild," the Monday security alert warned.

Also on Monday, Google issued a second emergency patch for another high-severity type confusion bug in Chrome's V8 engine. This one is tracked as CVE-2025-13224. As of now, there are no reports of exploitation - so that's another reason to update sooner rather than later.

Google's LLM-based bug hunting tool Big Sleep found CVE-2025-13224 in October, and a human - the Chocolate Factory's own Clément Lecigne - discovered CVE-2025-13223 on November 12.

Lecigne is a spyware hunter with Google's Threat Analysis Group (TAG) credited with finding and disclosing several of these types of Chrome zero-days. While we don't have any details about who is exploiting CVE-2025-13223 and what they are doing with the access, TAG tracks spyware and nation-state attackers abusing zero-days for espionage expeditions.

TAG also spotted the sixth Chrome bug exploited as a zero-day and patched in September. That flaw, CVE-2025-10585, was also a type confusion flaw in the V8 JavaScript and WebAssembly engine.


Original Submission

posted by hubie on Friday November 21, @02:23PM
from the still-might-be-three-raccoons-in-a-trenchcoat dept.

A Chinese company cut open its invention on stage to prove that it was not a human in a robot suit after comments that it looked too real. Unless it was a human with a missing leg, the robot was indeed proven to be a mechanical invention.

Technology company Xpeng unveiled its second-generation humanoid robot, IRON, at its AI Day in Guangzhou, China, last week, rivalling Tesla's Optimus robots.
Powered by a solid-state battery and three custom AI chips, IRON features a "humanoid spine, bionic muscles, and fully covered flexible skin, and supports customisation for different body shapes."

The robot's onboard compute can perform 2,250 trillion operations per second (2,250 TOPS), and it features 82 degrees of freedom, including 22 in each hand.

"Its movements are natural, smooth, and flexible, capable of achieving, catwalk walking and other high-difficulty human-like actions," Xpeng said


Original Submission

posted by jelizondo on Friday November 21, @09:34AM
from the software-freedom dept.

Software Engineer Nikita Prokopov delves into how programs have changed in recent years from doing our bidding to working against us and controlling us. This adverse change has been ushered in through required accounts, update processes, notifications, and on-boarding procedures.

This got so bad that when a program doesn't ask you to create an account, it feels refreshing.

"Okay, but accounts are still needed to sync stuff between machines."

Wrong. Syncthing is a secure, multi-machine distributed app and yet doesn't need an account.

"Okay, but you still need an account if you pay for a subscription?"

Mullvad VPN accepts payments and yet didn't ask me for my email.

These new, malevolent programs fight for attention rather than getting the job done and otherwise staying out of the way. Not only do they prioritize "engagement" over usability, they also tend to push (hostile) agendas along the way.

Previously:
(2025) What Happened to Running What You Wanted on Your Own Machine?
(2025) Passkeys Are Incompatible With Open-Source Software
(2024) Achieving Software Freedom in the Age of Platform Decay
(2024) Bruce Perens Solicits Comments on First Draft of a Post-Open License


Original Submission

posted by jelizondo on Friday November 21, @04:45AM

Developers tend to scrutinize AI-generated code less critically and they learn less from it:

When two software developers collaborate on a programming project—known in technical circles as 'pair programming'—it tends to yield a significant improvement in the quality of the resulting software. 'Developers can often inspire one another and help avoid problematic solutions. They can also share their expertise, thus ensuring that more people in their organization are familiar with the codebase,' explains Sven Apel, professor of computer science at Saarland University. Together with his team, Apel has examined whether this collaborative approach works equally well when one of the partners is an AI assistant. [...]

For the study, the researchers used GitHub Copilot, an AI-powered coding assistant introduced by Microsoft in 2021, which, like similar products from other companies, has now been widely adopted by software developers. These tools have significantly changed how software is written. 'It enables faster development and the generation of large volumes of code in a short time. But this also makes it easier for mistakes to creep in unnoticed, with consequences that may only surface later on,' says Sven Apel. The team wanted to understand which aspects of human collaboration enhance programming and whether these can be replicated in human-AI pairings. Participants were tasked with developing algorithms and integrating them into a shared project environment.

'Knowledge transfer is a key part of pair programming,' Apel explains. 'Developers will continuously discuss current problems and work together to find solutions. This does not involve simply asking and answering questions, it also means that the developers share effective programming strategies and volunteer their own insights.' According to the study, such exchanges also occurred in the AI-assisted teams—but the interactions were less intense and covered a narrower range of topics. 'In many cases, the focus was solely on the code,' says Apel. 'By contrast, human programmers working together were more likely to digress and engage in broader discussions and were less focused on the immediate task.'

One finding particularly surprised the research team: 'The programmers who were working with an AI assistant were more likely to accept AI-generated suggestions without critical evaluation. They assumed the code would work as intended,' says Apel. 'The human pairs, in contrast, were much more likely to ask critical questions and were more inclined to carefully examine each other's contributions,' explains Apel. He believes this tendency to trust AI more readily than human colleagues may extend to other domains as well. 'I think it has to do with a certain degree of complacency—a tendency to assume the AI's output is probably good enough, even though we know AI assistants can also make mistakes.' Apel warns that this uncritical reliance on AI could lead to the accumulation of 'technical debt', which can be thought of as the hidden costs of the future work needed to correct these mistakes, thereby complicating the future development of the software.


Original Submission

posted by jelizondo on Friday November 21, @12:07AM

At a recent AI conference in San Francisco, over 300 founders and investors were asked a provocative question: which billion-dollar AI startup would you bet against? The answer was surprising. Perplexity AI topped the list, with OpenAI coming in second. While the OpenAI vote raised eyebrows given its market dominance, the Perplexity verdict reveals something deeper about the AI search landscape in 2025.

Founded in 2022, Perplexity hit a $20 billion valuation by September 2025, processing 780 million queries monthly with over 30 million active users. Impressive on paper, but the company has raised nearly $1.5 billion in funding, with valuations jumping from $500 million to $20 billion in just 18 months. Fundraising rounds roughly every two months suggest either extraordinary growth or growing desperation to prove the business model works.

Here's the uncomfortable truth: Perplexity is increasingly looking like the thing Silicon Valley dreads most: a wrapper. The company initially had a competitive edge when it pioneered AI-powered web search with real-time information. But that advantage has evaporated faster than anyone expected.

[...] The AI bubble will eventually deflate. When it does, wrappers built on vanity metrics and unsustainable unit economics will be the first to go. Perplexity's 360 million "free users" in India won't save them when those users discover that ChatGPT and Google do the same thing for free and they don't need to pay ₹17,000 for the privilege.



Original Submission

posted by janrinok on Thursday November 20, @07:15PM

Turris, the hardware division of cz.nic, the CZ domain registry, has released its latest [open source] router device, the Omnia NG.

Coverage from cnx-software:

The Turris Omnia NG is a high-performance Wi-Fi 7 router with a mini PCIe slot for 4G/5G modems, two 10GbE SFP+ cages, a 240×240 px color display, and a D-Pad button, running OpenWrt-based Turris OS, and designed for advanced home users, small businesses, and lab environments.

Built around a 2.2 GHz Qualcomm IPQ9574 quad-core 64-bit Arm Cortex-A73 CPU, the Omnia NG supports Wi-Fi 7/6 tri-band connectivity. Additionally, it features four 2.5Gbps Ethernet ports, two USB 3.0 ports, NVMe storage support, and includes a 90 W power supply for attached peripherals. Other hardware highlights include rack-mount supports, a metal chassis, and antenna arrays for 4×4 MIMO operation. It comes 10 years after the original Turris Omnia open-source router was launched on Indiegogo.


Original Submission

posted by janrinok on Thursday November 20, @02:37PM

Use the right tool for the job:

In my first interview out of college I was asked the change counter problem:

Given a set of coin denominations, find the minimum number of coins required to make change for a given number. IE for USA coinage and 37 cents, the minimum number is four (quarter, dime, 2 pennies).

I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9). The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.
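
For illustration, here is that trap in runnable form: a quick Python sketch of both the greedy approach and the bottom-up dynamic program (written for this summary, not taken from the article):

    def greedy_change(denoms, target):
        coins = 0
        for d in sorted(denoms, reverse=True):
            coins += target // d      # take as many of the largest coin as fit
            target %= d
        return coins if target == 0 else None

    def dp_change(denoms, target):
        INF = float("inf")
        best = [0] + [INF] * target   # best[v] = fewest coins summing to v
        for v in range(1, target + 1):
            best[v] = min((best[v - d] + 1 for d in denoms if d <= v), default=INF)
        return best[target] if best[target] < INF else None

    print(greedy_change([10, 9, 1], 37))  # 10 coins: 3x10 + 7x1
    print(dp_change([10, 9, 1], 37))      # 4 coins:  10 + 9 + 9 + 9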

But you only need dynamic programming if you're writing your own algorithm. It's really easy if you throw it into a constraint solver like MiniZinc and call it a day.

[...] Lots of similar interview questions are this kind of mathematical optimization problem, where we have to find the maximum or minimum of a function corresponding to constraints. They're hard in programming languages because programming languages are too low-level. They are also exactly the problems that constraint solvers were designed to solve. Hard leetcode problems are easy constraint problems. Here I'm using MiniZinc, but you could just as easily use Z3 or OR-Tools or whatever your favorite generalized solver is.
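
For example, here is the change counter as a constraint problem, sketched with OR-Tools' CP-SAT Python API rather than MiniZinc (assumes the ortools package is installed): declare a count per denomination, constrain the weighted sum to the target, and minimize the total.

    from ortools.sat.python import cp_model

    def min_coins(denoms, target):
        model = cp_model.CpModel()
        # One integer variable per denomination: how many of that coin to use.
        counts = [model.NewIntVar(0, target, f"n_{d}") for d in denoms]
        model.Add(sum(n * d for n, d in zip(counts, denoms)) == target)
        model.Minimize(sum(counts))
        solver = cp_model.CpSolver()
        if solver.Solve(model) == cp_model.OPTIMAL:
            return {d: solver.Value(n) for d, n in zip(denoms, counts)}
        return None

    print(min_coins([10, 9, 1], 37))   # {10: 1, 9: 3, 1: 0} -- four coins

The modeling is three lines; the search strategy is the solver's problem, which is the article's point.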

[...] Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?" Constraint solver runtimes are unpredictable and almost always slower than an ideal bespoke algorithm because they are more expressive, in what I refer to as the capability/tractability tradeoff. But even so, they'll do way better than a bad bespoke algorithm, and I'm not experienced enough in handwriting algorithms to consistently beat a solver.

[...] Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration. And you get more interesting opportunities to teach optimizations, like symmetry breaking.


Original Submission

posted by janrinok on Thursday November 20, @09:52AM

Floating solar panels show promise, but environmental impacts vary by location, study finds:

Floating solar panels are emerging as a promising clean energy solution with environmental benefits, but a new study finds those effects vary significantly depending on where the systems are deployed.

Researchers from Oregon State University and the U.S. Geological Survey modeled the impact of floating solar photovoltaic systems on 11 reservoirs across six states. Their simulations showed that the systems consistently cooled surface waters and altered water temperatures at different layers within the reservoirs. However, the panels also introduced increased variability in habitat suitability for aquatic species.

"Different reservoirs are going to respond differently based on factors like depth, circulation dynamics and the fish species that are important for management," said Evan Bredeweg, lead author of the study and a former postdoctoral scholar at Oregon State. "There's no one-size-fits-all formula for designing these systems. It's ecology - it's messy."

While the floating solar panel market is established and growing in Asia, it remains limited in the United States, mostly to small pilot projects. However, a study released earlier this year by the U.S. Department of Energy's National Renewable Energy Laboratory estimated that U.S. reservoirs could host enough floating solar panel systems to generate up to 1,476 terawatt-hours annually, enough to power approximately 100 million homes.
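
A back-of-the-envelope check on that claim (the household consumption figure below is our assumption, roughly the US average, not a number from the study):

    generation_twh = 1476                          # NREL's annual estimate
    homes = 100_000_000
    kwh_per_home = generation_twh * 1e9 / homes    # 1 TWh = 1e9 kWh
    print(f"{kwh_per_home:,.0f} kWh per home per year")
    # ~14,760 kWh, comfortably above the ~10,500 kWh a typical US home uses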

Floating solar panels offer several advantages. The cooling effect of the water can boost panel efficiency by an estimated 5 to 15%. The systems can also be integrated with existing hydroelectric and transmission infrastructure. They may also help reduce evaporation, which is especially valuable in warmer, drier climates.

However, these benefits come with questions about potential impacts on aquatic ecosystems, an area that has received limited scientific attention.

[...] They found that changes in temperature and oxygen dynamics caused by floating solar panels can influence habitat availability for both warm-water and cold-water fish species. For instance, cooler water temperatures in summer generally benefit cold-water species, though this effect is most pronounced when panel coverage exceeds 50%.

The researchers note the need for continued research and long-term monitoring to ensure floating photovoltaic systems support clean energy goals without compromising aquatic ecosystems.

"History has shown that large-scale modifications to freshwater ecosystems, such as hydroelectric dams, can have unforeseen and lasting consequences," Bredeweg said.

Journal Reference: https://doi.org/10.1016/j.limno.2025.126293


Original Submission

posted by janrinok on Thursday November 20, @05:04AM
from the fly-me-to-the-moon dept.

Everybody knows Intel's 4004, designed for a calculator, was the first CPU on a chip. Everybody is wrong.

For a long time, what is now considered to be a prime candidate for the title of the 'world's first microprocessor' was a very well-kept secret. The MP944 is the unassuming name of the chip we want to highlight today. It was developed to be the brains behind the U.S. Navy's F-14 Tomcat's Central Air Data Computer (CADC). Thus, it isn't surprising that the MP944 was a cut above the Intel 4004, the world's first commercial microprocessor, designed to power a desktop calculator.

The MP944 was designed by a team of engineers approximately 25-strong. Leading the two-year development of this microprocessor were Steve Geller and Ray Holt.

The processor began service in the aforementioned F-14 flight control computer in June 1970, over a year before Intel's 4004 became available in November 1971. An MP944 worked as part of a six-chip system for the real-time calculation of flight parameters such as altitude, airspeed, and Mach number – and was a key innovation to enable the Tomcat's articulated sweep-wing system.

By many accounts, the MP944 didn't just pre-date the 4004 by quite a margin, it was significantly more performant. A tweet embedded in the original article suggests Geller & Holt's design was "8x faster than the Intel 4004." Completing all the complicated polynomial calculations required by the CADC likely dictated the degree of performance it delivered.

[...] As well as offering amazing performance for the early 1970s, the MP944 had to satisfy some stringent military-minded specifications. For example, it had to remain operational in temperatures spanning -55 to +125 degrees Celsius.

Being an essential component of a flight system also meant the military pushed for safety and failsafe measures. That was tricky, with such a cutting-edge development in a new industry. What ended up being provided to the F-14 Tomcats was a system that could constantly self-diagnose issues while executing its flight computer duties. These MP944 systems could apparently switch to an identical backup unit, fitted as standard, within 1/18th of a second of a fault being flagged by the self-test system.

As mentioned above, this processor of many firsts seems to be of largely academic interest nowadays. However, if Holt's attempts to publish the research paper outlining the architecture of the F-14's MP944-powered CADC system had been cleared back in 1971, we'd surely now all be living in a different future.


Original Submission

posted by janrinok on Thursday November 20, @12:18AM
from the What-is-your-major-malfunction-numbnuts? dept.

Task and Purpose has a short article on a traveling art exhibit of photos taken during the filming of Full Metal Jacket (1987). The actor Matthew Modine played Pvt. Joker in the war film directed by Stanley Kubrick. While Modine was playing the role of a war correspondent, he also ended up taking behind-the-scenes photos on set.

"If you're going to take pictures on my set, this is the camera you need to get," Kubrick said.

Those instructions, Modine realized, included an unspoken permission slip: to capture behind-the-scenes pictures of the iconic war film as it was being made (which perhaps made sense for the film: Pvt. Joker, after all, is a combat correspondent in the Marines, and snaps photos throughout).

Modine's photographs and a journal he kept during the filming are now the heart of "Full Metal Jacket Diary," an exhibit at the National Veterans Memorial and Museum in Columbus, Ohio. The photographs and other pieces spent much of the year at the National Museum of the Marine Corps in Quantico, Virginia, as the exhibit "Full Metal Modine."

The Internet Movie Database has a detailed entry, as usual, on Stanley Kubrick's Full Metal Jacket.


Original Submission

posted by janrinok on Wednesday November 19, @07:36PM
from the Altman-Bezos-Gates-and-Musk-again dept.

https://www.nytimes.com/2025/11/14/magazine/neurotech-neuralink-rights-regulations.html
https://archive.ph/mgZRE

As neural implant technology and A.I. advance at breakneck speeds, do we need a new set of rights to protect our most intimate data — our minds?

On a recent afternoon in the minimalist headquarters of the M.I.T. Media Lab, the research scientist Nataliya Kosmyna handed me a pair of thick gray eyeglasses to try on. They looked almost ordinary aside from the three silver strips on their interior, each one outfitted with an array of electrical sensors. She placed a small robotic soccer ball on the table before us and suggested that I do some "basic mental calculus." I started running through multiples of 17 in my head. After a few seconds, the soccer ball lit up and spun around. I seemed to have made it move with the sheer force of my mind, though I had not willed it in any sense. My brain activity was connected to a foreign object. "Focus, focus," Kosmyna said. The ball swirled around again. "Nice," she said. "You will get better."

Kosmyna, who is also a visiting research scientist at Google, designed the glasses herself. They are, in fact, a simple brain-computer interface, or B.C.I., a conduit between mind and machine. As my mind went from 17 to 34 to 51, electroencephalography (EEG) and electrooculography (EOG) sensors picked up heightened electrical activity in my eyes and brain. The ball had been programmed to light up and rotate whenever my level of neural "effort" reached a certain threshold. When my attention waned, the soccer ball stood still.
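
Conceptually, the trigger described here is just a threshold on a smoothed "effort" signal; a toy sketch in Python (every name and number below is hypothetical, nothing from Kosmyna's actual system):

    def effort_trigger(band_power_stream, threshold=0.7, alpha=0.1):
        """Yield True whenever smoothed EEG 'effort' exceeds the threshold."""
        smoothed = 0.0
        for sample in band_power_stream:   # normalized band-power estimates, 0..1
            smoothed = alpha * sample + (1 - alpha) * smoothed  # moving average
            yield smoothed > threshold     # True -> light up and spin the ball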

For now, the glasses are solely for research purposes. At M.I.T., Kosmyna has used them to help patients with A.L.S. (Amyotrophic Lateral Sclerosis) communicate with caregivers — but she said she receives multiple purchase requests a week. So far she has declined them. She's too aware that they could easily be misused.

Neural data can offer unparalleled insight into the workings of the human mind. B.C.I.s are already frighteningly powerful: Using artificial intelligence, scientists have used B.C.I.s to decode "imagined speech," constructing words and sentences from neural data; to recreate mental images (a process known as brain-to-image decoding); and to trace emotions and energy levels. B.C.I.s have allowed people with locked-in syndrome, who cannot move or speak, to communicate with their families and caregivers and even play video games. Scientists have experimented with using neural data from fMRI imaging and EEG signals to detect sexual orientation, political ideology and deception, to name just a few examples.

Advances in optogenetics, a scientific technique that uses light to stimulate or suppress individual, genetically modified neurons, could allow scientists to "write" the brain as well, potentially altering human understanding and behavior. Optogenetic implants are already able to partially restore vision to patients with genetic eye disorders; lab experiments have shown that the same technique can be used to implant false memories in mammal brains, as well as to silence existing recollections and to recover lost ones.

Neuralink, Elon Musk's neural technology company, has so far implanted 12 people with its rechargeable devices. "You are your brain, and your experiences are these neurons firing," Musk said at a Neuralink presentation in June. "We don't know what consciousness is, but with Neuralink and the progress that the company is making, we'll begin to understand a lot more."

Musk's company aims to eventually connect the neural networks inside our brains to artificially intelligent ones on the outside, creating a two-way path between mind and machine. Neuroethicists have criticized the company for ethical violations in animal experiments, for a lack of transparency and for moving too quickly to introduce the technology to human subjects, allegations the company dismisses. "In some sense, we're really extending the fundamental substrate of the brain," a Neuralink engineer said in the presentation. "For the first time we are able to do this in a mass market product."

The neurotechnology industry already generates billions of dollars of revenue annually. It is expected to double or triple in size over the next decade. Today, B.C.I.s range from neural implants to wearable devices like headbands, caps and glasses that are freely available for purchase online, where they are marketed as tools for meditation, focus and stress relief. Sam Altman founded his own B.C.I. start-up, Merge Labs, this year, as part of his effort to bring about the day when humans will "merge" with machines. Jeff Bezos and Bill Gates are investors in Synchron, a Neuralink competitor.

Even if Kosmyna's glasses aren't for sale, similar technology is on the market. In 2023, Apple patented an AirPods prototype equipped with similar sensors, which would allow the device to monitor brain activity and other so-called biosignals. Last month, Meta unveiled a pair of new smart glasses and a "neural band," which lets users text and surf the web with small gestures alone. Overseas, China is fast-tracking development of the technology for medical and consumer use, and B.C.I.s are among the priorities of its new five-year plan for economic development.

"What's coming is A.I. and neurotechnology integrated with our everyday devices," said Nita Farahany, a professor of law and philosophy at Duke University who studies emerging technologies. "Basically, what we are looking at is brain-to-A.I. direct interactions. These things are going to be ubiquitous. It could amount to your sense of self being essentially overwritten."

To prevent this kind of mind-meddling, several nations and states have already passed neural privacy laws. In 2021, Chile amended its constitution to include explicit protections for "neurorights"; Spain adopted a nonbinding list of "digital rights" that protects individual identity, freedom and dignity from neurotechnologies. In 2023, European nations signed the León Declaration on neurotechnology, which prioritizes a "rights oriented" approach to the sector. The legislatures of Mexico, Brazil and Argentina have debated similar measures. California, Colorado, Montana and Connecticut have each passed laws to protect neural data.

The federal government has started taking an interest, too. In September, three senators introduced the Management of Individuals' Neural Data (MIND) Act, which would direct the Federal Trade Commission to examine how neural data should be defined and protected. The Uniform Law Commission, a nonprofit that authors model legislation, has convened lawyers, philosophers and scientists who are working on developing a standard law on mental privacy that states could choose to adopt.

Without regulations governing the collection of neural data and the commercialization of B.C.I.s, there is the real possibility that we might find ourselves becoming even more beholden to our devices and their creators than we all already are. In clinical trials, patients have sometimes been left in the lurch; some have had to have their B.C.I.s surgically explanted because funding for their trial ran out.

And the possibility that therapeutic neurotechnologies could one day be weaponized for political purposes looms heavily over the field. Musk, for example, has expressed a desire to "destroy the woke mind virus." As Quinn Slobodian and Ben Tarnoff argue in a forthcoming book, it does not require a great logical leap to suspect that he sees Neuralink as part of a way to do so.

In the 1920s, the German psychiatrist Hans Berger took the first EEG measurements, celebrating the fact that he could detect brain waves "from the unscathed skull." In the 1940s and '50s, scientists experimented with the use of electrodes to alleviate tremors and epilepsy. The Spanish neurophysiologist José Delgado made headlines in 1965, after he used implanted electrodes to stop a charging bull in its tracks; he bragged that he could "play" the minds of monkeys and cats like "electronic toys."

In a 1970 interview with The New York Times, Delgado prophesied that we would soon be able to alter our own "mental functions" as a result of genetic and neuroscientific advances. "The question is what sort of humans would we like, ideally, to construct?" he asked. The notion that a human being could be "constructed" had been troubling philosophers, scientists and writers since at least the late 18th century, when scientists first manipulated electric currents inside animal bodies. The language of electrification quickly seeped out of science and into politics: The historian Samantha Wesner has shown that in France, Jacobin revolutionaries spoke of "electrifying" people to recruit them to their cause and writers toyed with the possibility that political sentiment could be electrically controlled.

Two centuries later, when Delgado and his colleagues showed that it had become technically possible to use electricity to alter the workings of the animal mind, this too was accompanied by an explosion of political concern about the relation between the citizen and the state. Because the thinking subject is by definition a political subject — "the very presence of mind is a political presence," argues Danielle Carr, a historian of neuroscience who researches the political and cultural history of B.C.I.s and related technologies — the potential to alter the human brain was also understood as a threat to liberal politics itself.

In the U.S., where the Cold War fueled anxiety about potential brainwashing technologies, Delgado's work was at first approached with wonder and confusion, but it soon fell under increasing suspicion. In 1953, the director of the C.I.A., Allen Dulles, warned that the Soviet government was conducting a form of "brain warfare" to control minds. In a forthcoming book, Carr traces how the liberal doctrine of universal human rights and freedoms, including the freedom of thought, was positioned as a protective umbrella against communist mind-meddling, co-opting pre-existing struggles against psychiatric experimentation. While the United States warned of brain warfare abroad, it also worked to deploy it at home. Dulles authorized the creation of the C.I.A.'s clandestine MK-Ultra program, which for 20 years conducted psychiatric and mind-control experiments, often on unwitting and incarcerated subjects, until it was abruptly shut down in 1973.

Around this time, the University of California, Los Angeles, sought to create a Center for the Study and Reduction of Violence, leading to widespread speculation that the center would screen people in prisons and mental hospitals for indications of aggression and then subject them to brain surgery. An outcry, led in part by the Black Panthers, shut down funding for the initiative. These developments raised public awareness of neural technologies and contributed to the elevation of laws and rights as a stopgap against their worst uses. "We believe that mind control and behavior manipulation are contrary to the ideas laid down in the Bill of Rights and the American Constitution," the Republican lawmaker Steven Symms argued in a 1974 speech.

Over the next decades, the development of neurotechnology drastically slowed. By the 1990s, the end of the Cold War dispelled concerns about communist mind-meddling, and the political climate was ripe for reconsideration of the promises and perils of neurotech. In 2013, President Barack Obama created the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) program, which poured hundreds of millions of dollars into neuroscience. In 2019, the Pentagon's Defense Advanced Research Projects Agency announced that it was funding several teams working to develop nonsurgical neurotechnologies that could, for example, allow service members to control "swarms of unmanned aerial vehicles" with their minds.

As experimentation progressed, so did the medical and therapeutic uses of B.C.I.s. In 2004, Matthew Nagle, a tetraplegic, became the first human to be implanted with a sophisticated B.C.I. For individuals who cannot move or speak — those who are living with degenerative disease or paralysis, for example — advances in neural implants have been transformative.

Earlier this year, Bradford Smith, who lives with A.L.S. and was the third person to receive a Neuralink implant, used the A.I. chatbot Grok to draft his X posts. "Neuralink does not read my deepest thoughts or words I think about," Smith explains in an A.I.-generated video about his experience. "It just reads how I want to move and moves the cursor where I want." Because he received the implant as part of a clinical trial, Smith's neural data is protected by HIPAA rules governing private health information. But for mass-market consumer devices — EEG headbands and glasses, for example — that can be used to enhance cognition, focus and productivity rather than simply restore brain functions that have been compromised, there are very few data protections. "The conflation of consumer and medical devices, and the lack of a consistent definition of neural data itself, adds to the confusion about what is at stake," said Leigh Hochberg, a neurointensive care physician, neuroscientist and director of BrainGate clinical trials. "It's a good societal conversation to have, to reflect on what we individually believe should remain private."

In 2017, driven by a sense of responsibility and terror about the implications of his own research, Rafael Yuste, a neuroscientist at Columbia University, convened scientists, philosophers, engineers and clinicians to create a set of ethical guidelines for the development of neurotechnologies.

One of the group's primary recommendations was that neurorights protecting individual identity, agency and privacy, as well as equal access and protection from bias, should be recognized as basic human rights and protected under the law. Yuste worked with the lawyer Jared Genser to create the Neurorights Foundation in 2021. Together, they surveyed 30 consumer neurotech companies and found that all but one had "no meaningful limitations" to retrieving or selling user neural data. There is consensus that some regulation is necessary given the risks of companies and governments having unfettered access to neural data, and that existing human rights already offer a small degree of protection. But neuroscientists, philosophers, developers and patients disagree about what kinds of regulations should be in place, and about how neurorights should be translated into written laws.

"If we keep inventing new rights, there is a risk that we won't know where one ends and the other begins," said Andrea Lavazza, an ethicist and a philosopher at Pegaso University in Italy who supports the creation of new rights. The United Nations, UNESCO and the World Economic Forum have each convened groups to investigate the implications of neurotechnologies on human rights; dozens of guidance documents on the ethics of the field have been published.

One of the fundamental purposes of law, at least in the United States, is to protect the individual from unwarranted interference. If neurotechnologies have the potential to decode or even change patterns of thought and action, advocates believe that law has the distinct capacity to try to restrain its reach into the innermost chambers of the mind. And while freedom of thought, conscience, opinion, expression and privacy are all recognized as basic human rights in international law, some philosophers and lawyers argue that these fundamental freedoms need to be updated and reinterpreted if they have any hope of protecting individuals against interference from neural devices, because they were conceived when the technology was only a distant dream. Farahany, the law and philosophy professor at Duke, argues that we need to recognize a fundamental right to "cognitive liberty," which scholars have defined as "the right and freedom to control one's own consciousness and electrochemical thought process" — to protect our minds. For Farahany, this kind of liberty "is a precondition to any other concept of liberty, in that, if the very scaffolding of thought itself is manipulated, undermined, interfered with, then any other way in which you would exercise your liberties is meaningless, because you are no longer a self-determined human at that point."

To call for the recognition of a new fundamental right, or even for the enhancement of existing human rights, is, at the moment, a countercultural move. Over the past several years, human rights and the international laws designed to protect them have been gravely weakened, while technologies that underlie surveillance capitalism have grown only more widespread. We already live in a world awash with personal data, including sensitive financial and biological information. We leave behind reams of revealing data wherever we go, in both the physical and digital worlds. Our computer cameras are powerful enough to capture our heart rates and make medical diagnoses. Adding neural data on top of this might not constitute such an immense shift. Or it might change everything, offering external actors a portal into our most intimate — and often unarticulated — thoughts and desires.

The emergence of B.C.I.s during the mid-20th century was greeted and ultimately torpedoed by Cold War liberalism — Dulles, the C.I.A. director, warned that mind-control techniques could thwart the American project of "spreading the gospel of freedom." Today, we lack a corresponding language with which to push back against the data economy's expanding reach into our minds. In a world where everything can be reduced to data to be bought and sold, where privacy regulations offer only a modicum of protection and where both domestic and international law have been weakened, there are few tools to shield our innermost selves from financialization.

In this sense, the debate over neurorights is a kind of last-ditch effort to ensure that the hard-won protections of the past century carry over into this one — to try to prevent the freedom of thought, conscience and opinion, for example, from being effectively suppressed by the increasingly algorithmic experience of everyday life. How much privacy might we, as a society, be willing to trade in exchange for augmented cognition? "In three years, we will have large-scale models of neural data that companies could put on a device or stream to the cloud, to try to make predictions," said Mackenzie Mathis, a neuroscientist at the Swiss Federal Institute of Technology, Lausanne. How those kinds of data transfers should be regulated, she said, is an urgent question. "We are changing people, just like social media, or large-language models changed people." Under these conditions, the challenge of ensuring that individuals retain the ability to manage access to their innermost selves, however defined, becomes all the more acute. "Our mental life is the core of our self, and we used to be sure that no one could break this barrier," said Lavazza. The collapse of that surety could be an invitation to dread a future in which the unrestricted use of these technologies will have destroyed society as we know it. Or it could be an occasion to rethink the politics that got us here in the first place.


Original Submission

posted by hubie on Wednesday November 19, @02:49PM

https://bit-hack.net/2025/11/10/fpga-based-ibm-pc-xt/

Recently I undertook a hobby project to recreate an IBM XT Personal Computer from the 1980s using a mix of authentic parts and modern technology. I had a clear goal in mind: I wanted to be able to play the EGA version of Monkey Island 1 on it, with no features missing. This means I need mouse support, hard drive with write access for saving the game, and Adlib audio, my preferred version of the game's musical score.

The catalyst for this project was the discovery that there are low-power versions of the NEC V20 CPU available (UPD70108H), which is compatible with the Intel 8088 used in the XT. Being a low-power version significantly simplifies its connection to an FPGA, which typically operates with 3.3-volt IO. Coupled with a low-power 1MB SRAM chip (CY62158EV30) to provide the XT with its 640KB of memory, I started to have the bones of a complete system worked out.

Source code, schematics and gerber files: https://github.com/bit-hack/iceXt


Original Submission

posted by hubie on Wednesday November 19, @10:01AM
from the that's-a-long-time-to-have-systemd-around dept.

https://distrowatch.com/dwres.php?resource=showheadline&story=20094

Canonical has announced that it will extend support for long-term support (LTS) versions of Ubuntu, supplying security updates for 15 years.

"Today, Canonical announced the expansion of the Legacy add-on for Ubuntu Pro, extending total coverage for Ubuntu LTS releases to 15 years. Starting with Ubuntu 14.04 LTS (Trusty Tahr), this extension brings the full benefits of Ubuntu Pro – including continuous security patching, compliance tooling and support for your OS – to long-lived production systems."

The extended support is provided as part of Canonical's Ubuntu Pro service.

Editor's Comment: Ubuntu Pro is free for personal use on up to 5 computers. There is also a pricing system for professional and enterprise use.


Original Submission

posted by hubie on Wednesday November 19, @05:16AM

https://itsfoss.com/news/mozilla-ai-window-plans/

Planned browsing mode will let users chat with an AI assistant while surfing the web.

Firefox has been pushing AI features for a while now. Over the past year, they've added AI chatbots in the sidebar, automatic alt text generation, and AI-enhanced tab grouping. It is basically their way of keeping up with Chrome and Edge, both of which have gone all-in on AI.

Of course, not everyone is thrilled about AI creeping into their web browsers, and Mozilla (the ones behind Firefox) seems to understand that. Every AI feature in Firefox is opt-in. You can keep using the browser as you always have, or flip on AI tools when you actually need them.

Now, they are taking this approach a step further with something called AI Window.

Mozilla has announced it's working on AI Window, a new browsing mode that comes with a built-in AI assistant. Think of it as a third option alongside the Classic browsing mode and Private Window mode.

Before you get angry, know that it will be fully optional. Switch to AI Window when you want help, or just ignore it entirely. Try it, hate it, disable it. Mozilla's whole pitch is that you stay in control.

On the transparency front, they are making three commitments:

        A fully opt-in experience.
        Features that protect your choice.
        More transparency around how your data is used.

Why bother with all this, you ask? Mozilla sees AI as part of the web's future and wants to shape it their way. They figure ignoring AI while it reshapes the web doesn't help anyone, so they want to steer it toward user control rather than watch browsers from AI companies (read: Big Tech) lock people in.

Ajit Varma, the Vice President and Head of Product at Firefox, put it like this:

"We believe standing still while technology moves forward doesn't benefit the web or humanity. That's why we see it as our responsibility to shape how AI integrates into the web — in ways that protect and give people more choice, not less."

The feature isn't live. Mozilla's building it "in the open" and wants feedback to shape how it turns out. If you want early access, there's a waitlist at firefox.com/ai to get updates and first dibs on testing.


Original Submission

posted by hubie on Wednesday November 19, @12:31AM

https://www.scientificamerican.com/article/raccoons-are-showing-early-signs-of-domestication/
https://archive.fo/HF0AV

City-dwelling raccoons seem to be evolving a shorter snout—a telltale feature of our pets and other domesticated animals

With dexterous childlike hands and cheeky "masks," raccoons are North America's ubiquitous backyard bandits. The critters are so comfortable in human environments, in fact, that a new study finds that raccoons living in urban areas are physically changing in response to life around humans—an early step in domestication.

The study lays out the case that the domestication process is often wrongly thought of as initiated by humans—with people capturing and selectively breeding wild animals. But the study authors claim that the process begins much earlier, when animals become habituated to human environments.

"One thing about us humans is that, wherever we go, we produce a lot of trash," says the study's co-author and University of Arkansas at Little Rock biologist Raffaela Lesch. Piles of human scraps offer a bottomless buffet to wildlife, and to access that bounty, animals need to be bold enough to rummage through human rubbish but not so bold as to become a threat to people. "If you have an animal that lives close to humans, you have to be well-behaved enough," Lesch says. "That selection pressure is quite intense."

Proto-dogs, for example, would have dug through human trash heaps, and cats were attracted to the mice that gathered around refuse. Over time, individual animals that had a reduced fight-or-flight response could feed more successfully around humans and pass their nonreactive behavior on to their offspring.

Oddly, tameness has also long been associated with traits such as a shorter face, a smaller head, floppy ears and white patches on fur—a pattern that Charles Darwin noted in the 1800s. The occurrence of these characteristics is known as domestication syndrome, but scientists didn't have a comprehensive theory to explain how the traits were connected until 2014. That's when a team of evolutionary biologists noticed that many of the physical traits that co-occur with domestication trace back to an important group of cells during embryonic development called neural crest cells. In early development, these form along an organism's back and migrate to different parts of the body, where they become important for the development of different types of cells. The biologists hypothesized that mutations that hamper the proliferation and development of neural crest cells could later result in a shorter muzzle, a lack of cartilage in the ears, a loss of pigmentation in the coat and a dampened fear response—leading to a better chance of survival in proximity to humans.

Lesch says the neural crest cells are the most salient hypothesis scientists have to explain domestication syndrome right now, but they are still gathering and evaluating evidence for or against it. One piece of the puzzle would be seeing if domestication syndrome was observable in real time with wild animals. For the new study, she and 16 graduate and undergraduate students gathered nearly 20,000 photographs of raccoons across the contiguous U.S. from the community science platform iNaturalist. The team found that raccoons in urban environments had a snout that was 3.5 percent shorter than that of their rural cousins.

Journal Reference: Apostolov, A., Bradley, A., Dreher, S. et al. Tracking domestication signals across populations of North American raccoons (Procyon lotor) via citizen science-driven image repositories. Front Zool 22, 28 (2025). https://doi.org/10.1186/s12983-025-00583-1


Original Submission