National Geographic published an interesting article about renewable energy myths.
Still, myths about renewable energy are commonplace, says Andy Fitch, an attorney at Columbia Law School's Sabin Center for Climate Change Law who coauthored a report rebutting dozens of misconceptions. This misinformation, and in some cases, purposeful disinformation, may lead people to oppose renewable projects in their communities. Support for wind farms off New Jersey, for example, dropped more than 20 percent in less than five years after misleading and false claims began circulating.
"It's easy to prick holes into the idea of an energy transition," because it is a new concept to many people, Fitch says.
Myth #1: Renewable energy is unreliable.
There will always be days when clouds cover the sun or the wind is still. But those conditions are unlikely to occur at the same time in all geographic areas. "There's always a way to coordinate the energy mix" to keep the lights on, Fitch says. Today that coordination generally includes electricity from fossil fuels such as natural gas and coal. In California, where more than half the state's power now comes from solar, wind, and other renewables, natural gas and other non-renewables generate the rest.
Improvements in storage technology will also increasingly allow renewable energy to be captured during sunny or windy days. Already, some 10 percent of California's solar-powered energy is saved for evening use.
Myth #2: Rooftop solar is super pricey.
Back in 1980, solar panels cost a whopping $35 (in today's dollars) per watt of generated energy. By 2024 that figure had fallen to 26 cents. Solar has become so cost-efficient that building and operating the technology is now cheaper over its lifespan than conventional forms of energy like gas, coal, and nuclear power. Homeowners also save a significant amount of money after rooftop solar is installed, according to the U.S. Department of Energy. (The method remains cost effective, even after federal subsidies to purchase the panels ceased late last year.) A family that finances panels might save close to a thousand dollars a year on their electric bills, even taking into account payments on the loan.
Myth #3: Wind power inevitably kills wildlife.
With hundreds of thousands of turbines in operation, wind power now makes up eight percent of the world's energy. But alongside these sprouting modern windmills have come stories of birds, whales, and even insects and bats killed or injured in their presence. In some cases wind energy does cause wildlife deaths, but those deaths "pale in comparison to what climate change is doing to [the animals'] habitat," says Douglas Nowacek, a conservation technology expert at Duke University. "If we're going to slow down these negative changes, we have to go to renewable energy."
When it comes to whales or other marine mammals, "we have no evidence—zero" that any offshore wind development has killed them, says Nowacek, who studies this as lead researcher in the school's Wildlife and Offshore Wind program. (Most die instead from ship strikes and deadly entanglements in commercial fishing gear.)
Myth #4: Electric cars can't go far without recharging.
Electric vehicles are an important element of the transition to renewable energy because, unlike gas-powered cars, they can be charged by solar and wind energy. EVs are also more energy efficient, since they use nearly all of their power for driving, compared with traditional cars' use of just 25 percent. (Most of the rest is lost as heat.) Concerns that EVs can't make it to their destination likely spring from early prototypes: cars developed in the 1970s got less than 40 miles per charge. Today, some 50 models can go more than 300 miles, with some topping 500.
Worries about the longevity of EV batteries are also unfounded. Only one percent of batteries manufactured since 2015 have had to be replaced (outside of manufacturing recalls, which have been negligible in recent years). Studies done by Tesla found the charging capacity in its sedans dropped just 15 percent [PDF] after 200,000 miles.
Myth #5: Renewables are on track to solve the climate crisis.
The world is in a better place than it would be without renewables. Before the 2015 Paris Agreement called for this energy transition, experts had forecast 4°C of planetary warming by 2100; now they expect it to stay under 3°C, according to a recent report by World Weather Attribution, a climate research group. But even this target "would still lead to a dangerously hot planet," the report states. Last summer Hawaiian observatories documented carbon dioxide concentrations above 430 parts per million, a record high far above the 350 ppm Paris target. To sufficiently slow warming, experts say wind generation must more than quadruple its current pace by 2030, and solar and other renewables must also be more widely adopted. Yet while global investment in renewable energy rose 10 percent in the first half of last year, it fell by more than a third in the U.S.
Bali is preparing to introduce a law that will require tourists to declare three months of personal bank account information in order to visit the island. The law is intended to filter out less desirable travellers and promote "high quality tourism," a move to counter the bad behaviour of boorish visitors over the last several decades. This change will come on top of the recently applied tourist levy and tighter management of incidents involving tourists.
Would you give your latest three bank statements to the Bali government in order to visit?
Microsoft has been gradually making it harder to activate Windows (and other products) without an internet connection. Most recently, it started clamping down on local accounts that could bypass OOBE sign-in, and now we're seeing reports that another legacy method has been retired. Phone activation, where you could call Microsoft to activate Windows & Office, no longer works, as Ben Kleinberg demonstrates in a new YouTube video.
Now, it'd be reasonable to assume that something as archaic as calling to activate your license had probably been sunset long ago. However, you'll be surprised to learn that Microsoft still lists it as a viable method in its support docs. This is particularly important for people on older operating systems like Windows 7, who expect an offline alternative to Microsoft's now-online-only activation systems.
This ordeal was necessary because Ben was using an OEM key that could not be activated directly within Windows 7, as the activation servers for that version are effectively dead. The video shows that calling the listed number plays an automated message saying "support for product activation has moved online."
After the call, he also received a text message containing a link to the modern Microsoft Product Activation Portal we know today. Upon visiting the site, Ben was required to sign in with a Microsoft account, which immediately defeated the purpose of activating by phone.
Initially, he couldn't get the confirmation ID on his iPhone using Firefox, but switching to Safari on his laptop resolved the issue. This wasn't a device-specific problem, just a browser-related hiccup. Eventually, Ben acquired the numbers he needed, and both his copy of Windows 7 and Office 2010 were successfully activated.
The video concludes on a bittersweet note, highlighting that call activation is effectively dead. However, users can still access the portal on a computer or phone to complete the process. Ironically, the entire reason for calling Microsoft in the first place was that Ben couldn't activate Windows 7 from within the OS, but now that a website exists, there's no need to call anyway.
Unfortunately, a Microsoft account is required, something Ben complained about and a concern many users share, even in the latest Windows 11 builds today.
The Economic Times published a hilarious article about a mathematician's opinion of AI for solving math problems:
Renowned mathematician Joel David Hamkins, a professor of logic at the University of Notre Dame, has expressed strong doubts about large language models' utility in mathematical research, calling their outputs "garbage" and "mathematically incorrect". He shared his unvarnished assessment during a recent appearance on the Lex Fridman podcast, saying the models give "garbage answers that are not mathematically correct", reports TOI.
Joel David Hamkins is a mathematician and philosopher who undertakes research on the mathematics and philosophy of the infinite. He earned his PhD in mathematics from the University of California at Berkeley and comes to Notre Dame from the University of Oxford, where he was Professor of Logic in the Faculty of Philosophy and the Sir Peter Strawson Fellow of Philosophy at University College, Oxford. Prior to that, he held longstanding positions in mathematics, philosophy, and computer science at the City University of New York.
"I guess I would draw a distinction between what we have currently and what might come in future years," Hamkins began, acknowledging the possibility of future progress. "I've played around with it and I've tried experimenting, but I haven't found it helpful at all. Basically zero. It's not helpful to me. And I've used various systems and so on, the paid models and so on."
Firing a salvo, Hamkins made clear that his frustration with current AI systems persists despite his experiments across a range of models.
According to Hamkins, AI's tendency to be confidently wrong mirrors some of the most frustrating human interactions. What concerns him even more than the occasional mathematical error is how AI systems respond when those errors are highlighted. When he points out clear flaws in their reasoning, the models often reply with breezy reassurances such as, "Oh, it's totally fine." This combination of confidence, incorrectness, and resistance to correction threatens the collaborative trust that meaningful mathematical dialogue requires.
"If I were having such an experience with a person, I would simply refuse to talk to that person again," Hamkins said, noting that the AI's behaviour resembles unproductive human interactions he would actively avoid. He believes when it comes to genuine mathematical reasoning, today's AI systems remain unreliable.
Despite these issues, Hamkins recognizes that current limitations may not be permanent. "One has to overlook these kind of flaws and so I tend to be a kind of skeptic about the value of the current AI systems. As far as mathematical reasoning is concerned, it seems not reliable."
His criticism comes amid mixed reactions within the mathematical community about AI's growing role in research. While some scholars report progress using AI to explore problems from the Erdős collection, others have urged caution. Mathematician Terence Tao, for example, has warned that AI can generate proofs that appear flawless but contain subtle errors no human referee would accept. At the heart of the debate is a persistent gap: strong performance on benchmarks and standardized tests does not necessarily translate into real-world usefulness for domain experts.
Californians can now submit demands requiring 500 brokers to delete their data:
Californians are getting a new, supercharged way to stop data brokers from hoarding and selling their personal information, as a recently enacted law that's among the strictest in the nation took effect at the beginning of the year.
According to the California Privacy Protection Agency, more than 500 companies actively scour all sorts of sources for scraps of information about individuals, then package and store it to sell to marketers, private investigators, and others.
The nonprofit Consumer Watchdog said in 2024 that brokers trawl automakers, tech companies, junk-food restaurants, device makers, and others for financial info, purchases, family situations, eating, exercising, travel, entertainment habits, and just about any other imaginable information belonging to millions of people.
Two years ago, California's Delete Act took effect. It required data brokers to provide residents with a means to obtain a copy of all data pertaining to them and to demand that such information be deleted. Unfortunately, Consumer Watchdog found that only 1 percent of Californians exercised these rights in the first 12 months after the law went into effect. A chief reason: Residents were required to file a separate demand with each broker. With hundreds of companies selling data, the burden was too onerous for most residents to take on.
On January 1, a new law known as DROP (Delete Request and Opt-out Platform) took effect. DROP allows California residents to register a single demand for their data to be deleted and no longer collected in the future. CalPrivacy then forwards it to all brokers.
Starting in August, brokers will have 45 days after receiving the notice to report the status of each deletion request. If any of the brokers' records match the information in the demand, all associated data—including inferences—must be deleted unless legal exemptions such as information provided during one-to-one interactions between the individual and the broker apply. To use DROP, individuals must first prove they're a California resident.
I used the DROP website and found the flow flawless and the interface intuitive. After I provided proof of residency, the site prompted me to enter personal information such as any names and email addresses I use, along with specific identifiers such as VINs (vehicle identification numbers) and advertising IDs from phones, TVs, and other devices. Completing the form took about 15 minutes, but most of that time was spent pulling the data from disparate locations, many buried in system settings.
It initially felt counterintuitive to provide such a wealth of personal information to ensure that data is no longer tracked. As I thought about it more, I realized that all that data is already compromised as it sits in online databases, which are often easily hacked and, of course, readily available for sale. What's more, CalPrivacy promises to use the data solely for data deletion. Under the circumstances, enrolling was a no-brainer.
It's unfortunate that the law is binding only in California. As the scourge of data-broker information hoarding and hacks on their databases continues, it would not be surprising to see other states follow California's lead.
Now if we could just make this a model for laws in the other US fiefdoms (i.e. states).
https://scitechdaily.com/scientists-found-a-way-to-help-the-brain-bounce-back-from-alzheimers/
For more than a hundred years, Alzheimer's disease (AD) has been regarded as a condition that cannot be undone. Because of this assumption, most scientific efforts have focused on stopping the disease before it starts or slowing its progression, rather than attempting to restore lost brain function. Despite decades of research and billions of dollars invested, no drug trial for Alzheimer's has ever been designed with the explicit goal of reversing the disease and restoring normal brain performance.
That long-standing belief is now being directly tested by researchers from University Hospitals, Case Western Reserve University, and the Louis Stokes Cleveland VA Medical Center. Their work asked a fundamental question that had rarely been explored: Can brains already damaged by advanced Alzheimer's recover?
The study was led by Kalyani Chaubey, PhD, of the Pieper Laboratory and was published on December 22 in Cell Reports Medicine. By analyzing multiple preclinical mouse models alongside brain tissue from people with Alzheimer's, the researchers identified a critical biological problem underlying the disease. They found that Alzheimer's is strongly driven by the brain's failure to maintain normal levels of a key cellular energy molecule called NAD+. Just as important, they showed that keeping NAD+ levels in balance can both prevent the disease and, under certain conditions, reverse it.
NAD+ naturally declines throughout the body as people age, including in the brain. When this balance is disrupted, cells gradually lose the ability to carry out essential processes needed for normal function and survival. The team found that this loss of NAD+ is far more pronounced in the brains of people with Alzheimer's. The same severe decline was also observed in mouse models of the disease.
[...] Amyloid buildup and tau abnormalities are among the earliest and most important features of Alzheimer's. In both mouse models, these mutations led to extensive brain damage that closely resembles the human condition. This included breakdown of the blood-brain barrier, damage to nerve fibers, chronic inflammation, reduced formation of new neurons in the hippocampus, weakened communication between brain cells, and widespread oxidative damage. The mice also developed severe memory and thinking problems similar to those experienced by people with Alzheimer's.
After confirming that NAD+ levels drop sharply in both human and mouse Alzheimer's brains, the researchers explored two different strategies. They tested whether preserving NAD+ balance before symptoms appear could prevent Alzheimer's, and whether restoring NAD+ balance after the disease was already well established could reverse it.
[...] The results exceeded expectations. Maintaining healthy NAD+ levels prevented mice from developing Alzheimer's, but even more striking outcomes were seen when treatment began later. In mice with advanced disease, restoring NAD+ balance allowed the brain to repair major pathological damage caused by the genetic mutations.
Both mouse models showed complete recovery of cognitive function. This recovery was supported by blood tests showing normalized levels of phosphorylated tau 217, a recently approved clinical biomarker of Alzheimer's in people. These findings provided strong evidence that the disease process had been reversed and highlighted a potential biomarker for future clinical trials.
"We were very excited and encouraged by our results," said Andrew A. Pieper, MD, PhD, senior author of the study and Director of the Brain Health Medicines Center, Harrington Discovery Institute at UH. "Restoring the brain's energy balance achieved pathological and functional recovery in both lines of mice with advanced Alzheimer's. Seeing this effect in two very different animal models, each driven by different genetic causes, strengthens the idea that restoring the brain's NAD+ balance might help patients recover from Alzheimer's."
The findings suggest a major change in how Alzheimer's could be approached in the future. "The key takeaway is a message of hope – the effects of Alzheimer's disease may not be inevitably permanent," said Dr. Pieper. "The damaged brain can, under some conditions, repair itself and regain function."
[...] Dr. Pieper cautioned that this strategy should not be confused with over-the-counter NAD+-precursors. Studies in animals have shown that such supplements can raise NAD+ to dangerously high levels that promote cancer. The approach used in this research relies instead on P7C3-A20, which helps cells maintain a healthy NAD+ balance during extreme stress without pushing levels beyond their normal range.
Journal Reference: “Pharmacologic reversal of advanced Alzheimer’s disease in mice and identification of potential therapeutic nodes in human brain” by Kalyani Chaubey, Edwin Vázquez-Rosa, Sunil Jamuna Tripathi, et al., 22 December 2025, Cell Reports Medicine. DOI: 10.1016/j.xcrm.2025.102535
https://www.theregister.com/2026/01/04/ai_agents_insider_threats_panw/
AI agents represent the new insider threat to companies in 2026, according to Palo Alto Networks Chief Security Intel Officer Wendi Whitmore, and this poses several challenges to executives tasked with securing the expected surge in autonomous agents.
"The CISO and security teams find themselves under a lot of pressure to deploy new technology as quickly as possible, and that creates this massive amount of pressure - and massive workload - that the teams are under to quickly go through procurement processes, security checks, and understand if the new AI applications are secure enough for the use cases that these organizations have," Whitmore told The Register.
"And that's created this concept of the AI agent itself becoming the new insider threat," she added.
According to Gartner's estimates, 40 percent of all enterprise applications will integrate with task-specific AI agents by the end of 2026, up from less than 5 percent in 2025. This surge presents a double-edged sword, Whitmore said in an interview and predictions report.
On one hand, AI agents can help fill the ongoing cyber-skills gap that has plagued security teams for years, doing things like correcting buggy code, automating log scans and alert triage, and rapidly blocking security threats.
"When we look through the defender lens, a lot of what the agentic capabilities allow us to do is start thinking more strategically about how we defend our networks, versus always being caught in this reactive situation," Whitmore said.
[...] One of the risks stems from the "superuser problem," Whitmore explained. This occurs when the autonomous agents are granted broad permissions, creating a "superuser" that can chain together access to sensitive applications and resources without security teams' knowledge or approval.
"It becomes equally as important for us to make sure that we are only deploying the least amount of privileges needed to get a job done, just like we would do for humans," Whitmore said.
"The second area is one we haven't seen in investigations yet," she continued. "But while we're on the predictions lens, I see this concept of a doppelganger."
This involves using task-specific AI agents to approve transactions or review and sign off on contracts that would otherwise require C-suite level manual approvals.
[...] By using a "single, well-crafted prompt injection or by exploiting a 'tool misuse' vulnerability," adversaries now "have an autonomous insider at their command, one that can silently execute trades, delete backups, or pivot to exfiltrate the entire customer database," according to Palo Alto Networks' 2026 predictions.
This also illustrates the ongoing threat of prompt-injection attacks. This year, researchers have repeatedly shown prompt injection attacks to be a real problem, with no fix in sight.
"It's probably going to get a lot worse before it gets better," Whitmore said, referring to prompt-injection. "Meaning, I just don't think we have these systems locked down enough."
[...] "Historically, when an attacker gets initial access into an environment, they want to move laterally to a domain controller," Whitmore said. "They want to dump Active Directory credentials, they want to elevate privileges. We don't see that as much now. What we're seeing is them get access into an environment immediately, go straight to the internal LLM, and start querying the model for questions and answers, and then having it do all of the work on their behalf."
Whitmore, along with just about every other cyber exec The Register has spoken with over the past couple of months, pointed to the "Anthropic attack" as an example.
She's referring to the September digital break-ins at multiple high-profile companies and government organizations later documented by Anthropic. Chinese cyberspies used the company's Claude Code AI tool to automate intel-gathering attacks, and in some cases they succeeded.
While Whitmore doesn't anticipate AI agents to carry out any fully autonomous attacks this year, she does expect AI to be a force multiplier for network intruders. "You're going to see these really small teams almost have the capability of big armies," she said. "They can now leverage AI capabilities to do so much more of the work that previously they would have had to have a much larger team to execute against."
Whitmore likens the current AI boom to the cloud migration that happened two decades ago. "The biggest breaches that happened in cloud environments weren't because they were using the cloud, but because they were targeting insecure deployments of cloud configurations," she said. "We're really seeing a lot of identical indicators when it comes to AI adoption."
For CISOs, this means establishing best practices when it comes to AI identities and provisioning agents and other AI-based systems with access controls that limit them to only data and applications that are needed to perform their specific tasks.
"We need to provision them with least-possible access and have controls set up so that we can quickly detect if an agent does go rogue," Whitmore said.
OpenAI is betting big on audio AI, and it's not just about making ChatGPT sound better. According to new reporting from The Information, the company has unified several engineering, product, and research teams over the past two months to overhaul its audio models, all in preparation for an audio-first personal device expected to launch in about a year.
The move reflects where the entire tech industry is headed — toward a future where screens become background noise and audio takes center stage. Smart speakers have already made voice assistants a fixture in more than a third of U.S. homes. Meta just rolled out a feature for its Ray-Ban smart glasses that uses a five-microphone array to help you hear conversations in noisy rooms — essentially turning your face into a directional listening device. Google, meanwhile, began experimenting in June with "Audio Overviews" that transform search results into conversational summaries. And Tesla is integrating Grok and other LLMs into its vehicles to create conversational voice assistants that can handle everything from navigation to climate control through natural dialogue.
It's not just the tech giants placing this bet. A motley crew of startups has emerged with the same conviction, albeit with varying degrees of success. The makers of the Humane AI Pin burned through hundreds of millions before their screenless wearable became a cautionary tale. The Friend AI pendant, a necklace that records your life and offers companionship, has sparked privacy concerns and existential dread in equal measure. And now at least two companies, including Sandbar and one helmed by Pebble founder Eric Migicovsky, are building AI rings expected to debut in 2026, allowing wearers to literally talk to the hand.
The form factors may differ, but the thesis is the same: audio is the interface of the future. Every space — your home, your car, even your face — is becoming an interface.
OpenAI's new audio model, slated for early 2026, will reportedly sound more natural, handle interruptions like an actual conversation partner, and even speak while you're talking, which is something today's models can't manage. The company is also said to envision a family of devices, possibly including glasses or screenless smart speakers, that act less like tools and more like companions.
As The Information notes, former Apple design chief Jony Ive, who joined OpenAI's hardware efforts through the company's $6.5 billion acquisition in May of his firm io, has made reducing device addiction a priority, seeing audio-first design as a chance to "right the wrongs" of past consumer gadgets.
Strong subsidies keep Tesla on top in Norway:
Last year, 95.5 percent of all newly registered vehicles in Norway were electric. While consumers in Europe and other markets are pivoting away from Tesla and toward hybrid vehicles, the Scandinavian country is staying firmly on course toward full EV adoption.
The Norwegian Road Federation reported that 95.9 percent of new cars registered in November were electric, a figure that climbed to 98 percent in December. These numbers represent a sharp increase from late 2024, when Norway became the first country where electric vehicles outnumbered petrol-powered cars on the road. In 2025, most newly registered gasoline-powered vehicles were hybrids, sports cars, or models used by first responders.
Tesla remains Norway's most popular automotive brand by a wide margin, increasing its market share slightly last year to 19.1 percent. This stands in stark contrast to trends in the US, China, and much of Europe, where Tesla sales have declined amid the rollback of EV incentives and growing public backlash against CEO Elon Musk's political views. The company was also named America's least reliable car brand last year, coinciding with a nine percent drop in global sales.
Signs of weakening confidence in EVs are particularly visible in the United States. Ford discontinued the all-electric F-150 Lightning last year in favor of hybrid models. In Europe, policymakers recently abandoned plans to ban new gasoline car sales by 2035.
Despite gradually increasing taxes on EVs, Norway continues to offer comparatively strong incentives, while duties on petrol-powered cars are also rising. Electric vehicles priced below roughly $30,000 remain exempt from value-added tax, and buyers rushed to make purchases ahead of January 1, when an additional $5,000 in VAT took effect on more expensive EVs.
Chinese automaker BYD also made notable gains in Norway last year, though it remains far behind Tesla. Its market share increased from 2.1 to 3.3 percent, with sales more than doubling over the period.
Globally, BYD has overtaken Tesla as the world's leading EV seller, posting a sales increase of over 28 percent in 2025. The rapid pace at which BYD and other Chinese automakers have brought vehicles from concept to assembly has forced Western manufacturers to rethink their production workflows and accelerate development timelines.
https://scitechdaily.com/scientists-create-a-periodic-table-for-artificial-intelligence/
Artificial intelligence is increasingly relied on to combine and interpret different kinds of data, including text, images, audio, and video. One obstacle that continues to slow progress in multimodal AI is deciding which algorithmic approach best fits the specific task an AI system is meant to solve.
Researchers have now introduced a unified way to organize and guide that decision process. Physicists at Emory University developed a new framework that brings structure to how algorithms for multimodal AI are derived, and their work was published in The Journal of Machine Learning Research.
"We found that many of today's most successful AI methods boil down to a single, simple idea — compress multiple kinds of data just enough to keep the pieces that truly predict what you need," says Ilya Nemenman, Emory professor of physics and senior author of the paper. "This gives us a kind of 'periodic table' of AI methods. Different methods fall into different cells, based on which information a method's loss function retains or discards."
A loss function is the mathematical rule an AI system uses to evaluate how wrong its predictions are. During training, the model continually adjusts its internal parameters in order to reduce this error, using the loss function as a guide.
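To make that idea concrete, here is a minimal sketch (my illustration, not code from the paper): fitting a one-parameter linear model by gradient descent, where the mean-squared-error loss function is the rule that scores how wrong the model is and the gradient of that loss guides each parameter update.

```python
# Minimal sketch: a loss function scores how wrong predictions are,
# and training adjusts a parameter to reduce that score.

def mse_loss(w, data):
    """Mean squared error of the linear model y = w * x over (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, lr=0.01, steps=200):
    """Gradient descent on mse_loss, starting from w = 0."""
    w = 0.0
    for _ in range(steps):
        # Analytic gradient of the MSE with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # step downhill on the loss surface
    return w

# Points generated by y = 3x; training should recover w close to 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(data)
```

Real multimodal systems use the same loop with millions of parameters; the framework described here concerns how the loss function itself is chosen.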
"People have devised hundreds of different loss functions for multimodal AI systems and some may be better than others, depending on context," Nemenman says. "We wondered if there was a simpler way than starting from scratch each time you confront a problem in multimodal AI."
To address this, the team developed a mathematical framework that links the design of loss functions directly to decisions about which information should be preserved and which can be ignored. They call this approach the Variational Multivariate Information Bottleneck Framework.
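The framework itself is mathematical, but the general information-bottleneck recipe it builds on can be sketched with a toy loss. Everything below is an illustrative assumption, not the authors' implementation: an "encoder" maps an input to a Gaussian code, and a trade-off parameter `beta` weighs a compression term against a prediction term.

```python
import numpy as np

# Toy information-bottleneck-style loss (illustrative sketch only).
# The loss trades off:
#   - prediction error: how badly the code z reconstructs the target y
#   - compression cost: KL divergence from the code to a N(0, 1) prior
# beta is the "control knob": higher beta forces heavier compression,
# i.e. discarding more information about the input.

def kl_to_standard_normal(mu, sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) )
    return 0.5 * (mu**2 + sigma**2 - 1.0 - 2.0 * np.log(sigma))

def bottleneck_loss(mu, sigma, y, beta, rng):
    z = mu + sigma * rng.standard_normal()     # sample the stochastic code
    prediction_error = (z - y) ** 2            # decoder is identity here
    compression_cost = kl_to_standard_normal(mu, sigma)
    return prediction_error + beta * compression_cost

rng = np.random.default_rng(0)
# Same encoder, same target; only the knob changes. A large beta
# penalizes keeping an informative (non-prior-like) code.
loose = bottleneck_loss(mu=1.5, sigma=0.1, y=1.5, beta=0.01, rng=rng)
tight = bottleneck_loss(mu=1.5, sigma=0.1, y=1.5, beta=10.0, rng=rng)
print(loose < tight)
```

In this caricature, different settings of `beta` (and different choices of which variables appear in the prediction and compression terms) correspond to different cells of the "periodic table" the researchers describe.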
"Our framework is essentially like a control knob," says co-author Michael Martini, who worked on the project as an Emory postdoctoral fellow and research scientist in Nemenman's group. "You can 'dial the knob' to determine the information to retain to solve a particular problem."
"Our approach is a generalized, principled one," adds Eslam Abdelaleem, first author of the paper. Abdelaleem took on the project as an Emory PhD candidate in physics before graduating in May and joining Georgia Tech as a postdoctoral fellow.
"Our goal is to help people to design AI models that are tailored to the problem that they are trying to solve," he says, "while also allowing them to understand how and why each part of the model is working."
AI-system developers can use the framework to propose new algorithms, to predict which ones might work, to estimate the data needed for a particular multimodal algorithm, and to anticipate when it might fail.
"Just as important," Nemenman says, "it may let us design new AI methods that are more accurate, efficient and trustworthy."
The researchers brought a unique perspective to the problem of optimizing the design process for multimodal AI systems.
"The machine-learning community is focused on achieving accuracy in a system without necessarily understanding why a system is working," Abdelaleem explains. "As physicists, however, we want to understand how and why something works. So, we focused on finding fundamental, unifying principals to connect different AI methods together."
Abdelaleem and Martini began this quest — to distill the complexity of various AI methods to their essence — by doing math by hand.
"We spent a lot of time sitting in my office, writing on a whiteboard," Martini says. "Sometimes I'd be writing on a sheet of paper with Eslam looking over my shoulder."
The process took years: first they worked on mathematical foundations, discussed them with Nemenman, and tried out equations on a computer, then repeated these steps after running down false trails.
"It was a lot of trial and error and going back to the whiteboard," Martini says.
They vividly recall the day of their eureka moment.
They had come up with a unifying principle that described a tradeoff between compression of data and reconstruction of data. "We tried our model on two test datasets and showed that it was automatically discovering shared, important features between them," Martini says. "That felt good."
As Abdelaleem was leaving campus after the exhausting, yet exhilarating, final push leading to the breakthrough, he happened to look at his Samsung Galaxy smart watch. It uses an AI system to track and interpret health data, such as his heart rate. The AI, however, had misinterpreted the meaning of his racing heart throughout that day.
"My watch said that I had been cycling for three hours," Abdelaleem says. "That's how it interpreted the level of excitement I was feeling. I thought, 'Wow, that's really something! Apparently, science can have that effect."
The researchers applied their framework to dozens of AI methods to test its efficacy.
"We performed computer demonstrations that show that our general framework works well with test problems on benchmark datasets," Nemenman says. "We can more easily derive loss functions, which may solve the problems one cares about with smaller amounts of training data."
The framework also holds the potential to reduce the amount of computational power needed to run an AI system.
"By helping guide the best AI approach, the framework helps avoid encoding features that are not important," Nemenman says. "The less data required for a system, the less computational power required to run it, making it less environmentally harmful. That may also open the door to frontier experiments for problems that we cannot solve now because there is not enough existing data."
The researchers hope others will use the generalized framework to tailor new algorithms specific to scientific questions they want to explore.
Meanwhile, they are building on their work to explore the potential of the new framework. They are particularly interested in how the tool may help to detect patterns of biology, leading to insights into processes such as cognitive function.
"I want to understand how your brain simultaneously compresses and processes multiple sources of information," Abdelaleem says. "Can we develop a method that allows us to see the similarities between a machine-learning model and the human brain? That may help us to better understand both systems."
Reference: “Deep Variational Multivariate Information Bottleneck – A Framework for Variational Losses” by Eslam Abdelaleem, Ilya Nemenman and K. Michael Martini Jr., 2 September 2025, arXiv.
DOI: 10.48550/arXiv.2310.03311
Head-up displays or HUDs were meant to simplify driving, but Ford may have found a very unconventional way to rethink them:
Head-Up Displays (HUDs) were invented to help drivers keep their eyes on the road with nothing obstructing their view. First used in fighter jets, HUDs project all the relevant information onto the windscreen where the driver can see it. They show speed, vehicle condition and, in some cases, even map settings.
But it appears that Ford wants to take HUD tech to another level. Alongside an adjustable HUD from last year, the Blue Oval has invented another one – and it's as ridiculous as it looks.
Ford has just filed a patent for its own take on the HUD, specifically patent no. 20260001397, filed with the United States Patent and Trademark Office (USPTO) on January 1, 2026.
Looking through the patent, Ford's new version of the HUD will be implemented through the sun visor of the vehicle, meaning drivers will be able to deploy the HUD by lowering or attaching it directly...in their line of sight.
The patent filing itself says that the whole idea of this visor HUD is "to eliminate the need to project images onto the windshield and/or to have a projector located on the vehicle dashboard. In certain embodiments, the head-up display visor is portable and is affixed to the driver's conventional sun visor by a clip, thereby allowing a driver to move the device from one vehicle to another."
[...] Another benefit of their project is the visor HUD's portability for owners with multiple vehicles. It must be noted, though, that while a patent has been filed, this is merely Ford's way of protecting its own intellectual property. There is no guarantee that this novel idea will ever make it into production.
TFA includes a few illustrations.
The BBC has an interesting report on a French university training spies:
University professor Xavier Crettiez admits that he doesn't know the real names of many of the students on his course.
This is a highly unusual state of affairs in the world of academia, but Prof Crettiez's work is far from standard.
Instead, he helps train France's spies.
"I rarely know the intelligence agents' backgrounds when they are sent on the course, and I doubt the names I'm given are genuine anyway," he says.
If you wanted to create a setting for a spy school, then the campus of Sciences Po Saint-Germain on the outskirts of Paris seems a good fit.
With dour, even gloomy-looking, early 20th Century buildings surrounded by busy, drab roads and large, intimidating metal gates, it has a very discreet feel.
Where it does stand out is its unique diploma, which brings together more typical students in their early 20s and active members of the French secret services, usually between the ages of 35 and 50.
The course is called Diplôme sur le Renseignement et les Menaces Globales, which translates as Diploma of Intelligence and Global Threats.
It was developed by the university in association with the Academie du Renseignement, the training arm of the French secret services.
This came following a request from French authorities a decade ago. After the 2015 terrorist attacks in Paris, the government went on a large recruitment drive within the French intelligence agencies.
It asked Sciences Po, one of France's leading universities, to come up with a new course to both train potential new spies, and provide continuous training for current agents.
Large French companies were also quick to show an interest, both in getting their security staff onto the course, and snapping up many of the younger graduates.
The diploma is made up of 120 hours of classwork with modules spread over four months. For external students – the spies and those on placement from businesses – it costs around €5,000 ($5,900; £4,400).
The core aim of the course is to identify threats wherever they are, and to learn how to track and overcome them. The key topics include the economics of organized crime, Islamic jihadism, business intelligence gathering and political violence.
To attend one of the classes and speak to the students I had to be vetted first by the French security services. The theme of the lesson I joined was "intelligence and over-reliance on technology".
One of the students I speak to is a man in his 40s who goes by the name Roger. He tells me in very precise, clipped English that he is an investment banker. He adds: "I provide consultancy across west Africa, and I joined the course to provide risk assessments to my clients there."
Prof Crettiez, who teaches political radicalisation, says there has been a huge expansion in the French secret services in recent years, and that there are now around 20,000 agents in what he calls the "inner circle".
This is made up of the DGSE, which looks at matters overseas, and is the French equivalent of the UK's MI6 or the US's CIA. And the DGSI, which focuses on threats within France, like the UK's MI5 or the US's FBI.
But he says it's not just about terrorism. "There are the two main security agencies, but also Tracfin, an intelligence agency which specializes in money laundering.
"It is preoccupied with the surge in mafia activity, especially in southern France, including corruption in the public and private sectors mainly due to massive profits in illegal drug trafficking."
Other lecturers on the course include a DGSE official once located in Moscow, a former French ambassador to Libya, and a senior official from Tracfin. The head of security at the French energy giant EDF also runs one module.
Twenty-eight students are enrolled in this year's class. Six are spies. You can tell who they are, as they are the ones huddled together during class breaks, away from the young students, and not too overwhelmed with joy when I approach them.
Without revealing their exact roles, and with arms crossed, one says the course is considered a fast-track stepping stone for a promotion from the office to field work. Another says he gets fresh ideas from being in this academic environment. They signed the day's attendance form with just their first names.
Nearly half of the students in the class are in fact women. And this is a relatively recent development according to one of the lecturers, Sebastien-Yves Laurent, a specialist on technology in spying.
"Women's interest in intelligence gathering is new," he says. "They are interested because they think it will provide for a better world.
"And if there is one common thread amongst all these young students it's that they are very patriotic and that is new compared to 20 years ago.
If you are keen to get a place on the course, French citizenship is an essential requirement, although some dual citizens are accepted.
Yet Prof Crettiez says he has to be wary. "I regularly get applications from very attractive Israeli and Russian women with amazing CVs. Unsurprisingly they are binned immediately."
In a recent group photo of the class you can immediately tell who the spies are - they had their backs to the camera.
While all the students and professional spies I met are trim and athletic, Prof Crettiez is also keen to dispel the myth of James Bond-like adventure.
"Few new recruits will end up in the field," he says. "Most French intelligence agencies jobs are desk bound."
The videos from the 39C3 are all in place, and Cory Doctorow's fast-paced talk, A post-American, enshittification-resistant Internet, is among them.
That talk is worth special mention. Don't be put off by the gratuitous cursing or the CCC's misspelling of the name Internet. And because it's often easier, and always faster, to just read text than slog through a video, Cory has also posted a transcript of his presentation:
We won that skirmish, but friends, I have bad news, news that will not surprise you. Despite wins like that one, we have been losing the war on the general purpose computer for the past 25 years.
Which is why I've come to Hamburg today. Because, after decades of throwing myself against a locked door, the door that leads to a new, good internet, one that delivers both the technological self-determination of the old, good [I]nternet, and the ease of use of Web 2.0 that let our normie friends join the party, that door has been unlocked.
Today, it is open a crack. It's open a crack!
His presentation is good all the way through, even to the final Q & A.
Basically, the gist is that 1) the US dollar is no longer a (semi-)neutral platform, and 2) the threat of withdrawing financial support has already been played and can no longer be used as leverage. Countries are now forced to work around both points, which is inconvenient and expensive, but as a result they have been freed from similar future threats and have thus regained a measure of independence as far as software laws go. Because the economic retaliation has already occurred, nations can more or less safely undo the anti-circumvention laws forced down their throats by "free" trade "agreements". The first country to do so will be able to take a very big bite out of the trillions of dollars (or euros) which Apple and the others currently collect.
What other 39C3 presentations have soylentils found interesting in a positive way?
Previously:
(2025) The 39th Chaos Communication Congress (39C3) Taking Place Now in Hamburg Through 30 Dec 2025
(2025) 38th Chaos Communication Congress (38C3) Presentations Online
(2017) 34th Chaos Communication Congress (34C3) Presentations Online
So far, every country in the world has had one of two responses to the Trump tariffs. The first one is: "Give Trump everything he asks for (except Greenland) and hope he stops being mad at you." This has been an absolute failure. Give Trump an inch, he'll take a mile. He'll take fucking Greenland. Capitulation is a failure.
But so is the other tactic: retaliatory tariffs. That's what we've done in Canada (like all the best Americans, I'm Canadian). Our top move has been to levy tariffs on the stuff we import from America, making the things we buy more expensive. That's a weird way to punish America! It's like punching yourself in the face as hard as you can, and hoping the downstairs neighbor says "Ouch!"
And it's indiscriminate. Why whack some poor farmer from a state that begins and ends with a vowel with tariffs on his soybeans? That guy never did anything bad to Canada.
But there's a third possible response to tariffs, one that's just sitting there, begging to be tried: what about repealing anticircumvention law?
If you're a technologist or an investor based in a country that's repealed its anticircumvention law, you can go into business making disenshittificatory products that plug into America's defective tech exports, allowing the people who own and use those products to use them in ways that are good for them, even if those uses make the company's shareholders mad.
Simple premise, interesting ramifications - I wonder what the course corrections will look like...
https://www.hzdr.de/db/Cms?pNid=99&pOid=76137
Proposed is a plan to act on gravitational waves, imparting additional energy to them using lasers: much like the LIGO interferometer, but in reverse. By transferring energy from light into a gravitational wave, we subtly change the wave itself and can measure the amount of energy taken from the light source. In this way we can measure properties of the gravitational wave, and perhaps of the graviton.
In an interferometer tailored to Schützhold's idea, it could be possible not only to observe gravitational waves but also to manipulate them for the first time by stimulated emission and absorption of gravitons. According to Schützhold, light pulses whose photons are entangled, that is, quantum mechanically coupled, could significantly increase the sensitivity of the interferometer further. "Then we could even draw inferences about the quantum state of the gravitational field itself," says Schützhold.
High-school level summary at link, with link to journal article contained within.
https://phys.org/news/2025-12-ice-home-food-scientist-easy.html
When you splurge on a cocktail in a bar, the drink often comes with a slab of aesthetically pleasing, perfectly clear ice. The stuff looks much fancier than the slightly cloudy ice you get from your home freezer. How do they do this?
Clear ice is actually made from regular water—what's different is the freezing process.
With a little help from science, you can make clear ice at home, and it's not even that tricky. However, there are quite a few hacks on the internet that won't work. Let's dive into the physics and chemistry involved.
Homemade ice is often cloudy because it has a myriad of tiny bubbles and other impurities. In a typical ice cube tray, as freezing begins and ice starts to form inward from all directions, it traps whatever is floating in the water: mostly air bubbles, dissolved minerals and gases.
These get pushed toward the center of the ice as freezing progresses and end up caught in the middle of the cube with nowhere else to go.
That's why, when making ice the usual way—just pouring water into a vessel and putting it in the freezer—it will always end up looking somewhat cloudy. Light scatters as it hits the finished ice cube, colliding with the concentrated core of trapped gases and minerals. This creates the cloudy appearance.
As well as looking nice, clear ice is denser and melts slower because it doesn't have those bubbles and impurities. This also means that it dilutes drinks more slowly than regular, cloudy ice.
Because it doesn't have impurities, the clear ice should also be free from any inadvertent flavors that could contaminate your drink.
Additionally, because it's less likely to crumble, clear ice can be easily cut and formed into different shapes to further dress up your cocktail.
If you've tried looking up how to make clear ice before, you've likely seen several suggestions. These include using distilled, boiled or filtered water, and a process called directional freezing. Here's the science on what works and what doesn't.
You might think that to get clear ice, you simply need to start out with really clean water. However, a recent study found this isn't the case.
- Using boiling water: Starting out with boiling water does mean the water will have fewer dissolved gases in it, but boiling doesn't remove all impurities. It also doesn't control the freezing process, so the ice will still become cloudy.
- Using distilled water: While distilling water removes more impurities than boiling, distilled water still freezes from the outside in, concentrating any remaining impurities or air bubbles in the center, again resulting in cloudy ice.
- Using filtered or tap water: Filtering the water or using tap water also doesn't stop the impurities from concentrating during the conventional freezing process.
As it turns out, it's not the water quality that guarantees clear ice. It's all about how you freeze it. The main technique for successfully making clear ice is called "directional freezing."
Directional freezing is simply the process of forcing water to freeze in a single direction instead of from all sides at once, like it does in a regular ice cube tray.
This way, the impurities and air will be forced to the opposite side from where the freezing starts, leaving the ice clear except for a small cloudy section.
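The solute-rejection mechanism described above can be illustrated with a toy one-dimensional model. This is a deliberate simplification: the column of cells, and the fraction of impurities rejected per freezing step (0.9), are arbitrary illustrative assumptions, not measured physics.

```python
# Toy 1D model of directional freezing: a column of water freezes from
# the top (cell 0) downward. Each time a cell freezes, it rejects most
# of its dissolved impurities into the liquid cell below it.
NUM_CELLS = 10
REJECTION = 0.9            # fraction of impurities pushed into the liquid

impurity = [1.0] * NUM_CELLS   # uniform impurity load before freezing

for i in range(NUM_CELLS - 1):
    rejected = impurity[i] * REJECTION
    impurity[i] -= rejected        # the newly frozen cell ends up nearly clear
    impurity[i + 1] += rejected    # the remaining liquid grows more concentrated

# Top cells: clear ice. Bottom cell: the cloudy section you pour off or cut away.
print([round(c, 2) for c in impurity])
```

Running this shows the top of the column ending up almost impurity-free while the final unfrozen cell holds most of the total load, which is exactly the small cloudy section the directional-freezing technique lets you discard.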
In practice, this means insulating the sides of the ice container so that the water freezes in one direction, typically from the top down. This is because heat transfer and phase transition from liquid to solid happens faster through the exposed top than the insulated sides.
The simplest way to have a go at directional freezing at home is to use an insulated container—you can use a really small cooler (that is, an "esky"), an insulated mug or even a commercially available insulated ice cube tray designed for making clear ice at home.
Fill the insulated container with water and place it in the freezer, then check on it periodically.
Once all the impurities and air bubbles are concentrated in a single cloudy area at the bottom, you can either pour away this water before it's fully frozen through, or let the block freeze solid and then cut off the cloudy portion with a large serrated knife, then cut the ice into cubes for your drinks.
If using a commercial clear ice tray, it will likely come with instructions on how to get rid of the cloudy portion so you can enjoy the sparkling clear ice.
This article is republished from The Conversation under a Creative Commons license. Read the original article.