I came across a very interesting social media post by John Carlos Baez about a paper published a few weeks ago that showed you can build a universal computation machine using a single billiard ball on a carefully crafted table. According to one of the paper's authors (Eva Miranda):
With Isaac Ramos, we show that 2D billiard systems are Turing complete, implying the existence of undecidable trajectories in physically natural models from hard-sphere gases to celestial mechanics.
Determinism ≠ predictability.
From Baez:
More precisely: you can create a computer that can run any program, using just a single point moving frictionlessly in a region of the plane and bouncing off the walls elastically.
Since the halting problem is undecidable, this means there are some yes-or-no questions about the eventual future behavior of this point that cannot be settled in a finite time by any computer program.
This is true even though the point's motion is computable to arbitrary accuracy for any given finite time. In fact, since the methodology here does *not* exploit the chaos that can occur for billiards on certain shaped tables, it's not even one of those cases where the point's motion is computable in principle but your knowledge of the initial conditions needs to be absurdly precise.
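For concreteness, the bounce rule at the heart of such a billiard is simple to state. Here is a minimal sketch (ours, not the paper's construction) of elastic reflection off a wall with unit normal n:

// Elastic reflection: v' = v - 2(v·n)n for a unit wall normal n.
// Speed is preserved, so the motion is deterministic and lossless.
fn reflect(v: (f64, f64), n: (f64, f64)) -> (f64, f64) {
    let dot = v.0 * n.0 + v.1 * n.1;
    (v.0 - 2.0 * dot * n.0, v.1 - 2.0 * dot * n.1)
}

fn main() {
    // A point moving down-right bounces off a horizontal floor (normal points up):
    let bounced = reflect((1.0, -1.0), (0.0, 1.0));
    assert_eq!(bounced, (1.0, 1.0)); // only the vertical component flips
}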
Achieving Turing completeness with billiards goes back to the early 80s and a paper by Fredkin and Toffoli that established the idea of "Conservative Logic" (also discussed in Richard Feynman's Feynman Lectures on Computation). But that system used the interactions of multiple billiard balls, whereas this paper shows you only need one (if you carefully lay out the edges of your table).
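The centerpiece of that paper is the Fredkin gate, a reversible controlled swap: it conserves the number of 1-bits (just as collisions conserve billiard balls) yet suffices for universal Boolean logic. A small sketch:

// Fredkin gate: if the control c is set, swap p and q; otherwise pass through.
// Reversible (applying it twice restores the inputs) and conservative
// (the number of `true` values never changes).
fn fredkin(c: bool, p: bool, q: bool) -> (bool, bool, bool) {
    if c { (c, q, p) } else { (c, p, q) }
}

fn main() {
    // AND falls out as the third output when q is pinned to false.
    for (a, b) in [(false, false), (false, true), (true, false), (true, true)] {
        let (_, _, and_ab) = fredkin(a, b, false);
        assert_eq!(and_ab, a && b);
    }
}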
The Baez link has some very interesting comments, including from Eva Miranda.
Arthur T Knackerbracket has processed the following story:
On January 8, 2026, a seemingly innocuous code change at Cloudflare triggered a cascade of DNS resolution failures across the internet, affecting millions of users worldwide. The culprit wasn't a cyberattack, server outage, or configuration error — it was something far more subtle: the order in which DNS records appeared in responses from 1.1.1.1, one of the world's most popular public DNS resolvers.
[...] The story begins on December 2, 2025, when Cloudflare engineers introduced what appeared to be a routine optimization to their DNS caching system. The change was designed to reduce memory usage — a worthy goal for infrastructure serving millions of queries per second. After testing in their development environment for over a month, the change began its global rollout on January 7, 2026.
By January 8 at 17:40 UTC, the update had reached 90% of Cloudflare's DNS servers. Within 39 minutes, the company had declared an incident as reports of DNS resolution failures poured in from around the world. The rollback began immediately, but it took another hour and a half to fully restore service.
The affected timeframe was relatively short — less than two hours from incident declaration to resolution — but the impact was significant. Users across multiple platforms and operating systems found themselves unable to access websites and services that relied on CNAME records, a fundamental building block of modern DNS infrastructure.
To understand what went wrong, it's essential to grasp how DNS CNAME (Canonical Name) records work. When you visit a website like www.example.com, your request might follow a chain of aliases before reaching the final destination:
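For example (an illustrative chain; the names match the parsing walkthrough below, and the TTLs are invented for illustration):

www.example.com.    300    IN    CNAME    cdn.example.com.
cdn.example.com.     60    IN    A        198.51.100.1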
Each step in this chain has its own Time-To-Live (TTL) value, indicating how long the record can be cached. When some records in the chain expire while others remain valid, DNS resolvers like 1.1.1.1 can optimize by only resolving the expired portions and combining them with cached data. This optimization is where the trouble began.
The problematic change was deceptively simple. Previously, when merging cached CNAME records with newly resolved data, Cloudflare's code created a new list and placed CNAME records first:
let mut answer_rrs = Vec::with_capacity(entry.answer.len() + self.records.len());
answer_rrs.extend_from_slice(&self.records); // CNAMEs first
answer_rrs.extend_from_slice(&entry.answer); // Then A/AAAA records

To save memory allocations, engineers changed this to append CNAMEs to the existing answer list. This seemingly minor optimization had a profound consequence: CNAME records now sometimes appeared after the final resolved answers instead of before them.
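Based on that description, the new version presumably did something like the following (our reconstruction, not Cloudflare's actual patch):

// Reuse the existing vector of resolved answers rather than allocating
// a new one, appending the cached CNAMEs at the end.
entry.answer.extend_from_slice(&self.records); // CNAMEs now come last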
The reason this change caused widespread failures lies in how many DNS client implementations process responses. Some clients, including the widely used getaddrinfo function in glibc (the GNU C Library used by most Linux systems), parse DNS responses sequentially while tracking the expected record name.
When processing a response in the correct order:
- Find records for www.example.com
- Encounter www.example.com CNAME cdn.example.com
- Update expected name to cdn.example.com
- Find cdn.example.com A 198.51.100.1
- Success!
But when CNAMEs appear after A records:
- Find records for www.example.com
- Ignore cdn.example.com A 198.51.100.1 (doesn't match expected name)
- Encounter www.example.com CNAME cdn.example.com
- Update expected name to cdn.example.com
- No more records found — resolution fails
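A minimal sketch of this single-pass strategy (ours, not glibc's actual code) makes the order sensitivity concrete:

// Single-pass answer parsing: track the expected name, follow CNAMEs,
// and accept only records whose owner name matches. Order-sensitive
// by construction, since skipped records are never revisited.
#[derive(Clone)]
enum Rdata {
    Cname(String), // alias target
    A(String),     // IPv4 address (as text, for brevity)
}

fn resolve_sequential(query: &str, answers: &[(String, Rdata)]) -> Option<String> {
    let mut expected = query.to_string();
    for (owner, rdata) in answers {
        if *owner != expected {
            continue; // non-matching records are skipped, not buffered
        }
        match rdata {
            Rdata::Cname(target) => expected = target.clone(), // follow the alias
            Rdata::A(addr) => return Some(addr.clone()),       // done
        }
    }
    None // if the CNAME came after its target's A record, we never saw the answer
}

fn main() {
    let cname = ("www.example.com".to_string(), Rdata::Cname("cdn.example.com".to_string()));
    let a = ("cdn.example.com".to_string(), Rdata::A("198.51.100.1".to_string()));

    // CNAME first: resolution succeeds.
    assert_eq!(
        resolve_sequential("www.example.com", &[cname.clone(), a.clone()]),
        Some("198.51.100.1".to_string())
    );
    // CNAME last: the A record is skipped before the alias is known.
    assert_eq!(resolve_sequential("www.example.com", &[a, cname]), None);
}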
This sequential parsing approach, while seemingly fragile, made sense when it was implemented. It's efficient, requires minimal memory, and worked reliably for decades because most DNS implementations naturally placed CNAME records first.
The impact of this change was far-reaching but unevenly distributed. The primary victims were systems using glibc's getaddrinfo function, which includes most traditional Linux distributions that don't use systemd-resolved as an intermediary caching layer.
Perhaps most dramatically affected were certain Cisco Ethernet switches. Three specific models experienced spontaneous reboot loops when they received responses with reordered CNAMEs from 1.1.1.1. Cisco has since published a service document describing the issue, highlighting how deeply this problem penetrated into network infrastructure.
Interestingly, many modern systems were unaffected. Windows, macOS, iOS, and Android all use different DNS resolution libraries that handle record ordering more flexibly. Even on Linux, distributions using systemd-resolved were protected because the local caching resolver reconstructed responses according to its own ordering logic.
At the heart of this incident lies a fundamental ambiguity in RFC 1034, the 1987 specification that defines much of DNS behavior.
The phrase "possibly preface" suggests that CNAME records should appear before other records, but the language isn't normative. RFC 1034 predates RFC 2119 (published in 1997), which standardized the use of keywords like "MUST" and "SHOULD" to indicate requirements versus suggestions.
Further complicating matters, RFC 1034 also states that "the difference in ordering of the RRs in the answer section is not significant," though this comment appears in the context of a specific example comparing two A records, not different record types.
This ambiguity has persisted for nearly four decades, with different implementers reaching different conclusions about what the specification requires.
One of the most puzzling aspects of this incident is how it survived testing for over a month without detection. The answer reveals the complexity of modern internet infrastructure and the challenges of comprehensive testing.
Cloudflare's testing environment likely used systems that weren't affected by the change. Most modern operating systems handle DNS record ordering gracefully, and many Linux systems use systemd-resolved, which masks the underlying issue. The specific combination of factors needed to trigger the problem — direct use of glibc's resolver with CNAME chains from 1.1.1.1 — may not have been present in their test scenarios.
This highlights a broader challenge in infrastructure testing: the internet's diversity means that edge cases can have mainstream impact. What works in a controlled testing environment may fail when exposed to the full complexity of real-world deployments.
The DNS community's response to this incident has been swift and constructive. Cloudflare has committed to maintaining CNAME-first ordering in their responses and has authored an Internet-Draft proposing to clarify the ambiguous language in the original RFC.
The proposed specification would explicitly require CNAME records to appear before other record types in DNS responses, codifying what has been common practice for decades. If adopted, this would prevent similar incidents in the future by removing the ambiguity that allowed different interpretations.
The incident also sparked broader discussions about DNS implementation robustness. While Cloudflare's change exposed fragility in some client implementations, it also highlighted the importance of defensive programming in critical infrastructure components.
[...] The incident revealed an even deeper complexity: even when CNAME records appear first, their internal ordering can cause problems.
[...] For the broader DNS community, this incident serves as a reminder of the importance of specification clarity and comprehensive testing. As internet infrastructure continues to evolve, identifying and resolving these legacy ambiguities becomes increasingly important.
The incident also highlights the value of diverse DNS resolver implementations. The fact that different resolvers handle record ordering differently provided natural resilience — when one approach failed, others continued working.
The January 8, 2026 DNS incident demonstrates how seemingly minor changes to critical infrastructure can have far-reaching consequences. A memory optimization that moved CNAME records from the beginning to the end of DNS responses triggered failures across multiple platforms and caused network equipment to reboot.
At its core, this was a story about assumptions — assumptions built into 40-year-old specifications, assumptions made by implementers over decades, and assumptions about how systems would behave under different conditions. When those assumptions collided with reality, the result was a brief but significant disruption to internet connectivity.
[...] As Cloudflare's engineers learned, sometimes the order of things matters more than we realize. In the complex world of internet infrastructure, even the smallest details can have the largest consequences.
Caltech-led Team Finds New Superconducting State:
Superconductivity is a quantum physical state in which a metal is able to conduct electricity perfectly without any resistance. In its most familiar application, it enables powerful magnets in MRI machines to create the magnetic fields that allow doctors to see inside our bodies. Thus far, materials can only achieve superconductivity at extremely low temperatures, near absolute zero (a few tens of kelvin or colder). But physicists dream of superconducting materials that might one day operate at room temperature. Such materials could open entirely new possibilities in areas such as quantum computing, the energy sector, and medical technologies.
"Understanding the mechanisms leading to the formation of superconductivity and discovering exotic new superconducting phases is not only one of the most stimulating pursuits in the fundamental study of quantum materials but is also driven by this ultimate dream of achieving room-temperature superconductivity," says Stevan Nadj-Perge, professor of applied physics and materials science at Caltech.
Now a team led by Nadj-Perge that includes Lingyuan Kong, AWS quantum postdoctoral scholar research associate, and other colleagues at Caltech has discovered a new superconducting state—a finding that provides a new piece of the puzzle behind this mysterious but powerful phenomenon.
[...] In normal metals, individual electrons collide with the oppositely charged ions that make up the metal's lattice as they move through it. Each collision causes electrons to lose energy, increasing electrical resistance. In superconductors, on the other hand, electrons are weakly attracted to each other and can bind, forming duos called Cooper pairs. As long as the electrons stay within a certain relatively small range of energy levels known as the energy gap, the electrons remain paired and do not lose energy through collisions. Therefore, it is within that relatively small energy gap that superconductivity occurs.
Typically, a superconductor's energy gap is the same at all locations within the material. For example, in a superconducting crystal without impurities, all pieces of the crystal would have the same energy gap.
But beginning in the 1960s, scientists began theorizing that the energy gap in some superconducting materials could modulate in space, meaning the gap could be stronger in some areas and weaker in others. Later, in the 2000s, the idea was further developed with the proposal of what is called the pair density wave (PDW) state, which suggests that a superconducting state could arise in which the energy gap modulates with a long wavelength, where the gap fluctuates between a larger and smaller measurement.
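In symbols (schematic notation of ours, not the paper's): a uniform superconductor has a constant gap Δ(x) = Δ₀, while a modulated state has Δ(x) = Δ₀ + δΔ·cos(Qx), where 2π/Q is the modulation wavelength. A PDW modulates over many lattice spacings; the state reported below modulates at the lattice spacing itself, with a modulation depth δΔ/Δ₀ of up to roughly 0.4, which is the "40 percent" figure quoted below.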
Over the past decade, this concept has garnered significant experimental interest, with numerous materials, including iron-based superconductors, being explored as potential hosts of a PDW state.
Now, working with extremely thin flakes of an iron-based superconductor, FeTe₀.₅₅Se₀.₄₅, Nadj-Perge and his colleagues have discovered a modulation of the superconducting gap with the smallest wavelength possible, matching the spacing of atoms in a crystal. They have named it the Cooper-pair density modulation (PDM) state.
"The observed gap modulation, reaching up to 40 percent, represents the strongest reported so far, leading to the clearest experimental evidence to date that gap modulation can exist even at the atomic scale," says Kong, lead author of the new paper.
This unexpected discovery was made possible by the first successful realization of scanning tunneling microscopy experiments of an iron-based superconductor on a specialized device for studying such thin flakes. Such experiments had been hampered for nearly two decades by the presence of severe surface contamination, but the Caltech team, working in the Kavli Nanoscience Institute (KNI), developed a new experimental approach that enabled a sufficiently clean surface for microscopic probes.
Journal Reference:
Kong, Lingyuan, Papaj, Michał, Kim, Hyunjin, et al. Cooper-pair density modulation state in an iron-based superconductor, Nature (DOI: 10.1038/s41586-025-08703-x)
An interesting technical article about satellite communications and Iran
In Iran, it isn't just mobile and fixed-line networks that are being jammed, but Starlink too. We explain how this is likely achieved despite its thousands of satellites.
Reliable information is hard to come by, as practically the entire country has been offline since the evening of January 8; the content delivery network Cloudflare registers almost no data traffic from Iran, and the internet observation group NetBlocks also speaks of a complete communication blockade.
One of the few digital ways out currently leads via satellite through the global network Starlink by SpaceX. Although usage is forbidden in Iran, terminals are smuggled into the country, and SpaceX tolerates their use; since January 13, it has even been free of charge. However, activists are reporting that Starlink is also functioning increasingly poorly in Iran, and users are being actively tracked. But how can a system of thousands of satellites be jammed from the ground, and how does the regime find users of the devices without access to customer data or the network?
The US organization Holistic Resilience, which helps Iranians secure their internet access, speaks of around 50,000 users in the country. In this article, we will explore how Starlink works, why it functions in Iran, and how the Iranian government is likely jamming the network. While neither the regime nor SpaceX likes to reveal their cards, hackers and journalists are not deterred by this, and the laws of physics apply to everyone.
Physics of Foam Strangely Resembles AI Training:
Foams are everywhere: soap suds, shaving cream, whipped toppings and food emulsions like mayonnaise. For decades, scientists believed that foams behave like glass, their microscopic components trapped in static, disordered configurations.
Now, engineers at the University of Pennsylvania have found that foams actually flow ceaselessly inside while holding their external shape. More strangely, from a mathematical perspective, this internal motion resembles the process of deep learning, the method typically used to train modern AI systems.
The discovery could hint that learning, in a broad mathematical sense, may be a common organizing principle across physical, biological and computational systems, and provide a conceptual foundation for future efforts to design adaptive materials. The insight could also shed new light on biological structures that continuously rearrange themselves, like the scaffolding in living cells.
In a paper in Proceedings of the National Academy of Sciences, the team describes using computer simulations to track the movement of bubbles in a wet foam. Rather than eventually staying put, the bubbles continued to meander through possible configurations. Mathematically speaking, the process mirrors how deep learning involves continually adjusting an AI system's parameters — the information that encodes what an AI "knows" — during training.
"Foams constantly reorganize themselves," says John C. Crocker, Professor in Chemical and Biomolecular Engineering (CBE) and the paper's co-senior author. "It's striking that foams and modern AI systems appear to follow the same mathematical principles. Understanding why that happens is still an open question, but it could reshape how we think about adaptive materials and even living systems."
In some ways, foams behave mechanically like solids: they more or less hold their shape and can rebound when pressed. At a microscopic level, however, foams are "two-phase" materials, made up of bubbles suspended in a liquid or solid. Because foams are relatively easy to create and observe yet exhibit complex mechanical behavior, they have long served as model systems for studying other crowded, dynamic materials, including living cells.
[...] During training, modern AI systems continually adjust their parameters — the numerical values that encode what they "know." Much like bubbles in foams were once thought to descend into metaphorical valleys, searching for the positions that require the least energy to maintain, early approaches to AI training aimed to optimize systems as tightly as possible to their training data.
Deep learning accomplishes this using optimization algorithms related to the mathematical technique "gradient descent," which involves repeatedly nudging a system in the direction that most improves its performance. If an AI's internal representation of its training data were a landscape, the optimizers guide the system downhill, step by step, toward configurations that reduce error — those that best match the examples it has seen before.
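As a toy illustration (ours, not from the paper), one gradient descent step just moves a parameter against the local slope; on the landscape f(x) = (x - 3)^2, the valley floor sits at x = 3:

// Gradient descent on f(x) = (x - 3)^2, whose gradient is 2(x - 3).
// Each step nudges x downhill; after enough steps x sits near the minimum.
fn main() {
    let grad = |x: f64| 2.0 * (x - 3.0);
    let mut x = 0.0_f64;     // initial parameter
    let learning_rate = 0.1; // step size
    for _ in 0..100 {
        x -= learning_rate * grad(x); // step in the direction that reduces error
    }
    println!("x after training: {x:.4}"); // ~3.0000
}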
Over time, researchers realized that forcing systems into the deepest possible valleys was counterproductive. Models that optimized too precisely became brittle, unable to generalize beyond the data they had already seen. "The key insight was realizing that you don't actually want to push the system into the deepest possible valley," says Robert Riggleman, Professor in CBE and co-senior author of the new paper. "Keeping it in flatter parts of the landscape, where lots of solutions perform similarly well, turns out to be what allows these models to generalize."
When the Penn researchers looked again at their foam data through this lens, the parallel was hard to miss. Rather than settling into "deep" positions in this metaphorical landscape, bubbles in foams also remained in motion, much like the parameters in modern AI systems, continuously reorganizing within broad, flat regions with similar characteristics. The same mathematics that explains why deep learning works turned out to describe what foams had been doing all along.
[...] "Why the mathematics of deep learning accurately characterizes foams is a fascinating question," says Crocker. "It hints that these tools may be useful far outside of their original context, opening the door to entirely new lines of inquiry."
Journal Reference: Amruthesh Thirumalaiswamy et al., Slow relaxation and landscape-driven dynamics in viscous ripening foams, PNAS (2025). https://doi.org/10.1073/pnas.2518994122. https://dx.doi.org/10.48550/arxiv.2301.13400 [arXiv]
On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.
"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."
[...]
Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant: a Markdown-formatted file containing a list of written instructions (you can see them here) that gets appended to the prompt fed into the large language model (LLM) that powers the assistant. Unlike a normal system prompt, the skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with greater precision. (Custom skills require a paid Claude subscription with code execution turned on.)
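For flavor, here is a minimal sketch of what such a skill file might look like (illustrative only, not Chen's actual Humanizer file; Claude skills live in a SKILL.md with YAML front matter along these lines):

---
name: humanizer
description: Rewrite prose to avoid common signs of AI writing
---

When writing or editing prose:
- Replace inflated phrases such as "marking a pivotal moment" with plain facts.
- Avoid brochure adjectives like "breathtaking" and "nestled within".
- Do not tack analytical "-ing" clauses onto the ends of sentences.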
[...]
So what does AI writing look like? The Wikipedia guide is specific with many examples, but we'll give you just one here for brevity's sake.

Some chatbots love to pump up their subjects with phrases like "marking a pivotal moment" or "stands as a testament to," according to the guide. They write like tourism brochures, calling views "breathtaking" and describing towns as "nestled within" scenic regions. They tack "-ing" phrases onto the end of sentences to sound analytical: "symbolizing the region's commitment to innovation."
To work around those rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:
Before: "The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain."
After: "The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics."
[...]
even though most AI language models tend toward certain types of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes it's very difficult, as OpenAI found in its yearslong struggle against the em dash.)

Also, humans can write in chatbot-like ways. For example, this article likely contains some "AI-written traits" that trigger AI detectors even though it was written by a professional writer—especially if we use even a single em dash—because most LLMs picked up writing techniques from examples of professional writing scraped from the web.
[My initial reaction was: nice, a way to filter out the AI slop! And there's a plugin! When in reality, it's a plugin to help the Claude LLM sound less like an AI. So, the deep dark bad path, got it. Too much optimism, I guess.]
'NVIDIA Contacted Anna's Archive to Secure Access to Millions of Pirated Books'
The new complaint alleges that "competitive pressures drove NVIDIA to piracy", which allegedly included collaborating with the controversial Anna's Archive library.
According to the amended complaint, a member of Nvidia's data strategy team reached out to Anna's Archive to find out what the pirate library could offer the trillion-dollar company.
"Within a week of contacting Anna's Archive, and days after being warned by Anna's Archive of the illegal nature of their collections, NVIDIA management gave 'the green light' to proceed with the piracy. Anna's Archive offered NVIDIA millions of pirated copyrighted books."
Busted? Well, at least they asked. Meta blamed it on porno. Will there be any fallout?
Scientists have discovered that human hair does not emerge because it is pushed upward from the root. Instead, it is pulled along by forces generated by a previously unseen network of moving cells. This finding overturns long-held ideas in biology and may change how scientists approach hair loss and tissue regeneration.
Researchers from L'Oréal Research & Innovation and Queen Mary University of London used advanced 3D live imaging to observe individual cells inside human hair follicles that were kept alive in laboratory culture. Their study, published in Nature Communications, revealed that cells in the outer root sheath (the layer that surrounds the hair shaft) move in a downward spiral within the same region that produces the upward pulling force responsible for hair growth.
Dr Inês Sequeira, Reader in Oral and Skin Biology at Queen Mary and one of the lead authors, said, "Our results reveal a fascinating choreography inside the hair follicle. For decades, it was assumed that hair was pushed out by the dividing cells in the hair bulb. We found instead that it's actively being pulled upwards by surrounding tissue acting almost like a tiny motor."
[...] Dr Thomas Bornschlögl, the other lead author, from the same L'Oréal team, adds: "This reveals that hair growth is not driven only by cell division; instead, the outer root sheath actively pulls the hair upwards. This new view of follicle mechanics opens fresh opportunities for studying hair disorders, testing drugs, and advancing tissue engineering and regenerative medicine."
While the research was carried out on human follicles in lab culture, it offers new clues for hair science and regenerative medicine. The team believes that understanding these mechanical forces could help design treatments that target the follicle's physical as well as biochemical environment. Furthermore, the imaging technique developed will allow live testing of different drugs and treatments.
The study also highlights the growing role of biophysics in biology, showing how mechanical forces at the microscopic scale shape the organs we see every day.
Reference: "Mapping cell dynamics in human ex vivo hair follicles suggests pulling mechanism of hair growth" by Nicolas Tissot, Gaianne Genty, Roberto Santoprete, et. al., 21 November 2025, Nature Communications. DOI: 10.1038/s41467-025-65143-x
Humans use tools; it's one of the things that makes us great. Some of the other smarter monkeys also use tools. Next up, we have cows. Cow tool users. Beware the bovine master race ... also lactose tolerant.
Veronika, a cow living in an idyllic mountain village in the Austrian countryside, has spent years perfecting the art of scratching herself with sticks, rakes, and deck brushes. Now that scientists have discovered her, she has the distinction of being the first cow known to use tools.
She picks up objects with her tongue, grips them tight with her mouth, and directs their ends to where she wants them most. When she's wielding a deck brush, she will use the bristled end to scratch her thick-skinned back, but switches to the smooth handle when scratching her soft, sensitive belly.
[...] The brown cow's know-how came to the attention of scientists last year after Alice Auersperg, a cognitive biologist at the University of Veterinary Medicine in Vienna, published a book on tool use in animals. Shortly after, her inbox was flooded with messages from people claiming to have seen their pets use tools. "I got all of those emails from people saying things like 'my cat is using the Amazon box as a tool. It's her new house,'" she says. Among these mundane reports was something truly new: a video of a cow picking up a rake and scratching her backside with it.
"It seemed really interesting," she recalls. "We had to take a closer look." Not long after, Auersperg and her colleague Antonio Osuna-Mascaró, a post-doctoral researcher at the same University, drove to Veronika's home.
To say Veronika was living her best life would be an understatement. Her owner, a soft-hearted baker named Witgar Wiegele, had kept Veronika and her mother as pets. She'd spent her life roaming around a picturesque pasture surrounded by forests and snow-covered mountains. Veronika, now 13 years old, has had many years to mess around with the many sticks and landscaping tools that line her enclosure.
The only downside to her idyllic lifestyle is that each summer, horse flies plague Wiegele's property. According to the researchers, the desire to shoo these flies away and scratch their bites likely drove Veronika to develop her self-scratching skills.
https://www.nationalgeographic.com/animals/article/cow-using-tools
Schools across the U.S. are rolling out AI-powered surveillance technology, including drones, facial recognition and even bathroom listening devices. But there's not much data to prove they keep kids safe:
Inside a white stucco building in Southern California, video cameras compare faces of passersby against a facial recognition database. Behavioral analysis AI reviews the footage for signs of violent behavior. Behind a bathroom door, a smoke detector-shaped device captures audio, listening for sounds of distress. Outside, drones stand ready to be deployed and provide intel from above, and license plate readers from $8.5 billion surveillance behemoth Flock Safety ensure the cars entering and exiting the parking lot aren't driven by criminals.
This isn't a high-security government facility. It's Beverly Hills High School.
District superintendent Alex Cherniss says the striking array of surveillance tools is a necessity, and one that ensures the safety of his students. "We are in the hub of an urban setting of Los Angeles, in one of the most recognizable cities on the planet. So we are always a target and that means our kids are a target and our staff are a target," he said. In the 2024-2025 fiscal year, the district spent $4.8 million on security, including staff. The surveillance system spots multiple threats per day, the district said.
Beverly Hills' apparatus might seem extreme, but it's not an outlier. Across the U.S., schools are rolling out similar surveillance systems they hope will keep them free of the horrific and unceasing tide of mass shootings. There have been 49 deaths from gunfire on school property this year. In 2024, there were 59, and in 2023 there were 45, per Everytown for Gun Safety. Between 2000 and 2022, 131 people were killed and 197 wounded at schools in the U.S., most of them children. Given those appalling metrics, allocating a portion of your budget to state-of-the-art AI-powered safety and surveillance tools is a relatively easy decision.
[...] Skeptics, however, said there's little proof AI technologies are going to bring those numbers down significantly, and that they ruin trust with students. A 2023 American Civil Liberties Union report found that eight of the 10 largest school shootings in America since Columbine occurred on campuses with surveillance systems. Chad Marlow, a senior policy counsel at the ACLU who authored the report, said that even with the advent of AI-powered tools, there's a dearth of independent research verifying the technology is any better at preventing tragedies. "It's very peculiar to make the claim that this will keep your kids safe," he said.
The report also found that the surveillance fostered an atmosphere of distrust: 32% of 14- to 18-year-old students surveyed said they felt like they were always being watched. In focus groups run by the ACLU, students said they felt less comfortable alerting educators to mental health issues and physical abuse. Marlow argues that's a lousy tradeoff. "Because kids don't trust people they view as spying on them, it ruptures trust and actually makes things less safe," he said.
Originally spotted on Schneier on Security.
France records more deaths than births for first time since end of second world war:
A public consultation last year found the financial cost of raising children was a barrier to parenthood for 28% of French adults.
For the first time since the end of the second world war, France has recorded more deaths than births, suggesting that the country's long-held demographic advantage over other EU countries is slipping away.
Across the country in 2025, there were 651,000 deaths and 645,000 births, according to newly released figures from the national statistics institute Insee.
France had long been an exception across Europe, with birthrates that topped many of its neighbours'. In 2023 – the most recent year for which comparable data is available – the fertility rate in France of 1.65 children per woman was the second-highest in the EU, trailing only Bulgaria's 1.81.
This week's data, however, suggests that the country is not immune to the demographic crunch sweeping the continent as populations age and birthrates tumble.
On Tuesday, Insee said the fertility rate in France had dropped to 1.56 in 2025. This was the lowest rate since the end of the first world war.
It was also a 24% drop compared with the 2.01 rate registered 15 years ago, the institute's Sylvie Le Minez said. "Since 2010, births have been declining year after year in France."
A public consultation carried out by the national assembly late last year gave insight into why this may be happening [article in French]. Of the more than 30,000 respondents, 28% cited the financial costs of raising and caring for children as the principal obstacle to having them, while 18% cited worries about the future of society and 15% pointed to the difficulties in balancing the needs of a family with work and personal life.
The data suggests that France is poised to join the many other EU countries facing the prospect of a shrinking labour force as ageing populations increase the cost of pensions and elderly care.
Life expectancy in France reached record highs last year, at 85.9 years for women and 80.3 for men, while the share of people aged 65 or older climbed to 22%, roughly the same as the share of those under the age of 20.
"This is not a first for European countries," said Le Minez, highlighting that 20 of the EU's 27 countries had registered more deaths than births in 2024. "But this time, this is also the case for France."
Even so, France's population grew slightly last year to 69.1 million, thanks to net migration estimated at about 176,000. As anti-immigration sentiment, led by France's National Rally, steadily makes inroads in the country, projections have suggested that the rise of the far right could speed up population decline.
Without immigration, France's population could drop to as low as 59 million by 2100, according to recent forecasts by Eurostat, the EU's official statistics agency.
A growing number of college professors are sounding the alarm over a quiet but accelerating crisis on American campuses, with Gen Z arriving at college unable to read:
According to a report by Fortune, professors across the country say students are struggling to process written sentences, complete assigned reading, or engage meaningfully with texts that were once foundational to higher education.
The problem is not confined to remedial courses or underperforming schools.
Faculty say it is widespread, structural, and getting worse.
[...] Timothy O'Malley of the University of Notre Dame said students often have no idea how to approach traditional reading assignments and instead turn to artificial intelligence tools for summaries.
"Today, if you assign that amount of reading, they often don't know what to do," O'Malley told Fortune.
[...] Professors say it is the predictable outcome of a K–12 system that no longer ensures basic competence.
Standards were lowered, accountability eroded, and reading increasingly treated as optional.
The result is a generation arriving at adulthood unprepared for rigorous work, real expectations, and the responsibilities that come with them, and universities now face the consequences.
Has AI become the modern equivalent of CliffsNotes?
The 'bombshell' science that casts doubt on claims about microplastics:
[...] Officially, microplastics are no larger than five millimetres in size (the size of a grain of rice or smaller), whereas nanoplastics are one nanometre to 1000 nanometres in size (as small as bacteria) and are a lot harder to detect. In recent years, studies have claimed to have found these minuscule particles in nearly every human organ and tissue, including the lungs, liver, kidneys, heart, brain, placenta, testicles, bone marrow and blood.
[...] With large microplastics, scientists can easily spot particles under a microscope and then fire a laser at them to see if they are plastic. But with nanoplastics, scientists must burn the particle and measure the gases emitted, which is less reliable and still in its infancy as a technique.
This unreliability of testing has made researchers more sceptical about the more alarmist findings. An abstract presented at the European Society of Human Reproduction and Embryology last year showing microplastics in human reproductive fluids was met with raised eyebrows among scientists.
"Many previous scary sounding headlines on microplastics in blood and food have turned out to be measurement errors," warns Oliver Jones, professor of chemistry at RMIT University, Melbourne, referring to reports that preceded last year's findings.
Likewise, separate claims that microplastics had been found in human blood in 2022 were criticised by a US chemist as being "consistent with incidental or accidental contaminations", in a letter to the journal Environment International.
[...] Yet despite the testing issues, many experts are still convinced microplastics are causing harm.
Prof Philip Landrigan, a paediatrician and epidemiologist at Boston College in the US, led a recent review into microplastics for the Lancet, and says people should not dismiss the dangers. "The Guardian article is accurate in pointing out that there is work to be done in refining, standardising and harmonising the analytical techniques for examining microplastics in tissue samples," he says.
"There is a need especially to distinguish microplastics from lipids [fats]. But the Guardian is wrong in implying that this whole area of science is rubbish.
"The presence of microplastics in the human body needs to be taken seriously, even if we don't yet know all the ways in which they may harm health. They cannot be wished away."
As millions of us embark on New Year pledges to eat better, exercise more and learn something new, research published today suggests hobbies could do more than improve your personal life; they could make you better at work.
The study by researchers from the University of East Anglia (UEA) and Erasmus University Rotterdam explored how 'leisure crafting' - intentionally shaping your free time through goal setting, learning and connection - does not just boost well-being outside the office but can spill over into creativity, engagement, and meaning at work, especially for older employees.
Published in the journal Human Relations, the findings show that giving people simple, doable advice about how to grow through their hobbies can make a real difference in their daily lives.
"It's already known that hobbies are good for your well-being," said lead author Dr Paraskevas Petrou, of Erasmus School of Social & Behavioural Sciences.
"But our study shows that hobbies don't just make you happier, they can also help you feel more fulfilled and creative at work. This goes beyond just relaxing or having fun - like binge-watching Netflix - and turns the hobby into something that helps people grow."
Co-author Prof George Michaelides, from UEA's Norwich Business School, added: "We were surprised to see that leisure crafting had a stronger effect at work than in people's personal lives. We had expected equal benefits in both areas.
"One possible reason is that people who took part in our study were already fairly satisfied with their lives outside work, but their work life had more room for improvement. If what people do outside work can also have this positive impact on them in the workplace, organizations should support staff not just in their jobs, but in all areas of their lives."
[...] Co-author Prof Laura Den Dulk, also of Erasmus University Rotterdam, said: "What makes this study different is that we didn't just ask people how they feel. We asked them to take a small, specific action - to approach their hobby in a new way - and then we saw how it actually affected their lives week by week.
"This is a reminder that people aren't just employees - they're whole individuals, and supporting their personal growth outside of work can have a positive impact inside the workplace too."
[...] The authors say there are several ways in which organizations can maximize the benefits of leisure crafting. For example, they could be more aware that their employees are more than just workers and help staff to realize their full potential outside work.
This could mean making hobbies eligible for employee or personal development funds, and recognizing leisure-time commitments, 'me-time' and leisure-time projects as a life domain that matters alongside, for example, family commitments.
They could also offer similar interventions to their employees, either as online or on-site masterclasses or personal development modules that can help employees grow in a holistic rather than in an exclusively work-related way.
Journal Reference: Petrou, P., Den Dulk, L., & Michaelides, G. (2026). The leisure crafting intervention: Effects on work and non-work outcomes and the moderating role of age. Human Relations, 0(0). https://doi.org/10.1177/00187267251407641
IT spending set to hit $1.4 trillion in 2026 - but what exactly are we spending it on?
IT spending is set to rise 11.1% in 2026 to hit $1.43 trillion - and it comes as no surprise that continued AI deployment will drive much of that growth.
The latest Gartner projections say generative AI model spending is one of the biggest growth categories, especially in Europe, where a 78.2% rise is expected.
Gartner explained cloud and cybersecurity investments, together with AI tools, will continue despite industry-wide tight budgets and limited headcount growth.
Although enterprises are set to plough more money into tech, there's a clear evolution at play, with a bigger focus on smarter, more efficient and more personalized options. An overview across five key categories shows the biggest growth coming from data center systems, up 18.8% year-over-year, though this remains the smallest overall expense in terms of dollar value.
The biggest is attributed to IT services, followed by software, communication services and devices.
But the rise in spend isn't necessarily because companies want to expand what they have.
"Europe is facing regulatory pressure, competition between countries, geopolitical tensions, and national security concerns-all focused on making sure Europe can develop and manage AI systems on its own, without depending on foreign platforms or providers," Distinguished VP Analyst John-David Lovelock explained.
Separately, Gartner expects 35% of countries to be locked into region-specific AI platforms, up from 5% today. This shift towards regionally hosted cloud services is also expected to drive a 24% growth in public cloud spending in 2026.
Worse still, analysts at Gartner explain that price increases are artificially inflating the figure, which suggests growth might not be as high as projections indicate.