Apparently, China's dominance of the supply chain means that it's also seen as the principal source of cybersecurity risk:
"We know that foreign, hostile actors see Australia's energy system as a good target," Home Affairs assistant secretary for cyber security Sophie Pearce told the small, afternoon-on-the-last-day audience.
"We know that cyber vector is the most likely means of disrupting our energy ecosystem, and I think that the energy transition raises the stakes even further. Where we're reliant on foreign investment and foreign supply chains, lots of opportunity there, obviously.
"When there's a dependency on jurisdictions that might require or can compel access to data or access to systems, that increases the risks."
[...] Pearce Courtney handles cyber coordination for energy markets at AEMO, and while he says it's maintaining visibility over the whole structure that keeps the organisation "up at night", technology concentration risk is on the radar.
[...] "In terms of the technology and the devices and where we're buying our supply chain. That's probably the other challenge that doesn't keep us up at night, that's a significant, complex challenge."
China controls 80 per cent of the global supply chain for all the manufacturing stages of solar panels, according to an International Energy Agency (IEA) report from 2022. A similar study from 2024 shows China has almost 85 per cent of global battery cell production capacity.
[Editor's Comment: Title corrected to more accurately reflect the summary contents--JR 2025-09-15 05:55]
Newly released video at House UFO hearing appears to show U.S. missile striking and bouncing off orb:
A newly released video captured by a U.S. Reaper drone shows a glowing orb off the coast of Yemen. In the video, a Hellfire missile suddenly strikes the unidentified object and bounces off it.
Rep. Eric Burlison, a Republican from Missouri, shared the video at a House Oversight hearing on Tuesday on what the military calls "Unidentified Aerial Phenomena," better known as UFOs.
The video, dated Oct. 30, 2024, was provided by a whistleblower; when it is slowed down, the missile can be seen continuing on its own path after striking the orb.
A recent government report revealed that it had received more than 750 new UAP sightings between May 2023 and June 2024, leaving lawmakers digging into the mystery and national security concerns posed by the objects.
"We've never seen a Hellfire missile hit a target and bounce off," said Lue Elizondo, a former senior intelligence official with the Pentagon.
"When a hellfire makes a hit, a kinetic strike on something solid, there's usually not much left of whatever it is it's hitting," Elizondo said. "It's very, very destructive. But in the video ... what seems to happen is that the missile is either redirected, or in some case, perhaps glances off the object and continues on its way."
What the video does not show is the second Reaper drone that launched the missile.
Details remain unclear, including what the mission was.
The U.S. military was conducting regular air strikes against Houthi targets that posed a threat to the U.S. Navy and commercial vessels.
Pentagon officials told CBS News they have no comment.
The Defense Department in 2023 launched a website for declassified UAP information, after the House Oversight Committee held a hearing earlier that year featuring testimony from a former military intelligence officer and two former fighter pilots with first-hand experience of the mysterious objects.
Scientists Stunned as Tiny Algae Keep Moving Inside Arctic Ice:
Scientists know that microbial life can survive under some extreme conditions—including, hopefully, harsh Martian weather. But new research suggests that one particular microbe, an algal species found in Arctic ice, isn't as immobile as previously believed. The diatoms are surprisingly active, gliding across—and even within—their frigid stomping grounds.
In a Proceedings of the National Academy of Sciences paper published September 9, researchers explained that ice diatoms—single-celled algae with glassy outer walls—actively dance around in the ice. This feisty activity challenges assumptions that microbes living in extreme environments, or extremophiles, are barely getting by. If anything, these algae evolved to thrive despite the extreme conditions. The remarkable mobility of these microbes also hints at an unexpected role they may play in sustaining Arctic ecology.
"This is not 1980s-movie cryobiology," said Manu Prakash, the study's senior author and a bioengineer at Stanford University, in a statement. "The diatoms are as active as we can imagine until temperatures drop all the way down to -15 C [5 degrees Fahrenheit], which is super surprising."
That is the lowest temperature at which motility has ever been recorded in a eukaryotic cell like the diatom, the researchers claim. Surprisingly, diatoms of the same species from a much warmer environment didn't demonstrate the same skating behavior as the ice diatoms, which implies that the Arctic diatoms' extreme lifestyle conferred an "evolutionary advantage," they added.
For the study, the researchers collected ice cores from 12 stations across the Arctic in 2023. They conducted an initial analysis of the cores using on-ship microscopes, creating a comprehensive image of the tiny society inside the ice.
To get a clearer image of how and why these diatoms were skating, the team sought to replicate the conditions of the ice core inside the lab. They prepared a Petri dish with thin layers of frozen freshwater and very cold saltwater. The team even donated strands of their hair to mimic the microfluidic channels in Arctic ice, which expel salt as the ice freezes.
As they expected, the diatoms happily glided through the Petri dish, using the hair strands as "highways" during their routines. Further analysis allowed the researchers to track and pinpoint how the microbes accomplished their icy trick.
"There's a polymer, kind of like snail mucus, that they secrete that adheres to the surface, like a rope with an anchor," explained Qing Zhang, study lead author and a postdoctoral student at Stanford, in the same release. "And then they pull on that 'rope,' and that gives them the force to move forward."
If we're talking numbers, algae may be among the most abundant living organisms in the Arctic. To put that into perspective, Arctic waters appear "absolute pitch green" in drone footage purely because of algae, explained Prakash.
The researchers have yet to identify the significance of the diatoms' gliding behavior. However, knowing that they're far more active than we believed could mean that the tiny skaters unknowingly contribute to how resources are cycled in the Arctic.
"In some sense, it makes you realize this is not just a tiny little thing; this is a significant portion of the food chain and controls what's happening under ice," Prakash added.
That's a significant departure from what we often think of them as—a major food source for other, bigger creatures. But if true, it would help scientists gather new insights into the hard-to-probe environment of the Arctic, especially as climate change threatens its very existence. The timing of this result shows that, to understand what's beyond Earth, we first need to protect and safely observe what's already here.
Journal Reference: Ice gliding diatoms establish record-low temperature limits for motility in a eukaryotic cell, Proceedings of the National Academy of Sciences.
Researchers investigated giant prehistoric trash piles to reveal where animal remains came from:
You can learn a lot about people by studying their trash, including populations that lived thousands of years ago.
In what the team calls the "largest study of its kind," researchers applied this principle to Britain's iconic middens, or giant prehistoric trash (excuse me, rubbish) piles. Their analysis revealed that at the end of the Bronze Age (2,300 to 800 BCE), people—and their animals—traveled from far and wide to feast together.
"At a time of climatic and economic instability, people in southern Britain turned to feasting—there was perhaps a feasting age between the Bronze and Iron Age," Richard Madgwick, an archaeologist at Cardiff University and co-author of the study published yesterday in the journal iScience, said in a university statement. "These events are powerful for building and consolidating relationships both within and between communities, today and in the past."
Madgwick and his colleagues investigated material from six middens in Wiltshire and the Thames Valley via isotope analysis, a technique archaeologists use to link animal remains to the unique chemical make-up of a particular geographic area. The technique reveals where the animals were raised, allowing the researchers to see how far people traveled to join these feasts.
"The scale of these accumulations of debris and their wide catchment is astonishing and points to communal consumption and social mobilisation on a scale that is arguably unparalleled in British prehistory," Madgwick added.
[...] "Our findings show each midden had a distinct make up of animal remains, with some full of locally raised sheep and others with pigs or cattle from far and wide," said Carmen Esposito, lead author of the study and an archaeologist at the University of Bologna. "We believe this demonstrates that each midden was a lynchpin in the landscape, key to sustaining specific regional economies, expressing identities and sustaining relations between communities during this turbulent period, when the value of bronze dropped and people turned to farming instead."
A number of these prehistoric trash heaps, which resulted from potentially the largest feasts in Britain until the Middle Ages (that would mean they even outdid the Romans), were eventually incorporated into the landscape as small hills.
"Overall, the research points to the dynamic networks that were anchored on feasting events during this period and the different, perhaps complementary, roles that each midden had at the Bronze Age-Iron Age transition," Madgwick concluded.
Since previous research indicates that Late Neolithic (2,800 BCE to 2,400 BCE) communities in Britain were also organizing feasts that attracted guests—and their pigs—from far and wide, I think it's fair to say that prehistoric British people were throwing successful ragers across 2,000 years.
Journal Reference: Esposito, Carmen et al. Diverse feasting networks at the end of the Bronze Age in Britain (c. 900-500 BCE) evidenced by multi-isotope analysis [OPEN], iScience, Volume 0, Issue 0, 113271 https://doi.org/10.1016/j.isci.2025.113271
Arbitrarily inflated lock-in-tastic fees curbed as movement charges must be cost-linked:
Most of the provisions of the EU Data Act will officially come into force from the end of this week, requiring cloud providers to make it easier for customers to move their data, but some of the big players are keener than others.
The European Data Act is an ambitious attempt by the European Commission to galvanize the market for digital services by opening up access to data. But it also contains provisions to permit customers to move seamlessly between different cloud operators and combine data services from different providers in a so-called multi-cloud strategy.
Cloud users have often complained about the fees that operators charge whenever data is transferred outside of their networks. Investigations by regulators such as the UK's Competition and Markets Authority (CMA) have led the big three platforms – AWS, Microsoft's Azure and Google Cloud – to all waive egress fees, but only for users quitting their platforms.
While the Data Act doesn't rule out vendors charging data transfer fees, it does expect cloud firms to pass on costs to customers rather than charging arbitrary or excessive payments.
Google is keen to publicize that it is going further than this and offering data movement at no cost for customers in both the European Union and the United Kingdom via a newly announced Data Transfer Essentials service.
There's a catch, of course – Google makes it clear that its service is designed for cost-optimized data transfer between two services of a customer organization that happen to be running on different cloud platforms.
In other words, it is for traffic that would effectively be considered internal to the customer organization, not for transfers to third parties. Google warns that if one of its audits uncovers the service being used for third-party transfers, that traffic will be billed as regular internet traffic.
Microsoft is offering at-cost transfer for customers and cloud service partners in the EU shifting data to another provider, but there are also strings attached. Customers must create an Azure Support request for the transfer, specifying where the data is to be moved, and it must also be to a service operated by the same customer, not to endpoints belonging to different customers.
We understand that AWS specifies that EU customers "request reduced data transfer rates for eligible use cases under the European Data Act," requiring them to contact customer support for further information. We asked AWS for clarification.
Google claims that its move demonstrates its own commitment to fostering an open and fair cloud market in Europe.
This might have something to do with it being a bit of an underdog here, making up about 10 percent of the European cloud market, while AWS is estimated to take 32 percent, and Azure another 23 percent.
"The original promise of the cloud is one that is open, elastic, and free from artificial lock-ins. Google Cloud continues to embrace this openness and the ability for customers to choose the cloud service provider that works best for their workload needs," said Google Cloud's senior director for global risk and compliance, Jeanette Manfra.
Pluralistic: Fingerspitzengefühl (08 Sep 2025) – Pluralistic: Daily links from Cory Doctorow:
This was the plan: America would stop making things and instead make recipes, the "IP" that could be sent to other countries to turn into actual stuff, in distant lands without the pesky environmental and labor rules that forced businesses to accept reduced profits because they weren't allowed to maim their workers and poison the land, air and water.
This was quite a switch! At the founding of the American republic, the US refused to extend patent protection to foreign inventors. The inventions of foreigners would be fair game for Americans, who could follow their recipes without paying a cent, and so improve the productivity of the new nation without paying rent to old empires over the sea.
It was only once America found itself exporting as much as it imported that it saw fit to recognize the prerogatives of foreign inventors, as part of reciprocal agreements that required foreigners to seek permission and pay royalties to American patent-holders.
But by the end of the 20th Century, America's ruling class was no longer interested in exporting things; they wanted to export ideas, and receive things in return. You can see why: America has a limited supply of things, but there's an infinite supply of ideas (in theory, anyway).
There was one problem: why wouldn't the poor-but-striving nations abroad copy the American Method for successful industrialization? If ignoring Europeans' patents allowed America to become the richest and most powerful nation in the world, why wouldn't, say, China just copy all that American "IP"? If seizing foreigners' inventions without permission was good enough for Thomas Jefferson, why not Jiang Zemin?
America solved this problem with the promise of "free trade." The World Trade Organization divided the world into two blocs: countries that could trade with one another without paying tariffs, and the rabble without, who had to navigate a complex O(n^2) problem of different tariff schedules between every pair of nations.
To join the WTO club, countries had to sign up to a side-treaty called the Trade-Related Aspects of Intellectual Property Rights (TRIPS). Under the TRIPS, the Jeffersonian plan for industrialization (taking foreigners' ideas without permission) was declared a one-off, a scheme only the US got to try and no other country could benefit from. For China to join the WTO and gain tariff-free access to the world's markets, it would have to agree to respect foreign patents, copyrights, trademarks and other "IP."
We know the story of what followed over the next quarter-century: China became the world's factory, and became so structurally important that even if it violated its obligations under the TRIPS, "stealing the IP" of rich nations, no one could afford to close their borders to Chinese imports, because every country except China had forgotten how to make things.
But this isn't the whole story – it's not even the most important part of it. In his new book Breakneck, Dan Wang (a Chinese-born Canadian who has lived extensively in Silicon Valley and in China) devotes a key chapter to "process knowledge":
What's "process knowledge"? It's all the intangible knowledge that workers acquire as they produce goods, combined with the knowledge that their managers acquire from overseeing that labor. The Germans call it "Fingerspitzengefühl" ("fingertip-feeling"), like the sense of having a ball balanced on your fingertips, and knowing exactly which way it will tip as you tilt your hand this way or that.
[...] Process knowledge is everything from "Here's how to decant feedstock into this gadget so it doesn't jam," to "here's how to adjust the flow of this precursor on humid days to account for the changes in viscosity" to "if you can't get the normal tech to show up and calibrate the part, here's the phone number of the guy who retired last year and will do it for time-and-a-half."
It can also be decidedly high-tech. A couple years ago, the legendary hardware hacker Andrew "bunnie" Huang explained to me his skepticism about the CHIPS Act's goal of onshoring the most advanced (4-5nm) chips.
[...] This process is so esoteric, and has so many figurative and literal moving parts, that it needs to be closely overseen and continuously adjusted by someone with a PhD in electrical engineering. That overseer needs to wear a clean-room suit, and they have to work an eight-hour shift without a bathroom, food or water break (because getting out of the suit means going through an airlock means shutting down the system means long delays and wastage).
That PhD EENG is making $50k/year. Bunnie's topline explanation for the likely failure of the CHIPS Act is that this is a process that could only be successfully executed in a country "with an amazing educational system and a terrible passport." For bunnie, the extensive educational subsidies that produced Taiwan's legion of skilled electrical engineers and the global system that denied them the opportunity to emigrate to higher-wage zones were the root of the country's global dominance in advanced chip manufacture.
I have no doubt that this is true, but I think it's incomplete. What bunnie is describing isn't merely the expertise imparted by attaining a PhD in electrical engineering – it's the process knowledge built up by generations of chip experts who debugged generations of systems that preceded the current tin-vaporizing Rube Goldberg machines.
[...] Wang evocatively describes how China built up its process knowledge over the WTO years, starting with simple assembly of complex components made abroad, then progressing to making those components, then progressing to coming up with novel ways of reconfiguring them ("a drone is a cellphone with propellers"). He explains how the vicious cycle of losing process knowledge accelerated the decline of manufacturing in the west: every time a factory goes to China, US manufacturers that had been in its supply chain lose process knowledge. You can no longer call up that former supplier and brainstorm solutions to tricky production snags, which means that other factories in the supply chain suffer, and they, too, get offshored to China.
America's vicious cycle was China's virtuous cycle. The process knowledge that drained out of America accumulated in China. Years of experience solving problems in earlier versions of new equipment and processes gives workers a conceptual framework to debug the current version – they know about the raw mechanisms subsumed in abstraction layers and sealed packages and can visualize what's going on inside those black boxes.
[...] But here's the thing: while "IP" can be bought and sold by the capital classes, process knowledge is inseparably vested in the minds and muscle-memory of their workers. People who own the instructions are constitutionally prone to assuming that making the recipe is the important part, while following the recipe is donkey-work you can assign to any freestanding oaf who can take instruction.
[...] The exaltation of "IP" over process knowledge is part of the ancient practice of bosses denigrating their workers' contribution to the bottom line. It's key to the myth that workers can be replaced by AI: an AI can consume all the "IP" produced by workers, but it doesn't have their process knowledge. It can't, because process knowledge is embodied and enmeshed, it is relational and physical. It doesn't appear in training data.
In other words, elevating "IP" over process knowledge is a form of class war. And now that the world's store of process knowledge has been sent to the global south, the class war has gone racial. Think of how Howard Dean – now a paid shill for the pharma lobby – peddled the racist lie that there was no point in dropping patent protections for the covid vaccines, because brown people in poor countries were too stupid to make advanced vaccines:
The truth is that the world's largest vaccine factories are to be found in the global south, particularly India, and these factories sit at the center of a vast web of process knowledge, embedded in relationships and built up with hard-won problem-solving.
Bosses would love it if process knowledge didn't matter, because then workers could finally be tamed by industry. We could just move the "IP" around to the highest bidders with the cheapest workforces. But Wang's book makes a forceful argument that it's easier to build up a powerful, resilient society based on process knowledge than it is to do so with IP. What good is a bunch of really cool recipes if no one can follow them?
I think that bosses are, psychoanalytically speaking, haunted by the idea that their workers own the process knowledge that is at the heart of their profits. That's why bosses are so obsessed with noncompete "agreements." If you can't own your workers' expertise, then you must own your workers. Any time a debate breaks out over noncompetes, a boss will say something like, "My intellectual property walks out the door of my shop every day at 5PM." They're wrong: the intellectual property is safely stored on the company's hard drives – it's the process knowledge that walks out the door.
Wyden says default use of RC4 cipher led to last year's breach of health giant Ascension:
A prominent US senator has called on the Federal Trade Commission to investigate Microsoft for "gross cybersecurity negligence," citing the company's continued use of an obsolete and vulnerable form of encryption that Windows uses by default.
In a letter to FTC Chairman Andrew Ferguson, Sen. Ron Wyden (D–Ore.) said an investigation his office conducted into the 2024 ransomware breach of the health care giant Ascension found that the default use of the RC4 encryption cipher was a direct cause. The breach led to the theft of medical records of 5.6 million patients.
It's the second time in as many years that Wyden has used the word "negligence" to describe Microsoft's security practices.
"Because of dangerous software engineering decisions by Microsoft, which the company has largely hidden from its corporate and government customers, a single individual at a hospital or other organization clicking on the wrong link can quickly result in an organization-wide ransomware infection," Wyden wrote in the letter, which was sent Wednesday. "Microsoft has utterly failed to stop or even slow down the scourge of ransomware enabled by its dangerous software."
RC4 is short for Rivest Cipher 4, a nod to mathematician and cryptographer Ron Rivest of RSA Security, who developed the stream cipher in 1987. It was a trade-secret-protected proprietary cipher until 1994, when an anonymous party posted a technical description of it to the Cypherpunks mailing list. Within days, the algorithm was broken, meaning its security could be compromised using cryptographic attacks. Despite the known susceptibility to such attacks, RC4 remained in wide use in encryption protocols, including SSL and its successor TLS, until about a decade ago.
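Part of why the leaked description spread so quickly is that RC4 is tiny. A minimal Python sketch of the two stages — key scheduling (KSA) and keystream generation (PRGA) — checked against the widely published "Key"/"Plaintext" test vector:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute 0..255 using the key bytes.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA): XOR each data byte with a keystream byte.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Known test vector: key "Key", plaintext "Plaintext"
print(rc4(b"Key", b"Plaintext").hex())  # bbf316e8d940af0ad3
```

Because encryption is just an XOR with the keystream, the same function decrypts: applying it twice with the same key returns the original data.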
Microsoft, however, continues to support RC4 as a default for securing Active Directory, the Windows component administrators use to configure and provision user accounts inside large organizations. While Windows offers more robust encryption options, many organizations never enable them, so Kerberos authentication in Active Directory falls back to the vulnerable RC4 cipher.
[...] Wyden said his office's investigation into the Ascension breach found that the ransomware attackers' initial entry into the health giant's network was the infection of a contractor's laptop after using Microsoft Edge to search Microsoft's Bing site. The attackers were then able to expand their hold by attacking Ascension's Active Directory and abusing its privileged access to push malware to thousands of other machines inside the network. The means for doing so, Wyden said: Kerberoasting.
[...] Referring to the Active Directory default, Johns Hopkins cryptographer Matthew Green wrote:
It's actually a terrible design that should have been done away with decades ago. We should not build systems where any random attacker who compromises a single employee laptop can ask for a message encrypted under a critical password! This basically invites offline cracking attacks, which do not need even to be executed on the compromised laptop—they can be exported out of the network to another location and performed using GPUs and other hardware.
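Green's point — that a ticket encrypted under a password-derived key invites offline guessing — can be illustrated with a toy sketch. Everything here is a stand-in: SHA-256 replaces the real RC4-HMAC key derivation (which is an unsalted MD4 hash of the UTF-16LE password, making guesses cheap to test), the "encryption" is a simplistic XOR, and the password and ticket contents are invented. It shows the shape of the attack, not a working exploit:

```python
import hashlib

def derive_key(password: str) -> bytes:
    # Stand-in for RC4-HMAC key derivation. The real derivation is an
    # unsalted hash of the password alone, which is what makes each
    # guess so cheap to test offline.
    return hashlib.sha256(password.encode()).digest()

def xor_ticket(key: bytes, data: bytes) -> bytes:
    # Toy "encryption": XOR with a keystream derived from the key.
    # XOR is its own inverse, so this both encrypts and decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

# Attacker has captured a service ticket (hypothetical contents)
# encrypted under a key derived from a weak service-account password.
captured = xor_ticket(derive_key("Summer2024!"), b"krbtgt-ticket-data")

# Offline dictionary attack: each guess is verified locally, with no
# further contact with the victim network — so it can run on GPUs elsewhere.
wordlist = ["password", "letmein", "Summer2024!", "hunter2"]
for guess in wordlist:
    if xor_ticket(derive_key(guess), captured).startswith(b"krbtgt"):
        print("cracked:", guess)
        break
```

The real attack (Kerberoasting) works the same way in outline: request a ticket, carry it off, and test password guesses against it at leisure.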
More than 11 months after announcing its plans to deprecate RC4 in Kerberos, Microsoft has provided no timeline for doing so. What's more, Wyden said, the announcement was made in a "highly technical blog post on an obscure area of the company's website on a Friday afternoon." Wyden also criticized Microsoft for declining to "explicitly warn its customers that they are vulnerable to the Kerberoasting hacking technique unless they change the default settings chosen by Microsoft."
Wyden went on to criticize Microsoft for building a "multibillion-dollar secondary business selling cybersecurity add-on services to those organizations that can afford it. At this point, Microsoft has become like an arsonist selling firefighting services to their victims."
Of all the eccentricities of the quantum realm, time crystals—atomic arrangements that repeat certain motions over time—might be some of the weirdest. But they certainly exist, and to provide more solid proof, physicists have finally created a time crystal we can actually see.
In a recent Nature Materials paper, physicists at the University of Colorado Boulder presented a new time crystal design: a glass cell filled with liquid crystals—rod-shaped molecules stuck in strange limbo between solid and liquid. It's the same stuff found in smartphone LCD screens. When hit with light, the crystals jiggle and dance in repeating patterns that the researchers say resemble "psychedelic tiger stripes."
"They can be observed directly under a microscope and even, under special conditions, by the naked eye," said Hanqing Zhao, study lead author and a graduate student at the University of Colorado Boulder, in a release. Technically, these crystalline dances can last for hours, like an "eternally spinning clock," the researchers added.
Time crystals first appeared in a 2012 paper by Nobel laureate Frank Wilczek, who pitched an idea for a seemingly impossible crystal that breaks a fundamental symmetry of physics. Specifically, a time crystal breaks time-translation symmetry: instead of sitting still, its constituents repeat the same motion over and over, so their positions change periodically in time much as an ordinary crystal's atoms repeat in space.
Physicists have since demonstrated versions of Wilczek's proposal, but these crystals lasted for a terribly short time and were microscopic. Zhao and Ivan Smalyukh, the study's senior author and a physicist at the University of Colorado Boulder, wanted to see if they could overcome these limitations.
For the new time crystal, the duo exploited the molecules' "kinks"—their tendency to cluster together when squeezed in a certain way. Once together, these kinks behave like whole atoms, the researchers explained.
"You have these twists, and you can't easily remove them," Smalyukh said. "They behave like particles and start interacting with each other."
The team coated two glass cells with dye molecules, sandwiching a liquid crystal solution between the layers. When they flashed the setup with polarized light, the dye molecules churned inside the glass, squeezing the liquid crystal. This formed thousands of new kinks inside the crystal, the researchers explained.
"That's the beauty of this time crystal," said Smalyukh. "You just create some conditions that aren't that special. You shine a light, and the whole thing happens."
The team believes its iteration of the time crystal could have practical uses. For instance, a "time watermark" printed on bills could be used to identify counterfeits. Also, stacked layers could serve as a tiny data center.
It's rare for quantum systems to be visible to the naked eye. Only time will tell if this time crystal amounts to anything—the researchers "don't want to put a limit on the applications right now"—but even if it doesn't, it's still a neat demonstration of how physical theories exist in strange, unexpected corners of reality.
Journal Reference: Zhao, H., Smalyukh, I.I. Space-time crystals from particle-like topological solitons. Nat. Mater. (2025). https://doi.org/10.1038/s41563-025-02344-1
The RTX 4090 48GB looks like a whole different card:
Remember those Frankenstein GeForce RTX 4090 48GB graphics cards emerging from China? Russian PC technician and builder VIK-on [17:17 --JE] has provided detailed insights into how Chinese factories are transforming the GeForce RTX 4090, once regarded as one of the best graphics cards, to effectively double its memory capacity specifically for AI workloads.
As a mainstream product, the GeForce RTX 4090 does not support memory chips in a clamshell configuration, unlike Nvidia's professional and data center products. Essentially, this means that the Ada Lovelace flagship only houses memory chips on one side of the PCB. In clamshell mode, graphics cards typically feature memory chips on both sides of the PCB. This limitation is addressed by the GeForce RTX 4090 48GB "upgrade kit," which sells for around $142 in China.
The upgrade kit comprises a custom PCB designed with a clamshell configuration, facilitating the installation of twice the number of memory chips. Most components are pre-installed at the manufacturing facility, requiring the user to solder the GPU and memory chips onto the PCB. Additionally, the upgrade kit includes a blower-style cooling solution, designed for integration with workstation and server configurations that utilize multi-GPU architectures.
VIK-on demonstrated the process of extracting the AD102 silicon and twelve 2GB GDDR6X memory chips from the MSI GeForce RTX 4090 Suprim and installing them onto the barebone PCB. The technician utilized spare GDDR6X memory chips from defective graphics cards, thereby obtaining additional GDDR6X memory at no cost. Clearly, this operation requires specialized soldering skills and access to appropriate high-end tools.
The technician also uploaded a leaked, modified firmware onto the GeForce RTX 4090 48GB. It is important to note that each graphics card carries a unique GPU device ID, which encodes the card's pertinent configuration. During system initialization, the firmware verifies that the GPU device ID matches the one embedded in the chip. Hacked firmware has been circulating for some time.
Indeed, it was during the era of the GeForce RTX 20-series (Turing) that enthusiasts uncovered the capability to deactivate memory channels. This feature was not advantageous for the general public, as it was illogical to impair a fully functional graphics card by reducing its memory capacity. However, for repair professionals, this discovery proved invaluable, enabling them to salvage graphics cards with defective memory channels. Consequently, this led to the emergence of unorthodox models in the market, such as the GeForce RTX 3090 with 20GB of memory instead of the standard 24GB, or the GeForce RTX 3070 Ti with 6GB of memory instead of the expected 8GB.
The firmware modders identified the possibility of expanding memory capacity through the modification. Consequently, the GeForce RTX 4090 48GB and the GeForce RTX 4080 Super 32GB came into existence.
Upgrading the GeForce RTX 4090 to 48GB is an expensive endeavor. First, it is necessary to possess the graphics card in order to extract the Ada Lovelace silicon and GDDR6X memory chips. If you do not have any GDDR6X modules readily available, you'll need to purchase each module, currently priced at $24 on Chinese e-commerce platforms.
Consequently, the total cost for the upgrade is $430, excluding shipping costs. Assuming you were fortunate enough to purchase a GeForce RTX 4090 at its original MSRP of $1,599, the total amounts to $2,029. These GeForce RTX 4090 48GB graphics cards typically sell for around $3,320 in China, so you're saving close to 39% - again, assuming you have the soldering skills and access to all the equipment necessary for the upgrade. Alternatively, you can pay someone more qualified to perform the upgrade for you.
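The article's arithmetic can be checked directly (all figures are as quoted: the $142 kit, $24 per GDDR6X module, the $1,599 MSRP, and the ~$3,320 Chinese street price for a ready-made card):

```python
# Memory math: the stock RTX 4090 carries 12 x 2GB GDDR6X chips on one
# side of the PCB; the clamshell "upgrade" PCB doubles the chip count.
stock_chips, chip_gb = 12, 2
assert stock_chips * chip_gb == 24          # stock capacity in GB
assert 2 * stock_chips * chip_gb == 48      # clamshell capacity in GB

# Cost math, using the article's figures.
kit = 142                  # clamshell PCB + blower-style cooler
module = 24                # one 2GB GDDR6X chip on Chinese e-commerce sites
extra_modules = 12         # the second set of twelve chips
upgrade = kit + extra_modules * module
print(upgrade)             # 430, matching the quoted total

msrp = 1599
total = msrp + upgrade     # 2029
resale = 3320              # typical price of a ready-made 4090 48GB in China
savings = 1 - total / resale
print(f"{savings:.0%}")    # 39%
```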
RTX 4090 supply has already started to dwindle, meaning that sooner or later, Chinese factories will likely begin experimenting with the GeForce RTX 5090, if they haven't already. A rumor is already circulating about the GeForce RTX 5090 128GB. While it may seem like a scam now, it could become a reality further down the road.
AI's free web scraping days may be over, thanks to this new licensing protocol:
AI companies are capturing as much content as possible from websites while also extracting information. Now, several heavyweight publishers and tech companies -- Reddit, Yahoo, People, O'Reilly Media, Medium, and Ziff Davis (ZDNET's parent company) -- have developed a response: the Really Simple Licensing (RSL) standard.
You can think of RSL as Really Simple Syndication's (RSS) younger, tougher brother. While RSS is about syndication, getting your words, stories, and videos out onto the wider web, RSL says: "If you're an AI crawler gobbling up my content, you don't just get to eat for free anymore."
The idea behind RSL is brutally simple. Instead of the old robots.txt file -- which only said, "yes, you can crawl me," or "no, you can't," and which AI companies often ignore -- publishers can now add something new: machine-readable licensing terms.
Want an attribution? You can demand it. Want payment every time an AI crawler ingests your work, or even every time it spits out an answer powered by your article? Yep, there's a tag for that too.
This approach allows publishers to define whether their content is free to crawl, requires a subscription, or will cost "per inference," that is, every time ChatGPT, Gemini, or any other model uses content to generate a reply.
The key capabilities of RSL include:
- A shared vocabulary that lets publishers define licensing and compensation terms, including free, attribution, pay-per-crawl, and pay-per-inference compensation.
- An open protocol to automate content licensing and create internet-scale licensing ecosystems between content owners and AI companies.
- Standardized, public catalogs of licensable content and datasets through RSS and Schema.org metadata.
- An open protocol for encrypting digital assets to securely license non-public proprietary content, including paywalled articles, books, videos, and training datasets.
- Supporting collective licensing via RSL Collective or any other RSL-compatible licensing server.
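The pay-per-inference idea above can be sketched in code. RSL defines its own machine-readable vocabulary; the element and attribute names below are illustrative stand-ins, not the real RSL schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified licensing terms -- NOT the actual RSL schema.
terms_xml = """
<license>
  <content url="https://example.com/articles/"/>
  <permits usage="train-ai">attribution</permits>
  <payment type="per-inference" currency="USD">0.002</payment>
</license>
"""

root = ET.fromstring(terms_xml)
print(root.find("permits").get("usage"))            # train-ai
fee = float(root.find("payment").text)
# An AI company's crawler could tally what it owes per answer generated:
print(f"1M inferences -> ${fee * 1_000_000:,.0f}")  # $2,000
```

The point of the standard is that terms like these live alongside (or inside) robots.txt, so a compliant crawler can discover and honor them automatically instead of treating every site as free to ingest.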
It's a clever fix for a complex problem. As Tim O'Reilly, the O'Reilly Media CEO and one of the RSL initiative's high-profile backers, said: "RSS was critical to the internet's evolution...but today, as AI systems absorb and repurpose that same content without permission or compensation, the rules need to evolve. RSL is that evolution."
O'Reilly's right. RSS helped the early web scale, whether through blogs, news syndication, or podcasts. But today's web isn't just competing for human eyeballs. The web is now competing to supply the training and reasoning fuel for AI models that, so far, aren't exactly paying the bills for the sites they're built on.
Of course, tech is one thing; business is another. That's where the RSL Collective comes in. Modeled on music's ASCAP and BMI, the nonprofit is essentially a rights-management clearinghouse for publishers and creators. Join for free, pool your rights, and let the Collective negotiate with AI companies to ensure you're compensated.
As anyone in publishing knows, a lone freelancer, or most media outlets for that matter, has about as much leverage against the likes of OpenAI or Google as a soap bubble in a wind tunnel. But a collective that represents "the millions" of online creators suddenly has some bargaining power.
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Let's step back. For the last few years, AI has been snacking on the internet's content buffet with zero cover charge. That approach worked when the web's economics were primarily driven by advertising. However, those days are history. The old web ad model has left publishers gutted while generative AI companies raise billions in funding.
So, RSL wants to bolt a licensing framework directly into the web's plumbing. And because RSL is an open protocol, just like RSS, anyone can use it. From a giant outlet like Yahoo to a niche recipe blogger, RSL allows web publishers to spell out what they want in return when AI comes crawling.
The work of guiding RSL falls to the RSL Technical Steering Committee, which reads like a who's who of the web's protocol architects: Eckart Walther, co-author of RSS; RV Guha, Schema.org and RSS; Tim O'Reilly; Stephane Koenig, Yahoo; and Simon Wistow, Fastly.
The web has always run on invisible standards such as HTTP, HTML, RSS, and robots.txt. In Web 1.0, social contracts were written into code. If RSL catches on, it may be the next layer in that lineage: the one that finally gives human creators a fighting chance in the AI economy.
And maybe, just maybe, RSL will stop the AI feast from becoming an all-you-can-eat buffet with no one left to cook.
Scientists urge EU governments to reject Chat Control rules:
As the final vote draws closer, an open letter has highlighted significant risks that remain in the EU's controversial 'Chat Control' regulation.
617 of the world's top scientists, cryptographers and security researchers have released an open letter today (10 September) calling on governments to reject the upcoming final vote on the EU's 'Chat Control' legislation.
The group of scientists and researchers – hailing from 35 countries and including the likes of AI expert Dr Abeba Birhane – has warned that the EU's proposed legislation targeting online child sexual abuse material (CSAM), known colloquially as Chat Control, would undermine the region's digital security and privacy protections and "endangers the digital safety of our society in Europe and beyond".
The group also warned that the new rules will create "unprecedented capabilities" for surveillance, control and censorship, and has an "inherent risk for function creep and abuse by less democratic regimes".
This is not the first time this collective has warned against the regulation, having previously published its recommendations in July 2023, May 2024 and September 2024.
The proposed legislation would require providers of messaging services such as WhatsApp, Signal, Instagram, email and more to scan their users' private digital communications and chats for CSAM. This scanning would apply even to end-to-end encrypted communications, regardless of a provider's own security protections.
Any content flagged as potential CSAM by the scanning algorithms would then be automatically reported to authorities.
Currently, 15 EU member states have expressed support for the legislation – including Ireland. Six member states oppose the rules, while six remain undecided in their stance.
While the latest draft of the legislation has been amended to exclude the detection of audio and text communications – limiting detection to "visual content", such as images and URLs – the scientists argue that the legislation in its current form is still unacceptable.
The group argues that none of the legislation's changes address its major concerns, namely the infeasibility of scanning hundreds of millions of users for CSAM content with appropriate accuracy, the undermining of end-to-end encryption protections and the heightened privacy risks to EU citizens.
While the latest draft of the regulation has reduced the scope of targeted material (limited to visual content and URLs), the group of scientists states that this reduction will not improve effectiveness.
"There is no scientific basis to argue that detection technology would work any better on images than on text," reads the letter, with further assertions that CSAM detection methods can be easily evaded. The group states that just changing a few bits in an image is "sufficient to ensure that an image will not trigger state-of-the-art detectors".
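The fragility of exact-match detection is easy to demonstrate. Hash-list systems compare digests of known material, and flipping a single bit of the input yields a completely unrelated digest (perceptual hashes are more tolerant of small changes, but the letter argues they too can be evaded):

```python
import hashlib

# Stand-in for an image file's raw bytes.
image = bytearray(b"\x89PNG..." + bytes(1024))
original = hashlib.sha256(image).hexdigest()

image[512] ^= 0x01                 # flip one bit deep in the payload
modified = hashlib.sha256(bytes(image)).hexdigest()

print(original[:16], modified[:16])
print(original == modified)        # False: the digest no longer matches
```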
The group also criticises the EU's proposal of using AI and machine learning to detect CSAM imagery due to the technology's unreliability.
"We reiterate to the best of our knowledge there is no machine-learning algorithm that can perform such detection without committing a large number of errors, and that all known algorithms are fundamentally susceptible to evasion."
When it comes to URLs, the group says that evading detectors is even easier, given the ease with which users can redirect to other URLs.
In terms of end-to-end encryption, the group says that the legislation violates the core principles of the practice – ensuring that only the intended two endpoints can access the data, and avoiding a single point of failure.
"Enforcing a detection mechanism to scan private data before it gets encrypted – with the possibility to transmit it to law enforcement upon inspection – inherently violates both principles," says the group.
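The two principles can be made concrete with a toy pipeline. The "cipher" here is a keystream derived with SHA-256 and is for illustration only, not real cryptography; the structural point is that a client-side scanner necessarily reads the plaintext before the encryption boundary, so whoever controls the scanner sees every message:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher for illustration only -- not secure.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def scanner(plaintext: bytes) -> bool:
    # Client-side scanning runs HERE, on plaintext, before encryption.
    return b"flagged-term" in plaintext

key = b"shared-only-by-the-two-endpoints"
msg = b"private message"

if scanner(msg):                        # the scan breaks the E2E boundary
    print("reported to authorities")
ciphertext = keystream_xor(key, msg)    # only now is the message protected
assert keystream_xor(key, ciphertext) == msg  # the other endpoint decrypts
```

The scanner is also exactly the "single point of failure" the researchers warn about: compromise that one function and every user's pre-encryption traffic is exposed.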
The researchers also call into question the proposal of service providers using age verification and age assessment measures, pointing to recent backlash to the UK's Online Safety Act in relation to similar requirements. The group states that these age verification rules could become a reason to ban the use of virtual private networks (VPNs), thus threatening freedom of speech, freedom of information and undermining "the tools needed by whistleblowers, journalists and human rights activists".
Lastly, the researchers find that the current "techno-solutionist proposal" has little potential to achieve its stated ambition – the eradication of abuse perpetrated against children.
The group calls on administrations to focus instead on measures recommended by the UN, such as education, trauma-sensitive reporting hotlines and keyword-search based interventions.
"By eliminating abuse, these measures will also eradicate abusive material without introducing any risk to secure digital interactions which are essential for the safety of the children the proposed regulation aims to protect."
The Chat Control legislation has been under fire for a number of years now from digital rights groups and advocates including the Pirate Party's Patrick Breyer.
Last year, voting on the legislation was temporarily withdrawn by the EU Council in a move that was believed to have been influenced by prominent pushback against the regulation.
Irish cybersecurity expert Brian Honan told SiliconRepublic.com that Chat Control could potentially "put everyone under mass surveillance by scanning all messages on our personal devices before they are sent, even encrypted ones, undermining the security of the messaging platforms and imposing on our rights to privacy".
"The proposals of client-side scanning also introduces a significant risk of that software being targeted by criminals, hostile nation states, and being abused by authoritarian governments."
He added that while Chat Control's goal of stopping the spread of CSAM is worthy and should be supported, "the proposed EU Chat Control is not the appropriate mechanism to do so".
"Real progress against dealing with CSAM will come from investing in more resources for police forces to investigate and prosecute those behind this material, stronger sanctions against platforms and countries that allow this material, and increased support for hotlines like the Irish Internet Hotline."
See also: Encrypted email provider Tuta warns EU privacy is at risk with Chat Control law:
ASML, the Dutch multinational producer of EUV lithography machines, has announced an investment of €1.3bn in French AI startup Mistral, which positions itself as a European alternative to ChatGPT and similar services.
It is to become "a long-term collaboration agreement to explore the use of AI models across ASML's product portfolio as well as research, development and operations, to benefit ASML customers with faster time to market and higher performance holistic lithography systems ..." in a "first-of-its-kind partnership between a semiconductor equipment manufacturer and a leading AI company".
The deal brings together Europe's top AI start-up and one of the continent's most valuable public companies, which supplies the equipment to make the advanced chips that are used to train and run AI models.
Arthur Mensch, current CEO of Mistral, was quoted by the Financial Times as saying that "it's important for European companies not to have too much dependency on US technology", while his counterpart at ASML claimed that sovereignty was an additional benefit, but they didn't "pick Mistral because they were European", rather because AI will become "a strategic technology ... I think this will help the European ecosystem, but we do it because it's good for Mistral and it's good for ASML."
In the short term, he claimed, ASML would begin using Mistral's AI expertise to help develop new chipmaking tools, as well as offering customers new capabilities as they use its existing systems. "We started to look for a partner, because we thought that this is not something we should try to do ourselves ...the company's expertise is in chipmaking equipment and we are not AI experts".
In case you're groaning now while reaching for that special bottle of whisky under your desk, labeled peak AI hype, there was an intriguing comment on the Interwebs about the true purpose of the deal.
AI companies like Mistral use specialised wafer-scale chips from Cerebras in their data centres. Hardware acceleration and software optimisation partly explain why Mistral's models are so fast. Were ASML to diversify and pull that kind of silicon-software co-design into its own stack (computational lithography, metrology and field service), the benefits are obvious: faster OPC/ILT runs, smarter tool control, and on-tool copilots that digest logs and manuals in real time. That's probably the logic behind ASML's €1.3bn move for a stake in Mistral and a strategic partnership: it's product synergy, not flag-waving--even if the new CEO happens to be French.
LIGO's discovery of gravitational waves—ripples in space-time from powerful cosmic events—hit astrophysics more like a tidal wave than a ripple. As it marks its tenth anniversary, the multinational collaboration has set another scientific milestone, this time solving not one but two mysteries in black hole physics.
A paper published today in Physical Review Letters describes how the LIGO-Virgo-KAGRA (LVK) Collaboration captured the sharpest-ever gravitational wave signal from a black hole merger. Further analysis of GW250114, the signal in question, validates two major predictions made by Stephen Hawking and Roy Kerr in 1971 and 1963, respectively.
First, we're more certain than ever that when black holes merge, the event horizon of the resulting black hole has a larger surface area than those of its two parents combined. Second, just two numbers suffice to fully describe a black hole: its mass and its spin.
"It's a beautiful, landmark result," Arthur Kosowsky, a theoretical physicist at the University of Pittsburgh who wasn't involved in the new work, told Gizmodo in an email. The latest results are "a confirmation of both the fundamental nature of a spinning black hole and also a remarkable test of strong-field general relativity," he added.
The latest results come almost ten years after GW150914, the first gravitational wave signal ever detected, which LIGO observed in 2015. In 2021, physicists used the 2015 signal to test Hawking's theorem. The team assessed a confidence level of 95% for this test, but with the new, cleaner result, that has jumped to an impressive 99.999%—as close as one gets to the "truth" in modern science.
"A decade ago we couldn't be certain that black holes ever collide in our universe," Steve Fairhurst, LIGO spokesperson and a physicist at Cardiff University in the United Kingdom, told Gizmodo. "Now we observe several black hole mergers per week. With the 300 gravitational-wave candidates observed to date, we're beginning to provide a census of the population of black holes in the universe."
Black holes lose a lot of mass during a merger. The violent collision can also speed up a black hole's spin, decreasing its area. Hawking and Jacob Bekenstein's black hole area theorem posits that, despite these factors, the product of a merger will still be a bigger black hole.
In the merger that produced GW250114, the initial black holes had a combined surface area of around 92,665 square miles (240,000 square kilometers), whereas the final black hole's surface area measured about 154,441 square miles (400,000 square kilometers). To put the final product in perspective, it has about 63 times the mass of our Sun and spins at 100 revolutions per second, according to the study.
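Those figures are consistent with the Kerr horizon-area formula, A = 8π(GM/c²)²(1 + √(1 − χ²)), where χ is the dimensionless spin. A quick sketch (the spin value χ ≈ 0.68 is an illustrative assumption; the article gives only the mass and rotation rate):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def kerr_horizon_area_km2(mass_solar: float, chi: float) -> float:
    """Horizon area A = 8*pi*(GM/c^2)^2 * (1 + sqrt(1 - chi^2))."""
    m_geom = G * mass_solar * M_SUN / c**2            # GM/c^2 in metres
    area_m2 = 8 * math.pi * m_geom**2 * (1 + math.sqrt(1 - chi**2))
    return area_m2 / 1e6                              # m^2 -> km^2

# ~377,000 km^2, in the ballpark of the quoted ~400,000 km^2
print(f"{kerr_horizon_area_km2(63, 0.68):,.0f} km^2")
```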
"Musical" software developed by LIGO members—including Gregorio Carullo, an astrophysicist at the University of Birmingham in the United Kingdom—enabled the team to make such precise measurements. The tool essentially let them "hear" each black hole as it merged into a larger one, at sensitivity levels four times higher than a decade ago.
"Black holes are black, so it's very difficult to 'look' into them," Carullo told Gizmodo in a video call. Gravitational wave experiments offer an easy workaround, since everything controlled by gravity technically produces gravitational waves. Massive, cantankerous black holes are especially loud—and we're getting better at tuning into these signals, he explained.
"When black holes collide, they emit these characteristic sounds that are specific and peculiar to that black hole," Carullo said. "If we can hear these sounds, or notes, [that] depend on just the mass and the spin...you can extract the mass and the spin of the black hole."
The fact that this is possible at all is what makes black holes so extraordinary, he added. "People think of black holes as something scary, but actually, it's the simplest thing you can imagine."
Excitingly, gravitational wave astronomy is still "in its infancy," Fairhurst said. LIGO's Nobel-winning discovery was huge, no doubt, but no end goals exist for this project. If anything, the discovery of GW250114 marked the beginning of a new chapter in astronomy.
"In the coming years, we will continue to see the sensitivity of the detectors improve, providing ever more and higher-fidelity observations," Fairhurst said. "At some stage, we are likely to observe something unexpected—either a signal that is hard to explain astrophysically, one that doesn't exactly match the predictions of general relativity, or a signal from an unexpected source."
In a release, Kip Thorne—one of three physicists who fathered LIGO—recalled Hawking asking him, shortly after LIGO's historic gravitational wave detection in 2015, whether the instrument could test his area theorem. Hawking, unfortunately, passed away three years before LIGO finally did just that.
The anecdote, along with the story of how LIGO arrived at GW250114, shows how generations of breakthroughs—both theoretical and experimental—converge to expand our understanding of the universe. And that's something to be very excited about.
Journal Reference:
LIGO Scientific, Virgo, and KAGRA Collaborations, A. G. Abac, I. Abouelfettouh, et al. GW250114: Testing Hawking's Area Law and the Kerr Nature of Black Holes [open], Physical Review Letters (DOI: 10.1103/kw5g-d732)
Dead Internet Theory Lives: One Out of Three of You Is a Bot:
According to Cloudflare, nearly one-third of all internet traffic is now bots. Most of those bots you won't ever directly interact with, as they are crawling the web and indexing websites or performing specific tasks—or, increasingly, collecting data to train AI models. But it's the bots that you can see that have people like OpenAI CEO Sam Altman and others questioning (albeit with seemingly zero remorse or consideration of any alternative) whether he and his cohort are destroying the internet.
Last week, Altman responded to a post that showed lots of comments in the subreddit r/ClaudeCode, a Reddit community built around Anthropic's Claude Code tool, praising OpenAI's Codex, an AI coding agent. "i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real," he wrote, very [subtly (sp.)] acknowledging how great his own product is.
While Altman suggested some of this may be people adopting the quirks and word choices of chatbots, among other factors, he did acknowledge that "the net effect is somehow AI twitter/AI reddit feels very fake in a way it really didnt a year or two ago." It follows a previous observation he made earlier this month in which he said, "i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now."
"Dead internet theory" is the idea that much of the content online is created, interacted with, and fed to us by bots. If you believe the conspiratorial origins of the theory, which is thought to have first cropped up on imageboards around 2021, it's an effort to control human behavior. If you're slightly less blackpilled about the whole thing, then perhaps you're more into the idea that it's primarily driven by the monetary incentives of the internet, where engagement—no matter how low value it may be—can generate revenue.
Interestingly, the theory appeared pre-ChatGPT, suggesting the bot problem was bad before LLMs became widely accessible. Even then, there was evidence that a ton of internet traffic came from bots (some estimates place it over 50%, which is well above Cloudflare's measurements), and there were concerns about "The Inversion," or the point where fraud detection systems mistake bot behavior for human and vice versa.
But now, at a time when companies like OpenAI are making publicly available agents that can navigate the web like a person and perform tasks for them, the level of authenticity online is likely to plummet even further. Altman seems to see it, but hasn't suggested actually, you know, *doing* anything about it.
It's not dissimilar from a situation earlier this year in which Altman warned that AI tools have "fully defeated" most authentication services that humans rely on to verify their identity and said that, as a result, scams are likely going to explode. Just like Altman's observation about inauthentic behavior on social media, he seemed to have zero interest in slowing his company's activity to stop the erosion of the digital systems we count on, despite seemingly being able to recognize the pitfalls.
Why? Well, how about another conspiracy theory? Perhaps it's because Altman has another company he'd like to pitch as the solution for it all: his bizarre "World" identification verification system/crypto scheme that requires people to scan their eyeballs to prove they are human. He's already broached a potential deal with Reddit to verify its users as authentic—noteworthy considering he's now called out bot activity on the platform. The faster we get pushed to the Dead Internet Theory cliff, the more incentive companies have to call on Altman's other firm to save us all. Call it the New Internet Order.
This Is the First Time Scientists Have Seen Decisionmaking in a Brain:
Neuroscientists from around the world have worked in parallel to map, for the first time, the entire brain activity of mice while they were making decisions. This achievement involved using electrodes inserted inside the brain to simultaneously record the activity of more than half a million neurons distributed across 95 percent of the rodents' brain volume.
Thanks to the resulting map, the researchers were able to confirm an already theorized architecture of thought: there is no single region exclusively in charge of decision making; instead, it is a coordinated process among multiple brain areas.
To illuminate all the regions involved in this decision making process, the team trained mice to turn a small steering wheel to move circles on a screen. If the shape moved correctly toward the center, the animal received sugar water as a reward.
After running this experiment with 139 mice across 12 labs and monitoring their brain activity, the team mapped 620,000 neurons located across 279 brain regions, with a subset of 75,000 well-isolated neurons then being analyzed. The resolution of the neural map produced is unprecedented in the study of the brain and its neural networks during the thinking process. Moreover, it represents a milestone both in terms of the type of specimen observed and the extent of the brain area covered. Until now, only the whole brains of fruit flies and fish larvae, or small sections of more complex brains, had been mapped.
The results were published in two papers in the journal Nature. Although the scientists involved acknowledge that the data are not definitive, they represent a starting point in the neural study of decisionmaking. The value of this data lies in the fact that the neural pathway of decisionmaking is now clearer, which will allow scientists to better understand complex thinking abilities and perform more advanced analyses. In addition, the dataset is publicly available.
"These initial conclusions corroborate aspects of brain function that were already intuited from the more limited studies available. It's as if we suspected how a movie would end without having seen the ending; now they've shown it to us," Juan Lerma, a research professor at the Spanish National Research Council, told the Science Media Centre España. (Lerma was not involved in the research.) "In short, the data show that, in decisionmaking, for example, many brain areas are involved, more than expected, while in sensory processing the areas are more distinct."
The adult human brain contains about 86 billion neurons, each capable of establishing thousands of synaptic connections with other cells. Although it weighs about 1.4 kilograms, the human brain consumes about 20 percent of the body's total energy at rest, a remarkably high proportion for its size. Although today's supercomputers outperform the brain in numerical calculations, none yet matches its energy efficiency or its capacity for learning, adaptation, and parallel processing. There's still a long way to go before neuroscience can fully map the neural processes of human decisionmaking, but studies like this one take us one step closer.
Journals:
A brain-wide map of neural activity during complex behaviour
Brain-wide representations of prior information in mouse decision-making