
When transferring multiple 100+ MB files between computers or devices, I typically use:

  • USB memory stick, SD card, or similar
  • External hard drive
  • Optical media (CD/DVD/Blu-ray)
  • Network app (rsync, scp, etc.)
  • Network file system (nfs, samba, etc.)
  • The "cloud" (Dropbox, iCloud, Google Drive, etc.)
  • Email
  • Other (specify in comments)
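
For the network-app option above, a minimal sketch of a resumable transfer helper in Python shows the usual pattern these tools rely on: copy in chunks, and pick up from wherever a previous interrupted attempt stopped. The file names are placeholders; real tools like rsync add checksums and delta transfer on top of this idea.

```python
import os
import shutil

def resumable_copy(src: str, dst: str, chunk_size: int = 1 << 20) -> int:
    """Copy src to dst, resuming from dst's current size if a partial
    copy already exists. Returns the number of bytes written this run."""
    offset = os.path.getsize(dst) if os.path.exists(dst) else 0
    written = 0
    with open(src, "rb") as fin, open(dst, "ab") as fout:
        fin.seek(offset)  # skip the part already transferred
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
            written += len(chunk)
    return written

# Demo: simulate an interrupted transfer, then resume it.
with open("demo_src.bin", "wb") as f:
    f.write(os.urandom(3 * 1024 * 1024))          # a 3 MB "large" file
with open("demo_src.bin", "rb") as f, open("demo_dst.bin", "wb") as g:
    g.write(f.read(1024 * 1024))                  # first 1 MB copied, then "interrupted"
resumable_copy("demo_src.bin", "demo_dst.bin")    # resumes, copies the remaining 2 MB
print(os.path.getsize("demo_dst.bin") == os.path.getsize("demo_src.bin"))
```

The same resume-from-offset idea is what `rsync --partial` and most "cloud" clients implement, with integrity checks added.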


posted by janrinok on Tuesday September 16, @04:38AM   Printer-friendly
from the why-our-sun-is-tame dept.

Solar pacifiers: Influence of the planets may subdue solar activity:

Our Sun is about five times less magnetically active than other sunlike stars – effectively a special case. The reason for this could reside in the planets in our solar system, say researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). In the last ten years, they have developed a model that derives virtually all the Sun's known activity cycles from the cyclical influence of the planets' tidal forces. Now they have also been able to demonstrate that this external synchronization automatically curbs solar activity (DOI: 10.1007/s11207-025-02521-0).

At the moment, the Sun is approaching the maximum level of activity it reaches roughly every eleven years. That is why we on Earth are observing more polar lights and solar storms, and more turbulent space weather in general, affecting everything from satellites in orbit to technological infrastructure on the ground. Even so, the strongest radiation eruptions from our Sun are 10 to 100 times weaker than those of comparable sunlike stars. This relatively quiet environment could be an important precondition for Earth being habitable. Not least for this reason, solar physicists want to understand precisely what drives solar activity.

It is known that solar activity has many patterns – both shorter and longer periodic fluctuations that range from a few hundred days to several thousand years. But researchers have very different ways of explaining the underlying physical mechanisms. The model developed by the team led by Frank Stefani at HZDR's Institute of Fluid Dynamics views the planets as pacemakers: on this understanding, approximately every eleven years, Venus, Earth and Jupiter focus their combined tidal forces on the Sun. Via a complex physical mechanism, each time they give the Sun's inner magnetic drive a little nudge. In combination with the rosette-shaped orbital motion of the Sun, this leads to overlapping periodic fluctuations of varying lengths – exactly as observed in the Sun.

"All the solar cycles identified are a logical consequence of our model; its explanatory power and internal consistency are really astounding. Each time we have refined our model we have discovered additional correlations with the periods observed," says Stefani. In the newly published work, it is the QBO – the Quasi-Biennial Oscillation – a fluctuation in various aspects of solar activity with a period of roughly two years. The special point here is that in Stefani's model, the QBO can not only be assigned a precise period; it also automatically leads to subdued solar activity.

Up to now, analyses of solar data have usually reported QBO periods of 1.5 to 1.8 years. In earlier work, some researchers had suggested a connection between the QBO and so-called Ground Level Enhancement events – sporadic occurrences during which energy-rich solar particles trigger a sudden increase in cosmic radiation at the Earth's surface. "A study conducted in 2018 shows that radiation events measured close to the ground occurred more in the positive phase of an oscillation with a period of 1.73 years. Contrary to the usual assumption that these solar particle eruptions are random phenomena, this observation indicates a fundamental, cyclical process," says Stefani. This is why he and his colleagues revisited the chronology once again. They discovered the greatest correlation for a period of 1.724 years. "This value is remarkably close to the value of 1.723 years which occurs in our model as a completely natural activity cycle," says Stefani. "We assume that it is QBO."

While the Sun's magnetic field oscillates between minimum and maximum over a period of eleven years, the QBO imposes an additional short-period pattern on the field strength. This subdues the field strength overall, because the Sun's magnetic field does not maintain its maximal value for as long. A frequency diagram reveals two peaks: one at maximum field strength and the other where the QBO swings back. This effect is known as bimodality of the solar magnetic field. In Stefani's model, the two peaks cause the average strength of the solar magnetic field to be reduced – a logical consequence of the QBO.
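
The mechanism can be sketched with a toy signal model. The 11-year and 1.724-year periods are from the article; the sinusoidal shapes and the 15% modulation depth are illustrative assumptions, not values from the paper:

```python
import numpy as np

t = np.linspace(0.0, 1100.0, 1_000_000)              # time in years, 100 solar cycles
envelope = np.abs(np.sin(np.pi * t / 11.0))          # 11-year activity cycle, peak = 1
qbo = 1.0 + 0.15 * np.sin(2.0 * np.pi * t / 1.724)   # ~1.72-year QBO modulation (depth assumed)
field = envelope * qbo

def near_max(x):
    """Fraction of time a signal spends within 10% of its own maximum."""
    return float(np.mean(x > 0.9 * x.max()))

# The modulated field only reaches its peak when both cycles line up,
# so it spends far less time near maximum strength than the bare cycle.
print(f"time near max, no QBO:   {near_max(envelope):.3f}")
print(f"time near max, with QBO: {near_max(field):.3f}")
```

A histogram of `field` over the high-activity phases shows the two-peaked (bimodal) distribution the article describes.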

"This effect is so important because the Sun is most active during the highest field strengths. This is when the most intense events occur with huge geomagnetic storms like the Carrington event of 1859 when polar lights could even be seen in Rome and Havana, and high voltages damaged telegraph lines. If the Sun's magnetic field remains at lower field strengths for a significantly longer period of time, however, this reduces the likelihood of very violent events," Stefani explains.

Publication:
F. Stefani, G. M. Horstmann, G. Mamatsashvili, T. Weier, Adding Further Pieces to the Synchronization Puzzle: QBO, Bimodality, and Phase Jumps, in Solar Physics, 2025 (DOI: 10.1007/s11207-025-02521-0)

Original Submission

posted by hubie on Monday September 15, @11:54PM   Printer-friendly
from the no-actual-manufacturing-jobs dept.

New Apple-funded program teaches manufacturing to US firms:

Apple has announced that it is opening its first Apple Manufacturing Academy in Detroit, providing a program of advanced manufacturing skills for US workers.

If you really want to bring manufacturing jobs back to the US, you need to train people rather than impose tariffs. As part of its existing commitment to investing $500 billion in US businesses, Apple is launching an Apple Manufacturing Academy that will open with a two-day program on August 19, 2025.

"We're thrilled to welcome companies from across the country to the Apple Manufacturing Academy starting next month," said Sabih Khan, Apple's new chief operating officer, in a statement. "Apple works with suppliers in all 50 states because we know advanced manufacturing is vital to American innovation and leadership."

"With this new programming," he continued, "we're thrilled to help even more businesses implement smart manufacturing so they can unlock amazing opportunities for their companies and our country."

Running in partnership with Michigan State University, the new academy will broadly follow the same structure as existing Developer Academies, such as the one already in Detroit. It will host small and medium-sized businesses from across the US, and teach manufacturing and technology skills including:

  • Machine learning and deep learning in manufacturing
  • Automation in the product manufacturing industry
  • Leveraging data to improve quality
  • Applying digital technologies to operations

The sessions will initially consist of in-person workshops with Apple staff. Later in 2025, Apple says a virtual program will be added, specifically for issues such as project management.

Firms interested in applying can register for the first academy on Michigan State University's official site.

While this academy is newly announced, it's part of the long-standing $500 billion program that Apple is announcing piecemeal. The most recent addition is the investment in Texas-based firm MP Materials, a project to increase Apple's use of US-made rare earth magnets.

Original Submission

posted by hubie on Monday September 15, @07:12PM   Printer-friendly

The newly developed concept uses liquid uranium to heat rocket propellant:

Engineers from Ohio State University are developing a new way to power rocket engines, using liquid uranium for a faster, more efficient form of nuclear propulsion that could deliver round trips to Mars within a single year.

NASA and its private partners have their eyes set on the Moon and Mars, aiming to establish a regular human presence on distant celestial bodies. The future of space travel depends on building rocket engines that can propel vehicles farther into space and do it faster. Nuclear thermal propulsion is currently at the forefront of new engine technologies aiming to significantly reduce travel time while allowing for heavier payloads.

Nuclear propulsion uses a nuclear reactor to heat a liquid propellant to extremely high temperatures, turning it into a gas that's expelled through a nozzle and used to generate thrust. The newly developed engine concept, called the centrifugal nuclear thermal rocket (CNTR), uses liquid uranium to heat rocket propellant directly. In doing so, the engine promises more efficiency than traditional chemical rockets, as well as other nuclear propulsion engines, according to new research published in Acta Astronautica.

If it proves successful, CNTR could allow future vehicles to travel farther using less fuel. Traditional chemical engines achieve a specific impulse – a measure of how efficiently an engine turns propellant into thrust – of about 450 seconds. Nuclear propulsion engines can reach around 900 seconds, and the CNTR could possibly push that number even higher.
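
The payoff of doubling specific impulse can be sketched with the Tsiolkovsky rocket equation. The 450 s and 900 s figures are from the article; the 10 km/s delta-v budget below is an illustrative assumption:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def mass_ratio(delta_v: float, isp: float) -> float:
    """Tsiolkovsky rocket equation: initial-to-final mass ratio needed
    for a given delta-v (m/s) at a given specific impulse (s)."""
    return math.exp(delta_v / (isp * G0))

dv = 10_000.0  # m/s, an illustrative mission budget (not from the article)
for isp in (450.0, 900.0):
    r = mass_ratio(dv, isp)
    print(f"Isp {isp:.0f} s -> mass ratio {r:.2f}, propellant fraction {1 - 1/r:.0%}")
```

Because the mass ratio is exponential in delta-v / Isp, doubling Isp takes the square root of the required mass ratio: the ship goes from being roughly 90% propellant to roughly two-thirds propellant for the same mission.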

"You could have a safe one-way trip to Mars in six months, for example, as opposed to doing the same mission in a year," Spencer Christian, a PhD student at Ohio State and leader of CNTR's prototype construction, said in a statement. "Depending on how well it works, the prototype CNTR engine is pushing us towards the future."

CNTR promises faster routes, but it could also use different types of propellant, like ammonia, methane, hydrazine, or propane, that can be found in asteroids or other objects in space.

The concept is still in its infancy, and a few engineering challenges remain before CNTR can fly missions to Mars. Engineers are working to ensure that startup, shutdown, and operation of the engine don't cause instabilities, while also finding ways to minimize the loss of liquid uranium.

"We have a very good understanding of the physics of our design, but there are still technical challenges that we need to overcome," Dean Wang, associate professor of mechanical and aerospace engineering at Ohio State and senior member of the CNTR project, said in a statement. "We need to keep space nuclear propulsion as a consistent priority in the future, so that technology can have time to mature."


Original Submission

posted by hubie on Monday September 15, @02:29PM   Printer-friendly
from the that's-only-1900-floppies-for-the-install dept.

The PowerShell script should work with any version of Windows 11:

NTDEV has introduced the Nano11 Builder for Windows 11. This is a tool that allows Windows 11 testers and tinkerers to pare down Microsoft's latest OS into the bare minimum. With this new release, the developer has significantly pushed the boundaries of their prior Tiny11 releases.

Nano11's extreme pruning of the official installer disk image from Microsoft produces a new ISO "up to 3.5 times smaller" than the original. The example Windows 11 standard ISO shown in the Nano11 demo was 7.04GB; after the PowerShell script has done its work, you end up with a 2.29GB ISO.

Furthermore, the completed Windows 11 install can scrape in as low as 2.8GB if you use Windows 11 LTSC as the source ISO.

Before we talk any more about Nano11, please be warned that it is described as "an extreme experimental script designed for creating a quick and dirty development testbed." It could also be useful for Windows 11 VM (virtual machine) tests, suggests its developer. But it isn't designed for installing a compact Windows 11 for your daily workhorse.

[...] Some of the social media postings suggest that, when following in NTDEV's Nano11 footsteps, you will end up with as little as a 2.8GB Windows 11 install footprint. However, this will depend on the 'flavor' of Windows 11 you start with, and there is also a little bit more work to be done to achieve the minimum size.

After installation, the example Nano11 install actually uses up 11.0GB of the 20GB virtual disk in the VM. It is only after NTDEV runs the 'Compact' command on the C: drive using LZX compression and then deletes the virtual memory page file that we see the installation reduced to around the 3.2GB level.
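
The article doesn't show NTDEV's exact command line; on Windows, this kind of in-place LZX compression is normally applied with the built-in compact tool, along these lines (run from an elevated prompt; the target path is a placeholder):

```shell
:: Compress Windows system binaries in place (CompactOS).
compact /compactos:always

:: Compress a directory tree with the LZX algorithm: /c compress,
:: /s recurse subdirectories, /a include hidden/system files,
:: /i continue on errors. Files decompress transparently on read.
compact /c /s /a /i /exe:LZX *
```

LZX gives the highest compression ratio compact supports, at the cost of extra CPU when compressed files are read, which is why it suits a rarely-rewritten testbed install.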

"The nano11 script used on a Win11 LTSC image leads to an even better result! This will be perfect for when you need an ISO that can be installed in like 5 minutes and without any (and I mean it) fluff" (September 9, 2025)

Also see: Tiny11 Builder Update Lets Users Strip Copilot and Other Bloat From Windows 11


Original Submission

posted by hubie on Monday September 15, @09:46AM   Printer-friendly
from the grok-is-this-real? dept.

Google Is Telling People DOGE Never Existed:

Here's a Mandela effect event that you probably thought was real: The Department of Government Efficiency, the pseudo-agency run by Elon Musk to cut "fraud, waste, and abuse" from federal operations, didn't actually exist. At least, that is what Google's AI Overview response will tell you if you search certain content related to DOGE's operations.

A Bluesky user who goes by iucounu first pointed out this mistake, finding that querying the search engine about the number of deaths caused by DOGE's cuts to essential programs produces a response claiming the agency is "fictional" and comes from "a political satire or conspiracy theory." Gizmodo was able to reproduce these results:

According to Google, "There is no actual government department named DOGE, and the term is used in critical or satirical contexts to refer to policies or actions taken by the Trump administration." The results expand on this later, stating, "It is crucial to understand that there is no actual government entity named DOGE, and the discussion around it is part of political discourse or satire, not a factual government action."

There are certainly outlets and people who have suggested that DOGE is fake, either in that it does nothing to accomplish its stated mission or actually is not a real agency established by the federal government (though it certainly functions as one). But the AI Overview does not cite any source that suggests this.

The closest it gets to a source outright saying DOGE doesn't exist is a link to the Democrats' House Committee on the Budget, which has a page titled "The So-Called 'DOGE,'" but even that offers a pretty clear statement that DOGE is not some mass delusion: "DOGE is an organization in the Executive Office of the President. It is not a cabinet-level agency with Senate-approved leadership and has no statutory authority to alter Congressionally appropriated funds." The other sources, places like Lawfare and the Center on Budget and Policy Priorities, don't even come close to suggesting the agency is a satire.

So what gives? Google didn't offer any explanation when contacted, though a spokesperson for the company did tell Gizmodo, "This AI Overview is clearly incorrect. It violated our policies around civic information, and we are taking action to address the issue."


Original Submission

posted by jelizondo on Monday September 15, @05:05AM   Printer-friendly
from the cybersecurity-elephant-in-the-room dept.

Apparently, China's dominance of the supply chain means that it's also seen as the principal source of cybersecurity risk:

"We know that foreign, hostile actors see Australia's energy system as a good target," Home Affairs assistant secretary for cyber security Sophie Pearce told the small, afternoon-on-the-last-day audience.

"We know that cyber vector is the most likely means of disrupting our energy ecosystem, and I think that the energy transition raises the stakes even further. Where we're reliant on foreign investment and foreign supply chains, lots of opportunity there, obviously.

"When there's a dependency on jurisdictions that might require or can compel access to data or access to systems, that increases the risks."

[...] Pearce Courtney handles cyber coordination for energy markets at AEMO, and while he says it's maintaining visibility over the whole structure that keeps the organisation "up at night", technology concentration risk is on the radar.

[...] "In terms of the technology and the devices and where we're buying our supply chain. That's probably the other challenge that doesn't keep us up at night, that's a significant, complex challenge."

China controls 80 per cent of the global supply chain for all the manufacturing stages of solar panels, according to an International Energy Agency (IEA) report from 2022. A similar study from 2024 shows China has almost 85 per cent of global battery cell production capacity.

[Editor's Comment: Title corrected to more accurately reflect the summary contents--JR 2025-09-15 05:55]


Original Submission

posted by jelizondo on Monday September 15, @12:19AM   Printer-friendly

Newly released video at House UFO hearing appears to show U.S. missile striking and bouncing off orb:

A newly released video captured by a U.S. Reaper drone shows a glowing orb off the coast of Yemen. In the video, a Hellfire missile suddenly strikes the unidentified object and bounces off it.

Rep. Eric Burlison, a Republican from Missouri, shared the video at a House Oversight hearing on Tuesday about what the military calls "Unidentified Aerial Phenomena," better known as UFOs.

The video, dated Oct. 30, 2024, was provided by a whistleblower; when it is slowed down, the missile can be seen continuing on its own path after striking the orb.

A recent government report revealed that it had received more than 750 new UAP sightings between May 2023 and June 2024, leaving lawmakers digging into the mystery and national security concerns posed by the objects.

"We've never seen a Hellfire missile hit a target and bounce off," said Lue Elizondo, a former senior intelligence official with the Pentagon.

"When a hellfire makes a hit, a kinetic strike on something solid, there's usually not much left of whatever it is it's hitting," Elizondo said. "It's very, very destructive. But in the video ... what seems to happen is that the missile is either redirected, or in some case, perhaps glances off the object and continues on its way."

What was not shown in the video is a second Reaper drone, which launched the missile.

Details remain unclear, including what the mission was.

The U.S. military was conducting regular air strikes against Houthi targets that posed a threat to the U.S. Navy and commercial vessels.

Pentagon officials told CBS News they have no comment.

The Defense Department launched a website for declassified UAP information in 2023, after a House Oversight Committee hearing earlier that year featured testimony from a former military intelligence officer and two former fighter pilots with first-hand experience of the mysterious objects.

At a House Oversight Committee hearing on unidentified anomalous phenomena (UAPs) on Sept. 9, Rep. Eric Burlison (R-MO) shared drone footage from October 2024 which shows the "hit" [3:26 --JE]


Original Submission

posted by jelizondo on Sunday September 14, @07:39PM   Printer-friendly

Scientists Stunned as Tiny Algae Keep Moving Inside Arctic Ice:

Scientists know that microbial life can survive under some extreme conditions—including, hopefully, harsh Martian weather. But new research suggests that one particular microbe, an algal species found in Arctic ice, isn't as immobile as it was previously believed. They're surprisingly active, gliding across—and even within—their frigid stomping grounds.

In a Proceedings of the National Academy of Sciences paper published September 9, researchers explained that ice diatoms—single-celled algae with glassy outer walls—actively dance around in the ice. This feisty activity challenges assumptions that microbes living in extreme environments, or extremophiles, are barely getting by. If anything, these algae evolved to thrive despite the extreme conditions. The remarkable mobility of these microbes also hints at an unexpected role they may play in sustaining Arctic ecology.

"This is not 1980s-movie cryobiology," said Manu Prakash, the study's senior author and a bioengineer at Stanford University, in a statement. "The diatoms are as active as we can imagine until temperatures drop all the way down to -15 C [5 degrees Fahrenheit], which is super surprising."

That is the lowest temperature at which motility has been recorded in a eukaryotic cell, the researchers claim. Surprisingly, diatoms of the same species from a much warmer environment didn't demonstrate the same skating behavior as the ice diatoms, which implies that the Arctic diatoms' extreme lifestyle conferred an "evolutionary advantage," they added.

For the study, the researchers collected ice cores from 12 stations across the Arctic in 2023. They conducted an initial analysis of the cores using on-ship microscopes, creating a comprehensive image of the tiny society inside the ice.

To get a clearer picture of how and why these diatoms were skating, the team sought to replicate the conditions of the ice cores in the lab. They prepared a Petri dish with thin layers of frozen freshwater and very cold saltwater. The team even donated strands of their hair to mimic the microfluidic channels in Arctic ice, through which salt is expelled as the ice freezes.

As they expected, the diatoms happily glided through the Petri dish, using the hair strands as "highways" during their routines. Further analysis allowed the researchers to track and pinpoint how the microbes accomplished their icy trick.

"There's a polymer, kind of like snail mucus, that they secrete that adheres to the surface, like a rope with an anchor," explained Qing Zhang, study lead author and a postdoctoral student at Stanford, in the same release. "And then they pull on that 'rope,' and that gives them the force to move forward."

If we're talking numbers, algae may be among the most abundant living organisms in the Arctic. To put that into perspective, Arctic waters appear "absolute pitch green" in drone footage purely because of algae, explained Prakash.

The researchers have yet to identify the significance of the diatoms' gliding behavior. However, knowing that they're far more active than we believed could mean that the tiny skaters unknowingly contribute to how resources are cycled in the Arctic.

"In some sense, it makes you realize this is not just a tiny little thing; this is a significant portion of the food chain and controls what's happening under ice," Prakash added.

That's a significant departure from what we often think of them as—a major food source for other, bigger creatures. But if true, it would help scientists gather new insights into the hard-to-probe environment of the Arctic, especially as climate change threatens its very existence. The timing of this result shows that, to understand what's beyond Earth, we first need to protect and safely observe what's already here.

Journal Reference:
Ice gliding diatoms establish record-low temperature limits for motility in a eukaryotic cell, Proceedings of the National Academy of Sciences (2025)


Original Submission

posted by hubie on Sunday September 14, @02:49PM   Printer-friendly

Researchers investigated giant prehistoric trash piles to reveal where animal remains came from:

You can learn a lot about people by studying their trash, including populations that lived thousands of years ago.

In what the team calls the "largest study of its kind," researchers applied this principle to Britain's iconic middens, or giant prehistoric trash (excuse me, rubbish) piles. Their analysis revealed that at the end of the Bronze Age (2,300 to 800 BCE), people—and their animals—traveled from far to feast together.

"At a time of climatic and economic instability, people in southern Britain turned to feasting—there was perhaps a feasting age between the Bronze and Iron Age," Richard Madgwick, an archaeologist at Cardiff University and co-author of the study published yesterday in the journal iScience, said in a university statement. "These events are powerful for building and consolidating relationships both within and between communities, today and in the past."

Madgwick and his colleagues investigated material from six middens in Wiltshire and the Thames Valley via isotope analysis, a technique archaeologists use to link animal remains to the unique chemical make-up of a particular geographic area. The technique reveals where the animals were raised, allowing the researchers to see how far people traveled to join these feasts.

"The scale of these accumulations of debris and their wide catchment is astonishing and points to communal consumption and social mobilisation on a scale that is arguably unparalleled in British prehistory," Madgwick added.

[...] "Our findings show each midden had a distinct make up of animal remains, with some full of locally raised sheep and others with pigs or cattle from far and wide," said Carmen Esposito, lead author of the study and an archaeologist at the University of Bologna. "We believe this demonstrates that each midden was a lynchpin in the landscape, key to sustaining specific regional economies, expressing identities and sustaining relations between communities during this turbulent period, when the value of bronze dropped and people turned to farming instead."

A number of these prehistoric trash heaps, which resulted from potentially the largest feasts in Britain until the Middle Ages (that would mean they even outdid the Romans), were eventually incorporated into the landscape as small hills.

"Overall, the research points to the dynamic networks that were anchored on feasting events during this period and the different, perhaps complementary, roles that each midden had at the Bronze Age-Iron Age transition," Madgwick concluded.

Since previous research indicates that Late Neolithic (2,800 BCE to 2,400 BCE) communities in Britain were also organizing feasts that attracted guests—and their pigs—from far and wide, I think it's fair to say that prehistoric British people were throwing successful ragers across 2,000 years.

Journal Reference: Esposito, Carmen et al. Diverse feasting networks at the end of the Bronze Age in Britain (c. 900-500 BCE) evidenced by multi-isotope analysis [OPEN], iScience, Volume 0, Issue 0, 113271 https://doi.org/10.1016/j.isci.2025.113271


Original Submission

posted by hubie on Sunday September 14, @10:00AM   Printer-friendly

Arbitrarily inflated lock-in-tastic fees curbed as movement charges must be cost-linked:

Most of the provisions of the EU Data Act will officially come into force from the end of this week, requiring cloud providers to make it easier for customers to move their data, but some of the big players are keener than others.

The European Data Act is an ambitious attempt by the European Commission to galvanize the market for digital services by opening up access to data. But it also contains provisions to permit customers to move seamlessly between different cloud operators and combine data services from different providers in a so-called multi-cloud strategy.

Cloud users have often complained about the fees that operators charge whenever data is transferred outside of their networks. Investigations by regulators such as the UK's Competition and Markets Authority (CMA) have led the big three platforms – AWS, Microsoft's Azure and Google Cloud – to all waive egress fees, but only for users quitting their platforms.

While the Data Act doesn't rule out vendors charging data transfer fees, it does expect cloud firms to pass on costs to customers rather than charging arbitrary or excessive payments.

Google is keen to publicize that it is going further than this and offering data movement at no cost for customers in both the European Union and the United Kingdom via a newly announced Data Transfer Essentials service.

There's a catch, of course – Google makes it clear that its service is designed for cost-optimized data transfer between two services of a customer organization that happen to be running on different cloud platforms.

In other words, it is for traffic that would effectively be considered internal to the customer organization, not for transfers to third parties. Google warns that if one of its audits uncovers that the service is being used for third-party transfers, the traffic will be billed as regular internet traffic.

Microsoft is offering at-cost transfer for customers and cloud service partners in the EU shifting data to another provider, but there are also strings attached. Customers must create an Azure Support request for the transfer, specifying where the data is to be moved, and it must also be to a service operated by the same customer, not to endpoints belonging to different customers.

We understand that AWS specifies that EU customers "request reduced data transfer rates for eligible use cases under the European Data Act," requiring them to contact customer support for further information. We asked AWS for clarification.

Google claims that its move demonstrates its commitment to fostering an open and fair cloud market in Europe.

This might have something to do with it being a bit of an underdog here, making up about 10 percent of the European cloud market, while AWS is estimated to take 32 percent, and Azure another 23 percent.

"The original promise of the cloud is one that is open, elastic, and free from artificial lock-ins. Google Cloud continues to embrace this openness and the ability for customers to choose the cloud service provider that works best for their workload needs," said Google Cloud's senior director for global risk and compliance, Jeanette Manfra.


Original Submission

posted by hubie on Sunday September 14, @05:14AM   Printer-friendly

Pluralistic: Fingerspitzengefühl (08 Sep 2025) – Pluralistic: Daily links from Cory Doctorow:

This was the plan: America would stop making things and instead make recipes, the "IP" that could be sent to other countries to turn into actual stuff, in distant lands without the pesky environmental and labor rules that forced businesses to accept reduced profits because they weren't allowed to maim their workers and poison the land, air and water.

This was quite a switch! At the founding of the American republic, the US refused to extend patent protection to foreign inventors. The inventions of foreigners would be fair game for Americans, who could follow their recipes without paying a cent, and so improve the productivity of the new nation without paying rent to old empires over the sea.

It was only once America found itself exporting as much as it imported that it saw fit to recognize the prerogatives of foreign inventors, as part of reciprocal agreements that required foreigners to seek permission and pay royalties to American patent-holders.

But by the end of the 20th Century, America's ruling class was no longer interested in exporting things; they wanted to export ideas, and receive things in return. You can see why: America has a limited supply of things, but there's an infinite supply of ideas (in theory, anyway).

There was one problem: why wouldn't the poor-but-striving nations abroad copy the American Method for successful industrialization? If ignoring Europeans' patents allowed America to become the richest and most powerful nation in the world, why wouldn't, say, China just copy all that American "IP"? If seizing foreigners' inventions without permission was good enough for Thomas Jefferson, why not Jiang Zemin?

America solved this problem with the promise of "free trade." The World Trade Organization divided the world into two blocs: countries that could trade with one another without paying tariffs, and the rabble without who had to navigate a complex O(n^2) problem of different tariff schedules between every pair of nations.

To join the WTO club, countries had to sign up to a side-treaty called the Trade-Related Aspects of Intellectual Property Rights (TRIPS). Under the TRIPS, the Jeffersonian plan for industrialization (taking foreigners' ideas without permission) was declared a one-off, a scheme only the US got to try and no other country could benefit from. For China to join the WTO and gain tariff-free access to the world's markets, it would have to agree to respect foreign patents, copyrights, trademarks and other "IP."

We know the story of what followed over the next quarter-century: China became the world's factory, and became so structurally important that even if it violated its obligations under the TRIPS, "stealing the IP" of rich nations, no one could afford to close their borders to Chinese imports, because every country except China had forgotten how to make things.

But this isn't the whole story – it's not even the most important part of it. In his new book Breakneck, Dan Wang (a Chinese-born Canadian who has lived extensively in Silicon Valley and in China) devotes a key chapter to "process knowledge":

https://danwang.co/breakneck/

What's "process knowledge"? It's all the intangible knowledge that workers acquire as they produce goods, combined with the knowledge that their managers acquire from overseeing that labor. The Germans call it "Fingerspitzengefühl" ("fingertip-feeling"), like the sense of having a ball balanced on your fingertips, and knowing exactly which way it will tip as you tilt your hand this way or that.

[...] Process knowledge is everything from "Here's how to decant feedstock into this gadget so it doesn't jam," to "here's how to adjust the flow of this precursor on humid days to account for the changes in viscosity" to "if you can't get the normal tech to show up and calibrate the part, here's the phone number of the guy who retired last year and will do it for time-and-a-half."

It can also be decidedly high-tech. A couple years ago, the legendary hardware hacker Andrew "bunnie" Huang explained to me his skepticism about the CHIPS Act's goal of onshoring the most advanced (4-5nm) chips.

[...] This process is so esoteric, and has so many figurative and literal moving parts, that it needs to be closely overseen and continuously adjusted by someone with a PhD in electrical engineering. That overseer needs to wear a clean-room suit, and they have to work an eight-hour shift without a bathroom, food or water break (because getting out of the suit means going through an airlock means shutting down the system means long delays and wastage).

That PhD EENG is making $50k/year. Bunnie's topline explanation for the likely failure of the CHIPS Act is that this is a process that could only be successfully executed in a country "with an amazing educational system and a terrible passport." For bunnie, the extensive educational subsidies that produced Taiwan's legion of skilled electrical engineers and the global system that denied them the opportunity to emigrate to higher-wage zones were the root of the country's global dominance in advanced chip manufacture.

I have no doubt that this is true, but I think it's incomplete. What bunnie is describing isn't merely the expertise imparted by attaining a PhD in electrical engineering – it's the process knowledge built up by generations of chip experts who debugged generations of systems that preceded the current tin-vaporizing Rube Goldberg machines.

[...] Wang evocatively describes how China built up its process knowledge over the WTO years, starting with simple assembly of complex components made abroad, then progressing to making those components, then progressing to coming up with novel ways of reconfiguring them ("a drone is a cellphone with propellers"). He explains how the vicious cycle of losing process knowledge accelerated the decline of manufacturing in the west: every time a factory goes to China, US manufacturers that had been in its supply chain lose process knowledge. You can no longer call up that former supplier and brainstorm solutions to tricky production snags, which means that other factories in the supply chain suffer, and they, too, get offshored to China.

America's vicious cycle was China's virtuous cycle. The process knowledge that drained out of America accumulated in China. Years of experience solving problems in earlier versions of new equipment and processes gives workers a conceptual framework to debug the current version – they know about the raw mechanisms subsumed in abstraction layers and sealed packages and can visualize what's going on inside those black boxes.

[...] But here's the thing: while "IP" can be bought and sold by the capital classes, process knowledge is inseparably vested in the minds and muscle-memory of their workers. People who own the instructions are constitutionally prone to assuming that making the recipe is the important part, while following the recipe is donkey-work you can assign to any freestanding oaf who can take instruction.

[...] The exaltation of "IP" over process knowledge is part of the ancient practice of bosses denigrating their workers' contribution to the bottom line. It's key to the myth that workers can be replaced by AI: an AI can consume all the "IP" produced by workers, but it doesn't have their process knowledge. It can't, because process knowledge is embodied and enmeshed, it is relational and physical. It doesn't appear in training data.

In other words, elevating "IP" over process knowledge is a form of class war. And now that the world's store of process knowledge has been sent to the global south, the class war has gone racial. Think of how Howard Dean – now a paid shill for the pharma lobby – peddled the racist lie that there was no point in dropping patent protections for the covid vaccines, because brown people in poor countries were too stupid to make advanced vaccines:

The truth is that the world's largest vaccine factories are to be found in the global south, particularly India, and these factories sit at the center of a vast web of process knowledge, embedded in relationships and built up with hard-won problem-solving.

Bosses would love it if process knowledge didn't matter, because then workers could finally be tamed by industry. We could just move the "IP" around to the highest bidders with the cheapest workforces. But Wang's book makes a forceful argument that it's easier to build up a powerful, resilient society based on process knowledge than it is to do so with IP. What good is a bunch of really cool recipes if no one can follow them?

I think that bosses are, psychoanalytically speaking, haunted by the idea that their workers own the process knowledge that is at the heart of their profits. That's why bosses are so obsessed with noncompete "agreements." If you can't own your workers' expertise, then you must own your workers. Any time a debate breaks out over noncompetes, a boss will say something like, "My intellectual property walks out the door of my shop every day at 5PM." They're wrong: the intellectual property is safely stored on the company's hard drives – it's the process knowledge that walks out the door.


Original Submission

posted by hubie on Sunday September 14, @12:26AM   Printer-friendly

Wyden says default use of RC4 cipher led to last year's breach of health giant Ascension:

A prominent US senator has called on the Federal Trade Commission to investigate Microsoft for "gross cybersecurity negligence," citing the company's continued use of an obsolete and vulnerable form of encryption that Windows uses by default.

In a letter to FTC Chairman Andrew Ferguson, Sen. Ron Wyden (D–Ore.) said an investigation his office conducted into the 2024 ransomware breach of the health care giant Ascension found that the default use of the RC4 encryption cipher was a direct cause. The breach led to the theft of medical records of 5.6 million patients.

It's the second time in as many years that Wyden has used the word "negligence" to describe Microsoft's security practices.

"Because of dangerous software engineering decisions by Microsoft, which the company has largely hidden from its corporate and government customers, a single individual at a hospital or other organization clicking on the wrong link can quickly result in an organization-wide ransomware infection," Wyden wrote in the letter, which was sent Wednesday. "Microsoft has utterly failed to stop or even slow down the scourge of ransomware enabled by its dangerous software."

RC4 is short for Rivest Cipher 4, a nod to mathematician and cryptographer Ron Rivest of RSA Security, who developed the stream cipher in 1987. It was a trade-secret-protected proprietary cipher until 1994, when an anonymous party posted a technical description of it to the Cypherpunks mailing list. Within days, the algorithm was broken, meaning its security could be compromised using cryptographic attacks. Despite the known susceptibility to such attacks, RC4 remained in wide use in encryption protocols, including SSL and its successor TLS, until about a decade ago.

Microsoft, however, continues to support RC4 as the default means for securing Active Directory, a Windows component that administrators use to configure and provision user accounts inside large organizations. While Windows offers more robust encryption options, many users don't enable them, causing Active Directory to fall back to the Kerberos authentication method using the vulnerable RC4 cipher.

[...] Wyden said his office's investigation into the Ascension breach found that the ransomware attackers' initial entry into the health giant's network came when a contractor's laptop was infected after its user searched Microsoft's Bing site using the Microsoft Edge browser. The attackers were then able to expand their hold by attacking Ascension's Active Directory and abusing its privileged access to push malware to thousands of other machines inside the network. The means for doing so, Wyden said: Kerberoasting.

[...] Referring to the Active Directory default, Johns Hopkins cryptographer Matthew Green wrote:

It's actually a terrible design that should have been done away with decades ago. We should not build systems where any random attacker who compromises a single employee laptop can ask for a message encrypted under a critical password! This basically invites offline cracking attacks, which do not need even to be executed on the compromised laptop—they can be exported out of the network to another location and performed using GPUs and other hardware.
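
Green's point about offline cracking can be sketched in a few lines. Everything below is illustrative: the key derivation substitutes SHA-256 for the real MD4-based NT hash that RC4-HMAC Kerberos keys use (MD4 is often unavailable in modern crypto libraries), and the password and wordlist are invented. Only the shape of the attack matters: once an attacker holds material encrypted under a password-derived key, each guess is a purely local computation.

```python
import hashlib
from typing import Optional

def derive_key(password: str) -> bytes:
    # Real RC4-HMAC Kerberos keys are MD4(UTF-16LE(password)) -- the NT hash.
    # MD4 is unavailable in many modern OpenSSL builds, so this sketch
    # substitutes SHA-256; the structure of the attack is unchanged.
    return hashlib.sha256(password.encode("utf-16-le")).digest()

def offline_crack(captured_key: bytes, wordlist: list[str]) -> Optional[str]:
    # Every guess is a local hash computation -- no further contact with
    # the victim network is needed, which is why GPU rigs make cracking
    # weak service-account passwords cheap.
    for guess in wordlist:
        if derive_key(guess) == captured_key:
            return guess
    return None

# Hypothetical key recovered via a Kerberoasting-style service-ticket
# request (the password here is invented for illustration).
captured = derive_key("Summer2024!")
print(offline_crack(captured, ["P@ssw0rd", "letmein", "Summer2024!"]))  # Summer2024!
```

This is why the cracking can be "exported out of the network to another location," as Green notes: the compromised laptop is only needed to capture the encrypted material, not to run the guesses.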

More than 11 months after announcing its plans to deprecate RC4/Kerberos, the company has provided no timeline for doing so. What's more, Wyden said, the announcement was made in a "highly technical blog post on an obscure area of the company's website on a Friday afternoon." Wyden also criticized Microsoft for declining to "explicitly warn its customers that they are vulnerable to the Kerberoasting hacking technique unless they change the default settings chosen by Microsoft."

Wyden went on to criticize Microsoft for building a "multibillion-dollar secondary business selling cybersecurity add-on services to those organizations that can afford it. At this point, Microsoft has become like an arsonist selling firefighting services to their victims."


Original Submission

posted by jelizondo on Saturday September 13, @07:40PM   Printer-friendly

Researchers created a strange quantum crystal from a material found in smartphones—the first of its kind visible to the naked eye:

Of all the eccentricities of the quantum realm, time crystals—atomic arrangements that repeat certain motions over time—might be some of the weirdest. But they certainly exist, and to provide more solid proof, physicists have finally created a time crystal we can actually see.

In a recent Nature Materials paper, physicists at the University of Colorado Boulder presented a new time crystal design: a glass cell filled with liquid crystals—rod-shaped molecules stuck in strange limbo between solid and liquid. It's the same stuff found in smartphone LCD screens. When hit with light, the crystals jiggle and dance in repeating patterns that the researchers say resemble "psychedelic tiger stripes."

"They can be observed directly under a microscope and even, under special conditions, by the naked eye," said Hanqing Zhao, study lead author and a graduate student at the University of Colorado Boulder, in a release. Technically, these crystalline dances can last for hours, like an "eternally spinning clock," the researchers added.

Time crystals first appeared in a 2012 paper by Nobel laureate Frank Wilczek, who pitched an idea for an impossible crystal that breaks several rules of symmetry in physics. Specifically, a time crystal breaks time-translation symmetry: its atoms do not lock into a static lattice, and their arrangement changes in a repeating pattern over time.

Physicists have since demonstrated versions of Wilczek's proposal, but these crystals lasted for a terribly short time and were microscopic. Zhao and Ivan Smalyukh, the study's senior author and a physicist at the University of Colorado Boulder, wanted to see if they could overcome these limitations.

For the new time crystal, the duo exploited the molecules' "kinks"—their tendency to cluster together when squeezed in a certain way. Once together, these kinks behave like whole atoms, the researchers explained.

"You have these twists, and you can't easily remove them," Smalyukh said. "They behave like particles and start interacting with each other."

The team coated two glass cells with dye molecules, sandwiching a liquid crystal solution between the layers. When they flashed the setup with polarized light, the dye molecules churned inside the glass, squeezing the liquid crystal. This formed thousands of new kinks inside the crystal, the researchers explained.

"That's the beauty of this time crystal," said Smalyukh. "You just create some conditions that aren't that special. You shine a light, and the whole thing happens."

The team believes its iteration of the time crystal could have practical uses. For instance, a "time watermark" printed on bills could be used to identify counterfeits. Also, stacked layers could serve as a tiny data center.

It's rare for quantum systems to be visible to the naked eye. Only time will tell if this time crystal amounts to anything—the researchers "don't want to put a limit on the applications right now"—but even if it doesn't, it's still a neat demonstration of how physical theories exist in strange, unexpected corners of reality.

Journal Reference: Zhao, H., Smalyukh, I.I. Space-time crystals from particle-like topological solitons. Nat. Mater. (2025). https://doi.org/10.1038/s41563-025-02344-1


Original Submission

posted by jelizondo on Saturday September 13, @02:51PM   Printer-friendly

The RTX 4090 48GB looks like a whole different card:

Remember those Frankenstein GeForce RTX 4090 48GB graphics cards emerging from China? Russian PC technician and builder VIK-on [17:17 --JE] has provided detailed insights into how Chinese factories are transforming the GeForce RTX 4090, once regarded as one of the best graphics cards, to effectively double its memory capacity specifically for AI workloads.

As a mainstream product, the GeForce RTX 4090 does not support memory chips in a clamshell configuration, unlike Nvidia's professional and data center products. Essentially, this means that the Ada Lovelace flagship only houses memory chips on one side of the PCB. In clamshell mode, graphics cards typically feature memory chips on both sides of the PCB. This limitation is addressed by the GeForce RTX 4090 48GB "upgrade kit," which sells for around $142 in China.

The upgrade kit comprises a custom PCB designed with a clamshell configuration, facilitating the installation of twice the number of memory chips. Most components are pre-installed at the manufacturing facility, requiring the user to solder the GPU and memory chips onto the PCB. Additionally, the upgrade kit includes a blower-style cooling solution, designed for integration with workstation and server configurations that utilize multi-GPU architectures.

VIK-on demonstrated the process of extracting the AD102 silicon and twelve 2GB GDDR6X memory chips from the MSI GeForce RTX 4090 Suprim and installing them onto the barebone PCB. The technician utilized spare GDDR6X memory chips from defective graphics cards, thereby obtaining additional GDDR6X memory at no cost. Clearly, this operation requires specialized soldering skills and access to appropriate high-end tools.

The technician also uploaded a leaked, modified firmware onto the GeForce RTX 4090 48GB. It is important to note that each graphics card possesses a unique GPU device ID, which contains all pertinent information. During the system initialization process, the firmware verifies whether the GPU device ID corresponds with the one embedded within the chip. Hacked firmware has been circulating for some time.

Indeed, it was during the era of the GeForce RTX 20-series (Turing) that enthusiasts uncovered the capability to deactivate memory channels. This feature was not advantageous for the general public, as it was illogical to impair a fully functional graphics card by reducing its memory capacity. However, for repair professionals, this discovery proved invaluable, enabling them to salvage graphics cards with defective memory channels. Consequently, this led to the emergence of unorthodox models in the market, such as the GeForce RTX 3090 with 20GB of memory instead of the standard 24GB, or the GeForce RTX 3070 Ti with 6GB of memory instead of the expected 8GB.

The firmware modders identified the possibility of expanding memory capacity through the modification. Consequently, the GeForce RTX 4090 48GB and the GeForce RTX 4080 Super 32GB came into existence.

Upgrading the GeForce RTX 4090 to 48GB is an expensive endeavor. First, it is necessary to possess the graphics card in order to extract the Ada Lovelace silicon and GDDR6X memory chips. If you do not have any GDDR6X modules readily available, you'll need to purchase each module, currently priced at $24 on Chinese e-commerce platforms.

Consequently, the total cost for the upgrade is $430, excluding shipping costs. Assuming you were fortunate enough to purchase a GeForce RTX 4090 at its original MSRP of $1,599, the total amounts to $2,029. These GeForce RTX 4090 48GB graphics cards typically sell for around $3,320 in China, so you're saving close to 39% - again, assuming you have the soldering skills and access to all the equipment necessary for the upgrade. Alternatively, you can pay someone more qualified to perform the upgrade for you.
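
The article's arithmetic checks out; spelled out with its own figures:

```python
# Reproducing the upgrade cost math from the article (all prices in USD).
kit = 142                      # clamshell PCB + blower cooler upgrade kit
memory = 12 * 24               # twelve extra 2GB GDDR6X chips at $24 each
upgrade = kit + memory         # $430 for the upgrade alone
total = 1599 + upgrade         # donor RTX 4090 at launch MSRP, plus upgrade
savings = 1 - total / 3320     # vs. a typical Chinese market price for a 48GB card

print(upgrade, total, f"{savings:.0%}")  # 430 2029 39%
```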

RTX 4090 supply has already started to dwindle, meaning that sooner or later, Chinese factories will likely begin experimenting with the GeForce RTX 5090, if they haven't already. A rumor is already circulating about the GeForce RTX 5090 128GB. While it may seem like a scam now, it could become a reality further down the road.


Original Submission

posted by janrinok on Saturday September 13, @10:10AM   Printer-friendly

AI's free web scraping days may be over, thanks to this new licensing protocol:

AI companies are capturing as much content as possible from websites and extracting its value, generally without permission or payment. Now, several heavyweight publishers and tech companies -- Reddit, Yahoo, People, O'Reilly Media, Medium, and Ziff Davis (ZDNET's parent company) -- have developed a response: the Really Simple Licensing (RSL) standard.

You can think of RSL as the younger, tougher brother of Really Simple Syndication (RSS). While RSS is about syndication, getting your words, stories, and videos out onto the wider web, RSL says: "If you're an AI crawler gobbling up my content, you don't just get to eat for free anymore."

The idea behind RSL is brutally simple. Instead of the old robots.txt file -- which only said, "yes, you can crawl me," or "no, you can't," and which AI companies often ignore -- publishers can now add something new: machine-readable licensing terms.
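
For contrast, a conventional robots.txt can express only a binary crawl/no-crawl choice per user agent, with no way to attach terms (GPTBot is OpenAI's crawler user-agent):

```
# robots.txt: permission only, no licensing terms attached
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```

And, as the article notes, even this much is only a request that AI crawlers often ignore.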

Want an attribution? You can demand it. Want payment every time an AI crawler ingests your work, or even every time it spits out an answer powered by your article? Yep, there's a tag for that too.

This approach allows publishers to define whether their content is free to crawl, requires a subscription, or will cost "per inference," that is, every time ChatGPT, Gemini, or any other model uses content to generate a reply.

The key capabilities of RSL include:

  • A shared vocabulary that lets publishers define licensing and compensation terms, including free, attribution, pay-per-crawl, and pay-per-inference compensation.
  • An open protocol to automate content licensing and create internet-scale licensing ecosystems between content owners and AI companies.
  • Standardized, public catalogs of licensable content and datasets through RSS and Schema.org metadata.
  • An open protocol for encrypting digital assets to securely license non-public proprietary content, including paywalled articles, books, videos, and training datasets.
  • Supporting collective licensing via RSL Collective or any other RSL-compatible licensing server.
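
As a toy illustration of the pay-per-crawl idea: the field names and structure below are invented for this sketch, not the actual RSL vocabulary, which is defined by the specification itself.

```python
from dataclasses import dataclass

@dataclass
class LicenseTerms:
    # Hypothetical terms a publisher might declare; the real RSL
    # vocabulary is defined by the spec, not by this sketch.
    usage: str          # "free" | "attribution" | "pay-per-crawl" | "pay-per-inference"
    price_usd: float = 0.0

def crawl_cost(terms: LicenseTerms, pages: int) -> float:
    # A compliant crawler would fetch the publisher's declared terms
    # and tally what it owes before (or while) crawling.
    if terms.usage == "pay-per-crawl":
        return terms.price_usd * pages
    return 0.0

terms = LicenseTerms(usage="pay-per-crawl", price_usd=0.002)
print(f"${crawl_cost(terms, 10_000):.2f} owed for 10,000 pages")  # $20.00 owed for 10,000 pages
```

The point of making such terms machine-readable is exactly this: a crawler can compute its obligations automatically, at internet scale, instead of negotiating with each publisher by hand.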

It's a clever fix for a complex problem. As Tim O'Reilly, the O'Reilly Media CEO and one of the RSL initiative's high-profile backers, said: "RSS was critical to the internet's evolution...but today, as AI systems absorb and repurpose that same content without permission or compensation, the rules need to evolve. RSL is that evolution."

O'Reilly's right. RSS helped the early web scale, whether blogs, news syndication, or podcasts. But today's web isn't just competing for human eyeballs. The web is now competing to supply the training and reasoning fuel for AI models that, so far, aren't exactly paying the bills for the sites they're built on.

Of course, tech is one thing; business is another. That's where the RSL Collective comes in. Modeled on music's ASCAP and BMI, the nonprofit is essentially a rights-management clearinghouse for publishers and creators. Join for free, pool your rights, and let the Collective negotiate with AI companies to ensure you're compensated.

As anyone in publishing knows, a lone freelancer, or most media outlets for that matter, has about as much leverage against the likes of OpenAI or Google as a soap bubble in a wind tunnel. But a collective that represents "the millions" of online creators suddenly has some bargaining power.

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Let's step back. For the last few years, AI has been snacking on the internet's content buffet with zero cover charge. That approach worked when the web's economics were primarily driven by advertising. However, those days are history. The old web ad model has left publishers gutted while generative AI companies raise billions in funding.

So, RSL wants to bolt a licensing framework directly into the web's plumbing. And because RSL is an open protocol, just like RSS, anyone can use it. From a giant outlet like Yahoo to a niche recipe blogger, RSL allows web publishers to spell out what they want in return when AI comes crawling.

The work of guiding RSL falls to the RSL Technical Steering Committee, which reads like a who's who of the web's protocol architects: Eckart Walther, co-author of RSS; RV Guha, Schema.org and RSS; Tim O'Reilly; Stephane Koenig, Yahoo; and Simon Wistow, Fastly.

The web has always run on invisible standards such as HTTP, HTML, RSS, and robots.txt. In Web 1.0, social contracts were written into code. If RSL catches on, it may be the next layer in that lineage: the one that finally gives human creators a fighting chance in the AI economy.

And maybe, just maybe, RSL will stop the AI feast from becoming an all-you-can-eat buffet with no one left to cook.


Original Submission