



posted by janrinok on Wednesday October 15, @10:35AM

Nobel economics prize goes to 3 researchers for explaining innovation-driven economic growth

Joel Mokyr, Philippe Aghion and Peter Howitt won the Nobel memorial prize in economics Monday for their research into the impact of innovation on economic growth and how new technologies replace older ones, a key economic concept known as "creative destruction."

The winners represent contrasting but complementary approaches to economics. Mokyr is an economic historian who delved into long-term trends using historical sources, while Howitt and Aghion relied on mathematics to explain how creative destruction works.

Dutch-born Mokyr, 79, is from Northwestern University; Aghion, 69, from the Collège de France and the London School of Economics; and Canadian-born Howitt, 79, from Brown University.

Mokyr was still trying to get his morning coffee when he was reached on the phone by an AP reporter, and said he was shocked to win the prize.

"People always say this, but in this case I am being truthful—I had no clue that anything like this was going to happen," he said.

His students had asked him about the possibility he would win the Nobel, he said. "I told them that I was more likely to be elected Pope than to win the Nobel Prize in economics—and I am Jewish, by the way."

Mokyr will turn 80 next summer but said he has no plans to retire. "This is the type of job that I dreamed about my entire life," he said.

Like fellow laureate Mokyr, Aghion also expressed surprise at the honor. "I can't find the words to express what I feel," he said by phone to the press conference in Stockholm. He said he would invest his prize money in his research laboratory.

Asked about current trade wars and protectionism in the world, Aghion said: "I am not welcoming the protectionist way in the US. That is not good for ... world growth and innovation."

The winners were credited with better explaining and quantifying "creative destruction," a key concept in economics that refers to the process in which beneficial new innovations replace—and thus destroy—older technologies and businesses. The concept is usually associated with economist Joseph Schumpeter, who outlined it in his 1942 book "Capitalism, Socialism and Democracy."

The Nobel committee said Mokyr "demonstrated that if innovations are to succeed one another in a self-generating process, we not only need to know that something works, but we also need to have scientific explanations for why."

Mokyr has long been known as an optimist about the positive effects of technological innovation.

In an interview with the AP in 2015, he cited the music streaming service Spotify as an example of an "absolutely astonishing" innovation that economists had difficulty measuring. Mokyr noted he once owned more than 1,000 CDs and, before that, "I spent a large amount of my graduate student budget on vinyl records." But now he could access a huge music library for a small monthly fee.

Aghion and Howitt studied the mechanisms behind sustained growth, including in a 1992 article in which they constructed a mathematical model for creative destruction.

Aghion helped shape French President Emmanuel Macron's economic program during his 2017 election campaign. More recently, Aghion co-chaired the Artificial Intelligence Commission, which in 2024 submitted a report to Macron outlining 25 recommendations to position France as a leading force in the field of AI.

"The laureates' work shows that economic growth cannot be taken for granted. We must uphold the mechanisms that underlie creative destruction, so that we do not fall back into stagnation," said John Hassler, chair of the committee for the prize in economic sciences.

One half of the 11 million Swedish kronor (nearly $1.2 million) prize goes to Mokyr and the other half is shared by Aghion and Howitt. Winners also receive an 18-carat gold medal and a diploma.

The economics prize is formally known as the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel. The central bank established it in 1968 as a memorial to Nobel, the 19th-century Swedish businessman and chemist who invented dynamite and established the five Nobel Prizes.

Since then, it has been awarded 57 times to a total of 99 laureates. Only three of the winners have been women.


Original Submission

posted by janrinok on Wednesday October 15, @05:51AM

Warp Speed! How Some Galaxies Can Move Away from Us Faster Than Light:

If there is an absolute law in the universe, it's that nothing can travel faster than the speed of light.

For science-fiction enthusiasts, that's a bit depressing. Space is big, and while the speed of light is incredibly fast to us humans, on interstellar scales it's glacially slow. Even at a photon's speed of about 300,000 kilometers per second, it's a journey of more than four years to reach just the closest star to the sun.

But we have to be careful how we state this universal law. To be more specific, nothing can move faster than the speed of light through space. That may seem like a nitpick, but it turns out to have literally cosmic importance.

To see why, we have to look at the behavior of the universe itself. One of the most important things the universe does is expand. It's getting bigger every day. The foundational observations for this, which were made more than a century ago, showed that distant galaxies appeared to be receding from us—not only that, but ones farther away were moving faster.

That's what an explosion does: at some time after the bang, the fastest-moving material will be farthest away. This is where the idea of the big bang model for the origin of the universe comes from.

The cosmic expansion can be measured as a rate, meaning at a given distance from us, a galaxy will be moving away from us at a given speed. At a different distance, a galaxy will be moving at a different speed. We measure this as a rate of expansion called the Hubble constant. Our best measurements of this give a value of about 70 kilometers per second per megaparsec. In other words, a galaxy one megaparsec from us (about 3.26 million light-years) will be receding at 70 km/sec. A galaxy two megaparsecs away will be moving twice as fast, or at 140 km/sec, and so on.

Extrapolating this, though, runs us into trouble: it's not hard to see that at some distance from us, the recession speed will equal the speed of light. If you run the numbers, just using the speed of light divided by the Hubble constant, you find that distance is about 14 billion light-years. That distance is called the Hubble sphere, and anything farther away than that would be—from our perspective—moving faster than light.
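The arithmetic above is a one-liner. A short sketch, using the article's round value of 70 km/s/Mpc for the Hubble constant (the conversion factor of ~3.26 million light-years per megaparsec is standard):

```python
# Hubble's law: recession velocity v = H0 * d.
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per megaparsec (article's value)
LY_PER_MPC = 3.2616e6     # light-years per megaparsec

def recession_velocity(d_mpc: float) -> float:
    """Recession velocity (km/s) of a galaxy d_mpc megaparsecs away."""
    return H0 * d_mpc

# The article's examples: 1 Mpc -> 70 km/s, 2 Mpc -> 140 km/s.
print(recession_velocity(1.0))   # 70.0
print(recession_velocity(2.0))   # 140.0

# Hubble sphere: the distance at which the recession speed equals c.
hubble_radius_mpc = C_KM_S / H0
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9
print(f"Hubble radius ~ {hubble_radius_gly:.1f} billion light-years")  # ~ 14.0
```

A larger measured H0 shrinks the Hubble sphere, which is one reason the exact boundary quoted in articles varies slightly.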

Here's where things get weird. ("Oh yes, here is where that happens," I expect you're thinking.) The universe is actually quite a bit bigger than that. We know the cosmos was born about 13.8 billion years ago. Within a few hundred million years, galaxies had formed. When we see the light from a distant galaxy, it's taken, say, 12 billion years to reach us, but over that time, the universe has expanded. So the light has actually had to travel much farther than 12 billion light-years to get to us. By the time the light reaches us, that galaxy is more like 23 billion light-years away.

That means there are galaxies outside our Hubble sphere, and they are moving away from us faster than the speed of light! How is that possible?

And now we come back to choosing our words carefully. Yes, nothing can move through space faster than light. But those galaxies aren't moving through space. They're moving with it.

Because—to belabor this point—space itself is expanding, and that changes everything. The ultimate speed limit only matters for material objects, but space isn't included in that definition. It can expand at whatever rate it wants, and that means at some distance from us it is expanding faster than light. Galaxies, embedded in the fabric of space, are swept along with it, so past some threshold—in this case, the boundary of the Hubble sphere—they move away from us at superluminal speeds.

An imperfect analogy: Imagine a boat on the ocean that can move across the water at 20 km/hour. If the boat is headed away from you, that's how fast you'll see it moving. But now imagine the boat's in a current moving at 30 km/hour away from you. You'd now measure the boat moving at 50 km/hour, even though the speed of the boat relative to the water is only 20. To be clear, this is only an analogy and shouldn't be taken too far. But it helps to picture how this works.

So galaxies can move away from us faster than light. But can we see them?

Naively you'd think the answer is no because space is expanding too rapidly for light to catch up to us. But that's not actually the case. Imagine a galaxy just outside our Hubble sphere. We see it moving away at just faster than light. But from its point of view, the light it emits can easily reach the edge of our Hubble sphere after some time because that edge is not as far from that galaxy. And once that light passes over that distance, it can, by definition, reach us! This means we can in fact see galaxies outside that distance, even though those galaxies are receding faster than light. Incidentally, this also means that from the perspective of galaxies just outside our own Hubble sphere, the Milky Way is moving away from them at superluminal speed. That's right: from a certain cosmic point of view, you're breaking a universal speed limit right now, even if you're sitting motionless in a chair.
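This "light can cross into the Hubble sphere" argument can be made concrete with a toy numerical model. The sketch below is an assumption-laden simplification, not the full relativistic treatment: units with c = 1, and a matter-dominated universe where H(t) = 2/(3t), so the Hubble radius c/H = 1.5t grows with time and can overtake a photon that starts outside it. The photon's proper distance D obeys dD/dt = H(t)·D − c:

```python
# Toy model: a photon aimed at us from just outside the Hubble sphere.
# Assumptions: c = 1, matter-dominated expansion H(t) = 2/(3t).
def hubble(t: float) -> float:
    return 2.0 / (3.0 * t)

t, dt = 1.0, 1e-4
D = 1.6                              # start just OUTSIDE the Hubble radius (1.5 at t=1)
receding = hubble(t) * D > 1.0       # initially receding faster than light
while D > 0.0 and t < 100.0:
    D += (hubble(t) * D - 1.0) * dt  # simple Euler step: dD/dt = H*D - c
    t += dt

print(f"initially receding: {receding}, photon arrives at t ~ {t:.2f}")
```

The photon at first loses ground, but as H falls the expansion at its location drops below c and it starts closing the gap, arriving at t ≈ 3.6 in these units. In a universe where H never decreases, it would never arrive.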

Does your head hurt yet? Yeah, I get it. But in fact I'm being quite gentle and hand-wavy about this because it's all so extremely complex. To truly understand, you have to invoke Einstein's general theory of relativity, which complicates matters greatly. Relativistic physics describes all this very well, but translating it into words is difficult. It's like trying to describe a symphony with words alone; you can talk about its volume and pitch, but without hearing the music, those words can't possibly truly describe it.

It amazes me that so many people not only understand the implications of general relativity but use that physics in their day-to-day life trying to understand the universe's behavior—a study called cosmology. And it's more wondrous that someone came up with this idea in the first place. Einstein was pretty smart, it turns out.

But it also took a huge number of astronomers from all over the world working over the past century to get us to where we are now in that cosmic exploration. And we're still very far from understanding it all. There's still so much left to learn, and—in an expanding universe—we may never reach its limit.

If you enjoyed this article, I'd like to ask for your support. Scientific American has served as an advocate for science and industry for 180 years, and right now may be the most critical moment in that two-century history.

I've been a Scientific American subscriber since I was 12 years old, and it helped shape the way I look at the world. SciAm always educates and delights me, and inspires a sense of awe for our vast, beautiful universe. I hope it does that for you, too.


Original Submission

posted by janrinok on Wednesday October 15, @01:02AM
from the I-can't-see-it-replacing-beer dept.

PsyPost has a very interesting report about the consequences of using Ayahuasca on our feelings about death.

I've been interested in the stuff, but a couple of friends who tried it reported there is a large chance of vomiting after drinking it, and I just don't like that. In point of fact, they ask you to fast before the ceremony to minimize vomiting, but to me dry heaves are worse.

Anyway, a very interesting read.

People who regularly use ayahuasca, a traditional Amazonian psychedelic drink, may have a fundamentally different way of relating to death. A new study published in the journal Psychopharmacology indicates that long-term ayahuasca users tend to show less fear, anxiety, and avoidance around death—and instead exhibit more acceptance. These effects appear to be driven not by spiritual beliefs or personality traits, but by a psychological attitude known as "impermanence acceptance."

The findings come from researchers at the University of Haifa, who sought to better understand how psychedelics influence people's thinking and behavior around mortality. According to their data, it is not belief in an afterlife or a shift in metaphysical views that predicts reduced death anxiety. Instead, the results suggest that learning to accept change and the transient nature of life may be central to how ayahuasca helps people relate more calmly to death.

Ayahuasca is a psychoactive brew traditionally used by Indigenous Amazonian groups in healing and spiritual rituals. The drink contains the powerful hallucinogen DMT (N,N-Dimethyltryptamine) along with harmala alkaloids that make it orally active. Many users describe deeply emotional, and often death-themed, visions during their experiences. These may include the sensation of personal death, symbolic rebirth, contact with deceased individuals, or feelings of ego dissolution—the temporary loss of a sense of self.

The research team, led by Jonathan David and Yair Dor-Ziderman, was interested in this recurring death-related content. Historical records, cultural traditions, and previous studies all suggest that ayahuasca frequently evokes visions or thoughts related to death. In one survey, over half of ayahuasca users said they had experienced what felt like a "personal death" during a session. Others described visions involving graves, spirits, or life-after-death themes.

Despite these consistent reports, empirical studies that systematically assess how ayahuasca affects death-related cognition and emotion remain rare. Past work has often relied on limited self-reports, lacked control groups, and overlooked possible mediating psychological factors. The current study aimed to address those gaps with a more rigorous design.

"We were motivated by the lack of research exploring how ayahuasca use might relate to the way people think about and come to terms with the most certain aspect of life: death. Most studies in this area have focused on other psychedelics and on short-term or clinical effects, while we wanted to explore longer-lasting, personality-level changes. We also wanted to understand why such changes might occur, which has been largely missing from the existing literature," David told PsyPost.

"There is a hype in popular and scientific venues regarding the efficacy of psychedelics to affect a fundamental shift in our response to the theme of death. In particular, ayahuasca has long been described as the 'vine of the dead' (translation from Quechua) and death-related themes are pervasive in ayahuasca visions," added Dor-Ziderman, a research director at the University of Haifa and visiting scholar at Padova University.

"However, there has been surprisingly little empirical work on how such encounters shape one's relationship with mortality. Furthermore, most existing studies rely on single self-report scales and overlook the unconscious, behavioral, and cognitive layers of how humans process death. We wanted to provide a comprehensive, multidimensional assessment of 'death processing,' and to identify the causal mechanisms which mediate, or account for, long-term differences in death processing between ayahuasca users and non-users."

To examine how these individuals relate to death, the researchers administered a detailed set of questionnaires and behavioral tasks. These included measures of death anxiety, fear of personal death, death-avoidant behaviors, and death acceptance. They also used implicit tasks, such as response times to death-related words, to capture unconscious reactions. The idea was to get a broad, multi-dimensional picture of how people think and feel about mortality.

The researchers found statistically significant differences between the two groups. Compared to non-users, ayahuasca users scored lower on death anxiety, were less likely to avoid thinking about death, showed fewer fear responses, and expressed greater acceptance of mortality. These patterns held true across both self-report and behavioral measures. Notably, even the subtle response time tasks pointed in the same direction.

The effect sizes were moderate to large, suggesting these differences are not just statistical artifacts. The changes showed up in emotional, cognitive, and behavioral domains alike, which the authors interpret as evidence of a generalized shift in how ayahuasca users process the idea of death.

"Although these findings should be interpreted with caution, since this was a cross-sectional and mostly self-report study, our results suggest that ayahuasca use may help people feel less anxious about death and more accepting of it, especially among long-term users," David said.

The researchers then looked at several possible explanations for these differences. They examined whether ayahuasca users held stronger beliefs in life after death, which could potentially make them less afraid of dying. They also tested for differences in personality traits, such as openness to experience or neuroticism, and trait mindfulness.

While ayahuasca users did score higher on all of these traits, none of them explained the group differences in death processing. In other words, although ayahuasca users were more open, less neurotic, more mindful, and more likely to believe in some form of existence after death, these factors did not statistically account for their lower death anxiety and higher acceptance.

Instead, one psychological variable stood out: impermanence acceptance. This concept refers to an attitude of openness toward the fact that all things—including life itself—are temporary. People who score high on impermanence acceptance tend to feel less distressed by change and more at ease with the idea that nothing lasts forever. "This is a cross-cultural concept found mainly in Buddhism that refers to the acceptance that everything is always changing, and that change is a natural part of life," David explained.

Interestingly, simple awareness of impermanence—knowing that things change—was not enough to predict lower death anxiety. It was the emotional acceptance of this fact, rather than intellectual acknowledgment, that seemed to matter.

"Our results were more decisive than we expected," Dor-Ziderman told PsyPost. "We anticipated ayahuasca users to fear death less, but we did not anticipate that impermanence acceptance—and not afterlife beliefs or personality—would emerge as the key mediator in all of the death processing indices we examined. Even self-identified materialists showed the same pattern, which challenges the common idea that psychedelic comfort with death depends on adopting metaphysical beliefs. This last point is important as it suggests that psychedelics can be beneficial regardless of ontological beliefs."
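For readers unfamiliar with mediation analysis, here is an illustrative sketch on synthetic data only (the variable names, effect sizes, and single-predictor regressions are my own simplifications, not the study's actual model, which would include covariates and bootstrapped confidence intervals). The core idea is an indirect path: use → impermanence acceptance → death anxiety, whose strength is the product of the two path coefficients:

```python
# Simplified Baron-Kenny-style mediation sketch on synthetic data.
# Hypothetical model: ayahuasca use raises impermanence acceptance (a path),
# which in turn lowers death anxiety (b path); indirect effect = a * b.
import random
random.seed(42)

n = 500
use = [random.choice([0.0, 1.0]) for _ in range(n)]         # user vs non-user
accept = [0.8 * u + random.gauss(0, 1) for u in use]        # mediator (made-up effect)
anxiety = [-0.6 * a + random.gauss(0, 1) for a in accept]   # outcome (made-up effect)

def slope(x, y):
    """Ordinary least-squares slope of y on a single predictor x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

a_path = slope(use, accept)      # positive by construction
b_path = slope(accept, anxiety)  # negative by construction
indirect = a_path * b_path       # users end up lower on anxiety *via* acceptance
print(f"a ~ {a_path:.2f}, b ~ {b_path:.2f}, indirect ~ {indirect:.2f}")
```

A mediator "explains" a group difference when, as here, the indirect product is substantial and the direct effect shrinks once the mediator is controlled for.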

The researchers also explored what aspects of ayahuasca experiences might shape this attitude toward impermanence. They looked at various factors, including frequency of use, age at first use, and how recently participants had taken the substance. None of these usage patterns predicted impermanence acceptance.

However, one aspect of the ayahuasca experience did: ego dissolution. Participants who reported stronger episodes of ego dissolution—where their usual sense of self faded or broke down—also tended to score higher on impermanence acceptance. This suggests that certain acute, subjective experiences during ayahuasca sessions may help reorient people's attitudes toward change and mortality.

The authors speculate that temporarily losing one's sense of a fixed identity may help the brain "train" for death in a psychological sense. The mind may learn that even its most stable perceptions—like the boundaries of the self—are not permanent. This realization may generalize to a broader understanding of impermanence and eventually reduce existential fear.

Another question is whether these changes apply only to ayahuasca, or whether similar effects would appear with other psychedelics. The authors are already running a follow-up study with psilocybin users, and preliminary results suggest a similar pattern, pointing toward a broader effect across psychedelic substances.

"It is important to acknowledge that the sample size of the study was relatively small (a little more than 100 participants overall), and like any other study—results need to be replicated," Dor-Ziderman told PsyPost. "In this regard we can already report that we are currently working on a replication study, this time with psilocybin (magic mushrooms) users, and it appears our results replicate. That our results replicate to an independent sample which consumes a different psychedelic indicates that our results are solid, and furthermore, that they relate to psychedelics in general and not just ayahuasca."

"The results indicated that in this regard, ayahuasca users were no different than healthy controls—their denial mechanism seemed to be intact. This finding was somewhat surprising, as in another study we found that long-term meditators' brains did evidence a shift from death denial to acceptance, and our initial assumption was that self/ego-dissolution (which both populations experienced) was the causal mechanism. So there is something to be said for training the mind and arriving at certain experiences rather than taking pharmacological 'shortcuts'—at least in regard to deeply rooted long-term effects. However, we are still investigating this and are seeking to replicate these findings, so stay tuned."

The study, "Embracing change: impermanence acceptance mediates differences in death processing between long-term ayahuasca users and non-users," was authored by Jonathan David, Aviva Berkovich-Ohana, and Yair Dor-Ziderman.


Original Submission

posted by jelizondo on Tuesday October 14, @08:20PM

A unique case of a woman with male chromosomes in her blood:

This is believed to be the first recorded case of its kind, and doctors believe her blood came from her twin brother while they were in the womb.

In every other cell in her body, Ana Paula Martins has XX chromosomes associated with female sex characteristics.

But in her blood cells she carries XY chromosomes that typically determine biological male sex.


This phenomenon was discovered in 2022, after a miscarriage experienced by Ana Paula.

During a medical examination, the gynecologist ordered a karyotype analysis, which allows for a detailed examination of an individual's chromosomes, usually from a blood sample.

"They called me from the lab and said the analysis needed to be repeated," Ana Paula recalls.

The results showed the presence of XY chromosomes in her blood, which confused both Ana and the doctors.

"I went to examine the patient and she had, so to speak, absolutely all the normal female characteristics," explains Gustavo Maciel, a gynecologist at Fleury Medicine and Health, a Brazilian health organization.

"She had a uterus, ovaries... the ovaries were functioning normally," adds this professor at the School of Medicine of the University of São Paulo.

Ana Paula was then referred to geneticist Kai Kwai at the Albert Einstein Israelita hospital in Sao Paulo.

He began detailed medical research with Professor Masiel and other experts.

During the research, Ana Paula told doctors that she had a twin brother, which was crucial to understanding her case.

Comparing their DNA showed that Ana Paula's blood cells, but only blood cells, were identical to those of her twin brother.

She had the same characteristic genetic markers.

"In the DNA of her mouth, in the DNA of her skin - she is her own, unique," says Professor Maciel.

"But in her blood, she is, in fact, her brother."

Ana Paula's case is an example of chimerism, when an organism has different genetic sets in different tissues or organs.

Certain medical therapies can lead to chimerism, such as bone marrow transplantation.

For example, when leukemia patients receive donor cells that then populate their bone marrow.

Spontaneous chimerism is "a very rare occurrence," emphasizes Dr. Kwai.

By reviewing scientific papers, researchers identified cases of twin pregnancies in other mammals in which blood exchange occurred between twins of different sexes.

Scientists assume that in the womb, the placentas of Ana Paula and her twin brother made some kind of contact, forming a connection between the blood vessels that carried the boy's blood to the girl's.

"There was a blood transfusion that we call intertwin transfusion syndrome."

"At one point, the twins' veins and arteries intertwined in the umbilical cord and he transferred all of his blood material to Ana Paula," explains Professor Maciel.

"The most astonishing thing is that this material remained in her body her entire life," he adds.

She then began producing blood with XY chromosomes, while the XX remained in other parts of her body.

"She has a little piece of her brother running through her veins," says Dr. Kwai.

The team believes that this unusual case could contribute to further research into human immunity and reproduction.

Ana Paula's body tolerates her brother's cells and they are not attacked by her immune system.

"Her case could open up new fields of research and help us better understand some issues, for example, those related to transplantation," says Professor Maciel.

There are reports of rare cases of the presence of XY chromosomes in women, but these are mostly associated with fertility problems.

However, this is not the case with Ana Paula, who became pregnant during the research and gave birth to a healthy son.

Genetic analysis showed that the child has the expected DNA - half of the chromosomes come from the mother and half from the father, and there is nothing from the uncle.

"(Ana Paula's) egg cell contains her genetic material."

"Her blood was not involved," explains Professor Maciel.

It was important for Ana Paula to discover the cause of her genetic change, but most importantly, she learned that it would not affect her pregnancy.

"It wasn't something that could get in the way of achieving my goal of having my baby," she says.


Original Submission

posted by jelizondo on Tuesday October 14, @03:34PM

Tom's Hardware published a report about the new deal between AMD and OpenAI:

OpenAI and AMD have announced a multibillion-dollar partnership that will see the companies collaborate on AI data centers powered by AMD processors. OpenAI has committed to purchasing 6 gigawatts of AMD chips, starting with the MI450 next year. That will be done either by purchasing the chips directly from AMD or through cloud computing partners. AMD CEO Lisa Su told the WSJ that the deal would generate tens of billions of dollars in revenue for AMD over the next five years.

The two companies are not disclosing the exact financial details of the deals. However, AMD emphasized that each gigawatt of capacity is worth tens of billions of dollars, so it's possible the deal is worth upwards of $60 billion.

In return, OpenAI will receive warrants for up to 160 million AMD shares, approximately 10% of AMD, at a price of $0.01 per share, to be awarded in phases, provided that OpenAI meets deployment milestones. The warrants will only be exercised if AMD's share price increases, although again, the specifics are unclear.
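Back-of-envelope arithmetic shows why those warrant terms are so striking (the exact terms were not disclosed, so these are just the reported figures taken at face value):

```python
# Reported warrant terms: up to 160M shares at $0.01 each, ~10% of AMD.
# Figures as reported; exact deal terms were not disclosed.
warrant_shares = 160_000_000
strike = 0.01           # dollars per share
stake = 0.10            # "approximately 10% of AMD"

exercise_cost = warrant_shares * strike          # cost to exercise everything
implied_total_shares = warrant_shares / stake    # share count implied by the 10% figure

print(f"cost to exercise all warrants: ${exercise_cost:,.0f}")         # ~$1.6 million
print(f"implied AMD shares outstanding: {implied_total_shares:,.0f}")  # ~1.6 billion
```

In other words, hitting the milestones would let OpenAI buy a tenth of AMD for about $1.6 million, which is why the warrants only make sense as an incentive tied to AMD's share price rising.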

The deal is an enormous win for AMD and stands juxtaposed with Nvidia's groundbreaking Intel partnership announced last month. Under the terms of that deal, Nvidia and Intel are jointly developing Intel x86 RTX SOCs for PCs featuring Nvidia graphics, as well as custom Nvidia data center x86 processors. Nvidia also received $5 billion in Intel stock as part of the deal.

OpenAI will use AMD's chips for inference in order to cope with skyrocketing demand. "It's hard to overstate how difficult it's become... We want it super fast, but it takes some time," OpenAI's Sam Altman told the WSJ.

"We are thrilled to partner with OpenAI to deliver AI compute at massive scale," said Dr. Lisa Su, chair and CEO, AMD. "This partnership brings the best of AMD and OpenAI together to create a true win-win enabling the world's most ambitious AI buildout and advancing the entire AI ecosystem."

The first deployment will be 1 gigawatt worth of MI450 chips, scheduled for the second half of 2026. Altman said the AI buildout has reached a phase "where the entire industry's got to come together and everybody's going to do super well," not only on chips and data centers, but also further down the supply chain.

OpenAI has also inked a $100 billion deal with Nvidia, and will use Nvidia's investment to secure and deploy 10 gigawatts worth of AI data centers. While that deal isn't finalized, AMD and OpenAI reportedly say this deal is "definitive" and plan to immediately file the requisite details with regulators, a step that has yet to happen in the Nvidia deal.


Original Submission

posted by jelizondo on Tuesday October 14, @10:51AM
from the next-up-fighting-proprietary-data-formats dept.

Tom's Hardware is reporting on a project by Cambridge University to rescue data trapped on old floppy disks. Magnetic media only lasts a decade or so under optimal, climate controlled storage conditions. So this task is much more fundamental than just pushing the old disks into off-the-shelf drives.

Led by the library's digital preservation team, the project aims to document and formalize best practices for floppy disk recovery, encompassing cleaning and handling methods, as well as imaging workflows. It's also pulling in expertise from the retro-computing community, whose trial-and-error techniques are often the only reason legacy formats still survive.

You can forget those cheap USB floppy drives you can buy online. Cambridge's preservationists don't just mount disks and hope for the best; they sample the raw magnetic signal itself. Specialized hardware, such as the KryoFlux and open-hardware Greaseweazle interfaces, captures the flux transitions — the tiny changes in polarity that encode data — and reconstructs the file structure later in software. This flux-level imaging process enables archivists to recover non-PC formats and identify weak or damaged sectors that would otherwise remain unread.
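To give a feel for what flux-level decoding means, here is a heavily simplified sketch (the timing values and tolerance are illustrative assumptions, and this is nothing like the real KryoFlux/Greaseweazle pipelines): the hardware records the time gaps between magnetic polarity reversals, and for double-density MFM media those gaps cluster around three nominal spacings. Classifying each gap against those buckets recovers the bit cells, while gaps that fit no bucket flag weak or damaged regions:

```python
# Simplified flux-gap classification sketch (illustrative numbers only).
NOMINAL_US = (4.0, 6.0, 8.0)   # assumed nominal MFM transition spacings, microseconds
TOLERANCE = 1.0                # assumed max deviation before a gap is flagged as bad

def classify(gaps_us):
    """Map each flux gap to its nearest nominal spacing, or None if too noisy."""
    out = []
    for gap in gaps_us:
        nearest = min(NOMINAL_US, key=lambda n: abs(n - gap))
        out.append(nearest if abs(nearest - gap) <= TOLERANCE else None)
    return out

# A made-up capture: mostly clean transitions plus one damaged/weak one.
sample = [3.9, 6.1, 8.2, 4.05, 11.7, 5.8]
print(classify(sample))   # [4.0, 6.0, 8.0, 4.0, None, 6.0]
```

Real tools additionally compensate for drive speed drift and make several read passes, merging them to recover marginal sectors, which is exactly what a naive USB drive cannot do.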

This project only addresses the matter of hardware, so far. Although that is important on its own when working for preservation, much of the data will turn out to be trapped in proprietary or DRM'd formats. Thus draconian copyright laws can impose an unnecessary non-technical barrier to the final steps of legally retrieving the data and bringing it to a usable form.

Previously:
(2025) A Story About USB Floppy Drives
(2024) PC Floppy Copy Protection: Softguard Superlok
(2024) PC Floppy Copy Protection: Formaster Copy-Lock
(2024) Japan's Digital Minister Claims Victory Against Floppy Disks
(2024) Where Are Floppy Disks Today? Planes, Trains, And All These Other Places
(2022) The Last Man Selling Floppy Disks Says He Still Receives Orders From Airlines


Original Submission

posted by hubie on Tuesday October 14, @06:11AM   Printer-friendly
from the pay-up-or-else dept.

CRM giant Salesforce has been hacked, affecting Qantas and other large corporations. While Salesforce claims to be number one in the world, a big claim in the presence of SAP and Microsoft, this recent hack shows that no system is completely secure. More than a billion records have been stolen from 39 companies, including the Qantas Frequent Flyer program, Toyota, Disney, McDonald's, and HBO Max. Hackers have threatened to release this personal data unless Salesforce pays a ransom.

The problem is that once you start paying ransoms, you don't stop paying.

Updates:
    • Salesforce refuses to submit to extortion demands linked to hacking campaigns
    • Hackers leak Qantas data containing 5 million customer records after ransom deadline passes


Original Submission

posted by hubie on Tuesday October 14, @01:24AM   Printer-friendly
from the sometimes-gov't-developments-do-work dept.

It's so common to hear that the gov't can't possibly do anything right, or for a good price, that many people believe it was always true. Here is a counter example to discuss:

https://theconversation.com/believe-it-or-not-there-was-a-time-when-the-us-government-built-beautiful-homes-for-working-class-americans-to-deal-with-a-housing-shortage-253512

In 1918, as World War I intensified overseas, the U.S. government embarked on a radical experiment: It quietly became the nation's largest housing developer, designing and constructing more than 80 new communities across 26 states in just two years.

These weren't hastily erected barracks or rows of identical homes. They were thoughtfully designed neighborhoods, complete with parks, schools, shops and sewer systems. In just two years, this federal initiative provided housing for almost 100,000 people.

Few Americans are aware that such an ambitious and comprehensive public housing effort ever took place. Many of the homes are still standing today.
[...]
Alongside housing construction, the Housing Corporation invested in critical infrastructure. Engineers installed over 649,000 feet of modern sewer and water systems, ensuring that these new communities set a high standard for sanitation and public health.

Attention to detail extended inside the homes. Architects experimented with efficient interior layouts and space-saving furnishings, including foldaway beds and built-in kitchenettes. Some of these innovations came from private companies that saw the program as a platform to demonstrate new housing technologies.

One company, for example, designed fully furnished studio apartments with furniture that could be rotated or hidden, transforming a space from living room to bedroom to dining room throughout the day.

To manage the large scale of this effort, the agency developed and published a set of planning and design standards, the first of their kind in the United States. These manuals covered everything from block configurations and road widths to lighting fixtures and tree-planting guidelines. The standards emphasized functionality, aesthetics and long-term livability.

Architects and planners who worked for the Housing Corporation carried these ideas into private practice, academia and housing initiatives. Many of the planning norms still used today, such as street hierarchies, lot setbacks and mixed-use zoning, were first tested in these wartime communities.

And many of the planners involved in experimental New Deal community projects, such as Greenbelt, Maryland, had worked for or alongside Housing Corporation designers and planners. Their influence is apparent in the layout and design of these communities.

The USA has another housing crunch now, partly (I've read) due to private capital bidding up the prices of many houses/apartments and turning them into rental units--makes a nice rate of return for the investors, but sucks for everyone else. While I have no expectations that the current administration (and their history with high end developments) would consider anything like this, it might be something to work toward in a few years, after the next election?


Original Submission

posted by hubie on Monday October 13, @08:41PM   Printer-friendly

Teachers need to be scientists themselves, experimenting and measuring the impact of powerful AI products on education:

American technologists have been telling educators to rapidly adopt their new inventions for over a century. In 1922, Thomas Edison declared that in the near future, all school textbooks would be replaced by film strips, because text was 2% efficient, but film was 100% efficient. Those bogus statistics are a good reminder that people can be brilliant technologists, while also being inept education reformers.

I think of Edison whenever I hear technologists insisting that educators have to adopt artificial intelligence as rapidly as possible to get ahead of the transformation that's about to wash over schools and society.

At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for their students. The first districts to encourage students to bring mobile phones to class did not better prepare youth for the future than schools that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet stand apart in economic growth, educational attainment or citizen well-being.

New education technologies are only as powerful as the communities that guide their use. Opening a new browser tab is easy; creating the conditions for good learning is hard.

It takes years for educators to develop new practices and norms, for students to adopt new routines, and for families to identify new support mechanisms in order for a novel invention to reliably improve learning. But as AI spreads through schools, both historical analysis and new research conducted with K-12 teachers and students offer some guidance on navigating uncertainties and minimizing harm.

[...] Today, there is a cottage industry of consultants, keynoters and "thought leaders" traveling the country purporting to train educators on how to use AI in schools. National and international organizations publish AI literacy frameworks claiming to know what skills students need for their future. Technologists invent apps that encourage teachers and students to use generative AI as tutors, as lesson planners, as writing editors, or as conversation partners. These approaches have about as much evidential support today as the CRAAP test did when it was invented.

There is a better approach than making overconfident guesses: rigorously testing new practices and strategies and only widely advocating for the ones that have robust evidence of effectiveness. As with web literacy, that evidence will take a decade or more to emerge.

But there's a difference this time. AI is what I have called an "arrival technology." AI is not invited into schools through a process of adoption, like buying a desktop computer or smartboard – it crashes the party and then starts rearranging the furniture. That means schools have to do something. Teachers feel this urgently. Yet they also need support: Over the past two years, my team has interviewed nearly 100 educators from across the U.S., and one widespread refrain is "don't make us go it alone."

[...] First, regularly remind students and teachers that anything schools try – literacy frameworks, teaching practices, new assessments – is a best guess. In four years, students might hear that what they were first taught about using AI has since proved to be quite wrong. We all need to be ready to revise our thinking.

Second, schools need to examine their students and curriculum, and decide what kinds of experiments they'd like to conduct with AI. Some parts of your curriculum might invite playfulness and bold new efforts, while others deserve more caution.

[...] Third, when teachers do launch new experiments, they should recognize that local assessment will happen much faster than rigorous science. Every time schools launch a new AI policy or teaching practice, educators should collect a pile of related student work that was developed before AI was used during teaching. If you let students use AI tools for formative feedback on science labs, grab a pile of circa-2022 lab reports. Then, collect the new lab reports. Review whether the post-AI lab reports show an improvement on the outcomes you care about, and revise practices accordingly.
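That before-and-after comparison can be as simple as averaging rubric scores from the two piles. A toy sketch with made-up numbers (the scores and the five-report sample size are purely illustrative; a real evaluation would want larger samples and a proper significance test):

```python
from statistics import mean, stdev

# Hypothetical rubric scores for the same assignment, graded on the same scale.
pre_ai  = [72, 68, 75, 70, 74]   # circa-2022 lab reports, before AI feedback
post_ai = [78, 71, 80, 77, 76]   # lab reports written with AI formative feedback

diff = mean(post_ai) - mean(pre_ai)
print(f"mean change: {diff:+.1f} points "
      f"(pre spread {stdev(pre_ai):.1f}, post spread {stdev(post_ai):.1f})")
```

Even this crude check forces the question the article is asking: did the outcome you care about actually move, or did the work just get produced faster?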

Between local educators and the international community of education scientists, people will learn a lot by 2035 about AI in schools. We might find that AI is like the web, a place with some risks but ultimately so full of important, useful resources that we continue to invite it into schools. Or we might find that AI is like cellphones, and the negative effects on well-being and learning ultimately outweigh the potential gains, and thus are best treated with more aggressive restrictions.

Everyone in education feels an urgency to resolve the uncertainty around generative AI. But we don't need a race to generate answers first – we need a race to be right.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Original Submission

posted by hubie on Monday October 13, @03:58PM   Printer-friendly

While drones flying over different parts of Europe have raised concerns in many countries, some are worried about a more dystopian future with the technology:

Russia's full-scale invasion of Ukraine could lead to a new arms race — one not defined by big submarines or loud missiles, but by small, silent drones.

Ukrainian President Volodymyr Zelenskyy addressed the prospect during his speech at the United Nations General Assembly, where he warned that it is cheaper to stop Russia now "than wondering who will be the first to create a simple drone carrying a nuclear warhead".

"We must use everything we have, together, to force the aggressor to stop. And only then do we have a real chance that this arms race won't end in catastrophe for all of us," he said. "Otherwise, [Russian President Vladimir] Putin will keep driving the war forward — wider and deeper."

Experts warn drones carrying nuclear weapons might already exist.

TASS, the Russian state-owned news agency, reported in 2023 on the manufacture of a nuclear-armed underwater drone called Poseidon.

Previously, in 2018, the U.S. Department of Defense also publicly acknowledged Russia was developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo" or underwater drone.

Mick Ryan, a retired Australian Army major general and senior fellow for military studies at the Lowy Institute, said drones with nuclear warheads "may already be a reality".

"It's something that we should be concerned about," Ryan, who is also a strategic adviser at a US drone company, Skydio, told SBS News.

"Particularly since detecting a drone underwater that's capable of very long ranges would be a significant threat to Western countries, including Australia," Ryan said.

[...] Nuclear warheads are not the only possible future predicted for drones, as politicians are warning about the use of artificial intelligence (AI) to control drones.

During his speech at the UN, Zelenskyy said "it's only a matter of time" before drones operate "all by themselves, fully autonomous, and no human involved, except the few who control AI systems".

Earlier in September, The Wall Street Journal reported that AI-powered drones were introduced on the battlefield, with Ukraine utilising technology that allows groups of drones to make decisions independently.

Ryan said the use of AI might actually help reduce civilian casualties in future warfare.

"AI might actually make them more deadly for the military and less deadly for civilians. Now, that's a perfect scenario, of course, and it's theoretical," he said.

On the other hand, there are concerns about AI gaining access to nuclear weapons.

Foreign Minister Penny Wong told the UN on Thursday: "AI's potential use in nuclear weapons and unmanned systems challenges the future of humanity."

"Decisions of life and death must never be delegated to machines," she said, urging other leaders to set rules and standards on the use of AI.

Some others have also expressed concerns about ethical and regulatory challenges related to autonomous drones.

Ryan said: "If you have AI controlling a drone that has a nuclear weapon, we should be very concerned about that."

"I think AI for conventional weapons and AI for nuclear weapons are two very different conversations with two very different forms of risk."

[...] The risk of drones, however, has not been limited only to the war zones, with a series of drone incursions being seen in Europe recently.

On Saturday, drones were spotted near military facilities in Denmark, following reports of drones being seen over Danish airports. There were also reports of drone observation in Germany, Norway and Lithuania.

Danish defence minister Troels Lund Poulsen described the incident as "systematic" and a "hybrid attack".

The Russian government has dismissed any claims of involvement in the drone incidents.

The European Union formally announced on the weekend that it will focus on developing a drone wall system in its eastern defences to defend against incursions.

[...] "The drones are the threat of today and will remain the threat of tomorrow. Definitely, no country can afford to ignore this threat and has to take action at different levels ... [and] learn from the partners, including Ukraine, who are at the forefront of developing these systems in modern warfare."


Original Submission

posted by hubie on Monday October 13, @11:12AM   Printer-friendly

Comets Lemmon and SWAN may be visible around the same time as they race across the solar system:

Skywatchers, rejoice. This month, not one but two comets are set to soar into our night skies for your viewing pleasure.

The two comets, C/2025 R2 (SWAN) and C/2025 A6 (Lemmon), were both discovered in 2025. The celestial visitors are gearing up for a close flyby of Earth in October, becoming more visible as they approach our planet. SWAN will be closest to Earth on October 19, while Lemmon is set for its own close approach on October 21. Both icy comets may even be visible to the naked eye around that time.

Astronomers spotted Lemmon in January using the Mt. Lemmon SkyCenter observatory in Arizona's Santa Catalina Mountains. The comet was speeding toward the inner solar system at speeds up to 130,000 miles per hour (209,000 kilometers per hour).

Later in September, amateur astronomer Vladimir Bezugly discovered comet SWAN in images from the SWAN instrument on NASA's SOHO satellite. The comet became significantly brighter as it emerged from the Sun's direction.

At its closest approach, SWAN will be at a distance of approximately 24 million miles (39 million kilometers) from our planet, or about a quarter of the distance between the Sun and Earth. SWAN is now at a brightness magnitude of around 5.9, according to EarthSky. The unexpectedly bright comet is currently in the southern skies, but it is slowly moving north, according to NASA.
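For readers unfamiliar with the magnitude scale: it is logarithmic and inverted (bigger number means fainter), with every 5 magnitudes corresponding to a factor of 100 in brightness. A quick sketch of the standard arithmetic (the comparison objects are our choice for illustration, not from the article):

```python
# Standard apparent-magnitude arithmetic: a 5-magnitude step is a factor of
# 100 in brightness, so the ratio between two magnitudes is 10^(0.4 * Δm).
def brightness_ratio(m_faint, m_bright):
    """How many times brighter the second object appears than the first."""
    return 10 ** (0.4 * (m_faint - m_bright))

# SWAN at magnitude ~5.9 vs. the rough naked-eye limit of ~6.0 under dark skies:
print(round(brightness_ratio(6.0, 5.9), 2))   # → 1.1 (just barely above the limit)

# For contrast, vs. a zero-magnitude star such as Vega:
print(round(brightness_ratio(5.9, 0.0), 1))   # → 229.1 (Vega is ~229x brighter)
```

That is why a magnitude-5.9 comet counts as "maybe naked-eye": it sits right at the threshold, and any light pollution pushes it below.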

Following SWAN's closest approach, comet Lemmon will be right behind. The comet will be about half the distance between the Sun and Earth before rounding the Sun on November 8. From there, it will begin its next journey around the star. Lemmon will continue to brighten as it approaches the Sun, but it will likely stay visible, and possibly become even brighter, around October 31 to November 1, according to EarthSky.

SWAN is best viewed in the Southern Hemisphere. The comet crossed into the Libra constellation on September 28, and will make its way across Scorpius on October 10. Around October 9-10, it will appear near Beta Librae, the brightest star in the Libra constellation, EarthSky reports.

It may, however, be a bit tricky to spot because its position in the skies will be close to the setting Sun. Sky watchers hoping to catch a glimpse of SWAN need to look toward the west after sunset.

Conditions are more favorable for Lemmon. The comet is best viewed in the Northern Hemisphere, where it will be positioned near the Big Dipper for most of October. Sky watchers should look to the eastern skies just before sunrise to spot the comet.

By mid-October, the comet may be easier to view. On October 16, Lemmon will pass near Cor Caroli, a binary star system in the northern constellation of Canes Venatici, according to EarthSky. Around that time, the comet could be visible to the naked eye.


Original Submission

posted by hubie on Monday October 13, @06:24AM   Printer-friendly

Some early experiments with AI are revealing the technology's shortcomings - and, by extension, the value of human workers:

This time three years ago, most people had never heard of generative AI. Today, the technology is a cultural behemoth, and businesses across virtually every industry are facing huge pressure to embrace it.

At least at first glance, customer service would seem to be a field that's particularly ripe for AI-powered automation. Chatbots specialize in fielding simple queries, while newer and more powerful agents can access a business's internal files to provide up-to-date information, send follow-up emails, and perform other complex tasks. Little wonder that a fleet of companies like Salesforce and Microsoft have been replacing human customer service reps with AI.

New research, however, suggests this could turn out to be a mistake -- that despite the huge amount of marketing gusto that's been poured into selling generative AI-powered customer service tools to businesses, the technology could in fact be doing more harm than good.

You know that relief you feel when you finally get past a customer service bot and an actual person picks up the phone? Turns out most other people seem to feel that way too, even in the age of AI.

[...] "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing."

He added that the backlash has spread to social media.

Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media.

More recently, fintech company Klarna started hiring human customer service employees again after realizing that AI was delivering a "lower quality," as company CEO Sebastian Siemiatkowski told Bloomberg. (Siemiatkowski told CNBC in May that his company's investments in AI had contributed to an employee headcount reduction of about 40%.)

A global survey of 2,000 CEOs conducted by IBM early this year found that only about one in four internal AI business initiatives has delivered expected ROI. Even more jarringly, an MIT study published in August showed that 95% of businesses' experiments with AI have not delivered any real returns.

[...] In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation...was a mistake."

Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch. At least for the time being, its shortcomings very well may overshadow its benefits.


Original Submission

posted by hubie on Monday October 13, @01:39AM   Printer-friendly

To cat owners, a litter box is a nuisance. But to scientists, it's a trove of information. A team of researchers at Nestlé Purina PetCare decided to investigate litter boxes as records of behavior: the pre-squat scratch, the whirl, the precise geometry of the bury.

The scientists built a painstaking dictionary of these gestures—a full "ethogram," or catalog, of species-specific behaviors—and then identified the distinct moves in feline bathroom habits: grooming, digging, sniffing litter. "We landed on 39 different behaviors that cats do in a litter box, with the understanding that depending on their satisfaction with the litter box, the environment and the dynamics around them, those behaviors will shift," says Ragen McGowan, director of digital and AI product development at Purina and one of the authors of a paper published recently in Applied Animal Behaviour Science on the development of Purina's AI-powered litter box monitor. "We realized this ethogram could be a window into their health."

And Imma gonna leave a link to where I found it, on Fark


Original Submission

posted by jelizondo on Sunday October 12, @08:50PM   Printer-friendly
from the resistance-is-futile-you-will-be-assimilated dept.

As a very long time user of MythTV and free OTA ATSC 1.0 TV, reading this one did not make my day:

CordCutters published news of a recent FCC decision to allow broadcasters flexibility on switching to ATSC 3.0 technology:

In a major shift for American television viewers, the Federal Communications Commission (FCC) has decided against setting a hard deadline to end the old digital TV system that powers most broadcasts and cable services today. [...] The agency, now headed by Brendan Carr, had initially pushed for a quicker switch to the advanced ATSC 3.0 technology, known as NextGen TV. But after hearing concerns from consumer groups, cable companies, and satellite providers, the FCC is choosing a more flexible, voluntary approach to make the change easier for everyone involved.

The new proposal would "tentatively conclude that television stations should be allowed to choose when to stop broadcasting in 1.0 and start broadcasting exclusively in 3.0."

To understand this, it's helpful to step back and explain the basics. For over 15 years, U.S. TV stations and multichannel video programming distributors (MVPDs)—think cable giants like Comcast or satellite services like DirecTV and DISH—have relied on ATSC 1.0. This is the standard digital TV technology that replaced fuzzy analog signals in 2009, delivering clearer pictures and more channels. It's the "original" digital TV, or what some call the "OG" of modern broadcasting. ATSC 1.0 works universally across free over-the-air antennas, cable boxes, and satellite dishes, reaching nearly every household without special upgrades.

NextGen TV, built on ATSC 3.0, promises even better features: sharper 4K video, interactive apps, and stronger signals that can cut through buildings or bad weather. It's like upgrading from a reliable old smartphone to one with a bigger screen and faster apps. The transition started voluntarily during the Biden administration, with a handful of cities testing it out. But since Trump's return in January 2025—about nine and a half months ago—the push intensified. FCC leaders wanted a nationwide shutdown of ATSC 1.0 by a set date to speed things up, arguing it would modernize broadcasting and free up airwaves for new uses.

This aggressive stance hit a wall of opposition. Consumer advocates, led by the Consumer Technology Association (CTA) and its president Gary Shapiro, warned that forcing the change too fast could leave millions of viewers in the dark. Older TVs and set-top boxes might stop working, forcing families to buy new equipment they can't afford. Cable and satellite lobbies echoed these fears, pointing out the massive costs of rewiring their networks to carry the new signals. For context, imagine every home suddenly needing a software update or new hardware just to watch local news—disruptive and expensive, especially for low-income or rural households.

The FCC's latest move, outlined in a document called the Fifth Further Notice of Proposed Rulemaking (FNPRM), listens to these voices. Instead of a mandatory cutoff, the agency proposes keeping the transition market-driven and optional. Broadcasters — the TV stations that send out signals — would get to decide when, or even if, they fully drop ATSC 1.0. Many are already "simulcasting," meaning they beam both the old and new signals at the same time, like offering two radio stations on one frequency. The FCC wants to ease rules around this, removing red tape that currently limits how long stations can keep the old signal running. This builds on policies from the Democratic-led FCC, extending the grace period without a strict timeline.

The plan also calls for ways to cut costs and smooth the ride for all players. For consumers, that could mean subsidies or incentives to upgrade TVs or antennas without breaking the bank. Manufacturers might get breaks on producing hybrid devices that handle both standards. Smaller broadcasters in rural areas, who often operate on tight budgets, would benefit from fewer mandates. And MVPDs could phase in NextGen support at their own pace, avoiding a sudden overhaul that might raise monthly bills.

But the FCC isn't stopping at flexibility—it's opening the floor for public input on trickier issues. One big question: Should new TVs sold in stores be required to receive ATSC 3.0 signals right out of the box? This echoes a famous FCC rule from the 1960s, when regulators under Chairman Newton Minow mandated UHF tuners in TVs. That move helped spark the growth of companies like Sinclair Inc., now a leading cheerleader for NextGen TV. Yet today, the CTA and others are pushing back hard, saying it could hike prices for basic sets and slow sales.

This compromise feels like a win for balance. Proponents of NextGen, like Sinclair, get regulatory green lights to experiment and expand. Critics, including the cable industry, avoid the chaos of a rushed shutdown. For everyday viewers, it means no panic-buying of new gear tomorrow. The transition, which began quietly years ago at events like a 2019 FCC symposium, can now evolve naturally. Back then, questions about integrating NextGen into cable systems lingered unanswered by groups like Pearl TV or the ATSC standards body. Today's proposal nods to those gaps, seeking fresh input.

Reflecting on history adds irony. A quarter-century ago, ATSC 1.0 was hailed as revolutionary, even as early tech from firms like Sinclair hinted at what 3.0 could become. Now, with costs in mind, the FCC is ensuring the next leap doesn't repeat past disruptions. As comments roll in over the coming months, this could shape TV for the next generation—literally. For now, Americans can keep flipping channels without fear of a digital cliff.

Hardware requirements aside, ATSC 3.0 will have DRM, which, as I understand it, will make recording impossible. I know there are far worse things going on in Washington now, but wow this sucks.


Original Submission

posted by hubie on Sunday October 12, @04:05PM   Printer-friendly
from the AI-earthquake-overlords dept.

https://arstechnica.com/science/2025/10/like-putting-on-glasses-for-the-first-time-how-ai-improves-earthquake-detection/

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven't heard of this earthquake; even if you had been living in Calipatria, you wouldn't have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes.
[...]
"In the best-case scenario, when you adopt these new techniques, even on the same old data, it's kind of like putting on glasses for the first time, and you can see the leaves on the trees," said Kyle Bradley, co-author of the Earthquake Insights newsletter.
[...]
Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven't materialized yet.

"It really was a revolution," said Joe Byrnes, a professor at the University of Texas at Dallas. "But the revolution is ongoing."
[...]
The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.
[...]
Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that "traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms."
[...]
"The field of seismology historically has always advanced as computing has advanced," Bradley told me.

There's a big challenge with traditional algorithms, though: They can't easily find smaller quakes, especially in noisy environments.
[...]
earthquakes have a characteristic "shape." The magnitude 7.7 earthquake above looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it's almost certainly an earthquake.

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross' lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known.
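The correlation step can be sketched in a few lines: slide a labeled template along the continuous trace and compute a normalized correlation at each offset, flagging near-1.0 peaks as probable repeats of the template quake. This toy version uses made-up numbers and plain Python rather than a real seismology toolkit:

```python
from math import sqrt

def normalized_xcorr(trace, template):
    """Normalized cross-correlation of a template against every window of a trace.

    Scores range from -1 to 1; a score near 1.0 means the window has the same
    shape as the template, regardless of amplitude.
    """
    n = len(template)
    tm = sum(template) / n
    td = [x - tm for x in template]                 # demeaned template
    tnorm = sqrt(sum(d * d for d in td))
    scores = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        wm = sum(w) / n
        wd = [x - wm for x in w]                    # demeaned window
        wnorm = sqrt(sum(d * d for d in wd))
        dot = sum(a * b for a, b in zip(td, wd))
        scores.append(0.0 if tnorm == 0 or wnorm == 0 else dot / (tnorm * wnorm))
    return scores

# Toy data: a tiny "waveform" template hidden inside a longer trace.
template = [0, 1, -1, 0]
trace = [0, 0, 0, 1, -1, 0, 0]
scores = normalized_xcorr(trace, template)
print(scores)  # peaks at ~1.0 at the offset where the template recurs
```

The expensive part in practice is doing this for thousands of templates against years of continuous data from many stations, which is why the Southern California catalog needed a GPU cluster.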
[...]
Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.
[...]
AI detection models solve all of these problems:

  • They are faster than template matching.
  • Because AI detection models are very small (around 350,000 parameters compared to billions in LLMs like GPT-4), they can be run on consumer CPUs.
  • AI models generalize well to regions not represented in the original dataset.

[...]
To train an AI model, scientists take large amounts of labeled data, like what's above, and do supervised training.
[...]
One influential example is Earthquake Transformer, which was developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom in 2012.
[...]
Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.
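This encoder/decoder pattern can be illustrated with a toy 1-D example: a strided convolution compresses the time axis, and a "deconvolution" (a transposed convolution, implemented here as zero-insertion followed by an ordinary convolution) expands it back so the model can point at specific samples. The kernels and lengths below are arbitrary; the real model stacks many such layers with learned kernels.

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation) with a stride."""
    k = len(kernel)
    out = [np.dot(x[i:i + k], kernel)
           for i in range(0, len(x) - k + 1, stride)]
    return np.array(out)

def transposed_conv1d(x, kernel, stride=2):
    """Upsample by inserting zeros between samples, then convolve —
    the standard construction behind 'deconvolution' layers."""
    up = np.zeros(len(x) * stride)
    up[::stride] = x
    return conv1d(up, kernel, stride=1)

x = np.arange(16, dtype=float)                            # stand-in waveform features
down = conv1d(x, np.array([0.25, 0.5, 0.25]), stride=2)   # encoder: compresses time
up = transposed_conv1d(down, np.array([0.5, 1.0, 0.5]))   # decoder: expands it back

print(len(x), len(down), len(up))
```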

The model also uses an attention layer at its midpoint to mix information between different parts of the time series. The attention mechanism is best known from large language models, where it helps pass information between words; it plays a similar role in seismographic detection.
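A minimal sketch of that attention computation follows (self-attention over a toy feature sequence; in the real model, the queries, keys, and values come from learned projections of the encoder features):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every time step builds its output
    as a softmax-weighted mix of all the value vectors."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over time steps
    return weights @ V, weights

rng = np.random.default_rng(2)
T, d = 10, 8                      # 10 time steps, 8 features each
x = rng.normal(size=(T, d))       # stand-in encoder features
out, attn = attention(x, x, x)    # self-attention: Q = K = V = x

print(out.shape)                  # (10, 8): same shape, information mixed across steps
```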
[...]
Ars has previously reported on how the introduction of ImageNet, an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained on the Stanford Earthquake Dataset (STEAD), which contains 1.2 million human-labeled segments of seismogram data from around the world. (The STEAD paper explicitly cites ImageNet as an inspiration.) Other models, like PhaseNet, were likewise trained on hundreds of thousands or millions of labeled segments.
[...]
The holy grail of earthquake science is earthquake prediction. Scientists know, for instance, that a large quake will eventually strike near Seattle, but they have little ability to say whether it will happen tomorrow or in a hundred years. Prediction precise enough to let people in affected areas evacuate would be enormously valuable.

You might think AI tools would help predict earthquakes, but that doesn't seem to have happened yet.
[...]
As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

"The schools want you to put the word AI in front of everything," Byrnes said. "It's a little out of control."

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they've seen a lot of papers based on AI techniques that "reveal a fundamental misunderstanding of how earthquakes work."
[...]
While these are real issues, and ones Understanding AI has reported on before, I don't think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely replaced one of the fundamental tasks in seismology, and for the better.

That's pretty cool.

Earthquake in SoylentNews stories:
Earthquake search on SoylentNews


Original Submission