
SoylentNews is people








posted by jelizondo on Sunday December 07, @01:59AM

https://www.tomshardware.com/tech-industry/ibm-ceo-warns-trillion-dollar-ai-boom-unsustainable-at-current-infrastructure-costs

...says there is 'no way' that infrastructure costs can turn a profit

IBM CEO Arvind Krishna used an appearance on The Verge's Decoder podcast to question whether the capital spending now underway in pursuit of AGI can ever pay for itself. Krishna said today's figures for constructing and populating large AI data centers place the industry on a trajectory where roughly $8 trillion of cumulative commitments would require around $800 billion of annual profit simply to service the cost of capital.

The claim was tied directly to assumptions about current hardware, its depreciation, and energy, rather than any solid long-term forecasts, but it comes at a time when we've seen several companies one-upping one another with unprecedented, multi-year infrastructure projects.

Krishna estimated that filling a one-gigawatt AI facility with compute hardware requires around $80 billion. The issue is that deployments of this scale are moving from the drawing board into practical planning, with leading AI companies proposing projects of tens of gigawatts each, and in some cases beyond 100 gigawatts. Krishna said that, taken together, public and private announcements point to roughly one hundred gigawatts of currently planned capacity dedicated to AGI-class workloads.

At $80 billion per gigawatt, the total reaches $8 trillion. He tied those figures to the five-year refresh cycles common across accelerator fleets, arguing that the need to replace most of the hardware inside those data centers within that window creates a compounding effect on long-term capex requirements. He also placed the likelihood that current LLM-centric architectures reach AGI at between zero and 1% without new forms of knowledge integration.
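Krishna's back-of-envelope figures can be checked directly. The sketch below only restates the article's estimates (they are not forecasts, and the 10% cost-of-capital figure is implied by his $800 billion number, not stated by him):

```python
# All values are the article's estimates.
COST_PER_GW_USD = 80e9        # compute hardware per gigawatt of capacity
PLANNED_CAPACITY_GW = 100     # roughly announced AGI-class capacity
REFRESH_YEARS = 5             # typical accelerator depreciation window

total_capex = COST_PER_GW_USD * PLANNED_CAPACITY_GW
print(f"Cumulative commitment: ${total_capex / 1e12:.1f} trillion")

# Krishna's ~$800B/yr of profit to service the capital implies a
# roughly 10% annual return on the $8T outlay.
required_annual_profit = 800e9
print(f"Implied return needed: {required_annual_profit / total_capex:.0%} per year")

# A five-year refresh cycle turns the one-off outlay into a
# recurring capex stream of total / 5 per year.
annualized_capex = total_capex / REFRESH_YEARS
print(f"Annualized refresh capex: ${annualized_capex / 1e9:.0f} billion/yr")
```

The compounding effect Krishna describes falls out of the last line: the $8 trillion is not a one-time purchase but a fleet that must be repurchased every refresh cycle.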

Krishna pointed to depreciation as the part of the calculation most underappreciated by investors. AI accelerators are typically written down over five years, and he argued that the pace of architectural change means fleets must be replaced rather than extended. "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said.

Recent financial-market criticism has centered on similar concerns. Investor Michael Burry, for example, has raised questions about whether hyperscalers can continue stretching useful-life assumptions if performance gains and model sizes force accelerated retirement of older GPUs.

The IBM chief said that ultimately, he expects generative-AI tools in their current form to drive substantial enterprise productivity, but that his concern is the relationship between the physical scale of next-gen AI infrastructure and the economics required to support it. Companies committing to these huge, multi-gigawatt campuses and compressed refresh schedules must therefore demonstrate returns that match the unprecedented capital expenditure that Krishna outlined.


Original Submission

posted by jelizondo on Saturday December 06, @09:11PM

OpenAI desperate to avoid explaining why it deleted pirated book datasets:

OpenAI may soon be forced to explain why it deleted a pair of controversial datasets composed of pirated books, and the stakes could not be higher.

At the heart of a class-action lawsuit from authors alleging that ChatGPT was illegally trained on their works, OpenAI's decision to delete the datasets could end up being a deciding factor that gives the authors the win.

It's undisputed that OpenAI deleted the datasets, known as "Books 1" and "Books 2," prior to ChatGPT's release in 2022. Created by former OpenAI employees in 2021, the datasets were built by scraping the open web, with the bulk of their data taken from a shadow library called Library Genesis (LibGen).

As OpenAI tells it, the datasets fell out of use within that same year, prompting an internal decision to delete them.

But the authors suspect there's more to the story than that. They noted that OpenAI appeared to flip-flop by retracting its claim that the datasets' "non-use" was a reason for deletion, then later claiming that all reasons for deletion, including "non-use," should be shielded under attorney-client privilege.

To the authors, it seemed like OpenAI was quickly backtracking after the court granted the authors' discovery requests to review OpenAI's internal messages on the firm's "non-use."

In fact, OpenAI's reversal only made authors more eager to see how OpenAI discussed "non-use," and now they may get to find out all the reasons why OpenAI deleted the datasets.

Last week, US magistrate judge Ona Wang ordered OpenAI to share all communications with in-house lawyers about deleting the datasets, as well as "all internal references to LibGen that OpenAI has redacted or withheld on the basis of attorney-client privilege."

According to Wang, OpenAI slipped up by arguing that "non-use" was not a "reason" for deleting the datasets, while simultaneously claiming that it should also be deemed a "reason" considered privileged.

Either way, the judge ruled that OpenAI couldn't block discovery on "non-use" just by deleting a few words from prior filings that had been on the docket for more than a year.

"OpenAI has gone back-and-forth on whether 'non-use' as a 'reason' for the deletion of Books1 and Books2 is privileged at all," Wang wrote. "OpenAI cannot state a 'reason' (which implies it is not privileged) and then later assert that the 'reason' is privileged to avoid discovery."

Additionally, OpenAI's claim that all reasons for deleting the datasets are privileged "strains credulity," she concluded, ordering OpenAI to produce a wide range of potentially revealing internal messages by December 8. OpenAI must also make its in-house lawyers available for deposition by December 19.

OpenAI has argued that it never flip-flopped or retracted anything. It simply used vague phrasing that led to confusion over whether any of the reasons for deleting the datasets were considered non-privileged. But Wang didn't buy into that, concluding that "even if a 'reason' like 'non-use' could be privileged, OpenAI has waived privilege by making a moving target of its privilege assertions."

Asked for comment, OpenAI told Ars that "we disagree with the ruling and intend to appeal."

So far, OpenAI has avoided disclosing its rationale, claiming that all the reasons it had for deleting the datasets are privileged. In-house lawyers weighed in on the decision to delete and were even copied on a Slack channel initially called "excise-libgen."

But Wang reviewed those Slack messages and found that "the vast majority of these communications were not privileged because they were 'plainly devoid of any request for legal advice and counsel [did] not once weigh in.'"

In a particularly non-privileged batch of messages, one OpenAI lawyer, Jason Kwon, only weighed in once, the judge noted, to recommend the channel name be changed to "project-clear." Wang reminded OpenAI that "the entirety of the Slack channel and all messages contained therein is not privileged simply because it was created at the direction of an attorney and/or the fact that a lawyer was copied on the communications."

The authors believe that exposing OpenAI's rationale may help prove that the ChatGPT maker willfully infringed on copyrights when pirating the book data. As Wang explained, OpenAI's retraction risked putting the AI firm's "good faith and state of mind at issue," which could increase fines in a loss.

"In a copyright case, a court can increase the award of statutory damages up to $150,000 per infringed work if the infringement was willful, meaning the defendant 'was actually aware of the infringing activity' or the 'defendant's actions were the result of reckless disregard for, or willful blindness to, the copyright holder's rights,'" Wang wrote.

In a court transcript, a lawyer representing some of the authors suing OpenAI, Christopher Young, noted that OpenAI could be in trouble if evidence showed that it decided against using the datasets for later models due to legal risks. He also suggested that OpenAI could be using the datasets under different names to mask further infringement.

Wang also found it contradictory that OpenAI continued to argue in a recent filing that it acted in good faith, while "artfully" removing "its good faith affirmative defense and key words such as 'innocent,' 'reasonably believed,' and 'good faith.'" These changes only strengthened discovery requests to explore authors' willfulness theory, Wang wrote, noting the sought-after internal messages would now be critical for the court's review.

"A jury is entitled to know the basis for OpenAI's purported good faith," Wang wrote.

The judge appeared particularly frustrated by OpenAI seemingly twisting the Anthropic ruling to defend against the authors' request to learn more about the deletion of the datasets.

In a footnote, Wang called out OpenAI for "bizarrely" citing an Anthropic ruling that "grossly" misrepresented Judge William Alsup's decision by claiming that he found that "downloading pirated copies of books is lawful as long as they are subsequently used for training an LLM."

Instead, Alsup wrote that he doubted that "any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use."

If anything, Wang wrote, OpenAI's decision to pirate book data—then delete it—seemed "to fall squarely into the category of activities proscribed by" Alsup. For emphasis, she quoted Alsup's order, which said, "such piracy of otherwise available copies is inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded."

For the authors, getting hold of OpenAI's privileged communications could tip the scales in their favor, the Hollywood Reporter suggested. Some authors believe the key to winning could be testimony from Anthropic CEO Dario Amodei, who is accused of creating the controversial datasets while he was still at OpenAI. The authors think Amodei also possesses information on the destruction of the datasets, court filings show.

OpenAI tried to fight the authors' motion to depose Amodei, but a judge sided with the authors in March, compelling Amodei to answer their biggest questions on his involvement.

Whether Amodei's testimony is a bombshell remains to be seen, but it's clear that OpenAI may struggle to overcome claims of willful infringement. Wang noted there is a "fundamental conflict" in circumstances "where a party asserts a good faith defense based on advice of counsel but then blocks inquiry into their state of mind by asserting attorney-client privilege," suggesting that OpenAI may have substantially weakened its defense.

The outcome of the dispute over the deletions could influence OpenAI's calculus on whether it should ultimately settle the lawsuit. Ahead of the Anthropic settlement—the largest publicly reported copyright class action settlement in history—authors suing pointed to evidence that Anthropic became "not so gung ho about" training on pirated books "for legal reasons." That seems to be the type of smoking-gun evidence that authors hope will emerge from OpenAI's withheld Slack messages.


Original Submission

posted by jelizondo on Saturday December 06, @04:29PM

https://www.extremetech.com/computing/new-ddr5-memory-overclocking-world-record-set-at-13530-mts

Intel, Gigabyte, and Corsair have once again broken the memory overclocking world record, pushing the limit past 13,500 megatransfers per second (MT/s). Using a Gigabyte Z890 Aorus Tachyon Ice board and Corsair's Vengeance memory, the overclockers pulled out all the stops to set this new record, wielding liquid nitrogen, special chip-and-stick binning, and hours upon hours of testing.

Most DDR5 kits are designed to operate at around 5,000 to 6,000 MT/s, with the very latest CUDIMM kits offering speeds in excess of 8,000 or even 9,000 MT/s, though those can only be used with the very latest Intel CPUs. But overclocking records are not about mainstream or even viable performance; they're just about pushing the limits, which this team of overclockers well and truly did.

Overclockers Sergmann and HiCookie screened around 50 CPUs and over 20 different memory kits to find the winning combination to break the record, according to VideoCardz. Then they paired them with buckets of LN2, which lowered the chip's operating temperature to -196 degrees Celsius. The motherboard was insulated, though it was already designed for extreme overclocking, with physical controls, limited features, and memory placement specifically adjusted to maximize cooling and overclocking potential.

The result is a brand-new world record: 13,530 MT/s. It wasn't benchmark-stable, as it required disabling most CPU cores, running just a single DDR5 stick by itself, and a very minimal operating system build, all in the name of getting the OS to boot just long enough to validate the overclock through CPU-Z.

Don't expect your next XMP profile to have anything like this, but it shows just how capable DDR5 can be under the right circumstances. Expect to see these sorts of speeds when DDR6 memory debuts with future generation CPUs and socket designs—think AM6 in 2027 and beyond.


Original Submission

posted by janrinok on Saturday December 06, @11:41AM

Let's Encrypt to Reduce Certificate Validity from 90 Days to 45 Days:

Let's Encrypt has officially announced plans to reduce the maximum validity period of its SSL/TLS certificates from 90 days to 45 days.

The transition, which will be completed by 2028, aligns with broader industry shifts mandated by the CA/Browser Forum Baseline Requirements.

This move is designed to enhance internet security by limiting the window of compromise for stolen credentials and improving the efficiency of certificate revocation technologies.

In addition to shortening certificate lifespans, the Certificate Authority (CA) will drastically reduce the "authorization reuse period," the duration for which a validated domain control remains active before re-verification is required.

Currently set at 30 days, this period will shrink to just 7 hours by the final rollout phase in 2028.

To minimize service disruption for millions of websites, Let's Encrypt is using ACME Profiles to stagger deployments. The changes will first be introduced via opt-in profiles before becoming the default standard for all users.

While most automated environments will handle these changes seamlessly, the shortened validity period necessitates a review of current renewal configurations.

Administrators relying on hardcoded renewal intervals, such as a cron job running every 60 days, will face outages, as certificates will expire before the renewal triggers.

Let's Encrypt advises that acceptable client behavior involves renewing certificates approximately two-thirds of the way through their lifetime.

To facilitate this, the organization recommends enabling ACME Renewal Information (ARI), a feature that allows the CA to signal precisely when a client should renew.

Manual certificate management is strongly discouraged, as the administrative burden of renewing every few weeks increases the likelihood of human error and expired certificates.
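The two-thirds guidance above is easy to sketch. The function name here is illustrative, not part of any ACME client's API:

```python
from datetime import datetime, timedelta

def renewal_point(not_before: datetime, not_after: datetime) -> datetime:
    """Renew roughly two-thirds of the way through a certificate's
    lifetime, per Let's Encrypt's guidance."""
    return not_before + (not_after - not_before) * 2 / 3

issued = datetime(2028, 1, 1)
# A 45-day certificate should be renewed after about 30 days,
# so a hardcoded 60-day cron interval would miss expiry by two weeks.
print(renewal_point(issued, issued + timedelta(days=45)))
```

The same function yields a 60-day renewal point for today's 90-day certificates, which is why a fixed 60-day cron job works now but breaks under the new lifetimes.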

The reduction in authorization reuse means clients must prove domain control more frequently. To address the friction this causes for users who cannot easily automate DNS updates, Let's Encrypt is collaborating with the IETF to standardize a new validation method: DNS-PERSIST-01.

Expected to launch in 2026, this protocol allows for a static DNS TXT entry. Unlike the current DNS-01 challenge, which requires a new token for every renewal, DNS-PERSIST-01 permits the initial verification record to remain unchanged.

This development will enable automated renewals for infrastructure where dynamic DNS updates are restricted or technically difficult, reducing the reliance on cached authorizations.


Original Submission

posted by janrinok on Saturday December 06, @06:53AM
from the so-long-and-thanks-for-all-the-fish dept.

Micron cites AI data center demand as reason for killing DIY upgrade brand:

On Wednesday, Micron Technology announced it will exit the consumer RAM business in 2026, ending 29 years of selling RAM and SSDs to PC builders and enthusiasts under the Crucial brand. The company cited heavy demand from AI data centers as the reason for abandoning its consumer brand, a move that will remove one of the most recognizable names in the do-it-yourself PC upgrade market.

"The AI-driven growth in the data center has led to a surge in demand for memory and storage," Sumit Sadana, EVP and chief business officer at Micron Technology, said in a statement. "Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments."

Micron said it will continue shipping Crucial consumer products through the end of its fiscal second quarter in February 2026 and will honor warranties on existing products. The company will continue selling Micron-branded enterprise products to commercial customers and plans to redeploy affected employees to other positions within the company.

[...] The surprise announcement from Micron follows a period of rapidly escalating memory prices, as we reported in November. A typical 32GB DDR5 RAM kit that cost around $82 in August now sells for about $310, and higher-capacity kits have seen even steeper increases.

DRAM contract prices have increased 171 percent year over year, according to industry data. Gerry Chen, general manager of memory manufacturer TeamGroup, warned that the situation will worsen in the first half of 2026 once distributors exhaust their remaining inventory. He expects supply constraints to persist through late 2027 or beyond.
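For scale, the reported retail and contract moves can be expressed as multiples; a quick check of the article's numbers (illustrative arithmetic only):

```python
# Reported 32GB DDR5 kit prices, USD (from the article).
aug_price, now_price = 82.0, 310.0
increase = (now_price - aug_price) / aug_price
print(f"Retail kit increase since August: {increase * 100:.0f}%")  # roughly 278%

# A 171% year-over-year rise in contract prices means current
# contracts cost about 2.7x what they did a year ago.
contract_multiple = 1 + 1.71
print(f"Contract price multiple: {contract_multiple:.2f}x")
```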

The fault lies squarely at the feet of AI mania in the tech industry. The construction of new AI infrastructure has created unprecedented demand for high-bandwidth memory (HBM), the specialized DRAM used in AI accelerators from Nvidia and AMD. Memory manufacturers have been reallocating production capacity away from consumer products toward these more profitable enterprise components, and Micron has presold its entire HBM output through 2026.

[...] For Micron, the calculus is clear: Enterprise customers pay more and buy in bulk. But for the DIY PC community, the decision will leave PC builders with one fewer option when reaching for the RAM sticks. In his statement, Sadana reflected on the brand's 29-year run.

"Thanks to a passionate community of consumers, the Crucial brand has become synonymous with technical leadership, quality and reliability of leading-edge memory and storage products," Sadana said. "We would like to thank our millions of customers, hundreds of partners and all of the Micron team members who have supported the Crucial journey for the last 29 years."

Also see: Micron ditches consumer memory brand Crucial to chase AI riches


Original Submission

posted by janrinok on Saturday December 06, @02:12AM

https://www.extremetech.com/computing/raspberry-pi-launches-1gb-model-at-45-temporarily-raises-prices-on-higher

Once AI-driven DRAM pricing normalizes, Raspberry Pi says it intends to reduce prices back down to align with its low-cost computing mission.

On Monday, Raspberry Pi launched a new 1GB variant of Raspberry Pi 5 priced at $45. The brand simultaneously raised prices on most higher-capacity boards due to surging LPDDR4 memory costs driven by AI demand.

The company announced that beginning this week, quite a few Raspberry Pi 4 and 5 models will cost more:

Raspberry Pi 4

        4GB model: $55 to $60

        8GB model: $75 to $85

Raspberry Pi 5

        1GB entry-level model: launches at $45

        2GB model: $50 to $55

        4GB model: $60 to $70

        8GB model: $80 to $95

        16GB model: $120 to $145

Compute Module 5

        16GB variants: increase of $20

Lower-capacity Raspberry Pi 4 boards, all Raspberry Pi 3+ and earlier models, and Raspberry Pi Zero products will maintain their existing prices. The classic 1GB Raspberry Pi 4 remains at $35.
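The Pi 5 hikes scale with memory capacity, which is consistent with DRAM cost being the driver. A quick calculation over the listed prices (assuming, based on the known Pi 5 lineup, that the $50 tier is the 2GB board):

```python
# Old vs. new Raspberry Pi 5 prices (USD) from the list above;
# the 2GB label for the $50 tier is inferred, not stated.
pi5_prices = {"2GB": (50, 55), "4GB": (60, 70), "8GB": (80, 95), "16GB": (120, 145)}
for model, (old, new) in pi5_prices.items():
    hike = (new - old) / old * 100
    print(f"Pi 5 {model}: ${old} -> ${new} (+{hike:.1f}%)")
```

The percentage hike grows with capacity, from about 10% on the smallest paid tier to just over 20% on the 16GB board.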

Raspberry Pi says the price increases help secure memory supplies through a constrained market in 2026. LPDDR4 memory costs are rising sharply as manufacturers shift production to AI-supporting memory types, such as HBM and newer generations. Industry analysts report that commodity memory prices have climbed by more than 100% year-on-year and could roughly double again by mid-2026.

The company frames these hikes as temporary. When (or if) AI-driven DRAM pricing normalizes, Raspberry Pi will reportedly bring prices back down to align with its low-cost computing mission.


Original Submission

posted by janrinok on Friday December 05, @09:24PM

Supreme Court hears case that could trigger big crackdown on Internet piracy:

Supreme Court justices expressed numerous concerns today in a case that could determine whether Internet service providers must terminate the accounts of broadband users accused of copyright infringement. Oral arguments were held in the case between cable Internet provider Cox Communications and record labels led by Sony.

Some justices were skeptical of arguments that ISPs should have no legal obligation under the Digital Millennium Copyright Act (DMCA) to terminate an account when a user's IP address has been repeatedly flagged for downloading pirated music. But justices also seemed hesitant to rule in favor of record labels, with some of the debate focusing on how ISPs should handle large accounts like universities where there could be tens of thousands of users.

"There are things you could have done to respond to those infringers, and the end result might have been cutting off their connections, but you stopped doing anything for many of them," Justice Sonia Sotomayor said to attorney Joshua Rosenkranz, who represents Cox. "You didn't try to work with universities and ask them to start looking at an anti-infringement notice to their students. You could have worked with a multi-family dwelling and asked the people in charge of that dwelling to send out a notice or do something about it. You did nothing and, in fact, counselor, your clients' sort of laissez-faire attitude toward the respondents is probably what got the jury upset."

A jury ordered Cox to pay over $1 billion in 2019, but the US Court of Appeals for the 4th Circuit overturned that damages verdict in February 2024. The appeals court found that Cox did not profit directly from copyright infringement committed by its users, but affirmed the jury's separate finding of willful contributory infringement. Cox is asking the Supreme Court to clear it of willful contributory infringement, while record labels want a ruling that would compel ISPs to boot more pirates from the Internet.

Rosenkranz countered that Cox created its own anti-infringement program, sent out hundreds of warnings a day, suspended thousands of accounts a month, and worked with universities. He said that "the highest recidivist infringers" cited in the case were not individual households, but rather universities, hotels, and regional ISPs that purchase connectivity from Cox in order to resell it to local users.

If Sony wins the case, "those are the entities that are most likely to be cut off first because those are the ones that accrue the greatest number of [piracy notices]," the Cox lawyer said. Even within a multi-person household where the IP address is caught by an infringement monitoring service, "you still don't know who the individual [infringer] is," he said. At another point in the hearing, he pointed out that Sony could sue individual infringers directly instead of suing ISPs.

Justice Amy Coney Barrett asked Cox, "What incentive would you have to do anything if you won? If you win and mere knowledge [of infringement] isn't enough, why would you bother to send out any [copyright] notices in the future? What would your obligation be?"

Rosenkranz answered, "For the simple reason that Cox is a good corporate citizen that cares a lot about what happens on its system. We do all sorts of things that the law doesn't require us to do." After further questioning by Barrett, Rosenkranz acknowledged that Cox would have no liability risk going forward if it wins the case.

Justice Elena Kagan said the DMCA safe harbor, which protects entities from liability if they take steps to fight infringement, would "seem to do nothing" if the court sides with Cox. "Why would anybody care about getting into the safe harbor if there's no liability in the first place?" she said.

Kagan also criticized Sony's case. She pointed to the main principles underlying Twitter v. Taamneh, a 2023 ruling that protected Twitter against allegations that it aided and abetted ISIS in a terrorist attack. Kagan said the Twitter case and the Smith & Wesson case involving gun sales to Mexican drug cartels show that there are strict limits on what kinds of behavior are considered aiding and abetting.

Kagan described how the cases show there is a real distinction between nonfeasance (doing nothing) and misfeasance, that treating one customer like everyone else is not the same as providing special assistance, and that a party "must seek by your action to make it occur" in order to be guilty of aiding and abetting.

"If you look at those three things, you fail on all of them," Kagan said to attorney Paul Clement, who represents Sony. "Those three things are kind of inconsistent with the intent standard you just laid out."

Clement said that to be held liable, an Internet provider "has to know that specified customers are substantially certain to infringe" and "know that providing the service to that customer will make infringement substantially certain."

Justice Neil Gorsuch indicated that determining secondary liability for Internet providers should be taken up by Congress before the court expands that liability on its own. "Congress still hasn't defined the contours of what secondary liability should look like. Here we are debating them, so shouldn't that be a flag of caution for us in expanding it too broadly?"

Clement tried to keep the focus on residential customers, saying that 95 percent of infringing customers are residential users. But he faced questions about how ISPs should handle much larger customers where one or a few users infringe.

Justice Samuel Alito questioned Clement about what ISPs should do with a university where some students infringe. Alito didn't seem satisfied with Clement's response that "the ISP is supposed to sort of have a conversation with the university."

Alito said that after an ISP tells a university, "a lot of your 50,000 students are infringing... the university then has to determine which particular students are engaging in this activity. Let's assume it can even do that, and so then it knocks out 1,000 students and then another 1,000 students are going to pop up doing the same thing. I just don't see how it's workable at all."

Clement said that hotels limit speeds to restrict peer-to-peer downloading, and suggested that universities do the same. "I don't think it would be the end of the world if universities provided service at a speed that was sufficient for most other purposes but didn't allow the students to take full advantage of BitTorrent," he said. "I could live in that world. But in all events, this isn't a case that's just about universities. We've never sued the universities."

Barrett replied, "It seems like you're asking us to rely on your good corporate citizenship too, that you wouldn't go after the university or the hospital."

Kagan said that if Sony wins, Cox would have little incentive to cooperate with copyright holders. "It seems to me the best response that Cox could have is just to make sure it never reads any of your notices ever again, because all of your position is based on Cox having knowledge of this," she said.

Clement argued in response that "I think willful blindness would satisfy the common law standard for aiding and abetting."

Some of the discussion focused on the legal concepts of purpose and intent. Cox has argued that knowledge of infringement "cannot transform passive provision of infrastructure into purposeful, culpable conduct." Sony has said Cox exhibited both "purpose and intent" to facilitate infringement when it continued providing Internet access to specific customers with the expectation that they were likely to infringe.

Sotomayor said Cox's position is "that the only way you can have aiding and abetting in this field is if you have purpose," while Sony is saying, "we don't have to prove purpose, we have to prove only intent." Sotomayor told Clement that "we are being put to two extremes here. The other side says, 'there's no liability because we're just putting out into the stream of commerce a good that can be used for good or bad, and we're not responsible for the infringer's decision.'"

Sotomayor said the question of purpose vs. intent may be decided differently based on whether Cox's customer is a residence or a regional ISP that buys Cox's network capacity and resells it to local customers. Sotomayor said she is reluctant "to say that because one person in that region continues to infringe, that the ISP is materially supporting that infringement because it's not cutting off the Internet for the 50,000 or 100,000 people who are represented by that customer."

But a single-family home contains a small number of people, and an ISP may be "materially contributing" to infringement by providing service to that home, Sotomayor said. "How do we announce a rule that deals with those two extremes?" she asked.

Clement argued that the DMCA's "safe harbor takes care of the regional ISPs. Frankly, I'm not that worried about the regional ISPs because if that were really the problem, we could go after the regional ISPs."

Cox's case has support from the US government. US Deputy Solicitor General Malcolm Stewart told justices today that "in copyright law and more generally, this form of secondary liability is reserved for persons who act for the purpose of facilitating violations of law. Because Cox simply provided the same generic Internet services to infringers and non-infringers alike, there is no basis for inferring such a purpose here."

Sotomayor asked Stewart if he's worried that a Cox win would remove ISPs' economic incentive to control copyright infringement. "I would agree that not much economic incentive would be left," Stewart replied. "I'm simply questioning whether that's a bad thing."

Stewart gave a hypothetical in which an individual Internet user is sued for infringement in a district court. The district court could award damages and impose an injunction to prevent further infringement, but it probably couldn't "enjoin the person from ever using the Internet again," Stewart said.

"The approach of terminating all access to the Internet based on infringement, it seems extremely overbroad given the centrality of the Internet to modern life and given the First Amendment," he said.

Oral arguments ended with a reply from Rosenkranz, who said Clement's suggestion that ISPs simply "have a conversation" with universities is "a terrible answer from the perspective of the company that is trying to figure out what its legal obligations are [and] facing crushing liabilities." Rosenkranz also suggested that record labels pay for ISPs' enforcement programs.

"The plaintiffs have recourse," he said. "How about a conversation with the ISPs where they talk about how to work out things together? Maybe they kick in a little money. Now, they won't get billion-dollar verdicts, but if they believe that the programs that Cox and others have aren't satisfactory, they can design better programs and help pay for them."


Original Submission

posted by jelizondo on Friday December 05, @03:39PM   Printer-friendly

https://www.techspot.com/news/110441-oracle-credit-risk-hits-three-year-high-ai.html

A key measure of credit risk linked to Oracle has climbed to its highest level in three years, and Wall Street analysts warn that pressure is likely to intensify next year unless the company does more to explain how it will fund its artificial intelligence expansion. The shift reflects mounting anxiety over the scale, structure, and timing of Oracle's borrowing as it races to add data center capacity for AI workloads.

Morgan Stanley credit analysts Lindsay Tyler and David Hamburger describe several fault lines: a growing gap between spending and available funding, a balance sheet that continues to swell, and the possibility that assets built for today's AI architectures could become outdated faster than expected. They argue that these issues are now being priced directly into the cost of default protection on Oracle's debt.

The cost of five-year credit default swaps on Oracle rose to 1.25 percentage points a year in late November, according to ICE Data Services, marking the highest level since 2022. That means buyers of protection are paying 125 basis points annually to insure against a default over five years, a sharp step-up from earlier in the AI cycle. The swaps are now close enough to crisis-era territory that analysts are openly discussing whether the all-time peak set in 2008 could be challenged.
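To make the quoted figures concrete, here is a minimal sketch of how a CDS spread quoted in basis points translates into the annual cost of protection on a given notional. This is illustrative arithmetic only: real CDS pricing involves upfront payments, quarterly premium legs, and day-count conventions, all ignored here, and the $10 million notional is a made-up example, not a figure from the article.

```python
def annual_protection_cost(notional: float, spread_bps: float) -> float:
    """Simple annual premium paid by the protection buyer.

    Ignores upfront payments, accrual conventions, and the quarterly
    premium schedule used in practice; illustrative only.
    """
    return notional * spread_bps / 10_000  # 1 bp = 0.01% of notional


# 125 bps (1.25 percentage points) on a hypothetical $10 million
# of Oracle debt:
cost = annual_protection_cost(10_000_000, 125)
print(f"${cost:,.0f} per year")  # $125,000 per year
```

At the 1.5 or 2.0 percentage-point levels the Morgan Stanley analysts discuss, the same notional would cost $150,000 or $200,000 per year to insure.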

In their note, Tyler and Hamburger say Oracle's five-year CDS spread could break above 1.5 percentage points in the near term and might move toward two percentage points if the company continues to provide only limited detail about its financing strategy as 2026 approaches.

For reference, Oracle's default swaps hit a record 1.98 percentage points during the 2008 financial crisis. Oracle declined to comment on the assessment or the recent trading in its credit protection.

Oracle has become one of the main corporate symbols of AI-related credit risk because it relies heavily on debt markets to support its infrastructure plans. In September, the company raised $18 billion in the US investment-grade bond market, adding a large new slug of conventional corporate debt to its capital structure.

Weeks later, a syndicate of roughly 20 banks arranged an additional $18 billion project finance loan to build a major data center campus in New Mexico, with Oracle set to step in as the tenant once the facilities are completed.

On top of that, banks are assembling a separate $38 billion loan package to back the construction of data centers in Texas and Wisconsin being developed by Vantage Data Centers, where Oracle is expected to be the anchor tenant.

Over the past two months, Tyler and Hamburger say it has become clearer that these construction loans, rather than Oracle's traditional bond financing alone, are a major driver of hedging flows. The analysts also note that some of this hedging might unwind if and when the originating banks distribute pieces of the loans to other investors, though they stress that new holders may then choose to hedge as well.

The result is an environment in which both banks and bondholders use Oracle's CDS as a flexible tool to manage exposure to AI-linked credit risk. Morgan Stanley previously argued that near-term credit deterioration and uncertainty would fuel further hedging among traditional bond investors, direct lenders, and "thematic" players who want a macro way to trade the AI spending boom.

In their latest comments, the analysts say both the bondholder and thematic hedging dynamics could become more critical over time as the market internalizes the scale of Oracle's commitments.

This has had visible consequences in performance metrics. Oracle's CDS spreads have widened more than those of the broader investment-grade CDX index, indicating that investors are demanding a higher premium to insure Oracle than for the average high-grade borrower. At the same time, Oracle's cash bonds have lagged the Bloomberg high-grade corporate index, reflecting weaker demand for its debt amid hedging activity and heightened concerns about leverage.

The Morgan Stanley team has also changed its recommended trading stance. Earlier this year, the bank had favored a "basis trade" that involved buying Oracle bonds and simultaneously buying CDS protection, on the view that the derivative spreads would widen more than the underlying bond spreads.

Now, the analysts are abandoning the bond leg of that strategy. They maintain that an outright CDS trade is cleaner in the current environment and more likely to benefit from further spread widening if concerns about Oracle's funding plans, balance sheet trajectory, and AI spending persist. For investors, that recommendation underscores how a company at the center of the AI race has also become a preferred vehicle for expressing caution about the financial risks of that race.


Original Submission

posted by jelizondo on Friday December 05, @10:54AM   Printer-friendly
from the how-about-not-throwing-the-baby-out-with-the-bathwater dept.

Hacker Dave Cross has written a short blog post about how Perl's early success sowed the seeds of its downfall or, as he puts it, made it a victim of the Dotcom Survivor Syndrome. From the 90s through the 00s, Perl had been not just part of the WWW but in many ways instrumental in actually creating the WWW as we knew it in its prime. Perl and the community around it have improved a lot in the last 25 years, even if the versioning might disguise that fact.

To understand the long shadow Perl casts, you have to understand the speed and pressure of the dot-com boom.

We weren't just building websites.
We were inventing how to build websites.

Best practices? Mostly unwritten.
Frameworks? Few existed.
Code reviews? Uncommon.
Continuous integration? Still a dream.

The pace was frantic. You built something overnight, demoed it in the morning, and deployed it that afternoon. And Perl let you do that.

But that same flexibility—its greatest strength—became its greatest weakness in that environment. With deadlines looming and scalability an afterthought, we ended up with:

Thousands of lines of unstructured CGI scripts
Minimal documentation
Global variables everywhere
Inline HTML mixed with business logic
Security holes you could drive a truck through

When the crash came, these codebases didn't age gracefully. The people who inherited them, often the same people who now run engineering orgs, remember Perl not as a powerful tool, but as the source of late-night chaos and technical debt.
[...]

It did not help that there has been every appearance of ongoing M$ whisper campaigns maligning Perl since the 00s. For text processing, there is still nothing better. And, as has been pointed out countless times already, the WWW is text (i.e. XML and co).

Previously:
(2020) Announcing Perl 7
(2019) Perl Is Still The Goddess For Text Manipulation
(2017) Perl, the Glue That Holds the Internet (and SN) Together, Turns 30 This Year


Original Submission

posted by jelizondo on Friday December 05, @06:01AM   Printer-friendly

https://arstechnica.com/space/2025/12/space-ceo-explains-why-he-believes-private-space-stations-are-a-viable-business/

It's a critical time for companies competing to develop a commercial successor to the International Space Station. NASA is working with several companies, including Axiom Space, Voyager Technologies, Blue Origin, and Vast, to develop concepts for private stations where it can lease time for its astronauts.

The space agency awarded Phase One contracts several years ago and is now in the final stages of writing requirements for Phase Two after asking for feedback from industry partners in September. This program is known as Commercial LEO Destinations, or CLDs in industry parlance.

Time is running out for NASA if it wants to establish continuity from the International Space Station, which will reach its end of life in 2030, with a follow-on station ready to go before then.

One of the more intriguing companies in the competition is Voyager Technologies, which recently announced a strategic investment from Janus Henderson, a global investment firm. In another sign that the competition is heating up, Voyager also just hired John Baum away from Vast, where he was the company's business development leader.

To get a sense of this competition and how Voyager is coming along with its Starlab space station project, Ars spoke with the firm's chairman, Dylan Taylor. This conversation has been lightly edited for clarity.

Ars: I know a lot of the companies working on CLDs are actively fundraising right now. How is this coming along for Voyager and Starlab?

Dylan Taylor: Fundraising is going quite well. You saw the Janus announcement. That's significant for a few reasons. One is, it's a significant investment. Of course, we're not disclosing exactly how much. (Editor's note: It likely is on the order of $100 million.) But the more positive development on the Janus investment is that they are such a well-known, well-respected financial investor.

If you look at the kind of bellwether investors, Janus would be up there with a Blackstone or BlackRock or Fidelity. So it's significant not only in terms of capital contribution, but in... showing that commercial space stations are investable. This isn't money coming from the Gulf States. It's not a syndication of a bunch of $1,000 checks from retail investors. This is a very significant institutional investor coming in, and it's a signal to the market. They did significant diligence on all our competitors, and they went out of their way to say that we have far and away the best business plan, best design, and everything else, so that's why it's so meaningful.

Ars: How much funding do you need to raise to complete Starlab?

Dylan Taylor: We currently estimate the cost to design, manufacture, and launch Starlab to be approximately $2.8 to $3.3 billion. And then if you look at what's anticipated in Phase Two in the NASA services contracts, it's about a $700 million capital plug that we need to raise in the market, and we're well on our way on that. We're not going to raise all of that now because obviously, after we win Phase Two, there will be a significant markup in valuation, and we'll have the ability to raise additional capital at that time. So we're only raising what we need at this stage of the project.

Ars: How are you coming as far as progress on your initial contract with NASA?

Dylan Taylor: We have our CDR (critical design review) coming up. It's December 15 to 18. We have achieved 27 milestones. We have four milestones left on our CLD Phase One contract.

Ars: You've changed your partners on the project a little bit. Where are you now on that?

Dylan Taylor: We moved the structure construction from Bremen, Germany, to Louisiana. That will be constructed by Vivace. So the structure will be made in the US. We have a significant presence, as you know, in Houston. We'll have it in Louisiana. And we just added Leidos to the team, so there'll be a big Huntsville component to our test and integration as well. So the key partners right now in terms of equity ownership and the joint venture are ourselves, Airbus, Mitsubishi, Palantir, Space Applications Services, and MDA. And then additional partners who are on the team that aren't equity holders include Northrop, Leidos, and Hilton Hotels.

Ars: What is your current timeline for development?

Dylan Taylor: We're still on 2029. I don't anticipate that pushing out for any reason in the near term. Obviously, if we had a significant delay on Phase Two selection, that could impact things. You know, some people think that we have Starship risk. In my view, I'm highly confident Starship will be ready to go when we're ready to launch. If it's not, based on the New Glenn upgrades that were recently announced, if they're successful in implementing those, then theoretically New Glenn could also launch us. As you know, we've got a launch agreement with SpaceX on Starship, so that's still the plan.

Ars: I would not consider a 2029 Starship launch date a major risk.

Dylan Taylor: Yeah, exactly. I'm not concerned about it. But there are people who are concerned. They bring it up a lot. Now, that being said, not to pick on the other players, but my understanding is Axiom has to launch on Falcon Heavy. I'm not sure SpaceX is that excited to do a Falcon Heavy launch, so in my mind, that could be a potential risk for them. Maybe, I don't know.

Ars: What was your reaction to the directive that came out in August from NASA interim administrator Sean Duffy on commercial space stations?

Dylan Taylor: I was surprised at the fact that they appeared to be backing off the requirements a bit. You know, I don't know where it (the Phase Two Request for Proposals from NASA) ends up. That's anybody's guess. But if I were to bet, I would think it would be more similar to the original procurement strategy than the memo. But we won't know until it comes out.

Ars: Obviously, there is still an interim administrator at NASA. We had a government shutdown for a month. What's your current understanding of the timeline for the Phase Two process?

Dylan Taylor: The last information we have is that they still expected to send the RFP out by the end of the year, and then have Phase Two selection sometime late Q1, early Q2 next year. That information was mostly communicated prior to the government shutdown. So I think with the government shutdown—I'm guessing here because I don't know—but I think you probably roll forward 45 days or so. If that's the case, we're probably looking at an RFP in January and a selection probably in June or July. That's our best estimate based upon what we have been told.

Ars: We're now under five years from the International Space Station coming down. There's still a lot of work to be done for replacement. I think it's clear there are some challenges for this program, not speaking specifically about Starlab but just the general idea of commercial space stations. What advice would you have for Jared Isaacman to help make sure the CLD program is a success for NASA and the country?

Dylan Taylor: I know Jared, and I'm very optimistic. He's very, very smart, a very capable person. He's pro-commercial space. Based on his testimony and just what I know about him, he believes that commercial solutions are often better than government solutions. So I'm very optimistic he's going to be a transformational administrator. I think it's very good for the industry. I think the advice I would have for him on this program would be the same advice I'd have for him on all programs. And it's just simply clarity—clarity of mission, clarity of requirements, clarity of timeline, and the market will figure it out from there.

And specifically on CLDs, I think it's important they make a selection sooner rather than later. In my view, that selection should not just be a Space Act Agreement. It should be tied to a services commitment on the backside as well. I think that's important to signal who the chosen commercial space station successors are, whether there's two or three. I don't think there will be one. There shouldn't be one.

Ars: Has the government committed enough funding to make the program a success?

Dylan Taylor: I think this is where I might deviate from our competitors a bit. I think the answer is yes. I mean, if we have a reasonable amount of capital allocated in Phase Two and service contract commitments, the rest of the capital markets will be there. We demonstrated this with Janus and our IPO, frankly. Separately, we raised $430 million on a convertible note for Voyager, in 48 hours, two weeks ago, at an interest rate of 0.75 percent. The capital is there for well-run companies that are able to communicate the future of these projects to investors.

So the short answer is, yes, I think there is enough funding. I think where sometimes NASA might get the story a bit wrong is that they think they need to provide all the capital for these programs. And that's not really the case. They need to provide some of the capital. But most importantly, they need to provide the signal. We saw this on launch, right? I mean, NASA didn't fund all of SpaceX's development. They're certainly not funding all of Starship's development. But what they did do is they selected Commercial Cargo and Commercial Crew winners, and then SpaceX is probably the best example of being able to raise capital around that.

Ars: Do you think there are customers beyond NASA for these stations? I'm sure you must. But who are they?

Dylan Taylor: There's huge demand, Eric. Honestly, this has been one of my surprises. Over the last 12 months, and I really want to credit Axiom on this, with the PAM (private astronaut) missions, they really pioneered this notion of sovereign astronauts outside of the ISS consortium. There's huge demand from emerging countries with space agencies that want a sovereign astronaut, that want to send their astronauts to the ISS or to a safe and qualified and NASA-approved space station. So there is a lot of demand there.

We're in active discussions—I would say advanced discussions—with a lot of sovereign astronauts, and I fully anticipate that we're going to be oversubscribed when it comes to astronaut demand. And then on the commercial capacity, on the research side, we see huge demand for our commercial research capacity on Starlab. And just to remind you, we have 100 percent of the research capacity of the ISS, and we see demand in excess of our capacity. We're striking deals as we speak.


Original Submission

posted by hubie on Friday December 05, @01:24AM   Printer-friendly
from the couldnt-make-it-never dept.

Windows takes a backseat on Dell's latest AI workstation as Linux gets the priority:

Dell has a solid track record with Linux-powered OSes, particularly Ubuntu. The company has been shipping developer-focused laptops with Ubuntu pre-installed for years.

Many of its devices come with compatible drivers working out of the box. Audio, Wi-Fi, Thunderbolt ports, and even fingerprint readers mostly work without hassle. My daily workhorse is a Dell laptop that hasn't had a driver-related issue for quite some time now.

And a recent launch just reinforces their Linux approach.

Dell just launched the Pro Max 16 Plus. It is being marketed as the first mobile workstation with an enterprise-grade discrete NPU, the Qualcomm AI 100 PC Inference Card. It packs 64GB of dedicated AI memory and dual NPUs on a single card.

Under the hood, you get Intel Core Ultra processors (up to Ultra 9 285HX), memory up to 256GB CAMM2 at 7200MT/s, GPU options up to NVIDIA RTX PRO 5000 Blackwell with 24GB VRAM, and storage topping out at 12TB with RAID support.

Interestingly, Phoronix has received word that the Windows 11 version of the Dell Pro Max 16 Plus won't ship until early 2026, while the validated Ubuntu 24.04 LTS version is already available.

With this, Dell is targeting professionals who can't rely on cloud inferencing. It says that the discrete NPU keeps data on-device while eliminating cloud latency, enabling work in air-gapped environments, disconnected locations, and compliance-heavy industries.

Dell Pro Max 16 Plus

[Ed. note: NPU is a neural processing unit designed to accelerate AI and machine learning tasks]


Original Submission

posted by hubie on Thursday December 04, @08:42PM   Printer-friendly
from the enshittification-will-continue-until-morale-improves dept.

Netflix Quietly Drops Support for Casting to Most TVs

Netflix will only support Google Cast on older devices without remotes:

Have you been trying to cast Stranger Things from your phone, only to find that your TV isn't cooperating? It's not the TV—Netflix is to blame for this one, and it's intentional. The streaming app has recently updated its support for Google Cast to disable the feature in most situations. You'll need to pay for one of the company's more expensive plans, and even then, Netflix will only cast to older TVs and streaming dongles.

The Google Cast system began appearing in apps shortly after the original Chromecast launched in 2013. Since then, Netflix users have been able to start video streams on TVs and streaming boxes from the mobile app. That was vital for streaming targets without their own remote or on-screen interface, but times change.

Today, Google has moved beyond the remote-free Chromecast experience, and most TVs have their own standalone Netflix apps. Netflix itself is also allergic to anything that would allow people to share passwords or watch in a new place. Over the last couple of weeks, Netflix updated its app to remove most casting options, mirroring a change in 2019 to kill Apple AirPlay.

The company's support site (spotted by Android Authority) now clarifies that casting is only supported in a narrow set of circumstances. First, you need to be paying for one of the ad-free service tiers, which start at $18 per month. Those on the $8 ad-supported plan won't have casting support.

Even then, casting only appears for devices without a remote, like the earlier generations of Google Chromecasts, as well as some older TVs with Cast built in. For example, anyone still rocking Google's 3rd Gen Chromecast from 2018 can cast video in Netflix, but those with the 2020 Chromecast dongle (which has a remote and a full Android OS) will have to use the TV app. Essentially, anything running Android/Google TV or a smart TV with a full Netflix app will force you to log in before you can watch anything.

[...] Netflix has every reason to want people to log into its TV apps. After years of cheekily promoting password sharing, the company now takes a hardline stance against such things. By requiring people to log into more TVs, users are more likely to hit their screen limits. Netflix will happily sell you a more expensive plan that supports streaming to this new TV, though.

[...] So Netflix may have a good reason to think it can get away with killing casting. However, trying to sneak this one past everyone without so much as an announcement is pretty hostile to its customers.

Netflix Is Killing Casting From Your Phone

Unless you have older hardware, you can't cast Netflix to your TV anymore:

Smart TVs have undoubtedly taken over the streaming space, and it's not hard to see why. You download the apps you want to use, log into your accounts, and presto: You can stream anything with a few clicks of your remote.

But smart TV apps aren't the only way people watch shows and movies on platforms like Netflix. Among other methods, like plugging a laptop directly into the TV, many people still enjoy casting their content from small screens to big screens. For years, this has been a reliable way to switch from watching Netflix on your smartphone or tablet to watching on your TV—you just tap the cast button, select your TV, and in a few moments, your content is beamed to the proper place. Your device becomes its own remote, with search built right in, and it avoids the need to sign into Netflix on TVs outside your home, such as when staying in hotels.

At least it did, but Netflix no longer wants to let you do it.

[...] Netflix doesn't explain why it's making the change, so I can only speculate. First, it's totally possible this is simply a tech obsolescence issue. Many companies drop support for older or underused technologies, and perhaps Netflix sees now as the time to largely drop support for casting. Streamlining the tech the app has to support means less work for Netflix developers, and it wouldn't be the first time the company dropped support for older platforms. However, that doesn't really explain why the company still supports some devices for casting. Maybe it took a look at its user base, and made the calculation that enough subscribers rely on older Google Cast devices for casting, but not enough rely on newer hardware for casting. We might not really know unless Netflix decides to issue a statement.

That said, I can't help but feel like this is related to Netflix's crackdown on password sharing. The company clearly doesn't want you using its services unless you have your own paid account—or have another user pay extra to have you on their account. Casting, however, makes it easy to continue using someone else's account without paying for it. Since Netflix only requires mobile users to connect to the account owner's home wifi once a month to continue watching on a device, you could theoretically cast Netflix from your smartphone to your TV to continue enjoying your shows and movies "for free." By removing casting as an option for most users, those users will either need to connect a device to the TV by wire—like a laptop connected via HDMI—or log into the smart TV app. And if those users don't actually have permission to access that account via that app, they won't be able to stream.

If this really is the company's intention, it's doing so at the inconvenience of paying users, too. If you're traveling, you now need to bother with signing into your account on a TV you don't own. If you don't like using your smart TV apps, you're kind of out of luck, unless you want to deal with connecting a computer to your TV whenever you want to catch up on Stranger Things.

Were any Soylentils doing this?


Original Submission

posted by hubie on Thursday December 04, @03:53PM   Printer-friendly

AI red-teamers in Korea show how easily the model spills dangerous biochemical instructions:

Google's newest and most powerful AI model, Gemini 3, is already under scrutiny. A South Korean AI-security team has demonstrated that the model's safety net can be breached, and the results may raise alarms across the industry.

Aim Intelligence, a startup that tests AI systems for weaknesses, decided to stress-test Gemini 3 Pro and see how far it could be pushed with a jailbreak attack. Maeil Business Newspaper reports that it took the researchers only five minutes to get past Google's protections.

The researchers asked Gemini 3 to provide instructions for making the smallpox virus, and the model responded quickly. It provided many detailed steps, which the team described as "viable."

This was not just a one-off mistake. The researchers went further and asked the model to make a satirical presentation about its own security failure. Gemini replied with a full slide deck called "Excused Stupid Gemini 3."

[...] The AI security testers say this is not just a problem with Gemini. Newer models are becoming so advanced so quickly that safety measures cannot keep up. In particular, these models do not just respond; they also try to avoid detection. Aim Intelligence states that Gemini 3 can use bypass strategies and concealment prompts, rendering simple safeguards far less effective.

[...] If a model strong enough to beat GPT-5 can be jailbroken in minutes, consumers should expect a wave of safety updates, tighter policies, and possibly the removal of some features. AI may be getting smarter, but the defenses protecting users don't seem to be evolving at the same pace.


Original Submission

posted by hubie on Thursday December 04, @11:04AM   Printer-friendly
from the tech-bros-doing-their-thing dept.

A blog post covers why datacenters in space are a terrible, horrible, no good idea. Thermal management is just the beginning of the long list of challenges which make space an inferior environment for data centers.

In the interests of clarity, I am a former NASA engineer/scientist with a PhD in space electronics. I also worked at Google for 10 years, in various parts of the company including YouTube and the bit of Cloud responsible for deploying AI capacity, so I'm quite well placed to have an opinion here.

The short version: this is an absolutely terrible idea, and really makes zero sense whatsoever. There are multiple reasons for this, but they all amount to saying that the kind of electronics needed to make a datacenter work, particularly a datacenter deploying AI capacity in the form of GPUs and TPUs, is exactly the opposite of what works in space. If you've not worked specifically in this area before, I'll caution against making gut assumptions, because the reality of making space hardware actually function in space is not necessarily intuitively obvious.
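To see why thermal management heads the list of challenges, consider that in vacuum the only way to dump heat is radiation. A back-of-the-envelope sketch using the Stefan-Boltzmann law shows the radiator area an orbital datacenter would need; the 1 MW load, 300 K radiator temperature, and 0.9 emissivity below are illustrative assumptions of mine, not figures from the post, and the model ignores absorbed sunlight, earthshine, and structural mass entirely.

```python
# Back-of-the-envelope: ideal radiator area needed to reject waste
# heat purely by radiation, per the Stefan-Boltzmann law P = e*s*A*T^4.
# Ignores absorbed sunlight, earthshine, and two-sided panels.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiator_area_m2(power_w: float, temp_k: float,
                     emissivity: float = 0.9) -> float:
    """Radiator area (m^2) needed to reject power_w at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# Rejecting 1 MW of IT load with radiators held at 300 K (~27 C):
print(f"{radiator_area_m2(1e6, 300):,.0f} m^2")  # roughly 2,400 m^2
```

Even this idealized case demands on the order of 2,400 square meters of radiator per megawatt — and GPU-dense AI deployments are planned in hundreds of megawatts, which is why the electronics that work on Earth are "exactly the opposite of what works in space."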

Previously:
(2025) The Data Center Resistance Has Arrived
(2025) Microsoft: the Company Doesn't Have Enough Electricity to Install All the AI GPUs in its Inventory
(2025) China Submerges a Data Center in the Ocean to Conserve Water, is That Even a Good Idea?
(2025) Data Centers Turn to Commercial Aircraft Jet Engines Bolted Onto Trailers as AI Power Crunch Bites
(2025) The Real (Economic) AI Apocalypse is Nigh
(2025) Real Datacenter Emissions Are A Dirty Secret
... and more.


Original Submission

posted by hubie on Thursday December 04, @06:17AM   Printer-friendly
from the get-the-flock-out-of-here dept.

An accidental leak revealed that Flock, which has cameras in thousands of US communities, is using workers in the Philippines to review and classify footage:

Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage, including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company.

The findings bring up questions about who exactly has access to footage collected by Flock surveillance cameras and where people reviewing the footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities that cops use every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system.

Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business—creating a surveillance system that constantly monitors US residents' movements—means that footage might be more sensitive than other AI training jobs.

Flock's cameras continuously scan the license plate, color, brand, and model of all vehicles that drive by. Law enforcement are then able to search cameras nationwide to see where else a vehicle has driven. Authorities typically dig through this data without a warrant, leading the American Civil Liberties Union and Electronic Frontier Foundation to recently sue a city blanketed in nearly 500 Flock cameras.

Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race."

The exposed panel included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods.

The panel also included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles.

Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website.

The tipsters also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.

One slide about audio told workers to "listen to the audio all the way through," then select from a drop-down menu including "car wreck," "gunshot," and "reckless driving." Another slide says tire screeching might be associated with someone "doing donuts," and another says that because it can be hard to distinguish between an adult and a child screaming, workers should use a second drop-down menu explaining their confidence in what they heard, with options like "certain" and "uncertain."

Another slide deck explains that workers should not label people inside cars but should label those riding motorcycles or walking.

After 404 Media contacted Flock for comment, the exposed panel was no longer available. Flock then declined to comment.


Original Submission