

posted by janrinok on Saturday April 27, @06:13AM   Printer-friendly
from the let's-go-build-that-railway,-all-218-miles-of-it dept.

"After years of promises and years of lip service, we finally have all the funding needed, all the approvals, all the permits, all the union workers, and there's only one thing left to do now to get this party started.

We need to build it. And that starts today."

U.S. Sen. Catherine Cortez Masto, D-Nev., April 22, Las Vegas.

It looks like America is going to get its first real high-speed rail line.

On Monday, April 22, U.S. Transportation Secretary Pete Buttigieg officially broke ground on the Brightline West High-Speed Rail Project. The 218-mile line will run between Las Vegas, Nevada, and Rancho Cucamonga, California, and will be a fully electric, zero-emission system.

The high-speed trains should average 186 miles per hour (about 300 km/h), bringing the overland travel time between Las Vegas and Los Angeles down from four hours to two. To do so, 195 miles (315 km) of new track need to be laid to exacting standards in the median of Interstate 15. There will be stations in Las Vegas, Victor Valley, Hesperia, and Rancho Cucamonga, California. The line should be fully operational by 2028, in time for the Los Angeles Olympic Games.
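As a back-of-the-envelope check on those figures (a toy sketch, not a schedule: real timetables depend on station stops, speed limits, and acceleration):

```python
def travel_time_hours(distance_miles: float, avg_speed_mph: float) -> float:
    """Naive distance/speed estimate; ignores stops and acceleration."""
    return distance_miles / avg_speed_mph

# The full 218-mile line at the quoted 186 mph average
t = travel_time_hours(218, 186)
print(f"{t:.2f} hours")  # about 1.17 hours of pure running time
```

The roughly two-hour figure quoted for the trip leaves room for station stops and slower segments on top of that pure running time.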

Funding, to the tune of $12 billion, comes roughly half from private industry and half from the Federal Government ($6.5 billion in grants and financing). The project is expected to create an estimated 35,000 jobs, including 10,000 direct union construction jobs, plus 1,000 permanent jobs once the line is operational. It was initiated by Brightline, a company that already runs a train service between Miami and Orlando.

"Today answers the question that has been asked too often, likely," Buttigieg said during the groundbreaking ceremony.

"The question whether America can still build massive, forward-looking engineering marvels that make people's lives better for generations ... and this is just the start."


Original Submission

posted by janrinok on Saturday April 27, @01:28AM   Printer-friendly

The specific process by which Google enshittified its search (24 Apr 2024)

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

All digital businesses have the technical capacity to enshittify: the ability to change the underlying functions of the business from moment to moment and user to user, allowing for the rapid transfer of value between business customers, end users and shareholders:

Which raises an important question: why do companies enshittify at a specific moment, after refraining from enshittifying before? After all, a company always has the potential to benefit by treating its business customers and end users worse, by giving them a worse deal. If you charge more for your product and pay your suppliers less, that leaves more money on the table for your investors.

Of course, it's not that simple. While cheating, price-gouging, and degrading your product can produce gains, these tactics also threaten losses. You might lose customers to a rival, or get punished by a regulator, or face mass resignations from your employees who really believe in your product.

Companies choose not to enshittify their products...until they choose to do so. One theory to explain this is that companies are engaged in a process of continuous assessment, gathering data about their competitive risks, their regulators' mettle, their employees' boldness. When these assessments indicate that the conditions are favorable to enshittification, the CEO walks over to the big "enshittification" lever on the wall and yanks it all the way to MAX.

The Men Who Killed Google Search

https://www.wheresyoured.at/the-men-who-killed-google/

The story begins on February 5th 2019, when Ben Gomes, Google's head of search, had a problem. Jerry Dischler, then the VP and General Manager of Ads at Google, and Shiv Venkataraman, then the VP of Engineering, Search and Ads on Google properties, had called a "code yellow" for search revenue due to, and I quote, "steady weakness in the daily numbers" and a likeliness that it would end the quarter significantly behind.

For those unfamiliar with Google's internal scientology-esque jargon, let me explain. A "code yellow" isn't, as you might think, a crisis of moderate severity. The yellow, according to Steven Levy's tell-all book about Google, refers to — and I promise that I'm not making this up — the color of a tank top that former VP of Engineering Wayne Rosing used to wear during his time at the company. It's essentially the equivalent of DEFCON 1 and activates, as Levy explained, a war room-like situation where workers are pulled from their desks and into a conference room where they tackle the problem as a top priority. Any other projects or concerns are sidelined.

In emails released as part of the Department of Justice's antitrust case against Google, Dischler laid out several contributing factors — search query growth was "significantly behind forecast," the "timing" of revenue launches was significantly behind, and a vague worry that "several advertiser-specific and sector weaknesses" existed in search.

Anyway, a few days beforehand on February 1 2019, Kristen Gil, then Google's VP Business Finance Officer, had emailed Shashi Thakur, then Google's VP of Engineering, Search and Discover, saying that the ads team had been considering a "code yellow" to "close the search gap [it was] seeing," vaguely referring to how critical that growth was to an unnamed "company plan." To be clear, this email was in response to Thakur stating that there is "nothing" that the search team could do to operate at the fidelity of growth that ads had demanded.

Shashi forwarded the email to Gomes, asking if there was any way to discuss this with Sundar Pichai, Google's CEO, and declaring that there was no way he'd sign up to a "high fidelity" business metric for daily active users on search. Thakur also said something that I've been thinking about constantly since I read these emails: that there was a good reason that Google's founders separated search from ads.

On February 2, 2019, just one day later, Thakur and Gomes shared their anxieties with Nick Fox, a Vice President of Search and Google Assistant, entering a multiple-day-long debate about Google's sudden lust for growth. The thread is a dark window into the world of growth-focused tech, where Thakur listed the multiple points of disconnection between the ads and search teams, discussing how the search team wasn't able to finely optimize engagement on Google without "hacking engagement," a term that means effectively tricking users into spending more time on a site, and that doing so would lead them to "abandon work on efficient journeys." In one email, Fox adds that there was a "pretty big disconnect between what finance and ads want" and what search was doing.

When Gomes pushed back on the multiple requests for growth, Fox added that all three of them were responsible for search, that search was "the revenue engine of the company," and that bartering with the ads and finance teams was potentially "the new reality of their jobs."

On February 6th 2019, Gomes said that he believed that search was "getting too close to the money," and ended his email by saying that he was "concerned that growth is all that Google was thinking about."

[Ed's Comment: This is only the beginning of the story. Go to the link if you wish to read more.--JR]


Original Submission #1 | Original Submission #2

posted by janrinok on Friday April 26, @08:46PM   Printer-friendly

https://www.technologyreview.com/2024/04/24/1091740/chinese-keyboard-app-security-encryption/

Almost all keyboard apps used by Chinese people around the world share a security loophole that makes it possible to spy on what users are typing.

The vulnerability, which allows the keystroke data that these apps send to the cloud to be intercepted, has existed for years and could have been exploited by cybercriminals and state surveillance groups, according to researchers at the Citizen Lab, a technology and security research lab affiliated with the University of Toronto.

These apps help users type Chinese characters more efficiently and are ubiquitous on devices used by Chinese people. The four most popular apps—built by major internet companies like Baidu, Tencent, and iFlytek—basically account for all the typing methods that Chinese people use. Researchers also looked into the keyboard apps that come preinstalled on Android phones sold in China.

What they discovered was shocking. Almost every third-party app and every Android phone with preinstalled keyboards failed to protect users by properly encrypting the content they typed. A smartphone made by Huawei was the only device where no such security vulnerability was found.

In August 2023, the same researchers found that Sogou, one of the most popular keyboard apps, did not use Transport Layer Security (TLS) when transmitting keystroke data to its cloud server for better typing predictions. Without TLS, the widely adopted cryptographic protocol that protects data in transit, keystrokes can be collected and then decrypted by third parties.
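The core of the problem can be illustrated with a toy sketch (this is not the actual apps' protocol): keystrokes sent without TLS travel as readable bytes, so anyone on the network path can recover them. Here the "network" is a local socket pair and the eavesdropper simply reads the raw traffic.

```python
import socket

# app_side plays the keyboard app; network_side plays an on-path observer
app_side, network_side = socket.socketpair()

keystrokes = "nihao shijie"             # what the user typed
app_side.sendall(keystrokes.encode())   # sent to the "cloud" unencrypted

captured = network_side.recv(1024)      # what the observer sees
print(captured.decode())                # the keystrokes, verbatim

app_side.close()
network_side.close()
```

With TLS in place, the observer would see only ciphertext; in Python, wrapping the socket via `ssl.SSLContext.wrap_socket` is the standard way to add that layer.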

"Because we had so much luck looking at this one, we figured maybe this generalizes to the others, and they suffer from the same kinds of problems for the same reason that the one did," says Jeffrey Knockel, a senior research associate at the Citizen Lab, "and as it turns out, we were unfortunately right."

Even though Sogou fixed the issue after it was made public last year, some Sogou keyboards preinstalled on phones have not been updated to the latest version, so they are still subject to eavesdropping.

This new finding shows that the vulnerability is far more widespread than previously believed.

[...] "The scale of this was really shocking to us," says Wang. "And also, these are completely different manufacturers making very similar mistakes independently of one another, which is just absolutely shocking as well."

The massive scale of the problem is compounded by the fact that these vulnerabilities aren't hard to exploit. "You don't need huge supercomputers crunching numbers to crack this. You don't need to collect terabytes of data to crack it," says Knockel. "If you're just a person who wants to target another person on your Wi-Fi, you could do that once you understand the vulnerability."

[...] One potential cause of the loopholes' ubiquity is that most of these keyboard apps were developed in the 2000s, before the TLS protocol was commonly adopted in software development. Even though the apps have been through numerous rounds of updates since then, inertia could have prevented developers from adopting a safer alternative.

The report points out that language barriers and different tech ecosystems prevent English- and Chinese-speaking security researchers from sharing information that could fix issues like this more quickly. For example, because Google's Play store is blocked in China, most Chinese apps are not available in Google Play, where Western researchers often go for apps to analyze.

Sometimes all it takes is a little additional effort. After two emails about the issue to iFlytek were met with silence, the Citizen Lab researchers changed the email title to Chinese and added a one-line summary in Chinese to the English text. Just three days later, they received an email from iFlytek, saying that the problem had been resolved.


Original Submission

posted by janrinok on Friday April 26, @04:03PM   Printer-friendly
from the feature-not-a-bug dept.

A GitHub flaw, or possibly a design decision, is being abused by threat actors to distribute malware using URLs associated with Microsoft repositories, making the files appear trustworthy:

While most of the malware activity has been based around the Microsoft GitHub URLs, this "flaw" could be abused with any public repository on GitHub, allowing threat actors to create very convincing lures.

Yesterday, McAfee released a report on a new LUA malware loader distributed through what appeared to be legitimate Microsoft GitHub repositories for the "C++ Library Manager for Windows, Linux, and MacOS," known as vcpkg, and the STL library.

The URLs for the malware installers, shown below, clearly indicate that they belong to the Microsoft repo, but we could not find any reference to the files in the project's source code.

Finding it strange that a Microsoft repo would be distributing malware since February, BleepingComputer looked into it and found that the files are not part of vcpkg but were uploaded as part of a comment left on a commit or issue in the project.

[...] As the file's URL contains the name of the repository the comment was created in, and as almost every software company uses GitHub, this flaw can allow threat actors to develop extraordinarily crafty and trustworthy lures.

For example, a threat actor could upload a malware executable in NVIDIA's driver installer repo that pretends to be a new driver fixing issues in a popular game. Or a threat actor could upload a file in a comment to the Google Chromium source code and pretend it's a new test version of the web browser.
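The reason the lure works is visible in the URL structure itself. The sketch below uses a hypothetical URL following GitHub's comment-attachment pattern (the file ID and filename are made up for illustration):

```python
from urllib.parse import urlparse

# Files uploaded to a comment get a URL under the *repository's* path,
# even when the comment (and file) came from an outside user.
url = "https://github.com/microsoft/vcpkg/files/1234567/installer.zip"

parts = urlparse(url).path.strip("/").split("/")
owner, repo = parts[0], parts[1]
print(owner, repo)  # appears to belong to microsoft/vcpkg

# Nothing in the URL reveals who actually uploaded the file,
# which is what makes the lure convincing.
```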

Originally spotted on Schneier on Security.

Recently: xz-style Attacks Continue to Target Open-Source Maintainers


Original Submission

posted by janrinok on Friday April 26, @11:14AM   Printer-friendly
from the its-no-longer-raining-cats-and-dogs dept.

FAA to require reentry vehicles licensed before launch

[....] In a notice published in the Federal Register April 17, the FAA's Office of Commercial Space Transportation announced it will no longer approve the launch of spacecraft designed to reenter unless they already have a reentry license. The office said that it will, going forward, check that a spacecraft designed to return to Earth has a reentry license as part of the standard payload review process.

[....] "Unlike typical payloads designed to operate in outer space, a reentry vehicle has primary components that are designed to withstand reentry substantially intact and therefore have a near-guaranteed ground impact as a result of either a controlled reentry or a random reentry,"

[....] "Therefore, it is crucial to evaluate the safety of the reentry prior to launch," the agency concluded in the notice. "This way, the FAA is able to work with the reentry operator to meet the required risk and other criteria."

The notice did not state what prompted the change. However, it comes after Varda Space Industries launched its first spacecraft in June 2023 but did not get a reentry license for it until February after months of effort and an earlier, rejected reentry license application. Varda's capsule safely landed at the Utah Test and Training Range a week after receiving the license.

[....] Commercial spacecraft reentries remain rare. The FAA currently lists only two active reentry licenses, one for Varda and the other for SpaceX's Dragon spacecraft. However, the FAA expects demand for those licenses to increase as more companies seek to return cargo or crew from space.

Catch a falling space junk, put it in your pocket, savor radioactive decay.


Original Submission

posted by hubie on Friday April 26, @06:30AM   Printer-friendly
from the in-the-Moog-for-some-Pi dept.

Gearnews has an article about use of Raspberry Pi microcomputers in digital signal processing (DSP) systems, observing that digital synthesizers are essentially computers in specialized housings. In addition to the complex software, there is a lot of work in making an enclosure with useful controls and displays. Increasingly manufacturers are building their synthesizers around the Raspberry Pi:

The biggest synthesizer manufacturer to make use of the Raspberry Pi is Korg. The Japanese synth company's Wavestate, Modwave and Opsix digital synths all make use of the Raspberry Pi Compute Module. (They're in the module versions too.)

In an article on the Raspberry Pi home page, Korg's Andy Leary cites price and manufacturing scale as the main reasons Korg decided on these components. He also liked that it was ready to go as-is, providing CPU, RAM, and storage in a single package. "That part of the work is already done," he said in the article. "It's like any other component; we don't have to lay out the board, build it and test it."

The software for each instrument is, of course, custom. The Raspberry Pi, however, generates the sound. "Not everyone understands that Raspberry Pi is actually making the sound," said Korg's Dan Philips in the same piece. "We use the CM3 because it's very powerful, which makes it possible to create deep, compelling instruments."

These instruments used to be designed with off-the-shelf parts from Motorola and Texas Instruments. However, around 20 years ago, according to a Raspberry Pi article about Korg synthesizers, Linux entered the synthesizer production scene.
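At its core, "the Raspberry Pi is making the sound" means software computing audio samples fast enough to feed a DAC. A minimal sine-wave voice in Python gives the flavor (the actual Korg instruments run far more elaborate custom engines):

```python
import math

SAMPLE_RATE = 48_000  # samples per second

def sine_voice(freq_hz: float, seconds: float, amplitude: float = 0.8):
    """Generate a sine tone as a list of float samples in [-amplitude, amplitude]."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# A4 (440 Hz) for a tenth of a second: 4,800 samples of audio,
# ready to hand to a DAC or an audio API.
samples = sine_voice(440.0, 0.1)
print(len(samples))  # 4800
```

A real synthesizer voice layers oscillators, envelopes, and filters on top of this basic loop, which is why the CM3's CPU headroom matters.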

Previously:
(2024) Berlin's Techno Scene Added to UNESCO Cultural Heritage List
(2021) The Yamaha DX7 Synthesizer's Clever Exponential Circuit, Reverse-Engineered
(2019) Moog Brings Back its Legendary Model 10 'Compact' Modular Synth
(2014) History of the Synthesizer - 50 Years


Original Submission

posted by hubie on Friday April 26, @01:47AM   Printer-friendly
from the revenge-of-Clippy dept.

Windows 11 Start menu ads are now rolling out to everyone

Windows 11 is rolling out ads (or "recommendations") in the Start menu. I guess this explains the blocking of alternative Start menus: "security and performance" reasons standing in the way of the recommendation dollars.

https://www.theverge.com/2024/4/24/24138949/microsoft-windows-11-start-menu-ads-recommendations-setting-disable

Microsoft is starting to enable ads inside the Start menu on Windows 11 for all users. After testing these briefly with Windows Insiders earlier this month, Microsoft has started to distribute update KB5036980 to Windows 11 users this week, which includes "recommendations" for apps from the Microsoft Store in the Start menu.

"The Recommended section of the Start menu will show some Microsoft Store apps," says Microsoft in the update notes of its latest public Windows 11 release. "These apps come from a small set of curated developers." The ads are designed to help Windows 11 users discover more apps, but will largely benefit the developers that Microsoft is trying to tempt into building more Windows apps.

[...] Luckily you can disable these ads, or "recommendations" as Microsoft calls them. If you've installed the latest KB5036980 update then head into Settings > Personalization > Start and turn off the toggle for "Show recommendations for tips, app promotions, and more." While KB5036980 is optional right now, Microsoft will push this to all Windows 11 machines in the coming weeks.

Microsoft is Annoying Users Again. Here's What It Could Do Instead

The temptation to coldly stuff ads into every service is just too much for some people. There must be a better way - and Microsoft knows it:

It's tempting to believe that brute force always works.

After all, wasn't much of the modern tech industry built upon the notion of moving fast and breaking things?

When it comes to Microsoft and some of its ingrained customer-facing habits, the idea that you can simply force things on your users rarely dies.

Yet the company's latest attempts seem, as often happens, to have annoyed the very people Microsoft shouldn't.

First, Microsoft decided this was the right time to mess with the Windows 11 Start menu. By inserting ads for "recommended apps."

I know this annoyed people because my colleague Lance Whitney began his description of this maneuver with delicious words: "Microsoft is testing a way to further mismanage the Windows 11 Start menu: displaying icons for recommended apps."

I sense he may even have been very annoyed by this maneuver -- even though it's only a test -- because he added: "As a Windows Insider, I have some feedback, none of which is printable. Despite being a $3 trillion company with $227 billion in revenue in 2023, Microsoft apparently feels the need to eke out more money from Windows users by annoying them with ads."

[...] In this case, the ads were for Copilot, Microsoft's new AI companion. The promise is for "Copilot with more efficient settings." These include managing your passwords and bookmarking tabs.

[...] There is another way, Microsoft. Truly there is.

Testing new ideas by simply shoving them at your customers reflects a mindset that is, well, nerdish. It's direct, it's matter-of-fact, and it can be somewhat grating.

An alternative is to communicate with your customers first -- or, dare one mention it -- use a little charm and wit to entice, rather than repel.

Instead of instantly shoving recommended apps on the Start Menu, perhaps Microsoft could have teased one or two popular app recommendations in, say, a humorous way. Humor can be disarming in a way that arm-twisting is not.

Microsoft could even have shown that it respected its customers' mindset and declared: "Look, we know you don't like ads much, but these apps might just be useful."

Similarly, in the case of the Copilot ads on Edge, Microsoft could have used the voice of its new AI: "Hey, did you know that Copilot can make your life just a tiny, tiny bit easier? Want to see how?"

[...] Of course, there's a difference between inserting a product change you feel sure the customers will like, as opposed to one that you know -- you just know -- they'll see as merely the company trying to make more money.

[...] But when customers -- particularly paying customers -- are faced with the cold insertion of something they may instinctively be resistant to, they tend to respond with frigid irritation.

[...] Microsoft has understood that human warmth is a valuable asset, one that isn't immediately revealed on its balance sheet.

If you recognize the truth of that, you have to carry it through all of your communication. Even the most basic, potentially irritating communication. Especially the most basic, potentially irritating communication.


Original Submission #1 | Original Submission #2

posted by hubie on Thursday April 25, @09:01PM   Printer-friendly
from the I-knew-it dept.

https://www.bbc.com/news/science-environment-68881369

The 46-year-old Nasa spacecraft is humanity's most distant object.

A computer fault stopped it returning readable data in November but engineers have now fixed this.

For the moment, Voyager is sending back only health data about its onboard systems, but further work should get the scientific instruments back online.

Voyager-1 is more than 24 billion km (15 billion miles) away, so distant that its radio messages take fully 22.5 hours to reach us.

"Voyager-1 spacecraft is returning usable data about the health and status of its onboard engineering systems," Nasa said in a statement.

"The next step is to enable the spacecraft to begin returning science data again."

[...] A corrupted chip has been blamed for the ageing spacecraft's recent woes.

This prevented Voyager's computers from accessing a vital segment of software code used to package information for transmission to Earth.

For a period of time, engineers could get no sense whatsoever out of Voyager, even though they could tell the spacecraft was still receiving their commands and otherwise operating normally.

The issue was resolved by shifting the affected code to different locations in the memory of the probe's computers.
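A toy model conveys the idea of the repair (Voyager's actual Flight Data System is far more intricate): a routine stored in a failed memory region is copied to a healthy region, and the reference to it is updated. Here, memory is just a Python list.

```python
memory = [0] * 64            # pretend machine memory
code = [0xA9, 0x01, 0x60]    # some routine's instructions

BAD_REGION = 10              # a failed chip corrupts anything stored here
entry_point = BAD_REGION
memory[BAD_REGION:BAD_REGION + len(code)] = code

# The fix: copy the routine to a known-good region, then repoint to it.
GOOD_REGION = 40
memory[GOOD_REGION:GOOD_REGION + len(code)] = \
    memory[BAD_REGION:BAD_REGION + len(code)]
entry_point = GOOD_REGION

print("routine now runs from address", entry_point)
```

In Voyager's case, no single healthy region was large enough, so the affected code reportedly had to be split across several locations and re-linked, all via commands with a 22.5-hour one-way light delay.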

Previously:
    NASA Knows What Knocked Voyager 1 Offline, but It Will Take a While to Fix
    Voyager 1 Starts Making Sense Again After Months of Babble
    Humanity's Most Distant Space Probe Jeopardized by Computer Glitch


Original Submission

posted by hubie on Thursday April 25, @04:18PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Since Framework showed off its first prototypes in February 2021, we've generally been fans of the company's modular, repairable, upgradeable laptops.

Not that the company's hardware releases to date have been perfect—each Framework Laptop 13 model has had quirks and flaws that range from minor to quite significant, and the Laptop 16's upsides struggle to balance its downsides. But the hardware mostly does a good job of functioning as a regular laptop while being much more tinkerer-friendly than your typical MacBook, XPS, or ThinkPad.

But even as it builds new upgrades for its systems, expands sales of refurbished and B-stock hardware as budget options, and promotes the re-use of its products via external enclosures, Framework has struggled with the other side of computing longevity and sustainability: providing up-to-date software.

Driver bundles remain un-updated for years after their initial release. BIOS updates go through long and confusing beta processes, keeping users from getting feature improvements, bug fixes, and security updates. In its community support forums, Framework employees, including founder and CEO Nirav Patel, have acknowledged these issues and promised fixes but have remained inconsistent and vague about actual timelines.

Patel says Framework has taken steps to improve the update problem, but he admits that the team's initial approach—supporting existing laptops while also trying to spin up firmware for upcoming launches—wasn't working.

[...] Part of the issue is that Framework relies on external companies to put together firmware updates. Some components are provided by Intel, AMD, and other chip companies to all PC companies that use their chips. Others are provided by Insyde, which writes UEFI firmware for Framework and others. And some are handled by Compal, the contract manufacturer that actually produces Framework's systems and has also designed and sold systems for most of the big-name PC companies.

As far back as August 2023, Patel has written that the plan is to work with Compal and Insyde to hire dedicated staff to provide better firmware support for Framework laptops. However, the benefits of this arrangement have been slow to reach users.

[...] Framework puts a lot of effort into making its hardware easy to fix and upgrade and into making sure that hardware can stay useful down the line when it's been replaced by something newer. But supporting that kind of reuse and recycling works best when paired with long-term software and firmware support, and on that front, Framework has been falling short.

Framework will need to step up its game, especially if it wants to sell more laptops to businesses—a lucrative slice of the PC industry that Framework is actively courting. By this summer or fall, we'll have some idea of whether its efforts are succeeding.

Previously:
    Framework Laptop 16 Review: A Modular Marvel, but a Mediocre Gaming Laptop
    The Framework Laptop is an Upgradable, Customizable 13-Inch Notebook Coming This Spring


Original Submission

posted by hubie on Thursday April 25, @11:33AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

You don't hear much about it, but the U.S. Air Force has been flying an aircraft that made its first flight in 1955. The U-2 Dragon Lady is one of the longest-serving aircraft in the U.S. inventory, and there's a reason the USAF continues using it in one military conflict after another — it's reliable, difficult to shoot down, and can perform a variety of tasks in and out of combat operations.

The U-2 arose after World War II when the U.S. sought to utilize aerial reconnaissance against the growing threat of the Soviet Union. This necessitated an aircraft capable of flying at incredibly high altitudes and the development of special equipment to conduct intelligence, surveillance, and reconnaissance (ISR) missions in and out of enemy territory. The U-2 entered development in the early 1950s and continued production until 1989, leaving 31 operational aircraft in the inventory.

[...] An aircraft doesn't remain in operation for over seven decades without bringing something to the table, and the U-2 didn't disappoint. While the USAF uses it to gather intelligence during combat operations, it can also support strategic ISR missions during peacetime. USAF pilot Capt. Francis Gary Powers' U-2 was shot down over Sverdlovsk, Soviet Union, in 1960, escalating tensions between the U.S. and the USSR.

[...] The U-2 has a small radar signature and operates at a 70,000-foot altitude, which makes it challenging to shoot down. Still, the U.S. lost some while flying missions over enemy territory, but the USAF continued using U-2s long after more robust and technologically advanced options arose. These days, ISR missions employ various collection tools to build a complete picture of the battlespace, and the U-2 is still an integral part of that endeavor.

For most of its active service history, the U-2 flew with an onboard camera capable of a 66-inch focal length. The aircraft often flew as high as 70,000 feet and still took excellent images of targets miles away. This was long before the advent of digital imagery, so the U-2's unique optical bar camera required the development of wet film.

The film had to be developed at Beale Air Force Base in California by the 9th Reconnaissance Wing before analysts could check it for intelligence data. Despite the difficulty the camera posed, its worth was apparent in the intelligence products it produced. Still, the USAF finally replaced the antiquated technology in the summer of 2022. 

[...] The USAF plans to finally retire the U-2 in fiscal year 2026, but before that happens, it's being used to test new artificial intelligence tools and advanced communications technologies. In 2020, the USAF outfitted a U-2 with AI technology and let the system operate the aircraft, so innovations continue despite the aircraft's planned retirement. This isn't the first time the U-2 has been on the chopping block, so there's a chance it could survive its planned obsolescence in FY26, but only time will tell.

[Ed. comment: NASA flies them as well]


Original Submission

posted by janrinok on Thursday April 25, @06:45AM   Printer-friendly
from the everything-is-fine dept.

https://arstechnica.com/security/2024/04/kremlin-backed-hackers-exploit-critical-windows-vulnerability-reported-by-the-nsa/

Kremlin-backed hackers have been exploiting a critical Microsoft vulnerability for four years in attacks that targeted a vast array of organizations with a previously undocumented tool, the software maker disclosed Monday.

When Microsoft patched the vulnerability in October 2022—at least two years after it came under attack by the Russian hackers—the company made no mention that it was under active exploitation. As of publication, the company's advisory still made no mention of the in-the-wild targeting.

[...] On Monday, Microsoft revealed that a hacking group tracked under the name Forest Blizzard has been exploiting CVE-2022-38028 since at least June 2020—and possibly as early as April 2019. The threat group—which is also tracked under names including APT28, Sednit, Sofacy, GRU Unit 26165, and Fancy Bear—has been linked by the US and the UK governments to Unit 26165 of the Main Intelligence Directorate, a Russian military intelligence arm better known as the GRU.

Since as early as April 2019, Forest Blizzard has been exploiting CVE-2022-38028 in attacks that, once system privileges are acquired, use a previously undocumented tool that Microsoft calls GooseEgg. The post-exploitation malware elevates privileges within a compromised system and goes on to provide a simple interface for installing additional pieces of malware that also run with system privileges. This additional malware, which includes credential stealers and tools for moving laterally through a compromised network, can be customized for each target.

[...] People administering Windows machines should ensure that the fix for CVE-2022-38028 has been installed, as well as the fix for CVE-2021-34527, the tracking designation for a previous critical zero-day that came under mass attack in 2021.


Original Submission

posted by janrinok on Thursday April 25, @02:03AM   Printer-friendly

The Federal Trade Commission (FTC) voted 3-2 on Tuesday to ban noncompete agreements that prevent tens of millions of employees from working for competitors or starting a competing business after they leave a job:

From fast food workers to CEOs, the FTC estimates 18 percent of the U.S. workforce is covered by noncompete agreements — about 30 million people.

The final rule would ban new noncompete agreements for all workers and require companies to let current and past employees know they won't enforce them. Companies will also have to throw out existing noncompete agreements for most employees, although in a change from the original proposal, the agreements may remain in effect for senior executives.

[...] The new rule is slated to go into effect 120 days after it's published in the Federal Register. But its future is uncertain, as pro-business groups opposing the rule are expected to take legal action to block its implementation.

[...] The U.S. Chamber of Commerce, the largest pro-business lobbying group in the country, has said it will sue to block the rule.

Previously: Is This the End of Non-Compete Contracts?


Original Submission

posted by janrinok on Wednesday April 24, @09:16PM   Printer-friendly
from the things-not-intended dept.

Remember graphene, the single-atom-thick material nobody has managed to bring to production scale (yet)?

The problem with graphene -- and other 2D materials -- is that their atoms have always tended to cluster together into nanoparticles instead. Instead of a clean sheet of material, you get a 3D blob, and a mournful look at the Star Trek poster on your lab wall.

Now scientists have managed to make such a 2D, single-atom layer using a fairly simple technique. This opens up the possibility of having a valid candidate for mass production.

The new technique was discovered through the well-honed scientific process of trying (and failing) to do something else. In this case, the investigators started out with a material containing atomic monolayers of silicon sandwiched between layers of titanium carbide. The researchers' aim was to coat this electrically conductive ceramic with a thin layer of gold, at a high temperature, to make a contact. (Maybe someone on the team had plumbing problems at home.)

To their surprise, instead of a nice golden coat they ended up with intercalation, where one material in a layered structure is replaced by another. In this case, the silicon atoms were replaced with gold atoms. Some smart-ass noted that this meant they effectively had a 2D layer of gold atoms sandwiched in between layers of something else. Maybe, if we remove the titanium carbide?

How they got the idea is not explained (an iaido practitioner in the ranks, one presumes), but the team turned to an etching technique used by ancient Japanese blacksmiths. While the process immediately showed promise, the researchers said that finding the exact formula involved months of trial and error, but no limbs were lost.

In the end, what proved crucial was the reagent solution used (Murakami's reagent) and the duration of its application, as well as complete darkness. That's because light hitting Murakami's reagent can produce cyanide, which dissolves gold. When trying to create a 2D gold sheet, having it dissolve is clearly an unwanted result.

The final finishing touch was to add a single molecule layer of a surfactant, to stop the different sheets from sticking together, and prevent them from rolling up. The end result consists of separate 2D gold layers of up to 100 nanometres wide (400 times thinner than the thinnest gold leaf) in a solution, a bit like cornflakes in milk.
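As a sanity check on that "400 times thinner" figure, here is a back-of-the-envelope sketch. The ~0.26 nm thickness for a single gold atomic layer is an assumed round number, not a value from the article:

```python
# Back-of-the-envelope check of the thickness claim. The ~0.26 nm value
# for a single atomic layer of gold is an assumed round figure.
goldene_thickness_nm = 0.26

# If goldene is 400x thinner than the thinnest gold leaf, the leaf's
# implied thickness is:
leaf_thickness_nm = 400 * goldene_thickness_nm

# ~104 nm, i.e. the thinnest gold leaf is on the order of 100 nm thick,
# which is consistent with the scale of the claim.
```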

Flush with the success of getting a published article in Nature Synthesis, the study authors suggested that the new material, dubbed goldene, might be useful in applications in which gold nanoparticles already show promise. Light can generate waves in the sea of electrons at a gold nanoparticle's surface, which can channel and concentrate that energy. This strong response to light has been harnessed in gold photocatalysts to split water to produce hydrogen, for instance. Goldene could open up opportunities in areas such as this.

All is not yet rainbows and fluffy rabbits though: residual Fe atoms from Murakami's reagent might still throw a spanner in the works.

Journal Reference:
Peplow, Mark. Meet 'goldene': this gilded cousin of graphene is also one atom thick (DOI: 10.1038/d41586-024-01118-0)


Original Submission

posted by janrinok on Wednesday April 24, @04:38PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Rotary engines (also known as Wankel engines and Wankel rotary engines) are quite different from piston or "reciprocating" engines. One of the distinguishing features is that they don't need valves to operate. That's to say, the Wankel rotary engine design doesn't have valves — but quite a few rotary engine designs have incorporated them. Both rotary engines and piston engines utilize internal combustion and share the same phases of intake, compression, power, and exhaust. But beyond this similarity, they are very different in design and operation.

The work done by a rotary engine doesn't need to be converted into rotational motive power for use by propellers or transmission, as is the case with piston engines. As a result, they are considered more efficient by some metrics and require fewer moving parts to function. The first rotary engines were primarily used for aircraft during World War I, but the design was abandoned due to flaws and inefficiencies.

In 1954, German engineer Felix Wankel invented a new design for an automobile rotary engine for the German car and bike company NSU. After prototype testing by NSU in the following years, the company entered into an agreement with Japanese carmaker Mazda to develop Wankel rotary engines for its cars. The first Mazda cars with rotary engines were launched in Japan in the 1960s before crossing the Pacific to America in 1971. One of the few companies to stick with the rotary engine design, Mazda has, over the years, developed some of the most innovative engines of this type.

These engines create power by combusting a mixture of compressed air and fuel within a chamber or cylinder, translating the displacement of the rotor or piston into motion. Wankel rotary engines feature an equilateral triangular rotor with convex edges inside an ovaloid chamber or rotor housing (the shape is an epitrochoid: a symmetrical oval whose long sides curve slightly inward, like a shallow figure eight).
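The housing curve can be sketched from the standard parametric equations for a two-lobe epitrochoid; the generating radius and eccentricity below are illustrative round numbers, not the dimensions of any particular Mazda engine:

```python
import math

def wankel_housing_point(alpha, R=105.0, e=15.0):
    """Point on the two-lobe epitrochoid housing for shaft angle alpha (radians).

    R (generating radius) and e (eccentricity) are illustrative values
    in millimetres, not taken from a real engine. The small e*cos(3a)
    term superimposed on the circle of radius R is what pinches the oval
    into its slight figure-eight waist.
    """
    x = e * math.cos(3 * alpha) + R * math.cos(alpha)
    y = e * math.sin(3 * alpha) + R * math.sin(alpha)
    return x, y

# Trace the full housing outline over one 2*pi sweep; the curve closes
# back on itself.
outline = [wankel_housing_point(2 * math.pi * i / 360) for i in range(361)]
```

Note the long axis reaches R + e (120 in this sketch) while the waist at the top and bottom narrows to R - e, which is the "slight figure eight" described above.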

As the triangular rotor moves through the chamber, all three of its apexes are in constant contact with the housing (thanks to rotary engine apex seals), creating three gas volumes that are isolated by the rotor. There are two ports on the same side of the figure eight housing, with the top one for intake of fuel and air and the bottom one for the exhaust of combusted gases. Intake can be assisted by a supercharger pushing air into the chamber, while exhaust can be assisted by a turbocharger pulling air out of the chamber.

As the leading edge of the rotor passes the intake port, it creates a vacuum, inducing (or pulling) air and fuel into the chamber. The rotor's motion then compresses the air and fuel, which is ignited by a spark plug (two in Mazda's design), further propelling the rotor along its path and sending the combusted gases out of the exhaust port. The rotor is attached via a gear to an eccentric output shaft (whose lobes are offset from the shaft's centerline), and as the rotor moves, the shaft produces a torque that's used to turn the transmission.

So far, we've explained the most basic Wankel rotary engine, but Mazda — the company that popularized it — has developed several variations over the years. One improvement included a concave pocket on each of the three convex edges of the rotor, which increased the amount of volume within the epitrochoid housing.

Mazda also attempted to solve one of the biggest problems of the Wankel rotary engine — the incomplete combustion of all the fuel in the air-fuel mixture — by incorporating two spark plugs to combust both compressed air-fuel volumes created within a single cycle of the rotor. In its latest rotary engine design, used in the November 2023 Mazda MX-30 Skyactiv R-EV, the Japanese company also moved the fuel intake to the top of the housing, separating it from the air intake, thereby keeping fuel in the main intake area and increasing atomization to reduce the incomplete combustion of fuel.

Another Mazda rotary engine innovation is using an auxiliary intake port valve that can swivel open to increase the amount of air during the intake phase when required for more power at higher RPMs. Later designs of the Mazda rotary engine included two more valves for greater air intake when needed at higher RPMs. So, to answer the question, do rotary engines have valves? Yes, they do, including in some of the most popular rotary engines of all time — those used in the Mazda RX-7 and Mazda RX-8.

Rotary engines have three power phases per rotor revolution instead of the single power stroke of a four-stroke engine. This, along with the way the rotor is mated to the gear of the output shaft, means that for every complete rotation of the rotor, the output shaft rotates three times, rather than just once, as with a reciprocating piston engine. All four phases occur within a single cycle of the rotor, rather than one at a time as in a four-stroke piston engine.
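The three-to-one relationship between rotor and output shaft described above reduces to trivial arithmetic (an illustrative sketch, not from the article):

```python
def shaft_revs(rotor_revs):
    """Eccentric-shaft revolutions for a given number of rotor revolutions.

    As described above, the output shaft turns three times for every
    full rotation of the rotor.
    """
    return 3 * rotor_revs

def power_pulses(rotor_revs):
    """Combustion events for a single rotor: one per rotor flank, so
    three per rotor revolution -- which works out to one power pulse
    per output-shaft revolution."""
    return 3 * rotor_revs

# A rotor turning at 3,000 rpm spins the shaft at 9,000 rpm, delivering
# one power pulse per shaft revolution.
```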

There is also no need for a minimum of two valves (intake and exhaust) per cylinder as seen in a piston engine. Instead, fuel enters via intake ports that don't need to open and close. In a Wankel rotary engine, air and fuel are pulled in by a partial vacuum, and combusted gases are pushed out by pressure. However, various valves have been adopted to deliver variable valve timing for better air intake at different RPMs.

In its simplest form, a Wankel rotary engine has just two moving parts: the rotor and the output shaft. Compare that to modern piston cylinder engines, which have over 40 moving parts. Of course, in many iterations of the rotary engine, Mazda used two rotors rather than one, bringing the total number of moving parts to three. You get at least three more moving parts if you include the extra air intake valves the company added in later years.

Advantages of a rotary engine include its compact size, which makes it lightweight and gives it a higher power-to-weight ratio than piston engines. Its simple design with fewer moving parts makes it easier to produce, and the parts also move slower compared to piston engines, making rotary engines more reliable in terms of wear and tear. A reciprocating engine needs its power translated to rotational motion, whereas the rotary engine produces direct rotational motion, creating much lower vibration, higher RPMs, and smoother power delivery. Rotary engines are also touted for their multi-fuel capabilities, ranging from gasoline to ethanol and natural gas.

Disadvantages of a rotary engine include lower fuel efficiency, increased emissions (thanks to the use of oil — that is combusted alongside the air-fuel mixture — within the housing to better lubricate and seal the rotor), and regular replacement of the seals (apex, face, and side). Rotary engines also have reduced thermodynamic efficiency thanks to the large rotor housing with combustion happening only in certain sections, which leads to different temperatures of the housing, causing uneven expansion and difficulty in maintaining a seal. [...]


Original Submission

posted by janrinok on Wednesday April 24, @11:54AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Microsoft this week demoed VASA-1, a framework for creating videos of people talking from a still image, audio sample, and text script, and claims – rightly – it's too dangerous to be released to the public.

These AI-generated videos, in which people can be convincingly animated to speak scripted words in a cloned voice, are just the sort of thing the US Federal Trade Commission warned about last month, after previously proposing a rule to prevent AI technology from being used for impersonation fraud.

Microsoft's team acknowledge as much in their announcement, which explains the technology is not being released due to ethical considerations. They insist that they're presenting research for generating virtual interactive characters and not for impersonating anyone. As such, there's no product or API planned.

"Our research focuses on generating visual affective skills for virtual AI avatars, aiming for positive applications," the Redmond boffins state. "It is not intended to create content that is used to mislead or deceive.

"However, like other related content generation techniques, it could still potentially be misused for impersonating humans. We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection."

Kevin Surace, Chair of Token, a biometric authentication biz, and frequent speaker on generative AI, told The Register in an email that while there have been prior technology demonstrations of faces animated from a still frame and cloned voice file, Microsoft's demonstration reflects the state of the art.

"The implications for personalizing emails and other business mass communication is fabulous," he opined. "Even animating older pictures as well. To some extent this is just fun and to another it has solid business applications we will all use in the coming months and years."

The "fun" of deepfakes was 96 percent nonconsensual porn, when assessed [PDF] in 2019 by cybersecurity firm Deeptrace.

Nonetheless, Microsoft's researchers suggest that being able to create realistic looking people and put words in their mouths has positive uses.

"Such technology holds the promise of enriching digital communication, increasing accessibility for those with communicative impairments, transforming education methods with interactive AI tutoring, and providing therapeutic support and social interaction in healthcare," they propose in a research paper that does not contain the words "porn" or "misinformation."

While it's arguable AI generated video is not quite the same as a deepfake, the latter defined by digital manipulation as opposed to a generative method, the distinction becomes immaterial when a convincing fake can be conjured without cut-and-paste grafting.

[...] In prepared remarks, Rijul Gupta, CEO of DeepMedia, a deepfake detection biz, said:

[T]he most alarming aspect of deepfakes is their ability to provide bad actors with plausible deniability, allowing them to dismiss genuine content as fake. This erosion of public trust strikes at the very core of our social fabric and the foundations of our democracy. The human brain, wired to believe what it sees and hears, is particularly vulnerable to the deception of deepfakes. As these technologies become increasingly sophisticated, they threaten to undermine the shared sense of reality that underpins our society, creating a climate of uncertainty and skepticism where citizens are left questioning the veracity of every piece of information they encounter.

But think of the marketing applications.


Original Submission