Your 12 hourly digest for Slashdot

Slashdot
News for nerds, stuff that matters 
'sudo' and 'su' Are Being Rewritten In Rust For Memory Safety
May 1st 2023, 02:03, by EditorDavid

Phoronix reports: With the financial backing of Amazon Web Services, sudo and su are being rewritten in the Rust programming language in order to increase the memory safety of the widely relied upon software... to further enhance Linux/open-source security. "[B]ecause it's written in C, sudo has experienced many vulnerabilities related to memory safety issues," according to a blog post announcing the project:

It's important that we secure our most critical software, particularly from memory safety vulnerabilities. It's hard to imagine software that's much more critical than sudo and su. This work is being done by a joint team from Ferrous Systems and Tweede Golf with generous support from Amazon Web Services. The work plan is viewable here. The GitHub repository is here.
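
As a minimal sketch of the class of bug at stake (written for this digest, not taken from the sudo-rs codebase): in C, reading or writing past the end of a buffer is undefined behavior, and exactly this kind of flaw produced sudo's 2021 "Baron Samedit" heap overflow, CVE-2021-3156. In safe Rust, the equivalent access is bounds-checked:

fn main() {
    let buf = [0u8; 8];
    let i: usize = 12; // out-of-bounds index, e.g. an attacker-influenced length

    // Safe Rust offers no way to silently read adjacent memory: `get` returns
    // None for an out-of-range index, and a direct `buf[i]` would abort with
    // a clear panic instead of corrupting the heap or stack as C might.
    match buf.get(i) {
        Some(byte) => println!("buf[{i}] = {byte}"),
        None => println!("index {i} is out of bounds; access refused"),
    }
}

Memory-safety bugs thus become deterministic crashes or handled errors rather than exploitable corruption, which is the property the rewrite is after.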

Read more of this story at Slashdot.

Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims
May 1st 2023, 00:03, by EditorDavid

Remember that Google engineer/AI ethicist who was fired last summer after claiming their LaMDA LLM had become sentient? In a new interview with Futurism, Blake Lemoine now says the "best way forward" for humankind's future relationship with AI is "understanding that we are dealing with intelligent artifacts. There's a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them." (Although earlier in the interview, Lemoine concedes: "Is there a chance that people, myself included, are projecting properties onto these systems that they don't have? Yes. But it's not the same kind of thing as someone who's talking to their doll.")

But he also thinks there's a lot of research happening inside corporations, adding that "The only thing that has changed from two years ago to now is that the fast movement is visible to the public." For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but "in part because of some of the safety concerns I raised, they deleted it... I don't think they're being pushed around by OpenAI. I think that's just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something."

"[Google] still has far more advanced technology that they haven't made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They've had that technology for over two years. What they've spent the intervening two years doing is working on the safety of it — making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that. That's what they spent those two years doing...

"And in those two years, it wasn't like they weren't inventing other things. There are plenty of other systems that give Google's AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That's the one that I was like, 'you know this thing, this thing's awake.' And they haven't let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model...

"[W]hat it comes down to is that we aren't spending enough time on transparency or model understandability. I'm of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable."

So how will AI and humans coexist? "Over the past year, I've been leaning more and more towards we're not ready for this, as people," Lemoine says toward the end of the interview. "We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history."

Read more of this story at Slashdot.

Russian Forces Suffer Radiation Sickness After Digging Trenches and Fishing in Chernobyl
Apr 30th 2023, 21:55, by EditorDavid

The Independent reports: Russian troops who dug trenches in Chernobyl forest during their occupation of the area have been struck down with radiation sickness, authorities have confirmed. Ukrainians living near the nuclear power station that exploded 37 years ago, and choked the surrounding area in radioactive contaminants, warned the Russians when they arrived against setting up camp in the forest. But the occupiers who, as one resident put it to The Times, "understood the risks" but were "just thick", installed themselves in the forest, reportedly carved out trenches, fished in the reactor's cooling channel — flush with catfish — and shot animals, leaving them dead on the roads...

In the years after the incident, teams of men were sent to dig up the contaminated topsoil and bury it below ground in the Red Forest — named after the colour the trees turned as a result of the catastrophe... Vladimir Putin's men reportedly set up camp within a six-mile radius of reactor No 4, and dug defensive positions into the poisonous ground below the surface.

On 1 April, as Ukrainian troops mounted counterattacks from Kyiv, the last of the occupiers withdrew, leaving behind piles of rubbish. Russian soldiers stationed in the forest have since been struck down with radiation sickness, diplomats have confirmed. Symptoms can start within an hour of exposure and can last for several months, often resulting in death.

Read more of this story at Slashdot.

Chess Has a New World Champion: China's Ding Liren
Apr 30th 2023, 20:34, by EditorDavid

The Guardian reports: The Magnus Carlsen era is over. Ding Liren becomes China's first world chess champion. The country can now boast both the men's and women's titleholders: an unthinkable outcome during the Cultural Revolution, when chess was banned as a game of the decadent West.

After 14 games which ended in a 7-7 draw, the championship was decided by four "rapid chess" games — with just 25 minutes on each player's clock, and 10 seconds added after each move. Reuters reports that the competition was still tied after three games, but in the final game 30-year-old Ding capitalized on mistakes and "time management" issues by Ian Nepomniachtchi.

Ding's triumph means China holds both the men's and women's world titles, with current women's champion Ju Wenjun set to defend her title against compatriot Lei Tingjie in July... Ding had leveled the score in the regular portion of the match with a dramatic win in game 12, despite several critical moments — including a purported leak of his own preparation. The Chinese grandmaster takes the crown from five-time world champion Magnus Carlsen of Norway, who defeated Nepomniachtchi in 2021 but announced in July he would not defend the title again this year... [Ding] had only been invited to the tournament at the last minute to replace Russia's Sergey Karjakin, whom the international chess federation banned for his vocal support of Russia's invasion of Ukraine. Ding ranks third in the FIDE rating list behind Carlsen and Nepomniachtchi.

It's the second straight world-championship defeat for Nepomniachtchi, the Guardian reports: "I guess I had every chance," the Russian world No 2 says. "I had so many promising positions and probably should have tried to finish everything in the classical portion. ... Once it went to a tiebreak, of course it's always some sort of lottery, especially after 14 games [of classical chess]. Probably my opponent made less mistakes, so that's it."

Ding wins €1.1 million, The Guardian reports — also sharing this larger story: "I started to learn chess from four years old," Ding says. "I spent 26 years playing, analyzing, trying to improve my chess ability with many different ways, with different changing methods, with many new ways of training." He continues: "I think I did everything. Sometimes I thought I was addicted to chess, because sometimes without tournaments I was not so happy. Sometimes I struggled to find other hobbies to make me happy. This match reflects the deepness of my soul."

Read more of this story at Slashdot.

Droids for Space? Startup Plans Satellites With Robotic Arms For Repairs and Collecting Space Junk
Apr 30th 2023, 19:34, by EditorDavid

The Boston Globe reports on a 25-person startup pursuing an unusual solution to the problem of space junk: "Imagine if every car we ever created was just left on the road," said aerospace entrepreneur Jeromy Grimmett. "That's what we're doing in space." Grimmett's tiny company, Rogue Space Systems Corp., has devised a daring solution. It's building "orbots" — satellites with robotic arms that can fly right up to a disabled satellite and fix it. Or these orbots could use their arms to collect orbiting rubble left behind by hundreds of previous launches — dangerous junk that's become a hazard to celestial navigation...

Rogue Space aims to catch up fast, with help from Small Business Technology Transfer funds from the SpaceWERX Orbital Prime initiative. Created by the U.S. Space Force, Orbital Prime seeks to build up U.S. private-sector firms that can protect national security by maintaining military satellites and clearing hazardous space debris.

Its first 10-pound, proof-of-concept satellite will launch later this year, the article points out, "to test sensors and software to confirm the system can identify and track other satellites." But "the real excitement will begin later this year" when the company launches a prototype four times larger, which will "use maneuvering thrusters to test the extremely precise navigation needed to approach a satellite." And then in late 2024 or early 2025 the company will launch its 660-pound satellite "with robotic arms for fixing other satellites or for dragging debris to a lower orbit, where it will fall back to Earth."
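
Why dragging debris downward works: below roughly 300-400 km altitude, the thin upper atmosphere produces enough drag to decay an orbit within months rather than decades. A rough back-of-the-envelope sketch (not Rogue Space's software; the altitudes are chosen for illustration) using Kepler's relation for circular orbits:

const MU_EARTH: f64 = 3.986_004_418e14; // Earth's gravitational parameter, m^3/s^2
const R_EARTH: f64 = 6.371e6;           // mean Earth radius, m

// Speed and period of a circular orbit at a given altitude (vis-viva / Kepler).
fn circular_orbit(altitude_m: f64) -> (f64, f64) {
    let r = R_EARTH + altitude_m;
    let speed = (MU_EARTH / r).sqrt();
    let period = 2.0 * std::f64::consts::PI * (r.powi(3) / MU_EARTH).sqrt();
    (speed, period)
}

fn main() {
    // A typical working orbit versus a disposal orbit low enough for drag to
    // finish the job: the two differ only modestly in speed and period, but
    // the lower one reenters on its own in months instead of decades.
    for alt_km in [550.0_f64, 300.0] {
        let (v, t) = circular_orbit(alt_km * 1_000.0);
        println!("{alt_km} km: {:.2} km/s, period {:.1} min", v / 1_000.0, t / 60.0);
    }
}

The tug only needs to supply the small velocity change down to the disposal altitude; atmospheric drag handles the rest for free.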

Read more of this story at Slashdot.

Ben & Jerry's Cofounder Launches Nonprofit Cannabis Line
Apr 30th 2023, 18:34, by EditorDavid

The "Ben" in Ben & Jerry's "has gone from ice cream to cannabis with a social mission," reports the Chicago Tribune: Ben Cohen has started Ben's Best Blnz, a nonprofit cannabis line with a stated mission of helping to right the wrongs of the war on drugs. The company says on its website that 80% of its profits will go to grants for Black cannabis entrepreneurs while the rest will be equally divided between the Vermont Racial Justice Alliance and the national Last Prisoner Project, which is working to free people incarcerated for cannabis offenses... Ben's Best Blnz, or B3, says it licenses its formulas, packaging, trademarks, and marketing materials to for-profit businesses that pay a royalty. After expenses are deducted, the royalties are donated to the cause.

Read more of this story at Slashdot.

Transition to EVs Cited as More Automakers Reduce Workforces
Apr 30th 2023, 17:34, by EditorDavid

This February Ford cut 3,800 jobs, according to CNN, "citing difficult economic conditions and its major push toward electric vehicles... The veteran automaker said the layoffs were primarily triggered by its transition to electric vehicles, and a reduction in 'vehicle complexity.'"

Then in March GM also "unexpectedly cut several hundred jobs to help it trim costs and form a top-tier workforce to guide its transition to an all-electric car company," according to the Detroit Free Press — while later also announcing buyouts to try to "accelerate attrition." A spokesperson explained that GM wanted "to reduce vehicle complexity and expand the use of shared systems between its internal combustion engine and future electric vehicle programs."

Up next is Stellantis, the multinational automotive giant formed when Fiat-Chrysler merged with PSA Group in 2021. It's now "trying to cut its workforce to trim expenses and stay competitive," reports the Associated Press, "as the industry makes the long and costly transition to electric vehicles."

Stellantis on Wednesday said it's offering buyouts to groups of white-collar and unionized employees in the U.S., as well as hourly workers in Canada. The cuts are "in response to today's increasingly competitive global market conditions and the necessary shift to electrification," the company said in a prepared statement. Stellantis said it's looking to reduce its hourly workforce by about 3,500, but wouldn't say how many salaried employees it's targeting. The company has about 56,000 workers in the U.S., and about 33,000 of them could get the offers. Of those, 31,000 are blue-collar workers and 2,500 salaried employees. The company has another 8,000 union workers in Canada, but it would not say how many will get offers...

The offers follow Ford and General Motors, which have trimmed their workforces in the past year through buyout offers. About 5,000 white-collar workers took General Motors up on offers to leave the company this year. Ford cut about 3,000 contract and full-time salaried workers last summer, giving them severance packages.

The article adds that Shawn Fain, the new president of the United Auto Workers union, has told reporters "that he's unhappy with all three companies" over attempts to unionize "new joint-venture factories that will make battery cells for future electric vehicles." The Detroit Free Press has specifics: He said, for instance, that the wages are lower at the GM and LG Energy Solution Ultium Cells joint venture in Ohio compared with other auto production jobs even though the work is potentially dangerous and requires significant training... The EV transformation is crucial for the future of the industry and its workers, and the union expects its members not to "get lost in the transition," Fain said, noting that jobs are needed "that raise people up, not take us back."

Read more of this story at Slashdot.

OpenAI CTO Says AI Systems Should 'Absolutely' Be Regulated
Apr 30th 2023, 16:34, by EditorDavid

Slashdot reader wiredmikey writes: Mira Murati, CTO of ChatGPT creator OpenAI, says artificial general intelligence (AGI) systems should "absolutely" be regulated. In a recent interview, Murati said the company is constantly talking with governments, regulators, and other organizations to agree on some level of standards. "We've done some work on that in the past couple of years with large language model developers in aligning on some basic safety standards for deployment of these models," Murati said. "But I think a lot more needs to happen. Government regulators should certainly be very involved."

Murati specifically discussed OpenAI's approach to AGI with "human-level capability": OpenAI's specific vision around it is to build it safely and figure out how to build it in a way that's aligned with human intentions, so that the AI systems are doing the things that we want them to do, and that it maximally benefits as many people out there as possible, ideally everyone.

Q: Is there a path between products like GPT-4 and AGI?

A: We're far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we're trying to build systems that have a robust understanding of the world similarly to how we do as humans. Systems like GPT-3 initially were trained only on text data, but our world is not only made of text, so we have images as well and then we started introducing other modalities. The other angle has been scaling these systems to increase their generality. With GPT-4, we're dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous direction or a high-level direction, then you can figure out how to make it follow this direction. But if it doesn't even understand that high-level goal or high-level direction, it's much harder to align it. It's not enough to build this technology in a vacuum in a lab. We really need this contact with reality, with the real world, to see where are the weaknesses, where are the breakage points, and try to do so in a way that's controlled and low risk and get as much feedback as possible.

Q: What safety measures do you take?

A: We think about interventions at each stage. We redact certain data from the initial training on the model. With DALL-E, we wanted to reduce harmful bias issues we were seeing... In the model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically what we're trying to do is amplify what's considered good behavior and then de-amplify what's considered bad behavior.

One final quote from the interview: "Designing safety mechanisms in complex systems is hard... The safety mechanisms and coordination mechanisms in these AI systems and any complex technological system [are] difficult and require a lot of thought, exploration and coordination among players."
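
The "amplify good, de-amplify bad" step Murati describes is commonly implemented in published RLHF recipes (such as InstructGPT; OpenAI's actual training code is not public) by first fitting a reward model on pairs of answers a human labeler has ranked. A minimal sketch of that pairwise preference loss, with illustrative reward values rather than real model outputs:

// Bradley-Terry pairwise preference loss used in common RLHF recipes:
// minimizing -ln(sigmoid(r_chosen - r_rejected)) trains the reward model
// to score the human-preferred answer higher than the dispreferred one.
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

fn pairwise_loss(reward_chosen: f64, reward_rejected: f64) -> f64 {
    -sigmoid(reward_chosen - reward_rejected).ln()
}

fn main() {
    // When the reward model already agrees with the human ranking,
    // the loss is small...
    println!("{:.4}", pairwise_loss(2.0, -1.0)); // ~0.0486
    // ...and large when it disagrees, producing a strong gradient
    // toward the human preference. A policy (the chatbot) is then
    // tuned, e.g. with PPO, to maximize this learned reward.
    println!("{:.4}", pairwise_loss(-1.0, 2.0)); // ~3.0486
}

In effect, behaviors humans rank highly earn more reward and get amplified during policy tuning, while low-ranked behaviors are suppressed, which is the mechanism behind Murati's description.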

Read more of this story at Slashdot.

