Your 12 hourly digest for Slashdot

Slashdot
News for nerds, stuff that matters 
What Will Technology Do in 2023?
Jan 1st 2023, 02:34, by EditorDavid

Looking back at 2022's technology, the lead technology writer for the New York Times criticized Meta's $1,500 VR headset and the iPhone's "mostly unnoticeable improvements." But then he also predicted which new tech could affect you in 2023. Some highlights:

- It's very likely that next year you could have a chatbot that acts as a research assistant. Imagine that you are writing a research paper and want to add some historical facts about World War II. You could share a 100-page document with the bot and ask it to sum up the highlights related to a certain aspect of the war. The bot will then read the document and generate a summary for you.... That doesn't mean that we'll see a flood of stand-alone A.I. apps in 2023. It may be more the case that many tools we already use for work will begin building automatic language generation into their apps. Rowan Curran, a technology analyst at the research firm Forrester, said apps like Microsoft Word and Google Sheets could soon embed A.I. tools to streamline people's work flows.

- In 2023, the V.R. drumbeat will go on. Apple, which has publicly said it will never use the word "metaverse," is widely expected to release its first headset. Though the company has yet to share details about the product, Apple's chief executive, Tim Cook, has laid out clues, expressing his excitement about using augmented reality to take advantage of digital data in the physical world. "You'll wonder how you lived your life without augmented reality, just like today you wonder: How did people like me grow up without the internet?" Mr. Cook said in September to students in Naples. He added, however, that the technology was not something that would become profound overnight. Wireless headsets remain bulky and used indoors, which means that the first iteration of Apple's headgear will, similar to many others that preceded it, most likely be used for games. In other words, there will continue to be lots of chatter about the metaverse and virtual (augmented, mixed, whatever-you-want-to-call-dorky-looking) goggles in 2023, but it most likely still won't be the year that these headsets become widely popular, said Carolina Milanesi, a consumer tech analyst for the research firm Creative Strategies. "From a consumer perspective, it's still very uncertain what you're spending your thousand bucks on when you're buying a headset," she said. "Do I have to do a meeting with V.R.? With or without legs, it's not a necessity."
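
The research-assistant scenario in the first item above (share a long document with a chatbot, ask for a summary of one aspect) typically boils down to a map-reduce pattern over the text. Below is a minimal sketch of that pattern; call_llm is a placeholder for whatever text-generation model or API you use, and everything in it is an illustrative assumption rather than anything from the Times piece.

```python
# Sketch of the "summarize a 100-page document with a chatbot" workflow:
# split the document into chunks, summarize each, then merge the summaries.
# call_llm() is a placeholder for any text-generation model or API.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("Wire this up to the LLM of your choice.")

def chunk(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long document into pieces small enough for the model's context."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(document: str, topic: str) -> str:
    # Map step: summarize each chunk with the user's focus in the prompt.
    partial = [
        call_llm(f"Summarize the passages relevant to {topic}:\n\n{piece}")
        for piece in chunk(document)
    ]
    # Reduce step: merge the partial summaries into one answer.
    return call_llm(
        f"Combine these notes into a single summary about {topic}:\n\n"
        + "\n\n".join(partial)
    )
```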

Read more of this story at Slashdot.

The Shameful Open Secret Behind Southwest's Failure? Software Shortcomings
Dec 31st 2022, 23:34, by EditorDavid

Computer programmer Zeynep Tufekci now writes about the impact of technology on society. In an opinion piece for the New York Times, Tufekci writes on "the shameful open secret" that earlier this week led Southwest Airlines to suddenly cancel 5,400 flights in less than 48 hours. "The recent meltdown was avoidable, but it would have cost them." Long-time Slashdot reader theodp writes that the piece "takes a crack at explaining 'technical debt' to the masses." Tufekci writes: Computers become increasingly capable and powerful by the year and new hardware is often the most visible cue for technological progress. However, even with the shiniest hardware, the software that plays a critical role inside many systems is too often antiquated, and in some cases decades old. This failing appears to be a key factor in why Southwest Airlines couldn't return to business as usual the way other airlines did after last week's major winter storm. More than 15,000 of its flights were canceled starting on Dec. 22, including more than 2,300 canceled this past Thursday — almost a week after the storm had passed. It's been an open secret within Southwest for some time, and a shameful one, that the company desperately needed to modernize its scheduling systems. Software shortcomings had contributed to previous, smaller-scale meltdowns, and Southwest unions had repeatedly warned about it. Without more government regulation and oversight, and greater accountability, we may see more fiascos like this one, which most likely stranded hundreds of thousands of Southwest passengers — perhaps more than a million — over Christmas week. And not just for a single company, as the problem is widespread across many industries. "The reason we made it through Y2K intact is that we didn't ignore the problem," the piece argues. But in comparison, it points out, Southwest had already experienced another cancellation crisis in October of 2021 (while the president of the pilots' union "pointed out that the antiquated crew-scheduling technology was leading to cascading disruptions.") "In March, in its open letter to the company, the union even placed updating the creaking scheduling technology above its demands for increased pay." Speaking about this week's outage, a Southwest spokesman concedes that "We had available crews and aircraft, but our technology struggled to align our resources due to the magnitude and scale of the disruptions." But Tufekci concludes that "Ultimately, the problem is that we haven't built a regulatory environment where companies have incentives to address technical debt, rather than passing the burden on to customers, employees or the next management.... For airlines, it might mean holding them responsible for the problems their miserly approach causes to the flying public."

Read more of this story at Slashdot.

AI-Powered Software Delivery Company Predicts 'The End of Programming'
Dec 31st 2022, 22:34, by EditorDavid

Matt Welsh is the CEO and co-founder of Fixie.ai, an AI-powered software delivery company founded by a team from Google and Apple. "I believe the conventional idea of 'writing a program' is headed for extinction," he opines in January's Communications of the ACM, "and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed." His essay is titled "The End of Programming," and predicts that "Programming will be obsolete." In situations where one needs a "simple" program (after all, not everything should require a model of hundreds of billions of parameters running on a cluster of GPUs), those programs will, themselves, be generated by an AI rather than coded by hand.... with humans relegated to, at best, a supervisory role.... I am not just talking about things like GitHub's CoPilot replacing programmers. I am talking about replacing the entire concept of writing programs with training models. In the future, CS students are not going to need to learn such mundane skills as how to add a node to a binary tree or code in C++. That kind of education will be antiquated, like teaching engineering students how to use a slide rule. The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and then some), ready to be given any task required of the machine. The bulk of the intellectual work of getting the machine to do what one wants will be about coming up with the right examples, the right training data, and the right ways to evaluate the training process. Suitably powerful models capable of generalizing via few-shot learning will require only a few good examples of the task to be performed. Massive, human-curated datasets will no longer be necessary in most cases, and most people "training" an AI model will not be running gradient descent loops in PyTorch, or anything like it. They will be teaching by example, and the machine will do the rest. In this new computer science — if we even call it computer science at all — the machines will be so powerful and already know how to do so many things that the field will look like less of an engineering endeavor and more of an educational one; that is, how to best educate the machine, not unlike the science of how to best educate children in school. Unlike (human) children, though, these AI systems will be flying our airplanes, running our power grids, and possibly even governing entire countries. I would argue that the vast majority of Classical CS becomes irrelevant when our focus turns to teaching intelligent machines rather than directly programming them. Programming, in the conventional sense, will in fact be dead.... We are rapidly moving toward a world where the fundamental building blocks of computation are temperamental, mysterious, adaptive agents.... This shift in the underlying definition of computing presents a huge opportunity, and plenty of huge risks. Yet I think it is time to accept that this is a very likely future, and evolve our thinking accordingly, rather than just sit here waiting for the meteor to hit. "I think the debate right now is primarily around the extent to which these AI models are going to revolutionize the field," Welsh says in a video interview. "It's more a question of degree rather than whether it's going to happen....
"I think we're going to change from a world in which people are primarily writing programs by hand to a world in which we're teaching AI models how to do things that we want them to do... It starts to feel more like a field that focuses on AI education and maybe even AI psychiatry. In order to solve these problems, you can't just assume that people are going to be writing the code by hand."

Read more of this story at Slashdot.

'If Aliens Contact Humanity, Who Decides What We Do Next?'
Dec 31st 2022, 21:34, by EditorDavid

If humankind detects a message from an advanced civilisation, "It would be a transformative event for humankind," writes the Guardian, "one the world's nations are surely prepared for. Or are they?" "Look at the mess we made when Covid hit. We'd be like headless chickens," says Dr John Elliott, a computational linguist at the University of St Andrews. "We cannot afford to be ill-prepared, scientifically, socially, and politically rudderless, for an event that could happen at any time and which we cannot afford to mismanage." This frank assessment of Earth's unreadiness for contact with life elsewhere underpins the creation of the Seti (Search for Extraterrestrial Intelligence) post-detection hub at St Andrews. Over the next month or two, Elliott aims to bring together a core team of international researchers and affiliates. They will take on the job of getting ready: to analyse mysterious signals, or even artefacts, and work out every aspect of how we should respond.... "After the initial announcement, we'd be looking at societal impact, information dissemination, the media, the impact on religions and belief systems, the potential for disinformation, what analytical capabilities we'll need, and much more: having strategies in place, being transparent with everything we've discovered — what we know and what we do not know," says Elliott.... Lewis Dartnell, an astrobiologist and professor of science communication at the University of Westminster, said the new hub at St Andrews is "an important step in raising awareness at how ill-prepared we currently are" for detecting a signal from an alien civilisation. But he added that any intelligent aliens were likely to be hundreds if not thousands of light years away, meaning communication time would be on the scale of many centuries. "Even if we were to receive a signal tomorrow, we would have plenty of breathing space to assemble an international team of diverse experts to attempt to decipher the meaning of the message, and carefully consider how the Earth should respond, and even if we should. The bigger concern is to establish some form of international agreement to prevent capable individuals or private corporations from responding independently — before a consensus has formed on whether it is safe to respond at all, and what we would want to say as one planet," he said.

Read more of this story at Slashdot.

Ancient Cats Migrated With Humans All Over the World
Dec 31st 2022, 20:34, by EditorDavid

Slashdot reader guest reader shares some interesting research from the University of Missouri: Nearly 10,000 years ago, humans settling in the Fertile Crescent, the areas of the Middle East surrounding the Tigris and Euphrates rivers, made the first switch from hunter-gatherers to farmers. They developed close bonds with the rodent-eating cats that conveniently served as ancient pest-control in society's first civilizations. A new study at the University of Missouri found this lifestyle transition for humans was the catalyst that sparked the world's first domestication of cats, and as humans began to travel the world, they brought their new feline friends along with them. Leslie A. Lyons, a feline geneticist and Gilbreath-McLorn endowed professor of comparative medicine in the MU College of Veterinary Medicine, collected and analyzed DNA from cats in and around the Fertile Crescent area, as well as throughout Europe, Asia and Africa, comparing nearly 200 different genetic markers.... Lyons added that while horses and cattle have seen various domestication events caused by humans in different parts of the world at various times, her analysis of feline genetics in the study strongly supports the theory that cats were likely first domesticated only in the Fertile Crescent before migrating with humans all over the world.... Lyons, who has researched feline genetics for more than 30 years, said studies like this also support her broader research goal of using cats as a biomedical model to study genetic diseases that impact both cats and people, such as polycystic kidney disease, blindness and dwarfism.... "[A]nything we can do to study the causes of genetic diseases in cats or how to treat their ailments can be useful for one day treating humans with the same diseases," Lyons said.

Read more of this story at Slashdot.

Other Software Projects Are Now Trying to Replicate ChatGPT
Dec 31st 2022, 19:34, by EditorDavid

"The first open source equivalent of OpenAI's ChatGPT has arrived," writes TechCrunch, "but good luck running it on your laptop — or at all." This week, Philip Wang, the developer responsible for reverse-engineering closed-sourced AI systems including Meta's Make-A-Video, released PaLM + RLHF, a text-generating model that behaves similarly to ChatGPT [listed as a work in progress]. The system combines PaLM, a large language model from Google, and a technique called Reinforcement Learning with Human Feedback — RLHF, for short — to create a system that can accomplish pretty much any task that ChatGPT can, including drafting emails and suggesting computer code. But PaLM + RLHF isn't pre-trained. That is to say, the system hasn't been trained on the example data from the web necessary for it to actually work. Downloading PaLM + RLHF won't magically install a ChatGPT-like experience — that would require compiling gigabytes of text from which the model can learn and finding hardware beefy enough to handle the training workload.... PaLM + RLHF isn't going to replace ChatGPT today — unless a well-funded venture (or person) goes to the trouble of training and making it available publicly. In better news, several other efforts to replicate ChatGPT are progressing at a fast clip, including one led by a research group called CarperAI. In partnership with the open AI research organization EleutherAI and startups Scale AI and Hugging Face, CarperAI plans to release the first ready-to-run, ChatGPT-like AI model trained with human feedback. LAION, the nonprofit that supplied the initial dataset used to train Stable Diffusion, is also spearheading a project to replicate ChatGPT using the newest machine learning techniques.

Read more of this story at Slashdot.

MIT's Newest fMRI Study: 'This is Your Brain on Code'
Dec 31st 2022, 18:34, by EditorDavid

Remember when MIT researchers did fMRI brain scans measuring the blood flow through brains to determine which parts were engaged when programmers evaluated code? MIT now says that a new paper (by many of the same authors) delves even deeper: Whereas the previous study looked at 20 to 30 people to determine which brain systems, on average, are relied upon to comprehend code, the new research looks at the brain activity of individual programmers as they process specific elements of a computer program. Suppose, for instance, that there's a one-line piece of code that involves word manipulation and a separate piece of code that entails a mathematical operation. "Can I go from the activity we see in the brains, the actual brain signals, to try to reverse-engineer and figure out what, specifically, the programmer was looking at?" asks Shashank Srikant, a PhD student in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "This would reveal what information pertaining to programs is uniquely encoded in our brains." To neuroscientists, he notes, a physical property is considered "encoded" if they can infer that property by looking at someone's brain signals. Take, for instance, a loop — an instruction within a program to repeat a specific operation until the desired result is achieved — or a branch, a different type of programming instruction that can cause the computer to switch from one operation to another. Based on the patterns of brain activity that were observed, the group could tell whether someone was evaluating a piece of code involving a loop or a branch. The researchers could also tell whether the code related to words or mathematical symbols, and whether someone was reading actual code or merely a written description of that code.... The team carried out a second set of experiments, which incorporated machine learning models called neural networks that were specifically trained on computer programs. These models have been successful, in recent years, in helping programmers complete pieces of code. What the group wanted to find out was whether the brain signals seen in their study when participants were examining pieces of code resembled the patterns of activation observed when neural networks analyzed the same piece of code. And the answer they arrived at was a qualified yes. "If you put a piece of code into the neural network, it produces a list of numbers that tells you, in some way, what the program is all about," Srikant says. Brain scans of people studying computer programs similarly produce a list of numbers. When a program is dominated by branching, for example, "you see a distinct pattern of brain activity," he adds, "and you see a similar pattern when the machine learning model tries to understand that same snippet." But where will it all lead? They don't yet know what these recently-gleaned insights can tell us about how people carry out more elaborate plans in the real world.... Creating models of code composition, says O'Reilly, a principal research scientist at CSAIL, "is beyond our grasp at the moment." Lipkin, a BCS PhD student, considers this the next logical step — figuring out how to "combine simple operations to build complex programs and use those strategies to effectively address general reasoning tasks." He further believes that some of the progress toward that goal achieved by the team so far owes to its interdisciplinary makeup.
"We were able to draw from individual experiences with program analysis and neural signal processing, as well as combined work on machine learning and natural language processing," Lipkin says. "These types of collaborations are becoming increasingly common as neuro- and computer scientists join forces on the quest towards understanding and building general intelligence."

Read more of this story at Slashdot.

Inspired by Amazon, Paid Promotions Spread to Other Online Shopping Sites in 2022
Dec 31st 2022, 17:34, by EditorDavid

We're buying more things online, the Washington Post notes. But how we buy may be changing too: For the first time in years, Google and Meta have grabbed less than half of the digital marketing money spent in the United States in 2022. Amazon, which took more than 11 percent of all digital ads purchased, was the biggest reason Google and Meta lost ground as advertising powerhouses, according to the research firm Insider Intelligence. In part because of Amazon's success with paid product promotions, Walmart, Target, the grocery delivery company Instacart, drugstore chain Walgreens and other retailers are also putting a higher priority on tailoring commercials to influence what you buy, advertising specialists said. Another reason these ads are spreading is that retailers' knowledge of what you buy is valuable, especially now that there are more limitations on how internet powers such as Facebook can follow everything you do to target you with ads. Like Google and Facebook, stores are trying to use as much information as they can find about you to steer your choices. One difference from Google and Facebook is that retailers like Amazon and Walmart make money from influencing what you buy and from selling you the product. The thing is ... these ads seem to work on you. And that's why paid product persuasion is likely here to stay.

Read more of this story at Slashdot.

World Chess Champ Magnus Carlsen Also Wins World Blitz and Rapid Chess Titles
Dec 31st 2022, 16:34, by EditorDavid

"Rapid chess" grants 15 minutes to each player for all moves (plus 10 seconds per move). "Blitz chess" grants each player three minutes (plus 2 seconds per move). Now CNN reports that five-time world chess champion Magnus Carlsen "won both the World Rapid and World Blitz chess titles in Almaty, Kazakhstan, in the latest landmark of his glittering career." The 32-year-old Norwegian is now the holder of all three world chess championship titles — in Classical, Rapid and Blitz — for the third time in his career, while no other player has ever won both the Rapid and Blitz titles in the same year. "Gonna need more hands soon," Carlsen joked on Twitter, posting a video of himself counting his now 15 world titles on his fingers. It caps a triumphant end to Carlsen's remarkable decade-long reign as the classical world champion, as he has already announced that he will not defend his title next year. Chess24 reports that for his first three-minute match, Magnus Carlsen showed up two and a half minutes late — and starting with just 30 seconds left on his clock, still beat his opponent.

Read more of this story at Slashdot.

You are receiving this email because you subscribed to this feed at blogtrottr.com. By using Blogtrottr, you agree to our policies, terms and conditions.

If you no longer wish to receive these emails, you can unsubscribe from this feed, or manage all your subscriptions.
