In this week's edition of MIT Sloan in the News, we're highlighting some of our artificial intelligence media coverage from over the past year — with insights from faculty and researchers about cybersecurity, generative AI, LLMs, chatbots, and more. Enjoy!
|
Highlights
|
 |
|
The AI Innovator | 10/29/2024 | Kate Kellogg
A recent study by professor Kate Kellogg and co-authors found that when it comes to emerging technologies, junior employees may not be the best teachers of their more senior colleagues. "We expected that juniors would be a great source of expertise for senior professionals trying to learn to effectively use generative AI," said Kellogg. Instead, the results surprised the authors: generative AI's broad capabilities and exponential pace of development are limiting junior employees' ability to coach senior employees, since the juniors can't keep up themselves.
|
The New York Times | 09/12/2024 | David Rand
It's the information, not the chatbot itself, that's changing people's minds, said professor David Rand. "It is the facts and evidence themselves that are really doing the work here," he said.
|
Business Insider | 08/30/2024 | Andrew W. Lo
The glaring problem with publicly available AI tools is that they're "inherently sociopathic," professor Andrew W. Lo and co-author wrote in a research report. "This sociopathy seems to cause the characteristic glibness of large language model output; an LLM can easily argue both sides of an argument because neither side has weight to it," they wrote. It may be able to role-play as a financial advisor by relying on its training data, but the AI needs to have a deeper understanding of a client's state of mind to build trust.
|
The Economist | 08/21/2024 | Danielle Li
Although the early internet was dominated by men, for example, young American women were more online than their male counterparts by 2005. On top of this, professor Danielle Li notes that the studies do not actually show whether men's current ChatGPT use translates into better or more productive work. At the moment, the technology may be more of a digital toy, she says. Perhaps, then, high-achieving women are simply better at avoiding distraction.
|
Charter | 07/18/2024 | Thomas Malone, Abdullah Almaatouq
In a recent paper analyzing 74 studies on human-AI collaboration, professor Thomas Malone, assistant professor Abdullah Almaatouq, and a co-author looked at a series of experiments where humans and AI performed tasks individually and together. They found that, on average, combining the two yielded worse outcomes than using the better of the two alone.
|
Associated Press | 07/12/2024 | Michael Cusumano
According to professor Michael Cusumano, so-called "acqui-hires," in which one company acquires another to absorb talent, have been common in the tech industry for decades, but what's happening in the AI industry is a little different. "They acquire only some employees, or the majority but not all, license the technology, and leave the company functioning but not really competing," Cusumano said.
|
Scientific American | 06/11/2024 | Rama Ramakrishnan
While Google's AI answers may initially cause a big jump in the search engine's energy costs, the costs may begin to decrease again as engineers figure out how to make the system more efficient, said professor of the practice Rama Ramakrishnan. "The number of searches going through an LLM is going to go up, and therefore, probably, the energy costs are going to go up. But I think the cost per query seems to be going down."
|
Fortune | 04/29/2024 | Andrew McAfee
In a new report, principal research scientist Andrew McAfee explores the implications of generative AI for economic growth. Fears over a steep drop in labor demand are "probably overblown," McAfee writes. "The history of general-purpose technologies shows that the growth they bring is accompanied by strong demand for labor."
|
CNBC | 03/03/2024 | Stuart Madnick
Professor Stuart Madnick and his team have simulated cyberattacks in the lab, resulting in explosions. They were able to hack into computer-controlled motors with pumps and make them ignite. Attacks that cause temperature gauges to malfunction, pressure valves to jam, and circuits to be circumvented can also cause blasts in lab settings. Such an outcome, Madnick said, would do far more damage than simply taking a system offline for a while, as a typical cyberattack does.
|
Pharmaceutical Executive | 02/27/2024 | Deborah L. Ancona
Professor Deborah L. Ancona says: "If this current quarter's cost cutting across Fortune 500s is a sign of the continued turmoil that many firms will encounter, executives need to forge a new operating model that addresses the reality of firms introducing emerging technologies and reduced headcounts."
|
Time Magazine | 01/26/2024 | Nur Ahmed, Neil Thompson
Industry's increasingly privileged access to AI inputs has resulted in a widening gap between AI systems built by businesses, compared with those built by researchers in academia. "Now, academics are doing more follow-up or follow-on research instead of trying to push the boundaries," says post-doctoral associate Nur Ahmed. "The National Artificial Intelligence Research Resource (NAIRR) is an incredibly important first step, but it's just the first step. That's not going to be enough to meet the demand for all the publicly minded stuff that academics should be doing and would want to do," says research scientist Neil Thompson.
|
The New York Times | 01/24/2024 | Daron Acemoglu, Simon Johnson, Zeynep Ton
|
Opinion Pieces
|
 |
|
Harvard Business Review | 02/26/2024 | Dimitris Bertsimas
Dimitris Bertsimas, associate dean for Business Analytics, and co-author wrote: "For those private market investors willing to embrace external data, the rewards can be significant. By harnessing the power of external data, they can gain a crucial advantage in an increasingly competitive market, driving success for themselves and their portfolio companies."
|
MIT Sloan Management Review | 05/29/2024 | Renée Richardson Gosline
Senior lecturer and research scientist Renée Richardson Gosline and co-authors wrote: "OpenAI's ChatGPT has generated excitement since its release in November 2022, but it has also created new challenges for managers. On the one hand, business leaders understand that they cannot afford to overlook the potential of generative AI large language models (LLMs). On the other hand, apprehensions surrounding issues such as bias, inaccuracy, and security breaches loom large, limiting trust in these models. In such an environment, responsible approaches to using LLMs are critical to the safe adoption of generative AI."
|