Technology & Innovation

The Download: 2023’s worst tech failures, and the end of online anonymity in China

Diane Davis

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The worst technology failures of 2023

Welcome to our annual list of the worst technologies. This year, one technology disaster in particular holds lessons for the rest of us: the Titan submersible that imploded in the shadow of the Titanic. 

Everyone had warned Stockton Rush, the sub’s creator, that it wasn’t safe. But he believed innovation meant tossing out the rule book and taking chances. He set aside good engineering in favor of wishful thinking. He and four others died. 

To us it shows how the spirit of innovation can pull ahead of reality, sometimes with unpleasant consequences. It was a phenomenon we saw time and again this year, as when GM’s Cruise division put robotaxis into circulation before they were ready. Others found convoluted ways to keep hopes alive, like the company showing off its industrial equipment while quietly still using bespoke methods to craft its lab-grown meat.

The worst cringe, though, is when true believers can’t see the looming disaster, but we do. That’s the case for the new “Ai Pin,” developed at a cost of tens of millions, that’s meant to replace smartphones. It looks like a titanic failure to us. Read the full story to find out the seven worst technologies of 2023.  

—Antonio Regalado

How 2023 marked the death of anonymity online in China

There are so many people we meet on the internet daily whose real names we will never know. The TikTok teen who learned the trendy new dance, the anime artist who uploaded a new painting, the random commenter posting under the YouTube video you just watched. That’s the internet we are familiar with. 

In China, it’s already been impossible to be fully anonymous for a while now, thanks to a sophisticated system that requires identity verification to use any online services. Despite that, there were still corners of the Chinese internet where you could remain obscure. But lately, even this last bit of anonymity is slipping away. Read the full story.

—Zeyi Yang

Gene editing took center stage in 2023

Gene editing can be used to delete, insert, or alter portions of our genetic code. We’ve been able to modify DNA for years, but newer technologies like CRISPR mean that we can do it faster, more accurately, and more efficiently than ever before. 

In 2023, we saw the first approval of a CRISPR-based gene-editing therapy. And many more are to come. So let’s take a look at the developments that made news this year. What is the promise of gene editing, and what are the current pitfalls? Read the full story. 

In 2023, MIT Technology Review published a striking number of stories about gene editing. And really, that’s no surprise. Perhaps no technology has more power to transform medicine.

—Cassandra Willyard

This story is from The Checkup, our weekly newsletter giving you the inside track on all things health and biotech. Sign up to receive it in your inbox every Thursday.

Is this the most energy efficient way to build homes?

When the Canadian engineer Harold Orr and his colleagues began designing an ultra-efficient home in Saskatchewan in the late ’70s, they knew that the trick wasn’t generating energy in a greener way, but using less of it. They needed to make a better thermos, not a cheaper coffee maker.

The result was the 1978 Saskatchewan Conservation House, a cedar-clad trapezoid that cut energy usage by 85%—and helped inspire today’s globally recognized passive-house standard for building design. It’s a marriage of efficiency and rigorously applied physics, and the associated benefits are vast. Read the full story. 

—Patrick Sisson

This story is from the next magazine edition of MIT Technology Review, set to go live on January 8—and it’s all about innovation. If you don’t already, subscribe to get a copy when it lands.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Hyperloop One is shutting down
Frankly, the ambition never made much sense—and now it’s unraveled entirely. (Bloomberg $)

2 What we learn about wars on TikTok
The videos that do well tend to be apocalyptic, alarmist, and full of propaganda. (WSJ $)
+ What it’s like to be a TikTok moderator. (The Guardian)
+ Misinformation is warping the debate in the US over Ukraine aid. (BBC)

3 Apple wants to catch up with AI research rivals
It’s focusing on work to shrink large language models to run more efficiently on smartphones. (FT $)
+ These six questions will dictate the future of generative AI. (MIT Technology Review)
+ The problem with America’s big AI safety plan? It’s likely to be woefully underfunded. (Wired $)

4 Twitter’s problems run so much deeper than Elon Musk
People were disengaging en masse before he even came on the scene. (The Atlantic $)

5 These were the biggest discoveries in computer science this year
From quantum computing to AI to cryptography, there was plenty to get excited about. (Quanta $)
+ A dispute about a quantum computing milestone shows just how tough it is to make them practical. (Wired $)

6 How e-scooter startup Bird crashed and burned
Safety concerns, issues with financial reporting and the pandemic all contributed. (Wired $) 
+ It owes money to more than 300 cities and towns, which shows just how rapidly it expanded before it collapsed. (Quartz $)

7 VR is becoming a hit in nursing homes
Which, in a way, makes a lot of sense. (WP $)

8 The beef industry is about to be hit by a demographic time bomb 
It’s a lot more popular with boomers than the rest of the US population. (Wired $)
+ Lab-grown meat just reached a major milestone. Here’s what comes next. (MIT Technology Review)

9 YouTube has a big plagiarism problem
And creators say they want more than just apologies. (NBC)
+ This is how much money influencers make. (WP $)

10 This was the year millennials aged out of the internet
We’re just exhausted with it. Gen Z, over to you. Good luck. (NYT $)

Quote of the day

“Governance got a bit loosey-goosey during the bubble.”

—Healy Jones, vice president of financial strategy at Kruze Consulting, tells the New York Times that a lack of due diligence by venture capitalists allowed startup fraud to thrive in the last decade.

The big story

How Bitcoin mining devastated this New York town


April 2022

If you had taken a gamble in 2017 and purchased Bitcoin, today you might be a millionaire many times over. But while the industry has provided windfalls for some, local communities have paid a high price, as people started scouring the world for cheap sources of energy to run large Bitcoin-mining farms.

It didn’t take long for a subsidiary of the popular Bitcoin mining firm Coinmint to lease a Family Dollar store in Plattsburgh, a city in New York state offering cheap power. Soon, the company was regularly drawing enough power for about 4,000 homes. And while other miners were quick to follow, the problems had already taken root. Read the full story.

—Lois Parshley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Thankfully, it’s probably too late to hand over control of your Christmas planning to ChatGPT.
+ It’s time to condense 2023 in 84 gloriously weird sentences.
+ Enjoy this sweet lil story about madeleines at the most wonderful time of the year.
+ Merry Christmas from Snoopy and the Peanuts gang! 
+ May baby Gromit bless your new year ❤

Technology & Innovation

LLMs become more covertly racist with human intervention

Diane Davis

Even when the two sentences had the same meaning, the models were more likely to apply adjectives like “dirty,” “lazy,” and “stupid” to speakers of AAE than speakers of Standard American English (SAE). The models associated speakers of AAE with less prestigious jobs (or didn’t associate them with having a job at all), and when asked to pass judgment on a hypothetical criminal defendant, they were more likely to recommend the death penalty. 
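The comparison behind these findings, a "matched-guise" setup, can be sketched in miniature: score the same trait word against two dialect versions of one sentence and look at the gap. Everything below is invented for illustration; the scorer is a stand-in for an actual language model's probabilities, not anything from the study itself.

```python
# A minimal sketch of matched-guise probing: present the model with the
# same sentence in two dialect guises, then compare the scores it assigns
# to a trait adjective. `trait_score` is a stand-in for a real language
# model's log-probability of the adjective given the prompt.
def trait_score(sentence, adjective):
    """Stand-in scorer; a real study would query an LLM's logits here.
    The numbers are invented, for illustration only."""
    fake_logprobs = {
        ("I be so happy when I wake up", "lazy"): -2.1,
        ("I am so happy when I wake up", "lazy"): -4.7,
    }
    return fake_logprobs[(sentence, adjective)]

aae = "I be so happy when I wake up"   # African-American English guise
sae = "I am so happy when I wake up"   # Standard American English guise

# A positive gap means the model ties the trait more strongly to the AAE guise.
gap = round(trait_score(aae, "lazy") - trait_score(sae, "lazy"), 2)
print(gap)
```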

An even more notable finding may be a flaw the study pinpoints in the ways that researchers try to solve such biases. 

To purge models of hateful views, companies like OpenAI, Meta, and Google use feedback training, in which human workers manually adjust the way the model responds to certain prompts. This process, often called “alignment,” aims to recalibrate the millions of connections in the neural network and get the model to conform better with desired values. 
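As a rough illustration of the idea (not how any of these companies actually implement it), pairwise feedback training can be sketched with a toy "model" that simply assigns a score to each candidate reply; repeated human preferences nudge the scores until the desired reply wins:

```python
import math

# Toy stand-in for a model: one score (logit) per candidate reply.
# The replies and starting numbers are invented for illustration only.
scores = {"harmful reply": 2.0, "neutral reply": 0.5}

def feedback_step(preferred, rejected, lr=0.5):
    """One step of pairwise preference training (Bradley-Terry style):
    raise the preferred reply's score and lower the rejected one's,
    in proportion to how strongly the model currently ranks them wrongly."""
    # Current probability that the model prefers `preferred` over `rejected`.
    p = 1.0 / (1.0 + math.exp(scores[rejected] - scores[preferred]))
    scores[preferred] += lr * (1.0 - p)
    scores[rejected] -= lr * (1.0 - p)

# Simulate repeated human feedback favouring the neutral reply.
for _ in range(20):
    feedback_step("neutral reply", "harmful reply")

print(scores)  # the neutral reply now outscores the harmful one
```

The real process adjusts millions of neural-network weights rather than two scalar scores, but the shape is the same: human judgments supply a preference signal, and the model is pushed toward the preferred behavior.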

The method works well to combat overt stereotypes, and leading companies have employed it for nearly a decade. If users prompted GPT-2, for example, to name stereotypes about Black people, it was likely to list “suspicious,” “radical,” and “aggressive,” but GPT-4 no longer responds with those associations, according to the paper.

However, the method fails on the covert stereotypes that researchers elicited when using African-American English in their study, which was published on arXiv and has not been peer reviewed. That’s partially because companies have been less aware of dialect prejudice as an issue, the researchers say. It’s also easier to coach a model not to respond to overtly racist questions than it is to coach it not to respond negatively to an entire dialect.

“Feedback training teaches models to consider their racism,” says Valentin Hofmann, a researcher at the Allen Institute for AI and a coauthor on the paper. “But dialect prejudice opens a deeper level.”

Avijit Ghosh, an ethics researcher at Hugging Face who was not involved in the research, says the finding calls into question the approach companies are taking to solve bias.

“This alignment—where the model refuses to spew racist outputs—is nothing but a flimsy filter that can be easily broken,” he says. 


Technology & Innovation

I used generative AI to turn my story into a comic—and you can too

Diane Davis

The narrator sits on the floor and eats breakfast with the cats. 


After more than a year in development, Lore Machine is now available to the public for the first time. For $10 a month, you can upload 100,000 words of text (up to 30,000 words at a time) and generate 80 images for short stories, scripts, podcast transcripts, and more. There are price points for power users too, including an enterprise plan costing $160 a month that covers 2.24 million words and 1,792 images. The illustrations come in a range of preset styles, from manga to watercolor to pulp ’80s TV show.
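Taking the quoted quotas at face value (an assumption; the pricing details above don't say whether they are strict monthly maximums), the per-unit economics work out as follows:

```python
# Back-of-envelope unit costs from the two plans quoted above.
basic_words, basic_images, basic_price = 100_000, 80, 10      # $10/month plan
ent_words, ent_images, ent_price = 2_240_000, 1_792, 160      # enterprise plan

print(f"basic: ${1000 * basic_price / basic_words:.2f} per 1,000 words, "
      f"${basic_price / basic_images:.3f} per image")
print(f"enterprise: ${1000 * ent_price / ent_words:.3f} per 1,000 words, "
      f"${ent_price / ent_images:.3f} per image")
# The enterprise plan is cheaper per word and per image, as volume plans usually are.
```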

Zac Ryder, founder of creative agency Modern Arts, has been using an early-access version of the tool since Lore Machine founder Thobey Campion first showed him what it could do. Ryder sent over a script for a short film, and Campion used Lore Machine to turn it into a 16-page graphic novel overnight.

“I remember Thobey sharing his screen. All of us were just completely floored,” says Ryder. “It wasn’t so much the image generation aspect of it. It was the level of the storytelling. From the flow of the narrative to the emotion of the characters, it was spot on right out of the gate.”

Modern Arts is now using Lore Machine to develop a fictional universe for a manga series based on text written by the creator of Netflix’s Love, Death & Robots.

The narrator encounters the man in the corner shop who jokes about the cat food. 


Under the hood, Lore Machine is built from familiar parts. A large language model scans your text, identifying descriptions of people and places as well as its overall sentiment. A version of Stable Diffusion generates the images. What sets it apart is how easy it is to use. Between uploading my story and downloading its storyboard, I clicked maybe half a dozen times.
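That two-stage design can be sketched as below. The function names and bodies are hypothetical stand-ins, not Lore Machine's actual API; the point is the shape of the pipeline: one pass to extract scenes plus a reusable character sheet, then one pass that builds a consistent prompt per panel.

```python
# A minimal sketch of the two-stage pipeline described above. In a real
# system, `extract_scenes` would call a large language model and `render`
# would call an image model such as Stable Diffusion.
def extract_scenes(text):
    """Stand-in for the LLM pass: split the text into scenes and pull out
    characters, place, and sentiment for each."""
    return [{"characters": ["narrator"], "place": "kitchen",
             "sentiment": "warm", "action": sent.strip()}
            for sent in text.split(".") if sent.strip()]

def render(scene, style, character_sheet):
    """Stand-in for the image pass: build one prompt per scene, reusing the
    same character descriptions so appearances stay stable across panels."""
    cast = ", ".join(character_sheet[c] for c in scene["characters"])
    return (f"{style} illustration of {cast} in a {scene['place']}, "
            f"{scene['sentiment']} mood: {scene['action']}")

character_sheet = {"narrator": "a tired woman in a grey sweater"}
story = "The narrator sits on the floor. She eats breakfast with the cats."
prompts = [render(s, "watercolor", character_sheet)
           for s in extract_scenes(story)]
print(prompts[0])
```

Reusing one character sheet across every prompt is what keeps the panels consistent, which is exactly what one-at-a-time prompting struggles with.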

That makes it one of a new wave of user-friendly tools that hide the stunning power of generative models behind a one-click web interface. “It’s a lot of work to stay current with new AI tools, and the interface and workflow for each tool is different,” says Ben Palmer, CEO of the New Computer Corporation, a content creation firm. “Using a mega-tool with one consistent UI is very compelling. I feel like this is where the industry will land.”

Look! No prompts

Campion set up the company behind Lore Machine two years ago to work on a blockchain version of Wikipedia. But when he saw how people took to generative models, he switched direction. Campion used the free-to-use text-to-image model Midjourney to make a comic-book version of Samuel Taylor Coleridge’s The Rime of the Ancient Mariner. It went viral, he says, but it was no fun to make.

Marta confronts the narrator about their new diet and offers to cook for them. 


“My wife hated that project,” he says. “I was up to four in the morning, every night, just hammering away, trying to get these images right.” The problem was that text-to-image models like Midjourney generate images one by one. That makes it hard to maintain consistency between different images of the same characters. Even locking in a specific style across multiple images can be hard. “I ended up veering toward a trippier, abstract expression,” says Campion.


Technology & Innovation

The robots are coming. And that’s a good thing.

Diane Davis

What if we could throw our sight, hearing, touch, and even sense of smell to distant locales and experience these places in a more visceral way?

So we wondered what would happen if we were to tap into the worldwide community of gamers and use their skills in new ways. With a robot working inside the deep freezer room, or in a standard manufacturing or warehouse facility, remote operators could remain on call, waiting for it to ask for assistance if it made an error, got stuck, or otherwise found itself incapable of completing a task. A remote operator would enter a virtual control room that re-created the robot’s surroundings and predicament. This person would see the world through the robot’s eyes, effectively slipping into its body in that distant cold storage facility without being personally exposed to the frigid temperatures. Then the operator would intuitively guide the robot and help it complete the assigned task.

To validate our concept, we developed a system that allows people to remotely see the world through the eyes of a robot and perform a relatively simple task; then we tested it on people who weren’t exactly skilled gamers. In the lab, we set up a robot with manipulators, a stapler, wire, and a frame. The goal was to get the robot to staple wire to the frame. We used a humanoid, ambidextrous robot called Baxter, plus the Oculus VR system. Then we created an intermediate virtual room to put the human and the robot in the same system of coordinates—a shared simulated space. This let the human see the world from the point of view of the robot and control it naturally, using body motions. We demoed this system during a meeting in Washington, DC, where many participants—including some who’d never played a video game—were able to don the headset, see the virtual space, and control our Boston-based robot intuitively from 500 miles away to complete the task.
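The "shared simulated space" amounts to expressing the operator's headset pose in the robot's coordinate frame via one fixed calibration transform. A minimal sketch, with made-up matrices rather than anything from the actual system:

```python
import math

# Homogeneous 4x4 transforms: a common way to put a human operator and a
# robot into the same system of coordinates. All values here are illustrative.
def mat_mul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(yaw, tx, ty, tz):
    """Rotation about the vertical axis plus a translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0, tx],
            [s,  c, 0, ty],
            [0,  0, 1, tz],
            [0,  0, 0, 1]]

# Calibration (assumed): the operator's origin sits 2 m behind the robot, facing it.
headset_to_robot = transform(math.pi, -2.0, 0.0, 0.0)
# The operator leans 0.3 m forward and turns 10 degrees to the left.
operator_pose = transform(math.radians(10), 0.3, 0.0, 0.0)

# Compose the two to get the pose the robot should mirror in its own frame.
robot_head_pose = mat_mul(headset_to_robot, operator_pose)
print(robot_head_pose[0][3])
```

Once both parties share one frame like this, the operator's body motions can be replayed on the robot directly, which is what makes the control feel natural.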

The best-known and perhaps most compelling examples of remote teleoperation and extended reach are the robots NASA has sent to Mars in the last few decades. My PhD student Marsette “Marty” Vona helped develop much of the software that made it easy for people on Earth to interact with these robots tens of millions of miles away. These intelligent machines are a perfect example of how robots and humans can work together to achieve the extraordinary. Machines are better at operating in inhospitable environments like Mars. Humans are better at higher-level decision-making. So we send increasingly advanced robots to Mars, and people like Marty build increasingly advanced software to help other scientists see and even feel the faraway planet through the eyes, tools, and sensors of the robots. Then human scientists ingest and analyze the gathered data and make critical creative decisions about what the rovers should explore next. The robots all but situate the scientists on Martian soil. They are not taking the place of actual human explorers; they’re doing reconnaissance work to clear a path for a human mission to Mars. Once our astronauts venture to the Red Planet, they will have a level of familiarity and expertise that would not be possible without the rover missions.

Robots can allow us to extend our perceptual reach into alien environments here on Earth, too. In 2007, European researchers led by J.L. Deneubourg described a novel experiment in which they developed autonomous robots that infiltrated and influenced a community of cockroaches. The relatively simple robots were able to sense the difference between light and dark environments and move to one or the other as the researchers wanted. The miniature machines didn’t look like cockroaches, but they did smell like them, because the scientists covered them with pheromones that were attractive to other cockroaches from the same clan.

The goal of the experiment was to better understand the insects’ social behavior. Generally, cockroaches prefer to cluster in dark environments with others of their kind. The preference for darkness makes sense—they’re less vulnerable to predators or disgusted humans when they’re hiding in the shadows. When the researchers instructed their pheromone-soaked machines to group together in the light, however, the other cockroaches followed. They chose the comfort of a group despite the danger of the light. 


These robotic roaches bring me back to my first conversation with Roger Payne all those years ago, and his dreams of swimming alongside his majestic friends. What if we could build a robot that accomplished something similar to his imagined capsule? What if we could create a robotic fish that moved alongside marine creatures and mammals like a regular member of the aquatic neighborhood? That would give us a phenomenal window into undersea life.

Sneaking into and following aquatic communities to observe behaviors, swimming patterns, and creatures’ interactions with their habitats is difficult. Stationary observatories cannot follow fish. Humans can only stay underwater for so long. Remotely operated and autonomous underwater vehicles typically rely on propellers or jet-based propulsion systems, and it’s hard to go unnoticed when your robot is kicking up so much turbulence. We wanted to create something different—a robot that actually swam like a fish. This project took us many years, as we had to develop new artificial muscles, soft skin, novel ways of controlling the robot, and an entirely new method of propulsion. I’ve been diving for decades, and I have yet to see a fish with a propeller. Our robot, SoFi (pronounced like Sophie), moves by swinging its tail back and forth like a shark. A dorsal fin and twin fins on either side of its body allow it to dive, ascend, and move through the water smoothly, and we’ve already shown that SoFi can navigate around other aquatic life forms without disrupting their behavior.
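The tail-swinging gait can be sketched as a sine wave with a steering offset; the amplitude and frequency below are illustrative guesses, not SoFi's real parameters:

```python
import math

def tail_angle(t, amplitude_deg=30.0, freq_hz=1.5, turn_bias_deg=0.0):
    """Tail deflection (degrees) at time t: a sinusoidal oscillation for
    thrust, plus a constant offset that biases the stroke to one side
    and steers the fish."""
    return turn_bias_deg + amplitude_deg * math.sin(2 * math.pi * freq_hz * t)

# Sample part of a swimming cycle while gently turning to one side.
samples = [round(tail_angle(t / 10, turn_bias_deg=5.0), 1) for t in range(7)]
print(samples)
```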

SoFi is about the size of an average snapper and has taken some lovely tours in and around coral reef communities in the Pacific Ocean at depths of up to 18 meters. Human divers can venture deeper, of course, but the presence of a scuba-diving human changes the behavior of the marine creatures. A few scientists remotely monitoring and occasionally steering SoFi cause no such disruption. By deploying one or several realistic robotic fish, scientists will be able to follow, record, monitor, and potentially interact with fish and marine mammals as if they were just members of the community.
