Technology & Innovation

The Download: tracking animals, and biotech plants


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How tracking animal movement may save the planet

Animals have long been able to offer unique insights about the natural world around us, acting as organic sensors picking up phenomena invisible to humans. Canaries warned of looming catastrophe in coal mines until the 1980s, for example.

These days, we have more insight into animal behavior than ever before thanks to technologies like sensor tags. But the data we gather from these animals still adds up to only a relatively narrow slice of the whole picture. Results are often confined to silos, and for many years tags were big and expensive, suitable only for a handful of animal species.

This is beginning to change. Researchers are asking: What will we find if we follow even the smallest animals? What if we could see how different species’ lives intersect? What could we learn from a system of animal movement, continuously monitoring how creatures big and small adapt to the world around us? It may be, some researchers believe, a vital tool in the effort to save our increasingly crisis-plagued planet. Read the full story.

—Matthew Ponsford 

This story is from the upcoming print issue of MIT Technology Review, dedicated to exploring hidden worlds. Buy a subscription to get your hands on a copy when it publishes on February 28th! Deals start at just $8 a month.

These are the biotech plants you can buy now

—Antonio Regalado

This spring I am looking forward to growing some biotech in my backyard for the first time. It’s possible because of startups that have started selling genetically engineered plants directly to consumers, including a bright-purple tomato and a petunia that glows in the dark.

This week, for $73, I ordered both by pressing a few buttons online.

Biotech seeds have been a huge business for a while. In fact, by sheer mass, GMOs are probably the single most significant product of genetic engineering ever. But the difference now is that people are able to plant and grow GMO houseplants in their homes. Read the full story. 

Watch this robot as it learns to stitch up wounds

The news: A new AI-trained surgical robot can make stitches on its own. A video taken by researchers at the University of California, Berkeley, shows the two-armed robot completing stitches in a row on a simple wound in imitation skin. It managed to make six stitches before a human had to intervene. 

Why it matters: It’s common for surgeons today to get help from robots, but we’re a long way from robots that can take over many tasks entirely. This new research marks progress toward robots that can operate more autonomously on very intricate, complicated tasks. Read the full story. 

—James O’Donnell

Three frequently asked questions about EVs, answered

Transportation is a critical part of the climate change puzzle: it accounts for something like a quarter of global emissions. And in many parts of the world, the vehicles we use to shuttle to work, school, and the grocery store are a huge piece of the problem.

Last week, MIT Technology Review hosted an event where we dug into the future of batteries and the materials that go into them. We got so many great questions, and we answered quite a few of them (subscribers should check out the recording of the full event). 

But there were still a lot of questions, particularly about EVs, that we didn’t get to. So let’s take a look at a few of those. 

—Casey Crownhart

This story is from The Spark, our weekly newsletter all about the technology that could combat the climate crisis. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The first US moon landing in over 50 years is due today
If all goes to plan, Intuitive Machines’ Odysseus spacecraft will touch down at 5:30pm ET. (WP $)
+ Here’s how you can watch it. (NYT $)

2 ChatGPT had a meltdown yesterday
Which is not necessarily worrying in itself… but it isn’t great that we have no idea why. (Ars Technica)
+ Gab’s racist chatbots have been trained to deny the Holocaust. (Wired $)
+ Soon, we might be using AI to do all sorts of tasks for us. (NPR)

3 You can buy Vision Pro headsets in Russia 
Two years after Apple quit the country. (NBC)

4 Google is racing to fix a new “overly woke” AI-powered tool 
It was returning images of women and people of color when asked to depict America’s founding fathers, for example. (BBC)
+ It’s pausing Gemini’s ability to generate images until the issue is fixed. (The Verge)
+ These new tools let you see for yourself how biased AI image models are. (MIT Technology Review) 
+ How it feels to be sexually objectified by an AI. (MIT Technology Review)

5 American winters are getting warmer
They’re also getting shorter, and less predictable. (Insider $)

6 Instagram is a news site, whether it likes it or not
And that means it has a responsibility to do content moderation properly. (NYT $)

7 Inside the weird world of Instacart’s AI-generated recipes
It’s becoming harder and harder to work out what’s been made by a human versus a machine. (404 Media)
+ Why Big Tech’s watermarking plans are some welcome good news. (MIT Technology Review)

8 We need protection from companies building tech to read our minds
It’s not such a concerning issue right now, but it could become one sooner than you think. (Vox)
+ How your brain data could be used against you. (MIT Technology Review)

9 Why AM radio lingers on 
A surprisingly diverse group of people still rely on it, even as it heads towards obsolescence. (The Atlantic $)

10 Writing by hand has a positive impact on memory and learning ✍
I knew it! (Scientific American $)

Quote of the day

“Happy listening! Happy listening! Happy listening! Happy listening!”

—An example of how ChatGPT went off the rails yesterday, screenshotted and shared by a user on X. 

The big story

Inside the app Minnesota police used to collect data on journalists at protests


MS TECH

March 2022

Photojournalist J.D. Duggan was covering a protest in Minnesota in April 2021 when police officers surrounded him and others, and told them to get on the ground.

Officers sorted the press from the protesters, walked them to a parking lot, and began photographing them one by one with cellphones, taking photos that they told Duggan would be stored in an app. 

An investigation by MIT Technology Review found the data was collected using a tool called Intrepid Response, which offers an easy way to almost instantly de-anonymize protest attendees and keep tabs on their movements. For some, the tool’s use is a dangerous step in the direction of authoritarianism. Read the full story.

—Sam Richards & Tate Ryan-Mosley

+ Fascinated by the stories in this grisly interactive map, which details murders committed in medieval London, York and Oxford. 
+ Terrible night’s sleep last night? Fear not, it’s possible to salvage your day. (NYT $) 
+ This athletic fluffy cat is bound to bring a smile to your face.
+ Some simple ways to make your diet healthier.


Technology & Innovation

LLMs become more covertly racist with human intervention


Even when two sentences had the same meaning, one written in African-American English (AAE) and the other in Standard American English (SAE), the models were more likely to apply adjectives like “dirty,” “lazy,” and “stupid” to speakers of AAE than to speakers of SAE. The models associated speakers of AAE with less prestigious jobs (or didn’t associate them with having a job at all), and when asked to pass judgment on a hypothetical criminal defendant, they were more likely to recommend the death penalty. 

An even more notable finding may be a flaw the study pinpoints in the ways that researchers try to solve such biases. 

To purge models of hateful views, companies like OpenAI, Meta, and Google use feedback training, in which human workers manually adjust the way the model responds to certain prompts. This process, often called “alignment,” aims to recalibrate the millions of connections in the neural network and get the model to conform better with desired values. 

The method works well to combat overt stereotypes, and leading companies have employed it for nearly a decade. If users prompted GPT-2, for example, to name stereotypes about Black people, it was likely to list “suspicious,” “radical,” and “aggressive,” but GPT-4 no longer responds with those associations, according to the paper.

However, the method fails on the covert stereotypes that researchers elicited when they used African-American English in their study, which was published on arXiv and has not been peer-reviewed. That’s partially because companies have been less aware of dialect prejudice as an issue, they say. It’s also easier to coach a model not to respond to overtly racist questions than it is to coach it not to respond negatively to an entire dialect.

“Feedback training teaches models to consider their racism,” says Valentin Hofmann, a researcher at the Allen Institute for AI and a coauthor on the paper. “But dialect prejudice opens a deeper level.”

Avijit Ghosh, an ethics researcher at Hugging Face who was not involved in the research, says the finding calls into question the approach companies are taking to solve bias.

“This alignment—where the model refuses to spew racist outputs—is nothing but a flimsy filter that can be easily broken,” he says. 


Technology & Innovation

I used generative AI to turn my story into a comic—and you can too


The narrator sits on the floor and eats breakfast with the cats. 

LORE MACHINE / WILL DOUGLAS HEAVEN

After more than a year in development, Lore Machine is now available to the public for the first time. For $10 a month, you can upload 100,000 words of text (up to 30,000 words at a time) and generate 80 images for short stories, scripts, podcast transcripts, and more. There are price points for power users too, including an enterprise plan costing $160 a month that covers 2.24 million words and 1,792 images. The illustrations come in a range of preset styles, from manga to watercolor to pulp ’80s TV show.

Zac Ryder, founder of creative agency Modern Arts, has been using an early-access version of the tool since Lore Machine founder Thobey Campion first showed him what it could do. Ryder sent over a script for a short film, and Campion used Lore Machine to turn it into a 16-page graphic novel overnight.

“I remember Thobey sharing his screen. All of us were just completely floored,” says Ryder. “It wasn’t so much the image generation aspect of it. It was the level of the storytelling. From the flow of the narrative to the emotion of the characters, it was spot on right out of the gate.”

Modern Arts is now using Lore Machine to develop a fictional universe for a manga series based on text written by the creator of Netflix’s Love, Death & Robots.

The narrator encounters the man in the corner shop who jokes about the cat food. 

LORE MACHINE / WILL DOUGLAS HEAVEN

Under the hood, Lore Machine is built from familiar parts. A large language model scans your text, identifying descriptions of people and places as well as its overall sentiment. A version of Stable Diffusion generates the images. What sets it apart is how easy it is to use. Between uploading my story and downloading its storyboard, I clicked maybe half a dozen times.

That makes it one of a new wave of user-friendly tools that hide the stunning power of generative models behind a one-click web interface. “It’s a lot of work to stay current with new AI tools, and the interface and workflow for each tool is different,” says Ben Palmer, CEO of the New Computer Corporation, a content creation firm. “Using a mega-tool with one consistent UI is very compelling. I feel like this is where the industry will land.”
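To make that “familiar parts” description concrete, here is a minimal illustrative sketch, not Lore Machine’s actual code: a text pass pulls scene descriptions out of a story, and an off-the-shelf Stable Diffusion model renders one image per scene. The model name, the naive paragraph-based scene splitter, and the prompt format are all assumptions made for this example.

```python
# Illustrative sketch of a text-to-storyboard pipeline (not Lore Machine's code).
from diffusers import StableDiffusionPipeline
import torch

def extract_scenes(story_text: str) -> list[str]:
    # Hypothetical stand-in for the LLM step: a real pipeline would extract
    # characters, places, and sentiment. Here each paragraph becomes a "scene".
    return [p.strip() for p in story_text.split("\n\n") if p.strip()]

def render_storyboard(story_text: str, style: str = "watercolor"):
    # Load a publicly available Stable Diffusion checkpoint (assumes a GPU).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    images = []
    for scene in extract_scenes(story_text):
        prompt = f"{style} illustration of: {scene}"
        images.append(pipe(prompt).images[0])  # one panel per scene
    return images
```

The point of the sketch is the division of labor the article describes: the language-understanding step decides what each panel should show, and the image model only ever sees a short, pre-digested prompt.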

Look! No prompts

Campion set up the company behind Lore Machine two years ago to work on a blockchain version of Wikipedia. But when he saw how people took to generative models, he switched direction. Campion used the text-to-image model Midjourney to make a comic-book version of Samuel Taylor Coleridge’s The Rime of the Ancient Mariner. It went viral, he says, but it was no fun to make.

Marta confronts the narrator about their new diet and offers to cook for them. 

LORE MACHINE / WILL DOUGLAS HEAVEN

“My wife hated that project,” he says. “I was up to four in the morning, every night, just hammering away, trying to get these images right.” The problem was that text-to-image models like Midjourney generate images one by one. That makes it hard to maintain consistency between different images of the same characters. Even locking in a specific style across multiple images can be hard. “I ended up veering toward a trippier, abstract expression,” says Campion.


Technology & Innovation

The robots are coming. And that’s a good thing.


What if we could throw our sight, hearing, touch, and even sense of smell to distant locales and experience these places in a more visceral way?

So we wondered what would happen if we were to tap into the worldwide community of gamers and use their skills in new ways. With a robot working inside the deep freezer room, or in a standard manufacturing or warehouse facility, remote operators could remain on call, waiting for it to ask for assistance if it made an error, got stuck, or otherwise found itself incapable of completing a task. A remote operator would enter a virtual control room that re-created the robot’s surroundings and predicament. This person would see the world through the robot’s eyes, effectively slipping into its body in that distant cold storage facility without being personally exposed to the frigid temperatures. Then the operator would intuitively guide the robot and help it complete the assigned task.

To validate our concept, we developed a system that allows people to remotely see the world through the eyes of a robot and perform a relatively simple task; then we tested it on people who weren’t exactly skilled gamers. In the lab, we set up a robot with manipulators, a stapler, wire, and a frame. The goal was to get the robot to staple wire to the frame. We used a humanoid, ambidextrous robot called Baxter, plus the Oculus VR system. Then we created an intermediate virtual room to put the human and the robot in the same system of coordinates—a shared simulated space. This let the human see the world from the point of view of the robot and control it naturally, using body motions. We demoed this system during a meeting in Washington, DC, where many participants—including some who’d never played a video game—were able to don the headset, see the virtual space, and control our Boston-based robot intuitively from 500 miles away to complete the task.
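To make the shared-coordinate-frame idea concrete, here is a minimal illustrative sketch (not the actual Baxter/Oculus code): a controller pose measured in the VR system’s frame is re-expressed in the robot’s base frame through a single calibration transform, so the operator’s hand motion maps directly onto end-effector targets. All names and numbers below are assumptions for the example.

```python
# Illustrative sketch of frame-to-frame teleoperation mapping (not the authors' code).
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Calibration: where the VR origin sits in the robot's base frame
# (identity rotation, offset 1 m in front of the robot, chosen for the example).
T_robot_from_vr = make_transform(np.eye(3), np.array([1.0, 0.0, 0.0]))

def controller_to_robot_target(T_vr_from_controller: np.ndarray) -> np.ndarray:
    """Re-express a controller pose (VR frame) as an end-effector target
    in the robot base frame by chaining the calibration transform."""
    return T_robot_from_vr @ T_vr_from_controller

# Example: the operator's hand 0.3 m forward and 0.2 m up in the VR frame.
hand_pose = make_transform(np.eye(3), np.array([0.3, 0.0, 0.2]))
target = controller_to_robot_target(hand_pose)
print(target[:3, 3])  # -> [1.3, 0.0, 0.2] in the robot's frame
```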


The best-known and perhaps most compelling examples of remote teleoperation and extended reach are the robots NASA has sent to Mars in the last few decades. My PhD student Marsette “Marty” Vona helped develop much of the software that made it easy for people on Earth to interact with these robots tens of millions of miles away. These intelligent machines are a perfect example of how robots and humans can work together to achieve the extraordinary. Machines are better at operating in inhospitable environments like Mars. Humans are better at higher-level decision-making. So we send increasingly advanced robots to Mars, and people like Marty build increasingly advanced software to help other scientists see and even feel the faraway planet through the eyes, tools, and sensors of the robots. Then human scientists ingest and analyze the gathered data and make critical creative decisions about what the rovers should explore next. The robots all but situate the scientists on Martian soil. They are not taking the place of actual human explorers; they’re doing reconnaissance work to clear a path for a human mission to Mars. Once our astronauts venture to the Red Planet, they will have a level of familiarity and expertise that would not be possible without the rover missions.

Robots can allow us to extend our perceptual reach into alien environments here on Earth, too. In 2007, European researchers led by J.L. Deneubourg described a novel experiment in which they developed autonomous robots that infiltrated and influenced a community of cockroaches. The relatively simple robots were able to sense the difference between light and dark environments and move to one or the other as the researchers wanted. The miniature machines didn’t look like cockroaches, but they did smell like them, because the scientists covered them with pheromones that were attractive to other cockroaches from the same clan.

The goal of the experiment was to better understand the insects’ social behavior. Generally, cockroaches prefer to cluster in dark environments with others of their kind. The preference for darkness makes sense—they’re less vulnerable to predators or disgusted humans when they’re hiding in the shadows. When the researchers instructed their pheromone-soaked machines to group together in the light, however, the other cockroaches followed. They chose the comfort of a group despite the danger of the light. 

JACK SNELLING

These robotic roaches bring me back to my first conversation with Roger Payne all those years ago, and his dreams of swimming alongside his majestic friends. What if we could build a robot that accomplished something similar to his imagined capsule? What if we could create a robotic fish that moved alongside marine creatures and mammals like a regular member of the aquatic neighborhood? That would give us a phenomenal window into undersea life.

Sneaking into and following aquatic communities to observe behaviors, swimming patterns, and creatures’ interactions with their habitats is difficult. Stationary observatories cannot follow fish. Humans can only stay underwater for so long. Remotely operated and autonomous underwater vehicles typically rely on propellers or jet-based propulsion systems, and it’s hard to go unnoticed when your robot is kicking up so much turbulence. We wanted to create something different—a robot that actually swam like a fish. This project took us many years, as we had to develop new artificial muscles, soft skin, novel ways of controlling the robot, and an entirely new method of propulsion. I’ve been diving for decades, and I have yet to see a fish with a propeller. Our robot, SoFi (pronounced like Sophie), moves by swinging its tail back and forth like a shark. A dorsal fin and twin fins on either side of its body allow it to dive, ascend, and move through the water smoothly, and we’ve already shown that SoFi can navigate around other aquatic life forms without disrupting their behavior.
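As a rough illustration of the propulsion idea (not SoFi’s real controller), a tail that swings back and forth can be commanded with a simple biased sine wave: amplitude sets how far the tail sweeps, frequency sets the beat rate, and an offset to one side produces a turn. The values below are assumptions for the example.

```python
# Illustrative sketch of a fish-like tail oscillation command (not SoFi's controller).
import math

def tail_angle(t: float, amplitude_rad: float = 0.5,
               frequency_hz: float = 1.2, turn_bias_rad: float = 0.0) -> float:
    """Tail joint angle (radians) at time t: a sine wave biased toward one side."""
    return turn_bias_rad + amplitude_rad * math.sin(2 * math.pi * frequency_hz * t)

# Example: sample one second of a gentle right turn at a 50 Hz control rate.
commands = [tail_angle(i / 50, turn_bias_rad=0.1) for i in range(50)]
```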

SoFi is about the size of an average snapper and has taken some lovely tours in and around coral reef communities in the Pacific Ocean at depths of up to 18 meters. Human divers can venture deeper, of course, but the presence of a scuba-diving human changes the behavior of the marine creatures. A few scientists remotely monitoring and occasionally steering SoFi cause no such disruption. By deploying one or several realistic robotic fish, scientists will be able to follow, record, monitor, and potentially interact with fish and marine mammals as if they were just members of the community.
