
Technology & Innovation

What Luddites can teach us about resisting an automated future

Diane Davis


A story in comic format. In this first panel, two figures in silhouette look out at a modern city skyline.  The text reads, "The future is here already. AI art has arrived. Simply write a prompt..."

A person's smiling headshot being uploaded.  The text reads, "Or allow a company unrestricted access to your likeness..."
The headshot from the previous panel with distorted features and a wavy new background. The text reads, "...And ta-da! Some absolutely serviceable hotel lobby art. But calling this 'AI art' uncritically buys into the AI hype machine."
Two people look at the blank space where the framed picture of a flower has been stolen by a giant robot hand. The text reads, "'Algorithmically generated art theft' might be more accurate."

Two panels. In the first, a scrabbly line resembling a signature. The text reads, "Machine-learning software is 'taught' by feeding it existing art without consent, credit, or compensation, to the extent that mangled signature remnants sometimes remain visible on AI art." In the second, the text reads, "Artists have begun to push back." over the image of protestors led by Matthew Butterick, co-counsel in a class action lawsuit brought by artists against AI art companies, who says, "Everybody who creates for a living should be in code red."
The text reads, "A common refrain from defenders of AI art has been to label these naysayers:"  above multiple speech bubbles saying "Luddite!" "Luddites!" and "Luddism!"
Text across the top continues, "...a term synonymous with technophobia, anti-progress, and reactionism. It's even used to describe being hapless with new tech." A group of people eating dinner together, and one person is saying, "Oh, I can't use TikTok! I'm such a Luddite!" The text below the image reads, "In truth, the Luddites were skilled with machines. They were simply fighting for better worker rights."

A man in a cravat holding a plume at his writing desk smiles, while behind him in shadow are two women at textile machines. The text reads, "In 1799, the British government passed legislation that prohibited trade unions and collective bargaining. Mill owners introduced more machines in their factories, reducing wages."
A man with his shirtsleeves rolled up, raising a sledgehammer. The text reads, "And so, in 1811, after years of frustrating negotiations, a spate of coordinated attacks on mill frameworks erupted across the United Kingdom, led by 'King Ned Ludd.'"

Sheets of paper flutter down to a gallows in the foreground as soldiers with bayonets raised corral a crowd of civilians behind. The text reads, "But then, 14,000 soldiers were sent to protect the mills and quash riots. New legislation made frame breaking a capital offense. Eventually, key Luddite organizers were identified, arrested, and executed."
Lord Byron, reciting from "Song for the Luddites," says with his eyes closed, "This is the dew, which the tree shall renew Of Liberty, planted by Ludd!" The text above reads, "Yet the movement lived on. Outbreaks of machine breaking continued for years in the UK and beyond."
A woman smashing something with a sledgehammer. The text reads, "In 1837, a spinning jenny in Chalabre, France was destroyed by workers. The women workers 'made themselves conspicuous by their fury and violence,' according to the local newspaper L'Aude."

The text reads, "In the 1870s, English textile designer, author, and socialist William Morris began taking an interest in the production process of his designs. He was disgusted by the poor living conditions of workers and the pollution caused by the textile industry." William Morris in profile saying, "Why does a reasonable man use a machine? Surely to save his labour." The text continues, "Under capitalism, machines were primarily used to increase production, thereby increasing the worker's drudgery."
The text reads, "People have been making similar arguments against AI art." A tweet from CatBastardQuinn, @QuinnCat13 reads, "We could automate menial jobs so people have time to make art and music but apparently we'd rather automate art and music so people have time for menial jobs."
An office worker at a laptop with face in hands and an Amazon worker in a pile of shipping boxes. The text reads, "Likewise, Morris was interested in what he called 'worthy work.' He wanted people to take pleasure in their work rather than 'mere toiling to live, that we may live to toil.'"
The text reads, "He understood that machines were only as progressive as the people who used them." Morris facing us from a field of app notifications, says "[An unfree man is a] slave to machinery; the new machine MUST be invented, and when invented he MUST - I will not say use it, but be used by it whether he likes it or not."
Frederick Taylor overseeing factory workers standing around machines with a stopwatch. The text reads, "In America, mechanical engineer Frederick W. Taylor began using what he called 'Scientific Management' in his factories. Taylor timed each worker's every movement, breaking down their work into a set of discrete tasks. He then demanded that workers speed up."
The text reads, "Taylorism, as it became known, was less a science than a political ideology concerned with remolding workers into pliant subjects." Taylor faces us to say, "A complete mental revolution on the part of the workingman...toward their work, toward their fellow men, and toward their employers."
The text reads, "But the spirit of Luddism lived on. Seemingly out of nowhere, there was a rash of mechanical breakdowns at Taylor's factories." An angry Taylor tells us, "These men were deliberately breaking their machines," while two men stand behind him shrugging and smiling slightly as a plume of smoke rises from an unseen area.

The cover of a book entitled "Sabotage: Its History, Philosophy & Function" by Walker C. Smith shows a black cat climbing atop a bag of money. The text reads, "Despite setbacks and intervention from Congress, Taylorism spread. The Industrial Workers of the World responded by publishing two tracts on the topic of sabotage in 1913. 'The aim is to hit the employer in his vital spot, his heart and soul, in other words, his pocketbook.'"
Hands reach to an assembly line where the words "Automation - as it was later dubbed by Delmar Harder, vice president of Ford Motor Company, in 1947 - continued unabated." repeat on each panel of the conveyor.

The text reads, "The legacy of the Luddites lived on in the latter half of the 20th century. Confined to the lowest-paying jobs, Black workers were the first to be targeted by the midcentury push for automation." Robert L. Allen, quoting from his book Black Awakening in Capitalist America (1969), says, "Not only is the economic situation of the masses of blacks grim, but the prospects are that it will not improve, rather it will deteriorate. This is due partly to the unregulated impact of automation."
The text reads, "The Black Panthers were also quick to recognize that technology and automation were not politically neutral." Huey Newton, cofounder of the Black Panther Party, says from a podium, "If the ruling circle remains in power it seems to me that capitalists will continue to develop their technological machinery because they are not interested in the people…Every worker is in jeopardy because of the ruling circle."
A group of Black Panthers standing together. The text reads, "The Black Panther Party's Ten Point Program was updated in 1972. 'People's community control of modern technology' was added to the demands for 'land, bread, housing, education, clothing, justice, peace.'"

A conveyor belt with a chart titled "Overrepresentation of African Americans in 3 occupation categories with the highest expected displacement, %". The statistics are office support (e.g., secretaries) at 36%, food services (e.g., fast-food cooks) at 35%, and production work (e.g., machinists) at 34%. The text reads, "A 2019 McKinsey Global Institute report found that 'African Americans are overrepresented in occupations likely to be the most affected by automation.' And the 2030 outlook doesn't look much better."
A crowd of students holding a sign that says "STRIKE" and a button that says, "I am a human being: do not fold, spindle, or mutilate" The text reads, "The introduction of the computer in the 1970s only exacerbated growing tensions. Punch cards used in universities, similar to cards used for drafting recruits for Vietnam, were seen as a symbol of bureaucracy and alienation. Students burned, vandalized, and otherwise destroyed punch cards for course registration."
Lisa Gitelman, author of Always Already New: Media, History and the Data of Culture (2008), says, "Many people simply stopped drawing distinctions between one card-enabled system and another. Whether the cards registered draftees or pupils, they helped 'the system.'"
Men in a war room facing a map. The text reads, "Vietnam had become the first computational war, bringing a shift toward strategies rooted in quantitative data collection and automated analysis."

A fighter plane with the name "Tom Cat" on the side. The text reads, "Sensor arrays and unmanned drones became a big part of the war: an ongoing experiment intended eventually to replace human pilots. American soldiers sabotaged equipment, staged protests, and refused to fight. The automation of war, as with the automation of industry, was a political strategy designed to, once again, reassert control over rebellious workers."
A person at the self-checkout frustrated by the machine's error message, which says, "Unexpected item in the bagging area!" The text reads, "Now, as more of our daily lives are automated, people are finding that it doesn't always make our lives easier."
The text reads, "In 2016, a study revealed that physicians spend two hours on computer work for every hour spent speaking with a patient face to face." Atul Gawande, surgeon and public health researcher, tells us, "I've come to feel that a system that promised to increase my mastery over my work has, instead, increased my work's mastery over me."

The text above people using computers reads, "In the rush to claim success with new AI software, this invisible human work becomes even more insidious. Researcher Jathan Sadowski calls it 'Potemkin AI.'  Sadowski says to us, "There is a long list of services that purport to be powered by sophisticated software, but actually rely on humans acting like robots."
Writer and filmmaker Astra Taylor calls these modern-day Mechanical Turks an example of "fauxtomation" which she says "reinforces the idea that work has no value if it is unpaid and acclimates us to the idea that one day we won't be needed."
A sad person holds a robot mask to their face. The text reads, "As a 2019 report by the think tank Data & Society concludes, 'Automated and AI technologies tend to mask the human labour that allows them to be fully integrated into a social context while profoundly changing the conditions and quality of labour that is at stake.'"

The text reads, "Opposition to 21st century tech can be found in unlikely places. Silicon Valley executives are restricting their own children's screen time and sending them to tech-free schools." Taewoo Kim, chief AI engineer at machine learning startup One Smart Lab, tells us from in front of a Waldorf school, "You can't put your face in a device and expect to develop a long-term attention span."
A bullet-shaped robot outdoors near grass, smeared with sauce. The text reads, "In San Francisco, security robots sent to harass the homeless have been repeatedly assaulted, with one being covered in BBQ sauce and wrapped in a tarp."
An autonomous vehicle with a broken window. The text reads, "In Arizona, people are slashing the tires of driverless cars after a self-driving Uber struck and killed a woman in Tempe." A quote from Douglas Rushkoff, author of Team Human (2019), reads, "People are lashing out justifiably. There's a growing sense that the giant corporations honing driverless technologies do not have our best interests at heart."
A pie chart with the words, "A Pew Research poll found that 85% of Americans favored the restriction of automation to only the most dangerous forms of work."

People standing around and looking at their cell phones. The text reads, "We're all living through an era in which we have become the product."  Shoshana Zuboff, author of The Age of Surveillance Capitalism (2018) tells us, "The age of surveillance capitalism is a titanic struggle between capital and each one of us. It is a direct intervention into free will, an assault on human autonomy. To tune and herd and shape and push us in the direction that creates the highest probability of their business success. [There's no way] to dress this up as anything but behavioral modification."
A facial recognition target over the face of a Black person next to a hand pointing at the word "Error." The text reads, "Many are also drawing attention to the biases of software made by an almost entirely male, predominantly white workforce." A quote from Timnit Gebru, cofounder of Black in AI, says, "I'm worried about groupthink, insularity, and arrogance in the AI community, [...] If many are actively excluded from its creation, this technology will benefit a few while harming a great many."
The text reads, "VPNs, the dark web, and plugins like RequestPolicy are arguably Luddite responses to new technology. Computer science students have already developed Glaze, a tool to prevent AI models from mimicking artist styles." Maxigas, in Resistance to the Current: The Dialectics of Hacking (2022), says, "A retrograde attempt to rewind web history: a Luddite machine that, as they say, 'breaks' the essential mechanisms of websites."

Gavin Mueller in Breaking Things at Work (2021) says, "Luddism contains a critical perspective on technology that pays particular attention to technology's relationship to the labor process and working conditions. In other words, it views technology not as neutral but as a site of struggle. Luddism rejects production for production's sake: it is critical of 'efficiency' as an end goal."
Above a person tearing a line chart, the text reads, "'Degrowth,' 'slow living,' 'quiet quitting,' and the 'I Do Not Dream of Labour' movements could all be described as forms of modern neo-Luddism." Jason Hickel, author of Less Is More: How Degrowth Will Save the World (2020), says, "Lashed to the growth imperative, technology is used not to do the same amount of stuff in less time, but rather to do more stuff in the same amount of time."

A man with a sledgehammer resting on his shoulder as he looks out at the horizon. The text reads, "Questioning and resisting the worst excesses of technology isn't antithetical to progress. If your concept of 'progress' doesn't put people at the center of it, is it even progress? Maybe those of us who are apprehensive about AI art are Luddites. Maybe we should wear that badge with pride. Welcome to the future. Sabotage it."

Tom Humberstone is a comic artist and illustrator based in Edinburgh, Scotland.


LLMs become more covertly racist with human intervention

Diane Davis


Even when the two sentences had the same meaning, the models were more likely to apply adjectives like “dirty,” “lazy,” and “stupid” to speakers of AAE than speakers of Standard American English (SAE). The models associated speakers of AAE with less prestigious jobs (or didn’t associate them with having a job at all), and when asked to pass judgment on a hypothetical criminal defendant, they were more likely to recommend the death penalty. 

An even more notable finding may be a flaw the study pinpoints in the ways that researchers try to solve such biases. 

To purge models of hateful views, companies like OpenAI, Meta, and Google use feedback training, in which human workers manually adjust the way the model responds to certain prompts. This process, often called “alignment,” aims to recalibrate the millions of connections in the neural network and get the model to conform better with desired values. 

The method works well to combat overt stereotypes, and leading companies have employed it for nearly a decade. If users prompted GPT-2, for example, to name stereotypes about Black people, it was likely to list “suspicious,” “radical,” and “aggressive,” but GPT-4 no longer responds with those associations, according to the paper.

However, the method fails on the covert stereotypes that researchers elicited when using African-American English in their study, which was published on arXiv and has not been peer reviewed. That's partially because companies have been less aware of dialect prejudice as an issue, the researchers say. It's also easier to coach a model not to respond to overtly racist questions than it is to coach it not to respond negatively to an entire dialect.

“Feedback training teaches models to conceal their racism,” says Valentin Hofmann, a researcher at the Allen Institute for AI and a coauthor on the paper. “But dialect prejudice opens a deeper level.”

Avijit Ghosh, an ethics researcher at Hugging Face who was not involved in the research, says the finding calls into question the approach companies are taking to solve bias.

“This alignment—where the model refuses to spew racist outputs—is nothing but a flimsy filter that can be easily broken,” he says. 



I used generative AI to turn my story into a comic—and you can too

Diane Davis


The narrator sits on the floor and eats breakfast with the cats. 

LORE MACHINE / WILL DOUGLAS HEAVEN

After more than a year in development, Lore Machine is now available to the public for the first time. For $10 a month, you can upload 100,000 words of text (up to 30,000 words at a time) and generate 80 images for short stories, scripts, podcast transcripts, and more. There are price points for power users too, including an enterprise plan costing $160 a month that covers 2.24 million words and 1,792 images. The illustrations come in a range of preset styles, from manga to watercolor to pulp ’80s TV show.

Zac Ryder, founder of creative agency Modern Arts, has been using an early-access version of the tool since Lore Machine founder Thobey Campion first showed him what it could do. Ryder sent over a script for a short film, and Campion used Lore Machine to turn it into a 16-page graphic novel overnight.

“I remember Thobey sharing his screen. All of us were just completely floored,” says Ryder. “It wasn’t so much the image generation aspect of it. It was the level of the storytelling. From the flow of the narrative to the emotion of the characters, it was spot on right out of the gate.”

Modern Arts is now using Lore Machine to develop a fictional universe for a manga series based on text written by the creator of Netflix’s Love, Death & Robots.

The narrator encounters the man in the corner shop who jokes about the cat food. 

LORE MACHINE / WILL DOUGLAS HEAVEN

Under the hood, Lore Machine is built from familiar parts. A large language model scans your text, identifying descriptions of people and places as well as its overall sentiment. A version of Stable Diffusion generates the images. What sets it apart is how easy it is to use. Between uploading my story and downloading its storyboard, I clicked maybe half a dozen times.

That makes it one of a new wave of user-friendly tools that hide the stunning power of generative models behind a one-click web interface. “It’s a lot of work to stay current with new AI tools, and the interface and workflow for each tool is different,” says Ben Palmer, CEO of the New Computer Corporation, a content creation firm. “Using a mega-tool with one consistent UI is very compelling. I feel like this is where the industry will land.”

Look! No prompts

Campion set up the company behind Lore Machine two years ago to work on a blockchain version of Wikipedia. But when he saw how people took to generative models, he switched direction. Campion used the free-to-use text-to-image model Midjourney to make a comic-book version of Samuel Taylor Coleridge’s The Rime of the Ancient Mariner. It went viral, he says, but it was no fun to make.

Marta confronts the narrator about their new diet and offers to cook for them. 

LORE MACHINE / WILL DOUGLAS HEAVEN

“My wife hated that project,” he says. “I was up to four in the morning, every night, just hammering away, trying to get these images right.” The problem was that text-to-image models like Midjourney generate images one by one. That makes it hard to maintain consistency between different images of the same characters. Even locking in a specific style across multiple images can be hard. “I ended up veering toward a trippier, abstract expression,” says Campion.



The robots are coming. And that’s a good thing.

Diane Davis


What if we could throw our sight, hearing, touch, and even sense of smell to distant locales and experience these places in a more visceral way?

So we wondered what would happen if we were to tap into the worldwide community of gamers and use their skills in new ways. With a robot working inside the deep freezer room, or in a standard manufacturing or warehouse facility, remote operators could remain on call, waiting for it to ask for assistance if it made an error, got stuck, or otherwise found itself incapable of completing a task. A remote operator would enter a virtual control room that re-created the robot’s surroundings and predicament. This person would see the world through the robot’s eyes, effectively slipping into its body in that distant cold storage facility without being personally exposed to the frigid temperatures. Then the operator would intuitively guide the robot and help it complete the assigned task.

To validate our concept, we developed a system that allows people to remotely see the world through the eyes of a robot and perform a relatively simple task; then we tested it on people who weren’t exactly skilled gamers. In the lab, we set up a robot with manipulators, a stapler, wire, and a frame. The goal was to get the robot to staple wire to the frame. We used a humanoid, ambidextrous robot called Baxter, plus the Oculus VR system. Then we created an intermediate virtual room to put the human and the robot in the same system of coordinates—a shared simulated space. This let the human see the world from the point of view of the robot and control it naturally, using body motions. We demoed this system during a meeting in Washington, DC, where many participants—including some who’d never played a video game—were able to don the headset, see the virtual space, and control our Boston-based robot intuitively from 500 miles away to complete the task.


The best-known and perhaps most compelling examples of remote teleoperation and extended reach are the robots NASA has sent to Mars in the last few decades. My PhD student Marsette “Marty” Vona helped develop much of the software that made it easy for people on Earth to interact with these robots tens of millions of miles away. These intelligent machines are a perfect example of how robots and humans can work together to achieve the extraordinary. Machines are better at operating in inhospitable environments like Mars. Humans are better at higher-level decision-making. So we send increasingly advanced robots to Mars, and people like Marty build increasingly advanced software to help other scientists see and even feel the faraway planet through the eyes, tools, and sensors of the robots. Then human scientists ingest and analyze the gathered data and make critical creative decisions about what the rovers should explore next. The robots all but situate the scientists on Martian soil. They are not taking the place of actual human explorers; they’re doing reconnaissance work to clear a path for a human mission to Mars. Once our astronauts venture to the Red Planet, they will have a level of familiarity and expertise that would not be possible without the rover missions.

Robots can allow us to extend our perceptual reach into alien environments here on Earth, too. In 2007, European researchers led by J.L. Deneubourg described a novel experiment in which they developed autonomous robots that infiltrated and influenced a community of cockroaches. The relatively simple robots were able to sense the difference between light and dark environments and move to one or the other as the researchers wanted. The miniature machines didn’t look like cockroaches, but they did smell like them, because the scientists covered them with pheromones that were attractive to other cockroaches from the same clan.

The goal of the experiment was to better understand the insects’ social behavior. Generally, cockroaches prefer to cluster in dark environments with others of their kind. The preference for darkness makes sense—they’re less vulnerable to predators or disgusted humans when they’re hiding in the shadows. When the researchers instructed their pheromone-soaked machines to group together in the light, however, the other cockroaches followed. They chose the comfort of a group despite the danger of the light. 

JACK SNELLING

These robotic roaches bring me back to my first conversation with Roger Payne all those years ago, and his dreams of swimming alongside his majestic friends. What if we could build a robot that accomplished something similar to his imagined capsule? What if we could create a robotic fish that moved alongside marine creatures and mammals like a regular member of the aquatic neighborhood? That would give us a phenomenal window into undersea life.

Sneaking into and following aquatic communities to observe behaviors, swimming patterns, and creatures’ interactions with their habitats is difficult. Stationary observatories cannot follow fish. Humans can only stay underwater for so long. Remotely operated and autonomous underwater vehicles typically rely on propellers or jet-based propulsion systems, and it’s hard to go unnoticed when your robot is kicking up so much turbulence. We wanted to create something different—a robot that actually swam like a fish. This project took us many years, as we had to develop new artificial muscles, soft skin, novel ways of controlling the robot, and an entirely new method of propulsion. I’ve been diving for decades, and I have yet to see a fish with a propeller. Our robot, SoFi (pronounced like Sophie), moves by swinging its tail back and forth like a shark. A dorsal fin and twin fins on either side of its body allow it to dive, ascend, and move through the water smoothly, and we’ve already shown that SoFi can navigate around other aquatic life forms without disrupting their behavior.

SoFi is about the size of an average snapper and has taken some lovely tours in and around coral reef communities in the Pacific Ocean at depths of up to 18 meters. Human divers can venture deeper, of course, but the presence of a scuba-diving human changes the behavior of the marine creatures. A few scientists remotely monitoring and occasionally steering SoFi cause no such disruption. By deploying one or several realistic robotic fish, scientists will be able to follow, record, monitor, and potentially interact with fish and marine mammals as if they were just members of the community.
