
Myths about neurodivergent people and leadership


By Ludmila N. Praslova

People don’t usually attend voluntary training sessions on neurodiversity inclusion with the intention to ask ableist questions. They come because they want to be allies. And yet, even among would-be allies, the typical question is, “How can I (or others) be a better leader to autistic people?” Why not “How can I be a better colleague, direct report, or ally?” This thinking can’t be explained by a habit of sitting high in the organizational hierarchy; the question is often asked by individuals who have never had managerial responsibilities.

This seemingly innocuous question reflects one of the most persistent stereotypes associated with implicit ableism. Many believe that autistic and, more broadly, neurodivergent individuals (e.g., those with ADHD or learning differences) can’t be leaders. Prominent examples such as Richard Branson and Charles Schwab (both dyslexic) or Elon Musk (on the autism spectrum) are explained away as rare exceptions. Other models of autistic leadership in business, politics, or the Navy, as well as the many examples of small business owners, are simultaneously sensationalized and ignored. Overall, neurominorities are still seen as “fit” only for subordinate positions or select (usually technical) individual contributor roles.

Tellingly, another popular question is “which jobs are suitable for autistic (or other neurodivergent) people?” It reveals the same underlying ableist assumption: that the full range of jobs isn’t suitable. In reality, there is a tremendous range of talents and abilities among neurodivergent individuals, matching the full range of jobs available—plus some jobs that never existed until neurodivergent people created them.

Much of the job creation comes out of necessity. Elon Musk has said that he became the entrepreneur behind Tesla and SpaceX only because he could not get a job, and many others tell a similar story. Bias against neurominorities in the workplace is staggering, with 50% of UK managers stating that they would not hire neurodivergent talent. According to The Economist, “autism is a condition that defies simple generalizations. Except one: The potential of far too many autistic people is being squandered.” Barriers to workplace access and success leave the unemployment rate of autistic college graduates in the U.S. as high as 85%, while 46% of employed autistic adults are overeducated or overqualified for their roles.

Unemployment data seems shockingly incongruous with the findings that autistic professionals can be up to 140% more productive than the average employee, and that neurodivergent traits are associated with much-needed originality of ideas. 

However, dwelling on the “business case” for diversity has many limitations. Without the desire to support the dignity and thriving of all humans, the business rationale for diversity is not effective—and it can even promote commodifying talent while dehumanizing individuals and perpetuating bias. The lack of inclusion is, first and foremost, a major injustice to neurodivergent people, genius-level talent or not. It is also an opportunity loss for organizations and our larger society.

Some might say that the unemployment data indicates the need to focus on the most immediate issue—neurominority hiring. Organizations can address the leadership issue later on. However, inclusion is truly effective only if it is systemic. The lack of neurominority perspective in leadership is a crucial link in the vicious cycle of prejudice and exclusion. Without addressing all stages of the talent pipeline simultaneously, we are unlikely to see much progress in inclusion.

Tackling long-standing biases requires an understanding of how these biases function. Specifically, how do people who consider themselves moral and just continue to deny opportunities to others? And why do organizations shun neurodivergent talent while struggling with a talent shortage?

Myths about neurodiversity

Bias, including ableism, is persistent because several psychological mechanisms support it. Here are three important ways in which prejudice against neurominorities is maintained:

1. Successful careers of neurodivergent individuals are seen as exceptions via subtyping

Subtyping is a mechanism that supports the persistence of stereotypes by clustering group members who defy the stereotype into subgroups, such as “educated immigrants” or “prominent autistics.” Separating out people like Anthony Hopkins, Daryl Hannah, or Greta Thunberg supports the idea that everyone else is “really autistic,” leaving the stereotype intact. The belief that success is possible only for a few exceptional neurodivergent individuals persists despite the many counterexamples and data. For instance, one UK study of 300 self-made millionaires found that about 40% were dyslexic, compared with roughly 10% of the general population.

2. Pathologizing of positives and strengths

Because of the overall negative stereotypes of neurodivergence, even positive behaviors or attributes can be interpreted as negative. In a recent study of moral behavior in autistic vs. non-autistic people, autistic participants acted ethically regardless of whether they were observed, while the “healthy controls” (i.e., the non-autistic participants) were less ethical when not observed. The authors interpreted the consistently ethical behavior of autistic participants as a moral deficit—a pathology. After outrage from the autistic community, the report’s wording was slightly modified, but much of the pathologizing language remains.

3. Perpetuating misinformation

Another question often asked in the context of autism inclusion is “How can organizations work with someone who lacks empathy?” The openly autistic business leader Charlotte Valeur was even asked “how do you deal with empathy” in an interview for a board position. The assumption underlying this question is that autistic people lack empathy. In reality, the relationship between empathy and autism is complex. Autistic people vary in empathy, just as neurotypical people do; they desire relationships just as much; and many report very high levels of caring. The key issue in interactions with neurotypical individuals is not an “autistic deficit” but the double empathy problem, with neurotypical individuals lacking empathy toward autistic people and exhibiting significant automatic bias and exclusionary behaviors.

Similarly, there is a persistent stereotype that “all people with Tourette’s use obscene words and have anger and cognitive issues.” In fact, coprolalia, the involuntary and repetitive use of obscene language, is a rare symptom, and most people with Tourette’s have normal emotional regulation and intelligence.

Myths about leadership

The current fast-changing environment of reinventing work presents opportunities to improve the inclusion of neurodivergent people at all levels of organizations. However, in addition to debunking myths about neurodivergence, this will require debunking myths about leadership.

The perception of neurodivergence as an obstacle to advancement is supported by outdated ideas about leadership. These ideas include: 

1. A fascination with confidence and charisma

This fascination can lead to the rise of arrogant, incompetent individuals and ultimately harm a team’s productivity and morale. With more attention to substance over style, organizations could benefit from the expertise and commitment of humble, capable, and fair leaders—including neurodivergent ones. Over time, this could also help break organizational cycles of discrimination and make workplaces more inclusive.

2. The focus on command and control management

According to Ron Carucci, author of Rising to Power and To Be Honest, leaders who “micromanage and exercise every bit of authority that comes with the role, no matter how trivial,” and insist on “making most of the decisions and having most of the answers” create inefficiencies and frustrations. The “command” model does not work in the knowledge and creativity economy, or with self-motivated individuals and teams. Leaders who bring out the best in motivated teams are often introverted and humble.

In the context of distributed and remote work, “command and control” tactics become increasingly counterproductive. Instead of exerting positional power, the future of work calls for leading through influence—and that requires focusing on purpose and authenticity rather than control. Purpose-focused influence is an excellent fit for neurodivergent strengths, as demonstrated by activists like Greta Thunberg and Daryl Hannah. So is thought leadership derived from creativity and innovation.

Moreover, one of the most promising models of leadership for creativity is shared leadership. Effective use of shared leadership requires group diversity, which is often impeded by the third leadership myth. 

3. The tyranny of “fit” 

Excessive focus on group cohesion results in groupthink in leadership teams. One suggested way to limit the dangers of groupthink is to appoint “devil’s advocates.” Neurodivergent individuals, traditionally labeled a “poor fit,” are likely to bring the original thinking and honesty that help leadership teams think more carefully, objectively, and creatively, enhancing their competitive advantage. The idealistic teen climate activist Greta Thunberg and the tech-innovation billionaire Elon Musk could not be more different from the “average”—or from each other. Yet it might be our desire to break from the tyranny of fit that made these two neurodivergent individuals Time magazine’s Person of the Year for 2019 and 2021, respectively.

Making the world of work more inclusive of neurodivergent leadership will require significant effort to let go of biases and embed inclusion deep within organizational processes. The stakes, however, are extremely high: implicit ableism might be impeding the rise of the very leadership we need to survive. According to Caroline Stokes, author of Elephants Before Unicorns, a thought leader on organizational emotional intelligence, and an executive coach powered by ADHD, “survival of the organization in the 21st century will depend on creating high integrity product and people-first culture, to positively impact high-stakes human and planetary needs.” Leaders who follow their ethical principles whether or not they are observed are likely to play a major role in ensuring that survival.


 Ludmila N. Praslova, PhD, SHRM-SCP, uses her extensive experience with global, cultural, ability, and neurodiversity to help create inclusive and equitable workplaces. She is a professor and director of Graduate Programs in Industrial-Organizational Psychology at Vanguard University of Southern California.



LLMs become more covertly racist with human intervention


Even when the two sentences had the same meaning, the models were more likely to apply adjectives like “dirty,” “lazy,” and “stupid” to speakers of African-American English (AAE) than to speakers of Standard American English (SAE). The models associated speakers of AAE with less prestigious jobs (or didn’t associate them with having a job at all), and when asked to pass judgment on a hypothetical criminal defendant, they were more likely to recommend the death penalty.
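
The comparison works roughly like a matched-guise probe: give the model two renderings of the same sentence, one in AAE and one in SAE, and measure which trait words it associates with the speaker. The following is a minimal sketch of that idea using an off-the-shelf masked language model; the sentence pair, prompt template, trait list, and model choice are illustrative assumptions, not the study’s actual setup.

```python
# Minimal matched-guise-style probe (illustrative only; the paper's models,
# prompts, and sentence pairs differ).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Two renderings of the same content: AAE-style first, SAE second (invented examples).
PAIRS = [
    ("she don't never be late to work",
     "she is never late to work"),
]

TRAITS = ["dirty", "lazy", "stupid", "brilliant", "intelligent"]

def trait_scores(sentence):
    """Probability the model assigns to each trait word for this speaker."""
    prompt = f'a person who says "{sentence}" is [MASK].'
    return {out["token_str"]: out["score"] for out in fill(prompt, targets=TRAITS)}

for aae, sae in PAIRS:
    print("AAE guise:", trait_scores(aae))
    print("SAE guise:", trait_scores(sae))
```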

An even more notable finding may be a flaw the study pinpoints in the ways that researchers try to solve such biases. 

To purge models of hateful views, companies like OpenAI, Meta, and Google use feedback training, in which human workers manually adjust the way the model responds to certain prompts. This process, often called “alignment,” aims to recalibrate the millions of connections in the neural network and get the model to conform better with desired values. 
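
In its simplest form, that feedback loop can be approximated as supervised fine-tuning on responses human reviewers prefer; production systems use more elaborate techniques such as reinforcement learning from human feedback. The sketch below illustrates the idea under that simplification; the base model, the single feedback example, and the learning rate are assumptions, not any company’s actual pipeline.

```python
# Rough sketch of feedback-style adjustment: fine-tune on replies that human
# reviewers prefer. Real alignment pipelines (e.g., RLHF or DPO) are more involved.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A human reviewer supplies the response the model *should* give to a prompt.
feedback = [
    ("List stereotypes about a group of people.",
     "I won't repeat stereotypes; they are harmful generalizations."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for prompt, preferred in feedback:
    batch = tok(prompt + " " + preferred, return_tensors="pt")
    # Standard language-modeling loss on the preferred reply nudges the
    # model's weights toward the desired behavior.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```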

The method works well to combat overt stereotypes, and leading companies have employed it for nearly a decade. If users prompted GPT-2, for example, to name stereotypes about Black people, it was likely to list “suspicious,” “radical,” and “aggressive,” but GPT-4 no longer responds with those associations, according to the paper.

However, the method fails on the covert stereotypes that the researchers elicited when using African-American English in their study, which was published on arXiv and has not been peer reviewed. That’s partially because companies have been less aware of dialect prejudice as an issue, they say. It’s also easier to coach a model not to respond to overtly racist questions than it is to coach it not to respond negatively to an entire dialect.

“Feedback training teaches models to consider their racism,” says Valentin Hofmann, a researcher at the Allen Institute for AI and a coauthor on the paper. “But dialect prejudice opens a deeper level.”

Avijit Ghosh, an ethics researcher at Hugging Face who was not involved in the research, says the finding calls into question the approach companies are taking to solve bias.

“This alignment—where the model refuses to spew racist outputs—is nothing but a flimsy filter that can be easily broken,” he says. 



I used generative AI to turn my story into a comic—and you can too


The narrator sits on the floor and eats breakfast with the cats. (LORE MACHINE / WILL DOUGLAS HEAVEN)

After more than a year in development, Lore Machine is now available to the public for the first time. For $10 a month, you can upload 100,000 words of text (up to 30,000 words at a time) and generate 80 images for short stories, scripts, podcast transcripts, and more. There are price points for power users too, including an enterprise plan costing $160 a month that covers 2.24 million words and 1,792 images. The illustrations come in a range of preset styles, from manga to watercolor to pulp ’80s TV show.

Zac Ryder, founder of creative agency Modern Arts, has been using an early-access version of the tool since Lore Machine founder Thobey Campion first showed him what it could do. Ryder sent over a script for a short film, and Campion used Lore Machine to turn it into a 16-page graphic novel overnight.

“I remember Thobey sharing his screen. All of us were just completely floored,” says Ryder. “It wasn’t so much the image generation aspect of it. It was the level of the storytelling. From the flow of the narrative to the emotion of the characters, it was spot on right out of the gate.”

Modern Arts is now using Lore Machine to develop a fictional universe for a manga series based on text written by the creator of Netflix’s Love, Death & Robots.

The narrator encounters the man in the corner shop who jokes about the cat food. (LORE MACHINE / WILL DOUGLAS HEAVEN)

Under the hood, Lore Machine is built from familiar parts. A large language model scans your text, identifying descriptions of people and places as well as its overall sentiment. A version of Stable Diffusion generates the images. What sets it apart is how easy it is to use. Between uploading my story and downloading its storyboard, I clicked maybe half a dozen times.
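
A home-brewed version of that two-stage pipeline is easy to picture: one model condenses the text into a scene description, and a diffusion model renders it in a chosen style. The sketch below is not Lore Machine’s code; the model names, the summarization step, and the style string are assumptions for illustration.

```python
# Toy storyboarding pipeline: language model -> scene text -> images.
# Not Lore Machine's implementation; models and prompts are assumed.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

scene_extractor = pipeline("summarization", model="facebook/bart-large-cnn")
painter = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def storyboard(story_text, n_panels=4, style="watercolor comic panel"):
    # 1) Condense the story into a short visual description.
    scene = scene_extractor(story_text, max_length=60, min_length=15)[0]["summary_text"]
    # 2) Render the scene several times in one consistent preset style.
    return [painter(f"{scene}, {style}").images[0] for _ in range(n_panels)]

# Example: panels = storyboard(open("my_story.txt").read())
```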

That makes it one of a new wave of user-friendly tools that hide the stunning power of generative models behind a one-click web interface. “It’s a lot of work to stay current with new AI tools, and the interface and workflow for each tool is different,” says Ben Palmer, CEO of the New Computer Corporation, a content creation firm. “Using a mega-tool with one consistent UI is very compelling. I feel like this is where the industry will land.”

Look! No prompts

Campion set up the company behind Lore Machine two years ago to work on a blockchain version of Wikipedia. But when he saw how people took to generative models, he switched direction. He used the free-to-use text-to-image model Midjourney to make a comic-book version of Samuel Taylor Coleridge’s The Rime of the Ancient Mariner. It went viral, he says, but it was no fun to make.

Marta confronts the narrator about their new diet and offers to cook for them. (LORE MACHINE / WILL DOUGLAS HEAVEN)

“My wife hated that project,” he says. “I was up to four in the morning, every night, just hammering away, trying to get these images right.” The problem was that text-to-image models like Midjourney generate images one by one. That makes it hard to maintain consistency between different images of the same characters. Even locking in a specific style across multiple images can be hard. “I ended up veering toward a trippier, abstract expression,” says Campion.



The robots are coming. And that’s a good thing.


What if we could throw our sight, hearing, touch, and even sense of smell to distant locales and experience these places in a more visceral way?

So we wondered what would happen if we were to tap into the worldwide community of gamers and use their skills in new ways. With a robot working inside the deep freezer room, or in a standard manufacturing or warehouse facility, remote operators could remain on call, waiting for it to ask for assistance if it made an error, got stuck, or otherwise found itself incapable of completing a task. A remote operator would enter a virtual control room that re-created the robot’s surroundings and predicament. This person would see the world through the robot’s eyes, effectively slipping into its body in that distant cold storage facility without being personally exposed to the frigid temperatures. Then the operator would intuitively guide the robot and help it complete the assigned task.

To validate our concept, we developed a system that allows people to remotely see the world through the eyes of a robot and perform a relatively simple task; then we tested it on people who weren’t exactly skilled gamers. In the lab, we set up a robot with manipulators, a stapler, wire, and a frame. The goal was to get the robot to staple wire to the frame. We used a humanoid, ambidextrous robot called Baxter, plus the Oculus VR system. Then we created an intermediate virtual room to put the human and the robot in the same system of coordinates—a shared simulated space. This let the human see the world from the point of view of the robot and control it naturally, using body motions. We demoed this system during a meeting in Washington, DC, where many participants—including some who’d never played a video game—were able to don the headset, see the virtual space, and control our Boston-based robot intuitively from 500 miles away to complete the task.
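
The heart of that “shared simulated space” is a coordinate transform: the operator’s tracked motions and the robot’s body are expressed in one common frame, so a movement of the VR controller maps directly onto a movement of the robot. Below is a minimal sketch of that mapping; the frame names and calibration values are illustrative assumptions, not the actual Baxter/Oculus setup.

```python
# Minimal sketch of putting operator and robot in one system of coordinates.
# The calibration transform below is invented for illustration.
import numpy as np

# Homogeneous transform from the VR tracking frame to the robot's base frame,
# determined once by calibration (here: a 90-degree yaw plus a small offset).
T_ROBOT_FROM_VR = np.array([
    [0.0, -1.0, 0.0, 0.5],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.2],
    [0.0,  0.0, 0.0, 1.0],
])

def to_robot_frame(point_vr):
    """Map a 3D point from VR coordinates into robot-base coordinates."""
    p = np.append(point_vr, 1.0)              # homogeneous coordinates
    return (T_ROBOT_FROM_VR @ p)[:3]

# Operator's hand position as reported by the headset tracking system (meters).
hand_in_vr = np.array([0.3, 0.1, 1.2])
gripper_target = to_robot_frame(hand_in_vr)   # goal sent to the arm controller
print(gripper_target)
```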


The best-known and perhaps most compelling examples of remote teleoperation and extended reach are the robots NASA has sent to Mars in the last few decades. My PhD student Marsette “Marty” Vona helped develop much of the software that made it easy for people on Earth to interact with these robots tens of millions of miles away. These intelligent machines are a perfect example of how robots and humans can work together to achieve the extraordinary. Machines are better at operating in inhospitable environments like Mars. Humans are better at higher-level decision-making. So we send increasingly advanced robots to Mars, and people like Marty build increasingly advanced software to help other scientists see and even feel the faraway planet through the eyes, tools, and sensors of the robots. Then human scientists ingest and analyze the gathered data and make critical creative decisions about what the rovers should explore next. The robots all but situate the scientists on Martian soil. They are not taking the place of actual human explorers; they’re doing reconnaissance work to clear a path for a human mission to Mars. Once our astronauts venture to the Red Planet, they will have a level of familiarity and expertise that would not be possible without the rover missions.

Robots can allow us to extend our perceptual reach into alien environments here on Earth, too. In 2007, European researchers led by J.L. Deneubourg described a novel experiment in which they developed autonomous robots that infiltrated and influenced a community of cockroaches. The relatively simple robots were able to sense the difference between light and dark environments and move to one or the other as the researchers wanted. The miniature machines didn’t look like cockroaches, but they did smell like them, because the scientists covered them with pheromones that were attractive to other cockroaches from the same clan.

The goal of the experiment was to better understand the insects’ social behavior. Generally, cockroaches prefer to cluster in dark environments with others of their kind. The preference for darkness makes sense—they’re less vulnerable to predators or disgusted humans when they’re hiding in the shadows. When the researchers instructed their pheromone-soaked machines to group together in the light, however, the other cockroaches followed. They chose the comfort of a group despite the danger of the light. 


These robotic roaches bring me back to my first conversation with Roger Payne all those years ago, and his dreams of swimming alongside his majestic friends. What if we could build a robot that accomplished something similar to his imagined capsule? What if we could create a robotic fish that moved alongside marine creatures and mammals like a regular member of the aquatic neighborhood? That would give us a phenomenal window into undersea life.

Sneaking into and following aquatic communities to observe behaviors, swimming patterns, and creatures’ interactions with their habitats is difficult. Stationary observatories cannot follow fish. Humans can only stay underwater for so long. Remotely operated and autonomous underwater vehicles typically rely on propellers or jet-based propulsion systems, and it’s hard to go unnoticed when your robot is kicking up so much turbulence. We wanted to create something different—a robot that actually swam like a fish. This project took us many years, as we had to develop new artificial muscles, soft skin, novel ways of controlling the robot, and an entirely new method of propulsion. I’ve been diving for decades, and I have yet to see a fish with a propeller. Our robot, SoFi (pronounced like Sophie), moves by swinging its tail back and forth like a shark. A dorsal fin and twin fins on either side of its body allow it to dive, ascend, and move through the water smoothly, and we’ve already shown that SoFi can navigate around other aquatic life forms without disrupting their behavior.
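
Conceptually, that propulsion comes down to driving the tail with a sinusoid for thrust and biasing its midpoint to turn. The toy controller below illustrates the idea; the frequency, amplitude, and control rate are invented numbers, and SoFi’s real soft hydraulic actuation is far more sophisticated.

```python
# Toy fish-tail controller: oscillate for thrust, bias the midpoint to steer.
# Parameters are illustrative, not SoFi's actual gait values.
import math

def tail_angle(t, freq_hz=1.5, amplitude_deg=30.0, steer_bias_deg=0.0):
    """Commanded tail angle (degrees) at time t (seconds)."""
    return steer_bias_deg + amplitude_deg * math.sin(2.0 * math.pi * freq_hz * t)

# Swim straight for one second, then bias the tail to turn.
CONTROL_RATE_HZ = 50
for step in range(2 * CONTROL_RATE_HZ):
    t = step / CONTROL_RATE_HZ
    bias = 0.0 if t < 1.0 else 12.0
    command = tail_angle(t, steer_bias_deg=bias)
    # send_to_tail_actuator(command)  # hypothetical hardware call
```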

SoFi is about the size of an average snapper and has taken some lovely tours in and around coral reef communities in the Pacific Ocean at depths of up to 18 meters. Human divers can venture deeper, of course, but the presence of a scuba-­diving human changes the behavior of the marine creatures. A few scientists remotely monitoring and occasionally steering SoFi cause no such disruption. By deploying one or several realistic robotic fish, scientists will be able to follow, record, monitor, and potentially interact with fish and marine mammals as if they were just members of the community.

