People Don’t Worry About Losing Jobs to AI, Even When Told It Could Happen Soon

November 18, 2025
Surveyed workers were slightly more worried by a shorter AI-takeover timeline, but it didn't significantly change their views about government safety nets.

As debates rage about artificial intelligence's impact on jobs, new research suggests that even warnings that AI could disrupt workers' employment soon do little to shake their confidence.

In a survey-based study, political scientists Anil Menon of UC Merced and Baobao Zhang of Syracuse University examined how people respond to forecasts of the arrival of “transformative AI,” ranging from as early as 2026 to as distant as 2060.

The researchers found that shorter timelines made respondents slightly more anxious about losing their jobs to automation, but did not meaningfully alter their views on when job losses would occur or their support for government responses such as retraining workers or providing a universal basic income.

In the survey of 2,440 U.S. adults, respondents who read about the rapid development of large language models and other generative systems, such as Gemini, Claude and ChatGPT, predicted automation might come somewhat sooner. Yet their policy preferences and economic outlooks were essentially unchanged. Pooled across the timeline conditions, respondents showed modest increases in concern about unemployment due to technology.

“These results suggest that Americans’ beliefs about automation risks are stubborn,” the authors said. “Even when told that human-level AI could arrive within just a few years, people don’t dramatically revise their expectations or demand new policies.”

Menon and Zhang said their findings challenge the assumption that making technological threats feel more immediate will mobilize public support for regulation or safety nets.

The study draws on construal level theory, which holds that psychological distance, including distance in time, shapes how people judge risks. Participants who were told that AI breakthroughs were imminent were not significantly more alarmed than those given distant timelines.

The survey, fielded in March 2024, randomly assigned respondents to one of four groups. Three groups read vignettes stating that job-threatening AI would arrive in a particular year: 2026, 2030 or 2060. A fourth, control group received no timeline information.

Each vignette described experts predicting that advances in machine learning and robotics could replace human workers in a wide range of professions, from software engineers and legal clerks to teachers and nurses.

After reading the vignette, participants estimated when their jobs and others’ jobs would be automated, reported confidence in those predictions, rated their worry about job loss, and indicated support for several policy responses, including limits on automation and increased AI research funding.

While exposure to any timeline increased awareness of automation risks, only the 2060 condition significantly raised worry about job loss. The researchers suggested this may be because the distant forecast seemed more credible than claims of imminent disruption.

The study, published in The Journal of Politics, comes amid widespread debate over how large language models and other generative systems will reshape work. Tech leaders have predicted human-level AI may emerge within the decade, while critics argue that such forecasts exaggerate current capabilities.

Menon and Zhang said the study shows the public remains cautious but not panicked, an insight that may help policymakers gauge when and how citizens will support interventions such as retraining programs or universal basic income proposals.

The authors noted several caveats. Their design focused on how timeline cues influence attitudes but did not test other psychological pathways, such as beliefs about AI’s economic trade-offs or the credibility of expert forecasts. The researchers also acknowledged that the survey cannot track changes in individuals’ perceptions over time.

“The public’s expectations about automation appear remarkably stable,” they said. “Understanding why they are so resistant to change is crucial for anticipating how societies will navigate the labor disruptions of the AI era.”