
AI is everywhere, and Boomers don’t trust it 

Artificial intelligence tools like ChatGPT, Claude, Google Gemini, and Meta AI represent a greater threat to data privacy than the social media juggernauts that cemented themselves in the past two decades, according to new Malwarebytes research on the sentiments of older individuals.  

A combined 54% of people between the ages of 60 and 78 told Malwarebytes that they “agree” or “strongly agree” that ChatGPT and similar generative AI tools “are more of a threat than social media platforms (e.g., Facebook, Twitter/X, etc.) concerning personal data misuse.” And an even larger share of 82% said they “agree” or “strongly agree” that they are “concerned with the security and privacy of my personal data and those I interact with when using AI tools.”  

The findings arrive at an important time for consumers, as AI developers increasingly integrate their tools into everyday online life—from Meta suggesting that users lean on AI to write direct messages on Instagram to Google serving “Gemini”-powered results for basic searches by default. With little choice in the matter, consumers are responding with robust pushback.  

For this research, Malwarebytes conducted a pulse survey of its newsletter readers in October via the Alchemer Survey Platform. In total, 851 people across the globe responded. Malwarebytes then focused its analysis on survey participants who belong to the Baby Boomer generation.  

Malwarebytes found that:  

  • 35% of Baby Boomers said they know “just the names” of some of the largest generative AI products, such as ChatGPT, Google Gemini, and Meta AI.  
  • 71% of Baby Boomers said they have “never used” any generative AI tools—a seeming impossibility as Google search results, by default, now provide “AI overviews” powered by the company’s Gemini product. 
  • Only 12% of Baby Boomers believe that “generative AI tools are good for society.”  
  • More than 80% of Baby Boomers said that they worry about generative AI tools both improperly accessing their data and misusing their personal information.  
  • While more than 50% of Baby Boomers said they would feel more secure in using generative AI tools if the companies behind them provided regular security audits, a full 23% were unmoved by any proposal for transparency or government regulation. 

Distrust, concern, and unfamiliarity with AI  

Since San Francisco-based AI developer OpenAI released ChatGPT to the public two years ago, “generative” artificial intelligence has spread into nearly every corner of online life.  

Countless companies have integrated the technology into their customer support services with the help of AI-powered chatbots (which caused a problem for one California car dealer when its own AI chatbot promised to sell a customer a 2024 Chevy Tahoe for just $1). Emotional support and mental health providers have toyed with having their clients speak directly with AI chatbots when experiencing a crisis (with middling results). Audio production companies now advertise features to generate spoken text based on samples of recorded podcasts, art-sharing platforms regularly face scandals over AI-generated “stolen” work, and even AI “girlfriends”—and their scantily clad, AI-generated avatars—are on offer today.  

The public is unconvinced.  

According to Malwarebytes’ research, Baby Boomers do not trust generative AI, the companies making it, or the tools that implement it.  

A full 75% of Baby Boomers said they “agree” or “strongly agree” that they are “fearful of what the future will bring with AI.” Those sentiments are reflected in the 47% of Baby Boomers who said they “disagree” or “strongly disagree” that “generative AI tools are good for society.”  

In particular, Baby Boomers shared a broad concern over how these tools—and the developers behind them—collect and use their data.  

More than 80% of Baby Boomers agreed that they held the following concerns about generative AI tools: 

  • My data being accessed without my permission (86%) 
  • My personal information being misused (85%) 
  • Not having control over my data (84%) 
  • A lack of transparency into how my data is being used (84%) 

The impact on behavior here is immediate, as 71% of Baby Boomers said they “refrain from including certain data/information (e.g., names, metrics) when using generative AI tools due to concerns over security or privacy.”  

The companies behind these AI tools also have yet to win over Baby Boomers, as 87% said they “disagree” or “strongly disagree” that they “trust generative AI companies to be transparent about potential biases in their systems.” 

Perhaps this nearly uniform distrust in generative AI—in the technology itself, in its implementation, and in its developers—is at the root of a broad lack of interest among Baby Boomers. An enormous share of this population, at 71%, said they had never used these tools before.  

The statistic is difficult to believe, primarily because Google began powering everyday search requests with its own AI tool back in May 2024. Now, when users ask a simple question on Google, they will receive an “AI overview” at the top of their results. This functionality is powered by Gemini—Google’s own tool that, much like ChatGPT, can generate images, answer questions, fine-tune recipes, and deliver workout routines.  

Whether or not users know about this, and whether they consider this “using” generative AI, is unclear. What is clear, however, is that a generative AI tool created by one of the largest companies in the world is being pushed into the daily workstreams of a population that is unconvinced, uncomfortable, and unsold on the entire experiment.  

Few paths to improvement  

Coupled with the high levels of distrust that Baby Boomers have toward generative AI are widespread feelings that many corrective measures would have little impact.  

Baby Boomers were asked about a variety of restrictions, regulations, and external controls that would make them “feel more secure about using generative AI tools,” but few of those controls gained mass approval.  

For instance, “detailed reports on how data is stored and used” gained the interest of only 44% of Baby Boomers, and “government regulation” ranked even lower, winning over just 35% of survey participants. “Regular security audits by third parties” and “clear information on what data is collected” piqued the interest of 52% and 53% of Baby Boomers, respectively, but perhaps the most revealing answers came from the suggestions that survey participants wrote in themselves.  

Several participants specifically asked for the ability to delete any personal data ingested by the AI tools, and other participants tied their distrust to today’s model of online corporate success, believing that any large company will collect and sell their data to stay afloat. 

But frequently, participants also said they could not be swayed at all to use generative AI. As one respondent wrote:  

“There is nothing that would make me comfortable with it.”    

Whether Baby Boomers represent a desirable customer segment for AI developers is unknown, but for many survey participants, that likely doesn’t matter. It’s already too late. 

ABOUT THE AUTHOR

David Ruiz

Pro-privacy, pro-security editor. Former journalist turned advocate turned cybersecurity defender. Still a little bit of each. Failing book club member.