Accenture Life Trends 2025 Overview
About this report
We create these trends every year as a window into the interplay between people and their behaviors and attitudes to the world around them-be it business, technology or other societal shifts.
It's now commonly accepted that customer obsession is the best growth strategy. Superb customer experiences are expected. It takes meticulous orchestration to play a meaningful and relevant role in your customers' lives. However, people are messy. They're emotional and they're changing faster than you can change your business, so keeping pace is a constant challenge. These trends examine these shifts and seek to help businesses define how to catalyze growth by staying relevant to customers-which is Accenture Song's mission.
Research snapshot
Each year, Accenture Song's global network of designers, creatives, technologists, sociologists, and anthropologists across 50+ design studios and creative agencies watch out for signals in their countries. We synthesize their thoughts, discuss them with futurists and academics and shape them into trends. External, in-depth interviews with people in eight countries tell us whether and how the trends are manifesting, in their own words. We combine these insights with an extensive online survey of 24,295 people across 22 markets to shape these final trends.
Executive summary
As disruptive breakthroughs dramatically evolve people's digital experiences, they naturally react and adjust their relationship with technology to ensure it still serves them. Right now, trust online is in the spotlight and people are increasingly scrutinizing what they see and what they believe, which is affecting how customers behave towards the businesses trying to reach them. Thematically, then, the opening trend anchors the set.
Cost of hesitations details how it's now incredibly easy to create all kinds of digital content, and a flood of scams is blurring the lines between the authentic and the deceptive. Even on once-trustworthy platforms, it's harder for people to tell what's real, seeding hesitation into their digital interactions.
Within this context, The parent trap investigates how people are evaluating their options for helping the next generation shape a safe, healthy relationship with digital technology as it evolves.
Impatience economy observes that consumers are going their own way, finding quick solutions via relatable online content to satisfy their growing impatience to achieve life goals.
Foundational to a thriving workplace, The dignity of work is being challenged and resulting in rising tensions as business, technological and human trends collide. As new technologies arrive in the workplace, will people hesitate-or trust and embrace them?
Perhaps a response to a digital experience that is breeding hesitation, people are seeking simplicity and deeper connections, which we see as a movement towards Social rewilding . They want to engage with the world in meaningful ways, finding textural experiences that connect them with their environment and each other.
Don't hesitate. Read on.
Contents
Trend 1: Cost of hesitations
Trend 2: The parent trap
Trend 3: Impatience economy
Trend 4: The dignity of work
Trend 5: Social rewilding
Trend 1: Cost of hesitations
The innate trustworthiness of digital technology is under threat, and its added value in people's daily lives is being diluted by authenticity and trust issues. It's now incredibly easy to create all kinds of digital content, and a flood of scams is blurring the lines between what's real and what's deceptive, making it harder for people to tell them apart. This introduces hesitation into every interaction, which is disrupting people's online experiences.
Generative AI is a revelatory tool for honest enterprises and bad actors alike. It's ushering in a new era of confusion and concern, challenging people's trust in digital in deeply personal ways. As people weigh up the possibility of dialing down their dependence on the internet, trust should become a top priority. While the scams may occur on the platforms' channels, it is brands that suffer the consequences of hesitation.
What's going on
At the center of this trend is people's new hesitation reflex when doing anything online, and the resulting cost for anyone doing business there. For customers, the degradation of experience and the rising likelihood of being misled means they must constantly ask themselves, 'Is this real?' in multiple contexts, and on platforms they once trusted. If people become too weary, online shopping could take a hit and brands would suffer.
Some of the signals that shape this trend aren't new, but their escalation makes their inclusion here important, especially as increasing use of artificial intelligence has the potential to ramp things up very fast.
In the past year:
52% of people have seen fake news or articles
38.8% have seen fraudulent product reviews online
32.6% have experienced deep-fake attacks or scams for personal information and/or money
Accenture Life Trends survey, 2024
Suggested, recommended, sponsored, for you?
The online discovery experience has become chaotic and frustrating. The digital places that were once reliably on-point for finding products, services and information have become less effective. Three main factors are causing this degradation. First, the search experience has become cluttered with suggested posts and related queries when people just want the results they asked for. Second, people increasingly can't trust that what they find is real (more on that in a moment). Third, the apparent commercialization of every point in the online experience-driven by the need for a sustainable business model-means that almost half of people (48%) feel like shopping is being pushed on them whenever they go online. 1
In a world where technological progress seems so highly prized, these things should be getting better, not worse.
Search engine algorithms spawned an SEO industry that has subtly but indisputably changed the web by shaping word choices, page layouts and site maps to maximize visibility. Experts know how to make sure their content rises above genuine results that would be more helpful or relevant-and this tactic can now be supercharged by generative AI that can mass-produce content optimized against the algorithm's guidelines. 2
Consequently, search results are clogged up by low-quality content, seemingly unrelated suggested posts and product recommendation lists motivated by affiliate marketing kickbacks, pushing genuinely relevant results down the page. Fundamentally, search engines now often respond to a user's request for help by creating more work-and let's not forget that their original purpose was to make navigating the internet easier.
This degradation is pushing people to create new routes to discovery. Many have lost patience with search engines and are going direct to a trusted source or retailer instead. Others are turning to places like Reddit and similar platforms, where short-form content focused on shopping recommendations has become popular. 3,4
Wait, is this real?
The acceleration of generative AI content into all the places where people have traditionally discovered, socialized and shopped online is causing trust issues and fueling hesitation. Our Accenture Life Trends survey indicates 62% of respondents say trust is an important factor when choosing to engage with a brand (up from 56% last year). 5 If people are to be able to distinguish between legitimate and false content, the brands and creators sharing it might consider ways to signal authenticity and rebuild trust.
'Personally, I find fake pictures or videos on the internet very unacceptable. Even though the internet is a virtual environment, virtual does not mean fake.'
YK Zhang, 33, China
Is this information real?
Even content created with harmless intent is affecting people's ability to trust what they read online. Our survey found that 48.6% of people often or always question the authenticity of the news. 6 It's becoming harder to distinguish fact from fiction-to identify what has been written by a person with relevant credentials and what has been generated by a well-trained machine. Organizations producing content for digital channels are excited about the advancing ability of technology to help them create more of it at speed. But the critical question many are failing to ask is: Do people want it?
Further, generative AI models sometimes surface incorrect or misleading results known as hallucinations, because the models believe their own assertions. Causes include inadequate or biased training data, or incorrect assumptions made by the model. What should be the next generation of customer engagement tools are sometimes making unhelpful suggestions like putting glue on pizza. 7, 8 Of course, this example is easy to dismiss as absurd, but others are harder to accurately assess.
Is this product real?
Generative AI is vulnerable to exploitation for the purpose of misleading people, which is increasingly muddying the discovery experience. Computer-generated images often misrepresent the quality of a product and/or the detail of its features, sparking social media trends showing, 'What I ordered vs what I got.' 9
Is this brand real?
In advertising, bad actors are staging fake podcast endorsements or using deep-fake videos of celebrities including Viola Davis to promote dubious brands. 10, 11 One consumer commented on X, 'I keep getting [social media] ads for clothes that I would absolutely buy and then I google [sic] the company and they have no online presence beside a website. […] no way to tell if it's a legitimate company or an AI scam tailored to my exact taste in T-shirts.' 12
Is this website real?
People are also being stung by visiting sites that bear all the hallmarks of the result they want to see, when they are just fronts for something else. For example, customers trying to reduce their carbon footprint by buying local are increasingly receiving their order from the other side of the planet, shrouded in plastic packaging. Worse, some pay and share their personal details and receive nothing. 13 When the deception becomes clear, people are left feeling naïve, victimized and angry.
Worse still, a tactic called 'malvertising' uses online advertising as a route to attack people's computers, but it doesn't even require direct action from the user. Simply visiting a website hosting malvertising is enough to cause problems. 14
Is this review real?
Once a helpful shortcut to verifying credibility and quality, online reviews have also lost their dependability, with fake reviews becoming a pervasive issue-even more so now that they are quickly scalable through AI. In 2022, TripAdvisor identified 1.3 million fake reviews, and in 2021, Trustpilot removed 2.7 million. 15
Our survey found that 38% of those surveyed have seen fraudulent product reviews online in the past year, and 52.8% often or always question the authenticity of product reviews when they see them. 16
Is this picture real?
A Getty Images report revealed that people feel less favorably about brands that use AI-generated visuals. 17 The survey of over 7,500 people from 25 countries found that 90% of consumers want to know if an image is AI-generated, and 87% value image authenticity. Crucially, 76% say they find it increasingly difficult to tell the difference between real and AI-generated images, fueling skepticism.
Language is evolving to lend expression to the uncomfortable feeling of spotting generative AI content that feels inhuman. 'Slop' is the new 'spam'-a broad term that has gained traction to describe shoddy or unwanted AI-generated content in art, books, social media and search results. 18
The issue of trust throughout the online experience is multi-faceted and stems from a spectrum of motivations from well-intended and genuine to malicious and harmful.
Hyper-personalized harm
Fraudulent behavior and scams online aren't new, but combine them with generative AI and people whose aim is to commit crimes online now have a tool that makes it much easier. Releasing generative AI so that anyone can use it has created numerous unintended consequences, some of which are as serious as it gets-and it's happening before people feel the promised value of the technology.
The most tangible consequence is fraud perpetrated when people are tricked into sharing payment information in exchange for a nonexistent product or service. Less easily quantifiable is the psychological impact of new, non-financial types of fraud that are destroying people's trust in the online experience and making hesitation a reflex. Our survey found that in the past year, 32.6% of respondents have experienced deep-fake attacks or scams designed to steal their personal information and/or money. 19
In June 2024, Google DeepMind published research into tactics being exploited by bad actors to misuse generative AI's capabilities. 20, 21 Among those tactics affecting people directly are impersonation, altering people's appearance to change the story told by a photograph, and creating non-consensual intimate imagery using a person's likeness. They're also falsifying documents, using people's IP without permission and imitating or reproducing original work, brands or styles with the intention of presenting it as real. Listed out like this, it becomes abundantly clear that along with the good generative AI can do, it can also do significant damage in the wrong hands.
Deep-fake scams are proliferating worldwide, leading organizations to consider them a bigger threat than identity theft. 22 In these scams, callers use deep-fake audio to mimic the voice of a loved one in a dire situation and request financial help. The Asia-Pacific region saw a 1530% rise in deepfake cases from 2022 to 2023-the second largest rise in the world, behind North America. 23
Further, celebrities and citizens alike are increasingly discovering their images and voices are being used without their consent for nefarious purposes, risking harm to their mental well-being and their reputation. According to Britt Paris at Rutgers School of Communication and Information, with deep-fake technology, 'anybody can just put a face into this app and get an image of somebody… completely without clothes.' 24 Shockingly, this is happening to both adult and child victims.
For years, organizations have been asking people to prove who they are. Now, brands are on the hook to do the same.
In the absence of adequate action from those responsible for moderating this technology, people are increasingly skeptical of what they see online, leading to a risk that they hesitate to sign up, opt in or buy now. Search, social, commerce and consumption channels must tread carefully. Trust is easier broken than built, and the early signs are that consumers may already be seeking alternatives. Brands will need to be ready.
Is people's well-being simply considered the cost of progress? Digital interactions are now riddled with pitfalls and confusion, and inaction implies that the impact on people's lives doesn't matter. Brands' best route to enabling people to engage online without hesitation is to focus on reassurance and trust to serve their needs-not at the expense of strategic targets, but in addition to them. People need clear reasons to trust and engage with a brand online.
What's next
For a long time, the sheer convenience of the internet outweighed people's need for trust, but that equation is beginning to shift. Our survey found that 59.9% of people are questioning the authenticity of online content more than before. 25
If this trend continues, unchecked by legal or systematic intervention, people will likely start to abandon any platforms and brands they can't trust. Whether they reduce, change or stop certain behaviors entirely will be down to the individual, but it will affect discovery, sharing, shopping and socializing.
The most important move now is for every brand, platform, business and government to prioritize trust in channels and digital experiences. Leaders' goal should be to make it easy for people to trust in their brand, such that engaging with it is a hesitation-free choice.
Decontaminating the ecosystem
Design and marketing now face a huge challenge: How do they maintain a strong digital relationship with customers when the channels they use are becoming contaminated by slop? Communicating authenticity will mean revisiting or even redesigning channel strategies from scratch.
Brands must be honest about whether their channels have been polluted and whether customers have fallen out of love with, avoid or distrust them. This presents an opportunity for a more direct relationship with customers as they seek alternatives. Those that invest in becoming the trusted brand within a category could become its default choice.
We expect to see platforms investing in technologies to mitigate scams and harm, and to fumigate the slop that is destroying the customer experience. This move might be motivated by stronger policy enforcement and investment in trust and safety functions-which have recently fallen victim to efficiency drives within big tech. 26 It will be challenging for platforms due to the ease and low cost of AI-generated content, but prioritizing quality over quantity is necessary to maintain the internet as a valuable and authentic resource for everyone. Balancing profitability with authenticity is difficult but crucial.
'In the end, the customer is going to find out whether the product is real or fake. And if it's fake, then you've lost the customer forever. The whole point of business is to build that trust with their consumers.'
Azure'de, 38, US
The search for authenticity
When platforms fall short, brands in some industries may need to prioritize consumer safety to help people avoid falling victim to scams and abuse, and to help minimize the consequences if it happens. People need protection and systems to help them right wrongs, restore dignity and repair damage. One route to enhancing trust is to create digital signifiers of authenticity. For example, it's possible to use two-way QR codes and blockchain technologies to prove the authenticity of products and create transparency across the product's journey.
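To make the idea of a digital signifier more concrete, here is a minimal sketch in Python, assuming a hypothetical setup in which the QR code on a product resolves to a hash-chained journey record; every field name and stage below is illustrative rather than a reference to any specific platform or blockchain.

```python
# Hypothetical sketch: a tamper-evident product journey record, where each
# step's hash commits to the previous step, so editing any earlier step
# breaks verification. Not a production or blockchain implementation.
import hashlib
import json

def step_hash(step: dict, prev_hash: str) -> str:
    """Hash one journey step together with the previous step's hash."""
    payload = json.dumps(step, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_record(steps: list[dict]) -> list[dict]:
    """Chain the steps so each entry commits to everything before it."""
    record, prev = [], ""
    for step in steps:
        h = step_hash(step, prev)
        record.append({**step, "hash": h})
        prev = h
    return record

def verify_record(record: list[dict]) -> bool:
    """Recompute the chain; any edited step changes its hash and fails."""
    prev = ""
    for entry in record:
        step = {k: v for k, v in entry.items() if k != "hash"}
        if step_hash(step, prev) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Example: the record a product's QR code might resolve to (illustrative data).
journey = [
    {"stage": "manufactured", "location": "Porto", "date": "2024-11-02"},
    {"stage": "shipped", "location": "Rotterdam", "date": "2024-11-10"},
    {"stage": "retail", "location": "Berlin", "date": "2024-11-18"},
]
record = build_record(journey)
print(verify_record(record))          # True: the chain is intact
record[0]["location"] = "elsewhere"   # simulate tampering upstream
print(verify_record(record))          # False: the chain no longer verifies
```

In practice, a blockchain-backed provenance system distributes this chain across multiple parties so no single actor can quietly rewrite it, but the verification principle is the same.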
We may also see the introduction of symbols of trust, stamped onto content to indicate that it has not been created or manipulated using AI. Industries might organize new associations (or use existing ones) to manage and uphold trust standards. Technology itself may also help, with AI programmed to calculate probable reliability and an option to appeal against mistaken ratings.
For organizations dabbling with AI-generated content, it will be critical to monitor how that content is received by customers. The prevailing assumption is that people will adapt to altered images of people, objects and landscapes, but we see evidence indicating the opposite. 27 There are emerging risks to brand trust and perception based on the use of AI content, which must be carefully managed.
People feel it matters significantly if AI-generated images are shared by:
51% - their healthcare provider
50% - their regular bank
44% - their favorite technology brand
Accenture Life Trends survey, 2024
'What makes the most sense to me when interacting with a brand on the internet is the question of the trust I have in that brand.'
Daniel, 31, Brazil
Organizations should also align with their brand purpose when considering the use of generative AI. For instance, it's long been known that imagery has a powerful effect on the way people feel about themselves. Fashion and beauty brands may want to carefully consider the use of generative AI imagery and whether it will perpetuate unrealistic beauty standards.
Whether formalized or unwritten, we expect to see new rules of engagement around when it's acceptable to use AI. People are likely to be annoyed by AI-generated content, images or videos that mislead in relation to travel, that handle emotionally manipulative subjects, that illustrate or embellish real stories, or that depict physical products where the design has an emotional component.
Responsibly seeking consent
When it comes to training AI models using people's content, platforms will need to be explicit about their intent-and doing this via terms and conditions will not be enough. People must be afforded a fair opportunity to evaluate whether to allow the use of their personal data.
Creators, artists, media and individual people are already grappling with issues like consent, copyright and privacy. We expect to see legislation, regulation and protection increasing rapidly and unevenly in relation to AI use cases. At the time of writing, over 400 pieces of AI-related legislation exist-most notably the COPIED Act, which seeks to ensure consent and transparency. 28
Creators rely on platforms to distribute and monetize their content, but generative AI disrupts this process by summarizing that content, using it as training data and reducing visitor numbers. Their relationship is thereby undermined, as their work is exploited without proper compensation and their livelihood is impacted. Public opinion on slop is already being swayed by creatives with large audiences who feel AI threatens their work or their livelihood. 29
Software solutions are already emerging. The Cara app was created to give artists who oppose unethical AI a safe space to share images and network with peers. 30 Developed by the University of Chicago, Glaze adds a so-called 'cloak' to an image to thwart scraping attempts, and Nightshade distorts the image for the generative AI scraper. 31 Kudurru puts control in artists' hands by enabling them to block the scraping IP address and/or to return an image of their own choosing. 32
Brands would be well advised to take a careful look at their communication channels and drive trust in and through them. This presents an opportunity for a more direct relationship with customers as they seek alternatives. Those that invest in becoming the trusted brand within a category could become its new, hesitation-free choice.
The onus is now on brands to make sure that their online activity builds trust rather than adding to hesitation.
'Authentic' was Merriam-Webster's word of the year in 2023. 33 With the rise of artificial intelligence and its impact on deep-fake videos, actors' contracts, academic honesty and a vast number of other topics, the line between real and fake has become blurred. Authenticity and its growing importance to internet users is more than just something brands should aspire to be. It's something they must actively do as part of every interaction with their customers.
Being authentic and trustworthy is a win-win, because when customers can trust an organization, they will engage with it without hesitation.
Why this matters now
In a matter of months, what people thought they knew and trusted about how they consumed information, socialized and shopped online changed. Generative AI's ability to facilitate hyper-personalized harm and harassment, deep-fakes and scams has quickly become a serious issue. Every day, people are experiencing a pause-questioning the authenticity of the information they're reading, the products they're seeing, the websites they visit, and the calls, texts and emails they're receiving.
From sophisticated scams to engagement-farming slop, online content and experiences are becoming less trustworthy for consumers. If brands, organizations and platforms fail to prioritize authenticity and earn trust, people will stop engaging.
We recommend
Platforms should evolve and invest to modernize their content moderation value chains to address the known exponential influx of content that is harmful and deceitful. This is already being worked on at a rapid pace, and that will continue.
Brands need to establish and communicate clear methods for customers to verify their authenticity. Reassure customers by creating beacons of trust in communications, in commerce and baked into the product itself. This is a marketing, digital and security collaboration to ensure channels are trusted and customers are retained. Explore when and how to use AI-generated content, with moral and responsible AI front of mind. 34
Customers will need and demand support as more and more people fall victim to these sophisticated scams. Organizations will look at this in terms of costs, but they should think creatively about ways they can help their customers. Understand where customers need additional support in discovery, and provide trusted solutions, advice and communications to allay the concerns that make them hesitate.
If the volume of deep-fake scams continues, insurance companies may want to think through new types of products, similar to the identity theft products launched years ago. A new insurance product against deep-fake scams and abuse could offer coverage for financial losses, legal fees and emotional distress, and could provide comprehensive protection and support for victims of digital fraud and harassment.
Governments may need to ramp up consumer protections and place new compliance measures on organizations. These new consumer protections may require organizations to safeguard against scams, deep-fake harassment and abuse, ensuring product safety and transparency.
Trend 2: The parent trap
Most parents' instinct is to protect their children-to keep them healthy and safe from harm. They see their job as including raising children to be well-adjusted adults and perhaps passing on important cultural values. One of today's biggest parental challenges is how to help the next generation shape a safe, healthy relationship with digital technology.
Growing up today looks very different from parents' own experiences. Unfettered access to the internet and social media is causing harm, influencing extreme behaviors and forcing young people to live with unintended consequences. We're seeing an acceleration of top-down policies from governments and bottom-up action from parents and schools to establish guardrails and protect children. This will undoubtedly have major repercussions for organizations-and soon.
What's going on
Finding the balance between letting children learn by experience and safeguarding them against harm has always been an underlying challenge for parents. In some situations, potential threats are obvious from a distance, giving parents time to intervene. But unlike physical hazards, many dangers of smartphones and social media for young people approach undetected, until the consequences become clear.
There are strong ties between this and other trends in this 2025 set: adults are coming to terms with the less favorable ways that digital technology has affected their lives, and they're seeking both a cure for themselves and prevention for their children.
It's important to stress that smartphones and social media aren't all bad-there are demonstrable benefits for all users, including young people. Smartphones offer obvious convenience, a ubiquitous companion to education and the safety of location tracking, while social media offers a window into a diverse world that people simply wouldn't have seen 50 years ago. This breeds empathy, curiosity and connectedness in the next generation.
For young people who don't feel they fit in with their peers, social media is a place to explore and find a precious sense of belonging-especially for those from the LGBTIQ+ communities. 35 It's also a remarkable mouthpiece for parenting experts, enabling them to offer clarity and education on child development that helps today's parents. It's also a place for camaraderie-a daily reminder that others are very much in the same boat.
However, evidence of the negative impact of social media and smartphones is piling up, and while governments are pushing for protective measures, many parents and schools don't feel the results are coming fast enough, so they're mobilizing. And in the absence of granular controls, their approach is more extensive.
Social psychologist Jonathan Haidt's best-selling 2024 book, The Anxious Generation, has whipped up a visceral sense of urgency around young people's well-being. In it, he shares statistics that show significant rises in teen depression, mental illness among college students, anxiety among 18- to 25-year-olds and hospital admissions for self-harm and suicide among younger adolescents. 36
These concerns have surged since 2010-2012, which coincides with mass smartphone adoption, a sudden engagement spike on social media and business models that have increased screen time.
Haidt's book is acting as a catalyst. While these links between technology and mental health outcomes have been suggested for a while, they're now backed up by data and presented in a way that makes the harm feel more tangible and understandable.
Those aged 18-24 (56.5%) are more than twice as likely as those over 55 (23.3%) to agree that social media significantly impacts how they think about their own identity. 37
Vulnerabilities
Parents are justifiably concerned about mental and emotional harm from online bullying (from both friends and strangers), abuse, unrealistic beauty standards and age-inappropriate content that once seen, cannot be unseen.
'When there are screens, there's no communication.'
Marie, 44, France
Broadly speaking, impacts have manifested differently for girls and for boys. 38 For instance, a survey for the Children's Commissioner for England found that over half (51%) of 16- to 21-year-old girls had been sent or shown explicit content involving someone they know, compared to a third (33%) of boys. 39
A troubling new trend is also taking hold in high schools around the world, where girls are being cyberbullied with deep-fake nudes and pornography featuring their own image without their consent. Boys at one high school in New Jersey reportedly targeted more than 30 girls in this way before being discovered. 40 This behavior is making young victims feel shamed, silenced and frightened for their safety, and has tipped bullying into sexual abuse.
Meanwhile, teenage boys are targeted by 'sextortion' scammers on social media, driving some to take their own lives. In the United States, the FBI reported that over 12,600 victims-primarily young boys-were coerced into sharing explicit images and then extorted for money under the threat that those pictures would be released to their friends. 41
On social media, unrealistic standards of beauty driven by filters, cosmetic enhancements and deep-fakes are increasing symptoms of anxiety and depression in young women. This has reached new levels with the introduction of a Miss AI Beauty Pageant. As a representative from the Chilean Network for Women stated, 'It is incalculable how harmful the creation of 'model women' by machines can be. If beauty pageants with real women were already creating impossible expectations for us, where do invented women leave us?' 42
Extreme behaviors and beliefs
Parents are also concerned about the influences their children are exposed to online. Those who are persuaded into extreme beliefs or behaviors can be coerced into problematic or dangerous actions that could harm themselves or others.
Social media can be a confusing place, serving a constant flow of content to people who aren't always asking for it. Adults can find it overwhelming, and those with young, developing brains even more so. In a whirlwind of confusion, decisive voices provide something to hold on to. Unfortunately, some of the loudest voices promote toxic traits for shock purposes, which, once entertained, are perpetuated by algorithms. 43
For instance, misogyny isn't new, but social media is changing it. According to research by University College London and the University of Kent, after only five days of using a popular social media platform, there was a fourfold increase in misogynistic content being served as suggested content. The algorithm delivered extreme videos, often focused on anger and blame directed at women. 44 Research shows that Gen Z boys and young men are more likely than Baby Boomers to believe that feminism has done more harm than good. 45
'I think children spending a lot of time on digital screens is a bad thing. It makes them more isolated from the outside world. They become a bit detached from reality.'
Peter, 49, United Kingdom
There has also been a notable increase in avoidance of in-person interaction. In Japan, a form of extreme social withdrawal called Hikikomori has been a focus since the late-1990s, just as access to the internet became widespread and adolescent boys started retreating into solitude. 46, 47