Designing human-centred services in the age of AI: Our thoughts (so far)
5 minute read
AI is changing how people access digital services. From chatbots to generative search, these new intermediaries raise questions about trust, accessibility, and shifting user journeys. Across our teams, we've been sharing our thoughts on designing for equity, ensuring clear human touchpoints, and creating useful, trustworthy content in an age of AI-powered search.
You’ll likely have noticed that the term ‘disintermediation’ is becoming something of a hot topic at the moment. At Nexer, it’s also increasingly relevant to conversations we’re having with clients, and within our teams. Traditionally, disintermediation refers to removing intermediaries (the “middleman”) between a user and what they need. In the AI era, it’s more complex. Rather than simply cutting out steps, AI often introduces new intermediaries that sit between people and services.
AI proliferation feels increasingly like the new frontier of the web, with huge ramifications for how services are accessed and delivered. From generative search to conversational agents, users increasingly rely on AI to navigate the web, bypassing traditional websites and human touchpoints. This shift raises urgent questions about trust, access, and the design of digital services.
And it’s these questions we’ve been asking ourselves. Conversations across our research, service design, content, accessibility and optimisation teams have explored what AI disintermediation (and indeed, intermediation) means for our clients, our services, and the people who rely on well-designed services. It’s a starting point for deeper thinking, and we’re keen to hear other people’s views.
It goes without saying this is a HUGE topic, and the more we discuss it, the more we find to discuss. But here are a few of our initial thoughts.
What is AI disintermediation, and why does it matter?
In a recent piece for Third Sector magazine, Zoe Amar characterised the challenge of disintermediation in a charity context as:
“when AI cuts charities out of the journey between people seeking help and the information they need, disrupting how people get advice, interact with services and donate.”
While this is a good example of disintermediation, something we quickly identified in our discussions is that we often talk about disintermediation when what we’re really seeing is a new kind of intermediation: AI agents such as large language models and AI-powered search are inserting themselves between people and the services they’re trying to access.
These systems summarise, redirect, and sometimes even decide what users see, all on a totally personalised basis. For charities, public services, and advice providers, this shift can mean fewer direct visits, reduced donations, and less control over how crucial information is surfaced.
It’s not simply about automating search and results. Fundamentally, it’s about who gets left out when human touchpoints disappear and carefully crafted content is at risk of not being surfaced.
Impacts on mission and equity of access
AI can improve speed and reduce costs when implemented well, but it can also erode trust, empathy, and understanding. We need to ask some important questions when we consider its role in service delivery:
- what’s the “minimum viable human*” in your service journey?
- which steps must retain a person? What are the eligibility edge cases and safeguarding implications?
- whose needs get obscured when support becomes invisible or is skipped over by AI search? For users with low digital confidence or access needs, the consequences could be significant
* The smallest essential human involvement needed in a service journey to maintain trust, safety, and inclusion.
Designing for inclusion in an AI-first world
When AI handles first-line interactions, the risk of accessibility barriers increases. Escalation paths may not be immediately apparent to people, and poorly designed flows can create dead-end moments where people can’t progress or recover. This is a service design challenge. We need to design for these new seams between AI and human support, so journeys remain connected and inclusive.
As such, creating inclusive AI as part of a service means:
- crafting thoughtful content and clear wayfinding
- validating interaction patterns through inclusive usability testing
- guaranteeing a “get me to a human” promise, and communicating it clearly
- mapping escalation triggers and fallback routes so there’s no wrong door (see the sketch after this list)
- defining the ‘minimum viable human’ in your journey, where human judgment is essential for trust, safeguarding, or complex decisions
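To make the ‘escalation triggers and fallback routes’ point a little more concrete, here’s a minimal, hypothetical sketch of how a chat-style entry point might decide when to hand over to a person. The trigger phrases, thresholds, and routes are illustrative assumptions, not a reference implementation, and would need validating through the kind of inclusive usability testing described above.

```python
from dataclasses import dataclass

# Illustrative assumptions only: real trigger phrases, thresholds and routes
# would come from research and inclusive usability testing.
HUMAN_REQUEST_PHRASES = {"speak to a person", "talk to a human", "get me to a human"}
SAFEGUARDING_KEYWORDS = {"crisis", "unsafe", "emergency"}
CONFIDENCE_THRESHOLD = 0.6   # below this, treat the AI answer as unreliable
MAX_FAILED_TURNS = 2         # repeated failures should never become a dead end

@dataclass
class Turn:
    user_message: str
    ai_confidence: float   # hypothetical score from the AI component
    failed_turns: int = 0  # how many times the user has been unable to progress

def next_step(turn: Turn) -> str:
    """Decide whether to answer with AI, hand over to a person,
    or offer alternative routes (phone, email, in person)."""
    message = turn.user_message.lower()

    # The "get me to a human" promise: always honour an explicit request.
    if any(phrase in message for phrase in HUMAN_REQUEST_PHRASES):
        return "route_to_human"

    # Safeguarding triggers escalate immediately rather than retrying the AI.
    if any(keyword in message for keyword in SAFEGUARDING_KEYWORDS):
        return "route_to_human_priority"

    # Low confidence or repeated failed turns means a fallback, not another loop.
    if turn.ai_confidence < CONFIDENCE_THRESHOLD or turn.failed_turns >= MAX_FAILED_TURNS:
        return "offer_alternative_routes"

    return "answer_with_ai"
```

The specific rules matter less than the principle: the seams between AI and human support are designed, mapped, and tested deliberately rather than left to chance.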
Research into how people with access needs interact with AI is a vital step in understanding these risks. Service blueprinting and failure-state design can help ensure that when automation fails, users aren’t left stranded. Every entry point, whether that’s an AI chat, a search snippet, or a social post, should offer a clear next step and a visible human route.
Content and trust in an AI-mediated world
Social media and AI are now shaping the first steps of user journeys. Platforms like TikTok and Instagram are becoming primary entry points for discovery, while AI-powered search determines what content gets surfaced. This means content strategy can’t stop at the homepage; it needs to span social feeds, search snippets, and conversational interfaces.
For organisations, the priority must be consistency and trust. Modular content that works across channels, combined with clear trust signals such as author credentials, timestamps, and transparent sourcing, helps people recognise reliable information wherever they encounter it.
At the same time, Generative Engine Optimisation (GEO) is emerging as the term for optimising content for AI engines (alongside related terms such as Answer Engine Optimisation, or AEO). In practice, this means structuring your web content so AI can interpret it accurately: using clear categorisation and internal linking, writing plain-language summaries, and breaking long guidance into digestible sections. Regular content audits and Google’s E-E-A-T principles (Experience, Expertise, Authoritativeness, Trustworthiness) remain essential for visibility and credibility. These are not new principles, and they remain the foundation of a robust SEO strategy.
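As a rough illustration of what ‘structuring content so AI can interpret it’ and ‘clear trust signals’ might look like in machine-readable form, here’s a hedged sketch that generates schema.org-style JSON-LD for an advice article. The vocabulary (Article, author, datePublished, citation) is standard schema.org, but the organisation, author, and URLs are placeholders rather than a recommendation of exact fields.

```python
import json

# Hypothetical metadata for an advice article; names and URLs are placeholders.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to apply for help with energy costs",
    "description": "A plain-language summary of who can apply and how to do it.",
    "datePublished": "2025-01-15",
    "dateModified": "2025-06-02",  # visible timestamps are a trust signal
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        "jobTitle": "Benefits adviser",  # author credentials signal expertise
    },
    "publisher": {"@type": "Organization", "name": "Example Advice Charity"},
    "citation": ["https://example.org/official-guidance"],  # transparent sourcing
}

# Embedded in the page as JSON-LD, this lets people, search engines, and AI
# systems check who wrote the content, when, and what it draws on.
print(json.dumps(article_metadata, indent=2))
```

The same thinking applies to the visible content itself: a plain-language summary at the top, descriptive headings, and shorter sections give both AI systems and skim-readers something accurate to lift.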
Recent work with clients shows how this kind of optimisation can improve surfacing in AI-powered search, even as click-through rates decline. It’s also worth acknowledging that while these shifts may mean less overall traffic, those who do click are often more engaged and purposeful, which matters for funding and engagement strategies.
Risks, regulation and responsibility
Who owns the accuracy of AI-generated content? What happens when outdated or harmful advice is surfaced, and who is responsible?
We need clearer standards for:
- explainability and transparency
- human-in-the-loop oversight (see the sketch after this list)
- lawful data use and retention
- incident response and harm detection
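As one hedged example of what ‘human-in-the-loop oversight’ and ‘incident response’ could mean in practice, here’s a minimal sketch of a review gate that holds AI-drafted answers on sensitive topics, or with low confidence, for a human reviewer and logs every decision. The topic list, threshold, and routing labels are assumptions for illustration.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_oversight")

# Illustrative assumptions: a real policy would define these with legal,
# safeguarding, and subject-matter colleagues.
SENSITIVE_TOPICS = {"health", "benefits", "immigration", "debt"}
REVIEW_THRESHOLD = 0.8

@dataclass
class DraftAnswer:
    topic: str
    text: str
    confidence: float  # hypothetical score from the AI component

def release_or_hold(draft: DraftAnswer) -> str:
    """Decide whether an AI-drafted answer is released or held for human review,
    and log the decision so harmful or outdated advice can be traced later."""
    needs_review = draft.topic in SENSITIVE_TOPICS or draft.confidence < REVIEW_THRESHOLD

    # An audit trail supports incident response: decisions can be revisited
    # when something goes wrong.
    log.info("topic=%s confidence=%.2f held_for_review=%s",
             draft.topic, draft.confidence, needs_review)

    return "hold_for_human_review" if needs_review else "release"
```

Oversight like this only works if someone is resourced to act on the review queue, which is itself a governance question.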
From health advice to automated decision-making, the risks of automation misrepresenting information are growing. Trauma-informed content and careful testing may be hard to maintain when AI summarises or rewrites information. That has implications we can’t ignore.
Digital-first approaches emphasise the need for robust governance. When services pivot to app-first models, trust hinges on transparency and explainability. Larger service providers, such as the NHS, are setting benchmarks for secure, authoritative content, but smaller organisations may struggle to match these standards, creating uneven service landscapes and leaving them susceptible to the risks of AI intermediation.
Generational shifts and behavioural changes
Younger users’ browsing habits are changing fast. Under-25s are moving away from homepages and traditional search, towards social and video feeds and, increasingly, AI chat tools that provide quick, summarised answers.
The Reuters Institute Digital News Report 2025 demonstrates this shift, noting younger audiences’ growing use of social media and video for news and, for the first time, early adoption of AI chatbots to bypass search and websites. Ofcom’s latest ‘Adults’ Media Use and Attitudes 2025’ report similarly shows rising exposure to AI tools among UK users, including young people, even as overall trust in AI outputs has not increased year on year.
This shift isn’t only about where traffic goes. It’s also about how people evaluate information. Global surveys show ambivalence: younger cohorts are more familiar with and open to AI, but concerns about accuracy and bias persist, reinforcing the importance of building in trust signals and mechanisms that direct people to the information they’re trying to reach.
Together, these trends present a challenge to traditional service design patterns (e.g., content hubs, linear journeys, deep navigation) and risk widening the gap between users who benefit from AI-mediated shortcuts and those with low digital confidence, different preferences, or specific access needs.
What sparked this reflection for us? This thought-provoking LinkedIn post from Darius Pocha on recruiting GenAlpha students, how expectations are shifting, and what that means for the way organisations show up online was a timely reminder to interrogate these patterns in service delivery.
Excellent collaboration and leadership are already happening in this space
Wrapping up, and as a reason to be cheerful, there are already plenty of organisations doing meaningful work to explore these questions.
Manchester’s ‘People’s Panel for AI’ is a great example of democratic engagement in tech. Through roadshows and training, it empowers residents, including those at risk of digital exclusion, to shape AI-enabled services. This model shows how involving diverse voices early can build trust and prevent harm.
We’re also inspired by the work of Rachel Coldicutt and Careful Industries, who are leading critical conversations on responsible AI and digital ethics. Their focus on human-centred governance and risk frameworks for public services is shaping how organisations think about safety, transparency, and inclusion in AI deployments.
Alongside this, the charity Scope is conducting vital research into how people with disabilities interact with AI tools, through their ‘User behaviour and AI learning group’, which explores both opportunities and risks. This work will help identify where cognitive load, accessibility barriers, and trust gaps emerge, and how inclusive design principles can mitigate them. We’re keen to learn from these findings and integrate them into our own access practices.
AI is already reshaping how people find, trust, and use digital services. As designers, researchers, and content practitioners, our challenge is to ensure those changes don’t come at the expense of equity or empathy.
The path forward isn’t about resisting automation; it’s about working with people to shape it with care. In practice, this means testing and co-designing solutions, designing for transparency, building in human touchpoints, and keeping inclusion at the core of every interaction.
We’re continuing to explore these questions across our teams, and we’d love to hear how others are approaching them.
How are you adapting your services for an AI-mediated world?
Get in touch with us at: hello@nexerdigital.com