Eight tips for service design with expert users (who don’t exist)


Here are some thoughts on researching and designing a service for expert users. These relate specifically to designing a totally new service: one for which no users yet exist. 

Usually when designing a service there’s a bank of knowledge of an existing system to draw from. Alternatively, there might be users of an existing service that we can iterate design ideas with. We didn’t have either of these, so we needed to think differently about how we could get user information, and how we’d design and test the service we were making. 

These are my top tips to help you research and design for experts, especially when they don’t actually exist yet.

Who were the users and why didn’t they exist?

I was the UX researcher (and service designer) in an agile development team. By ‘expert users’, I mean we were designing for users who would be repeatedly using the same system over the course of a working day. They would process information for multiple end customers, who would call in by telephone. On average each user would be processing 50 – 100 customer calls each day. 

The users didn’t exist because the service was part of a new giant call centre that was in the process of being set up. We needed to have the service in place for when the staff began their new jobs and the call centre was open, so we started designing and building before the staff had been recruited. 

What was the service you were making?

The service was designed to let users take payments from customers. It replaced an old and cumbersome offline system, although it wasn’t a direct copy of the old one; it was updated to meet modern business requirements. See my previous blog post about working with Business Analysts (and Product Owners and Service Managers) to determine these business requirements.

Each customer call might require the user to perform a range of actions on the customer’s account. There was no single linear process that each user would be expected to perform.

So that’s a broad summary of what we were making. Here are eight research and design tips we discovered through the process: 

Tip 1: Expect user requirements to change

We spent time with key stakeholders of different services and delivery teams, all of whom would have a hand in the service we were creating. We ran workshops to start to pin down some of the most concrete requirements. These were items that were more likely to stay constant over the course of the development. For example:

  • What should this service do for the organisation?
    • What outputs does it need to provide?
  • Are there KPIs the service should provide data on?
  • What other stakeholders do we need to speak to?
  • What are the big blocks of the service that need to talk to each other?

These were the most concrete items at this point. We didn’t know what the service would look like from the user’s perspective, but we knew (roughly) what it needed to do. So we started to interview and meet with these stakeholders. It was like painting in big brush strokes to start with. 

From this point, we started to create strawman user journey maps and work with the Solutions Architects to form service blueprints. 

Eventually, these strawman service blueprints started to crystallise, so that in each workshop stakeholders adjusted them less and less. Gradually, the requirements could start to be signed off. But even after multiple discussions, these requirements continued to shift as one team discovered they needed a part of the service to work differently, or that they wouldn’t be ready to provide certain data in time, which knocked on to other teams (including us). 

We took a pragmatic approach, and acknowledged that while the skeleton of the service was solid, some of the ‘fleshy bits’ of the service were going to change. 

However, in parallel to this service design with stakeholders from the organisation, we were also starting to bring in user data, to give us evidence about which user journeys needed to take a particular form, which couldn’t change, and which had to. This helped us push back when requirements affected the user journey negatively.

Given that we didn’t have actual users yet, here’s how we went about getting that user data:

Tip 2: Requirements gathering: think Venn diagram

We knew we didn’t have any users from whom we could gather requirements. But we also knew there were some areas or ‘types’ of workers who did similar jobs. 

  • Some of these were in the same type of business, but didn’t handle telephone calls. 
  • Some of them handled payments, but didn’t work with computers. 
  • Some of them were experts at taking customer calls, and worked with computers to process the data. 

Within these three areas was the sweet spot of the type of user we were building for. Here’s a sophisticated diagram (which I’ve been told might look a little rude) to explain this:

Venn diagram showing the three tangential user types and the sweet spot between them

We spent time observing and side-sitting with representatives of these different types of user (we called them ‘tangential’ users), collating user needs and pain points of the existing services they performed. We built these pain points into an as-is customer journey map, so we knew what we needed to NOT create in any new service. We also tagged the needs of each of these different types, so we could refer back to them in future iterations. 

These tangential interviews allowed us to start to sketch in what the core use cases would be for our imaginary users, and to start to define the ‘to be’ user journeys. 

Tip 3: Don’t give experts what they ask for (necessarily)

Side-sitting with experts in their tangential fields (often people who’d done the same job for over 20 years) was eye opening. They had clear opinions of what worked and what didn’t work for them. They would often love to have a screen full of multiple complicated options and terminology in front of them, and they seemed to be the only people who could understand them. They were true experts – and they were proud of their mastery of their current system. 

But we weren’t just building something for long-term experts to perform every conceivable task equally often. If we’d built a new service based on only their requests, we would have made something that would have required a lot of training for new staff to use. Yes, it would have been super slick and quick when they were up to speed, but it would not have been the simplest and easiest to use. It also wouldn’t have been built around the most frequent tasks they needed to do. 

We were starting to gather data from our tangential users on the frequency of certain types of tasks. So we took their feedback and used it in the design, but we didn’t give them back exactly what they asked for: we gave them a product that met business goals and surfaced the most common tasks most readily. We also avoided current pain points. Finally, we provided shortcuts for more complex, infrequent tasks.

Tip 4: Heuristics can be all you have 

Even when we had a good idea of the requirements, the user journeys and the needs, we still had to start with a blank sheet of paper when designing. At that point, all we had were heuristics of design for expert users. I was surprised that there wasn’t more advice available on the web, but a few articles really helped:

“It is our job to remove real clutter—any tools or data not needed right now—but it is not our jobs to hide what experienced users need just to make ourselves feel better when we look at their screens.”

Bruce Tognazzini – https://asktog.com/atc/the-third-user/
We were working in a design environment constrained by design templates that support one quick transaction, e.g. “As a [person] I want to [buy something] so that [I can be done as quickly as possible and never come back to this site]”. An article by Laurian Vega gave us added confidence that we could step away from traditional ‘transactional’ design templates, and design for the multiple returning use cases that our users would perform every day. 

So, thanks to all those nice people. 🙂

Particular heuristics that worked for us were:

  1. Design to reduce mouse clicks (although this is not the be-all-and-end-all)
  2. Assume experts won’t only use a mouse, and will tab to navigate around the screen 
  3. Design to reduce the time it takes for a user to do a task
  4. Make sure the most important information is seen first (i.e. understand what the most important information is that the user should see and design for that)
  5. Design it to work within their wider environment – consider interruptions, what happens if a call drops, dual screens, and how information will be transferred between systems and screens by the user. 
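Heuristics 1–3 can be roughed out numerically before any usability testing, using published Keystroke-Level Model (KLM) operator timings to compare the interaction cost of candidate flows. The sketch below is illustrative only: the two flows are invented examples, not the real service.

```python
# Back-of-envelope KLM-style comparison of two candidate flows.
# Operator timings (seconds) are the standard published KLM estimates;
# the flow definitions below are made-up illustrations.

KLM_SECONDS = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point the mouse at a target
    "B": 0.10,  # click a mouse button
    "H": 0.40,  # move a hand between keyboard and mouse
    "M": 1.35,  # mental preparation for a step
}

def estimate_seconds(operators: str) -> float:
    """Sum the KLM timing for each operator in the sequence."""
    return round(sum(KLM_SECONDS[op] for op in operators), 2)

# Mouse-driven flow: think, then for each of three fields grab the
# mouse, point, click, return to the keyboard and type 10 characters.
mouse_flow = "M" + ("HPBH" + "K" * 10) * 3

# Keyboard-first flow: think, then tab between the same three fields.
keyboard_flow = "M" + "K" * 10 + "K" + "K" * 10 + "K" + "K" * 10

print(estimate_seconds(mouse_flow))     # mouse-heavy version
print(estimate_seconds(keyboard_flow))  # tab-to-navigate version
```

Estimates like these are crude, but they gave us a quick way to justify keyboard-first designs (heuristic 2) in terms of time saved (heuristic 3) before we had any real users to measure.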

Tip 5: Embed a tangential expert in your team

As the user requirements came together, we ran design sprints to start to pull together screens and journey details. But design sprints were difficult to run, because everyone was busy and it was almost impossible to get everyone in a room for enough time. We were lucky: one of the team had previously worked in one of the tangential user roles. So while we used the design sprints to carve away large lumps of stone from our service sculpture and form its broad shape, we could also get quick feedback from that team member on a rolling basis, to sand down some of the rough edges. This then fed back into regular large team catch-ups. 

Tip 6: Be aware of your biases (as much as possible)

It was hard not to revert to making a traditional one-linear-journey design for each use case, rather than the ‘hub and spoke’ approach that the expert users needed. We were still thinking of each use case as a single long route, just because we’d spent years working on journeys like that, or using them. But this was wrong, and having the embedded tangential user in the team helped to remind us of this each time we looked like heading down that route. 

Tip 7: Identify and optimise the subroutines first

The service we built allows users to perform a series of tasks for the customer when they call. But each of these tasks contains some sub-tasks, and those sub-tasks form little units of page sequences that need to be shown each time. 

For example, if someone calls and wants to pay for something, the user has to search for that item in the system, select it, and confirm it. Then it gets added to the customer’s account. That’s subroutine one, and it’s one of the most commonly used sequences of screens. 

Then the user needs to take the payment, so there are screens that allow them to enter the amount the customer wants to pay, confirm it, and take payment. This is subroutine two. 

Within subroutine two there’s also a mini subroutine: if a customer has a discount on the thing they’re paying for, the user can branch off, apply it, and then come back to the same place. This could be called subroutine 2a. 

When we had these subroutines figured out, we concentrated on making each as quick and efficient as possible. Once we felt they worked, we stitched them back into one system, and worked out how to trigger each one from a call-to-action in the central ‘dashboard’ of the service. In the end, we realised we only needed two core functions on the dashboard to meet most of the user needs, which helped keep the dashboard nice and tidy (and easier to use). 
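The ‘hub and spoke’ structure above can be sketched in code. Everything here is hypothetical (the step names, the dashboard entry, the discount branch), but it shows the shape: each subroutine is a self-contained sequence that can be optimised on its own, subroutine 2a branches off and rejoins subroutine two, and the dashboard exposes only a couple of calls-to-action.

```python
# Hypothetical sketch of subroutines as self-contained screen sequences,
# triggered from a central dashboard 'hub'. Step names are invented.

def find_and_confirm_item(journey):
    """Subroutine 1: locate the item the customer wants to pay for."""
    journey += ["search item", "select item", "confirm item"]
    return journey

def take_payment(journey, has_discount=False):
    """Subroutine 2, with an optional branch into subroutine 2a."""
    journey += ["enter amount"]
    if has_discount:
        journey += ["apply discount"]  # subroutine 2a: branch off...
    journey += ["confirm amount", "take payment"]  # ...and rejoin here
    return journey

# The dashboard is the hub: each call-to-action stitches the relevant
# subroutines together into one spoke.
DASHBOARD = {
    "pay for item": lambda j, **kw: take_payment(find_and_confirm_item(j), **kw),
}

journey = DASHBOARD["pay for item"](["dashboard"], has_discount=True)
print(journey)
```

Treating each subroutine as its own unit is what let us test and tune them independently before stitching them back into the whole service.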

Tip 8: Evidence is everything

We were almost designing blind to start with, based on tangential users, opinions of what works best, heuristics, experience from working in other domains, and a bit of guesswork. 

By the time we had some paper prototypes ready, the call centre staff were nearly in place, and some of the first recruits were already being trained on the call centre processes. We were lucky, as we could start to test our prototypes with them. If they hadn’t been available, we would have had to go back to our tangential experts for their feedback. 

Paper prototyping and usability testing was where the real solid feedback came in and helped steer us towards the right first-release designs. We ran a series of iterative tests over weeks, to validate our designs. 

However, because we were working with expert users we ran the tests in particular ways, to maximise ecological validity. For example: 

  • We tested in the environment the users would be working in. 
    • A very busy office. Plenty of noise and distractions
    • Multiple monitors
  • We familiarised new users with the system, giving them an introduction to the system so that they weren’t using it ‘cold’. We tried to replicate the fact that they would be familiar with it. 
  • We had a pool of users who had tested it before, so they were already familiar with it each time we arrived with a new iteration. Again, trying to replicate how they would use it for real. 
  • We gave users many repetitive tasks to do with the service
  • We asked them to process a certain number of transactions as comfortably as they could, and noted:
    • Time taken to process x items (of different types)
    • ‘Mistakes’ made (that is, when they didn’t do something we expected them to)
  • We videoed them working with the service, so that we could report back to stakeholders with compelling evidence if anything needed to be changed. 
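For the timing and ‘mistake’ measures above, even a tiny script can turn session notes into evidence that stakeholders can compare across iterations. This is a minimal sketch with invented data; the field names and numbers are assumptions, not our real results.

```python
# Summarise hypothetical usability-test sessions per design iteration:
# mean time per transaction and total 'mistakes' (deviations from the
# expected path). All data below is invented for illustration.

from statistics import mean

sessions = [
    {"iteration": 1, "task": "take payment", "seconds": 92, "mistakes": 3},
    {"iteration": 1, "task": "take payment", "seconds": 81, "mistakes": 2},
    {"iteration": 2, "task": "take payment", "seconds": 64, "mistakes": 1},
    {"iteration": 2, "task": "take payment", "seconds": 70, "mistakes": 0},
]

def summarise(sessions):
    """Group sessions by iteration and compute summary metrics."""
    by_iteration = {}
    for s in sessions:
        by_iteration.setdefault(s["iteration"], []).append(s)
    return {
        i: {
            "mean_seconds": mean(s["seconds"] for s in group),
            "total_mistakes": sum(s["mistakes"] for s in group),
        }
        for i, group in by_iteration.items()
    }

print(summarise(sessions))
```

A table like this, alongside the video clips, is the kind of compelling evidence that persuaded the team when designs needed to change.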

This evidence became crucial in persuading the team that we needed to make changes to the designs. However, we were pleasantly surprised that the end user testing worked pretty well. Hopefully this is testament to the processes we followed to get to that stage. 


Overall, we created a service that did what the organisation wanted it to, and which met the needs of the experts, who didn’t exist at the start of the process. 

If there’s one take-away, it’s to think around the edges of the problem. If you can’t get the evidence you need directly, work out what you DO know to start with, and build on that with tangential evidence. Start from the most solid information you can find, and gradually pull the pieces together until you end up with something more robust. 

I couldn’t find much information online when we started this process, so I hope this helps fill a gap for anyone else also building a service for experts (who don’t exist).

How to make UX and business analysis (BA) work together

Until recently, I’d never worked with a business analyst (BA) as part of a delivery team. I’d grown up in an agency world, where a brief would appear, the product was already scoped (to an extent) and the need for UX was implicit in the brief.

In my last few contracting roles, when I arrived in the agile teams, there were BAs already in place, along with solutions architects, product owners, service managers, tech leads and so on. It took some time to understand what each person actually did. Even though the job titles were roughly the same each time I joined a new team, the actual role each person performed was often slightly different, depending on that person’s interpretation of it, as well as on what needed doing in the team.

For example, in the first team I worked in there had been a lack of a UX researcher for a while, and there was a need for formative research to understand user needs. BAs were already in place, so they’d taken it upon themselves to gather the information about the current processes. This was all cool. I picked things up when I arrived in the team, but it left me confused about the difference between BAs and UX, especially in the formative period of developing a new product.

When I searched online to try and get some definition of a BA’s role, and how to distinguish it from UX, I didn’t find very much. But over the next 18 months I noticed ways in which there are clear distinctions, and so I thought I’d share them here. 

Nothing is set in stone

First off, a disclaimer: there are no clear role distinctions set in stone; different people interpret the roles differently. But these are my impressions, from the people I’ve worked with. 

The role of a BA seems to be flexible. However, generally, there are themes of behaviours that form core activities, such as:

  1. Gathering business requirements for a service
  • That is, understanding what the company making the service or product needs it to do in order for it to be a success.
    • Talking to services that input into the product to help define their requirements too. 
  2. Liaising between the developers and the PO
    • Backlog refinement with developers means the BA can report back to the PO when features can and can’t be squeezed into a sprint.
  3. Acting like proxy product owners (PO)
  • Taking the vision of the end product and making smaller executive decisions on what is feasible given time and resource constraints, before working with the PO to confirm these. 

I’ve also had people suggest that BAs focus on quantitative data, but I would disagree, and say that’s just as pertinent to a service designer. It depends what the quantitative data is measuring. 

Unique BA roles as I understand them

The BAs I’ve worked with have all been great at gathering requirements for a service, to help define what the business needs it to be (point 1, above): for example, what is going to make this product stand out in the market, and what market intelligence the business needs it to produce in order to add value.

They’ve also been good at point 2: being the bridge between product owners, developers and tech leads to define what is in scope for MVP, iteration 1, etc. 

And they’re always willing to step in and take that role in number three, although I suspect that might not be a true BA role, and is more a sort of ‘gap filling’ when time and resources are tight. 

However, these three areas are also areas that a UX or service designer could (and possibly should) want to be across too. For me, the main area of overlap between UX and BA is the first one: gathering business/user requirements. This is also the one where I had an example of the different perspective BAs have, compared to UXers. 

When BA thought it was UX, but actually was BA 

When I joined one team that hadn’t had a UXer for a while, the BAs had filled in to gather the initial product and service requirements. They’d reported back to the team that their research had also captured the user requirements for the service.

The product vision was born. The service was outlined. A UX designer was brought in, and they started to prototype up the screens based on the flowcharts from the BAs. All seemed good and progress was swift.

However, it became clear after I joined that the requirements were not user-focused. Instead, they focused on an object as it worked its way through the system. 

We were making a system to process financial transactions. It was to be a new service, replacing an existing, paper-heavy, laborious process. The initial research had been done with the original, laborious system. Yet, rather than focus on the actions and requirements the workers had to perform in order to process the transactions, the BA research was fundamentally based around flow charts showing how a transaction moved from state to state as it passed through the system. 

For this reason the transaction was then seen by the product team as the sort of ‘fundamental unit’ of the system, and the low-level design was created based on this premise. This caused a fundamental issue as the service developed. 

In addition, there were no current pain points documented. The current process was long-winded and tedious, and rife with errors from copying and pasting information from one system to another, or losing slips of paper. None of this made it into the ‘As-is’ journey, so that it could be ironed out of the ‘To-be’.

So what?

When I joined the team as a UX researcher, I spent my first few visits observing the processes that had “already been documented”. It was clear that the emphasis was on the object and not on the user. One specific issue reared its head, which caused the system to need redevelopment:

The system was built around the transaction, specifically a payment, and in the ‘back-end’ of the system the payment was the fundamental unit. To that unit were attached other features, like a reference number, a status, and various other attributes. 

However, occasionally the payer could get a rebate on their payment if they were on a low income. If they had a full rebate and didn’t have to pay anything, that payment would be wiped out, so the fundamental unit of the system (the payment), on which the other attributes were hung, would cease to exist. If that happened, we argued, on what would all those attributes hang? The tech lead insisted users didn’t see rebates as a payment, but we provided evidence that users considered them just that. 

UX had noticed since the first contextual visit that the fundamental unit was the item the person was paying for; this was what the workers referenced as they worked. If UX had been involved from the start, with a view from the user’s perspective, this issue would have been avoided. 

I don’t want to say UX is doing anything better than BA. This is simply an example of where business analysis took one perspective (on an object), but UX took another (on the user).


This example helped me to understand that BA and UX are not the same thing, but are complementary: two sides of the same coin. 

Both of these elements need to be in the mix when creating a product, especially a new one. BA and UX need to work closely together: formally, through a shared service design vision, and informally, by plotting the requirements of back-end and front-end systems together daily. Recognising these differences helps.

It seems like a simple Venn diagram:

  • UX brings the user needs; the observations of current pain points for users.
  • BAs bring the business requirements that the end product needs to meet.
  • All work together (always, but specifically in design sprints)

In practice, there’s always going to be muddiness between the two roles, but knowing them better, and what they do, will make for better products.