
What we learned building our bot


When a new technology emerges, there’s an initial trigger.

Expectations inflate to a peak, the masses react and become disillusioned, and then the trend slowly climbs through a slope of enlightenment and finally reaches a plateau of productivity. This is known as the Gartner Hype Cycle.

[Image: the Gartner Hype Cycle]

If we view bots – the biggest trend in software today – through that lens, they would fit squarely in the trough of disillusionment.

Early attempts at chatbots have fallen flat in their execution, mostly because they rely too much on natural language processing or A.I. capabilities that simply don’t yet exist. Despite this, we’re bullish on bots, but only those that are built to solve a very specific problem.

No matter what we’re building, we always start from first principles. That means digging into the roots of the problem to understand what it is and why we’re solving it. When debating whether or not a bot was for us, we discovered there are some simple tasks that don’t make sense for humans to do – and bots could be helpful in these cases.

Let’s say you’re trying to chat with a business, but it’s the middle of the night. No one is there to actually reply to you. In a situation like this, most messengers would display their own version of a closed sign. Something like, “Sorry, we’re not here yet. Get back to us in the morning and someone will reply to you”. What if a bot stepped in to set expectations for the user while the team is away, and collected their contact information? Down the line, a bot could suggest help articles using machine learning or ask users to rate a conversation when it comes to an end.

It took us a year and a half to build our own bot, Operator, and in doing so we learned five important lessons, all of which you’ll want to consider if you plan to do the same.

1. Some people just don’t want to talk to bots

In our earliest days of bot building, a person would start a conversation in our messenger, and we’d have the bot respond with a simple reply, “Hi, I’m Interbot, Intercom’s digital assistant. The team will reply here as soon as they can.”

When the bot introduces itself, people start to form expectations.

We didn’t want to pretend that this was a human. We wanted to be extremely clear and polite, and let the bot introduce itself. What could go wrong?

When we tested the concept, users hated it. It turns out that when the bot introduces itself, people start to form expectations. Some believe it’s a fully fledged chatbot that will replace real human support, and some just don’t want to talk to robots.

We tried multiple versions of this introduction, but what actually worked best was no greeting at all. In this case, when the person starts a conversation, the bot gets straight to the point and says something like, “The team usually responds in under two hours this time of day. Check back here later, or enter your email below.” No introduction required, and nothing is said that isn’t related to the human’s task.
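As a rough sketch, that greeting-free first reply could be assembled from a per-hour response-time estimate. Everything here – the function, the lookup table, the numbers – is illustrative, not Intercom’s actual implementation:

```python
# Hypothetical sketch: build a greeting-free first reply from a
# per-hour response-time estimate. Names and numbers are illustrative.
MEDIAN_RESPONSE_MINUTES = {9: 45, 12: 90, 15: 120, 22: 480}  # sample data

def first_reply(hour: int) -> str:
    minutes = MEDIAN_RESPONSE_MINUTES.get(hour, 120)
    if minutes <= 60:
        eta = "in under an hour"
    else:
        hours = -(-minutes // 60)  # round up to whole hours
        eta = f"in under {hours} hours"
    # No introduction, and nothing unrelated to the user's task.
    return (f"The team usually responds {eta} this time of day. "
            "Check back here later, or enter your email below.")
```

The point of the sketch is what’s absent: no name, no “Hi, I’m a bot”, just the information the user needs to decide whether to wait or leave an email.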

2. Your bot probably doesn’t need a character

As our exploration continued, we began to ask questions around the bot’s character. Does it need a personality? Should it be male or female?

In some cases, when the product is a bot itself, it’s fine to have some character. But if people are really expecting to have live chat with humans, any kind of personality is a distraction. For example, look at the Quartz News App below. It delivers your news in a conversational UI, and it doesn’t have any personality. You’re just reading news in conversation.


You can react with emojis and you can see GIFs. It looks like a chat, but there is no avatar, no character behind it and it has a straightforward tone of voice. You’re just reading news.

The job for our bot is quite similar. It should be an assistant to the conversation, one that’s as simple and as invisible as possible, rather than a participant. That means it should be gender neutral and essentially introverted. It never draws attention to itself, never refers to itself as “I”, and is never the subject of a message. No character necessary.

3. Words actually matter

In the world of conversational UI, it should come as no surprise that words really can change a user’s assumptions and expectations. Below is one of the first messages we tested. It’s very simple. Can you spot anything wrong with it?

What’s wrong is the word “soon”. People have very different expectations of what “soon” means. In a live chat scenario, some people think that “soon” means right now, immediately. Others believe that “soon” is closer to never.

Striving for clarity, we changed the phrasing and tried, “Brian will reply as soon as he can”. But using the word “he” or “she” caused another problem. Gender isn’t stored in the system, so gendered pronouns would not be appropriate.

To become gender neutral, we tried using the word “they”, but this started to become impersonal. It’s not how a business talks. The successful phrasing we landed on was straight to the point, and spoke directly to the end user’s action: “Don’t miss Brian’s reply. Get notified by email.”

4. Avoid the “bot jail”

Bot jail is the conversational UI equivalent to voice jail. Imagine the last time you called the bank over the phone. You likely experienced something like this:

“For balances, statements, payments and day-to-day banking queries, press 1. To talk about new accounts, or services, press 2. To set up, or activate telephone banking, press 3.”

By the time the message is complete, do you remember what option 2 was? If you press 1, there’s another set of choices, and it’s easy to get lost. You’re just trying to navigate this labyrinth, escape this voice jail and finally talk to a human.

Bots are often no better. They present you with some options, in this case with buttons instead of voice commands, and you can’t seem to get to a human, much less resolve your problem. In most cases you’re only presented with very simple reply options, like “Thanks”, “Okay” or “Got it”. You’re simply reading text and occasionally typing something to imitate a conversation.

Another example of this is Google Assistant. Take a look below to see what happens when you try to get restaurant recommendations.


We try to avoid this type of UI as much as possible. We still use short replies like the example below, but only for a single question.

We always give users the option to write a reply too. This is like an escape hatch. If you don’t want to navigate these choices you can just write a reply, and if we don’t understand it, we’ll just fall back to human support. It gets you out of jail.
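A minimal sketch of that escape hatch might look like the following. The names and canned responses are hypothetical, but the shape is the point: quick replies the bot understands get handled, and anything else falls straight through to a person:

```python
# Hypothetical sketch of the escape hatch: quick-reply buttons cover
# one known question, but free text always falls back to a human.
QUICK_REPLIES = {"yes": "Great, marking this as resolved.",
                 "no": "Okay, what else can we help with?"}

def route(user_input: str) -> str:
    text = user_input.strip().lower()
    if text in QUICK_REPLIES:
        return QUICK_REPLIES[text]   # the bot understands this reply
    return "HANDOFF_TO_HUMAN"        # anything else escapes to a person
```

Notice there is no attempt to parse the free text at all – the fallback is unconditional, which is exactly what keeps users out of jail.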

5. Bots should have manners

We found that it wasn’t just the content of our bot that created negative user feelings, but its behaviour as well. As we know from Isaac Asimov’s “Three Laws of Robotics”: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

We reframed that concept through the lens of our bot, and created a set of principles that we call manners. They are:

  • Never interrupt when a human is typing.
  • If the user doesn’t engage with the initial interaction, stop the conversation. Don’t try to re-engage them with other messages.
  • If the bot wants to perform multiple tasks, they need to be integrated into a coherent experience.

To make this work, we built a special manners engine to make sure the bot is always on its best behaviour.
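As a rough illustration, the first two manners above could be enforced as simple send-time checks. The class and attribute names here are hypothetical, not the engine Intercom shipped:

```python
# Illustrative "manners" checks: the bot asks permission before every
# send. All names are hypothetical.
class MannersEngine:
    def __init__(self):
        self.user_typing = False   # is the human mid-message?
        self.engaged = True        # did they respond to the first prompt?

    def may_send(self) -> bool:
        if self.user_typing:
            return False  # never interrupt when a human is typing
        if not self.engaged:
            return False  # don't try to re-engage with more messages
        return True
```

Gating every outgoing message through one check like this is what turns a list of principles into behaviour the bot can’t accidentally violate.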

So, after a year and a half of trial and error, where did we land after all of this? Our new bot is Operator, and it excels in those aforementioned simple interactions. Like collecting contact information from users…

[Video: Operator Lead]

Asking users to rate a conversation once it’s finished…

[Video: Operator Rating]
And suggesting contextually relevant help content…

[Video: Operator Smart Suggestions]
So, should your business build a bot? Whatever your product, make sure that you understand the problem you are trying to solve, and why you’re solving it. That means building the exact solution to the problem at hand. So in that sense, the answer is no, don’t build a bot. Build what makes sense for your product.

This post is based on a talk I gave. You can catch a video of the talk here.