What We Learned from Hard Level Labelling

Or why a simple feature has made $1b+ in the games industry

At the end of 2014, teams across King were working hard to improve our games. We were adding new levels, improving performance, and looking for new ways to add content. A few of us kept wondering "why do so many people play these games?"

We sat in rooms with players and listened to their feedback. We asked questions. We dug through massive amounts of data and tested variations of features. The games were doing so well that it seemed stupid to ask "why aren't they doing even better?"

But there was a nagging feeling that we were at a local maximum. What was the path to the global maximum? We started asking more player experience questions. What do players pay for? Why do players pick this game when they pick up their phone? Do they wish things were faster or slower? Do they want the game harder or easier?

We wanted to know what mattered most to players.

We started the Experimentation Group at King in early 2015.

How could we test our theories? Our highest-grossing games had targets to hit. When they were off, the whole company was off. Most of our testing happened in those huge businesses, and the tests grew more and more risk-averse. There was no way to break new ground there.

Small games had little impact on the numbers. They could take chances the other games couldn't. Instead of shipping "proven features", they could test new frontiers. Three of those games joined the Experimentation Group.

Our mission was simple: find something with a huge impact on the business. The first thing we did was tell players which levels were hard.

It wasn't easy getting the feature out. Some people on the team thought that telling players a level was hard would make everyone leave the game. The first release told players the hard level was a "special challenge". It had no impact on metrics. So we added skulls, dark purple, and clear "Hard Level!" messaging.

We thought it might reduce churn. If players knew a level was hard, they might stick with it longer. It could affect willingness to pay on that level. But the best tests defy expectations. There was no "lower churn on hard levels" or "increase in conversion on hard levels". People didn't run screaming from the game. Engagement went up across every level.

We saw some amazing reviews. Someone complained "I'm lying in bed with my partner and they have hard levels in the game. Why don't I have those?" Hard levels didn't scare players off. They wanted them. And those two people lying in bed had the same levels. The only difference was the labelling.

From this first test three things stuck with us:

1. Find what you think matters the most to your players and question it. That's the biggest opportunity.

2. State your expectations before each test. Write them down. Being wrong is the quickest path to breakthroughs. If you don't document your pre-test thinking, teams will see what they want to see in test outcomes.

3. Maximise clarity for players. Cut anything extra. Clear up the wording. Make sure the visuals are clear. Optimise for player impact. Messy UI leads to low interaction and no result. This is how most tests die. Even if you're scared of the impact, nothing is worse than a ±0 impact. You've wasted your time.

Be straightforward and clear. You can find out what players want. You just have to be brave enough to test your assumptions.

If you want to learn more about our experimentation framework get in touch via contact@experimentation.group or using the “Book A Call” button above.
