## Introduction

We are currently developing a construction and simulation game, Student Factory. As you may have surmised from the title, the setting is a university, and the objective is simple: create the next “batch” of well-packaged, well-polished, bug-free students, ready to embark on life’s journey.

As in every good simulation game, the AI that governs our brilliant students, and glorious (or glorified, your choice) educators, is important. The AI in this type of game is one of the primary drivers of the experience: it is what puts ideas into our students’ heads about what mischief, sorry, I meant homework, to do next, and how to plan and accomplish their ambitious schemes. Irrespective of the context, this is a task easier said than done, hence the plethora of AI frameworks. See, for example, the excellent AI Pro series of books, which covers topics ranging from what game AI is to the most intricate and elaborate algorithms based on machine learning and scheduling optimization. This diversity of available methods has some implications. One is that there is no “silver bullet” for game AI, i.e. no single AI to rule them all. Another is that it is easy to get lost in the details, and there are plenty of details.

What I cover in this post, and the ones to follow, is a Utility Theory based approach to game AI (UAI for short) that I find useful for the game I am currently developing. Student Factory, at least insofar as the AI is concerned, is an agent simulation system. This is the main reason that led me to create a utility based AI. Another factor that weighed in my decision is that I am intimately familiar with this approach, not from a game AI perspective, but from a decision-theoretic one; for example, see what I was up to before SF. This familiarity makes the concepts involved in UAI seem intuitive to me, which can go a long way when designing and debugging an agent’s AI. That said, I have nothing against state machines, behaviour trees, or the numerous other methods available out there. I won’t list the benefits of using a utility based AI; instead, I will attempt to share my insights with you and guide you through a rough layout of a UAI implementation. With that information, and the numerous links to references throughout these posts, I hope you’ll have enough to decide whether this approach makes sense for your game or application.

## A whirlwind introduction to utility theory

Utility theory has its roots in economics and game theory. It is quite interesting to see how these subjects developed, but for our purposes I’ll use an imperfect but practical definition without getting into undue detail. Utility theory can be seen as a set of mathematical recipes that, if followed, enable you to objectively(!?) evaluate a set of alternatives (or options). Equipped with this information, you can then choose the option that seems best at the time.

An example is often useful, especially with generic definitions like the one above. Consider the following setting: you are at home, coding away happily, when suddenly you develop a thirst. The circumstances present you with two options: a) stay at your desk and keep coding, or b) get up, have a glass of water, and then continue with your previous task. If you choose b), you will still have option a) at the next instant of time, and the next, and so on. So for a given time interval you ignore option b) and continue with your task. However, this cannot continue indefinitely; at some point you will stand up and have that glass of water. So what was different this time? What a foolish question, you may say: “I was too thirsty to ignore my need for water”. Yet this simple question hides something interesting: it would appear that as time went by, your thirst developed similarly to the following plot:

That’s great, because we can describe this kind of function. This one in particular is a quadratic of the form:

$$ u_{Drink}(t) = A t^2 + B$$

where \(A\) and \(B\) are constants. Granted, your feeling of “thirst” may not have developed exactly as shown in the above plot; however, the important thing is that it is at least plausible that such a function reflects reality. That’s good enough for our nefarious little agents.
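As a minimal sketch, the thirst utility above translates directly into code. The values of \(A\) and \(B\) here are made-up placeholders you would tune for your own game, and the clamp to \([0, 1]\) anticipates the normalization convention discussed later:

```python
def thirst_utility(t, a=0.02, b=0.0):
    """Quadratic thirst utility u(t) = A*t^2 + B, clamped to [0, 1].

    The constants a and b are illustrative; tune them per agent.
    """
    return min(max(a * t * t + b, 0.0), 1.0)

print(thirst_utility(1.0))  # early on: barely thirsty
print(thirst_utility(7.0))  # later: time for that glass of water
```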

Consider now the same scenario but with a twist in the plot. Instead of being in the cosy environment of your home, you are in the middle of a desert. As before, you develop a thirst. Now what do you do? If you ignore your thirst, this decision may soon prove deadly. So, assuming you’ve been in the desert long enough for your water supplies to be exhausted, presumably one of your top priorities would be to seek water. Therefore, this time around, the relevant function describing the urgency (or utility) of finding and drinking water should be different, as you no longer have the luxury of a water tap.

By now the main idea of the recipe that Utility Theory suggests for an AI should be becoming clearer: describe everything using plausible functions, and use these functions as the drivers for your agents’ decision making process. Of course, what is plausible is subjective, and will no doubt require some experimentation.
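To make the recipe concrete, here is a minimal sketch of the selection step for the coding-versus-drinking example: evaluate each action's utility at the current time and pick the highest-scoring one. The action names and numbers are hypothetical, not part of any real framework:

```python
def choose_action(actions, t):
    """Evaluate each action's utility function at time t and return the best."""
    return max(actions, key=lambda name: actions[name](t))

# Hypothetical utility functions for our two options.
actions = {
    "keep_coding": lambda t: 0.6,           # coding stays moderately useful
    "drink_water": lambda t: 0.02 * t * t,  # quadratic thirst grows over time
}

print(choose_action(actions, t=2.0))  # early on, coding wins
print(choose_action(actions, t=8.0))  # eventually thirst takes over
```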

The functions I used above are referred to as utility functions, as they describe the utility of a particular action with respect to an input variable. I used time as the input variable in the example, but I could just as easily have used distance, wealth, pleasure, pizza slices, or any other variable.

## A Few Useful Utility Functions for Game AI

There are plenty of functions you could use; however, I find that I keep coming back to a small set, the following being the most common ones.

The following functions are parametrised using two points: the starting point \((x_l, y_l)\) and the ending point \((x_u, y_u)\). Utility functions are usually normalized, most often to the range \([0,1]\), although some people prefer the range \([0, 100]\). To change the normalization, simply restrict \(y_l\) and \(y_u\) to that range. Negative \(y\) values are clamped to 0, and it is assumed that the domain is restricted to \((x_l, x_u)\).

### Linear

$$ y(x) = y_l + (y_u - y_l) \frac{x - x_l}{x_u - x_l} $$

This is the simplest one, and a good starting point for a utility function. Have a look at this for an interactive plot.
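A sketch of the linear utility in Python, following the conventions stated above (input clamped to the domain, negative outputs clamped to 0); the function and parameter names are my own:

```python
def linear_utility(x, x_l, x_u, y_l, y_u):
    """Linear utility through (x_l, y_l) and (x_u, y_u)."""
    x = max(x_l, min(x, x_u))  # restrict the domain to [x_l, x_u]
    y = y_l + (y_u - y_l) * (x - x_l) / (x_u - x_l)
    return max(y, 0.0)         # clamp negative values to 0
```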

### Power Function Right Half

$$ y(x) = y_l + (y_u - y_l) \left(\frac{x - x_l}{x_u - x_l}\right)^p $$

The exponent \(p\) controls the curvature; as with the linear function, you can tweak this function to your heart’s content.
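The power function is a small variation on the linear sketch: normalise the input to \([0, 1]\) and raise it to the exponent \(p\). Names are again my own choice:

```python
def power_utility(x, x_l, x_u, y_l, y_u, p=2.0):
    """Power utility: a linear ramp raised to the exponent p."""
    x = max(x_l, min(x, x_u))      # restrict the domain to [x_l, x_u]
    t = (x - x_l) / (x_u - x_l)    # normalised position in [0, 1]
    return max(y_l + (y_u - y_l) * t ** p, 0.0)
```

With `p = 1` this reduces to the linear function; `p > 1` makes the curve rise slowly then steeply, while `0 < p < 1` does the opposite.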

### Parametrised Sigmoid Function

The most commonly used sigmoid function seems to be this one:

$$ y(x) = \frac{1}{1+ e^{-x}} $$

However, this function doesn’t have a convenient normalization like the linear or power functions above, so I tend to avoid it and use the following function instead:

$$ y(x) = \frac{y_u - y_l}{2} \frac{ \frac{2}{|x_u - x_l|} \left( x - \frac{x_l + x_u}{2} \right) (1-k) }{ k - 2 k\left| \frac{2}{x_u - x_l} \left( x - \frac{x_l+x_u}{2} \right) \right| + 1} + \frac{y_u + y_l}{2}$$

where \( -1 < k < 1 \). This is much better behaved, and by varying the \(k\) parameter we can get all sorts of “shapes” for our utility functions. You can see the effect of the parameter here.
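A direct transcription of the parametrised sigmoid, assuming \(x_u > x_l\) (so the absolute value in the numerator's scale factor is redundant). It passes exactly through \((x_l, y_l)\) and \((x_u, y_u)\), with the midpoint mapping to \((y_l + y_u)/2\):

```python
def sigmoid_utility(x, x_l, x_u, y_l, y_u, k=-0.5):
    """Parametrised sigmoid between (x_l, y_l) and (x_u, y_u), with -1 < k < 1.

    Assumes x_u > x_l. k near -1 gives a sharp S-shape; k near 1 flattens it.
    """
    x = max(x_l, min(x, x_u))                         # restrict the domain
    s = 2.0 * (x - 0.5 * (x_l + x_u)) / (x_u - x_l)   # normalise to [-1, 1]
    y = (0.5 * (y_u - y_l) * s * (1.0 - k) / (k - 2.0 * k * abs(s) + 1.0)
         + 0.5 * (y_u + y_l))
    return max(y, 0.0)                                # clamp negatives to 0
```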

## Conclusion

This was the first post in the series on decision making and game AI. We discussed what utility theory is and how it could be applied in a simple decision making scenario.

If you enjoyed this post stay tuned for the next instalments where we’ll go through a simple utility theory based AI implementation.