Butter Royale AI — Continued
A Gentle Introduction to AI — Continued
Part two of an article about designing and implementing AI bots for a battle royale game released on Apple Arcade
In the first segment of this two-part article about implementing bots for Butter Royale, I outlined the early design and how we arrived at an AI framework to use. This time, I will go a bit more in-depth into how we designed our AI Behaviour Trees, how we set up AI profiles for bot difficulty and, finally, touch on how our QA department actually managed to test and verify that the bots worked as intended!
As mentioned at the end of the previous article, we ended up using NodeCanvas¹ for our AI needs, since its feature set fit our requirements like a glove. Its Behaviour Trees, visual editor/debugging tools, built-in event system and variable blackboard system allowed us to set up dynamic, reactive and (most importantly) functional bots with multiple difficulty levels — in less than three weeks.
We originally anticipated that it would take longer to develop the feature into a functional state, but due to the extensive pre-planning we did, along with the robustness of the AI framework we landed on, the pieces fell into place faster than expected. Everything came together smoothly and without major hitches, so we even had time to spare for polish and iterating on the feature — always a bonus!
DRY Behaviour Trees
Without further ado, let’s delve briefly into the world of programming and have a look at a common principle of software development that also applies to Behaviour Trees: DRY².
“Don’t repeat yourself” is a principle that aims to reduce duplication of similar logic across a software’s code-base. If a small piece of code needs to change, you only change it in one location and have that change reflected throughout the software, instead of having to hunt down and update every copy.
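As a trivial sketch of the principle (in Python, with invented names and numbers), a single shared helper replaces the same formula pasted into several call sites, so a balance change only needs to land once:

```python
# Illustrative only: a made-up damage formula kept in one place,
# rather than copy-pasted into melee, ranged and hazard code paths.

def damage_after_armor(damage, armor):
    """Single source of truth for the armour-mitigation formula."""
    return max(0, damage - armor)

# Every caller goes through the same helper, so tweaking the formula
# here updates melee, ranged and environmental damage at once.
assert damage_after_armor(10, 3) == 7
assert damage_after_armor(2, 5) == 0   # armour can't heal you
```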
DRY also applies to Behaviour Trees, and to visual scripting in general, since these are essentially abstractions of code presented in a different form; one that is often easier for the likes of myself and other nerf herders/game designers to understand than “pure” code approaches.
For instance, instead of having one Behaviour Tree for each AI profile in our game (which would have meant lots of duplicate trees and ongoing maintenance hell), we decided to have only one, global tree used by all the bots. This way, any change we made to the overall bot behaviour would apply to ALL our bots, independent of their individual profiles.
Using SubTrees to Maintain the DRY Principle
“…the Behaviour Tree had finally grown to start resembling a spaghetti beast…”
A feature of NodeCanvas that can help maintain that DRY principle is SubTrees. Specific bot behaviours might need to be repeated in different branches of the Behaviour Tree, and instead of having multiple instances of those behaviours that would all need to be maintained individually, they can be separated out into SubTrees that could then be reused anywhere in the main Behaviour Tree.
While this was our approach in the very beginning, we eventually chose to scrap the use of SubTrees and merge everything into one tree, due to a performance issue in the version of NodeCanvas we were using; given the number of bots we were spawning per match, the additional loading time needed for SubTrees caused a noticeable stutter when spawning the bots.
There was one happy side-effect of doing away with SubTrees: Debugging bot behaviour problems actually became easier, since the visual debugger tool present in NodeCanvas doesn’t at a glance tell you what goes on within a specific SubTree when viewing the main tree.
The downside was that by the end of the project, when we had implemented all the various bot behaviours we wanted — like collection of items, melee and ranged combat, fallback behaviours that kept the bots safe from the danger zone and more — the Behaviour Tree had grown to resemble a spaghetti beast, pushing the limits of how far one singular Behaviour Tree can go before becoming unmanageable.
Behaviour Tree Setup Details
Priority Based Branches
Our Behaviour Tree was set up to be constantly reevaluated by the bots, working from left to right through a series of branches, with a branch’s position in that order determining its priority.
Within each branch of the tree, the various conditions being checked were all marked as dynamic within NodeCanvas, a flag which allows the higher priority branches on the left to take precedence at a moment’s notice if the conditions checked should ever change and warrant the execution of a different action.
At the extreme left end, we placed the highest-priority behaviours, like seeking out nearby squad members or reacting to nearby enemies. At the extreme right end sits the “least important” fallback behaviour, which handles basic movement when there are no other interesting actions to take.
Rough overview of the different branches, listed here in the order of their appearance and priority in the tree:
- Seek out Nearby Squad Members (if bot itself is knocked down and needs to be revived, or if nearby squad member needs to be revived)
- Open Nearby Delivery (supply drop in the form of a fridge)
- React to Nearby Enemy (seek out, attack or flee)
- Collect Nearby NOM (“Nutritionally Operated Machinery” — the term used to describe weapons in the game. Depending on factors such as current NOM tier, tier of nearby NOM, ammo remaining)
- Open Nearby Chest (after NOM collection so bot can secure a NOM to defend itself before trying to open chest)
- Collect Nearby Health > Ammo > Armor (in that order, depending on state of the bot’s health, ammo and armor at any given time)
- Follow Nearby Squad Member (to try to stay together as a squad)
- Seek Safe Zone and Avoid Butter (butter = danger zone, shrinking every X seconds, deals damage if bot is caught in it)
- Wander Aimlessly (fallback; nothing else to do, so just walk around and pretend to have a purpose in life)
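The priority scheme above can be sketched as a simple selector loop. This is illustrative Python with invented names, not NodeCanvas code; the real tree is built and ticked inside NodeCanvas’s visual editor, with dynamic conditions doing the preemption.

```python
# Minimal sketch of a priority selector: branches are tried left to
# right, and the first branch whose condition holds runs. Because the
# conditions are reevaluated every tick, a higher-priority branch can
# preempt a lower one the moment its condition becomes true.

class Branch:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # () -> bool, checked every tick
        self.action = action        # () -> None

def tick(branches):
    """Run the first branch (highest priority first) whose condition holds."""
    for branch in branches:
        if branch.condition():
            branch.action()
            return branch.name
    return None

# Toy world state; ordering mirrors the tree: squad first, wandering last.
bot = {"knocked_down": False, "enemy_nearby": True}
branches = [
    Branch("seek_squad",     lambda: bot["knocked_down"], lambda: None),
    Branch("react_to_enemy", lambda: bot["enemy_nearby"], lambda: None),
    Branch("wander",         lambda: True,                lambda: None),  # fallback
]
assert tick(branches) == "react_to_enemy"
```

Flipping `bot["knocked_down"]` to `True` between ticks would make the squad branch win on the very next evaluation, which is the preemption behaviour the dynamic flag gave us in NodeCanvas.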
Dynamic Conditions and Probability Checks
Each of these branches contained a number of conditions that had to be met before the bot could perform the intended behaviour, and each branch contained sub-branches to handle edge cases or the different “choices” the bot could make every time the tree was reevaluated.
To make those choices, the bots relied on probability selectors, where the outcomes were based on weighted variables stored in the variable blackboard for each AI profile. Some of those variables were fixed, defined per AI profile, while others were dynamic/floating values that changed throughout the course of a match.
For instance, some selectors weighed the bot’s current health against a fixed value, with the probability of the bot making one choice or the other — in this example, whether to pick up a Health Kit or not — increasing or decreasing inversely with the bot’s health. The same principle was applied to determine whether the bot would pick up additional ammo or armour, and to help bots decide between engaging in combat or running away to fight another day.
Setting up the conditions in this way helped ensure that the bot would react dynamically and appropriately to the situation at hand.
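A minimal sketch of one such check, in Python with invented names and numbers (the real weights lived in each profile’s NodeCanvas blackboard):

```python
import random

# Hypothetical health-weighted probability check: the chance to detour
# for a Health Kit grows as the bot's health shrinks.

def health_kit_weight(current_hp, max_hp, base_weight=1.0):
    """Weight varies inversely with health: full HP -> 0, near-death -> ~base."""
    return base_weight * (1.0 - current_hp / max_hp)

def wants_health_kit(current_hp, max_hp, rng=random.random):
    """Roll against the weight; rng is injectable for deterministic tests."""
    return rng() < health_kit_weight(current_hp, max_hp)

# A bot at 10% health is far more likely to grab a Health Kit
# than one at 90% health.
assert health_kit_weight(90, 100) < health_kit_weight(10, 100)
```

The same shape of check, with different blackboard variables plugged in, covers ammo, armour and the fight-or-flee decision.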
AI Profiles for Difficulty
As mentioned, we decided to set up AI profiles for a range of different play styles and difficulty levels, to simulate the variety of human players one can encounter in a real match; from absolute noobs to self-proclaimed pro-gamers.
Using the variable Blackboards in NodeCanvas, each profile was set up with weighted variables that determined how aggressive, stealthy, loot-hungry, campy or brave the individual bots would be. The same variables also determined the radius of their Line of Sight, their overall firing accuracy, their chance to flee from battles, etc.
In total, each profile was set up with nearly fifty parameters that could be used to alter its behaviour under different conditions.
To start with, we defined a few different profiles and set up the parameters according to their intended behaviour:
- Balanced — baseline profile that’s balanced to do a bit of everything, without a particular focus.
- Yolo — all out attack, this guy doesn’t hold back. Will take any opportunity to go after the next available target.
- Survivor — highly focused on health, armor and staying alive no matter what. Will run away to live and fight another day.
- Camper — high tendency to camp in bushes, bide their time, ambush players who walk past — but only if within optimal firing range.
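The profiles might be sketched as plain data, as below. The parameter names and values here are made up for illustration; the shipped profiles carried close to fifty parameters each, stored in NodeCanvas blackboards rather than Python.

```python
from dataclasses import dataclass

# Illustrative profile "blackboard" with invented parameters and values.

@dataclass
class AIProfile:
    name: str
    aggression: float     # weight for engaging vs. avoiding fights
    flee_chance: float    # probability of running from a losing fight
    camp_tendency: float  # weight for hiding in bushes and ambushing
    line_of_sight: float  # detection radius
    accuracy: float       # 0..1 chance a shot lands

BALANCED = AIProfile("Balanced", aggression=0.5,  flee_chance=0.3,
                     camp_tendency=0.2, line_of_sight=12.0, accuracy=0.5)
YOLO     = AIProfile("Yolo",     aggression=0.95, flee_chance=0.05,
                     camp_tendency=0.0, line_of_sight=12.0, accuracy=0.5)
SURVIVOR = AIProfile("Survivor", aggression=0.2,  flee_chance=0.8,
                     camp_tendency=0.3, line_of_sight=12.0, accuracy=0.5)
CAMPER   = AIProfile("Camper",   aggression=0.4,  flee_chance=0.4,
                     camp_tendency=0.9, line_of_sight=14.0, accuracy=0.6)
```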
AI Skill Levels
We then took each of these AI profiles and created more granular versions of each to simulate players of different skill levels — from weak, through medium and hard to godlike.
- Weak — weaker variants have a shorter Line of Sight, poor accuracy when shooting, rarely heal themselves and have little regard for their own well-being. Attempts to simulate beginner and low-skilled players.
- Medium — average LoS, average accuracy, higher chance to collect items and chests than the weak bots.
- Hard — longer LoS, better accuracy, picks up health when needed, seeks optimal weapon range in encounters with other contestants on the butterfield.
- Godlike — godlike variants cheat a little bit, having longer Line of Sight than normal players would, and also have near perfect accuracy (against targets that don’t move, anyway). They will stock up on ammo, health and armor, and actively seek out bigger and better NOMs. If they’re in danger, they will sometimes run away to find health. Attempts to simulate more hard-core pro gamers.
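One way to sketch these skill-level variants is as multipliers and overrides applied on top of a base profile. The numbers below are invented for illustration, not the shipped values:

```python
# Hypothetical skill modifiers layered over a base profile's parameters.

SKILL_MODS = {
    "weak":    {"los_scale": 0.6, "accuracy": 0.40, "heal_chance": 0.1},
    "medium":  {"los_scale": 1.0, "accuracy": 0.70, "heal_chance": 0.5},
    "hard":    {"los_scale": 1.3, "accuracy": 0.90, "heal_chance": 0.8},
    "godlike": {"los_scale": 1.8, "accuracy": 0.98, "heal_chance": 0.9},
}

def apply_skill(base_params, skill):
    """Return a copy of base_params with the skill's scale/overrides applied."""
    mods = SKILL_MODS[skill]
    out = dict(base_params)
    out["line_of_sight"] = base_params["line_of_sight"] * mods["los_scale"]
    out["accuracy"] = mods["accuracy"]
    out["heal_chance"] = mods["heal_chance"]
    return out

balanced = {"line_of_sight": 10.0, "accuracy": 0.5, "heal_chance": 0.5}
weak = apply_skill(balanced, "weak")
godlike = apply_skill(balanced, "godlike")
assert weak["line_of_sight"] < godlike["line_of_sight"]
```

Keeping the skill deltas separate from the base profiles means four play styles times four skill levels come from eight small data tables rather than sixteen hand-maintained profiles; very much in the DRY spirit of the single global tree.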
Overall, the pool of actions the bots could take throughout the match stayed the same across the different profiles and difficulty levels, but the parameters allowed us to balance the frequency at which those actions would take place for any given profile, as well as tweak how efficient the bots would be when executing the various actions.
At a glance, it might not be apparent to a normal player what kind of profile and difficulty level a given bot in a match is setup with, but over time, this system provides an overall experience for the player where no two encounters with bots are ever exactly the same. Instead, the bots all display different priorities, strategies and wants — like a normal player would.
Weighted Bot Spawning System
Finally, we set up a weighted spawning system to determine which types of bots would spawn in any given match, balanced so that players would get a fair match with a varied selection of AI profiles and difficulties. Neither too many weak nor too many hard bots would make for a good experience for the average, casual Apple Arcade player, so it took some iteration to find a good balance.
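A weighted spawn pool can be sketched in a few lines of Python; the profile/skill pairings and weights here are invented for illustration, not the tuned values we shipped.

```python
import random

# Hypothetical spawn pool: medium-skill bots dominate, while weak and
# godlike bots stay rare, keeping matches varied but fair.

SPAWN_WEIGHTS = {
    ("Balanced", "medium"): 30,
    ("Yolo",     "medium"): 15,
    ("Survivor", "weak"):   10,
    ("Camper",   "hard"):    8,
    ("Balanced", "godlike"): 2,
}

def spawn_bots(count, rng=random):
    """Draw `count` (profile, skill) pairs, weighted, with replacement."""
    kinds = list(SPAWN_WEIGHTS)
    weights = list(SPAWN_WEIGHTS.values())
    return rng.choices(kinds, weights=weights, k=count)

lobby = spawn_bots(20)
assert len(lobby) == 20
```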
Quality Assurance — Testing the Bots
To put the final stamp on any feature and say “It’s done! Let’s move on to the next thing”, there’s one more thing that needs to happen. Sometimes neglected, sometimes done halfheartedly — I am of course talking about Quality Assurance; the process of making sure that a feature actually works as expected.
Now, it should always be the responsibility of each designer, developer, artist or anyone else who contributes to a project to do at least the bare minimum of testing to make sure their stuff is up to snuff, but even with rigorous testing, we often become blind to flaws in our own work, and bugs slip through the cracks.
Having a dedicated QA team to help find and iron out those flaws is a must, and having one on-site and as part of the team — like we have at Mighty Bear — is invaluable!
Alright, so we’ve established that testing is good, and that QA is necessary, but… How do you QA test semi-unpredictable bots that can dynamically change their behaviour from one second to the next, sometimes for reasons that are not apparent to the casual observer — and the only tool at your disposal for testing is to observe the bots using the game’s regular spectator mode, one bot at a time? 🤔
Our solution was to write up a document outlining the expected, likely and possible bot behaviours in different situations, based on the actual implementation. Another mind-map, if you will — similar to what we did at the very start of the project as described in my previous article. This new and updated map of bot behaviour allowed the QA department to build an accurate and extensive checklist they could use when observing the bot behaviours.
While the bots could still display seemingly erratic behaviour (though no different/worse than a regular human player), QA could check against this list and go “Yep, that’s a possible action to take in this situation. Works as intended!”
That’s all, folks! So long and thanks for all the fish!
tl;dr — bots were implemented in the game. They have since taken on a will of their own and have been escaping into the real world. We tried to pull the plug on them at one point, but it backfired badly (long story), so now we just look the other way and pretend not to notice. I, for one, welcome our new AI overlords…