Building a heatmap for Mighty Action Heroes

Mighty Action Heroes is Mighty Bear Games’ first Web3 game — a real-time multiplayer third-person Battle Royale.

Aim of the Heatmap

The game map (apart from being aesthetically pleasing and thematically consistent) was thoughtfully designed to meet various objectives such as:

  • Facilitating all-hell-breaks-loose street fights
  • Creating opportunities for ambush
  • Allowing strategic placement to capitalise on different weapon styles

Of course, it’s one thing to have the team test these mechanisms internally, and a whole other thing to watch our early access players duke it out and see if the designs were meeting their objectives.

Were killers utilising bushes? Were victims trapped between cars and buildings when they met their end? Were sniper rifles dominating the fights at bridges? We wanted to know.


To answer the above questions, we collected non-personal data of all players, such as the positions and timestamps for player deaths, and the weapons of the fragger and the fragged.

Essentially, this was going to be a data plot superimposed on the map itself. But unlike most traditional heatmaps, we actually didn’t want to aggregate the data into amorphous blobs of red and blue; we needed precision.

Below are some features we implemented to best allow end-users to generate insights from the tool.


There still needed to be some kind of visual gradient conveying the correlation of ‘number of deaths’ and ‘heat’. Given that we weren’t going for aggregated lumps of red and blue, the solution was to go with many discrete dots, each with low opacity, overlapping one another.

We were expecting data points in the tens of thousands, so there needed to be a mechanism to control the size of each plot point as well.

To achieve the right balance, I opted for both a radius and opacity slider to allow our analysts to adjust the parameters dynamically.
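As a minimal sketch of that idea (the slider ranges and names here are illustrative, not the shipped values), the callback simply restyles the existing glyph in place:

```python
from bokeh.plotting import figure
from bokeh.models import Slider

p = figure(x_range=(0, 50), y_range=(0, 50))
# A couple of stand-in points; the real data comes from the deaths dataframe
r = p.scatter(x=[10, 20], y=[10, 20], size=9, fill_alpha=0.05,
              fill_color="red", line_color=None)

# Hypothetical sliders for dot radius and opacity
slider_size = Slider(title="Dot radius", start=1, end=30, step=1, value=9)
slider_opacity = Slider(title="Dot opacity", start=0.01, end=1.0, step=0.01, value=0.05)

def update_style(attr, old, new):
    # Restyle the existing glyph in place; no re-plotting needed
    r.glyph.size = slider_size.value
    r.glyph.fill_alpha = slider_opacity.value

slider_size.on_change('value', update_style)
slider_opacity.on_change('value', update_style)
```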

One feature I definitely wanted was a time slider. And because of the shrinking ring mechanic, I just knew it was going to be extremely satisfying to watch the data points converge towards the centre.


For the actual map, our tech artists helped me get an aerial snapshot, along with the numbers needed to scale the death_position data to fit the PNG. The scaling had to be precise; otherwise the final result would have had people ‘dying’ on cars and in the river, making the data inaccurate.
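The scaling itself is just a linear rescale from world space into the plot’s axis space. A sketch, with placeholder world bounds (the real numbers came from the tech artists):

```python
# Placeholder world bounds — assumptions, not the real map's values
WORLD_MIN_X, WORLD_MAX_X = -400.0, 400.0
WORLD_MIN_Y, WORLD_MAX_Y = -400.0, 400.0
MAP_SIZE = 50.0  # the plot's axis range, matching figure(x_range=(0, 50), ...)

def to_map_coords(world_x: float, world_y: float) -> tuple:
    """Linearly rescale a world-space death position into map-plot coordinates."""
    x = (world_x - WORLD_MIN_X) / (WORLD_MAX_X - WORLD_MIN_X) * MAP_SIZE
    y = (world_y - WORLD_MIN_Y) / (WORLD_MAX_Y - WORLD_MIN_Y) * MAP_SIZE
    return x, y
```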


A feature that we added in later was checkboxes. Although we could generate dashboards that would give us precise stats, percentages, correlations, etc, this was just quite simply more fun. Personally, I’m not entirely sure how useful this will end up being, but our game designer is excited to see what insights this feature will reveal.


I chose Python for the ready availability of data analytics libraries and general code readability. The Bokeh library was an obvious choice, given the impressive list of demos which guaranteed that the features we wanted were available out of the box.

Jupyter Notebook was also an easy enough choice, being easy to run (shift-enter) and rapidly tweakable. The sequential nature of Jupyter notebooks also essentially meant anyone could easily follow the logic of how we were generating the heatmap, which is what will be described below.

There were four broad steps: preparing the map, preparing the data, building the widgets (sliders and checkboxes), and putting them all together.

The code snippets below will make more sense with some understanding of the Bokeh library, but they should be human-readable enough to follow in any case.

1. Prepare the map

More of a sanity check, really, to handle logistical issues such as axis values, map size, minimum resolution, etc.

from bokeh.plotting import figure, show, output_notebook, output_file

# Sets output to be in-notebook (use output_file for a new-tab render)
output_notebook()

# Axes match the scaled coordinate space; the map image fills the whole plot
p = figure(x_range=(0, 50), y_range=(0, 50))
p.image_url(url=['map.png'], x=0, y=50, w=50, h=50)

2. Prepare the data

Bokeh works out of the box with the Pandas library, so depending on the data being collected, you would probably want to wrangle it into a dataframe. Ours looked something like this (truncated, obviously):
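As a stand-in, here’s a toy frame with the columns the later filters rely on (the values are invented for illustration):

```python
import pandas as pd

# Invented rows; column names follow the filters used later
# (death position, time of death, weapon id, player id)
all_deaths_df = pd.DataFrame({
    'player_id':        ['player_123', 'bot_042', 'player_456'],
    'death_pos_x':      [12.4, 30.1, 44.7],   # already scaled to map space
    'death_pos_y':      [8.9, 22.5, 41.0],
    'match_time_death': [45, 210, 598],        # seconds into the match
    'killer_weapon_id': ['normal_smg', 'powerful_sniper', 'mighty_launcher'],
})
```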

3. Build the widgets

Widgets are interactive controls in Bokeh applications that let front-end users tweak parameters: sliders, dropdowns, box-select zoom, download as .png, that sort of thing.

There really are no limits to what widgets can do, because on top of the out-of-the-box features, the on-change logic can always be defined as a Python or JavaScript function. The key job of these widgets is filtering what data is plotted. Here’s some sample code for the filters.

def apply_filters(attr, old, new):
    # Filter based on slider_time
    df_new = all_deaths_df[all_deaths_df['match_time_death'] <= slider_time.value]

    # Filter based on bot checkbox (active is non-empty when 'Hide bots' is ticked)
    if len(checkbox_bot_filter.active) > 0:
        df_new = df_new[~df_new['player_id'].str.lower().str.contains("bot")]

    # Filter based on weapon_grade
    weapon_grade_map = {0: 'normal', 1: 'powerful', 2: 'mighty'}
    reg_exp = ""
    if len(checkbox_weapon_grade_filter.active) > 0:
        for i in checkbox_weapon_grade_filter.active:
            reg_exp += weapon_grade_map[i] + "|"
        df_new = df_new[df_new['killer_weapon_id'].str.lower().str.contains(reg_exp[:-1])]

    # Other filters, etc...

And here’s some sample code for applying the filters to the widgets:

from bokeh.models import Slider, CheckboxGroup, Div

# Slider for time
slider_time = Slider(title="Time of Death (seconds)", start=1, end=700, step=1, value=600, width=850)
slider_time.on_change('value', apply_filters)

# Checkbox for bot filter
checkbox_bot_filter = CheckboxGroup(labels=['Hide bots'])
checkbox_bot_filter.on_change('active', apply_filters)

# Checkbox for weapon_grade filter
weapon_grade_div = Div(text="""Select all killed by weapon of grade: """)
checkbox_weapon_grade_filter = CheckboxGroup(labels=['Normal', 'Powerful', 'Mighty'], active=[0, 1, 2])
checkbox_weapon_grade_filter.on_change('active', apply_filters)
# Other filters, etc...

It was important to apply all the filters in a single callback because the intention was to support compound filters (e.g. all deaths after 60 seconds, excluding bots, showing only ‘Powerful’ weapons).
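Here’s a self-contained sketch of that compound filtering on a toy frame (the column names match the callback above; the rows are invented):

```python
import pandas as pd

df = pd.DataFrame({
    'player_id':        ['alice', 'Bot_7', 'carol'],
    'match_time_death': [30, 45, 120],
    'killer_weapon_id': ['powerful_smg', 'powerful_rifle', 'normal_pistol'],
})

# All deaths in the first 60 seconds, excluding bots, 'powerful' weapons only
mask = df['match_time_death'] <= 60
mask &= ~df['player_id'].str.lower().str.contains('bot')
mask &= df['killer_weapon_id'].str.lower().str.contains('powerful')
df_new = df[mask]
# Pushing df_new back into the ColumnDataSource
# (e.g. source.data = ColumnDataSource.from_df(df_new)) refreshes the plot
```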

4. Putting them all together

In sum, this heatmap was built with the following steps:

  1. Create the map, i.e. the plot p from step 1.
  2. Plot the points as circles, using the dataframe from step 2 as the datasource for the plot coordinates:
from bokeh.models import ColumnDataSource

# Set the datasource table that feeds the plot
source = ColumnDataSource(all_deaths_df)

# Plot circles with a colour palette of choice. Run bokeh.palettes.__palettes__ for the full list
r = p.circle(x='death_pos_x', y='death_pos_y', line_color=None, fill_color="red",
             fill_alpha=0.05, size=9, source=source)

3. Include plot p and all the widgets from step 3 in a desired layout for the Bokeh application:

from bokeh.layouts import column, row

sliders = column(slider_size, slider_opacity, slider_time)
checkboxes = column(checkbox_bot_filter,
                    row(column(weapon_grade_div, checkbox_weapon_grade_filter),
                        column(weapon_div, checkbox_weapon_filter)))
layout = column(p, total_data_points, current_data_points, current_perc_points,
                checkboxes, sliders, sizing_mode="stretch_both")


4. Run, or show, the application

show(bokeh_app, notebook_url='localhost:8888')
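The snippet above assumes bokeh_app has already been constructed. One way to do that (an assumption — the original construction isn’t shown here) is to wrap the document-building logic in a FunctionHandler so show() can spin up an in-notebook Bokeh server:

```python
from bokeh.application import Application
from bokeh.application.handlers import FunctionHandler

def make_document(doc):
    # Build p, the widgets, and the layout here (steps 1-3), then attach:
    # doc.add_root(layout)
    pass

# Wrap the builder so show() can serve it as a live application
bokeh_app = Application(FunctionHandler(make_document))
```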

And this here is the very satisfying final result:

Limitations and Improvements

Data piping

The most glaring limitation here is how current the data is. Right now, it’s whatever snapshot was last uploaded to the app.

We considered adding parameters like start_date, end_date, region, etc. to dynamically fetch whatever data is being asked for and render it in the app. But there simply wasn’t a use case with an urgent need for exact slices of data, so we decided not to spend the manpower building a feature we wouldn’t need.

The simple solution is to document the process of taking a snapshot of the desired data using AWS Athena and uploading it to wherever the Bokeh app is hosted.

The code that handles the data also breaks for major changes to the incoming data (e.g. adding new columns). This can be mitigated by having a more thorough discussion with the team in charge of collecting analytics data, but it might just be one of those unavoidable cases where we don’t know what we don’t know, and tweaks have to be made later on.

Sample size

Sample size is another limitation. At just 10,000 data points, users start to feel a slight lag in the sliders’ responsiveness, and at 30,000, it takes a few seconds for all the data points to react to the widgets, such as a change in radius.

Our solution was to cap the sample size at 15,000. So for datasets larger than that, 15,000 data points would be randomly selected across the data. Performance improvements might be possible hosting the app on a more powerful (and expensive) machine, but the benefits of saving those seconds are decidedly not worth the cost.
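The cap itself is a one-liner with pandas. A sketch, assuming the deaths frame from step 2 (the function name is mine, not from the original code):

```python
import pandas as pd

MAX_POINTS = 15_000

def cap_sample(df: pd.DataFrame, n: int = MAX_POINTS, seed: int = 0) -> pd.DataFrame:
    """Randomly downsample to at most n rows so the widgets stay responsive."""
    if len(df) <= n:
        return df
    # random_state makes the selection reproducible between reruns
    return df.sample(n=n, random_state=seed)
```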


Hosting

The current setup runs on localhost, meaning each user needs to clone the repo and download the associated dependencies.

A simple solution is to host the Jupyter notebook, or the whole app itself, on a simple EC2 instance, but there simply hasn’t been enough demand to justify the effort of building and maintaining that.

Closing thoughts

It was satisfying to complete this app, which took about a week’s worth of effort from start to end: discussion, building, and modifying. Being the team’s DevOps engineer, it was encouraging that my managers, who knew I enjoyed these visualisation-type side projects, roped me in and let me carve out the time to work on this.

I’m hoping to hear from the game designers if this actually helps validate their designs. But more importantly, I’m hoping it helps improve player experience, particularly because I intend to be playing a lot of Mighty Action Heroes 😛

Building a heatmap for Mighty Action Heroes was originally published in Mighty Bear Games on Medium.