As we’re nearing the public release of Dani AI, we had to address a few major issues that were impacting the workflow of creating artificial intelligence in games. These issues mainly cover performance and editor quality-of-life.
Here’s a quick overview of what we updated in Dani AI:
Faster performance (up to 30%)
Smaller memory footprint (better for mobile games)
Action state icons that show an action’s current running status
Proportional sliders for calculating weights
For games, performance is incredibly important for maintaining smooth gameplay, whether on a mobile game or a hardcore PC title. For simulations, high performance allows more complex simulations while maintaining the same frame rate.
We took those thoughts into consideration and decided to rewrite and simplify the runtime component of Dani AI with performance in mind. As a result, we managed to achieve up to a 30% improvement in performance, with a 10% reduction in memory usage!
Editor Quality of Life
We made a lot of updates to the editor, based on user feedback and suggestions:
Displaying Action States
Previously, the current status of an Action was shown as a piece of text at the bottom of an Action node in the editor. When we introduced zooming in and out, it became quite difficult to see what the current status was. Instead, we opted to show an icon in the node that changes according to which state the Action is in:
When connecting multiple Observers to a Decision, adjusting the weights for each Observer was less than ideal. One would need to adjust the weight of one Observer, then adjust the weights of all other connected Observers to get the correct end value. It led to some funky math where we had instances of weights in the hundreds just because it was “easier to calculate”.
To streamline the workflow, we added sliders that automatically adjust themselves in proportion to the currently edited weight. They act pretty similarly to Humble Bundle sliders:
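For the curious, the math behind the sliders is tiny. Here’s a minimal C# sketch of the idea (not our exact implementation, and the names are illustrative): when one slider moves, the remaining weights are rescaled so the total stays constant.

```csharp
// A minimal sketch of proportional weight redistribution.
public static class WeightSliders
{
    // Rescales the other weights so the total stays constant after editing one slider.
    public static void Redistribute(float[] weights, int editedIndex, float newValue, float total = 1f)
    {
        float oldOthers = total - weights[editedIndex]; // what the other sliders summed to before
        float newOthers = total - newValue;             // what they must sum to after

        for (int i = 0; i < weights.Length; i++)
        {
            if (i == editedIndex) continue;
            weights[i] = oldOthers > 0f
                ? weights[i] * (newOthers / oldOthers)  // keep each slider's share of the remainder
                : newOthers / (weights.Length - 1);     // others were all zero: split evenly
        }
        weights[editedIndex] = newValue;
    }
}
```

The nice property is that each untouched slider keeps its share of the leftover total, which is exactly the Humble Bundle behavior.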
That wraps up much of what we wanted to cover before the big release (soon!), so thanks to all of the brave testers willing to use Dani AI in their projects!
During the past couple of months, we’ve been taking a look at our editor tool and noticed a few fundamental design flaws that our testers reported. We went back to the drawing board and drew inspiration from other tools and games, including Amplify Shader, Blender, and Democracy 3. A key concept from these tools and games is that information is shown when needed, and hidden (but accessible) when not.
A More Responsive Editor
We overhauled the Editor’s functionality, stability, and design. Such changes include:
A redesign of the nodes and connections
Nodes now separate incoming connections to better show where the inputs are coming from
In Unity’s “Play Mode”, running nodes are colored in shades of green from darkest to lightest to show the order of the AI’s thoughts (Observer -> Decision -> Action)
Node design is fully editable via a GUISkin object in the project (see the sketch after this list)
Knowing what AI to edit
Selected templates are marked with a checkbox in the dropdown menu as a quick reminder of what template is being edited
Better touchpad support
Selecting multiple nodes is a matter of clicking and dragging, as opposed to click-drag-click
Scrolling is much faster
Dark Skin support
We now natively support Unity’s dark skin for all the Unity Pro users!
Faster startup speeds and rendering, especially during Play Mode
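As a quick illustration of the GUISkin point above, here’s a hypothetical sketch of how a node editor can pull named styles from a skin asset. The style name “node” is made up for illustration; this is not Dani AI’s actual lookup code.

```csharp
using UnityEngine;

// Hypothetical example of styling a node via a project GUISkin.
public class NodeStyleExample
{
    public GUISkin skin; // a GUISkin asset assigned from the project

    public void DrawNode(Rect rect, string title)
    {
        // FindStyle returns null when the skin doesn't define the style,
        // so we fall back to a built-in style.
        GUIStyle nodeStyle = (skin != null ? skin.FindStyle("node") : null) ?? GUI.skin.box;
        GUI.Box(rect, title, nodeStyle);
    }
}
```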
Check out the before and after images of the changes we’ve made to our UI!
Clicking connections was fairly limited in the old editor. One would have to click a small square in the middle of the connection in order to select it. It led to a few problems:
Connections were hard to click with the touchpad
Large resolutions or high DPI made clicking even more difficult
Some squares overlapped
Clicking on a connection is important in Dani AI, because the connection provides additional information that is not normally shown in the editor, such as conditions. As a result, we changed our click detection to a spline-based approach, so a connection can be clicked anywhere along the line.
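The gist of spline-based detection is simple: sample the Bézier curve that draws the connection and test the mouse’s distance to each segment. Here’s a rough, self-contained sketch of that idea (our editor code differs; Unity’s HandleUtility.DistancePointBezier offers similar functionality in editor scripts):

```csharp
using UnityEngine;

// A sketch of clickable connections: the mouse "hits" the connection if it is
// within a few pixels of the sampled Bézier curve.
public static class ConnectionPicker
{
    public static bool IsNear(Vector2 mouse, Vector2 start, Vector2 startTangent,
                              Vector2 endTangent, Vector2 end,
                              float threshold = 6f, int samples = 24)
    {
        Vector2 prev = start;
        for (int i = 1; i <= samples; i++)
        {
            Vector2 point = CubicBezier(start, startTangent, endTangent, end, i / (float)samples);
            if (DistanceToSegment(mouse, prev, point) <= threshold)
                return true;
            prev = point;
        }
        return false;
    }

    // Standard cubic Bézier interpolation.
    static Vector2 CubicBezier(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3, float t)
    {
        float u = 1f - t;
        return u * u * u * p0 + 3f * u * u * t * p1 + 3f * u * t * t * p2 + t * t * t * p3;
    }

    // Distance from a point to a line segment, clamping to the endpoints.
    static float DistanceToSegment(Vector2 p, Vector2 a, Vector2 b)
    {
        Vector2 ab = b - a;
        float t = Mathf.Clamp01(Vector2.Dot(p - a, ab) / Mathf.Max(ab.sqrMagnitude, 1e-6f));
        return Vector2.Distance(p, a + t * ab);
    }
}
```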
Originally, we offloaded the node-displaying capabilities to the Inspector in Unity. While it did the job well, it led to a couple of issues:
It’s easy to lose focus on the node by clicking on another object in Unity
You would need to click the node again to refocus on it
A possible solution is to manage multiple inspectors
Done via “Add Tab/Inspector” in any editor window
Not many people know this trick
It’s easy to forget to lock the inspectors, otherwise all inspectors will show the same selected object
Instead, we opted to bake our own inspector into the editor. For developers who write editor code, the built-in inspector maintains the same functionality, so you can keep your existing custom inspectors!
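For reference, Unity already makes it possible to host inspector drawing inside a custom window, which is the general technique that keeps existing custom inspectors working. A rough sketch (not necessarily our exact code):

```csharp
using UnityEditor;
using UnityEngine;

// A sketch of embedding Unity's inspector drawing inside an EditorWindow.
public class EmbeddedInspectorWindow : EditorWindow
{
    Editor cachedEditor; // reused between repaints
    Object target;       // the node (or any Object) being inspected

    public void SetTarget(Object newTarget)
    {
        target = newTarget;
        // CreateCachedEditor reuses the previous Editor when the target is unchanged,
        // and respects any custom inspector registered for the target's type.
        Editor.CreateCachedEditor(newTarget, null, ref cachedEditor);
    }

    void OnGUI()
    {
        if (cachedEditor != null)
            cachedEditor.OnInspectorGUI(); // draws the default or custom inspector
    }
}
```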
As we are nearing the public release for Dani AI (date TBD), we’re happy to continue to work with our testers for bug-hunting and feedback!
And as always, if you have any questions or would like to test our AI, please shoot us an email at firstname.lastname@example.org or find us on Facebook or @initialprefabs on Twitter
We had a blast last month showcasing Modern Knights and spending time talking to you at Play NYC and Otakon. For us, it was about learning from others and making moves based on what we learned and what we want to do next. And we’re continuing that at Unity Developer Day in NYC this Saturday, so catch us there!
Speaking of conversations, we’ve gotten quite a few questions during the events, from the curious and from veteran testers alike, about the current state of Dani AI. Based on the current feedback and suggestions, we’re focusing on improving its user interface, so we took the time to fix the biggest chinks in its armor:
Connecting the Dots
Dani AI is a powerful AI tool that decides which actions to take in a given situation. It does so by connecting observers, decisions, and actions (otherwise known as AI nodes) together to create a template that an agent can use to perform actions in a meaningful way. In other words, the nodes provide the information, while the connections provide the context of what is happening or will happen.
One of the first issues we tackled in the editor was the inconsistency of creating and editing connections. Originally, connections were only editable in the Inspector. A typical workflow for adjusting the conditions for a decision to activate was:
Click the observer
Look in the “Connected Decisions” list
Find the correct decision that contains the condition
Edit the condition
It’s a bit odd, given that the only way to create connections is dragging the mouse in the editor to connect the nodes together. As a result, we made the connections clickable in the editor, so you can jump to editing the condition in one click.
We also added a new custom inspector for connections to provide relevant information on the nodes and the conditions.
Weighing in on the Situation
When an observer is connected to a decision, it contributes to the decision’s total score by generating a sub-score based on the condition. For example, if an agent wants to attack, its health should be above 50; if it is, the observer contributes a point to the “Attack” decision.
Given that situation, it’s a bit odd to see percent signs when clicking on the observer or connections. Originally they were meant to show the influence an observer has relative to other connected observers, but we kept it simple by just showing the weight instead:
When the decision is selected, the connections show both the weight and its relative influence over the decision:
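To make the numbers concrete, here’s a minimal sketch of how a weighted score and a relative influence could be computed. Dani AI’s internals may differ; this just illustrates the relationship between weights, sub-scores, and influence:

```csharp
// A minimal sketch of weighted decision scoring.
public static class DecisionScoring
{
    // Total score: weighted sum of each observer's sub-score (each in [0, 1]).
    public static float Score(float[] subScores, float[] weights)
    {
        float score = 0f;
        for (int i = 0; i < subScores.Length; i++)
            score += subScores[i] * weights[i];
        return score;
    }

    // Relative influence: one weight's share of the total weight,
    // i.e. the percentage shown when a decision is selected.
    public static float RelativeInfluence(float[] weights, int index)
    {
        float totalWeight = 0f;
        foreach (float w in weights) totalWeight += w;
        return totalWeight > 0f ? weights[index] / totalWeight : 0f;
    }
}
```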
Snapping into Order
Freely-moving nodes are fun to drag around, but are often quite annoying to align. One pixel may not seem like a problem, but it’s enough for our testers to waste minutes per connection just to align them properly.
As a solution, we added a really simple snapping method that rounds the position of each node to the nearest incremental value (5 pixels by default, but configurable in the settings). It’s most noticeable when moving the mouse slowly, as it produces a stepper-like movement.
A stepper-like pattern for aligning nodes together.
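The snapping itself boils down to one rounding step per coordinate. A quick sketch of the idea:

```csharp
using UnityEngine;

public static class NodeSnapping
{
    // Rounds each coordinate to the nearest multiple of the snap increment
    // (5 pixels by default, matching the editor setting described above).
    public static Vector2 Snap(Vector2 position, float increment = 5f)
    {
        return new Vector2(
            Mathf.Round(position.x / increment) * increment,
            Mathf.Round(position.y / increment) * increment);
    }
}
```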
Where are we now?
Again, big thanks to our testers who are willing to test our tool every step of the way. We’re aiming to push Dani AI to the Unity Asset Store soon, and right now we’re confident that Dani AI is in a reasonably stable state.
If you have any questions or would like to test Dani AI, please shoot us an email at email@example.com or find us on Facebook or @initialprefabs on Twitter.
Otakon 2017 and Play NYC are both right around the corner, and we’ve been busy over the past few months improving Modern Knights and Dani AI based on the current testing feedback. In the meantime, we’ve found the time to jump into the thought process of creating reusable AI.
To Scale is to Understand the Issue
As mentioned in the previous post, there are times when a non-player character (or NPC) needs to be less robotic and do more than “if A is true, then do B”. What we mean by that statement is that given many different scenarios that the NPC will encounter, it is relatively difficult to define the conditions for each scenario. Additionally, it would be harder to scale up and try out new ideas for the character.
For our knights, we’ve had this dilemma with the interaction between the knight and the player. The knight is an NPC with its own set of decisions. The player is, well, a character in the game controlled by a complex system of cells using a physical interface known as a mouse, keyboard, and screen. Simply speaking, the obvious approach is for the knight to “remember” the player and periodically check if the player is doing anything.
Logistically, it leads to some issues:
Should the knight really know if the player is moving? If so, what about crouching and sprinting? Rolling?
If a knight is far away, should it check what the player is doing?
When the player makes a sound, should every knight check if they heard it?
Should the knight even know what a “player” is?
As the player mechanics become more complex, adding new code to the AI to compensate becomes harder to do. For example, sprinting generates more noise for the knights to hear, so we need to add a check to see if the player is sprinting. Shooting also generates noise, so we add another check for shooting. Eventually, the number of checks becomes too much to manage and debug. Additionally, all the effort placed into those checks is wasted when the knight loses its reference to the player (“forgetting” that the player exists).
From a designer’s point of view, it’s very worrisome, since there would be a lot of downtime to get the mechanics in place to ensure that the AI is still working as intended. From a programmer’s point of view, it’s a lot of unnecessary work that can be better spent on other parts of the game.
This raises the question: how can we build this AI to ensure that it works, even if we’re modifying the player?
To Perceive is to Survive
The solution takes a simple approach: What about the player does the AI care about and how do we model it? No matter what the player is doing (running, sneaking, shooting, bashing heads with shields and swords), it boils down to three cases:
Player is visible
Player makes sounds that the knight can hear
Player is a threat
A threat in this case is anything that can harm the knight. Just like in real life, threats are generally momentary, and we often go back to our daily lives once the threat is over (like dealing with mosquitoes in the hot summer). By making the player a generic “threat”, we don’t have to make the AI think about the player directly, freeing up the player for the designers to move forward with new ideas.
But that leaves the question: if a threat is a “thing” that a knight can detect, how do visuals and sounds fit into this idea?
A threat is a perceived notion, and the knight that perceives it is going to react accordingly. To do so, we created a simple perception model for each threat. The model contains a reference to the source threat, as well as a number from 0 to 1 showing how well known the threat is to the individual knight, otherwise called its awareness.
When the player shoots, it emits an intriguing event to nearby knights, whether that’s a sound or a visual notion. This event is simply a fraction between 0 and 1 that diminishes with distance. The farther away the knights are, the smaller the value they receive from the event. This value is then added to the knight’s total awareness value. Over time, should the player no longer emit any kind of event (by dying or being too far away), the awareness value decreases until the knight is no longer interested. This opens up new moments where the player needs to be clever about how to escape, as opposed to just stepping outside of the combat range.
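Here’s a simplified sketch of that perception model. The names are illustrative rather than Dani AI’s actual API; the decay rate and falloff are made-up values:

```csharp
using UnityEngine;

// A sketch of per-threat perception: events raise awareness with a
// distance falloff, and awareness decays when nothing is emitted.
public class ThreatPerception
{
    public Transform source;           // the perceived threat (e.g. the player)
    public float awareness;            // 0 = unaware, 1 = fully aware

    const float DecayPerSecond = 0.1f; // how quickly knights lose interest

    // Called when the threat emits a sound or visual event of a given intensity (0-1).
    public void OnThreatEvent(Vector3 listenerPosition, Vector3 eventPosition,
                              float intensity, float maxRange)
    {
        float distance = Vector3.Distance(listenerPosition, eventPosition);
        float received = intensity * Mathf.Clamp01(1f - distance / maxRange);
        awareness = Mathf.Clamp01(awareness + received);
    }

    // Called every frame; awareness fades when no events arrive.
    public void Tick(float deltaTime)
    {
        awareness = Mathf.Max(0f, awareness - DecayPerSecond * deltaTime);
    }
}
```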
A simple setup for detecting threats using Dani AI. When the player makes a sound (e.g. shooting a gun), the player sends an event for the AI to pick up and investigate in response.
By setting up this perception system, the knight only needs to listen for a threat and take actions to raise its awareness. The player only needs to emit its presence through events. Should we decide to add friendly AI, they too would only need to emit their presence, all without ever touching the knights.
Knights detecting the player after a few gunshots. The knight on the right noticed faster due to its proximity and the loudness of the gun.
That pretty much covers what we wanted to say for Part 1. For Part 2, we’re thinking of covering the topic of combat AI, so stay tuned for that!
After our last playtest session at Playcrafting’s Spring Expo, we received a lot of player feedback and noticed some balancing issues with our latest addition, meleeing.
Meleeing was the most requested feature in our game, and in our last spring demo we added it. While the melee combat allowed combo attacks and somewhat fancy animations, we felt that it was a bit much for our game. Check out the gif below of the previous version of meleeing.
We found that with a two-handed style, meleeing was too overpowered. It was a hit-or-miss situation; the amount of damage a knight deals to you is equal to the amount of damage you deal (which results in a three-hit kill).
Instead of focusing on complicated attack patterns, we went back to basics: a much simpler attack combo (left and right strikes) with the ability to block and roll. With this kind of system we can easily scale the complexity of the player’s skills and attacks (akin to Skyrim’s third-person combat). For instance, we can add more to the attack combo based on the number of swings a player does.
Above, we switched to a simple right-left attack. You can cancel the attack and switch to a block, or even roll and evade your enemies. The purpose is to make melee combat more viable and flexible, creating intuitive player controls.
So, what of ranged combat?
Ranged combat will still exist in the game. We’re still experimenting with it, but we’re aiming not to make the game feel like a run-and-gun. Something like that will take time and experimentation, because the questions we ask ourselves are:
How do we want the player to react given the situation?
Are we giving the player enough options for the said situation?
How can we balance the gameplay and make it sufficiently challenging for the player?
Ranged combat still has these hurdles, and to clear them we need to tweak our knights’ AI to accommodate ranged combat (e.g. tactics to deploy such as flanking and taking cover). However, we’re taking things step by step, and in the upcoming weeks we’ll post more updates on them.
So, with all that said, what’s the next immediate goal for our game? Well, there are two large expos this summer: Otakon and Play NYC.
From August 11th – 13th, we’ll be in the video game section at Otakon! So if you’re in the DC area, drop by; we hope to see you there!
We’ll provide more details for Play NYC, which will be happening August 19th – 20th. So stay tuned for more updates!
GDC 2017 (otherwise known as the Game Developers Conference) is going on right now, with amazing talks and keynotes on everything related to game development! While that is going on, we made some major improvements to our Dani AI system as well as to its editor, thanks to the feedback we received from our testers!
Editor’s UI Overhaul
We updated the interface to focus on quickly providing information on what the agent is doing at a glance. Much of our inspiration came from various interfaces in games, including Tom Clancy’s The Division, Endless Legend, and Civilization V.
Why games? Other than the fact that Dani is an AI framework for games, interfaces in games are great at displaying information at a glance, since the focus is not on the interface itself but on the game world and what’s going on in it. It’s a lot easier to glance at the corner of the screen and see how much ammo you have left than to push a button and have a ton of weapon information show up. As a result, we’ve done the following:
Icon support for nodes
Quickly associate images with the node’s purpose
Customizable with one line of code
Action nodes now show what state they are in at runtime
Unity Pro skin is now supported
Additionally, we’ve made various quality-of-life improvements to the editor, such as selecting multiple nodes and copy-and-paste functionality, with full undo support. Keyboard support is available too, if you want to use Ctrl + C and Ctrl + V to copy and paste.
Utility AI Support
There are times when a character needs to be less robotic and do more than “if A is true, then do B”. We call those agents “humanoid characters”, and a utility-based approach suits those kinds of characters well. Utility-based AI is another way to determine which actions an agent can take, asking something along the lines of “how much will A make me do B?” The more A influences B, the more “useful” A is.
Utility can be defined using curves, and what we’ve done is add an AnimationCurve (Unity’s built-in curve type) to visually define the usefulness without needing to work out and invent new equations.
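As a small illustrative example (not Dani AI’s API), here’s how a curve-driven utility score might look in plain Unity C#: the designer shapes the curve in the Inspector, and the code just evaluates it.

```csharp
using UnityEngine;

// A sketch of curve-driven utility: an AnimationCurve maps a normalized
// input (here, health) to a 0-1 usefulness score.
public class HealthUtility : MonoBehaviour
{
    // E.g. a curve that rises as health drops, making "retreat" more useful.
    public AnimationCurve utilityCurve = AnimationCurve.Linear(0f, 1f, 1f, 0f);

    public float Evaluate(float currentHealth, float maxHealth)
    {
        float normalized = Mathf.Clamp01(currentHealth / maxHealth);
        return Mathf.Clamp01(utilityCurve.Evaluate(normalized));
    }
}
```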
As always, huge thanks to our testers who are willing to test the framework with us and implement Dani AI into their own projects. If you have any questions or would like to test Dani AI, please shoot us an email at firstname.lastname@example.org or find us on Facebook or @initialprefabs on Twitter.
Happy New Year! As 2016 closes, we will leave you a quick update on what’s happening and what to expect:
Dani AI is getting a buff
We are currently working on rebuilding Dani AI from the ground up and pushing it out with our wonderful testers. So far, the improvements we made for Dani are:
Adding real-time display of an agent’s thoughts, with edit support
Simplifying workflow to better reflect the human thought process
Simplifying the API to feel more like Unity’s API
Optimizing performance (1200 -> 2400 max agents on a Haswell Core i3, some people run on potato rigs)
Improving stability (handles exceptions more gracefully)
There are still some kinks to work out, but overall it is in a stable state. Of course, seeing is believing, so here’s a clip of a bunch of knights celebrating the new year (we don’t have an actual conversation animation yet, so their way of communicating is by jumping a certain number of times). Watch the guy in the shining armor.
And here is a clip of the knight behaving in first person:
The beauty is that we managed to sync animations with Dani via root motion, something we hadn’t been able to do until just recently. No more foot sliding!
If you would like to be a tester, please contact us!
The knights are becoming more human
The previous two clips are also part of the overhaul we are making on Modern Knights, our third-person shooter game. Earlier, we focused a lot on the knights attacking and doing surprise attacks. But a knight is not without honor, and thus is humane to some degree. When they are unaware of a threat, they will default to off-duty activities such as walking around and making small talk with fellow knights.
When will you be able to see the more finished version? We might be showing our next version at the Playcrafting Winter Play expo on January 26th! Details here.