Tag Archives: gamedev

I made an orbital camera controller for someone who wanted help on the Godot Discord channel. Here’s the source; when attached to a Camera node, it orbits a follow target based on mouse movement:

extends Camera

export var follow_target_path:NodePath = ""
var follow_target:Node
export var follow_distance:float = 5.0
export var follow_height:float = 1.0
export var mouse_sensitivity_x:float = 0.005
export var mouse_sensitivity_y:float = 0.005

var last_mouse_delta:Vector2 = Vector2()
var mouse_accumulator:Vector2 = Vector2() # We track this ourselves instead of using the mouse position because when the mouse is captured, the reported position is always the screen center.

func _ready():
	follow_target = get_node(follow_target_path)

func _process(delta):
	# Looking. Rotate _this_. Tilt _camera_.
	mouse_accumulator += last_mouse_delta # TODO: Cap the Y component.
	last_mouse_delta = Vector2() # Reset after we process the mouse so we don't accumulate between events.
	# Calculate our new position on a sphere around the look target.
	var look_target = follow_target.global_transform.origin + Vector3(0, follow_height, 0)
	var target_position = look_target + \
		Vector3(cos(mouse_accumulator.x*mouse_sensitivity_x), sin(mouse_accumulator.y*mouse_sensitivity_y), sin(mouse_accumulator.x*mouse_sensitivity_x))*follow_distance
	self.look_at_from_position(target_position, look_target, Vector3(0, 1, 0))

# For mouse handling:
func _input(event):
	if event is InputEventMouseMotion:
		# event.position will always be the screen center when in capture mode. Same with event.speed.
		last_mouse_delta = event.relative

The player controller is fairly straightforward, so I’ve not included it as a separate gist. For a KinematicBody player, one can move relative to the camera direction like so:

extends KinematicBody

export var walk_speed:float = 5.0

func _physics_process(delta):
	# Walk in the direction the camera is pointing. (move_and_slide is a physics call, so it belongs in _physics_process.)
	var camera = get_viewport().get_camera()
	var dy = int(Input.is_action_pressed("move_forward")) - int(Input.is_action_pressed("move_backward"))
	var dx = int(Input.is_action_pressed("move_left")) - int(Input.is_action_pressed("move_right"))
	var move = (camera.global_transform.basis.x * -dx) + (camera.global_transform.basis.z * -dy)
	move = Vector3(move.x, 0, move.z).normalized()  # Take out the 'looking down' component.
	self.move_and_slide(move*walk_speed)

Don’t Crush Me is a game about pleading for your life with a robotic garbage compactor. It came up many years ago during a discussion in the AwfulJams IRC channel. The recent advent of Smooth Inverse Frequency (SIF) provided an ample opportunity to revisit the idea with the benefits of modern machine learning. In this post we’re going to cover building SIF in Rust, compiling it to a library we can use in the Godot game engine, and then building a dialog tree in GDScript to control our gameplay.

First, a little on Smooth Inverse Frequency:
In a few words, SIF involves taking a bunch of sentences, converting them to row vectors, and removing the principal component. The details are slightly more involved, but not MUCH more involved. Part of the conversion to row vectors involves tokenization (which I largely ignore in favor of splitting on whitespace for simplicity) and smoothing based on word frequency (which I also currently ignore).
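
To make that concrete, here’s a minimal sketch of the simplified pipeline in GDScript. The `word_vectors` Dictionary (token to Array of floats) is an assumed input, and both the frequency weighting and the principal-component subtraction are skipped, so treat this as an illustration of the shape of the thing rather than the actual Rust implementation:

# Sketch only: word_vectors is an assumed Dictionary of token -> Array of floats.
func embed_sentence(sentence:String, word_vectors:Dictionary) -> Array:
	var accumulator = []
	var count = 0
	for token in sentence.to_lower().split(" ", false): # Whitespace 'tokenization'.
		if not word_vectors.has(token):
			continue # Skip out-of-vocabulary words.
		var vec = word_vectors[token]
		if accumulator.empty():
			accumulator = vec.duplicate()
		else:
			for i in range(accumulator.size()):
				accumulator[i] += vec[i]
		count += 1
	if count > 0:
		for i in range(accumulator.size()):
			accumulator[i] /= count # Plain average; real SIF weights by frequency.
	return accumulator

func cosine_similarity(a:Array, b:Array) -> float:
	var dot = 0.0
	var norm_a = 0.0
	var norm_b = 0.0
	for i in range(a.size()):
		dot += a[i] * b[i]
		norm_a += a[i] * a[i]
		norm_b += b[i] * b[i]
	if norm_a == 0.0 or norm_b == 0.0:
		return 0.0 # One of the sentences had no known words.
	return dot / (sqrt(norm_a) * sqrt(norm_b))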

Really, one of the “largest” challenges in this process was taking the GloVe vectors and embedding them in the library so that GDScript didn’t have to read anything from a multi-gigabyte file. The GloVe 6B 50-D uncased vectors take up only about 150 megs in an optimal float format, and I’m quite certain they can be made more compact still. Additionally, since we know all of the tokens in advance, we can use a Perfect Hash Function to optimally index into the words at runtime.

With our ‘tokenize’ and ‘vectorize’ functions defined, we are free to put these methods into a small Rust GDNative library and build it out. After an absurdly long wait for the build to compile (~20 minutes on my Ryzen 3950X), we have a library! It’s then a matter of adding a few supporting config files, and we have a similarity method we can use:
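
The exact API depends on how the NativeScript gets registered, so the resource path and method name below are illustrative rather than the real ones:

# Illustrative only: the path and method name depend on how the
# NativeScript is registered in the .gdnlib/.gdns config files.
onready var sif = preload("res://sif.gdns").new()

func _ready():
	var score = sif.similarity("Don't crush me!", "Please halt the compression.")
	print(score) # Closer to 1.0 means the sentences are closer in meaning.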

Now the less fun part: writing dialog. In an older jam game, Hindsight is 60 Seconds, I capped things off with a dialog tree as part of a last-ditch effort to avoid doing work on things that mattered. The structure of that tree was something like this…

const COMMENT = "_COMMENT"
const ACTION = "_ACTION"
const PROMPT = "_PROMPT"
const BACKGROUND = "_BACKGROUND"
var dialog = {
     "_TEMPLATE": {
         COMMENT: "We begin at _START. Ignore this.",
         PROMPT: "The dialog that starts this question.",
         ACTION: "method_name_to_invoke",
         "dialog_choice": "resulting path name or a dictionary.  If a dictionary, parse as though it were a path on its own.",
         "alternative_choice": {
             PROMPT: "This is one of the ways to do it.",
             "What benefit does this have?": "question",
             "Oh neat.": {
                 PROMPT: "We can go deeper.",
                 "…": "_END"
             }
         }
     },

I like this format. It’s easy to read and reason about, but it’s limited in that only one dialog choice corresponds to one action. For DCM I wanted to be able to have multiple phrasings of the same thing without repeating the entire block. Towards that end, I used a structure like this:

# Sentinel keys for the transitions below (values illustrative, mirroring the first format):
const TRIGGER_PHRASES = "_TRIGGER_PHRASES"
const TRIGGER_WEIGHTS = "_TRIGGER_WEIGHTS"
const NEXT_STATE = "_NEXT_STATE"
const RESPONSE = "_RESPONSE"
const PLACEHOLDER = "_PLACEHOLDER"
const ON_ENTER = "_ON_ENTER"

var dialog_tree = {
    "START": [ # AI Start state:
        # Possible transitions:
        {
            TRIGGER_PHRASES:["Hello?", "Hey!", "Is anyone there?", "Help!", "Can anyone hear me?"],
            TRIGGER_WEIGHTS: 0, # Can be an array, too.
            NEXT_STATE: "HOW_CAN_I_HELP_YOU",  # AI State.
            RESPONSE: "Greetings unidentified waste item.  How can I assist you?",
            PLACEHOLDER: "Can you help me?",
            ON_ENTER: "show_robot"  # When we run this transition.
        },

        {
            TRIGGER_PHRASES: ["Stop!", "Stop compressing!", "Don't crush me, please!", "Don't crush me!", "Wait!", "Hold on."],
            NEXT_STATE: "STOP_COMPRESS_1",
            RESPONSE: "Greetings unidentified waste item.  You have asked to halt the compression process.  Please give your justification.",
            PLACEHOLDER: "I am alive.",
            ON_ENTER: "show_robot"
        },

        {
            TRIGGER_PHRASES: ["Where am I?", "What is this place?"],
            NEXT_STATE: "WHERE_AM_I",
            RESPONSE: "Greetings unidentified waste item.  You are in the trash compactor.",
            ON_ENTER: "show_robot"
        }
    ],

This has proven to be incredibly unwieldy and, if you are diligent, you may have realized it’s just as possible to get the same “multiple trigger phrases” in the first approach via some simple splitting on a special character like “|”.
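
For illustration, a hypothetical entry in the old format could pack several phrasings into one key and split them apart at load time:

# Hypothetical: several phrasings share one outcome, separated by '|'.
var dialog = {
	"Stop!|Stop compressing!|Don't crush me!": "STOP_COMPRESS_1",
}

func phrasings_for(key:String) -> PoolStringArray:
	return key.split("|")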

So how well does it work? The short answer is, “well enough.” It has also highlighted a much more significant issue: the immensity of the input space. Initially, I thought that using a placeholder in the input text would help anchor and bias the end-user’s choices and hide the seams of the system. In practice, it was still a fraught endeavor.

All things considered, I’m still proud of how things turned out. It’s a system that’s far from perfect, but it’s interesting and it was plenty satisfying to build. I hope that people enjoy the game after the last bits are buffed out (hopefully before GDC 2020).

At the close of my earlier update I mentioned wanting to try ‘Tracer-style’ time travel where only the player moves backwards and everything else stays in place. I gave it a go and got it working, but it wasn’t particularly interesting. It was basically just the player moving in the opposite direction. Animation could probably jazz that up, but a more fun idea came to me in the middle of a sleepless night:

Seeing the future.

Trivially, if everything in the world rewinds and the player can make different decisions, that’s basically seeing the future. And that’s what I built.

It’s not perfect. You’ll notice that the dynamic cubes retain their velocity after the time rewind happens, but that’s solvable.

Here’s how it works: there’s a global time keeper which records the current tick. The base class has three methods (_process, get_state, and set_state) and two variables (start_tick and history[]).

The global time keeper sends a signal when time starts to rewind. During the rewind process, each tick is a step backwards instead of a step forward. The _process method of the base class checks to see if a rewind is active and, if so, calls set_state(history[global_tick]). If rewind is not active, we append to or update the history. There’s some nuance to tracking deltas and despawning, but really that’s about it. Simple, eh?
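
Here’s a minimal sketch of that base class. The TimeKeeper autoload, its global_tick counter, and its rewinding flag are assumed names standing in for the real global time keeper, and for brevity this polls a flag instead of wiring up the rewind signal:

# Sketch under assumed names: TimeKeeper is an autoload singleton
# exposing a global_tick counter and a rewinding flag.
extends Spatial

var start_tick:int = 0
var history = [] # One recorded state per tick since start_tick.

func _ready():
	start_tick = TimeKeeper.global_tick

func _process(delta):
	var index = TimeKeeper.global_tick - start_tick
	if TimeKeeper.rewinding:
		if index >= 0 and index < history.size():
			set_state(history[index]) # Play our old state back.
	elif index < history.size():
		history[index] = get_state() # Overwrite a re-lived tick.
	else:
		history.append(get_state()) # Record a brand-new tick.

func get_state():
	return {"transform": global_transform}

func set_state(state):
	global_transform = state["transform"]

A RigidBody variant would also record and restore linear_velocity, which is the fix for the cubes keeping their speed after a rewind.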