It's the place where nbilling writes down his personal thoughts on making games.

2025-07-31

Since the last update I've done lots of UI work and written an experimental new renderer that supports variable roof/ceiling/floor height per tile. I spent a while thinking about what the new anatomy of a tile should be, eventually settling on a variable roof height, ceiling height, and floor height per tile, plus a single bedrock height that is constant across the whole engine.
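
To make that anatomy concrete, here is a minimal sketch of what a tile might look like under this scheme (the names are my own illustration, not the engine's actual code):

  // Hypothetical tile layout for the variable-height renderer.
  // All heights are measured up from a single engine-wide bedrock plane.
  struct Tile
  {
      float floorHeight;   // top of the floor slab (the walkable surface)
      float ceilingHeight; // bottom of the ceiling slab (top of the open space)
      float roofHeight;    // top of the tile's solid column, seen from outside
      int   wallTexture;
      int   floorTexture;
      int   ceilingTexture;
  };

  // Everything below floorHeight and above ceilingHeight (up to roofHeight)
  // is solid; bedrock is shared by the whole engine.
  constexpr float kBedrockHeight = 0.0f;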

The existing level geometry raycasting algorithm isn't sufficient for this task, because it isn't enough to know the height of a wall where the ray entered a tile; we also need to know where the ray exited (for example, in order to calculate where that tile's ceiling/floor starts and ends in screen Y). We also no longer want the algorithm to stop when it hits a wall, because another wall could be visible behind it, above or below.
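
Here's a rough sketch of how a raycast like that might report per-tile entry/exit distances instead of stopping at the first wall. It's just a standard DDA grid walk; the types and names are illustrative, not the engine's actual code.

  #include <algorithm>
  #include <cmath>
  #include <vector>

  // One grid cell crossed by the ray, with the distances along the ray at
  // which it entered and exited that cell. Both are needed to project a
  // tile's floor/ceiling edges to screen Y at the correct depths.
  struct RayHit
  {
      int   tileX, tileY;
      float tEnter, tExit;
  };

  // Walk the grid with a DDA, but never stop at a wall: collect every tile
  // up to maxDist so walls behind lower/shorter walls can still be drawn.
  // (Sketch only; real code would also clamp to the map bounds.)
  std::vector<RayHit> castRay(float ox, float oy, float dx, float dy, float maxDist)
  {
      std::vector<RayHit> hits;
      int tileX = (int)std::floor(ox);
      int tileY = (int)std::floor(oy);
      int stepX = dx < 0 ? -1 : 1;
      int stepY = dy < 0 ? -1 : 1;
      // Distance along the ray to the next vertical/horizontal grid line.
      float tMaxX = dx != 0 ? (stepX > 0 ? tileX + 1 - ox : ox - tileX) / std::fabs(dx) : INFINITY;
      float tMaxY = dy != 0 ? (stepY > 0 ? tileY + 1 - oy : oy - tileY) / std::fabs(dy) : INFINITY;
      float tDeltaX = dx != 0 ? 1.0f / std::fabs(dx) : INFINITY;
      float tDeltaY = dy != 0 ? 1.0f / std::fabs(dy) : INFINITY;

      float tEnter = 0.0f;
      while (tEnter < maxDist)
      {
          float tExit = std::min(std::min(tMaxX, tMaxY), maxDist);
          hits.push_back({tileX, tileY, tEnter, tExit});
          if (tMaxX < tMaxY) { tileX += stepX; tMaxX += tDeltaX; }
          else               { tileY += stepY; tMaxY += tDeltaY; }
          tEnter = tExit;
      }
      return hits;
  }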

With the new raycast algorithm in place, I started out by writing a purely vertical scanline renderer as an experiment, and this was fairly straightforward to do untextured. See:

So far so good; next up was textures. I knew texturing the walls would be easier so I did that first, and it worked more or less unchanged from my existing code that draws vertical strips of wall texture without perspective correction.

Where I ran into a roadblock was texturing the ceilings/floors. My first attempt at this resulted in horribly distorted textures. At this point the new renderer was only drawing vertical scanlines, and without any perspective correction. I managed to fix part of the distortion by doing per-pixel perspective correction for vertical strips of ceiling/floor texture (at great processing expense) but another issue persisted whereby partially clipped areas of ceiling/floor would get stretched way out of proportion. I never worked out (even conceptually) how to properly clip the ceiling/floors to avoid that stretching-- I think I gave up after about a full day of debugging. At that point I recognized that fixing this naive way of rendering ceilings/floors would be more effort than just implementing the more optimal horizontal scanline method (what my existing ceiling/floor renderer does) for variable heights.

The vertical scanline rendering code already calculates the screen Y where each strip of ceiling/floor should begin and end, and it knows the height of the strip in world space, so it can save this information, a bit like Doom's visplanes. Then the horizontal scanline renderer can use that information to render ceiling/floor pixels at the right height. There are differences from visplanes that bear mentioning: we have tiles with fixed dimensions as opposed to Doom's sectors, and we don't group together multiple vertical strips for a single tile the way visplanes do for a sector. Instead we end up grouping together all vertical strips at the same height, which also means that when rendering a given height the texture being sampled can change from pixel to pixel, whereas a visplane is sampled from a single texture.

Next, the horizontal scanline renderer also needs to be changed so that it does a pass for every height, although it is simple to skip heights where no ceilings/floors are visible. Additionally, for each height we can keep track of the min/max screen Y that actually has pixels to render, and skip all the horizontal lines we know are empty. See:
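
For the curious, here is a rough sketch of how that bookkeeping might fit together, from recording strips in the vertical pass to drawing them per height in the horizontal pass. The names and types are illustrative, not the renderer's actual code.

  #include <algorithm>
  #include <cstdint>
  #include <map>
  #include <vector>

  // For each world-space height, remember which screen columns have a visible
  // ceiling/floor strip and where that strip starts/ends in screen Y. Unlike a
  // Doom visplane, strips at the same height can come from different tiles, so
  // the texture has to be looked up per pixel during the horizontal pass.
  struct HeightSpans
  {
      std::vector<int16_t> yTop;    // per screen column; -1 means no strip
      std::vector<int16_t> yBottom; // per screen column
      int minY = INT16_MAX;         // lets the horizontal pass skip empty rows
      int maxY = -1;
  };

  std::map<float, HeightSpans> g_spansByHeight; // keyed by world-space height

  // Called from the vertical scanline pass once it knows where a ceiling/floor
  // strip of a given height lands in screen Y for column x.
  void recordStrip(float height, int x, int yTop, int yBottom, int screenW)
  {
      HeightSpans& spans = g_spansByHeight[height];
      if (spans.yTop.empty())
      {
          spans.yTop.assign(screenW, -1);
          spans.yBottom.assign(screenW, -1);
      }
      spans.yTop[x] = (int16_t)yTop;
      spans.yBottom[x] = (int16_t)yBottom;
      spans.minY = std::min(spans.minY, yTop);
      spans.maxY = std::max(spans.maxY, yBottom);
  }

  // The horizontal pass: one pass per recorded height, only over rows that can
  // contain pixels, sampling whichever tile's texture lies under each pixel.
  void renderFloorsAndCeilings(int screenW)
  {
      for (auto& [height, spans] : g_spansByHeight)
      {
          if (spans.maxY < 0)
              continue; // nothing visible at this height
          for (int y = spans.minY; y <= spans.maxY; ++y)
              for (int x = 0; x < screenW; ++x)
              {
                  if (spans.yTop[x] < 0 || y < spans.yTop[x] || y > spans.yBottom[x])
                      continue;
                  // A horizontal scanline at a fixed world height has constant
                  // depth, so perspective-correct texture coordinates are cheap:
                  // drawPixel(x, y, sampleTexel(height, x, y)); // engine's own routines
              }
      }
  }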

2025-06-16

Yesterday and today I was working on the user-configurable game options and controls. I started out implementing them the same way, but quickly realized that it's easier to handle them differently. For game options, I think, the types can be different but you'll only have one setting at a time-- eg. resolution is a different type of option from fullscreen/windowed, but at any one time you only have a single resolution set. Input bindings are different: "interact" might have the key E assigned to it, and also the right mouse button. The key assigned to it never changes type (it's always a key), and the mouse button assigned to it never changes type (it's always a mouse button).

Because the input bindings are homogeneous I put them in a fixed-length array of structs (eg. each one contains the key, the mouse button, and the controller button), whereas because the options are heterogeneous I put them in a fixed-length array of unions (eg. a value can be a bool, an int, or a float).
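
Roughly, the shapes involved look something like this (illustrative types only; the real names are generated and may differ):

  // Input bindings are homogeneous: every binding has the same three slots.
  struct InputBinding
  {
      int scancode;         // keyboard key assigned to the action (e.g. an SDL scancode)
      int mouseButton;      // mouse button, or a NONE sentinel
      int controllerButton; // controller button, or a NONE sentinel
  };

  // Options are heterogeneous: each one holds a single value of varying type.
  enum class OptionType { Bool, Int, Float };

  union OptionValue
  {
      bool  b;   // eg. fullscreen on/off
      int   i;   // eg. a resolution index
      float f;   // eg. a volume slider
  };

  struct Option
  {
      OptionType  type;
      OptionValue value;
      OptionValue defaultValue; // lets everything be restored to defaults
  };

  // Both live in fixed-length arrays indexed by enums that the codegen script
  // emits from the txt files (sketched later in this post).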

At game start, options and input bindings are read from ini files, and if the player changes them they can be written back to those files. The info records for each option and input binding have a default value so everything can be restored to defaults, or a new ini can be generated if one isn't found.

I plan to write the menus for changing options/input bindings using the same info records to make that fairly simple.

All the C++ code initializing the static arrays of info records, and the enums that identify the elements of those arrays, is generated by a very simple Python script from txt files stored with the code (there's a sketch of what the generated output might look like after the excerpts below).

So eg. part of options.txt:


  # Name Type DefaultValue ShowInMenu MenuName

  Fullscreen Bool false true Fullscreen

Part of inputBindings.txt:


  # Name DefaultScancode DefaultMouseButton

  # Player
  Shoot UNKNOWN LEFT
  MoveForward W NONE
  MoveBackward S NONE
  StrafeLeft A NONE
  StrafeRight D NONE
  Interact E NONE
  Reload R NONE
  Grenade G NONE
  Melee Q NONE
  Map M NONE
  Pause ESCAPE NONE
  Wep1 1 NONE
  Wep2 2 NONE
  Wep3 3 NONE
  Wep4 4 NONE
  MapEditor L NONE
  Console GRAVE NONE

  # Menus
  MenuUp UP NONE
  MenuDown DOWN NONE
  MenuLeft LEFT NONE
  MenuRight RIGHT NONE
  MenuSubmit RETURN NONE
  MenuCancel ESCAPE NONE
  MenuDelete X NONE

At any rate, this was the simplest system I could come up with to keep track of these things and make it easy to add more.
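
For a sense of the output, here is a sketch of what the generated code for the input bindings above might look like. The names and layout are my guess at plausible output, not the actual generated code, which presumably resolves the scancode/button names to real SDL values rather than keeping them as strings.

  // Hypothetical shape of the generated output (not the real generated code).
  enum class InputAction
  {
      Shoot,
      MoveForward,
      MoveBackward,
      StrafeLeft,
      StrafeRight,
      Interact,
      // ...one entry per non-comment line of inputBindings.txt...
      Count
  };

  struct InputBindingInfo
  {
      const char* name;               // key used in the ini file
      const char* defaultScancode;    // as written in the txt, eg. "W"
      const char* defaultMouseButton; // "NONE" when unbound by default
  };

  static const InputBindingInfo kInputBindingInfos[] =
  {
      { "Shoot",        "UNKNOWN", "LEFT" },
      { "MoveForward",  "W",       "NONE" },
      { "MoveBackward", "S",       "NONE" },
      { "StrafeLeft",   "A",       "NONE" },
      { "StrafeRight",  "D",       "NONE" },
      { "Interact",     "E",       "NONE" },
      // ...
  };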

2025-06-15

The raycaster game engine has progressed a lot since the posts I made about it last year. The last thing I added was the menus for saving/loading the game:

2024-09-26

Today I want to write down thoughts I had on a C++ game engine. Now that I'm on the home stretch with the top-down shooter in Unity and expanding content for that, my mind is free to wander onto planning for next time.

I started reading Game Programming in C++ by Sanjay Madhav and have found it a surprisingly light read given the spooky subject matter. I like the "game object" model he suggests; it's roughly like the UE one-- game objects are Actors and you attach Components to them. Stuff like a sprite to be rendered is a Component (this is also very similar to how you use Components in Unity, of course). It has great sample code that sets up this framework and uses it in an easy-to-follow way. I understand why it doesn't go into object reuse (or I wasn't able to find it if he does), but I wish there were a chapter in the back that discussed ways to pool and reuse Actors/Components. Ditto for custom binary formats and custom tools for asset processing/creation. Enough of that, I'm getting carried away again.

I'm glad I messed around with the projects from the book and went down the path of using SDL2, too. SDL2 seems great so far. Maybe at some time in the future I'll want to pull out rendering code I'm writing on top of SDL2 now and do it in OpenGL or Vulkan, but I doubt I'll need to for this next project, and SDL2 handles input just fine.

After my little existential crisis about how to effectively pool the inheritance hierarchy of Actors and Components was resolved, it was about time to have one about the format of maps and game data! I've used json before for this kind of stuff, but for maps I don't think it's even worth trying to use json. The requirements I settled on were that the overall system had to allow me to edit maps graphically and game data in a normal text editor. For one thing that means I need to make my own map editor, which I was already planning on, and that I either need to load game data as strings or I need a compiler. So far so good. I had a near breakdown trying to get my VS C++ project to automatically generate headers from flatbuffers .fbs files-- which involved going so far as to recreate the entire project in cmake just so that cmake could figure out how to achieve that build step for me... Ultimately, though, I decided in the interest of my sanity that I would just use a custom, very simple text format for my game data files and write a custom parser and compiler for it (credit to javidx for this inspiration).

My little untextured raycasting engine is humming along; it has a map editor with a paint brush and an eraser. The UI is next up and I have many grandiose intentions toward that too. I think I will have a visual tree like the Unity canvas, where nodes use their parent's position as an offset when drawing themselves but can be as wide/tall as they want. There will be some function that runs once per frame and does a "raycast" through the mouse pointer to find the highest Z-order bounding box in the UI that it's overlapping, and sends enter/exit/click events. I don't think UI objects will have any components; they can just have nested objects that implement whatever functionality I want to compose. Until that fails horribly, that's the plan.
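
In code, I imagine the skeleton looking something like this (a sketch of the plan, not anything I've written yet; the names are placeholders):

  #include <memory>
  #include <vector>

  // Planned UI tree: each node positions itself relative to its parent and can
  // be hit-tested once per frame by a "raycast" through the mouse pointer.
  struct UINode
  {
      float localX = 0, localY = 0;  // offset from the parent node
      float width = 0, height = 0;   // may exceed the parent's bounds
      int   zOrder = 0;
      std::vector<std::unique_ptr<UINode>> children;

      virtual void onEnter() {}      // handlers the per-frame pass would call
      virtual void onExit() {}
      virtual void onClick() {}
      virtual ~UINode() = default;
  };

  // Find the highest-Z node whose bounding box contains the mouse, so it can
  // be sent enter/exit/click events. Offsets accumulate down the tree.
  UINode* hitTest(UINode& node, float mouseX, float mouseY,
                  float parentX = 0, float parentY = 0)
  {
      float x = parentX + node.localX;
      float y = parentY + node.localY;

      UINode* best = nullptr;
      if (mouseX >= x && mouseX < x + node.width &&
          mouseY >= y && mouseY < y + node.height)
          best = &node;

      for (auto& child : node.children)
      {
          UINode* hit = hitTest(*child, mouseX, mouseY, x, y);
          if (hit && (!best || hit->zOrder >= best->zOrder))
              best = hit;
      }
      return best;
  }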

2024-09-25 Part 2

During work on this top-down shooter game it also happened that I had to start ditching the PhysX implementations of motion and collision detection. The precise detail of movement in a game, to me, is too important to leave to a black box. So the first thing to go was the player's motion-- now the player controller code can manually keep track of its velocity and apply acceleration and drag, and "collide n' slide" with level geometry exactly the way that feels good. After that it didn't make sense to use PhysX to move monsters either. In this kind of game the monsters move at fixed speeds; they don't need any inertia. The player controller, monsters, projectiles-- now they all have custom physics code.
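
The "collide n' slide" part is simple to sketch. This is just a generic illustration of the idea against a tile grid, not the game's actual code (which is C# in Unity), and the circular collider is approximated by its bounding square to keep it short.

  #include <cmath>

  struct Player
  {
      float x, y;    // center position in tile units
      float vx, vy;  // velocity
      float radius;  // collider radius
  };

  // Placeholder: the real game would query its tilemap here.
  bool isSolid(int /*tileX*/, int /*tileY*/) { return false; }

  // Does a circle at (x, y) overlap any solid tile? (Bounding-square test.)
  bool overlapsSolid(float x, float y, float r)
  {
      for (int ty = (int)std::floor(y - r); ty <= (int)std::floor(y + r); ++ty)
          for (int tx = (int)std::floor(x - r); tx <= (int)std::floor(x + r); ++tx)
              if (isSolid(tx, ty))
                  return true;
      return false;
  }

  void movePlayer(Player& p, float ax, float ay, float drag, float dt)
  {
      // Manual acceleration and drag instead of a physics engine.
      p.vx += (ax - p.vx * drag) * dt;
      p.vy += (ay - p.vy * drag) * dt;

      // Move one axis at a time; if a move would overlap a solid tile, cancel
      // just that axis so the remaining motion slides along the wall.
      float nx = p.x + p.vx * dt;
      if (!overlapsSolid(nx, p.y, p.radius)) p.x = nx; else p.vx = 0;

      float ny = p.y + p.vy * dt;
      if (!overlapsSolid(p.x, ny, p.radius)) p.y = ny; else p.vy = 0;
  }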

Initially, to get field of view and fog of war to work the way I wanted, I had to write a raycasting algorithm on the game's tilemaps (remember this is a top-down game). So in spite of this being a 3D game engine rendering quads in an orthographic projection where the distance never changes, and all this other bending over backwards... I still end up raycasting in a tile grid like a wolf3d renderer (this is my ALL TIME favourite webpage on raycasting, although I didn't know it existed at the time I was writing the raycaster for the top-down shooter).

As all this is unfolding, I start thinking about my next game. I would love to do something 3D but I don't want to transition to meshes yet. At this point I'm planning to keep using Unity for the foreseeable future, so the first thing I have to work out is how to represent the 3D level geometry in Unity. The file format doesn't really matter-- I mean it can't be a scene exactly because Unity is a "true" 3D renderer-- but it can be tiles or linedefs, stored in binary or text. I could load that stuff into Unity and generate a mesh (or meshes) for static level geometry and then use Unity's 3D renderer to render the game.

I spent several weeks thinking about alternative implementations of that, but in the end it just seemed like replacing all the physics in Unity and then generating mesh geometry to work around the 3D renderer is doing too much to shoehorn my game into the wrong engine. The straw that broke the camel's back was me trying to work out how to do collisions with level geometry. The simple thing to do would be generating a mesh collider from the generated level mesh, but then how do you handle doors, lifts, and moving platforms? You can keep track of vertices and move those around, but then to make collisions work you'd have to update the mesh collider it generated. Every frame? No way will that run fast.

You could create a separate mesh for the level geometry that moves, and move that instead of moving vertices. That's what a "true" 3D game would certainly do, but then it's making this geometry generation system even more complicated, and it's not how this game is fundamentally structured.

It's as though I'd started running into all these kludges that would be necessary to accommodate the basic structure of my game in Unity, before I'd even written a line of code. This is making me very frustrated! Okay, what if I take a step back? Rendering a map from linedefs is a bit overwhelming since it's my first time writing the renderer and I'm doing it from scratch-- I'd probably need real space partitioning (eg. BSP) even to raycast against walls. Raycasting on a grid is much simpler, and I've done it recently for my top-down shooter.

If level geometry is a grid then the same raycast algorithm works for rendering and for physics, leaving aside how we render, check line of sight against, and collide with dynamic objects. Doors work pretty elegantly in a grid-- a door is just one type of tile. The renderer can offset it from the walls by adjusting the height at which it draws the door's columns, and the same trick works for making doors open/close vertically. Colliding with doors requires no special work at all; you just have overlap/raycast tests take the door state into account.
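
A toy sketch of what I mean by "just one type of tile" (illustrative names, not real code):

  enum class TileType { Empty, Wall, Door };

  struct Tile
  {
      TileType type;
      float    doorOpenAmount; // 0 = closed, 1 = fully open (slid up into the ceiling)
  };

  // The same solidity test serves rendering raycasts, line-of-sight checks,
  // and collision overlap tests; the door's state is all they need to know.
  bool blocksMovement(const Tile& t)
  {
      if (t.type == TileType::Wall) return true;
      if (t.type == TileType::Door) return t.doorOpenAmount < 0.95f;
      return false;
  }

  // The renderer uses the same state to decide how tall to draw the door's
  // columns, which is also how the vertical open/close animation falls out.
  float doorDrawHeight(const Tile& t, float ceilingHeight)
  {
      return ceilingHeight * (1.0f - t.doorOpenAmount);
  }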

So now I've got this decision to make: strip requirements way back to a wolf3d-like game and write my own engine in C++, or keep hacking away at Unity and start generating meshes for 2.5D level geometry with the aim of eventually tunneling out the "other side" into 3D with meshes and skeletal animation in Unity where I can finally drop all the kludges and workarounds?

No spoilers, but you know which one sounds more exciting right?

2024-09-25

Since I released SprawlRacer I've been working with a great artist on a top-down shooter (demo footage). The level creation workflow for this game is more involved than it was for SprawlRacer. The map is visually more complicated and there are puzzles and encounters, all that stuff.

I'm glad that I had some experience making maps (and making a workflow for making maps...) in SprawlRacer first. Lots of the problems repeated, and the new ones that came up were less overwhelming because they were extensions of what I'd seen before. I needed a trigger system for switches to open doors and for keys to allow the player to manually open doors; teleporters needed to connect with one another; levels needed some way to tell the game which level an exit led to. I didn't want to create versions of every door/trigger/exit in the game data or in a Unity prefab, so for this kind of "edit time" configuration I put fields on the spawner game objects that write their values to the object they spawn when the level first loads.

For some kinds of objects I found that a symbolic Gizmo wasn't easy enough to read, for example "props", which vary greatly in size and whose colliders are important to the gameplay of a level. I had to write editor scripts for prop spawners to read the size of colliders from game data and display it as a wireframe Gizmo. It's cool that you can do this in Unity so easily, and it made me wonder what it would be like to make a stripped-down level editor specialized for what my particular game needs. And maybe, if I'm creating solutions to all these things in Unity and carrying them forward from one project to the next, then if I solved them in my own editor I could carry that forward too.

2024-09-20

Today I'll talk a bit about my experience setting up the Unity editor to edit levels for a game. You have the freedom in Unity to make a "level" whatever you want. It could be a self-contained scene, it could be entirely your own data scheme and reconstituted onto a blank scene at load time, or anything in between.

SprawlRacer didn't allow saving in the middle of a race, so I never had to think about how to serialize/deserialize the state of all the game objects in the scene. Static level "geometry" consisted of layered tilemaps; dynamic objects like pedestrians and pick-ups would get spawned in by spawner objects; the spline the AI used to navigate the track was just a built-in Unity spline component. Unity made laying out all those elements in each track fairly intuitive, so, to me, using the Unity editor as my level editor was a no-brainer. The definitions of all these elements were done with Unity prefabs since there weren't many of them to keep track of, although I had started to move some things into json files when there were a lot of variations (eg. the roster of AI racers, the lists of tracks for each division).

This was my first experience designing levels and coming up with a workflow for creating them. You can go pretty far with the Unity editor just plopping prefabs down, but it didn't take long for me to get frustrated by the fiddliness and the difficulty of "reading" the scene view of the map. I made a few changes to the editor experience for my own convenience, the most notable of which was creating custom Gizmo textures for object spawners, racer starting points, etc. Below is a screenshot of the Unity "scene view" of a SprawlRacer level being edited; notice the yellow Gizmos all over.

Almost all the physics of SprawlRacer was built-in PhysX stuff too. Dynamic objects had built-in 2D box/circle colliders, and static geometry used the built-in precombined tilemap collision mesh component.

I was totally new to designing a level format, I knew I was going to make all the levels myself, the level state didn't need to be serialized/deserialized, and rendering/physics wasn't going to deviate from the Unity built-in happy path. So naturally I never considered making a custom level editor and level format. My "levels" ended up being somewhat of a hybrid-- common scenes that get loaded first and then the track scene layered on top of that additively.

I did learn a few things along the way that started me down the custom format path, however. First, the idea of separating the dynamic object (eg. pick-up) from the marker in the level that spawns it is a step towards thinking of a level format as distinct from a Unity scene. Second, the convenience of defining all the AI racers and track/division info as json made me want to move as much game data as possible into a custom format (text files) for my next game. Below is an example of some json defining an AI racer in the SprawlRacer game data.

2024-09-19

Welcome to the first post of nbilling's webpage!

I promise future posts will be shorter and lighter; this one ended up being sort of a confession.

I will post here about making games, or other programming. But if I'm being honest I mostly just like programming games. I've started many game projects that I never finished, but two games that I did finish are MegaCollider and SprawlRacer. Each of them I finished in about 3 months, starting with MegaCollider.

Prior to making MegaCollider I had been working on a game named TetraMage for nearly 3 years without being close to releasing it. At that point I felt, on the one hand, really glad that I had been working consistently on this game for so long, but on the other hand, rather frustrated with the pace of progress. I still hadn't released anything! TetraMage had itself been inspired by the idea that maybe if I scaled down the scope of my prior game ideas to something "boring" in terms of the underlying technology and focused all my efforts on just finishing something fun, then I could stop starting and quickly abandoning prototypes.

When I reached peak frustration with TetraMage (in late 2023?) I actually started an even more grandiose project that I wasted months on before finally getting a good idea: make the smallest possible thing that is fun to me and that I can release. Time box it to 3 months-- any features that don't make it or bugs that I can't fix just get dropped at the end. That one was MegaCollider. Almost no art assets, no level design, almost all the physics is just PhysX in Unity out of the box. But I think it's fun, and just recently I took a look at it after months of not thinking about it and it was much better than I remembered.

I learned a bunch of interesting lessons (Note: interesting for me-- I once got some good advice from a colleague to the effect of "just because a problem was hard for you doesn't make it a hard problem"), I was proud of finishing it, and it felt so great being able to start from scratch on my next game. The next one relaxed some of the constraints of the former. Now there were going to be art assets, but I was working alone so only things I could pull off myself. There would be level design, but again it had to be something I was capable of on my own in 3 months. I was still going to use the Unity engine and-- importantly!-- I was going to reuse as much code as possible. By the way, that was SprawlRacer.

Being able to reuse so much code in SprawlRacer was a lightbulb moment for me. Reusing code doesn't have to mean packaging it into a library with unit tests and putting it in a public repo. Frankly that kind of thing is such a drag and I really wonder how (or if?) people who do this with game code are able to be productive. Maybe they work at game companies with too many programmers solving the same problems, who knows? At my own work obviously I engage in this sort of thing but there it's culturally ingrained and probably domain-appropriate (we don't ship a boxed product every X years, let alone an entertainment product).

Somewhere in this period I started intensely enjoying making these games, rather than just doing it because I didn't want to die without producing any work that was personally meaningful. I changed my daily schedule so I could get up several hours before my family did and work on a game while I was fresh, rather than doing it after my kids went to sleep, when I was frequently just too tired to get anything done. I also started to realize which parts of the process were causing hang-ups or making it less fun.

The biggest one is "bad constraints". Most constraints are good, I reckon. Having a deadline is good, obligations in your life that dictate a daily rhythm are good, having a budget is good, having some fixed rendering, physics, editor, etc. technology is good. Bad ones are things that take away your ability to "just get on with it", maybe things that move the locus of control from inside to outside. Being constrained to art "assets" from a store or free website is a "bad" constraint to me. I found that I'd rather create crappy art myself. The "crappy art" constraint is a good constraint to me. Needing to hire someone, or depend on someone who hasn't already said they were going to work with you, is a "bad" constraint to me. Having to make a simple design overly complicated to fit it into a really general-purpose technology stack is a "bad" constraint IMO too. This last one I've coped with in Unity for a while, but more about that another time.

When I finished SprawlRacer I had several things I was unhappy with about the game that I wanted to do differently, which is a huge perk of making things on a short time limit. For the next game I was committed to it being a top-down shooter with animated sprites (even if I had to make the crappy animations myself!), a campaign of hand-made levels, and puzzles. As I was wrapping up SprawlRacer I took a risk (a risk for me, being that I am terrified of rejection) by reaching out to several people who had posted online that they were artists looking for programmers to collaborate with. I was still committed to starting and finishing this next game when I said I would, but if someone else wanted to collaborate with me then that was a win-win. If not, I would plan accordingly and do it on my own-- and I had already started producing prototype crappy art for the game at that point. Happily one person responded and we hit it off, so that game is underway and I'm glad to say it's very much on track.

In future posts I'll share some of what I've learned from my current project-- my first one with a collaborator, maybe some media from that game, and my plans for once that's released.

Modest as the two games I've finished are, I am still proud of them and very grateful for the lessons that I've learned that led to me making them. It took me decades of not getting started, and then doing it the wrong way, but I can't be mad now that I feel like I'm finally going in the right direction.

Cheers!

-nbilling