This post is about the framework code I’ve written as part of the application for my Honours project. The aim of the framework is to make working with DirectX 11 as painless as possible. I won’t actually discuss the project itself here. The framework will expand as the project goes on, hopefully blossoming into something other people might consider using, so taking stock while it’s in its simplest form seems like a worthwhile blog post.
## The Window
Window management works like it does in SFML. Sort of. That’s the goal, anyway.
The dxf::Window manages the application’s DirectX device and a bunch of other DirectX objects which it creates, like a swap chain and render target[^1]. Window::Create() sets up DirectX and the destructor cleans everything up, so the Window is the first thing an application using this framework needs to create before it gets on with important things.
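To give a flavour of the intended usage (the Create() parameters and the IsOpen()-style loop here are my guesses at where the API is heading, not its actual signatures):

```cpp
// Hypothetical usage of dxf::Window -- the real interface may differ.
dxf::Window window;
window.Create(1280, 720, "Honours Project"); // creates the device, swap chain and render target

while (window.IsOpen()) // assumed SFML-style main loop
{
    // ... update and render ...
}
// the destructor releases every DirectX object the Window created
```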
## Shaders
Currently there are only vertex and pixel shaders. Setting up a shader is straightforward: you point the framework at the shader code file and it compiles it.
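Under the hood that amounts to roughly the stock D3DCompileFromFile path. (The snippet below is only a sketch, with placeholder file, entry point and target names; the framework’s actual wrapper code differs in the details.)

```cpp
#include <windows.h>
#include <d3d11.h>
#include <d3dcompiler.h>

// 'device' is assumed to be the ID3D11Device* owned by the Window.
ID3DBlob* bytecode = nullptr;
ID3DBlob* errors   = nullptr;

// Compile the HLSL file on disk; "shader.hlsl", "main" and "vs_5_0" are placeholders.
HRESULT hr = D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                                "main", "vs_5_0", 0, 0, &bytecode, &errors);
if (FAILED(hr) && errors)
    OutputDebugStringA(static_cast<const char*>(errors->GetBufferPointer()));

// Turn the bytecode into a shader object.
ID3D11VertexShader* vertexShader = nullptr;
device->CreateVertexShader(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
                           nullptr, &vertexShader);
```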
Vertex shaders are special because they have an ID3D11InputLayout* associated with them. I used to have to set these up manually by filling out an array of D3D11_INPUT_ELEMENT_DESC structs and calling ID3D11Device::CreateInputLayout, which necessitates a tonne of pointless bespoke code which is a chore to write and easy to mess up[^2]. Now, though, I use black magic in the form of shader reflection to automatically generate the input layout object as demonstrated here (thanks, Bobby Anguelov!), which eases the process greatly. I’m wondering what else I could do with shader reflection.
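For reference, the reflection approach is roughly the following: a condensed sketch of the technique Anguelov describes, handling only float inputs for brevity, with 'device' and 'bytecode' assumed to exist from the compile step above.

```cpp
#include <d3d11.h>
#include <d3dcompiler.h>
#include <vector>

// Ask the compiled bytecode to describe its own input signature.
ID3D11ShaderReflection* reflection = nullptr;
D3DReflect(bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
           IID_ID3D11ShaderReflection, reinterpret_cast<void**>(&reflection));

D3D11_SHADER_DESC shaderDesc = {};
reflection->GetDesc(&shaderDesc);

// Build one D3D11_INPUT_ELEMENT_DESC per input parameter.
std::vector<D3D11_INPUT_ELEMENT_DESC> elements;
for (UINT i = 0; i < shaderDesc.InputParameters; ++i)
{
    D3D11_SIGNATURE_PARAMETER_DESC param = {};
    reflection->GetInputParameterDesc(i, &param);

    D3D11_INPUT_ELEMENT_DESC element = {};
    element.SemanticName         = param.SemanticName;
    element.SemanticIndex        = param.SemanticIndex;
    element.InputSlot            = 0;
    element.AlignedByteOffset    = D3D11_APPEND_ALIGNED_ELEMENT;
    element.InputSlotClass       = D3D11_INPUT_PER_VERTEX_DATA;
    element.InstanceDataStepRate = 0;

    // Choose a format from the component mask (assuming float components).
    if (param.Mask == 1)      element.Format = DXGI_FORMAT_R32_FLOAT;
    else if (param.Mask <= 3) element.Format = DXGI_FORMAT_R32G32_FLOAT;
    else if (param.Mask <= 7) element.Format = DXGI_FORMAT_R32G32B32_FLOAT;
    else                      element.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;

    elements.push_back(element);
}

// The bytecode is passed again so the layout can be validated against the signature.
ID3D11InputLayout* inputLayout = nullptr;
device->CreateInputLayout(elements.data(), static_cast<UINT>(elements.size()),
                          bytecode->GetBufferPointer(), bytecode->GetBufferSize(),
                          &inputLayout);
reflection->Release();
```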
When it’s time to render, you just Bind the shader to the device context along with all the other objects.
There are objects associated with shaders, too, such as Textures, Samplers, and ConstantBuffers, all currently rather skeletally implemented. I’m working on features as I come to need them.
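As a very rough idea of how I picture them being used once fleshed out (every name and signature below is a placeholder, since these classes barely exist yet):

```cpp
// Hypothetical usage -- none of this is the finished API.
dxf::Texture texture;
texture.Load(device, "crate.png");

dxf::Sampler sampler;
sampler.Create(device); // e.g. linear filtering, wrap addressing

struct PerObjectData { float worldViewProj[16]; };
dxf::ConstantBuffer<PerObjectData> constants;
constants.Create(device);

// At render time they get bound to the device context alongside the shader.
texture.Bind(context, 0);
sampler.Bind(context, 0);
constants.Bind(context, 0);
```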
## Meshes
The Mesh class manages vertex and index buffers. Currently I don’t handle non-indexed meshes. To create a mesh (in this case, an indexed quad):
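(The vertex struct and the dxf::Mesh interface below are placeholders for illustration rather than the framework’s finished API.)

```cpp
// A hypothetical position + UV vertex format.
struct Vertex
{
    float position[3];
    float uv[2];
};

const Vertex vertices[] =
{
    { { -0.5f,  0.5f, 0.0f }, { 0.0f, 0.0f } }, // top left
    { {  0.5f,  0.5f, 0.0f }, { 1.0f, 0.0f } }, // top right
    { {  0.5f, -0.5f, 0.0f }, { 1.0f, 1.0f } }, // bottom right
    { { -0.5f, -0.5f, 0.0f }, { 0.0f, 1.0f } }, // bottom left
};

const unsigned int indices[] = { 0, 1, 2, 0, 2, 3 }; // two triangles

// Hypothetical creation call -- builds the vertex and index buffers internally.
dxf::Mesh quad;
quad.Create(device, vertices, 4, indices, 6);
```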
This abstracts away nitty-gritty DirectX code, which is nice.
Actually rendering the mesh isn’t quite the way I’d like it yet. I’d like the verbs to be something like ‘render [mesh] to [target]’ where target is an instance of some kind of RenderTarget class. For now the process is:
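(Method names below are illustrative placeholders for how the pieces currently fit together.)

```cpp
// Hypothetical frame body: the Window currently owns the only render target.
window.Clear();       // clear the back buffer

shader.Bind(context); // sets the shaders and the input layout
quad.Render(context); // binds the vertex/index buffers and issues the draw call

window.Present();     // flip the swap chain
```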
## ImGui
There’s more to talk about, but I’ll finish for now with the GUI layer.
ImGui is where that nifty little ‘Test’ window comes from. ImGui is rad, and I wouldn’t know about it if not for a news post on Gamasutra a few months ago. I’ve not used it extensively yet so there might be drawbacks I’ve not yet spotted, but for my debug UI purposes it looks like the best option there is[^3].
ImGui doesn’t do any rendering. You send it commands, it constructs lists of vertices, and you handle the rendering. You don’t even need to worry too much about how to do that, because there are examples which show how to write a renderer for DirectX, OpenGL or another environment which you can just copy into your codebase.
Example:
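(The values and the ResetScene() call below are made up for illustration; the ImGui:: functions themselves are real.)

```cpp
// Somewhere between the backend's new-frame call and ImGui::Render():
static float rotationSpeed = 1.0f; // placeholder value to tweak at runtime

ImGui::Begin("Test"); // a little debug window
ImGui::Text("Frame time: %.3f ms", frameTimeMs); // frameTimeMs assumed to exist
ImGui::SliderFloat("Rotation speed", &rotationSpeed, 0.0f, 10.0f);
if (ImGui::Button("Reset"))
    ResetScene(); // placeholder call into my own code
ImGui::End();
```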
ImGui is haphazardly integrated with the rest of my program at the moment and doesn’t yet make use of my useful framework code, so tidying that up is definitely a thing I want to do in the weeks ahead.
Next steps: actually implementing Honours project stuff…
[^1]: I’m planning to factor the render target out into a separate class which the Window will own an instance of. It’ll be possible to render to any given ‘RenderTarget’, then render that to the Window’s back buffer RenderTarget. All this might not be completely possible.
[^2]: I tried some really horrible ways of getting around writing that annoying repetitive input layout code before I happened upon shader reflection.
[^3]: I encountered a few problems while setting it up which I will try to write about (later) so that other people have a less frustrating time.
This was going to be an exploration-focused game about scavenging your way around asteroid fields and derelict space stations using a suite of clunky and unconventional movement tools. I got stuck in, began working on a grappling gun attachment which you could swap out a thruster for… and then the rest of Summer 2015 happened. This weekend I reopened the Unity project, made the game presentable, and decided to put it out as it is.
I’d like to come back to it – the idea’s been kicking around in my head for about 3 years and it wants out pretty bad – but it probably won’t happen in the immediate future. At least this way the game gets out into the world.
People seem to like Flappy Word. According to one person it is “officially more addicting than Flappy Bird!”
Is that good?
## Technical Issues
The game runs, but not as well as it ought to.
It takes an annoyingly long time to start up. With no loading screen it seems like nothing is happening, so I need to look into ways to both improve the load time and implement a loading screen if possible.
On some computers it didn’t work at all. The issue was usually reported as something to do with the browser not being able to allocate enough memory to hold the game, but I haven’t sat down at one of the offending computers to see the error for myself.
I’ll start by looking at storage usage. The folder containing the v1 WebGL release build of Flappy Word is 32.3 MB. That’s a lot considering how simple this game is. In 1990 the capacity of a normal desktop computer’s hard drive was about 40 MB (source). Consumer-available RAM units didn’t break the 40 MB barrier until around 2000 (source).
Where’s all this data coming from?
The Compressed folder just contains tiny versions of the files in the Release folder, but since the game still loads and runs just fine if I delete them, it’s not clear whether they’re used at all.
Within the Release folder the biggest file by a long way is build.js, which I believe contains engine code, game logic code, and so on all compiled into JavaScript. No text file should be this large. Sublime Text can’t seem to open it. Notepad struggles. Notepad++ craps itself when you try to scroll down through the wall of whitespace-less code.
I guess that if I were to simply write this game in JavaScript, all the source files put together would amount to a tiny fraction of this 20 MB monolith. It might take a bit longer to make because I’m not exactly experienced in JS, but this is not a complicated game and it doesn’t need a heavyweight engine like Unity backing it up. I don’t think I can control the size of the build.js file that Unity creates, but I’ll certainly look into it.
So what’s in the asset-containing file, build.data? It accounts for almost 20% of the folder size, and in this case I actually can control how much space it takes up.
I’ve changed the game a little bit since v1, but if I launch Unity, rebuild the game and look at the editor log, I can see a breakdown of the assets the game uses. Unused assets are stripped out of the build.
The biggest space-hog by a long way is the dictionary, but even then 1.1 MB isn’t much to worry about. There are probably smarter ways to handle the dictionary file that I haven’t had time to figure out yet. Currently the entire thing is loaded into memory for fast access and to sort it by word length, which my gut tells me is better than doing a bunch of file reading operations every time I need to get a new word.
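To illustrate the idea, the structure is basically this (sketched in C++ rather than the Unity project’s actual C#, with made-up names):

```cpp
#include <fstream>
#include <random>
#include <string>
#include <unordered_map>
#include <vector>

// Read the word list once and bucket it by length, so picking a word of a
// given length is a quick lookup instead of a file read per word.
std::unordered_map<std::size_t, std::vector<std::string>> LoadDictionary(const std::string& path)
{
    std::unordered_map<std::size_t, std::vector<std::string>> byLength;
    std::ifstream file(path);
    std::string word;
    while (std::getline(file, word))
        byLength[word.size()].push_back(word);
    return byLength;
}

std::string RandomWord(const std::unordered_map<std::size_t, std::vector<std::string>>& byLength,
                       std::size_t length, std::mt19937& rng)
{
    const auto& bucket = byLength.at(length);
    std::uniform_int_distribution<std::size_t> pick(0, bucket.size() - 1);
    return bucket[pick(rng)];
}
```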
My next question is how much the game actually takes up in memory. If I run it in the Unity editor and look at the profiler, I get this:
472 MB. It seems like a large amount of space, but this is running unoptimized in the editor, after all. How much memory is it using when it runs in the browser? Chrome’s task manager says the tab with Flappy Word running in it sits at around 230 MB or so, meaning the game uses less memory in release than it does in the editor, as you’d expect. I haven’t found a way to see the game’s exact memory usage in the browser yet. Still, looking back at our RAM-capacity-over-time stats… it’s a lot.
Another technical issue others have reported and that I’ve seen for myself is a certain amount of input lag or choppiness in framerate from time to time. I think this might just be Unity’s WebGL player’s fault. There might be ways I can improve it.
Anyway. That’s all very interesting and the biggest problems seem outside my control. What about the actual game?
## The Actual Game
The first negative that came up was people not realising what they were supposed to do. Coupled with the fact that the game didn’t have focus by default so their keyboard input did nothing until they clicked on it, this made for a pretty confusing first impression. I can solve this pretty easily just by putting in a tutorial prompt right at the beginning.
People seemed to like the little boost you get with each letter typed because it “adds complexity without adding new interactions”, as one person said. I chucked it in when I thought of it at the last minute and found it made every keypress mechanically meaningful, which I like, so I’m glad others like it. I think the strength of the boost, along with a lot of other parameters, needs tweaking.
The ramp from short words to allowing longer ones feels about right.
The typewriter sounds were a good call.
People seem to be into the idea of a typing game which isn’t just about typing fast (like Typing of the Dead).
Profanity is good. Because the game starts off with only short words and most rude words are short, the ratio of rude words is abnormally high at the beginning. “In the last ~15 minutes I’ve had ‘penis’ twice and ‘phalli’ once.” I played the game last night and literally the first word that came up was ‘anus’.