This weekend I took part in Global Game Jam for the first time.
Global Game Jam is an annual happening where people gather at specific locations all over the world and make games. It’s not like Ludum Dare, where you can work from home - instead the emphasis is very much on being in a shared space, meeting and working with new people. Abertay University hosted a throng of jammers: students, teachers, local industry folk and many other sorts of people.
This news is about a month old by now, but: that’s right, I’m ‘done’ with Flappy Word.
You can play a web build here or on the itch.io page, where you can also download versions for Windows, Mac or Linux.
Things I learned
1) I think Unity’s WebGL build target isn’t quite there yet. They’ve dropped the ‘preview’ label, but all that really means is that it’s now at a point where the Unity folks are willing to cover support tickets about it – if you’re a Premium or Enterprise user. It’s still kind of laggy and it outputs enormous files which the user has to download. I don’t want to be too down on it, but at least for the immediate future, if I want to target HTML5/WebGL I’ll probably just do it the ‘hard’ way and write some JavaScript. I’m sure it’ll be good eventually.
2) Implementing the word-typing (you know, the main mechanic) taught me a fair bit.
a) Use System.Enum.Parse to convert between letters (chars) and key codes (enums) so you know which key the player should type next. I don’t understand what magic this function is performing but it’s very useful.
b) I used Resources for the dictionary file. I’ve no idea if this is the optimal way to handle big text-based data assets in Unity, but it was simple and I’ve found no reason to switch to any other method.
c) This big huge string is parsed and broken up into an array of strings, which is then sorted for length. Sorting is nice and simple thanks to an anonymous delegate:
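(The block below is a rough sketch of the whole a) to c) flow – the class name, asset name and helper method are illustrative placeholders rather than code lifted straight from the repo.)

```csharp
using System;
using UnityEngine;

public class WordDictionary : MonoBehaviour
{
    string[] m_words;

    void Awake()
    {
        // b) Load the dictionary file from a Resources folder as one big string.
        TextAsset dictionary = Resources.Load<TextAsset>("english_words");

        // c) Break the big string up into individual words, then sort them by
        //    length using an anonymous delegate as the comparison function.
        m_words = dictionary.text.Split(new[] { '\n', '\r' },
                                        StringSplitOptions.RemoveEmptyEntries);
        Array.Sort(m_words, delegate(string a, string b)
        {
            return a.Length.CompareTo(b.Length);
        });
    }

    // a) Convert a letter into the KeyCode the player needs to press next.
    public static KeyCode KeyCodeForLetter(char letter)
    {
        return (KeyCode)Enum.Parse(typeof(KeyCode), char.ToUpper(letter).ToString());
    }
}
```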
Delegates are kind of analogous to modern C++ lambdas (wee local function objects), with differences. A delegate is really a type describing a method signature, so any existing method with a matching signature can be ‘assigned’ to it, and anonymous delegates can even capture variables from the enclosing scope (I don’t think I fully understand the mechanics yet).
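A tiny standalone illustration of both of those points, nothing to do with the jam game itself:

```csharp
using System;

class DelegateExample
{
    // A delegate type is essentially a named method signature.
    delegate int Transform(int x);

    static int Double(int x) { return x * 2; }

    static void Main()
    {
        // An existing method with a matching signature can be assigned to it...
        Transform t = Double;
        Console.WriteLine(t(21)); // 42

        // ...and an anonymous delegate can capture locals from the enclosing scope.
        int offset = 10;
        Transform addOffset = delegate(int x) { return x + offset; };
        Console.WriteLine(addOffset(5)); // 15
    }
}
```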
3) Screen shake made the game feel better, but it introduced audio problems. I was using AudioSource.PlayClipAtPoint, which had worked fine up until then as a quick throwaway audio clip solution, but now that the camera (and therefore listener) was jiggling about the 3D spatialisation was noticeable, and horrible. Detaching the Listener from the camera worked, but it still bothered me that I didn’t have a method for just playing non-spatialised temporary audio clips.
```csharp
static public AudioSource PlayClip2D(AudioClip clip, float volume = 1.0f, float pitch = 1.0f)
{
    GameObject temp = new GameObject("TempAudio");
    AudioSource asource = temp.AddComponent<AudioSource>();
    asource.clip = clip;
    asource.spatialBlend = 0.0f; // Make it 2D
    asource.volume = volume;
    asource.pitch = pitch;
    asource.Play();              // Start the sound
    Destroy(temp, clip.length);  // Destroy the object after clip duration.
    return asource;
}
```
So I had to write my own function which does EXACTLY what PlayClipAtPoint does, just without the position and with spatialBlend zeroed out. Because the Unity scripting API doesn’t have one for some reason.
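Calling it is then a one-liner wherever a throwaway sound is needed (the clip field here is just a placeholder):

```csharp
// Non-spatialised one-shot with a bit of random pitch variation.
PlayClip2D(flapClip, 0.8f, Random.Range(0.9f, 1.1f));
```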
4) Unity’s UI stuff is cool. My UIController script sure as hell doesn’t interface with it as gracefully as I’d like. That’s something to improve upon next time.
5) Avoid having one big monolithic script that controls everything by factoring parts of it out into separate scripts as early as possible, even if you end up attaching them all to one ‘controller’ GameObject. It’s just better that way. The mono-MonoBehaviour I created was horrifying to work with until I refactored most of it out into new scripts.
Anyway, if I keep going I’ll be writing for ages. This was a surprisingly educational project.
Repository
I’ve decided to make the Bitbucket repository public. By NO means should any of the code within be imitated. It is all bad. Horrible and bad, but if you end up taking a look hopefully you can learn how not to do things.
This post is about the framework code I’ve written as part of my Honours project application. The aim of the framework is to make working with DirectX 11 as painless as possible; I won’t actually discuss the project itself here. The framework will expand as the project goes on, hopefully blossoming into something other people might consider using, so taking stock while it’s in its simplest form seems like a worthwhile blog post.
The Window
Window management works like it does in SFML. Sort of. That’s the goal, anyway.
```cpp
dxf::Window window;
window.Create(640, 480, "DirectX Window");

while (window.IsOpen()) // application main loop
{
    dxf::WindowEvent event;
    while (window.PollEvent(event)) // event processing loop
    {
        switch (event.type)
        {
        case dxf::WindowEvent::Closed:
            window.Close();
            break;
        default:
            break;
        }
    }

    window.Clear();   // clear to black
    window.Bind();    // prepare to render

    // ... draw objects here ...

    window.Display(); // flip buffers
}
```
The dxf::Window manages the application’s DirectX device and a bunch of other DirectX objects which it creates, like a swap chain and render target1. Window::Create() sets up DirectX and the destructor cleans everything up, so the Window is the first thing an application using this framework needs to create before it gets on with important things.
Shaders
Currently there are only vertex and pixel shaders. Setting one up is straightforward.
```cpp
// Set up the simple vertex shader.
dxf::VertexShader simple_vs;
simple_vs.Create(m_window->GetDevice(), "shaders/simple_vs.hlsl", "main");
```
This finds the shader code file and compiles it.
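For reference, a Create() call like that boils down to a couple of D3D calls. Roughly (a sketch of the general idea rather than the framework’s actual implementation, with error handling kept minimal):

```cpp
#include <windows.h>
#include <d3d11.h>
#include <d3dcompiler.h>

// Sketch: compile an HLSL file and build a vertex shader from the bytecode.
// The bytecode blob is kept because it's needed again for input layout creation.
bool CreateVertexShaderFromFile(ID3D11Device* device, const wchar_t* path,
                                const char* entry_point,
                                ID3D11VertexShader** out_shader,
                                ID3DBlob** out_bytecode)
{
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompileFromFile(path, nullptr, nullptr, entry_point, "vs_5_0",
                                    0, 0, out_bytecode, &errors);
    if (FAILED(hr))
    {
        if (errors)
        {
            OutputDebugStringA(static_cast<const char*>(errors->GetBufferPointer()));
            errors->Release();
        }
        return false;
    }

    hr = device->CreateVertexShader((*out_bytecode)->GetBufferPointer(),
                                    (*out_bytecode)->GetBufferSize(),
                                    nullptr, out_shader);
    return SUCCEEDED(hr);
}
```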
Vertex shaders are special because they have an ID3D11InputLayout* associated with them. I used to have to set these up manually by filling out an array of D3D11_INPUT_ELEMENT_DESC structs and calling ID3D11Device::CreateInputLayout, which necessitates a tonne of pointless bespoke code that is a chore to write and easy to mess up2. Now, though, I use black magic in the form of shader reflection to automatically generate the input layout object as demonstrated here (thanks, Bobby Anguelov!), which eases the process greatly. I’m wondering what else I could do with shader reflection.
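The guts of that trick, stripped down to the float-only inputs I actually use, look roughly like this – a sketch along the lines of the linked approach rather than my exact code:

```cpp
#include <d3d11.h>
#include <d3d11shader.h>
#include <d3dcompiler.h>
#include <vector>

// Sketch: build an input layout by reflecting the vertex shader bytecode,
// instead of hand-writing a D3D11_INPUT_ELEMENT_DESC array for every shader.
// Only float1..float4 inputs are handled here, for brevity.
HRESULT CreateInputLayoutFromBytecode(ID3D11Device* device,
                                      ID3DBlob* vs_bytecode,
                                      ID3D11InputLayout** out_layout)
{
    ID3D11ShaderReflection* reflector = nullptr;
    HRESULT hr = D3DReflect(vs_bytecode->GetBufferPointer(),
                            vs_bytecode->GetBufferSize(),
                            IID_ID3D11ShaderReflection,
                            reinterpret_cast<void**>(&reflector));
    if (FAILED(hr)) return hr;

    D3D11_SHADER_DESC shader_desc;
    reflector->GetDesc(&shader_desc);

    std::vector<D3D11_INPUT_ELEMENT_DESC> elements;
    for (UINT i = 0; i < shader_desc.InputParameters; ++i)
    {
        D3D11_SIGNATURE_PARAMETER_DESC param;
        reflector->GetInputParameterDesc(i, &param);

        D3D11_INPUT_ELEMENT_DESC element = {};
        element.SemanticName         = param.SemanticName;
        element.SemanticIndex        = param.SemanticIndex;
        element.InputSlot            = 0;
        element.AlignedByteOffset    = D3D11_APPEND_ALIGNED_ELEMENT;
        element.InputSlotClass       = D3D11_INPUT_PER_VERTEX_DATA;
        element.InstanceDataStepRate = 0;

        // Pick a format from the component mask (x, xy, xyz or xyzw floats).
        switch (param.Mask)
        {
        case 0x1: element.Format = DXGI_FORMAT_R32_FLOAT;          break;
        case 0x3: element.Format = DXGI_FORMAT_R32G32_FLOAT;       break;
        case 0x7: element.Format = DXGI_FORMAT_R32G32B32_FLOAT;    break;
        default:  element.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; break;
        }

        elements.push_back(element);
    }

    hr = device->CreateInputLayout(elements.data(),
                                   static_cast<UINT>(elements.size()),
                                   vs_bytecode->GetBufferPointer(),
                                   vs_bytecode->GetBufferSize(),
                                   out_layout);
    reflector->Release();
    return hr;
}
```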
When it’s time to render, you just Bind the shader to the device context along with all the other objects.
There are objects associated with shaders, too, such as Textures, Samplers and ConstantBuffers, all currently rather skeletally implemented. I’m working on features as I come to need them.
Meshes
The Mesh class manages vertex and index buffers. Currently I don’t handle non-indexed meshes. To create a mesh (in this case, an indexed quad):
```cpp
dxf::Mesh mesh;

unsigned quad_indices[6];
quad_indices[0] = 0; // bottom left
quad_indices[1] = 1; // top left
quad_indices[2] = 2; // top right
quad_indices[3] = 0; // bottom left
quad_indices[4] = 2; // top right
quad_indices[5] = 3; // bottom right

// Pass in the device, a pointer to the first index, and how many indices
// there are.
mesh.SetIndices(window->GetDevice(), quad_indices, 6);

// This structure should match up with the input structure used in the vertex
// shader the mesh is rendered with, otherwise weird not good things will happen.
struct Vertex
{
    D3DXVECTOR3 position;
};

Vertex quad_verts[4];
quad_verts[0].position = D3DXVECTOR3(0.0f, 0.0f, 0.0f); // bottom left
quad_verts[1].position = D3DXVECTOR3(0.0f, 1.0f, 0.0f); // top left
quad_verts[2].position = D3DXVECTOR3(1.0f, 1.0f, 0.0f); // top right
quad_verts[3].position = D3DXVECTOR3(1.0f, 0.0f, 0.0f); // bottom right

// Pass in the device, a pointer to the first vertex, the size of each vertex,
// and how many vertices there are.
mesh.SetVertices(window->GetDevice(), &quad_verts[0], sizeof(quad_verts[0]), 4);
```
This abstracts away nitty-gritty DirectX code, which is nice.
Actually rendering the mesh doesn’t quite work the way I’d like yet. I’d like the verbs to be something like ‘render [mesh] to [target]’, where target is an instance of some kind of RenderTarget class. For now the process is:
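Roughly the following – with the caveat that the Bind/Draw calls on Mesh and the GetDeviceContext() accessor here are indicative of the shape of things rather than the exact current API:

```cpp
window.Clear();                             // clear the back buffer
window.Bind();                              // bind the window's render target

simple_vs.Bind(window.GetDeviceContext());  // bind the shaders (and their
simple_ps.Bind(window.GetDeviceContext());  // textures/samplers/constant buffers)

mesh.Bind(window.GetDeviceContext());       // bind vertex and index buffers
mesh.Draw(window.GetDeviceContext());       // issue the indexed draw call

window.Display();                           // flip buffers
```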
There’s more to talk about, but I’ll finish for now with the GUI layer.
ImGui is where that nifty little ‘Test’ window comes from. It’s rad, and I wouldn’t know about it if not for a news post on Gamasutra a few months ago. I’ve not used it extensively yet so there might be drawbacks I’ve not yet spotted, but for my debug UI purposes it looks like the best option there is3.
ImGui doesn’t do any rendering. You send it commands, it constructs lists of vertices, and you handle the rendering. You don’t even need to worry too much about how to do that, because there are examples which show how to write a renderer for DirectX, OpenGL or another environment which you can just copy into your codebase.
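For a flavour of the immediate-mode style, per-frame usage is along these lines (the widgets and values here are purely illustrative, and the backend is assumed to have already called ImGui::NewFrame() for the frame):

```cpp
#include "imgui.h"

// Sketch of per-frame usage, after the backend has called ImGui::NewFrame().
void DrawDebugUI(float frame_ms, bool& reload_requested)
{
    ImGui::Begin("Debug");                               // open (or reuse) a window named "Debug"
    ImGui::Text("Frame time: %.3f ms", frame_ms);        // printf-style text
    reload_requested = ImGui::Button("Reload shaders");  // true on the frame it's clicked
    ImGui::End();

    // Once all UI for the frame has been declared, ImGui::Render() builds the
    // vertex/index lists that the copied-in example renderer actually draws.
    ImGui::Render();
}
```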
ImGui is haphazardly integrated with the rest of my program at the moment and doesn’t yet make use of my useful framework code, so tidying that up is definitely a thing I want to do in the weeks ahead.
Next steps: actually implementing Honours project stuff…
1. I’m planning to factor the render target out into a separate class which the Window will own an instance of. It’ll be possible to render to any given ‘RenderTarget’, then render that to the Window’s back buffer RenderTarget. All this might not be completely possible.
2. I tried some really horrible ways of getting around writing that annoying repetitive input layout code before I happened upon shader reflection.
3. I encountered a few problems while setting it up which I will try to write about (later) so that other people have a less frustrating time.
This was going to be an exploration-focused game about scavenging your way around asteroid fields and derelict space stations using a suite of clunky and unconventional movement tools. I got stuck in, began working on a grappling gun attachment which you could swap out a thruster for… and then the rest of Summer 2015 happened. This weekend I reopened the Unity project, made the game presentable, and decided to put it out as it is.
I’d like to come back to it – the idea’s been kicking around in my head for about 3 years and it wants out pretty bad – but it probably won’t happen in the immediate future. At least this way the game gets out into the world.