Doing Something Different

Okay, I’m taking another break from Gateway. I’ve completed some work since the beta test, but I really felt the need to work on other things for a bit. During my break I wrote two small games. One is a simple open-source FPS, where you shoot drones that are trying to kill you in an open meadow. The other is a modernized version of Asteroids…and it’s ready for release on Steam in just a couple of weeks! …But I’m getting ahead of myself.

No-Fly Zone

So! The first-person drone-shooting game:

It’s really super simple. This is just a toy project that I hoped would be useful for anyone wanting to learn about game development using OpenGL. All of the source code and assets are available on my public repository. It also runs on both Windows and Linux.

But what I really want to write about is what I did after that.

Asteroids Millennium

After I released No-Fly Zone, my dear better half suggested that I put more value into my work and consider selling some of it. (Her words were something like stop giving away your shit for free.)

[Aside: After some introspection I realized that somewhere between the age of 10 and…um, my current still-youthful age, I had mysteriously acquired the opinion that selling my work was evil and giving knowledge to the world was good. Further introspection revealed that if I did both of these things I would still average out to an “okay” person and I could live with that.]

But what kind of game would I write? The number of small projects I had successfully completed in the last year or so had demonstrated that my chances of finishing a 3-4 month-long project were pretty high. (I also didn’t want to re-purpose an old project.) So, my goal was to sell a game that would take about that long to build. Eventually, I settled on a modern remake of the classic Asteroids—it had mechanics that were easily understood and I had some fun ideas that would really add new life to the game. It also wouldn’t take forever to implement.

After 4 months of development (and also some beta testing with colleagues), I had a product that I felt was ready. Working to set things up on Steam went smoothly, partly because I wisely started the process more than a month before my planned release date. I didn’t know what to expect since I had never done this before, but the folks at Valve spent a lot of time writing great documentation and tutorials.

Here’s the store page. Asteroids Millennium is scheduled for release on August 7th.

GN

Tech Report: A Technique for Rendering Cheap Laser Bolts That Look Good From All Angles

[Note: An explanatory video of the 3D laser bolt effect is now available on YouTube.]

In this post, I will present a technique for rendering laser bolts that look good from any viewing angle (including head-on), without the use of expensive computations such as blurring. To understand this technique, we will assume the effect has already been achieved, and work backwards to discover the correct approach.

Our completed effect is depicted below:

[Image: laser-single-no-markup]

Advanced rendering techniques might accomplish this effect with post-processing: the laser would be rendered as solid geometry, and blur passes would generate the glow. Let’s assume that the costly blur operation has already been accomplished for us, i.e., that we’ve gotten it for free.

To do this, we’ll pretend that the blur itself is a simple texture (in the shape of a laser bolt) that has been overlaid on the screen such that it covers both endpoints of the laser:

[Image: laser-single-by-texture]

No single texture will do the job, however, because the texture would have to be different for differently shaped lasers, and its appearance would depend greatly on the user’s viewing angle.

Consider a single texture like the one below:

[Image: laser-front]

Interestingly, this is exactly what we would expect a laser bolt to look like if we viewed it dead-on. If we viewed the laser bolt from its side, we would expect to see something like the following:

[Image: laser-side-no-markup]

We can easily see that the second texture can be generated from the first. We simply divide the first texture into three segments: left, middle, and right. The left and right segments always remain unchanged, and we stretch the middle one to generate the second texture:

[Images: laser-front-with-markup, laser-side-with-markup]

Since we can stretch the middle portion to any length, and scale or rotate the resulting image arbitrarily, we can easily see that it is possible to essentially “pre-compute” the blur effect that would be necessary for a convincing laser bolt effect. All that is necessary is to overlay the pre-computed blur onto the screen coordinates of the laser bolt itself, using an orthographic projection.

The screen coordinates of the laser bolt are easily calculated by projecting the 3D end-points of the laser into screen space. Once we have those two points, we easily compute the 8 vertex positions (necessary for the left, middle, and right segments of our blur texture) within orthographic space. We can also scale these vertex positions based on their distance from the camera to generate the illusion of a perspective view even though we’re using an orthographic projection.

[Image: laser-single-with-markup]
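
To make this more concrete, here’s a minimal CPU-side sketch of that projection-and-vertex step. It is not the implementation from the demo linked below; it assumes a GLM-style math setup, and LaserVertex, projectToScreen, and the 0.25/0.75 UV split are all names and choices of my own.

```cpp
// Sketch: building the 8 screen-space vertices (4 columns x 2 rows) for one bolt.
// Assumes GLM; all names here are illustrative, not from the original code.
#include <glm/glm.hpp>
#include <array>

struct LaserVertex {
    glm::vec2 pos;   // position in orthographic (screen) space
    glm::vec2 uv;    // coordinates into the pre-made laser bolt texture
    float     depth; // NDC depth carried along for the depth-test fix described later
};

// Project a world-space point into screen space, keeping the NDC z and 1/w.
static glm::vec3 projectToScreen(const glm::vec3& world, const glm::mat4& viewProj,
                                 const glm::vec2& screenSize, float& outInvW)
{
    glm::vec4 clip = viewProj * glm::vec4(world, 1.0f);
    outInvW = 1.0f / clip.w;                    // shrinks with distance; used to fake perspective
    glm::vec3 ndc = glm::vec3(clip) * outInvW;  // normalized device coordinates, [-1, 1]
    return glm::vec3((ndc.x * 0.5f + 0.5f) * screenSize.x,
                     (ndc.y * 0.5f + 0.5f) * screenSize.y,
                     ndc.z);
}

// The left cap, stretchable middle, and right cap share 8 vertices in total.
std::array<LaserVertex, 8> buildLaserVertices(const glm::vec3& worldA, const glm::vec3& worldB,
                                              const glm::mat4& viewProj, const glm::vec2& screenSize,
                                              float boltWidth, float capLength)
{
    float invWA = 0.0f, invWB = 0.0f;
    glm::vec3 a = projectToScreen(worldA, viewProj, screenSize, invWA);
    glm::vec3 b = projectToScreen(worldB, viewProj, screenSize, invWB);

    glm::vec2 dir  = glm::normalize(glm::vec2(b) - glm::vec2(a));
    glm::vec2 side = glm::vec2(-dir.y, dir.x);  // perpendicular to the bolt in screen space

    // Four "columns": outer edge of the left cap, endpoint A, endpoint B, outer edge of the right cap.
    // Widths and cap lengths are scaled by 1/w so distant bolts shrink as if rendered in perspective.
    glm::vec2 cols[4]  = { glm::vec2(a) - dir * (capLength * invWA), glm::vec2(a),
                           glm::vec2(b), glm::vec2(b) + dir * (capLength * invWB) };
    float halfWidth[4] = { 0.5f * boltWidth * invWA, 0.5f * boltWidth * invWA,
                           0.5f * boltWidth * invWB, 0.5f * boltWidth * invWB };
    float depth[4]     = { a.z, a.z, b.z, b.z };
    float u[4]         = { 0.0f, 0.25f, 0.75f, 1.0f };  // only the middle (0.25-0.75) region stretches

    std::array<LaserVertex, 8> v;
    for (int i = 0; i < 4; ++i) {
        v[i * 2 + 0] = { cols[i] + side * halfWidth[i], { u[i], 0.0f }, depth[i] };
        v[i * 2 + 1] = { cols[i] - side * halfWidth[i], { u[i], 1.0f }, depth[i] };
    }
    return v;  // drawn as three textured quads under an orthographic projection
}
```

A real implementation would also need to handle endpoints behind the camera (clip.w ≤ 0) and the degenerate case where both endpoints project to the same pixel, which this sketch glosses over.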

Note that the laser bolt also looks exactly as we would expect when viewed directly from the front:

[Image: laser-demo-screenshot-front]

There is one more hurdle to overcome. Because we have rendered the laser bolt using an orthographic projection, our fragment depth values do not exist in the same coordinate system as the rest of our (3D) scene. In other words, our lasers will not occlude (or be occluded by) other geometry. To correct this, we will use the z-coordinates that were computed during our projection step above to obtain depth values for the laser end points. In our vertex shader, we can then assign appropriate depth values to each of the 8 vertices used to render our laser. This will allow us to depth-test the lasers against the rest of the scene.
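
For concreteness, here is an illustrative version of that vertex shader, written against an OpenGL 3.2 core profile and embedded as a C++ string constant. It is not the shader from the demo; the attribute and uniform names are my own, and in_depth is the per-endpoint NDC depth computed during the projection step above.

```cpp
// Illustrative vertex shader (GLSL 150, matching an OpenGL 3.2 core profile).
static const char* kLaserVertexShader = R"GLSL(
#version 150 core
in vec2  in_pos;    // screen-space position of this vertex
in vec2  in_uv;
in float in_depth;  // NDC depth of the laser endpoint this vertex belongs to

uniform mat4 u_ortho;  // orthographic projection covering the screen

out vec2 v_uv;

void main()
{
    vec4 p = u_ortho * vec4(in_pos, 0.0, 1.0);
    // Replace the orthographic z with the depth carried over from the 3D scene,
    // so the laser depth-tests correctly against the rest of the geometry.
    gl_Position = vec4(p.xy, in_depth * p.w, p.w);
    v_uv = in_uv;
}
)GLSL";
```

The fragment shader side would then presumably just sample the laser texture at v_uv.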

By using batch rendering techniques, we can render a large number of laser bolts efficiently:

[Image: laser-demo-screenshot]
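
The batching itself can be as simple as appending every bolt’s vertices and indices to one buffer each frame and issuing a single draw call. A rough sketch, reusing the illustrative LaserVertex and buildLaserVertices from above:

```cpp
// Sketch of the batching idea: one vertex/index buffer and one draw call for all bolts.
// LaserVertex and buildLaserVertices are the illustrative helpers from the earlier sketch.
#include <array>
#include <vector>

struct LaserBatch {
    std::vector<LaserVertex>  vertices;  // CPU-side staging data, rebuilt each frame
    std::vector<unsigned int> indices;   // 18 indices (3 quads = 6 triangles) per bolt

    void clear() { vertices.clear(); indices.clear(); }

    void add(const std::array<LaserVertex, 8>& v) {
        unsigned int base = static_cast<unsigned int>(vertices.size());
        vertices.insert(vertices.end(), v.begin(), v.end());
        // Vertices are laid out as column pairs (top, bottom); stitch neighbouring
        // columns into quads: left cap, stretched middle, right cap.
        for (unsigned int col = 0; col < 3; ++col) {
            unsigned int i0 = base + col * 2;
            unsigned int quad[6] = { i0, i0 + 1, i0 + 2,       // first triangle
                                     i0 + 2, i0 + 1, i0 + 3 }; // second triangle
            indices.insert(indices.end(), quad, quad + 6);
        }
    }
};

// Per frame: clear(), add() every live bolt, upload with glBufferSubData, then render
// everything at once with glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0).
```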

An example implementation and complete source code for the above screenshot is available here.

GN

August 28 – Where I’m At

Several months ago, I had a conversation with a colleague about game development. I was expressing to him some concerns about my ability to finish Gateway in a reasonable amount of time, and whether or not the game would actually be fun. He pointed out that many games ship incomplete, and that companies release content and patches later on. Insightfully, he also suggested that I develop a vertical prototype of my game to get some feedback before releasing it.

A vertical prototype is a narrow slice of a game that is fully playable. An example in my case would be a single mission. Although the main game mechanics (as well as several missions and other features) were already implemented, some of them needed to be upgraded or more thoroughly tested. A vertical prototype would be a great way to focus my efforts, so I made up my mind to develop one for Gateway and have some coworkers play-test it for me.

After that conversation, I spent the next several months fixing bugs and sharpening up some game mechanics. I designed a five-minute mission specifically for testing purposes. It included a mix of my favourite dialogue and visuals from the actual campaign. A lot of effort also went into improving or redoing some existing art assets that I felt weren’t up to snuff. (My friends who develop game art professionally were an immense help here—they provided me with a ton of useful advice as I reworked my models and textures. Thanks, folks!)

I scheduled play-testing for August 26th. Listed below are the tasks I aimed to complete before then. Each task was assigned a difficulty level. Not all of them were completed, but the essentials were done.

  • correct a major bug in the particle system (4)
  • rework the turret animation system to something more realistic (2)
  • model and texture new NIA and ISC cruisers and turrets (18)
  • correct the appearance of the first-person shield effect (1)
  • make the game engine fully adhere to an OpenGL 3.2 core profile (2)
  • rework the GUI system to look nicer and be bug-free (3)
  • add zoom blur effect when boost is activated (2)
  • correct a problem with missile collisions not being detected (1)
  • improve handling of in-game camera (1)
  • add debris assets (space junk, asteroids, cruiser chunks, etc.) (3)
  • correct obstacle avoidance AI (unknown, maybe 2)
  • create NIA/ISC cruiser collision geometry (1)
  • fix an issue with the turret shields not rendering properly (1)
  • optimize laser bolt light generation (2)
  • fix an issue with the targeting arrow (2)
  • discard non-colliding candidates faster during collision checks (1)
  • change laser bolts to use batch rendering (2)
  • texture the new cockpit model (17)

Last week, I demoed Gateway and my coworkers were thrilled to finally try the game I had been talking about for so long. It was very well received, and most of my colleagues had helpful suggestions to make it even better. Here are some of the most notable issues that were identified:

  • targeting arrow is sometimes misleading and still needs a bit of work
  • “target destroyed” and “mission complete” feedback indicators would be useful
  • star fly-bys need to be brighter to indicate player speed
  • HUD speed indicator would be good, too
  • adding a “reverse-flip-over” maneuver to reverse direction would be very useful

During testing, something interesting happened several times: my colleagues became upset that the player didn’t take any damage when colliding with large objects like gates, cruisers, or asteroids. What’s funny about this is that I disabled large-object-collision damage on purpose to make things easier, but everyone seemed disappointed by this. That surprised me, and I have two guesses for why everyone reacted this way: (1) players expected realistic mechanics, i.e., collisions should cause damage, and (2) players might have felt that their careful maneuvers were meaningless if there was no penalty for failing to execute them properly. (This was an eye-opening moment for me. I hadn’t spent a lot of time thinking about these kinds of implicit rewards.)

There were some positive points as well:

  • players found destroying things (turrets, gate coils, fighters, etc.) very satisfying, especially when destroying several of these in quick succession
  • the sleek look and feel of the game made it very pleasant to play in general
  • in-game audio was immersive and well-received

Overall, everyone had a positive experience, which was a bit of a relief for me. Until that day, I really had no idea if my game was fun or not. Also, it was incredibly helpful to get so much detailed feedback. Thanks, everyone!

Of course, I can’t just talk the talk after all that. Here are some screenshots from the vertical prototype mission.

My next task is to tackle the improvements that my coworkers suggested. It’s a real confidence booster to get this kind of feedback. Not that I ever really doubted it, but this experience has reinforced my belief that Gateway is a game not only worth developing, but worth playing, too.

GN

Focus on Finishing

It’s been a while since my last post, and also a while since I’ve completed anything useful on Gateway. Oh sure, I’ve had some time here and there to actually sit down, open up the project file, and write some code or build some environments. The real problem, I realized this morning as I got out of the shower, was that I hadn’t been making good use of my time. And here’s why.

I haven’t had as much time lately to devote to my game – this meant that instead of spending time actually producing work, I was internally building up new worlds, weapons, and graphical effects, all in my head. Whenever I found an hour or so to sit down with my favourite pet project, I’d want to try them out. I wouldn’t have time to finish anything, so the next time I sat down at my computer I would work on another feature. With nothing really completed, this just resulted in a bunch of nonessential, half-finished weapons, environments, or effects that didn’t do anything to further my progress on the game itself. Some of them were even detrimental to the project because they required changes to the existing engine.

So, as I stepped through my sliding shower doors this morning, it struck me: I really need to focus on the more important parts of the game. Hadn’t I given myself this advice several times already in the past?

This was a kick in the pants that I really needed. (There’s probably a joke in there somewhere.) In the several hours that I’ve had this weekend, I’ve listened to the wiser part of myself and decided not to implement any new features or try any new experiments with graphical effects. Don’t get me wrong; all of those things are great fun and always a valuable learning experience. But by focusing my efforts this weekend on completing features that were absolutely critical to the game, I accomplished a lot. In the space of several hours, I completed the following tasks:

  • Removed the old, crappy explosion effect and created a much better one with my custom-built particle editor (I’ll post about that sometime)
  • Addressed some major frame rate smoothness issues and screen tearing problems (this one had been eating away at me for a while now)
  • Started correcting the appearance of the first-person player shield effect (another problem that desperately needed fixing)
  • Corrected some minor issues that occurred with the ordnance system

And there are still a few hours left in the weekend as I write this! Who knows what else I’ll accomplish today?

GN

Effective Level Editing

A few years ago, I completed a project called Dactyl. It was a 2D asteroids-shooter-type game. There were 25 hand-generated levels populated with mines, asteroids of various types, useful equipment, and automated turrets.

[Image: dactyl-01]

To visualize each level before I implemented it, I used graph paper and a pencil*. Once I felt a level was complete, I plugged the object coordinates into a text file, and then fed the file into the Dactyl engine. And pop, out came a new level.

This approach definitely wouldn’t work in Gateway. The size of each level (up to about 20km!), especially considering the differences between, say, the 10 meter fighters and the half-kilometer battle cruisers, made drawing complex scenarios to scale next to impossible. Making changes to level designs would also be painful: erasing a group of 30 fighters and shifting them over a kilometer would quickly eat away at my eraser and my sanity. Besides, my hands aren’t steady enough for that kind of nonsense.

Dactyl levels were sketched out by hand.

The solution for Gateway was to build my own level editor.

I’ll admit that I didn’t spend a lot of time investigating existing solutions. I was (pretty) sure that there was (probably, maybe) some level editor somewhere that would offer the right parameters and would allow me to drag, drop, and configure all of the game entities I wanted onto the Gateway playing field, but I really didn’t feel up to downloading and trying various existing editors only to find that each one was missing a crucial feature.

Ah, yes…implementing things from scratch rather than using existing solutions. I often do this with smaller projects for a number of reasons: (a) I believe in doing everything yourself at least once, (b) it’s often more fun, and (c) I can usually whip up something small in less time than it would take for me to learn to use an existing tool. (I recognize that this is a twisted combination of both laziness and diligence.) I also (d) like to be intimately familiar with the code I use, and this is easiest when it’s code that I write myself. Additionally, my own level editor could be completely customized for exactly what I needed it to do. (Besides, by the time I started considering developing my own, the back burners in my brain had already fired up and began doing the work for me. I think somewhere back there I knew very early that I’d have to write my own.)

I intentionally wanted to keep my editor’s feature list as minimal as possible, since I’d probably only use it specifically for Gateway (although it could be altered without much effort for similar games) and didn’t want to spend too much time adding features that weren’t absolutely necessary. I just wanted the bare essentials for designing levels quickly and easily…so, there were hard choices to make.

C# .NET would be an ideal choice for the editor, since I’ve worked with it extensively on a variety of other projects. This meant that graphical user interface elements like buttons or text boxes wouldn’t be a problem. The next thing I had to consider was whether I wanted a 2D or a 3D representation of the world in my editor. There were pros and cons to each.

Advantages of a 3D level editor:

  • Entities could be moved around the environment and viewed from the player’s perspective
  • Everything could be viewed as it would appear in-game: for example, depth or height would be easier to determine than it would be in a 2D, top-down view

Disadvantages of a 3D level editor:

  • Harder to implement than a top-down representation in terms of entity selection (picking), transformations, etc.
  • C# .NET does not feature built-in 3D support and would require using GLControls or something similar (I wasn’t averse to this, but it was another layer of complexity I wasn’t sure was really necessary)

Ultimately, I decided to go with a 2D representation simply because the style of the game doesn’t place a whole lot of importance on an object’s height or altitude; the flight controls are more like flying an aircraft than a true 6DOF fighter (I did this to make the game simpler and more accessible to players). Thus, the world could fairly easily be represented in a top-down fashion. Of course, objects exist at many different “altitudes” in the game, but not to such a degree that it was critical that this be represented clearly in a level editor intended solely for myself.

Next, there were issues of scale to consider. I wanted to have missions that could potentially be around 20km by 20km in size. I also wanted everything in the editor world layout to be visible all at once without zooming or scrolling. Ultimately, I decided on a layout pixel size of 600×600, with every pixel being equal to 30 meters in real space. That was more than precise enough for my needs and resulted in a playing field of 18km by 18km.
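
For illustration, the pixel-to-world mapping works out like this. A minimal sketch, shown in C++ for brevity even though the editor itself is written in C# .NET; the origin-centred playfield and all of the names here are my own assumptions.

```cpp
// Editor scale: 600 x 600 layout pixels at 30 m per pixel = an 18 km x 18 km playfield.
// Sketch of the pixel <-> world mapping; names and the origin-centred layout are assumed.
constexpr float kMetersPerPixel = 30.0f;  // 1 editor pixel = 30 m of game space
constexpr int   kLayoutPixels   = 600;    // 600 * 30 m = 18,000 m = 18 km per side

struct WorldPos { float x; float z; };    // top-down view: altitude is ignored in the editor
struct PixelPos { int x; int y; };

WorldPos pixelToWorld(PixelPos p) {
    // Centre the playfield on the origin so pixel (300, 300) maps to (0 m, 0 m).
    return { (p.x - kLayoutPixels / 2) * kMetersPerPixel,
             (p.y - kLayoutPixels / 2) * kMetersPerPixel };
}

PixelPos worldToPixel(WorldPos w) {
    return { static_cast<int>(w.x / kMetersPerPixel) + kLayoutPixels / 2,
             static_cast<int>(w.z / kMetersPerPixel) + kLayoutPixels / 2 };
}
```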

My choice of scale had one unfortunate implication. The smallest unit of size that could be represented in my editor was 30m (1 pixel), but the fighters in Gateway are around 10 meters long, which was one-third of the smallest visible unit in my editor. Cruisers, being around 600 meters in size, could be represented to scale with 20×20 pixel sprites, but fighters posed a problem. I decided to simply make the fighter sprites larger than they would be in real space by several pixels in each direction, to allow easy visualization and selection with the mouse.

[Image: scale issues]

Without some changes, the fighter sprites would absolutely be too minuscule to determine orientation, select with the mouse, or even see properly.

Next on the list was to decide the kinds of features I wanted to support in terms of building worlds. Tools like selection, translation, and rotation were essential. A ‘snap’ functionality would be immensely useful in terms of both moving (snap to grid) and rotating (snap to angle). Since I’ve used Blender a lot, I decided to support Blender-style keyboard shortcuts (which often don’t require modifier keys like shift or control) to make things familiar and faster: ‘A’ to toggle selections, ‘G’ for grab, ‘R’ for rotate, etc.

After about two weeks, I had a fully-working editor. It only had a minimal set of features, but that was all I needed.

[Image: sundial_screenshot]

Level-building in progress. The player and a few wing mates are spawned far away from the action. The editor is just as much fun to use as it was to implement!

Team colours are represented by the highlights around objects. A grid in the background is used to determine distances. A few other tabbed panels are used for scripting music, dialogue, etc., and basically just contain multi-line text boxes.

Developing a level editor has been a very interesting and exciting experience for me. If anyone has experiences or stories working with level editors (including their own), I would love to hear about them in the comment section below.

GN


* As a side note, I have entire sketch books full of old game level designs from way back when. Maybe I’ll post some.

July 26 — Where I’m At

Well, here we are so far. A lot of work has been completed since the last progress report.

  • Voice recording and mastering are all done. That’s right. Every single line has been edited and made to sound like it’s coming over the radio, so it’s all campaign-ready. I really enjoyed working with the individuals who were nice enough to offer their voice acting talents. (Finding enough actors was something I was initially concerned about, but many people were very willing to spend a half hour or so working with me to bring my characters to life. I’m very grateful for that.) There were a lot of bloopers filled with jokes and curse words, which made the editing process (which took hours upon hours) very entertaining.
  • The first 3 campaign missions have been built. These are shorter missions intended to gradually introduce features and familiarize the player with the controls, and are thus fairly simple. The remaining missions will be longer, but will probably be built quicker as I develop momentum (and gain familiarity with my own game engine…funny how that works). I have to say, it is incredibly exciting to see the action and hear my characters come to life in the way I imagined.
  • Various gameplay improvements. The targeting system has been improved, the computer players are smarter, bugs are being squashed…the list goes on.

Currently, my priority is getting the campaign finished. Building the missions not only advances the game towards completion, but also reveals bugs in the engine and helps identify gameplay issues.

Here is a screenshot from the third mission, Incursion, where a simple assault goes awry and the player’s forces are ambushed by the Coalition.

The player provides support during a mission to destroy a series of enemy network satellites.

Until next time.

GN

July 5 — Where I’m At

It’s only been a week since my last update, but there’s been a lot of good progress. I’ll keep this quick since there are a few other things I’d like to tackle this evening. So, here we go:

  • Voice recording is nearly, nearly done. Unfortunately, I didn’t reach my goal of having all of the recording done by the end of the weekend. A lot was done this week, but I’m still looking for an actor for the last part (Admiral Banks). All of the other parts have been recorded, though, which means that I can go ahead and build the first eight missions.
  • Improved the cockpit. I decided that even my meager art skills couldn’t excuse the poor quality of the existing design. I spent a few days building and texturing a new cockpit model, which is shown below.

[Image: shot-1]

  • Improved the missile system. The new system requires players to be smarter about how they launch their missiles; launches made too close to a target, or without facing it closely enough, will only waste ordnance. (See the sketch after this list for a rough illustration of that kind of check.)
  • Addressed a number of outstanding bugs and gameplay issues. All minor things, but there’s nothing like a quick demonstration to a colleague to make you realize there’s a lot that needs fixing. (“oops, that shouldn’t happen”, “still gotta fix that”, “that’s boring so I’ve left it until later”, etc.)
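
Here’s a minimal sketch of what such a launch-viability check could look like. To be clear, this is not the actual Gateway logic; the function name, the GLM math types, and the thresholds are all my own assumptions.

```cpp
#include <glm/glm.hpp>

// Hypothetical helper: is a missile launch worthwhile from here?
// Both the arming-distance rule and the facing rule are illustrative assumptions.
bool isMissileLaunchViable(const glm::vec3& shipPos,
                           const glm::vec3& shipForward,  // assumed normalized
                           const glm::vec3& targetPos,
                           float minArmingDistance,       // too close = wasted ordnance
                           float minFacingCos)            // e.g. cos(30 degrees) off the nose
{
    glm::vec3 toTarget = targetPos - shipPos;
    float distance = glm::length(toTarget);
    if (distance < minArmingDistance)
        return false;  // launched too close: the missile has no time to arm and track

    // Facing check: the angle between the ship's nose and the target direction
    // must be small enough, i.e. its cosine must be large enough.
    float facing = glm::dot(shipForward, toTarget / distance);
    return facing >= minFacingCos;
}
```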

GN