Final Report

 

A. In the project review document, start by addressing these main questions:

  1. Game concept: How and why did your game concept change from initial concept to what you implemented?
  1. Initially, we wanted to make a 3D FPS version of the original Bomberman game. We planned to have different game modes, character abilities, and a lot more bombs. We ended up with only a deathmatch mode, and we combined the different bomb types into just 5 bombs. We did not have time to implement multiple game modes or character abilities. The reason we combined the bombs is that most of them share similar properties, and we think it is more fun to have more than one property per bomb.
  2. [Robin] We were initially planning to construct a world with dynamic objects, such as moving and destructible geometry. A good candidate game to develop our idea from was the TNT war mod of Minecraft, where the world is composed of blocks and the blocks are free to be destroyed and moved. We spent about 2 weeks developing our in-game editor to make constructing such a world easier. Unfortunately, during week 6, I suddenly realized we were developing a multiplayer game. That meant no matter how polished our editor was, and no matter how well we could construct the world dynamically, we simply could not put all of those dynamic objects in a multiplayer context that required the server to run calculations on all of them in real time and send the results to all the clients efficiently. And we could not afford to invest in a different server model to make that possible. Therefore, we chose to abandon those dynamic elements and construct a static world instead.

  1. Design: How does your final project design compare to the initial design, and what are the reasons for the differences, if any?
  1. [Martin] I feel as if our initial design didn’t change too much from what we had intended, which was a first-person shooter with only bombs.
  2. [Steven] One thing we did follow through on from our initial design is the UI, as most of it is what we had wanted. However, we also wanted to implement a minimap, but we found it to be unnecessary later on because our map is quite small.
  3. [Alexie] Changed Bomberman to Bombergirl at the professor’s recommendation. We also scrapped the original idea that bombs were automatically generated inside the robot. But these were just small details.
  1. Schedule: How does your final schedule compare with your projected schedule, and what are the reasons for the differences, if any? (You should be able to glean this from your status reports.)
  1. Well, our status reports were mostly optimistic: we anticipated the time needed to implement many of the features and left some wiggle room in our schedule. We stayed on track with our schedule for the first few weeks. However, we deviated from it sometime in the middle for various reasons, such as falling behind or getting ahead of schedule, scrapping ideas due to time constraints, and re-prioritizing the features to be implemented in the game. Most of the UI work got pushed very late in the schedule for this reason.
  2. [Alexie] I underestimated the difficulty of using Maya to model, rig, and animate. In particular, I wasn’t able to get the skeleton rig working and subsequently couldn’t get any animations done. I didn’t get to do as much with the robot model as I could have, and I wasted a lot of time.

B. Then address these more general questions:

  1. What software methodology and group mechanics decisions worked out well, and which ones did not? Why?
  1. [Martin] We would meet twice a week to see how everyone was doing, check whether anyone needed help, and talk about what we wanted next in the game.
  2. [Steven] We didn’t always follow through on our meetings, as some of us would miss a few because of other classes and projects, but luckily we were able to fill in most of our communication gaps through Slack.
  3. [Robin] For group-based decisions, we separated our team into different parts, each dedicated to a specific task. I was in charge of the graphics framework, so when I was coding, I rarely thought about other aspects of the engine, such as how Wai Ho’s entity component system handled the game-specific management. The merit of this approach is that we reduced the coupling of our work, so no one’s work was largely dependent on or stalled by anyone else’s. The downside was that the framework might not be scalable in the long run, since the entity component system needs to be incorporated into every aspect of the engine to do memory management easily. We finished most of our engine in week 10, but we encountered a large number of memory leaks that were hard to trace. Still, for the demo, we were able to run the game for 15 minutes without crashing. There will always be a trade-off.
  1. Which aspects of the implementation were more difficult than you expected, and which were easier? Why?
  1. [Martin] Implementing the UI was a lot easier than expected since we used a library called libRocket. After we had it going, it was relatively simple to use.
  2. [Brian] Animation was more difficult to implement than expected, mostly because animating the models the way we wanted took a long time to figure out.
  3. [Wai Ho] We thought implementing character physics in the Bullet physics engine would be easy, but it turned out to be more difficult than we expected. The correct way to implement the character physics would be to make the character a kinematic object and write our own character controller, but we ended up making our character a rigid body and modifying the player’s velocity directly because it was easier. We decided to use an entity component system in the beginning, but because nobody on our team had a solid understanding of the technique, I ended up refactoring a lot of code throughout the 10 weeks to make it more usable and extensible. After our backend code stabilized, it was simple to add new features. Although I didn’t work on the client side much, I found it very easy to handle key input and load models. This is probably because we are using OpenSceneGraph, which handles most of this for us.
  4. [Steven] Modeling the map actually took longer than expected, as we didn’t have a solid map design in mind, and we ended up changing and scrapping parts of the map when we did gameplay balancing. One thing I regret is scrapping the jump pad feature I implemented: because we removed the second level of the map, we deemed it unnecessary and later removed it from the game. Texturing took a lot of time as well, mainly because I wanted to create a workflow for it. Designing the metallic, roughness, and normal maps for the different textures required extra effort. Later on, I found myself doing a lot of texture baking, texture atlasing, and compression in order to decrease memory usage and improve graphical performance.
  5. [Robin] The in-game editor, and shadow mapping in a tiled deferred shading context.

The in-game editor was just too time-consuming to make production-ready. There were lots of edge cases to deal with, especially for the object picker and manipulator.

The shadow mapping was exceptionally hard to implement in a tiled deferred shading framework. In the classical deferred shading algorithm, the lights are rendered one at a time, so a single buffer can be reused for every light. However, since all lights are rendered in one pass in a tiled deferred shading algorithm, I needed to prepare a huge 4K buffer atlas for all possible shadow depth maps before the light render pass began. This involved managing the lifetime of each atlas slot, and extending each light’s uniform buffer object with the information needed to index the location of its shadow depth map in the atlas. Also, for point light shadow maps, I needed to render the whole scene 6 times to prepare the 6 shadow maps for a single point light, so I had to do both occlusion culling and frustum culling to disable shadow updates for all invisible lights in order to save performance. Next, it was tricky to index the faces of point light shadow depth maps in an atlas, since the typical point light shadow map lookup uses a direction vector to find the face in a cubemap. However, we didn’t have a depth cubemap at hand, since all the depth maps were stored in the 2D atlas. I did something really tricky to index the depth map: I used 6 single-pixel images to construct a cubemap, where the six images held the values {0.0, 0.2, 0.4, 0.6, 0.8, 1.0}. When the shadow calculation sampled a face, it got one of these numbers. Then, in the shader, I multiplied the number by 5.0 to recover the face index of the cubemap (e.g., 0.2 * 5.0 = 1.0, which is the negative-x face of the cube). Finally, I used the face index to find the shadow map’s location in the atlas by indexing into the shadow_map_indexes uniform array passed into the shader.
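The face-index trick above can be sketched in a few lines of C++ (function names are illustrative, not the actual engine code):

```cpp
#include <cmath>

// The six single-pixel cubemap faces store these values; sampling the fake
// cubemap with the light-to-fragment direction returns one of them.
// Face order follows OpenGL: 0 = +x, 1 = -x, 2 = +y, 3 = -y, 4 = +z, 5 = -z.
float encodeFace(int face) { return face / 5.0f; }

// In the shader, multiply the sampled value by 5.0 to recover the face index,
// which then selects the face's slot in the 2D shadow atlas.
int decodeFace(float sampled) { return (int)std::round(sampled * 5.0f); }
```

For example, a sampled value of 0.2 decodes to face 1, the negative-x face, which is then used to index the shadow_map_indexes uniform array.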

The whole PBR pipeline was easier than I thought. Since I had written a shader manager and a basic implementation of deferred shading before the project started, it was easy for me to modify the shaders and experiment with different PBR rendering equations. The open-source Unreal Engine 4 code also helped me a lot in terms of choosing the right equations.
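Returning to Wai Ho’s character-physics workaround above: the rigid-body approach boils down to overwriting the horizontal velocity every tick while letting the physics engine keep the vertical component. A minimal sketch, with a plain struct standing in for the Bullet types (in Bullet this would read `body->getLinearVelocity()` and call `body->setLinearVelocity()`):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Each tick, replace the rigid body's horizontal velocity with the player's
// input direction scaled to move speed, but keep the y component so gravity
// and jumps still come from the physics simulation.
Vec3 steerCharacter(Vec3 current, float inputX, float inputZ, float speed) {
    float len = std::sqrt(inputX * inputX + inputZ * inputZ);
    if (len > 0.0f) { inputX /= len; inputZ /= len; }      // normalize input
    return { inputX * speed, current.y, inputZ * speed };  // preserve gravity
}
```

This is only a sketch of the idea described in the report; the real controller also has to handle collision response, which is why a kinematic body with a custom controller is the "correct" approach.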

  1. Which aspects of the project are you particularly proud of? Why?
  1. [Martin] The UI is what I am proud of, simply because of how good it looks. Alexie, Phil, and I did a great job making sure everything looked nice and true to form.
  2. [Phillip] The UI, and getting an HD image of Marty’s face into the game as the SM Bomb.
  3. [Alexie] I’m proud of the appearance of the final model, especially since I did it super quickly to make up for lost time.
  4. [Brian] I am proud of our in-game world editor, since it helped us modify our game world and see those changes directly in the game without having to export/import anything. Plus, we can reuse this editor to create more maps if we decide to in the future, and it isn’t limited to the game we made in CSE 125.
  5. [Wai Ho] I am very proud of our in-game world editor, plus the many advanced rendering techniques that my teammates implemented. We have very nice UI and models thanks to our artists. I am also proud of the trailing effects of the bombs that I implemented, which add some color to our game. Personally, I believe I did a good job implementing the back end: a teammate who had never touched the server code thought it was relatively simple to add new features on the back-end side.
  6. [Steven] I’m proud of how we did the map and implemented the lights; it gives the map and the game a unique style.
  7. [Robin] The physically based shading and an optimized tiled deferred shading renderer.

The PBR pipeline is just so powerful that it enabled us to make an outstanding indoor scene without manually adjusting 100+ parameters per game object to make it look nice; we only have 4 parameters: albedo, roughness, metallic, and normal interpolation.

The classical deferred shading algorithm can handle approximately 100 Phong-shaded point lights at reasonable performance. But we were using a PBR pipeline that does more G-buffer fetches and more calculation per shaded pixel, which would make classical deferred shading taxing. With a tiled deferred shading algorithm, the number of buffer fetches is drastically reduced: 100 overlapping lights only need to fetch the G-buffer once instead of 100 times. Also, some expensive tricks used in classical deferred shading to reduce light overdraw, like stencil buffer culling, are not needed in the tiled deferred context.
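The tiling idea can be illustrated with a simplified CPU sketch that bins lights into 16×16-pixel tiles by their screen-space bounding circles (the real version, following the Intel slides referenced later in this report, culls against per-tile depth bounds on the GPU; all names here are illustrative):

```cpp
#include <array>
#include <vector>
#include <algorithm>

// Per-tile light lists for a screen divided into 16x16-pixel tiles.
struct TileGrid {
    int tilesX = 0, tilesY = 0;
    std::vector<std::vector<int>> lights;  // light indices touching each tile
};

// circles: {centerX, centerY, radius} of each light's screen-space bound.
TileGrid binLights(int width, int height,
                   const std::vector<std::array<float, 3>>& circles) {
    const int T = 16;
    TileGrid g;
    g.tilesX = (width + T - 1) / T;
    g.tilesY = (height + T - 1) / T;
    g.lights.resize(g.tilesX * g.tilesY);
    for (int i = 0; i < (int)circles.size(); ++i) {
        const auto& c = circles[i];
        // Clamp the circle's bounding box to the tile grid.
        int x0 = std::max(0, (int)((c[0] - c[2]) / T));
        int x1 = std::min(g.tilesX - 1, (int)((c[0] + c[2]) / T));
        int y0 = std::max(0, (int)((c[1] - c[2]) / T));
        int y1 = std::min(g.tilesY - 1, (int)((c[1] + c[2]) / T));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                g.lights[y * g.tilesX + x].push_back(i);
    }
    return g;
}
```

Each shaded pixel then fetches the G-buffer once and loops over only its tile’s light list, instead of fetching once per overlapping light.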

What’s more, I optimized the G-buffer down to two 32-bit RGBA ubyte buffers for the materials and one 64-bit floating-point buffer for the compressed normal and the linear depth used to reconstruct the view-space position. The tiled deferred shading algorithm enabled us to put up to 1,000 dynamic PBR point lights in the scene without drastically impacting performance.

  1. What was the most difficult software problem you faced, and how did you overcome it (if you did)?
  1. [Alexie] Autodesk Maya was hell incarnate. It had a massive learning curve, and I was trying to model, rig, and animate as a complete newbie within 3 months while taking other courses. I was able to figure out a lot about modeling from watching videos and looking things up online, but ultimately rigging took too long for me to complete. However, Robin managed to find some programs to automatically rig and animate our model, and we managed to get that done in time.
  2. [Steven] 3ds Max crashes a lot :’(. I had to learn quite a few more tools in order to do all the texturing work, all of which cost me a lot of time that I could otherwise have spent on something more productive.
  3. [Robin] How to manage a large number of shaders, and how to manage shadow map updates in a tiled deferred context, as mentioned above.

Also, debugging shaders is a huge pain.

Yes, I solved all of them, except for finding an efficient way to debug shaders. I used a rudimentary method: output a color representing the value of interest. Before discovering tools like CodeXL and apitrace, which can inspect the framebuffers every frame, I manually wrote the buffer data out as a raw image file to inspect the per-pixel data, which sucked.

  1. If you used an implementation language other than C++, describe the environments, libraries, and tools you used to support development in that language. What issues did you run into when developing in that language? Would you recommend groups use the language in the future? If so, how would you recommend groups best proceed to make it as straightforward as possible to use the language? And what should groups avoid?
  2. How many lines of code did you write for your project? (Do not include code you did not write, such as library source.) Use any convenient mechanism for counting, but state how you counted.
  1. About 50k lines of code in total.
  2. [Robin]

wc -l `find . -type f`

KaboomGraphicsEngine:

include -> 3,476
src -> 13,275
Media/EffectFiles -> 607
Media/Shaders -> 2,485

==> Total: 19,843 (should be over 20K including the other parts I wrote)

  1. In developing the media content for your project, you relied upon a number of tools ranging from the DirectX/OpenGL libraries to modeling software. And you likely did some troubleshooting to make it all work. So that students next year can benefit from what you learned, please detail your tool chain for modeling, exporting, and loading meshes, textures, and animations. Be specific about the tools and versions, any non-obvious steps you had to take to make it work (e.g., exporting from the tool in a specific manner), and any features or operations you specifically had to avoid — in other words, imagine that you were tutoring someone on how to use the toolchain you used to make it all work. Also, for the tools you did use, what is your opinion of them? Would you use them again, or look elsewhere?
  1. [Robin] For troubleshooting OpenGL and shaders:
  1. Are you sure you want OpenGL? Try DirectX 11 instead. Visual Studio 2013 has an awesome graphics debugger for DirectX 11. Surprisingly, it can even set a breakpoint in a shader!
  2. Look at 1)
  3. If you are stubborn, initialize your OpenGL context with the core profile. With this, you can use Nvidia’s Nsight to debug shaders up to OpenGL 4.0; you can also use Crytek’s RenderDoc to debug.
  4. If for some reason you have to use the compatibility profile for OpenGL, like us (since we chose to use OSG, which was built on top of deprecated OpenGL functions), remember: currently there is no way to set breakpoints in shaders after OpenGL 3.3 (for 3.3 you can try a tool called GLSL-Debugger, though I never had a chance to try it)! You have to go through the process I mentioned before: use apitrace to generate a list of OpenGL draw calls and inspect the intermediate state changes before and after a draw call. Also, use AMD’s CodeXL to inspect all the buffers used per frame, in order to know whether a buffer is properly filled during an intermediate stage of your pipeline.

For our PBR pipeline:

  1. Make models in 3ds Max.
  2. Do the albedo textures in 3ds Max or Blender.
  1. If you hand-paint textures, do it in Blender, as there are lots of tutorials online.
  2. Do not paint lighting information into the albedo map, like specular highlights and ambient occlusion.
  3. Save the unwrapped UV maps to reuse for the roughness, metallic, and normal maps.
  3. Do the roughness, metallic, and normal maps. For detailed surfaces, we hand-painted them. For general-purpose surfaces such as a wall, there is a convenient commercial tool called Bitmap2Material.
  4. For a preview of a PBR asset before putting it in the game editor, we used Marmoset Toolbag 2, a brilliant PBR tool, to conveniently check whether the textures are correct.
  5. In 3ds Max, construct the world geometry, like the indoor walls and grounds. Then manually unwrap the UVs to do the texturing on a huge texture atlas. This way, the objects are rendered with fewer draw calls to improve performance.
  6. Compress all the textures to PVRTC or DXT1.
  7. Add all the models in the in-game editor to assemble them.
  8. Create textured materials for all models and slightly adjust the interpolations until they look right in the game.
  9. Add lights to the scene.
  10. Profit.

For Visual Studio:

  1. NShader for highlighting the syntax of shaders.
  2. VA Assist X for improved auto complete and refactoring.
  3. VsVim for supporting vim keybindings in visual studio.

  1. Would you have rather started with a game engine or would you still prefer to work from scratch?
  1. [Wai Ho] Since this was my first time writing a game, I found that I learned a lot by working from scratch, but if I were to write another game in the future, I would prefer using a game engine.
  2. [Steven] I have learned quite a lot from this project as well, but I didn’t learn as much OSG/OpenGL as I would like, because I had no prior graphics experience before taking this class. So if I could start over, I would prefer to work on the graphics from scratch.

c. [Robin] I would start from scratch for the graphics part (not the level editor or game logic part).

In this project, we were using OSG as our rendering dependency. OSG is pretty well optimized for the basics, like frustum culling and reducing draw calls by taking advantage of its awesome scene graph structure.

However, there are some caveats that really bit me hard during development.

  1. Every time you make an RTT camera for render-to-texture, by default it creates 2 buffers: the buffer you assigned, plus an additional depth buffer the same size as the color buffer. (If you attach a depth buffer, it creates an additional color buffer of the same size.)

However, this behavior is not documented (or at least not easily found on the website). When I implemented the huge depth buffer for the texture atlas, it created two 4K buffers! I only found this out when our game frequently crashed the driver of the demo computer. I then used apitrace and the OSG source code to find the duplicated render buffer for every framebuffer object I had created.

2) The whole of OSG is built on top of deprecated OpenGL function calls. Some routines might be optimized for APIs prior to OpenGL 3.x and for a forward rendering pipeline. But since we were rewriting the rendering part and using deferred shading, some optimizations in OSG became extra, unnecessary cost in our framework: unused calls like glLightfv(), glMaterial(), and glEnable(GL_LIGHTING) are made very frequently, and there is no easy way to prevent them from being generated.

3) Since OSG uses the compatibility profile, if you want to use OpenGL core features, you cannot run under Mac OS, since Mac OS can only initialize a core context for features after OpenGL 2.1. This means we immediately lose the portability of our game.

4) Currently, there are no tools to debug shaders under the OpenGL 4.x compatibility profile.

However, the reason we chose OSG is that before 125 I was not an expert in the OpenGL API, so I had no idea of the best way to reduce draw calls. Although OSG generates lots of unnecessary calls, it is still well optimized in most cases; for example, it has an osg::Optimizer class that can automatically remove unnecessary state changes, group geometries, and send them to the graphics card in batches. Also, OSG has very solid utilities (math classes, bounding box, polytope, and intersection calculations, occlusion queries, and image and model loaders for nearly all the common formats), which sped up our development a lot.

For the level editor, I’d definitely use a third-party library such as the newly open-sourced Sony level editor, which can be customized to integrate with our own engine.

For the other parts of the game engine, I’ll probably look for a well-tested entity component framework, since those parts are not going to change as frequently as a renderer would, and rewriting one is just reinventing the wheel.

  1. For those who used a networking library (e.g., RakNet or Boost) or physics library (e.g., Bullet), would you use it again if you were starting over knowing what you know now? Describe any lessons you learned using it (problems that you had to troubleshoot and how you addressed them) for future groups who may use it. If you did not use a library, judging from the experiences of the groups that did, would you have used it in retrospect?
  1. We used the Bullet physics engine, and we didn’t use RakNet or Boost for our networking. I [Steven] followed one of the tutorials on the 125 website for building the networking layer, and it proved to be very useful later on when debugging. If you are using TCP and experiencing slowness, try turning on TCP_NODELAY.
  2. We didn’t have to struggle with networking as much as we did with the Bullet library. A general piece of advice for Bullet is to learn it early. There aren’t many tutorials on Bullet, and even fewer when you encounter issues; learn from the example code instead. As for implementing the character controller, we looked at btKinematicCharacterController but ended up not using it, because it is too barebones and would require us to write our own collision detection between other kinematic objects and rigid bodies.
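Turning on TCP_NODELAY (which disables Nagle’s algorithm, the usual cause of the slowness mentioned above, since it holds back small packets to coalesce them) is a single setsockopt call on a POSIX socket; the helper name here is illustrative:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

// Disable Nagle's algorithm so small game-state packets are sent immediately
// instead of being coalesced. Returns true on success.
bool enableTcpNoDelay(int sockfd) {
    int flag = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof(flag)) == 0;
}
```

Call it on the client socket right after connect(), and on each socket the server gets back from accept().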
  1. What lessons about group dynamics did you learn about working in such a large group over an extended period of time on a challenging project?
  1. [Martin] Make constant reports, and if someone is struggling with a concept or their work, see if you can help them out.
  2. [Alexie] If you’re struggling with something, ask other group members to help. Wish I had done that way earlier than I did.
  3. [Wai Ho] Communicate with team members often and regularly. Discuss any struggle you face early.
  4. [Steven] Group meetings are very important, as they are the only way to know whether your teammates are struggling on a particular problem. Also, I realized most of our team are night owls, so meeting early is probably not a good idea….
  5. [Robin] Before joining the group, I needed to do extensive research on graphics topics and have a vision of what the game would look like. Then I needed to write something up for the other group members to show that a certain level of graphics was indeed achievable.

  1. Looking back over the past 10 weeks, how would you do things differently, and what would you do again in the same situation?
  1. [Alexie] Start earlier, ask for help earlier, dedicate more time to the project.
  2. [Steven] Don’t add more features in the last two weeks; work on polishing up the game instead.
  3. [Phillip] Playtest the game more and focus more on the gameplay.
  4. [Robin] We did pretty well in terms of code organization, but it still needs a lot of improvement to be reusable.

Before even starting to code, we should take 3 or more days (before the quarter starts) to design the overall software structure of the game. It is better to have a visual representation of the whole engine, like a UML diagram for each class we are going to implement.

Then, write out how a single component of the game will travel through the whole engine and finally appear on the screen. That might be a chain of function calls going through the diagrams we designed, plus the art toolsets to use for production.

Also, I made a wrong decision on shadow mapping. I literally spent a whole week working out the shadows for our game, but I only tested them on my own desktop, which has a GTX 580, where they were not at all laggy. When we did the dry run, the framerate dropped below 30 in most cases because of the taxing point light shadows and the poor performance of the graphics card. A single point light shadow would drop the framerate by 10 fps, which was unacceptable. I knew the shadow system could be made more efficient, but it was already week 9 at that point, and we could not afford to spend additional time on shadows. In the end, we ditched all the point light shadows and left only the directional light shadow for the sun, which means I wasted a week on nothing substantial for our game.

Moreover, we should have done the textures right from the point when I finished the PBR system. It ended up with Steven and me spending nearly 3 whole days (20 hrs × 3) before the demo day making the textures, doing the texture mapping and UVW unwrapping, combining them into a texture atlas, and struggling with 3ds Max quirks and crashing PVRTC/DDS compression tools. It was a close call; we very nearly could not finish the textures on time and would have had to leave the whole scene untextured.

I should have made the reflection probe freely movable in the scene right after I finished implementing the PBR reflection system. I only started doing this 12 hours before the final demo, and I simply could not get it done, with some bugs bugging me for hours. That’s why in our final demo the reflections were not obvious and were incorrect on some floors with low roughness. All surface reflections relied on the global reflection probe centered at the origin, which effectively captured the sky but not the actual indoor scene, making the reflective ground reflect the sky instead of the ceiling in its zone.

We should have focused more on the game mechanics. I found our game visually appealing but lacking in playability, and not juicy in terms of interactivity. I guess if we had made the game efficiently handle dynamic objects, it would have been a lot more interesting. That would have meant ditching the Bullet engine and choosing a simpler physics engine that does not handle all the calculations in a physically correct way, but calculates fast enough to handle a lot of objects.

  1. Which courses at UCSD do you think best prepared you for CSE 125?
  1. [Martin] CSE 123 helped in terms of networking.
  2. [Steven] same as Marty, CSE 123 helped a lot in terms of network communications.
  3. [Brian] CSE 167 helped in terms of graphics.
  4. [Alexie] CSE 169 for understanding various modeling, rigging, and animation terms.
  5. [Robin] CSE 167, 169, 131, 199 (thanks to Prof. Jurgen Schulze), and UC socially dead 😉
  6. [Phillip] CSE 167 for graphics, CSE 134b for the UI since we used HTML and CSS
  1. What was the most important thing that you learned in the class?
  1. [Martin] I learned how to work as a group to accomplish a large project like this.
  2. [Phillip] Marty learned how to be a salty team player.
  3. [Alexie] Some people are very salty in life. Also, learned how to use the basics of Maya, something I’ve been wanting to do for a while now.
  4. [Robin]

1) How to speak English.

2) If you jump for the sun, at least, you can land on the moon.

That’s what we did. We imagined from the start that the graphics of our game could be like one of those AAA titles, with superb rendering capabilities, so during the first few weeks we made all the plans to make that possible and worked really hard to achieve that goal. During week 7, we encountered numerous difficulties and realized we could not possibly achieve it before the deadline, but the foundation laid in the first 6 weeks really got us closer to that goal.

3) Be active and make progress constantly.

 

  1. Please post four final screenshots of your game on your group pages for posterity. I will also display them on the group web page.

We need to find some pictures with our particle effects.

 

[Screenshots: image00, image01, image02, image03, image04]

C. Finally, if you wish, I would appreciate any feedback on the course (entirely optional):

  1. What books did you find helpful that were not on the recommended list but should be? What books were on the recommended list but were not useful and should be removed?
  • Game Engine Architecture: the things absolutely needed for a game engine.
  • Game Programming Patterns.
  • Real-Time Shadows: although the title is about shadows, the book is a really good guide to understanding how real-time rendering works.
  • All the ShaderX and GPU Pro series, if you want to make AAA graphics 🙂
  • Physically Based Rendering: From Theory to Implementation: teaches you the foundations of graphics rendering, both real-time and offline. I always use this book to look up the equations I don’t understand in real-time rendering. It also helped with the implementation of our PBR framework.

  1. I will be teaching this course next Spring. What advice/tips/suggestions would you give students who will take the course next year?
  1. [Alexie] TO ALL ARTISTS. DO. NOT. UNDERESTIMATE. HOW LONG IT TAKES. TO MAKE 3D MODELS. AND RIG THEM. AND ANIMATE THEM. ESPECIALLYYYYY RIGGING THEM. This assumes you don’t have previous experience working with Maya, but it also depends on how ambitious you are about your model’s final product. I wanted a really clean and smooth model that I could animate as I pleased, but rigging turned out to be really difficult. If you plan on doing everything yourself, the Maya Learning Channel on YouTube is great for that. Start really early and don’t waste time. If you haven’t worked with Maya before, expect a steep learning curve. Here’s a general idea of how long the videos for each step are: modeling, 1-2 hours; rigging, 6 hours; animation, 1 hour. This doesn’t include the time spent trying to find the features in Maya and actually doing the work. Something else important to consider when designing models is where the joints have to be. I recommend looking at other model schematics for reference. There are a lot more joints than you’d think if you want a more complex-looking model. Blender has a bit of a learning curve too for texturing, so prepare for that. Look into hand-painting materials if you like to manually draw in details and textures!

b. [Robin] For those who are interested in graphics:

If you really want your graphics to be outstanding: read more and code more graphics-related things before entering this class. During development in 125, there is no time to spend more than a day learning a completely new graphics concept.

There will be times when you cannot hold yourself back from implementing fancy algorithms like PBR, deferred shading, and global illumination without prior knowledge of them. Based on my experience, these algorithms are not at all complicated, but making them work properly and consistently requires a lot of experience setting them up in an engine. For example, simply setting up the G-buffer requires you to know at least how to render to texture, how to choose the texture format for each buffer, and how to control their sizes. Some deferred shading tutorials don’t care about these details and instead choose RGBA32F textures for all of the render targets just to make it work. That is sufficient for a simple demo with only a sphere and 10 lights in the scene, but as the game grows larger and the shaded pixels cover the whole screen, graphics card memory bandwidth and size become the bottleneck if all the targets are RGBA32F, and you just won’t know what happened to drag down your fps. Out of extreme frustration, you may end up implementing these techniques half-complete and never get a chance to show them, or at least never make the effect obvious. That’s what I experienced before the class, but that experience was invaluable when I actually developed the engine in 125.

I personally spend at least half an hour before bed reading technical blogs about rendering. Here are some recommendations:

This is the blog of a developer on The Order: 1886.

Two resources helped me the most:

  1. His shadow mapping technique demo
  2. His position-reconstruction-from-depth demo

https://mynameismjp.wordpress.com/
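As a concrete instance of what the second demo covers, here is a hedged CPU-side sketch (my own illustration, not MJP’s code) of reconstructing a view-space position from an OpenGL-style NDC depth value; all parameter values are arbitrary:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// NDC z produced by a standard OpenGL projection for a point at view-space
// distance d in front of the camera (n, f = near and far plane distances).
float viewDepthToNdcZ(float d, float n, float f) {
    return ((f + n) * d - 2.0f * f * n) / (d * (f - n));
}

// The inverse: recover the linear view-space distance from the depth value.
float ndcZToViewDepth(float ndcZ, float n, float f) {
    return 2.0f * f * n / (f + n - ndcZ * (f - n));
}

// The per-pixel reconstruction a deferred pass performs, done on the CPU here:
// scale the NDC xy by the frustum extents at the recovered depth.
Vec3 reconstructViewPos(float ndcX, float ndcY, float ndcZ,
                        float n, float f, float fovY, float aspect) {
    float d = ndcZToViewDepth(ndcZ, n, f);
    float t = std::tan(fovY * 0.5f);
    return Vec3{ ndcX * d * t * aspect, ndcY * d * t, -d };
}
```

The same two dozen lines ported to a fragment shader are what let a deferred renderer store only a depth buffer instead of a full RGBA32F position target.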

I learned how Unreal 4 does PBR from a SIGGRAPH course collected on this blog.

http://blog.selfshadow.com/

Some neat and easy techniques, such as water flow maps and dual-paraboloid shadow maps.

http://graphicsrunner.blogspot.com/

I implemented tiled deferred shading from these slides.

https://software.intel.com/sites/default/files/m/d/4/1/d/8/lauritzen_deferred_shading_siggraph_2010.pdf

http://www.slideshare.net/DICEStudio/directx-11-rendering-in-battlefield-3

A lot of PBR material; it helped me out with implementing reflections.

https://seblagarde.wordpress.com

A collection of research papers published at SIGGRAPH, I3D, EG, etc. Terrific, and the most up-to-date real-time rendering resource.

http://kesen.realtimerendering.com/

Wolfgang Engel’s blog. A lot of optimization tricks.

http://diaryofagraphicsprogrammer.blogspot.com/

ShaderToy: enjoy the amazing shaders written by extremely smart people.

http://www.shadertoy.com

Documentation of Unreal 4. Really useful for knowing what the best achievable effects look like.

https://docs.unrealengine.com/latest/INT/Engine/Rendering/LightingAndShadows/LightTypes/SkyLight/index.html

Gamedev forum:

http://www.gamedev.net/index

A collection of industry people’s blogs:

http://svenandersson.se/2014/realtime-rendering-blogs.html

  1. How can the course be improved for next year?
  1. [Martin] This course could be improved by having former students come back and talk about their experiences and which libraries they used. Having Jake come in and talk was great, but it only gave us one perspective.
  2. [Alexie] Agreed with Marty. I really wish I could have heard from previous artists which programs they recommend, and about general model design.
  3. [Wai Ho] Hold lectures in the afternoon instead of the morning.
  4. [Robin] For the guest lectures, I think it would be more useful to have technical talks about things that are not too hard for us to implement.

The first guest lecturer, from Blizzard, went really deep into how networking works in a Blizzard game, but that was not very useful for our purposes. It might be better to have the speaker address general problems we are likely to encounter in a small-scale game like the one we developed.

A real example of what we actually needed is how to correctly set up an entity component system. There are numerous tutorials online on this topic, but each has a different mindset, which made us wonder whether some of the authors really understood what an entity component system is. We ended up with a customized, or “imagined,” version of an entity component system that fit our needs, but we never knew whether we were doing it correctly or whether it would scale in the long run.
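For what it’s worth, the minimal shape we converged on looks roughly like this hedged sketch (the names and the map-based storage are illustrative only; real engines pack components into contiguous arrays for cache friendliness):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

using Entity = std::uint32_t;

struct Position { float x = 0, y = 0; };
struct Velocity { float dx = 0, dy = 0; };

// Entities are just ids; components live in per-type stores.
struct World {
    Entity nextId = 0;
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;
    Entity create() { return nextId++; }
};

// A system is plain code iterating over entities that own the components it
// needs; it holds no state of its own. Here: integrate velocity into position.
void movementSystem(World& w, float dt) {
    for (auto& kv : w.velocities) {
        auto it = w.positions.find(kv.first);
        if (it == w.positions.end()) continue;
        it->second.x += kv.second.dx * dt;
        it->second.y += kv.second.dy * dt;
    }
}

// Tiny demo: one entity, one simulation step of dt = 1.
float demoStep() {
    World w;
    Entity e = w.create();
    w.positions[e] = Position{0, 0};
    w.velocities[e] = Velocity{1, 2};
    movementSystem(w, 1.0f);
    return w.positions[e].y;  // moved by dy * dt
}
```

The appeal of this shape is that adding a new behavior means adding a component struct and a free function, without touching any entity class hierarchy.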

Also, it would be nice to have some classes focusing on tuning the Microsoft C++ compiler settings and on managing headers efficiently.

As our codebase grew over time, we found our compilation times getting too long. A clean build takes at least 4 minutes on the demo computer’s Core i7 950, a very decent CPU. I have built other source trees of similar size with Visual Studio in less time, so I suspect there are specific compiler settings that would speed things up.
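For reference, these are the MSVC knobs I would look at first (a hedged starting point, not a tuned configuration; verify each against the documentation for your Visual Studio version):

```
/MP                       compile translation units with multiple compiler processes
/Yc"pch.h" /Yu"pch.h"     create / use a precompiled header for stable includes (STL, Win32)
cl /showIncludes foo.cpp  dump the include tree to find which headers get pulled in everywhere
```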

Header management was also really bad in our game. Many files include headers that change frequently, so whenever one of those headers is modified, every file that includes it gets recompiled, which costs a lot of development time. It would be great to have lectures on organizing header files and program structure so that dependencies are minimized.
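One standard technique for cutting these rebuild cascades is forward declarations plus the pimpl idiom. A hedged single-file sketch (the class names are made up, and in a real project the two halves live in separate .h and .cpp files):

```cpp
#include <cassert>
#include <memory>

// ---- Renderer.h (sketch) ----
// Only a forward declaration here: files that include Renderer.h do NOT
// recompile when GpuState's layout changes.
class GpuState;  // instead of #include "GpuState.h"

class Renderer {
public:
    Renderer();
    ~Renderer();  // must be defined where GpuState is a complete type
    void drawFrame();
    int frameCount() const;
private:
    std::unique_ptr<GpuState> state;  // pimpl: layout hidden from clients
};

// ---- Renderer.cpp (sketch) ----
// The heavy definition is visible only in this one translation unit.
class GpuState { public: int frames = 0; };

Renderer::Renderer() : state(new GpuState) {}
Renderer::~Renderer() = default;
void Renderer::drawFrame() { state->frames++; }
int Renderer::frameCount() const { return state->frames; }

// Tiny demo used by the asserts below.
int demoFrames() {
    Renderer r;
    r.drawFrame();
    r.drawFrame();
    return r.frameCount();
}
```

The cost is one pointer indirection per call, which is usually a fair trade for turning a project-wide rebuild into a single-file recompile.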

Moreover, it would be great to have a lecture on building a robust, flexible logging system for tracing bugs in a real-time environment.
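The core of such a system can be small; a hedged sketch of one common design (a ring buffer of recent messages plus a call-site macro; the buffer size and message format are arbitrary):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

enum class LogLevel { Debug, Info, Warning, Error };

// A tiny in-memory logger: messages go into a bounded buffer so a crash
// handler (or an in-game console) can dump the last N events.
class Logger {
public:
    explicit Logger(std::size_t capacity) : cap(capacity) {}
    void log(LogLevel lvl, const char* file, int line, const std::string& msg) {
        if (lvl < minLevel) return;  // cheap runtime filtering
        char buf[256];
        std::snprintf(buf, sizeof(buf), "%s:%d %s", file, line, msg.c_str());
        if (entries.size() == cap) entries.erase(entries.begin());  // drop oldest
        entries.push_back(buf);
    }
    const std::vector<std::string>& recent() const { return entries; }
    LogLevel minLevel = LogLevel::Debug;
private:
    std::size_t cap;
    std::vector<std::string> entries;
};

// Macro so every call site records where it came from.
#define LOG(logger, lvl, msg) (logger).log(lvl, __FILE__, __LINE__, msg)

// Tiny demo: capacity 2, three messages, so only the last two survive.
std::size_t demoLog() {
    Logger lg(2);
    LOG(lg, LogLevel::Info, "loading map");
    LOG(lg, LogLevel::Info, "spawning players");
    LOG(lg, LogLevel::Error, "bomb entity leaked");
    return lg.recent().size();
}
```

A production version would add timestamps and move the formatting off the game thread, but even this much is enough to reconstruct what happened right before a desync or crash.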

New graphics cards… It’s already 2015, and the GTX 460 in the demo machine is a mid-range card released 5 years ago, while the graphics cards in the lab computers are only slightly faster than the integrated graphics in my laptop. With a more capable card, like a mid-range GTX 760 from 2015, we could enable 6 or more shadowed point lights without lagging. One good thing about a slow graphics card is that it tells me my algorithms or settings are not efficient enough, but that’s all; in 10 weeks it is hard to optimize everything. When developing on a laggy computer, I usually felt frustrated and reluctant to move on, always thinking my implementation was crap… (even though it is)

  1. Any other comments or feedback?
  1. [Alexie] Thanks for an awesome quarter! I had a lot of fun.
  2. [Martin] Woot I got a degree.
  3. [Steven] Thanks for the donuts, it was a great moral boost! 😀
  4. [Robin] This is the class that made me confident enough to go the graphics route in the next couple of years. Thanks to Professor Voelker for the awesome class, and thanks to all the members of the Salty Marty group.

 

 
