Final Report
This is our final report for the project.
1. Game concept: To what extent did your game concept change from initial concept to what you implemented? If it did change, how did it change and why?
Our game concept remained largely unchanged - our specification outlined an asymmetric tag game, and we delivered the MVP as well as most of the nice-to-haves. Perhaps the only major conceptual change was dropping the horror aspect - to properly sell a scary theme, we would have needed many more features in place, which we unfortunately did not have time for by the end. We ended up cutting some features and shifting to a more dynamic theme.
2. Design: How does your final project design compare to the initial design, and what are the reasons for the differences, if any?
There were a lot of initial design ideas left unimplemented. We had a full floor map in mind, but quickly realized it was too much effort and decided to stick with one room instead. One of the planned gameplay elements was map editing with traps, which we cut entirely in favor of player powerups. Cutting the horror theme also changed a lot of design choices: in terms of graphics, we cut particle effects such as fog and mist, and dynamic lighting adjustments were mostly cut, with the exception of nocturnal mode. Gameplay-wise, the music, round duration, and powerup card art were all changed to suit a more dynamic theme rather than a horror one. This happened gradually as we realized the horror aspects didn't fit within the timeframe of the project.
With regards to running out of time - we believe that if we had maintained the velocity we reached by the end of the quarter, we could have gotten a lot more content in (such as additional map elements and more dynamic lighting). But if we had been given extra time to begin with, we would not have had that velocity.
3. Schedule: How does your final schedule compare with your projected schedule, and what are the reasons for the differences, if any? (You should be able to glean this from your status reports.)
Graphics - 3 weeks behind. This was due to the immense amount of baseline code that had to be set up (see below for an extremely detailed report on graphics).
Art - 3 weeks behind. The assets themselves were completed ahead of schedule early in the quarter, but because graphics was a bottleneck, several adjustments had to be made once the graphics pipeline caught up.
Audio - 3 weeks behind. There wasn’t much on the schedule for audio, but the sound engine wasn’t fully complete until week 10, 3 weeks after it was supposed to be done. Most of the sound effects and music were also completed in week 10.
Client-server integration - week 5 (2 weeks behind). This first milestone was two weeks behind mainly because we wanted to ensure that the integration with graphics was done properly. Our first integration milestone was a very basic triangle movement check.
MVP - week 8/9 (4 weeks behind). The main cause of the delay was that we severely underestimated the amount of work that still needed to be done after the barebones client-server integration. We needed to introduce camera logic and movement logic, and the renderer needed to display 3D cubes and models instead of flat 2D triangles. Sorting these out was the main challenge and time crunch of the project.
Playtesting - 1 day vs. the initial weeklong plan. By the time we resolved the MVP and had all relevant gameplay elements in place, we had only a few days left to playtest the game, which meant we had to make lots of balancing decisions on the fly (hunter/runner speed, jump height, etc.).
The root cause of most of the delays above was an optimistic timeline in our specification, combined with parts of development bottlenecking one another. It was hard to implement game logic without being able to get visual feedback; conversely, once we had the integration in place, development velocity ramped up much more quickly.
4. Describe your development environment. What tools did you use? What was your build workflow? If you supported multiple platforms (e.g., MacOS and/or Linux), how did you support making your project work on all platforms? Do you have any tips or suggestions for future groups for their development environment?
Visual Studio IDE - our build system, text editor, and debugger. We used VS to manage all our dependencies (which really were just the Parson and FMOD libraries), so the build workflow was as simple as a click. The downside is that our project is Windows-only, which is both a blessing and a curse (as explained below).
Python - the uv package manager and virtual environments for simple texture conversion scripts, and Blender's built-in Python environment for scene/animation/bounding box exports. These were mainly utilities that helped export data from scenes into our game.
[Alex] Using Visual Studio was really useful because we could build and test our project really quickly. The IDE tools are pretty helpful for debugging and keeping everything in the project organized.
5. What group mechanics decisions worked out well, and which ones (if any) did not? Why?
Good decisions:
Biweekly meetings - having biweekly team meetings really helped ensure that everyone was on track and knew what they had to work on for the next meeting. During these meetings, we mainly discussed our progress on the tasks at hand and delegated work for the time up until the next meeting. If group members had any emergencies or a busy week ahead, we could also adjust workloads accordingly.
Responsiveness and activeness in Discord - outside of meetings, our group members were also very active on Discord. This ensured that any questions about each other's work could be answered with relatively low delay, accelerating development.
Atomic work items and strong ownership - during the meetings, we would delegate tasks that were very specific and usually assigned to one person only. This induced a strong sense of ownership of the task at hand and made it easy to ask about progress. Instead of saying "let's have 3 people work on the server this week," we would say "let's have Chase work on jump, Will work on attack, and Alex work on dodging." The former is vague and could cause a lot of communication overhead between the three members to figure out who is responsible for what, whilst the latter makes the requirements clear and more manageable for each individual member.
Merge often, don't race ahead too much - most gameplay logic and elements branched off our "main" branch (integration2) and were quickly merged back after each feature was complete. The sole exception to this was the graphics branch, which required one big merge in week 6 to get it integrated with the gameplay branch. Dealing with big merges like these is frustrating and can introduce a lot of errors if handled incorrectly, hence why we preferred small, frequent merges.
Mixed decisions:
Exclusive development environment - one of the major downsides of using VS is that it limits which machines you can develop on. Some of our group members weren't able to work from home or from their own machines, which impaired development efficiency. However, this meant that more of our group members decided to work in the lab space, which we believe actually boosted our efficiency. There are fewer distractions in the lab, and you can directly communicate with any teammates there.
6. Which aspects of the implementation were more difficult than you expected, and which were easier? Why?
[Andrew] I think the gameplay graphics (camera logic and UI) were considerably more difficult than expected… Previously I had little experience with DX12, or graphics libraries in general. Jaiden really helped me out a lot with understanding the memory and data flow from CPU to GPU, which I think is the crucial point for implementing these functionalities.
[Alex] I was responsible for a lot of the integration early on, and I think it was easier than expected. We ran into some hardware issues, but overall it was not too bad. I think the hardest part was building off of the new integration and getting everyone up to speed on how to proceed with development. When there's a major push or someone has to step away for a week, it can be pretty confusing getting back into the code.
[Will] As the game got more and more complicated, it became hard to track down code. Also, whenever we built more powerups, we needed to refactor lots of existing code to make it work with the new powerups.
[Jerry] Same as the answers above - I think tracking the progress of other parts of the code is hard. I am surprised that we didn't have to deal much with multithreading in our game except for the timer mechanism, which was simpler than I thought.
7. Which aspects of the project are you particularly proud of? Why?
[Andrew] I like the networking infrastructure - the templated functions I architected really made it easy to craft custom packets, though perhaps it wasn't the most performant. It really did save a lot of hassle down the line when we wanted to implement more gameplay functionality.
[Jaiden] I’m proud of the lightmapping. It wasn’t that hard with our custom asset pipeline but it made everything look great.
[John] I’m proud of how the game turned out, I think it was pretty much exactly how we initially envisioned it, and it’s cool to see it be real.
[Alex] I’m proud of some of the powerups I implemented, such as the bear. I think they added a lot of uniqueness to our game and were fun to design.
[Will] I’m proud of the features that I worked on, like collision, bounding boxes, attack, dodge, some powerups like speedup, jump height, phantom, etc.
[Jerry] I am proud of the round logic that I implemented, such as phase changes, the timer, and tiebreakers. This round logic was solid - it never broke once!
[Chase] I’m proud of the music, especially the audio effects that I applied on top of the track. I think it made it sound a lot more unsettling than it originally was, and added a lot of atmosphere to the game.
8. What was the most difficult software problem you faced, and how did you overcome it (if you did)?
[Andrew] Not necessarily the most difficult, but one that was very hard to debug - we had a non-deterministic bug that would crash the server every now and then. We suspected it had something to do with parallel access to state and the network wrapper (they aren't thread safe), so we added mutexes to everything. We thought the bug was resolved, but that only decreased its frequency. We resolved the bug one day while randomly looking through the code: it turns out we had passed the wrong packet type (GameState vs. AppPhase) to one of our templated functions for the network wrapper, and that was causing undefined behavior. This line of code had nothing to do with logic errors - it was really just a hidden typo in an unexpected location.
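To illustrate the failure mode, here is a hypothetical sketch (not our actual wrapper - the names and layout are made up): if a templated send function trusts a caller-supplied packet-type tag, it will happily serialize one struct while labeling it as another, and the receiver then reinterprets the bytes as the wrong type.

#include <cstdint>
#include <cstring>
#include <vector>

enum class PacketType : uint8_t { GameState, AppPhase };

struct GameStateData { float positions[16]; };  // 64 bytes
struct AppPhaseData  { uint8_t phase; };        // 1 byte

struct Packet {
    PacketType type;
    std::vector<uint8_t> payload;
};

// Hypothetical wrapper: serializes any trivially copyable struct,
// tagged with whatever type the caller claims it is.
template <typename T>
Packet makePacket(PacketType claimedType, const T& data) {
    Packet p{claimedType, std::vector<uint8_t>(sizeof(T))};
    std::memcpy(p.payload.data(), &data, sizeof(T));
    return p;
}

// The bug in one line: the payload is a 64-byte GameStateData, but the
// tag says AppPhase. The receiver switches on p.type and copies the
// payload into the wrong struct (or, in the reverse case, reads past
// the end of a 1-byte payload) -- undefined behavior that only crashes
// when the garbage bytes happen to be harmful.
// Packet p = makePacket(PacketType::AppPhase, someGameState);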
[Alex] Same ^^. Another issue that we dealt with the entire quarter was input delay on the clients, which varied from client to client. We ignored it initially, assuming it was due to testing all the clients on the same computer, but it persisted and ended up being even worse in the dry run. It really confused us how this problem could be happening to our group only. It took a long time to find, but we eventually made a small change to how the client received packets, which fixed the problem.
Another obstacle to our progress was waiting on graphics to get to a point that we could work with to implement movement and game features. We got past this by developing the graphics incrementally, in a way that was useful to other developers (starting with a moving triangle, then a 3D cube, then 4 debug cubes, etc.).
[Jerry] Agreed with Andrew. That packet bug caused us to investigate large parts of the rest of the code and make modifications.
[John] Making the bounding boxes. Holy bounding boxes.
[Will] The “attack” action was more complicated and difficult than I expected, because the action consists of registering one attack at a time, then some delay, then the attack calculation (checking range for a hit), and finally cooldown and movement slowdown. The range calculation required me to refresh some linear algebra tricks. Then, because the calculation takes place after some delay, we need to use the current position of the player instead of the one sent in the packet earlier. Those parts were tricky, but with help from Andrew and Jaiden, we were able to finish them.
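A rough sketch of that pattern (hypothetical names, not our actual server code) - the key point being that the hit check runs after the wind-up delay and therefore reads positions at resolution time, not the positions from the packet that started the attack:

#include <chrono>

using Clock = std::chrono::steady_clock;

struct Vec3 { float x, y, z; };

struct PendingAttack {
    int attackerId;
    Clock::time_point resolveAt;  // registration time + wind-up delay
};

// Runs on a server tick once resolveAt has passed. Both positions are
// the players' CURRENT positions -- they kept moving during the
// wind-up, so the positions in the original packet are stale.
bool inRange(const Vec3& attacker, const Vec3& target, float range) {
    float dx = target.x - attacker.x;
    float dy = target.y - attacker.y;
    float dz = target.z - attacker.z;
    // Compare squared distances to avoid the sqrt.
    return dx * dx + dy * dy + dz * dz <= range * range;
}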
[Chase] There were a couple of weeks where I wrestled with trying to get the game to work on my laptop, which kept breaking because the graphics code kept changing. Eventually I gave in and decided to work with the lab computers from that point forward.
9. In developing the media content for your project, you relied upon a number of tools ranging from the underlying graphics libraries to modeling software. And you likely did some troubleshooting to make it all work. So that students in future years can benefit from what you learned, please detail your tool chain for modeling, exporting, and loading meshes, textures, and animations. Be specific about the tools and versions, any non-obvious steps you had to take to make it work (e.g., exporting from the tool in a specific manner), and any features or operations you specifically had to avoid — in other words, imagine that you were tutoring someone on how to use the toolchain you used to make it all work. Also, for the tools you did use, what is your opinion of them? Would you use them again, or look elsewhere? Are there any tools that you used but, looking back, you would avoid?
[Jaiden] Here is my advice on DX12:
- Math conventions:
  - Know whether:
    - a transformation matrix is interpreted row-major or column-major
    - a transformation matrix is meant to transform row vectors or column vectors
    - your coordinate system is left- or right-handed
    - your rotations are clockwise or counterclockwise
    - +y represents up or down in texture coordinate space
  - DirectX
    - left-handed normalized device coordinates
  - HLSL
    - stores matrices column-major
    - texture sampling:
      - +y represents down in texture coordinate space
      - left-handed cubemap coordinate system (this seems to be the industry standard)
  - DirectXMath
    - stores matrices row-major
    - provides helper functions to create matrices that transform row vectors and assume a left-handed coordinate system with clockwise rotations
  - Blender
    - Python API stores row-major matrices that transform column vectors
    - right-handed coordinate system with counterclockwise rotations
    - +y represents up in texture coordinate space
  - Advice
    - use a column-major math library to remove the need to transpose matrices before they are sent to the GPU (see the sketch after these notes)
      - aka not DirectXMath
      - hlslpp may be worth looking into
    - For any matrices that transform vectors from a left-handed coordinate system to a right-handed one or vice versa, take extra care using library functions, or write the function yourself. It is unlikely that your math library supports this directly.
      - I wrote a custom view matrix generation function to transform vectors from the right-handed world space coordinate system inherited from Blender to a left-handed camera coordinate system more suitable for DX12 normalized device coordinates
- File Paths
  - we used Microsoft's ReadData.h library to read data files in the executable path
  - I don't know how it works
- Shaders
  - at build time, shaders were compiled to bytecode by dxc, which ships with Visual Studio
  - compiling shaders at runtime from a .hlsl file restricts you to older shader models
  - we shared some types between C++ and HLSL using a unified header file and some fancy macros
  - HLSL constant buffers have weird alignment rules, not like C structs
- Bindless Rendering
  - allows you to access any data that you've sent to the GPU without binding it to a virtual register
  - made easier with Shader Model 6.6, released in 2021, but not every computer supports it
    - 2 members had to switch to using lab computers because of this
  - can still be used pre-Shader Model 6.6, but it's more annoying
  - because of this restriction, it is not a useful path unless you're doing it for resume-driven development or ray tracing
- General architecture
  - USE VALIDATION LAYERS (see the sketch after these notes)
    - about 80% of bugs were a result of an invalid configuration of DX12 "objects"
    - about 95% of these were caught by validation layers
  - different structs contain per-drawcall information depending on the type of draw call (static vs. skinned mesh)
    - passed in via root constants
    - also contain indices of descriptors in the Resource Descriptor Heap (only relevant for bindless rendering)
  - coupled CPU and GPU buffers as well as their descriptors into a single struct that could be initialized from CPU data
    - did something similar for textures
- Libraries
  - I had a pretty minimal approach to libraries because I wanted to use this class to write stuff as from-scratch as possible.
  - DirectXMath is not similar enough to HLSL to be ideal but is convenient because it comes with Windows development
  - ddspp was a good .dds texture file header reader
  - ReadData.h was modified significantly to use my preferred array structure
  - d3dx12.h is a common helper library used in tutorials. I'm pretty indifferent to it. You could just copy the chunks of it that each tutorial uses or write your own utilities.
- Textures
  - DDS textures contain all the necessary metadata to set them up in DX12
    - I would recommend using them
  - Cubemaps
    - Nvidia Texture Tools does not export them with the correct array metadata in DDS
      - still good for previewing whether the cubemap faces have been stitched properly
    - should have an array size of 6, as a cubemap is equivalent to an array of six 2D textures
    - can have padding between rows of texels
      - this padding may differ between the CPU and GPU memory representations
    - have to be uploaded to the GPU via an upload heap but must be copied to and read from a different kind of GPU heap
- Things you should absolutely know how to do:
  - create and execute command lists
  - create and set pipeline state objects
  - create and set root signatures
  - tell shaders what their inputs and outputs are
  - create descriptor heaps and descriptors
  - create upload heaps and memcpy to them
  - create render target views and depth stencil views and write to them
  - synchronize resource reading/writing with fences
- Tutorials/Resources
  - Getting started
    - https://www.braynzarsoft.net/viewtutorial/q16390-03-initializing-directx-12 is very detailed
    - https://alain.xyz/blog/raw-directx12 is a bit of a brisker start
    - https://github.com/microsoft/DirectX-Graphics-Samples - the Hello World example code here is a helpful reference
  - Bindless rendering
    - https://github.com/TheSandvichMaker/HelloBindlessD3D12
      - really beautiful example code; can be read top to bottom
      - also has texture stuff
      - also inspired my use of an arena allocator for descriptors
  - Textures
    - https://alextardif.com/D3D11To12P3.html
  - Reference
    - https://microsoft.github.io/DirectX-Specs/
  - My (Jaiden's) development logs
    - I was quite meticulous with them, recording many nontrivial bugs I ran into
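To make two of the notes above concrete (validation layers, and transposing DirectXMath matrices before upload), here is a minimal C++ sketch; it assumes d3d12.h, DirectXMath, and linking against d3d12.lib, with error handling omitted and names illustrative:

#include <d3d12.h>
#include <DirectXMath.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;
using namespace DirectX;

// Enable the debug (validation) layer BEFORE creating the device;
// invalid configurations of DX12 "objects" then get named directly in
// the debug output instead of failing mysteriously later.
ComPtr<ID3D12Device> CreateDeviceWithValidation() {
#if defined(_DEBUG)
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
        debug->EnableDebugLayer();
#endif
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    return device;
}

// DirectXMath builds row-major matrices, while HLSL stores matrices
// column-major by default, so transpose once, right before copying
// into the mapped constant buffer.
void StoreForGpu(XMFLOAT4X4* dst, FXMMATRIX worldViewProj) {
    XMStoreFloat4x4(dst, XMMatrixTranspose(worldViewProj));
}

A column-major math library (see the advice above) removes the need for that transpose entirely.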
Asset Pipeline
Our visual asset pipeline was inspired by Christoph Peters's series of blog posts about his toy renderer. The idea is to use Blender source files and Blender's Python API to write binary data files containing data in the exact format the GPU expects. This means Python handles most of the data transformations before the application even runs, instead of C++ parsing plain text or transforming data at load time. John edited our source Blender file and uploaded it to Google Drive with packed textures.
The first script, exporter.py (371 LOC), writes vertex positions, vertex normals, texture coordinates, triangle material IDs, materials, and skinning data (if applicable). We did not use index buffers, for simplicity and cache locality. The script also writes textures into .png files by traversing each material's node graph. The exporter produced one file for the scene and one file for each skinned character.
The .png texture files were converted into the .dds format, which can be directly accessed by the GPU and can contain mip-maps. I used the Nvidia Texture Tools Exporter, which comes with a command-line utility for batch processing, while Andrew used DirectXTex's texconv. AMD's Compressonator could not be installed on the lab machines. I also considered using bc7enc and its cousins, but they don't generate mip-maps. Base color textures were compressed with the BC7 codec. Only the red and green normal map channels were stored, in BC5, with the blue channel reconstructed in the pixel shader. Roughness and metallic maps used BC4. The HDR lightmap was compressed with BC6 (its HDR-ness "just works" in DX12). Texture compression alleviated file size issues in our asset pipeline: most compressed textures were the same size as their source textures while containing mip-maps down to 1 pixel, and our source lightmap, originally 800 MB, compressed down to 90 MB (with mip-maps as well).
Lightmap UVs were generated with Smart UV Project with a small margin. Smart UV Project produced better-looking UVs in a few seconds, whereas Lightmap Pack took a few minutes. To ensure the lightmap was HDR and stored in a linear color space, we created the texture in Blender with the 32-bit float option selected. 8K resolution gave us 2.5 mm/texel on the scene floor and took John's RTX 3080 4 hours to bake using 512 samples. Despite using a light portal to improve light-sampling efficiency in our indoor room, the lightmap was still noticeably noisy. Exporting and compressing the lightmap was a manual process. Our single cubemap was rendered in Blender as a six-frame image sequence, assembled into a cubemap using DirectXTex's texassemble program, and mip-mapped using the Nvidia Texture Tools Exporter.
For animation data, animation_exporter.py (70 LOC) records the scene's active animation by writing one transformation matrix per bone per frame into a long buffer. (It also writes the adjugate transpose of these matrices for proper normal vector transformation.) Instead of computing these transformations ourselves, we simply play the scene's active animation and record each bone's pose each frame, allowing us to play animations in-engine without interpolating keyframes. This could not be lumped into the main exporter script because scenes contain multiple animations, and Blender 4.4 does not expose a way to swap them via Python. Our game does not contain many animations, so manually swapping the active animation for each animation that needed exporting was not burdensome. However, this may not be a great approach for animation-heavy games.
There were a few cons to this pipeline. Obviously, it is more upfront work. The programmer implementing it has to worry about Blender's Python API, datatype alignment, and reading buffers at the correct offsets, and it delays other team members from seeing the assets in-game. Additionally, without good logging and error detection, off-by-n errors can cause cascading issues; when developing across multiple machines and updating constantly, these were frequently caused by missing assets. Animations also had to be exported one by one due to limitations of Blender's Python API, and animated meshes could only be exported correctly if the Blend file was saved with their armature in its bind pose. Compressing the 70 or so scene textures initially took about 5 minutes, though textures need only be compressed once; texture compression time could be an iteration-time bottleneck in games where textures are frequently edited. Finally, file corruption hits compressed formats especially hard. This happened twice and was difficult to track down, as my importer did little verification of file validity.
There are also many pros. Having data stored in the exact binary format required by the GPU is really good for developer velocity once implemented. Initial scene loading wasn't in until week 7, and textures took about another week. But animations, a notorious CSE 125 pain point, took about 3 days to get in-engine (though the logic for playing them at the right times took longer); lightmaps, which require an entire separate UV map for every object, took 2 days; and cubemaps took 1 day. Once you have a basic setup, expanding it is easy. The binary nature of the formats meant no text parsing in C++, and wrangling data into the right format in Python is generally much easier than doing so in C++. Our C++ importer was super simple and mostly consisted of getting pointers to each buffer and storing buffer lengths, totaling about 150 lines between meshes and animation. Loading times were also excellent: at demo day, most games, even with low-poly styles, took around 20-30 seconds to load their assets, whereas our game, with PBR materials and 800,000 triangles, loads in about a second. This doesn't matter much for demo day itself, but it is great for iteration time. You will be testing your game a lot, especially if you are working on the graphics side, so shaving 20 seconds off each run adds up.
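As a flavor of how simple the C++ side can be, here is a minimal sketch of such an importer. The header layout here is hypothetical, not our exact file format: read the whole file, read a count, then store pointers into the buffer - no parsing.

#include <cstdint>
#include <cstring>
#include <fstream>
#include <vector>

// Hypothetical layout: [uint32 vertexCount][positions][normals][uvs],
// each array tightly packed floats, exactly as the GPU expects them.
struct SceneData {
    std::vector<uint8_t> blob;         // owns the file contents
    uint32_t vertexCount = 0;
    const float* positions = nullptr;  // 3 floats per vertex
    const float* normals   = nullptr;  // 3 floats per vertex
    const float* uvs       = nullptr;  // 2 floats per vertex
};

bool loadScene(const char* path, SceneData& out) {
    std::ifstream f(path, std::ios::binary | std::ios::ate);
    if (!f) return false;  // log loudly here -- see the cons above
    out.blob.resize(static_cast<size_t>(f.tellg()));
    f.seekg(0);
    f.read(reinterpret_cast<char*>(out.blob.data()), out.blob.size());

    const uint8_t* p = out.blob.data();
    std::memcpy(&out.vertexCount, p, sizeof(uint32_t));
    p += sizeof(uint32_t);
    out.positions = reinterpret_cast<const float*>(p);
    p += out.vertexCount * 3 * sizeof(float);
    out.normals = reinterpret_cast<const float*>(p);
    p += out.vertexCount * 3 * sizeof(float);
    out.uvs = reinterpret_cast<const float*>(p);
    return true;
}

The resulting buffers can then be memcpy'd straight into GPU upload heaps.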
Overall, I think this pipeline was right for our game, though I wish I had done the following things differently:
- Better logging of missing asset file errors
- Better communication with the artist about the restrictions each asset required (e.g., no complex shader graphs, applied transforms for skinned meshes)
- Nicer file sharing than uploading to and downloading from Google Drive (maybe Git LFS or something)
[John] ^^^ Branching from Jaiden’s since our segments of development were so closely tied together.
A lot of my pipeline was built by Jaiden LOL. I liked how modular and low-level he made everything and would definitely recommend that approach for future teams. I used Blender and Procreate for most of the asset creation, and Jaiden made it pretty explicit early on what would and wouldn't be supported, which definitely prevented a lot of troubleshooting down the line. His scripts ran through Blender, so there were no conversions occurring outside of Blender, which definitely made things easier and less prone to corruption. Having everything an asset needs - texture, model, animation, material, etc. - export in one go is quick and intuitive. For Jaiden's specific pipeline, the only real restrictions were in the material graph, since only specific portions of the BSDF material are supported, and we chose to forgo alpha as well. Even with just those, it's enough to make everything look good, especially since normal maps were implemented, which made "looking good" really easy. By taking a large brunt of the work, Jaiden's pipeline made my job significantly easier.
[Jerry]
At the beginning, graphics can be a big blocker for other parts of the game because it takes a while to set up, which is why we used a simple game library called raylib to develop game logic early on. The library takes in models and position vectors to display in a window, which makes it easier to implement basic game logic such as movement and collision detection before the real graphics are set up. What we did was essentially take one of the example scripts and add logic (movement and collision) on top of it to visualize things.
[Will]
Adding on to what Jerry said, raylib is an awesome starting point in the first few weeks because we could get a general sense of how collision logic should work and verify things like bounding boxes visually. However, raylib is not exactly the same as DX12 - for example, the coordinate systems differ - so we were very careful when integrating our results into DX12.
10. For those who used a networking library (e.g., RakNet or Boost), a physics library (e.g., Rapier or Bullet), an audio library (e.g., SFML or SoLoud), or a GUI library (e.g., imgui or nanovg), which libraries did you use and would you use them again if you were starting over knowing what you know now? Describe any lessons you learned using it (problems that you had to troubleshoot and how you addressed them) for future groups who may use it. If you did not use a library for any of those modules, judging from the experiences of the groups that did, would you have used it in retrospect?
We used the FMOD library to create the audio engine, which was actually fairly simple to work with. The hardest part was setting up the development environment, since there wasn't much documentation to work from, but a few blog posts in particular really helped with the process. I'll leave them here for future reference:
Making a Basic FMOD Audio Engine
How to set up the FMOD API in Visual Studio
Some of the issues involved the blog posts themselves. For example, we had to paste fmodL_vc.lib and fmodstudioL_vc.lib into the folder with our exe - a step mentioned only as an alternative in the second blog post - even though we believed we had followed all the other steps. Another issue was with the audio engine blog post, which was a little outdated and wasn't always consistent in its code. None of it was too hard to debug, especially since I wrote the code myself based on the tutorial as I understood it instead of copy/pasting, but it did get in the way.
At the end of the day, FMOD was extremely simple to use, and I'd do the same thing if I took the class again. It simplified the audio process substantially, and I wouldn't change the plan without a good reason. I'd absolutely choose FMOD again - I can't see another alternative being much easier to use (although it's always possible!).
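For reference, here is roughly how little the FMOD Core API needs once the headers and libraries are set up (a minimal sketch; the file name is a placeholder and error checks are omitted):

#include <fmod.hpp>  // FMOD Core API

int main() {
    FMOD::System* system = nullptr;
    FMOD::System_Create(&system);                 // create the system object
    system->init(32, FMOD_INIT_NORMAL, nullptr);  // up to 32 channels

    FMOD::Sound* music = nullptr;
    system->createSound("theme.mp3", FMOD_LOOP_NORMAL, nullptr, &music);

    FMOD::Channel* channel = nullptr;
    system->playSound(music, nullptr, /*paused=*/false, &channel);

    // Call update() once per game tick so FMOD can do its bookkeeping:
    // while (running) system->update();

    music->release();
    system->release();
    return 0;
}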
11. If you used an implementation language other than C++, describe the environments, libraries, and tools you used to support development in that language. What issues did you run into when developing in that language? Would you recommend groups use the language in the future? If so, how would you recommend groups best proceed to make it as straightforward as possible to use the language? And what should groups avoid?
[Jaiden] I used Python for scripts that export Blender file data and scripts that run command-line utilities to compress textures. Blender ships with its own Python interpreter that includes NumPy but is missing many other nice libraries like Pillow. It is impossible to modify this environment without admin permissions, so I stuck to using it to export scene data and write intermediary .png texture files. A script can be run with Blender's Python interpreter like so:
blender scene.blend --background --python script.py
I then used a separate script to compress the textures by calling command-line utilities. I used the uv package manager to set up a virtual environment, which I used to run the compression scripts and to write the export scripts outside of Blender's primitive text editor. I installed the full bpy library (though fake-bpy-module would have worked too), which requires pinning a very specific Python version. Overall, I think Python is great for data wrangling, and the uv package manager coupled with virtual environments has solved dependency hell for me.
[Jerry]: We used Python to write scripts to extract bounding boxes from Blender. The code is run inside the Blender environment. I would recommend it because it is the default supported language for Blender scripting.
12. How many lines of code did you write for your project? (Do not include code you did not write, such as library source.) Use any convenient mechanism for counting (e.g., scc), but state how you counted.
6256 lines, counted manually by looking at the files (.cpp, .h, shaders, and any .py scripts that we wrote). We didn't have too many files, so this wasn't too bad to do.
13. What lessons about group dynamics did you learn about working in such a large group over an extended period of time on a challenging project?
[Andrew] Take the initiative - whenever you're uncertain about roles and responsibilities, being active and volunteering to do it is a good way to get everything kickstarted.
[John] It’s important to get everyone on board with an idea before you go and make the product. I think because we were all passionate about some aspect of the game, it made the need to motivate people pretty much nonexistent. Everyone being on board means the product will almost always turn out more polished.
[Alex] Communication. Everyone had different schedules so it was really important to keep each other updated so we were always making progress. I had to work on the lab computers but honestly it was really beneficial because then I was always in the lab, could discuss things with my other teammates, and get help quickly.
One other thing that helped was that we designed a lot of our code with other people in mind. The networking infrastructure was extensible, making it easy to add new packets. When we originally developed movement we didn't have 3D cubes yet, but we implemented the 2D movement and collision in a way that made it easy to extend to 3D. The shop/powerup system was designed so that it was straightforward to add new powerups without breaking everything. Overall, this allowed our team to make really quick iterative updates and a lot of progress towards the end without too much hardship.
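As a sketch of what that powerup extensibility could look like (a hypothetical interface, not our exact code): each powerup implements one small interface, so adding a new one never touches the existing ones.

#include <memory>
#include <string>
#include <vector>

struct Player;  // defined elsewhere in the game

// Hypothetical base class: each powerup self-describes and applies its
// own effect, so the shop can list and sell them generically.
class Powerup {
public:
    virtual ~Powerup() = default;
    virtual std::string name() const = 0;
    virtual int cost() const = 0;
    virtual void apply(Player& p) = 0;
};

class SpeedUp : public Powerup {
public:
    std::string name() const override { return "Speed Up"; }
    int cost() const override { return 3; }
    void apply(Player& p) override { /* bump p's move speed */ }
};

// Adding a new powerup = one new class + one line here.
std::vector<std::unique_ptr<Powerup>> makeShopCatalog() {
    std::vector<std::unique_ptr<Powerup>> shop;
    shop.push_back(std::make_unique<SpeedUp>());
    return shop;
}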
Also, I don’t know if this is the right place to put this, but I felt very motivated to work on our game because the graphics and art looked so amazing, I wanted to do it justice.
[Jerry] Good group dynamics are based on consistent communication. I think we had good group dynamics because everyone was quite active on Discord.
[Will] Everybody had strong faith in making our game the best we could. In the middle of the quarter especially, when our game was only halfway there, I would sometimes feel lost. But as long as everybody kept pushing hard, we could make it come out just right.
[Chase] I’ve worked in bigger groups to build software projects like this before, but the team here was really cohesive and came together well to all build the game we were excited about. Making sure that everyone has a chance to contribute to the project in ways they’re excited about was important as always.
14. Looking back over the past 10 weeks, is there anything you would do differently, and what would you do again in the same situation?
[John] I would DEFINITELY have tried to take more off Jaiden's plate early on. I ended up with more free time than I expected, since waiting on a lightmap bake was pretty much dead time. Waiting until literally 20 minutes before the presentation to attempt to help with graphics was not as helpful as it would have been if I had been following along with his learning. As for what I would do again: I would work with this team again. It was fun, and I enjoyed working in Visual Studio.
[Alex] In the first couple of weeks, I felt a little lost as to what to start working on and what to do that would be useful for the team at that stage of development. Something that we don’t really learn in courses at UCSD is how to get started on a project (PAs usually have starter code), which definitely contributed to this feeling. I tried to contribute by working on the integration between branches, because I knew that this was something I could do and it would be useful. I think I would try to reach out earlier in the future to be able to contribute more.
[Andrew] I would have helped Jaiden out a lot earlier, maybe around week 3-4. That way, we could have started the integration work on camera logic and 3D rendering a lot earlier, saving valuable time in weeks 5-7.
[Chase] I would’ve committed earlier to spending time at the lab, and preferably spending more time earlier in the quarter. I had a busier schedule than I anticipated, and without set times to work, it took me a while to really figure out a flow that worked well for me.
15. Which courses at UCSD do you think best prepared you for CSE 125?
[Jaiden] For graphics:
- CSE 167: Computer Graphics
  - model, view, and projection matrices
  - transformation matrices
  - hardware rasterization pipeline
  - texture mapping
- CSE 169: Computer Animation
  - animation is typically one of the hardest parts to implement
  - I also wrote my own renderer from scratch for 169, which was good practice
- CSE 160: Parallel Computing (I also tutored this class!)
  - knowing how GPUs work is generally good if you're gonna be programming them and optimizing code for them
  - DirectX is lower level than what you would learn here
- CSE 168: Rendering
  - unless you're doing ray tracing or physically based shading, it is not directly applicable
  - still good practice though!
CSE 167, along with a good understanding of MATH 18: there is a lot of linear algebra in graphics, and these classes are a must if you want to do any graphics work - UI, basic rendering, setting up shaders, etc.
CSE 160, for understanding how memory and data transfer works between the CPU and GPU, as well as good practice on synchronization.
CSE 120, for an in-depth understanding of synchronization if you want to attempt multithreading. Our timer ran on a separate thread from our main server, which caused some bugs initially. If you want to attempt more difficult multithreading (for collision detection or networking), then taking this class will help a lot.
CSE 123, a must if you want to delve into networking and program with raw sockets - there is a lot of knowledge covered about TCP, the main protocol we used in this class. Perhaps more crucially, the class's PAs also give lots of programming practice in C, managing memory and state, which helps a lot with CSE 125 if you choose to use C++.
CSE 110, for the experiences of organizing logistics for a large development team, and dealing with repos with a million branches (luckily we didn't have those!).
[Jerry] CSE 167: Familiarized me with managing floats in C++ so that I could manage game state, positions, etc.
CSE 224: Understanding the basic network structure and optimization at an application layer.
CSE 120: Threads and locks helped me understand and craft an asynchronous timer structure.
[Will] CSE 167: this class introduces some of the most basic concepts of graphics that our game is developed on
CSE 110: this class focuses more on project and group management, which is useful when we need to keep this project on track
CSE 224: this class prepared me well to understand the networking of our game at a higher level. Once our network infrastructure was set up (that part belongs to CSE 123), I was able to develop on top of it easily.
CSE 120: lots of useful OS concepts, and working on Nachos has prepared me for working on complicated projects like this.
CSE 100: very good C++ intro class
[John] uhhh maybe MAE8? It helps with spatial understanding, and some of the features between AutoCAD and Blender are similar
[Alex] CSE 110, 120, 123, 167. I think a lot of the work I did didn’t require knowledge from these classes, but it was very useful for understanding what other team members were doing and being able to work off their code.
[Chase] CSE 110 and CSE 167 were both helpful, one for working in large teams and the other for building complex systems in C++. My music production classes (MUS 173A-B, MUS 176) also mattered a lot, given that I was in charge of the audio for the game.
16. What were the most valuable things that you learned in the class?
[John] A lot of what I learned were relatively soft skills, since I didn't pick up a new technical specialty during the 10 weeks. Because of that, I learned to put my trust in the team that they would finish, which wasn't hard considering everyone was so driven.
[Alex] Similar to John. Learning to work in a team without a lot of outside help was really valuable.
[Will] Being able to work on a big project is valuable, but what's more valuable is working and growing with the team, where there is much more information and communication flowing than just code.
[Chase] Learning to work in a complex system that you might not understand every piece of was extremely important for me personally. The value of building strong APIs for others to use was far clearer for me in this class than it had been before.
17. Please post four final screenshots of your game on your group pages for posterity. I will display them on the group web page.
18. For the pizza celebration after the demos, what did you think about having it in the B270 lab so that people can play each other's games? Should we do it again? Or were you completely exhausted?
[Alex] I liked the pizza celebration, it was cool to play the other games and talk to the other groups, which I hadn’t really done all quarter. At the same time, I was pretty exhausted but I think this is the best time to have a celebration.
[John] Yes, I liked it. I think getting to play the game was a way different experience than watching it played. Seeing old alumni show up to play was also a great time.
[Jerry] I loved it.
[Andrew] Please keep it… It was genuinely lots of fun and a really great pressure vent after the demo. Genuinely one of the best nights in my college experience so far. I will come next year for the demo and the afterparty too :D
[Will] This is awesome. Please keep ordering pizza from Regents Pizzeria :) I also love to try out other groups’ games.
[Chase] I loved it — of course I was tired, but it was so fun getting to play everyone's games, and it would've been our last chance to do it.
19. What advice/tips/suggestions would you give students who will take the course next year?
[John] Find a great team, it makes the whole process way more fun.
[Alex] Work on something that you’re interested in and care about. It makes development a lot more fun.
[Jerry] BE GOOD AT RNG YOUR TEAM. Don't worry if progress starts slow. The base structure takes the most time to build, and development later is much faster and easier than you think.
[Andrew] Start early, start often :). The more work you do at the start of the quarter, the more time you will save later down the line.
20. Do you have any suggestions for improving the course?
[John] I liked the course a lot, and my suggestion is more post course than for the course. I think it’d be cool to formally meet alumni and see the games they’re making/have made or where they are in life. I think they’d have great advice. Maybe add elo? Ranked competitive CSE 125.
[Chase] If at all possible, I’d love a chance to play alumni’s games at the start of the class if any of the old teams had working projects that were easy to set up. Obviously that would take work, but it could give a clearer idea of what we’re about to work on.
21. Any other comments or feedback?
[John] Thank you for the class! It was great fun.
[Alex] Yes, thank you! One of my favorite classes at UCSD. This might just be my own fault because I didn't use it much, but it was a little confusing to navigate the course website.
[Andrew] It was hands down my most memorable and enjoyable experience at UCSD so far. Thanks to everyone who made it a reality!!!