This guide is part of a larger “knowledge dump” and is a work-in-progress. Thanks for your feedback!

### 3D Math

**Prerequisites**: Algebra, Geometry, Trigonometry, and ideally Calculus

**Destination**: Advanced, for the purposes of game dev at least.

Stewart’s Prolific Calculus Books. If you don’t own a Calculus book, buy one! It’s a far better reference than anything on the internet. Also, if you don’t know Calculus, or stopped with Calc I or II, then read this thing, from front to back if you need to. Self-education is a real and invaluable thing, so don’t be scared of it! (And Calc III is when things start getting fun! I regret only getting to Calc III during my final semester at school, because that’s when I really started enjoying math.) At a minimum, you should be able to apply exponentiation and logarithms, and be able to compute derivatives and integrals of single-variable functions. (Bonus points for multi-variable functions.) The ability to work with curves, planes, and volumes is also highly useful throughout game dev, whereas Taylor series and such are more academic than practical. The beauty of this knowledge is that it reveals opportunities when you otherwise couldn’t see them, especially in graphics and physics.

Mathematics for 3D Game Programming and Computer Graphics. (Aka the Big Ol’ Skeleton Book.) At only $30, this book is a complete steal! Knowledge of affine matrix transformations is *crucial* if you’re dealing with, well, matrix transformations, which are quite ubiquitous throughout game dev. Too many crazy bugs I’ve encountered came from people working with affine transforms despite lacking any real knowledge of them. (Somewhat understandable, considering that this information isn’t easily available online.) Otherwise, the chapters on interpolation, curves, and surfaces are incredibly useful in a number of areas. Of course, the other parts are also valuable… but not relevant to this subject. I’ll be duplicating this link a lot.

### Graphics

**Prerequisites**: Knowledge of a preferred, low-level rendering API (I recommend OpenGL), and Calculus

**Destination**: Advanced

#### FUNDAMENTAL GRAPHICS CONCEPTS AND TECHNIQUES

Mathematics for 3D Game Programming and Computer Graphics. (Aka the Big Ol’ Skeleton Book.) At only $30, this book is an absolute requirement if you’re still learning. It covers all the fundamentals you must know and more, and it’s all incredibly valuable stuff. Everything from affine matrix transformations to mesh manipulation, curves, physics, physically based rendering, shadow maps, bumpmapping, and more. You can’t really learn this stuff online, at least not without great difficulty and uncertainty. So, buy it! Then, go ahead and read, at a bare minimum, the chapters elucidating vector math, matrix math, affine transformations, orthographic and projection transforms, frustum culling, fog, Blinn-Phong/Gouraud shading, and billboards/polyboards. It’s a lot to process, I know… but it’s mandatory, if not *super interesting!* Apart from the matrix math, you don’t need to memorize all the details, as you can always fall back on the book as a reference. Instead (and this is critical), *focus on understanding the concepts*. I realize not everyone is in the habit of learning this way, although they should be. I’ll mention other required reading from the book as we go along.

Physically Based Rendering. If you’re very interested in raytracing (which I’m not actually that intimate with), this is the gold standard, or so I’m told. This isn’t entirely applicable to games, though it would likely improve your understanding of the physics and math behind rendering dramatically. Likely a must-read if you’re wanting to eventually work at Pixar or in CGI. Otherwise, I’m not going to refer to it; just thought you’d like to know of it.

Some advice from personal experience: affine transforms are wonderful and you should use them often. They make your code far more legible, and can be much less computationally expensive than applying a sequence of translations, rotations, and scales to a vector manually. Additionally, you should name them in terms of what they do, not by the standard “projection/view/model” nonsense! I.e., *viewToClipTransform*, or *entityToWorldTransform*. Also, make sure you know the difference between row and column major matrices, and always try to ascertain whether whatever reference material you’re reading is working in row or column-major. Most game engines are in row-major, whereas academic and mathematical texts are generally in column-major. OpenGL and related libraries are also generally in column-major (although this is less relevant in retained mode.) The Big Ol’ Skeleton Book is consistently in column-major.
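To make that naming advice concrete, here’s a minimal Python sketch of composing clearly named, column-major affine transforms. (The transform names and helper functions are my own illustrative inventions, not from any particular engine or library.)

```python
def identity():
    return [[float(i == j) for j in range(4)] for i in range(4)]

def translation(tx, ty, tz):
    # 4x4 column-major translation: offsets live in the last column
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def scaling(s):
    m = identity()
    m[0][0] = m[1][1] = m[2][2] = s
    return m

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, p):
    # p is a homogeneous point [x, y, z, 1]
    return [sum(m[i][k] * p[k] for k in range(4)) for i in range(4)]

# Name transforms by what they *do*, not "model"/"view"/"projection":
entity_to_world = mat_mul(translation(10.0, 0.0, 0.0), scaling(2.0))
world_to_view = translation(0.0, 0.0, -5.0)

# In column-major convention, composition reads right-to-left:
entity_to_view = mat_mul(world_to_view, entity_to_world)

# Scale, then translate into the world, then into view space:
p_view = transform(entity_to_view, [1.0, 0.0, 0.0, 1.0])
```

Reading *entityToView = worldToView · entityToWorld* aloud makes the composition self-documenting in a way that “modelview” never is.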

Now is a good time to ensure you’ve read up on the following, basic sub-topics:

- Polygon Meshes on Wikipedia. This is how your geometry data will be represented in any graphics application, and submitted to the GPU. Note that rendering data such as positions, normals, and texture coordinates are stored on a per-vertex basis, and that this sequence of vertices will be submitted to the GPU along with parameters specifying how those vertices should be indexed and interpreted to form faces or lines for rendering. In OpenGL, the arguments *GL_TRIANGLES*, *GL_TRIANGLE_STRIP*, and *GL_TRIANGLE_FAN* represent fairly standard methods of organizing your meshes’ vertices into faces. *GL_LINES* is used when you’d simply like a series of lines, as opposed to triangle faces. (Note that OpenGL has a separate and more convenient mechanism for drawing all of your meshes as wireframes, if that’s your goal.)
- Frustum Culling. See also: the Skellington Book. This is how geometry which doesn’t intersect the player’s view is “culled,” or prevented from being rendered. This is a ubiquitous and *major* cost-saving technique, so learn it well. The Skellington Book describes methods of computing intersection with various simple shapes. See also: Inigo Quilez’s advice on frustum culling very large objects.
- Z-Buffering/Depth Testing on Wikipedia. This is how graphics hardware is able to determine which geometry is in front of other geometry, independent of the order in which the geometry is drawn.
- Alpha Blending on Wikipedia. This is how graphics hardware is able to render partially transparent meshes. Note that this technique *is* order dependent, specifically because your translucent geometry must be drawn from back-to-front, typically in one pass after rendering your opaque geometry.
- Gouraud Shading and the Blinn-Phong Shading Model on Wikipedia. This is how reflected light was calculated in almost all games prior to 2010, when physically based rendering (PBR) rose to popularity in the games community. Blinn-Phong is still used in smaller titles today, but given PBR’s relatively major aesthetic improvement, it will be worth transitioning to later.
- Fog: Linear and Exponential. You’ll most likely be implementing this manually. Consider each fog factor as the weight of a linear interpolation between the color of a surface’s reflected light and the fog’s color. The purpose of this is typically to obscure the “popping” of geometry at the far distances of your game’s scene, if not simply to have fog for aesthetic purposes. “Atmospheric Scattering” is a more modern and physically accurate approach to modeling obscuration by a planet’s atmosphere.
- Billboards and Polyboards. (See the Skellington Book.) This is how particles, foliage, and “beam” effects are typically rendered. They are quite simple and have many, many uses.
- Bumpmapping. (See the Skellington Book.) This allows the surface normals of your geometry to vary at a finer granularity between individual vertices, using information stored in textures. The math here can be a little complicated.
- Transforming normals. (I.e. the creation and use of *conormal transforms*.) This is a necessary way to manipulate your affine transforms so that they may operate on *directions*, rather than mere points.
- Rim Lighting. (Notice the light around the periphery of Mario’s body.) Basically, rim lighting is the addition of light as a function of the dot product of the normal and the view direction. It’s a cheap but useful trick!
- Quaternions. These are an alternative, four-variable way of representing rotations, generally used when 3×3 matrices are too large for some use case. They’re a bit complicated, but you’ll inevitably wind up needing them or working with them, so study them carefully. The most important operations are multiplication, conjugation, vector transformation, and conversion between the various other forms of rotation. This website is actually an invaluable resource with respect to rotations and affine transformations, their various representations, and methods of converting between them.
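A couple of the items above reduce to a few lines of shader arithmetic. Here’s a CPU-side Python sketch of the fog and rim-lighting formulas as described (function names are mine; in practice this math lives in your fragment shader):

```python
import math

def linear_fog_factor(dist, start, end):
    # 0 = no fog at/before `start`, 1 = full fog at/after `end`
    return min(max((dist - start) / (end - start), 0.0), 1.0)

def exp_fog_factor(dist, density):
    # approaches 1 as distance grows; `density` controls the falloff
    return 1.0 - math.exp(-density * dist)

def apply_fog(surface_rgb, fog_rgb, f):
    # lerp between the lit surface color and the fog color by fog factor f
    return tuple(s * (1.0 - f) + g * f for s, g in zip(surface_rgb, fog_rgb))

def rim_light(normal, view_dir, power=2.0):
    # light grows as the surface turns away from the viewer
    ndotv = max(sum(n * v for n, v in zip(normal, view_dir)), 0.0)
    return (1.0 - ndotv) ** power
```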

Shadertoy. Now that you know some graphics basics, it’s time to screw around with shaders via a 100% web-based GLSL implementation! Go nuts, have fun! Know that each shader is rendering over a full-screen quad (i.e. a mesh composed of exactly four vertices.) The next resource will prove most inspiring and useful in this context…

Inigo Quilez’s Articles. There are some insanely valuable gems in here! Distance Functions in particular is a must-read. Inigo also has some excellent shader demonstrations at Shadertoy: more specifically, Distance Functions Examples, Improved Texture Filtering, and GLSL Perlin Noise. View his Shadertoy profile for some seriously awe-inspiring demos. (Note: *demo scene* is the keyword you’ll want to use to find more of these types of shaders. There’s some pretty incredible stuff here, and likely some untapped potential in the field of game dev.)

Perlin Noise. The holy grail of graphics-related techniques! Also useful when doing almost any procedural generation! Implement it! See Inigo’s Perlin Noise shader if you’d like a wonderful in-shader implementation. Note that there’s a somewhat improved (but less well established) version entitled Simplex Noise, whose 1D and 2D variants are free, whereas 3D variants and higher are patented as of this writing. To elaborate on their discrepancies: Perlin noise interpolates between random gradients organized at fixed intervals along each axis to derive a noise value, the distribution of gradients appearing square or cube-like. Simplex noise, however, distributes gradients in a *simplex* pattern, which makes the algorithm inherently faster and less artifact-prone than its predecessor. Simplices are fundamentally simpler shapes than cubes; now might be a good time to read up on them, as they have some abstract utility!
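To make the “random gradients at fixed intervals, smoothly interpolated” description concrete, here’s a toy 1D gradient-noise sketch in Python. It’s a simplified stand-in for real Perlin noise (which uses a permutation table and multi-dimensional gradient vectors), but the lattice-gradient-plus-fade structure is the same:

```python
import math

def _hash(i):
    # cheap integer hash standing in for Perlin's permutation table
    return (i * 2654435761) & 0xFFFFFFFF

def _gradient(i):
    # pseudo-random 1D gradient in {-1, +1} at integer lattice point i
    return 1.0 if _hash(i) & 1 else -1.0

def _fade(t):
    # Perlin's quintic fade: zero 1st and 2nd derivatives at t=0 and t=1
    return t * t * t * (t * (t * 6 - 15) + 10)

def gradient_noise_1d(x):
    i0 = math.floor(x)
    i1 = i0 + 1
    t = x - i0
    # each lattice gradient's contribution, measured from its own corner
    g0 = _gradient(i0) * t
    g1 = _gradient(i1) * (t - 1.0)
    f = _fade(t)
    return g0 * (1.0 - f) + g1 * f
```

Note that the noise is exactly zero at every lattice point, which is characteristic of gradient noise (as opposed to value noise).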

Before proceeding, now is the time to either acquire or implement a vector math library that, if in C or C++, ideally utilizes SIMD. (See General Programming for details on SIMD.) Decide now if you’d like it to be row or column-major.

Also, you should implement a very basic game engine so you can start experimenting with things, or utilize an existing engine that provides access to low-level control. Unity, UE4, and maybe JMonkey are probably viable options for learning, but I would also heartily recommend my guide below for learning OpenGL and implementing a flexible engine around it, since regardless of whatever engine you use, you’re going to have to master the low-level details that it attempts to obfuscate (especially if you’re wanting to work as a graphics professional in the industry.) Regardless, the most important thing is that you have low-level access to shaders. You may appreciate the level of control and the learning curve presented by implementing your own engine from scratch, like yours truly, or you may simply want to move on to the quickest and dirtiest solution possible… just be aware that the short-term benefit of doing so would be exceeded by the long-term costs, in my opinion. I’d love to see some discussion on this subject in the comments.

GPU Gems. Go ahead and skim a few chapters of this now, and maybe read about some topics that pique your interest. You’ll be coming back to this a lot, and equipping yourself with an arsenal of standard techniques will prove invaluable as you start working in the graphics realm. Note that these methods, while being quintessential in the graphics community, can be somewhat dated, and you should supplement topics of interest with extensive Googling. (You’re going to start running into academic papers at this time. Become proficient at reading them!) It’s also worth noting that in particular, the use of Deferred Shading (which we will get to) will substantially change and/or improve the older techniques, since deferred shading decouples geometry information from the vertex data used to generate it.

#### SHADOW MAPS: THE ULTIMATE TEST

I now recommend that you start reading about shadow maps, a ubiquitous technique used to render shadows, and attempt to implement it; if you’re smart enough to understand and implement it, then you’re smart enough to do anything! And then you’ll know with certainty whether or not you’ve got the chops to be a graphics programmer… and then we can move on to less intimidating things (mostly). The Big Ol’ Skellington Book would prove invaluable here, although Wikipedia’s article is probably a better introduction. Also, Common Techniques to Improve Shadow Depth Maps will aid you in your quest to understand shadowmaps, and help you tackle many of the visual artifacts that you’ll inevitably encounter. (It’s a very artifact-y technique.) Do not be discouraged! Fix those visual artifacts! Implement for yourself a beauteous shadowmap! And definitely skim this resource before attempting to implement one. A potentially larger concern than the shadowmap itself, however, is implementing multiple frustum culling passes over your game’s geometry: one for the user’s view, and another from the shadow-casting light’s perspective. Think very carefully about this problem, and implement as ideal a solution as possible, because this can turn your entire culling pass into a twisted mess if you’re not careful. Consider that in the future, you may have *an arbitrary number* of shadow-casting lights in your scene! (That means an arbitrary number of passes!) Try to reuse VAOs between the multiple passes, and make sure the innards of your frustum culling pass never assume that there will be only one pass per render frame. Finally, once you’re done, add some Percentage-Closer Filtering to make your shadow-mapping ultra pristine. As always, be sure to spend some time refactoring and polishing this code so you can reuse it in future projects. Congratulations! If you’ve implemented all of that successfully, then you are officially a talented person.
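Once the depth map is rendered from the light’s perspective, the core depth comparison plus Percentage-Closer Filtering is conceptually small. Here’s a hedged Python sketch over a plain 2D grid of depths; in a real engine the map is a depth texture sampled in the fragment shader, and the bias value needs per-scene tuning:

```python
def shadow_factor_pcf(shadow_map, u, v, fragment_depth, bias=0.005):
    """3x3 percentage-closer filtering over a depth map (2D list of [0,1] depths).

    (u, v) are integer texel coordinates of the fragment as seen from the
    light. Returns the fraction of samples that are lit: 1.0 = fully lit,
    0.0 = fully shadowed, in-between values soften the shadow edge.
    """
    h, w = len(shadow_map), len(shadow_map[0])
    lit = 0
    samples = 0
    for dv in (-1, 0, 1):
        for du in (-1, 0, 1):
            # clamp neighboring taps to the map's edges
            su = min(max(u + du, 0), w - 1)
            sv = min(max(v + dv, 0), h - 1)
            samples += 1
            # the bias guards against "shadow acne" self-shadowing artifacts
            if fragment_depth - bias <= shadow_map[sv][su]:
                lit += 1
    return lit / samples
```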

#### INTERMEDIATE GRAPHICS TECHNIQUES

And if you’ve gotten this far, I’d also say that you’re officially a graphics programmer of intermediate skill. Instead of holding your hand through what was once a painful learning curve, I’m now going to list some categories containing topics of interest, the order of topics in each category being mostly irrelevant. However, I’d recommend that you learn the majority of topics in each category before moving to the next category or independently listed resource below.

- Subsurface Scattering and Depth Maps. See also: the GPU Gems chapter. Note that the *wrapped lighting* formula provided by GPU Gems is widely used, and if you don’t already have an approximation to SSS in your engine, it’s easily worth including. (You’ll need to normalize it, however, to achieve the best results.)
- Volumetric Shadows. (Read the section from the Skellington Book first.)
- Volumetric Fog and Light Scattering. A decent overview!
- Font Rendering, Signed Distance Fields.
- Basic MSAA in OpenGL. (Note, however, that we’ll have to do the anti-aliasing ourselves if and when we swap to a deferred shading model.)
- Anisotropic Filtering. Basic hardware-level feature, which is easily added via an extension in OpenGL.
- Skeletal Animation on Wikipedia, and Dual Quaternion Skinning (a standard and high quality approach to skeletal animation).

#### HIGH DYNAMIC RANGE RENDERING

Now that we’ve learned some useful, more complicated graphics techniques, let’s transition to the world of *post-processing!* Shaders become increasingly important here, as we’re basically operating on individual pixel and depth values rather than vertices and their resulting fragments in the traditional sense. Hence, these techniques have the useful property of being largely decoupled from our geometry data… wonderful! After this, we’re going to learn how to further decouple our geometry from some existing techniques, resulting in cleaner code…

- High Dynamic Range (HDR) Rendering. Note that 16-bit floating point textures are ideal when implementing this; 32-bits is overkill. Remember that GPUs have limited VRAM for our textures.
- Tonemapping Operators (ways of mapping HDR values to LDR values). Uncharted 2’s method is a pretty great operator to choose.
- HDR Bloom, and a fantastic, simple way of implementing it.
- Depth-of-Field. There’s a recent paper detailing some standard methods of achieving this, and there’s also an older publication listing many of the existing techniques. Bokeh is a standard, modern method.
- Motion Blur. There’s a recent, easy tutorial on implementing this, and the previous paper I mentioned also discusses some modern approaches.
- Light Scattering as a Post-Process on GPU Gems. A.k.a. light shafts, godrays, etc.
- Toon Outlining. Sobel-edge filtering in particular is standard, though it’s also common to re-draw the mesh using thick lines, face winding order reversed, and color set to black. If you use Sobel, you’ll have to think carefully about your parameters to the edge detection: use of depth is standard, but the use of normals for more fine-grained addition of edges can be really tricky, especially when combined with bumpmapping, in my experience.
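For reference, the Uncharted 2 operator mentioned above is John Hable’s filmic curve. Here’s a Python sketch using the constants he published; the exposure and white point are scene-dependent knobs you’d tune yourself:

```python
def _hable(x):
    # Hable's published curve constants (shoulder, linear, toe terms)
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def tonemap_uncharted2(hdr, exposure=2.0, white_point=11.2):
    # map an HDR luminance value into [0, 1], normalized so that
    # `white_point` maps exactly to 1.0
    return _hable(hdr * exposure) / _hable(white_point)
```

In a shader you’d apply this per color channel (or to luminance) right before gamma correction.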

#### DEFERRED SHADING

Alright, so now that we know about HDR rendering and post-processing, it’s time to learn about an advanced, widely used rendering model entitled *deferred shading*. Essentially, this involves writing our geometry data into a set of separate textures (known colloquially as *the g-buffers*), such as albedo, position, normal, and material data, then recombining that geometry data into a final color and depth in one final pass by applying our lighting model. The advantages of this are numerous: our geometry-dependent shaders become nice and decoupled from our lighting and shadowing code; we get a higher, more stable framerate, as our expensive lighting calculations no longer depend on the amount of geometry being rendered; and we can render a *large number of lights*, thanks to the fact that our lighting code no longer depends on our scene geometry. There’s a lot of information out there on this, and many different ways of implementing it, so you’ll have to decide for yourself if your engine would benefit from this rather than a *forward shading* model (what we’ve been doing up until now), and how best to do things on a case-by-case basis. (The largest negatives are major difficulties with alpha-blending and the major cost of texture-reads on the GPU.)

- Deferred Shading on Wikipedia.
- How to Structure the G-Buffer, with HLSL code for XNA. (Note that again, 16-bit floating point textures are ideal, whereas 32-bit textures are overkill.)
- Compact Normal Storage. (This will help you squeeze as much into your g-buffer as possible.)
- Deferred Shading in Battlefield 3. Also includes some fancy bits on Tile-Based Deferred Rendering, which you might consider implementing after you get a basic deferred setup working. Lots of other good info here. More on BF3’s Tile-Based Shading.
- Tile-Based Deferred Rendering, by Intel. Again, I wouldn’t recommend implementing this until later, but it does include some useful perspectives on deferred shading, and is otherwise pretty fascinating.
- Ambient Occlusion on GPU Gems, Alchemy’s Ambient Occlusion Method, and Deinterleaved Texture Sampling from Nvidia. AO is a way of solving the “global illumination” problem at a small scale. Alchemy’s particular approach to AO is an established, straightforward, and beautiful method, so I whole-heartedly recommend that you implement it. (Please let me know of any better approaches to AO in the comments below.) The final link describes the process of “de-interleaving” your sampling while computing AO as an optimization technique. (This is essentially because a naive AO implementation gathers texture samples apparently at random, although its sampling pattern can be decomposed into several sequential access patterns. Again, recall that texture reads are expensive!)
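As a concrete example of compact normal storage, here’s a Python sketch of octahedral normal encoding, one well-known scheme for packing a unit normal into two components. (I’m not claiming it’s the specific scheme the linked article recommends; it’s one of several in common use.)

```python
def _sign_not_zero(x):
    return 1.0 if x >= 0.0 else -1.0

def oct_encode(n):
    # project the unit normal onto the octahedron |x|+|y|+|z| = 1,
    # then fold the lower hemisphere over; output is two values in [-1, 1]
    s = abs(n[0]) + abs(n[1]) + abs(n[2])
    x, y = n[0] / s, n[1] / s
    if n[2] < 0.0:
        x, y = ((1.0 - abs(y)) * _sign_not_zero(x),
                (1.0 - abs(x)) * _sign_not_zero(y))
    return x, y

def oct_decode(e):
    x, y = e
    z = 1.0 - abs(x) - abs(y)
    if z < 0.0:
        # undo the lower-hemisphere fold
        x, y = ((1.0 - abs(y)) * _sign_not_zero(x),
                (1.0 - abs(x)) * _sign_not_zero(y))
    length = (x * x + y * y + z * z) ** 0.5
    return x / length, y / length, z / length
```

Quantize the two encoded values to 8 or 16 bits each and you’ve freed up a g-buffer channel.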

#### LIGHTING AND PHYSICALLY-BASED RENDERING

- An Introduction to PBR, by David Rosen. Isn’t this guy great?
- Absorb the chapter from the Skellington book discussing lighting and, more specifically, *physically based rendering*, if you haven’t already. The book describes caching BRDF information into textures, and although this can save GPU cycles and therefore allow a more expensive formulation, I don’t believe it’s warranted until you’ve computed your BRDF entirely in-shader and committed yourself to it as your final solution. Even then, it may not be necessary.
- Disney’s BRDF Explorer. This tool allows you to quickly iterate on the lighting model of any application you may be developing, and is also good for learning about existing BRDF models, as it comes pre-packaged with them. Go ahead and download and install the BRDF Explorer. Now, it’s time to learn a lot more about specific BRDF models than the book covered! Don’t worry, you don’t need to remember the math precisely for this, just the three major terms of any BRDF: the Microfacet (D)istribution Factor, the (F)resnel Coefficient, and the (G)eometric Attenuation Factor. Now, open the file *compare.brdf* in both the tool and a text editor of your choosing. Select the *lit object* tab at the bottom of the tool, then select *no IBL* in the dropdown beneath the picture of the teapot. Now check *useAshikmanShirleyG*, and set the parameter *F0* to 1.0 (since we’re exaggerating the BRDF for demonstration purposes), and voila! You are now ready to play around with parameters and witness the beauty that is BRDF-itude. Specifically, move *specWidth* around and notice the impact it has on the teapot. Pretty cool stuff, eh? In the real world, you’d set *F0* to some constant based on experimentation for whatever aesthetic you’re trying to achieve (0.04 is a nice realistic default, though). Once you’ve fiddled with the checkbox options and selected a lighting model that you’re happy with, you can just mimic the code for your selected options from *compareMore.brdf* into your game engine, and then you’re good to go. Of course, piping the necessary parameters to your lighting shader is an issue all of its own, but if you’ve mastered the OpenGL section or are using a familiar game development suite, then you’ll know how to accomplish this. If you were working in a deferred lighting model, things would be even *more* complicated, since you’d have to pack and unpack these parameters from the g-buffer…
- As of this writing, the latest and greatest BRDF formulation for AAAs is an optimized combination of the GGX distribution and Schlick’s approximation of Smith’s geometric attenuation factor.
- Siggraph 2013 Course on PBR. Pay special attention to the AAA game post-mortems! Even more particularly, Unreal’s post-mortem!
- Cel Shading. This can be surprisingly difficult to get right, so beware! Dividing between two or three colors is standard, though I recommend only two. I’ve heard of some people incorporating an art-team-provided color table, indexed to arrive at each cel’s blended color, but this seems a bit overkill to me — just darken the texture by some procedural quantity, fine-tuned by art. Note that all of your lighting calculations will likely need to take the cel-shading into consideration (e.g., environment/ambient lighting, ambient occlusion, and other forms of global illumination), or else you’ll arrive at some odd “partly-cel-shaded” results. Getting this right will involve a lot of experimentation.
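To ground the GGX-plus-Schlick-Smith formulation mentioned above, here’s a scalar Python sketch of the three BRDF terms and the standard Cook-Torrance assembly, parameterized by the usual dot products. Treat it as illustrative; production shader code needs clamping, vectorization, and tuning:

```python
import math

def d_ggx(n_dot_h, alpha):
    # (D) GGX / Trowbridge-Reitz microfacet distribution
    a2 = alpha * alpha
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def f_schlick(v_dot_h, f0):
    # (F) Schlick's approximation of the Fresnel coefficient
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def g_smith_schlick(n_dot_v, n_dot_l, alpha):
    # (G) Schlick's approximation of Smith's geometric attenuation
    k = alpha / 2.0
    g1 = lambda x: x / (x * (1.0 - k) + k)
    return g1(n_dot_v) * g1(n_dot_l)

def specular_brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0=0.04):
    # standard Cook-Torrance assembly: D * F * G / (4 (n.l)(n.v))
    alpha = roughness * roughness
    return (d_ggx(n_dot_h, alpha)
            * f_schlick(v_dot_h, f0)
            * g_smith_schlick(n_dot_v, n_dot_l, alpha)) / (4.0 * n_dot_l * n_dot_v)
```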

#### SPHERICAL HARMONICS

This is some pretty sophisticated, high-order math, but it has a *surprising* number of applications in graphics. In particular, the computation and use of *irradiance maps*, a plethora of recent techniques from academia, and some useful applications you can invent for yourself would require SH. Essentially, SH are used to compress any function over the surface of a sphere into a handful of variables (generally nine, twenty-seven if the output is a color), which allows you to store a *ton* of information in some seemingly unlikely places. E.g., functions compressed via SH can be stored as a set of shader constants, or even on a *per-vertex* basis! (Per-vertex SH have some applications in global illumination.) Now, here are some useful resources:
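To ground the “nine variables” claim, here’s a Python sketch of the standard real spherical-harmonic basis for bands 0 through 2, plus the dot-product reconstruction. The constants are the usual normalization factors; a color signal simply stores three such coefficient sets, hence twenty-seven values:

```python
def sh_basis_9(d):
    """Real spherical-harmonic basis (bands 0-2) at unit direction d = (x, y, z)."""
    x, y, z = d
    return [
        0.282095,                        # l=0
        0.488603 * y,                    # l=1, m=-1
        0.488603 * z,                    # l=1, m=0
        0.488603 * x,                    # l=1, m=+1
        1.092548 * x * y,                # l=2, m=-2
        1.092548 * y * z,                # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),  # l=2, m=0
        1.092548 * x * z,                # l=2, m=+1
        0.546274 * (x * x - y * y),      # l=2, m=+2
    ]

def sh_eval(coeffs, d):
    # reconstruct the compressed spherical function in direction d
    return sum(c * b for c, b in zip(coeffs, sh_basis_9(d)))
```

Projection is the reverse: integrate (in practice, sum over samples of) your spherical function against each basis function to obtain the nine coefficients.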

- Stupid Spherical Harmonics. A great introduction to and explanation of SH. Pretty complicated, so make sure you understand the fundamentals of *what* SH are and how they’re computed before moving on.
- SH Rotations, by Filmic Worlds. Necessary assuming the relative space of your SH ever, uh, rotates.
- Wikipedia on SH. Useful as a reference, e.g. for the formulae for each band. Also useful if you’re either *very* mathematically minded or hail from academia, as most math-related Wikipedia articles tend to be.

#### ENVIRONMENT MAPPING

- Environment Mapping Overview. A really nice, simple overview of environment mapping, and the distinction between radiance and irradiance maps. Essentially, environment mapping is a ubiquitous method by which *ambient light* is represented in an application. This resource also briefly touches on how to compute an irradiance map using spherical harmonics. Note that *diffuse lighting* is essentially the dispersion of *irradiant* light, and *specular lighting* is essentially the reflection of *radiant* light.
- Image-Based Lighting Approaches and Parallax-Corrected Cubemap. Really complicated, but a rare gem that gives basic insight into how to implement *environment probes* for ambient lighting and blend between them. Note that you could have just one, position-independent probe for your entire scene, avoiding the interpolation problem altogether.
- Irradiance Environment Mapping using Spherical Harmonics. This is how you should derive irradiance maps, no question. It’s *orders of magnitude* faster than any other method! This paper has a whopping *500+ citations*!
- Environment Mapping with PBR in Black Ops II. A practical post-mortem on PBR with environment maps. Has some other PBR bits included.

### OpenGL

**Prerequisites**: Understanding of *fundamental graphics concepts and techniques*. (See the Graphics section!)

**Destination**: Advanced

If you’re somewhat experienced and would like an argument for OpenGL over DirectX, here’s Valve’s perspective.

**A Foreword**:

The first thing to know about OpenGL is that there are two modes of the entire API: *immediate* and *retained*. Immediate mode is old and basically kept around for legacy purposes; you should never use it for a new engine. Retained mode is much more efficient, since data submissions to the GPU are stored in buffers and lists before being submitted in bulk, whereas immediate mode submits every tiny piece of data immediately. (Or at least it used to; modern implementations are a bit more clever.) Retained mode also isn’t *that* much more complicated, though it can seem intimidating at first. Here is a more thorough explanation of the two modes which includes contrasting code. When Googling, you’re likely going to find a bunch of immediate mode content… but to you, it will be almost worthless. Focus on retained mode, and you’ll be golden.

The second thing to note is that OpenGL has API versions ranging from 1.1 to 4.x (whatever is current), and the one you target will affect how many platforms your game can run on. You can reasonably expect 95% of the PC market to support OpenGL 3.1, so given its huge advantages over versions 2.x, it’s a solid default target. Of course, you can always throw in advanced features for higher-end GPUs, as is common practice.

The OpenGL 4 Manual and the OpenGL 4 Core Specification. These are listed here mainly for reference, as you’ll be relying on them heavily very soon. Go ahead and skim the manual a little to get a sense of its layout, then thoroughly analyze the title image of the specification to get an understanding of how functionality is organized on the GPU.

OpenGL Wiki: Rendering Pipeline Overview. Here’s an overview of how the data you submit to the GPU gets processed and eventually rasterized to the display. For now, ignore Tessellation, the Geometry Shader, and Transform Feedbacks, as these are optional and generally advanced functionality that you’re not going to need until well after you’ve actually learned the core API. Otherwise, the programmable phases will be of particular interest to you, since you’ll be implementing them soon yourself! As for the non-programmable phases, they’re still valuable to understand, since they give meaning and understanding to the actually-programmable phases.

OpenGL Spaces, by Songho. Before you can make use of any OpenGL code, you need to understand what the various “spaces” are in OpenGL and how to transform between them. Get really technical in your understanding — know the difference between Clip and NDC space, and how projection matrices work. *Know everything!* Note that this link includes some immediate mode demonstrations.

Apple’s Recommended Practices for Working with VAOs. Alright, now we’re in the thick of it. This is the best article I’ve ever found delineating *how* to use OpenGL in retained mode, and how to do so effectively. It describes the various methods of constructing a VAO (Vertex Array Object, which consists of VBOs, or Vertex Buffer Objects), submitting data to the GPU, and much more. It’s a somewhat jargon-y ride, but persist through and congratulations — you now know 90% of how to use OpenGL in retained mode! Note that this talks of OpenGL ES specifically (a variant of the API designed for mobile devices), but the lessons given are basically the same. Some of the optimizations it recommends are also mobile specific, but for the most part they are equally applicable to desktop/laptop graphics hardware.

Your next quest: implement everything you’ve just learned in a homemade, low-level graphics API! This will be challenging but very rewarding, as by the time you’re done you’ll be an expert on OpenGL, possibly employable at a number of game companies! (Or should I say, “advanced” at OpenGL. Let’s reserve the term “expert” for the poor folks who’ve implemented an actual OpenGL driver or two.) And you’ll have your very own powerful graphics engine! Woohoo! You’re going to need to refer extensively to the wiki, the manual, *and* the specification, in addition to constant Googling. Based on my own experiences, here’s how I’d go about designing it.

### Physics

To be continued!