
Understanding Depth of Field: A Quick Guide

Depth of Field

In most games, depth of field (DOF) shapes how scenes are depicted and perceived by the player. Whenever we hear DOF, the first thing that comes to mind is background blur, but that is only part of the story. To understand how depth works, and how video games imitate this effect, we need to understand the two main concepts below.

  • How video games render images
  • Depth Maps

Fun Activity: Visualising Depth of Field

Let’s start with a fun activity. The human eye has a lens whose focus can change, plus a pupil that acts as a variable aperture, which together let us perceive light from different distances. We have two eyes, which let us sense depth by focusing on objects while our brain combines the two views into one. But the effect is also observable with a single eye.

Try closing one eye, hold your hand stretched out, and look at your hand and whatever is behind it in quick succession. You can see that the periphery of your focus has a slight blur around it. This is what we call depth of field: the further an object is from the focal point, the blurrier it becomes. An object closer to you but still out of focus also becomes blurry, giving you a sense of the distance between objects.

Fun activity: Visualizing depth side by side

How video games render images

Games are built from vertices, polygons, and textures. For now, we are only going to talk about the polygons. When a game asset is modeled, you want it to have as few vertices as possible for faster processing, and triangles are best for this purpose. The more polygons an object has, the more detailed it becomes, and the more time is required to render it. With a low number of polygons, we get a low-poly model, which has its own separate use cases.

Now let’s imagine we have a triangle in front of us and another triangle behind it. We perceive that one object is behind another only when one of them is more prominent than the other. In game engines, three coordinates, x, y, and z, define a 3D space, and projecting these values gives us a flat image from a particular angle.

Now if we have two polygons with different x, y, and z values and we try to combine them, we end up with an image in which some parts stick out and others intersect. If we keep repeating this process, the game camera understands that some objects sit behind others, and we only need to see the topmost one. This allows us to render not all of the polygons, but only those the camera actually sees. We have emulated layering and pseudo-depth in a 2D image.
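This per-pixel "keep only the topmost object" test is commonly implemented as a z-buffer. Below is a minimal Python sketch of the idea, assuming smaller z means closer to the camera and treating pixels and colors as plain Python values (real engines do this in hardware):

```python
# Simplified z-buffer: for each pixel, keep only the fragment
# closest to the camera (assumes smaller z = closer).

WIDTH, HEIGHT = 4, 4
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[None] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, z, color):
    """Write the fragment only if it is nearer than what is already stored."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        color_buffer[y][x] = color

# Two overlapping polygons cover the same pixel:
# a blue one behind (z = 5) and a red one in front (z = 2).
draw_fragment(1, 1, 5.0, "blue")
draw_fragment(1, 1, 2.0, "red")
print(color_buffer[1][1])  # red wins the depth test
```

The order of the two draw calls doesn't matter: the depth comparison guarantees the nearer fragment survives either way.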

A more in-depth video discussing this topic has been covered here.

Calculating which object appears on top by Z values

Depth Maps

We have learnt layering in a 2D image, but if we view the result as-is, we see a mash-up of objects with no definite depth. This is where a depth map comes in: a non-rendered image defined by white and black intensities, which tells the game engine and camera exactly where each object sits in the scene. Closer objects are white and further objects are black (or vice versa), and everything in between is a shade of grey. Now that the engine knows where the objects are, it communicates this to the game camera, bringing us to one of the most important aspects: focus.
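That white-to-black mapping is just a normalization of the stored depths. Here is an illustrative sketch; the 0–255 range and the near-is-white convention are choices for this example, not a fixed standard:

```python
# Turn raw camera-space depths into a grayscale depth map:
# 255 (white) = nearest, 0 (black) = farthest, shades of grey in between.

def depth_to_grayscale(depths):
    near, far = min(depths), max(depths)
    span = far - near or 1.0  # avoid dividing by zero if all depths match
    # Nearer objects map to higher (whiter) values.
    return [round(255 * (far - d) / span) for d in depths]

scene_depths = [0.0, 5.0, 10.0]           # three objects at increasing distance
print(depth_to_grayscale(scene_depths))   # [255, 128, 0]
```

Flipping the formula to `(d - near) / span` would give the "or vice-versa" convention mentioned above, with far objects white instead.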

Different depth maps Visualized

Focus and Blur

Focus, in the context of optics, refers to the clarity and sharpness of an image: the point at which light rays converge to form a sharp, well-defined reproduction of an object. In a game, it is whatever the player (manual focus) or the game engine (auto focus) wants the player’s attention on in that particular scene.

Now the camera can focus on different objects in the scene, but it still needs one more ingredient to feel realistic: blur. The concept of blur in video games has existed since the PS2 era. Many players preferred to turn it off because game engines handled it poorly. Recently, however, major improvements have arrived in the latest engines such as UE4 and UE5, and things will keep improving as better optimization techniques are researched. Consequently, blur rendering has become much more accurate and faster than conventional approaches.
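One common way to drive blur from a depth map is a circle-of-confusion style falloff around the focal plane: the farther a sample's depth is from the focus depth, the larger its blur radius. The sketch below is illustrative only; the `aperture` and `max_radius` constants are made up for this example and are not taken from any particular engine:

```python
# Per-pixel blur radius derived from depth: samples on the focal
# plane stay sharp, and blur grows with distance from it.

def blur_radius(depth, focus_depth, aperture=2.0, max_radius=8.0):
    """Return a blur radius in pixels; 0 means perfectly in focus."""
    coc = aperture * abs(depth - focus_depth) / max(depth, 1e-6)
    return min(coc, max_radius)

# Focus on an object 5 units away; nearer and farther objects get blurrier.
for d in (1.0, 5.0, 20.0):
    print(d, blur_radius(d, focus_depth=5.0))  # 8.0, 0.0, 1.5 pixels
```

Note that the nearby out-of-focus object (depth 1.0) blurs more aggressively than the distant one, matching the hand-in-front-of-your-eye activity from earlier.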

A few notable techniques that affect blur are:

  • Anti-Aliasing
  • Field of View
  • Bloom
  • Resolution Scaling

Bloom in Videogames

The Whole Picture

Now we understand how depth is emulated, how depth maps work, and the concept behind blur. When we combine all of these, we get depth of field in video games.

We have now emulated 3D space in a 2D image. But we are still talking about still images; what about when we actually play? Millions of operations run every single second to produce this accurate depiction. When we play games on a PC, we don’t want them to run like a PowerPoint slideshow, right? We expect at least 60 frames per second, which means millions of operations happening in parallel to maintain a stable frame rate, frame time, and quality.
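To see the scale of that budget, a quick back-of-the-envelope calculation (assuming a 60 fps target):

```python
# Frame-time budget: every rendering pass (depth test, depth-map
# lookup, blur, focus) must fit inside one frame's slice of time.
target_fps = 60
frame_budget_ms = 1000 / target_fps           # milliseconds available per frame
print(f"{frame_budget_ms:.2f} ms per frame")  # 16.67 ms per frame
```

Every effect in this post, from the depth test to the blur pass, has to share that roughly 16.7 millisecond window, which is why performance matters so much.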

This is why images are rendered on a dedicated Graphics Processing Unit (GPU) rather than the CPU: certain workloads are better reserved for specialized hardware.

“Just because you can doesn’t mean you should” – Sherrilyn Kenyon

Depth of Field in Skyrim

To Infinity and Beyond!

The way forward is paved not only by new improvements but also by the developers who support them. Plenty of new techniques have emerged that improve the quality of DOF, blur, and focus in different ways, including ray tracing, path tracing, DLSS/FSR, global illumination, and dynamic shadows. Only a handful of developers are willing to put them to use and make a name for themselves; CD Projekt Red’s Cyberpunk 2077 is a good example. Some are playing it safe, while others cannot afford the technology yet. With the advent of AI, we expect DOF operations and techniques to become faster, more efficient, and more affordable.

The move toward realism (Image Credits: Gaurav “Wrathchild” NUA)

So, what’s your take on DOF? Did you enjoy reading the blog? Let us know in the comments below.
