A tour around 3D graphics

The process of rendering a 3D space on a flat screen is complex and fascinating. Computer graphics is what brings numbers and symbols to colorful life on a screen. This article briefly explains how rendering engines transform raw components into a finished picture.

Mesh

It all starts with spatial data. 3D models, whether artistic or industrial, consist of vertices and faces. Every vertex has its position and a normal vector, the direction the vertex is "facing". Faces are simply sets of vertices. Some might be surprised, but it does matter whether the vertices are listed clockwise or counterclockwise: typically a face is visible from only one side! The vertices and faces of an object are collectively called a mesh.
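
To make this concrete, here is a minimal sketch in Python (the names and numbers are illustrative, not taken from any particular engine): a mesh stored as vertex and face lists, with the face normal computed from the counterclockwise winding.

    import math

    # A vertex: a position (x, y, z) and a normal vector it is "facing" along.
    vertices = [
        ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
        ((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
        ((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)),
    ]

    # A face is a set of vertex indices; counterclockwise order matters.
    faces = [(0, 1, 2)]

    def face_normal(face):
        """Cross product of two edges gives the face normal.
        Listing the vertices clockwise instead would flip its sign,
        which is why winding order decides the visible side."""
        (ax, ay, az), _ = vertices[face[0]]
        (bx, by, bz), _ = vertices[face[1]]
        (cx, cy, cz), _ = vertices[face[2]]
        ux, uy, uz = bx - ax, by - ay, bz - az   # edge A -> B
        vx, vy, vz = cx - ax, cy - ay, cz - az   # edge A -> C
        nx = uy * vz - uz * vy
        ny = uz * vx - ux * vz
        nz = ux * vy - uy * vx
        length = math.sqrt(nx * nx + ny * ny + nz * nz)
        return (nx / length, ny / length, nz / length)

    print(face_normal(faces[0]))  # (0.0, 0.0, 1.0): the face "looks" along +Z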

Camera

Suppose we have a few models placed in a scene; next we need to place a camera. Essentially, a camera is a point oriented in some direction. Knowing the position and direction of the camera and of all the objects (expressed in the scene's coordinate space), we can decide which objects will be "seen", i.e. which lie inside the camera's frustum. The frustum usually has the shape of a truncated pyramid bounded by clipping planes.
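
Here is a rough sketch of that test in Python, under simplifying assumptions (the camera sits at the origin and looks down the -Z axis, i.e. the point has already been transformed into camera space; the field-of-view and clipping values are made up):

    import math

    def in_frustum(point, fov_y_deg=60.0, aspect=16 / 9, near=0.1, far=100.0):
        """Is a point inside the frustum of a camera sitting at the origin
        and looking down the -Z axis? A real engine would first bring the
        point into this camera space with a view matrix."""
        x, y, z = point
        depth = -z                      # distance in front of the camera
        if not (near <= depth <= far):  # near/far clipping planes
            return False
        # The frustum widens with depth: a slice at this depth is a rectangle.
        half_h = depth * math.tan(math.radians(fov_y_deg) / 2)
        half_w = half_h * aspect
        return abs(y) <= half_h and abs(x) <= half_w

    print(in_frustum((0.0, 0.0, -5.0)))  # True: straight ahead of the camera
    print(in_frustum((0.0, 0.0, 5.0)))   # False: behind the camera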

Rasterization

A screen consists of little dots called pixels. The matrix of these pixels is called a raster. During rasterization, some linear algebra is used to determine which pixels of the screen's raster are occupied by which objects. Overlaps are resolved with the z-buffer, a matrix of values telling how far from the camera the thing drawn at each pixel is: a pixel is repainted only when a newly drawn object is closer than what is already there.
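
A hypothetical fragment loop sketches this depth test in a few lines (real GPUs do it in dedicated hardware, but the logic is the same):

    WIDTH, HEIGHT = 640, 480

    # z-buffer: one depth value per pixel, initialized to "infinitely far".
    zbuffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def plot(x, y, depth, color):
        """Paint a pixel only if this fragment is closer to the camera
        than whatever was drawn there before."""
        if depth < zbuffer[y][x]:
            zbuffer[y][x] = depth
            framebuffer[y][x] = color

    plot(10, 10, 5.0, (255, 0, 0))  # red object 5 units away
    plot(10, 10, 9.0, (0, 0, 255))  # blue object behind it: rejected
    print(framebuffer[10][10])      # (255, 0, 0)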

Textures

What gives our objects their color? For that we need textures: 2D pictures storing data as colors, "wrapped" around meshes. We can put many textures on a single mesh to provide many kinds of data. Textures tell the shaders (programs running on the Graphics Processing Unit) the value of some parameter of the mesh at a certain point. Popular examples include albedo (the plain base color, unaffected by lighting), roughness and emission. UV unwrapping is the process of cutting a mesh along its edges so that it stretches as little as possible when laid flat on a plane.
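
As an illustration, here is a nearest-neighbor texture lookup in Python with a made-up 2x2 albedo texture; real samplers typically also filter between neighboring texels:

    # A tiny 2x2 "albedo" texture: rows of RGB texels.
    texture = [
        [(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 0)],
    ]

    def sample(u, v):
        """Nearest-neighbor lookup: UV coordinates in [0, 1] are scaled
        to texel indices within the image."""
        height, width = len(texture), len(texture[0])
        x = min(int(u * width), width - 1)
        y = min(int(v * height), height - 1)
        return texture[y][x]

    print(sample(0.1, 0.1))  # (255, 0, 0): top-left texel
    print(sample(0.9, 0.1))  # (0, 255, 0): top-right texel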

Light: Interpolated light

One can see with the naked eye that this description still lacks something important. Humans can see nothing without light, the wonderful physical phenomenon that allows our miraculous perception of the world. The first approach to this problem was computationally efficient, but not very realistic: we can calculate the brightness of every pixel on the raster knowing the surface normal at that point and the positions of the light sources. The exact formulas vary depending on the shading model, but they rely mostly on trigonometry. Such interpolation-based approaches, e.g. the Phong model, are widely used in real-time computer graphics, where every frame needs to be rendered quickly enough not to cause visible delays for the user.
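
The simplest such formula, which models like Phong build on, is the Lambertian diffuse term: brightness is the cosine of the angle between the surface normal and the direction to the light, which is just a dot product. A small sketch (the positions and vectors are made up):

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def diffuse_brightness(point, normal, light_pos):
        """Lambertian term: cosine of the angle between the surface normal
        and the direction to the light, clamped at zero so surfaces facing
        away from the light stay dark. Phong adds a specular term on top."""
        to_light = normalize(tuple(l - p for l, p in zip(light_pos, point)))
        n = normalize(normal)
        return max(0.0, sum(a * b for a, b in zip(n, to_light)))

    # A surface facing straight up, lit from directly above: full brightness.
    print(diffuse_brightness((0, 0, 0), (0, 1, 0), (0, 10, 0)))  # 1.0
    # Lit from the side at 45 degrees: about 0.707.
    print(diffuse_brightness((0, 0, 0), (0, 1, 0), (10, 10, 0)))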

Light: Raytracing

However, if we are in no hurry because we are rendering a static picture (or many of them, later assembled into a movie), we can use a much more realistic approach: raytracing. The idea is taken straight from the physical world around us: the color of every point we see is just the wavelength of a ray of light that hits our eye from that exact direction. Yes, every second of our lives we are bombarded with myriads of rays, straight into our eyes. For every ray we know the direction it came from, which allows us to trace it. That is exactly what raytracing does: we follow a ray that hit the camera's eye back to its origin and see which surfaces it reflected from. Depending on a surface's properties, density and structure, the wavelength (and thus the color) changes in a way we can compute. Having traced a few reflections, and knowing how each material scatters light, we can accurately set the color of a pixel.
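
As a sketch of the first step, here is a ray-sphere intersection in Python; a full tracer would then recurse along the reflected ray from the hit point, which is omitted here:

    import math

    def ray_hits_sphere(origin, direction, center, radius):
        """Solve |origin + t*direction - center|^2 = radius^2 for t.
        Returns the distance to the nearest hit in front of the ray,
        or None if the ray misses the sphere."""
        ox, oy, oz = (o - c for o, c in zip(origin, center))
        dx, dy, dz = direction
        a = dx * dx + dy * dy + dz * dz
        b = 2 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4 * a * c          # discriminant of the quadratic
        if disc < 0:
            return None                   # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / (2 * a)  # nearest of the two roots
        return t if t > 0 else None

    # A ray from the camera at the origin, straight down -Z,
    # toward a unit sphere 5 units away: hits at distance 4.
    print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))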

Shaders and rendering pipeline

As mentioned before, shaders are programs that run on the GPU rather than the CPU. They can be programmed to manipulate what appears on the screen, usually using a huge amount of maths. Some shaders, like compute shaders, are not used to produce visual effects but to perform heavy calculations. On the GPU, data is passed from shader to shader so that the output of one becomes the input of another, which is why it is called a rendering pipeline. The most important shader types include vertex shaders, which can move the vertices of a mesh around, and pixel shaders (also known as fragment shaders), which color the pixels produced by rasterization. With a good understanding of shaders and an artistic touch, dull meshes can be turned into beautiful things.
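
That stage-to-stage flow can be mimicked on the CPU with a toy sketch (real shaders are written in dedicated languages such as GLSL or HLSL and run massively in parallel; these functions are illustrative only):

    def vertex_shader(position):
        """Toy vertex stage: move every vertex up by one unit.
        Its output becomes the input of the next pipeline stage."""
        x, y, z = position
        return (x, y + 1.0, z)

    def fragment_shader(depth):
        """Toy fragment (pixel) stage: shade nearer fragments brighter."""
        brightness = max(0.0, 1.0 - depth / 10.0)
        return (brightness, brightness, brightness)

    # Data flows from stage to stage, like through a rendering pipeline.
    moved = [vertex_shader(v) for v in [(0, 0, -3), (1, 0, -6)]]
    colors = [fragment_shader(-z) for (x, y, z) in moved]
    print(moved)   # vertices after the vertex stage
    print(colors)  # grayscale colors after the fragment stage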

These are just a few of the components that make up computer graphics. All these steps take place during the production of our favorite animations, games and breathtaking renders. With existing software like Blender or ZBrush, virtually everybody can dive into the vast field of CG and find something they enjoy there. The selection is wide: from 3D modeling through texture creation to programming shaders.

Written by: Adam Leczkowski
