3D Production Pipeline: Lighting, Rendering & Compositing


Lighting, as the name implies, involves controlling the various light elements within a scene or shot.  The Lighting Specialist will use storyboards, scripts and other references to establish the 'goal' of the shot before beginning the lighting setup, as it's important to understand the director's vision in order to reproduce it digitally.  Once the goal of the shot is established, the Lighting Specialist achieves it by combining multiple lights, such as key, point, fill and rim lights, and then manipulating their various properties to refine the effect.  Using these properties a Lighting Specialist can control how light interacts with different types of materials, how the scene leads the viewer's eye, the mood and atmosphere, colour theory and harmony, and the perceived complexity of the textures involved.  The lighting process is so involved that it's debatable whether Lighting Specialists or Texture Artists have more control over a shot's colour scheme, mood and overall atmosphere.
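The way several lights with individual properties combine into one result can be sketched in code. The following is a hypothetical toy, not any package's API: each light carries a direction, colour and intensity (the kinds of properties a Lighting Specialist adjusts), and a surface point sums a simple Lambertian contribution from each.

```python
# Hypothetical three-point lighting rig: key, fill and rim lights, each with
# a unit direction, an RGB colour and an intensity the artist can tweak.
LIGHTS = [
    {"name": "key",  "dir": (0.0, 0.707, 0.707),  "colour": (1.0, 0.95, 0.9), "intensity": 0.8},
    {"name": "fill", "dir": (-0.707, 0.0, 0.707), "colour": (0.8, 0.85, 1.0), "intensity": 0.4},
    {"name": "rim",  "dir": (0.0, 0.2, -0.98),    "colour": (1.0, 1.0, 1.0),  "intensity": 0.6},
]

def shade(normal, albedo):
    """Sum a simple Lambertian (N . L) contribution from every light."""
    r = g = b = 0.0
    for light in LIGHTS:
        n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light["dir"])))
        w = n_dot_l * light["intensity"]          # facing the light -> brighter
        r += albedo[0] * light["colour"][0] * w
        g += albedo[1] * light["colour"][1] * w
        b += albedo[2] * light["colour"][2] * w
    return (min(r, 1.0), min(g, 1.0), min(b, 1.0))

# A white surface facing the key light picks up mostly its warm colour.
print(shade((0.0, 0.707, 0.707), (1.0, 1.0, 1.0)))
```

Changing one light's colour or intensity shifts the whole shot's mood, which is why these properties give the Lighting Specialist so much control.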



Out of all the processes within the 3D pipeline, rendering is the most technically complex.  It's easier to understand, however, if you think of it like a photo that needs to be developed and printed before it can be displayed.  Just like the undeveloped photo, a model is nothing more than a mathematical representation of points and surfaces (vertices and polygons) in three-dimensional space.

“Rendering happens when a 3D software package's render engine translates a scene from a mathematical approximation to a finalized 2D image.  During this process the entirety of the scene's elements, including spatial, texturing and lighting information, are combined to determine the colour value of each pixel in the flattened image.” (Slick, J.)

There are two types of rendering, whose main difference is the speed at which they render: real-time rendering and offline (or pre-) rendering.

Real-time rendering is mainly used in gaming and interactive graphics, where images must be computed at a minimum of 18-20 frames per second in order for motion to appear smooth.  This is achieved by dedicated graphics cards (GPUs) and by pre-computing as much information as possible: the majority of a game's environment lighting will be pre-computed and 'baked' to improve render speed.

Offline or pre-rendering is used when speed isn't a priority, and it can achieve ultra-detailed, photo-realistic results.  Larger studios have been known to dedicate up to 90 hours of render time to individual frames.  This type of rendering also allows for more detailed textures and models, with textures in excess of 4K resolution and much higher poly counts.

There are also three different rendering techniques, each with its own advantages and disadvantages: scanline (rasterization), raytracing and radiosity.

Scanline rendering is used for real-time rendering as it is the quickest of the three, because it renders polygon by polygon rather than pixel by pixel.  Used in conjunction with 'baked' lighting, this technique can achieve speeds of 60 frames per second or higher.
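The polygon-by-polygon idea can be illustrated with a minimal toy rasterizer, an assumption-level sketch rather than how any real engine is written: for each image row, find where the polygon's edges cross that row and fill the pixels between the crossings.

```python
# Toy scanline fill: one pass per image row, filling spans between edge
# crossings -- the polygon-by-polygon approach that makes scanline fast.
def scanline_fill(polygon, width, height):
    image = [[0] * width for _ in range(height)]
    for y in range(height):
        yc = y + 0.5                                  # sample at the pixel centre
        xs = []
        n = len(polygon)
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            if (y0 <= yc < y1) or (y1 <= yc < y0):    # edge crosses this row
                t = (yc - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        for x_start, x_end in zip(xs[::2], xs[1::2]):  # fill between crossing pairs
            for x in range(int(x_start + 0.5), int(x_end + 0.5)):
                image[y][x] = 1
    return image

# Fill a small triangle into an 8x8 "framebuffer" of 0s and 1s.
tri = [(1.0, 1.0), (7.0, 1.0), (1.0, 7.0)]
img = scanline_fill(tri, 8, 8)
```

Because the cost scales with the polygons touched rather than with rays traced per pixel, this style of renderer suits real-time work.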

Raytracing traces one or more rays of light from the camera through every pixel.  The colour of each pixel is determined by the ray's interactions with every object in its traced path.  As a result, raytracing can achieve photo-realistic quality but is dramatically slower.
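A stripped-down sketch makes the per-pixel idea concrete. This is a hypothetical toy (one sphere, no shading or bounces), not any renderer's actual API: each pixel fires one ray, and the pixel's colour depends on whether that ray hits something along its path.

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Return the nearest positive hit distance t, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c            # direction is unit length, so a = 1
    if disc < 0:
        return None                   # ray passes the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(pixel_dir):
    """One ray from the camera at the origin: grey on a hit, black on a miss."""
    hit = ray_sphere((0.0, 0.0, 0.0), pixel_dir, (0.0, 0.0, 5.0), 1.0)
    return (0.5, 0.5, 0.5) if hit is not None else (0.0, 0.0, 0.0)

print(trace((0.0, 0.0, 1.0)))   # ray straight ahead hits the sphere
print(trace((0.0, 1.0, 0.0)))   # ray pointing up misses it
```

A production raytracer repeats this intersection test, plus shading, reflection and shadow rays, millions of times per frame, which is where the slowdown comes from.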

Radiosity simulates surface colour by accounting for indirect illumination, giving a scene its graduated shadows and colour bleeding.  Radiosity is generally used in conjunction with raytracing to improve visual quality.
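The bounced-light idea can be sketched under heavy simplifying assumptions: two patches, hand-picked form factors, and iteration of the classic radiosity relation B = E + rho * F * B (radiosity equals emission plus reflected incoming light). The numbers here are illustrative, not from any real scene.

```python
# Toy radiosity solve by fixed-point iteration of B = E + rho * F * B.
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(len(b)))
             for i in range(len(b))]
    return b

# Patch 0 is a light source (emits 1.0); patch 1 emits nothing, yet ends up
# lit indirectly because energy bounces between the two patches -- the same
# mechanism that produces colour bleeding and soft graduated shadows.
B = solve_radiosity(
    emission=[1.0, 0.0],
    reflectance=[0.5, 0.5],
    form_factors=[[0.0, 0.2], [0.2, 0.0]],
)
print(B)
```

The non-zero value for the unlit patch is the indirect illumination that direct techniques alone would miss.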


Compositing is the process of combining rendered elements from multiple sources to create the final product, which may be a still or an animated image.  There are three types of compositing: node-based, layer-based and deep compositing.  Node-based compositing links media objects and effects in a 'tree' structure on a procedural node graph.
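The tree structure can be sketched as nested nodes that are evaluated from the leaves up. The node names here ("const", "brightness", "over") are hypothetical, not any compositing package's actual operators.

```python
# Toy node graph: each node is (operation, *inputs); evaluating the root
# pulls values up the tree, the way node-based compositors chain media and
# effects procedurally. One pixel stands in for a whole image.
def evaluate(node):
    op, *args = node
    if op == "const":                 # leaf: a source image (a single RGB pixel)
        return args[0]
    if op == "brightness":            # effect node: scale its single input
        child, gain = args
        return tuple(min(1.0, c * gain) for c in evaluate(child))
    if op == "over":                  # merge node: blend A over B by alpha
        a, b, alpha = args
        ca, cb = evaluate(a), evaluate(b)
        return tuple(alpha * x + (1 - alpha) * y for x, y in zip(ca, cb))
    raise ValueError(op)

# Brighten a grey source, then merge it over a blue background at 50% alpha.
graph = ("over",
         ("brightness", ("const", (0.5, 0.5, 0.5)), 1.5),
         ("const", (0.0, 0.0, 1.0)),
         0.5)
result = evaluate(graph)
print(result)
```

Because every result is recomputed from the graph, any upstream node can be changed and the whole composite re-evaluates, which is the procedural advantage of the node-based approach.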

Layer-based compositing places each media element of a composite in its own separate layer within a timeline; each layer is then rendered on top of the one below it.  Because of this, layer-based compositing is not well suited to complex composites, particularly 3D work.
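The render-on-top behaviour is essentially the standard "over" operator applied from the bottom layer up. The sketch below assumes one pixel per layer and straight (non-premultiplied) alpha; it is an illustration, not a particular tool's implementation.

```python
# "A over B" for straight-alpha colours: the top layer's alpha decides how
# much of the layer underneath shows through.
def over(top_rgb, top_a, bottom_rgb, bottom_a):
    out_a = top_a + bottom_a * (1.0 - top_a)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    rgb = tuple((t * top_a + b * bottom_a * (1.0 - top_a)) / out_a
                for t, b in zip(top_rgb, bottom_rgb))
    return rgb, out_a

def flatten(layers):
    """Composite a bottom-to-top layer stack, as a timeline-based tool does."""
    rgb, a = layers[0]
    for top_rgb, top_a in layers[1:]:
        rgb, a = over(top_rgb, top_a, rgb, a)
    return rgb, a

# An opaque red base with a half-transparent blue layer stacked on top.
result = flatten([((1.0, 0.0, 0.0), 1.0), ((0.0, 0.0, 1.0), 0.5)])
print(result)
```

Because each layer only knows what is directly beneath it, inserting an element *between* two already-rendered layers means re-rendering, which is the limitation that deep compositing addresses.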


Deep compositing is a relative newcomer to the scene but is fast becoming the go-to choice for complex compositing work.  It's similar to layer-based compositing except that, instead of each element being a flat 2D image, deep compositing stores an array of values through Z space.  This enables realistic effects such as fog or clouds, as images can be placed at multiple points in Z space.
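A deep pixel can be sketched as a list of (depth, colour, alpha) samples that are sorted by Z and composited front to back. This is a simplified illustration of the idea, not the actual deep-image format used in production.

```python
# Toy deep pixel: many samples at different Z depths instead of one flat
# value. Merging two rendered elements is just pooling their samples, then
# compositing front to back -- nearer samples occlude farther ones.
def composite_deep(samples):
    """samples: list of (z, rgb, alpha) tuples for a single pixel."""
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0                       # how much light still gets through
    for z, colour, alpha in sorted(samples, key=lambda s: s[0]):
        for i in range(3):
            rgb[i] += transmittance * alpha * colour[i]
        transmittance *= (1.0 - alpha)
    return tuple(rgb), 1.0 - transmittance

# A solid red object at z=10 merged with a thin grey fog sample at z=5: the
# fog tints the object without either element being re-rendered.
deep_pixel = [(10.0, (1.0, 0.0, 0.0), 1.0), (5.0, (0.5, 0.5, 0.5), 0.3)]
print(composite_deep(deep_pixel))
```

Because every sample keeps its own depth, a new element such as a fog card can be slotted in at any Z position after rendering, which is what makes the technique attractive for volumetric effects.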



Slick, J. (2015). Introducing the Computer Graphics Pipeline. Accessed June 28, 2015. http://3d.about.com/od/3d-101-The-Basics/tp/Introducing-The-Computer-Graphics-Pipeline.htm

Gulati, Pratik. (June 9, 2010). Step-by-Step: How to make an Animated Movie. Accessed June 28, 2015. http://cgi.tutsplus.com/articles/step-by-step-how-to-make-an-animated-movie–cg-3257

Boudon, G. (2015). Understanding a 3D Production Pipeline – Learning The Basics. Accessed June 28, 2015. http://blog.digitaltutors.com/understanding-a-3d-production-pipeline-learning-the-basics/

Parrish, D. (June 9, 2004). ‘Inspired 3D: Lighting and Compositing’: Lighting a Production Shot. Accessed June 28, 2015. http://www.awn.com/vfxworld/inspired-3d-lighting-and-compositing-lighting-production-shot

Slick, J. (2015). What is Rendering?. Accessed June 28, 2015. http://3d.about.com/od/3d-101-The-Basics/a/Rendering-Finalizing-The-3d-Image.htm

Seymour, M. (February 27, 2014). The Art of Deep Compositing. Accessed June 28, 2015. http://www.fxguide.com/featured/the-art-of-deep-compositing/

The Foundry. What is Digital Compositing?. Accessed June 28, 2015. https://www.thefoundry.co.uk/products/nuke/about-digital-compositing/