How to render – Fundamental principles

What to consider in any render engine to achieve a proper photographic look

This tutorial covers the fundamentals you will keep coming back to. If you want to bring your renderings to the next level, whether with analytic or evocative photorealism, you need a basic understanding of photography.

As designers who deal with images as part of the research and design process – taking reference pictures at exhibitions, in galleries, and in the everyday world – it’s likely you already have an eye for what makes a good photo.

When you’re looking for sources of inspiration and reference material, the best thing you can do is to look for good photos instead of renderings, because the real world is full of variations and imperfections.

It is also very important to keep in mind that you cannot “unsee” that what you are looking at is a rendering – CGI, not a photo. Once you know you are looking at a rendering, it is mentally impossible to distance yourself from that knowledge, so ask for a second or third opinion if you have doubts regarding the photographic quality of your image.

No matter if you’re rendering with Maxwell, Vray, Octane, Arnold or Unreal, here are eight software-agnostic topics for beginners:

  • Purpose
  • Scenography
  • Model
  • Materials
  • Textures
  • Camera
  • Lighting
  • Postprocessing
  • Useful links

Purpose – Before you even switch on your computer, the first thing to ask yourself is: What is the purpose of the renderings I’m about to do? What emotional response do I want to evoke? What story do I want to tell with my images, and who is my audience? Basically, there are two categories:

Analytic: formal aesthetics (evaluation), MFC (material, finish, and colour variants evaluation), explanatory (assembly, functionality)

Evocative: design competition, client pitch, product launch, fair or exhibition, sales catalogue, online communication

Scenography – Your set-up should relate to the rendering’s purpose. If you are lacking photo styling ideas, just do an online search to see how high-quality brands in your product’s category are using images; try to understand the stories they are trying to tell, look at how they communicate with analytical and evocative images for each different purpose:

Auto-clipped: For plain renderings without context, you can do an auto-clipped rendering. Just render an alpha channel and a shadow channel in parallel with your main rendering channel. The alpha channel is a mask that will automatically clip your product, even when there are out-of-focus regions from rendering with depth of field. Open the rendering channel in Photoshop and use Layer > Matting > Remove Black Matte to get rid of the fringe. Then create a new layer underneath and fill it with any background you like, either a solid colour or a gradient. Now paste the shadow channel on a layer between the render layer and the background layer, and set it to Multiply mode so that the shadow darkens the layer below. You may need to tint the shadows, depending on the cold or warm lighting used in your scene
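The layer stack described above boils down to two blend operations – a Multiply for the shadow and an alpha-over for the clipped render – which can be sketched with NumPy (the tiny arrays below are hypothetical stand-ins for the actual render channels):

```python
import numpy as np

def composite(render, alpha, shadow, background):
    """Composite a clipped render over a background with a multiplied shadow.

    All inputs are floats in [0, 1]; `render` and `background` are HxWx3,
    `alpha` and `shadow` are HxW (shadow: 1 = no shadow, 0 = full shadow).
    """
    base = background * shadow[..., None]   # Multiply blend mode
    a = alpha[..., None]
    return render * a + base * (1.0 - a)    # alpha-over the clipped render

# Tiny synthetic example: a half-covered pixel over a grey background
render = np.full((1, 1, 3), 0.8)
alpha = np.full((1, 1), 0.5)
shadow = np.full((1, 1), 0.6)
background = np.full((1, 1, 3), 0.5)
out = composite(render, alpha, shadow, background)
```

Photoshop performs the same per-pixel arithmetic when the layers are stacked as described.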

Photo studio: For photo studio renderings, simply use one of the ready-made photo studios that come with your renderer. Drop your product in, assign the materials, adjust the lighting and camera to your liking, and you are done. For evocative compositions, a few good props can already provide a good sense of scale or suggest your product’s utility value. There are many online vendors where you can buy props and accessories to contextualise your product (link list at the end)

In context: For renderings with an indoor or outdoor context, first consider whether you really have the time and budget to model a full scene, because you need a very high amount of detail to make it look convincingly photographic. Normally, your primary goal is to concentrate on the design of your product

• Indoor: There are many online vendors where you can buy interiors, furnishings and accessories to contextualise your product (link list at the end). Set up emitters that mimic real photo studio lights to light the interior or try IBL (image based lighting with a spherical HDR image). Hide as many walls as possible so you do not trap light rays, which substantially slows down interior renderings. Real sunlight lighting is slowest

• Outdoor: It is just the same with outdoor renderings. But for simple scenes that you can model, like a courtyard or lawn, or scenes where very little detailed background will be visible, you can scatter realistic grass, flowers or trees, you can texture map soil, gravel or walkway stones, and you can use cut-out vegetation or people to cast natural shadows into your scene. Alternatively, you can buy backplates with matching HDR images (link list at the end), commission them, or even take them yourself

Model – No matter if you’re using surface, solid or polygonal modelling, your finished 3D model should be professional – complete and properly detailed – because you cannot render what doesn’t exist:

Scale: If your product was not modelled at real-world scale, transparent materials and lighting will not behave the way you expect, and you will almost certainly end up in a time-consuming loop of fiddling with various parameters. Either model at real-world scale in the first place, or scale to real-world size in your renderer

Parts: Just like for your CNC-machining and 3D printing needs, your model should consist of discrete parts, assembled into the final product. Also, if you are insert moulding, overmoulding or two-shot moulding a part with a different material, or use IMD (in-mould decoration), discrete parts make material assignments easier, instead of laboriously selecting and then grouping polygons

Draft: Plastic, metal, glass or ceramic parts that are moulded must respect draft angles that depend on the material and surface relief, so that the parts can be demoulded (taken out of the tool, the mould) without damage “in the line of draft”. You will also have visible tool parting lines where the tool parting cannot conform to part edges

Details: Moulded parts are filleted all-around, except on the tool parting line, which must remain sharp, and you will have wider and narrower split lines between parts, depending on your product’s assembly method. Also, no assembly is 100% accurate, so rotate and move parts out of perfect alignment by a tiny fraction

Tessellation: Tessellation is the process of subdividing your part’s surfaces or solids into a mesh of polygons. Rendering and game engines ultimately use triangular polygons in the rendering process. The denser your meshes are, the less facetted they will render, but your computer’s memory usage rises. Areas with high curvature require more triangles while flat areas require less. Ideally, each part should be exported with adaptive subdivision, a tessellation that corresponds to the local amount of curvature
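To get a feel for why curvature drives triangle counts, here is a sketch of the chord-sag arithmetic behind adaptive tessellation (the radius and tolerance values are arbitrary examples):

```python
import math

def segments_for_circle(radius, sag_tolerance):
    """Minimum number of chords so the polygon deviates from the true
    circle by at most `sag_tolerance` (the chord 'sag')."""
    # sag = r * (1 - cos(theta / 2)) for a chord spanning angle theta
    theta = 2.0 * math.acos(1.0 - sag_tolerance / radius)
    return max(3, math.ceil(2.0 * math.pi / theta))

# A 100 mm radius circle: tightening the tolerance 100x costs ~10x segments
coarse = segments_for_circle(100.0, 1.0)    # 1 mm sag allowed
fine = segments_for_circle(100.0, 0.01)     # 0.01 mm sag allowed
```

Flat regions have effectively infinite radius and need almost no subdivision, which is exactly what adaptive tessellation exploits.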

Materials – Try to stay away from material presets and downloads when you do not understand how they were made and how they work. Maybe they just “don’t work”. Maybe they “look right” in one situation, but not in yours. It is much better to examine the material you want to create in the real world, just like you would when specifying your product for production. If that’s impossible, search for high resolution photographs where you can see the material in different lighting conditions, so you really understand why it looks the way it looks. Then you can easily create your own materials, and use and tweak existing ones properly:

Colour: Colour is the most basic material property, and renderers base colour on RGB values. However, never use pure RGB 255 white, pure RGB 0 black, or fully saturated colours where one RGB colour channel is 255. That would produce artificial-looking results. If you were to do photometry on a white piece of paper under lab lighting, you would not get converted RGB values much higher than 230 in any of the three channels

• The raw diffuse material colour can be specified from RAL, NCS, British Standard or Pantone colour systems, and translated into the closest RGB or HSV value combination (link list at the end). In some cases, the colour aspect will be derived from a texture, for example from a high-resolution photo, scan, or illustration, like for wood, stone or textiles (link list at the end) and product graphics like labels, stickers, decals, printed logos or text

• When your material is translucent or transparent, for example polypropylene or glass, the colour is determined by the material’s transmission colour, or a subsurface scattering colour in case of materials like cheese or silicone rubber. Transparent materials increase render time. Be especially careful with subsurface scattering because it substantially increases render time. Transparent materials refract the light as it passes through, which is why you need to find their index of refraction, IOR, just like you need to do for metals, and ideally also the extinction coefficient (link list at the end). The thicker a transparent object is, the more it reduces the incident light’s energy, so you should set the material’s attenuation accordingly
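The thickness-dependent attenuation mentioned above follows the Beer–Lambert law; a minimal sketch, assuming the attenuation distance is specified as the depth at which intensity falls to 1/e:

```python
import math

def transmission(thickness_mm, attenuation_mm):
    """Beer-Lambert law: fraction of light that survives a straight path
    through the material. `attenuation_mm` is the depth at which the
    intensity has dropped to 1/e of the incident value."""
    return math.exp(-thickness_mm / attenuation_mm)

thin = transmission(2.0, 10.0)     # a 2 mm wall passes ~82% of the light
thick = transmission(20.0, 10.0)   # a 20 mm block of the same glass, ~14%
```

This is why a thin-walled glass renders almost clear while a solid block of the same material looks strongly tinted.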

• Highly reflective materials can appear very different depending on the viewing angle, which is why you should use a Fresnel shading method. Think of a whiteboard that seen from the front at 0° appears white, whereas when seen from a grazing angle that approaches 90° reflects the environment instead
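Most renderers implement this angle dependence with the Fresnel equations or with Schlick's well-known approximation of them; a sketch for a dielectric with a typical IOR of 1.5:

```python
def schlick_reflectance(cos_theta, ior=1.5):
    """Schlick's approximation of Fresnel reflectance for a dielectric.
    `cos_theta` is the cosine between the view ray and the surface normal."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2   # reflectance at normal incidence
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

front = schlick_reflectance(1.0)     # head-on: only ~4% is reflected
grazing = schlick_reflectance(0.05)  # near-grazing: mostly mirror-like
```

The jump from roughly 4% to well over 70% reflectance is the whiteboard effect described above.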

Finish: The other material property you need to think about is how the material is finished; is it shiny, silky, or matt; or is it lacquered, waxed, or otherwise coated? Apart from raw concrete, bricks, wood and soil, this means you need a two-layer material, where one layer defines the base substance, and a second layer the finish; very much like how things are in the real world. On the finish layer, use the roughness parameter to control how sharp or diffuse your material’s reflectivity is:

• Almost always, your materials have some relief, for example granular like on many plastic products, or directional like on brushed metals, woven textiles, or wood. You can generate reliefs with greyscale bump maps, normal maps, or displacement maps (link list at the end), or even 3D model the relief for macro shots. In most cases, you’ll want to use normal maps, which reveal more detail than bump maps

• You can generate your own normal maps from greyscale images in Photoshop with Filter > 3D > Generate Normal Map or use an online tool.
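The height-to-normal conversion those tools perform can be sketched with NumPy: the normal tilts against the local slope of the heightmap (the `strength` parameter is a hypothetical exaggeration factor):

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a greyscale heightmap (HxW floats in [0, 1]) into a
    tangent-space normal map encoded in [0, 1] (RGB = XYZ)."""
    dy, dx = np.gradient(height)          # slope along rows and columns
    # The normal tilts against the slope; `strength` exaggerates the relief
    n = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5                  # map [-1, 1] to the [0, 1] encoding

# A flat heightmap yields the uniform "blue" normal map colour (0.5, 0.5, 1.0)
flat = height_to_normal(np.zeros((4, 4)))
```

The characteristic lilac-blue of normal maps is simply the straight-up normal (0, 0, 1) in this encoding.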

• In case the relief must look truly three-dimensional or is visible along the silhouette of a part, use displacement map textures that subdivide and then transform your part’s polygon mesh. Always begin with a low subdivision, because too many triangles consume a lot of memory and could freeze your computer

• 16-bit bump map, normal map and displacement map textures make for smoother gradations in the relief, which means less or no visible banding or stepping

• If your material has a colour and finish layer, and maybe even a coating layer, remember that normally the relief must be present in all layers. Alternatively, you can use a global bump map that affects all layers, which is what you should do when using displacement maps

Defects: Unless your product was produced meticulously and handled with extreme care, all materials will have subtle irregularities, like slightly uneven reliefs, scuff marks, scratches, dust, dirt and maybe even suffer from discoloration or corrosion. Careful introduction of minuscule variances and defects with bump maps or normal maps is easy and makes for much higher photorealism

Tip: When changing existing materials or creating new ones, try not to test-render your full scene, because that takes far more time than developing your materials in isolation or by only rendering a region. Once a material really works well, use it, save it, share it, and recycle it in future projects

Textures – Colour map, bump map, normal map or roughness map textures should be seamless tiles of at least 2K or better 4K resolution, sometimes even higher. When buying or making your own textures, it is essential to establish what size they represent in the real world; textures that are squeezed or stretched or look too small or too large instantly destroy any photographic illusion.

If a material needs several texture maps, like basic wood for example, which needs a diffuse map, a bump/normal map, and a roughness map, they all must have the same transformations, so they appear on your part in the exact same place, scale, and orientation.

The process of applying 2D textures onto meshes of 3D surfaces is called texture mapping. Each surface and its mesh have so-called “UVs”, two-dimensional texture coordinates that correspond to the two sides of each surface’s mesh, like the two “directions” of a piece of paper. The UVs determine how a texture map will be applied onto a mesh.

Surface mapping: By default, the texture map is squeezed or stretched like a rubber sheet, to cover a surface’s mesh exactly once. Because all surfaces of your part have a different size and aspect ratio, you will get an odd-looking patchwork. Although this sounds like a rather useless method, it makes perfect sense for single-surface covers, logo badges, labels, or stickers like on bottles and other kinds of packaging

Projection mapping: The texture is projected through 3D space onto all assigned meshes, just like how a projector projects a picture onto a wall and everything else in the way of the beam of light. That means that a mesh’s UV coordinates do not matter. The most useful projections are planar, cubic, triplanar and cylindrical. A logo or product graphic is usually printed or debossed on flat or singly curved areas of a part. The planar projection is perfect for that. That said, with projection mapping you can get distortions across curved surfaces, depending on the projection method and projection direction. But, with random textures like scratches or dust, it rarely matters; it is a simple way to cover your parts with little imperfections without having to unwrap a part’s UVs for UV mapping. And some projection mapping issues can be solved by clever rotation or positioning of the projector
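A planar projection is nothing more than reading off 2D coordinates in the projector's plane while ignoring depth – which is also why it distorts on surfaces that curve away from the projection direction. A minimal sketch (the axes and origin are hypothetical):

```python
def planar_uv(point, origin, u_axis, v_axis):
    """Planar projection: read off 2D coordinates in the projector's plane,
    ignoring depth along the projection direction. `u_axis` and `v_axis`
    are orthogonal unit vectors spanning the plane (tile size = 1 unit)."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    d = tuple(p - o for p, o in zip(point, origin))
    return dot(d, u_axis), dot(d, v_axis)

# Projecting onto the XY plane simply drops Z - exactly why faces that
# turn away from the projection direction show stretching
u, v = planar_uv((0.25, 0.75, 3.0), (0, 0, 0), (1, 0, 0), (0, 1, 0))
```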

UV mapping: The texture is wrapped, like a flat piece of tailored fabric, around your part’s meshes without distortion. It is the most versatile texture mapping method, but it means more work, because you must first unwrap the UVs of your part’s meshes to a flat pattern. You don’t necessarily have to unwrap every part of your model, though – mainly those with a plywood, leather, or textile material, or when projection mapping or procedural mapping results in visible distortion

Procedural mapping: The texture is “grown” through space, so your part looks like it has been carved out from the material assigned to it. This works very well for natural stone materials, defects, and discolorations

Tip: When making your own textures, you can transform non-seamless tiles into seamless ones in Photoshop using Filter > Other > Offset by 50% height and 50% width of the texture, and then using the Clone Stamp Tool or Patch Tool to blend over the non-matching zone
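The Offset step is a plain wrap-around shift by half the tile size; with NumPy the same operation is one `np.roll` per axis (the 4x4 array is just a toy stand-in for a texture):

```python
import numpy as np

def offset_half(texture):
    """Photoshop's Filter > Other > Offset by half width and height:
    the old tile borders end up meeting in the middle of the image,
    where any seam becomes visible and can be retouched."""
    h, w = texture.shape[:2]
    return np.roll(np.roll(texture, h // 2, axis=0), w // 2, axis=1)

tile = np.arange(16).reshape(4, 4)
shifted = offset_half(tile)   # former centre pixels now sit at the corners
```

Because the shift wraps around, tiling behaviour is preserved: retouching the now-centred seam yields a tile that repeats without visible joins.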

Camera – All good render engines feature a physical camera model. You can directly translate your photography skills or mimic existing cameras, no tweaking of obscure parameters is necessary.

ISO: The ISO value sets a camera sensor’s sensitivity to light. This setting was originally developed in the era of analogue photography, and later adapted to the digital world. It is best to start with the standard ISO 100 value. The higher the ISO, the brighter the image. On film or digital cameras, this eventually results in perceptible image noise but not so in rendering. If you don’t want to change the lighting and f-stop for a certain depth of field, but the image is too dark or too bright, try changing the ISO value

Focal length: The focal length value of your camera lens determines the amount of perspective distortion and foreshortening. For product photography, use a focal length value of 90mm or higher. Like in professional product, interior and architecture photography, you should also use positive or negative lens shift to remove the annoying converging verticals

Shutter speed: The shutter speed value determines for how long the camera’s sensor is exposed to light. The higher the shutter speed value, the shorter the exposure time. Doubling the shutter speed value from 30 to 60, for example, halves the amount of light reaching the camera’s sensor. If your light set-up is realistic and illumination is good, but your image looks too dark or too bright, try changing the shutter speed value or f-stop value

f-stop: The f-stop value determines the diameter of the lens diaphragm aperture. The higher the f-stop value, the smaller its diameter, and the darker the image (when ISO and shutter speed remain unchanged). The f-stop value also determines the depth of field, DOF, and bokeh. The DOF is the zone that is in focus in front of and behind what you are focusing on. Just like in photography, a shallower depth of field is great for separating a subject from its background, guiding the viewer’s eye

• You can darken or brighten an image by changing the f-stop value, too, but remember that you’re then also changing the DOF

• The relation between shutter speed and f-stop is very straightforward. Closing the aperture one stop, from f-stop 2 to 2.8 for example, requires halving the shutter speed value for the same exposure: from a 60th of a second to a 30th of a second; doubling the f-stop value from 2 to 4 (two stops) requires quartering the shutter speed value, from a 60th of a second to a 15th of a second
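You can verify this reciprocity with the standard exposure value formula, EV = log2(N²/t), where N is the f-stop and t the exposure time in seconds (f/2.8 is a rounded lens marking, so one-stop pairs match only approximately):

```python
import math

def exposure_value(f_stop, shutter_seconds, iso=100):
    """Exposure value relative to ISO 100: equal EV means equal exposure."""
    return math.log2(f_stop ** 2 / shutter_seconds) - math.log2(iso / 100)

ev_a = exposure_value(2.0, 1 / 60)   # f/2 at a 60th of a second
ev_b = exposure_value(2.8, 1 / 30)   # one stop closed, shutter value halved
ev_c = exposure_value(4.0, 1 / 15)   # two stops closed, shutter value quartered
```

All three settings land on (nearly) the same EV, which is exactly the trade-off described above.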

Backplate rendering: When you are using a backplate plus matching HDR image to render your product in a context, simply read the backplate photo’s EXIF data and use the same values for your renderer’s camera settings, and only then match the backplate’s view. This also ensures that what you are rendering automatically has the right exposure, as your backplate scene is lit by the matching HDR image

Output: Set your camera to 16-bit or better 32-bit output, because light and dark areas will contain far more visual information, which you can tease out and adjust in postproduction but which is lost in basic 8-bit images. Your minimum output resolution should be HD format, which means 1920 x 1080 pixels. If your time budget and hardware are up for it, better render a 4K image with 3840 x 2160 pixels. Some renderers allow you to render progressively, meaning you can render your image to a higher quality later by resuming the render when there is a better time for it, typically overnight

Lighting – Lighting is the number one factor in rendering, just like in non-abstract painting or photography. Proper lighting direction brings out the three-dimensionality of your product and context, it models the shadows and provides essential reflections. As explained before, look at high quality product photography for inspiration, not renderings. It is always a good idea to begin with classic three-point lighting, or an equivalent HDR image, and then take it from there.

Emitters: Emitters with realistic light output and colour temperature provide the most flexible and most photographic way of lighting. Especially useful are HDR images of real photo studio lights that you can assign to simple planes of a matching size (link list at the end). For anything else, always use real world light emission values, and that means lumens (light output) and Kelvin (colour temperature). You can easily find that information in an LED (or flash) manufacturer’s specification sheet. Very useful is that in most renderers you can hide emitters from being seen by the camera, something you cannot do in a real-life setting, and you can even make lights exclusive to certain objects, so they don’t overpower the rest of your scenery

HDR images: High dynamic range images, HDRIs, are equirectangular projections of an environment, wrapped on a dimensionless sphere that surrounds your entire scene. HDRIs are 32-bit depth images that store actual luminance information, not only RGB information, about each pixel. Therefore, HDR images provide realistic lighting and environmental reflections. This lighting technique is called image-based lighting, IBL. To render your product as if it were placed in a certain indoor or outdoor context, you can buy backplate images with matching HDRs (link list at the end), commission them, or even take them yourself. Pay attention when lighting with photo studio lights and an HDR image simultaneously; the viewer might notice that shadows and reflections don’t match
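For the curious, the equirectangular wrap is a simple direction-to-angles mapping; a sketch assuming a y-up, negative-z-forward convention (axis conventions differ between renderers):

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit direction to (u, v) texture coordinates in [0, 1] on an
    equirectangular HDR image: u wraps around the horizon, v runs from
    zenith (0) to nadir (1). Assumes y-up, -z forward (conventions vary)."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v

# A horizontal direction along +x lands three quarters along the image width
u, v = direction_to_equirect_uv(1.0, 0.0, 0.0)
```

This is why an HDRI must be exactly twice as wide as it is tall: 360° of longitude map to the width, 180° of latitude to the height.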

Physical sky – For outdoor renderings or interiors with window frame or tree shadows, physical sky lighting will provide the most realistic solution. You can easily dial in various locations or times of day to adjust the mood of your image. Physical sky rendering tends to be slow; an HDRI can be a good alternative. To speed up interior renders, you can place invisible emitter planes at the windows to complement the incoming light from the physical sky. This technique reduces image noise

Tip: You can switch off lighting effects like dispersion and refractive caustics to speed up your renders when they do not contribute anything meaningful to your image

Postprocessing – Some things simply take too long, cause too much image noise, or cannot be rendered by your renderer, and in any case, you’ll need to put finishing touches to your images. For maximum image editing freedom, always render in 16-bit or even 32-bit depth; with the basic 8-bit depth, you have hardly any room for adjustments, particularly all details in the very bright and very dark areas of your image are lost, as explained before.

Channels: To change colours or adjust materials in Photoshop without having to do a new rendering, and to fine-tune shadows or reflections afterwards, you can simply render additional render channels or render passes, based on certain parts or materials, so all masking, including DOF, is done for you automatically

Noise: Denoising software like Topaz DeNoise AI, but also Photoshop’s Camera Raw Filter, allows you to render to a lower quality level in less time, accept the extra image noise, and then clean up your image afterwards

Barrel: Photoshop’s Filter > Lens Correction > Custom setting allows you to add an ever so slight amount of lens distortion to your image for an extra photographic feel

Vignette: Photoshop’s Filter > Lens Correction > Custom setting allows you to add a very gentle amount of vignetting to your image for an extra photographic feel

Bloom: Instead of rendering bloom caused by direct viewing of lights and strong specular reflections and refractions, you can simply add convincing bloom effects in Photoshop. Duplicate your rendering’s layer, slide the Levels black point marker almost all the way to the right so only the very bright areas remain visible. Use Gaussian Blur on that layer and then set its blending mode to Screen. Fine tune as you like by using more than one such bloom layer or reduce the opacity, if it is too overpowering or becoming a cliché
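The whole bloom recipe – threshold, blur, Screen blend – can be sketched with NumPy; the box blur below is a crude stand-in for Photoshop's Gaussian Blur, and all parameter values are arbitrary:

```python
import numpy as np

def box_blur(image, radius):
    """Crude separable box blur (wrap-around edges keep the sketch short)."""
    n = 2 * radius + 1
    out = np.zeros_like(image)
    for d in range(-radius, radius + 1):
        out += np.roll(image, d, axis=0)
    out /= n
    blurred = np.zeros_like(out)
    for d in range(-radius, radius + 1):
        blurred += np.roll(out, d, axis=1)
    return blurred / n

def add_bloom(image, threshold=0.8, radius=3, opacity=1.0):
    """Keep only the brightest areas, blur them, and Screen-blend the
    glow back over the image (floats in [0, 1], shape HxWx3)."""
    bright = np.clip((image - threshold) / (1.0 - threshold), 0.0, 1.0)
    glow = box_blur(bright, radius) * opacity
    return 1.0 - (1.0 - image) * (1.0 - glow)   # Screen blend mode

img = np.zeros((16, 16, 3))
img[8, 8] = 1.0                   # a single blown-out highlight
result = add_bloom(img)
```

The Screen blend can only brighten, never darken, which is what keeps the effect looking like light spill rather than a white overlay.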

Film grain: In Photoshop, convert your final image to a smart object so you can go back and fine-tune. Under Filter > Camera Raw Filter > Effects, you can add a subtle amount of authentic film (sensor) grain to your image for an extra photographic feel

Tip: Postprocess your renderings in Photoshop non-destructively with Adjustment Layers. First get the Levels, Curves and Colour Balance right, before you start with more involved image manipulations; you want to have a good base to work from

Useful links

ISO https://en.wikipedia.org

Focal length https://en.wikipedia.org

Shutter speed https://en.wikipedia.org

f-stop https://en.wikipedia.org

Colour conversion, for example RAL or Pantone to RGB

Refractive index and extinction coefficients for many materials

Material and surface imperfection textures

Photo studio lighting HDR images

HDR sky domes, HDR images and matching backplates

Render props, accessories, products, furnishings, interiors, etc.