This glossary provides quick explanations of terms you may not be fully familiar with.


What is Ambient Occlusion?

Ambient Occlusion (AO) is a shading method that is often used as a replacement for Global Illumination because it can be much faster to render and easier to control, at the expense of accuracy. Visually speaking, ambient occlusion adds extra soft self-shadowing, most apparent in cracks and crevices, and gives the image more realism. A common technique is to render the scene as normal and then render out a separate Ambient Occlusion pass. Both image passes are then loaded into a compositor, and the ambient occlusion pass is multiplied onto the original image, adjusting the strength to fit the needs of the shot.
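
The compositing step itself is just a multiply. Below is a minimal sketch in Python with NumPy (generic compositing math, not a Cinema 4D feature); the file names and the strength value are hypothetical placeholders.

```python
import numpy as np
import imageio.v3 as iio  # any image library that returns arrays will do

# Hypothetical file names for the beauty pass and the ambient occlusion pass.
beauty = iio.imread("beauty_pass.png").astype(np.float32) / 255.0
ao     = iio.imread("ao_pass.png").astype(np.float32) / 255.0
if ao.ndim == 2:                      # a grayscale AO pass needs a channel axis
    ao = ao[..., np.newaxis]

strength = 0.8                        # 0 = no AO, 1 = full-strength AO

# Fade the AO pass toward white by (1 - strength), then multiply it onto the beauty pass.
ao_blended = ao * strength + (1.0 - strength)
composite  = np.clip(beauty * ao_blended, 0.0, 1.0)

iio.imwrite("composite.png", (composite * 255).astype(np.uint8))
```

This mirrors what a Multiply blend mode does in any compositor; lowering the strength fades the occlusion shadows back out.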


What is Animation?

Animation is the process of imparting motion to something that does not normally have motion of its own accord. There are all sorts of animation in the world around us, from the animation that you can create with this program, to that which you might see in store displays with little motors driving effects, and everything in between. In the context of this program, animation is the process of conveying motion directly, through the use of keyframes, or indirectly, through the use of generators such as those found in MoGraph or Dynamics.
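
At its most basic, keyframing means storing values at specific frames and letting the software interpolate the frames in between. The sketch below is a generic Python illustration of linear keyframe interpolation, not Cinema 4D's animation system; the keyframe data and the evaluate helper are made up for the example.

```python
# Keyframes as (frame, value) pairs, e.g. a cube's X position over time.
keyframes = [(0, 0.0), (30, 200.0), (60, 50.0)]

def evaluate(frame, keys):
    """Return the linearly interpolated value at the given frame."""
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)   # 0..1 between the two surrounding keys
            return v0 + t * (v1 - v0)

print(evaluate(15, keyframes))  # halfway between frames 0 and 30 -> 100.0
```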


What is Anti-aliasing?

Anti-aliasing is a filtering process for digital media files intended to “smooth out” the harsh “digital” look or sound that digital media can have.  The reason that digital media (images and sounds) need such filtering is that digital media is not smoothly created; instead, discrete time and picture samples are taken, which adds an unnatural sharpness or pixelation to recorded images or sounds.  Another way to look at this is to imagine a smooth flowing curve, such as a sine wave.  Then, imagine a bar graph with each column reaching up to the curve and touching it.  Notice the gaps between the curve and the graph columns?  That is aliasing at work.

A way to reduce these sampling errors is to approximate the values as they transition between the actual digital samples.  In an image, this amounts to “blending” the colors and values in between the original pixels.  In practice, many neighboring pixels are sampled and weighted to get the best possible result.  The more you understand anti-aliasing, the better you can control how the filtering works to achieve the look you are going for.

There are several different ways to anti-alias rendered imagery, and each has its strengths and weaknesses.  Typically, the biggest weakness is the time it takes to generate the anti-aliasing in the first place, as that process usually occurs at render time.  Again, a quick study of the different methods will further assist you in getting the look you require.
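
One of the simplest methods is supersampling: render more samples than there are final pixels, then average blocks of samples down to one pixel each. The NumPy sketch below illustrates the idea in isolation; it is not any particular renderer's implementation, and the 4x factor is an arbitrary example value.

```python
import numpy as np

def supersample_downscale(hires, factor=4):
    """Average factor x factor blocks of a high-resolution image into single output pixels."""
    h, w, c = hires.shape
    h, w = h // factor * factor, w // factor * factor          # trim to a clean multiple
    blocks = hires[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))                            # each pixel = mean of its block

# A hard black/white edge rendered at 4x resolution...
hires = np.zeros((64, 64, 3), dtype=np.float32)
hires[:, 30:] = 1.0
# ...averages down to 16x16 with soft gray pixels where the edge cuts through a block.
antialiased = supersample_downscale(hires, factor=4)
```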


What is an Array?

An Array is a collection of data, typically arranged in a line, a rectangular grid, or a cubic field. If you work in an office, you may have a stacked mail box for the employees in your company; this is a good real-world example of an array. The typical spreadsheet table is an array. In computer programming, arrays are very common, and a great way to hold and process information. In Cinema 4D, arrays are available to artists, most visibly in the MoGraph Cloner, but in other operations as well. You will recognize an array when you set X by Y (and/or Z) values for your clones.
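
The sketch below shows the idea in plain Python with NumPy: a 3 x 4 x 2 array of positions, much like the X, Y, and Z counts you type into a Cloner in Grid Array mode. The spacing value and the data layout are illustrative, not Cinema 4D's internal representation.

```python
import numpy as np

counts  = (3, 4, 2)   # clones along X, Y and Z, like a Cloner's grid counts
spacing = 100.0       # distance between neighboring clones, in scene units

# Build every (x, y, z) index combination, then scale by the spacing to get positions.
ix, iy, iz = np.meshgrid(*[np.arange(n) for n in counts], indexing="ij")
positions = np.stack([ix, iy, iz], axis=-1) * spacing

print(positions.shape)     # (3, 4, 2, 3): one XYZ triple for every cell of the 3D array
print(positions[2, 1, 0])  # the clone at grid index (2, 1, 0) sits at [200. 100. 0.]
```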


What is Box Modeling?

Box modeling, often used for hard-surface modeling, is a modeling technique that starts with a cube primitive. Typically, you add several segments to each side of the cube, and then start pushing and pulling a rough shape using the standard tools and brushes.  You extrude, add edges, subtract edges, and so on in order to create your desired object.  Many modelers prefer box modeling for characters and other object types, though there are many methods you can try.

Box modeling is fairly easy, though it might take a while to get to the shape you wish. For characters, box modeling is often combined with HyperNURBS to smooth out the mesh.  One advantage of using HyperNURBS for characters is that you can build and rig a lower-resolution base mesh, and let HyperNURBS add the additional detail you wish at render time.
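
If you want to script that starting point instead of building it by hand, a Script Manager snippet along these lines should do it. This is a rough sketch assuming the standard c4d Python module; the object and parameter IDs (Ocube, PRIM_CUBE_SUBX/SUBY/SUBZ, and Osds for the HyperNURBS/Subdivision Surface object) should be double-checked against your version's SDK documentation.

```python
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()

    # A cube primitive with extra segments on every side -- the usual box-modeling start.
    cube = c4d.BaseObject(c4d.Ocube)
    cube[c4d.PRIM_CUBE_SUBX] = 4
    cube[c4d.PRIM_CUBE_SUBY] = 4
    cube[c4d.PRIM_CUBE_SUBZ] = 4

    # A HyperNURBS / Subdivision Surface object to smooth the low-resolution cage.
    smoother = c4d.BaseObject(c4d.Osds)

    doc.InsertObject(smoother)
    cube.InsertUnder(smoother)
    c4d.EventAdd()

if __name__ == "__main__":
    main()
```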


What is Bump Mapping?

Bump Mapping is a rendering effect that makes an otherwise smooth surface appear to have actual surface texture. Typically, grayscale maps are painted or generated (often from photos), and then applied to the object with the texture toolset. You can also create bumps with procedural shaders. At render time, the effect becomes apparent. You can also preview the effect in the viewport when you choose the Enhanced OpenGL option.
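
Under the hood, a bump map never moves any geometry; the renderer just tilts the shading normal based on how quickly the grayscale values change. Here is a minimal NumPy sketch of that idea (a generic illustration, not Cinema 4D's shader code; the strength factor is a made-up control):

```python
import numpy as np

def bump_normals(height, strength=1.0):
    """Derive per-pixel shading normals from a 2D grayscale height map."""
    # The gradients say how steeply the "height" changes along y and x.
    dy, dx = np.gradient(height.astype(np.float32))
    # Tilt a flat (0, 0, 1) normal by the gradient, then renormalize to unit length.
    normals = np.dstack([-dx * strength, -dy * strength, np.ones_like(dx)])
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

# Example: a soft circular bump in the middle of a 64x64 map.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
height = np.exp(-(x**2 + y**2) * 8)
normals = bump_normals(height, strength=2.0)  # these normals feed into the lighting math
```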


What is Chromatic Aberration?

Chromatic Aberration is an optical artifact that is caused when light passes through lenses onto a recording substrate, such as photographic film or digital photo-sensors. Chromatic aberration is visible at object boundaries, and presents as a subtle shift of colors where none exist in the actual scene. It results when differently colored components of light are refracted by different amounts (depending on their wavelength) within the internal elements of the lens (a photographic lens is actually several lens elements, concave and convex, arranged in a cylindrical housing). For the most part, photographers and cinematographers work hard to mitigate this artifact, though it can be used as a design element.

In CG work, however, it is the duplication of these real-world artifacts that programmers strive for when creating digital imagery, as the presence of such artifacts can actually enhance the believability of the artificial image. Note, however, that calculating chromatic aberration at render time can be expensive.
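
A cheap way to fake the look in post, rather than simulating it in the renderer, is to offset the red and blue channels slightly relative to the green channel. The NumPy sketch below shows this simple channel-shift variant; the pixel shift amount is an arbitrary illustrative value.

```python
import numpy as np

def chromatic_aberration(image, shift=2):
    """Fake lateral chromatic aberration by sliding the R and B channels apart horizontally."""
    out = image.copy()
    out[..., 0] = np.roll(image[..., 0],  shift, axis=1)  # red channel nudged right
    out[..., 2] = np.roll(image[..., 2], -shift, axis=1)  # blue channel nudged left
    return out

# Color fringes appear along the vertical edge of this black/white test image.
test = np.zeros((32, 32, 3), dtype=np.float32)
test[:, 16:] = 1.0
fringed = chromatic_aberration(test, shift=2)
```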


What are Coordinates for?

Coordinates are numeric combinations used to describe the position of an object or element within space. Coordinates can be used to map all sorts of different spatial descriptions. You are likely most familiar with 2D and 3D space described with X, Y, and Z coordinates. When addressing space in this manner, you are using Cartesian coordinates, which can describe both 2D and 3D space. A variation of the classic Cartesian coordinates is the UV coordinate system, described with the U and V labels instead of X and Y. Other spatial mapping methods that see use from time to time include polar coordinates, spherical coordinates, and coordinates that can describe other dimensions.
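
To make the distinction concrete, here is a small Python conversion between Cartesian and spherical coordinates. This is generic textbook math (radius, polar angle theta measured from +Z, azimuth phi measured from +X), not any particular application's convention.

```python
import math

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x*x + y*y + z*z)            # distance from the origin
    theta = math.acos(z / r) if r else 0.0    # polar angle down from the +Z axis
    phi = math.atan2(y, x)                    # azimuth around Z, measured from +X
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# The same point described two different ways:
print(cartesian_to_spherical(1.0, 1.0, 1.0))       # ~ (1.732, 0.955, 0.785)
print(spherical_to_cartesian(1.732, 0.955, 0.785)) # ~ (1.0, 1.0, 1.0)
```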


What is Direct Illumination?

Direct Illumination is the lighting/shading method that was originally used by computer graphics programs.  When rendering, direct illumination considers only light from the original source—no bounce light or light-emissive polygons (or sources other than lights) will add to the lighting within the rendered scene. This method is not obsolete, however, as it forms the first pass of global illumination (which then adds bounce light and greatly enhances realism).
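
The classic direct-illumination calculation is Lambertian diffuse shading: the brightness of a surface depends only on the angle between its normal and the direction to the light, with no contribution from anything else in the scene. A minimal Python sketch, using made-up light and surface values:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, to_light, light_color, surface_color, intensity=1.0):
    """Direct diffuse lighting: brightness falls off with the cosine of the incidence angle."""
    n, l = normalize(normal), normalize(to_light)
    cos_angle = max(0.0, sum(a * b for a, b in zip(n, l)))   # N . L, clamped at zero
    return tuple(s * c * cos_angle * intensity for s, c in zip(surface_color, light_color))

# A light 45 degrees above a horizontal surface lights it at about 71% strength.
print(lambert((0, 1, 0), (0, 1, 1), (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))
# -> approximately (0.57, 0.14, 0.14)
```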


What is Dithering?

Dithering is an imaging method used to simulate additional levels of shading where none actually exist.  In the dark ages of computer graphics, when everything was either black or white, dithering was implemented to simulate shading.  Several different forms of dithering were developed, and many are still available today.  The most popular has been error-diffusion dithering, as it tends to offer the most natural-looking results.
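
One classic form is Floyd–Steinberg error diffusion, which pushes each pixel's quantization error onto its not-yet-processed neighbors. A compact NumPy sketch that reduces a grayscale image to pure black and white:

```python
import numpy as np

def floyd_steinberg(gray):
    """Dither a float grayscale image (values 0..1) down to pure black and white."""
    img = gray.astype(np.float32).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # quantize this pixel to black or white
            img[y, x] = new
            err = old - new                    # spread the rounding error forward
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return img

# A smooth left-to-right gradient becomes a pattern of black and white dots
# whose local density matches the original shading.
gradient = np.tile(np.linspace(0.0, 1.0, 128), (64, 1))
dithered = floyd_steinberg(gradient)
```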


What is Extrude?

Extrude, or extrusion, is a process that pulls a material through a patterned shape in order to create a 3D object.  Much like you can do with play-dough, the Extrude NURBS tool takes a shape you create and sweeps it away from the source shape to create a 3D object, just as in extrusion in the physical world.

One of the benefits of Cinema 4D is that you have much more control over your object than you would with a real-world extrusion, and for much less effort on your part.  You can control the extrusion direction and the cap characteristics, as well as tweak the polygon mesh to meet your modeling needs.
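
Geometrically, an extrusion simply duplicates a 2D profile at an offset and connects the two copies with side walls. The generic Python sketch below shows that vertex bookkeeping; it is not the Extrude NURBS implementation, and the square profile and depth are example values.

```python
# A closed 2D profile (a square spline) described as (x, y) points.
profile = [(0, 0), (100, 0), (100, 100), (0, 100)]
depth = 50.0   # how far to sweep the profile along Z

# Front cap vertices at z = 0, back cap vertices at z = depth.
front = [(x, y, 0.0)   for x, y in profile]
back  = [(x, y, depth) for x, y in profile]
vertices = front + back

# Each edge of the profile becomes one quad joining the front copy to the back copy.
n = len(profile)
side_quads = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]

print(len(vertices), "vertices,", len(side_quads), "side quads")   # 8 vertices, 4 side quads
```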


What is a Fresnel?

The Fresnel effect is a natural phenomenon that describes the way we perceive light and reflections as they bounce off surfaces and into our eyes.  It is sometimes called the “storefront effect,” in that viewing storefront windows head on will show few reflections, whereas glancing views will show much stronger reflections.  The Fresnel shader addresses this effect directly, and offers controls to tune the effect to your preference.  Prior to the introduction of Fresnel shaders in computer graphics, reflections looked very stylistic; interesting, but not real.

Fresnel shaders are found in many different material channels, not just reflections.  In some cases, other shaders may directly incorporate the Fresnel effect, so that you won’t have to add the shader in yourself.
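
In shader code, the Fresnel effect is often approximated with Schlick's formula, which blends from a base reflectivity when looking straight at a surface up to full reflectivity at grazing angles. A small generic Python sketch; the 0.04 base reflectance is a common assumption for glass-like materials, not a Cinema 4D setting.

```python
def schlick_fresnel(cos_theta, f0=0.04):
    """Approximate reflectance for a view angle whose cosine is cos_theta.

    f0 is the reflectance when looking straight at the surface (about 0.04 for glass).
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0))  # head-on view -> 0.04, like a storefront seen straight on
print(schlick_fresnel(0.1))  # grazing view -> ~0.61, much stronger reflections
```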


What is Global Illumination?

Global Illumination is a method of lighting/rendering that will greatly improve the realism of virtually any scene.  Global illumination does this by calculating bounce light, and light emanating from sources other than traditional scene lights.  It takes the initial lighting solution from the direct illumination pass, determines where light falls, and then works out how that light reflects, or bounces, off the surfaces that receive an initial light ray.  In addition to adding bounce lighting, global illumination can also modify the color of the bounced light based upon the color of the surface from which the light ray has bounced.

There are numerous controls to fine-tune the global illumination solution, as it can be computationally intensive and slow.  In addition, animation can cause the lighting solution to flicker frame-to-frame when certain settings are used.  Therefore, there are special settings which help to minimize this flicker, or remove it altogether.
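
The core of the bounce step can be sketched in a few lines: light arriving at a surface is tinted by that surface's color before it travels on, which is why a white wall next to a red floor picks up a pink cast. The Python illustration below is deliberately simplified (a single bounce, no geometry or falloff), not a real renderer.

```python
def bounce(incoming_light, surface_color):
    """Light reflected diffusely off a surface is tinted by the surface's own color."""
    return tuple(l * c for l, c in zip(incoming_light, surface_color))

direct_light = (1.0, 1.0, 1.0)   # white light hitting a red floor
red_floor    = (0.9, 0.2, 0.2)

first_bounce = bounce(direct_light, red_floor)        # reddish light leaving the floor
white_wall   = (0.9, 0.9, 0.9)
indirect_on_wall = bounce(first_bounce, white_wall)   # the wall receives a red tint

print(indirect_on_wall)  # ~ (0.81, 0.18, 0.18): color bleeding from the floor onto the wall
```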


What is HDRI?

HDRI is an acronym for “High Dynamic Range Image,” where dynamic range refers to the amount of illumination contained within the final image dataset.  Typically used in photography, but in CG imagery as well, HDRI imagery contains enough illumination information to determine the final contrast, gamma, color and so on well after the original image is created.  In film and digital photography, an HDRI is typically created with “bracketed exposures,” meaning separate exposures taken several f-stops apart.  Then, in a program that supports deep color or RAW format images, you can assemble the HDRI image from the separate exposures.  Later, you can choose what the white and black points of the image should be, along with several other factors.  As long as the image is maintained within a format that offers HDRI benefits, you can continue to adjust the image as your purpose suits, all without a loss of data or a degradation of the quality of the image.
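
The merging step can be illustrated very simply: each bracketed exposure is divided by its relative exposure to bring it onto a common radiance scale, and the estimates are combined with weights that favor well-exposed pixels. The sketch below is a heavily simplified, generic version of this idea (real HDR assembly also accounts for the camera's response curve); the exposure times and the hat-shaped weighting are illustrative assumptions.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed LDR exposures (float arrays, values 0..1) into one HDR radiance map."""
    hdr = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0     # trust mid-tones, distrust clipped pixels
        hdr += w * (img / t)                  # rescale each exposure to a common radiance
        weight_sum += w
    return hdr / np.maximum(weight_sum, 1e-6)

# Synthetic test: one "true" radiance map photographed at three relative exposure times.
true_radiance = np.random.rand(8, 8, 3) * 2.0
times = [0.25, 1.0, 4.0]
brackets = [np.clip(true_radiance * t, 0.0, 1.0) for t in times]
recovered = merge_exposures(brackets, times)  # approximates true_radiance where not clipped
```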

