This glossary provides quick explanations and answers to common questions.
What is Ambient Occlusion?
Ambient Occlusion (AO) is a shading method often used as a replacement for Global Illumination because it can be many times faster to render and easier to control, at the expense of accuracy. Visually speaking, ambient occlusion adds soft self-shadowing, most apparent in cracks and corners, which gives the image more realism. A common technique is to render the scene normally and then render a separate ambient occlusion pass. Both image passes are then loaded into Photoshop, and the ambient occlusion pass is multiplied onto the original image, with the strength adjusted to fit the needs of the image.
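The multiply-and-adjust-strength compositing step can be sketched in a few lines of Python. The function and values here are illustrative only, not any particular application's API; `strength` blends the AO pass toward white before the multiply, which is what lowering a multiply layer's opacity over a white background does in Photoshop:

```python
def composite_ao(beauty, ao, strength=1.0):
    """Multiply an ambient-occlusion pass over a beauty pass.

    beauty, ao: lists of pixel values in 0.0-1.0. strength blends the
    AO values toward 1.0 (white, i.e. no occlusion) to weaken the effect.
    """
    out = []
    for b, a in zip(beauty, ao):
        # Lerp the AO value toward 1.0 (unoccluded) by (1 - strength)
        a_adj = 1.0 + (a - 1.0) * strength
        out.append(b * a_adj)
    return out

beauty = [0.8, 0.5, 0.9]   # rendered colors (one channel)
ao     = [1.0, 0.4, 0.7]   # 1.0 = open area, lower = occluded crevice
print(composite_ao(beauty, ao, strength=0.5))
```

At `strength=0.0` the beauty pass is returned unchanged; at `1.0` you get a plain multiply.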
What is Animation?
Animation is the process of imparting motion to something that does not normally have motion on its own. There are all sorts of animation in the world around us, from the animation that you can create with Cinema 4D, to that which you might see in store displays with little motors driving effects, and everything in between. In the context of Cinema 4D, animation is the process of conveying motion directly, through the use of keyframes, or indirectly, through the use of generators such as those found in MoGraph or Dynamics.
What is Anti-aliasing?
Anti-aliasing is a filtering process for digital media files intended to “smooth out” the harsh look or sound that digital media can have. The reason that digital media (images and sounds) need such filtering is that digital media is not smoothly created; instead, discrete time and picture samples are taken, which adds an unnatural sharpness or pixelation to recorded images or sounds. Another way to look at this is to imagine a smooth flowing curve, such as a sine wave. Then, imagine a bar graph with each column reaching up to the curve and touching it. Notice the gaps between the curve and the graph columns? That is aliasing at work.
A way to reduce sampling errors is to approximate the values as they transition between the actual digital samples. In an image, this amounts to “blending” the colors and values in between the original pixels. In practice, many neighboring pixels are sampled and weighted to get the best possible result. The more you understand anti-aliasing, the better you can control how the filtering works to achieve the look you are going for.
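The idea of sampling and weighting neighbors can be shown with a minimal one-dimensional supersampler. This is a conceptual sketch, not how any particular renderer implements anti-aliasing; `shade` stands in for whatever the renderer evaluates at a point:

```python
def supersample(shade, width, samples=4):
    """Render one row of pixels with N-times supersampling.

    shade(x) returns a value 0.0-1.0 at continuous coordinate x;
    each pixel averages `samples` evenly spaced sub-samples.
    """
    row = []
    for px in range(width):
        total = 0.0
        for s in range(samples):
            # evenly spaced sample positions inside the pixel
            x = px + (s + 0.5) / samples
            total += shade(x)
        row.append(total / samples)
    return row

# A hard edge at x = 2.25: jagged with 1 sample, smoothed with 4
edge = lambda x: 1.0 if x >= 2.25 else 0.0
print(supersample(edge, 4, samples=1))  # hard 0/1 jump
print(supersample(edge, 4, samples=4))  # edge pixel becomes a blend
```

The extra samples are exactly where the render-time cost mentioned above comes from: four samples per pixel means roughly four times the shading work.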
There are several different ways to anti-alias rendered imagery, and each has its strengths and weaknesses. Typically, the biggest weakness is the time it takes to generate the anti-aliasing in the first place, as that process typically occurs at render time. Again, a quick study of the different methods will help you achieve the look you require without wasting too much time.
What is an Array?
An Array is a collection of data, typically arranged in a line or in a rectangular or cubic field. If you work in an office, you may have a bank of stacked mailboxes for the employees in your company. This is a good real-world example of an array. The typical spreadsheet table is an array. In computer programming, arrays are very common, and a great way to hold and process information. In Cinema 4D, arrays are available to artists, most typically in the MoGraph Cloner, but in other operations as well. You will recognize an array when you set X by Y (and/or Z) values for your clones.
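A grid of clone positions like the Cloner's can be sketched as a small Python function. The function name and spacing values are illustrative, not the Cloner's actual interface:

```python
def clone_grid(counts, spacing):
    """Generate clone positions for an X-by-Y-by-Z array, like a grid cloner.

    counts: (nx, ny, nz) clone counts per axis.
    spacing: (sx, sy, sz) distance between clones per axis.
    """
    nx, ny, nz = counts
    sx, sy, sz = spacing
    return [(x * sx, y * sy, z * sz)
            for x in range(nx) for y in range(ny) for z in range(nz)]

positions = clone_grid((3, 2, 1), (100.0, 50.0, 0.0))
print(len(positions))  # 3 * 2 * 1 = 6 clones
```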
What is Box Modeling?
Box Modeling (also known as hard-surface modeling) is a modeling technique that starts with a cube primitive. Typically, you add several segments to each side of the cube, and then start pushing and pulling a rough shape using the standard tools and brushes. You extrude, add edges, subtract edges, and so on in order to create your desired object. Many modelers prefer box modeling for characters and other object types, though there are many methods you can try.
Box modeling is fairly easy, though it might take a while to get to the shape you wish. For characters, box modeling is combined with HyperNURBS to smooth out the mesh. One advantage of using HyperNURBS for characters is that you can build and rig a lower resolution base mesh, and let HyperNURBS add the additional detail that you wish at render time.
What is Bump Mapping?
Bump Mapping is a rendering effect that makes an otherwise smooth surface appear to have an actual texture. Typically, grayscale maps are painted or generated (often from photos), and then applied to the object with the texture toolset. You can also create bumps with procedural shaders. At render time, the effect becomes apparent. You can see the effect in the viewport when you choose the Advanced OpenGL option.
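The core trick is that the renderer shades from the slopes of the grayscale map rather than from real geometry. Here is a deliberately simplified one-dimensional sketch of that idea (real bump mapping perturbs surface normals in 3D; nothing here is a Cinema 4D API):

```python
def bump_shade(height, light=0.7):
    """Fake surface relief on a 1D strip by shading from height slopes.

    height: list of bump-map values 0.0-1.0. Each pixel's base brightness
    (`light`) is tilted by the local slope, so a flat surface appears
    bumpy without any real geometry.
    """
    shaded = []
    for i in range(len(height)):
        left = height[max(i - 1, 0)]
        right = height[min(i + 1, len(height) - 1)]
        slope = (right - left) * 0.5          # finite-difference gradient
        shaded.append(min(max(light + slope, 0.0), 1.0))
    return shaded

# A white bar in the bump map brightens its leading edge and darkens
# its trailing edge, reading as a raised ridge
print(bump_shade([0.0, 0.0, 1.0, 1.0, 0.0, 0.0]))
```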
What is Chromatic Aberration?
Chromatic Aberration is an optical artifact that is caused when light passes through lenses onto a recording substrate, such as photographic film or digital photo-sensors. Chromatic aberration is visible at object boundaries, and presents as a subtle shift of colors where none actually exist in reality. It results when differently colored components of light (depending on the wavelength) are refracted to varying degrees within the different internal elements of the lens (the lens is actually several lenses, either concave or convex, arranged in a cylindrical housing). For the most part, photographers and cinematographers work hard to mitigate this artifact, though it can be used as a design element.
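A very crude way to fake the artifact in post is to shift the red and blue channels in opposite directions, which produces the familiar color fringing at edges. This sketch is illustrative only; real lens aberration varies with distance from the image center:

```python
def chromatic_aberration(row, shift=1):
    """Offset the red and blue channels of a row of (r, g, b) pixels
    in opposite directions to fake lateral chromatic aberration."""
    n = len(row)
    out = []
    for i in range(n):
        r = row[min(max(i + shift, 0), n - 1)][0]   # red pulled from the right
        g = row[i][1]                                # green stays put
        b = row[min(max(i - shift, 0), n - 1)][2]   # blue pulled from the left
        out.append((r, g, b))
    return out

# A black-to-white edge grows colored fringes on either side
row = [(0, 0, 0), (0, 0, 0), (1, 1, 1), (1, 1, 1)]
print(chromatic_aberration(row))
```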
In CG work, however, it is the duplication of these real-world artifacts that programmers strive for in creating digital imagery, as the presence of these artifacts can actually enhance the believability of the artificial image. Note, however, that the methods for calculating chromatic aberration at render time are expensive in terms of render time.
What are Coordinates for?
Coordinates are numeric combinations that are used to describe an object's or element's position within space. Coordinates can be used to map all sorts of different spatial descriptions. You are likely most familiar with 2D and 3D space described with X, Y, and Z coordinates. When addressing space in this manner, you are using Cartesian coordinates, which can describe both 2D and 3D space. A variation of the classic Cartesian coordinates are UV coordinates, described with the U and V labels instead of X and Y. Other spatial mapping methods that see use from time to time include polar coordinates, spherical coordinates, and systems that describe other dimensions.
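To make the relationship between systems concrete, here is a standard Cartesian-to-polar conversion (these are textbook formulas, not a Cinema 4D function):

```python
import math

def cartesian_to_polar(x, y):
    """Convert 2D Cartesian coordinates to polar (radius, angle in radians)."""
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_cartesian(r, theta):
    """Convert polar coordinates back to Cartesian (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

# The same point, described two ways
r, theta = cartesian_to_polar(3.0, 4.0)
print(r)  # distance from the origin: 5.0
print(polar_to_cartesian(r, theta))  # back to (3.0, 4.0)
```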
What is Direct Illumination?
Direct Illumination is the lighting/shading method that was originally used by computer graphics programs. When rendering, direct illumination considers only light from the original source—no bounce light or light-emissive polygons (or sources other than lights) will add to the lighting within the rendered scene. This method is not obsolete, however, as it forms the first step of global illumination (which adds bounce light and greatly enhances realism).
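The classic direct-illumination calculation for a diffuse surface is Lambert's cosine law: brightness depends only on the angle between the surface normal and the direction to the light. A minimal sketch (vectors are plain normalized 3-tuples here, not any renderer's types):

```python
def lambert(normal, light_dir, intensity=1.0):
    """Direct diffuse illumination of a surface point (Lambert's cosine law).

    Only the light source itself contributes; no bounce light is
    considered. normal and light_dir are normalized 3-tuples.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return intensity * max(dot, 0.0)   # facing away from the light = unlit

print(lambert((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))   # lit head-on: 1.0
print(lambert((0.0, 1.0, 0.0), (0.0, -1.0, 0.0)))  # facing away: 0.0
```

Everything the `max(dot, 0.0)` clamps to zero stays pitch black under direct illumination; global illumination is what fills those areas with bounce light.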
What is Dithering?
Dithering is an imaging method used to add additional levels of shading where none actually exist. In the dark ages of computer graphics, when everything was either black or white, dithering was implemented to simulate shading. Several different forms of dithering were developed, and many are still available today. Among the most popular has been error-diffusion dithering (such as Floyd–Steinberg), as it tends to offer the most natural results.
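The core idea of error diffusion can be shown in one dimension: each pixel snaps to the nearest available shade, and the rounding error is pushed onto the next pixel so the average brightness survives. This is a simplified one-dimensional cousin of Floyd–Steinberg, for illustration only:

```python
def error_diffusion_1d(row, levels=2):
    """Quantize a row of 0.0-1.0 values to `levels` shades,
    carrying each pixel's quantization error onto the next pixel."""
    out = []
    error = 0.0
    step = levels - 1
    for v in row:
        v = v + error
        q = round(v * step) / step       # snap to the nearest shade
        q = min(max(q, 0.0), 1.0)
        error = v - q                    # carry the error forward
        out.append(q)
    return out

# A flat 25% gray becomes one black-dot-free pixel in four, on average,
# even though only pure black and white are available
print(error_diffusion_1d([0.25] * 8))
```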
What is Extrude?
Extrude, or extrusion, is a process that pulls a material through a patterned shape in order to create a 3D object. Much like you can do with modeling clay, the Extrude NURBS tool takes a shape you create and sweeps it away from the source shape to create a 3D object, just as in extrusion in the physical world.
One of the benefits of Cinema 4D is that you can have much more control over your object than you could with a real-world extrusion, and for much less effort on your part. You can control the extrusion direction and cap characteristics, as well as tweak the polygon mesh to meet your modeling needs.
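Geometrically, an extrusion duplicates the profile at a distance and bridges the two copies with side faces. A sketch of that construction (plain tuples, not Cinema 4D's object model):

```python
def extrude(profile, depth):
    """Extrude a closed 2D profile (list of (x, y) points) along Z.

    Returns the front cap vertices, the back cap vertices, and the
    side quads bridging them, which is the same idea as sweeping a
    spline away from its plane.
    """
    front = [(x, y, 0.0) for x, y in profile]
    back = [(x, y, depth) for x, y in profile]
    n = len(profile)
    sides = [(front[i], front[(i + 1) % n],
              back[(i + 1) % n], back[i]) for i in range(n)]
    return front, back, sides

front, back, sides = extrude([(0, 0), (1, 0), (1, 1), (0, 1)], depth=2.0)
print(len(sides))  # a square profile yields 4 side quads
```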
What is a Fresnel?
The Fresnel effect is a natural phenomenon that describes the way we perceive light and reflections as they bounce off surfaces and into our eyes. It is sometimes called the “storefront effect”: viewing storefront windows head-on will show few reflections, whereas glancing views will show much stronger reflections. The Fresnel shader addresses this effect directly, and offers controls to tune the effect to your preference. Prior to the introduction of Fresnel shaders in computer graphics, reflections looked very stylistic; interesting, but not real.
Fresnel shaders are found in many different material channels, not just reflections. In some cases, other shaders may directly incorporate the Fresnel effect, so that you won’t have to add the shader in yourself.
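A common way renderers approximate the effect is Schlick's approximation: reflectance is weak when the surface faces you and climbs toward full strength at grazing angles. A sketch (the formula is standard; the function name and defaults are just for illustration):

```python
def schlick_fresnel(cos_theta, ior=1.5):
    """Schlick's approximation of Fresnel reflectance for a dielectric.

    cos_theta: cosine of the angle between the view direction and the
    surface normal (1.0 = head-on, 0.0 = grazing).
    """
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2   # reflectance at normal incidence
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# The storefront effect in numbers: glass reflects ~4% head-on,
# but essentially everything at a grazing angle
print(schlick_fresnel(1.0))  # ~0.04
print(schlick_fresnel(0.0))  # ~1.0
```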
What is Global Illumination?
Global Illumination is a method of lighting/rendering that will greatly improve the realism of virtually any scene. Global illumination does this by calculating bounce light, and light emanating from sources other than traditional scene lights. Global illumination takes the initial lighting solution from the direct illumination pass, and then determines where light falls, and how light would then reflect off of, or bounce from, the surfaces which receive an initial light ray. In addition to adding bounce lighting, global illumination can also modify the color of the bounced light based upon the color of the surface from which the light ray has bounced. There are numerous controls to fine-tune the global illumination solution, as it can be computationally intensive and slow. In addition, animation can cause the lighting solution to flicker frame-to-frame when certain settings are used. Therefore, there are special settings which help to minimize this flicker, or remove it altogether.
What is HDRI?
HDRI is an acronym for “High Dynamic Range Image,” where dynamic range refers to the amount of illumination contained within the final image dataset. Typically used in photography, but in CG imagery as well, HDRI imagery contains enough illumination information to determine the final contrast, gamma, color and so on well after the original image is created. In film and digital photography, an HDRI is typically created with “bracketed exposures,” meaning a series of exposures taken at several f-stops. Then, in a program that supports deep color or RAW format images, you can assemble the HDRI image from the separate exposures. Later, you can choose what the white and black point of the image should be, along with several other factors. As long as the image is maintained within a format that offers HDRI benefits, you can continue to adjust the image as your purpose suits, all without a loss of data or a degradation of image quality.
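The merging step can be sketched for a single pixel: each bracketed exposure is divided by its exposure factor so all samples land on the same radiance scale, and the well-exposed samples are averaged. This is a simplification of real HDR assembly (which also weights by sensor response), with made-up threshold values:

```python
def merge_exposures(exposures, stops):
    """Recover a rough HDR radiance value for one pixel from bracketed
    LDR exposures.

    exposures: clamped 0.0-1.0 pixel values; stops: f-stop offsets
    (+1 stop = 2x the light). Clipped samples are skipped, and the
    remaining samples are normalized and averaged.
    """
    total, count = 0.0, 0
    for value, stop in zip(exposures, stops):
        if 0.01 < value < 0.99:          # skip under/over-exposed samples
            total += value / (2.0 ** stop)
            count += 1
    return total / count if count else 0.0

# A bright pixel: blown out at +1 stop, consistent at 0 and -1
print(merge_exposures([1.0, 0.8, 0.4], [1, 0, -1]))
```

Note how the 0-stop and -1-stop samples agree on the same radiance once normalized; that agreement is what makes the merged value trustworthy.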
What is Index of Refraction / IOR?
The Index of Refraction is a value used to specify the way that light is bent as it passes through a material. You are likely most familiar with the ‘bent straw’ effect when viewing a clear glass of water with a straw inside. That happens because the light traveling through the glass of water behaves differently than when it travels through air. By convention, an IOR of 1.0 represents light traveling in a straight line without bending, and thus air has an IOR of 1.0. Water has an IOR of 1.33. In the glass example, do you know how many different IORs would need to be considered in order to accurately render the scene so that it matches reality?
Three: 1.00 for the air, 1.33 for the water, and 1.52 for the glass itself. That will get you a nice result. Why just nice? Well, the straw would also have an IOR. All materials in our universe have an IOR, even those that we may perceive as opaque. This is why you can use Fresnel shaders and SSS shaders on anything.
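The bending itself is governed by Snell's law, which relates the two IORs to the incoming and outgoing ray angles. A small sketch using the textbook formula (not a renderer API):

```python
import math

def refract_angle(theta_in_deg, ior_from=1.0, ior_to=1.33):
    """Snell's law: angle of the refracted ray when light crosses from
    one medium to another. Angles are measured from the surface normal,
    in degrees."""
    theta_in = math.radians(theta_in_deg)
    s = ior_from / ior_to * math.sin(theta_in)
    if abs(s) > 1.0:
        return None   # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Light entering water (IOR 1.33) from air (IOR 1.0) at 45 degrees
# bends toward the normal, to roughly 32 degrees: the bent straw
print(refract_angle(45.0))
```

Run the crossing in reverse at a steep enough angle (water to air) and the function returns `None`: total internal reflection, the effect that keeps light trapped inside fiber optics.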
What is Key Interpolation?
Interpolation is a process used by the computer to determine how to treat the frames that exist between each keyframe created by the animator. Essentially, the computer is guessing how those in-between frames will behave. In some cases, an animator may use few keyframes for a particular element, and let the computer do the work. In other cases, an animator will create many keyframes in a shot, with few interpolated frames in between.
Motion Keyframes (position, rotation, scale) offer the following Key interpolation types:
Spline: A smooth transition from keyframe to keyframe will be the result.
Linear: A straight transition from keyframe to keyframe, with abrupt changes as defined by the keyframes themselves. Effective for maintaining velocities while rotating.
Step: This method jumps to the keyframe value at a given time. In the graph editor, a literal step shape is drawn between each keyframe. Animators use this type of interpolation when performing pose-to-pose animation, as it allows the animator to see only the keyframes, but in the proper time as they appear within the timeline. Final animation rarely uses Step, as it is just a tool to aid in character posing.
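The three behaviors can be sketched as one interpolation function. This is a conceptual illustration; the "smooth" mode here uses a generic smoothstep ease as a stand-in for Cinema 4D's actual spline tangents:

```python
def interpolate(k0, k1, t, mode="linear"):
    """Value between two keyframes k0 and k1 at parameter t in [0, 1].

    linear: straight transition; step: hold k0 until the next key;
    smooth: ease in and out (a stand-in for spline interpolation).
    """
    if mode == "step":
        return k0 if t < 1.0 else k1
    if mode == "smooth":
        t = t * t * (3.0 - 2.0 * t)      # smoothstep easing curve
    return k0 + (k1 - k0) * t

# Five in-between frames from value 0 to 10 under each mode
for mode in ("linear", "smooth", "step"):
    print(mode, [round(interpolate(0.0, 10.0, t / 4, mode), 2)
                 for t in range(5)])
```

Linear moves at constant speed, smooth starts and stops gently, and step simply holds the pose, which is exactly why it is useful for blocking out pose-to-pose animation.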
What is a Mesh?
A Mesh is another term for the net of geometric polygons that make up the objects in your scene. Meshes reference, or are referenced by, attribute tags such as surfacing and UV information, animation rigs, and simulation and rendering engines. Without meshes, you would have only parametric objects to render. Meshes are formed by vertices and the faces that are created from a minimum of three vertices. Edges bridge the connection between two vertices. You should avoid free-standing vertices, edges or polygons in your mesh, unless said elements are intended to serve a specific purpose.
What is MoGraph?
MoGraph, included in CINEMA 4D Broadcast and Studio, is the special motion graphics portion of Cinema 4D. MoGraph excels in the cloning of objects, driving of motion, and the creation of exciting elements often used within motion graphics sequences. Of course, MoGraph also has excellent text capabilities, and when all of these features are used in conjunction with the features already present in Cinema 4D, you have a real motion graphics powerhouse of a toolkit.
What is Motion Blur?
Motion Blur is an artifact that is seen in recorded images when a subject moves during the exposure of a frame of film or image. In reality, you can experience motion blur just by quickly waving your hand in front of your face, but we are more concerned with the traditional photographic artifact.
Motion blur is affected by exposure time, shutter speed and film speed. In computer graphics, the best form of motion blur is often called “3D motion blur” since it is the closest to the experiences of the analog world. 3D motion blur will correctly blur subjects in motion, shadows, and NOT blur reflections inappropriately. 3D motion blur is a physically accurate motion blur.
The amount of resources required to produce accurate 3D motion blur is considerable, though fortunately modern computers have brought the effect within the reach of most artists. It boils down to a quality-versus-time tradeoff. However, even though 3D motion blur is the most accurate, accuracy is not always necessary.
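Conceptually, 3D motion blur averages the scene over the interval the virtual shutter stays open, which is why its cost scales with the number of subframe samples. A one-dimensional sketch of that sampling (illustrative only; renderers sample full images, not single values):

```python
def motion_blur_sample(value_at, frame, shutter=0.5, samples=8):
    """Average an animated value over the shutter interval of one frame.

    value_at(t) returns the animated value at time t; shutter is the
    fraction of the frame the shutter stays open. More samples give a
    smoother, and slower, blur.
    """
    total = 0.0
    for s in range(samples):
        # evenly spaced subframe times within the shutter interval
        t = frame + shutter * (s + 0.5) / samples
        total += value_at(t)
    return total / samples

# An object moving 10 units per frame, sampled across frame 3:
# the blurred value is the position at the middle of the shutter interval
print(motion_blur_sample(lambda t: 10.0 * t, frame=3.0))
```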
What is a Node?
Node refers to a scene element or sub-element. A node is a generic container that can hold different types of information, depending upon the node type (and there are effectively an infinite number of node types that an application or development engineer can create).
Nodes can hold transform information, geometric information, shading information and so on. In addition to being containers of information, nodes can also be operators, and can perform math functions, for example. In some cases, a node can be a simple add node, where you can have one node add to another, or add a value to the value already present in the operator node. You can also have more complex nodes that can solve various levels of complex problems—in reality, these complex nodes are typically made of simpler nodes put together to form a higher level operation.
When nodes are chained together, the result is called a “node tree.” Node trees can be simple to very complex. In order to create a node chain or node tree, you must pipe the output of one node to the input of another. The output of that node would be dependent upon the goal of that branch of the node tree.
Nodes may have multiple inputs and outputs. Depending upon the actual contents of the node, it's possible that the number of inputs and outputs is variable.
Because of all of this versatility, nodes are commonly used for digital compositing and for the internal data structures of 3D content creation applications. In some cases, the nodes may actually be hidden from the user, but it's far more powerful to allow the user to work with the actual nodes themselves.
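As an illustration of how a node tree evaluates, here is a minimal pull-based node system in Python. This is a generic sketch of the concept, not Xpresso's actual API; the `const`, `add`, and `mul` helpers are invented for the example:

```python
class Node:
    """A minimal operator node: a callable op plus upstream input nodes."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs

    def evaluate(self):
        # Pull values through the tree: evaluate inputs first, then
        # apply this node's operation to the results
        return self.op(*(n.evaluate() for n in self.inputs))

# Hypothetical node constructors for the sketch
const = lambda v: Node(lambda: v)
add = lambda a, b: Node(lambda x, y: x + y, a, b)
mul = lambda a, b: Node(lambda x, y: x * y, a, b)

# The expression (2 + 3) * 4, built by piping node outputs into inputs
tree = mul(add(const(2), const(3)), const(4))
print(tree.evaluate())  # 20
```

Each constructor wires an output port to an input port, and evaluating the last node in the chain pulls data through the whole tree, which is the behavior described above.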
XPresso is the node-based editor in Cinema 4D, and Thinking Particles also relies on nodes to build rules for particle systems and interactions. From the online documentation: “Nodes are the primary building blocks of expressions and are designed to carry out the most diverse of tasks, from reporting an object’s current position to processing math operations. Depending on the node’s type, you can add various inputs and outputs to the node called ports. As with XGroups, you add these ports using the inputs menu and outputs menu.”