The Art of Rendering  

2012-04-12 13:58:49 | Category: CG


From FXGuide.
Due to the length limit on blog posts, the full article is not reproduced here; for the bulk of the substance, please go to the original article at FXGuide.
This article gives a comprehensive view of the current state of the art in renderers, with a detailed, in-depth treatment of renderer principles, their history, the implementations of the major renderers, and their respective strengths and weaknesses.

Rendering is always an exercise in managing how much computer power you are willing to devote to simulating reality – that cost is expressed in both dollars and time.

Once considered a commodity item in the whole CG / VFX world – rendering is now a hot topic. CG supervisor Scott Metzger jokes that one can’t talk about renderers without annoying someone. “Renderers are like religion (laughs). Rendering is a religion! Especially now in this era, which is really really exciting, there is so much going on and there are so many renderers, so much happening. To me it is the most exciting part of being in our industry right now.”

As Dana Batali, Vice President of RenderMan products at Pixar, commented to fxguide at an earlier Siggraph, “Rendering drives the largest computational budget of getting the pixels to the screen.” He pointed out that at that time ‘sims’ (physical simulations such as cloth) accounted for only about 5% of most films’ computation budgets. Since rendering dominates render farms, one cannot devote as much effort to perfecting the light simulation in every render as one can to, say, a destruction simulation that appears in perhaps just one shot.

Renderers are easy to write in the abstract, as perhaps a university project, but making one work in production environments is extremely difficult. Arnold, by Solid Angle, is some 200,000 lines of highly optimized C++ code, and it is considered a very direct implementation without a lot of hacks or tricks. Production requirements in terms of render time and scene complexity are staggering. And the problem is not confined to final render time: as Arnold founder Marcos Fajardo pointed out at Siggraph 2010, final-render CPU time might cost $0.10 per hour, but artist time is closer to $40 an hour, so interactivity is also vital.

This leads to the heart of rendering: picking the best approach that will get the results looking as good as possible in the time you have – and, more precisely, picking which attributes of an image (complex shading, complex motion blur, sub-surface scattering or other light effects) should be your priority, which ones will play in your shot, and which need to be more heavily compromised.

Rendering is an art of trying to cheat compromises.

Modo render by Pascal Beekmans. Stats: resolution 1500×500, Monte Carlo indirect illumination, 24.8B vertices, 8.27B polygons.


Concepts

There are many choices and factors that influence the decision of a studio to pick one renderer or another, from price to their pipeline experiences, but for this article we focus on a comparison based on the needs of global illumination (GI) in an entertainment industry production environment. We have chosen to focus on major studios with the expectation that many smaller facilities are interested in the choices made by those larger companies with dedicated production and R&D personnel. This is not to lessen the importance of smaller facilities but rather to acknowledge the filter down nature of renderer choices.


Reflection and shading models

Teapotahedron (Utah teapot)


The goal of realistic rendering is to compute the amount of light reflected from visible scene surfaces that arrives at the virtual camera through image pixels. This light determines the color of image pixels. Key to that are the models of reflection/scattering and shading that are used to describe the appearance of a surface.


Reflection/Scattering – How light interacts with the surface at a given point
Shading – How material properties vary across the surface

Reflection or scattering is the relationship between incoming and outgoing illumination at a given point.

A mathematical description of reflectance characteristics at a point is the BRDF – bidirectional reflectance distribution function.
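
In the standard textbook notation (not specific to any one renderer), the BRDF is defined as the ratio of the differential outgoing radiance to the differential incoming irradiance from a given direction:

```latex
f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
```

where \(\theta_i\) is the angle between the incoming direction \(\omega_i\) and the surface normal.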


BRDF

An object’s absorption, reflection or scattering is the relationship between incoming and outgoing illumination at a given point. This is at the heart of getting objects looking correct.

Descriptions of ‘scattering’ are usually given in terms of a bidirectional scattering distribution function or, as it is known, the object’s BSDF at that point.


Shading

Shading addresses how different types of scattering are distributed across the surface (i.e. which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader.  A simple example of shading is texture mapping, which uses an image to specify the diffuse color at each point on a surface, giving it more apparent detail.
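
As a toy illustration of that idea – not any particular renderer's shading API – here is a minimal diffuse shader in Python, in which a texture lookup supplies the per-point albedo and the reflection model is Lambertian. Every name here (`diffuse_tex` included) is an illustrative stand-in:

```python
import math

def lambert_shade(diffuse_tex, uv, n, light_dir, light_color):
    """Minimal diffuse shader: a texture lookup supplies the per-point
    albedo, and the reflection model is Lambertian (albedo / pi).
    'diffuse_tex' is any callable mapping (u, v) -> RGB."""
    albedo = diffuse_tex(*uv)
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
    return [a / math.pi * c * n_dot_l for a, c in zip(albedo, light_color)]

# A 'texture' that returns constant red, lit head-on by a white light:
print(lambert_shade(lambda u, v: (1.0, 0.0, 0.0), (0.5, 0.5),
                    (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 1.0, 1.0)))
```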

The modern chase for realism revolves around more accurate simulation of light and the approaches renderers have taken to provide the best lighting solution. Key to current lighting solutions is global illumination.


Global illumination

Cornell Box render in Maya


The defining aspect of the last few years of renderers has been global illumination (GI).

Jeremy Birn (lighting TD at Pixar and author of Digital Lighting and Rendering, 2006) succinctly defines GI as “any rendering algorithm that simulates the inter-reflection of light between two surfaces. When rendering with global illumination, you don’t need to add bounce lights to simulate indirect light, because the software calculates indirect light for you based on the direct illumination hitting surfaces in your scene”.


Real image (credit: Digitalcompositing.com)


Note the color bleeding of the red into the figure's shadows



We want to get the contribution from all the other surfaces, taking into account the BRDF and the radiance arriving from each direction. GI makes CG lighting much more like real-world lighting: it accounts for radiosity, the color bleeding that happens when non-reflective surfaces still provide bounce light, tinted by their diffuse color.
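
Formally, what every GI algorithm approximates is the rendering equation: the outgoing radiance at a surface point x is the emitted radiance plus the incoming radiance from every direction over the hemisphere, weighted by the BRDF and the cosine term:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\,\mathrm{d}\omega_i
```

The solutions listed below are different strategies for evaluating this integral within a production budget.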

Solutions:


Radiosity
Photon mapping  (and with final gathering)
Point clouds
Brick maps
Monte Carlo ray tracing

Conventional radiosity

Conventional radiosity is an approach to GI where indirect light is transmitted between surfaces by diffuse reflection of their surface color, and stored in the vertices of the surface meshes. While this was one of the first types of GI to become available, the resolution of the geometry is tied to the resolution of the GI solution: to achieve more detail in the shadows you need to increase the poly count, and if the objects are animated and moving the solution must be recomputed every frame. As such it was not popular in VFX.
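
The classic formulation (standard in the literature) expresses each patch's radiosity as its emission plus its diffuse reflectance times the light gathered from all other patches via form factors:

```latex
B_i = E_i + \rho_i \sum_j F_{ij}\, B_j
```

where \(B_i\) is the radiosity of patch \(i\), \(E_i\) its emission, \(\rho_i\) its diffuse reflectance, and \(F_{ij}\) the form factor between patches \(i\) and \(j\) – which is exactly why the solution's resolution is tied to the mesh resolution.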

In a simple ray tracer, ray directions are determined regularly, typically on a simple grid. But there is a key alternative, Monte Carlo ray tracing, also known as stochastic ray tracing, in which ray origins, directions, and/or times are set using random numbers. See below.
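
To make "random numbers" concrete, here is a small, self-contained Python sketch of the kind of Monte Carlo estimator a stochastic ray tracer runs at every shading point: it estimates irradiance by averaging cosine-weighted random directions over the hemisphere. All names are illustrative, not any renderer's API:

```python
import math
import random

def cosine_hemisphere_sample():
    """Draw a random direction on the unit hemisphere, pdf = cos(theta)/pi."""
    r1, r2 = random.random(), random.random()
    phi = 2.0 * math.pi * r1
    sin_theta = math.sqrt(r2)
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi),
            math.sqrt(1.0 - r2))  # z component = cos(theta)

def estimate_irradiance(radiance, num_samples=256):
    """Monte Carlo estimate of E = integral of L(w) cos(theta) dw over the
    hemisphere; 'radiance' is any caller-supplied function of a direction."""
    total = 0.0
    for _ in range(num_samples):
        w = cosine_hemisphere_sample()
        cos_theta = w[2]
        pdf = cos_theta / math.pi          # matches the sampling above
        total += radiance(w) * cos_theta / pdf
    return total / num_samples

# A uniform white sky of radiance 1.0: the irradiance converges to pi.
print(estimate_irradiance(lambda w: 1.0))
```

With more samples the estimate converges; with too few it is noisy – the same noise-versus-speed trade-off that dominates production Monte Carlo rendering.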

A ray tracer suffers in render time as the number of shiny surfaces and the number of lights and objects balloons – as it tends to on a major effects shot.

The key with a ray tracer is not its complexity but the complexity of its optimizations and implementation.

The key concepts are simple enough, but the demands on a production ray tracer – to deliver inside a computational budget on exceedingly complex projects – are considerable. Until recently, full ray tracers were not used for animation. They were popular for still shots, or very small special cases, but most commercial ray tracing happened as part of a larger, hybrid solution.

Now that is changing: there is great demand for the amazing accuracy and subtlety of a ray tracing solution. But the key is to stay focused on producing good results, not necessarily accurate results. In films and TV shows it is rare that accuracy is regarded as the absolute yardstick. The flexibility to meet directorial requirements need not also entail physical accuracy, but the ability to create realistic imagery is pivotal.


Photon mapping

The photon mapping method is an extension of ray tracing. In 1989, Andrew Glassner wrote about ray tracing in An Introduction to Ray Tracing:

“Today ray tracing is one of the most popular and powerful techniques in the image synthesis repertoire: it is simple, elegant, and easily implemented. [However] there are some aspects of the real world that ray tracing doesn’t handle very well (or at all!) as of this writing. Perhaps the most important omissions are diffuse inter-reflections (e.g. the ‘bleeding’ of colored light from a dull red file cabinet onto a white carpet, giving the carpet a pink tint) etc.”

The photon map algorithm was developed in 1993–1994 and the first papers on the method were published in 1995. It is a versatile algorithm capable of simulating global illumination including caustics and diffuse inter-reflections. And for many years it provided the same flexibility as the more general ray tracing methods using only a fraction of the computation time.

The key with photon mapping, compared to early radiosity approaches that stored values at each vertex, is that the GI is stored in a separate data structure – the photon map. The resolution of a photon map is independent of the rest of the geometry.

The speed and accuracy of the photon map depend on the number of ‘photons’ used. They bounce around a scene, landing on any surface that should be brightened by indirect light, and are stored in a map – not unlike a ‘paint gun’ effect where the blotches record the photons. This means you can get good results from photon maps alone, but if there are not enough photons, the results will not be smooth.


500 photons (Maya render)


50,000 photons



The solution to this problem is to use photon maps with final gathering, which in effect smooths out the photon map, providing much more continuous and smoother illumination. In addition to ‘filtering’ the photon map it provides what Birn describes as something that “functions as a global illumination solution unto itself, adding an extra bounce of indirect light.”

The photon map with final gathering for computation of global illumination is a three-pass method (a minimal sketch of the density-estimate step follows the list):

First, photons are emitted from the light sources, traced through the scene, and stored in a photon map when they hit non-specular objects
Then, the unorganized collection of stored photons is sorted into a tree
Finally, the scene is rendered using final gathering (a single level of distribution ray tracing). The irradiance at final gather ray hit points is estimated from the density and power of the nearest photons. Irradiance interpolation is used to reduce the number of final gathers.
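
As a toy illustration of that density estimate – a brute-force sketch, not Jensen's production algorithm, which organizes the photons in a balanced kd-tree – the irradiance near a point can be approximated as the power of the k nearest photons divided by the area of the disc that encloses them:

```python
import math

def dist2(a, b):
    """Squared distance between two 3D points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def photon_irradiance(photons, x, k=50):
    """Density estimate at point x from the k nearest photons.
    Each photon is (position, power): position a 3-tuple, power an RGB
    3-tuple. Brute force for clarity; real photon maps use a kd-tree."""
    nearest = sorted(photons, key=lambda p: dist2(p[0], x))[:k]
    if not nearest:
        return (0.0, 0.0, 0.0)
    r2 = dist2(nearest[-1][0], x)          # squared radius of enclosing disc
    area = math.pi * r2 or 1e-8            # guard against a zero radius
    flux = [0.0, 0.0, 0.0]
    for _pos, power in nearest:
        flux = [f + p for f, p in zip(flux, power)]
    return tuple(f / area for f in flux)   # irradiance ~ flux / disc area
```

With few photons the enclosing disc is large and the estimate blurs (the blotchiness in the 500-photon render above); final gathering smooths this further by integrating over many such estimates.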

The photon map is still decoupled from the geometric representation of the scene. This is a key feature of the algorithm, making it capable of simulating global illumination in complex scenes containing millions of triangles, instanced geometry, and complex procedurally defined objects.

Combining global illumination with final gathering can achieve highly physically accurate illumination, and it is widely used for interior architectural shots that require the light contribution of both exterior and interior light sources. To seamlessly turn final gathering on and off alongside global illumination, a scene must be modeled in a physically plausible way for both lighting effects. For example, lights should have roughly identical values for their color (direct light) and energy (photons) attributes. Materials are normally designed to be physically plausible as well. In Softimage, for example, there is an Architectural Material (mia_material) shader designed to support most physical materials used in architectural and product-design renderings. It supports most hard-surface materials such as metal, wood, and glass, and is tuned especially for glossy reflections and refractions and high-quality glass.

Radiance or ‘color bleeding’ in RenderMan

Copyright Pixar/Disney.


As time progressed, companies aimed to add radiosity to images even if they were not doing a full ray tracing solution. With release v11 of Pixar’s RenderMan, ray tracing was implemented as part of the shading language, to aid in rendering physically accurate interreflections, refractions, and ambient occlusion. Today, v16 offers several new features implemented specifically to enhance the performance of ray traced radiosity, including a radiosity cache, physically plausible shading, and a pure brute-force ray tracing solution. The new Raytrace Hider in v16 allows one to render images using pure ray tracing, bypassing the usual REYES rasterization process that PRMan uses. Rays can now be shot from the camera with jittered time samples and lens positions to produce accurate motion blur and depth of field effects. With v16, ray traced GI has become a viable production tool. Prior to v16, RenderMan was still producing great GI solutions using multi-pass approaches.





Before the expense of ray traced radiosity became feasible for large productions, Pixar’s RenderMan tackled GI in two distinct ways: one using ray tracing just to add the indirect radiance after the direct illumination had been calculated, and the other using no ray tracing at all. These techniques were first used in production on Pirates of the Caribbean: Dead Man’s Chest (2006). For production scenes with complex shaders and geometry, they proved to be relatively fast, memory efficient, and artifact free, but because of their multipass nature they required significant disk I/O and careful asset management, and were unsuitable for interactive re-lighting.

Pixar’s RenderMan provided two multipass solutions for natural color bleeding: brick maps (an approach similar to photon mapping) and point clouds. We spoke to Pixar’s Per H. Christensen about these two multi-pass approaches.

Click to listen to Mike Seymour talk to Per Christensen who explains the differences between brick maps and point clouds. Per Christensen is a senior software developer in Pixar’s RenderMan group at Pixar in Seattle. His main research interest is efficient ray tracing and global illumination in very complex scenes. Before joining Pixar, he worked for Mental Images in Berlin and Square USA in Honolulu. He received a Master’s Degree in electrical engineering from the Technical University of Denmark and a Ph.D. in computer graphics from the University of Washington.

Option 1 – Brick maps: a ray traced solution that uses brick maps to solve indirect illumination (radiosity)

The steps are:

- Render with direct illumination; during this render the software writes out a point cloud, with each point in the cloud carrying the direct illumination color. This is “baking the direct illumination”.
- The software then converts this point cloud into a 3D brick map. This 3D map – very much like a texture map – is effectively independent of the camera.
- The final step is to render the final image: for each shading point where you want to know the indirect illumination (radiance), the software shoots rays back out and looks up the color in the 3D brick map at that point. Doing this naively gets expensive very quickly, but RenderMan is optimized to look up in the brick map and to minimize the number of rays. Because REYES divides surfaces into micropolygons, RenderMan does this well.

Option 2 – Point clouds: solving indirect illumination (radiosity) with point clouds (no ray tracing)

The steps are:

- Render with direct illumination as before and write out a point cloud (each point has the direct illumination color), but do not build the brick map.
- Render the final image: at each shading point where you would previously have shot rays into a brick map, you now look up in an octree. Points close to the shading point in 3D space are evaluated fully, while points far away are clumped together into an aggregate solution. In effect, at each shading point you want a fisheye view of the world – a rasterization of the world from that point – with the octree used to speed everything up.
RenderMan does not gather from every point in the cloud individually, and does not use ray tracing at all in this method (see the sketch below).
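
A toy sketch of the gathering idea, using Bunnell-style disc form factors – one published way to do point-based GI, not necessarily RenderMan's exact internals, and without the octree clustering that makes it fast:

```python
import math

def gather_indirect(cloud, x, n):
    """Toy point-based GI gather at shading point x with unit normal n.
    Each cloud entry is (position, normal, area, color) baked from a
    direct-illumination pass. Brute force over all points; a production
    implementation aggregates distant points with an octree."""
    out = [0.0, 0.0, 0.0]
    for p, pn, area, color in cloud:
        d = [pi - xi for pi, xi in zip(p, x)]
        r2 = sum(di * di for di in d) + 1e-8
        r = math.sqrt(r2)
        w = [di / r for di in d]                    # direction to the point
        cos_x = max(0.0, sum(wi * ni for wi, ni in zip(w, n)))
        cos_p = max(0.0, -sum(wi * ni for wi, ni in zip(w, pn)))
        ff = (area * cos_x * cos_p) / (math.pi * r2 + area)  # disc form factor
        out = [o + c * ff for o, c in zip(out, color)]
    return out
```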

To deal with these very big point clouds there is a cache system that reads points as they are needed. Similarly, for the ray tracing method, the software was optimized to dynamically offload geometry when it is not needed, to reduce memory use.

In the past, TDs picked one of these two methods per show to establish the approach that worked for that film’s scenes. For example, WALL-E used the point cloud method for ambient occlusion: there was a lot of very dense garbage (at the start of the film), and the ray tracing method would have needed access to all of that geometry to determine ray intersections, so the point cloud method was the one the Pixar team selected for that film.

Importantly, most of the discussion above concerns diffuse transport. For a broader, more general approach to GI, it should be stated that only photon mapping and Monte Carlo ray tracing allow one to solve for specular paths, or even diffuse-to-specular lighting effects such as caustics. Caustics remain a very complex and demanding issue: while handled many times in production, they are rarely solved directly and are more often treated as a special case.

The RenderMan team used to recommend multipass methods such as baking the direct illumination or photon mapping, but with the multi-resolution radiosity cache introduced in PRMan v16 it is just as efficient, and much easier, to use the new techniques.


No light (virtually)


Light from the balls only


 



Balls with no bounce light - but note reflections


Final shot


 

Christophe Hery, Senior Scientist at Pixar, says: “Obviously I have been a big supporter of multipass approaches in the past, and in particular of point-based techniques. But through my more recent Physically Based and Energy Conservation work (primarily at ILM with my former colleague Simon Premoze – though I have now reproduced and enhanced it at Pixar), I discovered that Multi Importance Sampling is a very practical solution for allowing a unification of illumination: for instance, it is because we sample the BRDFs that we can transparently swap a specular emitting light of a given shape and intensity with a geometry of the same size and color, and in essence get the same reflection from it (obviously a crucial part of getting that working is HDR textures). Interestingly, solving the visibility for specular (ray tracing for MIS) will essentially give you the shadowing on your diffuse components for free. Add to that the radiosity cache (from PRMan v16) and you find yourself in a situation where the PBGI (Point Based Global Illumination) stuff or even spherical harmonics become obsolete (to justify these, you would need to have more or less a pure diffuse illumination, at least a low frequency light field).”
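
For reference, MIS (multiple importance sampling, in Veach's standard formulation) combines several sampling strategies by weighting each sample; the usual balance heuristic weight for a sample drawn from strategy i, out of strategies taking n_k samples with pdfs p_k, is:

```latex
w_i(x) = \frac{n_i\, p_i(x)}{\sum_k n_k\, p_k(x)}
```

In the rendering case the two strategies are typically sampling the light source and sampling the BRDF; each sample is weighted toward whichever strategy predicts it better, keeping variance low for both sharp and broad lobes.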

Christophe Hery joined Pixar in 2010 after spending 17 years at ILM. In 2010 he received a Technical Achievement Award for the development of point-based rendering for indirect illumination and ambient occlusion. He is recognized throughout the industry as one of the leading technical innovators and researchers in areas of lighting and rendering. As such, fxguide asked for his personal opinion as to whether this means he increasingly favors a full ray traced solution.

“Yes. I believe the whole industry is moving in that direction. Normalized BRDFs and area lights, in conjunction (through MIS), deliver a plausible image with minimum tweaking, empowering the artists to focus on beautifying the shots.”

In recent times at Siggraph and elsewhere there have been advances in lighting such as the use of spherical harmonics, but the need for these cutting-edge approaches is somewhat reduced by adopting a more complete ray tracing solution. Hery expands on the point above:

“SHs (Spherical Harmonics) as a full pre-canned illumination solution can ‘only’ reproduce low frequency luminaires or materials. As such, they are not a complete approach, and they always need to be supplemented by something else. Plus they come with serious issues related to precomputation and storage. If one is to use MIS to achieve specular, one might as well share the light sampling (and traced visibility) and essentially get diffuse for free.”

When the master GI solution is ray traced, the setups are also easy: there is no special extra data to manage. Hery, again:

“You can even make the whole thing incremental (in a progressive refinement manner), enabling fast interactions. With PRMan v16, I do not think we are at a point where it makes sense to trace the camera rays (better to rely on Reyes tessellation), but everything else can be. On the other hand, Lumiere, PRMan’s relighter, is starting to work great in pure ray-trace hider mode.”

Lumiere is a part of RenderMan RPS and provides an API for developers. Lumiere is the REYES re-renderer, but there is also the RAYS re-renderer/relighter. Many studios have written their own interface to Lumiere – from inside Nuke, inside Maya, or as a stand-alone facility application. Lumiere is actually a name given to two different types of re-rendering inside Pixar, both a relighting Katana-style tool and an interactive tool – the name is used on several of Pixar’s internal tools.

 


Monte Carlo ray tracer

Arnold is an example of a Monte Carlo ray tracer: an unbiased, uni-directional stochastic ray tracer. Unlike RenderMan, it uses ray tracing for both direct and indirect lighting; but unlike earlier ray tracers, it is not slow or difficult to use with animation and moving objects. Arnold is very much a production renderer, designed precisely for VFX and animation production (see below).

Arnold fully supports GI and provides incredibly high levels of realism and visual subtlety while also offering the flexibility needed for production.

Arnold has no rasterization tricks, and no irradiance caches or photon maps for light sources. According to Eric Haines (Ray Tracing News): “Motion blur and depth of field effects weaken rasterization’s appeal, since much sampling has to be done either way; these features are a natural part of stochastic ray tracing. Not being a hybrid system using some form of rasterization has a number of advantages. First, there’s only a single version of the model stored, not one for the rasterizer and another for the ray tracer. Reducing memory footprint is critical for rendering large scenes, so this helps considerably. Arnold also uses compression and quantization of data (lower precision) in order to reduce memory costs. Not using two methods of rendering avoids other headaches: maintaining two separate renderers, fixing mis-syncs between the two (e.g., one thinks a surface is in one location, the other has a different result), dealing with a large number of effects consistently in both, etc.”

Avoiding irradiance caches, as in the hybrid approaches above, means that there is minimal precomputation time for Arnold. This means rendering can happen immediately versus waiting for precomputations to be completed. Combined with progressive rendering (where an image is roughed out and improves over time), this is an advantage in a production environment, as artists and technical directors can then iterate more rapidly.
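
The mechanics of progressive rendering are simple to sketch: keep a running per-pixel mean, so a viewable (noisy) image exists after the first pass and converges as samples accumulate. A minimal Python illustration, with `sample_pixel` standing in for a full trace of one camera ray (all names hypothetical):

```python
import random

def progressive_render(width, height, sample_pixel, passes=16):
    """Yield a progressively refined image: after each pass the running
    per-pixel mean is viewable, and it converges as passes accumulate.
    'sample_pixel(x, y)' is any function returning one RGB sample."""
    accum = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    for n in range(1, passes + 1):
        for y in range(height):
            for x in range(width):
                s = sample_pixel(x, y)
                a = accum[y][x]
                # Incremental mean: a += (s - a) / n, per channel
                accum[y][x] = tuple(ai + (si - ai) / n
                                    for ai, si in zip(a, s))
        yield accum  # hand the partial image to the viewer each pass

# Each 'sample' here is gray noise standing in for one traced camera ray.
for image in progressive_render(4, 4, lambda x, y: (random.random(),) * 3):
    pass  # a viewer would display 'image' after every pass
```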


Image Based Lighting (IBL)

An important part of GI is image based lighting. IBL involves capturing an omni-directional representation of real-world light information as an image, typically in one of three ways:


- bracketed photography of a chrome ball
- stitching together a series of bracketed stills, often taken with a very wide or 180 degree fisheye lens
- using specialist scanning cameras such as the Spheron

These HDR images can then be projected onto a dome or sphere, analogously to environment mapping, and used to simulate the lighting for the objects in the scene. This allows highly detailed real-world lighting to light a scene. Almost all modern rendering software offers some type of image-based lighting, though the exact terminology may vary from system to system.
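
These maps are most often stored in lat-long (equirectangular) form, so shading only needs a mapping from a world-space direction to image coordinates. A small sketch of one common convention (orientation conventions vary between packages):

```python
import math

def direction_to_latlong_uv(d):
    """Map a unit direction (x, y, z), with y up, to (u, v) coordinates
    in a lat-long (equirectangular) HDR environment map."""
    x, y, z = d
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)    # azimuth -> horizontal
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # polar angle -> vertical
    return u, v

# Straight up should land on the top row of the map (v = 0).
print(direction_to_latlong_uv((0.0, 1.0, 0.0)))
```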
Below are several examples from the fxphd RenderMan in Production course at fxphd.com by Christos Obretenov of Lollipop Shaders. There are no lights in the scene – in all three images the lighting (diffuse, bounce and specular) is entirely ray traced from the unclipped HDR dome maps, rendered in RenderMan. All the rendering/shading was done by Christos Obretenov; Arkell Rasiah provided the HDR maps, and Jason Gagnon provided the van model.




 

