
Posts

Layered Materials

Most real-life materials can be modeled more precisely as layers of different materials. Some layered surfaces can be solved analytically and converted into a single reflection model; however, the vast majority cannot be represented this way, so representing arbitrarily layered materials is quite challenging. The most obvious approach is to simulate all the interactions between layers. Although this is straightforward to implement, it can cause higher variance, and it gives no way to evaluate pdf and bsdf values for arbitrary $(w_i, w_o)$ pairs. Arbitrarily Layered Micro-Facet Surfaces [WW2007] presents a method that unifies arbitrarily layered microfacet surfaces into a single surface model. However, there are some issues with this paper: when sampling a direction between layers, a direction is chosen according to the individual brdfs of the surfaces. So, if a layer is rough, the transmitted or reflected direction is chosen accordingly. How...
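To make the naive approach concrete, here is a minimal sketch of simulating the interactions between layers as a stochastic walk through the stack. The `Bsdf` interface, the direction convention and the types are hypothetical assumptions for illustration, not Glue's actual API; directions here are directions of propagation.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Spectrum { float r, g, b; };
struct RNG { float next(); }; // hypothetical random number source

// Hypothetical per-layer interface: sample an outgoing propagation direction
// for a given incoming one, and report whether the interface was crossed.
struct BsdfSample { Vec3 wo; Spectrum weight; bool transmitted; };
struct Bsdf { virtual BsdfSample sample(const Vec3& wi, RNG& rng) const = 0; };

struct LayeredSample { Vec3 wo; Spectrum weight; bool exitedTop; };

// Walk the light through the layer stack until it exits above or below.
// Note that this only *samples* the layered BSDF; as noted above, there is
// no way to query a pdf or bsdf value for an arbitrary (wi, wo) pair.
LayeredSample sampleLayered(const std::vector<const Bsdf*>& layers,
                            Vec3 wi, RNG& rng) {
    Spectrum weight{1.0f, 1.0f, 1.0f};
    int i = 0; // index of the interface the light currently hits
    while (true) {
        BsdfSample s = layers[i]->sample(wi, rng);
        weight = {weight.r * s.weight.r, weight.g * s.weight.g,
                  weight.b * s.weight.b};
        // Transmission moves down one interface, reflection moves back up.
        int next = s.transmitted ? i + 1 : i - 1;
        if (next < 0)                    return {s.wo, weight, true};
        if (next >= (int)layers.size()) return {s.wo, weight, false};
        i = next;
        wi = s.wo; // continue the walk from the sampled direction
    }
}
```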

Image Based Lighting

Although it is possible to mimic real-world lighting to some extent with various area light sources and analytical skylights, it is pretty hard to achieve what image based lights (environment lights) can represent. Using an image based light source adds a great deal of realism to scenes without having to create an entire environment just for lighting purposes. Because these light sources are stored in one of the HDR file formats, the high dynamic range of the scene is well preserved and the correct radiance distribution can be used. Image based lights should not be distinguished from other light sources, in order to keep a generic light interface. So, an integrator must be able to do the following at some surface: sample a direction according to the pdf of this light source; get the pdf value for a certain direction, that is, the probability of choosing this direction according to the pdf of this light source; get the radiance value for a certain direction, that is, how mu...
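As a sketch, the generic light interface described above might look like the following. Names and types here are illustrative assumptions, not Glue's actual classes.

```cpp
struct Vec3 { float x, y, z; };
using Spectrum = Vec3; // placeholder RGB spectrum

struct LightSample { Vec3 wi; Spectrum radiance; float pdf; };

// A light must support the three operations listed above, so an environment
// light can be used through the same interface as any other light source.
class Light {
public:
    virtual ~Light() = default;
    // Sample a direction at point p according to this light's pdf.
    virtual LightSample sample(const Vec3& p, float u1, float u2) const = 0;
    // Probability (in solid-angle measure) of sampling direction wi at p.
    virtual float pdf(const Vec3& p, const Vec3& wi) const = 0;
    // Radiance arriving at p from direction wi.
    virtual Spectrum le(const Vec3& p, const Vec3& wi) const = 0;
};
```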

Multiple Importance Sampling

Importance sampling has proven to be a very practical variance reduction technique, since exact sampling routines can be found analytically for individual terms of the rendering equation most of the time. However, finding a single analytical sampling routine for the whole integrand is not trivial. Therefore, it is important to combine several strategies, each focused on a different term. For the integrand with indirect lighting, using more than one sampling strategy is not feasible, since following more than one ray degrades performance due to the recursive nature of the algorithm. For the integrand with direct lighting, however, different sampling strategies can be used, since each sampled direction is only tested against the lights to see whether anything lies between the surface and them. This intersection test is costly but not recursive. Two terms of the integrand cause most of the variance: the bsdf and the incoming radiance. If just one of them is used, the following cases might occur: If a direc...
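For reference, a minimal sketch of combining two strategies with the power heuristic (beta = 2) from Veach's thesis; the function name is illustrative.

```cpp
// Power heuristic with beta = 2. nf and ng are the sample counts of the two
// strategies, fPdf and gPdf are their pdfs for the same sampled direction.
float powerHeuristic(int nf, float fPdf, int ng, float gPdf) {
    float f = nf * fPdf;
    float g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}
```

A direction sampled from the bsdf would then be weighted by `powerHeuristic(1, bsdfPdf, 1, lightPdf)`, and a direction sampled from the light by the symmetric expression, so whichever strategy matches the integrand better dominates the estimate.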

Sampling Visible Normals

The importance sampling strategy proposed in [WMLT07] gives quite acceptable results at near-normal incidence. At grazing angles, however, some bad cases may occur. For example, an incident ray may be reflected with a very high sampling weight and cause bright pixels in the output. Also, some samples are wasted on sampled micro normals for which $w_i \cdot m < 0$, where $w_i$ is the incident direction and $m$ is the sampled micro normal. Clearly, a better strategy should also consider the incident direction $w_i$ when sampling micro normals $m$; that is, we should only sample from normals that are visible from direction $w_i$. Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals [Hd14] proposes a way to sample only visible normals. This strategy produces sampling weights in $[0, 1]$, which is obviously better than the $[0, \infty)$ of the previous strategy. To better understand this paper you may r...
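As a sketch, the routine below samples a visible GGX normal for a given view direction. It follows the later, simplified formulation by Heitz (2018) of the same distribution of visible normals introduced in [Hd14]; the `Vec3` helpers are minimal assumptions.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / l, v.y / l, v.z / l};
}
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Samples a micro normal from the GGX visible-normal distribution for view
// direction Ve (local frame, Ve.z > 0), with roughnesses ax, ay and uniform
// random numbers u1, u2 in [0, 1).
Vec3 sampleGGXVNDF(Vec3 Ve, float ax, float ay, float u1, float u2) {
    const float kPi = 3.14159265358979f;
    // Transform the view direction to the hemisphere configuration.
    Vec3 Vh = normalize({ax * Ve.x, ay * Ve.y, Ve.z});
    // Build an orthonormal basis around Vh.
    float lensq = Vh.x * Vh.x + Vh.y * Vh.y;
    Vec3 T1 = lensq > 0.0f
        ? Vec3{-Vh.y / std::sqrt(lensq), Vh.x / std::sqrt(lensq), 0.0f}
        : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 T2 = cross(Vh, T1);
    // Sample the projected area of the visible hemisphere.
    float r = std::sqrt(u1);
    float phi = 2.0f * kPi * u2;
    float t1 = r * std::cos(phi);
    float t2 = r * std::sin(phi);
    float s = 0.5f * (1.0f + Vh.z);
    t2 = (1.0f - s) * std::sqrt(std::max(0.0f, 1.0f - t1 * t1)) + s * t2;
    // Reproject onto the hemisphere and transform back to the ellipsoid.
    float t3 = std::sqrt(std::max(0.0f, 1.0f - t1 * t1 - t2 * t2));
    Vec3 Nh = {t1 * T1.x + t2 * T2.x + t3 * Vh.x,
               t1 * T1.y + t2 * T2.y + t3 * Vh.y,
               t1 * T1.z + t2 * T2.z + t3 * Vh.z};
    return normalize({ax * Nh.x, ay * Nh.y, std::max(0.0f, Nh.z)});
}
```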

Microfacet Theory: Reflection From and Transmission Through Rough Surfaces

A Reflectance Model for Computer Graphics [CT82] introduced microfacet theory to computer graphics. Since then, many improvements and simplifications of the model have been proposed. Because most of the terms that constitute the model's formula are fixed, much of the research has focused on different normal distribution functions and masking-shadowing functions. Microfacet Models for Refraction through Rough Surfaces [WMLT07] extends the microfacet model from reflection from rough surfaces to refraction through rough surfaces. Although earlier work exists on exactly this subject, this paper is the first to validate the proposed model. It serves as a reference for implementors by giving the sampling weights and the steps needed to implement the model. It also proposes a new distribution function, GGX, which has longer tails than the Beckmann distribution, resulting in a more realistic appearance for certain materials. The only drawback of this distribution is that it takes more to reduce ...
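For concreteness, here is the isotropic GGX normal distribution function from [WMLT07] in a commonly used, algebraically equivalent form (a sketch; the function name is illustrative).

```cpp
#include <cmath>

// Isotropic GGX distribution D(m), evaluated in the local frame where
// cosThetaM = n . m and alpha is the roughness parameter. Equivalent to the
// form in [WMLT07]: chi+(n.m) * alpha^2 / (pi * cos^4 * (alpha^2 + tan^2)^2).
float ggxD(float cosThetaM, float alpha) {
    const float kPi = 3.14159265358979f;
    if (cosThetaM <= 0.0f) return 0.0f; // back-facing micro normals
    float a2 = alpha * alpha;
    float t = cosThetaM * cosThetaM * (a2 - 1.0f) + 1.0f;
    return a2 / (kPi * t * t);
}
```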

Diffuse Materials

BRDFs that model diffuse materials approximate the behaviour of light when it interacts with highly scattering materials. The most well-known is the Lambertian model, whose BRDF is simply a constant and is the easiest to implement. The Oren-Nayar model represents real-world objects more realistically than the Lambertian model: it assumes the surface is made up of many microfacets, each of which is a perfect Lambertian surface, and it accounts for shadowing, masking and interreflection between the facets. Since the model does not have a closed-form solution, the approximation presented in the original work can be used. There is no BRDF-specific importance sampling strategy for these models; samples are taken from the cosine-weighted distribution, which is already enough to reduce most of the variance. Relevant implementations: /glue/src/material/lambertian.cpp and /glue/src/material/orennayar.cpp.
[Renders: Lambertian; Oren-Nayar (alpha=0.5)]
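A minimal sketch of the cosine-weighted hemisphere sampling mentioned above, in the local shading frame where z is the surface normal (names are illustrative, not the exact Glue code).

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Maps two uniform samples u1, u2 in [0, 1) to a direction with
// pdf(wo) = cos(theta) / pi, which cancels the cosine factor in the
// rendering equation and leaves only the constant Lambertian BRDF.
Vec3 cosineSampleHemisphere(float u1, float u2) {
    const float kPi = 3.14159265358979f;
    float r = std::sqrt(u1);
    float phi = 2.0f * kPi * u2;
    float z = std::sqrt(std::max(0.0f, 1.0f - u1));
    return {r * std::cos(phi), r * std::sin(phi), z};
}
```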

What is this?

Hello there. Glue is a physically based renderer in which I plan to implement many different types of integrators, materials, lights, etc. The reason I call it a physically based renderer rather than a path tracer or a ray tracer is that it will contain more than just those integrators. I started working on this project approximately two months ago, and I have read several books, papers and other resources. My aim is to publish a blog post for every paper that I implement and that is worth consideration. I will try to summarize what each paper states and show its advantages and/or disadvantages. I will also talk about how Glue is structured from time to time. Glue is designed to let as many different parameters as possible be tried from the scene file, to see their effects. However, some parts of it cannot be changed from the scene file, since static polymorphism is chosen over runtime polymorphism for performance reasons. Glue is not a completely research...
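To illustrate the static polymorphism choice mentioned above, here is a hypothetical sketch (not Glue's actual code): the material type is a template parameter, so the call is resolved at compile time instead of through a virtual table, at the cost of not being switchable from the scene file.

```cpp
struct Vec3 { float x, y, z; };
using Spectrum = Vec3;

// Hypothetical material satisfying an implicit compile-time interface.
struct Lambertian {
    Spectrum albedo;
    Spectrum evaluate(const Vec3&, const Vec3&) const {
        const float kInvPi = 0.31830988618f;
        return {albedo.x * kInvPi, albedo.y * kInvPi, albedo.z * kInvPi};
    }
};

// The integrator is instantiated per material type; Material::evaluate is
// resolved at compile time and can be inlined, unlike a virtual call.
template <typename Material>
Spectrum shade(const Material& m, const Vec3& wi, const Vec3& wo) {
    return m.evaluate(wi, wo);
}
```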