Showing posts from May, 2018

Sampling Visible Normals

The importance sampling strategy proposed in [WMLT07] gives quite acceptable results at near-normal incidence angles. At grazing angles, however, some problematic cases occur. For example, incident light may be reflected with very high sampling weights and cause overly bright pixels in the output. Also, some samples are wasted when the sampled micro normal $m$ satisfies $w_i \cdot m < 0$, where $w_i$ is the incident direction. Clearly, a better strategy should also take the incident direction $w_i$ into account when sampling micro normals $m$; that is, we should only sample normals that are visible from direction $w_i$. Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals [Hd14] proposes exactly such a strategy. It produces sampling weights in $[0, 1]$, which is clearly better than the $[0, \infty)$ range of the previous strategy. To better understand this paper you may r...
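To see why the weights become bounded, here is a short sketch in the notation of this post (see [Hd14] for the exact derivation), assuming a Smith-style masking-shadowing function so that $G_2 \le G_1$. The distribution of visible normals reweights $D(m)$ by the area visible from $w_i$,

$$ D_{w_i}(m) = \frac{G_1(w_i, m)\, |w_i \cdot m|\, D(m)}{|w_i \cdot n|}, $$

and with this pdf the weight of a reflection sample reduces to

$$ \frac{F(w_i, m)\, G_2(w_i, w_o, m)}{G_1(w_i, m)} \le 1, $$

since the Fresnel term $F$ is at most one.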

Microfacet Theory: Reflection From and Transmission Through Rough Surfaces

A Reflectance Model for Computer Graphics [CT82] introduced microfacet theory to computer graphics. Since then, many improvements and simplifications of the model have been proposed. Because most of the terms that constitute the model's formula are fixed, much of the research has focused on different normal distribution functions and masking-shadowing functions. Microfacet Models for Refraction through Rough Surfaces [WMLT07] extends the microfacet model from reflection off rough surfaces to refraction through them. Although earlier work on exactly this subject exists, this paper is the first to validate the proposed model. It serves as a reference for implementors by giving the sampling weights and the steps needed to implement the model. Also, it proposes a new distribution function, GGX, which has longer tails than the Beckmann distribution, resulting in a more realistic appearance for certain materials. The only drawback of this distribution is that it takes more to reduce ...
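As an illustration (a minimal sketch with names of my own choosing, not the code from Glue's sources), this is roughly what the GGX distribution and its classic micro-normal sampling from [WMLT07] look like in C++, with alpha the roughness parameter and xi1, xi2 uniform random numbers:

```cpp
#include <cmath>

// GGX normal distribution function D(m), parameterized by cos(theta_m) = n.m,
// where n is the macrosurface normal and m a micro normal.
float ggx_D(float cos_theta_m, float alpha)
{
    if (cos_theta_m <= 0.0f) return 0.0f; // backfacing micro normals contribute nothing
    const float pi = 3.14159265358979f;
    float cos2 = cos_theta_m * cos_theta_m;
    float tan2 = (1.0f - cos2) / cos2;
    float a2   = alpha * alpha;
    float d    = cos2 * (a2 + tan2);
    return a2 / (pi * d * d);
}

// Sample a micro normal m proportionally to D(m) |m.n| (the strategy of [WMLT07]).
// Returns the spherical angles (theta_m, phi_m) of m around n.
void ggx_sample(float alpha, float xi1, float xi2, float& theta_m, float& phi_m)
{
    const float pi = 3.14159265358979f;
    theta_m = std::atan(alpha * std::sqrt(xi1 / (1.0f - xi1)));
    phi_m   = 2.0f * pi * xi2;
}
```

The sampling weight that goes with this strategy, $\frac{|w_i \cdot m|\, G(w_i, w_o, m)}{|w_i \cdot n|\, |m \cdot n|}$, is the unbounded quantity that the visible-normal sampling of [Hd14] improves upon.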

Diffuse Materials

BRDFs that model diffuse materials approximate the behaviour of light when it interacts with highly scattering materials. The most well-known is the Lambertian model, whose BRDF is just a constant and which is the easiest to implement. The Oren-Nayar model captures real-world objects more realistically than the Lambertian model. It assumes the surface is made up of many microfacets, each of which is a perfect Lambertian reflector. Since the model does not have a closed-form solution, the approximation presented in the original work can be used. The model considers shadowing, masking and interreflection between the facets. There is no specific importance sampling strategy for these BRDFs; samples are drawn from a cosine-weighted distribution, which is already enough to remove most of the variance (a sketch of this sampling is given below). Relevant implementations: /glue/src/material/lambertian.cpp, /glue/src/material/orennayar.cpp. Renders: Lambertian; Oren-Nayar (alpha = 0.5).
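A minimal sketch of cosine-weighted hemisphere sampling (Malley's method: pick a point uniformly on the unit disk and project it up to the hemisphere), assuming a local frame whose z axis is the surface normal; the names here are illustrative and not taken from Glue's sources:

```cpp
#include <cmath>

struct Dir { float x, y, z; };

// Sample a direction with pdf cos(theta) / pi over the hemisphere around +z.
// u1, u2 are uniform random numbers in [0, 1).
Dir sample_cosine_hemisphere(float u1, float u2)
{
    const float pi = 3.14159265358979f;
    float r   = std::sqrt(u1);          // radius on the unit disk
    float phi = 2.0f * pi * u2;         // angle on the unit disk
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(1.0f - u1);     // project up: cos(theta)
    return { x, y, z };
}
```

With this pdf the cosine factor of the rendering equation cancels exactly for a constant (Lambertian) BRDF, so the remaining variance comes only from the incoming radiance; for Oren-Nayar the same distribution is simply reused.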

What is this?

Hello there. Glue is a physically based renderer in which I plan to implement many different types of integrators, materials, lights etc. The reason I call it a physically based renderer rather than a path tracer or a ray tracer is that it will not contain only these integrators. I started to work on this project approximately two months ago and have read several books, papers and other resources. My aim is to publish a blog post for every paper that I implement and that is worth consideration. I am going to try to summarize what each paper states and show its advantages and/or disadvantages. I am also going to talk about how Glue is structured from time to time. Glue is designed to let as many parameters as possible be changed from the scene file, so that their effects can be explored. However, some parts of it cannot be changed from the scene file, since static polymorphism is chosen over runtime polymorphism for performance reasons. Glue is not a completely research...