Motivational Image

The Surreal Reality of War

Most Arab markets can be described as vivid, colorful, and vibrant. Not so the souk in Aleppo, Syria: destroyed by the horrific war, it is now an example of the surreal reality of war.

Demo image

The motivational image is based on a video by Eva zu Beck, who visited Aleppo in September 2019. I edited the image to add light rays and chromatic aberration.

The narrow cobblestone road is covered by a metal sheet roof riddled with bullet holes, which cast light rays into the slightly hazy atmosphere. The doors to the shops are shut, and rubble is piled up in front of them. At the end of the road we can see the blurred outlines of two adults and a young child. Towards the corners of the image, strong chromatic aberration contributes to the surreality of the scene, giving the viewer an uncomfortable, nauseating feeling.

Feature Breakdown

Big features (≥15pts)

Advanced Camera Model (15pts)

To better control the view of the scene, I plan to implement an advanced camera model. It will include depth of field, lens distortion, and chromatic aberration.
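
As an illustrative sketch of the depth-of-field component, assuming the usual thin-lens approach (sample a point on the aperture, then re-aim the ray at the plane of focus); the parameter names below are my own and not part of Nori:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Thin-lens depth of field: instead of tracing the pinhole ray directly,
// sample a point on a circular aperture and re-aim the ray at the plane of
// focus. `lensRadius` and `focalDistance` are hypothetical camera parameters.
struct ThinLensRay {
    Vec3 origin;     // new ray origin on the lens (camera space)
    Vec3 direction;  // new, unnormalized ray direction (camera space)
};

ThinLensRay sampleThinLens(const Vec3 &pinholeDir, float lensRadius,
                           float focalDistance, float u1, float u2) {
    const float kPi = 3.14159265358979f;

    // Uniformly sample a point on the lens disk from two uniform numbers.
    float r = lensRadius * std::sqrt(u1);
    float phi = 2.0f * kPi * u2;
    Vec3 lensPoint{r * std::cos(phi), r * std::sin(phi), 0.0f};

    // Point on the plane of focus hit by the original pinhole ray
    // (the camera looks down +z in this sketch, so pinholeDir.z > 0).
    float ft = focalDistance / pinholeDir.z;
    Vec3 focus{pinholeDir.x * ft, pinholeDir.y * ft, focalDistance};

    // The new ray starts on the lens and passes through the focus point.
    return {lensPoint, {focus.x - lensPoint.x, focus.y - lensPoint.y,
                        focus.z - lensPoint.z}};
}
```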

Homogeneous participating media (15pts)

To create the hazy atmosphere and the light rays, I will implement homogeneous participating media.
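
A minimal sketch of the two core ingredients, assuming a single constant extinction coefficient (illustrative helpers, not Nori's actual interface):

```cpp
#include <cmath>

// Homogeneous medium sketch with a single constant extinction coefficient
// sigmaT (per-channel extinction is omitted for brevity).

// Beer-Lambert transmittance over a distance t.
float transmittance(float sigmaT, float t) {
    return std::exp(-sigmaT * t);
}

// Sample a free-flight distance from the exponential distribution, given a
// uniform random number xi in [0, 1); also returns the corresponding PDF.
float sampleDistance(float sigmaT, float xi, float &pdf) {
    float t = -std::log(1.0f - xi) / sigmaT;
    pdf = sigmaT * std::exp(-sigmaT * t);
    return t;
}
```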

Smaller features (<15pts)

Rough conductor BSDF (5pts) and Rough diffuse BSDF (5pts)

Many materials in the scene are neither purely diffuse nor purely conductive. The metallic doors as well as the rough cobblestones and concrete walls will benefit from additional BSDF models. I plan to implement a rough conductor BSDF (using the Beckmann distribution) and a rough diffuse BSDF.
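
For reference, the Beckmann normal distribution I intend to use looks roughly like this (standard textbook formula evaluated in the local shading frame; not Nori's actual code):

```cpp
#include <cmath>

// Beckmann microfacet distribution D(wh) for roughness alpha, evaluated in
// the local shading frame where the surface normal is +z.
float beckmannD(float alpha, float cosThetaH) {
    if (cosThetaH <= 0.0f)
        return 0.0f;
    const float kPi = 3.14159265358979f;
    float c2 = cosThetaH * cosThetaH;
    float tan2 = (1.0f - c2) / c2;   // tan^2(theta_h)
    float a2 = alpha * alpha;
    return std::exp(-tan2 / a2) / (kPi * a2 * c2 * c2);
}
```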

Bump Mapping (5pts)

The motivational image can benefit a lot from bump mapping, as it contains many models with uneven surfaces (e.g. the cobblestones and the roll-up doors).
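
A possible sketch of the normal perturbation, assuming a scalar height map and a tangent-frame convention where the unperturbed normal is (0, 0, 1); the interface is made up for illustration:

```cpp
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

static Vec3 normalize(const Vec3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Bump mapping sketch: perturb the shading normal in the local tangent frame
// using finite differences of a scalar height map h(u, v). The height map is
// passed in as a callable; `eps` is the finite-difference step.
Vec3 bumpNormalLocal(const std::function<float(float, float)> &height,
                     float u, float v, float eps = 1e-3f) {
    float dhdu = (height(u + eps, v) - height(u - eps, v)) / (2.0f * eps);
    float dhdv = (height(u, v + eps) - height(u, v - eps)) / (2.0f * eps);
    // In the tangent frame the unperturbed normal is (0, 0, 1); the gradient
    // tilts it against the slope of the height field.
    return normalize({-dhdu, -dhdv, 1.0f});
}
```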

Images as Textures (5pts)

I want to use images to texture the models.
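
As a sketch of what the lookup could look like, assuming a simple row-major RGB float buffer and bilinear filtering (illustrative only, not Nori's texture interface):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal image texture: bilinear lookup of an RGB texel from UV coordinates
// in [0, 1]^2, stored as a row-major float buffer (3 floats per pixel).
struct ImageTexture {
    int width = 0, height = 0;
    std::vector<float> rgb;  // width * height * 3

    void texel(int x, int y, float out[3]) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        const float *p = &rgb[(y * width + x) * 3];
        out[0] = p[0]; out[1] = p[1]; out[2] = p[2];
    }

    void eval(float u, float v, float out[3]) const {
        // Map UV to continuous pixel coordinates and bilinearly blend the
        // four surrounding texels.
        float x = u * width - 0.5f, y = v * height - 0.5f;
        int x0 = int(std::floor(x)), y0 = int(std::floor(y));
        float fx = x - x0, fy = y - y0;
        float c00[3], c10[3], c01[3], c11[3];
        texel(x0, y0, c00); texel(x0 + 1, y0, c10);
        texel(x0, y0 + 1, c01); texel(x0 + 1, y0 + 1, c11);
        for (int i = 0; i < 3; ++i)
            out[i] = (1 - fx) * (1 - fy) * c00[i] + fx * (1 - fy) * c10[i] +
                     (1 - fx) * fy * c01[i] + fx * fy * c11[i];
    }
};
```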

Mesh Modeling (5pts)

I plan to use models from the internet for most of the scene. I will then combine them into the complete scene and edit them as necessary.

Directional light (5pts)

To help control the light rays, I will implement a directional light.
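
A directional light is a delta emitter, so sampling it is trivial; a rough sketch, with an interface I made up for illustration:

```cpp
struct Vec3 { float x, y, z; };

// Directional light sketch: a delta emitter defined by a fixed world-space
// direction and a constant irradiance. Sampling from a shading point always
// returns the direction towards the light (opposite to the direction the
// light shines in), with no distance falloff.
struct DirectionalLight {
    Vec3 direction;   // direction the light travels in (normalized)
    Vec3 irradiance;  // RGB irradiance on a surface facing the light

    // Writes the direction from the shading point towards the light and
    // returns the contribution; the PDF of a delta light is treated as 1.
    Vec3 sample(Vec3 &wiOut) const {
        wiOut = {-direction.x, -direction.y, -direction.z};
        return irradiance;
    }
};
```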

Possible additional features

Denoising

I might want to use some denoising, e.g. Noise2Noise, which has a reference implementation on GitHub.

Euler rendering (5pts)

To speed up the rendering of the scene, I could use the Euler cluster.

Feature Validation

Depth of field
Source: Lecture slides, original from pbrt (2nd edition).

To validate my implementation, I will create a scene (similar to the one in pbrt) and render it in Mitsuba. I will then compare Mitsuba's results with mine using tev, paying particular attention to the noise distribution.

Lens distortion
Source: Automatic source camera identification

Again, I will create a scene to render in both Mitsuba and Nori. A possible scene might be a plane with a grid motif or a scene with vertical pillars.
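
For the distortion itself I would assume a simple polynomial radial model (Brown-style), applied to normalized film coordinates; k1 and k2 below are hypothetical coefficients:

```cpp
#include <utility>

// Radial (Brown-style) lens distortion sketch, applied to normalized film
// coordinates (x, y) centred on the optical axis. The validation grid makes
// the effect of k1 and k2 visible as curved lines.
std::pair<float, float> distort(float x, float y, float k1, float k2) {
    float r2 = x * x + y * y;
    float f = 1.0f + k1 * r2 + k2 * r2 * r2;
    return {x * f, y * f};
}
```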

Chromatic aberration
Source: Benedikt Bitterli

Validating the chromatic aberration is harder, as neither Mitsuba nor PBRT implements this feature. I will therefore use a method similar to Benedikt Bitterli's and render a plane with a dotted texture on it. Alternatively, I can also check my results with other high-contrast scenes. This will not allow for a mathematically rigorous validation, but it should demonstrate a visually correct implementation.
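
One simple way to produce lateral chromatic aberration is to scale the centred film coordinates slightly differently per color channel before generating the camera ray, so each channel is traced with its own ray. A sketch with made-up scale factors (my assumption, not necessarily Bitterli's method):

```cpp
#include <utility>

// Lateral chromatic aberration sketch: before generating the camera ray for a
// given color channel, scale the centred film coordinates by a per-channel
// factor. The resulting per-channel ray offsets produce color fringing that
// grows towards the image corners. The scale factors are illustrative values.
std::pair<float, float> aberrate(float x, float y, int channel) {
    static const float scale[3] = {1.000f, 1.002f, 1.004f};  // R, G, B
    return {x * scale[channel], y * scale[channel]};
}
```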

Homogeneous participating media
Source: PBR Book (online edition)

To validate the implementation of homogeneous participating media, I will create a scene and render it in both Mitsuba and Nori. In particular, I will combine it with other features (such as the directional light).

Rough conductor BSDF and rough diffuse BSDF
Source: Mitsuba Renderer documentation

Again, I will compare my results with scenes rendered in Mitsuba.

Bump Mapping
Source: Mitsuba Renderer documentation

To validate the bump mapping implementation, I will again compare my results with scenes rendered in Mitsuba.

Images as Textures
Source: My results from the checkerboard task in Assignment 1

To validate the image textures, I can basically just compare results from the renderer with the actual model in a 3D editor to check for correct alignment. However, to have perfectly overlapping images to compare, I will probably still render a textured model in Mitsuba.

Directional light
Scene with directional light rendered in Mitsuba.

For the directional light, I will again use a scene rendered in Mitsuba to compare with my results. I have already created the scene that I plan to use (shown in the image above).