Most Arab markets can be described as vivid, colorful, and vibrant. Not so the Souk in Aleppo, Syria. Destroyed in the horrific war, it now stands as an example of the surreal reality of war.
The motivational image is based on a video by Eva zu Beck, who visited Aleppo in September 2019. I have edited the image to add light rays and chromatic aberration.
The narrow cobblestone road is covered by a metal sheeting roof riddled with bullet holes, which cast light rays into the slightly hazy atmosphere. The doors of the shops are shut, with rubble piled up in front of them. At the end of the road, we can see the blurred outlines of two adults and a young child. Towards the corners of the image, strong chromatic aberration adds to the surreality of the scene, giving the viewer an uncomfortable, nauseating feeling.
To better control the view of the scene, I plan to implement an advanced camera model supporting depth of field, lens distortion, and chromatic aberration.
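The depth-of-field part can be sketched with the classic thin-lens model: the pinhole ray's origin is jittered over the aperture and the ray is re-aimed at the plane of focus, so only geometry on that plane stays sharp. The function name and vector conventions below are illustrative, not Nori's actual API:

```python
import math

def thin_lens_ray(origin, direction, aperture_radius, focal_distance, u1, u2):
    """Turn a pinhole camera ray into a thin-lens ray (camera space, z = view axis)."""
    # Sample a point on the circular aperture from two uniform numbers u1, u2
    r = aperture_radius * math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    lens = (origin[0] + r * math.cos(phi), origin[1] + r * math.sin(phi), origin[2])
    # Find where the original ray pierces the plane of focus
    t = focal_distance / direction[2]
    focus = tuple(origin[i] + t * direction[i] for i in range(3))
    # The new ray starts at the lens sample and passes through that focus point
    d = tuple(focus[i] - lens[i] for i in range(3))
    n = math.sqrt(sum(c * c for c in d))
    return lens, tuple(c / n for c in d)
```

With an aperture radius of zero this degenerates back to the pinhole camera, which is a convenient sanity check for the implementation.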
To create the hazy atmosphere and the light rays, I will implement homogeneous participating media.
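In a homogeneous medium, transmittance follows the Beer-Lambert law and free-flight distances can be sampled in closed form; a minimal sketch, assuming a constant extinction coefficient sigma_t:

```python
import math

def transmittance(sigma_t, distance):
    # Beer-Lambert law: light is attenuated exponentially in a homogeneous medium
    return math.exp(-sigma_t * distance)

def sample_free_path(sigma_t, u):
    # Invert the transmittance CDF to sample a scattering distance from u in [0, 1)
    return -math.log(1.0 - u) / sigma_t
```

By construction, the transmittance at a sampled distance equals 1 - u, which makes the pair easy to unit-test.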
Many materials in the scene are neither purely diffuse nor purely conductive. The metallic doors, the rough cobblestones, and the concrete walls will benefit from additional BSDF models. I plan to implement a rough conductor (with the Beckmann distribution) and a rough diffuse BSDF.
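The core of the rough conductor is the Beckmann normal distribution D. A sketch for an isotropic roughness alpha is below; a useful validation is checking numerically that it satisfies the normalization constraint, i.e. that D times the half-vector cosine integrates to one over the hemisphere:

```python
import math

def beckmann_d(cos_theta_h, alpha):
    """Isotropic Beckmann microfacet distribution for half-vector angle theta_h."""
    if cos_theta_h <= 0.0:
        # Half vectors below the surface have zero density
        return 0.0
    c2 = cos_theta_h * cos_theta_h
    tan2 = (1.0 - c2) / c2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * c2 * c2)
```

Small alpha values concentrate the distribution around the normal (near-mirror), large values spread it out (rough metal).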
The motivational image benefits greatly from bump mapping, as it contains many models with uneven surfaces (e.g. the cobblestones and the roll-up doors).
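The standard bump-mapping recipe perturbs the shading normal by offsetting the surface tangents with the height-field gradient; a sketch, assuming dpdu/dpdv span the tangent plane and dh_du/dh_dv are finite differences of the height texture:

```python
import math

def bump_normal(dpdu, dpdv, n, dh_du, dh_dv):
    """Perturb the shading normal n by a height gradient (dh_du, dh_dv)."""
    # Offset the tangents along the geometric normal by the height derivatives
    tu = tuple(dpdu[i] + dh_du * n[i] for i in range(3))
    tv = tuple(dpdv[i] + dh_dv * n[i] for i in range(3))
    # The perturbed shading normal is the renormalized cross product of the tangents
    nn = (tu[1] * tv[2] - tu[2] * tv[1],
          tu[2] * tv[0] - tu[0] * tv[2],
          tu[0] * tv[1] - tu[1] * tv[0])
    length = math.sqrt(sum(c * c for c in nn))
    return tuple(c / length for c in nn)
```

A zero gradient leaves the normal unchanged, so a constant height texture must render identically to the unbumped surface.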
I want to use image textures on the models.
I plan to use models from the internet for most of the scene. I will then combine them into the complete scene and edit them as necessary.
To help control the light rays, I will implement a directional light.
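A directional light has no position, only a fixed emission direction and radiance, so sampling it is deterministic (a delta distribution). A minimal sketch with illustrative names:

```python
def sample_directional_light(light_dir, radiance):
    """Sample a directional emitter: every shading point sees the same incident direction."""
    # Incident direction wi points from the surface towards the light,
    # i.e. opposite to the emission direction
    wi = tuple(-c for c in light_dir)
    # Delta distributions carry a pdf of 1 by convention; the light is at infinity,
    # so the shadow ray only checks for any occluder along wi
    return wi, radiance, 1.0
```

This makes the light-ray geometry fully controllable: all rays through the bullet holes stay parallel, which is exactly the look of sunlight in the motivational image.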
I might want to apply some denoising, e.g. Noise2Noise, which has a reference implementation on GitHub.
To speed up the rendering of the scene, I could use the Euler cluster.
To validate my implementation, I will create a scene (similar to the one in pbrt) and render it in Mitsuba. I will then compare its results with mine using tev, paying particular attention to the noise distribution.
Again, I will create a scene to render in both Mitsuba and Nori. A possible scene might be a plane with a grid motif or a scene with vertical pillars.
Validating the chromatic aberration is harder, as neither Mitsuba nor PBRT implements this feature. I will therefore use a method similar to Benedikt Bitterli's and render a plane with a dotted texture. Alternatively, I can check my results against other high-contrast scenes. This does not allow for a mathematically rigorous validation, but it should demonstrate a visually correct implementation.
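As a first approximation for such visual checks, transverse chromatic aberration can be modeled in screen space by radially scaling the sample position slightly differently per color channel. This is a sketch, and the per-channel scale factors below are made-up parameters, not measured values:

```python
def aberrate_uv(u, v, channel_scale):
    """Radially scale a screen-space position (centered at 0, 0) for one color channel."""
    # A scale slightly above/below 1 shifts that channel outward/inward; the offset
    # grows with distance from the center, producing colored fringes in the corners
    return u * channel_scale, v * channel_scale

# Example: red pushed outward, blue pulled inward relative to green (hypothetical values)
scales = {"r": 1.01, "g": 1.00, "b": 0.99}
```

On a dotted-plane test scene, each dot should then split into red/blue fringes that grow towards the image corners while the center stays aligned.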
To validate the implementation of homogeneous participating media, I will create a scene and render it in both Mitsuba and Nori. In particular, I will combine it with other features (like the directional light).
Again, I will compare my results with scenes rendered in Mitsuba.
To validate the bump mapping implementation, I will again compare my results with scenes rendered in Mitsuba.
To validate the image textures, I can simply compare the renderer's output with the actual model in a 3D editor to check for correct alignment. To obtain perfectly overlapping images for comparison, I will probably still render a textured model in Mitsuba as well.
For the directional light, I will again compare my results against a scene rendered in Mitsuba. I have already created the scene I plan to use (shown in the image above).