Rendering

Rendering: Render pass testing

For my project, I was tasked with providing suitable images to send to the University for use in the Final Year Exhibition catalogue. Since these would be printed in a physical catalogue, the images needed to be created at 300dpi and in the CMYK colour format for the best print compatibility. 

For my rendering, I wanted to utilise AOVs (or Render Passes) within Maya. Render passes are, in effect, separate layers of the render, allowing individual components such as the diffuse to be viewed apart from the full output. The full output, known as the 'beauty pass', has all of these layers combined into a single image sequence (or video). I also wanted to learn these because they are an essential part of VFX and compositing, an industry I am interested in learning about and perhaps seeking employment in. 

I had never used render passes before, but figured that developing these Final Year Exhibition images would be a great opportunity to test them before rendering my final project. A single image is easy to render; a whole animation is absolutely not. Testing via still images would therefore be a much quicker and safer approach, and could be done during this stage of development, so I decided to go ahead with this plan. 


To start with render passes, you need to open Maya's Render Settings menu. From here, the AOV tab allows you to select from a range of pre-created renderable passes. From the list, I selected the following:
  • Z - The Z pass measures depth in the scene. This allows for depth of field, which can be freely tweaked in the compositing stage. This will be very useful for adding realism to my scene, as well as completing one of the shots. 
  • Diffuse - This will render all of the colour information from my scene. 
  • Diffuse Albedo - This will also render all colour information, but devoid of any shadows (hence the term albedo). 
  • Emission - Unusually, this pass enables separate control of the fog created in my scene. 
  • Shadow Matte - This will render out all shadow information. The pass is inverted, but can easily be flipped during compositing. 
  • Specular - Renders all reflectivity and shiny materials in the scene.
However, one crucial pass was missing: ambient occlusion. Whilst the Shadow Matte pass accounts for all the major shadows, ambient occlusion is responsible for "simulating the soft shadows that should naturally occur when indirect or ambient lighting is cast out onto your scene" (Ambient Occlusion: What You Need to Know, 2014). Strangely, Maya does not offer this pass by default, but it can be created manually with relative ease. I wanted to try to include ambient occlusion as it could push my character the extra mile towards a realistic look. 


The first step is to create an aiAmbientOcclusion node within the Hypershade menu. To link it to a render pass, the exact name of the node needs to be copied (aiAmbientOcclusion1). 



Back in the Render Settings menu, under the AOV tab, you can choose 'Add Custom'. I named my custom render pass 'AO'. Under the drop-down menu of this new pass (highlighted in yellow in the above image) you can select an AOV node, and on the right you can paste the name of the aiAmbientOcclusion node into the Shader section, as seen in the image on the right-hand side. To my knowledge, this applies the AO material node to the scene and adds it as a render pass. You can also change some of the settings in the ambient occlusion material; the only change I made was to increase the samples from 4 to 5 for a small quality boost. 


Now that all of the render passes are set up, the last step is to configure the main render settings. An important setting to keep checked is the 'Merge AOVs' box, so that all the AOVs are contained in a single image. A crucial step with render passes is to use the .EXR format. .EXR files are large and largely uncompressed, but offer a big upgrade in output quality. The key benefit of an .EXR image is that it can store the render passes as 'layers' within the output file, which can then be accessed and separated in the compositing stage. 

Once that was all ready, it was time to test a render of my character. 



Here is a render of my character, with the render settings samples visible on the right. It is worth noting that this is a camera angle I have chosen for the Final Year Exhibition catalogue; I want to include a side profile and a front profile of my main character. 

The render settings samples are at their defaults. This may be suitable for average use, but I expect higher samples will be necessary for a project such as mine, where every bit of detail is valuable. 


Within the render, you can view the individual render passes separately. Above is the Ambient Occlusion pass that I created, which you can see is working successfully. Using this or the shadow pass is great for spotting noise and troubleshooting the level of sampling a project might require. In the above image, I have zoomed into a spot containing a lot of noise in the AO pass, which suggests the diffuse requires more samples. I will also want to increase the samples for Specular and Camera (AA), since my project has a lot of specular detail and, to my knowledge, Camera (AA) should be the highest sample value in a render. 


Here I have tested a render using much higher samples, previewing the Shadow Matte this time. The render was done with Camera (AA), Diffuse and Specular increased to 4 each, which may not seem significant, but it massively increases render times. For comparison, the first render took about a minute, whereas this frame took nearly 10 minutes to complete. 

Even with a large increase in samples, noise can still be seen, though it is massively decreased compared to the first image I posted displaying noise. It is also worth keeping in mind that this noise is being viewed from very close up, purely to spot any big issues; the image would never be viewed at this distance.

I decided to deal with the noise later and use the increased samples for now, since I would only need to render two still frames. I wanted to test the render passes and see how successful they were before committing to them. I rendered out two stills: the one shown earlier and another from a different camera angle. It is worth noting that I rendered these images at 300dpi to make them more suitable for print-based work, as they will be printed in a physical catalogue. 
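As a sanity check on the print sizing, the physical dimensions of an image at 300dpi are simply its pixel dimensions divided by the DPI. A minimal sketch of this calculation (the 1920x1080 resolution is only an assumed example, not my actual render size):

```python
# Physical print size (in inches) of a raster image at a given DPI.
def print_size_inches(width_px, height_px, dpi=300):
    return (width_px / dpi, height_px / dpi)

# Example: a 1920x1080 render at 300dpi prints at only 6.4 x 3.6 inches,
# which shows why print work demands far higher pixel counts than screens.
print(print_size_inches(1920, 1080))  # (6.4, 3.6)
```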


Once I had rendered out both images, I saved them as multi-layered .EXR images and imported them into After Effects, my chosen program for compositing. In order to utilise the different render passes, the 'EXtractoR' effect needs to be applied so you can select which render pass to use; otherwise it defaults to the beauty pass (all the passes combined into the full image). Above is a demonstration of how these images have been layered and used in my composition. 

Shot 1 - Raw

Here is the resulting image for shot 1, a side profile of the character. I did not make many changes to the result, other than sharpening it to make it more contrasted and clear. Overall the image quality is excellent and nicely showcases a lot of the detail in my character. 

Shot 2 - Raw

Here is the front profile shot of my character, with the same setup of render passes and contrast for consistency. Again, the image came out with excellent quality. It is worth noting that I have utilised the Z-Depth pass in the above images, but I will detail this more in the compositing section once I am compositing the final renders of my animation and 360 video. 

Shot 1 - Edited

From here, I imported the images into Photoshop. I did this to ensure the resolution and pixels per inch were correct, as well as to convert the images to CMYK colour. Finally, I added a slight blue tint over the images to make the atmosphere feel colder and more depressing, helping to convey my story. This matters because people will see my project images in the catalogue before viewing the final result. 

Shot 2 - Edited 

Here is the front profile shot, modified in the same manner as shot 1 for consistency. I was very pleased with the final result and felt the visuals were excellent. These two images were submitted for print in the Final Year Exhibition catalogue, as mentioned earlier. 

Overall, the render passes appeared to be very successful, giving me full control over all the elements. However, I did not utilise them as much as I had imagined; perhaps the rendered result was simply good enough not to warrant further manipulation. The ambient occlusion pass was a nice addition, though, as was using the Z pass to create a depth of field effect. I will use the same render passes for my final render of the animation and 360 video. 

Rendering: Main animation

Before rendering my animation, I wanted to be absolutely sure I was rendering something suitable and professional. I did not want to render the entire sequence only to realise my animation had a severe issue, such as a bad case of noise. As such, I decided to perform a few test renders to evaluate the noise in my output and how I could go about removing it. I wanted to compare the results to see what level of samples I would need in the render quality settings to reach the quality I was aiming for, as well as to evaluate the results of the Arnold Noice denoiser. 


Test render #1: 
Samples: 3, 3, 3, 2, 1, 1
Denoised: No
Approximate render time: 3 minutes


Test render #2:
Samples: 3, 3, 3, 2, 1, 1
Denoised: Yes
Approximate render time: 3 minutes +15 seconds


Test render #3:
Samples: 4, 4, 4, 2, 1, 1
Denoised: No
Approximate render time: 8 minutes 30 seconds


Test render #4:
Samples: 4, 4, 4, 2, 1, 1
Denoised: Yes
Approximate render time: 8 minutes 30 seconds + 15 seconds

I am glad that I decided to perform these tests, as there was some noise that was only easily spotted in motion. All the videos look good, but the lower samples can definitely be spotted as the noise is more significant. However, I was very disappointed that even the highest level of samples still produced noise, even with denoising applied. In fact, the denoising process seemed to have little to no effect on the noise in my images. I specifically used the Arnold Noice denoiser since it was suggested to be good for batch renders/image sequences. Perhaps my settings were not properly tuned for my project, though unfortunately I lacked the knowledge of which settings to change to achieve a better output. In addition, I noticed that denoising caused the render passes to be lost. Ultimately I decided the denoiser was not worthwhile due to its drawbacks and lack of effectiveness. 

Overall, it is clear that the higher samples made a good difference to the noise. In terms of overall quality, though, I personally cannot distinguish between the two results. Ultimately, as someone who aspires to pursue a career in 3D VFX, issues such as noise are important to solve, since they could be a turn-off for potential employers or other industry-level visitors to the Final Year Exhibition. However, there was a definite cause for worry in the per-frame render times, which increased significantly with the higher samples. 
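The jump from roughly 3 minutes to 8 minutes 30 seconds per frame is roughly consistent with how Arnold-style renderers scale their sampling: to my understanding, each sample control contributes the square of its value to the per-pixel ray count. A minimal sketch of that model (the formula is my assumption from Arnold's documentation, simplified to just the Camera (AA), Diffuse and Specular controls):

```python
# Rough per-pixel ray count for an Arnold-style sampler, where
# Camera (AA) rays are squared and each camera ray then spawns
# diffuse^2 and specular^2 secondary rays. Simplified model, not exact.
def rays_per_pixel(aa, diffuse, specular):
    return aa * aa * (1 + diffuse * diffuse + specular * specular)

low = rays_per_pixel(3, 3, 3)    # 9 * 19 = 171 rays
high = rays_per_pixel(4, 4, 4)   # 16 * 33 = 528 rays
print(high / low)                # ~3.1x, in the ballpark of the observed ~2.8x
```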

I did some calculations to estimate how long the rendering process would take at both levels of samples, with a total of 848 frames to render. 

Samples 3, 3, 3, 2, 1, 1:
Approximately 3 minutes per frame (or 180 seconds)
180 * 848 = 152,640 seconds | 42.4 hours | 1.77 days 

Samples 4, 4, 4, 2, 1, 1:
Approximately 8 minutes 30 seconds per frame (or 510 seconds)
510 * 848 = 432,480 seconds | 120.1 hours | 5 days

At this point in production, I had roughly 2 weeks left to complete the rendering of both the animation and the 360 degree video, as well as compositing and editing, so there was definite cause for concern about time. However, I still felt strongly about the potential noise problems at lower samples. Furthermore, the animation was the main part of my project and should be done to the best of my ability. Given the roughly 2 weeks remaining, I felt a 5 day render (a 3.2 day increase over the lower samples) was justified for the reasons detailed above. 
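The calculations above can be wrapped in a small helper so that different per-frame times and frame counts are easy to compare; a minimal sketch using my estimates:

```python
# Estimate total render time for an image sequence from the
# per-frame render time (seconds) and the number of frames.
def total_render_time(seconds_per_frame, frame_count):
    total_seconds = seconds_per_frame * frame_count
    return {
        "seconds": total_seconds,
        "hours": round(total_seconds / 3600, 1),
        "days": round(total_seconds / 86400, 2),
    }

print(total_render_time(180, 848))  # 152,640 s | 42.4 hours | 1.77 days
print(total_render_time(510, 848))  # 432,480 s | 120.1 hours | 5.01 days
```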


One final test I decided to do was for the use of motion blur in my project. It is important to include, as even small movements will simply look unrealistic without motion blur. However, I wanted to ensure the level of motion blur I enabled was suitable, so I rendered a small portion of my animation to check how it looked. Since I had to view it in motion, I picked the part of the animation with the most movement (when the Greatsword is lifted up). 

Test render #5
Samples: 3, 3, 3, 2, 1, 1
Denoised: No
Approximate render time: 3 minutes 30 seconds
Depth of field: Yes

I had left the Motion Blur value at its default and enabled it for this render. From the video above, I felt this level of motion blur was very suitable and good to use in my project. 

At this point, everything was ready for rendering the animation. I am still tweaking the final animation, so once that is done I will start the render using 4, 4, 4, 2, 1, 1 samples. 


Rendering: 360 video

As a secondary part of my project and a varied mode of presentation, I decided to create a 360 degree video of my character. The idea was that a person could view my character from all angles and take in all of the detail I had worked hard to create. In addition, interactivity should be a great addition to my exhibition and boost its appeal (assuming the exhibition takes place physically). 


To start, a new camera needs to be created with Spherical mode enabled. This creates the template for a 360 video, capturing all of the surrounding space into one large image. 


I wanted to test how a 360 degree video worked, so I created a quick test render. Here is the resulting image with the camera's Spherical mode enabled. When I had initially thought about a 360 degree video, I had hoped it would work a bit like a 3D model inspector, where you can drag around a central point. Unfortunately this was not the case, as placing the camera at the centre of the model just caused the view to be blocked by its geometry. 



At this point, the 'template' of the 360 degree video is ready to be converted into an actual 360 video. This works by using an application to 'inject' metadata into the video or image, converting it into a 360 degree video that actually allows the user to look around. I used the Spatial Media Metadata Injector to achieve this, and it was really simple: all I had to do was select my video in the interface, check the first option and select 'Inject metadata'. 


Here is the result, uploaded to YouTube, a platform which supports 360 degree videos. As you can see, the injection of metadata has wrapped the large image around a sphere, with the camera at the centre able to rotate and view a smaller section of the image at a time. 

However, the above video has a very obvious issue: the resolution. Something I had not considered was that, once the video is injected, the viewer in essence zooms into a smaller portion of the image I had rendered, making the lack of quality apparent. It was clear I had to increase the resolution of the 360 degree render. 

Screen resolutions example (Wakefield, n.d.)

I went back to the drawing board to look at different possible resolutions, and what higher resolution might be suitable. 


For reference, the 360 degree video shown earlier was rendered at a resolution of 1920x1080. 


The above image demonstrates why my 360 video lost so much quality. Although 1920x1080 is a very acceptable resolution, the blue square in the image represents what actually gets shown in the 360 video, which is only a small portion of the frame. For the quality to be retained, the video would need to be rendered at a much higher resolution. 


I used a very rough adaptation of the template to help identify a suitable resolution. The blue square in the centre represents the main viewing space of the 360 degree video, whereas the low-opacity red image shows how much is cut off when converted into a 360 degree view. If I wanted the visible section alone to use a resolution of 1920x1080, my full video would need to be much larger than 4k. 
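This can also be estimated numerically: a spherical render covers 360 degrees horizontally, so if the viewer's window spans roughly 90 degrees, it only sees about a quarter of the frame's width. A minimal sketch (the 90 degree field of view is an assumption on my part, as the actual viewport varies by player):

```python
# Estimate the full equirectangular frame width needed so that the
# visible viewport (an assumed ~90 degree field of view) still has
# a target pixel width.
def required_equirect_width(viewport_width_px, viewport_fov_deg=90):
    return int(viewport_width_px * 360 / viewport_fov_deg)

# For a 1920 px wide viewport, the full frame would need to be
# 7680 px wide (8K), well beyond 4k's 3840 px.
print(required_equirect_width(1920))  # 7680
```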
YouTube supported resolution sizes (Uifalean, 2019)

However, this was not easily achievable. 4k is already a huge resolution that dramatically increases render times. Moreover, I was interested in hosting my 360 degree video on YouTube, mainly to put my work out there, but I was also toying with the idea of using the 360 degree video either in a video player or hosted via YouTube, since YouTube offers some interesting interactivity that could benefit my exhibition space. 

Ultimately, it would not be feasible to render at a resolution higher than 4k for various reasons. Despite this, 4k is still a great resolution and should maintain the level of quality I am looking to preserve in my video. As a result, I decided to render my 360 video in 4k resolution, specifically 3840x2160. 



Admittedly, I had done a test at 5k resolution before realising that 4k was the maximum resolution supported by YouTube. However, this 360 video (converted to the 4k maximum) still demonstrates a lot of quality and looks really good compared to the previous one done at 1920x1080. It was clear that 4k was the way to go, as the detail in the above video should be more than good enough, especially considering the nature of a 360 degree video. 

There was one final thing I wanted to do with my 360 degree video: create a kind of turntable within it. Currently the 360 degree video is basically a still with some idle animation to keep it from seeming too static, but you only ever see the front of the character. It wouldn't be ideal to show only one area; the viewer should be able to see ALL of the character. By using a turntable to orbit the character, the viewer could easily view it from all sides, with the ability to drag through the video to see specific points if they desire. This seemed like the most feasible option for the scope of a 360 degree video. 


To do this, I started by creating a NURBS circle to serve as the path the camera follows. I matched the circle to be in line with the current camera view, as I was pleased with the current distance from the camera to the character. I then created a Motion Path and attached the camera to it using the settings demonstrated in the image above, causing the camera to travel along the NURBS circle. 


This required a bit of tweaking of the animation. To begin with, the camera used a smoothed Bezier curve, so it would start slowly, accelerate, then eventually decelerate. For a turntable-style video this was not suitable, as I wanted the video to be loopable; this would allow the exhibition space to run unattended, in theory forever. For it to loop perfectly, I had to use linear interpolation so the movement had a constant speed. 
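To illustrate why constant speed matters for the loop: with a linear parameterisation of the circular path, the step per frame is constant, so the final frame lands exactly back on the first. A minimal sketch of this idea (the radius and frame count are illustrative, not my actual scene values):

```python
import math

# Camera position on a circular turntable path at a given frame,
# using constant (linear) angular speed.
def camera_position(frame, total_frames, radius=10.0):
    angle = 2 * math.pi * (frame % total_frames) / total_frames
    return (radius * math.cos(angle), radius * math.sin(angle))

# Because the angle wraps around exactly, frame `total_frames`
# coincides with frame 0 and the video loops without a visible jump.
print(camera_position(1152, 1152) == camera_position(0, 1152))  # True
```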

In addition, I had added 48 frames at the start to allow the nCloth to fully form. I wanted to be absolutely sure the cloth would settle properly before the 360 video started, so it would hopefully not be obvious when the video loops. For reference, if you go back to the first 360 video linked earlier and loop it, that one only uses 24 frames of additional rendering for the nCloth to form; it is very apparent when the video loops, as the cloth suddenly jumps back to a position where it still hadn't fully formed. Using that video as a more accurate estimate, I was hoping 48 frames would be plenty. 

 
Finally, I tested the speed of this camera motion. I wanted it to be quite slow, allowing the viewer to get a good feel for my character and absorb the detail. The above video shows the original speed I had, which was way too fast. 


In the above video, I have slowed the camera movement to about half its original speed. However, it still feels too fast, so I wanted to slow it down further.

At this point, I ran into a potential problem. I had tested how long the 360 renders would take at samples of both 3 and 4. Here were my results:

Samples: 3, 3, 3, 2, 1, 1
Duration: Approximately 6 minutes 15 seconds

Samples: 4, 4, 4, 2, 1, 1 (Same as animation)
Duration: Approximately 14 minutes

Already the render times are significantly longer than those of the animation, taking nearly double the time at both levels of sampling. At this point I had rendered my animation, and I needed to render the 360 video as soon as possible, since I had 7 days left to complete my project. The latest turntable camera motion used 768 frames, which would take approximately 3.3 days to render at samples of 3, or 7.5 days at samples of 4. Even at the faster camera speed, rendering with samples of 4 was therefore impossible in the time left, so I would be forced to use the slightly lower samples of 3, 3, 3, 2, 1, 1 for my 360 video. 

I approximated that if I increased the duration of the video from 768 frames to 1152 frames (roughly 1.5x slower/longer), the 360 video would take 5 days to render. 
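The same render-time arithmetic as before confirms this estimate; a minimal sketch using the 6 minute 15 second (375 second) per-frame time measured above:

```python
# Total render time in days for an image sequence
# (86,400 seconds in a day).
def render_days(seconds_per_frame, frame_count):
    return seconds_per_frame * frame_count / 86400

print(render_days(375, 768))   # ~3.33 days at the original duration
print(render_days(375, 1152))  # 5.0 days at the slower 1152-frame version
```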


Here is a video done at the increased duration, with a slower camera pan. I still think it is slightly too fast, but it is very close. Considering this would take 5 of the 7 remaining days, I felt this was the best I could get my video to be. 

I could get the compositing done in the remaining 2 days, though this would definitely cause some significant crunch. Eventually I decided it would be worth it for the end result, to make sure the project was done to the best quality possible. 

After this entry, I have started rendering the 360 degree video. In the meantime, I would like to investigate the practicality of including sound effects in my project, as this could significantly boost immersion. 


Bibliography:

Pluralsight.com. 2014. Ambient Occlusion: What You Need to Know. [online] Available at: <https://www.pluralsight.com/blog/film-games/understanding-ambient-occlusion> [Accessed 29 May 2021].

Wakefield, J., n.d. 2K or 4K? What’s Better on a Gaming Monitor Screen. [online] Viotek. Available at: <https://viotek.com/2k-4k-whats-better-gaming-monitor-screen/> [Accessed 29 May 2021].

Uifalean, A., 2019. The Perfect YouTube Video Dimension and Size. [online] Lumen5 Learning Center. Available at: <https://lumen5.com/learn/youtube-video-dimension-and-size/> [Accessed 29 May 2021].
