Advances in Ray Tracing (English Edition)


November 27, 2023

Slide Overview

■Overview
In Advances in Ray Tracing, we discuss recent improvements to RE ENGINE's ray tracing, newly implemented features, and per-title usage and performance measurements.

Note: This is the content of the publicly available CAPCOM Open Conference Professional RE:2023 videos, converted to slides, with some minor modifications.

■Prerequisites
Some knowledge of ray tracing, or hands-on implementation experience.

I'll show you just a little bit of the content!
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
CAPCOM Open Conference Professional RE:2023
https://www.capcom-games.com/coc/2023/

Check the official Twitter for the latest information on CAPCOM R&D!
https://twitter.com/capcom_randd
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━



Text of each slide
1.

Advances in Ray Tracing

2.

Agenda: 1. Denoiser improvements 2. Ray tracing new features 3. Console performance

Today I will introduce how ray tracing in RE ENGINE has improved since the GDC 'Resident Evil Village' talk. I will break it down into three parts. First, I'll go over denoiser improvements and try to give you some inspiration on how I think when dealing with problems. Next, I'll share the new ray tracing features added in the latest RE ENGINE. Finally, I will give you the GPU execution times of ray tracing and the denoisers on PS5 as a reference.

3.

What have we done? For details, please see "Resident Evil Village: Our Approach to Game Design, Art Direction, and Graphics" - GDC 2021

At GDC 2021, the presentation "Resident Evil Village: Our Approach to Game Design, Art Direction, and Graphics" was given. Please refer to it for details. To recap, the DirectX 12 / PS5 acceleration structure APIs were implemented, and rendering members introduced ways to optimize BVH build times, especially for skeletal meshes. We also implemented a geometry-to-texture mapping framework in RE ENGINE. For performance reasons, RE ENGINE uses JSON files to map textures instead of supporting user shaders for ray tracing.

4.

What have we done? [Pipeline diagram: Preparation / Tracing / Accumulation / Filtering, with stages for Depth/Normal, Ray Direction, Sorting, Motion Vector, Disocclusion, History, Moment, Firefly, Temporal, Spatial, Wavelet / Bilateral filters, Shading, and Composite.] For details, please see "Resident Evil Village: Our Approach to Game Design, Art Direction, and Graphics" - GDC 2021

As for me, after the basic ray tracing framework was established, I checked the potential console performance and the limited budget for ray tracing. With a low sample rate, the final results are full of devastating noise. I tried to solve this noise while also optimizing the tracing and shading cost. At that time, no official denoiser framework was provided. I researched and invented some methods, such as indirect dispatch for the wavelet filter, which saves more than half of the denoiser's cost. The history step was also adjusted so that view-independent history can be calculated first for the dynamic tracing dispatch. Some of these technologies I learned from other companies and then improved; for example, I tried several guided filters (wavelet and bilateral). The most important thing for the filter is consistency. At the same time, it should be as close to the ground-truth result as possible. Please see GDC 2021 for more details.

5.

GI Denoise Improvement. Artists want to place lights (Emissive, IBL) without any restriction and with no noise. Reason for the noise: sometimes light can be difficult to sample; it depends entirely on the geometry and what kind of light you use.

The improvements start from this problem: artists want to use lights freely. Lights like emissive and image-based lights are highly dependent on the surrounding geometry. Some lights can be difficult to sample because rays don't know where the light is if we don't tell them. The problem we need to solve is creating all the paths between the light source and the camera. For example, say we trace a diffuse ray for 64 samples and only one of them hits the light; the other 63 miss the target and return black. If we can only trace 1 sample per frame, you will see 63 black frames and one lit frame, which causes high variance and noise. What we want instead is 64 frames that each show the one lit frame's result divided by 64.
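To make that variance argument concrete, here is a toy Monte Carlo experiment in C++ (purely illustrative, not RE ENGINE code) with a light that only 1 in 64 diffuse rays can find:

```cpp
// Toy experiment: a light that only 1 in 64 diffuse rays can hit.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double hitProb = 1.0 / 64.0;  // chance a ray finds the light
    const double lightRadiance = 64.0;  // radiance carried when it does

    // 1 sample per frame: most frames are black, one is very bright.
    double frameSum = 0.0, frameSq = 0.0;
    const int frames = 64;
    for (int f = 0; f < frames; ++f) {
        double s = (u(rng) < hitProb) ? lightRadiance : 0.0;
        frameSum += s; frameSq += s * s;
    }
    double mean = frameSum / frames;
    double var  = frameSq / frames - mean * mean;
    printf("per-frame mean %.2f, variance %.2f\n", mean, var);
    // The converged answer is hitProb * lightRadiance = 1.0, but any
    // single frame is either 0 or 64 -- exactly the flicker the
    // denoiser has to hide.
}
```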

6.

Buffered Light vs. Geometry Light. Buffered light (Directional / Punctual / Area light): sampled at every hit. Geometry light (Image-based light / Emissive): sampled only when the geometry is hit.

Buffered lights like directional / punctual / area lights relieve this problem. The light buffer is looped over and the lights are connected to the hit geometry, which is called next event estimation. You just need to check the visibility term to ensure that the light can be reached. But geometry lights such as emissive and image-based lights are difficult to sample: we don't know where the light is. If the ray didn't hit the light, we cannot create the light path, so this situation needs far more rays compared to buffered light environments. Still, I want to solve this problem with limited samples, or at least improve it a little.
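As a reference, here is a minimal next-event-estimation sketch; all types and helpers are hypothetical stand-ins, not RE ENGINE's API, but it shows why buffered lights are easy to connect while geometry lights are not:

```cpp
// Minimal NEE sketch with stubbed renderer queries (illustrative only).
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Light { Vec3 position; float intensity = 1.0f; };
struct Hit { Vec3 position; Vec3 normal; };

// Stand-ins for the renderer's shadow-ray and shading evaluation.
bool visible(const Vec3& from, const Vec3& to) { return true; }
float brdfTimesCosine(const Hit& h, const Light& l) { return 0.5f; }

float shadeWithNEE(const Hit& hit, const std::vector<Light>& lights) {
    float result = 0.0f;
    for (const Light& l : lights) {
        if (!visible(hit.position, l.position)) continue;  // shadow ray
        result += l.intensity * brdfTimesCosine(hit, l);
    }
    // Emissive geometry and IBL never appear in `lights`, so this loop
    // cannot reach them: their paths exist only when a ray happens to
    // hit the emitter, which is exactly the hard case in this talk.
    return result;
}
```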

7.

Input (Difficult Situation): Shading Results, Depth, Normal. ("Light comes from here")

The image you see on the left of the slide is the shading result from an image-based light coming through a shutter. I only traced the first bounce. Most of the traced rays cannot reach the light, which causes the sparse result here: 500 thousand rays are traced, yet only a few thousand pixels hit the light. We get the shading result, depth, and normal buffers for the guided filter.

8.

Output (1 frame only result): Before / Now

What makes it more difficult is that I need to make the result converge and stay consistent from one frame of tracing. If I can't denoise it using one frame of data, then when the camera moves you will see artifacts in some of the disoccluded pixels. Here is the comparison between the previous result and the improved one.

9.

Output (1 frame only). In the video comparison, you can see noise is substantially reduced.

Here is the video comparison. You can see noise is substantially reduced with this approach while detail is preserved.

10.

Ideas: U-Net, Delaunay Triangulation, Probes, Guiding Ray Direction, Guiding Light Position, Disocclusion Rays

So, how do we reach this result? There are several ways to reach a good result; if you dig deep enough, all roads lead to Rome. I will summarize what I tried in order to solve this problem.

11.

Machine Learning: U-Net, Transformer. Suitable for PC; performance is not acceptable on current generation consoles. (Olaf Ronneberger, Philipp Fischer, and Thomas Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation" *1; Sheng He, Rina Bao, P. Ellen Grant, Yangming Ou, "U-Netmer: U-Net meets Transformer for medical image segmentation" *2)

Machine learning is the best fit for a denoiser. U-Net is widely used in biomedical image segmentation and image upscaling. Recently, Transformers have also been integrated into the U-Net framework. The intuitive idea behind U-Net is to compress the source data into multiple feature levels and learn from the most abstract level; for reconstruction, all of these feature levels are used to improve the final quality. Separating source data into multiple features is critical for denoising; in fact, it's also important in most computer vision machine learning algorithms. The reason it was not selected is that current generation consoles can't afford the cost. I did research on PC with different machine learning frameworks, but I didn't have enough time and budget to implement a machine learning framework in RE ENGINE. If you do, using machine learning for the denoiser could give the best quality. The same compression / decompression idea is still used in our denoiser, even though no multilayer perceptron is involved.

12.

Delaunay Triangulation: interpolate between vertices. (Meng Qi, Thanh-Tung Cao, Tiow-Seng Tan, "Computing Two-dimensional Constrained Delaunay Triangulation Using Graphics Hardware" *3)

The Delaunay triangulation method: sparse diffuse ray samples are similar to the left image from the paper, and a Voronoi diagram can be generated from the point cloud. If the surface is a plane, we can expect the lit pixel distribution to be correlated with its surrounding luminance. Why does the diffuse result reduce to a low frequency output? Because it propagates the light uniformly. Without geometry interference, the propagation from samples should be consistent even with a limited sample rate: a sparse point cloud's distribution should match an aggregated sample's distribution, except for high frequency geometry aliasing.

13.

Lagrange Denoiser in Graphics: interpolation from Delaunay triangles. Results depend on vertex values and triangle area sizes. (Chaos docs, "Irradiance Map" *4)

The same idea is used in irradiance caching in V-Ray: you can see Delaunay triangles are generated, and the luminance result is interpolated from each vertex and weighted by the triangle's area. The difficulty is creating precise Delaunay triangles that cling to the 3D geometry. Even with guided buffers, 2D Delaunay triangles will sometimes give disfigured results. A 3D Delaunay algorithm could cost too much for real time, but solving geometry consistency is the critical issue for this method. I would like to call it the Lagrange method, because it uses discrete points to reconstruct the result. Another issue is that the generation and propagation steps are not very fast with this method. I didn't spend much time optimizing it, but if you put your energy into optimization, it could work better.
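For intuition, here is a minimal sketch of the interpolation step, assuming irradiance stored at three Delaunay vertices and standard barycentric (area-ratio) weights; this is my illustration, not V-Ray's or RE ENGINE's code:

```cpp
// Interpolate vertex irradiance across a triangle with barycentric
// weights, i.e. weights proportional to the opposing sub-triangle areas.
#include <cstdio>

struct P2 { float x, y; };

float triArea(P2 a, P2 b, P2 c) {
    return 0.5f * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

float interpolateIrradiance(P2 p, P2 a, P2 b, P2 c,
                            float ea, float eb, float ec) {
    float area = triArea(a, b, c);
    float wa = triArea(p, b, c) / area;  // weight of vertex a
    float wb = triArea(a, p, c) / area;  // weight of vertex b
    float wc = 1.0f - wa - wb;           // weight of vertex c
    return wa * ea + wb * eb + wc * ec;
}

int main() {
    P2 a{0, 0}, b{1, 0}, c{0, 1};
    // A point inside the triangle gets a blend of the vertex values.
    printf("%f\n", interpolateIrradiance({0.25f, 0.25f}, a, b, c,
                                         1.0f, 2.0f, 4.0f));
}
```

The hard part the slide describes is not this interpolation itself but generating triangles that respect the underlying 3D geometry.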

14.

Probes, the Lumen approach: trace / interpolate probes; some details are lost in interpolation, so a post fix is required. (SIGGRAPH, "Radiance Caching for real-time Global Illumination" *5)

The probe approach is widely used in Unreal Engine, where it is called Lumen. The difficulty is also in the interpolation. However, they have improved this method to the extreme, and performance and quality are well balanced. In practice, the probe approach needs pretty tricky adjustments, and several guided buffers are used. As far as I know, Epic Games uses bent normals and SSAO to improve the quality. Just read their code and you will find all the answers. They've done a considerable amount; in my opinion, it won't be easy to go beyond what they've already accomplished.

15.

Guiding Ray Direction: guided direction vs. cosine hemisphere

Guiding the ray direction toward the light also seems feasible. Compared to a cosine hemisphere distribution, the guided direction gets more lit results. This method can be implemented using reservoir sampling. In fact, the most important parameter in Monte Carlo methods is the PDF: reservoirs store the possible directions toward the light along with their PDFs. The tricky thing is that you can't always use accumulated data; instead, you need to update and clean the reservoirs to refresh your image. When you clean them depends on your implementation. For example, wrongly accumulated data can be discarded via the moment buffer, while correct history data can be regularly updated, jittered by pixels.

16.

Guiding Ray Direction: PDF updated by reservoir sampling (ray default direction / ray guiding direction; trace / shade; spatial / temporal PDF update). Advantages: speeds up convergence; more history accumulated than a moving average. Disadvantages: performs better with temporal data support; precise spatial accumulation costs performance.

For the implementation, I keep two ray direction buffers, guided and default, and blend them randomly once there is enough history. The guiding direction and its PDF are updated by reservoir sampling. The guided history can accumulate up to 500 frames, and the result is pretty good when accumulated history is used. The disadvantage of this method is that the noise cannot be solved from one frame of data. Actually, we do the same thing with spatial and temporal accumulation using a moving average weight, which is even more aggressive with the samples. Only if the reservoir buffer accumulates enough history will the guided direction be acceptable and the PDF precise enough to use: you can't guide the direction if you don't have history. And, as previously mentioned, we are trying to solve the noise using only one frame of data, so this doesn't help for that purpose.
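For reference, here is a minimal weighted-reservoir update in the spirit of this guiding scheme; the structure and fields are my assumptions, not the engine's implementation:

```cpp
// One-sample weighted reservoir: candidates stream in with a weight and
// the kept sample survives with probability proportional to its weight.
#include <random>

struct Reservoir {
    float dir[3] = {0, 0, 1};  // currently kept guiding direction
    float weightSum = 0.0f;    // running sum of candidate weights
    float count = 0.0f;        // how many candidates were seen (history)

    void update(const float candidate[3], float weight, std::mt19937& rng) {
        weightSum += weight;
        count += 1.0f;
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        // Keep the new candidate with probability weight / weightSum;
        // over the whole stream each candidate survives proportionally
        // to its weight.
        if (u(rng) * weightSum < weight) {
            dir[0] = candidate[0]; dir[1] = candidate[1]; dir[2] = candidate[2];
        }
    }
};
// To keep the image fresh, `count` would be clamped (the talk mentions
// up to ~500 frames of guided history) so stale data can be displaced.
```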

17.

Guiding Light Position. Objective: convert geometry lights to the light buffer dynamically. If emissive, get the position from the hit geometry. If IBL, ray march the distance from the camera distance field. Save light positions and fill empty pixels with a guided filter. Connect other hit positions to the light path randomly.

How about guiding the light position using the camera distance field? If the emissive / IBL positions are saved to buffers like punctual lights, you just need to do what next event estimation does: connect the path and shoot visibility rays. If the hit geometry is emissive, it's simple: you can get the position from the geometry. If the IBL is hit, you need to take the nearest distance toward that point. Ray marching through the camera distance field tells you which side you are on: if the camera distance is larger than the ray-marched distance, you are inside the room; if the camera distance is smaller, you are outside the room. After getting those light positions, propagate them to the pixels without light and do next event estimation at the final stage. The reason I didn't use this method is that it needs shadow rays to check whether small geometry lights are reachable, and geometry lights don't have shadow maps.

18.

Guiding Light Position (continued, same slide bullets as above).

Another reason is that the camera distance field itself is needed, and there was not enough budget. Still, depending on the concrete problem you need to solve, this method could give you precise, accelerated results from one frame of data. The objective of this method is to build an automatic polygon light system from traced lit geometry. If development is in the early stages, polygon lights could also be set by the artists.
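A sketch of how I read the propagation step, with hypothetical names throughout: pixels whose rays hit an emitter write its position into a light buffer, and empty pixels adopt a neighbor's entry so they can be shadow-tested like punctual lights:

```cpp
// Illustrative light-position buffer propagation (not engine code).
#include <optional>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct LightSample { Vec3 position; float radiance = 0.0f; };

// Per-pixel buffer: filled only where a traced ray actually hit an emitter.
using LightBuffer = std::vector<std::optional<LightSample>>;

// An empty pixel adopts the nearest filled neighbor, converting the
// geometry light into a buffered light it can then connect to with
// next event estimation. (Row wrap-around ignored for brevity; wider
// guided-filter passes would fill the remaining gaps.)
std::optional<LightSample> gatherNeighbor(const LightBuffer& buf,
                                          int idx, int width) {
    const int offsets[4] = {-1, +1, -width, +width};
    for (int o : offsets) {
        int n = idx + o;
        if (n >= 0 && n < (int)buf.size() && buf[n]) return buf[n];
    }
    return std::nullopt;
}
```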

19.

Disocclusion Rays: 960x540 at 1 ray (half in console) → history rejection → trace 1 ray, or trace more rays (16-64) in a 120x67 buffer at 64 rays (half in console) → blend. More budget is available in the low resolution structure, and low resolution data is easier to filter.

Another idea is tracing more rays depending on the history. Artists don't want to make visual changes to their product close to the game's release; they only want the denoiser to work better in difficult scenes. And console performance cannot accommodate tracing too many rays; in some difficult scenes, results don't converge even when tracing 128 rays per pixel. You need to propagate low frequency light while preserving high frequency visibility. So we extract the useful information, reduce the buffer size, and make it dense enough to propagate; the same idea appears across other research, such as machine learning. The buffer is squeezed to a small resolution, and consoles use checkerboarding, which is even smaller. In this buffer, 16 to 64 rays are traced per pixel. Comparing a 540p image at 1 sample per pixel with a 67p image at 64 samples per pixel, the 67p image is substantially clearer and easier to propagate. A good reconstruction algorithm is then needed afterwards.
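A sketch of how such a budget split could look; the resolutions and ray counts come from the slide, but the structure and the blend falloff are my illustration:

```cpp
// Two-resolution tracing split: full-res pixels get one ray, the
// heavily downsampled buffer gets many, and the low-res result is
// blended back mainly where history was rejected.
struct TraceBudget { int fullResRays; int lowResRays; };

TraceBudget budgetFor(bool console) {
    // Consoles halve both rates (e.g. via checkerboard tracing).
    return console ? TraceBudget{1, 32} : TraceBudget{1, 64};
}

float lowResBlendWeight(float historyLength) {
    // Freshly disoccluded pixels (no history) lean on the many-ray
    // low-res data; well-accumulated pixels keep their own history.
    float t = historyLength / 32.0f;  // illustrative falloff constant
    return t > 1.0f ? 0.0f : 1.0f - t;
}
```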

20.

Reconstruction: Guided buffer, Moment. Reconstruction blends the guided color buffer and the moment buffer to generate an upscaled result. The guided buffer is responsible for the light field changes, while the moment buffer is responsible for geometry changes.

21.

Why does it work? Decouple the signal: 120x67 color / average direction (low resolution light information); 960x540 shadow / moment (medium resolution shadow information); 4K geometry / normal (high resolution geometry information). Reconstruction: Moment + Color / Direction = Upscaled Color / Direction (moment clamp); Upscaled Color / Direction + Normal = 4K Color (project spherical harmonics).

So, why does this work? The result can be separated into a light contribution and a visibility term. The light contribution for GI is propagated diffusely and continuously; after propagation, only a low frequency output remains, which means it can be compressed into a low dimension. On the other hand, the visibility term is high frequency geometry information, and it is critical to the final image quality. Two steps are used to reconstruct the visibility term: the moment buffer is used to block the light from rough geometry, and a more precise normal map is used to sketch in the details of the geometry occlusion.
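Here is a minimal sketch of the moment-clamp idea, in the spirit of SVGF-style variance clamping; this is my reading of the slide's "Moment + Color = Upscaled Color" box, not the actual shader:

```cpp
// Clamp the upscaled low-res color to the mean +/- k*sigma implied by
// the medium-resolution moment buffer, so upscaling cannot wash out
// shadow (visibility) detail the moments still carry.
#include <algorithm>
#include <cmath>

struct Moments { float mean; float meanSq; };  // E[x], E[x^2]

float momentClamp(float upscaledColor, Moments m, float k = 1.0f) {
    float variance = std::max(0.0f, m.meanSq - m.mean * m.mean);
    float sigma = std::sqrt(variance);
    return std::clamp(upscaledColor, m.mean - k * sigma, m.mean + k * sigma);
}
```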

22.

Euler in Graphics: low frequency vs. high frequency; aliasing. (ansariddle, "Multigrid Visualization" *6)

Now, let's talk about how to propagate through the low resolution and how to reconstruct it back. It's actually similar to a multigrid linear solver: if the high frequency geometry term is separated first, the reconstruction result is better. Wrong interpolation will lead to aliasing and numerical error in the result. For details, please see the related papers.

23.

Jitter Numerical Error: Before / After. Temporal differences are caused by high frequency geometry aliasing; jittering the result after temporal accumulation can relieve this issue.

So as not to make the logic too complicated: even when the result contains some numerical error or aliasing from an insufficiently smooth decomposition, there is a simple way to relieve the problem, which is jittering. Another jitter is executed after blending the image from temporal accumulation.

24.

Auto Exposure Ghosting. Light ghosts in the previous denoiser because the result is not filtered in linear space. Reasons for this issue: the Reinhard curve makes color non-linear; lack of temporal luminance-based rejection.

Another problem, pointed out by the artists, appears when the light changes quickly due to auto exposure. The incorrect result is caused by filtering in a nonlinear space: the Reinhard curve compresses the color, and it is not correct to filter results that have the Reinhard curve applied. Another reason is that temporal luminance-based rejection was not used before, because it would increase the noise when clamping against a one-frame-only buffer.
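A tiny numeric check of why filtering after the Reinhard curve misbehaves: the curve T(x) = x / (1 + x) does not commute with averaging, so a filter that averages tonemapped samples underestimates bright lights:

```cpp
// Averaging and the Reinhard curve do not commute.
#include <cstdio>

float reinhard(float x) { return x / (1.0f + x); }

int main() {
    float dark = 0.1f, bright = 15.0f;          // two neighboring samples
    float avgThenMap = reinhard(0.5f * (dark + bright));
    float mapThenAvg = 0.5f * (reinhard(dark) + reinhard(bright));
    printf("filter in linear space: %.3f\n", avgThenMap);  // ~0.883
    printf("filter after Reinhard:  %.3f\n", mapThenAvg);  // ~0.514
    // The mismatch grows with intensity, which is why filtering in the
    // curved space lags behind fast exposure changes and ghosts.
}
```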

25.

Auto Exposure Ghosting: Before / Now

First, I'll show you the comparison between the previous and current results.

26.

Removing Reinhard Curves. Before: Shade → Reinhard curve → Spatial accumulation → Temporal accumulation. Now: Shade → Spatial accumulation → Temporal accumulation → Deflicker; denoise in linear space. (JOHN HABLE, "Why Reinhard Desaturates Your Blacks" *7)

Instead of using Reinhard curves, more aggressive denoiser logic and additional deflicker logic were added to remove fireflies. Compared to the previous method, it's not easy to remove linear color noise; additional steps were added, but we'll touch on this later.

27.

Removing Reinhard Curves: Before / Now. The brightness is totally different.

After the Reinhard curve is removed, the brightness changes completely, and the emissive brightness issue that artists tended to complain about is solved. In the previous version, a light could be set to a high intensity value, yet due to the effect of the Reinhard curve the result would not be as bright as expected. With the curve removed, the light balance that was adjusted by the artists is broken, so this feature can only be used for new titles.

28.

Aggressive Filtering. Passes: spatial propagation, temporal filter, spatial propagation, variance check, spatial merge, deflicker, spatial upscaling, spatial filter. Linear color is difficult to denoise compared to Reinhard-curved color; incorrect emissive brightness is also fixed; 3 more passes are added for spatial denoising.

The linear result is more difficult to denoise, especially with high intensity lights. Light is propagated linearly, which means more steps are needed; 3 more guided propagation passes were added to relieve this problem. The visual quality is improved compared to the previous version: the contrast of the image is increased, and the light is more balanced.

29.

Temporal Luminance Rejection: Before / Now

Besides removing the Reinhard curve from the framework, the temporal luminance rejection was also improved. Luminance rejection solves dynamic light switching problems: the stabilized one-frame result is used to rectify the history. Other methods, such as light buffer flags, can also be used to relieve the history light ghosting caused by luminance changes.
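A minimal sketch of luminance-based history rejection; the weighting function and constants are my assumptions, not the engine's:

```cpp
// When the stabilized current-frame luminance disagrees strongly with
// history, cut the history weight so dynamic light changes respond
// quickly instead of ghosting.
#include <algorithm>
#include <cmath>

float historyWeight(float histLuma, float currLuma, float maxWeight = 0.95f) {
    // Relative difference, guarded against division by zero.
    float diff = std::fabs(histLuma - currLuma) /
                 std::max(std::max(histLuma, currLuma), 1e-4f);
    // Full trust when luminance matches, fast falloff as it diverges.
    float trust = std::clamp(1.0f - 4.0f * diff, 0.0f, 1.0f);
    return maxWeight * trust;
}
// blended = lerp(currentColor, historyColor, historyWeight(...));
```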

30.

Fake Specular: low quality. Before, when roughness exceeded a threshold, fake specular was used, which caused material incoherence. Reasons it was used: specular rays with high roughness can be as expensive as diffuse rays; difficult to converge. Before / Now.

Previously, fake specular was used when the roughness exceeded a threshold, for example 0.3, which caused material incoherence and incorrect results. One reason to use fake specular is that rough specular rays can be as expensive as diffuse rays. In most conditions, rough specular can be denoised with the same policy as the diffuse denoiser. The difference is that the rough specular BRDF is a view dependent lobe, so accumulating spatially reusable results needs a more precise calculation.

31.

Specular Pass: roughness threshold splits into high resolution and low resolution buffers; large and small scale filters; upscale; merge. Different filter scale policies depend on roughness; different resolution policies depend on roughness and the title's budget. Still, the rough specular denoiser needs to be improved (similar to what was done for diffuse).

Depending on the title's budget, the traced specular buffers are separated into high resolution and low resolution, and filters of different scales are used with different approaches. I didn't have enough time to focus on the rough specular denoiser, but it could be improved in the future.

32.

Specular Motion Ghosting. Specular ghosting was not resolved by the previous temporal motion: the blend between the geometry motion and the virtual position was not well calculated, and viewed closely, you can see that ghosting exists.

The main reason is that some of the wrong motion was not removed correctly. Viewed closely, the image breaks up; but if you aren't close and the camera movement isn't that fast, the previous result is still acceptable. In any case, this was my mistake, and I improved it.

33.

Comparison: Before / Now

Now, let's compare the previous and current results.

34.

Specular Motion. For performance reasons, virtual motion now only assumes a planar reflector. Motion: geometry / virtual, moment-based merge. Color / moment clamping is used to clamp the motion of animation in mirrors. The diffuse luminance clamp is different: specular clamps the color, while diffuse clamps the history.

Thin lens motion was used before to improve the motion on curved geometry. For performance reasons, a planar virtual position is entirely sufficient for most cases. A moment clamp is used for the motion of animations in mirrors. The specular clamp is different from the diffuse one: instead of only using history, a color clamp is also needed to maintain visual quality. The final result is merged based on the moment buffer.
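For reference, here is the common planar virtual-position trick that this kind of specular reprojection relies on; this is my illustration, and the helper types are hypothetical:

```cpp
// Reflections appear to come from a point *behind* the reflector, so the
// specular motion vector is computed for that virtual point rather than
// for the surface itself.
struct V3 { float x, y, z; };

V3 add(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
V3 scale(V3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// For a planar reflector, the virtual point lies along the view ray,
// extended past the surface by the reflected hit distance.
V3 virtualPosition(V3 surfacePos, V3 viewDir /*unit, camera->surface*/,
                   float hitDistance) {
    return add(surfacePos, scale(viewDir, hitDistance));
}
// Reprojecting virtualPosition(...) with last frame's view-projection
// gives the specular motion vector; the moment/color clamp then catches
// the cases (e.g. animation seen in mirrors) where even this motion is
// wrong.
```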

35.

Shading: user shader support (in development); complicated shading models (in development); temporally accumulated light importance sampling (future improvement).

For shading, rendering unit members are implementing user shader support, and complicated shading models are also in development. Light visibility tests could be cached in a hash table; it's meaningless to test all the lights every frame. That could be done in the future.

36.

New Features Many new features have also been added to RE ENGINE by the rendering unit.

37.

Mirror Transparency: only perfect reflection supported; no denoiser used.

Mirror transparency is handled specially with a separate implementation. Only perfect reflection is supported, and no denoiser is used, for performance reasons.

38.

Effect Ray Tracing: supported by GI / Reflection / AO.

Effect ray tracing was also implemented, by the effects unit. In most cases, the same denoiser process is shared after shading.

39.

Ray Tracing Area Light: rectangle / disk / line lights supported; GI / Reflection / AO supported; area light shadows also supported.

Rectangle, disk, and line lights were implemented by another rendering unit member. Both direct and indirect area light shadows are supported.

40.

Terrain / SpeedTree Support: the BVH's data does not exactly match the G-buffer's data, so a ray offset is needed.

Terrain and SpeedTree ray tracing are also supported, by the rendering unit members responsible for landscapes and foliage. The BVH's data doesn't completely match the G-buffer's positions, so a ray offset or additional rays are needed to avoid false self-intersections.

41.

Performance & Title Use Cases

Title                              Version  Features
Devil May Cry 5 Special Edition    1        GI / Reflection
Resident Evil Village              2        GI / Reflection / AO
Resident Evil 2 / 3 / 7            2        GI / Reflection / AO
Resident Evil 4                    2        Reflection
EXOPRIMAL                          2        GI / Reflection
Future titles will use version 3.

As for title use cases, three versions of the shading and denoiser systems were created to keep released titles stable. Version 1 is only used in Devil May Cry 5: Special Edition. Version 2 is more widely used, in titles like Resident Evil Village, RE7, the RE2/3/4 remakes, and EXOPRIMAL; Resident Evil 4 only uses ray traced reflection, for performance reasons. Version 3 is the one showcased in today's presentation, and it will be used in future titles.

42.

PS5 Performance (GDC 2021)

Preparation                        376μs
  Linear Depth / Geometry          177μs
  Work Queue                        39μs
  Copy Unfiltered Result           117μs
Tracing                           2481μs
  Generate Ray Direction           122μs
  Sort Ray Direction                69μs
  Tracing Diffuse                 1583μs
  Tracing Specular                 609μs
  Total Trace (Overlapped)        1665μs
  Shading Diffuse                  596μs
  Shading Specular                 148μs
  Total Shading (Overlapped)       625μs
Accumulation                      1043μs
  Motion                            65μs
  Disocclusion History             134μs
  Spatial Diffuse                  512μs
  Spatial Specular                 128μs
  Total Spatial (Overlapped)       612μs
  Firefly (Overlapped)              49μs
  Moment (Overlapped)               75μs
  Temporal                         307μs
Filtering                      650~838μs
  Wavelet Filter                32~200μs
  Bilateral Filter                 172μs
  Composite                        290μs

This data is the performance cost on PS5 from GDC 2021. You can see that ray tracing cost 4.7ms in total: tracing and shading cost 2.3ms, other passes like ray direction generation and sorting cost 0.2ms, and the denoiser cost 2.2ms.

43.

PS5 Performance: version 3 denoiser cost vs. version 2 denoiser cost (2.2ms). Spatial denoiser steps added: +1.3ms. Wavelet filtering removed: -0.8ms. Additional disocclusion rays added: about 25% of the tracing and shading cost.

Compared to the previous version of the denoiser, spatial propagation steps were added (+1.3ms), while the wavelet filter logic was removed (-0.8ms). Additional disocclusion rays are also traced, costing about 25% of the total tracing and shading cost; the new method spends more rays to deal with the noise, and the added cost depends on the scene's tracing and shading cost. As for the denoiser, the disocclusion rays help stabilize it, so wavelet filtering is no longer needed as a final step. However, to deal with the rays from disocclusion and the upscaling, additional spatial steps were added, for a net cost of 0.5ms.

44.

Conclusion. Denoiser improvements: 1. GI denoise improvements 2. Auto exposure improvements 3. Specular motion projection improvements 4. Fake rough specular removed. New features. Performance.

In conclusion, those are the various ways in which ray tracing has improved in RE ENGINE. The presentation mainly focused on GI denoiser improvements, because it is a difficult topic that is valuable to share and discuss; specular related improvements were also introduced. Many people contributed to the new ray tracing features, but I was only able to briefly mention some of their work due to time constraints. I also touched on PS5 performance, comparing the previous denoiser to the current one.

45.

Thank you for listening

Quotations:
1. Olaf Ronneberger, Philipp Fischer, and Thomas Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation", MICCAI 2015, May 18, 2015. https://arxiv.org/pdf/1505.04597v1.pdf (2023/8/24)
2. Sheng He, Rina Bao, P. Ellen Grant, Yangming Ou, "U-Netmer: U-Net meets Transformer for medical image segmentation", April 3, 2023. https://arxiv.org/pdf/2304.01401.pdf (2023/8/24)
3. Meng Qi, Thanh-Tung Cao, Tiow-Seng Tan, "Computing Two-dimensional Constrained Delaunay Triangulation Using Graphics Hardware", 2017. https://www.comp.nus.edu.sg/~tants/cdt.html (2023/8/24)
4. Chaos docs, "Irradiance Map". https://docs.chaos.com/display/VRAY3SOFTIMAGE/Irradiance+Map (2023/8/24)
5. SIGGRAPH, "Radiance Caching for real-time Global Illumination", 2021, p. 24. https://advances.realtimerendering.com/s2021/Radiance%20Caching%20for%20realtime%20Global%20Illumination%20(SIGGRAPH%202021).pptx (2023/8/24)
6. ansariddle, "Multigrid Visualization", Wikipedia, September 14, 2016. https://en.wikipedia.org/wiki/File:Multigrid_Visualization.png#filelinks (2023/8/24)
7. John Hable, filmicworlds, "Why Reinhard Desaturates Your Blacks", May 17, 2010. http://filmicworlds.com/blog/why-reinhard-desaturates-your-blacks/ (2023/8/24)

That concludes this presentation. Thank you for listening.