whisper_model_load: loading model from 'models/whisper-tiny.en/ggml-model.bin'
whisper_model_load: n_vocab = 51864
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 384
whisper_model_load: n_audio_head = 6
whisper_model_load: n_audio_layer = 4
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 384
whisper_model_load: n_text_head = 6
whisper_model_load: n_text_layer = 4
whisper_model_load: n_mels = 80
whisper_model_load: f16 = 1
whisper_model_load: type = 1
whisper_model_load: mem_required = 244.00 MB
whisper_model_load: adding 1607 extra tokens
whisper_model_load: ggml ctx size = 84.99 MB
whisper_model_load: memory size = 11.41 MB
whisper_model_load: model size = 73.54 MB
log_mel_spectrogram: n_sample = 88320534, n_len = 552003
log_mel_spectrogram: recording length: 5520.033691 s
main: processing 88320534 samples (5520.0 sec), 8 threads, lang = english, task = transcribe, timestamps = 1 ...
[00:00.000 --> 00:10.000] [MUSIC] [00:25.000 --> 00:26.000] Hello, everybody. [00:26.000 --> 00:33.000] Okay, we have a good crowd for John's, the second talk. [00:33.000 --> 00:34.000] It's very exciting. [00:34.000 --> 00:38.000] This is the first year that John will be talking twice. [00:38.000 --> 00:41.000] A couple things to know. [00:41.000 --> 00:43.000] John will talk for about an hour or so. [00:43.000 --> 00:45.000] And then we'll have 30 minutes for questions. [00:45.000 --> 00:47.000] The mic is right there. [00:47.000 --> 00:49.000] That's actually just right there. [00:49.000 --> 00:52.000] And so just line up when we get to the questions. [00:52.000 --> 00:55.000] Try to keep your questions on what John talked about. [00:55.000 --> 00:59.000] If you get up and ask when Doom 4 is coming out, I'm going to kick you in the knee. [00:59.000 --> 01:00.000] So, right there. [01:00.000 --> 01:01.000] [LAUGH] [01:01.000 --> 01:04.000] So I will not waste any more time. [01:04.000 --> 01:07.000] But you guys in the back, because John's going to write on the board. [01:07.000 --> 01:09.000] And we have plenty of seats here. [01:09.000 --> 01:10.000] You can file in. [01:10.000 --> 01:13.000] Don't worry that there are reserved seats there. [01:13.000 --> 01:14.000] Just go ahead and sit in them. [01:14.000 --> 01:15.000] All right. [01:15.000 --> 01:18.000] I will give you guys Mr. Carmack. [01:18.000 --> 01:24.000] [APPLAUSE] [01:24.000 --> 01:31.000] Okay, so I guess this is sort of going to be like a schoolroom session. [01:31.000 --> 01:33.000] I deluded myself for a little while. [01:33.000 --> 01:36.000] This was going to be the first talk where I ever actually made slides to present. [01:36.000 --> 01:38.000] But it didn't actually come to pass. [01:38.000 --> 01:42.000] So it's going to be notes and talking and some scrawling on the board again. [01:42.000 --> 01:47.000] So almost all of what we do in game development is really more about artistry. [01:47.000 --> 01:49.000] It's about trying to appeal to people. [01:49.000 --> 01:53.000] But there's this small section of what goes into the games, [01:53.000 --> 02:00.000] that's drawing the pictures on the screen, where you can at least make some ties to the hardest of hard sciences.
[02:00.000 --> 02:09.000] And while it's great that people are researching the psychology and the different ways that people think about compulsion loops and some of these other game design topics, [02:09.000 --> 02:21.000] the raw physics that goes into rendering kind of goes through the all-star list of physics, with Newton's optics, Maxwell's equations, and Einstein's relativity. [02:21.000 --> 02:29.000] And it's kind of neat to think that this is sort of brought to bear in the techniques that go into making the games that we play. [02:29.000 --> 02:33.000] So at the start you think, well, okay, we see light. [02:33.000 --> 02:35.000] So what actually is light? [02:35.000 --> 02:41.000] And we've got a definition now, that light is the sliver of the electromagnetic spectrum that we can actually perceive. [02:41.000 --> 02:50.000] But that has a really long and complicated history for how we sort of reached that conclusion, and how it's not really as clear-cut as most people would like it to be. [02:50.000 --> 03:08.000] Optical research started kind of all the way back with a lot of the Greek philosophers, but Newton did a whole lot of work with breaking light up with prisms, seeing how white light was actually composed of all the different colors of the spectrum, and they add together to make what we perceive as light. [03:08.000 --> 03:19.000] And then there was a centuries-long debate about whether light was a particle, like these little tiny billiard balls, these photons that you shoot out, or a wave effect, like all the things that you see in waves in water, [03:19.000 --> 03:22.000] and waves in matter, and so on. [03:22.000 --> 03:29.000] And finally, we reached the conclusion that, well, it's the wave-particle duality that quantum mechanics talks about. [03:29.000 --> 03:33.000] And this is very unsatisfying when you begin looking at this. [03:33.000 --> 03:35.000] But it's really pretty much irrefutable. [03:35.000 --> 03:40.000] There are these straightforward experiments that can be done to show that if you look at it one way, [03:40.000 --> 03:43.000] it's a wave, and if you look at it another way, [03:43.000 --> 03:45.000] it's a particle. [03:45.000 --> 03:49.000] Luckily for computer graphics, we hardly care at all about that. [03:49.000 --> 03:57.000] Only when you start looking at some aspects of surface reflectance models do you start caring at all about some of these quantum mechanical properties of light. [03:57.000 --> 04:08.000] For the most part, we can look at light as zillions and zillions of little billiard balls shot out from lights and bouncing off of things and eventually reaching our eyes so that we can perceive them. [04:08.000 --> 04:15.000] There are a lot of simplifications that have to happen when you talk about simulating this. [04:15.000 --> 04:26.000] There are a lot of engineering disciplines, like thermal management and radio engineering, that do simulations of the electromagnetic spectrum, just other parts of it, and how the waves bounce around and interact with things. [04:26.000 --> 04:30.000] And this is done all the time and it works. It really is science. [04:30.000 --> 04:39.000] So you can say rendering an image, or deciding how much light reaches a particular area, is about as basic of a science as it comes. [04:39.000 --> 04:52.000] There's not any artistic measure in here.
There are tons of other aspects when you get into perception that do become questions about, well, maybe there is artistry that goes into producing something when you've got an impression that you want. [04:52.000 --> 05:07.000] But when you're talking about simulating an environment, which is most of what we do in sort of the hardcore FPS-type games, we are pretending that we've got this virtual world, and we're running a camera through it, and we're trying to simulate what's happening in various ways. [05:07.000 --> 05:18.000] And nowadays we know what we would have to do to make that almost perfect. We just have nowhere near the computing capacity to do really, really high-level simulations. [05:18.000 --> 05:35.000] But it's important, even if you're not going to do the right thing, to at least understand what the right thing is, and then understand which trade-offs you're making, and make them with sort of a clear head, rather than accidentally backing into trade-offs that may or may not be really the best way to go about things. [05:35.000 --> 06:04.000] It took a long time for people to realize that these other phenomena, things like radio waves, were the same thing, and there was a lot of confusion in 19th and 20th century physics about which things were particles and which things were rays, and we still have kind of mixed-up terminology, where you talk about cosmic rays that are actually particles, and you talk about alpha radiation and beta radiation, these things that are particle-based rather than being rays from the electromagnetic spectrum. [06:04.000 --> 06:18.000] But we use this stuff all the time for radio waves. You know your Wi-Fi is at two or five gigahertz frequencies. The visible light rays are up in the terahertz range, many terahertz. [06:18.000 --> 06:27.000] But they're basically the same thing. They just differ in how they interact with matter. They're produced in somewhat similar ways, but the different things change. [06:27.000 --> 06:41.000] They behave differently when they interact with other things based on their wavelength, which is why X-rays can shoot through things, and radio waves can go through some things that visible light pretty much bounces off of. [06:41.000 --> 06:54.000] So another important, critical thing really is that photons, the little bundles of light that we talk about, are absolutely quantized. It's again part of the quantum weirdness that you can't send off an arbitrarily divisible amount. [06:54.000 --> 07:14.000] There is an almost unbelievably large number of them in a given light that's glowing; I can just say zillions with a straight face because it's a very large scientific-notation number. It's not trillions. It's not quadrillions. It's even more than that that are coming out in terms of these bundles of quantum energy. [07:14.000 --> 07:27.000] Now, they do have characteristics to them. If we treat them as little billiard balls, in computer graphics we are generally looking at only a few different wavelengths in the spectrum of light. [07:27.000 --> 07:42.000] And that has to do with an aspect of the human visual system: while there is this incredibly divisible spectrum of light that goes out, we're only sensitive to three sorts of bands of light, and they're not even individual frequencies. [07:42.000 --> 07:51.000] That's why we can get by with red, green, and blue for monitors, because we only have three types of color receptors in our eyes.
[07:51.000 --> 08:01.000] And I often think how it would be really interesting if you could look at all these other spectrums bouncing around, and that's what thermal imaging and some of these other things let you sort of get a peek into. [08:01.000 --> 08:08.000] And that's only radiation that's very close to the visible spectrum, the infrared. [08:08.000 --> 08:19.000] It would be much more bizarre and interesting to be able to visualize radio waves in a real-time space, to like see all the multipath that's causing your Wi-Fi to be weird in specific ways, [08:19.000 --> 08:28.000] why moving something over here causes the radiation to change so much at your antenna to make a difference in your reception strength. [08:28.000 --> 08:35.000] And these are all things that have a bearing on what you do with light transport, as well as other wave phenomena like audio. [08:35.000 --> 08:41.000] Really, really high-end audio processing is the exact same sort of thing as the way we treat light processing. [08:41.000 --> 08:47.000] You send out energy, it bounces off of all sorts of things in the world, and eventually arrives at something that's going to perceive it, [08:47.000 --> 08:51.000] which would be your ears in that case versus your eyes. [08:51.000 --> 09:06.000] So to kind of start with the path of a photon and what it would take: you've got something that creates the photon, and for the longest time in our human existence, about the only thing that we saw creating photons was a great deal of heat. [09:06.000 --> 09:16.000] You heat things up hot enough and photons start coming off of them. You heat it up enough, it starts glowing a dull red. You heat it up more, it starts getting more yellowish and moves towards white, [09:16.000 --> 09:21.000] as more and more of the colors of the spectrum are emitted from these hot things. [09:21.000 --> 09:25.000] And obviously the sun is a very hot thing where you've got a fusion reactor going. [09:25.000 --> 09:31.000] And the light that comes off of that is all of these atoms giving up some energy. [09:31.000 --> 09:36.000] So photons carry energy away from where they came from. [09:36.000 --> 09:42.000] And this is radiative heat transfer, where something gets hot. [09:42.000 --> 09:54.000] If you heat it up and leave it by itself there, it glows and it eventually stops glowing. It goes down the spectrum, getting cooler and cooler, until you don't see any visible light because it's actually lost much of its heat. [09:54.000 --> 10:11.000] On Earth, radiative heat transfer is the least effective form of heat transfer. You get much more from conduction, where it just kind of goes through the actual physical contact into other areas as the heat spreads out, or convection, where moving currents of air or water take the heat away. [10:11.000 --> 10:17.000] But in space, radiation is the only way you lose heat. And in aerospace engineering this is extremely important. [10:17.000 --> 10:27.000] Things like the International Space Station and spaceships have to worry a whole lot about thermal management, because the only tool they've really got is radiation. [10:27.000 --> 10:40.000] You see these enormous solar panels where they collect solar energy, but a lot of space vehicles have to have enormous radiators where they actually let the energy go out from the vehicle, otherwise they would get hotter and hotter.
[10:40.000 --> 10:47.000] So it's important to note that even if it's not glowing so that we can see it, everything is still radiating. [10:47.000 --> 10:58.000] So you don't see the space station glowing red hot, it's just glowing at whatever its normal temperature is, which can be perceived with infrared sensors, but it slowly loses energy and it eventually reaches a balance. [10:58.000 --> 11:08.000] That's why something stuck out in the sun in space doesn't get hotter and hotter. Eventually it reaches the point where the light that's coming in and hitting it is equaled by the radiation that's leaving it. [11:08.000 --> 11:28.000] And, like, we've made rocket engines that are radiatively cooled, where they burn at 5,000 degrees or so inside, and they get so blindingly white hot on the outside that all of the energy that's not going out the nozzle, that's soaking into the walls, is radiated away as a whole lot of light. [11:28.000 --> 11:32.000] And this is essentially what old-style incandescent light bulbs were. [11:32.000 --> 11:44.000] A tungsten filament: you made it really hot by pushing electrons through it, and it got hotter and hotter and it started glowing. And if you watched closely, if it was a heavy filament, you could watch it warm up or, especially, shut down. [11:44.000 --> 11:50.000] It would kind of ramp through the temperatures, and you would see it be red and get up to white hot. [11:50.000 --> 11:59.000] And then when you shut it down, it would cool down through yellow and then back through red before finally settling back to radiating in non-visible regions, [11:59.000 --> 12:16.000] at room temperature eventually. Nowadays we have a lot more efficient ways to create photons, with fluorescents and LEDs: things that are tuned carefully to just barely nudge the electrons in the atoms out to an excited state, let them collapse back down, and spit a photon out. [12:16.000 --> 12:28.000] For the most part, the photon emission is random in terms of which direction it goes. When you look at radio engineering, there are huge bodies of literature on antenna design that determine how you can make the emission slightly stronger [12:28.000 --> 12:46.000] or weaker in different directions, but there's still a very fundamental nature of randomness, which is again the quantum mechanics aspect of things. At a very low level, natural events are completely random, and you can't just say, I only want photons that are going to come out of the left side of this material. [12:46.000 --> 13:14.000] So you've got a photon that pops off in some random direction. It may go straight for, if it's coming from a distant star, it could go straight for trillions of miles, more or less just traveling through space. There are little bits of general relativity that you would recognize, you know, bending of light, that can happen. But for the most part, it can continue on indefinitely. It's a self-propagating wave. So it pops off of some atoms somewhere, maybe flies through space for a billion trillion miles or something. [13:14.000 --> 13:43.000] It comes in, finally hits our Earth's atmosphere, and then starts interacting with the atmosphere in some way. Every change in density that visible light goes through will result in bending its path somewhat. This is called refraction. The most obvious case when you look at it is things like prisms and lenses, where you can see the light really strongly warped.
But it happens with any sort of density change, going from the vacuum of space to the outer reaches of our atmosphere, [13:43.000 --> 14:12.000] and then every change in pressure or temperature changes the density, and that causes very slight, subtle changes in the direction of the light. This is actually why stars twinkle at night: if you're out on a clear night, you see stars whose light is coming in from billions or trillions of miles away. It's going completely straight until it hits the upper atmosphere, and then it may slightly deviate, just tiny fractions of degrees, and this can cause the very small number of photons that you're seeing there [14:12.000 --> 14:40.000] to kind of come and go and move around in different ways. The most important things from a computer graphics standpoint are the effects that happen when it hits more solid matter, solid surfaces or even liquid surfaces. And that's where it has the opportunity to, well, even in a gas you can wind up having the case of absorbing the photon. This happens rarely in gas; you can pass through hundreds of miles of atmosphere [14:40.000 --> 15:07.000] and not have too many of the photons absorbed. But it happens very rapidly in solid matter. I mean, a typical photon, when it hits a surface, might penetrate a little bit into it; a surface like metal will bounce it off of just the first several atoms. It doesn't take many atoms of metal before you can reflect light out, which is why you can make these super enormous space mirrors that are just [15:07.000 --> 15:36.000] a very tiny sputtering of aluminum on some plastic film, and they can actually make solar sails or giant solar collectors and concentrators. But for most other materials, the light can penetrate a little bit further into it. As it interacts with the molecules, it can either be absorbed, raising the temperature a little bit and eventually making it hotter so that it starts radiating out at some level, or it can redirect the photon in some way. [15:36.000 --> 15:44.000] You've got the minor redirection from refraction, and much stronger ones when it interacts and bounces off of a solid surface. [15:44.000 --> 16:05.000] Now, there are a ton of different names. There are literally a couple dozen different names for the different ways that light can interact with surfaces. There are all the different types of scattering, of course; refraction and reflection, and reflection can be split up into specular reflection, diffuse reflection, and all sorts of different subcategories. Optics is a huge topic. There are societies [16:05.000 --> 16:34.000] dedicated to every aspect of it, and there's huge terminology for all of it. But for the most part, you can say, as the photon comes in, if it's not absorbed, it's going to be kicked out in some other direction, and it can go and interact potentially with the atmosphere, potentially with another surface. And eventually, it's either absorbed... well, eventually it is absorbed somewhere. But for the most part, they're absorbed into the surfaces around us. But a tiny, tiny fraction of all the photons that are bouncing around [16:34.000 --> 17:00.000] eventually hits our eyes.
And even when it gets to our eyes, which are mostly transparent, there's this chance that the photon hits and specularly reflects off of our eye; it made it all the way, out of the billions of possible paths, made it to my eye, and then decides to specularly reflect off in some other direction. But most of it that hits the eye and hits the lens gets through, propagates through the eye, through the humors and all the biological parts of the eye, [17:01.000 --> 17:29.000] and hits receptors in the back of our eyeballs that turn those eventually into neural impulses that our brain works with. Now, our eyes can actually be quite sensitive. The rods, the non-color-sensitive part of our eyes, when they're fully dark adapted, if you've been staying outside in a dark area for 20, 30 minutes, single photons can cause chemical reactions to happen inside the rod cells. [18:59.000 --> 19:28.000] So, to recap the basic picture of this: you've got something, a sun up here, that spits out some light, it travels through space, gets to the atmosphere of the Earth, maybe bends a little bit, maybe just goes straight through, comes down, hits a surface, maybe gets absorbed, maybe hits something else, you've got walls and rooms and it's bouncing around in there, and eventually, if we're seeing it, it reaches somebody's eyeball inside. [19:28.000 --> 19:55.000] And that's the physics of what happens. It's really well understood, but it does come down to a lot of data acquisition and characterization when you talk about the critical interactions with the surfaces. When you've got your basic theoretical thing, you talk about a flat surface, you say, light comes in, what happens to it? That's the question of surface response. If you have a perfect mirror, [19:55.000 --> 20:05.000] and it's worth noting that you don't have to be perfect on an atomic level to be a perfect mirror, you only have to be perfect on the optical level, which is somewhat larger, [20:05.000 --> 20:19.000] so people can make basically perfect mirrors, just highly, highly polished things. A perfect mirror will have the photon reflect off in the exact reflection direction. If you take the normal to the surface, you wind up with equal angles there. [20:19.000 --> 20:46.000] So highly polished surfaces act like this, and when you get a reflection off of something like the surface of water, it'll behave like this, but most of the surfaces that we see around us do not behave like this. We have a spread of the energy, where it comes in and it bounces off to some degree in every direction. No matter which way you look at most surfaces, you see, again, zillions of photons coming in, [20:46.000 --> 20:53.000] some of them go in every direction, they just go in a direction that's biased based on the type of surface that it is. [20:53.000 --> 21:06.000] One of the easy things that a lot of times is approximated, both in the engineering sciences and in computer graphics, is to assume that the surface reflects perfectly diffusely, or is a Lambertian surface. [21:06.000 --> 21:17.000] And what that means is that no matter which way the light comes in, whether it hits it completely edge-on or completely straight on, it has an equal probability of going out in every direction. [21:17.000 --> 21:39.000] And there are some materials that are close to this. If you take something like a block of chalk, white chalk, that behaves almost as a perfect diffuse reflector.
If you light it from one position and you look at, like, a little scribed-out area on it from any different area around it, it will appear to have about the same amount of energy coming out of it. [21:39.000 --> 21:55.000] But there are also surfaces that are more complex than that; most of them will, if you've got light coming in here, have more of it coming out around the reflection direction and some general amount coming out in all different directions. [21:55.000 --> 22:08.000] But these can actually get quite complicated. And the simplifications that we use in graphics sort of approximate these, but you can measure these with specific tools that go in and take lots of samples while moving the lights around. [22:08.000 --> 22:35.000] Because it depends, and unfortunately this is one of the areas where it does get not so great for computer graphics, both on the incoming direction and the outgoing direction. And those are two angles each, so it winds up being a four-dimensional function to say: light comes in here, how does it come out in some other direction? And in fact, it gets worse than that, because very few things reflect just off of this upper surface. [22:35.000 --> 23:01.000] Most of the time, the light will go a little into the surface, bounce around a little bit, and shoot out in some other direction. So if you're saying, well, a photon comes in here, not only do you have to say, if you're being really, really accurate, which angle does it come off of, but also how far away from the original point does it come off, or if it's a thin surface, how does it come out on the backside? [23:02.000 --> 23:13.000] When you look at, like, a leaf in the sunshine, a lot of the energy bounces off the shiny top face, and a lot of it diffuses through and comes out on the backside. [23:13.000 --> 23:31.000] So these are not pleasantly, analytically tractable things. They wind up being big tables of data. And one thing that's important to remember is that when you see tables of data like this that are collected, they don't necessarily capture all of the important characteristics of the surface. [23:31.000 --> 23:43.000] If you take one of these sensors that captures a table of data, and you did have your perfect mirror reflector, it's almost certainly not going to have a sample exactly where you want. [23:43.000 --> 23:51.000] But eventually, just as we increase resolution on other things, we'll have higher and higher resolutions for our surface models, [23:51.000 --> 24:12.000] and we'll get closer and closer to reality for what we're simulating. So, to go through kind of a capsule history of computer graphics rendering then: when computer graphics started off, if you look in the 60s and early 70s, computer graphics research focused on the hidden line problem. [24:12.000 --> 24:27.000] We had line-oriented displays, either true vector displays, like the video game arcade games... I'm blanking on it now. Yeah, Asteroids is the best example. [24:27.000 --> 24:37.000] Those are actually drawn by beams moving around, where they really are true line displays. There's no raster, there's no aliasing, all the different games like that. [24:37.000 --> 24:54.000] The earliest computer graphics systems were basically like that, where they were vector displays.
And once people learned how to draw, figured out all the basic projective math, saying, alright, I've got my cube here, [24:54.000 --> 25:16.000] I want it to look like that, but when I draw it, I've got all the lines on there. How do we figure out which lines we're going to erase? And that occupied research for a while, to figure out effective ways to do that without spending, at the time, the scary divide costs for different things, and you had lots of interesting work going on. [25:16.000 --> 25:43.000] But when we eventually got raster displays where we could fill them in, of course at that point people filled in the surfaces of the cube; they were all grayscale at that time. So you can draw a cube and say, well, this will be the light face, this will be the dark face. That was neat at the time, but it was not sort of what things look like in reality. So people started taking the steps that they could to try and say, what do we need to do to make this more approximate what we see with our eyes? [25:43.000 --> 26:04.000] And this has been a path that's been driven probably more than half by sort of ad hoc approaches of just, well, what's reasonably easy for us to do that gets us somewhat closer to it, while there's also been sort of a parallel path of saying, well, what's the physics actually doing, and how do we make an actual solution for it? [26:04.000 --> 26:22.000] So the earliest thing that got added to the shading model for computer graphics was, if we assume that there's going to be a light at some point (in the beginning it wouldn't even be local, you'd just say light is coming in from this direction), [26:22.000 --> 26:36.000] we want to be able to say what color or what shade each individual surface should be based on where that light is. So you've got the obvious thing that if it's not facing the light, no light hits it and you draw it black. [26:36.000 --> 26:46.000] Then there's the question about things that are directly facing the light: if you've got light coming in, and you have a surface completely perpendicular to it, you make that your brightest color. [26:46.000 --> 26:57.000] If you've got a surface that's completely parallel with it, it gets no light, so make that zero. So you've got some curve that goes between them to say how bright something should be. [26:57.000 --> 27:10.000] And it turns out that that's a fairly straightforward bit of math to solve, where you have light coming in at a certain angle and you've got the normal to the surface. [27:10.000 --> 27:22.000] The amount of light that would strike a little surface there is proportional to the cosine of this angle. And that's not an approximation, that's actually ground truth. [27:22.000 --> 27:33.000] If you've got the light coming in, and you've got a surface sitting at this angle, a surface that's tilted away, [27:33.000 --> 27:45.000] and you count the number of rays that go in, with something facing it directly catching four of them and something turned away only covering two spaces there, all that actually works out correctly. [27:45.000 --> 27:54.000] And this is the basis for a lot of the real calculations for light transport; it's not a hack, it's actually part of real, proper physics. [27:54.000 --> 28:10.000] So once you've got that basic approach, you go back to your cube, and you get your light coming in, and you've got a brighter face, a brighter face, a darker face, and the faces away from it are completely black.
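As a concrete illustration of that cosine rule, here is a minimal C sketch (not from the talk): the diffuse contribution is just the dot product of the unit surface normal with the unit direction toward the light, clamped to zero for faces turned away from it. The specific vectors are made-up example values.

    #include <math.h>
    #include <stdio.h>

    /* Lambert's cosine law: diffuse intensity is proportional to
       cos(theta) = dot(N, L), where N is the unit surface normal and L is
       the unit direction from the surface point toward the light.          */
    static double lambert(const double n[3], const double l[3])
    {
        double cos_theta = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
        return cos_theta > 0.0 ? cos_theta : 0.0;   /* faces turned away get no light */
    }

    int main(void)
    {
        double n[3] = {0.0, 1.0, 0.0};                  /* surface facing straight up  */
        double l[3] = {0.0, sqrt(0.5), sqrt(0.5)};      /* light 45 degrees off normal */
        printf("diffuse factor = %f\n", lambert(n, l)); /* ~0.707, cosine of 45 deg    */
        return 0;
    }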
[28:10.000 --> 28:20.000] And then most people say, well, we don't usually see things like that. So now we get into the fudges, and you say, well, let's just brighten everything up a little bit: you add an ambient term. [28:20.000 --> 28:32.000] So you sort of just add this minimal level to everything on the backside. And that helps a little bit. If you've got a cube, then everything looks pretty much great, because it's a constant color just on the sides that you might not see over there. [28:32.000 --> 28:46.000] But if you've got something more complex, everything that's facing away from the light winds up being the same color. And it's clearly not correct, it's not what you'd like, but it was all that seemed reasonable to do at the time. [28:46.000 --> 28:57.000] The next step was to start looking at surfaces that are more than these perfectly diffuse reflectors. If you model your cube like this, it looks kind of like it was maybe carved out of chalk. [28:57.000 --> 29:05.000] And it can be a decent representation of that. But very few of the surfaces that we see around us are really that simple. [29:05.000 --> 29:12.000] Most things have some kind of a shine or highlight on them. As we look around, you can see reflections and highlights on all sorts of things: [29:12.000 --> 29:21.000] the obvious bits of metal and plastic, little things that you might hold in your hand, like all these different shines and reflections on the plastic that I'm holding here. [29:21.000 --> 29:38.000] Now, the observation was made that the highlights on most objects that weren't complete mirrors tended to be something like a bright hot spot; like if you had your sphere here, you would have a bright hot spot that kind of faded a little bit around there. [29:38.000 --> 29:45.000] And just by looking at that and saying, "Well, what could we do that would be kind of like that?" [29:45.000 --> 30:01.000] The observation was made that, well, if you take this sort of cosine arrangement here, it makes this nice broad falloff; over the entire surface of the sphere it will fade off to halfway around from the light. [30:01.000 --> 30:18.000] But if you wanted something that was really tight, the thought was, well, we can just take this value and raise it to a higher power. We can just take this and square it, cube it, take it to the 20th power, which can be done mathematically quite cheaply. [30:18.000 --> 30:30.000] This has no basis in physical reality at all. This is a completely ad hoc approach. But it worked out okay, and this is what the Phong lighting model was about, where you separate the shading of the surface. [30:30.000 --> 30:40.000] You separate it into your diffuse lighting, which is more or less what color the surface is, and then your specular lighting, which is what the highlights are going to look like. [30:40.000 --> 30:59.000] So you had this other value to play around with, and that was the specular power. And nowadays I regret using that in my terminology, where we have power maps and nobody understands what those are. They relate to the specular exponent, where you're going to raise something to a power to tighten up the highlight.
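A minimal sketch of that specular-power idea, assuming the classic Phong form with a reflection vector; the ambient value and the exponent of 20 are arbitrary illustration numbers, and a real shader would also weight each term by light and material colors.

    #include <math.h>
    #include <stdio.h>

    static double dot3(const double a[3], const double b[3])
    {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    /* Phong-style shading: ambient fudge term, Lambert diffuse, and a specular
       term that is just a cosine-like value raised to a power (the "specular
       exponent") to tighten the highlight.  n, l, v are unit vectors: normal,
       direction toward the light, direction toward the viewer.               */
    static double phong(const double n[3], const double l[3], const double v[3],
                        double ambient, double spec_exponent)
    {
        double diffuse = fmax(dot3(n, l), 0.0);

        /* reflect l about n:  r = 2*(n.l)*n - l */
        double nl = dot3(n, l);
        double r[3] = {2.0*nl*n[0] - l[0], 2.0*nl*n[1] - l[1], 2.0*nl*n[2] - l[2]};
        double spec = pow(fmax(dot3(r, v), 0.0), spec_exponent); /* higher exponent = tighter hot spot */

        return ambient + diffuse + spec;
    }

    int main(void)
    {
        double n[3] = {0, 1, 0}, l[3] = {0, 1, 0}, v[3] = {0, 1, 0};
        /* looking straight down the normal with the light overhead: full highlight */
        printf("shade = %f\n", phong(n, l, v, 0.1, 20.0));
        return 0;
    }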
[30:59.000 --> 31:28.000] The better terminology that's used more often now is a roughness map, where you have a mapping, and you usually do it in logarithmic space rather than linear, but more or less that's still what a lot of graphics today involves: you've got a roughness parameter, which affects this exponent that you raise this extra vector term to, to generate your specular highlights. And again, if you're rendering your cube and you get the light at the right angle, [31:28.000 --> 31:40.000] like if I'm looking at this here and the light's over here, if that's at the right reflection angle, then you'll get a nice bright sheen on there. That flat surface will catch the light. [31:40.000 --> 31:50.000] And that was looked at as a real advance for the rendering. So you've got something that looks diffuse, but when it moves into the light, it kind of catches a flash of light and fades out. [31:50.000 --> 32:11.000] So the facets on these solid shaded models started looking better. Now, the next thing people wanted to do is, okay, we've got enough cubes and tetrahedrons and dodecahedrons and whatever. We want to start making things that look more realistic. We need to have a teapot. We need to have a curved surface in some way. [32:11.000 --> 32:38.000] So you make some curved surface like this. There was a lot of work in the early days on directly rasterizing curved surfaces, drawing them directly, but almost all of the real-time graphics has been a matter of turning your curved surfaces into approximations with flat surfaces. So you've got something that is theoretically a curve, but really it's a bunch of facets. [32:38.000 --> 32:56.000] So if you apply the lighting model to it, you see all of these facets. It stands out as, okay, you just carved this out with all these flat planes, and it doesn't fool you into thinking that this is a smooth curved object. [32:56.000 --> 33:21.000] So the next step in graphics was adding interpolation across the vertices, where instead of calculating a value for a face, you calculate it for a given vertex, for one corner, and then you interpolate across there, so that a point here is going to be some average between the three or four of the points that make it up. [33:21.000 --> 33:38.000] And that works surprisingly well. If you're looking at a diffuse surface, it works out just about as well as you'd like. There are some minor artifacts called Mach bands that you get as it changes too much, but if your tessellation's okay, that works out all right. [33:38.000 --> 33:59.000] It works out less well with the specular highlights. And the reason is that your specular highlight might be supposed to show up as some hot spot right here in the middle of a surface, but if you calculate it at the corners, the specular there is going to be almost zero, [33:59.000 --> 34:08.000] and when you interpolate across it, it's going to have nothing. You're just not going to see it. You'll only see a highlight when the specular comes up at the very edges. [34:08.000 --> 34:19.000] And this is what's still, to this day, sort of the standard OpenGL shading model: it's Gouraud shading, with calculations at the vertices, interpolating the colors or parameters across it.
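To see why per-vertex (Gouraud) evaluation misses a highlight in the middle of a polygon, here is a contrived sketch: lighting is computed only at the corners and barycentrically interpolated across the triangle, so a specular term that happens to be near zero at all three vertices stays near zero everywhere inside. The vertex values are invented for the example.

    #include <stdio.h>

    /* Gouraud shading: evaluate lighting at the three vertices only, then
       interpolate the resulting values across the triangle with barycentric
       weights (w0 + w1 + w2 == 1).                                          */
    static double gouraud(double c0, double c1, double c2,
                          double w0, double w1, double w2)
    {
        return w0*c0 + w1*c1 + w2*c2;
    }

    int main(void)
    {
        /* Contrived case: the specular term is ~0 at every vertex, even though
           a true per-pixel evaluation would put a bright hot spot in the middle
           of the triangle.                                                    */
        double spec_at_vertex[3] = {0.02, 0.01, 0.03};

        /* Sample the triangle center (equal barycentric weights): the
           interpolated value is still tiny, so the highlight never shows up.  */
        double center = gouraud(spec_at_vertex[0], spec_at_vertex[1], spec_at_vertex[2],
                                1.0/3.0, 1.0/3.0, 1.0/3.0);
        printf("interpolated specular at center = %f\n", center);   /* ~0.02 */
        return 0;
    }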
[34:19.000 --> 34:34.000] So this model is still with us to this day for a lot of sort of quick stuff that's not visual-simulation oriented. You just write something using lighting with OpenGL; that's the model that you get if you turn on specular highlights. [34:34.000 --> 34:48.000] In graphics where they care more about visual quality, what started happening was interpolating not the color across it, but interpolating the normal, sort of the curvature, across each point, and then applying the lighting model at every point. [35:18.000 --> 35:26.000] Going from a few hundred or a few thousand lighting calculations to hundreds of thousands of them for a scene was a large use of additional processing power. [35:26.000 --> 35:38.000] But it got you the good-looking cases, where you could have a highlight that looked about like it should moving across the surface, or sitting on the floor, looking stable there as you moved around. [35:38.000 --> 35:56.000] We've seen games that do not have interpolation done in different ways, where the lighting would change dramatically. We always have the problem of densely tessellated characters or objects and then very low tessellation on the world. [35:56.000 --> 36:07.000] And the problem that you'd run into with that is that if you're applying one of these interpolation schemes to it, you would have something where you could never have highlights in the middle of the surface, only at the corners. [36:07.000 --> 36:18.000] And there were also issues with perspective math and clipping that would mean that it would change as a really big polygon got clipped by the edge of the screen, in almost all cases the way people did it. [36:18.000 --> 36:31.000] And this was one of the big things that pushed me, during the Quake time frame, to use light maps for the first time; I had seen other games that were doing lighting at the vertices and I didn't think it was good enough. [36:31.000 --> 36:39.000] You couldn't get anything resembling a shadow. You had all these swimming artifacts with the lighting, and it just didn't give you what I wanted to see. [36:39.000 --> 36:53.000] And while Quake didn't have any specular highlights, it did have these light maps, where you had samples every 16 pixels and we interpolated across those, and that gave us the look that was very important for it. [36:53.000 --> 37:04.000] And it was only all the way up at Doom 3 where we would start doing per-pixel operations like this to get the much better calculations. [37:04.000 --> 37:22.000] So, even with this level of graphics at that time, where you've just got sort of this Phong lighting, simple models, hacks like the specular exponent and the ambient term, we started to see some offline things being rendered, like some movies; the early work, some early NASA promotional work that Jim Blinn did, [37:22.000 --> 37:43.000] was significant in the sort of growth of all of this, and we finally saw some feature theatrical films, like The Last Starfighter and especially Tron, where, if you go back and you look at Tron, you have a lot of these sort of Gouraud-shaded, solid model things on there with your light cycles or recognizers and so on. [37:43.000 --> 38:02.000] And they were doing something smart: they were intelligently picking a battle that could be won at the time.
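Going back to the per-pixel approach mentioned above (interpolating the normal rather than the color, then running the lighting model at every sample), here is a small illustrative sketch; the two vertex normals and the overhead light are made-up values, and renormalizing the interpolated normal is the key step.

    #include <math.h>
    #include <stdio.h>

    /* Per-pixel shading: instead of interpolating vertex colors, interpolate
       the vertex normals, renormalize, and apply the lighting model at every
       sample point along the surface.                                        */
    static void lerp_normal(const double n0[3], const double n1[3], double t, double out[3])
    {
        for (int i = 0; i < 3; i++)
            out[i] = (1.0 - t)*n0[i] + t*n1[i];
        double len = sqrt(out[0]*out[0] + out[1]*out[1] + out[2]*out[2]);
        for (int i = 0; i < 3; i++)
            out[i] /= len;                  /* interpolated normals must be renormalized */
    }

    int main(void)
    {
        /* Two vertex normals bending away from each other, as on a curved surface. */
        double n0[3] = {-0.5, 1.0, 0.0}, n1[3] = {0.5, 1.0, 0.0};
        double l[3]  = {0.0, 1.0, 0.0};     /* light straight overhead */
        double n[3];

        for (double t = 0.0; t <= 1.0; t += 0.25) {
            lerp_normal(n0, n1, t, n);
            double diffuse = fmax(n[0]*l[0] + n[1]*l[1] + n[2]*l[2], 0.0);
            printf("t=%.2f  diffuse=%.3f\n", t, diffuse);   /* brightest in the middle */
        }
        return 0;
    }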
If you said, well, we have to go ahead and render photorealistic humans, we were nowhere close to that task, but we could do geometric solid models that looked good enough to show on the big screen, and that was a pretty big breakthrough. [38:32.000 --> 38:43.000] And the whole process of hidden surface removal is another step on top of this, where if you've got lots of cubes, how do you know which one draws on top of the other one? [38:43.000 --> 38:52.000] And this was another thing: if you look back in the research from the 70s especially, there's tons of work going on on hidden surface removal, these clever different algorithmic ways. [38:52.000 --> 39:03.000] Today we just kill it with a depth buffer; we just throw megabytes and megabytes of memory at it and the problem gets solved much, much more easily. But this path of rasterization is still with us today. [39:03.000 --> 39:28.000] GPUs don't rasterize in scanline order like this, they follow crazy winding paths to maximize memory bandwidth, to fill up tiles, to rasterize them in different pieces, and they rasterize whole blocks at a time, but it's still essentially a rasterization method, where we have shapes and we figure out how to rasterize them, we figure out which pixels they're going to cover, and then we figure out what we want to do with them. [39:28.000 --> 39:48.000] The alternate scheme, which was also developed in the later 70s, is ray tracing, where instead of saying, "Alright, I'm starting with my object, I'm going to take these vertices, these four vertices that are in space, I'm going to take my virtual camera, and I'm going to transform them, find out where they are on the screen, and then fill them in," [39:48.000 --> 40:13.000] ray tracing goes the other way, where you start off with your camera in space somewhere, your virtual viewing screen, and through that you send rays out into your world and you intersect them with your cube over here, and if it hits that cube first, it knows it didn't hit anything behind that, it's got a surface point there, and it can apply whatever shading model it needs to. [40:13.000 --> 40:42.000] The thing that ray tracing gave, I mean, it's radically slower, like hundreds or thousands of times slower than rasterization, if you're doing just the most straightforward thing. If you just want to draw that cube, you can draw the same thing with rasterization or ray tracing, it's just going to be a thousand times slower with ray tracing. But it allowed a couple of things that were either very difficult or impossible to do properly with rasterization, and the thing that you would always see in ray tracing demos is your shiny reflective spheres. [40:42.000 --> 40:55.000] So you've got a little chrome ball, and the fact that you could see the world reflected in it, and then back in your eye, was the thing that ray tracing could do that rasterization couldn't really fundamentally do at all. [40:55.000 --> 41:09.000] And you would approximate it with environment maps and different things, but for reflections and for refraction, doing those things properly, ray tracing was really the only good solution, but it wasn't practical even for most offline work. [41:39.000 --> 42:06.000] So all of this about surface interactions and finding out what you hit with the light kind of dodges one of the really hard problems, which is that, well, light obviously doesn't reach through things.
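The ray tracing setup described above, reduced to a toy: shoot one ray from the camera through each pixel of a tiny image and test it against a single sphere (a sphere because the intersection is just a quadratic). The scene, camera, and image size are arbitrary example values.

    #include <math.h>
    #include <stdio.h>

    /* Ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for the nearer
       positive t.  Returns 1 and writes t_hit on a hit, 0 on a miss.        */
    static int hit_sphere(const double o[3], const double d[3],
                          const double c[3], double r, double *t_hit)
    {
        double oc[3] = {o[0]-c[0], o[1]-c[1], o[2]-c[2]};
        double a = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        double b = 2.0*(oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2]);
        double k = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
        double disc = b*b - 4.0*a*k;
        if (disc < 0.0) return 0;
        double t = (-b - sqrt(disc)) / (2.0*a);       /* nearer of the two roots */
        if (t <= 0.0) return 0;
        *t_hit = t;
        return 1;
    }

    int main(void)
    {
        double eye[3]    = {0.0, 0.0, 0.0};
        double center[3] = {0.0, 0.0, -5.0};          /* a sphere in front of the camera */
        const int W = 8, H = 8;

        for (int y = 0; y < H; y++) {                 /* one ray per pixel of a tiny image */
            for (int x = 0; x < W; x++) {
                double dir[3] = {(x + 0.5)/W - 0.5, (y + 0.5)/H - 0.5, -1.0};
                double t;
                putchar(hit_sphere(eye, dir, center, 1.0, &t) ? '#' : '.');
            }
            putchar('\n');
        }
        return 0;
    }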
If you've got something up here and you've got another surface down here and the light's up here, this should be in shadow because it's blocked by this, but that turns out to not be a particularly trivial thing to resolve. [42:06.000 --> 42:27.000] It's basically the same problem as how you view something from your point of view, but viewed from the light's point of view, and that can mean that every light in your scene has to do a similar rendering process to what your view does, possibly harder, because they're omnidirectional lights in many different cases, and it's just a tough problem. [42:27.000 --> 42:36.000] As with so many things, there's a lot of wonderful research in the 70s and 80s going through how you do shadows effectively with these different analytic solutions. [42:36.000 --> 42:53.000] In the end, we had a brief period where stencil volumes were an effective way to do things, but now it's essentially all shadow buffers, where we really do take every light, render an image of the scene from it, and use that to back-project and figure things out. [42:53.000 --> 43:13.000] But that was one thing that ray tracing had an elegant solution for. Again, if you're already a thousand times slower, who cares if you're another factor of two or three slower: for every point you hit, you go ahead and say, I've got my light up here, I'll trace to the light, or to however many lights I've got, and if there's something that blocks it, then that's going to be shadowed and I can take it out. [43:13.000 --> 43:35.000] So ray tracing always had this much clearer abstraction of what you're doing. It's easy to tell what's going on: you send out a little ray, you hit something, you determine whether you can see all the other lights, or you bounce or refract into something else. So it's always been easy and clear, it's just had this thousand-times-slower problem to deal with. [43:35.000 --> 44:02.000] So the advances that were being made in graphics kind of after this early age focused on the changes in what you can do with the surfaces as the first obvious thing, and a lot of these were driven by sort of artistic and aesthetic concerns, where, if you pull up a 3D rendering program and you look at their material stuff, there's a whole page full of options, things that you can tweak, knobs you can turn, checkboxes you can set, [44:02.000 --> 44:15.000] and each of these had some use case where somebody wanted it because it made their image generally look a certain way that they wanted; very rarely were these things driven by physically correct rendering. [44:15.000 --> 44:31.000] And there was a huge plethora of these things that came out; every different program had a different set of options. You always had this fallback of: you've got your diffuse color, your specular color, your roughness. This basic Phong shading model has persisted to this day, but now we have a ton of other things that we can do. [44:31.000 --> 44:45.000] We can tack on there some subsurface scattering approximations, Fresnel lighting, a different frequency response on surfaces; some of these things do have a physical basis to them. [44:45.000 --> 44:59.000] Like one obvious thing, the Fresnel effect, is the effect that as you get more and more glancing to something, the reflection gets stronger and stronger, and you see this: this is what makes water and glass look like water and glass.
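The shadow test mentioned a little earlier ("trace to the light, and if something blocks it, the point is in shadow") can be sketched like this, using a single sphere as the occluder; all the positions are made up for the example.

    #include <math.h>
    #include <stdio.h>

    /* Returns 1 if the segment from p toward the light is blocked by a sphere
       occluder (center c, radius r) before reaching the light, else 0.        */
    static int in_shadow(const double p[3], const double light[3],
                         const double c[3], double r)
    {
        double d[3]  = {light[0]-p[0], light[1]-p[1], light[2]-p[2]};  /* toward light */
        double oc[3] = {p[0]-c[0], p[1]-c[1], p[2]-c[2]};
        double a = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        double b = 2.0*(oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2]);
        double k = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
        double disc = b*b - 4.0*a*k;
        if (disc < 0.0) return 0;                     /* shadow ray misses the occluder */
        double t = (-b - sqrt(disc)) / (2.0*a);
        return t > 1e-4 && t < 1.0;                   /* blocker strictly between p and light */
    }

    int main(void)
    {
        double light[3]      = {0.0, 10.0, 0.0};
        double blocker[3]    = {0.0,  5.0, 0.0};      /* sphere halfway up to the light */
        double p_shadowed[3] = {0.0,  0.0, 0.0};
        double p_lit[3]      = {4.0,  0.0, 0.0};

        printf("point A in shadow: %d\n", in_shadow(p_shadowed, light, blocker, 1.0)); /* 1 */
        printf("point B in shadow: %d\n", in_shadow(p_lit,      light, blocker, 1.0)); /* 0 */
        return 0;
    }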
[44:59.000 --> 45:19.000] If you look straight at them, you pretty much see straight through them without a whole lot of reflection, but as you get more and more edge-on, even a surface like this, where when I'm looking at it at this angle here, I've got a very, very strong, clear sense of the slightly wavy reflection of that white line there, while if I look at it right here, it's barely visible. [45:19.000 --> 45:35.000] So that's a physical effect in reality, and you can work through the real physics equations of why this happens, but people again sort of called up the trusty raise-a-cosine-to-a-power, and it sort of looks like what we want when we're dotting a couple of vectors together. [45:35.000 --> 45:41.000] So that's something that's based on plausible physics, but generally only roughly approximated. [45:41.000 --> 45:51.000] And there are other things like that, like the way some metals get their metallic look because they slightly change colors as they get towards a grazing angle. [45:51.000 --> 46:01.000] So again, you can calculate the real physics for that, or you can just kind of say, well, this color sort of changes to this color at the edges, and start interpolating between them. [46:01.000 --> 46:10.000] But lots and lots of good work, and lots of high-budget movies and so on, were built with these sort of very ad hoc techniques. [46:10.000 --> 46:18.000] But sort of in parallel with this, the other big revolution that was happening was global light transport and global illumination. [46:18.000 --> 46:28.000] That comes back to that whole hack of the ambient term, the sense that, obviously, okay, if I'm right here, the light's only directly hitting the outside. [46:28.000 --> 46:34.000] The back of my hand has no direct view to any light, but it's still quite bright and clearly illuminated. [46:34.000 --> 46:42.000] It's bright because all those lights hit this white whiteboard, bounce off of that, and wind up lighting my hand from the back. [46:42.000 --> 46:50.000] And you can see color changes: like if I move up here where it's mostly covered by the blue marker on there, there will be a blue tint to it. [46:50.000 --> 46:56.000] And this recognition that so much of what we consider important in the visual field is actually indirect. [46:56.000 --> 47:00.000] It's not just a matter of, here's the light, here's the surface, what's the reaction. [47:00.000 --> 47:04.000] Because we come back to how much of the light gets bounced around. [47:04.000 --> 47:12.000] And there's a term called the albedo of the surface, which is what fraction of the light gets reflected versus absorbed. [47:12.000 --> 47:20.000] And there's some tricky terminology with this, because you can have the total solar albedo, where you talk about how much of the energy that comes from the sun gets reflected. [47:20.000 --> 47:25.000] And this is used for climate modeling and some remote imaging and things like this where it matters. [47:25.000 --> 47:31.000] But then you've also got the visible albedo, which for rendering is what we care about. [47:31.000 --> 47:42.000] And the point is that the best reflectors, your chrome sphere that's mirrored, or your white piece of chalk, or your freshly driven snow, those can reflect 90% of the light, [47:42.000 --> 47:50.000] while your darkest surfaces, your lump of black coal, or asphalt, in some cases might only reflect 5%.
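For the grazing-angle reflection falloff (the Fresnel effect) discussed above, a common cheap approximation in graphics, not something spelled out in the talk, is Schlick's formula: reflectance rises from a base value looking straight on toward 1.0 as the view becomes edge-on. The 2% base reflectance used below is a typical value quoted for water.

    #include <math.h>
    #include <stdio.h>

    /* Schlick's approximation to the Fresnel equations: reflectance rises from
       a base value f0 (looking straight down the normal) toward 1.0 at grazing
       angles.  cos_theta is the cosine of the angle between the view direction
       and the surface normal.                                                 */
    static double fresnel_schlick(double cos_theta, double f0)
    {
        double m = 1.0 - cos_theta;
        return f0 + (1.0 - f0) * m*m*m*m*m;     /* (1 - cos)^5 */
    }

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        double f0_water = 0.02;                 /* ~2% reflectance looking straight at water */
        for (int deg = 0; deg <= 90; deg += 15) {
            double c = cos(deg * PI / 180.0);
            printf("%2d deg from normal: reflectance %.3f\n",
                   deg, fresnel_schlick(c, f0_water));
        }
        return 0;
    }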
[47:50.000 --> 48:00.000] But when you're reflecting 90% of the light, what that means is that if you're in a room that has mostly white surfaces, a single bit of light coming out of your [48:00.000 --> 48:02.000] light emitter [48:02.000 --> 48:06.000] might bounce around a dozen times before it finally gets absorbed. [48:06.000 --> 48:11.000] So it could take a very complex path before it winds up getting to your eye. [48:11.000 --> 48:18.000] And this is why we can have cases like a dark room illuminated only through the crack under the door. [48:18.000 --> 48:26.000] But you can still wind up looking around, even around corners; you can go into the closet in the dark room, illuminated only through the keyhole, and still find things somewhat lit. [48:26.000 --> 48:34.000] And that's because of this many-bounce path the light can take from the lights, coming around until it actually gets to your eye. [48:34.000 --> 48:42.000] And this turns out to be a really frighteningly complex and expensive problem to solve properly. [48:42.000 --> 48:56.000] The first sets of attempts at this dealt with radiosity approaches. [48:56.000 --> 49:05.000] Now, a lot of this was driven by engineering things beyond just making pictures, because you would talk about things like heat management. [49:05.000 --> 49:14.000] If you have a certain amount of energy coming in here, how hot is something going to get? And what's the hottest part going to be? Because that matters for a lot of engineering purposes. [49:14.000 --> 49:26.000] So you can do things like make a complex surface here and say, energy is coming in here. [49:26.000 --> 49:32.000] How much of this energy makes its way to here, here, here? [49:32.000 --> 49:40.000] And that part is just a matter of geometry calculations, to say how much of this is directly impinging on that surface. [49:40.000 --> 49:48.000] What gets complicated then is you say, well, this reflects 50% of its light, and that 50% goes to all of these different ones here. [49:48.000 --> 49:53.000] And this one reflects 50%, and that goes to all the ones here. [49:53.000 --> 50:05.000] And in theory, since you're doing everything in floating-point math, you can bounce it 100 times and say, well, 0.01% winds up coming back to another spot. [50:05.000 --> 50:13.000] At some point you just say it's converged well enough; the solution is not going to change much no matter how many more bounces you do. [50:13.000 --> 50:31.000] So the radiosity solutions work by creating this giant linear algebra matrix of coefficients, where you identify all of your surfaces and you say what form factor, what fraction of the energy, goes to each of the other different surfaces. [50:31.000 --> 50:41.000] And then you may be solving this 10,000 by 10,000 matrix, and there was a lot of work on the optimizations that then go into solving this more effectively. [50:41.000 --> 50:49.000] But there are two reasons why radiosity is not a particularly relevant technique for computer graphics anymore. [50:49.000 --> 51:01.000] One aspect that it sort of glossed over was the notion of occlusion, where if you've got a surface, if this goes out here and you go around the dark corner, [51:01.000 --> 51:13.000] we've got this surface here, and it's clear that it can't see this surface at all. It can see part of this surface, a fraction of it. [51:13.000 --> 51:30.000] And it can see an even smaller part of this surface over here.
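A toy version of the radiosity idea described above: each patch's outgoing energy is its own emission plus its reflectance times whatever arrives from every other patch through the form factors, and instead of inverting the big matrix you can just iterate bounces until the numbers stop changing. The three-patch form factors below are invented for illustration, not derived from any real geometry.

    #include <stdio.h>

    #define N 3   /* a toy scene with three patches */

    int main(void)
    {
        /* F[i][j] is the form factor: the fraction of energy leaving patch i
           that arrives at patch j (made-up toy values).                      */
        double F[N][N] = {
            {0.0, 0.4, 0.3},
            {0.4, 0.0, 0.3},
            {0.3, 0.3, 0.0},
        };
        double emission[N]    = {1.0, 0.0, 0.0};   /* only patch 0 emits light */
        double reflectance[N] = {0.5, 0.8, 0.8};   /* albedo of each patch      */
        double B[N] = {0}, Bnew[N];

        /* Iteratively gather bounced light:  B_i = E_i + rho_i * sum_j F_ji * B_j.
           Each pass adds one more bounce; stop when the solution stops changing. */
        for (int pass = 0; pass < 100; pass++) {
            double change = 0.0;
            for (int i = 0; i < N; i++) {
                double gathered = 0.0;
                for (int j = 0; j < N; j++)
                    gathered += F[j][i] * B[j];
                Bnew[i] = emission[i] + reflectance[i] * gathered;
                change += Bnew[i] > B[i] ? Bnew[i] - B[i] : B[i] - Bnew[i];
            }
            for (int i = 0; i < N; i++) B[i] = Bnew[i];
            if (change < 1e-9) {                   /* converged well enough */
                printf("converged after %d passes\n", pass + 1);
                break;
            }
        }
        for (int i = 0; i < N; i++)
            printf("patch %d radiosity = %.4f\n", i, B[i]);
        return 0;
    }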
So you have to calculate these occlusion terms, where for each surface, unless you're inside some deformed, stretched icosahedron or, you know, some convex solid that has no [51:30.000 --> 51:58.000] concavities inside it, you're going to have these aspects of occlusion. And this becomes a very, very difficult thing to solve completely analytically. If you're trying to stay in a purely analytic world and you try to solve, well, okay, we have this surface occluding this surface, and then another surface here, another surface here, it's the potentially visible set problem on every polygon, and it's an analytic nightmare. [51:58.000 --> 52:27.000] So you wind up solving this by approximating: you just say, all right, I've got a surface here, I'll just throw a bunch of rays out to test, and if I throw 20 rays out and 10 of them get through, I'll say I'm 50% occluded. Now, a purist will start complaining and saying, that's random, there's this randomness, you might be misestimating, there could be pathological cases. And there's some truth to that; any time that you're sampling things, there are sampling cases that can turn out pathological. [52:27.000 --> 52:52.000] But the other side of that then goes, well, we're tracing rays; we have another technique that involves tracing lots of rays, so let's come at it from the other direction, which is to say, well, let's start with ray tracing, and let's try and solve the global illumination problem using nothing but ray tracing, which leads to path tracing. [53:52.000 --> 54:21.000] So, one of your billions of light rays goes out, hits there, decides it's going to reflect up; another one goes out, hits here, it's going to reflect over. But eventually, some ray is going to come down, hit a point here, and then reflect in exactly the direction that goes over and hits the surface of your eye, which the lens can then focus into something that you can perceive. And this has an interesting biological side to it: the larger an eye is, the more light it can collect. [54:21.000 --> 54:50.000] And this is what's happening in reality. Zillions and zillions of photons come off, they bounce around, and eventually some tiny fraction of them gets to the lens in your eye, or your detector, whatever you're using, and can be resolved into an image. So, you can make an image like this. People have done it; it is extraordinarily inefficient. [54:50.000 --> 55:19.000] But you can solve everything with it. This is complete and accurate; as accurate as your analysis of what the light's distribution is and what the surfaces' distributions are, this can be as good as that. You can have your extra surface up here, where you hit the ceiling, you bounce back down, you hit a wall over here, you bounce back over, and then eventually make your way to the eye. And you start thinking, well, you can have 10 bounces going in random directions, and your eye is only sitting there, [55:19.000 --> 55:48.000] and your eye is only some handful of millimeters across, but you're projecting from an area this size. How many traces do you have to do? Well, you have to do billions and billions, and you wind up with a very noisy image at that. But if you did enough of them, this would come out with the right solution. Trace a ray: it either gets absorbed, or reflects off in a different direction, or transmits through the surface.
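The "throw 20 rays out and count how many get through" occlusion estimate mentioned above, as a sketch: sample random directions over the hemisphere above a point and report the fraction stopped by an occluder (a single sphere here). The rejection sampling is deliberately crude and all scene values are invented.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Does a ray from the origin in direction d hit a sphere (center c, radius r)? */
    static int blocked(const double d[3], const double c[3], double r)
    {
        double a = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        double b = -2.0*(c[0]*d[0] + c[1]*d[1] + c[2]*d[2]);
        double k = c[0]*c[0] + c[1]*c[1] + c[2]*c[2] - r*r;
        double disc = b*b - 4.0*a*k;
        return disc >= 0.0 && (-b - sqrt(disc)) / (2.0*a) > 0.0;
    }

    int main(void)
    {
        const int NUM_RAYS = 10000;
        double occluder[3] = {0.0, 2.0, 0.0};   /* a sphere hanging above the surface point */
        int hits = 0;

        srand(1);
        for (int i = 0; i < NUM_RAYS; i++) {
            /* crude rejection sampling of a random direction in the upper hemisphere */
            double d[3];
            do {
                d[0] = 2.0*rand()/RAND_MAX - 1.0;
                d[1] = 2.0*rand()/RAND_MAX - 1.0;
                d[2] = 2.0*rand()/RAND_MAX - 1.0;
            } while (d[0]*d[0] + d[1]*d[1] + d[2]*d[2] > 1.0 || d[1] <= 0.0);

            if (blocked(d, occluder, 1.0))
                hits++;
        }
        /* fraction of sample rays stopped by the occluder = estimated occlusion */
        printf("estimated occlusion: %.1f%%\n", 100.0 * hits / NUM_RAYS);
        return 0;
    }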
And you've got this whole model that you use, the bidirectional subsurface scattering distribution function. [56:18.000 --> 56:46.000] The reasonable approximations are, instead of throwing rays out from the light, which are mostly going to go nowhere near what you want, you can reverse the trace and go from your eye, like the kind of classic ray tracing, go to the surface, and then you start getting into the cases where one of the key, one of the sort of buzzwords in high-end rendering is whether a renderer is biased or unbiased. [56:46.000 --> 57:14.000] A biased renderer is not necessarily perfect physics, but it's almost right, and they do it because it's going to be a lot faster. Like the standard thing that you do, if you don't mind being a biased renderer, is you say, well, I have all these directions that I could go into the world. I could go up to the ceiling, I could go down to the floor, but I know I've got all these lights up here, so I'm going to send most of my rays towards the lights, because those are almost certainly going to be the things that really make a difference. [57:14.000 --> 57:27.000] So you go, you hit your point, and you trace against every light; you know, you've got three lights going here, let's run a trace up against them and check for occluders, solid things blocking it off. [57:27.000 --> 57:43.000] And then you start throwing random numbers of rays in different directions. You can be smart and base it on what the character of the surface is. You know, it again comes down to these distribution functions, where you could have directions where it's more likely that light comes in than others. [58:13.000 --> 58:32.000] If you're biased and you trace specifically to certain lights, there could be combinations of surfaces here. Like you might have a surface here which is slightly emissive, and if you wind up hitting that because you were tracing towards the light, that's going to get over-represented versus something that's over here that wasn't in the direction of one of the lights. [58:32.000 --> 59:01.000] But this approach, you know, it pretty much works. For the baking in Tech 5, we have a very primitive lighting solution, because even though we do it offline, the surface area of one of the maps in Rage is about as much as the pixels that go into a feature film, and we have turnaround time. So clearly we can't do these billions of ray traces for every, what would be a frame of that; we have to keep it down to some credible amount of time. [59:01.000 --> 59:30.000] So what we do is, when we're rasterizing a surface, we don't even have the viewer at all; we're doing a view-independent approach for the global illumination. And then again the terminology is problematic, because we have radiosity as terminology in a lot of places as a synonym for global illumination, and technically it's not, it shouldn't be that way. We have a visualizer called rad preview, even though it does not do a matrix calculation for radiosity at all; you know, it is based more on this tracing approach. [59:30.000 --> 59:42.000] So we get our surfaces, we look at all the lights that we think should be affecting us, we trace to them to get our shadows, and sample them to make soft shadows. In fact that's another important thing.
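(Before the soft shadow discussion, a minimal sketch of that biased direct-plus-indirect sampling: trace straight at every known light, then throw a few random rays for whatever else is out there. The scene, the visibility stub, and sampleEnvironment below are placeholders for illustration, not the Tech 5 code.)

    #include <cstdio>
    #include <random>

    struct Vec { double x, y, z; };
    struct Light { Vec pos; double intensity; };

    // Placeholder visibility test: in a real tracer this walks the scene looking
    // for "solid things blocking it off"; here everything is visible.
    static bool visible(const Vec&, const Vec&) { return true; }

    // Placeholder for whatever a random indirect ray would return after hitting
    // the environment; a real renderer would shade that hit with simplified lighting.
    static double sampleEnvironment(std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 0.2);
        return u(rng);
    }

    // Biased shading of one surface point: trace straight at every known light
    // (that's the bias), then add a few random rays for the indirect term.
    static double shadePoint(const Vec& p, const Light* lights, int numLights,
                             int indirectRays, std::mt19937& rng) {
        double total = 0.0;
        for (int i = 0; i < numLights; ++i) {
            if (!visible(p, lights[i].pos)) continue;      // occluded: skip this light
            double dx = lights[i].pos.x - p.x, dy = lights[i].pos.y - p.y,
                   dz = lights[i].pos.z - p.z;
            double dist2 = dx * dx + dy * dy + dz * dz;
            total += lights[i].intensity / dist2;          // inverse-square falloff
        }
        double indirect = 0.0;
        for (int i = 0; i < indirectRays; ++i)
            indirect += sampleEnvironment(rng);            // what the random bounces pick up
        return total + indirect / indirectRays;            // average the indirect samples
    }

    int main() {
        std::mt19937 rng(7);
        Light lights[3] = {{{0, 3, 0}, 10.0}, {{2, 3, 1}, 5.0}, {{-2, 3, -1}, 5.0}};
        Vec p = {0, 0, 0};
        std::printf("shaded value = %.3f\n", shadePoint(p, lights, 3, 8, rng));
    }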
[59:42.000 --> 59:59.000] The way you get a soft shadow is, if you've got a surface and you've got an object that's going to cast a shadow, if you had a point light source, so it was nothing but a teeny tiny point that all the energy came out of, then you would have a hard shadow. [59:59.000 --> 60:27.000] A hard-shadow engine would look like Doom 3, where you just have fully illuminated and then fully shadowed. In reality there's no such thing as a point light source, and this is an important, important thing to realize. Everything... even if you look at a light bulb, a dangling incandescent light bulb, the photons are actually coming off not of a point, but off of a little zigzaggy filament that's inside it. It has an area, and the photons come off distributed from that area. [60:27.000 --> 60:49.000] Now the sharpness of a shadow depends on the ratio of the area of that emitter to the distance that it's going across. When you have a great big broad fluorescent light assembly, and you've got a small occluder here, everything is going to be lit to some degree. [60:49.000 --> 61:18.000] So in this case you might have only the very smallest area there that would be solid, completely shadowed, but as you move over you start to be able to see part of the light, so it gets brighter and brighter until you get to the point over here where you can see the entire light emitter. So, to get the soft shadows in Rage... well, if you looked at the original, the earlier Quakes, there were soft shadows in there, [61:18.000 --> 61:29.000] but they weren't a matter of calculating soft shadows; they were there because we made a hard shadow calculation and then we interpolated between it, which is why you got kind of a blurry stair-step on the edges there. [61:29.000 --> 61:46.000] For Tech 5 we actually send a number of shadow samples, and this is one of those things that gets into performance trade-offs, where if a designer sets a very large area for a light source, then you will have a very broad area of changing shadow, [61:46.000 --> 61:58.000] and if you only put 16 tests to it, that means you only have the possibility of 16 bands of different lighting, and that's in the best case, if your samples come out sorted exactly where they do the most good. [61:58.000 --> 62:08.000] And it's completely possible, if you've got a broad area light source, to need hundreds of samples for every pixel to determine how bright it should be. [62:08.000 --> 62:17.000] And it can get worse; in a lot of offline rendering they use thousands of samples per fragment when you get into the global illumination. [62:17.000 --> 62:30.000] So that's what we do for the direct lighting; obviously it's a biased lighting approach there, because we sample directly to the lights, but then we send out random rays from the surface to see what else it hits. [62:30.000 --> 62:37.000] And when one goes out and hits the surface up here, then we apply a simplified version of the lighting to that. [62:37.000 --> 62:42.000] We don't do all the full soft shadows, but we do basic lighting approaches. [62:42.000 --> 62:53.000] We've had options to do multiple additional bounces, but this is what we live with, this approach of sampling the global environment, and we don't do lots of it for each pixel.
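(A back-of-the-envelope version of that emitter-size-over-distance relationship; the distances and sizes are made up just to show the effect.)

    #include <cstdio>

    // Penumbra width from similar triangles: a bigger emitter, or a receiver
    // farther behind the occluder, smears the shadow edge over a wider band.
    int main() {
        double lightDiameter   = 1.2;   // broad fluorescent fixture (meters, illustrative)
        double lightToOccluder = 3.0;   // distance from emitter to the shadow caster
        double occluderToFloor = 1.5;   // distance from the caster to the receiving surface

        double penumbra = lightDiameter * occluderToFloor / lightToOccluder;
        std::printf("penumbra width ~ %.2f m\n", penumbra);   // ~0.60 m of soft edge

        lightDiameter = 0.01;           // nearly a point source: a tiny zigzag filament
        penumbra = lightDiameter * occluderToFloor / lightToOccluder;
        std::printf("penumbra width ~ %.3f m\n", penumbra);   // ~0.005 m: effectively hard
    }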
[62:53.000 --> 63:06.000] What we wind up doing is, each point throws one or a few samples in different directions, and then rather than averaging them just for this pixel, we average over a broader range of pixels. [63:06.000 --> 63:21.000] And these are the types of tradeoffs... everybody doing rendering makes different trades like this, where they decide what they think is most important and how much time they can afford to spend on things, and you kind of make your choices and you live with them after that. [63:21.000 --> 63:29.000] But we know doing it right is just a matter of throwing billions of rays in an ideal case. [63:29.000 --> 63:32.000] You have to throw lots and lots into the environment. [63:32.000 --> 63:39.000] We can make decent approximations now, but we're going to soak up all the additional computing power that can be given. [63:39.000 --> 63:46.000] One constant in the offline rendering world is that the frames always take a half hour to render in those studios. [63:46.000 --> 63:59.000] The more power they get, the more things they just add to it. There's hope that that's not a law of nature, that we are getting to faster turnaround, kind of like the pace of hard drive size versus usage. [63:59.000 --> 64:13.000] But it does seem likely that the path forward is lots and lots of rays, physically accurate material definitions, and approaches that are approximations of the sampling of path tracing. [64:13.000 --> 64:31.000] There are some neat demos going around today, like the Brigade path tracing demo, which is real time, and it's doing simple path tracing from a parallel outdoor light, and it's noisy and fizzly as it comes in, but you can stop and watch it kind of come in more crisply. [64:31.000 --> 64:42.000] And eventually, this is going to be the way things go. This is the way we're going to be rendering, but we still have maybe a couple orders of magnitude before it's really competitive. [64:42.000 --> 64:58.000] I think one more order of magnitude in performance, and you'll start seeing it used for some real things; but still, you have to have a good reason to step away from rasterization. But probably when we get two orders of magnitude, then you start seeing it as one of the more general tools. [64:58.000 --> 65:16.000] And the reason that it's winning in the offline world, even though it's still slower, and people still care about how long their renders are taking (if you're making a feature film or a TV commercial, your iteration time matters), is the sense that you get more out of this being understandable. [65:16.000 --> 65:40.000] With rasterization, the environment maps, shadow maps, there are all these knobs that... the best people know what they mean, but for 90% of the people working in graphics, they have these things where they know, push this this way and it kind of does something, but it's a lot of black magic, and a lot of things that are just not at all physically plausible. [65:40.000 --> 65:56.000] Because one of the things that I've been doing, working with the artists at id over the last several months, is to start moving us towards this more physically based sense of things, where if you just use your standard diffuse, specular, roughness, you can have materials that just make no sense at all in the real world. [65:56.000 --> 66:03.000] You can have things that reflect more energy than comes in when you've got a bright diffuse and a bright specular.
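(One way to see that energy problem is a simple sanity check on a material's combined diffuse and specular response; this is a generic illustration of the constraint, with invented example values, not id's actual material validation.)

    #include <cstdio>

    // A physically plausible opaque material should not reflect more energy than
    // arrives: diffuse albedo plus specular reflectance must stay at or below 1.
    static bool conservesEnergy(double diffuse, double specular) {
        return diffuse + specular <= 1.0;
    }

    int main() {
        struct { const char* name; double diffuse, specular; } mats[] = {
            {"painted wall", 0.60, 0.04},   // plausible dielectric
            {"bare metal",   0.02, 0.90},   // metals: almost everything is specular
            {"hand-painted", 0.85, 0.70},   // bright diffuse AND bright specular: impossible
        };
        for (const auto& m : mats)
            std::printf("%-12s diffuse=%.2f specular=%.2f -> %s\n", m.name, m.diffuse, m.specular,
                        conservesEnergy(m.diffuse, m.specular) ? "ok" : "reflects more than comes in");
    }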
[66:03.000 --> 66:28.000] The real step that we've had to make, education-wise, is treating these maps not just as something that you paint in Photoshop, but as how you define the materials that are there, where it should be that if you're looking at something that's a belt buckle, you say, okay, this is metal, it's going to have a high specular, it's going to have a low diffuse, the specular may have color in it, it's going to have a high power or a low roughness depending on how you're formulating it, because that's what the material is. [66:28.000 --> 66:57.000] But far too often, over the past decade, in computer games especially, the maps that have been fed into these things, the diffuse maps, the specular maps, the gloss or roughness or whatever you term it, are things that are painted in, where a lot of times you'd see a specular map where you take your diffuse map and you kind of monochromize it, maybe color-shift it, and stick it into the specular, and you wind up with things where, yes, it makes parts of it shiny and parts of it not shiny, [66:57.000 --> 67:13.000] but some of these things... like, I don't actually think that there is a physical material that exists that has a red specular reflection color. Maybe there is, but it's certainly not common; you know, specular colors are generally white, except for metals, which can be the color of the base surface. [67:13.000 --> 67:27.000] So the biggest thing that's going to be happening for making games look better is really not advancing the graphics technologies, at least for our studio; it's the matter of getting materials that actually make sense. [67:27.000 --> 67:37.000] And once you're there, then you can start improving, you know, improving the things that you do by adding your better global light transport in those cases. [67:37.000 --> 67:51.000] One more thing before I cut off at the time warning here: given the cost of all of this, millions and billions of rays, one technique that has gotten a lot of currency in recent years is ambient occlusion. [67:51.000 --> 68:00.000] Now, to explain what ambient occlusion is: it's another one of those great big hacks, but it works, you know, it's useful and it's used; it's kind of standard in a lot of offline work. [68:00.000 --> 68:10.000] So if you have, you know, an object that's got some concavity here, and you've got the light shining on it from here, so you light it all up. [68:10.000 --> 68:22.000] In an ideal world, you'd be doing all of this path tracing, and you would say, okay, some of the rays hit here, they bounce here, they bounce around in here, some of them go up here, hit here, get into that. [68:22.000 --> 68:34.000] So the tortuous path that light can take to get into there, that's what you really want to deal with. You've got your lit surface there; you might need to trace ten bounces, thousands and thousands of rays. [68:34.000 --> 68:48.000] The observation that ambient occlusion is based on is that when something has other things very close to it, it is very likely to be not as bright as things that do not have things next to it.
[68:48.000 --> 69:10.000] If you've got a flat surface and it's lit, you know, there's nothing that's going to be occluding it, taking anything away from it; but if you have a flat surface that has an occluder here, this area right here might be directly seeing the light, and it might be seeing everything in this part of the hemisphere, but part of it's going to be hitting this. [69:10.000 --> 69:19.000] And some of that may be going on and seeing the light, some of it may be bouncing in different directions. So all ambient occlusion does is, instead of sampling the whole world, [69:19.000 --> 69:29.000] it samples just a small area around the point that you're working with. And importantly, perhaps even more importantly than the scope of what it's sampling: [69:29.000 --> 69:43.000] when it hits things, it doesn't worry about the surface model, it doesn't run, you know, BRDFs or whatever; all it does is say either I hit something close or I didn't hit something, and maybe you keep track of how far away it is. [69:43.000 --> 69:56.000] And if you get something like this where, okay, there's some light coming in here, I can see this, but I trace out and 90% of everything around me is hitting something else sort of close, [69:56.000 --> 70:09.000] then based on that, I'm going to darken it down, just on the assumption that if I did run a global illumination trace through all of this, it would come out and say that I'm not as bright as something next to me that's, you know, open. [70:09.000 --> 70:20.000] So something out here will get the full value of whatever it calculates, and as you move towards here, it starts to get darker, until you move all the way in here, where it's almost all occluded. [70:20.000 --> 70:27.000] And it's a very, very crude approximation of just assuming that whatever it hits isn't going to be bright. [70:27.000 --> 70:40.000] And you can break that: you can have cases where, you know, if the light was coming in right here, where it's directly illuminating all of that, and if that was a light-colored surface, you could have more light coming down onto there rather than less. [70:40.000 --> 70:48.000] And the ambient occlusion would say it's got nearby things, it should always be less, but you could actually be getting more light from the global illumination in those cases. [70:48.000 --> 70:58.000] So this is just one in a long line of all of these approximations that we do, but the takeaway point is we know what we should do, we know what we would do if we had infinite computing power to go with it. [70:58.000 --> 71:09.000] So all of the things now are approximations on top of that: ways that we can model our data, ways that we can reduce our number of traces, and optimizations in the current paths to make things go faster. [71:09.000 --> 71:17.000] And there's lots of work going on with GPU-accelerated ray tracing, some of the Caustic Graphics work, and optimizing it in some other ways. [71:17.000 --> 71:22.000] And there's lots of active research going on about what corners you can cut. [71:22.000 --> 71:32.000] And it's interesting because, again, we know the right way: zillions of photons coming out, collect them all, right at the lens of your eye, and sort of make an image from that. [71:32.000 --> 71:39.000] But it's going to be research for the coming decade or more as we kind of work out what the very best approximations for this are.
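(A toy version of that ambient occlusion idea: sample a small hemisphere of directions around the point and only ask "did I hit something close?", then darken by the blocked fraction. The single nearby blocker and all the numbers are invented for the example.)

    #include <cstdio>
    #include <cmath>
    #include <random>

    struct Vec { double x, y, z; };

    // Does a unit-length ray from 'o' in direction 'd' hit the sphere within maxDist?
    // No BRDF, no shading of the hit: ambient occlusion only asks if something is close.
    static bool hitsNearby(Vec o, Vec d, Vec c, double r, double maxDist) {
        Vec m = {o.x - c.x, o.y - c.y, o.z - c.z};
        double b = 2.0 * (m.x * d.x + m.y * d.y + m.z * d.z);
        double cc = m.x * m.x + m.y * m.y + m.z * m.z - r * r;
        double disc = b * b - 4.0 * cc;                    // quadratic with a == 1
        if (disc < 0.0) return false;
        double t = (-b - std::sqrt(disc)) / 2.0;
        return t > 0.0 && t < maxDist;
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(-1.0, 1.0);
        Vec point   = {0.0, 0.0, 0.0};                     // surface point being shaded (normal = +z)
        Vec blocker = {0.4, 0.0, 0.4};                     // something sitting right next to it
        double blockerRadius = 0.35, maxDist = 1.0;        // only look a short distance around

        int samples = 64, open = 0;
        for (int i = 0; i < samples; ++i) {
            // Crude upper-hemisphere direction: rejection-sample a unit vector with z > 0.
            Vec d;
            double len;
            do {
                d = {u(rng), u(rng), u(rng)};
                len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            } while (len < 1e-6 || len > 1.0 || d.z <= 0.0);
            d = {d.x / len, d.y / len, d.z / len};
            if (!hitsNearby(point, d, blocker, blockerRadius, maxDist)) ++open;
        }
        // Darken the point by the fraction of the hemisphere that is blocked nearby.
        std::printf("ambient occlusion factor = %.2f\n", double(open) / samples);
    }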
[71:39.000 --> 71:45.000] So I ran a little bit over the one hour, but I can start taking questions now, so we've got the microphone there. [71:45.000 --> 71:52.000] [applause] [71:52.000 --> 72:01.000] Up until about maybe five to seven years ago, there was every year an obvious increase in realism. [72:01.000 --> 72:24.000] In offline rendering, for movies especially. And I'm wondering, since a lot of the things that you mentioned here have been around for as long as I can remember, POV-Ray and all that, decades ago, what is the main driver of that increase in visual fidelity or realism in more recent years? [72:24.000 --> 72:36.000] There are a couple factors. One is actually getting smarter about the materials, where you can throw in all of this light transport stuff, but if you don't have good materials for it, it won't matter. [72:36.000 --> 72:43.000] You'll still get non-realistic images. So better data collection, the laser scanning and different things that let us get really good material qualities. [72:43.000 --> 72:50.000] That's been one factor, but probably the biggest factor has just been people being willing to throw that much more processing power at things. [72:50.000 --> 73:02.000] Where in the early cases it would take a day to render an image that's never going to get used in production, and all you'd do is see some of the images in academic research. [73:02.000 --> 73:21.000] And the problem with that is, while some of the academic research would get the formulas right, they wouldn't have the data right to go with it, where... it's kind of like programmer art: if you wind up with the programmer or the graphics researcher building the test scene for it, it's probably not going to be a particularly good model of the world. [73:21.000 --> 73:48.000] So I think those are really the two things: materials, and then largely getting it into the hands of, making it reasonable for, the people that are going to put in the level of craft and detail that it needs to represent the world. [73:48.000 --> 73:53.000] Was that part of the motivation for educating the artists at id on making it? [73:53.000 --> 74:01.000] Well, I actually think it's necessary. I think that if you're not getting on board with physically based rendering now, you're going to be left behind as an industry. [74:01.000 --> 74:17.000] And it's been interesting watching the offline world, where you had the masters of their domain at Pixar; because they had the very best in process and technology for a long time, they were sort of stragglers in adopting many of the things with ray tracing and physically based rendering. [74:17.000 --> 74:28.000] But, you know, they've come around for the most part now, still using the right tool at the right time. But I can't think of many good arguments for not using physically plausible materials. [74:28.000 --> 74:35.000] I don't think that there are artistic gains to be had by not doing it, and also it's a minefield where you can mess yourself up. [74:35.000 --> 74:37.000] Thank you. [74:37.000 --> 74:50.000] The very latest versions of OpenGL support pixel and fragment shaders, and one of the things that I'm curious about is why you don't use procedural graphics and procedural geometry more than you do. [74:50.000 --> 74:57.000] Okay. So procedural graphics has been, by the way, "the future" for the last 20 years.
[74:57.000 --> 75:12.000] And I think that I actually have a fairly strong and sound argument for the philosophical stance against this, which is that in the end, procedural data is quirky, hard-to-deal-with data compression. [75:12.000 --> 75:17.000] And one of the things that we are continuing to get more and more of is space. [75:17.000 --> 75:32.000] So, while you can always pick out some niche market where you are going to be extremely constrained on your space, and you think, well, mobile should have been maybe the space where procedural stuff comes into its own. [75:32.000 --> 75:40.000] But, you know, mobile is ramping through all the storage sizes for everything too, so it's really not. You know, all the standard methods carry on. [75:40.000 --> 75:52.000] So it's a good tool for programmers making things, but when you want to put it into the hands of the people that are going to... if you're modeling the real world, you laser scan everything. [75:52.000 --> 76:00.000] You go in and say, I'm going to scan this room, and I'm going to have a terabyte of data, and I'll just render that more or less as a point cloud, and that's credible even. [76:00.000 --> 76:05.000] We can't ship a game like that yet, but that's still within sight of something that we can do. [76:05.000 --> 76:12.000] And if you want to give it to an artist to create something, then they are largely going to be compositing together different things and photographed sources. [76:12.000 --> 76:27.000] Yeah, you use procedural stuff for your clouds and your smoke and particle things like that, but, you know, this was Pixar's camp for a long time, where they would create with procedures, analytic procedural textures, and that way lost. [76:27.000 --> 76:45.000] It was really pretty conclusive that nobody wants to do that; they want to throw 20 layers of effectively painting on top of things. And you can still come up with use cases for it, but it adds a lot of complexity for, you know, for a win that, outside of poster-child cases, just isn't there. [76:45.000 --> 76:57.000] So, for your offline rendering, have you ever considered using progressive photon mapping techniques, and have you ever had a chance to talk with Henrik Wann Jensen about any of that? [76:57.000 --> 77:14.000] So, I wrote a photon mapping version for our system, and there's a really interesting aspect to this. A fundamental aspect of global illumination is that there's no difference between a light emitter and a light reflector. [77:14.000 --> 77:36.000] So, I'm saying the photons that come off of this surface are just as good as the photons that come off of that light, and when you calculate through, when you make a photon map for something, you figure out how many photons you're going to send into the world, you create a map of them, and you use that as an accelerator for determining your global illumination solution for each point. [77:36.000 --> 77:47.000] So, what I ran into was, while that works fine for a single sort of character of a scene... for an indoor scene, I found photon mapping to be pretty effective in a lot of ways.
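(A bare-bones illustration of the photon map idea being described: scatter a fixed budget of photons from an emitter onto a floor plane, then estimate irradiance at a point by gathering the photons that landed within a small radius. This sketches the general technique with an invented scene, not the version written for Tech 5.)

    #include <cstdio>
    #include <cmath>
    #include <random>
    #include <vector>

    struct Photon { double x, y, power; };
    const double PI = 3.14159265358979;

    int main() {
        std::mt19937 rng(99);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        // Emit a fixed budget of photons from a small light hanging over a floor plane;
        // each photon carries an equal share of the light's power, and more of them
        // land directly below the light than out toward the edges.
        const int emitted = 200000;
        const double lightPower = 100.0;                       // illustrative units
        std::vector<Photon> map;
        map.reserve(emitted);
        for (int i = 0; i < emitted; ++i) {
            double r = u(rng) * 2.0;                           // landing radius on the floor
            double a = 2.0 * PI * u(rng);
            map.push_back({r * std::cos(a), r * std::sin(a), lightPower / emitted});
        }

        // Irradiance estimate at a query point: gather the power of the photons that
        // landed within 'radius' and divide by the disc area they were gathered over.
        // The stored map acts as the accelerator for the global illumination lookup.
        auto estimate = [&](double qx, double qy, double radius) {
            double sum = 0.0;
            for (const Photon& p : map) {
                double dx = p.x - qx, dy = p.y - qy;
                if (dx * dx + dy * dy < radius * radius) sum += p.power;
            }
            return sum / (PI * radius * radius);
        };

        std::printf("irradiance under the light ~ %.2f\n", estimate(0.0, 0.0, 0.2));
        std::printf("irradiance near the edge   ~ %.2f\n", estimate(1.8, 0.0, 0.2));
    }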
[77:47.000 --> 78:05.000] I mean, you still have all the problems of where you wind up setting things, bleed-throughs in some cases, and they're manageable problems; but when I ran some numbers, I realized that if you're calculating an outdoor area, the amount of light that falls on, like, one eight-and-a-half-by-eleven sheet of paper just held out in the sun... [78:05.000 --> 78:21.000] all of a sudden, that surface has all of the photons, the same amount of photons, that come out of a hundred-watt incandescent light bulb. And you start saying, well, we have acres and acres of surfaces out here, and of course we're scaling everything down so it still fits. [78:21.000 --> 78:26.000] I completely did not get to any of my notes on output monitors, gamma correction, all that stuff. [78:26.000 --> 78:44.000] I mean, we have all these hacks to kind of normalize it, but I found that in a situation where you had a bright outdoor area and then a dimmer indoor area, you had to have so many photons in the outside to make the dim one come out reasonably that it became pretty prohibitive. [78:44.000 --> 78:59.000] The other reason that we don't do photon maps is that it requires a sequencing, where the nice thing about distributed ray tracing and path tracing in its purest form is that it's completely embarrassingly parallel. [78:59.000 --> 79:13.000] Any surface can be done at any time, because we run on multiple threads, you know, multi-core processors and multiple systems in a cluster; and if you want to do something with an intermediate step, like a photon map, you have to build the photon map in some parallel way, and then try to get it out to everything. [79:13.000 --> 79:21.000] So, we went to a completely separable solution. [79:42.000 --> 80:09.000] A lot of problems stop happening. It was interesting implementing the photon map and going through a few of the cases, and it's certainly a valid direction right now, but I think that in a lot of cases, the necessity to generate that ahead of time is a little bit of a hazard for implementation in a lot of parallel cases; but running on a single system, if you know you're going to plow through it all there, it's got a lot of benefits. [80:09.000 --> 80:13.000] It just hurts a little more on a cluster. [80:13.000 --> 80:25.000] Hi, so you talked a lot about the geometry and the ray tracing, all that sort of stuff. I was just curious if you could talk about how you manage the light representations, specifically things like fluorescents and that sort of stuff. [80:25.000 --> 80:32.000] So, that's another one of my topics that was on my list that I didn't have time to go through. [80:32.000 --> 80:44.000] So, again, with the classical computer graphics light, you wind up with three models of light. You've got a point light, a spotlight, a parallel light, and those are our sort of baseline lights in the editor. [80:44.000 --> 81:12.000] We sort of upgrade the point lights by giving them more of an area radius, so we can get the soft shadows and so we can do the distributed ray tracing to that. The biggest problem, though, is that all of our lights are completely physically implausible, because they're physically bounded, with the exception of a parallel light.
And some of this is history: as we go from Quake 1 all the way up through, especially in Doom 3, we built all of our lights out of textures, [81:12.000 --> 81:39.000] because Doom 3 was all dynamic, so we multiplied two textures together, where you would have a projection texture and a falloff texture. So they occupied this bounded physical space in the world, which is great for culling reasons, where you can say, "Alright, in Doom 3 we tried to say no more than three lights hitting a surface, because it was a linear cost: every light cost more on that surface." So we wound up with these lights that were very physically implausible. [81:39.000 --> 82:05.000] So, while, if you're multiplying these two textures together, you can make a Gaussian falloff light, which is a pleasant light to work with that is radially symmetric, most of the lights in the game wound up being our square light, which is a light that goes almost to the outside edges of this texture, just fading a little bit in one direction and then fading a little bit in the other direction, so we could get about as much light as we could into the world for minimal fragment cost. [82:06.000 --> 82:32.000] So, unfortunately, we kept those through Rage as most of our primary light style, and some of our very best artists love this, because it gave them total control in painting with light. They would be able to say, "I want this area a little bit brighter here, so I'll use this different texture instead of the standard one, move this around, stretch it so it just barely goes below the floor, but it has no falloff, so it's going to throw all the light into it." [82:33.000 --> 82:45.000] That is largely the type of artistic wizardry that we need to evolve past, because you will never be able to take light emitters like that and make them more real, because the light's not real. [82:45.000 --> 82:57.000] You can even have completely real materials and you can be doing it with path tracing, but if your light is only coming from these things that do not resemble real lights, then it's never going to be bought off as real. [82:57.000 --> 83:16.000] Now, several years ago, I made a premature, evidently, push towards physically based lighting, where I was trying to set all of our lights up using IES light profiles, which are these actual light profiles where the people that make light bulbs go and measure all of these things. [83:16.000 --> 83:38.000] You can get the light that's coming out in all these different areas, different sample points coming out of it. And that's really useful, although it's important to note that there are simplifications in here. Just like just because you see an equation doesn't mean it's true, just because you see a table of data doesn't mean it's true either, because you have simplifications, like an IES spec for three fluorescent bulbs in a fixture. [83:38.000 --> 83:59.000] And yes, you are sampling what the light is at all of these points, but really you should be getting three shadows from it rather than one from an area light source. So there are simplifications built into that. But still, we are not currently using that. The main reason why it fell through when I pushed for it originally was, it comes back to the performance. [83:59.000 --> 84:25.000] To keep the build times at, you know, a level that they were familiar with... because you wind up with these lights now extending infinitely, as proper inverse-square falloff lights.
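(A rough sketch contrasting the bounded, texture-defined falloff described above with a proper inverse-square light; only the 1D falloff axis is shown, the 2D projection texture it gets multiplied by is omitted, and all values are made up. This illustrates the technique described, not the actual Doom 3 or Rage light code.)

    #include <cstdio>
    #include <algorithm>

    // A 1D falloff texture in the style described: the light is fully bounded,
    // staying bright almost to the edge of its volume and then dying to zero,
    // so it never extends past the space it occupies.
    static const double falloffTex[8] = {1.0, 1.0, 0.95, 0.9, 0.85, 0.7, 0.3, 0.0};

    static double sampleFalloff(double d) {               // d in [0,1] across the light volume
        int i = std::min(7, std::max(0, int(d * 8.0)));
        return falloffTex[i];
    }

    int main() {
        // Compare the bounded texture light against an inverse-square light that is
        // normalized to the same brightness at the nearest sample point.
        const double volumeSize = 4.0;                     // meters the texture light spans
        for (double dist = 0.5; dist <= 6.0; dist += 0.5) {
            double textureLight  = dist < volumeSize ? sampleFalloff(dist / volumeSize) : 0.0;
            double inverseSquare = 0.25 / (dist * dist);   // == 1.0 at 0.5 m, never reaches zero
            std::printf("d=%4.1f m  texture light %.2f   inverse-square %.3f\n",
                        dist, textureLight, inverseSquare);
        }
    }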
So if you've got a level with a thousand lights in it, then in theory you're doing a thousand traces at a minimum just to see whether any light gets there. So you cut this down to some rational number of samples, and what that means is there's lots of noise in the images. [84:25.000 --> 84:39.000] And one of the battles that's been particularly hard for all of the Tech 5 stuff is trying to have a situation where the designers and artists are willing to work with an approximation of what, you know, the final output is. [84:39.000 --> 84:49.000] And it is, you know, it is just very tempting to say, well, I always want to look at what the final output is, which means that everything is always a production-quality render, which means it always takes forever. [84:49.000 --> 85:00.000] And I keep hoping that there will be more of an acceptance of, well, this is roughly what it's like, I can still figure out where my gameplay and rough lighting and everything is. But that's a battle we fight. [85:00.000 --> 85:04.000] [inaudible] [85:04.000 --> 85:18.000] Hi John. Taking quality materials there for granted, I'm curious what additional visual fidelity you gain by ray tracing voxel octrees, and then what visual sacrifices and what sacrifices in terms of performance you have to make. [85:18.000 --> 85:35.000] So the question of what you're ray tracing against is sort of orthogonal to the method. I mean, you can ray trace against lots of different representations, and there was lots of work that went into directly ray tracing against curved surfaces, and certainly spheres and some of the easy cases. [85:35.000 --> 85:47.000] And for years, I did think that ray tracing into some form of voxel space would be an obvious thing to do, because it seems that there's, you know, there are wins: it's certainly far simpler, [85:47.000 --> 86:10.000] you have a more regular data structure, there's all these things. But it doesn't seem to be panning out that way. It does seem to be that all ray tracing will be against triangle meshes that you decimate down to. And there are certainly advantages to the comfortable tool paths and everything there. It seems that's the way that history is flowing; that's probably the way it's going to work out when we are ray tracing everything. [86:10.000 --> 86:25.000] You talked a little bit yesterday about the motion blur that happens on OLED and LCD screens as you move your head very quickly. Do you have any more thoughts on whether that's a solvable problem for this generation of VR, or is that going to take a little longer? [86:25.000 --> 86:35.000] So we have an existence proof of something that's good enough. I mean, what we've now put together by hacking up the Samsung displays is good enough. [86:35.000 --> 86:49.000] If we can get 90 hertz displays that are low persistence, that will do; 120 would probably be better, but, like, my interlaced scheme may be a good thing to kind of add on top of that if it can be done. [86:49.000 --> 87:04.000] But I think there's a good prospect, and the fallback plan is you do the backlight flashing. So it's important, and I'm betting that it will be solved for consumer-grade VR in the not-too-distant future. [87:04.000 --> 87:11.000] But it's not there right now outside of prototype hardware. [87:11.000 --> 87:29.000] Thanks for the talk.
A few years ago I read an MIT paper explaining how to compute soft shadows, and what they did was interpolate linearly between the parts that were lit and the parts that were not lit. [87:29.000 --> 87:38.000] Is that the approach it takes? Is it a linear map or a nonlinear map between the umbra and the penumbra? And I was just hoping you could explain in detail how you calculate the intermediate levels. [87:38.000 --> 87:46.000] Okay, so that does fall into the category of a large body of work of approximations that is pretty much gone and forgotten right now. [87:46.000 --> 87:52.000] Our soft shadows are done by sending a certain number of samples; it's 16 by default. [87:52.000 --> 88:01.000] You send 16 samples to different points on the light that are randomly distributed, and the density of the shadow is just the fraction of them that get through. [88:01.000 --> 88:06.000] So you can crank that number up in some cases for some of the big, broad area emitters. [88:06.000 --> 88:12.000] And there you'd want it to be 256 samples so you could get a full range of even more gradations of brightness. [88:12.000 --> 88:21.000] But we get by with 16. There's an approximation that I did on that where, instead of randomly sending to points across the surface... [88:21.000 --> 88:34.000] all the points across the area of the light source, which might be the default, we send them across the circumference of the light, which, you know, in theory can sometimes make it look a square factor better, but it looks bad at the edges. [88:34.000 --> 88:40.000] So we're still trying different things on there. But the bottom line is, it's just however many samples you throw, that's the fraction that comes out. [88:40.000 --> 88:50.000] Things like that go back through the history of graphics for 40 years. There's a ton of things that were somewhat complicated analytic solutions that have just, over and over, fallen to brute force. [88:50.000 --> 89:06.000] And I think that all of these things will as well; you know, when we are tracing billions of rays per frame, that's when we'll be using ray tracing. I don't think there's going to be too many intermediate steps to that. [89:06.000 --> 89:30.000] So I know that in AutoCAD and other engineering programs, there are catalogs of different types of materials, so that you can test the effects of different things on the structure, so on and so forth, with the different kinds of materials. [89:30.000 --> 89:41.000] So my question for you is, with trying to make your artists use more accurate materials, are you trying to create a catalog of textures, or... [89:41.000 --> 89:55.000] Yeah, so right now we are very much trying to have our master swatch list of, you know... there are the clear things about, okay, if you're metal, you're in this range, if you're paint, you're in this range, if you're wood, you're in this range, asphalt... [89:55.000 --> 90:05.000] Having all of this represented as: these are the valid ranges of diffuse, specular, and roughness in the maps that you're going to have. [90:05.000 --> 90:16.000] So we're still working through all of that. And in terms of material libraries, it's a little frustrating when you look at whether it's 3D Studio or Modo or V-Ray or whatever. [90:16.000 --> 90:28.000] The material lists are usually an ad hoc collection that's created over a couple decades of company lifespan, and they're usually not a complete, consistent, cohesive, physically based set of materials.
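(That fraction-of-samples shadow can be written down in a few lines; here is a generic sketch of sending 16 rays to random points on a disc-shaped emitter and taking the fraction that get through. The occluder test is a stand-in that simply blocks half the light.)

    #include <cstdio>
    #include <cmath>
    #include <random>

    // Stand-in occluder test: a wall covers half of the light as seen from this
    // point, so roughly half the shadow rays should be blocked.
    static bool blockedByOccluder(double lightX, double /*lightY*/) {
        return lightX < 0.0;
    }

    int main() {
        std::mt19937 rng(5);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const double lightRadius = 0.5;
        const int samples = 16;                       // the default sample count discussed above
        int through = 0;
        for (int i = 0; i < samples; ++i) {
            // Pick a random point distributed over the area of the disc emitter.
            double r = lightRadius * std::sqrt(u(rng));
            double a = 2.0 * 3.14159265358979 * u(rng);
            double lx = r * std::cos(a), ly = r * std::sin(a);
            if (!blockedByOccluder(lx, ly)) ++through;
        }
        // The shadow density is just the fraction of the samples that reach the light;
        // with 16 samples you can only ever get 16 distinct bands of brightness.
        std::printf("visible fraction = %d/%d = %.2f\n", through, samples,
                    double(through) / samples);
    }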
[90:28.000 --> 90:40.000] We spent a little bit of time trying to backtrack values from one of the material library sets, the things that we could use, and it wasn't completely clear that they were coming out in the right ranges. [90:40.000 --> 91:09.000] So we're building up our own set, and there are lots of studios doing that. Online there are sets of BRDF measurements for a lot of materials that would be good to start drawing some of the materials from. But we're still looking for, okay, what are the diffuse, specular, and roughness values, rather than this full table of data. But eventually, I expect that we all will be using this data scanned from the real world, because over and over, that's what I'm interested in. [91:09.000 --> 91:13.000] Thank you. [91:13.000 --> 91:14.000] All right. [91:14.000 --> 91:16.000] Looks like that's it, John. Thank you. Thanks. [91:16.000 --> 91:17.000] On time. [91:17.000 --> 91:19.000] [applause] [91:19.000 --> 92:01.000] [music] main: load time = 58.44 ms main: mel time = 11318.92 ms main: sample time = 1261.97 ms main: encode time = 37654.71 ms / 9413.68 ms per layer main: decode time = 65975.34 ms main: total time = 116659.55 ms