karl3@writeme.com wrote:
> traffick slave boss draws a sphere
> 
> traffick slave boss: "i want to use uhhhh lightcasting ...? i want to use ... 
> uh ...
> 
> uhh {let's use the new kind of sphere drawing ! where it traces the light 
> from the lightsource to the eye rather than vice versa, solving all the light 
> dynamics of the environment via informed subdivision! and using the new 
> surface equation where ...

brdfs and rendering equations are a little complicated >( they're all 
formalized into integrals.

but basically the BRDF is the scale factor you apply to the cosine law, as a 
function of the angles of incidence and emission.

it's described as like a solid angle integral or such, but it's the same old 
thing where the energy at a point is a function of the angle it came from and 
its distance. the solid angle integral formalizes the inverse square law, 
partly.
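
to pin the words down, here's a minimal python sketch of "BRDF times cosine 
law times inverse square" for a single point light. the names are made up, 
and brdf_value stands in for evaluating the real BRDF at the 
incidence/emission angle pair:

import math

def reflected_radiance(light_power, light_pos, point, normal, brdf_value):
    # an isotropic point light has intensity power / 4pi; dividing by r^2
    # and applying the cosine gives the irradiance at the point, and the
    # BRDF value scales that into outgoing radiance
    to_light = [l - p for l, p in zip(light_pos, point)]
    r2 = sum(c * c for c in to_light)
    r = math.sqrt(r2)
    wi = [c / r for c in to_light]        # unit direction toward the light
    cos_theta = max(0.0, sum(n * w for n, w in zip(normal, wi)))
    return (light_power / (4 * math.pi)) / r2 * cos_theta * brdf_value

for a lambertian surface brdf_value is just albedo / pi, which is why the 
plain cosine law gets you most of diffuse shading.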

so !

if we were to draw a sphere using BRDFs and path tracing that were actually 
accurate, like in the old heaven-7 days when people bandied adaptive 
interpolation code around .... we could try to improve path tracing and make 
it realtime and accurate.

but the math is confusing to me with the integrals now

rather than modeling the scene starting with the camera, you'd model the scene 
starting with the light sources.

each object would have its bounds projected onto the light source, and the 
angular surface of each light source would be divided into regions based on 
which object it hit.
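
for spheres that projection is exact and cheap: from the light's position a 
sphere subtends a circular cap on the light's direction sphere. a sketch (my 
names):

import math

def sphere_cap(light_pos, center, radius):
    # a sphere of radius R at distance d subtends a cap of half-angle
    # asin(R / d) around the direction from the light to its center
    d_vec = [c - l for c, l in zip(center, light_pos)]
    d = math.sqrt(sum(x * x for x in d_vec))
    axis = [x / d for x in d_vec]
    half_angle = math.asin(min(1.0, radius / d))
    solid_angle = 2 * math.pi * (1 - math.cos(half_angle))
    return axis, half_angle, solid_angle

overlaps between these caps are then exactly what the region subdivision has 
to resolve.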

we could keep these regions as vaguely defined references that could be 
rootfound or whatnot when needed, or we could define the objects in such a way 
that the regions could be precisely known.
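
the lazy "rootfound when needed" version could be as dumb as bisecting a 
boolean predicate along one parameter of a region boundary -- a sketch, 
assuming you can phrase "inside the region" as a callable:

def bisect_boundary(inside, lo, hi, iters=48):
    # assumes inside(lo) is True and inside(hi) is False; narrows in on
    # the parameter value where the region boundary sits
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if inside(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)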

considering the inverse of these regions, we now have a map on every object of 
where it is struck by light coming directly from a light source, and which 
light source is striking it.
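
for a convex object like a sphere that map has a closed-form boundary, the 
terminator; pointwise it's a single dot product (my formulation, and it 
ignores occlusion by other objects):

def directly_lit(point, normal, light_pos):
    # lit iff the surface normal faces the light: n . (light - p) > 0;
    # the set where n . (light - p) == 0 is the terminator circle
    to_light = [l - p for l, p in zip(light_pos, point)]
    return sum(n * t for n, t in zip(normal, to_light)) > 0.0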

this is the simplest phase of the path tracing, because after it, with each 
object's surface subdivided into regions by which combination of light sources 
reaches them, we have something more complicated than the emitted light of a 
light source -- we have the reflected light of a surface

so, the scene is now broken into regions of surfaces that are reflecting light 
from the same sources across them. we now consider the light emission from each 
of these sources, and to skip to the important part, this is the integral of an 
expanding spherical volume across the surface, where some areas of the surface 
are occluded in one way, and some areas of the surface are occluded in another.
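
i can't do the informed subdivision of that integral here, but the integral 
itself can at least be written down and brute-forced as a sanity check. a 
monte carlo sketch of the area form, E = integral of L * cos_e * cos_r / r^2 
dA over the emitting surface, with a constant emitted radiance and no 
third-object occlusion (all names made up):

import math, random

def irradiance_from_sphere(receiver, recv_normal, center, radius,
                           emitted_radiance, samples=4096):
    # samples the emitting sphere uniformly; sample points facing away
    # from the receiver (or vice versa) contribute nothing
    total = 0.0
    area = 4.0 * math.pi * radius * radius
    for _ in range(samples):
        z = random.uniform(-1.0, 1.0)              # uniform point on the
        phi = random.uniform(0.0, 2.0 * math.pi)   # unit sphere
        s = math.sqrt(max(0.0, 1.0 - z * z))
        n_e = (s * math.cos(phi), s * math.sin(phi), z)   # emitter normal
        p_e = tuple(c + radius * n for c, n in zip(center, n_e))
        d = tuple(a - b for a, b in zip(receiver, p_e))
        r2 = sum(x * x for x in d)
        r = math.sqrt(r2)
        w = tuple(x / r for x in d)     # unit direction emitter -> receiver
        cos_e = sum(a * b for a, b in zip(n_e, w))
        cos_r = -sum(a * b for a, b in zip(recv_normal, w))
        if cos_e > 0.0 and cos_r > 0.0:
            total += emitted_radiance * cos_e * cos_r / r2
    return total * area / samples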

the first core challenge here is to
(a) further break the emission from a single surface region into further 
subregions that bound discrete changes in the occlusion it has to handle,
(b) describe the surface emission from these subregions in a consistent and 
applicable way that can be further projected onto further surfaces,
and maybe (c) perform that projection until a description of the system can be 
found that stabilizes when recursed (one concrete shape for this is sketched 
below).

it can likely be described more simply than that :s
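
for (c) specifically, one shape the stabilizing recursion could take is the 
classic radiosity fixed point, B = E + rho * F * B, iterated until it stops 
changing. that's me borrowing a known technique as a stand-in, not the 
subdivision scheme itself:

def solve_transport(emission, rho, F, iters=50):
    # emission: emitted light per region; rho: reflectance per region;
    # F[i][j]: fraction of region j's light reaching region i (assumed
    # precomputed, e.g. from the projections above)
    n = len(emission)
    B = list(emission)
    for _ in range(iters):
        B = [emission[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B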

but that's the first area of big confusion i bump into. however, the 
traditional intro scene is just a sphere and a plane! these are _really simple 
surfaces_. even so, the sphere surface shows a clear initial challenge:
the hemisphere of emission changes across a region on a sphere.

ok, that's not a problem, that's actually _not hard to resolve_ given the other 
things involved in the algorithm idea. there is a region bounding the surface 
of the sphere, and at each point of the region the hemisphere of emission is 
different.
this changing hemisphere of emission is a smoothly changing occlusion across 
the surface. we need a model for that occlusion so it can be projected 
elsewhere.

it kind of feels like a smoothly changing 4-dimensional volume or something 
which is confusing to me.
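
though for a unit sphere at the origin the model collapses nicely: the 
smoothly changing parameter is just the outward normal, which is the surface 
point itself, so the self-occlusion test is one dot product (toy formulation 
of mine):

def sphere_point_sees(p, target):
    # unit sphere at the origin: the outward normal at p is p itself, so
    # p can emit toward target iff target lies above p's tangent plane,
    # i.e. p . (target - p) > 0
    return sum(pi * (ti - pi) for pi, ti in zip(p, target)) > 0.0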

but _basically_ um, there is a parameter to the surface emission calculations 
that smoothly changes, and this parameter can be modeled as a function; we can 
look for a formula for it, as well as formulas for any other parameters that 
hinge on it.
a sphere has well-known short equations, so we can substitute and try to 
solve, or at least describe, any equation that depends on it. we care about 
discrete changes in these equations -- those describe regions that are 
appropriate to treat differently, or discard if they are out of view.
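
as a tiny example of hunting for those discrete changes symbolically (sympy, 
with a parameterization of my own choosing): the cosine term max(0, n . l) 
switches branches exactly where n . l = 0, and on a sphere that boundary 
solves cleanly --

import sympy as sp

u, v = sp.symbols('u v', real=True)                  # latitude / longitude
lx, ly, lz = sp.symbols('l_x l_y l_z', real=True)    # unit vector to light

# outward normal of a unit sphere at the origin
n = sp.Matrix([sp.cos(u) * sp.cos(v), sp.cos(u) * sp.sin(v), sp.sin(u)])

# max(0, n . l) changes form where n . l == 0: the terminator
terminator = sp.Eq(n.dot(sp.Matrix([lx, ly, lz])), 0)

# with the light along +z this reduces to sin(u) == 0, i.e. the equator
print(sp.solve(terminator.subs({lx: 0, ly: 0, lz: 1}), u))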

so let's think about that a little. say we consider the half of a sphere object 
that is toward a light source. light emits from this half as if it, itself, 
were a light source, with a drastically changing strength depending on angle.

so the sphere illuminates everything around it, but casts a shadow away from 
the light that is only as big as its own silhouette.

this is a large and clear illumination, and here we go with the smooth 
occlusion -- any point that is illuminated by the sphere is illuminated only 
by the points on it that are not occluded from that point.

so, the simplest way to consider occlusion is to note the occluding object and 
calculate its bounds at the region of incidence! this is much clearer than 
a 4d volume of information or a field function or whatever, it's just another 
kind of projection of a surface.
of course, it's an integral of projections over points, so it's the same 
problem, but it's more familiar to think about.
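
the pointwise version of "note the occluder and calculate its bounds" is the 
familiar shadow test, one quadratic per point (a sketch; it ignores the 
degenerate case where the start point is inside the occluder):

def sphere_blocks(p, q, center, radius):
    # does the sphere occlude the segment from p to q (surface point to
    # light)? solve |p + t*(q - p) - center|^2 == radius^2 for t
    d = [qi - pi for pi, qi in zip(p, q)]
    m = [pi - ci for pi, ci in zip(p, center)]
    a = sum(x * x for x in d)
    b = 2.0 * sum(x * y for x, y in zip(m, d))
    c = sum(x * x for x in m) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False                        # the line misses the sphere
    t = (-b - disc ** 0.5) / (2.0 * a)      # nearest intersection
    return 0.0 < t < 1.0                    # blocked only within the segment

projecting the whole occluder's silhouette instead of testing points is the 
same quadratic, solved as a boundary rather than per sample.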

and that's basically the sole remaining problem. the light from the sphere will 
strike two things: the viewer's eye or the camera's lens, and the plane.

now that we've hit the camera, we can move toward actually drawing these 
things --
