Slide 11 of 51
csciutto

Are bunnies a consistent motif in graphics / a part of computer graphics history? I just finished reading Rainbows End by Vernor Vinge, and one of the main characters is a graphical bunny...

kencheng

Why do we need to take small steps and recompute the gradient? It seems to me that the gradient should be the same (barring numerical precision) at every step, since we're using the distance function over Euclidean geometry:

Suppose we knew the closest point on the surface. Then the line segment between that point and the query point is the shortest path to the surface. If we head in any other direction, we would be straying from that shortest path.
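For what it's worth, this claim does hold for an exact signed distance function: along the segment from a query point to its closest surface point, the gradient is the same unit vector everywhere. A toy numeric check (the circle SDF and sample points are illustrative, not from the slide):

```python
import math

def sdf(x, y):
    # exact signed distance to the unit circle centered at the origin
    return math.hypot(x, y) - 1.0

def grad(x, y):
    # analytic gradient of the circle SDF: the unit vector away from the center
    r = math.hypot(x, y)
    return x / r, y / r

# Query point (3, 4); its closest surface point lies radially inward at (0.6, 0.8).
qx, qy = 3.0, 4.0

# Sample the gradient at several points along the segment toward the surface:
# it is (0.6, 0.8) at every one of them.
grads = [grad(qx * t, qy * t) for t in (1.0, 0.7, 0.4, 0.21)]
```

The caveat raised later in the thread still applies: this constancy is a property of the *exact* distance function, not of a numerical estimate of its gradient.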

jackk314

@kencheng the goal isn't to find the shortest path, though: it's to approximately find the closest point.

username

I am a bit confused about the "Better yet". Do we still need the idea above to compute the closest point for each grid cell? So the only difference is that we precompute them and store them for future reference?

tbell

This only approximates the answer, yes? It seems like it wouldn't work well for very irregular distance functions whose gradients vary rapidly.

kmc

@csciutto the bunny is part of graphics lore! https://www.cc.gatech.edu/~turk/bunny/bunny.html

yilong_new

@username If we store the grid, then we can do something like this:

for surface_cell in surface_cells: 
    for cell in all_cells: 
        if dist(nearest[cell], cell) > dist(surface_cell, cell): 
            nearest[cell] = surface_cell

(This can be optimized further; for example, each cell could search only its neighbors.)

Since there are usually far fewer cells on the implicit surface than in the full grid, this can be more efficient in some cases.
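A minimal runnable version of this brute-force precompute, sketched in Python (the grid size and the circle-like "surface cells" are made-up illustration data, not from the lecture):

```python
from itertools import product

# Toy setup: an 8x8 grid of cells; the "surface cells" are the cells an
# implicit circle of radius 3 around (3.5, 3.5) roughly passes through.
N = 8
all_cells = list(product(range(N), range(N)))
surface_cells = [c for c in all_cells
                 if abs((c[0] - 3.5) ** 2 + (c[1] - 3.5) ** 2 - 9) < 2]

def dist2(a, b):
    # squared Euclidean distance between two cells (enough for comparisons)
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# For every cell in the grid, record the nearest cell lying on the surface.
nearest = {}
for surface_cell in surface_cells:
    for cell in all_cells:
        if cell not in nearest or dist2(surface_cell, cell) < dist2(nearest[cell], cell):
            nearest[cell] = surface_cell
```

This is O(|surface| * |grid|); the neighbor-propagation idea mentioned above (or a jump-flooding-style pass) would cut that down further.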

kencheng

@jackk314 That's not what I was saying. The gradient should point in the direction of the shortest path to the closest point on the surface. This gives us a line equation that we can use to generate points to test in the distance function. There's no need to recompute gradients.

But I see that if we can only estimate the gradient numerically (e.g., by finite differences), then yeah, we'd need to recompute it at each step to avoid drift.
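To make that concrete, here is a sketch of the step-and-recompute loop, using a central finite difference to estimate the gradient of a toy circle SDF (all names, constants, and the step rule are illustrative assumptions, not the slide's algorithm):

```python
import math

def sdf(x, y):
    # toy signed distance to a circle of radius 1 centered at the origin
    return math.hypot(x, y) - 1.0

def grad(x, y, h=1e-4):
    # central finite-difference estimate of the SDF gradient
    gx = (sdf(x + h, y) - sdf(x - h, y)) / (2 * h)
    gy = (sdf(x, y + h) - sdf(x, y - h)) / (2 * h)
    return gx, gy

# Walk from a query point toward the surface, re-estimating the gradient
# at each step; because the estimate carries error, refreshing it keeps
# the accumulated error from steering the path off course.
x, y = 2.0, 1.5
for _ in range(50):
    d = sdf(x, y)
    if abs(d) < 1e-6:
        break
    gx, gy = grad(x, y)
    x -= d * gx
    y -= d * gy
```

With an exact SDF and exact gradient this would land on the surface in one step, which is kencheng's point; the finite-difference version needs the loop.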