Hi James,

If I understand correctly, it is simpler than you might think.

The main purpose of these overviews is to get a quicker/better response at
higher (overview) levels. If you have one giant GeoTIFF (meaning a LOT of
cells), you would otherwise have to open and read ALL cell values, even if
you only want a small 100x100 pixel overview image, because you have to
down-sample anyway to get the general picture. Down-sampling images is a
very CPU-intensive task, which can be done in different ways. The
best-looking methods are also the most 'expensive'.

The overviews are more like a good-looking cache (already down-sampled)
that is saved with the data, so you don't have to go through the full
dataset the next time you only need that 100x100 pixel image.

In general, the overview/pyramid levels have nothing to do with the zoom
levels of a software client. With 5 or 8 overview levels in your data, a
client can still zoom through the map image in 10 or 12 or 3 steps.

It will just pick the most efficient available image/overview.

So say a software client wants to show a general overview which (if you
were working with printed maps) would, on zoom scale, be closest to the 8x
pyramid level. It would:
- in case there IS an 8x pyramid level (your [2,4,8,16] case): just pick
the 8x version to generate the image for you.
- in case there is NO 8x level (image): pick the level that takes the
least effort to generate the 8x zoom image, with the extra constraint that
it never has to up-sample the image. So in this case I think it will not
pick the 16x level, because that would mean it has to blow up pixels to fit.
The same goes for (viewing) zoom levels NOT exactly on an overview level:
it will take the (overview) data that is the least work.
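The selection rule above can be sketched in Python. This is a simplified model of the idea, not GDAL's actual code, and the function name is mine:

```python
def pick_overview(available_factors, requested_factor):
    """Pick which overview to read for a requested down-sample factor.

    Simplified model (hypothetical, not GDAL's real implementation):
    use the largest available factor that does not exceed the requested
    one, so we never have to up-sample. Returning 1 means reading the
    full-resolution data and down-sampling on the fly.
    """
    candidates = [f for f in available_factors if f <= requested_factor]
    return max(candidates, default=1)

print(pick_overview([2, 4, 8, 16], 8))  # 8 -> exact match, use the 8x level
print(pick_overview([2, 4, 16], 8))     # 4 -> 16x would mean up-sampling
print(pick_overview([16], 8))           # 1 -> full resolution, down-sample from there
```

So with only a [16] overview, a client asking for an 8x view still has to go back to the full-resolution data.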

I realize this is mostly about normal GeoTIFFs, but I think the reasoning
also holds for Cloud Optimized GeoTIFFs.

It's all about 'prerendering', because down-sampling is a CPU-intensive
task. And for prerendered images you can use more expensive resampling
algorithms.

Hope this helps, or else somebody has a better answer :-)

Regards,

Richard Duivenvoorde



On 10/8/23 00:01, James Sammone via gdal-dev wrote:
I'm not sure if this is the best channel to ask this question as it might
be beyond the scope, but I've asked it in a few others and have had no
responses aside from others also being curious.
<https://gis.stackexchange.com/posts/450396/timeline>

I am trying to understand the relationship between overviews and zoom
levels so I know how to make more efficient Cloud Optimized GeoTIFFs. Using
gdaladdo or gdal.BuildOverviews(), I can create overviews at [2,4,8,16] or
at just [16]. From my understanding, this means the size is being divided
by those values to provide downsample arrays of the original source. In the
first example [2,4,8,16], I've created 4 separate overview arrays into the
GeoTIFF that are 2x, 4x, 8x, and 16x downsampled. And in the second example
using only [16], I've built one overview array into the GeoTIFF that is 16x
downsampled.

How can I understand how these overviews are applied when it comes to zoom
levels? Does the 16x downsample appear sooner in the second example when
zooming out than for the first example due to being first in order? Or do
the 16x downsamples appear at the same zoom level for both cases but the
second example has additional 2x, 4x, and 8x downsamples that also appear
before getting there?

Thanks for any insight into this anyone can provide. Despite using
overviews all the time, I've struggled with this for a while and had
largely resigned myself to not understanding it.

Best,

James


_______________________________________________
gdal-dev mailing list
gdal-dev@lists.osgeo.org
https://lists.osgeo.org/mailman/listinfo/gdal-dev
