On Thu, Dec 1, 2011 at 8:46 AM, Maria Ripa <[email protected]> wrote:

>   Hi List,
>
> We have large datasets of points in a database. We need to publish our
> data via WFS in GeoServer. I would like to know what is considered to be
> large from a GeoServer point of view. Is there a breakpoint where the
> number of rows starts to be too high? I have 30 million points, which
> seems to be problematic (300,000 points is no problem, though).
>
>

I've seen installations serving up to 500 million polygons. GeoServer
streams out the results, so its memory footprint stays small regardless
of the dataset size. With data volumes like these, the slow part is
normally the extraction from the database, and that is where the tuning
has to happen: adjust the query planner tunables so the spatial indexes
actually get used, cluster the tables, spread the database itself across
enough machines, and so on.
At that scale there is no single recipe, I think; what needs to be done
depends on the data, the usage patterns, and so on.
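For a PostGIS backend, the table-side tuning mentioned above might look
roughly like the fragment below. This is only a sketch: the table name
"points" and geometry column "geom" are hypothetical placeholders, and
the planner tunable shown is just one example of the knobs you may need
to adjust.

```sql
-- Hypothetical point table "points" with geometry column "geom".

-- 1. Ensure a spatial index exists so bbox queries from WFS can use it:
CREATE INDEX points_geom_idx ON points USING GIST (geom);

-- 2. Physically reorder the table along the spatial index so nearby
--    points land on the same disk pages (re-run after bulk loads):
CLUSTER points USING points_geom_idx;

-- 3. Refresh planner statistics so the query planner actually picks
--    the index:
VACUUM ANALYZE points;

-- 4. Example planner tunable: on fast storage, lowering the cost of
--    random reads makes index scans more attractive to the planner.
--    (Value is illustrative; tune for your hardware.)
SET random_page_cost = 1.1;
```

Whether any of this helps depends on your data distribution and query
patterns, so measure with EXPLAIN ANALYZE on the queries GeoServer
actually issues before and after each change.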

Cheers
Andrea

-- 
-------------------------------------------------------
Ing. Andrea Aime
GeoSolutions S.A.S.
Tech lead

Via Poggio alle Viti 1187
55054  Massarosa (LU)
Italy

phone: +39 0584 962313
fax:      +39 0584 962313

http://www.geo-solutions.it
http://geo-solutions.blogspot.com/
http://www.youtube.com/user/GeoSolutionsIT
http://www.linkedin.com/in/andreaaime
http://twitter.com/geowolf

-------------------------------------------------------
_______________________________________________
Geoserver-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/geoserver-users