Hey, we run GeoServer in an Elastic Beanstalk stack (with Docker) and read the
contents of the data directory from S3 into a volume mount when each instance
starts. These instances are immutable, so they don't follow the same pattern
you're talking about, but it means we can manage the data directory as code in
a Git repo, with instance deployment handled by our CI/CD pipeline. This
approach also lets us scale our compute horizontally as load increases.
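Our actual entrypoint is a shell script, but in rough terms it does the
equivalent of this Python/boto3 sketch (bucket name and paths here are made
up, not our real ones):

    import os
    import boto3

    # Hypothetical names -- ours differ, this is just the shape of it.
    BUCKET = "my-geoserver-config"      # S3 bucket holding the data dir
    PREFIX = "data_dir/"                # key prefix for the data directory
    TARGET = "/var/geoserver/data_dir"  # volume mount inside the container

    s3 = boto3.resource("s3")
    bucket = s3.Bucket(BUCKET)

    # Mirror everything under the prefix into the mounted directory.
    for obj in bucket.objects.filter(Prefix=PREFIX):
        rel = obj.key[len(PREFIX):]
        if not rel or rel.endswith("/"):  # skip "folder" placeholder keys
            continue
        dest = os.path.join(TARGET, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        bucket.download_file(obj.key, dest)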

For the record, I've found that m4.large instances with ~5 GB of RAM assigned
to the JVM provide the best bang for the buck compute-wise. We run four of
these behind an ELB at peak times and one during lulls; auto-scaling rules
look after this for us. This is with control-flow capped at 8 concurrent
GetMap requests per instance.
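For anyone who hasn't used the control-flow extension: the cap lives in
controlflow.properties in the data directory, so it versions along with
everything else. Ours boils down to a line like:

    # limit concurrent WMS GetMap requests per instance
    ows.wms.getmap=8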

We also seed our tile cache, zip it up to S3, and read it onto each instance
at launch. Although this slows the launch time to ~10 minutes, it improves
performance significantly after that. We played with the S3 blob store
extension, but our corporate proxy meant that option was not feasible for a
few reasons.
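The launch step is the same idea as the data directory sync, just one big
archive instead of many small objects. Something like this sketch (again
Python/boto3, with made-up bucket and paths):

    import zipfile
    import boto3

    # Hypothetical names again; the archive is produced by our seed job.
    BUCKET = "my-geoserver-config"    # same bucket as the data dir
    KEY = "tile_cache/gwc.zip"        # zipped, pre-seeded GWC cache
    CACHE_DIR = "/var/geoserver/gwc"  # GeoWebCache directory on the instance
    TMP = "/tmp/gwc.zip"

    # Download to disk first; the seeded cache is too big to buffer in memory.
    boto3.client("s3").download_file(BUCKET, KEY, TMP)
    with zipfile.ZipFile(TMP) as archive:
        archive.extractall(CACHE_DIR)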

Happy to share more if this approach is of interest to you.