On Wednesday, April 25, 2018 at 6:22:39 AM UTC-5, Thomas Müller wrote:

>>> Another use case could be to have an async puppetdb connection from the
>>> second datacenter. If the connection between the datacenters is not stable
>>> enough to use a single puppetdb, I would need to add a puppetdb per DC. Then
>>> I would also want to sync data to the central puppetdb instance.
>>
>> Is that an *actual* use case or a hypothetical one?
>
> I'm just thinking about what my options are if the datacenter link is not
> stable enough. I'm not investing time to create a solution yet.
>
> I've thought a bit longer about "importing reports": it's not just
> importing reports, it's also importing facts and importing catalogs to the
> central DB. Overall I think this would really require a lot of time to
> implement the tooling. Maybe it would be easier to query two PuppetDBs
> instead of syncing everything into one.
>
> But maybe it all works out fine and no hacking will be necessary. :)

There are Postgres-level tools for database federation and synchronization. As I already suggested, something along those lines is probably worth your consideration as a mechanism for the actual data movement. The other question to consider is how to structure the data so that it even makes sense to combine them at all; at a bare minimum, your various participating masters should rely on a common CA. To a good approximation, the CA identity is the site identity, and it does not make much sense to combine data from different sites in the same database.
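To illustrate the "query two PuppetDBs" alternative mentioned above, here is a minimal sketch that fetches nodes from each datacenter's PuppetDB over the v4 query API and merges the results client-side. The hostnames and the merge policy (keep the entry with the newest report when a certname appears in both) are assumptions for illustration, not a recommendation; note that certname collisions are only meaningful if both sites share a CA, as discussed above.

```python
# Hedged sketch: querying two PuppetDB instances instead of syncing into one.
# The URLs below are hypothetical; the merge policy is one possible choice.
import json
from urllib.request import urlopen

PUPPETDB_URLS = [
    "http://puppetdb-dc1.example.com:8080",  # hypothetical DC1 instance
    "http://puppetdb-dc2.example.com:8080",  # hypothetical DC2 instance
]


def query_nodes(base_url):
    """Fetch all nodes from one PuppetDB via its v4 query API."""
    with urlopen(base_url + "/pdb/query/v4/nodes") as resp:
        return json.load(resp)


def merge_nodes(*node_lists):
    """Combine node lists from several PuppetDBs.

    When the same certname appears in more than one list (only sensible
    if both sites share a CA), keep the entry with the most recent
    report_timestamp.
    """
    merged = {}
    for nodes in node_lists:
        for node in nodes:
            current = merged.get(node["certname"])
            if current is None or (node.get("report_timestamp") or "") > (
                current.get("report_timestamp") or ""
            ):
                merged[node["certname"]] = node
    return list(merged.values())


if __name__ == "__main__":
    all_nodes = merge_nodes(*(query_nodes(url) for url in PUPPETDB_URLS))
    print(len(all_nodes), "nodes across both datacenters")
```

This avoids building any sync tooling at the cost of doing the federation at query time, and it degrades gracefully when the inter-DC link is down: each PuppetDB still answers for its own site.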
John