On 11/7/2018 4:14 PM, John Ralls wrote:
> Not my understanding of “heavy database updates” (which would be
> something like > 100K TPS), but OK.

Yeah, I may not be using the right terminology, but please take my suggestions as generally correct even if the vocabulary is off. In the database arena I'm more of a semi-sophisticated user than a domain expert. I assume you are a domain expert, at least relative to my database experience, so this is an opportunity for me to learn. What may be confusing is that I do have a lot of general engineering experience.
In this case I was considering the handling of two asynchronous events (a second or later user) arriving at the same time, not raw processing throughput, which is a different kind of performance. Even the "sync data" proposal shouldn't stress today's multi-core, gigabytes-of-memory commodity computers for throughput in a SOHO environment. I'm thinking more of race conditions caused by asynchronous (two or more user) events. Still thinking SOHO.

On 11/7/2018 4:16 PM, John Ralls wrote:
> That’s a lot more complex than any backend I’d want to implement, but
> fortunately GnuCash’s backends are plugins so you’re welcome to write
> a separate one.

Fair enough. Hopefully the architectural framework design decisions can support this sort of future "plugin" expansion. Guess I'd better look for the "plugin support documentation" and see what I can figure out.
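To make the race-condition concern above concrete, here is a minimal sketch of the kind of collision I mean when two users' events land on the same record at nearly the same time, and one common way a backend might detect it (an optimistic version check, standing in for an SQL "UPDATE ... WHERE version = :expected"). This is not GnuCash code; the record layout, names, and retry policy are hypothetical.

    // Two "user events" update the same record concurrently; a version
    // counter detects the lost-update race and forces a re-read + retry.
    #include <cstdint>
    #include <iostream>
    #include <mutex>
    #include <thread>

    struct Record {
        int64_t  balance = 0;   // the shared value both users modify
        uint64_t version = 0;   // bumped on every successful commit
    };

    class Store {
    public:
        // Snapshot read: what a backend would hand to a client session.
        Record read() {
            std::lock_guard<std::mutex> lock(m_);
            return rec_;
        }
        // Commit succeeds only if nobody else committed since our read.
        bool try_commit(uint64_t expected_version, int64_t new_balance) {
            std::lock_guard<std::mutex> lock(m_);
            if (rec_.version != expected_version)
                return false;            // lost the race: caller must re-read
            rec_.balance = new_balance;
            ++rec_.version;
            return true;
        }
    private:
        std::mutex m_;
        Record rec_;
    };

    // One asynchronous user event: add `amount`, retrying if another
    // user's commit landed between our read and our write.
    void post_amount(Store& store, int64_t amount) {
        for (;;) {
            Record snap = store.read();
            if (store.try_commit(snap.version, snap.balance + amount))
                return;
            // Conflict detected: loop re-reads and tries again.
        }
    }

    int main() {
        Store store;
        std::thread user_a(post_amount, std::ref(store), 100);
        std::thread user_b(post_amount, std::ref(store), -40);
        user_a.join();
        user_b.join();
        Record final = store.read();
        std::cout << "balance=" << final.balance     // 60: both edits kept
                  << " version=" << final.version << '\n';  // 2
        return 0;
    }

Without the version check, whichever user committed last would silently overwrite the other's change; with it, the slower event retries against fresh data, which is the behaviour a multi-user backend would want.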
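On the plugin point: the following is only a generic sketch of what "backends are plugins" can mean architecturally, so it's clear what a separate backend would have to slot into. The class and function names are hypothetical illustration and are NOT GnuCash's actual backend API (which lives in its QOF layer); the shape shown here is just the usual scheme-to-factory registration pattern.

    // A new storage backend registers itself against a URI scheme; the
    // core only ever talks to the abstract interface.
    #include <functional>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    class Backend {                    // what every storage plugin implements
    public:
        virtual ~Backend() = default;
        virtual void load(const std::string& uri) = 0;
        virtual void save(const std::string& uri) = 0;
    };

    class BackendRegistry {            // core side: scheme -> factory
    public:
        using Factory = std::function<std::unique_ptr<Backend>()>;
        void register_backend(const std::string& scheme, Factory f) {
            factories_[scheme] = std::move(f);
        }
        std::unique_ptr<Backend> create(const std::string& scheme) const {
            auto it = factories_.find(scheme);
            return it == factories_.end() ? nullptr : it->second();
        }
    private:
        std::map<std::string, Factory> factories_;
    };

    // A third-party backend (e.g. the multi-user "sync" idea in this
    // thread) would live in its own module and only touch the registry.
    class SyncBackend : public Backend {
    public:
        void load(const std::string& uri) override {
            std::cout << "sync-load from " << uri << '\n';
        }
        void save(const std::string& uri) override {
            std::cout << "sync-save to " << uri << '\n';
        }
    };

    int main() {
        BackendRegistry registry;
        registry.register_backend("sync", []() -> std::unique_ptr<Backend> {
            return std::make_unique<SyncBackend>();
        });

        auto backend = registry.create("sync");
        if (backend) backend->save("sync://example.local/books/demo");
        return 0;
    }

If the real backend interface is roughly this shape, then a more complex multi-user backend can be developed and shipped separately without the core application needing any changes, which is the expansion path being hoped for above.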