Timothy Nelson <wayl...@wayland.id.au> writes:
> Great! I'm not surprised it's been around a long time -- I didn't think I
> could be the only one to think of it.
> Thanks for the heads-up on Postgres-XL -- I'd missed that one somehow.
FWIW, we also have some in-core history with passing plans around,
for parallel-query workers.  The things I'd take away from that are:

1. It's expensive.  In the parallel-query case it's hard to tease apart
the cost of passing across a plan from the cost of starting a worker,
but it's certainly high.  You would need a way of only invoking this
mechanism for expensive-anyway queries, which puts a hole in the idea
you seemed to have of a hard separation between parse/plan processes
and execute processes.

2. Constant-folding at plan time is another reason you can't have a
hard separation: the planner might run arbitrary user-defined code
(see the example below).

3. Locking is a pain.  In the Postgres architecture, table locks
acquired during parse/plan have to be held through to execution,
or concurrent DDL might invalidate your plan out from under you.
We finesse that in the parallel-query case by expecting the leader
process to keep hold of all the needed locks, and then having some
kluges that allow child workers to acquire the same locks without
blocking.  (The workers perhaps don't really need those locks, but
acquiring them avoids the need to poke holes in various
you-must-have-a-lock-to-do-this sanity checks.)  I fear this area
might be a great deal harder if you're trying to pass plans from a
parse/plan process to an arms-length execute process.

4. Sharing execute workers between sessions (which I think was an
implicit part of your idea) is hard; hard enough that we haven't
even tried.  There's too much context-sensitive state in a backend,
and too little ability to isolate which things depend on the current
user, current database, etc.  Probably this could be cleaned up with
enough work, but it'd not be a small project.

			regards, tom lane
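
To make point 2 concrete, here is a minimal sketch (the table t and the
function add_one are hypothetical, not from the mail above): an IMMUTABLE
function applied to constant arguments is pre-evaluated during
constant-folding, so its code runs inside the planner rather than the
executor.

    -- Hypothetical IMMUTABLE user-defined function.
    CREATE FUNCTION add_one(x int) RETURNS int AS $$
    BEGIN
        RETURN x + 1;
    END;
    $$ LANGUAGE plpgsql IMMUTABLE;

    CREATE TABLE t (col int);

    -- add_one() is IMMUTABLE and its argument is a constant, so the
    -- planner evaluates it while folding the WHERE clause:
    EXPLAIN SELECT * FROM t WHERE col = add_one(41);
    -- The plan's filter shows (col = 42): the function body already
    -- ran at plan time, before any executor process was involved.

A process that only plans queries therefore still has to be able to load
and execute user-defined functions, which is why a hard parse/plan vs.
execute split is hard to maintain.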