On 12/21/2015 11:58 AM, Alberto Garcia wrote:
> On Thu 17 Dec 2015 01:50:08 AM CET, John Snow <js...@redhat.com> wrote:
>> In working through a prototype to enable multiple block jobs, a few
>> problem spots in our API compatibility have become apparent.
>>
>> In a nutshell, old Blockjobs rely on the "device" to identify the job,
>> which implies:
>>
>> 1) A job is always attached to a device / the root node of a device
>> 2) There can only be one job per device
>> 3) A job can only affect one device at a time
>> 4) Once created, a job will always be associated with that device.
>>
>> All four of these are wrong and something we need to change, so
>> principally the "device" addressing scheme for Jobs needs to go and we
>> need a new ID addressing scheme instead.
>
> Out of curiosity, do you have specific examples of block jobs that are
> affected by these problems?
>
The motivating problem is multiple block jobs: they're hard to implement
in a generic way (instead of per-job) because the API is not well suited
to it. We need to refer to jobs by ID instead of "device", and while
we're at it, Jeff Cody is working on a more universal, fine-grained op
blocker permission system (see his RFC discussion thread for more
details). The two can be co-developed to form a new jobs API.

> For the intermediate block-stream functionality I was having problems
> because of 1), so I extended the 'device' parameter to identify
> arbitrary node names as well.
>
> Just to make things clear: your proposal looks good to me, I'm only
> wondering whether you're simply looking for a cleaner API or you have a
> use case that you cannot fulfill because of the way block jobs work
> now...
>

The cleaner interface is definitely the larger motivator.

> Thanks!
>
> Berto
>

However, better flexibility also plays a part. Say we have two devices:

[drive0]: [X] --> [Y] --> [Z]
[drive1]: [A] --> [B]

In theory, we should be able to commit Z into Y into X while we
simultaneously perform a backup from X to A. We definitely can't do
that now. (See the rough QMP sketch at the end of this mail.)

There may be some better use cases -- backups, fleecing, and other
read-only operations in particular have a high likelihood of being able
to run concurrently with other operations.

We definitely *can* just extend the old API to allow for these kinds of
things, but since this represents a new paradigm of job manipulation,
it's easier to extend the block jobs API into a new "jobs" API and allow
the system to expand to other subsystems.

Thanks,
--js
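
For concreteness, here is a minimal sketch of what the concurrent
commit-plus-backup scenario above might look like over QMP once jobs can
be addressed by an ID. The "job-id" argument, the ID-keyed
block-job-cancel, and the use of node names ("X", "A") rather than drive
names are assumptions about the proposed API, not something the current
commands accept; today's commands only take a "device" argument, which
is exactly the limitation under discussion. File and node names here are
made up for the example.

  { "execute": "block-commit",
    "arguments": { "device": "drive0", "base": "X.img",
                   "job-id": "commit0" } }

  { "execute": "blockdev-backup",
    "arguments": { "device": "X", "target": "A", "sync": "full",
                   "job-id": "backup0" } }

  { "execute": "block-job-cancel",
    "arguments": { "id": "backup0" } }

With IDs, both jobs could be listed independently by query-block-jobs
and cancelled or throttled without ambiguity, even though they operate
on overlapping parts of drive0's backing chain.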