Thanks Danny! The narrative is well structured and easy to follow. I encourage more folks to take a look. I left a couple of comments, mostly about plans for memory management.
On Thu, Jul 20, 2023 at 7:47 AM Danny McCormick via dev <dev@beam.apache.org> wrote:

> Hey everyone! Today, many users have pipelines that choose a single model
> for inference from 100s or 1000s of models based on properties of the data.
> Unfortunately, RunInference does not support this use case. I put together
> a proposal for RunInference that allows a single keyed RunInference
> transform to serve a different model for each key. I'd appreciate any
> thoughts or comments!
>
> https://docs.google.com/document/d/1kj3FyWRbJu1KhViX07Z0Gk0MU0842jhYRhI-DMhhcv4/edit?usp=sharing
>
> Thanks,
> Danny
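
For anyone skimming, here is a rough sketch of the kind of user code the proposal could enable: one keyed RunInference transform routing each key to its own model. The KeyedModelHandler/KeyModelMapping names, the sklearn handler, and the model paths below are placeholders I'm using to illustrate the idea, not the final API; the concrete design is in the doc.

    import numpy as np
    import apache_beam as beam
    from apache_beam.ml.inference.base import (
        KeyedModelHandler, KeyModelMapping, RunInference)
    from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

    # Hypothetical per-key mapping: each key set is served by its own model.
    per_key_handler = KeyedModelHandler([
        KeyModelMapping(['en'], SklearnModelHandlerNumpy(model_uri='gs://models/en.pkl')),
        KeyModelMapping(['fr'], SklearnModelHandlerNumpy(model_uri='gs://models/fr.pkl')),
    ])

    with beam.Pipeline() as p:
        _ = (
            p
            | beam.Create([('en', np.array([1.0, 2.0])),   # keyed examples
                           ('fr', np.array([3.0, 4.0]))])
            | RunInference(per_key_handler)  # single transform, per-key model choice
            | beam.Map(print))

The interesting part (and where my comments focus) is how many of these models can be held in memory at once and how they get loaded/evicted, which is why I'd like to see more detail on the memory management plans.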