thanks Radek, i'll give it a go and report back if i get stuck
rgds

On Thu, Mar 27, 2025 at 8:48 AM Radek Stankiewicz via user <
user@beam.apache.org> wrote:

> hi Sofia,
>
> here you have a nice example:
> https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/custom_remote_inference.ipynb
>
> where CloudVisionModelHandler is custom code that can invoke any client
> library.
> you can pass the key as one of the constructor arguments to
> CloudVisionModelHandler, or you can load it from your preferred secret
> manager, e.g.
>
>  def load_model(self):
>      """Initialise the OpenAI API client, fetching the key from Secret Manager."""
>      secret_client = secretmanager.SecretManagerServiceClient()
>      response = secret_client.access_secret_version(name="OPENAI_API_KEY")
>      return OpenAI(api_key=response.payload.data.decode("UTF-8"))
>
>  def run_inference(self, batch, oai_client, inference):
>      response = oai_client.responses.create(  # your LLM magic goes here.
>      [..]
>
> using env variables won't work here because, as you've noticed, Beam runs
> on multiple machines and you can't set env variables on all of them.
>
> On Thu, Mar 27, 2025 at 9:22 AM Sofia’s World <mmistr...@gmail.com> wrote:
>
>> Hello
>> presumably it is possible to kick off a Beam process that invokes an LLM?
>> the only issue I have is how/where do I store the OpenAI key, for
>> example
>>
>> in my current colab/pc setup I have the key configured in my
>> environment.. but Beam will run
>> on multiple machines... how do I configure the OPENAI_KEY, for example?
>>
>> thanks and regards
>>   Marco
>>
>

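The pattern described in the thread above (resolve the key per worker in `load_model()`, not from an env variable) can be sketched as follows. This is a minimal, hedged illustration: `FakeClient`, `LLMModelHandler`, and `key_resolver` are hypothetical stand-ins for the real `openai.OpenAI` client, a Beam `ModelHandler` subclass, and a Secret Manager lookup, so it runs without network access or a Beam installation.

```python
# Sketch only: stand-ins for the real OpenAI client and a Beam ModelHandler.

class FakeClient:
    """Stand-in for openai.OpenAI; it just stores the key."""
    def __init__(self, api_key):
        self.api_key = api_key


class LLMModelHandler:
    """Shape of a custom RunInference model handler (hypothetical names)."""

    def __init__(self, key_resolver):
        # Store a zero-argument callable rather than the key itself, so the
        # secret is fetched on each worker inside load_model(), not baked
        # into the serialized pipeline.
        self._key_resolver = key_resolver

    def load_model(self):
        # Called once per worker: fetch the secret, then build the client.
        return FakeClient(api_key=self._key_resolver())

    def run_inference(self, batch, model, inference_args=None):
        # A real handler would call model.responses.create(...) here.
        return [f"inference({item}) with key ending {model.api_key[-4:]}"
                for item in batch]


handler = LLMModelHandler(lambda: "sk-test-1234")
model = handler.load_model()
print(handler.run_inference(["prompt"], model))
```

In a real pipeline the `key_resolver` callable would wrap the Secret Manager call shown in Radek's snippet; passing the key directly to the constructor also works, but then the secret travels with the serialized pipeline.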