My day-to-day work is in medical education, and in assessment specifically. I'm 
interested in using LLMs to (1) help generate assessment items (well, draft 
versions of them at least) and (2) generate individualised descriptive feedback 
based on assessment performance.

But rather than trying to get LC/Xavvi to do it all, I think I'll likely end up 
using LC to draw on the relevant data and construct an appropriate query, passing 
that to the LLM (ChatGPT, BioMedLM or whatever) via its API to generate and return 
the response, and then having LC deal with it from there (e.g. uploading to the 
assessment item bank for further work, or assembling and distributing a feedback report).
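
In case it helps picture the plumbing, here's a rough sketch of the sort of 
handler I have in mind. It's not working production code: the handler name, model 
and key handling are just placeholders, and it assumes the bundled JSON library 
(JSONImport/JSONExport) plus an OpenAI-style chat completions endpoint.

command askLLM pPrompt, pApiKey
   local tBody, tRaw, tResponse

   -- build the request body as a LiveCode array, then export it as JSON
   -- (JSONImport/JSONExport come with the bundled JSON library extension)
   put "gpt-4o-mini" into tBody["model"]  -- placeholder; whatever model you settle on
   put "user" into tBody["messages"][1]["role"]
   put pPrompt into tBody["messages"][1]["content"]

   -- hand the query to the LLM over HTTPS
   set the httpHeaders to "Content-Type: application/json" & return & \
         "Authorization: Bearer " & pApiKey
   post JSONExport(tBody) to URL "https://api.openai.com/v1/chat/completions"
   if the result is not empty then
      -- libURL reports network/HTTP errors in "the result"
      return "error:" && the result
   end if
   put it into tRaw

   -- pull the generated text out of the JSON reply and pass it back
   put JSONImport(tRaw) into tResponse
   return tResponse["choices"][1]["message"]["content"]
end askLLM

The caller then picks the draft up from "the result", and LC carries on with the 
real work: dropping it into the item bank, building the feedback report, and so on.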

A few hours spent yesterday crafting and refining inputs and queries suggests 
all of that is entirely possible.
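
The query-construction side is mostly just string building and merge(). Purely by 
way of illustration (the scores and the prompt template here are invented, and 
tApiKey is assumed to be loaded from wherever you keep it):

   local tScores, tPrompt
   -- invented example data; in practice this would come from the assessment records
   put "Item 1: 4/5" & return & "Item 2: 2/5" & return & "Item 3: 5/5" into tScores
   put "You are helping write feedback for a medical student." & return & \
         "Their results were:" & return & "[[tScores]]" & return & \
         "Write two short paragraphs of individualised, descriptive feedback." into tPrompt
   put merge(tPrompt) into tPrompt  -- merge() fills in the [[tScores]] placeholder
   askLLM tPrompt, tApiKey
   put the result into tDraftFeedback  -- then on to the feedback report, item bank, etc.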

Should be fun!

> How to integrate the AI part with the LC part? Again, one could acquire
> just the list, and then let LC generate the output string. But at least all
> would end up in a handler, fully in LC, where I assume additional “real”
> work would then be done.
>
> What I mean is, how can one best integrate the “outside” AI work with the
> “inside” LC work? That is what I have to get my head around.
>
> Lurking in the background, I do not want AI to put LC out to pasture. I did
> that once with HC, and still have not gotten over it.