On Sat, Mar 18, 2023 at 9:58 AM Mikael Djurfeldt <mik...@djurfeldt.com>
wrote:

> On Sat, Mar 18, 2023 at 9:46 AM <to...@tuxteam.de> wrote:
>
>> On Sat, Mar 18, 2023 at 09:41:37AM +0100, Mikael Djurfeldt wrote:
>> > On Sat, Mar 18, 2023 at 9:36 AM <to...@tuxteam.de> wrote:
>>
>> [...]
>>
>> > > Perhaps you didn't know, but you are training the model :-)
>> > >
>> >
>> > Unfortunately not. I'm prompting it within its 32,000-token (GPT-4)
>> > context window. The next conversation, the model is back to exactly
>> > the same state again. Of course, it is possible that OpenAI chooses
>> > to filter out something from the dialogs it has had.
>>
>> You don't think that those conversations end up as raw data for the
>> next model? I'd be surprised, but you definitely know more than I do.
>>
>
> I know very little apart from knowing what deep learning is and having
> skimmed the "Attention Is All You Need" paper. I only meant that you are
> not training the model during or between sessions. It is certainly
> possible that OpenAI filters out things from the dialogs to use as part
> of training for the next version. They warn you that they may take data
> from the dialogs. Whether and how they do that, I don't know.
>

Or, as GPT-4 would phrase it: I apologize for the confusion in my previous
answer. You may be right that I'm training the next version of the model. :)
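For what it's worth, the statelessness Mikael describes can be sketched in a few lines. This is a hypothetical stand-in function, not the real OpenAI API: each call sees only the messages passed to it, so nothing persists between "sessions" unless the client resends the history itself.

```python
def reply(messages):
    # Stand-in for a stateless chat model: the output depends only on
    # the messages supplied in this call, never on earlier calls.
    # (Hypothetical function for illustration, not OpenAI's API.)
    return f"seen {len(messages)} message(s)"

# Two separate conversations starting from the same prompt get the same
# answer -- the model's weights don't change between sessions.
first = reply([{"role": "user", "content": "Hi"}])
second = reply([{"role": "user", "content": "Hi"}])
assert first == second

# To "continue" a conversation, the client must resend the full history.
longer = reply([{"role": "user", "content": "Hi"},
                {"role": "assistant", "content": first},
                {"role": "user", "content": "And then?"}])
assert longer != first
```

Whether filtered dialog data later feeds the training of a *next* model version is, as noted above, a separate question the provider decides.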
