Indeed, it can. It makes up fake information, but it is now heavily
moderated to prevent that.

On Mon, Apr 10, 2023 at 4:33 PM, H L V <hveeder...@gmail.com> wrote:

> Can it dream?
> Harry
>
> On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda <alain.sep...@gmail.com>
> wrote:
>
>> There is work on letting LLMs discuss with each other in order to
>> reflect... I've seen a reference to an architecture where two GPT
>> instances talk to each other with different roles, one as a searcher, the
>> other as a critic... (a rough sketch of that loop is at the end of this
>> message). Look at this article.
>> LLMs may just be the building blocks of something bigger...
>>
>> https://www.nextbigfuture.com/2023/04/gpt4-with-reflexion-has-a-superior-coding-score.html
>>
>> Add to that, they can use external applications (plugins) and talk to
>> generative AIs like DALL-E...
>>
>> Many people say it is not intelligent, but are we?
>> I see AI making mistakes very similar to the ones I make when I'm tired,
>> or a beginner...
>>
>> The real difference is that today's AIs are not the fruit of Darwinian
>> evolution, with its struggle to survive, dominate, eat or be eaten, so
>> they are less frightening than people or animals.
>> The only serious fear I've heard is that we become so satisfied by those
>> AIs that we delegate our genetic evolution to them and lose our
>> individualistic Darwinian struggle to survive, innovate, and seduce a
>> partner, settling into a beehive mentality at the service of the AI
>> system, like worker bees serving a queen... The proponent of that theory
>> estimates it will take a millennium.
>> Anyway, there is no stopping it: if a majority decides to stop developing
>> AI, a minority will develop it in their own service, and China is ready,
>> with great experts and great belief in the future. Only the West is
>> afraid. (There is a paper circulating on that, where fear of AI is linked
>> to GDP per head.)
>>
>>
>> On Mon, Apr 10, 2023 at 4:47 PM, Jed Rothwell <jedrothw...@gmail.com>
>> wrote:
>>
>>> I wrote:
>>>
>>>
>>>> Food is contaminated despite our best efforts to prevent that.
>>>> Contamination is a complex process that we do not fully understand or
>>>> control, although of course we know a lot about it. It seems to me that as
>>>> AI becomes more capable it may become easier to understand, and more
>>>> transparent.
>>>>
>>>
>>> My unfinished thought here is that knowing more about contamination and
>>> seeing more complexity in it has improved our ability to control it.
>>>
>>>
>>> Sean True <sean.t...@gmail.com> wrote:
>>>
>>> I think it’s fair to say no AGI until those are designed in,
>>>> particularly the ability to actually learn from experience.
>>>>
>>>
>>> Definitely! ChatGPT agrees with you!
>>>
>>
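For concreteness, here is a rough sketch of the searcher/critic loop Alain
describes above. It is a minimal illustration, not the Reflexion
implementation from the linked article; ask_llm() is a hypothetical
placeholder to be wired to whatever chat-completion API you use, and the
prompts and stopping rule are assumptions.

    # Minimal sketch: a "searcher" LLM proposes an answer, a "critic" LLM
    # reviews it, and the searcher revises until the critic is satisfied.
    # ask_llm() is a hypothetical placeholder, not a real library call.

    def ask_llm(role_prompt: str, message: str) -> str:
        """Send `message` to an LLM primed with `role_prompt` (stub)."""
        raise NotImplementedError("wire this to your chat-completion API")

    def searcher_critic_loop(task: str, rounds: int = 3) -> str:
        # Searcher produces a first draft.
        draft = ask_llm("You are a searcher. Solve the task.", task)
        for _ in range(rounds):
            # Critic reviews the draft and points out problems.
            critique = ask_llm(
                "You are a critic. List errors and weaknesses, or say OK.",
                f"Task: {task}\nProposed answer: {draft}",
            )
            if critique.strip().upper().startswith("OK"):
                break  # critic is satisfied; stop refining
            # Searcher revises the draft using the critique.
            draft = ask_llm(
                "You are a searcher. Revise your answer using the critique.",
                f"Task: {task}\nPrevious answer: {draft}\nCritique: {critique}",
            )
        return draft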

-- 
Daniel Rocha - RJ
danieldi...@gmail.com
