[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread Offray Vladimir Luna Cárdenas

Hi,

Comments inlined below:

On 22/03/23 7:34, in_pharo_users--- via Pharo-users wrote:

Offray,  and to all others,

you are missing the issue.

The problem we face is not to measure the 'intelligence' of a system, but its 
ability to act verbally in a way that is indistinguishable from a human.

This ability is already given, as chatbots are accepted by millions of users, 
for instance as user interfaces. (measurement = 'true', right?)

ChatGPT has the ability to follow a certain intention, for instance to convince the 
user to buy a certain product.  For this purpose, chatbots are now being 
equipped with lifelike portrait pictures, speech input and output systems with 
lifelike voices, and phone numbers they can use to make and receive calls.  They 
are fed with all available data on the user, and we know that ALL information 
about every single internet user is available and is being consolidated as 
needed.  The chatbots are able to use this information to guide their 
conversational strategy, as the useful aspects of the user's mindset are 
extracted from his internet activity.

These chatbots are now operated on social network platforms with lifelike 
names, 'pretending' to be human.

These bots act verbally in a way indistinguishable from humans for most social 
media users, as the most advanced psychotronic technology to manufacture consent.

The first goal of such propaganda will naturally be to manufacture consent 
about humans accepting being manipulated by AI chatbots, right?


I don't think I have missed the point, as we agreed (I think) that 
chatbots are not intelligent, they merely have that appearance. That's why 
I'm calling "AI" #ApparentIntelligence (in the sense of looking alike, but 
not being real). Of course, something that looks like the real thing without 
being the real thing has been used for manipulation since the first times of 
gossip, then the printing press and now automation, with the changes in 
scale/danger that such changes of medium imply.


I don't think that manufacturing consent is so easy, as this very thread 
shows. What is being automated is manufactured polarization (though we humans 
can do pretty well on polarization on our own).




How can this be achieved?

Like always in propaganda, the first attempt is to
- suppress awareness of the propaganda, then
- suppress the awareness of the problematic aspects of the propaganda content, 
then
- reframe the propaganda content as acceptable, then as something to wish for,
- achieve collaboration of the propaganda victim with the goals of the 
propaganda content.

Interestingly, this is exactly the schema that your post follows, Offray.


On the contrary, my post is advocating for a critical reading of 
Apparent Intelligence, by reframing the terms and the uncritical 
techno-utopic / techno-apocalyptic readings/discourses that are spreading 
rapidly on the wider web, as I think that this community has historically 
shown a different position, beyond/resisting hype and current trends. 
So I don't see how any of the steps you mention are a "blueprint followed" 
in my post, and I think they will be difficult to locate without 
specific examples.





This often takes the form of domain framing, like we see in our conversation:  
the problem is shifted to the realm of academics - here informatics/computer 
sciences - and thus delegated to experts exclusively.  We saw this in the 9/11 
aftermath coverup.

Then, Offray, you established yourself as an expert in color, discussing 
aspects that have already been introduced by others and including the group's 
main focus 'Smalltalk', thus manufacturing consent and establishing yourself as 
a reliable 'expert', and in reverse trying to hit at me, whom you have 
identified as an adversary.

Then you offered a solution in color to the problem at hand with 'traceable AI' 
and thus tried to open the possibility of collaboration with AI proponents for 
the once critical reader.


Heh, heh. On the contrary, it seems that the one seeing a scheme, and 
locating and confronting enemies with deep plots and tactics, is you. 
Providing external credible sources beyond opinion, belonging to an 
established, falsifiable discursive tradition (i.e. one that you can 
criticize instead of blindly accepting) is a way to enrich 
discourse/argumentation beyond conspiracy theories. You could also quote 
your sources instead, which would allow the community to see where our 
positions are held/sustained, even if we use different domain frames, 
which is better than claiming no domain or expertise in pursuit of 
openness. So instead of "these are my opinions, without any external source 
or reference", pretending no expertise or domain framing, we could 
advocate for openness by welcoming different expertise and argumentation 
and making our sources/biases as evident as possible.





I do not state, Offray, that you are knowingly an agent to promote the NWO AI 
program.  I think you just 'learned' / have been programmed to be a successful 
academic software developer, because to be 

[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread in_pharo_users--- via Pharo-users
Dear Offray,

I have nothing to comment on this.

---

In general, I have made the observation that certain people who want to push 
an agenda driven by ulterior motives tend to reiterate false propositions, and 
thus false conclusions, over and over.

If there is an apodictic statement that contradicts their ulterior motives, 
these people just can't help themselves but deny the truth against better 
knowledge, ad nauseam.



[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread Offray Vladimir Luna Cárdenas

Dear anonymous,

Me neither.

It is pretty difficult to build a constructive discourse against hidden 
agendas, ulterior motives, self-evident truths, absence of sources, or 
general affirmations without particular examples or detailed support.


Offray


[Pharo-users] Re: Wow - Chat GPT understands Smalltalk

2023-03-26 Thread Richard O'Keefe
I tried it on three "dead" languages:
- a bubble sort in Pop-2
- a solver for a system of linear equations in IMP-80
- the array-concatenation example in SNOBOL4 from Rosetta Code.
It got the first one right -- despite 'array' access
being written 'subscr(index, array)' -- and the second
one almost right -- mistaking an nxn array for a 2n
array. It got the third one (which I had stripped of comments)
right too.
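
For readers who don't know these languages, the first test amounts to handing the
model an uncommented sorting routine and asking what it does. A rough Smalltalk
equivalent of such a bubble sort (an illustrative sketch for this list, not the
Pop-2 code that was actually submitted) would be:

    "Bubble sort: repeatedly swap adjacent out-of-order elements
     until a full pass makes no swaps."
    | data swapped tmp |
    data := #(5 1 4 2 8) copy.
    [ swapped := false.
      1 to: data size - 1 do: [:i |
        (data at: i) > (data at: i + 1) ifTrue: [
          tmp := data at: i.
          data at: i put: (data at: i + 1).
          data at: i + 1 put: tmp.
          swapped := true ] ].
      swapped ] whileTrue.
    data  "=> #(1 2 4 5 8)"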

It's impressive.  And it's impressive because it's clearly
NOT based on analysing a large corpus of code in the language
you give it (because there IS no corpus of Pop-2, and the
available corpus of IMP is both limited and very sparsely
commented; not much SNOBOL either).

On Wed, 15 Mar 2023 at 20:07, Tim Mackinnon  wrote:

> Of course everyone is talking about ChatGPT, and I saw a post to
> ExplainCode and didn’t expect it to understand Smalltalk, so was impressed
> that it does! Presumably the Tonel Smalltalk code on GitHub has helped
> along with numerous articles?
>
> Try it out: https://whatdoesthiscodedo.com/
>
> I gave this simple example: (1 to: 100 by: 3) reject: [:i | i isOdd]
>
> And got this (neat eh):
>
> The code above is written in the programming language Smalltalk.
>
> It creates a collection of numbers from 1 to 100, incrementing by 3 at
> each step. This is done using the "to:by:" construct where the starting
> value is 1, the ending value is 100, and the increment is 3.
>
> The resulting collection is then filtered using the "reject:" method,
> which takes a block of code as an argument. In this case, the block checks
> whether each element (denoted by the variable i) is odd by using the
> "isOdd" method. If an element is odd, it will be rejected from the
> collection.
>
> So, the final result is a collection of all even numbers from 1 to 100,
> incrementing by 3 at each step.
> Share this explanation with colleagues: whatdoesthiscodedo.com/g/a0ec56e
>
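
A quick way to sanity-check that explanation is to evaluate the expression in a
Pharo Playground. This is a minimal sketch assuming a stock Pharo image, where the
standard parity selector on Number is #odd (substituted below for the #isOdd
selector used in the original snippet):

    "1 to: 100 by: 3 enumerates 1, 4, 7, ..., 100;
     rejecting the odd elements keeps 4, 10, 16, ..., 100."
    ((1 to: 100 by: 3) reject: [:i | i odd]) asArray.
    "=> #(4 10 16 22 28 34 40 46 52 58 64 70 76 82 88 94 100)"

The printed result matches the explanation above: the even elements of the
interval, i.e. every second element starting at 4.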