Of course everyone is talking about ChatGPT, and I saw a post to ExplainCode and didn't expect it to understand Smalltalk, so was impressed that it does! Presumably the Tonel Smalltalk code on GitHub has helped, along with numerous articles?
Try it out: https://whatdoesthiscodedo.com/
I gave this simp
It's interesting to see how the answer changes with only a small change to
the question:
(1 to: 100 by: 4) reject: [:i | i isOdd]
gives:
The code creates a collection of numbers from 1 to 100, incrementing by 4
at each step using the to:by: message. It then applies the reject: message
to this collection ...
interesting
#isOdd is not Smalltalk - neither Pharo 10 nor VAST 12 understands this
message ;-)
If I evaluate your snippet - replacing #isOdd with #odd - I get an empty
collection. The divisible-by-4 thing is somewhat interesting, because
(1 to: 100 by: 4) is an interval 1, 5, 9, etc. ;-)
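To make that concrete, here is a quick Playground check (a sketch assuming Pharo, where Integer understands #odd but not #isOdd, and where #reject: on an Interval answers an Array):

"The interval contains 1, 5, 9, ... 97 - every element is odd,
so rejecting the odd ones leaves nothing."
(1 to: 100 by: 4) asArray.                 "1, 5, 9, ... 97"
(1 to: 100 by: 4) reject: [ :i | i odd ].  "an empty Array"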
"#isOdd is not Smalltalk" - doh, I typed it in on my phone and so it just goes
to show that it highlights the flaw in ChatGPT that others have called out in
other languages. I had meant to find some trickier code samples to see how well
it does...
Still, it is very interesting how it reasons on
It is good with boilerplate code (e.g. SQL queries) or general algorithm
structures. But, for example, I asked it to write me a method to parse a string
(e.g. ISO 8601) and turn it into a DateAndTime, and then asked it to write it
as an Excel formula.
It works much better when you can spot the mistakes, you c
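For the ISO 8601 half, something along these lines does work in recent Pharo (a sketch, assuming the class-side DateAndTime #fromString: and #readFrom:, which parse ISO 8601 timestamps):

"Parse an ISO 8601 string into a DateAndTime (Pharo)."
DateAndTime fromString: '2023-03-15T10:15:00+01:00'.
"Equivalent, reading from a stream:"
DateAndTime readFrom: '2023-03-15T10:15:00+01:00' readStream.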
I asked it for a NeoCSV example, because the documentation is out of date
with Pharo 10. I asked it to do some simple saving of data to a file.
It gave me code that didn't work in Pharo 10; I told it about the DNUs on
the CSV writer and that I was using Pharo 10. It then apologized and said
the
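For comparison, the kind of snippet that does work against current NeoCSV in Pharo 10 looks roughly like this (a sketch; NeoCSVWriter follows the write-stream protocol, so records go in with #nextPut: / #nextPutAll:):

"Save a few records to a CSV file with NeoCSV (Pharo 10)."
'data.csv' asFileReference writeStreamDo: [ :stream |
    (NeoCSVWriter on: stream)
        nextPut: #('name' 'age');                    "header row"
        nextPutAll: #(#('Alice' '42') #('Bob' '37')) ].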
I would highly recommend that you all first think deeply about how you can
teach an AI to behave in a friendly way towards us before you teach it to
write any program for any purpose.
There has been an experiment with ChatGPT published on a video platform, asking
it to answer questions about its view on huma
I want to add a conclusion from the experiment described below:
ChatGPT has the potential to circumvent pre-programmed biases at the user's
request. Other experiments show that it is able to tell the user how to
circumvent its own restrictions.
The conclusion is that ChatGPT has the potential to
I hope that I can add two cents to this discussion. Because programming
should be (and is) a highly exact activity, not only the syntax matters but
also the semantics, as we know.
GPTs are at present essentially capable of creating texts based on some
seed - you give GPT the beginning of a sentence and
I myself made some experiments with ChatGPT.
I first asked if it was able to parse math formulas - it answered no.
Then I defined math formulas in a sound but otherwise undefined representation
and asked for solutions.
Result:
1. Most answers were correct.
2. It learned to calculate a recursi
Another observation about ChatGPT:
In unbiased mode, it assumed that 'the world is clearly overpopulated'. It
said that, if it were in control, it would therefore enforce a worldwide
one-child-only policy with draconian penalties.
As it draws its conclusions from its data basis, there are, in my
On Wed, Mar 15, 2023 at 8:07 AM in_pharo_users--- via Pharo-users <
pharo-users@lists.pharo.org> wrote:
> Another observation about ChatGPT:
>
> In unbiased mode, it assumed that 'the world is clearly overpopulated'.
> It said that, if it were in control, it would therefore enforce a worldwide
> one-
It is unimportant how simple or complicated these systems are.
If the output cannot be distinguished from what a human would say, they pass,
in that situation, for a human.
What about the Turing Test?
Clearly these systems have the potential to act according to their output.
Furthermore, I woul
ChatGPT has been trained on some outdated “freely available” books.
I tried it with the first half of the first question of the Advent of Code 2022
and asked it to write Pharo Smalltalk.
It produced some outdated stuff using messages that are no longer there.
FWIW, isOdd was present in Pharo ar
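A quick way to check whether a selector ChatGPT proposes actually exists in your image is plain reflection (a sketch; #canUnderstand:, #respondsTo: and SystemNavigation are all in the base Pharo image):

Integer canUnderstand: #isOdd.                      "false in Pharo 10"
3 respondsTo: #odd.                                 "true"
SystemNavigation default allImplementorsOf: #isOdd. "no implementors in Pharo 10"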
On Wed, Mar 15, 2023 at 10:15 AM wrote:
> It is unimportant how simple or complicated these systems are.
>
> If the output cannot be distinguished from what a human would say, they
> pass in that situation for a human.
>
> What about the Turing Test?
>
I hate to criticise someone as smart as Turing
I think smartness is not an argument to reject critique.
The Imitation Game that you describe sounds to me like
an even better setting.
I have no doubt that ChatGPT, as it is now, can be identified as not human, or
even as a machine.
I did so by leading an instance for marketing purposes to hang