I've been playing with OpenAI's ChatGPT some.

I had it write a few Pascal programs and refactor them into multiple units per my specifications.
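
To give an idea of the kind of split I was asking for, a trivial sketch of a program refactored into a separate unit might look like this (two files; all names here are invented for illustration, not taken from the actual sessions):

unit MathBits;

{ Illustrative unit (mathbits.pas); the program below would live in a
  separate file.  Names are made up for this sketch. }

interface

function Square(x: Integer): Integer;

implementation

function Square(x: Integer): Integer;
begin
  Square := x * x;
end;

end.

program UnitDemo;

{ The main program after the refactor: the logic now lives in MathBits. }

uses
  MathBits;

begin
  WriteLn(Square(7));
end.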

I did the same with other programming languages, then asked it to rewrite what I'd done in those languages in Pascal or one of the others.


That included doing the same things in assembly code for x86_64 Linux.

It tried to send two string arguments using a single Linux write syscall, with the second argument referenced by a %s placeholder in the first string, as if write did printf-style formatting.

I had to explain that write cannot do that, so it dropped the %s from the first string and issued a second write for the second string argument.
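
The actual exchange was in x86_64 assembly, but the same two-write idea expressed in Free Pascal looks roughly like this (just a sketch with placeholder strings; FpWrite from BaseUnix is a thin wrapper over the raw write syscall):

program TwoWrites;

{ Sketch of the fix: the write syscall has no printf-style formatting,
  it just copies bytes to a file descriptor, so each string gets its
  own call.  The strings are placeholders, not the ones from the
  actual session. }

uses
  BaseUnix;

const
  Part1: AnsiString = 'value is ';         { was the %s template }
  Part2: AnsiString = '42' + LineEnding;   { was the second argument }

begin
  FpWrite(1, Part1[1], Length(Part1));  { fd 1 = stdout }
  FpWrite(1, Part2[1], Length(Part2));
end.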


The results weren't perfect every time, but I was quickly able to close the loop with it to correct things and produce code that ran.


On one C++ example I got a linker error, and it gave me the correct command-line option much faster than a Google search and digging through the results would have.

I was able to tell it to write a simple GUI program in C++ using Qt6: graph two cycles of a sine wave, and move a rotating circle back and forth along the course of the sine wave.

It took several iterations of me correcting it with suggestions and hints, but it finally got it working.


I played dumb on a lot of things, telling it what distro I was on and asking what commands were needed to install certain things. It gave me detailed steps for everything.

I asked about one C library, dwindows, and it knew nothing about it.

I gave the chatbot the website for dwindows, and it was then able to answer questions about the library.

I asked it for some sample code doing such and such with the library, and it provided that too. I asked for specific modifications that would not have been in any existing dwindows example, and it was able to make the changes, which at some level shows it was parsing the API documentation.


I started a new chat and conducted a mock interview of it for a hypothetical programmer position.

On some answers I had to give it the same benefit of the doubt I would give a human, asking follow-up questions to get the answer I was looking for.

In just about every area it gave more comprehensive answers than any human could.

And it gave satisfactory, correct answers in every other area.

We discussed a variety of issues: low-, medium-, and high-level stuff, theoretical and hypothetical questions, troubleshooting, pros and cons of one approach over another, workplace scenarios, email/IT scams, and so on.

It matched or outperformed a human on just about everything.

Well, except for "what are your hobbies..." But it was able to comprehensively answer every non-computer-related hobby question I threw at it.


But back to Pascal: the first results did not always work, and I'd need to suggest alternatives. Most of the time existing units sufficed, but in other places I'd tell it to resort to libc calls or inline assembly, and it would eventually produce working code. With a bit more coaxing I could likely have gotten existing units to work in those scenarios too.
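
As an example of what I mean by resorting to libc, a minimal sketch of binding a libc routine directly from Free Pascal could look like this (getpid is just a stand-in I picked for illustration, not one of the actual cases from the sessions):

program LibcFallback;

{ Sketch of the kind of libc fallback mentioned above.  getpid is only
  an illustrative stand-in; binding it directly like this skips any
  wrapper unit. }

uses
  ctypes;

function getpid: cint; cdecl; external 'c' name 'getpid';

begin
  WriteLn('pid reported by libc: ', getpid);
end.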

The main problem I ran into is that things need to be small. If the code in question was too big, it would break the output up and I'd have to ask it to continue, and it did not always splice the continuations together correctly.

The incorrect answers did not bother me, as I was for the most part able to work with it to correct things.
