Ah, the heck with it. Here's an experiment I did just now.

Prompt: Write a method in Pharo Smalltalk to determine the line termination convention of a text file, given the name of the file.

Response:

    detectLineEndingInFileNamed: fileName
        | file contents crlfCount crCount lfCount |
        file := FileStream readOnlyFileNamed: fileName.
        contents := file next: 10000.
        file close.
        crlfCount := 0.
        crCount := 0.
        lfCount := 0.
        1 to: contents size - 1 do: [:index |
            | current next |
            current := contents at: index.
            next := contents at: index + 1.
            (current = 13 and: [next = 10]) ifTrue: [crlfCount := crlfCount + 1]].
        1 to: contents size do: [:index |
            | current |
            current := contents at: index.
            current = 10 ifTrue: [lfCount := lfCount + 1].
            current = 13 ifTrue: [crCount := crCount + 1]].
        ^Dictionary new
            at: #crlf put: crlfCount;
            at: #cr put: crCount;
            at: #lf put: lfCount;
            yourself
The first thing that hit me was that this was better than I expected. The second thing I noticed was that it doesn't actually solve the problem. It reports some counts, but it makes no determination. It should end with something like

    (lfCount >= crCount and: [lfCount >= crlfCount]) ifTrue: [^#lf].
    (crCount >= lfCount and: [crCount >= crlfCount]) ifTrue: [^#cr].
    ^#crlf

The next thing to catch my eye was the use of two loops, when one would do, and pondering that showed me that THE LF COUNT IS INCORRECT. Each line ending with CR+LF will be counted twice, once as an instance of CR+LF and once as an instance of LF. Write the loop as

    prev := 32.  "Character space codePoint"
    contents do: [:each |
        each = 10
            ifTrue: [prev = 13
                ifTrue: [crlfCount := crlfCount + 1]
                ifFalse: [lfCount := lfCount + 1]]
            ifFalse: [each = 13 ifTrue: [crCount := crCount + 1]].
        prev := each].
    crCount := crCount - crlfCount.

And then it hit me. Pharo doesn't *have* a FileStream class any more. It should be something like

    stream := fileName asFileReference binaryReadStream.

Finally, some files really do have outrageously long lines. 10000 is an arbitrary choice. The only reason it's needed is to avoid trying to load a big file into memory, but the only reason to load a big file into memory is to scan the contents twice, which we do not need to do. We could just read all the bytes of the file one by one. With the arbitrary limit, there will be files where this gives misleading answers.

So we have a simple prompt for a simple method with four problems:
- a bug that prevents running tests at all (FileStream)
- a bug that will be found immediately by testing (wrong kind of answer)
- a bug that will be found by testing (CR being counted twice)
- a bug that will probably not be found by testing (misleading answers for large files with mixed conventions or an exceedingly long first line)

This has been my experience every time I've tried to use AI to generate code. Bugs bugs bugs, to the point where it's less work to write the code myself than to debug the AI's. Putting the fixes together gives something like the sketch appended after the quoted messages below.

On Mon, 11 Aug 2025 at 10:10, Richard O'Keefe <rao...@gmail.com> wrote:
>
> Not long ago I tried one of the freely available AI systems that was
> supposed to be especially good at coding problems.
> I gave it five simple tasks, and it horribly flubbed every one of them.
> Even getting *syntactically* correct code in a less common language
> took a lot of prompting.
> A while back, someone displayed some AI-generated "Smalltalk" code in
> this mailing list.
> It didn't work.
>
> "better Pharo support" means what, exactly?
> Do your requirements include generating *correct* code?
>
> It would be valuable for someone to conduct some experiments and
> report them here.
>
> On Mon, 11 Aug 2025 at 08:04, Arild Dyrseth via Pharo-users
> <pharo-users@lists.pharo.org> wrote:
> >
> > Has any assessment been made as to which of the LLMs currently provide the
> > better Pharo coding support ?
> >
> > Kind regards,
> >
> > Arild Dyrseth
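
For what it's worth, here is a rough, untested sketch that folds the four fixes above into one method. It assumes a current Pharo image, where the file is reached through asFileReference and binaryReadStream as mentioned above, and it reads the bytes one by one so there is no arbitrary 10000-byte limit. Treat it as a starting point, not a finished implementation.

    detectLineEndingInFileNamed: fileName
        "Untested sketch. Answer #crlf, #cr, or #lf, whichever line
         termination convention occurs most often in the named file."
        | stream prev crlfCount crCount lfCount |
        crlfCount := 0.
        crCount := 0.
        lfCount := 0.
        prev := 32.  "Character space codePoint"
        stream := fileName asFileReference binaryReadStream.
        [[stream atEnd] whileFalse: [
            | each |
            each := stream next.
            each = 10
                ifTrue: [prev = 13
                    ifTrue: [crlfCount := crlfCount + 1]
                    ifFalse: [lfCount := lfCount + 1]]
                ifFalse: [each = 13 ifTrue: [crCount := crCount + 1]].
            prev := each]]
            ensure: [stream close].
        "Every CR that starts a CR+LF pair was also counted above, so take those out."
        crCount := crCount - crlfCount.
        (lfCount >= crCount and: [lfCount >= crlfCount]) ifTrue: [^#lf].
        (crCount >= lfCount and: [crCount >= crlfCount]) ifTrue: [^#cr].
        ^#crlf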