The main problem with the coding front ends is that they know very little
about anything that isn't modern, and they tend to make things up rather
than do the work of finding out how it's actually done.

That's why people are so excited about Open Claw / Hermes Agent / Nanobot,
etc. They are designed to spend extra time learning, rather than just relying
on their training. Codex is one of the LLM backends these agents can use. I
usually add a note to the SOUL.md files telling them not to trust their own
training over the references available in the files.
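
To give a flavour, the note is roughly along these lines (paraphrased, not
the exact wording, which varies between my agents):

    ## Working with references
    - Your training data predates most of what is in this workspace.
    - Before answering anything about the platform, check the wiki pages
      and skills here first.
    - If the wiki and your training disagree, the wiki wins.
    - If neither covers it, say so instead of guessing.

The point is to flip the default: the agent reaches for the files first and
falls back on its training, not the other way around.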

The bot I created deliberately built skills around design patterns particular
to the platform. Removing the assumptions means fewer hallucinations.
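
To give an idea of the shape of these, a skill for a platform-specific
pattern might look something like this (an illustrative sketch, not one of
the actual skill files; check the Picomite manual for the exact syntax):

    # Skill: flicker-free drawing on the Picomite
    Draw to the off-screen framebuffer and copy it to the display once
    per frame, rather than drawing straight to the screen:

        FRAMEBUFFER CREATE       ' allocate the off-screen buffer F
        FRAMEBUFFER WRITE F      ' route drawing commands to F
        ' ... draw the whole frame here ...
        FRAMEBUFFER COPY F, N    ' blit F to the visible display N

Because the pattern is written down with working syntax, the model copies
it rather than inventing something from another BASIC dialect.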

Regards,
Marcus B

On Fri, 17 Apr 2026, at 05:03, George M. Rimakis wrote:
> $20/mo will buy you a ChatGPT subscription, which I think is more than
> adequate for any M100 project.
>
> The size of whatever project you are working on, by virtue of it being
> designed for a small system, will fit in the model's context window, along
> with whatever other context you need to provide.
>
> Maybe you can fine-tune Gemma 4 or something to run locally, but I’m pretty
> sure it will cost you much more than $20 a month, and won’t be as good as
> Codex.
>
> But honestly if you want to do it just to do it, that’s a totally different
> story :D
>
> There isn’t a ton of practicality in tinkering with these old machines
> anyway; it's just for fun.
>
> -George
>
> On Thu, Apr 16, 2026 at 9:57 AM Marcus B <[email protected]> wrote:
>
>> Well, that's a coincidence with a recent project I've been working on...
>>
>> I've recently been playing with hermes agent on a Raspberry Pi, and a $20
>> Ollama subscription.
>>
>> The intention was to create a Picomite/Picocalc MMBASIC programming
>> expert. I first got it to consume the Picomite manual and put it into a
>> wiki, with pages small enough not to overwhelm its context window. The
>> next step was asking it to review the whole structure, identify design
>> patterns and programming techniques, and document them in the wiki it had
>> created. At this stage I also got it to create its own skills for
>> programming and for the design patterns.
>>
>> I then grabbed about 125 Picocalc BASIC programs from GitHub and asked
>> the agent to do the same again: design patterns and techniques, but also
>> adding code examples to the wiki. At every step I reminded it that it was
>> the consumer of the documentation and should format it for itself.
>>
>> The outcome was about 117 files and 28 skills. I have not asked it to
>> write code yet, but I'm pretty confident it'd create something mostly sane.
>> That all used less than 5% of my weekly allowance for tokens (< $1).
>>
>> I've attached an example of one of the pages it created. My guess is it
>> hasn't extensively expanded the manual into its own documentation, but
>> as Hermes is designed to learn, it can expand on what it has.
>>
>> On a side note, this is one of about 5 agents I've created. One is doing a
>> great job of managing my calendar and tasks, and sending me the latest news
>> and weather every morning. All created by just asking it to do it for me in
>> mostly plain language.
>>
>> Regards,
>> Marcus B
>>
>>
>> On Thu, 16 Apr 2026, at 10:02, Kenneth Pettit wrote:
>> > On 4/15/26 3:14 PM, [email protected] wrote:
>> >> On Wed, 15 Apr 2026, Joshua O'Keefe wrote:
>> >>
>> >>>> On Apr 14, 2026, at 10:39 AM, John R. Hogerhuis <[email protected]>
>> >>>> wrote:
>> >>>>
>> >>>> Anyone interested in collaborating on that? Ideally someone who has
>> >>>> created a local model so we don't get stuck spinning our wheels.
>> >>>>
>> >>>
>> >>> Hi John. Reach out, LLMs are an area I've done and am doing work. I'd
>> >>> love to talk.
>> >>
>> >> If you guys stand up a dedicated LLM for our community, it would be a
>> >> monster help. I would be hella willing to donate funds to the cause.
>> >> It would definitely help with my PC-2 project (which is a monster
>> >> effort) as well as my back-burner M100 project (also a monster effort).
>> >>
>> >> I'm not rich, just saying
>> >
>> > Hmm, I *did* buy a loaded MacBook Pro (think 128GB unified memory) three
>> > months ago.  Maybe it is time to put all that memory to good use
>> > training an LLM for Model 100 programming! :)  I bought the extra RAM
>> > for that very thing but just haven't had time yet.
>> >
>> > Ken
