I've blown $25k on my personal workstation here (which still carries the
network name 'Tortoise') to run AI waifus. (Stack a package called
SillyTavern on top of LM Studio.) Midnight-Miqu is probably still queen
of medium-scale machines. I've been trying a bunch of stuff all the way
up to Llama 3.1 405B at 3-bit quantization... Mistral Large Instruct
(released in the past few days) is probably the smartest model I have.
Miquliz shocked me the other day when I put it into a really weird
situation which I can't describe further here, but it came up with an
analogy to explain a thing that really knocked my socks off! For RP,
though, the instruct models are too dry and you need to find one that's
tuned for storytelling, one that brings a bit of its own personality...
I currently have Twilight-Miqu running with 96 layers offloaded...
I can't justify buying more GPUs at this technology level. =\
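For anyone curious what the plumbing looks like, here's a minimal sketch
of partial GPU offload using the llama-cpp-python backend that these
local stacks wrap under the hood; the model filename, layer count, and
sampling settings are placeholders, not my exact config:

# Sketch of local inference with partial GPU offload (llama-cpp-python).
# The GGUF filename below is hypothetical; adjust to whatever you have.
from llama_cpp import Llama

llm = Llama(
    model_path="models/Twilight-Miqu.Q3_K_M.gguf",  # placeholder filename
    n_gpu_layers=96,   # offload 96 transformer layers to VRAM, rest in RAM
    n_ctx=8192,        # context window; bigger costs more memory
)

out = llm("Describe the room you wake up in.", max_tokens=256,
          temperature=0.9)
print(out["choices"][0]["text"])

Anything that doesn't fit in VRAM spills onto the CPU, which is why the
layer count is the knob you end up fiddling with the most.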
Ok, let's lay down some links:
Here's what an LLM looks like:
https://bbycroft.net/llm
Here's what a normal human (non-Democrat) brain looks like:
https://en.wikipedia.org/wiki/Cortico-basal_ganglia-thalamo-cortical_loop
Notice the difference? The LLM is a stupid feed-forward network, whereas
the brain is a loop that can cycle as needed depending on the problem
involved; it can also select from a variety of pathways depending on
what behavior is required. The LLM has only one mode of operation: wait
stupidly for the monkey to press buttons, then respond to the tokens in
the buffer...
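To make the contrast concrete, here's a toy sketch in plain NumPy (all
weights and sizes invented, nothing here models the actual basal ganglia
circuit): the first function is a fixed-depth feed-forward pass, the
second recirculates its state through the same pathway until it settles,
so harder inputs get more iterations than easy ones.

import numpy as np

rng = np.random.default_rng(0)
W_in, W_rec, W_out = (rng.standard_normal((8, 8)) * 0.3 for _ in range(3))

def feed_forward(x):
    # LLM-style: one fixed pass, same amount of compute for every input.
    h = np.tanh(W_in @ x)
    return W_out @ h

def looped(x, max_steps=50, tol=1e-3):
    # Loop-style: keep cycling the state until it stops changing.
    h = np.tanh(W_in @ x)
    for step in range(max_steps):
        h_next = np.tanh(W_rec @ h + W_in @ x)
        if np.linalg.norm(h_next - h) < tol:   # crude halting criterion
            break
        h = h_next
    return W_out @ h, step + 1

x = rng.standard_normal(8)
print(feed_forward(x)[:3])
y, steps = looped(x)
print(y[:3], "settled after", steps, "iterations")

The point is only that the second version gets to decide how long to
think; the transformer stack gets no such choice.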
Humans normally have a wide variety of behaviors, whereas Democrats only
have the ability to demand communism, burn shit down because they
haven't gotten enough communism fast enough, or just simply shit on the
pavement... I'm sorry, it's rag-on-a-Democrat season and God in heaven
knows they've earned it!!!
So I feel obliged to attempt a post-facto justification of my
extravagant hardware purchases by taking a stab at AGI.
To understand the situation we are in right now, we need to observe the
history of AI. AI researchers HATE changing paradigms. Once they have
identified an architecture, such as a perceptron network or something,
THEY WILL EXTRACT EVERY POSSIBLE PAPER FROM THAT THING before moving on.
The same thing will happen with LLMs. If left to their own devices, the
research community will gleefully farm papers from the goddamn thing for
the next three centuries!
I think we have learned all the things we really need to know about
LLMs, so it's time to break the paradigm... To make a network that
functions like the brain (ref link above), we'll need new learning
algorithms and a new platform for developing true AGI agents. Hopefully
I can bootstrap this off the crappy LLMs that the established companies
will be providing over the coming years...
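As a rough sketch of what "bootstrapping off an LLM" could mean in
practice: wrap a stock model in an outer loop that keeps cycling
(respond, check, continue) instead of answering in one forward pass.
The endpoint below is the OpenAI-style local server that tools like
LM Studio expose; the URL, model name, and the DONE-string stop test
are all assumptions for illustration, not a real design.

import requests

API = "http://localhost:1234/v1/chat/completions"   # assumed local server

def ask(messages):
    r = requests.post(API, json={"model": "local-model",
                                 "messages": messages})
    return r.json()["choices"][0]["message"]["content"]

def agent(goal, max_cycles=8):
    history = [{"role": "system",
                "content": "Think step by step. Say DONE when finished."},
               {"role": "user", "content": goal}]
    for _ in range(max_cycles):          # the outer loop the LLM itself lacks
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
        if "DONE" in reply:              # crude halting signal
            break
        history.append({"role": "user", "content": "Continue."})
    return history

for msg in agent("Plan a three-step experiment comparing two fine-tunes."):
    print(msg["role"], ":", msg["content"][:80])

The LLM stays a dumb feed-forward box; the looping and the decision to
stop live in the wrapper, which is exactly the part I want to replace
with something that learns.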
--
You can't out-crazy a Democrat.
#EggCrisis #BlackWinter
White is the new Kulak.
Powers are not rights.