there was some energy around making a network-based inference engine, maybe by modifying deepseek.cpp (don't quite recall why not staying in Python; some concern arose)
motivation for the task got weak; found cinatra as a benchmark leader for C++ web engines (although pico.v was the top! surprised all the C++ HTTP engines were beaten by Java O_O, very curious about this, wondering if it's a high-end test system). never heard of the V language before, but it's interesting that it won a leaderboard
inhibition ended up uncovering a concern somewhat like ... on this 4GB RAM system it might take 15-33GB of network transfer for each forward pass of the model ... [multi-token passes would amortize this ^^]
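a rough back-of-envelope sketch of that concern: if the weights are much bigger than local RAM and get streamed over the network, everything that doesn't fit locally has to be re-fetched on every forward pass, and batching multiple tokens per pass spreads that cost out. the 30 GiB model size and 64-token batch below are illustrative assumptions, not figures from any real setup:

```python
GIB = 1024 ** 3

model_bytes = 30 * GIB   # assumed total weight size (illustrative, mid-range of 15-33GB)
ram_bytes = 4 * GIB      # local RAM usable as a weight cache

# weights that don't fit in local RAM must be re-transferred each pass
per_pass = model_bytes - min(ram_bytes, model_bytes)

# a multi-token pass amortizes the same transfer across the batch
for tokens in (1, 64):
    print(f"{tokens} token(s)/pass -> {per_pass / tokens / GIB:.3f} GiB per token")
```

with one token per pass that's 26 GiB of transfer per token; at 64 tokens per pass it drops to about 0.4 GiB per token, which is why the multi-token angle looked appealing.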
