There's a funny post on Bunnie's blog today (https://www.bunniestudios.com/blog/?p=5018) about learning to use LiteX in place of Vivado for FPGA design. The motivation: Vivado wastes FPGA footprint by rolling in circuitry you don't need. Vivado is given away for free by Xilinx, who would love you to step up to the next larger FPGA anytime, so they have no incentive to optimize the footprint for you. So Bunnie is using LiteX, a Python high-level design tool that outputs low-level designs for Vivado to assemble, letting you skip the Xilinx IP with its non-optional bloat.
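For a sense of what "Python that outputs low-level designs" means: LiteX builds on Migen, where you describe hardware as Python objects and the framework emits plain Verilog for the vendor toolchain to place and route. Here is a toy stand-in for that idea — not LiteX's actual API, just a sketch of the Python-generates-HDL workflow with a made-up helper:

```python
def verilog_counter(name="blinker", width=26):
    """Emit Verilog for a free-running counter whose top bit can drive an LED.

    A toy stand-in (not LiteX/Migen's real API) for what Python-to-HDL
    tools do: build the design in Python, then emit plain Verilog for
    the vendor toolchain (e.g. Vivado) to synthesize, place, and route.
    """
    return f"""\
module {name} (input clk, output led);
    reg [{width - 1}:0] count = 0;
    always @(posedge clk)
        count <= count + 1;
    assign led = count[{width - 1}];
endmodule
"""

print(verilog_counter())
```

Because the output is ordinary Verilog with nothing in it you didn't ask for, the vendor tool only assembles what you specified — which is the footprint argument in a nutshell.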
It seems that this sort of dead code, undead code, zombie code problem is fairly ubiquitous in information processing systems. No matter whose system, there are always things around that don't go away because nobody cared to do anything about them. They always need a clean reboot eventually, or a clean reinstall, or some kind of purge to clear the inevitable cruft of just running too long. So maybe AIs will have molting stages? Or maybe dreaming is the way we purge the cruft in our heads?

-- rec --

On Mon, Oct 30, 2017 at 4:32 PM, Marcus Daniels <[email protected]> wrote:

> My actual question is more like: Is death universal, or is a finite
> lifetime just a sufficient solution found by evolution (and carbon-based
> life)? Must memories be purged for progress, or is it just that they
> _can_ be without particular harm to the species?
>
> There was a piece on 60 Minutes last night about Adolfo Kaminsky, who
> forged thousands of official documents to protect Jews in France. His
> colleagues reflected on their accomplishments and didn't reflect on the
> danger in what they were doing at the time, perhaps because they were so
> young.
>
> It could be that the high-order aspects of wisdom are cognitively too
> costly (operationally) at some point. Diminishing returns on complexity.
> Delays on action are as dangerous as imprudent actions.
>
> -----Original Message-----
> From: Friam [mailto:[email protected]] On Behalf Of gⅼеɳ ☣
> Sent: Monday, October 30, 2017 2:18 PM
> To: FriAM <[email protected]>
> Subject: Re: [FRIAM] death
>
> Good question. But I tend to think the problem is less about plasticity
> and more about specialization. As we've seen, specialized (artificial)
> intelligence is relatively easy; compare termites to humans. So-called
> general intelligence (or universal constructors) is much harder. The
> distance between any old TM and a UTM seems quite large.
>
> Whether, once specialized, an AI can generalize is an open question.
> Will we *grow* general AI? Or will we construct it from scratch to be
> general?
>
> On 10/30/2017 01:12 PM, Marcus Daniels wrote:
> > But will this be true of AIs as well? Assuming that this fossilization
> > occurs, is that a human idiosyncrasy that plasticity reduces? Perhaps it
> > could be treated with drugs, electroshock therapy, stem cells, PTSD
> > medication, etc.?
>
> --
> ☣ gⅼеɳ
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
