A mathematician responded privately to me with the same basic complexity measure suggested by Matt (and with some comments on the computational complexity of finding a minimal DCGoNOR, in the context of DAGoNORs, aka "Circuit Complexity"), in response to my question at math.stackexchange.com titled: Recasting Algorithmic Information In Terms of Finite Directed _Cyclic_ Graphs? <https://math.stackexchange.com/questions/4314444/recasting-algorithmic-information-in-terms-of-finite-directed-cyclic-graphs>
Here's my response to that DCGoNORs complexity measure (essentially, counting the number of edges in the DCG, ignoring the initialization state):

I'm not, at this stage, concerned with measures of the computational complexity of finding DCG circuits of minimal complexity. I'm merely concerned with defining a measure of DCG circuit complexity. Your suggestion of simply counting the number of true entries in the directed adjacency matrix should be adequate to get things off the ground, and were it not for my motivation I would leave it at that. So I think your question for me would be: why go to the trouble of defining a generator that, for each natural number, outputs an isomorphism class of directed adjacency matrices, when simply counting the number of true entries in the directed adjacency matrix would provide an adequate complexity measure?

There are two dimensions to my answer: theoretic and political.

Theoretic: Viewing adjacency as a (probably _VERY_) sparse boolean matrix, for any reasonably large number of edges there is actually a wide range of complexities; e.g., edges randomly scattered throughout such a sparse matrix have a higher complexity than the same number of edges placed along a single column or row.

Political: Although I agree that getting hung up on being overly rigorous about the representation of the directed adjacency matrix is inappropriate at this stage of development of this potentially very rich mathematical frontier, there are times when $1000 bills are left scattered on the ground and no one bothers to pick them up for some reason -- even when their job is, ostensibly, to make money. There is resistance to applying the notion of Kolmogorov Complexity to the natural sciences -- in everything from physics to sociology -- despite Solomonoff's proof being over a half century old and Moore's Law having gone through about 20 doublings.
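The theoretic point can be made concrete with a small sketch. Below, two adjacency matrices with an *identical* edge count are packed into bits and run through zlib as a crude, off-the-shelf proxy for description length (the matrix size, edge count, and use of zlib are my own arbitrary illustration choices, not part of the proposal):

```python
import random
import zlib

def matrix_bytes(edges, n):
    """Pack an n x n boolean adjacency matrix (given as a set of (row, col) edges) into bytes."""
    bits = [0] * (n * n)
    for r, c in edges:
        bits[r * n + c] = 1
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

n, k = 256, 256  # matrix size and edge count (chosen arbitrarily for illustration)
random.seed(0)

# k edges scattered uniformly at random through the sparse matrix...
scattered = set()
while len(scattered) < k:
    scattered.add((random.randrange(n), random.randrange(n)))

# ...versus the same number of edges along a single row.
aligned = {(0, c) for c in range(k)}

# Same number of true entries, but the scattered pattern compresses worse,
# i.e. it admits only a longer (approximate) description.
print(len(zlib.compress(matrix_bytes(scattered, n), 9)))
print(len(zlib.compress(matrix_bytes(aligned, n), 9)))
```

The scattered matrix reliably compresses to more bytes than the aligned one, which is the sense in which a raw edge count collapses a genuine range of complexities.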
Since I proposed the criteria Hutter adopted for his 2006 Prize, there has been very little recognition of its importance, despite the enormous resources pouring into machine learning during the ensuing 15 years and despite Hutter's PhD students founding DeepMind. In diagnosing this resistance, I've identified two layers: 1) People don't _want_ an unbiased model selection criterion, for obvious reasons (people like their lies, including the lies they tell themselves, such as "I am not a liar."), and 2) Among those whose job it is to fight bias, this resistance to unbiased model selection is expressed by nit-picking less-than-relevant details in Algorithmic Information Theory, such as making a mountain out of the choice-of-UTM mole-hill, or a mountain out of the choice-of-resource-limits mole-hill. Both of those "mountains" are mole-hills compared to the half-century of theory, data and machine power available to help us decide questions, the political conflicts over which are threatening another Thirty Years' War.

So, yes, simply counting the number of edges in a DCG would be great, as it reduces "The Argument Surface" open to attack by sophists who currently dismiss KC-approximation as a model selection criterion by portraying the UTM-choice or resource-choice mole-hills as insurmountable mountains. But they aren't being honest with themselves. I want to leave them no wiggle room.

On Fri, Oct 22, 2021 at 7:32 PM James Bowery <[email protected]> wrote:

>
>
> On Fri, Oct 22, 2021 at 1:02 PM Matt Mahoney <[email protected]>
> wrote:
>
>> I think a n-NOR contest could be made practical, at least for a smaller
>> data set like the Calgary corpus (3 MB), if not enwik9, using a state
>> machine made of NOR gates.
>
> Even if I'm overestimating the difficulty of an enwik9 contest, it would
> be best to start with something like the Calgary corpus, if only to
> maximally confront people with the way the DCGoNORs approach to KC nukes
> the specious arguments about the KC additive constant to cross-emulate UTMs.
>
>> A state machine (n1, n2, n3, n4) is specified by the number of input bits
>> n1, state bits n2, gates n3, and output bits n4, followed by a list of n3
>> gates in ascending order starting at n1+n2+1. Gate number k is specified by
>> a list of inputs in the range 1..k-1 and terminated by 0. The last n2 of
>> those gates are the next state. The last n4 gates are also outputs
>> (overlapping the next state). The initial state is all 0 bits. After each
>> cycle, the last n2 next state bits are moved to the current state
>> n1+1..n1+n2, which must not overlap.
>
> That looks right. The very sparse connectivity matrix (implied by your
> "Gate number k...") is probably best encoded as you state, but it does raise
> a relatively trivial question. "Science" and "Bias" have become rhetorical
> moral territories jealously protected by powerful entrenched interests.
> They have every incentive to exploit that triviality to maximum effect --
> but their bias and pseudo-science will be forced to cede significant
> territory.
>
>> Programs that take file input have 9 input bits for the current input
>> byte in 1..8 (lsb first). Bit 9 is set to 1 at end of input and bits 1..8
>> are set to 0. There is one input byte per cycle.
>
> Wouldn't it be reasonable to permit the circuit to have control of the
> input clock?
>
>> Programs that produce file output have 9 output bits in the same format.
>> When the last bit is 1, the remaining bits are ignored and execution is
>> terminated.
>
> Why not go to a 2-connection (clock, data) port (and simply call it quits
> when the last bit of the corpus gets clocked out by the circuit)?
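For concreteness, here is a minimal Python sketch of a simulator for the state machine Matt describes. It is my own hypothetical rendering: each gate is represented as a plain list of input wire indices (rather than the 0-terminated encoding), and the 9-bit file-I/O conventions are omitted. Wires are numbered as in the spec: 1..n1 inputs, n1+1..n1+n2 state, then the gates in ascending order.

```python
def step(n1, n2, gates, inputs, state):
    """Evaluate one cycle: compute every NOR gate, return (next_state, wire_values).

    gates  -- list of gates, each a list of input wire indices in range 1..k-1
    inputs -- n1 input bits for this cycle
    state  -- n2 current-state bits
    """
    vals = [None] + list(inputs) + list(state)  # wires are 1-indexed; vals[0] unused
    for ins in gates:
        # A NOR gate outputs 1 only when every one of its inputs is 0.
        vals.append(0 if any(vals[i] for i in ins) else 1)
    # The last n2 gates are the next state (per the spec).
    next_state = vals[-n2:] if n2 else []
    return next_state, vals

def run(n1, n2, n4, gates, input_stream):
    """Run the machine over a stream of input-bit vectors; collect the n4 output bits per cycle."""
    state = [0] * n2               # initial state is all 0 bits
    outputs = []
    for inp in input_stream:       # one input vector per cycle
        state, vals = step(n1, n2, gates, inp, state)
        outputs.append(vals[-n4:]) # the last n4 gates are the outputs
    return outputs

# Tiny example: a 1-bit toggle. No inputs (n1=0), one state bit (wire 1),
# one gate (wire 2) that NORs the state bit, i.e. inverts it each cycle.
print(run(0, 1, 1, [[1]], [[]] * 4))  # → [[1], [0], [1], [0]]
```

This is only a behavioral sketch; a contest implementation would also need the encoding/decoding of the gate list and the byte-per-cycle I/O convention quoted above.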
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T728994814c1a40a0-M0c6a379a9f21ffe758e57277
Delivery options: https://agi.topicbox.com/groups/agi/subscription
