Right about the narrow "conscious" output bandwidth, but with some remarks. I also applaud another note of yours from some time ago, that "we have known it since the 1950s" or something like that; it could be extended to the 1960s. There have been a lot of rediscoveries or "rehashes", while neither the new discoverers nor the observers knew of the earlier work or paid credit to it.
IMO this particular discovery is one of these "new" ones too (see below): it was one of my own points 22 and 23 years ago about the "disadvantages of humans", but it should be known by default. From the open-access part of the paper, it seems the authors don't emphasize that all these measures are valid only for the selected way of measuring the information content, which is not absolute and is decided by the evaluator-observer.

The minuscule bandwidth of **consciously controlled output, given a particular way of measuring it** - typing in a particular language (known dictionary), with known distributions, with a severely limited set of expected possible actions, etc. - is real, be it 10 or 20 or 50 bits per second. However, it is so only with that way of measuring, with that chosen precision of the actions/"random events": compressed, reduced to keys on the keyboard, with the most probable next characters/presses counted in a few bits, and so on.

On the other hand, it could be measured in another way, which would reverse the results and produce an astronomical bandwidth, while the operation could still be attributed to "consciousness" as an initiator and driver, even though it does not think of the details: in order to do *anything*, or just to keep existing and not dissolve, the body has to control - to keep in an appropriate range and state - all of its constituent parts, one way or another, including atoms and electrons (ATP, chemical reactions ...), with the finest precision at each measurable minimal time step (the Planck scale?).

Also, technically it is not just the body or an individual: depending on where the horizon is cut, it is the Universe, because the environment is also involved and allows the body, the "agent", to do whatever it does; it "collaborates", and the evaluator-observer assumes that the environment will be stable, that the temperature will remain within the required range - "it predicts" that this will hold.
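To make the "depends on the way of measuring" point concrete: the bits-per-second figure for the same typist changes by a factor of several, depending only on which probability model the evaluator-observer assigns to the keystrokes. A minimal sketch (the 5 keys/s typing rate and the 4.1 and 1.0 bits/char figures are illustrative stand-ins, the latter being roughly Shannon's classic estimates for English, not numbers from the discussed paper):

```python
import math

def entropy_bits(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

keys_per_second = 5.0  # an assumed typing rate, for illustration only

# (a) an observer who treats each keystroke as one of 27 equiprobable symbols
uniform = [1 / 27] * 27

for label, bits_per_key in [
    ("uniform over 27 keys", entropy_bits(uniform)),   # ~4.75 bits/key
    ("unigram English model (approx.)", 4.1),          # letter frequencies known
    ("contextual model (Shannon's ~1 bit/char)", 1.0), # strong language model
]:
    print(f"{label}: {bits_per_key * keys_per_second:.1f} bits/s")
```

The better the observer's predictive model, the fewer bits the very same finger movements "carry" - the figure describes the pair (agent, observer), not the agent alone.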
However, this and **any** other prediction of the "external" state is correct **only because the environment is collaborative, or doesn't change in this regard**. If the environment "decides" otherwise, the agent and its "intelligence" are gone without a trace. Thus focusing only on the agent is like computing a partial derivative: selecting one variable and holding a googolplex of other variables constant, which in reality are not constant.

The Universe doesn't understand the compressed bits or characters, so the fingers, hands and arms need to have *all their elementary particles moved*, as well as the eyes, if the typist reads from a reference or looks at the keyboard; the neck motions, the support of the whole body in its pose, the neural signals that pass through the whole brain - i.e. all changes in the whole body.

When the account comes to just 10 bits, it is so because there is a capricious evaluator-observer, who is both *slow* and lacking the sensory input and capacity to capture that other stream, and who in addition is "complex" enough herself that she can decompress, or filter out, all the redundant parts (redundant **only in her selected point of view, in what information she picks, based on her sensors, etc.**). Also, the typist's or agent's behavior is restricted, canalized, pushed into a narrow corridor in order to fit this bandwidth, by an external virtual agent, a "virtual control unit".

The small bandwidth could also be thought of as the amount of "useful" information, but that always depends on the observer-evaluator. For other observers the outputted information is actually 0 bits, all meaningless - as is the whole experiment, since the scale of the bandwidth is already known, etc. The proper evaluation of the entropy, or of the information gain or exchange, requires an exact definition and operation of the whole system: the sender, the receiver, their knowledge/PDFs/expectations, how they change, etc. (...)
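The "0 bits for other observers" case follows directly from the definition of surprisal: the information a message carries is relative to the probability the receiver assigned to it beforehand. A toy illustration (the vocabulary size and the probabilities are made-up numbers, chosen only to show the spread):

```python
import math

def surprisal_bits(p_message):
    """Information a message conveys to a receiver who assigned it probability p."""
    return -math.log2(p_message)

# A hypothetical 8-word message drawn from a 1000-word vocabulary.

# Observer A: uniform prior over words -> ~10 bits per word, ~80 bits total.
print(surprisal_bits((1 / 1000) ** 8))

# Observer B: a strong predictor who assigned the exact message p = 0.9.
print(surprisal_bits(0.9))   # a fraction of a bit: almost nothing learned

# Observer C: already certain of the message (p = 1) -> exactly 0 bits.
print(surprisal_bits(1.0))
```

Same sender, same keystrokes - the "bandwidth" ranges from tens of bits to exactly zero, purely as a function of the receiver's expectations.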
*Man and the Thinking Machine* (An analysis of the possibility of creating a Thinking Machine, and of some disadvantages of man and organic matter compared to it), "The Sacred Computer", T. Arnaudov, 12.2001 (...)

"* Limited ability to remember precise information and to process symbolic information. Most of us cannot always remember accurately even a text of several dozen words* (from one reading), or a few meaningless numbers. People who cope with these "difficult" tasks are often called geniuses and attributed supernatural abilities. Most people read at a speed of up to 30-50 letters per second... or 108-180 thousand per hour." [even the lower end would be fast readers]

* the original in Bulgarian is "tens of", but I see that "dozens" is preferred in English ...

*HOW WOULD I INVEST 1 MILLION WITH THE GREATEST BENEFIT FOR THE DEVELOPMENT OF THE COUNTRY?*, Todor Arnaudov, Plovdiv, Bulgaria, ~6.2003 [The quote is from the "mother of all" modern AI strategies, as far as I know, 14-15 years before the "real ones"; the original is in Bulgarian, machine translated with a few small edits]

Original: https://www.oocities.org/todprog/ese/proekt.htm

TM = Thinking Machine

"(...) A striking example of the potential of the TM is the enormous information flow through its memory, which can be controlled down to a binary digit. A person is capable of outputting, in a second, conscious information described by several dozen bits. This applies to speech, singing, typing, moving the mouse pointer, drawing, playing a musical instrument. In the same second, an ordinary personal computer transfers, through its central processor alone, a billion times more information, over which the machine has complete control. If such a powerful information flow is controlled by reason, the computing hardware will turn into an amazingly perceptive student, who will quickly become a productive creator in all kinds of arts and a tireless scientific worker.
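Stepping out of the quote for a moment, the arithmetic in the two passages above checks out; a quick sanity computation (the rates are the quoted figures, and the resulting CPU-level data flow is only illustrative of the order of magnitude):

```python
# Reading speed: "30-50 letters per second ... or 108-180 thousand per hour"
letters_per_second = (30, 50)
per_hour = tuple(r * 3600 for r in letters_per_second)
print(per_hour)  # (108000, 180000), matching the quote

# Conscious output: "several dozen bits" per second vs. "a billion times more"
human_bits_per_second = 50            # upper end of "several dozen"
machine_factor = 1_000_000_000        # "a billion times more" through the CPU
machine_bits_per_second = human_bits_per_second * machine_factor
print(machine_bits_per_second / 8 / 1e9)  # 6.25 GB/s of machine-controlled flow
```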
Connected to robotic bodies - in the design of which, in fact, it could also participate - the TM will be able to perform physical activities, to lend a "friendly arm" to man and industry, literally. (...)

Since the personality of the TM will be recorded as pure information - on computer media that can be rewritten and stored theoretically forever, unlike the carrier of the human personality, the brain - a machine trained to a certain degree could be reproduced simply by loading a copy of its personality into a new "body", where it will begin to develop independently and learn only what is necessary for the new activity. All the intelligent experience possessed by a previous TM will be easily transferred to another, which will have the personality of its "mother" from birth.

Because the mind of the TM will have a direct high-speed connection to a computing machine - the processors of the TM's own body, or another machine to which the TM is electronically connected as a user - computer design, modeling and, generally speaking, any creativity entrusted to the Machine will be carried out much faster than a person can do it. All input and output devices will be part of the TM's "imagination", and therefore much faster than the inert and slow, mechanically moved input devices that humans currently have to use. The machine will be able to be a programmer too, of course.

If it is given the opportunity to study its own construction in detail, once created, it could contribute to its improvement until it reaches the limits set by the specific "hardware" in which its computer "soul" is embodied. The TM can go even further: the "soul" could improve the "body" - the "electronic flesh" - until it reaches the physical limits.
In this role, the Machine will work as an electronics engineer, searching for new circuit solutions; as a physicist, improving current technologies for the production of integrated circuits and nanotechnology; or as a discoverer of still unknown ways of building machines. (...)

According to my strategy, a research institute would be founded that would bring together computer scientists, engineers, art historians, linguists, philosophers, psychologists, neurologists; translators who speak many languages; creators in various arts - writers and poets, composers and musicians, artists, photographers and film directors. The members of the institute would preferably have knowledge and skills in more than one field, both as scientists and as creators, because the goal of the research would be to discover what is common between all manifestations of reason, between the sciences and the arts. The form of thought is different in the different manifestations of thinking, but its essence, the mechanisms that underlie it, are the same; only the data with which it works changes - words, sounds, images, sequences of images, abstract concepts, etc.

The Institute will also play the role of a "wing" that finds, protects and supports gifted people, in order to aid their development and, if they wish, to benefit from their talent in its research. The Institute will have a software house in which, "among other things", "smart" application software will be produced using the Institute's developments on the path to AGI: programs for automated design, multimedia, word processing, translators, games, and other applications.

The goal of the Institute will be the creation in software of a TM possessing universal capabilities for exchanging information with other computing machines, in particular robotic modules. The robots created by the robotics department will be, in addition to a way to use the IR for physical activities, another means of attracting public attention and advertising the Institute.
After the Thinking Machine is implemented, it will be able to be used in all creative spheres of human activity and in the work of the Institute itself. I assume that after the Discovery and the creation of the TM, running on standard computers, the Institute will "scald itself" and will be able to establish a design department to develop complete new computing systems, adapted specifically to the operation of the Machine. (...)"

....

Related interdisciplinarity was implemented by DeepMind, and Hassabis underlined that in a 2022 interview. It is also done, as a surrogate, with the enormous collections of datasets in all modalities and with multimodal learning, which is precisely the combining of all kinds of representations.

...

The Sacred Computer: Thinking Machine, Creativity and Human Development
https://github.com/twenkid

Join or help the "fabless" institute and the "rolling" online conference "Sacred Computer 2025"

------------------------------------------

Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T599834c0d9f7e4d4-Mdaf913d59fed014c9c548bd1