RE: [agi] Blockchainifying Conscious Awareness

2018-06-19 Thread John Rose
Rob, This is a very insightful and knowledgeable reply and most of your coverage is spot-on. But… Think of it when “databases” were first becoming pursued and popular. I don’t know, say 1990-ish? What was a database then? And think of databases now, their realm of function, for examp

RE: [agi] Re: MindForth is the First Working AGI for robot embodiment.

2018-06-21 Thread John Rose
Ehm, "chunking out code"...that's ah, yeah good way to describe it 😊 I agree. Arthur, you need to elevate yourself man. The Elon Musk's of the world are stealing all the thunder. John > -Original Message- > From: Mike Archbold via AGI > > At least A.T. Murray is in the trenches chunki

RE: [agi] Blockchainifying Conscious Awareness

2018-06-22 Thread John Rose
Are you saying that you can’t have a purely temporal GHZ state blockchain without some spatial entanglement? But the photons used in the earlier blocks are gone, already absorbed. So you are left with entanglement in time. What happens then to the early blocks when tampering with the leading

RE: [agi] The reality onion...

2018-07-23 Thread John Rose
> -Original Message- > From: Steve Richfield via AGI > > John, > The big thing you are missing (besides the absence of trans-dimensional > technology) is that each layer defends itself against inner layers as though > its > life depends on it - which it does. This is what creates and mai

[agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread John Rose
How I'm thinking lately (might be totally wrong, totally obvious, and/or totally annoying to some but it’s interesting): Consciousness Oriented Intelligence (COI) Consciousness is Universal Communications Protocol (UCP) Intelligence is consciousness manifestation AI is a computational conscio

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread John Rose
> -Original Message- > From: Russ Hurlbut via AGI > > 1. Where do you lean regarding the measure of intelligence? - more towards > that of Hutter (the ability to predict the future) or towards > Wissner-Gross/Freer > (causal entropy - sort of a proxy for future opportunities; ref > https:

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-11 Thread John Rose
> -Original Message- > From: Nanograte Knowledge Technologies via AGI > > Is there a truth in all of this? If you cannot see it, then you cannot see it. > Einstein could see it, how the universe operated, except for what happened > beyond the speed of light. There is. When it's looked a

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-11 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > I am definitely not understanding whatever it is you guys are saying. I am > not > opposed to mystical discussions (theories that contain mysteries which are not > explainable using contemporary knowledge and which may turn out to be >

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-12 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > I don't believe that my thermostat is conscious. Or let me taboo words like > "believe" and "conscious". I assign a low probability to the possibility that > my > thermostat has a homunculus or an immortal soul or a little person insi

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-12 Thread John Rose
> -Original Message- > From: Nanograte Knowledge Technologies via AGI > > Challenging a la Haramein? No doubt. But that is what the adventure is all > about. Have we managed to wrap our minds fully round the implications of > Mandelbrot's contribution? And then, there is so much else of s

[agi] Massive Bacteriological Consciousness - Gut Homunculi

2018-09-12 Thread John Rose
I’m tellin’ ya, nobody believes me! More and more research has been conducted on microbial gut intelligence... Then a couple years ago bacteria were scientifically shown to be doing quantum optimization processing. Now we see all kinds of electrical microbiome activity going on in the gut:

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > We could say that everything is conscious. That has the same meaning as > nothing is conscious. But all we are doing is avoiding defining something > that is > really hard to define. Likewise with free will. I disagree. Some things

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-14 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > > There are some complications of the experience of our existence, and those > complications may be explained by the complex processes of mind. > Since we can think we can think about the experience of life and interweave > the strands

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-14 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > When we say that X is more conscious than Y we really mean that X is more like > a human than Y. > > The problem is still there: how to distinguish between a p-zombie and a > conscious being. > > The definition of a p-zombie makes this i

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-14 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > > It's relevant if consciousness is the secret sauce, and if it applies to the > complexity problem. > > Jim is right. I don't believe in magic. > A Recipe for a Theory of Mind: Three pints of AIT (Algorithmic Information Theory) Ale

RE: [agi] Massive Bacteriological Consciousness - Gut Homunculi

2018-09-15 Thread John Rose
> -Original Message- > From: Steve Richfield via AGI > > John von Neumann once noted that the difference between mechanical, > electrical, and chemical processes disappears when the scale becomes small > enough. So, OF COURSE there are electrical phenomena to observe. > Steve Steve, Sort

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-19 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > What do you think qualia is? How would you know if something was > experiencing it? > You could look at qualia from a multi-systems signaling and a compressionist standpoint. They're compressed impressed samples of the environment a

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-21 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > You didn't answer my question. What is qualia? How do I know if monkeys, > fish, insects, human embryos, robots, or thermostats have qualia and how > would they behave differently if they did or did not. What is the test? > Qualia =

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > John, > Can you map something like multipartite entanglement to something more > viable in contemporary computer programming? I mean something simple > enough that even I (and some of the other guys in this group) could > understand? Or

RE: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-28 Thread John Rose
> -Original Message- > From: Nanograte Knowledge Technologies via AGI > > John. considering eternity, what you described is but a finite event. I dare > say, > not only consciousness, but cosmisity. > Until one comes to terms with their true insignificance will they not grasp their tr

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-09 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > Operating on compressed data without having to decompress it is the goal that > I am thinking of so being able to access internal relations would be > important. > There can be some compressed data that does not contain explicit interna
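A minimal sketch of the idea in the post above, assuming a toy run-length-encoded format (the encoding and function names are illustrative, not anything proposed in the thread): a query answered directly against the compressed runs, without decompressing first.

    from typing import List, Tuple

    def rle_encode(data: bytes) -> List[Tuple[int, int]]:
        """Compress a byte string into (value, run_length) pairs."""
        runs: List[List[int]] = []
        for b in data:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1
            else:
                runs.append([b, 1])
        return [(v, n) for v, n in runs]

    def count_byte_compressed(runs: List[Tuple[int, int]], target: int) -> int:
        """Count occurrences of `target` using only the compressed representation."""
        return sum(n for v, n in runs if v == target)

    data = b"aaaabbbaaaacc"
    runs = rle_encode(data)
    assert count_byte_compressed(runs, ord("a")) == data.count(b"a")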

RE: [agi] Comrade AGI

2018-10-10 Thread John Rose
> -Original Message- > From: Steve Richfield via AGI > > All societies need rules and limits, as otherwise, only serial killers would > survive. Saudi Arabia has different rules that work well to keep every child's > living parents together, whereas we have the crap "freedoms" to encourag

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > And if the concept of randomness is called into question then > how do you think entropic extremas are going to hold up? > "Entropic extrema" as in computational resource expense barrier, including chaotic boundaries, too expensive to

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message- > From: Jim Bromer via AGI > > "Randomness" is merely computational distance from agent perspective." > > That is really interesting but why the fixation on the particular > fictionalization? Randomness is computation distance from the agent > perspective? No it is

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-11 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > On Thu, Oct 11, 2018 at 12:38 PM John Rose > wrote: > > OK, what then is between a compression agent's perspective (or any agent > for that matter) and randomness? Including shades of randomness to > r

RE: [agi] Compressed Algorithms that can work on compressed data.

2018-10-12 Thread John Rose
> -Original Message- > From: Matt Mahoney via AGI > > Encrypted data appears random if you don't know the key. But it is not > random because it has a short description (compressed plaintext + key). > Kolmogorov proved that there is no general algorithm to tell the difference. > If > the
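A small illustration of Matt's point, using an off-the-shelf compressor as a crude, computable stand-in for Kolmogorov complexity (my example, not Matt's): a compressor can certify that data has a short description, but a ratio near 1.0 never proves randomness, which matches the no-general-algorithm result.

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        """Compressed size over original size: an upper-bound proxy for complexity."""
        return len(zlib.compress(data, 9)) / len(data)

    structured = b"abc" * 10_000       # short description: "abc" repeated 10,000 times
    random_like = os.urandom(30_000)   # stands in for ciphertext seen without the key

    print(compression_ratio(structured))   # close to 0.0: structure found
    print(compression_ratio(random_like))  # about 1.0: no structure found by zlib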

RE: [agi] My AGI 2019 paper draft

2019-04-30 Thread John Rose
Matt > "The paper looks like a collection of random ideas with no coherent structure or goal" Argh... I love this style of paper whenever YKY publishes something my eyes are on it. So few (if any) are written this way, it's a terse jazz fusion improv of mecho-logical-mathematical thought

RE: [agi] Mens Latina -- 2019-04-28

2019-05-02 Thread John Rose
> -Original Message- > From: A.T. Murray > > For example, the AI might say what means in English, "You are a human > being and I am a person." > > C. The AI may demonstrate activation spreading from one concept to > another concept. > > If you type in "homo" for "human being", the AI ma

[agi] ConscioIntelligent Thinkings

2019-08-23 Thread John Rose
I'm thinking AGI is on the order of 90% consciousness and 10% intelligence. Consciousness I see as Universal Communication Protocol (UCP) and I see consciousness as "Occupying Representation" (OR). Representation being structure (or patterns from a patternist perspective). Then, from panpsychis

RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> -Original Message- > From: Matt Mahoney > > So the hard problem of consciousness is solved. Rats have a thalamus which > controls whether they are in a conscious state or asleep. > > John, is that what you meant by consciousness? Matt, Not sure about the hard problem here but a rat w

RE: [agi] Re: ConscioIntelligent Thinkings

2019-08-24 Thread John Rose
> Matt, > > Not sure about the hard problem here but a rat would have far less > consciousness when sleeping that is for sure 😊 > > Why? Think about the communication model with other objects/agents. > > John Although... I have to say that sometimes when I'm sleeping, lucid dreaming or whateve

Re: [agi] FAO: Senator Reich. Law 1

2019-09-05 Thread John Rose
On Thursday, September 05, 2019, at 9:58 AM, Nanograte Knowledge Technologies wrote: > That's not helping you A.T.Murray ;) Oh wow, Mentifex biography. How sweet. What's next, a movie? LOL (You gotta be F'in kidding me) John

[agi] Re: by successive approximation.

2019-09-08 Thread John Rose
On Saturday, September 07, 2019, at 10:21 AM, Alan Grimes wrote: > Some examples of the limitations of the brain's architecture, include the inability to multiplex mental resources -> ie having a network of dozens of instances while retaining the advantages of having a single knowledge and skill

[agi] ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-13 Thread John Rose
Consciousness mixmuxes structure with protocol with language thus modulating the relationship between symbol complexity and communication complexity in an environment of agents. And conscious agents regulate symbol entropy in effect maintaining a symbol negentropy. The agents route symbols based
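Read concretely (my reading, not a definition from the post), "regulating symbol entropy" is something you can compute over an emitted symbol stream. A minimal sketch: Shannon entropy of the symbols, with negentropy as the gap below the maximum the symbol space allows.

    from collections import Counter
    from math import log2

    def symbol_entropy(symbols: str) -> float:
        """Shannon entropy (bits per symbol) of an observed symbol stream."""
        counts = Counter(symbols)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def symbol_negentropy(symbols: str, alphabet_size: int) -> float:
        """Gap between the maximum entropy for the alphabet and the observed entropy."""
        return log2(alphabet_size) - symbol_entropy(symbols)

    stream = "ABABABABCA"
    print(symbol_entropy(stream))         # observed bits per symbol
    print(symbol_negentropy(stream, 26))  # how far below a 26-symbol maximum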

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-14 Thread John Rose
On Saturday, September 14, 2019, at 12:57 AM, rouncer81 wrote: > Seriously, im starting to get ready to go use all this superfluous > engineering skill ive collected over the last couple of years to go draw up > the schematics for my home personal guillotine system (tm). Ya just don't become one

Re: [agi] whats computer vision anyway

2019-09-14 Thread John Rose
On Wednesday, September 11, 2019, at 8:43 AM, Stefan Reich wrote: > With you, I see zero innovation. No new use case solved, nothing, over the > past, what, 2 years? No forays into anything other than text (vision, > auditory, whatever)? > Actually, Mentifex did contribute something incredibly

Re: [agi] whats computer vision anyway

2019-09-14 Thread John Rose
On Saturday, September 14, 2019, at 6:19 PM, Stefan Reich wrote: > Yeah, I'm sure I should increase my use of Latin variable names. I mean... maybe, but. When you run an obfuscator or minifier on code, what does it do? Removes the human-readable. A minifier minimizes representation. But variable names,
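The "minifier minimizes representation" point can be made concrete with a toy renamer (purely illustrative; real minifiers parse the language rather than using regexes): behavior is preserved while the part written for humans, the names, is stripped away.

    import re
    from typing import List

    def minify_names(source: str, names: List[str]) -> str:
        """Rename each listed identifier to a one-letter name, shrinking the text."""
        short = (chr(c) for c in range(ord("a"), ord("z") + 1))
        for name in names:
            source = re.sub(rf"\b{re.escape(name)}\b", next(short), source)
        return source

    code = "total_price = unit_price * quantity + shipping_cost"
    print(minify_names(code, ["total_price", "unit_price", "quantity", "shipping_cost"]))
    # -> a = b * c + d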

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-15 Thread John Rose
Yeah so, one way is to create a Qualia Flow as an Information Ratchet. Each click of the ratchet can be a discrete experience. The ratchet gets its energy from the motion in the AGI's internal dynamical systems entropy. click click click Then this ticking, when regulated, is a systems signalin

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
On Sunday, September 15, 2019, at 8:32 AM, immortal.discoveries wrote: > John, interesting posts, some of what you say makes sense, you're not far off > (although I would like to see more details). This is just a hypothetical engineering discussion. But to put it more succinctly, is consciousnes

Re: [agi] whats computer vision anyway

2019-09-17 Thread John Rose
On Monday, September 16, 2019, at 12:11 PM, rouncer81 wrote: > yes variables are simple and old,  we dont need them anymore. Sorry, object names :) In some languages everything is an object. The thought was going in the direction of reverse obfuscation...opposite direction of minification.   Com

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
Well then it should be more of a multi-ratchet reflecting the topological entropic/chaotic computational synergy of the internal dynamical multi-systems mapped and bifurcated into full-duplex language transmission. Single ratchet = Morse code. Multi-ratchet = Polyphony (larger symbol space) Jo
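The Morse-code versus polyphony contrast is just channel capacity per event: a single ratchet click carries at most one bit, while a click drawn from a richer symbol space carries log2 of the symbol-space size. A tiny illustration (the numbers are mine, not from the post):

    from math import log2

    def bits_per_click(symbol_space_size: int) -> float:
        """Maximum information carried by one click drawn from the given symbol space."""
        return log2(symbol_space_size)

    print(bits_per_click(2))   # single ratchet, Morse-like: 1.0 bit per click
    print(bits_per_click(64))  # multi-ratchet "polyphony": 6.0 bits per click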

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-17 Thread John Rose
Please try to get this right, it's very important: https://www.youtube.com/watch?v=xsDk5_bktFo John

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread John Rose
On Wednesday, September 18, 2019, at 2:49 AM, korrelan wrote: > https://youtu.be/UcBDSoVs42M Instead of dimension reduction then a multidimensional field of ratchets transmitting a fuller richer conscious experience. A direct coupling at max bandwidth. John -

[agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread John Rose
Allow dimension modulation. Put some dimension control into the protocol layer allowing for requests of dimension adjustment from current transmission level... John

Re: [agi] Re: ConscioIntelligence, Symbol Negentropy in Communication Complexity

2019-09-18 Thread John Rose
On Wednesday, September 18, 2019, at 4:04 PM, Secretary of Trades wrote: > https://www.gzeromedia.com/so-you-want-to-arm-a-proxy-group I don't get it.

[agi] Re: Transformers - update

2019-09-18 Thread John Rose
On Wednesday, September 18, 2019, at 8:14 AM, immortal.discoveries wrote: > https://openai.com/blog/emergent-tool-use/ While entertaining there is absolutely nothing new here related to AGI ??? John

[agi] Re: AGI Research Without Neural Networks

2019-09-19 Thread John Rose
For ancillary like sensory you have to?  For core I don't think neural at all. Not to say neural is not emulated in some way in core... But I think any design has to use architectural optimization or has to be pre-architecturally optimized. John

[agi] Re: Transformers - update

2019-09-19 Thread John Rose
I'm wrong. You're right. Was just hoping for more :) Incremental, team and skills building. Inventing and discovering new ideas while doing that. And when finding something good not releasing it to the public (for safety naturally). John

Re: [agi] Simulation

2019-09-21 Thread John Rose
All four are partially correct. It is a simulation. And you're it. When you die your own private Idaho ends *poof*. This can all be modeled within the framework of conscioIntelligence, CI = UCP + OR. When you are that tabula rasa simuloid in your mother's womb you begin to occupy a represent

Re: [agi] Simulation

2019-09-21 Thread John Rose
On Saturday, September 21, 2019, at 11:01 AM, Stefan Reich wrote: > Interesting thought. In all fairness, we can just not really interact with a > number which doesn't have a finite description. As soon as we do, we pull it > into our finiteness and it stops being infinite. IMO there are only fi

Re: [agi] Simulation

2019-09-22 Thread John Rose
On Saturday, September 21, 2019, at 7:24 PM, rouncer81 wrote: > Time is not the 4th dimension, time is actually powering space.    > (x*y*z)^time. And what's the layer on top of (x*y*z)^time that allows for intelligent interaction and efficiency to be expressed and executed in this physical univ

Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 8:42 AM, korrelan wrote: > Our consciousness is like… just the surface froth, reading between the lines, or the summation of interacting logical pattern recognition processes. That's a very good clear single brain description of it. Thanks for that. I don't thi

Re: [agi] Simulation

2019-09-23 Thread John Rose
On Sunday, September 22, 2019, at 6:48 PM, rouncer81 wrote: > actually no!  it is the power of time.    doing it over time steps is an > exponent worse. Are you thinking along the lines of Konrad Zuse's Rechnender Raum?  I just had to go read some again after you mentioned this :) John

Re: [agi] Simulation

2019-09-24 Thread John Rose
On Monday, September 23, 2019, at 7:43 AM, korrelan wrote: > From the reference/ perspective point of a single intelligence/ brain there are no other brains; we are each a closed system and a different version of you, exists in every other brain. How does ANY brain acting as a pattern reservoir

Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:07 AM, immortal.discoveries wrote: > The brain is a closed system when viewing others Uhm... a "closed system" that views. Not closed then? John

Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-24 Thread John Rose
I'm thinking of a mathematical measure called "What The Fuckedness".  WTF({K, P, Le, ...}), K-Complexity, Perplexity and Logical Expectation. Anything missing? It can predict the expressive pattern on someone’s face when they go and type phrases into Mentifex's website expecting AI. John -

Re: [agi] Simulation

2019-09-24 Thread John Rose
On Tuesday, September 24, 2019, at 7:36 AM, korrelan wrote: "the brain is presented with external patterns" "When you talk to someone" "Take this post as an example; I’m trying to explain a concept" "Does any of the actual visual information you gather" These phrases above, re-read them, are more

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 2:05 PM, korrelan wrote: > The realisation/ understanding that the human brain is closed system, to me… is a first order/ obvious/ primary concept when designing an AGI or in my case a neuromorphic brain simulation. A human brain is merely an instance node on

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Tuesday, September 24, 2019, at 3:34 PM, korrelan wrote: > Reading back up the thread I do seem rather stern or harsh in my opinions, if > I came across this way I apologise.  I didn't think that of you; we shouldn't be overly sensitive and afraid to offend. There is no right to not be offende

Re: [agi] MindForth is the brain for an autonomous robot.

2019-09-27 Thread John Rose
On Wednesday, September 25, 2019, at 7:01 PM, James Bowery wrote: > Yes, what is missing is the parsimony of your measure, since the Perplexity > and Logical Expectation measures have open parameters that if filled properly > reduce to K-Complexity. James, interesting, thanks for making us aware

Re: [agi] Re: The world on the eve of the singularity.

2019-09-27 Thread John Rose
We must first accept and understand that there are intelligence structures bigger than ourselves and some of these structures cannot be fully modeled by one puny human brain. And some structures are vastly inter-generational... and some may be designed or emerged that way across generations to

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 8:59 AM, korrelan wrote: > If the sensory streams from your sensory organs were disconnected what would your experience of reality be?  No sight, sound, tactile or sensory input of any description, how would you play a part/ interact with this wider network you des

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 10:57 AM, immortal.discoveries wrote: > We could say our molecules make the decision korrelan :) And the microbiome bacteria, etc., transmitting through the gut-brain axis could have massive more complexity than the brain. "The gut-brain axis, a bidirectional ne

Re: [agi] Simulation

2019-09-27 Thread John Rose
On Friday, September 27, 2019, at 1:44 PM, immortal.discoveries wrote: > Describing intelligence is easier when ignore the low level molecules. What if it loops? I remember reading a book as a kid where a scientist invented a new powerful microscope, looked into it, and saw himself looking int

Re: [agi] Simulation

2019-09-27 Thread John Rose
Persist as what? Unpersist the sun rising, break the 99.99... % probability that it rises tomorrow. What happens? We burn.

Re: [agi] Simulation

2019-09-28 Thread John Rose
On Saturday, September 28, 2019, at 4:59 AM, immortal.discoveries wrote: > Nodes have been dying ever since they were given life. But the mass is STILL > here. Persistence is futile. We will leave Earth and avoid the sun. You're right. It is a sad state of affairs with the environment...the destruct

Re: [agi] can someone tell me what before means without saying before in it?

2019-09-29 Thread John Rose
On Saturday, September 28, 2019, at 9:26 PM, Ben Goertzel wrote: > making a mathematical version of "Difference and Repetition" in terms of distinction graphs is one of the things on my theory to-do list... https://arxiv.org/abs/1902.00741 Ben, nice paper bravo. We think similar but you're like 1

Re: [agi] can someone tell me what before means without saying before in it?

2019-09-29 Thread John Rose
"The graphtropy of a distinction graph, constructed relative to an observer, is therefore considerable as a measure of how much excessive algorithmic information exists in the system of observations modeled by the distinction graph, relative to the observer. Or to put it more simply, the graphtr

Re: [agi] The Job market.

2019-09-29 Thread John Rose
On Sunday, September 29, 2019, at 3:15 AM, Alan Grimes wrote: > THEY WILL PAY, ALL OF THEM!!! LOL. Hang in there. IMO us engineers get better with age as long as we keep learning, the more you try and fail the wiser you get. Hell I got more than 10 years on ya son and I’m still kickin’ keister! 

Re: [agi] The Job market.

2019-10-02 Thread John Rose
On Wednesday, October 02, 2019, at 1:05 AM, James Bowery wrote: > Harvard University's Jonathan Haidt is so terrified of the truth coming out > that he's actually come out against Occam's Razor > . There are situations where the simplest explanation is

[agi] Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Time makes us think that humans are willfully creating AGI, as if it is in the future, like the immanentizing of the singularity eschaton. Will scientific advances occur at an ever increasing rate? It would have to slow down at a certain point. Has to right? As we approach max compression of all

[agi] Re: Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Heat can up-propagate into symbol and replicate out of there. Energy converts to informational transmission and disentropizes, it's gotta go somewhere right? Even backwards in time as we're predicting.

Re: [agi] The Job market.

2019-10-04 Thread John Rose
On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote: > ANY situation can be one where the most viable _decision_ is to stop the > search for the simplest explanation and _act_ on the simplest explanation you > have found _thus far_.  This is a consequence of the incomputability of >

Re: [agi] The Job market.

2019-10-04 Thread John Rose
On Thursday, October 03, 2019, at 2:08 PM, Stefan Reich wrote: > So venting about constantly being rejected is _yet another reason_ for being > rejected? It’s a vicious feedback loop! We should all vent more… *primal scream ensues*

Re: [agi] The Job market.

2019-10-04 Thread John Rose
On Wednesday, October 02, 2019, at 11:24 AM, James Bowery wrote: > Wolfram!  Well!  Perhaps you should take this up with Hector Zenil > : Interesting:   https://arxiv.org/abs/1608.05972 Yaneer Bar-Yam has produced much good reading also.

Re: [agi] The Job market.

2019-10-04 Thread John Rose
A persona is an operable abstract representation projected to consciointelligent entities for occupation and communication... Hmmm what is an expression of the graphtropy of persona?  Hmmm...  brain engage brain engage brain engage I'm trying to think but nothin' happens!     https://www.youtu

[agi] Re: Transformers - update

2019-10-04 Thread John Rose
OMG why, why that game. I know all the little back doors and hidden compartments... No. I refuse to be involved with this. The hours wasted if I could only get them back. Q3 Arena was history… now it's getting resurrected? Punish the people doing this! --

Re: [agi] The Job market.

2019-10-05 Thread John Rose
On Friday, October 04, 2019, at 12:42 PM, Matt Mahoney wrote: > Evolution is arguably simple, but it required 10^48 DNA copy operations on > 10^37 bits to create human intelligence Simple programs that create apparent complexity are not full representations of that complexity since they don't co
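The gap John is pointing at, between a short description and the apparent complexity of what it produces, is easy to see in a toy case (my example, not from the thread): a few lines generate a chaotic-looking sequence that would take far more symbols to write out than the program itself.

    def logistic_map(x0: float, r: float = 4.0, n: int = 20):
        """Yield n iterates of the chaotic logistic map starting from x0."""
        x = x0
        for _ in range(n):
            x = r * x * (1.0 - x)
            yield x

    # A tiny program (short description) whose output already looks complex.
    print([round(x, 4) for x in logistic_map(0.123)])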

Re: [agi] The Job market.

2019-10-06 Thread John Rose
On Saturday, October 05, 2019, at 8:01 PM, Matt Mahoney wrote: > The complexity of an object is the fewest number of symbols needed to > describe it in some language. It has nothing to do with computation time, > energy, or consciousness. It is only the simplicity of a theory that > determines i

Re: [agi] The Job market.

2019-10-07 Thread John Rose
On Sunday, October 06, 2019, at 12:12 PM, Matt Mahoney wrote: > Do numbers exist? It's a philosophical question. Philosophy is arguing about > the meanings of words. What do you mean by "exist"? > That's an interesting question.  There is a max number of digits of Pi possibly calculable within this unive

Re: [agi] The Job market.

2019-10-08 Thread John Rose
Son of a gluon! Pitkänen thinking similar 'cept quantum model. I'm thinking algebraic entanglement on contemporary computers: http://vixra.org/abs/.0109 Proof of concept might be to calculate Pi faster on a group of computers with algebraic entanglement versus traditional signaling methodo

Re: [agi] The Job market.

2019-10-08 Thread John Rose
On Tuesday, October 08, 2019, at 10:13 AM, Matt Mahoney wrote: > Did you understand the paper? I'm thinking using thermodynamic buzzwords to > promote Christian values and crackpot theories of consciousness stored in the > phosphate bonds of DNA. Matt are you saying that Christianity doesn't exi

Re: [agi] The Job market.

2019-10-08 Thread John Rose
Here we go: http://www.tgdtheory.fi/pdfpool/intsysc.pdf

[agi] International AI Issues

2019-10-08 Thread John Rose
Interesting: https://www.marketwatch.com/story/us-to-blacklist-chinese-artificial-intelligence-companies-2019-10-07 Also interesting: https://www.youtube.com/watch?v=4cwXifDaCjE

Re: [agi] The Job market.

2019-10-08 Thread John Rose
Few are at Pitkänen's level IMO. I find it incredibly stimulating reading his stuff... Depends on your interests.  I still don't understand, though, why many researchers have an almost irrational fear of associating consciousness with intelligence and reject anyone who publishes in that directio

Re: [agi] The Job market.

2019-10-09 Thread John Rose
On Tuesday, October 08, 2019, at 7:15 PM, Matt Mahoney wrote: > So the only thing that needs explaining is this feeling that you are > conscious. N!  It's way more than first-person experience, you always focus on that. Think outside the skull. Consciousness is the glue that holds us all tog

Re: [agi] International AI Issues

2019-10-09 Thread John Rose
On Wednesday, October 09, 2019, at 2:57 PM, Steve Richfield wrote: > This will get MUCH worse. Yes, and we need to somehow avoid another thousand-year totalitarian situation before it starts. Hmmm how to do that

[agi] Re: Noise and Confirmation Bias

2019-10-09 Thread John Rose
I often wonder, what is the term for compression that is both lossy and lossless?

[agi] Re: Noise and Confirmation Bias

2019-10-09 Thread John Rose
Oh I thought it would be called lossylossless.

[agi] Re: Noise and Confirmation Bias

2019-10-09 Thread John Rose
distill out structure with a cloud around it

[agi] Re: Noise and Confirmation Bias

2019-10-09 Thread John Rose
or the act of forgetting some but remembering what's important

[agi] Re: Noise and Confirmation Bias

2019-10-09 Thread John Rose
hmm maybe it's like consciousness - Oops! won't go there doh!

Re: [agi] International AI Issues

2019-10-09 Thread John Rose
LOL: https://www.dailymail.co.uk/sciencetech/article-7547413/Student-project-places-one-persons-face-thwart-facial-recognition-software.html

Re: [agi] International AI Issues

2019-10-09 Thread John Rose
On Wednesday, October 09, 2019, at 10:31 PM, Steve Richfield wrote: > It appears to be possible to build a device that emulates a cell tower, but > only to certain cell phones, which would provide for SECURE communications. > Also, such a device could connect to cell towers to relay calls, thereb

Re: [agi] International AI Issues

2019-10-09 Thread John Rose
P2P WiFi network on mobile devices, no towers, then run hyper secure decentralized blockchain comm like this https://www.backchannel.live/#about Have fun tracing that suckas!

Re: [agi] Re: Noise and Confirmation Bias

2019-10-10 Thread John Rose
How about resolving an alphabet or a language out of data where it is not obvious.

Re: [agi] Re: Noise and Confirmation Bias

2019-10-10 Thread John Rose
then the whole chunk of data maps to one new symbol
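One concrete version of "the whole chunk of data maps to one new symbol" is byte-pair-style merging (a rough sketch of a standard technique, not anyone's specific proposal here): repeatedly replace the most frequent adjacent pair with a fresh symbol, and an alphabet of chunks falls out of the data.

    from collections import Counter
    from typing import List

    def merge_most_frequent_pair(seq: List[str], new_symbol: str):
        """Replace the most frequent adjacent pair in seq with new_symbol."""
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            return seq, None
        (a, b), _ = pairs.most_common(1)[0]
        merged: List[str] = []
        i = 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                merged.append(new_symbol)   # the chunk (a, b) now maps to one symbol
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        return merged, (new_symbol, (a, b))

    seq, rule = merge_most_frequent_pair(list("abababcabab"), "AB")
    print(seq, rule)  # ['AB', 'AB', 'AB', 'c', 'AB', 'AB'] ('AB', ('a', 'b'))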

Re: [agi] Re: Noise and Confirmation Bias

2019-10-10 Thread John Rose
the problem with boolean logics is that they're boolean

Re: [agi] International AI Issues

2019-10-10 Thread John Rose
On Thursday, October 10, 2019, at 8:18 AM, Matt Mahoney wrote: > Backchannel uses TOR so it requires an internet connection. They use a modified TOR, but I have a relationship with them so I will avoid discussing... What do you think of QUIC which is rolling out: https://en.wikipedia.org/wiki/QU

Re: [agi] Re: Noise and Confirmation Bias

2019-10-10 Thread John Rose
Compressing a neutrosophic or fuzzy logic interpretation into boolean logic. That might be lossylosslessness.
