Russ -
I also have a lot of life in my immediate environment, with a 1-year-old
puppy and kitty who grew up wrestling their way through the house and
who take one another's cues when alerting to the birds outside the
window and the moths and flies silly enough to be available to these
two little terrorists. The wild things too, perhaps even more so, not
to mention the networks of relationships within a
domus/guild/habitat/ecosystem.
While I was watching my two little dogs run around our house, it struck
me that a feature distinguishing living from non-living entities is
the apparent effortlessness with which the living ones navigate the world.
Imagine how difficult it would be to build a robot that could
navigate the world so effortlessly. To make the comparison a bit
simpler, imagine how difficult it would be to build a robotic cockroach.
Mark Tilden's BEAM Robotics and Nervous Nets were pretty impressive in
this regard back 20+ years ago:
https://en.wikipedia.org/wiki/BEAM_robotics
https://www.sciencedirect.com/science/article/abs/pii/S0921889003001520
https://www.rssc.org/uploads/4/5/1/1/45118641/buildfest_handout_2006_06_10.pdf
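For anyone who hasn't poked at these: the core trick in Tilden's
nervous nets, as I understand it, is a ring of simple delay elements
("Nv neurons") with a pulse or two circulating, each element driving a
motor phase while it is active; the gait is a property of the loop
rather than of any central controller. A crude software caricature
(the timings and wiring are my own guesses, not Tilden's circuits):

# Crude software caricature of a Tilden-style Nv ring: a loop of delay
# elements passing a single pulse around; each element "drives" one motor
# phase while its timer is running. Timings and wiring are illustrative
# guesses, not a model of any real BEAM circuit.

def nv_ring(delays=(3, 2, 3, 2), steps=24):
    """delays[i] = how many ticks element i stays active once triggered."""
    n = len(delays)
    timers = [0] * n
    timers[0] = delays[0]                      # inject one pulse into element 0
    for t in range(steps):
        active = [i for i, timer in enumerate(timers) if timer > 0]
        print(f"t={t:2d}  motors on: {active}")
        finishing = [i for i in range(n) if timers[i] == 1]
        timers = [max(0, timer - 1) for timer in timers]
        for i in finishing:                    # a finishing element triggers its neighbor
            timers[(i + 1) % n] = delays[(i + 1) % n]

if __name__ == "__main__":
    nv_ring()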
I'm not describing these to contradict your main point, but rather to
counterpoint it? And then there is Theo Jansen's Strandbeest!
https://www.strandbeest.com/
There is something fascinating about autonomous "agents" operating
outside the context of the von Neumann "Universal Computing" paradigm.
I'm of a mind to believe that collectives of interacting
sub-universal-complexity elements can execute Universal Computation.
https://en.wikipedia.org/wiki/Swarm_robotics
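A cheap software analogue of that spirit (nothing to do with swarm
hardware, just the smallest example I know): each cell in an elementary
cellular automaton sees only itself and its two neighbors, yet Rule 110
is provably Turing-complete as a collective. A minimal sketch in
Python, purely illustrative:

# Minimal illustration: Rule 110, an elementary cellular automaton.
# Each cell is far below "universal" complexity (it reads only 3 bits),
# yet the collective dynamics are known to be Turing-complete.

RULE = 110  # Wolfram rule number

def step(cells):
    """Advance one generation; the row wraps around at the ends."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, generations=30):
    cells = [0] * width
    cells[-1] = 1                     # single live cell to seed the pattern
    for _ in range(generations):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

if __name__ == "__main__":
    run()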
Others here can speak *much* more directly to the arc of
development/evolution of the SFI/Swarm.org work going back 30-ish
years? (I don't know of any specific *hardware* instantiation/bridge
beyond Tilden et al.'s work, but there may well be one.) There is probably
a swarm of tiny origami boats driven by surface-tension gradients
somewhere in Sausalito Bay, doing performance art or perhaps plotting a
world takeover?
https://www.youtube.com/watch?v=OU76wwmg9Hs
https://www.swarm.org/
https://sfi-edu.s3.amazonaws.com/sfi-edu/production/uploads/sfi-com/dev/uploads/filer/8a/2a/8a2ae001-9ad5-43e6-b7e3-4d951223e9e8/96-06-042.pdf
I started down the rabbit hole of a Google search and got overwhelmed
by how popular the invocation of "Swarm" has become. Oh well.
Strange Find in Australia: a doppelnymic artist, Christopher Langton:
https://www.youtube.com/watch?v=SJ27Rphomsg
and a wonderfully apropos-for-the-NFT-moment essay on Virtual Art which
references Chris a lot:
https://textinart.files.wordpress.com/2019/12/leonardo-oliver-grau-virtual-art_-from-illusion-to-immersion-2003-mit-press.pdf
When I asked ChatGPT whether anyone has built a robotic cockroach, it
came up with these examples. (I haven't checked to see whether these
are real projects.)
* DASH: The Dynamic Autonomous Sprawled Hexapod (DASH) robot, developed
  at the University of California, Berkeley, was inspired by the rapid
  locomotion of cockroaches. It has six legs and can move quickly on
  various terrains using a simple control mechanism.
* Harvard RoboBee: Although not specifically modeled after a cockroach,
  the Harvard RoboBee project aims to develop small, insect-like robots.
  These tiny flying robots are inspired by the mechanics and flight
  capabilities of insects and demonstrate similar agility and
  maneuverability.
* iSprawl: The iSprawl robot, developed at the University of
  California, Berkeley, was inspired by cockroaches' ability to squeeze
  through small spaces. It uses a compliant body design and six legs to
  navigate tight and cluttered environments.
* VelociRoACH: Developed at the University of California, Berkeley, the
  VelociRoACH is a fast-running robot designed to mimic the high-speed
  locomotion of cockroaches. It utilizes a legged design and has
  demonstrated impressive speed and agility.
These mainly explore locomotion. Besides locomotion,
cockroaches notice when someone enters an area where they are exposed.
They quickly scuttle off to some hiding place. How do they sense the
presence of a new being? How do they know where the hiding places are?
How do they know how to move in the right direction? How do they know
how to avoid small obstacles and fires? Etc.
One can argue that these capabilities are hard-wired in. But that
doesn't make it any easier. These are still capabilities they have,
and ones that would be a challenge to build.
I became amazed at how well-connected living entities are to their
environments. They quickly and easily extract and use information from
their environment that is important to their survival.
Man-made robots have nowhere near that level of embeddedness and
environmental integration.
Was it Rodney Brooks who said that we should build that sort of
connectedness before worrying about building intelligence into our
robots? Today that struck me as an important insight.
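Brooks's idea, as I remember it (his subsumption architecture, from the
"Elephants Don't Play Chess" era), was to layer simple, sensor-coupled
reflexes and let the higher layers override the lower ones, with no
world model anywhere. A back-of-the-envelope sketch of what a
cockroach-ish "scuttle for cover" reflex stack might look like; every
sensor and actuator name here is made up for illustration:

# Hedged sketch of a subsumption-style reflex stack in the spirit of Brooks:
# low layers run all the time, higher layers take over when their sensory
# trigger fires. Every sensor and actuator here is hypothetical.

from dataclasses import dataclass

@dataclass
class Senses:
    light_level: float             # sudden brightness ~ "someone flipped the light on"
    nearest_cover_bearing: float   # radians, toward remembered/felt darkness
    antenna_contact: bool          # touching an obstacle

def wander(s):
    """Layer 0: default behavior, amble forward slowly."""
    return ("forward", 0.2)

def avoid(s):
    """Layer 1: if the antennae touch something, turn away."""
    if s.antenna_contact:
        return ("turn", 1.0)
    return None

def flee_to_cover(s):
    """Layer 2: exposed in the light -> sprint toward cover."""
    if s.light_level > 0.8:
        return ("sprint", s.nearest_cover_bearing)
    return None

LAYERS = [flee_to_cover, avoid, wander]   # highest priority first

def control(s):
    """Return the command from the highest-priority layer that fires."""
    for layer in LAYERS:
        command = layer(s)
        if command is not None:
            return command

# Example: control(Senses(light_level=0.9, nearest_cover_bearing=2.4, antenna_contact=False))
# -> ('sprint', 2.4)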
I do agree that robotics/AI is a *long way* (though perhaps on a
different time-metric than ours?) from "life as we know it", but some
of the AI/Robot-dysphoria comes *from* the "as we know it" clause...
which can range from something as benign as the "uncanny valley"
experience to something on the order of a literal, or merely
*literary*, "gray goo" scenario.
https://www.diggitmagazine.com/blog/generative-ai-and-uncanny-valley-and-call-action
https://www.theatlantic.com/technology/archive/2023/03/ai-chatgpt-writing-language-models/673318/
Mumble,
- Steve
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .