Vancouver’s False Creek – Two mornings of sun and fog

© 2016 Photos by David Huer

Are we becoming the AI we should be afraid of?

This is a re-post of the second part of 9 Minds for the Future, positing “synthesizing horizon-mindedness” as the locus driving the expansion and success of our species.

Recently, John Battelle posted an essay asking: Is AI The Worst Mistake In Human History?

“One of the most intriguing public discussions to emerge over the past year is humanity’s wrestling match with the threat and promise of artificial intelligence. AI has long lurked in our collective consciousness — negatively so, if we’re to take Hollywood movie plots as our guide — but its recent and very real advances are driving critical conversations about the future not only of our economy, but of humanity’s very existence.

In May 2014, the world received a wakeup call from famed physicist Stephen Hawking. Together with three respected AI researchers, the world’s most renowned scientist warned that the commercially-driven creation of intelligent machines could be “potentially our worst mistake in history.” Comparing the impact of AI on humanity to the arrival of “a superior alien species,” Hawking and his co-authors found humanity’s current state of preparedness deeply wanting. “Although we are facing potentially the best or worst thing ever to happen to humanity,” they wrote, “little serious research is devoted to these issues outside small nonprofit institutes.””

The fundamental question is asked by Max Tegmark, a founder of the Future of Life Institute:

“It’s a race between the growing power of the technology, and the growing wisdom we need to manage it…Right now, almost all the resources tend to go into growing the power of the tech.” Who determines what is “good”? We are just now grappling with the very real possibility that we might create a force more powerful than ourselves. Now is the time to ask ourselves — how do we get ready?

I read this shortly after reading about the Brexit Leave campaign, post-referendum. The “leaders” are backpedaling. They opened Pandora’s box. They unleashed the dragon. But they had NO plan. As a species we do this regularly. We avoid loss by avoiding thinking about it until the threat is upon us. We prevaricate. We fiddle while Rome burns.

What is our plan?

And for me:

> Are we becoming the AI we should be afraid of?

> Are Horizon Mind and Horizon Intelligence both the singularly personal and the collective intelligence of our species?

David Huer


This is a concept for how Horizon Intelligence develops . . .


When our children are born, they look out upon a strange and scary place.

All thinking is internal . . .

[Image: Reframing space, 1 of 5]

The questing mind moves out from that internal space

Learning to involve sensory feedback: a mother’s breast milk, being singed, standing and falling, toy-making and using and breaking. Shitting and stinking, and eating boogers, and saying “No!”.

A sensory (feeling-seeing-scenting-tasting-thinking) feedback loop . . .

[Image: Reframing space, 2 of 5]

But then, the questing human mind extends that feedback loop.

And here are the interesting questions . . .

Does the questing mind extend that feedback loop beyond others (mum, dad, sibling, grandma, the family dog) to wider horizons? Seeking to frame the horizon as a future that can be re-framed, changed, to whatever we want it to be?

This “horizon re-framing mind”, this Horizon Intelligence: perhaps it is the vista-seeking mind that gives our species the bigger perspective?

 

[Image: Reframing space, 3 of 5]

And perhaps it is Horizon Intelligence that synthesizes all of the other intelligences into one? Making the sum of the parts into that which makes us aware? That makes us planners and doers of our own destiny? [see 9 Minds… for the proposed list]

 

[Image: Reframing space, 4 of 5]

And if so, is it this intrinsic quality of thinking-being-contemplating-doingness that Information Technology kills? The “thrumming guitar pluck”, the quiet humming golden thread, that IT muffles and snuffs out? The missing quality that we cannot replicate?

[Image: Reframing space, 5 of 5]

Are we losing our Horizon Intelligence?

And if we are, are we becoming the Artificial Intelligence we should be afraid of?

— David Huer



Images:
All other images and artwork: © 2015 David Huer. Photo is of Sombrio Beach on the west coast of Vancouver Island.

Parasuits – uplifting humanity, uplifting industry

Parasuit R&D – a competitive edge?

[Image: Julian Garnier’s CSS 3D Solar System on codepen.io]

The amazingly good CSS 3D Solar System coding at codepen.io is by @JulianGarnier. Click the picture to see the code in action! Use the controls at right to change display options.

When reading about IIT Madras graduate Naga Naresh Karutura, I do not see a “double amputee”. Instead, I see high potential to be an ISRO astronaut flight-specialist candidate.

[Image: Naga Naresh Karutura, parasuit]

* Software engineer at Alphabet (aka Google).
* Professional problem-solver.
* Eternally dogged optimist.
* Amputean°

In micro-gravity, Parasuit°-enabled Amputean° and Paraplegian° Astronauts may have the competitive edge. Industries and cultures embracing the perspective shift gain competitive edge, too.

[Image: Parasuits, enabling R&D]


° I’ve coined these words, having noticed they somehow convey a sense of uplifting. Everyone needs an uplift, now and then.


Images:

Codepen.io image by Julian Garnier.

Shieldship proposal: http://davehuer.com/bcitfolio/innoprojects/inno-shieldship.html – the basis of this proposal of mine from 1993/4: NEO delta-V is a technically and profitably usable resource.

Constellation EVA Spacesuit: NASA: http://www.nasa.gov/pdf/246726main_ConstellationSpaceSuitSystemBriefing.pdf
(Image modified with removal of lower extremities)

Mr. Naga Naresh Karutura: http://social.yourstory.com/2015/11/naga-naresh-karutura/

Astronaut Double Amputeans and Paraplegians?

It’s all about perspective.

And tenacity.

And determination.

And character.

[Image: Tom Jones reaching for lab umbilicals, STS-098]

After NASA astronaut Scott Kelly and Russian cosmonaut Mikhail Kornienko of Roscosmos got home from a year in space this past spring, the US Congress began examining whether to provide lifetime health benefits to astronauts. Many are ex-military and get benefits that way. NASA will monitor recipients for long-duration mission health planning.
 
At industrial design school, one of my human factors projects was a spider-like astrogeology exoskeleton, designed so geologists could move along a cliff face looking at strata. And the thinking fed into iteration #1 of my design thesis: one of the first social web wearables, a performance tool for whitewater slalom athletes. But a bigger aspect of the thinking keeps coming back to me: a question that might help address the dilemma of bone loss. A question that keeps coming back after doing WarriorHealth CombatCare and finding out about the fine work done at Walter Reed National Military Medical Center in Bethesda, MD.

[Image: USMC Corporal Todd Love]

Is a veteran with paraplegia or no legs the natural spacecraft driver?

Since bone density loss is a key barrier to long-term low gravity living…aren’t technically-trained veteran paraplegians and double amputees great candidates for the astronaut corps? Does a fighter pilot have the discipline, skill and temperament to be a launch driver? Does an armored division tanker have the skills to be an in-flight systems specialist?

Does USMC Corporal Todd Love (an incredibly inspirational guy! seen here) really need legs to operate in micro-gravity, when he might only need wheels on the ground? (As you can see, he does not need legs whatsoever).

I ask you…

[Images: Constellation spacesuit, original and Paraplegian° modification]

Could this approach create uplifting new opportunities to serve and thrive, in a way that makes an unavoidable SCI extraordinarily valuable?

Aren’t two-legged people naturally less abled in the spaceflight environment?

Who is the more-natural space-athlete?

Who is the more-natural astronaut?

David Huer


Numbers

US veterans with Spinal Cord Injury (SCI) as of 2008: 26,000 veterans
http://www.military.com/benefits/veterans-health-care/veterans-with-spinal-cord-injury-disorders.html

US citizens with SCI: between 240,000 and 337,000 people
New injuries per year, as of 2015: 12,500 people
http://www.sci-info-pages.com/facts.html

Latest EVA suit made by Oceaneering Inc. (Houston, TX):
http://www.nasa.gov/pdf/246726main_ConstellationSpaceSuitSystemBriefing.pdf

Interesting design/safety aspects
* Effects on bone mass/density and on body functions
* Pant legs + boots removed: aperture and material needs cut by ~40%, with 5 apertures (head, left & right arm, left & right leg) reduced to 3
* Electronic components & new designs for torso & new “thighboots”
* “Thighboots” reduce the dangers of entanglement & provide push-off tasking as needed
* External prosthetics designed to attach to thighboots

Healthcare research outcome:
Could NASA, the VA, and DoD assess the impact of spaceflight on SCI to help society groundside?


Images:

US Astronaut Tom Jones (STS-129): https://skywalking1.files.wordpress.com/2009/11/tom-reaching-for-lab-umbilicals-sts098-330-0071.jpg

USMC Corporal Todd Love and Team X-T.R.E.M.E. competing in The Spartan Race, Leesburg, VA, 2012: http://www.dailymail.co.uk/news/article-2195897/Triple-amputee-veteran-completes-grueling-10-5-mile-endurance-race-called-The-Beast-hours-honor-fallen-U-S-soldiers.html

Constellation EVA Spacesuit: NASA http://www.nasa.gov/pdf/246726main_ConstellationSpaceSuitSystemBriefing.pdf
(Image modified with removal of lower extremities)

Could Cloudbox Mimics improve the naturalness of machine-learning?

Creating a “Cloudbox Mimic”
to map rhizome growth choices, as a self-comprehended ‘hypothesis-testing’ learning tool of ever-enlarging complexity

Would ‘asymmetric logic’ help machine-learners practice natural learning?

[Image: Cloudboxing, fig. a]

In 2014, I developed the Cloudboxing© thinking technique, teaching myself to stitch together a set of cognitive cloud datapoints to create a place to study the building blocks of coding language: to learn exactly what code was and where it could be located in my data set. That is, I used my first cognitive language (Liquid Membraning) to translate coding language into the “building blocks” of Liquid Membraning. See Project #5 at http://davehuer.com/solving-wicked-problems/

Lately, in between work, consulting, and venturing, I’ve been thinking about machine learning and Google’s DeepMind project, and wondering whether the “flatness” of programmed teaching creates limits to the learning process. For example, whilst reading the Google team’s “Teaching Machines to Read and Comprehend” article (http://arxiv.org/pdf/1506.03340v1.pdf), I wondered:

Could we enlarge the possibilities, using spatial constructs to teach multidimensional choice-making?

[Image: Creating a cloudbox to mimic rhizome growth choices]

This could be a software construct, or a physical object [such as a transparent polymer block, where imaging cameras record choice-making at pre-determined XYZ coordinates to ensure the locations of choices are accurately mapped (especially helpful when there are multiple choices at one juncture)].
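As a software construct, one minimal sketch of this idea is a grid-shaped log of choice events at pre-determined XYZ coordinates. All names here are hypothetical illustrations, not part of the original concept:

```python
from dataclasses import dataclass, field

@dataclass
class Cloudbox:
    """A discretized XYZ volume that logs choice events at known junctures."""
    size: tuple                               # (nx, ny, nz) grid dimensions
    events: list = field(default_factory=list)

    def record_choice(self, xyz, options, chosen):
        """Log which option was taken at a pre-determined coordinate."""
        assert all(0 <= c < s for c, s in zip(xyz, self.size)), "outside the box"
        self.events.append({"xyz": xyz, "options": options, "chosen": chosen})

    def choices_at(self, xyz):
        """All recorded choices at one juncture (multiple choices can overlap)."""
        return [e for e in self.events if e["xyz"] == xyz]

box = Cloudbox(size=(10, 10, 10))
box.record_choice((2, 3, 1), options=["up", "left"], chosen="up")
box.record_choice((2, 3, 1), options=["up", "left"], chosen="left")
print(len(box.choices_at((2, 3, 1))))  # → 2: two choices mapped at one juncture
```

Keeping every event with its coordinate, rather than only the final path, is what lets overlapping choices at a single juncture be told apart later.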

Encapsulating and organizing defined space for machine-learned self-comprehension mimics the “cloudboxing” technique.

And, it mimics the natural self-programmed logic of self-learning…a novel teaching tool for the machine-learning entity:

  • Creating a set of challenges through 3-dimensional terrain that mimics pre-defined/pre-mapped subterranean tunnels
  • Creating an opportunity to dimensionally map an emulated (or actual) entity growing through the tunnel system
  • Studying the polar coordinates of the entity traversing the pre-defined space(s)

1) What about using a rhizome?

[Image: Jiaogulan rhizome]

. . . Using a natural entity teaches a machine-learning entity to mimic natural learning.

Using a plant creates the possibility that we can map choice-making, using attractants such as H2O and minerals, as a mimic for conscious entities developing learned behaviour.
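A toy sketch of that mapping idea, assuming a simple 2D grid of attractant concentrations (the names and the grid are my illustrative assumptions; a real rig would use imaging data): a growing tip repeatedly moves to the neighbouring cell richest in “water”, and the sequence of moves is the choice record.

```python
def grow(attractant, start, steps):
    """Follow an attractant gradient on a 2D grid, logging the path of choices."""
    path = [start]
    x, y = start
    for _ in range(steps):
        # candidate cells: the four neighbours that exist in the field
        options = [(x + dx, y + dy)
                   for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                   if (x + dx, y + dy) in attractant]
        if not options:
            break
        x, y = max(options, key=lambda p: attractant[p])  # pick the richest cell
        path.append((x, y))
    return path

# A tiny 3x3 field whose "water" concentration rises toward (2, 2)
water = {(i, j): i + j for i in range(3) for j in range(3)}
print(grow(water, start=(0, 0), steps=4))  # path climbs the gradient to (2, 2)
```

The logged path is exactly the kind of baseline data set the next question assumes: every juncture, every option seen, and the option taken.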

 

2) Once you have a defined baseline data set, could machines learn better if blocked and shunted by an induced stutter?

[Image: Using stuttering blocks to teach choice/decision-making]

Perhaps learning by stuttering and non-stuttering might produce interesting data?

With a stuttering event as the baseline, perhaps the program would learn both to overcome obstacles in the learning process and, as the object of the lesson, to stop stuttering. This could produce a host of interesting possibilities and implications.
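One way to make the induced-stutter idea concrete (purely illustrative; the “stutter” here is just a random block applied to the chosen move): run the same goal-seeking walker with and without induced stuttering and compare the effort each needs.

```python
import random

def walk(goal, stutter_rate, seed=0):
    """Walk along a line toward `goal`; with probability `stutter_rate`
    the chosen step is blocked (an induced stutter) and the walker retries."""
    rng = random.Random(seed)
    pos, steps = 0, 0
    while pos < goal:
        steps += 1
        if rng.random() < stutter_rate:
            continue  # induced stutter: the move is blocked this tick
        pos += 1
    return steps

smooth = walk(goal=20, stutter_rate=0.0)
stuttered = walk(goal=20, stutter_rate=0.4)
print(smooth, stuttered)  # the stuttered run always needs at least as many steps
```

Comparing the two step counts gives a measurable “cost of stuttering” that a learner could then be rewarded for driving down.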

3) Things get incredibly interesting if the program eventually attempts to produce choice options outside the available options . . .


Note: These ideas continue the conceptual work of WarriorHealth CombatCare, re-purposing the anti-stuttering Choral Speech device SpeechEasy for Combat PTSD treatment. The research proposal for that work is here: https://www.researchgate.net/profile/David_Huer


Images:

Jiaogulan rhizome: own photo (“Eigenes Foto”) by Jens Rusch, 29 August 2014, CC Attribution-Share Alike 3.0 Germany license.

Drawings: David Huer © 2014-2015