9 Minds for the Future

Howard Gardner misses four mindful intelligences when he talks about the 5 Minds for the Future that he believes are “the specific cognitive abilities that will be sought and cultivated by leaders in the years ahead”.

Mr. Gardner’s list:

The Disciplinary Mind (DM): the mastery of major schools of thought, including science, mathematics, and history, and of at least one professional craft.

The Synthesizing Mind (SM): the ability to integrate ideas from different disciplines or spheres into a coherent whole and to communicate that integration to others.

The Creating Mind (CM): the capacity to uncover and clarify new problems, questions, and phenomena.

The Respectful Mind (RM): awareness of and appreciation for differences among human beings and human groups.

The Ethical Mind (EM): fulfillment of one’s responsibilities as a worker and as a citizen.

Los_Angeles_Pollution-cropped

Photo by DAVID ILIFF. License: CC-BY-SA 3.0

But this is a scientific model of the world: an idealized model of a civilized world of thinkers atop the hierarchy of civilization. It is not the world we live in. Not the world our children face:

A warming world, with melting ice caps, ice-free polar seas, and an interrupted tropic-arctic ocean heat conveyor belt. A world of sea-level rise, drowned cities, and desperate populations. A world of rising starvation and denuded oceans. And refugees and internment camps. This is the world that every one of our grandchildren faces. A world needing additional forms of Mindfulness if we are to succeed in restoring the biosphere. Here are four to add to the mix:

The Warrior Mind (WM): combining strategic, tactical, and artistic sensibilities. Miyamoto Musashi is the archetype. And my personal example is a US Marine. A man whose best friend died on the battlefield. A man who honoured that friend by not forgetting, and instead helping to set up a foundation for wounded warriors in his memory. This is a mind that frames its actions with honour. We need this mind.

The Profiting Mind (PM): looking for ways to earn profit in all its forms (financial, triple bottom line, personal). We need minds like this, too. To re-frame capitalism so it works for every species that inhabits the Earth.

The Earth Mind (EaM): awareness of and appreciation for the differences among the species of the Earth, human and non-human, and the interrelatedness of all species in the biosphere we call home. We need this mind, too. Because this mind recognizes that all of us are one connected system, a system that is failing because we have brought the ecosystem out of balance.

And the Horizon Mind (HM): the capacity not just to imagine the future, but to re-frame that future to make it better, and to figure out the way from here to there. We need this mind to see where we could go, and to cut away every old way of thinking that got us into the hole we are in now.


At times, I wonder if a Horizon Mind–and Horizon Intelligence–synthesizes all of the others? And whether Horizon Intelligence is both singularly personal and the collective intelligence of our species?

And whether Horizon Intelligence develops like this . . .?


When our children are born, they look out upon a strange and scary place.

All thinking is internal . . .

david huer-reframing space-update-001crop

The questing mind moves out from that internal space

Learning to involve sensory feedback: a mother’s breast milk, being singed, standing and falling, toy-making and using and breaking. Shitting and stinking, and eating boogers, and saying “No!”.

A sensory (feeling-seeing-scenting-tasting-thinking) feedback loop . . .

david huer-reframing space-update-002crop

But then, the questing human mind extends that feedback loop.

And here are the interesting questions . . .

Does the questing mind extend that feedback beyond others (mum, dad, sibling, grandma, the family dog) to wider horizons? Seeking to frame the horizon as a future that can be re-framed, changed, to whatever we want it to be?

This “horizon re-framing mind”, this Horizon Intelligence: perhaps it is the vista-seeking mind that gives our species its bigger perspective?

 

david huer-reframing space-update-003crop

And perhaps, it is Horizon Intelligence that synthesizes all of the other intelligences into one? Making the sum of the parts that which makes us aware? That makes us planners and doers of our own destiny?

 

david huer-reframing space-update-004crop

And if so, is it this intrinsic quality of thinking-being-contemplating-doingness that Information Technology kills? The “thrumming guitar pluck”, the quiet humming golden thread–that IT muffles and snuffs out?  The missing quality that we cannot replicate?

david huer-reframing space-update-005crop

Are we losing our Horizon Intelligence?

And if we are, are we becoming the Artificial Intelligence we should be afraid of?

— David Huer



Images:

5 Minds for the Future: Book cover via Amazon Books.
Calligraphy by Miyamoto Musashi. Public Domain: Mr. Granger
Los_Angeles_Pollution: Photo by DAVID ILIFF. License: CC-BY-SA 3.0
All other images and artwork: © 2015 David Huer. Photo is of Sombrio Beach on the west coast of Vancouver Island.

The Unmentioned Details of the Entrepreneurial Journey

Having read Peter Baskerville's brilliantly dispassionate, no-nonsense answer to “Why do most café startups fail?” on Quora this morning, I thought my readers might like to see a re-post of a 2010 cartoon about the perils of venturing anew.

Never give up! Never Surrender!

Best for the day and this New Year. 

01 January 2015

– David Huer

huer-unmentioned-details-jan2015-001

 

Cutting DeepMind’s data error/loss rate

I have been reading about Google’s DeepMind “Neural Turing Machine” at [link] MIT Tech Review and have a suggestion regarding the loss rate:

(Quote:) The DeepMind work involves first constructing the device and then putting it through its paces. Their experiments consist of a number of tests to see whether, having trained a Neural Turing Machine to perform a certain task, it could then extend this ability to bigger or more complex tasks. “For example, we were curious to see if a network that had been trained to copy sequences of length up to 20 could copy a sequence of length 100 with no further training,” say Graves and co.

It turns out that the neural Turing machine learns to copy sequences of lengths up to 20 more or less perfectly. And it then copies sequences of lengths 30 and 50 with very few mistakes. For a sequence of length 120, errors begin to creep in, including one error in which a single term is duplicated and so pushes all of the following terms one step back. “Despite being subjectively close to a correct copy, this leads to a high loss,” say the team.
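The duplication error in the quote is worth unpacking: under a strict position-by-position loss, one extra symbol early in the sequence misaligns every term that follows it, even though a single edit would repair the whole copy. Here is a toy sketch in plain Python (my own illustration, not DeepMind's code; the two function names are mine) showing the gap between the two ways of scoring the same "subjectively close" copy:

```python
# Toy illustration: one duplicated term makes a copy "subjectively close"
# yet expensive under a strict position-by-position comparison.

def positionwise_errors(target, output):
    """Count mismatches under strict alignment, plus any length difference."""
    return sum(t != o for t, o in zip(target, output)) + abs(len(target) - len(output))

def edit_distance(a, b):
    """Levenshtein distance: minimum insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

target = list(range(20))
output = target[:6] + target[5:]  # the term at index 5 is duplicated

# The duplication misaligns every position after it...
print(positionwise_errors(target, output))  # 15
# ...yet a single deletion would repair the entire sequence.
print(edit_distance(target, output))  # 1
```

That 15-versus-1 gap is one way to read the article's point about a high loss on a nearly correct copy: a loss that scored errors by how cheaply they could be repaired, rather than by raw positional mismatch, would value this output very differently.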

Could we assign a positive value to each error and a negative value to each absolutely correct copy, and then develop a reducing error rate from the positive value rate?

Also, would Synomal Superpositional Clouds (SSCs) help assign high value to errors? There is writing about SSCs here.

The learning brain experiences the wicked problem of survival every moment — and for this process perhaps error-minimizing may be more important than exact copying?  

by David Huer


Image by space-science-society