Could Cloudbox Mimics improve the naturalness of machine learning?

Creating a “Cloudbox Mimic”
to map rhizome growth choices, as a self-comprehended ‘hypothesis-testing’ learning tool of ever-enlarging complexity

Would ‘asymmetric logic’ help machine-learners practice natural learning?

In 2014, I developed the Cloudboxing© thinking technique: teaching myself to stitch together a set of cognitive cloud datapoints to create a place to study the building blocks of coding language, and to learn exactly what code was and where it could be located in my data set; i.e., using my first cognitive language (Liquid Membraning) to translate coding language into the “building blocks” of Liquid Membraning language. See Project #5 at http://davehuer.com/solving-wicked-problems/

Lately, in between work, consulting, and venturing, I’ve been thinking about machine learning and Google’s DeepMind project, and wondering whether the “flatness” of programmed teaching creates limits to the learning process. The question came to mind, for example, whilst reading the Google team’s “Teaching Machines to Read and Comprehend” paper: http://arxiv.org/pdf/1506.03340v1.pdf

Could we enlarge the possibilities, using spatial constructs to teach multidimensional choice-making?

creating a cloud-box to mimic rhizome growth choices

This could be a software construct, or a physical object [such as a transparent polymer block, where imaging cameras record choice-making at pre-determined XYZ coordinates to ensure the locations of choices are accurately mapped (especially helpful when there are multiple choices at one juncture)].
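
As a rough illustration only, a choice-making event inside the box might be logged something like this (the ChoiceEvent and CloudBoxLog structures, and all of their field names, are my own assumptions for the sketch, not part of any existing system):

```python
# A minimal sketch (Python) of logging choice events at pre-determined XYZ
# coordinates. Every name here is illustrative, not an existing system.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ChoiceEvent:
    """One branching decision observed at a calibrated XYZ coordinate."""
    position: Tuple[float, float, float]        # camera-calibrated (x, y, z) of the juncture
    options: List[Tuple[float, float, float]]   # candidate directions available there
    chosen: int                                 # index of the option actually taken
    timestamp: float                            # seconds since the experiment started

@dataclass
class CloudBoxLog:
    """Accumulates every observed choice so the growth path can be replayed later."""
    events: List[ChoiceEvent] = field(default_factory=list)

    def record(self, event: ChoiceEvent) -> None:
        self.events.append(event)

    def path(self) -> List[Tuple[float, float, float]]:
        """Return the junctures in the order the choices were made."""
        return [e.position for e in sorted(self.events, key=lambda e: e.timestamp)]
```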

Encapsulating and organizing a defined space for machine-learned self-comprehension mimics the “cloudboxing” technique.

And it mimics the natural, self-programmed logic of self-learning…a novel teaching tool for the machine-learning entity:

  • Creating a set of challenges through 3-dimensional terrain that mimics pre-defined/pre-mapped subterranean tunnels
  • Creating an opportunity to dimensionally map an emulated (or actual) entity growing through the tunnel system
  • Studying the polar coordinates of the entity traversing the pre-defined space(s)
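
For the last point, the logged Cartesian junctures could be re-expressed in spherical (polar) coordinates relative to the box’s origin, so the traversal reads as a sequence of radii and angles. The conversion below is only a sketch of that idea:

```python
# A minimal sketch (Python) of converting logged XYZ positions into spherical
# coordinates (r, theta, phi) for studying the traversal. Illustrative only.

import math
from typing import List, Tuple

def to_spherical(x: float, y: float, z: float) -> Tuple[float, float, float]:
    """Return (r, theta, phi): radius, inclination from the +z axis, azimuth in the xy-plane."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def traversal_in_polar(path: List[Tuple[float, float, float]]) -> List[Tuple[float, float, float]]:
    """Map each recorded juncture of the growth path into spherical coordinates."""
    return [to_spherical(x, y, z) for (x, y, z) in path]

# Example: a short path through three junctures of the tunnel system
print(traversal_in_polar([(1.0, 0.0, 0.0), (1.0, 1.0, 0.5), (2.0, 1.0, 1.0)]))
```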

1) What about using a rhizome?

. . . Using a natural entity teaches a machine-learning entity to mimic natural learning.

Using a plant creates the possibility that we can map choice-making, using attractants such as H2O and minerals, as a mimic for conscious entities developing learned behaviour.
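
As a toy illustration of what that mapping might look like, the sketch below weighs candidate growth directions by the pull of nearby attractants and picks one with probability proportional to that pull. The attractant model, the decay rule, and the weighting are assumptions I am making purely for illustration:

```python
# A minimal sketch (Python) of attractant-driven choice-making at a tunnel juncture.
# The decay law and the sampling rule are illustrative assumptions.

import math
import random
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

def attraction(pos: Point, attractants: Dict[Point, float]) -> float:
    """Total pull at a position: each attractant's strength decays with squared distance."""
    total = 0.0
    for source, strength in attractants.items():
        d = math.dist(pos, source)
        total += strength / (1.0 + d * d)
    return total

def choose_direction(junction: Point, directions: List[Point],
                     attractants: Dict[Point, float]) -> Point:
    """Pick a growth direction with probability proportional to the pull at the step it leads to."""
    candidates = [tuple(j + d for j, d in zip(junction, direction)) for direction in directions]
    weights = [attraction(c, attractants) for c in candidates]
    return random.choices(directions, weights=weights, k=1)[0]

# Example: a water source at (5, 0, 0) biases growth toward the +x tunnel
sources = {(5.0, 0.0, 0.0): 1.0}
print(choose_direction((0.0, 0.0, 0.0),
                       [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                       sources))
```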


2) Once you have a defined baseline data set, could machines learn better if blocked and shunted by an induced stutter?

using stuttering blocks to teach choice-decision-making

Perhaps comparing learning with and without the stutter might produce interesting data?

By making the stuttering event the baseline, perhaps the program will use it both to overcome obstacles to the learning process and to pursue the object of the lesson itself: learning not to stutter. This could produce a host of interesting possibilities and implications.
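
To make the idea concrete, here is a toy sketch of the stutter as a baseline condition: a simple random walker whose chosen move is blocked (shunted) with some probability, so runs with and without the induced stutter can be compared. The task, the walker, and the stutter rate are all my own illustrative assumptions, not anything DeepMind has built:

```python
# A minimal sketch (Python) of an induced "stutter": with some probability the
# chosen move is blocked and the step is wasted. Illustrative assumptions throughout.

import random

GOAL = 10           # reach position 10 on a number line
STUTTER_RATE = 0.3  # probability that a chosen move is blocked and shunted

def run_episode(stutter: bool, max_steps: int = 500) -> int:
    """Return how many steps a simple random walker needs to reach the goal."""
    pos, steps = 0, 0
    while pos < GOAL and steps < max_steps:
        steps += 1
        move = random.choice([+1, -1])
        if stutter and random.random() < STUTTER_RATE:
            continue            # the stutter: the move is blocked, try again next step
        pos = max(0, pos + move)
    return steps

# Compare the stuttering baseline against non-stuttering runs
stutter_runs = [run_episode(stutter=True) for _ in range(1000)]
free_runs = [run_episode(stutter=False) for _ in range(1000)]
print("with stutter:", sum(stutter_runs) / len(stutter_runs),
      "without:", sum(free_runs) / len(free_runs))
```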

3) Things get incredibly interesting if the program eventually attempts to produce choice options outside the available options . . .


Note: These ideas continue the conceptual work of WarriorHealth CombatCare, re-purposing the anti-stuttering Choral Speech device SpeechEasy for Combat PTSD treatment. The research proposal for that work is here: https://www.researchgate.net/profile/David_Huer


Images:

Jiaogulan-Rhizome: Own work (own photo) by Jens Rusch, 29 August 2014, CC Attribution-Share Alike 3.0 Germany license.

Drawings: David Huer © 2014-2015

Cutting DeepMind’s data error/loss rate

I have been reading about Google’s DeepMind “Neural Turing Machine” at MIT Tech Review [link] and have a suggestion regarding the loss rate:

(Quote:) The DeepMind work involves first constructing the device and then putting it through its paces. Their experiments consist of a number of tests to see whether, having trained a Neural Turing Machine to perform a certain task, it could then extend this ability to bigger or more complex tasks. “For example, we were curious to see if a network that had been trained to copy sequences of length up to 20 could copy a sequence of length 100 with no further training,” say Graves and co.

It turns out that the neural Turing machine learns to copy sequences of lengths up to 20 more or less perfectly. And it then copies sequences of lengths 30 and 50 with very few mistakes. For a sequence of length 120, errors begin to creep in, including one error in which a single term is duplicated and so pushes all of the following terms one step back. “Despite being subjectively close to a correct copy, this leads to a high loss,” say the team.

Could we assign a positive value to each error and a negative value to each absolutely correct copy, and then develop a reducing error rate from the positive value rate?
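
As a sketch of what that asymmetric scoring could look like: each error in a copied sequence earns a positive value, a perfect copy earns a negative one, and the running positive-value rate becomes the quantity to drive downward as training progresses. The constants and the per-sequence comparison below are my own illustrative assumptions:

```python
# A minimal sketch (Python) of valuing errors positively and exact copies negatively.
# ERROR_VALUE, PERFECT_VALUE, and the comparison rule are illustrative assumptions.

from typing import List, Sequence

ERROR_VALUE = +1.0    # each mistaken or missing item adds positive value
PERFECT_VALUE = -1.0  # an absolutely correct copy earns a negative value

def score_copy(target: Sequence, copy: Sequence) -> float:
    """Positive score proportional to the number of errors; negative if the copy is exact."""
    errors = sum(1 for a, b in zip(target, copy) if a != b)
    errors += abs(len(target) - len(copy))   # missing or extra items also count as errors
    return PERFECT_VALUE if errors == 0 else ERROR_VALUE * errors

def positive_value_rate(scores: List[float]) -> float:
    """Share of total score contributed by errors: the rate we would try to reduce."""
    positive = sum(s for s in scores if s > 0)
    total = sum(abs(s) for s in scores) or 1.0
    return positive / total

# Example: one perfect copy, and one copy where a duplicated term pushes the rest back
scores = [score_copy("ABCDE", "ABCDE"), score_copy("ABCDE", "ABBCD")]
print(scores, positive_value_rate(scores))
```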

Also, would Synomal Superpositional Clouds (SSCs) help assign high value to errors? There is writing about SSCs here.

The learning brain experiences the wicked problem of survival every moment, and for this process perhaps error-minimizing is more important than exact copying?

by David Huer


Image by space-science-society