Cutting DeepMind’s data error/loss rate

I have been reading about Google DeepMind's "Neural Turing Machine" at MIT Tech Review [link] and have a suggestion regarding the loss rate:

(Quote:) The DeepMind work involves first constructing the device and then putting it through its paces. Their experiments consist of a number of tests to see whether, having trained a Neural Turing Machine to perform a certain task, it could then extend this ability to bigger or more complex tasks. “For example, we were curious to see if a network that had been trained to copy sequences of length up to 20 could copy a sequence of length 100 with no further training,” say Graves and co.

It turns out that the neural Turing machine learns to copy sequences of lengths up to 20 more or less perfectly. And it then copies sequences of lengths 30 and 50 with very few mistakes. For a sequence of length 120, errors begin to creep in, including one error in which a single term is duplicated and so pushes all of the following terms one step back. "Despite being subjectively close to a correct copy, this leads to a high loss," say the team.
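
To see why a single duplicated term is so costly, here is a minimal sketch (my own illustration, not code from the paper) comparing a position-by-position error count, which is roughly what a per-timestep copy loss penalizes, with an edit distance that better matches the intuition of being "subjectively close": one inserted term shifts every later position, so half the sequence is scored as wrong even though only one edit separates the two sequences.

```python
# Illustration only: why one duplicated term yields a high position-wise loss.
# The sequences, the duplication point, and the metrics are my assumptions.

def per_position_errors(target, output):
    """Count positions where the output disagrees with the target, position by position."""
    return sum(1 for t, o in zip(target, output) if t != o)

def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions, or substitutions."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[len(b)]

target = list(range(120))                          # a length-120 sequence to copy
output = target[:60] + [target[59]] + target[60:]  # one term duplicated, pushing the rest back

print(per_position_errors(target, output))  # 60 -- every position after the duplicate is "wrong"
print(edit_distance(target, output))        # 1  -- only a single insertion separates the two
```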

Could we assign a positive value to each error and a negative value to each absolutely correct copy, and then use the rate of positive values as an error rate to be driven down?
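
As a hypothetical sketch of that suggestion (my own reading of it, not anything from the DeepMind work): score each copy attempt +1 if it contains any error and -1 if it is exact, and treat the running rate of positive scores as the quantity to reduce.

```python
# Hypothetical sketch of the scoring idea above; the scoring scheme and the
# example data are assumptions, not part of the Neural Turing Machine paper.

def copy_score(target, output):
    """+1 for an erroneous copy, -1 for an absolutely correct copy."""
    return 1 if list(output) != list(target) else -1

def positive_value_rate(scores):
    """Fraction of copy attempts that earned a positive (error) score."""
    return sum(1 for s in scores if s > 0) / len(scores)

# Ten copy attempts against the same target, two of which contain mistakes.
target = [1, 2, 3, 4]
outputs = [[1, 2, 3, 4]] * 8 + [[1, 2, 2, 4], [1, 2, 3]]

scores = [copy_score(target, o) for o in outputs]
print(scores)                       # eight -1s followed by two +1s
print(positive_value_rate(scores))  # 0.2 -- the rate one would try to drive toward zero
```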

Also, would Synomal Superpositional Clouds (SSCs) help assign high value to errors? There is writing about SSCs here.

The learning brain faces the wicked problem of survival at every moment, and for that process, might error-minimizing be more important than exact copying?

by David Huer


Image by space-science-society