Data Compression in the Brain

This article, “How the brain assigns objects to categories”, describes a great example of how multiple, segmented neural networks accomplish the fine art of “data compression”. Evolutionarily speaking, this is a “cheap and dirty trick”, but it works!

Once the generic category is formed in the prefrontal cortex, the striatum is free to focus on other details.  The first few exemplars of a category may linger in memory somewhere, but new exemplars are likely ignored.

What if the striatum could not forget?  What if the prefrontal cortex were not present to absorb recurrent exemplars?


Memory, Adaptation and Entropy

I will write more in the coming weeks and months about the various types of memory a life form may leverage in order to adapt to its environment.  An interesting article from ScienceDaily illustrates how epigenetics, those chemical changes which alter the way DNA is processed (or not processed) in our cells, provides an organism with an adaptation subsystem that helps it better fit its environment.

Adaptation cannot occur without memory.  Organisms, including plants, leverage many forms of memory.  Other than chemical and physical construction, perhaps the most important characteristic which differentiates kinds of memories is the informational entropy capacity of those memories.  Memory systems with higher entropy capacities can assimilate larger informational variety.  As the informational variety (entropy) capacity of a memory system increases, so rises the organism's potential to adapt to a greater number of environmental conditions.  That is, the higher the entropy capacity, the higher the potential utility of the adaptive system.
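As a rough illustration of what “entropy capacity” means here, assuming Shannon's formulation of information entropy: a memory that can distinguish only two states can hold at most one bit of variety per symbol, while one that distinguishes eight states can hold up to three. A minimal sketch (the sequences and function names are my own, purely illustrative):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of an observed sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A memory limited to two distinct states carries at most 1 bit of
# variety per symbol; eight distinct states allow up to 3 bits.
low_variety = ["on", "off"] * 4
high_variety = list("abcdefgh")

print(shannon_entropy(low_variety))   # 1.0
print(shannon_entropy(high_variety))  # 3.0
```

The point of the sketch is only that more distinguishable states means more assimilable variety, and hence, in the terms above, more potential adaptive utility.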

From the article,

Epigenetic memory comes in various guises, but one important form involves histones — the proteins around which DNA is wrapped. Particular chemical modifications can be attached to histones and these modifications can then affect the expression of nearby genes, turning them on or off. These modifications can be inherited by daughter cells, when the cells divide, and if they occur in the cells that form gametes (e.g. sperm in mammals or pollen in plants) then they can also pass on to offspring.

I will also illustrate in the coming weeks and months that adaptive system utility is not merely a function of higher information entropy capacity.  Adaptive system utility can also be extended by the system’s ability to “clean house”, “collect the garbage” and reduce information variety when the system has become saturated.

Delegating Our Precious Memories to the Cloud

An interesting set of results published in MIT Technology Review,

In short, if we think some information will be on the internet later, we are likely to not bother remembering that information today.

One way to interpret the results of the study is that, where we have a chance to jettison information, we do.  Hopefully, “jettison” means to “compress” information into a handy rule of thumb, word, concept or theory which we reuse with efficiency, rather than to reuse the original data set over and over again in all its unwieldy bulk. Sometimes, however, we can be conned into simply erasing or otherwise strongly de-prioritizing memories without bothering to create a useful summary of what we’ve lost.  This supports the idea that our memory erasure (or de-prioritization) mechanisms are separate from our symbol creation mechanisms.  This would make sense from an incremental, evolutionary perspective.  It would also suggest that I should not assume my “transaction model” of cognition represents the working of a single mechanism.

It is interesting that one of the researchers (Wegner) refers to this as “transactive memory” (this might be related to transactional analysis in psychology).

Heuristics Over Logic

We have evolved to favor heuristics over logic precisely because we have evolved to make decisions:

To make a decision is to reduce data in a problem domain to some compressed form, call it a concept or a word, then reuse that form in the future.  If we had to use all data we had ever learned to adapt to every environmental change, we would keep slowing down as we learned more, eventually to be crippled by the ambiguity of the data we had collected over time.  While this behavior helps us to maximize the utility of the brain system we have, it also leaves us with a glaring hole: we may fail to realize that the use of a specific word, heuristic or concept is invalid or otherwise fraught with some kind of risk.
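The idea of reducing a problem domain to a compressed, reusable form can be sketched in code. A minimal, hypothetical example (the names, feature vectors and categories are my own illustration, not from the article): many exemplars are averaged into a single prototype, the raw exemplars can then be discarded, and future decisions reuse only the prototype.

```python
# "Decision as compression": collapse many exemplars into one prototype
# (a centroid), then classify new items against prototypes alone.

def make_prototype(exemplars):
    """Compress a list of feature vectors into one average vector."""
    n = len(exemplars)
    dims = len(exemplars[0])
    return tuple(sum(e[d] for e in exemplars) / n for d in range(dims))

def classify(item, prototypes):
    """Assign item to the nearest prototype by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist(item, prototypes[name]))

# Whole exemplar sets collapse into two small summaries; the originals
# need never be consulted again -- which is exactly the "glaring hole":
# if the prototypes are bad, every future decision inherits the error.
prototypes = {
    "small": make_prototype([(1.0, 1.2), (0.9, 1.1), (1.1, 0.8)]),
    "large": make_prototype([(5.0, 5.2), (4.8, 5.1), (5.2, 4.9)]),
}
print(classify((1.0, 1.0), prototypes))  # small
```

Note the trade-off the passage describes: classification stays fast no matter how many exemplars were originally seen, but the information needed to second-guess the prototype is gone.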

Remember, decision-making and concept formation are one-way trips.  Information is discarded.  This means we had better make good decisions the first time because we might not be able to easily give ourselves second chances to reconsider.

Update 20 July 2011: See also,

It is Always a People Problem. Always.

There are no such things as technology problems, only people problems.

No technology can build itself, nor use itself, nor correct its own problems.  Even self-replicating machines, built using any technology in use (or even in conception) today, would merely execute the delayed choice of their builders.  Consider the case of a man, eager to protect his home against theft, who installs an anti-theft device which would kill any unwanted intruder, perhaps with a bullet to the head.  The homeowner’s device is commonly called a booby trap.  One day, while the homeowner is away, an intruder enters the home and is killed.  Is the homeowner responsible?  You betcha!  The homeowner may claim they are not responsible because they did not pull the trigger directly, but in the end they made a choice to apply extreme prejudice to any intruder and they developed a device to execute that delayed choice.  The homeowner’s booby trap did not kill the intruder, the homeowner did.  Every action of any technology, including any act of construction, any act of repair, or any act of use, is ultimately the extended action of human beings.

No technology is a perfect fit for any problem and all technologies come with trade-offs associated with their use.  Even survival comes with its own set of trade-offs.  It is the responsibility of human beings to understand their problems to the best of their abilities, to understand the trade-offs associated with the technology options before them, and to choose appropriate technologies wisely.  Trade-off balancing does not happen on its own.  Humans are the ultimate arbiters of which technology problems they choose to live with.

If all humans were to vanish from this Universe tomorrow, there would be no human problems of any kind.  Human technologies would instantly cease being human technologies and would merely exist as artifacts of matter like any other.  At the same instant of Universal human extinction, all “problems” would also similarly vanish.

This is not merely an academic exercise in ethics.  The implications of failing to understand this point can be tremendous.  If the homeowner in my delayed choice example had understood his culpability ahead of time, would he have been so eager to create his intruder-killing device?  The lack of understanding of the concept of delayed choice leads, in business, law and politics, to a class of problems called moral hazards.  Failure to understand this critical point about technology, in particular computing technology, can cause some people to impart “magical” qualities to technologies which the technologies do not have, which can skew expectations, and can lead to project and business failure.

No, no, no.  The only kinds of problems which exist in this world are people problems, by definition.  If you doubt that, then find a way to kill all of humanity right now and watch all problems simply vanish away the moment before you and I cease to be.

Current Activities

I know, I know, this blog has been a little quiet.  I have been involved lately with fiction writing and maybe even a little game design.

Two short stories:

  • “My Wife”
  • “The Frog of Truth”

Now writing chapter 4 of my novel:

  • “Mercedes 10”

The I/T architecture ethics book?  I am in the research stage with cognitive informatics, decision theory and behavioral economics. Understanding how and why human beings make the decisions they do is critical in software application design as well as business design, yet I am astounded at how little this set of related phenomena is understood by software and business designers.  I would say that, in some cases, the lapse is downright criminal.

Cognitive Entropy and Cognitive Informatics

In regards to my thoughts on cognitive irreversibility, I think the extant research favors the term “cognitive entropy”.  I have a lot of reading to do, but I am not yet sure if my particular thoughts have been explicitly addressed.

An interesting paper, here:

Also, here:

Apparently, the IEEE has an interest group on Cognitive Informatics.  See also, the International Journal of Cognitive Informatics and Natural Intelligence,