Memory, Irreversibility and Transactions

A “system” is a finite set of memory components interrelated through causative event maps.

Phew, that was a mouthful!  What does that mean?

Memory is the ability of matter to change state and maintain that state for a non-zero period of time.  At the smallest scales of existence, atoms have memory when, for instance, chemical changes influence the electron configuration of those atoms.  The ability of paper to hold graphite markings throughout its lifetime is also a form of memory.

An event is a directional transfer of energy from one memory component to another, from source to target, in a way that induces a state change in the target which lasts for a non-zero period of time.  A transfer counts as an event only if it alters the memory configuration of its target.  An event map is a set of source/target associations.  Causality is the study of the effects of event maps upon their state-absorbing targets.
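
To make these definitions concrete, here is a minimal sketch in Python (my own illustration; the names MemoryComponent and fire_event are hypothetical, not part of any formal model):

    class MemoryComponent:
        """Matter that can change state and hold that state for a non-zero time."""
        def __init__(self, name: str, state: int = 0):
            self.name, self.state = name, state

    def fire_event(source: MemoryComponent, target: MemoryComponent, energy: int) -> bool:
        """Directional transfer from source to target; the transfer counts as an
        event only if it leaves a lasting state change in the target."""
        before = target.state
        target.state += energy
        return target.state != before

    pencil = MemoryComponent("pencil")
    paper = MemoryComponent("paper")
    event_map = [(pencil, paper)]             # an event map: source/target associations
    for source, target in event_map:
        fire_event(source, target, energy=1)  # the paper now "remembers" the marking
    print(paper.state)                        # 1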

To study a system is to study a well-defined, finite set of memory components and the causative event maps which affect those components.  For every system under study, everything outside of that system constitutes what we call the system’s environment.  Causative events flow from system to environment, and from environment to system, composing a causative event map called a feedback loop.

Entropy is the degree to which a system has been affected by its causative event map.  Low entropy implies that a system has “room” to absorb new state changes in an unambiguous way.  A set of aligned, untoppled dominoes has low entropy.  High positive entropy implies that a system has attained a degree of ambiguity with regard to its ability to absorb specific kinds of changes.  A set of toppled dominoes has a high degree of entropy relative to “toppling” events.  One can attempt to topple already-toppled dominoes, but the result is ambiguous in that it is more difficult to leave evidence of a toppling event (a finger push) than it was prior to toppling.  Negative entropy is a condition in which a system is to some degree “reset” so that it can once again, unambiguously, absorb more events than it could before.  To induce negative entropy into a system of toppled dominoes is to set them back up again to be retoppled.
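
The domino example can be turned into a toy model (again, my own sketch with hypothetical names), in which entropy is the fraction of components that can no longer unambiguously absorb a toppling event:

    from dataclasses import dataclass, field

    @dataclass
    class DominoRow:
        toppled: list = field(default_factory=lambda: [False] * 10)

        def entropy(self) -> float:
            """Fraction of dominoes already toppled: 0.0 is low entropy, 1.0 is saturated."""
            return sum(self.toppled) / len(self.toppled)

        def topple(self, i: int) -> bool:
            """Attempt to record a toppling event; returns False when the event is
            ambiguous because the target already absorbed an identical change."""
            if self.toppled[i]:
                return False                  # saturated: no unambiguous evidence remains
            self.toppled[i] = True
            return True

        def reset(self) -> None:
            """Negative entropy: set the dominoes back up (at an unmodeled energy cost)."""
            self.toppled = [False] * len(self.toppled)

    row = DominoRow()
    row.topple(0)
    row.topple(1)
    print(row.entropy())  # 0.2 -- plenty of room to absorb new events
    print(row.topple(0))  # False -- an ambiguous, already-toppled target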

All physical systems tend to increase in entropy over time.  They do so because they have memory and exhibit hysteresis.  To memorize a change is to freeze that change in time.  Changes induced by previous events interfere with the ability of new events to be absorbed.  A thermodynamically hot system imparts kinetic events to the cold systems it is connected to, at the cost of the energy stored in its own memory.  Slowly, the cold systems absorb the kinetic energy of the hot until a point is reached at which the cold memory systems reach capacity, or become saturated.  Such a point of memory capacity saturation is called “equilibrium”.  If the cold system had no memory (if it were a vacuum, for instance), it would never increase in temperature, and the hot system would eventually become absolutely cold, since it would be connected to a system with an infinite capacity to absorb events.
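
A toy simulation of that paragraph (a sketch under my own assumptions, not a serious thermodynamic model): the cold system is a finite-capacity memory, and a zero-capacity “vacuum” never warms, so the hot system drains completely:

    def exchange(hot: float, cold: float, cold_capacity: float,
                 rate: float = 0.1, steps: int = 10_000):
        """Transfer energy from hot to cold until equilibrium or saturation."""
        for _ in range(steps):
            flow = rate * (hot - cold)
            if flow <= 1e-9:
                break                               # equilibrium: saturation reached
            hot -= flow
            cold = min(cold + flow, cold_capacity)  # capacity 0.0 models a vacuum
        return round(hot, 2), round(cold, 2)

    print(exchange(100.0, 0.0, cold_capacity=50.0))  # (50.0, 50.0): equilibrium
    print(exchange(100.0, 0.0, cold_capacity=0.0))   # (0.0, 0.0): hot drains to absolute cold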

As noted by Erwin Schrödinger, life in general has a “habit” of reversing entropy and in fact could be defined by this single, dominant habit.  Lifeless physical systems tend towards maximum positive entropy and tend to remain that way.  Life, on the other hand, does its damnedest to reverse entropy.  For life, it is not enough merely to keep entropy from increasing.  Like any system, a life form saturated to the limit of its information capacity can fail to adapt to a changing environment.  Life is a process through which its subsystems are continually de-saturated in order to make room for new information.  Life depends on entropy reversal.

This is not to say that entropy reversal does not happen to lifeless systems; entropy may be reversed here and there, and for short periods of time.  Random, isolated reversals of entropy in any system, however, are always—even in the case of life—compensated for by an increase of entropy in the outer environment.  Ultimately, the Great Environment we call the Universe is continually losing more and more of its ability to unambiguously absorb new events.  The arrow of time since the Big Bang is the story of how the memory components of the Universe are reaching capacity saturation.

The metaphor of the economic transaction is useful for describing the flow of events leading to entropy reversal.  Financial transactions follow the same pattern of entropy build-up and subsequent decrease.  Even in the simplest of cases, financial participants form a “memory system” which saturates before it collapses.  Work is done between participants before money is exchanged.  The exchange of money allows the information of the transaction to “compress”, and entropy to reverse within the well-defined, temporary system of the particular transaction.  This entropy reversal occurs, of course, at the expense of the outer environment.  Quantum transactions also follow the same build-up and tear-down in terms of the memory capacities of participating elements of matter.
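
As a sketch of the metaphor (hypothetical names; not a real accounting system), a transaction can be modeled as a log that builds up as work is done, then “compresses” at settlement, exporting its detail to the environment:

    class Transaction:
        def __init__(self):
            self.work_log = []  # entropy builds up here as work is done

        def do_work(self, who: str, amount: float) -> None:
            self.work_log.append((who, amount))

        def settle(self):
            """Compress the log into one net figure; the discarded detail is the
            entropy exported to the outer environment (an archive, heat, etc.)."""
            balance = sum(amount for _, amount in self.work_log)
            exported = self.work_log
            self.work_log = []  # the transaction system is de-saturated
            return balance, exported

    t = Transaction()
    t.do_work("alice", 40.0)
    t.do_work("bob", -15.0)
    print(t.settle())  # (25.0, [('alice', 40.0), ('bob', -15.0)])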

For true de-saturation to occur within a system, a system’s memory must be irreversibly erased.  If memory erasure were reversible, then memory would not have been erased and the system would have remained saturated.  “Reversible” memory loss is not true memory loss, but an illusion, a shuffling, a card trick.  Irreversibility, however, comes at a price for a system.  One can shuffle sand in a sandbox from one side to another, but to truly increase the capacity of a sandbox one must expend energy to remove sand from it and return that sand to the outer environment.  “Irreversibility”, however, is not some separate, measurable feature of entropy reversal, but a necessary part of its definition.  If a transaction is reversible, then entropy was not reversed.  If entropy has not been reversed, either partially or completely, then the transaction metaphor does not apply.  Irreversibility is a necessary test to determine the appropriateness of the transaction metaphor.
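
The sandbox example, as a sketch (my illustration): shuffling is reversible and frees no capacity, while irreversible removal exports sand to the environment and genuinely de-saturates the box:

    import random

    sandbox = ["sand"] * 8 + [None] * 2  # two units of free capacity

    def shuffle(box: list) -> None:
        random.shuffle(box)              # a "card trick": capacity is unchanged

    def erase(box: list, n: int) -> int:
        """Irreversibly remove up to n units; the removed sand goes to the
        environment and cannot be recovered from the box's own state."""
        removed = 0
        for i, cell in enumerate(box):
            if cell == "sand" and removed < n:
                box[i] = None
                removed += 1
        return removed

    shuffle(sandbox)
    print(sandbox.count(None))  # 2 -- shuffling changed nothing that matters
    erase(sandbox, 3)
    print(sandbox.count(None))  # 5 -- true de-saturation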

Memory, Adaptation and Entropy

I will write more in the coming weeks and months about the various types of memory a life form may leverage in order to adapt to its environment.  An interesting article from ScienceDaily illustrates how epigenetics, those chemical changes which alter the way DNA is processed (or not processed) in our cells, provides an organism with an adaptation subsystem that helps it better fit its environment:

http://www.sciencedaily.com/releases/2011/07/110724135553.htm

Adaptation cannot occur without memory.  Organisms, including plants, leverage many forms of memory.  Other than chemical and physical construction, perhaps the most important characteristic which differentiates kinds of memories is informational entropy capacity.  Memory systems with higher entropy capacities can assimilate a larger informational variety.  As the informational variety (entropy) capacity of a memory system increases, so rises the organism’s potential to adapt to a greater number of environmental conditions.  That is, the higher the entropy capacity, the higher the potential utility of the adaptive system.
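
One way to make “entropy capacity” concrete is Shannon’s measure (my framing, not drawn from the article): a memory with N distinguishable states has a maximum entropy of log2(N) bits, so more states means room for more informational variety:

    import math

    def entropy_capacity(num_states: int) -> float:
        """Maximum Shannon entropy, in bits, of a memory with num_states
        distinguishable states: H_max = log2(N)."""
        return math.log2(num_states)

    print(entropy_capacity(2))     # 1.0 bit: a single binary switch
    print(entropy_capacity(1024))  # 10.0 bits: room for far more variety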

From the article,

Epigenetic memory comes in various guises, but one important form involves histones — the proteins around which DNA is wrapped. Particular chemical modifications can be attached to histones and these modifications can then affect the expression of nearby genes, turning them on or off. These modifications can be inherited by daughter cells, when the cells divide, and if they occur in the cells that form gametes (e.g. sperm in mammals or pollen in plants) then they can also pass on to offspring.

I will also illustrate in the coming weeks and months that adaptive system utility is not merely a function of higher information entropy capacity.  Adaptive system utility can also be extended by the system’s ability to “clean house”, “collect the garbage” and reduce information variety when the system has become saturated.

Unifying the Concept of “Transactions” Across Disciplines

I am working to unify the concept of “transaction” across the disciplines of:

  • economics,
  • business,
  • management,
  • information technology, and even
  • physics and psychology.

This is important: transaction analysis and design is not addressed well in the modeling paradigms of either I/T or enterprise architecture.  I believe this is a critical omission.

Freeman Dyson Responds

I erred with respect to Professor Freeman Dyson’s involvement with quantum electrodynamics (QED) and with respect to a quote I had previously included by him (http://www.iwise.com/eeQ1E).  I had a hunch I was getting something wrong, so I wrote Professor Dyson to ask for clarification.  His response,

Thank you for asking whether I agree with your statement. The answer is no. I do not agree, because I was talking about mathematicians and not about physicists. Dirac and Bethe were dealing with quite different problems. They were not concerned with architecture. They were inventing a physical theory. I was using their theory and finding the architecture to tidy up the mathematical details. Yours sincerely, Freeman Dyson.

First, it is good to understand Dyson’s role with regard to QED a little more clearly.  Second, it is interesting to see his perspective on his role as “architect”.  Last, but not least, it made my day to receive a response from a hero of mine!

Thank you, Professor Dyson!

Architecture: the Normative Art

Architecture is the normative art.

To be normative is to occupy the “ought” side of David Hume’s is-ought divide.  A positivist, focusing on what is, has no rational method for arriving at what ought to be.  In ethics, for instance, the positivist can document the evidence for the existence of murder throughout human history, but can they arrive, through purely descriptive and deductive means, at the conclusion that murder is unjustified?  What is justice?

To describe an “ought” is to architect.  Regardless of the problem domain, and placing the concept of professionalism aside, the architect is the person who ultimately makes the very human decision of what to value and what is “good” in design.  Though the concept of architecture is, and should be, associated with the act of creation, adherence to ideals, models, standards and virtues has always been the defining aspect of architecture.

To live an ethical life, the individual must relate to human ideals, virtues and associated (measurable) standards. It also requires the ethicist to have identified and communicated those virtues — usually by identifying standards which people can relate to — and to have defined principles and rules for adhering to those virtues. The ethicist, in human affairs, is a “cultural architect”.

Each of us in the United States is taught to value the virtues of our Constitution. We expect those in power to embody the virtues described within that document.  Our best measure of that embodiment is the degree to which we notice those in power upholding the principles and rules also described therein. We rightfully consider James Madison the “chief architect” of the United States, since he was the critical agent who determined the virtues, or qualities, which would make a good country and then designed the legal structure (principles and rules) which would best institutionalize those virtues.  The U.S. Constitution is the architectural description for the United States.

The physicist Freeman Dyson once said,

The bottom line for mathematicians is that the architecture has to be right. In all the mathematics that I did, the essential point was to find the right architecture. It’s like building a bridge. Once the main lines of the structure are right, then the details miraculously fit. The problem is the overall design. [iWise, http://www.iwise.com/eeQ1E, retrieved 2011-06-01]

What did Freeman Dyson mean by “right”?  Mathematics is essentially a deductive/positivist/descriptive exercise.  I have seen some physicists waste their lives on cherished theories which “penciled”, “made sense”, were “without mistake”, were “going to overturn Einstein” and all that, yet never predicted new phenomena, let alone reproduced the values of known phenomena.  The problem with the life work of these people was never their math, which was deductively correct, but the initial set of axioms that they chose as valuable.  Freeman Dyson, in my opinion, was referring to the “right” choice of axiom and basic principle with regard to the “architecture” of a scientific theory that works.  One either chooses axioms which help the theory conform to experimental reality, or one does not.  The choice of initial axiom is not a deductive exercise, however, but an inductive choice of “ought”.  Those whose life’s work leads to naught chose wrong.  For Freeman Dyson’s part, however, he, along with Paul Dirac, Hans Bethe, Sin-Itiro Tomonaga, Julian Schwinger, and Richard Feynman, is properly known as one of the architects of quantum electrodynamics (QED).  They came to be the architects through their identification of axiom and principle which not only enabled the development of a cohesive set of mathematics, but also led their theory to conform, par excellence, to real-world experiment.  In short, they identified the “oughts” of QED.  They chose well.

Regardless of the problem domain, the architect is always that person who breaches the is-ought divide.

Important update from Professor Dyson here: https://nicoletedesco.wordpress.com/2011/06/05/freeman-dyson-responds