Some References for “Architecture Ethics”

I am waiting to obtain some consensus from my IASA peers with regard to the course outline.  Until then, I will talk obliquely about the content as it seems to be shaping up.  For now, let me give you a list of some of the references I will include below.  I will also include references to some of the other classics, like Aristotle, Plato, Kant, and Hume, but here are some interesting readings from modern times.

Memory, Irreversibility and Transactions

A “system” is a finite set of memory components interrelated through causative event maps.

Phwew, that was a mouthful!  What does that mean?

Memory is the ability of matter to change state and maintain that state for a non-zero period of time.  At the smallest scales of existence, atoms have memory when, for instance, chemical changes influence the electron configuration of those atoms.  The ability of paper to hold graphite markings throughout its lifetime is also a form of memory.

An event is a directional transfer of energy from one memory component to another, from source to target, in a way that induces a state change in the target which lasts for a non-zero period of time.  A transfer of energy qualifies as an event only if it alters the memory configuration of its target.  An event map is a set of source/target associations.  Causality is the study of the effects of event maps upon their state-absorbing targets.

To study a system is to study a well-defined, finite set of memory components and the causative event maps which affect those components.  For every system under study, there exists that which is outside of that system which we call the system’s environment.  Causative events flow from system to environment, and from environment to system, composing a causative event map called a feedback loop.

Entropy is the degree to which a system has been affected by its causative event map.  Low entropy implies that a system has “room” to absorb new state changes in an unambiguous way.  A set of aligned, untoppled dominoes has low entropy.  High positive entropy implies that a system has attained a degree of ambiguity with regard to its ability to absorb specific kinds of changes.  A set of toppled dominoes has a high degree of entropy relative to “toppling” events.  One can attempt to topple already-toppled dominoes, but the result is ambiguous in that it is more difficult to leave evidence of a toppling event (a finger push) than it was prior to toppling.  Negative entropy is a condition in which a system is to some degree “reset” so that it can once again, unambiguously, absorb more events than it could before.  To induce negative entropy into a system of toppled dominoes is to set them back up again to be retoppled.
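The domino metaphor can be made concrete with a toy sketch.  This is purely illustrative code of my own devising (the function names `topple`, `entropy` and `reset` are not from any library): each domino is a memory component, a toppling event leaves unambiguous evidence only on an untoppled domino, and resetting the dominoes is entropy reversal.

```python
# Toy sketch of the domino metaphor: a "system" is a list of memory
# components (dominoes), each remembering whether it has been toppled.

def topple(dominoes, i):
    """Apply a 'toppling' event to domino i.
    Returns True if the event left unambiguous evidence (a state change)."""
    if dominoes[i]:
        return False          # already toppled: the event is absorbed ambiguously
    dominoes[i] = True        # state change: the event is memorized
    return True

def entropy(dominoes):
    """Degree to which the system has absorbed its event map: the
    fraction of components that can no longer record a new toppling."""
    return sum(dominoes) / len(dominoes)

def reset(dominoes):
    """Induce 'negative entropy': set the dominoes back up, at an
    (unmodeled) energy cost paid to the outer environment."""
    for i in range(len(dominoes)):
        dominoes[i] = False

system = [False] * 4          # aligned, untoppled: low entropy
assert entropy(system) == 0.0
assert topple(system, 0)      # the first event leaves clear evidence
assert not topple(system, 0)  # repeating it is ambiguous
assert entropy(system) == 0.25
reset(system)                 # entropy reversal: ready to re-topple
assert entropy(system) == 0.0
```

Note that `reset` models the energy cost only by omission; in a real physical system that cost is exported to the environment, as discussed below.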

All physical systems tend to increase in measures of entropy over time.  They do so because they have memory and exhibit hysteresis.  To memorize a change is to freeze that change in time.  Changes induced by previous events interfere with the ability of new events to be absorbed.  A thermodynamically hot system imparts kinetic events to the cold systems it is connected to, at the cost of the energy stored in its own memory.  Slowly, the cold systems absorb the kinetic energy of the hot until a point is reached at which the cold memory systems reach capacity, or become saturated.  Such a point of memory capacity saturation is called “equilibrium”.  If the cold system had no memory, for instance if it were a vacuum, it would never have increased in temperature, and the hot system would eventually have become absolutely cold since it would be connected to systems with infinite capacities to absorb events.

As noted by Erwin Schrödinger, life in general has a “habit” of reversing entropy and in fact could be defined by this single, dominant habit.  Lifeless physical systems tend towards maximum positive entropy and tend to remain that way.  Life, on the other hand, does its damnedest to reverse entropy.  For life, it is not merely enough to keep entropy from increasing.  Like all systems, life which is saturated to its limit of information capacity can fail to adapt to a changing environment.  Life is a process through which its subsystems are continually de-saturated in order to make room for new information.  Life depends on entropy reversal.

This is not to say that entropy reversal does not happen to lifeless systems; entropy may be reversed here and there and for short periods of time.  Random, isolated reversals of entropy in any system however are always—even in the case of life—compensated for by an increase of entropy in the outer environment.  Ultimately, the Great Environment we call the Universe is continually losing more and more of its ability to unambiguously absorb new events.  The arrow of time since the Big Bang is the story of how the memory components of the Universe are reaching capacity saturation.

The metaphor of the economic transaction is useful for describing the flow of events leading to entropy reversal.  Financial transactions follow the same entropy build-up and subsequent decrease.  Even in the simplest of cases, financial participants form a “memory system” which saturates before it collapses.  Work is done between participants before money is exchanged.  The exchange of money allows the information of the transaction to “compress”, and entropy to reverse in the well-defined, temporary system of the particular transaction.  This entropy reversal occurs, of course, at the expense of the outer environment.  Quantum transactions also follow the same build-up and tear-down in terms of the memory capacities of participating elements of matter.

For true de-saturation to occur within a system, the system’s memory must be irreversibly erased.  If memory erasure were reversible, then memory would not have been erased and the system would have remained saturated.  “Reversible” memory loss is not true memory loss, but an illusion, a shuffling, a card trick.  Irreversibility, however, comes at a price for a system.  One can shuffle sand in a sandbox from one side to another, but to truly increase the capacity of a sandbox one must expend energy to remove sand from it and return that sand to the outer environment.  “Irreversibility”, however, is not some separate, measurable feature of entropy reversal, but a necessary part of its definition.  If a transaction is reversible, then entropy was not reversed.  If entropy has not been reversed, either partially or completely, then the transaction metaphor does not apply.  Irreversibility is a necessary test to determine the appropriateness of the transaction metaphor.

Remembering and Forgetting, Saturation in Neural Networks

This study by Rosenzweig, Barnes, and McNaughton highlights the importance of forgetting in order to make the best use of the brain cells we have.

If we fail to forget, our neural networks will saturate and become useless.  Saturation in a neural network does not merely mean that the network cannot learn more; it can also mean that the network fails to respond to input in an appropriate manner.

Consider a very simple network consisting of two input neurons, I1 and I2, and two output neurons, O1 and O2.  A neural network learns by increasing the strength of connections between associated inputs and outputs.  For instance, should an input signal be present at I1 while an output signal is also present at O2, the connection I1↔O2 would be strengthened.  Consider Ivan Pavlov, his dog, a dog treat, and Pavlov’s bell.

A trained neural network acts by pro-actively triggering appropriate output neurons when specific input signals are present.  Should a signal trigger the first input neuron in our example, I1, the second output neuron would be triggered pro-actively.

  • Given the “learned” synaptic connection: I1↔O2
  • Assuming: I1
  • Triggered: O2 (via I1↔O2)

Consider Pavlov’s dogs salivating when the bells rang regardless of whether treats were provided.

If a second training exercise triggered input I1 but the first output neuron was triggered in lieu of the second, the connection I1↔O1 would also be strengthened.  We now have,

  • I1↔O1
  • I1↔O2

After the second training session, should I1 be triggered once again, which output neuron would trigger?  Without any further weighting functions to apply to our connections, an I1 signal would trigger both outputs,

  • O1 (via I1↔O1)
  • O2 (via I1↔O2)

Consider a situation where Pavlov’s dogs were sometimes offered treats when the bells rang, or sometimes were given electric shocks.  What would the dogs have expected the next time bells rang?  Would they have expected treats, electric shocks, or both?

Perhaps this state of affairs is desirable, perhaps it is not.  Now that this cross-association is saturated, however, there exists no way to trigger only O2 given I1.  Even if all future training sessions reinforce the I1↔O2 connection, the system will remain ambiguous forever.
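The cross-association above can be sketched in a few lines of code.  This is a deliberately crude model of my own (binary connections, no weights), not an implementation from the cited study: training adds a connection, and once I1 is associated with both outputs, responding to I1 is ambiguous until the network forgets.

```python
# Minimal sketch of cross-association saturation in a tiny network:
# two inputs, two outputs, and binary (unweighted) connections.

connections = set()            # "learned" synaptic connections (Ii, Oj)

def train(i, o):
    """Strengthen (here: simply record) the connection between i and o."""
    connections.add((i, o))

def respond(i):
    """Trigger every output neuron associated with input i."""
    return sorted(o for (inp, o) in connections if inp == i)

def forget():
    """A crude 'reset' (de-saturation): erase all learned connections."""
    connections.clear()

train("I1", "O2")
assert respond("I1") == ["O2"]        # unambiguous: I1 triggers only O2

train("I1", "O1")                      # a conflicting training session
assert respond("I1") == ["O1", "O2"]  # saturated: I1 now triggers both

forget()                               # irreversible erasure makes room
train("I1", "O2")
assert respond("I1") == ["O2"]        # the network can relearn cleanly
```

Without the `forget` step, no amount of further training on I1↔O2 removes the ambiguity in this model, which is the point of the paragraphs that follow.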

It is likely that nature’s first, simple neural networks exhibited this kind of easy saturation.  Perhaps early critters could only adapt to very limited environmental conditions during their very short lives.  Perhaps these critters simply died from indecision if they encountered natural oddities they weren’t prepared for.  In the competitive evolutionary race, however, those critters who occasionally reset their saturated networks would have had an evolutionary advantage over those who did not.  To reset an easily saturated neural network would have been to allow the forgetting of anomalies.  These critters would have had a better chance of survival in the real, random natural world.  They would relearn their most common and important lessons and forget the oddities which simply did not pertain to most circumstances of their lives.

In the context of the article, 4-(3-phosphonopropyl) piperazine-2-carboxylic acid (CPP) provides an occasional “reset” function to spatial memory that allows de-saturation and re-learning.  CPP is one of nature’s “dirty tricks” that helps to alleviate the downsides of easily saturated neural networks.  Nature has converged upon many such dirty tricks over the eons, including:

  • Chemical washes (CPP)
  • Inhibition, “pulsing” and other mild periodic reset mechanisms
  • Network segmentation (slows saturation)
  • Physical growth and degeneration
  • Specialty circuits (e.g., “instinct”)
  • Preferential learning such as that which provides increased weight to electric shocks versus pleasurable food treats
  • Consciousness (self-awareness)
  • Concept formation and other information compression mechanisms
  • Emotion, heuristic, magical thinking, social deference and economic behavior in humans

The basic lesson is that, absent ameliorating mechanisms, all neural networks saturate easily.  For any cognitive function, researchers should ask two questions:

  • How does the associated network saturate?  What are the effects?
  • What solutions has evolution converged upon to de-saturate the network?

Cataloging Cognitive Phenomena Using Reversibility Criteria

As you can probably tell from yesterday’s post, my thesis is still young, not quite formed, and has a few holes.  As an exercise, I am considering cataloging cognitive behavior (especially economic behavior) in terms of reversibility.  I am wondering if the results of this exercise, which might have a physical basis in neurobiology, could result in a kind of “periodic table of elements” for human behavior.  Could it help point the way to a better understanding of the neurobiology of various behavioral mechanisms?  This would be a multi-dimensional map, including:

  • Degree of irreversibility
  • Irreversibility seeking versus irreversibility maintenance
  • Irreversibility recognition (do you know it when you see it?)

Consider some examples:

  • Defense of property (irreversibility maintenance)
  • Economic transaction (irreversibility seeking)
  • Social bonding (irreversibility seeking while bonding, maintenance afterwards)
  • Obsessive compulsive disorder (irreversibility seeking, but unable to recognize it when it occurs)
  • Schizophrenia (brain randomness: low irreversibility/bias, high irreversibility seeking behavior)

Consider the potential mapping along these dimensions:

  • Horizontal axis: Reversible ← → Irreversible
  • Vertical axis: Successful recognition ↑ versus Recognition failure ↓
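As a first stab at what such a catalog might look like in concrete form, here is a hypothetical sketch.  The field names (`irreversibility`, `mode`, `recognized`) and the numeric scores are illustrative placeholders of my own, not measured quantities from any study:

```python
# Hypothetical catalog: each behavior placed along the reversibility
# dimensions described above.  All values are illustrative guesses.

catalog = {
    "defense of property":
        {"irreversibility": 0.8, "mode": "maintenance", "recognized": True},
    "economic transaction":
        {"irreversibility": 0.9, "mode": "seeking", "recognized": True},
    "social bonding":
        {"irreversibility": 0.7, "mode": "seeking-then-maintenance", "recognized": True},
    "obsessive compulsive disorder":
        {"irreversibility": 0.9, "mode": "seeking", "recognized": False},
}

def recognition_failures(cat):
    """Behaviors that seek irreversibility but fail to recognize it,
    i.e. the lower-right quadrant of the mapping above."""
    return [name for name, b in cat.items()
            if b["mode"].startswith("seeking") and not b["recognized"]]

assert recognition_failures(catalog) == ["obsessive compulsive disorder"]
```

Even a toy structure like this makes the quadrants of the mapping queryable, which is roughly what a “periodic table” of behavior would need to support.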

Indeed, I am having a difficult time expressing what is on my mind.  I will try to read more literature on behavioral economics to see if others have already tread these waters, and also to see if I can get some hints on how to express my thoughts better on this topic.  Of course, this work might be nothing but a snipe hunt, but I think I might at least learn something from the exercise.

Cognitive Irreversibility and the Limits of Nomadic Life

Is the Never-Ending March of Transience a Given?

Since I first read Alvin Toffler’s book, Future Shock, as a child (ca. 1972), I have agreed with his premise that our future will be fraught with increased transience across all aspects of life.  For instance, our friendships would be increasingly temporary, jobs would be increasingly unstable and we would become increasingly nomadic out of necessity.  The intervening years have been kind to Toffler’s prognostication.  A colleague of mine, Scott Anderson, is beginning to suspect that he is due for a relocation as a result of some yet unforeseen set of circumstances waiting in the wings to boot him from his cozy abode.  There is a limit, however, to how much ambiguity humans can and will put up with.  I am beginning to suspect that continually increasing transience is not a given because, at the smallest of time scales, our lives become too ambiguous for us to handle.  We will find other ways of coping with the complexities of life, or at least appear to, in addition to the easy choice of shrinking our time scales.  These other means may not be rational, and may even be violent.  To understand why, it helps to understand the role of information loss and irreversibility in cognitive processes.

Information Omission and Irreversibility

Ayn Rand proposed that useful concepts in the human mind depend upon a process of informational omission.  Once we identify sensory measurements as related, or even abstract ideas as related, we “compress” the related measurements or abstractions into a grouping and then discard the original measurements.  The group, or concept as she called it, is then integrated into existing concepts in a hierarchical manner.  This informational omission reduces cognitive load, and the integration allows us to reuse information across greater and greater scales.  Rand’s concepts are closely related to groups in group theory.  Group theory, however, is inherently lossless.  Like wires extruded from a hot copper ingot, the continuum of real numbers can be “extruded” through the use of functions into finer and finer strands until individual elements have been isolated, such as discrete integers.

In group theory, the generating functions are computable and reversible.  The results of extrusion functions can always be traced back to their source, like a wound-up, fine copper wire not yet separated from its generating ingot.  Human concept formation, however, is not inherently reversible.  A “cut” is necessarily made when concepts are formed; otherwise the continued explicit use of the original information (sensory data or other abstractions) would overwhelm our ability to think clearly, think quickly or think at all (especially under stress).  This cut may not be computable or traceable like the codomains of mathematical groups.  In his book, The Emperor’s New Mind, physicist Roger Penrose described the role of non-computability in human decision making.  Philosophers such as David Hume had long ago identified the non-logical (non-computable) nature of epistemological induction.  Rand, I believe correctly, identified the concept-formation process as an inductive one, non-deductive, and hence essentially non-computable.  Human concept formation, like all non-computable processes, is associated with information loss and thus with varying degrees of cognitive irreversibility.
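The contrast between lossless grouping and the lossy “cut” of concept formation can be sketched directly.  The measurements and category names below are invented for illustration; the point is that a grouping which keeps the originals is invertible, while a summary which discards them is not:

```python
# Sketch contrasting a lossless (reversible) grouping with the lossy
# "cut" of concept formation.  All data here is invented for illustration.

measurements = [4.1, 3.9, 4.0, 12.2, 11.8]

# Reversible grouping: the original data rides along with its label,
# so the grouping can always be "traced back to its source".
lossless = {"small": [m for m in measurements if m < 10],
            "large": [m for m in measurements if m >= 10]}
recovered = sorted(lossless["small"] + lossless["large"])
assert recovered == sorted(measurements)       # fully invertible

# Concept formation: compress to a summary and discard the originals.
concept = {"small": len(lossless["small"]), "large": len(lossless["large"])}
# From {'small': 3, 'large': 2} alone, the measurements cannot be
# rebuilt; the omission is what makes the concept cheap to carry,
# and what makes the process irreversible.
assert concept == {"small": 3, "large": 2}
```

Trying to “uncompress” `concept` back into measurements is exactly the imperfect reconstruction discussed below in the context of witness testimony.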

“Cognitive load” is the burden of information variety in the mind.  Maintaining a large variety of ungrouped, unconceptualized information in working memory is often described as a state of high entropy, of ambiguity, or of indecision, and is generally perceived as uncomfortable if not downright dangerous.  We could never make a decision without conceptualizing our problems and potential solutions.  To do so requires that we commit to casting aside this information or that, and commit to compression of thought.  This will not occur on its own.  The simplistic memory systems of early evolution probably cataloged raw sensory information and did little else.  Those simplistic memory systems, which we still possess, quickly become filled to capacity.  Evolution, however, has favored the animal that could scale that memory through pruning and reuse.  The predator-escaping human of pre-history would be dead if they could not make up their mind in time whether to turn right or left at a fork in the potential escape route because their mind was filled only with raw facts.  Decision-making, pruning the tree of facts to derive a conclusion, is a feature of evolution.  With decision-making come the epiphenomena of disambiguation and irreversibility.  Though our cognitive processes are adept at reducing ambiguity, we do not explicitly seek ambiguity reduction for its own sake.  Similarly, while our inductive and concept-formation processes tend to exhibit varying degrees of irreversibility, we probably do not explicitly seek irreversibility for its own sake.  Rather, we just do what we do and try to create working knowledge for ourselves so that the world makes sense and so that we can take action when we need or desire to.  Disambiguation and irreversibility are side effects of our normal cognitive processes; what motivates us is to form concepts and to take action in the world.

Cognitive irreversibility is not always complete.  Sometimes we discard original sensory data and basic abstractions, sometimes we don’t.  Human memory, especially when considering such phenomena as the reliability of witness testimony in law, is notoriously unreliable because we recreate many past events by “uncompressing” concepts and other mental structures in order to rebuild discarded (or otherwise inaccessible) original sensory data and abstractions.  In this respect, concepts act somewhat like mathematical groups, though imperfectly.  Some of the original raw memory may linger to make up for some of this loss.  In most cases, however, our neuroanatomy does a fine job of mimicking mathematical group reversibility and we fail to notice the imperfections.  What is important to understand, however, is that actual information is discarded, or rendered inaccessible, and that we are utterly dependent upon non-computable, somewhat irreversible processes no matter how much we may convince ourselves otherwise.

Relationships, Law and Superstition

Hunter-gatherer tribes which experienced the ambiguity of multiple locations and varying meal options traveled together and kept their interpersonal relationships constant.  One benefit that consistency of social network provides is a reduction of ambiguity and cognitive load.  Creating new relationships, especially trusted relationships, is hard.  Relationship building requires understanding the behavior of new people.  It also requires the time necessary for our endocrine systems to work properly, to literally allow us to chemically bond with one another.  Breaking relationships with other humans is difficult.  In particular, break-ups, whether with friends, family or colleagues, are fraught with the perils of emotional reaction.  It is easier to bind to other human beings than it is to unbind.  This difficult-to-reverse behavior is why we as a species are described as “social animals”.  We need good reason to destroy relationships.  The lead-up to a break-up usually consumes a lot of thought and emotion.  Hunter-gatherer tribes typically develop complex rules in order to keep their tribes together.  After all, if family and friends were in constant flux, how much mind would have been left to concentrate on the perils of travel, feeding and defense?

Hard-and-fast rules for anything reduce cognitive load.  Committing to a credo of “an eye for an eye” is easy.  Instead of examining all evidence for every crime, instead of revisiting ethical and moral propositions for each and every case, the simple heuristic allows people under stress to survive.  By pruning away the possibility of revisiting ethical and moral propositions over and over, laws allow difficult situations in human relationships to be arbitrated with speed and allow all concerned to spend more time thinking about the more important problems in life, such as food and health concerns.  If human brain power were a few orders of magnitude higher in terms of I.Q., perhaps we could revisit ethical and moral propositions over and over again and make better decisions.  However, we are not intelligent gods.  The relative irreversibility of enforced law allows human beings, on the whole, to thrive, even if individual cases here and there may not be adjudicated fairly.

For similar reasons it is not difficult to see why superstition comes to be so readily.  Creating a rule, any rule, then discarding the original thought behind it is a natural survival skill.  Some rules are useful, some rules are neutral, some rules might even be deadly on occasion.  No matter, we are excellent concept-generating and rule-developing machines.  The general mechanisms for concept-generation and rule-development are what allowed us to survive and thrive in the great maelstrom of the evolutionary process, not specific concepts or specific rules.  We come to cherish our concepts and rules, even institutionalize them, for the benefit of survival.  In fact we are even rewarded for our concept and rule creation with the flush of opiates that come with the moment of, “Eureka!”  That some rules come to be called superstition is par for the course for a system that, biologically, does not favor specifics but in general favors something, anything.  What makes superstition “sticky”, what makes it last so long, what makes it spread from human to human and over generations, is the one-way nature of information which accompanies the creation of rules of any kind: if you are motivated to embrace a simple idea instead of ambiguity, a rule can last for a very long time and what is more, others will be similarly motivated to think it is a good idea as well.

Monarchies and Irreversible Social Structures

Why do humans follow their leaders?  Empathetic as we are, why were our ancestors not more egalitarian than they turned out to be?  The simple answer is that to follow a leader is easy.  To surrender choice to a leader is like adopting a simple law or a superstition: it lowers cognitive load by removing the need to think about those things the leader is thinking about for you.  To the extent that a leader’s reign might be temporary, the self-interested human cannot completely unload their decision making.  To the extent that a leader’s reign is protected and assumed to be long lasting—to the extent it is perceived to be irreversible—the self-interested human is free to lighten their cognitive load and concentrate on other things in life.

It is common in some schools of legal theory to attribute the existence of a monarchy to the need to solve various regress problems of permission and enforcement.  It is the sovereign who charters the legislators, the jurists and the enforcers to act on the sovereign’s behalf.  Familial inheritance solves the problem of chartering the sovereign.  The original monarch arose by force of “habit” amongst the populace.  When considering cognitive load, a simpler explanation becomes apparent: surrendering to a sovereign is, for the subject, a simple solution to a set of hard problems present in complex societies and in a dangerous world.  The inheritable nature of monarchies is valuable in that it provides the subjects with a sense of perceived irreversibility, which reduces the need for any subject to consider all aspects of law for themselves at any time.  This sense of ease and utility in reducing cognitive load also helps explain why even egalitarian democracies drift towards strong, central control over time, assuming extraordinary efforts are not taken to stop such drift.  If the end of a monarchy were perceived as near, or if a strong, central government were perceived as close to dissolution, panic might ensue amongst the citizenry.  What to do?  How to fend for oneself?  How to defend one’s property?  The burden of the enormity of thinking required to solve complex problems in a complex society would be avoided if possible.  It seems that, for some, it is better to lose information and keep it lost by surrendering to a tyrant than it is to shoulder the responsibilities of citizenship oneself.  Irreversible social structures of all kinds fulfill the role of cognitive salve, including institutional religions, labor unions, fraternal orders and, as mentioned previously, families.

Feedback Loops, Money and the Economics of Time

Economic activity is human collaboration motivated by the exchange of economically liquid—exchangeable—goods or services.  Humans by themselves pursuing their self-interests do not economic activity make.  Humans in spontaneous collaboration do not economic activity make.  The exchange of an exchangeable good or service is an interesting motivational force in human affairs.  While one-on-one collaboration takes place easily, motivation to collaborate becomes increasingly difficult to obtain the more complex the activity.  To motivate a group of people to build a complex farming community, for instance, requires the development of interpersonal bonds, cajoling, arguing, reasoning, discussion, more discussion and sometimes even violence.  When humans exchange a good or service, however, our abilities for symbolic manipulation and concept formation do something miraculous: they reduce the need for the kind of cognitive activity normally required to trigger collaborative activity.  Instead of long, arduous argument about why such and such a barn might need to be built, the exchange of consumable goods reduces the cognitive loads of all parties by alleviating the need to concentrate on the logic of the need to collaborate.  Instead, the participants can allow themselves the luxury of concentrating on the logic of their self-interests.

Information is lost in the economic exchange, however: much of the information about the need to collaborate that would normally have been exchanged need not be exchanged at all.  The interesting thing about this information loss, especially since the invention of money, is that it can accelerate human collaboration.  Since the logic of the need to collaborate need not enter the collaboration equation, as long as goods are exchanged humans can remain motivated to continue their collaboration, even if the original conditions of collaboration have long been satisfied.  Positive economic activity is actually a case where cognitive irreversibility fails to be achieved in some aspect.  It may be the easy thing to do to ignore the original conditions of the collaboration as long as the opportunity for the transaction of goods, and the rush the closing of those transactions provides, continue to present themselves.  Humans remain motivated, perhaps to continue building barns.  As long as goods exchange continues satisfactorily, collaboration with other humans suddenly becomes easier because motivation is high and cognitive loads for all concerned remain low.  As long as the original logic of the collaboration need not be reexamined, cycles of activity can continue to accelerate and miracles occur.  Money accelerates the cycles even more because it is not directly consumable and thus is likely to survive serial exchange from one motivated human to another.  This is one of the advantages of the use of gold as a currency as opposed to wheat or even oil.  Information loss in human affairs leads to feedback cycles of all kinds, not just economic booms.  We sometimes call negative feedback cycles born of information loss “moral hazards” or, sometimes, insanity.

Of course, fish will always grow to fill their fish tank.  Economic exchange reduces human cognitive load, which enables collaboration to occur more easily and can even lead to enhanced motivation to build or to serve.  Economic exchange allows humans not just to survive, but to thrive as individuals, and allows societies to grow.  Eventually, however, cognitive limits are reached once again as the collaborative activities reach critical levels of complexity.  Many cognitive-reduction techniques are brought to bear to handle complex collaborations, such as forming collaborative hierarchies, surrendering to legal authority or reducing time.  Management hierarchies work because all participants are alleviated of the need to consider all information all the time; “managers” make decisions upon which others are compelled to act.  The effect of the manager is similar to that of the legal sovereign: information load is reduced and the perception of compulsion allows the participants to relax in their anticipation of the future need to think too hard.  Breaking up large activities into smaller serial activities plays a similar role to sovereigns and management in that cognitive information is allowed to pass once a subgoal has been reached.  Reducing the length of time required for each unit of economic activity to complete allows human collaborations to scale to yet greater levels of complexity.

Our Complex World and the Limit To Scott’s Fears

Our world is becoming increasingly complex every day.  My colleague Scott is correct to assume that units of economic activity will continue to be reduced in time.  Units of economic activity also include his employment tenure, the housing values in his neighborhood and even the positive economic feedback cycles found in the society of any geographic location.  Time units in human activity can only be reduced so far, however, before the number of discrete activities themselves overwhelms the limited human mind.  Fortunately, we do not need to reduce economic activity to the scale of the microsecond, since evolution has provided us with other tricks to toss out “unwanted” information and perception.  Not to worry, Scott: perhaps the people you live and work with might stop running faster and faster and might instead start adopting more hard-and-fast rules of conduct, more superstitions, attempt to throw more money at their problems or surrender themselves to tyrants.  Sometimes people also drink to forget or entertain themselves to death.

When we humans feel we have too much on our mind, we are very, very inventive in our ways to lighten our mental loads.  In a complex world, sometimes the insane are the luckiest of them all, for they can escape the pain of overload through the simple bliss of ignorance.

[Update 5 July 2011: a word of caution here]

Alex Bell on the Disease that is UML

Alex Bell has written a lovely article on the use of UML called “UML Fever: Diagnosis and Recovery”.

I admit to using UML from time to time.  I do so because a specific UML modeling tool may meet specific communication and data storage requirements I might have.  UML tools, for instance, make for interesting “object databases”.  I occasionally find the detailed grammar of UML useful as a guide for thinking through the details of certain classes of problems.  I use extensions to the grammar frequently.  As a generalized communication grammar, however, UML is severely limited and should be ignored.  The appropriateness of UML is not well understood by a significant fraction of practitioners, and its overuse is certainly counterproductive.

I believe that modeling tools are useful but must recover from the UML disease that has infected them all.  I have a specific evolutionary path in mind which I will write about in detail at a future time.

If you wish to improve the efficacy of communication of complex concepts between team members, I urge you to return to the basics of literature, philosophy, psychology, linguistics, marketing and art.  At the very least, read Edward Tufte.  The efficiency-through-standards argument of UML is a myth.

Data, Information and Knowledge

For my purposes, I define:

  • Data is the “in flight” potential to induce informational change; more specifically, data is a signal or energy (boson)
  • Information is a particular state of matter (fermion)
  • Knowledge is information encoded in the mind and available to our self-aware cognitive processes (consciousness)
    • Information encoded into the nervous system but not available to self-aware cognitive processes I will continue to associate with the more inert moniker of “information”
    • Wisdom is another matter entirely!

I like these definitions because they seem to help to clarify concepts in common with both information theory and physics.
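The definitions above can be restated as a small type sketch.  The class names simply mirror my definitions; nothing here comes from a real library, and `absorb` is a hypothetical helper standing in for an “event”:

```python
# Sketch formalizing the data/information/knowledge definitions as
# simple types.  Purely illustrative; not from any existing library.

from dataclasses import dataclass

@dataclass
class Data:
    """An 'in flight' signal: the potential to induce informational change."""
    payload: str

@dataclass
class Information:
    """A particular state of matter: a signal that has been absorbed."""
    state: str

@dataclass
class Knowledge:
    """Information encoded in the mind and available to consciousness."""
    state: str
    accessible_to_consciousness: bool = True

def absorb(data: Data) -> Information:
    """An event: the in-flight signal induces a lasting state change."""
    return Information(state=data.payload)

info = absorb(Data(payload="bell rang"))
assert info.state == "bell rang"
```

Information encoded in the nervous system but inaccessible to consciousness would, under these definitions, be a `Knowledge` instance with `accessible_to_consciousness=False`, which this sketch would simply treat as plain information.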