Some References for “Architecture Ethics”

I am waiting to obtain some consensus from my IASA peers with regard to the course outline.  Until then, I will talk obliquely about the content as it seems to be shaping up.  For now, let me give you a list of some of the references I will include below.  I will also include references to some of the classics, like Aristotle, Plato, Kant, and Hume, but here are some interesting readings from modern times.

Architecture Ethics

I am now developing an online course on “Architecture Ethics” for IASA.  Currently, I have defined the course objectives as follows.  The target audience is information technology architects and architects-in-training, primarily in North America and Europe, although I hope that Asian students will also find it informative.  (My recent experience in China has provided me with a number of good examples for all students.)

My current introduction:

What do Love Canal and Barclays have in common?  In both of these very public cases, improper ethical planning arguably created opportunities for immoral action.  As a professional architect you are in a position of leadership and trust, and you are responsible for the ethical implications of your decisions and the morality of your actions.  You are responsible for the ethical planning of your daily work and long-term career, including the proper selection of projects, the identification of collaborative environments that can enable or hinder success, the avoidance of moral risks to employer and customer, meeting the challenges of regulatory and legal frameworks, and even the determination of proper compensation for your effort and risks.  This course will introduce concrete skills that will help you recognize potential ethical failures in the practice of computing-associated architecture, strategies to mitigate or otherwise compensate for those failures, and, ultimately, simply put, how to architect well.

After completing this course, you will be able to:

  • Identify some of the highest-risk factors affecting your project and career success, and strategies to counter them.
  • Identify financial impacts of ethical decision making in architecture.
  • Identify and communicate additional ethical considerations for your particular community, industry, employer, and job.
  • Effectively communicate the value of professional architecture.
  • Develop an ethical context, or “Collaborative Viewpoint” for your Architecture Description.
  • Understand why the ethical context is the proper frame within which you should understand everything you do as a professional architect, and why IASA exists.

Target audience:

  • Information technology architects, solution architects, and enterprise architects
  • Students training for a career in computing-associated architecture
  • Potential employers and clients of computing-associated architects

Ethics

Your ethics are that set of abstract principles and measurable standards you use to enable you to think and act as rationally as possible.  To purposely thwart your ability to think and act rationally, or to allow that to happen through neglect, is unethical.

One can derive an ethic by first understanding which beliefs, behaviors, and other factors thwart rational thinking in oneself, and then determining, by experiment, the principles and standards that allow one to manage one’s roadblocks to rationality.  Ethics has nothing to say about the content of your rational thought, for that is the realm of morality.

http://en.wikipedia.org/wiki/Ethics

Memory, Irreversibility and Transactions

A “system” is a finite set of memory components interrelated through causative event maps.

Phew, that was a mouthful!  What does that mean?

Memory is the ability of matter to change state and maintain that state for a non-zero period of time.  At the smallest scales of existence, atoms have memory when, for instance, chemical changes influence the electron configuration of those atoms.  The ability of paper to hold graphite markings throughout its lifetime is also a form of memory.

An event is a directional transfer of energy from one memory component to another, from source to target, that induces a state change in the target lasting for a non-zero period of time.  An event counts as an event only if it alters the memory configuration of its target.  An event map is a set of source/target associations.  Causality is the study of the effects of event maps upon their state-absorbing targets.

To study a system is to study a well-defined, finite set of memory components and the causative event maps which affect those components.  For every system under study, there exists that which lies outside of the system, which we call the system’s environment.  Causative events flow from system to environment, and from environment to system, composing a causative event map called a feedback loop.

Entropy is the degree to which a system has been affected by its causative event map.  Low entropy implies that a system has “room” to absorb new state changes in an unambiguous way.  A set of aligned, untoppled dominoes has low entropy.  High positive entropy implies that a system has attained a degree of ambiguity with regard to its ability to absorb specific kinds of changes.  A set of toppled dominoes has a high degree of entropy relative to “toppling” events.  One can attempt to topple already-toppled dominoes, but the result is ambiguous: it is more difficult for a toppling event (a finger push) to leave evidence than it was before the dominoes fell.  Negative entropy is a condition in which a system is to some degree “reset” so that it can once again, unambiguously, absorb more events than it could before.  To induce negative entropy into a system of toppled dominoes is to set them back up again to be retoppled.
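The domino example can be sketched as a toy model in code.  Everything here (the `Domino` class and its methods) is my own illustrative construction, not a standard formalism:

```python
# Toy model of the domino example: each domino is a memory component
# that can unambiguously absorb exactly one "topple" event.

class Domino:
    def __init__(self):
        self.toppled = False  # low-entropy state: room to absorb an event

    def topple(self):
        """Absorb a topple event.  Returns True if the event left an
        unambiguous trace, False if the state was already saturated."""
        if self.toppled:
            return False  # ambiguous: no new evidence can be recorded
        self.toppled = True
        return True

    def reset(self):
        """Negative entropy: stand the domino back up (work supplied
        by the outer environment)."""
        self.toppled = False

row = [Domino() for _ in range(5)]
absorbed = [d.topple() for d in row]   # all True: low entropy, events recorded
ignored = [d.topple() for d in row]    # all False: the row is saturated
for d in row:
    d.reset()                          # "reset" so events can be absorbed again
print(absorbed, ignored, [d.toppled for d in row])
```

The point of the sketch is only that a saturated component returns an ambiguous result (`False`) until something outside the row pays to reset it.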

All physical systems tend to increase in measures of entropy over time.  They do so because they have memory and exhibit hysteresis.  To memorize a change is to freeze that change in time.  Changes induced by previous events interfere with the ability of new events to be absorbed.  A thermodynamically hot system imparts kinetic events to the cold systems it is connected to, at the cost of the energy stored in its own memory.  Slowly, the cold systems absorb the kinetic energy of the hot system until a point is reached at which the cold memory systems reach capacity, or become saturated.  Such a point of memory capacity saturation is called “equilibrium”.  If the cold system had no memory, for instance if it were a vacuum, it would never increase in temperature, and the hot system would eventually become absolutely cold, since it would be connected to a system with an infinite capacity to absorb events.
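A minimal simulation, under the simplifying assumption that energy moves in fixed quanta, shows the saturation point the paragraph describes.  The `exchange` function and its parameters are hypothetical names of my own:

```python
# Toy simulation of the hot/cold exchange: the hot system leaks fixed
# "kinetic events" (quanta) into a cold system with finite memory capacity.

def exchange(hot_energy, cold_energy, cold_capacity, quantum=1):
    """Transfer quanta from hot to cold until the cold side saturates
    (equilibrium) or the hot side is exhausted."""
    steps = 0
    while hot_energy > 0 and cold_energy < cold_capacity:
        hot_energy -= quantum
        cold_energy += quantum
        steps += 1
    return hot_energy, cold_energy, steps

# Finite cold capacity: the exchange halts at saturation ("equilibrium").
print(exchange(100, 0, 30))            # -> (70, 30, 30)

# The "vacuum" limit: unbounded capacity drains the hot system completely.
print(exchange(100, 0, float('inf')))  # -> (0, 100, 100)
```

The second call illustrates the closing sentence above: connected to an infinite-capacity absorber, the hot system gives up everything.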

As noted by Erwin Schrödinger, life in general has a “habit” of reversing entropy and in fact could be defined by this single, dominant habit.  Lifeless physical systems tend towards maximum positive entropy and tend to remain that way.  Life, on the other hand, does its damnedest to reverse entropy.  For life, it is not enough merely to keep entropy from increasing.  Like any system, life that is saturated to the limit of its information capacity can fail to adapt to a changing environment.  Life is a process through which its subsystems are continually de-saturated in order to make room for new information.  Life depends on entropy reversal.

This is not to say that entropy reversal does not happen to lifeless systems; entropy may be reversed here and there and for short periods of time.  Random, isolated reversals of entropy in any system however are always—even in the case of life—compensated for by an increase of entropy in the outer environment.  Ultimately, the Great Environment we call the Universe is continually losing more and more of its ability to unambiguously absorb new events.  The arrow of time since the Big Bang is the story of how the memory components of the Universe are reaching capacity saturation.

The metaphor of the economic transaction is useful for describing the flow of events leading to entropy reversal.  Financial transactions follow the same entropy build-up and subsequent decrease.  Even in the simplest of cases, financial participants form a “memory system” which saturates before it collapses.  Work is done between participants before money is exchanged.  The exchange of money allows the information of the transaction to “compress”, and entropy to reverse in the well-defined, temporary system of the particular transaction.  This entropy reversal occurs, of course, at the expense of the outer environment.  Quantum transactions also follow the same build-up and tear-down in terms of the memory capacities of participating elements of matter.

For true de-saturation to occur within a system, the system’s memory must be irreversibly erased.  If memory erasure were reversible, then memory would not have been erased and the system would have remained saturated.  “Reversible” memory loss is not true memory loss, but an illusion, a shuffling, a card trick.  Irreversibility, however, comes at a price for a system.  One can shuffle sand in a sandbox from one side to the other, but to truly increase the capacity of a sandbox one must expend energy to remove sand from it and return that sand to the outer environment.  “Irreversibility”, however, is not some separate, measurable feature of entropy reversal, but a necessary part of its definition.  If a transaction is reversible, then entropy was not reversed.  If entropy has not been reversed, either partially or completely, then the transaction metaphor does not apply.  Irreversibility is a necessary test for determining the appropriateness of the transaction metaphor.
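The sandbox analogy can be made concrete in a short sketch; the `Sandbox` class and its methods are purely illustrative assumptions of mine:

```python
# The sandbox analogy: shuffling sand between halves is reversible and
# frees no capacity; only irreversible removal to the environment does.

class Sandbox:
    def __init__(self, capacity, sand):
        self.capacity = capacity
        self.left = sand
        self.right = 0

    def shuffle(self, amount):
        """Reversible: moves sand within the system; capacity is unchanged."""
        moved = min(amount, self.left)
        self.left -= moved
        self.right += moved

    def remove(self, amount):
        """Irreversible (from the system's point of view): sand leaves for
        the outer environment, genuinely freeing capacity."""
        removed = min(amount, self.left + self.right)
        take_right = min(removed, self.right)
        self.right -= take_right
        self.left -= removed - take_right
        return removed

    def free_capacity(self):
        return self.capacity - (self.left + self.right)

box = Sandbox(capacity=10, sand=10)
box.shuffle(4)
print(box.free_capacity())  # 0: shuffling is the "card trick"
box.remove(3)
print(box.free_capacity())  # 3: only removal de-saturates the system
```

Shuffling leaves `free_capacity()` untouched no matter how much is moved; only `remove`, which expels sand from the system entirely, creates room.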

It is Always a People Problem. Always.

There are no such things as technology problems, only people problems.

No technology can build itself, nor use itself, nor correct its own problems.  Even self-replicating machines, built using any technology in use (or even in conception) today, would merely execute the delayed choice of their builders.  Consider the case of a man, eager to protect his home against theft, who installs an anti-theft device that would kill any unwanted intruder, perhaps with a bullet to the head.  The homeowner’s device is commonly called a booby trap.  One day, while the homeowner is away, an intruder enters the home and is killed.  Is the homeowner responsible?  You betcha!  The homeowner may claim he is not responsible because he did not pull the trigger directly, but in the end he made a choice to apply extreme prejudice to any intruder, and he developed a device to execute that delayed choice.  The homeowner’s booby trap did not kill the intruder; the homeowner did.  Every action of any technology, including any act of construction, any act of repair, or any act of use, is ultimately the extended action of human beings.

No technology is a perfect fit for any problem and all technologies come with trade-offs associated with their use.  Even survival comes with its own set of trade-offs.  It is the responsibility of human beings to understand their problems to the best of their abilities, to understand the trade-offs associated with the technology options before them, and to choose appropriate technologies wisely.  Trade-off balancing does not happen on its own.  Humans are the ultimate arbiters of which technology problems they choose to live with.

If all humans were to vanish from this Universe tomorrow, there would be no human problems of any kind.  Human technologies would instantly cease being human technologies and would merely exist as artifacts of matter like any other.  At the same instant of universal human extinction, all “problems” would similarly vanish.

This is not merely an academic exercise in ethics.  The implications of failing to understand this point can be tremendous.  If the homeowner in my delayed-choice example had understood his culpability ahead of time, would he have been so eager to create his intruder-killing device?  The failure to understand the concept of delayed choice leads, in business, law, and politics, to a class of problem called the moral hazard.  Failure to understand this critical point about technology, in particular computing technology, can cause some people to impart “magical” qualities to technologies that they do not have, which can skew expectations and lead to project and business failure.

No, no, no.  The only kinds of problems which exist in this world are people problems, by definition.  If you doubt that, then find a way to kill all of humanity right now and watch all problems simply vanish away the moment before you and I cease to be.

Metaphysical Ontology

When creating an object-oriented analysis of anything, perhaps a business problem, do you assume that the categorical hierarchy you develop reflects the “true nature” of the phenomena you are analyzing?  Do you assume a metaphysical ontology?

http://en.wikipedia.org/wiki/Ontology
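To make the question concrete, here is a small, hypothetical sketch of two defensible object-oriented categorizations of the same phenomenon; all class names are my own invention:

```python
# Two equally workable object-oriented categorizations of the same business
# phenomenon.  Neither hierarchy is forced on us by the "true nature" of the
# domain; each is a modeling choice.  (All names here are illustrative.)

# Categorization 1: a customer IS-A kind of person.
class Party: ...
class Person(Party): ...
class Customer(Person): ...

# Categorization 2: "customer" is not a kind of thing at all, but a role
# a person plays within a relationship.
class Person2:
    def __init__(self, name):
        self.name = name
        self.roles = []

class CustomerRole:
    def __init__(self, account_id):
        self.account_id = account_id

alice = Person2("Alice")
alice.roles.append(CustomerRole(account_id=42))

# Both models can answer "is Alice a customer?", yet they carve the world
# differently; to insist that either one reflects the true structure of
# reality is to assume a metaphysical ontology.
print(any(isinstance(r, CustomerRole) for r in alice.roles))  # True
```

The practical question for an analyst is not which hierarchy is metaphysically true, but which categorization serves the problem at hand.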

How does ontological categorization differ from category theory in mathematics?

http://en.wikipedia.org/wiki/Category_theory