Some References for “Architecture Ethics”

I am waiting to obtain some consensus from my IASA peers with regard to the course outline.  Until then, I will talk obliquely about the content as it seems to be shaping up.  For now, let me list some of the references I will include below.  I will also include references to some of the other classics, like Aristotle, Plato, Kant, and Hume, but here are some interesting readings from modern times.

Architecture Ethics

I am now developing an online course on “Architecture Ethics” for IASA.  Currently, I have defined the course objectives as follows.  The target audience is information technology architects and architects-in-training, primarily in North America and Europe, although I hope that Asian students will also find it informative.  (My recent experience in China has provided me with a number of good examples for all students.)

My current introduction:

What do Love Canal and Barclays have in common?  In these very public cases, improper ethical planning arguably encouraged opportunities for immoral action.  As a professional architect you are in a position of leadership and trust, and are responsible for the ethical implications of your decisions and the morality of your actions.  You are responsible for the ethical planning of your daily work and long-term career, including the proper selection of projects, the identification of collaborative environments that can enable or hinder success, the avoidance of moral risks to employer and customer, the challenges of regulatory and legal frameworks, and even the determination of proper compensation for your effort and risks.  This course will introduce you to concrete skills that will help you recognize potential ethical failures in the practice of computing-associated architecture, strategies to mitigate or otherwise compensate for those failures, and ultimately, simply put, how to architect well.

After completing this course, you will be able to:

  • Identify some of the highest-risk factors to your project and career success, and strategies to counter them.
  • Identify financial impacts of ethical decision making in architecture.
  • Identify and communicate additional ethical considerations for your particular community, industry, employer, and job.
  • Effectively communicate the value of professional architecture.
  • Develop an ethical context, or “Collaborative Viewpoint,” for your Architecture Description.
  • Understand why the ethical context is the proper frame within which you should understand everything you do as a professional architect, and why IASA exists.

Target audience:

  • Information technology architects, solution architects, and enterprise architects
  • Students training for a career in computing-associated architecture
  • Potential employers and clients of computing-associated architects


Your ethics are that set of abstract principles and measurable standards you use to enable you to think and act as rationally as possible.  To purposely thwart your ability to think and act rationally, or to allow that through neglect, is unethical.

One can derive an ethic by first understanding which beliefs, behaviors and other factors thwart rational thinking in oneself, and second by determining, through experiment, the principles and standards that allow one to manage one’s roadblocks to rationality.  Ethics has nothing to say about the content of your rational thought, for that is the realm of morality.

It is Always a People Problem. Always.

There are no such things as technology problems, only people problems.

No technology can build itself, nor use itself, nor correct its own problems.  Even self-replicating machines, built using any technology in use (or even in conception) today, would merely execute the delayed choice of their builders.  Consider the case of a man, eager to protect his home against theft, who installs an anti-theft device which would kill any unwanted intruder, perhaps with a bullet to the head.  The homeowner’s device is commonly called a booby trap.  One day, while the homeowner is away, an intruder enters the home and is killed.  Is the homeowner responsible?  You betcha!  The homeowner may claim he is not responsible because he did not pull the trigger directly, but in the end he made a choice to apply extreme prejudice to any intruder and he developed a device to execute that delayed choice.  The homeowner’s booby trap did not kill the intruder; the homeowner did.  Every action of any technology, including any act of construction, any act of repair, or any act of use, is ultimately the extended action of human beings.

No technology is a perfect fit for any problem and all technologies come with trade-offs associated with their use.  Even survival comes with its own set of trade-offs.  It is the responsibility of human beings to understand their problems to the best of their abilities, to understand the trade-offs associated with the technology options before them, and to choose appropriate technologies wisely.  Trade-off balancing does not happen on its own.  Humans are the ultimate arbiters of which technology problems they choose to live with.

If all humans were to vanish from this Universe tomorrow, there would be no human problems of any kind.  Human technologies would instantly cease being human technologies and would merely exist as artifacts of matter like any other.  At the same instant of Universal human extinction, all “problems” would also similarly vanish.

This is not merely an academic exercise in ethics.  The implications of failing to understand this point can be tremendous.  If the homeowner in my delayed choice example had understood his culpability ahead of time, would he have been so eager to create his intruder-killing device?  In business, law and politics, failure to understand delayed choice leads to a class of problems called moral hazards.  Failure to understand this critical point about technology, in particular computing technology, can cause some people to impart “magical” qualities to technologies which the technologies do not have, which can skew expectations and lead to project and business failure.

No, no, no.  The only kinds of problems which exist in this world are people problems, by definition.  If you doubt that, then find a way to kill all of humanity right now and watch all problems simply vanish away the moment before you and I cease to be.

What is a “Technology”?

On Wikipedia, a decent definition of “technology” has been posted:

Technology is the making, usage and knowledge of tools, techniques, crafts, systems or methods of organization in order to solve a problem or serve some purpose.

Yes, the concept of “technology” is as general as that.  With patents in mind, can a “technology” be a pure product of the mind?  As an example, is an epistemological framework a technology?  What is a “tool”?

Regardless of the assumption or rejection of metaphysical dualism, is the mind itself not a tool?  Is the mind not a collection of matter and states which can be manipulated by human agency to achieve a goal of that agency?  If self-reference is where we draw the line at “abstract idea” versus technology, where does self-reference end and the “world” begin?  Are arms and legs tools, or merely “self”?

If I could take a pill which would transform one of my eyes into a Steve Austin, bionic “super eye”, is that eye merely “me” or is it a technology?  If I were to integrate nano-scale technology into my physiology, does that nano-scale technology cease to be a technology and become “me”?  What if the DNA of a future child were to be manipulated so that, once that child was born, their body would be impregnated with a technology produced by the programming of that child’s DNA?  Would the programming itself be considered a technology?

I reject metaphysical dualism.  The biological “brain”, the epistemological mind and the non-nervous aspects of the human body are one.  No separation exists.  I am tempted to say that any goal-suiting change we might make to any state of matter, even the memories our minds actively create, is a technology.  This line of reasoning points the way to a future state of absurdity with regard to United States patent law.  Either, one day, the prohibition against patenting “abstract ideas” will be lifted, or the entire patent regime will crumble.  I am not sure which.

“Bilski v. Kappos” Brings Us Closer to Patenting Abstract Ideas

A fascinating discussion with regard to Bilski v. Kappos, here:

I have heard other opinions close to this: we seem to be getting closer to allowing the patenting of abstract ideas.  That is, the Supreme Court is hesitant to categorically rule out any kind of patent, including “pure” business methods, for fear that doing so would preclude future technologies which we haven’t even dreamed about yet.

The Bilski decision strengthened the patentability of software.

Jefferson on the “Embarrassment” of Patents

Thomas Jefferson, making a case for the “embarrassment” of patents:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it.

-Thomas Jefferson, in a letter to Isaac McPherson, 13 August 1813

The Ethical Context

My cousin Rebecca Littman is correct: too many topics in my book!  My colleague Bill Barr also has the same great criticism.  Indeed, there are several books I need to generate from this.

I am converging on many concrete theses and, of course, multiple books and articles to focus on over time.  I will have to start at the philosophical level first, however, since that is where all of my axioms and other assumptions will be defined and described.

Perhaps my first book will be written to satisfy this thesis:

The case for the ethical context in I/T architecture

Ultimately, the “ethical context” is the source of the most valuable quality attributes which must be balanced in any architecture.  These qualities, amongst others, are the difference between profit and loss, legal and illegal, even life or death:

  • Epistemology, semantics and standards of proof
  • Contract and charter trees/privilege and responsibility
  • Profit and cost responsibility
  • Value in ex ante versus ex post consideration (short term versus long term thinking)
  • Information quality in principal-agent relationships
  • Transaction costs (in the generalized sense, borrowing from both economic and legal theory)
  • Game theory (prisoner’s dilemma, chicken, information cascades)
  • Cognitive illusion and fallacy (responsibility for truth)
  • Rules versus standards (responsibility to protect)
  • Statute, regulation and law
  • Collaboration, property, markets, efficiency, rent-seeking, regulatory capture: the industry context
  • Market suppression: the business context
  • Architectural governance as a set of ethical virtues
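
The game-theory entry in the list above can be made concrete with a minimal sketch of the prisoner’s dilemma; the payoff values below are hypothetical, chosen only to satisfy the dilemma’s defining inequalities.

```python
# A minimal prisoner's dilemma sketch.  Payoff values are hypothetical;
# higher is better for the player whose payoff is listed first.
PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is the best response to either opponent move, even though mutual
# cooperation (3, 3) beats mutual defection (1, 1) for both players.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

The point for the ethical context is that individually rational moves can produce a collectively worse outcome, which is why the collaborative environments and governance structures listed above matter.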

Is “Bilski v. Kappos” the Beginning of the End for Patents?

Listen to, or read the Supreme Court’s fascinating oral arguments for Bilski v. Kappos.  I can’t help but think: given the difficulty in defining “machine” or “transformation”, does this portend a day when patents are no longer properly definable?  Is the distinction between “abstract idea” and “technology” ultimately arbitrary?

If you care at all about software patents and intellectual property, you really should pay attention to this case.  (If you are an I/T architect, I strongly suggest you do.)

In the end, the Supreme Court said that the machine-or-transformation test is not the only test that determines patent eligibility and that all aspects of the patent must be considered.  Though it produced no definite rules, this opinion leaves open the possibility of patenting pure software algorithms.  To rule otherwise, the Supreme Court reasoned, might prevent unknown future technologies of importance from being patented.  (The ambiguity is meant to err on the side of the unknown.)

Listen to, or read the discussion for yourself.