Some References for “Architecture Ethics”

I am waiting to obtain some consensus from my IASA peers with regard to the course outline.  Until then, I will talk obliquely about the content as it seems to be shaping up.  For now, let me give you a list of some of the references I will include below.  I will also include references to some of the other classics, like Aristotle, Plato, Kant, and Hume, but here are some interesting readings from modern times.

Technological Unemployment, the Architecture Profession, and My Worth as an Author

I believe Michael Ferguson’s analysis of the future, jobs, and technological unemployment is essentially correct.

Technology is automating more and more jobs.  We software-oriented architects are the “grunts” that are helping to usher this process along.  Indeed, we are working to automate ourselves out of traditional employment.  We have been creating conditions which favor permanent entrepreneurship for every one of us, and which do not favor traditional employment for any of us.

From a Coasean economics perspective, information technology is helping to reduce general transaction costs worldwide such that transaction costs internal to firms and those external to them are approaching parity.  In other words, it is increasingly nonsensical for any company to bother hiring employees.  This does not mean, however, that companies do not need people, nor does it mean that future consumers do not need the products of your hard work!  Read Michael’s article for his detailed analysis of this phenomenon.
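Coase’s logic can be caricatured in a few lines of code.  This is a toy model with made-up numbers, purely to illustrate the direction of the trend, not anything from Michael’s article:

```python
# Toy model of Coase's theory of the firm (illustrative numbers only).
# A firm hires an employee when coordinating work internally is cheaper
# than contracting for the same work on the open market.

def prefer_employee(internal_cost: float, external_cost: float) -> bool:
    """Return True when in-house coordination beats the market."""
    return internal_cost < external_cost

# Historically, market (external) transaction costs were high relative
# to internal ones, so firms hired employees.
print(prefer_employee(internal_cost=10.0, external_cost=25.0))  # True

# As information technology drives external costs toward parity with
# internal ones, the case for traditional employment evaporates.
print(prefer_employee(internal_cost=10.0, external_cost=10.0))  # False
```

The point of the sketch is only that the hiring decision flips as the two costs converge; the actual costs, of course, are neither scalar nor static.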

How can I write a book on a “theory of I/T architecture”, of the philosophy and science of I/T architecture, without addressing this trend?  I can’t.  I need to discuss where we have been as professionals, where we are, and where we are going.  I must play the futurist and make predictions.  Of course, some of my predictions will be shown to have been correct over time, some wrong, but stick my neck out I must!  There is no way I can write such a book, sit on the sidelines, and simply throw up my arms and say, “I have no idea what to do next.”  If I am not attempting to help my readers make critical decisions about their personal futures, then what good would I be as an author?  Why should you bother to read what I have to write?

It is Always a People Problem. Always.

There are no such things as technology problems, only people problems.

No technology can build itself, use itself, or correct its own problems.  Even self-replicating machines, built using any technology in use (or even in conception) today, would merely execute the delayed choices of their builders.  Consider the case of a man, eager to protect his home against theft, who installs an anti-theft device that would kill any unwanted intruder, perhaps with a bullet to the head.  The homeowner’s device is commonly called a booby trap.  One day, while the homeowner is away, an intruder enters the home and is killed.  Is the homeowner responsible?  You betcha!  The homeowner may claim he is not responsible because he did not pull the trigger directly, but in the end he made a choice to apply extreme prejudice to any intruder, and he built a device to execute that delayed choice.  The booby trap did not kill the intruder; the homeowner did.  Every action of any technology, including any act of construction, repair, or use, is ultimately the extended action of human beings.

No technology is a perfect fit for any problem and all technologies come with trade-offs associated with their use.  Even survival comes with its own set of trade-offs.  It is the responsibility of human beings to understand their problems to the best of their abilities, to understand the trade-offs associated with the technology options before them, and to choose appropriate technologies wisely.  Trade-off balancing does not happen on its own.  Humans are the ultimate arbiters of which technology problems they choose to live with.

If all humans were to vanish from this Universe tomorrow, there would be no human problems of any kind.  Human technologies would instantly cease being human technologies and would merely exist as artifacts of matter like any other.  At the same instant of universal human extinction, all “problems” would similarly vanish.

This is not merely an academic exercise in ethics.  The implications of failing to understand this point can be tremendous.  If the homeowner in my delayed-choice example had understood his culpability ahead of time, would he have been so eager to create his intruder-killing device?  A lack of understanding of delayed choice leads, in business, law, and politics, to a class of problems called moral hazards.  Failure to understand this critical point about technology, computing technology in particular, can cause some people to impute “magical” qualities to technologies that they do not have, which can skew expectations and lead to project and business failure.

No, no, no.  The only kinds of problems which exist in this world are people problems, by definition.  If you doubt that, then find a way to kill all of humanity right now and watch all problems simply vanish away the moment before you and I cease to be.

Architecture: the Normative Art

Architecture is the normative art.

To be normative is to occupy the “ought” side of David Hume’s is-ought divide.  A positivist, focusing on what is, has no rational method for arriving at what ought to be.  In ethics, for instance, the positivist can document the evidence for the existence of murder throughout human history, but can they arrive, through purely descriptive and deductive means, at the conclusion that murder is unjustified?  What is justice?

To describe an “ought” is to architect.  Regardless of the problem domain, and placing the concept of professionalism aside, the architect is the person who ultimately makes the very human decision of what to value and what is “good” in design.  Though the concept of architecture is, and should be, associated with the act of creation, adherence to ideals, models, standards, and virtues has always been the defining aspect of architecture.

To live an ethical life, the individual must relate to human ideals, virtues, and associated (measurable) standards.  It also requires the ethicist to have identified and communicated those virtues — usually by identifying standards which people can relate to — and to have defined principles and rules for adhering to those virtues.  The ethicist, in human affairs, is a “cultural architect”.

Each of us in the United States is taught to value the virtues of our Constitution.  We expect those in power to embody the virtues described within that document.  Our best measure of that embodiment is the degree to which we notice those in power upholding the principles and rules also described therein.  We rightfully consider James Madison the “chief architect” of the United States, since he was the critical agent who determined the virtues, or qualities, that would make a good country and then designed the legal structure (principles and rules) that would best institutionalize those virtues.  The U.S. Constitution is the architectural description of the United States.

The physicist Freeman Dyson once said,

The bottom line for mathematicians is that the architecture has to be right. In all the mathematics that I did, the essential point was to find the right architecture. It’s like building a bridge. Once the main lines of the structure are right, then the details miraculously fit. The problem is the overall design. [iWise, retrieved 2011-06-01]

What did Freeman Dyson mean by “right”?  Mathematics is essentially a deductive/positivist/descriptive exercise.  I have seen some physicists waste their lives on a cherished theory which “penciled”, “made sense”, was “without mistake”, “was going to overturn Einstein” and all that, yet never predicted new phenomena, let alone reproduced the values of known phenomena.  The problem with the life work of these people was never their math, which was deductively correct, but the initial set of axioms they chose as valuable.  Freeman Dyson, in my opinion, was referring to the “right” choice of axiom and basic principle with regard to the “architecture” of a scientific theory that works.  One must choose axioms which help the theory conform to experimental reality, or not.  The choice of initial axiom is not a deductive exercise, however, but an inductive choice of “ought”.  Those whose life’s work leads to naught chose wrong.  For Freeman Dyson’s part, however, he, along with Paul Dirac, Hans Bethe, Sin-Itiro Tomonaga, Julian Schwinger, and Richard Feynman, is properly known as one of the architects of quantum electrodynamics (QED).  They came to be the architects through their identification of axioms and principles which not only enabled the development of a cohesive set of mathematics, but also led that mathematics to conform, par excellence, to real-world experiment.  In short, they identified the “oughts” of QED.  They chose well.

Regardless of the problem domain, the architect is always that person who breaches the is-ought divide.

Important update from Professor Dyson, here

What is a “Technology”?

On Wikipedia, a decent definition of “technology” has been posted:

Technology is the making, usage and knowledge of tools, techniques, crafts, systems or methods of organization in order to solve a problem or serve some purpose.

Yes, the concept of “technology” is as general as that.  With patents in mind, can a “technology” be a pure product of the mind?  As an example, is an epistemological framework a technology?  What is a “tool”?

Regardless of the assumption or rejection of metaphysical dualism, is the mind itself not a tool?  Is the mind not a collection of matter and states which can be manipulated by human agency to achieve a goal of that agency?  If self-reference is where we draw the line at “abstract idea” versus technology, where does self-reference end and the “world” begin?  Are arms and legs tools, or merely “self”?

If I could take a pill which would transform one of my eyes into a Steve Austin, bionic “super eye”, is that eye merely “me”, or is it a technology?  If I were to integrate nano-scale technology into my physiology, does that nano-scale technology cease to be a technology and become “me”?  What if the DNA of a future child were to be manipulated so that, once that child was born, their body would be impregnated with a technology produced by the programming of that child’s DNA?  Would the programming itself be considered a technology?

I reject metaphysical dualism.  The biological “brain”, the epistemological mind, and the non-nervous aspects of the human body are one.  No separation exists.  I am tempted to say that any goal-suiting change we might make to any state of matter, even the memories we actively create in our minds, is a technology.  This line of reasoning points the way to a future state of absurdity with regard to United States patent law.  Either, one day, the prohibition against patenting “abstract ideas” will be lifted, or the entire patent regime will crumble.  I am not sure which.

“Bilski v. Kappos” Brings Us Closer to Patenting Abstract Ideas

A fascinating discussion with regard to Bilski v. Kappos, here:

I have heard other opinions close to this: we seem to be getting closer to allowing the patenting of abstract ideas.  That is, the Supreme Court is hesitant about categorically ruling out any kind of patent, including “pure” business methods, for fear that to do so would preclude future technologies which we haven’t even dreamed about yet.

The Bilski decision strengthened the patentability of software.

Is “Bilski v. Kappos” the Beginning of the End for Patents?

Listen to, or read, the Supreme Court’s fascinating oral arguments for Bilski v. Kappos.  I can’t help but think: given the difficulty in defining “machine” or “transformation”, does this portend a day when patents are no longer properly definable?  Is the distinction between “abstract idea” and “technology” ultimately arbitrary?

If you care at all about software patents and intellectual property, you really should pay attention to this case.  (If you are an I/T architect, I strongly suggest you do.)

In the end, the Supreme Court said that the machine-or-transformation test was not the only test that determines patent eligibility and that all aspects of the patent must be considered.  Though it did not produce definite rules, this opinion leaves open the possibility of patenting pure software algorithms.  Not to do so, the Supreme Court reasoned, might prevent unknown future technologies of importance from being patented.  (The ambiguity is meant to err on the side of the unknown.)

Listen to, or read the discussion for yourself.