Read this interesting interview by Daniel Kahneman at Edge.org with cognitive scientist Gary A. Klein. Dr. Klein is the author of Working Minds and many other works in the field of cognitive task analysis and naturalistic decision making.
I am waiting to obtain some consensus from my IASA peers with regard to the course outline. Until then, I will talk only obliquely about the content as it seems to be shaping up. For now, let me list below some of the references I will include. I will also include references to some of the classics, like Aristotle, Plato, Kant, and Hume, but here are some interesting readings from modern times.
- Exit, Voice, and Loyalty, Albert O. Hirschman, 1970
- Human Scale Development (English), Manfred Max-Neef, 1991
- The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology, Robert Wright, 1994
- The Legal Analyst: A Toolkit for Thinking about the Law, Ward Farnsworth, 2007
- The Craftsman, Richard Sennett, 2008
- The Better Angels of Our Nature: Why Violence Has Declined, Steven Pinker, 2011
Danut Prisacaru has started a new blog, The Software Philosopher. I look forward to hearing more from him, and to participating in this blog!
I am now developing an online course on “Architecture Ethics” for IASA. Currently, I have defined the course objectives as follows. The target audience is information technology architects and architects-in-training, primarily in North America and Europe, although I hope that Asian students will also find it informative. (My recent experience in China has provided me with a number of good examples for all students.)
My current introduction:
What do Love Canal and Barclays have in common? In these very public cases, improper ethical planning arguably encouraged opportunities for immoral action. As a professional architect you are in a position of leadership and trust, and you are responsible for the ethical implications of your decisions and the morality of your actions. You are responsible for the ethical planning of your daily work and long-term career, including the proper selection of projects, the identification of collaborative environments that can enable or hinder success, the avoidance of moral risks to employer and customer, the challenges of regulatory and legal frameworks, and even the determination of proper compensation for your effort and risks. This course will introduce you to concrete skills that will help you recognize potential ethical failures in the practice of computing-associated architecture, strategies to mitigate or otherwise compensate for those failures, and ultimately, simply put, how to architect well.
After completing this course, you will be able to:
- Identify some of your highest risk factors to project and career success, and strategies to counter them.
- Identify financial impacts of ethical decision making in architecture.
- Identify and communicate additional ethical considerations for your particular community, industry, employer, and job.
- Effectively communicate the value of professional architecture.
- Develop an ethical context, or “Collaborative Viewpoint” for your Architecture Description.
- Understand why the ethical context is the proper frame within which you should understand everything you do as a professional architect, and why IASA exists.
This course is intended for:
- Information technology architects, solution architects, and enterprise architects
- Students training for a career in computing-associated architecture
- Potential employers and clients of computing-associated architects
For the past 18 months, I have been buried in the PLC (industrial controls) world.
My original mission was to “rethink” the approach to PLC software application development, because the state of the art of these systems is perceived as dismal. PLC systems are seen as infested with bugs, difficult to document, difficult to maintain, difficult to expand, or some painful combination thereof. PLC culture has encouraged “one-off” application development and a disregard for re-use and team development, and it tends to ignore or eschew automated testing and debugging methods. PLC development culture has not demanded the integration of the advances in software engineering of the past 20 years or so, in terms of both tools and technique. It is amazing, for instance, how many PLC developers are ignorant of the concept of unit testing, or even of source code revision control. The lack of this demand may stem from the intellectual insularity of the culture and from innocent ignorance.
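To illustrate the kind of practice I mean, here is a minimal sketch of unit-testing an interlock rule. It is in Python purely for illustration; real PLC logic would live in an IEC 61131-3 language, and the function and signal names here are hypothetical.

```python
# Illustrative only: a PLC-style safety interlock expressed as a plain
# Python function, so that it can be exercised by an automated unit test.

def press_may_run(guard_closed: bool, e_stop_ok: bool, parts_in_place: bool) -> bool:
    """The press may run only when every safety condition holds."""
    return guard_closed and e_stop_ok and parts_in_place

def test_press_interlock():
    # The all-clear case must permit the press to run.
    assert press_may_run(True, True, True) is True
    # Any single failed condition must block the press.
    assert press_may_run(False, True, True) is False
    assert press_may_run(True, False, True) is False
    assert press_may_run(True, True, False) is False

test_press_interlock()
```

The point is not the triviality of the rule but the habit: every change to the logic is re-checked by the test in seconds, instead of by hand at the machine.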
My original mission was to overcome the downside risks of the usual PLC development culture and create a new culture, a new development methodology, and new infrastructure that could bypass the usual shortcomings and help us create applications of higher quality than had been expected to date. The implementation of this mission, however, is expensive and fraught with risk (e.g., time-to-completion risk), mostly because of the dismal quality of the vendor-supplied tools that PLC developers have no choice but to use. These risks were known from the beginning. The check-writer’s tolerance for such risks was not known for sure, however; only what they said they could tolerate was known. The spoken and the actual tolerance for risk turned out to be very different after all. No one was surprised.
I just completed ITIL foundations training. I’ll let you all know later, when I find out, if I passed the test. [Update: I did.]
What caught my attention most during training is that the ITIL library writers, in my opinion, correctly identified economic value as a combination of both (marginal) utility and warranty (irreversibility). Somewhere along the line, I/T practitioners discovered what few economists (save for some, like Hernando de Soto Polar) bothered to factor into so many economic formulations: utility is fine, but if the economic actor fails to perceive that their utility is theirs to keep, then the sense of economic value falls. While property rights (de Soto) alone do not economic value make, they are necessary prerequisites for any functioning economy. In information technology a service like Google provides great utility, but if it were perceived as an unreliable service its overall economic value would drop through the floor.
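A toy model may make the point concrete. This is my own construction, not anything from the ITIL text, and the numbers are made up: treat warranty as the probability that the actor actually gets to keep the utility, and perceived value collapses as that probability falls.

```python
# Toy model (my construction, not ITIL's): perceived economic value as
# utility discounted by warranty, where warranty is the perceived
# probability (0..1) that the service reliably delivers its utility.

def perceived_value(utility: float, warranty: float) -> float:
    """utility: benefit when the service works; warranty: 0..1 reliability."""
    return utility * warranty

reliable = perceived_value(100.0, 0.999)   # high utility, trusted service
unreliable = perceived_value(100.0, 0.30)  # same utility, unreliable service
# Identical utility, yet the unreliable service is worth a fraction of the
# reliable one -- the "drop through the floor" effect described above.
```

The multiplicative form is only one choice; the essential feature is that value goes to zero with warranty no matter how large the utility is.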
Of course, the ITIL “utility + warranty” model is itself a little simplistic. Max-Neef breaks up utility further:
- subsistence
- protection (security, warranty)
- affection
- understanding
- participation
- idleness (leisure)
- creation
- identity
- freedom
Max-Neef provides a nice balance of qualities, certainly, but I feel that protection/security/warranty/irreversibility plays a very specific role in economic transactions because of the way our brains are built. I believe it remains useful to break out the qualities associated with irreversibility (security, protection, warranty) into a separate, analyzable category of study. For me, ITIL’s “utility + warranty” description of economic value is a great model to use.
Your ethics are that set of abstract principles and measurable standards you use to enable you to think and act as rationally as possible. To purposely thwart your ability to think and act rationally, or to allow that through neglect, is unethical.
One can derive an ethic by first understanding which beliefs, behaviors, and other factors thwart rational thinking in oneself, and then determining, by experiment, the principles and standards that allow one to manage those roadblocks to rationality. Ethics has nothing to say about the content of your rational thought, for that is the realm of morality.
Technology is automating more and more jobs. We software-oriented architects are the “grunts” that are helping to usher this process along. Indeed, we are working to automate ourselves out of traditional employment. We have been creating conditions which favor permanent entrepreneurship for every one of us, and which do not favor traditional employment for any of us.
From a Coasean economics perspective, information technology is helping to reduce general transaction costs worldwide such that transaction costs internal to firms and external to them are approaching parity. In other words, it is increasingly nonsensical for any company to bother hiring employees. This does not mean, however, that companies do not need people, nor does it mean that future consumers do not need the products of your hard work! Read Michael’s article for his detailed analysis of this phenomenon.
How can I write a book on a “theory of I/T architecture”, of the philosophy and science of I/T architecture, without addressing this trend? I can’t. I need to discuss where we have been as professionals, where we are, and where we are going. I must play the futurist and make predictions. Of course, some of my predictions will be shown to have been correct over time, and some wrong, but stick my neck out I must! There is no way I can write such a book, sit on the sidelines, and simply throw up my arms and say, “I have no idea what to do next.” If I am not attempting to help my readers make critical decisions about their personal futures, then what good would I be as an author? Why should you bother to read what I have to write?
We ended up studying something that we call “heuristics and biases”. Those were shortcuts, and each shortcut was identified by the biases with which it came. The biases had two functions in that story. They were interesting in themselves, but they were also the primary evidence for the existence of the heuristics. If you want to characterize how something is done, then one of the most powerful ways of characterizing the way the mind does anything is by looking at the errors that the mind produces while it’s doing it because the errors tell you what it is doing. Correct performance tells you much less about the procedure than the errors do.
If it weren’t for Nature’s “cheap and dirty tricks” of the mind, we would not be alive today. On the other side of the coin is the science of information saturation in complex adaptive systems, as told by Geoffrey West, also at Edge.org,
The work I got involved in was to try to understand these scaling laws. And to make it a very short story, what was proposed apart from the thinking was, look, this is universal. It cuts across the design of organisms. Whether you are insects, fish, mammals or birds, you get the same scaling laws. It is independent of design. Therefore, it must be something that is about the structure of the way things are distributed.
You recognize what the problem is. You have 10^14 cells. You have this problem. You’ve got to sustain them, roughly speaking, democratically and efficiently. And however natural selection solved it, it solved it by evolving hierarchical networks.
There is a very simple way of doing it. You take something macroscopic, you go through a hierarchy and you deliver them to very microscopic sites, like for example, your capillaries to your cells and so on. And so the idea was, this is true at all scales. It is true of an ecosystem; it is true within the cell. And what these scaling laws are manifesting are the generic, universal, mathematical, topological properties of networks.
Read the whole article, especially the part about network saturation along S-curves, and about singularity/collapse of those networks. Also note his discovery about the growth curve of companies, which is a semi-vindication of Coasean economics.
A most excellent article, “The Cognitive Science of Rationality”. I particularly like the discussion of error types.
These modern models of cognitive science are great, but they only explain the mechanisms used to desaturate our neural networks. What is missing is a good method to differentiate phenomena as a function of whether they are a result of network saturation or desaturation. At this time, I have no reliable means of differentiating the two. For instance, is autism a problem of heavy saturation or of excessive desaturation?