Read this interesting Edge.org interview between Daniel Kahneman and cognitive scientist Gary A. Klein. Dr. Klein is a co-author of Working Minds and of many other works in the fields of cognitive task analysis and naturalistic decision making.
I am waiting to reach some consensus with my IASA peers regarding the course outline. Until then, I will speak only obliquely about the content as it seems to be shaping up. For now, let me give you a list of some of the references I will include. I will also reference some of the other classics, like Aristotle, Plato, Kant, and Hume, but here are some interesting readings from modern times:
- Exit, Voice, and Loyalty, Albert O. Hirschman, 1970
- Human Scale Development (English), Manfred Max-Neef, 1991
- The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology, Robert Wright, 1994
- The Legal Analyst: A Toolkit for Thinking about the Law, Ward Farnsworth, 2007
- The Craftsman, Richard Sennett, 2008
- The Better Angels of Our Nature: Why Violence Has Declined, Steven Pinker, 2011
Danut Prisacaru has started a new blog, The Software Philosopher. I look forward to hearing more from him, and to participating in the blog!
I am now developing an online course on “Architecture Ethics” for IASA. Currently, I have defined the course objectives as follows. The target audience is information technology architects and architects-in-training, primarily in North America and Europe, although I hope that Asian students will also find it informative. (My recent experience in China has provided me with a number of good examples for all students.)
My current introduction:
What do Love Canal and Barclays have in common? In both of these very public cases, improper ethical planning arguably created opportunities for immoral action. As a professional architect, you are in a position of leadership and trust, and you are responsible for the ethical implications of your decisions and the morality of your actions. You are responsible for the ethical planning of your daily work and long-term career: the proper selection of projects, the identification of collaborative environments that can enable or hinder success, the avoidance of moral risks to employer and customer, meeting the challenges of regulatory and legal frameworks, and even the determination of proper compensation for your effort and risk. This course will introduce you to concrete skills for recognizing potential ethical failures in the practice of computing-associated architecture, strategies to mitigate or otherwise compensate for those failures, and ultimately, simply put, how to architect well.
After completing this course, you will be able to:
- Identify some of the highest-risk factors to your project and career success, and strategies to counter them.
- Identify the financial impacts of ethical decision making in architecture.
- Identify and communicate additional ethical considerations for your particular community, industry, employer, and job.
- Effectively communicate the value of professional architecture.
- Develop an ethical context, or “Collaborative Viewpoint,” for your Architecture Description.
- Understand why the ethical context is the proper frame within which you should understand everything you do as a professional architect, and why IASA exists.
The course is intended for:
- Information technology architects, solution architects, and enterprise architects
- Students training for a career in computing-associated architecture
- Potential employers and clients of computing-associated architects
For the past 18 months, I have been buried in the PLC (industrial controls) world.
My original mission was to “rethink” the approach to PLC software application development, because the state of the art of these systems is perceived as dismal. PLC systems are seen as infested with bugs, difficult to document, difficult to maintain, difficult to expand, or some painful combination thereof. PLC culture has encouraged “one-off” application development and disregard for reuse and team development, and it tends to be ignorant of, or to eschew, automation in testing and debugging methods. PLC development culture has not demanded the integration of the advances in software engineering from the past 20 years or so, in terms of both tools and techniques. It is amazing, for instance, how many PLC developers are ignorant of the concept of unit testing or even source code revision control. The lack of this demand may stem from the culture’s intellectual insularity and innocent ignorance.
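The unit-testing gap is easy to illustrate. As a hypothetical sketch (not drawn from any vendor toolchain), the familiar start/stop seal-in rung can be modeled as a pure function in a general-purpose language and tested entirely off-hardware:

```python
def seal_in(start: bool, stop: bool, motor: bool) -> bool:
    """One scan of a start/stop seal-in rung.

    start: momentary start push-button
    stop:  momentary stop push-button (True = pressed)
    motor: motor output state from the previous scan
    """
    # Ladder logic: motor = (start OR motor) AND NOT stop
    return (start or motor) and not stop

# Unit tests: exercise every scan transition without touching hardware.
assert seal_in(start=True,  stop=False, motor=False) is True   # start latches
assert seal_in(start=False, stop=False, motor=True)  is True   # seal-in holds
assert seal_in(start=False, stop=True,  motor=True)  is False  # stop releases
assert seal_in(start=True,  stop=True,  motor=False) is False  # stop wins
```

Nothing here is exotic; the point is that once rung logic is expressed as a function of inputs and prior state, decades-old testing practice applies directly.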
My original mission was to overcome the downside risks of the usual PLC development culture and to create a new culture, a new development methodology, and new infrastructure that could bypass the usual shortcomings and help us create applications of higher quality than had been expected to date. The implementation of this mission, however, is expensive and fraught with risk (e.g., time-to-completion risk), mostly because of the dismal quality of the vendor-supplied tools that PLC developers have no choice but to use. These risks were known from the beginning. The check-writer’s tolerance for such risks, however, was not known for sure; only what they said they could tolerate was known. The spoken and actual tolerances for risk turned out to be very different after all. No one was surprised.
I just completed ITIL foundations training. I’ll let you all know later, when I find out, if I passed the test. [Update: I did.]
What caught my attention most during training is that the ITIL library writers, in my opinion, correctly identified economic value as a combination of (marginal) utility and warranty (irreversibility). Somewhere along the line, I/T practitioners discovered what few economists (save for some, like Hernando de Soto Polar) bothered to factor into so many economic formulations: utility is fine, but if the economic actor fails to perceive that their utility is theirs to keep, then the sense of economic value falls. While property rights (de Soto) alone do not economic value make, they are necessary prerequisites for any functioning economy. In information technology, a service like Google provides great utility, but if it were perceived as unreliable, its overall economic value would drop through the floor.
Of course, the ITIL “utility + warranty” model is itself a little simplistic. Max-Neef breaks utility up further:
- subsistence
- protection (security, warranty)
- affection
- understanding
- participation
- idleness
- creation
- identity
- freedom
Max-Neef provides a nice balance of qualities, certainly, but I feel that protection/security/warranty/irreversibility plays a very specific role in economic transactions because of the way our brains are built. I believe it remains useful to break out the qualities associated with irreversibility (security, protection, warranty) into a separate, analyzable category of study. For me, ITIL’s “utility + warranty” description of economic value is a great model to use.
Your ethics are that set of abstract principles and measurable standards you use to enable you to think and act as rationally as possible. To purposely thwart your ability to think and act rationally, or to allow that through neglect, is unethical.
One can derive an ethic by first understanding which beliefs, behaviors, and other factors thwart rational thinking in oneself, and then determining, by experiment, the principles and standards that allow one to manage one’s roadblocks to rationality. Ethics has nothing to say about the content of your rational thought, for that is the realm of morality.