For 18 months, I had been buried in the PLC (industrial controls) world.
My original mission was to “rethink” the approach to PLC software application development, because the state of the art of these systems is perceived as dismal. PLC systems are seen as infested with bugs, difficult to document, difficult to maintain, difficult to expand, or some painful combination thereof. PLC culture has encouraged “one-off” application development, disregarded re-use and team development, and tended to ignore or eschew automated testing and debugging methods. PLC development culture has not demanded the integration of the advances in software engineering of the past 20 years or so, in either tools or technique. It is amazing, for instance, how many PLC developers are ignorant of the concept of unit testing or even source code revision control. The lack of this demand may stem from the culture’s intellectual insularity and innocent ignorance.
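To be concrete about the concept so many PLC shops lack: a unit test exercises one small piece of logic in isolation against known inputs. Here is a minimal sketch in Python; the interlock function and its names are hypothetical, chosen only to echo a typical ladder-logic rung:

```python
import unittest

def motor_output(start_cmd: bool, guard_closed: bool, e_stop: bool) -> bool:
    """Hypothetical interlock logic: energize the motor only when a start
    is commanded, the guard is closed, and the emergency stop is clear."""
    return start_cmd and guard_closed and not e_stop

class TestMotorInterlock(unittest.TestCase):
    def test_normal_start(self):
        self.assertTrue(motor_output(True, True, False))

    def test_open_guard_blocks_start(self):
        self.assertFalse(motor_output(True, False, False))

    def test_e_stop_overrides_everything(self):
        self.assertFalse(motor_output(True, True, True))

if __name__ == "__main__":
    unittest.main()
```

The point is not the language; it is that every rung of logic can be checked automatically, every time the program changes, instead of being re-verified by hand on the plant floor.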
My original mission was to overcome the downside risks of the usual PLC development culture and to create a new culture, a new development methodology, and new infrastructure that could bypass the usual shortcomings and help us create applications of higher quality than had been expected to date. The implementation of this mission, however, is expensive and fraught with risk (e.g., time-to-completion risk), mostly because of the dismal quality of the vendor-supplied tools that PLC developers have no choice but to use. These risks were known from the beginning. The check-writer’s tolerance for such risks, however, was not known for certain; only what they said they could tolerate was known. The spoken and the actual tolerance for risk turned out to be very different after all. No one was surprised.
I just completed ITIL foundations training. I’ll let you all know later, when I find out, if I passed the test. [Update: I did.]
What caught my attention most during training is that the ITIL library writers, in my opinion, correctly identified economic value as a combination of both (marginal) utility and warranty (irreversibility). Somewhere along the line, I/T practitioners discovered what few economists (save for some, like Hernando de Soto Polar) bothered to factor into so many economic formulations: utility is fine, but if the economic actor fails to perceive that their utility is theirs to keep, then the sense of economic value falls. While property rights (de Soto) alone do not economic value make, they are necessary prerequisites for any functioning economy. In information technology, a service like Google provides great utility, but if it were perceived as unreliable, its overall economic value would drop through the floor.
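The “utility + warranty” idea can be caricatured in a few lines. This is my own toy model, not anything from the ITIL library: treat warranty as the perceived probability that the utility is dependably yours, and watch value collapse along with it.

```python
def perceived_value(utility: float, warranty: float) -> float:
    """Toy model (mine, not ITIL's): utility is the benefit on offer;
    warranty is the perceived probability, between 0 and 1, that the
    benefit is reliably delivered and yours to keep."""
    return utility * warranty

# The same utility, with and without warranty:
high = perceived_value(100.0, 0.99)  # a dependable service
low = perceived_value(100.0, 0.10)   # "great, when it works"
```

A crude multiplication, to be sure, but it captures the asymmetry: no amount of raw utility survives a warranty near zero.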
Of course, the ITIL “utility + warranty” model is itself a little simplistic. Max-Neef breaks up utility further:
- subsistence
- protection (security, warranty)
- affection
- understanding
- participation
- idleness (leisure)
- creation
- identity
- freedom
Max-Neef provides a nice balance of qualities, certainly, but I feel that protection/security/warranty/irreversibility plays a very specific role in economic transactions because of the way our brains are built. I believe it remains useful to break out the qualities associated with irreversibility (security, protection, warranty) into a separate, analyzable category of study. For me, ITIL’s “utility + warranty” description of economic value is a great model to use.
Your ethics are that set of abstract principles and measurable standards you use to enable you to think and act as rationally as possible. To purposely thwart your ability to think and act rationally, or to allow that through neglect, is unethical.
One can derive an ethic by first understanding what beliefs, behaviors, and other factors thwart rational thinking in oneself, and then determining, by experiment, the principles and standards that allow one to manage one’s roadblocks to rationality. Ethics has nothing to say about the content of your rational thought, for that is the realm of morality.
I believe Michael Ferguson‘s analysis about the future, jobs, and technological unemployment is essentially correct,
Technology is automating more and more jobs. We software-oriented architects are the “grunts” that are helping to usher this process along. Indeed, we are working to automate ourselves out of traditional employment. We have been creating conditions which favor permanent entrepreneurship for every one of us, and which do not favor traditional employment for any of us.
From a Coasean economics perspective, information technology is helping to reduce general transaction costs worldwide, such that transaction costs internal to firms and external to them are approaching parity. In other words, it is increasingly nonsensical for any company to bother hiring employees. This does not mean, however, that companies do not need people, nor does it mean that future consumers do not need the products of your hard work! Read Michael’s article for his detailed analysis of this phenomenon.
How can I write a book on a “theory of I/T architecture”, of the philosophy and science of I/T architecture, without addressing this trend? I can’t. I need to discuss where we have been as professionals, where we are, and where we are going. I must play the futurist and make predictions. Of course, some of my predictions will be shown to have been correct over time, some wrong, but stick my neck out I must! There is no way I can write such a book, sit on the sidelines, and simply throw up my arms and say, “I have no idea what to do next.” If I am not attempting to help my readers make critical decisions about their personal futures, then what good would I be as an author? Why should you bother to read what I have to write?
This presentation, “The Marvels and Flaws of Intuitive Thinking”, is part of a series from Edge.org which looks most interesting,
We ended up studying something that we call “heuristics and biases”. Those were shortcuts, and each shortcut was identified by the biases with which it came. The biases had two functions in that story. They were interesting in themselves, but they were also the primary evidence for the existence of the heuristics. If you want to characterize how something is done, then one of the most powerful ways of characterizing the way the mind does anything is by looking at the errors that the mind produces while it’s doing it because the errors tell you what it is doing. Correct performance tells you much less about the procedure than the errors do.
If it weren’t for Nature’s “cheap and dirty tricks” of the mind, we would not be alive today. On the other side of the coin is the science of information saturation in complex adaptive systems, as told by Geoffrey West, also at Edge.org,
The work I got involved in was to try to understand these scaling laws. And to make it a very short story, what was proposed apart from the thinking was, look, this is universal. It cuts across the design of organisms. Whether you are insects, fish, mammals or birds, you get the same scaling laws. It is independent of design. Therefore, it must be something that is about the structure of the way things are distributed.
You recognize what the problem is. You have 10^14 cells. You have this problem. You’ve got to sustain them, roughly speaking, democratically and efficiently. And however natural selection solved it, it solved it by evolving hierarchical networks.
There is a very simple way of doing it. You take something macroscopic, you go through a hierarchy and you deliver them to very microscopic sites, like for example, your capillaries to your cells and so on. And so the idea was, this is true at all scales. It is true of an ecosystem; it is true within the cell. And what these scaling laws are manifesting are the generic, universal, mathematical, topological properties of networks.
Read the whole article, especially the part about network saturation along S-curves, and about singularity/collapse of those networks. Also note his discovery about the growth curve of companies, which is a semi-vindication of Coasean economics.
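West’s saturation point can be sketched with the standard logistic function — my illustration, not his model: growth looks exponential early on, then the network saturates and the curve flattens toward its carrying capacity.

```python
import math

def logistic(t: float, carrying_capacity: float = 1.0,
             growth_rate: float = 1.0, midpoint: float = 0.0) -> float:
    """Standard logistic S-curve: near-exponential growth at first,
    then saturation as the curve approaches the carrying capacity."""
    return carrying_capacity / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Well before the midpoint the curve is near zero; well after, it has
# saturated near the carrying capacity; at the midpoint it is halfway.
early, mid, late = logistic(-6.0), logistic(0.0), logistic(6.0)
```

Collapse, in this picture, is what happens when a network hits its carrying capacity without finding a new curve to jump onto.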
Most excellent article, “The Cognitive Science of Rationality”,
I particularly like the discussion of error types.
These modern models of cognitive science are great, but they only explain the mechanisms used to desaturate our neural networks. What is missing is a good method to differentiate phenomena as a function of whether they are a result of network saturation or desaturation. At this time, I have no reliable means of differentiating the two. For instance, is autism a problem of heavy saturation or of excessive desaturation?
I must remember to include viral infection in my list of nature’s “cheap and dirty tricks” to reverse entropy in the brain,