Danut Prisacaru has started a new blog, The Software Philosopher. I look forward to hearing more from Danut, and to participating in his blog!
Technology is automating more and more jobs. We software-oriented architects are the “grunts” who are helping to usher this process along. Indeed, we are working to automate ourselves out of traditional employment. We have been creating conditions which favor permanent entrepreneurship for every one of us, and which do not favor traditional employment for any of us.
From a Coasean economics perspective, information technology is helping to reduce general transaction costs worldwide such that transaction costs internal to firms and external to them are approaching parity. In other words, it is increasingly nonsensical for any company to bother hiring employees. This does not mean, however, that companies do not need people, nor does it mean that future consumers do not need the products of your hard work! Read Michael’s article for his detailed analysis of this phenomenon.
How can I write a book on a “theory of I/T architecture”, on the philosophy and science of I/T architecture, without addressing this trend? I can’t. I need to discuss where we have been as professionals, where we are, and where we are going. I must play the futurist and make predictions. Of course, time will show some of my predictions to have been correct and some wrong, but stick my neck out I must! There is no way I can write such a book, sit on the sidelines, and simply throw up my hands and say, “I have no idea what to do next.” If I am not attempting to help my readers make critical decisions about their personal futures, then what good would I be as an author? Why should you bother to read what I have to write?
I am working to unify the concept of “transaction” across the disciplines of:
- information technology, and even
- physics and psychology.
This is important: transaction analysis and design is not addressed well in the modeling paradigms of either I/T or enterprise architecture. I believe this is a critical omission.
I admit to using UML from time to time. I do so because a specific UML modeling tool may meet specific communication and data storage requirements I might have. UML tools, for instance, make for interesting “object databases”. I occasionally find the detailed grammar of UML a useful guide for thinking through certain classes of problems, and I frequently use extensions to that grammar. As a generalized communication grammar, however, UML is severely limited and should be ignored. The appropriate scope of UML is not well understood by a significant fraction of practitioners, and its overuse is certainly counterproductive.
I believe that modeling tools are useful but must recover from the UML disease that has infected them all. I have a specific evolutionary path in mind which I will write about in detail at a future time.
If you wish to improve the efficacy of communication of complex concepts between team members, I urge you to return to the basics of literature, philosophy, psychology, linguistics, marketing and art. At the very least, read Edward Tufte. The efficiency-through-standards argument of UML is a myth.
My response to James Turner’s article, “Process Kills Developer Passion”:
…you’re spending a lot of your time on process, and less and less actually coding the applications… The underlying feedback loop making this progressively worse is that passionate programmers write great code, but process kills passion. Disaffected programmers write poor code, and poor code makes management add more process in an attempt to “make” their programmers write good code. That just makes morale worse, and so on.
Software process, like “method” in science, is bunk! I finally understood what the philosopher Paul Feyerabend was trying to say. What is important is the data, the stuff of reality, otherwise known as results. Experiment (software tests), deduction, and induction are all very important, but no two people are going to arrive at their conclusions in the same way. That is, no two people process data and leverage their capacity for deduction and induction in the same way. One person’s process (method) is another person’s confusion.
If you want to de-motivate a creative scientist or software developer, force them to think like someone else who isn’t them.
Process is no substitute for knowledge.
Listen to, or read the Supreme Court’s fascinating oral arguments for Bilski v. Kappos. I can’t help but think: given the difficulty in defining “machine” or “transformation”, does this portend a day when patents are no longer properly definable? Is the distinction between “abstract idea” and “technology” ultimately arbitrary?
If you care at all about software patents and intellectual property, you really should pay attention to this case. (If you are an I/T architect, I strongly suggest you do.)
In the end, the Supreme Court said that the machine-or-transformation test was not the only test that determines patent eligibility, and that all aspects of the patent must be considered. Though it produces no definite rules, this opinion leaves open the possibility of patenting pure software algorithms. To rule otherwise, the Supreme Court reasoned, might prevent unknown future technologies of importance from being patented. (The ambiguity is meant to err on the side of the unknown.)
Listen to, or read the discussion for yourself.
Now reading, Ward Farnsworth’s “The Legal Analyst“,
If you want a good description of how software is and is not like law, read Chapter 17, “Rules and Standards”.
This is an excellent book for law students and normal folks alike. Farnsworth introduces the reader to the conceptual trade-offs of the modern legal system (mostly U.S.), then discusses how these trade-offs affect judicial decision making. These trade-offs include:
- ex ante versus ex post
- economic efficiencies
- trust (the principal-agent problem, the Prisoner’s Dilemma, “chicken”)
- rules versus standards
- hindsight bias, slippery slope and other cognitive errors
The bibliography is awesome!