- Founding documents that everyone references, such as Zachman's 1987 paper for the IBM Systems Journal, and the MIT book on Enterprise Architecture as Strategy.
- Various summaries and guides to the body of knowledge. Mitre produced a guide to the (evolving) EA body of knowledge around 2004 (known as EABOK), and a bunch of people are currently trying to produce a new version of the EABOK. The BCS has published a Reference Model for Enterprise and Solution Architecture, which it uses as the basis for its architecture certification.
- Some standards, such as ISO/IEC 42010 and RM/ODP.
- Various frameworks that package this knowledge semi-formally, such as TOGAF and DODAF/MODAF, as well as the Zachman framework. The Federal EA framework is incorporated into US law via the Clinger-Cohen Act.
- Various tools that encapsulate portions of this knowledge.
- Some attempts to formalize and ground this knowledge. There is some interesting work in the defence community to produce an EA ontology (MODEM) to replace the existing MODAF/DODAF metamodel. Meanwhile, there is a growing academic literature, much of it from the Netherlands for some reason.
- Huge amounts of commentary and proposed additions/revisions by individual authors and bloggers. There is also a considerable admixture of insight and obfuscation from the large consultancies and analyst firms.
The body of knowledge as a whole can be understood as a set of definitions ("this is what we shall call a business function"), assertions and observations ("loosely coupled structures are more flexible"), instructions/injunctions ("always agree your principles before planning your systems"), and examples (mostly artificial or anecdotal), together with a bunch of rather spurious or irrelevant claims ("this is a mathematically proven technique", "this classification was invented by the ancient Greeks", "this is what everybody means by 'complexity'"). The body of knowledge also relies significantly on some terms that usually remain undefined, such as "alignment".
This body of knowledge as a whole has been subject to a number of criticisms.
- It is incomplete, inconsistent, imprecise, muddled, and simplistic.
- It is impractical, fails to deliver value, and is not fit for purpose (however that purpose may be understood).
- It carries a hidden ideological agenda (which conveniently suits the commercial interests of the large software and IT services companies).
- It lacks a proper base of empirical evidence: much of the knowledge is received wisdom and rehashed fragments from other sources.
- It relies on "subject matter experts", who often have no experience or training in knowledge research and are merely trading on their own preconceptions.
In response to these perceived flaws in the collective body of knowledge, many EA practitioners espouse a personal body of knowledge, which is smaller and hopefully more consistent than the conventional/collective one. This personal body of knowledge is often presented explicitly as an alternative to the conventional body of knowledge, for example in rants against TOGAF or Zachman. However, personal bodies of knowledge are not immune to the same criticisms, and often suffer from methodological syncretism.
How has the collective body of knowledge been developed and validated? Typically by a combination of the following:
- This is what everybody knows.
- This is what every good architect knows.
- Here's an interesting new idea, so let's bung it in somewhere.
- Our customers like it, so it must be right.
- This bit of TOGAF is obviously rubbish, so let's chuck it away and put something else in its place.
Imre Lakatos said that a research programme can be progressive or degenerative. A progressive research programme is one that enhances the explanatory or predictive power of a body of knowledge. (For example, the development of new medical knowledge is progressive if and only if it clearly contributes to more effective healthcare.) A degenerative research programme is one that merely adjusts the body of knowledge to explain away inconvenient results. (Hubert Dreyfus has argued that AI was a degenerative research programme, because it ran into unexpected problems it could not solve.) I have frequently observed that much of EA looks more like mediaeval scholasticism (taxonomy for the sake of taxonomy) than modern science.
I know that some of the readers of this blog aspire to push forward the EA body of knowledge in various ways, including the next version of TOGAF and the replacement EABOK - so my challenge to you guys is this: How are you making sure that your work is progressive and evidence-based?