Webex (along with Unyte and a few others) provides a facility for meeting over the internet. Since Cisco acquired Webex, it has saved nearly a third of its travel and expense budget.
SaaS specialist Phil Wainewright describes this saving as a direct benefit of the acquisition. But of course companies can exploit web meetings as an external service, without acquiring the capability in-house. Indeed, this is one of the main advantages of SaaS - the ability to exploit external capability.
A recent book on Software-as-a-Service makes this very point in the title: "Why Buy The Cow?" In other words, if you are lucky enough to live near a dairy supplier, you can buy milk when you need it; you don't need to buy a cow to get a regular supply of milk.
No doubt Cisco has gained other benefits from owning Webex, but the savings from web meetings shouldn't be included. If Cisco and Webex executives are claiming otherwise, perhaps they should carefully read the book, which was written by, er, Subrah S. Iyar, co-founder of WebEx.
Is there ever a good reason to buy the cow? If you own the cow, you don't have to worry about Who Moved My Cheese. In other words, there are certain types of change (sudden unavailability of cheese, uncontrolled increases in price) you can protect yourself against. But of course if you own a cow, there are now a lot of new things to worry about instead: Who Moved My Grass?
Link: Why Buy The Cow
Thursday, January 31, 2008
Wednesday, January 30, 2008
SOMF
An article has just appeared on Wikipedia advertising something called The Service-Oriented Modelling Framework, based on a new book by one Michael Bell. Advertisements are usually excised from Wikipedia fairly speedily, so the article may not last long.
According to the article, Bell has invented a modelling process and language for SOA that is both anthropomorphic and holistic. If this is true, it is a remarkable achievement. Most modelling languages are materialist and reductionist - they describe certain aspects of the system-of-interest by reducing them to objects. An anthropomorphic model, by contrast, would ascribe human characteristics (e.g. personalities and desires) to the system-of-interest.
I guess I need to read the book when it comes out. Perhaps the publishers would care to send me a review copy?
Tuesday, January 22, 2008
Real-Time Events
Opher reports a car accident (unplanned events again) and concludes that people need to process events in real-time and not in batch.
Congratulations to Opher on his fast reactions, and commiserations on the slower reactions of the driver behind. Clearly there are some events you have to process in real time. But I hope he is not implying that all events must be processed in real-time. When the low-fuel indicator comes on, do you refuel immediately or do you wait until you reach the next gas station? How often do you have the vehicle serviced?
I think the critical question for systems designers here is to determine which are the events that call for a real-time response, and which are the events where a batch response is more appropriate.
There are also events that call for a low-latency (extremely fast) response, but don't count as real-time. For example, in the financial markets, there may be short-lived arbitrage opportunities, which means you can make a lot of money if you can react within a few milliseconds. This is not a real-time requirement, because nobody expects you to pick up every single opportunity - just catch a reasonable number of them. (The reactions of the frog should allow it to catch just enough insects to fill its belly - but some insects escape to breed more insects. The insects get faster, and so do the reactions of the frogs.)
Surely the event infrastructure should be capable of handling any of these patterns.
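The three patterns might be sketched as a routing table in a simple event dispatcher. The event types, handlers, and routing choices below are invented for illustration, not taken from any particular product:

```python
from collections import defaultdict

# Hypothetical event types, chosen only to illustrate the three styles.
ROUTING = {
    "CollisionWarning": "real-time",   # must be handled immediately, every time
    "ArbitrageSignal": "low-latency",  # handle fast, but missing some is acceptable
    "LowFuelWarning": "batch",         # act at the next convenient point
    "ServiceDue": "batch",
}

batch_queue = defaultdict(list)

def handle_immediately(event):
    print(f"real-time response: {event}")

def handle_fast_best_effort(event):
    # In practice this would go to a low-latency path that may shed load.
    print(f"low-latency response: {event}")

def dispatch(event_type, payload):
    style = ROUTING.get(event_type, "batch")
    if style == "real-time":
        handle_immediately((event_type, payload))
    elif style == "low-latency":
        handle_fast_best_effort((event_type, payload))
    else:
        batch_queue[event_type].append(payload)  # processed later, in bulk

def run_batch():
    for event_type, payloads in batch_queue.items():
        print(f"batch response: {len(payloads)} x {event_type}")
    batch_queue.clear()
```

The point of the sketch is that the routing decision is a design-time judgement about each event type, while the infrastructure itself supports all three styles.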
Saturday, January 19, 2008
How Many Events?
Homeward bound, delayed in Zurich by the consequences of a crashed Boeing at Heathrow, Opher Etzion blogs On events in flight management. It seems to him "that more events related to flights happened relative to previous years".
For regular business travellers, the best thing we can say about a flight is that it was "uneventful". However good the catering, however large and comfortable the seats, however charming and sexy the air staff, none of this can make up for the inconvenience of delays or lost baggage. So when Opher counts the number of events, I assume he is referring to adverse events.
Of course there are countless events in flight management that are completely invisible to passengers - or even to the air crew - unless something goes wrong, and perhaps even then. In a distributed man-machine system, different parts of the system will be paying attention to different types of event, at different levels of granularity. (An air traffic controller deals with the event PlaneAwaitingLandingSlot; his manager deals with the aggregate event NumberOfPlanesAwaitingLandingSlotIsGreaterThanX.) We can think of this in terms of the architecture of attention - this calls for accurate modelling of events, leading to clear system design.
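The two levels of attention in the parenthetical example might be sketched like this, with the manager's aggregate event derived from the controller's fine-grained ones. The threshold value and flight identifiers are invented for the sketch:

```python
# The controller attends to each PlaneAwaitingLandingSlot event; the
# manager only sees a derived aggregate event when a threshold X is crossed.

X = 3  # threshold for the manager's aggregate event (arbitrary here)

waiting = set()
manager_inbox = []

def on_plane_awaiting_landing_slot(flight):
    waiting.add(flight)
    if len(waiting) > X:
        # Coarser-grained derived event, matching the manager's level of attention.
        manager_inbox.append(
            ("NumberOfPlanesAwaitingLandingSlotIsGreaterThanX", len(waiting)))

def on_plane_landed(flight):
    waiting.discard(flight)
```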
Furthermore, as Opher points out, there is an important distinction between attention (detection) and action. "In some cases ... the detection is very easy, the complexity is in the response." Last week, Opher apparently experienced a failure of response. "The captain told us several times that he is pushing them to send buses, but they are not responsive ..." It is often easy to blame Them, but it isn't always clear who exactly is responsible.
So the other challenge in designing complex event-driven systems-of-systems is to specify the architecture of response. There are many people and systems and organizations involved in flight management, and a complex event may call for a complex and collaborative response.
So is there a universal and homogeneous event model shared by all the participants in flight management? I don't think we can reasonably insist on this. Instead, we have to allow for some kind of amplification and attenuation - where different subsystems may have different event models, and there is some mechanism for translating and coordinating across these different models. I think this approach is more flexible, more robust, and compatible with loosely coupled SOA.
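As a rough sketch of such translation, here are two hypothetical mappings between subsystem event models - attenuation dropping detail as an event crosses into a coarser model, amplification enriching it with context the receiving model needs. All event names and fields are invented:

```python
def airline_to_airport(event):
    """Attenuate: the airport system only needs a coarser view."""
    if event["type"] == "AircraftTechnicalFault":
        # Detail (the fault code) is dropped; only the operational
        # impact crosses into the airport's event model.
        return {"type": "DepartureDelayed", "flight": event["flight"]}
    return None  # events with no counterpart in the other model are filtered out

def airport_to_passenger(event, gate_lookup):
    """Amplify: enrich the event with context the passenger model needs."""
    if event["type"] == "DepartureDelayed":
        return {"type": "FlightStatusUpdate",
                "flight": event["flight"],
                "gate": gate_lookup.get(event["flight"], "TBA")}
    return None
```

Each subsystem keeps its own vocabulary; the translators are the loose coupling points.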
Technological Perfecta
There are several technologies that might work well together - indeed, they certainly should work well together. At various times in this blog, I've talked about the potential synergies between (i) SOA and Business Intelligence, (ii) SOA and Business Process Management, and (iii) SOA/EDA and Complex Event Processing. The third of these synergies is currently getting some attention, following some enthusiastic remarks by Jerry Cuomo, WebSphere CTO (see Rich Seeley and Joe McKendrick).
All four together would be amazing, but a lot of organizations aren't ready for this. Moreover each technology has its own set of tools and platforms, and its own set of disciplines and disciples.
In Betting on the SOA Horse, Tim Bass describes this potential synergy using the language of gambling - exacta and trifecta. I'm not very familiar with this language, but I take it to mean that you only win the bet if the horses pass the post in the predicted sequence. Tim writes:
"Betting on horses is a risky business. Exactas and trifecta have enormous payouts, but the odds are remote."
In On Trifecta and Event Processing, Opher Etzion disagrees with this metaphor. He argues that these technologies are mutually independent (he calls them "orthogonal"). If he is correct, this would have three consequences: (i) flexibility of deployment - you can implement and exploit them in any sequence; (ii) flexibility of benefit - you can get business benefits from any of them in isolation, and then additional benefits if and when they are all deployed together; and therefore (iii) considerably lower risk.
My position on this is closer to Opher's. I think there are some mutual dependencies between these technologies, but they are what I call soft dependencies. P has a hard dependency on Q if Q is necessary for P; P has a soft dependency on Q if Q is merely desirable for P.
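One way to make the distinction concrete: only hard dependencies constrain the deployment sequence. The dependency sets below are purely illustrative assumptions, not claims about these particular technologies:

```python
# Hard dependencies constrain deployment order; soft ones do not.
# These dependency sets are invented for the sake of the sketch.

hard = {"CEP": {"EDA"}}                 # suppose CEP strictly needs EDA in place
soft = {"BPM": {"SOA"}, "BI": {"SOA"}}  # desirable before BPM/BI, but not required

def valid_order(sequence):
    """A deployment sequence is valid if every hard dependency of a
    technology is already deployed when that technology arrives.
    Soft dependencies affect the benefit achieved, not the validity."""
    deployed = set()
    for tech in sequence:
        if not hard.get(tech, set()) <= deployed:
            return False
        deployed.add(tech)
    return True
```

Under this reading, Opher's "orthogonality" claim amounts to saying the hard-dependency graph is nearly empty, which is what gives the flexibility and lower risk.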
In planning a technology change programme, it is very useful to recognize soft dependencies, because it permits some deconfliction between different elements. Deconfliction here means forced decoupling, understanding that the results may be sub-optimal (at least initially), but accepting this in the interests of getting things done.
In a perfect world, we might want to deploy all four technologies together, or in a precisely defined sequence. But pragmatism suggests we don't bet on the impossible or highly improbable. The challenge for the technology architect is to organize a technology portfolio to get the best balance of risk and reward. This is not primarily about comparing the features of different products, but about understanding the fundamental structural principles that allow these technologies to be deployed in a flexible and efficient manner.
Discussion continues: Technological Perfecta 2
Labels:
BI,
BPM,
deconfliction,
event-driven,
risk,
risk-trust-security
Thursday, January 10, 2008
Case Studies
There is a significant demand for technology case studies, from would-be adopters and practitioners of specific technologies. There is also a considerable supply of technology case studies, mostly from vendors.
But I don't see the supply meeting the demand. There seems to be a gap between what people want to know and what people are willing to publish.
Most so-called case studies take this form:
Gringotts Bank (NYSE:GOBL) needed a fast yet secure customer response across multiple legacy vaults, so they installed WebHogz Enterprise Edition Version 6.66. Gringotts' Chief Architect Bill Weasley said: "Our productivity has whizzed up by 31.4%." WebHogz CTO Harry Potter said: "This application clearly demonstrates the magical superiority of our product over muggle alternatives."
I regard these as press releases rather than genuine case studies. What purpose can they possibly serve apart from name-dropping? We are told that a respectable large organization is happy with the product, but we are given few if any details on how the product was used. Impressive numbers may be quoted, but we have no idea how they are measured or what to compare them with.
One thing we really want to know is about practical lessons - difficulties and pitfalls. An alternative perspective perhaps?
However, Dr Aco Malfoy, a security consultant with Arthur T. Riddle, voiced some concerns about the WebHogz solution: "A number of our clients have had problems with the stability and performance of the product. Because of these concerns we have advised Gringotts Bank to install a separate firewall system and invest in additional dragonware, which will add considerably to the overall cost of the WebHogz project."
That's not much better. It may identify some areas of concern, but still doesn't give us a rounded evaluation of a project, successful or otherwise.
And academic studies often aren't much use either.
Professor Hermione Granger, a fellow of Halloween College at Hogsfjord University, has completed a three-year comparative study of vaulting technologies in collaboration with a number of industrial partners including WebHogz Labs: "Our preliminary findings, based on an action research paradigm, do seem really quite promising, but we obviously need much more research funding before we can produce more reliable and detailed figures."
What we need is a lot more detailed, warts-and-all case studies, in plain English. But there are never enough organizations willing to expose themselves to independent public scrutiny. Any organizations willing to volunteer?
Update: this blogpost originally referred to SOA case studies. I have removed the specific references to SOA, as I believe my argument applies to any technology or product.
Tuesday, January 08, 2008
Flight from Quality
TIBCO (TIBX:NSQ) shares soared in December, following impressive financial results. This week they have fallen to a 52-week low, following a Sell advisory from Goldman Sachs (Euro2day via Tim Bass). Goldman Sachs analyst Derek Bingham predicts a flight from quality.
"Customers will move away from buying more expensive “best-of-breed” offerings, like Tibco’s products, and more toward buying less expensive “good enough” substitutes that are bundled with broader solutions from the likes of IBM, Oracle Corp. and SAP AG."

This is not about whether TIBCO has the best products, but about the buying behaviour of TIBCO's customers.
The SOA mantra of Loose Coupling works both ways here. On the one hand, greater standardization and interoperability could mean there is less reason to buy everything from a single supplier, and so “best-of-breed” offerings become more viable, even during an economic downturn. On the other hand, some companies may feel that it is less critical to get the highest quality infrastructure from the beginning, since greater flexibility makes it easier to contemplate changing things later.
The traditional economics of the software industry has generally favoured the giants, which is why IBM and Microsoft have seen off so many rivals. Meanwhile, the ecology of SOA favours diversity and heterogeneity. Traditional investors such as Goldman Sachs and its Wall Street clients may not appreciate this yet. The important question for TIBCO and other niche vendors is the extent to which the economics of SOA can overcome the economics of software.
Update and Correction
The TIBCO shareprice recovered on January 16th, and has risen further since, perhaps helped by buy-out speculation following Oracle's acquisition of BEA.
But Tim and I were misled by the original story (apparently from Thomson Financial) linking the shareprice fall to the Goldman Sachs advisory. According to Yahoo (another company that knows something about buyout speculation!), the Goldman Sachs advisory was last April.
I stand by my original point, however, that stock market investors and city analysts don't necessarily appreciate the economics of SOA.
Labels:
economics,
quality,
quality of service,
software industry,
TIBCO