Hamish has a very interesting and thorough post on middleware here.
Here's what he says about middleware:
"essentially it is a series of tools that are designed to link existing information and applications"
And the big guys are deep into middleware, of course. Again from Hamish:
"IBM with Websphere, Microsoft with .net and in the ERP space, SAP with Netweaver, and now Oracle with Fusion, when it is developed."
Actually, IBM software is more or less middleware - that's their main focus: leave the applications to others. They do not like what SAP and Oracle are doing now; middleware is theirs, they say.
Now, what's all this middleware about, and why do we need it at all?
Essentially because applications have historically kept information and logic inseparable. Thus, to make two or more applications work in unison, you need an active layer to coordinate the happenings, and the data.
The interactions between such applications could look like this:
40 applications means 780 relationships. Nice for the middleware suppliers, bad for the users.
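(Quick check of that arithmetic: point-to-point wiring between n applications needs n(n-1)/2 links, and 40 × 39 / 2 = 780.)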
Is that pretty stupid or what?
Yes it is.
Split the logic and the information. Then 40 applications (or logic engines, if you will) using one set of raw data means 40 independent relationships. Change any application (or its logic) and it will have no effect on anything else. Like this:
Goodbye middleware. Hello simplicity.
You still have middleware; it's just easier to manage star topologies than network topologies.
Something has to coordinate your applications ("Logic engines" as you say), or you end up coding the coordination logic into the apps, and you are back to where you started.
Posted by: Christian Mogensen | April 28, 2005 at 13:21
Christian,
good point. To clarify, my argument is based on true separation of information and logic:
Today information is kept in a manipulated form - then you need coordination between the apps, in network or star form, as you put it.
But if you truly leave the information unaltered, the data can be used as is, with none of the coordination and demanipulation that is so common today.
This would relegate an app to being a "report generator" - no coordination of logic between the apps, and no more true middleware needed.
Example: An app (logic engine) runs the workflow that captures part transactions - in procurement and in production. The raw data from that app is kept centrally, unaltered - then a separate app could deliver inventory lists (logic applied and presented on the fly), another a P&L report, and so forth, all without altering the central raw data. Here I see no need for middleware.
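To make that concrete, here is a minimal sketch in Python (the part names, fields and figures are just made up for illustration): the raw transactions are stored once, untouched, and each "app" is nothing but a read-only function over them.

    # Central store of raw, unaltered transactions (append-only).
    raw_transactions = [
        {"part": "bolt-M6", "qty": +500, "unit_cost": 0.10,  "event": "procurement"},
        {"part": "bolt-M6", "qty": -120, "unit_cost": 0.10,  "event": "production"},
        {"part": "panel-A", "qty": +40,  "unit_cost": 12.50, "event": "procurement"},
    ]

    def inventory_report(transactions):
        """One 'logic engine': current stock per part, computed on the fly."""
        stock = {}
        for t in transactions:
            stock[t["part"]] = stock.get(t["part"], 0) + t["qty"]
        return stock

    def procurement_spend_report(transactions):
        """Another, fully independent 'logic engine': what procurement has cost."""
        return sum(t["qty"] * t["unit_cost"]
                   for t in transactions if t["event"] == "procurement")

    print(inventory_report(raw_transactions))         # {'bolt-M6': 380, 'panel-A': 40}
    print(procurement_spend_report(raw_transactions))  # 550.0

Neither report writes anything back, so you can add a third report tomorrow (or change one of these) without touching the others or the raw data - and with no middleware in between.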
Posted by: sig | April 28, 2005 at 14:46
Hi Sig
Looking good on the bike. I must get out on mine and get this winter gut off.
For me middleware is the ultimate "will fix all." Except in highly controlled and specific circumstances it is a nightmare that causes a lot of problems....
Agree about the data being centralised, but for me the problem is that even though the meta-model is consistent, the individual components, if they are doing the same job, might do it in a different way. To take a simple example, one component posts all amounts with gross tax, and another net of tax. The data model looks the same, but the different components have not used it in a consistent fashion.
Anyway, coming back to Hugh's cheapest or best, I see difficulties in Oracle arriving at either extreme easily or quickly, if at all.
Posted by: Hamish | April 28, 2005 at 17:49
Hi Hamish
I passed through your area in Switzerland the other day, and the roads looked nice and clean (as always in Switzerland!) so you have no excuse not to get out there!
Good example you have there - "one component posts all amounts with gross tax, and another net of tax" - in a way it points to my argument like this:
An amount with tax and an amount without tax are already two sets of information that have had logic applied (tax added to or deducted from some basic figure). What I'm pushing for is a true split - i.e. each amount would be not one but two pieces of information: the basic figure and the tax rate.
If those pieces of raw data are kept, then any 'app' (as in report generator) could do whatever it wants and deliver with or without tax, or whatever else - of course shedding the calculated figures after presenting them.
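In code the split might look like this (a minimal sketch; the field names and figures are invented): keep the basic figure and the tax rate as two raw pieces of information, and derive gross or net only at presentation time.

    # Raw data: the basic figure and the tax rate, kept separately and unaltered.
    line_item = {"basic_amount": 100.00, "tax_rate": 0.25}

    def net(item):
        return item["basic_amount"]

    def gross(item):
        # Derived on the fly and shed after presenting - never stored.
        return item["basic_amount"] * (1 + item["tax_rate"])

    print(net(line_item))    # 100.0
    print(gross(line_item))  # 125.0

Since neither the gross nor the net figure is ever stored, two components can no longer disagree about which one the data model holds.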
Posted by: sig | April 28, 2005 at 18:09
Sig, pardon my obvious lack of true s/w experience, but I do have a question.
I like your idea of keeping data "pure" until it's needed. It sounds quite elegant and logical. But then I'm a copywriter and not an engineer.
But is keeping all the possible sets of raw data separate realistic in businesses with highly complex transactions? Could you end up with too many separate data sets? Are there practical or theoretical barriers to this?
Posted by: Tim Rickards | April 28, 2005 at 21:31
Tim, excellent point. Simple answer is that there should not be any practical barriers to that.
In fact it's when you keep 'manipulated data' that you end up with many, diverse and separate data sets - let's take accounting:
In the old way, the manipulated data model requires you to keep a balance sheet for each day, or a profit and loss for every hour if you're a control freak. That is too much for most, so one keeps an end-of-month set of records instead. Still, that requires lots of data in itself, including assumptions, and the system (and middleware) then needs to re-extract useful data from it for use in other parts of the (old-way) system.
In the new way, with raw data, you keep the actual transactions (part in, part out, pill spent, hours spent, whatever) and calculate on the fly, which means any P&L or balance sheet can be displayed for, say, Friday August 15th, 2003 at 8.07 - as valid as the current one. And best of all, if you develop a new kind of report in a year or two (or the government requires you to!), then no problem: just design the report and use the raw data available.
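As a sketch of what "on the fly" means here (dates and amounts invented, reusing the timestamp from the example above), an as-of balance is nothing more than a filter plus a sum over the raw transactions:

    from datetime import datetime

    # Raw transactions, kept exactly as they happened.
    transactions = [
        {"time": datetime(2003, 8, 14, 16, 30), "account": "cash",      "amount": +1000.0},
        {"time": datetime(2003, 8, 15, 7, 45),  "account": "inventory", "amount": +250.0},
        {"time": datetime(2003, 8, 15, 9, 12),  "account": "cash",      "amount": -250.0},
    ]

    def balance_as_of(transactions, moment):
        """Account balances computed on the fly for any point in time."""
        balances = {}
        for t in transactions:
            if t["time"] <= moment:
                balances[t["account"]] = balances.get(t["account"], 0.0) + t["amount"]
        return balances

    # Friday August 15th, 2003 at 8.07 - nothing pre-aggregated, nothing stored.
    print(balance_as_of(transactions, datetime(2003, 8, 15, 8, 7)))
    # {'cash': 1000.0, 'inventory': 250.0}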
Like changes to GAAP - that would be just another report template, and every year, future and past, can be delivered in both old and new GAAP; no more apples and bananas :-)
So would that be more data than the current way? Yes, perhaps - but data storage is cheap, and doing the calculations on the fly to construct a balance sheet from raw data is exactly what CPUs are built for; your PDA's processor could do that :-)
Simpler sets of data, yes! Raw data that was created in a real transaction or event, traceable, easy to use, clean.
Add that new regulations like Sarbanes-Oxley and Basel II for financial institutions (they have to keep all transactions for the past seven years!) require keeping the data anyway.
Sorry about rambling on, but you had a very good point there!
Posted by: sig | April 29, 2005 at 07:36
Sig,
Thanks for the reply. I think I get the general gist of what you're saying. I can't come up with an exact analogy, but I can "see" what you mean.
It seems that this new system requires that the data be extremely malleable and universal... almost "natural" in fact... like an element.
That seems to run counter to "developing and protecting a built-in user base with proprietary technology". ;-)
Posted by: Tim Rickards | April 29, 2005 at 23:34