

Christian Mogensen

You still have middleware, it's just easier to manage star topologies rather than network topologies.

Something has to coordinate your applications ("Logic engines" as you say), or you end up coding the coordination logic into the apps, and you are back to where you started.



Good point. To clarify, my argument is based on a true separation of information and logic:

Today information is kept in a manipulated form, so you need coordination between the apps, in network or star form as you put it.
But if you truly leave the information unaltered, the data can be used as is, with none of the coordination and de-manipulation that is common today.

This would relegate an app to being a "report generator" - no coordination of logic between the apps needed, and no more true middleware needed.

Example: An app (logic engine) runs the workflow that captures transactions of parts, in procurement and in production. The raw data from that app, unaltered, is kept centrally; then a separate app could deliver (with logic applied on the fly at presentation) inventory lists, another a P&L report, and so forth, all without altering the central raw data. Here I see no need for middleware.
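Sig's example could be sketched like this (a minimal, hypothetical illustration; the record type, field names, and figures are all my assumptions, not anything from an actual thingamy system). Each app is just a function reading the same untouched raw log:

```python
from dataclasses import dataclass
from datetime import datetime

# One immutable record per real-world event; never altered after capture.
@dataclass(frozen=True)
class PartTransaction:
    timestamp: datetime
    part: str
    quantity: int      # positive = received (procurement), negative = consumed (production)
    unit_cost: float

# The central, unaltered raw data: a simple append-only list.
raw_log = [
    PartTransaction(datetime(2003, 8, 1), "bolt", 100, 0.25),
    PartTransaction(datetime(2003, 8, 5), "bolt", -40, 0.25),
    PartTransaction(datetime(2003, 8, 7), "plate", 20, 3.50),
]

# "Report generator" 1: inventory list, computed on the fly, raw data untouched.
def inventory(log):
    levels = {}
    for t in log:
        levels[t.part] = levels.get(t.part, 0) + t.quantity
    return levels

# "Report generator" 2: procurement spend, another independent view of the same log.
def procurement_spend(log):
    return sum(t.quantity * t.unit_cost for t in log if t.quantity > 0)

print(inventory(raw_log))          # {'bolt': 60, 'plate': 20}
print(procurement_spend(raw_log))  # 95.0
```

Neither "app" knows about the other, so there is no coordination logic to host in middleware; the log is the only shared thing.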


Hi Sig

Looking good on the bike. I must get out on mine and get this winter gut off.

For me middleware is the ultimate "will fix all." Except in highly controlled and specific circumstances it is a nightmare that causes a lot of problems....

Agree about the data being centralised, but for me the problem is that even though the meta-model is consistent, the individual components, if they are doing the same job, might do it in a different way. To take a simple example: one component posts all amounts gross of tax, and another net of tax. The data model looks the same, but the different components have not used it in a consistent fashion.

Anyway, coming back to Hugh's cheapest or best, I see difficulties in Oracle arriving at either extreme easily or quickly, if at all.


Hi Hamish

I passed through your area in Switzerland the other day, and the roads looked nice and clean (as always in Switzerland!) so you have no excuse not to get out there!

Good example you have there - "one component posts all amounts with gross tax, and another net of tax" - in a way it points to my argument like this:

An amount with or without tax is already information that has had logic applied / been manipulated (adding or deducting tax from some figure). What I'm pushing for is a true split - i.e. each would be not one but two pieces of information: the basic figure and the tax rate.
If those sets of raw data are kept, then any 'app', as in report generator, could do whatever it will and deliver with or without tax, or whatever else - of course shedding the calculated figures after presenting them.
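As a rough sketch of that split (hypothetical names; the 19.6% rate is just an illustrative figure, not taken from the discussion): the base figure and the tax rate are the only stored facts, and gross or net are derived on demand, never stored.

```python
# The two raw facts: a base (net) figure and a tax rate. Everything else is derived.
def net(base: float) -> float:
    return base  # the net amount *is* the raw figure, no logic applied

def gross(base: float, tax_rate: float) -> float:
    # Derived on the fly and "shed" after presentation, never written back.
    return round(base * (1 + tax_rate), 2)

base, rate = 200.00, 0.196   # illustrative values only
print(net(base))             # 200.0
print(gross(base, rate))     # 239.2
```

With this split, Hamish's inconsistency cannot arise: no component ever posts a gross or net amount, because those are report-time calculations, not stored data.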

Tim Rickards

Sig, pardon my obvious lack of true s/w experience, but I do have a question.

I like your idea of keeping data "pure" until it's needed. It sounds quite elegant and logical. But then I'm a copywriter and not an engineer.

But is keeping all the possible sets of raw data separate realistic in businesses with highly complex transactions? Could you end up with too many separate data sets? Are there practical or theoretical barriers to this?


Tim, excellent point. The simple answer is that there should not be any practical barriers to that.

In fact it's when you keep 'manipulated data' that you end up with many, diverse and separate data sets - let's take accounting:

The old way, with a manipulated data model, would require you to keep a balance sheet for each day, or a profit and loss for every hour if you're a control freak. Now that is too much for most, so one keeps an end-of-month set of records instead. Still, that requires lots of data in itself, plus assumptions, before the system (and middleware) can re-extract some useful data for use in other parts of the (old-way) system.

The new way, with raw data, you keep the actual transaction (part in, part out, pill spent, hours spent, whatever) and calculate on the fly, which means any P&L or balance sheet can be displayed for, say, Friday August 15th, 2003 at 8.07 - as valid as the current one. And best of all, if you develop a new kind of report in a year or two (or the government requires you to do it!), then no problem: just design the report and use the raw data already available.
Like changes to GAAP - that would be just another report template, and every year, future and past, can be delivered in old and new GAAP. No more apples and bananas :-)
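The "balance for any instant" idea can be sketched the same way (again, hypothetical names and figures of my own): an as-of balance is nothing more than a filter plus a sum over the raw transactions, so any point in time is equally reachable.

```python
from datetime import datetime

# Raw transactions: (timestamp, account, amount). Never summarised, never altered.
transactions = [
    (datetime(2003, 8, 10), "cash", 1000.0),
    (datetime(2003, 8, 14), "cash", -250.0),
    (datetime(2003, 8, 16), "cash",  500.0),
]

# Any balance "as of" any instant is a filter plus a sum over the raw data;
# a new report template is just a new function, the stored data never changes.
def balance_as_of(log, account, when):
    return sum(amt for ts, acct, amt in log if acct == account and ts <= when)

# The balance at Friday August 15th, 2003 at 8.07, computed on the fly:
print(balance_as_of(transactions, "cash", datetime(2003, 8, 15, 8, 7)))  # 750.0
```

A GAAP change would be one more function over the same list: old and new reports can both be run for any period, past or future.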

So would that be more data than the current way? Yes, perhaps - but data storage is cheap, and doing the calculations on the fly to construct a balance sheet from raw data is exactly what CPUs are built for; your PDA processor could do that :-)
Simpler sets of data, yes! Raw data that was created in a real transaction or event, traceable, easy to use, clean.

Add that new regulations like Sarbanes-Oxley and Basel II (financial institutions have to keep all transactions for the past seven years!) require keeping the data anyway.

Sorry about rambling on, but you had a very good point there!

Tim Rickards


Thanks for the reply. I think I get the general gist of what you're saying. I can't come up with an exact analogy, but I can "see" what you mean.

It seems that this new system requires that the data be extremely malleable and universal... almost "natural" in fact... like an element.

That seems to run counter to "developing and protecting a built-in user base with proprietary technology". ;-)
