Open Distributed Processing: Unplugged!

by Ian Joyner

1. Introduction

This is an introduction to Open Distributed Processing, or ODP, which gets to the basics, the essential elements, unplugged from the complicated verbiage of the ISO standards. What we will find is a very useful framework for specifying systems, whether distributed or not. Many technologies lay claim to being the industry's best-kept secret; while ODP is perhaps not as widely hyped as some, we do not want it to be a secret at all: ODP has been designed for wide applicability.

You should take a look at ODP if you are a systems architect, a programmer, or a manager who wants an understanding of the structure of modern systems development and useful frameworks for development processes. ODP is the joint ISO/IEC and ITU-T standardisation effort for distributed processing, published as the ISO/IEC 10746 series of standards and as ITU-T's X.900 series.

1.1. Ease of Use

Distributed processing sounds like a very technical and complicated idea. However, distributed processing will become widely used in the very near future, and it will be largely painless! In fact, with the advent of the World Wide Web (WWW), one particular kind of distributed processing has already become very popular precisely because it is easy to use. The goal of ODP is to realise such ease of use in many diverse areas of distributed processing, while keeping the set of implementation technologies broad.


The basis of ODP's ease of use is that programmers and end users should not need to be concerned with the nature and means of distribution. In other words, programming and use of a distributed application appear exactly the same as if the application were not distributed at all. The means that achieve this homogeneous view are the ODP transparencies.

Transparencies provide you, as users and programmers, with a uniform view of the system. Just as in networking a message may traverse several different networks, and even several different kinds of networks, without the user's knowledge, distributed processing can take place over domains controlled by different authorities and over heterogeneous equipment, where hardware and software may be different and even radically diverse. In general, however, the difference between networking and distributed solutions is that with networking the user is aware that the system is running on multiple machines, while with distribution the system appears as a single entity.


The ODP transparencies are provided by elements of software called ODP functions which mask the complexity of a distributed system from both users and programmers.
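To make the idea of masking distribution concrete, here is a toy sketch in Python. None of this comes from the ODP standards themselves; the names (RemoteProxy, FakeTransport and so on) are invented for illustration. The point it shows is the essence of distribution transparency: caller code is written identically whether the object it talks to is local or behind a proxy.

```python
class Greeter:
    """An ordinary local service object."""
    def greet(self, name):
        return f"Hello, {name}"

class FakeTransport:
    """Toy stand-in for a network: dispatches calls to objects it holds directly."""
    def __init__(self):
        self._objects = {}
    def register(self, object_id, obj):
        self._objects[object_id] = obj
    def call(self, object_id, method, args, kwargs):
        return getattr(self._objects[object_id], method)(*args, **kwargs)

class RemoteProxy:
    """Stands in for a 'remote' object: forwards each method call over the
    transport, so callers use it exactly as they would a local object."""
    def __init__(self, transport, object_id):
        self._transport = transport
        self._object_id = object_id
    def __getattr__(self, method):
        def invoke(*args, **kwargs):
            # A real system would marshal this call onto the network here.
            return self._transport.call(self._object_id, method, args, kwargs)
        return invoke

transport = FakeTransport()
transport.register("greeter-1", Greeter())

local = Greeter()
remote = RemoteProxy(transport, "greeter-1")

# The calling code is identical in both cases: that is the transparency.
assert local.greet("Ada") == remote.greet("Ada")
```

In ODP the machinery playing the role of RemoteProxy and FakeTransport is supplied by the ODP functions, not written afresh for each application.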

Aside from distribution transparency, ODP caters for interworking in a heterogeneous environment, and for developing applications that are portable across heterogeneous systems, as standardised modes of working and interfaces are specified.

1.2. What does ODP do for me?

Most applications to date provide a particular service for a user, and users have a static set of services installed on their computer. Distributed processing, however, provides a much richer dynamic environment for a user to find and invoke services.

There are two modes of finding services. The first is akin to a white pages function, where the user knows the name of a service and invokes the service by that name. In this case the white pages function is automatic: the user does not have to get out the telephone book to translate the name into a physical network address (phone number); the system handles this translation transparently. This transparency and function are called location transparency and the relocation function. The ODP Naming Framework standard defines how names are handled in the ODP context to provide this white pages service.
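A toy sketch of the white pages idea, in Python (the names WhitePages, bind and resolve are invented for illustration, not taken from the Naming Framework standard): the user holds only a well-known name, and the lookup to a concrete endpoint happens behind the scenes.

```python
class WhitePages:
    """Toy name service: maps well-known service names to endpoints
    (here plain callables stand in for network addresses)."""
    def __init__(self):
        self._names = {}
    def bind(self, name, service):
        self._names[name] = service
    def resolve(self, name):
        return self._names[name]

pages = WhitePages()
pages.bind("weather", lambda city: f"Forecast for {city}: fine")

# The user invokes by name; translating the name to an actual
# endpoint is the system's job, not the user's.
service = pages.resolve("weather")
print(service("Sydney"))
```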

The second mode is when the user does not know of a particular service that fulfils the desire of the moment, but knows roughly what they want. In this case the user can describe what they want and have the distributed system locate a set of services that will fulfil that desire. This is akin to a yellow pages function. The set returned could be empty, in which case your desires must go unfulfilled unless you can loosen your criteria; it could contain a single service, in which case you can invoke it straight away (which could happen automatically); or you might get back a list of services, in which case you must tighten your criteria or manually choose one that satisfies your fancy. This search mode is known as the ODP Trading Function, where the Trading software takes your desires and trades for the service on your behalf.
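The three outcomes above can be sketched with a toy trader in Python. This is only an illustration of the matching idea, not the Trading Function standard; the class and its export/query operations are invented names, and real traders match rich typed service offers rather than bags of strings.

```python
class Trader:
    """Toy yellow-pages trader: services are advertised with a set of
    property strings, and a query returns every service whose
    properties include all the requested criteria."""
    def __init__(self):
        self._offers = []
    def export(self, name, properties):
        # A service provider advertises ("exports") an offer.
        self._offers.append((name, set(properties)))
    def query(self, criteria):
        wanted = set(criteria)
        return [name for name, props in self._offers if wanted <= props]

trader = Trader()
trader.export("print-shop-a", ["printing", "colour"])
trader.export("print-shop-b", ["printing", "colour", "a3"])
trader.export("bakery", ["bread"])

print(trader.query(["printing", "colour", "a3", "duplex"]))  # empty: loosen your criteria
print(trader.query(["printing", "colour", "a3"]))            # exactly one: invoke it
print(trader.query(["printing"]))                            # several: tighten or choose
```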

Once you have located the service you want, ODP provides the means to create an interaction with the service, and invoke the service.


Aside from the transparencies, which provide end users and programmers with a simplified view of distributed systems, and the functions, which provide an effective form of reuse by removing the need to develop common distributed functionality for each application, ODP provides system developers with a framework for an effective and disciplined approach to distributed specification. This framework is provided by the ODP viewpoints. The viewpoints are of little interest to the end user; they are mainly of interest to ODP system developers, as they help focus on where issues should be addressed in system specification. The viewpoints provide an orderly way to tame the complexity involved in specifying distributed systems.

1.3. The ODP Reference Model (ODP-RM)

The combination of ODP transparencies, ODP functions and ODP viewpoints defines the ODP reference model (ODP-RM). The relationship between these is that ODP viewpoints are used to define ODP functions, and ODP functions implement ODP transparencies. The ODP reference model itself can be used to specify particular distributed systems within enterprises, or to specify other ODP component standards; the viewpoints framework has been road-tested in the production of the International Standard for the ODP and CORBA Trading Function.

1.4. Component Standards

The ODP reference model is defined in a four-part standard. The ODP reference model provides the framework for the development of other standards. These other standards are component standards, which define basic infrastructure such as naming, or the ODP functions themselves.

1.5. The role of ODP Standardisation

Many people might feel that the purpose of standardisation is to impose restrictions on the computer industry outside which computer vendors must not step. Thus standards can be viewed with the suspicion that they will stifle future innovation. This very legalistic view of standards could not be further from the truth, particularly in the case of ODP.

In fact ODP strives to be minimalistic, so that future innovation can take forms that are currently unthought of. It recognises that a single set of standards solving the problem of heterogeneity is unlikely ever to be established, so ODP describes a framework within which many standards can co-exist and interoperate. ODP strives to be a future-proof standard. In many cases the role of de facto standards developed by a single vendor or a consortium of vendors is precisely to give the owning vendors an advantage in the marketplace, and to stifle the creativity of the competition. By being future-proof, ODP strives to set up a world where creativity and innovation are not penalised.

Standardisation of interfaces means that implementations can be radically different and interesting. New hardware and software systems need only conform at particular levels in order to gain acceptance in the industry. Unfortunately, there has been too much emphasis on standardisation along artificial lines other than abstract interface conformance points. Bad standardisation is standardisation of implementation technologies such as processors, hardware platforms, operating systems and programming languages. These standards often become de facto standards, and really do stifle innovation. The role of standards is to provide a level playing field so that innovation and new ideas are possible. However, in many cases this is not what vendors want, as innovation can quickly challenge any market advantage. Thus standards help make the computer industry a more competitive marketplace by encouraging, not stifling, innovation. Hence a great deal of recent activity by vendors has gone into trying to establish their own de facto standards. Yes, unfortunately, this is all about power, so beware of implementation standards.

1.6. Relationship to other Standards

Since ODP has been developed by ISO/IEC and ITU-T, it might be natural to assume that ODP is an extension of the OSI standard. This is not so: ODP is independent of any particular network, and may be used with OSI, TCP/IP, NetBIOS, and other networks.

There is therefore no particular relationship between ODP and other network standards. Other standards are being developed that might seem to compete with ODP, such as OMG's CORBA and Microsoft's DCOM. ODP is not in competition with these standards. In fact ODP is very much the big picture of distributed processing, and CORBA and DCOM fit into the ODP framework very well. This has to be the case: if ODP meant death to other standards, the promise made in the previous section that ODP does not stifle innovation would be rather hollow.

In fact, some people are involved in both the ODP and CORBA standardisation efforts. CORBA has services and functions which are analogous to ODP functions, and where these are similar the work can be done together. This has certainly already happened with the Trading function and the CORBA/ODP IDL, which are shared between ODP and CORBA. There is similar synergy between other standards, such as ODP's Type Repository and OMG's Meta Object Facility.

Aside from the CORBA services and functions, CORBA really only defines the means for communication between applications. This is an object bus known as the CORBA ORB, or Object Request Broker. The ORB is also similar to the microkernel in distributed operating systems. In a microkernel operating system, applications send requests to the kernel, which dispatches the requests to a service element. In a distributed operating system, the microkernel becomes a bus that dispatches requests to possibly remote services. The CORBA ORB in effect standardises this model.
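The dispatch model described above can be sketched in a few lines of Python. This is emphatically not the CORBA ORB interface; MiniBroker and its operations are invented names that show only the shape of the idea: clients hand every request to the bus, which routes it to the (possibly remote) service registered under the target name.

```python
class MiniBroker:
    """Toy object bus in the spirit of an ORB or a microkernel dispatcher:
    clients never hold a direct reference to a service; they name a target
    and an operation, and the broker does the routing."""
    def __init__(self):
        self._services = {}
    def register(self, name, service):
        # In a distributed operating system this service could live
        # on another machine entirely.
        self._services[name] = service
    def request(self, target, operation, *args):
        service = self._services[target]           # locate the servant
        return getattr(service, operation)(*args)  # dispatch the operation

class Clock:
    def now(self):
        return "12:00"

broker = MiniBroker()
broker.register("clock", Clock())

# The client names the target and operation; it never touches Clock directly.
print(broker.request("clock", "now"))
```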

Microsoft's DCOM provides a similar object bus, which can provide the implementation of a CORBA ORB. In ODP terms these object buses are what is known as an engineering channel. This is a very small part of ODP, and one which can be satisfied by external standards. Thus there is great opportunity for synergy between ODP and other standards.

The rest of the ODP Unplugged paper can be found following these links:

ODP Transparencies

ODP Functions

ODP Viewpoints

Quality of Service

Further Reading