Tuesday, June 16, 2009
As the title suggests, this post, the first in a series of continuations of my June 13 post, is about the technical aspects of interoperability. For your convenience, I’ve copied the figure below from the earlier post.
The word "interoperability" means different things to different people. To DBAs, for example, the ability of SQL Server, Oracle, and DB2 to recognize each other's commands is a sign of interoperability among these databases. In this post, however, I'll focus primarily on Web services, a distributed computing model capable of cross-platform, cross-language interoperability. Along the way, I'll compare Web services with two of its predecessors, CORBA and DCOM, which are still widely used.
Why the success of CORBA and DCOM is limited
Although CORBA and DCOM have been implemented on various platforms, in practice any solution built on these protocols depends on a single vendor's implementation. Thus, if one were to develop a DCOM application, all participating nodes in the distributed application would have to run some flavor of Windows. In the case of CORBA, every node in the application environment would need to run the same ORB product. There are cases where CORBA ORBs from different vendors do interoperate, but that interoperability generally does not extend to higher-level services such as security and transaction management. Furthermore, any vendor-specific optimizations would be lost in such a mixed-ORB setup.
Both protocols depend on a closely administered environment. The odds of two random computers being able to make successful DCOM or IIOP calls out of the box are fairly low. In addition, programmers must deal with protocol-specific message-format rules for data alignment and data types. DCOM and CORBA are both reasonable protocols for server-to-server communications, but both have severe weaknesses for client-to-server communications, especially when the client machines are scattered across the Internet.
The Common Object Request Broker Architecture (CORBA) provides a standards-based infrastructure for interactions between distributed objects. It allows software components running on dissimilar hardware, hosted on different operating systems, and written in different programming languages to communicate, collaborate, and perform productive work over a network.
A CORBA system accomplishes this magical task by confining the interaction between objects to well-defined interfaces. To use the service of a distributed object, you must interact with it through the interface. How the interface is implemented -- or even where it is implemented -- is not known and doesn't have to be known by the client. The figure below illustrates the workings of a typical CORBA system.
Object interactions in CORBA
Achieving location transparency
In this figure, the client code makes use of a server object through its interface. In fact, the interface is implemented by a stub object that is local to the client. As far as the client can tell, it's only interacting with the local stub object. Under the hood, the stub object implements the same interface as the remote server object. When the client invokes methods on the interface, the stub object forwards the call to a CORBA component called the Object Request Broker (ORB). The calling code in the client does not have to know that the call is actually going through a stub.
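The stub mechanism described above can be sketched in a few lines. This is an illustration of the proxy pattern behind CORBA's location transparency, not real CORBA code: the names (`Quoter`, `Orb`, `get_quote`) are made up, and the "ORB" here dispatches locally rather than marshaling an IIOP request over the network.

```python
class Quoter:
    """The interface that both the client-side stub and the
    server-side servant implement."""
    def get_quote(self, symbol: str) -> float:
        raise NotImplementedError

class Orb:
    """Stand-in for the ORB: locates the servant and dispatches the
    call. A real ORB would marshal the arguments into an IIOP
    request and send it to a possibly remote machine."""
    def __init__(self):
        self._servants = {}

    def register(self, name, servant):
        self._servants[name] = servant

    def invoke(self, name, method, *args):
        return getattr(self._servants[name], method)(*args)

class QuoterStub(Quoter):
    """Client-side proxy: implements the same interface as the
    servant, but forwards every call through the ORB."""
    def __init__(self, orb, name):
        self._orb, self._name = orb, name

    def get_quote(self, symbol):
        return self._orb.invoke(self._name, "get_quote", symbol)

class QuoterServant(Quoter):
    """Server-side implementation the client never sees directly."""
    def get_quote(self, symbol):
        return {"ORCL": 21.50}.get(symbol, 0.0)

orb = Orb()
orb.register("Quoter", QuoterServant())
client_view = QuoterStub(orb, "Quoter")  # client codes against Quoter only
print(client_view.get_quote("ORCL"))     # 21.5
```

The client holds a `Quoter` and calls `get_quote` as if the object were local; whether the servant lives in the same process or on another continent is the ORB's business, which is the essence of location transparency.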
Distributed Component Object Model (DCOM) is a proprietary Microsoft technology that lets software components distributed across several networked computers communicate with each other. It has since been deprecated in favor of the Microsoft .NET Framework, which includes support for Web services.
The Internet poses problems for CORBA and DCOM
DCOM was a major competitor to CORBA. Proponents of both technologies saw them as one day becoming the model for code and service reuse over the Internet. However, the difficulty of getting either technology to work across Internet firewalls, and on unknown and insecure machines, meant that plain HTTP requests in combination with web browsers won out over both.
Web services overview
Web services specifications such as SOAP, WSDL, and the WS-* family are currently the leading distributed computing standards for interfacing, interoperability, and quality-of-service policies. Web services are often called "CORBA for the Web" because many of their concepts and ideas derive directly from CORBA.
There are several ways to think about Web services when building applications. At the most basic level, Web services are an advanced family of communications protocols that allows applications to talk to each other. This level has progressed significantly over the past few years, with many tools (e.g., Oracle's JDeveloper and Microsoft's Visual Studio) that let software developers write interacting Web services and build complex applications. This level is often characterized by direct one-to-one interactions between services, or by relatively few services interacting with each other.
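To make this "communications protocol" level concrete, here is a minimal sketch of what actually travels on the wire: a SOAP 1.1 envelope built with Python's standard library. The service namespace and the `GetQuote`/`symbol` operation are hypothetical, invented for illustration; they do not refer to a real service.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockquote"  # hypothetical service namespace

ET.register_namespace("soap", SOAP_NS)

# Build the envelope: a Body wrapping one operation request.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
ET.SubElement(request, f"{{{SVC_NS}}}symbol").text = "ORCL"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

Because the payload is plain XML, typically carried over HTTP, any platform or language that can parse XML can produce or consume it; that, rather than any single vendor's wire format, is what gives Web services their cross-platform reach.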
However, using Web services merely as a communications protocol belies their true power: the service-oriented architecture (SOA). SOA describes an entire system of services dynamically discovering each other, getting together to perform some application, and recombining in many ways. This model encourages the reuse of technology and software, changes the way applications are designed, developed, and put to use, and brings the vision of distributed computing closer to reality. At this level, software developers need to think in terms of the SOA model and design their distributed applications across it. This level is characterized by technologies that enable distributed communication among services, such as the Enterprise Service Bus (ESB), a common distribution network through which services work together. More on ESB in an upcoming post.
Finally, at the highest level, we can treat this SOA model and its many component services as building blocks that can be assembled in whole sections into full applications, instead of writing line after line of code in the traditional way. By examining the connecting interfaces, we can build whole applications without ever really writing code. For that matter, direct code may even get in the way, since the services may be written in many different languages and run on many different platforms. The blocks can be put together into a workflow of operations that defines how the application behaves, and other tools can be used to monitor the effectiveness of the workflow at each service or group of services. At this level, developers can set aside conventional programming languages and work in a Model-Driven Architecture that helps them build applications that more accurately match a design. More on workflow in an upcoming post.