
Wednesday, February 10, 2010

Interoperability Between Oracle and Microsoft Technologies, Using RESTful Web Services - BPEL


A guide to developing REST Web services using the Jersey framework and Oracle JDeveloper 11g follows.

RESTful Web services are the latest revolution in the development of Web applications and distributed programming for integrating a great number of enterprise applications running on different platforms. Representational state transfer (REST) is the architectural principle for defining and addressing Web resources without using the heavy SOAP stack of protocols (WS-* stack). From the REST perspective, every Web application is a service; thus it's very easy to develop Web services with basic Web technologies such as HTTP, the URI naming standard, and XML and JSON parsers.

For a detailed account of how to create RESTful Web services by using Oracle technologies such as Oracle JDeveloper 11g, the Jersey framework (the reference implementation of the JAX-RS [JSR 311] specification), and Oracle WebLogic Server as well as how to consume the Web service by using Microsoft technologies such as Visual Studio .NET 2008 and the .NET 3.5 framework, click here.
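To give a flavor of the JAX-RS programming model that article walks through, here is a minimal sketch of a Jersey resource class. The resource path, class name, and XML payload are illustrative assumptions, not code from the article itself.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    // A minimal JAX-RS (JSR 311) resource. Jersey supplies the runtime that maps
    // incoming HTTP requests onto this annotated class.
    @Path("/patients/{id}")
    public class PatientResource {

        // Handles GET requests such as http://host:port/app/patients/42
        @GET
        @Produces("application/xml")
        public String getPatient(@PathParam("id") String id) {
            // A real service would look the record up in a data store.
            return "<patient><id>" + id + "</id></patient>";
        }
    }

Deployed on WebLogic Server with the Jersey servlet registered, a browser or a .NET client issuing a plain HTTP GET against the /patients/{id} URI would receive the XML representation, with no SOAP stack involved.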

Building a Web Services Network with BPEL - Caveat Emptor

Buoyed by maturing Web service standards, more and more organizations are using Web services in a collaborative environment. BPEL is fast becoming the platform for orchestrating these Web services for inter-enterprise collaboration. As discussed in earlier posts in this blog, BPEL offers the compelling benefits of a standards-based approach and loosely-coupled process integration to companies building an online marketplace or collaborative network.

Yet the exciting new capabilities offered by Web services carry some risk. In many cases, partner relationships break down or integration costs skyrocket if certain technical and administrative challenges are not addressed at design time:

* Partners must agree well in advance to conduct business according to specific criteria. Transport protocol, purpose of the interaction, message format, and business constraints have to be communicated clearly.

* Joining the network has to be an easy process; collaborative networks become successful mainly through growth.

* Users must easily find business services at runtime, or the promise of services-oriented architecture (SOA) is largely lost. (Service repositories are useful for this purpose.) If developers cannot readily find and reuse services, the services essentially don't exist.

* Partners should have the ability to monitor Web services in real time. End users should be able to track the progress of a specific order, and trading partners should be able to diagnose a specific bottleneck within a business process.

These challenges are exacerbated when a collaborative network operates in a hosted environment. In that model, partners expose the functionality provided by their legacy applications as Web services, which are published into a centralized repository. The host is responsible for orchestrating the complex business processes, which in turn leverage partner Web services.

Wednesday, December 30, 2009

SOA, Web Services, BPEL, Human Workflow, User Interaction and Healthcare Systems


Lack of integration among legacy healthcare systems and applications means a continued reliance on manual processes that can introduce high-risk errors into critical medical data. And isolated systems can compromise a provider's ability to follow an individual patient's care seamlessly from intake to treatment to aftercare.

While healthcare providers recognize that integration can help them achieve better service levels, many have been reluctant to proceed because of the critical nature of healthcare systems. But the approach to integration need not be a radical one of system rip and replace, nor does it have to proceed through the development of system-by-system integration solutions.

Service Oriented Architecture (SOA) is a standards-based approach to integrating IT resources that can enable you to leverage existing assets, while at the same time building an infrastructure that can rapidly respond to new organizational challenges and deliver new dynamic applications. The SOA approach can help free application functionality from its underlying IT architecture and make existing and new services available for consumption over the network.

To derive new value from existing services and go beyond simple point-to-point integration, you will need to combine and orchestrate these services into a business process. You will want to connect them in a coordinated manner: for example, have the result(s) from one service feed another service as input, with branching logic based on the outcome. Of course, you can use Java, C#, or another programming environment to call the services and manage the processes and data, but there is an easier, declarative way.
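To make the contrast concrete, here is a rough Java sketch of hand-coded, point-to-point orchestration. The service interfaces and the branching rule are hypothetical stand-ins for generated Web service proxies, and this is precisely the plumbing that a declarative BPEL process lets you avoid writing.

    // Hypothetical stand-ins for Web service client proxies.
    interface EligibilityService { boolean isEligible(String patientId); }
    interface ClaimService       { void submitClaim(String patientId); }
    interface ReviewService      { void flagForManualReview(String patientId); }

    public class HandCodedOrchestration {
        private final EligibilityService eligibility;
        private final ClaimService claims;
        private final ReviewService review;

        public HandCodedOrchestration(EligibilityService e, ClaimService c, ReviewService r) {
            this.eligibility = e;
            this.claims = c;
            this.review = r;
        }

        // The result of one service call feeds the next, with branching logic in
        // between -- all hand-written, and all of it changes when the process does.
        public void process(String patientId) {
            boolean eligible = eligibility.isEligible(patientId);  // service 1
            if (eligible) {
                claims.submitClaim(patientId);                     // service 2
            } else {
                review.flagForManualReview(patientId);             // service 3
            }
        }
    }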

BPEL

An important standard in the SOA world is BPEL, or Business Process Execution Language, which serves as the glue to tie SOA-based services (Web services) together into business processes -- at least the portions that can be automated. The resulting BPEL process can itself be exposed as a Web service, and therefore, be included in other business processes.
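Because the exposed process is just another Web service endpoint, a client does not need to know that BPEL sits behind it. The sketch below uses the standard JAX-WS Dispatch API to invoke such a process; the service name, endpoint URL, and request payload are hypothetical.

    import java.io.StringReader;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;
    import javax.xml.ws.soap.SOAPBinding;

    public class BpelProcessClient {
        public static void main(String[] args) {
            // Hypothetical names for a BPEL process exposed as a Web service.
            QName serviceName = new QName("http://example.org/bpel", "ApproveOrderService");
            QName portName = new QName("http://example.org/bpel", "ApproveOrderPort");

            // Build a dynamic client; no generated stubs are required.
            Service service = Service.create(serviceName);
            service.addPort(portName, SOAPBinding.SOAP11HTTP_BINDING,
                    "http://example.org/soa-infra/ApproveOrder");
            Dispatch<Source> dispatch =
                    service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

            // Send the request payload to the process and print the raw response.
            String request = "<approveOrder xmlns=\"http://example.org/bpel\">"
                    + "<orderId>42</orderId></approveOrder>";
            Source response = dispatch.invoke(new StreamSource(new StringReader(request)));
            System.out.println("Received response: " + response);
        }
    }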

The BPEL standard says nothing about how people interact with it, but BPEL in the Oracle BPEL Process Manager (to be discussed in my next post) includes a Human Workflow component (shown in the figure below) that provides support for human interaction with processes.



BPEL and User Interaction

I began an introduction to BPEL and human workflow towards the bottom of my December 14 post. Click here for a good deal more on this topic.

Humans can be involved in business processes as a special kind of implementation of an activity. To facilitate this, a new BPEL activity type called a human task is required. From the point of view of the BPEL business process, a human task is a basic activity that is not implemented by a piece of software, but realized by an action performed by a human being. In the drag-and-drop design palette shown in the figure above, a human activity can be added to a BPEL process with your mouse. A human activity can be associated with different groups of people, one for each generic human role.

People dealing with business processes do so by using a user interface. When human activities are used, the input data and output data must be rendered in a way that the user can interpret. More on this in upcoming posts.

Wednesday, November 25, 2009

Ontology-Based Software Application Development -- Java and .NET


Consider the following scenario: A programmer needs to read data from a database via the JDBC interface. The system administrator of the organization provides a user name and password, which obviously need to be used in the process. Then, the programmer

1. Searches the entire API for a method call (or calls) that takes a database user name as an input parameter.

2. Has to understand how various API calls should be sequenced in order to go from the connection information all the way to actually receiving data from the database.

If the APIs are not semantically rich (i.e., they contain only syntactic information, which the programmers have to read and interpret), understanding, learning and using an API can be a very time consuming task.

For a discussion of how the application of ideas from the areas of "Knowledge Management" and "Knowledge Representation" -- the enrichment of the purely syntactic information in APIs with semantic information -- will allow the computer to perform certain tasks that the human programmer normally has to perform, see

http://www.aifb.uni-karlsruhe.de/WBS/aeb/smartapi/smartapi.pdf


A similar semantification of Web services (Ontology-enabled Services) is being widely discussed and implemented today.




See, for example,

http://www.cs.vu.nl/~maksym/pap/Onto-SOA-WAI.pdf

and

http://www.computer.org/portal/web/csdl/doi/10.1109/AICT-ICIW.2006.141


A number of my earlier posts have been about Protégé, the popular ontology development tool, and OWL, one of the main ontology languages. To continue that discussion, see

http://www.sandsoft.com/edoc2004/KnublauchMDSW2004.pdf


which discusses a realistic application scenario -- some initial thoughts on a software architecture and a development methodology for Web services and agents for the Semantic Web. Their architecture is driven by formal domain models (ontologies).

Central to their design is Jena, a Java framework for building Semantic Web applications. It provides a programmatic environment for RDF, RDFS, OWL, and SPARQL, and includes a rule-based inference engine.

Jena is open source and grew out of work within the HP Labs Semantic Web Programme.

For more on Jena, see

http://jena.sourceforge.net/documentation.html

Jena is a programming toolkit that uses the Java programming language. While there are a few command-line tools to help you perform some key tasks using Jena, mostly you use Jena by writing Java programs.
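As a taste of that model, here is a minimal sketch that builds and serializes a tiny RDF model with Jena. The resource URI and name are made up; the package names are those of the Jena 2.x releases documented at the SourceForge site above (later Apache Jena releases moved to the org.apache.jena namespace).

    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.Resource;
    import com.hp.hpl.jena.vocabulary.VCARD;

    public class JenaHelloWorld {
        public static void main(String[] args) {
            // Create an empty, in-memory RDF model.
            Model model = ModelFactory.createDefaultModel();

            // Create a resource and attach a literal-valued property to it.
            Resource person = model.createResource("http://example.org/person/JohnSmith");
            person.addProperty(VCARD.FN, "John Smith");

            // Serialize the model as RDF/XML to standard output.
            model.write(System.out, "RDF/XML");
        }
    }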

But, .NET developers have similar resources. See, for example

http://www.ic.uff.br/~esteban/files/sbgames09_Alex.pdf


for a development environment using Microsoft Visual Studio, the base language C#, and the graphical library XNA. Protégé has been used for designing the ontology, and the application uses the OwlDotNetApi library.

This 2009 work demonstrates a step-by-step implementation, from the definition of an ontological knowledge base to the implementation of the main classes of a strategy game. It aims at serving as a basic reference for developers interested in starting .NET development of ontology-based applications.

Sunday, September 27, 2009

Watson - An efficient access point to online ontologies - A gateway to the Semantic Web

Next generation semantic applications will be characterized by a large number of sometimes widely-distributed ontologies, some of them constantly evolving. That is, many next-generation semantic applications will rely on ontologies embedded in a network of already existing ontologies. Other semantic applications – e.g. some electronic health records (EHR) – will maintain a single, globally consistent semantic model that serves the needs of application developers and fully integrates a number of pre-existing ontologies.

As the Semantic Web gains momentum, more and more semantic data is becoming available online. Semantic Web applications need an efficient access point to this Semantic Web data. Watson, the main focus of this post, provides such a gateway. Two limited demonstrations of Watson - one video, the other static - are given below.

Overview of Watson Functionalities

The role of a gateway to the Semantic Web is to provide an efficient access point to online ontologies and semantic data. To do so, such a gateway plays three main roles:

(1) it collects the available semantic content on the Web
(2) analyzes it to extract useful metadata and indexes, and
(3) implements efficient query facilities to access the data.

Watson provides a variety of access mechanisms, both for human users and software programs. The combination of mechanisms for searching semantic documents (keyword search), retrieving metadata about these documents and querying their content (e.g., through SPARQL) provides all the necessary elements for applications to select and exploit online semantic resources in a lightweight fashion, without having to download the corresponding ontologies.
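As a sketch of the SPARQL route, the few lines below use Jena's ARQ module to run a SELECT query against a remote endpoint without downloading any ontology. The endpoint URL and the query are placeholders, not Watson's actual service address.

    import com.hp.hpl.jena.query.Query;
    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.QueryFactory;
    import com.hp.hpl.jena.query.ResultSet;
    import com.hp.hpl.jena.query.ResultSetFormatter;

    public class RemoteSparqlQuery {
        public static void main(String[] args) {
            // List a handful of OWL classes defined in the remote semantic data.
            String sparql =
                "SELECT ?cls WHERE { ?cls a <http://www.w3.org/2002/07/owl#Class> } LIMIT 10";
            Query query = QueryFactory.create(sparql);

            // Placeholder endpoint URL -- substitute the gateway's real SPARQL service.
            QueryExecution qe = QueryExecutionFactory.sparqlService(
                    "http://example.org/sparql", query);
            try {
                ResultSet results = qe.execSelect();
                ResultSetFormatter.out(System.out, results, query);
            } finally {
                qe.close();
            }
        }
    }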

For an easy-to-follow video demonstration of the Watson plug-in for the NeOn toolkit, click on

http://videolectures.net/iswc07_daquin_watson/

and, better still, click one of the Media Player links at this destination.

Note: There is a Watson plug-in for the ontology editor Protégé in the works.

Protégé (see my August 24 post below) is probably the most popular ontology editor available. In addition, its well-established plug-in system facilitates the development of a plug-in using the Watson Web Services and API. To date, however, the Protégé site provides only what it describes as “more a proof of concept or an example than a real plug-in.”

NeOn Toolkit

The NeOn architecture for ontology management supports the next generation of semantics-based applications. The NeOn architecture is designed in an open and modular way; it includes infrastructure services such as a registry and a repository, and supports distributed components for ontology development, reasoning and collaboration in networked environments.

The NeOn toolkit, the reference implementation of the NeOn architecture, is based on the Eclipse infrastructure.

Ontology Management: Semantic Web, Semantic Web Services, and Business Applications (Springer, 2008)

A static demonstration of the Watson plug-in for the NeOn toolkit

The Watson plug-in allows the user to select entities of the currently edited ontology he/she would like to inspect, and to automatically trigger queries to Watson, as a remote Web service. Results of these queries, i.e. semantic descriptions of the selected entities in online ontologies, are displayed in an additional view allowing further interactions. The figure below provides an example, where the user has selected the concept “human” and triggered a Watson search. The view on the right provides the query results (a list of definitions of class human which have been found on the Semantic Web) and allows easy integration of the results by simply clicking on one of the different “add”-buttons.

Finally, the core of the plug-in is the component that interacts with the core of the NeOn toolkit: its datamodel. Statements retrieved by Watson from external ontologies can be integrated into the edited ontology, which requires the plug-in to extend this ontology through the NeOn toolkit datamodel and data management component.




An interesting exercise: go to the Watson search page and search on "snomed".


This dynamic view is what you get after clicking the (view as graph) link.

Thursday, August 13, 2009

Semantic Web Technologies: Ontologies, Agents, Web Services


Here's a Web app developed at Stanford University some time ago. It's made up of a single ontology with entries for foods and wines, an ontology agent that returns text from this ontology and a portal that manipulates the text returned by the agent.

http://onto.stanford.edu:8080/wino/index.jsp

The ontology underlying the agent used here contains hierarchies and descriptions of food and wine categories, along with restrictions on how particular instances might be paired together.

For readers wanting an easy-to-read discussion on what exactly an agent is and how it works with ontologies, see

http://www.nature.com/nature/webmatters/agents/agents.html

The next three figures are screen shots taken from this Web app, which doesn't always employ state-of-the-art technologies like OWL (discussed in earlier posts) under the hood; but this app provides a simple example of the most basic building blocks found in many of today's Semantic Web apps (also discussed in earlier posts).



Figure I - The home page




Figure II - After clicking on the "Roast Duck" link




Figure III - After clicking on the "Web Inventory Search" link

The ontology defines the concept of a wine. According to the specification, a wine is a potable liquid produced by at least one maker of type winery, and is made from at least one type of grape (such grapes are restricted to wine grapes elsewhere in the ontology).

The declaration additionally stipulates that a wine comes from a region that is wine-producing and, most importantly to the agent, that a wine has four properties: color, sugar, body, and flavor.
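To make that definition concrete, here is a rough sketch of how one such restriction -- a wine must have at least one maker that is a winery -- could be expressed with Jena's OntModel API. The namespace and class names are illustrative, not the URIs of the actual wine ontology.

    import com.hp.hpl.jena.ontology.ObjectProperty;
    import com.hp.hpl.jena.ontology.OntClass;
    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.ontology.SomeValuesFromRestriction;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class WineOntologySketch {
        public static void main(String[] args) {
            OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
            String ns = "http://example.org/wine#";   // illustrative namespace

            OntClass wine = m.createClass(ns + "Wine");
            OntClass winery = m.createClass(ns + "Winery");
            ObjectProperty hasMaker = m.createObjectProperty(ns + "hasMaker");

            // "A wine is produced by at least one maker of type winery":
            // Wine is declared a subclass of (hasMaker some Winery).
            SomeValuesFromRestriction madeBySomeWinery =
                    m.createSomeValuesFromRestriction(null, hasMaker, winery);
            wine.addSuperClass(madeBySomeWinery);

            m.write(System.out, "RDF/XML-ABBREV");
        }
    }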

The concept of a meal course underlies pairing of a food with a wine. Each course is a consumable thing comprising at least one food and at least one drink, the latter of which is stipulated to be a wine.

When the user selects a type of course, or an individual food that gets mapped to a type of course, the agent will consult that course definition for restrictions on the constituent food or wine. All such course types map back to this concept, like objects to their superclasses in object-oriented programming.

Suppose the user has selected pasta with fra diavolo, or perhaps pasta with spicy red sauce directly. The concept of such a food is defined elsewhere in the ontology. Furthermore, such courses must be a subclass of those with specific restrictions on the properties of their wines.

One wine that matches the above restrictions is the Pauillac. This individual wine is simply defined as a Pauillac whose maker is Chateau Lafite Rothschild. Together with other statements in the ontology, this allows the reasoner (discussed in an earlier post) to deduce many additional facts: that this is a Medoc wine from Bordeaux, in France, and that it is red, to name a few.

The concept of a Pauillac specifies that all such wines feature full bodies and strong flavors and are made entirely from cabernet sauvignon grapes. Further, Pauillacs are a particular subset of Medocs, distinguished by their origin in the Pauillac region. It is through this additional subclass relationship that Pauillacs are defined elsewhere as red and dry.

Following the above example through the ontology reveals a straightforward logical path for pairing the Pauillac with the selected course. Because these items were specified in a standardized, machine-readable format, it is an equally straightforward task for any compliant automated reasoner.

Why use an ontology?

The functionality provided by the wine agent is not unlike that which could be provided by a simple look-up table. Indeed, food/wine pairings are traditionally published in some form of tabular chart where marks appear at the intersections of columns and rows representing compatible varieties of food and wine. The wine agent demonstrates that at least this simple task can be accomplished with semantic markup technology, but what about more complicated applications like, for example, those required by electronic health records (EHR)? For now, I'll postpone any discussion of EHR and stick with this food-wine pairing app.

Suppose that if not the entire web, then at least some number of cooperating parties were using semantic markup to participate in this project. Rather than the traditional approach of trying to build an enormous database of foods and wines, then, the definitions would be distributed across the participating parties. A restaurant or retailer offering an on-line menu could mark each food item with standardized machine-readable definitions. Similarly, then, a wine retailer could mark its wines according to the definitions exemplified above.

Such markings would benefit from well-known advantages of ontologies. For instance, through the subclassing, adding a new Pauillac to the inventory would not require wine.com (the source of the GUI shown in Figure III) to mark all of the wine's properties; it would just be another Pauillac as specified in the example, plus any differentiating features. But more importantly in terms of software agents, all the markings would be machine readable, and could be handled by systems from any organization.

Rather than relying on a human user to select a food or food type, the agent could crawl the web for foods marked within the wines namespace and pre-compile suitable pairings. This is where Web services (discussed in earlier posts) come in.

Footnote

The notion of agent-based computing has been adopted enthusiastically in the financial trading community, where autonomous market trading agents have been said to outperform human commodity traders by 7%.

Machines can monitor stock market movements much more quickly than humans, and if you can encode the kinds of rules that you want, then it is not unreasonable to imagine that computational traders will be able to outperform humans.

I commented on high-speed trading in my August 3 post.

Sunday, June 28, 2009

Technical Interoperability of Disparate Systems - An Electronic Health Record (EHR)-Centric System Implemented with a Service-Oriented Architecture


The independent organizations (a.k.a. silos) shown in Figure 1 typically implement their business processes on different computer systems (Unix, Windows, mainframes), on different platforms (J2EE, .NET) and with different vocabularies. Yet, they all need to communicate in a way that serves, in this case, a trauma victim at the time of his or her emergency, researchers who will later study cohorts of this trauma victim, and sundry others (most of whom are not shown in the simplified model).

This post will discuss an Electronic Health Record (EHR)-centric system implemented with a Service-Oriented Architecture (SOA) -- the business industry’s de facto standard for interoperability. The functions of this system are data collection, management and distribution for both the immediate situation and long term health care experiences.

I’ll address some of the challenges (and opportunities) of technical interoperability, holding off ‘til later posts any discussion of semantic interoperability.

Note: This rather lengthy post is not, and is not meant to be, a comprehensive treatise on SOA. But I have included a number of references (links to references) for anyone looking for a more detailed account of this topic.

Figure 1

Figure 2 shows a traditional implementation of the system shown in the use case drawn in Figure 1. When a change -- virtually any change -- is made in one part of this interconnected system, the overall system usually stops working, until system-wide adjustments are made. In short, the figure below shows a system that’s “hard wired.”

Note: For simplicity, these figures and this discussion omit a number of real-world details: for example, the voice communication often employed by police and EMS teams.


Figure 2

To remedy this problem, a new architecture, that of Service-Oriented Architecture (SOA), was devised. An SOA implementation of the system shown in the use case is outlined in Figure 3. At first glance, the SOA-based system might seem more complicated than the system shown in Figure 2. But, as the following discussion will attempt to show, it isn’t.



Figure 3

Note: The major part of this post will be based on a Web services implementation of SOA. However, before concluding this post, I’ll address the oft-heard view that SOA is now “dead.” SOA is indeed alive and kicking in many organizations.

The system in Figure 2, if fleshed out, would move toward becoming the fully connected system shown below (in practice, very few systems are fully connected). In such a system of computer applications, when one node is changed, all other nodes that connect to it would need a new interface to the changed node.



As shown, in a fully connected network with n nodes, there are n x (n-1) / 2 direct paths. That means 15 connections for 6 nodes, 45 connections for 10 nodes, etc.

In contrast, each node in the system in Figure 3 has only one connection – that to a bus, as represented in the figure below.



Service-Oriented Architecture

SOA relies on services exposing their functionality via interfaces that other applications and services can read to understand how to utilize those services.

This style of architecture promotes reuse at the macro (service) level rather than the micro (class) level. It can also simplify interconnection to – and usage of – existing IT (legacy) assets.

Designers can implement SOA using a wide range of technologies, including SOAP, REST, DCOM, CORBA, and Web services. (I gave a brief overview of DCOM, CORBA and Web services in my June 16 post and will discuss REST below.) The key is independent services with defined interfaces that can be called to perform their tasks in a standard way, without a service having foreknowledge of the calling application, and without the application having or needing knowledge of how the service actually performs its tasks.


Albeit in the distant past, I’ve used the Oracle SOA Suite to federate a unified system out of the disparate systems shown in the figures.


This suite, which includes JDeveloper, a robust Java IDE, is by no means the only solution; there are others, both proprietary and open source! For example,

Proprietary:
http://i.zdnet.com/whitepapers/cape_clear_Principles_of_BPEL.pdf

Open source:
http://orchestra.ow2.org/xwiki/bin/view/Main/

However, the Oracle SOA Suite is a complete set of service infrastructure components for creating, deploying, and managing services. It enables services to be created, managed, and orchestrated into composite applications and business processes. Additionally, you can adopt it incrementally, on a project-by-project basis, and still benefit from the common security, management, deployment architecture, and development tools that you get out of the box.

This Suite is a standards-based technology suite that consists of the following:

* Oracle BPEL Process Manager to orchestrate services into business processes
* ESB to connect existing IT systems and business partners as a set of services
* Oracle Business Rules for dynamic decisions at run time that can be managed by business users or business analysts
* Oracle Application Server Integration Business Activity Monitoring to monitor services and disparate events and provide real-time visibility into the state of the enterprise, business processes, people, and systems
* Oracle Web Services Manager to secure and manage authentication, authorization, and encryption policies on services, separately from your service logic
* UDDI registry to discover and manage the life cycle of Web services
* Oracle Application Server, which provides a complete Java 2, Enterprise Edition (J2EE) environment for your J2EE applications

For an overview of the soon-to-be-released Version 11, see

http://www.oracle.com/technology/products/ias/bpel/techpreview/2008-05-01-whats-new-in-oracle-soa-suite-tp4.pdf

Business Process Execution Language (BPEL)

One of the key standards accelerating the adoption of SOA is Business Process Execution Language (BPEL) for Web Services. BPEL enables organizations to automate their business processes by orchestrating services (thus enabling developers to build end-to-end business processes spanning applications, systems, and people in a standard way). Existing functionality is exposed as services. New applications are composed using services. Services are reused across different applications. Services everywhere! For an account of Orchestration versus Choreography, see
http://www.oracle.com/technology/pub/articles/matjaz_bpel1.html

BPEL has emerged as the standard for process orchestration.

In the next figure, I show two palettes of drag-and-drop components available in version 10.3 of the SOA Suite for the development of a BPEL process. Version 11 will be available shortly.


Introduction to the JDeveloper BPEL Designer

http://download-west.oracle.com/docs/cd/B31017_01/core.1013/b28938/design.htm#CHDIJCGH

Enterprise Service Bus

An Enterprise Service Bus (ESB) is an architectural pattern, not a software product. Different software products can form an ESB. In some cases, companies use multiple products in different areas, leveraging specific functionality to meet their unique requirements. These different products can be federated together as the realization of the ESB pattern.




In the next figure, I show two palettes of drag-and-drop components available in version 10.3 of the SOA Suite for the development of an ESB. As I mentioned earlier, Version 11 will be available shortly.




Introduction to the JDeveloper ESB Designer

http://download-west.oracle.com/docs/cd/B31017_01/core.1013/b28938/design.htm#CHDIBCCC

Figure 3 introduced Business Activity Monitoring (BAM) and Business Rules to the architecture.

Business Activity Monitoring – BAM satisfies a growing need to enable health care executives, and more specifically operations managers, to improve their decision-making processes: first, by getting a real-time view of the business events occurring in their enterprise; and second, by using the derived intelligence to analyze and improve the efficiency of their business processes.

Understanding Business Activity Monitoring in Oracle SOA Suite
http://www.packtpub.com/article/business-activity-monitoring-in-oracle-soa-suite

Business Rules – Business Rules is one of the newer technologies (typically run from an application server) and is designed to make applications more agile. A business analyst can change business policies expressed as rules quickly and safely, with little or no assistance from a programmer.

Using Business Rules to Define Decision Points in Oracle SOA Suite

Part 1.
http://www.packtpub.com/article/business-rules-define-decision-points-oracle-soa-suite-part1

Part 2.
http://www.packtpub.com/article/business-rules-define-decision-points-oracle-soa-suite-part2

For detailed coverage of the Oracle Service Bus, BPEL Process Manager, Web Service Manager, Rules, Human Workflow, and Business Activity Monitoring, see

Oracle SOA Suite Developer's Guide
http://www.packtpub.com/developers-guide-for-oracle-soa-suite-10gr3/book

and

Getting Started With Oracle SOA Suite 11g R1 – A Hands-On Tutorial
http://www.packtpub.com/getting-started-with-oracle-soa-suite-11g-r1/book

When designing an SOA solution, it's not always clear whether you should use a Web services BPEL process or an ESB mediation flow (or both).

http://www.ibm.com/developerworks/websphere/library/techarticles/0803_fasbinder2/0803_fasbinder2.html

Building Event-Driven Architecture with an Enterprise Service Bus

http://www.oracle.com/technology/pub/articles/jellema-esb.html

Software frameworks

You should keep in mind that interoperability issues between platforms can arise when switching from one protocol to another and one software framework to another. Some examples include SOAP, REST, the .Net Framework, Enterprise Java Beans (EJB), and Java Messaging Service (JMS).

.NET Web services running over HTTP can be called in three different ways: an HTTP GET operation, an HTTP POST operation, and SOAP. The GET and POST operations are useful if you need to call a Web service quickly and no SOAP client is readily available. You can also use REST to perform GET, POST, PUT, and DELETE operations over HTTP from, say, a Perl script, in which you can specify SQL queries and simple message queues.
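The same quick, SOAP-free style of call is available from Java with nothing but the standard library. In the sketch below, the .asmx endpoint URL and query parameter are hypothetical.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SimpleRestGet {
        public static void main(String[] args) throws Exception {
            // Hypothetical .NET Web service method exposed over HTTP GET.
            URL url = new URL("http://example.org/Service.asmx/GetPatient?id=42");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            // Read the XML (or JSON) payload returned by the service.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
            conn.disconnect();
        }
    }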

For a discussion of SOAP/REST in the .NET (Visual Studio 2008 SP1) ecosystem, see

http://www.pluralsight.com/community/blogs/scottallen/archive/2008/08/11/visual-studio-sp1-and-the-metification-of-rest.aspx

and

http://msdn.microsoft.com/en-us/library/dd203052.aspx

Often, services within an organization’s firewall are REST-based, while 3rd party partners use a mix of SOAP and HTTPS protocols to access their public services. By using REST for internal services, organizations are avoiding some of the overhead that the WS-* standards introduce.

If a SOAP client is available, here's how to make a simple choice between REST and SOAP: if the application is resource-based, choose REST; if the application is activity-based, opt for SOAP. Under REST, a client might request that several operations be performed on a series of resources over HTTP. For SOAP-based requests, only one invoke operation is needed for each activity-oriented service that a client might request be performed.

REST requests do not depend on WSDL as SOAP requests do. Yet, REST and SOAP can co-exist as requests from a composite Web service application to an external Web service.

When to use REST based web services?

http://oracled.wordpress.com/2009/03/11/when-to-use-rest-based-web-services/




When Web services are outside the control of the organization (as in the figures at the top of this post), you need to ensure that they can interoperate externally with one another with respect to shared semantics and contractual obligations. Semantic misunderstandings and contractual loopholes contribute to interoperability problems between external enterprise Web services. Future posts will focus on these aspects of interoperability.

For presentations that include the pros and cons of different protocols (SOAP and HTTP) and software frameworks (.NET and J2EE), see

http://www.esri.com/events/devsummit/pdfs/keynote_chappell.pdf

and

http://www.esri.com/events/devsummit/sessions/keynote.html

The oft-heard view that SOA is now “dead”

Once thought to be the savior of IT, SOA instead turned into a great failed experiment -- at least for many organizations. SOA was supposed to reduce costs and increase agility on a massive scale. In many situations, however, SOA has failed to deliver its promised benefits.

Yet, although SOA is understandably pronounced dead by many, the requirement for a service-oriented architecture is stronger than ever.

Adopting SOA (i.e., application re-architecture) requires disruption to the status quo. SOA is not simply a matter of deploying new technology and building service interfaces to existing applications; it requires redesign of the application portfolio. And it requires a massive shift in the way IT operates. The small select group of organizations that has seen spectacular gains from SOA did so by treating it as an agent of transformation. In each of these success stories, SOA was just one aspect of the transformation effort. And here’s the secret to success: SOA needs to be part of something bigger. If it isn’t, then you need to ask yourself why you’ve been doing it.

The latest shiny new technology will not make things better. Incremental integration projects will not lead to significantly reduced costs and increased agility. If you want spectacular gains, then you need to make a spectacular commitment to change.

Note: As of 2008, increasing numbers of third-party software companies offer software services for a fee. In the future, SOA systems may consist of such third-party services combined with others created in-house.

More on this can be found in “WOA: Putting the Web Back in Web Services.”

http://blogs.gartner.com/nick_gall/2008/11/19/woa-putting-the-web-back-in-web-services/

Also, here’s a link to one of my earlier articles on SOA

“The Economic, Cultural and Governance Issues of SOA”

http://www.cioupdate.com/reports/article.php/11050_3652091_1/The-Economic-Cultural-and-Governance-Issues-of-SOA.htm

Tuesday, June 16, 2009

Electronic Health Records (EHR) – Interoperability of Disparate Systems – Technical


As the title suggests, this post, the first in a series of continuations of my June 13 post, is about the technical aspects of interoperability. For your convenience, I’ve copied the figure below from the earlier post.




The word "interoperability" means different things to different people. To DBAs, for example, the ability of SQL Server, Oracle and DB2 to recognize each other’s commands is a sign of the interoperability of these databases. In the current post, however, I'll focus primarily on a distributed computing model that is capable of cross-platform and cross-programming language interoperability, Web services. In the process, I'll compare Web services with a couple of its predecessors, CORBA and DCOM, which are still being used widely.

Why the success of CORBA and DCOM is limited


Although CORBA and DCOM have been implemented on various platforms, the reality is that any solution built on these protocols will be dependent on a single vendor's implementation. Thus, if one were to develop a DCOM application, all participating nodes in the distributed application would have to be running a flavor of Windows. In the case of CORBA, every node in the application environment would need to run the same ORB product. Now there are cases where CORBA ORBs from different vendors do interoperate. However, that interoperability generally does not extend into higher-level services such as security and transaction management. Furthermore, any vendor specific optimizations would be lost in this situation.

Both these protocols depend on a closely administered environment. The odds of two random computers being able to successfully make DCOM or IIOP calls out of the box are fairly low. In addition, programmers must deal with protocol-specific message format rules for data alignment and data types. DCOM and CORBA are both reasonable protocols for server-to-server communications. However, they both have severe weaknesses for client-to-server communications, especially when the client machines are scattered across the Internet.

CORBA overview

Common Object Requesting Broker Architecture (CORBA) provides standards-based infrastructure for interactions between distributed objects. It allows software components running on dissimilar hardware, hosted on different operating systems, and programmed in different programming languages to communicate, collaborate, and perform productive work over a network.

A CORBA system accomplishes this magical task by confining the interaction between objects to well-defined interfaces. To use the service of a distributed object, you must interact with it through the interface. How the interface is implemented -- or even where it is implemented -- is not known and doesn't have to be known by the client. The figure below illustrates the workings of a typical CORBA system.

Object interactions in CORBA





Achieving location transparency


In this figure, the client code makes use of a server object through its interface. In fact, the interface is implemented by a stub object that is local to the client. As far as the client can tell, it's only interacting with the local stub object. Under the hood, the stub object implements the same interface as the remote server object. When the client invokes methods on the interface, the stub object forwards the call to a CORBA component called the Object Request Broker (ORB). The calling code in the client does not have to know that the call is actually going through a stub.
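In Java terms (a CORBA ORB shipped with the JDK through Java 8 as "Java IDL"), the client-side flow just described looks roughly like the sketch below. It stops short of the narrow() call onto the typed stub because the stub and helper classes are generated from the server's IDL; those names are hypothetical.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    public class CorbaClientSketch {
        public static void main(String[] args) throws Exception {
            // Initialize the client-side ORB.
            ORB orb = ORB.init(args, null);

            // Locate the CORBA naming service and resolve a server object by name
            // (this assumes a name service is running and reachable).
            org.omg.CORBA.Object nsRef = orb.resolve_initial_references("NameService");
            NamingContextExt naming = NamingContextExtHelper.narrow(nsRef);
            org.omg.CORBA.Object objRef = naming.resolve_str("Hello");

            // With IDL-generated classes on the classpath, a call such as
            //   Hello hello = HelloHelper.narrow(objRef);
            //   hello.sayHello();
            // would look like a local method call, but the stub forwards it
            // through the ORB to the remote servant.
            System.out.println("Resolved remote object reference: " + objRef);
        }
    }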

DCOM overview

Distributed Component Object Model (DCOM) is a proprietary Microsoft technology that lets software components distributed across several networked computers communicate with each other. It has since been deprecated in favor of the Microsoft .NET Framework, which includes support for Web services.

Internet poses problems for CORBA and DCOM

DCOM was a major competitor to CORBA. Proponents of both of these technologies saw them as one day becoming the model for code and service-reuse over the Internet. However, the difficulties involved in getting either of these technologies to work over Internet firewalls, and on unknown and insecure machines, meant that normal HTTP requests in combination with web browsers won out over both of them.

Web services overview

Web services specifications such as SOAP, WSDL, and the WS-* family are currently the leading distributed computing standards for interfacing, interoperability, and quality of service policies. Web services are often called "CORBA for the Web" because of their many derivative concepts and ideas.

There are several ways to think about Web services when building applications. At the most basic level, they are an advanced family of communications protocols that allow applications to talk to each other. This level has progressed quite significantly over the past few years, with many tools (e.g., Oracle's JDeveloper and Microsoft's Visual Studio) that allow software developers to write interacting Web services and build complex applications. This level is often characterized by direct one-on-one interactions between services, or by relatively few services interacting with each other.

SOA overview

However, just using Web services as a communications protocol belies their true power: enabling a service-oriented architecture (SOA). SOA describes an entire system of services dynamically looking around for each other, getting together to perform some application, and recombining in many ways. This model encourages the reuse of technology and software, and it changes the way applications are designed, developed and put to use. It brings the vision of distributed computing closer to reality. At this level, software developers need to think in terms of the SOA model and design their distributed applications across it. This level is characterized by the use of technologies that allow distributed communications among services, such as an Enterprise Service Bus (ESB), a common distribution network for services to work with. More on ESB in an upcoming post.

Finally, the highest level is to look at this SOA model and its many component services as building blocks that can be assembled, in whole sections, into full applications, instead of the traditional method of writing line after line of code. By examining the connecting interfaces, we can build whole applications without ever really writing code. For that matter, direct code may even get in the way, since the services may be written in numerous different languages and on different platforms. The blocks can be put together into a workflow of operations that defines how the application performs, and other tools can be used to monitor the effectiveness of the workflow at each service or group of services. At this level, developers can put away regular programming languages and work in a Model-Driven Architecture that helps them build applications that conform more accurately to a design. More on workflow in an upcoming post.

Tuesday, April 14, 2009

Electronic Health Records (EHR)

The health information technology provisions in the Obama administration's stimulus bill are a step toward the goal of nearly universal electronic health record adoption in the U.S. over the next 10 years (compared with 17 percent today). The central feature of the plan is incentive payments for using electronic records for improvements in health quality, efficiency, prevention and safety.

This kind of spending, if done wrong, can have the negative market consequence of interfering with rapid innovation by locking in today’s processes and technologies, which although well-intended, came about without a systemic view. And we risk locking out the very innovations we need for meaningful health information sharing to support better decisions.

The goal for health IT should not be primarily the creation of standards or the certification of software. Rather, standards and certification should support measurable health improvements. Health improvements are not achieved by the mere installation of software; they are achieved through the effective use of information for better decision-making.

At the same time, individual states -- for example Massachusetts, which has a newly passed law that requires hospitals and community health centers in the state to implement electronic health record systems by Oct. 1, 2015 -- have also been moving to improve the delivery of better health care through the use of EHR.

To implement these programs, the healthcare industry seems poised to increase the adoption of EHR and electronic transmission standards to promote accuracy, transparency, and processing speed across disparate information systems. Today, they have a smorgasbord of health information technologies available to help them build a far better health system.

There are, in fact, too many standards and too many organizations writing them. There are the standards that support the systems we have in place today as well as the XML/Web-based standards that support newer web-centric systems and healthcare information exchanges.

While creating EHR data is an important first goal, an EHR is much more valuable if it can be summarized, moved, and shared. This and future posts will address EHR in that broader context. These discussions will address many technical, clinical, economic, political and managerial issues of electronic health record systems.

Perhaps the most difficult challenge is to bind the standards to structured vocabularies to ensure that there is the transfer of unambiguous knowledge of the meaning of the data among cooperating systems.

Before I get specific (sometimes talking about tools to facilitate implementation of these systems), I want to point out that building technology on U.S. standards alone would leave us with essentially a non-standard EHR platform. Remember that other countries, for example Canada and England, have single-payer health systems, while the U.S. does not.

A look back to before the PC, the Internet and all that

And, before looking ahead to how Electronic Health Records of the future will likely work, I thought I'd also display a pulmonary function test report produced at Yale New Haven Hospital (YNHH) several decades ago. While its underlying data (held in a PDP-8 computer) could be transmitted electronically -- analog modem to analog modem -- over a 128 bits/sec telephone line connection, these paper reports were generally sent from the laboratory where they were generated to the office of a YNHH physician via inter-office mail or to an outside physician via the U.S. Postal Service.



Computer-generated paper report: This photograph was produced by a Polaroid instant camera that was suspended over the front of a cathode ray tube (CRT). The CRT, along with a teletypewriter, provided human-readable output for the PDP-8.

In subsequent posts, I'll include talk about safe wireless practices, Web services and other seemingly off-topic subjects, because they too are important parts of the EHR story.

Interoperability will be among these topics. The linking of vital information as patients receive care from a fragmented healthcare system is a problem that has consistently plagued interoperability efforts in healthcare. The privacy, technical, and policy issues involved need to be addressed in order to effectively share information across multiple organizations. Making the information available will help to prevent drug interactions and adverse events, avoid medical errors, and help inform decision making for the patient and clinician. It will also enable the support of public health efforts, improvements in research, better physician and organizational performance and benchmarking, and greater empowerment of patients and families as active participants in their own healthcare, among other benefits.

In discussing these issues, I will sometimes cite my earlier writing on financial, legal and organizational issues that appears in articles available through links provided in my bibliography at the bottom of this page.

Finally, this might be a good time to introduce a few technical terms: HL7, HL7 mapping, HIPAA, etc. They're central to the discussion of IT health records management.

Health Level 7 (HL7)

HL7 refers to both a standards organization and the set of healthcare messaging standards that it creates. Founded in 1987 to create a set of standards for hospital information systems (HIS), HL7 has expanded its reach to the creation of international standards that extend beyond hospitals to address clinical and administrative data in healthcare domains such as pharmaceutical, medical device, and insurance transactions. A large number of countries have already mandated the use of HL7 for the transmission of healthcare data, and there is an expectation that HL7 will become part of the United States' Health Insurance Portability and Accountability Act (HIPAA) in the future.

For the large number of international healthcare organizations that are embracing the electronic transmission of healthcare data, there remain some formidable challenges. Though some compliance regulations specify the newer, XML-based HL7 v3.x, there are many jurisdictions that still need to update their legacy systems to handle this format, and many that even have multiple disparate data formats in the same system.

In the US, for example, many legacy HISs employ HL7 EDI messages alongside HIPAA X12N messages. Though these formats have quite a lot in common, syntactically speaking, they are by no means interoperable and must be mapped on-the-fly to create a dynamic workflow for managing healthcare transactions. Of course, the introduction of the XML-based HL7 v3.x adds EDI/XML mapping to the complication of mapping data from EDI to EDI.
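To see what those legacy EDI messages look like at the byte level, and why mapping tools earn their keep, here is a small Java sketch that pulls a few fields out of a made-up HL7 v2.x PID (patient identification) segment; real interfaces would of course use a full parser rather than string splitting.

    public class Hl7SegmentSketch {
        public static void main(String[] args) {
            // A made-up HL7 v2.x PID segment: fields are separated by '|'
            // and components within a field by '^'.
            String pid = "PID|1||123456^^^Hospital^MR||Smith^John||19700101|M";

            String[] fields = pid.split("\\|");       // index matches the PID field number
            String[] name = fields[5].split("\\^");   // PID-5: patient name

            System.out.println("Patient ID:  " + fields[3].split("\\^")[0]);  // PID-3
            System.out.println("Family name: " + name[0]);
            System.out.println("Given name:  " + name[1]);
            System.out.println("Birth date:  " + fields[7]);                  // PID-7
            System.out.println("Sex:         " + fields[8]);                  // PID-8
        }
    }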

HL7 mapping

There are off-the-shelf, any-to-any graphical data mapping tools (e.g., Altova MapForce) that support mapping HL7 data, in its legacy EDI or newer XML-based format, to and from XML, databases, flat files, other EDI formats, and Web services. Mappings are implemented by simply importing the necessary data structures (MapForce ships with configuration files for the latest EDI standards and offers the full set of past and present HL7 standards as a free download on its Web site) and dragging lines to connect nodes. A built-in function library lets you add advanced data filters and functions to further manipulate the output data. MapForce can also facilitate the automation of your HL7 transaction workflow through code generation in Java, C#, or C++ and an accessible command-line interface. Additional support for mapping HL7 data to and from Web services gives healthcare organizations the ability to meet new technology challenges and changing enterprise infrastructures as they unfold within internal and external provider domains.