Wednesday, February 24, 2010

Business Process Management and Service Oriented Architecture

Business Process Management (BPM) is used to model, simulate, automate, manage, and monitor business processes, in order to align day-to-day operations with changing business priorities.

With BPM, workflows (both human and automated) are determined in real-time by events and/or outcomes within the process, and effective knowledge transfer is made possible as processes become well-documented business artifacts on which staff members can be trained.

To enjoy the full benefits of BPM, processes must integrate with existing applications and systems (e.g., hospital EHR and EMR systems, to name a couple of areas currently being funded by the Obama administration in the U.S.). This is where Service Oriented Architecture (SOA) - the subject of earlier posts - comes in. BPM and SOA are a natural match. There are links to a few of my early articles on SOA in the bibliography at the bottom of this blog.

In preparation for my next post to this blog, I'd like to cite the following book:

This book shows the reader how to fill the semantic gap between the process model and the applications:

Modeling business processes for SOA and developing end-to-end IT support for them have become top IT priorities. The SOA approach is based on services and on processes: processes focus on the composition of services, and in that sense services become process activities.

Experience has shown that implementing and optimizing processes are the most important factors in the success of SOA projects. SOA is valuable to businesses precisely because it enables process optimization. To optimize processes, we need to know which processes are relevant, and we have to understand them – something that cannot be done without business process modeling. There is, however, a major problem with this approach: a semantic gap between the process model and the applications.

This book will show you how to fill this gap. It describes a pragmatic approach to business process modeling using the Business Process Modeling Notation (BPMN) and the automatic mapping of BPMN to the Business Process Execution Language (BPEL), the de facto standard for executing business processes in SOA. The book also covers related technologies such as Business Rules Management and Business Activity Monitoring, which play a pivotal role in achieving closed-loop Business Process Management.
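To make the idea of automatic BPMN-to-BPEL mapping concrete, here is a much-simplified, hypothetical sketch: a linear sequence of BPMN service-task names is translated into a skeletal BPEL process (a <sequence> of <invoke> activities). The class and task names are illustrative; a real mapping, as covered in the book, also handles gateways, events, variables, and partner links.

```java
import java.util.List;

// Hypothetical sketch: map a linear chain of BPMN service tasks to a
// skeletal BPEL process. Real BPMN-to-BPEL tooling does far more; this
// shows only the core idea that each service task becomes an <invoke>.
public class BpmnToBpel {

    public static String toBpel(String processName, List<String> serviceTasks) {
        StringBuilder sb = new StringBuilder();
        sb.append("<process name=\"").append(processName)
          .append("\" xmlns=\"http://docs.oasis-open.org/wsbpel/2.0/process/executable\">\n");
        sb.append("  <sequence>\n");
        for (String task : serviceTasks) {
            // Each BPMN service task becomes a BPEL invoke of the matching operation.
            sb.append("    <invoke name=\"").append(task)
              .append("\" operation=\"").append(task).append("\"/>\n");
        }
        sb.append("  </sequence>\n");
        sb.append("</process>\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(toBpel("OrderHandling",
                List.of("ReceiveOrder", "CheckCredit", "ShipOrder")));
    }
}
```

The point of the exercise is the one the book makes: once the model and the executable process share a mapping, the process model stops being documentation and becomes the application.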


Wednesday, February 10, 2010

Interoperability Between Oracle and Microsoft Technologies, Using RESTful Web Services - BPEL

A guide to developing REST Web services using the Jersey framework and Oracle JDeveloper 11g follows.

RESTful Web services are the latest revolution in the development of Web applications and distributed programming for integrating a great number of enterprise applications running on different platforms. Representational state transfer (REST) is an architectural style for defining and addressing Web resources without the heavy SOAP protocol stack (the WS-* stack). From the REST perspective, every Web application is a service; thus it is easy to develop Web services with basic Web technologies such as HTTP, the URI naming standard, and XML and JSON parsers.
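A minimal sketch of that idea, using only the JDK's built-in HTTP server rather than the Jersey/WebLogic stack the article uses: a resource is identified by a URI, served over plain HTTP, and represented as JSON. The `/patients/42` resource and its payload are invented for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative REST sketch: a Web resource addressed by a URI, returned
// as JSON over plain HTTP - no SOAP envelope, no WS-* stack.
public class RestSketch {

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/patients/42", exchange -> {
            byte[] body = "{\"id\":42,\"name\":\"Jane Doe\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }

    // Any HTTP client can consume the resource - here, plain HttpURLConnection.
    public static String get(String url) throws IOException {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = con.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = start(0); // port 0 = pick a free ephemeral port
        try {
            int port = server.getAddress().getPort();
            System.out.println(get("http://localhost:" + port + "/patients/42"));
        } finally {
            server.stop(0);
        }
    }
}
```

Because the contract is just a URI plus a media type, the same resource is equally consumable from a .NET client, which is the interoperability point of the article below.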

For a detailed account of how to create RESTful Web services by using Oracle technologies such as Oracle JDeveloper 11g, the Jersey framework (the reference implementation of the JAX-RS [JSR 311] specification), and Oracle WebLogic Server as well as how to consume the Web service by using Microsoft technologies such as Visual Studio .NET 2008 and the .NET 3.5 framework, click here.

Building a Web Services Network with BPEL - Caveat Emptor

Buoyed by maturing Web service standards, more and more organizations are using Web services in a collaborative environment. BPEL is fast becoming the platform for orchestrating these Web services for inter-enterprise collaboration. As discussed in earlier posts in this blog, BPEL offers the compelling benefits of a standards-based approach and loosely-coupled process integration to companies building an online marketplace or collaborative network.

Yet the exciting new capabilities offered by Web services carry some risk. In many cases, partner relationships break down or integration costs skyrocket if certain technical and administrative challenges are not addressed at design time:

* Partners must agree well in advance to conduct business according to specific criteria. Transport protocol, purpose of the interaction, message format, and business constraints have to be communicated clearly.

* Joining the network has to be an easy process; collaborative networks become successful mainly through growth.

* Users must easily find business services at runtime, or the promise of services-oriented architecture (SOA) is largely lost. (Service repositories are useful for this purpose.) If developers cannot readily find and reuse services, the services essentially don't exist.

* Partners should be able to monitor Web services in real time. End users should be able to track the progress of a specific order, and trading partners should be able to diagnose a specific bottleneck within a business process.

These challenges are exacerbated when a collaborative network operates in a hosted environment. In that model, each partner exposes the functionality of its legacy applications as a Web service, which is published to a centralized repository. The host is responsible for orchestrating the complex business processes, which in turn leverage the partner Web services.
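The hosted model above can be sketched in a few lines. This is a deliberately toy version - the registry is a map and each "service" is a plain function - whereas a real network would use a UDDI-style repository and a BPEL engine; the partner names are invented.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of the hosted collaborative network: partners publish
// services to a central registry, and the host orchestrates a process
// by invoking each published partner service in turn.
public class HostedNetwork {

    private final Map<String, Function<String, String>> registry = new LinkedHashMap<>();

    // Joining the network is just publishing a named service endpoint.
    public void publish(String name, Function<String, String> service) {
        registry.put(name, service);
    }

    // The host orchestrates: invoke each partner service for an order
    // and collect a trace that partners could use for monitoring.
    public String orchestrate(String orderId) {
        StringBuilder trace = new StringBuilder();
        for (Map.Entry<String, Function<String, String>> e : registry.entrySet()) {
            trace.append(e.getKey()).append(": ")
                 .append(e.getValue().apply(orderId)).append("\n");
        }
        return trace.toString();
    }

    public static void main(String[] args) {
        HostedNetwork net = new HostedNetwork();
        net.publish("SupplierService", id -> "reserved stock for " + id);
        net.publish("ShipperService",  id -> "scheduled pickup for " + id);
        System.out.print(net.orchestrate("PO-1001"));
    }
}
```

Even at this scale the design choices from the bullet list show up: publishing must be trivial (one call), services must be discoverable by name, and the orchestration trace is the raw material for monitoring.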

Monday, February 1, 2010

Front-end Web Application for use in the Human Workflow used to Disambiguate IDs in an Electronic Healthcare Record (or other) Automated System

From December 14, 2009 post

Disambiguation is a process through which multiple potential identification matches are further parsed until the patient can be matched with his or her data with sufficient certainty to allow for the delivery of a health service with reasonable confidence. The complexity of disambiguation varies according to factors such as the number of potential matches and the type of information available for further analysis. When sufficient digital data are not available to differentiate potential matches, automated disambiguation may not be possible, and human involvement may be required.

Disambiguation entails implementing significant new workflows and may require substantial time and resources. When human involvement is required, many of the potential benefits of automation are lost. For example, at the point of care, disambiguation is often done by asking the patient further questions about personal characteristics and/or health care history. In some situations, disambiguation may not be possible at all, as when the patient is not present and the information needed to facilitate matching is not accessible.

From December 7, 2009 post:

Disambiguation of IDs is the process of resolving multiple potential matches into a match with the correct person. In general, statistical matching algorithms are likely to require substantially more frequent disambiguation than a system that uses theoretically perfect universal IDs would; often, disambiguation is done by human intervention. Such disambiguation imposes significant costs and operational inefficiencies, particularly if, for example, a physician must resolve the ambiguities.
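The automation/human-fallback boundary can be sketched as follows. This is a hypothetical, much-simplified scorer - real master-patient-index matching uses probabilistic record-linkage algorithms, and the fields, weights, and thresholds here are invented for illustration. The key point is the decision rule: accept a match only when the best candidate is both strong and clearly ahead of the runner-up; otherwise route the case to the human workflow.

```java
import java.util.List;

// Hypothetical sketch of automated ID disambiguation with a human fallback:
// score each candidate against the query, accept only an unambiguous winner.
public class Disambiguator {

    static final class Candidate {
        final String id, name, birthDate, zip;
        Candidate(String id, String name, String birthDate, String zip) {
            this.id = id; this.name = name; this.birthDate = birthDate; this.zip = zip;
        }
    }

    // Illustrative field-equality score; real systems weight fields probabilistically.
    static int score(Candidate query, Candidate c) {
        int s = 0;
        if (query.name.equalsIgnoreCase(c.name)) s += 2;
        if (query.birthDate.equals(c.birthDate)) s += 2;
        if (query.zip.equals(c.zip)) s += 1;
        return s;
    }

    /** Returns the matched ID, or null when a human must disambiguate. */
    public static String match(Candidate query, List<Candidate> candidates) {
        Candidate best = null;
        int bestScore = -1, secondScore = -1;
        for (Candidate c : candidates) {
            int s = score(query, c);
            if (s > bestScore) { secondScore = bestScore; bestScore = s; best = c; }
            else if (s > secondScore) { secondScore = s; }
        }
        // Accept only a strong, clearly separated winner (invented thresholds).
        boolean confident = bestScore >= 4 && bestScore - secondScore >= 2;
        return confident ? best.id : null;
    }

    public static void main(String[] args) {
        Candidate query = new Candidate(null, "Jane Doe", "1970-01-01", "94105");
        List<Candidate> db = List.of(
            new Candidate("P1", "Jane Doe", "1970-01-01", "94105"),
            new Candidate("P2", "Jane Doe", "1970-01-01", "94105"));
        // Two indistinguishable candidates: automation cannot decide.
        System.out.println(match(query, db) == null
            ? "ambiguous -> human workflow" : "matched");
    }
}
```

The null return is exactly where the human workflow discussed below attaches: every such case is a task that costs the time of a clerk or, worse, a physician.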

Note 1: Many of the efficiency and safety benefits theoretically possible with health information technology (HIT) systems depend on eliminating such human involvement and its concomitant slowness, expense, and propensity for error.

Note 2: What follows applies to IDs in general, even though I’ve chosen the healthcare industry for much of this discussion.

When a business process can't be completed by automation alone, it incorporates a human workflow. Manual disambiguation of an uncertain ID is one such human task. The form shown in the figure below illustrates a Web app, created with Visual Studio 2010 [Beta 2], that a user employs to carry out part of a human workflow.

Please note: This post is presented in early draft form.

Communication between the client (the user interface shown in the figure above) and the application services is performed through proxy classes that run in the client and represent the application service. In practice, a Web reference is a generated proxy class that locally represents the exposed functionality of an XML Web service. The proxy class defines methods that represent the actual methods exposed by the service. When your application creates an instance of the proxy class, it can call the XML Web service methods as if the service were a locally available component.

At design time, the proxy class enables you to use statement completion for the XML Web service methods. At run time, a call to a method of the proxy object is processed and encoded as a SOAP request message. If the XML Web service does not support SOAP, the proxy class uses HTTP GET and POST. The message is then sent to the target Web Service for processing. If the service description defines a response message, the proxy object processes this message and returns a response to your application.
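The mechanism described above can be sketched with a dynamic proxy. This is a stand-in for a Visual Studio-generated Web reference, not how one is actually generated: the interface and service names are invented, and the handler merely formats the call where a real proxy would encode it as a SOAP request, POST it to the endpoint, and decode the response message.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Sketch of the Web-reference proxy idea using java.lang.reflect.Proxy:
// the client programs against a local interface, and the handler stands in
// for serializing the call to SOAP and performing the HTTP exchange.
public class ServiceProxyDemo {

    // The local interface mirrors the operations the Web service exposes.
    interface PatientService {
        String lookup(String patientId);
    }

    public static PatientService createProxy() {
        InvocationHandler handler = (Object proxy, Method method, Object[] args) -> {
            // A generated proxy would build a SOAP envelope here, send it to
            // the service endpoint, and decode the response message.
            return "response to " + method.getName() + "(" + args[0] + ")";
        };
        return (PatientService) Proxy.newProxyInstance(
                PatientService.class.getClassLoader(),
                new Class<?>[] { PatientService.class },
                handler);
    }

    public static void main(String[] args) {
        PatientService service = createProxy();
        // The client calls the service as if it were a local component.
        System.out.println(service.lookup("42"));
    }
}
```

The design benefit is the one the post names: because the client sees only the interface, it gets design-time statement completion and stays insulated from the transport details.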

Note: To make XML Web services outside a firewall available to the Web browser, when creating the Web reference in Visual Studio, you must explicitly specify the address and port of your network's proxy server.


Click here for more on Microsoft Visual Studio 2010

Click here for more on Oracle BPEL and Human Workflow

Note: If patients cannot be unambiguously identified via a computer-based process, machine-level interoperability will be hampered significantly.