Tuesday, June 30, 2009

Electronic Health Records (EHR) & the Need for Cross-Browser Compatibility

Cross-Browser Problems/Solutions

Back when I did software development work (secure, data-intensive, interactive Web applications -- .NET and J2EE), I would finish debugging a project on my PC and then deploy the app to a test server, from which it would be downloaded by every manner of user. Most were running Internet Explorer, some Firefox or Safari, and a few some other browser. The problem was that some of these users, those running a browser different from mine, discovered that the app didn't render as expected.

Today, unfortunately, writing dynamic web applications is still a tedious and error-prone process; some developers spend up to 75% of their time working around subtle incompatibilities between web browsers and platforms.

With the widespread adoption of Electronic Health Records (EHR) imminent, the likelihood that a given Web app will be downloaded by someone using a browser other than the one(s) used to develop and test the app will only increase.

But help may be on the way: cross-browser Web development may become easier and more efficient with the upcoming version 2 of Google Web Toolkit (GWT). Google claims that, in most cases, GWT applications will automatically support Internet Explorer, Firefox, Mozilla, Safari, and Opera with no browser detection or special-casing within your code.

The figure below shows the GWT implementation of an app that would otherwise not be expected to render exactly the same in Internet Explorer and Firefox.

Note: A few examples of cross-browser incompatibility appear in the figures of my 2005 article ASP.NET & Multiplatform Environments (a link to it is provided in the bibliography at the bottom of this blog).

GWT is an interesting platform that lets you write Java code that the GWT compiler automatically converts into JavaScript. This provides a simple way to create Ajax apps in Java. Google has also created a plug-in for Eclipse that makes debugging and deploying very easy.

With Ajax, Web applications can retrieve data from the server asynchronously, in the background, without interfering with the display and behavior of the existing page. The use of Ajax has led to better-quality Web services [see my June 28 post] thanks to this asynchronous mode.

The core language of the World Wide Web

Until recently, most simple browser applications were created in HTML 4.01, which could be enhanced with JavaScript and CSS. For Web applications as rich as desktop applications, developers had to embed Adobe Flash or, more recently, Microsoft Silverlight apps in their HTML code. But a new version of HTML, HTML 5, the fifth major revision of the core language of the World Wide Web, is bringing this same rich functionality to the browser without the need for Flash or Silverlight.

A good summary of the major features of HTML 5 is in Tim O'Reilly's blog post about it.


However, there is still a problem. Currently, HTML 5-compliant browsers account for only about 60% of the market, and quite a few enterprises are still on Internet Explorer 6. So Flash, which has the distinct advantage of working on older browsers and has about 95% market penetration, is still very popular. However, some expect that HTML 5 will be as popular as Flash by 2011.

Even though HTML 5 is not yet a finished standard, most of it is already supported in the major browsers: Firefox 3, Internet Explorer 8, and Safari 4. This means that you can create an HTML 5 application right now!

Note: Flex, like ActiveX, Silverlight, and Java applets before it, is, in a sense, a replacement for the browser. But each replaces the web browser in a proprietary way.

HTML 5 is right around the corner

While the entire HTML 5 standard is years away from final adoption, many of its powerful features are available in browsers today. In fact, five key next-generation features are already available in the latest (sometimes experimental) builds of Firefox, Opera, Safari, and Google Chrome. Microsoft has announced that it will support HTML 5, but as Vic Gundotra, VP of Engineering at Google, has noted, "We eagerly await evidence of that."

GWT meets HTML 5

Google Web Toolkit is a really good way to "easily" design a flashy HTML/CSS/JavaScript application that works as expected across different browsers and operating systems.


Sunday, June 28, 2009

Technical Interoperability of Disparate Systems - An Electronic Health Record (EHR)-Centric System Implemented with a Service-Oriented Architecture

The independent organizations (a.k.a. silos) shown in Figure 1 typically implement their business processes on different computer systems (Unix, Windows, mainframes), on different platforms (J2EE, .NET) and with different vocabularies. Yet, they all need to communicate in a way that serves, in this case, a trauma victim at the time of his or her emergency, researchers who will later study cohorts of this trauma victim, and sundry others (most of whom are not shown in the simplified model).

This post will discuss an Electronic Health Record (EHR)-centric system implemented with a Service-Oriented Architecture (SOA) -- the business industry’s de facto standard for interoperability. The functions of this system are data collection, management and distribution for both the immediate situation and long term health care experiences.

I’ll address some of the challenges (and opportunities) of technical interoperability, holding off ‘til later posts any discussion of semantic interoperability.

Note: This rather lengthy post is not, and is not meant to be, a comprehensive treatise on SOA. But I have included a number of links to references for anyone looking for a more detailed account of this topic.

Figure 1

Figure 2 shows a traditional implementation of the system shown in the use case drawn in Figure 1. When a change -- virtually any change -- is made in one part of this interconnected system, the overall system usually stops working until system-wide adjustments are made. In short, the figure below shows a system that’s “hard wired.”

Note: For simplicity, these figures and this discussion omit a number of real-world details: for example, the voice communication often employed by police and EMS teams.

Figure 2

To remedy this problem, a new architecture, the Service-Oriented Architecture (SOA), was devised. An SOA implementation of the system shown in the use case is outlined in Figure 3. At first glance, the SOA-based system might seem more complicated than the system shown in Figure 2. But, as the following discussion will attempt to show, it isn’t.

Figure 3

Note: The major part of this post will be based on a Web services implementation of SOA. However, before concluding this post, I’ll address the oft-heard view that SOA is now “dead.” SOA is indeed alive and kicking in many organizations.

The system in Figure 2, if fleshed out, would move toward becoming the fully connected system shown below (in practice, very few systems are fully connected). In such a system of computer applications, when one node is changed, all other nodes that connect to it would need a new interface to the changed node.

As shown, in a fully connected network with n nodes, there are n x (n-1) / 2 direct paths. That means 15 connections for 6 nodes, 45 connections for 10 nodes, etc.
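The arithmetic behind those numbers is simple enough to check directly; here is a small Java sketch of the formula:

```java
// Direct point-to-point paths in a fully connected network of n nodes:
// each of the n nodes links to the other n - 1, and each link is shared
// by two nodes, giving n * (n - 1) / 2.
public class ConnectionCount {

    static int directPaths(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(directPaths(6));   // 15 connections for 6 nodes
        System.out.println(directPaths(10));  // 45 connections for 10 nodes
    }
}
```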

In contrast, each node in the system in Figure 3 has only one connection – that to a bus, as represented in the figure below.

Service-Oriented Architecture

SOA relies on services exposing their functionality via interfaces that other applications and services can read to understand how to utilize those services.

This style of architecture promotes reuse at the macro (service) level rather than the micro (class) level. It can also simplify interconnection to – and usage of – existing IT (legacy) assets.

Designers can implement SOA using a wide range of technologies, including SOAP, REST, DCOM, CORBA, and Web services. (I gave a brief overview of DCOM, CORBA, and Web services in my June 16 post and will discuss REST below.) The key is independent services with defined interfaces that can be called to perform their tasks in a standard way, without the service having foreknowledge of the calling application, and without the application having or needing knowledge of how the service actually performs its tasks.
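To make the idea of a defined interface concrete, here is a minimal Java sketch. The service name and its behavior are hypothetical, invented for illustration: the point is that the caller is written against the interface alone and knows nothing about how, or where, the service does its work.

```java
// A hypothetical patient-lookup service. The caller depends only on the
// interface, not on any particular implementation.
interface PatientLookupService {
    String findPatientName(String patientId);
}

// One possible implementation; the caller never needs to know it exists.
class InMemoryPatientLookup implements PatientLookupService {
    public String findPatientName(String patientId) {
        return "EMS-1".equals(patientId) ? "John Doe" : "unknown";
    }
}

public class SoaSketch {
    // Client code written purely against the interface.
    static String describe(PatientLookupService service, String id) {
        return "Patient " + id + " is " + service.findPatientName(id);
    }

    public static void main(String[] args) {
        PatientLookupService service = new InMemoryPatientLookup();
        System.out.println(describe(service, "EMS-1")); // Patient EMS-1 is John Doe
    }
}
```

Swapping in a different implementation (say, one backed by a remote Web service) would require no change at all to the client code.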

Albeit in the distant past, I’ve used the Oracle SOA Suite to federate a unified system out of the disparate systems shown in the figures.

This suite, which includes JDeveloper, a robust Java IDE, is by no means the only solution; there are others, both proprietary and open source.

However, the Oracle SOA Suite is a complete set of service infrastructure components for creating, deploying, and managing services. It enables services to be created, managed, and orchestrated into composite applications and business processes. Additionally, you can adopt it incrementally, on a project-by-project basis, and still benefit from the common security, management, deployment architecture, and development tools that you get out of the box.

This Suite is a standards-based technology suite that consists of the following:

* Oracle BPEL Process Manager to orchestrate services into business processes
* ESB to connect existing IT systems and business partners as a set of services
* Oracle Business Rules for dynamic decisions at run time that can be managed by business users or business analysts
* Oracle Application Server Integration Business Activity Monitoring to monitor services and disparate events and provide real-time visibility into the state of the enterprise, business processes, people, and systems
* Oracle Web Services Manager to secure and manage authentication, authorization, and encryption policies on services, separately from your service logic
* UDDI registry to discover and manage the life cycle of Web services
* Oracle Application Server to provide a complete Java 2 Platform, Enterprise Edition (J2EE) environment for your J2EE applications

For an overview of the soon-to-be-released Version 11, see


Business Process Execution Language (BPEL)

One of the key standards accelerating the adoption of SOA is Business Process Execution Language (BPEL) for Web Services. BPEL enables organizations to automate their business processes by orchestrating services (thus enabling developers to build end-to-end business processes spanning applications, systems, and people in a standard way). Existing functionality is exposed as services. New applications are composed using services. Services are reused across different applications. Services everywhere! For an account of Orchestration versus Choreography, see

BPEL has emerged as the standard for process orchestration.
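As a rough illustration of what such an orchestration looks like on the wire, here is a minimal, hypothetical BPEL fragment. The partner-link and operation names are invented for this example and do not come from any real deployment:

```xml
<!-- Hypothetical sketch: receive a trauma alert, invoke a (fictitious)
     EHR lookup service, and reply to the caller. -->
<process name="TraumaAlert"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <receive partnerLink="client" operation="reportTrauma" createInstance="yes"/>
    <invoke  partnerLink="ehrService" operation="fetchPatientRecord"/>
    <reply   partnerLink="client" operation="reportTrauma"/>
  </sequence>
</process>
```

Each `partnerLink` would be bound to a concrete Web service via WSDL; the process itself stays a declarative description of the flow.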

In the next figure, I show two palettes of drag-and-drop components available in version 10.3 of the SOA Suite for the development of a BPEL process. Version 11 will be available shortly.

Introduction to the JDeveloper BPEL Designer


Enterprise Service Bus

An Enterprise Service Bus (ESB) is an architectural pattern, not a software product. Different software products can form an ESB. In some cases, companies use multiple products in different areas, leveraging specific functionality to meet their unique requirements. These different products can be federated together as the realization of the ESB pattern.
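The pattern itself is independent of any product. The toy Java sketch below (an in-memory stand-in, not a real ESB) shows the essential idea: each service registers a single endpoint on the bus, and callers address services by logical name rather than holding point-to-point connections.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy, in-memory stand-in for an ESB: services register one endpoint
// each, and callers address them by logical name instead of maintaining
// n * (n - 1) / 2 point-to-point connections.
public class MiniBus {

    private final Map<String, Function<String, String>> endpoints = new HashMap<>();

    void register(String serviceName, Function<String, String> handler) {
        endpoints.put(serviceName, handler);
    }

    String send(String serviceName, String message) {
        Function<String, String> handler = endpoints.get(serviceName);
        if (handler == null) {
            throw new IllegalArgumentException("No such service: " + serviceName);
        }
        return handler.apply(message);
    }

    public static void main(String[] args) {
        MiniBus bus = new MiniBus();
        // Hypothetical services from the use case, one bus connection apiece.
        bus.register("ems", msg -> "EMS dispatched for: " + msg);
        bus.register("ehr", msg -> "Record retrieved for: " + msg);

        System.out.println(bus.send("ems", "trauma at Main St"));
        System.out.println(bus.send("ehr", "patient 12345"));
    }
}
```

Adding or replacing a service touches only that service's registration; no other node needs a new interface.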

In the next figure, I show two palettes of drag-and-drop components available in version 10.3 of the SOA Suite for the development of an ESB. As I mentioned earlier, Version 11 will be available shortly.

Introduction to the JDeveloper ESB Designer


Figure 3 introduced Business Activity Monitoring (BAM) and Business Rules to the architecture.

Business Activity Monitoring – BAM satisfies a growing need to enable health care executives, and more specifically operations managers, to improve their decision-making by first getting a real-time view of the business events occurring in their enterprise, and then using the derived intelligence to analyze and improve the efficiency of their business processes.

Understanding Business Activity Monitoring in Oracle SOA Suite

Business Rules – Business Rules is one of the newer technologies (typically run from an application server) and is designed to make applications more agile. A business analyst can change business policies expressed as rules quickly and safely with little or no assistance from a programmer.

Using Business Rules to Define Decision Points in Oracle SOA Suite

Part 1.

Part 2.

For detailed coverage of the Oracle Service Bus, BPEL Process Manager, Web Service Manager, Rules, Human Workflow, and Business Activity Monitoring, see

Oracle SOA Suite Developer's Guide


Getting Started With Oracle SOA Suite 11g R1 – A Hands-On Tutorial

When designing an SOA solution, it's not always clear whether you should use a Web services BPEL process or an ESB mediation flow (or both).


Building Event-Driven Architecture with an Enterprise Service Bus


Software frameworks

You should keep in mind that interoperability issues between platforms can arise when switching from one protocol to another and one software framework to another. Some examples include SOAP, REST, the .Net Framework, Enterprise Java Beans (EJB), and Java Messaging Service (JMS).

.Net Web services running over HTTP can be called in three different ways: an HTTP GET operation, an HTTP POST operation, and SOAP. The GET and POST operations are useful if you need to call a Web service quickly and no SOAP client is readily available. You can use REST to perform GET, POST, PUT, and DELETE operations over HTTP in, say, a Perl script, in which you can specify SQL queries and simple message queues.

For a discussion of SOAP/REST in the .NET (Visual Studio 2008 SP1) ecosystem, see




Often, services within an organization’s firewall are REST-based, while 3rd party partners use a mix of SOAP and HTTPS protocols to access their public services. By using REST for internal services, organizations are avoiding some of the overhead that the WS-* standards introduce.

If a SOAP client is available, here's how to make a simple choice between REST and SOAP: if the application is resource-based, choose REST; if it is activity-based, opt for SOAP. Under REST, a client might request that several operations be performed on a series of resources over HTTP. For SOAP-based requests, only one invoke operation is needed for each activity-oriented service that a client might request be performed.

REST requests do not depend on WSDL as SOAP requests do. Yet, REST and SOAP can co-exist as requests from a composite Web service application to an external Web service.
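One way to picture the resource-based style is as a uniform mapping from HTTP verbs onto operations against a resource, rather than one named operation per activity as in SOAP. The Java sketch below is a toy dispatcher, not a real HTTP stack, and the resource paths are invented for illustration:

```java
import java.util.Map;

// Toy illustration of the REST style: the four HTTP verbs map uniformly
// onto CRUD operations against any resource, so no per-activity
// operation names are needed.
public class RestSketch {

    private static final Map<String, String> VERB_TO_ACTION = Map.of(
            "GET", "read",
            "POST", "create",
            "PUT", "update",
            "DELETE", "delete");

    static String dispatch(String verb, String resource) {
        String action = VERB_TO_ACTION.get(verb);
        if (action == null) {
            throw new IllegalArgumentException("Unsupported verb: " + verb);
        }
        return action + " " + resource;
    }

    public static void main(String[] args) {
        System.out.println(dispatch("GET", "/patients/12345"));    // read /patients/12345
        System.out.println(dispatch("PUT", "/patients/12345"));    // update /patients/12345
    }
}
```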

When to use REST based web services?


When Web services are outside the control of the organization (as in the figures at the top of this post), you need to ensure that they can interoperate externally with one another with respect to shared semantics and contractual obligations. Semantic misunderstandings and contractual loopholes contribute to interoperability problems between external enterprise Web services. Future posts will focus on these aspects of interoperability.

For presentations that include the pros and cons of different protocols (SOAP and HTTP) and software frameworks (.NET and J2EE), see




The oft-heard view that SOA is now “dead”

Once thought to be the savior of IT, SOA instead turned into a great failed experiment -- at least for many organizations. SOA was supposed to reduce costs and increase agility on a massive scale. In many situations, however, SOA has failed to deliver its promised benefits.

Yet, although SOA is understandably pronounced dead by many, the need for a service-oriented architecture is stronger than ever.

Adopting SOA (i.e., application re-architecture) requires disruption to the status quo. SOA is not simply a matter of deploying new technology and building service interfaces to existing applications; it requires redesign of the application portfolio. And it requires a massive shift in the way IT operates. The small select group of organizations that has seen spectacular gains from SOA did so by treating it as an agent of transformation. In each of these success stories, SOA was just one aspect of the transformation effort. And here’s the secret to success: SOA needs to be part of something bigger. If it isn’t, then you need to ask yourself why you’ve been doing it.

The latest shiny new technology will not make things better. Incremental integration projects will not lead to significantly reduced costs and increased agility. If you want spectacular gains, then you need to make a spectacular commitment to change.

Note: As of 2008, increasing numbers of third-party software companies offer software services for a fee. In the future, SOA systems may consist of such third-party services combined with others created in-house.

More on this can be found in “WOA: Putting the Web Back in Web Services.”


Also, here’s a link to one of my earlier articles on SOA

“The Economic, Cultural and Governance Issues of SOA”


Saturday, June 20, 2009

Interoperability of Disparate Systems ...

A system that includes new and old technologies -- partially represented by the use case diagram shown below -- will be the subject of upcoming posts. Interoperability -- technical and semantic -- using open, standard protocols will be my focus.


For anyone not familiar with use case diagrams, here's a link to an easy-to-follow introduction to using them and another link to the Wikipedia entry for use case diagrams.

The discussion of technical interoperability will include, but not be limited to, whether to employ Web services with BPEL or ESB (or both) and business activity monitoring (BAM) tools.

The discussion of semantic interoperability will include, but not be limited to, whether to employ HL7 version 3 messages or CDA documents (or both).

Other topics such as service level agreements (SLA), key performance indicators (KPI), telemetry, and telemedicine will be included.

Friday, June 19, 2009

National Politics, The American Medical Association, Healthcare and IT Funding

In Wednesday's post, I expressed the belief that national politics could have an impact upon the architecture of the upcoming IT-based EHR system. These politics cannot be ignored, I opined, when it comes to the distribution of funds under the HITECH Act (the health IT component of the American Recovery & Reinvestment Act). This rationale is rooted in history.

Twenty years ago this week, I.F. Stone died at the age of eighty-one. He was the premier investigative reporter of the twentieth century, a self-described radical journalist.

The late ABC news anchor, Peter Jennings, paid tribute to I.F. Stone on his evening newscast the day after his death, June 18, 1989.

People may not remember that Meet the Press was originally a radio program before it became a TV program. And when Meet the Press started in the mid-’40s, I.F. Stone was one of the regular panelists on the radio program. He was also one of the regular panelists on the TV program.

He was a very well-known journalist, the sort of person you would expect to see on one of today’s Sunday chat shows.

In December 1949, on Meet the Press, the person he was interviewing was a guy called Dr. Morris Fishbein. Now, in the ’40s, Morris Fishbein was the most famous doctor in America. He was the editor of The Journal of the American Medical Association (an article from which I linked in my May 13 post), and he was the person that the medical and pharmaceutical industries put up to oppose socialized medicine or national health insurance. He was the person who coined the phrase “socialized medicine” as a means of discrediting national health insurance.

Fishbein had described the proposals for national health insurance as a step on the road to communism. And so, Stone said to him, “Dr. Fishbein, given that President Truman has already spoken out in favor of national health insurance, do you think that that makes him a dangerous communist or just a deluded fellow traveler?”

I.F. Stone continued by saying that the aircraft industry, at the beginning of the Second World War, was producing about 500 planes a year. And President Roosevelt said that in order to defeat Hitler, they need to produce 500 planes a day. And basically, Stone pointed out that the aircraft industry had this huge backlog. It didn’t suit them to expand production. They wanted to keep things the way they were. They had a monopoly, just like pharmaceutical companies might have today. So Truman knew that some things are too important to be left to private enterprise, and he felt that healthcare was one of them.

But what’s interesting about this argument that Stone was having with Fishbein is two things: first, that that was the last time I.F. Stone was ever on Meet the Press, and secondly, that he wasn’t again allowed to be on national television for eighteen years.

Footnote: Earlier this week, the American Medical Association (AMA) announced that it was "letting Congress know" that it would resist a public plan for health insurance coverage.

Politically, the revelation could be a significant blow to progressive health care reform advocates, who contend that a public option is the best way to reduce costs and increase insurance coverage. The AMA has the institutional resources and the prestige to impact debates in the halls of Congress.

Wednesday, June 17, 2009

Electronic Health Records (EHR) – Interoperability of Disparate Systems – Political

The top box in the figure below (adapted from the first figure in my June 16 post) contains the label "Political," and the video in my May 13th post (repeated below) shows excerpts from a recently held hearing in the United States Senate Committee On Finance, chaired by Senator Max Baucus. The latter motivated the former. But, I've had additional reasons to think about the influence of national politics upon the architecture of the upcoming IT-based EHR system.

Montana Senator Baucus is the Senate’s point man on healthcare reform. A new article in the Montana Standard finds that Senator Baucus has received more campaign money from health and insurance industry interests than any other member of Congress. The article says, “In the past six years, nearly one-fourth of every dime raised by Baucus and his political-action committee has come from groups and individuals associated with drug companies, insurers, hospitals, medical-supply firms, health-service companies and other health professionals.”

Moreover, it’s hard for even the most casual follower of the daily news to avoid finding his or her own reason to believe that national politics will play a major role in the rollout of our EHR system.

The vast majority of the funds within the HITECH Act (the health IT component of the American Recovery & Reinvestment Act) are assigned to payments that will reward physicians and hospitals for effectively using a robust, connected EHR system. Few should doubt that how these billions of dollars are distributed will have a profound impact on the shaping of our national EHR system.

In addition to funding EHRs (also called electronic medical records, or EMRs), HITECH adds some privacy-enforcement teeth to HIPAA, which has long been criticized for loopholes allowing the release of medical record information to health care vendors for marketing purposes. Pharmaceutical companies, for instance, have frequently used prescription information from these records to target their mailings for new or alternative drugs and treatments. HITECH now mandates that individual patients’ consent be obtained before releasing any information to vendors -- or to anyone not in the immediate health care loop that includes physicians and hospitals as well as insuring and billing entities (for a scary look at how easily this loop can expand, read “Health Privacy—The Way We Live Now”). The new provisions also require affirmative disclosure of any breaches or violations of private records. However, as the final stimulus package wended its way through Congress, so many loopholes (five pages’ worth) were added that just about any group with political connections, or with loose medical affiliations, could gain access to everyone’s personal EHR just by asking for it or by paying for it.

While there are workgroups and committees of experts working diligently to shape an EHR that helps bring about a better healthcare system for the nation, the members of these groups don’t control the purse strings. Politicians do. Stay tuned.

Tuesday, June 16, 2009

Electronic Health Records (EHR) – Interoperability of Disparate Systems – Technical

As the title suggests, this post, the first in a series of continuations of my June 13 post, is about the technical aspects of interoperability. For your convenience, I’ve copied the figure below from the earlier post.

The word "interoperability" means different things to different people. To DBAs, for example, the ability of SQL Server, Oracle and DB2 to recognize each other’s commands is a sign of the interoperability of these databases. In the current post, however, I'll focus primarily on a distributed computing model that is capable of cross-platform and cross-programming language interoperability, Web services. In the process, I'll compare Web services with a couple of its predecessors, CORBA and DCOM, which are still being used widely.

Why the success of CORBA and DCOM is limited

Although CORBA and DCOM have been implemented on various platforms, the reality is that any solution built on these protocols will be dependent on a single vendor's implementation. Thus, if one were to develop a DCOM application, all participating nodes in the distributed application would have to run a flavor of Windows. In the case of CORBA, every node in the application environment would need to run the same ORB product. Now, there are cases where CORBA ORBs from different vendors do interoperate. However, that interoperability generally does not extend to higher-level services such as security and transaction management. Furthermore, any vendor-specific optimizations would be lost in this situation.

Both of these protocols depend on a closely administered environment. The odds of two random computers being able to successfully make DCOM or IIOP calls out of the box are fairly low. In addition, programmers must deal with protocol-specific message-format rules for data alignment and data types. DCOM and CORBA are both reasonable protocols for server-to-server communications, but both have severe weaknesses for client-to-server communications, especially when the client machines are scattered across the Internet.

CORBA overview

Common Object Requesting Broker Architecture (CORBA) provides standards-based infrastructure for interactions between distributed objects. It allows software components running on dissimilar hardware, hosted on different operating systems, and programmed in different programming languages to communicate, collaborate, and perform productive work over a network.

A CORBA system accomplishes this magical task by confining the interaction between objects to well-defined interfaces. To use the service of a distributed object, you must interact with it through the interface. How the interface is implemented -- or even where it is implemented -- is not known and doesn't have to be known by the client. The figure below illustrates the workings of a typical CORBA system.

Object interactions in CORBA

Achieving location transparency

In this figure, the client code makes use of a server object through its interface. In fact, the interface is implemented by a stub object that is local to the client. As far as the client can tell, it's only interacting with the local stub object. Under the hood, the stub object implements the same interface as the remote server object. When the client invokes methods on the interface, the stub object forwards the call to a CORBA component called the Object Request Broker (ORB). The calling code in the client does not have to know that the call is actually going through a stub.
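The stub mechanism can be mimicked in plain Java (this is an analogy, not real CORBA) with a dynamic proxy: the client calls the interface, and the proxy, playing the stub's role, forwards each call to wherever the servant object happens to live.

```java
import java.lang.reflect.Proxy;

public class StubSketch {

    interface PatientRecordService {
        String fetchRecord(String patientId);
    }

    // Plays the role of the remote servant; in real CORBA it could live
    // on another machine behind an ORB.
    static class RemoteServant implements PatientRecordService {
        public String fetchRecord(String patientId) {
            return "record for " + patientId;
        }
    }

    // Builds the "stub": an object that implements the same interface and
    // forwards every call onward, so the client cannot tell the difference.
    static PatientRecordService makeStub(PatientRecordService servant) {
        return (PatientRecordService) Proxy.newProxyInstance(
                StubSketch.class.getClassLoader(),
                new Class<?>[] { PatientRecordService.class },
                (proxy, method, args) -> method.invoke(servant, args));
    }

    public static void main(String[] args) {
        PatientRecordService stub = makeStub(new RemoteServant());
        // The client sees only the interface -- location transparency.
        System.out.println(stub.fetchRecord("12345")); // record for 12345
    }
}
```

In CORBA the forwarding step goes through the ORB and, potentially, across the network; the client code is identical either way.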

DCOM overview

Distributed Component Object Model (DCOM) is a proprietary Microsoft technology that lets software components distributed across several networked computers communicate with each other. It has since been deprecated in favor of the Microsoft .NET Framework, which includes support for Web services.

Internet poses problems for CORBA and DCOM

DCOM was a major competitor to CORBA. Proponents of both of these technologies saw them as one day becoming the model for code and service-reuse over the Internet. However, the difficulties involved in getting either of these technologies to work over Internet firewalls, and on unknown and insecure machines, meant that normal HTTP requests in combination with web browsers won out over both of them.

Web services overview

Web services specifications such as SOAP, WSDL, and the WS-* family are currently the leading distributed computing standards for interfacing, interoperability, and quality-of-service policies. Web services are often called "CORBA for the Web" because they borrow many concepts and ideas from CORBA.

There are several ways to think about Web services when building applications. At the most basic level, they are an advanced family of communications protocols that allows applications to talk to each other. This level has progressed quite significantly over the past few years, with many tools (e.g., Oracle’s JDeveloper and Microsoft's Visual Studio) that allow software developers to write interacting Web services and build complex applications. It is often characterized by direct one-on-one interactions between services, or by relatively few services interacting with each other.

SOA overview

However, using Web services merely as a communications protocol belies their true power: the service-oriented architecture (SOA). SOA describes an entire system of services dynamically discovering each other, getting together to perform some application, and recombining in many ways. This model encourages the reuse of technology and software, changes the way applications are designed, developed, and put to use, and brings the world of distributed computing closer to reality. At this level, software developers need to think in terms of the SOA model and design their distributed applications across it. This level is characterized by technologies that allow distributed communication among services, such as an Enterprise Service Bus (ESB), a common distribution network for services to work with. More on ESB in an upcoming post.

Finally, at the highest level, the SOA model and its many component services can be viewed as building blocks that are assembled in whole sections into full applications, instead of the traditional method of writing line after line of code. By examining the connecting interfaces, we can build whole applications without ever really writing code. For that matter, direct code may even get in the way, since the services may be written in numerous different languages and on different platforms. The blocks can be put together into a workflow of operations that defines how the application performs, and other tools can be used to monitor the effectiveness of the workflow at each service or group of services. At this level, developers can put away regular programming languages and work in a Model-Driven Architecture that helps them build applications that conform more closely to a design. More on workflow in an upcoming post.

Monday, June 15, 2009

Electronic Health Records (EHR) & “Meaningful Use” survey results & Speech Recognition Technology

Electronic health records (EHR) have been recognized as a core component to national healthcare reform in the United States. Beginning in 2011, physicians and hospitals can receive bonus payments under Medicare and Medicaid, but only if they are found to be “meaningful EHR users.” Nuance Communications, Inc., a leading supplier of speech solutions that are expected to help with the transition to and utilization of EHR, surveyed physicians to better understand how “meaningful use” should, from their point of view, ultimately be defined.

A couple of my prior posts were about speech recognition, and another couple were about electronic health records. So, I was interested in the results of this survey.

The EHR Meaningful Use Physician Study shows —

93 percent of doctors “disagree” or “strongly disagree” that using an EHR has reduced time spent documenting care.

When asked what doctors consider an “incentive to drive national EHR adoption,” 75 percent of the physicians surveyed said they consider “access to tools that would help doctors to better document within an EHR (beyond the keyboard), such as speech recognition” an incentive; whereas 69 percent cited “stimulus money.”

When asked about qualifications that the federal government should measure as part of pay-outs associated with EHR meaningful use, physicians cited the following:

  • 90 percent said “access to medical records faster without waiting for records to come out of transcription,” was “important” or “very important.”
  • 83 percent said “more complete patient reports, with higher levels of detail on the patient’s condition and visit,” was “important” or “very important.”
  • 83 percent said “better caregiver-to-caregiver communication based on improved reporting that is more accessible and easily shareable,” was “important” or “very important.”
  • 79 percent said, “improved documentation by pairing the EHR point-and-click template with physician narrative,” was “important” or “very important.”

When asked about the importance of various EHR components, physicians identified the following as the five most important:

    • Lab test results reporting and review
    • Documentation tools that allow doctors to speak the physician narrative into the EHR
    • E-Prescribing
    • Secure health messaging between caregivers
    • Keyboard support via speech recognition for data entry into the EHR

    74 percent of the doctors surveyed said “EHR cookie cutter templates” and “patient notes with no uniqueness” are challenges to realizing the full value of EHRs.

    93 percent of doctors surveyed either “agree” or “strongly agree” with the following statement, “I think capturing physician narrative as part of the documentation process is necessary for complete and quality patient notes.”

    67 percent of the doctors surveyed cited “time associated with reliance on keyboard and mouse to document within an EHR” as a major hurdle.

    My thoughts, after perusing these findings: On the one hand, this is a self-serving report created by a vendor of speech recognition products; on the other hand, it suggests that there could be a good deal of resistance to EHR from physicians and hospitals if speech recognition technology is not successfully integrated into the coming EHR system(s). Stay tuned.

    Saturday, June 13, 2009

    Electronic Health Records (EHR) - Interoperability of Disparate Systems

    As suggested in the figure below, there are technical, semantic, organizational, political and economic matters to consider when deciding how to move data and/or exert operations from one place to another. These challenges are present in the recently reinvigorated campaign to adopt and exchange electronic health records.

    On February 17, 2009, President Barack Obama signed into law the American Recovery & Reinvestment Act (ARRA). The health IT component of the Bill is the HITECH Act, which appropriates a net $19.5 billion to encourage healthcare organizations to adopt and effectively utilize Electronic Health Records (EHR) and establish health information exchange networks at a regional level, all while ensuring that the systems deployed protect and safeguard the critical patient data at the core of the system.

    There are two portions of the HITECH Act: the first provides $2 billion immediately to the Department of Health & Human Services (HHS) and its sub-agency, the Office of the National Coordinator for Health IT (ONC), and directs the creation of standards and policy committees; the second allocates $36 billion to be paid to healthcare providers who demonstrate use of Electronic Health Records.

    The government is focused on two primary goals in this legislation: moving physicians who have been slow to adopt Electronic Health Records to a computerized environment, and ensuring that patient data no longer sits in silos within individual provider organizations but instead is actively and securely exchanged between healthcare professionals. Therefore, the vast majority of the funds within the HITECH Act are assigned to payments that will reward physicians and hospitals for effectively using a robust, connected EHR system.

    In short, a great deal of largely-Government-funded IT work is about to be undertaken to enable often-disparate healthcare recordkeeping systems to interoperate. In the next few posts, I will address a number of the issues that need to be considered when planning such projects.

    Thursday, June 11, 2009

    Demographics of past visitors to this blog

    To determine what, if any, response there had been to this still-fairly-new blog, I enlisted the help of Google Analytics from which I obtained the report posted below.

    Wednesday, June 10, 2009

    The coming evolution of wireless local area networks (in healthcare, academe and elsewhere) means better video, voice, and data

    Wireless local area networks are about to become more pervasive because greater numbers of end users are going to be more favorably impressed than ever with their performance and greater numbers of administrators are going to be more favorably impressed than ever with their cost/benefit ratio. The emerging Wi-Fi standard, 802.11n, is behind these changes.

    The examples of 802.11n networks outlined in this post are taken from the healthcare industry and education, but they apply equally well to any networked environment -- and, today, that means just about every environment. Healthcare especially, with its stringent security requirements, large files like X-rays, and a physical environment kept in flux by portable equipment, will benefit from this nascent wireless standard as much as, and sometimes more than, any other field.

    With the ubiquitous coverage that 802.11n promises, doctors and nurses will have full network access whether at the patient’s bedside, in wards, or in the waiting areas. Office workers and other ancillary staff will be able to move their laptops and other mobile devices from their desks to a conference room or to a cross-campus facility, more often than not seamlessly.

    While 802.11n is not expected to be ratified before Q4 of this year at the earliest, draft versions of 802.11n are already making it possible to run bandwidth-hungry applications like VoIP and video streaming. The draft 802.11n products currently on the market demonstrate a significantly higher throughput and improved range. And, the 802.11n standard promises to achieve as much as 5x the throughput and up to double the range over legacy 802.11 a/b/g technology.

    At this level of throughput and range performance, 802.11n can support multimedia applications, with the ability to transport multiple high-definition (HD) video streams, while at the same time accommodating Voice over Internet Protocol (VoIP) streams and data transfers for multiple users with high Quality of Service (QoS) and latest generation security protections in place. In enterprise, campus and municipal networks, 802.11n offers the robustness, throughput, security and QoS capabilities that IT managers have come to expect from wired Ethernet networks. But, wireless devices have this one additional attribute: they are not tethered to a wall jack like wired ones and that makes all the difference in the world.

    I've linked to a number of videos so that you can see some of this functionality in action:


    It is well documented that wireless performance varies based on a variety of factors such as the type of applications delivered over Wi-Fi or the physical challenges presented by building materials or architectural configurations. Cisco’s lab testing engineers have consistently reached connection data rates of 300 Mbps per 802.11n radio. This data rate typically translates to a throughput rate of 185 Mbps for sustained periods of time.


    Before the introduction of 802.11n, a healthcare organization that needed to stream high-definition (HD) video for mobile diagnostic services would be limited to only two HD streams at a time over a wireless network. Even then, an 802.11g network would not be a reliable transport medium for HD streaming video. Previous standards, such as 802.11b, did not have the necessary throughput capacity for any HD video streams.

    802.11n allows for the distribution of seven times more video streams than 802.11g networks (Table 1). Such an increase in the throughput rate can truly mobilize applications such as bandwidth-intensive, video-streaming applications. With 802.11n, organizations like the healthcare provider mentioned earlier can dramatically increase the number of simultaneous mobile diagnostics that can be performed. The result is a significant improvement in medical staff productivity, resource utilization, and patient satisfaction (due to shorter wait times), all of which leads to greater profitability.

    * In real-life network deployments, Cisco 802.11n solutions have maintained a consistent throughput peak of 185 Mbps. Unfortunately, when it comes to video streaming over Wi-Fi, contention reduces the available throughput per 802.11n radio to roughly 140 Mbps.

    ** A typical DVD-quality video stream requires about 5 Mbps of throughput. A high-definition video stream requires double the throughput—that is, about 10 Mbps.
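The footnoted figures let us check the "seven times more video streams" claim with a quick back-of-envelope calculation. This is a sketch; the ~22 Mbps effective throughput for 802.11g is the figure used in the lecture-hall example later in this post:

```python
# Back-of-envelope check of the HD-stream figures, using the footnoted
# numbers: ~140 Mbps effective video throughput per 802.11n radio under
# contention, ~22 Mbps effective for 802.11g, ~10 Mbps per HD stream.

HD_STREAM_MBPS = 10
G_THROUGHPUT_MBPS = 22    # typical 802.11g effective throughput
N_VIDEO_MBPS = 140        # per-radio 802.11n throughput with video contention

streams_g = G_THROUGHPUT_MBPS // HD_STREAM_MBPS   # 2 simultaneous HD streams
streams_n = N_VIDEO_MBPS // HD_STREAM_MBPS        # 14 simultaneous HD streams
```

The ratio of the two results is 7, matching the claim above.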

    8x More Users

    The transformative nature of wireless networking drives -- and also feeds -- an insatiable appetite for network-connected devices. Most of us today have at least one Wi-Fi-enabled device, but many of us are starting to carry more than one -- for example, a dual-mode phone, a laptop computer, and a digital camera. We are also becoming accustomed to finding an available network that we can connect those devices to while at home, at work, or on the go, which in turn drives the need for ubiquitous network connectivity.

    The proliferation of these network-connected devices is creating an undeniable need for high density deployments as more and more users connect to the same network with multiple devices for different reasons. This need is only exacerbated in areas where people tend to congregate in large numbers for business, education, entertainment, or other reasons.

    Consider a large lecture hall where many students congregate during class and are connecting to the wireless network with their laptop computers in order to download the instructor’s presentation slides and notes or conduct parallel, online research on the discussion topic of the day.

    If we assume that this large lecture hall is equipped with three 802.11g access points today, the students in the classroom and their connected devices would be sharing an available bandwidth of roughly 22 Mbps per access point, with users and devices load-balanced across the three access points. Now suppose that all these students were required by the instructor to use a “blackboard” type of application to download presentation notes transcribed onto slides in real time. The application would require a consistent bandwidth of 5 Mbps in order to provide a good user experience, and the result would be that only 12 students (four students per access point) would be able to use the application effectively in the classroom.

    Suppose we were to replace these three 802.11g access points with three next-generation, 802.11n access points. The system-level bandwidth in the classroom would increase substantially and more than 96 students (32 users per access point) would be able to connect to their wireless network and expect to have a consistent application experience.
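The lecture-hall arithmetic can be laid out explicitly. The constants below come straight from the example (5 Mbps per student, ~22 Mbps per 802.11g access point, 32 users per 802.11n access point):

```python
# Arithmetic behind the lecture-hall example: how many students can run
# a 5 Mbps "blackboard" application per access point, for 802.11g vs 802.11n.

APP_MBPS = 5            # bandwidth the blackboard application needs per student
ACCESS_POINTS = 3
G_AP_MBPS = 22          # approximate effective per-AP throughput for 802.11g
USERS_PER_N_AP = 32     # concurrent users per 802.11n AP quoted above
                        # (32 users x 5 Mbps = 160 Mbps, within the ~185 Mbps
                        # sustained throughput of an 802.11n radio)

users_g = ACCESS_POINTS * (G_AP_MBPS // APP_MBPS)   # 3 APs x 4 users = 12
users_n = ACCESS_POINTS * USERS_PER_N_AP            # 3 APs x 32 users = 96
```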

    In fact, a one-to-one replacement of access points is the most prevalent migration scenario to 802.11n for organizations that want to increase their deployment density. User density becomes an even more complex problem to solve when network users are demanding different bandwidths to run their specific applications. It is not hard to imagine how airport terminal or conference room hotspots, where users run a variety of mobility applications, would benefit from next-generation wireless. Not only would it allow more users on the network, but it could also improve their individual user experience.

    Cisco testing has shown that on a systemwide basis, adding devices (users) onto the network may at some point create some throughput loss, up to 5 percent, which will result in slightly fewer additional users being able to use the network. That is why the number of users is not entirely aligned with the expected performance improvement we see from migrating to 802.11n.

    9x Faster

    Even though Internet or Intranet video streaming and higher user density are both compelling reasons to migrate to 802.11n, the vast majority of companies migrating to next-generation wireless will do so because of the raw performance improvement their users will experience daily. Extensive field testing has shown sustained throughput performance of 185 Mbps for 802.11n wireless networks; in many of those field trials, a sustained upper limit of 198 Mbps was observed.

    Companies migrating to a next-generation 802.11n wireless network can expect to experience an improvement in performance that is up to nine times faster than 802.11g technology for the mobile applications used today. Furthermore, many applications, such as scheduled data backups and large file transfers that were previously performed over the wired network, will now be mobilized. These performance improvements increase overall employee effectiveness and productivity and in turn shorten the 802.11n investment payback period, while increasing the return on investment.

    There is no doubt that the emergence of 802.11n will also bring about an influx of bandwidth-hungry mobile applications that could not be enabled wirelessly until now.

    802.11n vs. gigabit Ethernet

    Of course, 802.11n speeds still fall far short of those of gigabit Ethernet. However, downloading an 8MB file over 802.11n should take about 4 seconds if there are 10 users on a given access point, compared to less than a second for both fast and gigabit Ethernet. Even with 20 users per access point, the file download times ranged from two to eight seconds -- still satisfactory for most users.
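The "about 4 seconds" figure follows from the sustained-throughput numbers quoted earlier. A quick sketch, assuming bandwidth is shared evenly among the users on an access point (a simplification; real contention overhead makes per-user shares a bit smaller):

```python
# Download time for an 8 MB file: 802.11n shared among 10 users vs.
# a dedicated gigabit Ethernet link. Assumes an even per-user split.

FILE_MB = 8
FILE_MEGABITS = FILE_MB * 8      # 64 megabits
N_THROUGHPUT_MBPS = 185          # sustained 802.11n throughput (per radio)
GIGE_MBPS = 1000

def download_seconds(link_mbps, users):
    per_user_mbps = link_mbps / users
    return FILE_MEGABITS / per_user_mbps

t_n_10_users = download_seconds(N_THROUGHPUT_MBPS, 10)  # roughly 3.5 s
t_gige = download_seconds(GIGE_MBPS, 1)                 # well under a second
```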

    Although latency is up to 20 times higher than that of gigabit Ethernet, the difference will not be enough to impact VoIP. The same can be said of jitter, the amount of variation in the arrival times of VoIP packets. Jitter can be as high as 150 times that of gigabit Ethernet, but who cares? Again, the difference will have little impact on jitter-sensitive applications such as VoWLAN [voice over WLAN] because the absolute value is so small compared to the VoWLAN jitter budget.

    See http://www.youtube.com/watch?v=WXELBG9oakk

    This video, the first in a 4-part series on VoIP, is not about wireless, but the concepts, which apply to both wired and wireless networks, may be of interest.

    More reliable

    802.11n is not only faster, it’s a lot more reliable because it uses “MIMO” -- Multiple Input, Multiple Output — technology. It means, in effect, you have multiple antennas working. So, if a signal doesn’t get through going in one direction, you’re able to send it another way with another antenna, and the signal is more likely to get through. This use of multiple antennas also can mean fewer "dead spots" in coverage.

    Better security

    802.11n also has better security, with stronger encryption, than 802.11g. That makes 802.11n particularly attractive to small- and medium-size organizations, which don’t have the level of IT resources that larger organizations do.

    And, fortunately, all of today's wireless network security best practices still apply to 802.11n. It's important to realize, however, that 802.11n may also raise business risk simply by supporting more users and applications across larger areas. In short, the same old attacks may now be far more disruptive to your business.

    Ultimately, 802.11n networks can be made just as secure as -- if not more secure than -- yesterday's 11a/b/g networks. But, this takes awareness and follow-through.

    Caveat emptor

    Like yesterday's 802.11a/b/g standards, the 802.11n high throughput standard employs 802.11i "robust security." In fact, all Draft n products are required to support Wi-Fi Protected Access version 2 (WPA2) -- the Wi-Fi Alliance's test program for 802.11i.

    The good news: All 802.11n WLANs built from scratch can forget about WEP crackers and WPA (TKIP MIC) attacks, because every 802.11n device can encrypt data with AES. The catch: WLANs that must support both old 802.11a/b/g clients and new 802.11n clients may be forced to permit TKIP. Doing so makes it possible for older non-AES clients to connect securely. Unfortunately, 802.11n prohibits high-throughput data rates when using TKIP.

    It is therefore best to split old 802.11a/b/g clients and new 802.11n clients into separate SSIDs: a high-throughput WLAN requiring AES (WPA2) and a legacy WLAN that allows TKIP or AES (WPA+WPA2). This can be done by defining two SSIDs on a virtual AP or by dedicating different radios on dual-radio APs. This is only a stop-gap measure, however. As soon as you can retire or replace those legacy devices, do away with TKIP to improve both speed and security.
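The SSID split described above can be pictured as a simple policy table. This is an illustrative, vendor-neutral sketch (the SSID names are made up), not actual controller configuration:

```python
# Two SSIDs on the same infrastructure: a high-throughput WLAN that
# requires AES (WPA2) and a legacy WLAN that also tolerates TKIP so
# older 802.11a/b/g clients can still connect securely.

ssids = {
    "corp-n":      {"security": "WPA2",      "ciphers": {"AES"}},
    "corp-legacy": {"security": "WPA+WPA2",  "ciphers": {"AES", "TKIP"}},
}

def allows_high_throughput(ssid):
    # 802.11n prohibits high-throughput data rates when TKIP is in use,
    # so any SSID permitting TKIP falls back to legacy rates.
    return "TKIP" not in ssids[ssid]["ciphers"]
```

Once the legacy clients are retired, the second SSID (and TKIP with it) can simply be removed.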

    Forward and backward compatibility

    The IEEE 802.11n specification is now stable and converging. Many vendors have stated that their Wi-Fi CERTIFIED 802.11n draft 2.0 products are planned to be software-upgradeable to the eventual IEEE 802.11n standard. The industry now needs assurance that these new products interoperate with each other and that they are backwards compatible with and friendly to the legacy 802.11a/b/g systems. The Wi-Fi CERTIFIED program delivers this assurance.

    Devices eligible for certification implement most of the mandatory capabilities in the IEEE 802.11n Draft 2.0 specification. In addition, certain optional capabilities are covered under the certification testing, if implemented in the device. The certification defines and verifies out-of-box behavior of draft 802.11n devices. It also tests for backwards compatibility with and protection of legacy 802.11a/b/g networks from potential disruption by 802.11n. Security and QoS testing are mandatory for the Wi-Fi CERTIFIED 802.11n draft 2.0 products.

    Comparing Wi-Fi and WiMAX

    Some people describe the difference between Wi-Fi and WiMAX as analogous to the difference between a cordless phone and a mobile phone. Wi-Fi, like a cordless phone, is primarily used to provide a connection within a limited area like a home or an office. WiMAX is used (or planned to be used) to provide broadband connectivity from a central location to most locations, indoors or out, within its service radius, as well as to people passing through in cars. But, be forewarned: just as with mobile phone service, there are WiMAX dead spots within buildings.

    From a techie POV, the analogy is apt at another level: Wi-Fi, like cordless phones, operates in unlicensed spectrum (in fact, cordless phones and Wi-Fi can interfere with each other in the pitiful swath of spectrum that's been allocated to them). There are some implementations of WiMAX for unlicensed spectrum, but most WiMAX development has been done on radios that operate on frequencies whose use requires a license.

    Wi-Fi CAN operate at distances as great as WiMAX, but there's a reason why it doesn't. Radios operating in the unlicensed frequencies are not allowed to be as powerful as those operated with licenses; less power means less distance.

    Though both offer wireless data connectivity, there are more differences than similarities. Check out the following comparisons:

    Coverage Range

    The coverage range of Wi-Fi 802.11n is about 400 meters in open spaces, but less indoors. For WiMAX 802.16e, coverage can be metro-wide, reaching more than 50 km.


    Speed

    Wi-Fi 802.11n was developed to provide faster speed (around 300 Mbps) than the a, b and g variants of this standard. WiMAX, on the other hand, can handle speeds up to 70 Mbps. It should be noted, however, that for both standards, available bandwidth depends on many factors, such as distance from the base stations or access points, the RF environment, and the number of users connected.

    Quality of Service

    802.11n and WiMAX have different Quality of Service (QoS) mechanisms. This feature is standard in WiMAX and utilizes a method based on type of connection between the base station and the user device. Wi-Fi has introduced a QoS mechanism where certain traffic flows can be prioritized over others. For example, VoIP or video streaming applications may be given priority over ordinary web surfing.
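The prioritization idea can be sketched with a toy scheduler: traffic is queued by class, and higher-priority classes (VoIP, video) are always served before best-effort web traffic. This illustrates the concept only; it is not either standard's actual QoS mechanism:

```python
# A toy priority scheduler: each packet carries a class, and the queue
# always releases the highest-priority class first (lower number = first).
import heapq

PRIORITY = {"voip": 0, "video": 1, "web": 2}

queue = []
for kind in ["web", "voip", "web", "video"]:
    heapq.heappush(queue, (PRIORITY[kind], kind))

# Drain the queue: VoIP and video jump ahead of ordinary web traffic.
served = [heapq.heappop(queue)[1] for _ in range(len(queue))]
```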

    Target Market

    Wi-Fi, including 802.11n, was primarily developed for wireless local area networks (WLAN) with a limited coverage area. It has found popular usage in last-mile delivery and consumer applications, such as hotspots in public places, offices or at home. WiMAX, on the other hand, was developed primarily for wireless metropolitan area networks (WMAN) with coverage ranges of up to several kilometers. Service is usually subscription-based and provided by telco operators, intended for business users. Example applications include backhaul for wide area networks and internet connectivity for ISPs.

    Today, Dell customers can add an Intel wireless module that supports Wi-Fi and WiMAX to Dell's Studio 17 and Studio XPS 16 for $60, according to Dell's Direct2Dell blog.

    But, wireless broadband networks based on WiMAX are only available in three U.S. cities: Atlanta, Baltimore and Portland, Oregon. That means most users won't get any benefit from adding WiMAX cards to their Dell laptops unless they live in one of these three cities. Over time, more U.S. users will get access to WiMAX networks as operator Clearwire expands coverage to more cities.

    HP, the world’s largest laptop maker by units, does not offer WiMAX as an option on any notebooks.

    Of course, you can always use a USB modem. Sprint’s U300 USB modem, which supports both 3G and Mobile WiMAX, is $80 with a two-year contract.

    Detailed reference on 802.11

    Early use of 802.11n at M.I.T. and a medical center
    http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9111000

    Friday, June 5, 2009

    Program Your Applications To "Tweet" Their Status Updates To A Private Account

    Databases like Oracle and SQL Server can automatically notify the outside world when a wide variety of events occur. For example,
    • The number of patients in the emergency room has reached its limit.
    • The number of widgets in inventory has dropped to the reorder point.
    • A long-running search has ended.
    • An X-ray or other file has been accessed by someone without authorization.

    Typically, notification that such an event has occurred is generated by code contained within a database procedure.
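The pattern can be sketched as follows. This is illustrative Python, not actual Oracle or SQL Server code; in a real system this check would live in a trigger or stored procedure, and the notification would go out via email, speech, or (as discussed below) Twitter:

```python
# A sketch of threshold-based event notification: when a monitored value
# crosses its limit, a message is handed to whatever channel is wired in.

REORDER_POINT = 25  # hypothetical inventory threshold

def on_inventory_change(widget_count, notify):
    # Fire a notification only when the event condition is met.
    if widget_count <= REORDER_POINT:
        notify(f"Inventory low: {widget_count} widgets left, reorder now")

sent = []
on_inventory_change(24, sent.append)   # at/below the reorder point -> notifies
on_inventory_change(100, sent.append)  # plenty of stock -> no message
```

The `notify` callable is the pluggable part: it could wrap an email server, a speech server, or a call to the Twitter API.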

    But, database-centric applications aren’t the only kind that can programmatically generate events. For example, when a security breach, employee mishap or other emergency is detected by the code written in a Web application, notification, often via redundant channels, may be sent.

    Traditionally, a text message (e.g., via an email server) or a speech message (e.g., via a speech server) is generated. However, tweeting (sending a short message via Twitter) has suddenly become a popular option.

    What is Twitter?

    To understand the usefulness of an interface to Twitter, you need to know what Twitter is.

    It's sort of like IM but without the expectation that any particular person will be there to answer. It's very public. If you start your day off with a "Hello, Mom", everyone who “follows” you (that is, who has configured their feed to include your messages) will see it. This capability is sometimes called “micro-blogging”.

    However, for the scenario under discussion, you can configure Twitter to allow only the people (or person) you want to see your tweet – with or without the need to authenticate.

    The Twitter API supports Basic Authentication. Basic authentication allows a user to pass in a user name/password associated with a URL. If you've ever tried to navigate to a Web site and it popped up a dialog and asked for a username and password, that site was probably protected by basic authentication. (Twitter has just recently started offering OAuth authentication.)
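Basic Authentication is simple to construct: the user name and password are joined with a colon and base64-encoded into an `Authorization` header. A sketch (the account name is made up); note that base64 is an encoding, not encryption, which is one reason Twitter has begun offering OAuth:

```python
# Forming an HTTP Basic Authentication header: base64("username:password")
# prefixed with the literal string "Basic ".
import base64

def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("status_bot", "s3cret")  # hypothetical account
```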

    Messages on Twitter are called “tweets” and cannot be longer than 140 characters.

    In general, the goal of Twitter for most people is to communicate and stay in touch with people you know, would like to know, or have interests in common. You follow people much like you would “friend” them on other social networks. The people you follow show up in your stream. People talk about whatever it is that's on their minds. Sometimes you'll see people replying to each other and at other times, they just want to share a thought. However, these uses are quite different from the use we’re considering here.

    The Twitter API

    To make calls to Twitter from a database, one needs a method for plugging into Twitter. Fortunately, Twitter provides an API for performing almost any interaction that you might wish for, including the ability to post a tweet.

    The first thing to do is to set up a Twitter account for your messages. You may want to set up a special Twitter account for this purpose, rather than using your primary account.

    A caveat: internet access from within a corporate firewall, especially from a database server, is an iffy proposition at best. Many places don’t allow it at all. Some do allow it via a proxy. The larger an organization, the less likely it is to allow internet access from a database server.
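Putting the pieces together, a status post is just an authenticated HTTP POST. The sketch below uses the 2009-era REST endpoint (`statuses/update`); given the firewall caveat above, the request is only constructed here, not actually sent:

```python
# Building (but not sending) a Twitter status update: an HTTP POST to
# statuses/update with a Basic Authentication header and a "status" field.
import base64
import urllib.parse
import urllib.request

def build_tweet_request(username, password, status):
    status = status[:140]  # tweets cannot be longer than 140 characters
    data = urllib.parse.urlencode({"status": status}).encode()
    req = urllib.request.Request("http://twitter.com/statuses/update.xml", data)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req  # send with urllib.request.urlopen(req) where permitted

# Hypothetical monitoring account posting an application event:
req = build_tweet_request("status_bot", "s3cret", "ER at capacity: diverting")
```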

    There are Twitter-supplied libraries for 13 programming languages:

    • C#/.NET
    • Java
    • ActionScript/Flash
    • PHP
    • Ruby
    • PL/SQL
    • JavaScript
    • C++
    • Python
    • Scala
    • Perl
    • Eiffel
    • Objective-C/Cocoa