Wednesday, July 22, 2009

Electronic Health Records / Electronic Medical Records - Scams


Dr. David Scheiner, an internist based in the Chicago neighborhood of Hyde Park, was President Obama's doctor for twenty-two years, from 1987 until Obama entered the White House. He has publicly opposed Obama's health plan, calling instead for a single-payer system (see the text and video in my May 13, 2009 post for an introduction to the current debate over single payer).

In a recent interview, Dr. Scheiner said, "President Obama believes that when we have electronic records, somehow night will change into day. That won’t happen. First of all, it’s extremely costly. It will become even easier to scam the health insurance companies and Medicare when you have a cursor that can go over all the things that you perhaps didn’t do, but look good on paper, and you can code much higher."

I post this quote simply to remind us all that while electronic health records / electronic medical records will automate and, in all likelihood, improve many aspects of healthcare delivery, EHR / EMR also has the potential to make today's healthcare scams easier to execute. So, for the time being, we may have good reason to temper our enthusiasm for the inevitable computerization of our still-largely-paper-bound healthcare records systems.

Monday, July 20, 2009

Electronic Health Records (EHR) - Semantic Interoperability - Part 1


To press a suit means one thing to a tailor and another thing to a lawyer.

A free radical means one thing to a chemist but meant another thing to members of the House Un-American Activities Committee (HUAC) during the 1950s.

And, medication for pain and pain medication don’t always mean the same thing. The controversy surrounding the recent death of Michael Jackson illustrates this last point.

In the examples above, as in clinical terminology, words can take on different meanings depending on factors like time or place (i.e., context).

Furthermore, clinicians and organizations use different clinical terms that mean the same thing. For example, the terms heart attack, myocardial infarction, and MI may mean the same thing to a cardiologist, but, to a computer, they are all different. There is a need to exchange clinical information consistently among different health care providers, care settings, researchers and others (semantic interoperability). Because medical information is recorded differently from place to place (on paper or electronically), a comprehensive, unified medical terminology system is needed as part of the information infrastructure.
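To make this concrete, here is a minimal sketch of synonym-to-concept mapping in Python. The identifier shown is SNOMED CT's code for myocardial infarction, but treat the table as illustrative; real systems use full terminology services, not hand-built tables.

```python
# Minimal sketch: normalize synonymous clinical terms to one concept code.
# The SNOMED CT identifier below is illustrative only.
SYNONYMS = {
    "heart attack": "22298006",
    "myocardial infarction": "22298006",
    "mi": "22298006",
}

def to_concept(term):
    """Return the concept identifier for a clinical term, or None."""
    return SYNONYMS.get(term.strip().lower())

assert to_concept("Heart attack") == to_concept("MI")  # same underlying concept
```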

Interoperability

Interoperability is the ability of two parties, either human or machine, to exchange data or information.

First, syntactic interoperability guarantees the exchange of the structure of the data, but carries no assurance that the meaning will be interpreted identically by all parties. Web pages built with HTML or XML are good examples of machine-to-machine syntactic interoperability because a properly structured page can be read by any machine with a Web browser. The meaning of the page to a particular machine may vary substantially; however, this is not usually considered a problem because the semantics of a page are meant to be interpreted by human viewers.

Next, human or semantic interoperability guarantees that the meaning of a structure is unambiguously exchanged between humans. Documents such as progress notes, referrals, consults, and others achieve semantic interoperability at a clinician-to-clinician level by relying on common medical vocabularies.

Finally, computable semantic interoperability requires that the meaning of data be unambiguously exchanged from machine to machine (as shown in the figure below). This does not necessarily mean that all machines need to process the received data the same way, but rather that each machine will make its processing decisions based on the same meaning.
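A small sketch may help distinguish the levels. Both messages below are syntactically valid and parse everywhere, but only the second pins down meaning by qualifying the coded value with its code system, a pattern used by standards such as HL7. The field names here are hypothetical.

```python
# Minimal sketch: structure vs. meaning in an exchanged message.
# Field names are hypothetical; the (code + code system) pattern mirrors how
# standards such as HL7 qualify coded values.
import json

# Syntactically interoperable: any receiver can parse this message...
message = json.dumps({"diagnosis": "MI"})

# ...but "MI" is ambiguous. Qualifying the value with a shared code system
# fixes one meaning for every receiving machine.
semantic_message = json.dumps({
    "diagnosis": {"code": "22298006", "system": "SNOMED-CT"}
})

received = json.loads(semantic_message)
dx = received["diagnosis"]
print(dx["system"], dx["code"])  # every receiver resolves the same concept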



Words and Meanings

The meanings of words change, sometimes rapidly. But a formal language such as that used in an ontology -- a rigorous and exhaustive organization of some knowledge domain, usually hierarchical, containing all the relevant entities and their relations -- can encode the meanings (semantics) of concepts in a form that does not change. To determine the meaning of a particular word (or a term in a database, for example), each fixed concept representation in the ontology must be labeled with the word(s) or term(s) that may refer to that concept.

When multiple words refer to the same (fixed) concept, in language this is called synonymy; when one word is used to refer to more than one concept, that is called ambiguity. Ambiguity and synonymy are among the factors that make computer understanding of language very difficult. For many human-readable terms, the mapping from word to concept (the meaning of the word as used) is highly sensitive to context and purpose.

The role of ontologies in supporting semantic interoperability is to provide a fixed set of concepts whose meanings and relations are stable and can be agreed upon by users. When a word used in some interoperability context changes its meaning, preserving interoperability requires changing the pointer to the ontology element(s) that specifies the meaning of that word.

There are a number of tools for the programmatic handling (i.e., creating, querying, etc.) of ontologies. An important contribution of these tools is the visual representation of ontologies.



The IBM Integrated Ontology Development Toolkit (formerly named the IBM Semantics Toolkit) is one of many toolkits designed for the storage, manipulation, query, and inference of ontologies and their corresponding instances.
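For readers who want to experiment, here is a minimal sketch in Python using the open-source rdflib library and SKOS labels; the tiny "ontology" is invented for illustration, not drawn from any real terminology.

```python
# Minimal sketch of labeling a fixed concept with the words that refer to it,
# using the open-source rdflib library and SKOS labels. The tiny "ontology"
# below is invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/onto#")

g = Graph()
g.add((EX.MyocardialInfarction, SKOS.prefLabel, Literal("myocardial infarction")))
g.add((EX.MyocardialInfarction, SKOS.altLabel, Literal("heart attack")))
g.add((EX.MyocardialInfarction, SKOS.altLabel, Literal("MI")))

# Which fixed concept does the phrase "heart attack" point to?
for concept in g.subjects(predicate=SKOS.altLabel, object=Literal("heart attack")):
    print(concept)  # http://example.org/onto#MyocardialInfarction
```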

An upcoming post will discuss the role and value of semantic technology in service-oriented architectures (SOA).

Friday, July 17, 2009

Electronic Health Record Interoperability: Specifications, Standards, and Working Groups

{Prelude to upcoming post - semantic interoperability}

The American Recovery and Reinvestment Act of 2009 (ARRA) states that the HIT Policy Committee shall make recommendations on standards and implementation specifications, among other tasks.

The Healthcare Information Technology Standards Panel (HITSP) EHR-Centric Interoperability Specification consolidates all information exchanges that involve an Electronic Health Record (EHR) system within any of the thirteen HITSP Interoperability Specifications existing as of February 13, 2009, the date Congress passed the American Recovery and Reinvestment Act (ARRA).

Reading these two statements and others like them requires an understanding of terms such as:

* recommendation
* specification
* standard, and
* working group

Each of these bullets is linked to its Wikipedia entry.



Given that many of this blog's readers (from 42 countries so far) are interested in the information technology aspects of the American Recovery and Reinvestment Act, electronic health record interoperability, and the like, it may be easier for them if I elaborate on the four terms listed above using the example of Cascading Style Sheets (CSS), a simple language for declaring how documents are displayed by Web browsers. In doing so, I'll be introducing additional expressions with which members of the IT community are already familiar.

The Cascading Style Sheets language was created through a collaborative effort between Web developers and browser programmers under the auspices of the World Wide Web Consortium (W3C for short).

The W3C is an international industry consortium of more than 500 companies, research institutions, and Web development organizations; it issues technical specifications for Web languages and protocols.

W3C specifications are called "recommendations" because the W3C is technically not a standards-issuing organization, but in practice the distinction is largely a matter of semantics.

Recommendations are taken as defining a standard form of a Web language, and they are used by Web developers, software tools creators, browser programmers, and others as a blueprint for computer communication over the Web. Examples of W3C Recommendations include Hypertext Markup Language (HTML) and Extensible Markup Language (XML).

The CSS Specifications

The W3C Recommendations issued by the Cascading Style Sheets working group make up the official specification for the CSS language. The working group consists of experts in Web development, graphic design, and software programming from a number of companies, all working together to establish a common styling language for the Web.

CSS Level 1

Cascading Style Sheets Level 1 (CSS1 for short) was officially issued as a W3C Recommendation in December 1996. The URL for this specification is http://www.w3.org/TR/REC-CSS1.

If you try to read the W3C Recommendation for CSS1, you may end up confused. That's because W3C documents aren't written as a general introduction to a subject but rather as precise language definitions for software developers. Most W3C Recommendations are quite opaque to most normal people, although the CSS1 specification isn't too bad compared with some. Being able to refer to the official specification is quite useful, though.

Optional note for programmers - Classes and IDs

In addition to setting styles based on HTML elements, CSS allows for rules based on two optional attributes in HTML: class and id. Each of these can serve as the selector part of a CSS rule and can be set on any visible HTML tag.

The div and span elements really come into their own with class and id selectors. Through the use of class and id attributes, div or span tags can be made to have nearly any effect and presentation, which is powerful but easy to misuse. Take care when using class or id selectors that you're not ignoring more appropriate markup with understood semantics. In other words, a div with a class of bldtxt has no specific meaning in the context of HTML, but a strong tag definitely does. Before reaching for div or span, consider whether another tag would make more sense. A short example follows.
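Here is a minimal HTML/CSS sketch of the distinction; the names masthead, warning, and bldtxt are hypothetical.

```html
<!-- Minimal sketch: class vs. id selectors; all names here are hypothetical. -->
<html>
<head>
<style type="text/css">
#masthead { background: navy; color: white; }  /* id: at most one element per page */
.warning  { color: red; }                      /* class: any number of elements */
.bldtxt   { font-weight: bold; }               /* purely presentational: no semantics */
</style>
</head>
<body>
<div id="masthead">Site title</div>
<p class="warning">Styled via its class attribute.</p>
<!-- Prefer markup with understood semantics where meaning is intended: -->
<p><span class="bldtxt">merely bold-looking</span> vs. <strong>genuinely strong</strong></p>
</body>
</html>
```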

The mention above of the word "semantics" is meant to be a segue to the subject of a later post, semantic interoperability.



Thursday, July 16, 2009

Customer self-service and social Web communities like Twitter and Facebook vs. spammers, scammers and hackers


Organizations from large corporations to small medical centers are bridging the gap between their cloud-computing customer service and support contact centers and social Web communities like Twitter and Facebook.

See, for example, https://www.salesforce.com/form/demo/csm_reg.jsp?d=70130000000EoNf

This means that there are new targets for spammers, scammers and hackers.

Redux

At the bottom of my May 30, 2009 post I added the following note on full-URL links vs. compressed links:

I've been asked why I didn't use link-shrinkers in earlier posts. Here's why:

First, I should say that there are some things I like about link compression: Some link-shrinkers let you personalize the new address with a unique phrase such as your name, or show you how many people click the link after you've posted it. Furthermore, link compression is just the beginning. More and more of these outfits allow users to see all sorts of details like where a link is showing up around the Web and where the people clicking on it are located.

However, this convenience may come at a cost. The tools add another layer to the process of navigating the Web, and two risks in particular stand out.

First, popularity and convenience don't eliminate the risks of these link loppers. If so many services are springing up, chances are some will just as quickly disappear, and if a URL shortening service goes down, the links created with it could lead nowhere, leaving a trail of broken links.

Second, you can't know exactly where a truncated link will take you, which makes these Lilliputian links attractive to spammers and scammers. You could be directed to unsavory or illegal content, or to something malicious like a computer worm. This means URL shortening services need to police the kinds of sites their users link to.

Some problems

Purveyors of spam and malicious software are taking full advantage of URL-shortening services like bit.ly and TinyURL in a bid to trick unwary users into clicking on links to dodgy and dangerous Web sites. Fortunately, with the help of a couple of tools and some common sense, most Internet users can avoid these scams altogether.

According to alerts from anti-virus vendors McAfee, Symantec and Trend Micro, the latest to abuse these services is the Koobface worm, which targets users of social networking sites like Facebook (Koobface is an anagram of Facebook) and MySpace, and is now also spreading via the microblogging service Twitter. Koobface arrives as a message urging users to click on a link to a video, which invariably leads to a site prompting the visitor to install a missing video plug-in. The fake plug-in turns the user's system into a bot that can be used for a variety of criminal purposes, from spamming to attacking other computers and spreading the worm.

Some solutions


TinyURL, which is among the longest-running URL shortening services, lets you automatically enable previews of all shortened URLs. Just visit this page and click the "Enable Previews" link, and from then on TinyURLs will be converted into their longer form when you visit a Web page that features them. You must have cookies enabled in your browser for this setting to take effect, and you will need to set the cookie in each browser you use.

If you browse the Web with Firefox, there is an add-on called Long URL Please, which currently converts URLs shortened by 72 different services, including bit.ly, cli.gs, digg.com, is.gd, kl.am, ow.ly, tr.im, and tinyurl.com. Long URL Please also works in Internet Explorer and other browsers via a bookmarklet: simply add it to your bookmarks, then click it when you're on a page that includes shortened URLs to display their long forms.

Firefox users who are familiar with the Greasemonkey add-on may prefer the Tiny URL Decoder script, which also works with a long list of URL shortening services.

Expandmyurl.com is another bookmarklet approach that works across browsers.
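For readers who prefer a do-it-yourself check, a short Python sketch can expand a shortened URL by following its redirects before you ever open it in a browser. The short link below is hypothetical.

```python
# Minimal sketch: expand a shortened URL before visiting it, using only the
# Python standard library. The short link below is hypothetical.
import urllib.request

def expand(short_url):
    """Follow redirects with a HEAD request and return the final destination."""
    req = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.geturl()  # the URL after all redirects

print(expand("https://tinyurl.com/example"))  # hypothetical short link
```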

Wednesday, July 8, 2009

Cyberattacks Can Harm, and Website Monitoring Can Benefit, Electronic Health Records (EHR)

Cyberattacks have crippled the Web sites of several major American and South Korean government agencies since the July 4th (U.S.) holiday weekend.

The Washington Post, which also came under attack, reported on its Web site today that a total of 26 Web sites were targeted. In addition to sites run by government agencies, several commercial Web sites were attacked, including those operated by Nasdaq, it reported, citing researchers involved in the investigation.

Authorities suspected that the hackers used a new variant of an existing denial-of-service (DoS) program to attack the Web sites. A denial-of-service attack is one in which an attacker, sometimes from a single source, overwhelms a target computer with messages, denying access to legitimate users without actually compromising the targeted computer. Although frequently intentional, a DoS can also occur unintentionally through a misconfigured system. A sketch of one common countermeasure appears below.
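For the technically curious, one common server-side countermeasure is per-source rate limiting. A minimal Python sketch follows; the window and threshold are illustrative assumptions, and real mitigation, especially against distributed attacks, is far more involved.

```python
# Minimal sketch of one server-side DoS countermeasure: per-source rate
# limiting over a sliding window. Window and threshold are illustrative
# assumptions; real mitigation of distributed attacks is far more involved.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100  # per source address per window

_recent = defaultdict(deque)  # source address -> timestamps of recent requests

def allow_request(source_ip):
    """Return False once a source exceeds its request budget for the window."""
    now = time.time()
    q = _recent[source_ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # discard timestamps that fell out of the window
    if len(q) >= MAX_REQUESTS:
        return False  # likely flood; other sources remain unaffected
    q.append(now)
    return True

print(allow_request("203.0.113.7"))  # True until the budget is exhausted
```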

In several of my prior posts, I discussed the [national and international] push toward a system of interconnected Electronic Health Records (EHR) networks. This system, like the sites cited in today's media reports, depends on the availability of well-performing Web sites.

An Associated Press article in today's newspapers refers to Keynote Systems, a company that monitors such events, so I decided to look into their services. I had written on other aspects of this subject in the article Capacity and Disaster Recovery Planning for an Internet Connection to which I link in my bibliography at the very bottom of this blog.

Keynote Systems is a mobile and Web site monitoring company based in San Mateo, Calif. The company publishes data detailing outages on Web sites, including 40 government sites it watches.

Managing a single Web application with thousands of users typically requires a system administrator and a few support personnel, at a cost of up to $30,000 per month. About 20% of this cost -- roughly $6,000 per month, or $72,000 per year -- is spent on monitoring the reliability and availability of these services. Keynote Systems' services to monitor URLs start at $100 per month, or $1,200 per year, so the cost savings to an operations team can be significant.
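The arithmetic behind that claim, using only the figures quoted above:

```python
# Back-of-the-envelope check of the figures quoted above; all numbers come
# from the paragraph, and real costs will vary widely.
in_house_monthly = 30_000 * 0.20   # 20% of a $30,000/month operations cost
keynote_monthly = 100              # entry-level monitored URL
annual_savings = (in_house_monthly - keynote_monthly) * 12
print(f"${annual_savings:,.0f} saved per year")  # -> $70,800 saved per year
```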

Keynote Systems' test and measurement products and services are driven by a global network of more than 2,600 measurement computers and mobile devices in more than 240 locations across 160 metropolitan areas around the world -- which the company bills as the largest on-demand test and measurement network in the world. According to Keynote, users know precisely how Web sites, content, applications, and services will perform on mobile networks and devices, backed by hard metrics that test precise behavior patterns and help predict performance problems.

Its test and measurement offerings deliver in-depth, relevant KPIs (a subject I discuss in my article cited above) that are easily understood and accurately represent what happens in the real world, using real browsers and real devices. The economic gains are upfront: using Keynote Systems' network does not require an increase in capital expenses.
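At its core, this kind of monitoring repeats a simple probe from many locations. Here is a minimal single-probe sketch in Python; the target URL is a placeholder.

```python
# Minimal sketch of a single availability/latency probe, the measurement a
# monitoring service automates from many locations. The URL is a placeholder.
import time
import urllib.request

def probe(url, timeout=10.0):
    """Fetch a URL once and record two basic KPIs: up/down and response time."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:
        return {"url": url, "up": False, "error": str(exc)}
    return {"url": url, "up": True, "status": status,
            "latency_s": round(time.time() - start, 3)}

print(probe("https://www.example.gov/"))  # placeholder URL
```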

Their Web site offers a free evaluation of their software-as-a-service (SaaS) and downloadable software products. While I don't recommend that you immediately send them or any other like organization a check, I do think you should know about what they have to offer.

Monday, July 6, 2009

Computer Hardware and CO2 Emissions


With every upload, download, email, tweet and post (including this one) there comes an energy cost.

In the last couple of years, the rise in popularity of online video content and services has led to prophesies of doom about how the resulting increases in traffic will lead to "exafloods" of data that ultimately bring the net to its knees. Such fears have proved unfounded. In fact, exabytes of data already course through the veins of the internet regularly, and in the past two years traffic growth has actually dropped from a steady 100% year-on-year to around 60%, probably thanks to better compression software.

But that's no reason to celebrate. Traffic is still growing, and so too is the amount of hardware infrastructure required to accommodate it. By conservative estimates, computer hardware is already on par with aviation in terms of the global CO2 emissions it produces: roughly 2%.

And it's not just the likes of YouTube pushing up the traffic. Facebook, for example, has more than 200 million account holders, 15 million of whom update their status at least once a day, and its users upload nearly a billion photos each month. When you consider that some people, like the US stand-up comic Steve Hofstetter, claim as many as half a million friends on Facebook, it's worth remembering that many of those friends may receive an email every time he posts a gag. More pointless traffic.

Generally the traffic generated by these exchanges is minimal compared to video transfer, as is the case for tweets and even AudioBoos. But the point is that while many people wrestle with their conscience about whether to fly, we think nothing of sending emails, messaging, tweeting or updating our Facebook status. A Google search may produce only 0.2 grams of CO2, but these e-missions quickly add up.
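To see how quickly they add up, a back-of-the-envelope calculation; the per-search figure is the one quoted above, and the usage numbers are hypothetical.

```python
# Back-of-the-envelope arithmetic: the 0.2 g/search figure is the estimate
# quoted above; the daily search count is a hypothetical heavy user.
GRAMS_PER_SEARCH = 0.2
SEARCHES_PER_DAY = 30
annual_kg = GRAMS_PER_SEARCH * SEARCHES_PER_DAY * 365 / 1000
print(f"{annual_kg:.2f} kg of CO2 per year from searches alone")  # -> 2.19 kg
```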