This whole issue with Sermo has me hoppin’ mad. The story came to a head last week, but I didn’t pay much attention at first; I’m not a physician, so I figured I had no stake in a site I can’t even join. However, I finally read some articles about it and my jaw dropped. Here is an AMA-sanctioned site with $27M in venture capital money ($9M when they started), whose entire business model centers on having a secure, private community for physicians, and it does nothing except use public databases (as in [nearly] free) to “ensure” a physician is not misrepresenting his/her identity. Sermo’s website claims that their authentication is done in “real-time.” To quote Alex Frost, a VP at Sermo:
One of the components of the system, and one of the powerful concepts in this being a safe community for interaction, is that we built a real-time authentication and credentialing system. If you are an MD, you can gain access to the system by answering a few challenge questions, and we will verify who you are.
This is so incorrect, it’s hard to know where to start. To use words like “authentication” and “credentialing” without referring to information tokens that due diligence would expect only that person to have is misleading at best, outright lying at worst. When the festering abscess that is Sermo’s security model was cut open and exposed by Medgadget’s scalpel, “Sermaphrodites” circled the wagons to protect Sermo! Rather than appreciating this potential save to their community by forcing the issue to the forefront (and calling for Sermo’s CEO, Daniel Palestrant MD, to explain how this could be allowed to happen while sitting on so much money), the Sermo physicians saw the intervention as a “how-to” for druggies to get DEA#s (news flash: this doesn’t affect the street wino any more than it affects the white-collar drug seeker, who already knew all this), calling Medgadget’s authors out and turning on their own. However, another blogger independently showed that Sermo could easily be penetrated by anyone, and published the results. Yet there was no riot on this psychologist’s blog. Sermo physicians threatened to report Medgadget’s authors to their respective state medical boards (for what, publishing public information?!?) and called for advertisers to pull their support, making me wonder whether the overly strong reaction was as much about a fellow physician breaking the “good-ol’-boy fraternity mentality” as about what was actually disclosed.
This was a known issue (by Sermo’s own admission) and they did nothing about it. However, this post isn’t about the drama above. Unlike many of the flamethrowers and trolls out there, I actually have a framework for a solution. I’m not an MBA, and I’m not looking to quit medical school and form a company, so this has nothing to do with being anti-Sermo in concept. I am passionate about information security and how it relates to secure electronic physician-patient communication. I couldn’t care less about Sermo as a company or a site; like I said above, why would I care about a site I can’t even join? I’ve joined these threads online out of a genuine interest in the underlying technology, and what I see in Sermo’s gross security mismanagement is a threat to physicians’ trust in its implementation and use.
Some background on me: I was all but ready to sit for my CISSP exam in information security when my father, who had end-stage liver disease, became continually and critically ill. I had already been accepted into medical school, but rather than work up until the day I’d leave, I decided to take those months to help my family and my dad (who eventually–thank God–had a transplant and is doing quite well). In systems administration, I was one of the first Red Hat Certified Engineers (in fact, I took my exam on their Raleigh campus because they hadn’t yet begun to outsource it) and I hold two Sun Solaris SA certifications. I mention all of this only to give readers an idea of my standing as a serious computer professional. In my few attempts to discuss these matters on comment boards so far, people see “medical student,” and it screams naive 20-something. I’m nothing of the sort, especially in this field.
What follows is going to be long and technical (I’ll do my best to make it as painless as possible), because some groundwork is necessary to understand the key concepts first. You’ve been warned…if you’re still interested, let’s go!
PART I: Digital Signatures

“Digital signatures” in the security world does not mean a scanned image of a paper signature. While this is indeed “digital,” it is laughably easy to forge and offers no more guarantee than some jackhole running off with a physical, rubber signature stamp. A truly digitally signed document must meet some basic criteria:
1. The signer is indisputably involved. To properly sign a document/file/message digitally, intervention is required–namely a passphrase against a cryptographic key. Therefore, there is no “rubber stamping” in this arena, a la a nurse stamping a prescription pad or “verbal orders” that were allegedly never given.
2. The signer is indisputably who they claim to be. This is done by prior verification/escrow of the cryptographic key and the foreknowledge that the identity cannot mathematically be forged. When the signature occurs, the exact time and date become part of the signature. Taken as a whole, #1 and #2 provide the principle of “non-repudiation”: the inability of the signer to “back out” by saying they didn’t mean it, or that it didn’t happen at the specified date/time, etc.
3. Document integrity. The contents are guaranteed to be tamper-proof–there is no retroactive changing of anything. The document as a whole undergoes a one-way hash algorithm, a fingerprint of sorts, and the alteration of a single digit, character, or space renders the hash invalid. To illustrate what I mean, the hash algorithm MD5, when applied to the text of the Preamble of the US Constitution, returns a short hexadecimal fingerprint.
This computational output is a string of only 32 hexadecimal digits, so there is no way I can take this short string and reconstruct the original “We the People…” text. This is why it’s called a “one-way” hash. In fact, I can take the entire Constitution–the entire Library of Congress, even–and generate a similarly short, but different, unique hash. This is why I use a fingerprint analogy: you can’t extrapolate a fingerprint to make a person, but the “mark” left behind definitely identifies where it came from. Change the document one iota and you change the fingerprint; have a different fingerprint in hand, and you know it was not from the same, unaltered document. This is essential.
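Any modern language can compute such a fingerprint in a line or two. Here is a minimal Python sketch (the sample text is just an illustration; note that MD5 is nowadays considered broken for serious security work, which is why newer signatures use SHA-256):

```python
import hashlib

# MD5 reduces any input, however large, to a 32-hex-digit fingerprint.
text = b"We the People of the United States"
fingerprint = hashlib.md5(text).hexdigest()
print(fingerprint)       # 32 hexadecimal characters
print(len(fingerprint))  # 32

# Alter a single character (lowercase 'p') and the fingerprint
# changes completely -- that's the tamper-evidence property.
altered = hashlib.md5(b"We the people of the United States").hexdigest()
print(fingerprint == altered)  # False
```

The same `hashlib` module provides `sha256()` and friends; the calling pattern is identical.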
PART II: Public Key Cryptography
Ever since there has been a need to keep information secret, there have been methods of doing so. The ancient Spartans used the scytale: a strip of papyrus was wound around a staff of a particular circumference, the message was written linearly along the staff, and other characters filled the gaps afterwards. If the papyrus was intercepted, it was useless without a staff corresponding to the right helical turn length. Julius Caesar used a frameshifted alphabet, where A corresponded to, say, N, B to O, C to P, etc. Unless you knew the offset, the message was gibberish. Of course, there’s the story of the “Enigma machine” of WWII, and now with fancy computers the possibilities are endless–both to create new cryptosystems and to run “brute-force attacks” to break them. But that’s another book, actually.
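Caesar’s frameshifted alphabet is simple enough to sketch in a few lines of Python (the offset of 13 here is arbitrary; Caesar himself reportedly used 3):

```python
def caesar(text: str, offset: int) -> str:
    """Shift each letter by `offset` positions, wrapping around Z."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + offset) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

secret = caesar("ATTACK AT DAWN", 13)
print(secret)               # NGGNPX NG QNJA
print(caesar(secret, -13))  # ATTACK AT DAWN
```

Decrypting is just shifting back by the same offset, which is exactly why the scheme collapses the moment the offset is known.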
The point here is that all the examples above illustrate “symmetric key” encryption: the same key used to encrypt is used to decrypt. The encrypted result can be iron-clad, but obtain the key and you’re done for. For example, if the other party got hold of that special staff, all bets were off. So if securing the encrypting key is all-important, how do you encrypt things on the wild west of the Internet without sending that all-important key over insecure lines? Enter public-key, or “asymmetric,” cryptography.
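The defining property of symmetric encryption, one key for both directions, can be illustrated with a toy XOR cipher in Python (illustration only; never use something this weak for real secrets):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying it twice with the same key returns the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"papyrus-staff"
ciphertext = xor_cipher(b"meet me at the forum", key)
plaintext = xor_cipher(ciphertext, key)  # the SAME key decrypts
print(plaintext)  # b'meet me at the forum'

# With the wrong key, the output is gibberish -- but obtain the
# right key and all bets are off, just like with the staff.
print(xor_cipher(ciphertext, b"wrong-key") == plaintext)  # False
```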
Asymmetric keys mean that I have two keys–one public, one private–that were generated simultaneously and are mathematically, inextricably interconnected. The secret key stays safe with me, but my public key can be broadcast anywhere. If someone wants to send an encrypted message to me, they don’t use some super-secret device; they encrypt it with my PUBLIC key, and only my SECRET key can decrypt it. My secret key is safe, secured physically (my computer) and digitally (by passphrase). My public key, on the other hand, is downloadable and free for the world to use. If you tried viewing the key from the previous link, you saw “ASCII armor”: binary key or encrypted “ciphertext” data transmitted as plain letters and numbers. This makes it platform-neutral and easy to embed in email, chats, Sermo posts (oops, I’m foreshadowing…bad Rico!).
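The public/private relationship itself can be sketched with textbook RSA using toy-sized primes in Python (real keys are thousands of bits long and use padding schemes; this is purely illustrative):

```python
# Textbook RSA with tiny primes -- illustration only, never use in practice.
p, q = 61, 53
n = p * q                # 3233, the modulus shared by both keys
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent (coprime with phi)
d = pow(e, -1, phi)      # private exponent: modular inverse (Python 3.8+)

message = 65                      # any number smaller than n
ciphertext = pow(message, e, n)   # anyone can encrypt with the PUBLIC key
recovered = pow(ciphertext, d, n) # only the PRIVATE key decrypts
print(recovered == message)       # True

# Signing is the mirror image: transform with the PRIVATE key,
# and anyone holding the PUBLIC key can verify.
signature = pow(message, d, n)
print(pow(signature, e, n) == message)  # True
```

The two operations in the last block are exactly the encrypt-versus-sign distinction that PGP and GPG build upon.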
The two most common public-key systems used for Internet communication are PGP (“Pretty Good Privacy”–now a commercial enterprise, though its creator, Phil Zimmermann, originally released it as free software) and its free, open-source counterpart, GPG (“GNU Privacy Guard”). To see all of this in action before you get too lost in the background and theory, a digitally signed message is shown below. The original, as I wrote it, is the English text from “This” to “keyring;” everything that envelops it above and below came from the signature process. The process here was simply invoking GPG to sign the text; after prompting me for my passphrase to unlock my secret key, GPG produced this output:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

This is an example of a signed message. Above, the hash algorithm should be shown (SHA256) so that the recipient can verify with that same algorithm that every character in this message has arrived, unaltered. Moreover, although you can't tell by the gibberish below, this is also digitally signed with my private GPG key, and this is verifiable if you have my public GPG key on your keyring.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (Darwin)

iD8DBQFHAAjdozJz1Dh2WKURCKu3AKDC2WQfSMxhhW382wsslrBDNiF+/QCfa026
4gPie5pNTyXN5RFMCDej3dA=
=7DOW
-----END PGP SIGNATURE-----
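For the curious, producing and checking a clearsigned message like the one above takes just two GnuPG commands (this assumes a keypair has already been generated, e.g. with gpg --gen-key, and message.txt is a hypothetical file name):

```shell
# Sign: prompts for the passphrase protecting the secret key, then
# writes message.txt.asc containing the clearsigned output.
gpg --clearsign message.txt

# Verify: recomputes the hash over the message text and checks the
# signature against the signer's public key on the local keyring.
gpg --verify message.txt.asc
```

If even one character of message.txt.asc is altered after signing, the verify step reports a BAD signature.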
PART III: Web of Trust
All the PGP/GPG stuff is well and good except when you consider that the ID in the key is a name and email address–hardly something that can’t be forged. What prevents me from generating a keypair that corresponds to “George W. Bush <email@example.com>” and sending/signing emails and messages pretending to be Dubya? Absolutely nothing. This is solved by a model called the “web of trust.” Let’s say there are two people, Bob and Alice (in security, it’s always “Bob and Alice,” don’t ask), who know each other in real life. Bob can get Alice to “sign” his public key such that when others see his public key, they see Alice’s signature there, too. Alice agrees to sign only because she can attest that Bob really is who he says he is, and vice versa. When “Carol,” Alice’s friend, sends Bob a signed email, Bob has no idea if she’s really a friend of Alice’s. However, if he sees that Alice–a person whom he knows and trusts–has signed the key Carol used for that email, Bob can reasonably assume that Carol’s identity is credible. If there are two people among Carol’s signers whom Bob personally knows (and whose digital public keys he also has), then Bob has much better assurance of Carol’s identity.
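Bob’s decision can be sketched as a simple set intersection in Python: is the key in question signed by enough people Bob already trusts? (The names, keyring contents, and threshold below are all hypothetical.)

```python
# Hypothetical keyring: maps a key's owner to the set of people who
# have signed that key after verifying its owner in real life.
signatures = {
    "Alice": {"Bob", "Dave"},
    "Carol": {"Alice", "Dave"},
    "Mallory": set(),  # nobody has vouched for this key
}

def is_credible(key_owner: str, trusted: set, threshold: int = 1) -> bool:
    """A key is credible if at least `threshold` of its signers are
    people the verifier already knows and trusts."""
    return len(signatures.get(key_owner, set()) & trusted) >= threshold

bobs_trusted = {"Alice", "Dave"}  # people Bob has met; he holds their keys
print(is_credible("Carol", bobs_trusted))               # True: Alice signed
print(is_credible("Carol", bobs_trusted, threshold=2))  # True: two known signers
print(is_credible("Mallory", bobs_trusted))             # False: no vouches
```

Real PGP trust computation is more nuanced (signatures carry trust levels, and trust can chain), but the core check is this intersection.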
Now, if the content of the communication is “Hey, nice to meet you,” then all of this is rather silly. However, if the communication contains a request that requires a commitment of time, energy, or money, then the bar is raised. Now zoom to the present situation, where the communication is a physician requesting information on a patient; there is a legal component to consider as well. The more people who gather to voluntarily sign each other’s keys after verifying identity, the stronger this web becomes and the more it can be trusted.
The explosion of the Internet “to the masses” is exactly what made this model fail. When the bulk of Internet users were those affiliated with universities, colleges, the government, and specialty technology companies, you had a select enough group to do this with. Ten years later, when disgruntled radicals, convicted felons, and my grandma all have broadband, the idea of “voluntary trust” among like-minded individuals is laughable.
PART IV: Of Triangles and Tradeoffs
The following graphic is the classic “security triangle” (there are many variations), which shows that you can’t completely have all three at once: secure, cheap, and easy to use.
In order to make a system secure, it’s going to cost money and it’s going to have some hassle associated with it. Whether it’s having to remember passwords or carry a swipe badge in the case of physical security, there is always a component of inconvenience. In order to make things secure but easier to use, you’re going to have to sink proportionally more money into it. Using the PGP vs. GPG example above: GPG is free, but using it is not intuitive. There are graphical “wrappers” and plugins for email/chat clients that let you mouse-click this or that, but most are kludgy, and there is no consistent interface at all. PGP, on the other hand, uses 95% of the same underlying technology, but as a polished corporate product it provides stable, reliable solutions ranging from personal software to enterprise tools that secure entire infrastructures. This obviously is not going to come cheap, but it’s a hell of a lot easier to use and deploy. Accepting that you can’t “have it all” is essential in managing both users’ and managers’ expectations regarding security solutions.
Sermo sacrificed security to make a system that was cheap and easy to use, or more to the point, easy to sign up. With all the VC money that Sermo had, this is inexcusable.
PART V: Wherein I actually get to the point
While the Web of Trust model above could not be sustained on the Internet at large, it works beautifully among a tight-knit community–like physicians on a social network! Doctors are a naturally suspicious, overly cautious, and fiercely protective group. Especially when “signing one’s name” to something, physicians in these overly litigious times understandably need a lot of reassurance. I say, “PERFECT!” What better self-policing model than to harness that suspicion, that scrutiny, to ensure a secure, physician-only social/medical network? The ever-present sentiment of “I went through hell-training, gave up 10 years after college for shit wages…” doesn’t lend itself well to being “spied” on by slimeball, used-car-salesman drug-rep types posing as doctors to listen in on discussions and report back to their Mother Ship.
It’s self-policing in every way. Here, it doesn’t matter if you have MD, DO, PhD, PharmD, etc. after your name; it matters whether you have other people’s digital signatures attached to your own. If some newbie comes in claiming to be such-and-such and doesn’t have much in the way of vouching signatures, they’ll naturally be less trusted, perhaps not included in certain forums until they’re vouched for–just like in real life. In cyberspace, people have disconnected expectations of human behavior, getting upset that some forum doesn’t treat them as one of its own after two postings, but here in “meatspace,” doesn’t it work the same way? Go to one or two meetings by yourself, and you might get a lukewarm reception. Go with 2-3 “regulars” who introduce you around, and you’re going to have a much different, much more rewarding experience. If this is going to be a social network–one with the requisite security where members are truly vetted–then it’s going to have the same issues, concerns, and dynamics of ANY social network, as far as the people dynamic is concerned.
But let’s be honest. Sermo isn’t trying to create a social network for physicians, much less trying to make it a secure place beyond what it needs to stay operational. Looking at the Sermo graphics and marketing propaganda, you’d think it was trying to make Facebook for physicians. The real point of Sermo is to make money for its stakeholders, which it’s poised to do, hand over fist. Sermo’s free membership and ad-free content are paid for by revenue from “clients” who either pay a subscription fee to monitor discussions in nonspecific fashion or pay a large sum to have a question put out there (e.g., Merck asks, “If Vioxx were to come back on the market, would you prescribe it?”–Sermo just made $50,000 for a yes/no poll). Sermo claims that it only shares aggregate data and that no personally identifying information is shared with third parties, but how much can you trust a company that has already shown such disgustingly lax security practices?
Everything I’ve outlined above is a framework, a skeleton on which one can build a better, more secure way of communicating. Sermo could have implemented any of this, and still can, at any time. I used PGP/GPG as examples only; there are many implementations of the cryptographic and security principles behind them. I say build a new network, one that from the ground up is a collaborative effort of physicians, not the money-making vision of a single physician. Build a network that is truly socially policed, where members are vetted according to agreed-upon standards, and where discussion can take place in appropriate forums with reasonable assurance that the information will not be shared with others in any fashion, aggregate or not. The security technology is there to use however desired. It can be a zero-cost solution with a steeper learning/usage curve or a paid-for solution that makes life easier for everyone; the community will decide what’s best. Build a social network based on a meritocracy, where the members who give the most get the most decision-making authority, rather than an autocracy that doles out $75 to a poor resident for answering a poll question that Big Pharma paid tens of thousands for. Have complete transparency in business practices, privacy statements, etc.
Just don’t build another Sermo.