There’s a lot of talk these days about personalized medicine. This emerging field promises diagnoses and treatments tailored to an individual’s specific medical problems, rather than today’s generalized treatments, which target a wide range of diseases with average effectiveness and a broad range of side effects. Personalized medicine offers more specific treatment of disease, improved outcomes, faster recovery, and fewer side effects.
With the advent of new diagnostic technologies such as next-generation sequencing (NGS), experimental personalized treatments are being developed for widespread diseases like cancer, severe allergies, and bacterial and viral infections. Many of these treatments use genomic sequencing to identify the precise source of disease, then engineer therapies to target biomarkers unique to the disease state. While promising, these techniques require enormous amounts of computation to yield actionable results. For institutions with access to this technology, the specificity of these treatments is improving steadily, and the research and experimental methodologies are becoming ever more promising.
Laws and restrictions are barriers
So, why isn’t personalized medicine developing faster, and why aren’t many of these new methods available to the general public? Part of the answer lies in the stringency of privacy laws: restrictions on where protected health information (PHI) can be stored, the formats in which it can be accessed, who can access it, and where it can be analyzed. These requirements are so locked down that most researchers can’t analyze the data on the existing high performance computing (HPC) infrastructures available to them, because those computational environments don’t meet the compliance standards required by the Health Insurance Portability and Accountability Act (HIPAA) and other privacy laws governing the health industry.
The reality is that these regulations are not defined in a specific manner, especially with regard to the technological solutions used to process and interpret medical information. The vast majority of the law addresses security restrictions on electronic health record (EHR) systems and the infrastructure that houses these databases. Because the laws are written subjectively, individual auditors and compliance officers often interpret compliance requirements in very different ways.
What this means for personalized medicine is that hospitals, universities, government agencies, and corporations all have to err on the side of extreme security to satisfy the privacy laws as currently defined. In most IT organizations operating under such restrictions, decisions are made to prioritize security over performance, to the great detriment of research. Anyone who follows this emerging field knows that modern laboratory technologies produce enormous amounts of data that must be transmitted to storage systems and analyzed on compute infrastructures before any interpretation can be made.
Transferring those data from the instruments, through many firewalls, to encrypted storage, and then running analyses on isolated compute equipment is slow and prohibitive to the progress of research, not to mention expensive for the organization, which must duplicate infrastructure to meet security needs. In my estimation, this extreme interpretation of security requirements is the holdup for personalized medicine. Don’t get me wrong: privacy is important and needs to be protected, but there are likely better ways to preserve it that embrace modern technology practices without squelching the productivity of researchers under the thumb of extreme security.
Needed: common reference architectures
What the industry needs is to develop common reference architectures that use flexible, dynamic virtual infrastructures to protect information as it flows from place to place, lands on remote storage, is analyzed, and is then returned to its safe location, all while moving and analyzing the data at the best possible speeds.
Better data transfer utilities that encrypt data in transit using the encryption features built into modern processors, along with faster networking practices that temporarily define isolated virtual networks through software-defined networking (SDN), will help pave the way toward wide-scale application of personalized medicine techniques. These technologies, combined with proven reference designs that auditors and compliance officers can consult, will help dig medical environments out of the dark ages and place them squarely in the 21st century, affording them the best that research computing has to offer at affordable prices on institutionally owned, shared infrastructures. With the technology barrier resolved, and privacy preserved, personalized medicine could begin to progress toward wide-scale implementation.
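As one small illustration of encrypting data in transit with processor-accelerated ciphers, the sketch below (Python, using the standard library’s `ssl` module) builds a TLS client context that requires a modern protocol version and prefers AES-GCM cipher suites, which most current CPUs accelerate in hardware via AES-NI instructions. The function name and cipher string are illustrative assumptions on my part, not a complete or prescribed transfer utility.

```python
import ssl

def make_phi_transfer_context() -> ssl.SSLContext:
    """Illustrative TLS context for moving sensitive data between sites."""
    ctx = ssl.create_default_context()            # verifies server certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    # Prefer forward-secret key exchange with AES-GCM, which AES-NI accelerates.
    # (TLS 1.3 cipher suites are managed separately by the library.)
    ctx.set_ciphers("ECDHE+AESGCM")
    return ctx

ctx = make_phi_transfer_context()
# At least one hardware-friendly AES-GCM suite is negotiable.
assert any("GCM" in c["name"] for c in ctx.get_ciphers())
```

Because the cipher work is offloaded to CPU instructions, encryption of this kind adds little overhead to bulk transfers, which is the point: privacy protection need not come at the cost of speed.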
What questions do you have?