This post results from the project “MedSec” within the Munich Cyber Security Program (MCSP). The MCSP is a cooperation between Champlain College and ComCode (Germany), and this project focuses on cybersecurity topics for medical devices and medical services.
This week I got to dive into classifying a medical device provided to me and compiling the security regulations that apply to it. The device first had to be classified under the device definitions for the European Union and the United States. After this was done, I proceeded to apply the necessary guidelines and regulations to it.
To start the classification process, I first had to look at the device and determine its purpose. This involved examining the medical condition the device attempts to treat and how the device seeks to combat that condition. Once I found that it actively tried to improve the life of someone living with this condition, I then had to determine what the risk of the device failing would be. Would it hurt the user significantly? Could it kill them? Or would a failure simply be an inconvenience that doesn’t harm the user? I determined that the device would not harm the user significantly if it failed, which made it low risk to the user. I then used the classification scales set forth in the MDR and 21 CFR by the EU and the FDA, respectively. This led me to classify the device as Software as a Medical Device (SaMD) in a lower risk class. I then turned to the security requirements for the device itself and for the systems that the company offering the device has control over. Because the device is accessible through a web application portal, this includes systems such as the servers and the application itself.
The next step is figuring out what controls the vendor needs to apply and providing a layout of those controls to the client. The first thing I found while looking at the MDR guidelines was that priority is given to the security and integrity of the information collected by the application and software. This typically means ensuring that data transmitted from the web portal to the server is encrypted to industry standards, so that it is not revealed to anyone not meant to see it and session information such as IDs and tokens is never sent in cleartext. There is also an access control component, ensuring that the data on the app and the data used by the software can only be accessed by the patient or an authorized medical professional; the regulations state this explicitly. These security measures extend to the servers receiving the information: they must have sufficient logging and be penetration tested so that vulnerabilities found in the wild can be, and are, dealt with. In addition, all actions have to be logged so that, should an incident occur, an investigator or forensic analyst can get an accurate picture of what happened. These are just a few of the controls that not only have to be implemented at the beginning of the device’s lifecycle, but have to be continuously tested, updated, and maintained by the vendor.
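To make two of these controls concrete, here is a minimal Python sketch of how access control and audit logging might look in a portal like the one described. This is purely illustrative: the role names, record structure, and hash-chained log are my own assumptions, not details of the actual vendor's system.

```python
import hashlib
import json
from datetime import datetime, timezone

# Assumed roles for the portal; the real system's roles may differ.
AUTHORIZED_ROLES = {"patient", "clinician"}

def is_authorized(user_role: str, user_id: str, record_owner: str) -> bool:
    """A patient may only read their own record; a clinician may read any."""
    if user_role == "clinician":
        return True
    return user_role == "patient" and user_id == record_owner

class AuditLog:
    """Append-only log in which each entry includes a hash of the previous
    entry, so tampering with history is detectable by a forensic analyst."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # sentinel for the first entry

    def record(self, user_id: str, action: str, resource: str) -> dict:
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user_id,
            "action": action,
            "resource": resource,
            "prev": self._last_hash,  # chains this entry to the one before
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)
        return entry
```

Every access decision would also be recorded through `AuditLog.record`, so the log answers both "who could see this?" and "who actually did?" during an investigation.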
These requirements are mirrored in the US as well. However, the US maintains a stricter policy regarding software analysis and checking for poor programming practices, as well as the idea of failing securely. Essentially, this means that even when the application suffers a failure, an error in the code cannot be allowed to reveal information held within the application.
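A small sketch of what failing securely can look like in practice, assuming a generic request handler (the function names and response shape are my own, not from any regulation or the vendor's code): internal details go only to the server log, while the user gets a generic message with an opaque reference ID.

```python
import logging
import traceback
import uuid

logger = logging.getLogger("app")

def handle_request(handler, request):
    """Run a request handler and fail securely on any exception:
    no exception text, stack trace, or patient data in the response."""
    try:
        return {"status": 200, "body": handler(request)}
    except Exception:
        # Opaque ID lets support/forensics find the full server-side log entry.
        incident_id = uuid.uuid4().hex
        logger.error("incident %s\n%s", incident_id, traceback.format_exc())
        return {
            "status": 500,
            "body": f"An internal error occurred (ref: {incident_id}).",
        }
```

The key design choice is the split: the detailed traceback stays server-side for the investigator, and the client-facing message carries nothing an attacker could use to probe the application's internals.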
Next week, I will be continuing with this process and delving into the different stages of security implementation throughout a device’s lifecycle, and when each has to be considered for implementation. I may even get to do some security testing on the device itself and see how its security has been implemented.
Written By: Michael Verdi ’22 // Computer & Information Systems Security