With 10 years of experience in the MedTech field, clinical data experts and co-founders of SMART-TRIAL, Páll Jóhannesson and Jón Bergsteinsson have watched medical device companies trip over the same pitfalls time after time. The consequence is that data collection becomes expensive, time-consuming and complex.
Therefore, Páll Jóhannesson and Jón Bergsteinsson have decided to share their insights on the most common pitfalls they have encountered to date, and how device manufacturers can avoid them. This blog is a summary of the key insights they put forward during their joint webinar with Greenlight Guru. Watch it on-demand here.
Device studies are often small and require data that is not normally collected for drugs, because devices are typically applied in clinical practice by someone interacting with them. Clinical data collection and clinical operations for devices are conducted across the whole life cycle of a device (from early stage, to later stages for market approval, to post-market). Data collection in a clinical context is gathered through various means in different projects. It’s not always about a clinical trial or study; clinical data can be gathered for medical devices in many different ways, and even by different individuals (healthcare providers, physicians, investigators, patients).
When it comes to clinical data capture for medical devices & diagnostics, another challenge is the ever-changing focus of regulators on clinical data. New regulations in Europe (MDR, IVDR), combined with an increased focus on clinical data by the FDA, have raised the amount and quality of clinical data needed not only for market access but also to keep a device on the market. Read more about MDR compliance in our practical guide for device manufacturers here.
Standards that cover clinical practice in clinical investigations for medical devices have recently been updated. ISO 14155:2020 places increased requirements on clinical operations, both for pre- and post-market activities. So, if you are to collect any clinical information about your device or diagnostic tool in a post-market setting, you might need an electronic data capture solution to support your activities. Due to these changes in standards and regulations, conducting studies solely on paper will no longer be good enough. See how world-leading hearing aid manufacturer Oticon switched from paper-based data collection and management to EDC.
Another huge challenge for devices & diagnostics is the fact that buyers and procurement experts in the healthcare sector are placing much more focus on providing value-based care. This requires manufacturers to provide documentation or justification for why one should select one device over another. Being able to produce the clinical data required to support that choice is now crucial not only for market access but also for selling the product.
All in all, it is more and more apparent that medical device & diagnostics manufacturers need to collect an increasing amount of clinical data, and that’s where an electronic data capture platform comes in. Let us show you how it works!
Electronic data capture has started to become the norm in the medical device space. However, the market is very much saturated with solutions that were created for a different industry. Traditional electronic solutions designed to support and facilitate clinical operations are limited in that they're designed for trials and clinical operations that run on an outdated standard, e.g., "phase 1-4" trials for pharmaceuticals.
When it comes to devices & diagnostics, that setup of clinical development stages is simply not comparable. Devices & diagnostics go through a completely different clinical development pathway than pharmaceuticals. This means that the solutions on offer are not a good fit, and manufacturers risk unnecessarily complicating their data collection process even further, which often increases the resource requirements and decreases clinical data quality.
The data format standards often used in pharmaceuticals are also completely different from those used for devices. We can see this already with the FDA and the European authorities, who do not require device data to be formatted the way pharmaceutical data is, because it’s simply not as relevant. There is an unfounded myth in the industry that the FDA requires the same clinical data standard for device submissions as for pharmaceuticals, but this is not the case.
All this makes the point that you should not be selecting solutions that are designed for another industry.
Most clinical software solutions nowadays are designed to serve larger pharmaceutical companies. Yet 97% of the medical device & diagnostics industry consists of small and medium-sized enterprises. These enterprises do not have the same budget, manpower, resources or even experience as big pharmaceutical companies to buy the right solutions and maintain them as expected. This puts an excessive strain on operations, which can become very costly because these solutions simply require too many resources.
Many of these traditional solutions have been designed for other parts of the life science industry, and the documents they provide are not suited for medical devices. There are different documentation requirements set forth for device operations, e.g., in ISO 14155. All this means that you will have to do more work if you choose a solution that has not been adjusted to these specific requirements.
These pitfalls are ones we have identified throughout our 10 years of work with over 250 medical device studies conducted all over the world. While working in the MedTech industry and within the clinical operations space, we kept encountering the same 7 pitfalls. So, we decided to share them to help companies of all sizes make better choices that result in better products.
Instead of collecting clinical data on paper, we strongly recommend you go digital. Companies that choose the digital approach collect and monitor data much more efficiently. They also save time and resources when it comes to complying with ISO 14155:2020. By using a digital solution, you can monitor your clinical data remotely and get an overview of what’s going on in your study in real time. In contrast, paper is not as reliable: it can get lost or damaged, and it is a difficult medium to collaborate with. Obviously, there are more constraints to only using paper to collect data in a clinical setup. At some point in the process, all that data must be transferred to a computer, so you might as well start digital from the beginning and save yourself all that time.
We have very often seen early-stage companies that are embarking on their first clinical study, but also larger companies with a massive budget, ending up collecting more data than needed. In doing so, they greatly increase the workload on clinical staff, e.g., clinical investigators, coordinators, nurses, etc., who might then feel more frustrated and less motivated. And of course, the greater the data set that you have, the more time you need to allocate. So, the solution to this is to start at the end:
The most important thing when planning any clinical data collection project is to define your hypothesis.
If you have not established a scientific hypothesis, you will have a hard time collecting the right data. On the other hand, if you have defined your hypothesis but you’re still caught up in collecting every possible data point around it, try to minimize the collection to the exact requirements of the statistical analysis plan and the report you want to generate from it.
So, the second step is usually to define what exactly you want to present. What kind of statistics or graphs would you like your data to generate so that you can present it to an authority in Europe, to the FDA for review, or to anyone else who might look at your data?
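As a purely illustrative aside: once the hypothesis and the planned analysis are fixed, they also pin down how much data you actually need. The sketch below shows a generic two-sample power calculation in Python (using statsmodels); the effect size, significance level, and power are placeholder assumptions, not recommendations, and the code is not related to SMART-TRIAL.

```python
# Hypothetical sketch: letting the pre-specified analysis define how much data is needed.
# Effect size, alpha, and power below are placeholder assumptions only.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.5,          # assumed standardized difference on the primary endpoint
    alpha=0.05,               # two-sided significance level
    power=0.8,                # desired statistical power
    alternative="two-sided",
)
print(f"Subjects needed per arm: {n_per_arm:.0f}")
# Variables that do not feed this analysis or the planned report only add
# workload for the sites (see pitfall #2).
```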
The next step is to define your data collection plan. This plan is not a document or a template that you can find and download online. It is simply a way to define how you want to collect your data, which can be done in many different ways. We offer templates in SMART-TRIAL that are suited to help you with exactly this. In essence, this plan defines which questions you need to ask to create the reports you have decided on, how to best formulate those questions, and how they all fit together in a digital setup (e.g., an electronic case report form inside an EDC software).
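To make that concrete, here is a hypothetical sketch of how the questions in such a plan could be written down as a structured form definition, roughly the kind of input an eCRF builder works from. The field names, types, and example form are illustrative assumptions, not a SMART-TRIAL format.

```python
# Hypothetical eCRF form definition; field names and types are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Field:
    name: str                                # variable name used in exports and analysis
    prompt: str                              # question shown to whoever enters the data
    field_type: str                          # e.g., "number", "choice", "date"
    unit: Optional[str] = None
    choices: Optional[tuple[str, ...]] = None
    required: bool = True

baseline_form = [
    Field("sys_bp", "Systolic blood pressure", "number", unit="mmHg"),
    Field("dia_bp", "Diastolic blood pressure", "number", unit="mmHg"),
    Field("device_deployed", "Was the device deployed successfully?", "choice",
          choices=("Yes", "No")),
]
# Every field should trace back to the statistical analysis plan or the planned
# report; anything that doesn't is a candidate for removal (pitfall #2).
```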
The last step is to design your data collection activity. This is closely linked to your protocol design, where you define the number of visits or timepoints within an activity such as a clinical investigation. Here, you identify at which timepoints you want to collect which parameters, using what methods, and so on. What we have often seen is that people begin at this step and, after having collected their data, try to go back and define a hypothesis that works. That approach is of course not scientific. And even when something has been defined up front, we often see protocols get adjusted and changed along the way because they were not designed with the end in mind.
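Continuing the same hypothetical sketch, the data collection activity itself can be thought of as a mapping from timepoints to the forms collected at each one, which is essentially what the protocol's visit schedule describes. The visit names, day offsets, and windows below are illustrative assumptions only.

```python
# Hypothetical visit schedule: which forms are collected at which timepoints.
# Visit names, day offsets, and windows are illustrative assumptions only.
schedule = {
    "Baseline":         {"day": 0,  "window_days": 0, "forms": ["demographics", "baseline_form"]},
    "Procedure":        {"day": 1,  "window_days": 1, "forms": ["device_use", "adverse_events"]},
    "30-day follow-up": {"day": 30, "window_days": 5, "forms": ["outcomes", "patient_questionnaire"]},
}

for visit, spec in schedule.items():
    print(f"{visit}: day {spec['day']} ±{spec['window_days']} -> {', '.join(spec['forms'])}")
```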
There is a Danish publication about clinical trial protocols submitted to the ethics committee which found that over 80% of all protocols submitted over a 10-year period were amended more than 3 times during the study period. That is simply because they were not designed well enough. While you can challenge this and say "you cannot know everything beforehand", it is still very good practice to follow these steps before you collect any data.
We can offer a real case example from a study that ran in SMART-TRIAL a while back. Two weeks into the study, numerous patients had been enrolled and the team at the site could not keep up with entering data, because the design was not well thought out and they were collecting too much data. We ended up stopping the study and re-designing the whole thing to fix this.
Jón: A data collection plan is part of your clinical investigation protocol. It covers everything from your sample size and basic study design down to data collection requirements, visit windows, etc. Together these make up your data collection plan, along with, for example, the data management plan, which covers how you want to export the data, how often, who is responsible for that, and whether there are any standards the data should be presented in.
Páll: The reason we say "avoid just collecting data that is interesting" is because it puts extra workload on the clinicians. It will increase the cost of your trial, and it might end up making the key elements of your study fail because it's too heavy. They can't do all the work, or they are not as motivated because they're having to do things that they can't necessarily see as relevant for what they're doing in the study. So again, it's not that you can't do some of those things, but be careful not to end up with 50% of your data covering the key endpoints of your trial and the other 50% being data that is maybe useful down the line. I recommend you split this up and do two different projects at an investigational level. The second would be more of a soft research project on the things that might be interesting. You can also do some of those things post-market. You can still do post-market follow-up studies; in Europe especially, they're required for class III devices. You can use those activities to collect data that you did not necessarily need for your certification, but that might be interesting going forward.
Páll: Basically, yes. You still need to keep your eyes on two things. Depending on what your claims are for your device on safety and performance, you need to make sure that you're meeting those endpoints. You can support those endpoints with clinical data from your CRFs, but we advise that you do not collect data that is merely interesting, so to speak. We see companies end up collecting unnecessary data with the reasoning that "we're already there anyway, so we'll add 30 different variables that might at some point become interesting to us," even though those variables are not a requirement right now for maintaining the pathway to market or making the claims they want to make when they go to market. So, keep an eye on what you need to do first and stick to that.
Jón: Another thing about getting the data in there, which is related not only to designing a nice CRF or data collection form or not having too much data, has a lot to do with motivation too. I know personally, when I first got into this industry, I was working as data staff on a clinical trial, and what frustrated me was the lack of appreciation I was getting. The same goes for clinical staff. I have been at clinical sites sitting next to a nurse who was entering data and just kept rolling her eyes because it wasn't motivating enough. Others are head over heels about the study that you're driving because it's so much fun. So, it also has a lot to do with how you communicate with the people at the site. Are you motivating them well enough? Are they even interested enough? Have they been shown the value that they're providing to the study? Everything impacts your data collection. Yes, cutting down on data points can help, but like Páll just mentioned, there is a thin line there, and that is one of the things we tend to be experts in and share with our customers. When they design their studies, we tend to provide feedback on that part so that people can take it in and then decide what they want to do with it.
Páll: You don't have to scrap what you already did. It is common to submit protocol amendments where you request authorization from the authorities governing your study to make changes going forward. When you do so, try not to make too many of them. Try to gather further evidence, proactively seek out any other issues that might be related to your study, and make a single amendment rather than multiple amendments. That's because amendments can affect the way the clinicians work with your device, or the data that they or the patients are giving; they change some of the workflows and endpoints. Again, you don't have to scrap what you already have. It's important to keep the 'old' data while collecting new data so that you can always document what the previous and new versions are. And test before you release the new version of your study to clinicians and patients.
As device manufacturers, your focus is understandably on your device. You perform pre- and post-market activities where you study how your device performs on safety and efficacy parameters. However, when you define how you want to run your study, you need to remember that at the center of it all is the individual. This individual wants to participate and give their data, and most importantly wants to have a better life or gain some benefit from using, or being treated with, your device. So, it is a good idea to include Patient Reported Data (PRO/ePRO) in your study.
Regulatory bodies and competent authorities place ever greater importance on actually having PRO data for your device. Although this is being pushed much more these days, we still see too many companies forget to include it. Patient Reported Outcome data is not very difficult to get, as it is data that patients are usually very motivated to provide. And it can be very beneficial later on, when you need to go to market or document safety and performance aspects of your device. Learn more about how SMART-TRIAL’s integrated ePRO makes PRO collection easy.
We are well aware that a lot of medical device companies do not have much clinical experience in-house and rely on their key opinion leaders (KOLs) when designing studies. However, we must remember that it is not always about the clinical evidence. It is not always about just listening to the clinical investigators who recommend specific endpoints to focus on. For example, if your device measures blood pressure, it is very important to measure the blood pressure itself, because you will most likely compare it against the clinical standard. But you need to remember that putting a blood pressure measuring device on the market is not complex.
There are thousands and thousands of devices out there on the market, and that can make it more difficult for you to make yours unique. This doesn't relate only to equivalent or predicate devices; it's also relevant for novel technologies. That is because, again, the focus on clinical data is not only coming from regulators, but also from the buyers. So, thinking ahead and including questions about factors that can impact market access and health economics, questions about the usability of the device, or feedback from staff on their experience, might be much more valuable than focusing only on clinical performance and outcomes.
Going back to pitfall #2, we must remember not to collect too much data. So, be careful: there is a thin line between too much data and too little data. While there is no rule of thumb for when to stop and when you have collected enough, the rule is simply to think a little further ahead and go beyond the clinical evidence. Think about market access strategies, sales, operations, and procurement, and whether you can include a minimal set of data collection parameters or outcomes in your study to support that strategy. Watch our free webinar on how to incorporate market access in your clinical strategy here.
Páll: I would say that, yes, the clinical evaluation can include a cost-benefit analysis. However, it is not required. You might still want to do it, or your device might be in an area where it will be required by buyers. So even though it's not necessarily required to obtain the approval to go to market, you're not going to be able to sell your device without evidence on cost effectiveness, cost utility or cost benefit. And so, this comes back to one of the pitfalls: look a little bit further ahead than only the clinical evidence. If you need other items than safety and performance to actually succeed, make sure to include them in the clinical evaluation report based on the clinical investigation.
When you are thinking about your study and how your device will be used in practice, you need to remember that your study design will be shaped by how your device is used. In other words, try to design your study so that entering data or giving feedback on the performance of the device becomes part of the natural flow of using that device. Don't put extra strain on the clinicians at the wrong timepoints or introduce extra steps that result in worse data. Also, keep in mind that not all sites are the same. There are differences between one clinic and another in terms of what they are used to doing and their work process. So, try to include your end users as much as you can when you're designing your study or planning how to do it. Otherwise, you might end up not getting the data you need.
And the solution to all this is to test, test, and test. We cannot stress the importance of testing your study with real users enough. Start testing studies internally with colleagues, try to test them with clinicians if they are involved in collecting the data, test with patients. Competent Authorities in Europe have started demanding that you involve patients when you define your eCRF (electronic Case Report Form), and that you involve patients every step of the way before you apply to the Ethics Committee.
While you are testing, you will realize where the hurdles to getting the quality of data you need lie, and where you place too much strain on clinicians or patients in getting the data in. This way you can identify the risks and mitigate them by smoothing out the data collection plan, or by collecting less data in this study and gathering the additional data in a follow-up study. If you consider this thoroughly, it doesn't have to be more expensive to do 2 separate studies instead of 1. It only requires correct planning and making sure your resources are spent wisely.
There is a real case where a device study was to be initiated at one site that had been used as a test site. This went very well, but when the study began, another site was added which had a different building layout. The surgery room was far away from the computers, and they didn't allow any tablets or computers in there. If anyone had to use SMART-TRIAL, they would need to leave the surgery room. There was also the issue that, because locks and access cards were not accessible, they couldn't randomize. So, the workflow clearly hadn't been thought through well enough, and this example shows how these simple day-to-day things can make or break your study protocol.
Páll: Having designed a study does not mean you can produce high-quality data, because that depends on several different factors. Quality of data can, for example, be defined by the type of variables you are collecting. So even though your endpoint might be very clear, that endpoint can be collected in different ways, and if the way you choose to collect it creates challenges for the data analysis, that can generate low-quality data. If the data has errors or hasn't been validated, or the form isn't designed well enough, the information that is put into the system might not produce the results that you're expecting. Data quality can be affected by, for example, typos, or simply by a lack of testing: the form you wanted people to answer gets forgotten because it was not placed at the right timepoint, or it isn't noticed within the system because it looks different than expected. You could have the greatest protocol ever, but there are so many factors involved in the data collection itself that can impact the quality of data.
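As a small, hypothetical illustration of the kind of edit checks that catch typos and implausible values at entry time, here is a sketch in plain Python; the field names and plausibility limits are assumptions for illustration, not rules taken from any specific EDC system.

```python
# Hypothetical edit checks flagging typos and implausible values at entry time.
# Field names and limits are illustrative assumptions only.
def check_blood_pressure(record: dict) -> list[str]:
    issues = []
    sys_bp, dia_bp = record.get("sys_bp"), record.get("dia_bp")
    if sys_bp is None or dia_bp is None:
        return ["Missing blood pressure value"]
    if not 60 <= sys_bp <= 260:
        issues.append(f"Systolic value {sys_bp} mmHg outside plausible range")
    if not 30 <= dia_bp <= 160:
        issues.append(f"Diastolic value {dia_bp} mmHg outside plausible range")
    if dia_bp >= sys_bp:
        issues.append("Diastolic reading is not below systolic reading")
    return issues

# A likely typo (1200 instead of 120) is flagged before it reaches the analysis.
print(check_blood_pressure({"sys_bp": 1200, "dia_bp": 80}))
```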
Jón: First of all, you need to justify what the state of the art is. What that really means is that state of the art is not something you pull off the shelf or find in a publication; it is something you need to document and justify, with scientific justification, as what you consider to be the state of the art. That can be, for example, references to publications, references to your own clinical data from similar devices, or references to data sets coming from registries of other similar or equivalent devices. There are many ways you can justify state of the art. It is up to you to justify what you would consider equivalent performance and safety to the state of the art. Every single time I hear a question like that, and I hear a notified body's comments about it, or an experienced medical expert, they always say the same thing: it depends on you and how you justify it. And the reason for that is that the people who are looking at this, whether it is the FDA, competent authorities, or notified bodies, will be listening to you. They cannot listen to anything other than what you present and justify in your own documents. You are the experts in certain aspects, as we are experts in other aspects, and device manufacturers who produce, for example, technical files are the experts on their devices. The people who review these technical files are listeners, so it's up to you, or in this example the device manufacturers, to sell it to them.
Jón: I actually shared a post on LinkedIn about this a couple of months back. The definition of a registry differs a lot from one therapeutic area and one device to another. But in essence, a registry can either be a single observational-like study conducted by a manufacturer, or a set of many studies conducted by a manufacturer on, say, a portfolio of devices that share a similar outcome measure. A registry is really just a term for gathering observational data. So, whether that is done with an observational study, or a tool that produces case series, or in addition with some kind of survey tool that can help support certain outcomes, really comes down to the device itself. But the term registry is basically just a synonym for what I would consider observational data collection.
Another pitfall that we see with companies is "mixing data collection tools". This is probably one of the most prominent pitfalls we have encountered, because there are thousands upon thousands of software tools out there. In general, if people don't know which "correct" tool to use to solve their problem, they try to find some sort of solution themselves. That is how people end up using Excel for one type of data, e.g., case reports, case series or even feedback from clinicians. We often see this happening inside a single company, where one colleague uses Excel, and another colleague uses a survey tool to gather clinical experience in a PMS setup. Yet another colleague who's been running clinical operations doesn't have enough funds, so they're running everything on paper.
First of all, mixing up tools in this way is highly discouraged, because it often brings about chaos and extensive time spent trying to mitigate issues and migrate data. The solution is to define a standard operating procedure (SOP) for data gathering (which can be part of your QMS) where you specify that a specific software tool should be used. By doing this, you not only get a better overview, you also bring much of that control back and avoid the chaos. It also opens up the possibility of improving data quality in general, because the quality of clinical data in Excel vs. on paper vs. in a survey tool differs from one solution to another.
By having a common tool where you gather all operations in one place, you have the oversight that executives, investors, or board members need. One of the worst things to experience is receiving an email before a board meeting asking you to provide the latest clinical data you have collected. If one type of data is stored in Excel, another on paper or OneDrive, and yet another in some other tool, you're set up for a hard time.
Another advantage of having the data in one place is that it can enhance your regulatory compliance and simplify the QA part of collecting data. If you use multiple tools, you risk non-compliance, as it is much harder to keep up with 3-4 solutions all the time. Schedule a free, non-binding demo of SMART-TRIAL and see for yourself how it can ease your data collection and management process.
Too often we have seen MedTech companies that want to save time or cut corners, because they don't necessarily have the resources to invest in state-of-the-art tools or software, end up not finding the right solution and forgetting about Good Clinical Practice (GCP) or validation. They tend to go too fast and submit data that has been collected in a non-validated platform, which gets rejected by notified bodies or competent authorities because they cannot document how the platform used to collect the data was validated.
So, the solution to this is to go with compliance. When you are working with vendors, ask them to document how they can assist you in complying with ISO 14155, FDA 21 CFR Part 11 and any other relevant regulatory requirements. If they cannot do that, it's best to choose another vendor that can. Remember that a solution in itself cannot be compliant. You can only measure compliance with, for example, ISO 14155 in the way that you work with a solution. Don't get fooled by someone saying that a solution on its own is compliant. Make sure that you work in a compliant way with all your solutions and have the necessary documentation in place for inspection.
There are frameworks and guidelines for both the Americas and Europe that tackle this specifically. They are designed to help companies produce the required documents showing that a software solution has been validated according to the requirements for GCP. It is better to be safe than sorry, and to require this documentation up-front instead of relying only on the vendor's statements.
SMART-TRIAL EDC facilitates compliance with ISO 14155 and FDA 21 CFR Part 11 by providing ready-to-use templates and a standard operating procedure (SOP) template that can assist study stakeholders in using SMART-TRIAL correctly. Contact us to find out how our EDC can help streamline your data collection efforts.
Páll: Yes. It looks mostly like the documentation you would expect to be making yourself. You know the documentation for medical devices: you need verification and validation reports, and they need to be signed and stamped. In most cases, when it comes to compliance, it needs to map out how that solution is compliant with a given set of regulatory requirements or standards. That mapping can be done in different ways: sometimes we see it done in an Excel spreadsheet, sometimes in a designed PDF or another kind of report. But you would expect that report to be fairly thorough and fairly easy to read when it comes to explaining how the tool facilitates compliance with a specific standard or regulation.
Jón: In section 7.8.3 of ISO 14155:2020 there is a bullet-point list that actually defines what documentation requirements a software solution needs to fulfill if you are going to use it for clinical data collection digitally. It goes through everything from verification and validation documentation to other facilitating documents, so you can look at that section specifically to get a feeling for what's required.
Jón: ISO 14155:2020 covers the requirements, from a good clinical practice standpoint, for what you need to fulfill or accomplish in terms of documentation when it comes to clinical data collection. Then there are other standards and requirements, such as data privacy regulations, that you also need to make sure are taken care of. There might be data standards required for data analysis: how are you going to analyze your particular outcome measures? There might be a specific way to do and structure that, because when the statistician receives the data, they may need to look at it in a certain way. There might even be data standards for things like coding adverse events, which can happen during a study or out on the market. Several standards are applicable. One of the things I've learned over my now slightly more than a decade in this industry is that none of the standards or regulations tell you what to do. They tell you what you need to include and present, but not how to do it. So, you are freer than you think to produce what you believe is compliant with what the regulations require.
Páll: In terms of templates, what you can find are templates that are more generic than others, e.g., for adverse event or safety reporting, or something like demographics. You are most likely also able to dig out templates for inclusion and exclusion criteria for specific therapeutic areas, and then amend those templates. They might not necessarily be directly transferable to your study, but you will most likely be able to find bits and pieces that you can pull into your studies.
Jón: Almost all regulatory bodies around the world require you to follow good clinical practice, and ISO 14155:2020, from the International Organization for Standardization, is often looked at as the gold standard for how you should do clinical investigations. That covers the United States, Japan, Canada, Europe, and many other countries. There are data protection rules applicable in both Europe and the US, such as the GDPR in Europe and various state acts in the US, but also HIPAA around hospitals for health information protection, and FDA 21 CFR Part 11 regarding e-signatures and the systems used to gather information in trials. But comparing the two (US, Europe), they're very similar when it comes to clinical data collection. They expect the same level of standard for data. The regulations themselves might be named differently, but they require the same quality level.
If you are interested in more questions that were asked during the webinar, you can access the Q&A document here.