Joe Sheffer is managing editor at AAMI. Email: jsheffer@aami.org

Eric Bergman, PhD, is director of human factors engineering for global home therapies at Fresenius Medical Care in Waltham, MA. Email: eric.bergman@fmc-na.com

Laura Chang, BS, is a systems usability engineer at DePuy Synthes, a Johnson & Johnson company, in Boston. Email: lchang8@its.jnj.com

Wayne Ho, MEng, is managing director of Healthcare Human Factors at Toronto General Hospital in Toronto, Canada. Email: wayne@humanfactors.ca

Desislava Ivanova is a senior systems usability engineer at DePuy Synthes, a Johnson & Johnson company, in New Brunswick, NJ. Email: divanova@its.jnj.com

Korey Johnson, MS, is managing director of Bold Insight in Chicago. Email: korey.johnson@boldinsight.com

Merrick Kossack is a research director of the Human Factors Research & Design team at Emergo by UL in Chicago. Email: merrick.kossack@ul.com

Joe Sheffer What's the general lay of the land when it comes to early use testing? Is it on the rise? Are we seeing any particular trends in terms of the types of testing?

Wayne Ho I have two perspectives. In general, I don't see early use testing as being on the rise because we're still seeing a lot of reliance on other approaches to get user data. Rather than early use data, we're seeing a lot of marketing focus groups and opinions from expert users instead of bringing in users for actual use testing. Marketing teams for many of the clients we work with tend to, in some cases, dominate the approach to early design development and don't necessarily leverage early use testing as much as we think they should. On the flip side, though, and on a more positive note, companies or manufacturers that have already bought into the value of early testing are doing it more and more. Certain clients that really believe in taking a user-centric approach to product design are doing it more and therefore seeing the value and a return on the investment.

Merrick Kossack Companies that are still relatively new to human factors and its application in medical device development seem to have this idea that early use testing is “nice to have.” For example, a manufacturer will do final human factors validation, but perhaps they don't understand the value in doing early use testing. Then, once they go through an initial project and see the results from their human factors validation work, they have a realization: “If we would have done early use testing, we could have discovered some of these issues beforehand. We could have saved some time, some money, some effort.”

As companies evolve and ascend the learning curve of applying human factors, they incorporate it more and start to realize the benefits. And as more companies and project teams experience the benefits of early use testing, they include it more often.

Eric Bergman The benefits are the key. Certainly, in my company, what I've seen is that as we've moved testing earlier and earlier and engaged with early use testing as new projects start, teams see the benefit, and that drives even earlier testing. Without getting into specifics, we recently did usability testing on a prototype device that was only partially functional. In the past, teams here might not have engaged in use testing that early, but we were able to get a lot of good feedback very early on and influence the next iteration of the prototype device. We're seeing more of that.

Korey Johnson I'm not going to say I disagree, but I have a different perspective on where I'm looking for the trends and improvement. I have seen a positive trend: there is more early use testing happening now than about seven to nine years ago. I completely agree with a lot of what has been said—that for manufacturers with less mature human factors practices, the motivation for human factors tends to start with (mis)interpreting the regulatory requirement as "conduct a validation study." As these human factors practices mature, the manufacturers realize that the requirement is really to demonstrate safe and effective use, and a validation study should be seen as one possible culmination of a robust human factors effort throughout product development.

There's been a sufficient amount of time since the Food and Drug Administration (FDA) really started regulating the application of human factors and usability engineering to the development of medical devices. As a result, in many cases, manufacturers have made the transition from viewing "human factors" and "validation testing" as synonymous to viewing human factors as a more coordinated effort (including more early use testing) throughout product development. Though again, this depends on the maturity of the human factors practice itself.

The second point I wanted to make is about discriminating between early use testing and early user research. I have seen a rise in the amount of early user research conducted by my clients, whether or not that research is supported by external resources. The difference between the two—early use testing and early user research—is whether you're actually looking at interaction with a prototype or just getting out and doing exploratory research in the operational context of use of your end users, which can really inform new products or the next generation of products. I have seen a rise in that as well.

Merrick Kossack It seems as if these trends are indeed increasing. However, is early use testing being done enough?

Korey Johnson That's a fair point. I would agree that, no—it's still not being done enough.

Eric Bergman I would agree with that as well.

Desi Ivanova Thus far, my human factors experience has been in the consumer group of J&J, which is where I'm working currently. We do a lot of internal early use testing, and we're able to do that because of our facilities and procedures. We're not necessarily going out to vendors to help us with formative testing, but we're still doing it, regardless, pretty early in the project.

Therefore, my question is: if you're gathering these thoughts on how industry is moving, are you also considering some of the internal efforts that might be going on within a company? Or are we talking more about formative and summative studies that are completed with a human factors vendor?

Korey Johnson I would say they both count in this discussion. That was kind of the point I was trying to make. As a consultancy, even if my teams were not doing much more of that than we used to (we actually are), I still see teams like yours that I can tell are doing more of it—whether through their documentation or just the end results of the products they're developing—and that are then much better positioned for formative and summative testing down the road.

Merrick Kossack There are definitely companies that are much more mature when it comes to their adoption of human factors practices. J&J, I would say, is one of those companies. I see companies such as yours and others out there that have adopted the practice of doing early use testing. They make it part of their development plans. However, for a bunch of newer companies and especially startups, it's not even on the radar. So we have a continuum of levels of adoption.

Eric Bergman This speaks to a fundamental point that Korey alluded to in his comments. It's the difference between taking a technology perspective and a use perspective. If you take a use perspective, you're trying to understand how people are going to use the technology and then everything flows out of that. Questions arise about what the requirements should be, and you're going to have to understand the use and the users to answer those questions.

On the other hand, if you're taking a technology perspective, it's all about categorizing the features and functions. You're not really thinking about the use first. These are two different foundational approaches to product design and development.

Korey Johnson One of the things that I dread hearing when I suggest doing some exploratory user research is, “I can't do any user research yet. I don't have a fully functional product.” That position is reflective of what Eric just said—it's more of a technology-based mindset as opposed to a use-based mindset.

Eric Bergman At a previous company, I went into the first meeting for a medical device project and I explained my role. I was the first person in a human factors role at the company, and the marketing lead said to me, “Well, we haven't finalized the requirements yet, and we don't have any functional prototypes to test. So you should come back in a few months.” I then had to explain, “No, I've arrived here at exactly the right time. I'm going to help you identify and finalize those requirements. And, by the way, we can test that nonworking, nonfunctional prototype. We can still test with that.” And indeed, that's what we did.

Joe Sheffer Are there additional barriers or hindrances to conducting early use testing of medical devices?

Korey Johnson I'm going to pull on a thread that Merrick laid down and talk about the maturity level of the different internal teams because that can be a huge hindrance. Some teams are very mature and have very close ties between their internal marketing teams and their user research, user experience, or human factors team. When that's the case—when those close ties are in place—then you see the siloes starting to come down and the stakeholders realize that they're doing very complementary things.

“Some teams are very mature and have very close ties between their internal marketing teams and their user research, user experience, or human factors team. When that's the case—when those close ties are in place—then you see the siloes starting to come down and the stakeholders realize that they're doing very complementary things.”

—Korey Johnson, managing director of Bold Insight in Chicago

When that crosstalk does not exist and you have those siloes up, it's very hard for a human factors practitioner or a user experience practitioner to break into that “early research” that is owned by the marketing team. There seems to be a perception that they're doing the exact same thing, when in fact there are very different mindsets when you're talking about someone who's focused on understanding behavior and use of a product to improve the design as opposed to someone focused on how to best market the product.

Merrick Kossack On a related note, I think the culture of the organization, as well as its structure and who holds more of the power—whether that's marketing, development, or another group—can have a significant influence. In some cases, we've seen organizations led by a marketing team that might include clinicians who could be "users" of the device. Because they are physicians or nurses themselves, they sometimes suffer from what might be considered a "paradox of expertise." They might believe, incorrectly, that they completely understand how something is going to be used in the real world. That overconfidence can lead them to assume something is going to work correctly, rather than determining it through early use testing.

Joe Sheffer We've mentioned the value of considering early use–related research beyond use testing. Can you please elaborate on the value of conducting this early research, even before a functional device or simply a mockup exists?

Eric Bergman There's no requirement to have a working prototype. Sometimes we're just trying to understand a current practice, and that can go beyond learning from ethnographic research. You can do a study, sometimes with just paper artifacts. For example, I once led a study on a procedure related to insulin dosing using the paper documents that physicians were using to teach the procedure to their patients. You can ask participants to try to perform tasks using these pieces of paper, and you get a very useful window into their understanding of the fundamental concepts required to perform the tasks. In that way, you can identify potential requirements for an automated system that could support those tasks—and that testing can happen before a prototype or mockup exists.

Merrick Kossack Based on my experience, when people hear the word "testing" or the phrases "use testing" or "usability testing," they're immediately drawn to the idea of having to recruit participants, bring them in, and spend the time, money, and energy to run through these processes. These could be long sessions. I've been involved in discussions where it's just about educating people that there are other types of evaluations. I often try to steer clear of the word "testing" to let them know that you don't need a fully functional prototype—you don't even need a partially functional prototype—to do "testing" or an early evaluation. You can still bring in others and do cognitive walkthroughs, expert reviews, and other evaluations such as those. A lot of times it comes down to education.

Eric Bergman That's a great point. Saying “evaluation” or “study” instead of “testing” is a good way to go.

Korey Johnson Agreed. I try not to use "testing" unless I really mean formative usability testing or summative usability testing. I face the same challenge with terminology.

Joe Sheffer How can performing early use research or testing reduce development time or time to market?

Korey Johnson Not to go too far to one end of the spectrum, but having to redo your human factors validation study later represents a significant amount of cost as well as a delay. Whether you're talking about actual early use testing or exploratory research to better understand your intended user population, those are the types of research that prevent you from getting to that place nobody wants to be—a human factors validation study with lots of unanticipated use errors, where you then have to go back and figure out how to remedy those issues.

Merrick Kossack When you think about the FDA, they don't want to just know that you did testing. It's not meant to be a checkbox exercise. They want to look at your submission and know how your design has evolved over time through efforts such as early use testing or other formative types of evaluations. They want to know that you've discovered potential use-related issues and the risks that go with them, that you've mitigated them, and that you've tested the effectiveness of those mitigations.

By not doing the early use evaluations, it can leave the FDA wondering, "How did this design evolve?" And if the agency is wondering about something, they're going to request additional information, which means delays on getting the submission approved. In that respect, the longer it takes for your submission to go through the agency, the longer it's going to take to get the product to market.

Wayne Ho Echoing a similar story, we've had clients come in whose first experience of use testing was at the summative stage because they hadn't necessarily recognized the value of early use testing or perhaps had misinterpreted the FDA guidance to mean that human factors validation, or summative testing, was the only testing that you really needed to do. One such client reached the summative testing stage, and a use error occurred—one that was not easily justifiable and really did affect safety.

“By not doing the early use evaluations, it can leave the FDA wondering, ‘How did this design evolve?’ And if the agency is wondering about something, they're going to request additional information, which means delays on getting the submission approved.”

—Merrick Kossack, a research director of the Human Factors Research & Design team at Emergo by UL in Chicago

That's something that could have been captured very easily in early use testing—without question. In that particular case, they had to go back and make a change in their software. The issue had to be mitigated to achieve regulatory approval. They had to find their development team, which had already moved on to another project, and bring them back to this old project, fix the problem, then test it and verify it again—performing all the types of software verification that need to happen—then do the supplemental validation tests.

In the end, that cost the company several million dollars. Purely from a cost perspective, they're fortunate that they were able to afford that; other companies wouldn't be able to. But you can certainly imagine that if they had just done some early use testing, they could have resolved that problem well in advance. From a cost perspective alone, you can see the value of early use testing.

Merrick Kossack I've been on projects where new interface elements were being introduced and the teams suspected there might be some risks involved. The formative testing definitely showed those risks existed and that they were serious. You could see people's eyes get really big when they could actually observe this happening early on. There might have been some swearing under the breath, perhaps, or consternation about having to go back to the drawing board. But you could also see a little bit of relief that they discovered it early rather than at the very end, where the potential impact, as Wayne said, could be tremendous in terms of time and money.

Korey Johnson Even removing safety and risk from the equation, sometimes the best thing that can happen from early use assessments is determining that a product should not continue down the development funnel, that it isn't going to work for the people for whom it's intended. This happened recently for one of my clients, and it was something as simple as a container. I won't go into too much detail, but it involved the novel design of this simple container that's supposed to be used in a certain way to dispense medication. When it came time to assess that with the intended use population, that use population could not and did not want to use that container the way it had been designed. So the container gets repurposed for something else and a more appropriate design is pursued. And sometimes that's the best outcome.

“If you have an asteroid headed toward earth, if we detect it way out in the far reaches of the solar system, all you have to do is give it a little nudge, it turns out, and it will miss. If you wait until it's about to hit us, it's too late to make a difference—and that's the issue.”

—Eric Bergman, director of human factors engineering for global home therapies at Fresenius Medical Care in Waltham, MA

Eric Bergman Obviously, it's important that we get products to market in a timely manner. But if we get the wrong products to market, the company is not served. The patients and customers are not served. Therefore, in addition to taking products that are not going to be successful off the table, making that course adjustment early provides an opportunity to get the right product to market.

The best analogy I could make is if you have an asteroid headed toward earth, if we detect it way out in the far reaches of the solar system, all you have to do is give it a little nudge, it turns out, and it will miss. If you wait until it's about to hit us, it's too late to make a difference—and that's the issue. We can make a big difference early on with a small investment. And later on, the investment is far too big or it's just impossible.

Laura Chang Another point is that when you perform testing early in the process and you're working in an interdisciplinary setting, it also gets the engineers, designers, and other teams to be more aware of usability and to keep it in their line of sight. It really affects the rest of the development of a product.

Joe Sheffer What roles can healthcare technology management (HTM) professionals, such as biomedical engineers, clinical engineers, and biomedical equipment technicians, working in healthcare facilities play in early use evaluations?

Wayne Ho From our experience, we often rely on the expertise of HTM professionals as we perform usability testing, whether it's early use testing or later testing that's formative or summative in nature. Their input is valuable because aspects that often haven't been considered up to that point include how devices actually need to be implemented in the hospitals themselves and the key role that biomedical engineers and clinical engineers play in the whole life cycle of the product. So we often include them to provide some of that technical support and input.

We can use the example of infusion pumps to describe how devices are used in the context of the clinical environment. We're very focused on pumps delivering medication safely, but one of the big challenges that hospitals such as ours might experience is a shortage of pumps. How is the flow of pumps going from a centralized clinical engineering department to the appropriate wards and departments?

Including that information in the overall context is really important for patient safety and for usability around the whole life cycle of the product. A device needs to be safe and easy to use at the bedside. However, if a pump isn't available, clean, or functioning, that's also incredibly dangerous and problematic from a safety perspective. So we have certainly included our biomedical engineering and clinical engineering staff when we're dealing with aspects such as reprocessing or management of device availability.

Merrick Kossack It really depends on the device and the nature of the tests. Some devices have certain workflows or use scenarios that are performed by HTM professionals, for example. And they're very critical functions, whether it's updating software libraries on pumps or doing other types of equipment configurations. And when those can have a significant associated use risk, then it becomes vital to include HTM professionals in evaluations.

You can also think of a biomed as a fly on the wall—they observe exactly how devices are being used in actual environments by actual users. They can have important insights into possible use-related issues that manufacturers or designers never even considered because, as much as they might do early user research, they don't see actual usage day in and day out. Biomeds can provide some of that anecdotal evidence or information to help guide design.

Eric Bergman A related aspect is that HTM professionals are the ones who will receive devices back from users with some kind of complaint of a functional problem, such as "This is not accepting a prescription entry properly" or "The setup isn't working right." Then, they might check the device and say, "It's functional. It's fine. It works." In that gap between the user complaining that something is not working and the technician saying, "No, it does work. I can't find anything wrong with it"—that's where we often identify use issues. Because what's happening there is that the highly trained technician doesn't find a problem but the user is finding a problem. These kinds of unconfirmed, claimed-to-be-technical failures that aren't actually technical failures often point to use issues.

Wayne Ho It's interesting because we actually did research on that maybe five years ago. We published a paper, and it's exactly as Eric just described.1 

We were finding that devices were being sent down to our clinical engineers. They would do their appropriate checks, everything would work exactly as intended by the manufacturer from a technical perspective, and therefore the devices were labeled as "no fault found." Then, we did research providing evidence that "no fault found" cases actually correlate with and are indicative of usability issues. That's exactly what our research team found.

Joe Sheffer I know that many of you are involved in developing standards. Are there any noteworthy standards-related developments—or, more generally speaking, emerging approaches—to early use evaluations that we can highlight?

Korey Johnson Although it's not related to any standard in particular, there's a conversation I've been having more frequently at standards committee meetings and other venues. It's closely related to the topic we just discussed—not just the extent to which biomeds, for example, can play a role in understanding use-related errors, but also the extent to which hospital administrators and clinical administrators can play a role in establishing some type of use-related evaluation program at their site. Whether it's internal or external, that evaluation program would allow for a programmatic way of assessing use-related issues and clinical workflow–related issues in their hospital.

Increasingly, I've been seeing the human factors field challenged with being able to get into those environments to conduct research to better understand what really happens in hospitals when these types of devices are used. While I see a little bit of encouraging collaboration at that level, I think we still need a lot more. We have a long way to go before human factors researchers have the level of access needed in clinical environments.

“In that gap between the user complaining that something is not working and the technician saying, ‘No, it does work. I can't find anything wrong with it’—that's where we often identify use issues.”

—Eric Bergman

Wayne Ho We happen to be fortunate in that we're a human factors consultancy that's embedded in Toronto General Hospital. But it's really hard to get that access in general. In the next couple weeks, we're going to perform some early use testing with three-dimensional (3D) models for surgical tools in our operating room (OR). That's not necessarily easy to do. We're going to have surgeons who are going to be right in the OR doing it. And these are nonfunctional 3D-printed prototypes that we're going to be using.

It's great to be able to get that access to clinicians, to nurses, to OR nurse managers, and so forth, as well as access to other facilities, in order to gain their expertise in cocreation workshops and other types of activities in which we're going to be involved.

“We're doing a lot more in-context types of focus groups and other types of approaches around a mix of ethnography with the cognitive walkthroughs—all at once and together. Although there's nothing super novel about these approaches, the main point is bringing all those things together and trying to do everything earlier.”

—Wayne Ho, managing director of Healthcare Human Factors at Toronto General Hospital

We also are trying to look at other approaches that might be considered in early use testing. But at the same time, what we're really trying to focus on is understanding the appropriate workflows because we're considering introducing new or novel devices or significant changes to traditional devices and we don't know how they're actually going to be used.

How do we get out into the environments and figure out what the surgeon will actually do with what they're given? Today, we're doing a lot more of these cocreation workshops. We're doing a lot more in-context types of focus groups and other types of approaches around a mix of ethnography with the cognitive walkthroughs—all at once and together. Although there's nothing super novel about these approaches, the main point is bringing all those things together and trying to do everything earlier. We're finding that to be really helpful.

Merrick Kossack There are existing sources that people can use to learn how human factors fits into the design development process. AAMI TIR59, on integrating human factors into design controls, outlines and describes early formative types of evaluations.2 In addition, TIR51 deals with contextual inquiry,3 which is a form of early user research—again, getting away from the word "testing."

Wayne Ho We may need to consider whether we need a report that helps people take better advantage of early use testing. TIR59 does cover quite a bit of it, and some of the contextual inquiry work (TIR51) does as well. Overall, however, it's worth exploring whether additional guidance is needed, perhaps from AAMI, on the value of early use testing and how to do it.

1. Flewwelling CJ, Easty AC, Vicente KJ, Cafazzo JA. The use of fault reporting of medical equipment to identify latent design flaws. J Biomed Inform. 2014;51:80–5.

2. AAMI TIR59:2017. Integrating human factors into design controls. Arlington, VA: Association for the Advancement of Medical Instrumentation.

3. AAMI TIR51:2014/(R)2017. Human factors engineering—Guidance for contextual inquiry. Arlington, VA: Association for the Advancement of Medical Instrumentation.