Does every design output have to trace to a design input requirement?
Having been pulled into this discussion many times over the years, I have seen this constraint commonly placed on teams when leadership's experience comes from simpler designs. This approach leads to weak traceability and exponentially increases the number of traces in complex designs (e.g., software, complex PLDs/ASICs).
When I attended FDA Design Control training back in 1998, the FDA representative explained that traceability from design inputs to design outputs was required; traceability from design outputs back up was not. When I later worked on a team implementing SEI CMM Level 3, bottom-up traceability was considered a business decision, not a requirement.
When I worked in avionics, the industry standard at the time, DO-178B, allowed the specification of derived requirements. On critical systems, traceability from the design outputs up to the design input requirements was performed to catch undocumented features, dead code, and deactivated code. Each derived requirement, being a design decision, had to be analyzed for safety implications.
Design input requirements are required to be complete, correct, unambiguous, and free of conflicts with other requirements. Derived requirements, then, are design decisions that support existing feature requirements; they should not introduce new features.
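To make the value of the bottom-up check concrete, here is a minimal sketch of a trace-matrix query (all requirement and output names are hypothetical, not from any specific tool): any design output with no upward trace to an input or documented derived requirement is a candidate undocumented feature, dead code, or deactivated code.

```python
# Illustrative trace-matrix check: each design output should trace up
# to at least one design input requirement or documented design decision.
inputs = {"REQ-1", "REQ-2"}      # design input requirements (hypothetical)
derived = {"DD-1"}               # documented derived requirements
traces = {                       # design output -> items it traces up to
    "SRC-main": {"REQ-1"},
    "SRC-comms": {"REQ-2", "DD-1"},
    "SRC-debug": set(),          # no upward trace at all
}

def untraced_outputs(traces, inputs, derived):
    """Return outputs with no trace to any input or derived requirement.

    These are candidates for undocumented features, dead code, or
    deactivated code, and need analysis before release.
    """
    known = inputs | derived
    return sorted(out for out, ups in traces.items() if not ups & known)

print(untraced_outputs(traces, inputs, derived))  # ['SRC-debug']
```

The point is not the tooling; it is that the query only works if derived requirements are recorded somewhere, which is exactly what the avionics practice above forced.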
Referring back to the IEEE Standard for Software Requirements Specifications, IEEE Std 830-1993 - yes, it is an ancient version, but this is not a new concept - it recommends that a requirements specification not normally specify design items. The examples it provides are specific to software engineering, but the same concept applies to all disciplines.
From a project management perspective, design decisions are made to strategically support the success of the project. If my medical device has multiple processors, I may decide to use a common architecture to minimize the development time for common functions. The selection of a processor is a hardware-to-software design constraint. When the decision is made, this design constraint impacts the development environment, the common software architecture, and the safety mitigations that may need to be in place to ensure the processors' errata do not cause safety or efficacy issues.
With this line of thought, design decisions would be made during the architectural steps of the development process. Architectural activities would include establishing the software-to-software interfaces, hardware-to-software interfaces, budgets for power allocation, memory allocation, and timing, and the human-to-system interfaces. The following is a list of typical design decisions:
- Development environment
- Communications buses and communications data dictionaries/interface design descriptions
- Aesthetics unrelated to safety and efficacy
- Minor physical characteristics that do not impact center of gravity or overall size
- Memory maps
These design decisions would trace down to the detailed design specifications, and changing these decisions could result in a significant amount of re-verification and revalidation. If a specification traces up to one of these decisions, then its design input is a design constraint. These constraints may be driven by business requirements (cost, market, user culture, regulatory standards, etc.) instead of safety and efficacy.
Weak traceability has a cost. Developers have a lot on their plate and schedules are tighter than ever. Solid traceability is a useful tool. Weak traceability is an exercise in unnecessary documentation that prevents users from seeing the value of solid traceability.
Our development teams have had many conversations discussing what the intended use of our medical device is and what the user needs for it are. The following is a list of sources used to gain an understanding of intended use and user needs.
Looking at an example of a predicate device, a high-speed pneumatic vitreous cutter, the FDA provides the following description:
For ophthalmic vitreous cutters, IEC 80601-2-58 includes a similar definition for vitrectomy.
Vitreous cutters can be used in the anterior segment and/or the posterior segment.
If the intended use is limited to one segment or the other, this should be specified.
(c) Design input. Each manufacturer shall establish and maintain procedures to ensure that the design requirements relating to a device are appropriate and address the intended use of the device, including the needs of the user and patient.
With regard to the needs of the user, a surgeon may need the cutter to perform specific tasks during the procedure. These could include core vitreous removal, vitreous shaving, membrane peeling, membrane cutting, etc.
Anything specified within the scope of the intended use or user needs will need to be validated.
Additional source material includes:
Levels of specificity for therapeutic (including preventive) medical devices:
1. Identification of function (e.g., cut)
2. Identification of tissue type (e.g., soft tissues)
3. Identification of an organ system (e.g., GI tract) or identification of a specific organ (e.g., liver)
4. Identification of a particular disease entity (e.g., resection of hepatic metastases) or target
5. Identification of an effect on clinical outcome (e.g., use of the medical device improves the rate of durable complete remissions with chemotherapy)
Sec. 801.4 Meaning of intended uses.
The words intended uses or words of similar import in 801.5, 801.119, and 801.122 refer to the objective intent of the persons legally responsible for the labeling of devices. The intent is determined by such persons' expressions or may be shown by the circumstances surrounding the distribution of the article. This objective intent may, for example, be shown by labeling claims, advertising matter, or oral or written statements by such persons or their representatives. It may be shown by the circumstances that the article is, with the knowledge of such persons or their representatives, offered and used for a purpose for which it is neither labeled nor advertised. The intended uses of an article may change after it has been introduced into interstate commerce by its manufacturer. If, for example, a packer, distributor, or seller intends an article for different uses than those intended by the person from whom he received the devices, such packer, distributor, or seller is required to supply adequate labeling in accordance with the new intended uses. But if a manufacturer knows, or has knowledge of facts that would give him notice that a device introduced into interstate commerce by him is to be used for conditions, purposes, or uses other than the ones for which he offers it, he is required to provide adequate labeling for such a device which accords with such other uses to which the article is to be put.
We have had the challenge that as we release single use accessories for our devices, the requirements for labeling become broader and broader. Our devices and accessories are sold worldwide and specific regions can add specific requirements for labeling. For example, China and Japan require over-labeling and are very particular about controlling the process.
As we have a large number of new projects in progress, we are proactively defining our labeling requirements. For our disposables, we have requirements for labeling the primary packaging (containing sterile components), the secondary packaging (boxes containing six pouches that go on the shelf), and the shipper (the container used for transporting product to our customers). We also have labeling in our instructions for use, on the shippers containing replacement modules, and on shippers for upgrade kits.
The standards defining the symbols we use include EN ISO 15223:2012 and ISO 7000:2014. EN 980 was withdrawn, and its requirements were incorporated into EN ISO 15223. The MDD and FDA give us additional labeling requirements. Part of this week's exercise is to challenge every instance of labeling to understand the basis for its existence. Both the MDD and FDA require that we identify the manufacturer's address. The FDA's requirement is limited to the legal name, city, state, and zip code. The MDD also requires that we identify our European Community Representative, and this address requires a street address. At some point, we added the manufacturing site address, and this is one of the requirements whose source we are researching. We also have "Made in:" in English and French for the country of manufacture; we think this one is a Canadian regulatory requirement. The FDA has new guidance for "does not contain natural rubber latex" labeling, so we have a trace for this one.
Over the years, we have heard different things from different sources. An additional detail is added here and there, so a periodic scrub is healthy. Over the last couple of years, we've added English descriptions under symbols. Where we have text describing the product, the descriptions are translated into 20-30 languages. On other products, we've found that a picture of the accessory has been acceptable.
The warnings on our packaging trace up to our failure modes and effects analyses. For example, if using a box cutter to open a shipper could damage the sterile barrier of our primary packaging, we add a warning not to use a box cutter. On the primary packaging, we have a warning not to use its contents if the package is damaged. This risk is specific to shippers that are taped down the center; other shippers open from the side, so using a box cutter does not create a hazardous situation.
Our current goal is to define the rules of this labeling game and standardize the labeling for our new product. We'll have traces to regulatory sources and to the standards for symbols. Teams that follow us on the next projects will have a clear and complete map of the 'because' for each label on these products. If the standards and regulations change, we will have a tool for understanding the impact.
We have a project risk of missing required labeling that results in rework late in the project. The project risk of unnecessary labeling is cost (e.g. translations). Based on history, last minute surprises have been the norm and this is a good way of breaking that bad habit.
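The labeling trace map described above can be sketched as a simple two-way check (the label elements and source names below are hypothetical stand-ins, not our actual matrix): elements with no regulatory basis are candidates for the cost risk, and applicable requirements with no element are the late-rework risk.

```python
# Illustrative labeling trace map: every label element should trace to a
# regulatory or standards basis, and every applicable requirement should
# be covered by some element. Names are hypothetical examples.
label_traces = {
    "manufacturer address": ["FDA 21 CFR 801.1", "MDD Annex I"],
    "EC REP address": ["MDD Annex I"],
    "made in (EN/FR)": [],     # basis still under investigation
}
required_sources = {"FDA 21 CFR 801.1", "MDD Annex I", "EN ISO 15223"}

# Elements with no basis: unnecessary-labeling cost risk (translations).
unsourced = sorted(e for e, srcs in label_traces.items() if not srcs)

# Requirements with no element: missing-labeling rework risk.
covered = {s for srcs in label_traces.values() for s in srcs}
uncovered = sorted(required_sources - covered)

print(unsourced)   # ['made in (EN/FR)']
print(uncovered)   # ['EN ISO 15223']
```

Running a check like this at each labeling review is what turns the trace map into the impact-analysis tool mentioned above, rather than a one-time document.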
During my career, I have worked mostly with companies delivering safety-critical products. Guess what... safety-critical or not, the method for working with people to ensure processes are accepted, adopted, used, and improved is the same.
Acceptance is the first step. The funny thing is that acceptance occurs at the senior leadership level, and the key is understanding "What needs to be accepted?" The business has a choice to make: is the process designed to manage the quality of work, to ensure the visibility of compliance, or both? Both is actually the path to success. In football, the rulebook establishes the policies, and the coach's playbook fits the team's capabilities to the rules. Incorporated in the rules are goals and constraints. Incorporated in the playbook are the coach's expectations for each member of the team. Using this structure, a corporation's policies need to establish the least number of rules needed to promote the success of the business. A business unit would add the specific requirements needed for the success of the teams within the business unit. The coaches would be responsible for defining processes that meet the business's rules and ensure each team's best performance and outcomes.
As senior leadership establishes the culture, listening and creativity are needed to help them pick the path to success. At one company, I identified all of the constraints on our business from a regulatory perspective (worldwide medical device regulations, minus overlaps) and printed them out: 46 pages. I printed out our policy documents: 450 pages. For a five-minute elevator speech, this proved to be a great visual when discussing whether or not our policies were overly constraining. Every situation is different.
Adoption occurs when the teams have buy-in on the value of the process (idealist). This is not to say everyone is happy about following processes (realist). The foundation of the processes should be the current accepted practices the teams follow habitually (pragmatist). The ones that scrutinize every detail (analysts) help ensure completeness and correctness. Keep in mind that a number of creative team members (integrators) will argue with the implementation no matter what is implemented. Integrators are valuable because they challenge the norms and drive continuous improvement. A great leader for establishing processes has a nice balance of all of these perspectives when making decisions.
For a process to be used, the decision leaders are your marketing staff: they ensure the process fits their team. The decision leader may not be the manager, who may be appointed and/or pays the salaries; the decision leader is the expert that everyone looks to for a head nod meaning: makes sense, let's do this. For usefulness, decision leaders must be identified and included during process definition. After deployment, the decision leader is the one who helps keep the train on the rails and will identify where fixes are needed. What is the difference between an integrator and a decision leader? The integrator is a bit more theoretical, a systemic thinker. A decision leader is someone who gets their hands dirty and wants to ensure the teams they lead are not burdened with work that is a waste of their life's energy.
For continuous improvement, the integrators and the decision leaders are the canaries in the coal mine. They perceive both the potential issues and the real issues with the processes. A retired quality engineer once told me that a process that sits is a process that no one is following. If a process has not been improved in the last two years, the leadership needs to examine how processes are improved and remove barriers such as bureaucracy or complacency.
If you are deploying processes, your perceptions, intuition and persistence are your most valuable qualities. If the process deployments are not successful, revisit your perceptions and intuitions and be persistent. Process improvement requires windows of opportunity so be ready with your next play and base it on the current situation.
Having learned about test readiness reviews while working with avionics development teams, I found that the next medical device company had no similar practice in place. In my first week at the new company, we found that test protocols were executed against undocumented configurations and that a large amount of testing had been performed using test procedures missing one approval signature. Retroactively recognizing these issues and having to redo the test activities is a harsh thing to do to hard workers, especially when these expectations had not been previously communicated to the team.
During that project, a significant amount of time and effort was wasted in verification testing due to similar issues. The largest case occurred when a large set of test procedures were not fully reviewed prior to approval. Issues were identified in the protocols during execution and during the review and dispositioning of the results.
Our next project manager did not have the time to repeat these missteps. As we were easing the team into this practice, an abbreviated test readiness review method was adopted. Its use resulted in a 99.9% reduction in unacceptable test results; the only time test results had to be discarded was when the TRR method was not followed. The test readiness reviews typically take 15 minutes to perform and approve. A test readiness review is not a meeting; it is a review of the transition criteria needed for valid design verification and design validation activities, captured in a checklist.
The scope of a test readiness review includes the following:
- Has the software configuration under test been established then loaded on the units under test?
- Has the hardware configuration under test been established and installed on the units under test?
- Are the units under test under configuration control?
- For disposables, have they been sterilized as appropriate for the type of testing?
For the design control documents, source code, schematics and drawings for this configuration:
- If applicable, have the anomalies, for the scope of testing, been resolved?
- Have the documents, code, schematics and drawings been reviewed?
- Have the documents, code, schematics and drawings been approved?
- If tools automate the test activities, have they been validated?
- Is the calibration for the support equipment current?
- Have testers been trained or otherwise qualified, as appropriate, for the scope of testing?
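Since the TRR is a checklist of transition criteria rather than a meeting, it can be sketched as a simple gate (items paraphrased from the list above; the function and data names are illustrative): testing does not start until every applicable item is answered yes.

```python
# Sketch of a test readiness review as a go/no-go transition gate.
# Items paraphrase the checklist above; names are illustrative only.
TRR_ITEMS = [
    "software configuration established and loaded",
    "hardware configuration established and installed",
    "units under test under configuration control",
    "anomalies in scope resolved (if applicable)",
    "design documents, code, schematics, drawings reviewed and approved",
    "automated test tools validated",
    "support equipment calibration current",
    "testers trained/qualified for scope",
]

def ready_for_test(results):
    """results maps a checklist item to True/False.

    Returns (ready, open_items); ready is True only when every
    item has been affirmatively answered.
    """
    open_items = [item for item in TRR_ITEMS if not results.get(item, False)]
    return (not open_items, open_items)

results = {item: True for item in TRR_ITEMS}
results["support equipment calibration current"] = False
ok, open_items = ready_for_test(results)
print(ok, open_items)  # False ['support equipment calibration current']
```

The value of treating the gate this way is that a missed item (an out-of-calibration fixture, an unsigned procedure) is caught in the 15-minute review, not during disposition of the results.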
This proactive approach has helped communicate the expectations to the team, ensure we are ready to test, and avoid wasting time negotiating what to do with non-compliant test results.