Exercise files
Download this lesson’s related exercise files.
DFMEA Detection Rating.docx (60.9 KB)
DFMEA Detection Rating - Solution.docx (257.4 KB)
Quick reference
DFMEA Detection Rating
Detection is one of the three scoring categories of the Design FMEA. Detection scores the ability of the design and development process to identify the failure mode in a timely manner so that it can be mitigated if needed.
When to use
The scoring of an FMEA is step 5 in the Design FMEA process, and detection is normally scored last of the three categories. It is worth noting that the team normally completes all three scores for one failure mode before moving to the next failure mode, rather than doing all the severity scores, then all the occurrence scores, and finally all the detection scores.
Instructions
The detection scores are the least dependent upon the specific product being analyzed. These scores represent the application of the organization's normal product development methodology and design control processes to the product. As such, they evaluate the methodology and processes far more than the product itself.
For that reason, the sources of information are the product development process, the design control procedures, and the configuration management procedures. If the organization does not have those, then the project plan for testing, verification, and validation should be used. In addition, the team can use input from customers about their testing and installation processes, customer complaints from other product lines, and focus group feedback.
The tables below show the scoring criteria from the IEC 60812 standard.
Select the highest score that reflects the existing or planned process for use on the development of this product. The detection term is only for guidance; the criteria are what should be used when making your decision. Once the value is determined, enter it onto the Design FMEA form in the detection column.
Note: The colors are added to enhance the learning; they are not a required part of the analysis.
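As a minimal sketch of this selection rule, the detection rating can be thought of as the highest score among all criteria that describe the existing or planned process. The short criteria labels below paraphrase this lesson's descriptions, not the official IEC 60812 wording:

```python
# Hedged sketch: labels paraphrase this lesson's descriptions of the
# 1-10 detection scale; consult the IEC 60812 tables for exact wording.
DETECTION_LABELS = {
    10: "No design control; failure found only by the customer in use",
    9: "Very remote chance; weak or uncorrelated tests and analysis",
    8: "Remote chance; verification/validation only after design freeze",
    7: "Very low chance; post-freeze testing, but tested to failure",
    6: "Low chance; degradation testing, some requirements unknown",
    5: "Moderate chance; pre-freeze pass/fail validation testing",
    4: "Moderately high chance; pre-freeze testing to failure",
    3: "High chance; pre-freeze degradation measurement",
    2: "Very high chance; proven controls or correlated simulation",
    1: "Failure prevented by the design (Poka Yoke)",
}

def detection_rating(applicable_scores):
    """Select the highest (worst) score among all criteria that
    match the existing or planned development process."""
    return max(applicable_scores)

# Example: a pre-freeze pass/fail test applies (5), but some customer
# requirements are unknown (6), so the rating entered on the form is 6.
print(detection_rating([5, 6]))  # -> 6
```

The selected value then goes in the detection column of the Design FMEA form.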
Hints & tips
- Score the actual processes and procedures you use, not your good intentions.
- The detection score is supposed to reflect the practice of improving the design for problems that are found. If that is not your practice, then the detection score should be rated very high since your organization does not attempt to detect problems early in the development process in order to fix them.
- Since the detection score represents organizational policies and practices, the scoring may entail some organizational politics if the approved procedures and policies do not lead to a very low score. Keep in mind, you are assessing technical risk in the product, not commenting on the organization’s standard practices.
- 00:04 Hello, I'm Ray Sheen.
- 00:06 Well, it's now time to move to the third of the scoring criteria for
- 00:09 the Design FMEA, and that is scoring detection.
- 00:15 The detection ratings are just a bit weird for the Design FMEA.
- 00:19 You are evaluating the probability
- 00:21 that the failure would be detected during the design process.
- 00:25 The assumption being that, if detected at that time,
- 00:27 a design change would be made to eliminate or control the failure mode.
- 00:32 What is a bit odd about the scoring is that a high score is for low detection,
- 00:37 and a low score is for high detection.
- 00:40 For this reason, a better name for
- 00:42 this rating would probably have been undetectability, not detection.
- 00:47 When doing this rating, you should first describe the detection method used in
- 00:51 the development process, to find this type of failure.
- 00:54 List that on the FMEA form, and then score it.
- 00:57 And one more time, I normally score the severity, occurrence and
- 01:01 detection for one failure mode, then go to the next failure mode.
- 01:05 I don't score all the severities, and then all the occurrences, and
- 01:09 then all the detections.
- 01:11 I work across the form horizontally.
- 01:14 Let's consider data sources.
- 01:16 Your primary source of information for this rating will be your design and
- 01:20 development process.
- 01:21 By that I mean analysis, testing, verification and validation.
- 01:25 That is done along with the design control aspects of the process.
- 01:29 An area I also like to include is the design configuration warnings and
- 01:33 cautions that are included in the design documentation.
- 01:36 If your organization does not have a standard process for
- 01:38 all of these, then use the project plan for the product development.
- 01:42 This area of rating can also consider customer experience with detection.
- 01:47 This would include the results of customer installation or application, and
- 01:52 the customer returns and complaint history.
- 01:54 For new products, you may wanna include a customer focus group.
- 01:57 They can help to make sure your development process is robust, and
- 02:01 considers all the customer uses and applications when doing your testing and
- 02:05 analysis.
- 02:06 This area will focus heavily on test data, but don't overlook the power
- 02:10 of a good simulation or analysis of your product or system.
- 02:14 Okay, let's look at the ratings.
- 02:16 In this case, we are detecting the failure mode.
- 02:19 The rating of 10 is the worst for detection.
- 02:22 In fact, it is the inability to detect the existence of the failure mode.
- 02:26 Use this score if there is no design control for this element, or
- 02:29 if detecting either the cause of the failure or
- 02:33 the failure itself can only be done by the customer once they are trying to
- 02:37 use the product, and they have been negatively impacted by that effect.
- 02:43 Now, let's look at the rating of nine.
- 02:45 This is used when there's a very remote chance of detecting the cause or failure.
- 02:49 Either the design controls like testing and analysis for
- 02:52 this condition are very weak, or the tests and analysis that are used
- 02:56 don't correlate with the actual experience or conditions at the customer.
- 03:00 This is also the score if the failure mode cannot be detected by you,
- 03:03 the manufacturer or supplier of the product, but the customer
- 03:06 is able to detect it immediately before they feel the effect of the failure.
- 03:11 The score of 8 is for the remote chance of detecting the failure or cause.
- 03:15 So in this case, you could catch the failure in a verification or
- 03:18 validation testing but only in testing that occurs after the design is
- 03:22 complete and frozen, which means it might easily slip through.
- 03:26 You can also use this score if the failure can be detected internally, but
- 03:30 only by random testing.
- 03:33 Usually, that means some type of random destructive testing of the product.
- 03:37 The score of 7 is for a very low chance of detection.
- 03:40 In this case, you're still relying on verification and
- 03:43 validation testing after design freeze.
- 03:46 But the testing does go all the way to failure.
- 03:49 So you can see if there is any design margin or
- 03:52 if the product is very fragile, with respect to this failure mode.
- 03:56 Next is the low chance of detection which gets a score of 6.
- 03:59 There is a fair amount of testing that is occurring with
- 04:02 respect to this failure mode and cause.
- 04:04 There is verification and validation testing that includes degradation testing.
- 04:08 This will enable modeling of the design to determine if it is robust with respect to
- 04:12 this failure mode, or the customer may install the product and
- 04:15 conduct their own testing, as part of the final acceptance of the product system.
- 04:19 That would obviously be much more realistic than internal testing, but
- 04:23 it's normally used only for custom built products.
- 04:26 This is also the score to use when there's a very good testing program, but
- 04:29 some of the customer requirements are not known, and therefore cannot be tested
- 04:34 or if different customers react differently to the failure.
- 04:37 The score of 5 is labeled the moderate chance of detection.
- 04:41 In this case, the validation tests are conducted prior to the design freeze,
- 04:45 providing time for the design team to make changes to the product,
- 04:49 to reduce the cause or the impact of the failure.
- 04:52 However, the nature of the testing is just to make sure everything passes the spec,
- 04:56 not to determine how much design margin exists.
- 04:59 The testing may also be done by a customer testing prototypes of the product, and
- 05:03 providing feedback before the design is completed.
- 05:07 Level 4 gives us a moderately high chance of detecting the failure
- 05:10 during development, and eliminating or reducing it.
- 05:13 This score is used when the validation tests are done prior to design freeze, and
- 05:17 the tests are run to the point of failure to determine the fragility,
- 05:20 and allow the team to add design margin.
- 05:23 This is also used if the failure mode is a well-known condition that is tested
- 05:28 and evaluated
- 05:29 during development as a stand-alone part of the development process.
- 05:33 Level 3, is a high chance of detection.
- 05:36 The validation tests are again done prior to design freeze.
- 05:39 And this time, they are done to measure degradation.
- 05:42 That way, the design team can predict when the design begins to become fragile and
- 05:46 can design a way from that point.
- 05:48 Level 2 has a very high chance of detection.
- 05:51 It's used when the development process includes design analysis, and
- 05:55 the detection controls embedded in the development process have demonstrated
- 05:59 that they have a high detection capability.
- 06:02 This takes time and a number of development processes to achieve.
- 06:06 Once in a row is not good enough.
- 06:08 Alternatively, if you have an excellent simulation, such as a digital twin,
- 06:12 and it has been highly correlated with actual field data,
- 06:15 you can use it to test the failure condition and rate a score of two.
- 06:20 Finally, to get a score of 1,
- 06:21 the failure mode must be fully prevented through the product design.
- 06:25 This would mean applying Poka Yoke techniques that prevent the failures from
- 06:29 ever occurring.
- 06:31 So let's continue with our pen example.
- 06:33 We're analyzing the ball of a Ball Point Pen.
- 06:35 And as we consider the development process in use, the assembly process is
- 06:40 analyzed and tested prior to design freeze to determine any assembly problems.
- 06:45 So it is essentially testing to failure, and it gets a rating of 4.
- 06:49 The testing of the ink drying out is also done prior to design freeze, but
- 06:54 that test is just a test against the ink specification for pass or fail rating.
- 06:58 So it is a score of 5.
- 07:00 The corrosion test for the ball is done prior to the design freeze also.
- 07:04 But some of the customer application environments are not well understood, so
- 07:08 there are likely some requirements that are not known, we give this a 6.
- 07:12 With respect to the failure of damage to the ball due to dropping or abusing it,
- 07:16 this is tested before design freeze and tested to failure, so the score is 4.
- 07:21 Finally, dirt or other substances that prevent the ball from
- 07:25 rolling freely are tested with our standard test.
- 07:28 But the standard test does not have some of the unique customer
- 07:31 environment contaminations, because they are not known.
- 07:34 So, this failure mode also gets a score of 6.
- 07:39 The detection analysis score is a score of your development process's ability
- 07:44 to find and predict whether the problem is likely to occur at some point
- 07:49 during the product lifecycle.
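To recap the pen example in one place, here is a hypothetical summary of the detection scores discussed above (the failure-mode names are paraphrased, not official labels). Sorting worst-first highlights the failure modes the development process is least able to catch:

```python
# Hypothetical recap of the ball-point-pen detection scores from the
# lesson; the failure-mode names are paraphrased, not official labels.
pen_detection = {
    "assembly problems": 4,                # pre-freeze, tested to failure
    "ink drying out": 5,                   # pre-freeze pass/fail spec test
    "ball corrosion": 6,                   # some customer environments unknown
    "ball damage from dropping/abuse": 4,  # pre-freeze, tested to failure
    "ball fouled by contaminants": 6,      # unknown customer contaminants
}

# Sort worst-first: a higher detection score means the process is less
# likely to catch the failure mode before the customer does.
for mode, score in sorted(pen_detection.items(), key=lambda kv: -kv[1]):
    print(f"{score}  {mode}")
```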