The core of risk analysis (5th Step of the VDA/AIAG FMEA Handbook 2019) is the application of evaluation tables to risk assessment. The conceptual models for FMEA evaluation are presented below.
The FMEA methodological description represents a conceptual and procedural model for assessing risks. Such methods involve significantly more “vagueness” than, for example, the principles used to describe physical realities.
The factors of severity, occurrence and detection are used to describe the risks present in planned design solutions and processes. Each is rated on a scale of 1 to 10, where higher values represent more critical evaluations.
To support decision-making for each of these factors, the methodological descriptions provide rating texts, allocated to the respective factor in tabular form. A lot of time and effort was invested in developing these formulations. It is nevertheless worth stressing repeatedly that every team decision represents a subjective assessment and cannot claim to be precise in any way. See in this connection the following reference from the AIAG/VDA FMEA Handbook 2019:
“It is not appropriate to compare the ratings of one team’s FMEA with the ratings of another team’s FMEA, even if the product/process appear to be identical, since each team’s environment is unique and thus their respective individual ratings will be unique (i.e. the ratings are subjective).”
The following content will help to make the evaluations more transparent in the practical application of the FMEA.
First, an apparently simple allocation:
The severity (S) always relates to the failure effects, primarily in the overall system or its application (e.g. vehicle).
The occurrence (O) relates to the potential cause(s) of the failure mode.
The detection (D) relates to the determination of the cause or mode of the failure.
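This allocation can be sketched as a small data model. This is a hypothetical illustration only; the class and field names, the example failure chain, and the ratings are invented and are not part of the handbook:

```python
from dataclasses import dataclass


@dataclass
class FailureEffect:
    """Effect in the overall system or its application (e.g. vehicle)."""
    description: str
    severity: int  # S: rated 1-10 against the failure effect


@dataclass
class FailureCause:
    """Potential cause of the failure mode."""
    description: str
    occurrence: int  # O: rated 1-10 against the potential cause


@dataclass
class FailureMode:
    """Failure mode linking effect and cause in one chain."""
    description: str
    effect: FailureEffect
    cause: FailureCause
    detection: int  # D: rated 1-10 against detecting the cause or the mode


# Invented example chain: each rating sits on the element it relates to.
mode = FailureMode(
    description="Connector loses contact under vibration",
    effect=FailureEffect("Intermittent loss of vehicle function", severity=8),
    cause=FailureCause("Insufficient contact normal force", occurrence=4),
    detection=6,
)
```

The point of the sketch is only where each rating is attached: S sits on the effect, O on the cause, and D on the detection of the cause or mode.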
However, it is not that straightforward. In particular, detection in the Design FMEA does not consist of classical detection controls. Instead, the verification and validation controls constitute the core of the entries.
The distinction between the prevention and detection controls represents a further challenge. A small illustration of this challenge: Is the security check of the hand luggage of airline passengers a prevention or detection control?
We recommend the following initial determination/definition to make the distinction in FMEA:
“A risk is a gap in knowledge – A gap in knowledge is a risk”
On this basis, it is possible to determine: controls that generate knowledge (and the application of that knowledge) are prevention controls. Thus even controls that appear to involve checks (e.g. incoming goods checks) have a preventive character in the risk analysis. In product development, the “Shaker Test” can have a preventive effect as long as it generates knowledge for the design. At a later stage, the “same” Shaker Test on test pieces of more advanced maturity becomes a “detection control” because it verifies the final design. This conceptual model should be taken into account in the development and application of the evaluation tables.
Thus, we come to a second determination/definition:
The O-value is a quality assessment of the prevention controls previously planned and documented in the FMEA.
The D-value is a quality assessment of the detection controls previously planned and documented in the FMEA (verification/validation in the design).
My recommendation in the context of these two definitions is to apply the evaluation tables for the O- and D-values accordingly (the texts for deriving the S-values do not require a conceptual model of this sort). The questions should evaluate the capability of the controls in the respective cause-failure mode combination. It follows logically that the same prevention or detection control receives different ratings in different contexts (a simulation, for example, does not have the same capability for every task in the FMEA).
1. The evaluation catalogs used can be agreed specifically for design or customer (customized agreements)
2. The normative specifications are otherwise deemed to be “state of the art” (e.g. AIAG/VDA FMEA Handbook 2019)
3. For us, the preferred approach to rating the detection controls for the Design FMEA is:
a) The “content of the detection controls is the planned verification and validation actions for verifying the corresponding functions”
b) The D-value represents an evaluation of the effectiveness of the respective verification and validation control for the corresponding function(s), taking account of the degree of maturity and the representativeness of the test pieces used.
1. As a result, catalogued, fixed D-values for the selected verification procedures are not appropriate (example: the Shaker Test has different capabilities depending on the function and requirement in question, which is reflected in different D-values for the “same” Shaker Test).
2. The aspect “position on the timeline” is also covered by the degree of maturity of the test pieces in this context. Questions: To what extent do the test pieces represent the expected variation in series production? Do they represent the marginal positions of the tolerance combinations? Or are the test pieces simply functional samples that have not yet been made with series tools? This makes it necessary to repeat the same verification procedure several times along the timeline with different D-values (the more representative the test pieces are of the subsequent series, the better the D-value).
3. The evaluation catalogs developed should reflect these aspects. The detection controls in the process are, in my view, checks to detect the non-conformities in the product (product features). I see checks in the process with an integrated option to react (Monitoring and System Reaction – MSR) as prevention controls. The catalogs then evaluate the capacity of the check to find existing failures in the product.
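The consequence that the “same” control carries different D-values in different contexts can be sketched as follows. The function name, the failure combinations and the ratings are hypothetical, invented for illustration; the point is only that the rating is stored per control-and-combination pair, never as a fixed attribute of the control itself:

```python
# D-ratings are keyed by (control, cause-failure combination), not by control
# alone: the "same" Shaker Test may verify one function well and another poorly.
detection_ratings: dict[tuple[str, str], int] = {}


def rate_detection(control: str, combination: str, d_value: int) -> None:
    """Record a context-specific D-value (1 = best detection, 10 = worst)."""
    if not 1 <= d_value <= 10:
        raise ValueError("D-value must be between 1 and 10")
    detection_ratings[(control, combination)] = d_value


# The same Shaker Test, rated separately for two cause-failure combinations:
rate_detection("Shaker Test", "solder joint fatigue / broken trace", 3)
rate_detection("Shaker Test", "connector micro-motion / intermittent contact", 7)
```

A catalogued, fixed D-value per verification procedure would collapse this distinction, which is exactly what the recommendation above argues against.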