What gets measured is what gets done. If you want quality to improve, measure it, track the measurements, and incorporate them into pay, bonus, and employment decisions.
Good philosophy in an ideal world.
But there are ways around everything. I worked for a large process control and instrumentation equipment company, and we were getting returns on I/O modules that traced back to an amplifier circuit. It turned out the designer had used typical performance specifications instead of minimum/maximum numbers, and hadn't compounded them to determine the overall instrument tolerance. Marketing insisted on a 0.025% tolerance, but if you used the min/max specifications on the components, that number wasn't guaranteed.
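To make the tolerance-stacking point concrete, here's a rough sketch. The error sources and all the numbers below are invented for illustration, not from the actual circuit: sum the "typical" specs and you can appear to meet a tight tolerance that the guaranteed min/max numbers blow right through.

```python
# Hypothetical error budget for an amplifier channel, in % of reading.
# "typ" = typical datasheet spec, "max" = guaranteed min/max spec.
error_sources = {
    "gain resistor ratio": {"typ": 0.005, "max": 0.015},
    "offset drift":        {"typ": 0.005, "max": 0.020},
    "voltage reference":   {"typ": 0.008, "max": 0.025},
    "ADC linearity":       {"typ": 0.005, "max": 0.015},
}

# Straight worst-case sums of each column:
typ_worst_case = sum(e["typ"] for e in error_sources.values())
max_worst_case = sum(e["max"] for e in error_sources.values())

# Root-sum-square (RSS) of the guaranteed specs, the usual statistical compromise:
rss_max = sum(e["max"] ** 2 for e in error_sources.values()) ** 0.5

print(f"typical specs, summed: {typ_worst_case:.3f} %")  # appears to meet 0.025 %
print(f"min/max specs, summed: {max_worst_case:.3f} %")  # worst case blows the spec
print(f"min/max specs, RSS:    {rss_max:.3f} %")         # still over the spec
```

Even the RSS of the guaranteed specs misses the published 0.025% here, which is roughly the trap the designer fell into.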
I met with the test engineer, who told me:
1) The first-pass yield being reported on the instrument was 92%, which met the conditions for an executive bonus. So, quality goal met... or was it? Why were we getting so many returns?
2) The "first-pass" yield was really third-, fourth-, or fifth-pass. If a module failed tolerance, they kept testing it until the first time it passed, then shipped it and counted it as a first pass.
3) When I asked how they could get away with that, he explained that there was no automated recording of the test data at all, so there was no way to prove how many times a module had been tested before being declared a Pass. If it absolutely would not pass, it was declared a Fail. But the record was a manual report.
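For anyone unfamiliar with the metric, here's a toy sketch of the gap between true first-pass yield and the "eventually passed" number that was being reported. The test logs are made up:

```python
# Each unit's test history, in order. A real system would record this
# automatically; the whole scam depended on the fact that ours didn't.
test_logs = [
    ["fail", "fail", "pass"],   # retested until it passed, shipped anyway
    ["pass"],                   # a genuine first-pass unit
    ["fail", "pass"],
    ["fail", "fail", "fail"],   # hard failure, finally declared a Fail
    ["pass"],
]

# True first-pass yield: passed on the very first test.
true_fpy = sum(log[0] == "pass" for log in test_logs) / len(test_logs)

# What was actually reported: counted as "first-pass" if it ever passed.
reported_fpy = sum("pass" in log for log in test_logs) / len(test_logs)

print(f"true first-pass yield:     {true_fpy:.0%}")    # 40%
print(f"'eventually passed' yield: {reported_fpy:.0%}")  # 80%
```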
Charged with solving the problem, I examined the component tolerances, and the accumulated errors in the circuit indicated that a 0.050% tolerance could be guaranteed. But Marketing said that was unacceptable: we needed the 0.025% spec to be competitive, or at least needed to publish that spec. The problem was that usage in the field proved we didn't meet it, and customers were getting pissed.
So, to find the real first-pass yield, I worked with the ethical test engineer who had tipped me off (but not management, who would have reassigned him or targeted him in the next layoff), and we tested enough production units to do a Weibull plot, a statistical analysis.
We found that we could meet the 0.025% specification only about 30% of the time. In other words, the true first-pass yield was 30%, not 92%, and 70% of units failed the first test.
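The actual study used a Weibull plot; as a much simpler stand-in, the core of the estimate is just the fraction of sampled units whose measured error lands inside the published spec. The simulated measurement distribution below is invented, chosen only so that roughly 30% of units pass, as in the story:

```python
import random

random.seed(1)
SPEC = 0.025  # published tolerance, % of reading

# Hypothetical measured errors for 200 sampled modules. The 0.060 spread
# is an assumption for illustration, not a measured value.
errors = [abs(random.gauss(0.0, 0.060)) for _ in range(200)]

# Empirical first-pass yield at the published spec:
yield_at_spec = sum(e <= SPEC for e in errors) / len(errors)
print(f"estimated first-pass yield at ±{SPEC}%: {yield_at_spec:.0%}")
```

A Weibull (or any distributional) fit adds value over this raw fraction because it lets you extrapolate the yield you'd get at other candidate specs, such as the 0.050% the components could actually guarantee.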
This would not be allowed to be reported higher up, so we had to work behind the scenes to resolve it. It took over a year, but eventually the circuit was redesigned (essentially a two-stage amplifier scheme, a gain of 10 followed by a gain of 100, instead of a gain of 1000 in a single stage), and we were able to get a true first-pass yield of about 95%. Returns slowed to a trickle.
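One plausible reason splitting the gain helps, assuming the stages are built around op-amps with finite open-loop gain (the open-loop gain figure below is my assumption, not from the actual design): closed-loop gain error grows with the closed-loop gain you demand from a single stage.

```python
A_OL = 100_000.0  # assumed op-amp open-loop gain (100 dB), illustrative only

def closed_loop_error(g_ideal, a_ol=A_OL):
    """Fractional gain error of a feedback stage with ideal gain g_ideal.

    Actual closed-loop gain with finite open-loop gain a_ol:
        g_actual = g_ideal / (1 + g_ideal / a_ol)
    """
    g_actual = g_ideal / (1.0 + g_ideal / a_ol)
    return (g_ideal - g_actual) / g_ideal

single = closed_loop_error(1000)                          # one stage, x1000
cascade = closed_loop_error(10) + closed_loop_error(100)  # stage errors roughly add

print(f"single stage x1000: {single:.3%} gain error")
print(f"x10 then x100:      {cascade:.3%} gain error")
```

Under this assumption the single x1000 stage eats nearly 1% gain error on its own, while the 10 x 100 cascade stays around 0.1%, an order of magnitude better before component tolerances even enter the picture.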
But even though this company prided itself on a sophisticated and dedicated quality system and had industry-leading professionals in that area, politics would not allow problems to be identified publicly within the various organizations, especially not if bonuses would fail to be paid. Problems could only be fixed if they were kept quiet, behind the scenes, and embarrassed no one at all. If those criteria could not be met, they were not fixed. And that's how it works in the real world, for most companies; this one was a Fortune 500 company with an impeccable reputation at the time.
Now I work for an employer who honestly identifies and fixes problems in the open within the company, with no blame, only emphasizing problem-solving. And it is a relief and a joy.