We saw in another post how batch size impacts throughput. But what happens when defects occur and have to be reworked?
In this animation, the first step injects a defect into every third item until a defective item is returned for rework. Once an item has been reworked, no new instances of that defect are created. But how many defects escaped? And for how long were they at large?
Rework makes flow rate even more sensitive to batch size. At the bottom, our single-piece flow detected the error on day 5, and the item was reworked on day 6. Only one defect escaped; it was at large for 4 days and incurred an overall delay of 1 day.
Contrast this with a batch size of 10, where 6 defects escape, remain at large for 40 days, and incur a delay of 16 days. Not only is the process much slower, it is also much less stable: it generates more rework, is less able to recover from disturbances, and in real life escaped defects have a habit of compounding into further defects.
A key difference between the two is the delay in the feedback signal: 3 days versus 29. Shortening feedback loops is a massive enabler, improving not just throughput but also stability, and making scheduling more reliable.
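The way feedback delay grows with batch size can be sketched in a few lines. This is a toy model, not the exact mechanics of the animation: it assumes a pipeline of `steps` stations, each taking one day per item, where a batch only moves to the next station once every item in it is finished, and where the first defective item is item 3 (every third item is defective). The function and its parameters are illustrative assumptions.

```python
def feedback_delay(batch_size, steps=3):
    """Days between producing the first defective item at step 1
    and detecting it at the final step.

    Assumptions (illustrative, not from the animation itself):
    - each station processes one item per day;
    - a batch transfers to the next station only when complete;
    - stations run at equal rates, so batches never queue.
    """
    b = batch_size
    # Item 3 is the first defective item; step 1 finishes it on day 3.
    produced_day = 3
    # Which batch (0-based) item 3 belongs to.
    batch_index = (3 - 1) // b
    # That batch clears step 1 on day (batch_index + 1) * b, then
    # spends b days at each of the remaining (steps - 1) stations.
    detected_day = (batch_index + steps) * b
    return detected_day - produced_day

print(feedback_delay(1))   # single-piece flow
print(feedback_delay(10))  # batch of 10
```

Even in this stripped-down model, the batch of 10 holds the defect signal back for roughly an order of magnitude longer than single-piece flow, because the defective item must wait for its whole batch at every station.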