1.5 SIGMA SHIFT
In this post and the associated lecture I take up the 1.5 sigma shift. This simple concept confuses many people, especially those new to Six Sigma, but to cut a long story short: the 1.5 sigma shift is a statistical correction, or in simple words a buffer, created to protect our processes and products from the variation that is a constant companion of long-term projects. Think of it this way: you have to go to a remote town for two days, and this place has limited supplies of all major amenities. How will you pack? I assume you will keep extra phone batteries or a power bank, an extra set of clothes, and what not. Basically, you are preparing to face any kind of deviation or variation, and that is exactly what the 1.5 sigma shift is used for.
Let’s look into deeper aspects of this concept.
What we know of Six Sigma is that it is a "data-based methodology to improve process performance by reducing variability and bringing the number of defects down to 3.4 defects per million opportunities" — and this 3.4 figure already includes the 1.5 sigma shift.
If you look at the standard sigma level vs. number of defects table, it looks something like this:

Sigma Level | Defects per Million Opportunities
----------- | ---------------------------------
2 Sigma     | 308,537
3 Sigma     | 66,807
4 Sigma     | 6,210
5 Sigma     | 233
6 Sigma     | 3.4 (which effectively means near-zero defects)

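The figures in this table come straight from the normal distribution once the 1.5 sigma shift is applied. Here is a minimal Python sketch (the helper name `dpmo` is my own, not a standard API) that counts the one-sided tail beyond the nearest spec limit:

```python
from statistics import NormalDist

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a short-term sigma level,
    counting the single tail beyond the nearest spec limit after the
    mean has shifted by `shift` standard deviations."""
    return (1 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

for level in (2, 3, 4, 5, 6):
    print(f"{level} sigma -> {dpmo(level):,.1f} DPMO")
```

Running this reproduces the table: roughly 308,537 at 2 sigma down to about 3.4 at 6 sigma.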
But Six Sigma, statistically speaking, equals about 2 defects per billion opportunities, and this anomaly creates a lot of confusion.
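That 2-per-billion figure is easy to check: a perfectly centered process with spec limits at ±6 sigma produces defects only in the two extreme tails of the normal distribution. A quick sketch:

```python
from statistics import NormalDist

# A centered process with spec limits at +/-6 sigma defects only in the
# two extreme tails of the normal distribution.
tail = 1 - NormalDist().cdf(6.0)            # area beyond +6 sigma
defects_per_billion = 2 * tail * 1_000_000_000
print(round(defects_per_billion, 2))        # ~1.97, i.e. about 2 per billion
```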
[Here I would like to mention a book published in the early 1990s, "Six Sigma Producibility Analysis and Process Characterization" by Mikel J. Harry and J. Ronald Lawson. It contains a standard normal distribution table that extends to a z value of 6, whereas most earlier tables ended at a z value of 3.]
Coming back to our discussion: sigma level is a measure of variation, i.e. of how well your process is working, so a process working at the 6 sigma level operates with near-zero defects (3.4 defects per million opportunities). Yet I just said that, statistically, 6 sigma equals about 2 defects per billion opportunities. The reason behind this ambiguity is our famous 1.5 sigma shift: what we call 6 sigma for a short-term process corresponds to 4.5 sigma for a long-term process, the difference being a compensation factor known as the 1.5 sigma shift.
The first thing you need to know is that all processes are designed to meet their specification limits, but, much as the laws of thermodynamics say that entropy always increases, variation inevitably plays its role. When this happens:
- either the process standard deviation goes up, or
- the mean of the process moves away from the center.
Because of this, fewer standard deviations fit between the mean and the spec limits, which lowers the sigma level. To accommodate the variation that creeps into long-term projects, the concept of the 1.5 sigma shift was introduced.
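The effect of the mean moving off center can be made concrete. In this sketch the target value of 100 and sigma of 1 are arbitrary assumptions; spec limits sit 6 sigma either side of the target, and after a 1.5 sigma drift only 4.5 sigma remain to the nearest limit, pushing the defect rate from about 2 per billion up to about 3.4 per million:

```python
from statistics import NormalDist

# Assumed numbers for illustration: target 100, sigma 1, specs at +/-6 sigma.
nd = NormalDist()
target, sigma = 100.0, 1.0
usl, lsl = target + 6 * sigma, target - 6 * sigma

drifted_mean = target + 1.5 * sigma          # long-term drift toward the USL
z_nearest = (usl - drifted_mean) / sigma     # sigmas left to the nearest limit

# Defect rate with both tails counted (the far tail is negligible).
dpmo = ((1 - nd.cdf((usl - drifted_mean) / sigma))
        + nd.cdf((lsl - drifted_mean) / sigma)) * 1_000_000

print(z_nearest)       # 4.5
print(round(dpmo, 1))  # 3.4
```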
By now, the question that will crop up in an intelligent mind is: don't we have control limits, and if so, won't they handle the exceptions? What is the need for this shift?
I won't go into too many details here, but I will answer the question sufficiently.
The simple and precise answer is:
"Control limits are not enough; a better word is not sufficient."
The reasons are:
- sampling errors;
- control charts will not detect every movement in the process average;
- variability in data collection.
Other reasons are:
- In production, oversimplification is the biggest error, as it estimates sigma from short-term variation or data alone.
- Measuring products that have never been used.
- Not considering all activities of the value chain that substantially add to variation, such as shipping and handling effects.
- Incomplete understanding of customer requirements.
- Not counting in the environmental factors the product is exposed to, or how the customer will use or misuse the product.
In short, control charts cannot keep track of all variation.
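The point that control charts miss modest movements in the process average can be quantified with a rough sketch: the probability that a single plotted point falls outside standard 3-sigma control limits after the mean shifts by 1.5 standard errors, and the resulting average run length (the function name `detect_prob` is mine, for illustration):

```python
from statistics import NormalDist

def detect_prob(shift, limit=3.0):
    """Chance that one plotted point falls outside +/-`limit` sigma
    control limits after the process mean shifts by `shift` sigma."""
    nd = NormalDist()
    return (1 - nd.cdf(limit - shift)) + nd.cdf(-limit - shift)

p = detect_prob(1.5)     # ~0.067: most points after the shift still look fine
arl = 1 / p              # average run length: ~15 points before a signal
print(round(p, 4), round(arl, 1))
```

In other words, a 1.5 sigma shift goes unnoticed for around fifteen samples on average, which is exactly the kind of drift the shift is meant to buffer against.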
When we work on any process, we need to consider a few things:
- The goal itself is set under standard environmental conditions.
- Environmental conditions change, and that change causes variation. Since the environment will not be stable, in the planning stage itself we build in a compensation factor to accommodate this unavoidable variation, and that compensation factor is the 1.5 sigma shift.
This means that:
Short-term goal = long-term goal + appropriate compensation factor
which can be better understood as:
Short-term goal: 6 sigma
Long-term goal: 4.5 sigma
Compensation factor: 1.5 sigma
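The same bookkeeping in code form (trivial, but it keeps the sign convention straight; the function names are my own):

```python
# The only content here is the relationship:
# short-term sigma = long-term sigma + 1.5
SHIFT = 1.5

def to_long_term(short_term_sigma):
    """Long-term sigma level implied by a short-term level."""
    return short_term_sigma - SHIFT

def to_short_term(long_term_sigma):
    """Short-term sigma level implied by a long-term level."""
    return long_term_sigma + SHIFT

print(to_long_term(6.0))   # 4.5
print(to_short_term(4.5))  # 6.0
```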
This shift of 1.5 sigma was calculated by Motorola as the long-term dynamic mean variation, and they did not arrive at it overnight: many studies were carried out and a great deal of process data was collected. Only after all this did they conclude that the variation lies between 1.4 and 1.6 sigma, and for calculation purposes they settled on 1.5 sigma.
This long-term variation is accounted for by:
- variation in the process mean over time;
- an increase in the standard deviation of the process over time (or changed customer requirements).
The variation can be due to either of these factors, or both.
So we have to understand that when we say six sigma, we mean data collected over a short period of time, say 6 to 8 months, during which we encounter only chance causes of variation. But when we talk about long-term processes, data is collected over a span of years, and there we encounter variation in its full force, including both chance and assignable causes.
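This short-term vs. long-term distinction can be illustrated with a toy simulation (the drift magnitude and sample sizes here are assumptions for illustration, not Motorola's data): within one period only chance variation acts, while over many periods the mean wanders, inflating the long-term standard deviation.

```python
import random
from statistics import pstdev

random.seed(42)

# Toy simulation, not real process data: the within-period sigma and the
# size of the monthly drift are assumed values for illustration only.
WITHIN_SIGMA = 1.0
monthly_means = [random.gauss(0.0, 0.8) for _ in range(36)]  # ~3 years of drift

# Short-term study: one month, so only chance variation is visible.
short_term = [random.gauss(monthly_means[0], WITHIN_SIGMA) for _ in range(500)]

# Long-term study: every month contributes, so drift inflates the spread.
long_term = [random.gauss(m, WITHIN_SIGMA)
             for m in monthly_means for _ in range(500)]

print(pstdev(short_term))  # close to the within-period sigma
print(pstdev(long_term))   # noticeably larger
```

The long-term standard deviation comes out visibly larger than the short-term one even though every individual month is perfectly well behaved, which is the whole point of building in a compensation factor.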
So, the 1.5 sigma shift is a statistical correction to accommodate variation over the long term, and its role is largely theoretical. What you need to understand is that the purpose of Six Sigma is to generate organizational performance improvement, and it is up to the organization to determine its appropriate sigma level based on its customers' expectations. Ideally, the purpose of Six Sigma is to determine whether your process is improving compared to others in the same business.
Instead of focusing too much on the statistics, one should work on sound execution of the DMAIC methodology, because if the underlying causes of variation are understood and controlled, we will have defect-free processes in the long run.