Statistical process control (SPC) is a set of statistical methods, founded on the theory of variation, used to understand any process measured over time, typically to identify opportunities for improvement or to maintain high levels of performance. SPC uses time series analysis and graphical displays of data to make information accessible to a broad range of audiences (Benneyan et al., 2003). Because it works with data measured at regular time intervals, SPC can detect changes early in an intervention, before conclusive results from a larger summative assessment are available. This information is useful for developing hypotheses and adjusting key elements of the intervention to raise the probability of success. In healthcare, clinicians monitor heart rate, oxygen saturation and other vital signs as key health indicators; SPC can be used in much the same way to monitor the quality of patient care. For example, a healthcare team can use SPC to assess variation and improvement opportunities in diabetes care, focusing on the care and cost of admitted patients with diabetes. Using measures such as length of stay for a defined group of patients, the team can examine the variation in the process and adjust its improvement plan, either to focus on specific patient problems or to make changes to the care process.
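To make the length-of-stay example concrete, the sketch below computes the center line and three-sigma control limits for an individuals (XmR) chart, a common SPC chart for single measurements. The monthly figures, the function name and the Python implementation are hypothetical, offered only as a minimal illustration of the calculation, not as the method used in the cited studies.

```python
import statistics

def individuals_chart_limits(values):
    """Center line and three-sigma limits for an individuals (XmR) chart."""
    center = statistics.mean(values)
    # Average moving range between consecutive observations
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.mean(moving_ranges)
    # Conventional XmR estimate of sigma: average moving range / 1.128
    sigma = mr_bar / 1.128
    return center, center - 3 * sigma, center + 3 * sigma

# Hypothetical monthly average length of stay (days) for admitted diabetes patients
length_of_stay = [4.2, 3.9, 4.5, 4.1, 4.8, 4.0, 3.7, 4.3, 5.6, 4.4]
center, lcl, ucl = individuals_chart_limits(length_of_stay)
print(f"center = {center:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```

Points falling outside the computed limits, or non-random patterns within them, would signal special-cause variation worth investigating before the team adjusts the care process.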
Sample size is a crucial factor in performance measurement. A sample that is too large risks wasting resources such as money, energy and time, while a sample that is too small fails to represent the target population accurately and denies the researcher valuable insight into the phenomenon under study. Because this is a comparative method, it is recommended that relevant data be used, with a focus on validity and reliability. Samples may need to be gathered over longer periods so that they are large enough for precision, especially when the data are used for accountability rather than for improvement. Sample size is determined by how accurate and precise the research needs to be, that is, the degree to which the results should represent the actual population. Accuracy is governed by two quantities: the margin of error and the confidence level. The margin of error, also expressed as a confidence interval, is the allowed positive or negative deviation of the survey results, that is, the expected deviation between the sample estimate and the true population value. The confidence level indicates how certain one can be that the true population value falls within the margin of error. Using these two values, one can calculate the required sample size.
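As a minimal sketch of that calculation, the function below applies the standard sample-size formula for estimating a proportion, n = z²·p(1 − p)/e², under the usual conservative assumption p = 0.5. The function name and the choice of Python's standard library are illustrative assumptions, not taken from the sources cited in this section.

```python
import math
from statistics import NormalDist

def required_sample_size(margin_of_error, confidence_level, proportion=0.5):
    """Sample size needed to estimate a proportion within a given
    margin of error at a given confidence level."""
    # z-score for a two-sided interval, e.g. roughly 1.96 for 95% confidence
    z = NormalDist().inv_cdf(1 - (1 - confidence_level) / 2)
    n = (z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    return math.ceil(n)

# A 5% margin of error at 95% confidence requires roughly 385 respondents
print(required_sample_size(margin_of_error=0.05, confidence_level=0.95))
```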
Process control charts are simple connected-point graphs. The points are plotted on an x/y axis, with the x-axis representing time; the plotted values are often subgroup averages or individual measurements. These charts help identify changes in the measurements over the observation period. Essentially, a control chart is a line graph with a center line and control limits, typically set at three standard deviations from the center, used to judge the predictability and stability of a process. Healthcare processes can leverage control charts to gain insight into the causes of variation in key measures and to reveal effective strategies for improvement. As discussed above, the sample size can affect the results of a survey, which in turn can affect the reliability and/or validity of the findings. Healthcare organizations should therefore invest in statistical techniques suited to small sample sizes in their SPC charts. In situations where samples are small, especially because the population itself is small, one useful approach is the finite population correction. This method adjusts the variance estimate of an estimated mean or total so that it reflects only the portion of the population not included in the sample. When operating with a small sample size, this adjustment can be used to approximate the desired level of power (Button et al., 2013).
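The following sketch shows how the finite population correction might be applied to the standard error of a sample mean, assuming simple random sampling from a small, known population. The data values, population size and function name are hypothetical, used only to illustrate the adjustment.

```python
import math
import statistics

def fpc_standard_error(sample, population_size):
    """Standard error of the sample mean with the finite population
    correction applied."""
    n = len(sample)
    s = statistics.stdev(sample)              # sample standard deviation
    uncorrected_se = s / math.sqrt(n)
    # Finite population correction factor shrinks the standard error
    # as the sample covers a larger share of the population
    fpc = math.sqrt((population_size - n) / (population_size - 1))
    return uncorrected_se * fpc

# Hypothetical: 10 sampled lengths of stay from a ward of 40 admitted patients
stays = [3.5, 4.1, 4.8, 3.9, 5.2, 4.4, 3.7, 4.0, 4.6, 5.0]
print(round(fpc_standard_error(stays, population_size=40), 3))
```

Because the corrected standard error is smaller than the uncorrected one, control limits or confidence intervals built from it are tighter, which is how the adjustment helps recover power when the sample is a large fraction of a small population.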
A second method is to strengthen the design and measurement qualities of the study. If the objective is to detect a substantial effect, one can decrease the standard error or increase the size of the parameter estimate. The chosen outcome measure must be reliable, to reduce attenuation caused by unreliability, and sensitive, to raise the chances of detecting change. When faced with a small sample in which many variables are measured for each individual over a common time frame, multivariate techniques have proved useful. Multivariate analysis is the analysis of numerous measurements at once, or the simultaneous analysis of a dependent variable together with other variables. This technique is useful for studying complex data sets with small sample sizes. For example, a model was used to predict that Delhi would record more than half a million Covid-19 cases by July 2020; the evaluation was based on numerous variables such as public behavior, overall community immunity, government decisions, public transport and occupation (GLT, 2020). Applying multivariate analysis to a specific issue may involve both multivariate and univariate techniques to understand the links between variables and their significance for the research problem.
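As a concrete illustration of analyzing a dependent variable together with several explanatory variables, the sketch below fits an ordinary least squares regression to a small, hypothetical data set. The variable names, values and the use of NumPy are assumptions made for illustration only and are not drawn from the sources cited above.

```python
import numpy as np

# Hypothetical data: each row is one patient with age, blood glucose (mg/dL)
# and number of comorbidities; the outcome is length of stay in days
X = np.array([
    [54, 180, 1],
    [61, 210, 2],
    [47, 160, 0],
    [70, 240, 3],
    [58, 195, 1],
    [65, 220, 2],
])
y = np.array([3.8, 5.1, 3.2, 6.4, 4.3, 5.6])

# Add an intercept column and fit the model by ordinary least squares
X_design = np.column_stack([np.ones(len(X)), X])
coefficients, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print("intercept and coefficients:", np.round(coefficients, 3))
```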
References
Benneyan, J., Lloyd, R., & Plsek, P. (2003). Statistical process control as a tool for research and healthcare improvement. Quality & Safety in Health Care, 12(6), 458–464. https://doi.org/10.1136/qhc.12.6.458
Button, K., Ioannidis, J., Mokrysz, C., Nosek, B., Flint, J., Robinson, E., & Munafò, M. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. https://doi.org/10.1038/nrn3475
GLT. (2020). Overview of Multivariate Analysis | What is Multivariate Analysis? Great Learning Blog. Retrieved 13 July 2021, from https://www.mygreatlearning.com/blog/introduction-to-multivariate-analysis/