StatTools : Sequential (Unpaired) Analysis Explained


Related Links:
Sequential Analysis Introduction and Explained Page
Sequential Unpaired Difference Between Two Counts Program Page
Sequential Unpaired Difference Between Two Means Program Page
Sequential Unpaired Difference Between Two Ordinal Arrays Program Page
Sequential Unpaired Difference Between Two Proportions Program Page
Sequential Unpaired Difference Between Two Survival Rates Program Page

A general discussion of sequential analysis is presented in Sequential Analysis Explained and will not be repeated here.

This page briefly explains the Triangular Test, developed by Whitehead in the late 1980s and 1990s. For those interested in a full understanding of the theory and methodology of the Triangular Test, Whitehead's textbook (see reference) is highly recommended.

The Triangular Test is a sequential statistical method for comparing two groups, based on the relationship between Fisher's information V (a measure of the amount of data collected) and the efficient score Z (a measure of the effect size). How V and Z are calculated depends on the nature of the measurements concerned, but their relationship is interpreted in the same way throughout. V and Z can be calculated at any time during the study.
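As one concrete illustration of how Z and V might be computed, the sketch below uses the familiar score statistic for comparing two binomial proportions (observed minus expected successes, with a hypergeometric-style variance). The function name and the exact formulas are assumptions for illustration; Whitehead's book gives the exact derivations for each data type.

```python
def score_statistics(successes_a, n_a, successes_b, n_b):
    """Efficient score Z and Fisher's information V for comparing two
    binomial proportions. Hypothetical helper; a sketch of one common
    formulation, not Whitehead's exact implementation."""
    n = n_a + n_b                   # total sample size so far
    s = successes_a + successes_b   # total successes so far
    # Z: successes observed in group A minus those expected under H0
    z = successes_a - n_a * s / n
    # V: approximate null variance of Z
    v = n_a * n_b * s * (n - s) / n**3
    return z, v

# Example: 12/20 successes in group A versus 6/20 in group B
z, v = score_statistics(12, 20, 6, 20)
```

Both statistics can be recomputed from the running totals at any interim review, which is what makes the sequential plot of Z against V possible.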

Statistical borders are drawn that allow the researcher to make one of three decisions whenever the data are reviewed: to continue with the experiment, to reject the null hypothesis and stop the experiment, or to accept the null hypothesis and stop the experiment.

The baseline and the two borders form a triangle. While the V / Z plot remains within the triangle, no decision should be made other than to collect more data. If the plot crosses the outer border, the null hypothesis can be rejected (a significant difference exists). If the plot crosses the inner border, the null hypothesis can be accepted (no significant difference).
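The three-way decision can be sketched as a simple classification of each inspection point (V, Z) against the two straight-line borders. The parameterisation below (outer border Z = a + cV, inner border Z = -a + 3cV, meeting at the apex V = a/c) is a commonly quoted form of the one-sided triangular test and is an assumption here; continuity corrections are ignored.

```python
def triangular_decision(z, v, a, c):
    """Classify one (V, Z) inspection point against the straight-line
    borders of a triangular test. Sketch only: a is the intercept and
    c the slope of the outer border Z = a + c*V; the inner border is
    Z = -a + 3*c*V, so the two meet at the apex V = a/c."""
    if z >= a + c * v:
        return "stop: reject H0 (significant difference)"
    if z <= -a + 3 * c * v:
        return "stop: accept H0 (no significant difference)"
    return "continue: collect more data"
```

For example, with a = 4 and c = 0.25, the point (V = 1, Z = 5) lies above the outer border and stops the trial, while (V = 1, Z = 0) falls inside the triangle and the trial continues.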

The primary straight-line borders are calculated on the assumption that the data will be reviewed after every case as it occurs. If the data are reviewed less frequently, the borders are narrowed to retain power, the extent of the narrowing depending on the number of cases between reviews. The final borders, with periodic narrowing, look like a Christmas tree.
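The narrowing can be sketched as below, using the adjustment commonly quoted for Whitehead's method: at each inspection the borders are pulled inward by 0.583 times the square root of the increase in V since the previous inspection. The constant 0.583 and the function shape are assumptions taken from the published descriptions, not a definitive implementation.

```python
import math

def adjusted_borders(v_points, a, c, correction=0.583):
    """Narrow the straight-line borders at each planned inspection,
    producing the 'Christmas tree' shape. Sketch only: the borders
    are pulled inward by correction * sqrt(V_i - V_{i-1}), where the
    0.583 constant is the commonly quoted value (assumed here)."""
    borders = []
    v_prev = 0.0
    for v in v_points:
        shrink = correction * math.sqrt(v - v_prev)
        upper = a + c * v - shrink        # outer (rejection) border
        lower = -a + 3 * c * v + shrink   # inner (acceptance) border
        borders.append((v, upper, lower))
        v_prev = v
    return borders

# Two inspections, at V = 1 and V = 2, with a = 4 and c = 0.25
borders = adjusted_borders([1.0, 2.0], 4.0, 0.25)
```

The larger the gap in V between reviews, the larger the inward step, which is why infrequent reviews produce the more pronounced tree shape.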

The methods of calculating the effect size for different types of data, and how the borders are defined, will not be explained in detail in these pages. They are well described in Whitehead's book, and the algorithms can be readily obtained from published papers (see reference).

Please note that the coordinates defined in these pages are based on a two-tailed test (detecting a difference in either direction). If a one-tailed test is to be used (one group expected to exceed the other, with no interest in the reverse), the Type I Error (α) should be doubled (e.g. 0.1 instead of 0.05).

Please also note that the stopping borders are not affected by the ratio of sample sizes between the two groups, as they are calculated from α, β (1 − power), and the effect size θ. A discrepancy between the two sample sizes does, however, affect the calculation of V and Z from the data, and so alters the predicted sample size requirement at the planning stage.

At the end of the analysis, a termination test can be performed by calculating T = Z / sqrt(V). Under the null hypothesis Z has mean 0 and standard deviation sqrt(V), so T is approximately a standardized normal deviate, and a z test can be used to assess whether T deviates from the null.
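The termination test reduces to a few lines, sketched below with the standard-normal tail obtained from the complementary error function. This is a minimal sketch of the z test described above; Whitehead also describes bias-adjusted final analyses that it ignores.

```python
import math

def termination_test(z, v):
    """Final z test after the sequential procedure stops.
    T = Z / sqrt(V) is approximately standard normal under H0,
    so a two-sided p-value follows from the normal distribution."""
    t = z / math.sqrt(v)
    # Two-sided p-value: 2 * (1 - Phi(|t|)) = erfc(|t| / sqrt(2))
    p_two_sided = math.erfc(abs(t) / math.sqrt(2.0))
    return t, p_two_sided

# Example: Z = 3.0 and V = 2.475 at termination
t, p = termination_test(3.0, 2.475)
```

With these illustrative values T is about 1.91, giving a two-sided p-value just above 0.05, so the deviation from the null would not quite reach conventional significance.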

In his book, Whitehead presented five models: for normally distributed means, Poisson-distributed counts, binomially distributed proportions, survival rates, and non-parametric ordinal arrays. These are discussed individually in the following sections.