Tests for paired count outcomes
===============================

* James A Proudfoot
* Tuo Lin
* Bokai Wang
* Xin M Tu

## Abstract

For moderate to large sample sizes, all tests yielded p values close to the nominal level, except when models were misspecified. The signed-rank test generally had the lowest power. Within the current context of count outcomes, the signed-rank test shows subpar power when compared with tests that are based on the full data, such as the GEE. Parametric models for count outcomes, such as the GLMM with a Poisson model for the marginal counts, are quite sensitive to departures from the assumed parametric models. There is some small bias for all the asymptotic tests, that is, the signed-rank test, GLMM and GEE, especially for small sample sizes. Resampling methods such as permutation can help alleviate this.

* biostatistics
* statistics as topic
* epidemiologic methods
* investigative techniques
* analytical, diagnostic and therapeutic

## Introduction

Although not as popular as continuous and binary variables, count outcomes arise quite often in clinical research. For example, number of hospitalisations, number of suicide attempts, number of heavy drinking days and number of packs of cigarettes smoked per day are all popular count outcomes in mental health research.

Studies yielding paired outcomes are also popular. For example, to evaluate new eye-drops, we can treat one eye of a subject with the new eye-drops and the other eye with a placebo drop. To evaluate skin cancer in truck drivers, we can compare skin cancer on the left arm with that on the right arm, since the left arm is more exposed to sunlight. To evaluate the stress of combat on Veterans' health, we may use twins in which one is exposed to combat and the other is not, as differences observed with respect to health are likely attributable to combat experience. In a pre-post study, the effect of an intervention is evaluated by comparing a subject's outcomes before (pre) and after (post) receiving the intervention. In all these studies, each unit of analysis has two outcomes arising from two different conditions. Interest is centred on the difference between the means of the two outcomes.

For continuous outcomes, the paired t-test is the standard statistical method for evaluating differences between the means. However, the paired t-test does not apply to non-continuous variables such as binary and count (frequency) outcomes. For binary outcomes, McNemar's test is the standard. For count or frequency outcomes, there is not much discussion in the literature. Many use Wilcoxon's signed-rank test because this method is applicable to paired non-continuous outcomes such as count responses. One major weakness of the signed-rank test is its limited power. Because observations are converted to ranks and only the ranks are used in the test statistic, the signed-rank test does not use all available information in the original data, leading to lower power when compared with tests that use all the data. This is also why t-tests, rather than rank-based tests, are preferred and widely used for comparing two independent groups on continuous outcomes.

With recent advances in statistical methodology, there are more options for comparing paired count responses. In this paper, we discuss some alternative procedures that use all information in the original data and thus generally provide more power than the signed-rank test. In the second section, we first provide a brief review of paired outcomes and methods for comparing continuous and binary paired outcomes.
We then discuss the classic signed-rank test and modern alternatives for comparing paired count outcomes. In the third section, we use simulation studies to compare the different methods for paired count outcomes. In the fourth section, we present our concluding remarks.

## Methods for paired count outcomes

### Paired continuous and binary outcomes

Consider a sample of *n* subjects indexed by *i* and let ![Formula][1] and ![Formula][2] denote the paired outcomes from the *i*th subject. The subject may be an individual or a pair of twins, depending on the application. For example, in a pre-post study, the paired outcomes correspond to the pretreatment and post-treatment assessments and the subject is an individual. In studies involving twins, the paired outcomes come from each pair of twins. Because the two outcomes are correlated, statistical methods for comparing independent samples, such as the t-test, cannot be applied.

For a continuous outcome, the paired t-test is generally applied to evaluate differences in the means of the paired outcomes. If we assume that ![Formula][3] and ![Formula][4] follow a bivariate normal distribution, then the difference between the two outcomes, ![Formula][5], is also normally distributed. Under the null of no difference between the means of the two outcomes, ![Formula][6] has a normal distribution with mean 0 and variance ![Formula][7]. Thus, we can apply the t-test to the differences ![Formula][8] to test the null:

![Formula][9] (1)

where ![Formula][10] denotes the *t* distribution with *k* df, and ![Formula][11] and ![Formula][12] denote the sample mean and SD. We can also use this sampling distribution to construct CIs.

In practice, the bivariate normal distribution assumption for the paired outcomes ![Formula][13] and ![Formula][14] is quite strong and may not be met in real study data. If the assumption fails, the differences ![Formula][15] generally do not follow the normal distribution and thus the ![Formula][16] statistic in Equation (1) may not follow the *t* distribution. For large samples such as ![Formula][17], ![Formula][18] follows approximately a standard normal distribution. Thus, one may replace ![Formula][19] with the 'asymptotic' standard normal, ![Formula][20], to test the null as well as construct CIs, even if the outcomes ![Formula][21] and ![Formula][22] are not bivariate normal. In what follows, we assume large samples, since all the tests to be discussed next are asymptotic tests, that is, they approximately follow a mathematical distribution such as the normal distribution only for large samples. For small to moderate samples, all these tests have unknown distributions, and large-sample approximations such as the standard normal for the paired t-test may not work well. We discuss alternatives for small to moderate samples in the discussion section.

If the paired outcomes are binary, the above hypothesis becomes the comparison of the proportions of ![Formula][23] and ![Formula][24]. McNemar's test is the standard for comparing such paired outcomes. Let ![Formula][25] and ![Formula][26] denote the proportions associated with ![Formula][27] and ![Formula][28], that is,

![Formula][29]

Then the hypothesis to be tested is given by:

![Formula][30] (2)

McNemar's test is premised on the notion of concordant and discordant pairs in the sample. Shown in table 1 is a ![Formula][31] cross-tabulation for the two levels of the binary outcomes.
Let ![Formula][32], ![Formula][33], ![Formula][34] and ![Formula][35] denote the cell probabilities (or proportions) for the four cells in table 1, that is,

![Formula][36]

[Table 1](http://gpsych.bmj.com/content/31/1/e100004/T1). A 2×2 contingency table displaying the joint distribution of paired binary outcomes, with a, b, c and d denoting the cell counts.

Then, ![Formula][37] can be expressed in terms of the cell probabilities as follows:

![Formula][38]

Similarly, ![Formula][39] can be expressed as:

![Formula][40]

Thus, ![Formula][41] implies ![Formula][42] and vice versa. The hypothesis of interest in Equation (2) involving ![Formula][43] and ![Formula][44] can therefore be expressed in terms of ![Formula][45] and ![Formula][46]:

![Formula][47]

McNemar's test evaluates the difference between the two discordant cell counts, *b* and *c*, that is,

![Formula][48]

A large difference leads to rejection of the null. After normalising this difference, the statistic ![Formula][49] above follows approximately the standard normal distribution for large sample sizes.

### Paired count outcomes

For count outcomes, McNemar's test clearly does not apply. The paired t-test is also inappropriate for such outcomes. First, the difference ![Formula][50] does not follow a normal distribution. Second, even if both ![Formula][51] and ![Formula][52] follow a Poisson distribution, the difference ![Formula][53] is not a Poisson variable; indeed, ![Formula][54] is not even guaranteed to be non-negative.

One approach that has been used to compare paired count outcomes is the Wilcoxon signed-rank test. Within our context, let ![Formula][55] denote the rank of ![Formula][56] based on its absolute value ![Formula][57]. The ranks are integers that indicate the positions of the ![Formula][58] after rearranging them in ascending order.1 The signed-rank test has the following statistic:

![Formula][59]

where ![Formula][60] denotes an indicator taking the value 1 if ![Formula][61] >0 and 0 otherwise. Thus, ![Formula][62] only sums the ranks of the positive ![Formula][63]'s. The statistic ![Formula][64] ranges from 0 to ![Formula][65]. Under ![Formula][66], about one-half of the ![Formula][67]'s are positive. Thus, any pair (![Formula][68], ![Formula][69]) has a 50% chance that ![Formula][70]. In terms of ranks, this means that the sum of ![Formula][71] over the positive ![Formula][72] is about one-half of the maximum ![Formula][73]. Thus, we can specify the null as:

![Formula][74]

where ![Formula][75]. Under the null ![Formula][76], ![Formula][77] has mean ![Formula][78], one-half of the maximum ![Formula][79]. The normalised ![Formula][80]:

![Formula][81] (3)

has approximately a normal distribution with mean ![Formula][82] and SE ![Formula][83] for large samples, which can readily be used to calculate p values and/or confidence bands.

Since paired outcomes are a special case of general longitudinal outcomes, longitudinal methods can also be applied to test the null. For example, both the generalised linear mixed-effects model (GLMM) and generalised estimating equations (GEE), the two most popular longitudinal models, can be specialised to the current setting. When applying the GLMM, we specify the following model:

![Formula][84] (4)

where *z* denotes a random effect to account for the correlation between the paired outcomes, ![Formula][85] denotes a Poisson distribution with mean μ, ![Formula][86] denotes the exponential function and ![Formula][87] denotes the log function.
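One standard way to write the Poisson GLMM in Equation (4), assuming the normally distributed random intercept described above and using our own labels β1, β2 and σ² for the fixed effects and the random-effect variance (these labels are not necessarily the paper's notation), is the following sketch:

```latex
% A hedged reading of Equation (4): Poisson outcomes, log link,
% shared normal random intercept z_i for the two outcomes of subject i.
\[
\begin{aligned}
  y_{ij} \mid z_i &\sim \mathrm{Poisson}(\mu_{ij}), \qquad j = 1, 2, \\
  \log(\mu_{ij})  &= \beta_j + z_i, \qquad z_i \sim N(0, \sigma^2).
\end{aligned}
\]
```

Under this reading, the marginal means of the paired outcomes are exp(β1 + σ²/2) and exp(β2 + σ²/2), so equality of the means is equivalent to β1 = β2, which is the form taken by the null hypothesis stated next.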
The null of the same mean between ![Formula][88] and ![Formula][89] can be expressed as:

![Formula][90]

Note that since the random effect *z* may be positive or negative and the range of the normal distribution is unbounded, the log transformation of the Poisson mean in Equation (4) is necessary to ensure that ![Formula][91] and ![Formula][92] stay positive.

To apply the GEE, we only need to specify the mean of each of the paired outcomes. This is because, unlike the GLMM, the GEE is a 'semi-parametric' model and imposes no mathematical distribution on the outcomes. Thus, under the GEE, both the Poisson distribution for each outcome and the random effect *z* linking the paired outcomes are removed. The corresponding GEE is given by:

![Formula][93] (5)

Since there is no random effect in Equation (5), the log transformation is also not necessary and thus the GEE can be specified simply as:

![Formula][94] (6)

Compared with the GLMM in Equation (4), the GEE above imposes no mathematical distribution either jointly or marginally, allowing for valid inference for a broad class of data distributions. The GLMM in Equation (4) may yield biased inference if (1) at least one of the outcomes does not follow the Poisson; (2) the random effect *z* follows a non-normal distribution; or (3) ![Formula][95] and ![Formula][96] are not correlated according to the specified random-effect structure. In contrast, the GEE in Equation (5) forgoes all such constraints and yields valid inference regardless of the marginal distributions and correlation structure of the outcomes ![Formula][97] and ![Formula][98].

## Simulation study

In this section, we evaluate and compare the performance of the different methods discussed above by simulation. All simulations are performed with a Monte Carlo (MC) sample of ![Formula][99] under a significance level of ![Formula][100]. The performance of a test is characterised by (1) bias and (2) power. We consider both aspects when comparing the different methods.

### Bias

If a test performs correctly, it should yield type I error rates at the specified nominal level ![Formula][101]. Several factors can affect the performance of a test. First, if the data do not follow the assumed mathematical distributions, the test is in general biased. For example, if the paired t-test is applied to paired outcomes that are not bivariate normal, it will generally be biased. Second, with the exception of the paired t-test, all tests discussed above rely on large samples to provide valid results. When applied to small or moderate samples, such tests may have bias. For example, the normal distribution may not provide a good approximation to the sampling distribution of the statistic ![Formula][102] of the Wilcoxon signed-rank test ![Formula][103] when applied to a sample size of, say, ![Formula][104]. Thus, to compare the performance of the different methods, we consider sample sizes ranging from *n*=10 to 200.

To evaluate the effects of model assumptions on test performance, we simulate correlated count responses ![Formula][105] and ![Formula][106] using a copula approach,2 where each outcome marginally follows a negative binomial (NB) distribution:

![Formula][107] (7)

The above model deviates from the GLMM in Equation (4) in two ways. First, ![Formula][108] (![Formula][109]) follows an NB, rather than a Poisson. Second, the correlation between ![Formula][110] and ![Formula][111] is induced by the copula rather than by a normally distributed random effect.
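To make the setup concrete, below is a minimal Python sketch of this kind of simulation and of applying some of the tests above to a single simulated dataset. It is not the authors' code: the helper name `simulate_paired_nb`, the scipy/statsmodels calls and the parameter values are our own illustrative choices, and the GEE is fitted with an identity link (as in Equation (6)) and a Gaussian working family, relying on the GEE's robust standard errors rather than on any distributional assumption.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)

def simulate_paired_nb(n, mu1, mu2, tau, rho, rng):
    """Correlated NB pairs via a Gaussian copula (a sketch of Equation (7))."""
    # Latent bivariate normal with correlation rho, transformed to uniform margins
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)
    # scipy's nbinom(n, p) has mean n*(1-p)/p; with dispersion tau and mean mu,
    # taking p = tau/(tau + mu) gives marginal mean mu and variance mu + mu**2/tau
    y1 = stats.nbinom.ppf(u[:, 0], tau, tau / (tau + mu1)).astype(int)
    y2 = stats.nbinom.ppf(u[:, 1], tau, tau / (tau + mu2)).astype(int)
    return y1, y2

y1, y2 = simulate_paired_nb(n=100, mu1=2.0, mu2=2.0, tau=1.0, rho=0.5, rng=rng)

# Wilcoxon signed-rank test and paired t-test on the paired counts
print(stats.wilcoxon(y1, y2))
print(stats.ttest_rel(y1, y2))

# GEE with identity link (Equation (6)): long format, one row per outcome,
# subject id as the grouping variable, working exchangeable correlation
endog = np.column_stack([y1, y2]).ravel()   # y_i1, y_i2 for each subject
cond = np.tile([0.0, 1.0], len(y1))         # indicator of the second outcome
exog = sm.add_constant(cond)
groups = np.repeat(np.arange(len(y1)), 2)
gee = sm.GEE(endog, exog, groups=groups,
             family=sm.families.Gaussian(),
             cov_struct=sm.cov_struct.Exchangeable())
print(gee.fit().summary())                  # slope estimates the mean difference
```

The log-link GEE in Equation (5) could be fitted in the same way by switching the working family to Poisson (whose default link in statsmodels is the log), in which case the slope coefficient estimates the log ratio of the two means rather than their difference.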
Unlike the Poisson, the NB has an extra parameter τ controlling the dispersion (variability). Thus, although the Poisson and NB have the same mean, the NB has a different (larger) variance than the Poisson.1 Since ![Formula][112] converges to ![Formula][113] as τ increases, selecting a relatively small τ allows us to examine the impact of the Poisson assumption on inference when the GLMM in Equation (4) is applied to count outcomes that do not comply with the Poisson model. For the simulation study, we set ![Formula][114] and 5, and set the correlation between ![Formula][115] and ![Formula][116] to 0.5.

To evaluate bias using MC simulation, we simulate paired outcomes ![Formula][117] and ![Formula][118] from the model in Equation (7), apply each of the tests discussed in the second section and compute p values for testing the null hypothesis. This process is repeated ![Formula][119] times. A test has little or no bias if the proportion of nulls rejected over the ![Formula][120] replications is close to the nominal value ![Formula][121].

Shown in table 2 are averaged p values for the different tests from their applications to the ![Formula][122] simulated paired outcomes, where GLMM (Poisson) denotes the GLMM for the Poisson in Equation (4), GLMM (NB) denotes the GLMM for the NB distribution (obtained by replacing the Poisson in Equation (4) with the NB), GEE denotes the GEE without the log transformation in Equation (6) and GEE (log-link) denotes the GEE with the log transformation in Equation (5). For moderate to large sample sizes, n=100, 200, all tests yielded p values close to the nominal ![Formula][123], except for GLMM (Poisson), which had highly inflated type I errors for ![Formula][124]. As indicated earlier, with a small dispersion parameter such as ![Formula][125], the NB has much more variability than its Poisson counterpart, leading to a poor fit when the simulated data are fitted with the GLMM assuming the Poisson. Thus, the high bias in the type I error reflects model misspecification.

[Table 2](http://gpsych.bmj.com/content/31/1/e100004/T2). Averaged p values from testing the null of no difference between paired outcomes by different methods over *M*=2000 MC replicates.

Although the paired t-test is not a theoretically valid test for count outcomes, it performed well for all sample sizes considered, albeit with a small downward bias, especially for small sample sizes. For extremely small sample sizes such as n=10, all three asymptotically valid methods, the signed-rank test, GLMM (NB) and GEE, showed a small upward bias, especially when ![Formula][126]. As the sample size increased, the bias diminished, as expected.

### Power

If a group of tests all provide good type I error rates, we can further compare them for power. Two unbiased tests may well provide different power, because they may use different amounts of information from the study data or use the same information differently. For example, within the current study, the signed-rank test may provide less power than the GEE, because the former only uses the ranks of the original count outcomes, completely ignoring the magnitudes of the ![Formula][127]'s. Thus, it is of interest to compare power across the different tests.

We again use the MC approach to compare power across the different methods. However, unlike the evaluation of bias, we must also be specific about the difference in the means of the paired outcomes so that we can simulate the outcomes under the alternative hypothesis.
For this study, we specify the null and alternative as follows:

![Formula][128] (8)

We simulate correlated outcomes ![Formula][129] again using the copula model in Equation (7), but with ![Formula][130] and ![Formula][131] as specified under ![Formula][132] in Equation (8). For each simulated set of outcomes ![Formula][133], we apply the different methods and test the null hypothesis ![Formula][134]. This process is repeated ![Formula][135] times and the power of each method is estimated by the percentage of times the null is rejected.

Shown in table 3 are power estimates from testing the null hypothesis by the different methods, based on their applications to the ![Formula][136] paired count outcomes simulated under the alternative hypothesis in Equation (8). As the type I error rates for GLMM (Poisson) were highly biased, power estimates from this method are not meaningful. Among the remaining four tests, the signed-rank test has the lowest power. The paired t-test, GLMM (NB) and both GEE methods yield comparable power estimates, though both GEE methods and GLMM (NB) appear to perform best with a sample size of at least 25. When ![Formula][137] and the sample size is large (more than, say, 50 subjects), all tests have comparable power and the correct nominal significance level. Figures 1 and 2 show the power estimates under additional alternative hypotheses. The GLMM (NB) method appears to be less efficient for larger differences in means with sample sizes around 50 when ![Formula][138].

[Table 3](http://gpsych.bmj.com/content/31/1/e100004/T3). Power estimates from testing the null of no difference between paired outcomes by different methods over *M*=2000 MC replicates.

![Figure 1](https://gpsych.bmj.com/content/gpsych/31/1/e100004/F1.medium.gif)

[Figure 1](http://gpsych.bmj.com/content/31/1/e100004/F1). Power for each method under different alternative hypotheses. Data are generated with larger dispersion (ie, ![Formula][139]). GEE, generalised estimating equation; GLMM, generalised linear mixed-effects model; NB, negative binomial.

![Figure 2](https://gpsych.bmj.com/content/gpsych/31/1/e100004/F2.medium.gif)

[Figure 2](http://gpsych.bmj.com/content/31/1/e100004/F2). Power for each method under different alternative hypotheses. Data are generated with smaller dispersion (ie, ![Formula][140]), more similar to a Poisson distribution. GEE, generalised estimating equation; GLMM, generalised linear mixed-effects model; NB, negative binomial.

## Discussion

In this report, we discussed several methods for testing differences in paired count outcomes. Unlike paired continuous and binary outcomes, the analysis of paired count outcomes has received less attention in the literature. Although the signed-rank test is often used, it is not an optimal test. This is because it uses ranks, rather than the original count outcomes (the differences between the paired counts), resulting in a loss of information and reduced power. Thus, unless study data depart severely from the normal distribution, the signed-rank test is not used for comparing paired continuous outcomes, as the paired t-test is a more powerful test. Within the current context of count outcomes, the signed-rank test again shows subpar power when compared with tests that are based on the full data, such as the GEE.
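To make this comparison concrete, the following minimal Monte Carlo sketch estimates rejection rates for the signed-rank and paired t-tests (the latter behaving much like the identity-link GEE in this setting, as discussed below). It reuses the hypothetical `simulate_paired_nb` helper from the earlier sketch; the sample size, means, dispersion and number of replicates are illustrative assumptions, not the exact settings of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def rejection_rate(n, mu1, mu2, tau, rho, n_rep=2000, alpha=0.05):
    """Proportion of MC replicates in which each test rejects the null."""
    rejections = {"signed-rank": 0, "paired t": 0}
    for _ in range(n_rep):
        # simulate_paired_nb is the hypothetical copula helper sketched earlier
        y1, y2 = simulate_paired_nb(n, mu1, mu2, tau, rho, rng)
        if stats.wilcoxon(y1, y2).pvalue < alpha:
            rejections["signed-rank"] += 1
        if stats.ttest_rel(y1, y2).pvalue < alpha:
            rejections["paired t"] += 1
    return {test: count / n_rep for test, count in rejections.items()}

# Equal means: estimates of type I error; unequal means: estimates of power
print(rejection_rate(n=50, mu1=2.0, mu2=2.0, tau=1.0, rho=0.5))
print(rejection_rate(n=50, mu1=2.0, mu2=2.8, tau=1.0, rho=0.5))
```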
The simulation study in this report also shows that parametric models for count outcomes, such as the GLMM with a Poisson model for the marginal counts, are quite sensitive to departures from the assumed parametric models. As expected, semiparametric models such as the GEE provide better performance. The paired t-test also seems to perform quite well. This is not really surprising, since within the current context the GEE and the paired t-test are essentially the same, except that the former relies on the asymptotic normal distribution for inference, while the latter uses the t distribution. As the sample size grows, the t distribution becomes closer to the standard normal distribution. Thus, p values and power estimates differ only slightly between the two, even for small to moderate samples.

The simulation results also show some small bias for all the asymptotic tests, that is, the signed-rank test, the GLMM and the GEE, especially for small sample sizes. In most clinical studies, sample sizes are relatively large and this limitation has no significant impact. For studies with small samples, such as those in the bench sciences, bias in type I error rates may be high and requires attention. One popular statistical approach is to use resampling methods such as permutation.3 Within the current context of paired count responses, the permutation technique is readily implemented. For example, we first decide, at random within each pair, whether to switch the order of the paired outcomes ![Formula][141], then apply any of the tests considered above, such as the GEE, and compute the test statistic based on the 'permuted' sample. We repeat this process M times (such as M=1000) and obtain a sampling distribution of the test statistic. If the statistic based on the original data falls either below the 2.5th or above the 97.5th percentile of this distribution, we reject the null. Under permutation, model assumptions such as the Poisson in the GLMM have no impact on inference and all the tests provide valid inference.

### Supplementary data

[[gpsych-2018-100004-supp1.docx]](pending:yes)

## Footnotes

* Contributors JAP directed all simulation studies, ran some of the simulation examples and helped edit and finalise the manuscript. TL helped run some of the simulation examples and drafted some parts of the manuscript. BW helped check some of the simulation study results and draft part of the simulation results. XMT helped draft and finalise the manuscript.
* Funding The report was partially supported by the National Institutes of Health, Grant UL1TR001442 of CTSA funding.
* Disclaimer The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
* Competing interests None declared.
* Patient consent Not required.
* Provenance and peer review Not commissioned; externally peer reviewed.
* Data statement No additional data are available.
* Received August 13, 2018.
* Accepted August 13, 2018.
* © Author(s) (or their employer(s)) 2018. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: [http://creativecommons.org/licenses/by-nc/4.0](http://creativecommons.org/licenses/by-nc/4.0)

## References
1. Tang W, He H, Tu XM. Applied categorical and count data analysis. Boca Raton, FL: Chapman & Hall/CRC, 2012.
2. Yan J. Enjoy the Joy of Copulas: With a Package copula. J Stat Softw 2007;21:1–21.
3. Efron B, Tibshirani R. An introduction to the bootstrap. New York, NY: Springer Science+Business Media, 1993.

James Proudfoot obtained a master's degree in statistics from the University of British Columbia. He is currently in the Biostatistical and Epidemiology Research Division at UC San Diego, CA. His work involves the preliminary assessment of manuscripts, consultation on biostatistical analysis, and research on the application of statistical methods in psychiatric studies.
![][142] [1]: /embed/mml-math-1.gif [2]: /embed/mml-math-2.gif [3]: /embed/mml-math-3.gif [4]: /embed/mml-math-4.gif [5]: /embed/mml-math-5.gif [6]: /embed/mml-math-6.gif [7]: /embed/mml-math-7.gif [8]: /embed/mml-math-8.gif [9]: /embed/mml-math-9.gif [10]: /embed/mml-math-10.gif [11]: /embed/mml-math-11.gif [12]: /embed/mml-math-12.gif [13]: /embed/mml-math-13.gif [14]: /embed/mml-math-14.gif [15]: /embed/mml-math-15.gif [16]: /embed/mml-math-16.gif [17]: /embed/mml-math-17.gif [18]: /embed/mml-math-18.gif [19]: /embed/mml-math-19.gif [20]: /embed/mml-math-20.gif [21]: /embed/mml-math-21.gif [22]: /embed/mml-math-22.gif [23]: /embed/mml-math-23.gif [24]: /embed/mml-math-24.gif [25]: /embed/mml-math-25.gif [26]: /embed/mml-math-26.gif [27]: /embed/mml-math-27.gif [28]: /embed/mml-math-28.gif [29]: /embed/mml-math-29.gif [30]: /embed/mml-math-30.gif [31]: /embed/mml-math-31.gif [32]: /embed/mml-math-32.gif [33]: /embed/mml-math-33.gif [34]: /embed/mml-math-34.gif [35]: /embed/mml-math-35.gif [36]: /embed/mml-math-37.gif [37]: /embed/mml-math-38.gif [38]: /embed/mml-math-39.gif [39]: /embed/mml-math-40.gif [40]: /embed/mml-math-41.gif [41]: /embed/mml-math-42.gif [42]: /embed/mml-math-43.gif [43]: /embed/mml-math-44.gif [44]: /embed/mml-math-45.gif [45]: /embed/mml-math-46.gif [46]: /embed/mml-math-47.gif [47]: /embed/mml-math-48.gif [48]: /embed/mml-math-49.gif [49]: /embed/mml-math-50.gif [50]: /embed/mml-math-51.gif [51]: /embed/mml-math-52.gif [52]: /embed/mml-math-53.gif [53]: /embed/mml-math-54.gif [54]: /embed/mml-math-55.gif [55]: /embed/mml-math-56.gif [56]: /embed/mml-math-57.gif [57]: /embed/mml-math-58.gif [58]: /embed/mml-math-59.gif [59]: /embed/mml-math-60.gif [60]: /embed/mml-math-61.gif [61]: /embed/mml-math-62.gif [62]: /embed/mml-math-63.gif [63]: /embed/mml-math-64.gif [64]: /embed/mml-math-65.gif [65]: /embed/mml-math-66.gif [66]: /embed/mml-math-67.gif [67]: /embed/mml-math-68.gif [68]: /embed/mml-math-69.gif [69]: /embed/mml-math-70.gif [70]: /embed/mml-math-71.gif [71]: /embed/mml-math-72.gif [72]: /embed/mml-math-73.gif [73]: /embed/mml-math-74.gif [74]: /embed/mml-math-75.gif [75]: /embed/mml-math-76.gif [76]: /embed/mml-math-77.gif [77]: /embed/mml-math-78.gif [78]: /embed/mml-math-79.gif [79]: /embed/mml-math-80.gif [80]: /embed/mml-math-81.gif [81]: /embed/mml-math-82.gif [82]: /embed/mml-math-83.gif [83]: /embed/mml-math-84.gif [84]: /embed/mml-math-85.gif [85]: /embed/mml-math-86.gif [86]: /embed/mml-math-87.gif [87]: /embed/mml-math-88.gif [88]: /embed/mml-math-89.gif [89]: /embed/mml-math-90.gif [90]: /embed/mml-math-91.gif [91]: /embed/mml-math-92.gif [92]: /embed/mml-math-93.gif [93]: /embed/mml-math-94.gif [94]: /embed/mml-math-95.gif [95]: /embed/mml-math-96.gif [96]: /embed/mml-math-97.gif [97]: /embed/mml-math-98.gif [98]: /embed/mml-math-99.gif [99]: /embed/mml-math-100.gif [100]: /embed/mml-math-101.gif [101]: /embed/mml-math-102.gif [102]: /embed/mml-math-103.gif [103]: /embed/mml-math-104.gif [104]: /embed/mml-math-105.gif [105]: /embed/mml-math-106.gif [106]: /embed/mml-math-107.gif [107]: /embed/mml-math-108.gif [108]: /embed/mml-math-109.gif [109]: /embed/mml-math-110.gif [110]: /embed/mml-math-111.gif [111]: /embed/mml-math-112.gif [112]: /embed/mml-math-113.gif [113]: /embed/mml-math-114.gif [114]: /embed/mml-math-115.gif [115]: /embed/mml-math-116.gif [116]: /embed/mml-math-117.gif [117]: /embed/mml-math-118.gif [118]: /embed/mml-math-119.gif [119]: /embed/mml-math-120.gif [120]: /embed/mml-math-121.gif [121]: /embed/mml-math-122.gif [122]: 
/embed/mml-math-123.gif [123]: /embed/mml-math-124.gif [124]: /embed/mml-math-125.gif [125]: /embed/mml-math-126.gif [126]: /embed/mml-math-129.gif [127]: /embed/mml-math-130.gif [128]: /embed/mml-math-131.gif [129]: /embed/mml-math-132.gif [130]: /embed/mml-math-133.gif [131]: /embed/mml-math-134.gif [132]: /embed/mml-math-135.gif [133]: /embed/mml-math-136.gif [134]: /embed/mml-math-137.gif [135]: /embed/mml-math-138.gif [136]: /embed/mml-math-139.gif [137]: /embed/mml-math-140.gif [138]: /embed/mml-math-141.gif [139]: F1/embed/mml-math-144.gif [140]: F2/embed/mml-math-145.gif [141]: /embed/mml-math-146.gif [142]: /embed/graphic-1.gif