Roel Bogie

Chapter 2

the e-learning and contained the same cases as the pre-test, in a different order to limit recall bias.

Statistical analysis

We calculated the IOA among raters for the endoscopic Kudo classification and the Paris classification of LSTs using Fleiss kappa coefficients with 95% confidence intervals and Gwet's first-order agreement coefficients where appropriate.16-18 In case of an unequal proportion of endoscopic subtypes, Gwet's first-order agreement coefficient (AC1) is a more robust tool than the Fleiss kappa coefficient.18 We examined difficulties in the differentiation between subtypes by calculating the proportion of pairwise agreement.9 To examine the spread of the agreement, Cohen's kappa coefficients were calculated between all possible pairs of expert raters. To compare performance between endoscopy fellows and experts, an overall expert opinion was derived from the most common answer; concordance between endoscopy fellows and this extracted answer was then calculated as the proportion of matching answers. We categorized the level of IOA according to Landis and Koch:19 perfect agreement, almost perfect agreement, substantial agreement, moderate agreement, fair agreement, slight agreement and less than chance agreement (kappa coefficient: 1, 0.81-0.99, 0.61-0.80, 0.41-0.60, 0.21-0.40, 0.01-0.20 and <0, respectively). We performed a sensitivity analysis to adjust for the high impact of single raters. We used an adapted paired t-test to compare the agreement coefficients before and after training.20 A Z-test was used to test differences in agreement between groups. P values ≤0.05 were considered significant. Statistical analyses were performed using R 3.2.2 for Microsoft Windows21 with the 'kappaSize' package for sample size calculations15,22 and the R scripts of Advanced Analytics, LLC.18,20,23
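The two agreement coefficients used above differ only in how they estimate chance agreement. A minimal sketch of both calculations (not the study's actual analysis, which used R with the 'kappaSize' package and the Advanced Analytics scripts; the rating matrix below is an invented toy example):

```python
# Sketch: Fleiss' kappa and Gwet's AC1 from a subjects x categories
# count matrix, where counts[i][j] is the number of raters who assigned
# subject (case) i to category j. Assumes the same number of raters per case.

def agreement_coefficients(counts):
    N = len(counts)        # number of subjects (cases)
    K = len(counts[0])     # number of categories
    n = sum(counts[0])     # raters per subject

    # Overall proportion of assignments falling in each category
    p = [sum(row[j] for row in counts) / (N * n) for j in range(K)]

    # Observed agreement: mean over subjects of the proportion of
    # agreeing rater pairs
    Pa = sum((sum(c * c for c in row) - n) / (n * (n - 1))
             for row in counts) / N

    # Fleiss' chance agreement: probability two random raters pick the
    # same category by prevalence alone
    Pe_fleiss = sum(pj ** 2 for pj in p)
    fleiss = (Pa - Pe_fleiss) / (1 - Pe_fleiss)

    # Gwet's chance agreement stays small when categories are unbalanced,
    # which is why AC1 is the more robust coefficient in that situation
    Pe_gwet = sum(pj * (1 - pj) for pj in p) / (K - 1)
    ac1 = (Pa - Pe_gwet) / (1 - Pe_gwet)

    return Pa, fleiss, ac1


# Toy example: 4 cases, 3 raters, 2 categories
Pa, fleiss, ac1 = agreement_coefficients([[3, 0], [2, 1], [0, 3], [3, 0]])
```

With unbalanced category prevalence, `Pe_gwet` is smaller than `Pe_fleiss`, so AC1 is penalized less for chance agreement than Fleiss' kappa, which illustrates why the two coefficients can diverge for the same data.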

Table 2.1: Overview of the inter-observer agreement results among experts.

Feature                        | Mean pairwise agreement | Fleiss kappa [95% CI] | Gwet's AC1 [95% CI]
-------------------------------|-------------------------|-----------------------|--------------------
Kudo endoscopic classification | 71.0%                   | 0.59 [0.58-0.61]*     | 0.62 [0.55-0.69]
Granular vs non-granular       | 87.1%                   | 0.74 [0.71-0.76]      | 0.75 [0.66-0.83]
Per Kudo subtype:              |                         |                       |
  LST-G-H                      | -                       | -                     | 0.56 [0.53-0.58]
  LST-G-NM                     | -                       | -                     | 0.76 [0.73-0.78]
  LST-NG-FE                    | -                       | -                     | 0.55 [0.52-0.57]
  LST-NG-PD                    | -                       | -                     | 0.53 [0.50-0.55]

*Fleiss kappa coefficients which do not match the mean pairwise agreement and the Gwet's AC1 score because of an unequal proportion of categories.
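The Landis and Koch categorization quoted in the methods can be captured in a small helper for reading off the level of agreement behind coefficients such as those in Table 2.1 (a sketch; treating a coefficient of exactly 0 as less than chance agreement is an assumption, since the quoted scheme lists 0.01-0.20 and <0):

```python
def landis_koch(k):
    """Map an agreement coefficient to the Landis and Koch category
    (1; 0.81-0.99; 0.61-0.80; 0.41-0.60; 0.21-0.40; 0.01-0.20; <0)."""
    if k >= 1.0:
        return "perfect"
    if k >= 0.81:
        return "almost perfect"
    if k >= 0.61:
        return "substantial"
    if k >= 0.41:
        return "moderate"
    if k >= 0.21:
        return "fair"
    if k > 0.0:  # assumption: 0 itself falls through to the last category
        return "slight"
    return "less than chance"
```

For example, `landis_koch(0.62)` returns `"substantial"` while `landis_koch(0.59)` returns `"moderate"`, showing how a small numerical difference can cross a category boundary.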
