Abstract
Objective: To propose simple capillaroscopic definitions for interpretation of capillaroscopic morphologies and to assess inter-rater reliability.
Methods: The simple definitions proposed were: normal (hairpin, tortuous or crossing); abnormal (not hairpin, not tortuous and not crossing); not evaluable (whenever the rater was undecided between normal and abnormal). Based on a target kappa of 0.80 and default prevalences of normal (0.4), abnormal (0.4) and not evaluable (0.2) capillaries, 90 single capillaries were presented to three groups of raters: experienced independent raters, n = 5; attendees of the sixth EULAR capillaroscopy course, n = 34; novices after a 1-h course, n = 11. Inter-rater agreement was assessed by calculating the proportion of agreement and kappa coefficients.
Results: Mean kappa based on 90 capillaries was 0.47 (95% CI: 0.39, 0.54) for expert raters, 0.40 (95% CI: 0.36, 0.44) for attendees and 0.46 (95% CI: 0.41, 0.52) for novices, with overall agreements of 67% (95% CI: 63, 71), 63% (95% CI: 60, 65) and 67% (95% CI: 63, 70), respectively. Comparing only normal vs the combined groups of abnormal and not evaluable capillaries increased the kappa: 0.51 (95% CI: 0.37, 0.65), 0.53 (95% CI: 0.49, 0.58) and 0.55 (95% CI: 0.49, 0.62). On the condition that the capillaries were classifiable, the mean kappa was 0.62 (95% CI: 0.50, 0.74) for expert raters (n = 65), 0.76 (95% CI: 0.69, 0.83) for attendees (n = 20) and 0.81 (95% CI: 0.74, 0.89) for novices (n = 44).
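To illustrate the agreement statistic reported above, the following is a minimal sketch of Cohen's kappa for a single pair of raters over the three proposed categories. The rating lists are hypothetical examples; the study's mean kappa presumably averages over rater pairs, and its exact computation is not specified in this record.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters classifying the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected is chance agreement from each rater's
    marginal category frequencies.
    """
    assert len(r1) == len(r2)
    n = len(r1)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement by chance, from marginal frequencies
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical ratings over five capillaries using the three
# categories from the proposed definitions
rater1 = ["normal", "normal", "abnormal", "abnormal", "not evaluable"]
rater2 = ["normal", "abnormal", "abnormal", "abnormal", "not evaluable"]
print(cohen_kappa(rater1, rater2))  # 0.6875
```

Collapsing "abnormal" and "not evaluable" into one category before calling `cohen_kappa` reproduces the two-category comparison reported in the Results.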
Original language | English |
---|---|
Pages (from-to) | 883-890 |
Number of pages | 8 |
Journal | Rheumatology |
Volume | 55 |
Issue | 5 |
DOI | |
Status | Published - 1 May 2016 |
Bibliographical note
Publisher Copyright: © The Author 2015.
Minciencias Product Types
- Research articles of A1 / Q1 quality