Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/9553
Full metadata record
DC Field | Value | Language
dc.contributor.author | Phatak, Sanat | en_US
dc.contributor.author | Saptarshi, Ruchil | en_US
dc.contributor.author | Sharma, Vanshaj | en_US
dc.contributor.author | Shah, Rohan | en_US
dc.contributor.author | Zanwar, Abhishek | en_US
dc.contributor.author | Hegde, Pratiksha | en_US
dc.contributor.author | CHAKRABORTY, SOMASHREE | en_US
dc.contributor.author | GOEL, PRANAY | en_US
dc.date.accessioned | 2025-04-15T06:53:30Z | -
dc.date.available | 2025-04-15T06:53:30Z | -
dc.date.issued | 2024-12 | en_US
dc.identifier.citation | Rheumatology. | en_US
dc.identifier.issn | 1462-0324 | en_US
dc.identifier.issn | 1462-0332 | en_US
dc.identifier.uri | https://doi.org/10.1093/rheumatology/keae678 | en_US
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/9553 | -
dc.description.abstract | Objectives: Convolutional neural networks (CNNs) are increasingly used to classify medical images, but few studies utilize smartphone photographs. The objective of this study was to assess CNNs for differentiating patients from controls and detecting joint inflammation.
Methods: We included consecutive patients with early inflammatory arthritis and healthy controls, all examined by a rheumatologist (15% by two). Standardized photographs of the hands were taken, anonymized and cropped around the joints. Pre-trained CNN models were fine-tuned on our dataset (80% training; 20% test set). We used an Inception-ResNet-v2 backbone CNN modified for a two-class output (patient vs control) on uncropped photos. Separate Inception-ResNet-v2 CNNs were trained on cropped photos of the middle finger proximal interphalangeal (MFPIP), index finger proximal interphalangeal (IFPIP) and wrist joints. We report accuracy, sensitivity, specificity and area under the receiver operating characteristic curve (AUC).
Results: We analysed 800 hands from 200 controls (mean age 37.8 years) and 200 patients (mean age 49 years). The two rheumatologists showed 0.89 concordance. The wrist was the most commonly involved joint (173/400), followed by the MFPIP (134) and the IFPIP (128). The screening CNN achieved 99% accuracy, 99% specificity and 98% sensitivity in distinguishing patients from controls. Joint-specific CNN accuracy, sensitivity, specificity and AUC were as follows: wrist (75%, 92%, 72% and 0.86, respectively), IFPIP (73%, 89%, 72% and 0.88, respectively) and MFPIP (71%, 91%, 70% and 0.87, respectively).
Conclusion: Computer vision can distinguish patients from controls using smartphone photographs, showing promise as a screening tool. Future research will focus on validating these findings in diverse populations and other joints, and on integrating this technology into clinical workflows. | en_US
dc.language.iso | en | en_US
dc.publisher | Oxford University Press | en_US
dc.subject | Computer vision | en_US
dc.subject | convolutional neural network | en_US
dc.subject | Inflammatory arthritis | en_US
dc.subject | Rheumatoid arthritis | en_US
dc.subject | 2024 | en_US
dc.title | Incorporating computer vision on smart phone photographs into screening for inflammatory arthritis: results from an Indian patient cohort | en_US
dc.type | Article | en_US
dc.contributor.department | Dept. of Biology | en_US
dc.identifier.sourcetitle | Rheumatology | en_US
dc.publication.originofpublisher | Foreign | en_US
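
The Methods in the abstract above describe fine-tuning an ImageNet-pretrained Inception-ResNet-v2 backbone for a two-class (patient vs control) output on an 80/20 split, then reporting accuracy, sensitivity, specificity and AUC. The following is a minimal, hypothetical Keras sketch of that setup, not the authors' code: the photos/ directory layout, the split seed and all hyperparameters are illustrative assumptions.

```python
# Sketch (assumed, not the authors' code): fine-tune Inception-ResNet-v2
# for a two-class output, then report accuracy/sensitivity/specificity/AUC.
import numpy as np
from tensorflow import keras
from sklearn.metrics import confusion_matrix, roc_auc_score

IMG_SIZE = (299, 299)   # native Inception-ResNet-v2 input resolution
BATCH = 32

# Hypothetical layout: photos/patient/*.jpg and photos/control/*.jpg;
# validation_split=0.2 mirrors the paper's 80% train / 20% test split.
train_ds = keras.utils.image_dataset_from_directory(
    "photos", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
test_ds = keras.utils.image_dataset_from_directory(
    "photos", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

# Pre-trained backbone with the ImageNet head removed; frozen here for a
# first fine-tuning pass (unfreezing the top blocks later is common).
backbone = keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
backbone.trainable = False

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = keras.applications.inception_resnet_v2.preprocess_input(inputs)
x = backbone(x, training=False)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # patient vs control
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Metrics on the held-out set, derived from the confusion matrix at a
# 0.5 threshold; a single pass keeps labels and predictions aligned.
y_true, y_prob = [], []
for images, labels in test_ds:
    y_true.append(labels.numpy().ravel())
    y_prob.append(model.predict_on_batch(images).ravel())
y_true = np.concatenate(y_true).astype(int)
y_pred = (np.concatenate(y_prob) >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy    {(tp + tn) / len(y_true):.2f}")
print(f"sensitivity {tp / (tp + fn):.2f}")
print(f"specificity {tn / (tn + fp):.2f}")
print(f"AUC         {roc_auc_score(y_true, np.concatenate(y_prob)):.2f}")
```

Per the abstract, the joint-specific figures (wrist, IFPIP, MFPIP) came from separate networks of the same architecture trained on cropped joint photographs, which this sketch would accommodate by pointing the dataset loader at the cropped images.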
Appears in Collections: JOURNAL ARTICLES

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.