Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/9843
Full metadata record
dc.contributor.advisor: Bhatta, Varun
dc.contributor.author: AGRAWAL, SARANSH
dc.date.accessioned: 2025-05-14T06:30:04Z
dc.date.available: 2025-05-14T06:30:04Z
dc.date.issued: 2025-05
dc.identifier.citation: 115 (en_US)
dc.identifier.uri: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/9843
dc.description.abstract: The widespread adoption of Machine Learning (ML) and Artificial Intelligence (AI) in scientific practice has raised novel philosophical questions concerning the epistemic status of these technologies. In this thesis, I examine the property of epistemic opacity (also referred to as the "black-box" problem), which poses significant challenges for the users of these technologies. I provide an argumentative literature review of epistemic opacity in AI-ML models and analyze how the black-box nature of these technologies undermines the epistemic goals for which the models are deployed. Unlike the theoretically grounded models of conventional scientific practice, ML models make inferences by identifying statistical correlations in the data itself. Although an ML model might make accurate predictions, even the scientists who constructed it may lack access to its "inner workings". This is because scientists lack a direct theoretical interpretation of the epistemic components of an ML model (for instance, the significance of the weights assigned to the parameters constituting a neural network). This raises questions about the epistemic justification for using ML techniques in scientific practice. Moreover, it has led to widespread debate concerning the trade-offs between predictive capability, explanatory value, theoretical understanding, and other epistemic desiderata for working scientists. I aim to contribute to this debate by highlighting the plurality of meanings attributed to fundamental scientific concepts such as prediction and discovery, and by arguing for the utility of distinguishing between the different conceptual notions associated with these terms. Furthermore, I argue that discovery and prediction claims in ML modelling rely on modes of justification different from those of conventional scientific practice, and I show how these modes of justification can shape the meanings that the concepts of discovery and prediction take on in the context of ML modelling in science. (en_US)
dc.language.iso: en (en_US)
dc.subject: Philosophy of Science (en_US)
dc.subject: ML Modelling (en_US)
dc.subject: Philosophy of AI (en_US)
dc.subject: Scientific Epistemology (en_US)
dc.subject: Epistemic Opacity (en_US)
dc.subject: Black Box (en_US)
dc.title: Philosophical Analyses of ML Modelling in Science (en_US)
dc.type: Thesis (en_US)
dc.description.embargo: One Year (en_US)
dc.type.degree: BS-MS (en_US)
dc.contributor.department: Interdisciplinary (en_US)
dc.contributor.registration: 20201236 (en_US)
Appears in Collections: MS THESES

Files in This Item:
File: 20201236_Saransh_Agrawal_MS_Thesis.pdf
Description: MS Thesis
Size: 953.13 kB
Format: Adobe PDF

