Please use this identifier to cite or link to this item:
http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/9843
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Bhatta, Varun | - |
dc.contributor.author | AGRAWAL, SARANSH | - |
dc.date.accessioned | 2025-05-14T06:30:04Z | - |
dc.date.available | 2025-05-14T06:30:04Z | - |
dc.date.issued | 2025-05 | - |
dc.identifier.citation | 115 | en_US |
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/9843 | - |
dc.description.abstract | The widespread adoption of Machine Learning (ML) and Artificial Intelligence (AI) in scientific practice has raised novel philosophical questions concerning the epistemic status of these technologies. In this thesis, I examine the property of epistemic opacity (also referred to as the “black-box” problem), which poses significant challenges for the users of these technologies. I provide an argumentative literature review of epistemic opacity in AI-ML models and analyze how the black-box nature of these technologies undermines the epistemic goals for which the models are deployed. Unlike the theoretically grounded models of conventional scientific practice, ML models make inferences by identifying statistical correlations in the data itself. Even when an ML model makes accurate predictions, the scientists who constructed it might lack access to its “inner workings”, because they lack a direct theoretical interpretation of the model’s epistemic components, such as the significance of the weights assigned to the parameters constituting a neural network. This raises questions about the epistemic justification for using ML techniques in scientific practice. Moreover, it has led to widespread debate concerning the trade-offs between predictive capability, explanatory value, theoretical understanding, and other epistemic desiderata for working scientists. I aim to contribute to this debate by highlighting the plurality of meanings attributed to fundamental scientific concepts such as prediction and discovery, and by arguing for the utility of distinguishing between the different conceptual notions associated with these terms. Furthermore, I argue that discovery and prediction claims in ML modelling rely on different modes of justification than those of conventional scientific practice, and I show how these modes of justification can shape the meanings taken up by the concepts of discovery and prediction in the context of ML modelling in science. | en_US |
dc.language.iso | en | en_US |
dc.subject | Philosophy of Science | en_US |
dc.subject | ML Modelling | en_US |
dc.subject | Philosophy of AI | en_US |
dc.subject | Scientific Epistemology | en_US |
dc.subject | Epistemic Opacity | en_US |
dc.subject | Black Box | en_US |
dc.title | Philosophical Analyses of ML Modelling in Science | en_US |
dc.type | Thesis | en_US |
dc.description.embargo | One Year | en_US |
dc.type.degree | BS-MS | en_US |
dc.contributor.department | Interdisciplinary | en_US |
dc.contributor.registration | 20201236 | en_US |
Appears in Collections: | MS THESES |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
20201236_Saransh_Agrawal_MS_Thesis.pdf | MS Thesis | 953.13 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.