New Material Discovery Using Physics-Inspired Reinforcement Learning Techniques

dc.contributor.advisor Bajaj, Chandrajit
dc.contributor.author Bansal, Garvit
dc.date.accessioned 2025-05-21T09:29:05Z
dc.date.available 2025-05-21T09:29:05Z
dc.date.issued 2025-05
dc.identifier.citation 58 en_US
dc.identifier.uri http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/10067
dc.description.abstract The accelerated discovery of novel materials with targeted properties is crucial for advancing technological fields such as photovoltaics, catalysis, and energy storage. Traditional methods for material discovery often rely on experimental intuition or computational brute-force screening, both of which become increasingly inefficient as the complexity and dimensionality of the search space increase. To overcome these limitations, this thesis introduces a novel computational framework based on physics-inspired reinforcement learning (RL), specifically leveraging Stochastic Hamiltonian Dynamics (SHD), to efficiently explore high-dimensional search spaces for materials optimization. Material properties can be computed with first-principles techniques such as Density Functional Theory (DFT), which, although accurate, are computationally demanding. To address this challenge, we incorporate local surrogate modeling to approximate expensive DFT computations rapidly and cost-effectively. These inherently differentiable surrogate models significantly accelerate property evaluations, enabling efficient gradient-based optimization within both structural and chemical spaces. Additionally, to navigate the categorical variables associated with chemical compositions, we employ the Gumbel-Softmax reparameterization technique, which transforms discrete choices into differentiable, continuous variables. Optimal control theory, particularly the Stochastic Pontryagin Maximum Principle (PMP), underpins our approach, providing a systematic framework for stable convergence and global optimization. By adapting SHD within this control-theoretic framework, we combine deterministic momentum-driven exploration with controlled stochastic perturbations, effectively avoiding local minima and promoting efficient global optimization. Ultimately, this thesis advances computational methodologies in materials science by establishing a robust, scalable, and versatile discovery framework. By integrating physical insights, reinforcement learning strategies, surrogate modeling, and optimal control, we provide a transformative approach that addresses critical computational barriers and accelerates the discovery of new materials with optimal properties. en_US
dc.description.sponsorship The University of Texas at Austin en_US
dc.language.iso en_US en_US
dc.subject Reinforcement Learning en_US
dc.subject Optimal Control en_US
dc.subject Density Functional Theory en_US
dc.subject Pontryagin Maximum Principle en_US
dc.subject Photovoltaics en_US
dc.subject Global Optimization en_US
dc.title New Material Discovery Using Physics-Inspired Reinforcement Learning Techniques en_US
dc.type Thesis en_US
dc.description.embargo One Year en_US
dc.type.degree BS-MS en_US
dc.contributor.department Dept. of Physics en_US
dc.contributor.registration 20201013 en_US


This item appears in the following Collection(s)

  • MS THESES [1980]
    Thesis submitted to IISER Pune in partial fulfilment of the requirements for the BS-MS Dual Degree Programme/MSc. Programme/MS-Exit Programme
