Digital Repository

Conditional Molecule Generation Using Transformer Decoder


dc.contributor.advisor Priyakumar, U Deva en_US
dc.contributor.author BAGAL, VIRAJ en_US
dc.date.accessioned 2021-07-06T05:42:51Z
dc.date.available 2021-07-06T05:42:51Z
dc.date.issued 2021-07
dc.identifier.citation 51 en_US
dc.identifier.uri http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/6020
dc.description In this thesis, we propose LigGPT, a transformer-decoder model for conditional molecular generation. The main component of the model is the masked self-attention mechanism, which allows it to learn the SMILES grammar and long-range dependencies very well. LigGPT achieves validity and uniqueness scores comparable to other models on the MOSES dataset and outperforms them in terms of novelty and internal diversity; it also performs better than other models on the GuacaMol dataset. Using saliency maps, we show that the generative process of the model is interpretable. LigGPT is more data-efficient than the widely used character-based recurrent neural network, as is evident from training on only ten percent of the data. Apart from unconditional generation, we show the ability of LigGPT to generate molecules conditioned on properties. Moreover, it can also be trained to retain a scaffold structure while generating molecules having desired values of certain properties. This can have tremendous applications in any sector that involves the creation of novel molecules. We also demonstrate LigGPT's usage in one-shot lead optimization. Consequently, LigGPT is a strong model with the capability of making a positive impact on real-world applications of molecular generation. en_US
dc.description.abstract Deep learning is being widely used for de novo generation of molecules. Molecules can be represented as strings of characters (the SMILES representation), which allows the implementation of transformer architectures. In this work, we propose a transformer-decoder-based network for the generation of molecules with high validity, uniqueness, and novelty. The proposed model is capable of conditional generation, where the condition can be a scaffold and/or multiple physicochemical properties. Moreover, we show that saliency maps can be used to make the generative process interpretable. en_US
dc.language.iso en_US en_US
dc.subject deep learning en_US
dc.subject molecule generation en_US
dc.subject natural language generation en_US
dc.subject interpretability en_US
dc.subject conditional generation en_US
dc.subject lead optimization en_US
dc.subject self supervised learning en_US
dc.title Conditional Molecule Generation Using Transformer Decoder en_US
dc.type Thesis en_US
dc.type.degree BS-MS en_US
dc.contributor.department Dept. of Chemistry en_US
dc.contributor.registration 20161150 en_US
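The masked self-attention that the abstract credits with learning SMILES grammar is not reproduced in this record; as a generic illustration only, the sketch below shows single-head causal self-attention in NumPy, where each SMILES token position may attend to itself and earlier positions but never to future ones. All names (`causal_self_attention`, the weight matrices `Wq`, `Wk`, `Wv`, the toy sizes) are hypothetical and not taken from LigGPT.

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head masked (causal) self-attention.

    x: (T, d) token embeddings; Wq, Wk, Wv: (d, d) projection matrices.
    Returns (output, attention_weights); row t of the weights is zero
    for all positions > t, so no token sees the future.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (T, T) attention logits
    mask = np.triu(np.ones_like(scores), k=1)      # 1s strictly above diagonal
    scores = np.where(mask == 1, -1e9, scores)     # block attention to future tokens
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Toy example: 4 token positions, embedding width 8.
rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, w = causal_self_attention(x, Wq, Wk, Wv)
```

In a decoder-only generative model, stacking such masked attention layers and training with next-token prediction over SMILES strings is what allows autoregressive sampling of new molecules character by character.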


Files in this item

This item appears in the following Collection(s)

  • MS THESES [1705]
    Thesis submitted to IISER Pune in partial fulfilment of the requirements for the BS-MS Dual Degree Programme/MSc. Programme/MS-Exit Programme

