Abstract:
This thesis focuses on the problem of continual test-time domain adaptation in deep learning,
where a trained model must adapt to new and changing environments during deployment. The first contribution of this work is a novel strategy for obtaining a signal of domain shift, which enables the model to adapt to the current domain without overfitting to it and without compromising its ability to adapt to future domains. The second contribution is SATA, a novel framework that uses self-knowledge distillation and contrastive learning to adapt a pre-trained model under continual domain shift. The proposed framework improves the accuracy and stability of the model while reducing the time and space complexity of adaptation. The research conducted in this thesis contributes to the ongoing effort to develop more robust and reliable deep learning models that can adapt to new and changing environments.