Abstract:
This thesis presents a survey of recent developments at the intersection of causal inference and reinforcement learning (RL), with a focus on how causal reasoning can enhance sequential decision-making. We examine key motivations for integrating causal frameworks into RL, including improved sample efficiency, robustness to spurious correlations, and better generalization in partially observable environments. We outline major approaches that incorporate structural causal models, counterfactual reasoning, and causal discovery into standard RL pipelines. Particular attention is given to methods that address confounding bias and leverage causal graphs for policy improvement. We also provide a critical comparison of algorithms across experimental benchmarks and theoretical settings, highlighting their respective strengths and limitations. By synthesizing insights across multiple disciplines, this survey aims to provide a cohesive foundation for future research in causal reinforcement learning.