Abstract:
In industrial power systems, boiler efficiency is an important factor in reducing operational costs and environmental impact. Conventional control methods rely largely on fixed rules and reactive adjustments, and therefore fail to address the complex, nonlinear interdependencies among operational parameters. To overcome these limitations, we develop an integrated framework that combines ARIMA forecasting with Q-learning-based reinforcement learning to optimize boiler performance proactively. In our approach, historical sensor data are first preprocessed and modeled using an ARIMA (Autoregressive Integrated Moving Average) model to predict future trends for key operational variables. These forecasted values are then passed as input to a Q-learning agent, which formulates control actions by treating the boiler's operation as a Markov Decision Process (MDP). The agent selects actions—such as fine-tuning fuel flow or adjusting pressure settings—to maximize a reward function defined in terms of improved efficiency, reduced fuel usage, and lower emissions. To allow operators to specify the required level of efficiency and to visualize real-time adjustments, a user-friendly Gradio interface has been developed. This human-in-the-loop design provides transparency and allows manual intervention when necessary. Experimental simulations indicate that our integrated system not only shifts boiler management from a reactive to a proactive paradigm but also delivers statistically significant improvements in efficiency compared to conventional methods. The advantages of this framework include its ability to adapt to dynamic conditions and its potential for cost savings and emission reduction. However, the approach is sensitive to data quality—the accuracy of the ARIMA model critically depends on the quality and granularity of historical data—and the Q-learning algorithm requires careful hyperparameter tuning to ensure robust performance. Despite these challenges, the simulation results validate that combining ARIMA forecasting with adaptive reinforcement learning offers a reliable solution for real-time boiler optimization. Altogether, our integrated approach provides a viable path toward sustainable, efficient, and proactive boiler control in complex industrial settings.
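To make the coupling between forecasting and control concrete, the following is a minimal Python sketch of the pipeline outlined above: an ARIMA forecast of an operational variable is discretized into a state, and a tabular Q-learning update adjusts the policy. It assumes statsmodels for ARIMA; the state discretization, the three-action set (decrease/hold/increase fuel flow), and the reward expression are illustrative placeholders, not the actual implementation or tuning used in this work.

```python
# Illustrative sketch: ARIMA forecast feeding a tabular Q-learning update.
# Discretization, action set, and reward are placeholders for exposition.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def forecast_variable(history, steps=1, order=(2, 1, 1)):
    """Fit an ARIMA model to a historical sensor series and forecast ahead."""
    fitted = ARIMA(history, order=order).fit()
    return fitted.forecast(steps=steps)

# Hypothetical discretization of the forecasted variable into a small state space.
N_STATES, N_ACTIONS = 10, 3           # actions: decrease / hold / increase fuel flow
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def to_state(value, low=0.0, high=100.0):
    """Map a continuous forecast into one of N_STATES bins."""
    idx = int((value - low) / (high - low) * N_STATES)
    return min(max(idx, 0), N_STATES - 1)

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# One proactive control step: forecast, act on the predicted state, learn.
history = np.random.rand(200) * 100   # stand-in for preprocessed sensor data
predicted = forecast_variable(history)[0]
s = to_state(predicted)
a = choose_action(s)
# The real reward combines efficiency gain, fuel savings, and emission reduction;
# this dummy term only penalizes deviation from a nominal setpoint.
r = -abs(predicted - 75.0)
s_next = to_state(predicted)          # placeholder transition
q_update(s, a, r, s_next)
```

In the full system, this loop would run on each forecasting horizon, with the Gradio interface exposing the target efficiency and the agent's suggested adjustments to the operator.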