dc.description.abstract |
The growing demand for personalized educational content has underscored the limitations of traditional, manually crafted question-generation methods, which struggle to scale while maintaining pedagogical quality. This thesis presents an AI-driven framework for automated question generation that integrates large language models (LLMs) with psycholinguistic principles to produce contextually relevant and cognitively appropriate questions. The core contribution is a two-stage training procedure for generating high-quality math word problems. First, Meta's LLaMA-2-7B is fine-tuned with QLoRA on a curated dataset of math problems. More powerful LLMs then refine and diversify the generated questions. A human annotation step follows, filtering out irrelevant outputs before a second round of QLoRA-based supervised fine-tuning. Additionally, this work explores context-based question generation through separate supervised fine-tuning of a T5-based model. |
en_US |