Abstract:
Speaker diarization, the task of determining "who spoke when" in an audio recording, is a critical component of applications such as meeting transcription, voice assistant technologies, and conversational analysis. Traditional clustering-based diarization methods struggle with overlapping speech, while end-to-end neural diarization (EEND) systems often lack robustness across diverse acoustic conditions. This thesis presents TS-VAD+, an enhanced transformer-based speaker diarization model that builds upon the TS-VAD framework by incorporating state-of-the-art speaker embeddings (ECAPA-TDNN), a WavLM-based speech encoder, and memory-aware attention mechanisms. These improvements aim to address key limitations in handling multi-speaker and overlapping speech scenarios. We evaluate TS-VAD+ on the DIHARD III dataset, demonstrating its effectiveness through systematic experiments. Pretraining on wideband (16 kHz) simulated data significantly improved domain adaptation, outperforming narrowband-pretrained models. Further refinements included VBx clustering, voice activity detection (VAD) postprocessing, and data augmentation. While the memory-module TS-VAD+ (mm-TS-VAD+) showed promising results in leveraging external speaker embeddings, its performance gains were limited by the size of the fine-tuning dataset.

Overall, TS-VAD+ demonstrates competitive performance in speaker diarization, particularly in high-overlap conditions. Future work could explore self-supervised speaker embeddings, dynamic memory mechanisms, and large-scale augmentation strategies to further enhance diarization accuracy and generalization across diverse domains.