An intelligent extension of the training set for the Persian n-gram language model: an enrichment algorithm
DOI: https://doi.org/10.7764/onomazein.61.09

Keywords: training corpus, n-gram language model, dependency parsing, enrichment algorithm, free word-order

Abstract
In this article, we introduce an automatic mechanism that intelligently extends the training set to improve the Persian n-gram language model. Exploiting Persian's free word-order property, our enrichment algorithm diversifies the n-gram combinations in the baseline training data through dependency reordering, adding permissible sentences and filtering out ungrammatical ones with a hybrid empirical (heuristic) and linguistic approach. Experiments on the baseline training set (taken from a standard Persian corpus) and the resulting enriched training set show a decline in average relative perplexity of between 34% and 73% for informal/spoken versus formal/written Persian test data.
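The core augmentation idea, generating word-order variants of a sentence by permuting dependents in its dependency parse, can be sketched as follows. This is a toy illustration, not the authors' implementation: the tree representation, the head-final linearization, and the unconstrained permutation of dependent subtrees are all simplifying assumptions, and the paper's empirical/linguistic filtering of ungrammatical outputs is omitted.

```python
from itertools import permutations

# Toy dependency tree: each node is (word, [child subtrees]).
# Example (English gloss of a Persian-like clause): the verb "read"
# heads two dependents, the subject "Ali" and the object "the book".

def linearize(node):
    """Flatten a (head, children) tree into a word list, placing the
    head last -- a crude stand-in for Persian's head-final tendency."""
    word, children = node
    words = []
    for child in children:
        words.extend(linearize(child))
    words.append(word)
    return words

def reorder_variants(node):
    """Generate sentence variants by permuting the order of the head's
    dependent subtrees (scrambling), keeping each subtree internally
    intact. A real system would filter these variants for grammaticality."""
    word, children = node
    variants = set()
    for perm in permutations(children):
        variants.add(tuple(linearize((word, list(perm)))))
    return sorted(list(v) for v in variants)

tree = ("read", [("Ali", []), ("book", [("the", [])])])
for variant in reorder_variants(tree):
    print(" ".join(variant))
# Prints both dependent orders:
#   Ali the book read
#   the book Ali read
```

Each accepted variant would then be appended to the training corpus, exposing the n-gram model to word sequences that are grammatical in a free word-order language but absent from the baseline data.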