V.A. Malykh, V.A. Lyalin "On the Classification of Noisy Texts"

Abstract.

The classical problem of text classification has been covered in a large number of works, but existing approaches are mostly focused on improving classification quality on so-called clean collections, which contain no typos. In this work, the authors present the results of a study of popular modern text classification models with regard to their robustness to typos, on corpora in Russian and English.
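
To illustrate the kind of noise in question, below is a minimal sketch (an illustration only, not the procedure from the paper) that injects character-level typos (deletions, insertions, and substitutions) into a text at a fixed per-character rate; the function name add_typos and the uniform noise model are assumptions.

    import random

    # Hypothetical noise model for illustration: corrupt a text with
    # character-level typos at a fixed per-character rate. A classifier's
    # robustness can then be probed by scoring it on the corrupted copy.
    def add_typos(text, rate=0.1, seed=None):
        rng = random.Random(seed)
        alphabet = "abcdefghijklmnopqrstuvwxyz"
        out = []
        for ch in text:
            if ch.isalpha() and rng.random() < rate:
                op = rng.choice(("delete", "insert", "substitute"))
                if op == "delete":
                    continue                          # drop the character
                if op == "insert":
                    out.append(ch)
                    out.append(rng.choice(alphabet))  # add a stray letter
                    continue
                out.append(rng.choice(alphabet))      # replace the character
            else:
                out.append(ch)
        return "".join(out)

    print(add_typos("text classification is robust to noise", rate=0.2, seed=42))

Sweeping rate from zero upward and recording test accuracy on the corrupted collection yields the kind of robustness comparison the abstract describes.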

Keywords:

neural networks; text classification; noise robustness.

Pp. 174-182.

DOI: 10.14357/20790279180520

The full version of the article is available in PDF format.
