Generation of Persian Sentences by Generative Adversarial Network
Subject Areas: Electrical and Computer Engineering
Nooshin Riahi 1, Sahar Jandaghy 2
1 - Alzahra University
2 -
Keywords: Text generation, Generative Adversarial Networks, Deep learning
Abstract:
Text generation is a subfield of natural language processing that enables a system to produce comprehensible, grammatically correct text, much as a human would. Its applications include image captioning, poetry generation, the production of meteorological and environmental reports, the production of business reports, and automatic text summarization. With the advent of deep neural networks, research in text generation has shifted toward these models, but their most important challenge is that text data is discrete, which prevents gradients from being propagated back to the generator. Recently, a new deep learning approach called generative adversarial networks (GANs) has attracted attention for generating images, audio, and text. The purpose of this research is to apply this approach to the generation of Persian sentences. In this paper, three different generative adversarial network algorithms are used to generate Persian sentences. To evaluate the proposed methods, we use BLEU and self-BLEU, because together they assess the generated sentences in terms of both quality and diversity.
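To make the evaluation protocol concrete, the sketch below is a minimal illustration, not the authors' implementation, of how BLEU (quality against a real corpus) and self-BLEU (diversity among generated sentences) can be computed with NLTK's sentence_bleu. The toy sentences, the whitespace tokenization, and the BLEU-4 weights are assumptions made only for this example.

```python
# Minimal sketch of BLEU / self-BLEU evaluation for generated sentences,
# assuming whitespace-tokenized input; not the paper's exact setup.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1
weights = (0.25, 0.25, 0.25, 0.25)  # standard BLEU-4 weights

def bleu(references, candidates):
    """Average BLEU of each generated sentence against the real (reference) corpus."""
    scores = [sentence_bleu(references, cand, weights, smoothing_function=smooth)
              for cand in candidates]
    return sum(scores) / len(scores)

def self_bleu(candidates):
    """Average BLEU of each generated sentence against all other generated sentences.
    Lower self-BLEU indicates more diverse output."""
    scores = []
    for i, cand in enumerate(candidates):
        others = candidates[:i] + candidates[i + 1:]
        scores.append(sentence_bleu(others, cand, weights, smoothing_function=smooth))
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Toy Persian sentences for illustration only (whitespace tokenization assumed).
    real = [s.split() for s in ["امروز هوا آفتابی است", "هوا امروز سرد است"]]
    generated = [s.split() for s in ["امروز هوا سرد است", "هوا آفتابی است"]]
    print("BLEU:", bleu(real, generated))          # quality vs. real corpus
    print("self-BLEU:", self_bleu(generated))      # diversity among generated
```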
Celikyilmaz, A., Clark, E., & Gao, J. (2020). Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.#
Lamb, A. M., Goyal, A. G. A. P., Zhang, Y., Zhang, S., Courville, A. C., & Bengio, Y. (2016). Professor forcing: A new algorithm for training recurrent networks. In Advances in neural information processing systems (pp. 4601-4609).#
Press, O., Bar, A., Bogin, B., Berant, J., & Wolf, L. (2017). Language generation with recurrent generative adversarial networks without pre-training. arXiv preprint arXiv:1706.01399.#
Zhang, Y., Gan, Z., Fan, K., Chen, Z., Henao, R., Shen, D., & Carin, L. (2017). Adversarial feature matching for text generation. arXiv preprint arXiv:1706.03850.#
Bengio, S., Vinyals, O., Jaitly, N., & Shazeer, N. (2015). Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems (pp. 1171-1179)#
Ranzato, M. A., Chopra, S., Auli, M., & Zaremba, W. (2015). Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.#
Huszár, F. (2015). How (not) to train your generative model: Scheduled sampling, likelihood, adversary. arXiv preprint arXiv:1511.05101.#
Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., & Bengio, S. (2015). Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.#
Yang, Z., Hu, Z., Salakhutdinov, R., & Berg-Kirkpatrick, T. (2017). Improved variational autoencoders for text modeling using dilated convolutions. arXiv preprint arXiv:1702.08139.#
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680)#
Yu, L., Zhang, W., Wang, J., & Yu, Y. (2017, February). Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-first AAAI conference on artificial intelligence.#
Guimaraes, G. L., Sanchez-Lengeling, B., Outeiral, C., Farias, P. L. C., & Aspuru-Guzik, A. (2017). Objective-reinforced generative adversarial networks (ORGAN) for sequence generation models. arXiv preprint arXiv:1705.10843.#
Kusner, M. J., & Hernández-Lobato, J. M. (2016). Gans for sequences of discrete elements with the gumbel-softmax distribution. arXiv preprint arXiv:1611.04051.#
Jang, E., Gu, S., & Poole, B. (2016). Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.#
Mescheder, L., Nowozin, S., & Geiger, A. (2017). The numerics of gans. In Advances in Neural Information Processing Systems (pp. 1825-1835).#
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training gans. In Advances in neural information processing systems (pp. 2234-2242)#
Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of wasserstein gans. In Advances in neural information processing systems (pp. 5767-5777).#
Rajeswar, S., Subramanian, S., Dutil, F., Pal, C., & Courville, A. (2017). Adversarial generation of natural language. arXiv preprint arXiv:1705.10929.#
Lin, K., Li, D., He, X., Zhang, Z., & Sun, M. T. (2017). Adversarial ranking for language generation. In Advances in Neural Information Processing Systems (pp. 3155-3165).#
Che, T., Li, Y., Zhang, R., Hjelm, R. D., Li, W., Song, Y., & Bengio, Y. (2017). Maximum-likelihood augmented discrete generative adversarial networks. arXiv preprint arXiv:1702.07983.#
Guo, J., Lu, S., Cai, H., Zhang, W., Yu, Y., & Wang, J. (2017). Long text generation via adversarial training with leaked information. arXiv preprint arXiv:1709.08624.#
Hamshahri Corpus, Database Research Laboratory, University of Tehran, https://dbrg.ut.ac.ir/hamshahri/#
S. Bakhshaei, S. Khadivi, N. Riahi and H. Sameti, "A study to find influential parameters on a Farsi-English statistical machine translation system," 2010 5th International Symposium on Telecommunications, 2010, pp. 985-991, doi: 10.1109/ISTEL.2010.5734165#
Srivastava, R. K., Greff, K., & Schmidhuber, J. (2015). Highway networks. arXiv preprint arXiv:1505.00387.#
Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., & Kavukcuoglu, K. (2017). Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161.#
Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002, July). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics (pp. 311-318).#
Zhu, Y., Lu, S., Zheng, L., Guo, J., Zhang, W., Wang, J., & Yu, Y. (2018, June). Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (pp. 1097-1100).#
Lin, C. Y. (2004, July). ROUGE: A package for automatic evaluation of summaries. In Proceedings of the ACL Workshop on Text Summarization Branches Out, Barcelona, Spain.#
Lavie, A., Sagae, K., & Jayaraman, S. (2004, September). The significance of recall in automatic metrics for MT evaluation. In Conference of the Association for Machine Translation in the Americas (pp. 134-143). Springer, Berlin, Heidelberg.#