List of Articles: Deep Neural Networks

      • Open Access Article

        1 - Numeric Polarity Detection based on Employing Recursive Deep Neural Networks and Supervised Learning on Persian Reviews of E-Commerce Users in Opinion Mining Domain
        Sepideh Jamshidinejad, Fatemeh Ahmadi-Abkenari, Peiman Bayat
        Opinion mining, as a subdomain of data mining, is highly dependent on the field of natural language processing. With the emerging role of e-commerce, it has become one of the interesting areas of study within information retrieval. The domain covers several sub-areas, such as polarity detection, aspect elicitation, and spam opinion detection. Although these sub-areas are internally interdependent, designing a thorough framework that covers all of them is a highly demanding and challenging task. Most of the literature in this area has targeted English-language text and has focused on a single sub-area with a binary outcome for polarity detection. While supervised learning approaches are commonly employed here, deep neural networks have also been applied with various objectives in recent years. Since opinion mining still lacks a trustworthy and complete framework that pays dedicated attention to each influential subdomain, this paper concentrates on that gap. Using opinion mining and natural language processing approaches for the Persian language, the deep neural network-based framework called RSAD, previously suggested and developed by the authors of this paper, is optimized here to produce binary and numeric polarity detection outputs for sentences at the aspect level. Our evaluation of RSAD's performance in comparison with other approaches demonstrates its robustness.
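        As an illustrative, non-authoritative sketch of the kind of recursive (tree-structured) composition and supervised classification the abstract describes, the snippet below scores a tiny phrase on a 1-5 numeric polarity scale. The dimensions, the toy parse tree, and the untrained random weights are assumptions made for illustration only; this is not the RSAD framework itself.

```python
import numpy as np

# Minimal sketch of recursive composition for numeric polarity scoring.
# Weights here are random and untrained; in practice they would be learned
# with supervised training on labeled reviews.
DIM, CLASSES = 8, 5                              # embedding size, polarity levels 1..5
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))   # composition weights
U = rng.normal(scale=0.1, size=(CLASSES, DIM))   # classifier weights

def compose(node, embeddings):
    """Recursively merge child vectors along a binary parse tree."""
    if isinstance(node, str):                    # leaf: a word token
        return embeddings[node]
    left, right = [compose(child, embeddings) for child in node]
    return np.tanh(W @ np.concatenate([left, right]))

def numeric_polarity(tree, embeddings):
    """Return a polarity level in 1..5 for the given (aspect) phrase."""
    h = compose(tree, embeddings)
    logits = U @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.argmax(probs)) + 1

# Toy usage with random word embeddings (hypothetical vocabulary).
vocab = {w: rng.normal(size=DIM) for w in ["battery", "very", "good"]}
print(numeric_polarity(("battery", ("very", "good")), vocab))
```

        In a real pipeline the parse trees would come from a Persian parser and the word vectors from pretrained embeddings, with the composition and classifier weights trained on labeled review data.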
      • Open Access Article

        2 - Multi-level ternary quantization for improving sparsity and computation in embedded deep neural networks
        Hosna Manavi Mofrad, Seyed Ali Ansarmohammadi, Mostafa Salehi
        Deep neural networks (DNNs) have attracted great interest due to their success in various applications. However, computation complexity and memory size are considered the main obstacles to implementing such models on embedded devices with limited memory and computational resources. Network compression techniques can overcome these challenges, and quantization and pruning are the most important among them. One of the well-known quantization methods for DNNs is multi-level binary quantization, which not only exploits simple bit-wise logical operations but also reduces the accuracy gap between binary neural networks and full-precision DNNs. However, since multi-level binary quantization cannot represent the value zero, it does not take advantage of sparsity. On the other hand, it has been shown that DNNs are sparse, and by pruning the parameters of a DNN, the amount of data stored in memory is reduced while a computation speedup is also achieved. In this paper, we propose a pruning- and quantization-aware training method for multi-level ternary quantization that takes advantage of both multi-level quantization and data sparsity. In addition to increasing accuracy compared to multi-level binary networks, it gives the network the ability to be sparse. To save memory and reduce computation complexity, we increase the sparsity of the quantized network by pruning until the accuracy loss is negligible. The results show that the potential computation speedup of our model can be increased by 15x and 45x at bit-level and word-level sparsity, respectively, compared to basic multi-level binary networks.
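        As a rough sketch of the idea behind ternary quantization with pruning (not the paper's exact training method), the snippet below maps a weight tensor to the three levels {-alpha, 0, +alpha}: weights with small magnitude are zeroed out, which is where the sparsity comes from, and the surviving weights keep their sign and share a single scale. The threshold rule and the threshold_factor hyper-parameter are illustrative assumptions, and the quantization-aware training loop is omitted.

```python
import numpy as np

def ternarize(weights, threshold_factor=0.05):
    """Quantize a weight tensor to the three levels {-alpha, 0, +alpha}.

    Weights whose magnitude falls below the threshold are pruned to zero
    (the source of sparsity); the rest keep their sign and share a single
    scale alpha estimated from the surviving weights. threshold_factor is
    a hypothetical hyper-parameter, not a value taken from the paper.
    """
    threshold = threshold_factor * np.max(np.abs(weights))
    keep = np.abs(weights) > threshold            # False where pruned to zero
    alpha = np.abs(weights[keep]).mean() if keep.any() else 0.0
    ternary = np.sign(weights) * keep             # values in {-1, 0, +1}
    return alpha * ternary, keep

# Example: quantize a small random layer and report its sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
q, keep = ternarize(w)
print("zero fraction:", 1.0 - keep.mean())
```

        A practical version would apply this inside a quantization-aware training loop and raise the pruning threshold gradually while monitoring accuracy, which is the trade-off the abstract describes.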