• List of Articles: quantization

      • Open Access Article

        1 - Joint Source and Channel Analysis for Scalable Video Coding Using Vector Quantization over OFDM System
        Farid Jafarian, Hassan Farsi
        Conventional wireless video encoders employ variable-length entropy encoding and predictive coding to achieve high compression ratios, but these techniques render the encoded bit-stream extremely sensitive to channel errors. To prevent error propagation, various additional error correction techniques must be employed. In contrast, an alternative technique, vector quantization (VQ), which does not use variable-length entropy encoding, can resist such errors through the use of fixed-length code-words. In this paper, we address the problem of joint source and channel analysis for VQ-based scalable video coding (VQ-SVC). We introduce intra-mode VQ-SVC and VQ-3D-DCT SVC, which offer compression performance similar to intra-mode H.264 and 3D-DCT, respectively, while offering inherent error resilience. In intra-mode VQ-SVC the 2D-DCT, and in VQ-3D-DCT SVC the 3D-DCT, is applied to video frames to extract DCT coefficients; VQ is then employed to build a codebook of DCT coefficients. In these low-bitrate video codecs, a high level of robustness against wireless channel fluctuations is needed. To achieve such robustness, we propose and calculate the optimal VQ-SVC codebook and the optimal channel code rate using a joint source and channel coding (JSCC) technique. Next, the analysis is developed for video transmission using an OFDM system over multipath Rayleigh fading and AWGN channels. Finally, we report the performance of these schemes in minimizing end-to-end distortion over the wireless channel.
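        To make the fixed-length-code idea concrete, the following is a minimal sketch (not the authors' implementation) of building a VQ codebook over 2D-DCT coefficient blocks and encoding each block as a fixed-length codeword index; the frame sizes, block size, and codebook size are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn
from scipy.cluster.vq import kmeans2

def blocks(frame, b=8):
    """Split a frame into non-overlapping b x b blocks (dims assumed divisible by b)."""
    h, w = frame.shape
    return (frame.reshape(h // b, b, w // b, b)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, b * b))

def train_codebook(frames, b=8, K=64):
    """Build a K-entry VQ codebook over 2D-DCT coefficient vectors of training blocks."""
    vecs = np.vstack([dctn(blk.reshape(b, b), norm='ortho').ravel()
                      for f in frames for blk in blocks(f, b)])
    codebook, _ = kmeans2(vecs, K, minit='++')
    return codebook

def vq_encode(frame, codebook, b=8):
    """Encode each block as the index of its nearest codeword (a fixed-length code)."""
    vecs = np.array([dctn(blk.reshape(b, b), norm='ortho').ravel()
                     for blk in blocks(frame, b)])
    d = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)          # each index fits in log2(K) bits

# Toy usage with random stand-in "frames"
rng = np.random.default_rng(0)
train = [rng.random((64, 64)) for _ in range(4)]
cb = train_codebook(train)
idx = vq_encode(train[0], cb)
print(idx.shape, idx.dtype)
```

        A single bit error in such an index corrupts only the affected block, which is the error-resilience property the abstract contrasts with variable-length entropy coding.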
      • Open Access Article

        2 - Self-Organization Map (SOM) Algorithm for DDoS Attack Detection in Distributed Software Defined Network (D-SDN)
        Mohsen Rafiee, Alireza Shirmarz
        The expansion of the internet across the world has increased cyber-attacks and threats. One of the most significant threats is denial-of-service (DoS), which leaves the server or network unable to serve. This attack can be launched by distributed nodes in the network acting in collaboration, in which case it is called a distributed denial-of-service (DDoS) attack. A novel architecture has been proposed to make future networks more agile, programmable, and flexible. This architecture, called software defined networking (SDN), is based on separating the data and control flows of the network, and it allows the network administrator to resist DDoS attacks at the centralized controller. The main issue is detecting DDoS flows in the controller. In this paper, the Self-Organizing Map (SOM) method and Learning Vector Quantization (LVQ) are used for DDoS attack detection in an SDN with a distributed architecture in the control layer. To evaluate the proposed model, we use a labelled data set and show that the model achieves a DDoS attack flow detection rate of 99.56%. This research can be used by researchers working on improving SDN-based DDoS attack detection.
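        As a rough illustration of the SOM side of such a detector (a sketch under assumptions, not the authors' model; the flow features, map size, and thresholding rule are hypothetical), one can train a small self-organizing map on benign flow features and flag flows whose quantization error to the best-matching unit is unusually large:

```python
import numpy as np

def train_som(data, rows=8, cols=8, iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small SOM with exponentially decaying learning rate and neighborhood."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij'), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = ((weights - x) ** 2).sum(axis=-1)
        bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        # Gaussian neighborhood around the BMU on the map grid
        h = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def quantization_error(x, weights):
    """Distance of a sample to its best-matching unit."""
    return np.sqrt(((weights - x) ** 2).sum(axis=-1)).min()

# Toy usage: benign flows cluster near 0, an attack-like flow lies far away
rng = np.random.default_rng(1)
benign = rng.normal(0.0, 0.1, size=(500, 6))      # 6 hypothetical flow features
som = train_som(benign)
threshold = np.quantile([quantization_error(x, som) for x in benign], 0.99)
attack_like = np.full(6, 3.0)
print(quantization_error(attack_like, som) > threshold)   # expected: True
```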
      • Open Access Article

        3 - High Rate Shared Secret Key Generation Using the Phase Estimation of MIMO Fading Channel and Multilevel Quantization
        V. Zeinali Fathabadi, H. Khaleghi Bizaki, A. Shahzadi
        Much attention has recently been paid to methods of shared secret key generation that exploit the random characteristics of the amplitude and phase of a received signal and the symmetry (reciprocity) of the common channel in wireless communication systems. Protocols based on the phase of a received signal, thanks to the uniform phase distribution of the fading channel, are suitable in both static and dynamic environments and have a higher key generation rate (KGR) than protocols based on received signal strength (RSS). In addition, previous works have generally focused on key generation protocols for single-antenna (SISO) systems, which do not achieve a significant KGR. In this paper, therefore, received-signal phase estimates in multiple-antenna (MIMO) systems are used to increase the randomness and the key generation rate, since such systems offer more random variables for key generation than SISO systems. Simulation results show that the KGR of the proposed protocol is 4 and 9 times higher than that of a SISO system when the numbers of transmitter and receiver antennas are both equal to 2 and 3, respectively. Also, the key generation rate increases considerably when multilevel quantization is used to extract the secret key bits.
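        A minimal sketch of the multilevel phase quantization step (illustrative only, not the paper's exact protocol; the number of levels and the use of Gray coding are assumptions): each estimated channel phase in [0, 2π) is mapped to one of 2^b uniform sectors, and the sector index is Gray-coded so that neighboring sectors differ in a single bit, limiting key disagreement when the two nodes' phase estimates differ slightly.

```python
import numpy as np

def gray_code(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def phase_to_bits(phases: np.ndarray, bits_per_sample: int = 2) -> np.ndarray:
    """Quantize phases in [0, 2*pi) into 2**b uniform sectors and emit Gray-coded bits."""
    levels = 2 ** bits_per_sample
    sectors = np.floor((np.mod(phases, 2 * np.pi) / (2 * np.pi)) * levels).astype(int)
    sectors = np.minimum(sectors, levels - 1)          # guard against the edge value 2*pi
    out = []
    for s in sectors:
        g = gray_code(int(s))
        out.extend(int(b) for b in format(g, f'0{bits_per_sample}b'))
    return np.array(out, dtype=np.uint8)

# Toy usage: two nodes observe nearly identical (reciprocal) channel phases
rng = np.random.default_rng(0)
true_phase = rng.uniform(0, 2 * np.pi, size=100)       # common random channel phase
phase_a = true_phase + rng.normal(0, 0.02, 100)        # node A's estimate
phase_b = true_phase + rng.normal(0, 0.02, 100)        # node B's estimate
key_a = phase_to_bits(phase_a, bits_per_sample=2)
key_b = phase_to_bits(phase_b, bits_per_sample=2)
print("key length:", key_a.size, "bit disagreement rate:", np.mean(key_a != key_b))
```

        Increasing bits_per_sample raises the KGR per channel estimate, at the cost of more frequent disagreements near sector boundaries, which is the trade-off multilevel quantization navigates.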
      • Open Access Article

        4 - Analysis and Design of a Low Power Analog to Digital Converter Using Carbon Nano-Tube Field Effect Transistors
        Saeedeh Heidari, D. Dideban
        Nowadays, analog-to-digital (A/D) converters are indispensable parts of system-on-chip (SoC) structures because they bridge the gap between real-world analog data and the digital logic world. For this reason, and given the ever-increasing use of portable instruments, the figures of merit of these converters, such as speed, power, and occupied area, must continually be improved, and different methods have been proposed to improve their performance. In this paper, we design a fast, low-power ADC using carbon nano-tube field effect transistors (CNTFETs) and then comprehensively compare its performance with a MOSFET-based counterpart at the same technology node. The performance is studied with two encoders: ROM and Fat tree. The results are obtained with the HSPICE simulator at a 0.9 V power supply. The simulated data from the CNTFET-based converter show significant improvements in delay and power compared with the CMOS-based counterpart. With the ROM encoder, the power and delay of the CNTFET-based converter are improved by 92.5% and 54% with respect to the CMOS-based design, while with the Fat tree encoder the improvements reach 93% and 72% in comparison with the conventional CMOS design.
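        Both encoder options compared here convert the thermometer code produced by a flash ADC's comparator bank into a binary output. As a behavioral illustration only (not the transistor-level ROM or Fat-tree circuits studied in the paper; the 3-bit resolution and 0.9 V reference are assumptions), the sketch below shows the conversion in the ones-counting view that such encoders realize in hardware:

```python
import numpy as np

def flash_adc_thermometer(vin: float, vref: float = 0.9, bits: int = 3) -> np.ndarray:
    """Comparator bank of a flash ADC: thermometer code with 2**bits - 1 comparators."""
    n_comp = 2 ** bits - 1
    thresholds = (np.arange(1, n_comp + 1) / 2 ** bits) * vref
    return (vin > thresholds).astype(int)          # e.g. [1, 1, 1, 0, 0, 0, 0]

def thermometer_to_binary(thermo: np.ndarray, bits: int = 3) -> str:
    """Encoder: the output code equals the number of asserted comparators."""
    code = int(thermo.sum())
    return format(code, f'0{bits}b')

# Toy usage: sweep a few input voltages through the behavioral 3-bit converter
for vin in [0.05, 0.30, 0.55, 0.85]:
    t = flash_adc_thermometer(vin)
    print(f"vin={vin:.2f} V  thermometer={t}  binary={thermometer_to_binary(t)}")
```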
      • Open Access Article

        5 - Multi-level ternary quantization for improving sparsity and computation in embedded deep neural networks
        Hosna Manavi Mofrad, Seyed Ali Ansarmohammadi, Mostafa Salehi
        Deep neural networks (DNNs) have attracted great interest due to their success in various applications. However, their computational complexity and memory size are the main obstacles to implementing such models on embedded devices with limited memory and computational resources. Network compression techniques can overcome these challenges; quantization and pruning are the most important among them. One well-known quantization method for DNNs is multi-level binary quantization, which not only exploits simple bit-wise logical operations but also reduces the accuracy gap between binary neural networks and full-precision DNNs. However, since multi-level binary quantization cannot represent the zero value, it does not take advantage of sparsity. On the other hand, it has been shown that DNNs are sparse, and by pruning their parameters, the amount of data stored in memory is reduced while a computation speedup is also achieved. In this paper, we propose a pruning- and quantization-aware training method for multi-level ternary quantization that takes advantage of both multi-level quantization and data sparsity. In addition to increasing the accuracy of the network compared to multi-level binary networks, it gives the network the ability to be sparse. To save memory and reduce computational complexity, we increase the sparsity of the quantized network by pruning until the accuracy loss becomes negligible. The results show that the potential computation speedup of our model at bit-level and word-level sparsity reaches 15x and 45x compared to basic multi-level binary networks.
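        As a rough sketch of the core idea (not the authors' training procedure; the threshold rule follows a common ternary-weight heuristic and is an assumption here), weights can be mapped to {-1, 0, +1} with a per-tensor threshold, after which the fraction of exact zeros is the sparsity a ternary representation can exploit and a multi-level binary one cannot:

```python
import numpy as np

def ternarize(w: np.ndarray, delta_scale: float = 0.7):
    """Map weights to {-1, 0, +1} with threshold delta = delta_scale * mean(|w|),
    and return a per-tensor scaling factor alpha for the nonzero entries."""
    delta = delta_scale * np.abs(w).mean()
    t = np.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    nonzero = np.abs(t) > 0
    alpha = np.abs(w[nonzero]).mean() if nonzero.any() else 0.0
    return t, alpha

# Toy usage: quantize a random weight tensor and measure the resulting sparsity
rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(256, 256)).astype(np.float32)
t, alpha = ternarize(w)
sparsity = float((t == 0).mean())
print(f"alpha={alpha:.4f}  sparsity={sparsity:.2%}")   # zero entries can be skipped at compute time
```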
      • Open Access Article

        6 - Detection of Quantized Sparse Signals Using the Locally Most Powerful Detector in Wireless Sensor Networks
        Abdolreza Mohammadi, Amin Jajarmi
        This paper addresses the problem of distributed detection of stochastic sparse signals in a wireless sensor network. The observations/local likelihood ratios in each sensor node are quantized to 1 bit and sent to a fusion center (FC) through non-ideal channels. At the FC, we propose two sub-optimal detectors in which the received data are fused based on the locally most powerful test (LMPT). We obtain the quantization threshold for each sensor node via an asymptotic analysis of the detector's performance and show that this threshold depends on the bit error probability of the channels between the sensor nodes and the FC. Simulations are carried out to confirm the theoretical analysis and to illustrate the performance of the proposed detectors.
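        The following is a minimal sketch of the overall pipeline only (1-bit quantization at the sensors, bit flips on a binary symmetric reporting channel, and a simple counting-style fusion statistic corrected for the flip probability); it is not the paper's LMPT statistic or its threshold design, and the noise model, signal amplitudes, and parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm

def quantize_1bit(obs: np.ndarray, tau: float) -> np.ndarray:
    """Each sensor reports a single bit: 1 if its observation exceeds the threshold tau."""
    return (obs > tau).astype(int)

def bsc(bits: np.ndarray, p_e: float, rng) -> np.ndarray:
    """Non-ideal reporting channel modeled as a binary symmetric channel with flip probability p_e."""
    flips = rng.random(bits.shape) < p_e
    return bits ^ flips.astype(int)

def fusion_statistic(bits: np.ndarray, tau: float, p_e: float) -> float:
    """Counting-style fusion: compare the received 1s against their expectation under H0
    (signal absent, noise ~ N(0,1)), corrected for channel flips."""
    q0 = norm.sf(tau)                       # P(bit = 1 | H0) before the channel
    p0 = q0 * (1 - p_e) + (1 - q0) * p_e    # after the BSC
    n = bits.size
    return (bits.sum() - n * p0) / np.sqrt(n * p0 * (1 - p0))

# Toy usage: N sensors, a sparse signal present in a few of them under H1
rng = np.random.default_rng(0)
N, tau, p_e = 200, 1.0, 0.05
noise = rng.normal(0, 1, N)
signal = np.zeros(N)
signal[rng.choice(N, 20, replace=False)] = 3.0          # sparse support, positive amplitude
stat_h0 = fusion_statistic(bsc(quantize_1bit(noise, tau), p_e, rng), tau, p_e)
stat_h1 = fusion_statistic(bsc(quantize_1bit(noise + signal, tau), p_e, rng), tau, p_e)
print(f"statistic under H0: {stat_h0:.2f}   under H1: {stat_h1:.2f}")
```

        The dependence of the statistic's null distribution on p_e mirrors the abstract's observation that the quantization threshold must account for the bit error probability of the sensor-to-FC channels.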