• List of Articles: Partition

      • Open Access Article

        1 - A Basic Proof Method for the Verification, Validation and Evaluation of Expert Systems
        Armin Ghasem Azar, Zohreh Mohammad Alizadeh
        In the present paper, a basic proof method is provided for the verification, validation and evaluation of expert systems. The result provides an overview of the basic steps of a formal proof: partition larger systems into smaller subsystems, prove the correctness of the small subsystems by non-recursive means, and prove that the correctness of all subsystems implies the correctness of the entire system.
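
        The compositional structure described above can be illustrated with a short, hypothetical Python sketch (names such as Subsystem, partition and verify_subsystem are placeholders, not the paper's notation): partition the rule base, check each part non-recursively, and lift the per-part results to a claim about the whole system.

        ```python
        # Hypothetical sketch of the compositional proof structure from the abstract:
        # partition a system, verify each part non-recursively, and combine results.
        from dataclasses import dataclass

        @dataclass
        class Subsystem:
            name: str
            rules: list                  # rules/knowledge assigned to this partition

        def partition(system_rules, k):
            """Split the rule base into k smaller subsystems (placeholder strategy)."""
            chunks = [system_rules[i::k] for i in range(k)]
            return [Subsystem(f"S{i}", chunk) for i, chunk in enumerate(chunks)]

        def verify_subsystem(sub, checker):
            """Non-recursive check of a single subsystem, e.g. rule consistency."""
            return all(checker(rule) for rule in sub.rules)

        def verify_system(system_rules, checker, k=4):
            """Correctness of every subsystem is taken to imply correctness of the
            whole system, assuming the partition preserves the checked property."""
            return all(verify_subsystem(s, checker) for s in partition(system_rules, k))
        ```
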
      • Open Access Article

        2 - Concept Detection in Images Using SVD Features and Multi-Granularity Partitioning and Classification
        Kamran Farajzadeh, Esmail Zarezadeh, Jafar Mansouri
        New visual and static features, namely, the right singular feature vector, left singular feature vector and singular value feature vector, are proposed for semantic concept detection in images. These features are derived by applying singular value decomposition (SVD) "directly" to the "raw" images. In SVD features, edge, color and texture information is integrated simultaneously and sorted based on its importance for concept detection. Feature extraction is performed in a multi-granularity partitioning manner. In contrast to existing systems, classification is carried out for each grid partition of each granularity separately. This decouples the classification of partitions that contain the target concept from those that do not. Since SVD features have high dimensionality, classification is carried out with the K-nearest neighbor (K-NN) algorithm, which utilizes a new and "stable" distance function, namely, the multiplicative distance. Experimental results on the PASCAL VOC and TRECVID datasets show the effectiveness of the proposed SVD features and the multi-granularity partitioning and classification method.
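
        As a rough illustration of the feature construction described above (not the authors' code), the sketch below extracts the three SVD-based vectors from one grayscale grid partition with NumPy; the truncation rank r and the form of the "multiplicative" distance are assumptions, since the abstract does not give their exact definitions.

        ```python
        # Illustrative extraction of SVD features from one raw grayscale grid partition.
        import numpy as np

        def svd_features(block, r=8):
            """Left singular, right singular and singular-value features for one block.
            r is an assumed truncation rank for the singular values."""
            U, S, Vt = np.linalg.svd(block.astype(float), full_matrices=False)
            left = U[:, 0]            # dominant left singular vector
            right = Vt[0, :]          # dominant right singular vector
            sigma = S[:r]             # leading singular values, sorted by importance
            return np.concatenate([left, right, sigma])

        def multiplicative_distance(x, y):
            """Placeholder for the paper's 'multiplicative' distance used with K-NN;
            the exact formula is defined in the paper and may differ."""
            return float(np.prod(1.0 + np.abs(x - y)) - 1.0)
        ```
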
      • Open Access Article

        3 - Using Static Information of Programs to Partition the Input Domain in Search-based Test Data Generation
        Atieh Monemi Bidgoli, Hassan Haghighi
        The quality of test data has an important effect on the fault-revealing ability of software testing. Search-based test data generation reformulates testing goals as fitness functions; thus, test data generation can be automated by meta-heuristic algorithms. Meta-heuristic algorithms search the domain of input variables in order to find input data that cover the targets. The domain of input variables is very large, even for simple programs, and its size has a major influence on the efficiency and effectiveness of all search-based methods. Despite the large volume of work on search-based test data generation, the literature contains few approaches that consider the impact of search space reduction. In order to partition the input domain, this study defines a relationship between the structure of the program and the input domain. Based on this relationship, we propose a method for partitioning the input domain. Then, to search the partitioned space, we select ant colony optimization, one of the important and successful meta-heuristic algorithms. To evaluate the performance of the proposed approach against previous work, we selected a number of different benchmark programs. The experimental results show that our approach achieves 14.40% better average coverage than the competing approach.
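
        A minimal sketch of the general idea, assuming integer inputs and using constants extracted from branch predicates as partition boundaries; the search inside each sub-range is shown as plain random sampling rather than the paper's ant colony optimization, purely for brevity.

        ```python
        # Static branch constants split the input domain; a search then runs per sub-range.
        import random

        def partition_domain(lo, hi, branch_constants):
            """Split [lo, hi] at constants appearing in the program's branch conditions."""
            cuts = sorted({lo, hi, *[c for c in branch_constants if lo < c < hi]})
            return list(zip(cuts[:-1], cuts[1:]))

        def search_partition(bounds, covers_target, budget=1000):
            """Look for an input inside one sub-range that covers the target branch."""
            lo, hi = bounds
            for _ in range(budget):
                x = random.randint(lo, hi)
                if covers_target(x):
                    return x
            return None

        # Example: the branch `if x == 4243:` contributes the constant 4243, so the
        # search concentrates on the sub-ranges bordering it instead of the full domain.
        ```
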
      • Open Access Article

        4 - A Novel Approach for Establishing Connectivity in Partitioned Mobile Sensor Networks using Beamforming Techniques
        Abbas Mirzaei, Shahram Zandian
        Network connectivity is one of the major design issues in the context of mobile sensor networks. Due to diverse communication patterns, some nodes lying in high-traffic zones may consume more energy and eventually die out, resulting in network partitioning. This phenomenon may deprive a large number of alive nodes of sending their important time-critical data to the sink. The application of data caching in mobile sensor networks is also growing rapidly as a high-speed data storage layer. This paper presents a deep learning-based beamforming approach to find the optimal transmission strategies for cache-enabled backhaul networks. In the proposed scheme, the sensor nodes in isolated partitions work together to form a directional beam, which significantly increases their overall communication range so that they can reach a distant relay node connected to the main part of the network. The proposed cooperative beamforming-based partition connectivity method works efficiently if an isolated cluster is partitioned with a favorably large number of nodes. We also present a new cross-layer link-cost method that balances the energy used by the relay nodes. By directly adding the accessible auxiliary nodes to the set of routing links, the algorithm chooses paths that provide maximum dynamic beamforming usage for the intermediate nodes. The proposed approach is then evaluated through simulation. The simulation results show that the proposed mechanism achieves up to 30% reduction in energy consumption through beamforming as partition healing, in addition to guaranteeing user throughput.
      • Open Access Article

        5 - CDF (2,2) Wavelet Lossy Image Compression on CPLD
        A. A. Lotfi-Neyestanak, Mohammad Mohaghegh Hazrati, N. Ahmidi
        This paper presents a hardware implementation of a CDF(2,2) wavelet image compressor. The design demonstrates that a high-quality circuit implementation is possible through the use of suitable data organization (a partitioned approach) and algorithm-to-architecture mappings (parallelism or pipelining). A VHDL description of CDF(2,2) was developed to satisfy this objective. It was then synthesized in the Foundation 5.1 software and downloaded to a CPLD XC9572 via a JTAG ByteBlaster cable. The original image was transmitted through the serial port. An AVR ATmega8535 was used to implement the serial protocol to and from the CPLD. The main goal is to reach higher performance and throughput with a single CPLD. Details of the encoder design are discussed and the results are presented.
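
        For reference, the lifting steps of the CDF(2,2) (5/3) wavelet that such a compressor implements can be sketched in a few lines of integer arithmetic; this is a software illustration under the usual symmetric boundary handling, not the paper's VHDL design.

        ```python
        # One 1-D level of the CDF(2,2) / 5-3 lifting wavelet transform (integer version).
        def cdf22_forward(x):
            """x: list of samples with even length. Returns (low-pass, high-pass) bands."""
            s, d = x[0::2], x[1::2]                               # split even/odd samples
            # Predict step: detail = odd sample minus the average of neighbouring evens
            d = [d[i] - (s[i] + s[min(i + 1, len(s) - 1)]) // 2 for i in range(len(d))]
            # Update step: approximation = even sample plus a quarter of neighbouring details
            s = [s[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(len(s))]
            return s, d
        ```
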
      • Open Access Article

        6 - Automated Implementation of Quantum Circuits on QFPGA for Emulation
        M. Heidarzadeh, Mohammad Danaee Far
        This paper defines an optimal architecture for the FPGA using exact methods. In order to achieve this goal, optimal placement and routing solutions are found using integer linear programming techniques. After redefining the internal architecture of the logic blocks, quantum circuits are partitioned by a heuristic algorithm in order to reach maximum utilization of the resources inside the logic blocks and minimum delay of the paths traversed by the qubits in the circuit. Experimental results show that the FPGA architecture modifications can reduce the delay of the critical paths of circuits by up to half in some cases and considerably reduce the number of channels used for routing. Furthermore, the results show that defining the logic blocks with 12 qubits instead of 4 qubits can decrease circuit delay and the number of used channels to a large extent.
      • Open Access Article

        7 - Semi-Partitioning Multiprocessor Real-Time Scheduling in Data Stream Management Systems
        M. Alemi, M. Haghjoo
        In data stream management systems, stored queries are executed continuously on streams of data as they arrive. Under a high workload, high processing capacity is required, which leads to using multiple processors to cope with it. The partitioning approach, one of the main methods in multiprocessor real-time scheduling, binds each query to one of the processors based on its utilization, i.e., the ratio of its estimated execution time to its period; the instances of each query, which must be completed by the defined deadline, can only be executed on that processor. Each query that cannot be assigned to any single processor is split based on the utilization of the processors and spread among them, bringing the result closer to the optimum. The system has been examined with real network data and shows a lower miss ratio and higher utilization in comparison to the simple partitioning approach.
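
        The semi-partitioning step can be illustrated with the minimal sketch below, assuming per-query utilizations and a first-fit bin-packing pass; the names and the splitting rule are illustrative, not the paper's exact algorithm.

        ```python
        # First-fit partitioning by utilization, with queries that fit on no single
        # processor split over the remaining spare capacity (semi-partitioning).
        def semi_partition(utilizations, num_procs, capacity=1.0):
            load = [0.0] * num_procs
            assignment = {}                           # query -> [(processor, share), ...]
            for q, u in enumerate(utilizations):
                proc = next((p for p in range(num_procs) if load[p] + u <= capacity), None)
                if proc is not None:                  # whole query fits on one processor
                    load[proc] += u
                    assignment[q] = [(proc, u)]
                    continue
                shares, remaining = [], u             # otherwise split it across processors
                for p in range(num_procs):
                    take = min(capacity - load[p], remaining)
                    if take > 0:
                        load[p] += take
                        shares.append((p, take))
                        remaining -= take
                assignment[q] = shares                # empty list: query cannot be placed
            return assignment, load
        ```
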
      • Open Access Article

        8 - Efficient Document Partitioning for Load Balancing between Servers Using Term Frequency of Past Queries
        Reyhaneh Torab, Sajjad Zarifzadeh
        The main goal of web search engines is to find the most relevant results with respect to the user query in the shortest possible time. To do so, the crawled documents have to be partitioned between several servers in order to use their aggregate retrieval and processing power. Search engines use different policies for efficient partitioning of documents. In this paper, we propose a new document partitioning method that intends to balance the load between servers in order to reduce the response time of queries. The idea is to weight each term based on its daily frequency in the log of past queries. We then assign a weight to each document by summing the weights of its constituent terms. The weight of a document approximates the likelihood of its presence in future search results. Finally, the documents are partitioned between servers in such a way that the sum of document weights on each server becomes roughly equal. Our evaluation results show that the proposed method is able to balance the load about 20% better than previous algorithms, especially at the peak of search engine traffic.
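
        A minimal sketch of this weighting-and-balancing idea, assuming whitespace-tokenized queries and a greedy largest-weight-first assignment (the paper's exact partitioning policy may differ):

        ```python
        # Weight terms by daily query-log frequency, weight documents by summing term
        # weights, then spread documents so per-server weight sums stay roughly equal.
        from collections import Counter
        import heapq

        def term_weights(past_queries):
            """past_queries: iterable of query strings from one day of the log."""
            return Counter(t for q in past_queries for t in q.lower().split())

        def document_weight(doc_terms, weights):
            return sum(weights.get(t, 0) for t in doc_terms)

        def partition_documents(docs, weights, num_servers):
            """Greedy balancing: always give the next-heaviest document to the
            currently lightest server (an approximation used for illustration)."""
            heap = [(0, s, []) for s in range(num_servers)]   # (load, server, doc ids)
            heapq.heapify(heap)
            for doc_id, terms in sorted(docs.items(),
                                        key=lambda kv: -document_weight(kv[1], weights)):
                load, s, assigned = heapq.heappop(heap)
                assigned.append(doc_id)
                heapq.heappush(heap, (load + document_weight(terms, weights), s, assigned))
            return {s: assigned for _, s, assigned in heap}
        ```
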
      • Open Access Article

        9 - A New Measure for Partitioning of Block-Centric Graph Processing Systems
        Masoud Sagharichian, Morteza Alipour Langouri
        Block-centric graph processing systems have received significant attention in recent years. To produce the required partitions, most of these systems use general-purpose partitioning methods; as a result, their performance has been limited. To address this problem, special partitioning algorithms have been proposed by researchers. However, these methods focus on traditional partitioning measures such as the number of cut edges and the load balance, whereas the power of block-centric graph processing systems comes from the unique characteristics around which they are designed. Based on the basic and important characteristics of these systems, this paper proposes two new measures as partitioning goals. To the best of our knowledge, the proposed method is the first work that considers the diameter and size of the high-level graph as optimization factors for partitioning purposes. The evaluation of the proposed method over real graphs shows that it can significantly reduce the diameter of the high-level graph. Moreover, the number of cut edges of the proposed method is very close to that of Metis, one of the most popular centralized partitioning methods. Since the number of required supersteps in block-centric graph processing systems mainly depends on the diameter of the high-level graph, the proposed method can significantly improve the performance of these systems.
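
        The two quantities in question can be computed as in the sketch below, which builds the high-level (quotient) graph from a vertex-to-block assignment and reports its size and diameter; networkx is used only for convenience and is not implied by the paper.

        ```python
        # Build the high-level graph over blocks and measure its size and diameter.
        import networkx as nx

        def high_level_graph(G, part):
            """part: dict vertex -> block id. Returns the quotient (high-level) graph."""
            H = nx.Graph()
            H.add_nodes_from(set(part.values()))
            for u, v in G.edges():
                if part[u] != part[v]:
                    H.add_edge(part[u], part[v])      # a cut edge links two blocks
            return H

        def partition_measures(G, part):
            H = high_level_graph(G, part)
            diameter = nx.diameter(H) if nx.is_connected(H) else float("inf")
            return {"blocks": H.number_of_nodes(), "high_level_diameter": diameter}
        ```
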
      • Open Access Article

        10 - A Two-Level Method Based on Dynamic Programming for Partitioning and Optimization of the Communication Cost in Distributed Quantum Circuits
        Zohreh Davarzani, Maryam Zomorodi-Moghadam, M. Houshmand
        Nowadays, quantum computing has played a significant role in increasing the speed of algorithms. Due to limitations in the manufacturing technologies of quantum computers, the design of a large-scale quantum computer faces many challenges. One solution to overcome these challenges is the design of distributed quantum systems. In these systems, quantum computers are connected to each other through the teleportation protocol to transfer quantum information. Since quantum teleportation requires quantum resources, it is necessary to reduce the number of teleportations. The purpose of this paper is to present a distributed quantum system that addresses two goals, the balanced distribution of qubits and the minimization of the number of teleportations, at two levels. In the first level, a dynamic programming algorithm is presented to distribute the qubits in a balanced manner and reduce the number of connections between subsystems. In the second level, based on the partitioning obtained from the first level, when a qubit of a global gate is teleported from its home partition to the desired destination, that qubit may be reused by a number of subsequent global gates, subject to the precedence constraints, which reduces the number of teleportations. The obtained results show the better performance of the proposed algorithm.
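
        As a toy illustration of the reuse idea in the second level (greatly simplified relative to the paper's precedence-aware procedure), the sketch below counts teleportations for an ordered list of two-qubit gates under a fixed partitioning, letting an already-teleported qubit serve later global gates:

        ```python
        # Greedy teleportation count: once a qubit is teleported to a partition,
        # later global gates may reuse it there instead of teleporting again.
        def count_teleportations(gates, part):
            """gates: ordered list of (control_qubit, target_qubit) pairs.
            part: dict qubit -> home partition id."""
            location = {}                 # qubit -> partition it currently resides in
            count = 0
            for c, t in gates:
                pc = location.get(c, part[c])
                pt = location.get(t, part[t])
                if pc != pt:
                    count += 1            # teleport the control qubit to the target side
                    location[c] = pt
            return count
        ```
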
      • Open Access Article

        11 - Determination of formation temperature, oxygen fugacity and Ce4+/Ce3+ ratio using zircon chemistry in the pegmatitic dikes of Malayer-Boroujerd-Shazand, Sanandaj-Sirjan zone
        Majid Ghasemi Siani
        The granitoid plutons in the Sanandaj-Sirjan zone host numerous pegmatitic dikes. This study focuses on the mineral chemistry of zircons in the pegmatite dikes of the Malayer, Boroujerd and Shazand districts to evaluate zircon crystallization temperature, oxygen fugacity and the Ce4+/Ce3+ ratio, as well as zircon/rock partition coefficients of the REEs and of U, Th, Ta, Nb and Y. Trace element discrimination diagrams, such as Th versus Y and Yb/Sm versus Y and Nb, indicate that the studied zircons fall in the syenite pegmatite field. Zircon/rock partition coefficients indicate that the zircon grains are more enriched in HREE than in LREE. Zircon chemistry shows that zircons in the Shazand and Malayer pegmatite dikes have higher Hf and lower REE contents than zircons in the Boroujerd pegmatite dikes, which indicates the role of a later hydrothermal process in the formation of the Boroujerd zircons. Crystallization temperature, oxygen fugacity and Ce4+/Ce3+ ratios decrease from the Malayer to the Shazand and finally the Boroujerd pegmatite dikes. The reduced conditions of magmatism, Th/U ratios below 1 and Y/Ho ratios higher than 20 indicate that these pegmatites are barren.