• List of Articles Database

      • Open Access Article

        1 - A Database Selection Method and Migration Model for Big Data
        Mohammad Reza Ahmadi
        The development of infrastructure and public services, especially in cloud computing applications, has exposed serious limitations and challenges in traditional database and storage techniques. The growing range of development tools and data services has created a widespread need to store the results of large-scale data processing from public, private and pervasive social networks, making migration to new databases with different characteristics an inevitable task. In traditional models, digital data were stored in storage systems or in separate databases, but with the growth in the size and composition of data and the emergence of big data structures, traditional practices and patterns no longer meet new needs, and storage systems with new formats and models are essential. In this paper, we study the structural dimensions and functions of traditional and new storage systems, and present technical solutions for migrating from traditional structured databases to unstructured data. Finally, the main features of distributed storage systems are compared with traditional models and their performance is presented.
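        As a rough illustration of the kind of structured-to-unstructured migration this abstract describes, the sketch below copies rows from a hypothetical relational table into line-delimited JSON documents that a document store could bulk-load. The table, column and file names are assumptions made for illustration, not the paper's actual setup.

```python
# A minimal, illustrative sketch of migrating rows from a traditional
# relational table into schema-flexible JSON documents.
# Table, column and file names are hypothetical placeholders.
import json
import sqlite3

src = sqlite3.connect("legacy.db")
src.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT, email TEXT)")
src.execute("INSERT INTO users VALUES (1, 'Alice', 'alice@example.com')")

with open("users.jsonl", "w") as out:
    for row in src.execute("SELECT id, name, email FROM users"):
        # Re-shape each row as a nested document, one per line,
        # ready for a NoSQL bulk-load tool.
        doc = {"_id": row[0], "name": row[1], "contact": {"email": row[2]}}
        out.write(json.dumps(doc) + "\n")
src.close()
```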
      • Open Access Article

        2 - Monitoring of Iran's Monthly Temperature Trend Based on the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim Database Output
           
        The role of temperature and the importance of its changes have drawn serious attention to this climatic variable over the last few decades. Rising temperatures in some regions of Iran and their possible implications have led to serious concerns among researchers and planners. The aim of this study is to determine the spatial pattern of Iran's temperature trend over the past four decades. To evaluate this trend, the ERA-Interim database of the European Centre for Medium-Range Weather Forecasts (ECMWF) was used for the 1979-2015 period at a monthly time step, with a spatial resolution of 0.125° × 0.125° (9966 grid cells). The non-parametric Mann-Kendall test and Sen's slope estimator were used to reveal the temperature trend. The results showed that four months, February, March, May and October, experienced a one-way (incremental) temperature trend. The highest average seasonal increase across the country occurred in winter and the lowest in fall. In all months of the year, the regions of the country between 30 and 35 degrees north experienced the most significant incremental trend. The cold and temperate regions of the country have been warming more than other areas. The negative trend in the south-east and south (Bushehr coastal areas) of Iran is attributed to four factors: 1. the maritime character of the area's climate; 2. increased atmospheric humidity; 3. precipitable water vapor; and 4. cloudiness and the range of temperature changes. The maximum average temperature trend in the country was 11.1 °C in February, and the minimum was as low as 0.002 °C in November. In general, Iran's winters are getting warmer, and this should be considered a serious threat to the country in terms of flooding.
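        The trend tests named in this abstract are standard and easy to sketch. The snippet below is a minimal implementation of the Mann-Kendall test (ignoring ties) and Sen's slope estimator, applied to a synthetic monthly series; it is not the authors' code and does not use the ERA-Interim data.

```python
# Minimal Mann-Kendall trend test (no-ties variance) and Sen's slope estimator,
# applied to a hypothetical temperature series for one grid cell.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic, Z score and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

def sens_slope(x):
    """Median of all pairwise slopes (Sen's slope estimator)."""
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)

# Hypothetical February-mean series, 1979-2015 (37 values), with a weak warming trend
rng = np.random.default_rng(0)
temps = 5.0 + 0.03 * np.arange(37) + rng.normal(0, 0.5, 37)
s, z, p = mann_kendall(temps)
print(f"S={s}, Z={z:.2f}, p={p:.3f}, Sen slope={sens_slope(temps):.4f} degC/yr")
```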
      • Open Access Article

        3 - Analysis and Evaluation of Techniques for Myocardial Infarction Based on Genetic Algorithm and Weight by SVM
        Hojatallah Hamidi Atefeh Daraei
        Although the death rate from Myocardial Infarction is decreasing in developed countries, it has become the leading cause of death in developing countries. Data mining approaches can be utilized to predict the occurrence of Myocardial Infarction. Because of the side effects of using angiography as the main method for diagnosing Myocardial Infarction, presenting a method for diagnosing MI before occurrence is very important. This study investigates prediction models for Myocardial Infarction by applying a feature selection model based on Weight by SVM and a genetic algorithm. In the proposed method, a hybrid feature selection method is applied to improve the performance of the classification algorithms. In the first stage, features are selected based on their weights, using Weight by Support Vector Machine. In the second stage, the selected features are given to a genetic algorithm for final selection. After selecting appropriate features, eight classification methods, including Sequential Minimal Optimization, REPTree, Multi-layer Perceptron, Random Forest, K-Nearest Neighbors and Bayesian Network, are applied to predict the occurrence of Myocardial Infarction. The best accuracies among the applied classification algorithms are achieved by Multi-layer Perceptron and Sequential Minimal Optimization.
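        A minimal sketch of the first stage of the described pipeline, ranking features by linear-SVM weight magnitudes before classification, is given below. The genetic-algorithm refinement stage and the clinical dataset are omitted; a synthetic dataset and scikit-learn estimators stand in for the tools the authors actually used.

```python
# Sketch of SVM-weight-based feature selection followed by two of the listed
# classifiers. Synthetic data only; the GA stage is deliberately left out.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC, SVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: "weight by SVM" -- keep features with large |coef_| in a linear SVM
selector = SelectFromModel(LinearSVC(C=0.1, dual=False, max_iter=5000),
                           max_features=10).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Stage 2 (genetic-algorithm search over the surviving features) is not shown.

# Classify on the selected features with two of the algorithms named above
for name, clf in [("Multi-layer Perceptron", MLPClassifier(max_iter=2000, random_state=0)),
                  ("SVM (SMO-style)", SVC(kernel="rbf"))]:
    clf.fit(X_tr_sel, y_tr)
    print(name, "accuracy:", round(clf.score(X_te_sel, y_te), 3))
```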
      • Open Access Article

        4 - Unsupervised Segmentation of Retinal Blood Vessels Using the Human Visual System Line Detection Model
        Mohsen Zardadi Nasser Mehrshad Seyyed Mohammad Razavi
        Retinal image assessment has been employed by the medical community for diagnosing vascular and non-vascular pathology. Computer-based analysis of blood vessels in retinal images helps ophthalmologists monitor larger populations for vessel abnormalities. Automatic segmentation of blood vessels from retinal images is the initial step of computer-based assessment for blood vessel anomalies. In this paper, a fast unsupervised method for automatic detection of blood vessels in retinal images is presented. In order to eliminate the optic disc and background noise in the fundus images, a simple preprocessing technique is introduced. First, a newly devised method based on a simple cell model of the human visual system (HVS) enhances the blood vessels in various directions. Then, an activity function is defined on the simple cell responses. Next, an adaptive threshold is used as an unsupervised classifier that labels each pixel as vessel or non-vessel to obtain a binary vessel image. Lastly, morphological post-processing is applied to eliminate exudates that are detected as blood vessels. The method was tested on two publicly available databases, DRIVE and STARE, which are frequently used for this purpose. The results demonstrate that the performance of the proposed algorithm is comparable with state-of-the-art techniques.
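        The pipeline outlined above can be approximated with off-the-shelf image-processing primitives. The sketch below substitutes Gabor filters for the paper's simple-cell model, takes the maximum response over orientations as the activity map, applies a local adaptive threshold, and removes small objects; the input file name is a placeholder, and this is not the authors' implementation.

```python
# Approximate vessel-segmentation sketch: oriented filter responses,
# adaptive thresholding, and morphological clean-up.
# "fundus_green.png" is a placeholder for a fundus image.
import numpy as np
from skimage import filters, io, morphology

img = io.imread("fundus_green.png", as_gray=True)

# Max response over several orientations stands in for orientation-selective
# simple cells enhancing vessels in various directions.
responses = [filters.gabor(img, frequency=0.15, theta=t)[0]
             for t in np.linspace(0, np.pi, 12, endpoint=False)]
activity = np.max(np.abs(np.stack(responses)), axis=0)

# Unsupervised per-pixel classification via a local adaptive threshold
vessels = activity > filters.threshold_local(activity, block_size=51)

# Morphological post-processing: drop small connected components (e.g. exudates)
vessels = morphology.remove_small_objects(vessels, min_size=100)
io.imsave("vessel_mask.png", (vessels * 255).astype(np.uint8))
```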
      • Open Access Article

        5 - Optimization of the LZ78 Compression Algorithm in Tracing the Location of Mobile Communication Users
        M.R. Mirsarraf Mohammad Hakkak
        For location updating of mobile users, two compression algorithms, LZ78 and a proposed modified LZ78, are introduced in this paper for use in PCS networks. Problems related to using these algorithms include the memory required for the dictionary in mobile units and in the HLR database, as well as ambiguity about the last location of mobile users due to the delay in location updating caused by the compression algorithm. The advantage of these algorithms is a reduction in the number of location updates for a mobile user. With some modifications to the LZ78 algorithm, its implementation problems are reduced and its suitability for PCS networks is improved. These changes result from combining the algorithm with a distance-based location-updating scheme and from sending symbols corresponding to a limited set of neighborhood identities instead of cell numbers to the compression algorithm. To compare LZ78 with the proposed modified algorithm, we use simulation. The simulation program supports two PCS network structures, square-cell and hexagonal-cell networks. For mobile users, we consider two movement patterns, one directional and one omnidirectional. The outputs of the simulation program are the number of location updates, the maximum ambiguity of user location, and the dictionary size of the compression algorithms. Comparing the two algorithms by simulation, we observe that the modified LZ78 algorithm yields a lower number of location updates, lower maximum user-location ambiguity, and a smaller dictionary than the original LZ78 algorithm. At the end of the article, the cost of location management for a mobile user is calculated versus the call-to-mobility ratio (the average number of calls to the user divided by the average number of its movements). Comparing the location-management cost of the LZ78 algorithm, the modified LZ78 algorithm and the distance-based location-updating algorithm, we observe that the modified LZ78 compression algorithm reduces the cost of location management.
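        The dictionary-building step at the core of both algorithms is plain LZ78. The sketch below encodes a hypothetical trace of neighborhood identifiers and reports how many phrases (i.e., location updates) are emitted and how large the dictionary grows; it omits the paper's distance-based modification.

```python
# Minimal LZ78 encoder over a symbolic movement trace. In the paper's setting
# the symbols would be neighborhood identifiers; an update is sent only when a
# phrase is completed. The trace below is hypothetical.

def lz78_encode(symbols):
    """Return LZ78 phrases as (dictionary_index, symbol) pairs and the dictionary."""
    dictionary = {}          # phrase -> index (1-based)
    phrases = []
    current = ""
    for s in symbols:
        candidate = current + s
        if candidate in dictionary:
            current = candidate                  # keep extending a known phrase
        else:
            prefix_index = dictionary.get(current, 0)
            phrases.append((prefix_index, s))    # emit phrase = one location update
            dictionary[candidate] = len(dictionary) + 1
            current = ""
    if current:                                  # flush a trailing known phrase
        phrases.append((dictionary[current], ""))
    return phrases, dictionary

trace = "ABABCABCD"                              # e.g. visited neighborhood IDs
phrases, dictionary = lz78_encode(trace)
print("updates sent:", len(phrases), "dictionary size:", len(dictionary))
print(phrases)
```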
      • Open Access Article

        6 - Optimization of Query Processing in Versatile Database Using Ant Colony Algorithm
        Hasan Asil
        Nowadays, with the advancement of database and information technology, databases have grown into large-scale distributed databases. Accordingly, database management systems are being improved and optimized so that they can answer customer queries at lower cost. Query processing is one of the important topics in database management systems and has attracted considerable attention. Many techniques have been proposed for query processing in database systems, all aiming to optimize it. The main interest lies in making run-time adjustments to processing, or in summarizing queries, using new approaches. The aim of this research is to optimize query processing in the database using adaptive methods. The Ant Colony Optimization (ACO) algorithm is used for solving optimization problems; it relies on deposited pheromone to select the optimal solution. In this article, an adaptive hybrid query-processing scheme is proposed. The proposed algorithm is divided into three parts: a separator, a replacement policy, and a query-similarity detector. To improve optimization, frequent adaptation and correct selection among queries, the Ant Colony Algorithm is applied, and queries sent to the database are collected based on versatility (adaptability) scheduling. The simulation results demonstrate that this method reduces the time spent in the database. One of its advantages is that it identifies frequent queries at high-traffic times and minimizes their execution time. The method reduces the system load during high-traffic periods for adaptive query processing and generally reduces execution time, aiming to minimize cost. It reduces query cost in the database by 2.7%; owing to the versatility of high-cost queries, this improvement is most evident at high-traffic times. In future studies, distributed databases can be optimized by adopting new system-development methods.
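        The pheromone mechanism this abstract relies on can be illustrated with a toy example. In the sketch below, ants repeatedly choose among a few candidate execution plans with made-up costs, pheromone evaporates and is reinforced in inverse proportion to cost, and the colony converges on the cheapest plan; this is not the paper's three-part algorithm.

```python
# Toy ant-colony selection among candidate query execution plans.
# Plan costs are invented numbers, not measurements from a real DBMS.
import random

plan_costs = {"plan_a": 120.0, "plan_b": 80.0, "plan_c": 95.0}
pheromone = {p: 1.0 for p in plan_costs}
EVAPORATION, N_ANTS, N_ITER = 0.1, 10, 30

def pick_plan():
    """Roulette-wheel choice proportional to pheromone."""
    total = sum(pheromone.values())
    r, acc = random.uniform(0, total), 0.0
    for plan, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return plan
    return plan

for _ in range(N_ITER):
    choices = [pick_plan() for _ in range(N_ANTS)]
    for p in pheromone:                       # evaporation
        pheromone[p] *= (1 - EVAPORATION)
    for p in choices:                         # reinforcement, inverse to cost
        pheromone[p] += 1.0 / plan_costs[p]

best = max(pheromone, key=pheromone.get)
print("converged on", best, "with cost", plan_costs[best])
```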
      • Open Access Article

        7 - Providing a New Solution in Selecting Suitable Databases for Storing Big Data in the National Information Network
        Mohammad Reza Ahmadi Davood Maleki Ehsan Arianyan
        With the development of infrastructure and applications, especially public services in the form of cloud computing, traditional models of database services and their storage methods have faced severe limitations and challenges. The increasing development of data-producing tools and the need to store the results of large-scale processing arising from various activities in the national information network, together with data produced by the private sector and pervasive social networks, have made migration to new databases with appropriate features inevitable. With the expansion and change in the size and composition of data and the formation of big data, traditional practices and patterns no longer meet the new needs; it is therefore necessary to use data storage systems in new, scalable formats and models. This paper reviews the structural dimensions and different functions of traditional databases and modern storage systems, and the technical solutions for migrating from traditional databases to modern ones suitable for big data. The basic considerations for connecting traditional and modern databases for storing and processing data obtained from the national information network are also presented, and the parameters and capabilities of databases on a standard platform and on Hadoop are examined. As a practical example, a combination of traditional and modern databases is evaluated and compared using the balanced scorecard method.
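        The balanced-scorecard comparison mentioned at the end can be illustrated with a small weighted-score calculation. The criteria, weights and scores below are invented placeholders for illustration, not figures from the paper.

```python
# Hypothetical balanced-scorecard style comparison of a traditional relational
# database against a Hadoop-based store. All numbers are made up.
criteria = {                     # weight, score(relational), score(hadoop), scores 1-10
    "scalability":          (0.30, 5, 9),
    "transaction support":  (0.25, 9, 5),
    "query latency":        (0.20, 8, 6),
    "storage cost":         (0.15, 5, 8),
    "operational maturity": (0.10, 9, 6),
}

def weighted_score(column):
    """Weighted sum of scores for one candidate (0 = relational, 1 = Hadoop)."""
    return sum(w * scores[column] for w, *scores in criteria.values())

print("relational:", round(weighted_score(0), 2))
print("hadoop    :", round(weighted_score(1), 2))
```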