• List of Articles: Segmentation

      • Open Access Article

        1 - Applying data mining techniques to region segmentation for entrance exams to governmental universities
        Narges Serati Ashtiani Somayyeh Alizadeh Ali Mobasseri
        Large numbers of Iranian high school graduates wish to enter popular governmental universities and compete for admission. These graduates come from various regions with very different levels of access to facilities. In the view of the directors of the relevant agencies, quota allocation solves this problem, and they seek to use the knowledge hidden in the data available in this area. In this way, applicants from each region are compared with one another, and managers are helped to allocate proper quotas to the students in the regions of each segment. In recent years, quota allocation was determined by taxonomy, whose result is a kind of ranking that allows neither group analysis nor a principled choice of the number of regions. Clustering is a good strategy for solving this problem. This study applies data mining techniques and the CRISP-DM methodology, for the first time, to related datasets from the Ministry of Education, the Ministry of Interior, the Ministry of Health, the statistics center, and the evaluation organization. After extracting the attributes effective in this area, data preparation, data reduction, and combination of attributes using factor analysis were carried out. In the next step, the K-means algorithm assigns similar items to the cluster whose centroid is at minimum distance, and then neural networks and decision trees are used to assign new items to a cluster. Finally, to assess the created models, the accuracy of the outputs was compared with other methods. The outcomes of this research are: determining the optimal number of segments, segmenting the regions, analyzing each segment, extracting decision rules, predicting class labels for new areas faster and more accurately, and allowing the formulation of appropriate strategies for each segment.
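        A minimal sketch of the pipeline's core steps, assuming synthetic data and scikit-learn; the attribute count, factor count, and cluster count are illustrative, not the study's values:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import FactorAnalysis
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 12))              # 300 regions, 12 facility indicators

        fa = FactorAnalysis(n_components=4).fit(X)  # attribute reduction/combination
        factors = fa.transform(X)
        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(factors)

        # A decision tree trained on the cluster labels yields decision rules
        # and assigns a class label to a new region quickly.
        tree = DecisionTreeClassifier(max_depth=4).fit(factors, km.labels_)
        new_region = rng.normal(size=(1, 12))
        print("segment of new region:", tree.predict(fa.transform(new_region))[0])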
      • Open Access Article

        2 - Image Processing of Steel Sheets for Defect Detection Using Gabor Wavelet
        Masoud Shafiee Mostafa Sadeghi
        In different steps of steel production, various defects appear on the surface of the sheet. Setting aside the causes of the defects, precise identification of their kinds helps classify steel sheets correctly, and thus accounts for a high percentage of the quality control process. Quality control of steel sheets is of great importance for promoting product quality and maintaining a competitive market. In this paper, in addition to a quick review of the image processing techniques used, a fast and precise solution for detecting texture defects in steel sheets based on the Gabor wavelet is presented. In the first step, the approach extracts substantial texture information from the image using the Gabor wavelet, covering both different orientations and different frequencies. Then, using statistical methods, the images with more obvious defects are selected and the locations of the defects are determined. Experimental samples demonstrate the accuracy and speed of the method.
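        A hedged OpenCV sketch of the Gabor-bank idea described above; the filename, kernel parameters, and the 3-sigma threshold are assumptions rather than the paper's tuned values:

        import cv2
        import numpy as np

        img = cv2.imread("sheet.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        responses = []
        for theta in np.arange(0, np.pi, np.pi / 4):      # 4 orientations
            for lam in (4.0, 8.0, 16.0):                  # 3 wavelengths (frequencies)
                kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                          lambd=lam, gamma=0.5)
                responses.append(cv2.filter2D(img, cv2.CV_32F, kern))

        # Defects deviate from the dominant texture: flag pixels whose maximum
        # filter energy lies far above the image-wide statistics.
        energy = np.max(np.abs(np.stack(responses)), axis=0)
        mask = energy > energy.mean() + 3 * energy.std()
        cv2.imwrite("defect_mask.png", mask.astype(np.uint8) * 255)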
      • Open Access Article

        3 - Historical analysis of institutionalization trends in the field of science and technology policy in Iran
        Seyed Kamal Vaezi mehrdad javaherdashti
        This study analyzes the institutions that have served science and technology policy in Iran from the Pahlavi era (first and second Pahlavi) until now. The institutions created during this period and their duties and powers are briefly reviewed, along with the approaches that were the source of the institutions of their time, such as modernization in the Pahlavi era and the redefinition of values and value creation based on Islamic indicators in the period after the 1979 Revolution. Using the historical method (the prevailing environmental conditions) and a focus group, this trend is analyzed. The conclusions offer policy recommendations obtained from this analysis, such as merging some institutions, establishing a mechanism of unity amid the plurality of policymakers and implementers, and vertical and horizontal strategic coordination between programs and macro documents in this field, along with practical suggestions and directions for future research.
      • Open Access Article

        4 - Outdoor Color Scene Segmentation towards Object Detection using Dual-Resolution Histograms
        Javad Rasti Monadjemi Monadjemi Abbas Vafaei
        One of the most important problems in automatic outdoor scene analysis is segmentation aimed at object detection. The special characteristics of such images, like color variety, different luminance effects and color shades, abundant texture details, and diversity of objects, lead to major challenges in the segmentation process. In previous research, we proposed a k-means clustering algorithm on a multi-resolution platform for preliminary color segmentation. In this method, the texture details are deliberately expunged, and apparent clusters are gradually removed in the blurred versions of the image to let more detailed classes emerge in the sharper versions. The performance of this step-by-step approach is higher than that of traditional k-means color clustering for outdoor scene segmentation. In this paper, an adaptive method based on the circular hue histogram on a dual-resolution platform is suggested to detect the apparent clusters in the blurred images, as sketched below. Experimental results on two outdoor datasets show about a 20% decrease in pixel segmentation error as well as around a 30% increase in both the precision and the speed of convergence of the clustering algorithm.
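        One way to read "apparent clusters" off a circular hue histogram, sketched under assumptions (the blur size, bin count, and peak rule are not from the paper):

        import cv2
        import numpy as np

        img = cv2.imread("scene.png")
        blurred = cv2.GaussianBlur(img, (15, 15), 0)       # low-resolution version
        hue = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)[:, :, 0].ravel()

        hist, _ = np.histogram(hue, bins=180, range=(0, 180))
        hist = np.convolve(np.concatenate([hist[-3:], hist, hist[:3]]),
                           np.ones(7) / 7.0, mode="same")[3:-3]  # circular smoothing

        # Local maxima above the mean serve as initial hue clusters.
        peaks = [i for i in range(180)
                 if hist[i] > hist[i - 1] and hist[i] >= hist[(i + 1) % 180]
                 and hist[i] > hist.mean()]
        print("initial cluster hues:", peaks)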
      • Open Access Article

        5 - Providing a method for customer segmentation using the RFM model under conditions of uncertainty
        Mohammadreza Gholamian Azime Mozafari
        The purpose of this study is to provide a method for customer segmentation of a private bank in Shiraz based on the RFM model in the face of uncertainty in customer data. In the proposed framework, the values of the RFM model indicators, recency of exchanges (R), number of exchanges (F), and monetary value of exchanges (M), were first extracted from the customer database and preprocessed. Given the breadth of the data, it is not possible to fix an exact number that determines whether a customer is good or bad; therefore, to handle this uncertainty, gray number theory, which represents the customer's situation as a range, was used. In this way, using a different method, the bank's customers were segmented; according to the results, customers fall into three main segments or clusters: good, normal, and bad customers. After validating the clusters using the Dunn and Davies-Bouldin indices, the characteristics of the customers in each segment were identified, and finally suggestions were made to improve the customer relationship management system.
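        A small pandas sketch of the RFM extraction step, with a naive gray-style interval around each score; the schema and the +/-10% band are illustrative assumptions, not the bank's data:

        import pandas as pd

        tx = pd.DataFrame({
            "customer": ["a", "a", "b", "c", "c", "c"],
            "date": pd.to_datetime(["2023-01-05", "2023-03-01", "2023-02-10",
                                    "2023-01-20", "2023-02-25", "2023-03-10"]),
            "amount": [120.0, 80.0, 300.0, 50.0, 75.0, 60.0],
        })

        now = tx["date"].max()
        rfm = tx.groupby("customer").agg(
            R=("date", lambda d: (now - d.max()).days),  # recency of exchanges
            F=("date", "count"),                         # number of exchanges
            M=("amount", "sum"),                         # monetary value of exchanges
        )

        # Gray-number flavour: treat each indicator as an interval, not a crisp value.
        print(rfm.mul(0.9).add_suffix("_lo").join(rfm.mul(1.1).add_suffix("_hi")))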
      • Open Access Article

        6 - Segmentation of the Shotori fault using structural, geomorphological, seismicity and fractal analysis
             
        The Shotori active fault zone (at the northern end of the Nayband fault) has a dextral strike-slip mechanism with a reverse component. Landsat image studies show that this fault is discontinuous and segmented. In this research, based on the geometric discontinuity of the fault, two segments were determined: a northern segment (trending N40W) and a southern segment (trending N20W). Both are reverse faults with a right-lateral strike-slip component. The southern segment is the most active segment, based on the fractal analysis of fractures (DS = 1.60, DN = 1.73) and earthquakes (DS = 0.43, DN = 0.68), morphotectonic parameters such as the river slope index (SLS = 1703.27, SLN = 1526.7), river channel sinuosity (SS = 1.24, SN = 1.27), and the V ratio (VS = 0.7, VN = 0.9), as well as structural and seismic data (subscripts S and N refer to the southern and northern segments). The most frequent recorded earthquakes, and the biggest registered earthquake with a magnitude of 7.4 on the Richter scale, have taken place in the southern segment. This indicates a high potential for seismic activity on this segment of the Shotori fault.
      • Open Access Article

        7 - Design, Implementation and Evaluation of Multi-terminal Binary Decision Diagram based Binary Fuzzy Relations
        Hamid Alavi Toussi Bahram Sadeghi Bigham
        Elimination of redundancies in the memory representation is necessary for fast and efficient analysis of large sets of fuzzy data. In this work, we use MTBDDs (multi-terminal binary decision diagrams) as the underlying data structure to represent fuzzy sets and binary fuzzy relations. This leads to the elimination of redundancies in the representation, fewer computations, and faster analyses. We also extended a BDD package (BuDDy) to support MTBDDs in general and fuzzy sets and relations in particular. The representation and manipulation of MTBDD-based fuzzy sets and binary fuzzy relations are described in this paper, including the design and implementation of fuzzy operations such as max, min, and max-min composition. In particular, an efficient algorithm for computing max-min composition is presented. The effectiveness of our MTBDD-based implementation is shown by applying it to the fuzzy connectedness and image segmentation problem. Compared to a base implementation, the running time of the MTBDD-based implementation was faster (in our test cases) by a factor ranging from 2 to 27. Also, when the MTBDD-based data structure was employed, the memory needed to represent the final results was improved by a factor ranging from 37.9 to 265.5. We also describe our base implementation, which is based on matrices.
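        The abstract's baseline represents relations as matrices; that baseline max-min composition is a few lines of NumPy (the MTBDD version avoids this dense computation by sharing nodes):

        import numpy as np

        def max_min_compose(R, S):
            """(R o S)(x, z) = max over y of min(R(x, y), S(y, z))."""
            return np.minimum(R[:, :, None], S[None, :, :]).max(axis=1)

        R = np.array([[0.2, 0.8], [1.0, 0.4]])   # fuzzy relation on X x Y
        S = np.array([[0.5, 0.9], [0.3, 0.6]])   # fuzzy relation on Y x Z
        print(max_min_compose(R, S))             # fuzzy relation on X x Z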
      • Open Access Article

        8 - Unsupervised Segmentation of Retinal Blood Vessels Using the Human Visual System Line Detection Model
        Mohsen Zardadi Nasser Mehrshad Seyyed Mohammad Razavi
        Retinal image assessment has been employed by the medical community for diagnosing vascular and non-vascular pathology. Computer-based analysis of blood vessels in retinal images helps ophthalmologists monitor larger populations for vessel abnormalities. Automatic segmentation of blood vessels from retinal images is the initial step of computer-based assessment for blood vessel anomalies. In this paper, a fast unsupervised method for the automatic detection of blood vessels in retinal images is presented. A simple preprocessing technique is introduced to eliminate the optic disc and background noise in the fundus images. First, a newly devised method based on a simple-cell model of the human visual system (HVS) enhances the blood vessels in various directions. Then, an activity function is defined on the simple-cell responses. Next, an adaptive threshold is used as an unsupervised classifier, classifying each pixel as vessel or non-vessel to obtain a binary vessel image. Lastly, morphological post-processing is applied to eliminate exudates that were detected as blood vessels. The method was tested on two publicly available databases, DRIVE and STARE, which are frequently used for this purpose. The results demonstrate that the performance of the proposed algorithm is comparable with state-of-the-art techniques.
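        A hedged sketch of the enhance-then-threshold flavour of this pipeline, with Gabor kernels standing in for the paper's simple-cell model; the filename and all parameters are assumptions:

        import cv2
        import numpy as np

        green = cv2.imread("fundus.png")[:, :, 1]          # vessels contrast best in green

        responses = []
        for theta in np.arange(0, np.pi, np.pi / 12):      # 12 directions
            kern = cv2.getGaborKernel((15, 15), sigma=2.5, theta=theta,
                                      lambd=8.0, gamma=0.5)
            responses.append(cv2.filter2D(255 - green, cv2.CV_32F, kern))

        activity = np.max(np.stack(responses), axis=0)     # strongest oriented response
        activity = cv2.normalize(activity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # Unsupervised pixel classification via a local adaptive threshold,
        # then a morphological opening as light post-processing.
        vessels = cv2.adaptiveThreshold(activity, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                        cv2.THRESH_BINARY, 25, -5)
        vessels = cv2.morphologyEx(vessels, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        cv2.imwrite("vessel_mask.png", vessels)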
      • Open Access Article

        9 - Efficient Land-cover Segmentation Using Meta Fusion
        Morteza Khademi Hadi Sadoghi Yazdi
        Most popular fusion methods have their own limitations; e.g., OWA (ordered weighted averaging) is restricted to a linear model and requires the proportions of the inputs in the fusion to sum to 1. Considering all possible fusion models, the proposed fusion method incorporates the uncertainty of the input data into the fusion process for segmentation; indeed, the limitations in the proposed method are determined adaptively for each input datum separately. Land-cover segmentation using remotely sensed (RS) images is a challenging research subject, because objects in a unique land cover often appear dissimilar in different RS images. In this paper, multiple co-registered RS images are utilized to segment land cover using FCM (fuzzy c-means). As an appropriate tool for modeling changes, the fuzzy concept is utilized to fuse and integrate the information of the input images. By categorizing the ground points, it is shown in this paper, for the first time, that fuzzy numbers are needed and are more suitable than crisp ones for merging multi-image information and for segmentation. Finally, FCM is applied to the fused image pixels (with fuzzy values) to obtain a single segmented image. Mathematical analysis of the proposed cost function and simulation results also show the significant performance of the proposed method in terms of noise-free and fast segmentation.
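        The generic fuzzy c-means update the paper builds on, written out in NumPy (this is standard FCM, not the proposed fusion cost function; the toy data stands in for fused pixel values):

        import numpy as np

        def fcm(X, c=3, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(c), size=len(X))     # memberships sum to 1 per row
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = d ** (-2 / (m - 1))
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        pixels = np.random.default_rng(1).uniform(size=(1000, 2))  # e.g. fused band values
        centers, U = fcm(pixels)
        print(centers, np.bincount(U.argmax(axis=1)))      # hard labels from memberships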
      • Open Access Article

        10 - Analysis of Business Customers’ Value Network Using Data Mining Techniques
        Forough Farazzmanesh (Isvand) Monireh Hosseini
        In today's competitive environment, customers are the most important asset of any company. Therefore, companies should understand what the retention and value drivers are for each customer. An approach that can help consider customers' different value dimensions is the value network. This paper aims to introduce a new approach using data mining techniques for mapping and analyzing customers' value networks, and applies this approach in a real case study. This research contributes to developing and implementing a methodology to identify and define the network entities of a value network in the context of B2B relationships. To conduct this work, we use a combination of methods and techniques designed to analyze customer datasets (e.g., RFM and customer migration) and to analyze the value network. As a result, this paper develops a new strategic network view of customers and discusses how a company can add value to its customers. The proposed approach provides an opportunity for marketing managers to gain a deep understanding of their business customers and the characteristics and structure of their customers' value network. This paper is the first contribution of its kind to focus exclusively on large-dataset analytics to analyze value networks. This new approach indicates that future research on value networks can further benefit from data mining tools. In the case study, we identify the value entities of the network and its value flows in a telecommunication organization using the available data, in order to show that value in the network can be improved by continuous monitoring.
      • Open Access Article

        11 - Improvement in Accuracy and Speed of Image Semantic Segmentation via Convolutional Neural Network Encoder-Decoder
        Hanieh Zamanian Hassan Farsi Sajad Mohammadzadeh
        Recent research on pixel-wise semantic segmentation uses deep neural networks to improve their accuracy and speed in order to increase the efficiency of practical applications such as automated driving. These approaches use deep architectures to predict pixel tags, but the obtained results can be undesirable, mainly because of the max-pooling operators, which reduce the resolution of the feature maps. In this paper, we present a convolutional neural network composed of encoder-decoder segments based on the successful SegNet network. The encoder section has a depth of 2; its first part has 5 convolutional layers, each with 64 filters of size 3×3. In the decoding section, the dimensions of the decoding filters are adjusted according to the convolutions used at each step of the encoding, so at each step 64 filters of size 3×3 are used for decoding, where the weights of these filters are adjusted by network training and adapted to the training data. Owing to the low depth of 2 and the low number of parameters, the speed and accuracy of the proposed network improve compared to popular networks such as SegNet and DeepLab. For the CamVid dataset, after a total of 60,000 iterations, we obtain 91% global accuracy, which indicates the improved efficiency of the proposed method.
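        A PyTorch sketch matching the shape the abstract describes (encoder depth 2, stages of five 3×3 convolutions with 64 filters, SegNet-style unpooling); everything not stated in the abstract is assumed:

        import torch
        import torch.nn as nn

        def block(in_ch, out_ch, n_convs=5):
            layers = []
            for i in range(n_convs):
                layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                           nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            return nn.Sequential(*layers)

        class SmallSegNet(nn.Module):
            def __init__(self, n_classes=12):
                super().__init__()
                self.enc1, self.enc2 = block(3, 64), block(64, 64)
                self.pool = nn.MaxPool2d(2, return_indices=True)   # keep indices, as in SegNet
                self.unpool = nn.MaxUnpool2d(2)
                self.dec2, self.dec1 = block(64, 64), block(64, 64)
                self.classifier = nn.Conv2d(64, n_classes, 1)      # per-pixel class scores

            def forward(self, x):
                x, i1 = self.pool(self.enc1(x))
                x, i2 = self.pool(self.enc2(x))
                x = self.dec2(self.unpool(x, i2))
                x = self.dec1(self.unpool(x, i1))
                return self.classifier(x)

        print(SmallSegNet()(torch.randn(1, 3, 64, 64)).shape)      # -> (1, 12, 64, 64)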
      • Open Access Article

        12 - Performance Analysis of Hybrid SOM and AdaBoost Classifiers for Diagnosis of Hypertensive Retinopathy
        Wiharto Wiharto Esti Suryani Murdoko Susilo
        Hypertensive retinopathy can be diagnosed by observing the tortuosity of the retinal vessels. Tortuosity is a feature that is able to show the characteristics of normal or abnormal blood vessels. This study aims to analyze the performance of a computer-aided diagnosis system for hypertensive retinopathy (CAD-RH) based on the feature extraction of retinal blood vessel tortuosity. The study uses a segmentation method based on self-organizing map (SOM) clustering combined with feature extraction, feature selection, and the ensemble Adaptive Boosting (AdaBoost) classification algorithm. Feature extraction was performed using fractal analysis with the box-counting method, lacunarity with the gliding-box method, and invariant moments. Feature selection was done using the information gain method: all the produced features are ranked, and features are selected by reference to their gain values. The best system performance is obtained with 2 clusters, the fractal dimension, lacunarity with box sizes 2^2 to 2^9, and invariant moments M1 and M3. Performance under these conditions reaches 84% sensitivity, 88% specificity, a positive likelihood ratio (LR+) of 7.0, and an area under the curve (AUC) of 86%. This model is also better than a number of ensemble algorithms, such as bagging and random forest. Referring to these results, it can be concluded that this model can serve as an alternative CAD-RH approach, with performance in the good category.
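        The box-counting step is standard; a compact NumPy sketch on a toy binary mask (the paper pairs this with lacunarity and invariant moments):

        import numpy as np

        def box_count_dimension(mask):
            """Estimate the fractal dimension of a 2-D binary array by box counting."""
            sizes = 2 ** np.arange(1, int(np.log2(min(mask.shape))))
            counts = []
            for s in sizes:
                h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
                boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(boxes.any(axis=(1, 3)).sum())        # occupied boxes
            # Slope of log N(s) versus log(1/s) estimates the dimension.
            return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]

        mask = np.random.default_rng(0).random((256, 256)) < 0.05  # toy vessel mask
        print(box_count_dimension(mask))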
      • Open Access Article

        13 - A Threshold-based Brain Tumour Segmentation from MR Images using Multi-Objective Particle Swarm Optimization
        Katkoori Arun Kumar Ravi Boda
        The Pareto-optimal solution is unique in single-objective Particle Swarm Optimization (SO-PSO) problems, as the emphasis is on the decision variable space. This paper introduces a multi-objective optimization technique, Multi-Objective Particle Swarm Optimization (MO-PSO), for image segmentation. MO-PSO extends the principle of optimization by facilitating the simultaneous optimization of several single objectives, and is used in solving various image processing problems such as image segmentation and image enhancement. Here, the technique is used to detect tumours of the human brain in MR images. To obtain the threshold, the suggested algorithm uses two fitness (objective) functions: image entropy and image variance. These two objective functions are distinct from each other and are simultaneously optimized to create a sequence of Pareto-optimal solutions. The global best (Gbest) obtained from MO-PSO is treated as the threshold. Tests on various MRI images demonstrate the efficiency of the MO-PSO technique. In terms of the best, worst, mean, median, and standard deviation parameters, the MO-PSO technique is also contrasted with the existing single-objective PSO (SO-PSO) technique. Experimental results show that MO-PSO is 28% better than SO-PSO on the 'best' parameter with reference to the image entropy function, and 92% more accurate with reference to the image variance function.
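        A brute-force sketch of the two objectives named above, using Kapur-style entropy and Otsu-style between-class variance as stand-ins, with a naive Pareto filter in place of the swarm search:

        import numpy as np

        def objectives(img, t):
            hist = np.bincount(img.ravel(), minlength=256) / img.size
            p_bg, p_fg = hist[:t + 1], hist[t + 1:]
            w_bg, w_fg = p_bg.sum(), p_fg.sum()
            if w_bg == 0 or w_fg == 0:
                return -np.inf, -np.inf
            H = lambda p, w: -(p[p > 0] / w * np.log2(p[p > 0] / w)).sum()
            entropy = H(p_bg, w_bg) + H(p_fg, w_fg)            # class entropies
            mu_bg = (np.arange(t + 1) * p_bg).sum() / w_bg
            mu_fg = (np.arange(t + 1, 256) * p_fg).sum() / w_fg
            variance = w_bg * w_fg * (mu_bg - mu_fg) ** 2      # between-class variance
            return entropy, variance

        img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
        scores = {t: objectives(img, t) for t in range(256)}
        # Keep thresholds that no other threshold beats on both objectives.
        pareto = [t for t, s in scores.items()
                  if not any(o[0] > s[0] and o[1] > s[1] for o in scores.values())]
        print("Pareto-optimal thresholds:", pareto)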
      • Open Access Article

        14 - Foreground-Background Segmentation using K-Means Clustering Algorithm and Support Vector Machine
        Masoumeh Rezaei Mansoureh Rezaei Masoud Rezaei
        Foreground-background image segmentation is an important research problem and one of the main tasks in computer vision; its purpose is detecting variations in image sequences. It provides candidate objects for further attentional selection, e.g., in video surveillance. In this paper, we introduce an automatic and efficient foreground-background segmentation method. The proposed method starts with the detection of visually salient image regions using a saliency map based on the Fourier transform and a Gaussian filter. Then, each point in the map is classified as salient or non-salient using a binary threshold. Next, a hole-filling operator is applied to fill holes in the resulting image, and the area-opening method is used to remove small objects. For better separation of the foreground and background, dilation and erosion operators are also applied to shrink and expand the obtained regions. Afterward, the foreground and background samples are obtained. Because these samples are numerous, K-means clustering is used as a sampling technique to restrict the computational effort to the region of interest: K cluster centers per region are used to train a Support Vector Machine (SVM). The SVM, a powerful binary classifier, segments the area of interest from the background. The proposed method is applied to a benchmark dataset of 1000 images, and experimental results demonstrate the superiority of the proposed method over several other foreground-background segmentation methods in terms of ER, VI, GCE, and PRI.
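        The saliency step resembles the classic spectral-residual construction (Fourier transform plus Gaussian smoothing); a hedged sketch of that step only, with the later stages (morphology, K-means sampling, SVM) omitted and the filename assumed:

        import cv2
        import numpy as np

        gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
        gray = cv2.resize(gray, (64, 64))                  # saliency works at small scale

        F = np.fft.fft2(gray)
        log_amp, phase = np.log(np.abs(F) + 1e-8), np.angle(F)
        residual = log_amp - cv2.blur(log_amp, (3, 3))     # spectral residual
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        sal = cv2.GaussianBlur(sal, (9, 9), 2.5)

        mask = (sal > 2 * sal.mean()).astype(np.uint8)     # binary saliency threshold
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
        cv2.imwrite("saliency_mask.png", mask * 255)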
      • Open Access Article

        15 - Automatic Change Detection by Intelligent Backgrounding Method
        M. Fathi H. Shakuri
        The segmentation of foreground regions in image sequences is the first and most important stage in many automated visual surveillance applications, and background subtraction is a method typically used for this purpose. In this method, each new frame is compared with a model of the empty scene (which we call the 'background'); the regions of the image that differ significantly from the background are then identified as foreground. This paper presents a new background subtraction approach in which each image is divided into similar N×N blocks, some features are extracted from every block, and the history of each feature is modeled as a mixture of Gaussian distributions. These distributions are updated as each frame's information is received. The Gaussian distributions of the adaptive mixture models are then evaluated to determine which ones most likely describe the background, and each block is classified as background or foreground based on the Gaussian distribution that represents its feature value most effectively. Software implementations on personal computers show the acceptable capability of this approach in handling intruders into the scene, objects being introduced to or removed from the scene, noise, and unwanted changes in the background. Its high execution speed and reduced memory requirements also make this approach suitable for a high percentage of real-time applications.
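        OpenCV ships an adaptive mixture-of-Gaussians subtractor built on the same idea (applied per pixel rather than to the paper's per-block features); a minimal runnable analogue, with the video filename assumed:

        import cv2

        cap = cv2.VideoCapture("traffic.avi")
        subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg_mask = subtractor.apply(frame)    # update the mixture model, get foreground
            cv2.imshow("foreground", fg_mask)
            if cv2.waitKey(30) & 0xFF == 27:     # Esc to quit
                break
        cap.release()
        cv2.destroyAllWindows()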
      • Open Access Article

        16 - Segmentation of Steel Surfaces towards Defect Detection Using New Gabor Composition Method
        S. J. Alemasoom A. Monadjemi H. A. Alemasoom
        Images of steel surfaces are generally textural images, and different texture analysis methods exist to extract features from them. Among the methods using multi-scale/multi-directional analysis, Gabor filters are used for feature extraction. In this paper, we extract texture features using an optimal Gabor filter bank, designed so that diverse filtering frequencies and orientations allow it to extract a considerable amount of texture information from the input images. We also introduce a new method, called Gabor composition, for the segmentation and defect detection of steel surfaces. In this method, using two different algorithms, the input image is decomposed into detail images by an appropriate Gabor filter bank, and selected detail images are then recomposed. The resulting feature map illustrates the defective areas well. By calculating and comparing the data distributions of the detail images, the second Gabor composition algorithm can accomplish segmentation without needing normal (defect-free) images or a preset number of detail images to recompose. Furthermore, we ran various tests to optimize the segmentation by means of classifiers; using a K-means classifier and adding gray levels to the extracted features completes the segmentation procedure. The experimental results show that the Gabor composition method performs better at defect detection in most tests than the ordinary K-means classifier and the standard wavelet method, and the second Gabor composition algorithm performs best overall.
      • Open Access Article

        17 - Improving Age Estimation of Dental Panoramic Images Based on Image Contrast Correction by Spatial Entropy Method
        Masoume Mohseni Hussain Montazery Kordy Mehdi Ezoji
        In forensic dentistry, age is estimated using dental radiographs. Our goal is to automate these steps using image processing and pattern recognition techniques. From a dental radiograph, the contour is extracted and features such as the apex, width, and tooth length are determined, which are used to estimate age. Improving the quality of radiographic images is an important step in contour extraction and age estimation. In this article, the aim is to improve the image contrast in order to extract the appropriate area and properly segment the tooth, which enables better age estimation. In this model, because of the low quality of radiographic images, the image contrast is enhanced using spatial entropy, based on the spatial distribution of pixel brightness, together with another enhancement method such as Laplacian pyramids, to increase the accuracy of extracting the desired region of interest (ROI) of each tooth. Enhancing the image leads to the extraction of an appropriate ROI and the removal of unwanted areas. The database used in this study consists of 154 adolescent panoramic radiographs, of which 73 are male and 81 are female, prepared at Babol University of Medical Sciences. The results show that, keeping the tooth segmentation method fixed and only applying the proposed enhancement method, the rate of appropriate ROI extraction increased from 66% to 78%, a good improvement. The extracted ROI is then passed to the segmentation block and the contour is extracted, after which age is estimated. The age estimated with the proposed method is closer to the manual age estimate than that of the pipeline without the proposed enhancement algorithm.
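        One of the enhancement tools the abstract names, Laplacian pyramids, in a short sketch: amplify the detail levels and reconstruct (the filename, boost factor, and level count are illustrative assumptions):

        import cv2
        import numpy as np

        img = cv2.imread("panoramic.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        # Three Gaussian levels; Laplacian levels are the differences between them.
        g = [img]
        for _ in range(3):
            g.append(cv2.pyrDown(g[-1]))
        lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[::-1]) for i in range(3)]

        # Boost high-frequency detail to sharpen tooth contours, then rebuild.
        out = g[3]
        for l in reversed(lap):
            out = cv2.pyrUp(out, dstsize=l.shape[::-1]) + 1.5 * l
        cv2.imwrite("enhanced.png", np.clip(out, 0, 255).astype(np.uint8))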
      • Open Access Article

        18 - Segmenting and Profiling Psychological-Demographic Behavior of Karbala Pilgrims Attending the Annual Arbaeen Rituals: A Case Study of Pilgrims Crossing Khuzestan Province's Air and Land Borders
        Yaghoob Daghagheleh Maryam Darvishi
        Religious tourism is one of the five main branches of tourism according to the World Tourism Organization. The popularity of pilgrimage has increased in recent decades, and religious tourism has become an important part of the dynamics of the world tourism economy. This applied descriptive-survey study therefore sought to segment Karbala pilgrims attending the annual Arbaeen rituals and determine their psychological-demographic and behavioral profiles. The statistical population comprised pilgrims of Karbala who traveled through the borders of Khuzestan province to attend the Arbaeen rituals. A neural network algorithm, self-organizing maps, was used to analyze the data. The findings indicated three groups of pilgrims with different travel motives: a) more tourist than pilgrim (tourist-pilgrims), b) more pilgrim than tourist (pilgrim-tourists), and c) pilgrims. The three groups were characterized as leisure-seekers, enthusiasts of various activities, and religious devotees, and each type of pilgrim had its own psychological, demographic, and behavioral characteristics. Marketing strategies for developing tourism activities should therefore be set based on the characteristics of each type of tourist.
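        A minimal self-organizing-map sketch with the minisom package (an assumption; the paper does not name its implementation), mapping standardized survey responses onto three units and reading off segment sizes:

        import numpy as np
        from minisom import MiniSom

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))                    # 500 pilgrims, 8 survey scores
        X = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize the features

        som = MiniSom(3, 1, 8, sigma=0.5, learning_rate=0.5, random_seed=0)
        som.train_random(X, 5000)

        labels = np.array([som.winner(x)[0] for x in X]) # grid row = segment id
        print(np.bincount(labels))                       # sizes of the three segments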
      • Open Access Article

        19 - Comparing the Semantic Segmentation of High-Resolution Images Using Deep Convolutional Networks: SegNet, HRNet, CSE-HRNet and RCA-FCN
        Nafiseh Sadeghi Homayoun Mahdavi-Nasab Mansoor Zeinali Hossein Pourghasem
        Semantic segmentation is a branch of computer vision used extensively in image search engines, automated driving, intelligent agriculture, disaster management, and other human-machine interactions. Semantic segmentation aims to predict a label for each pixel from a given label set, according to semantic information. Among the proposed methods and architectures, researchers have focused on deep learning algorithms due to their strong feature learning results; thus, many studies have explored the structure of deep neural networks, especially convolutional neural networks. Most modern semantic segmentation models are based on fully convolutional networks (FCNs), which first replace the fully connected layers of common classification networks with convolutional layers to obtain pixel-level prediction results; many methods have since been proposed to improve on the basic FCN results. With the increasing complexity and variety of existing data structures, more powerful neural networks and the further development of existing networks are needed. This study aims to segment a high-resolution (HR) image dataset into six separate classes. We present an overview of some important deep learning architectures, focusing on methods that produce remarkable scores on segmentation metrics such as accuracy and F1-score. Finally, their segmentation results are discussed, and we see that the methods that are superior in overall accuracy and overall F1-score are not necessarily the best in all classes. The results of this paper therefore suggest choosing the segmentation algorithm according to the application of the segmentation and the importance of each class.
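        The per-class point is easy to make concrete: overall scores can hide weak classes, while per-class F1 exposes them (toy labels, scikit-learn metrics):

        import numpy as np
        from sklearn.metrics import accuracy_score, f1_score

        y_true = np.random.default_rng(0).integers(0, 6, size=10000)  # 6 land classes
        y_pred = y_true.copy()
        y_pred[::7] = (y_pred[::7] + 1) % 6              # inject errors into some pixels

        print("overall accuracy:", accuracy_score(y_true, y_pred))
        print("per-class F1:", f1_score(y_true, y_pred, average=None))
        print("macro F1:", f1_score(y_true, y_pred, average="macro"))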
      • Open Access Article

        20 - Test Case Selection Based on Test-Driven Development
        Zohreh Mafi Mirian Mirian
        Test-Driven Development (TDD) is one of the test-first software production methods, in which the production of each component of the code begins with writing its test case. This method has attracted attention due to many advantages, including readable, regular, and short code, increased quality, productivity, and reliability, and the possibility of regression testing thanks to a comprehensive set of unit tests. The large number of unit test cases produced in this method is considered a strength that increases the reliability of the code; however, the repeated execution of test cases increases the duration of regression testing. The purpose of this article is to present an algorithm for selecting test cases that reduces regression test time in TDD. Various ideas have been proposed for selecting test cases and reducing regression test time, most of them based on the programming language and the software production method. The idea presented in this article is based on the program-difference method and the nature of TDD: meaningful semantic and structural connections are created between unit tests and code blocks, and test cases are selected based on these relationships.
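        A pure-Python sketch of the selection idea: keep a map from code blocks to the unit tests that exercise them, diff two versions, and select only the tests linked to changed blocks; all names here are illustrative:

        links = {                       # code block -> tests exercising it
            "Account.deposit": {"test_deposit", "test_balance"},
            "Account.withdraw": {"test_withdraw", "test_balance"},
            "Report.render": {"test_render"},
        }

        def changed_blocks(old_src, new_src):
            """Toy diff: a block 'changed' if its source text differs."""
            return {b for b in old_src if old_src[b] != new_src.get(b)}

        old = {"Account.deposit": "v1", "Account.withdraw": "v1", "Report.render": "v1"}
        new = {"Account.deposit": "v1", "Account.withdraw": "v2", "Report.render": "v1"}

        selected = set().union(*(links[b] for b in changed_blocks(old, new)))
        print(sorted(selected))         # -> ['test_balance', 'test_withdraw']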
      • Open Access Article

        21 - Regression Test Time Reduction in Test Driven Development
        Zohreh Mafi Mirian Mirian
        Test-Driven Development (TDD) is one of the test-first software production methods, in which the production of each component of the code begins with writing its test case. This method has attracted attention due to many advantages, including readable, regular, and short code, as well as increased quality, productivity, and reliability. The large number of unit test cases produced in this method is considered an advantage (it increases the reliability of the code); however, the repeated execution of test cases increases regression test time. The purpose of this article is to present an algorithm for selecting test cases to reduce regression test time in the TDD method. Various ideas have been proposed for selecting test cases, most of them based on the programming language and the software production method. The idea presented in this article is based on the program-difference method and the nature of TDD, and a tool has been implemented as an Eclipse plugin in Java. The tool consists of five main components: 1) version manager, 2) code segmentation, 3) code change detection (in each version compared to the previous one), 4) semantic connection creation (between unit tests and code blocks), and 5) test case selection.