• List of Articles: Estimation

      • Open Access Article

        1 - Presenting an Initial Estimation Method for Logical Transaction-Based Software Projects
        Mehrdad Shahsavari
        The first and most basic requirement for the successful launch of a project is a realistic and reasonable estimate. In this paper, in order to increase the accuracy of software project estimation and reduce the complexity of the estimation process, we introduce a method called the "Logical Transaction Point (LTP)". Our method is most appropriate for transactional software. Using this method, the size of each use case and of the whole software can be estimated. We show that the method is at least as accurate as the UCP technique and, owing to its greater transparency and simplicity, easier to deploy. The main bases for the method are function point analysis (FPA) and use case point (UCP) estimation.
      • Open Access Article

        2 - Hybrid Fuzzy C-Means Clustering Algorithm and Multilayer Perceptron for Increasing the Estimation Accuracy of Geochemical Element Concentrations; Case Study: Eastern Zone of the Sonajil Porphyry Copper Deposit
        Moharam Jahangiri SeydReza Ghavami Behzad Tokhmechi
        Pattern recognition methods are able to identify hidden relationships between exploration data, especially when the number of data is limited. The geochemical distribution patterns of the elements are identified and generalized using these methods. The multilayer perceptron (MLP) is one of the pattern recognition methods used for estimating geochemical element concentrations in mineral deposit studies. In the current study, a multilayer neural network was used to estimate the concentration of geochemical elements based on 1755 surface and borehole samples analyzed by ICP. The fuzzy c-means (FCM) clustering algorithm was used to increase the neural network estimation accuracy. The optimal number of clusters in the dataset was identified by validation indices and was used to design the estimator. Using the clustered data increased accuracy by 13% on average compared with the unclustered case; the average accuracy rose from 75% to 88%. Elements with the lowest estimation accuracy showed an acceptable gain when clustered data were used. The mean squared error was 0.079 when using all data and decreased to 0.025 with the hybrid method.
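
        The cluster-then-estimate idea described above can be illustrated with a short Python sketch: samples are first grouped by a minimal fuzzy c-means implementation, then one MLP regressor is trained per cluster. The cluster count, MLP hyper-parameters, and FCM settings below are illustrative assumptions, not the authors' configuration.

        ```python
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
            """Minimal fuzzy c-means: returns hard cluster labels and centers."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted means
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / d ** (2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)             # renormalize rows
            return U.argmax(axis=1), centers

        def train_cluster_mlps(X, y, c=3):
            """Cluster the samples first, then fit one MLP estimator per cluster."""
            labels, _ = fuzzy_c_means(X, c)
            models = {}
            for k in range(c):
                mask = labels == k
                models[k] = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                         random_state=0).fit(X[mask], y[mask])
            return models
        ```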
      • Open Access Article

        3 - A Novel Method Based on the COCOMO Model to Increase the Accuracy of Software Project Effort Estimates
        Mahdieh Salari Vahid Khatibi Amid Khatibi
        Estimation is regarded as a crucial task in a software project, and effort estimation in the early stages of software development is thus one of the most important challenges in software project management. Incorrect estimation can lead the project to failure, so accurately estimating software costs is a major task in the efficient development of software projects. Accordingly, two methods are presented in this research for effort estimation in software projects, in which attempts were made to increase accuracy through the analysis of stimuli and the application of metaheuristic algorithms in combination with neural networks. The first method examines the effect of the cuckoo search algorithm in optimizing the estimation coefficients of the COCOMO model, and the second method combines neural networks with the cuckoo search optimization algorithm to increase the accuracy of effort estimation in software development. The results obtained on two real-world datasets demonstrate the efficiency of the proposed methods compared with that of similar methods.
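
        As a rough illustration of how COCOMO coefficients can be tuned against historical data, the sketch below minimizes the mean magnitude of relative error (MMRE) of the basic COCOMO equation effort = a * KLOC^b with a simple random perturbation search. The search is only a stand-in for the cuckoo search algorithm used in the paper, and the project data and starting coefficients are hypothetical.

        ```python
        import numpy as np

        def cocomo_effort(kloc, a, b):
            return a * kloc ** b                    # basic COCOMO: effort in person-months

        def mmre(params, kloc, actual):
            a, b = params
            pred = cocomo_effort(kloc, a, b)
            return np.mean(np.abs(actual - pred) / actual)

        def tune_coefficients(kloc, actual, start=(2.4, 1.05), iters=5000, seed=0):
            """Random perturbation search (stand-in for the cuckoo search metaheuristic)."""
            rng = np.random.default_rng(seed)
            best = np.array(start, dtype=float)
            best_err = mmre(best, kloc, actual)
            for _ in range(iters):
                cand = best + rng.normal(scale=[0.1, 0.02])
                if cand.min() <= 0:
                    continue                        # keep coefficients positive
                err = mmre(cand, kloc, actual)
                if err < best_err:
                    best, best_err = cand, err
            return best, best_err

        # hypothetical historical projects: size in KLOC and actual effort
        kloc = np.array([10.0, 46.0, 120.0, 8.5])
        actual = np.array([30.0, 150.0, 480.0, 22.0])
        print(tune_coefficients(kloc, actual))
        ```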
      • Open Access Article

        4 - High I/Q Imbalance Receiver Compensation and Decision-Directed Frequency-Selective Channel Estimation in an OFDM Receiver Employing a Neural Network
        A. Falahati Sajjad Nasirpour
        The disparity introduced between the in-phase and quadrature components in a digital communication receiver, known as I/Q imbalance, is a prime concern in direct-conversion architectures. It reduces the performance of channel estimation and causes data symbols to be received with errors. Even at low levels, this imbalance can result in very serious signal distortion at the receiver of an OFDM multi-carrier system. In this manuscript, an algorithm based on a neural network scenario is proposed that deploys both long training symbols (LTS) and data symbols to jointly estimate the channel and compensate the parameters that are damaged by the I/Q-imbalanced receiver. In this algorithm, there is a tradeoff between these parameters: when only the minimum CG mean value is required, it can be chosen regardless of the other parameters, but in the usual case the other parameters must also be taken into account and the admissible ranges of the target parameters must be known. The algorithm uses the first iterations to train the system to reach a suitable value of CG without an error floor. In this article, it is assumed that the correlation between subcarriers is low and only a few training and data symbols are used. The simulation results show that the proposed algorithm can compensate high I/Q imbalance values and estimate the channel frequency response more accurately than existing methods.
      • Open Access Article

        5 - Design of Fall Detection System: A Dynamic Pattern Approach with Fuzzy Logic and Motion Estimation
        Khosro Rezaee Javad Haddadnia
        Every year thousands of elderly people suffer serious harm such as joint fractures, broken bones, and even death due to falls. Automatic detection of abnormal walking, especially accidents such as falls among the elderly, based on image processing and computer vision techniques can help develop an efficient system whose implementation in various contexts enables us to monitor people's movements. This paper proposes a new algorithm which, by drawing on fuzzy rules for classifying movements together with motion estimation, allows rapid processing of the input data. At the testing stage, a large number of video frames from the CASIA and CAVIAR databases, as well as samples of elderly falls recorded in Sabzevar's Mother Nursing Home, were used. The results show that the mean absolute percentage error (MAPE), root-mean-square deviation (RMSD), and standard deviation of error (SDE) were at an acceptable level. The main shortcoming of other systems is that the elderly need to wear bulky clothing, and if they forget to do so, they cannot declare their situation at the time of a fall. Compared with similar techniques, the implementation of the proposed system in nursing homes and residential areas allows real-time and intelligent monitoring of people.
      • Open Access Article

        6 - Pose-Invariant Eye Gaze Estimation Using Geometrical Features of Iris and Pupil Images
        Mohammad Reza Mohammadi Abolghasem Asadollah Raie
        In cases of severe paralysis in which a person's ability to control body movements is limited to the muscles around the eyes, eye movements or blinks are the only way for the person to communicate. Interfaces that assist in such communication often require special hardware or rely on active infrared illumination. In this paper, we propose a non-intrusive algorithm for eye gaze estimation that works with video input from an inexpensive camera and without special lighting. The main contribution of this paper is a new geometrical model of the eye region that requires the image of only one iris for gaze estimation. The essential parameters for this system are the best-fitted ellipse of the iris and the pupil center. The algorithms used for both iris ellipse fitting and pupil center localization make no prior assumptions about the head pose. All in all, the achievement of this paper is the robustness of the proposed system to head pose variations. The performance of the method has been evaluated on both synthetic and real images, leading to errors of 2.12 and 3.48 degrees, respectively.
      • Open Access Article

        7 - Parameter Estimation in Hysteretic Systems Based on Adaptive Least-Squares
        Mansour Peimani Mohammad Javad Yazdanpanah Naser Khaji
        In this paper, various identification methods based on the least-squares technique for estimating the unknown parameters of structural systems with hysteresis are investigated. The Bouc-Wen model is used to describe the behavior of hysteretic nonlinear systems. The adaptive versions are based on fixed and variable forgetting factors, and the optimized version is based on an optimized adaptive coefficient matrix. Simulation results show the efficient performance of the proposed technique in identifying and tracking hysteretic structural system parameters compared with other least-squares-based algorithms.
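
        For readers unfamiliar with the adaptive scheme, the core recursion of least squares with a forgetting factor can be sketched as below. The construction of the Bouc-Wen regressor vectors is omitted, and the forgetting factor and initialization values are placeholders rather than the paper's settings.

        ```python
        import numpy as np

        def rls_forgetting(Phi, y, lam=0.98, delta=1e3):
            """Recursive least squares with forgetting factor lam.
            Phi: (N, p) matrix of regressor rows, y: (N,) measurements."""
            p = Phi.shape[1]
            theta = np.zeros(p)                  # parameter estimate
            P = delta * np.eye(p)                # covariance of the estimate
            for phi_row, yk in zip(Phi, y):
                phi = phi_row.reshape(-1, 1)
                K = P @ phi / (lam + float(phi.T @ P @ phi))   # gain vector
                innov = yk - float(phi.T @ theta)              # prediction error
                theta = theta + K.ravel() * innov
                P = (P - K @ phi.T @ P) / lam                  # discount old data
            return theta
        ```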
      • Open Access Article

        8 - Digital Video Stabilization System by Adaptive Fuzzy Kalman Filtering
        Mohammad Javad Tanakian Mehdi Rezaei Farahnaz Mohanna
        Digital video stabilization (DVS) allows acquiring video sequences without disturbing jerkiness by removing unwanted camera movements. A good DVS should remove the unwanted camera movements while maintaining the intentional ones. In this article, we propose a novel DVS algorithm that compensates for camera jitter by applying an adaptive fuzzy filter to the global motion of video frames. The adaptive fuzzy filter is a Kalman filter that is tuned by a fuzzy system adaptively to the camera motion characteristics. The fuzzy system is also tuned during operation according to the amount of camera jitter, and it uses two inputs which are quantitative representations of the unwanted and intentional camera movements. Since motion estimation is a computation-intensive operation, the global motion of video frames is estimated from the block motion vectors produced by the video encoder during its motion estimation operation. Furthermore, the proposed method utilizes an adaptive criterion for filtering and validating motion vectors. Experimental results indicate good performance for the proposed algorithm.
      • Open Access Article

        9 - A Fast and Accurate Sound Source Localization Method using Optimal Combination of SRP and TDOA Methodologies
        Mohammad Ranjkesh Eskolaki Reza Hasanzadeh
        This paper presents an automatic sound source localization approach based on the combination of a basic time-delay estimation method, namely Time Difference of Arrival (TDOA), and Steered Response Power (SRP) methods. The TDOA method is fast but vulnerable when locating sound sources at long distances and in reverberant environments, and it is sensitive to noise; on the other hand, the conventional SRP method is time-consuming but successful in accurately locating sound sources in noisy and reverberant environments. Another SRP-based method, SRP Phase Transform (SRP-PHAT), has also been suggested for better noise robustness and more accurate localization. In this paper, two approaches based on the combination of TDOA and SRP-based methods are proposed for sound source localization. In the first approach, named Classical TDOA-SRP, the TDOA method is used to find the approximate sound source direction, and SRP-based methods are then used to find the accurate location of the sound source within the field of view (FOV) obtained through TDOA. In the second approach, named Optimal TDOA-SRP, a new criterion is proposed for finding the effective FOV obtained through TDOA, in order to further reduce the computational time of the SRP-based methods and improve noise robustness. Experiments carried out under different conditions confirm the validity of the proposed approaches.
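
        The abstract does not spell out which time-delay estimator is used; a common choice for obtaining the TDOA between a microphone pair is generalized cross-correlation with phase transform (GCC-PHAT), sketched below under that assumption.

        ```python
        import numpy as np

        def gcc_phat(sig, ref, fs, max_tau=None):
            """Estimate the time difference of arrival of sig relative to ref (seconds)."""
            n = len(sig) + len(ref)
            S = np.fft.rfft(sig, n=n)
            R = np.fft.rfft(ref, n=n)
            G = S * np.conj(R)
            G /= np.abs(G) + 1e-12             # PHAT weighting: keep phase only
            cc = np.fft.irfft(G, n=n)
            max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
            cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
            shift = np.argmax(np.abs(cc)) - max_shift
            return shift / fs
        ```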
      • Open Access Article

        10 - Nonlinear State Estimation Using Hybrid Robust Cubature Kalman Filter
        Behrooz Safarinejadian Mohsen Taher
        In this paper, a novel filter is provided that estimates the states of a nonlinear system with high accuracy, both in the presence and absence of uncertainty. It is well understood that robust filter design is a compromise between robustness and estimation accuracy. In fact, a robust filter is designed to obtain accurate and suitable performance in the presence of modelling errors, so in the absence of unknown or time-varying uncertainties the robust filter does not provide the desired performance. The new method provided in this paper, named the hybrid robust cubature Kalman filter (CKF), is constructed by combining a traditional CKF and a novel robust CKF. The novel robust CKF is designed by merging a traditional CKF with an uncertainty estimator so that it can provide the desired performance in the presence of uncertainty. Since the presence of uncertainty results in a large innovation value, the hybrid robust CKF adapts itself according to the value of the normalized innovation. The CKF and robust CKF filters are run in parallel and, at any time, a suitable decision is taken to choose the estimated state of either the CKF or the robust CKF as the final state estimate. To validate the performance of the proposed filters, two examples are given that demonstrate their promising performance.
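
        The decision logic between the two filters can be outlined as follows. Here `ckf` and `robust_ckf` are hypothetical filter objects standing in for the cubature Kalman filter and its robust variant, their `update` interface is assumed, and the chi-square-style threshold is an illustrative choice rather than the paper's value.

        ```python
        import numpy as np

        def hybrid_step(ckf, robust_ckf, z, threshold=9.0):
            """Run both filters on measurement z and keep the estimate suggested
            by the normalized innovation of the nominal CKF."""
            x_nom, nu, S = ckf.update(z)          # state, innovation, innovation covariance
            x_rob, _, _ = robust_ckf.update(z)
            d = float(nu.T @ np.linalg.solve(S, nu))   # normalized (Mahalanobis) innovation
            # a large normalized innovation hints at unmodelled uncertainty
            return x_rob if d > threshold else x_nom
        ```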
      • Open Access Article

        11 - Balancing Agility and Stability of Wireless Link Quality Estimators
        Mohammad Javad Tanakian Mehri Mehrjoo
        The performance of many wireless protocols is tied to quick link quality estimation (LQE). However, some wireless applications need the estimation to respond quickly only to persistent changes and to ignore transient changes of the channel, i.e., to be agile and stable, respectively. In this paper, we propose an adaptive fuzzy filter that balances the stability and agility of LQE by mitigating its transient variations. The heart of the fuzzy filter is an exponentially weighted moving average (EWMA) low-pass filter whose smoothing factor is changed dynamically by fuzzy rules. We apply the adaptive fuzzy filter and a non-adaptive one, i.e., an EWMA with a constant smoothing factor, to several types of channels, from short-term to long-term transitive channels. The comparison of the filter outputs shows that the non-adaptive filter is stable for large values of the smoothing factor and agile for small values, while the proposed adaptive filter outperforms the others in balancing agility and stability, measured by the settling time and the coefficient of variation, respectively. Notably, the proposed adaptive fuzzy filter operates in real time and its complexity is low, because it uses a limited number of fuzzy rules and membership functions.
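
        A minimal sketch of the idea is shown below, with a crude deviation-persistence rule standing in for the paper's fuzzy rule base: the weight given to new samples grows only when the deviation between the raw samples and the filtered link-quality value persists, so transient spikes are smoothed while persistent changes are tracked. The thresholds and weights are illustrative assumptions.

        ```python
        import numpy as np

        def adaptive_ewma(samples, alpha_low=0.05, alpha_high=0.6, persist=3):
            """EWMA link-quality estimator with a dynamically switched smoothing factor."""
            est = float(samples[0])
            run = 0                    # consecutive samples deviating from the estimate
            out = []
            for x in samples:
                run = run + 1 if abs(x - est) > 0.1 * max(abs(est), 1e-9) else 0
                alpha = alpha_high if run >= persist else alpha_low  # agile only when change persists
                est = alpha * x + (1 - alpha) * est
                out.append(est)
            return np.array(out)
        ```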
      • Open Access Article

        12 - Body Field: Structured Mean Field with Human Body Skeleton Model and Shifted Gaussian Edge Potentials
        Sara Ershadi-Nasab Shohreh Kasaei Esmaeil Sanaei Erfan Noury Hassan Hafez-kolahi
        An efficient method for simultaneous human body part segmentation and pose estimation is introduced. A conditional random field with a fully connected graphical model is used. Possible node (image pixel) labels comprise the human body parts and the background. In the human body skeleton model, the spatial dependencies among body parts are encoded in the definition of pairwise energy functions according to the conditional random field. Proper pairwise edge potentials between image pixels are defined according to the presence or absence of human body parts that are near each other. Various Gaussian kernels in position, color, and histogram of oriented gradients spaces are used for defining the pairwise energy terms. Shifted Gaussian kernels are defined between each pair of body parts that are connected according to the human body skeleton model. As shifted Gaussian kernels impose a high computational cost on inference, an efficient inference process is proposed via a mean field approximation method that uses high-dimensional shifted Gaussian filtering. The experimental results, evaluated on the challenging KTH Football, Leeds Sports Pose, HumanEva, and Penn-Fudan datasets, show that the proposed method increases the per-pixel accuracy measure for human body part segmentation and also improves the probability of correct parts metric for human body joint locations.
      • Open Access Article

        13 - A Novel Effort Estimation Approach for Migration of SOA Applications to Microservices
        Vinay Raj Sadam Ravichandra
        The popularity of the microservices architecture is growing rapidly, as it eases the design of enterprise applications by allowing independent development and deployment of services. Due to this paradigm shift in software development, many existing Service Oriented Architecture (SOA) applications are being migrated to microservices. Estimating the effort required for migration is a key challenge, as it helps architects in better planning and execution of the migration process. Since the design style and deployment environments differ for each service, existing effort estimation models in the literature are not ideal for the microservices architecture. To estimate the effort required for migrating an SOA application to microservices, we propose a new effort estimation model called Service Points. We define a formal model called the service graph, which represents the components of service-based architectures and the interactions among the services; the service graph provides the information required for the estimation process. We recast the use case points method and remodel it to suit the microservices architecture, updating the technical and environmental factors used for effort estimation. The proposed approach is demonstrated by estimating the migration effort for a standard SOA-based web application. The proposed model is compatible with the design principles of microservices, provides a systematic and formal way of estimating effort, and helps software architects in better planning and execution of the migration process.
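
        Since Service Points recasts the use case points (UCP) method, a hedged sketch of a UCP-style computation is shown below. The weights, factor formulas, and productivity constant follow the classical UCP method and stand in for the paper's adapted technical and environmental factors; the example counts are hypothetical.

        ```python
        def ucp_style_effort(simple, average, complex_, actors_weighted,
                             tcf_score, ecf_score, hours_per_point=20):
            """Classical UCP-style effort estimate (person-hours).
            simple/average/complex_: counts of services by complexity."""
            uucw = 5 * simple + 10 * average + 15 * complex_   # unadjusted service weight
            uucp = uucw + actors_weighted                      # add actor/consumer weight
            tcf = 0.6 + 0.01 * tcf_score                       # technical complexity factor
            ecf = 1.4 - 0.03 * ecf_score                       # environmental complexity factor
            points = uucp * tcf * ecf
            return points, points * hours_per_point

        points, hours = ucp_style_effort(simple=4, average=6, complex_=2,
                                         actors_weighted=9, tcf_score=30, ecf_score=18)
        print(f"{points:.1f} points, {hours:.0f} person-hours")
        ```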
      • Open Access Article

        14 - Explaining the Role of Insurance in Developing Financial Institutions and Economic Growth in Selected Countries Using Dynamic Panel Data Regression Methods and Generalized Method of Moments (GMM) Estimation
        Mohammad Nasr Isfahani Teimour Mohammadi
        One of the key factors in countries' planning to reach stable long-term economic growth is the existence of a proper financial market. Based on the economic literature, efficient financial markets can positively affect a nation's economic growth. Furthermore, investigating the effects of insurance, which under a favorable financial market can have direct and indirect influences on economic growth, is crucial for scholars. The main purpose of this study is to find out the effects of insurance and financial structure on the economic growth of 40 selected countries using a panel data approach over the period 2000-2012. The results reveal that financial structure has a significant effect in some cases, while in others its impact is insignificant. Also, the financial development index has a negligible negative effect on the economic growth of advanced countries, and likewise has a negative impact on economic growth in MENA countries; moreover, its coefficient varies across groups of countries. Furthermore, the findings depict a positive and significant effect of insurance on economic growth in advanced economies, while it negatively affects economic growth in American nations.
      • Open Access Article

        15 - Speed Estimation and Sensorless Torque Optimization of Single Phase Induction Motor
        S. Vaez-Zadeh A. Payman
        Recently, performance improvement and speed control of single-phase induction motors (SPIMs) have received attention. These aims require knowledge of the machine speed. In this paper, a method is proposed to estimate the SPIM speed, and its application to torque optimization of the machine is then investigated. For this purpose, the motor speed is obtained in terms of the motor parameters and the stator flux linkage components by using the SPIM equations in the stationary reference frame. By obtaining the flux linkage from the motor winding voltages and currents, the motor speed is estimated satisfactorily. The estimated speed is then used to increase the average torque, decrease the torque pulsation, and optimize the motor torque. Finally, the simulation results obtained using the real speed are compared with those obtained using the estimated speed; the low simulation error proves the validity of the proposed method.
      • Open Access Article

        16 - 3D Model Reconstruction by Silhouette, Stereo and Motion Features Fusion
        H. Ghassemian H. Ebrahimnezhadi
        In this paper we propose a new approach to reconstruct the three-dimensional model of an object using multi-camera silhouettes over time. The main idea of this work is to reduce the current bottlenecks of three-dimensional model reconstruction, including: ambiguous stereo matching in low-contrast regions; inexact color adjustment between cameras, which raises the matching uncertainty; shading and inconsistency of intensity due to motion and a varying light angle, which raise the motion estimation error; and the high dependency of the silhouette method on the number of cameras. We propose a novel scheme to combine three popular methods, i.e., stereo matching, motion, and silhouette. The novelties of this work include: region growing over neighborhoods with low color difference to increase the quality of the background removal process; robust feature-based stereo matching of multi-camera images to find the exact locations of sparse singular points belonging to the surface of the object; and singular-point matching to robustly estimate the motion parameters in the next frame. We also propose a hierarchical cone intersection method to extract the bounding-edge visual hull from all the silhouettes captured by virtual cameras over time.
      • Open Access Article

        17 - Array Processing Based on GARCH Model
        H. Amiri H. Amindavar M. Kamarei
        In this paper, we propose a new model for additive noise based on GARCH time series in array signal processing. For reasons such as implementation complexity and computational problems, the probability distribution function of additive noise is usually assumed to be Gaussian. In various applications, however, scrutiny and measurement of noise show that it can sometimes be significantly non-Gaussian, and thus methods based on Gaussian noise degrade under actual conditions. A heavy-tailed probability density function (PDF) and time-varying statistical characteristics (e.g., variance) are the main features of the additive noise process. On the other hand, the GARCH process has important properties such as a heavy-tailed PDF (excess kurtosis) and volatility modeling through a feedback mechanism on the conditional variance, so the GARCH model is a good candidate for the additive noise model in array processing applications. In this paper, we propose a new method based on GARCH using the maximum likelihood approach in array processing and verify the performance of this approach in estimating the directions of arrival of sources against other methods and the Cramer-Rao bound.
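
        To make the noise model concrete, the sketch below simulates a GARCH(1,1) process, whose conditional variance is fed back from past squared innovations; the coefficients are illustrative values, not those fitted in the paper.

        ```python
        import numpy as np

        def simulate_garch11(n, omega=0.1, alpha=0.3, beta=0.6, seed=0):
            """GARCH(1,1): sigma2[t] = omega + alpha*x[t-1]**2 + beta*sigma2[t-1]."""
            rng = np.random.default_rng(seed)
            x = np.zeros(n)
            sigma2 = np.full(n, omega / (1.0 - alpha - beta))   # unconditional variance
            for t in range(1, n):
                sigma2[t] = omega + alpha * x[t - 1] ** 2 + beta * sigma2[t - 1]
                x[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
            return x, sigma2
        ```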
      • Open Access Article

        18 - Model Reference Adaptive Control Design for a Teleoperation System with Output Prediction
        K. Hosseini-Sunny H. R. Momeni F. Janabi-Sharifi
        In this paper a new control scheme is proposed to ensure the stability and performance of teleoperation systems while a wide range of transmission-line time delays is allowed. To this end, the time delay is estimated and used to predict the plant output. A model reference adaptive controller (MRAC) is designed for the master site using the predicted output of the plant. The proposed control system shows good stability and force tracking performance. For the slave site, an independent MRAC is designed, and it is shown that good tracking of the position and velocity signals is achieved.
      • Open Access Article

        19 - Radar Detection in Gaussian Clutter Using Bayesian Estimation of Target
        M. F. Sabahi M. Modarres Hashemi A. Sheikhi
        In many detection problems, the received signal models under the two hypotheses, H0 and H1, are the same except that some model parameters have fixed values under H0. Such models are called nested models. One of the most important examples is the detection of a target with unknown amplitude in clutter. In this problem, one can assume similar models for the received signals under H0 and H1, except that the target amplitude is assumed to be zero under H0. If the Bayesian approach is used for treating the unknown parameters, it can be shown that the likelihood ratio can be calculated as the ratio of the posterior and prior probabilities of the unknown parameters. Using this method, a new detector for detection in Gaussian clutter is presented in this paper. Simulation results show that the proposed detector performs much better than conventional GLRT detectors. It is also shown that a CFAR property is achieved provided that small modifications are made to the decision rule.
      • Open Access Article

        20 - A New Approach to Compress Multicarrier Phase-Coded Radar Signals
        R. Mohseni A. Sheikhi M. A. Masnadi Shirazi
        Multicarrier phase-coded signals have recently been introduced to achieve high range resolution in radar systems. As in single-carrier phase-coded radars, the common method for compressing these signals is to use a matched filter or to compute the autocorrelation function directly. In this paper we propose a new method based on the fast Fourier transform (FFT) with a lower computational load than the traditional approach. Furthermore, based on this new approach, a method for estimating the communication channel is introduced that can be used to improve detection performance and target position estimation in tracking mode.
      • Open Access Article

        21 - A New Combined Strategy for Estimation of Individual Power Quality Parameters Using Adaptive Neural Network
        H. R. Mohammadi A. Yazdian Varjani H. Mokhtari
        With the increase in power quality problems and the growing use of devices sensitive to such problems, power quality enhancement has become a serious concern. Series, shunt, and combined compensators can be used for compensation of voltage, current, or both. One of the most important stages in precise and optimized compensation of power quality parameters is the fast and accurate estimation of the individual parameters. In this paper, a new combined strategy based on a unified adaptive estimator is proposed which is capable of detecting and accurately estimating individual power quality parameters. In comparison with other estimation methods, the proposed method has a simple structure, low computational cost, and high precision. Therefore, it can be used for on-line applications such as selective compensation in series and shunt active power filters and unified power quality conditioners. The distinctive properties of the proposed strategy are shown by simulation results under transient and steady-state conditions.
      • Open Access Article

        22 - Experimental Modeling of Two-Dimensional Systems with ARMA Structure
        M. Sadabadi M. Shafiee M. Karrari
        In this paper, experimental modeling of two-dimensional discrete systems with ARMA structure is considered, and the two-dimensional model order selection and parameter estimation problems are addressed. The proposed method shows that information on the AR and MA orders is implicitly contained in two different correlation matrices, so the AR and MA orders of the 2-D ARMA model can be determined independently before parameter estimation. The two-dimensional model is assumed to be causal, stable, linear, and spatially shift-invariant with quarter-plane (QP) support. Numerical simulations are presented to show the good performance and effectiveness of the proposed method on two-dimensional discrete systems with ARMA structure.
      • Open Access Article

        23 - A Contrast Independent Algorithm for Binarization of Document Images
        M. Valizadeh E. Kabir
        In this paper, we present a contrast-independent algorithm for the binarization of degraded document images. The proposed algorithm does not require any parameter setting by the user; therefore, it can handle document images with variable foreground and background intensities as well as low-contrast documents. The proposed algorithm involves three consecutive stages. In the first stage, independent of the contrast between foreground and background, sensible parts of each character are extracted using a modified water flow model, which is designed for extracting the sensible part of each character and resolves the drawbacks of the original water flow model. In the second stage, the gray levels of the foreground are estimated using the extracted text pixels, and the gray levels of the background are locally estimated by averaging the original image. In the third stage, for each pixel of the image, the average of the estimated foreground and background gray levels is taken as the local threshold. In extensive experiments, the proposed binarization algorithm demonstrates superior performance compared with conventional binarization algorithms on a set of degraded document images captured with a camera, and efficiently extracts low-contrast text.
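
        The third stage, thresholding each pixel at the average of the locally estimated foreground and background gray levels, can be sketched as below; the window size and the way the foreground level is supplied are illustrative assumptions, and the first two stages (text-pixel extraction) are not shown.

        ```python
        import numpy as np
        from scipy.ndimage import uniform_filter

        def binarize(gray, fg_level, window=31):
            """gray: 2-D float image, fg_level: estimated foreground gray level(s).
            The background is estimated by local averaging of the original image."""
            bg = uniform_filter(gray.astype(float), size=window)   # local background estimate
            threshold = (bg + fg_level) / 2.0                      # midway local threshold
            return (gray < threshold).astype(np.uint8)             # dark text -> 1
        ```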
      • Open Access Article

        24 - A New Hardware Method for Direction Estimation in Fingerprint Images
        E. Alibeigi S. Samavi Z. Rahmani
        One of the main identity authentication methods is the use of fingerprints, and fingerprint analysis is the most popular biometric method. Most automatic fingerprint systems are based on minutiae matching; therefore, extraction of minutiae is a critical stage in the design of fingerprint authentication systems. Computing the direction of lines in fingerprints is a stage that affects the quality of the extracted minutiae. The existing algorithms require complex and time-consuming computations and are software-based. This paper presents a hardware implementation that improves on current methods. The presented method is based on a pipeline architecture and has proved to perform efficiently.
      • Open Access Article

        25 - A Likelihood Ratio Approach to Information Fusion for Image-Based Fingerprint Verification
        M. S. Helfroush M. Mohammadpour
        Image-based fingerprint verification systems have been considered a parallel approach to the minutiae-based one. This paper proposes a training-based fusion method for fingerprint verification using the likelihood ratio (LR). In this method, the matching scores extracted from orientation, spectral, and textural features are fused by means of the likelihood ratio approach. The FVC2000 database has been selected to evaluate the method, and the proposed method has been compared with a similar one that uses a simple sum as its fusion rule. The comparison results show that the proposed fusion method yields a significant improvement in the accuracy of the matching system, reducing the equal error rate (EER) of the proposed system to 0.14%.
      • Open Access Article

        26 - High Rate Shared Secret Key Generation Using the Phase Estimation of MIMO Fading Channel and Multilevel Quantization
        V. Zeinali Fathabadi H. Khaleghi Bizaki A. Shahzadi
        Much attention has recently been paid to methods of shared secret key generation that exploit the random characteristics of the amplitude and phase of a received signal and the symmetry of the common channel in wireless communication systems. Protocols based on the phase of a received signal, owing to the uniformly distributed phase of the fading channel, are suitable in both static and dynamic environments, and they have a key generation rate (KGR) higher than protocols based on received signal strength (RSS). In addition, previous works have generally focused on key generation protocols for single-antenna (SISO) systems, but these have not produced a significant KGR. Thus, in this paper, received-signal phase estimates in multiple-antenna (MIMO) systems are used to increase the randomness and the key generation rate, because such systems have the potential to offer more random variables for key generation than SISO systems. The simulation results show that the KGR of the proposed protocol is 4 and 9 times that of a SISO system when the numbers of transmitter and receiver antennas are equal to 2 and 3, respectively. Also, the key generation rate increases considerably when the secret key bits are extracted using multilevel quantization.
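
        The multilevel quantization step, mapping each estimated channel phase in [0, 2π) to b key bits, can be sketched as follows. Gray coding of the sectors and the bit width chosen here are common choices rather than details taken from the paper.

        ```python
        import numpy as np

        def phase_to_bits(phases, bits_per_sample=2):
            """Quantize phases (radians) into 2**b uniform sectors and emit Gray-coded bits."""
            levels = 2 ** bits_per_sample
            idx = np.floor((np.mod(phases, 2 * np.pi) / (2 * np.pi)) * levels).astype(int)
            idx = np.clip(idx, 0, levels - 1)
            gray = idx ^ (idx >> 1)                 # adjacent sectors differ in one bit
            return [((g >> k) & 1) for g in gray for k in reversed(range(bits_per_sample))]

        key_bits = phase_to_bits(np.array([0.1, 1.7, 3.5, 5.9]), bits_per_sample=2)
        print(key_bits)
        ```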
      • Open Access Article

        27 - Particle Filter with Adaptive Observation Model
        H. Haeri H. Sadoghi Yazdi
        The particle filter is an effective tool for the object tracking problem. However, obtaining an accurate model for the system state and the observations is an essential requirement; therefore, one area of interest for researchers is estimating the observation function from training data. The observation function can be linear or nonlinear. The existing methods for estimating the observation function face problems such as: 1) dependency on the initial parameter values in expectation-maximization based methods, and 2) the need for a set of predefined models in multiple-model based methods. In this paper, a new unsupervised method based on kernel adaptive filters is presented to overcome the above-mentioned problems. To do so, least mean squares / recursive least squares adaptive filters are used to estimate the nonlinear observation function: given the known process function and a sequence of observations, the unknown observation function is estimated. Moreover, to accelerate the algorithm and reduce the computational cost, a sparsification method based on approximate linear dependency is used. The proposed method is evaluated in two applications: time series forecasting and tracking objects in video. Results demonstrate the superiority of the proposed method compared with existing algorithms.
      • Open Access Article

        28 - A New State Estimator in Distribution Systems
        S. Sabzebin F. Karbalaei
        Due to the lack of measurements in distribution systems, state estimation has particular importance, and different methods have been presented to improve the accuracy of the system state with limited measurements. In this paper, a new state estimator for distribution systems is offered. This estimator, based on backward-forward sweep load flow, estimates the system state by adjusting the load consumption at each step. Voltage measurements at the slack bus, loads, and zero-injection measurements are the inputs of the estimator. The estimator is compared with the weighted least squares (WLS) estimator and the results are shown: it calculates voltage magnitudes with less error and is also faster than the WLS estimator. An 85-bus system is used in this paper.
      • Open Access Article

        29 - The Effect of MIMO Channel Estimator in the Precoder Design of Wireless Sensor Networks
        H. Rostami A. Falahati
        One of the most important applications of wireless sensor networks is to estimate an unknown phenomenon. The cooperative activity of wireless sensors and the information scattered over the sensor nodes of the network are used for decentralized estimation. Precoder design is performed at the sensor nodes in order to provide an optimal estimate of the actual value, and it is formulated as an optimization problem. Since the links in wireless sensor networks are wireless channels, the assumption of access to full channel state information is not realistic in such networks. Because perfect channel state information is required in the precoder design process, the effects of channel estimation on the precoder design process are investigated. For channel estimation, the channel is estimated using the known training sequence method with LS and MMSE criteria. Since power restriction is a key issue in wireless sensor networks, the power constraint is considered in both the channel estimation and the precoder design problem in this study.
      • Open Access Article

        30 - Improving the Power System State Estimation Algorithm Based on the PMUs Placement and Voltage Angle Relationships
        A. R. Sedighi M. Sayaf M. R. Taban
        To optimize the operation of power systems, monitoring of the network state variables is important, because these variables play an important role in improving economic efficiency, increasing network reliability, and analyzing the system status. Therefore, state estimation algorithms have been used to determine an accurate estimate of the state variables with limited measurements. Since modern measuring devices such as PMUs are able to measure the bus voltage angle in addition to electrical quantities, a new method is proposed in this paper to obtain a more accurate estimate of all network variables. The proposed algorithm determines the number and locations of the measuring devices (PMUs) in such a way that the state variables and electrical quantities can be obtained with the most accurate estimate. The increased accuracy of the state estimation calculations is due to using the derivative equations of the bus voltage angles along with the state estimation relations. Finally, the state estimation is computed using the weighted least squares (WLS) method. The calculations, performed on the IEEE 14-bus network, are carried out using MATLAB and MATPOWER. The results show that the proposed method succeeds in increasing the accuracy of the estimated state variables while reducing the number of PMUs and determining their proper locations.
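
        The WLS estimator mentioned above iterates the standard Gauss-Newton normal equations; a generic sketch follows, in which the measurement function h and its Jacobian are supplied by the caller (for a real network they would come from the power-flow equations, which are not reproduced here).

        ```python
        import numpy as np

        def wls_state_estimation(z, h, jac, x0, weights, iters=20, tol=1e-6):
            """Weighted least squares: minimize (z - h(x))' W (z - h(x))."""
            x = np.array(x0, dtype=float)
            W = np.diag(weights)
            for _ in range(iters):
                r = z - h(x)                       # measurement residual
                H = jac(x)                         # measurement Jacobian
                G = H.T @ W @ H                    # gain matrix
                dx = np.linalg.solve(G, H.T @ W @ r)
                x += dx
                if np.linalg.norm(dx) < tol:
                    break
            return x
        ```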
      • Open Access Article

        31 - A Pattern-Matching Method for Estimating WCET of Multi-Path Monotonic Loops
        Mehdi Sakhaei-Nia S. Parsa
        Pattern matching is one of the methods proposed for estimating the WCET of loops. If a loop matches the proposed pattern, its number of iterations is calculated using an equation; deriving the counter values for all iterations is thus avoided. A shortcoming of pattern matching methods is their excessive dependence on patterns: they depend on the location, frequency, and manner of change of the counter value, as well as on the structure and placement of the counter test. In order to reduce this dependence, the loop flow can be modeled by two sets of symbolic expressions indicating the iteration conditions and the changes in the counter values. Based on these expressions, the number of possible values that can be assigned to the loop control variables during loop execution is computed as the worst-case estimate of the number of loop iterations. However, the estimate produced by this method is greater than the actual value, i.e., there is overestimation. In this paper, variables whose values are equal on the different paths, with that value counted as one iteration, are detected and taken into account in the estimation; this reduces the overestimation. The evaluations show that the proposed method is effective and efficient and has less overestimation.
      • Open Access Article

        32 - Human Action Recognition in Still Images of Human Pose Using a Multi-Stream Neural Network
        Roghayeh Yousefi K. Faez
        Today, human action recognition in still images has become one of the active topics in computer vision and pattern recognition. The focus is on identifying human action or behavior in a single static image. Unlike traditional methods that use videos or sequences of images, still images do not involve temporal information; therefore, still-image-based action recognition is more challenging than video-based recognition. Given the importance of motion information in action recognition, the Im2Flow method has been used to estimate motion information from a static image. To do this, three deep neural networks are combined into a three-stream neural network, which is the structure proposed in this paper. The first, second, and third networks are trained on the raw color image, the optical flow predicted from the image, and the human pose obtained from the image, respectively. In other words, in addition to the predicted spatial and temporal information, human pose information is also used for action recognition owing to its importance for recognition performance. Results reveal that the introduced three-stream neural network can improve the accuracy of human action recognition. The accuracy of the proposed method on the Willow7 Action, PASCAL VOC 2012, and Stanford10 datasets was 91.8%, 91.02%, and 96.97%, respectively, which indicates promising performance compared with the state of the art.
      • Open Access Article

        33 - Grid Impedance Estimation of Low Voltage Grids Using Signal Processing Techniques for Frequency Range of 2 kHz – 150 kHz
        M. M. AlyanNezhadi H. Hassanpour F. Zare
        In this paper, the impedance of low voltage grids in the frequency range of 2 kHz - 150 kHz is estimated using rectangular pulse injections and signal processing techniques. The grid impedance is defined as the ratio of the voltage signal to the current signal in the frequency domain. In noisy conditions, the accuracy of the impedance estimation directly depends on the energy of the injected signal, so the injected signal must have sufficient energy in the frequency range of interest for an accurate estimate. In the proposed method, several injection signals with different widths are selected using a genetic algorithm. The grid responses to the injected signals are measured and then denoised for accurate impedance estimation. When the measurement duration is too short, the whole transient state of the grid is not captured and the impedance estimate is inaccurate; therefore, a method is also proposed for determining the best measurement duration using time-frequency distributions. The proposed method is applied to several simulated grids, and the results show its capability and accuracy in grid impedance estimation.
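
        The impedance definition used above, Z(f) = V(f)/I(f), can be evaluated directly from the measured grid response to an injected pulse. The sketch below assumes a sampled voltage/current pair is already available and leaves out the denoising and pulse-width selection steps described in the abstract.

        ```python
        import numpy as np

        def grid_impedance(v, i, fs, f_lo=2e3, f_hi=150e3):
            """Estimate Z(f) = V(f)/I(f) from sampled voltage v and current i (same length)."""
            n = len(v)
            V = np.fft.rfft(v)
            I = np.fft.rfft(i)
            f = np.fft.rfftfreq(n, d=1.0 / fs)
            band = (f >= f_lo) & (f <= f_hi)        # keep the 2 kHz - 150 kHz range
            Z = V[band] / (I[band] + 1e-15)
            return f[band], Z
        ```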
      • Open Access Article

        34 - Parity Check Matrix Estimation of k/n Convolutional Coding in Noisy Environment Based on Walsh-Hadamard Transform
        Mohammad Khaksar H. Khaleghi Bizaki
        Blind estimation of physical-layer transmission parameters is one of the challenges for smart radios adapting themselves to network standards. These parameters include the transmission rate, the modulation, and the coding scheme used to combat channel errors. Therefore, channel coding estimation, including estimation of the code parameters, parity check matrix, and generator matrix, is one of the interesting research topics in the context of software radios. Algebraic methods, such as Euclidean and rank-based methods, are usually applied to the intercepted received sequence to estimate the code; their main drawback is poor efficiency in high error probability environments. Transform-based methods, such as those based on the Walsh-Hadamard transform, can also solve the channel coding estimation problem. In this paper, a new algorithm based on the Walsh-Hadamard transform is proposed that can reconstruct the parity check matrix of a convolutional code with general rate k/n in high error probability environments (BER > 0.07), with much better performance than other methods. The algorithm exploits algebraic properties of the convolutional code to form n-k systems of equations for estimating the n-k rows of the parity check matrix and then uses the Walsh-Hadamard transform to solve these equations. Simulation results verify the excellent performance of the proposed algorithm in high error probability environments compared with other approaches.
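
        The core tool, the Walsh-Hadamard transform, has an in-place fast implementation analogous to the FFT butterfly; a minimal version is sketched below. The surrounding equation-forming and row-recovery logic of the proposed algorithm is not shown.

        ```python
        def fwht(a):
            """Fast Walsh-Hadamard transform of a sequence whose length is a power of two."""
            a = list(a)
            h = 1
            while h < len(a):
                for i in range(0, len(a), 2 * h):
                    for j in range(i, i + h):
                        a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
                h *= 2
            return a

        print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # length-8 example
        ```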
      • Open Access Article

        35 - State Estimation of Nonlinear Systems Using Gaussian-Sum Cubature Kalman Filter Based-on Spherical Simplex-Radial Rule
        Mohammad Amin Ahmadpour Kahkak Behrooz Safarinejadian
        In this paper, a new Gaussian sum filter algorithm for state estimation of nonlinear systems is presented. The proposed method consists of several parallel cubature Kalman filters, each of which is implemented according to the spherical simplex-radial rule. In this method, the probability density function is a weighted sum of several Gaussian functions; the mean, covariance, and weight coefficients of these Gaussian functions are calculated recursively over time, and each cubature Kalman filter is responsible for updating one of them. Finally, the performance of the proposed filter is investigated using two nonlinear state estimation problems, and the results are compared with conventional nonlinear filters. The simulation results show the good accuracy of the proposed algorithm in state estimation of nonlinear systems.
      • Open Access Article

        36 - An Intelligent Approach for OFDM Channel Estimation Using Gravitational Search Algorithm
        F. Salehi Mohammad Hassan Majidi N. Neda
        The abundant benefits of orthogonal frequency-division multiplexing (OFDM) and its high flexibility have resulted in its widespread use in many telecommunication standards. One important factor for improving the efficiency of a wireless system is accurate estimation of the channel state information (CSI). Many techniques have been studied in the literature for estimating the CSI. Techniques based on intelligent algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO) have recently attracted the attention of researchers: with a very low pilot overhead, they can estimate the channel frequency response (CFR) properly using only the received signals. Unfortunately, these techniques share a common weakness, namely a slow convergence rate. In this paper, a new intelligent method for channel estimation based on the gravitational search algorithm (GSA) is presented. This method achieves accurate channel estimation with moderate computational complexity in comparison with GA and PSO estimators. Furthermore, thanks to its higher convergence rate, the proposed method provides the same performance as GA and PSO. For a two-path fast-fading channel, simulation results demonstrate the robustness of the proposed scheme in terms of bit error rate (BER) and mean square error (MSE).
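        A minimal sketch of how a population-based search such as GSA can be wired to pilot-aided channel estimation: agents encode candidate channel impulse response taps and are scored by the least-squares error at the pilot subcarriers. The OFDM sizes, pilot grid, number of taps, and all GSA constants (population, G0, alpha, iteration count) are illustrative assumptions, and convergence quality depends on this tuning; it is not the paper's tuned estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- toy OFDM setup (illustrative sizes) --------------------------------------
N_fft, taps = 64, 2
h_true = (rng.standard_normal(taps) + 1j * rng.standard_normal(taps)) / np.sqrt(2)
pilots = np.arange(0, N_fft, 8)                                  # sparse pilot grid
Xp = np.exp(1j * np.pi * rng.integers(0, 4, len(pilots)) / 2)    # QPSK pilots
Yp = np.fft.fft(h_true, N_fft)[pilots] * Xp + 0.05 * (
    rng.standard_normal(len(pilots)) + 1j * rng.standard_normal(len(pilots)))

def cost(theta):
    """LS cost at pilot subcarriers; theta = [Re(h), Im(h)] of the CIR taps."""
    h = theta[:taps] + 1j * theta[taps:]
    return np.sum(np.abs(Yp - np.fft.fft(h, N_fft)[pilots] * Xp) ** 2)

# --- minimal gravitational search algorithm (GSA) -----------------------------
n_agents, dim, iters, G0, alpha = 30, 2 * taps, 200, 100.0, 20.0
pos = rng.uniform(-2, 2, (n_agents, dim))
vel = np.zeros_like(pos)
best_theta, best_cost = pos[0].copy(), np.inf

for t in range(iters):
    fit = np.array([cost(p) for p in pos])
    i_best = np.argmin(fit)
    if fit[i_best] < best_cost:
        best_cost, best_theta = fit[i_best], pos[i_best].copy()
    m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)   # better fitness -> larger mass
    M = m / (m.sum() + 1e-12)
    G = G0 * np.exp(-alpha * t / iters)                       # decaying gravitational constant
    acc = np.zeros_like(pos)
    for i in range(n_agents):
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1) + 1e-9
        acc[i] = np.sum(rng.random((n_agents, 1)) * G * (M[:, None] / dist[:, None]) * diff, axis=0)
    vel = rng.random((n_agents, dim)) * vel + acc
    pos = pos + vel

h_est = best_theta[:taps] + 1j * best_theta[taps:]
print("final LS cost:", round(best_cost, 4))
print("true taps:", np.round(h_true, 3), "\nest. taps:", np.round(h_est, 3))
```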
      • Open Access Article

        37 - Improving Age Estimation of Dental Panoramic Images Based on Image Contrast Correction by Spatial Entropy Method
        Masoume Mohseni Hussain Montazery Kordy Mehdi Ezoji
        In forensic dentistry, age is estimated using dental radiographs. Our goal is to automate these steps using image processing and pattern recognition techniques. From a dental radiograph, the tooth contour is extracted and features such as the apex, width, and tooth length are determined and used to estimate age. Improving the quality of radiographic images is an important step in contour extraction and age estimation. In this article, the aim is to improve image quality in order to extract the appropriate region and obtain a proper segmentation of the tooth, which in turn enables better age estimation. Because of the low quality of radiographic images, and in order to increase the accuracy of extracting the region of interest (ROI) of each tooth, the image is enhanced using spatial entropy based on the spatial distribution of pixel brightness, together with another enhancement method, the Laplacian pyramid. Enhancing the image leads to the extraction of an appropriate ROI and the removal of unwanted areas. The database used in this study consists of 154 adolescent panoramic radiographs, 73 male and 81 female, prepared at Babol University of Medical Sciences. The results show that, using fixed tooth segmentation methods and only applying the proposed enhancement method, the rate of appropriate ROI extraction increased from 66% to 78%, a notable improvement. The extracted ROI is then passed to the segmentation block and the contour is extracted; after contour extraction, age is estimated. The age estimate obtained with the proposed method is closer to the manual age estimate than that of the method that does not use the proposed enhancement algorithm.
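        The paper's spatial-entropy contrast correction is not reproduced here; as a simplified stand-in for the enhancement step, the sketch below applies a one-level Laplacian-pyramid-style (unsharp-masking) boost of the detail layer to a synthetic low-contrast image. The sigma, gain, and random stand-in image are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_enhance(img, sigma=2.0, gain=1.5):
    """One-level Laplacian-pyramid-style enhancement (unsharp masking):
    boost the detail layer (image minus its Gaussian-smoothed version)."""
    img = img.astype(float)
    low = gaussian_filter(img, sigma)      # coarse layer
    detail = img - low                     # Laplacian (detail) layer
    return np.clip(low + gain * detail, 0, 255)

# hypothetical low-contrast radiograph stand-in
rng = np.random.default_rng(0)
img = 100 + 20 * rng.random((256, 256))
enhanced = laplacian_enhance(img)
print("contrast (std) before/after:", img.std().round(2), enhanced.std().round(2))
```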
      • Open Access Article

        38 - Estimating the score of assessment centers based on the concept of risk and interpersonal differences
        Amir Azarfar MohamadMahdi Alishiri Hossein Safari Ali Ebadi
        One of the methods used to assess and evaluate employees is the assessment center. Assessment centers usually have good validity at the tool level, but there are weaknesses at the level of the model used to estimate their final score. This research is designed to provide a method for estimating the final score of assessment centers based on the concept of risk while taking interpersonal differences into account. For this purpose, assessment center data from 800 managers across the country were used. Nine different models were designed and evaluated, and the model with the lowest error rate is presented as the selected model.
      • Open Access Article

        39 - Evaluation of the Effect of Transformer Oil Parameters on the Transformer Health Index Using Curve Estimation Method
        Morteza Saeid Hamed Zeinoddini-Meymand
        Transformers are among the most expensive and important pieces of equipment in power systems and are subject to electrical, thermal, and chemical stresses. The transformer health index is a standard measure used to evaluate the condition and estimate the remaining life of a transformer from laboratory data and field inspections. The purpose of this article is to determine the relationships between the electrical, physical, and chemical parameters of the oil, the gases dissolved in the oil, and the transformer health index. One advantage of using the regression method for analyzing transformer data, compared with other methods of determining the health index, is that it identifies which parameters have the greatest influence on one another. In this article, the curve estimation regression method is used, and the parameters are analyzed through graphs drawn with the SPSS statistical software. The simulations are based on laboratory data from several transformers.
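        Curve estimation regression of the kind run in SPSS fits a small family of standard curve forms to each oil-parameter/health-index pair and ranks them by R². A hedged sketch on synthetic data (the breakdown-voltage/health-index relation and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical data: oil breakdown voltage (kV) vs. transformer health index
x = rng.uniform(20, 80, 60)
health = 20 + 80 / (1 + np.exp(-(x - 45) / 10)) + rng.normal(0, 3, x.size)

def r2(y, yhat):
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# candidate curve forms used in SPSS-style "curve estimation"
models = {
    "linear":      lambda x, y: np.polyval(np.polyfit(x, y, 1), x),
    "quadratic":   lambda x, y: np.polyval(np.polyfit(x, y, 2), x),
    "cubic":       lambda x, y: np.polyval(np.polyfit(x, y, 3), x),
    "logarithmic": lambda x, y: np.polyval(np.polyfit(np.log(x), y, 1), np.log(x)),
    "power":       lambda x, y: np.exp(np.polyval(np.polyfit(np.log(x), np.log(y), 1), np.log(x))),
}

for name, fit in models.items():
    print(f"{name:12s} R^2 = {r2(health, fit(x, health)):.3f}")
```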
      • Open Access Article

        40 - Comparative Study of 5G Signal Attenuation Estimation Models
        Md Anoarul Islam Manabendra Maiti Judhajit Sanyal Quazi Md Alfred
        Wireless networks based on 4G and 5G technology offer users a plethora of options in terms of connectivity and multimedia content. However, such networks are prone to severe signal attenuation and noise in a number of scenarios, so significant research in recent years has focused on establishing robust and accurate attenuation models to estimate channel noise and the resulting signal loss. The challenge is to identify or develop accurate, computationally inexpensive models that can be implemented on available hardware to produce low-error estimates, and to validate these solutions experimentally. The present work surveys some of the most relevant recent work in this domain, with added emphasis on rain attenuation models and machine-learning-based approaches, and offers a perspective on establishing a suitable dynamic signal attenuation model for high-speed wireless communication in outdoor as well as indoor environments, presenting the performance evaluation of an autoregression-based machine learning model. Multiple versions of the model are compared on the basis of root mean square error (RMSE) for different orders of regression polynomials to find the best-fit solution. The accuracy of the proposed technique is then compared, in terms of RMSE, with corresponding moderate- and high-complexity machine learning techniques implementing adaptive spline regression and artificial neural networks, respectively. The proposed method is found to be quite accurate with low complexity, allowing it to be applied in practice in multiple scenarios.
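        The order-selection step described above can be illustrated with a small autoregressive fit: estimate the next attenuation sample from the previous p samples by least squares and compare hold-out RMSE across orders. The synthetic attenuation series, the candidate orders, and the train/test split are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical rain-attenuation time series (dB): slow variation plus noise
t = np.arange(500)
atten = 3 + 2 * np.sin(2 * np.pi * t / 120) + 0.3 * rng.standard_normal(t.size)

def ar_rmse(series, order, train_frac=0.7):
    """Fit an AR(order) model by least squares and report one-step RMSE on a hold-out set."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    split = int(train_frac * len(y))
    coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    pred = X[split:] @ coef
    return np.sqrt(np.mean((y[split:] - pred) ** 2))

for p in (1, 2, 4, 8, 16):
    print(f"AR order {p:2d}: hold-out RMSE = {ar_rmse(atten, p):.3f} dB")
```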
      • Open Access Article

        41 - Integrated Fault Estimation and Fault Tolerant Control Design for Linear Parameter Varying System with Actuator and Sensor Fault
        Hooshang Jafari Amin Ramezani Mehdi Forouzanfar
        Fault occurrence in real operating systems is usually inevitable; it may lead to performance degradation or failure and must be dealt with quickly through appropriate decisions, otherwise it can cause a major catastrophe. This gives rise to a strong demand for enhanced fault-tolerant control that compensates for the destructive effects of faults and increases system reliability and safety in their presence. In this paper, an approach for estimating and controlling simultaneous actuator and sensor faults is presented through the integrated design of fault estimation and fault-tolerant control for linear parameter-varying systems. In this method, an unknown-input-observer-based fault estimation approach is developed together with both state feedback control and sliding mode control to ensure robust stability of the closed-loop system by solving a linear matrix inequality formulation. The presented method has been applied to a linear parameter-varying system, and the simulation results show its effectiveness for fault estimation and system stability.
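        A common building block behind observer-based fault estimation is to augment the state vector with the (slowly varying) fault and estimate both with one observer. The sketch below does this for a sensor fault on a hypothetical two-state plant; for simplicity the observer gain is obtained by pole placement rather than the LMI-based integrated design of the paper, and all matrices and the fault profile are invented.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 2-state plant with one measurement corrupted by a sensor fault f:
#   x_dot = A x + B u,   y = C x + f
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augment the state with the slowly varying sensor fault: f_dot ~ 0
Aa = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 3))]])
Ba = np.vstack([B, [[0.0]]])
Ca = np.hstack([C, [[1.0]]])

# Observer gain by pole placement (the paper derives its gains from LMIs instead)
L = place_poles(Aa.T, Ca.T, [-4.0, -5.0, -6.0]).gain_matrix.T

dt, T = 1e-3, 6.0
x = np.zeros(2)
xhat = np.zeros(3)
for k in range(int(T / dt)):
    t = k * dt
    u = 1.0
    f = 0.5 if t > 2.0 else 0.0                  # step sensor fault at t = 2 s
    y = C @ x + f
    x = x + dt * (A @ x + B.flatten() * u)
    xhat = xhat + dt * (Aa @ xhat + Ba.flatten() * u + L.flatten() * (y - Ca @ xhat))

print("true fault: 0.5   estimated fault: %.3f" % xhat[2])
```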
      • Open Access Article

        42 - Porosity estimation with data fusion approach (Bayesian theory) in wells of Azadegan oil field, Iran
        Atiyeh Mazaheri Tarei Hoseyn Memarian Behzad Tokhmchi Behzad Moshiri
        Porosity is one of the main variables in evaluating the characteristics of an oil field, and petrophysical data are normally used to determine it. Measurements obtained from well logs contain errors and uncertainty, and porosity is influenced by various factors such as temperature, pressure, fluid type, clay content, and the amount of hydrocarbons. One of the best and most practical ways to reduce the uncertainty in porosity estimation is to use various sources of data together with data fusion techniques. Data fusion increases certainty and confidence and reduces risk and error in decision making. In this research, porosity is estimated in 4 wells of the Azadegan oil field with a data fusion method based on Bayesian theory. To check the generalization ability of the method, porosity was also estimated in one additional well of this field. A maximum of 7 input variables were used to estimate porosity in this new approach. The results showed that the data fusion technique is more powerful than traditional techniques for porosity estimation: traditional techniques show correlations of 0.7 to 0.8 with the log data, whereas the data fusion approach achieved a correlation above 0.9.
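        In the simplest Gaussian setting, Bayesian fusion of several independent porosity estimates reduces to precision weighting: the fused precision is the sum of the source precisions and the fused mean is their precision-weighted average. A toy sketch with invented source values and uncertainties (not the Azadegan data):

```python
import numpy as np

# Hypothetical porosity estimates (fraction) from different log-based sources,
# each with its own uncertainty (standard deviation)
estimates = {"neutron": (0.18, 0.03), "density": (0.15, 0.02), "sonic": (0.20, 0.04)}

# Gaussian Bayesian fusion: posterior precision is the sum of source precisions,
# posterior mean is the precision-weighted average of the source means
prec = np.array([1 / s**2 for _, s in estimates.values()])
means = np.array([m for m, _ in estimates.values()])
post_var = 1 / prec.sum()
post_mean = post_var * np.sum(prec * means)

for name, (m, s) in estimates.items():
    print(f"  {name:8s}: {m:.3f} +/- {s:.3f}")
print(f"fused porosity = {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")
```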
      • Open Access Article

        43 - Comparison of the performance of conventional neural networks for estimating porosity in one of the southeastern Iranian oil fields
        Farshad Toffighi Parviz Armani Ali Chehrazi Andisheh Alimoradi
        In the oil industry, artificial intelligence is used to identify relationships and to optimize, estimate, and classify porosity. One of the most important steps in evaluating the petrophysical parameters of a reservoir is characterizing its porosity. The main purpose of this study is to compare the accuracy and generalizability of three networks, the multilayer feed-forward neural network (MLFN), the radial basis function network (RBFN), and the probabilistic neural network (PNN), for estimating porosity from seismic attributes. To this end, data from 7 wells of an offshore oil field in Hindijan, in the northwest of the Persian Gulf basin, were evaluated. Acoustic impedance was estimated using a model-based inversion method, and the above neural networks were then designed using optimal seismic attributes selected by stepwise regression. The results showed that the MLFN model did not perform well for estimating porosity; the PNN has the best accuracy in porosity interpolation, while the RBFN generalizes better.
      • Open Access Article

        44 - Thermal Side Reaction on Lithium-Ion Battery in Fast Charging Mode with Multi-Stage Constant Current and Voltage Control CC-CV Charging Technique
        Sohaib Azhdari Rahmatollah Mirzaei
        Lithium-ion batteries are extensively used in fast charging stations because of their high energy and power density. Knowing how to charge lithium batteries is critical, since their structure is very sensitive to heat. Fast charging generates considerable heat, caused by the ohmic losses of the battery and its internal reactions; while it reduces the charging time, it may therefore damage the battery structure. There are various fast charging methods, each with its advantages and limitations. By modifying the multi-stage constant current charging method, this work attempts both to reduce the charging time and to prevent damage to the battery. The improved method is suited to cases where thermal effects can be removed, for example when a ventilation system is available, and it minimizes the charging time as much as possible.
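        A minimal sketch of a multi-stage CC-CV profile on a crude battery model (linear open-circuit voltage plus constant internal resistance): the charger steps the current down each time the terminal voltage reaches the limit, then holds the voltage while the current decays to a cutoff. Capacity, resistance, stage currents, and limits are all assumed values, not the paper's experiment.

```python
# Simple battery model: OCV linear in state of charge, constant internal resistance
capacity_Ah, R_int = 2.5, 0.05
ocv = lambda soc: 3.4 + 0.8 * soc                 # [V], crude linear OCV (assumed)

# Multi-stage CC followed by CV: step the current down as terminal voltage rises
stages_A = [4.0, 2.5, 1.5]                        # CC stages (decreasing current)
v_max, i_cutoff, dt = 4.2, 0.1, 1.0               # CV limit, end current, 1 s step

soc, t, stage = 0.1, 0.0, 0
while True:
    if stage < len(stages_A):
        i = stages_A[stage]
        v = ocv(soc) + R_int * i
        if v >= v_max:                            # terminal voltage hit the limit:
            stage += 1                            # drop to the next, smaller current
            continue
    else:                                         # CV phase: hold v_max, current decays
        i = (v_max - ocv(soc)) / R_int
        if i <= i_cutoff:
            break
    soc = min(1.0, soc + i * dt / 3600 / capacity_Ah)
    t += dt
    # joule heating in the internal resistance drives the thermal stress: p = R_int * i**2

print(f"charged to SOC {soc:.2f} in {t/60:.1f} min")
```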
      • Open Access Article

        45 - Identification of Transfer Function Parameters of Brushless DC Motor Using Particle Swarm Algorithm
        Ahmad Shirzadi Arash Dehestani Kolagar Mohammad Reza  Alizadeh Pahlavani
        So far, comprehensive and extensive studies have been conducted on the brushless DC (BLDC) motor, and part of this work focuses on estimating the parameters of its transfer function. Estimating the BLDC motor transfer function parameters is essential for studying motor performance and predicting its behavior, so an efficient, accurate, and reliable parameter estimation method is needed. In this article, the problem of estimating the transfer function parameters of the inverter-fed BLDC motor set is solved using the particle swarm optimization (PSO) algorithm. The results of this algorithm are compared with those of other optimization algorithms, and the comparison shows that PSO is an efficient, accurate, and reliable method for solving the transfer function parameter estimation problem.
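        A hedged sketch of the idea: a small PSO fits the gain and time constants of a hypothetical second-order transfer function to a noisy measured step response by minimizing the mean squared output error. The true parameters, noise level, search bounds, and PSO constants are all illustrative, and the two time constants are interchangeable in the fit.

```python
import numpy as np
from scipy.signal import lti

t = np.linspace(0, 0.5, 400)

def step_response(params):
    """Unit step response of K / ((tau1*s + 1)*(tau2*s + 1))."""
    K, tau1, tau2 = params
    den = np.polymul([tau1, 1.0], [tau2, 1.0])
    return lti([K], den).step(T=t)[1]

# "Measured" step response from hypothetical true parameters, plus noise
true = np.array([2.0, 0.05, 0.01])
rng = np.random.default_rng(0)
y_meas = step_response(true) + 0.01 * rng.standard_normal(t.size)

cost = lambda p: np.mean((step_response(p) - y_meas) ** 2)

# Minimal PSO over (K, tau1, tau2)
n_part, iters = 25, 60
lo, hi = np.array([0.1, 1e-3, 1e-3]), np.array([5.0, 0.2, 0.2])
x = rng.uniform(lo, hi, (n_part, 3))
v = np.zeros_like(x)
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, 3))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("true params:", true, "\nestimated  :", np.round(gbest, 4))
```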
      • Open Access Article

        46 - Design of Distributed Consensus Controller for Leader-Follower Singular Multi-Agent Systems in the Presence of Sensor Fault
        Saeid Poormirzaee Hamidreza Ahmadzadeh Masoud Shafiee
        In this paper, the problem of sensor fault estimation and the design of a distributed fault-tolerant controller are investigated, for the first time, to guarantee leader-follower consensus for homogeneous singular multi-agent systems. First, a novel augmented model of the system is proposed and shown to be regular and impulse-free, unlike those in some similar works. Based on this model, the state and the sensor fault of the system are estimated simultaneously by designing a distributed singular observer, which is also able to estimate time-varying sensor faults. Then, a distributed controller is designed to guarantee leader-follower consensus using the estimates of the state and the sensor fault. Sufficient conditions ensuring the stability of the observer dynamics and the consensus dynamics are derived in terms of linear matrix inequalities (LMIs), and the observer and controller gains are computed by solving these conditions with MATLAB. Finally, the validity and efficiency of the proposed control system for leader-follower consensus of singular multi-agent systems exposed to sensor faults are illustrated by computer simulations. The simulation results show that the proposed control strategy handles sensor faults in singular multi-agent systems effectively.
      • Open Access Article

        47 - Evaluation of Interpolation Methods for Estimating the Fading Channels in Digital TV Broadcasting
        Ali Pouladsadeh Mohammadali Sebghati
        Variations in telecommunication channels are a challenge in wireless communication, making channel estimation and equalization a noteworthy issue. In OFDM systems, some subcarriers can be dedicated as pilots for channel estimation, and in pilot-aided channel estimation, interpolation is an essential step for obtaining the channel response at the data subcarriers. Choosing the best interpolation method has been the subject of various studies, because no single interpolator is best in all conditions; performance depends on the fading model, the signal-to-noise ratio, and the pilot overhead ratio. In this paper, the effect of different interpolation methods on the quality of DVB-T2 broadcast links is evaluated. A simulation platform is prepared in which different channel models are defined according to real-world measurements. Interpolation is performed by five widely used methods (nearest neighbor, linear, cubic, spline, and Makima) for different pilot ratios. After channel equalization with the interpolator output, the bit error rate is calculated as the main criterion for evaluation and comparison, and rules for selecting the appropriate interpolator under different conditions are presented. It is generally concluded that for fading scenarios close to flat fading, or for high pilot overhead ratios, simple interpolators such as the linear interpolator are proper choices; but in harsh conditions, i.e., severely frequency-selective fading channels or low pilot overhead ratios, more complicated interpolators such as the cubic and spline methods yield better results. The amount of improvement and the differences are quantified in this study.
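        A compact sketch of the comparison pipeline: noisy least-squares channel estimates at the pilot subcarriers are interpolated to all subcarriers by several scipy interpolators and scored against the true response by MSE (the paper uses BER after equalization instead). The toy three-tap channel, pilot spacing, and noise level are assumptions, and scipy's Akima interpolator stands in for the Makima variant.

```python
import numpy as np
from scipy.interpolate import interp1d, CubicSpline, PchipInterpolator, Akima1DInterpolator

rng = np.random.default_rng(0)

N, step = 1024, 12                                        # subcarriers, pilot spacing
k = np.arange(N)
pilots = np.unique(np.r_[np.arange(0, N, step), N - 1])   # include edges -> no extrapolation

# Toy frequency-selective channel: a few complex taps -> frequency response
taps = np.zeros(64, complex)
taps[[0, 7, 20]] = [1.0, 0.5 * np.exp(1j * 0.8), 0.25 * np.exp(-1j * 1.9)]
H = np.fft.fft(taps, N)

# Noisy LS estimates at the pilot subcarriers
H_p = H[pilots] + 0.02 * (rng.standard_normal(pilots.size)
                          + 1j * rng.standard_normal(pilots.size))

def interp_complex(make, xp, yp, x):
    """Interpolate a complex response by treating real and imaginary parts separately."""
    return make(xp, yp.real)(x) + 1j * make(xp, yp.imag)(x)

methods = {
    "nearest": lambda xp, yp: interp1d(xp, yp, kind="nearest"),
    "linear":  lambda xp, yp: interp1d(xp, yp, kind="linear"),
    "cubic":   CubicSpline,
    "pchip":   PchipInterpolator,
    "akima":   Akima1DInterpolator,
}

for name, make in methods.items():
    H_hat = interp_complex(make, pilots, H_p, k)
    print(f"{name:8s} MSE = {np.mean(np.abs(H_hat - H) ** 2):.5f}")
```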
      • Open Access Article

        48 - Estimating the shear sonic log using machine learning methods and comparing it with the data obtained from the core
        Houshang Mehrabi Ebrahim Sfidari Seyedeh Sepideh  Mirrabie Sadegh  Barati Boldaji Seyed Mohammad Zamanzadeh
        Machine learning methods are widely used today to estimate petrophysical data. In this study, an attempt was made to calculate the shear sonic log (DTS) from other petrophysical data using machine learning methods and to compare it with the sonic data obtained from the core. For this purpose, methods such as Standard Deviation, Isolation Forest, Minimum Covariance, and Outlier Factor were used to normalize the data and were compared; given the amount of missing data and the box plots, the Standard Deviation method was selected for normalization. The machine learning methods used include Random Forest, Multiple Regression, Boosted Regression, Support Vector Regression, K-Nearest Neighbors, and the MLP Regressor. Multiple regression had the lowest evaluation index (R² = 0.94), while Random Forest regression had the highest correlation between the estimated and original shear sonic logs, with an evaluation index of 0.98. Therefore, Random Forest regression was used for the final estimation, and to prevent overfitting and preserve generalization, the GridSearchCV function was used to compute the optimal hyperparameters for the final estimate. The estimated sonic log showed a very high similarity to the core data.
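        A hedged sketch of the final estimation step described above: a Random Forest regressor tuned with scikit-learn's GridSearchCV, compared against a multiple (linear) regression baseline by R². The synthetic inputs stand in for logs such as DTC, RHOB, NPHI, and GR; the data, grid values, and resulting scores are illustrative, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for petrophysical inputs (e.g. DTC, RHOB, NPHI, GR)
n = 1500
X = rng.normal(size=(n, 4))
dts = (180 + 40 * np.tanh(1.5 * X[:, 0]) - 15 * X[:, 1] * X[:, 2]
       + 8 * X[:, 3] ** 2 + rng.normal(0, 3, n))       # synthetic DTS target

X_tr, X_te, y_tr, y_te = train_test_split(X, dts, test_size=0.3, random_state=0)

# Baseline multiple regression vs. Random Forest tuned with GridSearchCV
lin = LinearRegression().fit(X_tr, y_tr)
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 8, 16]},
    cv=5, scoring="r2",
).fit(X_tr, y_tr)

print("linear regression R2:", round(r2_score(y_te, lin.predict(X_te)), 3))
print("random forest     R2:", round(r2_score(y_te, grid.predict(X_te)), 3))
print("best RF hyperparameters:", grid.best_params_)
```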
      • Open Access Article

        49 - Employing different techniques for exploration, modeling and reserve estimation of the gypsum deposit in the northwest of Tafresh district, Markazi province
        Reza Ahmadi
        In the present research, a complete process from prospecting to reserve estimation was carried out to explore gypsum reserves in the northwest of the Tafresh district. To achieve this goal, an extensive area of 4,500 km² was first investigated by remote sensing using Landsat 8 Operational Land Imager (OLI) images. Using appropriate pre-processing and processing techniques, including principal component analysis, false-color band combinations, least squares regression, and the spectral angle mapper, 17 promising areas scattered across the region were identified. Based on more detailed studies and field surveys of the promising areas, attention was focused on the Darbar gypsum deposit located near Darbar village. Accordingly, a variety of exploratory activities were performed, comprising six trenches with a total volume of 135.61 m3, one stope (selective mining unit), a 1:1000 topographic-geological map, chemical analysis of 9 samples, and the drilling of one exploratory borehole to a depth of about 40 m. The results of the chemical analysis show that the total percentage of SO3 and CaO compounds in all tested samples is more than 76%. In addition, the result of the pilot-scale technological test of stone quality and baking ability, evaluated by the Nizar Cement Factory of Qom, is desirable. Gypsum modeling and reserve estimation of the deposit were also carried out with the classical contour-line method using Surfer software. Based on the calculations, the in-place reserve of the Darbar gypsum deposit was estimated at a significant 5,982,610 tons.
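        The classical contour-line method referred to above estimates volume from the areas enclosed by successive level curves (trapezoidal slices between contours, with the cap treated as a cone) and converts it to tonnage with a bulk density. The sketch below uses invented areas, contour interval, and density, not the Darbar figures.

```python
# Classical contour-line reserve estimation: volume between successive level
# surfaces from the areas enclosed by each contour, then tonnage = volume * density.
# The areas, contour interval, and density below are hypothetical illustrations.

areas_m2 = [52000, 41000, 27000, 12000, 3000]   # area inside each contour (largest to smallest)
interval_m = 5.0                                # vertical spacing between contours
density_t_per_m3 = 2.3                          # assumed gypsum bulk density

volume = 0.0
for a_upper, a_lower in zip(areas_m2[:-1], areas_m2[1:]):
    # trapezoidal slice between two successive contours
    volume += 0.5 * (a_upper + a_lower) * interval_m

# cap beyond the smallest contour approximated as a cone
volume += areas_m2[-1] * interval_m / 3.0

tonnage = volume * density_t_per_m3
print(f"estimated volume: {volume:,.0f} m3   in-place reserve: {tonnage:,.0f} t")
```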