• List of Articles: linear

      • Open Access Article

        1 - Determination of the relationship between sedimentological parameters and morphology of linear sand dunes in the north of Ahangaran, east of Iran
        Benyamin Rezazadeh Arash Amini Gholamreza Mirabshabestari
        Field studies and satellite images confirmed the existence of linear sand dunes in the north of the Ahangaran region, located in Zirkouh (South Khorasan province), east of Iran. A total of 21 sand dunes from 5 stations in different geographical locations were studied. The sedimentological evidence revealed that the Ahangaran sand dunes can be classified morphologically into two groups, simple and composite. The sedimentological analysis also indicated a positive correlation between particle size and dune morphology; that is, as the sedimentological parameters change, the morphology of the dunes grades from simple to composite forms in the central and western parts of the study area. A fine-grained crest pattern is another characteristic identified for the studied linear dunes. Comparison of the sedimentological parameters of these dunes with those of other regions of the world, such as the Kalahari, Namibia, Australia and the Egyptian Sinai, indicates that the Ahangaran sand dunes, with an average grain size of 2.34 φ, are similar to the other regions, but their poorer sorting, around 0.79, sets them apart from the other parts of the world.
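The mean grain size (in φ units) and sorting quoted above are conventionally computed with the Folk and Ward (1957) graphic formulas from percentiles of the cumulative grain-size curve. A minimal sketch, using hypothetical percentile values rather than the study's data:

```python
# Folk & Ward (1957) graphic grain-size statistics from phi percentiles.
# The percentile values below are hypothetical, for illustration only.

def graphic_mean(p16, p50, p84):
    """Graphic mean grain size (phi units)."""
    return (p16 + p50 + p84) / 3.0

def inclusive_graphic_sorting(p5, p16, p84, p95):
    """Inclusive graphic standard deviation (sorting, phi units)."""
    return (p84 - p16) / 4.0 + (p95 - p5) / 6.6

# Hypothetical cumulative-curve percentiles for one dune sample:
phi5, phi16, phi50, phi84, phi95 = 1.1, 1.6, 2.3, 3.1, 3.6

mean_phi = graphic_mean(phi16, phi50, phi84)
sorting = inclusive_graphic_sorting(phi5, phi16, phi84, phi95)
print(round(mean_phi, 2), round(sorting, 2))  # -> 2.33 0.75
```

Lower sorting values indicate better-sorted sand, so the 0.79 reported for Ahangaran marks comparatively poorer sorting than the classic dune fields.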
      • Open Access Article

        2 - A Review of Time Processing in Shams Ghazals: Applying the Stream of Consciousness Technique
        مینا  بهنام ابوالقاسم  قوام Mohammad taghavi محمدرضا  هاشمی
        Some of Molavi’s ghazals have features that guide the reader’s mind toward the intellectual foundations of stream-of-consciousness works. This study examines the category of time in Molavi’s ghazals using the theoretical bases of this technique concerning time. The research hypothesis is as follows: time warps and repeated shifts between past and present occur throughout each ghazal, and the present moment of creation holds special importance in Molavi’s poetry. In a general categorization, time in his ghazals can be divided into two kinds: linear-continuous and nonlinear-discontinuous. The second kind brings the ghazals close to stream-of-consciousness works. In this study, the opening moment of the ghazal serves as the reference point for measuring time warps. Following the rule of free association that dominates his mental domain, Molavi travels from present to past, from past to present, and in some cases into the future while reviewing past memories; at times these time warps and shifts between past and present are very fast and continuous. This affects the language, narrative methods, kind of imagery and other aspects of the ghazal.
      • Open Access Article

        3 - A comparative analysis of narratives about “Rostam” and “Goshtasb” in Ferdowsi’s Shahnameh (based on the interaction of cyclic and linear patterns of time in the formation and critical study of the narratives)
          محمدکاظم  یوسف‌پور
        In Persian literature, Ferdowsi’s Shahnameh has served as a source for much research and an attractive subject for many literary and non-literary scholars. The special multidisciplinary structure of this masterpiece and the diversity and extent of its narratives invite different interpretations using methods such as historicism, discourse analysis and narratology in contemporary research. Searching its historical origins, this article studies how the narratives about Rostam and Goshtasb were constructed and how they interact within the epic. The use of the cyclic pattern of time in codifying and organizing historical narratives mainly produces narratives with mythical and epic content, confining events and historical characters within predetermined structural patterns and transforming them. Whereas historiography under the influence of the cyclic pattern of time obscures the historical origins of events and characters through identification and repetition, the linear transmission of events, with story-like narrative devices, yields narratives about events of different historical and temporal origins that appear coherent because they are formed under the dominance of a cyclic perception of time, yet do not convey the whole past. For this reason, and from the perspective of new historicism, every historical narrative is a story about the past that is not equal to the past. After matching the two patterns of time perception with epistemological schools of history, and using discourse-analysis and narratological approaches, this article shows how the cyclic and linear patterns influence the narratives related to Rostam and Goshtasb and their transformations.
      • Open Access Article

        4 - Estimating the LNAPL level elevation in an oil-contaminated aquifer using gene expression programming (GEP) and an adaptive neuro-fuzzy inference system (ANFIS)
        فاطمه  ابراهیمی Mohammad Nakhaei HamidReza Nasseri  
        One of the main concerns in aquifers adjacent to oil facilities is the leakage of LNAPLs. Since remediation processes are costly and time-consuming, the first step in designing such systems is determining the design goals; usually the most important goals are to maximize pollutant removal and minimize cost. Identifying the thickness of the LNAPL layer and its fluctuations can determine the type of recovery method and thus affect both the amount removed and the cost of implementation. In this study, three methods, gene expression programming (GEP), adaptive neuro-fuzzy inference system (ANFIS) and multivariate linear regression (MLR), were used to estimate and predict the LNAPL level. The input variables are the groundwater level elevation and the LNAPL discharge rate, and the output variable is the LNAPL level elevation. The results of the three models were analyzed with statistical parameters, showing that the GEP technique performs best and can be used successfully to predict LNAPL level fluctuations during recovery. The GEP model also provides an explicit equation for predicting the LNAPL level elevation that can be used directly in the field.
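The MLR baseline the study compares against can be sketched as an ordinary least-squares fit of the LNAPL level against the two stated inputs. All numbers below are synthetic, invented purely for illustration:

```python
import numpy as np

# Ordinary least-squares fit of LNAPL level elevation against groundwater
# level elevation and LNAPL discharge rate. The data are synthetic, made
# up to illustrate the MLR baseline; they are not the study's field data.
rng = np.random.default_rng(0)
gw_level = rng.uniform(95.0, 105.0, size=50)   # groundwater level (m)
discharge = rng.uniform(0.5, 2.0, size=50)     # LNAPL discharge rate
lnapl = 0.9 * gw_level - 0.4 * discharge + 3.0 + rng.normal(0, 0.05, 50)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(gw_level), gw_level, discharge])
coef, *_ = np.linalg.lstsq(X, lnapl, rcond=None)
intercept, b_gw, b_q = coef
print(round(b_gw, 2), round(b_q, 2))  # close to the true 0.9 and -0.4
```

GEP and ANFIS replace this fixed linear form with an evolved symbolic expression and a fuzzy rule base, respectively, which is why they can capture nonlinear level fluctuations that MLR misses.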
      • Open Access Article

        5 - Stability Analysis of Networked Control Systems under Denial of Service Attacks using Switching System Theory
        Mohammad SayadHaghighi Faezeh Farivar
        With the development of computer networks, packet-based data transmission has found its way into cyber-physical systems (CPS) and, especially, networked control systems (NCS). NCSs are distributed industrial processes in which sensors and actuators exchange information between the physical plant and the controller via a network. Any loss of data or packets in the network links affects the performance and stability of the physical system. Such loss may be due to natural network congestion or to intentional denial-of-service (DoS) attacks. In this paper, we analytically study the stability of NCSs with the possibility of data loss in the feed-forward link by modelling the system as a switching one. When data are lost (or replaced by a jammed, bogus or invalid signal/packet) in the forward link, the physical system does not receive the control input sent by the controller. The NCS is regarded as a stochastic switching system using a two-state Markov jump model: in state 1 the control signal/packet passes through and reaches the system, while in state 2 the signal or packet is lost. We analyze the stability in state 2 by treating the situation as an open-loop control scenario with zero input. The proposed stochastic switching system is studied in both continuous and discrete time to determine under what conditions it satisfies Lyapunov stability. The stability conditions are obtained in terms of the random dwell times of the system in each state. Finally, the model is simulated with a DC motor as the plant; the results confirm the correctness of the obtained stability conditions.
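The flavor of such a stability condition can be shown on a scalar toy system where each sample is either delivered (closed loop) or lost to a jam (open loop). This sketch uses an i.i.d. Bernoulli loss probability, a deliberate simplification of the paper's two-state Markov jump model, and made-up gains:

```python
# Mean-square stability check for a scalar networked control loop.
# Each sample is delivered (closed-loop gain a_cl) or lost to a DoS jam
# (open-loop gain a_op) with i.i.d. loss probability p. Gains and
# probabilities are illustrative, not taken from the paper.
a_cl, a_op = 0.5, 1.1   # closed-loop (stable) and open-loop (unstable) gains

def mean_square_stable(p):
    # E[x_{k+1}^2] = (p*a_op^2 + (1-p)*a_cl^2) * E[x_k^2],
    # so the second moment decays iff that factor is below one.
    return p * a_op**2 + (1 - p) * a_cl**2 < 1.0

print(mean_square_stable(0.2))   # few losses: stable (True)
print(mean_square_stable(0.9))   # mostly jammed: unstable (False)
```

The paper's dwell-time conditions play the same role as the factor above: they bound how long the system may stay in the lossy state before the unstable open-loop dynamics dominate.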
      • Open Access Article

        6 - A Review of Linear Commercialization Models
        Ayda Matin Shadi Mohammad zadeh
        The master key of today’s world is the creation of value. The gateway to today’s business world is technology, and the master key of technology is commercialization and the value added it creates. Commercialization is the process of converting new technologies into commercially successful products. It comprises various arrays of technical, commercial and financial processes that turn a new technology into useful products or services. In other words, commercialization of research findings is the link between technology and the market, focusing on the final links of the value chain. Since delivering a product to the market can guarantee an organization’s success and survival, commercialization of technical knowledge is recognized as a vital factor. In research organizations, research is meaningless without commercialization of its products, because producing or testing an idea seems useless without access to the product’s intended customers. To apply the concept of commercialization in organizations, familiarity with commercialization models is necessary. These models fall into two categories: linear and functional. Given the importance of linear models, this article first reviews the concept of commercialization and then examines the most important linear commercialization models: the Goldsmith, Kokobu, Cooper, Rothwell and Zegveld, Andrew and Sirkin, Jolly, and Yeong-Deok Lee models.
      • Open Access Article

        7 - Development of a Multivariate Regression Relationship Among Factors Affecting the Unemployment Rate
        Roya Soltani Mahnaz Ebrahimi Sadrabadi Ali Mohammad Kimiagari
        In this research, a multivariate linear regression relationship is developed among the important factors influencing the unemployment rate. The seasonal data, spanning 1394 to 1394, are compiled from reliable national economic databases. The independent variables are: net foreign assets of the banking system (billion rials), net debt of the public sector to the banking system (billion rials), liquidity in terms of its constituent parts (billion rials), dollar exchange rate (rials), economic participation rate, average inflation rate, average annual interest rate of state-owned banks, and percentage of job seekers (ages 15-65). The results indicate a negative and significant relationship between the unemployment rate and both the average inflation rate and the economic participation rate, while the net debt of the public sector to the banking system has a positive and significant relationship with the unemployment rate. The economic participation rate has the greatest negative effect on the unemployment rate, and the net debt of the public sector to the banking system has the greatest positive effect.
      • Open Access Article

        8 - Error Reconciliation based on Integer Linear Programming in Quantum Key Distribution
        Zahra Eskandari Mohammad Rezaee
        Quantum communication has attracted much attention by offering unconditional security, thanks to the inherent nature of quantum channels and the no-cloning theorem. In this mode of communication, the key is first sent through a quantum channel that is resistant to eavesdropping, and secure communication is then established using the exchanged key. Because noise is inevitable, the received key must be distilled. One vital step in key distillation is key reconciliation, which corrects the errors that have occurred in the key. Different solutions have been proposed for this problem, with different efficiencies and success rates. One of the most notable is LDPC decoding, which is more efficient than the others but unfortunately does not work well for high-rate codes. In this paper, we present an approach to correcting errors in a reconciliation algorithm based on high-rate LDPC codes. The proposed algorithm uses integer linear programming (ILP) to model error correction as an optimization problem and solve it. Testing the approach through simulation, we show that for high-rate LDPC codes it achieves both high efficiency and a higher success rate than the standard LDPC decoding method, belief propagation, in a reasonable time.
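The optimization problem behind such an ILP formulation is: given parity-check matrix H and observed syndrome s, find the minimum-weight error vector e with H·e ≡ s (mod 2). The sketch below states that model on a hypothetical 6-bit toy code and, instead of calling an ILP solver, verifies the optimum by exhaustive search:

```python
from itertools import product

# Toy illustration of the objective behind ILP-based reconciliation:
# minimize the Hamming weight of e subject to H @ e == s (mod 2).
# A real implementation hands this model to an ILP solver; here a 6-bit
# hypothetical code (not from the paper) is solved by brute force.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
s = [1, 0, 1]  # observed syndrome

def syndrome(H, e):
    return [sum(h * b for h, b in zip(row, e)) % 2 for row in H]

best = min((e for e in product([0, 1], repeat=6) if syndrome(H, e) == s),
           key=sum)
print(best, sum(best))  # -> (1, 0, 0, 0, 0, 0) 1
```

In an actual ILP encoding the mod-2 constraint is linearized with auxiliary integer variables (H·e − 2k = s), which is what lets general-purpose solvers handle code lengths far beyond brute-force reach.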
      • Open Access Article

        9 - A New Combined Strategy for Estimation of Individual Power Quality Parameters Using Adaptive Neural Network
        H. R. Mohammadi A. Yazdian Varjani H. Mokhtari
        Given the growth of power quality problems and the increasing use of devices sensitive to them, power quality enhancement has become a serious concern. Series, shunt and combined compensators can be used to compensate voltage, current, or both. One of the most important stages in precise and optimized compensation of power quality parameters is the fast and accurate estimation of the individual parameters. In this paper, a new combined strategy based on a unified adaptive estimator is proposed that is capable of detecting and accurately estimating individual power quality parameters. Compared with other estimation methods, the proposed method has a simple structure, low computational cost and high precision, and it can estimate individual power quality parameters. It is therefore suitable for online applications such as selective compensation in series and shunt active power filters and in unified power quality conditioners. The distinctive properties of the proposed strategy are demonstrated by simulation results under transient and steady-state conditions.
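A common building block for such adaptive estimators is an LMS-updated linear neuron that tracks the in-phase and quadrature amplitudes of a known frequency component. This is a generic sketch with invented amplitudes, not the paper's unified estimator, which handles many components simultaneously:

```python
import math

# LMS-style adaptive linear estimator for one frequency component:
# weights [a, b] track y[k] ~ a*cos(w*t) + b*sin(w*t).
# Frequency, amplitudes and step size are made up for illustration.
fs, f = 1000.0, 50.0
w = 2 * math.pi * f
a_true, b_true = 1.5, -0.8
weights = [0.0, 0.0]
mu = 0.05  # LMS step size

for k in range(2000):
    t = k / fs
    x = [math.cos(w * t), math.sin(w * t)]           # regressor
    y = a_true * x[0] + b_true * x[1]                # measured sample
    err = y - (weights[0] * x[0] + weights[1] * x[1])
    weights = [wi + mu * err * xi for wi, xi in zip(weights, x)]

print(round(weights[0], 2), round(weights[1], 2))  # -> 1.5 -0.8
```

Because the update is recursive and cheap (a few multiplications per sample), this structure suits the online, sample-by-sample operation the abstract emphasizes.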
      • Open Access Article

        10 - Optimization of the Nonlinear Behavior of Power Amplifiers in Satellite Digital Image Transmission Using Particle Swarm Method
        A. A. Lotfi-Neyestanak Gh. Sowlat Mohammad Jahanbakht
        The nonlinear behavior of the power amplifiers in satellite transmitters causes many errors in digital image transmission, so even a moderate linearizer greatly improves the bit error rate (BER). In this paper, particle swarm optimization is used as an effective method with good convergence speed. The effects of an optimized cubic linearizer on digital image transmission are evaluated. Simulation results for the bit error rate as a function of the signal-to-noise ratio (SNR), third-order intercept point (TOI) and noise figure (NF) of the low-noise amplifier (LNA) are compared with each other.
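The optimizer itself is generic and can be sketched compactly. Below, a minimal particle swarm minimizes a toy one-dimensional cost; in the paper the decision variables would instead be the cubic linearizer coefficients, and the swarm size and coefficients here are illustrative defaults:

```python
import random

# Minimal particle swarm optimization (PSO) on a toy cost function.
# Cost, swarm size, inertia and acceleration coefficients are
# illustrative; the paper optimizes cubic predistorter coefficients.
random.seed(1)

def cost(x):
    return (x - 3.0) ** 2 + 2.0   # minimum value 2.0 at x = 3

n, iters = 20, 100
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                     # per-particle best positions
gbest = min(pos, key=cost)         # swarm-wide best position

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]                      # inertia
                  + 1.5 * r1 * (pbest[i] - pos[i])  # cognitive pull
                  + 1.5 * r2 * (gbest - pos[i]))    # social pull
        pos[i] += vel[i]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=cost)

print(round(gbest, 2))  # -> 3.0
```

For a BER objective the cost function would wrap a link simulation, which is why a derivative-free optimizer like PSO is a natural fit.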
      • Open Access Article

        11 - Left Ventricular Segmentation in Echocardiography Images by Manifold Learning and Dynamic Directed Vector Field Convolution
        N.  Mashhadi H. Behnam Ahmad Shalbaf Z. Alizadeh Sani
        Cardiac diseases are a major cause of death throughout the world, and the study of left ventricular (LV) function is very important in diagnosing heart disease. Automatic tracking of the LV wall boundaries during a cardiac cycle is used to quantify LV myocardial function in order to diagnose various heart diseases, including ischemic disease. In this paper, a new automatic method is proposed for segmenting the LV in echocardiography images over one cardiac cycle, combining manifold learning with an active contour based on dynamic directed vector field convolution (DDVFC). First, the echocardiography images of one cardiac cycle are embedded in a two-dimensional (2-D) space using one of the most popular manifold learning algorithms, Locally Linear Embedding, in which the relationship between the images is well represented. Then, the LV wall is segmented throughout the cardiac cycle using the DDVFC-based active contour, with the final contour of each segmented frame used as the initial contour of the next frame. In addition, to increase the accuracy of the segmentation and prevent boundary distortion, the maximum range of the active contour motion is limited by the Euclidean distances between consecutive frames in the resulting 2-D manifold. To evaluate the proposed method quantitatively, echocardiography images of 5 healthy volunteers and 4 patients are used. The results are compared with manual segmentations by a highly experienced echocardiographer (the gold standard), demonstrating the high accuracy of the presented method.
      • Open Access Article

        12 - Identification and Contribution Evaluation of Interharmonic Sources in a Power System Using Adaptive Linear Neuron and Superposition and Projection Method
        P. Sarafrazi H. R. Mohammadi
        In this paper, a new method is proposed for identifying interharmonic-producing loads in a power system, capable of evaluating the contribution of each individual load at the point of common coupling. The method is based on the superposition and projection technique, which requires the Norton equivalent circuits of the loads and the supply network. In addition, a two-stage adaptive linear neuron is used to determine the interharmonic components of a signal. The effectiveness of the proposed method is shown through simulation studies in MATLAB/Simulink; the results demonstrate its capability for identifying interharmonic sources in a power system and evaluating their contributions.
      • Open Access Article

        14 - Evaluation of trend of rainfall and temperature changes and their effects on meteorological drought in Kermanshah province
        Maryam Teymouri Yeganeh Liela Teymouri Yeganeh
        Climate change is one of the natural features of the atmospheric cycle and results in anomalies or fluctuations in meteorological parameters such as rainfall and temperature. Drought is among the catastrophic weather and climate hazards: it alternates with floods and causes significant damage each year, and lack of rainfall has various effects on groundwater, soil moisture and river flow. For this reason, changes in precipitation and temperature have long been a focus of researchers in sciences such as natural resources and the environment. In this study, 30 years of data from the Kermanshah Meteorological Organization on rainfall, average minimum temperature and average maximum temperature at three stations (Kermanshah, Islamabad-e Gharb and Sarpol-e Zahab) were used to assess the severity of drought in each year with DIC software using the standardized precipitation index (SPI), and to examine temperature trends using two non-parametric tests, the Mann-Kendall test and Sen’s slope estimator, as well as linear regression. Statistical software was used to study the drought trend over the 30-year period, and the results show that all three stations are in near-normal condition. The temperature analyses indicate an increasing trend, which is significant at the 99% level according to the two non-parametric tests.
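The core of the Mann-Kendall test mentioned above is the statistic S, the count of increasing minus decreasing pairs in the series. A minimal sketch on synthetic annual means (not the study's station data):

```python
# Mann-Kendall trend statistic S for a time series: the number of later
# values exceeding earlier ones minus the number falling below them.
# A strongly positive S (with its normalized Z beyond the critical value)
# indicates the kind of rising temperature trend the study reports.
def mann_kendall_s(series):
    s = 0
    for i in range(len(series) - 1):
        for j in range(i + 1, len(series)):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

temps = [14.1, 14.3, 14.2, 14.6, 14.8, 14.7, 15.0, 15.2]  # synthetic means
print(mann_kendall_s(temps))  # -> 24 (out of 28 pairs: clear upward trend)
```

Because S depends only on the sign of pairwise differences, the test is non-parametric and robust to outliers, which is why it is standard for climatic series.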
      • Open Access Article

        15 - Improving Robotic Arm Control via Model Reference Adaptive Controller Using EMG Signals Classification
        Mahsa Barfi Hamidreza Karami Elham Farahi Fatemeh Faridi Seyed Manouchehr Hosseini Pilangorgi
        The purpose of designing and manufacturing prosthetic limbs is to make their behavior as similar as possible to human limbs. The aim of this paper is to improve robotic arm control via a model reference adaptive system (MRAS) based on Lyapunov theory, using EMG data classification. The human arm is modeled as a robot with two degrees of freedom, and MRAS is the proposed control method. The outcome is a robotic arm whose MRAS controller, driven by the classification of electromyogram (EMG) data recorded from human arm movements, tracks the reference signal with less overshoot and steady-state error than a conventional PI controller. For this purpose, EMG data were collected with two electrodes from the anterior and middle deltoid muscles of five female athletes performing two movements, abduction and flexion of the arm. After noise removal, four features are extracted: integral of absolute value (IAV), zero crossings (ZC), variance (VAR) and median frequency (MF). Classification is then performed with linear discriminant analysis (LDA) to detect movements from these features. Finally, the proposed controller and model are designed according to the EMG characteristics to achieve the desired control response, and the appropriate command signal is sent to the controller to perform the corresponding movement. The results and the obtained error values show that the behavior of the model and controller conforms to the predefined movement pattern.
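The four named EMG features have simple definitions and can be sketched directly. The windowed signal below is synthetic, made up for illustration, and the median-frequency helper assumes a precomputed power spectrum:

```python
import math

# The four EMG features named in the abstract, on a synthetic window.
def iav(x):
    """Integral of absolute value: sum of |sample|."""
    return sum(abs(v) for v in x)

def zero_crossings(x):
    """Count of sign changes between consecutive samples."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0)

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def median_frequency(power, freqs):
    """Frequency splitting the power spectrum into equal-energy halves."""
    half, acc = sum(power) / 2.0, 0.0
    for p, f in zip(power, freqs):
        acc += p
        if acc >= half:
            return f

# Synthetic decaying-oscillation window standing in for an EMG burst.
signal = [math.sin(0.3 * k) * math.exp(-0.01 * k) for k in range(100)]
print(zero_crossings(signal), round(iav(signal), 1))
```

LDA then operates on the feature vector (IAV, ZC, VAR, MF) per window, so the classifier never sees the raw waveform.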
      • Open Access Article

        16 - Integrated Fault Estimation and Fault Tolerant Control Design for Linear Parameter Varying Systems with Actuator and Sensor Faults
        Hooshang Jafari Amin Ramezani Mehdi Forouzanfar
        Fault occurrence in real operating systems is usually inevitable; it may lead to performance degradation or failure and must be addressed quickly through appropriate decisions, otherwise it can cause major catastrophes. This creates strong demand for enhanced fault tolerant control that compensates for destructive effects and increases system reliability and safety in the presence of faults. In this paper, an approach for estimating and controlling simultaneous actuator and sensor faults is presented through the integrated design of fault estimation and fault tolerant control for linear parameter varying systems. An unknown-input-observer-based fault estimation approach is developed together with both state feedback control and sliding mode control to guarantee robust stability of the closed-loop system by solving a linear matrix inequality formulation. The presented method is applied to a linear parameter varying system, and the simulation results show its effectiveness for fault estimation and system stability.
      • Open Access Article

        17 - Porosity modeling in the Azadegan oil field: a comparative study of Bayesian data fusion, multilayer neural network, and multiple linear regression techniques
        عطیه  مظاهری طرئی حسین معماریان بهزاد تخم چی بهزاد مشیری
        Porosity is an important reservoir property that can be obtained by studying well cores. However, not all wells in a field are cored, and in some wells, such as horizontal wells, coring is practically impossible. Log data, on the other hand, are available for almost all wells and are usually used to estimate porosity. The porosity obtained this way is influenced by factors such as temperature, pressure, fluid type and the amount of hydrocarbons in shale formations, so it differs slightly from the true porosity; the estimates are therefore prone to error and uncertainty. One of the best and most practical ways to reduce measurement uncertainty is to use multiple sources together with data fusion techniques, whose main benefit is increased confidence and reduced risk and error in decision making. In this paper, data from four wells in the Azadegan oil field are used to determine porosity values. First, a multilayer neural network and multiple linear regression are used to estimate the values, and the results are compared with a data fusion method based on Bayesian theory. To check whether the three methods generalize to other data, the porosity of another independent well in the field is also estimated with each technique. Seven input variables are used in both the neural network and the multiple linear regression, and at most seven in the data fusion technique. Comparing the results of the three methods shows that Bayesian data fusion estimates porosity considerably more accurately than the multilayer neural network and multiple linear regression, with results correlating with the ground truth at more than 90%.
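The simplest instance of Bayesian data fusion is precision-weighted combination of two independent Gaussian estimates of one quantity. The numbers below are invented for illustration and this is far simpler than the study's multi-source framework, but it shows why fusion shrinks uncertainty:

```python
# Precision-weighted (Gaussian) fusion of two independent estimates of
# the same quantity, a minimal instance of Bayesian data fusion.
# The porosity values and uncertainties are invented for illustration.
def fuse(m1, v1, m2, v2):
    """Posterior mean and variance of two Gaussian estimates."""
    w1, w2 = 1.0 / v1, 1.0 / v2          # precisions (inverse variances)
    v = 1.0 / (w1 + w2)                  # fused variance is always smaller
    return v * (w1 * m1 + w2 * m2), v

# e.g. porosity 0.18 +/- 0.02 from one log, 0.22 +/- 0.04 from another:
mean, var = fuse(0.18, 0.02**2, 0.22, 0.04**2)
print(round(mean, 3), round(var**0.5, 3))  # -> 0.188 0.018
```

The fused mean leans toward the more precise source, and the fused standard deviation is below both inputs, which is the "increased confidence" benefit the abstract describes.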
      • Open Access Article

        18 - Linear modeling of effective area changes of Gol Sisakht Lake (Kohgiluyeh and Boyerahmad)
        amirhossein parsa
        This article studies satellite imagery of Kuh-e Gol Lake over several years and models the changes in its shoreline and effective area with a linear trend. Based on this trend, the lake is projected to be completely dry by 2030. Examination of the satellite images indicates that the most important drivers of the lake's shrinkage are a combination of natural and human factors, including unfavorable weather conditions that reduce inflow. The decreasing water volume of Kuh-e Gol Lake is reducing the animal and plant species around it, and this loss of species in turn promotes drought and degradation of the Kuh-e Gol ecosystem, accelerating the drying of the lake.
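A linear-trend extrapolation of lake area can be sketched with ordinary least squares. The yearly areas below are synthetic stand-ins chosen so the toy fit reaches zero in 2030, mirroring the study's reported estimate rather than reproducing its data:

```python
# Least-squares linear trend fitted to yearly lake-area values and
# extrapolated to the year the area reaches zero. The areas are
# synthetic, chosen so the toy trend hits zero in 2030 to mirror the
# study's dry-up estimate.
years = [2000, 2005, 2010, 2015, 2020]
areas = [3.0, 2.5, 2.0, 1.5, 1.0]      # km^2, synthetic

n = len(years)
mx, my = sum(years) / n, sum(areas) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, areas))
         / sum((x - mx) ** 2 for x in years))
intercept = my - slope * mx
dry_year = -intercept / slope          # where the trend line hits zero
print(round(dry_year))  # -> 2030
```

A purely linear model ignores feedbacks (evaporation scaling with depth, groundwater coupling), so such dry-up years are first-order indicators rather than forecasts.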
      • Open Access Article

        19 - Resolving Zeno’s Paradoxes Based on the Theory of the “Linear Analytic Summation” and Evaluation of the Evolution of Responses
        Reza Shakeri Ali Abedi Shahroodi
        Zeno challenged the problem of motion following his master Parmenides and presented his criticisms of the theory of motion in four arguments that, in effect, introduced the paradoxes of this theory. These paradoxes, which contradict something evident (motion), provoked various reactions. This paper first recounts two of Zeno’s paradoxes and then presents the responses offered by thinkers of different periods. In his response to the paradoxes, Aristotle separated the actual from the potential division of motion and, following a mathematical approach, resorted to the concept of infinitely small magnitudes; Kant also addressed the problem in his antinomies. The authors then explain the theory of linear analytic summation, which consists of two elements: 1) the distance between two points of transfer can be divided infinitely, yet the absolute value of each subsequent distance is always smaller than that of the previous one; 2) since the infinitude of the division is analytic rather than synthetic in nature, the summation limit of these distances equals the initial distance. On this theory, as motion is not free of direction and continuous limits, an integral limit of distance is traversed at each moment, and the analytic, successive, and infinite limits of distance are determined. The final section of the paper evaluates the responses given to the paradoxes.
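        The second element of the theory, that an analytically infinite division still sums to the original distance, is the convergence of a decreasing series. A minimal numerical illustration, using a halving scheme assumed here for simplicity rather than taken from the authors:

```python
# The distance d is divided into successively smaller parts (here halves);
# each part is strictly smaller than the last, yet the running sum of the
# parts converges to d itself -- the "summation limit equals the distance".
d = 1.0
total, remaining = 0.0, d
parts = []
for _ in range(50):          # first 50 analytic subdivisions
    step = remaining / 2     # each part is half of what is left
    parts.append(step)
    total += step
    remaining -= step

print(all(a > b for a, b in zip(parts, parts[1:])))  # → True (decreasing)
print(abs(total - d) < 1e-12)                        # → True (sum → d)
```

After 50 subdivisions the remaining gap is 2⁻⁵⁰ of the original distance, which is the sense in which the infinite analytic division never produces more than the distance it divides.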
      • Open Access Article

        20 - Data-Driven Sliding Mode Control Based on Projection Recurrent Neural Network for HIV Infection: A Singular Value Approach
        Ashkan Zarghami Mehdi Siahi Fereidoun Nowshiravan Rahatabad
        In the present study, drug treatment of HIV infection is investigated using Data-Driven Sliding Mode Control (DDSMC) combined with a Projection Recurrent Neural Network (PRNN). The major objective is to establish a control law that eliminates the need for a mathematical model of HIV infection and respects the physical limits of the actuator. This is accomplished through the concepts of model-free adaptive control, in which the relation between input and output is described by local dynamic linearized models based on quasi-partial derivatives. To determine the DDSMC law, a performance index is first defined based on the fulfillment of a discrete-time exponential reaching condition. By recasting this index as a quadratic programming problem, the dynamics of the PRNN are derived from projection theory. The closed-loop system is determined explicitly from the optimizer's output equation, and closed-loop stability is analyzed using a singular value approach. Simulation results reveal that, compared with one of the newest control techniques, the proposed algorithm robustly drives the state variables of HIV infection to the healthy equilibrium point in the face of model uncertainty and external disturbances.
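        Discrete-time exponential reaching conditions of the kind mentioned above are commonly built on a law of the form s(k+1) = (1 − qT)·s(k) − εT·sgn(s(k)). The sketch below uses this generic form with assumed gains, not the paper's exact index or the HIV dynamics:

```python
# Illustrative sketch of a generic discrete-time exponential reaching law
# (assumed form and gains, not the paper's DDSMC index):
#   s(k+1) = (1 - q*T)*s(k) - eps*T*sign(s(k)),  with 0 < q*T < 1.
import math

T, q, eps = 0.01, 5.0, 0.5   # assumed sample time and reaching gains
s = 2.0                      # initial value of the sliding variable
history = [s]
for _ in range(2000):
    s = (1 - q * T) * s - eps * T * math.copysign(1.0, s)
    history.append(s)

# The sliding variable decays exponentially (factor 1 - q*T per step) and
# then chatters in a small band around zero set by the switching term.
print(abs(history[-1]) < 0.01)  # → True
```

The two gains trade off reaching speed (q) against the width of the residual chattering band (ε), which is why such laws are a natural basis for a quadratic performance index.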
      • Open Access Article

        21 - External Skeletal Fixators in Small Animals
        Hamid Reza Moslemi Navid Ehsani Pour Faeze Emarloo
        External skeletal fixation is an orthopedic method for treating open or closed fractures of long tubular bones, joint stiffness, bone lengthening, and congenital malformations. An external skeletal fixator is a device installed outside the limb: pins inserted through the skin into the bone fragments fix the fracture and control its alignment, and the pins are connected to an external frame secured with bolts and nuts. Fixators have changed significantly in appearance and biomechanics over time, but their principle and function remain the same. They consist of pins or thin stainless-steel wires that penetrate the skin and reach the bone, holding the broken part in the correct alignment. Depending on body geometry and shape, external skeletal fixators are available in different types, such as linear, circular, and hybrid fixators; the simplest and most common type is the linear fixator. The use of an external fixator has several advantages over other fixation methods, such as stabilization of the fracture at some distance from the injury site, no need for a cast, ease of patient movement, and minimal involvement of the joint. Premature loosening of the pin is the most common complication, causing pain, inflammation, and discharge from the pin tract. Although these fixators are versatile and effective treatment modalities, they require careful maintenance during treatment; before deciding to use an external fixator, the pet owner's ability to comply with postoperative care instructions should be considered. This article reviews the types of external fixators, postoperative care, and their complications.