• List of Articles: Scheduling

      • Open Access Article

        1 - Scheduling Tasks in Cloud Environments Using the MapReduce Framework and a Genetic Algorithm
        Nima Khezr, Nima Jafari Navimipour
        Task scheduling is a vital component of any distributed system, such as grids, clouds, and peer-to-peer networks, which assigns tasks to appropriate resources for execution. Common scheduling methods have disadvantages such as high time complexity, inconsistent execution of input tasks, and increased program execution time. Exploration-based scheduling algorithms to prioritize tasks …
      • Open Access Article

        2 - Optimal Sensor Scheduling Algorithms for Distributed Sensor Networks
        Behrooz Safarinejadian, Abdolah Rahimi
        In this paper, a sensor network is used to estimate the dynamic states of a system. At each time step, one (or multiple) sensors are available that can send their measured data to a central node, where all processing is done. We want to provide an optimal algorithm for scheduling sensor selection at every time step. Our goal is to select the appropriate sensor to reduce computations, optimize energy consumption, and enhance network lifetime. To achieve this goal, we must reduce the error covariance. Three algorithms are used in this work: sliding window, thresholding, and random selection. Moreover, we offer a new algorithm based on circular selection. Finally, a novel algorithm for selecting multiple sensors is proposed. The performance of the proposed algorithms is illustrated with numerical examples.
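For a scalar system, the trade-off between the selection policies named above can be sketched with a standard Kalman variance update; the system model, noise variances, and the greedy policy details below are illustrative assumptions, not the paper's algorithms.

```python
# Hypothetical sketch of sensor scheduling for a scalar system x_{k+1} = a*x_k + w_k,
# measured as y_k = x_k + v_k by one of several sensors.

def posterior_variance(prior_var, sensor_noise_var):
    """Kalman measurement update of the error variance for y = x + v."""
    gain = prior_var / (prior_var + sensor_noise_var)
    return (1.0 - gain) * prior_var

def schedule(sensor_vars, steps, a=0.9, process_var=0.1, policy="greedy"):
    """Return the per-step error variance under a sensor-selection policy."""
    var, history = 1.0, []
    for k in range(steps):
        prior = a * a * var + process_var      # time update
        if policy == "greedy":                 # pick the sensor minimizing variance
            chosen = min(range(len(sensor_vars)),
                         key=lambda i: posterior_variance(prior, sensor_vars[i]))
        else:                                  # "circular" (round-robin) selection
            chosen = k % len(sensor_vars)
        var = posterior_variance(prior, sensor_vars[chosen])
        history.append(var)
    return history

greedy = schedule([0.2, 0.5, 1.0], steps=20, policy="greedy")
circular = schedule([0.2, 0.5, 1.0], steps=20, policy="circular")
```

Greedy selection never does worse than round-robin on error variance, though round-robin spreads the energy cost across sensors, which is the trade-off the paper's circular policy targets.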
      • Open Access Article

        3 - Hybrid Task Scheduling Method for Cloud Computing by Genetic and PSO Algorithms
        Amin Kamalinia, Ali Ghaffari
        Cloud computing makes it possible for users to use different applications through the internet without having to install them. It is considered a novel technology aimed at handling and providing online services. Enhancing efficiency in cloud computing requires appropriate task scheduling techniques; due to the limitations and heterogeneity of resources, scheduling is highly complicated. Hence, an appropriate scheduling method can have a significant impact on reducing makespan and enhancing resource efficiency. Inasmuch as task scheduling in cloud computing is an NP-complete problem, traditional heuristic algorithms do not have the required efficiency in this context. Given these shortcomings, most researchers have recently focused on hybrid meta-heuristic methods for task scheduling. In this cutting-edge research domain, we use the HEFT (Heterogeneous Earliest Finish Time) algorithm to propose a hybrid meta-heuristic method in which genetic algorithm (GA) and particle swarm optimization (PSO) are combined. The results of simulation and statistical analysis indicate that the proposed algorithm, compared with three other heuristic algorithms and a memetic algorithm, optimizes the makespan required for executing tasks.
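HEFT's prioritization step, which the hybrid GA/PSO scheme builds on, orders tasks by their upward rank: rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)). A minimal sketch on an assumed four-task DAG (the tasks and costs are illustrative, not from the paper):

```python
# Upward-rank computation used by HEFT for task prioritization.
# avg_comp: average computation cost per task; avg_comm: average edge cost.

def upward_ranks(succ, avg_comp, avg_comm):
    """rank_u(t) = avg_comp[t] + max over successors s of (avg_comm[(t, s)] + rank_u(s))."""
    memo = {}
    def rank(t):
        if t not in memo:
            memo[t] = avg_comp[t] + max(
                (avg_comm[(t, s)] + rank(s) for s in succ.get(t, [])), default=0.0)
        return memo[t]
    for t in avg_comp:
        rank(t)
    return memo

# Assumed tiny DAG: t1 fans out to t2 and t3, which join at t4.
succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"]}
avg_comp = {"t1": 10, "t2": 8, "t3": 12, "t4": 6}
avg_comm = {("t1", "t2"): 2, ("t1", "t3"): 4, ("t2", "t4"): 3, ("t3", "t4"): 1}
ranks = upward_ranks(succ, avg_comp, avg_comm)
order = sorted(ranks, key=ranks.get, reverse=True)  # HEFT schedules in this order
```

In a hybrid scheme of this kind, such a HEFT ordering typically seeds the initial GA/PSO population rather than serving as the final schedule.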
      • Open Access Article

        4 - BSFS: A Bidirectional Search Algorithm for Flow Scheduling in Cloud Data Centers
        Hasibeh Naseri, Sadoon Azizi, Alireza Abdollahpouri
        To support high bisection bandwidth for communication-intensive applications in the cloud computing environment, data center networks usually offer a wide variety of paths. However, optimal utilization of this facility has always been a critical challenge in data center design. Flow-based mechanisms usually suffer from collisions between elephant flows, while packet-based mechanisms encounter packet reordering; both of these challenges lead to severe performance degradation in a data center network. To address these problems, in this paper we propose an efficient mechanism for the flow scheduling problem in cloud data center networks. The proposed mechanism, on the one hand, makes decisions per flow, thus avoiding the need to rearrange packets. On the other hand, thanks to SDN technology and a bidirectional search algorithm, it is able to distribute elephant flows across the entire network smoothly and at high speed. Simulation results confirm that the proposed method outperforms state-of-the-art algorithms under different traffic patterns. In particular, compared to the second-best result, it provides about 20% higher throughput for the random traffic pattern; with regard to flow completion time, the improvement is 12% for the random traffic pattern.
      • Open Access Article

        5 - Energy Efficient Cross Layer MAC Protocol for Wireless Sensor Networks in Remote Area Monitoring Applications
        R. Rathna, L. Mary Gladence, J. Sybi Cynthia, V. Maria Anu
        Sensor nodes are typically less mobile, much more limited in capabilities, and more densely deployed than traditional wired networks and mobile ad-hoc networks. Wireless Sensor Networks (WSNs) are generally built from electro-mechanical sensors communicating wirelessly. Nowadays the WSN has become ubiquitous: it is used in combination with the Internet of Things, serves as the lower data-collection layer in many Big Data applications, and is deployed in combination with several high-end networks. All the higher-layer networks and application-layer services depend on the low-level WSN at the deployment site. Thus, to achieve energy efficiency in the overall network, simplification strategies have to be carried out not only in the Medium Access Control (MAC) layer but also in the network and transport layers. An energy-efficient algorithm for scheduling and clustering is proposed and described in detail. The proposed methodology clusters the nodes using a traditional yet simplified approach of hierarchically sorting the sensor nodes. A few important works on cross-layer protocols for WSNs are reviewed, and an attempt to modify their pattern is also presented in this paper with results, along with a comparison with a few prominent protocols in this domain. The comparison gives a basic idea of which type of scheduling algorithm suits which type of monitoring application.
      • Open Access Article

        6 - A Task Mapping and Scheduling Algorithm based on Genetic Algorithm for Embedded System Design
        Mohadese Nikseresht, Mohsen Raji
        Embedded system designers face numerous design requirements and objectives (such as runtime, power consumption, and reliability). Since meeting one of these requirements mostly contradicts the others, it seems inevitable to apply multi-objective approaches in various stages of designing embedded systems, including the task scheduling step. In this paper, multi-objective task mapping and scheduling for the design stage of embedded systems is presented. Tasks are represented by task graphs, assuming that the hardware architecture platform is given. In order to manage the dependencies between tasks in the task graph, a segmentation method is used, in which the tasks that can be executed simultaneously are placed in a segment that is considered in the scheduling process. In the proposed method, the task mapping and scheduling problem is modeled as a genetic-algorithm-based multi-objective optimization problem considering execution time, energy consumption, and reliability. In comparison to similar previous works, the proposed scheduling method provides 21.4%, 19.2%, and 20% improvements in execution time, energy consumption, and reliability, respectively. The multi-objective approach helps the designer select the best outcome according to different considerations.
      • Open Access Article

        7 - Enhancing Efficiency and Organizational Knowledge in Assembly Lines Using Discrete-event Simulation Techniques
        Moslem Fadaei, Hadi Heydari Ghare Bagh, Sedigh Reisi
        In today’s competitive market, companies must work closely with their customers and suppliers if they want to survive, improve their performance, and respond better to market forces. In a typical supply chain, information flow is more important than the other two flows involved, material flow and cash flow, and it can lead to enhanced organizational knowledge. The main purpose of this research is to identify, analyse, and improve the performance of an assembly line using simulation techniques. Many factors, such as setup time, operating time, failure rate, repair rate, and production rate, can cause data to be non-stationary; therefore, simulation techniques are necessary to analyse such complicated systems. The most important properties of an assembly line, including bottlenecks, cycle time, buffer capacities, and the number of finished products in a given time period, are investigated. After data gathering and building an appropriate simulation model, the simulation experiments were run with Enterprise Dynamics software. The simulation model is implemented, tested against real-world data, and demonstrated by a numerical example. The bottlenecks and performance measures, including throughput time and waiting time, are then identified and analysed in order to develop a new scenario presenting opportunities for improvement. The results show significant improvements in terms of reduced waiting time and increased efficiency.
      • Open Access Article

        8 - Solving the Project Scheduling Problem in a Steady State with Resource Constraints and Delivery Time Windows
        Meysam Jafari Eskandari, Rozbeh Azizmohammadi
        Because they account for the real conditions of a project and address managers’ problems, extensions of basic project scheduling methods have recently drawn researchers’ attention; these methods seek an optimal activity sequence that realizes project goals while satisfying constraints such as precedence relations and (renewable and non-renewable) resource constraints. The practical and theoretical importance of these issues has led researchers to study many variants of the project scheduling problem and methods to solve or extend each of them. This research considers the selection of an execution mode for each activity, with renewable and non-renewable resources, start-to-finish precedence relations, and a delivery time for each activity within two time windows, with a penalty cost for late delivery. The presented model is solved at small scale by GAMS software, and at small, medium, and large scale using the meta-heuristic methods NSGA-II and cuckoo search, coded in MATLAB 2013. Comparison of the obtained answers indicates the better performance of the genetic algorithm on most indices, while the cuckoo algorithm is superior on solution time.
      • Open Access Article

        9 - Semi-Partitioning Multiprocessor Real-Time Scheduling in Data Stream Management Systems
        M. Alemi, M. Haghjoo
        In data stream management systems, as streams of data arrive to the system, stored queries are executed on these data. The high workload requires high processing capacity, which leads to using multiple processors. The partitioning approach, one of the main methods in multiprocessor real-time scheduling, binds each query to one processor based on its utilization, the ratio of estimated execution time to period; instances of each query, which should be completed by a defined deadline, can only be executed on the specified processor. Each query that cannot be assigned to any processor can be split based on the utilization of the processors and spread among them, getting closer to the optimal result. This system has been examined with real network data, showing a lower miss ratio and higher utilization in comparison to the simple partitioning approach.
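The semi-partitioning idea described above can be sketched as first-fit packing by utilization, with splitting as the fallback for queries that fit on no single processor. The first-fit order and the capacity bound of 1.0 per processor are illustrative assumptions, not the paper's exact scheme.

```python
# Semi-partitioning sketch: each query has utilization u = execution_time / period.
# Queries are packed first-fit; an unplaceable query is split across remaining capacities.

def semi_partition(utils, num_procs):
    """Return a dict: processor -> list of (query_index, assigned_utilization)."""
    cap = [1.0] * num_procs                  # each processor hosts utilization <= 1
    alloc = {p: [] for p in range(num_procs)}
    for qi, u in enumerate(utils):
        home = next((p for p in range(num_procs) if cap[p] >= u), None)
        if home is not None:                 # whole query fits on one processor
            cap[home] -= u
            alloc[home].append((qi, u))
        else:                                # split the query across processors
            rest = u
            for p in range(num_procs):
                share = min(cap[p], rest)
                if share > 0:
                    cap[p] -= share
                    alloc[p].append((qi, share))
                    rest -= share
                if rest <= 1e-12:
                    break
            if rest > 1e-12:
                raise ValueError("total utilization exceeds capacity")
    return alloc

# Two processors; the third query (0.7) fits nowhere whole and gets split.
alloc = semi_partition([0.6, 0.6, 0.7], num_procs=2)
```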
      • Open Access Article

        10 - Lifetime Improvement of Real-Time Embedded Systems by Battery-Aware Scheduling
        S. Manoochehri, M. Kargahi
        Many embedded systems and mobile devices use batteries as their energy suppliers, so the lifetime of these devices depends on battery behavior. Accordingly, battery management, besides reducing the energy consumption of the respective system, helps to increase the efficiency of such systems. Maximizing the battery lifetime is a quite challenging problem due to the nonlinear behavior of batteries and its dependence on the characteristics of the discharge profile. This paper employs dynamic voltage scaling (DVS) to extend the lifetime of battery-operated real-time embedded systems. We propose a battery-aware scheduling algorithm to maximize the lifetime and efficiency of the battery. The proposed algorithm employs DVS based on greedy heuristics suggested by battery characteristics and the power consumption of tasks. Two methods are used to evaluate the algorithm: the first is based on a cost function derived from a high-level analytical model of the battery, and the second is based on Dualfoil, a low-level Li-ion battery simulator. Experimental results show that the system lifetime can be increased by about 4.3% to 19.6% in various situations (in terms of system workload and task power consumption).
      • Open Access Article

        11 - Energy-Aware Scheduling for Real-Time Unicore Mixed-Criticality Systems
        S. H. Sadeghzadeh, Yasser Sedaghat
        Integrated modular avionics (IMA) has significantly evolved the avionics industry. In this architecture, tasks with different criticality levels are integrated on shared hardware in order to reduce size, weight, power consumption, and cost, so they use resources in common. The industry’s interest in integrating tasks has resulted in the introduction of mixed-criticality systems. Real-time behavior and assured execution of critical tasks are two basic needs of these systems; however, the integration of critical and non-critical tasks complicates task scheduling. On the other hand, reducing energy consumption is another important need, as these devices run on batteries. Therefore, the present study aims to satisfy both needs (real-time scheduling and reduced energy consumption) by introducing a novel energy-aware scheduling approach. The proposed algorithm guarantees the execution of critical tasks as well as reducing energy consumption through dynamic voltage and frequency scaling (DVFS). Simulation results show that the energy consumption of the proposed algorithm improves by up to 14% in comparison with similar approaches.
      • Open Access Article

        12 - Scheduling of IoT Application Tasks in Fog Computing Environment Using Deep Reinforcement Learning
        Pegah Gazori, Dadmehr Rahbari, Mohsen Nickray
        With the advent and development of IoT applications in recent years, the number of smart devices and consequently the volume of data collected by them are rapidly increasing. On the other hand, most IoT applications require real-time data analysis and low latency in service delivery. Under these circumstances, sending the huge volume of various data to cloud data centers for processing and analytical purposes is impractical, and the fog computing paradigm seems a better choice. Because of the limited computational resources of fog nodes, their efficient utilization is of great importance. In this paper, the scheduling of IoT application tasks in the fog computing paradigm is considered. The main goal of this study is to reduce the latency of service delivery, for which we use a deep reinforcement learning approach. The proposed method combines the Q-learning algorithm with deep learning, experience replay, and target network techniques. According to the experimental results, the DQLTS algorithm improves the ASD metric by 76% in comparison to QLTS and by 6.5% compared to the RS algorithm, and it reaches a faster convergence time than QLTS.
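The combination the abstract names (Q-learning plus experience replay and a periodically synchronized target network) can be sketched with a lookup table standing in for the deep network. The toy 4-state chain environment and every hyperparameter below are illustrative assumptions, not the DQLTS setup.

```python
# Q-learning with experience replay and a periodically synced "target network"
# (here a frozen copy of the Q-table), on an assumed 4-state chain MDP where
# action 1 moves right toward the goal state and yields reward 1 on arrival.
import random

def train(episodes=200, gamma=0.9, alpha=0.5, eps=0.2, batch=16, sync_every=20, seed=0):
    rng = random.Random(seed)
    n_states, actions = 4, (0, 1)              # action 1 = right, 0 = left
    q = [[0.0, 0.0] for _ in range(n_states)]
    target = [row[:] for row in q]             # target network: frozen copy of q
    replay, step = [], 0
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.choice(actions) if rng.random() < eps else max(actions, key=lambda x: q[s][x])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            replay.append((s, a, r, s2))       # store experience for replay
            for bs, ba, br, bs2 in rng.sample(replay, min(batch, len(replay))):
                bootstrap = 0.0 if bs2 == n_states - 1 else gamma * max(target[bs2])
                q[bs][ba] += alpha * (br + bootstrap - q[bs][ba])
            step += 1
            if step % sync_every == 0:         # periodically sync target <- q
                target = [row[:] for row in q]
            s = s2
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(3)]
```

Sampling minibatches from the replay buffer breaks the correlation between consecutive transitions, and bootstrapping off the frozen target copy stabilizes the updates, which is the role these two techniques play in the paper's deep setting as well.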
      • Open Access Article

        13 - Priority-Based Task Scheduling Using Fuzzy System in Mobile Edge Computing
        Entesar Hosseini, Mohsen Nickray, SH. GH.
        Mobile edge computing (MEC) is a recent approach to improving latency, capacity, and resource availability in mobile cloud computing (MCC). Mobile resources, including battery and CPU, have limited capacity, so enabling computation-intensive and latency-critical applications is an important issue in MEC. In this paper, we use a standard three-level system model of mobile devices, edge, and cloud, and propose two offloading and scheduling algorithms. On the mobile device side, the decision to offload tasks is made by the greedy knapsack offloading algorithm (GKOA), which selects tasks with high power consumption for offloading and saves the device’s energy. On the MEC side, we present a dynamic scheduling algorithm, fuzzy-based priority task scheduling (FPTS), for prioritizing and scheduling tasks based on two criteria. Numerical results show that, compared to other methods, our proposed work reduces waiting time, latency, and system overhead, and balances the system with the fewest resources. The proposed system reduces battery consumption in the smart device by up to 90%, and more than 92% of tasks are executed successfully in the edge environment.
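A greedy knapsack-style offloading decision of the kind GKOA makes can be sketched as ranking tasks by energy cost per unit of offloaded data and offloading until a capacity budget is exhausted. The density heuristic, the task values, and the budget are illustrative assumptions, since the abstract does not specify GKOA's exact formulation.

```python
# Greedy knapsack sketch: offload the most energy-dense tasks first.

def greedy_offload(tasks, capacity):
    """tasks: list of (name, energy_cost, data_size); capacity: offloadable data budget.
    Returns (offloaded_names, local_names, energy_saved)."""
    ranked = sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)  # energy per unit data
    offloaded, local, saved, used = [], [], 0.0, 0.0
    for name, energy, size in ranked:
        if used + size <= capacity:          # fits in the remaining budget: offload it
            offloaded.append(name)
            used += size
            saved += energy
        else:                                # otherwise execute locally
            local.append(name)
    return offloaded, local, saved

# Assumed tasks: (name, energy cost on device, data to transfer), budget of 5 units.
off, loc, saved = greedy_offload(
    [("t1", 9.0, 3.0), ("t2", 4.0, 4.0), ("t3", 5.0, 1.0), ("t4", 2.0, 2.0)],
    capacity=5.0)
```

Like any greedy knapsack heuristic, this is fast but not guaranteed optimal; it matches the abstract's goal of preferentially offloading high-power tasks.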
      • Open Access Article

        14 - Optimal Strategy Determination of Preventive Maintenance Scheduling in the Presence of Demand Response Resources
        V. Sharifi, M. Rashidinejad, A. Abdollahi, M. Mollahassani-pour
        In this paper, a new method is proposed for the maintenance scheduling of generation units in a competitive electricity market environment. Preventive maintenance scheduling is one of the most important problems in restructured power systems due to its impact on reliability, pollutant emissions, and producers’ profits. In order to consider producers’ risk, preventive maintenance scheduling has been modeled from the producers’ point of view using non-cooperative game theory, which is used to reach an optimal Nash equilibrium strategy. The independent system operator, on the other hand, seeks an appropriate level of reliability and pollution reduction. In this paper, demand response programs are one of the options for influencing energy policy decision-making. A coordination procedure is used to align producers’ maintenance programs with the reliability- and emission-driven maintenance program. The proposed model has been implemented on the modified 24-bus IEEE-RTS. The results indicate the effectiveness of the proposed method.
      • Open Access Article

        15 - Coordinated Scheduling of Electricity and Natural Gas Networks Considering the Effect of PtG Units on Handling Electric Vehicles’ Uncertainties
        Iman Goroohi Sardou, Ali Mobasseri
        Gas-fueled power plants are considerably effective in power system operation during peak hours due to their high ramp-up and ramp-down rates. With the increasing penetration of gas-fueled power plants in power systems and the development of new technologies such as power-to-gas (PtG) units, the coordinated scheduling of electricity and natural gas (NG) networks has attracted researchers’ attention. The NG volume generated by PtG units is stored in storage facilities to directly supply NG demands or to sell in NG markets; if necessary, the stored NG is reconverted into electricity, which may in the long term be a suitable replacement for batteries and storage in the electricity network. In this paper, a mixed-integer linear programming (MILP) model is proposed for the stochastic coordinated scheduling of electricity and NG networks with PtG units, considering the uncertainties in the available charging and discharging capacities of vehicle-to-grid (V2G) stations. A test network integrating the modified 24-bus IEEE electricity network and the Belgian gas network, including nine power stations (three of them gas-fueled), three V2G stations, and three PtG stations, is studied to evaluate the proposed model. Simulation results demonstrate the effectiveness of PtG units in handling the uncertainties of V2G station charging and discharging, as well as the advantage of coordinated scheduling of the electricity and NG networks over scheduling the two networks independently.
      • Open Access Article

        16 - Preventive and Probabilistic-Possibilistic Scheduling of Microgrid against the Natural Phenomena in the Presence of Electric Vehicles
        Amirhossein Nasri, A. Abdollahi, M. Rashidinejad
        This paper proposes a preventive and probabilistic-possibilistic framework for the day-ahead scheduling of an electric vehicle (EV) parking lot and distributed generation in a microgrid. The suggested scheduling is performed in normal conditions and in emergency conditions, when a natural phenomenon occurs and the microgrid is disconnected from the upstream network. The uncertainty in the number of EVs in the parking lot is captured by a Z-number as a probabilistic-possibilistic model, while the uncertainties of photovoltaic generation, wind turbine output power, market price, and load demand are modeled by Monte Carlo simulation as a probabilistic method. Natural phenomena are modeled through multifarious scenarios according to when the phenomenon unfolds and how long it lasts. In the suggested framework, the operation of the parking lot is based on the uncertainty and the EVs’ charging/discharging schedule. The objective functions of the proposed structure are the operational cost in normal conditions and, in emergency conditions, the load shedding cost in addition to the operational cost. To evaluate the performance of the suggested structure, the modified 33-bus IEEE distribution network is employed.
      • Open Access Article

        17 - An Efficient Approach for Resource Allocation in Fog Computing Considering Request Congestion Conditions
        Samira Ansari Moghaddam, Samira Noferesti, Mehri Rajaei
        Cloud data centers often fail to cope with millions of delay-sensitive storage and computational requests due to their long distance from end users. A delay-sensitive request requires a response before its predefined deadline expires, even when the network has a high load of requests. Fog computing architecture, which provides computation, storage, and communication services at the edge of the network, has been proposed to solve these problems. One of the challenges of fog computing is how to allocate the resources of cloud and fog nodes to user requests under congestion so as to achieve a higher acceptance rate of user requests and minimize their response time. Fog nodes have limited storage and computational power, and hence their performance degrades significantly under a high load of user requests. This paper proposes an efficient resource allocation method in fog computing that decides where (fog or cloud) to process requests, considering the available resources of fog nodes and congestion conditions. According to the experimental results, the proposed method outperforms existing methods in terms of average response time and the percentage of failed requests.
      • Open Access Article

        18 - Energy-Aware Data Gathering in Rechargeable Wireless Sensor Networks Using Particle Swarm Optimization Algorithm
        Vahideh Farahani, Leili Farzinvash, Mina Zolfy Lighvan, Rahim Abri Lighvan
        This paper investigates the problem of data gathering in rechargeable Wireless Sensor Networks (WSNs). The low energy harvesting rate of rechargeable nodes necessitates effective energy management in these networks. Existing schemes did not comprehensively examine the important aspects of energy-aware data gathering, including sleep scheduling and energy-aware clustering and routing; additionally, most of them proposed greedy algorithms with poor performance. As a result, nodes run out of energy intermittently and temporary disconnections occur throughout the network. In this paper, we propose an energy-efficient data gathering algorithm, namely Energy-aware Data Gathering in Rechargeable WSNs (EDGR). The proposed algorithm divides the original problem into three phases, namely sleep scheduling, clustering, and routing, and solves them successively using the particle swarm optimization algorithm. As derived from the simulation results, the EDGR algorithm improves the average and standard deviation of the energy stored in the nodes by 17% and 5.6 times, respectively, compared to previous methods. Also, the packet loss ratio and the energy consumed for delivering data to the sink under this scheme are very small, almost zero.
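A particle swarm optimizer of the kind applied in each EDGR phase can be sketched generically. Here a sphere function stands in for the paper's sleep-scheduling, clustering, and routing objectives, and all hyperparameters are assumptions.

```python
# Canonical PSO loop: each particle tracks a velocity, its personal best (pbest),
# and the swarm-wide best (gbest), and is pulled toward both each iteration.
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function as a stand-in objective.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

For the paper's discrete decisions (which nodes sleep, which are cluster heads), a binary or integer encoding of the position vector would be needed; the update rule itself is unchanged.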
      • Open Access Article

        19 - Improving Security in Cloud Computing Infrastructure Using the Blockchain Protocol
        Mohsen Gerami, Vahid Yazdanian, Siavash Naebasl
        Security in cloud computing is very important: cloud computing security encompasses computer security and network security, and more generally information security. When a processing task is offloaded to the cloud using a virtual machine scheduling algorithm, the virtual machine cannot distinguish a normal mobile user from an attacker, which violates the privacy and security of the transmitted data. Therefore, after determining the offloading strategy, blockchain can be used to secure the information, and the information of each server is encapsulated and offloaded in the form of a block. In this research, a solution combining blockchain and cloud computing is proposed to increase security and efficiency. The proposed solution is implemented and evaluated to assess its efficiency gain compared with other existing solutions.
      • Open Access Article

        20 - TPALA: Two-Phase Adaptive Algorithm based on Learning Automata for Job Scheduling in Cloud Environments
        Abolfazl Esfandi Javad Akbari Torkestani Abbas Karimi Faraneh Zarafshan
        Due to the completely random and dynamic nature of the cloud environment, as well as the high volume of jobs, one of the significant challenges in this environment is proper online job scheduling. Most existing algorithms are based on heuristic and meta-heuristic approaches, which leaves them unable to adapt to the dynamic nature of resources and cloud conditions. In this paper, we present a distributed online algorithm that uses two different learning automata in each scheduler to schedule jobs optimally. In this algorithm, the workload placed on each virtual machine is proportional to its computational capacity and changes over time based on cloud conditions and the submitted jobs. The proposed algorithm uses two separate phases and two different learning automata (LA) to schedule jobs and allocate each job to the appropriate VM; the resulting two-phase adaptive algorithm based on LA is called TPALA. To demonstrate the effectiveness of our method, several scenarios were simulated in CloudSim, measuring main metrics such as makespan, success rate, average waiting time, and degree of imbalance, and comparing them with other existing algorithms. The results show that TPALA performs at least 4.5% better than the closest measured algorithm.
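        A minimal sketch of the learning-automaton building block that such schedulers rely on, assuming a linear reward-inaction (L_R-I) update; the VM environment and its `finish_prob` values are hypothetical, not taken from the paper:

```python
import random

class LearningAutomaton:
    """Linear reward-inaction (L_R-I) automaton over a fixed action set."""
    def __init__(self, n_actions, alpha=0.05, seed=0):
        self.p = [1.0 / n_actions] * n_actions  # action probability vector
        self.alpha = alpha                      # learning rate
        self.rng = random.Random(seed)

    def choose(self):
        """Sample an action according to the current probability vector."""
        r, acc = self.rng.random(), 0.0
        for a, pa in enumerate(self.p):
            acc += pa
            if r <= acc:
                return a
        return len(self.p) - 1

    def reward(self, a):
        """Reinforce action a; on a penalty, L_R-I leaves probabilities unchanged."""
        for i in range(len(self.p)):
            if i == a:
                self.p[i] += self.alpha * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.alpha)

# Hypothetical environment: VM 1 finishes jobs most often, so its choice
# is rewarded most often and its selection probability should grow.
finish_prob = [0.2, 0.9, 0.4]
la = LearningAutomaton(n_actions=3)
for _ in range(2000):
    vm = la.choose()
    if la.rng.random() < finish_prob[vm]:
        la.reward(vm)
best_vm = max(range(3), key=lambda i: la.p[i])
```

        TPALA is described as using two such automata per scheduler (one per phase); this sketch shows only the single-automaton update rule.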
      • Open Access Article

        21 - A Multi-Objective Differential Evolutionary Algorithm-based Approach for Resource Allocation in Cloud Computing Environment
        Saeed Bakhtiari Mahan Khosroshahi
        In recent years, the cloud computing model has received a lot of attention due to its high scalability, reliability, information sharing, and low cost compared to separate machines. In the cloud environment, scheduling and optimal allocation of tasks affect the effective use of system resources. Currently, common scheduling methods in the cloud computing environment rely on traditional approaches such as Min-Min and meta-heuristics such as the ant colony optimization algorithm (ACO). These methods focus on optimizing a single objective and do not consider multiple objectives simultaneously. The main purpose of this research is to consider several objectives (total execution time, service level agreement, and energy consumption) in cloud data centers through scheduling and optimal allocation of tasks. This research uses a multi-objective differential evolution algorithm (DEA), chosen for its simple structure and few adjustable parameters. The proposed method presents a new DEA-based approach to the allocation problem in the cloud, aiming to improve resource efficiency with respect to time, migration, and energy by defining a multi-objective function and employing mutation and crossover vectors. The proposed method has been evaluated in the CloudSim simulator using the workload of more than a thousand virtual machines from PlanetLab. The simulation results show that, compared to the IqrMc, LrMmt, and FA algorithms, the proposed method improves energy consumption by an average of 23%, the number of migrations by an average of 29%, total execution time by an average of 29%, and service level agreement violation (SLAV) by an average of 1%. Consequently, using the proposed approach in cloud centers will lead to better and more appropriate services for their customers in various fields such as education, engineering, manufacturing, and services.
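        As an illustrative sketch of the differential evolution machinery mentioned above (mutation and crossover vectors with greedy selection), here is a minimal DE/rand/1/bin minimizer; the `cost` function is a hypothetical weighted-sum stand-in for the paper's multi-objective function:

```python
import random

def differential_evolution(objective, dim, pop_size=30, iters=200,
                           F=0.8, CR=0.9, lo=-5.0, hi=5.0, seed=1):
    """Minimal DE/rand/1/bin minimizer over [lo, hi]^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation vector from three distinct other individuals.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)   # guarantees at least one mutated gene
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() < CR or j == j_rand:   # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(hi, max(lo, v))     # clamp to bounds
            f = objective(trial)
            if f <= fit[i]:               # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Hypothetical scalarized stand-in for "time" and "energy" objectives.
cost = lambda x: sum(xi ** 2 for xi in x) + 0.5 * sum(abs(xi) for xi in x)
best, val = differential_evolution(cost, dim=4)
```

        A true multi-objective DEA would keep a Pareto front rather than a single weighted sum; the weighted `cost` here only illustrates the mutation/crossover/selection cycle.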
      • Open Access Article

        22 - Fuzzy Multicore Clustering of Big Data in the Hadoop MapReduce Framework
        Seyed Omid Azarkasb Seyed Hossein Khasteh Mostafa  Amiri
        A logical way to account for overlapping clusters is to assign a set of membership degrees to each data point. Fuzzy clustering, due to its reduced partitions and decreased search space, generally incurs lower computational overhead and easily handles ambiguous, noisy, and outlier data; it is therefore considered an advanced clustering method. However, fuzzy clustering methods often struggle with non-linear data relationships. This paper proposes a method that utilizes multicore learning within the Hadoop MapReduce framework to identify linearly inseparable clusters in complex big data structures. The multicore learning model is capable of capturing complex relationships among data, while Hadoop enables us to interact with a logical cluster of processing and data storage nodes instead of interacting with individual operating systems and processors. In summary, the paper presents the modeling of non-linear data relationships using multicore learning, the determination of appropriate values for the fuzzification and feasibility parameters, and an algorithm within the Hadoop MapReduce model. The experiments were conducted on one of the commonly used datasets from the UCI Machine Learning Repository, as well as on a dataset simulated with CloudSim, and satisfactory results were obtained. According to published studies, the UCI Machine Learning Repository is suitable for regression and clustering purposes in analyzing large-scale datasets, while the CloudSim dataset is specifically designed for simulating cloud computing scenarios, calculating time delays, and task scheduling.
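        The membership-degree idea above can be illustrated with a minimal fuzzy c-means loop. This is a plain single-machine sketch, not the paper's multicore MapReduce algorithm; the map/reduce split is only indicated in comments, and the 1-D toy data is hypothetical:

```python
import random

def fuzzy_c_means(points, c=2, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy c-means: each point gets a membership degree per cluster."""
    rng = random.Random(seed)
    n = len(points)
    # Random row-stochastic membership matrix U (n points x c clusters).
    U = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    for _ in range(iters):
        # "Reduce"-style step: centroids as membership-weighted means.
        centers = []
        for k in range(c):
            w = [U[i][k] ** m for i in range(n)]
            centers.append(sum(wi * points[i] for i, wi in enumerate(w)) / sum(w))
        # "Map"-style step: each point updates its memberships independently.
        for i in range(n):
            d = [abs(points[i] - ck) + 1e-12 for ck in centers]
            for k in range(c):
                U[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
    return centers, U

data = [0.0, 0.2, 0.1, 5.0, 5.2, 4.9]   # two well-separated hypothetical groups
centers, U = fuzzy_c_means(data)
```

        The per-point membership update is embarrassingly parallel, which is what makes this family of algorithms a natural fit for a MapReduce mapper; the kernelized (multicore) variant in the paper replaces the plain distance with kernel-induced distances.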
      • Open Access Article

        23 - Improving load balancing in Cloud computing using a rapid SFL algorithm (R-SFLA)
        Kiomars Salimi Mahdi Mollamotalebi
        Nowadays, Cloud computing has many applications due to its various services. On the other hand, due to rapid growth, resource constraints, and final costs, Cloud computing faces several challenges, one of which is load balancing. The purpose of load balancing is to manage the load distribution among the processing nodes so as to make the best use of resources while keeping the response time of users' requests to a minimum. Several methods for load balancing in Cloud computing have been proposed in the literature. The shuffled frog leaping algorithm is a dynamic, evolutionary, nature-inspired method for load balancing. This paper proposes a modified rapid shuffled frog leaping algorithm (R-SFLA) that rapidly converges the defective evolution of frogs. To evaluate the performance of R-SFLA, it is compared to the Shuffled Frog Leaping Algorithm (SFLA) and the Augmented Shuffled Frog Leaping Algorithm (ASFLA) in terms of overall execution cost, makespan, response time, and degree of imbalance. The simulation is performed in CloudSim, and the experimental results indicate that the proposed algorithm acts more efficiently than the other methods with respect to the aforementioned factors.
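        For illustration, a minimal shuffled frog leaping loop of the kind R-SFLA modifies might look like this. It is a sketch of the baseline SFLA scheme, not the paper's algorithm; the `imbalance` objective is a hypothetical stand-in for a load-balancing cost, and the population sizes are arbitrary:

```python
import random

def sfla(objective, dim, n_frogs=20, n_memeplexes=4, memetic_iters=5,
         shuffles=30, lo=-5.0, hi=5.0, seed=0):
    """Minimal shuffled frog leaping: partition frogs into memeplexes,
    locally improve the worst frog of each, then shuffle and repeat."""
    rng = random.Random(seed)
    new_frog = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    frogs = [new_frog() for _ in range(n_frogs)]
    for _ in range(shuffles):
        frogs.sort(key=objective)                         # shuffle: global ranking
        gbest = frogs[0]
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for mem in memeplexes:
            for _ in range(memetic_iters):
                mem.sort(key=objective)
                best, worst = mem[0], mem[-1]
                # Leap the worst frog toward the memeplex best.
                step = [rng.random() * (b - w) for b, w in zip(best, worst)]
                cand = [min(hi, max(lo, w + s)) for w, s in zip(worst, step)]
                if objective(cand) >= objective(worst):   # no gain: leap toward gbest
                    step = [rng.random() * (g - w) for g, w in zip(gbest, worst)]
                    cand = [min(hi, max(lo, w + s)) for w, s in zip(worst, step)]
                if objective(cand) >= objective(worst):   # still no gain: random frog
                    cand = new_frog()
                mem[-1] = cand
        frogs = [f for mem in memeplexes for f in mem]
    return min(frogs, key=objective)

# Hypothetical load-balancing cost: squared deviation from an even per-node load.
imbalance = lambda x: sum((xi - 1.0) ** 2 for xi in x)
best = sfla(imbalance, dim=3)
```

        The "defective evolution" the paper targets is exactly the leap that fails twice and falls back to a random frog; R-SFLA's modification aims to make that step converge faster.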