References
[1] Sharma, Himanshu, Ahteshamul Haque, and Frede Blaabjerg. "Machine Learning in Wireless Sensor Networks for Smart Cities: A Survey." Electronics 10.9 (2021): 1012.
[2] Liu, Longgeng, et al. "An algorithm based on logistic regression with data fusion in wireless sensor networks." EURASIP Journal on Wireless Communications and Networking 2017.1 (2017): 1-9.
[3] Deng, Yulong, et al. "Temporal and spatial nearest neighbor values based missing data imputation in wireless sensor networks." Sensors 21.5 (2021): 1782.
[4] Zuhairy, Ruwaida M., and Mohammed GH Al Zamil. "Energy-efficient load balancing in wireless sensor network: An application of multinomial regression analysis." International Journal of Distributed Sensor Networks 14.3 (2018): 1550147718764641.
[5] Kumar, D. Praveen, Tarachand Amgoth, and Chandra Sekhara Rao Annavarapu. "Machine learning algorithms for wireless sensor networks: A survey." Information Fusion 49 (2019): 1-25.
[6] Ghate, Vasundhara V., and Vaidehi Vijayakumar. "Machine learning for data aggregation in WSN: A survey." International Journal of Pure and Applied Mathematics 118.24 (2018): 1-12.
[7] Ren, Xiaoxing, et al. "Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize." IEEE/CAA Journal of Automatica Sinica 8.8 (2021): 1451-1464.
[8] Doan, Thinh T., Siva Theja Maguluri, and Justin Romberg. "Convergence rates of distributed gradient methods under random quantization: A stochastic approximation approach." IEEE Transactions on Automatic Control (2020).
[9] Zhang, Peng, and Gejun Bao. "An incremental subgradient method on Riemannian manifolds." Journal of Optimization Theory and Applications 176.3 (2018): 711-727.
[10] Berahas, Albert S., Charikleia Iakovidou, and Ermin Wei. "Nested distributed gradient methods with adaptive quantized communication." 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019.
[11] Xu, Xiangxiang, and Shao-Lun Huang. "An information theoretic framework for distributed learning algorithms." 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021.
[12] Perumal, T. Sudarson Rama, V. Muthumanikandan, and S. Mohanalakshmi. "Energy Efficiency Optimization in Clustered Wireless Sensor Networks via Machine Learning Algorithms." Machine Learning and Deep Learning Techniques in Wireless and Mobile Networking Systems. CRC Press, 2021. 59-77.
[13] Kumar, D. Praveen, Tarachand Amgoth, and Chandra Sekhara Rao Annavarapu. "Machine learning algorithms for wireless sensor networks: A survey." Information Fusion 49 (2019): 1-25.
[14] Mohanty, Lipika, et al. "Machine Learning-Based Wireless Sensor Networks." Machine Learning: Theoretical Foundations and Practical Applications. Springer, Singapore, 2021. 109-122.
[15] Pundir, Meena, and Jasminder Kaur Sandhu. "A systematic review of Quality of Service in Wireless Sensor Networks using Machine Learning: Recent trend and future vision." Journal of Network and Computer Applications (2021): 103084.
[16] Antonian, Edward, Gareth Peters, Michael John Chantler, and Hongxuan Yan. "GLS Kernel Regression for Network-Structured Data." August 9, 2021. Available at SSRN: https://ssrn.com/abstract=3901694.
[17] Liu, Longgeng, et al. "An algorithm based on logistic regression with data fusion in wireless sensor networks." EURASIP Journal on Wireless Communications and Networking 2017.1 (2017): 1-9.
[18] Wang, Heyu, Lei Xia, and Chunguang Li. "Distributed online quantile regression over networks with quantized communication." Signal Processing 157 (2019): 141-150.
[19] Wang, Heyu, and Chunguang Li. "Distributed quantile regression over sensor networks." IEEE Transactions on Signal and Information Processing over Networks 4.2 (2017): 338-348.
[20] Danaee, Alireza, Rodrigo C. de Lamare, and Vitor H. Nascimento. "Energy-Efficient Distributed Recursive Least Squares Learning with Coarsely Quantized Signals." 2020 54th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2020.
[21] Danaee, Alireza, Rodrigo C. de Lamare, and Vitor H. Nascimento. "Energy-efficient distributed learning with coarsely quantized signals." IEEE Signal Processing Letters 28 (2021): 329-333.
[22] Hellkvist, Martin, Ayça Özçelikkale, and Anders Ahlén. "Generalization error for linear regression under distributed learning." 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). IEEE, 2020.
[23] Shuman, David I., et al. "Distributed signal processing via Chebyshev polynomial approximation." IEEE Transactions on Signal and Information Processing over Networks 4.4 (2018): 736-751.
[24] M. Rabbat and R. Nowak, "Distributed Optimization in Sensor Networks," in Proceedings of the 3rd international symposium on Information processing in sensor networks, Berkeley, California, USA, (2004), pp. 20-27.
[25] Bertsekas, Dimitri P., "Incremental gradient, subgradient, and proximal methods for convex optimization: a survey," Optimization for Machine Learning, No. 85, pp. 1-38, 2011.
[26] M. Rabbat and R. Nowak, "Quantized Incremental Algorithms for Distributed Optimization," IEEE Journal on Selected Areas in Communications, 23 (4) (2005), pp. 798-808.
[27] S.H. Son, M. Chiang, S. R. Kulkarni, and S. C. Schwartz, "The Value of Clustering in Distributed Estimation for Sensor Networks," in proceedings of International Conference on Wireless Networks, Communications and Mobile Computing, Maui, Hawaii, 2 (2005), pp. 969-974.
[28] P.J. Marandi, N.M. Charkari, "Boosted Incremental Nelder-Mead Simplex Algorithm: Distributed Regression in Wireless Sensor Networks," IFIP Joint Conference on Mobile and Wireless Communications Networks, France, (2008), pp. 199-212.
[29] P.J. Marandi, M. Mansourizadeh, N. M. Charkari, "The Effect of Resampling on Incremental Nelder-Mead Simplex Algorithm: Distributed Regression over Wireless Sensor Network," in Proceedings of the Third International Conference on Wireless Algorithms, Systems, and Applications, LNCS, 5258 (2008), Dallas, Texas, pp. 420-431.
[30] H. Shakibian and N. Moghadam Charkari, "D-PSO for Distributed Regression over Wireless Sensor Networks," Iranian Journal of Electrical and Computer Engineering, Vol. 11, No. 1, pp. 43-50, 2012.
[31] Shakibian, Hadi, and Nasrollah Moghadam Charkari. "In-cluster vector evaluated particle swarm optimization for distributed regression in WSNs." Journal of network and computer applications 42 (2014): 80-91.
[32] Zhao, Jijun, Hao Liu, Zhihua Li, and Wei Li. "Periodic Data Prediction Algorithm in Wireless Sensor Networks." In Advances in Wireless Sensor Networks, pp. 695-701, 2013.
[33] Cheng, Long, et al. "An Indoor Localization Algorithm based on Modified Joint Probabilistic Data Association for Wireless Sensor Network." IEEE Transactions on Industrial Informatics (2020).
[34] Shahbazian, Reza, and Seyed Ali Ghorashi. "Distributed cooperative target detection and localization in decentralized wireless sensor networks." The Journal of Supercomputing 73.4 (2017): 1715-1732.
[35] Zandhessami, Hessam, Mahmood Alborzi, and Mohammadsadegh Khayyatian. "Energy Efficient Routing-Based Clustering Protocol Using Computational Intelligence Algorithms in Sensor-Based IoT." Journal of Information Systems and Telecommunication (JIST) 1.33 (2021): 55.
[36] Daanoune, Ikram, Baghdad Abdennaceur, and Abdelhakim Ballouk. "A comprehensive survey on LEACH-based clustering routing protocols in Wireless Sensor Networks." Ad Hoc Networks (2021): 102409.
[37] Qi, Hong, et al. "Inversion of particle size distribution by spectral extinction technique using the attractive and repulsive particle swarm optimization algorithm." Thermal Science 19.6 (2015): 2151-2160.
[38] Mo, Simin, Jianchao Zeng, and Weibin Xu. "Attractive and repulsive fully informed particle swarm optimization based on the modified fitness model." Soft Computing 20.3 (2016): 863-884.
[39] Ursem, Rasmus K. "Diversity-guided evolutionary algorithms." International Conference on Parallel Problem Solving from Nature. Springer, Berlin, Heidelberg, 2002.
[40] A.P. Engelbrecht, Computational Intelligence: An Introduction, 2nd ed., Wiley, 2007.
[41] Madden, S. "Intel Berkeley Research Lab Data." Intel Corporation, 2004 [2004-06-08]. http://berkeley.intel-research.net/labdata.html.
[42] C. Guestrin, P. Bodik, R. Thibaux, M. Paskin, and S. Madden, "Distributed Regression: An Efficient Framework for Modeling Sensor Network Data," in Proceedings of the Third International Symposium on Information Processing in Sensor Networks, Berkeley, California, USA, (2004), pp. 1-10.
[43] Y. Shi, R.C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the IEEE International Congress on Evolutionary Computation, 3 (1999), pp. 101-106.
[44] Xu, Zhaoyi, Yanjie Guo, and Joseph Homer Saleh. "Multi-objective optimization for sensor placement: An integrated combinatorial approach with reduced order model and Gaussian process." Measurement 187 (2022): 110370.
[45] Premkumar, M., and T. V. P. Sundararajan. "DLDM: Deep learning-based defense mechanism for denial of service attacks in wireless sensor networks." Microprocessors and Microsystems 79 (2020): 103278.
[46] Mohanty, Sachi Nandan, et al. "Deep learning with LSTM based distributed data mining model for energy efficient wireless sensor networks." Physical Communication 40 (2020): 101097.
Journal of Information Systems and Telecommunication, http://jist.acecr.org, ISSN 2322-1437 / EISSN 2345-2773
A Hybrid Approach based on PSO and Boosting Technique for Data Modeling in Sensor Networks
Hadi Shakibian1*, Jalal A. Nasiri2
1. Department of Computer Engineering, Faculty of Engineering, Alzahra University, Tehran, Iran
2. Department of Mathematical Sciences, Ferdowsi University of Mashhad, Mashhad, Iran
Received: 14 Jul 2021 / Revised: 04 Dec 2021 / Accepted: 29 Jan 2022
Abstract
An efficient data aggregation approach in wireless sensor networks (WSNs) is to abstract the network data into a model. In this regard, regression modeling has been addressed in many recent studies. If the limited characteristics of the sensor nodes are set aside, a common regression technique could be employed after transmitting all the network data from the sensor nodes to the fusion center. However, this is neither practical nor efficient. To overcome this issue, several distributed methods have been proposed for WSNs in which the regression problem is formulated as an optimization-based data modeling problem. Although they are more energy efficient than the centralized method, their latency and prediction accuracy need to be improved even further. In this paper, a new approach is proposed based on the particle swarm optimization (PSO) algorithm. Assuming a clustered network, the PSO algorithm is first employed asynchronously to learn the network model of each cluster. In this step, every cluster model is learnt according to the size and data pattern of the cluster. Afterwards, the boosting technique is applied to achieve better accuracy. The experimental results show that the proposed asynchronous distributed PSO brings up to a 48% reduction in energy consumption. Moreover, the boosted model improves the prediction accuracy by about 9% on average.
Keywords: Wireless sensor network; Distributed optimization; Particle swarm optimization; Regression; Boosting.
1- Introduction
In wireless sensor networks (WSNs), storing the massive volume of continuously generated data is expensive due to the limited power supply and storage capacity of the sensor nodes. Moreover, this data is expected to be analyzed in order to extract useful information about the phenomenon of interest. In this regard, regression modeling has been addressed as an efficient approach for abstracting [1], [2] and analyzing the network data [3], [4].
The distributed nature of the data and the limited characteristics of the sensor nodes impose major challenges on performing regression over WSNs. A naive solution is to gather all the network data at the fusion center and obtain the network regressor using a well-known technique [5], [6]. Although high accuracy is achieved, this requires a huge volume of data transmission from the sensor nodes to the fusion center, which makes the solution impractical, especially when the network grows in size.
To overcome both the communication and the computation constraints of the sensor nodes, several learning/optimization algorithms have been proposed.
A distributed sub-gradient algorithm with uncoordinated dynamic step sizes has been proposed for multi-agent convex optimization problems [7]. In this algorithm, each agent can utilize its estimation of the local function value. Theoretical analysis shows that all the agents reach a consensus on the optimal solution. Gradient methods over networks with communication constraints have also been studied in [8], [9], [10].
In [11], the information-theoretic optimality of distributed learning algorithms has been addressed, where each node is given i.i.d. samples and sends an abstracted function of the observed samples to a central node for decision making.
The use of machine learning algorithms in clustered WSNs has been studied by [12] in order to decrease data communications and exploit the features of WSNs. Different applications of machine learning algorithms in the context of WSNs have been recently reviewed by [13], [14], [15].
A kernel regression algorithm has been introduced in [16] to predict a signal defined over the network nodes from a series of regularly sampled data points. To handle large problems, a Laplace approximation is proposed that provides a lower bound on the marginal out-of-sample prediction uncertainty.
A logistic regression fusion rule (LRFR) has been proposed in [17], in which the coefficients of the LRFR are learnt first and then used to make a global decision about the presence or absence of the target.
In [18], a distributed online regression algorithm based on quantized communication has been proposed. A distributed quantile regression algorithm has also been proposed in [19], where each node estimates the global parameter vector of a linear regression model by employing its local data as well as collaborating with the other nodes. Since numerous natural and artificial systems are sparse, the authors also introduce a sparsity-aware distributed quantile regression algorithm to exploit this property and consequently improve the performance of the method.
An energy-efficient distributed learning framework has been proposed using quantized signals in the context of IoT networks [20], [21]. It is a recursive least-squares algorithm that learns the parameters from low-bit quantized signals and requires low computational cost.
Some distributed learning algorithms have also been suggested based on linear and polynomial regression models [22], [23].
On the other hand, several distributed regression models have been proposed for WSNs in which the learning problem is formulated as an optimization task [24]. To solve it, the Incremental Gradient (IG) algorithm has been proposed, in which the parameter to be estimated is circulated through the network. Along the way, each sensor node adjusts the parameter by performing a sub-gradient step [25] based on its own local data set. Increasing the number of network cycles improves the accuracy. In [26], IG has been extended with a quantization technique, which can be used under low bandwidth to reduce the number of transmitted bits. In [27], a cluster-based version of IG has been developed, which brings better energy efficiency and robustness. The Incremental Nelder-Mead Simplex (IS) algorithm has been proposed in [28] and [29] with the addition of boosting and re-sampling techniques, respectively, yielding better accuracy and convergence rates.
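For illustration, the following minimal Python sketch mimics the estimate-passing mechanism of IG on synthetic data, assuming a linear model and a squared-error loss; the step size, data sizes, and function names are illustrative choices rather than the settings of [24]-[27].

import numpy as np

def incremental_gradient_cycle(local_datasets, beta, step_size=0.01):
    """One network cycle: the estimate beta is passed node to node,
    and each node takes a gradient step on its own local data only."""
    for X_local, y_local in local_datasets:          # visit the nodes along the cycle
        residual = X_local @ beta - y_local          # local prediction error
        grad = 2.0 * X_local.T @ residual / len(y_local)
        beta = beta - step_size * grad               # local adjustment of the estimate
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_beta = np.array([1.5, -2.0, 0.5])
    nodes = []                                       # synthetic local data sets, one per node
    for _ in range(20):
        X = rng.normal(size=(30, 3))
        y = X @ true_beta + 0.1 * rng.normal(size=30)
        nodes.append((X, y))
    beta = np.zeros(3)
    for cycle in range(50):                          # more cycles -> better accuracy
        beta = incremental_gradient_cycle(nodes, beta)
    print("estimated parameters:", np.round(beta, 3))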
In [30], a new evolutionary approach based on the PSO algorithm, denoted Distributed PSO (DP), has been proposed. In DP, the network is partitioned into a number of clusters, and a swarm of particles is dedicated to each of them. The regressor of each cluster is then trained by running the PSO algorithm distributively within the cluster. The final model is obtained after the fusion center combines the cluster models. This approach obtains a model close to the centralized case and decreases the latency significantly. However, its synchronous operation is in contrast with the autonomous nature of WSNs. In addition, different clusters have their own cluster sizes and data patterns, which are not taken into account by DP.
IVeP [31] is another PSO-based distributed approach that learns the network regression model using a multi-objective optimization technique. It employs the VEPSO model to perform the optimization task through inter- and intra-cluster cycles. The results show high prediction accuracy with moderate energy consumption.
In this paper, a modified version of the DP algorithm is proposed that simultaneously decreases the communication overhead and improves the final prediction accuracy. First, Asynchronous DP (ADP) is proposed by defining a diversity threshold for the particles within each cluster swarm. As a result, each cluster regressor is learned regardless of the status of the other clusters. By defining diversity thresholds, the number of transmissions is reduced; the final accuracy, however, might decrease. In this regard, Boosted ADP (BADP) is introduced, which boosts the cluster regressors and keeps the overall accuracy high. The proposed algorithms have been compared with IG- and IS-based algorithms as well as the IVeP and centralized approaches in terms of accuracy, latency, and communication cost. The results show that ADP and BADP bring the lowest latency. Moreover, thanks to the boosting technique, BADP learns a model closer to the centralized approach while the communication cost remains considerably acceptable. The contributions of this paper are:
· The Asynchronous DP algorithm is proposed, in which in-cluster optimization is performed asynchronously based on the size and data pattern of each cluster. This is in accordance with the autonomous operation of sensor networks and brings more energy efficiency.
· The model obtained by ADP is boosted to improve the overall accuracy even further. Accordingly, the Boosted ADP algorithm is proposed, which obtains a highly accurate network model, closer to the centralized approach, with quite acceptable communication requirements.
The rest of this paper is organized as follows. The distributed regression problem is formally stated in section 2. The proposed approach is introduced in section 3. Evaluation and experimental results are discussed in section 4, and the last section presents concluding remarks.
2- Distributed Regression in WSNs
Consider a sensor network with $n$ nodes, each taking $m$ measurements, spatially distributed in an area. Every sensor node is expected to capture the phenomenon of interest at pre-defined time intervals [32]. Each measurement is stored as a record:

$r = (x, y, t, v)$

in which $(x, y)$ denotes the node's location, $t$ is the epoch number, and $v$ is the captured measurement. Now, considering the tuples $(x, y, t)$ as the feature space $X$ and the measurements $v$ as the labels, the aim of parametric regression is to learn the coefficients $\boldsymbol{\beta}$ of the mapping function $f(x, y, t; \boldsymbol{\beta})$ such that the RMS error over the network data $\mathcal{D}$ is minimized:

$E(\boldsymbol{\beta}) = \sqrt{\frac{1}{|\mathcal{D}|}\sum_{(x, y, t, v) \in \mathcal{D}} \big(f(x, y, t; \boldsymbol{\beta}) - v\big)^{2}} \qquad (1)$
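As a concrete illustration of the objective in Eq. (1), the following Python sketch evaluates the RMS error of a parametric model over synthetic network records; the linear-in-parameters basis and all numeric values are illustrative assumptions, since the mapping function is left generic here.

import numpy as np

def rms_error(beta, records, basis):
    """Eq. (1): root-mean-square error of the model f(x, y, t; beta)
    over all (x, y, t, v) records collected in the network."""
    errors = [basis(x, y, t) @ beta - v for (x, y, t, v) in records]
    return float(np.sqrt(np.mean(np.square(errors))))

def linear_basis(x, y, t):
    # an illustrative linear-in-parameters basis over location and epoch
    return np.array([1.0, x, y, t])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_beta = np.array([20.0, 0.3, -0.2, 0.05])    # e.g. a temperature field
    records = []
    for _ in range(200):
        x, y, t = rng.uniform(0, 10), rng.uniform(0, 10), float(rng.integers(0, 100))
        v = linear_basis(x, y, t) @ true_beta + 0.2 * rng.normal()
        records.append((x, y, t, v))
    print("RMSE at the true parameters:", rms_error(true_beta, records, linear_basis))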
Throughout this paper, the following assumptions hold:
· The learning process starts by disseminating a query from the fusion center to the cluster heads.
· Every sensor node can localize itself by executing a well-known localization algorithm [33], [34].
· Since clustering is not the subject of this paper, it is assumed that the network has been partitioned into $K$ clusters via a well-known clustering algorithm [35], [36], designating a cluster head $CH_j$ for each cluster, $j = 1, \ldots, K$.
· The member nodes belonging to cluster $j$ are denoted as $\{s_1^j, \ldots, s_{n_j}^j\}$, where $n_j$ is the size of the cluster.
· The local data set of $s_i^j$, the data of cluster $j$, and the global network data are denoted as $\mathcal{D}_i^j$, $\mathcal{D}_j$, and $\mathcal{D}$, respectively.
· $D$ denotes the size of the parameter vector under estimation (the problem dimensionality).
Table 1 shows the Nomenclature used in this study.
Table 1. Nomenclature used in this study.
Symbol | Definition
$n$ | Number of sensor nodes
$m$ | Number of sensor measurements per node
$K$ | Number of clusters
$CH_j$ | Cluster head $j$
$s_i^j$ | Sensor node $i$ in cluster $j$
$\mathcal{D}_i^j$ | The local data of $s_i^j$
$\mathcal{D}_j$ | The data of cluster $j$
$\mathcal{D}$ | The global (network) data
$f_j$ | The regression model of cluster $j$
$f$ | The network regression model
$X$ | The feature space
$S$ | The swarm size
$D$ | The problem dimensionality
$div_j$ | The diversity of swarm $j$
$x_{id}$ | The dimension $d$ of particle $i$
$w_i^j$ | The weight of the local regressor of $s_i^j$
Algorithm 1: Distributed PSO (DP) [30]
Fusion center disseminates the desired model
for each cluster $j$ do
    data_view_unification()
    parameters_initialization()
    for migration step $= 1, \ldots, M$ do
        for each cluster node $i$ do
            runs a local PSO
            sends its best particle to $CH_j$
        end for
        $CH_j$ sends the best of the best particles to its members
    end for
    $CH_j$ sends $f_j$ and its RMSE to the fusion center
end for
The fusion center obtains $f$ by weighted averaging
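To make the flow of Algorithm 1 concrete, the following self-contained Python sketch simulates the DP idea on synthetic clustered data. The swarm size, PSO coefficients, number of migration steps, and the inverse-RMSE weighting used by the fusion center are illustrative assumptions rather than the settings of [30].

import numpy as np

rng = np.random.default_rng(2)
DIM = 3                                    # size of the parameter vector beta

def rmse(beta, X, y):
    return float(np.sqrt(np.mean((X @ beta - y) ** 2)))

def local_pso(X, y, swarm, vel, pbest, gbest, iters=20, w=0.7, c1=1.5, c2=1.5):
    """A plain PSO run on one node's local data; the coefficients are illustrative."""
    pbest_fit = np.array([rmse(p, X, y) for p in pbest])
    for _ in range(iters):
        r1, r2 = rng.random(swarm.shape), rng.random(swarm.shape)
        vel[:] = w * vel + c1 * r1 * (pbest - swarm) + c2 * r2 * (gbest - swarm)
        swarm += vel
        fit = np.array([rmse(p, X, y) for p in swarm])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = swarm[improved], fit[improved]
    k = int(np.argmin(pbest_fit))
    return pbest[k].copy(), float(pbest_fit[k])

# synthetic clustered network data: 3 clusters, 4 member nodes each
true_beta = np.array([1.0, -0.5, 2.0])
clusters = []
for _ in range(3):
    nodes = []
    for _ in range(4):
        X = rng.normal(size=(25, DIM))
        nodes.append((X, X @ true_beta + 0.1 * rng.normal(size=25)))
    clusters.append(nodes)

# DP main loop: one swarm per cluster, migrations coordinated by the cluster head
cluster_models, cluster_errors = [], []
for nodes in clusters:
    # every member node keeps its own swarm state between migrations
    states = [(rng.normal(size=(10, DIM)), np.zeros((10, DIM))) for _ in nodes]
    states = [(s, v, s.copy()) for s, v in states]       # (swarm, velocity, pbest)
    gbest = states[0][0][0].copy()
    for _ in range(5):                                   # migration steps
        bests = []
        for (X, y), (swarm, vel, pbest) in zip(nodes, states):
            b, e = local_pso(X, y, swarm, vel, pbest, gbest)
            bests.append((e, b))                         # best particle sent to the CH
        gbest = min(bests, key=lambda t: t[0])[1]        # CH broadcasts the best of bests
    Xc = np.vstack([X for X, _ in nodes])
    yc = np.hstack([y for _, y in nodes])
    cluster_models.append(gbest)
    cluster_errors.append(rmse(gbest, Xc, yc))

# fusion center: weighted averaging of the cluster models
# (inverse-RMSE weights are an assumption, not taken from [30])
weights = 1.0 / (np.array(cluster_errors) + 1e-12)
network_model = np.average(cluster_models, axis=0, weights=weights)
print("estimated network model:", np.round(network_model, 3))
print("true parameters:        ", true_beta)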
3-1- Asynchronous DP (ADP)
The major drawback of DP is that the migration steps must be synchronized across all clusters. In other words, the particles of one cluster might converge before the final migration, while more migration steps might still be required in another cluster, because different clusters have different data patterns and cluster sizes. By eliminating the extra migrations inside the converged clusters, the energy consumption is reduced. Furthermore, synchronized clusters are in contrast with the autonomous nature of WSNs. To resolve these issues, asynchronous DP (ADP) is introduced in this section.
Attractive and Repulsive PSO (ARPSO) is a variant of the PSO model in which the particles can switch between two phases [37], [38]. This approach is based on the diversity-guided evolutionary algorithm (DGEA) developed by [39]. In ARPSO, the particles use the diversity of the swarm to alternate between an attraction phase and a repulsion phase, making a proper exploitation-exploration tradeoff. Accordingly, the swarm diversity is defined as:
$div = \frac{1}{S}\sum_{i=1}^{S}\sqrt{\sum_{d=1}^{D}\big(x_{id} - \bar{x}_d\big)^{2}} \qquad (2)$

where $S$ is the swarm size, $D$ is the dimensionality of the problem, and $\bar{x}_d$ is the average of dimension $d$ over all the particles, i.e.

$\bar{x}_d = \frac{1}{S}\sum_{i=1}^{S} x_{id} \qquad (3)$
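A minimal Python sketch of this diversity measure (Eqs. 2-3) is given below; the sample swarms are synthetic and serve only to show how the measure shrinks as the particles converge.

import numpy as np

def swarm_diversity(positions):
    """ARPSO-style diversity (Eqs. 2-3): mean Euclidean distance of the
    particles from the centroid of the swarm.
    positions: array of shape (S, D) -- S particles, D dimensions."""
    centroid = positions.mean(axis=0)                 # Eq. (3): per-dimension average
    return float(np.mean(np.linalg.norm(positions - centroid, axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    spread = rng.normal(scale=1.0, size=(10, 4))      # scattered swarm
    converged = rng.normal(scale=0.01, size=(10, 4))  # nearly collapsed swarm
    print("diverse swarm:  ", round(swarm_diversity(spread), 4))
    print("converged swarm:", round(swarm_diversity(converged), 4))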
Although ARPSO was originally applied to a single swarm, nothing prevents its application to sub-swarms [40]. The diversity equation of ARPSO has been adopted in ADP for measuring the diversity of the cluster swarms. In cluster $j$, the diversity is calculated using only the best particles received from the cluster nodes:

$div_j = \frac{1}{n_j}\sum_{i=1}^{n_j}\sqrt{\sum_{d=1}^{D}\big(b_{id}^j - \bar{b}_d^j\big)^{2}} \qquad (4)$

where $b_{id}^j$ denotes dimension $d$ of the best particle received from node $s_i^j$ and:

$\bar{b}_d^j = \frac{1}{n_j}\sum_{i=1}^{n_j} b_{id}^j \qquad (5)$
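As an illustration, a cluster head could evaluate this measure on the best particles collected from its members and compare it against a threshold, anticipating the stopping rule described next; the particle values and the threshold in this sketch are arbitrary.

import numpy as np

def cluster_converged(best_particles, threshold):
    """ADP stopping check at a cluster head: compute the diversity of the
    best particles received from the member nodes (Eqs. 4-5) and report
    whether the in-cluster optimization can stop."""
    best_particles = np.asarray(best_particles)       # shape (n_j, D)
    centroid = best_particles.mean(axis=0)            # Eq. (5)
    div_j = float(np.mean(np.linalg.norm(best_particles - centroid, axis=1)))
    return div_j < threshold, div_j

if __name__ == "__main__":
    early = [[0.9, -0.4, 2.3], [1.4, -0.8, 1.6], [0.2, 0.1, 2.9]]       # spread out
    late = [[1.01, -0.49, 2.0], [0.99, -0.51, 2.02], [1.0, -0.5, 1.99]]  # converged
    for label, bests in (("early migrations", early), ("late migrations", late)):
        stop, div_j = cluster_converged(bests, threshold=0.05)
        print(f"{label}: diversity={div_j:.3f}, stop={stop}")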
If the diversity (Eq. 4) falls below a threshold, the in-cluster optimization is stopped, and the cluster regressor is transmitted to the fusion center along with its RMS error. The final model is obtained by the fusion center in the same way as in the DP algorithm. The steps of ADP are shown in Algorithm 2.
3-2- Boosted ADP (BADP)
By defining smaller thresholds, the quality of the cluster models in ADP is expected to increase; however, this brings more communication cost. In this regard, in order to keep both energy efficiency and high accuracy, a boosting technique inspired by [28] is applied on top of ADP. In the Boosted ADP (BADP) algorithm, firstly, the cluster regressors are obtained using a diversity threshold, as explained in ADP. Then, each cluster model is boosted before being transmitted to the fusion center. To do this, within cluster $j$, the cluster head broadcasts the final obtained regressor and the size of the