Neural Load Disaggregation

Non-intrusive load monitoring (NILM) techniques are central to achieving energy sustainability goals by identifying operating appliances in the residential and industrial sectors, potentially leading to increased energy savings. NILM has received significant attention, reflected by the number of contributions and systematic reviews published yearly.

  • load disaggregation
  • neural NILM
  • federated learning

1. Introduction

Global energy demand is rising quickly, which in turn makes the need for electric energy rise even faster, especially in household setups. Current studies reveal that the most crucial element in resolving energy issues is the intelligent and cost-effective use of electricity as the primary source of energy [1]. This, in turn, raises the need for systems that recommend best practices and actions to use energy in homes, workplaces, and buildings more efficiently [2][3][4]. To recommend positive actions to users and help them adopt a more efficient energy consumption behavior, it is essential first to capture their energy footprint and analyze their behavior concerning the use of appliances [5][6]. The analysis of energy utilization can help in this regard. A viable solution is to use smart meters and sensors to record the energy consumption of each appliance, potentially combined with smart data analytics to visualize energy consumption habits [7]. Nonetheless, a more affordable solution is to use only a single meter and non-intrusive load monitoring (NILM) techniques [8][9] to identify the consumption of each appliance from the aggregate measurements. NILM techniques thus offer the possibility to determine which appliances are used in a household at any moment and the corresponding amount of energy consumed [10]. Therefore, these approaches can be leveraged by different services such as activity monitoring [11] and the detection of defective appliances [12].
Several algorithms have been suggested to address the NILM problem [13]. Nonetheless, deep neural networks have received significant attention since their first introduction [14]. These models quickly became the main research stream in NILM scholarship, mainly encouraged by the availability of several real and synthetic energy datasets (e.g., REFIT [15], SynD [16]) that enable training and testing these models under different scenarios. Many of these approaches were developed to evaluate the advantages and drawbacks of different deep learning concepts on the energy disaggregation task and achieved significant performance improvements.

2. Data Engineering for NILM

The emergence of ML algorithms in NILM scholarship highlighted the importance of data engineering to enhance the disaggregation performance for different appliances. Thus, this aspect received particular attention from recent systematic reviews. The current manuscript groups three main data processing steps under the term data engineering: data preprocessing, feature extraction, and postprocessing techniques. Nonetheless, feature extraction received relatively more attention from the selected set of reviews than the preprocessing and postprocessing techniques adopted in different contributions. The preprocessing techniques were covered in two reviews, mainly [17][18]. The authors of [17] highlight two main techniques considered mandatory for the majority of algorithms: (i) handling sampling rates and missing data, and (ii) balancing. The first technique relates to the quality of the datasets during training and is leveraged to address potential technical problems that may occur in real setups (i.e., hardware and communication issues). The second technique balances the ON states/events of each appliance against the OFF states/events, a problem mainly caused by residential appliances being OFF most of the time. In addition to the previous two techniques, the authors of [18] provided an overview of data augmentation techniques adopted mainly to address underrepresented classes. An overview of the types of features in NILM was suggested in five different reviews, mainly [9][19][20][21]. A consensus between all these reviews can be identified, where three types of features were highlighted: steady-state features, transient features, and external/nontraditional features. Alternatively, the reviews presented in [17][22] provide a classification of NILM features based on the required sampling frequency, offering a clear distinction between low-frequency and high-frequency features, as follows (a minimal sketch of the preprocessing steps appears after the list below):
  • High-frequency sampling: This approach involves collecting data at a high rate, such as one to several times per second [22]. This can provide a high level of detail and resolution, leading to improved accuracy. The majority of transient features require high sampling rates.
  • Low-frequency sampling: This approach involves collecting data at a lower rate, such as at a rate of once per minute or once per hour. This can be less resource-intensive but may also result in a lower level of detail and accuracy.
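To illustrate the preprocessing steps highlighted above, the following is a minimal sketch (not taken from any cited toolkit) that resamples a power signal to a uniform rate, forward-fills only short gaps, and balances ON/OFF windows for a target appliance; the 6-second rate, 3-minute gap limit, 100-sample window, and 50 W ON threshold are illustrative assumptions.

```python
# Minimal preprocessing sketch: resampling/missing-data handling and ON/OFF
# balancing. Assumptions: a pandas Series of power readings indexed by
# timestamp; all thresholds and window sizes below are illustrative.
import numpy as np
import pandas as pd

def resample_and_fill(power: pd.Series, rate: str = "6s", max_gap: str = "3min") -> pd.Series:
    """Resample to a uniform rate and forward-fill only short gaps."""
    limit = int(pd.Timedelta(max_gap) / pd.Timedelta(rate))
    return power.resample(rate).mean().ffill(limit=limit)

def balance_windows(aggregate: np.ndarray, appliance: np.ndarray, window: int = 100,
                    on_threshold: float = 50.0, off_ratio: float = 1.0, seed: int = 0):
    """Keep all windows where the appliance turns ON and subsample the OFF windows."""
    rng = np.random.default_rng(seed)
    n = len(aggregate) // window
    X = aggregate[: n * window].reshape(n, window)
    Y = appliance[: n * window].reshape(n, window)
    is_on = Y.max(axis=1) > on_threshold
    on_idx, off_idx = np.flatnonzero(is_on), np.flatnonzero(~is_on)
    keep_off = rng.choice(off_idx, size=min(len(off_idx), int(off_ratio * len(on_idx))),
                          replace=False)
    keep = np.sort(np.concatenate([on_idx, keep_off]))
    return X[keep], Y[keep]
```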
Considering post-processing techniques, only the reviews presented in [18][21] provided an overview of existing approaches for NILM algorithms. One of the main findings of the quantitative analysis provided by the first review (i.e., [18]) was the enhancement that can be achieved, with improvements of 28% to 54% recorded in related work. Consequently, it can be concluded that postprocessing techniques are a key factor in improving existing algorithms. It was widely acknowledged in all the reviews that ML and AI models have been the most prominent algorithms in NILM scholarship in recent years. Consequently, data engineering techniques are of enormous importance to future NILM developments.
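A hedged sketch of common post-processing heuristics is given below: negative power predictions are clipped to zero and ON segments shorter than a minimum duration are suppressed. The 10 W threshold and 5-sample minimum are illustrative assumptions, not values reported in the cited reviews.

```python
# Post-processing sketch: clip negative predictions and remove implausibly
# short ON segments (all thresholds are illustrative assumptions).
import numpy as np

def postprocess(pred: np.ndarray, on_threshold: float = 10.0, min_on: int = 5) -> np.ndarray:
    out = np.clip(pred, 0.0, None)
    on = out > on_threshold
    start = None
    for i, flag in enumerate(np.append(on, False)):  # sentinel closes a trailing segment
        if flag and start is None:
            start = i                                # a new ON segment begins
        elif not flag and start is not None:
            if i - start < min_on:                   # too short to be a real activation
                out[start:i] = 0.0
            start = None
    return out
```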

3. NILM Algorithms, Comparison, and Evaluation Setups

The non-intrusive monitoring of the operation and energy consumption of appliances, especially in household setups that consist of a large variety of loads and specific usage patterns, has been recognized as an essential task for more than three decades, beginning with the seminal work of Hart that defined the task [8]. The first group of solutions was mostly based on combinatorial optimization techniques, which assumed that the total load was the result of a combination of appliances (with known loads) operating in different states (including not operating at all) and tried to find the combination of appliances and states that best matches the overall load measurement. Taking this one step further, hidden Markov models (HMMs) attempted to model the task using a probabilistic approach concerning the appliances that operate at every moment and the state they are in. In the last decade, technological advancements in neural networks and the underlying infrastructures that support their operation, as well as the abundance of training data, gave rise to ML approaches for NILM and mainly to neural NILM, which demonstrated state-of-the-art performance under a variety of training conditions (e.g., high sampling rates, sufficient computational capacity). The problem of non-intrusive monitoring of appliances’ load based on the disaggregation of the measurements from a single monitoring device is usually approached in the literature by breaking it into smaller tasks. Given a known inventory of appliances for a household, these tasks comprise (a) the detection of different states for each appliance, (b) the extraction of signatures per state and appliance, and (c) the classification of each measurement to the most promising combination of appliances’ states [23]. Instead of monitoring the operation of each appliance on a second-by-second basis, some NILM techniques simply identify state-change events and consequently record the start and end time of an appliance usage and the total energy consumed [24]. Alternatively, neural NILM models provide a point-to-point solution for each appliance. Convolutional neural networks (CNNs) can be employed to detect state-change events. As suggested in [24], a current sequence of length L² is transformed into an image of L×L pixels and fed to a CNN, which is then trained to identify appliances, initially on a single-load task. This task allows distinguishing between appliances when a single appliance is on at each moment. This is taken one step further by establishing a multi-load identification task, in which the model is trained to distinguish between all possible load combinations. The main restriction of such approaches is that the number of appliances in a household can be large; consequently, the respective number of combinations that must be identified at any moment becomes huge. Energy measurement data are usually considered to be in the form of time series or sequences. Consequently, DNN architectures that capture the temporal semantics of the input have also been employed. More specifically, recurrent neural networks (RNNs) have been used in [25] as an alternative to combinatorial optimization. RNNs successfully reconstruct the appliance signatures from the aggregated measurements and can perfectly fit appliances they have already been trained on. However, they struggle to generalize to unseen appliances or power states and require vast amounts of data and a lot of computational power to be trained.
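To make the early combinatorial view concrete, the sketch below enumerates the state combinations of a small, made-up appliance inventory and keeps the one whose total best matches the aggregate reading; the appliance names and power levels are illustrative assumptions.

```python
# Illustrative combinatorial-optimization baseline in the spirit of Hart's
# formulation [8]; the inventory and the 1520 W reading are made-up examples.
from itertools import product

APPLIANCE_STATES = {            # watts drawn in each state (state 0 = OFF)
    "fridge": [0.0, 90.0],
    "kettle": [0.0, 2000.0],
    "washer": [0.0, 150.0, 1400.0],
}

def disaggregate(aggregate_w: float) -> dict:
    """Return the appliance/state assignment whose total best matches the reading."""
    names = list(APPLIANCE_STATES)
    best, best_err = None, float("inf")
    for combo in product(*(APPLIANCE_STATES[n] for n in names)):
        err = abs(aggregate_w - sum(combo))
        if err < best_err:
            best, best_err = dict(zip(names, combo)), err
    return best

print(disaggregate(1520.0))     # {'fridge': 90.0, 'kettle': 0.0, 'washer': 1400.0}
```

The exhaustive search grows exponentially with the number of appliances and states, which is precisely the limitation that motivated the probabilistic and neural approaches discussed in this section.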
In an attempt to improve the generalization of RNNs, the authors in [26] employ gated recurrent units (GRUs) and show that they outperform the RNN baseline. In the same direction, the authors in [27] suggest using LSTM-RNNs to better tackle the vanishing gradient problem whilst learning the long-term patterns that constitute the appliances’ signatures in the multi-state and multi-appliance setup. Autoencoders (AEs) represent another architecture commonly used to extract more coherent representations of the input data. As such, they can be used to extract the features that compose the signature of the various appliances. They are composed of encoding and decoding layers, and at training time, they learn to optimize the output so that it closely resembles (if not matches) the input. After training, the encoder is used to obtain a representation of the input in a different dimension. A stochastic variation of autoencoders is the denoising autoencoder (dAE), which introduces noise to the input so that the autoencoder does not learn the identity function (i.e., f(x) = x) during training. Consequently, the energy disaggregation task can be approached as a denoising problem, utilizing techniques that can transfer a noisy overall consumption from multiple appliances to a “clean” consumption of each individual appliance, using as input either active, reactive, or apparent power, current, voltage, or any combination of them. Denoising AEs employ a 1-D convolutional layer in the encoder part to feed the input measurements in segments (windows of a few seconds) and another 1-D convolutional layer in the decoder, whose size depends on the size of the appliance activations [28]. They can be trained using synthetic datasets that combine the measurements of various appliances and aim to reconstruct each appliance’s signature in the output. The authors in [29] combined dAEs with RNNs to bring together the merits of ANNs and HMM-based methods. Using dAEs, they obtain the signatures of the appliances, and by feeding them to an LSTM, they can identify the most promising combination of appliances (and modes) that corresponds to the aggregated consumption at any moment. The review presented in [18] on DNN approaches for low-frequency NILM begins with the increased requirements for processing high-frequency NILM data and continues with the evaluation of various NN-based techniques that combine CNNs with LSTMs, GRUs, and other RNN variations, or even with generative adversarial networks (GANs) and AEs (denoising or variational autoencoders), in an attempt to improve the classification accuracy of collective appliance signals. The main challenge for the different algorithms relates to the overall performance, which is usually affected by the dataset used, the sampling frequencies, the input features, the metrics used for evaluation, etc. The choice of the best parameters for all the above can affect the final performance as much as the architecture. According to [30], a best practice for developing DNN models is the automation of hyperparameter tuning and of selecting the appropriate architecture. Using toolkits that aggregate multiple alternative architectures allows finding the best solution for each NILM setup.
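Returning to the denoising autoencoders described above, a minimal 1-D convolutional sketch in PyTorch could look as follows; the 599-sample window and the layer sizes are illustrative assumptions, not the exact architecture of [28].

```python
# Minimal 1-D convolutional denoising-autoencoder sketch for window-based
# disaggregation (layer sizes and window length are illustrative assumptions).
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, window: int = 599):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding="same"), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * window, 128), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 8 * window), nn.ReLU(),
            nn.Unflatten(1, (8, window)),
            nn.Conv1d(8, 1, kernel_size=5, padding="same"),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, window) aggregate power; output: the same shape,
        # reconstructing a single appliance's consumption.
        return self.decoder(self.encoder(x))

model = DenoisingAE()
agg = torch.randn(2, 1, 599)    # stand-in for normalized aggregate windows
print(model(agg).shape)         # torch.Size([2, 1, 599])
```

Training such a model would pair (possibly synthetic) aggregate windows with the target appliance's ground-truth consumption and minimize, for example, a mean-squared-error loss.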
The evaluation of NILM algorithms is generally performed using widely acknowledged ML metrics and NILM datasets. Nonetheless, some evaluation metrics dedicated only to NILM models can also be identified [31], though they have received little attention in recent NILM reviews since they are less commonly used. Despite their infrequent use, these metrics could provide a better summary of disaggregation results since they focus on the NILM problem by design. NILM datasets also received significant attention from existing reviews, where the sampling rate and the data quality remain the main concerns. Furthermore, NILM toolkits are an important part of the evaluation as they improve research efficiency. This aspect was only covered in two reviews [21][32], revealing that available NILM toolkits emphasize the algorithms without considering the available hardware and network infrastructure, which is critical for the real-time monitoring of appliances. In this direction, lightweight models [33] that combine CNNs for learning features and simple classifiers to detect appliances seem to be promising solutions. Another solution for scalability is using FL approaches [34], which move the processing load from a centralized to a decentralized setting, taking advantage of several low-processing-power devices to solve the same task. Federated NILM solutions can also support privacy since data are not shared across nodes or with a centralized server [35], but they also open new challenges for researchers, which are discussed in more detail in the following section.
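Before turning to federated approaches, the evaluation metrics mentioned above can be made concrete. The sketch below computes a per-appliance mean absolute error, a normalized disaggregation error, and an ON/OFF F1-score obtained by thresholding power; the 15 W threshold is an illustrative assumption, not a value prescribed by the cited reviews.

```python
# Sketch of metrics commonly reported in NILM evaluations (threshold is illustrative).
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error in watts."""
    return float(np.mean(np.abs(y_true - y_pred)))

def nde(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Normalized disaggregation error: squared error relative to the true signal's energy."""
    return float(np.sum((y_true - y_pred) ** 2) / np.sum(y_true ** 2))

def on_off_f1(y_true: np.ndarray, y_pred: np.ndarray, threshold: float = 15.0) -> float:
    """F1-score on ON/OFF states obtained by thresholding the power signals."""
    t, p = y_true > threshold, y_pred > threshold
    tp, fp, fn = np.sum(t & p), np.sum(~t & p), np.sum(t & ~p)
    return float(2 * tp / (2 * tp + fp + fn)) if tp else 0.0
```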

4. Federated NILM

FL [36][37][38], also referred to as collaborative learning, is a learning paradigm that Google introduced in 2017 to protect the privacy of its clients. Following this learning paradigm, the model is sent to the client rather than the data being uploaded to a cloud server. The process starts at a central server responsible for initializing the model’s weights and sharing them with the clients. Upon reception of the global model, each client executes a training task using its local data for a number of iterations and sends the new weights of the model back to the central server. Once the central server has received the local models, it aggregates them to obtain an updated version of the global model. The process is repeated for several rounds until convergence is achieved. The most popular aggregation algorithm is FedAvg [39], which relies on calculating the average of the weights of the local models as an aggregation mechanism; a weighted average can be used when the size of the local datasets differs across the clients participating in the training. Several variants of this scheme exist in the literature, considering different aspects [37]. For example, peer-to-peer FL enables direct communication between clients and eliminates the central node [36]. More precisely, each client broadcasts its model to the other clients contributing to the training round. Considering this variant of FL, the goal is to achieve a fully decentralized training process without the need for a central server, which is considered a single point of failure. Other variants of FL also exist but remain out of the scope of the current manuscript. The upgrade of the electrical grid in many countries around the globe, with advanced metering infrastructure and edge devices, offers the possibility of adopting the FL paradigm for efficient grid management. It has been extensively adopted for load forecasting (e.g., [40]) and power generation prediction for renewable energies (e.g., [41]). Nonetheless, only a handful of contributions have explored the adoption of this learning paradigm in NILM scholarship: ten contributions for residential load disaggregation, one for solar energy disaggregation, and only one investigating security aspects of FL in smart grids with respect to load disaggregation. An FL framework for NILM was suggested in [42], where transfer learning was used between different domains. The goal of the contribution was to protect consumers’ privacy and overcome the problem of non-identically distributed data. Three public datasets were considered during the evaluation setup, where the main focus was to establish a comparison with centralized load disaggregation schemes. The results showed high potential for the suggested FL approach. Nonetheless, transfer learning from one domain to another demonstrated poor results and showed that fine-tuning is required. Despite the extensive evaluation of the disaggregation performance, the previous study provided no analysis of the communication cost and model efficiency, and little attention was given to the hardware requirements of the edge devices. These limitations were also acknowledged in [43] and highlighted as a future direction. Furthermore, the authors stressed the need to upgrade NILM toolkits with federated/decentralized trainers, enabling further research in this respect.
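Going back to the aggregation step at the core of the FL paradigm described above, a minimal FedAvg-style sketch could look as follows; representing client models as dictionaries of NumPy arrays and weighting by local dataset size are illustrative assumptions.

```python
# Minimal FedAvg-style aggregation sketch [39]: size-weighted average of the
# clients' model parameters (data structures and names are illustrative).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Return the dataset-size-weighted average of the clients' parameters."""
    total = float(sum(client_sizes))
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in client_weights[0]
    }

# One communication round (sketch): the server broadcasts the global model,
# each client trains locally and returns its weights, and the server aggregates.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
print(fed_avg(clients, client_sizes=[100, 300]))   # {'w': array([2.5, 3.5])}
```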
Both of the previous studies adopted a Seq2Point model, which shows the strength of this model in the case of FL for load disaggregation. More precisely, even short versions of this model provide very competitive results, as demonstrated in [44], where the authors suggested shortening the Seq2Point baseline trained following an FL paradigm, revealing promising results despite the low number of training clients. A similar study focusing on transfer learning was suggested in [45], where a model-agnostic meta-learning approach was introduced to enable task-specific learning and allow data owners to adjust the models based on their tasks. In this regard, FL is augmented with a meta-learning step at each round. The evaluation setup demonstrated enhanced disaggregation performance but with a longer time required for convergence. FL was further tested in combination with differential privacy in [46], where the Seq2Point [14] baseline was leveraged during the experimental setup. The evaluation showed that this combination provides good results in the case of the fridge, which exhibits a periodic consumption pattern, but failed in the case of hand-operated appliances, mainly the kettle and microwave, which are directly related to daily routines. Furthermore, the authors demonstrated that differential privacy degrades the results due to the added noise: smaller epsilon values help mitigate privacy attacks, whereas higher values provide privacy leakage similar to the standard FL framework. A similar study was presented in [47], evaluating the impact of the added noise on the overall disaggregation of a standard federated NILM framework. The evaluation was performed using a temporal pooling model on three different datasets. It showed that the amount of added noise drastically hinders the disaggregation task, reaching conclusions similar to the work presented in [46]. The performance of a classification federated NILM algorithm was investigated in [48], combining FL with state-of-the-art NILM models for state classification. However, it mainly concentrated on using testing data from the same buildings included in the training, which may have led to biased conclusions. A multi-target federated NILM was suggested in [35]. The proposed framework leverages a multi-target learning paradigm to train a single model for all the target appliances, with pruning techniques to compress the model. The experiments on three real datasets demonstrated an acceptable trade-off between privacy and disaggregation performance but with a relatively low performance, mainly a low F1-score. Interestingly, a federated decision tree algorithm was designed in [49] for load disaggregation, leveraging a two-stage voting process and node-level parallelism for co-modeling NILM. During the model training phase, the server receives the local training results and makes the final decision to select the model parameters used to split the tree nodes, including the features and the corresponding thresholds. The local clients are responsible for data preprocessing, tree structure initialization, gradient computation, local histogram establishment, local split finding, and model updating. The voting thus results in a list of top-K candidate features chosen based on the maximum variance gain on the local machines and forwarded to the central server, which selects the candidate features based on majority voting. Unfortunately, designed this way, the algorithm suffers from privacy leakage of partial feature indexes.
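The differential-privacy variants discussed above perturb the client updates before aggregation; the sketch below illustrates a generic clip-and-add-Gaussian-noise step of this kind, where the clipping norm and noise multiplier are illustrative assumptions rather than parameters reported in [46][47].

```python
# Generic differential-privacy-style perturbation of a client update: clip the
# update to a norm bound, then add Gaussian noise (parameters are illustrative).
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float = 1.0,
                     noise_multiplier: float = 0.5, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound the sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

Larger noise multipliers (i.e., tighter privacy budgets) strengthen privacy but, as the studies above report, degrade the disaggregation accuracy.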
Despite the interesting findings of the previous studies, a shared shortcoming among them is their strong dependence on the central node. More precisely, all previously presented works adopt a client-server architecture where the server represents a single point of failure. To overcome this issue, a fully decentralized FL approach was evaluated in [50] by adopting a circle topology instead of a star topology to optimize clients’ communication. The experimental setup highlighted results equivalent to the centralized FL approach. However, the authors did not evaluate the gain/loss in communication bandwidth in the case of decentralized FL. Furthermore, each node in the circle topology is a point of failure. Further research is thus required to develop a mechanism that allows re-establishing the circle in the case of failures. The results obtained on unseen buildings were chosen whenever available. Moreover, the F1-score is the most common metric among the different contributions. It is clear from the reported results that performance drastically differs between appliances. The highest F1-score was reported in the case of the washing machine upon optimal model selection before the aggregation in [43]. Meanwhile, the worst value was reported for the dishwasher in [47]. Apart from indicating the low quality of FL frameworks, these results highlight the tremendous challenge that training on several appliances from different buildings can impose. The low values reported in [35][47] are linked to the approaches added to the standard federated framework, that is, compression and differential privacy. Overall, the reported results are acceptable, especially in the case of approaches that consider training data from different buildings and were tested on unseen buildings, thus simulating the most realistic scenario.
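As a closing illustration of the circle topology evaluated in [50], the sketch below passes a running model around a ring of clients so that, after one full pass, it equals the uniform average of their local weights; this equal mixing schedule is an illustrative assumption, not the exact aggregation rule of the cited work.

```python
# Ring-topology aggregation sketch: no central server, the model visits each
# client in turn and mixes in its local weights via an incremental mean.
import numpy as np

def ring_round(local_weights):
    """One pass around the circle; the result equals the uniform average."""
    running = {k: v.copy() for k, v in local_weights[0].items()}
    for i, w in enumerate(local_weights[1:], start=2):
        # After visiting i clients, `running` holds their running average.
        running = {k: running[k] + (w[k] - running[k]) / i for k in running}
    return running

nodes = [{"w": np.array([1.0])}, {"w": np.array([2.0])}, {"w": np.array([6.0])}]
print(ring_round(nodes))   # {'w': array([3.])}
```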

References

  1. Schäuble, D.; Marian, A.; Cremonese, L. Conditions for a cost-effective application of smart thermostat systems in residential buildings. Appl. Energy 2020, 262, 114526.
  2. Sardianos, C.; Varlamis, I.; Chronis, C.; Dimitrakopoulos, G.; Alsalemi, A.; Himeur, Y.; Bensaali, F.; Amira, A. The emergence of explainability of intelligent systems: Delivering explainable and personalized recommendations for energy efficiency. Int. J. Intell. Syst. 2021, 36, 656–680.
  3. Sardianos, C.; Varlamis, I.; Chronis, C.; Dimitrakopoulos, G.; Alsalemi, A.; Himeur, Y.; Bensaali, F.; Amira, A. Reshaping consumption habits by exploiting energy-related micro-moment recommendations: A case study. In Proceedings of the International Conference on Smart Cities and Green ICT Systems, International Conference on Vehicle Technology and Intelligent Transport Systems, Crete, Greece, 3–5 May 2019; Springer: Berlin/Heidelberg, Germany, 2021; pp. 65–84.
  4. Himeur, Y.; Alsalemi, A.; Al-Kababji, A.; Bensaali, F.; Amira, A.; Sardianos, C.; Dimitrakopoulos, G.; Varlamis, I. A survey of recommender systems for energy efficiency in buildings: Principles, challenges and prospects. Inf. Fusion 2021, 72, 1–21.
  5. Al-Kababji, A.; Alsalemi, A.; Himeur, Y.; Fernandez, R.; Bensaali, F.; Amira, A.; Fetais, N. Interactive visual study for residential energy consumption data. J. Clean. Prod. 2022, 366, 132841.
  6. Sardianos, C.; Varlamis, I.; Chronis, C.; Dimitrakopoulos, G.; Himeur, Y.; Alsalemi, A.; Bensaali, F.; Amira, A. Data analytics, automations, and micro-moment based recommendations for energy efficiency. In Proceedings of the 2020 IEEE Sixth International Conference on Big Data Computing Service and Applications (BigDataService), Oxford, UK, 3–6 August 2020; pp. 96–103.
  7. Sayed, A.; Himeur, Y.; Alsalemi, A.; Bensaali, F.; Amira, A. Intelligent edge-based recommender system for internet of energy applications. IEEE Syst. J. 2021, 16, 5001–5010.
  8. Hart, G.W. Nonintrusive appliance load monitoring. Proc. IEEE 1992, 80, 1870–1891.
  9. Himeur, Y.; Alsalemi, A.; Bensaali, F.; Amira, A.; Al-Kababji, A. Recent trends of smart nonintrusive load monitoring in buildings: A review, open challenges, and future directions. Int. J. Intell. Syst. 2022, 37, 7124–7179.
  10. Tanoni, G.; Principi, E.; Squartini, S. Multi-Label Appliance Classification with Weakly Labeled Data for Non-Intrusive Load Monitoring. IEEE Trans. Smart Grid 2022, 14, 440–452.
  11. Bousbiat, H.; Leitner, G.; Elmenreich, W. Ageing Safely in the Digital Era: A New Unobtrusive Activity Monitoring Framework Leveraging on Daily Interactions with Hand-Operated Appliances. Sensors 2022, 22, 1322.
  12. Rashid, H.; Stankovic, V.; Stankovic, L.; Singh, P. Evaluation of non-intrusive load monitoring algorithms for appliance-level anomaly detection. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8325–8329.
  13. Pereira, L.; Nunes, N. Performance evaluation in non-intrusive load monitoring: Datasets, metrics, and tools—A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2018, 8, e1265.
  14. Kelly, J.; Knottenbelt, W. Neural nilm: Deep neural networks applied to energy disaggregation. In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments, Seoul, Republic of Korea, 4–5 November 2015; pp. 55–64.
  15. Murray, D.; Stankovic, L.; Stankovic, V. An electrical load measurements dataset of United Kingdom households from a two-year longitudinal study. Sci. Data 2017, 4, 160122.
  16. Klemenjak, C.; Kovatsch, C.; Herold, M.; Elmenreich, W. A synthetic energy dataset for non-intrusive load monitoring in households. Sci. Data 2020, 7, 108.
  17. Kaselimi, M.; Protopapadakis, E.; Voulodimos, A.; Doulamis, N.; Doulamis, A. Towards Trustworthy Energy Disaggregation: A Review of Challenges, Methods, and Perspectives for Non-Intrusive Load Monitoring. Sensors 2022, 22, 5872.
  18. Huber, P.; Calatroni, A.; Rumsch, A.; Paice, A. Review on deep neural networks applied to low-frequency nilm. Energies 2021, 14, 2390.
  19. Angelis, G.F.; Timplalexis, C.; Krinidis, S.; Ioannidis, D.; Tzovaras, D. NILM applications: Literature review of learning approaches, recent developments and challenges. Energy Build. 2022, 261, 111951.
  20. Yan, L.; Sheikholeslami, M.; Gong, W.; Tian, W.; Li, Z. Challenges for real-world applications of nonintrusive load monitoring and opportunities for machine learning approaches. Electr. J. 2022, 35, 107136.
  21. Schirmer, P.A.; Mporas, I. Non-Intrusive Load Monitoring: A Review. IEEE Trans. Smart Grid 2022, 14, 769–784.
  22. Gurbuz, F.; Bayindir, R.; Bulbul, H. A brief review of non-intrusive load monitoring and its impact on social life. In Proceedings of the 9th International Conference on Smart Grid, icSmartGrid 2021, Setubal, Portugal, 29 June–1 July 2021; pp. 289–294.
  23. Sardianos, C.; Varlamis, I.; Dimitrakopoulos, G.; Anagnostopoulos, D.; Alsalemi, A.; Bensaali, F.; Amira, A. “I Want to… Change”: Micro-moment based Recommendations can Change Users’ Energy Habits. In Proceedings of the 8th International Conference on Smart Cities and Green ICT Systems (SMARTGREENS 2019), Crete, Greece, 3–5 May 2019; pp. 30–39.
  24. Yang, D.; Gao, X.; Kong, L.; Pang, Y.; Zhou, B. An event-driven convolutional neural architecture for non-intrusive load monitoring of residential appliance. IEEE Trans. Consum. Electron. 2020, 66, 173–182.
  25. Linh, N.V.; Arboleya, P. Deep learning application to non-intrusive load monitoring. In Proceedings of the 2019 IEEE Milan PowerTech, Milan, Italy, 23–27 June 2019; pp. 1–5.
  26. Kim, J.; Kim, H. Classification performance using gated recurrent unit recurrent neural network on energy disaggregation. In Proceedings of the 2016 International Conference on Machine Learning and Cybernetics (ICMLC), Jeju, Republic of Korea, 10–13 July 2016; Volume 1, pp. 105–110.
  27. Kim, J.; Le, T.T.H.; Kim, H. Nonintrusive load monitoring based on advanced deep learning and novel signature. Comput. Intell. Neurosci. 2017, 2017, 4216281.
  28. Bonfigli, R.; Felicetti, A.; Principi, E.; Fagiani, M.; Squartini, S.; Piazza, F. Denoising autoencoders for non-intrusive load monitoring: Improvements and comparative evaluation. Energy Build. 2018, 158, 1461–1474.
  29. Wang, T.; Ji, T.; Li, M. A New Approach for Supervised Power Disaggregation by Using a Denoising Autoencoder and Recurrent LSTM Network. In Proceedings of the 2019 IEEE 12th International Symposium on Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED), Toulouse, France, 27–30 August 2019; pp. 507–512.
  30. Bousbiat, H.; Faustine, A.; Klemenjak, C.; Pereira, L.; Elmenreich, W. Unlocking the Full Potential of Neural NILM: On Automation, Hyperparameters & Modular Pipelines. IEEE Trans. Ind. Inform. 2022, 1–9.
  31. Gomes, E.; Pereira, L. PB-NILM: Pinball guided deep non-intrusive load monitoring. IEEE Access 2020, 8, 48386–48398.
  32. Dash, S.; Sahoo, N. Electric energy disaggregation via non-intrusive load monitoring: A state-of-the-art systematic review. Electr. Power Syst. Res. 2022, 213, 108673.
  33. Athanasiadis, C.; Doukas, D.; Papadopoulos, T.; Chrysopoulos, A. A scalable real-time non-intrusive load monitoring system for the estimation of household appliance power consumption. Energies 2021, 14, 767.
  34. Wang, H.; Si, C.; Liu, G.; Zhao, J.; Wen, F.; Xue, Y. Fed-NILM: A federated learning-based non-intrusive load monitoring method for privacy-protection. Energy Convers. Econ. 2022, 3, 51–60.
  35. Zhang, Y.; Tang, G.; Huang, Q.; Wang, Y.; Wu, K.; Yu, K.; Shao, X. FedNILM: Applying federated learning to NILM applications at the edge. IEEE Trans. Green Commun. Netw. 2022.
  36. Li, L.; Fan, Y.; Tse, M.; Lin, K.Y. A review of applications in federated learning. Comput. Ind. Eng. 2020, 149, 106854.
  37. Yin, X.; Zhu, Y.; Hu, J. A comprehensive survey of privacy-preserving federated learning: A taxonomy, review, and future directions. ACM Comput. Surv. (CSUR) 2021, 54, 1–36.
  38. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Process. Mag. 2020, 37, 50–60.
  39. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191.
  40. Fekri, M.N.; Grolinger, K.; Mir, S. Distributed load forecasting using smart meter data: Federated learning with Recurrent Neural Networks. Int. J. Electr. Power Energy Syst. 2022, 137, 107669.
  41. Wang, Y.; Bennani, I.L.; Liu, X.; Sun, M.; Zhou, Y. Electricity consumer characteristics identification: A federated learning approach. IEEE Trans. Smart Grid 2021, 12, 3637–3647.
  42. Li, Q.; Ye, J.; Song, W.; Tse, Z. Energy disaggregation with federated and transfer learning. In Proceedings of the 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 14 June–31 July 2021; pp. 698–703.
  43. Wang, H.; Si, C.; Zhao, J. A Federated Learning Framework for Non-Intrusive Load Monitoring. arXiv 2021, arXiv:2104.01618.
  44. Kaspour, S.; Yassine, A. A Federated Learning Model With Short Sequence To Point Mechanism For Smart Home Energy Disaggregation. In Proceedings of the 2022 IEEE Symposium on Computers and Communications (ISCC), Rhodes, Greece, 30 June–3 July 2022; pp. 1–6.
  45. Liu, R.; Chen, Y. Learning Task-Aware Energy Disaggregation: A Federated Approach. arXiv 2022, arXiv:2204.06767.
  46. Potter, H.; Lee, S.; Mossé, D. Towards Privacy-preserving Framework for Non-Intrusive Load Monitoring. In Proceedings of the Twelfth ACM International Conference on Future Energy Systems, Virtual Event, 28 June–2 July 2021; pp. 259–263.
  47. Dai, S.; Meng, F.; Wang, Q.; Chen, X. DP2-NILM: A Distributed and Privacy-preserving Framework for Non-intrusive Load Monitoring. arXiv 2022, arXiv:2207.00041.
  48. Dai, S.; Meng, F.; Wang, Q.; Chen, X. FederatedNILM: A Distributed and Privacy-preserving Framework for Non-intrusive Load Monitoring based on Federated Deep Learning. arXiv 2021, arXiv:2108.03591.
  49. Chang, X.; Li, W.; Zomaya, A.Y. Fed-GBM: A cost-effective federated gradient boosting tree for non-intrusive load monitoring. In Proceedings of the Thirteenth ACM International Conference on Future Energy Systems, Virtual Event, 28 June–1 July 2022; pp. 63–75.
  50. Giuseppi, A.; Manfredi, S.; Menegatti, D.; Pietrabissa, A.; Poli, C. Decentralized federated learning for nonintrusive load monitoring in smart energy communities. In Proceedings of the 2022 30th Mediterranean Conference on Control and Automation (MED), Vouliagmeni, Greece, 28 June 2022–1 July 2022; pp. 312–317.