1. Introduction
At present, a worrying energy crisis is affecting socio-political, economic, and energy-supply stability on a global scale. Regarding this last aspect, it is particularly necessary to intensify the promotion of energy self-sufficiency policies based on renewable energies.
One way to further increase the competitiveness of renewable energies is to improve energy storage systems (ESSs). In particular, it is important to reduce the operating and maintenance costs of installations by applying advanced methods for monitoring and the predictive diagnosis of failures, aimed at extending the useful life of the batteries. Reduced battery life has a significant impact on the price of a photovoltaic (PV) system: more than 40% of the costs over the course of a PV system’s life cycle are attributable to the battery
[1]. System reliability will improve, and operating costs will be significantly reduced as the battery’s lifespan increases
[2][3]. By avoiding hazardous operating conditions such as deep discharge and overcharging, batteries’ lifespans can be increased.
A battery can offer many years of dependable service if it is properly designed, constructed, and maintained. A brand-new battery may not initially operate at full capacity. During the first few years of use, the capacity usually increases, peaks, and then drops until the battery reaches the end of its useful life. A lead–acid battery is generally considered to have reached the end of its useful life when its capacity reaches 80% of its rated capacity. Below 80%, the rate of battery deterioration quickens, making it more susceptible to a mechanical shock or a rapid failure induced by a high discharge rate. It should be noted that a battery will eventually lose its capacity even in perfect circumstances
[4].
The storage capacity and useful life of batteries are significantly influenced by the appropriate management of their use, which is necessary to achieve the energy storage system’s maximum availability
[5]. Condition-monitoring devices allow batteries to be used differently in order to increase their lifespan. Additionally, they can track the state of the batteries, allowing predictive maintenance approaches to be applied to decide when to replace them
[6][7]. They can also estimate the quantity of energy stored in the batteries to plan power usage and charging cycles.
2. Maintenance Particularities of Each Type of Battery
The useful life of batteries depends on many factors, and it is known that it can be shortened by overcharging, undercharging, and overloading. Excessively deep discharges are also highly detrimental, particularly under high ambient temperatures or when the installation site is a confined space. This phenomenon is general and affects different types of batteries, whether Li-ion-type batteries
[8] or Valve-Regulated Lead–Acid (VRLA)-type batteries (gel or Absorbent Glass Mat (AGM)), which, due to their comparatively low cost, are still widely used
[9].
For VRLA (gel or AGM) batteries, overcharging can cause gassing, which dries out the electrolyte and raises the internal resistance, leading to irreparable damage. When the charging voltage of 12 V batteries approaches 15 V, flat-plate VRLA batteries begin to leak water
[1].
It should be noted that Li-ion batteries are particularly expensive and that they are also susceptible to irreversible damage from excessive charging or discharging. When the system is not in use, various components (alarm systems, relays, standby current from specific loads, current drain from battery chargers, etc.) gradually drain the battery. Since Li-ion batteries have a much higher charging efficiency than lead–acid batteries, it is advised to set the charging efficiency factor to 99 percent
[10].
LiFePO4 batteries can generally use the factory-programmed “charging parameters” as well. Some battery chargers stop charging when the current falls below a set threshold; the tail current must therefore be set above this limit. LiFePO4 batteries outperform lead–acid batteries at high discharge rates. The Peukert exponent should be set to 1.05
[10] unless the battery manufacturer specifies otherwise.
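To illustrate the practical effect of the Peukert exponent, the following minimal sketch applies Peukert’s law to an assumed 100 Ah battery and an assumed 25 A load; only the 1.05 exponent reflects the figure discussed above, while the lead–acid exponent and all other numbers are placeholders used purely for comparison.

```python
# Minimal sketch of Peukert's law: effective runtime versus discharge current.
# The capacity, rated discharge time, and load current are assumed example
# values; k = 1.05 is the LiFePO4 figure quoted above, k = 1.25 is a common
# lead-acid value used here only for comparison.

def peukert_runtime_h(capacity_ah, rated_time_h, current_a, k):
    """Runtime in hours at a constant discharge current: t = H * (C / (I * H)) ** k."""
    return rated_time_h * (capacity_ah / (current_a * rated_time_h)) ** k

C_AH, H_RATE, I_LOAD = 100.0, 20.0, 25.0   # 100 Ah battery rated at the 20 h rate, 25 A load
print(f"LiFePO4   (k = 1.05): {peukert_runtime_h(C_AH, H_RATE, I_LOAD, 1.05):.1f} h")
print(f"Lead-acid (k = 1.25): {peukert_runtime_h(C_AH, H_RATE, I_LOAD, 1.25):.1f} h")
```

The lower exponent of the LiFePO4 battery translates into a noticeably longer runtime at the same high discharge current, consistent with the statement above that LiFePO4 outperforms lead–acid at high discharge rates.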
When the system is not in use, if there is any uncertainty regarding potential residual current consumption, the battery should be isolated by opening its circuit breaker, removing the battery fuse(s), or disconnecting the battery’s positive terminal. The residual discharge current is particularly hazardous if the system has been completely discharged and the low-voltage disconnection of the cells has taken place. After being disconnected due to low cell voltage, a Li-ion battery still has a reserve of about 1 Ah for every 100 Ah of capacity. If this reserve capacity is drained, the battery will suffer damage. For instance, a residual current of 4 mA can destroy a 100 Ah battery if the system is left in a discharged state for more than 10 days (4 mA × 24 h × 10 days = 0.96 Ah). In these circumstances, it is especially advisable to use devices with low current consumption
[10].
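As a quick check of the arithmetic above, the sketch below integrates a constant residual current over a storage period and compares it with the roughly 1 Ah per 100 Ah reserve mentioned in the text; the function name and the list of storage periods are illustrative.

```python
# Worked check of the residual-drain arithmetic quoted above: a small standby
# current integrated over the storage time, compared with the ~1 Ah reserve
# per 100 Ah that remains after a low-voltage disconnect.

def drained_ah(residual_current_ma, days):
    """Charge (Ah) drained by a constant residual current over a number of days."""
    return residual_current_ma / 1000.0 * 24.0 * days

RESERVE_AH = 1.0   # approximate reserve per 100 Ah after a low-voltage disconnect
for days in (5, 10, 15):
    drain = drained_ah(4.0, days)            # 4 mA residual current, as in the text
    status = "exceeds reserve -> risk of damage" if drain > RESERVE_AH else "within reserve"
    print(f"{days:>2} days: {drain:.2f} Ah ({status})")
```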
3. BMS Applications
Managing the battery with an intelligent battery management system (BMS) provides information that allows major degradation problems to be avoided. A BMS quickly pays for itself by contributing to the prolongation of battery life.
It would be desirable for a BMS to have easy access to the internals of commonly used batteries in order to protect the individual cells of the drive battery and to increase the service life as well as the cycle number. This possibility would be very interesting for most types of batteries. However, it is not feasible for practically all commercial batteries, except in the case of laboratory tests. In the latter case, the BMS measures the control parameters: the battery current, temperature, and cell voltage. The nominal voltage of a common battery cell is 3.6 V, with a maximum end-of-charge voltage of 4.2 V and a minimum end-of-discharge voltage of 2.5 V. Deep discharging (<2.5 V) causes irreparable damage, such as capacity loss and an increase in self-discharge. Overvoltage (>4.2 V) can cause spontaneous self-ignition, which is dangerous. When temperatures and voltages are excessively high while charging, there is a significant risk of capacity loss. When used properly, a typical battery has a lifespan of 500 to 1000 cycles before losing 20% of its initial capacity.
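A minimal sketch of the kind of per-cell limit check described above is shown below, using the voltage thresholds quoted in the text (2.5 V and 4.2 V); the charging temperature limit and all names are assumptions made for illustration and do not correspond to any particular BMS.

```python
# Minimal sketch of per-cell limit checks using the thresholds quoted above
# (end-of-discharge 2.5 V, end-of-charge 4.2 V). The charging temperature
# limit and all names are illustrative assumptions, not values from the text.

V_MIN, V_MAX = 2.5, 4.2      # V, end-of-discharge / end-of-charge limits
T_MAX_CHARGE = 45.0          # degC, assumed example limit while charging

def cell_fault(voltage_v, temperature_c, charging):
    """Return a fault description if the cell is outside safe limits, else None."""
    if voltage_v < V_MIN:
        return "deep discharge: capacity loss and increased self-discharge"
    if voltage_v > V_MAX:
        return "overvoltage: risk of spontaneous self-ignition"
    if charging and temperature_c > T_MAX_CHARGE:
        return "high temperature while charging: risk of capacity loss"
    return None

print(cell_fault(4.25, 30.0, charging=True))    # overvoltage warning
print(cell_fault(3.70, 25.0, charging=False))   # None: cell within limits
```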
The forecasting of a battery’s state of charge (SOC) and state of health (SOH) is possible in part by monitoring the cell voltage, current, and temperature. SOC refers to the battery’s present level of charge in relation to its maximum capacity. SOH describes the battery’s present health in comparison to a brand-new battery
[11].
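As an example of how the monitored cell current feeds the SOC estimate, the sketch below implements plain Coulomb counting; this is only one common estimation approach (practical BMSs combine it with voltage-based corrections), and the capacity, efficiency, and sample currents are assumed values.

```python
# Minimal Coulomb-counting sketch for SOC estimation from the measured
# current, shown only to illustrate the SOC definition. The capacity,
# charging efficiency, and sample data are assumed example values.

def update_soc(soc, current_a, dt_s, capacity_ah, charge_eff=0.99):
    """Integrate current over one time step; positive current = charging."""
    eff = charge_eff if current_a > 0 else 1.0
    soc += eff * current_a * dt_s / 3600.0 / capacity_ah
    return min(max(soc, 0.0), 1.0)

soc, capacity_ah = 0.80, 100.0
for current_a in (-20.0, -20.0, 15.0, 15.0):    # A, one sample per minute
    soc = update_soc(soc, current_a, dt_s=60.0, capacity_ah=capacity_ah)
print(f"Estimated SOC: {100 * soc:.1f}%")
```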
In summary, among the main functions to be performed on batteries by a BMS are charge and discharge control
[12], thermal management
[13], battery equalization
[14], fault diagnosis
[15][16][17], data acquisition, communication
[18], and the estimation of the battery’s state of charge
[19][20], state of energy (SoE)
[21], state of power (SoP)
[22], and state of health (SoH)
[23]. In top-priority safety-critical cases in particular, safeguarding functions must be applied to the system, including disconnecting the battery from generation or consumption sources
[24].
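To make the division of responsibilities above concrete, the following structural sketch groups the listed functions into a single interface; every name in it is hypothetical, and it does not describe any specific BMS implementation.

```python
# Structural sketch grouping the BMS functions listed above into one
# interface. All class, method, and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class BatteryState:
    soc: float   # state of charge (0..1)
    soh: float   # state of health (0..1)
    soe: float   # state of energy (Wh available)
    sop: float   # state of power (W available)

class BatteryManagementSystem:
    def acquire_data(self): ...              # cell voltages, current, temperatures
    def control_charge_discharge(self): ...
    def manage_thermal(self): ...
    def equalize_cells(self): ...
    def diagnose_faults(self): ...
    def estimate_states(self) -> BatteryState: ...
    def communicate(self): ...               # report states to external systems
    def safety_disconnect(self): ...         # isolate battery from generation/loads
```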
Recent research
[25][26][27] has reported smart BMSs that implement specific functions in critical applications. The aircraft industry, the automotive sector
[28], and renewable energy grid integration applications
[26] are fields where the BMS plays an essential role in system performance. Aircraft electrification’s modern challenges
[29] involve migrating from hydraulic or pneumatic onboard systems to electrical systems. This is the so-called “More Electric Aircraft” (MEA) research field. European projects such as I-PRIMES
[30], MOET
[31], EPOCAl
[32], and ENIGMA
[27] tackle the control of the electrical system. The optimal operation of the batteries leads to efficient battery sizing, which involves less weight, a critical aspect in airplanes. Therefore, the design of more innovative MEA control systems
[29] (including BMSs) is necessary. Electric vehicles are also a hot topic in BMS research
[33][34]. As in the aircraft industry, optimal battery sizing reduces the vehicle’s weight, so that the driving range can be increased through optimal BMS development.
4. Battery Modeling
Numerous researchers
[35] have investigated the dynamic behavior of battery operation. The models created years ago
[36][37][38] were developed for lead–acid batteries, but lithium and nickel–cadmium batteries are similar in some ways. Lead–acid chemistry is similar to nickel–cadmium (NiCd) chemistry in that two different metals are placed in an electrolyte. Unlike the sulfuric acid in lead–acid batteries, however, the potassium hydroxide (KOH) electrolyte in NiCd batteries does not enter into the reaction. Because the positive and negative plates alternate while being submerged in an electrolyte, the manufacturing process is comparable to that of lead–acid batteries
[39]. A summary of the mathematical lithium and NiCd battery models created at the University of South Carolina is provided in
[40]. Numerous authors have developed in-depth dynamic models of lithium batteries. These range from straightforward models with a resistance (R) or parallel resistor–capacitor (RC) networks
[41][42][43] to more intricate models with phase change components and coils
[44][45]. The researchers’ primary approach was to place these components in series and to incorporate specific features in order to achieve a closer fit to the electrical behavior of the battery
[46][47].
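As an illustration of the simpler end of this family of electrical models, the sketch below simulates a first-order Thevenin model (a series resistance plus one parallel RC branch); all parameter values and the open-circuit-voltage curve are assumed for demonstration and are not taken from the cited works.

```python
# Minimal sketch of a first-order Thevenin equivalent-circuit battery model:
# a series resistance R0 plus one parallel RC branch. All parameter values and
# the open-circuit-voltage curve are illustrative assumptions, not values
# taken from the cited works.

import math

R0, R1, C1 = 0.002, 0.003, 5000.0   # ohm, ohm, farad (assumed example values)
CAP_AH = 100.0                      # assumed capacity

def ocv(soc):
    """Very rough open-circuit-voltage curve (illustrative only)."""
    return 3.0 + 1.2 * soc

def terminal_voltage(current_a, dt_s=1.0, steps=600, soc=0.9):
    """Terminal voltage after a constant discharge current (positive = discharge)."""
    v_rc = 0.0
    alpha = math.exp(-dt_s / (R1 * C1))
    for _ in range(steps):
        v_rc = alpha * v_rc + R1 * (1.0 - alpha) * current_a   # RC branch update
        soc -= current_a * dt_s / 3600.0 / CAP_AH              # Coulomb counting
    return ocv(soc) - R0 * current_a - v_rc

print(f"Terminal voltage after 10 min at 50 A: {terminal_voltage(50.0):.3f} V")
```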
Other mathematical models have been developed in addition to these electrical models to estimate the parametric variations depending on the values associated with the time of use/disuse and changes in battery temperature
[48]. The remaining useful life (RUL) of the battery is calculated using these models once the effects of aging have been added. The temperature (T), depth of discharge (DOD), state of charge (SOC), and discharge rate (C-rate) are the primary variables that influence battery aging mechanisms
[49][50][51].
Analyzing the aging phenomenon is necessary to keep track of a battery’s useful life. Batteries experience both calendar aging, which occurs when they are stationary, and cycling aging, which occurs when they undergo cyclic operation
[35]. In the first scenario, the temperature and the SOC are the two primary factors that contribute to aging, in addition to time itself. The Arrhenius equation can be used to describe the exponential dependence of aging on temperature
[52], whereas the dependence on the SOC is linear
[53][54]. On the other hand, the DOD and the C-rate also play a role in aging brought on by cycling
[55]. The dependence on the DOD follows a logarithmic relation, whereas the dependence on the C-rate is described by a second-degree polynomial
[56][57].
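The sketch below shows one way these dependencies are commonly combined into an empirical aging-rate model: an Arrhenius factor in temperature and a linear factor in SOC for calendar aging, plus a logarithmic DOD term and a quadratic C-rate term for cycling aging. Every coefficient is an assumed placeholder rather than a fitted value from the cited works.

```python
# Illustrative empirical aging-rate model combining the dependencies described
# above: Arrhenius in temperature, linear in SOC (calendar aging), logarithmic
# in DOD and quadratic in C-rate (cycling aging). All coefficients below are
# assumed placeholders, not values taken from the cited works.

import math

R_GAS = 8.314          # J/(mol K)
EA = 50_000.0          # J/mol, assumed activation energy

def calendar_rate(temp_c, soc, a=1.0e6, b=0.5):
    """Relative calendar-aging rate: Arrhenius in T, linear in SOC."""
    temp_k = temp_c + 273.15
    return a * math.exp(-EA / (R_GAS * temp_k)) * (1.0 + b * soc)

def cycling_loss_per_cycle(dod, c_rate, c=1.0e-4, d=2.0e-5):
    """Relative capacity loss per cycle: logarithmic in DOD, quadratic in C-rate."""
    return c * math.log(1.0 + 9.0 * dod) + d * c_rate ** 2

print(f"Calendar rate at 25 C / 50% SOC: {calendar_rate(25, 0.5):.3e}")
print(f"Calendar rate at 45 C / 50% SOC: {calendar_rate(45, 0.5):.3e}")
print(f"Cycle loss, 80% DOD at 1C:       {cycling_loss_per_cycle(0.8, 1.0):.3e}")
```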
Both a rise in the battery’s internal resistance and a reduction in capacity are the two consequences that these effects have in practice
[58]. The RC components are also affected, but those effects only influence how quickly the battery reacts to sudden changes in current. In practical terms, a relationship can be drawn between the battery’s capacity loss and its aging, which continues steadily even though it takes place slowly.
The gradual loss of a battery’s capacity and the increase in its internal resistance are the most obvious effects of battery aging, and they are measured by the state of health (SOH) parameter. The SOH can be calculated either as the ratio of the battery’s current capacity $Cap$ to its initial capacity $Cap_{ini}$ (Equation (1)) or as the ratio of its initial internal resistance $R_{ini}$ to its increased internal resistance $R_{inc}$ (Equation (2)):

$$\mathrm{SOH}=\frac{Cap}{Cap_{ini}} \tag{1}$$

$$\mathrm{SOH}=\frac{R_{ini}}{R_{inc}} \tag{2}$$
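For completeness, a minimal sketch of Equations (1) and (2) is given below, combined with the 80% end-of-life criterion mentioned in the Introduction; the measured capacity and resistance values are assumed examples.

```python
# Minimal sketch of Equations (1) and (2) plus the 80% end-of-life criterion
# mentioned in the Introduction. The measured values are assumed examples.

def soh_from_capacity(cap_ah, cap_ini_ah):
    return cap_ah / cap_ini_ah                  # Equation (1)

def soh_from_resistance(r_ini_ohm, r_inc_ohm):
    return r_ini_ohm / r_inc_ohm                # Equation (2)

soh = soh_from_capacity(cap_ah=83.0, cap_ini_ah=100.0)
print(f"SOH = {100 * soh:.0f}%", "-> end of useful life" if soh < 0.80 else "-> still serviceable")
print(f"SOH (resistance) = {100 * soh_from_resistance(0.010, 0.013):.0f}%")
```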
Its operational limit will be established by the SOH. In other words, this is the parameter that determines when a battery in a given application reaches the end of its useful life. The aging of batteries and other relevant parameters have been analyzed through experimental tests performed in laboratories following predetermined protocols
[59][60][61][62][63].
When a battery’s actual operating circumstances are taken into account, sensors may frequently be unable to determine or estimate aging parameters by measuring the battery’s internal variables. Mathematical models developed under experimental laboratory conditions do not easily adapt to the random, complex degradation conditions associated with the real-world operating regime to which batteries are subjected in challenging circumstances, because the required measurements can be very difficult, expensive, or intractable. Under these circumstances, batteries gradually degrade as a result of multi-parametric cumulative effects whose relative contributions are very difficult to measure. Online trend analysis techniques are therefore more suitable for monitoring the true state of health (SOH) of batteries operating under actual conditions
[64].
5. Common Catastrophic Battery Failures
One of the key components of diagnosis is determining when batteries reach the end of their useful lives so that adequate maintenance can be carried out at the appropriate time and in the appropriate form. However, it is also important to take into account the risks related to some common catastrophic failures, whose effects can be felt immediately. Batteries are known to fail in a variety of ways, depending on their technology
[65]. Due to their relative advantages, namely their comparatively low cost of installation and maintenance, their energy density, and their safety, some of the most popular battery types today are the VRLA and gel types. The typical failure modes associated with these types of batteries are listed below
[66][67]:
Many of the above-mentioned failure modes, particularly dry-out, positive-grid corrosion, and thermal runaway, are strongly influenced by an increase in the internal battery temperature, which in turn depends, under normal circumstances, primarily but not exclusively on the ambient temperature resulting from environmental factors such as the weather. The battery installation chamber acts as a “filter” that transforms the outside temperature into the ambient temperature around the battery.
The internal battery temperature has a significant impact on aging, grid corrosion, and the rate of water loss (dry-out) from evaporation or hydrogen evolution at the negative plates (self-discharge), all of which rise with temperature. On the other hand, applications that involve intense cycling may benefit from a (moderate) temperature increase
[68].
Dry-out occurs and is accelerated as a result of excessive heat (insufficient ventilation or, in other words, heat accumulation inside the battery caused by a prior failure of the heat dissipation process), overcharging, which can produce elevated internal temperatures, and high ambient (installation chamber) temperatures, and it contributes significantly to grid corrosion. Signs of dry-out can be found in between 82 and 85% of failures
[67]. It frequently occurs as a side effect of some failure modes and as a direct inducer of others, such as thermal runaway. Negative-strap corrosion, which causes the slow loss of the electrolyte, is the typical failure mode for a VRLA battery under normal operating conditions. The pressure relief valve (PRV) allows the sealed cells to vent when the internal temperature rises. When enough electrolyte has been vented, the glass mat loses contact with the plates, the battery capacity decreases, and the internal impedance rises.
When a battery experiences thermal runaway, its temperature rises quickly, causing it to overheat to an extreme degree. As a result, the battery may melt, catch fire, or even explode. Only high ambient temperatures and/or excessive charging voltage can cause thermal runaway in a battery
[69][70]. Even though runaway failures are less common, they can still have critical consequences. In these situations, the automatic control action of the battery system must be immediate and based on quick isolation from the loads and disconnection from the generation sources, both of which must be carried out by the BMS as a whole. Thermal runaway is a self-sustaining reaction in which a battery’s internal parts start to melt. The battery’s internal temperature increases as current is accepted. Because the temperature increase lowers the battery’s impedance, the battery can accept more current from the charger, and the higher current heats the battery further. Thermal runaway occurs when the heat removed, which grows only linearly, increases more slowly than the heat produced by the reaction. The excess heat raises the temperature of the reaction mass, which in turn increases the reaction rate and therefore the rate at which heat is produced. According to a rough rule of thumb, the reaction rate, and consequently the rate of heat generation, doubles with every 10 °C increase in temperature, causing the battery temperature to “run away”. Once the electrolyte begins to boil, the upper limit of 126 °C
[67] will eventually be exceeded, and once the electrolyte has boiled away, the temperature can rise even further to the point of plastic meltdown and potential fire
[66].
The float voltage and ambient temperature can affect different batteries in different ways. Significant variations in the float voltages of batteries of different makes and models can result in different aging times. The chemistry and construction of the battery, its age, and, in particular, the conditions of the chamber where the batteries are installed all contribute to the variation in each battery’s response
[71].
If the battery’s internal heat generation process enters an advanced uncontrolled phase, violent boiling and quick gas generation will take place, leading to over-pressurization. If this condition is not caught in time, it can result in catastrophic damage due to emissions of hydrogen, oxygen, hydrogen sulfide gas (an irritant), and atomized electrolyte. The installation chamber may catch fire or explode as a result of this process
[72]. Users may be exposed to hazardous gas emissions in this situation, including hot, toxic gases, liquids, and particles, increasing the possibility of catastrophic mishaps.
All batteries are known to be “killed” by high temperatures, though the impact varies depending on the model, manufacturer, and manufacturing technology used. At 95 °F (35 °C), the life of lead–acid is reduced by 50%, whereas the life of nickel–cadmium is reduced by 16–18%
[56][62]. The battery life is halved for every 18 °F (10 °C) increase in battery temperature. Positive-grid corrosion occurs more quickly as a result of the elevated temperature, as do other failure modes. A 20-year battery will only last 10 years if it is kept at a temperature of 95 °F (35 °C) as opposed to the intended 77 °F (25 °C), and so on. A 20-year battery will only last 5 years if the temperature is raised by another 18 °F to 113 °F (45 °C). The internal temperature of the batteries is therefore the most crucial parameter to take into account in predictive trend analysis
[56].
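The rule of thumb above (battery life halved for every 18 °F / 10 °C above the rated 77 °F / 25 °C) can be written directly, as in this short sketch; the 20-year design life is the example used in the text.

```python
# Sketch of the rule of thumb quoted above: battery life is halved for every
# 10 degC (18 degF) that the operating temperature exceeds the rated 25 degC.
# The 20-year design life is the example used in the text.

def expected_life_years(design_life_years, temp_c, rated_temp_c=25.0):
    return design_life_years / 2.0 ** ((temp_c - rated_temp_c) / 10.0)

for temp_c in (25, 35, 45):
    print(f"{temp_c} degC: {expected_life_years(20.0, temp_c):.0f} years")
```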
On the other hand, low temperatures slow down the internal chemical reactions of any battery. The degree of performance reduction also varies depending on the technology. For instance, a VRLA battery might need a 20% capacity compensation when the temperature is close to freezing. A lead–calcium cell using 1.215 specific gravity acid requires a twofold capacity increase, compared to an 18% increase for a Ni-Cd cell.
In a perfect world, the trend analysis of some battery parameters, especially temperature, impedance, capacity, and their relationship with SOH, would be a great tool for determining when it is time to replace the batteries and how the batteries degrade over time. However, as was already mentioned, not all of these parameters can be easily assessed
[73][74][75].
However, despite the unquestionable advances brought about by the introduction of BMSs, some issues still limit the ability to fully exploit more precise models of the batteries’ internal state parameters, such as the SOC and SOH. Current limitations on computing power and data storage make this problem more difficult. The most recent proposals, based on the use of IoT, cloud computing, digital twin models, big data, and machine learning, aim to eliminate these difficulties
[76][77][78][79].
In this work, as a contribution, a decentralized but synchronized real-world smart battery management system has been designed using a Cerbo GX general controller with networking communication capability and cloud data processing access, four charge regulators, and a sensorized smart battery monitor with networking and Bluetooth capabilities. When general controllers, charge regulators, and smart monitors and sensors such as those proposed in this work are integrated, BMSs can be used as distributed control systems for real-world applications, enabling more accurate estimations of the battery’s electrical parameters.
The main feature of the proposed BMS is that it is intelligent: it can supply dynamic parameters from a non-intelligent battery in much the same way as an intelligent battery does. It is also a real-world and comparatively low-cost system.