Computer Vision and Artificial Intelligence for Fish Recognition

Computer vision has been applied to fish recognition. With the inception of deep learning techniques in the early 2010s, the use of digital images grew strongly, and this trend is likely to continue. As the number of articles published grows, it becomes harder to keep track of the state of the art and to determine the best course of action for new studies.

  • digital images
  • artificial intelligence
  • machine learning
  • fish recognition
  • deep learning

1. Introduction

Monitoring the many different aspects related to fish, including their life cycle, impacts of human activities, and effects of commercial exploitation, is important both for optimizing the fishing industry and for ecology and conservation purposes. There are many activities related to fish monitoring, but these can be roughly divided into four main groups. The first, recognition, has as its main goal to detect and count the number of individuals in a given environment. The second, measurement, aims at non-invasively estimating the dimensions and weight of fish. The third, tracking, aims at following individuals or shoals over time, which is usually done either to aid in the counting process or to determine the behavior of the fish as a response to the environment or to some source of stress. The fourth, classification, aims to identify the species or other factors in order to obtain a better characterization of a given area. Monitoring activities are still predominantly carried out visually, either in situ or by means of images or videos. There are many limitations associated with visual surveys, including high costs [1][2], low throughput [1][3], and subjectivity [2], among others. In addition, human observers have been shown to be less consistent than computer measurements due to individual biases, and the presence of humans and their equipment often causes animals to display avoidance behavior, making data collection unreliable [4]. Imaging devices also enable the collection of data in dangerous areas that could pose risks to a human operator [5].
Given the limitations associated with visual surveys, it is no surprise that new methods capable of automating at least part of the process have been increasingly investigated over the last three decades [6]. Different types of devices have been used for gathering the necessary information from the fish, including acoustic sensors [7], sonars [8], and a wide variety of imaging sensors [9].
Digital images and videos have been explored for some time, but with the rapid development of new artificial intelligence (AI) algorithms and models and with the increase in computational resources, this type of approach has become the preferred choice in most circumstances [10]. Deep learning models have been particularly successful in dealing with difficult detection, tracking, and classification problems [5]. Despite the significant progress achieved recently, very few methods found in the literature are suitable for use under real conditions. One of the reasons for this is that although the objective of most studies is to eventually arrive at some method for monitoring the fish underwater under real, uncontrolled conditions, a substantial part of the research is carried out using images captured either under controlled underwater conditions (for example, in tanks with artificial illumination) or out of the water. This makes it much easier to generate images in which fish are clearly visible and have good contrast with the background, which is usually not the case in the real world. On the other hand, research carried out under more realistic conditions often has trouble achieving good accuracies [10]. In any case, it is important to recognize that image-based methods may not be suitable to tackle all facets of fish monitoring [5], which underscores the importance of understanding the problem in depth before pursuing a solution that might ultimately be unfeasible. The combination of digital images and videos with artificial intelligence algorithms has been very fruitful in many different areas [5], and significant progress has also been achieved in fish monitoring and management. With hundreds of different strategies already proposed in the literature, it is difficult to keep track of which aspects of the problem have already been successfully tackled and which research gaps still linger.
This is particularly relevant considering that the impact of those advancements in practice is still limited.

2. Recognition

One of the most basic tasks in fisheries, aquaculture, and ecological monitoring is the detection and counting of fish and other relevant species. This can be done underwater in order to determine the population in a given area [11], over conveyor belts during the process of buying and selling and tank transference [12], or out of the water to determine, for example, the number of fish of different species captured during a catch. Estimating fish populations is important to avoid over-fishing and keep production sustainable [13][14], as well as for wildlife conservation and animal ecology purposes [11]. Automatically counting fish can also be useful for the inspection and enforcement of regulations [15]. Fish detection is often the first step of more complex tasks such as behavior analysis, detection of anomalous events [16], and species classification [17]. Lighting variations caused by turbidity, light attenuation at lower depths, and waves on the surface may cause severe detection problems, leading to error [5][10][14][18][19]. Possible solutions for this include the use of larger sensors with better light sensitivity (which usually cost more), or employing artificial lighting, which under natural conditions may not be feasible and can also attract or repel fish, skewing the observations [2]. Additionally, adding artificial lighting often makes the problem worse due to the backscattering of light in front of the camera. Other authors have used infrared sensors together with the RGB cameras to increase the information collected under deep water conditions [20]. Post-processing the images using denoising and enhancement techniques is another option that can at least partially address the issue of poor quality images [18][21], but it is worth pointing out that this type of technique tends to be computationally expensive [19]. 
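To make the enhancement step concrete, the sketch below implements plain global histogram equalization on a flat list of grayscale pixel values. This is a deliberately minimal stand-in for the denoising and enhancement techniques cited above; published pipelines typically use adaptive or Retinex-based variants, which are considerably more involved.

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization for a flat list of grayscale
    pixel values -- a minimal stand-in for the enhancement applied
    to low-contrast underwater frames before detection."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the intensities.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:          # flat image: nothing to stretch
        return list(pixels)
    # Remap each intensity so the CDF becomes approximately linear.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A dark, low-contrast patch gets stretched across the full range:
patch = [50, 50, 52, 52, 54, 54, 56, 56]
print(equalize_histogram(patch))  # -> [0, 0, 85, 85, 170, 170, 255, 255]
```

Stretching the contrast this way can make fish stand out from a murky background, but, as noted above, such post-processing only partially compensates for poor acquisition conditions and adds computational cost.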
Finally, some authors explore the results obtained under more favorable conditions to improve the analysis of more difficult images or video frames [22]. The background of images may contain objects such as cage structures, lice skirts, biofouling organisms, coral, seaweed, etc., which greatly increases the difficulty of the detection task, especially if some of those objects mimic the visual characteristics of the fish of interest [10]. Banno et al. [2] reported a considerable number of false positives due to complex backgrounds, but added that those errors could be easily removed manually. The buildup of fouling on the camera’s lenses was also pointed out by Marini et al. [23] as a potential source of error that should be prevented either by regular maintenance or by using protective gear. One of the main sources of errors in underwater fish detection and counting is the occlusion by other fish or objects, especially when several individuals are present simultaneously [23]. Some of the methods proposed in the literature were designed specifically to address this problem [5][24][25], but their success under uncontrolled conditions has been limited [2]. Partial success has been achieved by Labao and Naval [14], who devised a cascade structure that automatically performs corrections on the initial estimates by including the contextual information around the objects of interest. Another possible solution is the use of sophisticated tracking strategies applied to video recordings, but even in this case occlusions can lead to low accuracy. Structures and objects present in the environment can also cause occlusions, especially considering that fish frequently seek shelter and try to hide whenever they feel threatened. Potential sources of occlusion need to be identified and taken into account if the objective is to reliably estimate the fish population in a given area from digital images taken underwater [11].
Underwater detection, tracking, measurement, and classification of fish require dealing with the fact that individuals will cross the camera’s line of sight at different distances [26]. This poses several challenges. First, fish outside the range of the camera’s depth of field will appear out of focus and the consequent loss of information can lead to error. Second, fish located too far from the camera will be represented by only a few pixels, which may not be enough for the task at hand [14], thus increasing the number of false negatives [23]. Third, fish that pass too close to the camera may not appear in their entirety in any given image/frame, again limiting the information available. Coro and Walsh [20] explored color distributions in the object to compensate for the lack of resolvability of fish located too close to the camera. One way to deal with the difficulties mentioned so far is by focusing the detection on certain distinctive body structures rather than the whole body. Costa et al. [27] dealt with problems caused by body movement, bending, and touching specimens by focusing the detection on the eyes, which more unambiguously represented each individual than their whole bodies. Qian et al. [28] focused on the fish heads in order to better track individuals in a fish tank. The varying quality of underwater images poses challenges not only to automated methods but also to human experts responsible for annotating the data [4]. Especially in the case of low-quality images, annotation errors can be frequent and, as a result, the model ends up being trained with inconsistent data [29]. Banno et al. [2] have shown that the difference in counting results yielded by two different people can surpass 20%, and even repeated counts carried out by the same person can be inconsistent. Annotation becomes even more challenging and prone to subjectivity-related inconsistency with more complex detection tasks, such as pose estimation [30].
Given the intrinsic subjectivity of the annotation process, inconsistencies are largely unavoidable, but their negative effects can be mitigated by using multiple experts and applying a majority rule to assign the definitive labels [31]. The downside of this strategy is that manual annotation tends to be expensive and time-consuming, so the best strategy will ultimately depend on how reliable the annotated data needs to be. With so many factors affecting the characteristics of the images, especially when captured under uncontrolled conditions, it is necessary to prepare the models to deal with such a variety. In other words, the dataset used to train the models needs to represent the variety of conditions and variations expected to be found in practice. In turn, this often means that thousands of images need to be captured and properly annotated, which explains why virtually all image datasets used in the reported studies have some kind of limitation that decreases the generality of the models trained [16][30][32] and, as a result, limits their potential for practical use [2][4]. This is arguably the main challenge preventing more widespread use of image-based techniques for fish monitoring and management. Given the importance of this issue, it is revisited from slightly different angles both in Section 3.4 and Section 4.
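The majority rule for aggregating expert annotations can be sketched in a few lines. The tie-handling policy below (flagging ties for re-annotation instead of resolving them silently) is an assumption for illustration, not something prescribed by the cited studies.

```python
from collections import Counter

def majority_label(annotations):
    """Assign the definitive label for one sample from several experts'
    annotations by simple majority. Ties return None so the sample can
    be sent back for review (an assumed policy, for illustration)."""
    counts = Counter(annotations)
    (top, n_top), *rest = counts.most_common()
    if rest and rest[0][1] == n_top:
        return None          # no majority -> flag for re-annotation
    return top

print(majority_label(["salmon", "salmon", "trout"]))  # -> salmon
print(majority_label(["salmon", "trout"]))            # -> None (tie)
```

The trade-off mentioned in the text is visible here: each extra expert per sample improves label reliability but multiplies the annotation cost.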

3. Measurement

Non-invasively estimating the size and weight of fish is very useful both for ecological and economic purposes. Biomass estimation in particular can provide clues about the feeding process, possible health problems, and potential production in fisheries. It can also reveal important details about the condition of wild species populations in vulnerable areas. In addition, fish length is one of the key variables needed both for making short-term management decisions and for modeling stock trends [1], and automating the measurement process can reduce costs and produce more consistent data [33][34]. Automatic measurement of body traits can also be useful after catch to quickly provide information about the characteristics of the fish batch, which can, for example, be done during transportation on the conveyor belts [12]. Bravata et al. [35] enumerated several shortcomings of manual measurements. In particular, conventional length and weight data collection requires the physical handling of fish, which is time-consuming for personnel and stressful for the fish. Additionally, measurements are commonly taken in the field, where conditions can be suboptimal for ensuring precision and accuracy. This highlights the need for a more objective and systematic way to ensure accurate measurements. Fish are not rigid objects and models must learn how to adapt to changes in posture, position, and scale [1]. High accuracies have been achieved with dead fish in an out-of-water context using deep learning techniques [1][36][37], although even in those cases errors can occur due to unfavorable fish poses [38]. Measuring fish underwater has proven to be a much more challenging task, with high accuracies being achieved only under tightly controlled or unrealistic conditions [39][40][41], and even in this case, some kind of manual input is sometimes needed [42].
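Weight is commonly inferred from an image-based length estimate through the standard allometric length-weight relationship W = a·L^b used throughout fisheries science. The coefficients are species-specific and must be fitted from field data; the values below are purely illustrative.

```python
def estimate_weight(length_cm, a=0.01, b=3.0):
    """Allometric length-weight relationship W = a * L**b.
    a and b are species-specific coefficients fitted from sampled
    length/weight pairs; the defaults here are illustrative only
    (roughly grams for length in cm when b is close to 3)."""
    return a * length_cm ** b

# A 30 cm fish under the illustrative coefficients:
print(estimate_weight(30.0))  # -> 270.0
```

Because weight grows roughly with the cube of length, a modest relative error in the length measurement triples (approximately) as a relative error in the estimated biomass, which is one reason accurate length estimation matters so much.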
Despite the difficulties, some progress has been achieved under more challenging conditions [43], with body bending models showing promise when paired with stereo vision systems [33]. Other authors have employed a semi-automatic approach, in which the human user needs to provide some information for the system to perform the measurement accurately [44]. Partial or complete body occlusion is a problem that affects all aspects of image-based fish monitoring and management, but it is particularly troublesome in the context of fish measurement [37][45]. Although statistical methods can partially compensate for the lost information under certain conditions [1], usually errors caused by occlusions are unavoidable [43], even if a semi-automatic approach is employed [44]. Some studies dealt with the problem of measuring different fish body parts for a better characterization of the specimens [43]. One difficulty with this approach is that the limits between different body parts are usually not clear even for experienced evaluators, making the problem relatively ill-defined. This is something intrinsic to the problem, which means that some level of uncertainty will likely always be present. One aspect of body measurement that is sometimes ignored is that converting from pixels to a standard measurement unit such as centimeters is far from trivial [1]. First, it is necessary to know the exact distance between the fish and the camera in order to estimate the dimensions of each pixel, but such a distance changes along the body contours, so in practice, each pixel has a different conversion factor associated with it. The task is further complicated by the fact that pixels are squares, not circles, so a pixel’s diagonal is roughly 41% longer than its sides. These facts make it nearly impossible to obtain an exact conversion, but properly defined statistical corrections can lead to highly accurate estimates [1].
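The two geometric issues just described (distance-dependent pixel size and the longer pixel diagonal) can be made concrete with a minimal sketch based on the pinhole camera model. This is an illustration of the underlying geometry, not the statistical correction procedure of the cited work.

```python
import math

def pixel_size_cm(distance_cm, focal_px):
    """Side length of one pixel's footprint on a plane at the given
    distance, under the pinhole model: size = Z / f. Since the
    fish-to-camera distance varies along the body contour, this
    factor differs from pixel to pixel in practice."""
    return distance_cm / focal_px

def path_length_cm(pixel_steps, distance_cm, focal_px):
    """Convert a pixel path (unit steps along a contour) to cm,
    weighting diagonal steps by sqrt(2) -- the ~41% difference
    between a pixel's diagonal and its side noted in the text."""
    side = pixel_size_cm(distance_cm, focal_px)
    total = 0.0
    for dx, dy in pixel_steps:
        total += side * math.hypot(dx, dy)
    return total

# Ten straight steps plus ten diagonal steps at 100 cm, f = 1000 px:
steps = [(1, 0)] * 10 + [(1, 1)] * 10
print(round(path_length_cm(steps, 100.0, 1000.0), 3))  # -> 2.414
```

Counting all twenty steps as side-length units would give 2.0 cm, so ignoring the diagonal weighting alone underestimates this contour by about 17%, before any distance uncertainty is considered.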
Proper corrections are also critical to compensate for lens distortion, especially considering the growing use of robust and waterproof action cameras, which tend to have significant radial distortion [38]. Most models are trained to have maximum accuracy as the target, which normally means properly balancing false positives and false negatives. However, there are some applications for which one or another type of error can be much more damaging. In the context of measurement, fish need to be first detected and then properly measured. If spurious objects are detected as fish, their measurements will be completely wrong, which in practice may cause problems such as lowering the prices paid to fishermen or skewing inspection efforts [29]. Research on the use of computer vision techniques for measuring fish is still in its infancy. Because many of the studies aim at providing a solid proof of concept instead of generating models ready to be used in practice, the datasets used in such studies are usually limited in terms of both the number of samples and variability [41][46][47]. As the state of the art evolves, more comprehensive databases will be needed (see Section 4). One negative consequence of dataset limitations is that overfitting occurs frequently [35]. Overfitting is a phenomenon in which the model adapts very well to the data used for training but lacks generality to deal with new data, leading to low accuracies. There are a few measures that can be taken to avoid overfitting, such as early training stop and image augmentation applied to the training subset, but the best way to deal with the problem is to increase the size and variability of the training dataset [4][5]. One major reason for the lack of truly representative datasets in the case of fish segmentation and measuring is that the point-level annotations needed in this case are significantly more difficult to acquire than image-level annotations.
If the fish population is large, a more efficient approach would be to indicate that the image contains at least one fish, and then let the model locate all the individuals in the image [32], thus effectively automating part of the annotation process. More research effort is needed to improve accuracy in order for this type of approach to become viable.
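The image augmentation mentioned above as a partial remedy for overfitting can be illustrated with a toy transformation pass; real pipelines chain many more operations (rotation, cropping, color jitter), but the principle of generating plausible variants of scarce training images is the same.

```python
import random

def augment(image, rng):
    """Apply one random augmentation to a grayscale image stored as a
    list of rows: a possible horizontal flip plus a small brightness
    shift, clamped to [0, 255]. A toy version of training-set
    augmentation used to curb overfitting when images are scarce."""
    rows = [row[::-1] for row in image] if rng.random() < 0.5 else \
           [list(row) for row in image]
    shift = rng.randint(-10, 10)
    return [[min(255, max(0, p + shift)) for p in row] for row in rows]

img = [[10, 20], [30, 40]]
print(augment(img, random.Random(0)))  # same shape, shifted/flipped pixels
```

Passing an explicit random generator keeps the augmentation reproducible across training runs, which helps when comparing models trained on the same limited dataset.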

4. Tracking

Many studies dedicated to the detection, counting, measurement, and classification of fish use individual images to reach their goal. However, videos or multiple still images are frequently used in underwater applications. This implies that each fish will likely appear in multiple frames/images, some of which will certainly be more suitable for image analysis. Thus, considering multiple recognition candidates for the same fish seems a reasonable strategy [6][17]. This approach implicitly requires that individual fish be tracked. Fish tracking is also a fundamental step in determining the behavior of individuals or shoals [28][48][49], which in turn is used to detect problems such as diseases [50], lack of oxygenation [51], the presence of ammonia [52] and other pollutants [53], feeding status [26][54], changes in the environment [51], welfare status [55][56], etc. The term “tracking” is adopted here in a broad sense, as it includes not only studies dedicated to determining the trajectory of fish over time but also those focusing on the activity and behavior of fish over time, in which case the exact trajectory may not be as relevant as other cues extracted from videos or sequences of images [49]. There are many challenges that need to be overcome for proper fish tracking. Arguably, the most difficult one is to keep track of large populations containing many visually similar individuals. This is particularly challenging if the intention is to track individual fish instead of whole shoals [13][57]. Occlusions can be particularly insidious because as fish merge and separate, their identities can be swapped, and tracking fails [58]. In order to deal with a problem as complex as this, some authors have employed deep learning techniques such as semantic segmentation [13], which can implicitly extract features from the images which enable more accurate tracking. 
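The identity-swap failure described above happens in the association step that links detections across frames. The sketch below shows only the skeleton of that step, a greedy overlap-based matcher; real trackers add motion models, appearance features, and re-identification precisely to survive occlusions and visually similar individuals.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedy frame-to-frame association: each track claims the
    unclaimed detection with the highest overlap above min_iou.
    When similar fish cross, overlaps become ambiguous and this
    simple rule is exactly where identities get swapped."""
    pairs, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_j = min_iou, None
        for j, dbox in enumerate(detections):
            score = iou(tbox, dbox)
            if j not in used and score > best:
                best, best_j = score, j
        if best_j is not None:
            pairs[tid] = best_j
            used.add(best_j)
    return pairs

tracks = {1: (0, 0, 10, 10), 2: (20, 20, 30, 30)}
dets = [(21, 21, 31, 31), (1, 1, 11, 11)]
print(associate(tracks, dets))  # -> {1: 1, 2: 0}
```

A detection left unclaimed starts a new track, and a track with no match is kept alive for a few frames in the hope that the fish reappears after a brief occlusion.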
Other authors adopted a sophisticated multi-step approach designed specifically to deal with this kind of challenge [59]. However, when too little individual information is available, which is usually the case in densely packed shoals with a high rate of occlusions [29], camera-based individual tracking becomes nearly unfeasible. For this reason, some authors have adopted strategies that try to track the shoal as a whole, rather than following individual fish [51][60]. Another challenge is the fact that it is more difficult to detect and track fish as they move farther away from the camera [13]. There are two main reasons for this. First, the farther away the fish are from the camera, the smaller the number of pixels available to characterize the animal. Second, some level of turbidity will almost always be present, so visibility can decrease rapidly with distance. In addition, real underwater fish images are generally of poor quality due to limited range, non-uniform lighting, low contrast, color attenuation, and blurring [29]. These problems can be mitigated using image enhancement and noise reduction techniques such as Retinex-based and bilateral trigonometric filters [13][50], but not completely overcome. A possible way to deal with this issue is to employ multiple cameras providing an extended field of view, which can be very useful not only to counteract visibility issues but also to meet the requirements of shoal tracking and monitoring [51]. However, the additional equipment may cause costs to rise to unacceptable levels and make it more complex to manage the system and to track across multiple cameras. Due to body bending while free swimming, the same individual can be observed with very different shapes, and its apparent size and orientation can vary [29]. If not taken into account, this can cause intermittency in the tracking process [28][48].
A solution that is frequently employed in situations such as this is to use deformable models capable of mirroring the actual fish poses [29][56]. Some studies explore the posture patterns of the fish to draw conclusions about their behavior and for early detection of potential problems [61]. Tracking is usually carried out using videos captured with a relatively high frame rate, so when occlusions occur, tracking may resume as soon as the individual reappears a few frames later. However, there are instances in which plants and algae (both moving and static), rocks, or other fish hide a target for too long a time for the tracker to be able to properly resume tracking. In cases such as this, it may be possible to apply statistical techniques (e.g., covariance-based models) to refine tracking decisions [59], but tracking failures are likely to happen from time to time, especially if many fish are being tracked simultaneously [28][62]. If the occlusion is only partial, there are approaches based on deep learning techniques that have achieved some degree of success in avoiding tracking errors [63]. Another solution that has been explored is a multi-view setup in which at least two cameras with different orientations are used simultaneously for tracking [64]. Exploring only the body parts that have more distinctive features, such as the head [65], is another way that has been tested to counterbalance the difficulties involved in tracking large groups of individuals. Under tightly controlled conditions, some studies have been successful in identifying the right individuals and resuming tracking even days after the first detection [66]. As in the case of fish measurement, the majority of studies related to fish tracking are performed using images captured in tanks with at least partially controlled conditions. In addition, many of the methods proposed in the literature require that the data be recorded in shallow tanks with depths of no more than a few centimeters [62]. 
While these constraints are acceptable in prospective studies, they are often too restrictive for practical use. Thus, further progress depends on investigating new algorithms better adapted to the conditions expected to occur in the real world. One limitation of many fish tracking studies is that the trajectories are followed in a 2D plane, while real movement occurs in a three-dimensional space, thus limiting the conclusions that can be drawn from the data [62][67]. In order to deal with this limitation, some authors have been investigating 3D models more suitable for fish tracking [49][52][64][68][69]. Many of those efforts rely on stereo-vision strategies that require accurate calibration of multiple cameras or unrealistic assumptions about the data acquired, making them unsuitable for real-time tracking [62]. This has led some authors to explore single sensors with the ability to acquire depth information, such as Microsoft’s Kinect, although in this case, the maximum distance for detectability can be limited [62].
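The core relation behind the stereo-vision 3D strategies mentioned above is the depth-from-disparity equation for a calibrated, rectified camera pair, sketched below. The hard parts in practice, and the reason such systems need careful calibration, are estimating the focal length and baseline accurately and matching the same fish across both views to obtain a reliable disparity.

```python
def depth_from_disparity(focal_px, baseline_cm, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    where f is the focal length in pixels, B the distance between
    the two cameras, and d the horizontal pixel offset of the same
    point between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_cm / disparity_px

# f = 800 px, 12 cm baseline, 16 px disparity -> fish 600 cm away:
print(depth_from_disparity(800.0, 12.0, 16.0))  # -> 600.0
```

Note how the inverse relationship penalizes distant fish: at 600 cm a one-pixel disparity error shifts the depth estimate by tens of centimeters, whereas close to the camera the same error is nearly negligible, echoing the distance-related difficulties discussed throughout this section.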

References

  1. Álvarez Ellacuría, A.; Palmer, M.; Catalán, I.A.; Lisani, J.L. Image-based, unsupervised estimation of fish size from commercial landings using deep learning. ICES J. Mar. Sci. 2020, 77, 1330–1339.
  2. Banno, K.; Kaland, H.; Crescitelli, A.M.; Tuene, S.A.; Aas, G.H.; Gansel, L.C. A novel approach for wild fish monitoring at aquaculture sites: Wild fish presence analysis using computer vision. Aquac. Environ. Interact. 2022, 14, 97–112.
  3. Saleh, A.; Sheaves, M.; Rahimi Azghadi, M. Computer vision and deep learning for fish classification in underwater habitats: A survey. Fish Fish. 2022, 23, 977–999.
  4. Ditria, E.M.; Sievers, M.; Lopez-Marcano, S.; Jinks, E.L.; Connolly, R.M. Deep learning for automated analysis of fish abundance: The benefits of training across multiple habitats. Environ. Monit. Assess. 2020, 192, 698.
  5. Ditria, E.M.; Lopez-Marcano, S.; Sievers, M.; Jinks, E.L.; Brown, C.J.; Connolly, R.M. Automating the Analysis of Fish Abundance Using Object Detection: Optimizing Animal Ecology With Deep Learning. Front. Mar. Sci. 2020, 7.
  6. Shafait, F.; Mian, A.; Shortis, M.; Ghanem, B.; Culverhouse, P.F.; Edgington, D.; Cline, D.; Ravanbakhsh, M.; Seager, J.; Harvey, E.S. Fish identification from videos captured in uncontrolled underwater environments. ICES J. Mar. Sci. 2016, 73, 2737–2746.
  7. Noda, J.J.; Travieso, C.M.; Sánchez-Rodríguez, D. Automatic Taxonomic Classification of Fish Based on Their Acoustic Signals. Appl. Sci. 2016, 6, 443.
  8. Helminen, J.; Linnansaari, T. Object and behavior differentiation for improved automated counts of migrating river fish using imaging sonar data. Fish. Res. 2021, 237, 105883.
  9. Saberioon, M.; Gholizadeh, A.; Cisar, P.; Pautsina, A.; Urban, J. Application of machine vision systems in aquaculture with emphasis on fish: State-of-the-art and key issues. Rev. Aquac. 2017, 9, 369–387.
  10. Salman, A.; Siddiqui, S.A.; Shafait, F.; Mian, A.; Shortis, M.R.; Khurshid, K.; Ulges, A.; Schwanecke, U. Automatic fish detection in underwater videos by a deep neural network-based hybrid motion learning system. ICES J. Mar. Sci. 2020, 77, 1295–1307.
  11. Follana-Berná, G.; Palmer, M.; Lekanda-Guarrotxena, A.; Grau, A.; Arechavala-Lopez, P. Fish density estimation using unbaited cameras: Accounting for environmental-dependent detectability. J. Exp. Mar. Biol. Ecol. 2020, 527, 151376.
  12. Jeong, S.J.; Yang, Y.S.; Lee, K.; Kang, J.G.; Lee, D.G. Vision-based Automatic System for Non-contact Measurement of Morphometric Characteristics of Flatfish. J. Electr. Eng. Technol. 2013, 8, 1194–1201.
  13. Abe, S.; Takagi, T.; Torisawa, S.; Abe, K.; Habe, H.; Iguchi, N.; Takehara, K.; Masuma, S.; Yagi, H.; Yamaguchi, T.; et al. Development of fish spatio-temporal identifying technology using SegNet in aquaculture net cages. Aquac. Eng. 2021, 93, 102146.
  14. Labao, A.B.; Naval, P.C. Cascaded deep network systems with linked ensemble components for underwater fish detection in the wild. Ecol. Inform. 2019, 52, 103–121.
  15. French, G.; Fisher, M.; Mackiewicz, M.; Needle, C. Convolutional Neural Networks for Counting Fish in Fisheries Surveillance Video. In Machine Vision of Animals and their Behaviour (MVAB); Amaral, T., Matthews, S., Fisher, R., Eds.; BMVA Press: Dundee, UK, 2015; pp. 7.1–7.10.
  16. Zhao, S.; Zhang, S.; Lu, J.; Wang, H.; Feng, Y.; Shi, C.; Li, D.; Zhao, R. A lightweight dead fish detection method based on deformable convolution and YOLOV4. Comput. Electron. Agric. 2022, 198, 107098.
  17. Huang, P.; Boom, B.J.; Fisher, R.B. Hierarchical classification with reject option for live fish recognition. Mach. Vis. Appl. 2015, 26, 89–102.
  18. Boudhane, M.; Nsiri, B. Underwater image processing method for fish localization and detection in submarine environment. J. Vis. Commun. Image Represent. 2016, 39, 226–238.
  19. Liu, H.; Liu, T.; Gu, Y.; Li, P.; Zhai, F.; Huang, H.; He, S. A high-density fish school segmentation framework for biomass statistics in a deep-sea cage. Ecol. Inform. 2021, 64, 101367.
  20. Coro, G.; Walsh, M.B. An intelligent and cost-effective remote underwater video device for fish size monitoring. Ecol. Inform. 2021, 63, 101311.
  21. Li, C.Y.; Guo, J.C.; Cong, R.M.; Pang, Y.W.; Wang, B. Underwater Image Enhancement by Dehazing With Minimum Information Loss and Histogram Distribution Prior. IEEE Trans. Image Process. 2016, 25, 5664–5677.
  22. Park, J.H.; Kang, C. A Study on Enhancement of Fish Recognition Using Cumulative Mean of YOLO Network in Underwater Video Images. J. Mar. Sci. Eng. 2020, 8, 952.
  23. Marini, S.; Fanelli, E.; Sbragaglia, V.; Azzurro, E.; Fernandez, J.D.R.; Aguzzi, J. Tracking Fish Abundance by Underwater Image Recognition. Sci. Rep. 2018, 8, 13748.
  24. Aliyu, I.; Gana, K.J.; Musa, A.A.; Adegboye, M.A.; Lim, C.G. Incorporating Recognition in Catfish Counting Algorithm Using Artificial Neural Network and Geometry. KSII Trans. Internet Inf. Syst. 2020, 14, 4866–4888.
  25. Li, J.; Liu, C.; Lu, X.; Wu, B. CME-YOLOv5: An Efficient Object Detection Network for Densely Spaced Fish and Small Targets. Water 2022, 14, 2412.
  26. Ditria, E.M.; Jinks, E.L.; Connolly, R.M. Automating the analysis of fish grazing behaviour from videos using image classification and optical flow. Anim. Behav. 2021, 177, 31–37.
  27. Costa, C.S.; Zanoni, V.A.G.; Curvo, L.R.V.; de Araújo Carvalho, M.; Boscolo, W.R.; Signor, A.; dos Santos de Arruda, M.; Nucci, H.H.P.; Junior, J.M.; Gonçalves, W.N.; et al. Deep learning applied in fish reproduction for counting larvae in images captured by smartphone. Aquac. Eng. 2022, 97, 102225.
  28. Qian, Z.M.; Cheng, X.E.; Chen, Y.Q. Automatically Detect and Track Multiple Fish Swimming in Shallow Water with Frequent Occlusion. PLoS ONE 2014, 9, e106506.
  29. Atienza-Vanacloig, V.; Andreu-García, G.; López-García, F.; Valiente-González, J.M.; Puig-Pons, V. Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model. Comput. Electron. Agric. 2016, 130, 142–150.
  30. Lin, B.; Jiang, K.; Xu, Z.; Li, F.; Li, J.; Mou, C.; Gong, X.; Duan, X. Feasibility Research on Fish Pose Estimation Based on Rotating Box Object Detection. Fishes 2021, 6, 65.
  31. Bock, C.H.; Pethybridge, S.J.; Barbedo, J.G.A.; Esker, P.D.; Mahlein, A.K.; Ponte, E.M.D. A phytopathometry glossary for the twenty-first century: Towards consistency and precision in intra- and inter-disciplinary dialogues. Trop. Plant Pathol. 2022, 47, 14–24.
  32. Laradji, I.H.; Saleh, A.; Rodriguez, P.; Nowrouzezahrai, D.; Azghadi, M.R.; Vazquez, D. Weakly supervised underwater fish segmentation using affinity LCFCN. Sci. Rep. 2021, 11, 17379.
  33. Muñoz-Benavent, P.; Andreu-García, G.; Valiente-González, J.M.; Atienza-Vanacloig, V.; Puig-Pons, V.; Espinosa, V. Enhanced fish bending model for automatic tuna sizing using computer vision. Comput. Electron. Agric. 2018, 150, 52–61.
  34. Palmer, M.; Álvarez Ellacuría, A.; Moltó, V.; Catalán, I.A. Automatic, operational, high-resolution monitoring of fish length and catch numbers from landings using deep learning. Fish. Res. 2022, 246, 106166.
  35. Bravata, N.; Kelly, D.; Eickholt, J.; Bryan, J.; Miehls, S.; Zielinski, D. Applications of deep convolutional neural networks to predict length, circumference, and weight from mostly dewatered images of fish. Ecol. Evol. 2020, 10, 9313–9325.
  36. Zhang, S.; Yang, X.; Wang, Y.; Zhao, Z.; Liu, J.; Liu, Y.; Sun, C.; Zhou, C. Automatic Fish Population Counting by Machine Vision and a Hybrid Deep Neural Network Model. Animals 2020, 10, 364.
  37. Tseng, C.H.; Hsieh, C.L.; Kuo, Y.F. Automatic measurement of the body length of harvested fish using convolutional neural networks. Biosyst. Eng. 2020, 189, 36–47.
  38. Monkman, G.G.; Hyder, K.; Kaiser, M.J.; Vidal, F.P. Using machine vision to estimate fish length from images using regional convolutional neural networks. Methods Ecol. Evol. 2019, 10, 2045–2056.
  39. Al-Jubouri, Q.; Al-Nuaimy, W.; Al-Taee, M.; Young, I. An automated vision system for measurement of zebrafish length using low-cost orthogonal web cameras. Aquac. Eng. 2017, 78, 155–162.
  40. Alshdaifat, N.F.F.; Talib, A.Z.; Osman, M.A. Improved deep learning framework for fish segmentation in underwater videos. Ecol. Inform. 2020, 59, 101121.
  41. Ravanbakhsh, M.; Shortis, M.R.; Shafait, F.; Mian, A.; Harvey, E.S.; Seager, J.W. Automated Fish Detection in Underwater Images Using Shape-Based Level Sets. Photogramm. Rec. 2015, 30, 46–62.
  42. Rasmussen, J.H.; Moyano, M.; Fuiman, L.A.; Oomen, R.A. FishSizer: Software solution for efficiently measuring larval fish size. Ecol. Evol. 2022, 12, e8672.
  43. Baloch, A.; Ali, M.; Gul, F.; Basir, S.; Afzal, I. Fish Image Segmentation Algorithm (FISA) for Improving the Performance of Image Retrieval System. Int. J. Adv. Comput. Sci. Appl. 2017, 8.
  44. Shafait, F.; Harvey, E.S.; Shortis, M.R.; Mian, A.; Ravanbakhsh, M.; Seager, J.W.; Culverhouse, P.F.; Cline, D.E.; Edgington, D.R. Towards automating underwater measurement of fish length: A comparison of semi-automatic and manual stereo–video measurements. ICES J. Mar. Sci. 2017, 74, 1690–1701.
  45. Garcia, R.; Prados, R.; Quintana, J.; Tempelaar, A.; Gracias, N.; Rosen, S.; Vågstøl, H.; Løvall, K. Automatic segmentation of fish using deep learning with application to fish size measurement. ICES J. Mar. Sci. 2020, 77, 1354–1366.
  46. Fernandes, A.F.; Turra, E.M.; de Alvarenga, É.R.; Passafaro, T.L.; Lopes, F.B.; Alves, G.F.; Singh, V.; Rosa, G.J. Deep Learning image segmentation for extraction of fish body measurements and prediction of body weight and carcass traits in Nile tilapia. Comput. Electron. Agric. 2020, 170, 105274.
  47. Zhou, X.; Chen, S.; Ren, Y.; Zhang, Y.; Fu, J.; Fan, D.; Lin, J.; Wang, Q. Atrous Pyramid GAN Segmentation Network for Fish Images with High Performance. Electronics 2022, 11, 911.
  48. Qian, Z.M.; Wang, S.H.; Cheng, X.E.; Chen, Y.Q. An effective and robust method for tracking multiple fish in video image based on fish head detection. BMC Bioinform. 2016, 17, 251.
  49. Wang, G.; Muhammad, A.; Liu, C.; Du, L.; Li, D. Automatic Recognition of Fish Behavior with a Fusion of RGB and Optical Flow Data Based on Deep Learning. Animals 2021, 11, 2774.
  50. Anas, O.; Wageeh, Y.; Mohamed, H.E.D.; Fadl, A.; ElMasry, N.; Nabil, A.; Atia, A. Detecting Abnormal Fish Behavior Using Motion Trajectories in Ubiquitous Environments. Procedia Comput. Sci. 2020, 175, 141–148.
  51. Han, F.; Zhu, J.; Liu, B.; Zhang, B.; Xie, F. Fish Shoals Behavior Detection Based on Convolutional Neural Network and Spatiotemporal Information. IEEE Access 2020, 8, 126907–126926.
  52. Xu, W.; Zhu, Z.; Ge, F.; Han, Z.; Li, J. Analysis of Behavior Trajectory Based on Deep Learning in Ammonia Environment for Fish. Sensors 2020, 20, 4425.
  53. Zhao, X.; Yan, S.; Gao, Q. An Algorithm for Tracking Multiple Fish Based on Biological Water Quality Monitoring. IEEE Access 2019, 7, 15018–15026.
  54. Iqbal, U.; Li, D.; Akhter, M. Intelligent Diagnosis of Fish Behavior Using Deep Learning Method. Fishes 2022, 7, 201.
  55. Duarte, S.; Reig, L.; Oca, J. Measurement of sole activity by digital image analysis. Aquac. Eng. 2009, 41, 22–27.
  56. Pinkiewicz, T.; Purser, G.; Williams, R. A computer vision system to analyse the swimming behaviour of farmed fish in commercial aquaculture facilities: A case study using cage-held Atlantic salmon. Aquac. Eng. 2011, 45, 20–27.
  57. Delcourt, J.; Becco, C.; Vandewalle, N.; Poncin, P. A video multitracking system for quantification of individual behavior in a large fish shoal: Advantages and limits. Behav. Res. Methods 2009, 41, 228–235.
  58. Delcourt, J.; Denoël, M.; Ylieff, M.; Poncin, P. Video multitracking of fish behaviour: A synthesis and future perspectives. Fish Fish. 2013, 14, 186–204.
  59. Boom, B.J.; He, J.; Palazzo, S.; Huang, P.X.; Beyan, C.; Chou, H.M.; Lin, F.P.; Spampinato, C.; Fisher, R.B. A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage. Ecol. Inform. 2014, 23, 83–97.
  60. Sadoul, B.; Evouna Mengues, P.; Friggens, N.; Prunet, P.; Colson, V. A new method for measuring group behaviours of fish shoals from recorded videos taken in near aquaculture conditions. Aquaculture 2014, 430, 179–187.
  61. Xia, C.; Chon, T.S.; Liu, Y.; Chi, J.; Lee, J.M. Posture tracking of multiple individual fish for behavioral monitoring with visual sensors. Ecol. Inform. 2016, 36, 190–198.
  62. Saberioon, M.; Cisar, P. Automated multiple fish tracking in three-Dimension using a Structured Light Sensor. Comput. Electron. Agric. 2016, 121, 215–221.
  63. Li, W.; Li, F.; Li, Z. CMFTNet: Multiple fish tracking based on counterpoised JointNet. Comput. Electron. Agric. 2022, 198, 107018.
  64. Liu, X.; Yue, Y.; Shi, M.; Qian, Z.M. 3-D Video Tracking of Multiple Fish in a Water Tank. IEEE Access 2019, 7, 145049–145059.
  65. Wang, S.H.; Zhao, J.W.; Chen, Y.Q. Robust tracking of fish schools using CNN for head identification. Multimed. Tools Appl. 2017, 76, 23679–23697.
  66. Pérez-Escudero, A.; Vicente-Page, J.; Hinz, R.C.; Arganda, S.; de Polavieja, G.G. idTracker: Tracking individuals in a group by automatic identification of unmarked animals. Nat. Methods 2014, 11, 743–748.
  67. Zhang, L.; Wang, J.; Duan, Q. Estimation for fish mass using image analysis and neural network. Comput. Electron. Agric. 2020, 173, 105439.
  68. Cheng, X.E.; Du, S.S.; Li, H.Y.; Hu, J.F.; Chen, M.L. Obtaining three-dimensional trajectory of multiple fish in water tank via video tracking. Multimed. Tools Appl. 2018, 77, 24499–24519.
  69. Huang, T.W.; Hwang, J.N.; Romain, S.; Wallace, F. Fish Tracking and Segmentation From Stereo Videos on the Wild Sea Surface for Electronic Monitoring of Rail Fishing. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 3146–3158.