Injurious pecking against conspecifics is a serious problem in turkey husbandry. Bloody injuries act as a trigger mechanism to induce further pecking, and timely detection and intervention can prevent massive animal welfare impairments and costly losses. Thus, the overarching aim is to develop a camera-based system to monitor the flock and detect injuries using neural networks. In a preliminary study, images of turkeys were annotated by labelling potential injuries. These were used to train a network for injury detection.
1. Introduction
Research on farm animal welfare and behavior now utilizes computer vision and deep learning technologies. In the best-case scenario, such approaches can support, simplify, and, above all, accelerate continuous animal observation. Furthermore, real-time monitoring of large animal flocks, such as in conventional poultry farming, using computer vision and machine-learning algorithms can help prevent large-scale outbreaks of diseases or behavioral disorders [1]. For example, previous studies in poultry farming evaluated behavior [2], lameness [3], feeding [4][5], lighting preferences [6], or movement [7][8] based on new precision livestock farming (PLF) technologies.
Analyzing animal behavior and health should be conducted with minimal human interference and involvement so as not to unnecessarily affect the animals or disturb their natural behavior. Computer vision is a proven, non-invasive technology for video and image data collection [9]. Computer vision tasks can use pose estimation, which provides important behavioral information. Pose estimation can be described as follows: individual objects are abstracted into keypoints, i.e., spatial locations of interest such as body parts or joints. These keypoints are connected into skeletons, on which the poses are finally estimated. To enhance recognition precision, additional markers can be placed on the studied animal, although this method may disturb the animal and can be very expensive depending on the number of individuals [10]. Alternatively, modern approaches to animal pose estimation rely on non-invasive, vision-based solutions such as keypoint detection (KPD). Here, keypoints are marked manually on sample images or video frames to form a skeleton model that is used to record an individual animal and estimate its pose [11][12].
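As a purely illustrative sketch (not the annotation tooling or skeleton definition of any cited study), manually marked keypoints on a sample frame can be stored as named (x, y) coordinates together with the edges that form the skeleton model; the file name, keypoint names, and coordinates below are hypothetical.

```python
# Illustrative keypoint annotation for one sample frame. The keypoint names,
# coordinates, and skeleton edges are hypothetical, not from any cited dataset.
annotation = {
    "image": "frame_000123.png",              # hypothetical file name
    "keypoints": {                            # manually marked (x, y) pixel positions
        "head": (412, 188),
        "neck": (398, 240),
        "back": (350, 300),
        "tail": (290, 360),
    },
    "skeleton": [("head", "neck"), ("neck", "back"), ("back", "tail")],
}

# Such annotations serve as training targets: a network learns to predict the
# keypoint coordinates, and the skeleton edges link them into an estimated pose.
for a, b in annotation["skeleton"]:
    print(a, annotation["keypoints"][a], "->", b, annotation["keypoints"][b])
```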
In turkey husbandry, injurious pecking against conspecifics is a widespread and serious animal welfare problem [13]. Early detection of injurious pecking in a turkey flock can prevent serious wounds. Indeed, bloody injuries trigger further pecking behavior [14], so early intervention can prevent an outbreak of this behavioral disorder [15]. One option to support the turkey farmer in monitoring the flock with regard to animal welfare-related issues, such as the occurrence of injurious pecking, is the use of computer vision systems. In a preliminary study, the foundations were laid for the development of an image-based automated system using a neural network to detect pecking injuries in a turkey flock [16]. A neural network was trained on manual annotations of (color) alterations to skin and plumage in images of turkey hens, and various additional work steps were then performed to improve the detection assessment. However, the essential issue in the preliminary study was uncertainty about the correctness of the evaluation. This occurred primarily for plumage injuries, whose detection was difficult due to shadows, the turkeys’ posture, and/or overlapping of the individual animals. The system developed to date also showed an increased rate of false positives due to ‘injuries’ erroneously detected on the litter or on stable equipment. To tackle these problems and reduce false-positive detections in further research, the present study aimed to provide more information to the network: first, it was trained to identify the animal and its body regions (e.g., head, neck, back, tail, and wings), so that, in a second step, potential injuries could be detected on the previously identified animal body.
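To make this two-step idea concrete, the following minimal sketch shows one possible structure for such a pipeline; the box format and the filtering rule are assumptions for illustration, and the region and injury detectors stand in for trained networks rather than the implementation used in the study.

```python
# Hypothetical two-stage pipeline: (1) detect the animal and its body regions,
# (2) keep injury candidates only if they fall inside a detected body region,
# discarding detections on litter or stable equipment (false positives).
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def inside(inner: Box, outer: Box) -> bool:
    """True if the centre of `inner` lies within `outer`."""
    cx = (inner[0] + inner[2]) / 2
    cy = (inner[1] + inner[3]) / 2
    return outer[0] <= cx <= outer[2] and outer[1] <= cy <= outer[3]

def filter_injuries(injury_boxes: List[Box], region_boxes: List[Box]) -> List[Box]:
    """Keep only injury candidates whose centre falls inside a body region."""
    return [b for b in injury_boxes if any(inside(b, r) for r in region_boxes)]

# Usage with outputs from two hypothetical detectors:
region_boxes = [(100, 50, 400, 300)]          # e.g., 'back' region of one turkey
injury_candidates = [(150, 120, 170, 140),    # on the animal -> kept
                     (600, 400, 620, 420)]    # on the litter -> discarded
print(filter_injuries(injury_candidates, region_boxes))
```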
2. Injury Identification during Turkey Husbandry Using Neural Networks
Analyzing animal behavior via tracking and monitoring has been implemented by different tools, including radio-frequency identification (RFID) transponders [17], accelerometers [18], and cameras coupled with image analysis [19]. Videos or images have been analyzed and used for studies on broilers including bodyweight [20][21], health status [22][23], behavior [24], flock movement [25], and locomotion/activity [3][26]. The technology has also been used in the poultry sector for research into butchering [27], carcass and meat monitoring [28][29], and egg quality analysis [30][31].
A recent review on tracking systems for the assessment of farmed poultry stated that computer vision systems can be used for a variety of applications, such as checking images for the presence of poultry, classifying the identified animal as sick or absent, determining feeding and watering structures, or locating the exact position of poultry in an image [32]. The so-called keypoints can offer more detailed information about the body and body parts of a recorded animal. KPD algorithms can locate these areas in isolation, whereas pose estimation detects these keypoints and connects their structural information [33]. To date, such pose estimation models have mainly been used for humans [34][35]. They have also been tested on laboratory animals recorded in a controlled environment, e.g., mice [36], locusts [37], fruit flies [38], and even worms (C. elegans) [39]. However, there are relatively few architectures for recognizing the poses of farm animals such as cows [33][40][41], pigs [42], and broiler chickens [10]. By detecting the different body keypoints and their locations, such tools can, for instance, support activity recognition or video surveillance in humans.
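The difference between locating keypoints in isolation and connecting them into structural information can be sketched as follows; the heatmap-based localization shown here is a common pattern in pose estimation work in general, and the keypoint set, edges, and array shapes are assumptions made for this example rather than the method of any specific cited study.

```python
# Sketch: isolated keypoint detection vs. assembling keypoints into a skeleton.
# Assumes one confidence heatmap per keypoint, as produced by many pose networks.
import numpy as np

KEYPOINTS = ["head", "neck", "back", "tail"]     # illustrative keypoint set
SKELETON_EDGES = [(0, 1), (1, 2), (2, 3)]        # head-neck, neck-back, back-tail

def locate_keypoints(heatmaps: np.ndarray):
    """Isolated keypoint detection: take the peak of each heatmap independently."""
    coords = []
    for hm in heatmaps:                          # heatmaps: (num_keypoints, H, W)
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append((int(x), int(y), float(hm[y, x])))   # (x, y, confidence)
    return coords

def connect_skeleton(coords):
    """Pose estimation step: connect located keypoints along the skeleton edges."""
    return [(KEYPOINTS[i], coords[i][:2], KEYPOINTS[j], coords[j][:2])
            for i, j in SKELETON_EDGES]

# Usage with random stand-in heatmaps:
heatmaps = np.random.rand(len(KEYPOINTS), 64, 64)
print(connect_skeleton(locate_keypoints(heatmaps)))
```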
Fang et al. [10] combined a pose estimation network for broiler chickens with a classification network to analyze their behavior. They used the front view of a broiler’s head and the side view of the body to construct a pose skeleton from the feature points of a chicken and then tracked specific body parts during various behaviors such as eating, resting, or running. They concluded that their approach provides an appropriate non-invasive method to analyze chicken behavior. More recently, Doornweerd et al. [43] reported the performance of an animal pose estimation network that was investigated, trained, and tested on multi-species data from broilers as well as turkeys. They collected data for pose and gait assessments and evaluated a multi-species model to reduce the required dataset size and, ultimately, the annotation needs. Doornweerd et al. [43] recorded the turkeys walking along a corridor, using a view from behind the animals and paying particular attention to their locomotion system. They defined eight keypoints (head, neck, right and left knee, right and left hock, and right and left foot).
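Keypoint coordinates such as these eight points allow simple gait-related features to be derived; the joint-angle computation below is only an illustration of what such coordinates make possible, with hypothetical coordinates, and is not an analysis performed by Doornweerd et al. [43].

```python
# Illustrative use of estimated keypoint coordinates: compute the joint angle at
# the hock from knee, hock, and foot positions. Coordinates are hypothetical.
import math

def joint_angle(a, b, c):
    """Angle (in degrees) at point b formed by the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical (x, y) image coordinates for the right leg in one frame:
right_knee, right_hock, right_foot = (320, 410), (335, 470), (330, 530)
print(round(joint_angle(right_knee, right_hock, right_foot), 1))
```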