Traditional Computer-Vision Methods Implemented in Sports

Automatic video analysis is a possible answer to the demands of fans and professionals for various kinds of information about sports. Sports video analysis supports a wide range of applications, including player positioning, extraction of the ball’s trajectory, content extraction and indexing, summarization, highlight detection, on-demand 3D reconstruction, animation, virtual-view generation, editorial content creation, virtual content insertion, content visualization and enhancement, gameplay analysis and evaluation, recognition of player actions, referee decisions, and other fundamental elements required for the analysis of a game.

Recent developments in sports video analysis focus on computer vision techniques that perform specific assigned operations. Detailed low-level analysis, such as detecting each player in every frame and assigning players to teams by jersey color or jersey number, helps to classify the events in which a player is involved. Higher-level analysis, such as tracking the player or the ball, supports the evaluation of a player’s skills and the detection of team strategies, events, and tactical formations (for example, midfield analysis in soccer and basketball), as well as sports-vision applications such as smart assistants, virtual umpires, and assistant coaches. Higher-level semantic interpretation is an effective substitute for manual analysis, especially when reduced human intervention and real-time exploitation of the delivered system outputs are desired.

  • sports
  • ball detection
  • player tracking
  • artificial intelligence in sports
  • computer vision
  • embedded platforms

1. Basketball

Basketball is a sport played between two teams of five players each. The objective is to score more points than the opponent. The game involves several activities with the ball, such as passing, throwing, bouncing, batting, or rolling the ball from one player to another. Physical contact with an opposing player may be a foul if the contact impedes the player’s desired movement. Advances in computer vision have enabled fully automated systems to replace the manual analysis of basketball. Recognizing player actions and classifying events [1][2][3] in basketball videos helps to analyze player performance. Player/ball detection and tracking in basketball videos are carried out in [4][5][6][7][8][9], but these methods fail to assign specific identities and therefore cannot avoid identity switching when players cross. By estimating the pose of the player, the trajectory of the ball [10][11] is estimated from various distances to the basket. By recognizing and classifying the referee’s signals [12], player behavior can be assessed and highlights of the game can be extracted [13]. The behavior of a basketball team [14] can be characterized by the space-creation dynamics presented in [15][16][17][18][19][20], which are counteracted by the defensive play presented in [21]. By detecting the exact locations of the player and the ball on the basketball court, player movement can be predicted [22] and the ball trajectory [23][24][25] can be generated in three dimensions, which is a complicated task. It is also necessary to study the extraction of basketball players’ shooting motion trajectories, combined with image-feature analysis of basketball shooting, to reconstruct and quantitatively track those trajectories [26][27][28][29]. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of the players, because the situation changes rapidly and the structure of the data is complicated; real-time gameplay analysis is therefore necessary [30]. Table 1 summarizes proposed methodologies for various challenging tasks in basketball, including their limitations.
Table 1.
 Studies in basketball.
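To make the trajectory-prediction idea above concrete, the following is a minimal PyTorch sketch of a sequence model that regresses the next ball position from an observed track, loosely in the spirit of the LSTM approaches in [10][11]; the bidirectional encoder, hidden size, and dummy data are illustrative assumptions rather than the cited implementations.

```python
import torch
import torch.nn as nn

class BallTrajectoryLSTM(nn.Module):
    """Toy next-position regressor for an observed 2D ball track."""
    def __init__(self, input_dim=2, hidden_dim=64):
        super().__init__()
        # Bidirectional LSTM encodes the observed window of (x, y) positions
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, input_dim)  # predict next (x, y)

    def forward(self, track):               # track: (batch, T, 2)
        encoded, _ = self.lstm(track)       # (batch, T, 2 * hidden_dim)
        return self.head(encoded[:, -1])    # next-step position estimate

model = BallTrajectoryLSTM()
past = torch.randn(8, 20, 2)                # 8 dummy tracks, 20 observed frames each
predicted = model(past)                     # (8, 2)
loss = nn.functional.mse_loss(predicted, torch.randn(8, 2))
loss.backward()                             # the model is trainable end to end
```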

2. Soccer

Soccer is played with a football by two teams of eleven players, each competing to deliver the ball into the other team’s goal and thereby score. Players confuse each other by changing their speed or direction unexpectedly, and because teammates wear the same jersey color they look almost identical and frequently contest possession of the ball, which leads to severe occlusions and tracking ambiguities. In such cases, the jersey number must be detected to recognize the player [32]. Accurate real-time tracking [33][34][35][36][37][38][39][40][41][42][43][44] by detection [45][46][47][48] of multiple soccer players as well as the ball is a major challenge for evaluating player performance, finding relative positions at regular intervals, and linking spatiotemporal data to extract trajectories. Systems that evaluate player [49] or team performance [50] have the potential to reveal aspects of the game that are not obvious to the human eye. Such systems can successfully evaluate player activities [51], including the distance covered by players, shot detection [52][53], the number of sprints, player positions and movements [54][55], a player’s position relative to other players, possession of the ball [56], motion/gesture recognition of the referee [57], and prediction of player trajectories in shot situations [58]. The generated data can be used to evaluate individual player performance, handle occlusion [59] by detecting player positions [60], recognize actions [61], predict and classify passes [62][63][64], extract key events [65][66][67][68][69][70][71][72][73][74], assess the tactical performance of the team [75][76][77][78][79], analyze team tactics based on formation [80][81][82], and generate highlights [83][84][85][86]. Table 2 summarizes proposed methodologies for various challenging tasks in soccer, with their limitations.
Table 2.
 Studies in soccer.
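As an illustration of the tracking-by-detection paradigm that most of the surveyed soccer trackers build on, the sketch below greedily associates existing player tracks with new detections by intersection-over-union; the greedy matcher and the 0.3 threshold are simplifying assumptions rather than a specific cited method (practical systems add motion models, appearance features, and jersey cues).

```python
# Boxes are (x1, y1, x2, y2) in pixels.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, threshold=0.3):
    """Greedily match existing player tracks to new detections by IoU."""
    matches, unmatched = [], list(range(len(detections)))
    for t_idx, t_box in enumerate(tracks):
        best, best_iou = None, threshold
        for d_idx in unmatched:
            score = iou(t_box, detections[d_idx])
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches.append((t_idx, best))
            unmatched.remove(best)
    return matches, unmatched  # unmatched detections start new tracks

tracks = [(100, 200, 140, 290), (400, 180, 440, 270)]
dets = [(404, 182, 443, 272), (101, 203, 139, 291), (620, 300, 660, 390)]
print(associate(tracks, dets))   # ([(0, 1), (1, 0)], [2])
```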

3. Cricket

In many aspects of cricket as well, computer vision techniques can effectively replace manual analysis. A cricket match has many observable elements, including batting shots [87][88][89][90][91][92][93][94], bowling performance [95][96][97][98][99][100], the number of runs scored depending on ball movement, detection and trajectory estimation of the ball [101], decisions based on the placement of the players’ feet [102], outcome classification to generate commentary [103][104], and detection of umpire decisions [105][106]. Predicting an individual cricketer’s performance [107][108] from his past record can be critical when selecting team members for international competitions; such a process is highly subjective and usually requires considerable expertise and negotiated decision-making. By predicting the results of cricket matches [109][110][111][112][113] from factors such as the toss decision, home ground, player fitness, player performance criteria [114], and other dynamic strategies, the winner can be estimated. Video summarization produces a compact version of the original video so that interesting content is easier to manage; summarization methods also capture the viewer’s interest by extracting exciting events from the original video [115][116]. Table 3 summarizes proposed methodologies for various application issues in cricket, with their limitations.
Table 3.
 Studies in cricket.
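The match-outcome prediction studies above typically learn from pre-match features such as the toss decision, venue, and recent form. The sketch below shows the general shape of such a classifier using scikit-learn, with a synthetic feature table as a stand-in, since the real feature encodings and datasets of [109][110][112] are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Features: won_toss (0/1), home_ground (0/1), batting_first (0/1), form_difference
X = np.column_stack([
    rng.integers(0, 2, 500),
    rng.integers(0, 2, 500),
    rng.integers(0, 2, 500),
    rng.normal(0.0, 1.0, 500),
])
# Synthetic label loosely tied to home advantage and form, purely for the demo
y = ((0.8 * X[:, 1] + 1.2 * X[:, 3] + rng.normal(0, 1, 500)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```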

4. Tennis

Tennis enjoys huge popularity worldwide, and the game demands meticulous analysis to reduce human errors and to extract statistics from its visual feed. Automated ball and player tracking belongs to this class of systems and requires sophisticated algorithms. The primary data for tennis are obtained from ball and player tracking systems such as Hawk-Eye [117][118] and TennisSense [119][120]. The data from these systems can be used to detect and track the ball/player [121][122][123][124], visualize the overall tennis match [125][126], predict trajectories and ball landing positions [127][128][129], recognize player activity [130][131][132], analyze the movements of the player and the ball [133], analyze player behavior [134], predict the next shot [135], and classify tennis swings in real time [136]. Table 4 summarizes proposed methodologies for various challenging tasks in tennis, with their limitations.
Table 4.
 Studies in tennis.
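As a toy counterpart to the landing-position prediction systems cited above, the following purely kinematic sketch extrapolates a ballistic arc to the court plane; it ignores drag, spin, and measurement noise, all of which systems such as Hawk-Eye [117] and the predictors in [127][128][129] handle far more carefully, so the numbers are illustrative only.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(p, v):
    """Given ball position p = (x, y, z) in metres and velocity v = (vx, vy, vz)
    in m/s, return the (x, y) point where the ballistic arc reaches z = 0."""
    x, y, z = p
    vx, vy, vz = v
    # Solve z + vz*t - 0.5*G*t^2 = 0 for the positive root
    disc = vz * vz + 2.0 * G * z
    t = (vz + math.sqrt(disc)) / G
    return x + vx * t, y + vy * t

# Ball struck 1 m above the baseline, travelling down-court at 30 m/s
print(landing_point((0.0, 0.0, 1.0), (30.0, 1.5, 2.0)))
```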

5. Volleyball

In volleyball, two teams of six players each are placed on either side of a net. Each team attempts to ground the ball on the opposing team’s court and to score points under the defined rules. Detecting and analyzing player activities [137][138][139], detecting play patterns and classifying tactical behaviors [140][141][142][143], predicting league standings [144], detecting and classifying spiking skills [145][146], estimating player pose [147], tracking the player [148], and tracking the ball [149] are therefore the major aspects of volleyball analysis. Predicting the ball trajectory in a volleyball game by observing the motion of the setter has also been demonstrated [31]. Table 5 summarizes proposed methodologies for various challenging tasks in volleyball, with their limitations.
Table 5.
 Studies in volleyball.
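Loosely in the spirit of [31], which predicts the ball trajectory from the setter’s skeletal motion, the sketch below maps a short window of 2D pose keypoints to a predicted target position with a small regressor; the 17-joint layout, window length, and network size are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_JOINTS, WINDOW = 17, 10            # e.g., COCO-style keypoints over 10 frames

predictor = nn.Sequential(
    nn.Flatten(),                       # (batch, WINDOW * NUM_JOINTS * 2)
    nn.Linear(WINDOW * NUM_JOINTS * 2, 128),
    nn.ReLU(),
    nn.Linear(128, 2),                  # predicted (x, y) target on the court
)

poses = torch.randn(4, WINDOW, NUM_JOINTS, 2)   # 4 dummy pose sequences
target_xy = predictor(poses)                    # (4, 2)
loss = nn.functional.mse_loss(target_xy, torch.zeros(4, 2))
loss.backward()                                 # trainable with standard regression loss
```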

6. Hockey/Ice Hockey

Hockey, also known as field hockey, is an outdoor game played between two teams of 11 players each. The players use sticks curved at the striking end to hit a small, hard ball into the opponent’s goal. Detecting [150] and tracking the player/hockey ball, recognizing player actions [151][152][153], estimating player pose [154], classifying and tracking players of the same or different teams [155], referee gesture analysis [156][157], and hockey ball trajectory estimation are the major aspects of hockey analysis.
Ice hockey is a game similar to field hockey, played between two teams of six skating players each on an ice rink. All players aim to propel a vulcanized rubber disk, the puck, past a goal line and into a net guarded by a goaltender. Ice hockey is gaining huge popularity on international platforms due to its speed and frequent physical contact. Detecting/tracking the player [158][159][160], estimating player pose [161], classifying and tracking players of the same or different teams with distinct identities, tracking the puck [162], and classifying puck-possession events [163] are therefore the major aspects of ice hockey analysis. Table 6 summarizes proposed methodologies for various challenging tasks in hockey/ice hockey, with their limitations.
Table 6.
 Studies in hockey.
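Because the puck is small and fast, trackers usually smooth noisy per-frame detections with a motion model. The sketch below is a minimal constant-velocity Kalman filter for that smoothing step; it is a generic building block rather than the learning-based puck localizer of [162], and the frame rate and noise levels are assumptions.

```python
import numpy as np

dt = 1.0 / 30.0                                    # assume 30 fps footage
F = np.array([[1, 0, dt, 0],                       # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],                        # we only observe (x, y)
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-2                               # process noise
R = np.eye(2) * 4.0                                # measurement noise (pixels^2)

x = np.zeros(4)                                    # initial state
P = np.eye(4) * 100.0                              # initial uncertainty

def kalman_step(z):
    """One predict/update cycle given a detection z = (x_pixel, y_pixel)."""
    global x, P
    x = F @ x                                      # predict
    P = F @ P @ F.T + Q
    residual = np.asarray(z) - H @ x               # update with the new detection
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ residual
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                                   # smoothed puck position

for frame, det in enumerate([(100, 50), (103, 52), (107, 55), (110, 57)]):
    print(frame, kalman_step(det).round(1))
```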

7. Badminton

Badminton is one of the most popular racket sports, combining tactics, technique, and precise execution of movements. Technology plays a key role in optimizing player training and improving performance: it determines player movements [164] during training and game situations, for example through action recognition [165][166][167], analysis of player performance [168], and detection and tracking of the shuttlecock [169][170][171]. Table 7 summarizes proposed methodologies for various challenging tasks in badminton, with their limitations.
Table 7.
 Studies in badminton.
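For contrast with the YOLO- and FTOC-based shuttlecock trackers in [169][170], the sketch below shows a traditional frame-differencing baseline that proposes small fast-moving blobs as shuttlecock candidates using OpenCV; the thresholds and blob-size limits are assumptions, and such a baseline is easily confused by player and camera motion.

```python
import cv2
import numpy as np

def shuttle_candidates(prev_frame, frame, min_area=5, max_area=400):
    """Return bounding boxes of small fast-moving blobs between two frames."""
    diff = cv2.absdiff(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:          # keep shuttlecock-sized blobs only
            boxes.append(cv2.boundingRect(c))     # (x, y, w, h)
    return boxes

# Usage with synthetic frames (a real pipeline would read from cv2.VideoCapture)
prev = np.zeros((360, 640, 3), np.uint8)
curr = prev.copy()
cv2.circle(curr, (320, 120), 4, (255, 255, 255), -1)   # simulated shuttle
print(shuttle_candidates(prev, curr))
```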

8. Miscellaneous

Player detection and tracking is the major requirement in athletic sports such as running, swimming [172][173], and cycling. In sports such as table tennis [174], squash [175][176], and golf [177], ball detection and tracking and player pose detection [178] are challenging tasks. In ball-centric sports such as rugby, American football, handball, and baseball, ball/player detection [179][180][181][182][183][184][185] and tracking [186][187][188][189][190][191][192][193][194][195], analysis of player actions [196][197][198][199][200][201][202], event detection and classification [203][204][205][206][207], player performance analysis [208][209][210], and referee identification and gesture recognition are the major challenges. Video highlight generation is a subclass of video summarization [211][212][213][214], which may itself be viewed as a subclass of sports video analysis. Table 8 summarizes proposed methodologies for various challenging tasks across these sports, with their limitations.
Table 8.
 Studies in various sports.

9. Overview of Machine Learning/Deep Learning Techniques

There are multiple ways to classify, detect, and track objects when analyzing the semantic levels involved in various sports. These techniques pave the way for player localization, jersey number recognition, event classification, and ball-trajectory forecasting in sports video, with a much better interpretation of the image as a whole.
An AI algorithm is better selected when it has been tested and benchmarked on different data. To evaluate the robustness of AI algorithms, metrics are required that measure the performance of a particular algorithm and thereby enable better selection. Figure 1 depicts the road map of general information, methods, and evaluation criteria of machine learning algorithms for a particular task, along with the libraries/tools required for training the model. Figure 2 depicts the corresponding road map for deep learning algorithm selection, training, and evaluation. Figure 3 shows a taxonomy of deep learning techniques for classification [215][216][217][218][219][220], detection [221], prediction [222][223][224], unsupervised learning [225][226], tracking [227][228][229][230][231][232][233][234][235][236], and trajectory prediction [237][238][239][240][241][242][243][244]; such classification/detection, tracking, and trajectory-prediction tasks show great advantages across sports. A bi-layered parallel training architecture for distributed computing environments was introduced in [245], which addresses the time-consuming training process of large-scale deep learning algorithms.
Figure 1.
 Block diagram of the road map to machine learning architecture selection and training.
Figure 2.
 Block diagram of the road map to deep learning architecture selection and training.
Figure 3.
 Overview of deep learning algorithms of classification/detection, tracking and trajectory prediction.
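As an example of the evaluation metrics mentioned above, the sketch below scores a detector by matching predicted boxes to ground truth with an IoU threshold and reporting precision and recall; the 0.5 threshold and greedy matching are simplifications of standard mAP tooling such as the COCO evaluator.

```python
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def precision_recall(predictions, ground_truth, thr=0.5):
    """Greedy one-to-one matching of predicted boxes to ground-truth boxes."""
    matched, tp = set(), 0
    for p in predictions:
        best, best_iou = None, thr
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(predictions) - tp
    fn = len(ground_truth) - tp
    return tp / (tp + fp + 1e-9), tp / (tp + fn + 1e-9)

preds = [(10, 10, 50, 90), (200, 40, 240, 120), (400, 400, 420, 430)]
gts = [(12, 12, 52, 92), (198, 42, 238, 118)]
print(precision_recall(preds, gts))  # approximately (0.67, 1.0)
```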

References

  1. Wu, L.; Yang, Z.; He, J.; Jian, M.; Xu, Y.; Xu, D.; Chen, C.W. Ontology-based global and collective motion patterns for event classification in basketball videos. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2178–2190.
  2. Wu, L.; Yang, Z.; Wang, Q.; Jian, M.; Zhao, B.; Yan, J.; Chen, C.W. Fusing motion patterns and key visual information for semantic event recognition in basketball videos. Neurocomputing 2020, 413, 217–229.
  3. Liu, L. Objects detection toward complicated high remote basketball sports by leveraging deep CNN architecture. Future Gener. Comput. Syst. 2021, 119, 31–36.
  4. Fu, X.; Zhang, K.; Wang, C.; Fan, C. Multiple player tracking in basketball court videos. J. Real-Time Image Process. 2020, 17, 1811–1828.
  5. Yoon, Y.; Hwang, H.; Choi, Y.; Joo, M.; Oh, H.; Park, I.; Lee, K.H.; Hwang, J.H. Analyzing basketball movements and pass relationships using realtime object tracking techniques based on deep learning. IEEE Access 2019, 7, 56564–56576.
  6. Ramanathan, V.; Huang, J.; Abu-El-Haija, S.; Gorban, A.; Murphy, K.; Fei-Fei, L. Detecting events and key actors in multi-person videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3043–3053.
  7. Chakraborty, B.; Meher, S. A real-time trajectory-based ball detection-and-tracking framework for basketball video. J. Opt. 2013, 42, 156–170.
  8. Santhosh, P.; Kaarthick, B. An automated player detection and tracking in basketball game. Comput. Mater. Contin. 2019, 58, 625–639.
  9. Acuna, D. Towards real-time detection and tracking of basketball players using deep neural networks. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 4–9.
  10. Zhao, Y.; Yang, R.; Chevalier, G.; Shah, R.C.; Romijnders, R. Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction. Optik 2018, 158, 266–272.
  11. Shah, R.; Romijnders, R. Applying Deep Learning to Basketball Trajectories. arXiv 2016, arXiv:1608.03793.
  12. Žemgulys, J.; Raudonis, V.; Maskeliūnas, R.; Damaševičius, R. Recognition of basketball referee signals from real-time videos. J. Ambient Intell. Humaniz. Comput. 2020, 11, 979–991.
  13. Liu, W.; Yan, C.C.; Liu, J.; Ma, H. Deep learning based basketball video analysis for intelligent arena application. Multimed. Tools Appl. 2017, 76, 24983–25001.
  14. Yao, P. Real-Time Analysis of Basketball Sports Data Based on Deep Learning. Complexity 2021, 2021, 9142697.
  15. Chen, L.; Wang, W. Analysis of technical features in basketball video based on deep learning algorithm. Signal Process. Image Commun. 2020, 83, 115786.
  16. Wang, K.C.; Zemel, R. Classifying NBA offensive plays using neural networks. In Proceedings of the Proceedings of MIT Sloan Sports Analytics Conference, Boston, MA, USA, 11–12 March 2016; Volume 4, pp. 1–9.
  17. Tsai, T.Y.; Lin, Y.Y.; Jeng, S.K.; Liao, H.Y.M. End-to-End Key-Player-Based Group Activity Recognition Network Applied to Basketball Offensive Tactic Identification in Limited Data Scenarios. IEEE Access 2021, 9, 104395–104404.
  18. Lamas, L.; Junior, D.D.R.; Santana, F.; Rostaiser, E.; Negretti, L.; Ugrinowitsch, C. Space creation dynamics in basketball offence: Validation and evaluation of elite teams. Int. J. Perform. Anal. Sport 2011, 11, 71–84.
  19. Bourbousson, J.; Sève, C.; McGarry, T. Space–time coordination dynamics in basketball: Part 1. Intra-and inter-couplings among player dyads. J. Sports Sci. 2010, 28, 339–347.
  20. Bourbousson, J.; Seve, C.; McGarry, T. Space–time coordination dynamics in basketball: Part 2. The interaction between the two teams. J. Sports Sci. 2010, 28, 349–358.
  21. Tian, C.; De Silva, V.; Caine, M.; Swanson, S. Use of machine learning to automate the identification of basketball strategies using whole team player tracking data. Appl. Sci. 2020, 10, 24.
  22. Hauri, S.; Djuric, N.; Radosavljevic, V.; Vucetic, S. Multi-Modal Trajectory Prediction of NBA Players. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 1640–1649.
  23. Zheng, S.; Yue, Y.; Lucey, P. Generating Long-Term Trajectories Using Deep Hierarchical Networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 1551–1559.
  24. Bertugli, A.; Calderara, S.; Coscia, P.; Ballan, L.; Cucchiara, R. AC-VRNN: Attentive Conditional-VRNN for multi-future trajectory prediction. Comput. Vis. Image Underst. 2021, 210, 103245.
  25. Victor, B.; Nibali, A.; He, Z.; Carey, D.L. Enhancing trajectory prediction using sparse outputs: Application to team sports. Neural Comput. Appl. 2021, 33, 11951–11962.
  26. Li, H.; Zhang, M. Artificial Intelligence and Neural Network-Based Shooting Accuracy Prediction Analysis in Basketball. Mob. Inf. Syst. 2021, 2021, 4485589.
  27. Chen, H.T.; Chou, C.L.; Fu, T.S.; Lee, S.Y.; Lin, B.S.P. Recognizing tactic patterns in broadcast basketball video using player trajectory. J. Vis. Commun. Image Represent. 2012, 23, 932–947.
  28. Chen, H.T.; Tien, M.C.; Chen, Y.W.; Tsai, W.J.; Lee, S.Y. Physics-based ball tracking and 3D trajectory reconstruction with applications to shooting location estimation in basketball video. J. Vis. Commun. Image Represent. 2009, 20, 204–216.
  29. Hu, M.; Hu, Q. Design of basketball game image acquisition and processing system based on machine vision and image processor. Microprocess. Microsyst. 2021, 82, 103904.
  30. Yichen, W.; Yamashita, H. Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks. Int. J. Econ. Manag. Eng. 2021, 15, 283–289.
  31. Suda, S.; Makino, Y.; Shinoda, H. Prediction of volleyball trajectory using skeletal motions of setter player. In Proceedings of the 10th Augmented Human International Conference, Reims, France, 11–12 March 2019; pp. 1–8.
  32. Gerke, S.; Linnemann, A.; Müller, K. Soccer player recognition using spatial constellation features and jersey number recognition. Comput. Vis. Image Underst. 2017, 159, 105–115.
  33. Baysal, S.; Duygulu, P. Sentioscope: A soccer player tracking system using model field particles. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 1350–1362.
  34. Kamble, P.; Keskar, A.; Bhurchandi, K. A deep learning ball tracking system in soccer videos. Opto-Electron. Rev. 2019, 27, 58–69.
  35. Choi, K.; Seo, Y. Automatic initialization for 3D soccer player tracking. Pattern Recognit. Lett. 2011, 32, 1274–1282.
  36. Kim, W. Multiple object tracking in soccer videos using topographic surface analysis. J. Vis. Commun. Image Represent. 2019, 65, 102683.
  37. Liu, J.; Tong, X.; Li, W.; Wang, T.; Zhang, Y.; Wang, H. Automatic player detection, labeling and tracking in broadcast soccer video. Pattern Recognit. Lett. 2009, 30, 103–113.
  38. Komorowski, J.; Kurzejamski, G.; Sarwas, G. BallTrack: Football ball tracking for real-time CCTV systems. In Proceedings of the 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 27–31 May 2019; pp. 1–5.
  39. Hurault, S.; Ballester, C.; Haro, G. Self-Supervised Small Soccer Player Detection and Tracking. In Proceedings of the 3rd International Workshop on Multimedia Content Analysis in Sports, Seattle, WA, USA, 12–16 October 2020; pp. 9–18.
  40. Kamble, P.R.; Keskar, A.G.; Bhurchandi, K.M. A convolutional neural network based 3D ball tracking by detection in soccer videos. In Proceedings of the Eleventh International Conference on Machine Vision (ICMV 2018), Munich, Germany, 1–3 November 2018; Volume 11041, p. 110412O.
  41. Naidoo, W.C.; Tapamo, J.R. Soccer video analysis by ball, player and referee tracking. In Proceedings of the 2006 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries, Somerset West, South Africa, 9–11 October 2006; pp. 51–60.
  42. Liang, D.; Liu, Y.; Huang, Q.; Gao, W. A scheme for ball detection and tracking in broadcast soccer video. In Proceedings of the Pacific-Rim Conference on Multimedia, Jeju Island, Korea, 13–16 November 2005; pp. 864–875.
  43. Naik, B.; Hashmi, M.F. YOLOv3-SORT detection and tracking player-ball in soccer sport. J. Electron. Imaging 2022, 32, 011003.
  44. Naik, B.; Hashmi, M.F.; Geem, Z.W.; Bokde, N.D. DeepPlayer-Track: Player and Referee Tracking with Jersey Color Recognition in Soccer. IEEE Access 2022, 10, 32494–32509.
  45. Komorowski, J.; Kurzejamski, G.; Sarwas, G. FootAndBall: Integrated Player and Ball Detector. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2020; Volume 5, pp. 47–56.
  46. Pallavi, V.; Mukherjee, J.; Majumdar, A.K.; Sural, S. Ball detection from broadcast soccer videos using static and dynamic features. J. Vis. Commun. Image Represent. 2008, 19, 426–436.
  47. Leo, M.; Mazzeo, P.L.; Nitti, M.; Spagnolo, P. Accurate ball detection in soccer images using probabilistic analysis of salient regions. Mach. Vis. Appl. 2013, 24, 1561–1574.
  48. Mazzeo, P.L.; Leo, M.; Spagnolo, P.; Nitti, M. Soccer ball detection by comparing different feature extraction methodologies. Adv. Artif. Intell. 2012, 2012, 512159.
  49. Garnier, P.; Gregoir, T. Evaluating Soccer Player: From Live Camera to Deep Reinforcement Learning. arXiv 2021, arXiv:2101.05388.
  50. Kusmakar, S.; Shelyag, S.; Zhu, Y.; Dwyer, D.; Gastin, P.; Angelova, M. Machine Learning Enabled Team Performance Analysis in the Dynamical Environment of Soccer. IEEE Access 2020, 8, 90266–90279.
  51. Baccouche, M.; Mamalet, F.; Wolf, C.; Garcia, C.; Baskurt, A. Action classification in soccer videos with long short-term memory recurrent neural networks. In Proceedings of the International Conference on Artificial Neural Networks, Thessaloniki, Greece, 15–18 September 2010; pp. 154–159.
  52. Jackman, S. Football Shot Detection Using Convolutional Neural Networks. Master’s Thesis, Department of Biomedical Engineering, Linköping University, Linköping, Sweden, 2019.
  53. Lucey, P.; Bialkowski, A.; Monfort, M.; Carr, P.; Matthews, I. quality vs quantity: Improved shot prediction in soccer using strategic features from spatiotemporal data. In Proceedings of the 8th Annual MIT Sloan Sports Analytics Conference, Boston, MA, USA, 28 February–1 March 2014; pp. 1–9.
  54. Cioppa, A.; Deliege, A.; Giancola, S.; Ghanem, B.; Droogenbroeck, M.V.; Gade, R.; Moeslund, T.B. A context-aware loss function for action spotting in soccer videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 13126–13136.
  55. Beernaerts, J.; De Baets, B.; Lenoir, M.; Van de Weghe, N. Spatial movement pattern recognition in soccer based on relative player movements. PLoS ONE 2020, 15, e0227746.
  56. Barbon Junior, S.; Pinto, A.; Barroso, J.V.; Caetano, F.G.; Moura, F.A.; Cunha, S.A.; Torres, R.d.S. Sport action mining: Dribbling recognition in soccer. Multimed. Tools Appl. 2022, 81, 4341–4364.
  57. Kim, Y.; Jung, C.; Kim, C. Motion Recognition of Assistant Referees in Soccer Games via Selective Color Contrast Revelation. EasyChair Preprint no. 2604, EasyChair. 2020. Available online: https://easychair.org/publications/preprint/z975 (accessed on 2 November 2021).
  58. Lindström, P.; Jacobsson, L.; Carlsson, N.; Lambrix, P. Predicting player trajectories in shot situations in soccer. In Proceedings of the International Workshop on Machine Learning and Data Mining for Sports Analytics, Ghent, Belgium, 14–18 September 2020; pp. 62–75.
  59. Ul Huda, N.; Jensen, K.H.; Gade, R.; Moeslund, T.B. Estimating the number of soccer players using simulation-based occlusion handling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1824–1833.
  60. Machado, V.; Leite, R.; Moura, F.; Cunha, S.; Sadlo, F.; Comba, J.L. Visual soccer match analysis using spatiotemporal positions of players. Comput. Graph. 2017, 68, 84–95.
  61. Ganesh, Y.; Teja, A.S.; Munnangi, S.K.; Murthy, G.R. A Novel Framework for Fine Grained Action Recognition in Soccer. In Proceedings of the International Work-Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; pp. 137–150.
  62. Chawla, S.; Estephan, J.; Gudmundsson, J.; Horton, M. Classification of passes in football matches using spatiotemporal data. ACM Trans. Spat. Algorithms Syst. 2017, 3, 1–30.
  63. Gyarmati, L.; Stanojevic, R. QPass: A Merit-based Evaluation of Soccer Passes. arXiv 2016, arXiv:abs/1608.03532.
  64. Vercruyssen, V.; De Raedt, L.; Davis, J. Qualitative spatial reasoning for soccer pass prediction. In CEUR Workshop Proceedings; Springer: Berlin/Heidelberg, Germany, 2016; Volume 1842.
  65. Yu, J.; Lei, A.; Hu, Y. Soccer video event detection based on deep learning. In Proceedings of the International Conference on Multimedia Modeling, Thessaloniki, Greece, 8–11 January 2019; pp. 377–389.
  66. Brooks, J.; Kerr, M.; Guttag, J. Using machine learning to draw inferences from pass location data in soccer. Stat. Anal. Data Min. ASA Data Sci. J. 2016, 9, 338–349.
  67. Cho, H.; Ryu, H.; Song, M. Pass2vec: Analyzing soccer players’ passing style using deep learning. Int. J. Sports Sci. Coach. 2021, 17, 355–365.
  68. Zhang, K.; Wu, J.; Tong, X.; Wang, Y. An automatic multi-camera-based event extraction system for real soccer videos. Pattern Anal. Appl. 2020, 23, 953–965.
  69. Deliège, A.; Cioppa, A.; Giancola, S.; Seikavandi, M.J.; Dueholm, J.V.; Nasrollahi, K.; Ghanem, B.; Moeslund, T.B.; Droogenbroeck, M.V. SoccerNet-v2: A Dataset and Benchmarks for Holistic Understanding of Broadcast Soccer Videos. arXiv 2020, arXiv:abs/2011.13367.
  70. Penumala, R.; Sivagami, M.; Srinivasan, S. Automated Goal Score Detection in Football Match Using Key Moments. Procedia Comput. Sci. 2019, 165, 492–501.
  71. Khan, A.; Lazzerini, B.; Calabrese, G.; Serafini, L. Soccer event detection. In Proceedings of the 4th International Conference on Image Processing and Pattern Recognition (IPPR 2018), Copenhagen, Denmark, 28–29 April 2018; pp. 119–129.
  72. Khaustov, V.; Mozgovoy, M. Recognizing Events in Spatiotemporal Soccer Data. Appl. Sci. 2020, 10, 8046.
  73. Saraogi, H.; Sharma, R.A.; Kumar, V. Event recognition in broadcast soccer videos. In Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing, Hyderabad, India, 18–22 December 2016; pp. 1–7.
  74. Karimi, A.; Toosi, R.; Akhaee, M.A. Soccer Event Detection Using Deep Learning. arXiv 2021, arXiv:2102.04331.
  75. Suzuki, G.; Takahashi, S.; Ogawa, T.; Haseyama, M. Team tactics estimation in soccer videos based on a deep extreme learning machine and characteristics of the tactics. IEEE Access 2019, 7, 153238–153248.
  76. Suzuki, G.; Takahashi, S.; Ogawa, T.; Haseyama, M. Decision level fusion-based team tactics estimation in soccer videos. In Proceedings of the IEEE 5th Global Conference on Consumer Electronics, Kyoto, Japan, 11–14 October 2016; pp. 1–2.
  77. Ohnuki, S.; Takahashi, S.; Ogawa, T.; Haseyama, M. Soccer video segmentation based on team tactics estimation method. In Proceedings of the International Workshop on Advanced Image Technology, Nagoya, Japan, 7–8 January 2013; pp. 692–695.
  78. Clemente, F.M.; Couceiro, M.S.; Martins, F.M.L.; Mendes, R.S.; Figueiredo, A.J. Soccer team’s tactical behaviour: Measuring territorial domain. J. Sports Eng. Technol. 2015, 229, 58–66.
  79. Hassan, A.; Akl, A.R.; Hassan, I.; Sunderland, C. Predicting Wins, Losses and Attributes’ Sensitivities in the Soccer World Cup 2018 Using Neural Network Analysis. Sensors 2020, 20, 3213.
  80. Niu, Z.; Gao, X.; Tian, Q. Tactic analysis based on real-world ball trajectory in soccer video. Pattern Recognit. 2012, 45, 1937–1947.
  81. Wu, Y.; Xie, X.; Wang, J.; Deng, D.; Liang, H.; Zhang, H.; Cheng, S.; Chen, W. Forvizor: Visualizing spatio-temporal team formations in soccer. IEEE Trans. Vis. Comput. Graph. 2018, 25, 65–75.
  82. Suzuki, G.; Takahashi, S.; Ogawa, T.; Haseyama, M. Team tactics estimation in soccer videos via deep extreme learning machine based on players formation. In Proceedings of the IEEE 7th Global Conference on Consumer Electronics, Nara, Japan, 9–12 October 2018; pp. 116–117.
  83. Wang, B.; Shen, W.; Chen, F.; Zeng, D. Football match intelligent editing system based on deep learning. KSII Trans. Internet Inf. Syst. 2019, 13, 5130–5143.
  84. Zawbaa, H.M.; El-Bendary, N.; Hassanien, A.E.; Kim, T.h. Event detection based approach for soccer video summarization using machine learning. Int. J. Multimed. Ubiquitous Eng. 2012, 7, 63–80.
  85. Kolekar, M.H.; Sengupta, S. Bayesian network-based customized highlight generation for broadcast soccer videos. IEEE Trans. Broadcast. 2015, 61, 195–209.
  86. Li, J.; Wang, T.; Hu, W.; Sun, M.; Zhang, Y. Soccer highlight detection using two-dependence bayesian network. In Proceedings of the IEEE International Conference on Multimedia and Expo, Toronto, ON, Canada, 9–12 July 2006; pp. 1625–1628.
  87. Foysal, M.F.A.; Islam, M.S.; Karim, A.; Neehal, N. Shot-Net: A convolutional neural network for classifying different cricket shots. In Proceedings of the International Conference on Recent Trends in Image Processing and Pattern Recognition, Solapur, India, 21–22 December 2018; pp. 111–120.
  88. Khan, M.Z.; Hassan, M.A.; Farooq, A.; Khan, M.U.G. Deep CNN based data-driven recognition of cricket batting shots. In Proceedings of the International Conference on Applied and Engineering Mathematics (ICAEM), Taxila, Pakistan, 4–5 September 2018; pp. 67–71.
  89. Khan, A.; Nicholson, J.; Plötz, T. Activity recognition for quality assessment of batting shots in cricket using a hierarchical representation. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; ACM Digital Library: New York, NY, USA, 2017; Volume 1, p. 62.
  90. Sen, A.; Deb, K.; Dhar, P.K.; Koshiba, T. CricShotClassify: An Approach to Classifying Batting Shots from Cricket Videos Using a Convolutional Neural Network and Gated Recurrent Unit. Sensors 2021, 21, 2846.
  91. Gürpınar-Morgan, W.; Dinsdale, D.; Gallagher, J.; Cherukumudi, A.; Lucey, P. You Cannot Do That Ben Stokes: Dynamically Predicting Shot Type in Cricket Using a Personalized Deep Neural Network. arXiv 2021, arXiv:2102.01952.
  92. Bandara, I.; Bačić, B. Strokes Classification in Cricket Batting Videos. In Proceedings of the 2020 5th International Conference on Innovative Technologies in Intelligent Systems and Industrial Applications (CITISIA), Sydney, Australia, 25–27 November 2020; pp. 1–6.
  93. Moodley, T.; van der Haar, D. Scene Recognition Using AlexNet to Recognize Significant Events Within Cricket Game Footage. In Proceedings of the International Conference on Computer Vision and Graphics, Valletta, Malta, 27–29 February 2020; pp. 98–109.
  94. Gupta, A.; Muthiah, S.B. Viewpoint constrained and unconstrained Cricket stroke localization from untrimmed videos. Image Vis. Comput. 2020, 100, 103944.
  95. Al Islam, M.N.; Hassan, T.B.; Khan, S.K. A CNN-based approach to classify cricket bowlers based on their bowling actions. In Proceedings of the IEEE International Conference on Signal Processing, Information, Communication & Systems (SPICSCON), Dhaka, Bangladesh, 28–30 November 2019; pp. 130–134.
  96. Muthuswamy, S.; Lam, S.S. Bowler performance prediction for one-day international cricket using neural networks. In Proceedings of the IIE Annual Conference Proceedings. Institute of Industrial and Systems Engineers (IISE), New Orleans, LA, USA, 30 May–2 June 2008; p. 1391.
  97. Bhattacharjee, D.; Pahinkar, D.G. Analysis of performance of bowlers using combined bowling rate. Int. J. Sports Sci. Eng. 2012, 6, 1750–9823.
  98. Rahman, R.; Rahman, M.A.; Islam, M.S.; Hasan, M. DeepGrip: Cricket Bowling Delivery Detection with Superior CNN Architectures. In Proceedings of the 6th International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 20–22 July 2021; pp. 630–636.
  99. Lemmer, H.H. The combined bowling rate as a measure of bowling performance in cricket. S. Afr. J. Res. Sport Phys. Educ. Recreat. 2002, 24, 37–44.
  100. Mukherjee, S. Quantifying individual performance in Cricket—A network analysis of Batsmen and Bowlers. Phys. A Stat. Mech. Its Appl. 2014, 393, 624–637.
  101. Velammal, B.; Kumar, P.A. An Efficient Ball Detection Framework for Cricket. Int. J. Comput. Sci. Issues 2010, 7, 30.
  102. Nelikanti, A.; Reddy, G.V.R.; Karuna, G. An Optimization Based deep LSTM Predictive Analysis for Decision Making in Cricket. In Innovative Data Communication Technologies and Application; Springer: Berlin/Heidelberg, Germany, 2021; pp. 721–737.
  103. Kumar, R.; Santhadevi, D.; Barnabas, J. Outcome Classification in Cricket Using Deep Learning. In Proceedings of the IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), Bengaluru, India, 19–20 September 2019; pp. 55–58.
  104. Shukla, P.; Sadana, H.; Bansal, A.; Verma, D.; Elmadjian, C.; Raman, B.; Turk, M. Automatic cricket highlight generation using event-driven and excitement-based features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1800–1808.
  105. Kowsher, M.; Alam, M.A.; Uddin, M.J.; Ahmed, F.; Ullah, M.W.; Islam, M.R. Detecting Third Umpire Decisions & Automated Scoring System of Cricket. In Proceedings of the 2019 International Conference on Computer, Communication, Chemical, Materials and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 11–12 July 2019; pp. 1–8.
  106. Ravi, A.; Venugopal, H.; Paul, S.; Tizhoosh, H.R. A dataset and preliminary results for umpire pose detection using SVM classification of deep features. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1396–1402.
  107. Kapadiya, C.; Shah, A.; Adhvaryu, K.; Barot, P. Intelligent Cricket Team Selection by Predicting Individual Players’ Performance using Efficient Machine Learning Technique. Int. J. Eng. Adv. Technol. 2020, 9, 3406–3409.
  108. Iyer, S.R.; Sharda, R. Prediction of athletes performance using neural networks: An application in cricket team selection. Expert Syst. Appl. 2009, 36, 5510–5522.
  109. Jhanwar, M.G.; Pudi, V. Predicting the Outcome of ODI Cricket Matches: A Team Composition Based Approach. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD 2016), Bilbao, Spain, 19–23 September 2016.
  110. Pathak, N.; Wadhwa, H. Applications of modern classification techniques to predict the outcome of ODI cricket. Procedia Comput. Sci. 2016, 87, 55–60.
  111. Alaka, S.; Sreekumar, R.; Shalu, H. Efficient Feature Representations for Cricket Data Analysis using Deep Learning based Multi-Modal Fusion Model. arXiv 2021, arXiv:2108.07139.
  112. Goel, R.; Davis, J.; Bhatia, A.; Malhotra, P.; Bhardwaj, H.; Hooda, V.; Goel, A. Dynamic cricket match outcome prediction. J. Sports Anal. 2021, 7, 185–196.
  113. Karthik, K.; Krishnan, G.S.; Shetty, S.; Bankapur, S.S.; Kolkar, R.P.; Ashwin, T.; Vanahalli, M.K. Analysis and Prediction of Fantasy Cricket Contest Winners Using Machine Learning Techniques. In Evolution in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2021; pp. 443–453.
  114. Shah, P. New performance measure in Cricket. ISOR J. Sports Phys. Educ. 2017, 4, 28–30.
  115. Shingrakhia, H.; Patel, H. SGRNN-AM and HRF-DBN: A hybrid machine learning model for cricket video summarization. Vis. Comput. 2021, 1–17.
  116. Guntuboina, C.; Porwal, A.; Jain, P.; Shingrakhia, H. Deep Learning Based Automated Sports Video Summarization using YOLO. Electron. Lett. Comput. Vis. Image Anal. 2021, 20, 99–116.
  117. Owens, N.; Harris, C.; Stennett, C. Hawk-eye tennis system. In Proceedings of the International Conference on Visual Information Engineering, Guildford, UK, 7–9 July 2003; pp. 182–185.
  118. Wu, G. Monitoring System of Key Technical Features of Male Tennis Players Based on Internet of Things Security Technology. Wirel. Commun. Mob. Comput. 2021, 2021, 4076863.
  119. Conaire, C.O.; Kelly, P.; Connaghan, D.; O’Connor, N.E. Tennissense: A platform for extracting semantic information from multi-camera tennis data. In Proceedings of the 16th International Conference on Digital Signal Processing, Santorini, Greece, 5–7 July 2009; pp. 1–6.
  120. Connaghan, D.; Kelly, P.; O’Connor, N.E. Game, shot and match: Event-based indexing of tennis. In Proceedings of the 9th International Workshop on Content-Based Multimedia Indexing (CBMI), Lille, France, 28–30 June 2011; pp. 97–102.
  121. Giles, B.; Kovalchik, S.; Reid, M. A machine learning approach for automatic detection and classification of changes of direction from player tracking data in professional tennis. J. Sports Sci. 2020, 38, 106–113.
  122. Zhou, X.; Xie, L.; Huang, Q.; Cox, S.J.; Zhang, Y. Tennis ball tracking using a two-layered data association approach. IEEE Trans. Multimed. 2014, 17, 145–156.
  123. Reno, V.; Mosca, N.; Marani, R.; Nitti, M.; D’Orazio, T.; Stella, E. Convolutional neural networks based ball detection in tennis games. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1758–1764.
  124. Archana, M.; Geetha, M.K. Object detection and tracking based on trajectory in broadcast tennis video. Procedia Comput. Sci. 2015, 58, 225–232.
  125. Polk, T.; Yang, J.; Hu, Y.; Zhao, Y. Tennivis: Visualization for tennis match analysis. IEEE Trans. Vis. Comput. Graph. 2014, 20, 2339–2348.
  126. Kelly, P.; Diego, J.; Agapito, P.; Conaire, C.; Connaghan, D.; Kuklyte, J.; Connor, N. Performance analysis and visualisation in tennis using a low-cost camera network. In Proceedings of the 18th ACM Multimedia Conference on Multimedia Grand Challenge, Beijing, China, 25–29 October 2010; pp. 1–4.
  127. Fernando, T.; Denman, S.; Sridharan, S.; Fookes, C. Memory augmented deep generative models for forecasting the next shot location in tennis. IEEE Trans. Knowl. Data Eng. 2019, 32, 1785–1797.
  128. Pingali, G.; Opalach, A.; Jean, Y.; Carlbom, I. Visualization of sports using motion trajectories: Providing insights into performance, style, and strategy. In Proceedings of the IEEE Visualization 2001, San Diego, CA, USA, 24–26 October 2001; pp. 75–544.
  129. Pingali, G.S.; Opalach, A.; Jean, Y.D.; Carlbom, I.B. Instantly indexed multimedia databases of real world events. IEEE Trans. Multimed. 2002, 4, 269–282.
  130. Cai, J.; Hu, J.; Tang, X.; Hung, T.Y.; Tan, Y.P. Deep historical long short-term memory network for action recognition. Neurocomputing 2020, 407, 428–438.
  131. Vinyes Mora, S.; Knottenbelt, W.J. Deep learning for domain-specific action recognition in tennis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 114–122.
  132. Ning, B.; Na, L. Deep Spatial/temporal-level feature engineering for Tennis-based action recognition. Future Gener. Comput. Syst. 2021, 125, 188–193.
  133. Polk, T.; Jäckle, D.; Häußler, J.; Yang, J. CourtTime: Generating actionable insights into tennis matches using visual analytics. IEEE Trans. Vis. Comput. Graph. 2019, 26, 397–406.
  134. Zhu, G.; Huang, Q.; Xu, C.; Xing, L.; Gao, W.; Yao, H. Human behavior analysis for highlight ranking in broadcast racket sports video. IEEE Trans. Multimed. 2007, 9, 1167–1182.
  135. Wei, X.; Lucey, P.; Morgan, S.; Sridharan, S. Forecasting the next shot location in tennis using fine-grained spatiotemporal tracking data. IEEE Trans. Knowl. Data Eng. 2016, 28, 2988–2997.
  136. Ma, K. A Real Time Artificial Intelligent System for Tennis Swing Classification. In Proceedings of the IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herl’any, Slovakia, 21–23 January 2021; pp. 21–26.
  137. Vales-Alonso, J.; Chaves-Diéguez, D.; López-Matencio, P.; Alcaraz, J.J.; Parrado-García, F.J.; González-Castaño, F.J. SAETA: A smart coaching assistant for professional volleyball training. IEEE Trans. Syst. Man Cybern. Syst. 2015, 45, 1138–1150.
  138. Kautz, T.; Groh, B.H.; Hannink, J.; Jensen, U.; Strubberg, H.; Eskofier, B.M. Activity recognition in beach volleyball using a Deep Convolutional Neural Network. Data Min. Knowl. Discov. 2017, 31, 1678–1705.
  139. Ibrahim, M.S.; Muralidharan, S.; Deng, Z.; Vahdat, A.; Mori, G. A hierarchical deep temporal model for group activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1971–1980.
  140. Van Haaren, J.; Ben Shitrit, H.; Davis, J.; Fua, P. Analyzing volleyball match data from the 2014 World Championships using machine learning techniques. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 627–634.
  141. Wenninger, S.; Link, D.; Lames, M. Performance of machine learning models in application to beach volleyball data. Int. J. Comput. Sci. Sport 2020, 19, 24–36.
  142. Haider, F.; Salim, F.; Naghashi, V.; Tasdemir, S.B.Y.; Tengiz, I.; Cengiz, K.; Postma, D.; Delden, R.v.; Reidsma, D.; van Beijnum, B.J.; et al. Evaluation of dominant and non-dominant hand movements for volleyball action modelling. In Proceedings of the Adjunct of the 2019 International Conference on Multimodal Interaction, Suzhou, China, 14–18 October 2019; pp. 1–6.
  143. Salim, F.A.; Haider, F.; Tasdemir, S.B.Y.; Naghashi, V.; Tengiz, I.; Cengiz, K.; Postma, D.; Van Delden, R. Volleyball action modelling for behavior analysis and interactive multi-modal feedback. In Proceedings of the 15th International Summer Workshop on Multimodal Interfaces, Ankara, Turkey, 8 July 2019; p. 50.
  144. Jiang, W.; Zhao, K.; Jin, X. Diagnosis Model of Volleyball Skills and Tactics Based on Artificial Neural Network. Mob. Inf. Syst. 2021, 2021, 7908897.
  145. Wang, Y.; Zhao, Y.; Chan, R.H.; Li, W.J. Volleyball skill assessment using a single wearable micro inertial measurement unit at wrist. IEEE Access 2018, 6, 13758–13765.
  146. Zhang, C.; Tang, H.; Duan, Z. WITHDRAWN: Time Series Analysis of Volleyball Spiking Posture Based on Quality-Guided Cyclic Neural Network. J. Vis. Commun. Image Represent. 2019, 82, 102681.
  147. Thilakarathne, H.; Nibali, A.; He, Z.; Morgan, S. Pose is all you need: The pose only group activity recognition system (POGARS). arXiv 2021, arXiv:2108.04186.
  148. Zhao, K.; Jiang, W.; Jin, X.; Xiao, X. Artificial intelligence system based on the layout effect of both sides in volleyball matches. J. Intell. Fuzzy Syst. 2021, 40, 3075–3084.
  149. Tian, Y. Optimization of Volleyball Motion Estimation Algorithm Based on Machine Vision and Wearable Devices. Microprocess. Microsyst. 2020, 81, 103750.
  150. Şah, M.; Direkoğlu, C. Review and evaluation of player detection methods in field sports. Multimed. Tools Appl. 2021, 1–25.
  151. Rangasamy, K.; As’ari, M.A.; Rahmad, N.A.; Ghazali, N.F. Hockey activity recognition using pre-trained deep learning model. ICT Express 2020, 6, 170–174.
  152. Sozykin, K.; Protasov, S.; Khan, A.; Hussain, R.; Lee, J. Multi-label class-imbalanced action recognition in hockey videos via 3D convolutional neural networks. In Proceedings of the 19th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Busan, Korea, 27–29 June 2018; pp. 146–151.
  153. Fani, M.; Neher, H.; Clausi, D.A.; Wong, A.; Zelek, J. Hockey action recognition via integrated stacked hourglass network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 29–37.
  154. Cai, Z.; Neher, H.; Vats, K.; Clausi, D.A.; Zelek, J. Temporal hockey action recognition via pose and optical flows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
  155. Chan, A.; Levine, M.D.; Javan, M. Player Identification in Hockey Broadcast Videos. Expert Syst. Appl. 2021, 165, 113891.
  156. Carbonneau, M.A.; Raymond, A.J.; Granger, E.; Gagnon, G. Real-time visual play-break detection in sport events using a context descriptor. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, Portugal, 24–27 May 2015; pp. 2808–2811.
  157. Wang, H.; Ullah, M.M.; Klaser, A.; Laptev, I.; Schmid, C. Evaluation of local spatio-temporal features for action recognition. In Proceedings of the British Machine Vision Conference, London, UK, 7–10 September 2009.
  158. Um, G.M.; Lee, C.; Park, S.; Seo, J. Ice Hockey Player Tracking and Identification System Using Multi-camera video. In Proceedings of the IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Jeju, Korea, 5–7 June 2019; pp. 1–4.
  159. Guo, T.; Tao, K.; Hu, Q.; Shen, Y. Detection of Ice Hockey Players and Teams via a Two-Phase Cascaded CNN Model. IEEE Access 2020, 8, 195062–195073.
  160. Liu, G.; Schulte, O. Deep reinforcement learning in ice hockey for context-aware player evaluation. arXiv 2021, arXiv:1805.11088.
  161. Vats, K.; Neher, H.; Clausi, D.A.; Zelek, J. Two-stream action recognition in ice hockey using player pose sequences and optical flows. In Proceedings of the 16th Conference on Computer and Robot Vision (CRV), Kingston, QC, Canada, 29–31 May 2019; pp. 181–188.
  162. Vats, K.; Fani, M.; Clausi, D.A.; Zelek, J. Puck localization and multi-task event recognition in broadcast hockey videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4567–4575.
  163. Tora, M.R.; Chen, J.; Little, J.J. Classification of puck possession events in ice hockey. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 22–25 July 2017; pp. 147–154.
  164. Weeratunga, K.; Dharmaratne, A.; Boon How, K. Application of computer vision and vector space model for tactical movement classification in badminton. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 June 2017; pp. 76–82.
  165. Rahmad, N.; As’ari, M. The new Convolutional Neural Network (CNN) local feature extractor for automated badminton action recognition on vision based data. J. Phys. Conf. Ser. 2020, 1529, 022021.
  166. Steels, T.; Van Herbruggen, B.; Fontaine, J.; De Pessemier, T.; Plets, D.; De Poorter, E. Badminton Activity Recognition Using Accelerometer Data. Sensors 2020, 20, 4685.
  167. Binti Rahmad, N.A.; binti Sufri, N.A.J.; bin As’ari, M.A.; binti Azaman, A. Recognition of Badminton Action Using Convolutional Neural Network. Indones. J. Electr. Eng. Inform. 2019, 7, 750–756.
  168. Ghosh, I.; Ramamurthy, S.R.; Roy, N. StanceScorer: A Data Driven Approach to Score Badminton Player. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Austin, TX, USA, 13–20 September 2020; pp. 1–6.
  169. Cao, Z.; Liao, T.; Song, W.; Chen, Z.; Li, C. Detecting the shuttlecock for a badminton robot: A YOLO based approach. Expert Syst. Appl. 2021, 164, 113833.
  170. Chen, W.; Liao, T.; Li, Z.; Lin, H.; Xue, H.; Zhang, L.; Guo, J.; Cao, Z. Using FTOC to track shuttlecock for the badminton robot. Neurocomputing 2019, 334, 182–196.
  171. Rahmad, N.A.; Sufri, N.A.J.; Muzamil, N.H.; As’ari, M.A. Badminton player detection using faster region convolutional neural network. Indones. J. Electr. Eng. Comput. Sci. 2019, 14, 1330–1335.
  172. Hou, J.; Li, B. Swimming target detection and tracking technology in video image processing. Microprocess. Microsyst. 2021, 80, 103535.
  173. Cao, Y. Fast swimming motion image segmentation method based on symmetric difference algorithm. Microprocess. Microsyst. 2021, 80, 103541.
  174. Hegazy, H.; Abdelsalam, M.; Hussien, M.; Elmosalamy, S.; Hassan, Y.M.; Nabil, A.M.; Atia, A. IPingPong: A Real-time Performance Analyzer System for Table Tennis Stroke’s Movements. Procedia Comput. Sci. 2020, 175, 80–87.
  175. Baclig, M.M.; Ergezinger, N.; Mei, Q.; Gül, M.; Adeeb, S.; Westover, L. A Deep Learning and Computer Vision Based Multi-Player Tracker for Squash. Appl. Sci. 2020, 10, 8793.
  176. Brumann, C.; Kukuk, M.; Reinsberger, C. Evaluation of Open-Source and Pre-Trained Deep Convolutional Neural Networks Suitable for Player Detection and Motion Analysis in Squash. Sensors 2021, 21, 4550.
  177. Wang, S.; Xu, Y.; Zheng, Y.; Zhu, M.; Yao, H.; Xiao, Z. Tracking a golf ball with high-speed stereo vision system. IEEE Trans. Instrum. Meas. 2018, 68, 2742–2754.
  178. Zhi-chao, C.; Zhang, L. Key pose recognition toward sports scene using deeply-learned model. J. Vis. Commun. Image Represent. 2019, 63, 102571.
  179. Liu, H.; Bhanu, B. Pose-Guided R-CNN for Jersey Number Recognition in Sports. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2457–2466.
  180. Pobar, M.; Ivašić-Kos, M. Detection of the leading player in handball scenes using Mask R-CNN and STIPS. In Proceedings of the Eleventh International Conference on Machine Vision (ICMV 2018), Munich, Germany, 1–3 November 2018; Volume 11041, pp. 501–508.
  181. Van Zandycke, G.; De Vleeschouwer, C. Real-time CNN-based Segmentation Architecture for Ball Detection in a Single View Setup. In Proceedings of the 2nd International Workshop on Multimedia Content Analysis in Sports, Nice, France, 25 October 2019; pp. 51–58.
  182. Burić, M.; Pobar, M.; Ivašić-Kos, M. Adapting YOLO network for ball and player detection. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, Prague, Czech Republic, 19–21 February 2019; Volume 1, pp. 845–851.
  183. Pobar, M.; Ivasic-Kos, M. Active Player Detection in Handball Scenes Based on Activity Measures. Sensors 2020, 20, 1475.
  184. Komorowski, J.; Kurzejamski, G.; Sarwas, G. DeepBall: Deep Neural-Network Ball Detector. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2019; Volume 5, pp. 297–304.
  185. Liu, W. Beach sports image detection based on heterogeneous multi-processor and convolutional neural network. Microprocess. Microsyst. 2021, 82, 103910.
  186. Zhang, R.; Wu, L.; Yang, Y.; Wu, W.; Chen, Y.; Xu, M. Multi-camera multi-player tracking with deep player identification in sports video. Pattern Recognit. 2020, 102, 107260.
  187. Karungaru, S.; Matsuura, K.; Tanioka, H.; Wada, T.; Gotoda, N. Ground Sports Strategy Formulation and Assistance Technology Development: Player Data Acquisition from Drone Videos. In Proceedings of the 8th International Conference on Industrial Technology and Management (ICITM), Cambridge, UK, 2–4 March 2019; pp. 322–325.
  188. Hui, Q. Motion video tracking technology in sports training based on Mean-Shift algorithm. J. Supercomput. 2019, 75, 6021–6037.
  189. Castro, R.L.; Canosa, D.A. Using Artificial Vision Techniques for Individual Player Tracking in Sport Events. Proceedings 2019, 21, 21.
  190. Buric, M.; Ivasic-Kos, M.; Pobar, M. Player tracking in sports videos. In Proceedings of the IEEE International Conference on Cloud Computing Technology and Science (CloudCom), Sydney, Australia, 11–13 December 2019; pp. 334–340.
  191. Moon, S.; Lee, J.; Nam, D.; Yoo, W.; Kim, W. A comparative study on preprocessing methods for object tracking in sports events. In Proceedings of the 20th International Conference on Advanced Communication Technology (ICACT), Chuncheon, Korea, 11–14 February 2018; pp. 460–462.
  192. Xing, J.; Ai, H.; Liu, L.; Lao, S. Multiple player tracking in sports video: A dual-mode two-way bayesian inference approach with progressive observation modeling. IEEE Trans. Image Process. 2010, 20, 1652–1667.
  193. Liang, Q.; Wu, W.; Yang, Y.; Zhang, R.; Peng, Y.; Xu, M. Multi-Player Tracking for Multi-View Sports Videos with Improved K-Shortest Path Algorithm. Appl. Sci. 2020, 10, 864.
  194. Lu, W.L.; Ting, J.A.; Little, J.J.; Murphy, K.P. Learning to track and identify players from broadcast sports videos. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1704–1716.
  195. Huang, Y.C.; Liao, I.N.; Chen, C.H.; İk, T.U.; Peng, W.C. Tracknet: A deep learning network for tracking high-speed and tiny objects in sports applications. In Proceedings of the 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Taipei, Taiwan, 18–21 September 2019; pp. 1–8.
  196. Santiago, C.B.; Sousa, A.; Reis, L.P.; Estriga, M.L. Real time colour based player tracking in indoor sports. In Computational Vision and Medical Image Processing; Springer: Berlin/Heidelberg, Germany, 2011; pp. 17–35.
  197. Tan, S.; Yang, R. Learning similarity: Feature-aligning network for few-shot action recognition. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–7.
  198. Ullah, A.; Ahmad, J.; Muhammad, K.; Sajjad, M.; Baik, S.W. Action recognition in video sequences using deep bi-directional LSTM with CNN features. IEEE Access 2017, 6, 1155–1166.
  199. Russo, M.A.; Kurnianggoro, L.; Jo, K.H. Classification of sports videos with combination of deep learning models and transfer learning. In Proceedings of the International Conference on Electrical, Computer and Communication Engineering (ECCE), Chittagong, Bangladesh, 7–9 February 2019; pp. 1–5.
  200. Waltner, G.; Mauthner, T.; Bischof, H. Indoor Activity Detection and Recognition for Sport Games Analysis. arXiv 2014, arXiv:1404.6413.
  201. Soomro, K.; Zamir, A.R. Action recognition in realistic sports videos. In Computer Vision in Sports; Springer: Berlin/Heidelberg, Germany, 2014; pp. 181–208.
  202. Xu, K.; Jiang, X.; Sun, T. Two-stream dictionary learning architecture for action recognition. IEEE Trans. Circuits Syst. Video Technol. 2017, 27, 567–576.
  203. Chaudhury, S.; Kimura, D.; Vinayavekhin, P.; Munawar, A.; Tachibana, R.; Ito, K.; Inaba, Y.; Matsumoto, M.; Kidokoro, S.; Ozaki, H. Unsupervised Temporal Feature Aggregation for Event Detection in Unstructured Sports Videos. In Proceedings of the IEEE International Symposium on Multimedia (ISM), San Diego, CA, USA, 9–11 December 2019; pp. 9–97.
  204. Li, Y.; He, H.; Zhang, Z. Human motion quality assessment toward sophisticated sports scenes based on deeply-learned 3D CNN model. J. Vis. Commun. Image Represent. 2020, 71, 102702.
  205. Chen, H.T.; Chou, C.L.; Tsai, W.C.; Lee, S.Y.; Lin, B.S.P. HMM-based ball hitting event exploration system for broadcast baseball video. J. Vis. Commun. Image Represent. 2012, 23, 767–781.
  206. Punchihewa, N.G.; Yamako, G.; Fukao, Y.; Chosa, E. Identification of key events in baseball hitting using inertial measurement units. J. Biomech. 2019, 87, 157–160.
  207. Kapela, R.; Świetlicka, A.; Rybarczyk, A.; Kolanowski, K. Real-time event classification in field sport videos. Signal Process. Image Commun. 2015, 35, 35–45.
  208. Maksai, A.; Wang, X.; Fua, P. What players do with the ball: A physically constrained interaction modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 972–981.
  209. Goud, P.S.H.V.; Roopa, Y.M.; Padmaja, B. Player Performance Analysis in Sports: With Fusion of Machine Learning and Wearable Technology. In Proceedings of the 3rd International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 27–29 March 2019; pp. 600–603.
  210. Park, Y.J.; Kim, H.S.; Kim, D.; Lee, H.; Kim, S.B.; Kang, P. A deep learning-based sports player evaluation model based on game statistics and news articles. Knowl.-Based Syst. 2017, 138, 15–26.
  211. Tejero-de Pablos, A.; Nakashima, Y.; Sato, T.; Yokoya, N.; Linna, M.; Rahtu, E. Summarization of user-generated sports video by using deep action recognition features. IEEE Trans. Multimed. 2018, 20, 2000–2011.
  212. Javed, A.; Irtaza, A.; Khaliq, Y.; Malik, H.; Mahmood, M.T. Replay and key-events detection for sports video summarization using confined elliptical local ternary patterns and extreme learning machine. Appl. Intell. 2019, 49, 2899–2917.
  213. Rafiq, M.; Rafiq, G.; Agyeman, R.; Choi, G.S.; Jin, S.I. Scene classification for sports video summarization using transfer learning. Sensors 2020, 20, 1702.
  214. Khan, A.A.; Shao, J.; Ali, W.; Tumrani, S. Content-Aware summarization of broadcast sports Videos: An Audio–Visual feature extraction approach. Neural Process. Lett. 2020, 52, 1945–1968.
  215. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  216. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
  217. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  218. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  219. Iandola, F.N.; Moskewicz, M.W.; Ashraf, K.; Han, S.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
  220. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
  221. Murthy, C.B.; Hashmi, M.F.; Bokde, N.D.; Geem, Z.W. Investigations of object detection in images/videos using various deep learning techniques and embedded platforms—A comprehensive review. Appl. Sci. 2020, 10, 3280.
  222. Cao, D.; Zeng, K.; Wang, J.; Sharma, P.K.; Ma, X.; Liu, Y.; Zhou, S. BERT-Based Deep Spatial-Temporal Network for Taxi Demand Prediction. IEEE Trans. Intell. Transp. Syst. 2021. Early Access.
  223. Wang, J.; Zou, Y.; Lei, P.; Sherratt, R.S.; Wang, L. Research on recurrent neural network based crack opening prediction of concrete dam. J. Internet Technol. 2020, 21, 1161–1169.
  224. Chen, C.; Li, K.; Teo, S.G.; Zou, X.; Li, K.; Zeng, Z. Citywide traffic flow prediction based on multiple gated spatio-temporal convolutional neural networks. ACM Trans. Knowl. Discov. Data 2020, 14, 1–23.
  225. Zaremba, W.; Sutskever, I.; Vinyals, O. Recurrent Neural Network Regularization. arXiv 2014, arXiv:1409.2329.
  226. Jiang, X.; Yan, T.; Zhu, J.; He, B.; Li, W.; Du, H.; Sun, S. Densely connected deep extreme learning machine algorithm. Cogn. Comput. 2020, 12, 979–990.
  227. Zhang, Y.; Wang, C.; Wang, X.; Zeng, W.; Liu, W. FairMOT: On the fairness of detection and re-identification in multiple object tracking. Int. J. Comput. Vis. 2021, 129, 3069–3087.
  228. Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649.
  229. Hu, H.N.; Yang, Y.H.; Fischer, T.; Darrell, T.; Yu, F.; Sun, M. Monocular Quasi-Dense 3D Object Tracking. arXiv 2021, arXiv:2103.07351.
  230. Kim, A.; Osep, A.; Leal-Taixé, L. EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 11315–11321.
  231. Chaabane, M.; Zhang, P.; Beveridge, J.R.; O’Hara, S. DEFT: Detection embeddings for tracking. arXiv 2021, arXiv:2102.02267.
  232. Zeng, F.; Dong, B.; Wang, T.; Chen, C.; Zhang, X.; Wei, Y. MOTR: End-to-End Multiple-Object Tracking with TRansformer. arXiv 2021, arXiv:2105.03247.
  233. Wang, Z.; Zheng, L.; Liu, Y.; Li, Y.; Wang, S. Towards real-time multi-object tracking. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 107–122.
  234. Xu, Y.; Osep, A.; Ban, Y.; Horaud, R.; Leal-Taixé, L.; Alameda-Pineda, X. How to train your deep multi-object tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6787–6796.
  235. Sun, P.; Jiang, Y.; Zhang, R.; Xie, E.; Cao, J.; Hu, X.; Kong, T.; Yuan, Z.; Wang, C.; Luo, P. TransTrack: Multiple-object tracking with transformer. arXiv 2021, arXiv:2012.15460.
  236. Xu, Z.; Zhang, W.; Tan, X.; Yang, W.; Su, X.; Yuan, Y.; Zhang, H.; Wen, S.; Ding, E.; Huang, L. PointTrack++ for Effective Online Multi-Object Tracking and Segmentation. arXiv 2021, arXiv:2007.01549.
  237. Gupta, A.; Johnson, J.; Fei-Fei, L.; Savarese, S.; Alahi, A. Social GAN: Socially acceptable trajectories with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2255–2264.
  238. Phan-Minh, T.; Grigore, E.C.; Boulton, F.A.; Beijbom, O.; Wolff, E.M. CoverNet: Multimodal behavior prediction using trajectory sets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14074–14083.
  239. Li, X.; Ying, X.; Chuah, M.C. GRIP: Graph-based interaction-aware trajectory prediction. In Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 3960–3966.
  240. Salzmann, T.; Ivanovic, B.; Chakravarty, P.; Pavone, M. Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 683–700.
  241. Mohamed, A.; Qian, K.; Elhoseiny, M.; Claudel, C. Social-STGCNN: A social spatio-temporal graph convolutional neural network for human trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14424–14432.
  242. Amirian, J.; Zhang, B.; Castro, F.V.; Baldelomar, J.J.; Hayet, J.B.; Pettré, J. OpenTraj: Assessing prediction complexity in human trajectories datasets. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020; pp. 1–17.
  243. Yu, C.; Ma, X.; Ren, J.; Zhao, H.; Yi, S. Spatio-temporal graph transformer networks for pedestrian trajectory prediction. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 507–523.
  244. Wang, C.; Wang, Y.; Xu, M.; Crandall, D.J. Stepwise Goal-Driven Networks for Trajectory Prediction. arXiv 2021, arXiv:2103.14107.
  245. Chen, J.; Li, K.; Bilal, K.; Li, K.; Philip, S.Y. A bi-layered parallel training architecture for large-scale convolutional neural networks. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 965–976.