Ball Tracking System for Cricket Using Computer Vision

Introduction to Ball Tracking Systems for Cricket

This article will interest every cricket fan, because we are building a ball tracking system for cricket. After technology revolutionized the sport, a new rule was introduced for checking whether an umpire's decision is right or wrong. Known as DRS (Decision Review System), it is quite ubiquitous in cricket these days. Teams rely heavily on the DRS to overturn tight umpiring decisions, and that can often turn a match in their favor.

This ball tracking concept, part of the DRS, is now an all-too-familiar sight for cricket fans.

Cricket teams and franchises use this idea of ball-tracking to understand the weak zones of opposition players as well. Which position is a particular batsman vulnerable in? Where does the bowler consistently pitch in the death overs?

Ball tracking systems help teams analyze and understand these questions. 

What is Meant by a Ball Tracking System for Cricket?

In computer vision terms, a ball tracking system boils down to two concepts: Object Detection and Object Tracking.

Object detection is one of the most important and basic concepts in computer vision. It has a far-reaching role in different domains such as defense, space, sports, and other fields. Here, I have listed a few interesting use cases of Object Detection in Defense and Space:

  • Automatic Target Aimer
  • Training robots in real-world simulations to retrieve people from hazardous physical environments
  • Detecting Space Debris

Object Detection is the task of identifying an object and its location in an image. It is similar to an image classification problem, with one small difference: the task also involves identifying the location of the object, a concept known as Localization.

As you can see here, the location of the object is represented by a rectangular box that is popularly known as a Bounding Box. A bounding box represents the coordinates of the object in an image, as the short sketch below illustrates.
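To make the idea concrete, here is a minimal sketch of drawing a bounding box with OpenCV. The coordinates and file names are made up for illustration:

import cv2

# load any image; the path here is just a placeholder
img = cv2.imread('frame.png')

# a bounding box is commonly stored as (x, y, w, h):
# (x, y) is the top-left corner, w and h are the width and height
x, y, w, h = 150, 200, 20, 20   # hypothetical ball location

cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)   # red box
cv2.imwrite('frame_with_box.png', img)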

But wait: how is Object Detection different from Object Tracking? Object Tracking is a special case of Object Detection that applies only to video data. In object tracking, the object and its location are identified in every frame of a video.

NOTE: Object Detection applies to a single image, whereas Object Tracking applies to a sequence of fast-moving frames.

Use Cases

In this section, I will explain some use cases of ball tracking systems in sports.

Use Case 1: LBW

LBW stands for Leg Before Wicket. This concept is related to Hawk-Eye in cricket. The trajectory of the ball assists in deciding whether the ball has pitched in line with the stumps or outside. It also tells us whether the ball would have gone on to hit the stumps or not.

As another example, in tennis, during serves or rallies, the ball tracking system helps determine whether the ball has landed inside or outside the permissible lines on the court.

Use Case 2: Detecting the Strong and Weak Zones of a Batsman

Every opposition has match-winning, experienced players. A bowler's aim is to pick their wicket at the earliest opportunity, which is crucial for any team to win matches. Ball-tracking technology makes this possible by analyzing raw videos and generating heat maps.

Using these heat maps, analysts can easily identify a batsman's strong and weak zones.
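As an illustration, here is a minimal sketch of how such a heat map could be generated once ball positions have been extracted. The (x, y) pitch locations below are randomly generated, purely hypothetical data:

import numpy as np
import matplotlib.pyplot as plt

# hypothetical (x, y) pitch locations produced by a ball tracker
rng = np.random.default_rng(0)
xs = rng.normal(loc=120, scale=30, size=500)   # horizontal position (pixels)
ys = rng.normal(loc=400, scale=60, size=500)   # length down the pitch (pixels)

# a 2D histogram of pitch locations acts as the heat map
plt.figure(figsize=(4, 8))
plt.hist2d(xs, ys, bins=20, cmap='hot')
plt.colorbar(label='deliveries')
plt.xlabel('pitch x (pixels)')
plt.ylabel('pitch y (pixels)')
plt.title('Hypothetical pitch map')
plt.show()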

Use Case 3: Soccer Ball Tracking

In soccer, a tracking system uses the full-field and boundary grid concepts to resume tracking in ball-track-lost and ball-out-of-frame situations.

Different Approaches to Ball Tracking

A number of deep learning algorithms, as well as pre-trained models, have been developed for ball tracking. But they face some challenges when it comes to tracking a fast-moving cricket ball.

Here are some of those challenges:

  1. Some deliveries move at very high speeds of around 130 to 160 kph. Because of this, the ball appears as a blurred trace along its path.
  2. Objects on the ground that look similar to the ball can be hard to classify. For example, the 30-yard circle dots on the field, when viewed from the top, look almost like a ball.

Hence, in this article, I will focus on 2 simple approaches to track a fast-moving ball in a sports video:

  • Sliding Window
  • Segmentation by Color

Let’s discuss them in detail.

Approach 1 – Sliding Window

One of the simplest ways could be to break down the image into smaller patches, say 3 * 3 or 5 * 5 grids, and then classify every patch into one of 2 classes – whether a patch contains a ball or not. This approach is known as the sliding window approach as we are sliding the window of a patch across every part of an image.

This method is really simple. But it is a time-consuming and computationally expensive process, as it has to classify every patch of the image. A minimal sketch of the idea follows.
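Here is a short sketch of the sliding window idea. The classify_patch() function is a hypothetical stand-in for a trained patch classifier:

import cv2

def classify_patch(patch):
    # placeholder: a real system would run a trained classifier here
    return patch.mean() > 200

# grayscale frame; the path is just an example
img = cv2.imread('frame.png', 0)

win = 32      # window (patch) size in pixels
stride = 16   # step between consecutive windows

detections = []
for y in range(0, img.shape[0] - win, stride):
    for x in range(0, img.shape[1] - win, stride):
        patch = img[y:y + win, x:x + win]
        if classify_patch(patch):
            detections.append((x, y, win, win))   # (x, y, w, h) boxes

Note how the number of classifier calls grows with the image size and shrinks with the stride, which is exactly why this approach gets expensive.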

So next, I will discuss the alternative approach to the sliding window.

Approach 2 – Segmentation by Color

Instead of considering every patch, we can reduce the number of patches to classify based on the color of the ball. Since we know the ball's color, we can easily separate the patches whose color is similar to the ball's from the rest. The sketch below illustrates the idea.
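This is a minimal sketch that keeps only near-white pixels, since the ball in our video is white. The 200–255 intensity range is an assumption you would tune per video:

import cv2

img = cv2.imread('frame.png')   # example path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# keep only pixels whose intensity is close to the (white) ball's color
mask = cv2.inRange(gray, 200, 255)

# everything that isn't roughly ball-colored is discarded up front
result = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('white_only.png', result)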

Tutorial

Let's read the video file and save its frames to disk:
import cv2
import os

video = '28.mp4'

# Create a VideoCapture object and read from the input file
# If the input is a camera, pass 0 instead of the video file name
cap = cv2.VideoCapture(video)
cnt = 0

# Check if the video opened successfully
if not cap.isOpened():
    print("Error opening video stream or file")

# Make sure the output directory exists before writing frames
os.makedirs('frames', exist_ok=True)

ret, first_frame = cap.read()

# Read until the video is completed
while cap.isOpened():

    # Capture frame-by-frame
    ret, frame = cap.read()

    if ret:

        # removing the scorecard
        roi = frame[:800, :]

        # cropping the center of the image
        thresh = 600
        end = roi.shape[1] - thresh
        roi = roi[:, thresh:end]

        cv2.imshow("image", roi)
        # Press Q on the keyboard to exit
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break

        cv2.imwrite('frames/' + str(cnt) + '.png', roi)
        cnt = cnt + 1

    # Break the loop
    else:
        break

cap.release()
cv2.destroyAllWindows()

Reading the Frames
import matplotlib.pyplot as plt
import cv2
import numpy as np
import os
import re

# listing all the file names in numerical order
frames = os.listdir('frames/')
frames.sort(key=lambda f: int(re.sub(r'\D', '', f)))

# reading the frames: grayscale + Gaussian blur to suppress noise
images = []
for i in frames:
    img = cv2.imread('frames/' + i)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = cv2.GaussianBlur(img, (25, 25), 0)
    images.append(img)

images = np.array(images)

Our main objective is to track the ball on the pitch, so we need to extract only the frames that contain the pitch. Here, I am using the concept of scene detection to accomplish this task: when the broadcast cuts to a different camera angle, the pixel difference between consecutive frames spikes.

# count the changed pixels between every pair of consecutive frames
nonzero = []
for i in range(len(images) - 1):
    mask = cv2.absdiff(images[i], images[i + 1])
    _, mask = cv2.threshold(mask, 50, 255, cv2.THRESH_BINARY)
    num = np.count_nonzero(mask.ravel())
    nonzero.append(num)

# plot the frame-to-frame differences; a spike marks a scene change
x = np.arange(0, len(images) - 1)
y = nonzero

plt.figure(figsize=(20, 4))
plt.scatter(x, y)
plt.show()
 


# the first frame whose difference exceeds the threshold marks the scene change
threshold = 15 * 10e3   # i.e. 150,000 changed pixels; tune per video
scene_change_idx = len(images) - 1
for i in range(len(images) - 1):
    if nonzero[i] > threshold:
        scene_change_idx = i
        break

# keep only the frames up to the first scene change
frames = frames[:(scene_change_idx + 1)]
 

Now we have obtained the frames that contain the pitch. Next, we will implement the segmentation approach (i.e., Approach 2). Let's carry out all the steps of the approach on a single frame first.

We will read the frame and apply a Gaussian blur to remove noise from the image.

img= cv2.imread('frames/' + frames[10])
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray,(25,25),0)
 
plt.figure(figsize=(5,10))
plt.imshow(gray,cmap='gray')

As the color of the ball is known, we can easily segment the white-colored objects in the image. Here, 200 acts as the threshold: any pixel value below 200 is set to 0 and any value above 200 is set to 255.

_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

plt.figure(figsize=(5, 5))
plt.imshow(mask, cmap='gray')

As you can see, the white-colored objects have been segmented: white pixels indicate white-colored objects and black indicates everything else. We have separated the white-colored objects from the rest.

Now, we will find the contours of segmented objects in an image.

# OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returned (image, contours, hierarchy)
contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

Draw the contours on the original image.

# draw on a copy of the original color image so the green contours are visible
img_copy = np.copy(img)
cv2.drawContours(img_copy, contours, -1, (0, 255, 0), 3)

plt.imshow(cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB))

Now, extract candidate patches from the image. We crop a region around each contour (with a small margin) and keep only the roughly square, ball-sized ones:

!rm -r patch/*
num = 20
cnt = 0
for i in range(len(contours)):
    x, y, w, h = cv2.boundingRect(contours[i])

    # aspect ratio of the bounding box (1.0 means a perfect square)
    numer = min([w, h])
    denom = max([w, h])
    ratio = numer / denom

    # expand the crop by a margin of `num` pixels when possible
    if x >= num and y >= num:
        xmin, ymin = x - num, y - num
        xmax, ymax = x + w + num, y + h + num
    else:
        xmin, ymin = x, y
        xmax, ymax = x + w, y + h

    # keep only small, roughly square patches (ball-like candidates)
    if ratio >= 0.5 and w <= 10 and h <= 10:
        print(cnt, x, y, w, h, ratio)
        cv2.imwrite("patch/" + str(cnt) + ".png", img[ymin:ymax, xmin:xmax])
        cnt = cnt + 1
Build the image classifier. The training patches are organized into folders named by class label (folder 0 holds the ball patches, as the prediction code below assumes):
import os
import cv2
import numpy as np
import pandas as pd

folders = os.listdir('data/')

# load every labeled patch, resized to a fixed 25 x 25 size
images = []
labels = []
for folder in folders:
    files = os.listdir('data/' + folder)
    for file in files:
        img = cv2.imread('data/' + folder + '/' + file, 0)
        img = cv2.resize(img, (25, 25))
        images.append(img)
        labels.append(int(folder))

images = np.array(images)
# flatten each 25 x 25 patch into a 625-dimensional feature vector
features = images.reshape(len(images), -1)

Split the dataset

from sklearn.model_selection import train_test_split
x_tr,x_val,y_tr,y_val = train_test_split(features,labels, test_size=0.2, stratify=labels,random_state=0)

Now, train a baseline model for identifying the patches containing the ball:

from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(max_depth=3)
rfc.fit(x_tr,y_tr)
 
Evaluate the model on the validation set:
from sklearn.metrics import classification_report
y_pred = rfc.predict(x_val)
print(classification_report(y_val,y_pred))
Now, apply the whole pipeline to every frame:
!rm -r ball/*
ball_df = pd.DataFrame(columns=['frame', 'x', 'y', 'w', 'h'])

for idx in range(len(frames)):

    # segment the white objects in the frame, exactly as before
    img = cv2.imread('frames/' + frames[idx])
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (25, 25), 0)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    !rm -r patch/*

    num = 20
    cnt = 0
    df = pd.DataFrame(columns=['frame', 'x', 'y', 'w', 'h'])
    for i in range(len(contours)):
        x, y, w, h = cv2.boundingRect(contours[i])

        numer = min([w, h])
        denom = max([w, h])
        ratio = numer / denom

        if x >= num and y >= num:
            xmin, ymin = x - num, y - num
            xmax, ymax = x + w + num, y + h + num
        else:
            xmin, ymin = x, y
            xmax, ymax = x + w, y + h

        # keep the roughly square candidates and remember their coordinates
        if ratio >= 0.5:
            df.loc[cnt, 'frame'] = frames[idx]
            df.loc[cnt, 'x'] = x
            df.loc[cnt, 'y'] = y
            df.loc[cnt, 'w'] = w
            df.loc[cnt, 'h'] = h

            cv2.imwrite("patch/" + str(cnt) + ".png", img[ymin:ymax, xmin:xmax])
            cnt = cnt + 1

    files = os.listdir('patch/')
    if len(files) > 0:

        files.sort(key=lambda f: int(re.sub(r'\D', '', f)))

        # classify every candidate patch of this frame
        test = []
        for file in files:
            img = cv2.imread('patch/' + file, 0)
            img = cv2.resize(img, (25, 25))
            test.append(img)

        test = np.array(test)
        test = test.reshape(len(test), -1)
        y_pred = rfc.predict(test)
        prob = rfc.predict_proba(test)

        # class 0 is the ball; keep the most confident prediction above 0.7
        if 0 in y_pred:
            ind = np.where(y_pred == 0)[0]
            proba = prob[:, 0]
            confidence = proba[ind]
            confidence = [i for i in confidence if i > 0.7]
            if len(confidence) > 0:

                maximum = max(confidence)
                ball_file = files[list(proba).index(maximum)]

                img = cv2.imread('patch/' + ball_file)
                cv2.imwrite('ball/' + str(frames[idx]), img)

                no = int(ball_file.split(".")[0])
                ball_df.loc[idx] = df.loc[no]
            else:
                ball_df.loc[idx, 'frame'] = frames[idx]

        else:
            ball_df.loc[idx, 'frame'] = frames[idx]
 
Now, keep only the frames where the ball was located, along with its coordinates:
ball_df.dropna(inplace=True)
print(ball_df)

Next, draw a bounding box around the ball in every frame that contains it:
num = 10
# ball_df keeps its original (non-contiguous) index after dropna,
# so iterate over the rows rather than over range(len(...))
for idx, row in ball_df.iterrows():

    # draw the bounding box on the frame
    img = cv2.imread('frames/' + row['frame'])

    x = int(row['x'])
    y = int(row['y'])
    w = int(row['w'])
    h = int(row['h'])

    xmin = x - num
    ymin = y - num
    xmax = x + w + num
    ymax = y + h + num

    cv2.rectangle(img, (xmin, ymin), (xmax, ymax), (255, 0, 0), 2)
    cv2.imwrite("frames/" + row['frame'], img)

The final step: stitch the annotated frames back into a video:
frames = os.listdir('frames/')
frames.sort(key=lambda f: int(re.sub(r'\D', '', f)))

frame_array = []

for i in range(len(frames)):
    # reading each file
    img = cv2.imread('frames/' + frames[i])
    height, width, layers = img.shape
    size = (width, height)
    # inserting the frames into an image array
    frame_array.append(img)

# write to a new file name so we don't overwrite the input video;
# 'mp4v' is a codec that works with the .mp4 container
out = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 25, size)
for i in range(len(frame_array)):
    # writing each frame to the video
    out.write(frame_array[i])
out.release()

Conclusion

In this article, we understood ball tracking for cricket using a baseline model built for image classification. The challenging part is choosing the hyperparameters of this approach, such as the size of the Gaussian filter and the threshold value, so that only the important pixels are extracted; they must be adjusted depending on the type of video. A small sketch of one way to keep them adjustable follows.
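As a closing sketch, one convenient option is to wrap the segmentation step in a small helper function. The parameter names and defaults below are illustrative, not part of the original pipeline:

import cv2

def segment_ball(frame, blur_ksize=25, thresh=200):
    # blur_ksize and thresh are the two hyperparameters discussed above;
    # tune them per video (lighting, ball color, camera distance)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask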
