

SURF TRACKING OF OCCLUDED IMAGES

A Thesis by
CHETANA DIVAKAR NIMMAKAYALA

Submitted to the Office of Graduate Studies of
Texas A&M University-Commerce
in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE
May 2016

Approved by:
Advisor: Nikolay Metodiev Sirakov
Committee: Abdullah Arslan, Mutlu Mete
Head of Department: Sang Suh
Dean of the College: Brent Donham
Dean of Graduate Studies: Arlene Horne

Copyright © 2016 CHETANA DIVAKAR NIMMAKAYALA

ABSTRACT

SURF TRACKING OF OCCLUDED IMAGES
Chetana Divakar Nimmakayala, MS
Texas A&M University-Commerce, 2016
Advisor: Professor Nikolay Metodiev Sirakov, PhD

For a large class of applications, including human-computer interaction, security, surveillance, and traffic control, detecting and tracking targets is an essential step in understanding the motion of objects. Target tracking becomes more challenging and complex as the scene starts to include more real-world objects. In this study we present a method to capture and track moving objects using Speeded Up Robust Features (SURF). The target is user specified. SURF features are extracted from the target and matched to the SURF features in each image frame taken from the video. When a match is found, the target is boxed to show its location. The process is repeated for all frames of the video. Occlusion is taken into consideration, and the algorithm successfully tracks both partially and completely occluded objects. The approach is applied to both synthetically developed and real-life videos to capture different kinds of targets in different scenarios. Concepts were validated in a MATLAB environment and the experimental results are documented.

ACKNOWLEDGEMENTS

I would like to express my sincere gratitude to my advisor, Dr. Nikolay M. Sirakov, for the continuous support of my master's thesis and related research, and for his motivation, patience, and immense knowledge. His encouragement gave me the incentive to widen my research from various perspectives. His guidance helped me complete this study successfully. I would also like to thank Dr. Mete for his help with the programming code. Last but not least, my thanks go to all committee members for their insightful comments and motivation.

TABLE OF CONTENTS

1. INTRODUCTION
   Statement of the Problem
   Purpose of the Study
   Limitations
2. SURVEY OF THE LITERATURE
   GMM and Blob Analysis
   Speeded Up Robust Features (SURF)
   Object Matching
   Geometric Transformation
   Occlusion
3. METHOD OF PROCEDURE/ALGORITHM
4. EXPERIMENTAL RESULTS
5. PROGRAM GUIDE
6. CONCLUSION
7. FUTURE WORK
REFERENCES
VITA

LIST OF TABLES

1. Time taken to process the frames when there is no occlusion
2. Time taken to process the frames when there is occlusion

LIST OF FIGURES

1. Blob Analysis function block with input and outputs
2. Working of a Gaussian Mixture Model where the background is segmented and only the foreground is displayed. After noise removal, a template is saved and applied to every frame in which the object in motion is detected
3. Final implementation, which detects the moving cars in the street from a fixed CCTV camera
4. The Blob Analysis algorithm when the target is too close to the camera source. The object is not detected in one single blob, hence the several boxes tracking the motion
5. It takes three arithmetic operations to calculate the sum of the intensities inside a rectangular region of any size: Σ = A − B − C + D
6. The integral image can be used to upscale at constant cost
7. Scale space, multiresolution pyramids
8. Haar wavelet filters used to compute the responses in the x and y directions. Darker parts have a weight of −1 and lighter parts a weight of +1
9. Orientation assignment for three different quadrants
10. Illumination difference cannot be handled by this algorithm. The first two images are similar in illumination and can be matched by the algorithm, but not the third one, which differs in illumination
11. Two matrices containing the descriptors/features of the target and the image frame. Matrix 1 has m x 64 elements, where m is the number of SURF points; matrix 2 has n x 64 elements, where n is the number of SURF points
12. Two similar images between which a SURF match has been made. Similar points are joined with yellow lines
13. Locating the clock on Big Ben using the matching algorithm. (a) shows the features with the most probable match and (b) shows the specific location highlighted by a box
14. Matching between two images with scaling differences of 1.5 and 0.5
15. Matching between two images with rotational differences of 90, 180 and 270 degrees
16. Function breakdown of maintaining the geometric state of the target once found after matching
17. Different types of occlusion. Image (B) is partially occluded whereas (C) and (D) show complete occlusion
18. Matching between an initial image and its presence in a video frame while partially occluded
19. Initial selection of the target. The first frame is frozen and the user is asked to pick the target by marking two corners of the box which encloses it
20. Tracking a car traveling through a straight path at an intersection. The detected target is enclosed in a yellow box
21. Tracking of an airplane as captured through a still camera. The orientation of the plane changes in every frame as it flies by, compared to the first frame. The detected target is enclosed in a yellow box
22. Tracking of a particular race car in the presence of many similar-looking targets. The algorithm detects the correct target using its specific SURF points. The detected target is enclosed in a yellow box
23. Synthetically generated video in which a deck of cards occluded behind a paper is tracked. (a) before the occlusion; (b), (c), (d) during the occlusion; (e) and (f) after the occlusion, as the target emerges. The tracked object is bounded in a yellow box
24. The motion of a partially occluded airplane is tracked. (a) and (b) before the object becomes occluded; (c) during the occlusion, when the object is hidden; (d), (e) and (f) when the object comes out of occlusion
25. An example with a moving camera. As long as the target is in the frame, the algorithm captures it. The detected target is enclosed in a yellow box
26. An example of complete occlusion where the boat, which is the target, disappears and reappears from behind the other boat. (a) and (b) before the object becomes occluded; (c) during the occlusion, when the object is hidden; (d), (e) and (f) when the object comes out of occlusion
27. The detected target is enclosed in a yellow box. Another example of complete occlusion where the target boat disappears and reappears from behind the other boat. (a) before the object gets occluded; (b), (c) and (d) during the occlusion; (e) and (f) when the object comes out of occlusion

Chapter 1
INTRODUCTION

Object detection and tracking are some of the most useful and challenging tasks in computer vision. It is one of the most prevalent branches today, helping to determine meaningful events and suspicious activities. The field tracks and recognizes objects of interest from various sources. Object tracking is mostly a combination of three major steps: finding objects, tracking these objects from frame to frame, and evaluating tracking results to conclude semantic events and latent phenomena (Porikli & Yilmaz, 2012).

Of the numerous topics in computer vision, detecting and tracking a moving object are of great interest. Motion in particular is an important cue for computer vision. For many applications, any object in motion is of interest and anything else can be ignored (Butler, Bove, & Sridharan, 2005). Early methods of motion detection relied on the detection of temporal changes, but were then enhanced with the use of detection masks and filters, which improved the efficiency of the change-detection algorithms. However, these enhancements were limited by their inability to provide a perfect tracker and could not be used without prior context-based knowledge. Hence, motion detection came to be studied as a statistical estimation problem (Rachid & Nikos, 2000).
Rachid notes that "tracking goes further than motion detection and requires extra motion-based measurements, specifically the segmentation of the corresponding motion parameters" (Rachid & Nikos, 2000, p. 260). The extent of any generated algorithm depends on the application (Zhu, 2011). In automated video surveillance, which deals with real-time tracking, we may need to detect the activity of a moving subject under observation with high accuracy and precision; sometimes mere tracking is sufficient. The accuracy demanded of an algorithm is determined by its applications, and real-world applications favor algorithms with faster computing times.

Statement of the Problem

In this research I studied a method to effectively track the motion of an object under partial and complete occlusion. An algorithm is presented that captures the target object by picking up feature points, such as SURF points, that are specific to the target and using them to keep track of its motion through the frames of the video. Both partial and complete occlusion are taken into consideration, and the object is detected once it reappears completely.

Purpose of the Study

We started this research studying Gaussian mixture models and blob analysis to capture the target. A Gaussian Mixture Model is a time-consuming process; depending on the length of the video, the time taken to generate a mask can be very long (Zhu, 2011). The bigger the video and the number of frames, the more time is needed to generate the mask that distinguishes the background from the foreground. The video also has to be taken from a fixed camera source. Blob analysis simply traces all objects that show a difference in properties from the background, such as brightness or color. Hence, all moving objects are detected, without distinguishing one from another. Also, if the target is very close to the camera source, the algorithm is unable to capture the entire target in one blob, and the target breaks up into many blobs. With so many disadvantages, its real-time application is not favorable, so we wanted to devise a method that would perform more accurately.

Considering the above disadvantages, we proposed to apply the SURF method, which specifically picks out the target and distinguishes it from other moving or similar-looking stationary objects in the video. The SURF points of the target are feature points used to track it. These feature points are invariant to scaling and rotation; hence, changes in the target's size or orientation are still captured and the movement is tracked. Occluded objects can also be detected once they are visible again: SURF points are taken and matched to make sure we recapture the same target.

Limitations

Object tracking is a complex procedure for many reasons (Yilmaz, Javed & Shah, 2006). There can be unwanted, unknown noise. The target object may be partially or fully occluded, and the algorithm used may or may not be able to handle occlusion. The nature of the object and the clarity of the camera are very important for accurate tracking. Object shapes may be too complex for an active contour to work on accurately (Kass, Witkin & Terzopoulos, 1988). Real-time processing has its own requirements. Hence, it is very important to know the limitations of an algorithm in order to define its domain of application. Every algorithm has its own specifications and limitations. Previous research using Gaussian Mixture Models (GMM) and blob analysis gave a highly flexible algorithm that worked well on a variety of videos; however, the time taken to compute the output was too long, and the target was not captured as one object when too close to the camera source.
Chapter 2
SURVEY OF THE LITERATURE

Visual tracking is an essential branch of computer vision. It is essential to have a correct algorithm that can ideally look for the target object with the correct set of specifications. The algorithm has to be precise in breaking the video into a given number of frames in order to obtain accurate results.

Background subtraction is an important tool for object tracking. It is a technique in image processing in which the foreground of the image is extracted for further processing. Kim, Chalidabhongse, Harwood and Davis present a method for foreground-background separation using codebooks, which "represent a compressed form of background model for a long image sequence" (Kim, Chalidabhongse, Harwood & Davis, 2005). They also highlight two features that improve the algorithm: layered modeling/detection and adaptive codebook updating. Simpler background models have been defined, which create a unimodal distribution, and other novel algorithms detect moving objects against a static background scene (Horprasert, Harwood & Davis, 1999).

We studied GMM, a probabilistic method that uses training frames to learn the difference between the foreground and the background. It assumes data points are generated from a mixture of a finite number of Gaussian distributions (Zoran, 2004). We applied the vision.ForegroundDetector system object in MATLAB to implement a segmentation method with GMM (MathWorks, 2016). A video is broken down into its frames to compute a foreground mask/background model. The ForegroundDetector system object compares a color or grayscale video frame with a background model to determine whether individual pixels belong to the background or the foreground. The object is differentiated from its background using differences in color and brightness; the change in intensity helps the algorithm track it.
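To make the mixture idea concrete, here is a small pure-Python sketch (illustrative only; the thesis works in MATLAB, and all numbers below are hypothetical) that evaluates a one-dimensional, two-component Gaussian mixture density of the kind a GMM background subtractor maintains per pixel:

```python
import math

def gaussian(x, mean, var):
    """Density of a 1-D Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_density(x, weights, means, variances):
    """Weighted sum of M Gaussian components: p(x) = sum_i w_i * g(x | mu_i, var_i)."""
    return sum(w * gaussian(x, m, v)
               for w, m, v in zip(weights, means, variances))

# Hypothetical per-pixel model: a dominant background mode near intensity 50
# and a weaker mode near intensity 200.
weights, means, variances = [0.8, 0.2], [50.0, 200.0], [100.0, 400.0]

# A pixel near the background mode is far more probable under the model than
# one the mixture does not explain well; this likelihood is the basis for
# labeling pixels as background or foreground.
p_background = mixture_density(52.0, weights, means, variances)
p_unexplained = mixture_density(125.0, weights, means, variances)
```

A real background subtractor such as vision.ForegroundDetector also updates the weights, means, and variances online from the training frames; this sketch only shows how the mixture scores a pixel value.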
It does a reasonable job of extracting the shape of the foreground; however, this method cannot handle movement of the camera.

Prediction is another way to track the motion of the target. Here the algorithm predicts, or estimates, the position of the target in the next frame using information from the previous frames. A measuring tool finds the difference between the real and estimated positions in the present frame (Kandhare, Arslan & Sirakov, 2014), and that difference corrects the estimation. This method saves time because the algorithm knows the specific region to focus on, rather than searching the entire video frame. The most common prediction algorithm is the Kalman filter (Kandhare, Arslan & Sirakov, 2014). A Kalman filter is an optimal estimator: it infers parameters of interest from indirect, inaccurate, and uncertain observations. The best way to work with a prediction algorithm is to define a mass center. The mass center of the target is called the measured position and can be applied to calculate the estimated position of the target in the next frame (Kandhare, Arslan & Sirakov, 2014). While a prediction algorithm has many advantages, it fails to distinguish between the target and the background if the scene is too crowded.

GMM and Blob Analysis

Blob analysis is used to detect corresponding regions in an image individually. Regions to be detected can be based on any criterion; here we use objects in motion. "It is a fundamental technique of computer vision based on analysis of consistent image regions" (Adaptive Vision, 2016). It is used to separate objects that can be clearly distinguished from the background. Any object in motion will have properties, such as color, contrast and brightness, that differ from the background. A blob is hence used to capture these properties and detect the moving object.
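As a simplified, concrete illustration (pure Python with a toy image, not the MATLAB toolbox code used in the thesis), the following sketch differences two frames, thresholds the change to form a binary blob image, and reports area, centroid and bounding-box statistics for the changed region — the same three statistics a blob-analysis step typically returns:

```python
def blob_stats(binary):
    """Compute Area, Centroid and BBox of the foreground pixels in a binary
    image (list of rows of 0/1), treating them as a single blob."""
    points = [(r, c) for r, row in enumerate(binary)
                     for c, v in enumerate(row) if v]
    if not points:
        return None
    rows = [r for r, _ in points]
    cols = [c for _, c in points]
    area = len(points)
    centroid = (sum(rows) / area, sum(cols) / area)
    # BBox as (top, left, height, width).
    bbox = (min(rows), min(cols),
            max(rows) - min(rows) + 1, max(cols) - min(cols) + 1)
    return area, centroid, bbox

def moving_blob(frame_a, frame_b, threshold):
    """Frame differencing: mark pixels whose intensity change exceeds threshold."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# Toy 5x5 frames: a bright 2x2 patch appears between the two frames.
f1 = [[0] * 5 for _ in range(5)]
f2 = [[0] * 5 for _ in range(5)]
f2[1][1] = f2[1][2] = f2[2][1] = f2[2][2] = 255

area, centroid, bbox = blob_stats(moving_blob(f1, f2, 50))
```

Real blob analysis also labels multiple disconnected regions separately; this sketch collapses everything that changed into one blob to keep the statistics easy to follow.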
We use the blob analysis function from MATLAB, part of the Computer Vision System Toolbox (MathWorks, 2016), to calculate statistics for labeled regions in a binary image. Statistics can be selected based on the requirement; they are present in the Selector block from Simulink (MathWorks, 2016). These statistics help distinguish a particular part of the image frame that belongs to a blob. All objects that are part of a blob are in motion.

Figure 1. Blob Analysis function block with input and outputs.

The function shown is an example of blob analysis. It takes a vector BW representing a binary image as input and returns three components as output: Area, Centroid, and BBox. Area is a vector containing the number of pixels that are part of each blob. Centroid is an M x 2 matrix containing the coordinates of the centroids, where M is the number of blobs. BBox is an M x 4 matrix containing the coordinates of the bounding boxes, where M again is the number of blobs (MathWorks, 2016).

A Gaussian Mixture Model (GMM) represents a sum of M weighted terms, as given by the following equation (Douglas):

p(x | λ) = Σ_{i=1}^{M} w_i g(x | μ_i, Σ_i)    (1)

In equation (1), x is a continuous real-valued vector, w_i (i = 1, …, M) are the mixture weights, and g(x | μ_i, Σ_i) are the "Gaussian density components" (Douglas), used as a parametric model for the probability distribution of the continuous measurement. GMM parameters are obtained from the frames of an input video using an iterative algorithm, estimated from a well-trained prior model. GMMs are capable of representing a larger class of data over a sample distribution and form smooth approximations to arbitrary density shapes.

Figure 2 (a), (b), (c) and (d). Results show the work of a Gaussian Mixture Model where the background is segmented and only the foreground is displayed. After noise removal, the template is saved and applied to every frame in which the object in motion is detected.

Using a mask generated by GMM on the entire video, the algorithm applies it to every frame to distinguish the background from the foreground. It picks up all moving objects, which are subtracted from the mask. It cannot specifically distinguish between the moving targets; it simply captures everything that is in motion, or most of it. The result is enclosed in a red box in the output, as seen in part (d) of Figure 2.

Figure 3 (a), (b), (c) and (d). Results show the final implementation using GMM and blob analysis. After the initial stage of applying the mask to subtract the background from the foreground, the algorithm detects the moving cars in the street from a fixed CCTV camera.

A blob analysis algorithm is highly flexible and works accurately with any background-segmentation software. However, it also has its flaws: when the object is too close to the camera, it fails to capture the entire object as one.

Figure 4 (a) and (b). The Blob Analysis algorithm when the target is too close to the camera source. The object is not detected in one single blob, hence the several boxes tracking the motion.

Speeded Up Robust Features (SURF)

SURF is a local feature detector and descriptor with applications in object recognition, registration, classification, and 3D reconstruction. It outperforms earlier methods in repeatability, distinctiveness, robustness, and computation time (Bay, Ess, Tuytelaars & Gool, 2008). To determine points of interest, the SURF method approximates the determinant of the Hessian blob detector, which is calculated using a precomputed integral image.
The points' feature descriptors are determined from sums of Haar wavelet responses calculated in a neighborhood of the point of interest. The descriptors obtained can be utilized to find and recognize objects, people or faces, to reconstruct 3D scenes, and to track objects (Bay, Ess, Tuytelaars & Gool, 2008). In the following application, SURF is used from MATLAB's Computer Vision Toolbox.

SURF Interest Points

This approach uses a combination of a basic Hessian-matrix approximation and integral images, which minimizes the computation time drastically. The integral image IΣ(X) for a point X = (x, y)^T is the sum of all pixels of I inside the rectangle between the origin and X (Bay, Ess, Tuytelaars & Gool, 2008):

IΣ(X) = Σ_{i=0}^{x} Σ_{j=0}^{y} I(i, j)    (2)

Once the integral image has been computed, only three more arithmetic operations — two subtractions and one addition — are needed to calculate the sum of the intensities over any rectangle within the image, as shown in Figure 5.

Figure 5. It takes three arithmetic operations to calculate the sum of the intensities inside a rectangular region of any size: Σ = A − B − C + D.

Hessian Matrix

Finding SURF points relies primarily on the Hessian matrix; its advantages include accuracy and performance. For an image I, a pixel X = (x, y) and scale σ, the Hessian matrix is defined as (Ahuja & Todorovic, 2008):

H(X, σ) = [ Lxx(X, σ)  Lxy(X, σ) ;  Lxy(X, σ)  Lyy(X, σ) ]    (3)

Here Lxx(X, σ) is the convolution of the image I in x with the second-order derivative of the Gaussian function

g(x) = (1 / √(2πσ²)) e^{−x² / (2σ²)}    (4)

where σ² is the variance of the mask at (x, y); the other components are calculated similarly. In the method using the Hessian detector, we apply an approximated Hessian matrix computed with box filters. This shows better results than using just the Hessian matrix to approximately replace the Gaussian filter.
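To illustrate the constant-time box sums that make this approximation fast, here is a small pure-Python sketch (illustrative, not the thesis code): it builds an integral image and then evaluates any rectangle sum with the three-operation formula Σ = A − B − C + D.

```python
def integral_image(img):
    """Integral image: ii[y][x] = sum of img[j][i] for all j < y, i < x.
    A leading row/column of zeros simplifies the corner lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using three operations
    (A - B - C + D) on four corner entries of the integral image."""
    A = ii[bottom + 1][right + 1]
    B = ii[top][right + 1]
    C = ii[bottom + 1][left]
    D = ii[top][left]
    return A - B - C + D

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
total = box_sum(ii, 0, 0, 2, 2)   # whole image: 45
inner = box_sum(ii, 1, 1, 2, 2)   # 5 + 6 + 8 + 9 = 28
```

The cost of box_sum is the same no matter how large the rectangle is, which is why box-filter approximations of the Gaussian derivatives can be evaluated at any scale at constant cost.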
Here σ is taken as 1.2, and approximate box filters are used. The filter response is normalized according to the mask size. The determinant value is then calculated as (Li, 2014):

det(H_approx) = Dxx Dyy − (0.9 Dxy)²    (5)

where Dxx, Dyy and Dxy are the convolution results of the image with the corresponding box-filter masks.

Scale Space Representation

A feature point should be located in a scale-invariant way, because the image may not always contain the object at the same scale. The usual way to implement a scale space is to create a pyramid: the image is smoothed with a Gaussian and then subsampled to reach the next higher level of the pyramid (Li, 2014). All slices of the pyramid are kept for the matching described in the subsequent section.

Figure 6. The integral image can be used to upscale at constant cost.

Figure 7. Scale space, multiresolution pyramids.

Interest Point Localization

To localize interest points in the image and over scales, non-maximum suppression in a 3x3x3 neighborhood is applied. One of the proposed methods (Brown & Lowe, 2002) interpolates the maxima of the determinant of the Hessian in scale and image space. This is an important step, since the difference in scale between the first layers of every octave is relatively big, and interpolation helps the algorithm detect the target at a different scale.

SURF Descriptor

Descriptors are used to describe the intensity distribution within the neighborhood of an interest point. To reduce the time needed to compute and match the features, Haar wavelets are applied; they have shown improved robustness as well. Indexing is based on the sign of the Laplacian, which can increase matching speed (in the best case by a factor of two):

∇²L = Lxx(X, σ) + Lyy(X, σ)    (6)

The equation above shows the Laplacian of the Gaussian-smoothed image. Finding the descriptors is a two-step process (Bay, Ess, Tuytelaars & Gool, 2008). Step one is to assign an orientation by examining a circle drawn over the selected point of interest. Step two is to construct a square over the point with the selected orientation, from which the descriptors are extracted.

Figure 8. Haar wavelet filters used to calculate the responses in the x and y directions. Darker parts have a weight of −1 and lighter parts a weight of +1.

The Haar wavelet responses are calculated in the x and y directions within a circle of radius 6s around the interest point, where s is the scale at which the point of interest was found. The dominant orientation is estimated by summing all responses inside a sliding window of 60 degrees: for each window position, the horizontal and vertical responses are summed to form a vector, and the longest such vector lends its orientation to the interest point.

Figure 9. Orientation assignment for three different quadrants, considering three different points.

To extract the descriptors themselves, a square region is constructed over the point of interest with the previously set orientation. The region is split into 4x4 square subregions and the Haar wavelet responses are computed; 5x5 regularly sampled points per subregion can be used to compute a few simple features. The responses dx in the x direction and dy in the y direction are summed over each subregion, and the resulting numbers become part of a feature vector; the sums of the absolute values |dx| and |dy| are computed as well. Hence each subregion contributes a four-dimensional descriptor vector of its intensity structure, v = (Σdx, Σdy, Σ|dx|, Σ|dy|). Combining all 4x4 subregions gives a descriptor vector of 64 values, which the algorithm uses to distinguish one feature point from another. During the matching stage, the sign of the Laplacian is included for the underlying interest points, which are found at blob-type structures.
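The 64-value descriptor assembly can be sketched as follows — a simplified pure-Python illustration with hypothetical response grids, not the thesis implementation. For each of the 4x4 subregions we accumulate (Σdx, Σdy, Σ|dx|, Σ|dy|) and concatenate:

```python
def surf_descriptor(dx, dy):
    """Build a 64-value SURF-style descriptor from 20x20 grids of Haar
    responses dx and dy: split into 4x4 subregions of 5x5 samples and
    record (sum dx, sum dy, sum |dx|, sum |dy|) for each subregion."""
    descriptor = []
    for br in range(4):                 # subregion row
        for bc in range(4):             # subregion column
            sdx = sdy = sadx = sady = 0.0
            for r in range(br * 5, br * 5 + 5):
                for c in range(bc * 5, bc * 5 + 5):
                    sdx += dx[r][c]
                    sdy += dy[r][c]
                    sadx += abs(dx[r][c])
                    sady += abs(dy[r][c])
            descriptor.extend([sdx, sdy, sadx, sady])
    return descriptor

# Hypothetical responses: dx alternates sign column by column, dy is constant.
dx = [[(-1) ** c for c in range(20)] for _ in range(20)]
dy = [[0.5] * 20 for _ in range(20)]
desc = surf_descriptor(dx, dy)   # 16 subregions x 4 values = 64 entries
```

Note why both Σdx and Σ|dx| are stored: for the alternating dx pattern above, Σdx stays small while Σ|dx| is large, so the pair distinguishes an oscillating intensity pattern from a uniform gradient that would give the same signed sum.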
The sign will not be able to distinguish dark structures on a light surface from light structures on a dark surface, which means that SURF does not support illumination differences: a similar object under different illumination will not produce a match in this algorithm. However, SURF supports rotation and scaling (Bay, Ess, Tuytelaars & Gool, 2008).

Figure 10. Illumination difference cannot be handled by this algorithm. The first two images are similar in illumination and can be matched by the algorithm, but not the third one, which differs in illumination.

Object Matching

Object matching/recognition is the process of identifying a specific object in a digital image or video. Algorithms that implement it use matching, learning, and pattern recognition with appearance-based or feature-based techniques, including edges, gradients, histograms of oriented gradients (HOG), Haar wavelets, and linear binary patterns (MathWorks, 2016).

Figure 11. Two matrices containing the descriptors/features of the target and the image frame. Matrix a has m x 64 elements, where m is the number of SURF points of the target; matrix b has n x 64 elements, where n is the number of SURF points of the frame.

The matching method takes two feature sets: that of the target and that of the corresponding image frame to match it with. The feature set of the target is an M1 x D matrix, where M1 is the number of interest points and D is the number of descriptor values, which is 64 for SURF; the feature set of the image frame is an M2 x D matrix, defined analogously. To find the pairs that match, we use an exhaustive approach in which every feature in the first image is compared with every feature in the second by computing the pairwise distance between their feature vectors. A less exact alternative is a nearest-neighbor search, which gives an efficient approximation and can be used for large feature sets. A threshold T is set to select the correct matches depending on the distance; its value can range between 0 and 100. A pair of features is not matched if the distance between them is more than T percent from a perfect match. The threshold is set higher for binary features, 10.0, versus 1.0 for other types; to gain more match points, the threshold can be increased.

To implement this method, MATLAB provides a built-in function: matchFeatures(FeatureSet1, FeatureSet2) matches the descriptors from FeatureSet1 and FeatureSet2, the target and the video frame respectively. A match is made between similar interest points, and MATLAB returns the indices of the matching features in an M x 2 matrix, where the first column corresponds to points in the first image and the second column to points in the second image. Both columns can be extracted individually and compared to see the match; matched points are marked on both images and joined with a line.

Figure 12. Two similar images between which a SURF match has been made. Similar points are connected with yellow lines.

Figure 13 (a) and (b). Locating the clock on Big Ben using the SURF matching algorithm. (a) shows the features with the most probable match and (b) boxes the specific location where the match is found.

Figure 14 (a) and (b). SURF matching between two images with scaling differences of 1.5 and 0.5.

Figure 15 (a), (b) and (c). SURF matching between two images with rotational differences of 90, 180 and 270 degrees.

Geometric Transformation

A geometric transformation is a bijection from a set that has some geometric structure to another such set; its domain and range are each sets of points, and a geometric transformation is classified by the dimension of the operand sets.
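As a minimal concrete example of a 2-D geometric transformation (a pure-Python sketch with hypothetical values; the thesis itself uses MATLAB transformation objects), here is a similarity transform applied to the corners of a bounding box:

```python
import math

def similarity(point, scale, angle_deg, tx, ty):
    """Apply a 2-D similarity transform (scale, rotation, translation) to a
    point (x, y) -- one simple kind of geometric transformation."""
    x, y = point
    a = math.radians(angle_deg)
    xr = scale * (x * math.cos(a) - y * math.sin(a)) + tx
    yr = scale * (x * math.sin(a) + y * math.cos(a)) + ty
    return (xr, yr)

# Hypothetical bounding box of the target in the initial frame.
box = [(0, 0), (10, 0), (10, 4), (0, 4)]

# Suppose the target is found in a new frame rotated 90 degrees and shifted
# by (20, 5); transforming all four corners preserves the box's geometry.
moved = [similarity(p, 1.0, 90, 20, 5) for p in box]
```

Because the same transform is applied to every corner, the box keeps its overall shape even when only some of the target's points were recovered by matching, which is the purpose of maintaining a transformation object.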
This is an important step to make sure the geometric state of the target is maintained. At times during the detection stage it may not be possible to capture all the points needed to redefine the bounding box as it was in the initial frame. Hence we define a 2-D transformation object which keeps the overall geometry maintained. This is a simple two-stage process.
Step 1: Define the parameters of the transformation; here the transformation object is created. MATLAB provides different functions to create it; we use a tform object.
Step 2: Perform the transformation; the tform object and the image are passed into the imwarp function, which carries out the transformation according to the created object.
Figure 16. Function breakdown of maintaining the geometric state of the target once it is found after matching.
Occlusion
Occlusion relates to the visibility of the target. When the target is obstructed by the presence of another object, we say the target is occluded. If the target is partially visible, we speak of partial occlusion; if the target is not seen at all and then reappears, we say it was completely occluded. Occlusion is one of the leading issues in object tracking, because at any point a target may disappear behind another object, and a successful algorithm should be able to track it in spite of the occlusion, as when two people walk past each other or a car drives under a bridge. Hence an algorithm must be prepared to track the object, or identify its presence once it is out of occlusion. Exactly how occlusion manifests itself, and how it has to be dealt with, varies with the problem at hand. The algorithm developed in the present study demonstrates tracking with SURF under both partial and complete occlusion.
Figure 17. The image shows different types of occlusion. Image (B) is partially occluded whereas (C) and (D) show complete occlusion.
Figure 18.
Matching between an initial image and its presence in a video frame while partially occluded.
Chapter 3
METHOD OF PROCEDURE/ALGORITHM
This section presents the developed algorithm in two parts. The first part describes the matching algorithm, where the target is matched onto the image frames of the video. This supports partial and complete occlusion:
INPUT: The target as selected by the user in the initial frame.
Step 1: Read the target and get the SURF feature points.
Step 2: Extract the feature descriptors for the calculated points.
Step 3: Initialize the video reader and fetch the frames from the video individually. Calculate the SURF feature points of the corresponding video frames.
Step 4: Extract the feature descriptors of the corresponding feature points for the image frame.
Step 5: Find a match between the points from the target and the video frame.
Step 6: Estimate the geometric transformation of the target in the image frame and get the bounding box.
Step 7: Locate the target and show the detected object.
Step 8: Do this for all the frames of the video to track the motion of the target.
OUTPUT: The selected target on all the image frames of the video. The video is saved to be accessed later.
Another algorithm is presented here to capture changes in the angle of the target. There can sometimes be slight changes in orientation which a fixed target template cannot match. For such changes the algorithm can be modified as presented below:
INPUT: The target as selected by the user in the initial frame.
Step 1: Read the target and get the SURF feature points.
Step 2: Extract the feature descriptors for the calculated points.
Step 3: Initialize the video reader and fetch the frames from the video individually. Calculate the SURF feature points of the corresponding video frames.
Step 4: Extract the feature descriptors of the corresponding feature points for the image frame.
Step 5: Find a match between the points from the target and the video frame.
Change the target at every frame, overwriting it with the previously found target.
Step 6: Find the feature points and descriptors of the new target and match these to the next frame.
Step 7: Estimate the geometric transformation of the target in the image frame and get the bounding box.
Step 8: Locate the target and show the detected object.
Step 9: Do this for all the frames of the video to track the motion of the target.
OUTPUT: The selected target on all the image frames of the video. The video is saved to be accessed later.
The results are recorded in tabular form for the videos used to test the algorithm. Some of them are synthetically generated and some are real-world videos. Table 1 is for normal videos and Table 2 is for videos with occlusion.
Table 1. Time taken to process the frames when there is no occlusion.
Video              Duration (sec)  No. of Frames  Frame dimensions  Time to process (sec)
Race Car           0:03            75             540 x 960         47
Car among clutter  0:04            95             540 x 960         71
Plane Landing      0:04            119            1280 x 720        80
Race Car           0:06            150            540 x 960         122
Card Tracking      0:11            321            1080 x 1920       559
Table 2. Time taken to process the frames when there is occlusion.
Video              Duration (sec)  No. of Frames  Frame dimensions  Time to process (sec)  Partial or Complete Occlusion
Airplane Landing   0:04            96             540 x 960         71                     Partial Occlusion
Boat               0:05            154            1280 x 720        89                     Complete Occlusion
Deck of Cards      0:09            270            960 x 640         298                    Partial Occlusion
Deck of Cards      0:11            321            1080 x 1920       559                    Complete Occlusion
Chapter 4
EXPERIMENTAL RESULTS
Experiments were conducted in both a simulated and a real-world environment. MATLAB (version 2015a) was used to validate the results. Every video has to be given as an input to the algorithm for it to run, and it has to be present in the same folder as the .m file which contains the code. Videos can be of variable length with the following acceptable formats: .mp4, .mov, .avi.
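Schematically, the per-frame loop behind the timings above looks like the sketch below. Real SURF detection and extraction are replaced by hypothetical stand-in descriptor arrays (frame 1 simulates complete occlusion), so only the control flow of the steps is illustrated, not the thesis's MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
target_desc = rng.normal(size=(4, 64))      # Steps 1-2: target features

def frame_descriptors(i):
    """Stand-in for per-frame SURF extraction (Steps 3-4).  Frame 1
    simulates complete occlusion: the target descriptors are absent."""
    clutter = rng.normal(size=(6, 64))
    return clutter if i == 1 else np.vstack([clutter, target_desc + 0.01])

matches_per_frame = []
for i in range(3):                          # loop over all frames
    frame_desc = frame_descriptors(i)
    # Step 5: exhaustive nearest-neighbor match with a distance cutoff.
    d = np.linalg.norm(target_desc[:, None] - frame_desc[None], axis=2)
    matches_per_frame.append(int((d.min(axis=1) < 1.0).sum()))
    # Steps 6-7 (geometric transform + bounding box) would go here.

print(matches_per_frame)  # frame 1 yields no matches (occlusion)
```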
The target to be tracked has to be present in the first frame of the video. The user is given the option to select the initial box around the target, and the algorithm continues the tracking from there. When the target is occluded or not found, the algorithm either shows no box or may fall on another part of the frame which it considers to have the most similar descriptors; however, once the target is out of occlusion, the box falls on it again. The code also provides the ability to save the output in the same folder. The following results are presented for different scenarios:
(a) (b) (c) (d) Figure 19. This shows the initial selection of the target. The first frame is frozen and the user is asked to pick the target by marking two corners of the box which encloses it.
(a) (b) (c) (d) Figure 20 (a), (b), (c) and (d). Example demonstrating the tracking of a car traveling along a straight path at an intersection. The detected target is enclosed in a yellow box.
(a) (b) (c) (d) Figure 21 (a), (b), (c) and (d). Example demonstrating the tracking of an airplane captured by a still camera. The orientation of the plane changes in every frame as it flies by, compared to that captured in the first frame. The detected target is enclosed in a yellow box.
(a) (b) (c) (d) Figure 22 (a), (b), (c) and (d). Tracking of a particular race car in the presence of many similar-looking targets. The algorithm is able to detect the correct target using its specific SURF points. The detected target is enclosed in a yellow box.
(a) (b) (c) (d) (e) (f) Figure 23 (a), (b), (c), (d), (e) and (f). A synthetically generated video in which a deck of cards becomes occluded behind the paper is tracked. (a) shows before the occlusion; (b), (c), and (d) show during the occlusion; (e) and (f) show after the occlusion, when the target is coming out. The tracked object is bounded by a yellow box.
(a) (b) (c) (d) (e) (f) Figure 24 (a), (b), (c), (d), (e) and (f). The motion of an airplane which becomes partially occluded is tracked.
(a) and (b) show before the object becomes occluded; (c) is during the occlusion, when the object is hidden; (d), (e) and (f) are when the object comes out of occlusion. The detected target is enclosed in a yellow box.
(a) (b) (c) (d) Figure 25 (a), (b), (c) and (d). This is an example with a moving camera. As long as the target is in the frame, the algorithm will capture it. The detected target is enclosed in a yellow box.
(a) (b) (c) (d) (e) (f) Figure 26 (a), (b), (c), (d), (e) and (f). This is an example of complete occlusion where the boat, which is the target, disappears and reappears from behind the other boat. (a) and (b) show before the object becomes occluded; (c) is during the occlusion, when the object is hidden; (d), (e) and (f) are when the object comes out of occlusion. The detected target is enclosed in a yellow box.
(a) (b) (c) (d) (e) (f) Figure 27 (a), (b), (c), (d), (e) and (f). This is another example of complete occlusion where the boat, which is the target, disappears and reappears from behind the other boat. (a) shows before the object becomes occluded; (b), (c) and (d) are during the occlusion; (e) and (f) are when the object comes out of occlusion. The detected target is enclosed in a yellow box.
Chapter 5
PROGRAM GUIDE
MATLAB version 2015a is used to run the program. Every video has to be given as an input to the algorithm for it to run, and it has to be present in the same folder as the .m file which contains the code. Videos can be of variable length with the following acceptable formats: .mp4, .mov, .avi. The target to be tracked has to be present in the first frame of the video. The user is presented with the first frame of the video and chooses the initial box around the target by specifying its top-left and bottom-right corners. The algorithm continues the tracking from there. As output, the video player is shown with every frame being displayed. This is the stage where the output is being generated, and it will be slow.
The code also saves this output in the same folder as a .avi file, so the video can be viewed later. The name of the saved video can be specified within the code. The speed of the saved video will be similar to that of the input video.
Chapter 6
CONCLUSION
The method shown above is one of many ways to detect and track an object. The software used was MATLAB 2015a and the experiments were run on a MacBook Pro (2014). The increasing demand for video technology has given rise to numerous video recognition software packages, and every package strives to show advanced results in less time. This study applied SURF feature points, which has definitely been an improvement over the previously tried methods of GMM and blob analysis. A couple of previous limitations were overcome by this algorithm. The stability of the camera was no longer important: any video taken from a moving source could also be accepted, as long as the target in question was present in the video frame. The time taken to generate the output was reduced significantly, and no initial processing time is needed. The algorithm works as long as the first frame contains the target; it does not have to process the entire video beforehand, as in the case of GMM. Since the movement of the target is unpredictable, the scale- and rotation-invariant SURF points pick up the target even if its orientation has changed. Successful results were seen when the method was applied to most of the videos.
Chapter 7
FUTURE WORK
Tracking objects depends on various factors, and it is almost impossible to control all of the deciding factors with one algorithm. Similarly, it is difficult to devise an algorithm which gives positive results in every possible situation. There can be many ways a particular situation can be analyzed from an algorithm's point of view. We can use different methods to grab feature points, e.g., SIFT features. SIFT has a longer descriptor vector of 128 values.
Hence the time taken is a little higher, but due to the additional descriptor values the algorithm can catch minor changes. The addition of an active contour could also help to present the output much better.
REFERENCES
Porikli, F., & Yilmaz, A. Object detection and tracking. TR2012-003, January 2012.
Butler, D. E., Bove, V. M., & Sridharan, S. Real-time adaptive foreground/background segmentation. EURASIP Journal on Advances in Signal Processing (2005), no. 14, 2292.
Rachid, P., & Nikos, D. Geodesic active contours and level sets for the detection and tracking of moving objects. IEEE Transactions on PAMI 22 (2000), no. 3, 266.
Zhu, C. Video object tracking using SIFT and mean shift. Report No. Ex005/2011.
Yilmaz, A., Javed, O., & Shah, M. Object tracking: A survey. ACM Computing Surveys (CSUR), Vol. 38, Issue 4, 2006, Article No. 13.
Kass, M., Witkin, A., & Terzopoulos, D. Snakes: Active contour models. International Journal of Computer Vision, 321-331 (1988).
Kim, K., Chalidabhongse, T., Harwood, D., & Davis, L. Real-time foreground-background segmentation using codebook model, 2005. doi:10.1016/j.rti.2004.12.004
Horprasert, T., Harwood, D., & Davis, L. S. A statistical approach for real-time robust background subtraction and shadow detection. IEEE Frame-Rate Applications Workshop, Kerkyra, Greece; 1999.
Zoran, Z. Improved adaptive Gaussian mixture model for background subtraction. In Proc. ICPR, 23-26 Aug. 2004, pages 28-31, Vol. 2.
MathWorks, 2016. Retrieved from http://www.mathworks.com/help/vision/ref/vision.foregrounddetectorclass.html
Kandhare, P., Arslan, A., & Sirakov, N. Tracking partially occluded objects with centripetal active contour. Math. Appl. 3 (2014), 61-75.
Adaptive Vision, 2016. Retrieved from http://docs.adaptivevision.com/current/studio/machine_vision_guide/BlobAnalysis.html
MathWorks, 2016. Retrieved from http://www.mathworks.com/help/vision/ref/blobanalysis.html
MathWorks, 2016. Retrieved from http://www.mathworks.com/help/simulink/slref/selector.html
Douglas Reynolds. Gaussian mixture models. MIT Lincoln Laboratory, 244 Wood St., Lexington, MA 02140, USA.
Bay, H., Ess, A., Tuytelaars, T., & Van Gool, L. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, Elsevier, Vol. 110, Issue 3, June 2008, pages 346-359.
Brown, M., & Lowe, D. Invariant features from interest point groups. In BMVC, 2002.
Ahuja, N., & Todorovic, S. Unsupervised category modeling, recognition, and segmentation in images. IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 12, pp. 2158-2174, Dec. 2008.
Li, L. Image matching algorithm based on feature-point and DAISY descriptor. Journal of Multimedia, Vol. 9, No. 6, June 2014.
MathWorks, 2016. Retrieved from http://www.mathworks.com/discovery/objectrecognition.html
VITA
Chetana Divakar Nimmakayala was born on January 12, 1992, in Pune, Maharashtra, and is an Indian citizen. She graduated from Don Bosco Institute of Technology, Mumbai, Maharashtra, in May 2013, receiving her Bachelor of Engineering in Computer Engineering. She started her Master of Science in Computer Science at the University of Texas at Arlington, Texas, and subsequently transferred to Texas A&M University-Commerce, Texas, where she graduated in May 2016. Permanent address: Department of Mathematics, Binnion Hall Room 305, Texas A&M University-Commerce, P.O. Box 3011, Commerce, TX 75429-3011. Email: nchetanad@gmail.com
Title  SURF Tracking of Occluded Objects 
Author  Nimmakayala, Chetana Divakar 
Subject  Computer science; Mathematics 
EXPERIMENTAL RESULTS .................... 23
5. PROGRAM GUIDE .................... 28
6. CONCLUSION .................... 29
7. FUTURE WORK .................... 30
REFERENCES .................... 31
VITA .................... 33
LIST OF TABLES
TABLE
1. This shows the time taken to process the frames when there is no occlusion .......... 22
2. This shows the time taken to process the frames when there is occlusion .......... 22
LIST OF FIGURES
FIGURE
1. Blob Analysis function block with input and outputs .......... 6
2. The working of a Gaussian Mixture Model where the background is segmented and only the foreground is displayed. After noise removal, the template is saved and applied to every frame where the object in motion is detected .......... 7
3. The final implementation, which detects the moving cars in the street from a fixed CCTV camera .......... 8
4. The Blob Analysis algorithm when the target is too close to the camera source. The object is not detected as one single blob, and hence various boxes track the motion .......... 8
5. It takes three arithmetic operations to calculate the sum of the intensities inside a rectangular region of any size: Σ = A - B - C + D .......... 10
6.
Integral image can be used to upscale at constant cost .......... 11
7. Scale space, multi-resolution pyramids .......... 11
8. Haar wavelet filter to compute the response in x and y direction. Darker parts have a weight of -1 and the lighter parts have a weight of +1 .......... 12
9. Orientation assignment for three different quadrants .......... 13
10. Illumination difference cannot be spotted by this algorithm. The first two images are similar in illumination and can be differentiated by the algorithm, but not the third one, which differs in illumination .......... 14
11. Shows two matrices which contain the descriptors/features of the target and the image frame. Matrix 1 has m x 64 elements, where m is the number of SURF points. Matrix 2 has n x 64 elements, where n is the number of SURF points .......... 14
12. Shown here are two similar images between which SURF matching has been done. Similar points are joined with yellow lines .......... 16
13. Locating the clock on Big Ben using the matching algorithm. (a) shows the features which have the most probable match and (b) shows the specific location highlighted by a box .......... 16
14. Matching between two images with a scaling difference of 1.5 and 0.5 .......... 17
15. Matching between two images with a rotational difference of 90, 180 and 270 degrees .......... 17
16. Shows the function breakdown of maintaining the geometric state of the target once found after matching .......... 18
17. The image shows different types of occlusion. Image (B) is partially occluded whereas (C) and (D) show complete occlusion .......... 19
18.
Matching between an initial image and its presence in a video frame while partially occluded .......... 19
19. This shows the initial selection of the target. The first frame is frozen and the user is asked to pick the target by marking two corners of the box which encloses the target .......... 23
20. Example demonstrates tracking a car traveling through a straight path at an intersection. Detected target is enclosed in a yellow box .......... 24
21. Example demonstrates the tracking of an airplane as captured through a still camera. The orientation of the plane changes in every frame as it flies by, as compared to that captured in the first frame. Detected target is enclosed in a yellow box .......... 24
22. Tracking of a particular race car in the presence of many similar-looking targets. The algorithm is able to detect the correct target using its specific SURF points. Detected target is enclosed in a yellow box .......... 25
23. Synthetically generated video where a deck of cards gets occluded behind the paper, is tracked. (a) shows before the occlusion; (b), (c), (d) show during the occlusion; (e) and (f) show after the occlusion, when the target is coming out. Tracked object is bounded by a yellow box .......... 25
24. The motion of an airplane is tracked which gets partially occluded. (a) and (b) show before the object becomes occluded; (c) is during the occlusion, when the object is hidden; (d), (e) and (f) are when the object comes out of occlusion .......... 26
25. This is an example of when we have a moving camera. As long as the target is in the frame, the algorithm will capture it. Detected target is enclosed in a yellow box .......... 26
26. This is an example of a complete occlusion where the boat, which is the target, disappears and reappears from behind the other boat.
(a) and (b) show before the object becomes occluded; (c) is during the occlusion, when the object is hidden; (d), (e) and (f) are when the object comes out of occlusion .......... 27
27. This is an example of a complete occlusion where the boat, which is the target, disappears and reappears from behind the other boat. (a) shows before the object becomes occluded; (b), (c) and (d) are during the occlusion; (e) and (f) are when the object comes out of occlusion. Detected target is enclosed in a yellow box .......... 27
Chapter 1
INTRODUCTION
Object detection and tracking are some of the most useful and challenging tasks in computer vision. It is one of the most prevalent branches today, helping to determine meaningful events and suspicious activities. This is a field which tracks and recognizes objects of interest from various sources. Object tracking is mostly a combination of three major steps: finding objects, tracking these objects frame to frame, and evaluating tracking results to conclude semantic events and latent phenomena (Porikli & Yilmaz, 2012). Of the numerous topics in computer vision, detecting and tracking a moving object are of great interest. Motion in particular is an important cue for computer vision. For many applications, any object in motion is of interest and anything else can be ignored (Butler, Bove, & Sridharan, 2005). Early methods of motion detection relied on the detection of temporal changes, but were then enhanced with the use of detection masks and filters which improved the efficiency of the change-detection algorithms. However, these enhancements were limited by their inability to provide a perfect tracker and could not be used without prior context-based knowledge. Hence the study of motion detection as a statistical estimation problem became incorporated in the study of motion detection (Rachid & Nikos, 2000).
Rachid notes that "tracking goes further than motion detection and requires extra motion-based measurements, specifically the segmentation of the corresponding motion parameters" (Rachid & Nikos, 2000, p. 260). The extent of any generated algorithm depends on the application (Zhu, 2011). In any kind of automated video surveillance which deals with real-time tracking of any form, we may need to detect the activity of any moving subject under observation with accuracy and precision. Sometimes mere tracking is sufficient; the accuracy required of an algorithm is determined by its applications, and an algorithm with a faster computing time is more likely to see real-world application.
Statement of the Problem
In this research I studied a method to effectively track the motion of an object under partial and complete occlusion. An algorithm is presented to successfully capture the target object by picking up feature points, such as SURF points, which are specific to the target object, and using them to keep track of its motion through the frames of the video. Both partial and complete occlusion are taken into consideration, and the object is detected once it reappears completely.
Purpose of the Study
We started this research studying Gaussian mixture models and blob analysis to capture the target. The Gaussian Mixture Model is a time-consuming process, and depending on the length of the video, the time taken to generate a mask can be very long (Zhu, 2011). The bigger the video and the number of frames, the more time it needs to generate the mask that distinguishes the background from the foreground. The video also has to be taken from a fixed camera source. Blob analysis simply traces all the objects that show a difference in properties from the background, such as brightness or color. Hence, all moving objects are detected, without distinguishing one from another.
Also, if the target is very close to the camera source, the algorithm is unable to capture the entire target in one blob, and it breaks up into parts of many blobs. With so many disadvantages, its real-time application is not favorable, and hence we wanted to devise a technique or a method that would perform more accurately. Considering the above disadvantages, we proposed to apply the SURF method, which was used to specifically pick out the target and distinguish it from the other moving or similar-looking stationary targets in the video. The SURF points of the target were feature points used to track it. These feature points are invariant to scaling and rotation; hence, instant changes in the target's size or orientation would still be captured and the movement would be tracked. Occluded objects can also be detected once they are visible again: SURF points are taken and matched to make sure we recapture the same target.
Limitations
Object tracking is a complex procedure for many reasons (Yilmaz, Javed & Shah, 2006). There may be unwanted, unknown noise. The target object to be tracked may be partially or fully occluded, and the algorithm used may or may not be able to handle occlusion. The nature of the object and the clarity of the camera are very important if we need accurate tracking. Object shapes may sometimes be too complex for the active contour to work on accurately (Kass, Witkin & Terzopoulos, 1988). Real-time processing may have its own requirements. Hence, it is very important to know the limitations of an algorithm in order to define its domain of application. Every algorithm has its own specifications and limitations. Previous research using Gaussian Mixture Models (GMM) and blob analysis gave a highly flexible algorithm which worked excellently on a variety of videos. However, the time taken to compute the output was too long, and the target was not captured as one object when too close to the camera source.
Chapter 2
SURVEY OF THE LITERATURE
Visual tracking is an essential branch of computer vision. It is essential to have a correct algorithm that can ideally look for the target object with the correct set of specifications. The algorithm has to be concise in breaking the video into a given number of frames in order to obtain accurate results. Background subtraction is an important tool for object tracking: it is a technique in image processing where the foreground of the image is extracted for further processing. Kim, Chalidabhongse, Harwood and Davis present a method for foreground-background separation using codebooks, which "represent a compressed form of background model for a long image sequence" (Kim, Chalidabhongse, Harwood & Davis, 2005). Their work also highlights two important features that help make the algorithm better: layered modeling/detection and adaptive codebook updating. Simpler background models have been defined, which create a unimodal distribution, and other novel algorithms have been defined to detect moving objects from a static background scene (Horprasert, Harwood & Davis, 1999). We studied GMM, a probabilistic method that uses training frames to learn the difference between the foreground and background. It assumes data points are generated from a mixture of a finite number of Gaussian distributions (Zoran, 2004). We applied the vision.ForegroundDetector system object in MATLAB to implement the segmentation method with GMM (MathWorks, 2016). A video is broken down into its frames to compute a foreground mask/background model. The ForegroundDetector system object was used to compare a color or grayscale video frame with a background model, which helps determine whether the pixels found are part of the background or the foreground. The object was differentiated from its background using differences in color and brightness. Changes in intensity helped the algorithm to track it.
It did a reasonable job of extracting the shape from the foreground; however, this method cannot handle movement of the camera. Prediction is another way to track the motion of the target. Here the algorithm predicts or estimates the position of the target in the next frame using information from the previous frames. A measuring tool is used to find the difference between the real and estimated positions in the present frame (Kandhare, Arslan & Sirakov, 2014), and this difference corrects the estimation. The method saves time because the algorithm knows the specific region to focus on, rather than searching the entire video frame. The most common prediction algorithm is the Kalman filter (Kandhare, Arslan & Sirakov, 2014). A Kalman filter is an optimal estimator: it infers parameters of interest from indirect, inaccurate, and uncertain observations. The best way to work with a prediction algorithm is to define a mass center. The mass center of the target is called the measured position and can be applied to calculate the estimated position of the target in the next frame (Kandhare, Arslan & Sirakov, 2014). While a prediction algorithm has many advantages, it fails to distinguish between the target and the background if the scene is too crowded.

GMM and Blob Analysis

Blob analysis is used to detect corresponding regions in an image individually. Regions to be detected can be based on any criterion; here we use objects in motion. "It is a fundamental technique of computer vision based on analysis of consistent image regions" (Adaptive Vision, 2016). It is used to separate objects that can be clearly distinguished from the background. Any object in motion will have properties, such as color, contrast, and brightness, that differ from the background; a blob is used to capture these properties and detect the moving object.
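The predict-correct loop described above can be sketched in a few lines. The following fragment is in Python rather than MATLAB, with an illustrative fixed gain in place of the full Kalman gain computation; the function name and values are ours, not the cited implementation:

```python
# Minimal 1-D constant-velocity predict/correct loop in the spirit of a
# Kalman filter: predict the mass center's next position from the previous
# state, then correct the estimate with the measured position.
def track_center(measurements, gain=0.5):
    """Return corrected position estimates for a sequence of measured
    mass-center positions (one per frame). `gain` is an illustrative
    constant correction factor."""
    estimates = []
    position, velocity = measurements[0], 0.0
    for z in measurements:
        predicted = position + velocity          # predict from previous state
        residual = z - predicted                 # difference: real vs estimated
        corrected = predicted + gain * residual  # correction step
        velocity = corrected - position          # update velocity estimate
        position = corrected
        estimates.append(corrected)
    return estimates

# A target moving roughly 2 px/frame: the estimate settles near the data.
print(track_center([0.0, 2.0, 4.1, 6.0, 8.2]))
```

Because the predictor narrows the search to a neighborhood of the predicted position, the matching step only has to examine a small region of each frame.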
We use the blob analysis function from MATLAB, part of the Computer Vision System Toolbox (MathWorks, 2016), to calculate statistics for labeled regions in a binary image. Statistics can be selected based on the requirement; they are present in the Selector block from Simulink (MathWorks, 2016). These statistics help to distinguish a particular part of the image frame that belongs to a blob, and all objects that are part of a blob are in motion.

Figure 1. Blob analysis function block with input and outputs.

The function above is an example of blob analysis. It takes a vector BW that represents a binary image as input and returns three components as output: Area, Centroid, and BBox. Area is a vector containing the number of pixels that are part of each blob. Centroid is an M x 2 matrix containing the coordinates of the centroids, where M is the number of blobs. BBox is an M x 4 matrix containing the coordinates of the bounding boxes, where M again is the number of blobs (MathWorks, 2016).

A Gaussian Mixture Model (GMM) represents the sum of M weighted Gaussian terms (Douglas):

p(x | \lambda) = \sum_{i=1}^{M} w_i \, g(x \mid \mu_i, \Sigma_i)   (1)

In equation (1), x is an M-dimensional continuous real-valued vector, w_i, i = 1, ..., M, are the mixture weights, and g(x | \mu_i, \Sigma_i) are the "Gaussian density components" (Douglas), used as a parametric model for the probability distribution of the continuous measurements. GMM parameters are obtained from the frames of an input video using an iterative algorithm estimated from a well-trained prior model. GMMs are capable of representing a larger class of data over a sample distribution and form smooth approximations to arbitrary density shapes.

Figure 2 (a), (b), (c) and (d). Results show the work of a Gaussian Mixture Model where the background is segmented and only the foreground is displayed.
After noise removal, the template is saved and applied to every frame in which an object in motion is detected. Using a mask generated by GMM on the entire video, the algorithm applies it to every frame to distinguish the background from the foreground. It picks up all moving objects, which are subtracted from the mask. It cannot specifically distinguish between the moving targets; it simply captures everything that is in motion, or most of it. The result is marked with a red box in the output, as seen in part (d) of Figure 2.

Figure 3 (a), (b), (c) and (d). Results show the final implementation using GMM and blob analysis. After the initial stage of applying the mask to subtract the background from the foreground, the algorithm detects the moving cars in a street recorded by a fixed CCTV camera.

A blob analysis algorithm is highly flexible and works accurately with any background segmentation software. However, it also has its flaws: when the object is too close to the camera, it fails to capture the entire object as one.

Figure 4 (a) and (b). This example shows the blob analysis algorithm when the target is too close to the camera source. The object is not detected in one single blob, and hence various boxes track the motion.

Speeded Up Robust Features (SURF)

SURF is a local feature detector and descriptor with applications in object recognition, registration, classification, and 3D reconstruction. It outperforms earlier methods in terms of repeatability, distinctiveness, robustness, and computation time (Bay, Ess, Tuytelaars & Gool, 2008). To determine points of interest, the SURF method approximates the determinant of the Hessian blob detector, which is calculated using a precomputed integral image (defined below).
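The idea of comparing each pixel against a learned background model can be illustrated with a deliberately simplified sketch that keeps one Gaussian (mean and standard deviation) per pixel; the actual ForegroundDetector maintains a mixture of several Gaussians per pixel, so this is the idea, not the toolbox algorithm:

```python
import statistics

def foreground_mask(train_frames, frame, k=4.0):
    """Per-pixel background model in the spirit of (but far simpler than)
    the mixture-of-Gaussians detector: each pixel keeps one mean/std
    learned from training frames, and a pixel is marked foreground when
    the new frame deviates by more than k standard deviations."""
    rows, cols = len(frame), len(frame[0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            history = [f[r][c] for f in train_frames]
            mean = statistics.fmean(history)
            std = statistics.pstdev(history) + 1e-6   # avoid divide-by-zero
            mask[r][c] = abs(frame[r][c] - mean) > k * std
    return mask

# Static gray background; a bright 1x2 object appears in the test frame.
train = [[[10, 10, 10], [10, 10, 10]] for _ in range(5)]
test = [[10, 200, 200], [10, 10, 10]]
print(foreground_mask(train, test))
```

The resulting boolean mask plays the role of the GMM foreground mask: blob analysis is then run on it to box the moving regions.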
The feature descriptor of a point is determined from the sum of Haar wavelet responses calculated in a certain neighborhood of the point of interest. The descriptors obtained can be utilized to find and recognize objects, people, or faces, to reconstruct 3D scenes, and to track objects (Bay, Ess, Tuytelaars & Gool, 2008). In the following application, SURF is used from the MATLAB Computer Vision Toolbox.

SURF Interest Points

This approach uses a combination of a basic Hessian-matrix approximation and integral images, which reduces the computation time drastically. The integral image I_\Sigma(X) at a point X = (x, y)^T is the sum of all pixels inside the rectangle formed between the origin and X (Bay, Ess, Tuytelaars & Gool, 2008):

I_\Sigma(X) = \sum_{i=0}^{i \le x} \sum_{j=0}^{j \le y} I(i, j)   (2)

Once the integral image is computed, only three more arithmetic operations (two subtractions and one addition) are needed to calculate the sum of the intensities over any rectangular region of the image, as shown in Figure 5.

Figure 5. It takes three arithmetic operations to calculate the sum of the intensities inside a rectangular region of any size: \Sigma = A - B - C + D.

Hessian Matrix

Finding SURF points relies primarily on the Hessian matrix; its advantages include accuracy and performance improvement. For an image I and a pixel X = (x, y) at scale \sigma, the Hessian matrix is defined as (Ahuja & Todorovic, 2008):

H(X, \sigma) = [ L_{xx}(X, \sigma)  L_{xy}(X, \sigma) ; L_{xy}(X, \sigma)  L_{yy}(X, \sigma) ]   (3)

Here L_{xx}(X, \sigma) is the convolution of the second-order derivative \partial^2 g(\sigma) / \partial x^2 of the Gaussian function with the image I at X, and the other components are calculated analogously. The Gaussian is

g(\sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/(2\sigma^2)}   (4)

where \sigma^2 is the variance of the mask at (x, y). In the Hessian detector method, we approximate the Hessian matrix using box filters, which stand in for the discretized Gaussian second-order derivatives; this approximation gives comparable or better results at much lower cost than the exact Hessian matrix.
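Equation (2) and the three-operation rectangle sum of Figure 5 can be checked directly with a small Python sketch (illustrative; the function names are ours):

```python
def integral_image(img):
    """Equation (2): ii[y][x] holds the sum of all pixels in the
    rectangle spanned by the origin and (x, y)."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        run = 0
        for x in range(cols):
            run += img[y][x]                      # running row sum
            ii[y][x] = run + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over img[top..bottom][left..right] with three operations,
    Sigma = A - B - C + D, as in Figure 5."""
    A = ii[bottom][right]
    B = ii[top - 1][right] if top > 0 else 0
    C = ii[bottom][left - 1] if left > 0 else 0
    D = ii[top - 1][left - 1] if top > 0 and left > 0 else 0
    return A - B - C + D

img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30
```

The cost of rect_sum is independent of the rectangle's size, which is exactly why box filters of any scale can be evaluated at constant cost.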
Here \sigma is taken as 1.2, and approximate box filters are used. The filter response is normalized according to the mask size. The determinant value is calculated using the equation below (Li, 2014):

\det(H_{approx}) = D_{xx} D_{yy} - (0.9 \, D_{xy})^2   (5)

where D_{xx}, D_{yy}, and D_{xy} are the convolution results of the image with the corresponding box-filter masks.

Scale Space Representation

Any feature point should be localizable in a scale-invariant way, because the image may not always contain the object at the same scale. The usual way to implement a scale space is to create a pyramid: the image is smoothed with a Gaussian and then subsampled to obtain the next higher level of the pyramid (Li, 2014). All slices of the pyramid are maintained for the matching in the subsequent section.

Figure 6. The integral image can be used to upscale the filter at constant cost.

Figure 7. Scale space, multiresolution pyramids.

Interest Point Localization

To localize interest points in the image and over scales, non-maximum suppression in a 3x3x3 neighborhood is applied. One of the proposed methods (Brown & Lowe, 2002) interpolates the maxima of the determinant of the Hessian in scale and image space. This is an important step, since the difference in scale between the first layers of every octave is relatively large, and the interpolation helps the algorithm detect the target at intermediate scales.

SURF Descriptor

Descriptors identify the intensity distribution within the neighborhood of an interest point. To reduce the time needed to compute the features and match them, Haar wavelets are applied; they have shown an improvement in robustness as well. Indexing is based on the sign of the Laplacian, which can increase the matching speed (in the best case by a factor of two). The Laplacian of the Gaussian is

\nabla^2 g = \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2}   (6)

Finding the descriptors is a two-step process (Bay, Ess, Tuytelaars & Gool, 2008). Step one is to draw a circle over a selected point of interest to set an orientation.
Step two is to construct a square over the same point with the selected orientation, from which the descriptors are extracted.

Figure 8. The Haar wavelet filter is used to calculate the response in the x and y directions. Darker parts have a weight of -1 and lighter parts a weight of +1.

The Haar wavelet response is calculated in the x and y directions within a circle of radius 6s around the interest point, where s is the scale at which the point of interest was found. The dominant orientation is estimated by summing all responses inside a sliding window of 60 degrees: the horizontal and vertical responses within the window are summed to yield a local orientation vector, and the longest such vector lends its orientation to the interest point.

Figure 9. Orientation assignment for three different quadrants considering three different points.

To extract the descriptors themselves, a square region is constructed over the point of interest with the previously set orientation. The region is split into 4x4 square sub-regions, and the Haar wavelet responses are computed at 5x5 regularly spaced sample points per sub-region. The responses dx in the x direction and dy in the y direction are summed over each sub-region, and the numbers generated become part of the feature vector; the sums of the absolute values |dx| and |dy| are calculated as well. Hence each sub-region contributes a four-dimensional descriptor vector of its intensity structure, v = (\Sigma dx, \Sigma dy, \Sigma |dx|, \Sigma |dy|). Combining all 4x4 sub-regions gives a descriptor vector containing 64 values, which the algorithm uses to distinguish one feature point from another. During the matching stage, the sign of the Laplacian is included for the underlying interest points, which are found at blob-type structures.
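The 4x4 sub-region layout described above can be sketched as follows, assuming the 20x20 grids of Haar responses dx and dy have already been computed for the oriented square around the interest point (an assumption for illustration; real SURF also weights and normalizes the samples):

```python
def surf_descriptor(dx, dy):
    """Assemble the 64-value descriptor from 20x20 grids of Haar
    responses: 4x4 sub-regions of 5x5 samples, each contributing
    (sum dx, sum dy, sum |dx|, sum |dy|)."""
    vec = []
    for i in range(0, 20, 5):
        for j in range(0, 20, 5):
            sx = [dx[y][x] for y in range(i, i + 5) for x in range(j, j + 5)]
            sy = [dy[y][x] for y in range(i, i + 5) for x in range(j, j + 5)]
            vec += [sum(sx), sum(sy),
                    sum(abs(v) for v in sx), sum(abs(v) for v in sy)]
    return vec

# Alternating +/-1 horizontal responses, uniform vertical responses:
dx = [[1 if (x + y) % 2 else -1 for x in range(20)] for y in range(20)]
dy = [[1] * 20 for _ in range(20)]
v = surf_descriptor(dx, dy)
print(len(v))  # 64
```

Note how the signed sums and the absolute sums carry different information: for the alternating dx pattern the signed sum nearly cancels while the absolute sum stays large, which is what lets the descriptor distinguish oscillating from uniform intensity structure.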
The sign distinguishes dark structures on a light background from light structures on a dark background, and only features with the same sign are compared. As a consequence, SURF does not match across an illumination reversal: a similar object whose illumination is inverted will not be picked up by this algorithm. However, it does support rotation and scaling (Bay, Ess, Tuytelaars & Gool, 2008).

Figure 10. An illumination difference cannot be matched by this algorithm. The first two images are similar in illumination and can be matched, but not the third one, whose illumination differs.

Object Matching

Object matching/recognition is the process of identifying a specific object in a digital image or video. Algorithms that implement it use matching, learning, or pattern recognition with appearance-based or feature-based techniques, including edges, gradients, histograms of oriented gradients (HOG), Haar wavelets, and linear binary patterns (MathWorks, 2016).

Figure 11. Two matrices containing the descriptors/features of the target and the image frame. Matrix a has m x 64 elements, where m is the number of SURF points of the target; matrix b has n x 64 elements, where n is the number of SURF points of the frame.

The matching method takes two matrices holding the features of the target and of the corresponding image frame. The feature set of the target is an M1 x D matrix, where M1 is its number of interest points and D = 64 is the number of descriptor values per point; the feature set of the image frame is an M2 x D matrix defined analogously. To find the matching pairs we use an exhaustive approach: every feature from the first image is compared to every feature in the second by computing the pairwise distance between their feature vectors. An alternative, approximate, method is nearest-neighbor search.
This gives an efficient approximation and can be used for large feature sets. A threshold T is set to select the correct matches depending on the distance; the value can range between 0 and 100. A pair of features is not matched if the distance between them is more than T percent away from a perfect match. The default threshold is 10.0 for binary feature vectors and 1.0 for other feature types; to gain more match points, the threshold can be increased. To implement this method, MATLAB provides an inbuilt function: matchFeatures(FeatureSet1, FeatureSet2) matches the descriptors from FeatureSet1 and FeatureSet2, the target and the video frame respectively. A match is made between similar interest points, and MATLAB returns the indices of the matching features in an M x 2 matrix, where the first column corresponds to points in the first image and the second column to points in the second image. Both columns can be extracted individually and compared to see the match. The matched points are marked on both images and joined with a line.

Figure 12. Two similar images between which SURF matching has been done. Similar points are connected with yellow lines.

Figure 13 (a) and (b). Locating the clock on Big Ben using the SURF matching algorithm. (a) shows the features with the most probable match and (b) boxes the specific location where the match is found.

Figure 14 (a) and (b). SURF matching between two images with scaling factors of 1.5 and 0.5.

Figure 15 (a), (b) and (c). SURF matching between two images with rotational differences of 90, 180 and 270 degrees.

Geometric Transformation

A geometric transformation is a bijection of a set that has some geometric structure to another such set. Its domain and range are sets of points, and the transformation is classified by the dimension of the operand set.
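The exhaustive matching strategy can be sketched in Python as follows; the threshold value and the toy descriptors are illustrative, not MATLAB's defaults:

```python
def match_features(set1, set2, max_dist=2.0):
    """Exhaustive matching: compare every descriptor in set1 (target)
    with every descriptor in set2 (frame) by Euclidean distance, and
    keep each set1 row's nearest set2 row when it is under the
    threshold. Returns (index_in_set1, index_in_set2) pairs, analogous
    to the two index columns returned by matchFeatures."""
    pairs = []
    for i, f1 in enumerate(set1):
        dists = [sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5
                 for f2 in set2]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] < max_dist:                 # threshold test
            pairs.append((i, j))
    return pairs

# Two 64-value target descriptors; the frame contains a near-copy of each.
target = [[0.0] * 64, [5.0] * 64]
frame = [[9.0] * 64, [5.1] * 64, [0.2] * 64]
print(match_features(target, frame))  # [(0, 2), (1, 1)]
```

The cost is O(M1 x M2) distance evaluations, which is why approximate nearest-neighbor search becomes attractive for large feature sets.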
This is an important step to make sure the geometric state of the target is maintained. During the detection stage it may not be possible to capture all the points needed to redefine the bounding box as it was in the initial frame, so we define a 2-D transformation object that keeps the overall geometry intact. This is a simple two-step process. Step 1: Define the parameters of the transformation; here the transformation object (tform) is created, using one of the functions MATLAB provides. Step 2: Perform the transformation; the tform object and the image are passed into the imwarp function, which carries out the transformation described by the object.

Figure 16. Function breakdown of maintaining the geometric state of the target once found after matching.

Occlusion

Occlusion relates to the visibility of the target. When the target is obstructed by the presence of another object, the target is said to be occluded. If the target remains partially visible, we speak of partial occlusion; if the target cannot be seen at all and later reappears, it was completely occluded. Occlusion is one of the leading issues in object tracking, because at any point a target may disappear behind another object, as when two people walk past each other or a car drives under a bridge, and a successful algorithm should keep tracking in spite of it. Hence an algorithm must be prepared to track the object, or to identify its presence once it is visible again. Exactly how occlusion manifests itself, and how it has to be dealt with, varies with the problem at hand. The algorithm developed in the present study demonstrates tracking with SURF under both partial and complete occlusion.

Figure 17. Different types of occlusion: image (B) is partially occluded, whereas (C) and (D) show complete occlusion.

Figure 18.
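The two-step tform/imwarp idea can be illustrated with a minimal similarity-transform fit; this closed-form two-point estimate using complex arithmetic is our simplification, standing in for MATLAB's least-squares transformation estimation over all matched points:

```python
def fit_similarity(src, dst):
    """Estimate a 2-D similarity transform (rotation + uniform scale +
    translation) from two point correspondences. Writing points as
    complex numbers, the transform is z' = a*z + b, which two pairs
    determine exactly."""
    z1, z2 = complex(*src[0]), complex(*src[1])
    w1, w2 = complex(*dst[0]), complex(*dst[1])
    a = (w2 - w1) / (z2 - z1)      # encodes rotation and scale
    b = w1 - a * z1                # encodes translation
    def warp(p):
        w = a * complex(*p) + b
        return (w.real, w.imag)
    return warp

# Two corners of the target box, found again shifted by (10, 5):
warp = fit_similarity([(0, 0), (4, 0)], [(10, 5), (14, 5)])
print(warp((2, 2)))  # the box center maps to (12.0, 7.0)
```

Applying the fitted transform to the original bounding-box corners plays the role of imwarp here: the box keeps its overall geometry even when only a few matched points are available.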
Matching between an initial image and its presence in a video frame while partially occluded.

Chapter 3

METHOD OF PROCEDURE/ALGORITHM

This section presents the developed algorithm in two parts. The first part describes the matching algorithm, where the target is matched onto each image frame of the video. It supports partial and complete occlusion.

INPUT: The target as selected by the user in the initial frame.
Step 1: Read the target and get its SURF feature points.
Step 2: Extract the feature descriptors for the calculated points.
Step 3: Initialize the video reader and fetch the frames from the video individually. Calculate the SURF feature points of each video frame.
Step 4: Extract the feature descriptors of the corresponding feature points for the image frame.
Step 5: Find a match between the points of the target and those of the video frame.
Step 6: Estimate the geometric transformation of the target in the image frame and get the bounding box.
Step 7: Locate the target and show the detected object.
Step 8: Repeat for all frames of the video to track the motion of the target.
OUTPUT: The selected target marked on all image frames of the video. The video is saved to be accessed later.

A second algorithm is presented to capture changes in the angle of the target. There can be slight changes in orientation that a fixed target template cannot match; for such changes the algorithm is modified as follows:

INPUT: The target as selected by the user in the initial frame.
Step 1: Read the target and get its SURF feature points.
Step 2: Extract the feature descriptors for the calculated points.
Step 3: Initialize the video reader and fetch the frames from the video individually. Calculate the SURF feature points of each video frame.
Step 4: Extract the feature descriptors of the corresponding feature points for the image frame.
Step 5: Find a match between the points of the target and those of the video frame.
Change the target at every frame, overwriting it with the previously found target.
Step 6: Find the feature points and descriptors of the new target and match these to the next frame.
Step 7: Estimate the geometric transformation of the target in the image frame and get the bounding box.
Step 8: Locate the target and show the detected object.
Step 9: Repeat for all frames of the video to track the motion of the target.
OUTPUT: The selected target marked on all image frames of the video. The video is saved to be accessed later.

The results for the videos used to test the algorithm are recorded in tabular form. Some of the videos are synthetically generated and some are real-life recordings. Table 1 covers videos without occlusion and Table 2 covers videos with occlusion.

Table 1. Time taken to process the frames when there is no occlusion.

Video              Duration (sec)  No. of frames  Frame dimensions  Time to process (sec)
Race Car           0:03            75             540 x 960         47
Car among clutter  0:04            95             540 x 960         71
Plane Landing      0:04            119            1280 x 720        80
Race Car           0:06            150            540 x 960         122
Card Tracking      0:11            321            1080 x 1920       559

Table 2. Time taken to process the frames when there is occlusion.

Video             Duration (sec)  No. of frames  Frame dimensions  Time to process (sec)  Partial or complete occlusion
Airplane Landing  0:04            96             540 x 960         71                     Partial
Boat              0:05            154            1280 x 720        89                     Complete
Deck of Cards     0:09            270            960 x 640         298                    Partial
Deck of Cards     0:11            321            1080 x 1920       559                    Complete

Chapter 4

EXPERIMENTAL RESULTS

Experiments were conducted in both a simulated and a real-world environment, and MATLAB (version 2015a) was used to validate the data. Every video has to be given as input to the algorithm, and it has to be present in the same folder as the .m file containing the code. Videos can be of variable length in the following acceptable formats: .mp4, .mov, .avi.
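The per-frame loop shared by the two algorithms above can be sketched as follows. Plain template correlation stands in for the SURF extract/match steps, so this Python fragment illustrates the control flow only; it shows the adaptive target overwrite of the second algorithm and the no-box behavior under complete occlusion:

```python
def track(frames, target, min_score=0.9):
    """Skeleton of the per-frame tracking loop. Template correlation is
    an illustrative stand-in for SURF extraction + matching. Returns one
    bounding box (row, col, h, w) per frame, or None when the match
    score drops, as under complete occlusion."""
    h, w = len(target), len(target[0])
    boxes = []
    for frame in frames:
        best, best_rc = -1.0, None
        for r in range(len(frame) - h + 1):          # exhaustive search
            for c in range(len(frame[0]) - w + 1):
                diff = sum(abs(frame[r + y][c + x] - target[y][x])
                           for y in range(h) for x in range(w))
                score = 1.0 - diff / (255.0 * h * w)
                if score > best:
                    best, best_rc = score, (r, c)
        if best >= min_score:                        # target located
            r, c = best_rc
            boxes.append((r, c, h, w))
            target = [row[c:c + w] for row in frame[r:r + h]]  # adaptive update
        else:
            boxes.append(None)                       # occluded / not found
    return boxes

# Synthetic video: a bright 2x2 target moves right, then is fully occluded.
blank = [[0] * 6 for _ in range(5)]
f0 = [row[:] for row in blank]; f0[1][0] = f0[1][1] = f0[2][0] = f0[2][1] = 255
f1 = [row[:] for row in blank]; f1[1][1] = f1[1][2] = f1[2][1] = f1[2][2] = 255
f2 = [row[:] for row in blank]                       # target hidden
target = [row[0:2] for row in f0[1:3]]
print(track([f0, f1, f2], target))  # [(1, 0, 2, 2), (1, 1, 2, 2), None]
```

In the actual method the inner search is replaced by SURF matching plus the geometric-transformation estimate, which is what makes the tracker robust to scale and rotation changes that raw template correlation cannot handle.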
The target to be tracked has to be present in the first frame of the video. The user is given the option to select the initial box around the target, and the algorithm continues the tracking from there. When the target is occluded or not found, the algorithm either shows no box or may fall on another part of the frame that it considers to have the most similar descriptors; once the target comes out of occlusion, the box falls on it again. The code also provides the ability to save the output in the same folder. The following results are presented for different scenarios:

Figure 19 (a), (b), (c) and (d). Initial selection of the target. The first frame is frozen and the user is asked to pick the target by marking two corners of the box which encloses it.

Figure 20 (a), (b), (c) and (d). Tracking a car traveling along a straight path through an intersection. The detected target is enclosed in a yellow box.

Figure 21 (a), (b), (c) and (d). Tracking of an airplane as captured by a still camera. The orientation of the plane changes in every frame as it flies by, compared to that captured in the first frame. The detected target is enclosed in a yellow box.

Figure 22 (a), (b), (c) and (d). Tracking of a particular race car in the presence of many similar-looking targets. The algorithm detects the correct target using its specific SURF points. The detected target is enclosed in a yellow box.

Figure 23 (a), (b), (c), (d), (e) and (f). Synthetically generated video in which a deck of cards that becomes occluded behind a paper is tracked. (a) shows the scene before the occlusion; (b), (c), and (d) show it during the occlusion; (e) and (f) show it after the occlusion, as the target comes out. The tracked object is bounded in a yellow box.

Figure 24 (a), (b), (c), (d), (e) and (f). The motion of an airplane that becomes partially occluded is tracked.
(a) and (b) show the scene before the object gets occluded; (c) shows it during the occlusion, when the object is hidden; (d), (e) and (f) show the object coming out of occlusion. The detected target is enclosed in a yellow box.

Figure 25 (a), (b), (c) and (d). An example with a moving camera. As long as the target is in the frame, the algorithm captures it. The detected target is enclosed in a yellow box.

Figure 26 (a), (b), (c), (d), (e) and (f). An example of complete occlusion where the boat, which is the target, disappears and reappears from behind the other boat. (a) and (b) show the scene before the object gets occluded; (c) shows it during the occlusion, when the object is hidden; (d), (e) and (f) show the object coming out of occlusion. The detected target is enclosed in a yellow box.

Figure 27 (a), (b), (c), (d), (e) and (f). Another example of complete occlusion where the boat, which is the target, disappears and reappears from behind the other boat. (a) shows the scene before the object gets occluded; (b), (c) and (d) show it during the occlusion; (e) and (f) show the object coming out. The detected target is enclosed in a yellow box.

Chapter 5

PROGRAM GUIDE

MATLAB version 2015a is used to run the program. Every video has to be given as input to the algorithm, and it has to be present in the same folder as the .m file containing the code. Videos can be of variable length in the following acceptable formats: .mp4, .mov, .avi. The target to be tracked has to be present in the first frame of the video. The user is presented with the first frame of the video and chooses the initial box around the target by specifying its top-left and bottom-right corners; the algorithm continues the tracking from there. As output, a video player is shown with every frame being displayed. This is the stage where the output is being generated, and it will be slow.
The code also saves this output in the same folder as an .avi file so the video can be viewed later. The name of the video can be specified within the code, and the speed of the saved video is similar to that of the input video.

Chapter 6

CONCLUSION

The method shown above is one of many ways to detect and track an object. The software used is MATLAB 2015a, and the experiments were run on a MacBook Pro (2014). The increasing demand for video technology has given rise to numerous video recognition software packages, each striving to show advanced results in less time. This study applied SURF feature points, which has definitely been an improvement over the previously tried methods of GMM and blob analysis. A couple of the previous limitations were overcome. The stability of the camera was no longer important: any video taken from a moving source could be accepted as long as the target in question was present in the frame. The time taken to generate the output was reduced significantly, and no initial processing time is needed; the algorithm works as long as the first frame contains the target, without having to process the entire video beforehand as in the case of GMM. Since the movement of the target is unpredictable, the scale and rotational invariance of the SURF points allowed the target to be picked up even after changes in orientation. Successful results were seen when the method was applied to most of the videos.

Chapter 7

FUTURE WORK

Tracking objects depends on various factors, and it is almost impossible to control all the deciding factors with one algorithm. Similarly, it is difficult to devise an algorithm that gives positive results for every possible situation; a particular situation can be analyzed in many ways from an algorithm's point of view. We could use different methods to grab feature points, e.g., SIFT features. SIFT has a larger descriptor vector of 128 values.
Hence the time taken is somewhat higher, but thanks to the additional descriptor values the algorithm can catch minor changes. The addition of an active contour could also improve the presentation of the output.

REFERENCES

Porikli, F., & Yilmaz, A. Object detection and tracking. TR2012-003, January 2012.

Butler, D. E., Bove, V. M., & Sridharan, S. Real-time adaptive foreground/background segmentation. EURASIP Journal on Advances in Signal Processing (2005), no. 14, 2292.

Paragios, N., & Deriche, R. Geodesic active contours and level sets for the detection and tracking of moving objects. IEEE Transactions on PAMI 22 (2000), no. 3, 266.

Zhu, C. Video object tracking using SIFT and Mean Shift. Report No. Ex005/2011.

Yilmaz, A., Javed, O., & Shah, M. Object tracking: A survey. ACM Computing Surveys (CSUR), Vol. 38, Issue 4, 2006, Article No. 13.

Kass, M., Witkin, A., & Terzopoulos, D. Snakes: Active contour models. International Journal of Computer Vision, 321-331 (1988).

Kim, K., Chalidabhongse, T., Harwood, D., & Davis, L. Real-time foreground-background segmentation using codebook model, 2005. doi:10.1016/j.rti.2004.12.004

Horprasert, T., Harwood, D., & Davis, L. S. A statistical approach for real-time robust background subtraction and shadow detection. IEEE Frame-Rate Applications Workshop, Kerkyra, Greece, 1999.

Zoran, Z. Improved adaptive Gaussian mixture model for background subtraction. In Proc. ICPR, 23-26 Aug. 2004, pp. 28-31, Vol. 2.

MathWorks, 2016. Retrieved from http://www.mathworks.com/help/vision/ref/vision.foregrounddetectorclass.html

Kandhare, P., Arslan, A., & Sirakov, N. Tracking partially occluded objects with centripetal active contour. Math. Appl. 3 (2014), 61-75.
Adaptive Vision, 2016. Retrieved from http://docs.adaptivevision.com/current/studio/machine_vision_guide/BlobAnalysis.html

MathWorks, 2016. Retrieved from http://www.mathworks.com/help/vision/ref/blobanalysis.html

MathWorks, 2016. Retrieved from http://www.mathworks.com/help/simulink/slref/selector.html

Reynolds, D. Gaussian mixture models. MIT Lincoln Laboratory, 244 Wood St., Lexington, MA 02140, USA.

Bay, H., Ess, A., Tuytelaars, T., & Van Gool, L. Speeded Up Robust Features (SURF). Computer Vision and Image Understanding, Elsevier, Vol. 110, Issue 3, June 2008, pp. 346-359.

Brown, M., & Lowe, D. Invariant features from interest point groups. In BMVC, 2002.

Ahuja, N., & Todorovic, S. Unsupervised category modeling, recognition, and segmentation in images. IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 12, pp. 2158-2174, Dec. 2008.

Li, L. Image matching algorithm based on feature-point and DAISY descriptor. Journal of Multimedia, Vol. 9, No. 6, June 2014.

MathWorks, 2016. Retrieved from http://www.mathworks.com/discovery/objectrecognition.html

VITA

Chetana Divakar Nimmakayala was born on January 12, 1992, in Pune, Maharashtra, and is an Indian citizen. She graduated from Don Bosco Institute of Technology, Mumbai, Maharashtra, in May 2013, receiving her Bachelor of Engineering in Computer Engineering. She started her Master of Science in Computer Science at the University of Texas at Arlington, Texas, and subsequently transferred to Texas A&M University-Commerce, Texas, where she graduated in May 2016.

Permanent address: Department of Mathematics, Binnion Hall Room 305, Texas A&M University-Commerce, P.O. Box 3011, Commerce, TX 75429-3011. Email: nchetanad@gmail.com
Date: 2016
Faculty Advisor: Sirakov, Nikolay M.
Committee Members: Mete, Mutlu; Arslan, Abdullah
University Affiliation: Texas A&M University-Commerce
Department: MS Computer Science
Degree Awarded: M.S.
Pages: 43
Type: Text
Language: eng
Rights: All rights reserved.