Our project this week has been to work out ways to fairly compare each of the six tracking methods available in the OpenCV software suite. Our next big task is to come up with some way of describing the output that each method produces, and to investigate the potential errors.

For instance, we have been using a video in which no people are present at the start, but the bounding box is drawn where they will eventually appear. Unfortunately, I think this leads the methods to begin by training to recognize the background, which is the opposite of what we want. So I have begun looking through our videos to see if I can either find, or modify, a clip that contains just a single person throughout, with that person already in frame when the clip starts. We want to see whether this helps the accuracy of any particular method.

Another thing we need to work on next is introducing a bounding box that changes size as the person moves away from the camera. Currently, the box remains the same size, so we need to see if there is some sort of setting that can dynamically generate a new size.

Eventually we will want to compare the accuracy of all six methods, possibly by comparing the tightness of the box fit and the accuracy of tracking each person. We also need to add the ability to track multiple people in one file and see how each tracker performs, but for now, we'll concentrate on tracking a single person.
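As a rough sketch of the comparison harness, the loop below runs one tracker over a single-person clip and records the box it reports on each frame. The post doesn't name the six methods, so the list here is an assumption (the six from the classic OpenCV tracking tutorials: BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN), and the constructor names vary by OpenCV version — older builds expose `cv2.TrackerKCF_create`, newer 4.x builds move them under `cv2.legacy`.

```python
# Assumed set of six trackers and their OpenCV factory-function names;
# adjust to match whichever six methods the project is actually comparing.
TRACKER_FACTORIES = {
    "BOOSTING": "TrackerBoosting_create",
    "MIL": "TrackerMIL_create",
    "KCF": "TrackerKCF_create",
    "TLD": "TrackerTLD_create",
    "MEDIANFLOW": "TrackerMedianFlow_create",
    "GOTURN": "TrackerGOTURN_create",
}

def make_tracker(name):
    """Construct a tracker by name (imports cv2 lazily)."""
    import cv2
    attr = TRACKER_FACTORIES[name]
    # Newer OpenCV 4.x keeps these under cv2.legacy; older builds use cv2 directly.
    for module in (getattr(cv2, "legacy", None), cv2):
        if module is not None and hasattr(module, attr):
            return getattr(module, attr)()
    raise RuntimeError(f"{attr} not found in this OpenCV build")

def track_single_person(video_path, name, init_box):
    """Run one tracker over a clip; init_box is (x, y, w, h) drawn on frame 1.

    Returns the per-frame list of boxes, with None marking a lost track.
    """
    import cv2
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = make_tracker(name)
    tracker.init(frame, init_box)
    boxes = [init_box]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        boxes.append(tuple(box) if found else None)
    cap.release()
    return boxes
```

Recording the per-frame boxes, rather than just drawing them, is what lets us score all six methods against the same clip afterwards.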
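On the question of whether the box can change size as the person walks away: each tracker's `update()` already returns a full (x, y, w, h) box, so one cheap diagnostic is to check whether the reported width and height ever actually change over a clip. A minimal sketch of that check, operating on the per-frame box list described above:

```python
def box_areas(boxes):
    """Per-frame box area in pixels; None entries mark lost-track frames."""
    return [None if b is None else b[2] * b[3] for b in boxes]

def adapts_scale(boxes, tolerance=1e-6):
    """True if the reported box area ever changes, i.e. the tracker rescales.

    A flat area sequence suggests the method only translates the initial box.
    """
    areas = [a for a in box_areas(boxes) if a is not None]
    return any(abs(a - areas[0]) > tolerance for a in areas[1:])
```

Running this over the same clip for each method would tell us which of the six rescale at all before we go hunting for a setting that forces it.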
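For scoring "tightness of the box fit", a standard choice (an assumption on my part — the post doesn't commit to a metric) is intersection-over-union against hand-labeled ground-truth boxes, averaged over the clip:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes; 1.0 is a perfect fit."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix, iy = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix) * max(0, iy2 - iy)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def mean_iou(predicted, ground_truth):
    """Average IoU over a clip; lost-track frames (None) score 0."""
    scores = [iou(p, g) if p is not None else 0.0
              for p, g in zip(predicted, ground_truth)]
    return sum(scores) / len(scores) if scores else 0.0
```

A single mean-IoU number per method would give us the fair, like-for-like comparison the project is after, at the cost of labeling ground-truth boxes for at least one clip.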
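For the eventual multiple-people case, OpenCV ships a MultiTracker wrapper (`cv2.legacy.MultiTracker_create` on newer builds, `cv2.MultiTracker_create` on older ones) that updates several sub-trackers per frame. The sketch below shows only the per-frame bookkeeping, with the tracker object injected so the logic is testable; wiring it to a real video is an assumption left to the usage note.

```python
def track_multiple_people(frames, multi_tracker, init_boxes):
    """Record one box per person per frame.

    frames: iterable of frames *after* the first; multi_tracker must already
    have one sub-tracker added per person, initialized on the first frame.
    init_boxes: list of (x, y, w, h), one per person on the first frame.
    Returns a list with one entry per frame: a list of boxes, or None if the
    multi-tracker reported failure for that frame.
    """
    per_frame = [list(init_boxes)]
    for frame in frames:
        found, boxes = multi_tracker.update(frame)
        per_frame.append([tuple(b) for b in boxes] if found else None)
    return per_frame
```

In a real run, the multi-tracker would be built with something like `mt = cv2.legacy.MultiTracker_create()` followed by one `mt.add(cv2.legacy.TrackerKCF_create(), first_frame, box)` call per person, then the remaining frames fed in from `cv2.VideoCapture` as above.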