Oh, man. I found a tutorial that explains how to track motion through a video using the OpenCV libraries. While this might have some useful ramifications for our experiments, which I will come back to in a second, one thing that was very useful was the ability to find a bounding box that was dynamic in shape. What I need to do in the next week is to see if I can adapt that dynamic bounding box method to my own program, which would be fantastic. Once I have the dynamic bounding box shapes, I can print each bounding box out to a file. The goal here is to be able to somehow quantify the accuracy of each of the 6 methods as they track one or more people in a video.
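To make the "print each bounding box out to a file" step concrete, here is a minimal sketch of logging per-frame boxes to a CSV file for later comparison. The file name, the box values, and the (frame, x, y, w, h) layout are all made-up assumptions for illustration, not the actual tracker output:

```python
import csv

# Hypothetical per-frame tracker output: (frame_index, x, y, w, h).
# These values and the file name are assumptions for illustration.
boxes = [
    (0, 12, 34, 50, 80),
    (1, 14, 35, 52, 81),
]

with open("boxes_tracker.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "x", "y", "w", "h"])
    writer.writerows(boxes)

# Reading the file back gives one row per frame, ready to compare
# against another tracker's output (e.g. by overlap with ground truth).
with open("boxes_tracker.csv") as f:
    rows = list(csv.reader(f))
print(len(rows))  # → 3 (header plus one row per frame)
```

With one such file per tracking method, the six methods can be compared frame by frame against the same reference boxes.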
So how does the ability to track motion in a video potentially help us? The background remains largely constant, so any simple difference between the background and something entering it would probably indicate motion. The tutorial that I found converts each image to grayscale and saves a frame where no new information is entering the scene (in other words, the background). After the initial image is established, we can subtract the current frame from the background frame. To account for simple lighting differences or shadows moving, a threshold would need to be set. Anything greater than that threshold is motion that should be investigated, and anything lower than that can probably be safely ignored. One of the things I would like to work on is to see if we can somehow combine the ability to detect motion with the tracker methods that we’ve been working on recently, so that we can use any random video, even if people don’t start in the frame to begin with.
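The steps above can be sketched with plain NumPy arrays standing in for video frames. In real code the tutorial would use OpenCV calls like `cv2.absdiff` and `cv2.threshold`; the tiny synthetic "frames" and the threshold value here are made up just to show the logic:

```python
import numpy as np

# A static grayscale "background" frame (all dark), as the tutorial saves.
background = np.zeros((6, 6), dtype=np.uint8)

# Current frame: something bright enters, plus a small global lighting shift.
frame = background.copy()
frame[2:4, 2:4] = 200   # a 2x2 bright object "enters" the scene
frame += 5              # mild lighting change we want to ignore

# Absolute difference between current frame and background.
diff = np.abs(frame.astype(int) - background.astype(int))

# Threshold out small changes (shadows, flicker); 25 is an arbitrary choice.
THRESHOLD = 25
motion_mask = diff > THRESHOLD

print(motion_mask.sum())  # → 4: only the 2x2 object counts as motion

# A bounding box around the motion, in (x, y, w, h) form.
ys, xs = np.nonzero(motion_mask)
bbox = (xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
print(bbox)  # → (2, 2, 2, 2)
```

The global +5 shift falls below the threshold and is ignored, while the entering object survives it, which is exactly the distinction the thresholding step is meant to make.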
Also, next week I will be attending the Swarmathon competition with some of the other members of the team at the Kennedy Space Center in Florida. We might even get to see a rocket launch!