Monday, April 11, 2011

Results!

So, results. So far the best way to classify Xs has been to:

Step 1) Generate a bunch of random numbers. I did 40000, which allows for 10000 comparisons in a single image (an x and y for each point, two points per comparison). These numbers are between 0 and 1 and represent a percentage, so to get the actual pixel you multiply the percentage by the width or height of the image/section you are looking at.

Step 2) Compare the darkness of each pair of pixels and label the comparison as 1, 0, or -1 depending on how they compare (steps 1 through 3 are sketched in code after this list).

Step 3) Repeat. A lot.

Step 4) Allow jboost to do its magic.
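
In rough Python, steps 1 through 3 come out to something like the sketch below. The grayscale image layout (a 2D array indexed [row][col]), the region tuple, the darkness threshold, and the function names are all just placeholders for illustration, not the exact code I'm running:

import random

def make_comparison_pairs(n_comparisons=10000):
    """Generate normalized (0-1) coordinates: two (x, y) points per comparison."""
    return [(random.random(), random.random(),
             random.random(), random.random())
            for _ in range(n_comparisons)]

def darkness_features(image, region, pairs, threshold=10):
    """For each pair, compare pixel darkness inside the region and emit 1, 0, or -1.

    image  -- 2D array of grayscale values (0 = black, 255 = white)
    region -- (left, top, width, height) of the section being scanned
    """
    left, top, width, height = region
    features = []
    for x1, y1, x2, y2 in pairs:
        # Scale the percentage coordinates by the region's width/height.
        p1 = image[top + int(y1 * (height - 1))][left + int(x1 * (width - 1))]
        p2 = image[top + int(y2 * (height - 1))][left + int(x2 * (width - 1))]
        if p1 < p2 - threshold:      # first point noticeably darker
            features.append(1)
        elif p2 < p1 - threshold:    # second point noticeably darker
            features.append(-1)
        else:                        # roughly the same darkness
            features.append(0)
    return features

The resulting feature vectors (one per scanned region) are what get handed off to jboost in step 4.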


For the results below I only used 2,000 random comparisons.

Results 1

As you can see, I was scanning regions that were too big. Even with this fairly big error, the results were surprisingly good. There are two regions that it consistently detects as positives that clearly are not: the person's crotch and the random wall segment to the person's bottom right. For the next round of training, more negative examples from these regions will be passed in to hopefully correct this issue.


Results 2


Here I scanned a smaller region and, for whatever reason, got worse results than with the bigger region, but still fairly good. Also, in both result sets it was better at detecting my girlfriend's Xs than mine. Not fair.


Next things to do:
1) More data. I have a feeling that with 200 positives and 1000 negatives (compared to the 100 positives and 300 negatives for these results) my results will be significantly better, but I will have to see. This is partly because I will focus more training on the specific regions that I see need the most help.

2) Instead of comparing just two points, compare along a line (top half vs. bottom half); see the sketch after this list.

3) Start working on making it real time in C/C#/C++ with live data off of the Kinect.
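
For item 2, the line-based feature would look roughly like this. Again just a sketch, reusing the same made-up image/region layout and threshold from the earlier snippet:

def line_feature(image, region, line, threshold=10, samples=10):
    """Compare the average darkness of the two halves of a line segment.

    line -- (x1, y1, x2, y2) as 0-1 percentages within the scanned region
    """
    left, top, width, height = region
    x1, y1, x2, y2 = line
    sums, counts = [0.0, 0.0], [0, 0]
    for i in range(samples):
        t = i / float(samples - 1)                     # position along the line
        x = left + int((x1 + (x2 - x1) * t) * (width - 1))
        y = top + int((y1 + (y2 - y1) * t) * (height - 1))
        half = 0 if t < 0.5 else 1                     # first half vs. second half
        sums[half] += image[y][x]
        counts[half] += 1
    avg_a, avg_b = sums[0] / counts[0], sums[1] / counts[1]
    if avg_a < avg_b - threshold:    # first half noticeably darker
        return 1
    if avg_b < avg_a - threshold:    # second half noticeably darker
        return -1
    return 0

The hope is that averaging along a line is less sensitive to single noisy pixels than comparing two lone points.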



I tried graphing out relevant data but couldn't get any graphs that showed anything useful.
