Using OpenCV to avoid Shirt Guy Dom in Megatokyo

I’m a great fan of Megatokyo, but the publication rate of new comics can seem glacial at times. There have been times when I’ve checked the site hoping to see an update, only to find that it’s a “Shirt Guy Dom” comic (which I have no interest in reading, although to be fair there seem to have been fewer of these amongst the more recent comics).

Anyway, since I became a NAO developer I’ve been trying to beef up my Python skills and learn a bit about OpenCV – an open source library of computer vision algorithms. So one day I thought to myself: why not write a Python program to determine whether a given Megatokyo comic is a “real” comic or a “Shirt Guy Dom”? Then, rather than manually checking for updates, the script could tell me 1) whether a new comic had been published, and 2) whether I should read it. This seemed fairly straightforward: while there is a lot of shading in a normal Megatokyo comic, the “Shirt Guy Dom” comics are very black and white. This means there should be no need for particularly clever feature recognition; instead I should be able to produce a histogram for each comic and train a classifier to learn the difference between histograms for comics I wanted to read and those I didn’t.

There are several steps to this process:

  1. Check for new comics and download them
  2. Compute the histograms for each comic
  3. Create a labelled data set of “comic” and “dontread” images
  4. Train a support vector machine (SVM) using some pre-classified comics
  5. Use the trained SVM to determine whether the new comic(s) should be classified as “don’t read”

Step 1 is not particularly hard so I won’t say much about it. It takes advantage of the fact that the comic filenames are sequentially numbered: the script finds the number of the most recent comic on disk and then attempts to download the next comic in the sequence.
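Although I won’t reproduce the download code here, the idea is simple enough to sketch. The following is a minimal illustration rather than the actual code from the repository – in particular the URL pattern and the fixed .gif extension are assumptions:

import glob
import os
import urllib

def download_next_comic(basedir):
    # comic files on disk are named like 0411.png, so the highest
    # number tells us which comic to try next
    numbers = [int(os.path.splitext(os.path.basename(f))[0])
               for f in glob.glob(basedir + '/[0-9]*.png')]
    name = '%04d' % (max(numbers) + 1)
    # hypothetical URL pattern; the real site serves GIF or JPEG strips
    url = 'http://megatokyo.com/strips/' + name + '.gif'
    urllib.urlretrieve(url, basedir + '/' + name + '.gif')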

Step 2 is where OpenCV comes into play. However, Megatokyo comics can be in GIF or JPEG format, and OpenCV does not support GIF files, so before we can compute the histograms we need to convert the GIF images to a format OpenCV can read. To avoid having to remember which images were converted from GIF and which were JPEGs, I elected to convert all images to PNG format. For this I used the PythonMagick binding for ImageMagick.

import os
import PythonMagick

def convert_file_name_to_png(filename):
    # swap the extension for .png, e.g. 0031.gif -> 0031.png
    return os.path.splitext(filename)[0] + '.png'

def convert_image_to_png(basedir, src):
    # read the source image and write it back out as a PNG
    image = PythonMagick.Image(basedir + '/' + src)
    dest = convert_file_name_to_png(src)
    image.write(basedir + '/' + dest)
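Converting a whole directory is then just a matter of looping over the GIF files, along these lines:

import glob
import os

def convert_all_gifs(basedir):
    for path in glob.glob(basedir + '/*.gif'):
        convert_image_to_png(basedir, os.path.basename(path))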

Having got all the images into a format that OpenCV could handle, computing the histograms was straightforward. Since most comics are greyscale I converted the images to greyscale before computing the histogram. Also, rather than using a full 256 “buckets” for each histogram, I chose to limit them to 64, thinking that this should give enough detail to train a classifier while keeping the number of features down.

import cv

NUM_BINS = 64   # 64 buckets rather than the full 256 grey levels

def make_histogram(imagefile):
    col = cv.LoadImageM(imagefile)
    # convert to a single-channel greyscale image
    gray = cv.CreateImage(cv.GetSize(col), cv.IPL_DEPTH_8U, 1)
    cv.CvtColor(col, gray, cv.CV_RGB2GRAY)
    # compute a 1D intensity histogram and normalise it
    hist = cv.CreateHist([NUM_BINS], cv.CV_HIST_ARRAY, [[0, 255]], 1)
    cv.CalcHist([gray], hist)
    cv.NormalizeHist(hist, 1.0)
    return hist
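Normalising the histogram means the bin values don’t depend on the dimensions of the image, so comics of different sizes still produce directly comparable feature vectors.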

Step 3 – I wanted a way to classify images that didn’t require any fancy file formats and would be easy to set up. In the end I settled on using the filesystem. Under the folder where I store the Megatokyo images I created a folder for each category (there are only two, but in principle there could be more) and, for each image belonging to the class a folder signifies, put a symbolic link in that folder pointing back to the original image in the parent directory.

So, for example, in my dontread folder I have links like this:

0031.png -> ../0031.png
0045.png -> ../0045.png
0065.png -> ../0065.png
0076.png -> ../0076.png
0082.png -> ../0082.png
0086.png -> ../0086.png
0093.png -> ../0093.png
...

The advantage of this was that, to read the comics in a category, all I needed to do was point an image viewer at that category’s directory.
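The make_link helper used by the script below lives in the megatokyo module of the repository; a minimal version (my sketch, not necessarily the repository’s exact code) could look like this:

import os

def make_link(basedir, category, filename):
    catdir = os.path.join(basedir, category)
    if not os.path.isdir(catdir):
        os.mkdir(catdir)
    # relative symlink pointing back at the original image in the parent
    os.symlink(os.path.join('..', filename),
               os.path.join(catdir, filename))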

I manually classified the first 411 images and wrote a short piece of python to set up the symbolic links for me:

import sys
import getopt

from megatokyo import Usage, make_link

# numbers of the comics in the first 411 that I classified as "dontread"
dontread = ['0031', '0045', '0065', '0076', '0082', '0086', '0093', '0104',
            '0130', '0170', '0186', '0191', '0227', '0228', '0242', '0257',
            '0265', '0279', '0302', '0315', '0320', '0328', '0361', '0388',
            '0411']

def make_categories(negative):
    # everything up to the highest "dontread" number that is not in the
    # negative list is assumed to be a normal comic
    positive = []
    ineg = []
    for i in negative:
        ineg.append(int(i))
    for i in range(1, max(ineg)):
        tmp = str(i)
        # zero-pad the comic number to four digits
        pstr = "00000"[0:(4 - len(tmp))] + tmp
        if pstr not in negative:
            positive.append(pstr)
    return {'comic': positive, 'dontread': negative}

def make_links(basedir, categories):
    for k in categories.keys():
        vs = categories[k]
        for v in vs:
            make_link(basedir, k, v + ".png")

def main(argv=None):
    if argv is None:
        argv = sys.argv
    try:
        try:
            opts, args = getopt.getopt(argv[1:], "h", ["help"])
        except getopt.error, msg:
            raise Usage(msg)
        if 0 == len(args):
            raise Usage("Missing base path")
        basedir = args[0].strip()
        print "Base dir = " + basedir
        make_links(basedir, make_categories(dontread))
    except Usage, err:
        print >>sys.stderr, err.msg
        print >>sys.stderr, "for help use --help"
        return 2

if __name__ == "__main__":
    sys.exit(main())

Steps 4 & 5 – this is where I ran into trouble: although OpenCV does come with an implementation of an SVM, I could not get it to work. It looked like it should work, but everything I tried resulted in the following error message:

NotImplementedError: Wrong number or type of arguments for overloaded function 'CvSVM_train'.
Possible C/C++ prototypes are:
train(CvSVM *,CvMat const *,CvMat const *,CvMat const *,CvMat const *,CvSVMParams)
train(CvSVM *,CvMat const *,CvMat const *,CvMat const *,CvMat const *)
train(CvSVM *,CvMat const *,CvMat const *,CvMat const *)
train(CvSVM *,CvMat const *,CvMat const *)

After breakpointing the code inside the OpenCV binding and confirming that I was passing arguments that should have corresponded to

train(CvSVM *,CvMat const *,CvMat const *,CvMat const *,CvMat const *,CvSVMParams)

I decided to try something else. I first tried the pyopencv binding, which sounded promising, but I didn’t have much luck with that either, so I finally settled on PyML. Training the SVM then meant producing an array of histogram data and labelling it:

import cv
from PyML import *   # provides SVM and VectorDataSet
# get_images, get_png_images, make_histograms and NUM_BINS are defined
# elsewhere in the same module

def classify(basedir, category_names):
    all_images = get_images(basedir)
    all_classified_images = []
    classified = {}
    for c in category_names:
        pimg = get_png_images(basedir + '/' + c)
        classified[c] = pimg
        for im in pimg:
            all_classified_images.append(im)
    # now need to find the images which are not classified yet
    unclassified = []
    for i in all_images:
        if i not in all_classified_images:
            unclassified.append(i)
    # make histograms of all images
    hmap = make_histograms(basedir, all_images)
    clf = learn(classified, hmap)
    # flatten each histogram into a row of NUM_BINS feature values
    usamples = []
    for u in unclassified:
        hist = hmap[u]
        row = []
        for j in range(NUM_BINS):
            row.append(cv.QueryHistValue_1D(hist, j))
        usamples.append(row)
    data = VectorDataSet(usamples, patternID=unclassified)
    results = clf.test(data)
    patterns = results.getPatternID()
    labels = results.getPredictedLabels()
    # make map of image name to predicted label
    lmap = {}
    for i in range(len(patterns)):
        lmap[patterns[i]] = labels[i]
    return lmap

# train a support vector machine to recognize the images based on histograms
def learn(classified, histograms):
    clf = SVM()
    total_samples = 0
    for c in classified.keys():
        cim = classified[c]
        total_samples = total_samples + len(cim)
    samples = []
    labels = []
    for c in classified.keys():
        cim = classified[c]
        for im in cim:
            hist = histograms[im]
            row = []
            for j in range(NUM_BINS):
                row.append(cv.QueryHistValue_1D(hist, j))
            samples.append(row)
            labels.append(c)
    data = VectorDataSet(samples, L=labels)
    print str(data)
    clf.train(data)
    return clf
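For completeness, here is roughly how classify() might be driven once everything is wired up (a sketch using the names above; the path is a placeholder):

lmap = classify('/path/to/megatokyo', ['comic', 'dontread'])
for image, label in sorted(lmap.items()):
    if label == 'dontread':
        print image + " looks like a Shirt Guy Dom comic"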

Conclusion

After being trained on the first 411 images the system then classified the next 814, of which 25 were classified as “dontread”. Of those 25, five were comics that were incorrectly classified (they had more black in them than a typical comic). There are several things I could probably do to improve matters:

  • Attempt to tune the SVM – I used the defaults for PyML and the default linear SVM. Using a different kernel might give better results (see the sketch after this list).
  • Use larger histograms (for example 128 or 256 buckets instead of 64) – this might capture more subtlety of shading.
  • Make the number of images in the training set more equal amongst the different classes – currently the training data has 386 images in the comic category and 25 in the dontread category (this was simply the proportion in the first 411 comics).
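As an illustration of the first point, swapping in a Gaussian (RBF) kernel in PyML is a one-line change; the gamma value here is a guess that would need proper tuning (PyML has cross-validation support for exactly this), and data is the labelled VectorDataSet built in learn() above:

from PyML import *

clf = SVM(ker.Gaussian(gamma=0.5))   # instead of the default linear SVM
clf.train(data)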

That said, the main purpose of doing this was a learning exercise and I’ve done as much as I feel like for the moment.

The other thing that I meant to do, but haven’t yet, is to set up the links to the newly classified images so that I only need to look in the “comic” folder to see the comics I want to read.

You can find all the code for this on GitHub at: https://github.com/davesnowdon/PythonOpenCvImageClassifier. Since I’m new to both Python and OpenCV I make no claims that it’s particularly great code, but it does seem to work.