Q. From these images “we want to derive the area of the board covered by plant material to serve as an index of plant density.” We would like to “develop .. a simple protocol for estimating area covered by plant material in our digital images with Pixcavator.”
This would be hard to accomplish with images similar to these. To capture the vegetation effectively, one has to separate it from the background. Ideally, the latter would have to be either uniformly lighter or uniformly darker than the former (see Gray scale images). The light/dark squares make the task very challenging.
Instead, one can digitally isolate the squares within each image, so that the area covered by vegetation can be estimated from a set of sub-images (i.e., individual squares) with uniform background colors:
In the screenshot, the colored areas are the complement of the vegetation. Their total area is 64.95%, so the vegetation takes the rest, 35.05%.
Blue gives a good separation of the background.
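The idea of the estimate can be sketched in a few lines. This is not Pixcavator's actual computation; it is a minimal illustration assuming, as above, that the blue channel separates the board from the plants (background bright in blue, vegetation dark). The threshold value of 128 is a placeholder, not a parameter from the study.

```python
def vegetation_fraction(pixels, blue_threshold=128):
    """Estimate the fraction of the image covered by vegetation.

    `pixels` is a flat list of (r, g, b) tuples for one sub-image
    (one square of the board). Pixels bright in the blue channel are
    counted as background; the vegetation is the complement.
    """
    background = sum(1 for (_, _, b) in pixels if b >= blue_threshold)
    return 1 - background / len(pixels)

# Synthetic 10x10 sub-image: 35 dark "vegetation" pixels on a bright board.
pixels = [(40, 90, 30)] * 35 + [(220, 220, 200)] * 65
print(vegetation_fraction(pixels))  # about 0.35, i.e., 35% covered
```

As in the screenshot, one measures the background (here 65% of the pixels) and takes the complement as the vegetation estimate.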
The new version of our image analysis software has been made available to the users. This release is primarily about fixing a few annoying bugs:
- Loading images larger than 2000×2000 caused the software to stall (fixed, though images this large remain impractical to process).
- Changing the color channels after processing corrupted the data in the Output tab.
- The summary in the Output tab wasn't updated when objects were manually selected or deselected.
- Some image processing tools in the Tools tab didn't work properly.
This study was conducted in 2009 for a company that is “working in the online social media sector and are looking for an accurate image analysis solution that allows us to compare a reference photo to a large dataset of photos to determine if the reference photo is duplicated in the larger dataset.”
The full title of the report is “Image-to-image search with Pixcavator (PxSearch): a case study”. It was written by Dr. Ash Pahwa and myself and is presented here with minor modifications.
The first version of PxSearch was created in 2007. With that version, a search of the collection would initially return 4-5 good hits (i.e., transformed versions of the original) at the top, followed by bad hits; some of the good matches didn't appear in the results at all. After the upgrades, the results became 10 out of 10, or close to it. This improvement made this more extensive study possible. The results are OK, even though the collections are still very small. The company eventually went with another vendor, but the report is still an interesting document to browse through.
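PxSearch's actual algorithm is not described here; purely as an illustration of the image-to-image search problem (ranking a collection by similarity to a reference photo, so that transformed duplicates come out on top), here is a crude average-hash sketch. The names and the 2×2 toy grids are mine, not from the report.

```python
def average_hash(gray):
    """A crude perceptual fingerprint of a small grayscale grid.

    `gray` is a grid of brightness values (0-255), assumed already
    downscaled from the full image. Each cell becomes one bit:
    brighter than the grid's mean or not. NOT PxSearch's method.
    """
    flat = [v for row in gray for v in row]
    mean = sum(flat) / len(flat)
    return tuple(v >= mean for v in flat)

def hamming(h1, h2):
    """Number of differing bits; small distance = likely duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# Hash every image in the collection once, then rank candidates by
# distance to the reference photo's hash.
ref = [[10, 200], [200, 10]]     # reference photo (toy 2x2 grid)
dup = [[12, 198], [205, 8]]      # slightly transformed duplicate
other = [[200, 200], [10, 10]]   # unrelated image
print(hamming(average_hash(ref), average_hash(dup)))    # 0: a match
print(hamming(average_hash(ref), average_hash(other)))  # 2: not a match
```

A scheme like this is robust to mild brightness and compression changes but not to the harder transformations the study tested, which is where a more structural analysis of the image's content earns its keep.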
Since 2009, no further work has been done on the project but, hopefully, it will become one of the summer projects for the REU site.
Incidentally, I don’t like the term “reverse image search” popularized by TinEye. If the image search that we are used to at Google etc. is “direct image search” (text-to-image), then “reverse image search” ought to mean searching for text based on images. Not only is this not what we are talking about, but that problem hasn’t been even remotely solved (see this pathetic list: Visual image search engines). This is the reason I prefer “image-to-image search” to describe this application.