I was about to review the newly released Google Similar Image Search when I ran across this one. The verdict: not so good.
The reviewer does not seem to realize, though, that Microsoft released its own similarity search a few months earlier. I am not judging, because I missed it myself when it came out. It would be interesting to test the two and see which one is better (or less bad). One point in favor of Microsoft is that Google hasn’t indexed all images.
UPDATE: Another good review at Rich Marr’s Tech Blog.
Flow-through pore characteristics of monolithic silicas and their impact on column performance in high-performance liquid chromatography [1] by R. Skudas, B.A. Grimes, M. Thommes and K.K. Unger (Journal of Chromatography A, Volume 1216, Issue 13, 27 March 2009, Pages 2625-2636).
The idea is to examine the sizes of pores in the microscopy images (on the right), derive the permeability of the material from that data, and then compare to experimental results.
The image analysis was done manually (they call it “direct analysis”) and then with Pixcavator: “The values estimated by the “Pixcavator” program were based on the area estimation via integrating the number of pixels in this area.”
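To make that concrete, here is a minimal sketch of pixel-counting area estimation (my own illustration in Python, not the actual Pixcavator code; the threshold and the scale are made-up numbers, and pores are assumed to be darker than the silica skeleton):

```python
import numpy as np
from scipy import ndimage

def pore_areas(gray, threshold=80, microns_per_pixel=0.05):
    """Estimate pore areas by counting the pixels in each dark region.

    gray: 2-D uint8 grayscale micrograph; pores assumed darker than the
    silica skeleton. `threshold` and `microns_per_pixel` are illustrative
    values only, not the settings used in the paper.
    """
    pores = gray < threshold                        # binary mask of pore pixels
    labels, n = ndimage.label(pores)                # separate connected pores
    pixel_counts = np.bincount(labels.ravel())[1:]  # pixels per pore (skip background)
    return pixel_counts * microns_per_pixel ** 2    # convert to square microns

# Synthetic example: a dark 10x10 "pore" on a bright background.
img = np.full((50, 50), 200, dtype=np.uint8)
img[20:30, 20:30] = 30
print(pore_areas(img))   # -> [0.25], i.e. 100 pixels * (0.05 um)^2
```

Summing the pixels of each labeled region is exactly the “integrating the number of pixels” idea from the quote.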
The comparison table is below.
The correlation looks good.
Other examples of image analysis.
Download here. Recent examples here.
The main changes in this version of Pixcavator are the following.
First, an annoying bug in the user interface was fixed. I don’t want to remind everybody what it was but I do certainly apologize. A few minor bugs were fixed too.
Second, a new slider, “Border contrast”, replaces the old one. The idea is that by moving it you can jump to the next sharp border. For example, in the image below the change in the gray level is very gradual, so if you move the “Size” or “Contrast” slider, the growth is very slow and the sharp edge is easy to miss. With the new slider you get there in just a few abrupt steps: border contrast = 0, 10, and 15, respectively.
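Pixcavator’s exact definition of border contrast isn’t spelled out here, so treat the following as a rough, hypothetical one-dimensional sketch of the “jump to the next sharp border” idea (the gray levels and thresholds are made up):

```python
import numpy as np

# Toy 1-D gray-level profile: a gradual ramp (steps of 2) followed by a
# sharp edge (a jump of 80), mimicking the example image above.
profile = np.concatenate([np.arange(0, 100, 2),   # gradual change
                          np.full(20, 100),       # plateau
                          np.full(20, 180)])      # sharp edge

# Gray-level jump between neighboring pixels -- a crude stand-in for
# "border contrast" (this definition is an assumption, not Pixcavator's).
jumps = np.abs(np.diff(profile.astype(int)))

def next_sharp_border(start, min_contrast):
    """Position of the next border whose jump is at least min_contrast."""
    for i in range(start, len(jumps)):
        if jumps[i] >= min_contrast:
            return i
    return None

print(next_sharp_border(0, 1))    # 0  -- with a low setting every tiny step counts
print(next_sharp_border(0, 10))   # 69 -- skips the ramp, lands on the sharp edge
```

With a low setting the boundary creeps through the gradual ramp one tiny step at a time; a higher setting skips straight to the sharp edge, which is the behavior the new slider aims for.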
To our current customers: you can download the new version of Pixcavator and then activate it with your current serial number. This amounts to free upgrades for the foreseeable future.
BTW, the first digit in the version number refers to the calendar year of the development. This is the fourth since version 1.0 appeared in August ’06 (a prototype/testing program was created in the summer of ’05).
Finally, our plans for the coming months:
- 4.1: Introduce a way to fully pre-compute all the data (not only the construction of the topology graph but also its analysis) so that moving the sliders updates the contours virtually instantly.
- 4.2: Speed up the core algorithm (construction of the topology graph) significantly. I think its complexity will be linear instead of the current quadratic.
- 4.3: Introduce more data filtering tools (beyond unmarking dark and light objects).
I’d be glad to hear your suggestions.
I started writing the article for “Pixel” and the word certainly has multiple meanings…
- A location within the image: two coordinates.
- A location and its value: 0 or 1 for binary, 0-255 for gray scale, 3 numbers for color.
- A little square/tile (see Cell decomposition of images).
- A unit of length.
- A unit of area.
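To make the first two meanings (and the units) concrete, here is a tiny sketch in Python/NumPy; the pixel values and the scale are made up:

```python
import numpy as np

# The same 2x2 image in three common representations:
binary = np.array([[0, 1],
                   [1, 0]], dtype=bool)          # 0 or 1 per pixel
gray   = np.array([[ 12, 200],
                   [140,  17]], dtype=np.uint8)  # 0-255 per pixel
color  = np.zeros((2, 2, 3), dtype=np.uint8)     # 3 numbers per pixel
color[0, 1] = (255, 128, 0)                      # an orange pixel

row, col = 0, 1                  # a pixel as a location: two coordinates
print(gray[row, col])            # ...with its gray value: 200
print(color[row, col])           # ...or its three color values: [255 128   0]

# As units of length/area: with a known scale, pixel counts become physical sizes.
microns_per_pixel = 0.5                          # assumed scale
print(gray.size * microns_per_pixel ** 2)        # 4 pixels -> 1.0 square micron
```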
More important is to keep in mind this simple principle while analyzing images:
Pixels are small.
This is important in two ways.
First, as the resolution increases, the analysis results should “converge” to those for the real scene depicted in the image, because the world is analog (another good principle).
Whatever “real” (physical) object is depicted in the image, its area computed as the sum of its pixels will be as close as we like to its “true” area as the resolution increases (for a more rigorous interpretation, see [1]). See the pictures below.
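Here is a quick numerical check of that convergence, a sketch of my own (not from [1]): rasterize a disk of radius 1 at increasing resolutions and compare the pixel-count area with the true value π.

```python
import numpy as np

def disk_area_by_pixels(pixels_per_unit):
    """Area of a unit-radius disk estimated by counting pixels."""
    n = 2 * pixels_per_unit                       # the image covers [-1, 1] x [-1, 1]
    c = (np.arange(n) + 0.5) / pixels_per_unit - 1.0   # pixel-center coordinates
    x, y = np.meshgrid(c, c)
    inside = x**2 + y**2 <= 1.0                   # pixels whose centers fall in the disk
    return inside.sum() / pixels_per_unit**2      # pixel count times pixel area

for res in (10, 100, 1000):
    print(res, disk_area_by_pixels(res))          # approaches pi = 3.14159...
```

The error shrinks roughly in proportion to the pixel size, which is the sense in which the pixel-count area converges to the true area.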
This is not so simple with length: increasing the resolution will not reduce the relative error of the measurement. For example, the pixel staircase along a 45° line is longer than the line by a factor of √2 at any resolution. See Lengths of curves.
Second, we need to analyze the image in such a way that a single-pixel variation of the image is negligible. In fact, a single round of erosion or dilation, i.e., adding or removing a layer of pixels from the border of an object, will not dramatically change the area or perimeter of the object. Why? Because pixels are small.
The original image, the effect of dilation, the effect of erosion.
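To put numbers on this, here is a minimal sketch (using SciPy, not Pixcavator): one round of dilation or erosion changes the area of a 100×100 object by only a thin boundary layer, about 4%.

```python
import numpy as np
from scipy import ndimage

# A 100x100 square object inside a larger binary image.
obj = np.zeros((200, 200), dtype=bool)
obj[50:150, 50:150] = True

layer = np.ones((3, 3), dtype=bool)              # 8-connected structuring element
dilated = ndimage.binary_dilation(obj, layer)    # add one layer of pixels to the border
eroded  = ndimage.binary_erosion(obj, layer)     # remove one layer of pixels

print(obj.sum(), dilated.sum(), eroded.sum())    # 10000 10404 9604 -- about a 4% change
```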
This works fine for geometric measurements (see also Robustness of geometry) if the topology does not change. It’s not so easy for topology. The example on the right shows that adding the red pixel merges three objects and also creates a hole (white object).
We can then say that these topological features aren’t robust. In fact, the robustness can be measured in terms of how many dilations and erosions it takes to change the topology. For example (a sketch follows these questions),
- how many erosions does it take to split an object into two or more?
- how many dilations does it take to create a hole in an object?
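Here is a rough way to compute the first of these measures, a sketch using SciPy rather than Pixcavator (the dumbbell-shaped test object is made up):

```python
import numpy as np
from scipy import ndimage

def erosions_to_split(obj):
    """Number of erosions before the object splits into two or more pieces
    (or disappears); a crude robustness measure."""
    count = 0
    while True:
        obj = ndimage.binary_erosion(obj)
        count += 1
        pieces = ndimage.label(obj)[1]
        if pieces != 1:              # split into several pieces, or vanished
            return count, pieces

# A dumbbell: two 20x20 blocks joined by a thin 4-pixel-wide bridge.
dumbbell = np.zeros((40, 80), dtype=bool)
dumbbell[10:30, 5:25] = True
dumbbell[10:30, 55:75] = True
dumbbell[18:22, 25:55] = True

print(erosions_to_split(dumbbell))   # -> (2, 2): two erosions split it in two
```

The 4-pixel-wide bridge disappears after two erosions, so the single-object topology of this dumbbell is not very robust.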
Photoshop CS4 from Adobe Systems is a powerful image and photo editor but not a tool for scientific image analysis.
The software has a multitude of tools for image processing (and, of course, photo manipulation); there is no point in listing them here. The Extended version also has a few fun features, like auto-blending and content-aware scaling.
However, its image analysis capabilities are very limited. After searching for a while, the two items below were all I found:
Use selection tools to define and calculate distance, perimeter, area, and many other measurements. Record data points in a Measurement Log and then export the data, including histogram data, to a spreadsheet for further quantitative analysis.
Easily and accurately tally objects or features in scientific images with the Count tool, which eliminates the need to perform manual calculations or rely on visual assessments of changes from image to image. Save even more time by performing multiple counts in a single image. Use separate colors for each count and save your counts in the file.
Adobe Photoshop CS4 Extended is priced at $999.