
Approaches to image analysis


Start with Overview and Topological Features of Images.

The methods described below are implemented in ImageJ and other image analysis software.

1 Thresholding and binarization

Dark-light.jpg: a black object on a gray background next to a gray object on a white background.

Information is lost when one chooses a single threshold to binarize a gray scale image (see Thresholding), which is a common way of image segmentation. Even though the threshold can be chosen based on the shape of the gray scale histogram and other information, sometimes no single choice of threshold will reveal all the features. This is the case when one part of the picture is significantly darker than another, as is common in fingerprinting (below). A simpler but just as telling example is a black object on a gray background next to a gray object on a white background (right). No matter what threshold is chosen, only one of the objects is visible. A similar example with 255 objects of different levels of gray, each on a lighter background, shows that any method based on merging gray levels will fail to distinguish all of these objects.
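
To make the example concrete, here is a minimal sketch in Python with NumPy (the image and the threshold values are invented for illustration):

    import numpy as np

    # A synthetic version of the example: a black object on a gray
    # background (left half) next to a gray object on a white
    # background (right half); the intensities 0, 128, 255 are arbitrary.
    image = np.full((20, 40), 128, dtype=np.uint8)   # gray left half
    image[:, 20:] = 255                              # white right half
    image[5:15, 5:15] = 0                            # black object on gray
    image[5:15, 25:35] = 128                         # gray object on white

    low = image < 64    # only the black object is dark enough to survive
    high = image < 192  # the gray left half turns entirely "black", so the
                        # black object merges with its own background; only
                        # the gray object stands out against the white half
    # No single threshold separates both objects from their backgrounds.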

Below, a few clearly distinguishable ridges have disappeared because the corresponding areas have become all white or all black after thresholding. The example shows that no particular choice of threshold will preserve all the ridges: a higher threshold will keep the red area all black, while a lower threshold will keep the green area all white.

Binarization FP1.JPG: a fingerprint image binarized at different thresholds.

2 Watershed algorithm

Images: the original image; watershed segmentation (two examples); two examples of the output of Pixcavator.

Pixcavator's image analysis method may appear similar to the watershed algorithm. This is true only in the sense that both look for minima and maxima of the gray scale function. The difference comes from the main idea of topology: there are objects and holes in them, and there is nothing else. In the case of gray scale, this means that there are dark objects on a light background with light holes in them, or vice versa.
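
For comparison, a minimal sketch of classical watershed segmentation using scikit-image (the marker selection and the min_distance value are arbitrary choices made for illustration; this is not Pixcavator's method):

    import numpy as np
    from scipy import ndimage
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def watershed_labels(gray, min_distance=5):
        """Treat the gray scale image as a topographic surface and flood
        it from its local minima; each pixel is assigned to a basin."""
        gray = gray.astype(float)
        # local minima of gray = local maxima of the inverted image
        coords = peak_local_max(gray.max() - gray, min_distance=min_distance)
        markers = np.zeros(gray.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        return watershed(gray, markers)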

3 Analysis of topology of gray scale images with trees

The papers below take some similar approaches to the problem. Interestingly, these authors appear to be unaware of each other.

  • J. Andrew Bangham, J. R. Hidalgo, Richard Harvey, Gavin C. Cawley, The Segmentation of Images via Scale-Space Trees, British Machine Vision Conference, 1998.
  • P. Monasse and F. Guichard. Fast computation of a contrast invariant image representation. IEEE Transactions on Image Processing, 9(5):860–872, 2000.
  • S. Kundu, Peak-tree: a new approach to image simplification, Proc. SPIE Vision Geometry VIII, Vol. 3811, pp. 284-294, 1999.
  • Jin Feng and Huaxiang Lu, Peak Analysis of Grayscale Image: Algorithm and Application [1].
  • There are more.

The differences from our approach, however, are quite important:

  • They don't use homology, which makes their approach only partially applicable to 3D images.
  • Color images aren't discussed.
  • Object counting isn't addressed.

The results of image simplification are, however, very similar. For more, see Graph representation of images.
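
To illustrate the general idea behind such tree representations (this is not an implementation of any of the papers above), one can link the connected components of successive sub-level sets of the gray scale function by inclusion:

    import numpy as np
    from scipy import ndimage

    def sublevel_tree(gray, thresholds):
        """Inclusion tree of the connected components of the sub-level
        sets {gray <= t}, for an increasing sequence of thresholds."""
        edges, prev = [], None          # edges: (child, parent) pairs
        for t in sorted(thresholds):
            labels, n = ndimage.label(gray <= t)
            if prev is not None:
                prev_t, prev_labels, prev_n = prev
                for k in range(1, prev_n + 1):
                    # any pixel of the old component tells us which new,
                    # larger component contains it
                    i, j = np.argwhere(prev_labels == k)[0]
                    edges.append(((prev_t, k), (t, labels[i, j])))
            prev = (t, labels, n)
        return edges

Each edge records that a component present at a lower threshold is absorbed into a component at the next threshold.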

4 Computation of homology of binary images

There have been a few attempts in the academic literature to address homology computation in imaging, but the treatment tends to be cursory at best [19], [21]. More recent studies [18], [27] provide a number of useful algorithms for homology. In the context of digital topology, the issue is addressed in [16] and [22].

Existing software designed to compute Betti numbers and other topological characteristics tends to consist of academic computer programs that lack a user-friendly interface.

  • CHomP, Computational Homology Project [9], works with a command prompt;
  • PLEX [23] is a set of routines written for MATLAB that computes Betti numbers;
  • Alpha Shapes [5] runs only in Linux/Unix/Sun and only computes Betti numbers of 3D “point clouds”;
  • CGAL, Computational Geometry Algorithms Library [8], and Simplicial Homology for GAP [26] are code libraries rather than stand-alone programs.

Of these, only CHomP is applicable to digital images. None of these methods and programs is applicable to gray scale or color images or video sequences. Existing software tools that use topology are currently difficult to use and narrowly applicable.

For more see Homology software.
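
As a small illustration of what such software computes in the simplest case, here is a sketch of the Betti numbers of a 2D binary image, treating each foreground pixel as a closed square of a cubical complex (a toy computation, not the method of any of the packages above):

    import numpy as np
    from scipy import ndimage

    def betti_numbers_2d(binary):
        """b0 = number of connected components, b1 = number of holes,
        for the cubical complex built from the foreground pixels."""
        faces = 0
        verts, edges = set(), set()
        for i, j in zip(*np.nonzero(binary)):
            faces += 1
            verts.update({(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)})
            edges.update({('h', i, j), ('h', i + 1, j),    # top, bottom
                          ('v', i, j), ('v', i, j + 1)})    # left, right
        chi = len(verts) - len(edges) + faces                # Euler characteristic
        # components of the complex correspond to 8-connected pixel components
        b0 = ndimage.label(binary, structure=np.ones((3, 3)))[1]
        return b0, b0 - chi                                  # in 2D, b1 = b0 - chi

    ring = np.ones((3, 3), dtype=bool)
    ring[1, 1] = False
    print(betti_numbers_2d(ring))   # (1, 1): one component, one hole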

A topological approach to image analysis was taken by Djemel Ziou, Marie-Flavie Auclair-Fortier, and Madjid Allili in their US Patent Application No. 20050232511, "Image model based on n-pixels and defined in algebraic topology, and applications thereof" [2], and in several publications. The methods are quite advanced and the goals are far-reaching. One can only hope that this approach will have a wiki like ours one day.

5 Unrelated approaches

Here is a list of techniques that are unrelated to our methods, or even our goals, but use the same or related terminology, which may lead to confusion. These terms include the following.

Topology

It could mean different things to different people [3].

Level sets

The term may have nothing to do with image analysis [4]. If you have a shape that changes, with time or otherwise, it is convenient to represent these shapes as level sets of a single function. This seems especially useful in modeling, because it is much easier to parametrize a single function. In image analysis we are in the exact opposite situation: we already have a single function, the gray level. What we do is take its level sets (or sub-level sets; it does not matter in this context) and analyze them as binary images, shapes, or regions. In fact, one can think of them as a single shape that changes with time. We've made a full circle...
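
The "full circle" can be spelled out in a couple of lines (a toy sketch; the thresholds are arbitrary): the sub-level sets of the gray scale function form a nested family of binary images, which can be watched as a single shape growing "in time".

    import numpy as np

    def sublevel_movie(gray, thresholds):
        """The binary images {gray <= t} are nested: raising t only adds
        pixels, so the frames can be read as a shape evolving in time."""
        return [(t, gray <= t) for t in sorted(thresholds)]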


6 Discussion of some literature

The imaging industry historically has used ad hoc methods to deal with inherently topological issues.

Image segmentation is the extraction of objects from an image. A common technique finds the boundaries of the objects: edge detection. In the case of a gray scale image, the edges are found by locating the areas where the rate of change from black to white is high. To do this, a threshold level has to be set, which means that information is lost: some edges may be missed. Without an unbroken boundary, it is impossible to reconstruct the objects in order to conduct further analysis.
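
A minimal sketch of the kind of edge detection just described, Sobel gradient magnitude followed by a threshold (the threshold t is exactly the parameter whose choice loses information):

    import numpy as np
    from scipy import ndimage

    def edges_by_threshold(image, t):
        """Mark pixels where the rate of change of the gray level is high."""
        image = image.astype(float)
        gx = ndimage.sobel(image, axis=1)      # horizontal rate of change
        gy = ndimage.sobel(image, axis=0)      # vertical rate of change
        gradient = np.hypot(gx, gy)
        # edges weaker than t are lost, which may leave boundaries broken
        return gradient > t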

By trying to represent objects by means of their boundaries, current image analysis methods tend to segment gray scale images by finding the level curves of the gray scale. Examples of such methods are Contour Trees [7], connected operators [24], and Digital Morse Theory [10]. In our view, objects are sub-level sets and their boundaries are level curves, and the former are more robust with respect to noise: indeed, the loss of a single pixel may result in a broken boundary. Also, these methods don't apply to color images.

With respect to image enhancement of 3D images, the reduction of "noise" is generally accomplished by the removal of objects and voids. An example is the method of connected operators [24]. However, cutting the spurious little tunnels found when mapping the brain is a challenge, because a tunnel does not correspond to any object or void. An approach has been suggested [2], [3] in which the connected components of the complement of the object within its convex hull are used to represent its tunnels. However, this representation fails in more complex settings, such as porous materials. Without homology theory, the representation of tunnels in 3D will remain confused.

Even when the 1st Betti number is computed, the topological features often are not represented explicitly. This means that the size and location of tunnels cannot be determined, and noise can be confused with features. One example is the so-called Digital Morse Theory [10].

Commonly, all such procedures are global. This means that if only a part of an image has been degraded, the results of many procedures, such as the Fourier transform, may be dramatically different. In the proposed approach, a change in one part of the image will not generally affect the processing of the other parts.
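
A toy illustration of how global the Fourier transform is: changing a single pixel changes every Fourier coefficient.

    import numpy as np

    a = np.zeros((8, 8))
    b = a.copy()
    b[0, 0] = 1.0                     # degrade a single pixel
    changed = np.fft.fft2(a) != np.fft.fft2(b)
    print(np.count_nonzero(changed))  # 64: every coefficient is affected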

Current methods often use floating point arithmetic, which takes more memory, results in slower computations, and introduces round-off errors. The significance of these errors, as well as of the errors resulting from the approximation and discretization of continuous methods, is often ignored. These errors may produce fatal instabilities, especially in iterative procedures such as the active contours method. Proofs that these errors do not affect the outcome are either absent or valid only under restrictive conditions.
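
A tiny illustration of the round-off issue, unrelated to any particular imaging algorithm: repeatedly adding a floating point number accumulates error, while the corresponding integer computation is exact.

    total = 0.0
    for _ in range(10):
        total += 0.1                 # 0.1 is not exactly representable in binary
    print(total == 1.0)              # False: the sum is 0.9999999999999999

    print(sum(1 for _ in range(10)) == 10)   # True: integer arithmetic is exact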

All of the above methods are inapplicable to time-dependent images. The time-dependent Contour Tree [27] only records the Betti numbers and does not capture the features themselves. The time-varying Reeb graph [13] records the changes of the topology of the level curves, not of the sub-level sets; it detects changes in the topology of an image but does not explicitly capture the image's topological features.

7 References

1. N. Akkiraju, H. Edelsbrunner, M. Facello, P. Fu, E. P. Mucke, and C. Varela, Alpha shapes: definition and software. Proc. Internat. Comput. Geom. Software Workshop, ed. N. Amenta, Rpt. GCG 80, Geometry Center, Minneapolis, Minnesota, 1995.

2. Z. Aktouf, G. Bertrand, and L. Perroton. A 3D-hole closing algorithm. Lecture Notes in Computer Science, Vol. 1176, pp. 36-47. Springer Verlag, 1996.

3. C. Arcelli, G. Sanniti di Baja, and S. Svensson. Computing and analysing convex deficiencies to characterise 3D complex objects. Image and Vision Computing, 23(2):203-211, Feb. 2005.

4. C. Ballester, V. Caselles, and P. Monasse, The tree of shapes of an image, ESAIM: Control, Optimization and Calculus of Variations, January 2003, Vol. 9, pp. 1-18.

5. Biogeometry Project, Alpha Shapes, [5].

6. G. Bredon, Topology and Geometry. Springer Verlag, 1993.

7. H. Carr, J. Snoeyink, and U. Axen, Computing Contour Trees in All Dimensions, Computational Geometry, 2003.

8. CGAL, Computational Geometry Algorithms Library, [6].

9. CHomP, Computational Homology Project.

10. J. Cox, D. Karron, and N. Ferdous, Digital Morse Theory for Scalar Volume Data. Journal of Mathematical Imaging and Vision. Vol. 18, 2003, pp. 95-117.

11. R. Datta, J. Li, and J. Z. Wang, Content-based image retrieval: Approaches and trends of the new age. In Proceedings of the 7th International Workshop on Multimedia Information Retrieval, November 2005.

12. T. K. Dey and S. Guha, Computing Homology Groups of Simplicial Complexes in R3, Journal of the ACM, vol. 45 (1998), pp. 266-287.

13. H. Edelsbrunner, J. Harer, A. Mascarenhas, and V. Pascucci. Time-varying Reeb graphs for continuous space-time data. Proc. 20th Ann. Sympos. Comput. Geom., 2004, pp. 366-372.

14. H. Edelsbrunner, D. Letscher, and A. Zomorodian, Topological persistence and simplification. Discrete Comput. Geom. 28 (2002), pp. 511-533.

15. P. Geladi and H. Grahn, Multivariate Image Analysis, John Wiley & Sons, 1997.

16. R. González and P. Real, Towards digital cohomology, Lecture Notes in Computer Science, v. 2886, pp. 92-101, 2003.

17. V. Gudivada and V. Raghavan. Content-based image retrieval systems. IEEE Computer, 28(9):18 – 22, September 1995.

18. T. Kaczynski, K. Mischaikow, and M. Mrozek, Computational Homology, Appl. Math. Sci. Vol. 157, Springer Verlag, NY, 2004.

19. R. Klette and A. Rosenfeld, Digital Geometry: Geometric Methods for Digital Image Analysis, Morgan Kaufmann Series in Computer Graphics, 2004.

20. Kodak’s infoimaging page, [7].

21. G. Lohmann, Volumetric Image Analysis, John Wiley & Sons, 1998.

22. R. Malgouyres and M. More, On the Computational Complexity of Reachability in 2D Binary Images and Some Basic Problems of 2D Digital Topology, Theoretical Computer Science, vol. 283/1, no 1, pp 67-108, 2002.

23. PLEX at Topological Methods in Scientific Computing, Statistics and Computer Science.

24. P. Salembier and J. Serra, Flat zones filtering, connected operators, and filters by reconstruction, IEEE Transactions on Image Processing, vol. 4, n. 8, August, 1995, pp. 1153-1160.

25. P. Saveliev, Topological analysis of parametric images, preprint.

26. Simplicial Homology for GAP, [8].

27. B.-S. Sohn and C. Bajaj, Time-Varying Contour Topology, IEEE Transactions on Visualization and Computer Graphics, January/February 2006, pp. 14-25.

28. A. J. Zomorodian, Topology for Computing, Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2005.