In my last paper, I made a comment about the topology of binary images: “These issues have been studied over the last 100 years or so, and they are well understood.” It was pointed out to me that digital image analysis didn’t start until the 1960s, so how come?
Let me set the record straight.
The history is this. Algebraic topology was founded by Poincaré around 1900 (the title of his paper “Analysis Situs”, converted from Latin to Greek, turns into “topology”). There was no talk of binary images, obviously. What they studied was cell complexes: collections of cells attached to each other in an appropriate way. The cells were initially only triangular but later of any shape. It was also informally assumed that all topological theorems are independent of the particular cell decomposition or representation. This fact was formally proven roughly by the 1950s. By then all the basic issues had been settled and algebraic topology had become one of the central disciplines in mathematics. The first monographs were written in the 1930s (Alexandroff & Hopf) and the first (graduate) textbooks in the 1960s (Hilton & Wylie, Mac Lane, Spanier, and many more).
Undergraduate books are rare (the one I like most and use is Topology of Surfaces by Kinsey). Courses are even rarer. As a result, computer scientists (and even mathematicians) are often unfamiliar with the well-established ways of dealing with even the most elementary topological issues (and I mean really elementary: how many objects there are, which ones have holes or tunnels and how many, etc.).
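To make those “elementary issues” concrete, here is a minimal Python sketch (my own illustration, not from any of the books above) that counts the objects and holes of a small binary image. It treats each 1-pixel as a closed unit square of a cubical cell complex, in the spirit of the cell complexes described above, and uses the classical fact that for a planar complex the Euler characteristic V − E + F equals (number of components) − (number of holes).

```python
from collections import deque

def components(img):
    """Count connected components of the 1-pixels via BFS flood fill.
    Uses 8-connectivity, since closed unit squares that touch even at a
    single corner form one connected piece of the complex."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                count += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    cr, cc = queue.popleft()
                    for dr, dc in steps:
                        nr, nc = cr + dr, cc + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and img[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
    return count

def euler_characteristic(img):
    """Chi = V - E + F of the cubical complex in which each 1-pixel is a
    closed unit square; vertices and edges shared by neighboring pixels
    are counted once (sets deduplicate them)."""
    verts, edges, faces = set(), set(), 0
    for r in range(len(img)):
        for c in range(len(img[0])):
            if img[r][c]:
                faces += 1
                for dr in (0, 1):
                    for dc in (0, 1):
                        verts.add((r + dr, c + dc))
                edges.add(((r, c), (r, c + 1)))          # top edge
                edges.add(((r + 1, c), (r + 1, c + 1)))  # bottom edge
                edges.add(((r, c), (r + 1, c)))          # left edge
                edges.add(((r, c + 1), (r + 1, c + 1)))  # right edge
    return len(verts) - len(edges) + faces

# A 3x3 ring of pixels: one object with one hole.
ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
b0 = components(ring)            # 1 object
holes = b0 - euler_characteristic(ring)  # 1 - 0 = 1 hole
```

The point of the sketch is exactly the one made above: the answers (one object, one hole) do not depend on which cell decomposition of the ring we happened to pick.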
Even though relevant papers pop up once in a while, the connection between image analysis and algebraic topology is not common knowledge among practitioners of computer vision and image analysis. I know this from personal experience…
The main reference on the subject is Computational Homology by Kaczynski, Mischaikow, and Mrozek. This is still very much a graduate text. Hopefully, our wiki is more accessible.