From Intelligent Perception
A pixel can be understood in several ways:
- A location within the image: two coordinates.
- A location and its value: 0 or 1 for binary images, 0-255 for grayscale, 3 numbers (e.g., red, green, blue) for color.
- A little square/tile (see Cell decomposition of images).
- A unit of length.
- A unit of area.
In 3D, it's a "voxel".
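The "location plus value" view is the one most image libraries take: a pixel is an index into an array together with the number(s) stored there. A minimal sketch in NumPy (the 4x4 image and its values are made up for illustration):

```python
import numpy as np

# A hypothetical 4x4 grayscale image: each pixel is a location plus a value.
gray = np.array([
    [  0,  50, 100, 150],
    [ 50, 100, 150, 200],
    [100, 150, 200, 250],
    [150, 200, 250, 255],
], dtype=np.uint8)

# A location within the image: two coordinates (row, column).
row, col = 1, 2
print(gray[row, col])      # the grayscale value at that location: 150

# Binary image: each pixel is 0 or 1 (threshold chosen arbitrarily here).
binary = (gray > 127).astype(np.uint8)
print(binary[row, col])    # 1, since 150 > 127

# Color image: each pixel carries 3 numbers (e.g., red, green, blue).
color = np.stack([gray, gray, gray], axis=-1)   # shape (4, 4, 3)
print(color[row, col])     # [150 150 150]
```

The same indexing extends to 3D: a voxel is three coordinates plus a value.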
One main thing to keep in mind while analyzing images is this simple principle: pixels are small.
This is important in two ways.
First, as the resolution increases, the analysis results should "converge" to the analysis results of the real scene depicted in the image, because the world is analog (another good principle).
Whatever "real" (or physical) object is depicted in the image, its area, computed as the sum of the areas of its pixels, will be as close as we like to its "true" area as the resolution increases (there is a more rigorous interpretation of this). See the pictures below.
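This convergence of area can be checked numerically. A sketch, assuming a disk of radius 0.25 rasterized inside a unit square at increasing resolutions (the function name and parameters are illustrative):

```python
import numpy as np

def disk_area_in_pixels(radius, resolution):
    """Rasterize a disk of the given radius, centered in the unit square,
    on a resolution x resolution grid; measure its area as
    (number of pixels inside the disk) * (area of one pixel)."""
    n = resolution
    xs = (np.arange(n) + 0.5) / n          # pixel centers
    X, Y = np.meshgrid(xs, xs)
    inside = (X - 0.5) ** 2 + (Y - 0.5) ** 2 <= radius ** 2
    pixel_area = (1.0 / n) ** 2
    return inside.sum() * pixel_area

true_area = np.pi * 0.25 ** 2
for n in (10, 100, 1000):
    measured = disk_area_in_pixels(0.25, n)
    print(n, measured, abs(measured - true_area))
```

The absolute error shrinks roughly like 1/resolution, since the uncertainty is confined to the layer of pixels straddling the boundary.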
This is not as simple with length. Indeed, increasing the resolution will not reduce the relative error of the measurement. See Lengths of curves.
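The failure for length is visible in the classic staircase example: approximate the diagonal of a unit square by a path along pixel edges, and the measured length is always 2, not sqrt(2), no matter how fine the grid. A sketch (the function is illustrative):

```python
import math

def staircase_length(n):
    """Length of the staircase of pixel edges that follows the diagonal
    from (0,0) to (1,1) on an n x n grid: n horizontal and n vertical
    steps, each of size 1/n."""
    return 2 * n * (1.0 / n)

true_length = math.sqrt(2)
for n in (10, 100, 1000):
    measured = staircase_length(n)
    print(n, measured, (measured - true_length) / true_length)
```

The relative error stays at about 41% at every resolution: refining the grid makes the steps smaller but does not change the total length of the staircase.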
Second, we need to analyze images in such a way that a single-pixel variation of the image is negligible. In fact, a single round of erosion or dilation, i.e., removing or adding a layer of pixels along the border of an object, will not dramatically change the area or the perimeter of the object. Why? Because pixels are small.
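This stability is easy to verify: one dilation adds roughly a one-pixel-thick layer along the boundary, so the area changes by about the perimeter, a small fraction of the area for a large object. A sketch, assuming a 3x3 square structuring element and a made-up disk image:

```python
import numpy as np

def dilate(img):
    """One round of dilation with a 3x3 square structuring element:
    a pixel becomes 1 if any pixel in its 3x3 neighborhood is 1."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= padded[1 + di : 1 + di + img.shape[0],
                          1 + dj : 1 + dj + img.shape[1]]
    return out

# A filled disk of radius 40 in a 100x100 binary image.
n = 100
Y, X = np.mgrid[0:n, 0:n]
disk = (((X - 50) ** 2 + (Y - 50) ** 2) <= 40 ** 2).astype(np.uint8)

area_before = disk.sum()
area_after = dilate(disk).sum()
# The added layer is about one pixel thick along the boundary, so the
# relative change in area is on the order of perimeter/area.
print(area_before, area_after, (area_after - area_before) / area_before)
```

For this disk the relative change is a few percent, and it shrinks further as the object grows relative to the pixel size.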
This works fine for geometric measurements (see also Robustness of geometry) as long as the topology does not change. It's not so easy for topology itself. The example on the right shows that adding the red pixel merges three objects and also creates a hole (a white object).
Then we can say that these topological features aren't robust. In fact, the robustness can be measured in terms of how many dilations and erosions it takes to change the topology. For example,
- how many erosions does it take to split an object into two or more?
- how many dilations does it take to create a hole in an object?
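The first of these counts can be sketched directly: erode repeatedly and stop when the number of connected components changes. The shape below (a "dumbbell" of two squares joined by a thin bar), the 3x3 structuring element, and the use of 4-connectivity for counting components are all assumptions made for illustration:

```python
import numpy as np
from collections import deque

def erode(img):
    """One round of erosion with a 3x3 square structuring element:
    a pixel survives only if its entire 3x3 neighborhood is 1."""
    padded = np.pad(img, 1)
    out = np.ones_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= padded[1 + di : 1 + di + img.shape[0],
                          1 + dj : 1 + dj + img.shape[1]]
    return out

def count_components(img):
    """Count 4-connected components of 1-pixels by flood fill."""
    seen = np.zeros(img.shape, dtype=bool)
    count = 0
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] and not seen[i, j]:
                count += 1
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                                and img[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return count

# A dumbbell: two 20x20 squares joined by a bar 3 pixels thick.
img = np.zeros((30, 60), dtype=np.uint8)
img[5:25, 5:25] = 1     # left square
img[5:25, 35:55] = 1    # right square
img[13:16, 25:35] = 1   # the bar

erosions = 0
while count_components(img) == 1:
    img = erode(img)
    erosions += 1
print(erosions)  # how many erosions it takes to split the object in two
```

The thicker the bar, the more erosions the split takes, so the count measures how robust the "one object" feature is. Counting dilations until a hole appears can be sketched the same way, with a hole-counting function in place of the component count.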