Comparing Pixel Values
In the first step, the algorithm scans the query image, takes the value of every foreground pixel (background pixels may also be included), and compares it with the pixel value at the corresponding location in the database image. If the same value is found at the same position in the database image, it is counted as a hit; otherwise it is counted as a miss. The difference between the hit and miss counts is then divided by the total number of foreground pixels in the query image. The result of this division is a number that indicates how Similar the Query image is to the Database image (SQD).
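A minimal sketch of this first step is given below, assuming the images are equally sized NumPy arrays of pixel values in which a value of 0 marks background; the function name sqd and the background parameter are illustrative, not part of the original description.

```python
import numpy as np

def sqd(query: np.ndarray, database: np.ndarray, background: int = 0) -> float:
    """Similarity of the Query image to the Database image (SQD).

    Assumes both images are equally sized arrays in which `background`
    marks background pixels (an assumption for this sketch).
    """
    # Foreground pixels of the query image.
    fg = query != background
    n_fg = int(fg.sum())
    if n_fg == 0:
        return 0.0

    # A hit is a foreground pixel whose value matches the database image
    # at the same position; every other foreground pixel is a miss.
    hits = int((query[fg] == database[fg]).sum())
    misses = n_fg - hits

    # (hits - misses) divided by the number of foreground pixels.
    return (hits - misses) / n_fg
```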
In the second step, the database image is scanned and its foreground pixel values are compared against the query image in the same way as in the first step. This gives a result that indicates how Similar the Database image is to the Query image (SDQ). The average of SQD and SDQ, the Average Similarity Measure (ASM), is then used as the ranking measure for the retrieval process; a higher ASM value means higher similarity. However, given a query image, it can happen that for two different database images Img1 and Img2 the average similarity measure of Img1 (ASM1) is greater than that of Img2 (ASM2) even though Img2 is more similar to the query than Img1.
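Building on the sketch above, the second step simply swaps the roles of the two images, and the ASM is the mean of the two scores; the helper name asm is again illustrative.

```python
def asm(query: np.ndarray, database: np.ndarray) -> tuple[float, float, float]:
    """Return (SQD, SDQ, ASM) for a query/database image pair."""
    s_qd = sqd(query, database)   # query scanned against the database image
    s_dq = sqd(database, query)   # database image scanned against the query
    return s_qd, s_dq, (s_qd + s_dq) / 2.0
```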
To avoid this scenario, we discard all images with a negative SDQ or negative SQD value, no matter how large their ASM is. By doing so, we can avoid these false positives and improve the retrieval rate drastically.
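One possible way to combine the filtering rule with ASM-based ranking is sketched below; the function rank_database and the dictionary of named database images are assumptions made for illustration.

```python
def rank_database(query: np.ndarray,
                  database_images: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Rank database images by ASM, discarding any image whose
    SQD or SDQ is negative regardless of its ASM value."""
    ranked = []
    for name, img in database_images.items():
        s_qd, s_dq, avg = asm(query, img)
        if s_qd < 0 or s_dq < 0:
            continue  # discard likely false positives
        ranked.append((name, avg))
    # Higher ASM means higher similarity, so sort in descending order.
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked
```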