Recently, the Bag-of-Features (BoF) model has shown promising performance in object and generic image retrieval. The similarity between two images is typically measured by the distance between their BoF histograms. Due to the imperfections of local descriptors and quantization error, visually similar image patches can be wrongly quantized into different visual words, making this distance-based measure less accurate. To address this issue, this paper exploits the information of a latent class, which is formed by all the database images that share the same visual concept as the database image being compared to a given query. We then cast image similarity as the probability that the query and a database image belong to the same latent class. Considering that a class of images together can better depict a visual concept, the shift from image-to-image to image-to-class comparison is expected to yield a more robust similarity measure. Because the ground truth of the latent class is not accessible in image retrieval, we define a latent class prior in our probabilistic model and derive its marginal distribution. This gives rise to a novel and efficient image similarity measure. It can significantly improve retrieval performance without prolonging the retrieval process. Experimental study on multiple benchmark data sets demonstrates its advantages.
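The idea of scoring a query against a database image by marginalizing over latent classes can be illustrated with a minimal sketch. This is not the paper's actual derivation: the class word distributions, priors, and the multinomial class model below are all illustrative assumptions, chosen only to show the general form sim(q, d) = Σ_c p(c) p(q|c) p(d|c).

```python
import numpy as np

def log_class_likelihood(hist, word_probs):
    # log p(hist | class) under an assumed multinomial model over visual words
    return float(np.sum(hist * np.log(word_probs)))

def latent_class_similarity(q, d, class_word_probs, class_priors):
    # Marginalize over latent classes: sim(q, d) = sum_c p(c) p(q|c) p(d|c).
    # The true class is never observed; the prior stands in for it.
    scores = [
        prior * np.exp(log_class_likelihood(q, wp) + log_class_likelihood(d, wp))
        for wp, prior in zip(class_word_probs, class_priors)
    ]
    return float(np.sum(scores))

# Toy setup: 4 visual words, 2 hypothetical latent classes with uniform prior.
class_word_probs = [
    np.array([0.7, 0.1, 0.1, 0.1]),  # class 0 favors word 0
    np.array([0.1, 0.1, 0.1, 0.7]),  # class 1 favors word 3
]
class_priors = [0.5, 0.5]

query = np.array([5, 1, 1, 1])   # BoF histogram resembling class 0
db_similar = np.array([6, 1, 1, 0])
db_dissimilar = np.array([0, 1, 1, 6])

sim_pos = latent_class_similarity(query, db_similar, class_word_probs, class_priors)
sim_neg = latent_class_similarity(query, db_dissimilar, class_word_probs, class_priors)
```

Under these assumed parameters, the image that shares the query's dominant visual words receives a higher marginal score, even though no class labels were used, which is the intuition behind replacing image-to-image distance with an image-to-class probability.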