A Model Representation for Image Content (Report)
Annals of DAAAM & Proceedings 2010, Annual
1. INTRODUCTION

As the number of images on the web keeps increasing, new methods are needed for finding similar images, i.e. for image retrieval by content. Given a large collection of images, we want to find images using a query, where the query is itself an image: "images like the query image" means images that are similar to the query image. How to describe images so that similarities between them can be found is a new problem. An image is described by its low-level features: color, shape, texture, or a combination of these. The images in the queried collection and the query image are described in the same way (using the same low-level features). A metric distance is first applied to these descriptions; the similar images are then obtained by sorting the resulting distance values in ascending order.

Sometimes, however, it is hard to specify a query. For example, if we want to find images that contain sky, the query may be specified by a color distribution. But the sky is blue on a sunny day, orange at sunset, and grey on a cloudy day, so such a query is hard to express using the low-level feature color alone. To address this, images are also described by their semantic content using annotations. These annotations are a translation from image instances (objects in images) to keywords, based on machine learning techniques, as described in (Duygulu et al., 2001). The underlying difficulty is the semantic gap that must be bridged between the low-level features of image objects and their semantic content.
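To make the low-level retrieval step concrete, the following is a minimal sketch, assuming color histograms as the low-level feature and Euclidean distance as the metric; the function names (color_histogram, retrieve) and parameters are illustrative and not taken from the paper.

```python
# Sketch of low-level-feature retrieval: describe each image by a color
# histogram, apply a metric distance to the descriptions, and rank the
# collection by ascending distance to the query.
import numpy as np

def color_histogram(image, bins=8):
    """Describe an H x W x 3 uint8 image by a normalised RGB histogram."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    hist = hist.ravel()
    return hist / hist.sum()

def euclidean_distance(a, b):
    """Metric distance between two feature vectors (assumed choice of metric)."""
    return np.linalg.norm(a - b)

def retrieve(query_image, collection, top_k=5):
    """Return (index, distance) pairs for the top_k most similar images."""
    query_feat = color_histogram(query_image)
    distances = [
        (idx, euclidean_distance(query_feat, color_histogram(img)))
        for idx, img in enumerate(collection)
    ]
    distances.sort(key=lambda pair: pair[1])  # ascending: smallest distance = most similar
    return distances[:top_k]

if __name__ == "__main__":
    # Synthetic collection of random images, purely for illustration.
    rng = np.random.default_rng(0)
    collection = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(10)]
    query = collection[3].copy()  # query with a known image from the collection
    print(retrieve(query, collection))
```

Other histogram distances (e.g. L1 or histogram intersection) are commonly used in place of the Euclidean distance; the choice of metric is an assumption here, as the paper only requires that some metric distance be applied to the descriptions.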