A Model for SIFT Optimization Using FPAA-Driven Block Convolution
The Scale Invariant Feature Transform (SIFT) is a useful algorithm for describing local features in an image. This research demonstrates a model for optimizing SIFT using a Field Programmable Analog Array (FPAA). A method is presented which can theoretically reduce SIFT run time by nearly 50% by greatly limiting the image area that must be searched for SIFT features. Block convolution is a simplified, non-overlapping form of convolution computed on the FPAA. The block convolution between the input image and a specially designed kernel is used to identify potential regions of interest. The performance of several different types of kernels is compared, most of which were produced using an evolutionary search algorithm. A threshold is then applied to the filtered image produced by the FPAA, and regions which are unlikely to produce matches are masked. By removing the need to search areas of the image where a match is unlikely to be found, we obtain a more efficient implementation of SIFT that also demonstrates the usefulness of new FPAA technology.
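The pre-filtering step described above can be sketched in software as follows. This is a minimal illustration, not the thesis's FPAA implementation: the kernel and threshold here are placeholders (the actual kernels were evolved by a search algorithm, and the convolution runs in analog hardware). Non-overlapping block convolution applies the kernel once per disjoint kernel-sized block, and the thresholded responses become a mask telling SIFT which regions to skip.

```python
import numpy as np

def block_convolve(image, kernel):
    """Non-overlapping block convolution: the kernel is applied to each
    disjoint kernel-sized block (stride equals kernel size), producing
    one response value per block rather than per pixel."""
    kh, kw = kernel.shape
    h, w = image.shape
    # Trim the image so it divides evenly into kernel-sized blocks.
    h, w = (h // kh) * kh, (w // kw) * kw
    blocks = image[:h, :w].reshape(h // kh, kh, w // kw, kw)
    # Sum of element-wise products of each block with the kernel.
    return np.einsum('ibjc,bc->ij', blocks, kernel)

def region_mask(image, kernel, threshold):
    """Mask blocks whose filtered response falls below the threshold;
    SIFT then searches only the unmasked (True) regions."""
    response = block_convolve(image, kernel)
    block_mask = response >= threshold
    # Expand each per-block decision back to pixel resolution.
    kh, kw = kernel.shape
    return np.kron(block_mask, np.ones((kh, kw), dtype=bool))
```

Because each block is visited exactly once with no overlap, the filter costs one multiply-accumulate per pixel, which is what makes it cheap enough to serve as a pre-pass in front of the full SIFT search.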