A Robust Visual Method for Assessing the Relative Performance of Edge Detection Algorithms
Document Type
Article
Publication Date
January 1997
Keywords
Experimental comparison of algorithms, edge detector comparison, low-level processing, performance evaluation, analysis of variance, human rating
Digital Object Identifier (DOI)
https://doi.org/10.1109/34.643893
Abstract
A new method for evaluating edge detection algorithms is presented and applied to measure the relative performance of algorithms by Canny, Nalwa-Binford, Iverson-Zucker, Bergholm, and Rothwell. The basic measure of performance is a visual rating score that indicates the perceived quality of the edges for identifying an object. Evaluating edge detection algorithms with this performance measure requires collecting a set of gray-scale images, optimizing the input parameters for each algorithm, conducting visual evaluation experiments, and applying statistical analysis methods. The novel aspect of this work is the use of a visual task and real images of complex scenes in evaluating edge detectors. The method is appealing because, by definition, the results agree with visual evaluations of the edge images.
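To illustrate the kind of statistical analysis step the abstract describes (analysis of variance over human rating scores), the following is a minimal sketch, not the paper's actual analysis code. The detector names match the abstract, but the rating values, the number of images, and the use of scipy's one-way ANOVA are illustrative assumptions.

```python
# Hedged sketch: one-way ANOVA over hypothetical visual rating scores.
# The scores below are made up for illustration; in the paper's method,
# ratings would come from human observers judging edge images.
from scipy.stats import f_oneway

# One list of rating scores per edge detector (e.g., averaged over raters
# for several test images). Values are hypothetical.
ratings = {
    "Canny":          [5.8, 6.1, 5.4, 6.0],
    "Nalwa-Binford":  [5.2, 5.6, 5.0, 5.5],
    "Iverson-Zucker": [5.9, 5.7, 6.2, 5.8],
    "Bergholm":       [4.8, 5.1, 4.6, 5.0],
    "Rothwell":       [5.3, 5.5, 5.1, 5.4],
}

# Do mean rating scores differ significantly across detectors?
f_stat, p_value = f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Report the mean rating per detector for a quick relative comparison.
for name, scores in ratings.items():
    print(f"{name:15s} mean rating = {sum(scores) / len(scores):.2f}")
```

A real application of the method would also include the earlier steps the abstract lists (image collection, per-algorithm parameter optimization, and the rating experiments themselves), which are not sketched here.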
Was this content written or created while at USF?
Yes
Citation / Publisher Attribution
IEEE Transactions on Pattern Analysis and Machine Intelligence, v. 19, issue 12, p. 1338-1359
Scholar Commons Citation
Heath, M.; Sarkar, Sudeep; Sanocki, T.; and Bowyer, K., "A Robust Visual Method for Assessing the Relative Performance of Edge Detection Algorithms" (1997). Psychology Faculty Publications. 504.
https://digitalcommons.usf.edu/psy_facpub/504