Removing image defects in an undetectable manner has been studied for its many useful and varied applications. In many cases the desired result is ambiguous from the image data alone and must be guided by a user's knowledge of the intended outcome. This paper presents a framework for interactively incorporating user guidance into the filling-in process, using that input to fill damaged regions in an image more effectively. The framework comprises five main steps: first, the scratch or defect is detected; second, the edges outside the defect are detected; third, curves are fit to the detected edges; fourth, the structure is completed across the damaged region; and finally, texture synthesis constrained by the previously computed curves fills in the intensities of the damaged region. Scratch detection, structure completion, and texture synthesis are influenced or guided by user input when it is given. Results include removal of defects from images that contain structure, texture, or both. Users can complete images with ambiguous structure in multiple ways by gesturing the cursor in the direction of the desired structure completion.
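The five-step pipeline above can be sketched, in heavily simplified form, as a toy program. All function names here are illustrative assumptions, not the paper's actual algorithms: defect detection is reduced to finding sentinel-marked pixels, edge detection and curve fitting are elided, and structure completion plus texture synthesis are replaced by a crude neighbour-averaging fill.

```python
# Toy sketch of the fill-in pipeline. The HOLE sentinel, the function
# names, and the neighbour-averaging fill are illustrative stand-ins
# for the paper's detection, curve-fitting, and synthesis steps.

HOLE = -1  # sentinel marking defect pixels

def detect_defect(image):
    """Step 1 (stand-in): collect coordinates of marked defect pixels."""
    return {(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v == HOLE}

def fill_defect(image, hole):
    """Steps 4-5 (stand-in): iteratively replace each hole pixel with the
    mean of its already-known 4-neighbours, a crude proxy for
    structure-guided texture synthesis."""
    img = [row[:] for row in image]        # work on a copy
    remaining = set(hole)
    while remaining:
        progressed = False
        for (r, c) in sorted(remaining):
            vals = [img[rr][cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < len(img) and 0 <= cc < len(img[0])
                    and img[rr][cc] != HOLE]
            if vals:                        # fill once a known neighbour exists
                img[r][c] = sum(vals) // len(vals)
                remaining.discard((r, c))
                progressed = True
        if not progressed:                  # isolated hole: give up
            break
    return img

# A 3x3 grey patch with a single-pixel defect in the centre.
image = [[10, 10, 10],
         [10, HOLE, 10],
         [10, 10, 10]]
hole = detect_defect(image)
filled = fill_defect(image, hole)
```

In the actual framework the fill would additionally be constrained by the curves completed across the damaged region, so that structure, not just local intensity, is continued into the hole.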
(c) 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.