[talks] Aleksey Boyko: PreFPO on Wednesday, May 21st 12pm, Rm. 402
Nicki Gotsis
ngotsis at CS.Princeton.EDU
Wed May 14 14:34:43 EDT 2014
Aleksey Boyko will present his Pre FPO, which is scheduled for Wednesday, May 21st at noon in Rm. 402.
Members of his committee are: Thomas Funkhouser (advisor), Brian Kernighan (reader), Szymon Rusinkiewicz (reader), Adam Finkelstein (non-reader), and Jianxiong Xiao (non-reader).
Everyone is invited to attend his talk. The title and abstract are below.
Title:
"On tools and interfaces for efficient and accurate interactive annotation of static 3D point clouds"
Abstract:
Collecting massive 3D scans of real world environments has become a
common practice for many private companies and government agencies.
This data is accurate and rich enough to provide impressive
visualizations of these environments.
However, to truly tap into the potential that such a precise digital
depiction of the world offers, these scans need to be annotated.
Existing automatic methods report high accuracies for object
localization and segmentation, thus greatly reducing the complexity of
the task.
However, the central step of annotation, assigning correct labels to the
discovered objects, remains challenging for existing systems.
The goal of this work is to design an interface that facilitates the
process of labeling objects in large natural 3D scenes.
Prior efforts in this field span a variety of approaches, from purely
manual to fully automatic, achieving different levels of success.
Manual annotation systems produce near-perfect results at the cost of
enormous human effort.
Automatic methods aim to require less human input but achieve much
lower accuracy, which in turn demands more human interaction to correct.
Machine-aided tools attempt to balance these two extremes; however, the
level of effort required of the annotator is rarely considered, while
training a model that hopefully achieves higher accuracy still takes
center stage.
Since machines still have a long way to go to match the human ability
to understand the real world, and since humans are prone to fatigue and
frustration, a preferable approach is to make user effort a priority in
any annotation interface.
This dissertation assumes that a human annotator must confirm the labels
of all objects in order to ensure correctness, and it explicitly focuses
on the tools and interfaces that streamline and facilitate this process.
Taking advantage of the scene continuity of 3D scan data, this work
advances in two principal directions.
First, annotation of objects in groups is proposed to increase the
throughput of the information flow from the user to the machine.
Second, non-essential yet time-consuming tasks (e.g., scene
navigation, selection decisions) are delegated to the machine by
employing an active learning approach, streamlining the annotation
process and diminishing user fatigue and distraction.
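The active-learning direction above amounts to a simple query loop: the machine, not the user, decides which object to present next, so the user only confirms or corrects labels. A minimal sketch of such a loop is below; the toy confidence model, object names, and `oracle` callback are all illustrative assumptions, not the dissertation's actual system.

```python
def predict(model, obj):
    # Toy classifier: returns (label, confidence). A real system would
    # compute features of the 3D segment; here, confidence is simply
    # high for objects the model has already seen. (Illustrative only.)
    label = model.get(obj, "unknown")
    confidence = 0.9 if obj in model else 0.1
    return label, confidence

def active_annotation(objects, oracle):
    # Active-learning loop: repeatedly query the least-confident object
    # and ask the user (the oracle) to confirm or correct its label.
    model = {}                 # learned object -> label mapping
    unlabeled = list(objects)
    while unlabeled:
        # The machine picks the next object, so the user never has to
        # navigate the scene or decide what to label next.
        obj = min(unlabeled, key=lambda o: predict(model, o)[1])
        model[obj] = oracle(obj)   # user confirms/corrects the label
        unlabeled.remove(obj)
    return model

# Usage: a dictionary lookup stands in for the human annotator.
scene = ["chair1", "chair2", "lamp1"]
truth = {"chair1": "chair", "chair2": "chair", "lamp1": "lamp"}
labels = active_annotation(scene, oracle=truth.get)
```

In a real interface the oracle call corresponds to showing the queried object to the user and waiting for a confirmation click, which is where the fatigue savings come from.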
After evaluating these two directions, a third, hybrid approach is
proposed: a group active interface.
This method takes advantage of the human ability to understand entire
scenes and queries objects in groups that are easy to understand and
label together, thus further increasing the throughput of the annotation
process.
Empirical evaluation of this approach on pre-segmented object data
indicates an improvement by a factor of 1.7 in annotation time compared
to the other methods discussed, without loss in accuracy.