My publications, ordered by year.
See also Google Scholar for citations.
2017

Ruizhen Hu, Wenchao Li, Oliver van Kaick, Ariel Shamir, Hao Zhang, Hui Huang
Learning to Predict Part Mobility from a Single Static Snapshot
Trans. on Graphics (Proc. SIGGRAPH Asia), vol. 36, n. 6, pp. 227:1–227:13, 2017.
[PDF] [DOI] [Bib] [Project page]
We introduce a method for learning a model for the mobility of parts in 3D objects. Our method allows us not only to understand the dynamic functionalities of one or more parts in a 3D object, but also to apply the mobility functions to static 3D models. Specifically, the learned part mobility model can predict mobilities for parts of a 3D object given in the form of a single static snapshot reflecting the spatial configuration of the object parts in 3D space, and transfer the mobility from relevant units in the training data. The training data consists of a set of mobility units of different motion types. Each unit is composed of a pair of 3D object parts (one moving and one reference part), along with usage examples consisting of a few snapshots capturing different motion states of the unit. Taking advantage of a linearity characteristic exhibited by most part motions in everyday objects, and utilizing a set of part-relation descriptors, we define a mapping from static snapshots to dynamic units. This mapping employs a motion-dependent snapshot-to-unit distance obtained via metric learning. We show that our learning scheme leads to accurate motion prediction from single static snapshots and allows proper motion transfer. We also demonstrate other applications such as motion-driven object detection and motion hierarchy construction.

Ruizhen Hu, Wenchao Li, Oliver van Kaick, Hui Huang, Melinos Averkiou, Daniel Cohen-Or, Hao Zhang
Co-Locating Style-Defining Elements on 3D Shapes
Trans. on Graphics, vol. 36, n. 3, pp. 33:1–33:15, 2017.
[PDF] [DOI] [Bib] [Project page]
We introduce a method for co-locating style-defining elements over a set of 3D shapes. Our goal is to translate high-level style descriptions, such as "Ming" or "European" for furniture models, into explicit and localized regions over the geometric models that characterize each style. For each style, the set of style-defining elements is defined as the union of all the elements that are able to discriminate the style. Another property of the style-defining elements is that they are frequently occurring, reflecting shape characteristics that appear across multiple shapes of the same style. Given an input set of 3D shapes spanning multiple categories and styles, where the shapes are grouped according to their style labels, we perform a cross-category co-analysis of the shape set to learn and spatially locate a set of defining elements for each style. This is accomplished by first sampling a large number of candidate geometric elements, and then iteratively applying feature selection to the candidates, to extract style-discriminating elements until no additional elements can be found. Thus, for each style label, we obtain sets of discriminative elements that together form the superset of defining elements for the style. We demonstrate that the co-location of style-defining elements allows us to solve problems such as style classification, and enables a variety of applications such as style-revealing view selection, style-aware sampling, and style-driven modeling for 3D shapes.
2016

Noa Fish, Oliver van Kaick, Amit Bermano, Daniel Cohen-Or
Structure-oriented Networks of Shape Collections
Trans. on Graphics (Proc. SIGGRAPH Asia), vol. 35, n. 6, pp. 171:1–171:14, 2016.
[PDF] [DOI] [Bib] [Project page]
We introduce a co-analysis technique designed for correspondence inference within large shape collections. Such collections are naturally rich in variation, adding ambiguity to the notoriously difficult problem of correspondence computation. We leverage the robustness of correspondences between similar shapes to address the difficulties associated with this problem. In our approach, pairs of similar shapes are extracted from the collection, analyzed and matched in an efficient and reliable manner, culminating in the construction of a network of correspondences that connects the entire collection. The correspondence between any pair of shapes then amounts to a simple propagation along the minimax path between the two shapes in the network. At the heart of our approach is the introduction of a robust, structure-oriented shape matching method. Leveraging the idea of projective analysis, we partition 2D projections of a shape to obtain a set of 1D ordered regions, which are both simple and efficient to match. We lift the matched projections back to the 3D domain to obtain a pairwise shape correspondence. The emphasis given to structural compatibility is a central tool in estimating the reliability and completeness of a computed correspondence, uncovering any non-negligible semantic discrepancies that may exist between shapes. These detected differences are a deciding factor in the establishment of a network aiming to capture local similarities. We demonstrate that the combination of the presented observations into a co-analysis method allows us to establish reliable correspondences among shapes within large collections.

Ruizhen Hu, Oliver van Kaick, Bojian Wu, Hui Huang, Ariel Shamir, Hao Zhang
Learning How Objects Function via Co-Analysis of Interactions
Trans. on Graphics (Proc. SIGGRAPH), vol. 35, n. 4, pp. 47:1–47:13, 2016.
[PDF] [DOI] [Bib] [Project page]
We introduce a co-analysis method which learns a functionality model for an object category, e.g., strollers or backpacks. Like previous works on functionality, we analyze object-to-object interactions and intra-object properties and relations. Differently from previous works, our model goes beyond providing a functionality-oriented descriptor for a single object; it prototypes the functionality of a category of 3D objects by co-analyzing typical interactions involving objects from the category. Furthermore, our co-analysis localizes the studied properties to the specific locations, or surface patches, that support specific functionalities, and then integrates the patch-level properties into a category functionality model. Thus our model focuses on the how, via common interactions, and where, via patch localization, of functionality analysis.
Given a collection of 3D objects belonging to the same category, with each object provided within a scene context, our co-analysis yields a set of proto-patches, each of which is a patch prototype supporting a specific type of interaction, e.g., stroller handle held by hand. The learned category functionality model is composed of proto-patches, along with their pairwise relations, which together summarize the functional properties of all the patches that appear in the input object category. With the learned functionality models for various object categories serving as a knowledge base, we are able to form a functional understanding of an individual 3D object, without a scene context. With patch localization in the model, functionality-aware modeling, e.g., functional object enhancement and the creation of functional object hybrids, is made possible.
2015

Yanir Kleiman, Oliver van Kaick, Olga Sorkine-Hornung, Daniel Cohen-Or
SHED: Shape Edit Distance for Fine-grained Shape Similarity
Trans. on Graphics (Proc. SIGGRAPH Asia), vol. 34, n. 6, pp. 235:1–235:11, 2015.
[PDF] [DOI] [Bib]
Computing similarities or distances between 3D shapes is a crucial building block for numerous tasks, including shape retrieval, exploration and classification. Current state-of-the-art distance measures mostly consider the overall appearance of the shapes and are less sensitive to fine changes in shape structure or geometry. We present shape edit distance (SHED) that measures the amount of effort needed to transform one shape into the other, in terms of rearranging the parts of one shape to match the parts of the other shape, as well as possibly adding and removing parts. The shape edit distance takes into account both the similarity of the overall shape structure and the similarity of individual parts of the shapes. We show that SHED is favorable to state-of-the-art distance measures in a variety of applications and datasets, and is especially successful in scenarios where detecting fine details of the shapes is important, such as shape retrieval and exploration.

Ruizhen Hu, Chenyang Zhu, Oliver van Kaick, Ligang Liu, Ariel Shamir, Hao Zhang
Interaction Context (ICON): Towards a Geometric Functionality Descriptor
Trans. on Graphics (Proc. SIGGRAPH), vol. 34, n. 4, pp. 83:1–83:12, 2015.
[PDF] [DOI] [Bib] [Project page]
We introduce a contextual descriptor which aims to provide a geometric description of the functionality of a 3D object in the context of a given scene. Differently from previous works, we do not regard functionality as an abstract label or represent it implicitly through an agent. Our descriptor, called interaction context or ICON for short, explicitly represents the geometry of object-to-object interactions. Our approach to object functionality analysis is based on the key premise that functionality should mainly be derived from interactions between objects and not objects in isolation. Specifically, ICON collects geometric and structural features to encode interactions between a central object in a 3D scene and its surrounding objects. These interactions are then grouped based on feature similarity, leading to a hierarchical structure. By focusing on interactions and their organization, ICON is insensitive to the numbers of objects that appear in a scene, the specific disposition of objects around the central object, or the objects' fine-grained geometry. With a series of experiments, we demonstrate the potential of ICON in functionality-oriented shape processing, including shape retrieval (either directly or by complementing existing shape descriptors), segmentation, and synthesis.
2014

Noa Fish*, Melinos Averkiou*, Oliver van Kaick, Olga Sorkine-Hornung, Daniel Cohen-Or, Niloy J. Mitra
Meta-representation of Shape Families
Trans. on Graphics (Proc. SIGGRAPH), vol. 33, n. 4, pp. 34:1–34:11, 2014.
[PDF] [DOI] [Bib] [Project page]
We introduce a meta-representation that represents the essence of a family of shapes. The meta-representation learns the configurations of shape parts that are common across the family, and encapsulates this knowledge with a system of geometric distributions that encode relative arrangements of parts. Thus, instead of predefined priors, what characterizes a shape family is directly learned from the set of input shapes. The meta-representation is constructed from a set of co-segmented shapes with known correspondence. It can then be used in several applications where we seek to preserve the identity of the shapes as members of the family. We demonstrate applications of the meta-representation in exploration of shape repositories, where interesting shape configurations can be examined in the set; guided editing, where models can be edited while maintaining their familial traits; and coupled editing, where several shapes can be collectively deformed by directly manipulating the distributions in the meta-representation. We evaluate the efficacy of the proposed representation on a variety of shape collections.

Oliver van Kaick, Noa Fish, Yanir Kleiman, Shmuel Asafi, Daniel Cohen-Or
Shape Segmentation by Approximate Convexity Analysis
Trans. on Graphics (TOG), vol. 34, n. 1, pp. 4:1–4:11, 2014.
[PDF] [Bib] [Project page]
We present a shape segmentation method for complete and incomplete shapes. The key idea is to directly optimize the decomposition based on a characterization of the expected geometry of a part in a shape. Rather than setting the number of parts in advance, we search for the smallest number of parts that admit the geometric characterization of the parts. The segmentation is based on an intermediate-level analysis, where first the shape is decomposed into approximate convex components, which are then merged into consistent parts based on a non-local geometric signature. Our method is designed to handle incomplete shapes, represented by point clouds. We show segmentation results on shapes acquired by a range scanner, and an analysis of the robustness of our method to missing regions. Moreover, our method yields results that are comparable to state-of-the-art techniques evaluated on complete shapes.
2013

Oliver van Kaick, Kai Xu, Hao Zhang, Yanzhen Wang, Shuyang Sun, Ariel Shamir, Daniel Cohen-Or
Co-Hierarchical Analysis of Shape Structures
Trans. on Graphics (Proc. SIGGRAPH), vol. 32, n. 4, pp. 69:1–69:10, 2013.
[PDF] [DOI] [Bib] [Project page]
We introduce an unsupervised co-hierarchical analysis of a set of shapes, aimed at discovering their hierarchical part structures and revealing relations between geometrically dissimilar yet functionally equivalent shape parts across the set. The core problem is that of representative co-selection. For each shape in the set, one representative hierarchy (tree) is selected from among many possible interpretations of the hierarchical structure of the shape. Collectively, the selected tree representatives maximize the within-cluster structural similarity among them. We develop an iterative algorithm for representative co-selection. At each step, a novel cluster-and-select scheme is applied to a set of candidate trees for all the shapes. The tree-to-tree distance for clustering caters to structural shape analysis by focusing on spatial arrangement of shape parts, rather than their geometric details. The final set of representative trees are unified to form a structural co-hierarchy. We demonstrate co-hierarchical analysis on families of man-made shapes exhibiting high degrees of geometric and finer-scale structural variabilities.

Oliver van Kaick, Hao Zhang, Ghassan Hamarneh
Bilateral Maps for Partial Matching
Computer Graphics Forum, vol. 32, n. 6, pp. 189–200, 2013.
[PDF] [DOI] [Bib]
We introduce the bilateral map, a local shape descriptor whose region of interest is defined by two feature points. Compared to the classical descriptor definition using a single point, the bilateral approach exploits the use of a second point to place more constraints on the selection of the spatial context for feature analysis. This leads to a descriptor where the shape of the region of interest is anisotropic and adapts to the context of the two points, making it more refined for shape analysis, in particular, partial matching.
2012

Yunhai Wang, Shmulik Asafi, Oliver van Kaick, Hao Zhang, Daniel Cohen-Or, Baoquan Chen
Active Co-Analysis of a Set of Shapes
Trans. on Graphics (Proc. SIGGRAPH Asia), vol. 31, n. 6, pp. 165:1–165:10, 2012.
[PDF] [DOI] [Bib] [Project page] [Dataset]
Unsupervised co-analysis of a set of shapes is a difficult problem since the geometry of the shapes alone cannot always fully describe the semantics of the shape parts. In this paper, we propose a semi-supervised learning method where the user actively assists in the co-analysis by iteratively providing inputs that progressively constrain the system. We introduce a novel constrained clustering method based on a spring system which embeds elements to better respect their inter-distances in feature space together with the user-given set of constraints. We also present an active learning method that suggests to the user where his input is likely to be the most effective in refining the results. We show that each single pair of constraints affects many relations across the set. Thus, the method requires only a sparse set of constraints to quickly converge toward a consistent and error-free semantic labeling of the set.
2011

Oana Sidi, Oliver van Kaick, Yanir Kleiman, Hao Zhang, Daniel Cohen-Or
Unsupervised Co-Segmentation of a Set of Shapes via Descriptor-Space Spectral Clustering
ACM Trans. on Graphics (Proc. SIGGRAPH Asia), vol. 30, n. 6, pp. 126:1–126:9, 2011.
[PDF] [DOI] [Bib] [Project page]
We introduce an algorithm for unsupervised co-segmentation of a set of shapes so as to reveal the semantic shape parts and establish their correspondence across the set. The input set may exhibit significant shape variability where the shapes do not admit proper spatial alignment and the corresponding parts in any pair of shapes may be geometrically dissimilar. Our algorithm can handle such challenging input sets since, first, we perform co-analysis in a descriptor space, where a combination of shape descriptors relates the parts independently of their pose, location, and cardinality. Secondly, we exploit a key enabling feature of the input set, namely, dissimilar parts may be "linked" through third parties present in the set. The links are derived from the pairwise similarities between the parts' descriptors. To reveal such linkages which may manifest themselves as anisotropic and nonlinear structures in the descriptor space, we perform spectral clustering with the aid of diffusion maps. We show that with our approach, we are able to co-segment sets of shapes that possess significant variability, achieving results that are close to those of a supervised approach.

Oliver van Kaick, Andrea Tagliasacchi, Oana Sidi, Hao Zhang, Daniel Cohen-Or, Lior Wolf, Ghassan Hamarneh
Prior Knowledge for Part Correspondence
Computer Graphics Forum (Proc. Eurographics), vol. 30, n. 2, pp. 553–562, 2011.
[PDF] [DOI] [Bib] [Project page]
Classical approaches to shape correspondence base their computation purely on the properties, in particular geometric similarity, of the shapes in question. Their performance still falls far short of that of humans in challenging cases where corresponding shape parts may differ significantly in geometry or even topology. We stipulate that in these cases, shape correspondence by humans involves recognition of the shape parts where prior knowledge on the parts would play a more dominant role than geometric similarity. We introduce an approach to part correspondence which incorporates prior knowledge imparted by a training set of pre-segmented, labeled models and combines the knowledge with content-driven analysis based on geometric similarity between the matched shapes. First, the prior knowledge is learned from the training set in the form of per-label classifiers. Next, given two query shapes to be matched, we apply the classifiers to assign a probabilistic label to each shape face. Finally, by means of a joint labeling scheme, the probabilistic labels are used synergistically with pairwise assignments derived from geometric similarity to provide the resulting part correspondence. We show that the incorporation of knowledge is especially effective in dealing with shapes exhibiting large intra-class variations. We also show that combining knowledge and content analyses outperforms approaches guided by either attribute alone.

Oliver van Kaick
Matching dissimilar shapes
PhD Thesis, Simon Fraser University, Canada, 2011.
[External link to PDF]
In this thesis, we address the challenge of computing correspondences between dissimilar shapes. This implies that, although the shapes represent the same class of object, there can be major differences in the geometry, topology, and part composition of the shapes as a whole. Additionally, the dissimilarity can also appear in the form of a shape that possesses additional parts that are not present in the other shapes. We propose three approaches for handling such shape dissimilarity. The first two approaches incorporate additional knowledge that goes beyond a direct geometric comparison of the shapes. We show that these approaches allow us to compute correspondences for shapes that differ significantly in their geometry and topology, such as man-made shapes. In the third approach, we compute partial correspondences between shapes that have additional parts in relation to each other. To address this challenge, we propose a new type of shape descriptor, called the bilateral map, whose region of interest is defined by two points. The region of interest adapts to the context of the two points and facilitates the selection of the scale and shape of this region, making this descriptor more effective for partial matching. We demonstrate the advantages of the bilateral map for computing partial and full correspondences between pairs of shapes.
2010

Oliver van Kaick, Hao Zhang, Ghassan Hamarneh, Daniel Cohen-Or
A Survey on Shape Correspondence
Eurographics State-of-the-Art report, pp. 61–82, 2010.
Extended version published in Computer Graphics Forum, vol. 30, n. 6, pp. 1681–1707, 2011.
[PDF] [DOI] [Bib] [Project page]
We review methods that are designed to compute correspondences between geometric shapes represented by triangle meshes, contours, or point sets. This survey is motivated in part by some recent developments in space-time registration, where one seeks to correspond non-rigid and time-varying surfaces, and semantic shape analysis, which underlines a recent trend to incorporate shape understanding into the analysis pipeline. Establishing a meaningful shape correspondence is often difficult since it generally requires an understanding of the structure of the shapes at both the local and global levels, and sometimes the functionality of the shape parts as well. Despite its inherent complexity, shape correspondence is a recurrent problem and an essential component in numerous geometry processing applications. In this survey, we discuss the different forms of the correspondence problem and review the main solution methods, aided by several classification criteria which can help objectively compare the solutions. We conclude the survey by discussing open problems and future perspectives.

Oliver van Kaick, Aaron Ward, Ghassan Hamarneh, Mark Schweitzer, Hao Zhang
Learning Fourier Descriptors for Computer-Aided Diagnosis of the Supraspinatus
Academic Radiology, vol. 17, n. 8, pp. 1040–1049, 2010.
[PDF] [DOI] [Bib]
Supraspinatus muscle disorders are frequent and debilitating, resulting in pain and a limited range of shoulder motion. The gold standard for diagnosis involves an invasive surgical procedure. As part of a proposed clinical workflow for non-invasive computer-aided diagnosis (CAD) of the condition of the supraspinatus, we present a method to classify 3D shapes of the muscle into relevant pathology groups, based on magnetic resonance (MR) images. We compute the Fourier coefficients of 2D contours lying on parallel imaging planes and integrate the corresponding frequency components across all contours. To classify the shapes, we learn the Fourier coefficients that best distinguish the different classes. We show that our method leads to significant improvement when compared to previous work. Moreover, we confirm that analyzing the 3D shape of the muscle has potential as a form of diagnosis reinforcement to assess the condition of the supraspinatus.

Hao Zhang, Oliver van Kaick, Ramsay Dyer
Spectral Mesh Processing
Computer Graphics Forum, vol. 29, n. 6, pp. 1865–1894, September 2010. (Revised and extended version of Eurographics 2007 STAR report.)
[PDF] [DOI] [Bib]
Spectral methods for mesh processing and analysis rely on the eigenvalues, eigenvectors, or eigenspace projections derived from appropriately defined mesh operators to carry out desired tasks. Early work in this area can be traced back to the seminal paper by Taubin in 1995, where spectral analysis of mesh geometry based on a combinatorial Laplacian aids our understanding of the low-pass filtering approach to mesh smoothing. Over the past fifteen years, the list of applications in the area of geometry processing which utilize the eigenstructures of a variety of mesh operators in different manners has been growing steadily. Many works presented so far draw parallels from developments in fields such as graph theory, computer vision, machine learning, graph drawing, numerical linear algebra, and high-performance computing. This paper aims to provide a comprehensive survey on the spectral approach, focusing on its power and versatility in solving geometry processing problems and attempting to bridge the gap between relevant research in computer graphics and other fields. Necessary theoretical background is provided. Existing works covered are classified according to different criteria: the operators or eigenstructures employed, application domains, or the dimensionality of the spectral embeddings used. Despite much empirical success, there still remain many open questions pertaining to the spectral approach. These are discussed as we conclude the survey and provide our perspective on possible future research.
2008

H. Zhang, A. Sheffer, D. Cohen-Or, Q. Zhou, O. van Kaick, A. Tagliasacchi
Deformation-driven shape correspondence
Computer Graphics Forum (Special Issue of Symposium on Geometry Processing (SGP 2008)), vol. 27, n. 5, pp. 1431–1439, Copenhagen, Denmark, July 2–4, 2008.
[PDF] [DOI] [Bib] [Project page]
Non-rigid 3D shape correspondence is a fundamental and difficult problem. Most applications which require a correspondence rely on manually selected markers. Without user assistance, the performance of existing automatic correspondence methods depends strongly on a good initial shape alignment or shape prior, and they generally do not tolerate large shape variations. We present an automatic feature correspondence algorithm capable of handling large, non-rigid shape variations, as well as partial matching. This is made possible by leveraging the power of state-of-the-art mesh deformation techniques and relying on a combinatorial tree traversal for correspondence search. The search is deformation-driven, prioritized by a self-distortion energy measured on meshes deformed according to a given correspondence. We demonstrate the ability of our approach to naturally match shapes which differ in pose, local scale, part decomposition, and geometric detail through numerous examples.
2007

Oliver van Kaick, Ghassan Hamarneh, Hao Zhang, Paul Wighton
Contour Correspondence via Ant Colony Optimization
Proc. 15th Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2007), Maui, Hawaii, United States, pp. 271–280, October 29–30 & November 1–2, 2007.
[PDF] [DOI] [Bib]
We formulate contour correspondence as a Quadratic Assignment Problem (QAP), incorporating proximity information. By maintaining the neighborhood relation between points this way, we show that better matching results are obtained in practice. We propose the first Ant Colony Optimization (ACO) algorithm specifically aimed at solving the QAP-based shape correspondence problem. Our ACO framework is flexible in the sense that it can handle general point correspondence, but also allows extensions, such as order preservation, for the more specialized contour matching problem.

Hao Zhang, Oliver van Kaick, Ramsay Dyer
Spectral Methods for Mesh Processing and Analysis
Eurographics 2007 State of the Art Report, Prague, Czech Republic, September 3–7, 2007.
[PDF] [Bib]
Spectral methods for mesh processing and analysis rely on the eigenvalues, eigenvectors, or eigenspace projections derived from appropriately defined mesh operators to carry out desired tasks. Early works in this area can be traced back to the seminal paper by Taubin in 1995, where spectral analysis of mesh geometry based on a combinatorial Laplacian aids our understanding of the low-pass filtering approach to mesh smoothing. Over the past ten years or so, the list of applications in the area of geometry processing which utilize the eigenstructures of a variety of mesh operators in different manners has been growing steadily. Many works presented so far draw parallels from developments in fields such as graph theory, computer vision, machine learning, graph drawing, numerical linear algebra, and high-performance computing. This state-of-the-art report aims to provide a comprehensive survey on the spectral approach, focusing on its power and versatility in solving geometry processing problems and attempting to bridge the gap between relevant research in computer graphics and other fields. Necessary theoretical background will be provided and existing works will be classified according to different criteria (the operators or eigenstructures employed, application domains, or the dimensionality of the spectral embeddings used) and described in adequate length. Finally, despite much empirical success, there still remain many open questions pertaining to the spectral approach, which we will discuss in the report as well.

Varun Jain, Hao Zhang, Oliver van Kaick
NonRigid Spectral Correspondence of Triangle Meshes
International Journal on Shape Modeling, vol. 13, n. 1, pp. 101124, June 2007.
[PDF] [DOI] [Bib]We present an algorithm for finding a meaningful vertextovertex correspondence between two triangle meshes, which is designed to handle general nonrigid transformations. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigidbody transformation. Invariance to shape bending is achieved by relying on approximate geodesic point proximities on a mesh to capture its shape.
2006

Oliver van Kaick, Greg Mori
Automatic Classification of Outdoor Images by Region Matching
Proceedings of the Third Canadian Conference on Computer and Robot Vision (CRV'2006), Québec City, QC, Canada, pp. 1–8, June 7–9, 2006.
[PDF] [DOI] [Bib]
This paper presents a novel method for image classification. It differs from previous approaches by computing image similarity based on region matching. Firstly, the images to be classified are segmented into regions or partitioned into regular blocks. Next, low-level features are extracted from each segment or block, and the similarity between two images is computed as the cost of a pairwise matching of regions according to their related features. Experiments are performed to verify that the proposed approach improves the quality of image classification. In addition, unsupervised clustering results are presented to verify the efficacy of this image similarity measure.

O. M. van Kaick, H. Pedrini
A Comparative Evaluation of Metrics for Fast Mesh Simplification
Computer Graphics Forum, vol. 25, n. 2, pp. 197–210, June 2006.
[DOI] [Bib]
Triangle mesh simplification is of great interest in a variety of knowledge domains, since it allows manipulation and visualization of large models, and it is the starting point for the design of many multiresolution representations. A crucial point in the structure of a simplification method is the definition of an appropriate metric for guiding the decimation process, with the purpose of generating low error approximations at different levels of resolution. This paper proposes two new alternative metrics for mesh simplification, with the aim of producing high quality results with reduced execution time and memory usage, and being simple to implement. A set of different established metrics is also described and a comparative evaluation of these metrics against the two new metrics is performed. A single implementation is used in the experiments, in order to enable the evaluation of these metrics independently from other simplification aspects. Results obtained from the simplification of a number of models, using the different metrics, are compared.

Rong Liu, Hao Zhang, Oliver van Kaick
Spectral Sequencing based on Graph Distance
Proceedings of Geometric Modeling and Processing (GMP'2006), Pittsburgh, PA, United States, pp. 630–636, July 26–28, 2006.
[PDF] [DOI] [Bib]

Rong Liu, Hao Zhang, Oliver van Kaick
An Investigation into Spectral Sequencing based on Graph Distance
Technical Report TR-2006-08, School of Computing Science, Simon Fraser University, May 2006.
[PDF] [Link] [Bib]
The construction of linear mesh layouts has found various applications, such as implicit mesh filtering and mesh streaming, where a variety of layout quality criteria, e.g., span and width, can be considered. While spectral sequencing, derived from the Fiedler vector, is one of the best-known heuristics for minimizing width, it does not perform as well as the Cuthill-McKee (CM) scheme in terms of span. In this paper, we treat optimal mesh layout generation as a problem of preserving graph distances and propose to use the subdominant eigenvector of a kernel (affinity) matrix for sequencing. Despite the non-sparsity of the affinity operators we use, the layouts can be computed efficiently for large meshes through subsampling and eigenvector extrapolation. Our experiments show that the new sequences obtained outperform those derived from the Fiedler vector, in terms of spans, and those obtained from CM, in terms of widths and other important quality criteria. Therefore, in applications where several such quality criteria can influence algorithm performance simultaneously, e.g., mesh streaming and implicit mesh filtering, the new mesh layouts could potentially provide a better trade-off.
2005

O. M. van Kaick, H. Pedrini
Assessment of Image Surface Approximation Accuracy given by Triangular Meshes.
International Conference on Computer Vision and Graphics (ICCVG'2004), Warsaw, Poland, September 22-24, 2004.
Published in the Kluwer book series Computational Imaging and Vision, vol. 32, 2005.
[External link to PDF] [DOI]
This project investigates the use of triangular meshes for approximating digital images, allowing a substantial reduction in the cost of storing, manipulating, and rendering surfaces. As an alternative to regular grid models, in which a set of sampled points representing measures of intensity or elevation is stored at regular intervals, the proposed method constructs a set of non-overlapping contiguous triangular faces that adaptively approximate the data while preserving relevant features. The data points need not lie in any particular pattern, and the density may vary over space.
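The adaptive insertion strategy can be illustrated in one dimension: repeatedly insert the sample with the largest approximation error until the piecewise-linear approximation is within tolerance (a simplified 1D analogue of the general idea; the actual method works on 2D triangulations):

```python
import numpy as np

def greedy_refine(samples, tol):
    """1D analogue of adaptive mesh refinement: approximate `samples` with a
    piecewise-linear function over a growing set of knots, repeatedly
    inserting the worst-approximated sample until the error drops below tol.
    Returns the sorted knot indices."""
    samples = np.asarray(samples, dtype=float)
    x = np.arange(len(samples))
    knots = [0, len(samples) - 1]          # start with the two endpoints
    while True:
        ks = sorted(knots)
        approx = np.interp(x, ks, samples[ks])
        err = np.abs(approx - samples)
        worst = int(err.argmax())
        if err[worst] <= tol:
            return ks
        knots.append(worst)                # refine where the error is largest
```

The same greedy principle, applied to a Delaunay triangulation of image samples, yields the feature-preserving adaptive meshes described in the abstract.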

O. M. van Kaick, H. Pedrini
Métricas para simplificação de malhas triangulares (Metrics for triangle mesh simplification).
Master's Thesis, Federal University of Paraná, Curitiba-PR, Brazil, 2005.
[External link to PDF]
Triangle mesh simplification is of great interest in a variety of knowledge domains, since it allows the manipulation and visualization of large models and is the starting point for the design of many multiresolution representations. A crucial point in the structure of a simplification method is the definition of an appropriate metric for guiding the decimation process, with the purpose of generating low-error approximations at different levels of resolution. This thesis proposes two new alternative metrics for mesh simplification, with the aim of producing high-quality results with reduced execution time and memory usage, while being simple to implement. A set of established metrics is also described, and a comparative evaluation of these metrics against the two new metrics is performed. A single implementation is used in the experiments, in order to evaluate the metrics independently from other simplification aspects. Results obtained from the simplification of a number of models, using the different metrics, are compared.
2004

O.M. van Kaick, M.V.G. da Silva, H. Pedrini
Efficient Generation of Triangle Strips from Triangulated Meshes.
Journal of WSCG, vol. 12, n. 1-3, pp. 475-481, February 2004 (ISSN: 1213-6972).
[To PDF]
The development of methods for storing, manipulating, and rendering large volumes of data efficiently is a crucial task in several scientific applications, such as medical image analysis, remote sensing, computer vision, and computer-aided design. Unless data reduction or compression methods are used, extremely large data sets cannot be analyzed or visualized in real time. Polygonal surfaces, typically defined by a set of triangles, are one of the most widely used representations for geometric models. The aim of this project was to create an efficient algorithm for compressing triangulated models through the construction of triangle strips. Experimental results show that these strips are significantly better than those generated by the leading triangle strip algorithms.
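The core of any stripification algorithm is chaining edge-adjacent triangles so that each new triangle reuses the previous one's vertices. A toy greedy version of strip growing (a simplified baseline for illustration, not the paper's algorithm, which uses smarter heuristics and tracks vertex order):

```python
from collections import defaultdict

def greedy_strips(triangles):
    """Greedily chain edge-adjacent triangles into strips.
    `triangles` is a list of vertex-index triples; returns a list of strips,
    each a list of triangle indices."""
    edge_to_tris = defaultdict(list)
    for i, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (a, c)):
            edge_to_tris[frozenset(e)].append(i)

    used, strips = set(), []
    for start in range(len(triangles)):
        if start in used:
            continue
        strip, current = [start], start
        used.add(start)
        extended = True
        while extended:                      # keep growing across shared edges
            extended = False
            a, b, c = triangles[current]
            for e in ((a, b), (b, c), (a, c)):
                nxt = [j for j in edge_to_tris[frozenset(e)] if j not in used]
                if nxt:
                    strip.append(nxt[0])
                    used.add(nxt[0])
                    current = nxt[0]
                    extended = True
                    break
        strips.append(strip)
    return strips
```

Each strip of k triangles can then be emitted with k + 2 vertices instead of 3k, which is the source of the compression the abstract refers to.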
2003

O.M. van Kaick, H. Pedrini
Análise Comparativa de Métodos de Triangulação para Aproximação de Imagens Digitais (Comparative analysis of methods for approximation of digital images with triangle meshes).
Proceedings of the I Workshop de Trabalhos de Iniciação Científica em Computação Gráfica e Processamento de Imagens (WICCGPI'2003), São Carlos-SP, Brazil, pp. 18-25, October 12-15, 2003.
[PDF]
O.M. van Kaick, H. Pedrini
Estudo Comparativo de Métodos de Compressão de Modelos de Terrenos Digitais através de Superfícies Triangulares (Comparative study of methods for compression of digital terrains through triangle meshes).
Proceedings of the III Colóquio Brasileiro de Ciências Geodésicas (CBCG'2003), Curitiba-PR, Brazil, pp. 1-15, May 6-9, 2003.
[PDF]
This project investigates the use of triangular meshes for approximating digital images, allowing a substantial reduction in the cost of storing, manipulating, and rendering surfaces. As an alternative to regular grid models, in which a set of sampled points representing measures of intensity or elevation is stored at regular intervals, the proposed method constructs a set of non-overlapping contiguous triangular faces that adaptively approximate the data while preserving relevant features. The data points need not lie in any particular pattern, and the density may vary over space.

W.R. Schwartz, O.M. van Kaick, M.V.G. da Silva, H. Pedrini
Reconhecimento em Tempo Real de Agentes Autônomos em Futebol de Robôs (Real-time recognition of autonomous agents in robot soccer).
Proceedings of the VI Simpósio Brasileiro de Automação Inteligente (SBAI'2003), Bauru-SP, Brazil, pp. 94-99, September 14-17, 2003.
[PDF]
Robot soccer has been adopted as a standard problem to promote the development of new techniques in the field of Artificial Intelligence. The domain of robot soccer is highly dynamic and complex, consisting of multiple agents that are capable of performing individual and cooperative actions to solve a task. This work presents the implementation of a computer vision module for a robot-soccer team. A flexible and efficient algorithm is proposed for real-time identification and tracking of objects in the scene.
2002

M.V.G. da Silva, O.M. van Kaick, H. Pedrini
Fast Mesh Rendering Through Efficient Triangle Strip Generation.
Journal of WSCG, vol. 10, n. 1, pp. 127-134, February 2002 (ISSN: 1213-6972).
[To PDF]
The development of methods for storing, manipulating, and rendering large volumes of data efficiently is a crucial task in several scientific applications, such as medical image analysis, remote sensing, computer vision, and computer-aided design. Unless data reduction or compression methods are used, extremely large data sets cannot be analyzed or visualized in real time. Polygonal surfaces, typically defined by a set of triangles, are one of the most widely used representations for geometric models. The aim of this project was to create an efficient algorithm for compressing triangulated models through the construction of triangle strips. Experimental results show that these strips are significantly better than those generated by the leading triangle strip algorithms.

O.M. van Kaick, M.V.G. da Silva, W.R. Schwartz, H. Pedrini
Fitting Smooth Surfaces to Scattered 3D Data Using Piecewise Quadratic Approximation.
Proceedings of the IEEE International Conference on Image Processing (ICIP'2002), Rochester, United States, pp. 493-496, September 22-25, 2002.
[To PDF]

O.M. van Kaick, W.R. Schwartz, M.V.G. da Silva, H. Pedrini
Representação de Superfícies de Terrenos através de Aproximações Quadráticas e Cúbicas (Representation of terrain surfaces with quadratic and cubic approximations).
Proceedings of the Simpósio Brasileiro de Geomática (SBG'2002), Presidente Prudente-SP, Brazil, pp. 510-518, July 9-13, 2002.
[PDF]
The approximation of surfaces to scattered data is an important problem encountered in a variety of scientific applications, such as reverse engineering, computer vision, computer graphics, and terrain modeling. Modeling certain regions as piecewise linear surfaces (C0-continuous surfaces) may require a large number of triangles, whereas a curved surface can provide a more accurate and compact model of the true surface. Smooth surfaces can also produce superior results for rendering purposes, reducing certain perceptual problems such as the appearance of Mach bands along element boundaries. This project proposes an automatic method for constructing smooth surfaces defined as a network of curved triangular patches. The method starts with a coarse mesh approximating the surface through triangular elements covering the boundary of the domain, then iteratively adds new points from the data set until a specified error tolerance is achieved. Once this initial triangulation has been generated, smooth surfaces are constructed over the triangular mesh. The resulting surface is represented by piecewise polynomial patches possessing C1 continuity.
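Per patch, the quadratic building block reduces to a small linear least-squares problem. A minimal sketch of a single-patch fit (illustration only; it ignores the C1 stitching across patch boundaries that the method performs):

```python
import numpy as np

def fit_quadratic_patch(pts):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to scattered (x, y, z) samples given as an (n, 3) array.
    Returns the coefficients (a, b, c, d, e, f)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix with the six bivariate quadratic monomials
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs
```

With at least six samples in general position the system is well determined; in a full implementation, neighboring patches additionally share derivative constraints to achieve the C1 continuity mentioned above.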

M.V.G. da Silva, O.M. van Kaick, W.R. Schwartz, H. Pedrini
Extração de Redes de Drenagem a partir de Modelos Digitais de Terreno (Extraction of drainage networks from digital terrain models).
Proceedings of the Simpósio Brasileiro de Geomática (SBG'2002), Presidente Prudente-SP, Brazil, pp. 452-457, July 9-13, 2002.
[PDF]
Drainage networks provide important information for several applications, such as the determination of flood and erosion areas, watershed delineation, sediment transport, and geomorphology. The calculation of water flow is, in general, dependent on the terrain model used to represent the surface. Typically, terrain data are represented through a regular grid of points, a triangular irregular mesh, or a map of contours. The implementation of a drainage network extraction algorithm is affected by the occurrence of errors in the elevation data, introduced by the digitization techniques used to generate the data. This project aims to develop an automatic method for extracting the drainage network from digital elevation models, offering a good balance between the capacity to process large volumes of data and time efficiency.
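Flow calculation on a gridded DEM is commonly based on steepest-descent (D8) flow directions, a standard building block of drainage extraction. A small sketch of that idea (an illustration of the common technique, not necessarily the exact method of the paper):

```python
import numpy as np

# The eight neighbor offsets of the D8 scheme and their grid distances
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
DIST = [2**0.5, 1.0, 2**0.5, 1.0, 1.0, 2**0.5, 1.0, 2**0.5]

def d8_flow(dem):
    """For each interior cell of a DEM, the index in OFFSETS of the
    steepest-descent neighbor, or -1 for pits and flats."""
    h, w = dem.shape
    flow = -np.ones((h, w), dtype=int)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            best, best_slope = -1, 0.0
            for k, (dr, dc) in enumerate(OFFSETS):
                # Downhill slope toward this neighbor (distance-weighted)
                slope = (dem[r, c] - dem[r + dr, c + dc]) / DIST[k]
                if slope > best_slope:
                    best, best_slope = k, slope
            flow[r, c] = best
    return flow
```

Accumulating flow along these directions yields the upstream contributing area per cell; thresholding that accumulation is the usual way to delineate the drainage network, with pit filling needed beforehand to handle the elevation errors the abstract mentions.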

M.V.G. da Silva, O.M. van Kaick, O.R.P. Bellon, L. Silva
A Hough Transform based Method to Improve Edge Map from Range Images.
Proceedings of the 4th International Conference on Computer Vision, Pattern Recognition and Image Processing (CVPRIP'2002), Durham, North Carolina, United States, pp. 720-723, March 2002.
This work proposes an efficient and robust method, based on the Hough Transform, for the refinement of edge maps from range images. The method is fast and preserves the shape of the objects. It is composed of two stages: (1) the detection of straight line segments guided by the Hough Transform; and (2) the search, among the detected segments, for those that best describe the object edges.
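The first stage, Hough voting for straight lines, can be sketched as follows: each edge point votes for every line through it in the (theta, rho) parameterization rho = x cos(theta) + y sin(theta), and peaks in the accumulator correspond to detected lines (a pedagogical sketch; the resolutions are illustrative):

```python
import numpy as np

def hough_peak(points, n_theta=180, rho_res=1.0, rho_max=50.0):
    """Vote edge points into a (theta, rho) accumulator and return the
    (theta, rho) of the cell with the most votes, i.e. the strongest line."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        for ti, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            ri = int(round((rho + rho_max) / rho_res))  # shift rho to be nonnegative
            if 0 <= ri < n_rho:
                acc[ti, ri] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], ri * rho_res - rho_max
```

The refinement stage then keeps only the detected segments that agree with the initial edge map, discarding spurious responses while preserving object shape.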
2001

O.M. van Kaick, M.V.G. da Silva, L. Silva, O.R.P. Bellon
An Efficient Approach to Improve Edge Detection from Range Images based on Hough Transform.
Proceedings of Image and Vision Computing New Zealand 2001, Dunedin, New Zealand, pp. 219-224, November 26-28, 2001.
This work proposes an efficient and robust method, based on the Hough Transform, for the refinement of edge maps from range images. The method is fast and preserves the shape of the objects. It is composed of two stages: (1) the detection of straight line segments guided by the Hough Transform; and (2) the search, among the detected segments, for those that best describe the object edges.

O.M. van Kaick, W.R. Schwartz, M.V.G. da Silva, H. Pedrini
Identificação e Rastreamento em Tempo Real de Múltiplos Agentes Autônomos. Sistema de visão computacional para futebol de robôs (Identification and realtime tracking of multiple autonomous agents. Computer vision system for robot soccer).
Proceedings of the X Seminário de Computação (SEMINCO'2001), Blumenau-SC, Brazil, pp. 59-70, September 24-27, 2001.
[PDF]
Robot soccer has been adopted as a standard problem to promote the development of new techniques in the field of Artificial Intelligence. The domain of robot soccer is highly dynamic and complex, consisting of multiple agents that are capable of performing individual and cooperative actions to solve a task. This work presents the implementation of a computer vision module for a robot-soccer team. A flexible and efficient algorithm is proposed for real-time identification and tracking of objects in the scene.