The course is concerned with approximate geometric methods for the analysis of large data sets represented by point clouds.
Data is collected in order to draw conclusions from it: to discover relations, to extrapolate into the future, and so on. More often than not, data comes as a set or sequence of points describing objects, with each coordinate quantifying some feature. On a computer such data is just a sequence of 0s and 1s; to analyze and "understand" it we need supporting tools. One way is to visualize the data. For example, a data set representing a number of people by their respective heights and weights can be drawn as a point set in the plane, and this drawing may reveal a correlation that can be approximated by a linear function.
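As an illustration (with made-up numbers, not data from the course), fitting such a linear function to a small height/weight point set is a one-line least-squares computation:

```python
import numpy as np

# Hypothetical sample: each row is one person as a point (height cm, weight kg).
data = np.array([[160.0, 55.0], [170.0, 68.0], [175.0, 74.0],
                 [182.0, 85.0], [190.0, 92.0]])
heights, weights = data[:, 0], data[:, 1]

# Least-squares line weight ~ a * height + b approximating the correlation.
a, b = np.polyfit(heights, weights, deg=1)
print(a, b)
```

For this toy sample the slope a is positive, making the correlation visible even without the drawing.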
For a wide range of applications (brain research, robotics, statistics, bioinformatics, character and speech recognition, computer graphics, etc.) this approach is too simplistic, for several reasons. First, the size of the data may be huge (in the millions and billions, sometimes so huge that we cannot even store it). Second, objects may have many features, giving rise to point sets of dimension in the hundreds, while simple visualization methods already tend to fail in dimension 4. (Many features may in fact be redundant, but finding out which ones is part of the endeavor.)
Many of the problems that arise appear too hard to be solved both exactly and efficiently. The course discusses several approximate methods for the analysis of large high-dimensional data sets that have been developed in recent years in response to the issues indicated above. While we have applications in mind for the questions we address, we emphasize theoretical aspects in the solutions.
Methods we cover are random sampling, grid structures, core-sets, well-separated pair decomposition, and low distortion low-dimensional embeddings. Applications we address include shape and dimension analysis, nearest neighbor search, and clustering.
Examples of specific questions arising in these applications are the following: given a point in d-space, what is its nearest neighbor in the point cloud? What is the closest pair in the point cloud? What is the "best" grouping of the points in the cloud into k groups? Which subset of the point set of size k provides the "best" description of the point cloud? What is the dimensionality of the point cloud, and what does dimensionality even mean here? Can the point cloud be embedded into a lower-dimensional space while preserving many of its characteristics?
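The first two questions have obvious exact baselines against which the approximate methods of the course can be measured; a brute-force sketch (in Python, for concreteness):

```python
import numpy as np

def nearest_neighbor(q, points):
    """Index of the point in the cloud closest to the query q; O(n d) time."""
    d2 = ((points - q) ** 2).sum(axis=1)
    return int(d2.argmin())

def closest_pair(points):
    """Indices (i, j) of the closest pair in the cloud; O(n^2 d) time."""
    best, pair = np.inf, (0, 1)
    for i in range(len(points) - 1):
        d2 = ((points[i + 1:] - points[i]) ** 2).sum(axis=1)
        j = int(d2.argmin())
        if d2[j] < best:
            best, pair = d2[j], (i, i + 1 + j)
    return pair

cloud = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 1.0], [5.0, 5.0]])
print(nearest_neighbor(np.array([9.0, 0.2]), cloud))  # -> 1
print(closest_pair(cloud))                            # -> (1, 2)
```

For huge n and large d these quadratic (or even linear) exact scans become too slow, which is precisely what motivates the approximate techniques above.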
In every lecture we provide you with an exercise sheet. You should solve it in written form and return your solutions at the beginning of the subsequent lecture. Solving the exercises in teams is not allowed. Your solutions will be graded; if you reach at least 80% of all possible points, you will get the grade 6.0.
There will be an oral exam of 15 minutes at the end of the course. Your final grade is composed of 50% of the grade for the exam and 50% of the grade for the exercises. Passing grades in both parts are required.
Lectures / Lecture Notes / Problems

Lecture 1, March 20: Introduction to geometry. Notes: Chapter 1. Problems: [PDF]
Lecture 2, March 27: Low distortion embeddings. Notes: Bibliography (Chapters 1, 2). Problems: [PDF]
Lecture 3, April 3: Low distortion embeddings. Problems: [PDF]
Lecture 4, April 10: SDP and maximum cut. Problems: [PDF] [solution]
Lecture 5, April 17: Lower bounds for the Hamming cube. Problems: [PDF]
Lecture 6, April 24: Low distortion embeddings. Problems: [PDF]
Lecture 7, May 8: Epsilon nets and VC dimension (1). Notes: Chapter 3. Problems: [PDF]
Lecture 8, May 15: Epsilon nets and VC dimension (2). Problems: [PDF]
Lecture 9, May 22: Epsilon nets and VC dimension (3). Problems: [PDF]
Lecture 10, May 29: Bounding volumes: smallest enclosing balls. Notes: Chapter 4. Problems: [PDF]
Lecture 11, June 5: Smallest enclosing cuboids, directional width. Notes: Chapters 5-6. Problems: [PDF]
Lecture 12, June 12: Directional width (2). Problems: ---
Lecture 13, June 19: Directional width (3)
For semidefinite programming, see Chapter 7.
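The low distortion embeddings treated in Lectures 2, 3 and 6 include random projections in the spirit of Johnson-Lindenstrauss: projecting n points onto a random k-dimensional subspace with k = O(eps^-2 log n) preserves all pairwise distances up to a factor 1 +/- eps with high probability. A minimal numerical sketch (a Gaussian projection, not necessarily the exact construction used in the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 200  # n points in d dimensions, target dimension k

points = rng.standard_normal((n, d))
# Gaussian projection, scaled so squared distances are preserved in expectation.
proj = rng.standard_normal((d, k)) / np.sqrt(k)
embedded = points @ proj

# Distortion of one pairwise distance: close to 1 with high probability.
ratio = (np.linalg.norm(embedded[0] - embedded[1])
         / np.linalg.norm(points[0] - points[1]))
print(ratio)
```

Note that the projection is oblivious: it is drawn without looking at the points, yet the concentration of measure keeps every pairwise distance nearly intact.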
References from the Lecture Notes
M. Badoiu and K. L. Clarkson. Optimal core-sets for balls.
G. Barequet and S. Har-Peled. Efficiently approximating the minimum-volume bounding box of a point set in three dimensions. J. of Algorithms 38(1), pages 91-109, January 2001.
K. Fischer and B. Gärtner. The smallest enclosing ball of balls: combinatorial structure and algorithms. Proceedings of the 19th Annual Symposium on Computational Geometry, pages 292-301, 2003.
A. Goel, P. Indyk and K. R. Varadarajan. Reductions among high dimensional proximity problems. Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 769-778, 2001.
B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, Massachusetts, 2002.
E. Welzl. Smallest Enclosing Disks (Balls and Ellipsoids). In H. Maurer, editor, New Results and New Trends in Computer Science, volume 555 of Lecture Notes in Computer Science, pages 359-370, 1991.
A fundamental problem in computer graphic rendering is modeling how
light is reflected from surfaces. A class of functions called
bi-directional reflectance distribution functions (BRDFs) characterize
the light transport at an idealized surface point.
Traditionally researchers in graphics tried to model BRDFs by taking
the physics of light transport for real materials into account.
Another approach is to interpolate between sample points measured from
the real BRDF of some material. Notice that this means one either has
to model or measure a BRDF for all materials involved in the scene that
is to be rendered. Matusik, Pfister, Brand and McMillan took the
sampling approach one step further. Firstly, they sample several real
BRDFs such that each measured BRDF is represented by a high dimensional
vector. The entirety of these vectors is a point cloud in some high
dimensional Euclidean space. Then Matusik et al. apply linear and
non-linear dimension reduction techniques to this point cloud in order
to obtain a low dimensional representation that allows them to
parameterize the space (manifold?) of all BRDFs. That is, they are able
to synthesize BRDFs that never were sampled by interpolation from
sampled BRDFs. In practice, they sampled BRDFs for more than 100
materials. For every material they had more than 20 million sample
points, which they compressed into a vector with roughly four million
components. The data model generated from these vectors, i.e. from the
point cloud consisting of roughly one hundred points in a space with
more than four million dimensions, was a 15 dimensional non-linear
manifold representation of the BRDF space. The data allowed them to
reconstruct the measured BRDFs very accurately (in fact, ten dimensions
would have been sufficient for that) and to synthesize sufficiently
many new BRDFs. The experimentally found dimension (around ten) of the
BRDF space is in good accordance with the number of free parameters in
BRDF models based on the physics of light transport.
Reference. W. Matusik, H. Pfister, M. Brand and L. McMillan. A Data-Driven Reflectance Model. In Proceedings of SIGGRAPH 2003.
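The linear stage of such a pipeline is classical principal component analysis. A sketch on synthetic stand-in data (not the Matusik et al. measurements): a cloud of points lying near a 10-dimensional subspace of a high-dimensional space, whose intrinsic dimension PCA recovers from the singular value spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, true_dim = 100, 500, 10  # stand-in for 100 high-dimensional BRDF vectors

# Synthetic cloud near a true_dim-dimensional subspace, plus small noise.
basis = rng.standard_normal((true_dim, d))
cloud = (rng.standard_normal((n, true_dim)) @ basis
         + 0.01 * rng.standard_normal((n, d)))

# PCA via the SVD of the centered data matrix.
centered = cloud - cloud.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

# Almost all variance sits in the first true_dim components ...
print(explained[:true_dim].sum())
# ... which yield low-dimensional coordinates for every point.
coords = centered @ vt[:true_dim].T
```

For a genuinely curved manifold of BRDFs, the non-linear techniques applied by Matusik et al. take over where this linear picture ends.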
In visual speech recognition pioneered by Bregler and Omohundro a video
of the speaker is used as a cue in addition to the traditional acoustic
cues. To this end, a polygon with a fixed number of nodes is fitted
to the lips of the speaker in each frame of the video. Every node of
the polygon has two coordinates, which gives rise to a vector whose
number of components is twice the number of nodes. These vectors form
a point cloud that samples the so-called "lip space", which has to be
learned. The learned manifold
is then used for tracking and extracting the lips, for interpolating
between frames in an image sequence and for providing features for
speech recognition. In their experiments Bregler and Omohundro fitted
a polygon with 40 nodes to the lips, i.e., every polygon was
represented by an 80-dimensional vector. It turned out that the point
cloud sampled from the lip space (manifold?) could be described by a
non-linear five-dimensional model. The cues derived from this model
significantly improved the performance of acoustic speech recognizers
in degraded conditions, and the model was also tested in a purely
visual lip reader.
Reference. C. Bregler and S. M. Omohundro. Nonlinear manifold learning for visual speech recognition. In Proceedings of ICCV 1995.
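The representation step here is simply flattening: a fitted contour with 40 nodes becomes one 80-dimensional point, one point per video frame (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical fitted lip contours: 100 video frames, 40 (x, y) nodes each.
frames = rng.random((100, 40, 2))

# Flatten each frame's polygon into one 80-dimensional vector:
# the video becomes a point cloud of 100 points in "lip space".
lip_cloud = frames.reshape(100, -1)
print(lip_cloud.shape)  # -> (100, 80)
```

The five-dimensional model of Bregler and Omohundro is then a non-linear description of where this cloud actually lives inside the 80 dimensions.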
A gene sequence can be decomposed into codons, which are triplets of
the bases (A, C, G, U). Some of the 64 different triplets encode amino
acids, from which proteins are synthesized, and some serve other
purposes. The genetic code associates a set of sibling codons with the
same amino acid, and some codons occur more frequently than others in
gene sequences. Biased codon usage seems to be a property of highly
expressed genes, which tend to use only a limited number of codons.
Such a bias was for example observed in fast growing prokaryotes and
eukaryotes. With every gene in a genome one can associate a
64-dimensional vector of relative codon usage. A genome that consists
of n genes is then represented by a point cloud in 64-dimensional space.
Biologists, when given a genome which exhibits a codon bias, are
interested in obtaining reference sets of genes that characterize the
bias. Carbone, Zinovyev and Kepes devised a simple algorithm to find
such reference sets from the point cloud representing a genome.
Reference. A. Carbone, A. Zinovyev and F. Kepes. Codon adaptation index as a measure of dominating codon bias. In Bioinformatics 19(16).
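The 64-dimensional relative codon usage vector of a single gene can be computed directly from its sequence; a small sketch (RNA alphabet, toy sequence):

```python
from collections import Counter
from itertools import product

BASES = "ACGU"
CODONS = ["".join(t) for t in product(BASES, repeat=3)]  # the 64 codons

def codon_usage(gene: str) -> list[float]:
    """Relative frequency of each of the 64 codons in one gene sequence."""
    codons = [gene[i:i + 3] for i in range(0, len(gene) - 2, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return [counts[c] / total for c in CODONS]

# Toy gene: AUG GCU GCU GAA UAA -- GCU occurs in 2 of its 5 codons.
usage = codon_usage("AUGGCUGCUGAAUAA")
print(usage[CODONS.index("GCU")])  # -> 0.4
```

Applying this to every gene of a genome yields exactly the point cloud in 64-dimensional space described above.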
The brain is composed of large numbers of individual cells (neurons)
that communicate among themselves to solve tasks such as parsing
images or speech, or generating accurate motor movements.
Understanding how populations of cells work together is one of the
paramount problems in neuroscience. Using imaging methods, the
activity pattern of large populations of cells can be measured on
millisecond time scales, giving rise to a stream of 512 x 512 images.
Every image can be considered as a point in (512 x 512)-dimensional
space. Thus the stream gives rise to a point cloud. Kenet
et al. discovered that the recorded images of activity generated in the
cortex, without visual stimulation, resemble images of activity evoked
by oriented patterns. This observation can be quantified by analyzing the
corresponding point clouds using dimension reduction techniques.
Reference. T. Kenet, D. Bibitchkov, M. Tsodyks, A. Grinvald and A. Arieli. Spontaneously emerging cortical representations of visual attributes. In Nature 425, 2003.
(See also: Point Clouds in Imaging Science In SIAM News 37(7), 2004.)
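The resemblance Kenet et al. report can be quantified, in the simplest case, as the correlation of two frames viewed as points in (512 x 512)-dimensional space; a sketch on synthetic activity maps (not the recorded data):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-ins: an evoked orientation map and a spontaneous frame
# that partially resembles it, both 512 x 512 images.
evoked = rng.standard_normal((512, 512))
spontaneous = 0.6 * evoked + 0.8 * rng.standard_normal((512, 512))

# Treat each image as one point in 512*512-dimensional space and measure
# similarity by the correlation coefficient of the two flattened points.
r = np.corrcoef(evoked.ravel(), spontaneous.ravel())[0, 1]
print(r)  # close to 0.6 by construction
```

Dimension reduction applied to the whole stream of such points then reveals how the spontaneous activity is organized around the evoked patterns.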