Location: IFW A32
Integrating Object-Oriented Design and Formal Verification
Bernhard Beckert, Univ. Karlsruhe
The importance of software verification is increasing as more and more
safety- and security-critical applications emerge. Currently, however,
tools and methods for software verification do not adequately support
the programming languages used in practice. In my talk, I give an
overview of the KeY project, which aims to overcome this problem.
The central idea of KeY is to enhance a commercial CASE tool with
functionality for formal specification and deductive verification. The
ultimate goal of the project is to facilitate and promote the use of
formal methods as an integral part of real-world software development.
An important part of the KeY project is the development of a Dynamic
Logic calculus for the programming language Java Card. I describe the
main ideas and principles of this calculus, present some of its rules,
and demonstrate how it is used to reason about properties of Java Card
programs and verify them.
Approximation Algorithms for
Asymmetric Traveling Salesperson Problems
Markus Bläser, Medizinische Univ. Lübeck
The Traveling Salesperson Problem (TSP) is a well-known
and fundamental problem in computer science with many
practical applications. Since it (or at least most of its
variants) is NP-hard, much effort has been spent on designing
good approximation algorithms for this problem.
In our talk, we will mainly discuss the following two variants.
The first variant is the asymmetric TSP with distances one
and two. This problem is of particular interest,
since despite its simplicity, it is already APX-complete.
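As a side note not taken from the talk: with distances restricted to {1, 2}, any Hamiltonian tour on n cities costs between n and 2n, while the optimum costs at least n, so even an arbitrary tour is a 2-approximation; the interesting work lies in doing substantially better. A minimal sketch of this observation:

```python
def tour_cost(dist, tour):
    """Total cost of the Hamiltonian tour, closing the cycle at the end."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def trivial_two_approx(dist):
    """With all off-diagonal distances in {1, 2}, the identity tour
    costs at most 2n, while any tour (hence also OPT) costs at least n,
    so its cost is at most 2 * OPT."""
    n = len(dist)
    tour = list(range(n))
    cost = tour_cost(dist, tour)
    assert n <= cost <= 2 * n  # hence cost <= 2 * OPT, since OPT >= n
    return tour, cost
```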
The second variant is the asymmetric maximum TSP. Here our goal
is to find a maximum weight (i.e., longest) Hamiltonian tour
instead of a minimum weight (i.e., shortest) Hamiltonian tour.
Algorithms for this problem frequently occur as black boxes
in algorithms for the shortest common superstring problem.
The latter problem arises in data compression
and DNA sequence assembly.
Finally, we will briefly discuss relaxations of the above
two problems, namely the problem of computing k-restricted cycle covers.
Algorithmic Problems in Point Pattern Matching
Peter Braß, Freie Univ. Berlin
Geometric pattern matching is an important subject that,
depending on the model of patterns and geometric objects,
can lead to quite diverse problems. In this talk I will
discuss some algorithmic aspects of the point pattern
model, where the objects and patterns are finite point sets.
This model is especially suited for the use of methods
of computational geometry. The general question "Does pattern A
occur in the background B?" then becomes, e.g., "Does the set
B contain a subset congruent to A?". I will discuss this
and related problems, and present results on several exact,
approximate, and preprocessing variants.
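To make the question concrete, here is a brute-force sketch (my illustration, far slower than the talk's methods) that decides whether a planar point set B contains a subset congruent to A. It anchors the first two points of A on every point pair of B at the right distance, tries the rotation and the reflection that realize that anchoring, and checks whether all of A lands in B; it assumes the first two points of A are distinct:

```python
import itertools

def occurs(A, B, eps=1e-9):
    """Does B contain a subset congruent to A?  Points are complex
    numbers; congruence allows rotation, translation, and reflection.
    Assumes the first two points of A are distinct."""
    if len(A) < 2:
        return len(A) == 0 or len(B) >= 1
    a1, a2 = A[0], A[1]
    d = a2 - a1
    for b1, b2 in itertools.permutations(B, 2):
        if abs(abs(b2 - b1) - abs(d)) > eps:
            continue  # the anchor pair must have the right distance
        # The direct (rotation) and mirrored (reflection) rigid motions
        # sending a1 -> b1 and a2 -> b2:
        maps = (lambda z: b1 + (b2 - b1) / d * (z - a1),
                lambda z: b1 + (b2 - b1) / d.conjugate() * (z - a1).conjugate())
        for f in maps:
            if all(any(abs(f(a) - b) <= eps for b in B) for a in A):
                return True
    return False
```

The running time is roughly O(|B|^3 |A|); the talk's point is that computational geometry can do much better.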
Efficient Property Testers and Abstract Combinatorial Programming
Artur Czumaj, New Jersey Inst. of Technology, Newark
A property tester is an algorithm that aims at determining whether a
given object O satisfies a predefined property P or O is far from any
object having property P. The power of this approach is that since the
goal is to give only an "approximate" answer to the decision problem,
many properties can be tested (by a randomized algorithm) in
sublinear or even constant time. For example, one can test in
constant time whether a given graph is bipartite or is "far" from any
bipartite graph.
In this talk we present some efficient property testers for various
combinatorial and geometric problems. We also discuss a novel
framework for analyzing property testing algorithms that is based on a
connection of property testing with a generalization of linear
programming which we call Abstract Combinatorial Programming.
Benjamin Doerr, Univ. Kiel
Can you distribute n points in the unit square such that each
rectangle contains a fraction of the points proportional to its
volume? Can you round a non-integral solution of a linear program to
an integer one without violating the constraints? Can you split a
collection of objects having some specified properties into two in
such a way that the properties are evenly split among the resulting
parts? In general, this is not possible. Moreover, discrepancy theory
teaches us that usually one is even far from reaching these
objectives. Nevertheless, for a number of problems interesting
algorithmic solutions exist.
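The LP-rounding question above has a classical partial answer via randomized rounding; the sketch below is the generic textbook technique, not the talk's modified variant: round each coordinate independently to 1 with probability equal to its fractional value, so every linear constraint stays close to its fractional value with high probability.

```python
import random

def randomized_rounding(x, seed=0):
    """Round a fractional vector x in [0,1]^n to a 0/1 vector by
    setting y_i = 1 with probability x_i.  Then E[a . y] = a . x for
    every linear constraint a, and Chernoff bounds show that a . y is
    tightly concentrated around a . x."""
    rng = random.Random(seed)
    return [1 if rng.random() < xi else 0 for xi in x]
```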
In this talk we study a discrepancy problem arising in digital image
processing. We use a modification of the well-known randomized
rounding technique to obtain significant progress over previous results.
Rapid Stochastic Gradient Descent
Nicol N. Schraudolph, ETH Zürich
As Machine Learning tackles increasingly complex real-world phenomena,
there is a growing need for scalable algorithms that can fit large,
nonlinear, differentiable models to vast amounts of noisy, often
non-stationary data. Conventional optimisation algorithms, however,
are inadequate for this purpose: simple gradient descent is unacceptably
slow to converge, conjugate gradient and truncated quasi-Newton methods
do not tolerate noise, and the cost of an extended Kalman filter rises
quadratically with the size of the system.
We are developing new ways to accelerate stochastic gradient descent
by using second-order information obtained through the efficient
computation of curvature matrix-vector products. This cheap curvature
information can be used directly for diagonal adaptive conditioning of
the system, or iteratively to build up a stochastic approximation of
second-order gradient steps. The resulting algorithms approach the
rapid convergence of second-order methods at only linear cost per
iteration, and can thus be used to efficiently optimise extremely
large nonlinear systems.
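One cheap way to obtain such curvature matrix-vector products is to difference two gradient evaluations (a generic finite-difference sketch for illustration, not necessarily the exact technique of the talk; the product can also be computed exactly with Pearlmutter's R-operator):

```python
def hessian_vector_product(grad, w, v, r=1e-6):
    """Approximate H(w) @ v via the finite difference
        H v ~ (grad(w + r v) - grad(w)) / r,
    i.e. two gradient evaluations -- cost linear in the number of
    parameters, with no explicit Hessian ever formed."""
    g0 = grad(w)
    g1 = grad([wi + r * vi for wi, vi in zip(w, v)])
    return [(b - a) / r for a, b in zip(g0, g1)]
```

For a quadratic objective the gradient is linear, so the approximation is exact up to floating-point error.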
Berthold Vöcking, Max-Planck-Inst. Informatik, Saarbrücken
Multiple-choice allocation algorithms have been studied intensively
over the last decade. These algorithms have several applications in
the areas of load balancing, routing, resource allocation and
hashing. The underlying idea is simple and can be explained best in
the balls-and-bins model: Instead of assigning balls (jobs, requests,
or keys) simply at random to bins (machines, servers, or positions in
a hash table), choose first a small set of bins at random, inspect
these bins, and place the ball into one of the bins containing the
smallest number of balls among them.
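The process just described is easy to simulate; the sketch below (an illustration with hypothetical parameter choices, not code from the talk) contrasts d = 1, plain random allocation with maximum load Theta(log n / log log n) w.h.p., against the d = 2 multiple-choice rule, whose maximum load drops to log log n / log 2 + O(1) w.h.p.:

```python
import random

def max_load(n, m, d, rng):
    """Throw n balls into m bins; each ball draws d random candidate
    bins and goes into the least loaded of them (d = 1 is plain
    random allocation)."""
    load = [0] * m
    for _ in range(n):
        bins = rng.sample(range(m), d)
        best = min(bins, key=load.__getitem__)
        load[best] += 1
    return max(load)
```

For n = m = 10^4 one typically sees a maximum load around 6 or 7 with d = 1, but only 3 or 4 with d = 2.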
The simple idea of first selecting a small set of alternatives at
random and then making the final choice after careful inspection of
these alternatives leads to great improvements over algorithms that
make their decisions simply at random. We illustrate the power of
this principle in terms of simple balls-and-bins processes. In
particular, we study recently presented algorithms that treat bins
asymmetrically in order to obtain a better load balancing. We compare
the behavior of these asymmetric schemes with symmetric schemes and
prove that the asymmetric schemes achieve a better load balancing than
their symmetric counterparts.