NordConsNet Workshop 2018 (29th May)

The 17th workshop of NordConsNet, the Nordic Network for researchers and practitioners of Constraint Programming

The Workshop

The Nordic Network for researchers and practitioners of Constraint Programming (NordConsNet) kindly invites you to participate in the yearly NordConsNet Workshop. The purpose of the workshop is to learn about ongoing research in Constraint Programming and optimisation, existing projects and products, and further development of the network. NordConsNet Workshop 2018 is the 17th edition of this annual event.

Location

The workshop will take place at the Palace, Södra Hamngatan 2, Göteborg, on Tuesday, 29 May 2018; see the Palace webpage for directions. The venue is a five-minute walk from the main railway station, and the conference room is on the 2nd floor.

Registration

To register, please send an email to Waldemar Kocjan. Registrations including lunch and catering must be sent by May 19. If you register after that date, you are still very welcome to attend any talks during the day; just be mindful not to enter during a talk, and please do not expect catering.

Please forward this information to anyone who might be interested in this workshop but is not yet on the NordConsNet mailing list: they can subscribe to it by applying to Justin Pearson.

Organisation

The NordConsNet Workshop 2018 is organised and sponsored by Jeppesen Systems. Please send any questions regarding the workshop to Waldemar Kocjan at Jeppesen.

Programme

The programme is preliminary and may change until the start of the workshop.

Time Presenter Title Presentation
10:00-10:05 Waldemar Kocjan (Jeppesen) Welcome -
10:05-10:35 Mattias Grönkvist (Jeppesen) Constraint Programming and Optimization at Jeppesen -
10:35-11:05 Emily Curry (Chalmers/Jeppesen) Alternative Pricing for Rostering Column Generation PDF
11:05-11:30 Fika (tea & coffee) -
11:30-12:00 Dag Wedelin (Chalmers) On the in-the-middle algorithm and heuristic and some of its properties PDF
12:00-12:30 Gustav Björdal (Uppsala University) Declarative Local-Search Neighbourhoods in MiniZinc PDF
12:30-13:00 Helge Spieker (Simula Research Lab, Norway) Estimating Objective Boundaries for Constraint Optimization Problems PDF
13:00-14:00 Lunch -
14:00-14:30 Christian Schulte (KTH/RISE) GECODE: Generic Constraint Development Environment PDF
14:30-15:00 Linnea Ingmar (Uppsala University) Making Compact-Table Compact PDF
15:00-15:30 Christos Dimitrakakis (Chalmers) DUCT: An Upper Confidence Bound Approach to Distributed Constraint Optimization Problems PDF
15:30-15:50 Fika (tea & coffee) -
15:50-16:20 Stephan Gocht (KTH) Understanding Conflict-Driven Clause Learning SAT Solvers Through the Lens of Proof Complexity PDF
16:20-16:50 Jan Elffers (KTH) Divide and Conquer: Towards Faster Pseudo-Boolean Solving PDF
16:50 Closing -


Abstracts

Constraint Programming and Optimization at Jeppesen
Mattias Grönkvist (Jeppesen)

Jeppesen is a world-leading supplier of crew and fleet planning and operations software for airlines. We use the latest computational technology to deliver high quality results to large-scale optimization problems as fast as possible. In this presentation I will talk about the company, the problems we solve, and the technologies and methodologies we use.

Alternative Pricing for Rostering Column Generation
Emily Curry (Chalmers/Jeppesen)

In airline crew rostering, the objective is to create personalized schedules, i.e., rosters, for a set of crew members. Because of the large number of possible rosters that could be formed, the problem is solved using column generation, where each column corresponds to a specific roster. The pricing problem is then defined as finding legal rosters with the potential to improve the current solution. Since the rules and regulations regarding rosters vary between airlines, we have chosen to treat the pricing problem as a black-box optimization problem.
Three different methods for solving the black-box pricing problem have been implemented. The first method uses binary particle swarm optimization (BPSO) to search for improving rosters. The other two methods use surrogate modeling to fit a nonlinear surrogate function to a set of sampled rosters using radial basis functions. The surrogate function is then either linearly approximated, so that a shortest-path problem can be set up and solved, or solved heuristically by a BPSO method.
The three methods have been evaluated on five real-world test cases. For each test case, a large number of different pricing problems are solved. Our comparison of the methods' performance shows that the method using BPSO performed the best, followed by the surrogate modeling approach without the linear approximation.
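As a rough illustration of the first of these methods, here is a minimal, generic binary PSO sketch in C++. The bit-vector encoding and the `evaluate` objective are hypothetical placeholders standing in for a roster and the black-box pricing objective; this is not Jeppesen's actual implementation.

```cpp
// Generic binary particle swarm optimization (BPSO) sketch.
// The bit-vector "roster" and the black-box objective `evaluate` are
// hypothetical placeholders (lower objective value is better).
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

int main() {
  const int nBits = 32, nParticles = 20, nIters = 200;
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> U(0.0, 1.0);

  // Hypothetical black-box objective over a bit-vector.
  auto evaluate = [](const std::vector<int>& x) {
    double cost = 0.0;
    for (int i = 0; i < (int)x.size(); ++i)
      if (x[i]) cost += double(i % 7) - 3.0;
    return cost;
  };

  // Random initial swarm and zero velocities.
  std::vector<std::vector<int>> pos(nParticles, std::vector<int>(nBits));
  std::vector<std::vector<double>> vel(nParticles, std::vector<double>(nBits, 0.0));
  for (auto& p : pos)
    for (auto& b : p) b = (U(rng) < 0.5);

  std::vector<std::vector<int>> pbest = pos;       // personal best positions
  std::vector<double> pbestVal(nParticles);
  for (int i = 0; i < nParticles; ++i) pbestVal[i] = evaluate(pos[i]);
  int gbest = int(std::min_element(pbestVal.begin(), pbestVal.end()) - pbestVal.begin());

  const double w = 0.7, c1 = 1.5, c2 = 1.5;        // inertia and attraction weights
  for (int t = 0; t < nIters; ++t) {
    for (int i = 0; i < nParticles; ++i) {
      for (int d = 0; d < nBits; ++d) {
        vel[i][d] = w * vel[i][d]
                  + c1 * U(rng) * (pbest[i][d] - pos[i][d])
                  + c2 * U(rng) * (pbest[gbest][d] - pos[i][d]);
        // In binary PSO, the sigmoid of the velocity gives the probability of the bit being 1.
        pos[i][d] = (U(rng) < 1.0 / (1.0 + std::exp(-vel[i][d]))) ? 1 : 0;
      }
      double v = evaluate(pos[i]);
      if (v < pbestVal[i]) {
        pbestVal[i] = v;
        pbest[i] = pos[i];
        if (v < pbestVal[gbest]) gbest = i;        // track the global best index
      }
    }
  }
  return 0;  // pbest[gbest] holds the best bit-vector found
}
```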

On the in-the-middle algorithm and heuristic and some of its properties
Dag Wedelin (Chalmers)

Beginning with special cases of linear programming, I will describe these algorithms and some of their properties. I will also briefly discuss the max-sum problem and related algorithms, as well as some of the general challenges with numerical propagation algorithms in relation to classical constraint programming.

Declarative Local-Search Neighbourhoods in MiniZinc
Gustav Björdal (Uppsala University)

The aim of solver-independent modelling is to create a model of a satisfaction or optimisation problem independent of a particular solving technology. This avoids early commitment to a solving technology and allows easy comparison of technologies.
MiniZinc is a solver-independent modelling language, supported by CP, MIP, SAT, SMT, and (constraint-based) local search (CBLS) backends. Some solving technologies, in particular CP and CBLS, require not only a model but also a search strategy. While backends for these technologies offer default (autonomous) search strategies, it is often beneficial to be able to include in a model a user-specified search strategy for a particular solving technology, especially if the search strategy can encapsulate knowledge about the structure of the problem. This is complex since a local-search strategy is often tightly tied to the model. Hence we wish to use the same language for specifying the model and the local search.
In this presentation, I will show how we extended MiniZinc so that one can attach a fully declarative neighbourhood specification to a model, while maintaining the solver-independence of the language.

Estimating Objective Boundaries for Constraint Optimization Problems
Arnaud Gotlieb and Helge Spieker (Simula Research Lab, Norway)

Solving Constraint Optimization Problems (COP) requires exploring a large search space. By providing objective boundaries, this space can be pruned. Finding close boundaries that correctly under- or overestimate the optimum is difficult without a heuristic function for the COP. We present a method for learning to estimate boundaries from problem instances using machine learning. The trained model can then estimate boundaries for unseen instances and thereby support the constraint solver through an additional boundary constraint.

GECODE: Generic Constraint Development Environment
Christian Schulte (KTH/RISE)

Gecode is a widely used toolkit for developing constraint-based systems and applications. Gecode provides a constraint solver with state-of-the-art performance while being modular and extensible. Gecode is: open (documented interfaces support tasks from modeling to implementing new variables, constraints, search engines, ...), free (MIT license), portable (standard C++), accessible (extensive tutorial and reference documentation), efficient (excellent performance; has won the 2008-2012 MiniZinc Challenges in all categories), and parallel (exploits today's multi-core hardware).

The talk gives an overview of what Gecode is, what one can do with it, and the basic ideas behind its architecture.
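To give a flavour of the toolkit, below is a minimal n-queens model written against Gecode's documented modelling interface (Space, IntVarArray, distinct, branch and a DFS search engine). It assumes Gecode 6's copy interface (earlier versions pass an explicit share flag) and is an illustrative sketch, not code from the talk.

```cpp
// Minimal 8-queens model sketch for Gecode 6: q[i] is the row of the queen in column i.
#include <iostream>
#include <gecode/int.hh>
#include <gecode/search.hh>
using namespace Gecode;

class Queens : public Space {
protected:
  IntVarArray q;
public:
  Queens(void) : q(*this, 8, 0, 7) {
    distinct(*this, q);                              // different rows
    distinct(*this, IntArgs::create(8, 0, 1), q);    // different "\" diagonals
    distinct(*this, IntArgs::create(8, 0, -1), q);   // different "/" diagonals
    branch(*this, q, INT_VAR_SIZE_MIN(), INT_VAL_MIN());
  }
  Queens(Queens& s) : Space(s) { q.update(*this, s.q); }   // copying, as used by search
  virtual Space* copy(void) { return new Queens(*this); }
  void print(void) const { std::cout << q << std::endl; }
};

int main(void) {
  Queens* model = new Queens;
  DFS<Queens> engine(model);      // the engine works on a clone of the model
  delete model;
  if (Queens* solution = engine.next()) {
    solution->print();
    delete solution;
  }
  return 0;
}
```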

Making Compact-Table Compact
Linnea Ingmar (Uppsala University)

The compact-table propagator for table constraints appears to be a strong candidate for inclusion into any constraint solver due to its efficiency and simplicity. However, successful integration into a constraint solver based on copying rather than trailing is not obvious: while the underlying bit-set data structure is sparse for efficiency it is not compact for memory, which is essential for a copying solver.
The presentation introduces techniques to make compact-table an excellent fit for a copying solver. The key is to make sparse bit-sets dynamically compact (only their essential parts occupy memory and their implementation is dynamically adapted during search) and tables shared (their read-only parts are shared among copies). We evaluate our techniques on top of Gecode as a copying solver. Dynamically compact bit-sets reduce peak memory by 7.5% and runtime by 13.6% on average, and by up to 66.3% and 33.2% respectively. Shared tables reduce runtime and memory usage even further. The reduction in runtime exceeds the reduction in memory, and a cache analysis indicates that our techniques might also be beneficial for trailing solvers. The proposed implementation runs on average almost an order of magnitude faster than Gecode's original implementations while using half the memory.
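As a rough illustration of the underlying data structure (not Gecode's actual implementation), the C++ sketch below shows a sparse bit-set in the compact-table style: an index array tracks which words may still be non-zero, so intersection and emptiness checks only touch the live part. Making it compact for copying would additionally mean re-allocating just those live words when a space is copied.

```cpp
// Illustrative sparse bit-set in the compact-table style; not Gecode's code.
#include <cstdint>
#include <utility>
#include <vector>

class SparseBitSet {
  std::vector<uint64_t> words;  // one 64-bit word per block of 64 table rows
  std::vector<int>      index;  // index[0..limit] are the words that may still be non-zero
  int                   limit;  // last live position in index; -1 means the set is empty
public:
  explicit SparseBitSet(int nbits)
    : words((nbits + 63) / 64, ~uint64_t(0)),   // for simplicity, trailing bits are not masked off
      index(words.size()),
      limit(int(words.size()) - 1) {
    for (int i = 0; i <= limit; ++i) index[i] = i;
  }
  // Intersect with a mask of the same word count; words that become zero
  // are swapped out of the live region so later operations skip them.
  void intersectWith(const std::vector<uint64_t>& mask) {
    for (int i = limit; i >= 0; --i) {
      int w = index[i];
      words[w] &= mask[w];
      if (words[w] == 0)
        std::swap(index[i], index[limit--]);
    }
  }
  bool empty() const { return limit < 0; }
};
```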

DUCT: An Upper Confidence Bound Approach to Distributed Constraint Optimization Problems
Christos Dimitrakakis (Chalmers)

We propose a distributed upper confidence bound approach, DUCT, for solving distributed constraint optimization problems. We compare four variants of this approach with a baseline random sampling algorithm, as well as with other complete and incomplete algorithms for DCOPs. Under general assumptions, we theoretically show that the solution found by DUCT after $T$ steps is approximately $T^{-1}$-close to optimal.
Experimentally, we show that DUCT matches the optimal solution found by the well-known DPOP and O-DPOP algorithms on moderate-size problems, while always requiring less agent communication. For larger problems, where DPOP fails, we show that DUCT produces significantly better solutions than local, incomplete algorithms. Overall we believe that DUCT is a practical, scalable algorithm for complex DCOPs.
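For background, confidence-bound methods of this kind build on the classical UCB1 selection rule from the bandit literature; a generic statement (not DUCT's distributed bound, which the paper derives) is

$$ a_t \in \arg\max_a \Big( \hat{\mu}_a + \sqrt{\tfrac{2 \ln t}{n_a}} \Big), $$

where $\hat{\mu}_a$ is the empirical mean reward of choice $a$, $n_a$ the number of times $a$ has been tried, and $t$ the total number of samples so far.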

Understanding Conflict-Driven Clause Learning SAT Solvers Through the Lens of Proof Complexity
Stephan Gocht (KTH)

Over the last decades, Boolean satisfiability (SAT) solvers based on conflict-driven clause learning (CDCL) have developed to the point where they can handle formulas with millions of variables. Yet it is not well understood how different aspects of state-of-the-art CDCL algorithms affect performance, and any such assessment is heavily influenced by the choice of benchmarks.
Resolution, the proof system underlying CDCL, is well understood from a proof complexity point of view. Therefore, we use theoretical benchmarks, which have the advantage that they are (a) scalable, (b) extremal with respect to different theoretical properties, and (c) theoretically easy in the sense that they have short resolution proofs. This allows us to study how efficient a solver's proof search is.
In this work, we evaluate different solver configurations on theoretical benchmarks, which allows us for the first time to relate the performance of these configurations to theoretical properties of the benchmarks, and raises a number of intriguing questions for further research.
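For reference, resolution, the proof system mentioned above, consists of the single inference rule

$$ \frac{C \lor x \qquad D \lor \lnot x}{C \lor D}, $$

where $C$ and $D$ are clauses; the clauses learned during CDCL conflict analysis can be derived by sequences of such resolution steps, which is why resolution proof length is the relevant theoretical measure here.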

Divide and Conquer: Towards Faster Pseudo-Boolean Solving
Jan Elffers (KTH)

The last 20 years have seen dramatic improvements in the performance of algorithms for Boolean satisfiability, so-called SAT solvers, and today conflict-driven clause learning (CDCL) solvers are routinely used in a wide range of application areas. One serious shortcoming of CDCL, however, is that the underlying method of reasoning is quite weak, which can lead to exponential running times for many simple combinatorial problems. A tantalizing solution is to instead use stronger pseudo-Boolean (PB) reasoning, but so far the promise of exponential gains in performance has failed to materialize: the increased theoretical strength seems hard to harness algorithmically, and in many applications CDCL-based methods are still superior.
We propose a modified approach to pseudo-Boolean solving, using division instead of the saturation rule in [Chai and Kuehlmann '05] and other PB solvers. In addition to resulting in a stronger conflict analysis, this also keeps integer coefficient sizes down, which further improves performance. Together with some other optimizations, this yields a solver that performed very well in the Pseudo-Boolean Competitions 2015 and 2016, and that has speed approaching that of CDCL-based methods, measured in terms of the number of conflicts per second.
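As a brief, generic illustration of the two rules being contrasted (not an example from the talk), written in the usual cutting-planes style over literals $\ell_i$ with non-negative integer coefficients $a_i$ and degree $A$:

Saturation: from $\sum_i a_i \ell_i \ge A$ derive $\sum_i \min(a_i, A)\,\ell_i \ge A$.

Division by an integer $c > 0$: from $\sum_i a_i \ell_i \ge A$ derive $\sum_i \lceil a_i/c \rceil\,\ell_i \ge \lceil A/c \rceil$.

For example, dividing $6x + 3y + 3z \ge 5$ by $3$ yields $2x + y + z \ge 2$, whereas saturation only yields $5x + 3y + 3z \ge 5$; division keeps the coefficients small, which is one of the advantages mentioned above.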