CeV: Constraints and Verification

Ongoing projects

COVERIF: Combining abstract interpretation and constraint programming to verify critical properties of embedded software with floating-point computations
type: ANR
members: LIX, LIP6, LINA, I3S.
period: 2015-2020
A key challenge in critical embedded software is the development of a set of tools and techniques for verifying accuracy, correctness and robustness. COVERIF aims at exploring new methods based on a tight collaboration between abstract interpretation and constraint programming. More precisely, the purpose of COVERIF is to push back the limits inherent to each of these techniques, to improve the accuracy of the computed estimates, to allow a more complete verification of programs with floating-point computations and, in doing so, to improve the robustness of critical decisions that rely on such computations.
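To give a flavour of this collaboration, here is a minimal sketch, not COVERIF code, of how an interval abstract domain can bound a floating-point expression and how a constraint-style domain-splitting step can tighten that bound; the function f and the domain [0, 4] are purely illustrative.

```python
# Minimal sketch (not COVERIF code): bounding f(x) = x*x - 2*x over x in [0, 4]
# with an interval abstract domain, then tightening the result by splitting the
# input domain, a simple constraint-programming-style refinement.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def f(x: Interval) -> Interval:
    # Abstract evaluation of x*x - 2*x.
    return x * x - Interval(2.0, 2.0) * x

coarse = f(Interval(0.0, 4.0))   # one-shot abstract interpretation
print(coarse)                    # Interval(lo=-8.0, hi=16.0), an over-approximation

# Refinement: split the input domain and take the union of the sub-results,
# mimicking the domain-splitting step of a constraint solver.
parts = [f(Interval(float(i), float(i + 1))) for i in range(4)]
print(min(p.lo for p in parts), max(p.hi for p in parts))   # -3.0 10.0
```

The one-shot evaluation yields [-8, 16] while the refined union gives [-3, 10]; the exact range of f on [0, 4] is [-1, 8], so the refinement is visibly, although not perfectly, more precise.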

Past projects

VACSIM: Validating the control of critical systems by coupling simulation and formal methods
type: ANR
period: 2011-2015
VACSIM targets the development of both methodological and formal contributions that will allow building a prototype on top of ControlBuild, the Dassault Systèmes numerical engineering tool, and illustrating the benefits of the coupling on industrial benchmarks. The final aim of the project is to propose a continuous validation process that can be used throughout the life cycle of the control software of critical systems built on numerical engineering environments.
PAJERO: Development of a horizontal SaaS/Cloud/HPC Workforce Management software platform providing a new optimization service combining complex human and material resources
type: OSEO ISI
period: 2011-2015
PAJERO is an innovative project formed to develop a Cloud-based Workforce Optimization platform. Proposed as an on-demand application (SaaS), it will provide a real-time capability to solve complex resource management problems through a set of groundbreaking features:
  • A new generation of optimization solvers using advanced Constraint Programming extensions and parallel algorithms (a toy constraint-based scheduling sketch follows this list)
  • HUDL: a natural, business-oriented metalanguage that allows a computer-illiterate business specialist to set up reusable business rules and to program the solver
  • A set of programming tools to transparently exploit the hardware environments of the next 15 years: massively parallel devices (many-core, GPGPU) and Cloud Computing
  • The platform will provide vertical markets with ways to solve their workforce optimization problems and to obtain the needed computing power
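As a toy illustration of the kind of problem the solver targets, here is a minimal sketch, not PAJERO code, of a workforce assignment solved by backtracking with constraint checks; the shifts, skills and workers are invented for the example.

```python
# Toy sketch (not PAJERO code) of a workforce assignment problem solved by
# backtracking with constraint checks: each shift needs one worker with the
# required skill, and no worker may take two shifts in the same time slot.

shifts = {                      # shift -> (required skill, time slot)
    "morning_desk":  ("reception", 1),
    "morning_phone": ("support",   1),
    "evening_desk":  ("reception", 2),
}
workers = {                     # worker -> set of skills
    "alice": {"reception", "support"},
    "bob":   {"reception"},
}

def consistent(assignment, shift, worker):
    skill, slot = shifts[shift]
    if skill not in workers[worker]:
        return False            # skill constraint violated
    for other, w in assignment.items():
        if w == worker and shifts[other][1] == slot:
            return False        # same worker, same time slot
    return True

def solve(assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(shifts):
        return assignment
    shift = next(s for s in shifts if s not in assignment)
    for worker in workers:
        if consistent(assignment, shift, worker):
            result = solve({**assignment, shift: worker})
            if result:
                return result
    return None                 # dead end, backtrack

print(solve())  # {'morning_desk': 'bob', 'morning_phone': 'alice', 'evening_desk': 'alice'}
```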
AEOLUS: Mastering the complexity of the cloud
Aeolus is an ANR-funded research project whose main objective is to tackle the scientific challenges that need to be solved in order to bridge the gap between Infrastructure as a Service and Platform as a Service solutions, by developing theory and tools to automate the deployment, reconfiguration and upgrade of variable-sized, non-homogeneous machine pools. We expect that the results of this research work will make it possible to deploy, maintain and administer, in a cost-effective way, the dynamically changing distributed architectures that are at the heart of Cloud services.
MANCOOSI: Managing the complexity of the open source infrastructure
type: FP7-ICT-2007-1
members: Université Paris Diderot (France), Edge-IT (France), Université de L'Aquila (Italy), INESC-ID (Portugal), Caixa Magica (Portugal), Université de Nice-Sophia Antipolis (France), Université de Tel Aviv (Israel), ILOG (France), Université de Louvain (Belgium), Pixart (Argentina).
period: 2008-2011
Mancoosi aims to develop the scientific knowledge and build the tools necessary to manage the complexity of the open source infrastructure, one of the essential building blocks of tomorrow's software architectures. Mancoosi explicitly targets the difficult problems that arise when one wants to efficiently and safely upgrade a set of software components in complex software infrastructures such as open source software distributions, which are among the most complex software systems known, made of tens of thousands of components that evolve over time without centralized design. Mancoosi will provide:
  • a model of the infrastructure, and the transformations it undergoes when adding or removing components;
  • advanced algorithms to choose efficient evolution paths when updating a platform (a toy encoding of the underlying dependency problem is sketched after this list);
  • a forum to attract leading experts by organizing an international competition;
  • tools that incorporate these findings and advance the state of the art in the field.
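As a toy illustration of the underlying dependency problem, here is a minimal sketch, not a Mancoosi tool, that encodes an installation request as a constraint satisfaction problem over invented packages, dependencies and conflicts, and looks for a smallest consistent installation.

```python
# Toy sketch (not a Mancoosi tool) of the upgrade problem as constraint solving:
# find a set of packages that contains the requested one, satisfies every
# dependency clause, and violates no conflict.

from itertools import combinations

depends = {                           # package -> list of dependency clauses,
    "webapp": [("db",), ("server",)], # each clause is a tuple of alternatives
    "server": [],
    "db": [],
    "old-db": [],
}
conflicts = {("db", "old-db")}        # pairs that cannot be installed together

def valid(install):
    for pkg in install:
        for alternatives in depends[pkg]:
            if not any(dep in install for dep in alternatives):
                return False          # unmet dependency
    return all(not (a in install and b in install) for a, b in conflicts)

def smallest_install(target):
    universe = list(depends)
    for size in range(1, len(universe) + 1):
        for subset in combinations(universe, size):
            if target in subset and valid(set(subset)):
                return set(subset)
    return None                       # request is unsatisfiable

print(smallest_install("webapp"))     # {'webapp', 'server', 'db'} (set order may vary)
```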
CAVERN: Constraints and abstractions for program verification
type: ANR Sécurité et Sûreté informatique
members: IRISA, I3S, CEA-LIST, ILOG.
period: 2008-2011
The CAVERN project (Constraints and Abstractions for program VERificatioN) aims to enhance the potential of Constraint Programming for the automated verification of imperative programs. The classic approach consists in building a constraint system that represents the objective to meet. This process currently relies on “generic” constraint propagation based solvers developed for other applications (combinatorial optimization, planning, etc.). The originality of the project lies in the design of abstraction-based constraint solvers dedicated to the automated testing of imperative programs. Concretely, the CAVERN project partners will study the integration of selected abstractions in their own constraint libraries, as currently used in their testing tools, in order to improve the treatment of loops, memory accesses (references and dynamic structures) and floating-point computations. Dealing efficiently with these constructs will allow us to scale up constraint-based testing techniques for imperative programs. This should open the way to more automated testing processes, which will facilitate software dependability assessment.
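As a rough illustration of constraint-based test generation, the following sketch, which is not CAVERN code, translates the branch condition of a small invented function into a constraint over its inputs, prunes the input domains in the spirit of constraint propagation, and then searches the reduced space for a test input.

```python
# Minimal sketch (not CAVERN code) of constraint-based test generation: to cover
# the 'then' branch of the invented function below, the branch condition is
# turned into a constraint over the inputs, the domain of y is pruned from that
# constraint, and the reduced space is searched for a test input.

def program_under_test(x, y):
    if x * x + y == 105:        # target branch to cover
        return "then-branch"
    return "else-branch"

domain_x = range(-10, 11)
domain_y = range(-10, 11)

# Pruning: from x*x + y == 105, y must lie in [105 - max(x*x), 105 - min(x*x)].
squares = [x * x for x in domain_x]
y_lo, y_hi = 105 - max(squares), 105 - min(squares)
pruned_y = [y for y in domain_y if y_lo <= y <= y_hi]   # only 5..10 survive

# Search the reduced space for a solution of the branch constraint.
test_input = next((x, y) for x in domain_x for y in pruned_y if x * x + y == 105)
print(test_input, program_under_test(*test_input))      # (-10, 5) then-branch
```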
TESTEC: Testing critical embedded real time systems
type: ANR Technologies Logicielles
period: 2008-2011
In the last few years, environments for automatic test case generation from specification models have emerged from the real need for functional tests. The main issue for these tools is handling combinatorial explosion while ensuring the requested coverage, computed on the sole basis of the specifications. In this regard, the TESTEC project addresses two key issues:
  • The first issue is the optimization of the usual techniques to handle large-scale projects. Our approach is based on an explicit management of time within the models as well as on the handling of the continuous and discrete variables of a hybrid system (a toy example follows this project description).
  • The second issue is reducing the size of the tests generated from the specification models, using formal verification results obtained from either a software or a system implementation.
The TESTEC project will provide a prototype to validate our approach in an environment based on existing tools.
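The toy example below, which is not a TESTEC tool, illustrates the first issue on an invented hybrid model: a discrete mode and a continuous time variable are constrained together, and a test case satisfying both is searched for over a discretized time axis.

```python
# Toy sketch (not a TESTEC tool) of a test case over a hybrid model: a discrete
# mode variable and a continuous time variable are constrained together, and a
# test (mode, activation time) satisfying both is searched for.

MODES = ["nominal", "degraded"]

def model_constraint(mode, t):
    # Illustrative requirement: in degraded mode the alarm must fire
    # strictly between 2.0 s and 3.0 s after activation.
    return mode == "degraded" and 2.0 < t < 3.0

# Discretize the continuous time axis for the search (0.0, 0.1, ..., 9.9 s).
times = [i / 10 for i in range(100)]
test_case = next(
    ((m, t) for m in MODES for t in times if model_constraint(m, t)),
    None,
)
print(test_case)    # ('degraded', 2.1)
```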
DANOCOPS: Automatic detection of non-conformance between a program and its specification
type: RNTL
period: 2004-2007
Validation of critical systems can represent up to 50% of a system's cost. Due to the increasing complexity of such systems and to the current trend of letting third parties handle part of the development, this cost tends to grow. Thus, the ability to specify and to check the system is now a critical issue. Today, in industry, software checking relies on tests and code reviews. Although a lot of research has been done on more formal approaches, formal methods are not really widespread in the software industry. Between these two trends, there is room for a third approach: less formal, but automatic. Some tools already exist that automatically detect some errors in the source code; however, they do not actually take advantage of the specification. Our proposal is to automatically detect non-conformances of a program with respect to its specification. We propose an innovative technique to detect non-conformances that relies on a transformation of the program and of its specification into a set of constraints. Such a transformation has already been used successfully for automatic test data generation. The aim of the DANOCOPS project is to explore the capability of this technique to detect non-conformances and to develop a tool to apply it to programs written in C/C++/Java with a specification written in UML/OCL (B and Lustre will also be explored).
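The following minimal sketch, not a DANOCOPS tool, illustrates the idea on an invented example: the specification is expressed as a postcondition, the program and the negated postcondition are combined into a single query, and a counterexample input is searched for over a small integer domain.

```python
# Minimal sketch (not a DANOCOPS tool) of non-conformance detection: search for
# an input on which the program's result violates the specified postcondition,
# i.e. an input satisfying "program AND NOT specification".

def program_abs(x):
    # Intended to compute |x|; the boundary is wrong for x == -1.
    return x if x >= -1 else -x

def postcondition(x, result):
    # OCL-style specification: the result is the absolute value of x.
    return result == (x if x >= 0 else -x)

counterexample = next(
    (x for x in range(-100, 101) if not postcondition(x, program_abs(x))),
    None,
)
print(counterexample)   # -1: program_abs(-1) returns -1, the specification expects 1
```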
V3F: Validation and Verification of programs with floating point computations
type: ACI Sécurité Informatique
members: CEA LIST, LIFC, LIG, I3S.
period: 2003-2006
Using floating-point numbers to handle real numbers is an important source of failures and defects in critical embedded software. System models, combined with model verification, proofs and automatic test data generation, provide promising means to improve the software development process and the reliability of these systems. However, the currently available tools lack the capability to handle floating-point numbers correctly. Floating-point numbers offer a poor representation of real numbers but are available for free on most computers. The main issue with floating-point numbers is their poor mathematical properties; moreover, their behavior is hardware dependent, even with an IEEE 754 compliant unit.
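The snippet below illustrates two of these poor mathematical properties with standard IEEE 754 double precision, as exposed by Python floats: addition is not associative because small terms get absorbed, and common decimal literals are not represented exactly.

```python
# Small illustration of the floating-point pitfalls mentioned above, using
# IEEE 754 double precision as exposed by Python floats.

a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)       # 1.0
print(a + (b + c))       # 0.0 -> addition is not associative: c is absorbed by b

print(0.1 + 0.2 == 0.3)  # False -> decimal literals are not represented exactly
```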
INKA: Automatic and deterministic generation of structural test cases
type: RNTL
period: 2000-2003
Structural testing is a costly task requiring dedicated skills. Indeed, it relies on a detailed analysis of the software structure and on backward reasoning to determine the input values required to exercise a given targeted piece of the software. As a consequence, most software reaches the market with some pieces of instructions that have never been run. The INKA tool automatically and deterministically generates test cases from a description of the piece of software that has to be reached. INKA relies on an original technique in which the program under test and the selected points are translated into an equivalent system of constraints. This system is then used to compute the input values.
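The following sketch, which is not INKA code, illustrates the backward reasoning on an invented three-line program: the assignments are substituted backward into the targeted branch condition, yielding a constraint on the input that is then solved over a small illustrative domain.

```python
# Minimal sketch (not INKA code) of backward reasoning toward a target point.
# Program under test:
#     y = x + 3
#     z = 2 * y
#     if z > 10:        <- target statement to reach
#
# Substituting the assignments backward into the branch condition gives the
# constraint on the input:  2 * (x + 3) > 10, i.e. x > 2.

def condition_on_input(x):
    # z > 10, with z = 2*y and y = x + 3, after backward substitution.
    return 2 * (x + 3) > 10

# Solve the resulting constraint over an illustrative finite input domain.
test_input = next(x for x in range(-10, 11) if condition_on_input(x))
print(test_input)   # 3, the first value in the domain that reaches the target
```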