At one end of the spectrum, when a purported theorem is in focus, a researcher may choose to find a proof, any proof, independent of the properties of that possible proof. At the other end of the spectrum, a researcher may be concerned with the nature of a proof of the theorem under study. Here I am not thinking of direct versus indirect proof or proof by contradiction; rather, I am thinking of various aspects of elegance. Aspects of elegance include proof length, proof size, length of longest equation or formula in the deduced steps, level of proof, and various others. Indeed, as I recently wrote about, elegance is of main concern in much of my own research.
If you browse in the literature of mathematics or logic, you will occasionally find proofs of significant theorems supplied over decades, and, for various given theorems, you will find a succession of proofs that sometimes differ sharply in the context of elegance.
Of the various aspects of elegance, that which I find easiest to directly address is proof length. In my writings I have provided diverse direct approaches to use, with the aid of an automated reasoning program, for seeking a proof shorter than the one in hand, whether published or not. Not so recently, in my pursuit of a proof even shorter than the one I found in 2009--a 42-step proof of a theorem in lattice theory--I inadvertently discovered a proof with properties most pleasing to me and, I believe, to various mathematicians. (That 42-step proof is found in a notebook, offered on my website, focusing on the area of algebra. I shall offer here a most recently discovered shorter proof as part of the challenges.)
The theorem in focus in this column, cited earlier in the context of a 42-step proof, concerns an axiomatization
discovered by William McCune, specifically, the following 29-letter
single axiom for lattice theory.
(The symbol "v" denotes union, and the symbol "^" denotes intersection.)

(((y v x)^x) v (((z^ (x v x)) v (u^x))^v))^ (w v ((v6 v x)^ (x v v7)))=x.

Some time after his discovery, as history shows, he had in hand a proof of length 135 (applications of paramodulation). He chose to focus on the following nonstandard 4-basis axiomatization of lattice theory, rather than the more familiar 6-basis. I offer each to you in negated form.
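Since every totally ordered set (a chain) is a lattice under min and max, the single axiom can be spot-checked by machine on a small concrete example. The following Python sketch is my own illustration, not part of the column; it transcribes the left-hand side of McCune's axiom term by term and confirms that it evaluates to x for every assignment of its eight variables on a small chain:

```python
from itertools import product

# Join and meet on a chain, which forms a lattice.
def join(a, b): return max(a, b)   # "v", union
def meet(a, b): return min(a, b)   # "^", intersection

def mccune_lhs(x, y, z, u, v, w, v6, v7):
    # Left-hand side of McCune's single axiom, transcribed term by term.
    left = join(meet(join(y, x), x),
                meet(join(meet(z, join(x, x)), meet(u, x)), v))
    right = join(w, meet(join(v6, x), join(x, v7)))
    return meet(left, right)

# The axiom must evaluate to x for every assignment of its 8 variables.
chain = range(3)
assert all(mccune_lhs(*vals) == vals[0]
           for vals in product(chain, repeat=8))
print("axiom holds on the 3-element chain")
```

Of course, such a check only confirms the axiom holds in one particular lattice; proving that it axiomatizes all of lattice theory is the hard part that McCune settled.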
x = x.

b v (a^ (b^c)) != b | b^ (a v (b v c)) != b | ((a^b) v (b^c)) v b != b | ((a v b)^ (b v c))^b != b.

b ^ a != a ^ b | (a ^ b) ^ c != a ^ (b ^ c) | a ^ (a v b) != a | b v a != a v b | (a v b) v c != a v (b v c) | a v (a ^ b) != a.

(Of course, when paramodulation is used, the inclusion of a clause for reflexivity, x = x, is virtually required.)
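The four members of the nonstandard 4-basis (in un-negated form) can likewise be spot-checked on a concrete lattice. A small sketch, again my own and not from the column, using min and max as meet and join on a chain:

```python
from itertools import product

def j(a, b): return max(a, b)   # v, union
def m(a, b): return min(a, b)   # ^, intersection

def four_basis(a, b, c):
    """The four (un-negated) members of the nonstandard 4-basis."""
    return (j(b, m(a, m(b, c))) == b and
            m(b, j(a, j(b, c))) == b and
            j(j(m(a, b), m(b, c)), b) == b and
            m(m(j(a, b), j(b, c)), b) == b)

assert all(four_basis(a, b, c) for a, b, c in product(range(4), repeat=3))
print("4-basis holds on the 4-element chain")
```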
McCune, after informing me of his marvelous discovery of the cited single axiom for lattice theory, asked whether I could shorten his 135-step proof, deriving the four members of the nonstandard 4-basis. With the methodology I have used for many, many years, I eventually found, with his program OTTER in March 2002, a 50-step proof. (When I informed him of the existence of a 50-step proof, he asked if he could buy it.)
Just one of the fifty steps, obtained with paramodulation, relies on nine distinct variables, which provides a hint of what is forthcoming. Variable richness--the maximum number of distinct variables in any deduced step, nine for the 50-step proof--is one of the measures of proof elegance.
If you are using a program that offers you the option of placing a maximum on the variable richness of retained clauses, and if you conduct different experiments with different assignments to this parameter, you will often find that--not surprisingly--the smaller the value, the longer the resulting proof will be. After all, constraints clearly affect the nature of a proof.
Thanks to McCune, variable richness
can be directly addressed with OTTER, one of whose parameters permits
the researcher the option of assigning a maximum to the variable
richness of any retained conclusion, with assign(max_distinct_vars,n)
.
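The test OTTER applies when max_distinct_vars is assigned is easy to mimic: count the distinct variables in a candidate conclusion and discard it if the count exceeds the limit. A small sketch of my own, with the axiom written as a nested tuple so that "v" the operator and "v" the variable cannot collide:

```python
def distinct_vars(term):
    """Variables are strings; compound terms are tuples (op, left, right)."""
    if isinstance(term, str):
        return {term}
    return set().union(*(distinct_vars(arg) for arg in term[1:]))

# Left-hand side of McCune's single axiom as a nested tuple
# ('v' = union, '^' = intersection):
axiom_lhs = ('^',
             ('v', ('^', ('v', 'y', 'x'), 'x'),
                   ('^', ('v', ('^', 'z', ('v', 'x', 'x')),
                               ('^', 'u', 'x')), 'v')),
             ('v', 'w', ('^', ('v', 'v6', 'x'), ('v', 'x', 'v7'))))

print(len(distinct_vars(axiom_lhs)))  # 8

def retainable(term, max_distinct_vars):
    """Mimic the retention test: keep a conclusion only if it is
    within the assigned variable-richness limit."""
    return len(distinct_vars(term)) <= max_distinct_vars
```

So the single axiom itself has richness eight, and a limit of nine (say) would retain it, while a limit of seven would not.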
In the experiment that yielded the 50-step proof, I had in fact made no
explicit use of this feature of his program.
However, one of the hints that was included to direct the program's
reasoning relied on more than nine distinct variables.
(Bob Veroff's hints strategy is indeed powerful for directing an
automated reasoning program's reasoning; I often use hints in a later
experiment, where the hints are gathered from one or more earlier
experiments.)
In the study featured here, I did, for each experiment, include an
assignment for max_distinct_vars
, an assignment which led to a new proof,
that which I promised to include here.
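An experiment of the kind just described might be set up with an input file along the following lines. This is a hypothetical fragment of my own construction, following OTTER 3.3 conventions (set, assign, list(sos), and Veroff-style list(hints)); the hint clauses are placeholders for steps gathered from earlier experiments, not the actual input used:

```
set(para_into).
set(para_from).
assign(max_distinct_vars, 11).

list(sos).
  (((y v x)^x) v (((z^ (x v x)) v (u^x))^v))^ (w v ((v6 v x)^ (x v v7))) = x.
end_of_list.

list(hints).
  % steps gathered from one or more earlier experiments go here
end_of_list.
```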
On March 8, 2018, I returned to my attempts to find a proof even shorter
than the cited 42-step proof.
I assigned the limit of 11 to max_distinct_vars
, with the conjecture
that this added freedom might lead somewhere.
This belief in part rested on the proofs I had in hand of different
lengths, each of which relied on smaller and smaller assignments to
variable richness.
For example, in 2009 with an assignment of the value 7, I found a
74-step proof of the cited 4-basis and, eventually, had in hand a 58-step proof.
When I relied on the value 8 for a limit on variable richness, I
eventually found a 57-step proof.
Therefore, given these results, the assignment of the value 11 to
max_distinct_vars
seemed most intriguing.
And indeed, with this added latitude of 11 versus 9, OTTER eventually
did return to me a proof of length 42.
For further discussion of this proof, see my website.
Even later, using McCune's program, I found a marvelous proof of length 40, the following, a proof with charming properties, whose consideration leads to fascinating challenges.
----- Otter 3.3g-work, Jan 2005 -----
The process was started by wos on thrash, Wed Mar 14 22:21:04 2018
The command was "otter".  The process ID is 3907.

-----> EMPTY CLAUSE at 7.26 sec ----> 26770 [hyper,25706,2,22380,2707,23654].

Length of proof is 40.  Level of proof is 24.

---------------- PROOF ----------------

2 [] b v (a^ (b^c))!=b|b^ (a v (b v c))!=b| ((a^b) v (b^c)) v b!=b| ((a v b)^ (b v c))^b!=b.
4 [] (((y v x)^x) v (((z^ (x v x)) v (u^x))^v))^ (w v ((v6 v x)^ (x v v7)))=x.
56 [para_into,4.1.1.1.2,4.1.1] (((x v y)^y) v (y v y))^ (z v ((u v y)^ (y v v)))=y.
57 [para_into,4.1.1.2.2,4.1.1] (((x v (((y^ (z v z)) v (u^z))^v))^ (((y^ (z v z)) v (u^z))^v)) v (((w^ ((((y^ (z v z)) v (u^z))^v) v (((y^ (z v z)) v (u^z))^v))) v (v6^ (((y^ (z v z)) v (u^z))^v)))^v7))^ (v8 v z)= ((y^ (z v z)) v (u^z))^v.
118 [para_from,57.1.1,4.1.1.1.2.1.1] (((x v y)^y) v (((((z^ (y v y)) v (u^y))^v) v (w^y))^v6))^ (v7 v ((v8 v y)^ (y v v9)))=y.
124 [para_into,118.1.1.1.2,118.1.1] (((x v y)^y) v (z^y))^ (u v ((v v y)^ (y v w)))=y.
157 [para_into,124.1.1.2.2,124.1.1] (((x v (y^z))^ (y^z)) v (u^ (y^z)))^ (v v z)=y^z.
208 [para_from,157.1.1,4.1.1.1.2.1.1] (((x v y)^y) v (((z^y) v (u^y))^v))^ (w v ((v6 v y)^ (y v v7)))=y.
259 [para_into,208.1.1.1.2,208.1.1] (((x v y)^y) v y)^ (z v ((u v y)^ (y v v)))=y.
298 [para_from,259.1.1,208.1.1.2.2] (((x v y)^y) v (((z^y) v (u^y))^v))^ (w v y)=y.
302 [para_from,259.1.1,124.1.1.2.2] (((x v y)^y) v (z^y))^ (u v y)=y.
306 [para_from,259.1.1,56.1.1.2.2] (((x v y)^y) v (y v y))^ (z v y)=y.
382 [para_from,306.1.1,124.1.1.1.1] (x v (y^ (x v x)))^ (z v ((u v (x v x))^ ((x v x) v v)))=x v x.
389 [para_into,382.1.1.1.2,306.1.1] (x v x)^ (y v ((z v (x v x))^ ((x v x) v u)))=x v x.
495 [para_from,389.1.1,298.1.1.1.2] (((x v y)^y) v ((z^y) v (z^y)))^ (u v y)=y.
627 [para_from,495.1.1,389.1.1.2.2] ((x^y) v (x^y))^ (z v y)= (x^y) v (x^y).
681 [para_into,627.1.1,302.1.1,flip.1] ((x v y)^y) v ((x v y)^y)=y.
842 [para_from,681.1.1,208.1.1.1.2.1] (((x v y)^y) v (y^z))^ (u v ((v v y)^ (y v w)))=y.
847 [para_from,681.1.1,124.1.1.1] x^ (y v ((z v x)^ (x v u)))=x.
1365 [para_into,847.1.1.2.2,157.1.2] x^ (y v ((((z v ((u v x)^ (x v v)))^ ((u v x)^ (x v v))) v (w^ ((u v x)^ (x v v))))^ (v6 v (x v v))))=x.
1389 [para_from,847.1.1,298.1.1.1.2] (((x v y)^y) v ((z^y) v (u^y)))^ (v v y)=y.
2191 [para_from,1389.1.1,847.1.1.2.2] ((x^y) v (z^y))^ (u v y)= (x^y) v (z^y).
2455 [para_into,2191.1.1,302.1.1,flip.1] ((x v y)^y) v (z^y)=y.
2707 [para_from,2455.1.1,1365.1.1.2] x^ (y v (x v z))=x.
2727 [para_from,2455.1.1,847.1.1.2] x^ (x v y)=x.
3712 [para_into,2727.1.1,842.1.1,flip.1] ((x v y)^y) v (y^z)=y.
3715 [para_into,2727.1.1,259.1.1,flip.1] ((x v y)^y) v y=y.
3829 [para_from,3712.1.1,157.1.1.2] (((x v (y^ (z^u)))^ (y^ (z^u))) v (v^ (y^ (z^u))))^z=y^ (z^u).
4147 [para_from,3715.1.1,847.1.1.2] x^ ((y v x)^ (x v z))=x.
4194 [para_from,3715.1.1,2191.1.1.2] ((x^y) v (z^y))^y= (x^y) v (z^y).
4203 [para_from,3715.1.1,2455.1.1.1.1] (x^x) v (y^x)=x.
5734 [para_into,4147.1.1.2,842.1.1] (x^y)^x=x^y.
5745 [para_into,4147.1.1.2,259.1.1] x^x=x.
8017 [para_from,5734.1.1,4194.1.2.2] ((x^y) v ((y^z)^y))^y= (x^y) v (y^z).
22229 [para_from,5745.1.1,4203.1.1.1] x v (y^x)=x.
22302 [para_from,5745.1.1,1389.1.1.1.2.2] (((x v y)^y) v ((z^y) v y))^ (u v y)=y.
22378 [para_into,22229.1.1.2,4147.1.1] ((x v y)^ (y v z)) v y= (x v y)^ (y v z).
22380 [para_into,22229.1.1.2,3829.1.1] x v (y^ (x^z))=x.
22414 [para_into,22229.1.1,3712.1.1,flip.1] (x v y)^y=y.
23654 [para_into,22414.1.1.1,22378.1.1] ((x v y)^ (y v z))^y=y.
25344 [para_into,22302.1.1,22414.1.1] (x^y) v y=y.
25706 [para_into,25344.1.1.1,8017.1.1] ((x^y) v (y^z)) v y=y.
26770 [hyper,25706,2,22380,2707,23654].
Of the 40 deduced steps, only one relies on ten distinct variables. And this marvelous proof leads me to several challenges.
For the first challenge, can you find a proof, relying on McCune's single axiom alone, that has richness 11 and is of smaller length than 40? I know of no proof with this richness.
For your second challenge, can you find a proof of smaller size, where size measures the number of symbols in the proof, not counting the axioms or parentheses or commas?
For your third challenge--and one that appears to be very difficult to meet--can you find a direct approach to seeking proofs of smaller size than that in hand?
For your fourth challenge, can you find any proof based solely on McCune's single axiom such that the proof length is strictly less than 40? You may use as target the cited 4-basis or the cited 6-basis. However, I warn you that the proofs I have seen targeting the 6-basis are longer than those targeting the 4-basis. The shortest proof with the 6-basis as target that I have found has length 70 (applications of paramodulation), found after I (with OTTER) found the 40-step proof I present here. So, as an implied challenge, can you find a proof shorter than length 70 with the 6-basis as target?
A caveat: If you decide to seek a shorter proof, say, of some conjunction, beware of a not-obvious trap. Specifically, so typical of many of my experiments is the case in which a subproof of one of the members of the conjunction is found and then a shorter subproof of that member is found. But, perhaps as a disappointing surprise, the proof of the entire conjunction can get longer when using the shorter of the two subproofs.
Somewhat related is the value of using the cramming strategy, a strategy in which the researcher forces the program to cram the proof steps of a chosen subproof into the proof of the entire conjunction. You see, intuitively, the case to be avoided when seeking a shorter proof is that in which the subproofs of the members of the conjunction share too little in common.
As an aside and a demonstration of the robustness of McCune's automated reasoning program, in May 2018 I obtained proofs that rely on a retained clause whose number (reflecting how many have been retained) exceeds 6 million. In a different experiment, also focusing on McCune's single axiom for lattice theory, the program offered me a proof after more than 1,600,000 CPU-seconds. Such numbers correctly suggest how much progress has occurred in automated reasoning, and such numbers strongly support the value of relying on the assistance of an automated reasoning program.
They also suggest how hard it is to find very elegant proofs and how much progress may lie ahead. As a mathematician, I find the results given here most pleasing.
The Oaxaca workshop 18w5208 "Theory and Practice of Satisfiability Solving" was the fourth meeting between theoreticians and practitioners working on the SATisfiability problem. Previous meetings took place in Banff in 2014, in Dagstuhl in 2015, and in Toronto in 2016.
The workshop program contained one-hour survey talks and 30-minute talks on more specific topics, including challenge talks.
The first day of the workshop was dedicated to core SAT. The second day focused on two extensions of SAT, pseudo-Boolean solving and MaxSAT, but also enlarged the scope of the workshop to Satisfiability Modulo Theories and Mixed Integer Programming. The third day was dedicated to QBF. The fourth day covered polynomial calculus (Gröbner bases), worst-case complexity, and various recent results on topics covered in previous days. The last day covered both practical aspects of SAT or SMT and theoretical aspects in proof complexity. The full programme can be found here.
It is important to point out that with every new workshop in this series the interaction between theoreticians and practitioners has increased significantly, and this was especially pronounced in the workshop in Oaxaca, with lively discussions during most of the talks (often necessitating a somewhat liberal interpretation of the exact times in the workshop schedule). The two communities now know each other much better, and their intersection is growing. This was witnessed by presentations this year by people who focussed on other aspects four years ago, for instance Jakob Nordström presenting the inner details of pseudo-Boolean solvers or Joao Marques-Silva presenting MaxSAT-based proof systems.
Three key observations were made during the workshop.
Moshe Vardi made a point about the evaluation of solvers in SAT papers and SAT-related competitions. Because competitions compare solvers on predefined sets of benchmarks, simple empirical analyses, such as scalability studies, are precluded. It would therefore be interesting to provide an alternative set of scalable benchmarks on which current and new techniques could be thoroughly evaluated. A gitlab repository to collectively manage such a library of benchmarks is now available.
Olaf Beyersdorff made an important observation: theoreticians engaged with SAT practitioners only after the fact, to explain the reasons for the success of the revolutionary Conflict Driven Clause Learning architecture, which has led to the current situation in which the theory of SAT has little influence on solver implementations. In areas such as QBF and DQBF, by contrast, theoreticians and practitioners both contribute to the improvement of the solvers, because they tackle those problems from different points of view at the same time. This provides some hope that solvers for other proof systems, such as cutting planes or polynomial calculus, could benefit from a similar common effort. First results in that direction have been published this year.
Fahiem Bacchus gave an inspiring talk about core-guided MaxSAT, providing a unifying view of such solvers, differentiated by the way they handle a "cardinality layer". Such a cardinality layer is built incrementally, driven by the detection of unsat cores, using new variables on top of the original CNF. The cores in that context may represent nontrivial reasoning. Depending on the cardinality constraint encoding used in the cardinality layer, it may, for instance, allow the solver to learn disjunctions of cardinality constraints, each cardinality constraint being represented by a literal. This talk showed once again how important the effect of encodings can be, and in particular that the introduction of new variables can have a huge impact in practice (as also suggested by theory, where unrestricted use of extension variables can make resolution-based SAT solvers simulate some of the strongest proof systems studied in proof complexity).
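That last point can be made concrete even for the simplest cardinality constraint, "at most one of x1..xn". The following sketch (my own illustration, not from the talk) contrasts the auxiliary-variable-free pairwise encoding with a Sinz-style sequential encoding that introduces new variables; clauses are lists of DIMACS-style signed integers:

```python
from itertools import combinations

def amo_pairwise(xs):
    """At-most-one without auxiliary variables: O(n^2) binary clauses."""
    return [[-a, -b] for a, b in combinations(xs, 2)]

def amo_sequential(xs, aux_start):
    """Sinz-style sequential at-most-one: n-1 auxiliary variables
    s_i meaning 'some x_j with j <= i is true', O(n) clauses."""
    n = len(xs)
    s = list(range(aux_start, aux_start + n - 1))
    clauses = [[-xs[0], s[0]]]                 # x_1 -> s_1
    for i in range(1, n - 1):
        clauses += [[-xs[i], s[i]],            # x_{i+1} -> s_{i+1}
                    [-s[i - 1], s[i]],         # s_i -> s_{i+1}
                    [-s[i - 1], -xs[i]]]       # s_i -> not x_{i+1}
    clauses.append([-s[-1], -xs[-1]])          # s_{n-1} -> not x_n
    return clauses

xs = [1, 2, 3, 4]
print(len(amo_pairwise(xs)), len(amo_sequential(xs, 5)))  # 6 8
```

For n = 4 the pairwise encoding is still smaller, but it grows quadratically while the sequential one grows linearly, and the auxiliary variables give the solver extra handles to propagate and learn on.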
All the presentations have been recorded and are available online.
We are thankful to BIRS/CMO for hosting the workshop in the beautiful Oaxaca city.
Philipp Rümmer, Secretary of AAR and CADE
The affairs of CADE are managed by its Board of Trustees. This includes the selection of CADE (IJCAR) co-chairs, the selection of CADE venues, electing Herbrand award winners, instituting new awards, etc.
The current members of the board of trustees are (in alphabetical order):
The terms of Christoph Benzmüller and Aart Middeldorp are ending.
We are therefore seeking to elect two trustees. An election will be held soon and all CADE members will receive a ballot.
The following candidates were nominated and their statements are below:
My scientific contributions comprise foundational research (e.g., semantics and proof theory of classical higher-order logic, universal reasoning framework for quantified non-classical logics), system development and system integration (e.g., Leo provers, agent based reasoning, proof assistants), reasoning infrastructures (e.g., TPTP THF, LeoPARD) and applications in philosophy, mathematics and artificial intelligence (including normative reasoning for controlling intelligent systems).
I have been an active member of the CADE community for more than two decades, and I have previously served as CADE trustee & vice president, as IJCAR steering committee member and as AAR board member. In the last decade I have spent significant effort reaching out to other communities. Recently I have been elected as a steering committee member of the Society of Deontic Logic and Normative Systems (DEON), I have given numerous invited presentations outside of our core community, and I have organised several conferences and events in which CADE research connects with other areas. I also publish very widely.
In my opinion our community has a bright future ahead. But CADE should eventually reconsider its comparably strong focus on automated reasoning in classical logics. Interdisciplinary application areas have emerged which require the automation of propositional and quantified non-classical logics, and I believe that our research work and our tools are not yet well enough adapted to serve such challenge applications. I am therefore motivated to continue the stimulation and fostering of interdisciplinary research activities beyond CADE's traditional focus areas, while at the same time maintaining my core research activities in the area of higher-order automated theorem proving.
I have been attending CADE since 1994. I presented several papers at CADE and IJCAR and served on many program committees. Together with Amy Felty, I served as PC chair of the highly successful 25th edition in Berlin.
After spending more than ten years in Japan, since 2003 I have led a research group on computational logic in Innsbruck, in which automated deduction plays an important role. System descriptions and competitions are an essential part of CADE. Several software tools (on certification, confluence, theorem proving, to name a few) are developed in Innsbruck, and we initiate/organize/host competitions (CoCo, TermCOMP) and workshops (IWC, WST) to advance the development of tools.
One of the most important topics that needs to be addressed by the CADE trustees is the publication forum of the proceedings. More and more national funding agencies insist on open access publications. The current situation is very unsatisfactory. Open access is essential to ensure that CADE stays an attractive conference in the future. If re-elected as CADE trustee, I will actively push for a change.
I am a tenured professor of computer science at Baden-Wuerttemberg Cooperative State University in Stuttgart, Germany. In the CADE community I am probably best known for the development of the theorem prover E, the co-creation of the ES* and PAAR workshop series, and as one of the program chairs of IJCAR 2018.
I have attended nearly every CADE and IJCAR meeting since CADE-13 in 1996, part of the first FLoC. In a very real sense, CADE (and CASC) have shaped a good part of my academic career. I try to strike a good balance between theory and practice in my research, and I think a similar balance is important for CADE.
I have always been a proponent of free software and the creative commons movement. I think CADE should seriously consider moving to a reputable non-profit open-access publisher. Locking papers behind the paywall of a commercial publisher is antithetical to the free exchange of scientific ideas. Requiring authors to give up the copyright on their work to a third party is also hard to justify in a time where nearly every step of publication is done by unpaid scientific volunteers. For similar reasons, I think that code of systems described at CADE should be available - ideally in source code form under a free license. Many important implementation details and techniques cannot be learned from a short paper, and replication of results is essentially impossible without access to the code.
I am quite happy with the current schedule of alternating CADE and IJCAR conferences, with a FLoC every 4 years. In my experience, conferences held at university locations are usually, if not universally, both more economic and more productive than conferences in hotels, so in the case of competing bids I would usually prefer the former.
I guess most people know me as "the TPTP guy" or "the CASC guy", which is a fair reflection of my focus in Automated Reasoning. I'm particularly interested in the evaluation and effective use of automated reasoning systems, i.e., the practice rather than the theory. But I know for sure that good practice builds on the shoulders of theory giants, and theory without implementation does not impress the user community. So, as a trustee (twice already) I have supported, and will continue to support, a balanced conference of both theory and practice papers. This is important when choosing program chairs. Like everyone, I am in favour of using CADE resources to support students, summer schools, doctoral symposia, etc. Finally, I'm cool with IJCAR, and believe that CADE and IJCAR are the premier forums for the presentation of new research results in all aspects of automated deduction.
CADE membership currently coincides with membership in the Association of Automated Reasoning (AAR). CADE membership has so far been interpreted as including all persons who have attended at least one CADE or IJCAR in the past. Over the decades, this has resulted in a large number of members who are no longer active in the automated reasoning community. The CADE trustees, in agreement with the AAR board, suggest splitting the interpretation of AAR and CADE membership and limiting CADE membership to persons registered for at least one of the last three CADEs (including IJCAR).
The topic was discussed at the CADE business meeting on Tuesday, July 17th, 5pm. There were 53 participants, resulting in a quorum. After a discussion of several alternatives for interpreting the AAR bylaw on membership, it was agreed by vote (51 YES, 1 NO, 1 ABSTENTION) that paying the CADE/IJCAR conference fee constitutes membership for three years. This means the CADE voting community now consists of all participants of the past 3 CADE conferences (including IJCAR). We will implement this new interpretation for the first time in the upcoming trustee elections.
The change in interpretation by CADE does not affect the distribution of the AAR newsletter. The newsletter will be sent to anybody who has ever attended a CADE/IJCAR, as long as they do not opt out.
The ITP conference series is concerned with all topics related to interactive theorem proving, ranging from theoretical foundations to implementation aspects and applications in system verification, security, and formalization of mathematics.
Researchers and practitioners are invited to submit proposals for co-located workshops on topics relating to interactive theorem proving. Workshops can target the ITP community in general, focus on a particular ITP system, or highlight more specific issues or recent developments. Proposals for in-depth tutorials or tool introductions are also welcome.
Co-located events will take place on 8 September and 13 September and will be held on the same premises as the main conference. Conference facilities are offered free of charge to the organizers. Workshop-only attendees will enjoy a significantly reduced registration fee.
Proposals for workshops should contain at least the following pieces of information:
Proposals should be submitted by email, no later than 9 December 2018. Selected workshops will be notified by 23 December 2018.
Call for Workshops
Workshop proposals for CADE-27 are solicited. The workshops will
take place on August 25-26 2019, before the main conference. Both
well-established workshops and newer ones are encouraged. Similarly,
proposals for workshops with a tight focus on a core automated reasoning
specialization, as well as those with a broader, more applied focus,
are very welcome.
Call for Tutorials
Tutorial proposals for CADE-27 are solicited. Tutorials are expected to
be either half-day or full-day events, with a theoretical or applied
focus, on a topic of interest for CADE-27. Proposals should provide the
following information:
Call for System Competitions
The CADE ATP System Competition (CASC), which evaluates automated
theorem proving systems for classical logics, has become an integral
part of the CADE conferences.
Further system competition proposals are solicited. The goal is to
foster the development of automated reasoning systems in all areas
relevant for automated deduction in a broader sense. Proposals should
include the following information:
Important Dates
Workshop/Tutorials/System Competitions:
Submission Instructions
Proposals for workshops, tutorials, and system competitions should be
uploaded via easychair.
Recurrent topics of RAMiCS conferences, held since 1994, include semiring- and lattice-based structures such as relation algebras and Kleene algebras, their connections with program logics and other logics, their use in theories of computing, their formalisation with theorem provers, and their application to modelling and reasoning about computing systems and processes.
Programme: see here
Registration (early fee before 15 September 2018): see here
Certified Programs and Proofs (CPP) is an international forum on theoretical and practical topics in all areas, including computer science, mathematics, and education, that consider certification as an essential paradigm for their work. Certification here means formal, mechanized verification of some sort, preferably with production of independently checkable certificates.
Topics of interest
We welcome submissions in research areas related to formal
certification of programs and proofs. The following is a suggested
list of topics of interests to CPP. This is a non-exhaustive list and
should be read as a guideline rather than a requirement.
Important Dates (AoE, UTC-12h)
LATA is a conference series on theoretical computer science and its applications. LATA 2019 will reserve significant room for young scholars at the beginning of their career. It will aim at attracting contributions from classical theory fields as well as application areas.
Topics of either theoretical or applied interest include, but are not limited to:
Important Dates (all at 23:59 CET):
For information about the submission process, see the conference website.
The aim of JELIA 2019 is to bring together active researchers interested in all aspects concerning the use of logics in Artificial Intelligence to discuss current research, results, problems, and applications of both theoretical and practical nature. JELIA strives to foster links and facilitate cross-fertilization of ideas among researchers from various disciplines, among researchers from academia and industry, and between theoreticians and practitioners.
Authors are invited to submit papers presenting original and unpublished research in all areas related to the use of logics in Artificial Intelligence including:
Important Dates:
For information about the submission process, see the conference website.
Lean Together is a meeting for users (both current and future) and developers of the Lean proof assistant and its libraries, focusing on the formalization of mathematics. The 2019 meeting will also serve to kick off the Lean Forward NWO project.
We are seeking participants at all levels of experience with Lean and proof assistants. The program will include tutorial sessions on the system and its libraries. Users of other systems who are interested in mathematical applications are encouraged to attend.
Please save the date, and let us know if you would like to participate!
The ACL2 Workshop series is the major technical forum for users of the ACL2 theorem proving system to present research related to the ACL2 theorem prover and its applications. ACL2 is an industrial-strength automated reasoning system, the latest in the Boyer-Moore family of theorem provers. The 2005 ACM Software System Award was awarded to Boyer, Kaufmann, and Moore for their work in ACL2 and the other theorem provers in the Boyer-Moore family.
The registration is open. Early registration is until October 26. See the website for further information including registration, organization, venue, sponsors, and a list of accepted papers.
Please see this link for information about lodging options near the workshop venue.
ThEdu's programme comprised seven contributions, which are also presented on the web pages. Postproceedings are now planned to collect the contributions, upgraded to full papers. The contributions' topics are diverse, in accordance with ThEdu's scope, and this call is open to everyone, including those who did not participate in the workshop. All papers will undergo review according to EPTCS standards.
THedu'18 Scope:
Computer Theorem Proving is becoming a paradigm as well as a
technological base for a new generation of educational software in
science, technology, engineering and mathematics. The workshop brings
together experts in automated deduction with experts in education in
order to further clarify the shape of the new software generation and
to discuss existing systems.
Topics of interest include:
Important Dates:
More information about the postproceedings on this web page.
Model checking is a computer-assisted method for the analysis of dynamical systems that can be modeled by state-transition systems. Drawing from research traditions in mathematical logic, programming languages, hardware design, and theoretical computer science, model checking is now widely used for the verification of hardware and software in industry.
The editors and authors of this handbook are among the world's leading researchers in this domain, and the 32 contributed chapters present a thorough view of the origin, theory, and application of model checking. In particular, the editors classify the advances in this domain and the chapters of the handbook in terms of two recurrent themes that have driven much of the research agenda: the algorithmic challenge, that is, designing model-checking algorithms that scale to real-life problems; and the modeling challenge, that is, extending the formalism beyond Kripke structures and temporal logic.
The book will be valuable for researchers and graduate students engaged with the development of formal methods and verification tools.
More details are available on the Springer web page of the book.
Cyber-physical systems (CPSs) combine cyber capabilities, such as computation or communication, with physical capabilities, such as motion or other physical processes. Cars, aircraft, and robots are prime examples, because they move physically in space in a way that is determined by discrete computerized control algorithms. Designing these algorithms is challenging due to their tight coupling with physical behavior, while it is vital that these algorithms be correct because we rely on them for safety-critical tasks.
This textbook teaches undergraduate students the core principles behind CPSs. It shows them how to develop models and controls; identify safety specifications and critical properties; reason rigorously about CPS models; leverage multi-dynamical systems compositionality to tame CPS complexity; verify CPS models of appropriate scale in logic; and develop an intuition for operational effects.
The book is supported with homework exercises, lecture videos, and slides.
The University of Luxembourg's Centre for Security and Trust is seeking to hire one research associate (postdoc) and one PhD candidate to perform research in formal methods for voting systems. The positions are part of the joint Luxembourg National Research Fund (FNR) and Norwegian RCN project "SURCVS" on the design and evaluation of secure yet usable voting systems.
The University offers highly competitive salaries and is an equal opportunity employer. You will work in an exciting international environment and will have the opportunity to participate in the development of a dynamic and growing centre.
Successful candidates will participate in the activities of the Security and Trust of Software Systems (SaToSS) research group led by Prof. Dr. Sjouke Mauw. The group focuses on formalising and applying formal reasoning to real-world security problems and trust issues.
Further information and submission guidelines:
Applications will be considered on the basis of receipt, so early submission is encouraged; applications submitted within the stated deadline will be given preference.
Deadline for applications (for both positions): October 15th, 2018
The TCS Group at KTH Royal Institute of Technology invites applications for a PhD position in CS focusing on algorithms for solving the Boolean satisfiability problem (SAT) very efficiently for large classes of instances, and on analyzing and understanding such algorithms. To receive full consideration, your application should be submitted via this web page by the deadline of October 31, 2018, but candidates will be reviewed continuously and early applications are strongly encouraged.
This is a position within the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP), Sweden's largest-ever individual research program and a major national initiative for strategically motivated basic research, education, and faculty recruitment. The PhD student will be part of the WASP AI-Math Graduate School, providing a valuable network with other researchers within WASP and presenting unique opportunities for students who are dedicated to achieving international research excellence with industrial relevance.
The position is for four years full time, but PhD positions usually (though not necessarily) include 20% teaching, in which case they are prolonged by one more year. The successful candidate would ideally start as early as possible in 2019, though this is negotiable. The position is fully funded and comes with a competitive salary.
See the following web page for the full announcement, with more information and instructions on how to apply. Informal enquiries are welcome and may be sent to Jakob Nordström.
I am looking for PhD students to work with me on a variety of topics at the intersection of software engineering, formal methods, and verification technology such as:
The PhD students will join the Software Institute, a recently inaugurated center of excellence devoted to all aspects of software engineering research and development.
The Software Institute is part of the Faculty of Informatics at the Università della Svizzera italiana (USI), located in beautiful Lugano, Switzerland. The Faculty (established in 2004) offers BSc, MSc, and PhD study programs, and has a high international standing in multiple areas of computer science research.
The official teaching and working language of the Faculty is English. PhD positions come with a competitive salary (around 50 kCHF/year for PhD students) and excellent working conditions.
Eligible candidates should have a Master's Degree in Computer Science or a closely related area, a good knowledge of English, a passion for programming, and a burning desire to try out new ideas and to make an impact. Female candidates are particularly encouraged to apply.
For more information about the positions, and details on how to apply, see here.
We have two open PhD student positions in the program verification group. These positions involve performing research on program verification under the supervision of Prof. Dr. Bart Jacobs and postdoctoral researcher Amin Timany, with the goal of obtaining a PhD within four to five years. PhD students are expected to publish and present their results regularly at competitive international conferences and journals, which generally involves international travel, as well as to contribute to the Department's educational obligations. Possible topics include:
Links:
Profile:
Interested? Contact Prof. Bart Jacobs & Dr. Amin Timany no later than September 30, 2018. Do not hesitate to contact us if you have any questions regarding these open positions.
VERIMAG has TWO open post-doc positions on Coq developments (certified distributed algorithms; certified compiler)
How to Apply & Contact Information
Please send information requests/applications to
Applications must include the following documents:
Required Skills
Coq is a proof assistant that combines software development in a purely functional, strongly typed language with theorem proving.
VERIMAG is a leading laboratory in methods for the development of safety-critical systems. There are several ongoing efforts at VERIMAG on Coq proofs, including one on distributed algorithms and one on certified compilers.
We are looking for two Ph.D. students or postdoctoral researchers to work on the DFG project "Proving Accountability in Traffic" (PAcT) that aims to formalize traffic rules for automated vehicles in logic. The funding is for 3 years.
The focus of the project is on developing new methods for the grand challenge of guaranteeing legally correct maneuvers of automated vehicles. A special focus will be (a) on the formalization of traffic rules using logic and (b) on the consideration of uncertainties in the future behavior of other traffic participants, as well as uncertainties originating from the object retrieval of environment sensors (e.g., video cameras, LIDAR, RADAR). Using formal verification techniques for hybrid systems should ensure legally correct maneuvers despite the aforementioned uncertainties. Results should be demonstrated in simulations and in a real automated vehicle.
Job requirements:
This is a joint project by Matthias Althoff and Tobias Nipkow at the Technical University Munich. It is funded by the German Research Foundation (DFG).
Remuneration will be according to the German public sector salary scale TV-L E13 (starting at about 3400 Euros/month).
Applications should be sent to both
in an email entitled "PAcT application", providing:
Informal enquiries are welcome.
Professor and postdoc/researcher positions in Tallinn in trustworthy software technologies
The Government of Estonia has allocated funds to create six new research groups in selected areas of ICT at the Tallinn University of Technology and University of Tartu during 2018-2022.
One of these areas, to be hosted at the Tallinn University of Technology, is trustworthy software technologies.
Specific topics of interest in this area include theories, methods and tools for program analysis, verification, program transformations and generation, program synthesis, programming languages, functional programming, refinement/dependent types, software contracts, theorem proving and proof assistants, certified software, processes of building trust in software, economics of trust.
Applications are being sought for these positions:
The official position announcement for the professor position is here.
Informal queries can be addressed to Tarmo Uustalu.
The Joint Laboratory ProofInUse hires an experienced R&D engineer (M/F) in the domain of Formal Methods for Software Engineering: see the full description here.
ProofInUse originates from the sharing of resources and knowledge between the Toccata research team, which specializes in techniques for deductive program verification, and the SME AdaCore, a software publisher specializing in software development tools for critical systems. The SME TrustInSoft, which specializes in advanced static analysis techniques for the safety and security of C/C++ source code, recently joined the ProofInUse Laboratory.
The purpose of ProofInUse is to significantly increase the number of customers of the SPARK 2014 and TrustInSoft Analyzer technologies by popularizing the use of formal proof techniques. This popularization requires the resolution of several scientific and technological challenges. A first axis of work is to design and implement a new plug-in for deductive verification in the TrustInSoft Analyzer, making use of the Why3 intermediate tool for verification-condition generation, along the guidelines and design choices previously adopted for SPARK, which include strong restrictions on the analyzed code regarding the possibility of aliasing in data structures. A second axis of work is to relax the former non-aliasing restrictions, via an alias analysis based on Rust-style sharing control and borrowing. A third axis is to participate actively in the development of a formally proved C/C++ software library.
The recruited engineer will work in close collaboration with the ProofInUse research and development team to address both the scientific and technological challenges presented above. The engineer is expected to contribute both to advancing academic knowledge in the ProofInUse context and to transferring this knowledge into the software products distributed by AdaCore and TrustInSoft. The engineer will participate actively in the production of scientific publications and in the software development of the Why3 tool and the TrustInSoft Analyzer.
We expect the candidate to have experience with Formal Methods for Software Engineering in a broad sense; typically, the candidate should have defended a PhD in the domain of Formal Methods. More specifically, experience in formal logic and proof techniques, automated deduction, Satisfiability Modulo Theories solvers, Model Checking, or Abstract Interpretation techniques would be a plus.
The candidate should have solid experience in software development; knowledge of functional programming and of the programming languages OCaml, C/C++, and/or Rust would be a plus.
The candidate should be able to write and speak English fluently.
The job will take place both at Toccata's lab site in building 650 of University Paris-Sud in Orsay and at TrustInSoft's site in Paris, France.
The position is to be filled as soon as possible, starting from November 1st, 2018, for a duration of 36 months.
Apply by sending a CV and a motivation letter to the e-mail contact addresses below, and/or via the URL given at the top of this message.
Contact/Information: Claude Marche, Benjamin Monate, Yannick Moy.
We invite candidates for PhD student and Postdoc positions to start in 2018 or early 2019 in Innsbruck. The 5-year ERC project "SMART" aims to combine modern machine learning techniques with formal reasoning to provide proof advice and assistance.
The starting date can be negotiated. Both PhD student and postdoc positions are formal employment in Austria, with a regular salary and benefits.
A background in machine learning or formal proof is an advantage. Knowledge of German is not required; the group is international and the language of communication is English.
Candidates for a PhD position must hold an MSc in computer science or mathematics; candidates for the postdoctoral position must hold a PhD degree in computer science or mathematics.
Applications and informal inquiries are welcome, please contact Cezary Kaliszyk. Applications should include a CV and names and contact details of two references. For the Postdoc positions please include a brief research statement.
The city of Innsbruck, which hosted the Olympic Winter Games in 1964, 1976 and 2012 (YOG), is superbly located in the beautiful surroundings of the Tyrolean Alps. The combination of the Alpine environment and urban life in this historic town provides a high quality of living.
For more information about the project, the group, and the university, see this web page.
Stanford University Professor Emeritus Zohar Manna passed away on August 30, 2018 at his home in Netanya, Israel, after several months of illness. He was 79. Manna was a fellow of the Association for Computing Machinery and a pioneer in the use of formal methods for automated reasoning, verification, and synthesis, subjects on which he authored or co-authored nine books. He served on the faculties of the Weizmann Institute and Stanford University for many years and was a recipient of Guggenheim and Fulbright fellowships and of the Herbrand Award for distinguished contributions to automated reasoning. He is survived by his wife of half a century, Nitza; his children, Irit, Hagit, Yonit, and Amit; five grandchildren; and his 29 doctoral students. He will be sorely missed by all who knew him.
Welcome!
If this is the first time you receive this newsletter and you don't know why, the reason is most likely that you enjoyed great talks and free ice cream at IJCAR 2018. This makes you de facto a member of the Association for Automated Reasoning, which in turn means that from now on you will receive this newsletter. It is fairly low-traffic; currently there is a new issue every three months or so.
In it you will find information relevant to the AR community at large, including, among other things, calls for papers and job announcements. There are also columns contributed by various AAR members on topics of their choosing. If you want to contribute a column or advertise an event related to AR, you are welcome to do so; just send me an email.
If you want to opt out, the one to contact is Philipp, the AAR secretary, who manages the list of members.