Newsletter No. 116

August 2016

- From the AAR President, Larry Wos
- In memoriam (John) Alan Robinson
- Is There a Place for User Experiments in the AR Community?
- Special Issue
- Conferences
- Open Positions
- The Editor's Corner

People of all ages often enjoy solving puzzles. Sometimes a person insists that no assistance be given. In contrast, I find it more than piquant to see how an automated reasoning program can be used on a given puzzle. Therefore, in this column I again offer a puzzle as a challenge.

But first a bit of history. In 1963, I was introduced to the field then called automatic theorem proving. Of course, it was far, far from automatic. I pressed not long after to have the name changed to automated theorem proving, because it seemed to me to be more accurate. I did not succeed.

By 1979, my colleagues and I had been using our program to tackle problems in circuit design and circuit validation. I suggested that we also try to have the program solve puzzles. What was common to these activities—including, of course, finding proofs of theorems—was careful logical reasoning. After some persuasion—and noting that puzzle solving would likely generate far broader interest in our work—I succeeded. My colleagues agreed not only that we use the program for puzzle solving but also that the name automated reasoning be adopted for the field.

So, you now are given as a challenge the following puzzle to solve by using an automated reasoning program. The puzzle is taken from American Way Magazine, May 1982, page 161. The puzzle (if memory serves) was given to me years ago by a colleague at Argonne National Laboratory. I do not recall any attempt we made to solve it.

**The Puzzle:**
At a year-end cocktail party attended by five couples, the conversation
turns to birthdays.
Much to everyone's surprise, no two people in the group were born in the
same year or even in the same month, although their ages range from 25 to
34.
Moreover, the five youngest people are married to the five oldest (but in no
particular order).
And even more coincidentally, the five youngest have birthdays in the first
half of the year, and the five oldest have birthdays in the last half of the
year (no one has a birthday in June or December).

**Your Task:**
Using the clues given, try to match up the first and last names of each
person, and figure out their ages and months of birth.

Five last names: Donelly, Frye, Allan, Jenner, Tillman.

**Clues:**

- Barry, who is 33 and was born in September, is older than Mrs. Jenner.
- Three people are between the ages of Gloria and her husband Mark (Gloria is not necessarily younger than Mark).
- Andria Allan was born in January.
- Stuart is younger than Dan.
- The person who was born in March is Elizabeth's brother. He is younger than Barry's wife.
- Margaret Frye's husband is 32 years old.
- The oldest person in the crowd was born in November.
- Mrs. Donelly and Mrs. Jenner have the same astrological sign.
- Mr. Tillman, whose birthday is in May, is one year younger than his wife.
- Laurie's husband was born in April.
- The party's male host was born in August.
- Martin is 25.
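For readers who would rather attack the puzzle programmatically than by hand, the brute-force setup is easy to sketch in Python. The snippet below is illustrative scaffolding only, not a full solver: it fixes the age structure the puzzle dictates and shows how a single clue becomes a filter over candidate assignments.

```python
from itertools import permutations

# Ages are ten distinct consecutive integers, 25..34; the five
# youngest are married to the five oldest (in some order).
young_ages = list(range(25, 30))
old_ages = list(range(30, 35))

# Candidate marriages: which "old" age each "young" age is paired with.
pairings = list(permutations(old_ages))
print(len(pairings))  # 5! = 120 candidate pairings

# Each clue becomes a filter.  For example, "three people are between
# the ages of Gloria and her husband Mark": with ten distinct
# consecutive ages, three people strictly between two ages means an
# age gap of exactly 4.
def gloria_mark_ok(gloria_age, mark_age):
    return abs(gloria_age - mark_age) == 4
```

A full solver would enumerate, in the same style, the assignment of first names, last names, ages, and birth months, filtering by each clue in turn.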

If you obtain a solution, I would enjoy reading it, especially if OTTER is involved. If you solve the puzzle with another program, I am also interested: I am looking for more data for program comparison.

Before attempting to solve the puzzle, however, you might prefer a more reasonable challenge, a puzzle sometimes called the fruit puzzle. It was solved years ago by one of our automated reasoning programs. The solution I know about rests on only a modest number of clauses. Of course, predictably, you are asked not to consult any of my writings or the internet.

**The Fruit Puzzle:**
There are three boxes, respectively labeled apples, oranges, and bananas.
Unfortunately, each box is mislabeled.
You are correctly told that each box contains exactly one of the three kinds
of fruit and that no two boxes contain the same kind.
You are also correctly told that box 2—the one labeled oranges—actually
contains apples.

You are asked to determine the contents of boxes 1 and 3 and, of equal importance, then prove that your answer is correct.
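If you do want to check an answer mechanically, a plain brute-force search (here in Python rather than an automated reasoning program) settles the matter; the filters below are a direct transcription of the clues stated above.

```python
from itertools import permutations

fruits = ("apples", "oranges", "bananas")
labels = fruits                      # labels on boxes 1, 2, 3, in order

solutions = [
    contents
    for contents in permutations(fruits)
    if all(c != l for c, l in zip(contents, labels))  # every box mislabeled
    and contents[1] == "apples"                       # box 2 holds apples
]
print(len(solutions))  # exactly one assignment survives: the answer is forced
```

That the surviving assignment is unique is itself the proof of correctness the puzzle asks for: every other assignment violates some clue.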

It is with the deepest sadness that we announce the death of Alan Robinson on 5th August 2016 at the age of 86. His loss will be felt by the whole Automated Reasoning community. Alan was best known for the invention of the Resolution inference rule, which dramatically improved the efficiency of automated reasoning systems. He received many honours for his contributions, including the AMS Milestone Award in Automated Reasoning in 1985 and the CADE Herbrand Award in 1996. He also received honorary doctorates from the Universities of Leuven, Uppsala, and Madrid.

Alan was born in Yorkshire, England and studied Classics at Cambridge, before moving to the USA in 1952 to complete his education at Oregon and Princeton in Philosophy. His Ph.D. dissertation was supervised by Carl Hempel, with advice from Hilary Putnam. Hempel was well known for his work with the Berlin and Vienna circles of logical empiricists, devoted to formalization of science and mathematics. With Martin Davis, Putnam built one of the first automated reasoning systems. In his 1996 "Reminiscences" paper, Alan's friend, the famous physicist Murray Gell-Mann, described Alan's career trajectory thus: "He had gone from studying classics at Cambridge (like Margaret) to Greek philosophy, and then to mathematical logic, and finally to computer science—an interesting progression, which perhaps only Tony Leggett can match." "Margaret" was Gell-Mann's wife and Alan's wife Gwen's best friend and matron of honour.

Alan worked at several organizations on applications of formal mathematics, including DuPont and Argonne National Laboratory near Chicago. While a visitor at Argonne, he became interested in Automated Reasoning and, in 1963, invented Resolution. The resolution principle was one of the essential ideas behind the Fifth Generation Computer Project in Japan.

He moved to Syracuse University in 1967 and became professor emeritus there in 1993. He was a frequent visitor to AI and AR research laboratories across the world, conveying news of the latest developments from one site to the next. Alan set a standard for the itinerant scholar, freely and graciously sharing his depth of knowledge and encouraging students and senior researchers alike. He was open to engagement with anyone, and always a gracious discussant on any topic. Bob Boyer described him as "the Henry Kissinger of AI" for his jet-setting lifestyle. He had a tremendous sense of humour, was a sparkling conversationalist, and was a constant source of new ideas and inspiration. His lectures were widely enjoyed: spontaneous, fun, and full of clarity.

The application of Resolution to logic programming fascinated him, and he was the
founding editor of the *Journal of Logic Programming*. While a significant
contributor to logic programming, he was always generous in his praise of good
programming, and in his book *Logic: Form and Function* he traced what he called
"programming mechanisms" back to Hilbert and Russell, clarifying their role in
programming. His passion was to continue to learn about and understand the
relationship between logic and programming. In one of his later lecture tours,
he committed himself to explaining the role of Gödel's theorems in the general
paradigm of logic programming languages.

In an interview published by Maarten van Emden, Alan explained his passion for programming and his early encounter with Lisp:

I first encountered Lisp when John McCarthy rang my doorbell the day after we moved into a house in Menlo Park in the summer of 1965. I had published the Resolution paper a few months earlier and McCarthy had (in 2 hours!) written and run a Lisp program embodying the proof procedure, with, he said, minimal debugging. This seemed a miracle to me, who had for years been flailing about, first with assembly languages and then with Fortran. The gruff McCarthy, who was by then already well known, was also something of an astonishing revelation, but that is another story. I bought a copy of the Lisp 1.5 manual, and found it brilliant but with strangely clumsy but MIT-ishly bold and naive blunders (involving QUOTE and functions as objects, FUNARGS and all that, eventually fixed years later in SCHEME but still there in Common Lisp).

In later years, he explored the concept of informal reasoning, via case studies of the reasoning of human mathematicians.

Alan was also an accomplished pianist who enjoyed talking music, accompanying friends and family, and playing chamber music—in later years often with Jacques Cohen (violin) and Koichi Furukawa (cello) at many logic programming conference functions.

He is survived by his wife Gwen; his sons Alan and Hugh; and his daughter Gwen Owen.

The CADE board of trustees shares in the sadness at Alan Robinson's passing.
Alan Robinson won the Herbrand Award in 1996. With first-order resolution,
unification, and hyperresolution, just to mention the most famous of his
contributions, he was one of the founders of the field of automated reasoning,
as well as neighboring fields such as declarative programming. In 1991
Jean-Louis Lassez and Gordon Plotkin edited the book *Computational Logic:
Essays in Honor of Alan Robinson*, and in 2001 the *Handbook of Automated
Reasoning* co-edited by Alan Robinson and Andrei Voronkov appeared. Alan
Robinson's talk at IJCAR 2012 in Manchester is still widely remembered.

Dear Colleagues,

I would like to share with you some thoughts about how we evaluate results in the automated reasoning (AR) community and which kind of research we think is valuable.

Our community currently focuses on theoretical analyses, on empirical evaluations, and on systems' competitions. But hardly any attention is given to user evaluations. With this note I would like to draw attention to this neglect and I also hope to stimulate a fruitful discussion.

My note was motivated by a rejected paper at this year's IJCAR, but this in itself is unimportant. It is the circumstances that warrant interest and which motivated me to think about different approaches for providing scientific evidence.

The paper in question was called "An Empirical Evaluation of Two GUIs for an Interactive Theorem Prover" and it constitutes (as far as I know) the first empirical user evaluation of a formal verification system. Two out of three reviewers recommended (weak) rejection with a confidence score of 4 (expert level), and that was it for the paper.

Now the interesting part: one reviewer said that "this paper would fit very well for a conference on software engineering, but not for IJCAR", in other words, it is out of scope. Another reviewer complained that he or she had "some doubts of the usefulness of this very detailed (lengthy?) analysis for the reader. Is it really necessary to know the exact hypothesis of the evaluation, or which hypothesis can be rejected using which kind of test…?"

I must admit that I am quite worried by both responses, for several reasons:

First, controlled user experiments can provide kinds of scientific evidence that other experimental studies (e.g., case studies, benchmarks) cannot:

- It is actually possible to falsify a claim (up to statistical errors): for example, you can find out whether users really can do with your method/tool what you think they can do.
- If properly done, the conclusions have predictive power (again up to statistical errors). This is extremely important, for example, for a manager deciding whether investment in a new technology might pay off.

Second, the recommendation to submit the paper to a software
engineering (SWE) conference is misleading. Note that the technical
subject of SWE is **not** user evaluation either: it is about
architectures, frameworks, processes, etc. It is not the case that the
SWE community thinks that user studies are their subject. However, in
contrast to the AR community, the SWE community understood that it is
important not to merely claim the superiority of a new approach, but
to supply empirical evidence as well. In that sense it is more open
and perhaps also more mature than AR.

By the way, this was not always so: before Walter Tichy pointed out the lack of experimental validation in the area of SWE in 1992 (LNCS 707, pp. 30-32 and several follow-up articles), the SWE community had been reacting exactly like IJCAR: few people performed experimental user studies and those who did were mostly rejected on grounds of being not "technical" enough. But the SWE community finally learned that if you want to claim relevance of your methods for users, then you must do at least some user studies.

I strongly believe that empirical user studies must find their place in our community as well, if we ever want to create impact outside our niche. This is not to say that user experiments should replace benchmarks or analytic evaluations, but they should be part of our scientific portfolio for corroborating evidence of claims.

Third, to perform meaningful user studies, one needs to master the appropriate scientific methods and tools: "Is it really necessary to know the exact hypothesis of the evaluation, or which hypothesis can be rejected using which kind of test …?" Why, yes, of course! Otherwise you are just claiming nonsense. You could just as well ask "Is it really necessary to prove soundness using all this complex mathematical stuff?" of someone giving a formal argument. Explaining in detail how we designed and implemented our study so that it is statistically valid and others can repeat the setup, and detailing how we mitigated threats to validity: these are central contributions of the paper.
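To make the point concrete, here is a minimal, hypothetical sketch of the kind of hypothesis test at issue: invented task-completion times for two GUI variants, compared with a standard-library permutation test on the difference of means. The data and the 0.05 threshold are assumptions for illustration, not results from the paper under discussion.

```python
import random
from statistics import mean

# Invented sample data: task-completion times (seconds) per GUI variant.
gui_a = [210, 185, 240, 199, 230, 188, 205]
gui_b = [170, 160, 195, 150, 180, 172, 165]

def permutation_p_value(a, b, trials=10_000, seed=1):
    """Two-sided permutation test on the difference of sample means."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_p_value(gui_a, gui_b)
# Reject H0 ("the GUIs do not differ in completion time") when p < 0.05.
print(p < 0.05)
```

Stating exactly which null hypothesis this rejects, and why a permutation test (rather than, say, a t-test) is appropriate for the data, is precisely the methodological detail the reviewer dismissed as unnecessary.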

To summarize: validation by controlled user experiments is a mainstream scientific method. It is standard in the life sciences, experimental physics, chemistry, the social sciences, and so on. It should be standard in Computer Science as well! Doing it properly requires a solid methodological foundation, the right tools, an analysis of threats to validity, and so forth. To say that experimental user studies are out of scope, or that they need not be explained properly, can even be viewed as downright unscientific! I believe that experimental user studies should be part of any community that cares about, or makes claims about, the usability of its results.

To give just one more example, from cybersecurity: until ten years ago, essentially no user studies were performed there. But in recent years, "usable security" has become firmly established as a topic at all top security conferences, to the benefit of the whole community. One can ask: why is the AR community, with its impressive results, its gifted researchers, and its powerful tools, so marginal within Computer Science?

I would be happy to receive your thoughts on what you think the place of experimental user studies should be in AR, regardless of whether you agree or disagree with me.

P.S.: In the meantime, our paper has been accepted, with minor modifications, at Automated Software Engineering 2016 (20% acceptance rate). So it is true—our work is appreciated in the SWE community. But, of course, I would have preferred to discuss it at IJCAR …

The past few decades have seen major achievements in interactive theorem proving, such as the formalisation of deep mathematical theorems and significant bodies of theoretical computer science, as well as the verification of complex software and hardware systems. Too often, these impressive results have been published in abbreviated or fragmentary form in conference proceedings, or not at all. This special issue welcomes full-length papers describing past work not previously published in a journal, along with new developments of any length. Small, self-contained Proof Pearls and applications of all kinds are also welcome.

This special issue will be devoted to applications of interactive theorem proving in their full variety: formalised mathematics, formalised theory, formalised semantics, formal proofs of hardware or software systems. They can be large or small.

Manuscripts should be unpublished works and not submitted elsewhere. Revised and enhanced versions of papers published in conference proceedings that have not appeared in archival journals are eligible for submission. All submissions will be reviewed according to the usual standards of scholarship and originality.

Papers should be in PDF format, following the JAR guidelines for authors, and be submitted using the usual Springer online submission system. When you reach the drop-down menu "Choose Article Type", you must select "S.I. : MILESTONE" to specify the special issue. To encourage a speedy review cycle, it will be expected that authors of submissions also serve as referees.

Important Dates:

**1 Apr 2017**: Submission Deadline

1 Oct 2017: Notification of accepted papers

1 Jan 2018: Final version

For more information, please see the special issue web site.

Guest Editors:

- Larry Paulson, University of Cambridge
- Jeremy Avigad, Carnegie Mellon University
- Gerwin Klein, Data61 and UNSW Australia

29 August to 1 September, Marseille, France

Computer Science Logic (CSL) is the annual conference of the European Association for Computer Science Logic (EACSL). The conference is intended for computer scientists whose research activities involve logic, as well as for logicians working on issues significant for computer science. CSL 2016 will be the 25th edition in the series.

CSL 2016 is hosted by Aix-Marseille Université and will take place in Marseille (France) from 29 August to 1 September (Satellite workshops 28 August and 2-3 September). Both the main conference and its satellite workshops will be held in the city center campus of the Faculty of Science. These events are jointly organized by the Laboratoire d'Informatique Fondamentale de Marseille and the Institut de Mathématiques de Marseille.

Invited speakers:

- Libor Barto (Charles University in Prague)
- Agata Ciabattoni (Technische Universität Wien)
- Anca Muscholl (Université Bordeaux & Institut Universitaire de France)
- Alexandra Silva (University College London)

The conference programme is available online.

Satellite workshops (28 August, 2 and 3 September):

- CRECOGI: Concurrent, Resourceful and Effectful Computation by Geometry of Interaction
- LCC: Logic and Computational Complexity
- PLRR: Parametricity, Logical Relations and Realizability
- QSLC: Quantitative Semantics of Logic and Computation

See the conference web site for more information.

5 to 7 January 2017, Kanpur, India

ALI, the Association for Logic in India, announces the seventh edition of its biennial International Conference on Logic and its Applications (ICLA), to be held at the Indian Institute of Technology Kanpur, from 5 to 7 January 2017. ICLA 2017 will be co-located with the Methods for Modalities Workshop to be held from 8 to 10 January 2017.

ICLA is a forum for bringing together researchers from a wide variety of fields in which formal logic plays a significant role, along with mathematicians, computer scientists, philosophers and logicians studying foundations of formal logic in itself. A special feature of this conference is the inclusion of studies in systems of logic in the Indian tradition, and historical research on logic.

The earlier events in this series featured many eminent logicians as invited speakers, and we are pleased to announce that this year's speakers will include

- Nicholas Asher, IRIT Toulouse
- Natasha Dobrinen, University of Denver
- Luke Ong, University of Oxford
- Richard Zach, University of Calgary

Authors are invited to submit papers presenting original and unpublished research in any area of logic and applications. Articles on mathematical and philosophical logic, computer science logic, foundations and philosophy of mathematics and the sciences, use of formal logic in areas of theoretical computer science and artificial intelligence, logic and linguistics, history of logic, Indian systems of logic, or on the relationship between logic and other branches of knowledge, are welcome.

Submissions must be in English and should provide sufficient detail to
allow the programme committee to assess the merits of the paper. The
submission may not exceed 12 pages in Springer LNCS style.
The deadline for submission is **2 September 2016**.
See the conference web site
for details.

15 to 17 January 2017, Paris, France

VMCAI provides a forum for researchers from the communities of Verification, Model Checking, and Abstract Interpretation, facilitating interaction, cross-fertilization, and advancement of hybrid methods that combine these and related areas.

The program of VMCAI 2017 will consist of refereed research papers as well as invited lectures and tutorials. Research contributions can report new results as well as experimental evaluations and comparisons of existing techniques. Topics include, but are not limited to: Program Verification, Model Checking, Abstract Interpretation, Abstract Domains, Program Synthesis, Static Analysis, Type Systems, Deductive Methods, Program Certification, Error Diagnosis, Program Transformation, and Hybrid and Cyber-physical Systems.

Submissions can address any programming paradigm, including concurrent, constraint, functional, imperative, logic, and object-oriented programming.

Submissions are restricted to 17 pages in Springer's LNCS format, not counting
references.
The abstract submission deadline is **18 September 2016**.
See the conference web site for
details.

16 and 17 January 2017, Paris, France

CPP is an international forum on theoretical and practical topics in all areas, including computer science, mathematics, and education, that consider certification as an essential paradigm for their work. Certification here means formal, mechanized verification of some sort, preferably with production of independently checkable certificates.

We welcome submissions in research areas related to formal certification of programs and proofs. The following is a suggested list of topics of interest to CPP. It is non-exhaustive and should be read as a guideline rather than a requirement.

- certified or certifying programming, compilation, linking, OS kernels, runtime systems, and security monitors;
- program logics, type systems, and semantics for certified code;
- certified decision procedures, mathematical libraries, and mathematical theorems;
- proof assistants and proof theory;
- new languages and tools for certified programming;
- program analysis, program verification, and proof-carrying code;
- certified secure protocols and transactions;
- certificates for decision procedures, including linear algebra, polynomial systems, SAT, SMT, and unification in algebras of interest;
- certificates for semi-decision procedures, including equality, first-order logic, and higher-order unification;
- certificates for program termination;
- logics for certifying concurrent and distributed programs;
- higher-order logics, logical systems, separation logics, and logics for security;
- teaching mathematics and computer science with proof assistants.

Submitted papers must be formatted following the ACM SIGPLAN Proceedings
format using **10 point** font for the main text (not the default 9 point
font).
Papers should not exceed 12 pages including all tables, figures, and
bibliography. Shorter papers are very welcome and will be given equal
consideration.

Abstracts must be submitted by **5 October 2016**. Whenever appropriate, the
submission should come along with a formal development, using whatever prover,
e.g., Agda, Coq, Dafny, Elf, HOL, HOL Light, Isabelle, Lean, Matita, Mizar,
NQTHM, PVS, Vampire, etc. Such formal developments must be submitted together
with the paper as auxiliary material, and will be taken into account during
the reviewing process. See the conference
web site for details.

8 to 11 March 2017, Hannover, Germany

Authors are invited to submit papers presenting original and unpublished research on theoretical aspects of computer science. Typical areas include (but are not limited to):

- algorithms and data structures, including: design of parallel, distributed, approximation, and randomized algorithms; analysis of algorithms and combinatorics of data structures; computational geometry, cryptography, algorithmic learning theory, algorithmic game theory;
- automata and formal languages, including: algebraic and categorical methods, coding theory;
- complexity and computability, including: computational and structural complexity theory, parameterized complexity, randomness in computation;
- logic in computer science, including: finite model theory, database theory, semantics, specification, verification, rewriting and deduction;
- current challenges, for example: natural computing, quantum computing, mobile and net computing.

The paper submission deadline is **25 September 2016**.
See the conference web site for details.

A postdoctoral research position working with Aaron Stump on dependently typed lambda encodings, in the Computer Science department at The University of Iowa, is available. The position is for two years, and the start date is flexible. Salary and benefits are competitive. The project is funded by the U.S. National Science Foundation.

The project is to develop and evaluate Cedille, a new implementation of a system called the Calculus of Dependent Lambda Eliminations (CDLE). CDLE is based on a version of the Calculus of Constructions (CoC) with implicit products (cf. Miquel's implicit CoC). This system is extended with new features supporting dependently typed programming with lambda encodings. These new features enable data to be defined to be their own inductions (so the type of 3 says that if you prove standard step and base cases for a property P, then you get a proof of P of 3), and allow large eliminations with such data. This system supports programming and proving with higher-order lambda encodings of datatypes with embedded functions; such datatypes are not allowed in Coq or Agda for consistency reasons. Nevertheless, CDLE is a sound logic under the Curry-Howard isomorphism, as higher-order lambda encodings (as possible already even in System F) do not jeopardize consistency. For more about this system, see the draft paper on Aaron Stump's web page.
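For readers unfamiliar with lambda encodings, the core idea can be shown in a few untyped lines of Python. This is only a hedged illustration of Church numerals; the typed story that CDLE adds, where the induction principle becomes part of the data's type, is exactly what the project develops.

```python
# Church numerals: the number n is encoded as the function that applies
# a step function n times to a base value -- the data is its own iterator.
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))

three = succ(succ(succ(zero)))

# "Using" 3 is iteration: supply a step case and a base case.
print(three(lambda x: x + 1)(0))  # applies +1 three times to 0, printing 3
```

In CDLE the type of `three` would itself promise that proving a step case and a base case for a property P yields a proof of P for 3.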

If you are interested in applying for the position, please send an updated CV including the names of two references to Aaron Stump by email. Please also include your preferred start date. A doctoral degree in Computer Science is required, and background in type theory, programming languages, compilers, or formal methods is strongly desired. Inquiries are also very welcome. Aaron Stump would be very happy to have applications from graduating students at institutions in Europe. Our Computational Logic Center here at U. Iowa, which he runs together with Cesare Tinelli, has many international connections, including a number of current postdocs who completed doctoral training at schools in France and the UK.

Andrei Popescu is hiring a postdoctoral associate for 14 months on a project having the following goals:

- Develop an expressive, compositional and quasi-automated framework for information flow security of web-based systems.
- Use this framework to produce the world's first feature-rich and security-verified social media platform, in collaboration with an industrial partner.

The position is based at Middlesex University London and offers a competitive salary. Please contact Andrei Popescu if you are interested, have a PhD, and have expertise in proof assistant technology. Experience with Isabelle/HOL is a plus. For informal inquiries about the position, Andrei Popescu can also be contacted while at ITP 2016 in Nancy.

For some background on the kind of verification contemplated, see the following papers and verified systems:

An 18-month position for post-doctoral research on
Statistical Verification of Distributed Programs Within SimGrid
is available at the Inria Nancy / LORIA research center within the Inria project lab
*HAC SPECIS: High-performance Application and Computers, Studying Performance and Correctness In Simulation*.

The objective of this project is to adapt and implement the technique of statistical model checking within SimGrid. Instead of attempting to exhaustively compute all possible system behaviors, executions are sampled according to the probability distributions associated with the target execution platform in order to estimate the probability that a property is satisfied in the system. Beyond the analysis of Boolean properties, the same technique can provide estimates for quantitative measures and thus provide a useful complement to the current functionality of deterministic performance evaluation through simulation.
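The sampling idea is easy to illustrate. The following toy sketch (an assumed example, unrelated to SimGrid's actual API) estimates the probability that a property holds by sampling executions of a tiny probabilistic model instead of exploring them exhaustively.

```python
import random

# Toy model (an assumption for illustration): a message is sent with up
# to 3 attempts, each succeeding independently with probability 0.7.
# Property under check: "the message is eventually delivered".
def one_execution(rng):
    return any(rng.random() < 0.7 for _ in range(3))

def estimate_probability(samples=100_000, seed=42):
    rng = random.Random(seed)
    hits = sum(one_execution(rng) for _ in range(samples))
    return hits / samples

# The exact probability is 1 - 0.3**3 = 0.973; the estimate converges
# to it as the number of sampled executions grows.
print(estimate_probability())
```

The same sampling loop can estimate quantitative measures (e.g., expected delivery time) by averaging a numeric observable over executions instead of a Boolean one.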

Successful candidates must have defended a PhD in computer science after 1 September 2014 or expect to defend their PhD before taking up the position. Expected qualifications are:

- solid knowledge of formal verification techniques, in particular model checking and/or runtime verification,
- solid programming experience, in particular with system programming in C,
- basic knowledge of probability theory and statistics.

The post-doctoral researcher will be hosted by the VeriDis research group in Nancy and will interact with the teams MExICo of Inria Saclay and MYRIADS of Inria Rennes, partners in the Inria project lab HAC-SPECIS.

Contacts: Marie Duflot-Kremer
and Stephan Merz.

Duration: 18 months

Starting date: between 1 November 2016 and 1 January 2017

Salary: 2620 euros gross monthly (about 2115 euros net), medical insurance included

Applications should be sent to the addresses indicated above as two PDF files (one for the publications, the other for the remaining documents) and should include:

- a motivation letter describing your scientific and career plans;
- a CV describing your research activities (max. 2 pages);
- a short description of your best contributions (max. 1 page for max. 3 contributions, including theoretical research, implementation, or industry transfer);
- your two best publications;
- if you have not yet defended, the list of expected members of your PhD committee and the expected date of the defense.

Additionally, at least one recommendation letter from your PhD advisor(s), and up to two additional letters of recommendation, should be sent directly by their authors to the email addresses indicated above.

Informal enquiries concerning the position are welcome.

The University of Barcelona offers two PhD positions in collaboration with the Catalan industrial sector. The industrial component of the PhD revolves around the development and verification of legal software in Coq. This work will be complemented with the formalization of parts of logic and mathematics. Since the group in which this project will be embedded works on ordinal analysis via modal logic and reflection principles, we expect collaboration with the main group to arise, but we are open to alternative proposals.

We offer two three-year positions in the PhD program in Mathematics and Computer Science, which is located in the very center of Barcelona. Apart from the usual PhD trajectory, the candidates will participate in cutting-edge formalization work in an industrial setting. Travel allowances can amount to over 2200 EUR per year, and the gross salary varies between 18K and 22K EUR per year, depending on how much financial support the project receives from the Catalan authorities. The starting date should fall between November 2016 and February 2017.

We are looking for candidates with a background in theoretical computer science and/or mathematical logic. It is a strict requirement to have finished a relevant Master's with an average score of at least 6.5 (out of 10) over all courses taken in both the Master's and undergraduate studies. Apart from the required knowledge of proof assistants such as Coq, and of OCaml, other IT skills are recommended, especially knowledge of or experience with other functional programming languages. Previous commercial work experience is a plus, and working proficiency in English is a must.

Interested candidates should file their pre-application through the official application portal. In order to distinguish the two PhD student positions, we have tentatively tagged them `The Syntax Road' and `The Semantics Road', respectively. It is true that one part of the formalization of our mathematics project involves modal calculi whereas another part will emphasize relational (Kripke) semantics, but choosing either the Syntax or the Semantics road at this stage does not imply any irreversible decision.

Direct links to the pre-application: Syntax Road, Semantics Road.

Upon filing this pre-application, you should send additional information to
Joost J. Joosten, no later than **10 September
2016**. The additional information should at least contain:

- a CV;
- a motivation letter;
- a transcript of academic results from the relevant Master's and undergraduate studies;
- e-mail addresses of three references whom we might contact if needed.

Further information about the positions can be obtained by writing an e-mail to Joost J. Joosten.

Our community has lost one of its founders, as Maria Paola Bonacina remarked
above. Earlier this month, I traveled across Germany
with the two volumes of the *Handbook of Automated Reasoning* in my
bag. Once in my Saarbrücken office, I took them out, glanced
quickly in them, then proceeded
to read my email. One message was from Alan Bundy, who informed me of **Alan
Robinson**'s death. Together with Randy Goebel, he has written a touching
in memoriam.

* * *

Facebook is generally a waste of time—thankfully, a small waste of my
time, because I'm not addicted. But once in a while, thanks to
colleagues, I discover something important there, such as the
200 terabyte proof
by **Marijn J. H. Heule**, **Oliver Kullmann**, and **Victor W. Marek**,
which was discussed on the Nature
web site ("A computer cracks the Boolean Pythagorean triples problem—but is it really maths?"). Congratulations! Marijn Heule, who is affiliated with the University of Texas at Austin,
gave a talk in Saarbrücken with the title
"Everything's Bigger in Texas." Indeed!
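For readers curious about the problem behind the 200 terabyte proof: the question is whether {1, ..., n} can be split into two parts such that neither part contains a Pythagorean triple a² + b² = c². Heule, Kullmann, and Marek showed the answer is "no" from n = 7825 onward. The following brute-force sketch (my own illustration, not the authors' SAT encoding, which required a highly engineered solver) demonstrates the question for tiny n:

```python
# Boolean Pythagorean triples problem, brute-force illustration.
# Feasible only for tiny n; the first non-colorable n is 7825.
from itertools import product

def pythagorean_triples(n):
    """All triples (a, b, c) with a <= b <= c <= n and a^2 + b^2 = c^2."""
    return [(a, b, c)
            for a in range(1, n + 1)
            for b in range(a, n + 1)
            for c in range(b, n + 1)
            if a * a + b * b == c * c]

def two_colorable(n):
    """Return a 2-coloring of 1..n with no monochromatic triple, or None."""
    triples = pythagorean_triples(n)
    for bits in product((0, 1), repeat=n):
        color = {i + 1: bits[i] for i in range(n)}
        if all(len({color[a], color[b], color[c]}) > 1
               for a, b, c in triples):
            return color
    return None

print(two_colorable(15) is not None)  # True: small n are easily colorable
```

Scaling this naive search to n = 7825 is hopeless (2^7825 colorings), which is exactly why the authors' cube-and-conquer SAT approach, and its enormous proof certificate, made the news.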

* * *

In his column, **Larry Wos** writes:

In 1963, I was introduced to the field then called automatic theorem proving. Of course, it was far, far from automatic. I pressed not long after to have the name changed to automated theorem proving, because it seemed to me to be more accurate. I did not succeed.

Now that's an interesting notion. I have been trying for years to convince
my colleagues to write *automat*ic *theorem provers*
for ATPs, and I doubt I've had any success. Perhaps Larry was more successful
than he thinks or is willing to admit. Or perhaps today's researchers are not
attuned to the nuance between automat*ic* and automat*ed*.

There is a consensus in our community around *automat*ed *reasoning*.

This having been said, I am certainly among those who think that we ought to
be careful before putting labels on provers. Automatic provers are often
employed in an interactive way—Larry Wos is a virtuoso in the interactive
use of Otter—and interactive provers can provide strong automation, too.
Some researchers have been promoting the term *auto-active* as a portmanteau
word from *automatic* and *interactive*, but it's hard to see
the latter word in the resulting conglomerate; and the
portmanteau word doesn't seem to carry a different meaning than
*semi-automatic* or *interactive*, depending on whether one sees
the glass as half full or half empty.

In any case, we surely all want the reasoning power of automatic provers and the flexibility and trustworthiness of interactive provers, and more and more systems or combinations of systems are attempting to provide just that. Two of the latest, most promising examples are the Lean prover and the cvc4coq integration. The architectures can vary greatly, but the ultimate goal is the same.