ASSOCIATION FOR AUTOMATED REASONING

NEWSLETTER No. 89, October 2010

From the AAR President, Larry Wos...

I was surprised and indeed delighted by the response to the challenge, posed in an earlier newsletter, concerning the Rezus-style formula. That challenge asked one to find a proof of the given Rezus formula with an automated reasoning program. Among others, G. Sutcliffe and J. Halleck succeeded nicely. Further, in the context of research, Halleck formulated a new approach to attacking such problems. Moreover, the challenge sparked a sequence of e-mail messages, still continuing, that brought into play numerous comments from Rezus himself. In other words, our newsletter caused much profitable interchange among logicians and computer scientists.

Perhaps the following two types of challenge will have a similar effect.

The first challenge focuses on proof finding with an automated reasoning program. For total clarity (in response to some of the recent e-mail messages), the key is automation, even if a proof exists in the literature. The underlying paradigm, especially for those new to the field, asserts that progress of a general nature can result from (1) finding a theorem that is currently outside the power offered by the program in use and (2) devising a strategy or approach to find a proof. For real progress to occur, however, the new strategy or approach must not be tailored to the specific theorem; instead, it must offer generality.

The theorem to be proved in this first challenge comes from classical propositional calculus. Where the function i denotes implication and the function n denotes negation, the following three clauses provide a well-known axiom system.

%  Luka 1 2 3.
P(i(i(x,y),i(i(y,z),i(x,z)))).
P(i(i(n(x),x),x)).
P(i(x,i(n(x),y))).

The challenge is to prove from the three clauses the following formula expressed in clause notation, a formula that is a member of yet another three-axiom system for this logic.

P(i(i(x,i(n(y),z)),i(x,i(i(u,z),i(i(y,u),z))))).  %  thesis 60

If you would enjoy first attempting a slightly less difficult theorem with your program, the following formula, known as Frege, suffices.

P(i(i(x,i(y,z)),i(i(x,y),i(x,z)))).  % thesis 35

The Frege formula is also a member of an axiom system for classical propositional calculus.
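
For readers who wish to attack the challenge directly, the following input sketch shows one way to pose thesis 60 to OTTER: hyperresolution with a condensed detachment nucleus, the three axioms in the set of support, and the denial of the goal (with the constants a, b, c, and d in place of its variables) among the usable clauses. The sketch is minimal and illustrative rather than definitive; in particular, the max_weight value is only a starting guess, and attempting Frege instead requires only swapping in the denial of thesis 35.

set(hyper_res).
assign(max_weight, 40).  %  illustrative only; tune or omit

list(usable).
-P(i(x,y)) | -P(x) | P(y).  %  condensed detachment
-P(i(i(a,i(n(b),c)),i(a,i(i(d,c),i(i(b,d),c))))).  %  denial of thesis 60
end_of_list.

list(sos).
P(i(i(x,y),i(i(y,z),i(x,z)))).
P(i(i(n(x),x),x)).
P(i(x,i(n(x),y))).
end_of_list.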


The second promised challenge focuses on a programming project. The automated reasoning programs with which I am familiar do not, during the run, modify themselves. The challenge, then, is to provide such a capability to a reasoning program (of course, I would especially welcome its implementation in OTTER).

As one example, I suggest an enhancement that extends the spirit of the set of support strategy to level 2 and higher. With the focus on level-2 clauses, when a clause of level 2 is deduced, it is adjoined to the set of support list if and only if at least one of its parents is a member of the initial set of support. If none of its parents is a member of the initial set of support and if the clause is to be retained, it is immediately placed on list(usable) (for OTTER), and the newly retained clause is not allowed to initiate paths of reasoning.

If the proposed enhancement is implemented in the reasoning program and applied to level 3 and beyond, then clearly the levels will not grow so dramatically.
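
In outline, the proposed retention test can be sketched as follows. This is a Python sketch of the idea, not OTTER's actual code; the list names merely mirror OTTER's lists.

# A sketch of the proposed restriction on newly retained clauses:
# a clause is adjoined to the set of support, and hence may later be
# chosen as a given clause, only if at least one of its parents is a
# member of the initial set of support; otherwise it is placed on
# usable, where it can complete inferences but never initiate them.

def place_retained_clause(clause, parents, initial_sos, sos, usable):
    if any(parent in initial_sos for parent in parents):
        sos.append(clause)     # may initiate paths of reasoning
    else:
        usable.append(clause)  # retained, but never the given clause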

One of the side effects of this enhancement is that, in a practical sense, it increases the usability of a breadth-first (level-saturation) approach; indeed, a breadth-first search is far more usable if the size of the levels grows slowly.

Of course, the proposed enhancement also increases the likelihood that all paths to the goal are blocked.

If you would enjoy reading a fuller account of possible enhancements, you might visit my website, automatedreasoning.net, and the latest notebook found there titled Future Enhancements for Automated Reasoning Programs.

Announcement of CADE Elections

Wolfgang Ahrendt
(Secretary of AAR and CADE)
E-mail: ahrendt@chalmers.se

An election of CADE trustees will be held soon, and all AAR members will receive a ballot.

The terms of Maria Paola Bonacina, Amy Felty, and Geoff Sutcliffe are expiring. Moreover, Andrei Voronkov, whose term would have expired as well, resigned in August 2010.

Thus, there are four positions of elected trustees to fill. In addition, the ex officio position held by Jürgen Giesl as IJCAR 2010 program chair passes to Viorica Sofronie-Stokkermans and Nikolaj Bjørner, the CADE-23 program chairs.

The following candidates (listed in alphabetical order) were nominated and their statements are below:

Franz Baader nominated by Cesare Tinelli and Peter Baumgartner
Clark Barrett nominated by Christoph Weidenbach and Uwe Waldmann
Amy Felty nominated by Maria Paola Bonacina and Renate Schmidt
Jürgen Giesl nominated by Franz Baader and Reiner Hähnle
Konstantin Korovin nominated by Andrei Voronkov and Jürgen Giesl
Larry Paulson nominated by Christoph Benzmüller and Alan Bundy
Brigitte Pientka nominated by Andreas Abel and Amy Felty

Nominee Statement of Franz Baader

CADE is the major forum for discussing all aspects of automated reasoning research (theory, implementation, and applications). While theoretical work and work on implementations are very well represented at CADE (through regular papers, system descriptions, and the system competition), we must continue our work on attracting more papers on applications, which are now often published at conferences related to the application area (like CAV, KR).

In order to ensure the role of CADE as the flagship of automated reasoning, we must try both to keep the high scientific standards of the conference and to attract more automated reasoning-related research from other areas (like knowledge representation, constraint solving, verification, and computer algebra). Another important point is to avoid a fragmentation of the field into many small and specialized subcommunities with their own small conferences and workshops. This does not mean that I am against having these smaller events as well. But there should be one big event (every 2-3 years) where the whole automated reasoning community meets and with which it identifies. For this reason, while I was CADE president, I worked hard to put the IJCAR series on a stable basis (e.g., by establishing a permanent IJCAR steering committee, participating in the formulation of the "IJCAR rules") and to ensure participation of IJCAR within the FLoC conferences (e.g., trying to make their schedules compatible: IJCAR every two years, alternating with CADE; FLoC every four years). Unfortunately, there are now again discussions within FLoC about changing this schedule, and the CADE trustees should try to have a voice in this discussion to ensure that CADE's and IJCAR's interests are not ignored by FLoC.

Nominee Statement of Clark Barrett

This is a very exciting time for automated deduction and the CADE conference! With the ongoing advances both in theory and in computing power, automated deduction is emerging from a niche field to become a powerful enabling technology for a wide variety of applications. As a community, I believe we have a message to get out: automated deduction has arrived! It can solve real problems — and we've only just scratched the surface in terms of finding applications. As a trustee I would like part of my focus to be on spreading this message by looking for new connections and applications and finding ways to involve people working in these areas in CADE. At the same time, CADE is already a thriving community with many fine traditions and great intellectual resources. The other part of my focus would be simply to reinforce its current strengths, to make sure the conference continues to thrive, and to ensure it provides a welcoming environment for bright minds new and old.

Nominee Statement of Amy Felty

There is currently a lot of diversity in our field, which helps make for interesting and fruitful conferences, including both CADE and IJCAR. The IJCAR initiative has been quite successful, and I strongly support its continuation, as well as continuing participation in FLoC, which provides an excellent way to bring together many people in related but diverse areas. I also strongly support CADE's role in supporting student participation, particularly through the Bledsoe Travel Award. Another strong point of CADE/IJCAR is the associated workshops, which provide the opportunity to bring people together to discuss and learn about specialized areas.

Over the years, I have worked in many areas of automated reasoning, including logical foundations, techniques for automating aspects of reasoning with higher-order logic, and applications in security and privacy of personal data. I publish and participate frequently in CADE and its affiliated workshops. For example, most recently, I was program co-chair of LFMTP 2009 (Logical Frameworks and Metalanguages: Theory and Practice), affiliated with CADE-22. In 1996, I was the Conference Chair of CADE when it participated in the first FLoC. I have served as an elected Trustee for the last three years. Before that I served as Secretary of CADE and Secretary-Treasurer of AAR (Association for Automated Reasoning). I would like to continue my work with CADE and work toward even more visibility and growth.

Nominee Statement of Jürgen Giesl

My research is concerned with different aspects of automated deduction. Therefore, I have been attending and publishing papers at CADE since 1994.

I was one of the PC-chairs of IJCAR 2010, PC chair of RTA 2005, steering committee chair of RTA (2005-2007), and I served many times on the PCs of CADE, IJCAR, RTA, LPAR, and several other conferences in the field.

As a trustee, my goal would be to keep the scope of CADE as broad as possible. CADE should be open to any aspect of automated deduction and it should cover everything from theoretical results and practical contributions up to applications of automated reasoning.

Nominee Statement of Konstantin Korovin

My research focuses on theoretical, implementation and application aspects of automated reasoning. I have been working on ordering constraints, orientability, AC-orderings and, more recently, on instantiation-based methods and the integration of theory reasoning into first-order logic. I have been developing and implementing iProver — an instantiation-based theorem prover for first-order logic, which won the EPR division at the last three CASC competitions. I gave an invited talk on instantiation-based reasoning at CADE-22.

CADE is the leading conference in automated reasoning. We have seen a number of success stories at CADE, including first-order and higher-order theorem proving, satisfiability modulo theories, software and hardware verification, and reasoning with description and modal logics. This shows that CADE excels in combining diverse research communities and in fostering rapidly growing and exciting areas. I believe this tradition should be encouraged to continue the success of CADE. I strongly support the organisation of workshops co-located with CADE, as these attract research from bordering areas that is usually published in conferences such as CAV, RTA, LICS, ICALP, TPHOLs and IJCAI.

CADE maintains a good balance between theoretical work, implementation and applications. I believe CADE can further strengthen interactions between the research community and real-world applications. I thoroughly support the development of new systems and the organisation of competitions such as CASC and SMT-COMP, which demonstrate the capabilities of state-of-the-art systems to industrial users and, conversely, propagate problems from real-world applications to the research community.

As a CADE trustee, I would be very keen to support and promote high-quality theoretical research, the development of reasoning systems, and applications of automated reasoning.

Nominee Statement of Larry Paulson

In my view, “Automated deduction” refers to all forms of mechanically assisted theorem proving, whether interactive or fully automatic, first-order or higher-order. CADE should try to attract the best papers across the entire field, resisting ghettoisation. In the same spirit, CADE should regularly participate in IJCAR, to facilitate interactions among researchers in a range of related disciplines and application areas.

My own research has primarily focused on higher-order interactive proof assistants, but I have regularly incorporated techniques from tableau theorem proving and have recently done a lot of work involving the application of resolution theorem provers. I have a wealth of editorial experience, having served on innumerable program committees for the past 20 years or so, and I have long served as an editor of the Journal of Automated Reasoning. Recently I was co-chair of ITP-2010, a new conference amalgamating TPHOLs and the ACL2 workshop.

Nominee Statement of Brigitte Pientka

In my view, CADE is the major venue for all aspects of automated reasoning research, and I regard it as the driving force to promote our field. While we should continue our emphasis on theoretical work and system building, I believe we should also reach out and welcome applications of our work in areas such as computer algebra, constraint solving, mechanization of programming languages, and reasoning about programs. Joint events such as IJCAR and co-location with FLoC are of vital importance. They give our area increased visibility and demonstrate the impact of our results to the computer science community at large.

My own research interest is in the theory and practice of logical frameworks. This includes designing and investigating type theories and logics, building efficient verification and programming tools, and demonstrating their effectiveness using realistic applications. Since 2001, I have regularly published at CADE/IJCAR, ICLP, TPHOLs and ITP. I have also served regularly on the program committees of these conferences.

Over the last few years, I have been actively involved in promoting our field. I was publicity chair for CADE 2005, and I currently serve on the steering committee of LFMTP (Logical Frameworks and Meta-languages: Theory and Practice). In 2009, I was conference chair of CADE. As a CADE trustee, I intend to continue working towards ensuring that automated reasoning receives the attention it deserves.

Call for volunteers for AAR and CADE Secretary

Maria Paola Bonacina
(CADE President)

After three years of excellent service, AAR and CADE Secretary Wolfgang Ahrendt has informed the CADE Board of Trustees and the AAR Board of Directors that he wishes to step down. He deserves heartfelt thanks from our whole field! Thus, the two corporations are seeking a new Secretary. A preliminary announcement was made at the Business Meetings of CADE and IJCAR during FLoC 2010 in Edinburgh.

The role of Secretary is an important one, and most precious to our organizations. Former secretaries include Bill McCune, Bob Veroff, myself, and Amy Felty. The two roles of Secretary of AAR and Secretary of CADE have been filled by the same person since 1999, because AAR and CADE are strongly interconnected: CADE is a subcorporation of AAR, and the members of AAR elect the board of trustees of CADE. Thus, this call is for candidates to be the Secretary of both AAR and CADE.

The Secretary is a full member of both the AAR Board of Directors and the CADE Board of Trustees. The Secretary shares with the other members of these boards the responsibility, and opportunity, to influence and shape the core community work and decisions. In addition, the Secretary has a number of distinctive duties of his or her own.

Clearly, this is a very important job, and one that can be very rewarding: the new Secretary will inherit all necessary documents and historical records from the outgoing Secretary and will be welcomed by the two boards. As CADE President, I'll be happy to welcome the new Secretary and to help towards a smooth transition.

Interested applicants should send an e-mail message with a brief statement, highlighting motivation and previous organizational experience, if any, and the URL of their web page, to

president@cadeinc.org
secretary@cadeinc.org

no later than November 7th, 2010. The Secretary will be appointed by the two boards as soon as possible and all applicants will be informed of the outcome.

Response to Dr. Larry Wos' president's column in the June 2010 AAR newsletter

John Halleck

This copy has been cut down, for space reasons, from a longer reply that has been placed on the web here.

I have recently been playing with J. J. Zeman's semisubstitutivity, for entertainment, and had just gone on record as saying that it was an interesting topic but probably didn't have any practical value in automated theorem proving. (Terminology aside: I cannot pronounce “semisubstitutivity” without a lot of work, and therefore have trouble typing it consistently. I prefer “semisubstitivity”.)

But Larry Wos' president's column in the June newsletter gave a challenge theorem that could not have been more tailor-made for the techniques I was playing with.

In the column, Larry posed a two-part challenge, one part being to generate a computer proof that the Rezus single axiom for BCIW is a theorem of BCIW. [The problem being that the axiom is huge, and that causes problems for many automated theorem provers.] The other part of the challenge, in typical Larry Wos style, was to show a general method of proving such complex expressions from simple axioms.

I think it should be noted that Dr. Bob Veroff, Dr. Larry Wos, and probably others have already published computer-generated proofs that the Rezus axiom is a theorem of BCIW, so my just having such a proof is nothing particularly new or interesting. And, in fact, hand-generated proofs existed long before the computer-generated ones.

My technique isn't the general method asked for, but it is a novel (and extremely effective in this case) technique for attacking large expressions. The new technique produces a semisubstitutivity proof of the expression given, and that proof can be easily (mechanically) converted to a more traditional proof.

Just using semisubstitivity as the inference rule isn't actually particularly interesting. Semisubstitivity has much worse fanout at any given step than standard Condensed Detachment, or a typical Modus Ponens and Uniform Substitution proof. Without some trick to reduce the search space, it would be a much worse approach than the traditional approaches.

So, I hear you think, “What is the trick here?”

I'm afraid that the trick does depend on semisubstitutivity, which is probably unfamiliar to most readers of this newsletter. For the long version, I refer you to my web page on the topic.

The short version is that if one had something of the form (p > q) and something of the form (q > p), then in most reasonable logics one would have license to replace p by q, or q by p, anywhere in any theorem [Substitution of equivalents] without compromising validity. A subtly obvious question is whether having just the conditional (p > q), instead of the bi-conditional equivalence, buys you anything at all. The surprising answer is yes. For logics with just implication and no other operators, what it buys you is particularly clean. In BCIW, if you have (p > q), there are positions in any expression where p (if it occurs there) can be replaced by q without compromising validity. This surprising fact is joined by the less obvious, but more surprising, fact that in all other positions in the expression q (if it occurs there) can be replaced by p. [Proofs at the above link.]

So semisubstitutivity licenses you to do strategic surgery inside of theorems, which makes dealing with large expressions somewhat more tractable. But the problem of doing search with it is as bad as, or worse than, with traditional methods.

OK, we now have enough tedious background to explain the actual trick.

Given a theorem of the form (p > q) and an alleged theorem that you are working on, semisubstitutivity licenses either (depending on the position in the theorem) replacement of p by q, or of q by p, without compromising validity. But what can you say about replacements in the opposite direction to those licensed replacements, other than that they do not necessarily preserve validity? What can be said is that those replacements can be undone by a validity-preserving licensed replacement. So, and here is the trick, any sequence consisting only of replacements in the opposite of the licensed directions, from a given expression to a known valid theorem (such as an axiom), can be run in the reverse (licensed) direction to produce a validity-preserving proof from the axiom to the original starting point. And reducing the given alleged theorem to an axiom can be very, very simple and clean.

The cleanest example I'm aware of is the Rezus theorem that Larry offered. [I'm going to use Polish notation here because infix notation for expressions of this size is totally unreadable (at least to me).] The alleged theorem we want to prove is (in Polish notation):

CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10CCCv11v12CCv12v13Cv11v13v14v14v15v15CCv16CCCv17v17CCv18v18CCv19v19CCv20v20Cv16v21v21v22v22

The axioms to prove it from are:

B: CCpqCCrpCrq
C: CCpCqrCqCpr
I: Cpp
W: CCpCpqCpq

And to make the explanation readable, I'll break the expression up with the following annotated version, together with its tree diagram. The annotation is:

  Symbol #:  direction licensed, position, graphic

Positions are given as strings of A (for Antecedent) and C (for Consequent). So, for example, CCA means the Consequent of the Consequent of the Antecedent.

(I'm only including a small part of the full tree here, as an example for the first step. The full labeled tree for this expression is here)

# : sym   direction position  diagram.

...
                                |        |     |        |
84: C     -     CCCACACCA       |        |     |        .- C
                                |        |     |           |
85: C     +    ACCCACACCA       |        |     |           .- C
                                |        |     |           |  |
86: v20   -   AACCCACACCA       |        |     |           |  .- v20
                                |        |     |           |  |
87: v20   +   CACCCACACCA       |        |     |           |  .- v20
                                |        |     |           |
88: C     -    CCCCACACCA       |        |     |           .- C
                                |        |     |              |
89: v16   +   ACCCCACACCA       |        |     |              .- v16
                                |        |     |              |
90: v21   -   CCCCCACACCA       |        |     |              .- v21
                                |        |     |
...
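
To make the position and direction bookkeeping concrete, here is a small Python sketch (my own illustration, not the program that produced the table above). It parses a Polish notation formula over the single binary connective C, with single-character variable names, and lists each subterm with its position string and direction sign; the sign flips at each step into an antecedent, which on my reading corresponds to the +/- direction column in the diagram.

def parse(tokens, i=0):
    # Returns (subtree, next_index); a subtree is a leaf or ('C', l, r).
    if tokens[i] == 'C':
        left, j = parse(tokens, i + 1)
        right, k = parse(tokens, j)
        return ('C', left, right), k
    return tokens[i], i + 1

def walk(tree, pos='', sign='+', out=None):
    # Position strings read right to left from the root, so a step into
    # the antecedent prepends 'A' (flipping the sign) and a step into
    # the consequent prepends 'C' (keeping it).
    if out is None:
        out = []
    out.append((sign, pos or '(root)', tree))
    if isinstance(tree, tuple):
        walk(tree[1], 'A' + pos, '-' if sign == '+' else '+', out)
        walk(tree[2], 'C' + pos, sign, out)
    return out

tree, _ = parse(list('CCpqCCqrCpr'))  # B', as a small test case
for sign, pos, subterm in walk(tree):
    print(sign, pos)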

In order to avoid making this a truly ugly example, I'll introduce some template lemmas to make this short and sweet. And I'll work from the right to the left so that any unprocessed part of the expression is still numbered as above, and I don't have to reconstruct the diagram above after each replacement.

The template lemmas are all of the form

  (( {axiom or theorem} > s) > s)

Each is trivially provable from the axioms (in 4 or fewer steps):

B' simplify: CCCCpqCCqrCprss  # Note that this is not based on B, it is
                              # based on B'.
I  simplify: CCCppqq
W  simplify: CCCCpCpqCpqrr
MP simplify: CCCpCCpqqrr
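
The reduction these templates drive, namely collapsing any subterm of the form C(T, s) to s whenever T is an instance of a known theorem, can also be sketched in a few lines of Python, building on the parser given earlier. Again this is my own illustration, not Halleck's code; a real implementation would also record the position and substitution used at each step so that the licensed proof can be read back in reverse.

def instance_of(tree, schema, env=None):
    # True if tree is a substitution instance of schema, all of whose
    # leaves are treated as schematic variables.
    env = {} if env is None else env
    if isinstance(schema, tuple):
        return (isinstance(tree, tuple)
                and instance_of(tree[1], schema[1], env)
                and instance_of(tree[2], schema[2], env))
    if schema in env:
        return env[schema] == tree
    env[schema] = tree
    return True

def reduce_once(tree, theorems):
    # One unlicensed step, trying the consequent before the antecedent
    # to mimic the right-to-left scan; returns (new_tree, changed).
    if isinstance(tree, tuple):
        _, t, s = tree
        new_s, changed = reduce_once(s, theorems)
        if changed:
            return ('C', t, new_s), True
        new_t, changed = reduce_once(t, theorems)
        if changed:
            return ('C', new_t, s), True
        if any(instance_of(t, th) for th in theorems):
            return s, True  # C(theorem, s) collapses to s
    return tree, False

# The four templates correspond to the theorems I, B', W, and CpCCpqq:
# theorems = [parse(list(f))[0]
#             for f in ('Cpp', 'CCpqCCqrCpr', 'CCpCpqCpq', 'CpCCpqq')]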

Rezus' single axiom:

CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10CCCv11v12CCv12v13Cv11v13v14v14v15v15CCv16CCCv17v17CCv18v18CCv19v19CCv20v20Cv16v21v21v22v22

Scanning from the right to the left in the expression, the first place we match a template is at symbol 84, where we match I simplify. In this position (CCCACACCA) the reverse replacement (q replaced by p) is licensed, so the unlicensed (p replaced by q) will be used: [Since, oddly enough, all replacements in this proof occur at negative positions, I'm not going to keep repeating this part]

84: CCCppqq, p/v20, q/Cv16v21, applied at CCCACACCA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10CCCv11v12CCv12v13Cv11v13v14v14v15v15CCv16CCCv17v17CCv18v18CCv19v19Cv16v21v21v22v22

Continuing our right to left scan, the next place we match a template is at symbol 80, where we match I simplify

80: CCCppqq, p/v19, q/Cv16v21, applied at CCACACCA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10CCCv11v12CCv12v13Cv11v13v14v14v15v15CCv16CCCv17v17CCv18v18Cv16v21v21v22v22

And then we match at symbol 76 with I simplify

76: CCCppqq, p/v18, q/Cv16v21, applied at CACACCA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10CCCv11v12CCv12v13Cv11v13v14v14v15v15CCv16CCCv17v17Cv16v21v21v22v22

At symbol 72 we match I simplify

72: CCCppqq, p/v17, q/Cv16v21, applied at ACACCA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10CCCv11v12CCv12v13Cv11v13v14v14v15v15CCv16CCv16v21v21v22v22

At symbol 68 we match W simplify

68: CCCCpCpqCpqrr, p/v16, q/v21, r/v22, applied at CCA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10CCCv11v12CCv12v13Cv11v13v14v14v15v15v22v22

At symbol 52 we match B' simplify

52: CCCCpqCCqrCprss, p/v11, q/v12, r/v13, s/v14, applied at CAACAACA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCCCCv9Cv9v10Cv9v10v14v14v15v15v22v22

At symbol 42 we match W simplify

42: CCCCpCpqCpqrr, p/v9, q/v10, r/v14, applied at AACAACA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8CCv14v14v15v15v22v22

At symbol 40 we match I simplify

40: CCCppqq, p/v14, q/v15, applied at CAACA
CCCxCCCyyCCzzCCuuCCvvCxwwCCCCCv6v7CCv7v8Cv6v8v15v15v22v22

At symbol 28 we match B' simplify

28: CCCCpqCCqrCprss, p/v6, q/v7, r/v8, s/v15, applied at AACA
CCCxCCCyyCCzzCCuuCCvvCxwwCCv15v15v22v22

At symbol 26 we match I simplify

26: CCCppqq, p/v15, q/v22, applied at CA
CCCxCCCyyCCzzCCuuCCvvCxwwv22v22

At symbol 18 we match I simplify

18: CCCppqq, p/v, q/Cxw, applied at CCCACAA
CCCxCCCyyCCzzCCuuCxwwv22v22

At symbol 14 we match I simplify

14: CCCppqq, p/u, q/Cxw, applied at CCACAA
CCCxCCCyyCCzzCxwwv22v22

At symbol 10 we match I simplify

10: CCCppqq, p/z, q/Cxw, applied at CACAA
CCCxCCCyyCxwwv22v22

At symbol 6 we match I simplify

6: CCCppqq, p/y, q/Cxw, applied at ACAA
CCCxCCxwwv22v22

At symbol 3 we match MP simplify

3: CCCpCCpqqrr, p/x, q/w, r/v22, applied at AA
Cv22v22

And this is just the axiom I.

1: Cv22v22, v22/p

Since the reduction, using only the opposites of the licensed directions, has produced an axiom, we can now read off the proper (licensed direction) semisubstitutivity proof of the theorem from the I axiom. (Just do the steps in the reverse order, reversing the replacements.)

The only obvious hand wave I've done here was in using the template lemmas and in the choice of which template lemmas to use.

Few semisubstitutivity proofs are likely to be this clean, but I think this example does show that the technique has promise.

A semisubstitutivity proof can always be straightforwardly converted to a standard proof. (As I've done on the web page.)

Addendum:
Geoff Sutcliffe has reported that Vampire can solve this problem. The problem will be added to a forthcoming release of the TPTP library.

Interview with Alan Robinson about the Beginnings of Automatic Deduction

On the occasion of his recent 80th birthday, Alan Robinson was interviewed by Maarten van Emden about his role in the beginnings of Automatic Deduction. Alan feels that the AAR Newsletter would be the ideal venue for the interview, and we are more than happy to publish it!

van Emden
From what I picked up from gossip in various places, your undergraduate work was in classics, and your 1956 PhD thesis was on the 18th-century philosopher David Hume. Immediately after your defence you went to work as an Operations Research programmer in industry. Next thing I know is that only eight years after that the Journal of the ACM published “A Machine-Oriented Logic ...”. Can you tell us something about these extraordinary eight years?

Robinson
In 1960 I left industry for a postdoctoral position in Pittsburgh, and there I read the newly-published paper by Martin Davis and Hilary Putnam.

van Emden
That doesn't help! Instead of eight extraordinary years, we have four anni mirabiles ...

Robinson
Yes. Quite. It does look a bit odd. I need to go back a few years before 1960 and explain how I came to be in Pittsburgh in 1960. Here is a bit of personal history leading up to that. My earliest exposure to modern mathematical logic was actually in the 1952/3 academic year. This was my first year in the USA. I spent it (including the extra term in the summer of 1953) at the University of Oregon, getting a Master's degree in philosophy.

In Oregon I concentrated on three topics: modern symbolic logic, the philosophy of science, and epistemology. My teacher in all of these was Arthur Pap. Arthur was an enormously learned Swiss-American logical positivist who at that time was the most militant, brilliant and prolific of the leaders (Herbert Feigl, Rudolf Carnap, Carl Hempel, F. Waismann, A. J. Ayer, and a few others) of the logical empiricist movement pioneered by Moritz Schlick and the Wiener Kreis.

van Emden
So from classics in Cambridge you made the transition not just to philosophy, but to a very modern program leaning towards analytic philosophy.

Robinson
The modern does have roots in the ancient. From 1949 to 1952 I was an undergraduate. For Part 2 of my degree (in Classics) at Cambridge University I chose the topic of Ancient Philosophy. One of its subtopics was the ancient logic of Aristotle and the Stoics. This was my first encounter with the ideas of formal logic.

In Oregon I first studied the predicate calculus in Fall semester of 1952, in Pap's one-semester course. He taught the course from a now-forgotten 1949 introductory textbook by J.C. Cooley. He also had us read papers on logic and foundations of mathematics. For this we used a book entitled “Readings in Philosophical Analysis”, also published in 1949, edited by Feigl and Sellars. In this splendid book were reprinted papers by Quine, Tarski, Frege, Reichenbach, C.I. Lewis, Hempel, Nagel, Carnap, and Russell. We had a short seminar from Pap on the earlier parts of Principia Mathematica. Pap also introduced me to Gödel's Incompleteness Theorems, to Turing machines, and to the rewrite-rule, string-based symbolic computing schemes of Emil Post.

van Emden
Indeed a turnaround, almost to computer science. Yet you turned back to traditional philosophy for your PhD thesis.

Robinson
Arthur suggested the topic for my master's thesis — Theories of Meaning Implicit in the British Empiricists Locke, Berkeley and Hume — and I finished it in the summer of 1953, just days before setting off for Princeton.

This year with Pap was a most extraordinary intellectual year for me, and it was an important turning point in my life. Pap in effect rescued me from a career doing something (what? I often wonder) in the traditional literary humanities, and relaunched me on my (late-starting!) career doing things in the mathematical sciences. In 1952 and 1953, at the age of 23, with his help, I began (!) to learn algebra, analytic geometry and, as the year went on, elementary calculus. These subjects had been completely absent in my schooling since I was 14, when (as was usual in the stupid British educational system based on narrowly focussed specialization at an early age) my studies had been suddenly limited to Latin and Greek literature and history, and nothing else. This had meant that for eight years, from 1944 until 1952, I had been locked up in an intellectual box, condemned to a life of ignorance about mathematics and science.

van Emden
I suppose you would approve of the unspecialized education that I had, which was typical of the continent of Europe. Here is the list of subjects taught to me in the last three years of high school: algebra, geometry, trigonometry, “descriptive geometry” (in case you wondered, the invention of Gaspard Monge, still valued on the continent in the 1950s), Dutch, English, French, German, geography, history, biology, chemistry, physics, mechanics, bookkeeping, law, astronomy, drawing, and “line drawing” (a preliminary to technical drawing). None of these were optional. As you can imagine, no electives. The subjects were taught at various levels of intensity, ranging from three periods per week in every year to one weekly period in one year. The point to note is that we never did anything in depth.

It is important for a youngster to experience what it means to do something well, and the British system seems designed to do that. The continental system seems designed to efficiently produce the maximal number of people usable in government and industry.

Robinson
The grass is always greener on the other side of the fence. I resent the narrow box that the British system put me in, and Arthur Pap released me from it. Pap himself had been trained as a mathematician and physicist (like Carnap, Hilary Putnam, and other logical positivists) before becoming a full-time philosopher and champion of logic and analytic philosophy.

van Emden
Did you go to Princeton right after Oregon?

Robinson
Yes. In September 1953 I transferred to the doctoral program in philosophy at Princeton. The University of Oregon did not give PhDs in Philosophy, and in any case Pap was leaving to teach elsewhere, ending up at Yale, where he tragically died at the age of 37. He had advised me to go to Princeton on the grounds that it not only had the best mathematics department, where logic was pursued as a seriously important field of study, but also had an excellent modern philosophy department which was itself very strong in logic and which encouraged its graduate students to attend the mathematics department's logic courses. I was in fact able to attend the course offered by Alonzo Church in the autumn of 1953. The course consisted of our studying, and then Church's discussing in class, every chapter of his wonderful book “An Introduction to Mathematical Logic”, Volume 1. In 1953 this was still available only in the form of a pre-publication manuscript in the Mathematics Library (Princeton University Press finally published it in 1956; there never was a Volume 2).

van Emden
This is wonderful! That means you were at Princeton in exactly the glorious period described by Gian-Carlo Rota in “Fine Hall in its Golden Age”. You must read that!

Robinson
I did. Yes, that's what it was like. Of course I did not take all the courses he took or talk to all the people he tells about. He describes Church's course well: it's true that Church did spend an extraordinary amount of class time cleaning and re-cleaning the blackboard and, yes, he did indeed look like a cross between a panda and a large owl.

I also took a logic course from Hilary Putnam (who was later to be one of the authors of the 1960 Davis-Putnam paper). Hilary, like Arthur Pap, had been trained as a mathematician before he became a philosopher. Martin Davis had left Princeton in 1950 after getting his PhD with Church on recursive function theory. Marvin Minsky and John McCarthy had also left Princeton at about the same time as Davis. I did not meet any of them personally until some years later. However, I did come to know Hilary Putnam well. He was actually, formally, my PhD thesis supervisor. In practice, however, his supervision of my thesis “Causality, Probability and Testimony” (on the inductive logic implicit in David Hume's theory of the causal relation and the weighing of evidence) consisted only of reading it after it was finished! Luckily he liked it and gave it his formal approval, which meant that it was then officially accepted by the department.

van Emden
And then you took that job as an Operations Research programmer at duPont. That beautifully illustrates the Wild West, shoot-from-the-hip period of computing, when they could not hire programmers and instead recruited the same kind of people they took for cryptanalysis during the war: promising oddballs like chess players, crossword-puzzle whizzes, and scholars in the humanities.

Robinson
There's something in that, but in my case it must have helped that in Princeton I got a boost from friends on the mathematics department faculty, and some of the visiting researchers at the Institute for Advanced Study, where my wife was a Research Assistant to one of the Professors. This amounted to an informal course of study in mathematics. J.C. Moore, a professor in the mathematics department (a topologist) liked the idea of my late calling to the field, and he acted as my informal expert advisor for a couple of years. It was he who told me to read Courant and Robbins's “What is Mathematics?”, followed by Birkhoff and MacLane's “Survey of Modern Algebra”, followed by Courant's majestic two-volume “Differential and Integral Calculus”, and finally ‘as many of the Bourbaki volumes as you can manage’. I acted on this advice faithfully, but I managed to wade through only two Bourbaki's before deciding that they were too abstract and unintuitive to be doing me much good. At the Institute (where I also had a job in the kitchen, washing dishes) I met and listened to several notable mathematicians of the day, some of whom were already, or have since become, very famous: J.-P. Serre, Raoul Bott, Marston Morse, Ioan James, Freeman Dyson. I found their company most inspiring, but of course I could follow very little of what they talked about among themselves or in their lectures.

van Emden
Did you view your job at duPont as the beginning of a new career, or was it just a day job while you looked for an opportunity to get back to academic philosophy?

Robinson
At first I thought of it as the beginning of a new career. It was certainly a full-time job and rather a demanding one, but it was also a tremendous learning opportunity for me. For one thing, I began serious computer programming. From 1956 until 1960 I was surrounded at duPont by mathematicians, engineers and programmers and I was able to acquire a decent (though very late!) grounding, and a lot of practical experience, in general applied mathematics with an emphasis on computational linear algebra and on the implementation of numerical algorithms in assembly language on the Univac 1 and the IBM 704. Higher-level programming languages were only just beginning to come into general use at that time, so I was able to write and run some programs using Fortran. I did little or no research work on logic in these four years.

On the other hand I managed to keep up my reading, particularly enjoying Martin Davis's book on computability. I was especially fascinated by Emil Post's ideas, and I also worked hard on von Neumann and Morgenstern's Theory of Games and Economic Behavior.

In the early nineteen fifties Martin Davis had been focussing mainly on recursive function theory and unsolvability. He and Hilary had already started (in the mid 1950s) their fruitful collaboration on Hilbert's Tenth Problem before (in the later 1950s) beginning their work on implementing Herbrand's first-order proof procedure. This latter collaboration was sparked off by their participation in the 1957 Cornell Summer School in Logic where Abraham Robinson managed to fire up several other people to try to program this procedure efficiently on the computers of that era. Paul Gilmore and Hao Wang were among these. But I was not able to attend — it was not considered part of my job. By 1959 I realised that the time had come for me to leave duPont, but this was much easier said than done.

van Emden
But ultimately you managed to get back to academia and research in logic.

Robinson
Yes. I managed to leave duPont in 1960, when the Princeton Philosophy Department kindly recommended me for a one-year postdoctoral research fellowship in philosophy at the University of Pittsburgh, where I spent the 1960/61 academic year. I was able to use this transition year to publish a paper based on part of my doctoral dissertation, and to seek a more permanent job in research or university teaching. I applied to a number of places, two in particular: Rice University, and the Applied Mathematics Division of the Argonne National Laboratory. Both offered me positions: Rice offered a full-time position to teach logic and the philosophy of science, starting in 1961, and Argonne offered me a summer research position to begin in May 1961.

The 1960 Davis-Putnam paper had just been published, and Bill Miller, the Director of the Applied Mathematics Division, assigned me the summer task of implementing it on Argonne's IBM 704. I started writing the program in the winter of 1960/61 while still at the University of Pittsburgh, and I arrived at Argonne in May 1961 with a program already written but of course not yet tested on a computer (not all universities at that time had yet acquired one). After a few weeks of trying to get it to run (I had written it in assembly language) I reprogrammed it in Fortran, and it ran beautifully. During this phase of the work I met George Robinson, the manager of software engineering at Argonne. He was very helpful in the nitty-gritty business of actually running and debugging the program and getting it to work properly, and he became fascinated by the logical details and began his own transition to being a theorem-proving researcher. I wrote up this work at the end of the summer, in an Argonne Report (its title was Gamma 1: A Theorem Proving Program for the IBM 704) before leaving to take up my new permanent job in September 1961 at Rice, and soon afterwards I wrote it up more fully for publication. Thus my first paper on automatic theorem proving appeared eventually in 1963 in the Journal of the ACM.

van Emden
Was Wos already there at that time?

Robinson
Yes. But I did not meet him during my first visit in the summer of 1961. Miller invited me back again to Argonne for the successive summers of 1962, 1963 and 1964 and it was during the 1962 summer that I first met Larry Wos. He had been working full-time as a mathematician at Argonne for some years already. Miller had decided to assign him to join me and George on pursuing the theorem proving research, and his first job was to learn logic. Up to that point he had been working on other mathematical tasks. George and I started him off on a crash course using Quine's Methods of Logic, and George and he worked intensely at this. Wos learned very quickly, and he was soon ready to assume the role, with George, of leading a full-time Argonne team working on theorem-proving programs. At the end of the summer of 1963 I had completed my second shot at the problem of efficiently organizing the Herbrand proof procedure, in which I was able to draw upon the key idea (namely, unification, as I decided to call it — it had been nameless until then!) which I found described in the Prawitz 1960 paper whose existence had been drawn to my attention by one of Argonne's brilliant polymaths, a mathematical physicist named Bill Davidon. The unification idea had also of course been discovered and described, albeit obscurely, much earlier, in Herbrand's 1930 thesis, but I did not then know this. Neither did Prawitz, as he told me when I finally met him in Stockholm many years later — Prawitz had rediscovered this crucial idea on his own.

van Emden
Do you happen to recall who first came up with the unit-resolution strategy?

Robinson
My 1965 resolution paper was written at the end of the summer of 1963 and submitted to the Journal of the ACM. Larry Wos and George Robinson forged ahead and programmed this procedure during the subsequent months. It was in the course of designing their implementation that they came up with what they called the Unit Preference Strategy, and so forth. Larry also stressed the usefulness of the Subsumption concept. During these same months, back at Rice, I wrote my paper on hyper-resolution, which appeared in 1964 — a year before the resolution paper itself, which it presupposed!

It turned out that the 1963 resolution paper had lain, waiting to be refereed, for almost a whole year at MIT. I gather Marvin Minsky had been asked by the JACM to referee it. It was indeed being discussed openly in his group and they had even written about it in their internal reports. The first I knew about all this was about a year after submitting the paper, when I received a phone call from Minsky to ask my permission to use it as one of the source materials in his AI seminar! I thought (and still think) that this long delay in refereeing the paper was deplorable, and so did the Editor of the JACM, Richard Hamming. I complained to him, and he got on to Minsky, and my paper was finally returned to me with a request for some small revisions. I quickly sent Hamming a revised version, which then quickly appeared in January 1965. By then Larry Wos and George Robinson had completed their first theorem proving experiment and written it up in a paper referring to the first (1963) version of my resolution paper, which of course never reached print. This led to their use of the so-called factorization concept, which I had introduced in that first version, but eliminated in the second, published version. Larry always said afterwards that he preferred my first version. I however thought the first version was rather clumsy and much preferred my second version, the one that was published, because in my opinion it was more elegant.

van Emden
Did you know Quine's propositional consensus rule (the dual of resolution) from his 1955 paper, which was used by Dunham and North in 1962?

Robinson
No, not until later, when someone (it may well have been Wolfgang Bibel) pointed it out to me. Someone else also pointed out an even earlier reference, 1938 or so, in someone's master's thesis of that year. It is striking to realise now that in all of this I did not meet with any of the people whom I subsequently realised had been involved in the theorem-proving research going on at that time, and how little I knew about their work. I did eventually meet Davis, Gilmore, Loveland, Prawitz, and Wang, but only long afterwards. One person I encountered early on, however, was John McCarthy. He sought me out as soon as I arrived at Stanford to spend the summer of 1965 there. He had recently read my 1965 paper, and had immediately sat down and written a LISP program to run it on the Stanford computer. He said it took him two hours! I had not paid much attention to LISP until then. He took me over to his house and I watched, amazed, as his primitive teletype machine clattered out the printed result of running one of the examples in the paper. I date to this incident my devotion to LISP and the lambda calculus, and I consider John McCarthy to have been one of the people who really inspired me in those years and who has continued to do so ever since.

Addendum:
The early history of Automated Deduction discussed in the preceding interview is also the topic of the following article, which also contains further information about other early researchers in our field.

W. Bibel, Early History and Perspectives of Automated Deduction. In: J. Hertzberg, M. Beetz, R. Englert (eds.), Proceedings of the 30th Annual German Conference on Artificial Intelligence (KI-2007), Osnabrück, September 10-13, 2007. LNAI Vol. 4667, Springer, Berlin, pp. 2-18.

Conferences

CADE-23, Conference on Automated Deduction

The 23rd International Conference on Automated Deduction will be held in Wrocław, Poland, from July 31st to August 5th, 2011. CADE is the major forum for the presentation of research in all aspects of automated deduction.

CADE invites high-quality submissions on the general topic of automated deduction, including foundations, applications, implementations and practical experiences.

Logics, methods, and applications of interest include, but are not limited to, those listed in the call for papers on the conference web site.

The CADE program will include the CADE ATP System Competition (CASC), as well as various workshops and tutorials.

Full papers of up to 15 pages and system descriptions of up to 5 pages must be submitted by February 7th, 2011; abstracts are due earlier, by February 1st. Accepted papers will be published in Springer's LNCS series.

CADE also calls for workshop and tutorial proposals, which should be submitted by December 7th, 2010.

Details about the conference, including the submission procedure, will be published on the conference web site.

TABLEAUX 2011, Automated Reasoning with Analytic Tableaux and Related Methods

The Twentieth International Conference on Automated Reasoning with Analytic Tableaux and Related Methods will take place in Bern, Switzerland, on July 4-8, 2011. Topics of interest include, but are not restricted to, those listed in the call for papers on the conference web pages.

Papers about applications are particularly encouraged!

Workshop & tutorial proposal submission: 10 January 2011
Abstract & title submission: 24 January 2011
Paper submission: 31 January 2011

Full details about this event will be published in due course on the conference web pages.

NFM 2011, NASA Formal Methods Symposium

The third NASA Formal Methods Symposium will take place in Pasadena, California, USA, from April 18th to 20th, 2011. The Symposium is a forum with the goals of identifying challenges and providing solutions for achieving assurance in mission- and safety-critical systems. Theorem proving is listed amongst the topics of interest.

Regular papers of up to 15 pages and tool papers of up to 6 pages are to be submitted by November 19th, 2010. The proceedings will be published in Springer's LNCS series.

See the symposium web site for details.

WoLLIC 2011, Workshop on Logic, Language, Information and Computation

The 18th Workshop on Logic, Language, Information and Computation will be held at the University of Pennsylvania, Philadelphia, USA, from May 18th to 21st, 2011. WoLLIC is an annual international forum on inter-disciplinary research involving formal logic, computing and programming theory, and natural language and reasoning.

A title and single-paragraph abstract should be submitted by January 1st, and the full paper by January 8th. The proceedings will be published in Springer's LNCS series.

Refer to the conference web site for full details, including submission instructions.

RDP 2011, Federated Conference on Rewriting, Deduction, and Programming

The sixth edition of the federated International Conference on Rewriting, Deduction, and Programming will be held in Novi Sad, Serbia, from May 29th to June 3rd, 2011. It will consist of two main conferences: RTA 2011 (Rewriting Techniques and Applications) and TLCA 2011 (Typed Lambda Calculi and Applications).

RDP has also issued a call for workshop proposals to be submitted by November 11th.

Full details are available from the conference web site.

Book Announcement: Logical Analysis of Hybrid Systems

Logical Analysis of Hybrid Systems: Proving Theorems for Complex Dynamics.

Andre Platzer.

Springer Verlag, 2010. 426 p.

ISBN 978-3-642-14508-7.

Hybrid systems are models for complex physical systems and have become a widely used concept for understanding their behaviour. Many applications are safety-critical, including car, railway, and air traffic control, robotics, physical-chemical process control, and biomedical devices. Hybrid systems analysis studies how we can build computerised controllers for physical systems which are guaranteed to meet their design goals.

This is the first book that leverages the power of logic for hybrid systems. The author develops a coherent logical approach for systematic hybrid systems analysis, covering its theory, practice, and applications. It is further shown how the developed verification techniques can be used to study air traffic and railway control systems.

This book is intended for researchers, postgraduates, and professionals who are interested in hybrid systems analysis, cyberphysical or embedded systems design, logic and theorem proving, or transportation and automation.

Further information can be found here.