kh at dcs.st-and.ac.uk
Tue Aug 8 10:34:35 EDT 2006
At 19:44 -0400 7/8/06, Jonathan S. Shapiro wrote:
>The BitC language specification today does not yet include any
>specification for the logic language. I am not sure if this is relevant
>to your comment.
In BitC, you have a computation language embedded within a
verification language. We have only a vestigial verification
language in Hume, though we are working on a notation for possible
future inclusion, and have developed an embedding of Hume in Promela.
We are primarily interested in automated proofs of resource
properties (time, space, etc.), whereas your interest is presumably
in more general properties. In order to achieve this, we have to be
extremely cautious about both language design and implementation.
> > c) it is concerned with much more low-level fine detail than Hume
>Yes. We are very concerned with low-level expressiveness.
Likewise, this is an issue that we're interested in.
> There are
>several very talented language designers who are working on
>verifiability of high-level, functional languages. Most are working on
>pure term languages (where I consider Haskell's "monads" to fall within
>the "pure term" idea).
You're presumably thinking of House here -- I need to look at this again.
In a verification setting, I would see monads more as a separation
language, where type information is used to separate stateless from
stateful computation (or to characterise other effects). Yes, you
can model monads by using explicit states in a pure term language,
but isn't this also true of implicit state? The advantage is that
stronger properties can usually be derived at less effort for the
stateless (pure term) computations.
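To make the separation point concrete, here is a small Haskell sketch of my own (not BitC or Hume code): a pure term computation, the same state threaded explicitly as ordinary arguments, and the monadic packaging whose type is exactly the "separation language" marker:

```haskell
-- Pure term computation: the type guarantees no state is involved,
-- so equational (term-substitution) reasoning applies directly.
square :: Int -> Int
square x = x * x

-- Explicit state in a pure term language: the state is an ordinary
-- argument/result pair, still amenable to substitution-based proofs.
tick :: Int -> (Int, Int)   -- (observed value, next state)
tick n = (n, n + 1)

-- Monadic packaging of the same idea: the IO (or State) constructor
-- in the type is what a verifier can key on to separate the two worlds.
readSquared :: IO Int
readSquared = fmap (square . read) getLine
```

The stronger properties come from the first two forms: nothing in their types forces state-space reasoning.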
>The problem is (Mark Jones' work notwithstanding) that these languages
>are not well suited to construction of systems code. The need for
>operating systems, runtimes, and other high-performance low-level code
>is not going away.
We would absolutely agree with this.
>There are both technical and social reasons for this.
>The current state of the art in software verification has not paid a lot
>of attention to this space (mainly because verification is already hard
>enough by itself). The problem is that this leaves us standing in
>mid-air building castles because our operating systems and runtime
>systems are not verified or verifiable -- or even constructed in any
>formally structured or principled way.
And also with this.
> > Unlike Hume, it includes imperative operations (I'd be interested in
>> seeing what effect this has on verification -- this may weaken the
>> proofs that can be constructed?)
>The main impact is that our basis for discharge will need to be theorems
>about the evolution of the state space rather than approaching the
>problem through term substitution. This is, of course, a larger search
>problem and we expect to hit many challenges.
>We have been careful to preserve a pure subset language so that we can
>switch back and forth between term-based and state-based reasoning as a
>way to slice the search. I'm actually about to strengthen this aspect of
>the language explicitly.
The difficulty (at least as I found with SML) is that if you allow
side-effects in arbitrary expressions, then you must be highly
conservative about *possible* state manipulation, especially when
higher-order functions are present. My experience (c. 1985) was
that, using a dataflow analysis, I was still marking large parts of
a program as "possibly state-manipulating", including library code.
In this situation, you derive much weaker proofs than would
otherwise be possible, even if the code in question is never
actually used in a side-effecting context. Monadic-style separation
is a big win in this respect, since the characterisation comes "for
free", through the types.
You hint that you are also using some way to separate stateless and
stateful computation, but is this just stateless programming in the
verification language and stateful programming in the computation
language (MetaOCaml has done something similar), or something
stronger (e.g. identification of purely functional sub-programs)?
>But I think that there is another thing to say here: it is better to
>have weak proofs than no proofs. Right now, the number of systems that
>can prove useful properties about things like operating system kernels
There's also some work by Julia Lawall and Charles Consel on device
drivers that you presumably know about?
The real problem with unrestricted state is that if you are not very
careful, you may weaken properties to the extent that they are no
longer useful! The best solution may be to define a good abstraction
so that you have fine-grained control over the extent of any effects.
This is essentially what Greg Morrisett does in Cyclone.
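One existing illustration of such fine-grained control (in Haskell rather than Cyclone, and purely my own example) is the ST monad: the mutation is real, but the type of runST proves the effect cannot escape the function, so callers may treat the result as a pure term:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Locally-encapsulated state: an imperative accumulating loop whose
-- effects are provably confined, so sumST is observably pure.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                          -- allocate local state
  mapM_ (\x -> modifySTRef' acc (+ x)) xs    -- mutate it freely
  readSTRef acc                              -- only the value escapes
```

The abstraction boundary (runST) is exactly where the extent of the effect is delimited.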
> Jones' work on House is terrific, but everybody involved admits
>that House is a toy (and that this was the right thing to do first). We
>are approaching from the perspective of OS builders. Our objectives are
>more aggressive and may not be realistic.
>Perhaps our approach will not work, but if it does not we will learn
>something about why not.
>> and throw/catch exception handling.
>Yes. However, throw/catch is not in the core language. There is a
>straightforward whole-program rewrite that can entirely remove them. We
>don't want to actually *do* that rewrite, but there is no new
>foundational semantics introduced.
>In fact, exceptions do not even require us to trigger value restrictions
My real interest is in the effect on time and space prediction --
nested exceptions can make this more difficult, for example, and
rewriting the source is undesirable if it modifies cost properties.
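For concreteness, here is a hedged Haskell sketch of the kind of rewrite presumably meant: exceptional control flow replaced by an explicit Either result that every caller must thread (the names are mine, not BitC's):

```haskell
-- After the rewrite, "throw" is constructing a Left value...
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "divide by zero"
safeDiv x y = Right (x `div` y)

-- ...and "catch" is ordinary case analysis. Note the cost concern:
-- every frame between raise site and handler now pattern-matches on
-- the result, which can shift the time/space behaviour of the source.
average :: [Int] -> Either String Int
average xs = safeDiv (sum xs) (length xs)
```

No new foundational semantics is needed, as you say -- but the rewritten program's cost model is not obviously the original one.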
> > It also includes type classes for overloading (something we've avoided,
>> because it can make costing harder -- again it would be interesting to
>> see what this does for verification -- presumably the properties have to
>> be more general?).
>I am not sure what you mean by "costing".
>Certainly, any proof about an overloaded function must be suitably
>quantified. However, as with Haskell, type classes are not part of the
>core language. For whole-program analysis, at least, they can be
I see you're doing whole-program analysis. You're presumably also
eliminating polymorphism and higher-order definitions at the same
time? This is certainly one approach to dealing with complex
language features, though in our setting I'm wary of the possibility
of changing execution costs (or of moving the cost model too far
away from the source program).
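A small Haskell example (mine, purely illustrative) of the quantification issue and the elimination step:

```haskell
-- Overloaded source: any proof about sumAll must be quantified over
-- every Num instance the program can instantiate it at.
sumAll :: Num a => [a] -> a
sumAll = foldr (+) 0

-- After whole-program specialisation, a monomorphic copy exists per
-- use site; proofs and cost bounds then apply to a concrete instance,
-- but code size grows and per-instance costs may diverge.
sumAllInt :: [Int] -> Int
sumAllInt = foldr (+) 0
```

This is the tension: the specialised form is easier to verify and to cost, yet it is no longer the program the author wrote.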
In accordance with University policy on electronic mail, this email
reflects the opinions of the individual concerned, may contain
confidential or copyright information that should not be copied or
distributed without permission, may be of a private or personal
nature unless explicitly indicated otherwise, and should not under
any circumstances be taken as an official statement of University
policy or procedure (see http://www.st-and.ac.uk).