[bitc-dev] Retrospective Thoughts on BitC

Jonathan S. Shapiro shap at eros-os.org
Mon Apr 16 10:42:29 PDT 2012


Matt makes several tempting comments here. They are thought-provoking and bear
consideration, but I also think there is reason for cautious skepticism.

On Sun, Apr 15, 2012 at 10:56 PM, Matt Oliveri <atmacen at gmail.com> wrote:

> The way I interpret Shap's point is that we don't know yet if
> zero-pause GC is good enough for _all_ programs, or even all those
> programs which would otherwise use C and C shlibs.
> It's probably a lot closer than stop-the-world GC, but it's too early
> to put all our eggs in that basket.
>

Hmm. Well, that's certainly a point I *should* have considered. The truth
is that my brain was more in the "let's tackle the stuff we actually know
how to do first" mode.

C4 illustrates a *foundational* problem. There is no such thing as
pauseless GC unless the collector can operate faster than the mutator. In
the limit, that really isn't possible. The best we will ever achieve is
collectors that are pauseless *up to* a well-characterized allocation rate.
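
To make the arithmetic concrete, here is a back-of-the-envelope sketch (the
numbers are purely illustrative; they are not measurements of C4 or of any
real collector): a concurrent collector that reclaims some amount of garbage
per cycle can only keep up with mutators whose sustained allocation rate
stays below that reclamation rate.

    fn max_pauseless_alloc_rate(reclaimable_bytes: f64, cycle_secs: f64) -> f64 {
        // A collector that frees `reclaimable_bytes` of garbage over a
        // concurrent cycle lasting `cycle_secs` can absorb at most this
        // sustained allocation rate without falling behind the mutator.
        reclaimable_bytes / cycle_secs
    }

    fn main() {
        // Suppose a cycle reclaims 2 GiB of garbage and runs for 4 seconds
        // concurrently with the mutator (made-up numbers).
        let rate = max_pauseless_alloc_rate(2.0 * 1024.0 * 1024.0 * 1024.0, 4.0);
        println!("pauseless up to ~{:.0} MiB/s of allocation",
                 rate / (1024.0 * 1024.0));
        // Allocate faster than this for long enough and, in the limit,
        // either the heap grows without bound or the mutator stalls: a pause.
    }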


> At any rate, maybe a typesafe runtime for all programs is not the right
> goal.
>

Splat! (That's the sound of me spitting something disgusting out of my
mouth.)

I have seen many runtimes that cannot be implemented efficiently in
currently deployed commercial type systems. I have not seen *any* runtime
system that inherently needs to be type-unsafe, and I have not seen *any*
that would need to give up performance in order to be type-safe. Both of
the latter statements assume that we apply modern type knowledge to these
systems; you cannot get efficient type safety out of the type systems of
Java, C#, or ML.
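
As a sketch of what "modern type knowledge" buys you (my own illustration,
not BitC code; Rust stands in here for the kind of representation-aware type
system I have in mind): a runtime wants to manipulate things like one-word
object headers with exact layouts and bit-level encodings, and it wants the
type checker to follow it there. A uniformly boxed type system in the
Java/C#/ML tradition cannot express this without indirection or tagging
overhead.

    // Hypothetical one-word object header: the size lives in the upper
    // bits, the mark bit in the low bit. The layout is exact, there is
    // no boxing, and every operation is still statically typed.
    #[repr(transparent)]
    #[derive(Clone, Copy)]
    struct Header(usize);

    impl Header {
        const MARK_BIT: usize = 1;

        fn new(size_in_words: usize) -> Header {
            Header(size_in_words << 1)
        }
        fn size_in_words(self) -> usize {
            self.0 >> 1
        }
        fn is_marked(self) -> bool {
            self.0 & Self::MARK_BIT != 0
        }
        fn marked(self) -> Header {
            Header(self.0 | Self::MARK_BIT)
        }
    }

    fn main() {
        let h = Header::new(16).marked();
        assert!(h.is_marked());
        assert_eq!(h.size_in_words(), 16);
    }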


> For starters, if zero-pause GC turns out to be unacceptable for some
> programs, then we have no evidence that the goal is attainable.
>

You mean *when*. There *will* be applications with high, sustained
allocation rates that do not release storage.

But this worry is silly. There are applications whose allocation
characteristics don't fit malloc() either; some of those applications are
widely used. The JVM comes to mind.


> Further, even if it is, it's restrictive; not all applications need GC.
> For example, those programs which can be shown to be safe just using
> region typing should not be using GC.
>

Region safety does not guarantee timing of release or place any bound on
heap growth. A region-safe and region-managed program may not release heap
rapidly enough to run to completion. Regions definitely help, but they are
not a replacement for GC.
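
A toy illustration of that point (my own sketch; the Region type below is
hypothetical, not anything from BitC): every allocation is region-safe, yet
peak heap use tracks the total number of allocations rather than the live
set, because nothing can be released before the region itself dies.

    struct Region {
        blocks: Vec<Box<[u8]>>,
    }

    impl Region {
        fn new() -> Region {
            Region { blocks: Vec::new() }
        }

        // Region-style allocation: individual objects are never freed;
        // everything is released together when the Region is dropped.
        fn alloc(&mut self, size: usize) -> &mut [u8] {
            self.blocks.push(vec![0u8; size].into_boxed_slice());
            let last = self.blocks.last_mut().unwrap();
            &mut last[..]
        }
    }

    fn main() {
        let mut r = Region::new();
        // Each buffer is dead after one iteration, but the region cannot
        // release it individually, so roughly 400 MiB stays resident even
        // though the live set is never more than one 4 KiB buffer.
        for _ in 0..100_000 {
            let buf = r.alloc(4096);
            buf[0] = 1;
        }
    } // Everything is released here, at region exit.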


> But of course, we want to be able to build a system out of components
> that use GC and components that only use region-typesafe allocation.
>

Why? I would say rather: we may be *forced* by the limits of a particular
GC to do this, but what we *want* is to adopt the simplest regime that will
actually solve our problem.


> More generally, we want to be able to build a system out of components
> that use all sorts of memory management policies.
> The problem then is that anyone could come up with a new and
> reasonable memory management policy at any time, and we'd want to be
> able to integrate it.
>

That's a lovely image, but I think it's a bad goal.

Hard experience says that this just isn't possible in practice. We have
extended experience with mingling safe and unsafe environments. As it
happens, we also have experience with fully safe environments in which
different processes use different GCs but can share objects in some form.
Both introduce nearly insurmountable complexity and risk into the overall
system.


> In summary, I think zero-pause GC is promising, and a safe runtime
> that uses it could be extremely popular, but it will not replace C.
>

Actually, I disagree. If a pauseless GC supporting a high enough sustained
allocation rate becomes widely used, and a low-level type system that is
friendly to high-performance I/O falls into place, I suspect you would find
regulatory agencies all over the world requiring a cutover as a condition
of certification for their member companies.


> I originally got interested in BitC because I thought its design goal
> was to be a practical substrate for formal verification.
> But now it seems like Shap is pursuing the "live dangerously" option,
> but trying to make it less dangerous using strong, low-level types.
>

Shap found out that formal verification often yields less confidence than
strong typing does, which makes the effort of formal methods not worth the
gain.


shap