[bitc-dev] Pools in lieu of GC
Jonathan S. Shapiro
shap at eros-os.com
Wed Mar 4 20:49:56 EST 2009
On Wed, Mar 4, 2009 at 7:18 PM, Eric Rannaud <eric.rannaud at gmail.com> wrote:
> On Wed, Mar 04, 2009 at 06:36:38PM -0500, Jonathan S. Shapiro wrote:
>> The two are unrelated. If you allocated on the heap, you will eventually GC.
> Why would you allocate a closure on the heap? And even if that's the
> case, if it's not escaping, you can just free it when it gets out of
> scope. Why do you need to GC? Sorry, I don't understand what you're
> saying here.
If you allocate *anything* on the heap, you will eventually GC. It
does not matter if you are allocating closures or something else.
Closures are necessarily allocated on the heap, because they are
inherently part of first-class procedures: any procedure having a
closure has necessarily been created at runtime. If the procedure
didn't need a closure there would be nothing to allocate, and if (in a
suitably optimizing implementation) the closure did not escape, it
could have been allocated on the stack instead.
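To make the point concrete, here is a minimal sketch in C of what a
first-class procedure desugars to: a code pointer plus a captured
environment. The names (`closure`, `make_adder`) are illustrative, not
from BitC. Because the result of `make_adder` outlives the stack frame
that created it, its environment has to live on the heap:

```c
#include <stdlib.h>

/* A "closure" sketched by hand: a code pointer plus its captured
 * environment, bundled in one heap object. */
typedef struct closure {
    int (*code)(struct closure *self, int arg);
    int captured_n;   /* the environment: one captured variable */
} closure;

static int add_body(closure *self, int arg) {
    return self->captured_n + arg;
}

/* Creates a procedure at runtime. The closure escapes the call to
 * make_adder, so it cannot be stack-allocated; under GC, nobody has
 * to decide when to free it. */
static closure *make_adder(int n) {
    closure *c = malloc(sizeof *c);
    c->code = add_body;
    c->captured_n = n;
    return c;
}
```

A caller would write `closure *add5 = make_adder(5);` and then
`add5->code(add5, 37)` to get 42; the escape is exactly why the
environment cannot sit in `make_adder`'s frame.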
>> > What about hard real-time applications?
>> Define "hard real time". Most of the things that people believe to be
>> hard real time are not. Many of the rest have non-real-time phases. So
>> this question can only be answered sensibly with a bit more
> This is always a good question to ask somebody who thinks he needs
> hard-RT, as many situations are not as bad as they sound. But from the
> point of view of language design, unless you're claiming there are no
> hard-RT situations out there, or that BitC will just not work for them,
> I fail to see the pertinence.
Truly hard real-time applications necessarily do not perform dynamic
allocations at all, so the question of GC becomes irrelevant. All
other applications are soft real time, and in those applications the
timing of GC is almost always manageable.
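The usual discipline in truly hard real-time code (and the one the
subject line alludes to) is a fixed-size pool: all storage is reserved
at startup, and both allocate and release are constant-time with no
heap and no collector. A minimal sketch, with illustrative sizes and
names:

```c
#include <stddef.h>

#define POOL_SLOTS 64

typedef struct { double x, y, z; } sample;   /* example payload type */

/* All storage reserved up front; nothing is ever malloc'd. */
static sample  pool_storage[POOL_SLOTS];
static sample *free_list[POOL_SLOTS];
static size_t  free_top;

static void pool_init(void) {
    for (size_t i = 0; i < POOL_SLOTS; i++)
        free_list[i] = &pool_storage[i];
    free_top = POOL_SLOTS;
}

/* O(1), no heap, no GC: pop a slot off the free list. */
static sample *pool_alloc(void) {
    return free_top ? free_list[--free_top] : NULL;
}

/* O(1): push the slot back. */
static void pool_free(sample *s) {
    free_list[free_top++] = s;
}
```

Exhaustion returns NULL rather than blocking or collecting, which is
the behavior a hard-deadline system can actually budget for.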
What people tend to forget is that GC is just as fast as hand allocation.
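The reason the speed claim holds, at least for allocation itself, is
that the common fast path under a copying or compacting collector is a
single pointer bump, versus a free-list search in a typical `malloc`.
A sketch of such a nursery allocator (names and sizes are mine, not
from any particular collector):

```c
#include <stddef.h>

/* A bump-pointer nursery: allocation is an alignment round-up, a
 * bounds check, and a pointer increment. */
static unsigned char  nursery[1 << 16];
static unsigned char *alloc_ptr = nursery;
static unsigned char *alloc_limit = nursery + sizeof nursery;

static void *gc_alloc(size_t n) {
    n = (n + 7) & ~(size_t)7;       /* round up to 8-byte alignment */
    if (alloc_ptr + n > alloc_limit)
        return NULL;                /* a real GC would collect here */
    void *p = alloc_ptr;
    alloc_ptr += n;
    return p;
}
```

The cost the collector does add comes later, at collection time, which
is why the argument above is about the *timing* of GC rather than the
cost of allocation.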
>> > How do you do GC across many many CPU cores?
>> There are concurrent collectors.
> Yet few are used on any kind of real-life large scale (I don't know of
> any). Are you damn sure that this will never present a challenge?
No. But I'm quite sure that shared-memory concurrency is inherently
unscalable, so it does not matter.
> How do you implement a GC on CUDA? What about a hypothetical 1000 cores
CUDA is no big deal. That's loosely coupled. At 1000 cores, one of
three things is true:
1. You aren't doing shared mutable memory, so it's just 1000 independent GCs
2. You're doing *constrained* sharing. For such applications the GC,
the compilation, and the runtime need to be specialized.
3. You have 990 idle cores.
> It may well not be impossible, but is the research on this advanced
> enough, and the implementations numerous and well understood enough that
> the question can just be set aside?
Bluntly, I don't really care. The problem spaces you are discussing
are problem spaces having ten actual running instances worldwide. I'm
concerned about the majority case.
>> > Can we expect a notion similar to freestanding environments as in C?
>> > (similar in its resource requirements)
>> Probably not. The Coyotos kernel will simply not include the
>> collector, but it's a special case. Pragmatically, if your memory is
>> so tight that you can't afford the space for a collector, you can't
>> afford to write in assembler and you can't afford to use the heap that
> I guess you mean "you can't afford not to write in assembler"? If so, I
> disagree. I've written embedded applications using a few dozen KB of RAM
> in C, and that's just fine. And a lot easier than in assembly. You have
> to be careful about inlining and stack size but that's far from
> unreasonable constraints.
Like I said. You can only write those in assembler. Using a clever
macro package (to wit: C) doesn't really alter the point I was making.
The 32KB application space no longer really exists, even in embedded
systems. You literally cannot buy ROMS that small anymore.