[bitc-dev] Type Classes Versus Tedium
Jonathan S. Shapiro
shap at eros-os.org
Thu Aug 25 14:33:44 EDT 2005
Thank you. Up to a point, I agree that tedium would not be motivation
for type classes, but when there are a very large number of specialized
functions, tedium very quickly morphs into unusability. I'll get to your
examples below, but I want to briefly explain my obsession. :-)
In truth, I wasn't all that concerned about the arithmetic operators. It
was clear that we could handle them "well enough" because they were
close to ground. Internally, I was pretty consistent in saying to
Swaroop that if we couldn't make this work pretty quickly, we should
fall back to an ad-hoc integral class and deal with it later.
So the real reason I pushed on the issue is because it triggered my
aesthetic sense: "this *ought* to be doable, and I need to really
understand why it isn't". Not quite knowing what to expect, I started
to explore it.
There was never a question in my mind whether overloading was useful --
I wrote one of the very first C++ books, so overloading is an old friend
(and occasional enemy). I stuck with the example of addition on the
mailing list mainly as a placeholder for the larger issue of
overloading. The real questions in my mind were:
1. What is the cost at runtime? (A sketch of what I mean follows the list.)
2. What complications are introduced into the type system that
might negatively impact verification?
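To make the first question concrete: the usual worry (sketched here in
Haskell rather than BitC, purely as my own illustration) is that under
the standard dictionary translation an overloaded function receives its
operations through an extra dictionary argument at runtime, unless the
compiler manages to specialize that away.

  -- Overloaded: compiled with an implicit Num dictionary parameter, so
  -- the (+) is an indirect call unless the use site gets specialized.
  double :: Num a => a -> a
  double x = x + x

  -- Monomorphic: no dictionary; the addition can compile to a plain
  -- machine add.
  doubleInt :: Int -> Int
  doubleInt x = x + x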
Also, I was aware of Appel's comments concerning the introduction of
equality types in ML and the problem that created.
> Type classes were for me one of the larger hurdles to learning
> Haskell because they make GHC's type checker's error messages
> much more opaque.
Thank you very much for the examples you give below. This is extremely
helpful.
> This hurdle came up very early: specifically, when I tried to
> print some value other than a String or a Char and got back
> an opaque message about ambiguous use of the type class Show.
Indeed. Our provisional belief is that the majority of these cases will
(in interactive practice) involve numeric literals. It is not clear that
we will *have* an interactive top loop -- it's under active discussion.
If we do, it will almost certainly come up in a mode where interactive
literals in top-level *expressions* (not declarations or definitions)
are "flattened" after type checking (with an informative diagnostic) in
order to make interactive use more straightforward.
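For reference, here is a small Haskell illustration (mine, not from the
thread) of both halves of this: the ambiguity described in the quoted
example, and the defaulting that rescues the numeric-literal cases --
roughly what the "flattening" above is meant to do for us.

  -- Accepted: the literals carry a Num constraint, so Haskell's
  -- defaulting rules pick Integer.
  main :: IO ()
  main = print (1 + 2)

  -- Rejected if uncommented: the intermediate type is constrained only
  -- by Read and Show, neither of which is numeric, so defaulting does
  -- not apply and GHC reports an ambiguous type variable.
  -- ambiguous = print (show (read "123"))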
BitC makes a different design choice than Haskell and ML in one place
that may be relevant. The problem is that you have to get the type
propagation started from *somewhere*, and if you don't get it from
literals (we don't) and you cannot get it from ground operations
(because of type classes), then you have to get it from somewhere else.
Our resolution to this is that we require structure and union fields to
be typed explicitly -- either with a concrete type or with a type
variable contained in the parameters.
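A rough Haskell analogy (again mine; the BitC syntax will differ) of how
explicitly typed fields seed the propagation:

  -- Field types are written out, so a literal stored into a field gets
  -- its type from the declaration rather than from defaulting.
  data Packet = Packet { seqNo :: Int, len :: Int }

  mk :: Packet
  mk = Packet { seqNo = 1, len = 2 }   -- both literals resolve to Int

  -- A type variable is also allowed, provided it appears in the
  -- declaration's parameters.
  data Pair a = Pair { first :: a, second :: a }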
This doesn't really help when you write
  (define (main) (+ 1 2))
or
  (define (main) (tuple 1 2))
but frankly, who cares? Real programs exist to manipulate data, and they
invariably end up storing it somewhere.
We will have to wait and see how well this compromise works. For
interactive purposes I am hopeful that the expression hack will prove to
be good enough.
Would this approach, in practice, address your concerns? What else
should we think about?
> I have a second, less-important reason for advising against
> type classes, having to do with compilation time, but I'll hold
> it till there's indication that people want to hear it.
I'd very much like to hear it -- especially since Mark's papers
claim that there is little or no overhead. However, let me state that my
metric is C++ compile times, not C compile times, and I'm mostly
interested in comparing optimizing compile times rather than debugging
compile times.