[bitc-dev] C interfacing vs. portability

rtm443x at gmail.com
Thu Feb 26 17:47:04 EST 2009


I'm currently reading my way through past threads and other docs, such as
the informal language spec, but haven't got very far yet, so this may be
addressed elsewhere, in which case please point me at it, but ...

>On Thu, Feb 26, 2009 at 12:47 PM, Rick R <rick.richardson at gmail.com> wrote:
>> Would it be bad to rely on the C (Posix?) standard: <types.h> and the
>> int32_t int64_t etc.
>> And define a set of rules:
>> If the compiler is 64 bit, then the integral in BitC maps to int64_t,
>> if 32 bit int32_t etc.
>> Real/float/double same way.. but also allow overrides if necessary.
>
>I'm having trouble understanding this proposal. BitC doesn't need to
>define a single integral type. We support multiple sizes of machine
>integers directly.
>

Why? My (minimal) familiarity with proving things about programs is that
bounded integers are already enough of a problem: they can overflow, which
is a runtime error you naturally want to rule out. But instead of one type
of bounded integer you have several. Worse, they don't even map cleanly
onto the underlying language, because C chooses not to define its integer
sizes.

So why bound integers at all? It should be possible simply to declare that
x is an int (meaning an unbounded signed integer) or a domain-convenient
subrange like -12..45 (as in Pascal), and let the compiler map that
appropriately onto the underlying machine. From an expression/statement
sequence, the compiler should be able to say clearly whether or not it can
bound the integer to a given, sensible range (say, a machine word).
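
To illustrate (a C sketch with made-up names, not BitC syntax): a
declared subrange would be lowered to the smallest machine type that
contains it, with a runtime check wherever the bound can't be proved:

    #include <stdint.h>
    #include <assert.h>

    /* Hypothetical lowering of a Pascal-style subrange "x : -12..45".
       The compiler picks the smallest machine type that contains the
       range; here a signed 8-bit integer suffices. */
    typedef int8_t range_m12_45;

    /* Range-checked assignment: an out-of-range value is exactly the
       runtime error the prover would try to rule out statically. */
    static inline range_m12_45 range_m12_45_set(long v) {
        assert(v >= -12 && v <= 45);
        return (range_m12_45)v;
    }
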
If the compiler can't find a bound, it defaults to an expensive unbounded
type and reports the fact. If that's not acceptable to the user (and it
usually will be fine for non-systems development, which I presume and hope
is what most of BitC will be used for), they can start applying assertions,
hints or other pragmas which the compiler can use either to find a bound or
to throw an exception at a suitable point. If needed, the user can go
further and supply a proper proof, with an external prover, that things
stay bounded.
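
As a C sketch of the checked fallback (checked_add is my name, not
anything in BitC): an addition whose result the compiler cannot bound
would carry a runtime overflow test, and a real compiler could promote
to an unbounded (bignum) representation instead of aborting:

    #include <stdint.h>
    #include <stdlib.h>

    /* Add two 32-bit values, stopping at the point of overflow instead
       of silently wrapping. */
    static int32_t checked_add(int32_t a, int32_t b) {
        if ((b > 0 && a > INT32_MAX - b) ||
            (b < 0 && a < INT32_MIN - b))
            abort();  /* stand-in for "throw an exception at a suitable point" */
        return a + b;
    }
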
Putting machine-specific types in the language itself looks like hard work
on several fronts.

Actually, maybe another front: IIRC the original DEC Alphas were restricted
to 32- and 64-bit integers
(<http://en.wikipedia.org/wiki/DEC_Alpha#Data_types>); IIRC it was only the
later ones that relaxed this by adding 8- and 16-bit load/store operations.
I don't know whether you can assume this restriction won't return.

Presumably this has been discussed before, so could anyone please point me
in the right direction?

Regarding the representation of integers, I have wondered whether, for
increment/decrement-by-1 of very small integers, one might get a bit of
extra speed (and/or free up an execution unit) by representing a value as a
word with a single set bit, which you shift and test, rather than as an
ordinary integer. Perhaps that's out of place in this conversation, but
it's a thought.
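
A minimal C sketch of the idea (my names; I haven't measured whether it
actually wins anything): a counter in 0..63 is held as a one-hot bit in a
64-bit word, so increment and decrement become single shifts and an
equality test becomes a bit comparison:

    #include <stdint.h>

    typedef uint64_t onehot_t;  /* exactly one bit set: value n <=> bit n */

    static inline onehot_t onehot_of(unsigned n)   { return (onehot_t)1 << n; }
    static inline onehot_t onehot_inc(onehot_t c)  { return c << 1; }  /* unchecked */
    static inline onehot_t onehot_dec(onehot_t c)  { return c >> 1; }  /* unchecked */
    static inline int onehot_eq(onehot_t c, unsigned n) {
        return c == ((onehot_t)1 << n);
    }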

Actually, there is one extension to int I might mention. It makes no
semantic difference at all, but in multiprocessor environments it might
permit an optimisation: specifying an int which is allocated and padded so
that it, and only it, sits on its own cache line, to prevent false sharing
between processors. You might be able to impose the padding as a library
function, but ensuring that something gets allocated strictly (and
efficiently) to a cache line (allowing that there may be more than one
level of cache per processor) might need a bit of compiler magic. Perhaps
it can be done entirely in the library.
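
A library-level sketch in C (assuming GCC's aligned attribute and a
64-byte line, neither of which is portable):

    #define CACHE_LINE 64  /* assumption; varies across machines */

    /* Align the counter to a line boundary and pad the struct out to a
       full line, so no neighbouring data can share it. */
    struct padded_counter {
        volatile long value __attribute__((aligned(CACHE_LINE)));
        char pad[CACHE_LINE - sizeof(long)];
    };

    /* One counter per thread: an update by one thread never invalidates
       another thread's cache line. */
    static struct padded_counter per_thread[8];

Discovering the true line size, and handling more than one cache level,
is where the compiler magic would come in.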

ta

jan


[snip]

