[bitc-dev] Quick haskell question
Mark P. Jones
mpj at cs.pdx.edu
Thu Aug 12 11:53:00 PDT 2010
> On Tue, Aug 10, 2010 at 10:57 PM, Mark P. Jones <mpj at cs.pdx.edu> wrote:
> No. Operator precedence is not a lexical property....
> Ick. Am I confusing Haskell with ML again? I would have sworn that fixity statements were lexically scoped.
By "lexical property", I mean a property of how symbols are written.
In theory, you could have a language in which operator precedence is
a lexical property (sample rule: operator precedence is determined
by the length of the operator symbol so x++y*z parses as (x++y)*z),
but I'm pretty sure I wouldn't want to use such a language :-)
"Lexical scoping" is a different concept, having more to do with
semantics than syntax. Fixity statements in Haskell are lexically
scoped, but that doesn't mean you can tell anything about the fixity
of an operator symbol from its name alone, nor does it guarantee or
require that you see a fixity declaration before you see a use.
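To make that last point concrete, here is a small self-contained sketch
(the operators +. and *. are made up for illustration) in which the use
appears textually before the fixity declarations; Haskell accepts this
because fixities are resolved after the whole module has been parsed:

```haskell
module Main where

-- Hypothetical operators (+.) and (*.): the use in `result` appears
-- before their fixity declarations, which Haskell permits because
-- fixity resolution happens after the module is parsed.
result :: Int
result = 2 +. 3 *. 4   -- resolved as 2 +. (3 *. 4) once fixities are known

infixl 6 +.
infixl 7 *.

(+.), (*.) :: Int -> Int -> Int
(+.) = (+)
(*.) = (*)

main :: IO ()
main = print result    -- prints 14
```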
> - The parser recognizes expressions of the following form as a
> special case, but does not attempt to resolve precedence etc.
> expr0 op1 expr1 op2 .... opN exprN
> This presumably requires either that conventional function application have known high precedence (above any infix operator) or known low precedence...
That's how Haskell works, although it's not technically necessary.
Just think of function application as another infix operator, say @,
and then a Haskell expression like p + q x * r becomes:
p + q @ x * r
Now you can apply the same techniques as with other infix
operators to decide whether @, *, or + has higher precedence.
But unless you have a compelling use case, I'd suggest sticking
with the Haskell approach: "application binds more tightly than
any infix operator" is easy for programmers to remember and seems
to work well in practice.
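The "collect a flat chain, resolve it later" scheme described above can
be sketched as follows; the names and types here are my own, not BitC's
or Hugs's, and every operator is assumed left-associative for brevity:

```haskell
module Main where

-- A minimal sketch of resolving a flat chain
--   expr0 op1 expr1 ... opN exprN
-- by precedence climbing, after the parser has collected it unresolved.
data Expr = Lit Int | Bin String Expr Expr deriving Show

-- Assumed precedence table; all operators treated as left-associative.
prec :: String -> Int
prec "*" = 7
prec "+" = 6
prec "@" = 10   -- function application modelled as the tightest operator
prec _   = 9

-- Turn (expr0, [(op1,expr1),...,(opN,exprN)]) into a tree.
resolve :: Expr -> [(String, Expr)] -> Expr
resolve e ops = fst (go e ops 0)
  where
    go lhs [] _ = (lhs, [])
    go lhs rest@((op, rhs) : more) minPrec
      | prec op < minPrec = (lhs, rest)    -- hand back to the outer call
      | otherwise =
          let (rhs', rest') = go rhs more (prec op + 1)  -- bind tighter ops first
          in  go (Bin op lhs rhs') rest' minPrec

eval :: Expr -> Int
eval (Lit n)       = n
eval (Bin "+" a b) = eval a + eval b
eval (Bin "*" a b) = eval a * eval b
eval _             = error "operator not handled in this sketch"

main :: IO ()
main = do
  -- 1 + 2 * 3 arrives flat and resolves to 1 + (2 * 3)
  let t = resolve (Lit 1) [("+", Lit 2), ("*", Lit 3)]
  print (eval t)   -- prints 7
```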
By the way, Haskell complicates things quite a bit by tossing one
solitary prefix operator, unary minus, into the mix with a precedence
that is in the middle of the range used for infix operators. There
is an extensive comment in the Hugs source code that explains the
algorithm for tidying up infix operators, including this wrinkle.
Shout if you'd like me to forward a pointer or a copy.
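The unary-minus wrinkle shows up in practice whenever a negative operand
meets a tighter operator; a quick illustration:

```haskell
module Main where

-- Unary minus shares the fixity of binary (-), infixl 6, so Haskell
-- rejects it as the operand of a tighter-binding operator:
--   bad = 2 * -3    -- rejected: cannot mix (*) [infixl 7] with prefix minus
ok1, ok2 :: Int
ok1 = 2 * (-3)       -- parenthesize the negative literal
ok2 = 2 * negate 3   -- or use negate explicitly

main :: IO ()
main = print (ok1, ok2)   -- prints (-6,-6)
```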
> - When static analysis begins...
> Do you mean "symbol resolution"?
By static analysis, I mean the whole collection of checks and
analyses that are needed after parsing to validate or disambiguate
a source program prior to code generation. Within this process,
tidying up infix operators sits somewhere between symbol
resolution (you can't determine what fixity a symbol has until
you've checked that it has a definition/declaration) and type
checking (you can't check types until you've figured out the
syntactic structure of the term you're checking).
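A small example (my own, not from the thread) of why fixity resolution
needs symbol resolution first: the same symbol can carry different
fixities in different scopes, so the parser cannot regroup a chain until
it knows which binding each occurrence refers to:

```haskell
module Main where

-- Top-level (><) is right-associative addition.
infixr 8 ><
(><) :: Int -> Int -> Int
a >< b = a + b

topLevel :: Int
topLevel = 2 >< 3 >< 4        -- infixr 8: 2 >< (3 >< 4) = 9

-- The local (><) shadows it with different fixity and meaning,
-- so the textually identical chain regroups differently.
localUse :: Int
localUse = 2 >< 3 >< 4        -- infixl 6: (2 >< 3) >< 4 = 24
  where
    infixl 6 ><
    a >< b = a * b

main :: IO ()
main = print (topLevel, localUse)   -- prints (9,24)
```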
> It is very difficult to imagine why a non-lexical approach is desirable unless that is due to lack of forward declarations. Can you give an example of a case where lexically constrained infix is awkward?
I don't understand your question, which may be because we're
using "lexical" in different ways. If my earlier comments haven't
already cleared this up, could you expand on the question? Then
I'll try again.
All the best,