5. The Bison Parser Algorithm
As Bison reads tokens, it pushes them onto a stack along with their semantic values. The stack is called the parser stack. Pushing a token is traditionally called shifting.
For example, suppose the infix calculator has read `1 + 5 *', with a `3' to come. The stack will have four elements, one for each token that was shifted.
But the stack does not always have an element for each token read. When the last n tokens and groupings shifted match the components of a grammar rule, they can be combined according to that rule. This is called reduction. Those tokens and groupings are replaced on the stack by a single grouping whose symbol is the result (left hand side) of that rule. Running the rule's action is part of the process of reduction, because this is what computes the semantic value of the resulting grouping.
For example, if the infix calculator's parser stack contains this:
    1 + 5 * 3
and the next input token is a newline character, then the last three elements can be reduced to 15 via the rule:
    expr: expr '*' expr;
Then the stack contains just these three elements:
    1 + 15
At this point, another reduction can be made, resulting in the single value 16. Then the newline token can be shifted.
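The action attached to a rule is what computes the semantic value produced by the reduction. As a minimal sketch (the token name NUM and this particular set of rules are assumptions for illustration, not the manual's calculator grammar), rules with such actions might look like this:

    exp:      NUM
            | exp '+' exp   { $$ = $1 + $3; }   /* reducing `1 + 15' runs this and yields 16 */
            | exp '*' exp   { $$ = $1 * $3; }   /* reducing `5 * 3' runs this and yields 15  */
            ;                                   /* precedence declarations omitted here      */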
The parser tries, by shifts and reductions, to reduce the entire input down to a single grouping whose symbol is the grammar's start-symbol (see section Languages and Context-Free Grammars).
This kind of parser is known in the literature as a bottom-up parser.
5.1 Look-Ahead Tokens                      Parser looks one token ahead when deciding what to do.
5.2 Shift/Reduce Conflicts                 Conflicts: when either shifting or reduction is valid.
5.3 Operator Precedence                    Operator precedence works by resolving conflicts.
5.4 Context-Dependent Precedence           When an operator's precedence depends on context.
5.5 Parser States                          The parser is a finite-state machine with a stack.
5.6 Reduce/Reduce Conflicts                When two rules are applicable in the same situation.
5.7 Mysterious Reduce/Reduce Conflicts     Reduce/reduce conflicts that look unjustified.
5.8 Stack Overflow, and How to Avoid It    What happens when the stack gets full.  How to avoid it.
5.1 Look-Ahead Tokens
The Bison parser does not always reduce immediately as soon as the last n tokens and groupings match a rule. This is because such a simple strategy is inadequate to handle most languages. Instead, when a reduction is possible, the parser sometimes "looks ahead" at the next token in order to decide what to do.
When a token is read, it is not immediately shifted; first it becomes the look-ahead token, which is not on the stack. Now the parser can perform one or more reductions of tokens and groupings on the stack, while the look-ahead token remains off to the side. When no more reductions should take place, the look-ahead token is shifted onto the stack. This does not mean that all possible reductions have been done; depending on the token type of the look-ahead token, some rules may choose to delay their application.
Here is a simple case where look-ahead is needed. These three rules define expressions which contain binary addition operators and postfix unary factorial operators (`!'), and allow parentheses for grouping.
    expr:     term '+' expr
            | term
            ;
    term:     '(' expr ')'
            | term '!'
            | NUMBER
            ;
Suppose that the tokens `1 + 2' have been read and shifted; what should be done? If the following token is `)', then the first three tokens must be reduced to form an expr. This is the only valid course, because shifting the `)' would produce a sequence of symbols term ')', and no rule allows this.

If the following token is `!', then it must be shifted immediately so that `2 !' can be reduced to make a term. If instead the parser were to reduce before shifting, `1 + 2' would become an expr. It would then be impossible to shift the `!' because doing so would produce on the stack the sequence of symbols expr '!'. No rule allows that sequence.

The current look-ahead token is stored in the variable yychar. See section Special Features for Use in Actions.
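As a hedged sketch (this particular rule body, the message, and the assumption that <stdio.h> is included in the prologue are for illustration only), an action can examine yychar to see which token is the look-ahead at the moment a reduction runs:

    term:     term '!'
                {
                  /* Sketch: yychar is the look-ahead token, when one has been read.  */
                  if (yychar == '+')
                    fprintf (stderr, "look-ahead is '+'\n");
                  $$ = $1;   /* a real action would compute the factorial */
                }
            ;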
5.2 Shift/Reduce Conflicts
Suppose we are parsing a language which has if-then and if-then-else statements, with a pair of rules like this:
    if_stmt:  IF expr THEN stmt
            | IF expr THEN stmt ELSE stmt
            ;
Here we assume that IF, THEN and ELSE are terminal symbols for specific keyword tokens.

When the ELSE token is read and becomes the look-ahead token, the contents of the stack (assuming the input is valid) are just right for reduction by the first rule. But it is also legitimate to shift the ELSE, because that would lead to eventual reduction by the second rule.
This situation, where either a shift or a reduction would be valid, is called a shift/reduce conflict. Bison is designed to resolve these conflicts by choosing to shift, unless otherwise directed by operator precedence declarations. To see the reason for this, let's contrast it with the other alternative.
Since the parser prefers to shift the ELSE, the result is to attach the else-clause to the innermost if-statement, making these two inputs equivalent:
    if x then if y then win (); else lose;

    if x then do; if y then win (); else lose; end;
But if the parser chose to reduce when possible rather than shift, the result would be to attach the else-clause to the outermost if-statement, making these two inputs equivalent:
    if x then if y then win (); else lose;

    if x then do; if y then win (); end; else lose;
The conflict exists because the grammar as written is ambiguous: either parsing of the simple nested if-statement is legitimate. The established convention is that these ambiguities are resolved by attaching the else-clause to the innermost if-statement; this is what Bison accomplishes by choosing to shift rather than reduce. (It would ideally be cleaner to write an unambiguous grammar, but that is very hard to do in this case.) This particular ambiguity was first encountered in the specifications of Algol 60 and is called the "dangling else" ambiguity.
To avoid warnings from Bison about predictable, legitimate shift/reduce conflicts, use the %expect n declaration. There will be no warning as long as the number of shift/reduce conflicts is exactly n. See section Suppressing Conflict Warnings.
The definition of if_stmt above is solely to blame for the conflict, but the conflict does not actually appear without additional rules. Here is a complete Bison input file that actually manifests the conflict:
    %token IF THEN ELSE variable
    %%
    stmt:     expr
            | if_stmt
            ;

    if_stmt:  IF expr THEN stmt
            | IF expr THEN stmt ELSE stmt
            ;

    expr:     variable
            ;
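Since this file has exactly one shift/reduce conflict (the dangling else), adding a single %expect line after the %token declaration suppresses the warning; a sketch of just the changed part:

    %token IF THEN ELSE variable
    %expect 1
    %%
    ...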
5.3 Operator Precedence
Another situation where shift/reduce conflicts appear is in arithmetic expressions. Here shifting is not always the preferred resolution; the Bison declarations for operator precedence allow you to specify when to shift and when to reduce.
5.3.1 When Precedence is Needed            An example showing why precedence is needed.
5.3.2 Specifying Operator Precedence       How to specify precedence in Bison grammars.
5.3.3 Precedence Examples                  How these features are used in the previous example.
5.3.4 How Precedence Works                 How they work.
5.3.1 When Precedence is Needed
Consider the following ambiguous grammar fragment (ambiguous because the input `1 - 2 * 3' can be parsed in two different ways):
    expr:     expr '-' expr
            | expr '*' expr
            | expr '<' expr
            | '(' expr ')'
            ...
            ;
Suppose the parser has seen the tokens `1', `-' and `2'; should it reduce them via the rule for the subtraction operator? It depends on the next token. Of course, if the next token is `)', we must reduce; shifting is invalid because no single rule can reduce the token sequence `- 2 )' or anything starting with that. But if the next token is `*' or `<', we have a choice: either shifting or reduction would allow the parse to complete, but with different results.
To decide which one Bison should do, we must consider the results. If the next operator token op is shifted, then it must be reduced first in order to permit another opportunity to reduce the difference. The result is (in effect) `1 - (2 op 3)'. On the other hand, if the subtraction is reduced before shifting op, the result is `(1 - 2) op 3'. Clearly, then, the choice of shift or reduce should depend on the relative precedence of the operators `-' and op: `*' should be shifted first, but not `<'.
What about input such as `1 - 2 - 5'; should this be `(1 - 2) - 5' or should it be `1 - (2 - 5)'? For most operators we prefer the former, which is called left association. The latter alternative, right association, is desirable for assignment operators. The choice of left or right association is a matter of whether the parser chooses to shift or reduce when the stack contains `1 - 2' and the look-ahead token is `-': shifting yields right association, and reducing yields left association.
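As a concrete check, here is a hedged sketch of a complete grammar built from this fragment, with NUM as an assumed token name and no precedence declarations; Bison reports shift/reduce conflicts for it:

    %token NUM
    %%
    expr:     expr '-' expr
            | expr '*' expr
            | expr '<' expr
            | '(' expr ')'
            | NUM
            ;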
5.3.2 Specifying Operator Precedence
Bison allows you to specify these choices with the operator precedence declarations %left and %right. Each such declaration contains a list of tokens, which are operators whose precedence and associativity is being declared. The %left declaration makes all those operators left-associative and the %right declaration makes them right-associative. A third alternative is %nonassoc, which declares that it is a syntax error to find the same operator twice "in a row".
The relative precedence of different operators is controlled by the order in which they are declared. The first %left or %right declaration in the file declares the operators whose precedence is lowest, the next such declaration declares the operators whose precedence is a little higher, and so on.
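For example (a sketch with assumed operator choices), an assignment operator is usually declared %right and listed first so that it binds most loosely, and %nonassoc suits comparisons that should not chain:

    %right '='           /* lowest precedence; right-associative    */
    %nonassoc '<' '>'    /* `a < b < c' becomes a syntax error      */
    %left '+' '-'
    %left '*' '/'        /* highest precedence of these four levels */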
5.3.3 Precedence Examples
In our example, we would want the following declarations:
    %left '<'
    %left '-'
    %left '*'
In a more complete example, which supports other operators as well, we would declare them in groups of equal precedence. For example, '+' is declared with '-':
    %left '<' '>' '=' NE LE GE
    %left '+' '-'
    %left '*' '/'
(Here NE and so on stand for the operators for "not equal" and so on. We assume that these tokens are more than one character long and therefore are represented by names, not character literals.)
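Prefixing the ambiguous fragment from the earlier example with these declarations (NUM again being an assumed token name) lets Bison resolve its conflicts by precedence instead of reporting them:

    %token NUM NE LE GE
    %left '<' '>' '=' NE LE GE
    %left '+' '-'
    %left '*' '/'
    %%
    expr:     expr '-' expr
            | expr '*' expr
            | expr '<' expr
            | '(' expr ')'
            | NUM
            ;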
5.3.4 How Precedence Works
The first effect of the precedence declarations is to assign precedence levels to the terminal symbols declared. The second effect is to assign precedence levels to certain rules: each rule gets its precedence from the last terminal symbol mentioned in the components. (You can also specify explicitly the precedence of a rule. See section Context-Dependent Precedence.)
Finally, the resolution of conflicts works by comparing the precedence of the rule being considered with that of the look-ahead token. If the token's precedence is higher, the choice is to shift. If the rule's precedence is higher, the choice is to reduce. If they have equal precedence, the choice is made based on the associativity of that precedence level. The verbose output file made by `-v' (see section Invoking Bison) says how each conflict was resolved.
Not all rules and not all tokens have precedence. If either the rule or the look-ahead token has no precedence, then the default is to shift.
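Here is a small annotated sketch (NUM is an assumed token; the comments restate the resolution rules just described):

    %left '-'                 /* declared first: lower precedence   */
    %left '*'                 /* declared second: higher precedence */
    %%
    expr:     expr '-' expr   /* rule precedence is that of '-'     */
            | expr '*' expr   /* rule precedence is that of '*'     */
            | NUM
            ;
    /* With `expr - expr' on the stack:
         look-ahead '*': the token outranks the rule, so shift;
         look-ahead '-': equal precedence and %left, so reduce.  */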
5.4 Context-Dependent Precedence
Often the precedence of an operator depends on the context. This sounds outlandish at first, but it is really very common. For example, a minus sign typically has a very high precedence as a unary operator, and a somewhat lower precedence (lower than multiplication) as a binary operator.
The Bison precedence declarations, %left, %right and %nonassoc, can only be used once for a given token; so a token has only one precedence declared in this way. For context-dependent precedence, you need to use an additional mechanism: the %prec modifier for rules.
The %prec modifier declares the precedence of a particular rule by specifying a terminal symbol whose precedence should be used for that rule. It's not necessary for that symbol to appear otherwise in the rule. The modifier's syntax is:
    %prec terminal-symbol
and it is written after the components of the rule. Its effect is to assign the rule the precedence of terminal-symbol, overriding the precedence that would be deduced for it in the ordinary way. The altered rule precedence then affects how conflicts involving that rule are resolved (see section Operator Precedence).
Here is how %prec solves the problem of unary minus. First, declare a precedence for a fictitious terminal symbol named UMINUS. There are no tokens of this type, but the symbol serves to stand for its precedence:
    ...
    %left '+' '-'
    %left '*'
    %left UMINUS
Now the precedence of UMINUS can be used in specific rules:
    exp:    ...
            | exp '-' exp
            ...
            | '-' exp %prec UMINUS
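A slightly fuller sketch of the same idea, with illustrative actions and an assumed NUM token:

    exp:      NUM
            | exp '-' exp           { $$ = $1 - $3; }
            | '-' exp %prec UMINUS  { $$ = -$2; }
            ;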
5.5 Parser States
The function yyparse is implemented using a finite-state machine. The values pushed on the parser stack are not simply token type codes; they represent the entire sequence of terminal and nonterminal symbols at or near the top of the stack. The current state collects all the information about previous input which is relevant to deciding what to do next.
Each time a look-ahead token is read, the current parser state together with the type of look-ahead token are looked up in a table. This table entry can say, "Shift the look-ahead token." In this case, it also specifies the new parser state, which is pushed onto the top of the parser stack. Or it can say, "Reduce using rule number n." This means that a certain number of tokens or groupings are taken off the top of the stack, and replaced by one grouping. In other words, that number of states are popped from the stack, and one new state is pushed.
There is one other alternative: the table can say that the look-ahead token is erroneous in the current state. This causes error processing to begin (see section 6. Error Recovery).
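To examine the states Bison builds for your own grammar, use the `-v' option mentioned earlier; for a grammar file named, say, calc.y (an example name), the command below writes the states, their actions, and each conflict's resolution to calc.output:

    bison -v calc.y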
5.6 Reduce/Reduce Conflicts
A reduce/reduce conflict occurs if there are two or more rules that apply to the same sequence of input. This usually indicates a serious error in the grammar.
For example, here is an erroneous attempt to define a sequence of zero or more word groupings.
    sequence: /* empty */
                    { printf ("empty sequence\n"); }
            | maybeword
            | sequence word
                    { printf ("added word %s\n", $2); }
            ;
    maybeword: /* empty */
                    { printf ("empty maybeword\n"); }
            | word
                    { printf ("single word %s\n", $1); }
            ;
The error is an ambiguity: there is more than one way to parse a single word into a sequence. It could be reduced to a maybeword and then into a sequence via the second rule. Alternatively, nothing-at-all could be reduced into a sequence via the first rule, and this could be combined with the word using the third rule for sequence.
There is also more than one way to reduce nothing-at-all into a sequence. This can be done directly via the first rule, or indirectly via maybeword and then the second rule.
You might think that this is a distinction without a difference, because it does not change whether any particular input is valid or not. But it does affect which actions are run. One parsing order runs the second rule's action; the other runs the first rule's action and the third rule's action. In this example, the output of the program changes.
Bison resolves a reduce/reduce conflict by choosing to use the rule that appears first in the grammar, but it is very risky to rely on this. Every reduce/reduce conflict must be studied and usually eliminated. Here is the proper way to define sequence:
    sequence: /* empty */
                    { printf ("empty sequence\n"); }
            | sequence word
                    { printf ("added word %s\n", $2); }
            ;
Here is another common error that yields a reduce/reduce conflict:
    sequence: /* empty */
            | sequence words
            | sequence redirects
            ;

    words:    /* empty */
            | words word
            ;

    redirects: /* empty */
            | redirects redirect
            ;
The intention here is to define a sequence which can contain either word or redirect groupings. The individual definitions of sequence, words and redirects are error-free, but the three together make a subtle ambiguity: even an empty input can be parsed in infinitely many ways!
Consider: nothing-at-all could be a words. Or it could be two words in a row, or three, or any number. It could equally well be a redirects, or two, or any number. Or it could be a words followed by three redirects and another words. And so on.
Here are two ways to correct these rules. First, to make it a single level of sequence:
    sequence: /* empty */
            | sequence word
            | sequence redirect
            ;
Second, to prevent either a words or a redirects from being empty:
    sequence: /* empty */
            | sequence words
            | sequence redirects
            ;

    words:    word
            | words word
            ;

    redirects: redirect
            | redirects redirect
            ;
5.7 Mysterious Reduce/Reduce Conflicts
Sometimes reduce/reduce conflicts can occur that don't look warranted. Here is an example:
    %token ID
    %%
    def:        param_spec return_spec ','
            ;
    param_spec: type
            |   name_list ':' type
            ;
    return_spec: type
            |   name ':' type
            ;
    type:       ID
            ;
    name:       ID
            ;
    name_list:  name
            |   name ',' name_list
            ;
It would seem that this grammar can be parsed with only a single token of look-ahead: when a param_spec is being read, an ID is a name if a comma or colon follows, or a type if another ID follows. In other words, this grammar is LR(1).
However, Bison, like most parser generators, cannot actually handle all LR(1) grammars. In this grammar, two contexts, that after an ID at the beginning of a param_spec and likewise at the beginning of a return_spec, are similar enough that Bison assumes they are the same. They appear similar because the same set of rules would be active: the rule for reducing to a name and that for reducing to a type. Bison is unable to determine at that stage of processing that the rules would require different look-ahead tokens in the two contexts, so it makes a single parser state for them both. Combining the two contexts causes a conflict later. In parser terminology, this occurrence means that the grammar is not LALR(1).
In general, it is better to fix deficiencies than to document them. But this particular deficiency is intrinsically hard to fix; parser generators that can handle LR(1) grammars are hard to write and tend to produce parsers that are very large. In practice, Bison is more useful as it is now.
When the problem arises, you can often fix it by identifying the two parser states that are being confused, and adding something to make them look distinct. In the above example, adding one rule to return_spec as follows makes the problem go away:
    %token BOGUS
    ...
    %%
    ...
    return_spec: type
            |   name ':' type
            /* This rule is never used.  */
            |   ID BOGUS
            ;
This corrects the problem because it introduces the possibility of an additional active rule in the context after the ID at the beginning of return_spec. This rule is not active in the corresponding context in a param_spec, so the two contexts receive distinct parser states. As long as the token BOGUS is never generated by yylex, the added rule cannot alter the way actual input is parsed.
In this particular example, there is another way to solve the problem: rewrite the rule for return_spec to use ID directly instead of via name. This also causes the two confusing contexts to have different sets of active rules, because the one for return_spec activates the altered rule for return_spec rather than the one for name.
    param_spec: type
            |   name_list ':' type
            ;
    return_spec: type
            |   ID ':' type
            ;
5.8 Stack Overflow, and How to Avoid It
The Bison parser stack can overflow if too many tokens are shifted and not reduced. When this happens, the parser function yyparse returns a nonzero value, pausing only to call yyerror to report the overflow.
By defining the macro YYMAXDEPTH, you can control how deep the parser stack can become before a stack overflow occurs. Define the macro with a value that is an integer. This value is the maximum number of tokens that can be shifted (and not reduced) before overflow. It must be a constant expression whose value is known at compile time.
The stack space allowed is not necessarily allocated. If you specify a large value for YYMAXDEPTH, the parser actually allocates a small stack at first, and then makes it bigger by stages as needed. This increasing allocation happens automatically and silently. Therefore, you do not need to make YYMAXDEPTH painfully small merely to save space for ordinary inputs that do not need much stack.
The default value of YYMAXDEPTH, if you do not define it, is 10000.
You can control how much stack is allocated initially by defining the macro YYINITDEPTH. This value too must be a compile-time constant integer. The default is 200.
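Here is a sketch of how these macros might be set in a grammar file; the values and the trivial main function are illustrations only:

    %{
    #define YYINITDEPTH 500    /* initial stack allocation (default 200)   */
    #define YYMAXDEPTH  50000  /* overflow threshold       (default 10000) */
    %}
    ...
    %%
    ...
    %%
    int
    main (void)
    {
      /* yyparse returns nonzero on a syntax error or stack overflow.  */
      return yyparse () == 0 ? 0 : 1;
    }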