Syntactic analysis¶
- @production(prod, priority=None)¶
Use this decorator to declare a grammar production:
from ptk.lexer import ReLexer, token
from ptk.parser import LRParser, production

class MyParser(LRParser, ReLexer):
    @production('E -> E "+" E')
    def sum(self):
        pass
See the Production syntax section.
The priority argument may be specified to declare that the production has the same priority as an existing token type. Typical use for unary minus:
from ptk.lexer import ReLexer, token
from ptk.parser import LRParser, production

class MyParser(LRParser, ReLexer):
    # omitting productions for binary +, -, * and /
    @production('E -> "-" E', priority='*')
    def minus(self):
        pass
You can also use a token type that has not been declared to the lexer as long as you have declared an explicit priority for it, using one of the associativity decorators:
from ptk.lexer import ReLexer, token
from ptk.parser import LRParser, production, leftAssoc, nonAssoc

@leftAssoc('+', '-')
@leftAssoc('*', '/')
@nonAssoc('UNARYMINUS')  # Non associative, higher priority than anything else
class MyParser(LRParser, ReLexer):
    @production('E -> "-" E', priority='UNARYMINUS')
    def minus(self):
        pass
- exception ptk.parser.ParseError(grammar, tok, state, tokens)[source]¶
Syntax error when parsing.
- Variables
token – The unexpected token.
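A minimal handling sketch; MyParser stands for a parser class defined as above, and parse() is assumed here to be the synchronous lexer entry point:

from ptk.parser import ParseError

parser = MyParser()
try:
    parser.parse('1 + + 2')   # malformed input
except ParseError as exc:
    # 'token' holds the unexpected token, as documented above.
    print('Syntax error near %r' % (exc.token,))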
- ptk.parser.leftAssoc(*operators)[source]¶
Class decorator for left associative operators. Use this to decorate your Parser class. Operators passed as arguments are assumed to have the same priority. The later you declare associativity, the higher the priority; so the following code

@leftAssoc('+', '-')
@leftAssoc('*', '/')
class MyParser(LRParser):
    # ...

declares ‘+’ and ‘-’ to be left associative, with the same priority. ‘*’ and ‘/’ are also left associative, with a higher priority than ‘+’ and ‘-’.
See also the priority argument to production().
- ptk.parser.rightAssoc(*operators)[source]¶
Class decorator for right associative operators. Same remarks as leftAssoc().
- ptk.parser.nonAssoc(*operators)[source]¶
Class decorator for non associative operators. Same remarks as leftAssoc().
- class ptk.parser.LRParser[source]¶
LR(1) parser. This class is intended to be used with a lexer class derived from LexerBase, using inheritance; it overrides LexerBase.newToken() so you must inherit from the parser first, then the lexer:

class MyParser(LRParser, ReLexer):
    # ...
- class ptk.parser.ProductionParser(callback, priority, grammarClass, attributes)[source]¶
- class ptk.async_parser.AsyncLRParser[source]¶
This class works like LRParser but supports asynchronous methods (new in Python 3.5). You must use AsyncLexer in conjunction with it:

class Parser(AsyncLRParser, AsyncLexer):
    # ...

and only use AsyncLexer.asyncFeed() to feed it the input stream. Semantic actions must be asynchronous methods as well. When the start symbol is reduced, the asyncNewSentence() method is awaited.
Production syntax¶
Basics¶
The productions passed to the production() decorator are written in a variant of BNF; for example:
from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('E -> E plus E')
    def binaryop_sum(self):
        pass

    @production('E -> E minus E')
    def binaryop_minus(self):
        pass
Here non terminal symbols are uppercase and terminals (token types) are lowercase, but this is only a convention.
When you don’t need separate semantic actions, you can group several productions, either by using the ‘|’ symbol:
from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('E -> E plus E | E minus E')
    def binaryop(self):
        pass
Or by decorating the same method several times:
from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('E -> E plus E')
    @production('E -> E minus E')
    def binaryop(self):
        pass
Semantic values¶
The semantic value associated with a production is the return value of the decorated method. By default, the values of the items on the right side of the production are not passed to the method; to receive them, you must associate each item with a name, written between angle brackets after the item. That name is then used as the name of a keyword argument passed to the method. For instance:
from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('E -> E<left> plus E<right>')
    def sum(self, left, right):
        return left + right
You can thus use alternatives and default argument values to slightly change the action’s behavior depending on the actual matched production:
from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('SYMNAME -> identifier<value> | identifier<value> left_bracket identifier<name> right_bracket')
    def symname(self, value, name=None):
        if name is None:
            pass  # First form, name not specified
        else:
            pass  # Second form
Position in code¶
You may want to store the position in the input stream at which a production matched, for instance to report warnings. Applying the same naming syntax to the left part of the production lets you get this information as a keyword argument:
from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('SYMNAME<position> -> identifier<value> | identifier<value> left_bracket identifier<name> right_bracket')
    def symname(self, position, value, name=None):
        if name is None:
            pass  # First form, name not specified
        else:
            pass  # Second form
In this case, the position argument holds a named tuple with line and column attributes; both start at 1.
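As a sketch of how this can be used for warning reporting (the identifier token, the length check and the message are made up for the example):

from ptk.lexer import ReLexer, token
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @token('[a-zA-Z_][a-zA-Z0-9_]*')
    def identifier(self, tok):
        pass

    @production('SYMNAME<position> -> identifier<value>')
    def symname(self, position, value):
        # position.line and position.column are both 1-based.
        if len(value) > 32:
            print('Warning: %d:%d: name "%s" is rather long'
                  % (position.line, position.column, value))
        return value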
Literal tokens¶
A literal token may appear in a production, between double quotes. This allows you to skip declaring “simple” tokens at the lexer level.
from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('E -> E "+" E')
    def sum(self):
        pass
Note
Those tokens are considered “declared” after the ones explicitly declared through the token() decorator. This may be important because of the disambiguation rules; see the notes for the token() decorator.
Literal tokens may be named as well.
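For instance, the following sketch binds the matched literal to a keyword argument just like a named symbol; as in the calculator example further down, its value is the matched text:

from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    # The literal "+" is bound to the 'op' keyword argument; here its
    # value is simply the string '+'.
    @production('E -> E<left> "+"<op> E<right>')
    def sum(self, left, op, right):
        return left + right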
Repeat operators¶
A nonterminal in the right side of a production may be immediately followed by a repeat operator among “*”, “+” and “?”, which have the same meaning as in regular expressions. Note that this is only syntactic sugar; under the hood additional productions are generated.
A -> B?
is equivalent to
A ->
A -> B
The semantic value is None if the empty production was applied, or the semantic value of B if the ‘A -> B’ production was applied.
A -> B*
is equivalent to
A ->
A -> L_B
L_B -> B
L_B -> L_B B
The semantic value is a list of the semantic values for B. ‘+’ works the same way, except that there is no empty production, so the list cannot be empty.
Additionally, the ‘*’ and ‘+’ operators may include a separator specification, which is a symbol name or literal token between parentheses:
A -> B+("|")
is equivalent to
A -> L_B
L_B -> B
L_B -> L_B "|" B
The semantic value is still a list of B values; there is no way to get the values for the separators.
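As an illustration of the ‘?’ operator, here is a sketch of a production accepting an optional sign in front of a number (the grammar and token definitions are made up for the example):

from ptk.lexer import ReLexer, token
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @token('[0-9]+')
    def number(self, tok):
        tok.value = int(tok.value)

    @production('SIGN -> "-"')
    def sign(self):
        return '-'

    @production('NUMBER -> SIGN?<sign> number<value>')
    def signed_number(self, sign, value):
        # 'sign' is None when the optional SIGN part is absent.
        return -value if sign == '-' else value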
Wrapping it up¶
A fully functional parser for a four-operation integer calculator:
import operator

from ptk.lexer import ReLexer, token
from ptk.parser import LRParser, production, leftAssoc

@leftAssoc('+', '-')
@leftAssoc('*', '/')
class Parser(LRParser, ReLexer):
    @token('[1-9][0-9]*')
    def number(self, tok):
        tok.value = int(tok.value)

    @production('E -> number<n>')
    def litteral(self, n):
        return n

    @production('E -> "-" E<val>', priority='*')
    def minus(self, val):
        return -val

    @production('E -> "(" E<val> ")"')
    def paren(self, val):
        return val

    @production('E -> E<left> "+"<op> E<right>')
    @production('E -> E<left> "-"<op> E<right>')
    @production('E -> E<left> "*"<op> E<right>')
    @production('E -> E<left> "/"<op> E<right>')
    def binaryop(self, left, op, right):
        return {
            '+': operator.add,
            '-': operator.sub,
            '*': operator.mul,
            '/': operator.floordiv
        }[op](left, right)
Parsing lists of integers separated by commas:
from ptk.lexer import ReLexer, token
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @token('[1-9][0-9]*')
    def number(self, tok):
        tok.value = int(tok.value)

    @production('LIST -> number*(",")<values>')
    def integer_list(self, values):
        print('Values are: %s' % values)
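Either parser can then be driven by feeding it a string. The sketch below assumes a parse() entry point on the lexer side, mirroring asyncFeed() from the asynchronous API described below; the list parser above prints its result directly from the semantic action:

# Hypothetical driver for the list parser above; parse() is an assumption.
parser = Parser()
parser.parse('1, 2, 3')   # expected to print: Values are: [1, 2, 3]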
Conflict resolution rules¶
Conflict resolution rules are the same as those used by Yacc/Bison. A shift/reduce conflict is resolved by choosing to shift. A reduce/reduce conflict is resolved by choosing the reduction associated with the first declared production. leftAssoc(), rightAssoc(), nonAssoc() and the priority argument to production() allow you to disambiguate explicitly.
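As an illustration of the shift preference, here is a sketch of the classic dangling-else grammar fragment (EXPR and the remaining STMT alternatives are omitted): the shift/reduce conflict on "else" is resolved by shifting, so the "else" clause attaches to the innermost "if":

from ptk.lexer import ReLexer
from ptk.parser import LRParser, production

class Parser(LRParser, ReLexer):
    @production('STMT -> "if" EXPR "then" STMT')
    def if_then(self):
        pass

    # Shift/reduce conflict on "else": resolved by shifting, which binds
    # the "else" to the innermost "if".
    @production('STMT -> "if" EXPR "then" STMT "else" STMT')
    def if_then_else(self):
        pass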
Asynchronous lexer/parser¶
The AsyncLexer and AsyncLRParser classes allow you to parse an input stream asynchronously. Since this uses the asynchronous method syntax introduced in Python 3.5, it is only available with Python 3.5 or later. Additionally, you must install the async_generator module.
The basic idea is that the production methods are asynchronous. Feed the input stream one byte/char at a time by awaiting on AsyncLexer.asyncFeed(). When a token has been recognized unambiguously, this will in turn await on AsyncLRParser.asyncNewToken(). Semantic actions may then be awaited as a result.
Note that if you use a consumer in your lexer, the feed method must be asynchronous as well.
The samples directory contains the following example of an asynchronous parser:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-

"""
Four operations calculator, asynchronous. Due to various buffering
problems you probably won't see what's the point unless you force
stdin to be noninteractive, e.g.

$ echo '3*4+6' | python3 ./async_calc.py
"""

import operator, os, asyncio, sys, codecs

from ptk.async_lexer import token, AsyncLexer, EOF
from ptk.async_parser import production, leftAssoc, AsyncLRParser, ParseError


@leftAssoc('+', '-')
@leftAssoc('*', '/')
class Parser(AsyncLRParser, AsyncLexer):
    async def asyncNewSentence(self, result):
        print('== Result:', result)

    # Lexer

    def ignore(self, char):
        return char in [' ', '\t']

    @token(r'[1-9][0-9]*')
    def number(self, tok):
        tok.value = int(tok.value)

    # Parser

    @production('E -> "-" E<value>', priority='*')
    async def minus(self, value):
        print('== Neg: - %d' % value)
        return -value

    @production('E -> "(" E<value> ")"')
    async def paren(self, value):
        return value

    @production('E -> number<number>')
    async def litteral(self, number):
        return number

    @production('E -> E<left> "+"<op> E<right>')
    @production('E -> E<left> "-"<op> E<right>')
    @production('E -> E<left> "*"<op> E<right>')
    @production('E -> E<left> "/"<op> E<right>')
    async def binaryop(self, left, op, right):
        print('Binary operation: %s %s %s' % (left, op, right))
        return {
            '+': operator.add,
            '-': operator.sub,
            '*': operator.mul,
            '/': operator.floordiv
        }[op](left, right)


async def main():
    reader = asyncio.StreamReader()
    await asyncio.get_event_loop().connect_read_pipe(lambda: asyncio.StreamReaderProtocol(reader), sys.stdin)
    decoder = codecs.getincrementaldecoder('utf_8')()
    parser = Parser()
    while True:
        byte = await reader.read(1)
        if not byte:
            break
        char = decoder.decode(byte)
        if char:
            if char == '\n':
                char = EOF
            else:
                print('Input char: "%s"' % repr(char))
            await parser.asyncFeed(char)


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Asynchronous lexer/parser using Deferreds¶
The DeferredLexer and DeferredLRParser classes work the same as AsyncLexer and AsyncLRParser, but use Twisted’s Deferred objects instead of Python 3.5-style asynchronous methods. The special methods are called DeferredLexer.deferNewToken() and DeferredLRParser.deferNewSentence() and must return Deferred instances. Semantic actions can return either Deferred instances or regular values. See the defer_calc.py sample for details.