```python
from funcparserlib.lexer import make_tokenizer
from funcparserlib.parser import some

tokenize = make_tokenizer([
    (u'x', (ur'x',)),
])

some(lambda t: t.type == "x").parse(tokenize("x"))
```
results in
```
Traceback (most recent call last):
  File "/Users/gsnedders/Documents/other-projects/funcparserlib/funcparserlib/funcparserlib/tests/test_parsing.py", line 76, in test_tokenize
    some(lambda t: t.type == "x").parse(tokenize("x"))
  File "/Users/gsnedders/Documents/other-projects/funcparserlib/funcparserlib/funcparserlib/parser.py", line 121, in parse
    (tree, _) = self.run(tokens, State())
  File "/Users/gsnedders/Documents/other-projects/funcparserlib/funcparserlib/funcparserlib/parser.py", line 309, in _some
    if s.pos >= len(tokens):
TypeError: object of type 'generator' has no len()
```
tokenize("x") is a generator, and you can't call len on a generator.
gsnedders added a commit to gsnedders/funcparserlib that referenced this issue on Jun 28, 2016.
Per @vlasovskikh in #45 (comment), this is intended behaviour, which, yes, means you can't pass the return value of `make_tokenizer` to `parse` directly and instead need to wrap it in `list`.
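A minimal sketch of that workaround, assuming the same Python 2 setup as the reproduction above:

```python
from funcparserlib.lexer import make_tokenizer
from funcparserlib.parser import some

tokenize = make_tokenizer([
    (u'x', (ur'x',)),
])

# Materialize the lazy token stream before handing it to parse(),
# since parse() needs to take its length and index into it.
tokens = list(tokenize("x"))
some(lambda t: t.type == "x").parse(tokens)
```

Wrapping in `list` forces the generator, so `len(tokens)` and `tokens[s.pos]` in `_some` both work as the parser expects.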