Lines matching refs:token

59         # Track the set of token types that can follow any rule invocation.
64 # matched a token. Prevents generation of more than one error message
70 # but no token is consumed during recovery...another error is found,
72 # one token/tree node is consumed for two errors.
80 # the stop token index for each rule. ruleMemo[ruleIndex] is
82 # get back the stop token for associated rule or MEMO_RULE_FAILED.
95 ## The goal of all lexer rules/methods is to create a token object.
97 # create a single token. nextToken will return this object after
98 # matching lexer rule(s). If you subclass to allow multiple token
99 # emissions, then set this to the last token to be matched or
100 # something non-null so that the auto token emit mechanism will not
101 # emit another token.
102 self.token = None
104 ## What character index in the stream did the current token start at?
105 # Needed, for example, to get the text for current token. Set at
109 ## The line on which the first character of the token resides
115 ## The channel number for the current token
118 ## The token type for the current token
121 ## You can set the text for the current token to override what is in
200 single token insertion or deletion error recovery. If
203 To turn off single token insertion or deletion error
239 # a single token and hope for the best
251 # if current token is consistent with what could come after set
252 # then we know we're missing a token; error recovery is free to
253 # "insert" the missing token
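
The comments around 251-253 describe the test behind "single token insertion": if the token actually sitting at LA(1) is something that could legally follow the token we expected, the expected token is treated as missing and parsing continues. A minimal sketch of that membership test, assuming the follow set is a plain set of token types (the names below are illustrative, not the runtime's API):

    def mismatch_is_missing_token(input_stream, follow_set):
        # With no context to check against, we cannot claim the expected
        # token is merely missing.
        if follow_set is None:
            return False
        # LA(1) is already viable after the expected token, so error
        # recovery is free to "insert" the missing token and move on.
        return input_stream.LA(1) in follow_set
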
266 a token (after a resync). So it will go:
270 3. consume until token found in resync set
279 # if we've already reported an error and have not matched a token
343 tokenName, self.getTokenErrorDisplay(e.token)
354 + self.getTokenErrorDisplay(e.token) \
370 + self.getTokenErrorDisplay(e.token)
374 + self.getTokenErrorDisplay(e.token)
378 + self.getTokenErrorDisplay(e.token) \
384 + self.getTokenErrorDisplay(e.token) \
406 an error and next valid token match
426 How should a token be displayed in an error message? The default
430 the token). This is better than forcing you to override a method in
431 your token objects because you don't have to go modify your lexer
454 single token insertion and deletion, this will usually not
456 token that the match() routine could not recover from.
462 # uh oh, another error at same token index; must be a case
463 # where LT(1) is in the recovery token set so nothing is
464 # consumed; consume a single token so at least to prevent
478 A hook to listen in on the token consumption during error recovery.
487 A hook to listen in on the token consumption during error recovery.
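
Lines 454-487 describe panic-mode recover(): if no token was consumed since the previous error (same error index), one token is consumed unconditionally to guarantee progress, then input is discarded until LA(1) is in the resync set, bracketed by the beginResync/endResync hooks. A sketch, assuming last_error_index and the hooks live on a small state object and that the follow set is a plain set of token types:

    EOF = -1   # assumed end-of-input token type

    def recover(state, input_stream, follow_set):
        if state.last_error_index == input_stream.index():
            # Second error at the same token index: nothing was consumed
            # during recovery, so eat one token to avoid an infinite loop.
            input_stream.consume()
        state.last_error_index = input_stream.index()

        state.begin_resync()        # hook: resync starts (hypothetical name)
        while input_stream.LA(1) != EOF and input_stream.LA(1) not in follow_set:
            input_stream.consume()  # throw away tokens until resynchronized
        state.end_resync()          # hook: resync done (hypothetical name)
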
511 input might just be missing a token--you might consume the
552 we resync'd to that token, we'd consume until EOF. We need to
557 At this point, it gets a mismatched token error and throws an
558 exception (since LA(1) is not in the viable following token
565 for the token that was a member of the recovery set.
594 This is set of token types that can follow a specific rule
631 At the "3" token, you'd have a call chain of
639 You want the exact viable token set when recovering from a
640 token mismatch. Upon token mismatch, if LA(1) is member of
641 the viable next token set, then you know there is most likely
642 a missing token in the input stream. "Insert" one by just not
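
Lines 511-642 sketch how the recovery and "viable next token" sets are built: each rule invocation pushes the set of token types that can follow it, and the set used for resynchronization is roughly the union of everything on that stack. A simplified sketch of that combination, using plain Python sets of token types:

    def combine_follow_sets(follow_stack):
        # Union of the FOLLOW sets pushed for every rule invocation still on
        # the call stack; the real computation also handles the end-of-rule
        # marker and exact context-sensitive FOLLOW, omitted here.
        combined = set()
        for follow in follow_stack:
            combined |= follow
        return combined

    # e.g. stat pushed {';'} and expr pushed {')', ';'} before the error:
    # combine_follow_sets([{';'}, {')', ';'}]) -> {';', ')'}
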
669 """Attempt to recover from a single missing or extra token.
673 LA(1) is not what we are looking for. If LA(2) has the right token,
674 however, then assume LA(1) is some extra spurious token. Delete it
680 If current token is consistent with what could come after
681 ttype then it is ok to 'insert' the missing token, else throw
694 mismatched token error. To recover, it sees that LA(1)==';'
695 is in the set of tokens that can follow the ')' token
701 # if next token is what we are looking for then "delete" this token
706 input.consume() # simply delete extra token
709 # report after consuming so AW sees the token in the exception
712 # we want to return the token we're actually matching
715 # move past ttype token as if all were ok
719 # can't recover with single token deletion, try insertion
724 # report after inserting so AW sees the token in the exception
738 # we don't know how to conjure up a token for sets yet
741 # TODO do single token deletion like above for Token mismatch
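
Lines 669-741 cover recoverFromMismatchedToken: delete a spurious token when LA(2) is the one we wanted, or pretend a missing token was present when LA(1) is already viable after it. A condensed sketch of both branches (helper and attribute names are illustrative; the conjured stand-in corresponds to the getMissingSymbol lines below):

    from types import SimpleNamespace

    def recover_from_mismatched_token(input_stream, expected_type, follow_set):
        # Single token deletion: the token after the bad one is what we want,
        # so LA(1) is extra; consume it, then match normally.
        if input_stream.LA(2) == expected_type:
            input_stream.consume()              # delete the spurious token
            matched = input_stream.LT(1)        # the token we actually want
            input_stream.consume()              # move past it as if all were ok
            return matched

        # Single token insertion: LA(1) already fits what can follow
        # expected_type, so conjure a stand-in and keep going.
        if follow_set is not None and input_stream.LA(1) in follow_set:
            return SimpleNamespace(type=expected_type, text="<missing>")

        raise RuntimeError("cannot recover from mismatched token")
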
748 into the label for the associated token ref; e.g., x=ID. Token
761 """Conjure up a missing token during error recovery.
767 $x points at that token. If that token is missing, but
768 the next token in the stream is what we want we assume that
769 this token is missing and we keep going. Because we
770 have to return some token to replace the missing token,
776 a CommonToken of the appropriate type. The text will be the token.
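
Lines 748-776 describe getMissingSymbol: when a token is missing but the next token in the stream is what the rule needs, recovery conjures a stand-in token so that labels such as x=ID still point at something. A sketch using a plain namespace in place of CommonToken; field names follow the comments above:

    from types import SimpleNamespace

    def get_missing_symbol(current_token, expected_type, token_names):
        # Use the symbolic name if we have one, otherwise the numeric type.
        if 0 <= expected_type < len(token_names):
            name = token_names[expected_type]
        else:
            name = str(expected_type)
        # Borrow the position of the token we are sitting on so error
        # messages and actions have a sensible line/column to report.
        return SimpleNamespace(
            type=expected_type,
            text="<missing %s>" % name,
            line=current_token.line,
            charPositionInLine=current_token.charPositionInLine)
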
786 ## This code is factored out from mismatched token and mismatched set
787 ## recovery. It handles "single token insertion" error recovery for
802 Consume tokens until one matches the given token or token set
804 tokenTypes can be a single token type or a set of token types
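
Lines 802-804 document consumeUntil, which accepts either a single token type or a set of types. A direct sketch of that loop:

    EOF = -1   # assumed end-of-input token type

    def consume_until(input_stream, token_types):
        # Normalize a single token type into a one-element set.
        if not isinstance(token_types, (set, frozenset)):
            token_types = {token_types}
        t = input_stream.LA(1)
        while t != EOF and t not in token_types:
            input_stream.consume()
            t = input_stream.LA(1)
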
908 return [token.text for token in tokens]
913 Given a rule number and a start token index number, return
917 It returns the index of the last token matched by the rule.
931 input stream? Return the stop token index or MEMO_RULE_UNKNOWN.
937 1 past the stop token matched for this rule last time.
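
Lines 913-937 (together with 80-82 near the top) describe rule memoization: ruleMemo[ruleIndex] maps a start token index to the stop token index the rule matched last time, or a failure marker, and a cache hit lets the parser seek to one past the remembered stop token instead of re-parsing. A compact sketch with assumed sentinel values:

    MEMO_RULE_FAILED = -2     # assumed sentinels; the runtime defines its own
    MEMO_RULE_UNKNOWN = -1

    class RuleMemo(object):
        def __init__(self):
            self.rule_memo = {}   # ruleIndex -> {startTokenIndex: stopTokenIndex}

        def get_rule_memoization(self, rule_index, start_index):
            return self.rule_memo.get(rule_index, {}).get(start_index,
                                                          MEMO_RULE_UNKNOWN)

        def memoize(self, rule_index, start_index, stop_index):
            self.rule_memo.setdefault(rule_index, {})[start_index] = stop_index

        def already_parsed_rule(self, input_stream, rule_index):
            stop = self.get_rule_memoization(rule_index, input_stream.index())
            if stop == MEMO_RULE_UNKNOWN:
                return False                  # never tried here; parse normally
            if stop != MEMO_RULE_FAILED:
                input_stream.seek(stop + 1)   # jump 1 past the stop token
            return True                       # (a failure would also be noted)
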
995 @brief Abstract base class for token producers.
1002 to keep going or you do not upon token recognition error. If you do not
1007 requested a token. Keep lexing until you get a valid one. Just report
1008 errors and keep going, looking for a valid token.
1024 The iteration will not include the final EOF token, see also the note
1033 """Return next token or raise StopIteration.
1035 Note that this will raise StopIteration when hitting the EOF token,
1040 token = self.nextToken()
1041 if token is None or token.type == EOF:
1043 return token
1076 self._state.token = None
1096 Return a token from this source; i.e., match a token on the char
1101 self._state.token = None
1113 if self._state.token is None:
1116 elif self._state.token == SKIP_TOKEN:
1119 return self._state.token
1132 Instruct the lexer to skip creating a token for current lexer rule
1133 and look for another token. nextToken() knows to keep looking when
1134 a lexer rule finishes with token set to SKIP_TOKEN. Recall that
1135 if token is None at end of any token rule, it creates one for you
1139 self._state.token = SKIP_TOKEN
1143 """This is the lexer entry point that sets instance var 'token'"""
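
Lines 1076-1143 outline the nextToken/skip protocol: each iteration clears state.token, runs the generated entry rule mTokens(), auto-emits a token if the rule did not set one, and keeps looping while the rule marked its result as SKIP_TOKEN. A sketch of that loop, assuming the lexer exposes mTokens(), emit(), and an end-of-input test under the names used here:

    SKIP_TOKEN = object()   # assumed sentinel, as referenced above

    def next_token(lexer):
        while True:
            lexer.state.token = None
            if lexer.at_eof():                 # hypothetical end-of-input test
                return lexer.make_eof_token()  # hypothetical EOF token factory
            lexer.mTokens()                    # run the generated lexer rules
            if lexer.state.token is None:
                lexer.emit()                   # auto-emit if the rule did not
            elif lexer.state.token is SKIP_TOKEN:
                continue                       # e.g. whitespace; look again
            return lexer.state.token
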
1160 def emit(self, token=None):
1162 The standard method called to automatically emit a token at the
1163 outermost lexical rule. The token object should point into the
1165 use that to set the token's text. Override this method to emit
1172 if token is None:
1173 token = CommonToken(
1180 token.line = self._state.tokenStartLine
1181 token.text = self._state.text
1182 token.charPositionInLine = self._state.tokenStartCharPositionInLine
1184 self._state.token = token
1186 return token
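
Lines 1160-1186 show emit() assembling the token for the rule that just finished from the positions recorded in the shared lexer state. A stripped-down sketch with a plain namespace standing in for CommonToken; the stop index is assumed to be one before the current character index:

    from types import SimpleNamespace

    def emit(state, token=None):
        if token is None:
            token = SimpleNamespace(
                type=state.type,
                channel=state.channel,
                start=state.tokenStartCharIndex,
                stop=state.charIndex - 1)     # assumed: last consumed char
        token.line = state.tokenStartLine
        token.text = state.text
        token.charPositionInLine = state.tokenStartCharPositionInLine
        state.token = token                   # nextToken() will return this
        return token
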
1246 Return the text matched so far for the current token or any
1260 Set the complete text of this token; it wipes any previous
1272 ## # if we've already reported an error and have not matched a token
1335 a token, so do the easy thing and just kill a character and hope
1401 """Set the token stream and reset the parser"""
1430 """Return the start token or tree."""
1435 """Return the stop token or tree."""