7.8.4. TokenBigramIgnoreBlankSplitSymbol
7.8.4.1. Summary
TokenBigramIgnoreBlankSplitSymbol is similar to TokenBigram. The differences between them are the following:
Blank handling
Symbol handling
7.8.4.2. Syntax
TokenBigramIgnoreBlankSplitSymbol has no parameters:
TokenBigramIgnoreBlankSplitSymbol
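Because there is no parameter, you only write the tokenizer name. For example, here is a minimal sketch of using it as the default tokenizer of a lexicon table; the Terms table name and the NormalizerAuto normalizer are illustrative assumptions, not part of the syntax:

table_create Terms TABLE_PAT_KEY ShortText --default_tokenizer TokenBigramIgnoreBlankSplitSymbol --normalizer NormalizerAuto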
7.8.4.3. Usage
TokenBigramIgnoreBlankSplitSymbol ignores white-spaces in continuous symbols and non-ASCII characters. TokenBigramIgnoreBlankSplitSymbol tokenizes symbols by the bigram tokenize method.
You can see the difference between them with the 日 本 語 ! ! ! text because it has both symbols and non-ASCII characters.
Here is the result with TokenBigram:
Execution example:
tokenize TokenBigram "日 本 語 ! ! !" NormalizerAuto
# [
#   [
#     0,
#     1337566253.89858,
#     0.000355720520019531
#   ],
#   [
#     {
#       "position": 0,
#       "force_prefix": false,
#       "value": "日"
#     },
#     {
#       "position": 1,
#       "force_prefix": false,
#       "value": "本"
#     },
#     {
#       "position": 2,
#       "force_prefix": false,
#       "value": "語"
#     },
#     {
#       "position": 3,
#       "force_prefix": false,
#       "value": "!"
#     },
#     {
#       "position": 4,
#       "force_prefix": false,
#       "value": "!"
#     },
#     {
#       "position": 5,
#       "force_prefix": false,
#       "value": "!"
#     }
#   ]
# ]
Here is the result with TokenBigramIgnoreBlankSplitSymbol:
Execution example:
tokenize TokenBigramIgnoreBlankSplitSymbol "日 本 語 ! ! !" NormalizerAuto
# [
#   [
#     0,
#     1337566253.89858,
#     0.000355720520019531
#   ],
#   [
#     {
#       "position": 0,
#       "force_prefix": false,
#       "value": "日本"
#     },
#     {
#       "position": 1,
#       "force_prefix": false,
#       "value": "本語"
#     },
#     {
#       "position": 2,
#       "force_prefix": false,
#       "value": "語!"
#     },
#     {
#       "position": 3,
#       "force_prefix": false,
#       "value": "!!"
#     },
#     {
#       "position": 4,
#       "force_prefix": false,
#       "value": "!!"
#     },
#     {
#       "position": 5,
#       "force_prefix": false,
#       "value": "!"
#     }
#   ]
# ]
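To use this tokenizer for full-text search, attach an index column to a lexicon such as the Terms table sketched in the Syntax section. The following is a rough sketch under assumed names (the Memos table, its content column and the memos_content index column are hypothetical), not an excerpt from the examples above:

table_create Memos TABLE_NO_KEY
column_create Memos content COLUMN_SCALAR ShortText
column_create Terms memos_content COLUMN_INDEX|WITH_POSITION Memos content

With such an index, white-spaces in continuous symbols and non-ASCII characters in the indexed text are ignored at tokenization time, as the second execution example above shows.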