Simple example
This is a simple analysis request: the whitespace analyzer splits the text into tokens on whitespace.
```console
POST _analyze
{
  "analyzer": "whitespace",
  "text": "The quick brown fox."
}
```
```json
{
  "tokens" : [
    {
      "token" : "The",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "quick",
      "start_offset" : 4,
      "end_offset" : 9,
      "type" : "word",
      "position" : 1
    },
    {
      "token" : "brown",
      "start_offset" : 10,
      "end_offset" : 15,
      "type" : "word",
      "position" : 2
    },
    {
      "token" : "fox.",
      "start_offset" : 16,
      "end_offset" : 20,
      "type" : "word",
      "position" : 3
    }
  ]
}
```
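The whitespace analyzer's behavior can be sketched in Python (a rough approximation for illustration, not the actual Lucene implementation): each run of non-whitespace characters becomes one token, which is why the trailing period stays attached to `fox.`.

```python
import re

def whitespace_tokens(text):
    """Rough approximation of the whitespace analyzer:
    each run of non-whitespace characters becomes one token."""
    return [
        {
            "token": m.group(),
            "start_offset": m.start(),
            "end_offset": m.end(),
            "type": "word",
            "position": i,
        }
        for i, m in enumerate(re.finditer(r"\S+", text))
    ]

tokens = whitespace_tokens("The quick brown fox.")
# The period is not stripped: the last token is "fox." at offsets 16-20,
# matching the response above.
```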
Text analysis with the full pipeline
```console
POST _analyze
{
  "char_filter": [
    "html_strip"
  ],
  "tokenizer": "standard",
  "filter": [ "lowercase", "asciifolding" ],
  "text": "Is this déja vu <b>test</b> ?"
}
```
The request specifies:
- `char_filter`: character filters; zero or more may be configured
- `tokenizer`: the tokenizer; there is exactly one
- `filter`: token filters; zero or more may be configured
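The same char filter / tokenizer / token filter combination can also be registered as a named custom analyzer in the index settings, following the analysis-custom-analyzer reference linked below (the names `my-index` and `my_custom_analyzer` here are placeholders):

```console
PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip" ],
          "tokenizer": "standard",
          "filter": [ "lowercase", "asciifolding" ]
        }
      }
    }
  }
}
```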
In this request, the stages do the following, in order:
- `html_strip` removes HTML tags
- `standard` is the default tokenizer; it splits on word boundaries and removes most punctuation
- `lowercase` converts tokens to lowercase; `asciifolding` converts letters, digits, and symbols outside the Basic Latin Unicode block (the first 127 ASCII characters) to their ASCII equivalents, where such equivalents exist
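The three stages listed above can be approximated in Python (a simplification for illustration; the real Elasticsearch pipeline, offset mapping in particular, is more involved): strip tags with a regex, split into runs of word characters like the standard tokenizer, then lowercase and ASCII-fold via Unicode NFKD decomposition.

```python
import re
import unicodedata

def analyze(text):
    # char_filter: html_strip — remove HTML tags (crude regex version)
    text = re.sub(r"<[^>]*>", "", text)
    # tokenizer: standard — roughly, keep runs of word characters,
    # dropping punctuation such as the trailing "?"
    tokens = re.findall(r"\w+", text)
    # filter: lowercase, then asciifolding via NFKD decomposition,
    # dropping combining marks so "déja" becomes "deja"
    return [
        unicodedata.normalize("NFKD", t.lower())
        .encode("ascii", "ignore")
        .decode("ascii")
        for t in tokens
    ]

analyze("Is this déja vu <b>test</b> ?")
```

The resulting token strings match the response below, although this sketch does not compute the character offsets.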
The result is as follows:
```json
{
  "tokens" : [
    {
      "token" : "is",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "<ALPHANUM>",
      "position" : 0
    },
    {
      "token" : "this",
      "start_offset" : 3,
      "end_offset" : 7,
      "type" : "<ALPHANUM>",
      "position" : 1
    },
    {
      "token" : "deja",
      "start_offset" : 8,
      "end_offset" : 12,
      "type" : "<ALPHANUM>",
      "position" : 2
    },
    {
      "token" : "vu",
      "start_offset" : 13,
      "end_offset" : 15,
      "type" : "<ALPHANUM>",
      "position" : 3
    },
    {
      "token" : "test",
      "start_offset" : 20,
      "end_offset" : 28,
      "type" : "<ALPHANUM>",
      "position" : 4
    }
  ]
}
```
References
- https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-custom-analyzer.html
- https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-overview.html