2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
How do you implement a HanLP tokenizer (HanLPTokenizer)? This article walks through the problem with a detailed analysis and a working solution, in the hope that it helps readers facing the same task find a simple, practical approach.
The functional extensions of HanLP mainly cover the following areas:
Keyword extraction
Automatic summary
Phrase extraction
Pinyin conversion
Simplified-to-Traditional Chinese conversion
Text recommendation
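Most of these extensions are exposed as static utilities on the `com.hankcs.hanlp.HanLP` facade class in the portable-1.3.x releases. The following is a minimal sketch of calling them; the sample sentences and result sizes are illustrative, not taken from the original article, and the HanLP jar must be on the classpath:

```java
import java.util.List;

import com.hankcs.hanlp.HanLP;

public class HanLPFeatureDemo {
    public static void main(String[] args) {
        String document = "自然语言处理是人工智能的重要方向。分词是自然语言处理的基础任务。"
                + "关键词提取和自动摘要都建立在分词之上。";

        // Keyword extraction: top 3 keywords (TextRank-based)
        List<String> keywords = HanLP.extractKeyword(document, 3);

        // Automatic summary: top 2 key sentences
        List<String> summary = HanLP.extractSummary(document, 2);

        // Phrase extraction: top 3 phrases
        List<String> phrases = HanLP.extractPhrase(document, 3);

        // Pinyin conversion, tokens joined with spaces
        String pinyin = HanLP.convertToPinyinString("自然语言处理", " ", true);

        // Simplified-to-Traditional Chinese conversion
        String traditional = HanLP.convertToTraditionalChinese("简体转繁体");

        System.out.println("keywords:    " + keywords);
        System.out.println("summary:     " + summary);
        System.out.println("phrases:     " + phrases);
        System.out.println("pinyin:      " + pinyin);
        System.out.println("traditional: " + traditional);
    }
}
```

Text recommendation is handled separately by `com.hankcs.hanlp.suggest.Suggester`, which indexes sentences and returns the closest matches for a query.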
Here is the code for the HanLP tokenizer wrapper.
Note: the code uses the following Maven dependency and is processed with Java 8:

<dependency>
    <groupId>com.hankcs</groupId>
    <artifactId>hanlp</artifactId>
    <version>portable-1.3.4</version>
</dependency>
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

import org.apache.commons.lang3.StringUtils;

import com.hankcs.hanlp.seg.Segment;
import com.hankcs.hanlp.seg.Dijkstra.DijkstraSegment;
import com.hankcs.hanlp.seg.NShort.NShortSegment;
import com.hankcs.hanlp.tokenizer.IndexTokenizer;
import com.hankcs.hanlp.tokenizer.NLPTokenizer;
import com.hankcs.hanlp.tokenizer.SpeedTokenizer;
import com.hankcs.hanlp.tokenizer.StandardTokenizer;

public class HanLPTokenizer {

    private static final Segment N_SHORT_SEGMENT = new NShortSegment().enableCustomDictionary(false)
            .enablePlaceRecognize(true).enableOrganizationRecognize(true);

    private static final Segment DIJKSTRA_SEGMENT = new DijkstraSegment().enableCustomDictionary(false)
            .enablePlaceRecognize(true).enableOrganizationRecognize(true);

    /**
     * Standard segmentation
     * @param text input text
     * @return distinct non-blank tokens
     */
    public static List<String> standard(String text) {
        List<String> list = new ArrayList<>();
        StandardTokenizer.segment(text).forEach(term -> {
            if (StringUtils.isNotBlank(term.word)) {
                list.add(term.word);
            }
        });
        return list.stream().distinct().collect(Collectors.toList());
    }

    /**
     * NLP segmentation
     * @param text input text
     * @return distinct non-blank tokens
     */
    public static List<String> nlp(String text) {
        List<String> list = new ArrayList<>();
        NLPTokenizer.segment(text).forEach(term -> {
            if (StringUtils.isNotBlank(term.word)) {
                list.add(term.word);
            }
        });
        return list.stream().distinct().collect(Collectors.toList());
    }

    /**
     * Index segmentation
     * @param text input text
     * @return distinct non-blank tokens
     */
    public static List<String> index(String text) {
        List<String> list = new ArrayList<>();
        IndexTokenizer.segment(text).forEach(term -> {
            if (StringUtils.isNotBlank(term.word)) {
                list.add(term.word);
            }
        });
        return list.stream().distinct().collect(Collectors.toList());
    }

    /**
     * High-speed dictionary segmentation
     * @param text input text
     * @return non-blank tokens (duplicates preserved)
     */
    public static List<String> speed(String text) {
        List<String> list = new ArrayList<>();
        SpeedTokenizer.segment(text).forEach(term -> {
            if (StringUtils.isNotBlank(term.word)) {
                list.add(term.word);
            }
        });
        return list;
    }

    /**
     * N-shortest-path segmentation
     * @param text input text
     * @return distinct non-blank tokens
     */
    public static List<String> nShort(String text) {
        List<String> list = new ArrayList<>();
        N_SHORT_SEGMENT.seg(text).forEach(term -> {
            if (StringUtils.isNotBlank(term.word)) {
                list.add(term.word);
            }
        });
        return list.stream().distinct().collect(Collectors.toList());
    }

    /**
     * Shortest-path (Dijkstra) segmentation
     * @param text input text
     * @return distinct non-blank tokens
     */
    public static List<String> shortest(String text) {
        List<String> list = new ArrayList<>();
        DIJKSTRA_SEGMENT.seg(text).forEach(term -> {
            if (StringUtils.isNotBlank(term.word)) {
                list.add(term.word);
            }
        });
        return list.stream().distinct().collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String text = "Test do not move 12";
        System.out.println("Standard segmentation: " + standard(text));
        System.out.println("NLP segmentation: " + nlp(text));
        System.out.println("Index segmentation: " + index(text));
        System.out.println("N-shortest-path segmentation: " + nShort(text));
        System.out.println("Shortest-path segmentation: " + shortest(text));
        System.out.println("High-speed dictionary segmentation: " + speed(text));
    }
}
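Note that the two path-based segmenters above deliberately call enableCustomDictionary(false). If you instead want user-defined words to be respected, HanLP's runtime custom dictionary can be used; the word added below is purely illustrative:

```java
import com.hankcs.hanlp.dictionary.CustomDictionary;
import com.hankcs.hanlp.tokenizer.StandardTokenizer;

public class CustomDictionaryDemo {
    public static void main(String[] args) {
        // Register a user-defined word at runtime (illustrative example word)
        CustomDictionary.add("词法分析器");
        // StandardTokenizer consults the custom dictionary by default
        System.out.println(StandardTokenizer.segment("这个词法分析器很好用"));
    }
}
```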
This concludes the answer on how to implement the HanLP tokenizer HanLPTokenizer. I hope the content above is of some help to you.