Shulou (Shulou.com) 05/31 Report --
This article describes how Spark MLlib implements feature selection based on the chi-square test. The editor finds it quite practical and shares it here as a reference; follow along to take a look.
The running code is as follows:

package spark.FeatureExtractionAndTransformation

import org.apache.spark.mllib.feature.ChiSqSelector
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Feature selection based on the chi-square test.
 * In statistical inference on categorical data, the chi-square test is
 * generally used to check whether a sample conforms to an expected
 * distribution: it measures the deviation between the observed values of
 * the sample and the theoretically expected values. The smaller the
 * chi-square value, the better the sample conforms.
 * Created by eric on 16-7-24.
 */
object FeatureSelection {
  val conf = new SparkConf()        // create the environment variable
    .setMaster("local")             // set the local master
    .setAppName("TF_IDF")           // set the application name
  val sc = new SparkContext(conf)

  def main(args: Array[String]) {
    val data = MLUtils.loadLibSVMFile(sc,
      "/home/eric/IdeaProjects/wordCount/src/main/spark/FeatureExtractionAndTransformation/fs.txt")
    val discretizedData = data.map { lp =>      // discretize the features
      LabeledPoint(lp.label, Vectors.dense(lp.features.toArray.map { x => x / 2 }))
    }
    val selector = new ChiSqSelector(2)             // create the chi-square selector (keep top 2 features)
    val transformer = selector.fit(discretizedData) // train the selection model
    val filteredData = discretizedData.map { lp =>  // keep only the selected features
      LabeledPoint(lp.label, transformer.transform(lp.features))
    }
    filteredData.foreach(println)
  }
}

The sample data file fs.txt is in LibSVM format (a label followed by index:value pairs, e.g. 0 1:2 2:1 3:0 4:0); the original listing was garbled during extraction and is not fully reproduced here. The program prints each label together with its two selected (discretized) features, producing lines such as (0.0,[1.0,0.5]) and (1.0,[2.0,1.0]).
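To make the selection criterion concrete: for each feature, ChiSqSelector computes Pearson's chi-square statistic over the feature-value-by-label contingency table, chi2 = sum over cells of (observed - expected)^2 / expected, where expected = rowSum * colSum / total, and keeps the features with the highest statistics. The following standalone sketch (not from the original article; the object name, table, and values are illustrative) computes that statistic for one categorical feature against a binary label, without Spark:

```scala
object ChiSquareSketch {
  // Pearson's chi-square statistic for a contingency table.
  // Rows index feature values, columns index class labels;
  // counts(i)(j) = number of samples with feature value i and label j.
  def chiSquare(counts: Array[Array[Double]]): Double = {
    val total = counts.map(_.sum).sum
    val rowSums = counts.map(_.sum)
    val colSums = counts.transpose.map(_.sum)
    (for {
      i <- counts.indices
      j <- counts(i).indices
    } yield {
      val expected = rowSums(i) * colSums(j) / total // expected count under independence
      val diff = counts(i)(j) - expected
      diff * diff / expected
    }).sum
  }

  def main(args: Array[String]): Unit = {
    // Feature value 0 vs 1 (rows), label 0 vs 1 (columns).
    val table = Array(Array(20.0, 5.0), Array(10.0, 15.0))
    println(f"chi-square = ${chiSquare(table)}%.4f") // prints chi-square = 8.3333
  }
}
```

A large statistic (here 8.33) means the feature's distribution differs strongly across labels, so the feature is informative and ranks high for selection; a value near zero means the feature is nearly independent of the label.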
Thank you for reading! This concludes this article on how to implement feature selection based on the chi-square test in Spark MLlib. I hope the content above has been helpful and leaves you knowing a little more. If you found the article useful, feel free to share it so more people can see it.