
When classification and regression trees cannot reflect the real trend of the data

2025-01-30 Update



In essence, both LightGBM and XGBoost are ensemble learning algorithms built on classification and regression trees, so they share some inherent defects:

When the features of the training data are concentrated in an interval and the test data falls outside that interval, the model cannot fit it. The fundamental reason is that a classification and regression tree predicts the target for a sample falling on a leaf node using the average target value of the training samples on that leaf (each tree's leaf average serves as the gradient-boosted increment to the final prediction), so the prediction can never exceed the range of target values seen during training.
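To make the leaf-average mechanism concrete, here is a minimal sketch using a single regression tree (this uses scikit-learn's DecisionTreeRegressor purely for illustration; the article itself uses LightGBM below):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Train a single regression tree on y = x for x = 1..28.
X = np.arange(1, 29).reshape(-1, 1)
y = np.arange(1, 29, dtype=float)

tree = DecisionTreeRegressor(max_depth=10).fit(X, y)

# The prediction is the mean target of the leaf that x = 200 falls into,
# so it can be at most 28 -- the tree cannot extrapolate past the training range.
print(tree.predict([[200]]))

The same capping behavior carries over to tree ensembles, since every tree in the ensemble predicts a leaf average.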

For example, I have the following data:

x,y
1,1
2,2
3,3
4,4
5,5
6,6
7,7
8,8
9,9
10,10
11,11
12,12
13,13
14,14
15,15
16,16
17,17
18,18
19,19
20,20
21,21
22,22
23,23
24,24
25,25
26,26
27,27
28,28

This is obviously y = x.

If you feed in test data of x = 200, y should be 200.

But when you test with the following program, you will find that no matter how you tune the parameters, you cannot get 200.

This is because the classification and regression trees divide these data among several leaf nodes, and the maximum target value on any leaf is only 28; the model does not perform a linear fit on the features. The program is as follows:

import pandas as pd
import lightgbm as lgb

path_train = "data.csv"
train1 = pd.read_csv(path_train)
testlist = [[200]]

# Use an lgb regression model; the specific parameters are set as follows:
model_lgb = lgb.LGBMRegressor(objective='regression', num_leaves=28,
                              learning_rate=0.1, n_estimators=2000,
                              max_bin=28, bagging_fraction=0.8,
                              bagging_freq=5, feature_fraction=0.2319,
                              feature_fraction_seed=9, bagging_seed=9,
                              min_data_in_leaf=10, min_sum_hessian_in_leaf=100,
                              max_depth=10)

# Train and predict
model_lgb.fit(train1[['x']].fillna(-1), train1['y'])
test1 = pd.DataFrame(testlist)
test1.columns = ['x']
y_pred = model_lgb.predict(test1[['x']].fillna(-1))
print(y_pred)
print("lgb success")
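The program above reads a data.csv containing the table from earlier. A minimal helper to create that file (the file name simply matches path_train in the program; nothing else here comes from the original post):

import pandas as pd

# Recreate the x,y table above so that path_train = "data.csv" resolves.
df = pd.DataFrame({'x': range(1, 29), 'y': range(1, 29)})
df.to_csv("data.csv", index=False)

With that file in place, the program should print a value capped near 28 rather than 200, consistent with the leaf-average argument above.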

To paraphrase a saying: "How can a nation that has never looked up at the starry sky dream of traveling the universe?"

Therefore, not all data can be poured directly into LightGBM or XGBoost. We should check whether the features of the new data to be predicted fall within the feature space covered by the training set.

Otherwise, other methods should be used: for example, plain linear regression, or stacking a linear regression beneath the classification and regression trees, as in the blog post mentioned above; a minimal sketch of that idea follows.
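Here is one reasonable reading of "adding a linear regression beneath the trees": fit a linear base model first, then let LightGBM model only the residuals. This split into base and residual models is my illustration, not code from the original post:

import numpy as np
import lightgbm as lgb
from sklearn.linear_model import LinearRegression

X = np.arange(1, 29).reshape(-1, 1)
y = np.arange(1, 29, dtype=float)

# 1. A linear base model captures the global trend and can extrapolate.
base = LinearRegression().fit(X, y)

# 2. The tree ensemble only has to model the residuals (here: all zeros).
residuals = y - base.predict(X)
booster = lgb.LGBMRegressor(min_child_samples=1).fit(X, residuals)

# Combined prediction at x = 200: the linear part supplies the trend.
X_test = np.array([[200]])
print(base.predict(X_test) + booster.predict(X_test))  # close to 200

Because the linear part carries the trend, the combined model is no longer capped at the largest training target, while the trees remain free to capture any nonlinear structure in the residuals.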
