2025-04-06 Update | From: SLTechnology News & Howtos (Shulou)
Shulou (Shulou.com) 06/01 Report --
This article discusses some open questions about PointNet++ in detail. I hope that after reading it you will have a better understanding of the relevant issues.
One

The first question is the accuracy of the classification task.

The SSG version in the paper reaches 90.7%, but I, and other students who have asked about it, can only reach about 90.2%. After thinking carefully about the cause, and referring back to what the paper tells us, the problem may lie in the following points:
1. Dataset choice. The official dataset (2048 points) corresponds to 90.7%; the authors' own 10,000-point data gives 91.9%. I tested with the former, so the problem is not here.
2. Neighborhood selection. The choice between kNN and ball query also affects accuracy. The code defaults to the latter (ball query), so this is not the problem either.
3. When running evaluate.py, make sure votes = 12 is set; this also affects the result.
Other hyperparameters, such as the learning rate and batch_size, were left at their defaults, which match the values given in the paper, so they should not be the problem.
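As a side note on point 2, the two grouping strategies can be sketched in plain NumPy. This is a simplified, hypothetical re-implementation for illustration only, not the actual `knn_point` / `query_ball_point` CUDA ops from the repo:

```python
import numpy as np

def knn_group(nsample, xyz, new_xyz):
    """kNN grouping: always take exactly the nsample nearest points.
    xyz: (n, 3) all points; new_xyz: (npoint, 3) query centroids."""
    d = np.linalg.norm(new_xyz[:, None, :] - xyz[None, :, :], axis=-1)  # (npoint, n)
    return np.argsort(d, axis=1)[:, :nsample]

def ball_group(radius, nsample, xyz, new_xyz):
    """Ball query: take points within `radius`, cycling the found indices
    to pad each group to a fixed length of nsample."""
    d = np.linalg.norm(new_xyz[:, None, :] - xyz[None, :, :], axis=-1)
    idx = []
    for row in d:
        hits = np.where(row < radius)[0]
        if hits.size == 0:
            hits = np.array([np.argmin(row)])  # fallback: nearest point
        idx.append(np.resize(hits, nsample))   # repeat indices to fixed length
    return np.stack(idx)
```

Ball query bounds the metric radius of each local region, while kNN adapts the region size to the local point density; the PointNet++ paper prefers ball query because a fixed region scale makes local features more generalizable.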
But the final result still falls short of the paper's 90.7%.

Contacting the authors did not yield useful feedback either.
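For completeness, the voting scheme from point 3 can be sketched as follows: class scores are averaged over several rotated copies of the input cloud before taking the argmax. This is only a simplified stand-in for evaluate.py; `model` is a hypothetical callable returning per-class scores, and the rotation axis used here is an assumption:

```python
import numpy as np

def rotate_y(points, angle):
    """Rotate an (n, 3) point cloud around the vertical axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return points @ rot.T

def vote_predict(model, points, num_votes=12):
    """Average class scores over num_votes rotated copies, then argmax."""
    scores = [model(rotate_y(points, 2.0 * np.pi * v / num_votes))
              for v in range(num_votes)]
    return int(np.argmax(np.mean(scores, axis=0)))
```

With votes = 1 only a single orientation is scored, which is why the voting setting changes the reported accuracy.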
Two

Another troubling question is how to calculate the number of parameters. I still cannot reconcile the parameter counts going from PointNet to PointNet++, and I hope some readers will double-check the calculation here.
1. Parameter count for the SSG classification network in PointNet++.

I calculate the feature-extraction part and the classification part separately. The feature-extraction part consists mainly of 1×1 convolutions, where we must count both weights and biases: the '+1' terms (shown in green in the original post) in the formula below are the biases.
Feature extraction part:
conv_num = (3+1)*64 + (64+1)*64 + (64+1)*128 + (128+3+1)*128 + (128+1)*128 + (128+1)*256 + (256+3+1)*256 + (256+1)*512 + (512+1)*1024 = 802624
Classification section:
fc_num = (1024+1)*512 + (512+1)*256 + (256+1)*40 = 666408
Total parameter memory:
bytes_num = (conv_num + fc_num) * 4 = 5876128 bytes (about 5.9 MB at 4 bytes per float32 parameter, less than the 8.7 MB mentioned in the paper)
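The arithmetic above can be checked with a few lines of Python. Layer widths are taken from the SSG config; each 1×1 conv or FC layer with c_in inputs and c_out outputs contributes (c_in + 1) * c_out parameters:

```python
def layer_params(c_in, c_out):
    # weights (c_in * c_out) plus one bias per output channel
    return (c_in + 1) * c_out

# SA-layer MLPs; the "+3" on the first layer of SA2/SA3 is the concatenated XYZ.
conv_num = (
    layer_params(3, 64) + layer_params(64, 64) + layer_params(64, 128)               # SA1
    + layer_params(128 + 3, 128) + layer_params(128, 128) + layer_params(128, 256)   # SA2
    + layer_params(256 + 3, 256) + layer_params(256, 512) + layer_params(512, 1024)  # SA3
)

# classification head: 1024 -> 512 -> 256 -> 40 classes
fc_num = layer_params(1024, 512) + layer_params(512, 256) + layer_params(256, 40)

bytes_num = (conv_num + fc_num) * 4  # float32 = 4 bytes per parameter
print(conv_num, fc_num, bytes_num)   # 802624 666408 5876128
```

Note that dropout and batch-norm parameters are ignored here, as in the formulas above; batch norm would add a small number of extra parameters per layer.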
Note that the '+3' terms in the conv_num formula (shown in red in the original post) come from the code: pointnet_sa_module first performs the sample_and_group operation. Let's see what that code does:
def sample_and_group(npoint, radius, nsample, xyz, points, knn=False, use_xyz=True):
    '''
    Input:
        npoint: int32
        radius: float32
        nsample: int32
        xyz: (batch_size, ndataset, 3) TF tensor
        points: (batch_size, ndataset, channel) TF tensor, if None will just use xyz as points
        knn: bool, if True use kNN instead of radius search
        use_xyz: bool, if True concat XYZ with local point features, otherwise just use point features
    Output:
        new_xyz: (batch_size, npoint, 3) TF tensor
        new_points: (batch_size, npoint, nsample, 3+channel) TF tensor
        idx: (batch_size, npoint, nsample) TF tensor, indices of local points as in ndataset points
        grouped_xyz: (batch_size, npoint, nsample, 3) TF tensor, normalized point XYZs
            (subtracted by seed point XYZ) in local regions
    '''
    new_xyz = gather_point(xyz, farthest_point_sample(npoint, xyz))  # (batch_size, npoint, 3)
    if knn:
        _, idx = knn_point(nsample, xyz, new_xyz)
    else:
        idx, pts_cnt = query_ball_point(radius, nsample, xyz, new_xyz)
    grouped_xyz = group_point(xyz, idx)  # (batch_size, npoint, nsample, 3)
    grouped_xyz -= tf.tile(tf.expand_dims(new_xyz, 2), [1, 1, nsample, 1])  # translation normalization
    if points is not None:
        grouped_points = group_point(points, idx)  # (batch_size, npoint, nsample, channel)
        if use_xyz:
            new_points = tf.concat([grouped_xyz, grouped_points], axis=-1)  # (batch_size, npoint, nsample, 3+channel)
        else:
            new_points = grouped_points
    else:
        new_points = grouped_xyz
    return new_xyz, new_points, idx, grouped_xyz
There is a concat operation that splices the features and the coordinates together, so the channel dimension of the output is channel + 3, as the docstring also notes. That is where the '+3' terms in the conv_num formula come from, so this part needs attention.
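The effect on the channel dimension is easy to reproduce with a NumPy stand-in for the tf.concat call (toy shapes, chosen arbitrarily):

```python
import numpy as np

# Toy shapes: batch_size=2, npoint=4, nsample=8, channel=16
grouped_xyz = np.zeros((2, 4, 8, 3), dtype=np.float32)      # local XYZ offsets
grouped_points = np.zeros((2, 4, 8, 16), dtype=np.float32)  # point features

# use_xyz=True path: concatenate along the last (channel) axis
new_points = np.concatenate([grouped_xyz, grouped_points], axis=-1)
print(new_points.shape)  # (2, 4, 8, 19) -> channel + 3
```

So the first conv layer of each subsequent SA module sees channel + 3 input channels, which is exactly the '+3' in the parameter formula.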
But even after accounting for this point, the final result is still not consistent with the 8.7 MB mentioned in the paper.
2. One more point: in the earlier figure, the PointNet++ authors list PointNet's model size as 40 MB, yet the PointNet paper itself reports 3.5 MB. I don't know why the two numbers disagree. I have not computed this one myself, because the T-Net makes the calculation really tedious, so I will leave it as an open question for now.
Those are the questions about PointNet++ I wanted to share. I hope the content above is of some help to you; if you found the article useful, please share it so more people can see it.