The "data-model" separation of autonomous driving

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

The chill has not yet reached everyone, but it has reached every industry. As the core technology of automotive intelligence, autonomous driving is pulled in two directions: exploration and progress funded by heavy spending on one side, and survival through mass-produced products on the other. Whether it turns left or right, the self-driving industry faces both challenges and opportunities.

The development of autonomous driving is not accidental but a necessary stage of social development. History does not repeat itself, yet its patterns are strikingly similar. From the Dartmouth workshop that proposed the concept of artificial intelligence in 1956 to the beginning of the 21st century, both AI technology and the form of the automobile have changed beyond recognition. In the internet industry and the automobile industry alike, data, algorithms, and computing power have become the new forces driving the industry forward in the intelligent age. Ever-growing data, continuously optimized algorithms, and steadily evolving semiconductor computing power have produced super data centers, algorithmic models, and brute-force computation that seem to open a boundless space in which silicon-based intelligence is predicted to surpass carbon-based intelligence. Unfortunately, that prediction diverges rather than converges.

Solving one problem inevitably gives rise to new ones. Driven by data, computing power, and algorithms, intelligence has delivered real results: convenient and fast food delivery, better active safety in cars, lights-out factories that free up labor, and so on. But every step of progress is paid for, essentially, in the cost of data processing. This raises an interesting question: can data truly represent real things? If not, how can machines perceive the physical world? And if machines cannot understand the human physical world, how can a world of machines be established?

Throughout its development, artificial intelligence emerged from symbolic logic and reasoning, flourished with statistics and machine learning, and has arrived at today's deep learning. Its fundamental research amounts to extracting feature data from the physical world and training models in the virtual world; in other words, not all data can be used, and not all data even exists. Hidden behind data, algorithms, and computing power are the intrinsic shifts in how AI technology develops. As the current wave of artificial intelligence, deep learning and the way of thinking behind it have become essential basic skills and cognitive tools for AI practitioners, project managers, and strategic decision-makers. As the engine of the third AI boom, deep learning sits at the core of both technology development and industrial application, while autonomous driving, especially its perception and recognition layer, is becoming an application platform for deep learning, acting as the waist that connects upper-level applications to the underlying chips.

Research on autonomous driving has followed a path similar to natural language processing: from the early knowledge- and rule-driven approach to today's data-driven one, which is essentially a change in how humans understand the objective world. In a data-driven R&D model, once the methodology is fixed, system performance depends on the amount of usable data; the strengths and weaknesses of the system are strongly correlated with the scale of the data. That scale means not just the data itself but the ability to process it, especially amid geopolitical tensions, divergent laws and regulations, and differing cultural backgrounds; data capability is both the hard capability of data-processing technology and an embodiment of an enterprise's soft power. At present, autonomous-driving algorithms in industry, like the recommendation, search, and speech-recognition algorithms of the internet, all focus on improving data quality and scaling up model parameters. In essence they are still mining the potential of existing technological paths: using large-scale pre-trained models, autonomously generated data, common-sense relations from knowledge graphs, and multi-source data to compensate for deep learning's limitations in generalization, small-data regimes, interpretability, and autonomous learning, and so to keep raising the level and depth of problem solving.
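The data-driven claim above, that with the method fixed, performance tracks the scale of available data, can be illustrated with a toy estimator: as the number of samples grows, the error of a simple sample-mean estimate shrinks roughly as 1/sqrt(n). This is a purely illustrative sketch, not a model of any real perception system.

```python
import random

random.seed(0)

def mean_abs_error(n, trials=500):
    # average |sample mean - true mean| over many trials,
    # drawing n standard-Gaussian samples (true mean is 0)
    total = 0.0
    for _ in range(trials):
        total += abs(sum(random.gauss(0, 1) for _ in range(n)) / n)
    return total / trials

errors = {n: mean_abs_error(n) for n in (10, 100, 1000)}
for n, e in errors.items():
    print(f"n={n:5d}  error≈{e:.3f}")
# more data -> smaller error, roughly following a 1/sqrt(n) curve
```

The same fixed "methodology" (take the mean) gets strictly better only because more data is available, which is the sense in which a data-driven system's quality is tied to data scale.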

Algorithm models are optimized on data, and data shows its value through algorithm models; the two are interrelated yet independent of each other, which gives rise to several problems.

The first is data scale. Size is only a relative concept, and the demand for data does not converge: the development cost of software algorithms is shifted onto the cost of data processing, and as volume grows, the costs of transmitting, storing, and hot-and-cold processing data keep rising. On the surface, semiconductor process technology keeps improving and data-processing capability keeps growing, so software algorithms get cheaper and more efficient; but the cost of the data itself keeps increasing.

The second is data compliance. An internet mogul once said, "Chinese people are more likely to accept that their faces, voices, and shopping choices are recorded and digitized, and are more willing to exchange personal information for convenience." Whatever angle that conclusion came from, it shows that data compliance concerns everyone and that data and products are strongly correlated. A series of challenges in autonomous driving, the attribution of rights and responsibilities, moral and ethical questions, unexplainable algorithms, can all be viewed as data-compliance issues. Data compliance is a balance between laws and regulations on one side and product convenience on the other, and a safeguard of product fairness, so it must be a dynamic process: more data, continuous compliance.

The third is the data whirlpool. Nowadays virtually every enterprise is collecting data in one way or another. Autonomous-driving companies generally take a two-pronged approach, accumulating virtual simulation data alongside real-world physical scene data, and nobody is happier than the cloud-service and semiconductor providers. Although everyone in the industry advocates data interconnection, in practice the silos never touch; after all, no one wants to share their slice of the resource pool with others.

The fourth is the lack of benchmark data. The industry keeps collecting data from the physical world for model training, and the autonomous-driving industry keeps testing and simulating to accumulate data mileage, but the common problem is the absence of benchmark datasets. Claims that a new model improves effectiveness are therefore one-sided, with companies acting as both referee and player, and once models are deployed in products, problems keep surfacing. At a time when advanced autonomous driving is not yet widespread, sporadic accidents are not so much a problem of software algorithms as a problem of training data.

In response, the industry uses remote (over-the-air) upgrades to optimize software algorithms and close the commercial data loop. But is this approach really fair and friendly to consumers? In effect, it leaves them in the uncertain state of an unopened blind box.

Darwin's theory of evolution teaches natural selection and survival of the fittest; the world teaches us to adapt to society, not to transform it. Consider the semiconductor industry, born in the 1940s: at first, chip companies did design, manufacturing, packaging, and testing all themselves. As the industry developed, they gradually differentiated toward specialized, refined depth, forming today's upstream and downstream industrial chain.

At present, the volume of algorithm models is growing exponentially. Take DAMO Academy's M6 model as an example: its parameter count reaches 10 trillion. A single server, say one built on Nvidia V100 cards with 32 GB of memory and 125 TFLOPS of compute per card, can hardly meet the training needs of even a hundred-billion-parameter model; in turn, model growth puts great pressure on data reading and writing, storage, and training. In the autonomous-driving industry, the industrial chain is currently circular, but as product maturity improves it will probably settle into a stable chain; after all, the human brain is better at processing information serially. The combination of mobility and the internet gives the smart car new attributes: it also becomes a hub where data and models are collected and distributed.

Therefore, data and models for advanced autonomous driving will separate, and enterprises will develop into specialized, refined platforms: data-processing companies focusing on data problems (Data as a Service), and model-training companies focusing on models and toolkits (Model as a Service). Once the business reaches a certain scale, scale itself becomes the biggest technical barrier. Some will say that only children make choices; adults want both the data and the models. Either way works in the early stage of industrial development, but once the industry matures, differentiation will become mainstream. Companies that do not adjust their strategy will be caught minding one end while losing the other, forever patching, with product competition out of the question.
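The single-card arithmetic above can be made concrete. A common rule of thumb for mixed-precision Adam training is roughly 16 bytes of memory per parameter (fp16 weights and gradients plus fp32 master weights and optimizer moments); the helper below is a back-of-envelope sketch under that assumption, not a precise profiler, and ignores activations entirely.

```python
GB = 1e9  # decimal gigabytes, good enough for an order-of-magnitude estimate

def training_mem_gb(n_params, bytes_per_param=16):
    """Rough training footprint: ~16 bytes/param covers fp16 weights and
    gradients plus fp32 master weights and Adam moments (activation memory
    and batch size would add more on top)."""
    return n_params * bytes_per_param / GB

V100_MEM_GB = 32  # per-card memory cited in the article

# 1B, 100B (the "hundreds of billions"), and 10T (the M6 figure)
for n in (1e9, 100e9, 10e12):
    need = training_mem_gb(n)
    print(f"{n:.0e} params: ~{need:,.0f} GB, ~{need / V100_MEM_GB:,.1f}x one V100")
```

Even a 100-billion-parameter model needs on the order of 1,600 GB just for weights and optimizer state, about fifty V100s' worth of memory before any activations, which is why model growth pressures storage, I/O, and training infrastructure as the text describes.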

Seen from the internet industry, the trend of separating data and models has already emerged. The industry's sensitivity to personal privacy data is driving evolution at both the algorithm level and the data level: small-data training, federated learning, privacy-preserving computation, and other methods are moving to the foreground. Compared with the internet, which runs on personal data, autonomous-driving data at this stage is mainly B-side, scenes, roads, regions, and so on, so the two differ greatly in regulation and safety. In the future, as intelligence upgrades, improving car intelligence and user experience will certainly rely on personal data, and the fusion of multiple data sources will lay the foundation for the development of an intelligent society.
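Federated learning, one of the privacy-oriented methods mentioned above, can be sketched in miniature: each client trains on its own private data and only model weights travel to the server, which averages them (the FedAvg scheme). The toy one-dimensional least-squares version below is illustrative only; the function names are not any real framework's API.

```python
def local_step(w, data, lr=0.05):
    # one gradient step of least-squares on a client's private (x, y) pairs;
    # the raw data never leaves the client
    g = sum((w * x - y) * x for x, y in data) / len(data)
    return w - lr * g

def fedavg_round(w, clients, lr=0.05):
    # each client updates from the shared weight; the server averages results
    return sum(local_step(w, d, lr) for d in clients) / len(clients)

# three clients, each holding private samples of the same law y = 2x
clients = [[(x / 10, 2 * x / 10) for x in range(i, i + 20)] for i in (1, 5, 9)]

w = 0.0
for _ in range(300):
    w = fedavg_round(w, clients)
print(round(w, 3))  # prints 2.0: the shared model recovers the slope
```

The privacy point is structural: only `w` crosses the network, so each client's scene or personal data stays local, which is what makes the approach attractive for the regulated data the paragraph describes.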

This article is from the WeChat official account: Automobile Observer Alliance (ID: gh_6caf2b9784b6), author: Shifu
