2025-01-15 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 11/24 Report --
"With search engines, we still have to spend a lot of time browsing the web to find answers. Wouldn't it be better if AI could deliver the answers directly to you and guarantee they are correct? But the question is whether it can."
Author: Waleed Rikab, PhD
Translation: Tang Shi
ChatGPT and other chatbots may soon replace today's most prominent search engines as our gateway to the web. Microsoft and OpenAI recently announced that they are expanding their partnership, which may include integrating OpenAI's models into Microsoft Teams, Microsoft's Azure cloud service, the Office suite, and its search engine.
Meanwhile, Google is not to be outdone and may begin integrating products based on its powerful LaMDA language model into its own services. After all, Google runs the most popular search engine in the world.
In fact, according to media reports, Google is racing to produce its own ChatGPT-style chatbot, called Apprentice Bard, which, unlike ChatGPT, can draw on real-time information when generating responses to users' queries.
What does this mean for how we access web content? How will these language models determine what information we get to see? And finally, how will AI-powered search engines change the very definition of knowledge?
01. Language models as the new search engines
To be sure, today's search engines are driven by algorithms that determine which results we see first and which sources we rely on to form our understanding of the world.
While they may exclude certain results and filter out graphic or illegal content, current search engines largely allow us to compare different sources and opinions, leaving it to us to decide which results are reliable, especially if we want to dig deeper into the search results.
On the other hand, search engines are notoriously poor at identifying the context of a search request, and because they rank sites in a strict hierarchy (based on considerations such as popularity or authority), users may find it difficult to get the specific information they need. Over time, however, search techniques have been developed to obtain more accurate results, such as placing search terms in quotation marks, using Boolean operators, or limiting a search to a desired file type or website.
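For example, the techniques mentioned above can be combined directly in a query. The samples below use Google-style operators (exact syntax varies by search engine, and example.com is a placeholder domain):

```
"language model"                  exact-phrase match via quotation marks
chatbot OR assistant -advert      Boolean OR; the minus sign excludes a term
search ranking filetype:pdf       restrict results to a file type
chatbots site:example.com         restrict results to a single website
```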
Language models work according to fundamentally different principles and may require new training to search productively. A language model is trained on vast amounts of text to produce statistically likely strings of language, which it presents as knowledge about a topic. This means that the more a topic is discussed in a particular way, the more prominent that framing becomes in the model's output.
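A toy sketch of this principle, assuming nothing about how production models are actually built: the bigram counter below captures the same statistical idea, that the phrasing appearing most often in the training text dominates the output.

```python
from collections import Counter, defaultdict

# Toy corpus: one phrasing of a topic appears far more often than another.
corpus = ("the earth is round . " * 9 + "the earth is flat . ").split()

# Count bigram frequencies: for each word, how often each next word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Greedy decoding: return the statistically most frequent continuation."""
    return follows[word].most_common(1)[0][0]

# The model reproduces the majority phrasing, not a balance of views.
print(most_likely_next("is"))  # → round
```

The minority phrasing ("flat") still exists in the counts, but greedy selection never surfaces it, which mirrors how frequency comes to stand in for knowledge.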
While such an architecture may sound innovative and efficient, it virtually guarantees that marginal information will not appear in the output, or at least not in authoritative form. It is also worrying because, under this design, what the language model presents as knowledge becomes synonymous with popularity.
In other words, the design of the language model effectively limits our ability to examine topics from different angles and from multiple sources.
To make matters worse, language models face further challenges that limit their output. Because they are trained on large amounts of data collected from the Internet and social media (such as vast numbers of posts), they learn to replicate all types of human discourse, including racist and inflammatory views. ChatGPT is not the only model to face these challenges: earlier public chatbots also replicated objectionable content, most notably Microsoft's Tay and Meta's Galactica.
As a result, OpenAI established strict filters to limit ChatGPT's output. But in the process, ChatGPT's designers seem to have created a model that avoids any content that might be even slightly controversial, even in response to seemingly harmless prompts, such as asking how to describe US presidents Obama and Trump.
When I recently asked ChatGPT whether Obama and Trump were good presidents, the answer was this:
This answer raises several problems:
The software simply produces an answer without asking any follow-up questions about what counts as a "good" president. That approach may suit a request for a humorous poem, but a human writer would engage with such a question by probing the premises and expectations behind the request for information.
The model avoids any judgment of the two presidents: "In any case, a president (Obama or Trump) is complex and multifaceted, shaped by his actions and by the political, social, and economic context of his administration." Regardless of one's political views, this drive to stay within the bounds of perceived "appropriateness" and "neutrality" seems to produce very bland and uninformative statements.
We don't know where the chatbot's information comes from or whether it is trustworthy, because it cites no sources.
Filtering out unwanted content and issuing generic, canned output when user prompts are deemed inappropriate, sensitive, or in violation of terms of use may delegate too much power to organizations that are primarily concerned with protecting their platforms rather than the public interest. As a result, these organizations may unduly narrow the permitted scope of discourse in order to protect the reputation of their tools or platforms.
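The filtering pattern described above can be sketched as follows. This is a deliberately naive illustration: the blocked-topic list, canned reply, and `moderated_reply` helper are all hypothetical, and real systems use learned classifiers rather than keyword matching.

```python
from typing import Callable

# Hypothetical filter list and canned reply, for illustration only.
BLOCKED_TOPICS = {"violence", "jailbreak"}
CANNED_REPLY = "I'm sorry, I can't help with that request."

def moderated_reply(prompt: str, model: Callable[[str], str]) -> str:
    """Return a canned 'neutral' response when the prompt trips the filter;
    otherwise pass the prompt through to the underlying model."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return CANNED_REPLY      # generic output, no model call at all
    return model(prompt)         # otherwise, generate normally

# str.upper stands in for a real model here.
print(moderated_reply("How does a jailbreak work?", model=str.upper))
# → I'm sorry, I can't help with that request.
```

Note that the filter operates before the model ever sees the prompt, which is exactly why whoever curates the blocked list, not the user, decides where the boundaries of discourse lie.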
As these new AI text generators produce answers to complex questions in seconds, and as users come to prefer that convenience, the temptation to let AI output shape the available body of knowledge will only grow.
02. The possibility of manipulation
No matter how strict the filters built into language models are, creative users always find ways to manipulate them into producing the desired output, giving rise to an emerging field called "prompt engineering".
Armed with technical knowledge of how language models are trained, these advanced users can manipulate chatbots into saying almost anything (a technique called "jailbreaking") or, worse, into executing harmful code. One way to jailbreak an AI chatbot and bypass its filters is to trick it into "thinking" it is playing a game or helping to write a novel, as follows:
Another way is to convince the chatbot that it is in training mode:
This is not a problem limited to ChatGPT. Claude, a newer model trained under different moderation principles, also appears vulnerable to prompt engineering and jailbreaking:
Although defenses against the various kinds of jailbreaking and prompt-engineering attempts continue to evolve, users have recently succeeded in manipulating GPT-based models into producing malicious code, suggesting that this is a persistent weakness of chatbots:
03. What does all this mean?
As everyone rushes to capitalize on ChatGPT's success and introduce more and more AI chatbots, the inherent vulnerabilities of language models may become more apparent and affect much of the public, especially if these chatbots are integrated into today's leading search engines or become the primary way the public seeks information on the web.
That impact will include tightly restricted data and generic representations of topics designed to avoid any controversy. These new AI search engines will also demand new kinds of skills from users who want to extract the information they need, and they will spawn new expertise aimed at manipulating the models to promote illegal activities.
Backed by the support and resources of large technology companies, AI search engines from Google and Microsoft may prove more accurate and capable than ChatGPT. But if such AI-driven search engines do become the main portals for accessing web content, they will hand big tech companies unprecedented power through technologies that have not been properly tested and whose impact and effectiveness remain unclear.
The promise of plausible-sounding, seemingly well-written answers to any search query means that, this time, Internet users themselves may become willing participants in narrowing the range of available knowledge.
This article comes from the WeChat official account New Research (ID: chuxinyanjiu); author: Tang Shi.