ChatGPT and its integration with the Bing search engine: here's what changes

The big innovation is the use of current sources: the artificial intelligence decides what information is most relevant to answer the user, then uses search tools to find it.

The artificial intelligence now being integrated into Microsoft's Bing search engine goes beyond what we experienced with ChatGPT. It is a different experience, more accurate and with more current results. You can see this in our test of the new Bing, for now available only to selected users, ahead of the official launch expected in March.
If you ask ChatGPT for recent news or today's weather forecast, in fact, you only get excuses, or even outright inventions. As is well known, ChatGPT relies on training data that stops in 2021, with which it has been trained to answer our questions. Bing's artificial intelligence does use the same technology as ChatGPT (by OpenAI), but in a version improved through the collaboration between Microsoft and OpenAI. So much so that some (as reported in the New York Times) suspect it is GPT-4 technology, as opposed to ChatGPT's GPT-3.5.
Certainly, the big news is the use of current sources. The artificial intelligence decides how to gather the most relevant information to answer the user and then uses search tools to find it. The system then sends these search results to the base model, which uses them to compose the answer in natural language.
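The two-step flow described above can be sketched in a few lines of Python. This is only an illustrative mock-up: the function names (`search`, `compose_answer`) and the tiny in-memory "index" are hypothetical stand-ins, not real Bing or OpenAI APIs.

```python
def search(query):
    # Stand-in for a web search tool: returns snippets for a query.
    fake_index = {
        "weather milan": ["Milan forecast: sunny, 12 C."],
        "ukraine war": ["Latest update on the war in Ukraine."],
    }
    return fake_index.get(query.lower(), [])

def compose_answer(question, snippets):
    # Stand-in for the base language model: in the real system the
    # search results are injected into the model's prompt and it writes
    # a natural-language answer grounded in them.
    if not snippets:
        return "No current sources found."
    return f"Based on {len(snippets)} source(s): " + " ".join(snippets)

def answer(question, query):
    # Step 1: gather current information with a search tool.
    snippets = search(query)
    # Step 2: hand the search results to the base model to compose the reply.
    return compose_answer(question, snippets)
```

The point of the design is the separation: the "deep" model never needs retraining to know today's news, because fresh facts arrive as search results at query time.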
It is as if there were two levels of knowledge in Bing, as Antonio Cisternino, a computer science expert at the CNR in Pisa, explains.
‘On the one hand there is the deep knowledge of the AI: the model built with basic training on millions of web pages and books, with which OpenAI has learnt to understand natural language and can respond by putting together one word after another. On the other hand, there is a more superficial and temporary knowledge, which the AI draws on every time it is queried by the user, using search results,’ says Cisternino.
Bing thus summarises the search results and gives a direct answer to the question, a bit like ChatGPT would do if we pasted in a link or a document and asked it to summarise it.
Note that other companies, including Neeva, also use this method, although it seems to work better in the new Bing.
This is how Microsoft managed to solve several of ChatGPT's limitations. Not only the fact of having old data, but also another problem: the tendency of large language models to invent facts and data. Chatbots have no perception of what is true or false; they mirror what is on the Internet, flaws and all. These ‘hallucinations’ – as experts call them – uttered with great confidence by the bot can spread disinformation and falsehoods. Last year, Meta had to withdraw its chatbot aimed at the scientific community, Galactica, after it was found to be spouting scientific nonsense.

Well, giving the model access to up-to-date data reduced, though did not eliminate, the hallucination rate of Bing's chat function. “Many of the hallucinations were due to the model trying to fill in the blanks of things that happened after the end of the training data,” explains Kevin Scott, Microsoft's Chief Technology Officer. The company is using other techniques to further reduce the error rate. These include giving users the ability to vote on which answers are best and which information is reliable, and adding memory to the systems so that the algorithms learn from conversations. ChatGPT currently has a memory of only about 4,000 words.
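A memory of "only 4,000 words" means the transcript kept in the model's prompt has a fixed budget, so older turns must be dropped as the conversation grows. A minimal sketch of that trimming, under the simplifying assumption that the budget is counted in whitespace-separated words (real systems count tokens):

```python
def trim_memory(turns, max_words=4000):
    """Keep the most recent conversation turns whose total word count
    fits the budget; everything older is forgotten."""
    kept, total = [], 0
    for turn in reversed(turns):        # walk from newest to oldest
        words = len(turn.split())
        if total + words > max_words:   # next turn would blow the budget
            break
        kept.append(turn)
        total += words
    return list(reversed(kept))         # restore chronological order
```

This is why a long chat with such a bot eventually "forgets" its opening exchanges: they simply no longer fit in the window.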
Bing also tackles the problem of unreliability by offloading it somewhat onto users: unlike ChatGPT, it adds to its answers a clickable list of the sources it used.
At the moment it is an imperfect job, as Microsoft itself acknowledges. In our test it confused several editions of the Sanremo festival. Its ‘summary of the Ukraine war to date’ stops, oddly enough, at March 2022. Sometimes, after writing for a bit, it gets stuck in the middle of a sentence (like ChatGPT), and if we tell it to continue it says it cannot write long articles ‘so as not to violate copyright’. Perhaps a sign of caution towards the publishers whose sources it uses, within the framework of American fair-use principles. Microsoft itself has declared (as has Google) that it will always want to protect the sustainability of digital publishers and users' ability to click through to their sites (rather than stopping at chatbot responses).

We asked for recommendations for a vegetarian dinner in Milan and it gave a list of restaurants, omitting the highest-quality one (a Michelin star), even though that restaurant is listed first in all of its sources. We pointed this out and it justified itself by saying it had not realised we wanted fine restaurants included.
It was very helpful, however, in recommending a mortgage: its summary of the web results immediately highlighted alternative opportunities.
The general feeling is that we are looking at a new fruit of human technology, as immature as it is astounding, that will change the way all of us experience the internet.