So-called ‘artificial intelligence’ is quite stupid

So-called “artificial intelligence” is a rhetorical excess invented by computer scientists, something like the “affluent society” of economists. Behind this ridiculous phraseology lies the claim that there is a mechanical means of being creative, which is false.

Creation is an exclusively human faculty that consists of elaborating something that did not previously exist. Production is completely different from reproduction. An Internet search engine does not invent anything; it reproduces something that already exists. It provides a list of links and forces the user to take a second action: clicking on one of them to find what he is looking for.

By contrast, “artificial intelligence” is more convenient because it saves this second step and directly displays certain information. The answer immediately follows the question, without any intermediate complications.

The vast majority of users, contrary to what they themselves believe, are not really looking for information but for shortcuts to knowledge, for immediate and easy-to-obtain results. They are guided by the law of least effort and, in time, will prefer to resort to “artificial intelligence”. The result is that search engines, which are nothing more than a business, will soon be based on it.

That is why all the technological monopolies (Microsoft, Google, Facebook, Amazon) are chasing artificial intelligence. The business figures are not small: Google’s search engine generates $150 billion, more than half of the multinational’s turnover. Google’s “AI Overviews” feature has been available to U.S. users since May 14.

But knowledge knows no shortcuts. Like the other chatbots, “AI Overviews” returns simple answers to simple questions. In this way, the algorithms will produce a simple, that is, stunted, intellectual universe. It will be a summary of the “dominant ideology”. In a racist society, such as the present one, “artificial intelligence” manufactures racist answers and provides racist images.

Chatbots are based on large language models (LLMs), which do not produce text whose content is certain, only statistically probable. This is the opposite of scientific logic: a statement counts as true or false not because it corresponds to the facts, but because the majority believes it to be so and says so on their social media accounts. In this sense, “artificial intelligence” is nothing more than a collection of widespread prejudices.
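The point about statistical probability can be sketched with a toy bigram model, my own illustration and not how any production chatbot is actually built: the model picks each next word by how often it appeared in its training text, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy training text: "blue" follows "is" twice, "green" once.
corpus = "the sky is blue . the sky is green . the sky is blue .".split()

# Count how often each word follows another.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it was seen."""
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# After "is", the model emits "blue" about two thirds of the time and
# "green" about one third -- frequency in the data, not correspondence
# with the facts, decides what comes out.
print([next_word("is") for _ in range(6)])
```

A real LLM replaces word counts with a neural network over billions of documents, but the principle the article describes is the same: output is ranked by probability, not by truth.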

However, for the system to respond, the content must be on the Internet. Otherwise, the user will get no response, or the chatbot will make one up. The same happens if the language model has not been updated: the system does not respond or displays outdated explanations. It is more like a newspaper archive than this morning’s newspaper.

Any internet user finds plenty of false information on the net, and the chatbots repeat and reproduce those exact same falsehoods. In computer jargon these are called “hallucinations”: texts that the system churns out, written coherently but with incorrect, biased or outright wrong data.

The coherence of the text gives it an appearance of veracity but, once again, coherence has little to do with scientific logic. The more coherent the fabricated text, the more easily the deception slips through.

Artificial intelligence hallucinations occur not only with what are usually considered subjective opinions, such as the existence of miracles or electronic voice phenomena, but also with scientific papers. On November 15, 2022, Meta presented “Galactica”, an LLM designed to combine and reason about scientific knowledge.

“Galactica” was trained on 48 million scientific articles, websites, textbooks, lecture notes and encyclopedias. From the beginning it carried a warning that its results could be unreliable. Three days later, Meta had to withdraw it because of hallucinations.

The same thing happened to Google with its “Gemini” chatbot, which produced real aberrations. With the first ChatGPT model, tests showed that 20 percent of the answers were incorrect. Last year, during a lawsuit in the United States, a lawyer presented the judge with a brief containing case law generated by ChatGPT.

Among the judgments cited, six were false. In response, the judge prohibited the filing of AI-generated court documents that had not been reviewed by a human being, noting that “in their current state generative artificial intelligence platforms are prone to hallucinations and biases.”

Anyway, if you thought you had enough with media intoxication, hoaxes and fake news on the Internet, “artificial intelligence” has arrived to deliver more of the same. The difference is speed: the chatbots hallucinate much faster than the Efe agency, La Sexta or the New York Times.
