2024-02-24 08:41:00

Cyph3rp9nk on Nostr:

After more than a year of experimenting with the different LLM engines, I have come to the conclusion that they can be dangerous, and not because of the productivity they can give you, but because in many cases they break the critical thinking process.

A practical example:

Until now, if you wanted to research a topic, you probably had to browse Google or your favorite search engine and read a few pages until you reached a conclusion.

With an LLM, that process collapses into a single answer. In technical matters this is already dangerous in itself, because by accepting that one answer you ignore other points of view and skip weighing the pros and cons.

But this becomes even more dangerous in political, moral, and behavioral matters. The LLM will dictate the answer; it will dictate what is right and what is wrong. And little by little, as its use grows, the dominant LLM in the market will dictate what society should think, how it should behave, and what decisions it should make.

And don't be fooled: an LLM is not artificial intelligence, it is not a self-aware intelligence. Don't fall for the scam. An LLM is just massive data processing and indexing, and the rules are dictated by humans.

In conclusion, these new tools wrongly called "artificial intelligence" will be used by governments to shape public opinion and the behavior of the unwary. They are the new television.

Author Public Key
npub1lnms53w04qt742qnhxag5d6awy7nz6055flnmjkr6jg39hm86dlq7arrnt