…quotation marks… That’s enshittification at work: https://en.wikipedia.org/wiki/Enshittification A backwards step from… AltaVista in 1996!!! https://jkorpela.fi/altavista/
“has a model of how words relate to each other, but does not have a model of the objects to which the words refer.
It engages in predictive logic, but cannot perform syllogistic logic - reasoning to a logical conclusion from a set of propositions that are assumed to be true”
Is this true of all current LLMs?
Thank you for replying. This is the level of info I used to love on Reddit and now love on Lemmy.
Thanks for your reply, I appreciate the correction and the info.
The thing that strikes me about LLMs is that they have been created to chat. To converse. They’re partly influenced by the Turing test, where the objective is to convince someone you’re human by keeping up a conversation. They weren’t designed to create meaningful or factual content.
People still seem to want to use ChatGPT to create something, and fix the accuracy as a second step. I say go back to the drawing board and create a tool that analyses statements and tries to build information from trusted linked open data sources.
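To make that concrete, here’s a rough sketch of the kind of tool I mean. It’s Python hitting the public Wikidata SPARQL endpoint; the claim, entity IDs, and query are just placeholders I picked to show the idea of checking a statement against linked open data instead of asking an LLM:

```python
import requests

# Hypothetical example claim to verify: "Paris is the capital of France".
# Wikidata: France is Q142, the "capital" property is P36.
ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?capitalLabel WHERE {
  wd:Q142 wdt:P36 ?capital .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "claim-checker-sketch/0.1"},  # be polite to the endpoint
    timeout=30,
)
resp.raise_for_status()
bindings = resp.json()["results"]["bindings"]

claimed_capital = "Paris"
found = {b["capitalLabel"]["value"] for b in bindings}
print("claim supported" if claimed_capital in found else "claim not supported", found)
```

The hard part, obviously, is going from a free-text statement to the right entities and properties, but the point is that the answer comes from a curated source you can cite, not from next-token prediction.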
Discuss :)
Ah, the corporate enshittification of search.
There are certainly Photoshop jobs of this image, but as far as I can tell, the black-and-white version with the sign saying ‘pon farr night fridays’ is the original. I’d like to know where the photo came from, though.