iNaturalist is a website that crowdsources pictures of plants and animals to help identify species. Its tagline is “A Community for Naturalists.” iNaturalist is administered by its own small charit…
I’m sorry, but didn’t Google pioneer the image recognition models years ago that iNaturalist uses to help users identify plants and animals? Google can suck it, but perhaps AI has proven it has a place in these applications?
Pattern recognition and “generative” AI are two completely different things. One is a helpful tool with limited capabilities; the other is an overhyped Markov chain that uses humongous amounts of energy and steals humans’ creative works.
It is my understanding that the advances in classifier models were and are inextricably linked to generative models. Wasn’t DeepDream a fairly crude inversion of existing classifier models?
You’re totally misunderstanding the context of that statement. The problem of classifying an image as a certain animal is related to the problem of generating a synthetic picture of a certain animal. But classifying an image as a certain animal is totally unrelated to generating a natural-language description of “information about how to distinguish different species”. Moreover, we know empirically that these LLM-generated descriptions are highly unreliable.
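For readers unfamiliar with the “inversion” being discussed: DeepDream-style techniques run gradient ascent on the *input* of a trained classifier to push some class score up, so the input drifts toward whatever the model associates with that class. A minimal toy sketch of that idea, using a stand-in linear “classifier” rather than a real convnet (all names and numbers here are illustrative assumptions):

```python
import numpy as np

# Toy "classifier": a random linear map from 8 input features to 3 class scores.
# This stands in for a trained network purely to illustrate the mechanics.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))

target = 1                          # class whose score we want to maximize
x = rng.normal(size=8) * 0.01       # start from near-zero "noise" input

score_before = W[target] @ x
for _ in range(100):
    # For a linear score W[target] @ x, the gradient w.r.t. x is just W[target];
    # gradient ascent nudges the input toward the class's preferred direction.
    x += 0.1 * W[target]
score_after = W[target] @ x

print(score_before, score_after)    # target-class score grows monotonically
```

A real DeepDream replaces the linear map with a deep network and adds regularizers (jitter, smoothness) so the ascended input looks image-like, but the core loop is the same: optimize the input, not the weights.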
Lying machines have no place in anything involving facts or knowledge. Get the fuck out.
this is the one we banned a few days ago coming in from another server
and even if it isn’t hoo boy that posting record