Haha. Maybe.
I doubt the VCs will provide much follow-up funding if they can’t control the code base, but weirder things have happened.
There are a lot of scams around AI and there’s a lot of very serious science.
While generative AI gets all the attention, there are many other fields of AI that you probably use on a regular basis.
The reason we don’t see the rest of the AI iceberg is that it’s mostly interesting when you have enormous amounts of data to analyze, and that doesn’t apply to regular people. Most of the valuable AIs (as in, they’ve been proven to make or save a bunch of money) do stuff like inventory optimization, protein expression simulation, anomaly detection, or classification.
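For a sense of what that part of the iceberg looks like in practice, here’s a minimal anomaly-detection sketch. It assumes Python with numpy and scikit-learn installed, and the “sensor readings” are synthetic, purely for illustration:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Fake sensor readings: mostly normal, with a few injected anomalies.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=50.0, scale=2.0, size=(1000, 1))
    weird = rng.normal(loc=80.0, scale=5.0, size=(10, 1))
    readings = np.vstack([normal, weird])

    # Fit an unsupervised detector on the raw readings.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(readings)

    # predict() returns 1 for "looks normal" and -1 for "looks anomalous".
    flags = model.predict(readings)
    print(f"flagged {(flags == -1).sum()} of {len(readings)} readings")

Nothing generative about it, but this kind of model quietly runs behind fraud checks and equipment monitoring all over the place.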
It’s otherwise a fairly well-written article, but the title is a bit misleading.
In that context, scare quotes usually mean that generative AI was trained on someone’s work and produced something strikingly similar. That’s not what happened here.
This is just regular copyright violation and unethical behavior. The fact that it was an AI company is mostly unrelated to their breaches. The author covers three major complaints, and only one of them even mentions AI; even then, the complaint isn’t about what the AI did, it’s about what was done with the result. As far as I know, the APL2.0 itself isn’t copyrighted and nobody cares if you copy or alter the license text itself. The problem is that you can’t just remove the APL2.0 from a work it’s attached to.
I keep wondering if information like this will change anyone’s mind about Disney.
It seems like all Iger has to do is throw a little shade at Trump or DeSantis and everyone instantly believes that Disney is some sort of bastion of progressive thought that doesn’t have a vile history of exploitation.
When I was a kid, a lot of US maps were US-centered. They would chop Eurasia down the middle and include some overlap on the edges (so places like India might show up twice).
It would depend on how well we can control it.
Ideally, the material would be completely nonreactive for as long as you’re using it and then instantly degrade into its component elements.
The faster things degrade, the higher the chance that they’ll degrade when you don’t want them to.
A bunch of scientific papers is probably better data than a bunch of Reddit posts, and it’s still not good enough.
Consider the task we’re asking the AI to do. If you want a human to be able to correctly answer questions across a wide array of scientific fields, you can’t just hand them all the science papers and expect them to understand it all. Even if we restrict it to a single narrow field of research, we expect that person to have an insane level of education. We’re talking 12 years of primary education, 4 years as an undergraduate, and 4 more years doing their PhD, and that’s at the low end. During all that time the human is constantly ingesting data through their senses, and they’re getting constant training in the form of feedback.
When it comes to data quality, all the scientific papers in the world don’t even come close to an education like that.
Haha. Not specifically.
It’s more a comment on how hard it is to separate truth from fiction. Adding glue to pizza is obviously dumb to any normal human. Sometimes the obviously dumb answer is actually the correct one though. Semmelweis’s contemporaries lambasted him for his stupid and obviously nonsensical claims about doctors contaminating pregnant women with “cadaveric particles” after performing autopsies.
Those were experts in the field, and even they couldn’t judge whether the claim was correct. Why would we expect normal people or AIs to do better?
There may come a time when we can reasonably have such an expectation. I don’t think it will happen before we can give AIs training that’s as good as, or better than, what we give the most educated humans. Reading all of Reddit doesn’t even come close to that.
That’s my point. Some of them wouldn’t even go to the trouble of making sure that it’s non-toxic glue.
There are humans out there who ate laundry pods because the internet told them to.
This is why actual AI researchers are so concerned about data quality.
Modern AIs need a ton of data and it needs to be good data. That really shouldn’t surprise anyone.
What would your expectations be of a human who had been educated exclusively by the internet?
Maybe.
There have been a number of technologies that provided similar capabilities, at least initially.
When photography, audio recording, and video recording were first invented, people didn’t understand them well. That made it really easy to create believable fakes.
No modern viewer would be fooled by the Cottingley Fairies.
The sound effects in old radio shows and movies wouldn’t fool modern audiences either.
Video effects that stunned audiences at the time just look old-fashioned now.
I expect that, over time, people will learn to recognize the low-effort scams. Eventually we’ll reach an equilibrium where most people won’t fall for them, but there will still be skilled scammers who target gullible people and get away with it.