True, but trust is hard to establish on decentralized platforms like the fediverse. As far as I'm aware, the only decentralized banking is unfortunately cryptocurrency.
Isn’t that what NFTs do?
Midjourney and the like have already been caught reproducing Shutterstock watermarks in generated images. Future models might be able to fake specific watermarks convincingly.
I slightly hate myself for suggesting it, but are you essentially describing NFTs?
Thanks for trying to explain it. I was hung up on thinking all UUIDs looked like UUID v4. I read up a little on UUID v7 and it’s making sense. Probably should’ve done that sooner.
I have more of a math background than CS, so "monotonic" is a word I know well, but it apparently means something slightly different here. Monotonicity isn't mentioned anywhere in that link.
Would they not have monotonic UUIDs after altering the code in the article to use a function or lambda, as they suggested?
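For what it's worth, here's a rough sketch of why UUID v7 sorts by creation time: the top 48 bits are a Unix millisecond timestamp, so values generated in later milliseconds compare greater as strings. This is my own illustrative construction following the RFC 9562 layout, not any particular library's implementation, and it's only monotonic across different millisecond ticks (within the same millisecond the bits are random):

```python
import os
import time

def uuid7_like() -> str:
    """Illustrative UUIDv7-style value: 48-bit Unix ms timestamp,
    then version nibble (7), 12 random bits, variant bits (10),
    and 62 more random bits. Not a production implementation."""
    ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF        # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    h = f"{value:032x}"
    return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"

a = uuid7_like()
time.sleep(0.002)          # force a later millisecond timestamp
b = uuid7_like()
assert a < b               # later creation time sorts after earlier
```

That timestamp prefix is also why v7 keys are friendlier to B-tree indexes than v4: new rows land near the end of the index instead of at random pages.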
I was reading it as an endorsement for autoincrementing int primary keys and a condemnation of uuids in general which is a genuine stance I’ve known people to take. Is that not it?
UUIDs make great primary keys in some applications. If you generated 100 trillion UUID4s, there's about a 1 in a billion chance of finding a duplicate. That's usually good enough for my databases.
The issue here was that they used a single UUID instead of generating a new one for each record.
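For anyone curious where the "1 in a billion" figure comes from, it's the standard birthday-bound approximation; this is just back-of-envelope arithmetic, not a claim about any particular implementation:

```python
# Birthday-bound estimate: P(at least one collision) ~ n^2 / (2 * N)
# A UUID4 has 122 random bits (6 of the 128 bits are fixed version/variant bits).
n = 100_000_000_000_000        # 100 trillion generated UUIDs
N = 2 ** 122                   # number of possible UUID4 values
p = n * n / (2 * N)
print(f"collision odds: about 1 in {1 / p:.2e}")   # on the order of a billion
```

The approximation holds because n is tiny relative to N; for n anywhere near sqrt(N) you'd need the exact birthday formula instead.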
There's a LocalLLaMA subreddit with a lot of good information, and 4chan's /g/ board will usually have a good thread with a ton of helpful links in the first post. I don't think there's anything on Lemmy yet. You can run some good models on a decent home PC, but training and fine-tuning will likely require renting some cloud GPUs.
I’m absolutely biased as a data engineer who loves SQL, but there are some good reasons why SQL has been the de facto standard for interacting with databases since the 80s.
One of its draws is that it's easy to understand. I can show a stakeholder that I'm selecting "sum(sale_amount) from transactions where date=yesterday" and they understand it. Many analysts can even write complicated queries when they know nothing else about programming.
Since it's declarative, you rarely have to think about all the underlying fuckery that lets you query terabytes of data in Redshift in minutes.
Debugging is often pretty nice too. I can take some query that didn’t do what it was supposed to and run it over and over in a console until the output is right.
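To make the earlier example concrete, here's that sum query run against a throwaway in-memory SQLite table (the table name, columns, and values are hypothetical, just matching the query from the comment above):

```python
import sqlite3

# Hypothetical transactions table matching the example query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (sale_amount REAL, date TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [(19.99, "2024-01-01"), (5.00, "2024-01-02"), (12.50, "2024-01-02")],
)

# Declarative: we say *what* we want; the engine decides how to compute it.
(total,) = conn.execute(
    "SELECT SUM(sale_amount) FROM transactions WHERE date = ?",
    ("2024-01-02",),
).fetchone()
print(total)  # 17.5
```

The same SELECT would run essentially unchanged on Redshift or Postgres, which is a big part of why the skill transfers so well.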
By this same logic, someone in 2004-2005 could have looked back at the crash in the '80s, seen prices at the same level as that peak, and concluded a crash was imminent. In fact the crash was still a year or two away, and new homes for sale were going to go up another 50% before it hit.
That's not to say that things look good. Some sort of correction is probably coming, but it could be another year or two, or more.