

I was gonna joke that this article could be “headlines about me, from the future” but then I read this comment and now I might not be joking. These molecules yearn to get the fuck away from this place
I used to reliably be able to leave my house at the listed show time, drive about 10 miles to the theater, get a ticket, park and buy popcorn and be in my seat right as the show started.
Even in code it’s only “right” a small percentage of the time, if you count “right” as getting the answer quickly, accurately, without it losing context, and in less time than it would have taken to search yourself. To me, LLMs are just another way of getting to data, and are about as “right” as Google is when it shotguns literally millions of results at you. You (the human) still have to parse through it all and choose to do something with it.
Hey somebody revive that damn canary and get back to work
Is accessibility designed by someone that doesn’t require that accessibility any good?
It can be, if it’s tested with users. There are guidelines/principles (just as there are for designing for sighted users), but what makes a good (robust) experience is subjective and requires testing.
Please keep in mind Jack Dorsey is just some guy who’s had the same shit idea twice.
Tolerance requires a mutual respect for each other’s coexistence. It’s kind of right on the label. You don’t have a meeting of the minds with someone whose premise begins and ends with denying that.
My 9th gen intel is still not the bottleneck of my 120hz 4K/AI rig, not by a longshot.
I agree; I hope this means that SteamOS will be coming to the main Legion Go.
Yeah I got mine refurbished also, so someone else took the first hit on driving it off the lot (and waiting for it to be built). I guess they didn’t use it to its full extent though. That didn’t make it “cheap” though.
It’s sort of a niche within a niche and I appreciate your sharing some knowledge with me, thanks!
Hmm maybe in the new year I’ll try and update my process. I’m in the middle of a project though so it’s more about reliability than optimization. Thanks for the info though.
I usually run batches of 16 at 512x768 at most; doing more than that causes bottlenecks, but I feel like I could do that on the 3070 Ti too. I’ll look into those other tools when I’m home, thanks for the resources. (HF diffusers? I’m still using A1111; rough sketch of what that might look like below.)
(ETA: I have written a bunch of unreleased plugins to make A1111 work better for me, like VSCode-style editing for special prompt symbols such as ( and [, and a bunch of other optimizations. I haven’t released them because they’re not “perfect” yet and I have other projects to work on, but there are reasons I haven’t left A1111.)
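For anyone curious, a rough, untested sketch of what that same kind of batch might look like in HF diffusers. The model ID, prompt, and step count here are placeholders I’m assuming, not my actual settings:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed generic SD 1.5 checkpoint (swap in whatever model you actually use)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

images = pipe(
    "a lighthouse on a cliff at sunset",  # placeholder prompt
    height=768,
    width=512,
    num_images_per_prompt=16,   # the batch of 16; lower this if VRAM runs out
    num_inference_steps=25,
).images

for i, img in enumerate(images):
    img.save(f"out_{i:02d}.png")
```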
I just run SD 1.5 models; my process involves a lot of upscaling since things come out around 512 base size. I don’t really fuck with SDXL because generating at 1024 halves and halves again (i.e., roughly quarters) the number of images I can generate in any pass, and I have a lot of 1.5-based LoRA models. I do really like SDXL’s general capabilities, but I really rarely dip into that world (I feel like I locked in my process like 1.5 years ago and it works for me, don’t know what you kids are doing with your fancy pony diffusions 😃)
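(Quick back-of-envelope for the “halves and halves again” bit, just an illustration, not a benchmark: image area scales with the square of the edge, so 1024 is four times the pixels of 512, and the per-pass batch roughly quarters at the same VRAM budget.)

```python
# Rough illustration: per-image cost scales roughly with pixel count,
# so going 512 -> 1024 means about 4x the pixels per image.
base = 512 * 512      # 262,144 pixels
xl   = 1024 * 1024    # 1,048,576 pixels
print(xl / base)      # 4.0 -> "halves and halves again"
```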
Oh I meant for image generation on a 4080, with LLM work I have the 64gb of the Mac available.
It fails whenever it exceeds the VRAM capacity; I’ve not been able to get it to spill over to system memory.
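(If the spillover attempt was with diffusers rather than A1111, a hedged sketch of one thing that can help: CPU offload parks idle submodules in system RAM and trades speed for headroom. It needs the accelerate package, and I haven’t verified it against this exact setup.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed generic SD 1.5 checkpoint; don't call .to("cuda") here,
# accelerate moves each submodule to the GPU only while it's needed.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()         # offloads UNet/VAE/text encoder when idle
# pipe.enable_sequential_cpu_offload()  # even lower VRAM, much slower

image = pipe("a foggy harbor at dawn", height=768, width=512).images[0]
image.save("offload_test.png")
```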
Oh I didn’t mean “should cost $4000”, just “would cost $4000”. I wish the VRAM on video cards was modular; there’s so much e-waste generated by these bottlenecks.
Apple price gouges for memory, yes, but a theoretical 64GB 4090 would have cost as much in this market as the whole computer did. If you’re using it to its full capabilities, then I think it’s one of the best values on the market. I just run the 20B models because they meet my needs (and in Open WebUI I can combine a couple at that size), since I use the Mac for personal use too.
I’ll look into the AMD Strix though.
I know it’s a downvote earner on Lemmy, but my 64GB M1 Max with its unified memory runs these large-scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size: you can think of the VRAM like an aperture, and less VRAM closes off how much you can do in a single pass.
Something to talk about when you’re making the 900th unnecessary video about the Switch 2