Not exactly what you’re asking, but it’s also worth checking your local library. Some of them grant their cardholders access to external sources that might overlap with what you’re after.
Every piece of shit they make that I bought ended up broken and in the trash within a couple of weeks.
The autocomplete is fucking fantastic for writing unit tests, especially when there’s a bunch of tedious boilerplate that you frequently need SOME OF. I’m also really impressed by its ability to generate real code from comments or pseudocode.
Generally, though, I find it pretty awful for writing non-test code. It too often hallucinates an amazing API and I kick myself for not knowing it existed. Then I realize it’s because the API doesn’t actually exist, and the dumb fucker is clearly borrowing from a library in a completely different stack.
That advice on the wiki seems to be aimed at users who don’t know anything about Docker and are running with defaults that might not be ideal.
You can run Sonarr just fine in Portainer. It’s just a wrapper around plain old Docker anyway. And if you want to use docker compose, you can still do that in Portainer; I think they call them Stacks there. A minimal one looks like the sketch below.
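To make that concrete, here’s roughly what you’d paste into Portainer’s Stack editor. This is a minimal sketch assuming the linuxserver image and Sonarr’s default port (8989); adjust the host paths for your setup.

```yaml
version: "3"
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000          # uid/gid that should own the files
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /path/to/config:/config
      - /path/to/tv:/tv
      - /path/to/downloads:/downloads
    ports:
      - "8989:8989"        # Sonarr's default web UI port
    restart: unless-stopped
```

Once the Stack is deployed, Sonarr is at http://your-host:8989, exactly as if you’d run docker compose from the CLI.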
Portainer is just a GUI front end for Docker. If you like it, stick with it. I used it until I moved to Unraid and had zero issues.
I don’t pirate software anymore. If I do the math, even a mediocre AAA title gives me more hours of enjoyment than a night out at a fraction of the cost, so the value is there for me. On top of that, the risk of malware (or the effort of mitigating it) isn’t really worth it.
TV and movies? Pirate them. The streaming services are garbage and the content has too much crap in it for me to want to pay a corporation for it. If it became too hard to pirate, I just wouldn’t watch it anymore.
Books kind of fall in the middle. Happy to pay for ebooks if the author makes it practical, but I’m not keen on buying through Amazon.
It’s a little worrisome, actually. Professionally written software still needs a human to verify things are correct, consistent, and safe, but the tasks we used to foist off on more junior developers are increasingly being done by AI.
Part of that is fine - offloading minor documentation updates and “trivial” tasks to AI is easy to do and review while remaining productive. But it deprives the next generation of junior developers of exactly the tasks that would give them the experience to work toward a more senior level.
If companies lean too hard into that, we’re going to have serious problems when this generation of developers starts retiring and the next generation is understaffed, underpopulated, and probably underpaid.
I thought about hypothetically confirming that Usenet indexers have this show right up to the latest episode.
You should think of Overseerr as a single install the same way you think of Plex. For instance, you don’t install Plex Media Server on every device you have, and then copy all your media to each device, right? Same principle applies here.
You want one Overseerr instance to live in one place (why not the machine you run Plex on?), then have everybody connect to THAT machine using their web browser. If you’re all on the same network it’s easy, though you might need to open up some ports on your firewall. If you want it to work over the internet, you’ve got a little more work to do.
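If you go the container route, a single shared instance is one small compose file. This is a minimal sketch - the sctx/overseerr image and default port (5055) are from memory, so double-check the Overseerr docs:

```yaml
version: "3"
services:
  overseerr:
    image: sctx/overseerr:latest
    ports:
      - "5055:5055"              # the port everyone's browser connects to
    volumes:
      - ./overseerr-config:/app/config
    restart: unless-stopped
```

Everyone on your network then just bookmarks http://your-server-ip:5055.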
If you want to automate that a bit, set up https://github.com/meeb/tubesync.
It’ll watch any YouTube playlists you specify (I created one called “Save to Plex”), automatically download new videos, and import them into Plex. Adding videos is as easy as sticking them into your playlist from whatever YouTube client you use.
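If you run it in Docker like everything else, a minimal sketch looks like this. The image name and default port (4848) are from memory, so double-check the project README; the host paths are placeholders.

```yaml
version: "3"
services:
  tubesync:
    image: ghcr.io/meeb/tubesync:latest
    environment:
      - TZ=Etc/UTC
    volumes:
      - ./tubesync-config:/config
      - /path/to/plex/youtube:/downloads   # point this at a folder your Plex library indexes
    ports:
      - "4848:4848"        # tubesync's web UI
    restart: unless-stopped
```

From the web UI you add the “Save to Plex” playlist as a source, and it handles the rest on a schedule.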
Sonarr and Radarr are there to manage your requests. They’ll handle things like downloading something once it’s available (either because it’s a new release or because no torrent/nzb was readily available at the time you added it), upgrading an existing file to a higher quality version when one shows up, and sourcing a new copy if you mark the one they found as bad (e.g. huge, hard-coded Korean subtitles ruining your movie).
If you’re trying to find new stuff based on vague criteria (like “90s action movie”), I don’t think any of the self-hosted apps are a huge help. You’re probably better off sourcing ideas from an external site like IMDb or TheTVDB (maybe even Rotten Tomatoes?). Those sites maintain their own rich indexes of content and tags, whereas the self-hosted stuff is built more around an “I’ll make an API request once I know what you’re looking for” model, which sucks when you don’t really know what you’re looking for.
I think there are even browser extensions for IMDb that will add a button to the IMDb movie page letting you automatically add it to Radarr if you like the look of it.
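Under the hood those extensions are just calling Radarr’s HTTP API. Here’s a rough Python sketch of the same “add this movie” flow (endpoints are from memory, so treat Radarr’s own API docs as authoritative; the URL, API key, profile id, and root folder are placeholders):

```python
import requests

RADARR = "http://localhost:7878"         # Radarr's default port
HEADERS = {"X-Api-Key": "your-api-key"}  # Settings > General in the Radarr UI

# Look the movie up by TMDB id so Radarr fills in the metadata for us.
movie = requests.get(
    f"{RADARR}/api/v3/movie/lookup/tmdb",
    params={"tmdbId": 603},              # 603 = The Matrix, just as an example
    headers=HEADERS,
).json()

# Tell Radarr where to put it and to start searching immediately.
movie.update({
    "qualityProfileId": 1,               # an existing quality profile on your install
    "rootFolderPath": "/movies",         # an existing root folder on your install
    "monitored": True,
    "addOptions": {"searchForMovie": True},
})

resp = requests.post(f"{RADARR}/api/v3/movie", json=movie, headers=HEADERS)
resp.raise_for_status()
```

Sonarr’s API works the same way, so the same trick applies to TV.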
I can’t recommend an all-in-one primer, but if you want to look up guides independently, you’ll probably be most interested in these tools/services:
A Usenet indexer is going to let you download .nzb files, which is analogous to downloading .torrent files from a torrent indexer. The .nzb describes which posts in which newsgroups contain the files for a particular release.
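For a sense of what’s inside, a stripped-down .nzb looks something like this (the structure follows the standard NZB XML format; every name and message id here is made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <!-- one <file> element per file in the release -->
  <file poster="uploader@example.com" date="1700000000"
        subject="Some.Release.Name [01/42] - &quot;archive.part01.rar&quot;">
    <groups>
      <!-- the newsgroup(s) the posts were made to -->
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <!-- each segment is one Usenet article, identified by its message id -->
      <segment bytes="512000" number="1">part1of100.abc123@news.example.com</segment>
      <segment bytes="512000" number="2">part2of100.def456@news.example.com</segment>
    </segments>
  </file>
</nzb>
```

Your download client just fetches every listed article from your Usenet provider and reassembles the files.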
If you’re looking to set up some extra infrastructure for automating a lot of steps, there are also web apps to cover a ton of video use cases, like:
I’d highly recommend setting up Docker and putting all of these apps into separate containers. linuxserver.io publishes easy-to-set-up, easy-to-update Docker images for all of these things, and it’s also a great resource for finding other web apps you didn’t know you needed.
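Their images all follow the same conventions (PUID/PGID/TZ environment variables and a /config volume), so once you’ve set one up you’ve effectively set them all up. A sketch using Radarr as the example - ports and paths are the usual defaults, but check the image’s page on linuxserver.io:

```yaml
version: "3"
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000        # uid/gid that should own the downloaded files
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /path/to/config/radarr:/config
      - /path/to/movies:/movies
      - /path/to/downloads:/downloads
    ports:
      - "7878:7878"      # Radarr's default web UI port
    restart: unless-stopped
```

Swap the image name, port, and media volume and the same file works for the rest of the stack.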
Link for the lazy: https://youtu.be/o4GZUCwVRLs
Definitely worth a watch.
Did you give up any Plex features you miss? I’ve been running a Plex server for years without serious issues, but I’m tired of seeing my CPUs get hammered so badly when it doesn’t seem justified.
I, uhh, keep the original media safe in a storage locker!
The domain is .rs which is Serbian, but I dunno about their actual hosting.
It’s still up today, anyway. 🤞
This post made me nostalgic for the days when uTorrent was the shit. Man, how the mighty have fallen.
My org has issues with e2e tests, but we keep them because a failure usually means that something, somewhere isn’t quite right.
Our CI/CD pipeline is configured to automatically re-run a build if a test in the e2e suite fails. If it fails a second time, then it sends up the usual alerts and a human has to get involved.
In addition to that, we track transient failures on the main branch and have stats on which ones are the noisiest. Someone is always peeling the noisiest one off the stack to address why it’s failing (usually time zones or browser async issues that are easy to fix).
It’s imperfect, and still results in a lot of developers just spam-retrying builds to “get stuff done”, but we’ve decided the signal-to-noise ratio is good enough that we want to keep things the way they are.
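For anyone curious, the auto-re-run part is usually a one-liner in the CI config. Here’s a hypothetical GitLab CI job as an illustration (we may not match your stack, and most CI systems have an equivalent knob):

```yaml
e2e:
  stage: test
  script:
    - npm run test:e2e   # whatever launches your e2e suite
  retry: 1               # re-run the job once before it counts as a real failure
```

The failure tracking on main is the part that takes actual work; the retry is just there to keep transient noise from paging anybody.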
It works for us to some extent, but there are ultimately people who specialize in various areas. Nobody can be a true master of absolutely every area of a large system, with enough knowledge queued up to be an independent power player. And I think that’s apparent to anybody who works in or near the trenches.
It should be expected that any member of a full-stack team can figure out how to do any particular task, but it’s a waste of resources not to have people lean on teammates who have more recent deep experience in a particular area.
This goes for both technical areas as well as domains. If my teammate just spent a couple of weeks adding a feature that makes heavy use of, I dunno, database callbacks in the user authentication area, and I just picked up a task to fix a bug in that same area, I have two choices. I can consult with my teammate and get their take on what I’m doing and have them preemptively point out any areas of risk/hidden complexity. Or, I can just blunder into that area and spend my own couple of weeks getting properly ramped up so I can be confident I made the right change and didn’t introduce a subtle regression or performance issue.
As the number of tech and domain concerns increases, it becomes more and more important for people to collaborate intelligently.
My team oscillates between 4 and 6 developers, and this has been our practice with pretty high success. Quality is high (as measured by our ability to deliver within our initial estimates and by our bug count), morale is good, and everyone is still learning with minimal frustration.
I think the issue of team cohesion comes into play if people are all independent and also stop talking. If I take the couple of weeks in my previous example instead of talking to my teammate for some knowledge transfer, I’m disincentivized from leaving my personal bubble at all. I’m also less likely to stay within overall software patterns if I think I’m always trailblazing. The first time you write new code, you’ve established a pattern. The first time someone repeats what you did, they’ve cemented that pattern. That’s often a nasty double-edged sword, but I generally lean toward consistency rather than one-off “brilliant” code.
Where this can start to break down is when people become pigeonholed in a certain role. “Oh, you’re the CSS guy, so why don’t you take this styling ticket?” and “Kevin, you do all the database migrations so here’s another one” are great ways to damage morale and increase risk if someone leaves.
Tl;dr: everyone should be able to pick up any task, and they should endeavor to keep their axe sharp in every conceivable area, but that has to be weighed against the time wasted by refusing to lean on other people.
As with most things, it’s usually a matter of personalities and not policies.
My secret weapon is “what questions do you think I should have asked?”.
Not as a cop-out, but as something to follow your other questions with. A good interviewer will point you toward parts of the company’s workings you wouldn’t have thought to ask about. And if they’re evasive, that’s a red flag to weigh when you’re considering their offer.
I looked into this a while back and gave up.
I didn’t find any (good) models I wouldn’t have to pay for, but some of the paid STL sites had sets available for really reasonable prices, so that wasn’t really a blocker.
But FDM is basically incapable of printing the interesting models. Even if you’re laying down good layers, most interesting models just aren’t geometrically compatible with how an FDM printer builds a part. You can print with supports, but removing supports from thin, fragile bits of a model is nigh impossible without doing damage.
I went as far as shopping around for a resin printer, but I didn’t like all the ventilation cautions I read. Adding a printer is one thing, but finding a well-ventilated area that overlaps with where I’d want a printer turned out to be an unsolvable problem in my home.
If you just want to give it a try, grab a model off Thingiverse and see how your printer does. If you can get a piece you’d be happy to proceed with painting, that might be worth a few more iterations to see if it’s workable for your setup.