Recently learned about this stuff on a Lemmy World post and I thought I’d move the conversation here since they’ve been fussy about DB0 in the past.
I’m really just a common seaman when it comes to the high seas. I just use Proton and qBit and whichever website is supposed to be safe and active nowadays (currently Torrent Galaxy?). I just download from the magnet link to qBit and save it on my drive. I don’t know much about torrent streaming or ports or networks or anything IT might ask me to check beyond “plug it in”.
But for some shows I’ve only been able to find single episodes, not full seasons, so when I heard about something that compiles stuff for me, it seemed convenient. I’d be curious to learn more. Unfortunately the websites for these services don’t really offer any explanation to new users and laymen, so I got a bit lost. Thought I’d ask here rather than venture into their forums where they already don’t seem to welcome idiots like me.
So… what the heck is Sonarr and how do I use it?
So there are multiple technologies at play. One is an indexer program (Jackett, Prowlarr, etc.). These basically hook up to public trackers (1337x, TPB, etc.).
Then you have Sonarr/Radarr, which are connected to the indexer. Sonarr and Radarr basically run off RSS feeds (an RSS feed is basically a list of content; podcast and YouTube apps use this to show you new episodes/videos).
I think they use TMDB or something as their source of RSS feeds. They also let you select which shows to monitor, and they store that information in a database. So every so often Sonarr will reach out to TMDB and request the latest RSS feed for each show in its database. If an episode that Sonarr is supposed to download shows up in the RSS feed, it will then send a request to its indexer telling it what show, what episode, what season, etc.
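That monitoring loop is easier to picture with a toy example. This is just a sketch: the feed XML, show name, and episode tags below are all made up, and real Sonarr gets its feeds from a metadata service rather than a hard-coded string.

```python
import xml.etree.ElementTree as ET

# A made-up RSS feed of the kind Sonarr might poll. Real feeds come
# from an indexer or metadata service, not a hard-coded string.
FEED_XML = """<rss version="2.0">
  <channel>
    <title>Example Show</title>
    <item><title>Example Show S02E05 1080p</title></item>
    <item><title>Example Show S02E06 1080p</title></item>
  </channel>
</rss>"""

def new_episodes(feed_xml, wanted):
    """Return feed item titles that match a monitored episode tag."""
    root = ET.fromstring(feed_xml)
    titles = [item.findtext("title") for item in root.iter("item")]
    return [t for t in titles if any(tag in t for tag in wanted)]

# Episodes we are "monitoring", like Sonarr's database of wanted episodes.
print(new_episodes(FEED_XML, {"S02E06"}))
# Only the monitored episode is picked up from the feed.
```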
The indexer then searches each tracker it is connected to for that show, season, episode combo and returns a list of links to sonarr/radarr.
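The indexer's fan-out step looks roughly like this. The two "trackers" here are stub functions with made-up release names; a real indexer like Prowlarr or Jackett speaks each tracker's actual search API instead.

```python
# A toy indexer: send the same query to several "trackers" and merge
# the hits into one list, like Prowlarr/Jackett returning links to Sonarr.

def tracker_a(query):
    releases = ["Example Show S02E06 1080p WEB", "Other Show S01E01"]
    return [r for r in releases if query in r]

def tracker_b(query):
    releases = ["Example Show S02E06 720p HDTV"]
    return [r for r in releases if query in r]

def search_all(query, trackers):
    """Query every tracker and return one combined list of releases."""
    results = []
    for tracker in trackers:
        results.extend(tracker(query))
    return results

print(search_all("Example Show S02E06", [tracker_a, tracker_b]))
```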
Sonarr then has a set of rules in its database for filtering these links (i.e. minimum quality, language, etc.) to determine which one to pick. Finally, Sonarr/Radarr has a location in its settings where it should save the files.
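A toy version of that filtering step: score candidate releases and keep the best one that passes the rules. The field names and numbers here are invented for the sketch; Sonarr's real quality profiles are more elaborate.

```python
# Filter candidate releases the way Sonarr's quality rules do:
# drop anything below the minimum quality or in the wrong language,
# then take the highest-quality release left over.

releases = [
    {"title": "Example Show S02E06 480p",  "quality": 480,  "language": "en"},
    {"title": "Example Show S02E06 1080p", "quality": 1080, "language": "en"},
    {"title": "Example Show S02E06 1080p", "quality": 1080, "language": "fr"},
]

def pick_release(releases, min_quality=720, language="en"):
    """Return the best release passing the filters, or None."""
    ok = [r for r in releases
          if r["quality"] >= min_quality and r["language"] == language]
    return max(ok, key=lambda r: r["quality"]) if ok else None

best = pick_release(releases)
print(best["title"])
```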
Now Sonarr/Radarr can't download anything themselves; instead they are also hooked up to a torrent client, for example qBittorrent, which has an API that lets you programmatically download torrents (i.e. it has a command to download a torrent, and Sonarr/Radarr sends that command along with additional information like the link and where to save the files).
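Here's what that hand-off looks like. This sketch only builds the HTTP call, it doesn't send it; the endpoint and form fields follow qBittorrent's documented "torrents/add" Web API call, but treat the details as an assumption and check the docs for your version.

```python
from urllib.parse import urlencode

# Build (but don't send) the kind of HTTP request Sonarr makes to
# qBittorrent's Web API to queue a download.

def build_add_torrent_request(base_url, magnet_link, save_path):
    """Return the URL and form body for an 'add torrent' API call."""
    url = f"{base_url}/api/v2/torrents/add"
    body = urlencode({"urls": magnet_link, "savepath": save_path})
    return url, body

url, body = build_add_torrent_request(
    "http://localhost:8080",
    "magnet:?xt=urn:btih:EXAMPLEHASH",   # placeholder magnet link
    "/media/tv/Example Show/Season 02",
)
print(url)
# Actually sending it means POSTing this body (after logging in via
# /api/v2/auth/login) with urllib.request or the 'requests' library.
```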
This is the basic setup, but there are other tools sometimes used, like Unpackerr, which is for decompressing files that get downloaded. Unpackerr watches a folder for new files, and if it finds a file in a compressed format (7z, rar, zip, etc.) it will automatically decompress it so that a media program like Jellyfin can play it without you having to do it manually.
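A miniature version of that "watch a folder and unpack" job, just to show the idea. This sketch only handles .zip with the standard library (rar and 7z need extra tools), and the demo folder and file names are made up.

```python
import pathlib
import tempfile
import zipfile

def unpack_archives(folder):
    """Extract every .zip in `folder` in place; return extracted names."""
    extracted = []
    for archive in pathlib.Path(folder).glob("*.zip"):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(archive.parent)
            extracted.extend(zf.namelist())
    return extracted

# Demo: create a throwaway "downloads" folder holding one zipped episode.
with tempfile.TemporaryDirectory() as downloads:
    archive = pathlib.Path(downloads) / "episode.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        zf.writestr("episode.mkv", b"fake video data")
    print(unpack_archives(downloads))
```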
Programs like Jellyfin are media servers where you specify folders for movies/TV shows/etc., and any playable file in those folders can be streamed in their app/web interface. These kinds of programs are really just easy-to-use graphical front ends built on top of more technical programs like ffmpeg, which does the transcoding and streaming.
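To make "built on ffmpeg" concrete, here's the rough shape of a transcode command a media server might run behind the scenes. The file names and the 720p setting are illustrative; this sketch just assembles the argument list rather than running ffmpeg.

```python
# Assemble (but don't run) an ffmpeg command of the kind a media
# server uses to transcode a file down to a quality a client can stream.

def ffmpeg_transcode_args(src, dst, height=720):
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",   # resize to the requested height
        "-c:v", "libx264",             # re-encode video as H.264
        "-c:a", "aac",                 # re-encode audio as AAC
        dst,
    ]

print(" ".join(ffmpeg_transcode_args("movie.mkv", "movie.mp4")))
# You could hand this list to subprocess.run() if ffmpeg is installed.
```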
Then there are also programs like FlareSolverr. You would integrate this into your indexer because some trackers use Cloudflare to block bots (it requires you to click a checkbox and watches the movement of the cursor to see if it is robotic). FlareSolverr uses something called Selenium WebDriver, which is a program that can automate a web browser: you can program it to open web pages, click things, etc. I assume the code uses randomization to make Cloudflare think a person is moving the mouse to click the button, so you can access those trackers.
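For the curious, the indexer talks to FlareSolverr over HTTP too. This sketch only builds the JSON payload; the command and field names follow FlareSolverr's documented API, but double-check against your version, and the tracker URL is a placeholder.

```python
import json

# Build the JSON payload an indexer POSTs to FlareSolverr to fetch a
# Cloudflare-protected page through an automated browser.

def flaresolverr_payload(target_url, timeout_ms=60000):
    return json.dumps({
        "cmd": "request.get",      # ask FlareSolverr to GET the page
        "url": target_url,
        "maxTimeout": timeout_ms,  # give the browser time to pass the check
    })

payload = flaresolverr_payload("https://example-tracker.invalid/search")
print(payload)
# This would be POSTed to FlareSolverr's /v1 endpoint (by default at
# http://localhost:8191/v1) with Content-Type: application/json.
```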
In simple terms, that's how it all works. All these programs set up a web interface and an API, and they send each other HTTP requests to communicate.
Magnificently explained