The license does not prevent number four from happening; it just asks people not to do it.
It’s absolutely FOSS. It is not, however, federated. But federation is not a requirement for free and open source software.
Think of it like this: Linux is free and open source software, even if I don’t give you a shell on my computer.
You can use the code however you want, in any project you want.
The back end is open source, but at times the published source has lagged years behind the deployed version. Other developers have stood up their own copies of the Signal network; Session, for example.
You can self-host your own Signal, but it’s not federated, so you’d have nobody to talk to.
I don’t think they are trolls; it’s either lots of breathless enthusiasm or performance art.
Even after you get your ideal setup, with all your traffic traversing your network to a single host, you have bottlenecked the whole network to the speed of that single host.
Usually in networks, devices are able to talk to each other directly across the switch fabric without interfering with other traffic.
Say you have four devices A, B, C, D, with each pair trying to send 1 Gb/s of traffic to each other over a GbE network, all connected to the same switch. A↔B gets 1 Gb/s and C↔D gets 1 Gb/s, for a total concurrent throughput of 2 Gb/s.
In your model, since all traffic has to hit the central WireGuard node W first, you can only get 1 Gb/s concurrently.
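The aggregate-throughput difference above can be sketched as a tiny calculation (a simplified model: disjoint pairs on a non-blocking switch vs. all flows funneled through one 1 Gb/s hub link):

```python
# Simplified throughput model: direct switching vs. hub-and-spoke
# through a single WireGuard node W. Link speed is 1 Gb/s (GbE).

LINK_GBPS = 1.0

# Direct switching: each disjoint pair (A<->B, C<->D) gets its own
# path through the switch fabric, so their bandwidth adds up.
pairs = [("A", "B"), ("C", "D")]
direct_total = LINK_GBPS * len(pairs)

# Hub-and-spoke: every flow shares W's single 1 Gb/s link, so the
# aggregate is capped at that one link's speed no matter how many
# pairs are talking.
hub_total = LINK_GBPS

print(direct_total, hub_total)  # 2.0 vs 1.0
```

Real numbers would be lower still once you account for encryption overhead and W's CPU, but the structural cap is the point.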
The synology.com account, not the NAS account
The person doing free labor and providing open source software doesn’t use the preferred vocabulary… Still a net positive; no reason to brigade their issue tracker for wrongthink.
Encouraging the internet to harass a volunteer is low.
Nothing is stopping people from forking the project, changing the vocabulary, and maintaining their fork. But that’s more work than drive-by hate.
Tailscale, cloudflared tunnels, Nebula
I was surprised too. But a lot of current NAS devices basically operate as hosting devices. It makes sense: the hard drives are there, the power is there, the RAM is there, the CPU is there. So for the low-intensity containers and VMs you want to run, like a Plex server, a DNS server, or Tailscale, it’s all right there.
At this point, it might be easier just to buy a supported UPS. I’m glad the Back-UPS 850 is working; it’s a good data point.
I followed your advice, went through the settings, and tried to enable the USB device, but it’s just not detected.
Oh, Synology Drive is a file-system syncing utility; it provides local caching of a remote file system and then syncs the files back. It’s not the software that shuts down the computer.
Right now, if the NAS gets powered off while updates are being applied, that would be really bad and inconvenient, and would require manual intervention.
In-memory caching and NVMe caching… well, I think the file system would almost certainly be left in a consistent state, but you might lose data in flight if you’re not careful.
The real problem that I need a UPS for is not the loss of some data. It’s that when storms hit and there’s flooding, the power can cycle up and down quite rapidly, and that’s really bad for sensitive hardware like hard disks. So I want the NAS to shut off when the power starts getting bad, not turn back on for a good while, but still turn on automatically once things stabilize.
Because this device runs a bunch of VMs and containers as well, shutting down cleanly so that all of those writes get flushed is good practice.
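On a setup where you control the UPS daemon yourself (a sketch assuming Network UPS Tools with a `usbhid-ups`-compatible UPS; Synology’s built-in support exposes less than this), the shut-off-and-delay-restart behavior can be expressed with the driver’s delay options:

```ini
; /etc/nut/ups.conf — hypothetical NUT config, names are assumptions
[myups]
    driver = usbhid-ups
    port = auto
    offdelay = 60     ; UPS cuts output 60 s after the shutdown command,
                      ; giving the NAS time to flush writes and halt
    ondelay = 300     ; UPS waits 300 s after wall power returns before
                      ; restoring output, so rapid power cycling during a
                      ; storm doesn't repeatedly bounce the hardware
```

Whether a given UPS honors these delays depends on its firmware, so it’s worth testing with a deliberate pull-the-plug drill before trusting it in a storm.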
AND their Synology Drive client requires administrative permission to install on macOS and on Windows. Why? Why…
While I’m ranting about this process, I have other complaints.
Synology.com, if you want to add a second factor to your account, requires a phone number as the master factor in case you lose your second factor. So if you’re worried about SIM jacking, or even just not having a consistent phone number for the lifetime of the deployment, it’s a terrible practice. There’s no way to unlink all phone numbers from an account; you can only replace them with a new phone number.
Synology does actually support hardware USB security keys, but only as a secondary factor behind SMS… Aiya.
Is there a key?
100%. I think the developer taking the project read-only was not a temper tantrum; it was just them signaling they don’t have time to maintain it. So now if you want anything to happen, you must fork it.
I think it generates a number of operations equal to 5 days converted into seconds.
To decrypt, however, you have to perform all of those operations, so I think it would take 5 days to decrypt, even if you wait 10 days before starting.
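A sketch of how such a time-lock can work, assuming it uses an iterated, inherently sequential function (the actual tool may use repeated squaring or another construction; the seed and names here are made up):

```python
import hashlib

def timelock_key(seed: bytes, iterations: int) -> bytes:
    """Derive a key by hashing sequentially `iterations` times.

    Each step's input is the previous step's output, so the work
    can't be parallelized or started early: waiting 10 days before
    beginning doesn't make the 5 days of hashing finish any sooner.
    """
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# A real deployment would calibrate `iterations` to the target
# delay (e.g. roughly 5 days' worth of hashes on typical hardware).
# A tiny count here just shows the derivation is deterministic:
k1 = timelock_key(b"hypothetical-seed", 1000)
k2 = timelock_key(b"hypothetical-seed", 1000)
assert k1 == k2  # same seed + same count -> same key
```

The key from the final hash would then encrypt the payload; anyone with the seed can recover it, but only after doing the full sequential work.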
This seems interesting, but for something so complex I would really like them to have a white paper explaining how they achieve it.
https://eprint.iacr.org/2023/189 — Other systems, for instance, use a third-party network to broadcast the parts of the secret needed to decrypt over time. So you’re relying on a third-party service, and if that service disappears you can’t decrypt.
IPv6 addresses and Let’s Encrypt.
If your addresses are globally unique, you don’t have to care about internal vs. external.