• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 16th, 2023


  • I think turn based is fine, and in fact I like it. However, when no one has a turn it’s annoying to sit around while nothing happens as the timer keeps ticking. Also, to make it “active”, the turn timer doesn’t stop when you open the menu, so if you delay your action the enemy may get to take their turn just because you were slow navigating the menu. I think ATB is actually the worst of both worlds; I’d prefer either turn based or action RPG rather than being forced to navigate a menu in some facsimile of ‘real time’.

    Where FF7 kind of went south from a gameplay perspective compared to 6 was that in 6, summons were a brief flash. In FF7, by contrast, Knights of the Round would “treat” you to an 80-second spectacle, which was cool the first couple of times but then just a tedious waste of time. Rinse and repeat for any action that was quick in FF6 and before but a slow spectacle in FF7, with no real option to speed up animations you had already seen a dozen times and that wore out their welcome long ago. Just like that stupid chest-opening animation in OOT.

    Anyway, I did enjoy FF7, but the “game” half was kind of iffy.


  • Thing is, those criticisms also mostly apply to FF7.

    Disconnect between combat and exploration? I see that for Zelda, but FF7 goes harder, with random encounters jolting you into a different game engine for combat.

    Too much time in combat waiting while nothing happens? FF7’s battle system is mostly waiting for turns to come up, with lots of dead time.

    Exploration largely locked to narrative allowing it? Yeah, FF7 had that too, with rare optional destinations, a very prescribed order, and forced stops. It only opens up late in the game.

    The video generally laments that OOT was more a playable story than an organic gameplay experience, and FF7 can be characterized the same way. That can be enjoyable, but it gets annoying when the game half of things is awkward and bogs everything down, particularly when you are subjected to repeated “spectacle” (the slow opening of chests in OOT; the battle swirl, camera swoops, and oh man the summons in FF7…).

    They both hit some rough growing pains in the industry. OOT went all in on 3D before designers really had a good idea of how to manage it. FF7 had so much opportunity for spectacle open up that they sometimes let it get in the way. Then there are the generally untextured characters with three vastly different design variations (field, battle, and pre-rendered) as that team tried to find its footing with visual design in a 3D market.


  • Agreed: as a game, as in fun, FF7 wasn’t very good. The music, the visual designs (the pre-rendered stuff), and the story (though it suffered from bad localization) were compelling. But the random encounters, the fights filled with mostly waiting to be able to do things, the best attacks doing too much spectacle that was nice the first time but pretty boring on repetition… The materia management became frustrating as you got more party members, with no way to sort or search, even though in-game dialog mentions how much of a pain it is…

    Chrono Cross actually had significantly better game design, with enemies on screen and no standing around waiting for some character’s turn to come up before anything would happen. I wish FF7 had clipped the “no action allowed by either side” time; that alone would have helped immensely. Then it just becomes a matter of whether the player prefers real-time adventure to menu-driven play.



  • While I’m not particularly invested in their choice, I will say that I’ve got some counters to the points given as to why not:

    • Logging for diagnostics: probably the closest point, but you can either centralize such logs somewhere local disk does not matter, or leave logs in RAM with aggressive rotation.
    • Ability to update without rebooting: the diskless systems I work with can be updated live too. Live updates do eat more memory in my case, for reasons that will be clear in a moment, but a rolling reboot should be fairly non-disruptive to “bake” the live updates into the efficient form. Other diskless setups just live in tmpfs, in which case live updates are no problem at all, though that does take a lot of RAM.
    • Diskless uses too much RAM: at least in the setups I work with, the diskless RAM usage is small, as the root filesystem is downloaded on demand with a write overlay in zram to compress all writes. Effectively how a live CD boots, but with the CD replaced by a network filesystem.
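    The on-demand root with a compressed write overlay described above can be sketched roughly like this. This is an illustrative early-boot fragment, not a drop-in script: the NFS export, mount points, and zram size are all made-up examples, and every command needs root in an initramfs-like environment.

```shell
# Read-only root fetched over the network on demand.
mount -t nfs -o ro server:/exports/rootfs /lower

# A compressed block device in RAM to catch all writes.
modprobe zram
echo 512M > /sys/block/zram0/disksize   # size is an arbitrary example
mkfs.ext4 /dev/zram0
mount /dev/zram0 /rw
mkdir -p /rw/upper /rw/work

# Merge the two: reads come from the network filesystem,
# writes land (compressed) in RAM via the overlay's upper dir.
mount -t overlay overlay \
    -o lowerdir=/lower,upperdir=/rw/upper,workdir=/rw/work /newroot
```

    Same shape as a live CD boot, with the CD swapped for the network mount.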

  • Yep, and I see evidence of that over-complication in some ‘getting started’ questions: people ask about really convoluted designs, and others reinforce it by doubling down or bringing up other weird, exotic stuff, when the asker might be served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, or by installing a package and just having it run, or at most by running a podman or docker command. If they are struggling with complicated networking and scaling across a set of systems, they are going way beyond what makes sense for a self-hosting scenario.


  • Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self-hosting. People throw in all sorts of complexity to imitate what they are asked to do professionally: things that are either actually bad but carried by hype and marketing, or that bring value only at scales beyond a household’s hosting needs. Far simpler setups will suffice, and they are nearly zero-touch day to day.


  • For 90% of static site requirements, it scales fine. That entry-point reverse proxy is faster at fetching content to serve via filesystem calls than at making an HTTP call to another HTTP service. For self-hosting types of applications, I’d guess that percentage goes to 99.9%.

    If you are in a situation where serving the files through your reverse proxy directly does not scale, throwing more containers behind that proxy won’t help in the static content scenario. You’ll need to do something like a CDN, and those like to consume straight directory trees, not containers.

    For a dynamic backend, maybe. Mainly because you might screw up, and your backend code needs to be isolated to mitigate security oopsies. Often it is also useful for managing dependencies, though that facet matters less for golang, where the resulting binary is pretty well self-contained except for maybe a little light usage of libc.


  • But if you already have an nginx or other web server that is otherwise required to start up (which is in all likelihood the case), you don’t need any more auto-startup; the “reverse proxy” already running can just serve it. I would say container orchestration versioning can be helpful in some scenarios, but a simple git repository for a static website is way more useful, since it has the right tooling to annotate changes very specifically on demand.

    That reverse proxy is ultimately also a static file server. There’s really no value in spinning up more web servers for a strictly static site.

    Folks have gone overboard assuming docker or similar should wrap every little thing. It sometimes adds complexity without making anything simpler. It can simplify some scenarios, but adding a static site to a webserver is not a scenario that enjoys any benefit.


  • Because serving static files doesn’t really require any flexibility in web serving code.

    If your setup has an nginx or similar as the reverse-proxy entry point, you can just tell it to serve the directory. Why bother making an entirely new chroot and proxy hop when you have absolutely zero requirements beyond what the reverse proxy already provides? Now if you don’t have that entry point, fine, but at least 99% of the time I see some web server acting as the initial arbiter into services, and it would have all the capability needed to just serve the files.
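    As a concrete sketch, assuming nginx is the entry point (the server name, paths, and backend port here are all made-up examples), serving the static site is one location block alongside whatever is already proxied:

```nginx
server {
    listen 443 ssl;
    server_name example.home;    # hypothetical name

    # Static site: plain filesystem reads, no extra container or proxy hop.
    location / {
        root /srv/www/site;      # assumed path to the site's files
        try_files $uri $uri/ =404;
    }

    # A dynamic app keeps its own backend behind proxy_pass.
    location /app/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

    Deploying the site is then just updating files under that root, e.g. from a git checkout.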


  • WSL may be fine for a Windows user to get some access to Linux; however, for me it misses the vast majority of what I value in a desktop distribution:

    -Better window managers. This is subjective, but with Windows you are stuck with Microsoft’s implementation, and if you might like a tiling window manager or Plasma workspaces better, well, you need to run something other than Windows or OSX.

    -Better networking. I can do all kinds of stuff with networking. Niche relative to most folks, but the Windows networking stack is awfully inflexible and frustrating after doing a lot of complex networking tasks in Linux

    -More understanding and control over the “background” pieces. With Windows, a lot is happening even when you are doing nothing, and it’s not really clear what is running where. Linux can be daunting too, but the pieces can be inspected more easily and things are more obvious.

    -Easier “repair”. If Windows can’t fix itself, it’s really hard to recover from a lot of scenarios. Generally speaking, a Linux system has to be pretty far gone before it can’t be fixed from a live environment.

    -Easier license wrangling. Am I allowed to run another copy of Windows? Can I run it in a VM, or does it have to be bare metal? Is it tied to the system I bought it preloaded on, or bound to my Microsoft account? With most Linux distributions this is a lot easier: the answer is “sure, you can run it”.

    -Better package management. With flatpak, dnf, apt, zypper, or snap, I can find pretty much any software I want to run, and by virtue of installing it that way, it also gets updated. Microsoft has added winget, which is a step in the right direction, but the default ‘update’ flow for a lazy user still ignores all winget content, and many applications ignore all of that and push their own self-updater, which is maddening.
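    The contrast boils down to one habit-forming command per manager. A small illustrative helper (the mapping is just the commonly documented invocations; actually running any of them needs privileges and network access, so this only prints them):

```shell
# update_cmd NAME: echo the usual "update everything" invocation
# for a given package manager (printed, not executed).
update_cmd() {
  case "$1" in
    apt)     echo "sudo apt update && sudo apt upgrade" ;;
    dnf)     echo "sudo dnf upgrade" ;;
    zypper)  echo "sudo zypper update" ;;
    flatpak) echo "flatpak update" ;;
    snap)    echo "sudo snap refresh" ;;
    winget)  echo "winget upgrade --all" ;;
    *)       echo "unknown package manager: $1" >&2; return 1 ;;
  esac
}

# Print the command for whichever managers exist on this machine.
for pm in apt dnf zypper flatpak snap winget; do
  command -v "$pm" >/dev/null 2>&1 && update_cmd "$pm"
done
true
```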

    The biggest concern, as this thread shows, is that WSL sets the tone of “OK, you have enough Linux to do what you need from the comfort of the ‘obviously’ better Microsoft ecosystem” and causes people to never consider actually trying it for real.