• 4 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Oh, I back up religiously since Blue failed right after I moved, and I back up my backups on my laptop as well. (Literally failed: I lost everything and had to run photorec and three other tools to pick out everything I’d done for the previous six months, which I hadn’t copied to a backup on my server because I was prepping to move at the time.)

    So far, OTBR is the biggest stopping issue, since HA runs it but nothing sticks. I admit moving zwave is my actual biggest dread; the zigbees I can probably do in a weekend, but zwave is such hell to unpair and re-pair (though it makes up for it by sticking forever). That’s part of the reason I love Thread and Matter; they’re almost as sticky as zwave once they pair, and while pairing them is variable (sometimes fast, sometimes not so much), they repair themselves pretty consistently if the outage is under 24 hours, and you can deliberately unpair them fairly easily.


  • I’ve been running Home Assistant for roughly five to six years (Pi, then Blue, now Amber, plus a second instance on my server for network integrations like nmap and netgear), but since my SmartThings hub was taking care of zigbee/zwave, until now I used HA as a coordinator for every smart-device ecosystem I was using (Hue, Wyze, Ring, Blink, Alexa, August, Arlo, et al). Sorry that wasn’t clear.

    While I’ve started slowly adding zigbee devices directly, I haven’t started on zwave, and thread isn’t working for me yet (OTBR is running but nothing sticks). And I really don’t want my hub to fail and leave all my thread/matter devices useless when I don’t have anything else that can access them.







  • So it can be done; it just required a lot of steps and making a mapping spreadsheet of all the containers. But! Automations and scripts run in the homeassistant container, while when you ssh in, you’re going into the ssh addon container–which should have been obvious, and really was once I finished mapping all the containers.

    Goal: I need /usr/local/bin in the ssh container so I can run scripts over ssh and access my function library script easily without ./path/to/script.

    Summary: ssh into HAOS itself as root from the homeassistant container (port 22222), run docker exec to get into the ssh addon container, then make your symlinks in /usr/local/bin.

    (Note: this is ridiculously complicated and I know there has to be a better way. But this works so I win.)

    1. Get access to HAOS itself as root: https://developers.home-assistant.io/docs/operating-system/debugging. Verify you can log in successfully.
    2. In homeassistant container:
    • a. Create a .ssh folder (/config/.ssh).
    • b. Add the authorized_keys file you made for step one.
    • c. Add the public and private keys you made for step one (they should be in the ssh addon container).
    • d. Set permissions:
    chmod 600 /config/.ssh/authorized_keys
    chmod 600 /config/.ssh/PRIVATE_KEY
    chmod 644 /config/.ssh/PUBLIC_KEY
    chmod 700 /config/.ssh
    
    • e. In /config/shell_scripts.yaml or wherever you put your shell scripts, add the script you want to use to update /usr/local/bin: UPDATE_BIN_SCRIPT: /config/shell_scripts/UPDATE_BIN_SCRIPT (a YAML sketch follows these steps).
    • f. Restart HA.
    • g. Check it in Developer Tools -> Services.
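
    A minimal sketch of what step (e) looks like in YAML, assuming shell_scripts.yaml is pulled in under the shell_command integration in configuration.yaml; the entry name and path are the placeholders from above, so adjust to your layout:

      # configuration.yaml
      shell_command: !include shell_scripts.yaml

      # /config/shell_scripts.yaml
      # (real service names should be lowercase; caps here mirror the placeholder)
      UPDATE_BIN_SCRIPT: bash /config/shell_scripts/UPDATE_BIN_SCRIPT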

    I have no idea how consistent the ssh addon container name usually is, but it’s different on all three of my installs, so insert your container name wherever you see SSH_ADDON_CONTAINER_NAME.
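
    If you need to hunt yours down, something like this from the HAOS shell (port 22222) should surface it; I’m assuming the addon containers are named addon_-something, which has held true on my installs:

      # list running container names and eyeball the one for the ssh addon
      docker ps --format '{{.Names}}' | grep -i ssh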

    Steps: log in to HAOS, go into the ssh addon container, and do the update. This is horribly messy, but hey, it works.

    UPDATE_BIN_SCRIPT

    #!/bin/bash
    
    # OPTIONAL: update some of the very outdated Alpine packages (figlet
    # makes cool ASCII art of my server name). The apk line below covers the
    # homeassistant container; the docker exec block below covers the ssh
    # addon container. Skip these if you don't want to update packages.
    # update homeassistant container packages
    apk add coreutils figlet iproute2 iw jq ncurses procps-ng sed util-linux wireless-tools
    
    # ssh into HAOS and access docker container
    ssh -i /config/.ssh/PRIVATE_KEY -p 22222 root@HA_IP_ADDRESS << EOF
    	docker exec SSH_ADDON_CONTAINER_NAME \
    	bash -c \
           'apk add coreutils figlet iproute2 iw jq ncurses procps-ng sed util-linux wireless-tools; \
    	if [ ! -h /usr/local/bin/SCRIPT1 ]; then echo "SCRIPT1 does not exist"; \
    	ln -s /homeassistant/shell_scripts/SCRIPT1 /usr/local/bin/SCRIPT1; echo "Link created"; \
    	else echo "Link exists";fi; \
    	if [ ! -h /usr/local/bin/SCRIPT2 ]; then echo "SCRIPT2 does not exist"; \
    	ln -s /homeassistant/shell_scripts/SCRIPT2 /usr/local/bin/SCRIPT2; echo "Link created"; \
    	else echo "Link exists";fi'
    EOF
    
    echo "Done"
    

    I am going to feel really stupid when I find out there’s a much easier way.
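
    In the meantime, a quick sanity check from inside the ssh addon (over regular ssh, not port 22222); SCRIPT1 is the same placeholder used in the script:

      ls -l /usr/local/bin/SCRIPT1   # should show the link into /homeassistant/shell_scripts
      SCRIPT1                        # should now run without ./path/to/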


  • > Docker containers are designed to be immutable. The moment they’re stopped and recreated, any changes to them are thrown out. You’re supposed to add a layer to your Docker image if you want to add command-line tools and such. That’s why it’ll keep deleting your stuff every time you update.

    It took me until I put Home Assistant on my server in a docker container to realize what was going on there. I use docker more now, but it’s really, really nothing like this.
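
    For what it’s worth, “add a layer” looks something like this. A minimal sketch, not how the HA addon images are actually built, and the base image and tag are made up:

      # Dockerfile: bake the extra packages into the image itself so they
      # survive the container being stopped and recreated
      FROM alpine:3.19
      RUN apk add --no-cache figlet jq

      # build once with: docker build -t my-patched-image .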

    > Running the script inside Docker should put it in the right place, but I wouldn’t advise doing it that way.

    That’s what I’ve been doing manually over regular ssh (not the 22222 port one).

    > To work around the path issue, maybe consider using hard links rather than soft links?

    That’s what I think I need to do, but the only ‘hard’ links–at least according to multiple find -name/find -iname searches over the 22222 ssh–are all in /mnt/data/docker/overlay2 and /var/lib/docker/overlay2. I get that there’s a working pattern with the overlays, but dear God, why.

    > Alternatively, you could figure out where HAOS stores the Docker config and add a volume definition of your own. You’ll probably be able to put all of your files in /usr/local/bin by adding a line like “- /path/home/host:/usr/local/bin” in the right place. I don’t know where this config is stored, though.

    Okay that makes sense. I guess the first step is to get the container structure and volume.
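
    (Presumably something like this from the HAOS shell would show the ssh addon’s existing mounts to pattern-match against; the container name is the same placeholder as before:)

      docker inspect SSH_ADDON_CONTAINER_NAME --format '{{ json .Mounts }}' | jq .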

    Thanks so much! I’ll update if I find the solution or die trying.




  • You know, I didn’t think of that. I’ve never run an OS in docker; all I tested my data collection scripts on were my regular VMs, a few times just for fun. And for that matter, most LXC containers I run in Proxmox are privileged to get around restrictions (still haven’t found a way for LXCs to let me compile for different architectures, though). HA may have updated their docker to current, which would explain why it happened so suddenly.

    And yes, for now I’ll just do root login to grab the information; it’s technically more accurate. I am just knee-jerk distrustful of using root, to the point that until Proxmox and this last year, I almost forgot it existed unless there was a very weird linux problem I needed it for. Thanks for this information, though; I’ve only just started seriously working with LXC and docker containers, so that’s not an approach I would have considered.


  • Full disclosure: I just–and I mean just–got my head wrapped around docker and containers due to installing Proxmox on my server. Right now, my Proxmox server runs an LXC container for docker, and in docker I run Handbrake and MakeMKV images that run their GUIs in a browser or from the command line. They connect to each other through mounting the LXC’s /home/user into both; then I added a connection to the remote shares on my other server so I can send files to my media server. Yes, I did have to map all the mountings out before I started, but hey, that’s how I learn.

    Long way of saying: I am just now able to start understanding how Home Assistant works–someone said Home Assistant OS was basically a hypervisor overseeing a lot of containers, and now that I use Proxmox, that really helps–but I’m still really unfamiliar with the details.

    I installed the full Home Assistant on a dedicated Pi 4, so it’s the only thing that machine does. Until yesterday, the only part I actually interacted with was the data portion, which is where all my files are and where I configure my GUI and scripts, store addons, etc. The container for this portion runs on Alpine Linux; I can and do install/update/change/build packages I need or like to use in there. It’s ephemeral, though; anything I do outside the data directory (which holds /config, /addons, etc.) gets wiped clean, so I reinstall my packages whenever HA does an update.

    When I run my data collection scripts on my Home Assistant SBC, they take their information from the container, aka Alpine Linux, up to and including reporting my OS as Alpine. All of this worked correctly until–according to the directory dates–December 10th at 2:40 AM, when /sys/firmware was last updated and everything in it vanished, breaking the symlink to /proc/device-tree/model. This also updated the container OS to Alpine 3.19.0. Data collection runs hourly; one of my Pis sshes into each computer to run four data collection scripts and updates a browser page I run off apache, so I can check current presence and network status and also check the OS/hardware/running services of all my computers from the browser (the services script doesn’t work on Alpine yet; different structure). I didn’t notice until recently because work got super busy, so I was only verifying availability and network status regularly.
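
    To illustrate what broke: on ARM boards, /proc/device-tree is normally a symlink into /sys/firmware, so when /sys/firmware emptied out inside the container, the model file went with it. (The outputs here are what a Pi 4 shows; yours will differ.)

      ls -ld /proc/device-tree     # -> /sys/firmware/devicetree/base
      cat /proc/device-tree/model  # e.g. "Raspberry Pi 4 Model B Rev 1.5"
      # since the December update the link target is gone inside the
      # container, so the cat fails even though the symlink is still there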

    These are the packages I install, or switch to an updated/different version of, in the Alpine container to help with this or just to have fun:

    • figlet (it’s just cute ASCII art for an ssh banner),
    • iproute2 (network info; when updated, it has an option to store network info in a variable as JSON),
    • iw (wireless adapter info),
    • jq (reads and processes json files),
    • procps-ng (updated uptime package for more options),
    • sed (updated, it can do more than the installed one),
    • util-linux (for the column command in bash),
    • wireless-tools (iwconfig; more wireless data if iw doesn’t have it)

    (Note: I think tr may also be updated by one of these.)

    These are the ones I use for data collection that are already installed:

    • lscpu (“Model name” “Vendor ID” “Architecture” “CPU(s)” “CPU min MHz” “CPU max MHz”)
    • uname (kernel)

    These are the files I access for data collection (a simplified example of the reads follows the list):

    • /proc/device-tree/model (Computer model)
    • /proc/meminfo (RAM)
    • /proc/uptime (Uptime)
    • /etc/os-release (Current OS data)
    • /sys/class/thermal/thermal_zone0/temp (CPU temperature for all my SBCs except BeagleBone Black)
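
    The reads are all along these lines; a simplified sketch, with tr stripping the trailing NUL that the device-tree model file ends with:

      model=$(tr -d '\0' < /proc/device-tree/model)
      mem=$(awk '/MemTotal/ {print $2, $3}' /proc/meminfo)
      up=$(awk '{print int($1/3600)"h"}' /proc/uptime)
      . /etc/os-release
      echo "$model | $PRETTY_NAME | $mem | up $up"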

    Until this month, all of those files were accessible both before I do the package updates and after. The only one affected was maybe /proc/uptime, by the uptime update to get more options. Again: I’ve been running these scripts or versions of them for well over a year, and I test individually on each SBC before adding them to my data collection scripts to run remotely; all of these worked on every computer, including whatever SBC was running Home Assistant (an Odroid N2+ until it died a few months ago). And all of them work right now–except /proc/device-tree/model on my Home Assistant SBC. The only way I can get model info is to add an extra ssh to Home Assistant itself as root and grab the data off that file (and, while I’m there, get the OS data for Home Assistant instead of Alpine), save it to my shell script directory in my data container, and have my script process that file after it gets the rest from the container.
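
    The extra hop looks roughly like this. A sketch: the key path, IP, and output file are placeholders consistent with my earlier comment:

      # one more hop from the data container to HAOS itself to grab what
      # the container can no longer see
      ssh -i /config/.ssh/PRIVATE_KEY -p 22222 root@HA_IP_ADDRESS \
          'tr -d "\0" < /proc/device-tree/model; echo; cat /etc/os-release' \
          > /config/shell_scripts/ha_host_info.txt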

    That’s why I’m weirded out; this is one of the things that has been the same on every single Linux OS I’ve used, Alpine included, so why on earth would this one thing change?

    This could conceivably be an Alpine issue; I downloaded Alpine 3.19.0 to run in Proxmox when I get a chance, and I kind of hope it’s a deliberate change in Alpine, because otherwise I can’t imagine why on earth the HA team would alter Alpine to break that symlink. Or they could be templating Alpine for the container each time, and this time it accidentally broke. The entire thing is just so weird. Or maybe–though not likely–it’s a bug in Alpine 3.19.0, but I doubt it; I can’t possibly be the first to notice, it was released at least three weeks ago, and I googled a lot.

    I’m honestly not sure it affects anything at all, but it bothers me, so here we are. Though granted, it did make me finally get off my ass and figure out how to log in to HA as root, as well as do a badly needed refactor of my main data collection script (the one that does the ssh’ing) and clean up and refactor my computer information scripts, so maybe it was destiny.




  • I know; I’m trying to write up a clear bug report on this, but I’m honestly not sure it actually has any effect other than messing up my data collection scripts. Yeah, it’s annoying the hell out of me, but I’ve been going through the documented issues with the core, and it doesn’t look like anyone else has noticed a problem. I’ve been trying to figure out whether it’s caused by an Alpine package I can run, but not much luck there.

    Note: I enabled root for Home Assistant OS and the symlink and file are fine there.



  • Oh thank God. Normally I know how to read (since kindergarten), but in the time between posting and your reply, I hit a very unwilling thirty-six hours awake, so I low-grade panicked that actually, it only read normal to me and I was lecturing people on becoming vegan fascists or something.

    I am still thinking on the article, but it’s going to take a couple of reads to put it in context. I’m still trying not to form really firm opinions on much yet regarding the Fediverse, since I seriously do not know enough, and yes, even I find it hilarious when I have to backtrack from a really stupid position, but I can save the public embarrassment for later. Lemmy’s still young; I have plenty of time for that.




  • I’m a QC analyst and we are fully Agile, so I’m required to attend every. team. meeting. Discovery, story point estimation, design spikes; any day can be poorly-handled-emotional-regulation day, and whoever’s feeling it is making it everyone’s problem when all we want is to finish a few maintenance items and maybe add a comma to some text. Though the testers have nothing to do with any of this after story pointing until there’s actual code migrated to one of the testing environments, we are forced to bear witness to entire dev teams made up of people from three to eight countries whose only common language is English, and as often the only native speaker, I am the only one who can’t mutter not very goddamn quietly in a native tongue no one else understands; this may have been my motivation at one point to learn Welsh on Duolingo. A Project Manager making three times more than anyone else in the room sometimes swoops in during scrum two weeks into our sprint cycle to be perky at us and–on far too many occasions for this to be random–inform us the acceptance criteria had a couple of updates before swooping back out to PM something else’s life. We all hate her quietly until someone who went to check JIRA notes there are double the number of criteria and the user story is not the same in any way; then everyone but me gets to hate her verbally with no one the wiser. I maintain bitterly grudging silence because everyone in the room speaks English, sometimes better than I do, and they have been in Texas long enough to pick up conversationally hostile Spanish. Our scrum master will either grimly pretend it’s always been this way or very blatantly not care.

    At the final demo, as the tester, I will perform a dramatic rendition of ‘page with comma’ and ‘title: justification left’, or run batch scripts in terminal while they watch absolutely nothing happening and nod wisely. Half the people in attendance wear suits for a living and have never used a computer; they have secretaries for that. Two worked with my mom and are quietly judging my performance and finding me lacking. One stakeholder will ask a thousand questions, five of which have any relation to what we’re doing, and I am expected to answer with no discernible change in my performance. Someone is watching TV and can’t be fucked to turn down the volume. Everyone else sits in eerie silence, and I might hear a snore. Every one of these people is considered qualified enough to decide whether we did a good job and sign off on it so we can finally end the sprint and the code can be added to the next release to production. No one feels a sense of relief or satisfaction; at least one dev hasn’t slept since the PM destroyed our lives and may be clinically insane.

    Our sprints last four weeks with a prep week in between; we experience some version of this cycle of dev hell roughly eight times a year, sometimes with the legislature making their lack of time management all of our problem. Only one sprint will go as planned. One.

    The worst part is: despite this, knowing full well what hell is before me, I went back to college for software development of my own free will.


  • > I don’t mean that things are badly made, just that the resources to enter lemmy are targeting a specific audience still.

    You don’t say. What a weirdly easy-to-fix problem that only requires the ability to add links and text to a webpage: that’s a rare combination of skills indeed.

    > You first need to learn what lemmy is, how it works (because nerds can’t simply tell you how to do it, they need you to understand how it works first), and then where and how to register.

    Bold choice: I can honestly say it never occurred to me to insist someone attend a mandatory lecture (is there a quiz afterward?) for the opportunity to join a server so they can post memes and find people who are into baking bread and Smurfs meta.

    Yes, I am being sarcastic–because everything you observed is alarmingly accurate, to the point one might be forgiven for saying it’s a pattern–but I’m also beginning to wonder if I’m having some kind of break with reality about how joining a federated server works.

    The process is as follows (I think):

    1. Go to Join Lemmy and click “Find a Server”.
    2. Make sure the language is set to [your language]. Click on a cute animal icon; it genuinely does not matter which one. This is not a lifetime commitment; it’s a first date.
    3. Click Sign Up on [Server Name].
    4. Enter a username, an email (maybe optional), and your password twice, and do the Turing puzzle. [Optional: three to four short-answer questions on your name, your interests, why you’re here, and possibly some recreational math.] Check NSFW for porn as desired. Click the clearly marked Sign Up button.
    5. You’re a Lemmite now; go with God and experience the wonders of the Federation’s shitposts and feelings about cats.

    I am genuinely wondering what I’m missing about the shortest, easiest social media sign-up process I’ve done–and my body count is greater than twenty at minimum, and in some places I had multiple accounts [excluding: usenet, mailing lists, messageboards, et al]. Tumblr was like three pages of cross-examination and required me to expend effort thinking up easy-to-remember lies to get through too many mandatory questions; Facebook wanted me to write a detailed autobiography, with exact dates, covering birth to present day in their multi-page questionnaire of my life and times, and I did this while scanning for surprise privacy settings and feeling exposed.

    Non-sarcastically: reading this makes me wonder what would have happened if I’d asked someone what mastodon is and how it works instead of googling and excitedly jumping into something new to see what happened.

    If I had been told, before I even googled the website, that picking a server/federation is way too complicated for most people, and that to sign up I needed to do my homework on the origins and history of the Federation so I’d be able to understand it, because the process to create an account is complicated and confusing–for most people, that is…

    I’m pretty sure I would have done it anyway, but I don’t think I would have seen it as a brand-new adventure, something new to learn about and explore and be part of and help grow. I wonder if I would have posted an intro and started following people who were doing the things I went back to school to learn to do and finally get my degree, so I could learn from them. Or would I have read my feed wondering what to do, sure that everyone here knew all about the Federation and I couldn’t ask them, because then they’d know I didn’t belong; I was just a college dropout who learned scripting and linux and website design because it was fun and went back to school with some serious overconfidence in my skills. I wonder if I would ever have posted a single word before I finally realized this is not an after-school special, I am not a tragic victim of mean people, so cheer the fuck up and do your homework already.

    Or: if I’d just taken my ritalin. Because if I remember correctly, I was compiling my very first kernel on my own, outside school lab conditions (Raspberry Pi 4 8GB, 64-bit: Eurydice), so yeah, if I’d seen that an hour earlier, I would have immediately realized this person was telling me that this place is not for me and I did not belong. And I would have agreed; buddy, I’ve been condescended to by literal geniuses with PhDs in fields I can’t spell. Whole servers of them exist? Christ. Thank God for the warning, I’m gonna dip. Not that I would have said that: depending on how the compile went, I would have either devoted a twitter thread to performing a melodramatic interpretation of it or forgotten about it with a vague hostility toward Mastodon and that weird Federation thing.

    Talk about the road not taken.


  • So after twenty-something years on social media (along with mailing lists, messageboards, and usenet), this is a topic I think about literally every time I have to add, change, migrate, or delete an account as I move from platform to platform like some virtual vagabond between text-driven city-states. A virtual vagabond with no worldly goods, no name, no history, completely invisible to all. To exist, I must apply to the City Leader, and if accepted, I get a name, a nice studio apartment, and visibility, as well as contact with other humans after watching a short commercial every five or so humans. If I leave, am thrown out, or the city is burned down, I can’t take anything the city gave me with me. By ‘gave’, I mean ‘loaned’, btw; none of those things were actually mine.

    All the discussion of whether or not to federate with Threads was interesting in that, in general, it’s kind of pointless. A server instance isn’t a democracy; the owner’s opinion is the only one that matters. If you don’t like it, leave. And I don’t argue their right to do so; they’re paying the bills, doing the upgrades, eating grapes with robot butlers, I don’t know their lives. Federated means anyone can run their own not-twitter or not-reddit; go for it. All you need is money, free time, and the knowledge of how to register a domain name; get, run, secure, and maintain servers; install and configure the program; lure people in; and avoid breaking any national or international laws. Like I said: I really, seriously do not argue the owner’s right to decide anything for their server. I know how to do all those things and I ran several websites and archives: I wanted a nap before the installation step.

    Fediverse is a massive step in loosening the stranglehold megacorporations had on our ability to shitpost in peace and talk about our cats without feeling stalked by people wanting to sell us shit or sell our browsing habits, blood pressure, and underwear size to those who will then try to sell us deeply individualized shit; it’s the circle of life, man.

    Wow this got long but feelings.

    So at this point–two decades and change of social media, the rise and fall of social empires, so much virtual vagabonding across the virtual desert to find a new city-state–I don’t think it’s too early to consider getting around to a productive discussion of how we go about separating the individual identity from the community and defining what is theirs to keep no matter where they are. If there was ever a place and time to start building a model, it’s where all the city-states are allies and individuals can interact with each other no matter what city they’re in. The account transferability in Mastodon is a really good start, but it’s not a solution, much less the solution. It’s a beginning.

    I don’t expect to have a working, finished, flawless product in six to eight weeks or six to eight months; I expect it to slip three weeks and two days after the announcement that it’s ready for alpha testing and immediately break the first time a tester opens it; it’ll be another month before it goes into testing again. I expect it will be a weird, buggy mess of wtf after months of virtual warfare, and everyone will hate it before the rough draft of the design documents is even released. I expect there will be one weird guy who really thinks everything should be written in Rust because he’s insane and never sleeps. Five to eight devs will dramatically quit; one will quietly move to Utah and farm emus. None of them will be the Rust guy; you’re stuck with him. I expect the working version after testing is done will be hated by everyone and probably kind of crappy. But it will also be amazing, because as of its release–no matter how shitty, how buggy, or how many inexplicable design choices are made–the individual will exist outside of being community property, and no matter where we go, how much we pissed off that admin, or whether our city-state was nuked from orbit, there are things that are ours and we get to keep them.


  • I’ve been thinking on that, and assuming fosstodon and lemmy.world both agree to defederate from Threads, I’m going to go ahead and set up regular donations. I only use DW a few times a year, but I renew my premium membership every six months and it’s not cheap. I want to keep supporting it because its model does not include ads at all (premium gives you lots of icons, too, and I used to be a huge icon person, so I can’t say that’s not a consideration). Unless I lose my job or something, I’ll keep paying until death or DW closes, whether I ever use it again; it’s worth supporting.

    I was already considering it–when I joined mastodon, I bought their stickers to show my appreciation–but this has been a wake-up call. If lemmy.world decides to federate, I mean, I’m not going to leave, but I am going to email lemmy.ml about why my application is still pending and use that for my primary.