• 13 Posts
  • 14 Comments
Joined 1 year ago
Cake day: April 24th, 2023




  • Well, that took a lot more blood, sweat, and tears than I thought it would.

    Usually when performing an update, I do the following:

    • Take a snapshot of the VM
    • Change the version numbers in Lemmy's docker-compose.yml file for both the lemmy and lemmy-ui containers (see the sketch after this list)
    • Re-create the containers, and start following the logs
    • If the database migration (if any) appears to be taking longer than expected, I temporarily disable the reverse-proxy so that Lemmy isn’t getting slammed while trying to perform database migrations (and then re-enable it once complete)
    • Upon any issues, examine where things might’ve gone wrong, adjust if needed, and worst-case scenario, roll back to the snapshot created at the start

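    As an aside for anyone following along at home: step 2 is just a matter of pointing both images at the new release tag. Here's a minimal Python sketch of that version bump, assuming a typical Lemmy docker-compose.yml with services named lemmy and lemmy-ui - the service names, image repositories, and target version below are assumptions, so adjust them to match your own compose file (and note that PyYAML drops comments, so for a heavily commented file you may prefer to just edit it by hand):

```python
# bump_lemmy.py - rewrite the image tags for the lemmy and lemmy-ui services.
# Requires: pip install pyyaml
from pathlib import Path

import yaml

def bump_lemmy_version(compose_path: Path, new_version: str) -> None:
    compose = yaml.safe_load(compose_path.read_text())
    for service in ("lemmy", "lemmy-ui"):  # assumed service names
        image = compose["services"][service]["image"]
        repo = image.rsplit(":", 1)[0]      # e.g. dessalines/lemmy
        compose["services"][service]["image"] = f"{repo}:{new_version}"
    compose_path.write_text(yaml.safe_dump(compose, sort_keys=False))

if __name__ == "__main__":
    bump_lemmy_version(Path("docker-compose.yml"), "0.19.0")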
    Everything was going to plan until step 4, the database migrations. After about 30 minutes of database migrations running, I shut off external access to the instance. Once we got to the hour and a half mark, I went ahead and stopped the VM and began rolling back to the snapshot… Except normally a snapshot restore doesn’t take all that long (maybe an hour at most), so when I checked back 3 hours later and saw that only about 20% of the restore had completed, that’s where things started going wrong. It seemed like the whole hypervisor was practically buckling while attempting to perform the restore.

    So I thought, okay, I’ll move it back to hypervisor “A” (“Zeus”)… except I had forgotten why I initially migrated it to hypervisor “B” (“Atlas”) in the first place: Zeus was running critically low on storage, and could no longer host the VM for the instance. I thought “Okay, sure, we’ll continue running it on Atlas then, let me go re-enable the reverse-proxy (which is what allows external traffic into Lemmy, since the containers/VM are on an internal network)”… which then led me to find out that the reverse-proxy VM was… dead.

    It was running Nginx, and nothing seemed to show any errors, but I figured “Let’s try out Caddy” (which I’ve started using on our new systems) - that didn’t work either. It was at that point that I realized I couldn’t even ping that VM at its public IP address - even after dropping the firewall. Outbound traffic worked fine, none of the configs had changed, there were no other firewalls in place… just nothing. Except I could get two replies to a continuous ping between the time the VM started initializing and when it finished starting up; after that, it was once again silent.

    So, I went ahead and made some more storage available on Zeus by deleting some VMs (including my personal Mastodon instance - thankfully, I had already migrated my account over to our new Mastodon instance a week before) and attempted to restore Lemmy onto Zeus. Still, I noticed the same slow-restore behavior even on this hypervisor, and everything on it was coming to a crawl while the restore was ongoing.

    This time I just let the restore run, which took numerous hours. Finally it completed, and I shut down just about every other VM and container on the hypervisor, once again followed my normal upgrade path, and crossed my fingers. It still took about 30 minutes for the database migrations to complete, but they did end up completing. I enabled the reverse-proxy config, updated the DNS record for the domain to point back to Zeus, and within 30 seconds I could see federation traffic coming in once again.

    What an adventure, to say the least. I still haven’t been able to determine why both hypervisors come to a crawl with very little running on them. I suspect one or more drives are failing, but it’s odd for that to occur on both hypervisors at around the same time, and the SMART data for the drives doesn’t show any indication of failure (or even precursors to failure), so I honestly do not know. It does however tell me that it’s pretty much time to sunset these systems sooner rather than later, since the combination of the systems and the range of IP addresses that I have for them comes out to about $130 a month. While I could probably request that most of the hardware be swapped out and completely rebuild them from scratch, it’s just not worth the hassle considering that my friend and I have picked up a much newer system (the one mentioned in my previous announcement post), and with us splitting the cost it comes out to about the same price.

    Given this, the plan at this point is to renew these two systems for one more month when the 5th comes around, meaning that they will both be decommissioned on the 5th of February. This is to give everyone a chance to migrate their profile settings from The Outpost over to The BitForged Space, as both instances are now running Lemmy 0.19.0 (for comparison, the instance over at BitForged took not even five minutes to complete its database migrations - I spent more time verifying everything was alright), and to also give myself a bit more time to ensure I can get all of my other personal services migrated over, along with any important data.

    I’ve had these systems for about three years now, and they’ve served me quite well! However, it’s very clear from the combination of the dated specs and the lack of setting things up in a more coherent way (I was quite new to server administration at the time) that it’s time to close this chapter, and turn the page.

    Oh, and to top off the whole situation, my status page completely died during the process too - the container was running (as I was still receiving numerous notifications as various services went up and down), however inbound access to it wasn’t working either… so I couldn’t even provide an update on what was going on. I am sorry to have inconvenienced everyone with how long the update process took, and it wasn’t my intention to make it seem as if The Outpost completely vanished off the planet. However, I figured it was worth spending my time focusing on bringing the instance back online instead of side-tracking to investigate what happened to the status page.

    Anyways, with all that being said, we’re back for now! But it is time for everyone to finish their last drink while we wrap things up.



  • At some point, yes - while I don’t have a concrete date of when The Outpost will be officially decommissioned (as the server it’s running on still has plenty of things that I can’t move over just yet), you might’ve noticed that the performance of the site is pretty shaky at times.

    Sadly, that’s pretty much just due to the older hardware in the server. I’ve tried for the last four months to work around it with various tweaks for Lemmy and Postgres (the database software, which is where the heart of the issues comes from), but it hasn’t had much of an effect. I’m pretty much out of options at this point, since it not only affects Lemmy but also all of the other stuff that I run on the server for myself (hence why I’ve decided to invest in a better system).

    So you don’t have to move over right this second, but I would recommend it sometime in the future. The plan is to at the very least wait until Lemmy 0.19 comes out, since it should let you migrate your subscribed communities (and blocked ones, if any) as far as I’m aware - but sadly it won’t transfer over posts and comments. They’re still working out some roadblocks for 0.19, so I suspect it won’t be out this month (they don’t have an estimate of a release date just yet).
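
    Once 0.19 does land on both ends, that settings migration could in principle even be scripted against the API. Below is a minimal Python sketch of the idea - the export/import endpoint paths, the bearer-token auth, and the instance URLs are all my assumptions about the upcoming feature, so treat it purely as an illustration and check the actual 0.19 API documentation before relying on anything like it:

```python
# migrate_settings.py - rough sketch: copy user settings from one Lemmy
# instance to another. The export/import endpoint names below are assumptions.
import requests

OLD_INSTANCE = "https://old-instance.example"  # hypothetical URLs
NEW_INSTANCE = "https://new-instance.example"

def login(instance: str, username: str, password: str) -> str:
    """Log in and return the JWT."""
    resp = requests.post(
        f"{instance}/api/v3/user/login",
        json={"username_or_email": username, "password": password},
    )
    resp.raise_for_status()
    return resp.json()["jwt"]

def export_settings(instance: str, jwt: str) -> dict:
    # Assumed 0.19 endpoint for exporting profile settings/subscriptions.
    resp = requests.get(
        f"{instance}/api/v3/user/export_settings",
        headers={"Authorization": f"Bearer {jwt}"},
    )
    resp.raise_for_status()
    return resp.json()

def import_settings(instance: str, jwt: str, settings: dict) -> None:
    # Assumed 0.19 endpoint for importing the exported blob.
    resp = requests.post(
        f"{instance}/api/v3/user/import_settings",
        headers={"Authorization": f"Bearer {jwt}"},
        json=settings,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    old_jwt = login(OLD_INSTANCE, "someuser", "old-password")
    new_jwt = login(NEW_INSTANCE, "someuser", "new-password")
    import_settings(NEW_INSTANCE, new_jwt, export_settings(OLD_INSTANCE, old_jwt))
```

    In practice the settings page in lemmy-ui should be the easier route once the feature ships - the sketch above is just to show that it’s a plain export/import of your profile data, not a transfer of posts or comments.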




  • They might’ve done so out of necessity. I don’t know if the dev(s) of the Simple Tools apps were working on them full time, but if they were and not enough contributions were coming in… well, everyone has to eat.

    As the saying goes, “everyone has their price”. It’s easy to condemn the developers for their choice until you’re in the exact same scenario as they were. Whether that’s because they were starving, or even just offered enough money to make their lives a lot easier - not too many people would turn it down.




  • I posted about this on the KDE community a couple of weeks ago, but Dolphin (their file manager) has a nice trick for archives (zips, tars, etc.) - in the extract menu, there’s an “Extract, Autodetect Subfolder” button which will:

    • If the archive has a single inner subfolder (and nothing else at the root), it will extract it as expected
    • If the archive doesn’t have an inner subfolder, and all the files are at the root level, it will create a new folder for you and extract the files there

    This way, you don’t end up with files splattered all over, say, your Downloads folder. It’s easily one of my favorite features, and something I wish every file manager had. It feels like someone had the same pain that I (and I’m sure plenty of others) have of extracting something and regretting it - but then they went as far as to fix the problem for everyone by implementing a feature for it (I’d love to have the knowledge to contribute to KDE someday)!
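
    For anyone who wants that same behavior outside of Dolphin, the logic itself is simple enough to script. Here’s a rough Python sketch of the idea for zip files - purely an illustration of the behavior described above, not KDE’s actual implementation:

```python
# extract_autodetect.py - extract a zip, adding a wrapper folder only when
# the archive doesn't already contain a single top-level directory.
import sys
import zipfile
from pathlib import Path

def extract_autodetect_subfolder(archive: Path, dest: Path) -> Path:
    with zipfile.ZipFile(archive) as zf:
        names = [n for n in zf.namelist() if n.strip("/")]
        top_level = {n.split("/")[0] for n in names}

        if len(top_level) == 1 and all("/" in n for n in names):
            # Everything already lives under one folder: extract as-is.
            target = dest
        else:
            # Loose files at the root: wrap them in a folder named after
            # the archive so they don't splatter all over dest.
            target = dest / archive.stem
            target.mkdir(parents=True, exist_ok=True)

        zf.extractall(target)
    return target

if __name__ == "__main__":
    print(extract_autodetect_subfolder(Path(sys.argv[1]), Path.cwd()))
```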


  • I wonder if the question is in reference to unlocking the root account and setting a password for it. I don’t know of any distros that actually ship with an unlocked root account and an empty password, but I suppose it’s not completely impossible.

    That being said, if an attacker gets physical access to your PC, it’s game over anyway. If your drive isn’t encrypted with something like LUKS, they can just boot up a live USB of whatever distro they want, mount the drive, and have easy access to its contents.

    Ideally, if you want to protect your PC against physical attacks, you’ll at minimum want some sort of drive encryption enabled, preferably along with Secure Boot and your own keys enrolled if your machine supports it.




  • Does Minecraft (specifically the Java edition) count as a Linux native game? It’s written in Java, so it’s not really “native” to any one specific platform.

    It’s always worked perfectly for me on Linux, and I have a lot of strong memories with the game. Pair it with something like Prism Launcher (which is available on Flathub) for easily installing mods / modpacks / resource packs / etc., and you’ve got a pretty good setup! Though the “official” launcher is available through most package repositories these days as well.




  • Anyway, a web browser is a terrible way to interact with the fediverse since the browser doesn’t know about your accounts, so I’d advocate for getting rid of web apps altogether

    I’m confused about this - so you’re saying that people on their desktop/laptop shouldn’t be able to browse Lemmy from their web browser? Having to install an app really only works for the likes of, say, Snapchat and Instagram, which are mobile-first platforms - and Lemmy clearly is not. Even Discord, which really wants you to use its desktop app, allows you to use it via a browser, and most of the features are still available (the ones that aren’t are due to browser sandbox limitations, such as PTT and “Krisp” support).

    I’m even more confused about “since the browser doesn’t know about your accounts” - are you saying that it’s bad that you have to sign into your instance’s account when you first start using the site? I don’t see how that is different from mobile (or even a desktop app) either: I use Liftoff on my phone, and it’s not like it magically signed me into my account even though I had other Lemmy apps already signed in on my phone. I feel like I must be really misinterpreting what you’re saying here.

    I know that Android does technically have an Accounts framework that multiple apps can tie into (so that if you have multiple apps from Microsoft, for example, signing into one app signs into the others), but I’m pretty sure that only works if all the apps are signed with the same digital key - which makes sense for a large corporation like Microsoft, Google, or Apple, but not for apps made by multiple independent developers, since that would be a massive security issue.

    And even if none of that were an issue, Liftoff is made with Dart/Flutter, which dessalines (the main dev of lemmy-ui and Jerboa) may not have any experience with - which could be another potential issue. I’ve contributed a couple of small fixes to Jerboa, but while I have Kotlin + Android experience, I don’t have that much experience with Jetpack Compose (the UI framework Jerboa uses), which means that in order to make any major contributions to Jerboa I’d need to get caught up on the whole Compose stack first (which, when I originally did try to learn it, was an incredibly rapidly moving target, like Swift/SwiftUI was in its early days) - and I wouldn’t be surprised if Flutter were somewhat similar.