  • sftp://USERNAME@SERVER:PORT in the address bar of most file managers will work. You can omit the port if it’s the default (22), and the username if it’s the same as your local user.

    You can also add the server as a favorite/shortcut in your file manager sidebar (it works at least in Thunar and Nautilus). Or you can edit ~/.config/gtk-3.0/bookmarks directly:

    file:///some/local/directory
    file:///some/other/directory
    sftp://my.example.org/home/myuser my.example.org
    sftp://otheruser@my.example.net:2222/home/otheruser my.example.net
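
    If you need the same mount outside the file manager (scripts, etc.), GVFS can do it from the CLI too - a rough sketch, hostname/user/port are placeholders:

    # mount the share over SFTP via GVFS (shows up under /run/user/$UID/gvfs/)
    gio mount sftp://otheruser@my.example.net:2222/
    # unmount it when done
    gio mount -u sftp://otheruser@my.example.net:2222/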
    

  • Quite fast.

    KVM/libvirt VM with 4GB RAM and 4 vCores shared with a dozen other services. Storage is not the fastest (qcow2-backed disks on an ext4 partition inside a LUKS volume on a 5400RPM hard drive… I might move it to an SSD sometime soon), so features that depend heavily on disk I/O (thumbnailing) are sometimes sluggish. There is an occasional slowdown, I suppose caused by the APCu caches periodically being dropped, but once a page is loaded and the cache is warmed up, it becomes fast again.

    Standard apache + php-fpm + postgresql setup as described in the official Nextcloud documentation, automated through this ansible role.
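
    For reference, the APCu side of it boils down to two small pieces - a sketch with the values I’d start from, not necessarily what you need:

    // config/config.php - tell Nextcloud to use APCu as its local memcache
    'memcache.local' => '\OC\Memcache\APCu',

    ; php-fpm pool / php.ini - give APCu enough shared memory so entries aren't constantly evicted
    apc.shm_size = 128M
    apc.enable_cli = 1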


  • Syslog over TCP with TLS (don’t want those sweet packets containing sensitive data leaving your box unencrypted). Bonus points for mutual authentication between the server and clients (just got it working and it’s 👌 - my implementation here).

    It solves the aggregation part but doesn’t solve the viewing/analysis part. I usually use lnav on simple setups (gotty as a poor man’s web interface for lnav when needed), and graylog on larger ones (definitely costly in terms of RAM and storage though)
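
    For the forwarding part, the client side is short enough to paste - a minimal rsyslog sketch assuming the gtls driver and already-generated certs, not necessarily identical to the implementation linked above (paths/hostname are placeholders):

    # /etc/rsyslog.d/forward-tls.conf (client) - ship everything to the central server over TLS
    $DefaultNetstreamDriver gtls
    $DefaultNetstreamDriverCAFile /etc/rsyslog/ca.pem
    $DefaultNetstreamDriverCertFile /etc/rsyslog/client-cert.pem
    $DefaultNetstreamDriverKeyFile /etc/rsyslog/client-key.pem
    $ActionSendStreamDriverMode 1
    $ActionSendStreamDriverAuthMode x509/name
    $ActionSendStreamDriverPermittedPeer syslog.example.org
    *.* @@syslog.example.org:6514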


  • Obfuscation can be helpful in not disclosing which services you run or your naming schemes

    The “obfuscation” benefits of wildcard certificates are very limited (public DNS records can still easily be found with tools such as sublist3r), and they’re definitely a security liability (get the private key of the cert stolen from a single server -> TLS potentially compromised on all your servers using the wildcard cert)
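
    To illustrate how little the wildcard hides, enumeration is a one-liner (example domain, obviously):

    # passive subdomain enumeration from public sources (search engines, DNS databases, etc.)
    sublist3r -d example.org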


  • VMs have a lot of additional overhead.

    The overhead is minimal, KVM VMs have near-native performance (type 1 hypervisor). There is some memory overhead as each VM runs its own kernel, but a lot of this is cancelled by KSM [1] which is a memory de-duplication mechanism.
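
    You can check whether KSM is actually earning its keep on the host:

    # 1 = KSM is running
    cat /sys/kernel/mm/ksm/run
    # pages_shared = deduplicated pages in use, pages_sharing = how many VM pages map to them (the actual saving)
    cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing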

    Each VM runs its own system services (think systemd, logging, etc.) so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC, as they do the same thing (they just share the host’s kernel instead of running their own).

    https://serverfault.com/questions/225719/so-really-what-is-the-overhead-of-virtualization-and-when-should-i-be-concerned

    I usually go for bare metal > on top of that, multiple VMs separated by context (think “tenant”, production/testing, public/confidential/secret, etc. - VMs provide strong isolation which containers do not, and at the very minimum it’s good to have separate VMs for “serious business” and “lab” contexts) > applications running inside the VMs, containerized or not (service/application isolation through namespaces/systemd has come a long way, see man systemd-analyze security). For me the benefit of containerization is mostly ease of deployment and… ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.
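
    If you’re curious where a given service stands isolation-wise (the unit name is just an example):

    # overall exposure score for every service on the system
    systemd-analyze security
    # detailed per-setting breakdown for a single unit
    systemd-analyze security nginx.service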

    If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker (fewer bugs, less attack surface, no single point of failure in the form of a 1-million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2]). But be aware of the maintenance overhead caused by containerization: if you’re serious about it you will probably end up maintaining your own images.
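
    The systemd integration these days is basically “write a unit file and forget it’s a container” - a minimal Quadlet sketch (podman >= 4.4; image/ports are placeholders and this isn’t necessarily what [2] describes):

    # ~/.config/containers/systemd/myapp.container - picked up by systemd --user as myapp.service
    [Container]
    Image=docker.io/library/nginx:alpine
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

    Then systemctl --user daemon-reload && systemctl --user start myapp and it behaves like any other unit.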


  • “buggy as fuck” because there’s a bug that makes it so you can’t easily run it if your locale is different from English?

    It sends a pretty bad signal when it crashes on the first lxd init (sure, I could make the case that there are workarounds - switch locales, create the bridge - but that doesn’t help make it look like a better solution than Proxmox). Whatever you call it, it’s a bad-looking bug, and the fact that it was not patched in Debian stable or backports makes me think there might be further hacks needed down the road for other stupid bugs like this one, so for now, hard pass on the Debian package (might file a bug on the BTS later).
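
    For the record, the “switch locales” workaround is just something along these lines (untested, and it shouldn’t be needed in the first place):

    # run just this one command under an English locale to dodge the parsing bug
    LC_ALL=C.UTF-8 lxd init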

    About the link, Proxmox kernel is based on Ubuntu, not Debian…

    Thanks for the link mate. Proxmox kernels are based on Ubuntu’s, which are in turn based on Debian’s - not arguing about that - but I was specifically referring to this comment:

    having to wait months for fixes already available upstream or so they would fix their own shit

    Any example/link to bug reports for such fixes not being applied to Proxmox kernels? Asking so I can raise an orange flag before it gets adopted without due consideration.



  • DO NOT migrate / upgrade anything to the snap package

    It was already in place when I came in (made me roll my eyes), and it’s a mess. As you said, there’s no proper upgrade path to anything else. So anyway…

    you should migrate into LXD LTS from Debian 12 repositories

    The LXD version in Debian 12 is buggy as fuck, this patch has not even been backported https://github.com/canonical/lxd/issues/11902 and 5.0.2-5 is still affected. It was a dealbreaker in my previous tests, and it doesn’t inspire confidence in the bug testing and patching process for this particular package. On top of that, it will be hard to convince the other guys that we should ditch Ubuntu and their shenanigans and migrate to good old Debian (especially if the lxd package is in such a state). Some parts of the job are cool, but I’m starting to see there’s strong resistance to change, so as I said, path of least resistance.

    Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable?



  • but more like playing a video game and it drops down to 15fps

    Likely not a server-side problem (check CPU usage on the server to confirm): if the server were struggling to transcode, I’d expect playback to pause and resume as the encoder catches up, and network/bandwidth problems would show up as buffering. This looks like a playback performance problem on the client side - what client are you using? Try multiple clients (use the web interface in a browser as a baseline) and see if it makes any difference.
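
    Something like this on the server while playback stutters usually settles the question - it assumes the server transcodes through ffmpeg, which Jellyfin/Plex do:

    # if an ffmpeg process is pinned near 100% the server is the bottleneck;
    # if it's mostly idle, look at the client or the network instead
    top -bn1 | grep -iE 'ffmpeg|%Cpu'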



  • The migration is bound to happen in the next few months, and I can’t recommend moving to incus yet since it’s not in stable/LTS repositories for Debian/Ubuntu, and I really don’t want to encourage adding third-party repositories to the mix - they are already widespread in the setup I inherited (new gig), and part of the major clusterfuck that is upgrade management (or the lack thereof). I really want to standardize on official distro repositories. On the other hand the current LXD packages are provided by snap (…) so that would still be an improvement, I guess.

    Management is already sold on the idea of Proxmox (not by me), so I think I’ll take the path of least resistance. I’ve had mostly good experiences with it in the past, even if I found their custom kernels a bit strange to start with… Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable? I’d still like to put a word of caution about that.




  • Would it be better to just have one PostgreSQL service running that serves both Nextcloud and Lemmy

    Yes, performance and maintenance-wise.

    If you’re concerned about database maintenance (can’t remember the last time I had to do this… once every few years to migrate postgres clusters to the next major version?) bringing down multiple services, set up master-slave replication and be done with it.
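
    For the record, “one PostgreSQL serving both” is just one role + one database per application on the same cluster (names are examples):

    # run as the postgres superuser on the database host
    sudo -u postgres createuser --pwprompt nextcloud
    sudo -u postgres createdb --owner=nextcloud nextcloud
    sudo -u postgres createuser --pwprompt lemmy
    sudo -u postgres createdb --owner=lemmy lemmy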


  • In my experience and for my mostly basic needs, major differences between libvirt and proxmox:

    • The “clustering” in libvirt is very limited (no HA, automatic fencing, Ceph integration, etc., at least out of the box). I basically use it to 1. admin multiple libvirt hypervisors from a single libvirt/virt-manager instance, 2. migrate VMs between instances (they need to be using shared storage for disks, etc.), but it covers 90% of my use cases.
    • On proxmox hosts I let proxmox manage the firewall, on libvirt hosts I manage it through firewalld like any other server (+ libvirt/qemu hooks for port forwarding).
    • On proxmox I use the built-in template feature to provision new VMs from a template, on libvirt I do a mix of virt-clone and virt-sysprep.
    • On libvirt I use virt-install and a Debian preseed.cfg to provision new templates, on proxmox I do it… well… manually. But both support cloud-init based provisioning so I might standardize on that in the future (and ditch templates). Rough sketch of the libvirt side below.
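
    Roughly what those two provisioning paths look like on the libvirt side (VM/template names, sizes and the preseed file are placeholders, untested as written):

    # clone an existing template and scrub machine-specific state (machine-id, SSH host keys, logs, ...)
    virt-clone --original debian12-template --name newvm --auto-clone
    virt-sysprep -d newvm --hostname newvm

    # or build the template itself, fully unattended, from the Debian installer tree + a preseed file
    virt-install --name debian12-template --memory 2048 --vcpus 2 \
      --disk size=20 --os-variant debian12 \
      --location https://deb.debian.org/debian/dists/bookworm/main/installer-amd64/ \
      --initrd-inject preseed.cfg \
      --extra-args "auto=true priority=critical console=ttyS0" \
      --graphics none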

  • /thread

    This is my go-to setup.

    I try to stick with libvirt/virsh when I don’t need a graphical interface (it integrates beautifully with ansible [1]), or when I don’t need clustering/HA (libvirt does support “clustering” at least in some capacity - you can live migrate VMs between hosts, manage remote hypervisors from virsh/virt-manager, etc). On development/lab desktops I bolt virt-manager on top so I have the exact same setup as my production setup, with a nice added GUI. I’ve heard that cockpit can be used as a web interface but have never tried it.
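
    The ansible side is basically the community.libvirt collection - a trivial sketch (hypervisor URI and VM name are made up, and not necessarily what [1] shows):

    # ensure a VM on the remote hypervisor is running
    - name: start myvm
      community.libvirt.virt:
        name: myvm
        state: running
        uri: qemu+ssh://root@hv1.example.org/system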

    Proxmox on more complex setups (I try to manage it using ansible/the API as much as possible, but the web UI is a nice touch for one-shot operations).

    Re incus: I don’t know for sure yet. I have an old LXD setup at work that I’d like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.