• 0 Posts
  • 20 Comments
Joined 3 years ago
Cake day: January 21st, 2021


  • You don’t need a domain. However, it is probably a good idea.

    1. You can’t get a globally trusted SSL certificate for an IP address, so you will need to use a self-signed certificate and manage trusting it on every device (a rough sketch of generating one is at the end of this comment).
    2. If you don’t have a stable IP you will need to update bookmarks whenever it changes, and memorizing it may be a chore.

    If you don’t want to purchase your own domain you can likely use a free subdomain; these often come from dynamic DNS providers.

    However, if you can, I would strongly recommend getting your own domain sooner rather than later, if only because it means that you can own your email address, which is basically the key to every third-party service you use these days. Domains are pretty cheap, probably <$20/year for a generic TLD like .com or the TLD of your country. Personally I would happily skip eating out once a year to have my own domain.
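
    As a rough illustration of point 1: a minimal sketch of generating a self-signed certificate for a bare IP with Python’s third-party cryptography package (the IP 192.0.2.10 and the file names are placeholders); you would still have to import the resulting cert.pem into every client’s trust store.

        # Sketch only: self-signed certificate for a placeholder IP (192.0.2.10).
        # Requires the third-party "cryptography" package (pip install cryptography).
        import datetime
        import ipaddress

        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "192.0.2.10")])

        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)  # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
            .add_extension(
                x509.SubjectAlternativeName(
                    [x509.IPAddress(ipaddress.ip_address("192.0.2.10"))]
                ),
                critical=False,
            )
            .sign(key, hashes.SHA256())
        )

        with open("key.pem", "wb") as f:
            f.write(key.private_bytes(
                serialization.Encoding.PEM,
                serialization.PrivateFormat.TraditionalOpenSSL,
                serialization.NoEncryption(),
            ))
        with open("cert.pem", "wb") as f:
            f.write(cert.public_bytes(serialization.Encoding.PEM))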



  • The problem with separating Calendar + Mail + Contacts is that they work best together. Although, to be fair, I am not aware of an open-source system that effectively combines them.

    Calendar event invites and updates go over mail, so you want your calendar application to be able to pick those up automatically. Options like “automatically add invites from contacts to my calendar” are also an awesome feature. Contacts can also be used for spam filtering (although this integration is a bit easier to do externally).
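
    As a small illustration of why this integration matters: a mail-aware calendar client basically has to dig the text/calendar (iMIP) part out of the message. A minimal sketch with Python’s standard library, where invite.eml is a hypothetical saved message:

        # Sketch only: pull the iCalendar invite out of a saved email.
        # "invite.eml" is a placeholder path; only the standard library is used.
        import email
        from email import policy

        with open("invite.eml", "rb") as f:
            msg = email.message_from_binary_file(f, policy=policy.default)

        for part in msg.walk():
            if part.get_content_type() == "text/calendar":
                # This is the VEVENT payload a calendar client would import or update.
                print(part.get_content())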

    Currently I am using Nextcloud (self-hosted), although I don’t really like it because it is pretty slow on my low-powered VPS. But even so, it doesn’t actually have proper email integration. There are bugs open and slowly moving, but I’m still using Thunderbird to process most of my calendar stuff.

    Not to mention JMAP, which is slowly progressing and would be a huge improvement, especially for mobile clients. It also combines these three services.


  • kevincox@lemmy.ml to Selfhosted@lemmy.world: What’s the deal with Docker?

    For desktop apps, Flatpak is almost certainly a better option than Docker. Flatpak uses the same core concepts as Docker, but it is more suited to distributing graphical apps.

    1. Built-in support for sharing graphics drivers, display server connections, fonts and themes.
    2. Most Flatpaks use common base images. Not only does this save disk space if you have lots of, for example, GNOME applications, since they will share the same base, but it also means that security updates for common libraries can ship separately from application updates. (Locked, insecure libraries are still a problem in general; it is just improved over the Docker case.)
    3. Better desktop integration via the use of “portals” that allow requesting specific things (screenshot, open file, save file, …) without full access to the user’s system.
    4. Configuration UIs that are optimized for the desktop use case: graphical tools to install, uninstall, manage permissions, … (a command-line equivalent is sketched at the end of this comment).

    Generally I would still default to my distro’s packages where possible, but if they are unsuitable for whatever reason (not available, too old, …) then a Flatpak is a great option.
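
    For what it’s worth, the permission inspection and overrides are also scriptable; a small sketch (wrapped in Python, with org.example.App as a placeholder app ID and flatpak assumed to be installed):

        # Sketch only: inspect and tighten a Flatpak app's permissions via the
        # flatpak CLI. "org.example.App" is a placeholder application ID.
        import subprocess

        app = "org.example.App"

        # Show the static permissions the app ships with.
        subprocess.run(["flatpak", "info", "--show-permissions", app], check=True)

        # Per-user override: drop the app's access to the host filesystem.
        subprocess.run(["flatpak", "override", "--user", "--nofilesystem=host", app], check=True)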


  • kevincox@lemmy.ml to Selfhosted@lemmy.world: What’s the deal with Docker?

    I feel that a lot of people here are missing the point. Docker is popular for selfhosted services for a few main reasons:

    1. It is one package that can be used on any distribution (or even OS with a Linux VM).
    2. The package contains all dependencies required to run the software so it is pretty reliable.
    3. It provides some basic sandboxing against non-malicious services. Basically, the service can’t scribble all over your filesystem; short of exploiting a security vulnerability, it can only write to the specific directories that you have given it access to (via volumes).
    4. The volume system also makes it very obvious what data is important and needs to be backed up or similar: you have a short list (sketched below).
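
    As a sketch of points 3 and 4, here is roughly what that looks like with the Python docker SDK (pip install docker); the image, container name, and host path are placeholders:

        # Sketch only: run a service with one published port and one bind-mounted
        # volume. Requires the Python "docker" SDK; image name, container name and
        # host path are placeholders.
        import docker

        client = docker.from_env()

        container = client.containers.run(
            "nginx:latest",            # stand-in for whatever service you self-host
            detach=True,
            name="demo-web",
            ports={"80/tcp": 8080},    # host port 8080 -> container port 80
            volumes={
                # The only host path the service can write to; conveniently, this
                # short list is also exactly what you need to back up.
                "/srv/demo/data": {"bind": "/usr/share/nginx/html", "mode": "rw"},
            },
        )
        print(container.short_id)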

    Docker also has lots of downsides. I would generally say that if your distribution packages the software, I would prefer the distribution’s package over the Docker image. A good distribution package will also solve all of these problems. The main issue you will see with distribution packages is a longer delay before new versions are made available.

    What Docker completely dominated were the previous cross-distribution packaging options, which typically took one of the following strategies.

    1. Self-contained compiled tarball. Run the program inside as your user. It probably puts its data in the extracted directory, maybe. How do you upgrade? Extract and copy a data directory? Self-update? Code is mutable and mixed with data, gross.
    2. Install script. Probably runs as root. Makes who-knows-what changes to your system. Where is the data? Is the service running? Will it auto-start on boot? Hope that install script supports your distro.
    3. Source tarball. Figure out the dependencies. Hope they don’t conflict with the versions your distro has. Set up users and setup scripts yourself. Hope the build doesn’t take too long.

  • While you are technically right, there is very little logical difference between containers and VMs. Really the only fundamental difference is that containers use the same kernel while VMs run their own. (Let’s not even worry about para-virtualization right now.)

    In practice I would say the biggest difference is that there is better memory sharing, so total memory usage will often be lower. But honestly this mostly comes down to the fact that the average container bundles less software than the average VM image. Easier management of volumes is also nice, because typically you will just bind-mount a host directory, but it also isn’t hard to mount a block device on a Linux host.



  • I don’t know if the “s” actually stands for “SSL” or “secure”, but the point is that they are the same protocol, just running over an encryption layer. So adding an “s” suffix means running the same protocol over some encrypted transport. You see this “s” suffix for lots of things, like irc/ircs and dav/davs.

    This is different from SFTP, which isn’t related to FTP at all other than that they are both protocols that transfer files.


  • I wouldn’t really recommend NFS unless you need a remote mount as a “true filesystem” with full support for things like sockets, locking and other UNIX filesystem features, or you need top performance. Authentication and UID mapping are so difficult to do that it typically isn’t worth it for simpler use cases like “add, remove or download files”.

    scp can be slow with large numbers of small files. rsync is much better at that and can do differential transfers if you need them. Since rsync can also run over SSH, it is very easy to just use it as a default.
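
    For example, a differential transfer over SSH is basically a one-liner (sketched here via Python’s subprocess; the host and paths are placeholders, and rsync must be installed on both ends):

        # Sketch only: differential transfer of a directory over SSH using rsync.
        # Host and paths are placeholders.
        import subprocess

        subprocess.run(
            [
                "rsync",
                "-az",        # archive mode (recursion, permissions, times) + compression
                "--partial",  # keep partially transferred files so retries can resume
                "-e", "ssh",  # run the transfer over SSH
                "./photos/",
                "me@myhost.example:/srv/backup/photos/",
            ],
            check=True,
        )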



  • Unless you went out of your way to set up FTP and get a TLS certificate, I would put my money on you using SFTP, which uses SSH for authentication and transport security. It doesn’t require anything to set up other than TOFU (trust-on-first-use) server keys and a client key or password for authentication.

    That is probably the right thing to use. Really, you shouldn’t be using FTP anymore; you probably just want HTTP for public data and SFTP for private, authenticated data.
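
    And if you ever want to script it rather than use scp, the same SSH credentials work for SFTP directly; a minimal sketch with the third-party paramiko library (host, user, key path and file names are placeholders):

        # Sketch only: upload and list files over SFTP with paramiko
        # (pip install paramiko). Host, user and paths are placeholders.
        import paramiko

        client = paramiko.SSHClient()
        client.load_system_host_keys()
        # TOFU-style: remember new host keys instead of rejecting them.
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("myhost.example", username="me", key_filename="/home/me/.ssh/id_ed25519")

        sftp = client.open_sftp()
        print(sftp.listdir("/srv/files"))                 # browse the remote directory
        sftp.put("report.pdf", "/srv/files/report.pdf")   # upload a file
        sftp.close()
        client.close()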



  • Right now I am just using Nautilus (the default GNOME file manager), but in the past I was using Thunar (the default XFCE file manager). I’d be pretty surprised if whatever file manager you are currently using doesn’t support SFTP out of the box. Typically you can just enter something like sftp://myhost.example into the location bar. It may also have a dedicated network connection section with a wizard to add it.



  • IDK what OS you are on, but on Linux most file managers have support for remote filesystems. SFTP (SSH-FTP, not to be confused with FTPS, which is FTP-secure) is ubiquitous, and if you use scp then you already have SSH set up.

    If you need Windows support it is more of a pain: you may need to set up Samba or WebDAV, and permissions can suck. But you can also download a third-party file browser that supports remote protocols.

    So basically SFTP, and I fairly regularly just use a graphical file manager when I am doing one-off operations.


  • “it is simply Security Through Obscurity at best.”

    I think this is a bit too strong. The bit about NAT that people associate with improved security is that it acts as a stateful firewall. This basically means that it allows outbound connections but not unsolicited inbound connections.

    Preventing inbound connections does provide a meaningful reduction in attack surface. No longer is every vulnerability scan on the internet going to probe your machine, and it is going to be much harder for a remote attacker to get access.

    However there are two main flaws:

    1. Stateful firewalls are not perfect filters of incoming connections.
    2. Devices on your local network still have full access to your machine.


  • If you are relying on Docker as a security boundary you are making a mistake.

    Docker isolation is good enough to keep honest people honest but isn’t good enough to keep out malicious actors. The Linux kernel API is simply too large of an attack surface to be highly secure.

    If you want to run completely untrusted software, you want a VM boundary at the very minimum; ideally, run it on completely separate hardware. There are a few exceptions, like browser isolation and gVisor, which provide strong software isolation without a VM, but Docker (or any Linux container runner) is not on that list. If the software has direct access to the host kernel it shouldn’t be considered secure.