
  • If you can create a port forward in your router and run stuff at your house, what’s the point of a relay? Just expose the ports Syncthing uses and configure your client to connect to it directly using your dynamic DNS name. No public or private relays are required.

    1. Forward the following ports in your router to the local Syncthing host; any client will then be able to connect to it directly:
    • Port 22000/TCP: TCP-based sync protocol traffic
    • Port 22000/UDP: QUIC-based sync protocol traffic
    2. Go into the client and edit the home device so it connects using the dynamic DNS name directly:
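
    For example, edit the device’s Addresses field (under advanced settings) and replace the default “dynamic” value with explicit addresses; the hostname below is a placeholder for your dynamic DNS name:

    tcp://myhome.example.org:22000, quic://myhome.example.org:22000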

    For extra security you may change the Syncthing port, or run the entire thing over a WireGuard VPN like I also do (sketch below).

    Note that even without the VPN, all traffic is TLS-protected.
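
    If you go the WireGuard route, a minimal client-side peer config looks something like this (keys, hostname and IP range are placeholders, not my actual setup):

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <home-server-public-key>
    Endpoint = myhome.example.org:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25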


  • TCB13@lemmy.world to Selfhosted@lemmy.world · What’s the deal with Docker?

    Docker and the success of containers is mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere

    I don’t disagree with you, but that also shows that most modern software is poorly written: usually a bunch of solutions that barely work, where nobody is able to reproduce the setup in a quick, sane and secure way.

    There are many container runtimes (CRI-O, podman, mirantis, containerd, etc.). Docker is just a convenient API, containers are fully implemented just with Linux native features (namespaces, seccomp, capabilities, cgroups) and images follow an open standard (OCI).

    Yes, that’s exactly my point. There are many options, yet people stick with Docker and DockerHub (which is anything but open).

    In systemd you need to use 30 different options to get what containers achieve almost instantly and with much less hassle.

    Yes… maybe we just need some automation/orchestration tool for that. This is like saying it’s way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell in an isolated namespace… Docker, as you said, provides a convenient API, but that doesn’t mean we can’t do the same for systemd.
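
    To illustrate, a rough sketch of that “hard” way, with no Docker involved (the distro version and URL are just an example and will vary):

    # download and unpack a minimal rootfs
    wget https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-minirootfs-3.19.1-x86_64.tar.gz
    mkdir rootfs && tar -xzf alpine-minirootfs-3.19.1-x86_64.tar.gz -C rootfs

    # launch a shell in isolated namespaces using only native Linux features
    sudo unshare --mount --uts --ipc --net --pid --fork chroot rootfs /bin/sh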

    but I want to simply remind you that containers are the successor of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendors

    Completely proprietary… like QEMU/libvirt? :P




  • TCB13@lemmy.world to Selfhosted@lemmy.world · What’s the deal with Docker?

    The thing with Docker is that people don’t want to learn how to use Linux and are buying into an overhyped solution that makes their life easier without understanding the long-term consequences. Most of the pro-Docker arguments revolve around security, and that’s mostly BS, because 1) systemd can provide as much isolation as Docker containers (see the unit sketch below) and 2) there are other container solutions that are at least as safe as Docker and nobody cares about them.
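
    For reference, this is roughly what that isolation looks like in a unit file; the directives are standard systemd sandboxing options, while the service itself is hypothetical:

    [Service]
    ExecStart=/usr/local/bin/myapp
    DynamicUser=yes
    PrivateTmp=yes
    PrivateDevices=yes
    ProtectSystem=strict
    ProtectHome=yes
    NoNewPrivileges=yes
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    CapabilityBoundingSet=
    SystemCallFilter=@system-service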

    Companies such as Microsoft and GitHub are all about re-creating and reconfiguring the way people develop software so everyone will be hostage to their platforms. We see this in everything now; Docker/DockerHub/Kubernetes and GitHub Actions were the first signs of this cancer. We now have a generation that doesn’t understand the basics of their tech stack, about networking, about DNS, about how to deploy a simple thing onto a server that doesn’t use some Docker BS or isn’t some 3rd-party cloud-xyz deploy-from-GitHub service.

    Before anyone comments that Docker isn’t totally proprietary and there’s Podman, consider the following: it doesn’t really matter that there are truly open-source and open ecosystems of containerization technologies. In the end, people/companies will pick the proprietary/closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term.

    Docker may make development and deployment very easy and lower the bar for newcomers, but it has the dark side of being designed to reconfigure and envelop the way development gets done so someone can profit from it. That is sad and, above all, sets dangerous precedents and creates generations of engineers and developers who don’t have truly open tools like we did. There’s a LOT of money in transitioning everyone to the “deploy-from-github-to-cloud-x-with-hooks” model, so those companies will keep pushing for it.

    Note that technologies such as Docker keep commoditizing development - it’s a self-reinforcing loop that never ends. Yes, I say commoditizing development because, if you look at it, those technologies only make things easier for the entry-level developer, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring “cheap monkeys” able to configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products/services that other companies can buy with a click.


  • Joplin, and what ultimately pushed me away from it was the portability of the data within it—I didn’t love that I wasn’t ultimately just working with a folder of Markdown

    I believe you did miss something: Joplin “stores notes in Markdown format. Markdown is a simple way to format text that looks great on any device and, while it’s formatted text, it still looks perfectly readable in a plain text editor.” Source: https://joplinapp.org/help/apps/rich_text_editor/

    You have a bunch of options when it comes to synchronization.

    You can just point it at some folder and it will store the files there, then sync them with any 3rd-party solution you’d like. I personally use WebDAV because it’s more convenient (iOS support) and it’s very easy to get an Nginx instance to serve what it needs:

    server {
        listen 443 ssl http2;
        server_name  xyz.example.org;
        ssl_certificate ....;
        ssl_certificate_key ...;
        root /mnt/SSD1/web/root;

        # WebDAV share for Joplin.
        # Set your password with: htpasswd -c /etc/nginx/.credentials-dav.list YOUR_USERNAME
        # Note: PROPFIND/OPTIONS require the nginx dav-ext module (ngx_http_dav_ext_module).
        location /dav/notes {
            alias /mnt/SSD1/web/dav/notes;
            auth_basic              realm_name;
            auth_basic_user_file    /etc/nginx/.credentials-dav.list;
            dav_methods     PUT DELETE MKCOL COPY MOVE;
            dav_ext_methods PROPFIND OPTIONS;
            dav_access      user:rw;
            client_max_body_size    0;
            create_full_put_path    on;
        }
    }

    I was already using Nginx as a reverse proxy / SSL termination for FileBrowser, so it only took a couple of lines to get it serving a WebDAV share for Joplin.
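
    On the Joplin side it’s then just a matter of filling in a few fields under the synchronization settings (the values here match the config above and are illustrative):

    Synchronisation target: WebDAV
    WebDAV URL:             https://xyz.example.org/dav/notes/
    WebDAV username:        YOUR_USERNAME
    WebDAV password:        (the one set with htpasswd)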

    Is FileBrowser doing any cross-device syncing at all, or is it as it appears on the surface

    FileBrowser doesn’t do cross-device syncing, and that’s the point; I don’t ever want it doing that. For sync I use Syncthing: I run both on my NAS and point them at the same folder. All of my devices run Syncthing and sync their data with the NAS, so the NAS works as a central repository and everything is available through FileBrowser.


    • FileBrowser
    • Joplin

    I’ve been using those two and they’re way faster and more reliable than Nextcloud.

    I’ve already found alternatives for all services, except for the calendar.

    I’m using Baikal for contacts & calendar; it provides a generic CardDAV and CalDAV server that can be accessed from iOS/Android or from a web client like the plugins for Roundcube. Thunderbird also now has native support for CardDAV and CalDAV and it works just fine with Baikal.
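
    For reference, with a default Baikal install the clients point at its sabre/dav endpoint, something like the following (the host is a placeholder and the exact paths depend on your setup):

    CalDAV:  https://dav.example.org/dav.php/calendars/USERNAME/default/
    CardDAV: https://dav.example.org/dav.php/addressbooks/USERNAME/default/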



  • I know exactly how Syncthing works; the point is not its p2p nature, the point is that Nextcloud’s sync performance and reliability aren’t even comparable, because the desktop clients, the sync algorithm and the server-side tech (PHP) won’t ever be as performant at dealing with files as Go is.

    The way Nextcloud implemented sync is entirely their decision and their fault. Syncthing can be used in a more “client > server” architecture, and there are professional deployments of that, provided by Kastelo, for enterprise customers with SSO integrations, web interfaces, user management and whatnot.

    Nextcloud could’ve just implemented their web UI and then relied on the Syncthing code for the desktop/mobile sync clients. Without even changing Syncthing’s code, one way to achieve this would be to launch a single Syncthing instance per NC user and then build a GUI around it that would communicate with the NC API to handle key exchanges with the core Syncthing process (rough sketch below). Then add a few share options in the context menu and done.
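
    A rough sketch of that idea (the REST endpoints are Syncthing’s, but the per-user wiring is hypothetical and flags vary by version):

    # one Syncthing instance per NC user, each with its own home dir
    syncthing serve --home=/var/lib/nc-sync/alice --no-browser --gui-address=127.0.0.1:8401 &

    # the NC-side GUI would drive it over the REST API, e.g. read this instance's device ID...
    curl -s -H "X-API-Key: $APIKEY" http://127.0.0.1:8401/rest/system/status

    # ...and handle key exchange by adding the peer's device to the config
    curl -s -X POST -H "X-API-Key: $APIKEY" -H "Content-Type: application/json" \
         -d '{"deviceID": "PEER-DEVICE-ID", "name": "alice-laptop"}' \
         http://127.0.0.1:8401/rest/config/devices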

    This situation illustrates very clearly the issue with Nextcloud development and their decisions: they could’ve just leveraged the great engine that Syncthing has as a backend for sync, but instead, stubborn as they are, they came up with a half-assed solution that, in true Nextcloud fashion, never delivers as promised.


  • but the influence that PHP may have over your data access patterns can be a source of significant performance problems.

    Let me rephrase that for you: the influence that poorly written PHP code, an utter and total disregard for good software development practices, and the general ineptitude shown by the NC developers have over your data access patterns is the source of significant performance problems. We also have to consider all the client-side issues, poor decisions and a general lack of any testing.

    Fixed :)