• 1 Post
  • 21 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • For a bit of context for those not too familiar with CDN stuff: my web server hosts about 20 small-business websites. None are heavy on images or video or anything else. Most sites have well under 1k visitors a day, some under 100.

    Each month the Cloudflare CDN saves me between 40 and 60 GB of traffic, which is nothing my server couldn’t handle on its own, but over a year that’s ~600 GB in saved data, so it adds up

    If you had a Lemmy instance with even just 100 active users, with all the images and videos and all the federated background communications, that would add up extremely quickly.


  • tristan@aussie.zone to Selfhosted@lemmy.world • Spam posts • 9 months ago

    It’s a shitty situation that’s causing mods and users alike a lot of frustration, and it might be a while before it’s sorted.

    Unfortunately I think this is something that will need to be dealt with federation-wide before it’s under control… But even then it’ll still add a lot of extra ongoing work for the mods of instances and communities just to clean up anything that gets through






  • My current setup is 3x Lenovo M920q (soon to be 4), all in a Proxmox cluster, along with a QNAP NAS with 20 GB RAM and 4x 8 TB drives in RAID 5.

    The specs on each M920q are: i5-8500T, 32 GB RAM, 256 GB SATA SSD, 2 TB NVMe SSD, 1 GbE NIC

    On each Proxmox machine I have a Docker VM in swarm mode, and each of those VMs has the same NFS mounts pointing to the NAS
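
    As a rough sketch, the host-level NFS mounts on each Docker VM look something like this in /etc/fstab (the NAS IP and export paths here are just placeholders, not my actual ones):

      # mount the NAS exports on every Docker VM so all nodes see the same paths
      192.168.1.50:/share/media    /mnt/media    nfs  defaults,_netdev  0  0
      192.168.1.50:/share/configs  /mnt/configs  nfs  defaults,_netdev  0  0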

    On the NAS I have a normal Docker installation which runs my databases

    On the swarm I have over 60 Docker containers, including the arr services, Overseerr and two Deluge instances

    I have no issues with performance or read/write or timeouts.

    As one of the other posters said, point all of your arr services to the same mount point, as it makes it far easier for the automated stuff to work.

    Put all the arr services into a single stack (or at least on a single network); that way you can point them at the container name rather than an IP. For example, to tell Overseerr where Sonarr is, you’d just say http://sonarr:8989, and it will make life much easier
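
    A stripped-down compose stack along those lines might look like this (image names, paths and ports are just examples of the idea, not my exact config):

      version: "3.8"
      services:
        sonarr:
          image: lscr.io/linuxserver/sonarr
          volumes:
            - /mnt/media:/data          # same mount point for every arr service
          networks:
            - arr
        overseerr:
          image: lscr.io/linuxserver/overseerr
          ports:
            - "5055:5055"               # web UI exposed to the LAN
          networks:
            - arr                       # can reach Sonarr at http://sonarr:8989
      networks:
        arr: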

    As for Proxmox, the biggest thing I’ll say from my experience, if you’re just starting out: make sure you set its IP and hostname to what you want right from the start… It’s a pain in the ass to change them later. So if you’re planning to use VLANs or something, set them up first
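
    For reference, the node’s network config lives in /etc/network/interfaces, and a VLAN-aware setup there looks roughly like this (the interface name and addresses below are just placeholders):

      auto lo
      iface lo inet loopback

      iface eno1 inet manual

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094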

    Pic of my setup








  • Congrats on your new slippery slope haha

    “Like I fire up a docker image which plays music (if that’s even possible?) it has to have access to the disk, sound drivers, maybe interactive stuff etc on the host PC right?”

    So the main things you’ll want to read up on for that are mounts. Mounts let you attach files and folders from the host computer into the Docker container, which the container sees as if they were inside it.

    A lot of Docker apps run a web server, so instead of accessing them like a normal application, you load up the web page at the host’s IP address and the exposed port. Then, just like running Netflix or anything else in a browser, it already has access to your local sound and video devices through that

    This also means that you can open them up to other computers/devices on the home network… so your phone could load it up and play music, or your Windows PC could, and it’s all served from that Docker container

    If you’re interested in hosting media, you could look into Plex or Jellyfin; they are media servers that can stream self-hosted videos, music and photos over the network.
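
    To tie the mounts and exposed ports together, firing up something like Jellyfin is roughly this (the host folder path here is just an example):

      # run Jellyfin, mounting a host music folder read-only and publishing the web UI port
      docker run -d --name jellyfin \
        -p 8096:8096 \
        -v /srv/music:/media:ro \
        jellyfin/jellyfin
      # the web UI is then at http://<host-ip>:8096 from any device on the network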

    There are a lot of other options that are more specific, and what’s right for everybody else might not be right for you, so it’s worth playing around with the various options



  • Docker/Kubernetes and VMs are similar in that they are all virtualisation, but the similarity kinda ends there. Love them or hate them, each has its own important role in IT infrastructure.

    First off, Docker itself needs a host operating system to run. Secondly, Docker runs containers. Each image is built on a cut-down version of an operating system, generally to perform one specific task or run one specific application. The environment is preconfigured to work exactly as intended, so generally speaking you don’t get the whole “but it works on my machine” problem
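
    A Dockerfile makes the “one specific task” idea pretty concrete; this hypothetical one bundles a tiny base OS plus a single app:

      # hypothetical single-purpose image: a cut-down base plus one application
      FROM alpine:3.19
      RUN apk add --no-cache python3
      COPY app.py /app/app.py
      CMD ["python3", "/app/app.py"]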

    Kubernetes I’m not the most qualified to speak to, but pretty much someone said “ok, Docker is great but we want redundancy, scalability, etc” and made Kubernetes.

    A VM is a full virtual machine. You can give it virtual hard disks, virtual network cards, etc. You then install a full operating system on it; it could be Windows or Linux or whatever you need.

    From there you can install Docker if that’s what you want, or install specific apps. This is the first difference: if you install the app directly, compared to a Docker container, you need to make sure you have all the prerequisites met, all the correct compatibility, etc. It’s up to you to make sure your system is correct for the software.

    Another major difference is that Docker containers are all seen on the network as coming from whatever the host machine’s IP is.

    Whereas the network views each VM as its own device, giving each its own IP (if using DHCP) and allowing things like VLANs.
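
    On the Docker side, that just means published ports all hang off the one host address, something like this (container names and ports are only an illustration):

      # both containers end up reachable via the single host IP, just on different ports
      docker run -d --name web1 -p 8080:80 nginx
      docker run -d --name web2 -p 8081:80 httpd
      # http://<host-ip>:8080 and http://<host-ip>:8081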

    As for my setup, I have 3 VMs with Docker servers, each running 20-30 Docker containers, 3 VMs running AdGuard DNS, 1 VM acting as a Tailscale entry point, then a few application-specific VMs. It’s handy just being able to fire up a blank Ubuntu instance to play around with software, and if anything goes wrong, just delete the whole machine and start fresh.

    Then for storage behind it all, I have a QNAP TS-453D with 4x 8 TB drives.

    Then outside my home, I have 2x Oracle-hosted VMs, one hosting about 22 websites and all the stuff they need, and one acting as a tunnel into my home services since I’m behind CGNAT, and then another physical server located in the local data centre running email for a few small businesses and myself


  • Proxmox is like ESXi; it lets you set up virtual machines. So you can fire up a virtual Linux machine and allocate it, say, 2 GB of RAM and limit it to 2 cores of the CPU, or give it the whole lot, depending on what you need to do
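
    From the Proxmox shell that’s roughly a one-liner; this sketch assumes a made-up VM ID, storage name and installer ISO:

      # create a small VM: 2 GB RAM, 2 cores, 32 GB disk, bridged network, Ubuntu installer ISO
      qm create 101 --name test-ubuntu --memory 2048 --cores 2 \
        --net0 virtio,bridge=vmbr0 \
        --scsi0 local-lvm:32 \
        --cdrom local:iso/ubuntu-22.04-live-server-amd64.iso
      qm start 101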

    Having them in a cluster lets them move virtual machines between the physical hosts and keep complete copies, so if one goes down the next can just start up

    It is a little overkill; I’m probably only using about 20% of its resources, but it’s all for a good cause. I’m currently unable to work due to kidney failure, but I’m working towards a transplant. If I do get a transplant and can return to work, being able to say “well, this is my home setup and the various things I know how to do” looks a lot better than “I sat on my ass for the last 4 years so I’m very rusty”

    This whole setup cost me about $1000 AUD and uses 65-70 W on average