A person with way too many hobbies, but I still continue to learn new things.

  • 4 Posts
  • 93 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • There was no such thing as a default firewall, but even now when I set up a new Debian machine there are no firewall rules, just the base iptables installed so you CAN add rules. Back then we also had insecure things like telnet installed by default and exposed to the world, so there’s really no telling exactly how they managed to get into my machine. It’s still good to learn about network security up front rather than relying on any default settings if someone is planning on self-hosting.
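    For anyone starting from that same bare-iptables state, a minimal default-deny ruleset looks something like this (a sketch only, run as root; the allowed ports are examples you would adjust to your own services):

```shell
# Drop everything inbound by default, then poke holes for what you actually run.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # ssh
iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # http
```

    Rules like these are lost on reboot unless you persist them (e.g. with the iptables-persistent package on Debian).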


This was back in '99 and I didn’t know much about Linux (or servers) at the time, so I’m not exactly sure what they did… but one morning I woke up and noticed my web service wasn’t working. I had an active login on the terminal but was just getting garbage from it, and I couldn’t log in remotely at all. My guess was that someone hacked in, but hacked the system so badly that they basically trashed it. I was able to recover a little data straight from the drive, but I didn’t know anything about analyzing the damage to figure out what happened, so I finally ended up wiping the drive and starting over.

    At that point I did a speed-run of learning how to set up a firewall, and noticed right away all kinds of attempts to hit my IP. It took time to learn more about IDS and trying not to be too reckless in setting up my web pages, but apparently it was enough to thwart however that first attacker got in. Eventually I moved to a dedicated firewall in front of multiple servers.

    Since then I’ve had a couple instances where someone cracked a user password and started sending spam through, but fail2ban stopped that. And boy are there a LOT of attempts at trying to get into the servers. I should probably bump up fail2ban to block IPs faster and over a longer period when they use invalid user names, since attacks these days happen from such a wide range of IPs.
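    Tightening fail2ban the way described above is only a few lines in a jail override (a sketch; the jail name and thresholds are examples, and `bantime.increment` requires fail2ban 0.11 or newer):

```ini
# /etc/fail2ban/jail.local (fragment)
[sshd]
enabled  = true
maxretry = 3               ; ban after three failures...
findtime = 10m             ; ...within a ten-minute window
bantime  = 24h             ; initial ban length
bantime.increment = true   ; repeat offenders get progressively longer bans
```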


  • I see a number of comments to use a virtual server host, but I have not seen any mention of the main reason WHY this is advisable… If you want to host something from your home, people need a way to reach you. There are two options for this – use a DDNS service (generally frowned upon for permanent installations), or get a static IP address from your provider.

    DDNS means you have to monitor whenever your local IP address changes, send out updated records, and wait for those changes to propagate across the internet. This generally will mean several minutes or more of down time where nobody can reach your server, and can happen at completely random times.

    A static IP is reliable, but they cost money, and some providers won’t even give you the option unless you get a business-class connection, which costs even more money. However this cost is usually already rolled into the price of a virtual machine.

    Keep in mind also that when hosting at home, simply using a laptop to stay online 24/7 is not enough; you also need a battery backup for your network equipment. You will want to learn about setting up a firewall and some kind of IDS to protect the front end of your services, but for starting out you can host this on the same machine as your other services. And if you really want to be safe, set up a second internal machine that you can perform regular backups to, so when your machine gets hacked you have a way to restore the information.

    My first server was online for two whole weeks before someone blew it up. Learn security first, everything after that will be easy.
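    If you do go the DDNS route despite the caveats above, the usual approach is a small client that watches your public IP and pushes updates, for example ddclient (a sketch; the provider, hostname, and credentials are all placeholders):

```ini
# /etc/ddclient.conf (fragment)
daemon=300                        ; re-check the public IP every 5 minutes
use=web, web=checkip.dyndns.org   ; discover the current external address
protocol=dyndns2
server=members.dyndns.org
login=your-username
password=your-token
yourhost.example.net
```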


  • I dunno, like I said zfs is pretty damn good at recovery. If the drives simply drop out but there’s no hardware fault you should be able to clear the errors and bring the pool back up again. And the chances of two drives failing at the same time are pretty low. One of these days I do need to buy a spare to have on hand though. Maybe I’ll even swap out one drive just to see how long it takes to rebuild.
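    The recovery steps mentioned here are quick from the command line (pool and device names are hypothetical):

```shell
zpool status tank            # see which devices faulted and why
zpool clear tank             # clear transient errors and resume the pool
zpool scrub tank             # verify all data against checksums
zpool replace tank sda sdh   # swap a genuinely failed drive for the spare
```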


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Second hand disks?
    9 months ago

    My current setup is eight 18TB Exos drives, all purchased from Amazon’s refurb shop, and running in a RAIDz2. I’m pulling about 450MB/s through various tests on a system that is in use. I’ve been running this about a year now and smartd hasn’t detected any issues. I have almost never run new drives for my storage and the only time I’ve ever lost data was back when I was running mdadm and a power glitch broke the sync on multiple drives so the array couldn’t be recovered. With zfs I have even run a RAID0 with five drives which saw multiple power incidents (before I got a redundant power supply) and I never once lost anything because of zfs’ awesome error detection.

    So yes, used drives can be just fine as long as you do your research on the drive models, have a very solid power supply, and are configured for hot-swapping so you can replace a drive when they fail. Of course that’s solid advice even for brand new drives, but my last set of used drives (also from ebay) lasted about a decade before it was time to upgrade. Sure, individual drives took a dump over that time, this was another set of eight and I replaced three of them, but the data was always safe.
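    When buying used drives like this, it’s worth a SMART once-over before trusting them with data (the device name is a placeholder):

```shell
smartctl -H /dev/sdX        # overall health verdict
smartctl -t long /dev/sdX   # kick off an extended self-test
smartctl -a /dev/sdX        # afterwards: check Reallocated_Sector_Ct,
                            # pending sectors, and Power_On_Hours
```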



  • No matter how you go about it, getting these drives set up to be reliable isn’t going to be cheap. If you want to run without an enclosure, at the very least (and assuming you are running Linux) you are going to want something like LSI SAS cards with external ports, preferably a 4-port card (around $50-$100, each port will run four drives) that you can flash into IT mode. You will need matching splitter cables (3x $25 each). And most importantly you need a VERY solid power supply, preferably something with redundancy (probably $100 or more). These prices are based on used hardware from ebay, except for the cables, and you’ll have to do some considerable research to learn how to flash the SAS cards, and which ones can be flashed.

    Of course this is very bare-bones, you won’t have a case to mount the drives in, and splitter cables from the power supply can be finicky, but with time and experience it can be made to work very well. My current NAS is capable of handling up to 32 external and 8 internal drives and I’m using 3D-printed drive cages with some cheap SATA2 backplanes to finally get a rock-solid setup. It takes a lot of work and experience to do things cheaply.


  • This right here. As a member of the OpenNIC project, I used to run an open resolver and this required a lot of hands-on maintenance. Basically what happens is someone sends a very small packet requesting a lookup that returns a huge amount of data (like DNSSEC records). They can make thousands of these requests in a short period, attempting to flood the target domain’s DNS servers and effectively take them offline, using your open resolver as the source of the attack traffic.

    At the very least, you need to have strict rate-limiting controls on DNS lookups. And since the requests come in through UDP, they can spoof their IP address so you can’t simply block an attacker. When I ran into this issue, I wrote up scripts to monitor for a lot of requests to the same domain name and outright block those until the attack stopped. It wasn’t a great solution, but it did at least make sure my system wasn’t contributing to an attack.

    Your best bet is to only respond to DNS requests for your own domain(s). If you really want an open resolver, think about limiting it by creating some sort of sign-up method (for instance, ddns servers use a specific URL to register the changing IP of known users), but still keep the rate-limiting in place.
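    The monitoring approach described above, counting repeated lookups of the same name within a window and cutting them off, boils down to something like this sketch (the threshold and the idea of feeding it domain names pulled from a query log are my assumptions, not the original scripts):

```python
from collections import Counter

def flag_abusive_domains(queries, threshold=500):
    """Given the domain names queried within one monitoring window,
    return the set of names requested more than `threshold` times,
    which is the signature of an amplification attack on that zone."""
    counts = Counter(queries)
    return {name for name, n in counts.items() if n > threshold}

# 600 lookups of one zone in a single window is suspicious;
# a handful of lookups of another is normal traffic.
window = ["victim.example"] * 600 + ["normal.example"] * 3
print(sorted(flag_abusive_domains(window)))  # prints ['victim.example']
```

    A real deployment would tail the resolver’s query log and feed each flagged name into a temporary firewall or response-policy rule, then expire it once the flood stops.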


  • You might want to use a code block instead of bullet points for your table; the way you presented it is unreadable, but I found the info on your blog page.

    One of my criteria for video formats is the portability. Like sometimes I might watch something through a web browser which natively supports x264. Yeah x265 provides better compression, and AV1 certainly looks interesting, but they both require the addition of codecs on most of my viewing devices and in some cases that’s not possible.

    For most cases I’ve found that CRF25 with x264 works reasonably well. I tend to download 720p videos to watch on our 1080p TV and don’t notice the difference except in very minor situations like rapid motion on a solid-color background (usually only seen on movie studio logo screens). Any sort of animated shows can go even lower without noticeable degradation.
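    For reference, the encode settings described (x264 at CRF 25, keeping the original audio) map to a single ffmpeg invocation; the file names and the preset choice are examples:

```shell
ffmpeg -i input.mkv -c:v libx264 -crf 25 -preset slow -c:a copy output.mkv
```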


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Selfhosting Overleaf
    11 months ago

    Wait, there’s an option to host Overleaf locally? Is there any cost associated with this, or any restrictions on the number of users?

    [Edit] Found some more info on this, there’s a free community version, and then an enterprise version with a fee that lets you self-host but adds features like SSO and support from the company. I’ll definitely have to look more into both of these options. Thanks, OP, for making me aware of this!



  • The key concept here is how valuable your time is to rebuild your collection. I have a ~92TB (8x16TB raidz2) array with about 33TB of downloaded data that has never been backed up as it migrated from my original cluster of 250GB drives through to today. I think part of the key is to have a spare drive on hand and ready to go when you do lose a drive, to be swapped in as soon as a problem shows up, plus having email alerts when a drive goes down so you’re aware right away.

    To add a little more perspective to my setup (and nightmare fuel for some people), I have always made my clusters from used drives, generally off ebay but the current batch comes from Amazon’s refurbished shop. Plus these drives all sit externally with cables from SAS cards. The good news is this year I finally built a 3D-printed rack to organize the drives, matched to some cheap backplane cards, so I have less chance of power issues. And power is key here, my own experience has shown that if you use a cheap desktop power supply for external drives, you WILL lose data. I now run a redundant PS from a server that puts out a lot more power than I need, and I haven’t lost anything since those original 250GB drives, nor have I had any concerns while rebuilding a failed drive or two. At one point during my last upgrade I had 27 HDDs spun up at once so I have a lot of confidence in this setup with the now-reduced drive count.
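    The email alerts mentioned above come almost for free with ZFS via its event daemon; a minimal zed configuration looks like this (the address is a placeholder):

```shell
# /etc/zfs/zed.d/zed.rc (fragment)
ZED_EMAIL_ADDR="you@example.com"   # where drive-fault notices go
ZED_NOTIFY_INTERVAL_SECS=3600      # rate-limit repeat notifications
ZED_NOTIFY_VERBOSE=1               # also mail when scrubs/resilvers finish
```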



  • One thing I’m not following in all the discussions about how self-contained docker is… nearly all of my images make use of NFS shares and common databases. For example, I have three separate smtp servers which need to put incoming emails into the proper home folders, but also database connections to track detected spam and other things. So how would all these processes talk to each other if they’re all locked within their container?

    The other thing I keep coming back to, again using my smtp servers as an example… It is highly unlikely that anyone else has exactly the same setup that I do, let alone that they’ve taken the time to build a docker image for it. So would I essentially have to rebuild the entire system from scratch, then learn how to create a docker script to launch it, just to get the service back online again?




  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Should I move to Docker?
    11 months ago

I’m not sure I understand this idea that VMs have a high overhead. I just checked one of my servers: there are nine VMs running everything from chat channels to email to web servers, and the server is 99.1% idle. And this is on a PowerEdge R620 with low-power CPUs, it’s not like I’m running something crazy-fast or even all that new. Hell, until the beginning of this year I was running all this stuff on PowerEdge 860s, which are nearly 20 years old now.

    If I needed to set up the VM again, well I would just copy the backup as a starting point, or copy one of the mirror servers. Copying a VM doesn’t take much, I mean even my bigger storage systems only use an 8GB image. That takes, what, 30 seconds? And for building a new service image, I have a nearly stock install which has the basics like LDAP accounts and network shares set up. Otherwise once I get a service configured I just let Debian manage the security updates and do a full upgrade as needed. I’ve never had a reason to try replacing an individual library for anything, and each of my VMs run a single service (http, smtp, dns, etc) so even if I did try that there wouldn’t be any chance of it interfering with anything else.

    Honestly from what you’re saying here, it just sounds like docker is made for people who previously ran everything directly under the main server installation and frequently had upgrades of one service breaking another service. I suppose docker works for those people, but the problems you are saying it solves are problems I have never run in to over the last two decades.
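    For what it’s worth, the clone-from-a-template workflow described above is a single command under libvirt (the VM names are examples):

```shell
virt-clone --original template-vm --name smtp2 --auto-clone
```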


  • This is kinda where I’m at as well. I have always run my home services each in their own VM. There’s no fuss to set up a new one, if I want to move it to a different server I just copy the *.img file over and launch it. Sure I run a lot of internet services across my various machines but it all just works so I don’t understand what purpose there would be to converting all the custom configurations over to docker. It might make sense if I was trying to run all my services directly on the bare metal, but who does that?


  • I guess it just annoys me that they built a product on incredibly shady practices and have somehow managed to wedge themselves in to the business world under the guise of being “legitimate”. Trusting anything on their site, to me, feels as risky as trusting anything you see on Yelp – sure a real person might have posted the review, or maybe the business paid their blackmail tax to not get de-listed, but how many better opportunities are not being shown because the company deleted all their positive reviews?


  • Hell, why do this many people use LinkedIn? The whole platform was built off of scraping Windows users’ address books without permission, sending unsolicited emails to all of those contacts using the name of that user, and pretending like they were such a great platform that of course your friends are inviting you to also join. And I’m pretty sure they still use this practice today because I continue to get emails from people who have no idea why their name is being attached to the spam I receive.