You can get a quick overview via DSM, I think in the Storage Manager. For more details you could jump into a terminal and use smartctl.
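If you haven’t used smartctl before, something like this is a starting point. Note the assumptions: `/dev/sda` is a made-up device name (check `lsblk` for yours), and smartctl comes from the smartmontools package.

```shell
# /dev/sda is an assumption - check `lsblk` for your actual device names.
dev=/dev/sda
echo "SMART check for $dev"
# -H prints the overall health verdict, -a the full attribute report
# (reallocated sectors, pending sectors, power-on hours, ...):
smartctl -H "$dev" 2>/dev/null || echo "smartctl unavailable or $dev not readable"
smartctl -a "$dev" 2>/dev/null || true
```

Reallocated and pending sector counts are the attributes I’d watch most closely on spinning disks.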
Have you checked the SMART values of your drives? Do they give you a reason for your concerns?
Anyhow, you should never be in a position where you need to worry about drive failure. If the data is important, back it up separately. If it isn’t, well, don’t sweat it then.
Why would you buy something new if your current solution works and your requirements don’t change? Just keep it.
Wasabi S3 is nice and cheap. You’ll only pay what you use, so probably just a few cents in your case.
Oops, nevermind:
> If you store less than 1 TB of active storage in your account, you will still be charged for 1 TB of storage based on the pricing associated with the storage region you are using.
I recently upgraded three of my Proxmox hosts with SSDs to make use of Ceph. While researching I faced the same question - everyone said you need an enterprise SSD, or Ceph would eat it alive. The feature that apparently matters most in my case is Power Loss Protection (PLP). It’s not even primarily there to protect against a possible outage: because the write cache is protected, the drive can safely acknowledge sync writes from the cache instead of flushing to flash every time, which is where the performance difference comes from.
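If you want to feel the cost of sync writes yourself, here’s a crude sketch (GNU dd assumed; the file name is arbitrary):

```shell
# oflag=dsync forces every 4k block to stable storage before the next one
# starts - roughly the pattern Ceph's journal/WAL produces. A drive with PLP
# can acknowledge these from its protected cache; one without has to hit
# flash each time, which is where consumer SSDs fall over.
dd if=/dev/zero of=synctest.bin bs=4k count=256 oflag=dsync 2>&1 | tail -n 1
rm -f synctest.bin
```

Compare the throughput against a plain `dd` run without `oflag=dsync` and you’ll see why this one spec matters so much.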
There are some SSDs marketed for usage in data centers, these are generally enterprisey. Often they are classified for “Mixed Use” (read and write) or “Read Intensive”. Other interesting metrics are Drive Writes Per Day (DWPD) and obviously TBW (Terabytes Written) and IOPS.
In the end I went with used Samsung PM883s.
But before you fall into this rabbit hole, you might check if you really need an enterprise SSD. If all you’re doing is running a few VMs in a homelab, I would expect consumer SSDs to work just fine.
What’s wrong with Portainer?
No, the registrar just registers the domain for you (duh). You can then change the DNS records for this domain and these records will propagate to other DNS servers all around the world. Your clients will use some of these DNS servers to look up the IP address of your server and then connect to this IP.
The traffic between your clients and server has nothing to do with your domain registrar.
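You can watch the resolution step in isolation from the shell (example.com is a placeholder - use your own domain):

```shell
# getent asks the system resolver - the same lookup path your clients use
# once the records you set at the registrar have propagated.
domain=example.com
echo "Resolving $domain"
getent hosts "$domain" 2>/dev/null || echo "lookup failed (offline, or getent not available)"
```

After that lookup your client talks straight to the returned IP; the registrar is out of the picture.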
You could look into mainboards with IPMI. They give you a web based interface to fully control your server, including power management, shell, sensor readings, etc.
Haha, no problem!
Have you seen the link?
I love Jellyfin but I would absolutely not make it accessible over the public internet. A VPN is the way to go.
Fixed, thanks.
Yeah, `tail` would be the more obvious choice instead of negating `head`.
Fuck, I need coffee. @klay@lemmy.world is right (again).
You’re right, I edited my comment. Thanks!
This line seems to list all dumps and then delete all but the two most recent ones.

In detail:

- `ls -1 /backup/*.dump` lists all files ending with `.dump` inside the `/backup` directory, one per line, in alphabetical order
- `head -n -2` drops the last two lines, so it outputs every dump except the two most recent ones (assuming the filenames sort chronologically, e.g. by timestamp)
- `xargs rm -f` passes those filenames to `rm -f` to delete them

Take a look at explainshell.com.
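If you want to convince yourself, the same pipeline can be replayed safely in a scratch directory (GNU `head` assumed - the negative `-n -2` count is a GNU extension):

```shell
# Re-create the retention pipeline against throwaway files.
tmp=$(mktemp -d)
touch "$tmp/a.dump" "$tmp/b.dump" "$tmp/c.dump" "$tmp/d.dump"

# List dumps alphabetically, drop the last two names, delete the rest:
ls -1 "$tmp"/*.dump | head -n -2 | xargs rm -f

remaining=$(ls -1 "$tmp")
echo "$remaining"            # only c.dump and d.dump survive
rm -rf "$tmp"
```

Note the sharp edge: if fewer than three dumps exist, `head -n -2` emits nothing and `xargs rm -f` is called without arguments, so nothing is deleted - which happens to be the behavior you want here.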
Yeah, the quality is really good. It’s also not cheap. I bought this case mostly because it’s rather shallow and did fit into my previous server rack.
I’m now at a point where I should buy another drive cage but I’m a bit hesitant to spend 150€ for it. Well…
Edit: Any reason you decided to go with a non-server mainboard without IPMI and ECC support?
Fun! I used the exact same chassis for my NAS. Thanks for sharing!
I don’t use rclone at all; restic is perfectly capable of backing up to remote storage on its own.
> I’ve been working in IT for about 6/7 years now and I’ve been selfhosting for about 5. And in all this time, in my work environment or at home, I’ve never bothered about backups.
That really is quite a confession to make, especially in a professional context. But good for you to finally come around!
I can’t really recommend a solution with a GUI but I can tell you a bit about how I back up my homelab. Like you, I have a Proxmox cluster with several VMs and a NAS. I’ve mounted some storage from my NAS into Proxmox via NFS. This is where I let Proxmox store backups of all VMs.
On my NAS I use restic to back up to two targets: an offsite NAS which receives full backups, and additionally Wasabi S3 for the stuff I really don’t want to lose. I like restic a lot and found it rather easy to use (also coming from borg/borgmatic). It supports many different storage backends and multithreading (looking at you, borg).
I run TrueNAS, so I make use of ZFS Snapshots too. This way I have multiple layers of defense against data loss with varying restore times for different scenarios.
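For the curious, the restic side of this boils down to a handful of commands. A dry-run sketch - the repository URLs, paths and password file below are made up, substitute your own; set DRY_RUN=0 to actually execute:

```shell
#!/bin/sh
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

export RESTIC_PASSWORD_FILE=/root/.restic-pass                  # hypothetical

# Full backup to the offsite NAS (restic speaks SFTP natively):
run restic -r sftp:backup@offsite-nas:/backups backup /mnt/tank

# Only the critical subset goes to Wasabi S3:
run restic -r s3:https://s3.wasabisys.com/my-bucket backup /mnt/tank/important

# Apply a retention policy, then drop unreferenced data:
run restic -r sftp:backup@offsite-nas:/backups forget --keep-daily 7 --keep-weekly 4 --prune
```

Stick something like this in cron or a systemd timer and you’re most of the way there.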
Use something like pgAdmin, DBeaver or the psql CLI to connect to your Postgres instance. Then run the command from the changelog as a SQL query.
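In case the CLI route is new to you, a sketch - the connection string is made up, and the actual statement from the changelog goes where the placeholder is:

```shell
conn="postgresql://myuser@localhost:5432/mydb"   # hypothetical DSN - use yours
# -c runs a single statement and exits; replace SELECT 1 with the changelog SQL:
psql "$conn" -c 'SELECT 1;' 2>/dev/null || echo "psql not installed or cannot connect to $conn"
```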