There’s a quote from 1908’s The Wind in the Willows: “Believe me, my young friend, there is nothing–absolutely nothing–half so much worth doing as simply messing about in boats.”
Fill in your own hobby, and it reads just as well.
$10/month is one drink in the pub on one Friday night out of four. It’s not even a movie ticket.
European electricity rates are closer to $0.30/kWh, and I agree that 100W 24/7 is a cost worth being aware of. But I think we’re seeing in this thread that it’s pretty easy to find a system with standard PC parts from the past decade that idles in the 50W range, like OP’s, even with a couple of HDDs. At that level, $50/year (US) or even $150/year (EU) in electricity to keep an old desktop out of a landfill maybe doesn’t seem so bad.
I mean, one should think hard about whether their home lab really needs a second full system running for failover, or whether they really need a separate desktop-based system just for NAS. And maybe don’t convert your old gaming rig and its GPU into a home server. Or the quad-Xeon server that work is ‘just giving away,’ even if it would be cool to have a $50,000 computer running in the basement.
5W vs 50W is an annual difference of 400 kWh. Or 150 kg CO2e, if that’s your metric. Either way, it’s not a huge cost for most people capable of running a 24/7 home lab.
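For the arithmetic behind those numbers (the grid intensity is my back-of-the-envelope assumption, not something from the thread):

    45 W x 24 h x 365 days ≈ 394 kWh/year, call it 400 kWh
    400 kWh x ~0.37 kg CO2e/kWh ≈ 150 kg CO2e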
If you start thinking about the costs - either cash or GHG - of creating an RPi or other dedicated low-power server, plus the energy to run HDDs (5-10W each) and other accessories, well, the picture gets pretty complicated. Power is one aspect, and it’s really easy to measure objectively, but that also makes it easy to fetishize.
Haven’t noticed any issues, but I’m not intentionally using mDNS. dhcpd tells all the clients where the nameserver is and issues DDNS updates to bind, so I haven’t needed any of the zero-config stuff. I did disable avahi on a linux server, but that was more because it was too chatty than because it caused any actual problems. I wouldn’t think there would be any more issues between mDNS and a fake domain than between mDNS and a real, big-boy domain on the same network.
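For anyone curious, the dhcpd side of that is only a handful of lines. A minimal sketch - the domain, addresses, and key here are placeholders, not my actual config:

    # dhcpd.conf (ISC dhcpd): hand clients the local resolver and search domain,
    # and send dynamic DNS updates to bind
    option domain-name "home.example.top";
    option domain-name-servers 192.168.1.2;

    ddns-updates on;
    ddns-update-style standard;
    ddns-domainname "home.example.top.";

    key "ddns-key" {
        algorithm hmac-sha256;
        secret "REPLACE_WITH_BASE64_SECRET";
    };

    zone home.example.top. {
        primary 192.168.1.2;
        key "ddns-key";
    }

bind needs the matching key plus allow-update (or update-policy) on the zone for the updates to land.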
I recently moved my internal network to a public domain. [random letters].top was $1.60 at porkbun, and now I can do DNSSEC and letsencrypt. I added a pre-hook to LE’s renew that briefly opens the firewall for their challenges, but now I’m going to have to look at the DNS challenge.
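The hook version is nothing clever - a sketch assuming ufw, so substitute your own firewall’s commands:

    # open port 80 just long enough for the HTTP-01 challenge, then close it again
    certbot renew \
      --pre-hook  "ufw allow 80/tcp" \
      --post-hook "ufw delete allow 80/tcp"

The appeal of the DNS-01 challenge is that nothing gets opened at all, but it needs a way to push TXT records to whatever hosts the zone.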
Almost everything I do references just hostname, with dns-search supplied by dhcp, so there was surprisingly little configuration to change when I switched domains.
I’d tried that… this has been going on for five days, and I cannot describe my level of frustration. But I solved it, literally just now.
Despite systemctl status apparmor.service claiming it was inactive, it was secretly active. audit.log was so full of sudo that I failed to see all of the
apparmor="DENIED" operation="mknod" profile="/usr/sbin/named" name="/etc/bind/dnssec-keys/K[zone].+013+16035.l6WOJd" pid=152161 comm="isc-net-0002" requested_mask="c" denied_mask="c" fsuid=124 ouid=124FSUID="bind" OUID="bind"
That made me realize that when I thought I’d fixed the apparmor rule, I’d used /etc/bind/dnskey/ rw instead of /etc/bind/dnskey/** rw.
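For anyone else fighting this, the fix is a couple of lines - a sketch assuming the Debian/Ubuntu profile layout, using the directory from the denial above:

    # /etc/apparmor.d/local/usr.sbin.named
    /etc/bind/dnssec-keys/ rw,
    /etc/bind/dnssec-keys/** rw,

    # then reload the profile
    sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named

The ** is what lets named create files inside the directory, not just touch the directory itself.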
The bind manual claims that you don’t need to manually create keys or manually include them in your zone file if you use dnssec-policy default, or presumably any other policy with inline signing. It claims that bind will generate its own keys, write them out, and even manage timed rotation or migration to a new policy. I can’t confirm or deny that, because it definitely found the keys I had manually created (one of which was $INCLUDEd in the zone file, and one not) and used them. It also edited them and created .state files.
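For reference, the zone stanza that exercises this is short; the zone name and file paths here are placeholders:

    // named.conf
    zone "home.example.top" {
        type primary;
        file "/var/lib/bind/db.home.example.top";
        key-directory "/etc/bind/dnssec-keys";
        dnssec-policy default;
        inline-signing yes;
    };

With inline signing, bind maintains the signed copy itself, so the unsigned zone file stays untouched.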
I feel like I should take the rest of the day off and celebrate.
Back in the day, I’d go through HDDs faster than systems - always needed to add storage before I could replace the CPU. I didn’t start disassembling them until they got up to the 500 MB range, but you’d often get 3 platters back then. OP must be harvesting from a whole workgroup - I’ve only got a 3cm stack and 7 drives waiting for the screwdriver.
No idea, honestly, what the popular perception of N100 platform is. It only came to my mind because I’d watched https://www.youtube.com/watch?v=hekzpSH25lk a couple days ago. His perspective was basically the opposite of yours, i.e.: Is a Pi-5 good enough to replace an N100?
Pi5+ just because I’d originally written Pi5+PS/case/SD.
And you’re right that everything has gotten more expensive, but $35 in 2016 (Pi-3) is only $45 today (and you can still get a 3B for $35). The older Pis hit, for me, a sweet spot of functionality, ease, and price. Price-wise, they were more comparable to an Arduino board than a PC. They had GPIOs like a microcontroller. They could run a full operating system, so they were easy to access, configure, and program, without the added overhead of cross-compiling or directly programming a microcontroller. That generation of Pi was vastly overpowered for replacing an Arduino, so naturally people started running other services on them.
Pi 3 was barely functional as a desktop, and the Pi Foundation pushed them as a cheap platform to provide desktop computing and programming experience for poor populations. Pi4, and especially Pi5, dramatically improved desktop functionality at the cost of marginal price increases, at the same time as Intel was expanding its inexpensive, low-power options. So now, a high-end Pi5 is almost as good as a low-end x86, but also almost as expensive. It’s no longer attractive to people who mostly want an easy path to embedded computing, and (I think) in developed countries, that was what drove Pi hype.
Pi Zero, at $15, is more attractive to those people who want a familiar interface to sensors and controllers, but it isn’t powerful enough to run NAS, libreelec, pihole, and the like. Where “Raspberry Pi” used to be a melting pot for people making cool gadgets and cheap computing, they’ve now segmented their customer base into Pi-Zero for gadgets and Pi-400/Pi-5 for cheap computing.
My guess is Firefox. I’m using Kodi - OSMC/libreelec - and it coasts along at 1080p, with plenty of spare CPU to run pihole and some environmental monitors. Haven’t tried anything 4k, but supposedly Pi4 offloads that to hardware decoding and handles it just fine (as long as the codec is supported).
https://www.acepcs.com/products/mini-pc-intel-n100-ultra is only $140, and it looks to me like Pi5+ is $160 with PS/case/microSD.
Pi 4’s were hard to get there for a while. Pi 5’s are expensive. A lot of other SBCs are also expensive, as in not all that much cheaper than a 2-3 generations old low-end x86. That makes them less attractive for special-purpose computing, especially among people who have a lot of old hardware lying around.
Any desktop from the last decade can easily host multiple single-household computer services, and it’s easier to maintain just one box than a half dozen SBCs, with a half dozen power supplies, a half dozen network connections, etc. Selfhosters often have a ‘real’ computer running 24/7 for video transcoding or something, so hosting a bunch of minimal-use services on it doesn’t even increase the electric bill.
For me, the most interesting aspect of those SBCs was GPIO and access to raw sensor data. In the last few years, ‘smart home’ technology seems to have really exploded, to where many of the sensors I was interested in 10 years ago are now available with zigbee, bluetooth or even wifi connectivity, so you don’t need that GPIO anymore. There are still some specific control applications where, for me, Pi’s make sense, but I’m more likely to migrate towards Pi-0 than Pi-5.
SBCs were also an attractive solution for media/home theater displays, as clients for plex/jellyfin/mythtv servers, but modern smart-TVs seem mostly to have built-in clients for most of those. Personally, I’m still happy with kodi running on a pi-4 and a 15 year old dumb TV.
I have an inch-high stack of platters now. Kind of interesting to see how their thickness has changed over the years, including a color change in there somewhere. Keep thinking I should bury them in epoxy on some table top.
For extra fun, you can melt the casings and cast interesting shapes. I only wish I were smart enough to repurpose the spindle motors.
Traditionally, RAID-0 “stripes” data across two (or more) disks, writing part of the data to each, trying to get a multiple of the I/O speed out of disks that are much slower than the data bus. This also has the effect of looking like one disk with the combined size of the physical disks, but if any disk fails, you lose the whole array. RAID-1 “mirrors” data across multiple identical disks, writing exactly the same data to all of them, which again improves read performance but provides redundancy instead of size. RAID-5 is like an extension of RAID-0, or a combination of -0 and -1, striping data across multiple disks along with ‘parity’ information for error correction. It requires (n) identical-sized disks but gives you the storage capacity of (n-1), and allows you to rebuild the array if any one disk fails. Any of these look to the filesystem like a single disk.
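If it helps to see those shapes concretely, here’s roughly what they look like with Linux’s mdadm (device names are made up):

    # RAID-0: stripe two disks into one fast but fragile volume
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

    # RAID-1: mirror two disks; either one can die without data loss
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

    # RAID-5: three disks, capacity of two, survives any single failure
    mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdf /dev/sdg /dev/sdh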
As @ahto@feddit.de says, none of those matter for TrueNAS. Technically, TrueNAS treats the drives as “JBOD” - just a bunch of disks - and uses ZFS to combine all those separate disks into one logical structure. From the user perspective, these all look exactly the same, but ZFS allows for much more complicated distributions of data and more diverse sizes of physical disks.
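The ZFS equivalents, roughly (pool and device names are placeholders, and these are alternatives, not one command):

    # mirrored pool (RAID-1-like)
    zpool create tank mirror /dev/sdb /dev/sdc

    # raidz pool (RAID-5-like: n disks, capacity of n-1, survives one failure)
    zpool create tank raidz /dev/sdd /dev/sde /dev/sdf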
You might be surprised how much attention family will put into your media, especially any pictures, movies, or audio that you created, when you’re gone. It’s a way to commune with their memory of you. My family still regularly trots out boxes of physical photographs of grandparents’ grandparents & homes no one has visited in 70 years.
+1 mythtv. It distributes OTA TV to kodi all over the house.
Not the person you replied to, but the only thing on your list with real processing requirements is Jellyfin, if you do transcoding. My pihole uses like 0.3 CPU on a pi4, HA 0.1, zwave2mqtt less than that. You’re more likely to run into bandwidth issues with sonarr/radarr/dropbox, because pi’s just can’t push data to disks very fast, but if you’re doing downloads in the background, maybe that’s not a big deal.
Others have explained the line.
Worth noting that not all implementations of head accept negative line counts (i.e. last n lines), and you might substitute tail.
i.e.: ls -1 /backup/*.dump | tail -2 | xargs rm -f
I have a live local backup to guard against hardware/system failure. I figure the only reason I’d have to go to the off-site backup is destruction of my home, and if that ever happens then recreating a couple of months worth of critical data will not be an undue burden.
If I had work or consulting product on my home systems, I’d probably keep a cloud backup by daily rsync, but I’m not going to spend the bandwidth to remote backup the whole system off site. It’s bad enough bringing down a few tens of gigabytes - sending up several terabytes, even in the background, just isn’t practical for me.
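If I did go that route, it’d be nothing fancy - something like this in a nightly cron job, with the paths and remote host as placeholders:

    # mirror the work directory to the remote, removing files that no longer exist locally
    rsync -az --delete /home/me/work/ backup@remote.example.com:backups/work/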
I used to have this with homeassistant and zwavejs. Every time I’d pull a new homeassistant, the zwave integration would fail, because it required a newer version of zwavejs. Taught me to build the chain of services into one docker-compose, so they’d all update together. That’s become one of the rationales for me to use docker: got a chain of dependent processes? wrap them in a docker so you’re working with (probably) the same dependencies as the devs.
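As a sketch of what I mean - image tags, paths, and the device node are assumptions, so adjust for your own setup:

    # docker-compose.yml: keep Home Assistant and Z-Wave JS in one stack so they update together
    services:
      homeassistant:
        image: ghcr.io/home-assistant/home-assistant:stable
        network_mode: host
        volumes:
          - ./ha-config:/config
        restart: unless-stopped
      zwave-js-ui:
        image: zwavejs/zwave-js-ui:latest
        devices:
          - /dev/ttyUSB0:/dev/ttyUSB0   # Z-Wave USB stick
        volumes:
          - ./zwave-store:/usr/src/app/store
        ports:
          - "3000:3000"   # websocket the HA integration talks to
          - "8091:8091"   # web UI
        restart: unless-stopped

One docker compose pull && docker compose up -d then moves both forward at the same time, which is the whole point.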
My other rationale is just portability, and docker is just one of many solutions there. In my little home environment, where servers are either retired desktops or gee-that-seems-cool SBCs, it’s nice to be able to easily move stuff independent of architecture or OS.