
  • I have a bunch of these myself and that matches my experience, but I don’t have any screenshots right now.

    However, there’s a great comparison of these thin clients if you don’t mind Polish: https://www.youtube.com/watch?v=DLRplLPdd3Q

    Just the relevant screens to save you some time:

    [screenshot: power usage]

    [screenshot: Cinebench multi core]

    The idle power usage is within 2 W of a Pi 4, and the performance is about double that of an overclocked Pi 4. It’s really quite a viable alternative unless you need a really small device. The only alternative size-wise is the slightly bigger WYSE 3040, but that one has an x5-Z8350 CPU, which sits somewhere between a Pi 3B+ and a Pi 4 performance-wise. It is also very low power, though, and if you don’t need that much CPU it is also a very viable replacement. (These can easily be bought for about €60 on eBay, or cheaper if you shop around.)

    Also, each watt of extra idle power works out to about 9 kWh extra consumed per year. Even if you paid 50c/kWh (which would be more than I’ve ever seen), that’s about €4.50 per year extra. So I wouldn’t lose any sleep over 2 W more or less. Prices here are high, and 9 kWh/y is still a rounding error.
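
    In runnable form, the arithmetic looks like this (a quick sketch; the 50c/kWh price is the same deliberately pessimistic assumption as above):

    ```python
    # Yearly cost of extra idle power draw.
    hours_per_year = 24 * 365   # 8760 h
    price_per_kwh = 0.50        # EUR/kWh, deliberately pessimistic

    for extra_watts in (1, 2):
        kwh_per_year = extra_watts * hours_per_year / 1000
        cost = kwh_per_year * price_per_kwh
        print(f"{extra_watts} W extra idle = {kwh_per_year:.1f} kWh/year "
              f"= {cost:.2f} EUR/year")

    # 1 W extra idle = 8.8 kWh/year = 4.38 EUR/year
    # 2 W extra idle = 17.5 kWh/year = 8.76 EUR/year
    ```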


  • In Tailscale you can set up an exit node, which lets you access the entire internet via that node’s internet connection.

    You could set up an exit node that lets you access the internet via some anonymizing VPN provider like Mullvad, or any other.

    This sounds like Tailscale is simply setting up such an exit node for Mullvad on their side and providing it as a service. So it’s not that using other VPN anonymizers is impossible; it’s just convenient to use Mullvad.
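
    A minimal sketch of the moving parts, driven from Python purely for illustration (the real interface is the tailscale CLI; “my-exit-node” is a placeholder machine name, and on recent versions Mullvad servers show up in the exit node list once the add-on is enabled on the tailnet):

    ```python
    import subprocess

    def tailscale(*args: str) -> None:
        # Thin wrapper around the tailscale CLI.
        subprocess.run(["tailscale", *args], check=True)

    # On the machine that should route traffic for others
    # (it still has to be approved in the tailnet admin console):
    tailscale("up", "--advertise-exit-node")

    # On a client: see what exit nodes are available, then send all
    # internet traffic through one of them. With the Mullvad add-on,
    # Mullvad servers appear in this list too.
    tailscale("exit-node", "list")
    tailscale("up", "--exit-node=my-exit-node")  # placeholder name
    ```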


  • RAID is not backup. RAID is used for increased capacity, throughput, or uptime (depending on configuration).

    Multiple volumes would likely get corrupted by faulty RAM just as much as RAID would. Besides RAM, there’s the controller, CPU, power supply, and possibly more single points of failure in that NAS that would destroy RAID and multiple volumes alike.

    So assuming you have an external backup, I’d go with RAID for the better uptime, as opposed to some custom multi-volume pseudo-RAID serving the same purpose.


  • If it’s really early 2000s, you might want to put it on eBay. There are retro gamers out there who could use it as a good Windows 9x-era gaming PC. You could give that HW a new life in someone’s retro setup.

    It’s great HW for occasional gaming, but it’s very inefficient for 24/7 operation. You want something from after 2015-ish for a machine that’s supposed to run constantly.


  • > The author is upset that btrfs RAID arrays don’t function as he anticipated. However, btrfs isn’t ZFS or mdadm; it’s its own system and should be understood as such.

    I’d say it’s quite a reasonable critique, because RAID1 is kind of an industry standard. I can’t think of any other RAID (HW or SW) that does RAID1 this way. If btrfs decided to call their implementation raid1 while it really isn’t raid1 in some major way, that was a very bad idea. I don’t agree it’s a documentation issue; it’s really a bad name choice. ZFS has raidz, which does something similar to btrfs raid1, and that name does not lead to confusion. A RAID1 system should never lead to decreased reliability with an increasing number of drives.
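
    To make that last point concrete, here’s a back-of-the-envelope sketch (my own illustration, not from the article). Assume each drive independently fails within some window with probability p, that a full btrfs raid1 array loses some data once any two drives fail (each extent has exactly two copies, spread across pairs of drives), and that a classic N-way RAID1 mirror only loses data when all N drives fail:

    ```python
    # Probability of data loss within a failure window, assuming an
    # independent per-drive failure probability p.
    def loss_btrfs_raid1(n: int, p: float) -> float:
        # btrfs raid1 keeps exactly 2 copies of each extent, so on a
        # full array any 2 failed drives out of n lose some data:
        # P(>=2 failures) = 1 - P(0 fail) - P(exactly 1 fails)
        return 1 - (1 - p) ** n - n * p * (1 - p) ** (n - 1)

    def loss_nway_mirror(n: int, p: float) -> float:
        # A classic n-way mirror only loses data if all n drives fail.
        return p ** n

    p = 0.03  # assumed per-drive failure probability in the window
    for n in (2, 4, 8):
        print(f"{n} drives: btrfs raid1 {loss_btrfs_raid1(n, p):.4%}, "
              f"n-way mirror {loss_nway_mirror(n, p):.6%}")

    # 2 drives: btrfs raid1 0.0900%, n-way mirror 0.090000%
    # 4 drives: btrfs raid1 0.5186%, n-way mirror 0.000081%
    # 8 drives: btrfs raid1 2.2341%, n-way mirror 0.000000%
    ```

    More drives make btrfs raid1 more likely to lose data, while a true mirror only gets safer.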

    > The author points out that btrfs won’t auto-mount an array if a drive fails, while ZFS will. This is actually a protective measure. By not auto-mounting, it minimizes the risk of further drive failures, prioritizing data preservation.

    RAID is an uptime-preserving mechanism. Anyone using RAID for data preservation purposes is setting themselves up for a nasty surprise. A RAID system that does not mount in a reduced-redundancy situation is very bad design: it effectively sacrifices the usability of RAID to serve another purpose that a RAID system isn’t really needed for, nor should be used for.

    > He attempts ZFS recovery methods on btrfs and is surprised when they don’t work.

    I felt that way as well, but I think they raised one important point: there was no indication that the array was still in a reduced-redundancy state after their “attempt at recovery”. ZFS is very clear about the state of the array at every step. The same goes for other RAID systems, including some HW-based ones; every single one I’ve used was very clear about the fact that the array wasn’t fully redundant.

    > In summary, the article’s author seems primarily upset that btrfs isn’t a ZFS clone.

    FWIW I didn’t have that impression. I have experience with multiple RAID controllers and multiple SW RAID systems, and his points would be valid with any of those.

    Anyway, thank you for your reply. It’s not the answer I was hoping for, and I don’t agree with your views on some of these issues, but it gives me a pretty good idea of the current state of the filesystem.



  • I’d say it’s more about elasticity. Scaling is just a very narrow aspect of elasticity.

    To give you a specific example: there’s a company (that I won’t name) that by law has to keep all data on premises. They have a local cloud in their own datacentre. Part of that cloud is a set of powerful servers with a ton of GPUs. During the day they spin up VMs that employees can log into, giving them remote desktops for graphically intensive tasks.

    Now you might be thinking: “wait a second, they can’t easily add GPUs in the morning as employees log in, so there is no scaling and thus no cloud!” And by that definition you’d be right. But what they do with their cloud is this: as the demand for VDI drops in the evening, they start allocating the GPU and CPU resources to a completely different kind of VM that does overnight data crunching (think geospatial data). It’s a completely different OS, the servers sit in a server subnet rather than the VDI network, etc. So they are using the elasticity, but it’s not just scaling.

    Another counterexample is a pretty frequent issue on AWS, where they momentarily run out of a specific instance type in a specific region. AWS support “will do their best”, but you’re often looking at hours of wait time before you get your instance. Depending on where you live, you could buy a server and deploy it in your own DC faster than that. Has AWS stopped being a cloud provider? No: you can use the elasticity and either spawn a different instance type (if your workload allows that) or spawn it in a different region/AZ. You might have been just trying to replace one instance with another, not even trying to scale up; the capacity for the replacement simply wasn’t there.
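
    To illustrate that last scenario (a hedged sketch, not a recommendation; the AMI ID and instance types are placeholders, and InsufficientInstanceCapacity is the error code EC2 returns in this situation), a fallback across types and AZs with boto3 might look like:

    ```python
    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    # Fall back to a different instance type or AZ instead of waiting
    # hours for capacity. Placeholder values throughout.
    candidates = [
        ("m5.2xlarge", "eu-central-1a"),   # what we actually want
        ("m5a.2xlarge", "eu-central-1a"),  # same AZ, sibling family
        ("m5.2xlarge", "eu-central-1b"),   # same type, different AZ
    ]

    for instance_type, az in candidates:
        try:
            ec2.run_instances(
                ImageId="ami-0123456789abcdef0",  # placeholder AMI
                InstanceType=instance_type,
                Placement={"AvailabilityZone": az},
                MinCount=1,
                MaxCount=1,
            )
            print(f"launched {instance_type} in {az}")
            break
        except ClientError as e:
            if e.response["Error"]["Code"] == "InsufficientInstanceCapacity":
                continue  # momentary capacity shortage: try next candidate
            raise
    else:
        print("no capacity in any candidate; time to wait or rethink")
    ```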


  • For some definition of cloud. You also have on-premises cloud. When Amazon runs their e-commerce site on AWS, are they running it on someone else’s computer, or not in the cloud? (Putting aside the tax-wise separation of individual Amazon subsidiaries.)

    On the other hand, there are still providers that will rent you a server in their DC without giving you any API or anything else; at best they’ll plug in HDDs that you sent them. This kind of server hosting existed before “cloud” was a thing, and it continues to exist.

    I’d say a more accurate definition of cloud would be “someone else’s computer with an API that the customer can access”. And if I’m really strict about that definition, I’d drop the entire first part, because it’s the API that matters; the computer might as well be yours.

    Source: I’ve been on both sides of cloud from the very beginning.