Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.
Nixpkgs committer.
https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)
Do you have a media center and/or server already? It’s a bit overkill for the former but would be well suited as the latter with its dedicated GPU that your NAS might not have/you may not want to have in your NAS.
Glad I could save you some money :)
NixOS packages only work with the NixOS system. They're harder to set up than just copying a docker-compose file over, and they do use container technology.
It’s interesting how none of that is true.
Nixpkgs works on practically any Linux kernel.
Whether NixOS modules are easier to set up and maintain than unsustainably copying docker-compose files is subjective.
Neither Nixpkgs nor NixOS use container technology for their core functionality.
NixOS has the nixos-container framework to optionally run NixOS inside of containerised environments (systemd-nspawn), but that's rather niche actually. Nixpkgs does make use of bubblewrap for a small set of stubborn packages, but it's also not at all core to how it works.
Totally beside the point though; even if you don’t think NixOS is simpler, that still doesn’t mean containers are the only possible means by which you could achieve “easy” deployments.
Also without containers you don’t solve the biggest problems such as incompatible database versions between multiple services.
Ah, so you have indeed not even done the bare minimum of research into what Nix/NixOS are before you dismissed it. Nice going there.
as robust in terms of configurations
Docker compose is about the opposite of a robust configuration system.
This is a false dichotomy. Just because containers make it easy to ship software, doesn’t mean other means can’t be equally easy.
NixOS achieves a greater ease of deployment than docker-compose and the like without any containers involved for instance.
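To make that concrete, here's a minimal sketch of what a containerless, declarative deployment looks like on NixOS. The services.paperless module does exist in nixpkgs, but treat the exact lines as illustrative and check the option names against the NixOS options search before copying.

```
# Excerpt from /etc/nixos/configuration.nix (illustrative):
#
#   services.paperless.enable = true;
#
# One command then builds the package, its dependencies and the systemd
# unit and atomically switches to the new system generation:
sudo nixos-rebuild switch
```

Rolling back to the previous generation is a single nixos-rebuild switch --rollback away, which is a big part of why I'd call this robust.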
I would not buy a CPU without seeing a real-world measurement of idle total system power consumption if you’re concerned about energy (and therefore cost) efficiency in any way. That goes especially for desktop platforms, where manufacturers historically do not care one bit about efficiency. You could easily spend several hundred € extra every year if it’s bad. I was not able to find any such measurement for that specific CPU.
Be faster at transcoding video. This is primarily so I can use PhotoPrism for video clips. Real-time transcoding 4K 80 Mbps video down to something streamable would be nice. Despite getting QuickSync to work on the Celeron, I can’t pull more than 20 fps unless I drop the output to like 640x480.
That shouldn’t be the case. I’d look into getting this fixed properly before spending a ton of money on new hardware that you may not actually need. It smells to me like the encode or decode step isn’t actually being done in hardware here.
What codec and pixel format are the source files?
How quickly can you decode them? Try running ffmpeg manually with VAAPI decode and a null sink (no encode) on the files in question; see the sketch after these questions.
What codec are you trying to transcode to? Apollo Lake can’t encode 10-bit HEVC. Try encoding a testsrc (testsrc=duration=10:size=3840x2160:rate=30) to 10-bit AVC or 8-bit HEVC.
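For reference, here's roughly what I mean by those two tests. The device path and input filename are placeholders, -f null - acts as the null sink, and I've used 8-bit HEVC for the encode test since that's what Apollo Lake's encoder is most likely to accept.

```
# 1) Decode-only test: decode the file with VAAPI and discard the frames.
#    The speed=/fps= numbers ffmpeg reports tell you what decode alone can do.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -i input.mkv -f null -

# 2) Encode-only test: generate a 4K test pattern in software and feed it
#    to the VAAPI HEVC encoder (8 bit). The reported speed tells you what
#    the encoder alone can do, independent of your source files.
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va \
  -f lavfi -i testsrc=duration=10:size=3840x2160:rate=30 \
  -vf format=nv12,hwupload -c:v hevc_vaapi -f null -
```

If one of the two is dramatically slower than the other, you've found which half of the pipeline isn't actually running in hardware.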
Have you considered using Oracle’s free VPS tier? Should be more than powerful enough to host a read-only Lemmy instance.
It’s not ideal but if you’re short on money, it’s better than having your online data rot.
Depends on how many other users are using the same proxy. If you host piped for yourself using your home internet connection, Google will absolutely know who is watching the video.
Yes, it’s called email. Run git send-email as Linus intended.
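For anyone who hasn't used it: a minimal sketch, assuming the list address is a placeholder and sendemail.* (SMTP server, credentials) is already configured in your git config.

```
# Mail the most recent commit as a patch to a (placeholder) list address.
# git send-email generates the patch via format-patch and mails the result.
git send-email --to="devel-list@example.org" HEAD^
```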
Paperless for keeping track of the text written on dead trees which companies keep sending me instead of email.
Actualbudget for keeping track of where the money goes.
USB is not really a reliable connector for storage purposes. I’d highly recommend against it.
A modem is a sort of “adapter” between physical mediums and protocols and sometimes also a router. It speaks DSL, fibre, cable etc. on one end and Ethernet on the other.
A wireless access point is similar in that it is also an “adapter” between mediums, but between physical and wireless. It effectively connects wireless devices to your physical Ethernet network (allowing communication in both directions) and never does any routing.
What you are typically provided by an ISP is an all-in-one box that contains modem, router, switch, firewall, wireless access point, DHCP server, DNS resolver and more in one device. For a home network, I wouldn’t want most of these to be separate devices either, but at least wireless should be separate because the point of connection for the modem is likely not the location where you need the WiFi signal the most.
You’re looking for a wireless access point then, not a modem.
Nothing I host is internet-accessible. Everything is accessible to me via Tailscale though.
My setup already goes quite a bit beyond basic file hosting.
There is no self-hosted service I can imagine needing that I’d expect to be unable to host due to CPU constraints. I think I’ll run into RAM constraints first; it’s already at 3 GiB after boot.
That’s impressive.
Yeah, you really don’t need a lot of CPU power for selfhosting.
It’s a J4105, forgot to mention that.
What do you use the system for? And services like PiHole or a media server?
Oh, sorry, forgot to add that bit.
It’s mainly a NAS housing my git-annex repos that I access via SSH.
I also host a few HTTP services on it:
The services I use most here are Paperless and Piped.
Mealie will be added to that list as soon as the upstream PR lands which might be later this evening.
My Immich module is almost ready to go but the Immich app has a major bug preventing me from using it properly, so that’s on hold for now.
I do want to set up Jellyfin in the not too distant future. The machine should handle that just fine with its iGPU as Intel’s QuickSync is quite good, and I probably won’t even need transcoding in most cases anyway (see the quick capability check sketched below).
I probably won’t be able to avoid setting up Nextcloud for much longer. I haven’t looked into it much but I already know it’s a beast. What I primarily want from it is calendar and contact synchronisation but I’d also like to have the ability to share files or documents with mere mortals such as my SO or family.
The NixOS module hopefully abstracts away most of the complexity here but still…
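As for the Jellyfin/QuickSync check mentioned above, a quick sketch. It assumes libva-utils is installed; the tool just queries the iGPU's VAAPI driver.

```
# Lists the VAAPI profiles and entrypoints the iGPU exposes, i.e. which
# codecs it can decode (VLD) and encode (EncSlice) in hardware.
vainfo
```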
I for one am still waiting for paperless-ngnxn2-next-3.0_hypr.
I use an Intel SBC with a 10 W TDP CPU in it. With an HDD and after PSU inefficiency, it draws about 10-20 W depending on the load.
Correct. That’s the currently maintained paperless project.
Infiltrate a movie studio I guess?
On a more serious note: There are some theoretical use-cases for this in a home lab setting if you “enhance” your video in some way server-side and want to send it to a client without loss.
What I had intended with the original question was to figure out what OP was actually doing.
The operating system is explicitly not virtualised with containers.
What you’ve described is closer to paravirtualisation where it’s still a separate operating system in the guest but the hardware doesn’t pretend to be physical anymore and is explicitly a software interface.