One neat aspect: under the admin options you can hide whole sections of the menu you don’t need, which makes things a lot less cluttered.
https://www.tubearchivist.com/
I’ve liked this one. Lets you subscribe to channels/playlists and download en masse if you’re inclined.
Self-hosted version of https://homechart.app/
Not FOSS (though I believe it’s source-available), but it has the option of a lifetime license rather than a subscription. The dev is readily available and helpful too.
https://www.tenable.com/products/nessus/nessus-essentials
https://www.rapid7.com/blog/post/2012/09/19/using-nexpose-at-home-scanning-reports/
Both Nessus and Nexpose are typically enterprise-class systems, but they have community licensing available for home labs. Nessus can even be set up in a Docker container. OpenVAS is more or less free and can be upgraded with pro feeds, but the last time I tried it, it was a bit rougher to use.
Do be aware, though, that a full-force scan will use a lot of CPU and can break things depending on the settings, so it’s good to practice with their settings on some non-critical systems first to get a feel for them.
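If you want to try the container route, here’s a minimal sketch. The image/tag is the one I recall Tenable publishing on Docker Hub, so double-check their docs; the container name and host port mapping are just defaults:

```
# Run Nessus in a container and expose its web UI on the default port 8834
docker run -d --name nessus -p 8834:8834 tenable/nessus:latest-ubuntu

# Then browse to https://<host>:8834 and activate it with an Essentials license
```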
Limiting the attack surface is a big part: geo restrictions, reputation lists, brute-force mitigation, it all plays a role. Running a vulnerability scanner against your own stuff is important to catch things before others do, and regular patching matters too. It can be a rewarding challenge.
As a general rule, if it’s a public-ish service like Lemmy (more friends-and-family than truly public) or something where I want ready access, like auto uploads, it gets public access; otherwise it’s private. I make it a point to have everything facing outside use 2FA and/or limit the allowed sources to known IP ranges.
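For the “limit to known IP ranges” bit, one way that can look if the front door is HAProxy (purely an illustrative sketch; the subnets, cert path, and backend name are made up, and the same idea works as plain firewall rules instead):

```
frontend restricted_in
    bind :443 ssl crt /etc/haproxy/your.domain.pem
    # only accept requests from source ranges I already trust (example subnets)
    acl trusted_src src 203.0.113.0/24 198.51.100.0/24
    http-request deny unless trusted_src
    default_backend internal_apps

backend internal_apps
    server app1 192.168.1.30:8080
```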
It works so long as you’re not trying to create separate networks. When/if you decide to start with some VLAN madness and such, the AP likely won’t work for that unless it’s fancy and can do multiple SSIDs on separate VLANs, but most WiFi/router combos don’t go that far.
Basically the new firewall/router box becomes the boss of everything, doing DHCP, likely DNS relaying, and all the monitoring. Simple and efficient; I just wouldn’t go hosting public services with that setup since there’s no ‘DMZ’ to keep them separate from your personal devices.
If I’m picturing the gear right, putting the TP into AP mode would just make it a client of the network that then serves as your WiFi, and the new box could be set up as the router/gateway for both the TP and the other clients formerly plugged into it.
Usually, changing the mode from router to AP keeps the LAN side active as an unmanaged switch, and may even add the WAN port to it. So if all of the above holds true, go modem → Celeron box (OPNsense) → TP (LAN to LAN), and then plug the remaining Ethernet devices either into the TP or into the other LAN ports on the Celeron box; both should end up on the same local network.
The disk sizes also don’t have to match. Creating a drive array for ZFS is a two-phase thing:

1. Create a series of ‘vdevs’, which can be single disks or mirrored pairs.
2. Combine the vdevs into a ‘zpool’ regardless of their sizes and it all becomes one big pool. It acts somewhere between RAID and disk spanning: it reads and writes to all of them, but once any given vdev is full it just stops going there. I currently have vdevs of 12, 8, 6, and three 4 TB for a total of 38 TB of space, minus formatting loss.
That’s an example of how I have it laid out; it’d be ideal to have them all the same size to balance it better, but it’s not required.
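A rough sketch of those two phases as commands (device names and pairings are placeholders, not my actual layout):

```
# Each "mirror X Y" is one vdev; zpool create stripes them all into a single pool
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# Grow the pool later by adding another vdev of whatever size is on hand
zpool add tank mirror /dev/sde /dev/sdf

# Show each vdev and how full it is
zpool list -v tank
```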
No, currently Univention Corporate Server (UCS), but I’ll give those a look since I’ve been eyeing a replacement for a while due to some long-standing vulns that I’m keen to be rid of.
https://xigmanas.com/xnaswp/download/
For a pure NAS purpose this is my go-to. It serves drives, supports multiple file systems, and has a few extras like a basic web server and rsync built into a nice embedded system. The OS can run from a USB stick and manages the data drives separately.
On the ZFS front, a common misconception is that it eats a ton of RAM. What it actually does is use idle RAM for the ‘ARC’, which caches the most frequently and/or most recently used data to avoid pulling it from disk. That RAM gets dumped and made available to the system on demand if, for whatever reason, the OS needs it. Idle RAM is wasted RAM, so it’s a nice thing to have available.
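If you want to watch or cap that behaviour yourself, on Linux OpenZFS it looks something like the below (the 8 GiB cap is just an example figure; FreeBSD-based systems like XigmaNAS have an equivalent loader tunable instead):

```
# Current ARC size and hit rates
arc_summary | head -n 40        # or: cat /proc/spl/kstat/zfs/arcstats

# Optionally cap the ARC, e.g. at 8 GiB, if you'd rather keep RAM free for other things
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
# Takes effect on module reload; for a live change, write the same value to
# /sys/module/zfs/parameters/zfs_arc_max
```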
The other options, like making the containers dependent on the mounts or similar, are really all better, but a simple enough one is to use SMB/CIFS rather than NFS. It’s a lot more transactional in design, so a share that vanishes for a bit will just come back once the drive is available again. It’s also a fair bit heavier on overhead, though.
Using NFSv4 seems to work in a similar fashion without the overhead, though I haven’t dug into the exact back-and-forth of the protocol to know how it differs from v3 to accomplish that.
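For what it’s worth, the difference mostly comes down to how the share is mounted on the container host; a sketch with placeholder hostnames, shares, and mount points:

```
# /etc/fstab entries on the container host (nas.local, share names and paths are placeholders)

# SMB/CIFS: transactional, tends to pick back up quietly once the NAS returns
//nas.local/media        /mnt/media      cifs  credentials=/root/.smbcreds,vers=3.0,_netdev,nofail  0  0

# NFSv4: leaner on overhead and seems to tolerate the share dropping out similarly
nas.local:/export/media  /mnt/media-nfs  nfs4  _netdev,nofail  0  0
```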
It has all the needed parts plus an interesting plug-in app ecosystem if you like that kind of thing. My only real gripe with it is a pile of high-severity vulnerabilities flagged by a scanning engine that haven’t been fixed for a long time, so I’m reluctant to recommend it unless you have a solid security/segmentation setup in place.
Not AD proper, but a Linux distro with a compatible domain controller to tie the desktops to, plus common credentials across several services. It just simplifies things not having a dozen different logins.
Currently an R730XD, but it has been run on plenty of other things, down to a 1.3 GHz/4 GB IPX box at the beginning. It’s pretty stripped down to run as an embedded system rather than a full server OS.
I’ve used this on a variety of boxes since it was called FreeNAS, back before a fork long ago. A NAS doesn’t need a whole lot of power in itself if the job is just to store and serve disk space. My current setup is a full 2U rack server with 14 drives (12 spinning, 2 SSD) and it averages 169 watts. If you do the transcoding on whatever box is actually accessing the data, it saves on the need for extra compute on the NAS.
If you have OPNsense in front of it all, using a DDNS client to register the public IP would be step one, then using HAProxy as an inbound reverse proxy rather than port forwarding the traffic. That way you could have ‘owncloud.your.domain’ and ‘otherservice.your.domain’ hosted on the same IP using 80/443 rather than having to forward random ports in.
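Roughly what that host-based routing looks like under the hood (on OPNsense you’d normally click this together in the HAProxy plugin; the hostnames, backend IPs, and cert path below are placeholders):

```
frontend https_in
    bind :443 ssl crt /etc/haproxy/your.domain.pem
    # route by hostname instead of forwarding a pile of random ports
    acl host_owncloud     hdr(host) -i owncloud.your.domain
    acl host_otherservice hdr(host) -i otherservice.your.domain
    use_backend owncloud_backend     if host_owncloud
    use_backend otherservice_backend if host_otherservice

backend owncloud_backend
    server owncloud 192.168.1.20:8080

backend otherservice_backend
    server otherservice 192.168.1.21:3000
```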
The right way is the way that works best for your own use case. I like a 3-box setup: firewall, hypervisor, NAS, with a switch in between. It lets you set up VLANs to your heart’s content, manage flows from an external point (virtual firewalls are fine, but if it’s the authoritative DNS/DHCP for your network it gets a bit chicken-and-egg when it’s inside a VM host), and store the actual data like vids/pics/docs on the NAS, which has just that one job of storing files; less chance of borking it up that way.
It serves as a nice aggregate hub for a lot of household tasks, although I mostly use it for the lists and recipes at this time.
I’m pretty sure it would still work, but images for media content would have broken links. I’m not sure what the refresh policy is for remote media, but local stuff could be re-uploaded, and it should be able to retain things like usernames and comment data even without the pictures.