Do you even need reservations? You can also just set a static IP on the computer and it should be fine, ideally one outside the DHCP pool so it can’t be leased to someone else. Most DHCP servers probe an address before handing it out anyway, just in case.
Whether it has benefits is up to you, but from a technical perspective a second SSID is about as expensive as a VLAN, so basically free. It’s the same receive and transmit radio; the only difference is that it broadcasts and responds to two network names at the same time. The maximum power consumption is the same: whatever the radio pulls at full load. The minimum power consumption is ever so slightly higher since it has to broadcast two sets of beacons, but those are a few hundred bytes sent about ten times a second, negligible compared to the cost of just running the radio.
Did you install the certificates in all the appropriate locations?
No, certs like that will never be recognized by browsers by default. You need to add your CA to your browser, and also to every other applicable certificate store. Usually that’d be /usr/share/ca-certificates, or command-line flags that explicitly define the chain of trust (for example, curl --cacert), or sometimes environment variables like SSL_CERT_FILE.
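On Debian-family systems, for example, installing your root CA system-wide looks roughly like this (the filename is a placeholder; update-ca-certificates wants PEM files with a .crt extension):

```shell
# Make the CA available to everything that reads the system trust store:
sudo cp my-root-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates

# Or skip the store and point individual tools at it explicitly:
curl --cacert my-root-ca.crt https://internal.example.com/
SSL_CERT_FILE=$PWD/my-root-ca.crt python3 -c "import urllib.request as u; u.urlopen('https://internal.example.com/')"
```

Other distros keep the store elsewhere (e.g. update-ca-trust on Fedora/RHEL), but the idea is the same.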
Also, if you have an intermediate CA and clients only trust the root CA, the intermediate certificate needs to be bundled with the server’s certificate so the browser can trace the chain of trust all the way to something it already trusts (i.e. your root CA).
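As a toy, self-contained demo of what that bundling looks like, this creates a throwaway root, intermediate, and server cert with openssl, then builds and verifies the chain (all names are made up):

```shell
# Throwaway root CA (self-signed):
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root-ca.crt \
  -subj "/CN=My Root CA" -days 30
# Intermediate CA, signed by the root (must be marked as a CA):
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=My Intermediate CA"
printf "basicConstraints=critical,CA:TRUE\n" > ca-ext.cnf
openssl x509 -req -in int.csr -CA root-ca.crt -CAkey root.key -CAcreateserial \
  -extfile ca-ext.cnf -out intermediate.crt -days 30
# Server cert, signed by the intermediate:
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=server.home.arpa"
openssl x509 -req -in server.csr -CA intermediate.crt -CAkey int.key \
  -CAcreateserial -out server.crt -days 30
# The bundle the server should present: leaf first, then intermediate(s):
cat server.crt intermediate.crt > fullchain.pem
# A client that trusts only root-ca.crt can now walk server -> intermediate -> root:
openssl verify -CAfile root-ca.crt -untrusted intermediate.crt server.crt
```

The last command prints `server.crt: OK` when the chain is complete; without the intermediate in the bundle it fails with "unable to get local issuer certificate", which is exactly the browser error you’d see.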
That’s kind of a rabbit hole on its own since it varies from software to software how it’s done, and also OS to OS. On Mac for example, that’s managed through Keychain.
If you want something a bit more managed, HashiCorp’s Vault can do CAs and is very automation-friendly.
I used this guide a few times and it’s pretty well made and general, doesn’t focus on just one task or end goal, just lets you set up a proper CA with intermediates and all: https://jamielinux.com/docs/openssl-certificate-authority/
My main concern would be security I suppose if I’m hosting a web server on the same computer I store all my family backups and stuff. Would using virtual machines solve that?
Mostly, yeah. Even VMs aren’t perfect isolation, but they’re what AWS, Google and the rest use to separate cloud customers from each other, so they’re good enough.
I have an old laptop running Linux to play around with and a fast and stable home internet connection.
That’s pretty much all you need!
When I started self hosting things, I literally ran the thing off my one and only laptop. Young me was getting into web development and I was fed up with the available free hosting options, so I was like, I’ve already got Apache running for development, I’ll just open the port and point a domain at it. My friends would check if I’m online by checking if my website loads. Sometimes I had to turn it off because I wanted to use my computer and they kept hogging my connection.
Your old laptop will run NextCloud and Samba/NFS just fine even if it’s a Core 2 Duo. Sure there’s Plex/Jellyfin and they require a lot more power for live transcoding and stuff, but to start off, you can just play your stuff over a simple network share.
Then when you’re happy or want to expand, you’ll have a better idea of what kind of hardware you want. I ran my NAS off a Raspberry Pi 2B for several years, but ultimately always wanted at least one real server.
As for setup guides, I have none. But don’t let yourself get too overwhelmed: there’s so much stuff you can do with a server and just as many ways to set it up. One thing at a time: get the server set up, make sure you have SSH access to it. Then pick a thing you want to run on it, and try to figure out how to run it. Don’t get too ambitious, you don’t have to do VMs, or containers, or anything at all. Get something done, play with it, experiment with it, see what you like.
Docker containers are pretty good, they do make setting up some services pretty easy. Sometimes they also add additional complexity. It’s okay to install things directly on the host.
There are no hard rules and everyone has their preferences. When the time comes you’ll know, and you’ll find yourself looking at solutions like Proxmox or maybe some cloud servers.
It doesn’t have to be perfect from the first try. You will fuck it up a couple times, and that’s okay, that’s called experience.
Mine’s running on a VM with 2 cores/4 threads and 2GB of RAM that it shares with my Lemmy instance, and it’s been working fine. I’m running Synapse; there are more lightweight alternatives as well.
The Matrix servers don’t do all that much, it’s pretty much just plumbing data streams and storing data. You need enough disk to store all the messages and reactions and enough bandwidth to sync the rooms, and that’s about it. Most of the encryption is client-side for proper E2EE, so it just moves data around.
You can layer them however you want, so you can slap luks on the physical drives, or on the mdraid, or on the individual LVM volumes as you do right now. If the entire setup is either locked or unlocked as a whole, luks between the raid and the LVM PV makes sense. Having luks on the individual LVs has the advantage that you can have your data partially unlocked.
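For the “luks between the raid and LVM PV” layout, the stacking looks something like this (device names are examples, and luksFormat destroys whatever is on the array):

```shell
# LUKS on top of the existing md RAID array:
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptraid

# LVM on top of the unlocked LUKS mapping:
pvcreate /dev/mapper/cryptraid
vgcreate vg0 /dev/mapper/cryptraid
lvcreate -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data
```

One unlock then exposes every LV in the VG, which is exactly the all-or-nothing trade-off described above.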
2FA is complicated. You can require a second factor, like needing both a password and possession of a flash drive, but you can’t do it with standard TOTP codes, because you’d need the key already available just to validate them in the first place.
One thing you can explore is the TPM: the computer can detect whether it’s been tampered with, and if everything checks out, it unwraps the key. You can add a password or a flash drive as a second factor on top. There’s also the whole smartcard rabbit hole.
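On a distro with a recent systemd, the TPM route is mostly one command per LUKS device. A sketch, where /dev/sda2 stands in for your LUKS2 partition:

```shell
# Seal a LUKS key in the TPM, bound to PCR 7 (Secure Boot state), and also
# require a PIN so the TPM alone isn't enough (needs systemd 251+ for --tpm2-with-pin):
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 --tpm2-with-pin=yes /dev/sda2

# Then tell the boot process to try the TPM; in /etc/crypttab:
#   cryptroot  /dev/sda2  none  tpm2-device=auto
```

If the firmware or boot chain changes, the PCR values no longer match, the TPM refuses to unseal, and you fall back to the regular passphrase.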
What exactly are you unsatisfied with? I think that’s a better starting point to advise on.
It wants you to put in whatever nameservers you’ll be using. It’s pretty nice: it even offers you glue records in case you want to self-host your DNS too!
Most domain registrars also offer DNS service and even default to using theirs, so it’s often assumed the two come together. It seems like eu.org doesn’t, so you have to provide your own. That could be Cloudflare or any number of DNS providers out there.
Most of those DNS providers will give you two nameservers to input there. The minimum is 2; some hand out 4 or even 8, but that’s rarer. Just put them in the first two fields and leave everything else blank.
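Once the registry has published the delegation, you can check it took effect (the domain here is a placeholder for yours):

```shell
# Ask the DNS for the NS records of your domain; once the delegation is live,
# this should print the nameservers you entered in the form:
dig NS example.eu.org +short
```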
Any reason the VPN can’t stay as-is? Unless you don’t want it on the unraid box at all anymore. But going to unraid over VPN then out the rest of the network from there is a perfectly valid use case.
All those do is essentially call the Cloudflare API, and they’ll all work reasonably well. The linked Docker image, for example, does the bulk of it in this bash script, which they call from cron, plus some other container init logic that I imagine does the initial update when the container starts.
Pick whatever is easiest and makes the most sense for you. Even the archived Docker thing is so simple I wouldn’t worry about it being unmaintained, because it can reasonably be called a finished product. It’ll work until Cloudflare upgrades their API and shuts down the old one, and you’d get months to years of warning about that because of enterprise customers.
Personally, that’s a trivial enough task that I’d probably just write a custom Python script to call their API; they even have a Python library for it. Probably 50-100 lines tops. I have my own DNS server, and my DDNS “server” is a 25-line PHP script; the client is a curl command in a cronjob.
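To show how little there is to it, here’s roughly what such a cron client looks like against Cloudflare’s v4 API. This is a sketch, not anyone’s actual script: the IDs and token are placeholders from the Cloudflare dashboard, and the IP-echo service is just one of many.

```shell
#!/bin/sh
# Hypothetical DDNS client: update one A record via the Cloudflare v4 API.
CF_TOKEN="changeme"    # API token with DNS edit permission
ZONE_ID="changeme"
RECORD_ID="changeme"

# Ask an external service what our public IP currently is:
IP=$(curl -fsS https://api.ipify.org)

# Overwrite the A record with the current IP:
curl -fsS -X PUT \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${CF_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${IP}\",\"ttl\":300}"
```

Drop it in cron every few minutes (e.g. `*/5 * * * * /usr/local/bin/ddns.sh`) and that’s the whole product.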
DDNS is a long solved and done problem. All the development is essentially just adding new providers.
Yeah, I almost talked about anycast IPs but it just added unnecessary complexity.
OP’s question is a bit weird but it sounds like they want to connect to a VPN server and then that server uses the client’s IP instead of its own for outbound traffic, like some sort of forwarding?
For all I know OP may be asking for a bridged VPN and it really just means to forward the remote client as if it’s on the local network.
But the way it’s worded, the same IP would be used to both talk to the server and by the server itself going outbound. It’s possible on a local network with iptables hacks but why would you even want to do this?
That’s not possible. From any given point there’s effectively one route to an IP. The same IP can lead to different machines depending on where the request originates, and you more or less can’t choose which: your ISP and their upstream ISPs decide, and it’s usually the shortest or cheapest route. The Internet is stateless; it just moves packets around, and each hop makes an independent decision about where to send a packet next.
So your VPN server can try spoofing its outbound traffic to use the client’s IP, but that will most likely get discarded by the ISP, which typically only lets your own IP out (egress filtering). And even if it gets through, the replies to those packets go to the client’s IP, which routes directly to the client and not to the VPN. The other end doesn’t know where a packet really originated; it just has a number, sends the answer back into the Internet, and the Internet figures out where that goes.
And if you can properly port the IP to your server, then the client can no longer use that IP because anything directed at it will end up at the server.
It’s theoretically possible to pull off with some clever iptables rules but both ends need to be configured for it so it’ll never leave your private network. In which case, it’s just not worth the hassle to avoid making a new subnet.
I don’t think most thieves care much about the data on the computer in the first place. Steal hardware, fresh install of Windows on it and straight to the pawn shop.
Given the answers so far, I’d suggest getting a cheap VPS. It’ll cost you something like $5/mo, but its IP will never change, and you can build its reputation up over time, whereas residential IPs are pretty much all blacklisted everywhere since 99% of the email coming from them is sent by malware.
Any cheap VPS can handle email just fine on its own, but you can also treat it as just the entry and exit point of a VPN. That way you can technically host your mail locally at home; it just goes through that VPS first before reaching your server, and the same for outgoing mail.
A potential problem on the incoming side as well: if whoever gets your old IP happens to run an SMTP server, they may accept the delivery, and your mail may land in someone else’s catch-all handler. So it’s not just delivery problems, delays, and lost mail; it can also get successfully delivered to a completely unknown third party.
With that, the best you can do is glue the 1 and 2 TB drives together into a 3TB device and mirror it against an equal-size partition on the 10TB HDD. Those 3TB get full redundancy, but not the remaining 7TB.
But you can at least have 3TB redundant and 7TB of riskier storage. The risky part can hold things you can recover another way: game libraries, movie libraries, maybe backups of the 3TB, since RAID doesn’t protect against accidental deletions and modifications anyway.
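One way to sketch that “glue” with mdadm (device names are placeholders, these commands wipe the drives, and LVM can do the concatenation step just as well):

```shell
# 1. Concatenate the 1TB (sdb) and 2TB (sdc) drives into one ~3TB linear array:
mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb /dev/sdc

# 2. Carve a ~3TB partition (sdd1) out of the 10TB drive, leaving the rest
#    (sdd2) as the non-redundant ~7TB, then mirror the linear array against it:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/md0 /dev/sdd1

mkfs.ext4 /dev/md1   # the redundant 3TB
mkfs.ext4 /dev/sdd2  # the risky 7TB
```

Note that losing either small drive takes down the whole linear half, which is fine here since the mirror on sdd1 covers it.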
If that’s not possible or desirable for whatever reason, like not having other drives to back the data up to, there are more options: if the filesystem is ext4 there’s fscrypt, and you can just move the files into the encrypted folder; otherwise there’s gocryptfs. In both cases you only need enough free space for a temporary copy of the biggest file.
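The gocryptfs flow, sketched with example paths:

```shell
mkdir cipher plain
gocryptfs -init cipher        # asks for a password, writes cipher/gocryptfs.conf
gocryptfs cipher plain        # mount: writes to ./plain land encrypted in ./cipher
mv /data/big-file.mkv plain/  # move files over one at a time
fusermount -u plain           # unmount when done; only ./cipher stays on disk
```

Each move only needs scratch space for that one file, which is why this works on a nearly full disk.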
HomeAssistant?