• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: July 30th, 2023

  • In the IT world, we just call that a server. The usual golden rule for backups is 3-2-1:

    • 3 copies of the data total, of which
    • 2 are backups (not the primary access), and
    • 1 of the backups is off-site.

    So, if the data is only server side, it’s just data. If the data is only client side, it’s just data. But if the data is fully replicated on both sides, now you have a backup.

    There’s a related adage regarding backups: “if there’s two copies of the data, you effectively have one. If there’s only one copy of the data, you can never guarantee it’s there”. Basically, it means you should always assume one copy somewhere will fail and you will be left with n-1 copies. In your example, if your server failed or got ransomwared, you wouldn’t have a complete dataset since the local computer doesn’t have a full replica.
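
    The 3-2-1 rule above can be written down as a trivial checklist. This is just a toy sketch of the counting logic; the copy names and tuple layout are made up for illustration:

```python
# Toy 3-2-1 checker: each copy is (name, is_backup, is_offsite).
# The example inventory below is invented for illustration.

def satisfies_3_2_1(copies):
    total = len(copies)
    backups = [c for c in copies if c[1]]          # copies that aren't the primary
    offsite = [c for c in backups if c[2]]         # backups stored off-site
    return total >= 3 and len(backups) >= 2 and len(offsite) >= 1

copies = [
    ("primary NAS share", False, False),   # the live data
    ("local backup drive", True, False),   # on-site backup
    ("cloud bucket",       True, True),    # off-site backup
]
print(satisfies_3_2_1(copies))  # True: 3 copies, 2 backups, 1 off-site
```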

    I recently had a backup drive fail on me, and all I had to do was buy a new one. No data loss; I just regenerated the backup as soon as the new drive was spun up. I’ve also had to restore entire servers that have failed. Minimal data loss since the last backup, and nothing I couldn’t rebuild.

    Edit: I’m not saying what you’re asking for is wrong or bad, I’m just saying “backup” isn’t the right word to ask about. It’ll muddy some of the answers as to what you’re really looking for.



  • I don’t have an immediate answer for you on encryption. I know most AD communication is encrypted in flight, and on disk passwords are stored hashed unless the “use reversible encryption” field is checked. There are (in Microsoft terms) gMSAs (group-managed service accounts), but other than using one for ADFS (their OAuth/federation provider), I have little knowledge of how it actually works on the inside.

    AD also provides encryption key backup services for Bitlocker (MS full-partition encryption for NTFS) and the local account manager I mentioned, LAPS. Recovering those keys requires either a global admin account or specific permission delegation. On disk, I know MS has an encryption provider that works with the TPM, but I don’t have any data about whether that system is used (or where the decryptor is located) for these accounts types with recoverable credentials.

    I did read a story recently about a cybersecurity firm working with an org: the testers had gotten all the way down to domain admin, but needed a biometric-unlocked Bitwarden to pop the final backup server and “own” the org. They indicated that native Windows encryption was in play, and they managed to break in using a now-patched vulnerability in Bitwarden, recovering the decryption key by resetting the domain admin’s password and doing some Windows magic. On my DC at home, all I know is it doesn’t need my password to reboot, so there’s credential recovery somewhere.

    Directly to your question about short term use passwords: I’m not sure there’s a way to do it out of the box in MS AD without getting into some overcomplicated process. Accounts themselves can have per-OU password expiration policies that are nanosecond accurate (I know because I once accidentally set a password policy to 365 nanoseconds instead of a year), and you can even set whole account expiry (which would prevent the user from unlocking their expired password with a changed one). Theoretically, you could design/find a system that interacts with your domain to set, impound/encrypt, and manage the account and password expiration of a given set of users, but that would likely be add-on software.
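
    Those nanosecond-accurate policies exist because AD stores time attributes like accountExpires as counts of 100-nanosecond intervals since January 1, 1601 (UTC). A small converter sketch (the example date is arbitrary):

```python
from datetime import datetime, timezone

# AD time attributes (accountExpires, pwdLastSet, etc.) count
# 100-nanosecond intervals since the Windows epoch, 1601-01-01 UTC.
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def to_filetime(dt):
    """Convert an aware datetime to AD's 100-ns-interval format (exact integer math)."""
    delta = dt - EPOCH_1601
    seconds = delta.days * 86_400 + delta.seconds
    return seconds * 10_000_000 + delta.microseconds * 10

# An arbitrary expiry date, just to show the scale of the values:
print(to_filetime(datetime(2024, 7, 1, tzinfo=timezone.utc)))
```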


    1. Yes I do - MS AD DC

    2. I don’t have a ton of users, but I have a ton of computers. AD keeps them in sync. Plus I can point services like gitea and vCenter at it for even more. Guacamole highly benefits from this arrangement since I can set the password to match the AD password, and all users on all devices subsequently auto-login, even after a password change.

    3. I used to run a single domain controller; now I have two (leftover free-forever licenses from college). I plan to upgrade them tick/tock so I’m not spending a fortune on licensing frequently.

    4. With native Windows clients and I believe sssd realmd joins, the default config is to cache the last hash you used to log in. So if you log in regularly to a server it should have an up to date cache should your DC cluster become unavailable. This feature is also used on corporate laptops that need to roam from the building without an always-on VPN. Enterprises will generally also ensure a backup local account is set up (and optionally auto-rotated) in case the domain becomes unavailable in a bad way so that IT can recover your computer.
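
    On the sssd side, the cached-login behavior is a couple of config switches. A minimal sketch; the domain name and the 7-day window are placeholders:

```
# /etc/sssd/sssd.conf fragment -- "example.com" is a placeholder domain
[domain/example.com]
cache_credentials = True

[pam]
# days a cached credential stays valid while offline (0 = no limit)
offline_credentials_expiration = 7
```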

    5. When I started on the directory stuff ~5-6y ago, I ran a homemade FreeIPA and an MS AD in a cross-forest trust. Windows and Mac were joined to AD; Linux was joined to IPA. (I tried to join Mac to IPA, but there was only a limited LDAP connector, and AD was more painless and less maintenance.) One user to rule them all, still. IPA has loads of great features - I especially enjoyed setting my shell, sudoers rules, and SSH keys from the directory to be available everywhere instantly.

    But, I had some reliability problems (which may be resolved, I have not followed up) with the update system of IPA at the time, so I ended up burning it down and rejoining all the Linux servers to AD. Since then, the only feature I’ve lost is centralized sudo and ssh keys (shell can be set in AD if you’re clever). sssd handles six key MS group policies using libini, mapping them into relevant PAM policies so you even have some authorization that can be pushed from the DC like in Windows, with some relatively sane defaults.

    I will warn: some MS group policies violate the INI spec (especially service definitions and firewall rules) and can coredump libini, so you should put your Linux servers in a dedicated OU with their own group policies and limited settings in the default domain policy.



  • I’m probably the overkill case because I have AD+vC and a ton of VMs.

    RPO 24H for main desktop and critical VMs like vCenter, domain controllers, DHCP, DNS, Unifi controller, etc.

    Twice a week for laptops and remote desktop target VMs

    Once a week for everything else.

    Backups are kept: (may be plus or minus a bit)

    • Daily backups for a week
    • Weekly backups for a month
    • Monthly backups for a year
    • Yearly backups for 2-3y
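
    That rotation is essentially grandfather-father-son retention. A rough sketch of the pruning decision, with the tiers hard-coded to the schedule above (which weekday/day-of-month anchors each tier is my own arbitrary choice here):

```python
from datetime import date

def keep(backup_date, today):
    """Decide whether a backup from backup_date is still retained today.
    Tiers mirror the schedule above: dailies ~1 week, weeklies ~1 month,
    monthlies ~1 year, yearlies ~3 years. Windows are approximate on purpose."""
    age = (today - backup_date).days
    if age <= 7:
        return True                                   # daily tier
    if age <= 31 and backup_date.isoweekday() == 7:   # weekly tier (keep Sundays)
        return True
    if age <= 366 and backup_date.day == 1:           # monthly tier (keep the 1st)
        return True
    if age <= 3 * 366 and backup_date.month == 1 and backup_date.day == 1:
        return True                                   # yearly tier (keep Jan 1)
    return False

today = date(2024, 6, 15)
print(keep(date(2024, 6, 10), today))  # True: recent daily
print(keep(date(2023, 3, 20), today))  # False: old midweek daily, no tier keeps it
```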

    The software I have (Synology Active Backup) captures data using incremental backups where possible, but if it loses its incremental marker (System Restore in Windows, Changed Block Tracking in VMware, rsync for file servers), it will generate a full backup and deduplicate (iirc).

    From the many times this has saved me from various bad things happening for various reasons, I want to say the RTO is about 2-6h for a VM and about 18h for a desktop to restore, measured from the point at which I decide to go back to a backup.

    Right now my main limitation is my poor quad core Synology is running a little hot on the CPU front, so some of those have farther apart RPOs than I’d like.



  • Going to summarize a lot of comments here with one - VPNs are very powerful tools that can do lots of things. Traffic can be configured to go in several directions. We really have to know more about your use case to advise you as to what config you might need.

    Going to just write a ton of words on paper here - OP, let me know if any of this sounds like what you’re trying to do, and I can try to give a better explanation (or if something was confusing, let me know).

    VPN that uses the client’s IP when sending data out of the VPN server

    That’s the specific sentence I’m getting caught on myself. It could mean several things, some of which have been mentioned, some haven’t.

    • Site to site VPN: Two (generally) fixed devices operate a VPN connection between them and utilize some form of non-NAT routing so that every child device behind each site sees its “real” counterpart without getting NATed. However, NAT is typically still configured for IPv4 facing the internet, so each device shows an internet “exit IP” matching the site it’s on. Typically, the device that is most powerful / most stable / most central / least restricted would be the receiver, while the other nodes would be initiators pointed to that receiver. In larger maps, you could build multiple hub/spoke systems as needed.

    • Sub-type of site to site possible: where one site tunnels all of its data over to the second site, and the second site is the one that provides NAT. This is similar in nature to how GL.iNet routers operate their VPN switch, but IMHO more powerful if you have greater control over the server compared to subscribing to a public VPN service. Notably for your example, the internet NAT exit device can be either the initiator or the receiver.

    • Normal VPN but without NAT: this is another possible expansion of what you’ve written, with one word adjusted - it operates the VPN but preserves the client IP as it’s entering the network. This is how most corporate remote access VPNs operate, since it would be overloaded and pointless to have every remote worker coming from a small pool of IP addresses when you don’t even need a NAT engine for intranet traffic.

    My remote access VPN for my home lab is of the latter type, and I have a few site-to-site connections floating around with various protocols.

    For mine, I have two VPN servers: one internal server that works tightly with my home firewall, and one remote server running inside a VPS. Both the firewall and VPS apply NAT rules to egress traffic, but internal bound traffic is not NATed and simply passed along the site to site connections to wherever it needs to go. My home-side remote access VPN is simply a “dumb” VPN server that has the VPN protocol port forwarded back to it and passes almost raw traffic to the firewall for processing.

    For routing, since each VPN requires its own subnet, I use FRR with a mixture of OSPF and iBGP (depending on how old the link is).
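
    For the curious, the FRR side of that is only a few lines per protocol. A minimal sketch; the subnets, neighbor address, and private ASN are all placeholders:

```
! frr.conf fragment -- addresses and ASN 64512 are placeholders
router ospf
 network 10.10.0.0/24 area 0
!
router bgp 64512
 neighbor 10.20.0.2 remote-as 64512
 address-family ipv4 unicast
  network 10.10.0.0/24
 exit-address-family
```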

    For VPN protocols, I am currently using strongSwan for IPsec, but it’s really easy to slap OpenVPN onto the routing stack I already set up and have the routes propagate inward.


  • Any VPN that terminates on the firewall (be it site to site or remote access / “road warrior”) may be affected, but not all are. Some VPN tech uses very efficient computations. Notably affected VPNs are OpenVPN and IPsec / strongSwan.

    If the VPN doesn’t terminate on the firewall, you’re in the clear. So even if your work gave you an OpenVPN client whose crypto would benefit from AES-NI, the tunnel runs between your work laptop and the work server, so the firewall is not part of the encryption pipeline.

    Another affected technology may be some (reverse) proxies and web servers. This would be software running on the firewall like haproxy, nginx, squid. See https://serverfault.com/a/729735 for one example. In this variation of the check, you’d be running one of these bits of software on the firewall itself and either exposing an internal service (such as Nextcloud) to the internet, or in the case of squid doing some HTTP/S filtering for a tightly locked down network. However, if you just port forwarded 443/TCP to your nextcloud server (as an example), your nextcloud server would be the one caring about the AES-NI decrypt/encrypt. Like VPN, it matters to the extent of where the AES decrypt/encrypt occurred.
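
    If you want to check whether a box you already own has it, the CPU flag shows up in /proc/cpuinfo on Linux. A small sketch that parses the flags line (a sample string is included so it runs anywhere):

```python
def has_aes_ni(cpuinfo_text):
    """Return True if any 'flags' line in /proc/cpuinfo-style text lists 'aes'."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # compare whole tokens so e.g. a hypothetical 'aesx' flag won't match
            if "aes" in line.split(":", 1)[1].split():
                return True
    return False

# On a real Linux box you'd feed it the actual file:
#   has_aes_ni(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme aes sse2 avx"
print(has_aes_ni(sample))  # True
```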

    Personally, I’d recommend you get AES-NI if you can. It makes running a personal VPN easier down the road if you think you might want to go that route. But if you know for sure you won’t need any of the tech I mentioned (including https web proxy on the firewall), you won’t miss it if it’s not there.

    Edit: I don’t know what processors you’re looking at that are missing AES-NI, but I think you have to go to some really really old tech on x86 to be missing it. Those (especially if they’re AMD FX / Opteron from the Bulldozer/Piledriver era) may have other performance concerns. Specifically for those old AMD processors (Not Ryzen/Epyc), just hard pass if you need something that runs slightly fast. They’re just too inefficient.


  • Counterpoint: if your system is configured such that the mere act of trying to send an email results in serious delays and regular bounces, you’re doing email wrong. Even push notifications may require third-party routing through Google, Apple, or similar to get to the core OS in some cases.

    Yes, I recognize that hosting an SMTP server is difficult these days and can’t always be done at home due to IP restrictions. But that doesn’t mean you have to have an email server at home. I have third-party email on my domain, and I can dispatch SMTP that arrives promptly, even to Google and Microsoft accounts.

    I honestly wish more software would simply speak to an SMTP server of choice rather than defaulting to just hitting the CLI mail send or attempting a direct SMTP connection.
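
    In Python, for instance, speaking to an SMTP server of choice is a few lines with the standard library. The host, credentials, and addresses below are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_alert(subject, body, sender, recipient):
    """Assemble a simple notification email."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

msg = build_alert("Backup finished", "Nightly job OK.",
                  "alerts@example.com", "admin@example.com")

# Sending is then just (placeholder host/credentials):
# with smtplib.SMTP("smtp.example.com", 587) as s:
#     s.starttls()
#     s.login("alerts@example.com", "app-password")
#     s.send_message(msg)
print(msg["Subject"])  # Backup finished
```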


  • Actually, I legally can’t make money off of it for reasons that would dox me.

    I already pay for both VMware and Microsoft licensing among several others. If I can get my SSO by saving a little bit of money by using a different product, I will. I don’t mind paying for software I use when it makes sense, I only disagree with companies up-charging features like SSO that should be available to all customers.


  • You’ll find a lot of pessimistic people here because there are few unicorn cases where a commercial company buying an open source project didn’t go badly for the open source side. Most of the time after a sell-out, the project ends up under highly restrictive licensing, with features behind paywalls, and many other problems making it a shadow of its former self.

    The most notable recent examples I can think of are IBM buys Red Hat buys CentOS, which ended with forks as AlmaLinux and Rocky Linux, and Oracle buys MySQL, which ended up forked as MariaDB. Businesses love to push their commercial offerings on open source products, and it’s not always in the form of plain old support agreements (like the people behind AlmaLinux). Often (this is especially common in databases) they’ll tax features like SSO, backups, or literally just the privilege of having stable software. Projects like CentOS and VyOS don’t have stable OSS versions, and soooo many databases will put LDAP/Kerberos behind the commercial product, charging monthly or yearly operating costs.

    Even GitHub (which to be clear was closed source to begin with, but is a haven for F/OSS so I’ll give it an honorable mention here) started showing Microsoft-isms after M$ bought the platform.



  • Just because you’ve used it professionally, doesn’t mean it’s OK.

    Run the installation file to install the RDPwrap dynamic link library (DLL). This software provides the necessary functionality to enable Remote Desktop from a Windows 10 Home system.

      begin
        if not Reg.OpenKey('\SYSTEM\CurrentControlSet\Control\Terminal Server\Licensing Core', True) then
        begin
          Code := GetLastError;
          Writeln('[-] OpenKey error (code ', Code, ').');
          Halt(Code);
        end;
        try
          Reg.WriteBool('EnableConcurrentSessions', True);
        except
          Writeln('[-] WriteBool error.');
          Halt(ERROR_ACCESS_DENIED);
        end;
        Reg.CloseKey;
    

    So essentially the RDPwrap software subverts Windows 10 Home security to enable Remote Desktop Connections.

    Even without disassembling their shim DLL, just their readme language and installer code doesn’t give me warm fuzzies about this software’s ability to survive legal scrutiny or a Microsoft audit.

    Just like with backups, in my professional IT admin opinion: if it’s expensive enough to need remote access, it’s expensive enough to remote access the right way. There are plenty of free remote options on Windows that don’t require monkey-patching core services and using a Home license professionally. Plus, if you have more than a few Windows installs, you probably want Group Policy, so you’re up to the Pro license key for that anyway, plus the Windows Server license key(s) for the AD controller.

    Yeah, Windows is expensive when used professionally. If you need Windows that badly, deal with it or talk to your software vendors about getting Linux or Mac software.


  • If you’re OK leaving a monitor plugged in (it can be off), my go-to is Parsec. Bonus points: it works without needing a VPN (it uses UDP NAT hole punching, like Chrome Remote Desktop). If you’ll be far, far away from home, Chrome Remote Desktop tends to be slightly more reliable over high latency than Parsec for me - but that could just be because I tuned mine for super low latency when nearby.

    Good news is, you can run both at the same time and see how they treat ya! (And both are free for base use, but parsec has a handful of premium features you can pay for if you like it) I have Parsec, CRD, RDP, and SSH all set up in various forms to get back “home” when I’m not.


  • (if this comment reads like I feel slighted it’s because I do)

    Their networking ecosystem is very focused on a specific class of prosumer, and once you’re in, it can be very difficult to upgrade out of that bubble to toys that have more growth capacity, from both a tech and a learning perspective.

    I have an advanced network with dynamic routing (iBGP and OSPF), as well as several VPN protocols for both site to site and access VPN. I also have redundant layer 3 gateways everywhere in the main site. Ubiquiti has had the tech to make redundant layer 3 for YEARS, but they refuse to and instead stop updating useful product lines that have more features and instead focus on gimmick products that have flashy marketing campaigns. Even on one of their more feature-ful routers (ER-4), I have to use OpenVPN gateway servers because Ubiquiti doesn’t support plugins that I can get on *sense for full mesh VPNs.

    I can really only use them at layer 2 because once I hit my network core I need redundancy protocols at L2 (stacking or vPC/MLAG) to maintain a system that can keep vSAN and Ceph happy.

    I’m really glad I went the *sense route instead of taking a chance on a USG-3 and depending on the custom json file to load OSPF, because that’s a feature they removed from newer gateways iirc.





  • I’ve got nothing against downloading things only once - I have a few dozen VMs at home. But once you reach a certain point, maintaining offline ISOs for updating can become a chore, and larger ISOs take longer to write to flash install media by nature. Once you get a big enough network, homogenizing to a single distro can become problematic: some software just works better on certain distros.

    I’ll admit I initially missed the point of this post, wondering why there was a post about downloading Debian when their website is pretty straightforward - the title caught me off guard and doesn’t quite match what’s really inside. Inside is much, much more involved than a simple download.

    Therein lies the wrinkle: there’s a wide spectrum of selfhosters on this community, everyone from people getting their first VM server online with a bit of scripted container magic, all the way to senior+ IT and software engineers who can write GUI front ends to make Linux a router. (source: skimming the community first page). For a lot of folks, re-downloading every time is an ok middle ground because it just works, and they’re counting on the internet existing in general to remotely access their gear once it’s deployed.

    Not everyone is going to pick the “best” or “most efficient” route every time, because in my experience as a professional IT engineer, people tend toward the easy solution because it’s straightforward. And from a security perspective, I’m just happy if people choose to update their servers regularly. I’d rather see them inefficient but secure than efficient and out of date every cycle.

    At home, I use a personal package mirror for that. It has the benefit of also running periodic replications on schedule* to be available as a target that auto updates work from. Bit harder to set up than a single offline ISO, but once it’s up it’s fairly low maintenance. Off-hand, I think I keep around a few versions each of Ubuntu, Debian, Rocky, Alma, EPEL, Cygwin, Xen, and Proxmox. A representative set of most of my network where I have either three or more nodes of a given OS, or that OS is on a network where Internet access is blocked (such as my management network). vCenter serves as its own mirror for my ESXi hosts, and I use Gitea as a docker repo and CI/CD.
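
    Pointing a Debian/Ubuntu client at a mirror like that is then a one-line sources change. A sketch; the hostname and suite are placeholders for whatever your mirror serves:

```
# /etc/apt/sources.list fragment -- "mirror.lan" is a placeholder hostname
deb http://mirror.lan/debian bookworm main contrib
deb http://mirror.lan/debian-security bookworm-security main
```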

    I also have a library of ISOs on an SMB share sorted by distro and architecture. These are generally the net install versions or the DVD versions that get the OS installed enough to use a package repo.

    I’ve worked on full air gap systems before, and those can be just a chore in general. ISO update sometimes can be the best way, because everything else is blocked on the firewall.

    *Before anyone corrects me, yes I am aware you can set up something similar to generate ISOs