Cool. Yeah, as a professional I'm constantly aware of data integrity and have most of my shit stored on redundant drives. I had a WoW guild officer who shared his home setup: 8x12TB drives in Windows Storage Spaces with no redundancy, about 80% full. I had to ask how he slept at night knowing he could lose ~80TB of data at any moment.
Personally, my TrueNAS has 5x1.92TB SSDs set up as two mirror vdevs plus a hot spare for my iSCSI LUNs, and 8x1.2TB 10K RPM drives in a raidz2 (two-disk parity) for my NAS storage.
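If it helps to picture the tradeoff, here's a rough back-of-the-envelope on what those layouts give you in usable space. Just a sketch: it ignores ZFS metadata/slop overhead and TB-vs-TiB rounding, and the numbers are only there to match my drive sizes.

```python
# Rough usable-capacity math for the two pool layouts above.
# Sketch only: ignores ZFS metadata/slop overhead and TB-vs-TiB differences.

def mirror_usable(drive_tb: float, num_mirror_vdevs: int) -> float:
    """Each two-way mirror vdev contributes one drive's worth of usable space."""
    return drive_tb * num_mirror_vdevs

def raidz_usable(drive_tb: float, num_drives: int, parity: int) -> float:
    """raidz loses roughly 'parity' drives' worth of space to parity blocks."""
    return drive_tb * (num_drives - parity)

# 5x1.92TB SSDs: two mirror vdevs (4 drives) + 1 hot spare
ssd_pool = mirror_usable(1.92, num_mirror_vdevs=2)      # ~3.84 TB usable
# 8x1.2TB 10K drives in raidz2 (two-disk parity)
hdd_pool = raidz_usable(1.2, num_drives=8, parity=2)    # ~7.2 TB usable

print(f"iSCSI SSD pool: ~{ssd_pool:.2f} TB usable (survives 1 failure per mirror)")
print(f"NAS raidz2 pool: ~{hdd_pool:.2f} TB usable (survives any 2 failures)")
```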
Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you'll want iSCSI-optimized (or at least dedicated) NICs and jumbo frames turned on (an MTU of 9000 is the standard here). This requires a switch that supports jumbo frames as well, and every hop in the path has to be set for the same MTU.
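One gotcha: if any hop in the path is still at 1500, you get fragmentation or silently dropped packets. A quick sanity check is to ping your SAN with the don't-fragment bit set and a payload sized for a 9000-byte MTU. Rough sketch below; the `ping` flags shown are the Linux ones (BSD/macOS and Windows use different switches), and the IP is just a stand-in for your iSCSI portal address.

```python
# Quick end-to-end jumbo frame check (Linux 'ping' flags; sketch only).
# A 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes of payload.
import subprocess

SAN_IP = "192.168.10.50"   # stand-in for your iSCSI portal address
PAYLOAD = 9000 - 20 - 8    # = 8972

result = subprocess.run(
    ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), SAN_IP],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("Jumbo frames look good end-to-end.")
else:
    # Typical failure: "Message too long" or 100% packet loss means some hop
    # (a NIC, a switch port, or the SAN interface) is still at MTU 1500.
    print("Jumbo frame ping failed -- check MTU on NICs, switch, and SAN:")
    print(result.stdout or result.stderr)
```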
For Windows, I find the built-in iSCSI initiator to be very lacking. Every time I've used it I've had sporadic loss of connectivity, volumes failing to reconnect on boot, and other issues. I would avoid it.
For ESXi you can present an iSCSI LUN as a VMFS datastore and create VMDKs on top. This works the same whether you use actual FC LUNs or NFS mounts, and I've had no reliability issues with it. There's also RDM (raw device mapping), which presents the iSCSI LUN directly to the VM as a disk. If you're using vSphere I'd advise against that, since you lose (or at least complicate) vMotion and DRS.
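If you ever inherit an environment and want to know which way it was built, you can tell datastore-backed VMDKs from RDMs by looking at each disk's backing. Something like this pyVmomi sketch would do it; the vCenter address and credentials are placeholders, and I'm skipping cert validation for brevity.

```python
# Sketch: list each VM's disks and flag RDMs vs. regular datastore-backed VMDKs.
# Assumes pyVmomi is installed; host and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; use real certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip VMs with no readable config (e.g. mid-clone)
            continue
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualDisk):
                continue
            backing = dev.backing
            if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
                kind = f"RDM ({backing.compatibilityMode})"  # virtualMode or physicalMode
            else:
                kind = "VMDK on datastore"
            print(f"{vm.name}: {dev.deviceInfo.label} -> {kind} ({backing.fileName})")
finally:
    Disconnect(si)
```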