I discovered that there's a way to manually unbind kernel drivers from specific PCI devices through sysfs, so I wrote this little script:
echo -n "0000:07:00.1" > /sys/bus/pci/drivers/snd_hda_intel/unbind
echo -n "0000:07:00.1" > /sys/bus/pci/drivers/vfio-pci/bind
echo -n "0000:07:00.2" > /sys/bus/pci/drivers/xhci_hcd/unbind
echo -n "0000:07:00.2" > /sys/bus/pci/drivers/vfio-pci/bind
echo -n "0000:07:00.3" > /sys/bus/pci/drivers/nvidia-gpu/unbind
echo -n "0000:07:00.3" > /sys/bus/pci/drivers/vfio-pci/bind
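One caveat worth noting: writing a device address to /sys/bus/pci/drivers/vfio-pci/bind only succeeds if vfio-pci already claims that device ID, otherwise the bind fails with "No such device". A sketch of the two common ways to make it claim the devices first (the vendor:device IDs are the ones from my lspci -nn output below; yours may differ, and all of this needs root):

```shell
# Option 1: register the vendor:device IDs with vfio-pci (from lspci -nn)
echo "10de 1aeb" > /sys/bus/pci/drivers/vfio-pci/new_id   # audio function
echo "10de 1aec" > /sys/bus/pci/drivers/vfio-pci/new_id   # USB function
echo "10de 1aed" > /sys/bus/pci/drivers/vfio-pci/new_id   # serial bus function

# Option 2: force the preferred driver per device, then reprobe it
echo "vfio-pci"     > /sys/bus/pci/devices/0000:07:00.1/driver_override
echo "0000:07:00.1" > /sys/bus/pci/drivers_probe
```

With new_id, vfio-pci may also grab the device the moment you unbind the old driver, which makes the explicit bind line unnecessary.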
It hangs for a while (around 2 minutes) on the
echo -n "0000:07:00.3" > /sys/bus/pci/drivers/nvidia-gpu/unbind
line, but when it finishes, this is the output of lspci -nnv:
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU116 [GeForce GTX 1660] [10de:2184] (rev a1) (prog-if 00 [VGA controller])
Subsystem: NVIDIA Corporation TU116 [GeForce GTX 1660] [10de:1324]
Flags: bus master, fast devsel, latency 0, IRQ 11
Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
Memory at d0000000 (64-bit, prefetchable) [size=256M]
Memory at e0000000 (64-bit, prefetchable) [size=32M]
I/O ports at f000 [size=128]
Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: <access denied>
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
07:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:1aeb] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1324]
Flags: fast devsel, IRQ 83
Memory at f7080000 (32-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
07:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1aec] (rev a1) (prog-if 30 [XHCI])
Subsystem: NVIDIA Corporation Device [10de:1324]
Flags: fast devsel, IRQ 46
Memory at e2000000 (64-bit, prefetchable) [size=256K]
Memory at e2040000 (64-bit, prefetchable) [size=64K]
Capabilities: <access denied>
Kernel driver in use: vfio-pci
07:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device [10de:1aed] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:1324]
Flags: fast devsel, IRQ 58
Memory at f7084000 (32-bit, non-prefetchable) [size=4K]
Capabilities: <access denied>
Kernel driver in use: vfio-pci
Kernel modules: i2c_nvidia_gpu
As you can see, all of them are now using vfio-pci. Then I simply added the GPU to virt-manager and it worked. However, I'm still investigating why, in the middle of the Windows 10 installation, the entire Ubuntu host froze and never recovered.
UPDATE:
Manually unbinding does work to detach the GPU, but if you have to unbind, it means the Linux driver already touched the GPU, so the GPU now knows it was initialized on Linux. When you bind it to the VM and start the VM, the Windows driver will read the GPU state, see that somebody (Linux) messed with it before, and refuse to work, because NVIDIA sucks.
Don't unbind manually (or try it, but it will probably not work). Instead, make sure the Linux drivers never touch the GPU in the first place.
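The usual way to do that is to have vfio-pci claim the GPU at boot, before nvidia/nouveau get a chance to load. A rough sketch of what that looks like on Ubuntu (the IDs are the four functions from my lspci -nn output; substitute your own):

```shell
# Option A: kernel command line. Add to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub:
#   vfio-pci.ids=10de:2184,10de:1aeb,10de:1aec,10de:1aed
sudo update-grub

# Option B: modprobe options, plus a softdep so vfio-pci loads
# before the NVIDIA driver does.
echo "options vfio-pci ids=10de:2184,10de:1aeb,10de:1aec,10de:1aed" | \
    sudo tee /etc/modprobe.d/vfio.conf
echo "softdep nvidia pre: vfio-pci" | sudo tee -a /etc/modprobe.d/vfio.conf
sudo update-initramfs -u

# After a reboot, verify the GPU was never touched by the Linux driver:
lspci -nnk -s 07:00.0    # should show "Kernel driver in use: vfio-pci"
```

Either way, the GPU boots straight into vfio-pci, so the Windows driver inside the VM sees a clean, untouched device.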