OK, so I now have a new server running CentOS 7, loaded with 32 GB of RAM, and I want to run a type 1 hypervisor on it, which will most likely be KVM.
I want to isolate the file server function from my development work, which would run on a different OS (Ubuntu Server) as an instance/slice (what do you call them?), plus a third OS such as Fedora Core for experimenting.
How do KVM and a type 1 hypervisor work for this? Does the hypervisor boot up and then you run the various OSes on top of it? Or, in my case, does CentOS 7 boot up and then fire up the different OSes using KVM?
The child instances are called virtual machines (or "VMs" for short).
You boot up your KVM-enabled CentOS 7 instance. Then you go into KVM and create the virtual machines you need, defining the CPU, memory, disk, and network resources you wish to allocate from the hypervisor host to each virtual machine. Finally, you start each VM and install your operating system of choice in it.
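As a rough sketch of what that looks like with the standard libvirt tooling (the VM name, ISO path, and sizes here are just placeholders for illustration):

```
# Create a VM named "fileserver" with 4 GB RAM, 2 vCPUs, and a 40 GB disk,
# booting the Ubuntu Server installer from an ISO on the host.
virt-install \
  --name fileserver \
  --memory 4096 \
  --vcpus 2 \
  --disk size=40 \
  --cdrom /path/to/ubuntu-server.iso \
  --network bridge=br0
```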
At that point you have a KVM hypervisor running multiple VMs, each running a particular OS. From there you can configure KVM however you want: automatically boot one or more VMs when the hypervisor boots, or skip auto-boot and require an administrator to log into the hypervisor and start the VMs manually.
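With libvirt that's a one-liner either way (again, "fileserver" is just the placeholder name from the earlier sketch):

```
# Start this VM automatically whenever the hypervisor host boots
virsh autostart fileserver

# Or disable auto-start and bring it up by hand when you want it
virsh autostart --disable fileserver
virsh start fileserver
```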
Can the Guest OS allocations be changed? For example, say it has been running fine, but I realize its memory allocation needs to be increased. Can I just stop the Guest OS, make the change, and start it back up?
Is it customary to use the Host OS solely to run the Guest OS instances? Is there anything wrong with having the Host OS run an everyday function such as being a Samba file server? Or is it better to have the file server isolated in a Guest OS?
You can modify a guest's allocation after it is set up. All the hypervisors I have used only allow memory changes on non-running guests.
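On KVM/libvirt specifically, the usual routine looks something like this (a sketch; "fileserver" is a placeholder domain name, and you could also just edit the memory values directly with `virsh edit`):

```
virsh shutdown fileserver                  # cleanly stop the guest
virsh setmaxmem fileserver 8G --config     # raise the memory ceiling in the stored config
virsh setmem fileserver 8G --config        # raise the current allocation to match
virsh start fileserver                     # boot it back up with the new allocation
```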
As for your file server, it depends on your resources. If you have lots of memory, disk, and CPU, you should isolate all functions into VMs and just run a pure hypervisor. But since you are spinning up a whole Linux instance for a single function, you can rapidly use up the available resources. Using your base hypervisor OS to supply that functionality is more efficient, but runs the risk of interference. This is why “containers”, like Docker, are popular: a single system kernel is shared by processes that are otherwise isolated, including their network functionality.
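To make that concrete, a containerized Samba server would look roughly like this (just a sketch; "example/samba" is a hypothetical image name, not a specific published image, and the share path is a placeholder):

```
# Run Samba in a container, exposing the SMB ports and sharing the host's /srv/share directory.
# Substitute whichever Samba image you actually trust for "example/samba".
docker run -d \
  --name samba \
  -p 139:139 -p 445:445 \
  -v /srv/share:/share \
  example/samba
```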
Yes. Some changes you can even make without taking the guest down, like adding more disk space as a new drive. Extending an existing partition requires some command-line work, IIRC.
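For instance, with libvirt you can hot-add a new drive to a running guest along these lines (a sketch; the image path, domain name, and target device are placeholders):

```
# Create a 20 GB disk image on the host, then attach it to the running guest as a new drive.
qemu-img create -f qcow2 /var/lib/libvirt/images/extra.qcow2 20G
virsh attach-disk fileserver /var/lib/libvirt/images/extra.qcow2 vdb \
  --driver qemu --subdriver qcow2 --persistent
```

Inside the guest it then shows up as a fresh block device (e.g. /dev/vdb) that you can partition and mount normally.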
I usually use guests to run snapshots of client installations (often a flavour of Windows, but also various Linuces), and the host OS (Ubuntu, in my case) is my primary dev and browsing/email/etc. machine. Of course, that host is packing a big SSD and a nice HD.