[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [atomic-devel] Running VMs in Openshift



On 30.03.2017 21:34, Colin Walters wrote:
> On Thu, Mar 30, 2017, at 03:04 PM, Stef Walter wrote:
>> After starting a VM in kubevirt, can we access the qemu monitor or have
>> libvirt access to that VM ... from a container in a kubernetes pod?
> 
> To rephrase what Stef is saying:
> 
> First, this is mostly about using VMs for *testing*.  Not running
> production VMs.  For example, in this model, it's a *good* thing
> if the spawned VMs cannot see (network-wise) any other VMs that happen to
> live in the same infrastructure.
> 
> I've seen many, *many* variations of test frameworks that
> provision VMs in libvirt/OpenStack/AWS etc., and then ssh in from
> wherever the test framework is executing (a Jenkins instance,
> or whatever; most commonly a different VM in the same infrastructure).
> 
> One problem with this approach is the lack of "lifecycle binding".  If the
> executor process dies, the VM leaks.  Now obviously there are
> many workarounds for this - configure the VM to shut down after it has
> been idle for an hour, periodically scan for old VMs and delete them, etc.
> 
> But with the model of having the test code and the qemu process colocated,
> they naturally die together.
> 
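[Editorial aside: inside a single pod, this "die together" property can also be enforced at the process level. Linux's PR_SET_PDEATHSIG asks the kernel to signal a child when its parent exits, so a test runner that spawns qemu this way cannot leak it. A minimal sketch, Linux-only; the argv passed to spawn_bound is a placeholder for a real qemu invocation:]

```python
import ctypes
import signal
import subprocess

PR_SET_PDEATHSIG = 1  # constant from <linux/prctl.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def bind_to_parent():
    """Ask the kernel to SIGKILL this process when its parent dies.

    Runs in the child between fork() and exec(), so the setting
    survives into the exec'd program (e.g. qemu).  Caveat: the flag
    is cleared if the child execs a setuid binary.
    """
    if libc.prctl(PR_SET_PDEATHSIG, int(signal.SIGKILL), 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")

def spawn_bound(argv):
    """Spawn a child whose lifetime is bound to the calling process.

    For a test runner, argv would be something like
    ["qemu-system-x86_64", ...] (placeholder here).
    """
    return subprocess.Popen(argv, preexec_fn=bind_to_parent)
```

If the test process is killed, even with SIGKILL, the kernel delivers SIGKILL to the bound qemu child, so no external reaper is needed.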
> Second, this also obviously greatly reduces latency - there's no *reason*
> the test execution code should be on a separate physical machine from
> the target VM(s).
> 
> And third, this model allows low-level access to qemu which is
> quite useful when doing OS testing.
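[Editorial aside: the low-level access Colin mentions is concrete. qemu exposes a JSON control protocol, QMP, over a socket when started with e.g. `-qmp unix:/run/qmp.sock,server,nowait`, and a colocated test can drive it directly. A minimal sketch, with the socket path as a placeholder; it assumes a quiet monitor that sends one reply per command, whereas a real QMP stream can interleave asynchronous events:]

```python
import json
import socket

def qmp_command(name, **arguments):
    """Build one QMP command dict, e.g. {"execute": "query-status"}."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return cmd

def qmp_session(sock_path, commands):
    """Connect to a QMP unix socket, negotiate, and run commands.

    QMP servers greet with {"QMP": {...}} and require a
    qmp_capabilities command before accepting anything else.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        f = sock.makefile("rw")
        json.loads(f.readline())  # consume the server greeting
        replies = []
        for cmd in [qmp_command("qmp_capabilities")] + list(commands):
            f.write(json.dumps(cmd) + "\n")
            f.flush()
            replies.append(json.loads(f.readline()))
        return replies[1:]  # drop the capabilities ack

if __name__ == "__main__":
    # Placeholder socket path; query the VM's run state.
    print(qmp_session("/run/qmp.sock", [qmp_command("query-status")]))
```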

Completely agree with this. That's a good way to look at it. Latency,
isolation, and scheduling all start to get in the way of testing and
need workarounds.

Stef


