[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [atomic-devel] Running VMs in Openshift



On 22.03.2017 10:19, Antonio Murdaca wrote:
> 
> 
> On Mar 22, 2017 10:13, "Stef Walter" <stefw redhat com> wrote:
> 
>     That's pretty cool. I imagine there's no Openshift pull request or
>     branch yet where I can play with this?
> 
> 
> There's not at the moment :( hopefully...
> 
> 
>     What kind of access would I have to the VM to test things like
>     adding/removing devices, network interfaces, disks, rebooting, console
>     access and so on? I ask, because those are the sorts of things you do
>     routinely when doing operating system CI.
> 
> 
> VM lifecycle should be totally up to you, I guess. All cri-o is doing is
> setting up the network for k8s. The rest should be really up to you afaict.

So there's direct access to the QEMU monitor, to be able to add devices,
snapshot, shut down the VM, etc.? If so, that's awesome.
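If the monitor really is exposed, those lifecycle operations could be driven
over QMP, QEMU's JSON control protocol. A minimal sketch, assuming QEMU was
started with something like `-qmp unix:/tmp/qmp.sock,server,nowait` (the
socket path and device IDs below are made-up for illustration):

```python
# Sketch: driving a VM's lifecycle over QMP (QEMU Machine Protocol).
# The socket path and device IDs are assumptions, not from the thread.
import json
import socket

def qmp_command(execute, **arguments):
    """Build one QMP command as a JSON line, the way QEMU expects it."""
    cmd = {"execute": execute}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd) + "\r\n"

def qmp_session(sock_path, commands):
    """Send prebuilt QMP commands over the monitor socket, collect replies."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    f = s.makefile("rw")
    f.readline()                                 # QMP greeting banner
    f.write(qmp_command("qmp_capabilities"))     # leave capabilities mode
    f.flush()
    f.readline()
    replies = []
    for cmd in commands:
        f.write(cmd)
        f.flush()
        replies.append(json.loads(f.readline()))
    s.close()
    return replies

# Hot-plug a disk, then power the guest down cleanly:
hotplug = qmp_command("device_add", driver="virtio-blk-pci",
                      drive="drive1", id="disk1")
shutdown = qmp_command("system_powerdown")
```

`device_add`, `device_del`, `system_powerdown` and `system_reset` cover most
of the add/remove-devices and reboot cases mentioned above; console access
would still go through a separate serial or VNC channel.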

>     Does it work with nested virtualization? I guess in that case one could
>     schedule a test VM instance with CRI-O and then inside that start the
>     actual operating systems that are being tested as nested VMs?
> 
> 
> Pretty sure it won't work with nested virtualization, as they're using
> qemu and kvm.

As long as each layer doesn't prevent nesting, KVM works with nested
virtualization many layers deep. The big place where the layers don't
stack cleanly like Russian dolls is memory.
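Whether a given host layer allows nesting can be checked from the KVM module
parameters. A small sketch (the sysfs paths are the standard Linux ones;
the helper names are mine):

```python
# Sketch: checking whether the host kernel has KVM nesting enabled.
# On Intel the flag is /sys/module/kvm_intel/parameters/nested, on AMD
# /sys/module/kvm_amd/parameters/nested; older kernels report "0"/"1",
# newer ones "N"/"Y".
import os

NESTED_PARAMS = (
    "/sys/module/kvm_intel/parameters/nested",
    "/sys/module/kvm_amd/parameters/nested",
)

def parse_nested_flag(value):
    """Interpret the kernel's nested-virt parameter string."""
    return value.strip() in ("1", "Y", "y")

def nested_kvm_available(paths=NESTED_PARAMS):
    """True if a loaded KVM module reports nesting enabled."""
    for path in paths:
        if os.path.exists(path):
            with open(path) as f:
                return parse_nested_flag(f.read())
    return False
```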

Stef

>     On 22.03.2017 09:25, Antonio Murdaca wrote:
>     > CRI-O can already do VM workflows with Clear Containers (as opposed
>     > to Linux containers). Hopefully we'll have it in kubernetes soon and
>     > openshift could use it just for virtual machine workloads.
>     >
>     > On Mar 22, 2017 05:50, "Stef Walter" <stefw redhat com> wrote:
>     >
>     >     On 22.03.2017 04:49, Karanbir Singh wrote:
>     >     > On 21/03/17 16:45, Stef Walter wrote:
>     >     >> One of the cool things you can do when implementing integration
>     >     >> testing is staging the test dependencies using an OCI image. And
>     >     >> scheduling integration tests in Openshift is also nice.
>     >     >>
>     >     >> For tests that integrate a full operating system, you need to
>     >     >> start up one or more VMs running that operating system. Tests
>     >     >> then interact with those VMs.
>     >     >>
>     >     >> It's easy to run VMs from inside of a privileged container that
>     >     >> contains /dev/kvm. But I want to be able to run full operating
>     >     >> system integration tests on an Openshift cluster without
>     >     >> enabling privileged containers on all nodes.
>     >     >>
>     >     >> So I've been playing with this, and hacked together:
>     >     >>
>     >     >> https://github.com/stefwalter/oci-kvm-hook
>     >     >>
>     >     >> This allows use of KVM inside any container running on a system
>     >     >> where the hook is installed. The use of a hook for this is
>     >     >> purely pragmatic.
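The core job of such a prestart hook can be sketched as follows. This is an
illustration of the general mechanism, not the actual oci-kvm-hook code: the
state-JSON field names follow runc's conventions, and the device handling is
an assumption.

```python
# Sketch of what a prestart OCI hook granting /dev/kvm has to do: read the
# container state JSON from stdin, find the container's rootfs, and create
# the kvm device node there. Illustrative only, not the repo's code.
import json
import os
import stat

KVM_MAJOR, KVM_MINOR = 10, 232   # /dev/kvm is char device 10:232 on Linux

def read_state(raw):
    """Parse the state JSON an OCI runtime passes on the hook's stdin."""
    state = json.loads(raw)
    # runc-style state carries the container pid and the bundle path
    return state["pid"], state.get("bundlePath") or state.get("bundle")

def create_kvm_node(rootfs):
    """Create the /dev/kvm character device inside the container rootfs."""
    path = os.path.join(rootfs, "dev", "kvm")
    if not os.path.exists(path):
        os.mknod(path, 0o666 | stat.S_IFCHR,
                 os.makedev(KVM_MAJOR, KVM_MINOR))
    return path
```

A real hook would additionally have to allow the device in the container's
devices cgroup, which is exactly the part a `--enable-kvm` kubelet option
would make unnecessary.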
>     >     >>
>     >     >> A far better solution would be to change kubelet to have a
>     >     >> --enable-kvm option ... similar to the --experimental-nvidia-gpus
>     >     >> support I see there [1]. But since changes to kubernetes and
>     >     >> then Openshift have a really long lead time, this lets us play
>     >     >> with this beforehand.
>     >     >>
>     >     >> Stef
>     >     >>
>     >     >> [1] https://kubernetes.io/docs/admin/kubelet/
>     >     >>
>     >     >
>     >     > What would the network layer look like here ?
>     >
>     >     QEMU socket with multicast [0] works in my initial testing. I
>     >     need to try it out under proper load (many thousands of instances
>     >     a day) ... but it seems promising.
>     >
>     >     Stef
>     >
>     >     [0] https://people.gnome.org/~markmc/qemu-networking.html
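The multicast setup referenced in [0] boils down to a couple of extra qemu
arguments per VM: every guest started with the same multicast address ends
up on a shared virtual LAN, with no tap devices or bridges needed inside the
container. A sketch of assembling them (the address and MACs are made up;
each VM needs a unique MAC):

```python
# Sketch: QEMU multicast-socket networking arguments. Guests sharing the
# same mcast address/port see each other on one virtual LAN.
# Address and MACs below are illustrative assumptions.

def mcast_net_args(mcast_addr, mac):
    """qemu-system-* arguments joining a VM to a multicast 'LAN'."""
    return [
        "-netdev", "socket,id=net0,mcast=%s" % mcast_addr,
        "-device", "virtio-net-pci,netdev=net0,mac=%s" % mac,
    ]

# Two guests on the same multicast group can reach each other:
vm1 = mcast_net_args("230.0.0.1:1234", "52:54:00:12:34:56")
vm2 = mcast_net_args("230.0.0.1:1234", "52:54:00:12:34:57")
```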
>     >
>     >
>     >
> 
> 
> 



