
Re: [atomic-devel] Running VMs in Openshift



That's pretty cool. I imagine there's no Openshift pull request or
branch yet where I can play with this?

What kind of access would I have to the VM to test things like
adding/removing devices, network interfaces, disks, rebooting, console
access, and so on? I ask because those are the sorts of things you do
routinely when doing operating system CI.

Does it work with nested virtualization? I guess in that case one could
schedule a test VM instance with CRI-O and then, inside that, start the
actual operating systems being tested as nested VMs?
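For reference, a quick way to probe whether a given host (or a VM acting
as a nested host) can do this. The paths are the standard Linux KVM
sysfs locations; what they report depends entirely on the machine:

```shell
# Probe for the KVM device and for nested-virtualization support.
# /dev/kvm and the kvm_intel/kvm_amd "nested" module parameters are
# the standard Linux locations for this information.
if [ -e /dev/kvm ]; then kvm_status=available; else kvm_status=missing; fi
echo "kvm: $kvm_status"

nested=unknown
for m in kvm_intel kvm_amd; do
    p="/sys/module/$m/parameters/nested"
    # Intel reports Y/N (or 1/0 on newer kernels), AMD reports 1/0.
    if [ -r "$p" ]; then nested=$(cat "$p"); fi
done
echo "nested: $nested"
```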

Stef

On 22.03.2017 09:25, Antonio Murdaca wrote:
> CRI-O can already run VM workloads with Clear Containers (as opposed
> to Linux containers). Hopefully we'll have it in kubernetes soon and
> openshift could use it just for virtual machine workloads.
> 
> On Mar 22, 2017 05:50, "Stef Walter" <stefw redhat com> wrote:
> 
>     On 22.03.2017 04:49, Karanbir Singh wrote:
>     > On 21/03/17 16:45, Stef Walter wrote:
>     >> One of the cool things you can do when implementing integration
>     >> testing is staging the test dependencies using an OCI image. And
>     >> scheduling integration tests in Openshift is also nice.
>     >>
>     >> For tests that integrate a full operating system, you need to
>     >> start up one or more VMs running that operating system. Tests
>     >> then interact with those VMs.
>     >>
>     >> It's easy to run VMs from inside of a privileged container that
>     >> contains /dev/kvm. But I want to be able to run full operating
>     >> system integration tests on an Openshift cluster without
>     >> enabling privileged containers on all nodes.
>     >>
>     >> So I've been playing with this, and hacked together:
>     >>
>     >> https://github.com/stefwalter/oci-kvm-hook
>     >>
>     >> This allows use of KVM inside any container running on a system
>     >> where the hook is installed. The use of a hook for this is
>     >> purely pragmatic.
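For anyone curious, the core idea of such a prestart hook can be
sketched in a few lines of shell. This is only an illustration of the
mechanism, not the actual oci-kvm-hook code: the rootfs-path handling is
hypothetical (a real hook parses the OCI state JSON it receives on
stdin), while the device numbers for /dev/kvm (char 10:232) are the
standard Linux ones:

```shell
# Illustrative prestart-hook sketch (NOT the real oci-kvm-hook code).
# An OCI prestart hook runs after the container's namespaces exist but
# before its process starts, so it can create /dev/kvm in the rootfs.
KVM_MAJOR=10     # standard Linux device numbers for /dev/kvm
KVM_MINOR=232

create_kvm_node() {
    # Hypothetical: a real hook derives this path from the container
    # state JSON that the OCI runtime passes on the hook's stdin.
    rootfs="$1"
    if [ ! -e "$rootfs/dev/kvm" ]; then
        mknod -m 0666 "$rootfs/dev/kvm" c "$KVM_MAJOR" "$KVM_MINOR"
    fi
}
```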
>     >>
>     >> A far better solution would be to change kubelet to have an
>     >> --enable-kvm option ... similar to the --experimental-nvidia-gpus
>     >> support I see there [1]. But since changes to kubernetes and
>     >> then Openshift have a really long lead time, this lets us play
>     >> with it beforehand.
>     >>
>     >> Stef
>     >>
>     >> [1] https://kubernetes.io/docs/admin/kubelet/
>     >>
>     >
>     > What would the network layer look like here ?
> 
>     QEMU socket networking with multicast [0] works in my initial
>     testing. I still need to try it out under proper load (many
>     thousands of instances a day), but it seems promising.
> 
>     Stef
> 
>     [0] https://people.gnome.org/~markmc/qemu-networking.html
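To make that concrete, here is roughly what the QEMU side looks like.
The multicast address and port are arbitrary examples; any guest started
with the same mcast= endpoint ends up on the same virtual wire, which is
what makes this attractive for groups of test VMs:

```shell
# Sketch: put two or more guests on one multicast "wire". 230.0.0.1:1234
# is an arbitrary example endpoint; every VM using the same mcast= value
# shares the segment. Only the command line is built and printed here.
MCAST=230.0.0.1:1234
net_args="-netdev socket,id=net0,mcast=$MCAST -device virtio-net-pci,netdev=net0"
echo "qemu-system-x86_64 -m 1024 -enable-kvm $net_args disk.img"
```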
> 



