
Re: [atomic-devel] Multi-node deployment strategies




On 11/10/2014 06:01 PM, Colin Walters wrote:
Let's fast-forward to a few weeks from now and say that we have the basics of Atomic delivery sorted out.  I'd like to get some sort of documentation (and continuous testing) around multi-node Atomic+Kubernetes deployments.

A good example here is: https://github.com/eparis/kubernetes-ansible

I was also looking at the Kubernetes Vagrant example - it also uses Fedora, but provisions with Salt and deploys a k8s binary you built on the host.

The interesting thing here is networking.  The eparis-k8s-ansible model has you boot the hosts and record their IPs.  The Vagrant one has a set of hardcoded IP addresses and uses Vagrant's network configuration to assign those addresses to the guests, as well as the /24 for the k8s pods.

Hmm, actually this post has the potential to be far too long.  I think there's going to be some divergence between dev and prod here, and I'm just thinking about the dev case right now: "How do I test my k8s pod-ified app?"

So maybe focus on:
  - Using Atomic + Vagrant to set up a local cluster
  - Using Atomic on OpenStack/EC2/GCE as a dev environment
?



Hey Colin,
For OpenShift v3 (which is built on top of Kubernetes, so should be similar), we're using Ansible and its dynamic inventory feature.

Our repository (it has more than just the OpenShift v3 stuff):
https://github.com/openshift/openshift-online-ansible

More about Ansible dynamic inventories:
http://docs.ansible.com/intro_dynamic_inventory.html


Using the dynamic inventory feature, you don't have to hard-code the IPs or know them up front. You simply create the hosts, then grab their info from the inventory (or you 'register' the output of the launch and use that info).
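
To make the 'register' pattern concrete, here's a rough sketch (not taken from our playbooks; the module parameters, keys, and group names are just illustrative, using the ec2 module as the example): launch the instances, register the result, then feed the returned IPs into an in-memory group with add_host so a later play can target them:

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    # Illustrative launch task; the key, image, and region values are made up.
    - name: Launch the instances
      ec2:
        key_name: mykey
        instance_type: m3.medium
        image: ami-00000000
        region: us-east-1
        count: 3
        wait: yes
      register: launch

    # No hard-coded IPs: pull them out of the registered result.
    - name: Add the new instances to an in-memory group
      add_host:
        name: "{{ item.public_ip }}"
        groups: k8s_nodes
      with_items: "{{ launch.instances }}"

# A later play can now target the group that was just built.
- hosts: k8s_nodes
  remote_user: root
  tasks:
    - name: Sanity check that the freshly launched hosts are reachable
      ping:

The other route is to skip the register step entirely and point ansible-playbook at a dynamic inventory script (e.g. -i ec2.py or -i gce.py) so the cloud provider is queried for the hosts at run time.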

It's pretty amazing stuff. If you look at the cluster.sh script, we can set up a whole OpenShift 3 environment with one command (all dynamic):

./cluster.sh create stg

Right now the OpenShift 3 stuff is all GCE-based, but we have the OpenShift v2 proxy layer stuff (which runs on RHEL 7 Atomic) in the same GitHub repo, and that is all AWS-based. So this should give a good example of how to use both.
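
For example (the group name here is made up, not from our repo), a play that targets hosts through the tag-based groups those inventory scripts generate looks something like:

# ec2.py exposes groups like tag_<key>_<value>; gce.py exposes tag_<name>.
# The group below assumes the instances were tagged env=stg when launched.
- hosts: tag_env_stg
  remote_user: root
  tasks:
    - name: Run against every stg host the inventory script found
      ping:

You run that with ansible-playbook pointed at the inventory script instead of a static hosts file (-i ec2.py or -i gce.py), and the IPs are discovered at run time rather than written down anywhere.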

Hope this helps,
Thomas

