
Re: [atomic-devel] Kubeadm vs. SELinux




On 11/23/2016 10:33 AM, Devan Goodwin wrote:
> On Wed, Nov 23, 2016 at 9:44 AM, Daniel J Walsh <dwalsh redhat com> wrote:
>>
>> On 11/22/2016 07:37 PM, Jason Brooks wrote:
>>> On Tue, Nov 22, 2016 at 4:26 PM, Josh Berkus <jberkus redhat com> wrote:
>>>> On 11/22/2016 03:27 PM, Clayton Coleman wrote:
>>>>> Copying Devan as well since he's been working with kubeadm for a while.
>>>>>
>>>>>> On Nov 22, 2016, at 5:25 PM, Jason Brooks <jbrooks redhat com> wrote:
>>>>>>
>>>>>>> On Tue, Nov 22, 2016 at 2:38 PM, Daniel J Walsh <dwalsh redhat com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>> On 11/22/2016 05:15 PM, Josh Berkus wrote:
>>>>>>>> Currently, it is not possible to run Kubeadm with SELinux enabled.
>>>>>>>>
>>>>>>>> This is bad; it means that Kubernetes' official installation
>>>>>>>> instructions include `setenforce 0`.  But it's hard to argue the point
>>>>>>>> when a kubeadm install -- soon to be the main install option for
>>>>>>>> Kubernetes, and the only one which currently works on Atomic -- simply
>>>>>>>> doesn't work with SELinux enabled.
>>>>>>>>
>>>>>>>> The current blocker is that kubeadm init will hang forever at this stage:
>>>>>>>>
>>>>>>>> <master/apiclient> created API client, waiting for the control plane to
>>>>>>>> become ready
>>>>>>>>
>>>>>>>>
>>>>>>>> The errors shown in the journal are here:
>>>>>>>>
>>>>>>>> https://gist.github.com/jberkus/4e926c76fbf772ffee4eb774cb0a4c60
>>>>>>>>
>>>>>>>> That's on Fedora 25 Atomic.  I've had the exact same experience on
>>>>>>>> CentOS 7 and RHEL 7, although the error messages are not identical.
>>>>>>>>
>>>>>>>> Seems like this is on us to fix, if we want people to keep SELinux
>>>>>>>> enforcing. I don't know if we need to push patches to Kubeadm, or to
>>>>>>>> SELinux, or both.
>>>>>>>>
>>>>>>> What AVCs are you seeing?  Where is the bugzilla for this?
>>>>>>>
>>>>>>> ausearch -m avc -ts recent
>>>>>> https://paste.fedoraproject.org/488671/79856867/
>>>>>>
>>>>>> This is from a kubeadm that's packaged up in a copr:
>>>>>> https://copr.fedorainfracloud.org/coprs/jasonbrooks/kube-release/
>>>>>>
>>>>>> The Kubernetes project provides RPMs for CentOS and Ubuntu, and there
>>>>>> are a few things about the way they package it that conflict with Atomic.
>>>>>> Some more info at
>>>>>> https://jebpages.com/2016/11/01/installing-kubernetes-on-centos-atomic-host-with-kubeadm/.
>>>>>>
>>>> In addition to this, please note that setenforce 0 is not required on
>>>> the worker nodes, just on the master.  The kubelet nodes work fine with
>>>> just relabeling the /var/lib/kubelet directory.
>>>>
>>>> It would be really nice if we could somehow do that relabeling as part
>>>> of the installation package, but I don't see how; it would need to be a
>>>> patch/fork on kubeadm instead.
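
For reference, a minimal sketch of that relabeling, assuming the
svirt_sandbox_file_t type used by the docker SELinux policy on these
releases (later renamed container_file_t in container-selinux):

# make the label change part of persistent policy, then apply it
semanage fcontext -a -t svirt_sandbox_file_t "/var/lib/kubelet(/.*)?"
restorecon -R -v /var/lib/kubelet

A one-off "chcon -R -t svirt_sandbox_file_t /var/lib/kubelet" also works,
but that label would be lost on a full filesystem relabel.
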
>>> The problem containers are etcd and kube-discovery; they're set to
>>> type unconfined_t to work around SELinux, but I believe the correct
>>> type is spc_t. Changing to spc_t allows the install to continue without
>>> disabling SELinux.
>>>
>>> I sent a PR to change this: https://github.com/kubernetes/kubernetes/pull/37327
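
For anyone reproducing this, a quick way to confirm which SELinux domain
those control-plane containers actually end up in (assuming a docker-based
node; this just reads the process labels):

ps -eZ | grep -E 'etcd|kube-discovery'

spc_t is the same type docker itself uses when labeling is disabled for a
container (for example with --privileged), so switching the manifests to it
keeps the rest of the host enforcing while leaving these two pods
effectively unconfined.
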
>> Correct, although it would be nice to get new types for these containers.
>> Perhaps we could build policy for each one that is not unconfined but
>> still allows other processes to communicate with them.  etcd_t probably
>> needs very limited access on a host system, and I have no idea what
>> kube-discovery does.
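
A rough way to prototype such a type from the denials already collected,
assuming audit2allow is available (policycoreutils-python on these
releases); the module name kubeadm_local below is only a placeholder:

# turn the recent AVCs into a local policy module for review
ausearch -m avc -ts recent | audit2allow -M kubeadm_local
# inspect kubeadm_local.te before loading anything
semodule -i kubeadm_local.pp

The generated .te file at least shows which permissions an etcd_t-style
domain would need beyond the base container policy; it should be reviewed
rather than installed blindly.
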
> Kube-discovery is the key piece that lets you run a very short kubeadm
> join command; it's just a simple pod that offers up signed data
> containing a list of API servers and the CA cert to talk to them.
> It's a temporary solution for kubeadm alpha and is disappearing
> entirely, replaced by config maps in core k8s.
>
> We unconfined it in the rush for alpha to avoid shipping instructions
> that involved setenforce 0. It couldn't read secrets by default,
> though I believe this is fixed in 1.5 with pmorie's work.
>
> The etcd container just needs access to /var/lib/etcd; suggestions on the
> most correct way to handle this would be very welcome.
etcd could probably run as a standard confined container, with
/var/lib/etcd mounted in with the equivalent of

docker run -v /var/lib/etcd:/var/lib/etcd:Z

As long as everyone else talks to it via the network, SELinux confinement
should work fine.
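
As a sanity check after a run with :Z, the mount point should end up with
a container-writable type plus a private MCS category pair, something
along the lines of (the exact type name and categories will vary):

ls -dZ /var/lib/etcd
# system_u:object_r:svirt_sandbox_file_t:s0:c123,c456 /var/lib/etcd

That relabeling is what lets the confined etcd process write its data
while other confined containers, which carry different categories, are
still denied access to it.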


