
Re: [atomic-devel] SELinux labelling when running Pulp in containers

----- Original Message -----
> 
> 
> On 09/03/2015 07:51 AM, Daniel J Walsh wrote:
> >
> > On 09/02/2015 08:14 PM, Nick Coghlan wrote:
> >> To help improve the local dev experience for a project I'm working on
> >> that uses Pulp, I've been looking at making it easier to get a local
> >> dev instance of that up and running in containers.
> >>
> >> Building on Michael Hrivnak's previous work, I got Pulp fully
> >> containerised in
> >> https://github.com/ncoghlan/repofunnel/blob/master/_localdev/start_pulp.sh
> >> (with a couple of messy hacks to work around the inability to change
> >> mount points or the owning user when mounting volumes via
> >> --volumes-from).
> >>
> >> However, I've only managed to get it working under "setenforce 0" -
> >> SELinux complains otherwise. After bringing this up internally, I
> >> realised I should start a thread here with the relevant setroubleshoot
> >> details. (Containerising Pulp for local development serves as a
> >> precursor to getting it running on Atomic Host, so this seems like the
> >> most appropriate upstream list to provide feedback on the challenges I
> >> encountered with it).
> >>
> >> For reference, the containers involved in running Pulp locally are:
> >>
> >> * pulp_data - just owns the data volumes
> >> * pulp_db - MongoDB container
> >> * pulp_qpid - Qpid message broker
> >> * pulp_beat - (I don't actually know what this does...)

This is the driver process for the Celery job controller.  Pulp is a Celery-based app, and the "beat" server is effectively the cron of the service.
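
For the curious, the beat container boils down to running something like this (the app module and scheduler class here are my assumption based on the Pulp 2 service files, so check your version):

    # Periodic task scheduler for the Celery cluster; the module paths
    # are an assumption from Pulp 2 and may differ in your version.
    celery beat --app=pulp.server.async.celery_instance.celery \
        --scheduler=pulp.server.async.scheduler.Scheduler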

> >> * pulp_resource_manager - (ditto...)

This one I'm not clear on myself.

> >> * pulp_worker[12] - celery worker nodes (I believe)

Yep
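
Each worker container in turn just runs a stock Celery worker pointed at the Pulp app (again, the module path is an assumption on my part):

    # One Celery worker per container; -c 1 keeps a single process per
    # node, matching how the pulp_workers service runs on a regular host.
    celery worker --app=pulp.server.async.app -n worker1@%h -c 1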

> >> * pulpapi - web service for main REST API

You have both the pulp content web service and the admin API in one? That's what I've done so far, but I'd love to split them as they have completely different functions and different users.
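
A rough sketch of the split I have in mind (image names and ports here are hypothetical): the content server mounts the shared volume, while the admin API only needs the database and broker links.

    docker run -d --name pulp_content --volumes-from pulp_data \
        --link pulp_db:db -p 8080:80 pulp/content-httpd
    docker run -d --name pulp_api --link pulp_db:db \
        --link pulp_qpid:qpid -p 8443:443 pulp/admin-api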

> >> * crane - Docker registry service

I haven't gotten to crane yet.

Nick, we should talk.  I have something similar, though less extensive, that I've been working on.  I haven't gotten the API and crane separated, and I avoided using the data volume and --volumes-from for a number of reasons.  Mine are running now in Kubernetes, with the goal of getting it into atomicapp.

> >>
> >> The first 3 containers have no dependencies, the others all mount
> >> volumes from pulp_data, and have network links to pulp_db and
> >> pulp_qpid. All the containers also mount "/dev/log:Z" from the host.

Interesting. Only the workers and the content web server should need access to the content: the workers put it in, and the web server offers it out. The rest communicate only through messaging or access to the database.
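
In docker run terms, that means only the workers and the content web server need --volumes-from pulp_data; containers like beat can drop it entirely (container names follow your script, the image names are hypothetical):

    # Workers write content into the shared volume...
    docker run -d --name pulp_worker1 --volumes-from pulp_data \
        --link pulp_db:db --link pulp_qpid:qpid pulp/worker
    # ...while beat only talks to the broker and database, so no volume.
    docker run -d --name pulp_beat \
        --link pulp_db:db --link pulp_qpid:qpid pulp/beat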

> >>
> >> Running "sudo _localdev/start_pulp.sh" under SELinux, only the
> >> database and QPid containers start properly - the later ones which
> >> need to link network interfaces to those containers all fail.
> >>
> >> The setroubleshoot message that seems relevant (both by time and content)
> >> is:
> >>
> >> =============
> >> SELinux is preventing nm-dispatcher from read access on the lnk_file
> >> log. For complete SELinux messages. run sealert -l
> >> 7a75e20f-208a-432a-8b71-008f2c2c94d5
> >> =============
> >>
> >> And the additional information from sealert:
> >> =============
> >> Source Context                system_u:system_r:NetworkManager_t:s0
> >> Target Context
> >> system_u:object_r:svirt_sandbox_file_t:s0:c88,c647
> >> Target Objects                log [ lnk_file ]
> >> Source                        nm-dispatcher
> >> Source Path                   nm-dispatcher
> >> Port                          <Unknown>
> >> Host                          thechalk
> >> Source RPM Packages
> >> Target RPM Packages
> >> Policy RPM                    selinux-policy-3.13.1-128.12.fc22.noarch
> >> Selinux Enabled               True
> >> Policy Type                   targeted
> >> Enforcing Mode                Enforcing
> >>
> >> Raw Audit Messages
> >> type=AVC msg=audit(1441238305.121:883): avc:  denied  { read } for
> >> pid=5928 comm="nm-dispatcher" name="log" dev="devtmpfs"
> >> ino=10641 scontext=system_u:system_r:NetworkManager_t:s0
> >> tcontext=system_u:object_r:svirt_sandbox_file_t:s0:c88,c647 tclas
> >> s=lnk_file permissive=0
> >>
> >> Hash: nm-dispatcher,NetworkManager_t,svirt_sandbox_file_t,lnk_file,read
> >> =============
> >>
> >> Regards,
> >> Nick.
> >>
> >> P.S. There's also a secondary failure that appears to stem from
> >> failing to record the above alert properly: "could not write
> >> /var/lib/setroubleshoot/setroubleshoot_database.xml: [Errno 13]
> >> Permission denied:
> >> '/var/lib/setroubleshoot/setroubleshoot_database.xml'"
> >>
> > Where is log located?  It looks like you have a symbolic link named
> > "log", used by a container, that NetworkManager is trying to read.
> >
> 
> Remove the :Z from this line.  You don't want to relabel /dev/log on the
> host.
> 
> MOUNTS="--volumes-from pulp_data -v /dev/log:/dev/log:Z"
> 
> You should only be relabeling content specific to the container.
> 
> restorecon -F /dev/log
> 
> on the host should fix this label.
> 
> 
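
For reference, dropping the :Z leaves the mount line as below; :Z remains the right tool for content that genuinely belongs to one container (the host path in the second line is hypothetical):

    # Shared host resources like /dev/log keep the host's label:
    MOUNTS="--volumes-from pulp_data -v /dev/log:/dev/log"
    # Container-private data is where :Z relabeling is appropriate:
    docker run -v /srv/pulp/content:/var/lib/pulp:Z ...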

-- 
Mark Lamourine <mlamouri redhat com>
Sr. Software Developer, Cloud Strategy
Red Hat, 314 Littleton Road, Westford MA 01886
Voice: +1 978 392 1093
http://people.redhat.com/~mlamouri
markllama @ irc://irc.freenode.org*lopsa

