
Re: [atomic-devel] Fedora Atomic Workstation questions

On Wed, Feb 7, 2018 at 4:45 PM, Dusty Mabe <dusty dustymabe com> wrote:
Note: I copied this email to the atomic-devel projectatomic io list. The atomic lists fedoraproject org is mainly meant for automated emails.

Thanks, I didn't realize that.

On 02/07/2018 09:27 AM, Elad Alfassa wrote:
> Hi all,

Hi Elad!

> I have some questions regarding Fedora Atomic Workstation:
> 1) How do 3rd party repositories (such as ones providing nonfree drivers, which obviously can't be containerized) work with rpm-ostree?
> If our end goal is for users to be able to use Fedora Atomic Workstation for every usecase they use Fedora Workstation for today, we need a plan for this.

You can add a yum repo file into /etc/yum.repos.d/ and install software via `rpm-ostree install <package>`, which will pull the software from the 3rd party yum repositories. There are some issues with, say, kernel modules that need DKMS, but most RPMs will work fine.
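To make that concrete, the flow might look something like this (the repo id, URL, and package name below are placeholders, not a real repository):

```shell
# Hypothetical example: drop a third-party repo definition into /etc/yum.repos.d/
sudo tee /etc/yum.repos.d/example-vendor.repo <<'EOF'
[example-vendor]
name=Example Vendor Packages
baseurl=https://repo.example.com/fedora/$releasever/$basearch/
enabled=1
gpgcheck=1
EOF

# Layer a package from that repo onto the host image; the new
# deployment takes effect on the next boot:
sudo rpm-ostree install example-driver
sudo systemctl reboot
```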

Oh, great! For some reason I assumed rpm-ostree could only download pre-composed trees from Fedora.
In the future it might be worth adding some sort of compatibility wrapper around "dnf install" (and similar commands), like the one dnf has for yum, to show a message letting you know that the tool is "deprecated" and that rpm-ostree is what you need to use now. Otherwise people might get confused: "How do I install anything? None of the tools I'm used to are here."
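Such a wrapper could be very small. A minimal sketch (the wording and function name are purely illustrative; a real shim would probably be installed as /usr/bin/dnf on the Atomic image):

```shell
# Hypothetical compatibility shim: instead of failing with "command not
# found", point the user at rpm-ostree. Written as a function so it is
# easy to exercise; a real shim would be a standalone script.
dnf_shim() {
    echo "Note: 'dnf' does not manage packages on an rpm-ostree based system." >&2
    echo "To layer a package onto the host, use: rpm-ostree install <package>" >&2
    return 1
}
```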

> 4) If I have a container for development, this means that I have to have two copies of coreutils, openssh, and most system libraries/utilities.
> One copy, the "host", is updated by rpm-ostree. But what about the copy on the container? I'll have to remember to manually rebuild it on every update, or manually run "dnf update" in the container, which is not ideal (i'll probably forget, and end up running insecure/buggy software).

> Would it be possible to build a container based on the host filesystem in such a way that all basic system libraries and utilities are accessible directly (not as a copy) for the container? Alternatively, would some mechanism for automatic re-building of the container images after every ostree update is done can be created?

To date we haven't really explored this option much. I have talked about it with Colin before, and I like to call this type of container a 'host context' container. The idea is that pretty much everything is shared with the host, plus a small overlay of packages specific to the current context you are in. You could have many contexts. Either way, there will still be parts of each context that need to be updated/managed.
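Something along those lines might look like the sketch below: share the host's /usr read-only so the base libraries aren't duplicated, and keep only the context-specific bits in the container image. The mounts, image naming, and the docker CLI choice are all assumptions, not an existing design:

```shell
# Hypothetical "host context" launcher. RUNTIME is overridable (e.g.
# "docker" or another container runtime); image names like
# "host-context-<name>" are made up for illustration.
host_context() {
    name=$1; shift
    ${RUNTIME:-docker} run -it --rm \
        -v /usr:/usr:ro \
        -v "$HOME":"$HOME" -w "$HOME" \
        "host-context-$name" "$@"
}

# Example: host_context dev bash
```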

At the moment I can have auto-updates for my host system and for my Flatpaks, but there's no real mechanism to update containers. I assume people will not be happy if we just automatically run "dnf update" inside all their containers, but if you have a lot of "contexts" you'll have to update all of them yourself.
I'm not sure what the best approach is here, but ideally I think the aim should be as little "maintenance overhead" as possible. If you have to maintain your pet containers and update them manually, you're either going to spend a lot of time on it, or not bother at all and end up running insecure/buggy software.

Building on this "contexts" idea, maybe a "context" could be some sort of "managed container" that is automatically updated when you update your host (via either the graphical or the CLI tools)? Or at least one that shows a message when you switch into it, suggesting you install security updates.
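The "update containers alongside the host" part could be as simple as a hook that runs after a successful host update. A sketch, assuming docker-style `exec` and hard-coded container names (a real tool would discover the user's containers instead):

```shell
# Hypothetical post-host-update hook: run a package update inside each
# "pet" container. RUNNER is overridable (e.g. "docker exec"); the
# container names passed in are examples only.
update_pet_containers() {
    runner=${RUNNER:-"docker exec"}
    for ctr in "$@"; do
        echo "Updating container: $ctr"
        $runner "$ctr" dnf -y update || echo "update failed for $ctr" >&2
    done
}

# Example: update_pet_containers dev-rust dev-python
```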

> 5) Do we expect every developer using Fedora to write their own Dockerfile / Buildah script for their development environment? I think that's a bit too much overhead and we need to at least have a utility to automatically generate these based on some common configurations, and the usage might look something like
> create_dev_container --languages=rust,python,c --additional-packages=ffmpeg

Interesting idea. We haven't really fleshed that out yet. Most people are building their own pet containers because most people's environments can be pretty unique to them.

Yeah, but even if you build a "pet container", you have to start somewhere. If we had a tool to get you started more quickly, it could ship a set of reasonable, opinionated defaults that you can later build upon (either by editing the Dockerfile / script the command creates, or by running dnf in the newly created container).
I think I'm going to make a proof of concept for this later, to see if people are interested in such a tool.
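A proof of concept could start as simple as mapping language names to package sets and emitting a Dockerfile. The package choices below are illustrative guesses, not a real specification:

```shell
# Hypothetical core of a create_dev_container tool: turn a
# comma-separated language list plus extra packages into a Dockerfile
# printed on stdout.
generate_dockerfile() {
    languages=$1
    extra_pkgs=$2
    pkgs=""
    old_ifs=$IFS; IFS=,
    for lang in $languages; do
        case "$lang" in
            rust)   pkgs="$pkgs rust cargo" ;;
            python) pkgs="$pkgs python3 python3-devel" ;;
            c)      pkgs="$pkgs gcc make" ;;
            *)      echo "unknown language: $lang" >&2 ;;
        esac
    done
    IFS=$old_ifs
    printf 'FROM fedora:latest\n'
    printf 'RUN dnf -y install%s %s && dnf clean all\n' "$pkgs" "$extra_pkgs"
}

# Example: generate_dockerfile rust,python,c ffmpeg > Dockerfile
```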

In case this is not clear: I'm not asking these questions just because I want to *use* Atomic Workstation, but also because I want to contribute and help make it better.
