Docker Misconceptions (devopsu.com)
278 points by mattjaynes on June 9, 2014 | hide | past | favorite | 64 comments


I was ready to come in swinging in defense of Docker, but after spending a lot of time with it over the last month (I ported my Rails application's provisioning from Ansible to it), I found myself agreeing with most of the points.

I would add to the list that it is currently hard to even find decent images of popular services that you would trust deploying to production (e.g. Postgres). I see with the launch of Docker Hub that they have some flagged as "official" now, but for example the Postgres one is a black box (no Dockerfile source available - not a "trusted build") so I can't trust it.[1] I've had to spend time porting my Ansible playbooks over to Dockerfiles due to this.

I think part of the problem is that composition of images is strict "subclass"-type inheritance, so they don't compose easily. So it's hard to maintain a "vanilla" image rather than a monolithic one that has everything you need to run your particular service - so people just keep their images to themselves. For example, if I want a Ruby image that has the latest nginx and git installed, I can't just blend three images together. I have to pick a base image, and then manually add the rest myself via Dockerfile.
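As a sketch of that single-parent constraint (base image tag and package list are illustrative, not any official image):

```dockerfile
# Only one FROM per image — you can't inherit from a ruby image,
# an nginx image, and a git image simultaneously.
FROM ruby:2.1

# Everything beyond the single base has to be added by hand.
RUN apt-get update && apt-get install -y nginx git \
    && rm -rf /var/lib/apt/lists/*
```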

Also, although Vagrant 1.6 added Docker support, you really have to understand how Docker works in order to use it. If you're learning Docker, I'd stick with the vanilla `docker` syntax first when building your images, maybe using something like fig[2] to start with.
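For reference, a minimal fig setup looks something like this (service names, image tag and ports are hypothetical):

```yaml
# fig.yml — two linked containers; `fig up` builds and starts both
web:
  build: .
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: postgres:9.3
```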

At the end of the day it's another abstraction to manage. It does bring great benefits in my opinion, but the learning curve and time investment aren't cheap, so if you already have a suite of provisioning scripts it may not be worth it to make the leap yet.

[1]: https://registry.hub.docker.com/_/postgres/

[2]: http://orchardup.github.io/fig/


> I see with the launch of Docker Hub that they have some flagged as "official" now, but for example the Postgres one is a black box (no Dockerfile source available - not a "trusted build") so I can't trust it.[1]

It is actually built from source - by a separate open-source project called "stackbrew", which is basically the "homebrew of official Docker images". It was set up by the Docker maintainers before Hub introduced auto-builds, and then it just stuck. The idea is to allow anyone to participate in the maintenance effort, contribute new images, etc. by simply making a pull request (again, just like Homebrew).

Here's the source URL from which that Postgres image is built: https://github.com/dotcloud/stackbrew/blob/master/library/po...


> For example, if I want a Ruby image that has the latest nginx and git installed, I can't just blend three images together. I have to pick a base image, and then manually add the rest myself via Dockerfile.

My solution to this is to avoid combining stuff in the same image.

E.g. my home server has an image for haproxy that splits incoming requests to my various test web apps by hostname. Each of those web apps runs in its own docker container that bind-mounts its appropriate directory from my home dir.

Then the container(s) I do my development in likewise bind-mount the appropriate directory. For the most part I ssh in to a single screen session, but for anything that has particular dependencies, I have separate containers (e.g. I keep a Ruby 1.8 container available to test code that should retain compatibility).

So far, all of my Docker containers are extremely small for this reason: I compose by orchestrating multiple Docker containers, not by stuffing multiple things into each container.
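A sketch of that layout, with hypothetical image names and paths (these commands presume a running Docker daemon and pre-built images, so they're illustration rather than a runnable recipe):

```shell
# One small container per app, each bind-mounting only its own tree
docker run -d --name blog -v "$HOME/apps/blog:/srv/app" blog-image
docker run -d --name wiki -v "$HOME/apps/wiki:/srv/app" wiki-image

# haproxy runs in its own container and routes by hostname to the apps
docker run -d --name lb -p 80:80 haproxy-image
```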

This for the most part means that the dependencies I need to manually add rarely amount to more than installing an extra package with apt, or installing another gem.

I agree with you that it's hard to find decent images to rely on. But by keeping the containers small and single-purpose, I find it rarely matters - the generic part of the config is often a matter of a couple of lines, while the Dockerfile and associated config files tend to be dominated by my personal requirements.


>> I think part of the problem is that composition of images is strict "subclass"-type inheritance, so they don't compose easily.

This does seem like a pretty major limitation. Is this a fundamental constraint, or is it something that might be fixed in the future?


The "subclass"-type inheritance is due to the underlying union filesystem (AUFS). Each command in the Dockerfile is applied as a layer and then cached, and each resulting image can be referenced as a starting point from another Dockerfile.

So you can branch out from a common base image, but you can't somehow co-mingle multiple parent images into one child. If you think about it like the filesystem giving you a snapshot at each line of the Dockerfile, you can see how inheriting from a single snapshot is trivial, but somehow inheriting from multiple snapshots into one would require much more awareness in order to merge them.
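Concretely, each Dockerfile instruction yields one cached layer on top of a single parent snapshot (a sketch; the package and paths are illustrative):

```dockerfile
FROM ubuntu:14.04             # the one and only parent snapshot
RUN apt-get update            # layer: cached after the first build
RUN apt-get install -y nginx  # layer: rebuilt only if a line above changes
ADD . /srv/app                # layer: invalidated whenever the files change
```

`docker history <image>` shows the resulting layer stack; another Dockerfile can `FROM` any one of these snapshots, but only one of them.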

The practical effect is that either you build a more one-size-fits-all "base" image which you reference for all your services, or you maintain multiple base images with different, more tailored sets of dependencies, or you have a very thin base image (e.g. not much more than ubuntu-latest) and just add what you need in each individual Dockerfile.

I ran into the same issue at first when laying out the Dockerfile hierarchy for all my services/roles, but I think in this case it actually falls under "less magic is better" / KISS. Dockerfiles are so easy to read and write once you get going, you won't mind the simplistic approach.


It sounds like you need something else to help you build the Dockerfiles then - e.g. if I need a Dockerfile with nginx, postgres and memcached, I use something like puppet/chef to put it together. Is that possible?


It's easy to build new images from Dockerfiles. And "cut-n-paste" based "inheritance" works well when combining two or more Dockerfiles into a single one that has everything you want.

But you can't use composition to combine the actual downloaded images themselves.

Using Chef within Docker to build up the services is documented on the Docker site[1], and of course puppet or whatever is essentially the same.

[1] http://docs.docker.com/articles/chef/
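A hedged sketch of that pattern - installing Chef in the image and converging with chef-solo at build time. The cookbook name, paths and installer URL are assumptions, not taken from the linked article:

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl

# Install Chef via the omnibus installer (URL as of this era)
RUN curl -L https://www.getchef.com/chef/install.sh | bash

# Bake cookbooks into the image and converge at build time
ADD cookbooks /var/chef/cookbooks
ADD solo.rb /var/chef/solo.rb
RUN chef-solo -c /var/chef/solo.rb -o 'recipe[myapp]'
```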


My personal opinion is that if you think you need a Dockerfile with nginx, postgres and memcached, you're still thinking in terms of VMs. To me, that is three containers.


I don't know why the official Docker images aren't marked as trusted builds, but you can find sources here:

https://github.com/docker-library


It seems a shame that Ubuntu's Juju has been passed over in this space. Their charm store is designed to make it easy to mix and match recipes, have the community endorse and improve them. See the 'features' tab on https://jujucharms.com/sidebar/search/precise/postgresql-71/... for a nice summary of what is offered by the most popular postgres charm.


> it is currently hard to even find decent images of popular services

Very true. I've yet to find a suitable image for a simple Nginx + Unicorn + Postgres + Rails setup.


I thought that the point of Docker was to encapsulate each of those things into their own containers?


Not necessarily. The article covers that point.


And that point is, in my opinion, stupid. I run docker in production here, and if I had to make a docker image for each application with all services integrated, that would become a maintenance nightmare the moment you have to manage more than say 10 images (!not containers) built like that.

You lose a lot of the flexibility by integrating everything in one big pre-configured blackbox. In some cases that's nice, like when you actually want to ship a blackbox to customers, but when managing infrastructure? Thanks but no thanks. When you have proper configuration management in place, spinning up and configuring an extra postgres, apache, nginx, jboss, redis, ... whatever container should be dead-easy, otherwise you're doing something wrong.


> For example, if I want a Ruby image that has the latest nginx and git installed, I can't just blend three images together. I have to pick a base image, and then manually add the rest myself via Dockerfile.

Wouldn't it be just crazy if someone maintained a list of software they built to be modularly installed along with other optional dependencies compiled on a common architecture and operating system for a common base image?

We could make a tool to download them from a list of approved sources and verify that the files were cryptographically signed by a particular builder. We could also have the dependent images downloaded automatically. Then we could extend scripting support to run custom-built tools to tailor the install to what was on the system at the time of install or uninstall. Heck, considering all that they could make it cross-platform or even multi-platform.

Maybe they could call it the Really Professional-docker-image Manager, or the Docker Professional Knowledge Gem, or the Yodeler's Ultimate Maintainer package.

That would be crazy. Somebody should create a startup for this ASAP. [note to NSA crawlers: this has been an example of sarcasm]


There are 2 other misconceptions I would really like to be less prevalent:

1) Linux containers are not an entirely new concept. If you didn't know about BSD jails or Solaris zones, you were missing out. If you still don't know the differences, I highly recommend you broaden your horizons. Even if you use Linux anyway, just knowing about what's out there will help you be smarter.

2) Docker is not a drop-in replacement for every current VM use case. It's just not. To HN's credit, I really haven't seen people here who seem to think that, but it's on my list nonetheless.


I use a chrooted environment for development daily, but I didn't know of more advanced containment tools.

I was fooling around with lxc containers over the last hour, and they seem to me a much simpler concept to understand than the whole Docker ecosystem.

Is Docker doing basically the same thing, but offering a more structured and controlled way of doing it?


Docker could be thought of as a wrapper to LXC, providing some additional management capabilities and tooling. It's often described as a "packaging" format for applications, and you then deploy these applications in containers. So the application is actually all of userspace that you're going to use bundled together, and then run on a sort of "partition" of a kernel.


I'm so glad someone finally said this publicly. Everyone always writes posts about how magical Docker is, but the reality is that Docker is more of a PR engine than a usable production virtualization solution.


I used Puppet for about 4 years in a critical production environment and let me tell you, Docker is not only hype. The reason why a lot of people (including me) are excited about Docker is that it brings a clean architectural paradigm to the masses, the same one used at Google and Heroku, the same one that many of us struggled to build behind closed doors at various companies.

People who have used configuration management tools at scale in complex environments have seen that managing state with those tools can quickly become tedious. Although the recommendations of the author are fair, it takes a lot of effort to manage state drift across a whole platform using a configuration management tool. But if you use Docker, you will be forced to think and do things in this new paradigm, where there is separation between your data, business logic and service state. You can theoretically achieve the same things with CM tools, but it's just harder.

While the whole Docker ecosystem is immature, and people don't know how to deal with things once they get past one machine, it's getting there, and there is slowly more knowledge on how to do service discovery, logging, etc. in large environments. I'm sure that in a few years we will look at Docker (and other container tech) the same way we now look at "heavy" virtualization.


I agree the concept is sound, that said Docker is quite a bit of a PR engine as the parent comment said.

It also is far from mature enough for most. When it fails, good luck finding out what's happening - it doesn't log much. It needs a lot of management around it.

It'll be much more useful when stuff like Flynn takes off, or when someone rewrites/fixes this stuff for average sysadmins (and by average I'm not saying they're bad - I'm saying not everyone has an idle army of sysadmins just waiting to fix stuff).


You're right that it is not a usable virtualization solution... because it is not a virtualization solution at all. Docker is a dependency management tool. To the extent that you're using virtualization for dependency management then Docker is an excellent alternative due to its lower operating overhead. If you're using virtualization for resource allocation and management, then Docker is not such a good fit. As someone who has used it quite a bit over the last six months I bristle a bit at the suggestion that it is merely a "PR engine," by which I assume you mean "something that isn't really new or useful but that garners a lot of attention." Not an accurate observation at all, in my opinion.


I think you're going to need serious ops fu for "orchestrating deploys & roll backs with zero downtime" whether you use containers or not. It seems like people with complex environments are flocking to Docker despite its supposed complexity, but maybe that's the echo chamber talking.


For what it's worth, as you mention Docker solves a piece of the puzzle extremely well that is a boon to an entire ecosystem.

That's awesome.

We love all of the tools and technologies that the OP mentions, and our message to everyone is: integrating with your existing tools is a fundamental goal of the company. Telling anyone to throw away the hard work and sweat they've invested into the current status quo is antithetical to our mission.


Echo chamber, social marketing, etc. are all pretty much guaranteed.

You can spot a group trying to create buzz/hype on here pretty easily and docker is having a coordinated marketing blitz with a new release.


Being part of the community has been in our DNA since day 1. The project doesn't exist without it, and the major feeling we all have at DockerCon today is humility.

I think characterizing our involvement in the excitement of today as being nothing more than a marketing blitz is disingenuous at best.


Yeah, it takes a decent level of ops skills for setting those up. It's certainly doable, but it's non-trivial.

If setting those up seems daunting for a team, then it's not the right time to use Docker (in production at least).


It only takes a high level of ops skill if your apps aren't designed for it. ;-)


What does that even mean?


I'd be curious to see how CoreOS might help simplify some of the issues you mention. I'm starting to dig deep into both and the learning curve is definitely a bit high.


Yeah, it's nice to see projects like CoreOS tackling the issues of using Docker in production. Hoping to see the complexities continue to be reduced as Docker (and its ecosystem) matures.


I think CoreOS + Fleet directly address many of the gaps identified in this article. It's not the only way to do it, but it does seem like a particularly good way.


Original comment and posting: https://news.ycombinator.com/item?id=7869831


Interesting article! There are some misconceptions I disagree with, but I believe I agree with the spirit.

What Docker does is allow those who are best qualified to make the decisions mentioned (the ops guys!) to have a clear separation of concerns from application developers.

It doesn't magically solve these hard problems in and of itself.


Developers probably see this the exact opposite way.

Finally they get to package their stuff in such a way the ops guys won't be able to change it after it is delivered.

Are docker containers 'tamper proof'?

Can they be locked?


It's awesome, now the devs have to be on 24hr call with the ops people, because they're the ones who built the environment.

Welcome to my world!


We are just implementing this at my work.

IT IS THE GREATEST THING FUCKING EVER!!! (at least on call related)


Hit the nail on the head.

> Are docker containers 'tamper proof'?

Not yet. :)


I've been at events where Docker people were pushing this "separation of concerns" nonsense. Really? The devs build it, hand the black box to ops, and ops has no say in how things are built or even knowledge of how to fix it?

You're pushing to make it so that ops is powerless to fix your "tamperproof" container?

I haven't met very many devs qualified to work as ops engineers, and I've met even fewer interested in jumping in to opsland and joining on-call rotation.

Containers are cool. Docker's "separation of concerns" and "tamperproof" messaging, however, is ridiculous. Devops was supposed to be about communication and ditching silos.

The vision that Docker is pushing is anti-collaborative and unrealistic. The technology is interesting, the sales pitch is horrible.


I don't mean that in an organizational way at all. My terse perspective may be the cause of the negative reaction, and for that I apologize. My fault for trying to respond while meeting with all of the awesome people at DockerCon.

Edit to add: I speak of a time where people know exactly what is running on their infrastructure, what version, who built it, and any additional metadata they want to track as part of that. Built on top of those primitives, you can create any sort of policy that you have a reasonable chance of enforcing.

About our marketing, it's interesting feedback, because I'd say I agree with the premise of the points you make. Developers are not ops people. Ops people are not developers. And trying to make them the same is not a recipe for success.


It's not about anti-collaboration, it's about making sure your infrastructure doesn't change unpredictably, so you can treat containers as the unit of deployment that matters.

Removing tampering means that anyone (dev/ops) fixes things in a predictable, orchestratable manner. It's the nail in the coffin of box-hugging syndrome ("oh, I'll just telnet into Gandalf to fix that wrong IP address"). Nope - fix the Dockerfile, mint a new container and chuck the old one. Same goes for "helpful" Puppet runs that actually bork up the broader system because of an undeclared / unmanaged dependency in your manifests.

Of course there are exceptions to every rule, but immutable infrastructure, it's a thing.


Go to a meetup, listen to what the Docker guys are saying.

It isn't this. They are explicit that their vision is for developers to bake up a container, and for ops to mindlessly deploy it. They use the shipping container metaphor: ops dockworkers just slot containers in to place, only the devs get to see what's inside.

Yeah, immutable is a thing. Docker messaging is something else entirely.


My current job is sysadmin, but I come from development, and what you say is rubbish. You can just as well say that the binary the devs gave you is a blackbox.

The Docker images I have to deploy are built on the CI system, and I get them from there. In practice, however, it's me writing the Dockerfiles, and the devs getting better insight into its best practices when deploying. I'm also involved from the beginning, because I am the Docker magician, and this helps to avoid a lot of problems. Things like "download, compile and deploy bleeding edge [stable version]+2" of some software in deployment documents will never make it beyond the fantasy of some junior developer.

I have currently set up two projects to be completely Docker-based - one was brand new, one was a conversion of a huge multi-component project - and since the transition, deploying them to QA and test already goes a lot more smoothly than it ever did before; the project isn't even supposed to be deployed there yet, but it already works without a problem.

The day they submit a service request and five minutes later are standing next to my desk to say "it urgently needs to be deployed" because there's a demo the next day, I don't have to stress out, because now I'm pretty confident I'll have it up and running in five minutes. There is one big prerequisite for all this to work: the developers have to use the Docker images themselves, ALL the time.

But that's just my experience.


I am not opposed to Docker. Also, I'm with you, someone with a systems background should be involved from the beginning.

I went and saw someone from Docker in December speak at the local meetup in LA, and his slides explained that Docker was all about a workflow where traditional devs handled everything, and ops would be low-skill people who deployed what they were given, no questions asked.

I think your model is reasonable. The model that Docker was advocating that night is wishful thinking bordering on sales flim-flam.


It depends on your culture.

At Netflix they have no traditional ops - just devs with an ops mindset - and do this with AMIs rather than containers. They mostly fired their ops guys for the streaming side - except for the DVD shipping business which still is on your typical Oracle / Java cluster.

Same deal with Google to a point, except they have site reliability engineers working specifically on the plumbing. No one is really in "ops", they're all building software and developers deploy the containers to production without any "ops" handoff other than the software the SREs have previously built.

PostOps / NoOps is the direction this is going - there really should be no active operations of what's inside the container. Just tweaking of the automation and monitoring software fu that supports the containers in the broader environment.

Not everyone is there yet of course. It requires a different mindset than is common in most shops.


It is ideal if ops doesn't have to look inside the containers and CAN just slot them in place.


I guess that would be great, if developers had any kind of UNIX background.

These days kids grow up on an Xbox until they get their first Mac. Seldom do I meet a developer who has run a headless Linux box, understands syslog, understands the filesystem hierarchy, has packaged software, etc.

The 1995-2005 definition of ops was "dudes who insert the CDROM and click through the EULAs." That kind of job doesn't really exist anymore, at least not in startup space. If the low-skill gamers-with-dayjobs were still running ops, and if comp sci programs still had people working from shell accounts on VT100s this kind of thing might make sense, but those people are long gone.

Having people who don't understand UNIX tossing read-only bundles of broken software over the wall to the people who do understand UNIX and distributed systems doesn't make a ton of sense.

In case anyone missed it, I have no problem with Docker. I encourage automated deployment.


"I guess that would be great, if developers had any kind of UNIX background."

Pretty common, even in the enterprise, unless you're dealing with .NET developers.

"those people are long gone"

Not in the enterprise. The startup space isn't where the money is for things like Cloud Foundry or OpenShift or Docker.

In the enterprise, usually the development team is the one to tell the ops team exactly how to install & configure the software, or has to produce a 200-page build guide with screenshots. In these environments (almost every large telecom and retail bank, for example, or the majority of outsourced EDS/IBM/Infosys/TCS organizations) ops has almost no critical thinking capability beyond a handful of "premier support" folks and "architects". I am not talking about the individuals - many of them are trying as best they can given their background. I'm talking about the way they're organized.

IBM organizes their strategic outsourcing teams, for example, in teams of about 100, with 5 to 10 senior-intermediate technical leads (they know syslog, kernel modules, TCP/IP, scripting, etc.), 20-30 intermediate technical leads (they know how to do general sysadmin tasks but don't really understand how to admin storage, the kernel or TCP/IP deeply), and 60-80 freshers that can barely spell grep.

There are exceptions, but this is often is by design to keep the costs down.

IBM earns about $26. Billion. Dollars. in revenue doing this. Multiply that by 5x across the rest of the outsourcers. Multiply it again by 15x to cover the in-sourced shops across the world. This is an area where software is definitely eating an industry.

Docker's target is for teams where

(a) the devs do the work anyway, ops just sits there staring blankly on conference calls, perhaps operating a keyboard if you're lucky (80% of the enterprise IT shops on earth). So, route around them and only retain those who know what they're doing.

or

(b) the ops guys are devs to begin with as they've automated the shit out of everything (Netflix, Google, Startups)


It all depends on your use case. I can see some deployment situations where the ops people would be the ones asking for tamper resistance. If you're running a small shop with only few people and you're not in banking or some other branch of industry where mistakes are very costly then separation of concerns is probably not your main worry.

But if you're operating with some auditor looking over your back then it can become a very powerful argument for using some technology over another.

As with any tool: use it where it is appropriate.

This all assumes that docker will make tamper-proof an option, not mandatory.


Dammit, you are a few months late. I learnt all the things you posted the hard way, and I agree on everything.


Great article. My understanding of Docker is quite new, so take my remarks with a grain of salt.

One thing I would emphasise about the first paragraph is that you still need other provisioning/configuration tools to set up the servers that the Docker containers will be deployed to. I know it would be obvious to most, but you still need to start/stop these machines with a correct Docker install, firewall rules, and probably more. The VM layer has been abstracted away, but it still exists and needs attention.

After spending more than a week looking into Docker, my one frustration is that I have not found a way to develop with it properly. Most of the docs I see are about deploying established apps, but I would love to see tutorials on how to develop, and start from scratch, with it. Does Docker stand in your way when developing, or does it make it easier? Maybe a solution is to create Docker wrappers for our favourite frameworks that would abstract Docker away. Anyway, I'd love to see more on this.


Once you have figured it all out, it doesn't stand in your way at all; I really like developing with it.

Personally, I use aliases for almost all of its commands. I have also created various scripts for automating things like configuring the SSH connection to the container, etc.

Another thing you have to figure out is how to edit the code. My choice was to use a volume (only in dev), and to download the code into a different directory inside the container. Then, on the first run, an init script inside the container cleans the volume and moves the code.
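That arrangement might look roughly like this (paths, image and script names are all hypothetical, and it assumes a local Docker daemon, so it's a sketch rather than a runnable command):

```shell
# Dev only: mount the working copy as a volume; on first run an init
# script inside the container cleans the volume and moves the code in.
docker run -d --name myapp-dev \
  -v "$HOME/src/myapp:/data" \
  myapp-image /opt/init-dev.sh
```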

It has been some painful months and I haven't even finished. I like where I'm going and I'm almost there, but if I had known the work it would take, I think I would have avoided it. But if you are a team, it's definitely worth it. Devs can now install anything they want without asking the ops people for permission (as you can imagine from this statement, I'm more a dev/devops than an ops guy).

Oh, one thing that does make it awesome is its combination with Jenkins, if you are into continuous delivery. With those two, you can easily test all kinds of environments on the same machine without wasting resources, and thanks to Docker's cache it doesn't have to reinstall everything every time - it's really fast.


I'm always wondering how people who pin versions manage software updates (security or not).

If you grow to a few hundred (or thousand, or hundreds of thousands of) systems, it seems pretty hard to test and install combinations, and even if it's a single combination with regular updates, you still need very well oiled and consistent automated testing.


> you still need a very well oiled and consistent automated testing

I would say this is true for any professional team. Nowadays I don't even develop personal projects without continuous delivery. The idea has been out there for years, and this year in particular there have been lots of articles explaining how to do it.


As opposed to not pinning versions and expecting your average Python/Perl/Ruby/PHP/nodejs library author not to introduce some crazy API breakage in a minor point release?


Either way has issues. Yes, if you manage more than a few projects, you already run into libs that get updates and break things, at least weekly ;)


"a huge leap forward for advanced systems administration" - this is the one great misconception.

How does a user-level virtualization technology dating back to the mainframe era, reincarnated as FreeBSD jails years ago, solve the problem of, say, optimizing a "slowed down under heavy load" RDBMS server via re-partitioning, profiling, reindexing, and optimizing client code - which, in my opinion, is what "advanced administration" actually involves?

But OK, nowadays we have new meaningless memes: "orchestration", "provisioning", "containerization".

What punks cannot grasp is that it absolutely doesn't matter how quickly, or with which tool, you install some set of deb or rpm packages into some virtualized environment and run it. Real administration has nothing to do with this. It is all about adaptation to reality, data-flow analysis and fine tuning.


Docker is one of those things I keep installing and uninstalling. I simply can never quite make it work for me as a use-case.

My current commitment is to try looking at raw LXC again, specifically because it's VM oriented (and also because the unprivileged containers look more like what I'd want to target).


The only thing I'd add to the "You don't need to Dockerize everything" section is be careful of dependencies and what the electric power companies call black start capability.

However tempting it might be, don't virtualize all your deep infrastructure like your LDAP and DHCP servers or DNS or whatever, as you'll inevitably find yourself after a power failure unable to light up the virtualization system containing your Kerberos servers because every single KDC is down or whatever. It's happened...

Most virtualization seems to push for the customer-facing "app side", not infrastructure, anyway. But it's something to keep in mind.


Only thing I took issue with was "Instead, explicitly define your servers in your configurations for as long as you can."

Maybe you can get away with this for a small-scale deployment, but "as long as you can" sends a bit too mixed a message. You should only defer a service discovery implementation until you get to the point where you have more than one read path in your stack.

IMO, as soon as you go from load balancer -> app -> cache -> database to having a services layer you should start thinking service discovery.

The simplest bandaid is to leverage DNS as much as possible until you get even bigger, and use static CNAMEs.
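For example, application configs point at stable CNAMEs, and only the CNAME targets move when servers change (a zone-file fragment with hypothetical names):

```
; apps connect to db.internal.example.com; repoint the CNAME to migrate
db      IN  CNAME  pg-master-01.example.com.
cache   IN  CNAME  memcached-01.example.com.
```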


Thank you; as a veteran operations engineer, I couldn't agree more.


I'd like to hear what the consensus on using the Phusion base-image is. It seems to "fix" some pretty important issues with an Ubuntu base image, but I'm not sure they are really even "problems".

I use Phusion's base image almost all the time, especially since I tend to group services together (nginx+php-fpm, for example).

https://github.com/phusion/baseimage-docker


There is one serious issue it helps me fix, in my experience, and that's Upstart: a) it doesn't work in Docker, and b) it is totally opaque to me. It's an obelisk of mystery. You're not supposed to read it, or understand it. I like runit.

Until Ubuntu can transition to systemd, which is supposedly more friendly toward containers, if I'm going to use Ubuntu inside of docker containers I'm going to need to manage my own inits. Even read them. The work phusion puts into runit and making services work under runit in baseimage-docker is invaluable to me.


Some of it also depends on use case...

Some people use containers like lightweight vms.

Others use it as a way to ensure a solid/consistent userland per process.

Others still blend the two in various ways... I tend to think of it somewhere "a little more than just a consistent userland" but not a full running os image.



