Hacker News

Developers probably see this the exact opposite way.

Finally they get to package their stuff in such a way that the ops guys won't be able to change it after it is delivered.

Are docker containers 'tamper proof'?

Can they be locked?



It's awesome, now the devs have to be on 24hr call with the ops people, because they're the ones who built the environment.

Welcome to my world!


We are just implementing this at my work.

IT IS THE GREATEST THING FUCKING EVER!!! (at least on call related)


Hit the nail on the head.

> Are docker containers 'tamper proof'?

Not yet. :)


I've been at events where Docker people were pushing this "separation of concerns" nonsense. Really? The devs build it, hand the black box to ops, and ops has no say in how things are built or even knowledge of how to fix it?

You're pushing to make it so that ops is powerless to fix your "tamperproof" container?

I haven't met very many devs qualified to work as ops engineers, and I've met even fewer interested in jumping in to opsland and joining on-call rotation.

Containers are cool. Docker's "separation of concerns" and "tamperproof" messaging, however, is ridiculous. DevOps was supposed to be about communication and ditching silos.

The vision that Docker is pushing is anti-collaborative and unrealistic. The technology is interesting, the sales pitch is horrible.


I don't mean that in an organizational way at all. My terse perspective may be the cause of the negative reaction, and for that I apologize. My fault for trying to respond while meeting with all of the awesome people at DockerCon.

Edit to add: I speak of a time where people know exactly what is running on their infrastructure, what version, who built it, and any additional metadata they want to track as part of that. Built on top of those primitives, you can create any sort of policy that you have a reasonable chance of enforcing.

About our marketing, it's interesting feedback, because I'd say I agree with the premise of the points you make. Developers are not ops people. Ops people are not developers. And trying to make them the same is not a recipe for success.


It's not about anti-collaboration, it's about making sure your infrastructure doesn't change unpredictably, so you can treat containers as the unit of deployment that matters.

Removing tampering means that anyone (dev or ops) fixes things in a predictable, orchestratable manner. It's the nail in the coffin for box-hugging syndrome ("oh, I'll just telnet into Gandalf to fix that wrong IP address"). Nope: fix the Dockerfile, mint a new container and chuck the old one. The same goes for "helpful" Puppet runs that actually bork the broader system because of an undeclared/unmanaged dependency in your manifests.

Of course there are exceptions to every rule, but immutable infrastructure, it's a thing.
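The "fix the Dockerfile, mint a new container, chuck the old one" loop described above can be sketched in a couple of Docker CLI calls. This is a minimal illustration, not anyone's actual setup: the image name `myapp`, the tag `v2`, and the container name `web` are made up, and it assumes a Docker daemon is available.

```shell
# Edit the Dockerfile to fix the wrong IP address (no telnet into the box),
# then bake a fresh image and replace the running container wholesale.
docker build -t myapp:v2 .          # mint a new image from the fixed Dockerfile
docker stop web && docker rm web    # chuck the old container
docker run -d --name web myapp:v2   # slot the replacement into place
```

The point of the pattern is that the running container is never mutated in place, so whatever is in production is always reproducible from the Dockerfile and image tag.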


Go to a meetup, listen to what the Docker guys are saying.

It isn't this. They are explicit that their vision is for developers to bake up a container, and for ops to mindlessly deploy it. They use the shipping container metaphor: ops dockworkers just slot containers in to place, only the devs get to see what's inside.

Yeah, immutable is a thing. Docker messaging is something else entirely.


My current job is sysadmin, but I come from development, and what you say is rubbish. You could just as well say that the binary the devs gave you is a black box.

The Docker images I have to deploy are built on the CI system, and I get them from there. In practice, however, it's me writing the Dockerfiles, and the devs getting better insight into Docker best practices for deployment. I'm also involved from the beginning, because I am the Docker magician, and this helps to avoid a lot of problems. Things like "download, compile and deploy bleeding edge [stable version]+2" of some software in deploy documents will never make it beyond the fantasy of some junior developer.

I have currently set up two projects to be completely Docker based: one was brand new, the other a conversion of a huge multi-component project. Since the transition, deployments to QA and test already go a lot smoother than they ever did before; the converted project isn't even supposed to be deployed there yet, but it already works without a problem.

The day they submit a service request and five minutes later are standing next to my desk saying "it urgently needs to be deployed" because there's a demo the next day, I don't have to stress out, because now I'm pretty confident I'll have it up and running in five minutes. There is one big prerequisite for all this to work: the developers have to use the Docker images themselves, ALL the time.

But that's just my experience.


I am not opposed to Docker. Also, I'm with you, someone with a systems background should be involved from the beginning.

I went and saw someone from Docker in December speak at the local meetup in LA, and his slides explained that Docker was all about a workflow where traditional devs handled everything, and ops would be low-skill people who deployed what they were given, no questions asked.

I think your model is reasonable. The model that Docker was advocating that night is wishful thinking bordering on sales flim-flam.


It depends on your culture.

At Netflix they have no traditional ops - just devs with an ops mindset - and do this with AMIs rather than containers. They mostly fired their ops guys for the streaming side - except for the DVD shipping business, which is still on your typical Oracle/Java cluster.

Same deal with Google to a point, except they have site reliability engineers working specifically on the plumbing. No one is really in "ops", they're all building software and developers deploy the containers to production without any "ops" handoff other than the software the SREs have previously built.

PostOps / NoOps is the direction this is going - there really should be no active operations of what's inside the container. Just tweaking of the automation and monitoring software fu that supports the containers in the broader environment.

Not everyone is there yet of course. It requires a different mindset than is common in most shops.


It is ideal if ops doesn't have to look inside the containers and CAN just slot them in place.


I guess that would be great, if developers had any kind of UNIX background.

These days kids grow up on an Xbox until they get their first Mac. Seldom do I meet a developer who has run a headless Linux box, understands syslog, understands the filesystem hierarchy, has packaged software, etc.

The 1995-2005 definition of ops was "dudes who insert the CDROM and click through the EULAs." That kind of job doesn't really exist anymore, at least not in startup space. If the low-skill gamers-with-dayjobs were still running ops, and if comp sci programs still had people working from shell accounts on VT100s this kind of thing might make sense, but those people are long gone.

Having people who don't understand UNIX tossing read-only bundles of broken over the wall to the people who do understand UNIX and distributed systems doesn't make a ton of sense.

In case anyone missed it, I have no problem with Docker. I encourage automated deployment.


"I guess that would be great, if developers had any kind of UNIX background."

Pretty common, even in the enterprise, unless you're dealing with .NET developers.

"those people are long gone"

Not in the enterprise. The startup space isn't where the money is for things like Cloud Foundry or OpenShift or Docker.

In the enterprise, usually the development team is the one to tell the ops team exactly how to install and configure the software, or has to produce a 200-page build guide with screenshots. In these environments (almost every large telecom and retail bank, for example, or the majority of outsourced EDS/IBM/Infosys/TCS organizations) ops has almost no critical-thinking capability beyond a handful of "premier support" folks and "architects". I am not talking about the individuals - many of them are trying as best they can given their background. I'm talking about the way they're organized.

IBM organizes their strategic outsourcing teams, for example, in teams of about 100, with 5 to 10 senior-intermediate technical leads (they know syslog, kernel modules, TCP/IP, scripting, etc.), 20-30 intermediate technical leads (they know how to do general sysadmin tasks but don't really understand how to admin storage, the kernel, or TCP/IP deeply), and 60-80 freshers who can barely spell grep.

There are exceptions, but this often is by design to keep the costs down.

IBM earns about $26. Billion. Dollars. in revenue doing this. Multiply that by 5x across the rest of the outsourcers. Multiply it again by 15x to cover the in-sourced shops across the world. This is an area where software is definitely eating an industry.

Docker's target is for teams where

(a) the devs do the work anyway, and ops just sits there staring blankly on conference calls, perhaps operating a keyboard if you're lucky (80% of the enterprise IT shops on earth). So, route around them and retain only those who know what they're doing.

or

(b) the ops guys are devs to begin with as they've automated the shit out of everything (Netflix, Google, Startups)


It all depends on your use case. I can see some deployment situations where the ops people would be the ones asking for tamper resistance. If you're running a small shop with only a few people and you're not in banking or some other branch of industry where mistakes are very costly, then separation of concerns is probably not your main worry.

But if you're operating with some auditor looking over your back then it can become a very powerful argument for using some technology over another.

As with any tool: use it where it is appropriate.

This all assumes that docker will make tamper-proof an option, not mandatory.



