Recently I started using colima[0], a drop-in replacement for Docker Desktop on Mac, and have seen an increase in performance and battery life. You can use all the normal docker and docker compose commands.
It does not have a GUI but you can use the Docker extension on VS Code to have an overview of running containers.
Replaced Docker Desktop with `colima` a few months ago as well, and I've been using it daily since then. I haven't had any issues; sometimes I just delete and recreate an instance to upgrade the docker version, and it only takes a few minutes.
I like the fact that I decide when I upgrade, not Docker Desktop nagging me every week.
Looking to convert, but I still can't understand how this is more performant. Docker Desktop has lots of engineering going into performance crossing the host/VM barrier. IIRC lima just pipes over SSH. How could that be faster?
Docker Desktop is an Electron app, which might explain some of the performance and battery differences. The containers don't run in Electron, but that extra copy of Chrome is always running in the background.
I only use Docker Desktop for one thing - to see if one of my containers has accidentally started itself as amd64 instead of arm64. Sadly Colima doesn't seem to provide a way to do that.
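For what it's worth, the plain docker CLI can answer that without a GUI; a couple of options (`mycontainer` is a placeholder name):

```shell
# Ask the running container directly (prints x86_64 or aarch64):
docker exec mycontainer uname -m

# Or inspect the image the container was created from:
docker image inspect --format '{{.Architecture}}' \
  "$(docker inspect --format '{{.Image}}' mycontainer)"
```

Both work against colima too, since it speaks the normal docker API.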
I found even with ARM Docker containers were already slow as it was.
I also never understood the justification for the added complexity it created, but I also don't have a dedicated ops team at my job to solve my problems.
My personal experience: earlier this year at work, we migrated everyone to colima and I had to support devs with their issues. So many small issues kept popping up; it was definitely not a drop-in replacement for us.
The higher ups eventually let us just buy docker desktop and we are all happier now.
Please help me understand: why is `brew install docker` not sufficient, and why do you also need Colima or Docker Desktop? Is it so that there is a docker _daemon_ installed, which `docker` doesn't ship with?
macOS does not support running docker containers (or vice versa, depending on your point of view). Instead you need a VM running Linux; Docker Desktop / Colima runs this VM for you.
I assume `brew install docker` just installs the docker CLI/etc, which can run on non-Linux OSes. However the docker daemon can only run on Linux, so something needs to setup a VM for it.
Homebrew handles this kinda poorly, so people are often confused. `brew install docker` installs the Docker CLI. `brew install --cask docker` installs Docker Desktop, and if you've permanently tapped homebrew-cask you'll get that instead of the Docker CLI.
Same. My requirements are very basic, so the switch to colima was basically seamless. I also appreciated being able to avoid Docker Desktop constantly trying to update itself (which is what ultimately motivated me to make the switch).
As a bonus, you can install the Docker CLI (e.g. `brew install --formula docker`) and use that to interact with any containers you start with colima.
Switched to colima a while back after the licensing debacle and have been mostly happy with it. Only real issue has been with some tools making assumptions about the docker socket location, which was easy to fix in the config file.
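For anyone hitting the same thing, the fix in my case amounted to pointing tools at colima's socket (the path below is colima's default and may differ per setup — check `colima status`):

```shell
# Tell docker-compatible tools where colima's socket lives
export DOCKER_HOST="unix://$HOME/.colima/default/docker.sock"

# Tools that hard-code /var/run/docker.sock can be handled with a symlink:
#   sudo ln -sf "$HOME/.colima/default/docker.sock" /var/run/docker.sock
```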
Thanks for sharing - I missed something like that. Docker is too enterprisey for my taste. I'm using a VPS with docker right now, which works well enough, but having no volume mounts is not very nice.
It's been a real doozy for me too, doubly so for local k8s in Docker (kind/k3s). I've tried a whole lot of variations; it's hard, and harder still to scale across hundreds of devices.
Not OP, but if it's helpful, I use containers for all 3rd party services I need to run for development. (e.g. Postgres, Redis, Localstack, etc). This makes it easy to onboard new developers as they just have to run `docker-compose up` and not worry about those. It also allows me to easily use different versions of those services in different projects or even branches of a project.
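As a sketch of what that looks like (the service versions here are arbitrary examples, not a recommendation):

```shell
# Write a minimal compose file for third-party dev services
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  cache:
    image: redis:7
    ports:
      - "6379:6379"
EOF
```

A new developer then only needs `docker-compose up` (or `docker compose up -d`), and swapping the image tags per branch changes the service versions.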
somehow colima and my corporate vpn (using vpnc) keep deleting each others routes (colima loses network access when you turn off vpnc) and neither podman machine nor rancher desktop have this issue.
There's also Rancher Desktop in the same space, which includes k3s as a local K8s solution.
For personal use I found it great and lighter than Docker Desktop. At work, unfortunately all options but Docker Desktop have issues with either 1) Our Cisco AnyConnect VPN, or 2) Our authenticated http proxy. Couldn't find anything else providing a container runtime + a local k8s on MacOS that works in this environment. So we just got Docker Desktop licenses.
I use Rancher Desktop on an i9 with 32 GB of RAM. It starts in less than a minute. I also have Teams and Slack. Sometimes I have over 200 browser tabs open (yes, I have a problem). The UI is responsive pretty quickly.
A lot of the delay has to do with starting VMs. You need one to run Linux containers on Mac/Windows.
Disclaimer: I started Rancher Desktop. I might be biased.
Last week I switched from Rancher Desktop back to Docker Desktop because I couldn't get VS Code Dev Containers to work properly. I was stumped because it should work out of the box. However, it didn't work on a fresh install of Docker Desktop either. It turned out that when Rancher Desktop was first released I had installed it and set up a Docker alias in my ~/.zshrc:
alias docker=nerdctl
After removing that alias, everything worked first try with Docker Desktop. However, after starting up a couple of Dev Containers and a debugger, my machine crawled to a halt and was memory-swapping like there was no tomorrow. I found that this behavior could be normal for Docker Desktop. So I think I'm going to switch back to Rancher Desktop (or perhaps Podman Desktop) sooner rather than later.
Sorry for using this comment as a way to get ahold of you but there is no DM function on HN and the comment I was wondering about was posted by you 18 days ago and the comments are locked now.
I have been controlling my water heater with HA for a few months and I too am risk averse when it comes to legionella. You have taken it to another level by replacing the sensor inside with a DS18B20. I was also interested in doing this but I don't really want to drill into it. How did you install the sensor? Is the water not under pressure? I've just measured water temperature at the tap with a meat thermometer and under flow into a container to determine that the temperature falls in the range that is safe for legionella growth. Would love to look into a proper sensor again if you have any information about that.
Again, sorry for hijacking your completely unrelated comment.
First of all, I'm also concerned about legionella growth. However, all the sources I can find suggest that a legionella run above 60 degrees Celsius once a week should be enough to kill all the bacteria. So that's what I'm doing.
My sensor installation was extremely easy. My warm water heater is in fact a barrel within a barrel, with some insulation material between those barrels. If it were only metal, you would lose too much heat. In the outer barrel (excuse me for lack of a better term) there was an analogue thermometer. This thermometer has a metal back so it makes direct contact with the inner barrel. I just pulled it out and replaced it with a DS18B20 probe. Again metal against metal, so maximum contact.
Finally I have calibrated the sensor by running it with hot water at the tap and measuring the temperature both there and on the warm water heater (with 2 DS18B20). I've done this for several temperatures with intervals of 10 degrees. I've ignored possible sensor deviations. Finally I used Excel's INTERCEPT and SLOPE functions on the range to calculate the value needed for a linear equation. I have used the formula:
boiler_temperature * SLOPE + INTERCEPT
My math is probably far from perfect and I might revisit it one day. But it works for me currently. I also visualized the measurement results with my calculations and they seem to be pretty accurate. Especially when accounting for missing measurements.
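The same least-squares fit can be done outside Excel. A small sketch with made-up readings (the numbers are illustrative, not my actual calibration data):

```shell
# Hypothetical calibration pairs: boiler sensor reading, measured tap temperature (°C)
cat > readings.txt <<'EOF'
30 32.5
40 43.1
50 53.8
60 64.2
EOF

# Ordinary least squares: tap_temp ≈ sensor * SLOPE + INTERCEPT
awk '{ n++; sx += $1; sy += $2; sxx += $1*$1; sxy += $1*$2 }
     END {
       slope = (n*sxy - sx*sy) / (n*sxx - sx*sx)
       intercept = (sy - slope*sx) / n
       printf "SLOPE=%.4f INTERCEPT=%.4f\n", slope, intercept
     }' readings.txt
# → SLOPE=1.0580 INTERCEPT=0.7900
```

This computes the same values as Excel's SLOPE and INTERCEPT functions, which you then plug into the linear equation above.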
I've also added my personal page to the 'About me' on here, and I have the same Reddit username if you want to DM me there.
"Docker Desktop" uses a VM for container execution on all supported OSes - Windows, macOS and Linux - default installs are not running "docker" natively at your local CLI if you are using "Docker Desktop" to run docker, even on Linux.
"Docker Desktop on Linux runs a Virtual Machine (VM) so creates and uses a custom docker context desktop-linux on startup." I had previously assumed it would be native on Linux, but apparently not. To be clear, this only applies when talking about "Docker Desktop" installations - not the same thing as "docker" etc etc.
I also think the performance is pretty bad, and I've used it on all three OSes at one time or another. I simply never install "Docker Desktop" on Linux typically anyway - it adds so little value over a basic native local docker install there.
The only thing I think "Docker Desktop" is really great at is creating FUD around using the alternatives, or even plain ole free "docker" - but that is probably its primary means of generating revenue for the company. I've seen Docker Desktop licences get deployed everywhere recently, regardless of the merits. So many users I encounter don't even understand the distinction now between docker and Docker Desktop, or that there is one.
Running in a VM is there for very good reasons: where things get installed on the base system, how you can reset the environment, how you deal with variations in whatever else is installed on the host, and more. It's difficult to do all of this well outside of a VM.
Also, there are many people who want that VM boundary. We found this when designing for Linux in Rancher Desktop and talking with people about it.
The DD WSL2 backend is also creating a VM in Hyper-V. Actually, it's creating two VMs (docker-desktop and docker-desktop-data). It's also running a proxy in your WSL2 VM so you can access the docker server. It's all a bit convoluted TBH.
I actually decided today to stop using DD on my Windows machine and just run docker native inside the WSL2 VM instance instead. Still not sure what solution I'm going for on my Mac.
This is the way. It's even easier if you just winget install RedHat.Podman - that'll give you a tiny Fedora image where "docker" (podman) just works straight out of the box. No need to worry about getting iptables-legacy packages for your WSL distribution or whatever. It's so simple and lightweight, it feels like a much better solution than anything Docker ever did for Windows.
I do the same for the most part. I have the docker daemon setup both locally in Windows and also in WSL2. I then have multiple contexts setup in Windows so that I can easily switch between Windows/Linux containers from my host terminal. Thus far, I've not experienced any issues.
The initial setup was a little more complex than just running Docker Desktop, but since then, it's running flawlessly.
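For anyone replicating this, the multi-context setup looks roughly like the following (the endpoint values are assumptions - how you expose each daemon varies):

```shell
# Register the daemon running inside WSL2 (here assumed reachable over TCP)
docker context create wsl2 --docker "host=tcp://127.0.0.1:2375"

# Windows-native containers typically use the default named pipe
docker context create win --docker "host=npipe:////./pipe/docker_engine"

# Switch between Linux and Windows containers from the host terminal
docker context use wsl2
docker context ls
```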
Talking about WSL2 in general: it creates one VM, with mount namespaces per "distro". That's why everything shares the same network (they didn't set up separate network namespaces). Also, the GUI support in Windows 11 is a separate mount namespace.
Well, yes, but it creates those when you log in. The proxy fixes some corp firewall issues (see other comments here). Start up is faster overall for me versus the normal backend.
Also known as a VM, but maybe you get better warm up times. My main complaints are with the "docker desktop" app experience - not so much "docker" itself VM or otherwise. It adds so little to docker for the license cost too, at least so far.
You can simply install docker natively in a VM/WSL2 yourself and avoid Docker Desktop and its licensing altogether. "Docker Desktop" is/was really a tool to simplify getting a Linux kernel (the critical dependency for cross-platform container dev) onto non-Linux platforms IMO, which seems wild to pay for in its current state, especially when Windows has a great built-in VM via WSL2 anyway. That it even exists on Linux now (a recent addition) is kind of amusing - on macOS and Windows there is at least some argument to be made that it simplifies getting the kernel...
The WSL2 backend is pretty broken from my experience, it will often lock up using 100% CPU after I put my laptop into sleep/hibernation, and the only reliable way to bring it back seems to be killing the VM in task manager and restarting. Dunno if this is Windows' fault or Docker's though.
Yup. I have a Mac Pro 2019, 12 core Xeon, 192GB, and the Desktop app and VM location reside on a RAID0 M.2 array that can throughput 6GB/s. Still a minute+.
You can use wsl-gvproxy(*), which uses our CRC usermode network stack to allow use with VPN. We are working on making this an option for podman machine. Alternatively, or to test, you can use CRC from our crc-org project and run the podman preset. This uses a dedicated VM using native hypervisors and the gvproxy setup.
I am the teamlead of CRC and work on the Windows enablement of podman machine, with the podman desktop team. Gladly take questions here or by email.
I had similar issues with a different VPN/Proxy at an earlier role. I solved with https://github.com/sakai135/wsl-vpnkit and trusting the root certificate of the proxy on the rancher desktop WSL2 vm (Assuming you're on Windows as I was).
Docker desktop pays for itself by solving these issues though IMO (I wasn't able to get a licence at the old role however)
Podman works with Docker Compose enough to run stuff I've had to deal with at work and home. I prefer to use the podman-compose script usually, since it does offer some small advantages when using Podman. That said, even with the podman-compose script, I ran into an issue where some syntax somewhere needed to be adjusted for Podman; I can't remember exactly what and I don't have access to the repository to check, but it was a security-related flag, and it was fixed in master at some point, I believe.
Getting Podman to run CUDA/Nvidia workloads was a bit more challenging, but that can also be done.
Docker Compose works fine with Rancher Desktop. You can use it with Podman on Linux too, you just need to enable the socket since normally Podman does without - I'd imagine there's some way to enable this on Podman desktop too.
For Rancher Desktop, Docker Compose works with Rancher Desktop when you choose dockerd (moby). If you choose to use straight containerd (with nerdctl as a CLI) than compose isn't going to work.
At the moment podman machine on macOS uses QEMU and its filesharing stack. QEMU allowed us to move quickly utilizing the same deployment strategy. However, I work on CRC (a related project) and we have a driver based on the Virtualization framework using virtiofs. We hope to integrate this in `podman machine` soon, as it will improve performance. Ideally we want to boot with the kernel and ramdisk provided by the image, though this isn't possible until macOS 13 (which brings EFI support). So, yes... hopefully soon. We just want to make sure we provide a stable and maintainable solution.
The virtiofs implementation we use is the native one provided by Apple according to our virtiofs spec. https://virtio-fs.gitlab.io/
You can try running https://github.com/crc-org/crc with the podman preset (!) to test it. It would not be exactly the same how podman machine will use it eventually, but might help to give an idea of performance or issues we can improve on. We have seen a lot of users being more than content as it also works in a vpn environment. Note that the CRC tool primarily aims at OpenShift deployment... This is a different preset (resource intensive). Only available as an installer with our tray (sorry about this).
The driver we use is https://github.com/crc-org/vfkit and I am sure Christophe could share a method to just run the VM with our driver. HMU by email if you prefer.
Thanks for the info. I don't actually use MacOS myself, but I'm interested in getting faster userspace networking and filesystem sharing for Linux and Windows hosts. virtiofs is interesting but it's unfortunate that it requires a daemon and AFAIK doesn't run on Windows hosts.
Right. HyperV only does eh... Well, they do 9P for WSL2, but not for HyperV itself. This is one of our issues. We work with the virtiofs team to get this resolved, but their implementation targets Linux first. We hope to see Microsoft adopting this too. We are glad to help. Especially as the current 9P implementation for WSL2 has known syncing and performance issues. On a VM you would have to resort to CIFS, sshfs or something else... Which are all not ideal for locally attached storage
I have been experimenting with the arm64 Windows dev kit; none of Docker Desktop, Podman Desktop, or Rancher Desktop had arm64 builds. Installing the amd64 builds did not work either.
I was surprised to find out wsl2 now supports systemd
I work on the Windows enablement team of podman machine and am glad to take questions or feature requests.
We are looking into enabling systemd, though as noted, it isn't GA yet. The functionality only works on Win11, so we have built a Win10 workaround, but it has to be based on feature detection.
ARM binaries are being looked at, but I can't commit to a timeframe for when they will be delivered.
I've been running Docker on WSL 2 for a while now, without systemd or Docker Desktop. There's no need to wait for systemd.
You can install Docker following their Linux guide and then in your shell's profile file add:
    if grep -q "microsoft" /proc/version &>/dev/null; then
      if service docker status 2>&1 | grep -q "is not running"; then
        wsl.exe --distribution "${WSL_DISTRO_NAME}" --user root \
          --exec /usr/sbin/service docker start >/dev/null 2>&1
      fi
    fi
Basically the first time you open a terminal it'll pause for 5 seconds while the Docker daemon is started but it'll stay started for the duration of your Windows uptime even if you close your terminal. The next time you open a terminal it'll be instant.
Been running Docker under WSL for a few years. Don't need systemd if you're either happy to start it manually with an alias or poke it into one of your profile/logon scripts.
The experience is light-years ahead of the monstrosity that is Docker Desktop
Thanks for this! I was curious if the windows dev kit would be a good arm64 container build box, but I wasn't sure if getting Docker or buildah etc. on there would be a pain. Sounds like this is totally doable.
The problem with podman at the moment (IMO) is version drift. RHEL/Fedora and friends get the latest and greatest (4), Debian/Ubuntu are stuck on 3.x. This isn't a problem with Docker, which has tight control over what is deployed. This means how you use Podman directly or indirectly via tools and plugins may change.
> This isn't a problem with Docker, which has tight control over what is deployed.
That contention is heavily dependent upon how Docker was installed. Docker desktop, yes. Command line Docker on Linux? That used to be much more complicated and depended on if you had an OS vendor provided version or had a Docker provided repo install.
RHEL’s repo version always seemed particularly out of date to me.
However, the versions provided by the distro have gone through more tests. We are still improving this part; ideally you should be able to pick a version yourself.
Yes. Podman in podman works, and this is how I sometimes tested on coreos without installing a new package. These days I mostly use a VM instead. Personal preference
> The problem with podman at the moment (IMO) is version drift. RHEL/Fedora and friends get the latest and greatest (4), Debian/Ubuntu are stuck on 3.x.
Podman has no control on this. Each Linux distribution deals with packaging in its own way. If this packaging is not to your taste, podman can easily be installed from alternatives sources: unofficial deb packages, or plain binaries.
BTW, podman v4 is in debian/experimental. I don't know why it didn't land into unstable yet.
> This isn't a problem with Docker, which has tight control over what is deployed.
AFAIK, Docker (now Moby) has no control over what is packaged in Debian/Ubuntu (unless the Debian maintainer works for Moby?). If you install Docker from the official Debian repositories, the "docker" package is an alias for "docker.io", which is a version 20.10.19 in debian/unstable. The latest 20.10.21 is not yet packaged by Debian.
A workaround to get the latest and greatest podman on Debian/Ubuntu is to use openSUSE's Kubic repo - it's even mentioned on the podman installation page [0]. But they don't recommend it for production use (though I haven't noticed any issues so far on Ubuntu 22.04 and 22.10).
But your recommendation stands: better to use the latest release as indicated on podman's GitHub releases page. Those have seen more testing and integration use.
Alternatively you can run `podman machine` which sets up a VM with the stable version.
I really tried to use Podman, but I kept running into issues trying to get a rootless deployment on RHEL that used a protected port (53) and it gave me so much trouble that I uninstalled Podman and installed Docker.
Maybe I can try again, but it was frustrating enough to turn me away for a while.
It's not enough in some cases; in a multi-container pod (with `podman play kube`) I still got permission issues binding to privileged ports (which were permitted by the sysctl). There was a GitHub issue about it.
In my experience, if you need to bind to < 1024 ports, just run the container as root. Also if you want finer-grained control of host-side permissions of your mounts (i.e. map host uid 1000 to container uid 99). Otherwise rootless is fine.
Note that that sysctl indicates it's namespace-specific, so it's possible it needs to be set for the application's namespace (maybe as well) via `--sysctl` on the podman command line.
I did a quick test and it works ok for me to bind to low numbered ports just passing that param to podman run.
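For reference, the knob in question is `net.ipv4.ip_unprivileged_port_start`; roughly:

```shell
# Host-wide: let unprivileged processes bind ports >= 53
sudo sysctl net.ipv4.ip_unprivileged_port_start=53

# Or scoped to the container's network namespace, as noted above:
podman run --rm --sysctl net.ipv4.ip_unprivileged_port_start=53 \
  -p 53:53/udp docker.io/library/alpine true
```

The second form is what worked in my quick test; whether it helps inside a `play kube` pod may depend on the podman version.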
Running into edge issues and always trying to figure out if it was my container, Podman, or SELinux is making me very close to installing regular docker.
I want to love SELinux - it sounds great, and I can see the real value in it. I tried to deploy some containers on an SELinux-enabled system and it would block everything. OK, expected - and it actually has great tools for discovering, suggesting, and creating rules to poke holes where needed, but it only suggests those things after the fact.
So basically you're left deploying a container and waiting for something to break, then inspecting SE logs and patching rules in (and folding all that back into your ansible or whatever).
Maybe this is OK if it's your application and you can dump it on a staging environment and push it through your 100%-coverage end-to-end test suite to capture all requirements, but if it's a third party - or maybe your test coverage isn't quite where you want it to be - you're basically walking out on the rope bridge blind.
Not sure there is really a way to solve that. Perhaps if there were a `pledge`/`unveil`-like system where all requirements could be discovered up front... Otherwise it felt like you're stuck running with SELinux disabled or permissive for months collecting data (after every deploy).
Maybe I just missed an obvious route to disable swaths of SELinux for rootless containers - where it probably doesn't have as much of an application.
Yeah, use `ps -Zp PID` (from outside of a container) to check its label.
Some other useful sources of info - the container_selinux man page tells you about container_t. And an index of Dan Walsh's blog posts about containers & SELinux can be found in the README of <https://github.com/containers/container-selinux>.
And of course the RHEL documentation for creating custom SELinux policies that inherit from container_t (for instance, I use one to allow processes within a container to read their certificates out of /etc/pki/tls, which is normally forbidden). It's documented here: https://access.redhat.com/documentation/en-us/red_hat_enterp...
> Maybe I just missed an obvious route to disable swaths of SELinux for rootless containers - where it probably doesn't have as much of an application.
But a more secure way is to add ":z" or ":Z" to the volume, and podman will auto-relabel the source dir. Finally, you can use the nuclear option: "--privileged". It's still more secure than docker's because you are limited by your user's capabilities.
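A quick illustration of the relabeling suffixes (the paths are placeholders):

```shell
# :Z — private label, for a volume used by this container only
podman run --rm -v ./data:/data:Z docker.io/library/alpine ls /data

# :z — shared label, when several containers need the same volume
podman run --rm -v ./data:/data:z docker.io/library/alpine ls /data
```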
Am I just a dum dum for not getting this to drop-in replace Docker Desktop for my relatively simple projects? Has anyone else experienced the problematic practicalities of switching, or should I just spend a bit more time with it?
I'm pretty sure the main reason there's a push to move from Docker Desktop now is because earlier this year they started charging larger businesses/teams to use it.
So if you're just using it for yourself you probably don't need to bother
Oh I'm aware of that, my point is that it isn't the smooth transition folks seem to make it out to be (unless, of course, I'm a dum dum which is possible).
It's not pleasant? I don't know what Docker Desktop is (or what the point is) but Docker Server is two lines of shell on most distros (add repo and install).
I use CentOS, Ubuntu and Arch and have to regularly use third-party repos, PPAs or the AUR - this isn't a particularly uncommon thing to do when installing software. And it beats running a shell script, which is how k8s stuff works most of the time :P
And pods, and not needing a daemon to run them, among other things. Docker is still easier precisely because it has a daemon that automatically starts up the containers that you've configured to run at startup, without the need to create systemd unit files.
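Worth noting that podman can write those unit files for you, which takes most of the sting out of it (the container name below is a placeholder; newer podman versions also offer Quadlet for this):

```shell
# Generate a user-level unit for an existing container named mydb
podman generate systemd --new --name mydb \
  > ~/.config/systemd/user/mydb.service

systemctl --user daemon-reload
systemctl --user enable --now mydb.service

# Let user services start at boot without an active login session
loginctl enable-linger "$USER"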
> Another one is also that Docker is really not pleasant to install on Linux.
I'm curious on which Linux did you encounter issues while installing Docker? I cannot comment on the (to me somewhat pointless) Docker Desktop GUI installation on Linux, but I can confidently report that installing and using docker engine on Ubuntu, at least, is quite trivial and clearly documented[0] on the website.
Not them, but my employer makes it a major PITA to add non-standard repositories on the corporate Ubuntu image because of the weird and non-standard way Apt is configured. The "docker.io" package available in the official Ubuntu repository is typically horribly out of date.
Additionally, (and admittedly this isn't really Docker's fault), the default IP range for Docker's network happens to conflict with my employer's internal network, which is great fun to debug if you forget to change it.
I should have been more specific: It's been a few months (maybe even a year?) since I last checked, but that package used to not include buildx which I need, that's what I meant by out of date.
First off: it has been some time since I last installed Docker. It actually was on Ubuntu, and a properly current version was not in the distro repositories, so I had to add a new repository and it installed plenty of extra dependencies. It's not hard, but also not really pleasant.
It also seems like Docker is now able to run rootless as well, so my nitpicks are actually more minor than I originally thought. It's still not daemonless, but it would work for my use case still I think.
Weird - I have no need I can see for rootless, and on multiple machines I've had no issues installing Docker on Ubuntu using either the Ubuntu repo or the Docker repo. It just works.
Besides the licensing issues, I found it bloated and flaky. For me, the friendly GUI just added pain. I use docker in Hyper-V as my home media server instead. WSL2 also works.
Hyped by those comments, I went and tried both colima and podman.
My use case is the simplest of them all:
A docker compose file for postgres12 with a folder attached as permanent volume.
- colima couldn't do chmod, and it failed when I tried to restore a database to the folder. I tracked down an issue on GitHub - known stuff, and the issue is a couple of years old.
- podman made it work somehow, and the data restore was under way - then it ran out of file descriptors. Again, a known bug with postgres and other scenarios, tracked on GitHub for a few years.
I'd just love to know if Docker is ever going to fix the issue where my computer kernel panics when going into hibernation if my MacBook is unplugged and Docker is running. There's been an open, reproducible issue for quite a while that has affected lots of other folks too, but no real recognition from Docker.
Google Mac hibernate issues. Over 1 million results. Windows, over 3 million.
I gave up on hibernation being a valid concept in the 1990s; there are too many things that can go wrong, and computers are more complex today. I don't use hibernation and Docker has no issues.
You are being weirdly antagonistic about this. Do you work for Docker? Random search result counts (not unique users btw) have nothing to do with a reproducible bug in Docker. Why are you so caught up on this? You are welcome to go to the issue queue in GitHub to read the steps to reproduce for yourself if you want.
huh, so that's what's happening to me. I was wondering what the likely cause of the kernel panic issue might be. This might be the impetus I needed to try one of these Mac docker alternatives.
Talking about containers: is there an easy way to run “system” containers? This is, containers that run systemd and everything else you would expect to be running on a normal Linux OS. I rely heavily on VMs to simulate cloud environments, but I would love to use lightweight containers instead. Also, these “system” containers should be able to run containers inside them as well (docker in docker?).
I saw something on github the other day that may work (can’t remember the name, something about “box”), but it wasn’t available for Macos.
You are probably referring to Sysbox (https://github.com/nestybox/sysbox), which I believe will meet your requirements (systemd, inner containers, security, etc).
Btw, Sysbox is already supported in Docker-Desktop (business tier only), so you can easily do what you want with this instruction:
$ docker run -it --rm -e SYSBOX_SYSCONT_MODE=TRUE ghcr.io/nestybox/ubuntu-focal-systemd-docker:latest bash
Disclaimer: I'm Sysbox's co-creator and currently working for Docker.
Not entirely sure about the use case, but podman itself integrates well with systemd, e.g. for creating services. It sounds like you want a full systemd OS in a container instead of a VM, though - that's not best practice, and a VM might be better for it, even though Podman can run systemd-based services. And podman-in-docker or podman-in-podman is not an issue.
Podman machine starts a VM that is just a Fedora distro with systemd, and you can create multiple instances. This way you can run these system containers easily.
systemd-nspawn, LXC, and podman should all be able to do that (though doing recursive containers can be kind of weird). In theory https://github.com/firecracker-microvm/firecracker should as well, it runs very lightweight VMs.
Gotcha, I thought I may have misunderstood. The wording made it sound "also, it's pretty cool because Podman (as opposed to DD) can do x, y, z" which didn't make sense to me. I was thinking "obviously Podman can do that if it's meant as an alternative to DD"
Convenient timing! I installed podman last night and was playing with it. So far it feels just like Docker. Too bad it'll take much more time before I get to seriously recommend this at work... But that doesn't mean I can't use it for my personal projects :)
Nice to see some alternatives to Docker for Mac. If you want to keep it simple, simply run a VM (on your device or remotely) and let the docker client talk to it via SSH. I used QEMU for this a while ago, as podman seemingly does now.
All you need to do for this is set the DOCKER_HOST variable. It gets slightly tricky with e.g. volume mounts, of course, but otherwise this works fine, including with docker-compose. The only other thing you need is some port forwarding from the VM, which you can do with ssh as well.
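A rough sketch of the SSH variant (the host name and port are assumptions):

```shell
# Point the docker CLI (and docker-compose) at a remote Linux VM
export DOCKER_HOST="ssh://user@docker-vm"

# Ports published inside the VM need forwarding back to your machine:
#   ssh -N -L 8080:localhost:8080 user@docker-vm
```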
Podman, Colima, and other tools are essentially just nicer versions of this with a few more features.
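As a sketch of the DOCKER_HOST-over-SSH setup described above (the hostname and port are placeholders, and this assumes key-based SSH access to a VM that already runs dockerd):

```shell
# Point the local docker CLI at the remote/VM daemon over SSH
export DOCKER_HOST=ssh://user@my-docker-vm
docker ps            # now lists containers running inside the VM
docker compose up -d # compose works through the same connection

# Forward a container port from the VM back to the host
ssh -N -L 8080:localhost:8080 user@my-docker-vm
```

Volume mounts are the tricky part, as noted: paths in `-v` flags resolve on the VM's filesystem, not your Mac's.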
I’m using (as individual, so for free) Docker Desktop on my M1 Apple silicon. It works great. Could someone tell me the benefits of switching to Podman? I like that Podman is open source, though.
I don’t see Multipass mentioned here. We switched from Docker desktop when the license changed and it’s been relatively smooth. The installer is friendly, there’s a vm with docker preinstalled and all we had to do was configure network bridging.
Echoing others, is there a benefit to running this over Docker? I recently set up portainer and Docker on my homelab and had everything running in about 30 min. Is there a benefit to migrating to podman?
The main benefits touted are that Podman can run rootless containers and it doesn’t need a daemon compared to Docker. However those comparisons are less relevant now than they were a while ago because Docker can now run rootless containers and Podman has developed a heap of systemd hooks that effectively use that as the daemon. It does have some good features though, like being compatible with Kubernetes manifests.
If you’re happy with portainer and Docker I wouldn’t bother migrating.
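The Kubernetes-manifest compatibility mentioned above works in both directions via `podman generate kube` and `podman play kube`; a minimal round trip might look like this (image and names are illustrative):

```shell
# Run a container, then export it as a Kubernetes manifest
podman run -d --name web -p 8080:80 docker.io/library/nginx
podman generate kube web > web.yaml

# Recreate it later (or on another machine) from that manifest
podman play kube web.yaml
```

The same `web.yaml` can also be fed to a real cluster with `kubectl apply`, which is what makes this feature more than a convenience.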
You can use the podman module that is part of cockpit-project. It runs on different versions of linux and you can also expose this from a remote machine or VM.
It has a different aim, more of an admin view. I do use it regularly and would gladly take questions for the team. At some point we integrated this as a temporary solution for CRC to provide a container management view.
Using podman since March of 2022. The experience has been pleasant, and the ability to run containers as a regular user without going root is definitely nice. Good to see it getting some recognition.
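The rootless mode mentioned above is the default when podman is invoked as an unprivileged user; a quick way to see the user-namespace mapping at work:

```shell
# No daemon, no sudo: the container sees uid 0...
podman run --rm docker.io/library/alpine id

# ...but that "root" maps back to your own UID on the host
podman unshare cat /proc/self/uid_map
```

Files a container writes as root therefore end up owned by you on the host, which is most of the security appeal.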
Can I use this to manage sudo (rootful) containers? It's slightly amusing to see that the desktop creates a podman daemon, so maybe you could create a sudo podman daemon for use with the desktop somehow?
Sort of off-topic: I noticed they publish universal, Intel, and ARM builds for macOS.
I'm struggling to understand why they would build all three. Why not do either universal or split arch?
Podman Desktop is also planning to make it easy to bring up a Kubernetes cluster.
For now you can deploy a Pod (the set of containers that Podman provides) to an existing Kubernetes cluster, and you can also switch your current context from the tray icon.
At the moment podman machine uses QEMU, as that allowed for faster iteration. I work on some of this enablement as part of the CRC team. We have a fully working driver for the Virtualization framework, with virtiofs support. We would like to integrate this, though at the moment the essential EFI support is only available in macOS 13. You can try CRC with the podman preset (!) to test this. It would not be exactly the same, as podman machine runs a different setup process, but it does let you see the performance. If you need help, just ping me; I'll gladly help.
podman/skopeo/buildah and systemd-managed user services are an absolute game changer for dockerless, rootless containers. Pod and service definitions are supported and can be exported to k8s when or if you ever need it, and there's even a podman-compose plugin to get you moved off docker compose while you learn.
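The systemd-managed user services mentioned above can be set up with `podman generate systemd` (newer podman releases also offer Quadlet for the same purpose); a sketch, with illustrative names:

```shell
# Run a container rootless, then hand it over to systemd as a user service
podman run -d --name myapp -p 8080:80 docker.io/library/nginx
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name myapp > ~/.config/systemd/user/myapp.service

systemctl --user daemon-reload
systemctl --user enable --now myapp.service

# Keep user services running after you log out
loginctl enable-linger "$USER"
```

With `--new`, the unit recreates the container on each start, so systemd, not a daemon, owns the lifecycle.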
If Docker didn't want this to happen, they should not have given everything away for free and then started charging money for it years later.
Also keep in mind that Linux containers are based on open source kernel technology and Docker was just a friendly layer on top. Anyone else has always been free to build their own friendly layer on top. All that's changed is that those alternatives are now more compelling.
What precisely did the creators of Podman Desktop do that was unethical? I'm not sure if by "ripoff" you mean "copyright infringement" or something else.
And on Homebrew, and packaged for Arch and Gentoo, and other comments seem to indicate that there are outdated .debs for Ubuntu/Debian, and not-outdated ones if you are okay with a third-party repo (which isn't that big of a stretch…?)
[0]https://github.com/abiosoft/colima