I'm completely biased as I work on the nanos/ops unikernel toolchain, but unikernels offer a very PaaS-like feel since the app and server become one. You simply build your image (ops image create) and then deploy it (ops instance create) - two commands. It takes tens of seconds to have something running on AWS. If you are on an AWS/GCP free tier it costs nothing, but even a g1-small costs only ~$20/month and an f1-micro goes for ~$5/month, which can go a long way. We've had a Go unikernel sit on the front page of HN on an f1-micro and it barely registered any resource usage.
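As a rough sketch, the two-command workflow looks something like this (the binary name, image name, and config file are illustrative; the -t/-c flags select the cloud target and its config):

```shell
# Build a unikernel image from a compiled binary (names here are illustrative).
ops image create ./server -i my-server-image -t gcp -c config.json

# Boot an instance from that image on the same cloud target.
ops instance create my-server-image -t gcp -c config.json
```

Swap `-t gcp` for `-t aws` (with matching credentials/config) to deploy the same application to AWS instead.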
Besides the perf/security boost, you aren't locked in to anything. You could take the same application and deploy it to multiple clouds simultaneously if you wanted to, since it relies only on basic cloud primitives - nothing cloud-specific, unlike some of the various serverless offerings.
Run dokku, caprover (or write a better heroku alternative, I'm sure now would be the time) on another free cloud service. I wrote a comparison of a few major ones: https://paul.totterman.name/posts/free-clouds/
AWS Elastic Beanstalk allows you to run on very cheap instances, even cheaper if you buy a savings plan and commit to a term.
It’s not a 1:1 experience, but I’ve enjoyed it as an alternative to Heroku for sure. Alternatively, you could spin up a server and install dokku, which gets you pretty close to the Heroku shipping experience, but still requires some maintenance and hand-holding.
I switched from heroku to dokku (and DigitalOcean) last month. Overall: easy to adapt from heroku since so many of the concepts (and commands) are the same.
I tried to get too fancy and ran two web services on the same app (since the DO droplet was giving me more CPU and 4x the RAM for half the price), but they seemed to battle each other for control of the database and/or exceeded the droplet's resources. So I chilled out, went back to 1 web service, and set CPU and RAM resource limits. And... it's been smooth since then! Much faster than heroku, too.
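For anyone curious, the resource limits I mentioned are set through dokku's resource commands; something along these lines (the app name and exact numbers are illustrative, not what I run in production):

```shell
# Cap the app at one CPU and 512 MB of RAM (app name is illustrative).
dokku resource:limit --cpu 1 --memory 512 my-app

# Keep a single web process so nothing fights over the database.
dokku ps:scale my-app web=1

# Restart so the new limits take effect.
dokku ps:restart my-app
```

The limits apply to the app's containers on the next deploy/restart, which is what stopped the resource thrashing for me.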
Price-wise: we were on the $50/mo dyno plus the $9/mo PostgreSQL add-on. With DO we beefed up the managed database specs and now get 4x the RAM on the droplet, and the total cost is the same as heroku.
We do still have a free tier staging server on heroku that we only use a couple times a year.
Oh shoot, I just remembered that I use staticman for processing comments on a couple jekyll blogs, and those use free heroku tiers. Argh!
With some investment in infra as code we have a similar experience on AWS: GitHub Actions + Terraform targeting ECS on Fargate (pay for usage). A push to main builds the container, pushes it to the Elastic Container Registry, creates the task/service, configures the ALB, etc.
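Roughly, the pipeline boils down to a build-push-apply sequence like the one below (account ID, region, repo, and image names are all placeholders; Terraform owns the ECS task definition, service, and ALB):

```shell
# Build the container image from the repo's Dockerfile.
docker build -t my-app .

# Authenticate docker against ECR (account/region are placeholders).
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image to the ECR repository.
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# Terraform then creates/updates the Fargate task, ECS service, and ALB.
terraform apply -auto-approve
```

In CI these steps run as GitHub Actions jobs triggered on pushes to main, so deploys feel one-command much like Heroku's git push.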
Hard for me to say what it would take for a normal small dev team, as I am a beneficiary/stakeholder of this work but wasn't involved in the development. In our case we hired a dedicated senior infra SWE who had experience building IaC and other automation. Given our situation at the time (a B-round startup working in healthcare with duct-taped infra and security), I think it was absolutely the right decision for us.
It took our infra SWE a few months to get an MVP version working, though he also did other infra-related work at the same time. Complexity can vary a lot depending on requirements, and ours are probably more stringent than Heroku ever supported. Because of the sensitivity of the data we deal with, there is now a relatively sophisticated identity management/permissioning/what-can-see-what-data component in how our infra is deployed, which probably would not be the case for most companies. We also deploy ML models, so there are additional automation issues around keeping track of reproducibility/provenance/ML pipeline regression/drift/deidentification/etc. (which, now a year later, we haven't fully solved either!).
Does anyone have a recommendation for how to re-create the Heroku experience on AWS or Azure?