One potential hint is that the Postgres queries are being executed as sequential scans, as you can see from the EXPLAIN output. The first query only returns 200k rows (out of 7M), so even without any config tweaks Postgres should be using an index if one is present. So, my guess is those queries are being made against unindexed columns. Also, I'm guessing this is vanilla Postgres, so it's possible OP hasn't tweaked the config for the machine they're using; the default config Postgres ships with is designed for machines that are fairly resource-constrained by today's standards. And even if the data is indexed, it's possible the matching rows are scattered all over the table, in which case clustering could help.
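For what it's worth, this theory is easy to test. A minimal sketch (the table and column names here are hypothetical stand-ins for whatever the actual schema is):

```sql
-- Does the filtered column have an index at all?
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'crimes';

-- If not, add one and see whether the plan flips from Seq Scan to Index Scan:
CREATE INDEX idx_crimes_primary_type ON crimes (primary_type);

EXPLAIN ANALYZE
SELECT count(*) FROM crimes WHERE primary_type = 'THEFT';
```

If the plan still shows a sequential scan after the index exists, that points at planner cost settings rather than missing indexes.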
In addition, I wonder if random_page_cost has been left at its default rather than set to 1. On machines with SSD-only storage pools, I've noticed that lowering it makes the query planner much more predictable: it favors indexes, and sequential scans go almost unused.
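For anyone who wants to check: the setting can be inspected and changed like this (a sketch, assuming SSD-backed storage where random reads cost about the same as sequential ones):

```sql
-- The stock default is 4, which was tuned for spinning disks.
SHOW random_page_cost;

-- Try it for the current session first:
SET random_page_cost = 1;

-- If the plans improve, persist it cluster-wide:
ALTER SYSTEM SET random_page_cost = 1;
SELECT pg_reload_conf();
```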
I would love it if even one of the database experts who are so sure the comparisons are somehow rigged would actually run their own benchmark to prove it wrong instead of just speculating.
Take any moderately sized table (millions of rows and a dozen or more columns) on your 'highly-tuned' Postgres database with all the proper indexes in place, load that same data into Didgets (it takes all of 10 minutes to download the software and start using it), and run whatever queries suit your fancy.
My post doesn't assume bad intent. As the new entrant in a space, the onus is on you to show your work. So, if you want people to run their own benchmarks, share the data and configs you used.
I actually tried to find the dataset you used yesterday to run my own benchmark, but the only official "Chicago crime database" I could find had only 300k rows and 19 columns, so I assumed it wasn't the same data you mention in your video.
Even though I replied to your post, it was more of a general observation than a response to you specifically. My video has been viewed over a thousand times, and I get comments all the time saying it can't be real, or that I must be handicapping the other databases by not configuring them correctly (I also have a video comparing against SQLite).
I have yet to have a single person who claims to be a database expert try it out on their own favorite data set and tell me that they found Didgets to be slower than their preferred database engine.
The engine was created from scratch. It can do JOINs, but it doesn't yet implement all the different kinds of joins.
It is a completely different architecture (originally designed as a file system replacement where multiple tags could be attached to each and every file), and the database functionality was discovered almost by accident. The tags I invented to make file searches extremely fast looked a lot like a columnar store, so I tried building regular relational tables out of them. It surprised me when queries against my tables gave other databases (with decades of development behind them) a real run for their money.