Hacker News | Const-me's comments

I think the C# standard library is better. You can write the same unsafe code as in C: call the SafeBuffer.AcquirePointer method, then access the memory directly. Or you can go safer and slightly slower by calling the Read or Write methods of MemoryMappedViewAccessor.

All these methods are in the standard library, i.e. they work on all platforms. The C code is specific to POSIX; Windows supports memory-mapped files too, but the APIs are quite different.


I think you don’t need unsafe; there’s a normal API for it.

https://learn.microsoft.com/en-us/dotnet/standard/io/memory-...


Indeed, but these normal APIs have a runtime cost for bounds checking. For some use cases, unsafe can be better. For instance, the last time I used a memory-mapped file was for a large immutable Bloom filter. I knew the file should be exactly 4GB, validated that in the constructor, and then, when testing 12 bits from random locations of the mapped file on each query, I opted for unsafe code.
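Not C#, but the same trick translates to any language with memory mapping. Here is a minimal Python sketch using the stdlib mmap module; the hash scheme, class name, and size check are illustrative assumptions, not the original code:

```python
import hashlib
import mmap
import os

K = 12  # bits tested per query, matching the comment above

def _positions(key: bytes, nbits: int):
    # Derive K pseudo-random bit positions from a hash of the key
    # (an illustrative scheme, not the one from the original filter).
    d = hashlib.sha256(key).digest()
    return [int.from_bytes(d[2 * i:2 * i + 4], "little") % nbits for i in range(K)]

class MappedBloom:
    def __init__(self, path: str, expected_size: int):
        self.f = open(path, "r+b")
        size = os.fstat(self.f.fileno()).st_size
        # Validate the expected size up front, like the 4 GB check above.
        if size != expected_size:
            raise ValueError(f"expected {expected_size} bytes, got {size}")
        self.nbits = size * 8
        self.mm = mmap.mmap(self.f.fileno(), size)

    def add(self, key: bytes) -> None:
        for p in _positions(key, self.nbits):
            self.mm[p // 8] |= 1 << (p % 8)

    def __contains__(self, key: bytes) -> bool:
        return all(self.mm[p // 8] & (1 << (p % 8))
                   for p in _positions(key, self.nbits))
```

Python still bounds-checks every mm[...] access, of course; the point of the unsafe C# version is precisely to skip that per-query cost.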

It is a matter of the deployment scenario. In an era when people ship Electron and deploy production code on CPython, those bounds checks don’t hurt at all.

When they do, thankfully there is unsafe if needed.


I like the article, but why PostgreSQL specifically? I have recently needed some of these features in a server, MariaDB did the job reasonably well.

My two cents from beers and tech talks around the industry: a lot of DBAs were put off even trying MariaDB altogether because of the MySQL sale to Sun. They felt abandoned after having put the MySQL team on the map to begin with. The feeling was that the same could happen with MariaDB, so why bother?

Back then I heard from a lot of teams that planned migrations from MySQL to Postgres, even before the Sun sale to Oracle, because of the MySQL sale to Sun.


Why did you choose Maria over PG?

I like ENGINE=MEMORY tables. Compared to the data structures found in programming languages, memory tables are more powerful: arbitrary columns, indices. The DB server solves concurrency with transactions and row-level locks; you need a B-tree primary key, which is not the default for the MEMORY engine, but that’s easy enough to specify at table creation.

I think they save quite a lot of software complexity by delegating these problems to the DB server.
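A rough stdlib analogue of the idea, using Python's sqlite3 with an in-memory database (table and column names are invented for illustration; MariaDB's MEMORY engine additionally gives you cross-connection sharing and row-level locks):

```python
import sqlite3

# An in-memory table with arbitrary columns, a B-tree-backed primary key,
# a secondary index, and transactions handled by the engine.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE sessions (
        session_id TEXT PRIMARY KEY,  -- backed by a B-tree index
        user_id    INTEGER NOT NULL,
        expires_at REAL    NOT NULL
    )
""")
con.execute("CREATE INDEX idx_sessions_user ON sessions(user_id)")

with con:  # transaction: commits on success, rolls back on exception
    con.execute("INSERT INTO sessions VALUES (?, ?, ?)", ("abc123", 42, 1.7e9))

row = con.execute(
    "SELECT user_id FROM sessions WHERE session_id = ?", ("abc123",)
).fetchone()
```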


Postgres is the materially better (more performant and ergonomic) choice if those are your requirements.

IMO the only place Maria wins is in ease of use / ops.

MariaDB's MEMORY engine has annoying limitations like no variable-length columns, no BLOB/TEXT support, and data loss on restart.

Postgres handles this much better: unlogged tables skip write-ahead logging, so they're fast, but still support all data types, full transactions, and B-tree indexes by default. You can point the data directory at a tmpfs RAM disk and get full in-memory speed with zero feature compromises.


> no variable-length columns

Both varchar and varbinary columns work fine there. Blobs are indeed missing.

> data loss on restart

That’s OK; in-memory collections lose their data on restart as well, yet we use them pretty much everywhere.

> Unlogged tables skip write-ahead logging

I don’t want any disk I/O for my memory tables.

Another thing: can’t PostgreSQL tables without write-ahead logging cause consistency bugs after a restart, when normal tables are current thanks to the logging but unlogged tables are stale?


> very small percentage of people to use the torrent over the direct download

The BitTorrent protocol is IMO better for downloading large files. When I want to download something that exceeds a couple of GB and I see two links, direct download and BitTorrent, I always click the torrent.

On paper, HTTP supports range requests to resume partial downloads. IME, modern web browsers have neglected to implement this properly: they won’t resume after the browser is reopened or the computer is restarted. Command-line HTTP clients like wget are more reliable; however, many web servers these days require session cookies or one-time query-string tokens, and it’s hard to pass that stuff from the browser to the command line.
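The mechanics of resuming are simple; here is a hedged Python sketch of the client side (the helper names are mine, and the server must answer 206 Partial Content for this to work):

```python
import os
import re

def resume_range_header(partial_path: str) -> dict:
    # Ask the server only for the bytes we don't already have.
    have = os.path.getsize(partial_path) if os.path.exists(partial_path) else 0
    return {"Range": f"bytes={have}-"} if have else {}

def parse_content_range(header: str):
    # A 206 response carries e.g. "bytes 1000-9999/10000";
    # returns (start, end, total), with total None for "*".
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+|\*)", header)
    if not m:
        raise ValueError(f"unexpected Content-Range: {header!r}")
    start, end, total = m.groups()
    return int(start), int(end), None if total == "*" else int(total)
```

Append the 206 body to the partial file; if the server replies 200 instead, it ignored the Range header and the download must restart from byte zero.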

I live in Montenegro, and CDN connectivity is not great here. Only a few of them, like Steam and GOG, saturate my 300 Mbit/s download link. Others are much slower; e.g., Windows updates download at about 100 Mbit/s. The BitTorrent protocol almost always delivers the full 300 Mbit/s.


> AVX2 level includes FMA (fast multiply-add)

The FMA acronym stands not for fast multiply-add but for fused multiply-add. Fused means the instruction computes the entire a * b + c expression using twice as many mantissa bits, and only then rounds the result to the precision of the arguments.
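The difference is observable from Python. Since math.fma only exists from Python 3.13, this sketch stands in for the fused path with exact rational arithmetic; the inputs are chosen so the product needs more than 53 mantissa bits:

```python
from fractions import Fraction

a = 1.0 + 2.0 ** -52
b = 1.0 + 2.0 ** -52
c = -(1.0 + 2.0 ** -51)

# Unfused: a*b is rounded to 53 bits first, so the tiny 2**-104 term
# of the exact product (1 + 2**-51 + 2**-104) is lost before the add.
unfused = a * b + c  # 0.0

# Fused semantics: compute a*b + c exactly, round once at the end.
exact = Fraction(a) * Fraction(b) + Fraction(c)  # 1 / 2**104
fused = float(exact)  # what an FMA instruction would return: 2**-104
```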

It might be that the Prism emulator failed to translate FMA instructions into pairs of FMLA instructions (the equally fused ARM64 equivalent) and instead emulated the fused behaviour in software, which in turn is what degraded the performance of the AVX2 emulation.


Author here - thanks - my bad. Fixed 'fast' -> 'fused' :)

I don't have insight into how Prism works, but I have wondered if the right debugger would let us see the ARM code and find out exactly what was going on.


You’re welcome. Sadly, I don’t know how to observe ARM assembly produced by Prism.

And one more thing.

If you test on an AMD processor, you will probably see much less profit from FMA. Not because it’s slower, but because the SSE4 version will run much faster.

On Intel processors like your Tiger Lake, all three operations (addition, multiplication, and FMA) compete for the same execution units. On AMD processors, multiplication and FMA share units as well, but addition is independent: e.g. on Zen 4, multiplication and FMA run on execution units FP0 or FP1, while addition runs on FP2 or FP3. This way, replacing a multiply/add combo with FMA on AMD doesn’t substantially improve throughput in FLOPs; the only wins are L1i cache footprint and instruction decode bandwidth.


You can ... to a degree - Google for "XtaCache"

> but nobody is using it! What does that say?

It’s impossible to replace JS with WebAssembly because all state-mutating functions (DOM tree manipulation and events, WebGL rendering, all other I/O) are unavailable to WebAssembly. They expect people to do all of that through JavaScript glue.

I’m pretty sure that if WebAssembly had been designed to replace JS instead of merely supplementing it, we would have little JS left on the web.


> What if you have two different project with different requirements at the same time?

Install multiple versions of the Windows SDK. They co-exist just fine; new versions don’t replace old ones. When I was an independent contractor, I had four versions of Visual Studio and ten versions of the Windows SDK installed at once; different projects used different ones.


You can provide custom options to winget, including where to install it (and which additional components you need).


> run games through a Proton-like shim even on Windows

Already happening, to an extent. Specifically, modern Intel GPUs do not support DirectX 9 in hardware, yet legacy apps run fine. The readme.txt shipped with the drivers contains a paragraph which starts with the following text: “SOFTWARE: dxvk The zlib/libpng License”. DXVK is a library that implements Direct3D on top of Vulkan, and an important component of SteamOS.


> It was never the right choice for API payloads and config files

Partially agree about API payloads; when I design my APIs I typically use binary formats.

However, IME XML is actually great for config files.

Comments are crucial for config files. Once the complexity of the config grows, a hierarchy of nested nodes becomes handy; the two fixed levels of hierarchy found in old Windows INI files and modern Linux config files are less than ideal, with too many sections. Attributes make documents easier to work with due to better use of horizontal screen space: auto-formatted JSON has only a single key/value per line, while XML with attributes has several, which reduces vertical scrolling.
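A small illustration of those points with the stdlib xml.etree (the config schema here is invented): comments, nesting deeper than two levels, and attributes packing several key/value pairs per line:

```python
import xml.etree.ElementTree as ET

# A hypothetical config showing comments, deep nesting, and attributes.
CONFIG = """\
<config>
  <!-- Retry policy applies to all outbound HTTP calls -->
  <http timeoutSec="30" retries="3" backoff="exponential">
    <endpoints>
      <endpoint name="billing" url="https://billing.example.com" poolSize="8"/>
      <endpoint name="search"  url="https://search.example.com"  poolSize="4"/>
    </endpoints>
  </http>
</config>
"""

root = ET.fromstring(CONFIG)
http = root.find("http")
timeout = int(http.get("timeoutSec"))
endpoints = {e.get("name"): e.get("url") for e in http.iter("endpoint")}
```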


Software developers might be the majority here in the HN comments, but they are definitely a small minority of the general population. A lot of people are negatively affected by AI: computer hardware is expensive because AI companies bought all the memory, Windows 11 is crap because Microsoft reworked their operating system into an AI-driven trojan horse, many people lost jobs because AI companies convinced their employers’ top management that people will be replaced with computers any day now, etc.


I have a hypothesis for why issues like that are so widespread: the AI infrastructure is mostly developed by large companies whose business model is selling software as a service at scale. Hence containers, micro-services, and TCP/IP in between. That approach is reasonable for data centres, because those consist of multiple servers (i.e. they need networking anyway), and they have private virtual networks just to connect the servers, so the security consequences aren’t too bad.

If they were designing these infrastructure pieces primarily for consumer use, they would have used named pipes, Unix domain sockets, or some other local-only IPC mechanism instead of TCP/IP.
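On POSIX systems a local-only channel is one call away; a minimal sketch with Python's stdlib socket module (an AF_UNIX pair: no TCP port is opened, so nothing on the network can reach this channel):

```python
import socket
import threading

# A pre-connected pair of Unix domain sockets: purely local IPC,
# invisible to the TCP/IP side of the network stack.
server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def serve(conn: socket.socket) -> None:
    data = conn.recv(1024)
    conn.sendall(b"echo: " + data)

t = threading.Thread(target=serve, args=(server,))
t.start()
client.sendall(b"hello")
reply = client.recv(1024)
t.join()
```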

