
You could already achieve that with OS processes and IPC. The whole point of having multi-threading is to be able to write compact, shared-memory code with minimal use of synchronization primitives, while sharing as much code and data as possible.

One interpreter per thread means all side effects have to be propagated to the other interpreters to keep a consistent view of memory. Guess what you need to do that? Yep, a global lock (except this time it spans all the interpreters instead of just one).



If you restricted shared memory to objects explicitly declared as shared, you wouldn't need a GIL. You'd simply need per-object locks.
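A minimal sketch of that idea, assuming a hypothetical `SharedCounter` class that is explicitly declared as shared and guards its own state with a per-object `threading.Lock` rather than relying on a global lock:

```python
import threading

class SharedCounter:
    """Hypothetical explicitly-shared object with its own per-object lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        # Only this object is locked, not the whole interpreter.
        with self._lock:
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = SharedCounter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000
```

Objects not declared shared would need no locking at all, since they would stay private to one thread.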

For scientific computing purposes, you can often accomplish this with multiple processes and a numpy array allocated by shmget/shmat. But I'm not sure how to share complex objects in this way.
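A sketch of that approach using the stdlib's `multiprocessing.shared_memory` (Python 3.8+), which is the modern analogue of raw `shmget`/`shmat`; the NumPy wrapping step is shown as a comment since it's the same buffer either way:

```python
from multiprocessing import shared_memory
import struct

# Create a named shared-memory block; another process can attach to it
# by name via shared_memory.SharedMemory(name=shm.name).
shm = shared_memory.SharedMemory(create=True, size=8 * 10)

# Write ten doubles into the block.
for i in range(10):
    struct.pack_into("d", shm.buf, i * 8, float(i))

# A NumPy user would wrap the buffer zero-copy instead, e.g.:
#   arr = np.ndarray((10,), dtype=np.float64, buffer=shm.buf)

values = [struct.unpack_from("d", shm.buf, i * 8)[0] for i in range(10)]

shm.close()
shm.unlink()  # free the block once no process needs it
```

This works for flat numeric data; sharing arbitrary Python object graphs this way is the hard part, since their pointers are only meaningful inside one process.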


I'm not quite sure if I'm right here (and I'd appreciate it if another HN reader corrected me).

But I think that's how the Queue object in Python 2.6 works: the Queue instance carries its own lock, and the threads consuming the queue are otherwise free to do whatever they like.

The reason I'm not sure is that locking a single object seems redundant alongside the GIL concept...
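For what it's worth, that is how `Queue` behaves (the module is `queue` in Python 3): the instance guards its internal deque with its own mutex, so consumers only contend on that one object. A small sketch with worker threads and a `None` sentinel to shut them down:

```python
import queue
import threading

q = queue.Queue()
results = []
results_lock = threading.Lock()

def consumer():
    while True:
        item = q.get()      # blocks on the queue's own internal lock/condition
        if item is None:    # sentinel: stop this worker
            break
        with results_lock:  # protect the shared results list
            results.append(item * 2)
        q.task_done()

workers = [threading.Thread(target=consumer) for _ in range(3)]
for w in workers:
    w.start()
for n in range(5):
    q.put(n)
for _ in workers:
    q.put(None)
for w in workers:
    w.join()
print(sorted(results))  # [0, 2, 4, 6, 8]
```

The per-object lock doesn't contradict the GIL: the GIL protects interpreter internals one bytecode at a time, while the Queue's lock makes multi-step operations like get/put atomic at the application level.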


...and if we moved away from the annoying-and-awkward shared memory model, this becomes even less of a problem.


The whole point of the "smart message passing" strategy is to move objects from one thread to another very fast (that is, zero-copy), and with very localized locking requirements.
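A minimal sketch of that hand-off discipline in today's CPython, assuming the sender voluntarily gives up its reference after enqueueing: only the reference crosses the queue (no copy), and the only lock touched is the queue's own:

```python
import queue
import threading

channel = queue.Queue(maxsize=1)

def producer():
    payload = bytearray(1_000_000)  # hypothetical large mutable message
    payload[0] = 42
    channel.put(payload)  # only a reference is enqueued; the bytes aren't copied
    del payload           # ownership transfer: sender drops its reference

def consumer(out):
    msg = channel.get()   # receiver now holds the sole reference
    out.append(msg[0])

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received[0])  # 42
```

The "smart" part a runtime would add is enforcing that transfer (so the sender *can't* keep touching the object), which is what makes the per-channel locking sufficient.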



