Then you must have used them a really long time ago. The last time I did, I had no trouble decoding audio streams in one thread, pushing the raw audio samples into a Queue, pulling them out of the Queue in a second thread to send them to the audio device, and doing network IO in a third thread. This code is at http://github.com/thwarted/thundaural (no longer maintained) and was written in 2004 and 2005 against perl 5.6 and perl 5.8. It's hardly a complex or large threaded application, nor an example of fantastic code, but I never experienced "crash and burn" or prohibitively long thread start-up time.
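For anyone who hasn't seen the pattern, here is a minimal sketch of that decode/playback pipeline using threads and Thread::Queue; the names and the fake "samples" are illustrative, not taken from the Thundaural source:

```perl
#!/usr/bin/perl
# Producer/consumer sketch: one thread "decodes" and enqueues buffers,
# another dequeues them and "plays" them.  Hypothetical stand-in code.
use strict;
use warnings;
use threads;
use Thread::Queue;

my $q = Thread::Queue->new();

# Decoder thread: pretend to decode audio and push sample buffers.
my $decoder = threads->create(sub {
    for my $n (1 .. 5) {
        $q->enqueue("samples-$n");   # stand-in for a raw PCM buffer
    }
    $q->enqueue(undef);              # sentinel: no more data coming
});

# Playback thread: pull buffers off the queue as they arrive.
my $player = threads->create(sub {
    while (defined(my $buf = $q->dequeue())) {
        print "playing $buf\n";      # stand-in for writing to the audio device
    }
});

$decoder->join();
$player->join();
```

Thread::Queue handles the locking internally, so the two threads never touch each other's interpreter state directly; the queue is the only shared channel.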
I can't produce a significantly long start-up time when perl copies 240MB+ of local state using http://gist.github.com/258016. Much larger than that and the overhead pushes my 4GB machine into swap (so perl's not very efficient with its data overhead, it seems). It is slower than if the structure is shared between the threads (you can uncomment a line in the gist to compare), which shows that perl really is copying the state into the other interpreters. But this just means you need the same discipline with perl threads that you'd use when multiprocessing with fork and long-lived children that don't exec: spawn threads early, before your heap balloons, to avoid all that spawn-time copying (good practice anyway, even when fork gives you COW). Every model has its trade-offs, and it's better to know them going in than to treat the flavor of the day as a silver bullet.
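The "spawn early" advice can be sketched like this; the worker bodies and the size of the state are made up for illustration, and the actual benchmark is the gist linked above:

```perl
#!/usr/bin/perl
# Sketch: create threads while the heap is still small, THEN build the
# big structure in the parent.  Each thread clones the interpreter as it
# exists at spawn time, so threads created first never pay to copy the
# large state.  All names here are hypothetical.
use strict;
use warnings;
use threads;

sub work {
    # Real workers would communicate via Thread::Queue or
    # threads::shared variables, not the parent's private heap.
    return threads->tid();
}

# GOOD: spawn workers before the heap balloons...
my @workers = map { threads->create(\&work) } 1 .. 4;

# ...then build the large per-process state in the parent only.
# None of the already-running threads copies this.
my %big_state = map { $_ => 'x' x 100 } 1 .. 10_000;

$_->join() for @workers;
```

Reversing the order (building %big_state first, spawning after) makes every threads->create() clone that structure into the new interpreter, which is exactly the spawn-time cost the gist measures.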
And even if perl's implementation isn't stable or fast, that doesn't make the model of explicit sharing a bad one. And it most definitely doesn't mean that running a separate interpreter in each thread, with sharing only by explicit declaration, is bad at all.