Concurrency does not necessarily have anything to do with parallel programming, although the latter may need to be aware of concurrency issues if the parallel algorithm needs to manage shared resources and sequential dependencies.
The gist of the article is that functional languages lend themselves to parallel constructs, and such tooling should be built into languages themselves to free the programmer from worrying about concurrency issues. For more details, read on...
Asymptotic efficiency is just a fancy way of saying algorithmic efficiency. In computer science terms, asymptotic analysis measures how the amount of work grows relative to how much input needs to be processed; e.g., sorting an N-element list using Quicksort takes an amount of work proportional to N log N on average.
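To make that N log N growth concrete, here is a toy Python sketch (my own illustration, not code from the article) that counts how many elements get compared against a pivot as the input size doubles:

```python
import math
import random

def quicksort(xs, counter):
    """Naive Quicksort that tallies pivot comparisons in counter[0]."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    # every element in rest is compared against the pivot once per partition pass
    counter[0] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

random.seed(0)
for n in (1000, 2000, 4000):
    counter = [0]
    quicksort([random.random() for _ in range(n)], counter)
    # the comparisons-per-(N log2 N) ratio stays roughly constant as N doubles
    print(n, counter[0], round(counter[0] / (n * math.log2(n)), 2))
```

Doubling N roughly quadruples-plus the comparison count, not just doubles it, which is the N log N growth in action.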
Now, when you try to parallelize an algorithm such as Quicksort, you are concerned with your speedup -- how much you will gain by processing in parallel. The speedup is not always linear because of sequential dependencies within the algorithm. Wiki has a decent primer on the subject: http://en.wikipedia.org/wiki/Speedup.
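The usual way to estimate that ceiling is Amdahl's law; a minimal sketch (the 0.9 parallel fraction below is just an assumed number for illustration):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup when only part of the work
    can be spread across n_workers; the rest stays sequential."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# With a 10% sequential part, even unlimited workers cap speedup at 1/0.1 = 10x.
print(amdahl_speedup(0.9, 4))
print(amdahl_speedup(0.9, 1000))
```

This is why speedup is not linear: the sequential dependencies put a hard limit on the gain no matter how many processors you add.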
In this case, the article was looking at the asymptotic efficiency of Quicksort's critical section, i.e., the part that cannot run in parallel, to determine just how much of the algorithm could be parallelized.
Concurrency, on the other hand, deals with more classical problems such as scheduling and resource deadlock. It is the OS's job to manage resources, which is why you hear a lot about concurrency in terms of the OS. And since thread/process scheduling by the OS is unpredictable, you get 'nondeterministic composition': the same program can interleave differently from run to run. This manifests in things like race conditions. http://en.wikipedia.org/wiki/Race_condition
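Here is a contrived Python sketch of a race condition (my own example; the sleep just widens the window in which threads interleave, making the lost update easy to reproduce):

```python
import threading
import time

counter = 0

def unsafe_increment():
    global counter
    # read-modify-write is NOT atomic: another thread can read the same
    # old value during the gap, and one of the two updates gets lost
    local = counter
    time.sleep(0.05)  # artificially widen the race window
    counter = local + 1

threads = [threading.Thread(target=unsafe_increment) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # almost always less than 8: updates were lost
```

Which updates survive depends on how the OS happened to schedule the threads, which is exactly the nondeterminism described above.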
So it may become the parallel application's responsibility to avoid such nondeterministic behavior within resources shared by the application. Note this doesn't just apply to threads running on the same computer/OS; it may also apply to data coming in from the network from a distributed parallel algorithm.
In conventional, procedural programming languages, concurrency has to be addressed explicitly through things like locks and barriers. Functional languages make much of this implicit through immutability: data that never changes cannot be corrupted by interleaved writes. Languages like Erlang go a step further, replacing shared state with message passing so the runtime handles concerns like IPC for you.
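For example, the explicit fix in a conventional language looks like this (Python again as a stand-in; a lock serializes the read-modify-write):

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    # holding the lock makes the read-modify-write atomic with
    # respect to the other threads, so no update can be lost
    with lock:
        counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # deterministically 100
```

The point of the article is that you should not have to write this plumbing by hand; immutable data or message-passing runtimes remove the shared mutable state that makes the lock necessary in the first place.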
HTH.