Haskell, as a functional programming language, takes a different approach to parallelism, mainly in terms of SMP parallelism (and probably extended to other kinds of parallelism, as seen in some Hackage packages like Cloud Haskell). Rather than focusing on threads of execution, as standard libraries such as POSIX threads do, Haskell focuses on evaluation: the programmer sends expressions to be evaluated in different threads and joins them, much as one would with pthread_cond_wait and pthread_join. This approach is very nice because it is a good abstraction for parallel computation; it reduces the amount of code and makes it clear at first glance which computations are being parallelized.
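To make that evaluation-centric style concrete, here is a minimal sketch using par and pseq from the parallel package (the parSum example and its numbers are my own illustration, not code from the problem discussed below):

```haskell
import Control.Parallel (par, pseq)

-- Two independent computations: `par` sparks `a` so it may be
-- evaluated on another core, while `pseq` forces `b` on the
-- current thread first; the final `a + b` joins both results.
parSum :: Integer
parSum = a `par` (b `pseq` (a + b))
  where
    a = sum [1 .. 1000000 :: Integer]
    b = sum [1 .. 2000000 :: Integer]

main :: IO ()
main = print parSum
```

Note that par only gives a hint to the runtime; the spark is picked up by an idle core if one is available, and the program is correct either way.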
There are various libraries for parallel computing based on SMP techniques, including GPU usage and similar approaches. But the basis is a simple pair of annotation functions, par and pseq, which can be viewed roughly as pthread_create and pthread_cond_wait. As you know, I am trying to solve the Project Euler challenges in Haskell to practice and gain experience with the language, so you can review my code for problem 104 and see how par and pseq are used. In the code below, two functions use parallelism: isValidDig and isValidPan.
-- | Checks whether the first n and last n digits of the given
-- number x are a permutation of the given sequence.
isValidDig :: Integer   -- ^ Number to check.
           -> [Integer] -- ^ Sequence to check.
           -> Bool      -- ^ Is valid or not.
isValidDig x xs
  | length (digits 10 x) < length xs = False
  | otherwise =
      let ds = digits 10 x
          l  = length xs
          s  = sort xs
          r  = sort $ take l ds
          t  = sort $ take l $ reverse ds
      in r `par` t `pseq` (r == s && t == s)

-- | Checks whether the given Fibonacci index yields a number that
-- is pandigital at both ends, using the required digit sequence.
isValidPan :: Integer -- ^ Number to check.
           -> Bool    -- ^ True when it is valid.
isValidPan x = s `par` r `pseq` isValidDig r s
  where r = fib x
        s = [1..9]
Due to the small size of the computations that par sends to another thread, the performance gained in the resulting execution (using the RTS option -N) is not great, but it is enough to run about 8% faster than the sequential version. If you have some experience with POSIX threads, you will see that the par annotation is used much like pthread_create, sending the computation of the r and s expressions to another thread; they remain lazy thunks because they are built with $ rather than $!, which prevents immediate evaluation. The pseq annotation then acts like pthread_cond_wait: both threads meet at the final evaluation, as with pthread_join. So you do not need to worry about synchronization, because that task is already handled by par and pseq. You can use this pair of annotations as many times as you need, across as many cores as your operating system can use.
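The point about $ versus $! is worth spelling out: par can only do useful work if its first argument is still an unevaluated thunk. A small sketch of my own (lazyPair is a hypothetical name, not from the problem code):

```haskell
import Control.Parallel (par, pseq)

-- `par` sparks `r` while it is still a thunk, so an idle core can
-- evaluate it. Had we bound r with ($!) (strict application), r
-- would already be in normal form here and the spark would have
-- nothing left to compute.
lazyPair :: (Integer, Integer)
lazyPair = r `par` (t `pseq` (r, t))
  where
    r = sum [1 .. 100000]       -- thunk: may run in parallel
    t = sum [100001 .. 200000]  -- forced locally by `pseq`
```

If the spark is never picked up, r is simply evaluated on the main thread when the pair is demanded, so the result is the same either way.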
The problem with POSIX threads is that they are a complex set of calls; this approach seems much simpler and more elegant. There are other interesting approaches, like the Par monad. It feels similar to using standard thread calls, but with the simplicity and robustness of monads and Haskell. With this approach you do not need to think about which expressions should be evaluated first: with explicit parallel functions like spawn and fork, you have more control over how and which calls are parallelized, less compiler-driven and more expression-driven. In further experiments I will try to solve some problems using the ParFunk hackage, taking advantage of GPU-driven computations.
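For comparison, here is a minimal sketch of the same fork/join pattern written with the Par monad from the monad-par package (the parPair example is my own illustration): spawn forks a computation and returns an IVar, and get blocks until that IVar is filled.

```haskell
import Control.Monad.Par (runPar, spawn, get)

-- Fork two independent sums, then join on both results.
-- `spawn` requires an NFData instance so results are fully forced.
parPair :: Integer
parPair = runPar $ do
  a <- spawn (return (sum [1 .. 1000000]))
  b <- spawn (return (sum [1 .. 2000000]))
  x <- get a
  y <- get b
  return (x + y)
```

Unlike par, which merely hints at parallelism, the Par monad's dataflow is explicit and deterministic: get always sees the fully evaluated result, which is what makes the evaluation order a non-issue.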