In this work the parallelism within the extrapolation method is explored. It is found that the method is suitable for a small number of processors and that speedups of up to 4 can be achieved.
The advantage of the parallel extrapolation method is that the computation/communication ratio is high. Such coarse-grain parallelism makes it suitable not only for parallel computers but also for implementation on workstation clusters. The method also performs well for tight tolerances.
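The coarse grain arises because the entries of the first column of the extrapolation tableau are obtained from independent integrations of the same macro step with different substep counts, so each one can be assigned to a separate processor. The following minimal sketch (not the code used in this work) illustrates that structure; the explicit Euler base method, the harmonic substep sequence, and the example right-hand side are assumptions made only for illustration.

```python
# Sketch of the coarse-grain parallelism in one extrapolation step:
# each first-column entry T[j][0] comes from an independent integration
# of the macro step with n_j substeps, so the entries can run concurrently.
from concurrent.futures import ProcessPoolExecutor

def f(t, y):
    # Example right-hand side y' = -y (assumed for illustration).
    return -y

def euler_substeps(args):
    # Integrate one macro step of size H with n explicit Euler substeps.
    t0, y0, H, n = args
    h = H / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

def extrapolation_step(t0, y0, H, kmax=4):
    ns = [j + 1 for j in range(kmax)]        # assumed harmonic sequence n_j
    tasks = [(t0, y0, H, n) for n in ns]
    with ProcessPoolExecutor() as pool:      # one independent task per tableau row
        T1 = list(pool.map(euler_substeps, tasks))
    # Aitken-Neville recurrence fills the rest of the tableau (cheap, serial).
    T = [[Tj1] for Tj1 in T1]
    for j in range(1, kmax):
        for k in range(1, j + 1):
            diff = T[j][k - 1] - T[j - 1][k - 1]
            T[j].append(T[j][k - 1] + diff / (ns[j] / ns[j - k] - 1))
    return T[kmax - 1][kmax - 1]             # highest-order approximation

if __name__ == "__main__":
    print(extrapolation_step(0.0, 1.0, 0.1))  # approximates exp(-0.1)
```

Because each task integrates a whole macro step before any results are exchanged, the amount of computation per communication is large, which is what makes the scheme attractive for loosely coupled workstation clusters.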
Load balancing is difficult, because function evaluations are not very expensive compared with linear system solving, and because the cost of nonlinear equation solving is virtually impossible to predict.
As the parallelism exploited is within the method itself and is independent of the underlying physical system, the method presented is not restricted to small systems. It is planned to apply the extrapolation code to more complex systems, which would make the parallelism even more coarse-grained.
Acknowledgement. This work is sponsored by ICI PLC through a collaborative project with the Advanced Research Computing Group at SERC Daresbury Laboratory.