An in-house implicit Euler code (BESOLVER) of ICI Engineering was chosen as the test bed for exploring the use of parallel computers in process engineering.
The code uses the implicit Euler method to integrate equation (1). Writing the DAE system (1) as $F(t, y, y') = 0$, the implicit Euler step with step size $h$ replaces $y'$ by a backward difference, so (1) is discretised into (2):

$$F\left(t_{n+1},\; y_{n+1},\; \frac{y_{n+1} - y_n}{h}\right) = 0. \qquad (2)$$
Each of the discretised equations (2) is solved using Newton's method, with the sparse linear systems solved using the MA28 subroutines of the Harwell library. To the authors' knowledge, there is as yet no general sparse solver that efficiently solves, on parallel computers, the type of sparse systems arising from flowsheeting, so it was decided to explore parallelism across the method instead. The extrapolation method was chosen because it provides some scope for parallelism and is relatively easy to couple to the existing code BESOLVER.
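To make the structure of equation (2) concrete, the following sketch performs one implicit Euler step for a scalar ODE $y' = f(t, y)$, solving the discretised equation by Newton's method with a finite-difference derivative. This is only an illustration: BESOLVER solves the analogous large sparse nonlinear system, with MA28 handling the linear algebra; all names here are illustrative.

```python
def implicit_euler_step(f, t, y, h, tol=1e-12, max_iter=50):
    """Advance y' = f(t, y) from t to t + h with one implicit Euler step,
    i.e. solve the scalar analogue of equation (2):
        G(y_new) = y_new - y - h * f(t + h, y_new) = 0."""
    y_new = y + h * f(t, y)          # explicit Euler predictor as initial guess
    for _ in range(max_iter):
        g = y_new - y - h * f(t + h, y_new)       # residual of equation (2)
        eps = 1e-8 * max(abs(y_new), 1.0)
        # finite-difference approximation of dG/dy_new
        dg = ((y_new + eps) - y - h * f(t + h, y_new + eps) - g) / eps
        step = g / dg                              # Newton correction
        y_new -= step
        if abs(step) < tol:
            break
    return y_new

# Example: y' = -y, y(0) = 1, ten steps of h = 0.1; exact solution exp(-t).
y = 1.0
for k in range(10):
    y = implicit_euler_step(lambda t, y: -y, 0.1 * k, y, 0.1)
```

For this linear test problem each Newton solve converges in one iteration; for the stiff DAE systems of flowsheeting the number of iterations per step varies, which is the source of the load imbalance discussed below.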
The basic idea of the extrapolation method is to integrate the equation from time $t_n$ to $t_{n+1}$ independently with several different step sizes such as $H$, $H/2$, $H/3$, $H/4$, where $H = t_{n+1} - t_n$, using an appropriate integration scheme (in this case BESOLVER), and then to combine the results of all these independent integrations to get a solution at time $t_{n+1}$ of higher accuracy.

Let $T_{i,1}$ denote the result of the integration from time $t_n$ to $t_{n+1}$ with step size $h_i = H/n_i$ ($n_1 < n_2 < \cdots$). The extrapolation formula is then

$$T_{i,j+1} = T_{i,j} + \frac{T_{i,j} - T_{i-1,j}}{n_i/n_{i-j} - 1}, \qquad j = 1, \dots, i-1,$$

which successively eliminates terms of the error expansion of the underlying first-order method.
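The tableau construction can be sketched as follows for the linear test problem $y' = -y$ with the harmonic step sequence $n_i = i$; this is a minimal illustration, not the BESOLVER implementation, and the closed-form implicit step below is an assumption valid only for this linear problem.

```python
import math

def implicit_euler(y0, H, n):
    """n implicit Euler substeps for y' = -y over an interval of length H.
    The implicit equation y_new = y - h*y_new has the closed form below;
    BESOLVER instead solves the sparse nonlinear system (2) by Newton."""
    h = H / n
    y = y0
    for _ in range(n):
        y = y / (1.0 + h)
    return y

def extrapolate(y0, H, p):
    """Combine p independent integrations (n_i = i) by extrapolation."""
    n = list(range(1, p + 1))
    T = [[implicit_euler(y0, H, n_i)] for n_i in n]   # column T[i][0]
    for j in range(1, p):
        for i in range(j, p):
            # first-order base method => denominator n_i / n_{i-j} - 1
            T[i].append(T[i][j - 1]
                        + (T[i][j - 1] - T[i - 1][j - 1])
                        / (n[i] / n[i - j] - 1.0))
    return T[p - 1][p - 1]

approx = extrapolate(1.0, 1.0, 4)   # solution of y' = -y at t = 1
```

With four independent integrations over a single macro step $H = 1$, the extrapolated value is two orders of magnitude closer to $e^{-1}$ than the best single implicit Euler run, which is the accuracy gain the method trades against the extra (parallelisable) work.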
The independent integrations within the extrapolation method, using step lengths $h_1 > h_2 > \cdots > h_p$, can be performed on different processors, so parallelism across methods can be exploited in this way. However, the computational load of the parallel extrapolation method is clearly not balanced. Consider using an explicit integration scheme (such as an explicit Runge-Kutta method) combined with the extrapolation method to solve an ODE: the computational load on processor $i$ is then proportional to the number of steps $n_i$, so processors with smaller step lengths take longer to integrate from $t_n$ to $t_{n+1}$. The best speedup that can be achieved on $p$ processors, if $n_i = i$, is roughly $\sum_i n_i / \max_i n_i = (p+1)/2$.
However, when an implicit scheme is used with the extrapolation method to solve the general DAE system (1), the computational load of the processors is difficult to predict. A processor with a large integration step length does not necessarily have less computational work than a processor that integrates in more steps with a proportionally smaller step length: although the former has fewer nonlinear systems to solve, those systems are generally more difficult because of the large step length, and may therefore require more Newton iterations per system. There are also other sources of load imbalance due to changes of step length.
The parallel implementation adopts the master-slave approach: one processor is designated as the master and the remaining processors the slaves. Slave $i$ integrates with step size $h_i$ from time $t_n$ to $t_{n+1}$ and sends its results to the master as soon as it finishes. The master uses the results sent to it by whichever processor has finished its integration. Once the master finds that the solution has an error below the required tolerance, it waits for those slaves that are still integrating but ignores the results they produce (ideally the master would interrupt these processors and ask them to stop integrating, but a suitable way of doing this was not found on the Intel; the time wasted in waiting will be reported). The master then works out the new time and new step size, sends them together with the solution at time $t_{n+1}$ to the slaves, and the above process is repeated. This parallel algorithm will be called EXEULER.
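The master-slave pattern just described can be mimicked in miniature with a thread pool, collecting slave results in completion order; this is an illustration only (the real code uses message passing on the Intel i860, and all names below are hypothetical), with the linear test problem $y' = -y$ standing in for the DAE integration.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def slave_integrate(i, y0, H):
    """Slave i: integrate the macro step with n_i = i implicit Euler
    substeps of size H / i (closed form for the test problem y' = -y)."""
    h = H / i
    y = y0
    for _ in range(i):
        y = y / (1.0 + h)
    return i, y

def master_step(y0, H, p):
    """Master: launch p slaves and gather results as they finish.
    A real master would update the extrapolation tableau with each arriving
    result and stop accepting once the error estimate met the tolerance;
    here we simply collect everything."""
    results = {}
    with ThreadPoolExecutor(max_workers=p) as pool:
        futures = [pool.submit(slave_integrate, i, y0, H)
                   for i in range(1, p + 1)]
        for fut in as_completed(futures):   # results arrive in finish order
            i, y = fut.result()
            results[i] = y
    return results

results = master_step(1.0, 1.0, 4)   # results[i] from slave i, i = 1..4
```

Processing results in completion order is what lets the master stop early once the tolerance is met; the inability to interrupt the remaining slaves is the source of the "wasted time" reported in the tables.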
Three test cases are used to test the algorithms. Each of them models a distillation column. The numbers of trays, variables and equations for the three test cases are listed in Table 1.
| number of | test case 1 (TC1) | test case 2 (TC2) | test case 3 (TC3) |
| --- | --- | --- | --- |
BESOLVER and EXEULER are tested on the three test cases of Table 1, on the Intel i860 parallel computer. Two different integration intervals are used. The results of the algorithms on one of the three test cases are summarised in Table 2. The tolerance, integration interval, elapsed time (in seconds), wasted time (in seconds), number of integration steps on the master (including rejected steps; each step on the master corresponds to $n_i$ substeps on slave $i$), average order, and error of the final solution are reported in the tables.
Table 2 reports EXEULER on 4 processors and EXEULER on 8 processors.
Speedup is defined approximately as the elapsed time of the sequential implementation divided by the elapsed time of the parallel run, and is given in the last column of the table.
As can be seen from the tables, the extrapolation code is considerably faster than BESOLVER, particularly for tight tolerances. The speedup predicted previously (for the explicit extrapolation method on ODEs) is 2 for 4 processors (of which only 3 slave processors are integrating) and 4 for 8 processors. The speedup actually achieved is about 2 on 4 processors and around 3 to 4 on 8 processors.
When more than 8 processors are used, the algorithm has a maximum order of over 7, which is rarely necessary for the tolerances of practical interest; those results are therefore not shown here.