
Numerical Computation in Maple

Floating point numbers in Maple can be input either as decimal numbers or directly using the Float function

Float(m,e) = m * 10^e

where the mantissa m is an integer of any size, but the exponent e is restricted to be a machine size integer, typically 31 bits. For example, the number 3.1 can be input as either 3.1 or Float(31,-1). The op function can be used to extract the mantissa and exponent of a floating point number. Note: the floating point constant 0.0 is treated specially; it is automatically simplified to the integer 0.
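For example, in a classic Maple session op pulls apart the mantissa and exponent of a float like this:

```maple
> x := 3.1:
> op(1,x);    # the mantissa
                                      31
> op(2,x);    # the exponent
                                      -1
```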

Here is a uniform random number generator that generates uniform random numbers with exactly 6 decimal digits precision in the range [0, 1).


> UniformInteger := rand(0..10^6-1):
> UniformFloat := proc() Float(UniformInteger(),-6) end:
> seq( UniformFloat(), i=1..6 );

              .669081, .693270, .073697, .143563, .718976, .830538

Note that the built-in Maple function rand does not return a random number in the given range. Instead it returns a Maple procedure (a random number generator) which, when called, returns random integers in the given range.
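To see this, note that the result of calling rand is itself a procedure, which must then be called to produce numbers (the name die below is our own illustration):

```maple
> die := rand(1..6):      # die is a procedure, not a number
> die(), die(), die();    # each call returns a random integer in 1..6
```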

Floating point arithmetic is done in decimal with rounding. The precision used is controlled by the global variable Digits, which has 10 as its default value. It can, however, be set to any value. The evalf function is used to evaluate an exact symbolic constant to a floating point approximation. For example


> Digits := 25:
> sin(1.0);
                           .8414709848078965066525023

> sin(1);
                                     sin(1)

> evalf(");
                           .8414709848078965066525023

Maple knows about the elementary functions and many special functions such as the Bessel function J_v(x), the Gamma function GAMMA(x), and the Riemann Zeta function Zeta(x). In Maple these are BesselJ(v,x), GAMMA(x), and Zeta(x) respectively. They are all computed to high precision by summing various series.
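For instance, each of these can be evaluated to elevated precision simply by raising Digits (output values omitted here):

```maple
> Digits := 30:
> evalf( GAMMA(1/3) );    # Gamma function at 1/3, to 30 digits
> evalf( Zeta(3) );       # Riemann Zeta function at 3
> BesselJ( 0, 1.0 );      # Bessel function J_0 at 1.0
```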

The model of floating point arithmetic used is that the relative error of the result is less than 10^(1-Digits), i.e. results are accurate to the full Digits digits. This is a stronger model than that used by hardware implementations of floating point arithmetic, and it requires that intermediate calculations be done at higher precision to obtain such accurate results. Here is an example of summing a Taylor series. Suppose we want to compute the error function

erf(x) = 2/sqrt(Pi) * int( exp(-t^2), t = 0..x )

for small x. If you are unfamiliar with the error function, take a moment to plot it in Maple. We can make use of the Taylor series for erf(x) about x = 0

erf(x) = 2/sqrt(Pi) * sum( (-1)^n * x^(2*n+1) / (n! * (2*n+1)), n = 0..infinity )

for computing erf(x) for small x, (with no error checking) as follows


ErfSmall := proc(a) local x,x2,n,result,term,sumold,sumnew;
    x := evalf(a); # evaluate the input at Digits precision
    Digits := Digits + 2; # add some guard digits
    sumold := 0;
    term := x;
    sumnew := x;
    x2 := x^2;
    for n from 1 while sumold <> sumnew do
        sumold := sumnew;
        term := - term * x2 / n;
        sumnew := sumold + term / (2*n + 1);
    od;
    result := evalf( 2/sqrt(Pi)*sumnew );
    Digits := Digits-2;
    evalf(result) # round the result to Digits precision
end;

For large x, this routine will be inaccurate and inefficient. See the exercises for how to compute erf(x) for large x.
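A quick check of the procedure at the default setting of Digits (the value shown is erf(1/2) rounded to 10 digits):

```maple
> ErfSmall(0.5);
                                 .5204998778
```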

For a second example, let us consider writing a Newton iteration to compute the square root of an integer to 50 Digits. Recall that the Newton iteration for solving an equation f(x) = 0 is

x[k+1] = x[k] - f(x[k]) / D(f)(x[k])

In our case, to compute sqrt(a) we want to solve x^2 - a = 0. So the iteration is

x[k+1] = ( x[k] + a/x[k] ) / 2
The following routine will do the job.


SqrtNewton := proc(a) local xk, xkm1;
    if a < 0 then ERROR(`square root of a negative integer`) fi;
    Digits := 55; # add some guard digits
    xkm1 := 0;
    xk := evalf(a/2); # initial floating point approximation
    while abs(xk-xkm1) > abs(xk)*10^(-50) do 
        xkm1 := xk;
        print(xk);
        xk := (xk + a/xk)/2;
    od;
    Digits := 50;
    evalf(xk); # round the result to 50 Digits
end;

We have put in the print statement to show the result of each iteration. Here is what happens


> SqrtNewton(2);

                                       1.

            1.500000000000000000000000000000000000000000000000000000

            1.416666666666666666666666666666666666666666666666666667

            1.414215686274509803921568627450980392156862745098039216

            1.414213562374689910626295578890134910116559622115744045

            1.414213562373095048801689623502530243614981925776197429

            1.414213562373095048801688724209698078569671875377234002

            1.414213562373095048801688724209698078569671875376948073

            1.414213562373095048801688724209698078569671875376948073

It is known that a Newton iteration will converge quadratically provided D(f)(x) is not close to zero and the initial approximation is sufficiently close to a root of f(x), as illustrated in the above example. For very high precision though, i.e. Digits > 1000, one does not want to do every iteration at full precision, as the early steps are not accurate to full precision. Why do all that work when you are only getting a few digits correct? Modify the Newton iteration used in the SqrtNewton procedure appropriately so that it doubles the number of Digits at each step. How much faster does this make the iteration run?
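One possible approach is sketched below (the name SqrtNewtonFast and the choice of starting precision are our own, not part of the exercise's intended solution): first converge at a cheap low precision, then, since each Newton step doubles the number of correct digits, double Digits before each subsequent step until the target precision is reached.

```maple
SqrtNewtonFast := proc(a) local xk, xkm1, prec;
    if a < 0 then ERROR(`square root of a negative integer`) fi;
    prec := 10;
    Digits := prec;
    # first converge fully at low precision using the simple iteration
    xkm1 := 0;
    xk := evalf(a/2);
    while abs(xk-xkm1) > abs(xk)*10^(2-prec) do
        xkm1 := xk;
        xk := (xk + a/xk)/2;
    od;
    # now double the precision at each step; one Newton step per doubling
    # suffices because quadratic convergence doubles the correct digits
    while prec < 55 do
        prec := min(2*prec, 55);
        Digits := prec;
        xk := evalf( (xk + a/xk)/2 );
    od;
    Digits := 50;
    evalf(xk) # round the result to 50 Digits
end;
```

Since most of the work in the original SqrtNewton is spent on full-precision iterations whose early digits were already known, this variant does almost all iterations at a fraction of the final precision.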

Because Maple's floating point model is implemented in software, it is much slower than hardware floating point arithmetic. Maple also has a function called evalhf for evaluating an expression in hardware floating point arithmetic. This function uses the built-in C library routines for floating point arithmetic. Consequently it is much faster than Maple's software floats, but it is still slower than compiled hardware floating point code because it is interpreted. See ?evalhf for details.
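A minimal comparison sketch (output values omitted; evalhf works at hardware precision, roughly 15 decimal digits, regardless of the Digits setting):

```maple
> evalhf( sin(1) );    # hardware floats: about 15 digits, fast
> evalf( sin(1) );     # software floats: Digits digits, slower
```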

