Elementwise Operations In Mpmath Slow Compared To Numpy And Its Solution
Solution 1:
gmpy2 is significantly faster than mpmath for this type of calculation. The following code runs about 12x faster on my machine.
import time

import gmpy2 as mp
import numpy as np

a = np.linspace(0, 100e-2, 100)
b = np.linspace(0, np.pi)   # 50 points by default
c = np.arange(30)

t = time.time()
M = np.ones([len(a), len(b), len(c)])
A, B = np.meshgrid(a, b, indexing='ij')
temp = A**2 + B
temp = np.reshape(temp, [len(a), len(b), 1])
temp = np.repeat(temp, len(c), axis=2)
M *= temp
print('part1:', time.time() - t)

t = time.time()
# gmpy2.factorial returns mpfr values, so this becomes an object array
temp = np.array([mp.factorial(x) for x in c])
temp = np.reshape(temp, [1, 1, len(c)])
temp = np.repeat(temp, len(a), axis=0)
temp = np.repeat(temp, len(b), axis=1)
print('part2 so far:', time.time() - t)
M *= temp
print('part2:', time.time() - t)
mpmath is written in Python and normally uses Python's native integers for its computations. If gmpy2 is available, it will use the faster integer type provided by gmpy2. If you just need one of the functions that is provided directly by gmpy2, then using gmpy2 directly is usually faster.
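Here is a minimal timing sketch of that difference (the exact speedup will depend on your machine and library versions):

import time
import gmpy2
import mpmath

values = range(500)

t = time.time()
[mpmath.factorial(x) for x in values]
print('mpmath.factorial:', time.time() - t)

t = time.time()
[gmpy2.factorial(x) for x in values]
print('gmpy2.factorial:', time.time() - t)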
Update
I ran a few experiments. What's actually happening may not be what you expect. When you calculate temp, the values can either be integers (math.factorial, gmpy.fac, or gmpy2.fac) or floating-point values (gmpy2.factorial, mpmath.fac). When numpy computes M *= temp, all the values in temp are converted to 64-bit floats. If a value is an integer too large to fit, the conversion raises an OverflowError; if it is a floating-point value, the conversion returns infinity. You can see this by changing c to np.arange(300) and printing M at the end. If you use gmpy.fac or math.factorial, you will get an OverflowError. If you use mpmath.factorial or gmpy2.factorial, you won't get an OverflowError, but the resulting M will contain infinities.
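A quick way to see the two behaviors side by side (a small sketch, assuming Python 3):

import math
import gmpy2

# Exact integer: converting to a 64-bit float raises OverflowError
try:
    float(math.factorial(300))
except OverflowError as e:
    print('math.factorial(300):', e)

# mpfr value: converting to a 64-bit float saturates to infinity
print('gmpy2.factorial(300):', float(gmpy2.factorial(300)))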
If you are trying to avoid the OverflowError, you will need to calculate temp with floating-point values so that the conversion to a 64-bit float results in infinity instead.
If you aren't encountering an OverflowError, then math.factorial is the fastest option.
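For example, with the original c = np.arange(30), a plain-float temp built with math.factorial stays comfortably within float64 range (a sketch, not the original code):

import math
import numpy as np

c = np.arange(30)
# math.factorial returns exact ints; float64 can hold n! without
# overflowing up to n = 170
temp = np.array([math.factorial(int(x)) for x in c], dtype=np.float64)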
If you are trying to avoid both OverflowError and infinities, then you'll need to use either the mpmath.mpf or the gmpy2.mpfr floating-point type throughout. (Don't try to use gmpy.mpf.)
Update #2
Here is an example that uses gmpy2.mpfr with a precision of 200 bits. With c = np.arange(30), it is ~5x faster than your original example. I show it with c = np.arange(300), since that range would otherwise generate either an OverflowError or infinities. The total running time for the larger range is about the same as for your original code.
import time

import gmpy2
import numpy as np
from gmpy2 import mpfr

gmpy2.get_context().precision = 200

a = np.linspace(mpfr(0), mpfr(1), 100)
b = np.linspace(mpfr(0), gmpy2.const_pi())
c = np.arange(300)

t = time.time()
# dtype=object keeps the mpfr values from being coerced to float64
M = np.ones([len(a), len(b), len(c)], dtype=object)
A, B = np.meshgrid(a, b, indexing='ij')
temp = A**2 + B
temp = np.reshape(temp, [len(a), len(b), 1])
temp = np.repeat(temp, len(c), axis=2)
M *= temp
print('part1:', time.time() - t)

t = time.time()
temp = np.array([gmpy2.factorial(x) for x in c], dtype=object)
temp = np.reshape(temp, [1, 1, len(c)])
temp = np.repeat(temp, len(a), axis=0)
temp = np.repeat(temp, len(b), axis=1)
print('part2 so far:', time.time() - t)
M *= temp
print('part2:', time.time() - t)
Disclaimer: I maintain gmpy2.