
Why Is A.dot(b) Faster Than A@b Although Numpy Recommends A@b

According to the answers to this question, and also according to NumPy, matrix multiplication of 2-D arrays is best done with a @ b or numpy.matmul(a, b) rather than a.dot(b).

Solution 1:

Your premise is incorrect. You should use larger matrices to measure performance, so that function-call overhead does not dwarf the otherwise insignificant computation.

Using Python 3.6.0 / NumPy 1.11.3 you will find, as explained here, that @ calls np.matmul and that both outperform np.dot.

import numpy as np

n = 500
a = np.arange(n**2).reshape(n, n)
b = np.arange(n**2).reshape(n, n)

%timeit a.dot(b)        # 134 ms per loop
%timeit a @ b           # 71 ms per loop
%timeit np.matmul(a,b)  # 70.6 ms per loop
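
If you are not in an IPython session, the same comparison can be reproduced with the standard-library timeit module. This is only a minimal sketch of that measurement; the absolute numbers depend on your hardware and the BLAS library NumPy is linked against.

import timeit
import numpy as np

n = 500
a = np.arange(n**2).reshape(n, n)
b = np.arange(n**2).reshape(n, n)

# Average over a few repetitions; divide by reps to get seconds per call.
reps = 10
print("dot:   ", timeit.timeit(lambda: a.dot(b), number=reps) / reps)
print("@:     ", timeit.timeit(lambda: a @ b, number=reps) / reps)
print("matmul:", timeit.timeit(lambda: np.matmul(a, b), number=reps) / reps)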

Also note, as explained in the docs, that np.dot is functionally different from @ / np.matmul. In particular, they differ in how they handle arrays with more than two dimensions.
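
To make that difference concrete, here is a small sketch (shapes chosen arbitrarily for illustration): matmul treats 3-D inputs as stacks of 2-D matrices and broadcasts over the leading dimension, while dot contracts the last axis of the first argument with the second-to-last axis of the second, producing a larger result.

import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((2, 4, 5))

# matmul broadcasts over the leading "stack" dimension:
# (2, 3, 4) @ (2, 4, 5) -> (2, 3, 5)
print(np.matmul(a, b).shape)   # (2, 3, 5)

# dot pairs every matrix in a with every matrix in b:
# sum over last axis of a and second-to-last axis of b -> (2, 3, 2, 5)
print(np.dot(a, b).shape)      # (2, 3, 2, 5)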

Solution 2:

matmul and dot don't do the same thing: they behave differently with 3-D arrays and with scalars. The documentation presumably recommends matmul because it is clearer and more general, not necessarily for performance reasons. It would be helpful if the documentation were more explicit about why one is preferred over the other.
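
As an illustration of the scalar case (a minimal sketch, not part of the original answer): dot accepts a scalar operand and reduces to element-wise multiplication, whereas matmul rejects scalars outright.

import numpy as np

a = np.arange(6).reshape(2, 3)

# dot with a scalar is equivalent to element-wise multiplication by 3
print(np.dot(a, 3))            # [[ 0  3  6]
                               #  [ 9 12 15]]

# matmul does not allow scalar operands and raises an error
try:
    np.matmul(a, 3)
except (ValueError, TypeError) as exc:
    print("matmul raised:", exc)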

As @jpp has pointed out, it isn't necessarily true that matmul's performance is actually worse.
