
Why Is np.linalg.norm(x, 2) Slower Than Solving It Directly?

Example code:

import numpy as np
import math
import time

x = np.ones((2000, 2000))

start = time.time()
print(np.linalg.norm(x, 2))
end = time.time()
print('time 1: ' + str(end - start))
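The question is cut off at this point; presumably it went on to time the "direct" computation. A hedged completion, assuming that computation is the math.sqrt(np.sum(x*x)) that Solution 1 refers to:

start = time.time()
print(math.sqrt(np.sum(x * x)))  # direct computation (this is the Frobenius norm)
end = time.time()
print('time 2: ' + str(end - start))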

Solution 1:

np.linalg.norm(x, 2) computes the matrix 2-norm, which is the largest singular value of x. Finding it requires a singular value decomposition, which is expensive for a 2000×2000 matrix.

math.sqrt(np.sum(x*x)) computes the Frobenius norm, the square root of the sum of the squared entries, which needs only elementwise arithmetic.

These operations are different, so it should be no surprise that they take different amounts of time. The question "What is the difference between the Frobenius norm and the 2-norm of a matrix?" on math.SE may be of interest; the check below also makes the distinction concrete.
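A quick check, not from the original answer: for the all-ones matrix in the question the two norms happen to coincide (it has rank one, with a single nonzero singular value of 2000), so a random matrix shows the difference better. The 2-norm matches the largest singular value, while the Frobenius norm matches the elementwise formula:

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((500, 500))

# Matrix 2-norm (spectral norm): the largest singular value, via SVD
print(np.linalg.norm(a, 2))
print(np.linalg.svd(a, compute_uv=False)[0])  # same value

# Frobenius norm: sqrt of the sum of squared entries, no SVD involved
print(np.linalg.norm(a, 'fro'))
print(np.sqrt(np.sum(a * a)))  # same value, but different from the 2-norm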

Solution 2:

What is comparable is the following pair, which both compute row-wise Euclidean norms:

In [10]: %timeit sum(x*x, axis=1)**.5
36.4 ms ± 6.11 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [11]: %timeit norm(x,axis=1)
32.3 ms ± 3.94 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

np.linalg.norm(x, 2) and sum(x*x)**.5, by contrast, are not the same thing, so comparing their timings tells you nothing.
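For reproducing this outside an interactive session (where norm and sum were evidently imported into the namespace), a self-contained sketch of the same comparison; absolute timings will vary by machine:

import numpy as np
from timeit import timeit

x = np.ones((2000, 2000))

# Two equivalent ways to compute row-wise Euclidean norms
t1 = timeit(lambda: np.sum(x * x, axis=1) ** 0.5, number=10) / 10
t2 = timeit(lambda: np.linalg.norm(x, axis=1), number=10) / 10
print('sum(x*x, axis=1)**.5 :', t1, 's per loop')
print('norm(x, axis=1)      :', t2, 's per loop')

# Sanity check: both expressions produce identical results
assert np.allclose(np.sum(x * x, axis=1) ** 0.5, np.linalg.norm(x, axis=1))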
