You can preserve the precision of your matrix while transferring it to numpy as follows:
sage: R = RealField(100)
sage: A = matrix(R, 2, 2, [1.746, 0.940, 1.246, 1.898])
sage: import numpy as np
sage: a = np.array(A, dtype=object)
sage: a
array([[1.7460000000000000000000000000, 0.94000000000000000000000000000],
       [1.2460000000000000000000000000, 1.8980000000000000000000000000]], dtype=object)
sage: a[0]
array([1.7460000000000000000000000000, 0.94000000000000000000000000000], dtype=object)
sage: a[0][0]
1.7460000000000000000000000000
sage: a[0][0].parent()
Real Field with 100 bits of precision
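Note that numpy dispatches arithmetic on an object array to the entries themselves, so sums, elementwise products, and even np.dot keep the 100 bits of precision. A quick sanity check (numeric output omitted):
sage: p = a.dot(a)
sage: p[0][0].parent()
Real Field with 100 bits of precision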
Unfortunately, this will not solve your second problem:
sage: from scipy import linalg
sage: b = linalg.svd(a) ; b
(array([[-0.65092234, -0.75914432],
        [-0.75914432,  0.65092234]]),
 array([ 2.92405178,  0.73277362]),
 array([[-0.71216394, -0.70201319],
        [-0.70201319,  0.71216394]]))
sage: b[0][0][0]
-0.65092234401411786
sage: type(b[0][0][0])
<type 'numpy.float64'>
As you can see, the svd operation took your entries back to double-precision floating-point numbers. If you type:
sage: linalg.svd??
You will see that scipy uses LAPACK to compute the singular value decomposition, and LAPACK only works with single- or double-precision floating-point numbers, not MPFR (arbitrary-precision) numbers. For the same reason, Sage offers an SVD method for matrices over RDF, but not over RR.
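For instance (a minimal sketch: change_ring(RDF) rounds the entries to doubles, after which the native SVD method becomes available; numeric output abbreviated):
sage: U, S, V = A.change_ring(RDF).SVD()
sage: S.diagonal()
[2.9240517..., 0.7327736...]
The singular values agree with scipy's result above, which is expected since both ultimately call LAPACK.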
What may be harder to understand is why there is no generic method that works at any precision. Actually, it is not so easy: linear algebra over floating-point numbers raises a lot of numerical stability issues, which is why Sage and scipy currently rely on a specialized library.