If you are mixing exact values with inexact ones, it is unlikely that you will get a meaningful answer. You have two options:

- use floating point numbers everywhere. In this situation, you have to be careful about what you call a kernel, since arithmetic operations are not exact (e.g., see below that the determinant is not exactly zero);
- replace your floating point numbers `1.75`, `2.07`, etc. by the rational values `7/4`, `207/100`, etc.
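The second option can be sketched in plain Python with the standard `fractions` module (a standalone illustration, not Sage-specific; in Sage you would work over `QQ` instead):

```python
from fractions import Fraction

# Construct rationals from decimal strings: this avoids inheriting the
# binary rounding error that the float literals 1.75 and 2.07 carry.
values = ["1.75", "2.07"]
rationals = [Fraction(v) for v in values]
print(rationals)  # [Fraction(7, 4), Fraction(207, 100)]

# Arithmetic on rationals is exact: no rounding error accumulates,
# so a determinant that should be zero really comes out as zero.
print(Fraction("1.75") + Fraction("2.07"))  # 191/50
```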

To illustrate the first option using scipy (following the method at http://scipy-cookbook.readthedocs.io/...):

```
sage: As = A.subs(x=sol[0][x]).change_ring(CDF)
sage: As
[ 2.333101328217149 2.673036254461398 0.0]
[ 2.673036254461398 2.333101328217149 9.479519749409498*I]
[ 0.0 0.179519749409498*I 2.333101328217149]
sage: As.det() # small but not zero
-3.1648558730042146e-14
sage: import scipy.linalg as linalg
sage: u, s, vh = linalg.svd(As.numpy()) # computing singular values decomposition
sage: s
array([ 1.04215568e+01, 3.45036344e+00, 8.81146174e-16])
```

From the output above, you can see that the small singular value is the last one (it corresponds to the "approximate kernel"). Hence, to get the corresponding vector into Sage, you can just do:

```
sage: k = vector(vh[2])
sage: k
(-0.7524244881834203, 0.6567372851130923, -0.050532444272992305*I)
```

And you can check that it is (approximately) in the kernel:

```
sage: As * k
(6.661338147750939e-16, 1.27675647831893e-15, -1.0547118733938987e-15*I)
```
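The same procedure can be reproduced in plain numpy/scipy, outside of Sage. Here is a sketch on a hypothetical rank-2 matrix (the third row is the sum of the first two, so the exact kernel is one-dimensional):

```python
import numpy as np
import scipy.linalg as linalg

# Hypothetical 3x3 matrix of rank 2: row 3 = row 1 + row 2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

# The floating point determinant is tiny but need not be exactly zero.
print(np.linalg.det(A))

# Singular value decomposition: A = u @ diag(s) @ vh, with s sorted
# in decreasing order, so the smallest singular value comes last.
u, s, vh = linalg.svd(A)
print(s)  # the last entry is of order 1e-15 or smaller

# The row of vh matching the smallest singular value spans the
# approximate (right) kernel of A.
k = vh[-1]
print(A @ k)  # all entries are numerically close to zero
```

The design choice here is the same as in the Sage session above: with inexact arithmetic, "being in the kernel" means that the residual `A @ k` is small relative to machine precision, not that it is exactly zero.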

I believe that the function `solve` is wrong in giving you exact symbolic expressions for the solutions, and this is part of the confusion here (it makes no sense to substitute these "exact solutions" into your floating point matrix). I opened the trac ticket #24939 for this issue.