
nbruin's profile - activity

2021-10-26 20:44:43 +0200 commented question Symbolic computation of multiples of a point in elliptic curve.

Note that if $s,t$ are symbolic variables, then $(s,t)$ is not a pair of rational numbers, so it does not lie in $E(\mathbb{Q})$.

2021-10-21 20:23:39 +0200 answered a question A problem with fast_callable

I'd say this is a feature. This is what these routines basically translate to: sage: c0.op_list()

2021-10-17 21:25:07 +0200 commented question Difficulties setting up a kernel for jupyterlab in Windows

In particular, I'd expect that starting the sage kernel through a terminal emulator won't allow your server to set up th

2021-10-17 21:20:16 +0200 commented question Difficulties setting up a kernel for jupyterlab in Windows

When sagemath starts a notebook server, it does so within cygwin (or WSL, depending how you run it). It then communicate

2021-09-28 21:10:58 +0200 commented answer how does weak_popov_form work?

If you really want to know the answer, trace through the algorithm by hand and see why sometimes the execution is faster

2021-09-27 20:18:41 +0200 commented answer how does weak_popov_form work?

That complexity bound must be assuming constant time coefficient arithmetic, which will not be true over Q or Z. Further

2021-09-26 20:17:05 +0200 commented question how does weak_popov_form work?

Complexity bounds tend to be asymptotic, so in particular cases an O(n^2) algorithm can outperform an O(n) algorithm, an

2021-08-17 01:50:38 +0200 commented question How to properly implement free product with amalgamation

I think you're facing a mathematical problem here. If you have H and K given by generators and relations, it's very easy

2021-08-05 18:02:27 +0200 commented question Lower-level Singular interface / turn off unneeded PARI calculations

primary factorization and computing reduced ideals can do those tasks, but using pari may be faster. These operations sh

2021-08-05 05:52:57 +0200 commented question Lower-level Singular interface / turn off unneeded PARI calculations

If you're interested in groebner bases over number fields, you should probably just add a generator to your polynomial r

2021-07-27 05:30:42 +0200 commented question Circular imports

One way of having no problems with circular imports is to just use import rational and import integer and refer to integ

2021-07-27 03:31:11 +0200 commented question Circular imports

Reimporting a module should not run its initialization code again. It sounds a little fishy that your UniqueRepresentati

2021-07-26 18:41:47 +0200 commented question Implicit derivative at a particular point

Have you tried substituting the desired values for x,y into your general expression for the derivative?

2021-07-20 19:09:19 +0200 received badge  Nice Answer (source)
2021-07-20 06:10:38 +0200 answered a question How to compute an ideal of specific norm?

Your question can be solved using ideal factorization. If the norm you're looking for does not have extremely large prim

2021-07-19 18:54:35 +0200 commented question polynomial multiplication is unexpectedly slow

It's probably a good exercise to write down the expected number of terms in the coefficients of this degree 120 polynomi

2021-04-03 08:24:02 +0200 received badge  Nice Answer (source)
2021-02-03 15:26:51 +0200 received badge  Nice Answer (source)
2020-12-07 02:37:33 +0200 received badge  Nice Answer (source)
2020-12-05 22:14:21 +0200 commented answer Is it possible to substitute $U^\prime$ to $D_0(U)$ ?

The fact that diff(f(x),x).operator() != diff(f(x),x).operator() (two separately computed operators compare as unequal) is because the FDerivativeOperator class does not have an __eq__ defined on it. That could easily be fixed. With that in place, one could hope that a substitute_function-based approach would work, but the problem there is that diff(f(x),x).substitute_function(f,g) does a function replacement inside the FDerivativeOperator, so you can't use substitute_function here. That suggests the right approach is to change the differentiation rules on U rather than to try to fix the representation after the fact.
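To see the behavior described (a sketch; the substitution also replaces the function inside the derivative operator, exact output formatting may differ between Sage versions):

```
sage: f = function('f'); g = function('g')
sage: x = var('x')
sage: diff(f(x), x).substitute_function(f, g)
diff(g(x), x)
```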

2020-12-05 21:56:02 +0200 answered a question Is it possible to substitute $U^\prime$ to $D_0(U)$ ?

One solution is to consider $U'$ not as generic notation for the derivative (it is not, as pointed out, because it suppresses which variable the derivative is taken with respect to). The question then is really how to specify a non-generic derivative for a formal symbolic function. Reading the documentation of function, there is a way of specifying specific differentiation rules via the derivative_func argument. One way is to denote $U'$ by Up. Letting that print as $U'$ is a matter of specifying a special LaTeX representation for your function, for which there are separate tools. Then it's just a matter of telling the system that D[0](U) should yield Up. We make the code slightly more general; amend it as you see fit if you really want to allow U to be a function in more variables and allow differentiation with respect to those as well. I think you'll need to specify special names for all those derivatives then, because I suspect that the automatic simplification rules for D[i](U)(x,y,z) will otherwise get stuck in an infinite loop (I haven't checked that, though).

This should do the trick:

def special_diff(i, Up):
    # Build a derivative_func: differentiating with respect to the i-th
    # argument yields the formal function Up; any other differentiation
    # parameter is rejected.
    def deriv(*args, **kwargs):
        if kwargs['diff_param'] == i:
            return Up(*args[1:])
        else:
            raise ValueError("cannot differentiate this function wrt. {}".format(i))
    return deriv

Up = function('Up')
U = function('U', derivative_func=special_diff(0, Up))
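Assuming the definitions above have been run, a quick check (a sketch; exact output formatting may differ between Sage versions):

```
sage: x = var('x')
sage: diff(U(x), x)
Up(x)
```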
2020-10-25 19:41:15 +0200 answered a question From indexed variables to strings

var accepts a list of strings, so if you want to make a list of custom variable names, you're good if you can generate the list of string names. For instance,

var(['x_{}'.format(i) for i in range(10,21)])

works fine. I don't know if there's a way of letting latex_name work for a list of variables, but if you're going to have to write a for loop anyway, why not pull it out and create the variable names one by one?

for i in range(3,5):
    # raw string so the backslash in \epsilon survives
    var("x_{}".format(i), latex_name=r"\epsilon_{}".format(i-1))

At some point it just becomes much easier and more flexible to leverage the fact that Python is a full-featured programming language than to rely on built-in options designed to let people defer learning it while accomplishing simple, standardized tasks.

Generally, sage offers simple, modular building blocks which you can arrange yourself to build programs for more complex tasks. Sometimes, sage also offers more byzantine options, usually put in because a lot of people were found to use a particular combination of simple building blocks (or because someone with enough energy to implement it put it in so that they could save a few lines of code in an introductory textbook). If you find yourself asking how to generalize a certain compound construct, you should probably ask yourself how you'd build the original compound construct in the first place and then generalize from there.

2020-10-22 21:59:23 +0200 answered a question Is there any difference between the two different ways of declaring variables?

The top-level var has this "binding injection behaviour" to avoid having to write the print name of the symbolic variable "x" and the python symbol you bind it to separately, since generally you'd want them to agree. The non-toplevel SR.var does not have that behaviour, so SR.var("x,y,z") returns the right symbols, but doesn't bind them. You have to assign them, giving you the option of naming them differently, like

symbol_x, symbol_y = SR.var("x,y")

(with the top-level var this would result in symbol_x as well as x being bound to the same symbol x). Explicit assignment can sometimes be useful to avoid name clashes.

You can invoke darker magic to get the result of the top-level var. For instance, once you've done var('symbols') you can use

_(x,y,z)=symbols

which works because of sage's preparser:

sage: preparse("_(x,y,z)=symbols")
'__tmp__=var("x,y,z"); _ = symbolic_expression(symbols).function(x,y,z)'

so it calls var for you.

Alternatively, with the following definition:

class symbols:
    def __init__(self, names):
        # build a list explicitly: in Python 3, map returns an
        # iterator, which cannot be sliced
        self._symbols = [SR.symbol(name) for name in names]
    def _first_ngens(self, n):
        return self._symbols[:n]

you'd be able to leverage another preparser feature to write

_.<x,y,z>=symbols()

because it preparses as:

sage: preparse("_.<x,y,z>=symbols()")
"_ = symbols(names=('x', 'y', 'z',)); (x, y, z,) = _._first_ngens(3)"
2020-10-13 18:07:30 +0200 answered a question how to limit the verbosity in case of error

In IPython/Jupyter, the python shell sage uses, there is some possibly more convenient control: https://ipython.readthedocs.io/en/sta...

With %xmode Minimal you can suppress the traceback completely.

2020-10-12 19:24:05 +0200 commented answer Using matrices in Cython

You're probably best served by reading the cython manual and the sage developer guide at this point.

Both describe profiling tools as well. Before you embark on "cythonizing" your code, first profile to find where the bottleneck lies. It's very often not where you expect, and it can be very hard to predict if and where cython will improve performance. Generally, for most of your code it will hardly make a difference. Only the innermost loops sometimes really benefit from cythonization, and often those loops are already in library code.

2020-10-12 07:17:10 +0200 answered a question Using matrices in Cython

With the import command you use, you can access all.Matrix. Alternatively, you can use from sage.all import * and then Matrix should just work. For true cython work, you probably want to use a cimport. You cannot use sage.all for such imports, so you might as well start using more specific imports right away if real cython use is your goal.

2020-10-06 04:08:09 +0200 commented answer Echelon form in cython

Not use "spyx". It's meant to be a form of "sage-preparsed cython", but I don't think it gets used very much and therefore is prone to failure. Better to just write straight-up cython as is used in the sage library, and interface with the sage libraries in the usual cythonic/pythonic way.

2020-10-02 15:48:40 +0200 received badge  Good Answer (source)
2020-10-02 09:33:07 +0200 commented answer Echelon form in cython

???? Look at the file extension. It's "pyx". That's cython. It may very well be possible to improve this code, but it's already received quite a bit of attention, so I'd be surprised if you can make spectacular gains. You might try playing around with the different algorithms available. One may be better suited to the type of problem you're considering than another.

If rational echelon form is sufficient, you should do that. It's generally cheaper to compute than HNF.

2020-10-01 21:50:21 +0200 received badge  Nice Answer (source)
2020-10-01 19:14:17 +0200 answered a question Echelon form in cython

I assume you mean Hermite normal form. Yes, there are highly optimized library functions for that. See hermite_form in matrix/matrix_integer_dense.pyx
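For example, a small sketch (hermite_form is also available as a method on integer matrices):

```
sage: A = matrix(ZZ, [[2, 4], [6, 8]])
sage: A.hermite_form()
[2 0]
[0 4]
```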

2020-08-16 05:43:38 +0200 commented answer Law of logs false for symbols but true for numbers

It still works:

sage: var('a,b')
sage: bool(log(a)+log(b) == log(a*b))
False

In general you cannot expect that a symbolic engine will be able to prove all symbolic identities, so it's important that False can mean "don't know". In this case, however, there really are numerical inputs for which the identity doesn't hold, so the False really is False.

2020-08-15 18:40:50 +0200 commented answer Law of logs false for symbols but true for numbers

The problem here, once again, is that most functions we consider are actually multivalued complex functions, which really aren't functions on $\mathbb{C}$ at all in the normal sense. This usually gets solved by picking a "principal branch", but that sacrifices analyticity along the branch cut. For log, you can see what happens:

sage: log(-1)
I*pi
sage: log(-1)+log(-1)
2*I*pi
sage: log( (-1)*(-1))
0

These two values are not equal: they differ by an integer multiple of $2\pi I$. Since complex logs are "only defined up to multiples of $2\pi I$", they are "essentially" equal (as values taken by logs), but not as elements of $\mathbb{C}$.
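The same phenomenon shows up in plain Python, using cmath's principal-branch logarithm:

```python
import cmath

# Principal-branch logs: log(-1) = i*pi, so the sum is 2*pi*i,
# while log((-1)*(-1)) = log(1) = 0.
a = cmath.log(-1) + cmath.log(-1)
b = cmath.log((-1) * (-1))
print(a.imag / cmath.pi)  # 2.0: the two results differ by 2*pi*i
print(b)                  # 0j
```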

2020-08-14 01:52:28 +0200 received badge  Nice Answer (source)
2020-08-13 07:59:46 +0200 answered a question How to make Sage know that a set is an ideal

Try:

ideal(*J)

This is standard python syntax: f(*[a1,a2,a3]) does the same thing as f(a1,a2,a3).
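As a plain-Python illustration of the unpacking syntax (f here is a hypothetical function, not from the question):

```python
def f(a, b, c):
    return (a, b, c)

args = [1, 2, 3]
# f(*args) unpacks the list into positional arguments
print(f(*args) == f(1, 2, 3))  # True
```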

Incidentally, I think you've hidden some of the commands. The code you suggest should produce:

sage: J.groebner_basis()
AttributeError: 'set' object has no attribute 'groebner_basis'

Finally, it seems you want to compute a Groebner basis of an ideal in a quotient ring. There is no such thing! Groebner bases are for ideals of polynomial rings. You should take the inverse image of that ideal in the polynomial ring (that's easy to do: just lift the generators and add the generators of the quotienting ideal to it) and compute a Groebner basis of that.
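A sketch of that lifting step (assuming a quotient ring Q = R/J; lift and gens are standard methods, but check the API of your version):

```
sage: R.<x,y> = QQ[]
sage: J = R.ideal(x^2 + y^2 - 1)
sage: Q = R.quotient(J)
sage: I = Q.ideal([Q(x*y)])
sage: lifted = R.ideal([g.lift() for g in I.gens()] + list(J.gens()))
sage: lifted.groebner_basis()
```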

2020-08-12 09:11:51 +0200 commented answer How to get a solution from an ideal in a polynomial ring when it is nonzero codimensional?

Ah, OK. That could very well be. If your variety is 3-dimensional, though, then intersecting with 3 randomly generated hyperplanes would give you something 0-dimensional (you need to pick very special hyperplanes for that to fail!). Glancing at the ideal, there are a lot of linear relations in there. You can greatly simplify the problem by projecting away variables involved in linear relations. By the looks of it (the linear equations are at the end), you could eliminate something like 21 variables this way. For instance, you see from those equations that any solution will have $c_2=-1/3,c_5=0$. You definitely want to do that before you start intersecting with hyperplanes.

2020-08-12 07:13:35 +0200 received badge  Nice Answer (source)
2020-08-11 18:28:59 +0200 answered a question How to get a solution from an ideal in a polynomial ring when it is nonzero codimensional?

You can intersect with some hyperplanes to get a 0-dimensional variety. That means adding more polynomials to the ideal, say c1-4, c2-5, c3-6 (I haven't checked if this is 0-dimensional). You'll probably not find rational solutions that way, though, because the 0-dimensional variety that you do get probably doesn't have rational points.

In general, non-empty varieties can easily fail to have rational points. Take the variety defined by x^2+y^2+1 in Q[x,y]: it is non-empty over the complex numbers but has no rational points at all. Finding rational solutions on varieties is, in general, an unsolved problem.

Over an algebraically closed field, though, Hilbert's Nullstellensatz gives you that solutions exist and the hyperplane trick works. I'm not sure that .variety() is happy to return points in algebraically closed fields, though (in a way, the 0-dimensional ideal IS the algebraic point already, so there is not really a need to go further).

2020-08-04 03:31:34 +0200 commented question Outputting SVG source of a plot

It would be totally doable to implement that. Sage plotting (at least the 2d stuff) is all wrapping matplotlib, so whatever is available there can be exposed fairly straightforwardly in sage as well. It just needs someone willing and able to do the job.

2020-08-03 10:08:55 +0200 commented question Outputting SVG source of a plot

The real engine behind this is matplotlib's "savefig" function. It explicitly also allows a "file-like" object (in which case the "file" format must be specified explicitly, because there's no file name extension to infer it from). With that, you'd be able to write to a StringIO object. Probably using a temporary file is easier, though, and it might be the case that the wrapping that sage has done loses the ability to use a file-like object rather than a file name.

2020-07-29 09:19:06 +0200 received badge  Good Answer (source)
2020-07-23 01:36:27 +0200 answered a question find quotient of two multivariate polynomials (which are divisible)

There really should be a version of reduce somewhere that gives you which combination of the generators allowed for the reduction; if someone can find it, that would be the preferred solution. At the expense of some extra variables, however, you can fake it:

Given two polynomials f and g in variables [z1,...,zn], embed the ring into a larger one with variables [z1,...,zn,F,G], with an elimination order for z1,...zn (i.e., those variables should be larger than F and G). Then reduce f-F with respect to the principal ideal generated by g-G. The result should be a polynomial that is free of z1,...,zn, and expresses F as a polynomial in G.
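A small sketch of this with hypothetical polynomials f = g^2 (the block-order string follows Sage's TermOrder syntax; check it against your version):

```
sage: R.<z1,z2,F,G> = PolynomialRing(QQ, order='degrevlex(2),degrevlex(2)')
sage: g = z1 + z2
sage: f = g^2
sage: R.ideal(g - G).reduce(f - F)
G^2 - F
```

The result is free of z1, z2 and recovers the relation F = G^2, i.e. f = g^2.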

2020-07-21 21:19:20 +0200 received badge  Good Answer (source)
2020-07-20 05:07:13 +0200 edited answer What are these extra terms in symmetric polynomial calculations?

If you use u.expand(3) == d^2 (as mentioned in the documentation of from_polynomial), I get equality. With the same trick you can then see what e[4,2].expand(3) is supposed to mean, and it seems to be 0. The same for e[4,1,1].expand(3). So it seems that the routine is at least internally consistent. It shouldn't include 0-functions in its representations, though; that's just confusing.

I suspect what's happening: it's converting to symmetric functions through another basis (the monomial basis, where the indexing is by a non-increasing exponent vector, taken to be the exponents of the leading monomial), and then it converts to the elementary basis in a way that's valid for symmetric functions in an arbitrary number of variables, at the cost of including loads of 0 terms for particular realizations (fair enough, because at this point the system doesn't know the number of variables the symmetric function is in any more).

It looks like u.restrict_parts(3) culls the bits you're not interested in.

I've seen this before: in principle combinatorics and algebra mean the same thing when they talk about symmetric functions, but their motivating questions for studying them are so different that their notations and conventions diverge to the point of being incompatible.
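As a check of the internal consistency claimed above (a sketch; e_4 vanishes identically in three variables, so the product indexed by [4,2] expands to 0):

```
sage: Sym = SymmetricFunctions(QQ)
sage: e = Sym.elementary()
sage: e[4,2].expand(3)
0
```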

2020-07-19 21:25:49 +0200 received badge  Nice Answer (source)