A.P.'s profile - activity

2024-02-25 09:51:00 +0200 received badge  Notable Question (source)
2023-03-06 04:07:54 +0200 received badge  Famous Question (source)
2022-01-20 23:00:51 +0200 received badge  Popular Question (source)
2021-11-13 21:20:21 +0200 received badge  Notable Question (source)
2021-11-13 21:20:21 +0200 received badge  Popular Question (source)
2020-05-09 23:15:18 +0200 commented answer Implicitization by symmetric polynomials

Thank you, this is precisely what I was after.

2020-05-09 23:15:18 +0200 received badge  Commentator
2020-05-09 14:52:05 +0200 commented question Implicitization by symmetric polynomials

Actually, I made a mistake there. I should have substituted $e_i$ back in after eliminating the variables, and indeed evaluating the elements of

gs = [g(x1 = E[1].expand(3), x2 = E[2].expand(3), x3 = E[3].expand(3)) for g in I.elimination_ideal(t).gens()]

at F results in [0,0,0,0]. I'll have to do some performance tests, now.
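
For completeness, the whole corrected computation in one self-contained snippet (E denotes the elementary symmetric functions, as in my earlier comment below):

E = SymmetricFunctions(QQ).e()   # elementary symmetric functions
Rt.<t,x1,x2,x3> = QQ[]
F = [t, t^2, t^3]
e = [E[i].expand(3)(F) for i in [1, 2, 3]]   # the e_i evaluated at F
I = Rt.ideal([x1 - e[0], x2 - e[1], x3 - e[2]])
gs = [g(x1 = e[0], x2 = e[1], x3 = e[2]) for g in I.elimination_ideal(t).gens()]
gs   # [0, 0, 0, 0]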

2020-05-09 14:37:08 +0200 commented question Implicitization by symmetric polynomials

@rburing Unfortunately, your suggestion doesn't seem to work, assuming I understand it correctly:

Rt.<t,x1,x2,x3> = QQ[]
F = [t, t^2, t^3]
E = SymmetricFunctions(QQ).e()   # elementary symmetric functions
I = Rt.ideal([x1 - E[1].expand(3)(F), x2 - E[2].expand(3)(F), x3 - E[3].expand(3)(F)])
gs = I.elimination_ideal(t).gens()
[g(x1 = F[0], x2 = F[1], x3 = F[2]) for g in gs]

outputs

[-t^6 - t^5 + t^4 + t^3,
t^7 - t^6 - 3*t^5 + t^4 + 2*t^3,
t^5 + t^4 - t^3 - t^2,
2*t^10 - 4*t^9 - 9*t^8 + 2*t^7 + 5*t^6]

instead of [0,0,0,0].

2020-05-09 14:19:23 +0200 commented question Implicitization by symmetric polynomials

@rburing I had to clean up my code a little, but here it is.

2020-05-09 11:16:22 +0200 asked a question Implicitization by symmetric polynomials

Let $p_1,\dotsc,p_m$ be real polynomials (rational coefficients work just as well) in $t$ variables.

What is the most efficient way to compute the smallest degree $d$ such that $s(p_1,\dotsc,p_m) = 0$ for some nonzero rational symmetric polynomial $s$ in $m$ variables of degree at most $d$?


At the moment I am doing this for $t = 1$ by iterating the following simple algorithm over $d \geq 1$:

  1. Create a list Sc of the Schur functions corresponding to the partitions of $k \leq d$ with at most $m$ parts, which is a linear basis over $\Bbb{Q}$ for the space of symmetric polynomials in $m$ variables with degree up to $d$.

  2. Compute the list Ps of the evaluations of each element of Sc at $(p_1,\dotsc,p_m)$.

  3. Convert the elements of Ps into vectors and stop if they are linearly dependent, otherwise increase $d$ by one and repeat from 1.

This works reasonably well when the degrees of $p_1,\dotsc,p_m$ are small, but otherwise iterating over each $d$ involves quite a bit of work and it doesn't scale very well.

I know that the equivalent problem for $s \in \Bbb{Q}[X_1,\dotsc,X_m]$ is "readily" solved through variable elimination via Gröbner bases, but I have yet to find a way to make this work for symmetric polynomials. I also thought that I might compute the relevant ideal and then try to find the subset fixed by the symmetric group in $m$ variables, but I couldn't find any facilities in Sage to compute fixed sets under a group action (which I guess is a hard problem in general).
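
To be concrete, this is the kind of elimination I have in mind for the non-symmetric version of the problem (a toy example with $m = 3$ and a single variable $t$); the smallest degree of a nonzero element of the elimination ideal would then be the non-symmetric analogue of the $d$ I'm after:

A.<t,X1,X2,X3> = QQ[]
p = [t, t^2, t^3]                               # toy example
I = A.ideal([X1 - p[0], X2 - p[1], X3 - p[2]])
J = I.elimination_ideal(t)   # the ideal of all polynomial relations among p1, p2, p3
J.gens()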


This is my current code:

S = SymmetricFunctions(QQ).s()
R.<x> = QQ[]

def eval_schur(F, d):
    """
    Returns the evaluations at the vector F of the Schur functions of degree d.
    """
    m = len(F)
    schur_polys = ((l, S(l).expand(m)) for l in Partitions(d) if len(l) <= m)
    return ((l, sch(*F)) for (l,sch) in schur_polys)

def poly_vectors(polys):
    d = max(p.degree() for p in polys)
    return (p.padded_list(d+1) for p in polys)

def order_of_symmetric_dependence(F):
    """
    Return the smallest d such that the evaluations at F of the Schur
    polynomials of degree at most d are linearly dependent over QQ.
    """
    schur = [R(1)]
    vectors = [1]
    d = 0
    while not FreeModule(QQ,1).are_linearly_dependent(vectors):
        d += 1
        schur.extend((s for (_,s) in eval_schur(F,d)))
        vectors = poly_vectors(schur)
    return d

And here's a sample output:

In [2]: [order_of_symmetric_dependence([x, x^2, x^(2*k)]) for k in range(1,8)]
Out[2]: [4, 5, 8, 11, 12, 12, 12]

2020-05-09 10:42:57 +0200 received badge  Notable Question (source)
2020-04-30 16:52:57 +0200 received badge  Popular Question (source)
2020-04-30 16:52:57 +0200 received badge  Notable Question (source)
2019-06-17 15:09:33 +0200 received badge  Popular Question (source)
2018-09-05 08:55:46 +0200 received badge  Famous Question (source)
2018-01-10 14:00:59 +0200 asked a question Drawing a planar multigraph with loops

I would like to draw a planar graph with multiple edges and loops in Sage. Unfortunately, the default layout algorithm draws many intersecting edges, and Sage is unable to compute an embedding for graphs with loops or multiple edges.

Fortunately I know a planar embedding of this graph, so I tried using the set_embedding method, but it doesn't seem to work either. If I only list a vertex's neighbours, omitting their multiplicities, Sage complains that the list is shorter than the vertex's degree, while if I include multiple copies of each neighbour according to multiplicity, then Sage complains that the elements of the list aren't unique.
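
To illustrate (with a made-up graph, not the one I actually care about), this is the kind of failure I run into:

G = Graph([(0,1), (0,1), (1,2), (2,2)], multiedges=True, loops=True)
# one entry per neighbour: the list is shorter than the vertex's degree
try:
    G.set_embedding({0: [1], 1: [0, 2], 2: [1, 2]})
except Exception as e:
    print(e)
# one entry per incident edge: the entries are no longer unique
try:
    G.set_embedding({0: [1, 1], 1: [0, 0, 2], 2: [1, 2, 2]})
except Exception as e:
    print(e)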

Is there a way to achieve what I need?

2017-08-09 20:49:47 +0200 received badge  Popular Question (source)
2017-08-09 20:49:47 +0200 received badge  Notable Question (source)
2015-12-01 12:19:00 +0200 received badge  Teacher (source)
2015-12-01 01:21:35 +0200 answered a question Fast evaluation of big polynomials

The problem with the code sample you posted in your second comment isn't the part you highlighted, but the first loop, i.e. the generation of the polynomials.

Indeed, on my 5-year-old Intel Core i5-2500K I get

sage: R = BooleanPolynomialRing(256,'x')
sage: %timeit R.random_element(degree=8,terms=10)
100 loops, best of 3: 11.3 ms per loop
sage: %timeit R.random_element(degree=8,terms=100)
10 loops, best of 3: 112 ms per loop
sage: %timeit R.random_element(degree=8,terms=1000)
1 loops, best of 3: 1.12 s per loop
sage: %timeit R.random_element(degree=8,terms=10000)
1 loops, best of 3: 11.3 s per loop
sage: %timeit R.random_element(degree=8,terms=100000)
1 loops, best of 3: 1min 52s per loop

which suggests that the time to generate a random polynomial of degree at most $8$ in $256$ variables scales linearly with the number of terms. Now, the total number of monomials of degree at most $8$ is

$$ \sum_{k=0}^8 {{256}\choose{k}} = 423203101008289 $$

(which I computed with sum(map(lambda k: binomial(256,k), range(0,9)))), thus the time needed to generate one polynomial with R.random_element(degree=8,terms=+infinity) should be

sage: s = units.time.second
sage: t = 423203101008289/1000 * 1.12 * s
sage: t.convert(units.time.year)
15030.0441758398*year

On the other hand, you could try to speed this up with the choose_degree=True option, which helps a little bit

sage: %timeit R.random_element(degree=8, terms=1000, choose_degree=True)
10 loops, best of 3: 47.1 ms per loop
sage: %timeit R.random_element(degree=8, terms=10000, choose_degree=True)
1 loops, best of 3: 523 ms per loop
sage: %timeit R.random_element(degree=8, terms=100000, choose_degree=True)
1 loops, best of 3: 6.66 s per loop
sage: %timeit R.random_element(degree=8, terms=1000000, choose_degree=True)
1 loops, best of 3: 1min 28s per loop

although I almost ran out of my 8 GB of RAM with the last one, so unless you have access to lots of memory it is unlikely that you would be able to store the result of R.random_element(degree=8,terms=+infinity) even if you could compute it in a reasonable time, let alone $256$ of those...


Final note: if you don't need to use your polynomials more than once, you could try to generate their evaluations at some given point directly, although I fear that won't help much. Anyway, since this is for your master's thesis, I suggest you bring the problem up with your advisor.
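
To sketch what I mean by generating the evaluations directly (the helper below is made up for illustration and only roughly mimics the distribution of random_element):

from random import randint, sample

def random_eval(point, degree=8, terms=1000):
    # point: a list of GF(2) elements, one per variable
    value = GF(2)(0)
    for _ in range(terms):
        monomial = sample(range(len(point)), randint(0, degree))  # a random monomial
        value += prod((point[i] for i in monomial), GF(2)(1))     # its value at point
    return value

# e.g. random_eval([GF(2).random_element() for _ in range(256)], terms=10^5)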

2015-11-30 20:25:09 +0200 commented question Late binding and lazy symbolic thence numeric math

I don't understand: what does lazy evaluation have to do with symbolic simplifications? By the way, your second line of code fails in my Sage

sage: from scipy.constants import epsilon_0, c
sage: mu_0*epsilon_0=1/c^2
  File "<ipython-input-2-6e0cb446ba6f>", line 1
    mu_0*epsilon_0=Integer(1)/c**Integer(2)
SyntaxError: can't assign to operator

although this works

sage: mu_0 = var('mu_0')
sage: mu_0*epsilon_0==1/c^2
(8.854187817620389e-12)*mu_0 == (1.1126500560536185e-17)

2015-11-30 17:38:09 +0200 commented question Piecewise function of several variables and how to display it

I don't think so, although you could use floor(min(1, s/t))...
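
For example, treating s and t as the two (positive) arguments and evaluating numerically (for symbolic arguments you'd want min_symbolic rather than Python's min):

f = lambda s, t: floor(min(1, s/t))
[f(0.3, 0.6), f(0.9, 0.6)]   # [0, 1]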

2015-11-30 17:07:22 +0200 commented question jmol cannot run while plot3d

Did you restart Sage after installing jmol?

2015-11-30 16:57:00 +0200 commented answer How can I make a 3D scatter plot?

point3d does a great job, thank you! Despite being an occasional Sage user for a couple of years now, I'm still having some serious difficulties browsing and, most importantly, searching through the documentation...
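
For the record, a call along these lines is what did the trick for me (the sample data here is made up):

pts = [(n, x, (n*x).round()) for n in range(1, 6) for x in srange(0, 1, 0.1)]
point3d(pts, size=10)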

2015-11-30 16:09:01 +0200 received badge  Supporter (source)
2015-11-29 21:20:48 +0200 answered a question present sage output as "normal" mathematics

If you're working from the notebook you can use the pretty_print function. In your example

 pretty_print(g)

will display 2x as rendered LaTeX. It doesn't just typeset mathematics, either: you can also use it to prettify other objects, like long lists or dictionaries. Here's a more interesting example:

(image: pretty-printing the derivative of a function involving tangents and fractions)
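
Something along these lines reproduces it (the particular function is made up, since the original example was shown as an image):

var('x')
f = tan(x) / (1 + x^2)
pretty_print(diff(f, x))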

2015-11-29 17:29:16 +0200 asked a question How can I make a 3D scatter plot?

I would like to get an idea of the behaviour of a function that takes an integer and a real number as arguments and returns an integer, so I set out to make a 3D scatter plot over a discretisation of its domain. The problem is that I cannot find any 3D scatter plot function in the documentation.

To make a 2D scatter plot I could use either the aptly named scatter_plot function or the list_plot function, but according to the documentation the analogous list_plot3d function instead tries to plot a surface interpolated from the input points. For example, in the notebook I get

sage: list_plot3d([[1,2,3], [2,3,4], [3,4,5]], point_list=True)
/usr/lib/python2.7/site-packages/sage/repl/rich_output/display_manager.p\
y:570: RichReprWarning: Exception in _rich_repr_ while displaying
object: Jmol failed to create file
'/home/ap/.sage/temp/ap-arch-64/3172/dir_fZXWvL/preview.png', see
'/home/ap/.sage/temp/ap-arch-64/3172/tmp_jVjWT8.txt' for details
  RichReprWarning,
Graphics3d Object

instead of a plot of three points in Euclidean space. Using a viewer different from jmol or java3d at least shows something, but still not what I'm looking for.

2015-11-29 15:50:51 +0200 answered a question How can I programmatically define constraints for minimize_constrained?

With a bit of effort (the subtlety is that Python closures bind loop variables late, so each constraint needs its own copy of the index), I found a way. Given the above set-up, the following code returns a list of constraints for n variables:

from itertools import chain

def constraints(n):
    def lb(i):
        return lambda p: lower_bound(p[i])
    def ub(i):
        return lambda p: upper_bound(p[i])

    return list(chain([c1], *map(lambda i: [lb(i), ub(i)], range(n))))
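
For instance, reusing the objective f, the constraint c1, and the bound helpers from the question, a run in n = 3 dimensions would look like this (the starting point is arbitrary):

n = 3
sol = minimize_constrained(f, constraints(n), [0.5]*n)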

2015-11-29 13:38:45 +0200 received badge  Editor (source)
2015-11-29 13:20:53 +0200 asked a question How can I programmatically define constraints for minimize_constrained?

I'd like to run a minimization problem in several different dimensions, and then find a minimum over the results (for more information on the problem see here).

I can easily define both the objective function and the main constraint in a dimension-independent way with a simple combination of map, sum, and prod:

d = 25
t = 0.75
f  = lambda p: sum(map(lambda x:  d^2 * tan(x * pi/2) + 8*d, p))
c1 = lambda p: 1 - prod(map(lambda q: 1 - q, p)) - t

On the other hand, I'm not sure how to express the fact that every variable should be an element of the interval [a,b], where 0 < a < b < 1 are given beforehand. I thought I could use min like so:

a = 0.1
b = 0.95
lower_bound = lambda x: x - a
upper_bound = lambda x: b - x
c_lower = lambda p: min(map(lower_bound, p))
c_upper = lambda p: min(map(upper_bound, p))

but I'm worried that using those instead of a pair of constraints for every variable would have a negative effect on minimize_constrained.

2015-11-13 09:53:07 +0200 marked best answer Is there a more efficient way to compute the first digit of a number?

I need to compute the first digit of some large numbers. So far I've been converting them to strings, although it can be somewhat slow. For example:

sage: %timeit str(2^57885161)[0]
1 loops, best of 3: 3.07 s per loop

Is there a faster way to do this? For my purpose you can assume that all the numbers involved are powers of some small base.
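
(For comparison, the standard logarithm trick, sketched on the example above, avoids building the decimal string entirely; the accepted answer, discussed in the comments below, instead used lazy reals via RLF.)

n = 57885161
frac = (n * RealField(100)(2).log(10)).frac()   # fractional part of log10(2^n)
int(10^frac)                                    # the leading digit of 2^n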

2015-11-13 09:53:07 +0200 received badge  Scholar (source)
2015-11-12 22:21:26 +0200 commented answer Is there a more efficient way to compute the first digit of a number?

By the way, is there a particular reason why you used RLF(2^57885161) instead of RLF(2)^57885161? On my machine timing the first expression gives 100 loops, best of 3: 2.42 ms per loop, while the second one gives 1000 loops, best of 3: 285 µs per loop. To my inexpert eye this suggests that in the first case 2^57885161 is computed exactly and then converted to a floating point value, while in the second case the power is computed directly in floating point arithmetic.

2015-11-12 21:29:03 +0200 commented answer Is there a more efficient way to compute the first digit of a number?

Can you explain how this works? I have no trouble with the concept of lazy evaluation, but why doesn't str force the evaluation of the whole number? Or maybe it does, automatically selecting the optimal precision to compute the required result?

2015-11-11 11:52:51 +0200 received badge  Student (source)