
rtrwalker's profile - activity

2019-04-09 00:34:58 -0500 received badge  Popular Question (source)
2019-04-03 18:03:34 -0500 received badge  Famous Question (source)
2016-12-12 13:01:25 -0500 received badge  Notable Question (source)
2016-12-12 13:01:25 -0500 received badge  Popular Question (source)
2015-12-07 12:41:26 -0500 received badge  Famous Question (source)
2015-08-19 05:19:22 -0500 received badge  Notable Question (source)
2015-03-17 02:59:58 -0500 received badge  Good Question (source)
2015-02-23 01:47:09 -0500 received badge  Notable Question (source)
2014-11-10 09:44:51 -0500 received badge  Famous Question (source)
2014-06-29 11:01:01 -0500 received badge  Famous Question (source)
2014-06-29 11:01:01 -0500 received badge  Popular Question (source)
2014-06-29 11:01:01 -0500 received badge  Notable Question (source)
2014-06-28 20:15:03 -0500 marked best answer Where is walk.sage.py located after I attach walk.sage?

The sage tutorial Loading and Attaching Sage files states that if I create a file such as walk.sage containing:

print "Hello world"
print 2^4

and then load or attach it:

sage: attach 'walk.sage'
Hello world

I should get a file containing Python code in the same directory as walk.sage. I can find no such file (I've even searched the rest of my computer). Does anyone know where the file is?

2014-06-28 20:15:03 -0500 marked best answer How do I get an ordered list of a symbolic function's arguments?

How can I get a list/tuple of the variables in a symbolic function with the same ordering as when the function was defined? E.g. for the function below I would want (z, t), not the alphabetically ordered (t, z) I get with .variables() or .arguments(). The ordering has to be stored/used somewhere in Sage, because I can differentiate with respect to z and get D[0](u)(z, t) as an answer, where the '0' corresponds to 'z'.

sage: var('z,t')
(z, t)
sage: f = function('u',z,t)
sage: print (f)
u(z, t)
sage: f.variables()
(t, z)
sage: f.arguments()
(t, z)
sage: f.diff(z)
D[0](u)(z, t)
sage: f.diff(t)
D[1](u)(z, t)
2014-06-28 20:13:09 -0500 marked best answer How do I perform a change of variables for a PDE

How can I transform, step by step, the partial differential equation (PDE) u.diff(z,z)*k/gw == mv*u.diff(t) into u.diff(Z,Z) == u.diff(Tv), where Z = z/H, Tv = cv*t/H^2 and cv = k/(gw*mv)? As you can see from my attempt below, I can isolate cv; after that I have no idea how to do the change of variables. I can do this example by hand: let Z = z/H --> du/dz = du/dZ * dZ/dz --> du/dz = du/dZ * (1/H) ...etc. But other examples may not be so easy (and I don't want to do it by hand!).

var('z,t,k,gw,mv,cv')
u = function('u',z,t)
eq1 = u.diff(z,z)*k/gw == mv * u.diff(t)
# divide through by the coefficient of u.diff(z,z) (i.e. k/gw)
eq2 = eq1.divide_both_sides(eq1.left_hand_side().coeff(u.diff(z,z),1))
mycoeff = eq2.right_hand_side().coeff(u.diff(t),1)
eq3 = cv == 1/mycoeff
eq4 = solve(eq3, eq3.right_hand_side().default_variable(), solution_dict=true)
eq5 = eq2.subs(eq4[0])

which gives me the following output:

k*D[0, 0](u)(z, t)/gw == mv*D[1](u)(z, t)
D[0, 0](u)(z, t) == gw*mv*D[1](u)(z, t)/k
cv == k/(gw*mv)
[{gw: k/(cv*mv)}]
D[0, 0](u)(z, t) == D[1](u)(z, t)/cv
2014-01-20 15:15:27 -0500 received badge  Popular Question (source)
2013-11-19 09:38:11 -0500 received badge  Popular Question (source)
2013-03-27 13:47:49 -0500 commented answer sympy codegen with indices

To get your answer's code to work I need: 'from sympy.utilities.codegen import codegen' and 'from sympy import Eq'

2013-01-03 20:51:31 -0500 received badge  Notable Question (source)
2012-07-02 13:14:49 -0500 received badge  Popular Question (source)
2012-06-12 06:25:27 -0500 received badge  Famous Question (source)
2011-12-09 06:55:01 -0500 answered a question Defining constraint equations for minimize_constrained

Thanks @DSM. I've tried to generalize your solution with the following:

def get3(fn, constraint_nums, x0, iobj=0):
    """Constrained minimization of a function that is output, along with
    constraint equations, from function fn.

        fn - function that returns a tuple containing the objective
             function and constraint equations

        constraint_nums - tuple or list containing index positions of
             constraint equations in fn output

        x0 - initial guess

        iobj - index position of objective function in fn output.  default = 0
    """
    a = minimize_constrained(lambda x, j=iobj: fn(x)[j],
                             [lambda x, j=i: fn(x)[j] for i in constraint_nums],
                             x0)
    return a

Running get3(func_wrap, range(1,5), [2,3]) results in the correct (45.0, 6.25). I had some trouble with the list comprehension of the lambda functions for the constraints. If I used [lambda x: fn(x)[i] for i in constraint_nums] then the [i] wasn't hardcoded in: each of my lambda functions kept a reference to the variable i, and the list comprehension looped through the i's, meaning all the lambda functions were the same, with i equal to the final i, so that only my last constraint equation was used. [lambda x, j=i: fn(x)[j] for i in constraint_nums] seems to work.
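The closure pitfall described above is plain Python behaviour, not something Sage-specific. A minimal sketch (the names here are illustrative, not from the post):

```python
# Each lambda closes over the *variable* i, not its value at creation time,
# so after the loop finishes every lambda sees the final i.
funcs_late = [lambda x: x + i for i in range(3)]

# A default argument (j=i) is evaluated when the lambda is created,
# freezing the current value of i into each function.
funcs_bound = [lambda x, j=i: x + j for i in range(3)]

print([f(0) for f in funcs_late])   # [2, 2, 2] -- all see i == 2
print([f(0) for f in funcs_bound])  # [0, 1, 2] -- each keeps its own i
```

This is why the j=i default-argument form used in get3 picks up one constraint index per lambda.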

Two further questions: 1. Is there a way to determine the number of values returned by a function without actually evaluating the function? If there were, I could further generalise get3 by giving constraint_nums a default value. 2. What actually gets cached when I use CachedFunction? Is everything in the function stored, or just the output values? When should I be using fn.clear_cache()?

2011-12-08 08:07:44 -0500 commented answer Defining constraint equations for minimize_constrained

On a different note, within get2 I did try to use a list comprehension to specify my constraints: [lambda x: fn(x)[i] for i in range(1,5)]. This gave the answer (739.437607818, -608.579643471) for the non-cached version and still gave errors for the cached version; not sure how/why that approach stuffed up.

2011-12-08 08:02:16 -0500 answered a question Defining constraint equations for minimize_constrained

@DSM I can successfully run your example but if I adapt your solution to the example in the minimize_constrained documentation then I get an error for the cached version. For the non-cached version I get the correct result (45.0, 6.25).

def func(p):
    f = -p[0]-p[1]+50
    c_1 = p[0]-45
    c_2 = p[1]-5
    c_3 = -50*p[0]-24*p[1]+2400
    c_4 = -30*p[0]-33*p[1]+2100
    return f, c_1, c_2, c_3, c_4

@cached_function  # cached version of func
def func_cached(p):
    f = -p[0]-p[1]+50
    c_1 = p[0]-45
    c_2 = p[1]-5
    c_3 = -50*p[0]-24*p[1]+2400
    c_4 = -30*p[0]-33*p[1]+2100
    return f, c_1, c_2, c_3, c_4

def get2(fn):
    a = minimize_constrained(lambda x: fn(x)[0],
                             [lambda x: fn(x)[1], lambda x: fn(x)[2],
                              lambda x: fn(x)[3], lambda x: fn(x)[4]],
                             [2, 3])  # initial guess from the minimize_constrained docs example
    return a

If I run get2(func) the result is (45.0, 6.25). If I run get2(func_cached) I get errors. I did try rewriting func using x, y arguments (and fn(*x) in get2), but I got the same results.

2011-12-07 11:50:50 -0500 asked a question Defining constraint equations for minimize_constrained

I'm trying to think of a way to do constrained optimization where both the objective function that I want to minimise and my constraint function are calculated within the same overarching function. Let's say I have a function like the following:

def complicated_function(x,y,z):
    #...some lengthy and complicated actions
    return f, g

Is there a way to minimize f by changing x,y,z subject to g>=0?

Looking at the documentation for sage.numerical.optimize.minimize_constrained, it looks as if I have to define all my constraint equations individually, with the variables as a tuple. I could maybe wrap my complicated_function up like this:

def funcObjective(x):
    f, g = complicated_function(x[0],x[1],x[2])
    return f
def funcConstraint(x):
    f, g = complicated_function(x[0],x[1],x[2])
    return g

But that would mean complicated_function would be called multiple times for each optimization iteration, which seems highly inefficient. The problem would only get worse if there were more constraints. Any ideas how to define the constraint without reevaluating complicated_function?

On my internet wanderings I found the recently (June 6, 2011) released pyOpt 1.0 (journal article), which at first glance looks well suited to the problem. I see OpenOpt is an experimental package for Sage. I'm not sure if OpenOpt is suitable; the pyOpt documentation is, at first glance, clearer. Any chance pyOpt could be made an optional package for Sage? It's published under the GNU Lesser General Public License.

2011-12-02 18:33:31 -0500 marked best answer How do I perform a change of variables for a PDE

With some insights from the following posts:

substitute expression instead of formal function symbol

How do I get an ordered list of a symbolic functions arguments?

Call a function with argument list in python (non-sage link)

I've come up with a function, transform_func_x(expr, f, rels), to change the dependent variables of an expression. Below is the function itself and, below that, how I've used it (all copied and pasted from a Sage notebook). Any suggested improvements are welcome.

def transform_func_x(expr, f, rels):
    """Transform the dependent variables of a symbolic function

    - ``expr`` - symbolic function.  Usually a relational expression
      with differential operators such as a partial differential equation.

    - ``f`` -  expression defining a function (e.g. previously defined
      as f = function('u', z, t))

    - ``rels`` - list of relational expressions giving variable
      transformations.  For example to change y(z, t) to y(Z, T_v), rels
      could be [Z == z/H, T_v == c_v * t / H^2].  To change y(z, t) to
      y(Z, t), rels could be [Z == z / H]

    .. note::

       1. Transformations will not be performed for any relationship where the
       LHS is already one of the variables in f.  For example, if
       f = u(Z, t) and rels = [Z == z / H, T_v == c_v * t / H^2] then the
       "Z == z / H" transformation will be ignored.
       2. After applying this function to expr, you will probably want to
       update f by applying transform_func_x to f itself,
       i.e. transform_func_x(f, f, rels)
    """
    y = f.operator()
    x = f.operands()
    nx = f.number_of_operands()

    # used to transform variables within f to functions of those
    # variables, e.g. u(z, t) --> u(z / H, t)
    new_arg_list = x[:]
    # used to change the name of transformed variables within f,
    # e.g. u(z / H, t) --> u(Z, t)
    subs_dict1 = {}
    # used to transform variables outside of f,
    # e.g. z * u(Z, t) --> H * Z * u(Z, t)
    subs_dict2 = {}
    for v in rels:
        if v.left() not in x:
            # i.e. exclude transformations where the LHS of
            # rels is already a variable of f
            for i in range(nx):
                # find which variable the transformation applies to
                if x[i] in v.right().variables():
                    new_arg_list[i] = v.right()
                    # rename the transformed argument, e.g. z / H --> Z
                    subs_dict1[v.right()] = v.left()
                    subs_dict2[x[i]] = solve(v, x[i], solution_dict=true)[0][x[i]]

    # make dummy function for transforming variables of f
    dummy_arg_list = [var("x{0:d}".format(i)) for i in range(nx)]
    dummy_y(*dummy_arg_list) = function(str(y), *new_arg_list)

    return (expr.substitute_function(y, dummy_y).subs(subs_dict1).
            subs(subs_dict2))

var('z,t,H,T_v,Z,c_v,k_v,m_v,g_w')
f = function('u', z, t)
pde = k_v*f.diff(z,z)/g_w == m_v*f.diff(t)

print('Variable transformations')

print('PDE with u(z,t):')
print('Term to isolate:')
myterm = f.diff(f.operands()[0],2)
print (myterm)
print('Coefficient of term on LHS:')
print(mycoeff ...
2011-12-02 18:33:28 -0500 received badge  Nice Question (source)
2011-11-28 09:54:48 -0500 received badge  Commentator
2011-11-28 09:54:48 -0500 commented question solve gives 1, 2, or 3 answers depending if one value in my equation is a real, rational, or integer

In addition to my first comment to @kcrisman: for the case where g = 1/4, if I run solve() on the incomplete answer that solve() originally gave me, then I get the correct answer: solve(1/2*sqrt(4*x + 1) == h, x) gives x == h^2 - 1/4. Why doesn't solve() give me that answer in the first place?

2011-11-28 09:49:11 -0500 commented question solve gives 1, 2, or 3 answers depending if one value in my equation is a real, rational, or integer

Also, in my original question if I replace sol2 = solve(eqs[1].subs(sol1[0]),x_0) with sol2 = solve(eqs[1].subs(sol1[0]).expand(),x_0), i.e. expand before solving, I get x0 = some function of x0 which is wrong.

2011-11-28 09:27:14 -0500 commented question solve gives 1, 2, or 3 answers depending if one value in my equation is a real, rational, or integer

@kcrisman here's a smaller example that starts misbehaving for reals:

reset(); forget(); var('h,x,g'); assume(h > 0)
print(solve(sqrt(x + g) == h, x)[0])
print(solve(sqrt(x + 1) == h, x)[0])
print(solve(sqrt(x + 2.0) == h, x)[0])
print(solve(sqrt(x + 1/4) == h, x)[0])
print(solve(sqrt(x + 2.5) == h, x)[0])
myrr.<rr> = PolynomialRing(RR)
print(solve(sqrt(x + rr) == h, x)[0])

which gives:

x == h^2 - g
x == h^2 - 1
x == h^2 - 2
1/2*sqrt(4*x + 1) == h
1/2*sqrt(2*x + 5)*sqrt(2) == h

and a traceback error on the last example with rr (not enough space in a comment to show). Other polynomial rings (CC, ZZ, QQ) gave the correct answer.

2011-11-25 13:10:09 -0500 asked a question solve gives 1, 2, or 3 answers depending if one value in my equation is a real, rational, or integer

Any idea why solve() is giving me different answers depending on what the value of 'g' is in my equations (see code below)? If g is a variable, an integer, or 1.0, then I get the two correct solutions (i.e. + & - a sqrt). If g is 1/4 or 1.1, I get an incomplete answer, from which I can get the two correct solutions by doing a solve on the solved solution. If g is a real whole number like 2.0 or 3.0, then I get 3 solutions: the two correct ones and a spurious one. I only discovered this behaviour when I tried eqs = [0 == f(x).diff(x)(x=0), 0 == f(x)(x=0)]; solve(eqs, T_1, x_0) and Sage hung, just calculating forever. Are there any general rules about how I should enter variables in equations for use with solve()? I want to know beforehand if something like 1/4 will cause problems, in which case I could use a variable v instead and later, after using solve(), substitute with .subs(v=1/4).

var('x, x_0, T_1, T_2, g, h, d')

def f(x):
    return B_0 + B_1 * (x - x_0) + B_2 * sqrt((x - x_0) ^ 2 + g*d)

B_0 = T_2 * (x_0 - h)
B_1 = (T_1 + T_2) / 2
B_2 = (T_2 - T_1) / 2

def check_solutions(gval):
    print("g = {0}".format(gval))
    eqs = [0 == f(x).diff(x)(x=0), 0 == f(x)(x=0)]
    eqs = [eq.subs(g=gval) for eq in eqs]
    print(eqs)
    sol1 = solve(eqs[0], T_1)
    print("******Solution for T_1:")
    print(sol1[0])
    print("******sub T_1 solution into 2nd equation:")
    sol2 = solve(eqs[1].subs(sol1[0]), x_0)
    print(eqs[1].subs(sol1[0]))
    print("******Solutions for x_0")
    print(sol2)
    print("******Solve again for x_0")
    print([solve(sol2[i], x_0) for i in range(len(sol2))])

gvals = [g, 1, 1/4, 2.0]  # a variable, an integer, a rational, and a real, as described above
for i in range(len(gvals)):
    check_solutions(gvals[i])

running this I get:

g = g
[0 == 1/2*(T_1 - T_2)*x_0/sqrt(d*g + x_0^2) + 1/2*T_1 + 1/2*T_2, 0 == -(h - x_0)*T_2 - 1/2*(T_1 + T_2)*x_0 - 1/2*(T_1 - T_2)*sqrt(d*g + x_0^2)]
******Solution for T_1:
T_1 == (x_0 - sqrt(d*g + x_0^2))*T_2/(x_0 + sqrt(d*g + x_0^2))
******sub T_1 solution into 2nd equation:
0 == -(h - x_0)*T_2 - 1/2*((x_0 - sqrt(d*g + x_0^2))*T_2/(x_0 + sqrt(d*g + x_0^2)) + T_2)*x_0 - 1/2*((x_0 - sqrt(d*g + x_0^2))*T_2/(x_0 + sqrt(d*g + x_0^2)) - T_2)*sqrt(d*g + x_0^2)
******Solutions for x_0
x_0 == -sqrt(-d*g + h^2),
x_0 == sqrt(-d*g + h^2)
******Solve again for x_0
[[x_0 == -sqrt(-d*g + h^2)], [x_0 == sqrt(-d*g + h^2)]]
g = 1
[0 == 1/2*(T_1 - T_2)*x_0/sqrt(x_0^2 + d) + 1/2*T_1 + 1/2*T_2, 0 == -(h - x_0)*T_2 - 1/2*(T_1 + T_2)*x_0 - 1/2*(T_1 - T_2)*sqrt(x_0^2 + d)]
******Solution for T_1:
T_1 == (x_0 - sqrt(x_0 ...
2011-11-17 14:03:32 -0500 marked best answer Where is walk.sage.py located after I attach walk.sage?

(Edit: I originally thought that there wasn't a file, but I think that's wrong.)

Look in $HOME/.sage/temp/DIR0/DIR1/ where $HOME is your home directory, DIR0 has a name like $HOSTNAME and DIR1 is a number, the process id of the Sage process. This directory is deleted when you quit Sage.

If you want to produce a more permanent file, then from your shell prompt, run sage --preparse walk.sage. This will produce walk.sage.py in the current directory.