# Gauss distribution fit

Hello! I have a problem with fitting my data set with a Gaussian (normal) distribution.

data = [[90.00, 2.0], [97.40, 5.0], [104.8, 14.0], [112.2, 12.0], [119.6, 11.0], \
[127.0, 6.0], [134.4, 3.0], [141.8, 1.0], [149.2, 2.0], [156.6, 1.0]]

var('sigma mu max x')
model(x) = max*(1/sqrt(2*pi*sigma**2))*exp(-(x-mu)**2/(2*sigma**2))

find_fit(data, model)


gives a result:

[sigma == 1.0, mu == 1.0, max == 1.0]


I tried to use a Python function instead of the symbolic one:

var('sigma mu max x')
def model(x, sigma, mu, max):
    return max*(1/(sigma*sqrt(2*pi)))*exp(-(x-mu)**2/(2*sigma**2))

find_fit(data, model, parameters=[sigma, mu, max], variables=[x])


and obtained the very same result:

[sigma == 1.0, mu == 1.0, max == 1.0]


Why does it give this (obviously incorrect) result, and what is the right way to do it?

Thanks.



You need to give a hint for the initial values. Without one, Sage assumes an initial guess of 1 for every parameter. With sigma == mu == 1, model(90) (and the model at every other data point) evaluates to 0 to numerical precision because of the exponential suppression by the Gaussian, so the least-squares routine sees a flat objective and has no gradient to follow away from the starting point. If you start with 100 for mu, near the midpoint of the data (the entries of initial_guess follow the same order as the parameters in the result: max, mu, sigma), then it works fine:

sage: find_fit(data, model, initial_guess=[1,100,1])
[max == 405.75796954829985, mu == 111.86913960269014, sigma == 11.968861052746961]
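
To see concretely why the default guess stalls, evaluate the model at the first data point with sigma = mu = max = 1. The Gaussian factor exp(-(90 - 1)^2/2) is on the order of 10^-1720, which underflows to zero in double precision, the arithmetic the numerical fitter works in. A quick check, using the symbolic model from the question (RDF is Sage's double-precision real field):

sage: RDF(model(90).subs(sigma=1, mu=1, max=1))
0.0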

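As a sanity check, you can overlay the fitted curve on the data. This is a sketch; passing solution_dict=True makes find_fit return the solution as a dictionary, which can be substituted straight into the model:

sage: fit = find_fit(data, model, initial_guess=[1, 100, 1], solution_dict=True)
sage: points(data) + plot(model(x).subs(fit), (x, 85, 160))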

Thanks a lot! It works great!
