Being (a) a statistician and (b) lazy in the extreme, I tend to use well-validated tools that I already know... Since R is available from Sage, my minimal-effort solution is:
sage: Data = [[100, 0.0489], [110, 0.0633], [120, 0.1213], [130, 0.1244],
....:         [140, 0.1569], [150, 0.1693], [160, 0.3154], [170, 0.6146],
....:         [180, 0.9118], [190, 1.7478], [200, 2.4523], [210, 4.7945],
....:         [230, 17.9766], [240, 29.3237], [250, 52.4374], [260, 94.6463],
....:         [270, 173.3447], [280, 396.0443], [290, 538.6976],
....:         [300, 1118.9984], [310, 1442.3694], [320, 4151.9089],
....:         [330, 6940.7322]]
sage: RData=r.data_frame(X=[t[0] for t in Data], Y=[t[1] for t in Data])
sage: RModel=r.as_formula("log(Y)~X")
sage: r.summary(r.lm(RModel, data=RData))
Call:
lm(formula = sage103, data = sage166)

Residuals:
     Min       1Q   Median       3Q      Max
-0.55634 -0.34349 -0.09318  0.25851  0.86680

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -9.214846   0.283884  -32.46   <2e-16 ***
X            0.053301   0.001255   42.45   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4257 on 21 degrees of freedom
Multiple R-squared:  0.9885,    Adjusted R-squared:  0.9879
F-statistic:  1802 on 1 and 21 DF,  p-value: < 2.2e-16
This is in good concordance with paulmason's results (I would have been quite surprised if this were not the case...), and gives Romuald_314 the $R^2$ he asked for.
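For what it's worth, the same log-linear fit (regressing $\log Y$ on $X$) can be reproduced without the R interface, in plain Python with numpy. This is just a sketch of the equivalent ordinary-least-squares computation, not what Sage's R bridge does internally:

```python
import numpy as np

# Same (X, Y) data as above; Y grows roughly exponentially in X.
data = [(100, 0.0489), (110, 0.0633), (120, 0.1213), (130, 0.1244),
        (140, 0.1569), (150, 0.1693), (160, 0.3154), (170, 0.6146),
        (180, 0.9118), (190, 1.7478), (200, 2.4523), (210, 4.7945),
        (230, 17.9766), (240, 29.3237), (250, 52.4374), (260, 94.6463),
        (270, 173.3447), (280, 396.0443), (290, 538.6976),
        (300, 1118.9984), (310, 1442.3694), (320, 4151.9089),
        (330, 6940.7322)]

x = np.array([t[0] for t in data], dtype=float)
logy = np.log([t[1] for t in data])

# Degree-1 least-squares fit of log(Y) ~ X; polyfit returns
# coefficients from highest degree down: (slope, intercept).
slope, intercept = np.polyfit(x, logy, 1)

# Coefficient of determination: R^2 = 1 - SS_res / SS_tot.
pred = intercept + slope * x
ss_res = np.sum((logy - pred) ** 2)
ss_tot = np.sum((logy - logy.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(slope, intercept, r2)
```

The slope and intercept agree with the `lm` estimates above (0.053301 and -9.214846), and `r2` matches the reported multiple R-squared of 0.9885.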