The question is not well defined. The best we can do is to guess the intended question, or a related one, and answer that instead. (The original question must have lived in the same circle of ideas, and should be attackable with similar tools.)
Restatement:
Let us fix an integer K>1.
We consider
K random variables Z_1, …, Z_K which follow the standard normal distribution N(0,1),
and K random variables V_1, …, V_K which follow the uniform distributions on the intervals (1,2), …, (1,K+1), respectively.
The family of all these variables is assumed to be an independent family of random variables defined on the same probability space. Let E denote the expectation (the mean) on this space.
We build X(K) = |Z_1V_1 + ⋯ + Z_KV_K| and study its expectation f(K) = E[X(K)] = E|Z_1V_1 + ⋯ + Z_KV_K| as a function of K.
The exercise asks for
heuristic arguments that may lead to an asymptotic f(K) = O(K^?) in big-O notation, and
a computer simulation that supports the heuristic.
That was the complicated part of the answer. From this point on, things are straightforward:
The random variable under the absolute value has mean zero, since by independence E[Z_j V_j] = E[Z_j] E[V_j] = 0 · E[V_j] = 0, and each term has variance
Var[Z_j V_j] = E[(Z_j V_j)^2] − (E[Z_j V_j])^2 = E[Z_j^2] E[V_j^2] − (E[Z_j] E[V_j])^2
= 1 · (1/j) ∫_1^{j+1} x^2 dx − 0 = (1/(3j)) ((j+1)^3 − 1^3),
and so on.
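As a quick sanity check on the per-term variance formula, here is a Monte Carlo sketch in plain Python (independent of Sage; the choice of j = 3 and the sample size are arbitrary):

```python
import random

random.seed(0)

j = 3          # check the formula for one fixed index j
N = 200_000    # arbitrary sample size for the Monte Carlo check

# draw N independent samples of Z_j * V_j with Z_j ~ N(0,1), V_j ~ U(1, j+1)
samples = [random.gauss(0, 1) * random.uniform(1, j + 1) for _ in range(N)]

# the mean is 0, so the average of (Z_j V_j)^2 estimates the variance
empirical = sum(s * s for s in samples) / N

# closed form: ((j+1)^3 - 1^3) / (3j), which is 7 for j = 3
formula = ((j + 1) ** 3 - 1) / (3 * j)

print(empirical, formula)
```

For j = 3 the closed form gives 63/9 = 7, and the empirical value should land within a few percent of it.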
We used independence. Using independence once more, the variance of the sum is the sum of the variances, and we compute ∑_{1 ≤ j ≤ K} (1/(3j)) ((j+1)^3 − 1^3):
sage: var( 'j,K' );
sage: latex( sum( 1/3/j * ( (j+1)^3-1^3 ), j, 1, K ).factor() )
\frac{1}{9} \, {\left(K^{2} + 6 \, K + 14\right)} K
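The closed form produced by Sage can also be confirmed exactly in plain Python with rational arithmetic (a small sketch using the standard fractions module):

```python
from fractions import Fraction

def variance_sum(K):
    """Sum of the per-term variances ((j+1)^3 - 1)/(3j) for j = 1..K."""
    return sum(Fraction((j + 1) ** 3 - 1, 3 * j) for j in range(1, K + 1))

def closed_form(K):
    """Sage's factored answer: (1/9) * (K^2 + 6K + 14) * K."""
    return Fraction(K * (K ** 2 + 6 * K + 14), 9)

# exact equality for several K values, no floating-point error involved
for K in (1, 2, 10, 100):
    assert variance_sum(K) == closed_form(K)
print("closed form verified")
```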
Then we expect:
(Z_1V_1 + ⋯ + Z_KV_K) / √( (1/9)(K^2 + 6K + 14) K ) ∼ N(0,1) .
(This is the optimistic central limit theorem, applied outside mathematics when we do not have time to check the details.)
For a big K we can optimistically and statistically approximate the left-hand side by a normally distributed Y ∼ N(0,1).
Then E|Y| is twice the integral over [0,∞) of y · (1/√(2π)) exp(−y^2/2), which gives E|Y| = √(2/π).
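The value of E|Y| can be checked numerically; the sketch below integrates 2 · y · φ(y) on [0, 6] with a simple midpoint rule (the half-normal tail beyond 6 contributes only about 1e-8) and compares the result with √(2/π):

```python
import math

def integrand(y):
    # 2 * y * (1/sqrt(2*pi)) * exp(-y^2/2): the contribution of |Y| at y
    return 2.0 * y * math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)

# midpoint rule on [0, 6] with a fine grid
n, a, b = 100_000, 0.0, 6.0
h = (b - a) / n
approx = h * sum(integrand(a + (i + 0.5) * h) for i in range(n))

exact = math.sqrt(2.0 / math.pi)   # E|Y| = sqrt(2/pi) ≈ 0.79788
print(approx, exact)
```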
Putting it all together we get: f(K) ∼ (2 / (3√(2π))) · √( K (K^2 + 6K + 14) ) .
That's the maths.
Now we simulate, and we also compute the values predicted by the guessed asymptotic:
The simulation...
for pow in [2, 3, 4, 5]:
    K = 10 ** pow
    SAMPLES = []    # and we append
    for experiment in [1..99]:
        SAMPLES.append(abs(sum([gauss(0, 1) * uniform(1, k + 2) for k in range(K)])))
    print("%s -> %s" % (K, mean(SAMPLES)))
We get
100 -> 294.711735785
1000 -> 8714.8222098
10000 -> 249403.620665
100000 -> 8734793.09067
(On a new run, of course, other random numbers will appear above.)
And the asymptotic:
for pow in [2, 3, 4, 5]:
    K = 10 ** pow
    print("%s -> %f" % (K, 2/3/sqrt(2*pi) * sqrt(K * (K^2 + 6*K + 14))))
100 -> 274.004912
1000 -> 8435.694028
10000 -> 266041.315371
100000 -> 8410694.055422
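The same comparison can be reproduced in plain Python without Sage (a hedged sketch: the seed, the sample count 500, and the single size K = 1000 are arbitrary choices made here for a quick check); the ratio of the empirical mean to the predicted value should come out close to 1:

```python
import math
import random

random.seed(42)

K = 1000          # one representative size
SAMPLES = 500     # more samples than above, to tighten the estimate

def one_draw(K):
    """One realization of |Z_1 V_1 + ... + Z_K V_K|."""
    return abs(sum(random.gauss(0, 1) * random.uniform(1, k + 2)
                   for k in range(K)))

empirical = sum(one_draw(K) for _ in range(SAMPLES)) / SAMPLES
predicted = 2 / 3 / math.sqrt(2 * math.pi) * math.sqrt(K * (K ** 2 + 6 * K + 14))

print(empirical / predicted)   # should be close to 1
```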
I did not check all the details, but the evidence strongly supports f(K) ∈ O(K^{3/2}); even more precisely, we have
f(K) ∼ (1/√(2π)) · (2/3) · K^{3/2} .
What does your summation mean in the definition of A? It is mathematically unclear to me.
There are too (two) many appearances of N, while m is introduced and then forgotten. We may also be delighted to see E and M, the only two letters conventionally used to denote expectation or mean, appearing as the names of two random variables. We may use X and V instead. Then the two variables should be independent, else nothing can be computed; the statement should make this clear. Since I am inside a comment, one more remark is appropriate. Since everything in probability has to go quickly and intuitively, we have a lot of "probability theory without probability spaces". Instead, one has a dictionary of concepts (e.g. density) and a phenomenological way to manipulate them without a solid foundation, a shortcut. In my opinion, Sage and similar computer algebra systems help one to see and use the probability space.