I've seen the Wikipedia articles on how to sum $1+1+1+1+\cdots=-1/2$ or $1+2+3+4+\cdots=-1/12$.
Is there a theory behind it, or is it just an ad hoc trick? It seems to rely on analytic continuation of the Riemann zeta function.
The article says it may be used in physical applications. So I wonder: under what conditions does this type of regularization give a useful and consistent result?
There are other series that can be summed by different means, e.g. $1+2+4+8+\cdots=-1$. Is it possible to obtain the same result with the Riemann zeta regularization? Do different types of regularization give the same results (when they exist)? If not, how can I know which one will work for my (physical?) calculation?
Answer
One of the keys is surely that any method of summation must be consistent with the conventional result whenever it is applied to a convergent series (a property usually called regularity). I find it fairly obvious (but in case a source in the literature is sought: Konrad Knopp explains it in his monograph on infinite series) that "it must be made clear what the symbol 1+2+3+4+... really means". The first thing we must do is find/define an expression for each term of the sum as a function of its index; otherwise 1+1+1+1+... and 1+0+1+0+1+0+... cannot be distinguished from each other. In the geometric series with base q the general term is $q^k$, depending on the index k; in the zeta series the general term is $k^{-s}$ with a fixed exponent s; and so on.
So we should first write $s(p)=\sum_{k=1}^\infty a(k,p)$ and define $a$ as a function of the index $k$ and possibly of an external parameter $p$, so that we have a description of the general term.
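To make this concrete with the question's first example (taking the zeta-series reading as the assumed interpretation): with the general term $a(k,s)=k^{-s}$, the symbol $1+1+1+1+\cdots$ is read as the value at $s=0$,
$$\sum_{k=1}^\infty k^{-s}\,\Bigg|_{s=0}=\zeta(0)=-\frac12,$$
while $1+0+1+0+\cdots$ would need a different general term, for instance $a(k)=\frac{1+(-1)^{k-1}}{2}$, and so belongs to a different family of series.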
After that we might, for example, find
- telescoping effects: consecutive terms cancel (possibly only partially), and the whole expression then reduces to a finite sum
- a continuous interval for some parameter p on which $s(p)$ is a convergent series with a closed-form expression $g(p)$ depending on that parameter; it might then be consistent to use $g(p)$ also in the cases where the series $s(p)$ is divergent (a sketch with the geometric series follows this list)
- and so on.
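A minimal sketch of that second case, using the geometric series from the question: for $|q|<1$ the series converges to a closed form, and that closed form is then read at $q=2$,
$$s(q)=\sum_{k=0}^\infty q^k=\frac1{1-q}=g(q)\quad(|q|<1),\qquad g(2)=\frac1{1-2}=-1,$$
so $1+2+4+8+\cdots=-1$ is not the limit of the partial sums but the value of the extended closed form $g(q)$ at $q=2$.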
In the latter case it may happen that we find such an extension and it seems to be a sensible re-expression, but later another reformulation is found, for instance as a sum with a telescoping effect, which reduces to a different value of the (originally divergent) sum. Then the summation/regularization found earlier must be revised - and in general this is still a field of open research. There are accepted summation procedures even for whole classes of infinite series - accessible for instance by Abel, Cesàro, Euler, Borel or Ramanujan summation, to name only the classical ones; but there are arbitrarily many series for which we do not have an accepted summation procedure.
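The simplest classical illustration of such a procedure (assuming only the textbook definitions of Abel and Cesàro summation) is Grandi's series $1-1+1-1+\cdots$: Abel summation evaluates the associated power series at the boundary,
$$\lim_{x\to1^-}\sum_{k=0}^\infty(-1)^k x^k=\lim_{x\to1^-}\frac1{1+x}=\frac12,$$
and the Cesàro means of the partial sums $1,0,1,0,\dots$ also tend to $\frac12$, so here the two methods agree - which is exactly the kind of consistency one hopes for.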
L. Euler's zeta "regularization", for instance, used the observation that
$$s(p) = \sum_{k=1}^\infty k^p $$
can formally be rewritten as
$$\sum_{k=1}^\infty (-1)^{k-1} k^p +2\cdot \sum_{k=1}^\infty (2k)^p$$
and then
$$\sum_{k=1}^\infty (-1)^{k-1} k^p +2\cdot 2^p \cdot \sum_{k=1}^\infty k^p $$
and then, writing $t(p)=\sum_{k=1}^\infty (-1)^{k-1} k^p$ for the alternating series, $$ s(p) = t(p) + 2 \cdot 2^p \cdot s(p)$$
$$ s(p)(1-2 \cdot 2^p) = t(p)$$
$$ s(p) = {t(p) \over (1-2 \cdot 2^p)} $$
where $t(p)$ can be evaluated for a wider range of the parameter $p$ than $s(p)$ itself, since the alternating series is much better behaved. But that this works in general depends on two things: a) there is a continuous range of the parameter $p$ where everything is convergent (and allows the same simplification/reformulation), and b) this range can be continuously extended while preserving the meaningfulness of finite values for $s(p)$.
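As a check of the last formula (assuming the alternating series $t(1)$ is evaluated by Abel summation), the case $p=1$ reproduces the value from the question:
$$t(1)=1-2+3-4+\cdots=\lim_{x\to1^-}\sum_{k=1}^\infty(-1)^{k-1}k\,x^{k-1}=\lim_{x\to1^-}\frac1{(1+x)^2}=\frac14,$$
$$s(1)=\frac{t(1)}{1-2\cdot 2^1}=\frac{1/4}{-3}=-\frac1{12},$$
in agreement with $1+2+3+4+\cdots=-\frac1{12}$, i.e. $\zeta(-1)=-\frac1{12}$.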