Thursday, August 31, 2017

For which complex $a,\,b,\,c$ does $(a^b)^c=a^{bc}$ hold?



Wolfram Mathematica simplifies $(a^b)^c$ to $a^{bc}$ only for positive real $a, b$ and $c$. See W|A output.



I had previously struggled to understand why $\dfrac{\log(a^b)}{\log(a)}=b$ and $\log(a^b)=b\log(a)$ do not always hold (having only worked with logarithms of positive reals, I always thought of $\log_a(a^b)=\dfrac{\log(a^b)}{\log(a)}=\dfrac{b\log(a)}{\log(a)}=b$ before I started to self-study complex numbers), but then I learned about branch cutting of multifunctions: the natural logarithm can be made into a function by first finding, for each $z$, the set of $w$ with $z=e^w$ (the inverse of the natural exponential), and then somehow (no matter how) selecting a unique solution $w$ from that set, so that $\forall{z}\,\exists!{w}$, yielding a function $z\mapsto{w}$. This results in a discontinuity where solutions are dropped:
[figure: branch cut]



This is done because functions are generally easier to deal with than multifunctions. Wolfram's convention is to define $\log(z)$ as the inverse of $e^z$ such that $\log(1)=0$ and such that the branch cut discontinuity lies along ${(-\infty;0]}$. I know it is somewhat unpopular to think of the natural logarithm as a single-valued function, but I am going to follow this convention (I personally think it is well justified; at least that is what happens with $\operatorname{arcsin}$, $\operatorname{sqrt}$, etc., so in fact I am comfortable with it).



Then it is clear to me why $\dfrac{\log\left((-2)^{-3}\right)}{\log(-2)}\approx{0.814319+0.841574i}\neq{-3}$, even though $-3$ perfectly satisfies the equation $(-2)^x=(-2)^{-3}$.




Given that $\log(a^b)=b\log(a)$ holds for positive real $a, b$ only, I wondered how I would then solve equations of the form $a^x=b$, where $a,b$ are complex and not necessarily positive reals, since my first step had always been to rewrite everything in base $e$:
$$e^{\log(a^x)}=e^{\log(b)}$$
and then carry the exponent out of the logarithm:
$$e^{x\log(a)}=e^{\log(b)}$$
(adding $2\pi i k,\,k\in\mathbb{Z}$ to either exponent and then eliminating the exponentials yields the result). So I asked on ##math, where it was pointed out to me that another way to rewrite $a^b$ in base $e$ is to use the definition of the logarithm: $a^b=(a)^b=(e^{\log(a)})^b=e^{b\log(a)}$. I was happy with that (and even solved $(-2)^x=-3$ for $x$ just for fun using this approach, see here), but some time later I realized that this does not actually work for arbitrary complex $a$ and $b$ either!
$$1=1^{1/2}=((-1)^2)^{1/2}=(-1)^{2\cdot\frac{1}{2}}=(-1)^1=-1$$
The rule $(a^b)^c=a^{bc}$, as I was told by W|A, requires $a, b, c$ to be positive reals.
When I pointed that out on ##math, I was told that $a^b=e^{b\log(a)}$ is the definition of complex exponentiation. I checked, and Wolfram Mathematica agreed with this identity. Too good!
But then I realized that one of the following must not be true:





  • $(a^b)^c=a^{bc}$ holds only for positive real $a,b,c$

  • $\forall{a,b\in\mathbb{C}}:a^b=e^{b\log(a)}$

  • $\forall{a\in\mathbb{C}}:a=e^{\log(a)}$



The last one is the definition of the logarithm, so it should be true. The second must also be true, otherwise I would not know how to solve equations. Hence:
$$a^c=e^{c\log(a)}$$
is by the definition of complex exponentiation, just as I was told. Then, rewriting $a$ in the LHS using the definition of logarithm:

$$a=e^{\log(a)}$$
$$\left(e^{\log(a)}\right)^c=e^{c\log(a)}$$
Now, since for every complex $b$ there exists a $z$ such that $b=\log(z)$, we can write $\log(a)=b$ and obtain:
$$(e^b)^c=e^{bc}$$
for all complex $b,c$! So, what is that?



Have I made a mistake? Or is the base $a=e$ that special?



Or does one in fact (as I suspect) just need to require $a$ to be a positive real, with $b,c$ irrelevant?
$$\forall{a}\in\mathbb{R}\forall{b,c}\in\mathbb{C}:a>0\implies{(a^b)^c=a^{bc}}$$

Have I found a bug in Mathematica and W|A or made a huge stupid mistake leading myself to drastic misunderstanding?



P. S. This is my first post at MSE, I am not a math major, just a hobbyist, so sorry if I am struggling at basics here. Also sorry for my English: it is not my native language.






Edit: thank you @Andrew for your answer.




$\forall{a,b,c}:-\pi<\Im(b\log(a))\leq\pi\implies(a^b)^c=a^{bc}$





Very clear and straightforward, works flawlessly.



But it appears that, though the implication is obviously true, there are more cases (read: values of $a,\,b,\,c$) for which $(a^b)^c=a^{bc}$ holds; for instance, I found that it is true for $c\in\mathbb{Z}$ and arbitrary complex $a,\,b$, for example:



$$\big((-2)^{-3}\big)^{-2}=(-2)^{(-3)\times(-2)};$$
$$\left(\frac{1}{(-2)^3}\right)^{-2}=(-2)^{6};$$
$$\left(-\frac{1}{8}\right)^{-2}=64;$$
$$\left(-8\right)^2=64;$$

$$64=64.$$



For this case, $b\log(a)=-3\log(-2)=-3(\log(2)+i\pi)=-3\log(2)-3i\pi$, and hence $\Im(-3\log(2)-3i\pi)=-3\pi\notin{({-\pi;\pi}]}$, therefore @Andrew's implication does not cover all cases.



So, are there any more solutions of $(a^b)^c=a^{bc}$?


Answer



If we agree $\log z$ has imaginary part between $-\pi$ and $\pi$ and is defined only on the set $D = \mathbf{C} \setminus(-\infty, 0]$, then
\begin{align*}
\exp(\log z) &= z\quad\text{for all $z$ in $D$,} \\
\log(\exp z) &= z\quad\text{for all $z$ with imaginary part between $-\pi$ and $\pi$.}

\end{align*}
If the imaginary part of $z$ is between $(2k - 1)\pi$ and $(2k + 1)\pi$, then
$$
\log(\exp z) = z - 2\pi ki
\tag{1}
$$
because $z - 2\pi ki$ has imaginary part between $-\pi$ and $\pi$.



Defining $a^{b} = \exp(b \log a)$, we have
\begin{align*}

(a^{b})^{c}
&= \exp\bigl(c \log(a^{b})\bigr)
= \exp\bigl(c \log [\exp (b \log a)]\bigr), \\
a^{bc}
&= \exp(bc \log a).
\end{align*}



If $b \log a$ has imaginary part between $(2k - 1)\pi$ and $(2k + 1)\pi$, then $\log\bigl(\exp(b \log a)\bigr) = b \log a - 2\pi ki$ by (1), so
$$
\exp\bigl(c \log(a^{b})\bigr)

= \exp\bigl(c(b \log a - 2\pi ki)\bigr)
= \exp(bc \log a) \exp(-2\pi cki),
$$
which is equal to $a^{bc}$ if and only if $\exp(2\pi cki) = 1$.



In particular, if $b \log a$ has imaginary part between $-\pi$ and $\pi$ (i.e., $k = 0$), or if $c$ is an integer, then
$$
(a^{b})^{c}
= \exp\bigl(c \log(a^{b})\bigr)
= \exp\bigl(c \log(\exp b \log a)\bigr)

= \exp(bc \log a) = a^{bc}.
$$
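As a quick numerical illustration of this case analysis (a sketch, not part of the original answer): Python's cmath.log uses the same principal branch, with imaginary part in $(-\pi,\pi]$, so one can check both a case where the identity holds and the $1=-1$ "paradox" from the question.

import cmath

def cpow(a, b):
    # principal-branch complex power: a**b = exp(b * Log a), Im(Log a) in (-pi, pi]
    return cmath.exp(b * cmath.log(a))

# c is an integer, so (a^b)^c = a^(bc) even though Im(b log a) = -3*pi lies outside (-pi, pi]
a, b, c = -2, -3, -2
print(cpow(cpow(a, b), c), cpow(a, b * c))   # both approximately 64

# the 1 = -1 "paradox": here k = 1, so exp(2*pi*i*c*k) = exp(i*pi) = -1 and the identity fails
a, b, c = -1, 2, 0.5
print(cpow(cpow(a, b), c), cpow(a, b * c))   # approximately 1 vs -1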


Solving an equation involving binomial coefficients and complex numbers



Question:



Solve the following equation for $x$:



$$\sum_{k=0}^{n}\binom{n}{k}x^{k}\cos(k\theta )=0$$



Attempt:




I think this equation comes from:



$$(x\cos\theta+ix\sin\theta)^{k}$$



Is that right?



I don't know what to do after that.


Answer



Assuming $x,\theta \in \mathbb{R}$, $n\in\mathbb{Z}$:




$$\sum _{k=0}^{n}{n\choose k}{x}^{k}\cos \left( k\theta \right) =\frac{1}{2}\left( 1+x{\rm e}^{i\theta} \right) ^{n}+\frac{1}{2} \left( 1+x{\rm e}^{-i\theta} \right) ^{n}=0,$$
$$\Rightarrow\dfrac{\left( 1+x{\rm e}^{i\theta} \right) ^{n}}{\left( 1+x{\rm e}^{-i\theta} \right) ^{n}}=-1={\rm e}^{i\pi},$$
$$\dfrac{1+x{\rm e}^{i\theta}}{1+x{\rm e}^{-i\theta}}={\rm exp}\left({\dfrac{im\pi}{n}}\right):m \,\text{odd}\in \mathbb{Z},$$
$$x=\dfrac{\sin \left( {\dfrac {\pi m}{2n}} \right)}{\sin \left(\theta-{\dfrac {\pi m}{2n}} \right) }.$$
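For the record, here is a sketch of the algebra behind the last step (it is not spelled out above): cross-multiplying the previous line and multiplying both sides by ${\rm e}^{-im\pi/(2n)}$ gives
$$x\left({\rm e}^{i\left(\theta-\frac{m\pi}{2n}\right)}-{\rm e}^{-i\left(\theta-\frac{m\pi}{2n}\right)}\right)={\rm e}^{i\frac{m\pi}{2n}}-{\rm e}^{-i\frac{m\pi}{2n}},$$
i.e. $2i\,x\sin\left(\theta-\frac{m\pi}{2n}\right)=2i\sin\frac{m\pi}{2n}$, which rearranges to the value of $x$ stated above.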


Wednesday, August 30, 2017

algebra precalculus - Algebraic doubt concerning orthogonal polynomials in physics research paper


I am not sure if this post should go into the math or physics stackexchange. I am sorry if it is misplaced and I'm obviously fine with moving it accordingly.


So I am currently reading this paper and I am having a hard time seeing the validity of equation (4) on page 4. I try to summarize what seems to be important here for convenient access despite the paywall, so there are:



  • vectors $\vec{\sigma} = \lbrace \sigma_1, \sigma_2, \dots, \sigma_N\rbrace$ of length $N$. Depending on another parameter $M$, the components $\sigma_i$ of those vectors may be $$ \sigma_i = \begin{cases} \pm m, \pm(m-1), \dots, \pm 1 & M = 2m\\ \pm m, \pm(m-1), \dots, \pm 1, 0 & M = 2m + 1 \end{cases} $$




  • a scalar product for arbitrary functions of $\vec{\sigma}$, $f\left(\vec{\sigma}\right)$ and $g\left(\vec{\sigma}\right)$, is given by $$\langle f,g \rangle = \rho_N^0\mathrm{Tr}^{(N)}f\cdot g\tag{1}$$ with the trace operator $\mathrm{Tr}^{(N)} = \sum\limits_{\sigma_1}\sum\limits_{\sigma_2}\cdots\sum\limits_{\sigma_N}$ over the $M^N$ different vectors, and the normalization $\rho^0_N = M^{-N}$.





  • a set of (real valued) polynomials $\Theta_n\left(\sigma_p\right)$ is defined as a function of the vector's components $\sigma_p$ by $$\begin{align} \Theta_n\left(\sigma_p\right) = \begin{cases} \Theta_{2s}\left(\sigma_p\right) = \sum\limits_{k=0}^{s} c_k^{(s)}\sigma_p^{2k}, & s = \begin{cases} 0, 1, \dots, m-1 & M = 2m\\ 0, 1, \dots, m-1, m & M = 2m+1 \end{cases} \\ \Theta_{2s+1}\left(\sigma_p\right) = \sum\limits_{k=0}^{s} d_k^{(s)}\sigma_p^{2k+1}, & s=0, 1, \dots, m-1 \end{cases}\tag{2} \end{align}$$



The coefficients of the polynomials in $(2)$ are chosen such that the scalar product according to $(1)$ yields $$\begin{align}\langle\Theta_n\left(\sigma_p\right), \Theta_{n^\prime}\left(\sigma_p\right)\rangle &= \rho^0_N\mathrm{Tr}^{(N)} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_p\right)\\ &= M^{-1}\sum\limits_{\sigma_p=-m}^{m} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_p\right) = \delta_{nn^\prime}\tag{3} \end{align}$$ with the Kronecker delta $\delta_{nn^\prime} = \begin{cases}0 & n \neq n^\prime\\1 & n = n^\prime\end{cases}$.


I can see $(3)$ is some kind of orthogonality relation, the paper then however states



For any two lattice points $p$ and $p^\prime$ and $n \geq 1$, $n^\prime \geq 1$, eq. (3) generalizes to


$$\langle\Theta_n\left(\sigma_p\right), \Theta_{n^\prime}\left(\sigma_{p^\prime}\right)\rangle = M^{-2}\sum\limits_{\sigma_p=-m}^{m}\sum\limits_{\sigma_{p^\prime}=-m}^{m} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_{p^\prime}\right) = \delta_{nn^\prime}\delta_{pp^\prime}\tag{4} $$




Can someone show me how I can see $(4)$ is in fact a "generalization" of $(3)$?


I see how $(4)$ results in $(3)$ when considering the same point $p$: $$\begin{align} &M^{-2}\sum\limits_{\sigma_p=-m}^{m}\sum\limits_{\sigma_{p^\prime}=-m}^{m} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_p\right)\\ =& M^{-2}M\sum\limits_{\sigma_p=-m}^{m}\Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_p\right)\\ =& M^{-1}\sum\limits_{\sigma_p=-m}^{m}\Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_p\right) = \delta_{n,n^\prime}, \end{align}$$ so in that sense $(4)$ seems to be some kind of "generalization" of $(3)$.


However, I do not get how I know the last equal sign in $(4)$ holds, i.e. how I know the second Kronecker delta $\delta_{p p^\prime}$ must go there. If I'm not mistaken, to understand this I'd need to be able to show the nested sum indeed sums up to zero when considering different points $p$ and $p^\prime$ when $(3)$ holds, but I am not able to do so.


EDIT: After the first answer, I feel as if I should clarify my reasoning a bit. The polynomials are fully defined once their coefficients have been determined from the orthogonality relation eq. (3). In fact, constructing the polynomials for $M=3$, $$\begin{align} \Theta_0\left(\sigma_p\right) &= 1\\ \Theta_1\left(\sigma_p\right) &= \sqrt{\frac{3}{2}}\sigma_p\\ \Theta_2\left(\sigma_p\right) &= \sqrt{2} - \frac{3}{\sqrt{2}}\sigma_p^2 \end{align}$$ as given in the paper as an example, eq. (4) holds despite the polynomials being defined with the help of eq. (3) only. This is why I think eq. (3) somehow needs to be enough to guarantee eq. (4) holds but I fail to see how this is facilitated.


EDIT 2: After the answers I received I thought about my problem a little further and I unfortunately still can't fully comprehend the reasoning behind knowing the last equation in (4) holds. However, I think I can give a more precise statement of what my stumbling block actually is.


So, starting from the generalization of (3) in (4), I split up the summations into two partial sums: $$ M^{-2}\sum\limits_{\sigma_p=-m}^{m}\sum\limits_{\sigma_{p^\prime}=-m}^{m} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_{p^\prime}\right) = M^{-2}\left( \sum\limits_{\sigma_p} \Theta_n\left(\sigma_p\right)\Theta_{n^\prime} \left(\sigma_p\right) + \sum\limits_{\sigma_{p^\prime}\neq \sigma_p} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_{p^\prime}\right) \right) . $$ The first partial sum contains only those summands with equal arguments for both polynomials, while the second only contains summands with different arguments for both polynomials.


E.g, for $M = 3$ as given in the paper, the first sum contains $3$ summands ($\sigma_p$ cycling through $-1, 0, 1$) and the second sum contains $6$ summands (cycling through $(-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0)$) which together yield the same $9$ summands the original double sum in the generalization of (3) in (4) cycles through.


So, the first partial sum can be simplified easily using (3), see the annotation of its underbrace. However, the last equality in (4) implies the second partial sum evaluates to what is annotated in its underbrace below: $$ \begin{align} & M^{-2}\left( \underbrace{\sum\limits_{\sigma_p} \Theta_n\left(\sigma_p\right)\Theta_{n^\prime} \left(\sigma_p\right)}_{=M\delta_{nn^\prime}\text{ from (3)}} + \underbrace{\sum\limits_{\sigma_{p^\prime}\neq \sigma_p} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_{p^\prime}\right)}_{=\left(M^2\delta_{pp^\prime}-M\right)\delta_{nn^\prime}} \right)\\ = & M^{-2}\left( M\delta_{nn^\prime} + \left(M^2\delta_{pp^\prime}-M\right)\delta_{nn^\prime} \right)\\ = & M^{-2}\left( M + M^2\delta_{pp^\prime}-M \right)\delta_{nn^\prime}\\ = & M^{-2}\left( M^2\delta_{pp^\prime} \right)\delta_{nn^\prime}\\ = & \delta_{pp^\prime}\delta_{nn^\prime} \end{align} $$ So if I did not make a mistake here, my comprehension problem boils down to:


How is it, that $ \sum\limits_{\sigma_{p^\prime}\neq \sigma_p} \Theta_n\left(\sigma_p\right) \Theta_{n^\prime}\left(\sigma_{p^\prime}\right) = \left(M^2\delta_{pp^\prime}-M\right)\delta_{nn^\prime} $ is guaranteed to hold?


I do not understand where this property of the polynomials comes from - as opposed to (3) because after all, (3) was used to define the polynomials, i.e. they are designed to fulfill (3). As I see things now, this additional property can't be due to (3) as (3), to my understanding, only makes a statement considering the same arguments for both polynomials in all summands. So where does this additional property come from?



Thank you very much in advance.


Answer



If you follow the paper you realize that the orthogonality of the basis $\Theta_n(\sigma_p)$ is defined on a single site of the lattice; Eq. (3) is exactly that. There is nothing that allows you to go from Eq. (3) to Eq. (4), except that it makes physical sense that the single-site states you use to describe one site $p$ should be independent from the states used to describe another site $q$. So you impose this condition by hand:


$$ \langle \Theta_n(\sigma_p)| \Theta_n(\sigma_q)\rangle = \delta_{pq} $$
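As a quick numerical check (a sketch, using the $M=3$ polynomials quoted in the question, not anything from the paper), both the single-site relation (3) and, for $p\neq p^\prime$, the factorized double sum in (4) can be verified directly; the latter vanishes for $n,n^\prime\geq 1$ because each $\Theta_n$ with $n\geq 1$ sums to zero over $\sigma$ (it is orthogonal to $\Theta_0=1$).

from math import sqrt
from itertools import product

M = 3
sigmas = [-1, 0, 1]
theta = [
    lambda s: 1.0,                           # Theta_0
    lambda s: sqrt(3 / 2) * s,               # Theta_1
    lambda s: sqrt(2) - 3 / sqrt(2) * s**2,  # Theta_2
]

# eq. (3): single-site orthonormality, M^{-1} sum_s Theta_n(s) Theta_n'(s) = delta_{nn'}
for n, n2 in product(range(3), repeat=2):
    val = sum(theta[n](s) * theta[n2](s) for s in sigmas) / M
    assert abs(val - (1.0 if n == n2 else 0.0)) < 1e-12

# eq. (4) with p != p': the double sum factorizes into (sum Theta_n)(sum Theta_n') / M^2,
# which is 0 for n, n' >= 1 since each such Theta_n has zero mean
for n, n2 in product(range(1, 3), repeat=2):
    val = sum(theta[n](s) * theta[n2](t) for s in sigmas for t in sigmas) / M**2
    assert abs(val) < 1e-12
print("ok")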


algebra precalculus - How many days will it take to complete the work?




A husband and wife started painting their house, but the husband left the painting 5 days before the completion of the work. How many days will it take to complete the work, which the husband alone would have completed in 20 days and the wife in 15 days?



To solve this problem, I first evaluated the portion of the work done by the husband.


Which is: $15 \times \frac{1}{20} =\frac{3}{4}$, as I thought the total work is $1$.


Now, work done = time required $\times$ work rate.


Where have I gone wrong?


Now, for the wife: she will do the remaining portion of the work, $(1 - \frac{3}{4} = \frac{1}{4})$,


and therefore $$\frac14 = \text{time} \times \frac1{15},$$ so the time is $\frac{15}{4}$.


Answer





Which is: $15 \times \frac{1}{20} = \frac{3}{4}$, as I thought the total work is 1.



I think this is wrong: when they both work together, they do $\frac{1}{15} + \frac{1}{20} = \frac{7}{60}$ of the whole in a day. So, if the husband leaves $5$ days early, the work left is $5 \times \frac{7}{60} = \frac{7}{12}$, instead of the $\frac{1}{4}$ that you used.


elementary number theory - Determination of the last two digits of $777^{777}$

May I know if my proof is correct? Thank you.



This is equivalent to finding $x$ such that $777^{777} \equiv x \pmod{100}.$



By Euler's theorem, $777^{\varphi(100)} =777^{40}\equiv 1 \pmod{100}$.



It follows that $777^{760} \equiv 1 \pmod{100}$, and hence $777^{777} = 777^{760}\cdot 777^{17} \equiv 777^{17} \equiv x \pmod{100}.$




By binomial expansion, $777^{17} = 77^{17}+700m$, for some positive integer $m$.



Hence $77^{17} \equiv x \pmod{100}$, so $x = 97$.
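A quick sanity check of this result (a sketch, not part of the proof), using Python's built-in modular exponentiation:

print(pow(777, 777, 100))   # 97
print(pow(77, 17, 100))     # 97, the reduced computation used above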

functions - Correspondence and bijective correspondence between two sets

Let $A$ and $B$ be two sets. When we say there is a bijective correspondence between $A$ and $B$, it means there is a bijective map between them.




In some texts, to prove there is a correspondence between $A$ and $B$, one just shows that corresponding to every element of $A$ there is an element in $B$, and conversely. I, however, think we should prove that there is a well-defined surjective map from $A$ onto $B$. Am I right? Please explain.

Tuesday, August 29, 2017

sequences and series - Value of $\sum\limits_n x^n$



Why does the following hold:




\begin{equation*}
\displaystyle \sum\limits_{n=0}^{\infty} 0.7^n=\frac{1}{1-0.7} = 10/3 ?
\end{equation*}



Can we generalize the above to




$\displaystyle \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ ?





Are there some values of $x$ for which the above formula is invalid?



What about if we take only a finite number of terms? Is there a simpler formula?




$\displaystyle \sum_{n=0}^{N} x^n$




Is there a name for such a sequence?







This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.


Answer



By definition, a "series" (an "infinite sum")
$$\sum_{n=k}^{\infty} a_n$$
is defined to be a limit, namely
$$\sum_{n=k}^{\infty} a_n= \lim_{N\to\infty} \sum_{n=k}^N a_n.$$

That is, the "infinite sum" is the limit of the "partial sums", if this limit exists. If the limit exists, equal to some number $S$, we say the series "converges" to the limit, and we write
$$\sum_{n=k}^{\infty} a_n = S.$$
If the limit does not exist, we say the series diverges and is not equal to any number.



So writing that
$$\sum_{n=0}^{\infty} 0.7^n = \frac{1}{1-0.7}$$
means that we are asserting that
$$\lim_{N\to\infty} \sum_{n=0}^N0.7^n = \frac{1}{1-0.7}.$$



So what your question is really asking is: why is this limit equal to $\frac{1}{1-0.7}$? (Or rather, that is the only way to make sense of the question).




In order to figure out the limit, it is useful (but not strictly necessary) to have a formula for the partial sums,
$$s_N = \sum_{n=0}^N 0.7^n.$$
This is where the formulas others have given come in. If you take the $N$th partial sum and multiply by $0.7$, you get
$$\begin{array}{rcrcrcrcrcrcl}
s_N &= 1 &+& (0.7) &+& (0.7)^2 &+& \cdots &+& (0.7)^N\\
(0.7)s_N &= &&(0.7) &+& (0.7)^2 &+&\cdots &+&(0.7)^N &+& (0.7)^{N+1}
\end{array}$$
so that
$$(1-0.7)s_N = s_N - (0.7)s_N = 1 - (0.7)^{N+1}.$$

Solving for $s_N$ gives
$$s_N = \frac{1 - (0.7)^{N+1}}{1-0.7}.$$
What is the limit as $N\to\infty$? The only part of the expression that depends on $N$ is $(0.7)^{N+1}$. Since $|0.7|\lt 1$, then $\lim\limits_{N\to\infty}(0.7)^{N+1} = 0$. So,
$$\lim_{N\to\infty}s_N = \lim_{N\to\infty}\left(\frac{1-(0.7)^{N+1}}{1-0.7}\right) = \frac{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}(0.7)^{N+1}}{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}0.7} = \frac{1 - 0}{1-0.7} = \frac{1}{1-0.7}.$$
Since the limit exists, then we write
$$\sum_{n=0}^{\infty}(0.7)^n = \frac{1}{1-0.7}.$$



More generally, a sum of the form
$$a + ar + ar^2 + ar^3 + \cdots + ar^k$$
with $a$ and $r$ constant is said to be a "geometric series" with initial term $a$ and common ratio $r$. If $a=0$, then the sum is equal to $0$. If $r=1$, then the sum is equal to $(k+1)a$. If $r\neq 1$, then we can proceed as above. Letting

$$S = a +ar + \cdots + ar^k$$
we have that
$$S - rS = (a+ar+\cdots+ar^k) - (ar+ar^2+\cdots+ar^{k+1}) = a - ar^{k+1}$$
so that
$$(1-r)S = a(1 - r^{k+1}).$$
Dividing through by $1-r$ (which is not zero since $r\neq 1$), we get
$$S = \frac{a(1-r^{k+1})}{1-r}.$$
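A quick numerical sanity check of this partial-sum formula (a sketch; the values of $a$, $r$, $k$ below are arbitrary test values):

a, r, k = 1.0, 0.7, 10
direct = sum(a * r**n for n in range(k + 1))
formula = a * (1 - r**(k + 1)) / (1 - r)
print(direct, formula)   # both approximately 3.2674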



A series of the form
$$

\sum_{n=0}^{\infty}ar^{n}
$$
with $a$ and $r$ constants is called an infinite geometric series.
If $r=1$ and $a\neq 0$, then
$$
\lim_{N\to\infty}\sum_{n=0}^{N}ar^{n}
= \lim_{N\to\infty}\sum_{n=0}^{N}a
= \lim_{N\to\infty}(N+1)a
$$
does not exist (the partial sums are unbounded),

so the series diverges. If $r\neq 1$, then using the formula above we have:
$$
\sum_{n=0}^{\infty}ar^n = \lim_{N\to\infty}\sum_{n=0}^{N}ar^{n} = \lim_{N\to\infty}\frac{a(1-r^{N+1})}{1-r}.
$$
The limit exists if and only if $\lim\limits_{N\to\infty}r^{N+1}$ exists. Since
$$
\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}
0 &\mbox{if $|r|\lt 1$;}\\
1 & \mbox{if $r=1$;}\\
\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}

\end{array}\right.
$$
it follows that:
$$
\begin{align*}
\sum_{n=0}^{\infty}ar^{n} &=\left\{\begin{array}{ll}
0 &\mbox{if $a=0$;}\\
\text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\
\lim\limits_{N\to\infty}\frac{a(1-r^{N+1})}{1-r} &\mbox{if $r\neq 1$;}\end{array}\right.\\
&= \left\{\begin{array}{ll}

\text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\
\text{diverges}&\mbox{if $a\neq 0$, and $r=-1$ or $|r|\gt 1$;}\\
\frac{a(1-0)}{1-r}&\mbox{if $|r|\lt 1$;}
\end{array}\right.\\
&=\left\{\begin{array}{ll}
\text{diverges}&\mbox{if $a\neq 0$ and $|r|\geq 1$;}\\
\frac{a}{1-r}&\mbox{if $|r|\lt 1$.}
\end{array}\right.
\end{align*}
$$




Your particular example has $a=1$ and $r=0.7$.






Since this recently came up (09/29/2011), let's provide a formal proof that
$$
\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}
0 &\mbox{if $|r|\lt 1$;}\\
1 & \mbox{if $r=1$;}\\

\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}
\end{array}\right.
$$



If $r\gt 1$, then write $r=1+k$, with $k\gt0$. By the binomial theorem, $r^n = (1+k)^n \gt 1+nk$, so it suffices to show that for every real number $M$ there exists $n\in\mathbb{N}$ such that $nk\gt M$. This is equivalent to asking for a natural number $n$ such that $n\gt \frac{M}{k}$, and this holds by the Archimedean property; hence if $r\gt 1$, then $\lim\limits_{n\to\infty}r^n$ does not exist. From this it follows that if $r\lt -1$ then the limit also does not exist: given any $M$, there exists $n$ such that $r^{2n}\gt M$ and $r^{2n+1}\lt M$, so $\lim\limits_{n\to\infty}r^n$ does not exist if $r\lt -1$.



If $r=-1$, then for every real number $L$ either $|L-1|\gt \frac{1}{2}$ or $|L+1|\gt \frac{1}{2}$. Thus, for every $L$ and for every $M$ there exists $n\gt M$ such that $|L-r^n|\gt \frac{1}{2}$ proving the limit cannot equal $L$; thus, the limit does not exist. If $r=1$, then $r^n=1$ for all $n$, so for every $\epsilon\gt 0$ we can take $N=1$, and for all $n\geq N$ we have $|r^n-1|\lt\epsilon$, hence $\lim\limits_{N\to\infty}1^n = 1$. Similarly, if $r=0$, then $\lim\limits_{n\to\infty}r^n = 0$ by taking $N=1$ for any $\epsilon\gt 0$.



Next, assume that $0\lt r\lt 1$. Then the sequence $\{r^n\}_{n=1}^{\infty}$ is strictly decreasing and bounded below by $0$: we have $0\lt r \lt 1$, so multiplying by $r\gt 0$ we get $0\lt r^2 \lt r$. Assuming $0\lt r^{k+1}\lt r^k$, multiplying through by $r$ we get $0\lt r^{k+2}\lt r^{k+1}$, so by induction we have that $0\lt r^{n+1}\lt r^n$ for every $n$.




Since the sequence is bounded below, let $\rho\geq 0$ be the infimum of $\{r^n\}_{n=1}^{\infty}$. Then $\lim\limits_{n\to\infty}r^n =\rho$: indeed, let $\epsilon\gt 0$. By the definition of infimum, there exists $N$ such that $\rho\leq r^N\lt \rho+\epsilon$; hence for all $n\geq N$,
$$|\rho-r^n| = r^n-\rho \leq r^N-\rho \lt\epsilon.$$
Hence $\lim\limits_{n\to\infty}r^n = \rho$.



In particular, $\lim\limits_{n\to\infty}r^{2n} = \rho$, since $\{r^{2n}\}_{n=1}^{\infty}$ is a subsequence of the converging sequence $\{r^n\}_{n=1}^{\infty}$. On the other hand, I claim that $\lim\limits_{n\to\infty}r^{2n} = \rho^2$: indeed, let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$, $r^n - \rho\lt\epsilon$. Moreover, we can assume that $\epsilon$ is small enough so that $\rho+\epsilon\lt 1$. Then
$$|r^{2n}-\rho^2| = |r^n-\rho||r^n+\rho| = (r^n-\rho)(r^n+\rho)\lt (r^n-\rho)(\rho+\epsilon) \lt r^n-\rho\lt\epsilon.$$
Thus, $\lim\limits_{n\to\infty}r^{2n} = \rho^2$. Since a sequence can have only one limit, and the sequence of $r^{2n}$ converges to both $\rho$ and $\rho^2$, then $\rho=\rho^2$. Hence $\rho=0$ or $\rho=1$. But $\rho=\mathrm{inf}\{r^n\mid n\in\mathbb{N}\} \leq r \lt 1$. Hence $\rho=0$.



Thus, if $0\lt r\lt 1$, then $\lim\limits_{n\to\infty}r^n = 0$.




Finally, if $-1\lt r\lt 0$, then $0\lt |r|\lt 1$. Let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$ we have $|r^n| = ||r|^n|\lt\epsilon$, since $\lim\limits_{n\to\infty}|r|^n = 0$. Thus, for all $\epsilon\gt 0$ there exists $N$ such that for all $n\geq N$, $| r^n-0|\lt\epsilon$. This proves that $\lim\limits_{n\to\infty}r^n = 0$, as desired.



In summary,
$$\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}
0 &\mbox{if $|r|\lt 1$;}\\
1 & \mbox{if $r=1$;}\\
\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}
\end{array}\right.$$







The argument suggested by Srivatsan Narayanan in the comments to deal with the case $0\lt|r|\lt 1$ is less clumsy than mine above: there exists $a\gt 0$ such that $|r|=\frac{1}{1+a}$. Then we can use the binomial theorem as above to get that
$$|r^n| = |r|^n = \frac{1}{(1+a)^n} \leq \frac{1}{1+na} \lt \frac{1}{na}.$$
By the Archimedean Property, for every $\epsilon\gt 0$ there exists $N\in\mathbb{N}$ such that $Na\gt \frac{1}{\epsilon}$, and hence for all $n\geq N$, $\frac{1}{na}\leq \frac{1}{Na} \lt\epsilon$. This proves that $\lim\limits_{n\to\infty}|r|^n = 0$ when $0\lt|r|\lt 1$, without having to invoke the infimum property explicitly.


sequences and series - Find the value of $n$.

Write the value of $n$ if the sum of $n$ terms of the series $1+3+5+7+\dots$ is $n^2$.



I'm not getting the right value if I proceed with the general formula for finding the sum of $n$ terms of an arithmetic series. The general summation formula for an arithmetic series is $\frac{n(2a+(n-1)d)}{2}$, where $a$ is the first term, $d$ is the common difference and $n$ is the number of terms.

integration - Integrating $\int^{\infty}_0 e^{-x^2}\,dx$ using Feynman's parametrization trick



I stumbled upon this short article last weekend; it introduces an integral trick that exploits differentiation under the integral sign. On its last page, the author, Mr. Anonymous, left several exercises without any hints, one of which is to evaluate the Gaussian integral
$$
\int^\infty_0 e^{-x^2} \,dx= \frac{\sqrt{\pi}}{2}
$$
using this parametrization trick. I have been evaluating it through trial and error using different parametrizations, but no luck so far.







Here are what I have tried so far:




  • A first instinct would be to do something like:$$
    I(b) = \int^\infty_0 e^{-f(b)x^2}\,dx
    $$
    for some permissible function $f(\cdot)$, differentiating it will lead to a simple solvable ode:
    $$
    \frac{I'(b)}{I(b)} = -\frac{f'(b)}{2f(b)}

    $$
    which gives:
    $$
    I(b) = \frac{C}{\sqrt{f(b)}}.
    $$
    However, finding this constant $C$ is basically equivalent to evaluating the original integral, so we are stuck here without leaving this parametrization-trick framework.


  • A second try involves an exercise on the same page:
    $$
    I(b) = \int^\infty_0 e^{-\frac{b^2}{x^2}-x^2}dx.
    $$

    Taking derivative and rescaling the integral using change of variable we have:
    $$
    I'(b) = -2I(b).
    $$
    This gives us another impossible to solve constant $C$ in:
    $$
    I(b) = C e^{-2b}
    $$
    without leaving this framework yet again.


  • The third try is to modify Américo Tavares's answer in this MSE question:

    $$
    I(b) = \int^\infty_0 be^{-b^2x^2}\,dx.
    $$
    It is easy to show that:
    $$
    I'(b) = \int^\infty_0 e^{-b^2x^2}\,dx - \int^\infty_0 2b^2 x^2 e^{-b^2x^2}\,dx = 0
    $$
    by an integration by parts identity:
    $$
    \int^\infty_0 x^2 e^{- c x^2}\,dx = \frac{1}{2c}\int^\infty_0 e^{- c x^2}\,dx .

    $$
    Then $I(b) = C$, ouch, stuck again at this constant.







Notice in that Proving $\displaystyle\int_{0}^{\infty} e^{-x^2} dx = \frac{\sqrt \pi}{2}$ question, Bryan Yocks's answer is somewhat similar to the idea of parametrization, however he has to introduce another parametric integration to produce a definite integral leading to $\arctan$.



Is there such a one shot parametrization trick solution like the author Anonymous claimed to be "creative parameterizations and a dose of differentiation under the integral"?


Answer




Just basically independently reinvented Bryan Yock's solution as a more 'pure' version of Feynman.



Let $$I(b) = \int_0^\infty \frac {e^{-x^2}}{1+(x/b)^2} \mathrm d x = \int_0^\infty \frac{e^{-b^2y^2}}{1+y^2} b\,\mathrm dy$$ so that $I(0)=0$, $I'(0)= \pi/2$ and $I(\infty)$ is the thing we want to evaluate.



Now note that rather than differentiating directly, it's convenient to multiply by some stuff first to save ourselves some trouble. Specifically, note



$$\left(\frac 1 b e^{-b^2}I\right)' = -2b \int_0^\infty e^{-b^2(1+y^2)} \mathrm d y = -2 e^{-b^2} I(\infty)$$



Then usually at this point we would solve the differential equation for all $b$, and use the known information at the origin to infer the information at infinity. Not so easy here because the indefinite integral of $e^{-x^2}$ isn't known. But we don't actually need the solution in between; we only need to relate information at the origin and infinity. Therefore, we can connect these points by simply integrating the equation definitely; applying $\int_0^\infty \mathrm d b$ we obtain




$$-I'(0)= -2 I(\infty)^2 \quad \implies \quad I(\infty) = \frac{\sqrt \pi} 2$$


Monday, August 28, 2017

number theory - How to prove that any (integer)$^{1/n}$ that isn't an integer, is irrational?




Is my proof below correct and complete?



I wanted to prove that for any $n$th root of an integer, if it is not an integer, then it is irrational:
$$\begin{cases}
m,n\in \mathbb{N}\\\sqrt[n]{m}\notin \mathbb{N}
\end{cases}\implies \sqrt[n]{m}\notin \mathbb{Q}.$$



I start by assuming that $m^{\frac 1n}$ is rational and non-integer. So there exist co-prime integers $a,b$ so that $$\sqrt[n]{m}=\frac{a}{b}$$ $$\implies
m=\frac{a^n}{b^n}\in\mathbb{N}.$$

But since $a$ and $b$ have no common factor, $a^n$ and $b^n$ also have no common factor. So:
$$\frac{a^n}{b^n}\notin\mathbb{N},$$
a contradiction.


Answer



Your proof is fine. You can use essentially the same idea to prove the following more general statement:



Theorem. If $ P(X) \in \mathbf Z[X] $ is a monic polynomial, then any rational roots of $ P $ are integers. In other words, $ \mathbf Z $ is integrally closed.



Proof. Assume that $ q = a/b $ is a rational root with $ a, b $ coprime, and let $ P(X) = X^n + c_{n-1} X^{n-1} + \ldots + c_0 $. We have $ P(q) = 0 $, which gives




$$ a^n + c_{n-1} a^{n-1} b + \ldots + c_0 b^n = 0 $$



In other words, $ a^n $ is divisible by $ b $. This is a contradiction unless $ b = \pm 1 $, since then any prime dividing $ b $ also divides $ a $, contradicting coprimality. Hence, $ b = \pm 1 $ and $ q \in \mathbf Z $.


modular arithmetic - How can I tell if a number in base 5 is divisible by 3?


I know of the sum of digits divisible by 3 method, but it seems to not be working for base 5.



How can I check if number in base 5 is divisible by 3 without converting it to base 10 (or 3, for that matter)?


Answer



Divisibility rules generally rely on the remainders of the weights of digits having a certain regularity. The standard method for divisibility by $3$ in the decimal system works because the weights of all digits have remainder $1$ modulo $3$. The same is true for $9$. For $11$, things are only slightly more complicated: Since odd digits have remainder $1$ and even digits have remainder $-1$, you need to take the alternating sum of digits to test for divisibility by $11$.


In base $5$, we have the same situation for $3$ as we have for $11$ in base $10$: The remainder of the weights of odd digits is $1$ and that of even digits is $-1$. Thus you can check for divisibility by $3$ by taking the alternating sum of the digits.


More generally, in base $b$ the sum of digits works for the divisors of $b-1$ and the alternating sum of digits works for the divisors of $b+1$.
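A small script illustrating the alternating-sum test in base $5$ (a sketch, with a brute-force comparison against ordinary arithmetic; the function name is my own):

def divisible_by_3_base5(digits):
    # digits are base-5 digits, most significant first; since 5 = -1 (mod 3),
    # the number is congruent mod 3 to the alternating sum of its digits,
    # with weight +1 on the least significant digit, -1 on the next, and so on
    alt = 0
    for i, d in enumerate(reversed(digits)):
        alt += d if i % 2 == 0 else -d
    return alt % 3 == 0

for n in range(200):
    digits, m = [], n
    while m:
        digits.append(m % 5)
        m //= 5
    digits = list(reversed(digits)) or [0]
    assert divisible_by_3_base5(digits) == (n % 3 == 0)
print("ok")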


limits - Evaluate $\lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$


Evaluate
$$ \lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$$




I tried to solve this by L'Hospital's rule, but that doesn't give a solution. I would appreciate it if you could give a clue.

limits - "Proving" that $0^0 = 1$





I know that $0^0$ is one of the seven common indeterminate forms of limits, and I found on wikipedia two very simple examples in which one limit equates to 1, and the other to 0. I also saw here: Prove that $0^0 = 1$ using binomial theorem
that you can define $0^0$ as 1 if you'd like.



Even so, I was curious, so I did some work and seemingly demonstrated that $0^0$ always equals 1.



My Work:



$$y=\lim_{x\rightarrow0^+}{(x^x)}$$



$$\ln{y} = \lim_{x\rightarrow0^+}{(x\ln{x})} $$




$$\ln{y}= \lim_{x\rightarrow0^+}{\frac{\ln{x}}{x^{-1}}} = -\frac{\infty}{\infty} $$



$\implies$ Use L'Hôpital's Rule



$$\ln{y}=\lim_{x\rightarrow0^+}\frac{x^{-1}}{-x^{-2}} $$
$$\ln{y}=\lim_{x\rightarrow0^+} -x = 0$$
$$y = e^{0} = 1$$



What is wrong with this work? Does it have something to do with using $x^x$ rather than $f(x)^{g(x)}$? Or does it have something to do with using operations inside limits? If not, why is $0^0$ considered indeterminate at all?



Answer



Someone said that $0^0=1$ is correct, and got a flood of downvotes and a comment saying it was simply wrong. I think that someone, me for example, should point out that while saying $0^0=1$ is correct is an exaggeration, calling that "simply wrong" isn't quite right either. There are many contexts in which $0^0=1$ is the standard convention.



Two examples. First, power series. If we say $f(t)=\sum_{n=0}^\infty a_nt^n$ that's supposed to entail that $f(0)=a_0$. But $f(0)=a_0$ depends on the convention that $0^0=1$.



Second, elementary set theory: Say $|A|$ is the cardinality of $A$. The cardinality of the set off all functions from $A$ to $B$ should be $|B|^{|A|}$. Now what if $A=B=\emptyset$? There as well we want to say $0^0=1$; otherwise we could just say the cardinality of the set of all maps was $|B|^{|A|}$ unless $A$ and $B$ are both empty.



(Yes, there is exactly one function $f:\emptyset\to\emptyset$...)



Edit: Seems to be a popular answer, but I just realized that it really doesn't address what the OP said. For the record, of course the OP is nonetheless wrong in claiming to have proved that $0^0=1$. It's often left undefined, and in any case one does not prove definitions...



real analysis - Proof for $\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$ without complexes?





This is what I needed. Actually, a link would also be fine.



$$\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$$


Answer



Evaluating ζ(2) by Robin Chapman contains several proofs (~14 altogether). You can have a look through and find a nice one.


Sunday, August 27, 2017

Sum of n consecutive numbers






Is there a shortcut method to working out the sum of n consecutive positive integers?


Firstly, starting at $1$: $1 + 2 + 3 + 4 + 5 = 15.$


Secondly, starting at any other positive integer (e.g. $10$): $10 + 11 + 12 + 13 = 46$.


Answer



Take the average of the first number and the last number, and multiply by the number of numbers.
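Applied to the two sums in the question, as a quick check:
$$\frac{1+5}{2}\cdot 5 = 15, \qquad \frac{10+13}{2}\cdot 4 = 46.$$
In general, for $n$ consecutive integers starting at $a$, this reads $a+(a+1)+\cdots+(a+n-1)=\dfrac{a+(a+n-1)}{2}\cdot n$.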


Find limit without using L'hopital or Taylor's series



I'm trying to solve this limit without using L'Hopital's Rule or Taylor series. Any help is appreciated!




$$\lim\limits_{x\rightarrow 0^+}{\dfrac{e^x-\sin x-1}{x^2}}$$


Answer



One possible way is to shoot linear functions at the limit - not very elegant, but it works. Let:



$$f(x)=x^3-\frac{x^2}{2}+e^x-\sin x-1,\;\;x\geq 0$$



Computing the first few derivatives of $f:$



$$f'(x)=3x^2-x+e^x-\cos x$$
$$f''(x)=6x-1+e^x+\sin x$$




$f''$ is clearly increasing and since $f''(0)=0$ we have $f''(x)>0$ for $x\in (0,a)$ for some $a$.
This in turn implies that $f'$ is strictly increasing and since $f'(0)=0$ we again have $f'(x)>0$ for $x\in (0,a)$. Finally, this means $f$ is also increasing on this interval, and since $f(0)=0$ we have:



$$0\leq x\leq a:\quad f(x)\geq 0$$



$$\Rightarrow \;\;\frac{e^x-\sin x-1}{x^2}\geq \frac{1}{2}-x$$



Similarly by considering $h(x)=-x^3-\dfrac{x^2}{2}+e^x-\sin x-1$ it is very easy to show that:




$$0\leq x\leq b: \quad h(x)\leq 0$$



$$\Rightarrow \;\;\frac{1}{2}+x\geq\frac{e^x-\sin x-1}{x^2}$$



Hence for small positive $x$ we have:



$$\frac{1}{2}-x\leq\frac{e^x-\sin x-1}{x^2}\leq \frac{1}{2}+x$$



$$\lim_{x\to 0^+}\frac{e^x-\sin x-1}{x^2}=\frac{1}{2}$$


Saturday, August 26, 2017

calculus - How do I derive $1 + 4 + 9 + \cdots + n^2 = \frac{n (n + 1) (2n + 1)} 6$











I am introducing my daughter to calculus/integration by approximating the area under $y = x \cdot x$ by calculating small rectangles below the curve.



This is very intuitive and I think she understands the concept however what I need now is an intuitive way to arrive at $\frac{n (n + 1) (2n + 1)} 6$ when I start from $1 + 4 + 9 + \cdots + n^2$.



In other words, just how did the first ancient mathematician come up with this formula - what were the first steps leading to this equation? That is what I am interested in, not the actual proof (that would be the second step).


Answer




Just as you can prove the sum $1+2+\cdots+n = n(n+1)/2$ by



*oooo
**ooo
***oo
****o


you can prove $\frac{n (n + 1) (2n + 1)} 6$ by building a box out of 6 pyramids:




[figures: six step pyramids assembled into an $n \times (n+1) \times (2n+1)$ box]



Sorry the diagram is not great (someone can edit if they know how to make a nicer one). If you just build 6 pyramids you can easily make the n x n+1 x 2n+1 box out of it.




  • make 6 pyramids (1 pyramid = $1 + 2^2 + 3^2 + 4^2 + ...$ blocks)

  • try to build a box out of them

  • measure the lengths and count how many you used.. that gives you the formula




Using these (glued): [figure]


calculus - Proving that $\lim_{n\to\infty} \left(1+\frac{1}{f(n)}\right)^{g(n)} = 1$



I want to prove that $$\lim_{n\to\infty} \left(1+\frac{1}{f(n)}\right)^{g(n)} = 1$$ if $f(n)$ grows faster than $g(n)$ for $n\to\infty$ and $\lim_{n\to\infty} f(n) = +\infty = \lim_{n\to\infty}g(n)$.



It is quite easy to see that if $f = g$ the limit is $e$, but I can't find a good strategy to solve this problem.


Answer



We can use that
$$ \left(1+\frac{1}{f(n)}\right)^{g(n)} =\left[\left(1+\frac{1}{f(n)}\right)^{f(n)}\right]^{\frac{g(n)}{f(n)}}$$
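One way to finish from this hint (a sketch, not spelled out in the answer): since $f(n)\to\infty$, the bracketed factor tends to $e$, and since $f$ grows faster than $g$, the exponent $\frac{g(n)}{f(n)}\to 0$; hence
$$\lim_{n\to\infty}\left(1+\frac{1}{f(n)}\right)^{g(n)} = e^{\,0} = 1.$$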


Friday, August 25, 2017

convergence in measure implies convergence in mean?



Given a sequence of functions $\{f_1,f_2,\dots\}$ converging in measure to some measurable function $f$. Is it necessary true that they also converge in mean to $f$?


Convergence in mean is defined as follows $\int \lvert f_n-f \rvert$ goes to zero as $n$ goes to infinity.


I know that the converse of the above is true. Proof is as follows:


Given $\epsilon>0$, we need to show that for each $\delta$, there exists $n_0$ such that $\mu(\{x\mid \lvert f(x)-f_n(x)\rvert\geq \epsilon\})<\delta$ for all $n\geq n_0$, where $\mu$ is the measure. Notice that $\mu(\{x\mid \lvert f(x)-f_n(x)\rvert\geq \epsilon\})\cdot \epsilon\leq \int_E \lvert f_n(x)-f(x)\rvert$. Since the sequence converges in mean, the right hand side goes to zero as $n$ goes to infinity, thus so does the measure.


Now, back to the original question. Intuitively, knowing my calculus 3 stuff, I could have a sequence of function converge to some measurable real-valued function, but that function is not integrable since it blows up maybe when $x=0$. But how do I give a counter example? Or do I have to use a direct proof?


Answer



Consider $\mathbb{R}$ with Lebesgue measure, and define $f_n(x)=\frac{1}{n}$ if $0\leq x\leq n$, $f_n(x)=0$ otherwise.


Then $f_n\to 0$ in measure, but $\int f_n=1$ for all $n$, so $f_n\not\to0$ in $L^1$.


improper integrals - Evaluating $\int_0^{\infty} \frac{\sin(xt)(1-\cos(at))}{t^2}\, dt$

The problem is to evaluate the improper integral:
$I = \int_0^{\infty} \frac{\sin(xt)(1-\cos(at))}{t^2} dt$.




This can be written as follows:



$$I = \int_0^{\infty} dt \frac{\sin(xt)}t \int_0^a \sin(yt)dy = \int_0^{\infty} dt \int_0^a \frac{\sin(xt)\sin(yt)}t dy$$



In a previous problem, I determined:



$$J = \int_0^{\infty} \frac{\sin(xt)\sin(yt)}t dt = \frac12\log\left(\lvert\frac{x-y}{x+y}\rvert\right)$$
which converges uniformly with respect to $y$ as long as $|x| \ne |y|$. Hence the order of integration can be interchanged for $I$ to get:




$$I = \int_0^a dy \int_0^{\infty} \frac{\sin(xt)\sin(yt)}t dt = \int_0^a \frac12\log\left(\lvert\frac{x-y}{x+y}\rvert\right)dy$$



This result may not be valid if $|a| \ge |x|$ since $J$ becomes unbounded if $|x|=|y|$. If I integrate $J$ with respect to $y$ from $0$ to $a$, $y$ is going to eventually reach $|x|$ or $-|x|$ if $|a| \ge |x|$.



Is it valid to integrate with respect to $y$ over singularities in $J$ provided that $\int_0^a \frac12\log\left(\lvert\frac{x-y}{x+y}\rvert\right)dy$ converges? Or does $I$ actually diverge if $|a| \ge |x|$?






I think the proper way to do this is as follows. Assume $a > x > 0$ to simplify the problem. Then I should integrate as follows:




$$I = \int_0^{\infty} dt \int_0^{x-\epsilon} \frac{\sin(xt)\sin(yt)}t dy + \int_{x-\epsilon}^{x+\epsilon} \frac{\sin(xt)\sin(yt)}t dy + \int_{x+\epsilon}^a \frac{\sin(xt)\sin(yt)}t dy$$



Then I must show that the middle integral approaches $0$ as $\epsilon \rightarrow 0$. But the outside integrals diverge.

analysis - All Partial sums of two given sequences are bounded by a positive constant


Let $\theta \in \mathbb R$ be a non-integer multiple of $2\pi$. Prove that the sequences $(\sin(n\theta))_{n \in \mathbb N}$ and $(\cos(n\theta))_{n \in \mathbb N}$ verify $|S_N|\leq K$ where $K>0$ and $S_N=a_1+...+a_N$ for a given sequence $(a_n)_{n \in \mathbb N}$.


I began with the sequence involving the cosine, I suppose that the other case is analogue. I tried to express $\cos(n\theta)$ in its exponential form. Then $\sum_{n=1}^N \cos(n\theta)= \sum_{n=1}^N \frac {1} {2}(e^{in\theta}+e^{-in\theta})$. The second member of the equation can be separated into $\frac {1} {2} (\sum_{n=1}^N e^{in\theta}+ \sum_{n=1}^N e^{-in\theta})$. Both of these are the first $N$ terms of two geometric series. So $\frac {1} {2} (\sum_{n=1}^N e^{in\theta}+ \sum_{n=1}^N e^{-in\theta})=\frac {1} {2} (\frac {e^{i(N+1)\theta}-e^{i\theta}} {e^{i\theta}-1} + \frac {e^{-i(N+1)\theta}-e^{-i\theta}} {e^{-i\theta}-1})$. Well, I know that the denominator is never $0$ by the hypothesis we have on $\theta$. I've been fighting with this last term but I don't get to something nice. Am I doing something wrong?


Answer



Note that $$S_N=\Re\left(\sum_{n=1}^N\mathrm e^{\mathrm in\theta}\right)\quad\text{or}\quad S_N=\Im\left(\sum_{n=1}^N\mathrm e^{\mathrm in\theta}\right), $$ and that $$ \sum_{n=1}^N\mathrm e^{\mathrm in\theta}=\frac{\mathrm e^{\mathrm i\theta}-\mathrm e^{\mathrm i(N+1)\theta}}{1-\mathrm e^{\mathrm i\theta}}. $$ Furthermore, for every complex number $z$, $|\Re(z)|\leqslant|z|$ and $|\Im(z)|\leqslant|z|$, hence $$|S_N|\leqslant\left|\frac{\mathrm e^{\mathrm i\theta}-\mathrm e^{\mathrm i(N+1)\theta}}{1-\mathrm e^{\mathrm i\theta}}\right|\leqslant K_\theta, $$ with $$ K_\theta=\frac2{|1-\mathrm e^{\mathrm i\theta}|}. $$


Proof related with prime numbers and congruence



How to (dis)prove this



$ (n-2)! \equiv 1 \mod n$




If $n$ is a prime number. I guess we'll have to use Fermat's little theorem, and I just don't know where to start. Thanks in advance.


Answer



If $\;n=p\;$ is a prime, then by Wilson's theorem



$$\color{red}{-1}=(p-1)!=(p-2)!(p-1)=\color{red}{-(p-2)!\pmod p}\implies 1= (p-2)!\pmod p$$
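A quick numerical check for small primes (a sketch, not a substitute for the proof):

from math import factorial

for p in [3, 5, 7, 11, 13, 17, 19, 23]:
    print(p, factorial(p - 2) % p)   # prints 1 for every prime p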


elementary number theory - How to use the Extended Euclidean Algorithm manually?


I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?


Answer



Perhaps the easiest way to do it by hand is in analogy to Gaussian elimination or triangularization, except that, since the coefficient ring is not a field, one has to use the division / Euclidean algorithm to iteratively decrease the coefficients till zero. In order to compute both $\rm\,gcd(a,b)\,$ and its Bezout linear representation $\rm\,j\,a+k\,b,\,$ we keep track of such linear representations for each remainder in the Euclidean algorithm, starting with the trivial representation of the gcd arguments, e.g. $\rm\: a = 1\cdot a + 0\cdot b.\:$ In matrix terms, this is achieved by augmenting (appending) an identity matrix that accumulates the effect of the elementary row operations. Below is an example that computes the Bezout representation for $\rm\:gcd(80,62) = 2,\ $ i.e. $\ 7\cdot 80\: -\: 9\cdot 62\ =\ 2\:.\:$ See this answer for a proof and for conceptual motivation of the ideas behind the algorithm (see the Remark below if you are not familiar with row operations from linear algebra).


For example, to solve  m x + n y = gcd(m,n) one begins with
two rows [m 1 0], [n 0 1], representing the two
equations m = 1m + 0n, n = 0m + 1n. Then one executes
the Euclidean algorithm on the numbers in the first column,
doing the same operations in parallel on the other columns,


Here is an example: d = x(80) + y(62) proceeds as:

in equation form | in row form
---------------------+------------
80 = 1(80) + 0(62) | 80 1 0
62 = 0(80) + 1(62) | 62 0 1
row1 - row2 -> 18 = 1(80) - 1(62) | 18 1 -1
row2 - 3 row3 -> 8 = -3(80) + 4(62) | 8 -3 4
row3 - 2 row4 -> 2 = 7(80) - 9(62) | 2 7 -9
row4 - 4 row5 -> 0 = -31(80) +40(62) | 0 -31 40


The row operations above are those resulting from applying
the Euclidean algorithm to the numbers in the first column,

row1 row2 row3 row4 row5
namely: 80, 62, 18, 8, 2 = Euclidean remainder sequence
| |
for example 62-3(18) = 8, the 2nd step in Euclidean algorithm

becomes: row2 -3 row3 = row4 when extended to all columns.


In effect we have row-reduced the first two rows to the last two.
The matrix effecting the reduction is in the bottom right corner.
It starts as 1, and is multiplied by each elementary row operation,
hence it accumulates the product of all the row operations, namely:

$$ \left[ \begin{array}{ccc} 7 & -9\\ -31 & 40\end{array}\right ] \left[ \begin{array}{ccc} 80 & 1 & 0\\ 62 & 0 & 1\end{array}\right ] \ =\ \left[ \begin{array}{ccc} 2\ & \ \ \ 7\ & -9\\ 0\ & -31\ & 40\end{array}\right ] \qquad\qquad\qquad\qquad\qquad $$


Notice row 1 is the particular  solution  2 =   7(80) -  9(62)
Notice row 2 is the homogeneous solution 0 = -31(80) + 40(62),
so the general solution is any linear combination of the two:


n row1 + m row2 -> 2n = (7n-31m) 80 + (40m-9n) 62

The same row/column reduction techniques tackle arbitrary
systems of linear Diophantine equations. Such techniques
generalize easily to similar coefficient rings possessing a
Euclidean algorithm, e.g. polynomial rings F[x] over a field,
Gaussian integers Z[i]. There are many analogous interesting
methods, e.g. search on keywords: Hermite / Smith normal form,
invariant factors, lattice basis reduction, continued fractions,

Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.

Remark $ $ As an optimization, we can omit one of the Bezout coefficient columns (being derivable from the others). Then the calculations have a natural interpretation as modular fractions (though the "fractions" are multi-valued), e.g. follow the prior link.


Below I elaborate on the row operations to help readers unfamiliar with linear algebra.


Let $\,r_i\,$ be the Euclidean remainder sequence. Above $\, r_1,r_2,r_3\ldots = 80,62,18\ldots$ Given linear combinations $\,r_j = a_j m + b_j n\,$ for $\,r_{i-1}\,$ and $\,r_i\,$ we can calculate a linear combination for $\,r_{i+1} := r_{i-1}\bmod r_i = r_{i-1} - q_i r_i\,$ by substituting said combinations for $\,r_{i-1}\,$ and $\,r_i,\,$ i.e.


$$\begin{align} r_{i+1}\, &=\, \overbrace{a_{i-1} m + b_{i-1}n}^{\Large r_{i-1}}\, -\, q_i \overbrace{(a_i m + b_i n)}^{\Large r_i}\\[.3em] {\rm i.e.}\quad \underbrace{r_{i-1} - q_i r_i}_{\Large r_{i+1}}\, &=\, (\underbrace{a_{i-1}-q_i a_i}_{\Large a_{i+1}})\, m\, +\, (\underbrace{b_{i-1} - q_i b_i}_{\Large b_{i+1}})\, n \end{align}$$


Thus the $\,a_i,b_i\,$ satisfy the same recurrence as the remainders $\,r_i,\,$ viz. $\,f_{i+1} = f_{i-1}-q_i f_i.\,$ This implies that we can carry out the recurrence in parallel on row vectors $\,[r_i,a_i,b_i]$ representing the equation $\, r_i = a_i m + b_i n\,$ as follows


$$\begin{align} [r_{i+1},a_{i+1},b_{i+1}]\, &=\, [r_{i-1},a_{i-1},b_{i-1}] - q_i [r_i,a_i,b_i]\\ &=\, [r_{i-1},a_{i-1},b_{i-1}] - [q_i r_i,q_i a_i, q_i b_i]\\ &=\, [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-q_i b_i] \end{align}$$


which written in the tabular format employed far above becomes


$$\begin{array}{ccc} &r_{i-1}& a_{i-1} & b_{i-1}\\ &r_i& a_i &b_i\\ \rightarrow\ & \underbrace{r_{i-1}\!-q_i r_i}_{\Large r_{i+1}} &\underbrace{a_{i-1}\!-q_i a_i}_{\Large a_{i+1}}& \underbrace{b_{i-1}-q_i b_i}_{\Large b_{i+1}} \end{array}$$



Thus the extended Euclidean step is: compute the quotient $\,q_i = \lfloor r_{i-1}/r_i\rfloor$ then multiply row $i$ by $q_i$ and subtract it from row $i\!-\!1.$ Said componentwise: in each column $\,r,a,b,\,$ multiply the $i$'th entry by $q_i$ then subtract it from the $i\!-\!1$'th entry, yielding the $i\!+\!1$'th entry. If we ignore the 2nd and 3rd columns $\,a_i,b_i$ then this is the usual Euclidean algorithm. The above extends this algorithm to simultaneously compute the representation of each remainder as a linear combination of $\,m,n,\,$ starting from the obvious initial representations $\,m = 1(m)+0(n),\,$ and $\,n = 0(m)+1(n).\,$
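For readers who want to experiment, here is a minimal Python sketch of the same row-reduction scheme (the function name and layout are mine, not from the text above); each row is a triple $(r, a, b)$ with $r = a\,m + b\,n$:

def extended_gcd(m, n):
    # each row is (r, a, b) with r = a*m + b*n; repeatedly replace
    # (row[i-1], row[i]) by (row[i], row[i-1] - q*row[i]) with q = r[i-1] // r[i]
    prev, cur = (m, 1, 0), (n, 0, 1)
    while cur[0] != 0:
        q = prev[0] // cur[0]
        prev, cur = cur, tuple(p - q * c for p, c in zip(prev, cur))
    return prev   # (gcd, x, y) with gcd = x*m + y*n

print(extended_gcd(80, 62))   # (2, 7, -9), i.e. 2 = 7*80 - 9*62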


calculus - Proving $\int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t}\, \mathrm dt=-\sqrt{\pi}(\gamma+\ln{4})$




I would like to prove that:



$$ \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \mathrm dt=-\sqrt{\pi}(\gamma+\ln{4})$$



I tried to use the integral $$\int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt$$



$$\int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt \;{\underset{\small n\to\infty}{\longrightarrow}}\; \int_{0}^{\infty} \frac{\ln(t)}{\sqrt{t}}e^{-t} \mathrm dt$$ (dominated convergence theorem)



Using the substitution $t\to\frac{t}{n}$, I get:




$$ \int_{0}^{n} \frac{\ln(t)}{\sqrt{t}}\left(1-\frac{t}{n}\right)^n \mathrm dt=\sqrt{n}\left(\ln(n)\int_{0}^{1} \frac{(1-t)^n}{\sqrt{t}} \mathrm dt+\int_{0}^{1} \frac{\ln(t)(1-t)^n}{\sqrt{t}} \mathrm dt\right) $$



However I don't know if I am on the right track for these new integrals look quite tricky.


Answer



Consider integral representation for the Euler $\Gamma$-function:
$$
\Gamma(s) = \int_0^\infty t^{s-1} \mathrm{e}^{-t} \mathrm{d} t
$$
Differentiate with respect to $s$:
$$

\Gamma(s) \psi(s) = \int_0^\infty t^{s-1} \ln(t) \mathrm{e}^{-t} \mathrm{d} t
$$
where $\psi(s)$ is the digamma function.
Now substitute $s=\frac{1}{2}$. So
$$
\int_0^\infty \frac{ \ln(t)}{\sqrt{t}} \mathrm{e}^{-t} \mathrm{d} t = \Gamma\left( \frac{1}{2} \right) \psi\left( \frac{1}{2} \right)
$$
Now use duplication formula:
$$
\Gamma(2s) = \Gamma(s) \Gamma(s+1/2) \frac{2^{2s-1}}{\sqrt{\pi}}

$$
Differentiating this with respect to $s$ gives the duplication formula for $\psi(s)$, and substitution of $s=1/2$ gives $\Gamma(1/2) = \sqrt{\pi}$.
$$
\psi(2s) = \frac{1}{2}\psi(s) + \frac{1}{2} \psi(s+1/2) + \log(2)
$$
Substitute $s=\frac{1}{2}$ and use $\psi(1) = -\gamma$ to arrive at the result.
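Spelling out that last substitution (a short completion of the sketch above): setting $s=\frac{1}{2}$ in the duplication formula for $\psi$ gives
$$\psi(1) = \tfrac{1}{2}\psi\left(\tfrac{1}{2}\right) + \tfrac{1}{2}\psi(1) + \log 2 \;\Longrightarrow\; \psi\left(\tfrac{1}{2}\right) = \psi(1) - 2\log 2 = -\gamma - \ln 4,$$
so that $\displaystyle\int_0^\infty \frac{\ln t}{\sqrt t}\,e^{-t}\,\mathrm dt = \Gamma\left(\tfrac12\right)\psi\left(\tfrac12\right) = -\sqrt{\pi}\,(\gamma+\ln 4)$.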


Thursday, August 24, 2017

real analysis - Is there any function which grows 'slower' than its derivative?



Does a function $f: \mathbb{R} \rightarrow \mathbb{R}$ such that $f'(x) > f(x) > 0$ exist?



Intuitively, I think it can't exist.



I've tried finding the answer using the definition of derivative:





  1. I know that if $\lim_{x \rightarrow k} f(x)$ exists and is finite, then $\lim_{x \rightarrow k} f(x) = \lim_{x \rightarrow k^+} f(x) = \lim_{x \rightarrow k^-} f(x)$


  2. Thanks to this property, I can write:




$$\begin{align}
& f'(x) > f(x) > 0 \\
& \lim_{h \rightarrow 0^+} \frac{f(x + h) - f(x)}h > f(x) > 0 \\
& \lim_{h \rightarrow 0^+} f(x + h) - f(x) > h f(x) > 0 \\

& \lim_{h \rightarrow 0^+} f(x + h) > (h + 1) f(x) > f(x) \\
& \lim_{h \rightarrow 0^+} \frac{f(x + h)}{f(x)} > h + 1 > 1
\end{align}$$




  1. This leads to the result $1 > 1 > 1$ (or $0 > 0 > 0$ if you stop earlier), which is false.



However I guess I made serious mistakes with my proof. I think I've used limits the wrong way. What do you think?


Answer




expanded from David's comment



$f' > f$ means $f'/f > 1$ so $(\log f)' > 1$. Why not take $\log f > x$, say $\log f = 2x$, or $f = e^{2x}$.



Thus $f' > f > 0$ since $2e^{2x} > e^{2x} > 0$.



added: Is there a sub-exponential solution?
From $(\log f)'>1$ we get
$$
\log(f(x))-\log(f(0)) > \int_0^x\;1\;dt = x
$$

so
$$
\frac{f(x)}{f(0)} > e^x
$$
and thus
$$
f(x) > C e^x
$$
for some constant $C$ ... it is not sub-exponential.


analysis - What are common methods/techniques that can be used to prove that the limit of an infinite sequence exists?



I would like to know what common methods can be used to show that an infinite sequence converges. From what I know so far,





  1. If a sequence is bounded and monotonic increasing/decreasing then it converges.

  2. Using definition of limit.

  3. Another method that I saw online, is to assume the sequence approaches a limit $L$, then solve for $L$, but I'm not totally convinced that this approach is correct. For example, the Fibonacci ratio sequence, to prove the limit of $$\displaystyle\lim_{n\to\infty} \dfrac{a_{n+1}}{a_n}$$ exists, they claim that:
    $$1 + \dfrac{1}{L} = L$$
[image: proof for 3]



So I wonder whether anyone could share some of the most commonly used methods for proving that the limit of an infinite sequence exists that I'm not aware of. Any suggestions or ideas would be appreciated.


Answer



One of the most powerful tools in calculus(but not only):




Cauchy's criterion for convergence


calculus - Integration of $x^2\cdot\dfrac{x\sec^2x+\tan x}{(x\tan x+1)^2}$

Integrate




$$\int x^2\cdot\dfrac{x\sec^2x+\tan x}{(x\tan x+1)^2}dx$$



So what I did is integration by parts, taking $x^2$ as $u$ and the other part as $v$. Now I have to use it again, which then eventually leads to the integral of $\dfrac1{x\tan x+1}\,dx$. Can someone help me?

Arithmetic Progression first term from Sum and Common Difference



The sum of the first 50 elements of a set is 6925 with a common difference of 5. What is the first element of the set?


I know how I would usually find the first term of an AP, but I cannot work out what the 50th term is from the sum of the first 50 terms, and then use that information to find the first term.




Through brute force problem solving I have established that 16 is the first term, but there must be an easier way. Any pointers would be grand.


Answer



Say $a$ is first term,
$d=5$ is the common difference and
$S_n$ is the sum to $n$ terms = $\frac{n}{2}[2a + (n-1)d]$



We know, $n$th term of series = $a + (n-1)d$
Now, it is given that $S_{50} = 6925$



Therefore, $$\frac{50}{2}[2a + (50-1)\cdot 5] = 6925$$
or $$2a + 245 = 277$$
or $$a = 16$$




Hence first term is 16.



Hope this is what you wanted to know.


improper integrals - Closed form for $I(a)=\int_0^\infty \ln\left(\tanh(ax)\right)dx$?



I have been messing around with this integral that has some particular special values$$I(a)=\int_0^\infty \ln\left(\tanh(ax)\right)dx$$



I found that $$I(1)=-\frac{\pi^2}{8}$$
$$I\left(\frac{1}{2}\right)=-\frac{\pi^2}{4}$$
$$I\left(\frac{1}{4}\right)=-\frac{\pi^2}{2}$$
$$...$$ and so on. It appears that $I(2^{-n})=-2^{n-3}\pi^2$. Can anyone explain how to derive a general closed form for $I(a)$ or at least why $I(2^{-n})$ takes on the particular values above?



Answer



We have $$I^\prime(a)=\int_0^\infty\frac{x\operatorname{sech}^2 ax dx}{\tanh ax}=\int_0^\infty\frac{2x dx}{\sinh 2ax}=\frac{1}{2a^2}\int_0^\infty\frac{y dy}{\sinh y},$$so constants $A,\,B$ exist with$$I(a)=A-\frac{B}{a}.$$From your results we can infer $A=0,\,B=\frac{\pi^2}{8}$.



Edit: a slicker way is to write $$\int_0^\infty\ln\frac{1-e^{-2ax}}{1+e^{-2ax}}dx=-2\int_0^\infty\sum_{n=0}^\infty\frac{1}{2n+1}e^{-(4n+2)ax} dx=-\frac{\pi^2}{8a}.$$
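For what it's worth, a quick numerical check of $I(a)=-\frac{\pi^2}{8a}$ (a hypothetical Python sketch, assuming NumPy and SciPy are available):

import numpy as np
from scipy.integrate import quad

def I(a):
    # numerically integrate ln(tanh(a x)) over (0, inf)
    val, _ = quad(lambda x: np.log(np.tanh(a * x)), 0, np.inf)
    return val

for a in (1.0, 0.5, 0.25, 2.0):
    print(a, I(a), -np.pi**2 / (8 * a))   # the two columns should agree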


trigonometry - Find the value of $\lambda$ in $\frac{3-\tan^2 {\pi\over 7}}{1-\tan^2 {\pi\over 7}}=\lambda\cos{\pi\over 7}$




Find the value of $\lambda$ in $$\dfrac{3-\tan^2 {\pi\over 7}}{1-\tan^2 {\pi\over 7}}=\lambda\cos{\pi\over 7}$$




The numerator looks similar to the expansion of $\tan 3x$, so I tried this




$$\dfrac{3\tan {\pi\over 7}-\tan^3 {\pi\over 7}}{\tan {\pi\over 7}\left(1-\tan^2 {\pi\over 7}\right)}=\lambda\cos{\pi\over 7}$$



$$\dfrac{\left(3\tan {\pi\over 7}-\tan^3 {\pi\over 7}\right)\left(1-3\tan^2 {\pi\over 7}\right)}{\tan {\pi\over 7}\left(1-\tan^2 {\pi\over 7}\right)\left(1-3\tan^2 {\pi\over 7}\right)}=\lambda\cos{\pi\over 7}$$



$$\dfrac{\tan {3\pi\over 7}\left(1-3\tan^2 {\pi\over 7}\right)}{\sin {\pi\over 7}\left(1-\tan^2 {\pi\over 7}\right)}=\lambda$$



But I'm stuck here. Need help. Thanks in advance.


Answer



Let $a= \pi/7$ then we have $$1+{2\over 1-\tan^2a} = \lambda \cos a$$




multiplying this with $\tan a$ we get: $$\tan a+ \tan 2a = \lambda \sin a$$



so $$ \sin a\cos 2a +\sin 2a\cos a = \lambda \sin a \cos a \cos 2a$$



multiplying this with 4 we get $$ 4\sin 3a = \lambda \sin 4a$$



Since $3a + 4a = \pi$, we have $\sin 4a = \sin(\pi - 4a) = \sin 3a$, and therefore $\lambda = 4$.
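As a purely numeric sanity check (a small Python snippet, nothing more):

from math import tan, cos, pi

a = pi / 7
print((3 - tan(a)**2) / (1 - tan(a)**2))   # ~3.6039
print(4 * cos(a))                          # ~3.6039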


linear algebra - Euclidean algorithm matrix proof

Suppose that $a$ and $b$ are positive integers with $a\geq b$. Let $q_i$ and $r_i$ be the quotients and remainders in the steps of the Euclidean algorithm for $i=1, 2, ..., n$, where $r_n$ is the last nonzero remainder.




Let $Q_i = \begin{bmatrix} q_i & 1 \\ 1 & 0\end{bmatrix}$ and $Q=\prod_{i=0}^{n} Q_i$.
Show that $\begin{bmatrix}a \\ b\end{bmatrix} = Q\begin{bmatrix}r_n \\ 0\end{bmatrix}$




I really have no idea how to tackle this. A relevant theorem may be Bezout's Theorem:



$\gcd(a, b)$ is the smallest positive linear combination of $a$ and $b$, that is, $$\gcd(a,b) = \min\{\,sa+tb : s,t\in\mathbb{Z},\ sa+tb>0\,\}.$$
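Not a proof, but a small numeric experiment may make the claimed identity concrete. A hypothetical Python sketch (assuming NumPy is available; note that the product below includes the final division whose remainder is $0$, so depending on the indexing convention there may be one more factor than the statement suggests):

import numpy as np

def euclid_Q(a, b):
    # Run the Euclidean algorithm, collecting the quotients q_i,
    # and multiply the matrices Q_i = [[q_i, 1], [1, 0]] together.
    quotients, x, y = [], a, b
    while y != 0:
        q, r = divmod(x, y)
        quotients.append(q)
        x, y = y, r
    Q = np.eye(2, dtype=int)
    for q in quotients:
        Q = Q @ np.array([[q, 1], [1, 0]])
    return Q, x          # x is now gcd(a, b) = r_n

a, b = 56, 21
Q, g = euclid_Q(a, b)
print(Q @ np.array([g, 0]))   # [56 21], i.e. (a, b)^T = Q (r_n, 0)^T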

Wednesday, August 23, 2017

integration - Error in $\int\limits_0^{\infty}dx\,\frac {\log^2 x}{a^2+x^2}$



This is my work for solving the improper integral$$I=\int\limits_0^{\infty}dx\,\frac {\log^2x}{x^2+a^2}$$I feel like I did everything right, but when I substitute values into $a$, it doesn't match up with Wolfram Alpha.







First substitute $x=\frac {a^2}u$ so$$\begin{align*}I & =-\int\limits_{\infty}^0du\,\frac {2\log^2a-\log^2 u}{u^2+a^2}\\ & =\int\limits_0^{\infty}du\,\frac {2\log^2a}{a^2+u^2}-\int\limits_0^{\infty}du\,\frac {\log^2 u}{a^2+u^2}\end{align*}$$The last integral is $I$ again, hence$$\begin{align*}I & =\int\limits_0^{\infty}du\,\frac {\log^2a}{a^2+u^2}\\ & =\frac {\log^2a}{a^2}\int\limits_0^{\pi/2}dt\,\frac {a\sec^2t}{1+\tan^2t}\\ & =\frac {\pi\log^2a}{2a}\end{align*}$$However, when $a=e$ Wolfram Alpha evaluates the integral numerically as$$I\approx 2.00369$$whereas the expression I arrived at evaluates numerically to$$\frac {\pi}{2e}\approx0.5778$$Where did I go wrong? And how would you go about solving this integral?


Answer



Note
$$\log^2\left(\frac{a^2}{u}\right)\neq2\log^2a-\log^2u.$$
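A quick numeric check of where the discrepancy lies (a hypothetical Python sketch assuming SciPy): for $a=e$ the integral really is about $2.00369$, consistent with $\frac{\pi}{2a}\left(\ln^2 a+\frac{\pi^2}{4}\right)$, which I believe is the correct closed form, rather than with $\frac{\pi}{2e}$.

import numpy as np
from scipy.integrate import quad

a = np.e
val, _ = quad(lambda x: np.log(x)**2 / (x**2 + a**2), 0, np.inf)
print(val)                                                # ~2.00369
print(np.pi / (2 * a) * (np.log(a)**2 + np.pi**2 / 4))    # ~2.00369
print(np.pi / (2 * np.e))                                 # ~0.5778, the value obtained above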


Complex integration parametric form




Evaluate$\int_{\gamma(0;1)} \frac{\cos z}{z}dz$. Write in parametric form and deduce that$$\int^{2\pi}_0 \cos(\cos\theta)\cosh(\sin\theta)\,d\theta=2\pi$$



By Cauchy's integral formula, $\int_{\gamma(0;1)} \frac{\cos z}{z}dz=2\pi i(\cos0) = 2\pi i $
, but could anyone help with parametrization and deducing the above integral?


Answer



The circle is parametrized by $\theta \in [0,2 \pi]$: let $z=e^{i \theta}$, so $dz = i z\, d\theta$. Also note that



$$\cos{z} = \cos{(\cos{\theta}+i \sin{\theta})} = \cos{(\cos{\theta})} \cos{(i \sin{\theta})} - \sin{(\cos{\theta})} \sin{(i \sin{\theta})}$$




Use the fact that $\cos{i x} = \cosh{x}$ and $\sin{i x} = i \sinh{x}$ to get



$$\cos{z} = \cos{(\cos{\theta})} \cosh{(\sin{\theta})} - i \sin{(\cos{\theta})} \sinh{( \sin{\theta})}$$



Thus,



$$\oint_{\gamma(0,1)} dz \frac{\cos{z}}{z} = i \int_0^{2 \pi} d\theta \left [ \cos{(\cos{\theta})} \cosh{(\sin{\theta})} - i \sin{(\cos{\theta})} \sinh{( \sin{\theta})} \right ] = i 2 \pi$$



Equating real and imaginary parts, the sought-after result follows.
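A quick numeric confirmation of the deduced real integral (a hypothetical Python snippet assuming SciPy):

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda t: np.cos(np.cos(t)) * np.cosh(np.sin(t)), 0, 2 * np.pi)
print(val, 2 * np.pi)   # both ~6.283185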


complex analysis - How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?


Could you provide a proof of Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?


Answer



Assuming you mean $e^{ix}=\cos x+i\sin x$, one way is to use the MacLaurin series for sine and cosine, which are known to converge for all real $x$ in a first-year calculus context, together with the MacLaurin series for $e^z$, trusting that it converges for pure-imaginary $z$ (justifying that properly requires complex analysis).


The MacLaurin series: \begin{align} \sin x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}x^{2n+1}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots \\\\ \cos x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots \\\\ e^z&=\sum_{n=0}^{\infty}\frac{z^n}{n!}=1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots \end{align}



Substitute $z=ix$ in the last series: \begin{align} e^{ix}&=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=1+ix+\frac{(ix)^2}{2!}+\frac{(ix)^3}{3!}+\cdots \\\\ &=1+ix-\frac{x^2}{2!}-i\frac{x^3}{3!}+\frac{x^4}{4!}+i\frac{x^5}{5!}-\cdots \\\\ &=1-\frac{x^2}{2!}+\frac{x^4}{4!}+\cdots +i\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots\right) \\\\ &=\cos x+i\sin x \end{align}
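If you want to see the series argument numerically, the partial sums of $\sum_n (ix)^n/n!$ converge quickly to $\cos x + i\sin x$. A small Python sketch (my own illustration, not part of the answer):

import math

def exp_series(z, terms=25):
    # partial sum of sum_n z^n / n!
    s, term = 0j, 1 + 0j
    for n in range(terms):
        s += term
        term *= z / (n + 1)
    return s

x = 1.3
print(exp_series(1j * x))
print(complex(math.cos(x), math.sin(x)))   # agrees to roughly machine precision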


geometry - Numbers of circles around a circle

"When you draw a circle in a plane of radius $1$ you can perfectly surround it with $6$ other circles of the same radius."




BUT when you draw a circle in a plane of radius $1$ and try to perfectly surround the central circle with $7$ circles, you have to change the radius of the surrounding circles.



How can I find the radius of the surrounding circles if I want to use more than $6$ circles?



ex :
$7$ circles of radius $0.4$



$8$ circles of radius $0.2$
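For reference, assuming each surrounding circle must be tangent to the central unit circle and to its two neighbours, the centres sit at distance $1+r$ from the origin and the tangency condition gives $(1+r)\sin(\pi/n)=r$, i.e. $r=\frac{\sin(\pi/n)}{1-\sin(\pi/n)}$. A short Python sketch of that formula (the example radii quoted in the question look like rough guesses, so the printed values will differ):

from math import sin, pi

def outer_radius(n):
    # radius r of each of n equal circles tangent to a central unit circle
    # and to their two neighbours: (1 + r) * sin(pi / n) = r
    s = sin(pi / n)
    return s / (1 - s)

for n in (6, 7, 8, 12):
    print(n, outer_radius(n))
# 6 -> 1.0 (the classical case), 7 -> ~0.766, 8 -> ~0.620, 12 -> ~0.349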

Solution to simple recursive series












I'm working on some math programming and ran into the following recursive series.



$\displaystyle\sum\limits_{i=1}^n a_i$,
where $a_{i} = Ca_{i-1}$, $0 \leq C \leq 1$, and $a_0$ is given.



Because my constant values are always between 0 and 1, I know that the series converges, and I can't help thinking that there must be a closed-form solution.



My code looks something like



value = 1000000;
constant = 0.9999;
total = 0;
tolerance = 0.001;

// keep multiplying by the constant and accumulating until terms fall below the tolerance
while (value > tolerance)
{
    value = constant * value;
    total = total + value;
}



The problem is that, with initial values like those in the snippet above, the loop takes a very long time to complete when $C$ approaches $1$.


Answer



$a_k$ can be written as:



$$a_k = a_0 \cdot \underbrace{C \cdot C \cdots C}_{k \text{ times}} = a_0 C^{k}$$



Where $a_0 = 1000000$ and $C = 0.9999$. ($0 \lt C \lt 1$)



The sum is a geometric series. It has the following closed form:




$$
\sum_{k=0}^n a_k = a_0 \frac{1-C^{n+1}}{1-C}
$$



And as $n \to \infty$:



$$
\sum_{n=0}^\infty a_n = \frac{a_0}{1-C}
$$




On the other hand, if $C=1$, then $a_k=a_0$ and the sum becomes:



$$
\sum_{k=0}^n a_k = (n+1)a_0
$$



And this diverges as $n \to \infty$.
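In code, the closed form replaces the slow loop entirely. A hypothetical Python comparison with the loop from the question (variable names mirror that snippet; the loop sums $a_0C + a_0C^2 + \cdots$, so the matching partial-sum formula starts at $k=1$):

import math

a0, C, tol = 1_000_000.0, 0.9999, 0.001

# the loop from the question
value, total = a0, 0.0
while value > tol:
    value *= C
    total += value

# geometric-series closed forms: K terms get added, and the infinite sum bounds them
K = math.ceil(math.log(tol / a0, C))
partial = a0 * C * (1 - C**K) / (1 - C)
infinite = a0 * C / (1 - C)

print(total, partial, infinite)   # total and partial agree up to rounding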


Tuesday, August 22, 2017

roots of a cubic polynomial

Consider a cubic polynomial of the form




$$f(x)=a_3x^3+a_2x^2+a_1x+a_0$$



where the coefficients are non-zero reals. Conditions for which this equation has three real simple roots are well-known. What conditions would guarantee that none of these roots is positive? In other words, what constraints on the parameters would guarantee that the polynomial has no positive roots? Please provide references also, if possible.
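One simple sufficient (but not necessary) condition comes from Descartes' rule of signs: if $a_3, a_2, a_1, a_0$ all share the same sign, the coefficient sequence has no sign changes, hence $f$ has no positive roots. For experimenting with particular coefficient choices numerically, a small hypothetical sketch (assuming NumPy is available):

import numpy as np

def has_positive_root(coeffs, tol=1e-9):
    # coeffs = [a3, a2, a1, a0]; inspect the (numerically) real roots
    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < tol].real
    return bool(np.any(real_roots > tol))

print(has_positive_root([1, 2, 3, 4]))     # all coefficients positive -> False
print(has_positive_root([1, -6, 11, -6]))  # (x-1)(x-2)(x-3) -> True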

real analysis - Question regarding Lebesgue Integrability in $sigma$ -finite spaces

I'm taking a course in measure theory and we defined integrability in a $\sigma$ -finite space as follows: Suppose $\left(X,\mathcal{F},\mu\right)$ is a $\sigma$-finite measure space, a measurable function $f:X\to\mathbb{R}$ is said to be integrable on $X$ (denoted $f\in L^{1}\left(X,\mathcal{F},\mu\right)$) if for every collection $\left\{ X_{m}\right\} _{m=1}^{\infty}$ such that $X_{m}\uparrow X$ , $X_{m}\in\mathcal{F}$ and $\mu\left(X_{m}\right)<\infty$ the following apply:



  1. $f$ is integrable on every set $A\subseteq X$ such that $\mu\left(A\right)<\infty$ .




  2. The limit $\lim\limits _{m\to\infty}\int_{X_{m}}\left|f\right|d\mu$ exists and does not depend on the choice of $\left\{ X_{m}\right\} _{m=1}^{\infty}$ .




  3. The limit $\lim\limits _{m\to\infty}\int_{X_{m}}fd\mu$ does not depend on the choice of $\left\{ X_{m}\right\} _{m=1}^{\infty}$ .




If said conditions apply then we define $\int_{X}fd\mu=\lim\limits _{m\to\infty}\int_{X_{m}}fd\mu$


Now suppose $\mathcal{G}\subseteq\mathcal{F}$ is a $\sigma$ -algebra on $X$ . Let $f:X\to\mathbb{R}$ be a $\mathcal{G}$ -measurable function such that $f\in L^{1}\left(X,\mathcal{G},\mu\right)$ , is $f$ necessarily in $L^{1}\left(X,\mathcal{F},\mu\right)$ ? Obviously $\mathcal{G}$ -measurability implies $\mathcal{F}$ -measurability but what about integrability?


EDIT: It seems the construction of the integral we did is quite unorthodox, I'll elaborate further on the definitions: Suppose $\left(X,\mathcal{F},\mu\right)$ is a measure space and let $A\subseteq X$ be a subset of finite measure. We define a simple function $f:X\to\mathbb{R}$ to be any function taking a countable collection of real values $\left\{ y_{n}\right\} _{n=1}^{\infty}$. Denote $A_{n}=\left\{ x\in A\,|\, f\left(x\right)=y_{n}\right\}$. Assuming $f$ is measurable we say that $f$ is integrable on $A$ if the series ${\sum_{n=1}^{\infty}{\displaystyle y_{n}\mu\left(A_{n}\right)}}$ is absolutely convergent in which case we define: $$\int_{A}fd\mu={\displaystyle \sum_{n=1}^{\infty}}y_{n}\mu\left(A_{n}\right)$$ Furthermore, given any measurable function $f:X\to\mathbb{R}$ we say $f$ is integrable on $A$ if there is a sequence of simple functions (as defined) which are integrable on $A$ and converging uniformly to $f$ on $A$. In which case we define: $$\int_{A}fd\mu=\lim_{n\to\infty}\int_{A}f_{n}d\mu$$


Thanks in advance.

Monday, August 21, 2017

calculus - Find the infinite sum of the series $\sum_{n=1}^\infty \frac{1}{n^2 +1}$



This is a homework question whereby I am supposed to evaluate:




$$\sum_{n=1}^\infty \frac{1}{n^2 +1}$$



Wolfram Alpha outputs the answer as



$$\frac{1}{2}(\pi \coth(\pi) - 1)$$



But I have no idea how to get there. I tried partial fractions (splitting into imaginary components) and comparing with the Basel problem (it turns out there are few similarities), but nothing worked.


Answer



Using David Cardon's method, https://mathoverflow.net/questions/59645/algebraic-proof-of-an-infinite-sum




We can solve a more general sum,
$$\sum_{-\infty}^{\infty} \frac{1}{n^{2}+a^{2}} = \frac{\pi}{a} \coth(\pi a).$$



Note that this sum satisfies the conditions in the above link. The poles lie at $z=ia$ and $z=-ia$, so
$$\sum_{n=-\infty}^{\infty} \frac{1}{n^{2}+a^{2}} = -\pi\left[\operatorname{Res}\left(\frac{\cot(\pi z)}{z^{2}+a^{2}},ia\right) + \operatorname{Res}\left(\frac{\cot(\pi z)}{z^{2}+a^{2}},-ia\right)\right].$$
Computing the residues:
$$\operatorname{Res}\left(\frac{\cot(\pi z)}{z^{2}+a^{2}},ia\right) = \lim_{z\rightarrow ia}\frac{(z-ia)\cot(\pi z)}{(z-ia)(z+ia)} = \frac{\cot(\pi ia)}{2i a} $$
and
$$ \operatorname{Res}\left(\frac{\cot(\pi z)}{z^{2}+a^{2}},-ia\right) = \lim_{z\rightarrow -ia}\frac{(z+ia)\cot(\pi z)}{(z+ia)(z-ia)} = \frac{\cot(i\pi a)}{2ia}.$$
Therefore, summing these we get

$$\sum_{-\infty}^{\infty} \frac{1}{n^{2}+a^{2}} = -\frac{\pi\cot(i\pi a)}{ia} = \frac{\pi \coth(\pi a)}{a}.$$



You should be able to extend this idea to your sum with some effort.
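A quick numerical check of both the general formula and the original sum (plain Python; the sums are truncated, so agreement is only up to the tail error):

import math

a = 1.0
two_sided = sum(1.0 / (n * n + a * a) for n in range(-10**5, 10**5 + 1))
print(two_sided, math.pi / (a * math.tanh(math.pi * a)))        # ~3.15334

one_sided = sum(1.0 / (n * n + 1.0) for n in range(1, 10**5 + 1))
print(one_sided, (math.pi / math.tanh(math.pi) - 1) / 2)        # ~1.07667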


calculus - Limits- h'(0) and writing properly


  1. Write the definition of the derivative of a function of $x$ as a $\lim\limits_{\Delta x\to0}$



I have no idea what this is talking about.





  1. Consider
    $H(x) =
    \begin{cases}
    x\left(1-\cos\left(\frac{1}{x}\right)\right) & x\neq 0 \\
    0, & x=0
    \end{cases}$
    Using the limit definition of the derivative find $H'(0)$. If that limit does not exist, explain why.
    I feel like the limit does exist, and I'm just missing something.

    A reply today would be appreciated.

calculus - Sum of infinite series $1+\frac22+\frac3{2^2}+\frac4{2^3}+\cdots$




How do I find the sum of $\displaystyle 1+{2\over2} + {3\over2^2} + {4\over2^3} +\cdots$



I know the sum is $\sum_{n=0}^\infty \frac{n+1}{2^n}$ and the ratio of consecutive terms is ${(n+2)\over2(n+1)}$, but I don't know how to continue from here.


Answer



After establishing convergence, you could do the following:
$$S = 1 + \frac22 + \frac3{2^2}+\frac4{2^3}+\dots$$
$$\implies \frac12S = \frac12 + \frac2{2^2} + \frac3{2^3}+\frac4{2^4}+\dots$$
$$\implies S - \frac12S = 1+\frac12 + \frac1{2^2} + \frac1{2^3}+\dots$$
which is probably something you can recognise easily...
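The telescoped series in the last line is $1+\frac12+\frac14+\cdots=2$, so $\frac12 S=2$ and $S=4$. A tiny numeric check in Python, just for reassurance:

# partial sums of (n+1)/2^n approach 4, matching S - S/2 = 2
S = sum((n + 1) / 2**n for n in range(60))
print(S)   # ~4.0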



Sunday, August 20, 2017

real analysis - Changing Lebesgue-measurable function to Borel function




Show that if $f:\mathbb{R}^2\rightarrow\mathbb{R}$ is measurable (with respect to the Lebesgue-measurable sets in $\mathbb{R}^2)$, then there exists a Borel function $g$ such that $f(x)=g(x)$ for almost every $x\in\mathbb{R}^2$ (i.e. for all $x\in\mathbb{R}^2$ except a set of measure zero).





I guess "Borel function" means Borel-measurable functions.



If so, $g$ Borel-measurable would mean that $g^{-1}(A)$ is a Borel set in $\mathbb{R}^2$ for every Borel set $A\subseteq\mathbb{R}$. And $f$ Lebesgue-measurable would mean that $f^{-1}(A)$ is a Lebesgue-measurable set in $\mathbb{R}^2$ for every Borel set $A\subseteq\mathbb{R}$.



Given this setting, it is hard to see how to proceed. I am allowed to change the value of $f(x)$ for a subset of measure zero in $\mathbb{R}^2$, and I want to get a Borel-measurable function. How can I do that?


Answer



Every Lebesgue-measurable set is the union of a Borel set (one can choose an $F_\sigma$ for that) and a Lebesgue null set.



The pointwise limit of a sequence of Borel-measurable functions is Borel-measurable.




Combining the two leads to the result. For $n \in \mathbb{N}$ and $k \in \mathbb{Z}$, let



$$E_{n,k} = f^{-1}\left(\left[\frac{k}{2^n},\frac{k+1}{2^n} \right)\right).$$



Decompose $E_{n,k}$ into a Borel set $B_{n,k}$ and a null set $N_{n,k}$. Define $g_n$ as $\frac{k}{2^n}$ on $B_{n,k}$, and as $0$ on $N_{n,k}$. Show that the so-defined $g_n$ is Borel measurable. Show that $g_n \to f$ almost everywhere.


number theory - Prove that $\sqrt{5n+2}$ is irrational



I'm trying to follow this answer to prove that $\sqrt{5n+2}$ is irrational. So far I understand that the whole proof relies on being able to prove that $(5n+2)\mid x^2 \implies (5n+2)\mid x$ (which is why $\sqrt{4}$ doesn't fit, but $\sqrt{7}$ etc. does); this is where I got stuck. Maybe I'm overcomplicating it, so if you have a simpler approach, I'd like to know about it. :)




A related problem I'm trying to wrap my head around is: Prove that $\frac{5n+7}{3n+4}$ is irreducible, i.e. $(5n+7)\wedge(3n+4) = 1$.


Answer



Well, one way to see it is irrational is to note that $5n+2$ isn't an integer square for any $n\in\mathbb{Z}$ (it always ends in $2$ or $7$). The other way you're trying leads to the same conclusion $(p,q\in \mathbb{Z}, q\neq 0)$:
\begin{align*}
\sqrt{5n+2}=\frac{p}{q}&& \\
5n+2=\frac{p^2}{q^2} &&(1)\\
q^2(5n+2)=p^2 && (2)
\end{align*}



Let

$$p=p_1^{\alpha_1}p_2^{\alpha_2}\dots p_t^{\alpha_t}$$ $$q=q_1^{\beta_1}q_2^{\beta_2}\dots q_s^{\beta_s}$$



where $p_i,q_j$ are primes and $\alpha_i,\beta_j$ are positive integers.



From $(1)$, $p^2/q^2$ is an integer (it equals $5n+2$), so each $q_i$ is one of the primes $p_j$. Renaming the prime factors so that $q_i=p_i$, you can write the fraction as (considering that $t>s$)



$$\frac{p^2}{q^2}=\frac{p_1^{2\alpha_1}p_2^{2\alpha_2}\dots p_t^{2\alpha_t}}{p_1^{2\beta_1}p_2^{2\beta_2}\dots p_s^{2\beta_s}}=p_1^{2(\alpha_1-\beta_1)}p_2^{2(\alpha_2-\beta_2)}\dots p_s^{2(\alpha_s-\beta_s)}p_{s+1}^{2\alpha_{s+1}}\dots p_t^{2\alpha_t}=5n+2$$



then this implies that $5n+2$ is a perfect square, but by the first observation it can't be.
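The key observation can also be checked mod 10 (no square ends in 2 or 7) or, equivalently, mod 5. A one-line Python check of both:

print(sorted({(k * k) % 10 for k in range(10)}))  # [0, 1, 4, 5, 6, 9]: no square ends in 2 or 7
print(sorted({(k * k) % 5 for k in range(5)}))    # [0, 1, 4], while 5n + 2 is always 2 (mod 5)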


integration - $\int e^{-x^2}dx$








How does one integrate $\int e^{-x^2}\,dx$? I read somewhere to use polar coordinates.



How is this done? What is the easiest way?
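There is no elementary antiderivative, but the definite integral over the whole line equals $\sqrt\pi$ (that is what the polar-coordinates trick, applied to the square of the integral, gives). A quick numeric check, assuming SciPy is available:

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(val, np.sqrt(np.pi))   # both ~1.7724539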

Saturday, August 19, 2017

algebra precalculus - Is there a way to quickly know the number of elements on a triangle type?











I don't technically know the mathematical term, but imagine:




X
X X
X X X
X X X X
X X X X X
X X X X X X
X X X X X X X
X X X X X X X X
X X X X X X X X X
X X X X X X X X X X


Taking the last row, which holds 10 elements, the total will be:


10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1


which totals 55.


Just as the factorial sign 10! expresses the same idea but with multiplication, is there a technical term / function that by other means calculates the total 55?


Answer



Yes there is, the triangular numbers, and they can be calculated very easily: $$T_n=\frac{n(n+1)}{2}.$$ Your particular example is $T_{10}=(10\cdot 11)/2=55$.


Magnitude of roots of a quadratic function with complex coefficients



Suppose $c \in \mathbb C$ with $|c| < 1$. I constructed the quadratic equation $t^2 - 2 c t + c = 0$. I want to know whether the magnitudes of the roots are smaller than $1$. The answer for real $c$ is simple. If $c$ is real, then the roots are $c \pm \frac{\sqrt{4c^2 - 4c}}{2}$. Since $4c^2 - 4c < 0$, the second part is imaginary. So the magnitude will be $\sqrt{c^2 + \frac{4c-4c^2}{4}} = \sqrt{c} < 1$.


I got lost when considering complex $c$. Specifically, is the discriminant $4c^2 - 4c$ or $4|c|^2 - 4c$? How do we take the root of a complex number?


Answer



(Too long for a comment.)


The equation can be written as $\,(t-c)^2 = c^2-c\,$ then by the triangle inequality with $\lambda=|c| \lt 1\,$:


$$ |t-c|^2 = |c|\,|1-c| \le |c|(1+|c|) \quad\implies\quad |t| \le |t-c|+|c| \le \lambda + \sqrt{\lambda(1+\lambda)} $$


Therefore $\,f(\lambda)=\lambda + \sqrt{\lambda(1+\lambda)}\,$ is an upper bound for the magnitude of the roots $\,|t|\,$, but it does not ensure that $\,|t| \le 1\,$, since $\,f(\lambda)\,$ can take values larger than $\,1\,$, e.g. $\,f(\lambda) \gt 1\,$ for all $\,\lambda \gt \frac{1}{3}\,$.


It also follows that $\,|c| \lt \frac{1}{3}\,$ is a sufficient condition for the roots to have magnitude less than $\,1\,$.
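A random-sampling sketch consistent with the bound (hypothetical Python assuming NumPy): for $|c|<\tfrac13$ both roots stay in the unit disk, while for larger $|c|<1$ a root can escape, e.g. $c=-0.9$.

import numpy as np

def max_root_mag(c):
    return max(abs(r) for r in np.roots([1, -2 * c, c]))

rng = np.random.default_rng(0)
cs = rng.uniform(0, 1 / 3, 500) * np.exp(2j * np.pi * rng.uniform(0, 1, 500))
print(max(max_root_mag(c) for c in cs) < 1)   # True: all sampled roots stay in the unit disk

print(max_root_mag(-0.9))   # ~2.21: with 1/3 < |c| < 1 a root can leave the unit disk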


calculus - Prove $\int^\infty_0 b\sin(\frac{1}{bx})-a\sin(\frac{1}{ax}) = -\ln(\frac{b}{a})$ using Frullani integrals

Prove $$\int^\infty_0 b\sin\left(\frac{1}{bx}\right)-a\sin\left(\frac{1}{ax}\right)\,dx = -\ln\left(\frac{b}{a}\right)$$




I'm supposed to use Frullani integrals, which state that $\int^\infty_0 \frac{f(bx)-f(ax)}{x}\mathrm dx = [f(\infty)-f(0)] \ln(\frac{b}{a})$.



So I need to get the first equation into the form of the Frullani integral. I can't figure out how to make this transformation though because I'm no good at them.

Friday, August 18, 2017

summation - What's the formula for the 365 day penny challenge?





Not exactly a duplicate since this is answering a specific instance popular in social media.



You might have seen the viral posts about "save a penny a day for a year and make $667.95!" The mathematicians here already get the concept while some others may be going, "what"? Of course, what the challenge is referring to is adding a number of pennies to a jar for what day you're on. So:



Day 1 = + .01

Day 2 = + .02
Day 3 = + .03
Day 4 = + .04


So that in the end, you add it all up like so:



1 + 2 + 3 + 4 + 5 + 6 + ... + 365 = 66795



The real question is, what's a simple formula for getting a sum of consecutive integers, starting at whole number 1, without having to actually count it all out?!


Answer



Have had a lot of friends ask about this lately, as it is all over FaceBook. The formula is actually quite simple:



(N (N + 1) ) / 2 where N = Highest value


Or Simply $\frac {n(n+1)}{2}$



Thus




(365 (365 + 1)) / 2 = 66795


Divide that by 100 (because there are 100 pennies in a dollar) and voilà! $667.95



Now, this is OLD math (think 6th century BC), wherein these results are referred to as triangular numbers. In part, because as you add them up, you can stack the results in the shape of a triangle!



1 = 1
*

1 + 2 = 3
*
* *

1 + 2 + 3 = 6
*
* *
* * *

1 + 2 + 3 + 4 = 10
*
* *
* * *
* * * *






NoChance also has a fun story and answer to this question!



A little info on his lesson: -{for the super nerdy!}-

"...Carl Friedrich Gauss is said to have found this relationship in his
early youth, by multiplying n/2 pairs of numbers in the sum by the
values of each pair n+1. However, regardless of the truth of this
story, Gauss was not the first to discover this formula, and some find
it likely that its origin goes back to the Pythagoreans 5th century BC..." - wikipedia

"...The mathematical study of figurate numbers is said to have originated
with Pythagoras, possibly based on Babylonian or Egyptian precursors.
Generating whichever class of figurate numbers the Pythagoreans
studied using gnomons is also attributed to Pythagoras. Unfortunately,
there is no trustworthy source for these claims, because all surviving

writings about the Pythagoreans are from centuries later. It seems to
be certain that the fourth triangular number of ten objects, called
tetractys in Greek, was a central part of the Pythagorean religion,
along with several other figures also called tetractys. Figurate
numbers were a concern of Pythagorean geometry. ... - wikipedia







See? Fun stuff, numbers!



polynomials - Prove two bases are dual in a finite field.

Let K be a finite field, $F=K(\alpha)$ a finite simple extension of degree $n$, and $ f \in K[x]$ the minimal polynomial of $\alpha$ over $K$. Let $\frac{f\left( x \right)}{x-\alpha }={{\beta }_{0}}+{{\beta }_{1}}x+\cdots +{{\beta }_{n-1}}{{x}^{n-1}}\in F[x]$ and $\gamma={f}'\left( \alpha \right)$.



Prove that the dual basis of $\left\{ 1,\alpha ,\cdots ,{{\alpha }^{n-1}} \right\}$ is $\left\{ {{\beta }_{0}}{{\gamma }^{-1}},{{\beta }_{1}}{{\gamma }^{-1}},\cdots ,{{\beta }_{n-1}}{{\gamma }^{-1}} \right\}$.


I met this exercise in "Finite Fields" by Lidl & Niederreiter (Exercise 2.40), and I do not know how to do the calculation using Definition 2.30, which is:


Definition 2.30 Let $K$ be a finite field and $F$ a finite extension of $K$. Then two bases $\left\{ \alpha_1,\alpha_2,\cdots,\alpha_m \right\}$ and $\left\{ \beta_1,\beta_2,\cdots,\beta_m \right\}$ of $F$ over $K$ are said to be dual bases if for $1\le i,j\le m$ we have $\operatorname{Tr}_{F/K}\left( \alpha_i\beta_j \right)=\begin{cases} 0 & \text{for } i\neq j, \\ 1 & \text{for } i=j. \end{cases}$


I think $\gamma = f'(\alpha) = \lim_{x\to \alpha}\frac{f(x)-f(\alpha)}{x-\alpha}$, and since $f(\alpha)=0$ this equals $\beta_0+\beta_1\alpha +\cdots +\beta_{n-1}\alpha^{n-1}$.


How can I continue? The lecturer did not teach the "dual bases" section.
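Not a proof, but it may help to see the claim verified in the smallest nontrivial case. Below is a hypothetical Python sketch for $K=\mathbb{F}_3$, $f(x)=x^2+1$, $F=\mathbb{F}_9=K(\alpha)$ with $\alpha^2=-1$; the representation of $\mathbb{F}_9$ as pairs and all names are my own choices, not anything from the exercise:

P = 3  # elements of GF(9) are pairs (a, b) meaning a + b*alpha, with alpha^2 = -1

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def power(u, k):
    r = (1, 0)
    for _ in range(k):
        r = mul(r, u)
    return r

def trace(u):
    # Tr_{F/K}(u) = u + u^3 lies in K = GF(3)
    t = add(u, power(u, P))
    assert t[1] == 0
    return t[0]

one, alpha = (1, 0), (0, 1)
# f(x) = x^2 + 1 = (x - alpha)(x + alpha), so beta_0 = alpha, beta_1 = 1, gamma = f'(alpha) = 2*alpha
beta = [alpha, one]
gamma = mul((2, 0), alpha)
gamma_inv = next(z for z in [(a, b) for a in range(P) for b in range(P)] if mul(gamma, z) == one)
dual = [mul(bj, gamma_inv) for bj in beta]

for i, ai in enumerate([one, alpha]):            # the basis {1, alpha}
    for j, bj in enumerate(dual):                # the claimed dual basis
        print(i, j, trace(mul(ai, bj)))          # prints 1 exactly when i == j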

Thursday, August 17, 2017

matrices - Is there a term for a matrix of 1's but with 0's along the diagonal?

Is there a term for a matrix that is like the identity matrix but with the values swapped? That is, 1's everywhere except the diagonal which has 0's.



[[0, 1, 1, 1],
[1, 0, 1, 1],
[1, 1, 0, 1],

[1, 1, 1, 0]]


Of course, that's easily created with something like abs(numpy.eye(4) - 1). But, does it have a name or at least a phrase that describes it?

What is wrong with this proof that 3 is less than 1?

What is wrong with this proof?
Theorem. 3 is less than 1.
Proof. Every number is either less than 1 or greater than 1 or equals 1. Let $c$ be an arbitrary number. Therefore, it is less than 1 or greater than 1 or equals 1. Suppose it is less than 1. By the rule of universal generalization, if an arbitrary number is less than 1, every number is less than 1. Therefore, 3 is less than 1.

definite integrals - Real-Analysis Methods to Evaluate $\int_0^\infty \frac{x^a}{1+x^2}\,dx$, $|a|<1$




In THIS ANSWER, I used straightforward contour integration to evaluate the integral $$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{x^a}{1+x^2}\,dx=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)}$$for $|a|<1$.





An alternative approach is to enforce the substitution $x\to e^x$ to obtain



$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{-\infty}^\infty \frac{e^{(a+1)x}}{1+e^{2x}}\,dx\\\\
&=\int_{-\infty}^0\frac{e^{(a+1)x}}{1+e^{2x}}\,dx+\int_{0}^\infty\frac{e^{(a-1)x}}{1+e^{-2x}}\,dx\\\\
&=\sum_{n=0}^\infty (-1)^n\left(\int_{-\infty}^0 e^{(2n+1+a)x}\,dx+\int_{0}^\infty e^{-(2n+1-a)x}\,dx\right)\\\\
&=\sum_{n=0}^\infty (-1)^n \left(\frac{1}{2n+1+a}+\frac{1}{2n+1-a}\right)\\\\
&=2\sum_{n=0}^\infty (-1)^n\left(\frac{2n+1}{(2n+1)^2-a^2}\right) \tag 1\\\\

&=\frac{\pi}{2}\sec\left(\frac{\pi a}{2}\right)\tag 2
\end{align}$$



Other possible ways forward include writing the integral of interest as



$$\begin{align}
\int_0^\infty \frac{x^a}{1+x^2}\,dx&=\int_{0}^1 \frac{x^{a}+x^{-a}}{1+x^2}\,dx
\end{align}$$



and proceeding similarly, using $\frac{1}{1+x^2}=\sum_{n=0}^\infty (-1)^nx^{2n}$.





Without appealing to complex analysis, what are other approaches one can use to evaluate this very standard integral?




EDIT:




Note that we can show that $(1)$ is the partial fraction representation of $(2)$ using Fourier series analysis. I've included this development for completeness in the appendix of the solution I posted on THIS PAGE.




Answer



I'll assume $\lvert a\rvert < 1$. Letting $x = \tan \theta$, we have



$$\int_0^\infty \frac{x^a}{1 + x^2}\, dx = \int_0^{\pi/2}\tan^a\theta\, d\theta = \int_0^{\pi/2} \sin^a\theta \cos^{-a}\theta\, d\theta$$



The last integral is half the beta integral $B((a + 1)/2, (1 - a)/2)$. Thus



$$\int_0^{\pi/2}\sin^a\theta\, \cos^{-a}\theta\, d\theta = \frac{1}{2}\frac{\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)}{\Gamma\left(\frac{a+1}{2} + \frac{1-a}{2}\right)} = \frac{1}{2}\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right)$$



By Euler reflection,




$$\Gamma\left(\frac{a+1}{2}\right)\Gamma\left(\frac{1-a}{2}\right) = \pi \csc\left[\pi\left(\frac{1+a}{2}\right)\right] = \pi \sec\left(\frac{\pi a}{2}\right)$$



and the result follows.



Edit: For a proof of Euler reflection without contour integration, start with the integral function $f(x) = \int_0^\infty u^{x-1}(1 + u)^{-1}\, du$, and show that $f$ solves the differential equation $y''y - (y')^2 = y^4$, $y(1/2) = \pi$, $y'(1/2) = 0$. The solution is $\pi \csc \pi x$. On the other hand, $f(x)$ is the beta integral $B(x,1-x)$, which is equal to $\Gamma(x)\Gamma(1-x)$. I believe this method is due to Dedekind.


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...