Friday, June 30, 2017

modular arithmetic - A "fast" way of writing a negative number (say $x$) as $x \equiv a \pmod b$ with $0 \leq a \lt b$



Which is the best method for writing negative numbers in the form of $a \bmod b$? (where $a$ and $b$ are positive integers)



While using Chinese remainder theorem to compute the solution of these linear congruences,
$$\begin{align}x &\equiv 2 \pmod 3 \\ x &\equiv 3 \pmod 4 \\ x &\equiv 1 \pmod 5\end{align}$$ I got $x = -109$ as a solution. Now I need to represent this result in the form $a \bmod b$, where $b=60$ here; this case is very easy because the numbers are small.



In this one I keep checking multiples of $60$ until I find one larger than $109$; in this case $2\cdot 60 = 120$, and adding $-109$ gives $11$, so we can write $-109 \equiv 11 \bmod 60$... but is this the only way? If the number is large, checking the multiples could be time consuming.



For example, here the number is $-19177$, which needs to be expressed in the form $a \bmod 4900$. How can I manually and quickly find "$a$" in such cases (assume numbers up to $6$ digits)?



Answer



If you are uncomfortable dividing the negative number $-109$ by $60$, do this:



(i) Divide $109$ by $60$. The remainder is $49$.



(ii) Your answer is $60-49$.



This procedure almost always works. The only time it doesn't is when the remainder on dividing your positive number is $0$. Then the answer for the negative number should also be $0$.



Example: Find the remainder when $-2011$ is divided by $60$.




(i) Divide $2011$ by $60$. The remainder is $31$.



(ii) The remainder when $-2011$ is divided by $60$ is therefore $60-31$, that is, $29$.



Let's check: $-2011=(60)(-34)+29$.



Note: We are here finding the remainder when a negative number is divided by a positive one. Of course, if you are interested in dividing $-2011$ by $-60$, just change both signs, and proceed "normally."



Exercise: Prove that the above procedure is correct. (It really is not hard.)
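In code, the two-step procedure looks like this (a small Python sketch, not part of the original answer; the function name `neg_mod` is my own):

```python
def neg_mod(x, m):
    # remainder of a negative x on division by positive m, via the procedure above
    r = abs(x) % m                   # step (i): divide the positive number
    return 0 if r == 0 else m - r    # step (ii), with the r == 0 special case

print(neg_mod(-109, 60))      # 11
print(neg_mod(-2011, 60))     # 29
print(neg_mod(-19177, 4900))  # 423
```

(Python's own `%` operator already returns the nonnegative remainder, so `(-109) % 60 == 11` directly; the function above just makes the hand procedure explicit.)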




Calculating Remainders: Suppose that $a$ and $m$ are positive, and not too large. We want the remainder when $a$ is divided by $m$. For example, let $a=4000000$ and $m=2011$.



(a) Divide on the calculator. My calculator display shows $1989.0602$. (The calculator knows a couple of additional digits but is keeping them hidden.)



(b) Immediately subtract the integer part of the answer, that is, $1989$. Do not copy down any number and rekey. If necessary, use the calculator "memory" feature. I got $0.0601691$. Notice that the calculator has just revealed some digits it had kept hidden. The last digit is not trustworthy.



(c) Immediately multiply by $m$. I got $121.00006$.



(d) Find the nearest integer to the result in (c). If calculators were always absolutely exact, the number in (c) would be an integer. The inexactness is due to roundoff error. That error is small, and finding the nearest integer eliminates the error. Often, with smallish $a$, step (d) will be unnecessary, because the result of step (c), on the display, will look like an integer.




(e) We conclude that our remainder is $121$. The whole calculation takes a few seconds, mostly spent keying in the original numbers.



On a scientific calculator, the procedure should be reliable up to at least $a=10^7$. It even works nicely on a primitive "grocery store" calculator.



If $a$ is quite large, say $10^{13}$ or beyond, this calculator procedure breaks down. We cannot even input all the digits of $a$ into the calculator! However, there are many calculator programs available, several of them free, which calculate to greater precision, or even "infinite" precision.
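The calculator steps (a)–(d) can be simulated with double-precision floats (a Python sketch, my own illustration; like the calculator, it is only reliable while $a$ is small enough that rounding error stays below $1/2$ after step (c)):

```python
import math

def calculator_remainder(a, m):
    # simulate the calculator: divide, drop the integer part, multiply back, round
    q = a / m                     # step (a): floating-point division
    frac = q - math.floor(q)      # step (b): keep only the fractional part
    return round(frac * m)        # steps (c)-(d): multiply by m, round to nearest integer

print(calculator_remainder(4000000, 2011))  # 121
```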


calculus - Need help solving this indefinite integral (not homework!)



Source: A question bank of "tough" problems on integrals (maybe tough for a noob like me). Started by learning integration for use in physics only, but now it's got me hooked :p




Problem: Evaluate the indefinite integral $$\int {x\,\mathrm{d}x\over(7x-10-x^2)^{3/2}}.$$



I have used all the tools in my arsenal; substitution: no viable substitution comes to mind here. I have tried factoring the quadratic, but that doesn't help.
I have tried to multiply and divide the denominator by $x^2$ and then substitute $x={1\over t}$, but no help. I'm actually stuck right now. Please give me a hint to solve this one. All help appreciated!



@Frank gave it a shot as well...



$$\int {x\,\mathrm{d}x\over{(-1)^{3/2}(x^2-7x+10)^{3/2}}}.$$




$$\int {x\,\mathrm{d}x\over{(i)^3(x^2-7x+10)^{3/2}}}.$$ ($i$ is the imaginary unit)



Clearly we don't get any imaginary term in the answer, and there is probably no chance that the imaginary numbers will cancel. That's why I did not pursue this method. Will go ahead and try the Euler substitution...



Edit: This question is solved, but I'm still looking for a better, faster alternative, as Euler's substitution can sometimes invite a bunch of calculations.


Answer



Thanks @DrSonnhardGraubner for giving me the right article for the problem. I didn't know about this one.



We are going to use the third substitution of Euler here, wherein we assume that




$$\sqrt{7x-10-x^2} = (5-x)t$$ (using the factorization $7x-10-x^2=(x-2)(5-x)$)



$$t = \sqrt{(x-2)\over(5-x)}$$



differentiate to get an expression for $\mathrm{d}x$ in terms of $\mathrm{d}t$:



$$\mathrm{d}x = \frac{6t \mathrm{d}t}{(t^2+1)^2}$$



Now substitute $x$ with a function of $t$ according to the above equation and get the answer as given above in the comment by @John Chessant
$$-\frac{2}{9}\cdot\frac{20-7x}{\sqrt{(2-x)(5-x)}}+C$$




Any alternates are welcome!


real analysis - Proving that there is an irrational number between any two unequal rational numbers.

I'm trying to prove that there is an irrational number between any two unequal rational numbers $a, b$. Here's a "proof" I have right now, but I'm not sure if it works.




Let $a, b$ be two unequal rational numbers and, WLOG, let $a < b$. Suppose to the contrary that there was an interval $[a, b]$, with $a, b$ rational, which contained no irrational numbers. That would imply that the interval contained only rational numbers since the reals are composed of rationals and irrational numbers. Furthermore, this interval has measure $b-a$, a contradiction since this is a subset of $\mathbb{Q}$ which has measure zero.



Does this work? Is there an easier way to go about it, perhaps through a construction?



Construction: Let $a = \frac{m}{n}$, $b = \frac{p}{q}$. WLOG $a>b$. Then $a-b = \frac{m}{n}-\frac{p}{q} = \frac{mq-np}{nq}$. Since $mq - np \geq 1$, we have $a - b \geq \frac{1}{nq} > \frac{1}{nq\sqrt2} > 0$, so the irrational number $b + \frac{1}{nq\sqrt2}$ lies between $b$ and $a$.

calculus - A closed form for $\int_0^\infty e^{-a\,x} \operatorname{erfi}(\sqrt{x})^3 dx$



Let $\operatorname{erfi}(x)$ be the imaginary error function
$$\operatorname{erfi}(x)=\frac{2}{\sqrt{\pi}}\int_0^xe^{z^2}dz.$$
Consider the following parameterized integral
$$I(a)=\int_0^\infty e^{-a\,x} \operatorname{erfi}(\sqrt{x})^3\ dx.$$

I found some conjectured special values of $I(a)$ that are correct up to at least several hundreds of digits of precision:
$$I(3)\stackrel{?}{=}\frac{1}{\sqrt{2}},\ \ I(4)\stackrel{?}{=}\frac{1}{4\sqrt{3}}.$$




  • Are these conjectured values correct?

  • Is there a general closed-form formula for $I(a)$? Or, at least, are there any other closed-form special values of $I(a)$ for simple (e.g. integer or rational) values of $a$?


Answer



Let $I = [0,1]$ and notice




$$\text{erfi}(t) = \frac{2}{\sqrt{\pi}} \int_0^t e^{z^2} dz \quad\implies\quad \text{erfi}(\sqrt{t}) = \frac{2}{\sqrt{\pi}}\sqrt{t} \int_I e^{tz^2} dz$$



The integral $\mathscr{I}$ we want can be rewritten as:



$$\begin{align}
\mathscr{I}
= & \int_0^\infty e^{-at} \sqrt{t}^3 \left(\int_I e^{tz^2}dz\right)^3 dt\\
= & \frac{8}{\sqrt{\pi}^3}
\int_{I^3} dx dy dz\left[\int_0^\infty \sqrt{t}^3e^{-(a-x^2-y^2-z^2)t} dt\right]\\
= & \frac{8}{\sqrt{\pi}^3} \Gamma(\frac52)

\int_{I^3} \frac{dx dy dz}{\sqrt{a - x^2 - y^2 - z^2}^5}\\
= & \frac{8}{\pi} \frac{\partial^2}{\partial a^2}
\int_{I^3} \frac{dx dy dz}{\sqrt{a - x^2 - y^2 - z^2}}
\end{align}$$
Since the maximum value of $x^2 + y^2 + z^2$ on $I^3$ is $3$, the rewrite above is valid
when $a > 3$.



To compute the integral, we will split the cube $I^3$ into 6 simplices according to the
sorting order of $x, y, z$. On any one of these simplices, say the one for
$1 \ge x \ge y \ge z \ge 0$, introduce parameters $(\rho,\lambda,\mu) \in I^3$ such that $(x, y, z) = (\rho,\rho\lambda,\rho\lambda\mu)$, we have:




$$\begin{align}
\mathscr{I}
= & \frac{48}{\pi} \frac{\partial^2}{\partial a^2} \int_{I^3}
\frac{\rho^2 \lambda d\rho d\lambda d\mu}{\sqrt{a - \rho^2 - \lambda^2 \rho^2 (1 + \mu^2)}}\\
= & \frac{48}{\pi} \frac{\partial^2}{\partial a^2} \int_{I^2}
\frac{\rho^2 d\rho d\mu}{\rho^2(1+\mu^2)} \left[ - \sqrt{a - \rho^2 - \lambda^2 \rho^2 (1 + \mu^2)} \right]_{\lambda=0}^1\\
= & \frac{48}{\pi} \frac{\partial^2}{\partial a^2} \int_{I^2}
\frac{\rho^2 d\rho d\mu}{\rho^2(1+\mu^2)} \left[ \sqrt{a - \rho^2 } - \sqrt{a - \rho^2 (2 + \mu^2)} \right]\\
= & \frac{24}{\pi} \frac{\partial}{\partial a} \int_{I^2} \frac{d\rho d\mu}{1+\mu^2}

\left[
\frac{1}{\sqrt{a - \rho^2 }} -
\frac{\frac{1}{\sqrt{2+\mu^2}}
}{\sqrt{\frac{a}{2+\mu^2} - \rho^2}}
\right]\\
= & \frac{24}{\pi} \frac{\partial}{\partial a} \int_{I} \frac{d\mu}{1+\mu^2}
\left[
\arcsin(\frac{1}{\sqrt{a}}) -
\frac{1}{\sqrt{2+\mu^2}} \arcsin(\sqrt{\frac{2+\mu^2}{a}})
\right]\\

= & \frac{24}{\pi} \int_{I} \frac{d\mu}{1+\mu^2}
\left[
\frac{-\frac{1}{2\sqrt{a}^3}}{\sqrt{1 - \frac{1}{a}}} -
\frac{-\frac{1}{2\sqrt{a}^3}}{\sqrt{1 - \frac{2+\mu^2}{a}}}
\right]\\
= & \frac{12}{\pi a } \int_{I} \frac{d\mu}{1+\mu^2}
\left[ \frac{1}{\sqrt{a-2-\mu^2}} - \frac{1}{\sqrt{a-1}} \right]\\
\stackrel{\color{blue}{[1]}}{=} & \frac{12}{\pi a}
\left[
\frac{1}{\sqrt{a-1}}\arctan \frac{\sqrt{a-1} \mu}{\sqrt{a - 2 -\mu^2}}

- \frac{1}{\sqrt{a-1}} \arctan \mu
\right]_{\mu=0}^1
\\
= & \frac{12}{\pi a\sqrt{a-1}} \left[ \arctan\left(\sqrt{\frac{a-1}{a-3}}\right) - \frac{\pi}{4}
\right]\\
= & \frac{12}{\pi a\sqrt{a-1}}
\arctan\left(
\frac{\sqrt{\frac{a-1}{a-3}}-1}{\sqrt{\frac{a-1}{a-3}}+1}
\right)\\
\stackrel{\color{blue}{[2]}}{=} &

\frac{6}{\pi a\sqrt{a-1}} \arctan\frac{1}{\sqrt{(a-1)(a-3)}}
\end{align}$$



Notes




  • $\color{blue}{[1]}$ I am lazy; I got the leftmost integral on the RHS from Wolfram Alpha instead of deriving it myself.

  • $\color{blue}{[2]}$ Let $u = \sqrt{\frac{a-1}{a-3}}$, we have



    $$\begin{align}

    2\arctan\frac{u-1}{u+1}
    = & \arctan\frac{2\frac{u-1}{u+1}}{1-\left(\frac{u-1}{u+1}\right)^2}
    = \arctan\frac{u^2 - 1}{2u}\\
    = & \arctan\frac{\frac{a-1}{a-3}-1}{2\sqrt{\frac{a-1}{a-3}}}
    = \arctan\frac{1}{\sqrt{(a-1)(a-3)}}
    \end{align}$$
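To gain confidence in the final closed form, we can check numerically (a quick Python sketch, not part of the original answer) that it reproduces the conjectured special values: $I(4)=\frac{1}{4\sqrt3}$, and $I(a)\to\frac{1}{\sqrt2}$ as $a\to 3^+$:

```python
import math

def I_closed(a):
    # closed form derived above, valid for a > 3
    return 6 / (math.pi * a * math.sqrt(a - 1)) * math.atan(1 / math.sqrt((a - 1) * (a - 3)))

print(abs(I_closed(4) - 1 / (4 * math.sqrt(3))) < 1e-12)   # I(4) = 1/(4*sqrt(3))
print(abs(I_closed(3 + 1e-12) - 1 / math.sqrt(2)) < 1e-4)  # I(a) -> 1/sqrt(2) as a -> 3+
```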



How do I express 0.999(9) as a fraction?

I'm a noob at math. If 0.333(3) is 1/3 and 0.666(6) is 2/3, then what is 0.999(9)?



If 3/3 and 0.999(9) is the same, then how can I express one of them without expressing the other?

Thursday, June 29, 2017

real analysis - Show that $e^{\varepsilon |x|^{\varepsilon}}$ grows faster than $\sum_{k=0}^{\infty} {|x|^{2k}}/{(k!)^2}$

I am wondering whether we have for $$f(x):=\sum_{k=0}^{\infty} \frac{|x|^{2k}}{(k!)^2} $$ that


$$\lim_{x \rightarrow \infty} \frac{e^{\varepsilon |x|^{\varepsilon}}}{f(x)} = \infty$$ for any $\varepsilon>0$?


I assume that this is true, as factorials should somehow outgrow powers, but I do not see how to show this rigorously.


Does anybody have an idea?

summation - Find the sum of the $n$ terms of the series $2\cdot2^0+3\cdot2^1+4\cdot2^2+\dots$



Find the sum of the first $n$ terms of the series:





$2\cdot2^0+3\cdot2^1+4\cdot2^2+\dots$




I don't know how to proceed. Please explain the process and comment on technique to solve questions of similar type.



Source: Barnard and Child Higher Algebra.



Thanks in Advance!


Answer



The formula for the sum of a geometric sequence gives

$$
\sum_{k=0}^n2^kx^{k+2}=\frac{x^2-2^{n+1}x^{n+3}}{1-2x}\tag{1}
$$
Differentiating $(1)$ yields
$$
\sum_{k=0}^n(k+2)2^kx^{k+1}=\frac{2x-(n+3)2^{n+1}x^{n+2}}{1-2x}+\frac{2x^2-2^{n+2}x^{n+3}}{(1-2x)^2}\tag{2}
$$
Plugging in $x=1$ leads to
$$
\sum_{k=0}^n(k+2)2^k=(n+1)2^{n+1}\tag{3}

$$
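A quick brute-force check of $(3)$ in Python (my own addition, not part of the original answer):

```python
def series_sum(n):
    # direct sum of 2*2^0 + 3*2^1 + ... + (n+2)*2^n
    return sum((k + 2) * 2**k for k in range(n + 1))

# compare against the closed form (n+1) * 2^(n+1)
assert all(series_sum(n) == (n + 1) * 2**(n + 1) for n in range(50))
print(series_sum(4))  # 2 + 6 + 16 + 40 + 96 = 160 = 5 * 2^5
```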


algebra precalculus - How to find the magnitude squared of the square root of a complex number



I'm trying to simplify the expression



$$\left|\sqrt{a^2+ibt}\right|^2$$



where $a,b,t \in \Bbb R$.



I know that by definition




$$\left|\sqrt{a^2+ibt}\right|^2 = \sqrt{a^2+ibt}\left(\sqrt{a^2+ibt}\right)^*$$



But how do you find the complex conjugate of the square root of a complex number? And what is the square root of a complex number (with arbitrary parameters) for that matter?


Answer



For any complex number $z$, and any square root $\sqrt{z}$ of $z$ (there are two), we have
$$\bigl|\sqrt{z}\bigr|=\sqrt{|z|}$$
Therefore
$$\bigl|\sqrt{a^2+ibt}\bigr|^2=\sqrt{|a^2+ibt|^2}=|a^2+ibt| = \sqrt{a^4+b^2t^2}$$
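A numerical illustration of $\bigl|\sqrt{z}\bigr|^2=|z|$ (a Python sketch of my own; the parameter values are arbitrary):

```python
import cmath, math

a, b, t = 1.5, 2.0, 3.0                       # arbitrary real parameters
z = a * a + 1j * b * t
direct = abs(cmath.sqrt(z)) ** 2              # |sqrt(z)|^2, computed directly
formula = math.sqrt(a**4 + (b * t) ** 2)      # sqrt(a^4 + b^2 t^2) = |z|

print(abs(direct - formula) < 1e-9)  # True
```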


Wednesday, June 28, 2017

Modular Arithmetic CRT: How to compute a modulo with very big numbers

I have always been intrigued as to how one would calculate the modulo of a very large number without a calculator. This is an example that I have come up with just now:



4239^4 mod 19043



The answer is 808, but that is only because I used a calculator. I read in books and online that you can break the modulus 19043 into its factors and work modulo 137 and modulo 139, since (modulo (137*139)) is (modulo 19043).



I tried something like this...




4239^4 mod 137
=129^4 mod 137
=123


4239^4 mod 139
=69^4 mod 139
=113



But now I am stuck as to what to do next in the Chinese Remainder Theorem.
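The missing step is the standard CRT recombination: find the inverse of one modulus modulo the other, then combine the two residues. Here is a Python sketch of that step (my own illustration, not from the original post; the three-argument `pow(m, -1, n)` for modular inverses assumes Python 3.8+):

```python
def crt(r1, m1, r2, m2):
    # combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2) for coprime m1, m2
    inv = pow(m1, -1, m2)                       # inverse of m1 mod m2 (Python 3.8+)
    return (r1 + m1 * (((r2 - r1) * inv) % m2)) % (m1 * m2)

r1 = pow(4239, 4, 137)        # 123, as computed above
r2 = pow(4239, 4, 139)        # 113, as computed above
print(crt(r1, 137, r2, 139))  # 808
print(pow(4239, 4, 19043))    # 808, the direct computation, for comparison
```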

calculus - Recursive square root problem


Give a precise meaning to evaluate the following: $$\large{\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\dotsb}}}}}$$




Since I think it has a recursive structure (does it?), I reduce the equation to


$$ p=\sqrt{1+p} $$ $$ p^2=1+p $$ $$ p^2-p-1=0 $$ $$ p=\frac{1\pm\sqrt{5}}{2} $$


Did I do this right?
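Numerically, the iteration does settle on the positive root (a Python sketch, my own check; the negative root is excluded because every iterate is nonnegative):

```python
import math

p = 0.0
for _ in range(60):
    p = math.sqrt(1 + p)      # the recursion p -> sqrt(1 + p)

golden = (1 + math.sqrt(5)) / 2   # the positive root of p^2 - p - 1 = 0
print(abs(p - golden) < 1e-12)    # True
```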

inequality - Maximise $(x+1)\sqrt{1-x^2}$ without calculus

Problem



Maximise $f:[-1,1]\rightarrow \mathbb{R}$, with $f(x)=(1+x)\sqrt{1-x^2}$



With calculus, this problem would be easily solved by setting $f'(x)=0$ and obtaining $x=\frac{1}{2}$, then checking that $f''(\frac{1}{2})<0$ to obtain the final answer of $f(\frac{1}{2})=\frac{3\sqrt{3}}{4}$



The motivation behind this function comes from maximising the area of an inscribed triangle in the unit circle, for anyone that is curious.



My Attempt




$$f(x)=(1+x)\sqrt{1-x^2}=\sqrt{(1-x^2)(1+x)^2}=\sqrt 3 \sqrt{(1-x^2)\frac{(1+x)^2}{3}}$$



By the AM-GM Inequality, $\sqrt{ab}\leq \frac{a+b}{2}$, with equality iff $a=b$



This means that



$$\sqrt 3 \sqrt{ab} \leq \frac{\sqrt 3}{2}(a+b)$$



Substituting $a=1-x^2, b=\frac{(1+x)^2}{3}$,




$$f(x)=\sqrt 3 \sqrt{(1-x^2)\frac{(1+x)^2}{3}} \leq \frac{\sqrt 3}{2} \left((1-x^2)+\frac{(1+x)^2}{3}\right)$$



$$=\frac{\sqrt 3}{2} \left(\frac{4}{3} -\frac{2}{3} x^2 + \frac{2}{3} x\right)$$



$$=-\frac{\sqrt 3}{2}\frac{2}{3}(x^2-x-2)$$



$$=-\frac{\sqrt 3}{3}\left(\left(x-\frac{1}{2}\right)^2-\frac{9}{4}\right)$$



$$\leq -\frac{\sqrt 3}{3}\left(-\frac{9}{4}\right)=\frac{3\sqrt 3}{4}$$




Both inequalities have equality when $x=\frac{1}{2}$



Hence, the maximum value of $f(x)$ is $\frac{3\sqrt 3}{4}$, attained at $x=\frac{1}{2}$



However, this solution is (rather obviously, I think) heavily reverse-engineered, with the two inequalities carefully manipulated to give identical equality conditions of $x=\frac{1}{2}$. Is there some better or more "natural" way to find the maximum point, perhaps with better uses of AM-GM or other inequalities like Jensen's inequality?
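For what it's worth, a brute-force grid scan agrees with the claimed maximum (a Python sketch of my own; the grid is chosen so that it contains $x=\frac12$ exactly):

```python
import math

def f(x):
    return (1 + x) * math.sqrt(1 - x * x)

# scan a fine grid of [-1, 1]; the grid contains x = 1/2 exactly
best = max(f(i / 10**5) for i in range(-10**5, 10**5 + 1))
target = 3 * math.sqrt(3) / 4
print(abs(best - target) < 1e-9)  # True, attained at x = 1/2
```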

real analysis - Limit of the nested radical $\sqrt{7+\sqrt{7+\sqrt{7+\cdots}}}$





Where does this sequence converge? $\sqrt{7},\sqrt{7+\sqrt{7}},\sqrt{7+\sqrt{7+\sqrt{7}}}$,...


Answer



For a proof of convergence,


Define the sequence as


$\displaystyle x_{0} = 0$



$\displaystyle x_{n+1} =\sqrt{7 + x_n}$


Note that $\displaystyle x_n \geq 0 \ \ \forall n$.


Notice that $\displaystyle x^2 - x - 7 = (x-a)(x-b)$ where $\displaystyle a \lt 0$ and $\displaystyle b \gt 0$.


We claim the following:


i) $\displaystyle x_n \lt b \Longrightarrow x_{n+1} \lt b$
ii) $\displaystyle x_n \lt b \Longrightarrow x_{n+1} \gt x_n$


For a proof of i)


We have that


$\displaystyle x_n \lt b = b^2 - 7$ and so $x_n +7 \lt b^2$ and thus by taking square roots $x_{n+1} \lt b$


For a proof of ii)


We have that



$\displaystyle (x_{n+1})^2 - (x_n)^2 = -(x^2_n - x_n -7) = -(x_n-a)(x_n-b) \gt 0$ if $x_n \lt b$.


Thus $\displaystyle \{x_{n}\}$ is monotonically increasing and bounded above and so is convergent.


By setting $L = \sqrt{7+L}$, we can easily see that the limit is $\displaystyle b = \dfrac{1 + \sqrt{29}}{2}$



In fact, we can show that the convergence is linear.


$\displaystyle \dfrac{b-x_{n+1}}{b-x_n} = \dfrac{b^2 - (7+x_n)}{(b+\sqrt{7+x_n})(b-x_n)} = \dfrac{1}{b + x_{n+1}}$


Thus $\displaystyle \lim_{n\to \infty} \dfrac{b-x_{n+1}}{b-x_n} = \dfrac{1}{2b}$.


We can also show something a bit stronger:


Let $\displaystyle t_n = b - x_n$.


Then we have shown above that $\displaystyle t_n \gt 0$ and $\displaystyle t_n \lt b^2$


We have that



$\displaystyle b - t_{n+1} = \sqrt{7 + b - t_n} = \sqrt{b^2 - t_n}$


Dividing by $\displaystyle b$ throughout we get


$\displaystyle 1 - \dfrac{t_{n+1}}{b} = \sqrt{1 - \dfrac{t_n}{b^2}}$


Using $\displaystyle 1 - \dfrac{x}{2} \gt \sqrt{1-x} \gt 1 - x$ for $0 \lt x \lt 1$, we have that


$\displaystyle 1 - \dfrac{t_n}{2b^2} \geq 1 - \dfrac{t_{n+1}}{b} \geq 1 - \dfrac{t_n}{b^2}$


And so


$\displaystyle \dfrac{t_n}{2b} \leq t_{n+1} \leq \dfrac{t_n}{b}$


This gives us that $\displaystyle b - \dfrac{b}{b^n} \leq x_n \leq b - \dfrac{b}{(2b)^n}$
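Both the limit and the linear convergence rate $\frac{1}{2b}$ are easy to observe numerically (a Python sketch, my own addition to the answer):

```python
import math

b = (1 + math.sqrt(29)) / 2        # positive root of x^2 - x - 7 = 0

x = 0.0
errors = []
for _ in range(40):
    x = math.sqrt(7 + x)           # the recursion x -> sqrt(7 + x)
    errors.append(b - x)           # t_n = b - x_n

print(abs(x - b) < 1e-12)                               # the sequence converges to b
print(abs(errors[5] / errors[4] - 1 / (2 * b)) < 1e-3)  # error ratio ~ 1/(2b)
```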


Tuesday, June 27, 2017

The functional equation $f(xy)=f(x)f(y)$

Let $f(x)$ be a function that satisfies this functional equation, $f(xy)=f(x)f(y)$.



With a little bit of intuition and luck one may come to a conclusion that these are perhaps the solutions of $f(x)$,





  • $f(x)=x$

  • $f(x)=1$

  • $f(x)=0$



However, these solutions all belong to the family $f(x)=x^n$. What I mean by this is that when $n=1$ you get the function $f(x)=x$, when $n=0$ you get $f(x)=1$, and when $x=0$, well, you get $f(x)=0$.
So it seems $f(x)=x^n$ is the genuine solution to that functional equation, and by taking different values for $x$ and $n$ you get a bunch of other functions of the same family.




Getting excited by this I tried to take different values for $x$, for instance when $x=2$, $x^n$ becomes $2^n$. So, now I'm expecting the function $f(x)=2^n$ to satisfy this functional equation $f(xy)=f(x)f(y)$. However, it doesn't. I don't know why it's not satisfying. May I get your explanation?

integration - Why does $\int_{-\infty}^{+\infty} \arctan\left(\frac{1}{1+x^2}\right)dx$ have a real value when the indefinite integral uses $i$?




WolframAlpha gives a real closed form for this definite integral:
$$\int_{-\infty}^{+\infty } \arctan\left(\dfrac{1}{1+x^2}\right)dx = \sqrt{2\left(\sqrt{2}-1\right)}\;\pi$$



Yet, the formula it gives for the indefinite integral uses $i$.



$$\int \tan^{-1}\left(\frac{1}{x^2 + 1}\right) dx = x \tan^{-1}\left(\frac{1}{x^2 + 1}\right)
+ 2 \left(
\frac{\tan^{-1}\left(\frac{x}{\sqrt{1 - i}}\right)}{(1 - i)^{3/2}}
+

\frac{\tan^{-1}\left(\frac{x}{\sqrt{1 + i}}\right)}{(1 + i)^{3/2}}
\right) + C$$




Why isn't the definite integral non-real?



Answer



First of all, it is real because it converges and because $\arctan\left(\frac1{1+x^2}\right)$ is a real number for every $x\in \mathbb R$.



On the other hand, that some number can be represented with an expression involving $i$ doesn't mean it is not a real number. For instance,

$$\frac1i+i=0\in \mathbb R.$$


calculus - Computing $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3}$ without L'Hôpital's rule.






Computing $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3}$ without L'Hopital




Say $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3} = L$



For $L$:
$$L=\lim_{x\to0}\frac{\tan x-x}{x^3}\\
L=\lim_{x\to0}\frac{\tan 2x-2x}{8x^3}\\
4L=\lim_{x\to0}\frac{\frac12\tan2x-x}{x^3}\\

3L=\lim_{x\to0}\frac{\frac12\tan{2x}-\tan x}{x^3}\\
=\lim_{x\to0}\frac{\tan x}x\frac{\frac1{1-\tan^2x}-1}{x^2}\\
=\lim_{x\to0}\frac{(\tan x)^3}{x^3}=1\\
\large L=\frac13$$



I found that in another question; can someone tell me why



$$L=\lim_{x\to0}\frac{\tan x-x}{x^3}=\lim_{x\to0}\frac{\tan 2x-2x}{8x^3}$$


Answer



If $x = 2y$ then $y\rightarrow 0$ when $x\rightarrow 0$, so $\lim_{x\rightarrow0} f(x) = \lim_{y\rightarrow 0} f(2y)$.
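A quick numerical check (a Python sketch, not from the original thread) of both the limit $L=\frac13$ and the rescaled form used in the derivation:

```python
import math

x = 1e-3
L_direct = (math.tan(x) - x) / x**3
L_scaled = (math.tan(2 * x) - 2 * x) / (8 * x**3)   # the same expression after x -> 2y

print(abs(L_direct - 1 / 3) < 1e-4)   # True
print(abs(L_scaled - 1 / 3) < 1e-4)   # True
```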



Monday, June 26, 2017

sequences and series - What is $\sum_{n=1}^{\infty} \frac{1}{\sqrt{n^{3} + 1}}$?

I am interested in the symbolic evaluation of infinite series over algebraic functions in terms of well-known special functions such as the
Hurwitz zeta function. Often, infinite sums over algebraic expressions can be evaluated in a simple way in terms of well-known special functions. For example, a direct application of Euler's identity may be used to show that $$ \sum_{n \in \mathbb{N}} \frac{1}{\sqrt{(n^2+1)(\sqrt{n^2+1}+n)}}
= - \frac{i \left( \zeta\left( \frac{1}{2}, 1 - i \right) -
\zeta\left(\frac{1}{2}, 1 + i \right) \right)}{\sqrt{2}}. $$
The above formula seems to suggest that there may be many classes of infinite series over algebraic expressions that could be easily evaluated in terms of well-established special functions.



Inspired in part by this question, together with
this question
and
this question, as well as

this question, I'm interested in the problem of evaluating
$$\sum_{n=1}^{\infty} \frac{1}{\sqrt{n^{3} + 1}}
= 2.29412...$$
in terms of "known" special functions, e.g., special functions implemented within Mathematica. This problem is slightly different from the related problems given in the above links, since I'm interested specifically in the use of special functions to compute the above sum symbolically.



More generally, what kinds of algorithms could be used to evaluate infinite sums over algebraic expressions symbolically in terms of special functions?


real analysis - $\left\{\frac{1}{f(x_n)}\right\}$ converges to $\frac{1}{f(x)}$.

Let $f:\Bbb R \to \Bbb R$ be a continuous function. Let $\{x_n\}_{n=1}^\infty$ be a convergent sequence in $\Bbb R$ with $\lim \limits_{n\to\infty}x_n=x$ and $f(x)\ne 0$



I want to show that $\left\{\frac{1}{f(x_n)}\right\}$ converges to $\frac{1}{f(x)}$.



Now It would seem I have:




$$|f(x_n) - f(x)| \lt \epsilon $$
$$|f(x_n) - f(x)| \leq |f(x_n)| - |f(x)| $$



and now I don't get it. I would expect to get some relationship also less than epsilon, multiply by $-1$ to reverse the epsilon inequality, and then find a way to take the reciprocals, which would reverse the epsilon inequality again, giving me the result; but I can't see it.







elementary number theory - How can I prove $\sqrt{\sqrt2}$ to be irrational?




How can I prove $\sqrt{\sqrt2}$ to be irrational?




I know that $\sqrt2$ is an irrational number, it can be proved by contradiction, but I'm not sure how to prove that $\sqrt{\sqrt2} = \sqrt[4]{2}$ is irrational as well.


Answer



Suppose $x= \sqrt{ \sqrt 2}$ were rational; then so is its square $x^2=\sqrt 2$, which you have shown is irrational. Contradiction!



Sunday, June 25, 2017

algebra precalculus - Does $i^4$ equal $1?$



I can't seem to find a solution to this for the life of me. My mathematics teacher didn't know either.



Edit: I asked the teacher that usually teaches my course today, and she said it was incredible that the other teacher didn't know.



My logic goes as follows:



any real number $x$ to the fourth power is equal to $(x^2)^2$. Using this logic, $i^4$ would be equal to $(i^2)^2$. This would result in $(-1)^2$, and $(-1)^2 = 1$.




Obviously, this logic can be applied to any real numbers, but does it also apply to complex numbers?


Answer



Yes. The powers of $i$ are cyclic, repeating themselves every time the exponent increases by 4:
$$i^0 = 1$$
$$i^1=i$$
$$i^2 = -1$$
$$i^3 = -i$$
$$i^4 = 1$$
$$i^5 = i$$
$$i^6 = -1$$

$$i^7 = -i$$
$$i^8 = 1$$
etc.



Your reasoning is excellent, and you should feel good about the fact that you figured this out on your own. The fact that your math teacher didn't know this is, in my professional opinion as a mathematics educator, a disgrace.



Edited to add: As Kamil Maciorowski notes in the comments, the pattern persists for negative exponents, as well. Specifically,
$$i^{-1}= \frac{1}{i} = -i$$
If $\frac{1}{i}=-i$ seems odd, notice that $i(-i) = -i^2 = -(-1) = 1$, so $i$ and $-i$ are multiplicative inverses; therefore $i^{-1} = -i$. Once you know that, you can extend the pattern:
$$i^{-1} = -i$$

$$i^{-2} = -1$$
$$i^{-3} = i$$
$$i^{-4} = 1$$
and so on.
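As a quick sanity check of the cycle (a Python sketch of my own, using the built-in complex type), integer powers of $i$ really do repeat with period 4, negative exponents included:

```python
# powers of i cycle with period 4, for negative exponents as well
cycle = {0: 1, 1: 1j, 2: -1, 3: -1j}

for n in range(-8, 9):
    assert (1j) ** n == cycle[n % 4]
print("all integer powers of i match the period-4 cycle")
```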



Second update:
The OP asks for some additional discussion of the property $\left( x^a \right)^b = x^{ab}$, so here is some background on that:



First, if $a$ and $b$ are natural numbers, then exponentiation is most naturally understood in terms of repeated multiplication. In this context, $x^a$ means $(x\cdot x\cdot \cdots \cdot x)$ (with $a$ factors of $x$ appearing), and $\left( x^a \right)^b$ means $(x\cdot x\cdot \cdots \cdot x)\cdot(x\cdot x\cdot \cdots \cdot x)\cdot \cdots \cdot (x\cdot x\cdot \cdots \cdot x)$, with $b$ sets of parentheses, each containing $a$ factors of $x$. Since multiplication is associative, we can drop the parentheses and recognize this as a product of $ab$ factors of $x$, i.e. $x^{ab}$.




Note that this reasoning works for any $x$, whether it is positive, negative, or complex. It even applies in settings where multiplication is noncommutative, like matrix multiplication or quaternions. All we need is that multiplication is associative, and that $a$ and $b$ be natural numbers.



Once we have established that $\left( x^a \right)^b = x^{ab}$ for natural numbers $a,b$ we can extend the logic to integer exponents. If $a$ is a positive number, and if $x$ has a multiplicative inverse, then we define $x^{-a}$ to mean the same thing as $\left(\frac1x\right)^a$, or (equivalently) as $\frac1{x^a}$. With this convention in place, it is straightforward to verify that for any combination of signs for $a,b$, the formula $\left(x^a\right)^b = x^{ab}$ holds.



Note however that in extending the formula to cover a larger set of exponents, we have also made it necessary to restrict the domain of values $x$ over which this property holds. If $a$ and $b$ are just natural numbers then $x$ can be almost any object in any set over which an associative multiplication is defined. But if we want to allow $a$ and $b$ to be integers then we have to restrict the formula to the case where $x$ is an invertible element. In particular, the formula $x^{a}$ is not really well-defined if $x=0$ and $a$ is negative.



Now let's consider the case where the exponents are not just integers but arbitrary rational numbers. We begin by defining $x^{1/a}$ to mean $\sqrt[a]{x}$. ( See Why does $x^{\frac{1}{a}} = \sqrt[a]{x}$? for a short explanation of why this convention makes sense.)



In this definition, we are assuming that $a$ is a natural number, and that $x$ is positive. Why do we need $x$ to be positive? Well, consider an expression like $x^{1/2}$. If $x$ is positive, this is (by convention) defined to be the positive square root of $x$. But if $x$ is negative, then $x^{1/2}$ is not a real number, and even if we extend our number system to include complex numbers, it is not completely clear which of the two complex square roots of $x$ this should be identified with. More or less the same problem arises when you try to extend the property to complex $x$: while nonzero complex numbers do have square roots (and $n$th roots in general), there is no way to choose a "principal" $n$th root.




Things get really crazy when you try to extend the property $\left(x^a\right)^b=x^{ab}$ to irrational exponents. If $x$ is a positive real number and $a$ is a real number, we can re-define the expression $x^a$ to mean $e^{a\ln x}$, and it can be proved that this re-definition produces the same results as all of the conventions above, but it only works because $\ln x$ is well-defined for positive $x$. As soon as you try to allow negative $x$, you run into trouble, since $\ln x$ isn't well-defined in that case. One can define logarithms of negative and complex numbers, but they are not single-valued, and there are all kinds of technicalities about choosing a "branch" of the logarithm function.



In particular -- and this is very important for the question at hand -- the identity $\left(x^a\right)^b=x^{ab}$ does not hold in general if $x$ is not a positive real number or if $a,b$ are not both integers. A lot of people misunderstand this, and indeed there are many, many, many, many questions on this site that are rooted in this misunderstanding.



But with respect to the question in the OP: It is perfectly reasonable to argue that $i^4 = \left(i^2 \right)^2$, because even though $i$ is a complex number, the exponents are integers, so the basic notion of exponentiation as repeated multiplication is reliable.


calculus - Calculate the limit of the sequence $a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1} + \sqrt{n+1} -2\sqrt{n} )$




I've been struggling with this one for a bit too long:



$$
a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1} + \sqrt{n+1} -2\sqrt{n} )$$



What I've tried so far was using the fact that the inner expression is equivalent to this:



$$ a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1}-\sqrt{n} + \sqrt{n+1} -\sqrt{n} ) $$




Then I tried multiplying each of the expression by their conjugate and got:



$$
a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \frac{1}{\sqrt{n+1} +\sqrt{n}} - \frac{1}{\sqrt{n-1} +\sqrt{n}} )
$$



But now I'm in a dead end.
Since I have this annoying $n^\frac{2}{3}$ outside of the brackets, each of my attempts to finalize this ends up with the undefined expression $(\infty\cdot0)$.



I've thought about using the squeeze theorem some how, but didn't manage to connect the dots right.




Thanks.


Answer



Keep on going... the difference between the fractions is



$$\frac{\sqrt{n-1}-\sqrt{n+1}}{(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$



which, by similar reasoning as before (diff between two squares...), produces



$$\frac{-2}{(\sqrt{n-1}+\sqrt{n+1})(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$




Now, as $n \to \infty$, the denominator behaves as $(2 \sqrt{n})^3 = 8 n^{3/2}$. Thus, $\lim_{n \to \infty} (-1/4) n^{-3/2} n^{2/3} = \cdots$? (Is the OP sure (s)he didn't mean $n^{3/2}$ in the numerator?)
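Filling in the dots numerically (a Python sketch, my own addition): since the bracket behaves like $-\frac{1}{4}n^{-3/2}$, the sequence behaves like $-\frac14 n^{-5/6}\to 0$, and we can watch $a_n\, n^{5/6}$ approach $-\frac14$:

```python
import math

def a(n):
    # n^(2/3) * (sqrt(n-1) + sqrt(n+1) - 2*sqrt(n))
    return n ** (2 / 3) * (math.sqrt(n - 1) + math.sqrt(n + 1) - 2 * math.sqrt(n))

n = 10**6
# a(n) ~ -(1/4) n^(-5/6), so a(n) * n^(5/6) should be close to -1/4
print(abs(a(n) * n ** (5 / 6) + 0.25) < 0.01)  # True
```

(For much larger $n$ the bracket is swamped by floating-point cancellation, so moderate $n$ is used here.)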


Let $a$ be a quadratic residue modulo $p$. Prove that the number $b\equiv a^\frac{p+1}{4} \bmod p$ has the property that $b^2\equiv a \bmod p$.



Let $p$ be a prime satisfying $p\equiv 3 \mod 4$. Let $a$ be a quadratic residue modulo $p$. Prove that the number $$b\equiv a^\frac{p+1}{4} \mod p$$ has the property that $b^2\equiv a \mod p$. (Hint: Write $\frac{p+1}{2}$ as $1+\frac{p-1}{2}$.) This gives an easy way to take square roots modulo $p$ for primes that are congruent to $3$ modulo $4$.



I assume that the proof comes directly from the proof of quadratic residues but I am not sure how.


Answer




Yes, you are right: since $a$ is a quadratic residue mod $p$, Euler's criterion gives $a^{(p-1)/2}\equiv1\pmod p$



Now $\left(a^{(p+1)/4}\right)^2=a^{(p-1)/2}\cdot a\equiv a\pmod p$
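The resulting square-root algorithm is a one-liner in Python (a sketch of my own; the example values $p=11$, $a=5$ are mine):

```python
def sqrt_mod(a, p):
    # square root of a quadratic residue a modulo a prime p with p ≡ 3 (mod 4)
    assert p % 4 == 3
    b = pow(a, (p + 1) // 4, p)
    assert pow(b, 2, p) == a % p, "a is not a quadratic residue mod p"
    return b

# example: p = 11 ≡ 3 (mod 4), and 5 ≡ 4^2 is a quadratic residue mod 11
print(sqrt_mod(5, 11))  # 4, and indeed 4^2 = 16 ≡ 5 (mod 11)
```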


real analysis - Intermediate Value Theorem and Discontinuous Functions

I am asked to find an example of a discontinuous function $f : [0, 1] \to \mathbb{R}$ for which the intermediate value theorem fails. I went over the intermediate value theorem today:

Let $f : [a, b] \to \mathbb{R}$ be a continuous function. Suppose that there exists a $y$ such that $f(a) < y < f(b)$ or $f(a) > y > f(b)$. Then there exists a $c \in [a,b]$ such that $f(c) = y$.

I understand the theory behind it; however, we did not go over many examples of how to use it to solve such problems, so I do not really know where to begin.

limits - Is $\frac{\sin(x)}{x}$ continuous at $x=0$? What's the value at $x=0$?

Is $\dfrac{\sin(x)}{x}$ continuous at $x = 0$?
What's the value at $x=0~?$

exponential function - The integral $\int_0^\infty e^{-t^2}\,dt$


My high-school teacher and I have argued about this limit for quite a long time.


We easily reached the conclusion that the integral from $0$ to $x$ of $e^{-t^2}dt$ has a limit somewhere between $0$ and $\pi/2$, using a little trick: the inequality $e^t>t+1$ for every real $t$. Replacing $t$ with $t^2$, taking reciprocals, and integrating from $0$ to $x$ gives a beautiful $\tan^{-1}$, and $\pi/2$ comes naturally.


Next, the limit seemed impossible to find. One week later, after some Google searches, I found what the limit is. This usually spoils the thrill of a problem, but in this case it only added to the curiosity. My teacher then explained that modern approaches, like a computerised approximation, might have been applied to find the limit, since the error function is not elementary. I argued that the result was too beautiful to be merely the output of computer brute force.


After a really vague introduction to Fourier series that he provided, I understood that Fourier in some sense generalised the first inequality, the one I used to get the bounds for the integral, with more terms of higher powers.


To the point: I wish to find a simple proof that the limit is indeed $\sqrt\pi/2$, using concepts I am familiar with. I do not know what Fourier analysis really does, but I am open to any new information.



Thank you for your time, I appreciate it a lot. I am also sorry for not using proper mathematical symbols, since I am using the app.


Answer



It's useless outside of this one specific integral (and its obvious variants), but here's a trick due to Poisson: \begin{align*} \left(\int_{-\infty}^\infty dx\; e^{-x^2}\right)^2 &= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-x^2}e^{-y^2} \\ &= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-(x^2 + y^2)} \\ &= \int_0^{2\pi} \!\!\int_0^\infty \;r\,dr\;d\theta\; e^{-r^2} \\ &= -\pi e^{-r^2}\Big\vert_{r=0}^\infty \\ &= \pi, \end{align*} switching to polar coordinates halfway through. Thus the given integral is $\frac{1}{2}\sqrt{\pi}$.
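As a numerical sanity check (my own addition), a composite Simpson rule on $[0,10]$ reproduces $\sqrt\pi/2$; the tail beyond $10$ is below $10^{-44}$ and can be ignored.

```python
import math

# Composite Simpson rule; n must be even.
def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

approx = simpson(lambda x: math.exp(-x * x), 0.0, 10.0, 2000)
print(approx, math.sqrt(math.pi) / 2)   # both ~0.8862269254
```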


Saturday, June 24, 2017

real analysis - How to evaluate $\lim_{x\to 0} \frac {(\sin(2x)-2\sin(x))^4}{(3+\cos(2x)-4\cos(x))^3}$?




$$\lim_{x\to 0} \frac {(\sin(2x)-2\sin(x))^4}{(3+\cos(2x)-4\cos(x))^3}$$



without L'Hôpital.



I've tried using equivalences with ${(\sin(2x)-2\sin(x))^4}$ and arrived at $-x^{12}$ but I don't know how to handle ${(3+\cos(2x)-4\cos(x))^3}$. Using $\cos(2x)=\cos^2(x)-\sin^2(x)$ hasn't helped, so any hint?


Answer



Hint: Note that
$$ 3+\cos(2x)-4\cos(x) = 3 + 2\cos^2(x) - 1 - 4\cos(x) = 2(\cos(x)-1)^2, $$
and that

$$ \sin(2x) - 2\sin(x) = 2\sin(x)\cos(x)-2\sin(x) = 2\sin(x)(\cos(x)-1). $$
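Combining the two hints, the quotient simplifies to $\frac{16\sin^4(x)(\cos(x)-1)^4}{8(\cos(x)-1)^6} = \frac{2\sin^4(x)}{(\cos(x)-1)^2}$, which tends to $\frac{2x^4}{(x^2/2)^2}=8$. A small numerical sketch (my own addition) using the factored, cancellation-free forms:

```python
import math

# ratio = (2 sin x (cos x - 1))^4 / (2 (cos x - 1)^2)^3, which tends to 8.
def ratio(x):
    s, c = math.sin(x), math.cos(x)
    return (2 * s * (c - 1))**4 / (2 * (c - 1)**2)**3

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))   # approaches 8
```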


geometry - The staircase paradox, or why $\pi\ne4$


What is wrong with this proof?



Is $\pi=4?$


Answer



This question is usually posed as the length of the diagonal of a unit square. You start going from one corner to the opposite one following the perimeter and observe the length is $2$, then take shorter and shorter stair-steps and the length is $2$ but your path approaches the diagonal. So $\sqrt{2}=2$.


In both cases, you are approaching the area but not the path length. You can make this more rigorous by breaking into increments and following the proof of the Riemann sum. The difference in area between the two curves goes nicely to zero, but the difference in arc length stays constant.



Edit: making the square more explicit. Imagine dividing the diagonal into $n$ segments and a stairstep approximation. Each triangle has legs $\frac{1}{n},\frac{1}{n}$ and hypotenuse $\frac{\sqrt{2}}{n}$. So the area between the stairsteps and the diagonal is $n \cdot \frac{1}{2n^2}=\frac{1}{2n}$, which converges to $0$. The path length, however, is $n \cdot \frac{2}{n}=2$ for every $n$.
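A tiny computational illustration of this (my own addition): walk the $n$-step staircase from $(0,0)$ to $(1,1)$ and measure it.

```python
import math

# The n-step staircase has length exactly 2 for every n, while the trapped
# area n * 1/(2n^2) = 1/(2n) goes to 0.
def stair_length(n):
    pts = [(0.0, 0.0)]
    for _ in range(n):                  # go right 1/n, then up 1/n
        x, y = pts[-1]
        pts.append((x + 1 / n, y))
        pts.append((x + 1 / n, y + 1 / n))
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

for n in (10, 100, 1000):
    print(n, stair_length(n), 1 / (2 * n), math.sqrt(2))
```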




Friday, June 23, 2017

real analysis - How "ugly" can a derivative get?

There are plenty of examples of differentiable functions $\Bbb R\to\Bbb R$ with derivatives that are not everywhere continuous. However, as stated here, it is impossible for the derivative to be nowhere continuous. In general, can anything be said about exactly how "ugly" a derivative can get?

real analysis - What about this integral involving the Gudermannian function, $\int_0^\infty \frac{\operatorname{gd}(x)}{e^x-1}\,\mathrm dx$?




The Gudermannian function is related to the exponential function, see for example the MathWorld's article Gudermannian. From this idea I was playing with the integral $$\int_0^\infty e^{-nx}\operatorname{gd}(x)\mathrm dx$$
where $n\geq 1$ are integers, when by summation over all these $n$, I wondered about the integral $$\int_0^\infty\frac{\operatorname{gd}(x)}{e^x-1}\mathrm dx.$$



Using the Wolfram Alpha online calculator I know the indefinite integral $\int e^{-nx}\operatorname{gd}(x)\mathrm dx$, and also numerical approximations (see the code, or similar queries, 10 digits of int gd(x)/(e^x-1)dx, from x=0 to infinity). Searching with Wolfram Alpha for a closed form of those approximations, I arrived at the following conjecture.



Claim(?). It seems that $$\int_0^\infty \frac{\operatorname{gd}(x)}{e^x-1}\mathrm dx=2K-\frac{\pi \log 2}{4},\tag{C}$$
where $K$ is Catalan's constant.





Question. Is it possible to get a proof of previous conjecture $(\text{C})$? Many thanks.



Answer



We first notice that



\begin{align*}
\int_{0}^{\infty}\frac{\operatorname{gd}(x)}{e^x-1}\,dx
&= \int_{0}^{\infty} \frac{1}{e^x-1} \left( \int_{0}^{x} \frac{dy}{\cosh y} \right) \, dx \\
&= \int_{0}^{\infty} \frac{1}{\cosh y} \left( \int_{y}^{\infty} \frac{dx}{e^x - 1} \right) \,d y \\
&= -2 \int_{0}^{\infty} \frac{\log(1 - e^{-y})}{e^y + e^{-y}} \, dy \\

&=-2\int_{0}^{\frac{\pi}{4}}\log(1-\tan\theta)\,d\theta \tag{$e^{-y}=\tan\theta$}.
\end{align*}



The last integral is our starting point. We introduce two tricks to evaluate this.



Step 1. Notice that $\tan(\frac{\pi}{4}-\theta)=\frac{1-\tan\theta}{1+\tan\theta}$. So by the substitution $\theta \mapsto \frac{\pi}{4}-\theta$, it follows that



$$ \int_{0}^{\frac{\pi}{4}}\log(1+\tan\theta)\,d\theta
= \int_{0}^{\frac{\pi}{4}}\log\left(\frac{2}{1+\tan\theta}\right)\,d\theta $$




and hence both integrals have the common value $\frac{\pi}{8}\log 2$. Applying the same idea to our integral, it then follows that



\begin{align*}
-2\int_{0}^{\frac{\pi}{4}}\log(1-\tan\theta)\,d\theta
&= -2\int_{0}^{\frac{\pi}{4}}\log\left(\frac{2\tan\theta}{1+\tan\theta}\right)\,d\theta \\
&= -2\int_{0}^{\frac{\pi}{4}}\log\tan\theta \, d\theta - \frac{\pi}{4}\log 2.
\end{align*}



Step 2. In order to compute the last integral, we notice that for $\theta\in\mathbb{R}$ with $\cos\theta\neq0$, we have




\begin{align*}
-\log\left|\tan\theta\right|
&= \log\left|\frac{1+e^{2i\theta}}{1-e^{2i\theta}}\right|
= \operatorname{Re} \log\left(\frac{1+e^{2i\theta}}{1-e^{2i\theta}}\right) \\
&= \operatorname{Re}\left( \sum_{n=1}^{\infty} \frac{1+(-1)^{n+1}}{n} e^{2in\theta} \right) \\
&= \sum_{k=0}^{\infty} \frac{2}{2k+1}\cos(4k+2)\theta.
\end{align*}



So by term-wise integration, we obtain




\begin{align*}
-2\int_{0}^{\frac{\pi}{4}}\log\tan\theta \, d\theta
&= \sum_{k=0}^{\infty} \frac{4}{2k+1} \int_{0}^{\frac{\pi}{4}} \cos(4k+2)\theta \, d\theta \\
&= 2 \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)^2}
= 2K,
\end{align*}



where $K$ is Catalan's constant.
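The closed form can also be confirmed numerically (my own check, not part of the original answer): integrate $\operatorname{gd}(x)/(e^x-1)=\arctan(\sinh x)/(e^x-1)$ on $[0,40]$ (the integrand decays like $\frac\pi2 e^{-x}$, so the tail is negligible) and compare with $2K-\frac{\pi\log 2}{4}$.

```python
import math

CATALAN = 0.915965594177219015  # Catalan's constant K

def f(x):
    if x == 0.0:
        return 1.0              # gd(x)/(e^x - 1) -> 1 as x -> 0
    return math.atan(math.sinh(x)) / math.expm1(x)

# Composite Simpson rule on [0, 40].
n, a, b = 4000, 0.0, 40.0
h = (b - a) / n
integral = (f(a) + f(b)
            + 4 * sum(f(a + i * h) for i in range(1, n, 2))
            + 2 * sum(f(a + i * h) for i in range(2, n, 2))) * h / 3

closed_form = 2 * CATALAN - math.pi * math.log(2) / 4
print(integral, closed_form)   # both ~1.28753
```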


functions - Bijection between $[0,1]$ and $[0,1)$

Could one give me a bijection between $[0,1]$ and $[0,1)$ ?



I am trying to find a function $f:[0,2]\to[0,2]$ such that for all $0\le x\le2$, $f^{-1}(x)$ contains exactly two elements; such a bijection would give me an answer easily.

Thursday, June 22, 2017

ELEMENTARY PROOF: Prove $a^2b^2(a^2-b^2)$ is divisible by $12$.


My first thought was to check whether $a^2 b^2(a^2-b^2)$ is divisible by $2$ and $3$, since they are the prime factors of $12$. But I cannot seem to get anywhere. Please give me initial hints. We have not learned modular arithmetic, so please try not to use it in the proof.


Thanks


Answer



First, let us see how squares behave modulo 3:



$$ n^2\, \text{mod}\, 3$$


We know n is either 0, 1, or 2 mod 3. Squaring this gives 0, 1, and 4 = 1 mod 3. In other words, $$ n^2\, \text{mod}\, 3 = 0$$


or


$$ n^2\, \text{mod}\, 3 = 1$$


Now, consider the different possible cases (both are 0 mod 3; both are 1 mod 3; one is 0 and the other is 1).


Next, do the same thing but under mod 2. You should notice that if a or b (or both) are even, the result follows easily. The only case left to consider is if a and b are odd... how can we factor the expression $a^2 - b^2$?
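The claim itself is easy to confirm by brute force (a quick check of my own, which of course is not a proof):

```python
# a^2 b^2 (a^2 - b^2) is divisible by 12 for every pair of integers checked.
for a in range(-20, 21):
    for b in range(-20, 21):
        assert (a * a * b * b * (a * a - b * b)) % 12 == 0
print("a^2 b^2 (a^2 - b^2) divisible by 12 for all |a|, |b| <= 20")
```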


integration - $\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x^2}\,dx$




I have seen in my complex analysis class that $\lim_{\epsilon\to 0}\int\limits_{|x|\geq \epsilon}\frac{1-e^{ix}}{x^2}dx=\pi$.



From here, by taking the real part, we concluded that $\int\limits_{-\infty}^{\infty}\frac{1-\cos(x)}{x^2}dx=\pi$.



I thought that now by taking the imaginary part, one should get $\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x^2}dx=0$.



However, it does not seem to me that it even converges, since near $0$ it looks like $1/x$ from both sides.



On the other hand, it is an odd function, so maybe it does exist and is $0$?




You can see the same argument in the following notes: Example 2



Thanks!


Answer



Through complex Analysis you may only show that
$$\color{red}{\text{PV}}\int_{-\infty}^{+\infty}\frac{\sin x}{x^2}\,dx = 0 \tag{1} $$
since we are not allowed to integrate through a pole, and $\int_{-\infty}^{+\infty}\frac{\sin x}{x^2}\,dx$ is not convergent in the usual sense (as an improper Riemann integral). On the other hand $(1)$ is trivial since $\frac{\sin x}{x^2}$ is an odd function.


Wednesday, June 21, 2017

calculus - Find a formula for the sum of $\sin (nx)$



I wonder if there is a way to calculate the



$$S_n=\sin x + \sin 2x + … + \sin nx$$


but using only derivatives?


Answer



Using telescopic sums:


$$ \sin(mx)\sin(x/2) = \frac{1}{2}\left(\cos\left((m-1/2)x\right)-\cos\left((m+1/2)x\right)\right)$$ Hence: $$ S_n \sin\frac{x}{2} = \frac{1}{2}\left(\cos\frac{x}{2}-\cos\left(\left(n+\frac{1}{2}\right)x\right)\right)=\sin\frac{nx}{2}\cdot\sin\frac{(n+1)x}{2}.$$
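A numerical comparison (my own addition) of the direct sum against the closed form $S_n = \sin\frac{nx}{2}\sin\frac{(n+1)x}{2}/\sin\frac{x}{2}$:

```python
import math

def S_direct(n, x):
    return sum(math.sin(k * x) for k in range(1, n + 1))

def S_closed(n, x):
    # requires sin(x/2) != 0, i.e. x not a multiple of 2*pi
    return math.sin(n * x / 2) * math.sin((n + 1) * x / 2) / math.sin(x / 2)

x, n = 0.7, 25
print(S_direct(n, x), S_closed(n, x))   # should agree
```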


decimal expansion - Is $0.9999...$ an integer?


Just out of curiosity: since $$\sum_{i>0}\frac{9}{10^i}=0.999\ldots=1,$$ does that mean $0.999\ldots=1$, or in other words, that $0.999\ldots$ is an integer, by applying the transitive property?


Ty.


Answer



$0.999999\ldots$ is indeed $1$, which indeed is a natural number and therefore an integer.


Tuesday, June 20, 2017

real analysis - Is my proof that $\lim\limits_{n\to +\infty}\dfrac{u_{n+1}}{u_n}=1$ correct?



I'm doing an exercise where $(u_n)$ is a numerical sequence which is decreasing and strictly positive. Since $(u_n)$ is decreasing and strictly positive, it is convergent and its limit, which we denote by $l$, is non-negative. Assume that $l\ne 0$.



I have to prove that $\lim\limits_{n\to +\infty}\dfrac{u_{n+1}}{u_n}=1$. I'm not sure if my proof is correct or not. Can you please check it? Thank you very much!
Please excuse my English. We don't study Maths in English.




Let $\varepsilon\in ]0;l[$.



So $\exists N\in\mathbb{N},\,\forall n\in\mathbb{N},\,n>N\Longrightarrow |u_n-l|<\varepsilon$



Let $n\in\mathbb{N}$ be such that $n>N$. We also have $n+1>n>N$.
Then:



$|u_{n+1}-u_n|=|(u_{n+1}-l)-(u_n-l)|\le |u_{n+1}-l|+|u_n-l|<2\varepsilon$ $(1)$



And we have $|u_n-l|<\varepsilon$, so $0<l-\varepsilon<u_n$ and therefore $\dfrac{1}{u_n}<\dfrac{1}{l-\varepsilon}$ $(2)$


Then $(1)\times (2)$ gives:



$\left|\dfrac{u_{n+1}}{u_n}-1\right|<\dfrac{2\varepsilon}{l-\varepsilon}$



We put $\varepsilon '=\dfrac{2\varepsilon}{l-\varepsilon}>0$. Then $\varepsilon=\dfrac{l\varepsilon '}{2+\varepsilon '}>0$.



Since $\varepsilon '>0$, we have $\dfrac{\varepsilon '}{2+\varepsilon '}<1$, and because $l>0$ it follows that $\varepsilon=\dfrac{l\varepsilon '}{2+\varepsilon '}<l$.

And so $\forall\varepsilon '\in\mathbb{R}^{+*},\,\exists\varepsilon\in ]0,l[,\,\varepsilon=\dfrac{l\varepsilon '}{2+\varepsilon '}$ and so $\varepsilon '$ covers $\mathbb{R}^{+*}$ where $\mathbb{R}^{+*}$ is the set of strictly positive real numbers. As a result we have then:$$\forall\varepsilon '\in\mathbb{R}^{+*},\,\exists N\in\mathbb{N},\,\forall n\in\mathbb{N},\, n>N\Longrightarrow\left|\dfrac{u_{n+1}}{u_n}-1\right| <\varepsilon '$$




And so $\lim\limits_{n\to +\infty}\dfrac{u_{n+1}}{u_n}=1$








Answer



As an exercise, we give a detailed argument directly from the definition. Suppose that the sequence $(u_n)$ has limit $a\gt 0$. We want to show that for any $\epsilon\gt 0$, there is an $N$ such that
$$1-\epsilon\lt \frac{u_{n+1}}{u_n}\le 1\tag{1}$$
if $n\gt N$. Note that

$$\frac{u_{n+1}}{u_n}\ge \frac{a}{u_n},$$ so it suffices to make $\frac{a}{u_n}\gt 1-\epsilon$. This will be the case automatically if $\epsilon\ge 1$, so we may suppose that $\epsilon\lt 1$.



For $0\lt \epsilon\lt 1$ we have
$$\frac{a}{u_n}\gt 1-\epsilon \quad\text{iff}\quad u_n \lt \frac{a}{1-\epsilon} \quad\text{iff}\quad u_n-a\lt \frac{a\epsilon}{1-\epsilon}.$$



Since the sequence $(u_n)$ converges to $a$, there is an $N$ such that if $n\gt N$, then $u_n-a\lt \frac{a\epsilon}{1-\epsilon}$. For any such $n$, Inequality (1) will hold.



Remark: Informally, this is simpler than it looks. We can scale the sequence $(u_n)$ so that it has limit $1$. That does not change ratios.


real analysis - Classical geometry statement in modern terminology



Given two line segments $\overline{AB}$ and $\overline{CD}$, it's always possible to find a third line segment whose length divides evenly into the first two. In modern terminology, if we assign $x = \overline{AB}$ and $y = \overline{CD}$, then the above statement is equivalent to asserting that $x = ay$, where $a \in \mathbb{Q}$.



I'm having difficulty understanding why these two statements are equivalent, mainly because I find the phrasing of the first sentence confusing. If $\overline{CD}$ is a rational multiple of $\overline{AB}$, then why can we always find a third line segment that "evenly" divides into them?


Answer



If $a = \frac p q$, then $x$ and $y$ are both integral multiples of $\frac y q$: indeed $x = p\cdot\frac y q$ and $y = q\cdot\frac y q$. So here, $\frac y q$ is the third length.




This is what is meant by “evenly dividing”: you can use the third length to measure both other lengths; it introduces a suitable common measure, just like inches measure feet and feet measure yards.



The statement is essentially that the lengths are commensurable and is equivalent to saying that the Euclidean algorithm for these lengths terminates: the third line segment is literally their greatest common divisor, i.e. the greatest length which measures both of the others.






And also, it is not always possible to find such a third length – the diagonal of a square is never commensurable with its sides which was exactly the unexpected discovery that unsettled the Pythagoreans back in their days. See here or here.


Square roots of a complex number



My book says that given the complex number $z$ with modulus $r$, its square roots are $\pm\sqrt{r}\,e^{i\Theta/2}$ where $\Theta$ is the principal value of $\arg z$. My question is: why must we consider the principal value of its argument?


Answer



We consider the principal value by convention. But if we take the other value $\frac{\theta}{2}+\pi$ we find the same two roots, because $e^{i(\frac{\theta}{2}+\pi)}=-e^{i\frac{\theta}{2}}$.


algebra precalculus - Integer coefficients polynomial. Find largest number of roots.



The polynomial $p(x)$ has integer coefficients, and $p(100)=100$. Let $r_1, r_2, …, r_k$ be distinct integers that satisfy the equation $p(x)=x^3$. What is the largest possible value of $k$?



Answer



Suppose there are $k$ distinct integer roots for $p(x)-x^3$. Then we may write $p(x) = x^3+q(x)\prod_{i=1}^k(x-r_i) \implies 100 = 100^3+q(100)\prod_{i=1}^k(100-r_i)$.



This gives $q(100) \prod_{i=1}^k(100-r_i) = -999,900=-2^2\cdot3^2\cdot5^2\cdot 11 \cdot 101$



LHS is thus a product of $k+1$ integers of which at least $k$ are distinct, and the RHS can be expressed as a product of at most $11$ factors. Hence $k \le 10$.



To prove $k_{max} = 10$, all we need now is to demonstrate one polynomial $p(x)$, say:
$$p(x) = x^3-(x-99)(x-101) (x-102) (x-98) (x-103) (x-97) (x-105) (x-95) (x-111) (x-201),$$
which will satisfy the conditions.
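The exhibited polynomial can be verified mechanically (a quick check of my own): $p(100)=100$ and $p(r)=r^3$ at the ten listed integers.

```python
# Verify the exhibited polynomial: p(100) = 100 and p(r) = r^3 at the
# ten distinct integer roots of p(x) - x^3.
roots = [99, 101, 102, 98, 103, 97, 105, 95, 111, 201]

def p(x):
    prod = 1
    for r in roots:
        prod *= (x - r)
    return x**3 - prod

assert p(100) == 100
assert all(p(r) == r**3 for r in roots)
print("p(100) = 100 and p(x) = x^3 at", len(roots), "distinct integers")
```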



linear algebra - Writing a matrix as a product of elementary matrices.


So if I have a matrix and I put it into RREF and keep track of the row operations, I can then write it as a product of elementary matrices. An elementary matrix is obtained by taking an identity matrix and performing one row operation on it. So starting from $\begin{bmatrix}1&0\\0&1\end{bmatrix}$, one elementary matrix could look like $\begin{bmatrix}1&0\\-1&1\end{bmatrix}$ for the row operation $r_2 - r_1$, or $\begin{bmatrix}1&0\\0&1/2\end{bmatrix}$ for the row operation $\dfrac{r_2}{2}$. The row operations that put a matrix into reduced row echelon form thus give a collection of elementary matrices whose product recovers the original matrix. So if I have a $2\times{2}$ matrix, what is the largest number of elementary matrices that can be needed? What would that look like?


Answer



Let's assume that nonzero entries in our matrices are invertible.


If $a \ne 0$, then a $2\times 2$ matrix with $a$ in the upper corner can be written as a product of 4 matrices that are elementary in the sense described:


$$ \left( \begin{array}{cc} 1 & 0 \\ \frac{c}{a} & 1 \end{array}\right) \left( \begin{array}{cc} a & 0 \\ 0 & 1 \end{array}\right) \left( \begin{array}{cc} 1 & 0 \\ 0 & d-\frac{bc}{a} \end{array}\right) \left( \begin{array}{cc} 1 & \frac{b}{a} \\ 0 & 1 \end{array}\right) = \left( \begin{array}{cc} a & b \\ c & d \end{array}\right) $$


Notice that when $a=1$, three elementary matrices suffice.


If $a=0$ but $c\ne 0$, then $$ \left( \begin{array}{cc} 1 & \frac{1}{c} \\ 0 & 1\end{array}\right) \left( \begin{array}{cc} 0 & b \\ c & d\end{array}\right)= \left( \begin{array}{cc} 1 & * \\ c & d\end{array}\right) $$ Since $\left( \begin{array}{cc} 1 & * \\ c & d\end{array}\right)$ can be written as a product of 3 elementary matrices, $\left( \begin{array}{cc} 0 & b \\ c & d\end{array}\right)$ can again be written as the product of 4. A similar argument holds when $a=0$ but $b \ne 0$.



I'll leave the case $a=b=c=0$ to the reader.
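The four-matrix factorization for the case $a\ne0$ can be checked in exact rational arithmetic (my own addition, using one concrete example):

```python
from fractions import Fraction

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c, d = map(Fraction, (3, 5, 7, 11))   # generic example with a != 0
E1 = [[1, 0], [c / a, 1]]
E2 = [[a, 0], [0, 1]]
E3 = [[1, 0], [0, d - b * c / a]]
E4 = [[1, b / a], [0, 1]]

P = matmul(matmul(matmul(E1, E2), E3), E4)
assert P == [[a, b], [c, d]]
print("E1 E2 E3 E4 =", P)
```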


calculus - Prove the next numbers are irrational

I need to prove the following numbers are irrational:


$$ \sqrt{2}+\sqrt{5} $$ $$\sqrt{3}$$ $$ \log_{2}3 $$


Of course, I tried to prove it by assuming the contrary...

sequences and series - On twisted Euler sums



An interesting investigation started here and it showed that
$$ \sum_{k\geq 1}\left(\zeta(m)-H_{k}^{(m)}\right)^2 $$
has a closed form in terms of values of the Riemann $\zeta$ function for any integer $m\geq 2$.
I was starting to study the cubic analogue $ \sum_{k\geq 1}\left(\zeta(m)-H_{k}^{(m)}\right)^3 $ and I managed to prove through summation by parts that
$$ \sum_{k\geq 1}\frac{\left(H_k^{(2)}\right)^2}{k^2} =\frac{1}{3}\zeta(2)^3-\frac{2}{3}\zeta(6)+\zeta(3)^2 $$
where the LHS, according to Flajolet and Salvy's notation, is $S_{22,2}$. An explicit evaluation of $\sum_{k\geq 1}\left(\zeta(2)-H_{k}^{(2)}\right)^3 $ is completed by the computation of
$$\boxed{ S_{12,2} = \sum_{k\geq 1}\frac{H_k H_k^{(2)}}{k^2} }$$
which has an odd weight, hence it is not covered by Thm 4.2 of Flajolet and Salvy. On the other hand they suggest that by the kernel $(\psi(-s)+\gamma)^4$ the previous series and the cubic Euler sum $\sum_{n\geq 1}\frac{H_n^3}{(n+1)^2}=\frac{15}{2}\zeta(5)+\zeta(2)\,\zeta(3)$ are strictly related.





Question: can you help me complete this sketch, in order to get an explicit value for $S_{12,2}$ and for $\sum_{k\geq 1}\left(\zeta(2)-H_{k}^{(2)}\right)^3$? Alternative techniques to summation by parts and residues are equally welcome.




Update: I have just realized this is solved by Mike Spivey's answer to Zaid's question here. On the other hand, Mike Spivey's approach is extremely lengthy, and I would be happy to see a more efficient derivation of $S_{12,2}=\zeta(2)\,\zeta(3)+\zeta(5)$.


Answer



I am skipping the derivation of Euler Sums $\displaystyle S(1,1;3) = \sum\limits_{n=1}^{\infty} \frac{H_n^2}{n^3}$ and $\displaystyle S(2;3) = \sum\limits_{n=1}^{\infty} \frac{H_n^{(2)}}{n^3}$ (for a derivation see solutions to $\textbf{problem 5406}$ proposed by C.I. Valean, Romania in SSMA).



Now, consider the partial fraction decomposition, \begin{align*}\sum\limits_{k=1}^{n-1}\left(\frac{1}{k(n-k)}\right)^2 = \frac{2}{n^2}\left(H_n^{(2)} + \frac{2H_n}{n}-\frac{3}{n^2}\right) \qquad \cdots (\star)\end{align*}




Multiplying both sides of $(\star)$ with $H_n$ and summing over $n \ge 1$ (and making the change of variable $m = n+k$):



\begin{align}
\sum\limits_{n=1}^{\infty}\frac{H_n}{n^2}\left(H_n^{(2)}+2\frac{H_n}{n} - \frac{3}{n^2}\right) &= \sum\limits_{n=1}^{\infty}\sum\limits_{k=1}^{n-1} \frac{H_n}{k^2(n-k)^2} \\&= \sum\limits_{m,k=1}^{\infty} \frac{H_{m+k}}{m^2k^2} \tag{0}\\&= \sum\limits_{m,k,j=1}^{\infty} \frac{mj+kj}{m^2k^2j^2(m+k+j)} \tag{1} \\&= 2\sum\limits_{m,k,j=1}^{\infty} \frac{jk}{m^2k^2j^2(m+k+j)} \tag{2} \\&= 2\sum\limits_{j,k=1}^{\infty} \frac{1}{jk}\sum\limits_{m=1}^{\infty} \frac{1}{m^2(m+j+k)} \tag{3} \\&= 2\sum\limits_{k,j=1}^{\infty} \frac{1}{kj(j+k)}\left(\zeta(2) - \frac{H_{k+j}}{k+j}\right) \tag{4} \\&= 4\sum\limits_{n=2}^{\infty}\frac{H_{n-1}}{n^2}\left(\zeta(2) - \frac{H_n}{n}\right)\\&= 4\zeta(2)\sum\limits_{n=1}^{\infty}\frac{H_{n-1}}{n^2} + 4\sum\limits_{n=1}^{\infty} \frac{H_n}{n^4}-4\sum\limits_{n=1}^{\infty} \frac{H_n^2}{n^3}\end{align}



Where, in steps $(0)$ and $(3)$ we used the identity, $\displaystyle \frac{H_q}{q} = \sum\limits_{\ell = 1}^{\infty} \frac{1}{\ell (\ell + q)}$. In step $(1)$ we used the symmetry w.r.t. the variables, in step $(2)$ interchanged order of summation and in step $(4)$ made the change of variables $n = j+k$.



Thus, $$ \sum\limits_{n=1}^{\infty} \frac{H_nH_n^{(2)}}{n^2} = 4\zeta(2)\zeta(3) + 7\sum\limits_{n=1}^{\infty} \frac{H_n}{n^4} - 6\sum\limits_{n=1}^{\infty} \frac{H_n^2}{n^3}$$


real analysis - Finite positive measure - integrals over subsets



Let $\mu$ be a real measure on a space $(S,\Sigma)$. Define $\nu: \Sigma \to [0, \infty)$ by
\begin{align}

\nu(E) = \sup\{\mu(F): F \in \Sigma, F \subseteq E, \mu(F)\geq 0 \}.
\end{align}
I want to show that $\nu$ is a finite positive measure.



First, $\nu(\emptyset)= \sup\{0\}=0$. Next, I want to show that $\nu(\sqcup_{n \in \mathbb{N}} E_n) = \sum_{n=1}^\infty \nu(E_n)$ using the monotone convergence theorem.



I can propose that the disjoint union of $E_n$'s form $E$, i.e. $\sqcup_{n \in \mathbb{N}} E_n = E$. So,
\begin{align}
\nu(\sqcup_{n \in \mathbb{N}} E_n) = \nu(E).
\end{align}




Now, I want to write $E$ as a sum of events which are a monotone increasing sequence such that the monotone converging theorem can be applied. However, I can't come up with such a sequence.


Answer



The idea is the same as the proof for positive measures. Note that the restriction $\mu(F)\ge0$ is redundant in the definition of $\nu$. There always exists $F\subseteq E$ such that $\mu(F)\ge 0$, e.g., $F=\emptyset$, so subsets with negative measure will be "automatically" ignored when taking supremum.



We first prove the sub-additivity of $\nu$. Suppose $A\cap B=\emptyset$. For any $F\subseteq A\cup B$, we have $\mu(F)=\mu(F\cap A)+\mu(F\cap B)\le\nu(A)+\nu(B)$. The sub-additivity follows.



Then we prove $\sum_{n=1}^\infty\nu(E_n)\le\nu(\sqcup_{n=1}^\infty E_n)$. By the definition of supremum, for each $E_n$ there is some subset $F_n\subseteq E_n$ such that $\nu(E_n)-\mu(F_n)<\epsilon\cdot 2^{-n}$. Hence
$$\sum_{n=1}^\infty\nu(E_n)<\epsilon+\sum_{n=1}^\infty\mu(F_n)=\epsilon+\mu(\bigsqcup_{n=1}^\infty F_n)$$
Since $\sqcup F_n\subseteq\sqcup E_n$, it's immediate that $\mu(\sqcup F_n)\le\nu(\sqcup E_n)$. Now we're done.



probability theory - Show that $E(Z^p) = p int_0^infty (1-F_Z(x))x^{p-1} , dx$ for every $p>0$ and nonnegative random variable $Z$


Given a continuous positive r.v. (I think this means $Z \geq 0$), with pdf $f_Z$ and CDF $F_Z$, how would I show that the following expression
$$\mathbb{E}(Z^p) = p \int_0^\infty (1-F_Z(x))x^{p-1} \, dx\text{?}$$
I don't know where to start; I tried maybe turning it into an integration-by-parts problem using $px^{p-1} = \frac{d}{dx}(x^p)$, but that's all I can think of.


Answer




Note that for $X \geq 0$ we have $\mathbb{E}(X) = \int_0^{\infty} \mathbb{P}(X \geq x) \, \mathrm{d}x$. Now let $Y= Z^p$ then $\mathbb{P}(Y < y) = \mathbb{P}(Z < y^{1/p})$ by monotonicity of $x^p$ on $[0,\infty)$. Hence we have $$\mathbb{E}(Z^p) = \int_0^{\infty} \mathbb{P}(Y > y) \, \mathrm{d}y = \int_0^{\infty} 1-F_Y(y) \, \mathrm{d}y = \int_0^{\infty}1-F_Z(y^{1/p}) \, \mathrm{d}y$$


Now we use the substitution $x^p = y$ to get $$\mathbb{E}(Z^p) = \int_0^{\infty}p(1-F_Z(x))x^{p-1} \, \mathrm{d}x.$$


Alternatively you can mimic the proof for $\mathbb{E}(Z) = \int_0^{\infty} \mathbb{P}(Z \geq z) \, \mathrm{d}z$ using Fubini for $\mathbb{E}(Z^p)$.
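The identity is easy to sanity-check on a concrete distribution (my own addition): for $Z\sim\mathrm{Exp}(1)$ we have $1-F_Z(x)=e^{-x}$ and $\mathbb{E}(Z^p)=\Gamma(p+1)$, so the right-hand side becomes $p\int_0^\infty e^{-x}x^{p-1}\,dx$.

```python
import math

# Composite Simpson rule; n must be even.
def simpson(f, a, b, n):
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + i * h) for i in range(1, n, 2))
            + 2 * sum(f(a + i * h) for i in range(2, n, 2))) * h / 3

p = 2.5
rhs = p * simpson(lambda x: math.exp(-x) * x**(p - 1), 0.0, 50.0, 20000)
print(rhs, math.gamma(p + 1))   # both ~3.3234
```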


limits - Evaluating and proving $\lim\limits_{x\to\infty}\frac{\sin x}x$

I've just started learning about limits. Why can we say $$ \lim_{x\rightarrow \infty} \frac{\sin x}{x} = 0 $$ even though $\lim_{x\rightarrow \infty} \sin x$ does not exist?



It seems like the fact that sin is bounded could cause this, but I'd like to see it algebraically.



$$ \lim_{x\rightarrow \infty} \frac{\sin x}{x} =
\frac{\lim_{x\rightarrow \infty} \sin x} {\lim_{x\rightarrow \infty} x}
= ? $$



L'Hopital's rule gives a fraction whose numerator doesn't converge. What is a simple way to proceed here?

can a number of the form $x^2 + 1 $ be a square number?



I have been trying to prove that $x^2 + 1$ is not a perfect square (other than $0^2 +1=1^2$). I'm stuck and can't move forward.



The thing I have tried so far is to relate the problem to a hyperbola and find an integer solution for both $x$ and $y$ when $a=b=1$. Pell's equation came up in my search, but I don't understand it fully.







Note: I was in a confused state and @CoolHandLouis' visual answer cleared my muddled mind, so I selected that answer. In that way, his answer was very helpful to me. @Alessandro's proof is clear to me now and if I could accept two answers, I would accepted that one too. Thanks to everyone for helping!


Answer



We want to prove $x^2 + 1$ can never be a perfect square.



Let $f(x) = x^2$. Then, for all $x > 0$,

$$f(x) \;<\; f(x) + 1 \;<\; f(x+1),$$

that is,

$$x^2 \;<\; x^2 + 1 \;<\; x^2 + 2x + 1.$$




Therefore, $x^2 + 1$ cannot be a perfect square (except $x = 0$) because it will always be greater than the prior perfect square and less than the next perfect square.



The following table illustrates this. Note that $f(x)$ is the set of all perfect squares:





x    f(x)=x^2    x^2+1    f(x+1)
0        0          1         1
1        1          2         4
2        4          5         9
3        9         10        16
4       16         17        25

Monday, June 19, 2017

sequences and series - Generalized Euler sum $\sum_{n=1}^\infty \frac{H_n}{n^q}$



I found the following formula



$$\sum_{n=1}^\infty \frac{H_n}{n^q}= \left(1+\frac{q}{2} \right)\zeta(q+1)-\frac{1}{2}\sum_{k=1}^{q-2}\zeta(k+1)\zeta(q-k)$$



and it is cited that Euler proved the formula above , but how ?




Do there exist other proofs ?



Can we have a general formula for the alternating form



$$\sum_{n=1}^\infty (-1)^{n+1}\frac{H_n}{n^q}$$


Answer



$$
\begin{align}
&\sum_{j=0}^k\zeta(k+2-j)\zeta(j+2)\\
&=\sum_{m=1}^\infty\sum_{n=1}^\infty\sum_{j=0}^k\frac1{m^{k+2-j}n^{j+2}}\tag{1}\\

&=(k+1)\zeta(k+4)
+\sum_{\substack{m,n=1\\m\ne n}}^\infty\frac1{m^2n^2}
\frac{\frac1{m^{k+1}}-\frac1{n^{k+1}}}{\frac1m-\frac1n}\tag{2}\\
&=(k+1)\zeta(k+4)
+\sum_{\substack{m,n=1\\m\ne n}}^\infty\frac1{nm^{k+2}(n-m)}-\frac1{mn^{k+2}(n-m)}\tag{3}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\sum_{n=m+1}^\infty\frac1{nm^{k+2}(n-m)}-\frac1{mn^{k+2}(n-m)}\tag{4}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{(n+m)m^{k+2}n}-\frac1{m(n+m)^{k+2}n}\tag{5}\\
&=(k+1)\zeta(k+4)\\

&+2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{m^{k+3}n}-\frac1{(m+n)m^{k+3}}\\
&-2\sum_{m=1}^\infty\sum_{n=1}^\infty\frac1{m(n+m)^{k+3}}+\frac1{n(n+m)^{k+3}}\tag{6}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{n=1}^\infty\sum_{m=1}^\infty\frac1{n(n+m)^{k+3}}\tag{7}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{n=1}^\infty\sum_{m=n+1}^\infty\frac1{nm^{k+3}}\tag{8}\\
&=(k+1)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}

-4\sum_{n=1}^\infty\sum_{m=n}^\infty\frac1{nm^{k+3}}+4\zeta(k+4)\tag{9}\\
&=(k+5)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{m=1}^\infty\sum_{n=1}^m\frac1{nm^{k+3}}\tag{10}\\
&=(k+5)\zeta(k+4)
+2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}
-4\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}\tag{11}\\
&=(k+5)\zeta(k+4)
-2\sum_{m=1}^\infty\frac{H_m}{m^{k+3}}\tag{12}
\end{align}

$$
Letting $q=k+3$ and reindexing $j\mapsto j-1$ yields
$$
\sum_{j=1}^{q-2}\zeta(q-j)\zeta(j+1)
=(q+2)\zeta(q+1)-2\sum_{m=1}^\infty\frac{H_m}{m^q}\tag{13}
$$
and finally
$$
\sum_{m=1}^\infty\frac{H_m}{m^q}
=\frac{q+2}{2}\zeta(q+1)-\frac12\sum_{j=1}^{q-2}\zeta(q-j)\zeta(j+1)\tag{14}

$$






Explanation



$\hphantom{0}(1)$ expand $\zeta$
$\hphantom{0}(2)$ pull out the terms for $m=n$ and use the formula for finite geometric sums on the rest
$\hphantom{0}(3)$ simplify terms
$\hphantom{0}(4)$ utilize the symmetry of $\frac1{nm^{k+2}(n-m)}+\frac1{mn^{k+2}(m-n)}$
$\hphantom{0}(5)$ $n\mapsto n+m$ and change the order of summation
$\hphantom{0}(6)$ $\frac1{mn}=\frac1{m(m+n)}+\frac1{n(m+n)}$
$\hphantom{0}(7)$ $H_m=\sum_{n=1}^\infty\frac1n-\frac1{n+m}$ and use the symmetry of $\frac1{m(n+m)^{k+3}}+\frac1{n(n+m)^{k+3}}$
$\hphantom{0}(8)$ $m\mapsto m-n$
$\hphantom{0}(9)$ subtract and add the terms for $m=n$
$(10)$ combine $\zeta(k+4)$ and change the order of summation
$(11)$ $H_m=\sum_{n=1}^m\frac1n$
$(12)$ combine sums
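Formula $(14)$ is easy to check numerically (my own addition). For $q=3$ it gives $\sum H_n/n^3 = \frac52\zeta(4)-\frac12\zeta(2)^2 = \pi^4/72$:

```python
import math

# Partial-sum check of (14) for q = 3: sum H_n/n^3 = pi^4/72.
N = 100_000
H, s = 0.0, 0.0
for n in range(1, N + 1):
    H += 1 / n      # harmonic number H_n
    s += H / n**3
print(s, math.pi**4 / 72)   # both ~1.352904
```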


linear algebra - Can we prove that matrix multiplication by its inverse is commutative?

We know that $AA^{-1} = I$ and $A^{-1}A = I$, but is there a proof for the commutative property here? Or is this just the definition of invertibility?

Sunday, June 18, 2017

abstract algebra - Can I skip part III in Dummit and Foote?



I am reading Abstract Algebra by Dummit and Foote. I have already taken an introductory course in linear algebra, mostly at the level of Strang's MIT OCW Linear Algebra. My question is: Can I skip 'Part III: Modules and Vector Spaces' and jump directly to Chapter 13: Field Theory in Dummit and Foote? Will that be harmful?



(Note that I am planning to cover Part I up to 5.3 and Part II up to 9.5, as suggested in preface of D&F. )


Answer



It depends what your goal is. If you want to get to the culmination of a classical abstract algebra class, Galois theory and the solvability of the quintic, you will not be missing much, if anything.


If you plan to go on to graduate study (even in group theory or field theory!), or even to read through the rest of D&F, then you will in all likelihood encounter material that builds on the theory of modules over a PID.


However, there is no harm in skipping Part III for the time being. Much of the abstract module material is more abstract than anything you have encountered so far, and you will benefit from having a motivation to read it.


Also make sure that you do not just read the chapter text but also look at the exercises. The standard results and applications of the theory -- matrix normal forms and finding particular conjugating matrices -- are hidden in the exercises of these chapters.



(Basis for my remarks: I have taught multiple times a first-year graduate class on this material, using D&F)


linear algebra - How can $V$ be a vector space over the field of real numbers when it is explicitly defined as being over the field of complex numbers?


Let $V = \{(a_1, a_2, ..., a_n): a_i \in \mathbb{C}$ for $ i = 1, 2, .., n\}$; so $V$ is a vector space over $\mathbb{C}$. Is $V$ a vector space over the field of real numbers with the operations of coordinatewise addition and multiplication?


The solution says,



Yes. All the conditions are preserved when the field is the real numbers.




I don't understand how $V$ can be a vector space over $\mathbb{R}$. $V$ itself is specified as being over the field of complex numbers ($V = \{(a_1, a_2, ..., a_n): a_i \in \mathbb{C}\}$), so how can the question then claim that $V$ is a vector space over the real numbers?


I would greatly appreciate it if someone could please take the time to clarify my misunderstanding.


Answer



Check the definition of a vector space over a field, for example on Wikipedia: https://en.wikipedia.org/wiki/Vector_space#Definition


Being over $\mathbb{R}$ means that vectors can be multiplied by real numbers, and the result is again a vector in $V$. This is certainly true in your example, since complex numbers multiplied by real numbers are again complex numbers.


calculus - Find $\lim\limits_{n \to \infty} \frac{x_n}{n}$ when $\lim\limits_{n \to \infty} (x_{n+k}-x_{n})$ exists



Let $(x_n)_{n \geq 1}$ be a sequence with real numbers and $k$ a fixed natural number such that $$\lim_{n \to \infty}(x_{n+k}-x_n)=l$$


Find $$\lim_{n \to \infty} \frac{x_n}{n}$$



I have a strong guess that the limit is $\frac{l}{k}$ and I tried to prove it using the sequence $y_n=x_{n+1}-x_n$. We know that $\lim_{n \to \infty}(y_n+y_{n+1}+\dots+y_{n+k-1})=l$ and if we found $\lim_{n \to \infty}y_n$ we would have from the Cesaro Stolz lemma that $$\lim_{n \to \infty}\frac{x_n}{n}=\lim_{n \to \infty}y_n$$


Answer



For fixed $m \in \{ 1, \ldots, k \}$ the sequence $(y_n)$ defined by $y_n = x_{m+kn}$ satisfies $$ y_{n+1} - y_n = x_{(m+kn) + k} - x_{m+kn} \to l \, , $$ so that Cesaro Stolz can be applied to $(y_n)$. It follows that $\frac{y_n}{n} \to l$ and $$ \frac{x_{m+kn}}{m+kn} = \frac{y_n}{n} \cdot \frac{n}{m+kn} \ \to \frac{l}{k} \text{ for } n \to \infty \, . $$ This holds for each $m \in \{ 1, \ldots, k \}$, and therefore $$ \lim_{n \to \infty} \frac{x_n}{n} = \frac lk \, . $$
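As a concrete illustration (my own example, not from the answer), take $k=2$ and $x_n = \frac32 n + (-1)^n$, so that $x_{n+2}-x_n = 3 = l$ for every $n$ and the result predicts $\frac{x_n}{n}\to\frac{l}{k}=\frac32$:

```python
# Hypothetical example sequence (editor's addition): k = 2 and
# x_n = 1.5 n + (-1)^n, so x_{n+2} - x_n = 3 = l for every n.
# The result then predicts x_n / n -> l / k = 1.5.
def x(n):
    return 1.5 * n + (-1) ** n

print(x(1000002) - x(1000000))   # 3.0
print(x(1000001) / 1000001)      # ≈ 1.5
```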


integration - A 'complicated' integral: $\int\limits_{-\infty}^{\infty}\frac{\sin(x)}{x}$




I am calculating the integral $\displaystyle \int \limits_{-\infty}^{\infty}\dfrac{\sin(x)}{x}\,dx$ and I don't seem to be getting an answer.



When I integrate by parts twice, I get:
$$\displaystyle \int \limits _{-\infty}^{\infty}\frac{\sin(x)}{x}dx = \left[\frac{\sin(x)\ln(x) - \frac{\cos(x)}{x}}{2}\right ]_{-\infty}^{+\infty}$$



What will be the answer to that?


Answer



Hint: As an improper Riemann integral, or in the sense of the Cauchy principal value, the integral is legitimate. Integrate by parts:
\begin{align}
\int \limits_{-\infty}^{\infty}\dfrac{\sin(x)}{x} \mbox{d} x

=
&
\lim_{t\to\infty}\int \limits_{-t}^{\frac{1}{t}}\dfrac{\sin(x)}{x} \mbox{d} x
+
\lim_{t\to\infty}\int \limits_{\frac{1}{t}}^{t}\dfrac{\sin(x)}{x} \mbox{d} x
\\
=
&
\lim_{t\to\infty}\int \limits_{-t}^{\frac{1}{t}}\sin(x)(\log x)^\prime \mbox{d} x
+

\lim_{t\to\infty}\int \limits_{\frac{1}{t}}^{t}\sin(x)(\log x)^\prime\mbox{d} x
\\
\end{align}
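For reference, the value of this principal-value integral is $\pi$; a quick numeric check (editor's addition, not part of the hint):

```python
import math

# Numeric companion (editor's addition): the principal value of
# \int_{-inf}^{inf} sin(x)/x dx equals pi.  Truncate at T; the tail is O(1/T).
T, n = 2000.0, 400000
h = T / n
s = 0.5 * (1.0 + math.sin(T) / T)    # sin(x)/x -> 1 as x -> 0
for i in range(1, n):
    x = i * h
    s += math.sin(x) / x
s *= 2 * h                           # even integrand: double the [0, T] part

print(s, math.pi)                    # s ≈ pi up to the O(1/T) tail
```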


Saturday, June 17, 2017

real analysis - Problem about bijective function



Which of the following statements is true?




$1.$ There exists a bijective function from $\mathbb{R} - \mathbb{Q}$ to $\mathbb{R}$.



$2.$ There exists a bijective function from $\mathbb{Q}$ to $\mathbb{Z} \times \mathbb{N}$.



$3.$ There exists a strictly increasing function from $\mathbb{Z}$ to $\mathbb{N}$.



$4.$ There exists a strictly increasing onto function from $\mathbb{Z}$ to $\mathbb{N}$.



I think only option $2$ is true, because a countable set maps to a countable set.



Answer



1) is true, since $\mathbb R \setminus \mathbb Q$ has the same cardinality as $\mathbb R$.



2) is true.



3) and 4) are false. Suppose there is a strictly increasing function $f: \mathbb Z \to \mathbb N$. Let $f(0)=n$. Then $f(-1), f(-2),\ldots$ satisfy the inequalities $\cdots < f(-2) < f(-1) < f(0) = n$. But there are only a finite number of elements of $\mathbb N$ less than $n$. Hence such an $f$ cannot exist.


sequences and series - $\lim\limits_{n\to\infty} \frac{n}{\sqrt[n]{n!}} = e$




I don't know how to prove that
$$\lim_{n\to\infty} \frac{n}{\sqrt[n]{n!}} =e.$$
Are there other different (nontrivial) nice limit that gives $e$ apart from this and the following
$$\sum_{k = 0}^\infty \frac{1}{k!} = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n = e\;?$$


Answer



In the series for $$e^n=\sum_{k=0}^\infty \frac{n^k}{k!},$$
the $n$th and biggest(!) of the (throughout positive) summands is $\frac{n^n}{n!}$.

On the other hand, all summands can be estimated as
$$ \frac{n^k}{k!}\le \frac{n^n}{n!}$$
and especially those
with $k\ge 2n$ can be estimated
$$ \frac{n^k}{k!}<\frac{n^{k}}{(2n)^{k-2n}\cdot n^{n}\cdot n!}=\frac{n^{n}}{n!}\cdot \frac1{2^{k-2n}}$$
and thus we find
$$\frac{n^n}{n!}<e^n\le(2n+1)\frac{n^n}{n!}+\frac{n^n}{n!}\sum_{k=2n}^\infty\frac1{2^{k-2n}}=(2n+3)\frac{n^n}{n!}.$$
Taking $n$th roots we find
$$ \frac n{\sqrt[n]{n!}}\le e\le \sqrt[n]{2n+3}\cdot\frac n{\sqrt[n]{n!}}.$$
Because $\sqrt[n]{2n+3}\to 1$ as $n\to \infty$, we obtain $$\lim_{n\to\infty}\frac n{\sqrt[n]{n!}}=e$$

from squeezing.
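The slow convergence can be watched numerically (editor's addition); `lgamma(n+1)` gives $\log(n!)$ without computing the huge factorial:

```python
import math

# Watching n / (n!)^{1/n} approach e (editor's addition).  Convergence is
# slow: by Stirling the error is on the order of log(n)/n.
for n in (10, 1000, 100000):
    val = n / math.exp(math.lgamma(n + 1) / n)   # lgamma(n+1) = log(n!)
    print(n, val)

print(math.e)
```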


linear algebra - What is the number of real solutions of the equation $ | x - 3 | ^ { 3x^2 -10x + 3 } = 1 $?

I did solve it; I got four solutions, but the book says there are only 3.



I considered the cases $| x - 3 | = 1$ or $3x^2 -10x + 3 = 0$.




From $|x - 3| = 1$ I got: $~2, 4$



From $3x^2 - 10x + 3 = 0$ I got: $~3, \frac13$



Am I wrong? Is $0^0 = 1$ or NOT?



Considering the fact that: $2^2 = 2\cdot 2\cdot 1$



$2^1 = 2\cdot 1$




$2^0 = 1$



$0^0$ should be $1$ right?

calculus - Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$



I'm supposed to calculate:



$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$




Using W|A, I may guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.


Answer



Edited. I justified the application of the dominated convergence theorem.



By a simple calculation,



$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\

&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\
&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$



We remark that




  1. In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.


  2. In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.

  3. In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.



Then, in view of Stirling's formula, it suffices to show that



$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$



The idea is to introduce the function




$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$



and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then



$$ \log g_n (u)
= n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u
= -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$



From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,




$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
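The limit can also be observed numerically (editor's addition). Evaluating the partial sum in log space via `lgamma` avoids overflowing $n^k/k!$:

```python
import math

# Numeric look at the limit (editor's addition): e^{-n} sum_{k<=n} n^k/k!,
# evaluated in log space so that n^k / k! never overflows.
def partial(n):
    return math.fsum(math.exp(k * math.log(n) - math.lgamma(k + 1) - n)
                     for k in range(n + 1))

for n in (10, 100, 1000, 10000):
    print(n, partial(n))   # decreases toward 1/2 at rate O(1/sqrt(n))
```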


algebra precalculus - Why is this a multiplication equation vs. a division equation?

Circle C Farm has 1,500 chickens. They separate the chickens into 6 different areas. How many chickens are in each area?



Define a variable and write an equation. Solve the equation.




According to my son's Math book, the equation is a multiplication equation, as follows:



Let x = the number of chickens in each area;



6x = 1500; x = 250



Why is this not a division equation even though it says they "separate the chickens into 6 different areas"? Doesn't separating the chickens translate to dividing?



Would greatly appreciate your help in understanding what in the word problem would help my son figure out that he needs to write a multiplication equation.

calculus - Evaluating the integral $\int_0^\infty \frac{x \sin rx}{a^2+x^2}\, dx$ using only real analysis



Calculate the integral$$ \int_0^\infty \frac{x \sin rx }{a^2+x^2} dx=\frac{1}{2}\int_{-\infty}^\infty \frac{x \sin rx }{a^2+x^2} dx,\quad a,r \in \mathbb{R}. $$
Edit: I was able to solve the integral using complex analysis, and now I want to try and solve it using only real analysis techniques.


Answer



It looks like I'm too late but still I wanna join the party. :D



Consider
$$
\int_0^\infty \frac{\cos rx}{x^2+a^2}\ dx=\frac{\pi e^{-ar}}{2a}.
$$

Differentiating both sides of the equation above with respect to $r$ yields
$$
\begin{align}
\int_0^\infty \frac{d}{dr}\left(\frac{\cos rx}{x^2+a^2}\right)\ dx&=\frac{d}{dr}\left(\frac{\pi e^{-ar}}{2a}\right)\\
-\int_0^\infty \frac{x\sin rx}{x^2+a^2}\ dx&=(-a)\frac{\pi e^{-ar}}{2a}\\
\Large\int_0^\infty \frac{x\sin rx}{x^2+a^2}\ dx&=\Large\frac{\pi e^{-ar}}{2}.
\end{align}
$$
Done! :)
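The starting formula $\int_0^\infty \frac{\cos rx}{x^2+a^2}\,dx=\frac{\pi}{2a}e^{-ar}$ is a standard table integral; a quick numeric check (editor's addition) for $a=1$, $r=2$:

```python
import math

# Numeric check (editor's addition) of the table integral
#   \int_0^\infty cos(rx) / (x^2 + a^2) dx = (pi / (2a)) e^{-ar},  a, r > 0,
# via the trapezoid rule on [0, T]; the integrand decays like 1/x^2.
a, r = 1.0, 2.0
T, n = 200.0, 400000
h = T / n
s = 0.5 * (1.0 / a ** 2 + math.cos(r * T) / (T ** 2 + a ** 2))
for i in range(1, n):
    x = i * h
    s += math.cos(r * x) / (x * x + a * a)
s *= h

exact = math.pi / (2 * a) * math.exp(-a * r)
print(s, exact)   # both ≈ 0.2126
```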



Friday, June 16, 2017

self learning - Proof of the infinite descent principle



Hi everyone I wonder to myself if the next proof is correct. I would appreciate any suggestion.



Proposition: There is no infinitely descending sequence of natural numbers.



Proof: Suppose for contradiction that there exists an infinitely descending sequence of natural numbers. Let $(a_n)$ be such a sequence, i.e., $a_n>a_{n+1}$ for all natural numbers $n$.




We claim that if the sequence exists, then $a_n\ge k$ for all $k, n \in N$.



We induct on $k$. Clearly the base case holds, since each $a_n$ is a natural number and hence $a_n \ge 0$ for all $n$. Now suppose inductively that the claim holds for some $k\ge 0$, i.e., $a_n\ge k$ for all $n \in N$; we wish to show that it also holds for $k+1$ and thus close the induction. Note that once the claim is established we get a contradiction, since $a_n \ge k$ for all $k, n \in N$ would mean that the fixed natural number $a_1$ is at least as large as every natural number.



We have $a_n>a_{n+1}$ since $(a_n)$ is an infinite descent. By the inductive hypothesis we know that $a_{n+1}\ge k$, so we have $a_n>k$ and therefore $a_n\ge k+1$.



To conclude, we have to show that the claim holds for every $n$. Suppose there were some $n_0$ and some $k$ with $a_{n_0} < k$; this would contradict the claim just proved, so no such sequence can exist. $\square$

Thanks :)


Answer




I would argue a different way.



By assumption,
for all $n$,
$a_n > a_{n+1}$,
or
$a_n \ge a_{n+1}+1$.



Therefore,
since

$a_{n+1} \ge a_{n+2}+1$,
$a_n \ge a_{n+2}+2$.



Proceeding by induction,
for any $k$,
$a_n \ge a_{n+k}+k$.



But,
set $k = a_n+1$.
We get

$a_n \ge a_{n+a_n+1}+a_n+1 > a_n$.



This is the desired contradiction.



This can be stated in this form:
We can only go down as
far as we are up.



Note:

This sort of reminds me
of some of the
fixed point theorems
in recursive function theory.


calculus - How to prove that $\int_{0}^{\infty}{\sin^4(x)\ln(x)}\cdot{\mathrm dx\over x^2}={\pi\over 4}\cdot(1-\gamma)$?




How to prove that

$$\int_{0}^{\infty}{\sin^4(x)\ln(x)}\cdot{\mathrm dx\over x^2}={\pi\over 4}\cdot(1-\gamma).\tag1$$




Here is my attempt:



$$I(a)=\int_{0}^{\infty}{\ln(x)\sin^4(x)\over x^a}\,\mathrm dx\tag2$$



$$I'(a)=\int_{0}^{\infty}{\sin^4(x)\over x^a}\,\mathrm dx\tag3$$



$$I'(2)=\int_{0}^{\infty}{\sin^4(x)\over x^2}\,\mathrm dx\tag4$$




$$I'(2)=\int_{0}^{\infty}{\sin^2(x)\over x^2}\,\mathrm dx-{1\over 4}\int_{0}^{\infty}{\sin^2(2x)\over x^2}\,\mathrm dx=-{\pi\over 2}\tag5$$



Why is this way wrong?



How to prove (1)?


Answer



This is solved essentially in the same way explained in answers to your previous question. As a convenient starting point, I will refer to @Jack D'Aurizio's answer:



$$ \int_{0}^{\infty}\frac{1-\cos(kx)}{x^2}\log(x)\,dx = \frac{k\pi}{2}\left(1-\gamma-\log k\right). \tag{1} $$




Now all you have to do is to write



$$ \sin^4 x = \frac{1}{2}(1 - \cos(2x)) - \frac{1}{8}(1 - \cos(4x)). \tag{2} $$



I hope that the remaining computation is clear to you.






For your attempt, a correct computation would begin with




$$ \frac{d}{da} \int_{0}^{\infty} \frac{\sin^4 x}{x^a} \, dx = - \int_{0}^{\infty} \frac{\sin^4 x}{x^a} \log x \, dx. $$



Notice that you misidentified the derivative of your parametrized integral.
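The target identity itself can be verified numerically (editor's addition). The check below truncates at $T=200$ and adds the analytic tail $\int_T^\infty \frac{3}{8}\frac{\ln x}{x^2}\,dx$, using that $\sin^4$ has mean value $\frac38$:

```python
import math

# Numeric check (editor's addition) of
#   \int_0^\infty sin^4(x) ln(x) / x^2 dx = (pi/4)(1 - gamma)
gamma = 0.5772156649015329        # Euler-Mascheroni constant
target = math.pi / 4 * (1 - gamma)

# trapezoid rule on [0, T]; the integrand tends to 0 as x -> 0+
T, n = 200.0, 400000
h = T / n
s = 0.5 * math.sin(T) ** 4 * math.log(T) / (T * T)
for i in range(1, n):
    x = i * h
    s += math.sin(x) ** 4 * math.log(x) / (x * x)
s *= h
# tail beyond T: sin^4 has mean 3/8, so add int_T^inf (3/8) ln(x)/x^2 dx
s += 0.375 * (math.log(T) + 1.0) / T

print(s, target)   # both ≈ 0.332
```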



Strassen's Algorithm for Non-Square Matrices



Morning Math-Exchange,



On my homework, we have a problem regarding divide and conquer for matrix multiplication, where if you are multiplying a (4x12) by a (12x4), the original total of multiplications is 192, but it can be dropped to 147.



I get why the original total for this problem is 192 multiplications, because you multiply 4*12*4, and I know Strassen's algorithm for square matrices, but not for non-square matrices...




I know, according to the Wikipedia page for Strassen's algorithm, that I need to pad the dimensions up to powers of $2$ by filling the remaining rows/columns with 0's so that I can make smaller square matrices, but I do not know what to do after that, or how to cut the number of multiplications to 147 with $2^2$ (no zeroes needed here) and $2^4$ (adding 4 more to the 12)...


Answer



Write your left factor as a row of three square matrices and your right factor as a column of three square matrices. Then use Strassen.
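To make the block decomposition concrete, here is a sketch (editor's addition; all helper names are my own): split the $4\times12$ factor into three $4\times4$ blocks $A_1,A_2,A_3$ and the $12\times4$ factor into $B_1,B_2,B_3$, so that $AB=A_1B_1+A_2B_2+A_3B_3$. Running Strassen all the way down costs $7^2=49$ scalar multiplications per $4\times4$ product, hence $3\cdot49=147<192$:

```python
import random

# Editor's sketch of the answer's decomposition.  AB = A1*B1 + A2*B2 + A3*B3
# with 4x4 blocks; each block product uses Strassen recursively (49 scalar
# multiplications each), for 147 total.

mults = 0  # counts scalar multiplications

def strassen(A, B):
    global mults
    n = len(A)
    if n == 1:
        mults += 1
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y, sign=1):
        return [[X[i][j] + sign * Y[i][j] for j in range(h)] for i in range(h)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, add(B12, B22, -1))
    M4 = strassen(A22, add(B21, B11, -1))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(add(A21, A11, -1), add(B11, B12))
    M7 = strassen(add(A12, A22, -1), add(B21, B22))
    C = [[0] * n for _ in range(n)]
    for i in range(h):
        for j in range(h):
            C[i][j] = M1[i][j] + M4[i][j] - M5[i][j] + M7[i][j]
            C[i][j + h] = M3[i][j] + M5[i][j]
            C[i + h][j] = M2[i][j] + M4[i][j]
            C[i + h][j + h] = M1[i][j] - M2[i][j] + M3[i][j] + M6[i][j]
    return C

A = [[random.randint(0, 9) for _ in range(12)] for _ in range(4)]
B = [[random.randint(0, 9) for _ in range(4)] for _ in range(12)]

C = [[0] * 4 for _ in range(4)]
for blk in range(3):                     # AB = sum of three 4x4 Strassen products
    Ab = [row[4 * blk:4 * blk + 4] for row in A]
    Bb = B[4 * blk:4 * blk + 4]
    P = strassen(Ab, Bb)
    for i in range(4):
        for j in range(4):
            C[i][j] += P[i][j]

print(mults)                             # 147
ref = [[sum(A[i][t] * B[t][j] for t in range(12)) for j in range(4)] for i in range(4)]
print(C == ref)                          # True
```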


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...