Saturday, December 31, 2016

integration - Can I shift this integral along the complex plane?




Does




$$\int_{-\infty}^\infty e^{\,a\,(x+b)^2}\,\mathrm{d}x=\int_{-\infty}^\infty e^{\,a\,x^2}\,\mathrm{d}x\ \ ?$$



hold, even if the imaginary part of $b$ is nonzero?




What I really want to understand is what the phrase "By analogy with the previous integrals" means in that link. There, the expression $\frac{J}{a}$ is complex but they seem to imply the integral can be solved like above anyway.



The result tells us that the integral is really independent of $J$, which is assumed to be real here. I wonder if we can also generalize this integral to include complex $J$. In case the shift above is possible, this should work out.




But even if the idea here is to perform that substitution, how does one get rid of the complex $a$ to obtain the result? If everything is purely real or purely imaginary, then this solves the rest of the problem.


Answer



Let us write $b= r+it$. The real part of $b$ does not matter as you have already proven yourself. So wlog $r=0$.



For shifting along the imaginary axis, we have to employ the residue theorem. We have
$$
\begin{align}
&\int_{-\infty}^\infty f(x+i t) \,dx- \int_{-\infty}^\infty f(x)\, dx\\
&\quad=\int_{-\infty-it}^{\infty-it} f(x) \,dx- \int_{-\infty}^\infty f(x)\, dx \\
&\quad= 2\pi i \sum \text{Res}(f)+ \int_{\infty-it}^{\infty} f(x) \,dx - \int_{-\infty-it}^{-\infty} f(x) \,dx,
\end{align}
$$
where $\sum \text{Res}(f)$ is the sum over the residues of $f$ in the area $z\in \mathbb{C}$ with $-t<\text{Im}\, z<0$.



So the two integrals are the same if there are no residues and if the two integrals at $\pm \infty$ vanish (both of which are the case for your example as long as $\text{Re}\,a <0$).
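As a numerical sanity check (an added sketch, assuming scipy is available; the values of $a$ and $b$ are arbitrary test choices with $\text{Re}\,a<0$):

import numpy as np
from scipy.integrate import quad

a = -1.0 + 0.5j
b = 0.3 + 0.7j

def complex_quad(f, lo, hi):
    # quad handles real integrands, so integrate Re and Im separately
    re, _ = quad(lambda x: f(x).real, lo, hi)
    im, _ = quad(lambda x: f(x).imag, lo, hi)
    return re + 1j * im

shifted = complex_quad(lambda x: np.exp(a * (x + b) ** 2), -10, 10)
unshifted = complex_quad(lambda x: np.exp(a * x ** 2), -10, 10)
print(shifted, unshifted)  # both agree with sqrt(pi / -a)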


real analysis - intermediate value property intuition



Let's say we are given a graph. How do we determine from the graph that the function has the intermediate value property? And also, why do we need to impose the condition 'monotone' so that a monotone function with the intermediate value property is continuous? Intuitively, I don't see why the statement 'a general function with the intermediate value property is continuous' is false.


Answer



Try $f(x) = \begin{cases} \sin \frac{1}{x} & x \neq 0 \\ 0 & x = 0\end{cases}$. This has the intermediate value property, but is not continuous at zero. Also, it is not monotone, so it serves as an example of why monotonicity is required.



Note that the graph is connected.




Note that if the graph is connected, it has the intermediate value property.



To see why, suppose a function $g$ does not have the intermediate value property. Then there exist $a, b, \gamma$ such that $a < b$, $\gamma$ lies strictly between $g(a)$ and $g(b)$ (say $g(a) < \gamma < g(b)$), yet $g(s) \neq \gamma$ for all $s \in [a,b]$. Let $U = \{ (x,y) \mid x < b, \ y < \gamma \} \cup \{ (x,y) \mid x < a \}$ and $V = \{ (x,y) \mid x > b \} \cup \{ (x,y) \mid x > a, \ y > \gamma \}$. Then both $U$ & $V$ are open, disjoint & non-empty, and the graph of $g$ is contained in $U \cup V$. Hence the graph is disconnected.



Furthermore, if a function $g$ has the intermediate value property, the graph is connected.



To see this, suppose $g$ has the intermediate value property, but the graph is not connected. Then there are two non-empty, disjoint open sets $U,V$ such that the graph of $g$ is contained in $U \cup V$. We can assume without loss of generality that we have some $a < b$ such that $(a,g(a)) \in U$, and $(b,g(b)) \in V$. Let $b' = \sup \{x \ge a | (t,g(t)) \in U \ \forall t \in [a,x] \}$. Since $U$ is open, we must have $(b',g(b')) \in V$. Since $V$ is open, we have $B((b',g(b')),\epsilon) \subset V$ for some $\epsilon>0$. For convenience, we use the $\max$-norm on $\mathbb{R}^2$, hence we have $(b'-\epsilon,b'+\epsilon) \times (g(b')-\epsilon,g(b')+\epsilon) \subset V$.



Choose $a' = b'-\frac{\epsilon}{2}$, $\gamma = g(b')-\frac{\epsilon}{2}$. Then if $t \in [a',b')$, we have $(t,g(t)) \in U$, hence $|g(t) -g(b')| \ge \epsilon$, hence $g(s) \neq \gamma$ for all $s \in [a',b']$, which contradicts $g$ having the intermediate value property. Hence the graph is connected.


How to find the sum of this series?




The series is $1\cdot\frac{1}{2} + 2\cdot\frac{1}{4} + 3\cdot\frac{1}{8} + \cdots$




Or in other words



$$\sum_{n=1}^{\infty}\frac{n}{2^n}$$



What kind of series is this and how to find the sum? Thanks....


Answer



Without calculus:



If

$s(a)=\sum_{n=0}^{\infty} na^n$ for $|a| < 1$, then

$$\begin{align}
as(a)&=\sum_{n=0}^{\infty} na^{n+1}\\
&=\sum_{n=1}^{\infty} (n-1)a^{n}\\
&=\sum_{n=1}^{\infty} na^{n}-\sum_{n=1}^{\infty} a^{n}\\
&=s(a)-\dfrac{a}{1-a},
\end{align}$$

so

$$s(a)=\dfrac{a}{(1-a)^2}.$$

In particular, your sum is $s\left(\frac12\right)=\dfrac{1/2}{(1-1/2)^2}=2$.
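A quick numerical check of that value (an added sketch): the partial sums of $\frac{n}{2^n}$ approach $2$.

partial = 0.0
for n in range(1, 60):
    partial += n / 2 ** n
print(partial)  # ~2.0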


calculus - How to prove $f(x)=ax$ if $f(x+y)=f(x)+f(y)$ and $f$ is locally integrable



Suppose $f(x)$ is integrable in any bounded interval on $\mathbb R$, and it satisfies the equation $f(x+y)=f(x)+f(y)$ on $\mathbb R$. How to prove $f(x)=ax$?


Answer



Integrate the functional equation with respect to $x$ between 0 and 1. The result is the equation
$$
\int_y^{y+1} f(u) du = \int_0^1 f(x) dx + f(y).
$$

The integral on the left side exists and is continuous in $y$ because $f$ is locally integrable. Therefore the right side is also continuous in $y$; that is, $f$ is continuous! The rest is clear sailing: a continuous additive function satisfies $f(qx)=qf(x)$ for all rational $q$, so by continuity $f(x)=f(1)\,x$.


algebra precalculus - Given roots (real and complex), find the polynomial



This is not a duplicate of theory of equations finding roots from given polynomial.



Given that the roots (both real and complex) of a polynomial are $\frac{2}{3}$, $-1$, $3+\sqrt2i$, and $3+\sqrt2i$, find the polynomial. All coefficients of the polynomial are real integer values.




What I have so far: $$(3x-2)(x+1)(x-\sqrt2\times i)=0$$ If I were solving other similar problems with two complex roots, I would probably be able to cancel them out, but I'm confused about how to do the $(x-\sqrt2i)$ part. Is this actually two complex roots that meet? Also, what degree is this polynomial? I would like an algebraic explanation please.


Answer



The conjugate factor theorem states that for a polynomial $p(x)$ with real coefficients, the complex roots come in conjugate pairs, or if $a+bi$ is a root, then $a-bi$ is also a root. From this we see that your polynomial has the roots $$\frac{2}{3}, -1, 3+\sqrt{2}i, 3-\sqrt{2}i.$$ Therefore, $$(x-\frac{2}{3})(x+1)(x-3-\sqrt{2}i)(x-3+\sqrt{2}i)=0.$$ $$\therefore (x^2+\frac{x}{3}-\frac{2}{3})(x-3-\sqrt{2}i)(x-3+\sqrt{2}i)=0$$ $$\therefore (x^2+\frac{x}{3}-\frac{2}{3})(x^2-6x+11)=0.$$ Continuing the expansion results in the polynomial $$p(x)=\frac{1}{3}(3x^4-17x^3+25x^2+23x-22).$$
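The expansion can be verified mechanically (an added sketch; numpy's convolution multiplies polynomials given their coefficient lists, highest degree first):

import numpy as np

p = np.convolve([3, -2], [1, 1])    # (3x - 2)(x + 1) = 3(x - 2/3)(x + 1)
p = np.convolve(p, [1, -6, 11])     # times (x^2 - 6x + 11)
print(p)  # [  3 -17  25  23 -22], i.e. 3x^4 - 17x^3 + 25x^2 + 23x - 22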


Friday, December 30, 2016

calculus - Finding the limit of $\frac{n}{\sqrt[n]{n!}}$


I'm trying to find $$\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}} .$$


I tried a couple of methods: Stolz, squeeze, d'Alembert.


Thanks!


Edit: I can't use Stirling.


Answer



Let $\displaystyle{a_n=\frac{n^n}{n!}}$. Then the power series $\displaystyle{\sum_{n=1}^\infty a_n x^n}$ has radius of convergence $R$ satisfying $\displaystyle{\frac{1}{R}=\lim_{n\to \infty} \sqrt[n]{a_n}=\lim_{n\to\infty}\frac{a_{n+1}}{a_n}}$, provided these limits exist. The first limit is what you're looking for, and the second limit is $\displaystyle{\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n}$.


Added: I just happened upon a good reference for the equality of limits above, which gives a more general result which is proved directly without reference to power series. Theorem 3.37 of Rudin's Principles of mathematical analysis, 3rd Ed., says:



For any sequence $\{c_n\}$ of positive numbers, $$\liminf_{n\to\infty}\frac{c_{n+1}}{c_n}\leq\liminf_{n\to\infty}\sqrt[n]{c_n},$$ $$\limsup_{n\to\infty}\sqrt[n]{c_n}\leq\limsup_{n\to\infty}\frac{c_{n+1}}{c_n}.$$




In the present context, this shows that $$\liminf_{n\to\infty}\left(1+\frac{1}{n}\right)^n\leq\liminf_{n\to\infty}\frac{n}{\sqrt[n]{n!}}\leq\limsup_{n\to\infty}\frac{n}{\sqrt[n]{n!}}\leq\limsup_{n\to\infty}\left(1+\frac{1}{n}\right)^n.$$ Assuming you know what $\displaystyle{\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n}$ is, this shows both that the limit in question exists (in case you didn't already know by other means) and what it is.



From the comments: User9176 has pointed out that the case of the theorem above where $\displaystyle{\lim_{n\to\infty}\frac{c_{n+1}}{c_n}}$ exists follows from the Stolz–Cesàro theorem applied to finding the limit of $\displaystyle{\frac{\ln(c_n)}{n}}$. Explicitly, $$\lim_{n\to\infty}\ln(\sqrt[n]{c_n})=\lim_{n\to\infty}\frac{\ln(c_n)}{n}=\lim_{n\to\infty}\frac{\ln(c_{n+1})-\ln(c_n)}{(n+1)-n}=\lim_{n\to\infty}\ln\left(\frac{c_{n+1}}{c_n}\right),$$ provided the latter limit exists, where the second equality is by the Stolz–Cesàro theorem.
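For a numerical illustration (an added sketch; working with logarithms via lgamma avoids overflowing $n!$):

import math

for n in (10, 100, 1000, 10000):
    log_nfact = math.lgamma(n + 1)  # log(n!)
    print(n, math.exp(math.log(n) - log_nfact / n))  # approaches e = 2.71828...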


Limit involving binomial coefficients: $\lim_{n \to \infty} {\left(\binom{n}{0}\cdot\binom{n}{1}\cdots\binom{n}{n}\right)}^{\frac{1}{n(n+1)}}$



I am facing difficulty with the following limit.




$$\lim_{n \to \infty} {\left(\binom{n}{0}\cdot\binom{n}{1}\cdots\binom{n}{n}\right)}^{\frac{1}{n(n+1)}}$$



I tried to take the log of both sides but I could not simplify the resulting expression.



Please help in this regard. Thanks.


Answer



We see that
$$
\prod_{k=0}^n\binom{n}{k}=\frac{n!^{n+1}}{\prod_{k=0}^nk!^2}=\frac{n!^{n+1}}{\left(\prod_{k=0}^nk^{n+1-k}\right)^2}=\frac{H(n)^2}{n!^{n+1}}.
$$

where $H(n)=\prod_{k=1}^nk^k$. Now we see that
$$
\log(H(n))=\sum_{k=1}^nk\log(k)≥\int_{1}^nx\log(x)dx=\frac{n^2}{2}\log(n)-\frac{n^2}{4}
$$
as well as
$$
\log(H(n))=\sum_{k=1}^nk\log(k)≤\int_{1}^{n+1}x\log(x)dx=\frac{(n+1)^2}{2}\log(n+1)-\frac{(n+1)^2}{4}
$$
This gives
$$
-\frac{\log(n)}{2(n+1)}-\frac{n}{4(n+1)}≤\frac{1}{n(n+1)}\log(H(n))-\frac{1}{2}\log(n)=\frac{1}{n(n+1)}\log(H(n))-\frac{1}{2}\log(n+1)+\frac{1}{2}\log(1+1/n)≤\frac{\log(n+1)}{2n}-\frac{n+1}{4n}+\frac{1}{2}\log(1+1/n).
$$
As both the lower and the upper bound tend to $-\frac{1}{4}$ as $n\to\infty$ we get by the squeeze theorem
$$
\lim_{n\to\infty}\left[\frac{1}{n(n+1)}\log(H(n))-\frac{1}{2}\log(n)\right]=-\frac{1}{4}\iff\\
\lim_{n\to\infty}\frac{H(n)^{\frac{1}{n(n+1)}}}{\sqrt{n}}=e^{-\frac{1}{4}}
$$
Using Stirling's approximation we notice
$$
\lim_{n\to\infty}\frac{n!^{\frac{1}{n}}}{n}=e^{-1}
$$
and thus
$$
\lim_{n\to\infty}\left[\prod_{k=0}^n\binom{n}{k}\right]^{\frac{1}{n(n+1)}}=\lim_{n\to\infty}\frac{H(n)^{\frac{2}{n(n+1)}}}{n!^{\frac{1}{n}}}=\lim_{n\to\infty}\left(\frac{H(n)^{\frac{1}{n(n+1)}}}{\sqrt{n}}\right)^2\left(\frac{n}{n!^{\frac{1}{n}}}\right)=(e^{-1/4})^2\cdot\frac{1}{e^{-1}}=\sqrt{e}
$$
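A numerical check of the final value (an added sketch; log-Gamma keeps the binomial coefficients manageable):

import math

def log_binom(n, k):
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

for n in (10, 100, 1000):
    s = sum(log_binom(n, k) for k in range(n + 1))
    print(n, math.exp(s / (n * (n + 1))))  # tends to sqrt(e) = 1.64872...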


trigonometry - How Can One Prove $\cos(\pi/7) + \cos(3\pi/7) + \cos(5\pi/7) = 1/2$




Reference: http://xkcd.com/1047/



We tried various different trigonometric identities. Still no luck.



Geometric interpretation would be also welcome.



EDIT: Very good answers, I'm clearly impressed. I followed all the answers and they work! I can only accept one answer, the others got my upvote.



Answer



Hint: start with $e^{i\frac{\pi}{7}} = \cos(\pi/7) + i\sin(\pi/7)$ and the fact that the left-hand side is a 7th root of $-1$.



Let $u = e^{i\frac{\pi}{7}}$, then we want to find $\Re(u + u^3 + u^5)$.



Then we have $u^7 = -1$ so $u^6 - u^5 + u^4 - u^3 + u^2 -u + 1 = 0$.



Re-arranging this we get: $u^6 + u^4 + u^2 + 1 = u^5 + u^3 + u$.



If $a = u + u^3 + u^5$ then this becomes $u a + 1 = a$, and rearranging this gives $a(1 - u) = 1$, or $a = \dfrac{1}{1 - u}$.




So all we have to do is find $\Re\left(\dfrac{1}{1 - u}\right)$.



$\dfrac{1}{1 - u} = \dfrac{1}{1 - \cos(\pi/7) - i \sin(\pi/7)} = \dfrac{1 - \cos(\pi/7) + i \sin(\pi/7)}{2 - 2 \cos(\pi/7)}$



so



$\Re\left(\dfrac{1}{1 - u}\right) = \dfrac{1 - \cos(\pi/7)}{2 - 2\cos(\pi/7)} = \dfrac{1}{2} $
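And a one-line numerical confirmation (an added sketch):

import math
print(sum(math.cos(k * math.pi / 7) for k in (1, 3, 5)))  # 0.5000...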


functional equations - Examples of functions where $f(ab)=f(a)+f(b)$



What are some examples of continuous (on a certain interval) real or complex functions where $f(ab)=f(a)+f(b)$ (like $\ln x$?)


Answer



Define $g(x)=f(e^x)$. Then
$$g(x+y)=f(e^{x+y})=f(e^xe^y)=f(e^x)+f(e^y)=g(x)+g(y).$$



If $f$ is continuous, so is $g$, and it's a well-known exercise to show that $g$ must then be of the form $g(x)=cx$ for some constant $c$ (see Cauchy's functional equation).




Thus, $\ln(x)$ and constant multiples of it are the only examples of the kind you seek.


Thursday, December 29, 2016

probability - How many random samples needed to pick all elements of set?





If repeatedly picking a random element from a set, what is the expected number of times I'd have to pick before seeing all the elements of the set?



Edit: when picking an element, it is simply counted and not removed from the set, so it can be picked again.


Answer



This is the coupon collector's problem. The expected number of picks required to choose all the elements of the set is $$nH_n = n\sum_{i=1}^n\frac1i.$$
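The formula is easy to evaluate exactly (an added sketch; $n=6$ reproduces the familiar $14.7$ for a fair die):

from fractions import Fraction

def expected_picks(n):
    return n * sum(Fraction(1, i) for i in range(1, n + 1))

for n in (6, 10, 52):
    e = expected_picks(n)
    print(n, e, float(e))  # n = 6 gives 147/10 = 14.7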


complex analysis - Residue Calculus (Computing an Improper Integral)


Use residue calculus to compute the integral $\int_{-\infty}^{\infty}\frac{1}{(z^{2}+25)(z^{2}+16)}dz$





My solution



If we add to the interval $I_{R}=[-R,R]$ the semicircle $\gamma_{R}$ in the upper half plane with centre at the origin and radius $R$, then we obtain a closed contour $\Gamma_{R}$ (which we consider to be oriented in the positive direction) over which the integral of $f(z)=\frac{1}{(z^{2}+25)(z^{2}+16)}$ can be computed using Cauchy's residue theorem.



\begin{align}
\int_{-\infty}^{\infty}\frac{1}{(z^{2}+25)(z^{2}+16)}dz&=\lim_{R \to \infty}\int_{\Gamma_{R}}f(z)dz \\ &=2\pi i \sum_{k=1}^{2}\text{Res}(z_{k}), \quad \Im(z_{k})>0
\end{align}
In this case, we have two simple poles at the points $z_{1}=4i$ and $z_{2}=5i$. The residues are computed as
\begin{align}
\text{Res}(z_{1})&=\frac{1}{(z^{2}+25)\,\frac{d}{dz}(z^{2}+16)}\Big|_{z=4i} \\
&=\frac{1}{(z^{2}+25)\,2z}\Big|_{z=4i} \\
&=\frac{1}{9\cdot8i} \\
&=\frac{1}{72i} \\
&=-\frac{i}{72}
\end{align}
Similarly, we get for $z_{2}$ that
\begin{align}
\text{Res}(z_{2})&=\frac{1}{\frac{d}{dz}(z^{2}+25)\,(z^{2}+16)}\Big|_{z=5i} \\
&=\frac{1}{2z\,(z^{2}+16)}\Big|_{z=5i} \\
&=\frac{1}{10i \cdot (-9)} \\
&=-\frac{1}{90i} \\
&=\frac{i}{90}
\end{align}
Thus, we get
\begin{align}
\int_{-\infty}^{\infty}\frac{1}{(z^{2}+25)(z^{2}+16)}dz &=2\pi i \left(\text{Res}(z_{1})+\text{Res}(z_{2})\right) \\
&=2 \pi i \left(-\frac{i}{360}\right) \\
&=\frac{\pi}{180}
\end{align}




Are there other methods one could use to arrive at this answer?
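Not an alternative analytic method, but a quick numerical cross-check of $\frac{\pi}{180}$ (an added sketch, assuming scipy is available):

import math
from scipy.integrate import quad

val, err = quad(lambda x: 1.0 / ((x**2 + 25) * (x**2 + 16)), -math.inf, math.inf)
print(val, math.pi / 180)  # both ~0.0174533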

elementary set theory - Prove that $^{\mathbb{N}}\mathcal P \left({\mathbb{N}}\right) \sim \mathcal P \left({\mathbb{N}}\right)$

The set of all functions from $ A $ to $ B $ is denoted $ ^{A}B $. Prove that $ ^{\mathbb{N}}\mathcal P \left({\mathbb{N}}\right) \sim \mathcal P \left({\mathbb{N}}\right) $.




Previous question proved that for any set $ A $, $ ^{A}\{yes,no\}\sim \mathcal P \left({A}\right) $. The symbol $ \sim $ means equinumerous to. $\mathbb{N}$ does not include $0$ here. $\mathcal P$ is the power set operation. I know we have to create a bijection between $ ^{\mathbb{N}}\mathcal P \left({\mathbb{N}}\right) $ and $ \mathcal P \left({\mathbb{N}}\right) $. I believe I might be close to a solution, but am looking for some suggestions first.

real analysis - Induction proof showing the Cantor set is closed.



Definition: A set $S \subseteq \mathbb{R}$ is closed if its complement is open.



Let the Cantor set be defined as $C=\bigcap_{k=0}^\infty S_k$ and let $S_0=[0,1]$ and let $S_k$ be defined in the following manner for $k\geq 1$.



\begin{align*}

S_1&=S_0-\left(\frac{1}{3},\frac{2}{3}\right)=\left[0,\frac{1}{3}\right]\cup\left[\frac{2}{3}, 1\right],\\
S_2&=S_1-\left\{\left(\frac{1}{9}, \frac{2}{9}\right)\cup \left(\frac{7}{9}, \frac{8}{9}\right)\right\}=\left[0,\frac{1}{9}\right]\cup\left[\frac{2}{9}, \frac{3}{9}\right]\cup\left[\frac{6}{9}, \frac{7}{9}\right]\cup\left[\frac{8}{9},1\right],\\
S_3&=S_2-\left\{ \left(\frac{1}{27}, \frac{2}{27}\right)\cup \left(\frac{7}{27}, \frac{8}{27}\right)\cup \left(\frac{19}{27}, \frac{20}{27}\right)\cup \left(\frac{25}{27}, \frac{26}{27}\right) \right\}\\
&=\left[0, \frac{1}{27}\right]\cup\left[ \frac{2}{27}, \frac{3}{27}\right]\cup\left[ \frac{6}{27}, \frac{7}{27}\right]\cup\left[ \frac{8}{27}, \frac{9}{27}\right]\cup\left[ \frac{18}{27}, \frac{19}{27}\right]\cup\left[ \frac{20}{27}, \frac{21}{27}\right]\cup\left[ \frac{24}{27}, \frac{25}{27}\right]\cup\left[ \frac{26}{27}, 1\right]\\
\vdots
\end{align*}



The ultimate goal is to show that the cantor set is compact. I know that it is bounded between $[0,1]$, but I need to use an induction proof to show that it is closed.



I have only gotten as far as the base case. The complement for the interval $[0,1]$ is the union of open sets $(-\infty,0)\cup (1,\infty)$. Hence the interval $[0,1]$ is closed.




How would one set up this induction?


Answer



If you look at the actual construction of $S_{k+1}$ from $S_k$, you’ll see that $S_{k+1}=S_k\setminus U_k$, where $U_k$ is the union of a certain finite collection of open intervals in $[0,1]$. Thus, $S_{k+1}=S_k\cap\Big([0,1]\setminus U_k\Big)$, the intersection of two closed sets, and so must itself be closed. That’s the induction step in outline right there; all you have to do is fill in a few details about $U_k$.



Once you’ve shown by induction that all of the sets $S_k$ are closed, you just use the fact that any intersection of closed sets is automatically a closed set.
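For concreteness, here is a small sketch (not part of the proof) that generates the closed intervals making up $S_k$, mirroring the step $S_{k+1}=S_k\setminus U_k$:

from fractions import Fraction

def next_stage(intervals):
    out = []
    for lo, hi in intervals:
        third = (hi - lo) / 3
        out += [(lo, lo + third), (hi - third, hi)]  # drop the open middle third
    return out

S = [(Fraction(0), Fraction(1))]
for _ in range(3):
    S = next_stage(S)
print(S)  # the 8 closed intervals of S_3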


calculus - Alternative way to prove $\lim_{n\to\infty}\frac{2^n}{n!}=0$?


It follows easily from the convergence of $\sum_{n=0}^\infty\frac{2^n}{n!}$ that $$ \lim_{n\to\infty}\frac{2^n}{n!}=0\tag{1} $$ Other than Stirling's formula, are there any "easy" alternatives to show (1)?


Answer



Yes: note that $$ 0\leq \frac{2^n}{n!}\leq 2\Big(\frac{2}{3}\Big)^{n-2}$$ for $n\geq 3$, and then use the squeeze theorem.
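The bound comes from writing $\frac{2^n}{n!}=\frac21\cdot\frac22\cdot\frac23\cdots\frac2n\le 2\left(\frac23\right)^{n-2}$ for $n\ge 3$; a quick numerical look (an added sketch):

import math

for n in range(3, 15):
    lhs = 2 ** n / math.factorial(n)
    rhs = 2 * (2 / 3) ** (n - 2)
    assert 0 <= lhs <= rhs
    print(n, lhs, rhs)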


algebra precalculus - Find remainder of division of $x^3$ by $x^2-x+1$




I am stuck at my exam practice here.





The remainder of the division of $x^3$ by $x^2-x+1$ is ..... and that of $x^{2007}$ by $x^2-x+1$ is .....




I tried the polynomial remainder theorem but I am not sure if I did it correctly.



By factor theorem definition, provided by Wikipedia,





the remainder of the division of a polynomial $f(x)$ by a linear polynomial $x-r$ is equal to $f(r)$.




So I attempted to find $r$ by factorizing $x^2-x+1$ first but I got the complex form $x=\frac{1\pm\sqrt{3}i}{2}=r$.



$f(r)$ is then $(\frac{1+\sqrt{3}i}{2})^3$ or $(\frac{1-\sqrt{3}i}{2})^3$ which do not sound right.



However, the answer key provided is $-1$ for the first question and also $-1$ for the second one. Please help.


Answer



Since $$x^3+1 = (x+1)(x^2-x+1)$$ so $$x^3 = (x+1)(x^2-x+1)-1$$ the answer is $-1$.




Similarly for \begin{eqnarray}x^{3n}+1 &=& (x^3+1)\underbrace{\Big((x^3)^{n-1}-(x^3)^{n-2}+...-(x^3)+1\Big)}_{q(x)}\\
&=& (x+1)(x^2-x+1)q(x)\\
\end{eqnarray}



so the answer is again $-1$.
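Both remainders can be machine-checked (an added sketch, assuming sympy is available):

from sympy import symbols, rem

x = symbols('x')
print(rem(x**3, x**2 - x + 1, x))     # -1
print(rem(x**2007, x**2 - x + 1, x))  # -1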


real analysis - If $f\colon \mathbb{R} \to \mathbb{R}$ is such that $f (x + y) = f (x) f (y)$ and continuous at $0$, then continuous everywhere




Prove that if $f\colon\mathbb{R}\to\mathbb{R}$ is such that $f(x+y)=f(x)f(y)$ for all $x,y$, and $f$ is continuous at $0$, then it is continuous everywhere.




If there exists $c \in \mathbb{R}$ such that $f(c) = 0$, then

$$f(x + c) = f(x)f(c) = 0.$$
As every real number $y$ can be written as $y = x + c$ for some real $x$, this function is either everywhere zero or nowhere zero. The latter case is the interesting one. So let's consider the case that $f$ is not the constant function $f = 0$.



To prove continuity in this case, note that for any $x \in \mathbb{R}$
$$f(x) = f(x + 0) = f(x)f(0) \implies f(0) = 1.$$



Continuity at $0$ tells us that given any $\varepsilon_0 > 0$, we can find $\delta_0 > 0$ such that $|x| < \delta_0$ implies
$$|f(x) - 1| < \varepsilon_0.$$



Okay, so let $c \in \mathbb{R}$ be fixed arbitrarily (recall that $f(c)$ is nonzero). Let $\varepsilon > 0$. By continuity of $f$ at $0$, we can choose $\delta > 0$ such that

$$|x - c| < \delta\implies |f(x - c) - 1| < \frac{\varepsilon}{|f(c)|}.$$



Now notice that for all $x$ such that $|x - c| < \delta$, we have
$$\begin{align*}
|f(x) - f(c)| &= |f(x - c + c) - f(c)|\\
&= |f(x - c)f(c) - f(c)|\\
&= |f(c)| |f(x - c) - 1|\\
&\lt |f(c)| \frac{\varepsilon}{|f(c)|}\\
&= \varepsilon.
\end{align*}$$

Hence $f$ is continuous at $c$. Since $c$ was arbitrary, $f$ is continuous on all of $\mathbb{R}$.



Is my procedure correct?


Answer



One easier thing to do is to notice that $f(x)=(f(x/2))^2$, so $f$ is non-negative, and to assume that it is never zero, since if $f$ vanishes anywhere then the function is identically zero. Then you can define $g(x)=\ln f(x)$, and this function $g$ will satisfy the Cauchy functional equation
$$ g(x+y)=g(x)+g(y)$$
and the theory for this functional equation is well known, and it is easy to see that $g$ is continuous if and only if it is continuous at $0$.


sequences and series - Is 1 divided by 3 equal to 0.333...?





I have been taught that $\frac{1}{3}$ is 0.333.... However, I believe that this is not true, as 1/3 cannot actually be represented in base ten; even if you had infinite threes, as 0.333... is supposed to represent, it would not be exactly equal to 1/3, as 10 is not divisible by 3.



0.333... = 3/10 + 3/100 + 3/1000...



This occurred to me during a discussion of one of Zeno's paradoxes. We were talking about one potential solution to the race between Achilles and the Tortoise. The solution stated that it would take Achilles $11\frac{1}{9}$ seconds to pass the tortoise, as 0.111... = 1/9. However, this isn't the case, as, no matter how many ones you add, 0.111... will never equal precisely $\frac{1}{9}$.



Could you tell me if this is valid, and if not, why not? Thanks!



I'm not arguing that $0.333...$ isn't the closest that we can get in base 10; rather, I am arguing that, in base 10, we cannot accurately represent $\frac{1}{3}$


Answer



Here is a simple reasoning that $1/3=0.3333...$.



Lets denote $0.333333......$ by $x$. Then




$$x=0.33333.....$$
$$10x=3.33333...$$



Subtracting we get $9x=3$. Thus $x=\frac{3}{9}=\frac{1}{3}$.



Since $x$ was chosen as $0.3333....$ it means that $0.3333...=\frac{1}{3}$.
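The same subtraction trick survives exact rational arithmetic (an added sketch): the partial sums $0.3, 0.33, 0.333, \ldots$ differ from $\frac13$ by exactly $\frac{1}{3\cdot 10^n}$, which tends to $0$.

from fractions import Fraction

for n in (1, 5, 10):
    partial = sum(Fraction(3, 10**k) for k in range(1, n + 1))
    print(n, partial, Fraction(1, 3) - partial)  # gap = 1/(3*10^n)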


Wednesday, December 28, 2016

complex analysis - Intuition behind euler's formula











Hi, I've been curious for quite a long time whether it is actually possible to have an intuitive understanding of Euler's apparently magical formula: $$e^{ \pm i\theta } = \cos \theta \pm i\sin \theta$$



I've obviously seen the Taylor series/differential equation based proofs, and perhaps I'm just going to have to accept that it's not possible to have an intuition on what it means to raise a number to an imaginary power. I obviously realise that the formula implies that an exponential with a variable imaginary part can be visualised as a complex function going around in a unit circle about the origin of the complex plane. But WHY is this? And why is $e$ so special that it moves at just a fast enough rate so that the argument of the exponential is equal to the arc length of the path made by the locus (i.e. the angle in radians we've moved around the circle)? Is there anyone out there who can 'understand' this?



Thankyou!


Answer



If I recall from reading Analysis of the Infinite (very nice book, at least Volume $1$ is), Euler got it from looking at

$$\left(1+\frac{i}{\infty}\right)^{\infty}$$
whose expansion is easy to find using the Binomial Theorem with exponent $\infty$.



There is a nice supposed quote from Euler, which can be paraphrased as "Sometimes my pencil is smarter than I am." He freely accepted the results of his calculations. But of course he was Euler.
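One can watch Euler's expression converge numerically (an added sketch; $\theta=1$ is an arbitrary test value):

import cmath

theta = 1.0
for n in (10, 1000, 100000):
    print(n, (1 + 1j * theta / n) ** n)
print(cmath.exp(1j * theta))  # cos(1) + i sin(1) = 0.5403... + 0.8415...i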


Tuesday, December 27, 2016

sequences and series - How do I evaluate this sum: $\sum_{n=1}^{\infty}\frac{{(-1)}^{n-1}\log n}{n^s}$?

How do I evaluate this sum :$$ \sum_{n=1}^{\infty}\frac{{(-1)}^{n-1}\log n}{n^s}$$



Note: according to Wolfram Alpha it is convergent for $\operatorname{Re}(s)>1$.



Thank you for any help

Monday, December 26, 2016

analysis - Riemann zeta function, representation as a limit

is it true that $$\displaystyle \zeta(s) = \ \lim_{\scriptstyle a \to 0^+}\ 1 + \sum_{m=1}^\infty e^{\textstyle -s m a } \left\lfloor e^{\textstyle(m+1)a} - e^{\textstyle m a} \right\rfloor$$ my proof :



\begin{equation}F(z) = \zeta(-\ln z) = \sum_{n=1}^\infty z^{\ln n}\end{equation}



which is convergent for $|z| < \frac{1}{e}$. Now I consider the functions:



$$\tilde{F}_a(z) = \sum_{n=1}^\infty z^{a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor } = 1 + \sum_{m=1}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $$



because $\displaystyle\lim_{ a \to 0^+} a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor = \ln n$, we get that :




$$\lim_{\scriptstyle a \to 0^+} \ \tilde{F}_a(z) = \sum_{n=1}^\infty z^{\ln n} = \zeta(-\ln z)$$






(details)



$\displaystyle\sum_{m=0}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $
is also convergent for $z < \frac{1}{e}$ because $\displaystyle\sum_{m=0}^\infty (z^a e^a)^{m}$ is convergent for $z < \frac{1}{e}$ and $\displaystyle\sum_{m=0}^\infty z^{am} \left\{e^{a(m+1)} - e^{a m} \right\} $ is convergent for $z < 1$.



to justify $\displaystyle\sum_{n=1}^\infty z^{a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor } = 1 + \sum_{m=1}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $ : if $\left\lfloor \frac{\ln n}{a} \right\rfloor = m \ne 0$ then

$\displaystyle\frac{\ln n}{a} \in [m, m+1[ \implies n \in [ e^{am}, e^{a(m+1)}[ $ . how many different $n$'s is that ? $\left\lfloor e^{a(m+1)} - e^{am} \right\rfloor $.

derivatives - A limit with an intuitive and wrong answer




In my last question I asked about a limit used in my exploration of tangent circles and whatnot.



I decided to come up with a more direct approach to my problem, and now I only have to evaluate the limit
$$ \lim_{d\to x} \frac{\dfrac{f(x)-f(d)}{x-d}-f'(d)}{x-d}$$



Intuition would yield the answer is the second derivative of $f$ at $x$. However, by expanding and whatnot and then using l'Hôpital, as well as plugging in some sample $f$'s, I arrive at half of the second derivative. Why is my intuition wrong?


Answer



The factor you're missing is because of the $1/2$ that arises whenever you expand to within second order:



$$f(x) = f(d) + f'(d) (x-d) + 1/2 f''(d) (x-d)^2 + o((x-d)^2).$$




Substitute and you have



$$\frac{f'(d) + 1/2 f''(d) (x-d) + o(x-d) - f'(d)}{x-d} = 1/2 f''(d) + o(1).$$



This should explain your results. egreg's answer shows a way to see this using only L'Hopital's rule; here the $2$ arises from the derivative of $(x-d)^2$ in the denominator.



Here is some clarification of notation, in case you're not familiar with it already. In the above, $o(g(x))$ is a replacement for a function, which I am not specifying, and which has the property $\lim_{x \to d} \frac{o(g(x))}{g(x)} = 0$. This is called "little oh notation", and it is fairly standard.
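For a concrete $f$ the factor of $\frac12$ can be confirmed symbolically (an added sketch, assuming sympy is available; here $d = x - h$, so $d \to x$ becomes $h \to 0$):

from sympy import symbols, sin, cos, limit

x, h = symbols('x h')
expr = ((sin(x) - sin(x - h)) / h - cos(x - h)) / h
print(limit(expr, h, 0))  # -sin(x)/2, i.e. f''(x)/2 for f = sin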


probability - How many throws of a die until every possible result is obtained?






A die is thrown until every possible result (i.e., every integer from 1 to 6) is obtained. Find the expected value of the number of throws.




How do I do that? I understand that the probabilities of obtaining a new result are $1, 5/6, \ldots , 1/6$, but what about the expected value?


Answer



This is a very popular problem. I learned it as the "collector's problem".



Essentially, you want to model rolling a die until a new face is shown
as a geometric distribution with $p_k = \frac{7-k}{6}$ where $k = 1,\dotsc,6$ is the number of faces you have seen. So, if $X_k$ denotes rolling until you see $k$th different face, then $X_k\sim\text{Geom}(p_k)$ on $\{1,2,3,\dots\}$. It follows that $X = X_1+\dotsb+X_6$ is the number of rolls until you have seen all six faces. Then

$$E[X] = E[X_1]+E[X_2]+\dotsb+E[X_6] = \frac{6}{6}+\frac{6}{5}+\dotsb+\frac{6}{1}=14.7.$$
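A short simulation agrees (an added sketch; $10^5$ trials is an arbitrary choice):

import random

def rolls_until_all_faces():
    seen, count = set(), 0
    while len(seen) < 6:
        seen.add(random.randint(1, 6))
        count += 1
    return count

trials = 10**5
print(sum(rolls_until_all_faces() for _ in range(trials)) / trials)  # ~14.7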


algebra precalculus - Why is $\sqrt{8}/2$ equal to $\sqrt{2}$?



I am trying to help my daughter with her math homework and I am having some trouble with some equation-solving steps. My main difficulty is understanding why $\sqrt{8}/2$ is equal to $\sqrt{2}$.



Thanks in advance


Answer




It isn't, as originally written. To see why the fixed version is correct, we have:



$$\frac{\sqrt{8}}2=\frac{\sqrt{4\cdot2}}2=\frac{\sqrt{4}\sqrt{2}}2=\frac{2\sqrt{2}}2=\sqrt{2}.$$


summation - How to represent the sum of matrix elements given all permutations of a set of indices?




I would like to represent the sum of all matrix elements over all permutations of indices drawn from a given set. For example, given the set $S=\{1,2,3\}$ I would like to compactly express $w_{1,2,3}+w_{1,3,2}+w_{2,1,3}+w_{2,3,1}+w_{3,1,2}+w_{3,2,1}$, where $w_{i,j,k}$ is any element of a three-dimensional matrix $W$.



I came up with this function $P(S)=\sum_{i_1}^{S}\sum_{i_2}^{S\setminus{\{i_1\}}}\sum_{i_3}^{S\setminus{\{i_1,i_2\}}}\ldots\sum_{i_M}^{S\setminus{\{i_1,i_2,\ldots, i_{M-1}\}}}w_{i_1,i_2,\ldots,i_M}$, where $M$ is the number of elements of set $S$, $w_{i_1,i_2,\ldots,i_M}\in W$, and $W$ is an $M$-dimensional matrix. The problem is that I find it quite inelegant.


Answer



Mimic the definition of the determinant: if $\mathfrak{S}_n$ is the group of all permutations of the $n$ items, the quantity you're looking for is $$\sum_{\sigma \in \mathfrak{S}_n} w_{\sigma(1), ..., \sigma(n) }$$



Example with $n=3$ : with the usual representation of permutations, $\mathfrak{S}_3 = \{\mathrm{id}, (12), (13), (23), (123), (132)\}$, so
$$P(S) = w_{123} + w_{213} + w_{321} + w_{132} + w_{312} + w_{231} $$



(if you're not familiar with $\mathfrak{S}_n$, just take a quick look at the beginning of the wikipedia page https://en.wikipedia.org/wiki/Symmetric_group, sections 1 and 2)
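In code the same sum is a short loop over $\mathfrak{S}_n$ (an added sketch; $W$ is a random $3\times3\times3$ array standing in for the weights, and indices are $0$-based here):

import itertools
import numpy as np

n = 3
W = np.random.rand(n, n, n)

P = sum(W[sigma] for sigma in itertools.permutations(range(n)))
print(P)  # w_{123} + w_{213} + ... summed over all 3! index orders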



linear algebra - Block matrix characteristic polynomial



Let $n \in \Bbb N^*$ and $A \in \cal M_n(\Bbb R)$ a square matrix. Let the block matrix $B=\begin{pmatrix}A&A\\A&A \end{pmatrix} \in \cal M_{2n}(\Bbb R)$



1) Calculate the characteristic polynomial $\chi_B$ using $\chi_A$.




2) Prove that $A$ is diagonalizable $\iff$ $B$ is diagonalizable.


Answer



For the first part: we have
$$
tI_{2n} - B =
\pmatrix{tI_n - A & -A\\ -A & tI_n - A}
$$
Noting that these matrices commute (or in particular, that the lower two matrices commute), we have
$$
\begin{align}
\det(tI_{2n} - B) &= \det((tI_n - A)^2-A^2)\\
&= \det[((tI_n - A)-A)((tI_n - A)+A)]\\
&= \det((tI_n - A)-A) \det((tI_n - A)+A)\\
&= \det(tI_n - 2A) \det(tI_n)\\
&= \det\big(2((t/2)I_n - A)\big) \det(tI_n)\\
&= 2^n \det((t/2)I_n - A)\, t^n\\
&= 2^n t^n \chi_A\left(\frac t2\right)
\end{align}
$$




Second part: I haven't put it all together nicely, but here are some potentially helpful thoughts



Thought 1: If $B$ is diagonalizable, then $B$ has $2n$ linearly independent eigenvectors. Now, let
$$
v = \pmatrix{v_1\\v_2}
$$
be an eigenvector of $B$, with $v_1,v_2 \in \mathbb{C}^n$. What can we say about the vectors $v_1$ and $v_2$? Remember that $B$ has the eigenvalue $0$ of multiplicity $n + \operatorname{null}(A)$.



Thought 2:

$$
\pmatrix{I & I\\ I& -I} \pmatrix{A & A\\ A & A}\pmatrix{I& I\\I & -I} =
\pmatrix{4A & 0\\0 & 0}
$$
Or, in particular,
$$
\frac{1}{2}\pmatrix{I & I\\ I& -I} \pmatrix{A & A\\ A & A}
\pmatrix{I& I\\I & -I} = \\
\left(\frac{1}{\sqrt{2}}\pmatrix{I & I\\ I& -I}\right)^{-1} \pmatrix{A & A\\ A & A} \frac{1}{\sqrt{2}}\pmatrix{I& I\\I & -I} =
\pmatrix{2A & 0\\0 & 0}
$$
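Part 1 can be sanity-checked numerically (an added sketch with a random $A$): by $\chi_B(t)=2^n t^n\chi_A(t/2)$, the eigenvalues of $B$ should be $2\lambda_1,\dots,2\lambda_n$ together with $n$ zeros.

import numpy as np

n = 4
A = np.random.rand(n, n)
B = np.block([[A, A], [A, A]])

print(np.sort_complex(2 * np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B)))  # same values plus n (near-)zeros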


analysis - Measure theory limit question





Let $(X, \cal{M},\mu)$ be a finite positive measure space and $f$ a $\mu$-a.e. strictly positive measurable function on $X$. If $E_n\in\mathcal{M}$, for $n=1,2,\ldots $ and $\displaystyle \lim_{n\rightarrow\infty} \int_{E_n}f d\mu=0$, prove that $\displaystyle\lim_{n\rightarrow\infty}\mu(E_n)=0$.



Answer



Since $f$ is almost everywhere strictly positive, the increasing sequence of sets $$A_n=\{x\in X:f(x)>1/n\}$$
has the property that $$\lim_{n\to\infty} \mu(A_n)=\mu(X).$$ Now $\int_E f ~d\mu \geq \frac{1}{n}\mu(E\cap A_n)$, so $\int_E f~d\mu<\epsilon/n$ implies $\mu(E\cap A_n)<\epsilon$ for any $\epsilon>0$.



So let $\epsilon>0$. Choose $n$ so that $\mu(X\backslash A_n)<\epsilon/2$. For $N$ large enough, $\int_{E_N}f~d\mu<\epsilon/(2n)$ and hence $$\mu(E_N\cap A_n)\leq n\int_{E_N}f~d\mu<\frac{\epsilon}{2},$$ so that $\mu(E_N)\leq\mu(E_N\cap A_n)+\mu(X\backslash A_n)<\epsilon$.
Sunday, December 25, 2016

abstract algebra - Powerset bijection problem

Please do not provide a full answer for this.



Let $2^{S} = \{f : S \rightarrow \{0, 1\}\}$. For $A \subseteq S$, define $\chi_{A}\in2^{S}$ by
$$\chi_{A}(s) =
\begin{cases}
0 & \text{if } s \notin A \\
1 & \text{if } s \in A
\end{cases}. $$
Show that $\mu : P(S)\rightarrow2^{S}$ given by $\mu(A)=\chi_{A}$ is a bijection.



I know that the standard procedure for showing that a function is bijective is to show that it is both injective and surjective, and the "standard procedures" for those as well. It's just that I don't really know where to start with this.

sequences and series - Is the sum of all natural numbers $-\frac{1}{12}$?




My friend showed me this youtube video in which the speakers present a line of reasoning as to why
$$
\sum_{n=1}^\infty n = -\frac{1}{12}
$$



My reasoning, however, tells me that the previous statement is incorrect:
$$
\sum_{n=1}^\infty n = \lim_{k \to \infty} \sum_{n=1}^k n = \lim_{k \to \infty}\frac{k(k+1)}{2} = \infty
$$



Furthermore, how can it be that the sum of a set of integers is not an integer? Even more, how can the sum of a set of positive numbers be negative? These two ideas lead me to think of inductive proofs as to why the first statement is incorrect.



Which of the two lines of reasoning is correct and why? Are there any proven applications (i.e. non theoretical) which use the first statement?


Answer



It is a matter of definition. We normally say that a series such as $1-1+1-1+\cdots$ does not converge. It has no value in the limit. If you change the definition of convergence by assigning the value of $1/2$ to that series, then you can expect to get very odd result. Is it useful to do that? Evidently the answer is yes in some applications.


calculus - How to solve certain types of integrals


I'm asking for a walk through of integrals in the form: $$\int \frac{a(x)}{b(x)}\,dx$$ where both $a(x)$ and $b(x)$ are polynomials in their lowest terms. For instance $$\int \frac{x^3+2x}{x^2+1}\,dx$$


Is there a trick to doing these? or will I have to integrate by a clever substitution?


Answer



The first step is generally to divide out the integrand so as to get a polynomial plus a rational function whose numerator has lower degree than its denominator. Here you get


$$\frac{x^3+2x}{1+x^2}=x+\frac{x}{x^2+1}\;.$$


Integrating the $x$ term (and, more generally, the polynomial quotient) is easy, so we’ve reduced the problem to integrating something of the form $\frac{p(x)}{q(x)}$, where $p(x)$ and $q(x)$ are polynomials, and the degree of $p(x)$ is less than the degree of $q(x)$. The general solution for such problems is partial fractions; here, however, we’re more fortunate, because the numerator $x$ is a constant multiple of the derivative of the denominator. If you substitute $u=x^2+1$, you find that $du=2x\,dx$, so that $x\,dx=\frac12du$, and



$$\int\frac{x}{x^2+1}\,dx=\frac12\int\frac{du}u\;,$$


which is a standard, basic integral. I would not call this a clever substitution: recognizing that the numerator of a fraction is a constant multiple of the derivative of the denominator is a standard technique.


Suppose that the original numerator had been $x^3+x+2$. Again we do the division to get a polynomial plus a ‘proper’ rational function:


$$\frac{x^3+x+2}{x^2+1}=x+\frac2{x^2+1}\;.$$


This time you should recognize that $\frac2{x^2+1}$ is just twice the derivative of $\tan^{-1}x$, again a standard integration.


Finally, suppose that the original fraction had been


$$\frac{x^3+2x^2-x+1}{x^2+2x}=x+\frac{1-x}{x^2+2x}\;.$$


This time you might as well simply reduce the remaining fraction to partial fractions:


$$\frac{1-x}{x(x+2)}=\frac{A}x+\frac{B}{x+2}\;,$$


so $A(x+2)+Bx=1-x$, $(A+B)x+2A=1-x$, $A+B=-1$, and $2A=1$, so $A=\frac12$, and $B=-\frac32$. Thus,



$$\frac{1-x}{x^2+2x}=\frac12\left(\frac1x-\frac3{x+2}\right)\;,$$


leaving you with two easy integrations.
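The three examples can be machine-checked (an added sketch, assuming sympy is available; the printed forms may differ cosmetically):

from sympy import symbols, apart, integrate

x = symbols('x')
print(apart((x**3 + 2*x) / (x**2 + 1)))           # x + x/(x**2 + 1)
print(integrate((x**3 + x + 2) / (x**2 + 1), x))  # x**2/2 + 2*atan(x)
print(apart((1 - x) / (x**2 + 2*x)))              # 1/(2*x) - 3/(2*(x + 2))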


discrete mathematics - Strong mathematical induction without basis step



I'm reading about strong mathematical induction in Susanna Epp's Discrete Mathematics, and here's the principle as it's stated in the textbook:





  1. P(a), P(a + 1), . . . , and P(b) are all true. (basis step)


  2. For any integer k ≥ b, if P(i) is true for all integers i from a through k, then P(k + 1) is true. (inductive step)




The principle is followed by the text that's confusing me:




Strictly speaking, the principle of strong mathematical induction can be written without a basis step if the inductive step is changed to “∀k ≥ a − 1, if P(i) is true for all integers i from a through k, then P(k + 1) is true.” The reason for this is that the statement “P(i) is true for all integers i from a through k” is vacuously true for k = a−1. Hence, if the implication in the inductive step is true, then the conclusion P(a) must also be true,∗ which proves the basis step.

∗If you have proved that a certain if-then statement is true and if you also know that the hypothesis is true, then the conclusion must be true.




I understand why $k = a − 1$ makes the statement $\forall i \in \mathbb Z \,((a \leq i \leq k) \to P(i)) $ vacuously true, but can't grasp why replacing $k \geq b$ (and hence $k \geq a$ since $b \geq a$) with $k \geq a-1$ proves the basis step implicitly. Why is it?


Answer




Because the statements $ P(a), ..., P(a-1) $ are all true, since there are no statements in this list. The author is using a somewhat confusing but common convention with ellipses: when you list $ firstElement ... lastElement $ where $lastElement$ precedes $firstElement$, you interpret that as an empty list. I will add, the author should have written the basis step as "for all $i$ with $a \leq i \leq b $, $P(i)$ is true," so that the interval of integers was more clear.


Saturday, December 24, 2016

calculus - How do I find $\lim\limits_{x \to 0} x\cot(6x)$ without using L'Hôpital's rule?

How do I find $\lim\limits_{x \to 0} x\cot(6x)$ without using L'Hôpital's rule?



I'm a freshman in college in the 3rd week of a calculus 1 class. I know that $\lim\limits_{x \to 0} x\cot(6x) = \frac{1}{6}$ by looking at the graph, but I'm not sure how to get there without using L'Hôpital's rule.




Here is how I solved it (and got the wrong answer). Hopefully someone could tell me where I went wrong.



$\lim\limits_{x \to 0} x\cot(6x) = (\lim\limits_{x \to 0} x) (\lim\limits_{x \to 0}\cot(6x)) = (0)\left(\lim\limits_{x \to 0}\frac{\cos(6x)}{\sin(6x)}\right) = (0)\left(\frac{1}{0}\right). $ Therefore the limit does not exist.



I am also unsure of how to solve $\lim\limits_{x \to 0} \frac{\sin(5x)}{7x^2} $ without using L'Hôpital's rule.

geometry - Pythagorean "Paradox" (right-angled triangle).

[Figure: an isosceles right triangle with legs $a$, $b$ and hypotenuse $c$, whose hypotenuse is approximated by staircases of $n=1,2,\ldots$ steps.]




Consider an isosceles right-angled triangle as shown in the figure (top left). The length of its hypotenuse is $c$. The figure distinguishes both legs of the triangle, however, from now on let's assume, since it's an isosceles right-angled triangle, that $b=a$. Now, let's build a stair on the hypotenuse with steps of height $\frac{a}{n}$, and width $\frac{a}{n}$. If $n=1$ we have the picture of the figure (top centre), in which $d=a$ and $e=a$. Here the length of the stair is $d+e=2a$.



Notation: I will refer to each step in the stair as $s_k$ for some $k\in\mathbb N$ with $0 < k \leq n$.

If we continue doing the same procedure, we have for each $n\in\mathbb N$ that the length $\ell(n)$ of the stair is:

$$\ell(n)=\sum\limits_{k=1}^n \text{length}(s_k) = \sum\limits_{k=1}^n2\frac{a}{n}=2a.$$



If we build a stair with infinitely small steps, why don't we end up with a straight line? Because if we did, we would be saying that $c=2a$, and by the pythagorean theorem we know that $c=\sqrt{2}a$. I appreciate your thoughts.

linear algebra - Let $A,B$ be $m \times n$ and $n \times m$ matrices, respectively. Prove that if $m > n$, $AB$ is not invertible



We haven't done anything about rank or dimensions or linear dependence / basis or determinants. Possible related facts :





  1. A matrix is invertible iff it is bijective as a linear transformation.


  2. An invertible matrix is row-equivalent to the identity matrix.


  3. A matrix has a right inverse iff it has a left inverse.




Also, invertability is only defined for square matrices.


Answer



Since $A$ is an $m\times n$ matrix and $B$ is an $n\times m$ matrix, the product $AB$ is an $m\times m$ matrix. Suppose $m>n$; then the operator associated to left multiplication by $B$ is not injective, because there are more columns than rows, so the kernel is always nontrivial (i.e. there are more column vectors than there are entries in those column vectors, so they must be linearly dependent). Said another way: the linear operator $T:\Bbb R^m\to\Bbb R^n$ with $T(v)=Bv$ is not injective because $m>n$, so the domain has higher dimension than the codomain. So there exist vectors $v,w\in\Bbb R^m$ such that $Bv=Bw$ but $v\neq w$.




Spoiler:




Thus, $$Bv=Bw\implies A(Bv)=A(Bw)\implies (AB)v=(AB)w$$ but $v\neq w$. Hence, the operator associated with left multiplication by $AB$ is not injective.
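A numerical illustration (an added sketch): with $m>n$ the product $AB$ is $m\times m$ but has rank at most $n$, so it is singular.

import numpy as np

m, n = 5, 3
A = np.random.rand(m, n)
B = np.random.rand(n, m)
print(np.linalg.det(A @ B))          # ~0 up to rounding
print(np.linalg.matrix_rank(A @ B))  # at most n = 3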



linear algebra - How to find the eigenvalues of the following matrix:



Is there any way to find the eigenvalues of the following matrix:



$A_{2n\times 2n}=$
\begin{bmatrix}\textbf{0} & E_{n\times n}\\E^T&\textbf{0}\end{bmatrix}



where $E=$
\begin{bmatrix}1&1&\cdots&1\\2&2&\cdots&2\\\vdots&\vdots&&\vdots\\2&2&\cdots&2\end{bmatrix}




My try:




I find that rows of $E$ are linearly dependent.
Also every row of $E$ is just a scalar multiple of the first row.



So I was guessing that $0$ may occur as an eigenvalue many times.



What are some methods to calculate the characteristic polynomial?



Can someone kindly help?

elementary number theory - Prove that $\gcd(a+b, a-b) = \gcd(2a, a-b) = \gcd(a+b, 2b)$



Question:



Prove that $\gcd(a+b, a-b) = \gcd(2a, a-b) = \gcd(a+b, 2b)$



My attempt:




First we prove $\gcd(a+b, a-b) = \gcd(2a, a-b)$.



Let $ d = \gcd(a+b, a-b)$ and $ \ e = \gcd(2a, a-b)$



$ d|a+b$ and $ \ d|a-b \implies \exists m,n \in \mathbb{Z}$ such that $ \ a+b = dm$ and $\ a-b = dn \implies a + a -dn = dm \implies 2a = dm + dn \implies 2a = d(m+n) \implies d|2a$



So, $ \ d|a-b$ and $ \ d |2a \implies d\le e$



$e|2a$ and $ \ e|a-b \implies \exists m,n \in \mathbb{Z}$ such that $ \ 2a = em$ and $\ a-b = en \implies 2a - (a-b) = a + b = em-en = e(m-n) \implies e|(a+b)$




So, $ \ e|(a-b)$ and $ \ e |(a+b) \implies e\le d$



Hence $ e =d$



Similarly, if I prove that $\gcd(a+b, a-b) = \gcd(a+b, 2b)$, will the proof be complete?



I am not quite sure if this is the correct way to prove the problem. My professor used a similar approach to prove that $ \ \gcd(a,b) = \gcd( a- kb, b)$.


Answer



Your approach is correct. However a few steps can be shortened.




Let $d=\gcd(a+b,a-b)$, then $d | a+b$ and $d | a-b$, thus $d$ divides all linear combinations of $a+b$ and $a-b$, in particular $d|(a+b)-(a-b)=2b$ and $d|(a+b)+(a-b)=2a$. Thus $d$ divides both $2a$ and $2b$.



Now you can take it from here.
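A quick spot-check of the full identity over random pairs (an added sketch, not a proof; Python's math.gcd ignores signs):

import math, random

for _ in range(1000):
    a, b = random.randint(1, 10**6), random.randint(1, 10**6)
    assert math.gcd(a + b, a - b) == math.gcd(2 * a, a - b) == math.gcd(a + b, 2 * b)
print("ok")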


calculus - limit of Hölder norms: $\sup\limits_{x\in [a,b]} f(x) = \lim\limits_{n\rightarrow\infty} \left(\int_a^b (f(x))^n \;dx\right)^{\frac{1}{n}}$


Show that $$\sup_{x\in [a,b]} f(x) = \lim_{n\rightarrow\infty} \left(\int_a^b (f(x))^n \;dx\right)^{\frac{1}{n}}$$ for $f$ continuous and positive on $[a,b]$. I can show that the LHS is greater than or equal to the RHS, but I can't show the other direction.


Answer



Let $F = \sup_{\xi\in[a,b]} f(\xi)$. If $F=0$, the equality is trivial. So we may suppose $F>0$.


Choose $\epsilon \in (0,F)$, and let $A_\epsilon = \{x | f(x) \geq F-\epsilon\}$. Since $f$ is continuous, we have $mA_\epsilon > 0$ ($m$ is the Lebesgue measure). Then the following is true (work from the middle towards the left or right to get the relevant bound):



$$mA_\epsilon (F-\epsilon)^n \leq \int_{A_\epsilon} (F-\epsilon)^n dx \leq I^n_n = \int_a^b (f(x))^n dx \leq \int_a^b F^n dx \leq (b-a) F^n$$


This gives $\sqrt[n]{mA_\epsilon} (F-\epsilon) \leq I_n \leq \sqrt[n]{b-a}F$.


We have $\lim_{n \to \infty}\sqrt[n]{mA_\epsilon} = 1$ and $\lim_{n \to \infty}\sqrt[n]{b-a} = 1$, hence $\limsup_{n \to \infty} I_n \leq F$ and $\liminf_{n \to \infty} I_n \geq F-\epsilon$. Since this is true for $\epsilon$ arbitrarily close to $0$, we have $\liminf_{n \to \infty} I_n \geq F$, from which the desired result follows.


(Note: If you wish to avoid the Lebesgue measure, note that $A_\epsilon$ could be taken to be any non-empty interval such that $f(x) \geq F-\epsilon$ for $x$ in this interval. Then $mA_\epsilon$ would be replaced by the length of this interval. The same reasoning still applies.)
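A numerical illustration of the convergence (an added sketch with $f(x)=\sin x$ on $[0,\pi]$, so that $F=1$):

import numpy as np
from scipy.integrate import quad

for n in (1, 10, 100, 1000):
    integral, _ = quad(lambda t: np.sin(t) ** n, 0, np.pi, limit=200)
    print(n, integral ** (1.0 / n))  # creeps up towards sup f = 1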


Friday, December 23, 2016

probability theory - Prove that $F: \mathbb{R} \to \mathbb{R}$ nondecreasing, right continuous and going to 1/0 at $\infty/-\infty$ is a CDF for a random variable X



Prove that $F: \mathbb{R} \to \mathbb{R}$ nondecreasing, right continuous, $\lim_{t \to -\infty} F(t) = 0$, $\lim_{t \to \infty} F(t) = 1$ is a CDF for some random variable $X$, i.e. there exists a random variable $X$ such that $F_X = F$, where $F_X$ denotes the CDF of $X$.



The hint was to look onto $\left(\Omega=(0,1), \mathcal{F}=\mathcal{B}((0,1)), \lambda_{|(0,1)}\right)$.




I know that CDF of any random variable has those properties, but I fail to see how to link $F_X$ with $F$.


Answer



For $\omega \in (0,1)$ define $X(\omega)=\inf \{t: F(t) \geq \omega\}$. First note that $F(t) \geq \omega$ implies that $X(\omega) \leq t$.



Now suppose $F(t) <\omega$. Then $F(s) <\omega $ for all $s \leq t$. This implies that $X(\omega) \geq t$. In other words, $X(\omega) < t$ implies $F(t) \geq \omega$.



Thus $P(X\leq t) \geq \lambda((0,F(t)])=F(t)$, i.e. $F_X(t) \geq F(t)$. Also $F_X(t-)=P(X<t) \leq F(t)$. These two inequalities imply that $F_X(t)=F(t)$ at all points where $F_X$ is continuous. Since both of these are right-continuous functions it follows that $F_X=F$.
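This construction (the generalized inverse of $F$) is exactly inverse-transform sampling; a quick empirical check (an added sketch with $F(t)=1-e^{-t}$, the Exp(1) CDF):

import math, random

def X(omega):  # inf{t : F(t) >= omega} for F(t) = 1 - e^{-t}
    return -math.log(1 - omega)

samples = [X(random.random()) for _ in range(10**5)]
t = 1.0
print(sum(s <= t for s in samples) / len(samples), 1 - math.exp(-t))  # both ~0.632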


integration - Converting double integrals to polar form: what would the limits be here?


So I have the following integral:


$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} {{{x^2}dxdy}\over (1 + \sqrt{x^2 + y^2})^5}$$


I know that converting the integral using polar coordinates gives:


$$\int \int {{r^2 \cos^2 \theta} \over {(1 + r)^5}} rdrd \theta$$


I'm assuming $r$ is going from $0$ to infinity.


But what about $\theta$?


Answer



Note that in order to cover $\mathbb{R}^2$, $r$ extends from $0$ to $\infty$ and $\theta$ spans an entire period of $\sin(\theta)$ and $\cos(\theta)$. (For example, the first quadrant alone is covered by $\theta \in [0,\pi/2]$, $r\in [0,\infty)$.)



Hence,


$$\begin{align} \int_{-\infty}^\infty\int_{-\infty}^\infty\frac{x^2}{(1+\sqrt{x^2+y^2})^5}\,dx\,dy&=\int_0^{2\pi}\int_0^\infty \frac{r^2\cos^2(\theta)}{(1+r)^5}r\,dr\,d\theta\\\\ &=\underbrace{\left(\int_0^{2\pi}\cos^2(\theta)\,d\theta\right)}_{=\pi}\underbrace{\left(\int_0^\infty \frac{r^3}{(1+r)^5}\,dr\right)}_{=1/4}\\\\ &=\frac{\pi}{4} \end{align}$$
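The two underbraced factors can be checked numerically (an added sketch, assuming scipy is available):

import numpy as np
from scipy.integrate import quad

angle, _ = quad(lambda t: np.cos(t) ** 2, 0, 2 * np.pi)
radial, _ = quad(lambda r: r**3 / (1 + r) ** 5, 0, np.inf)
print(angle, radial, angle * radial, np.pi / 4)  # pi, 1/4, pi/4, pi/4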


Thursday, December 22, 2016

set theory - A question concerning on the axiom of choice and Cauchy functional equation




The Cauchy functional equation:
$$f(x+y)=f(x)+f(y)$$
has solutions called 'additive functions'. If no conditions are imposed on $f$, there are infinitely many functions that satisfy the equation, called 'Hamel' functions. This is considered valid if and only if Zermelo's axiom of choice is accepted as valid.



My question is: suppose we don't consider valid the axiom of choice, this means that we have a finite number of solutions? Or maybe the 'Hamel' functions are still valid?



Thanks for any hints or answers.


Answer



What you wrote is not true at all. The argument is not valid "if and only if the axiom of choice holds".





  1. Note that there are always continuous functions of this form, all look like $f(x)=ax$, for some real number $a$. There are infinitely many of those.


  2. The axiom of choice implies that there are discontinuous functions like this, furthermore a very very weak form of the axiom of choice implies this. In fact there is very little "choice" which can be inferred from the existence of discontinuous functions like this, namely the existence of non-measurable sets.


  3. Even if the axiom of choice is false, it can still hold for the real numbers (i.e. the real numbers can be well-ordered even if the axiom of choice fails badly in the general universe). However even if the axiom of choice fails at the real numbers it need not imply that there are no such functions in the universe.


  4. We know that there are models in which all functions which have this property must be continuous, for example models in which all sets of real numbers have the Baire property. There are models of ZF in which all sets of reals have the Baire property, but there are non-measurable sets. So we cannot even infer the existence of discontinuous solutions from the existence of non-measurable sets.


  5. Observe that if there is one discontinuous solution then there are many different ones, since if $f,g$ are two additive functions then $f\circ g$ and $g\circ f$ are also additive functions. The correct question is to ask whether or not the algebra of additive functions is finitely generated over $\mathbb R$, but to this I do not know the answer (and I'm not sure if it is known at all).





Wednesday, December 21, 2016

integration - Integral of complete elliptic Integral of second kind

I am currently wondering wether there is an indefinite integral for
$$\int dx \;\sqrt{1-x^2} E\left (1+\frac{1}{x^2-1} \right )$$
WolframAlpha states that there is no result in terms of analytical functions. I am not a mathematician and don't know the current literature in the field. I wonder whether there is more recent literature which has a solution for this.



Also if there is no solution, I wonder whether there is a way to prove that there is no solution.



Looking forward to helpful answers.




Cheers!

logarithms - Inequality $\log x\le \frac{2}{e} \, \sqrt{x}$


The inequality $$\log x \le \frac{2}{e} \, \sqrt{x},$$ where $\log x$ denotes the natural logarithm, is used in the proof of Theorem 4.7 in Apostol's Analytic Number Theory.


It seems that the inequality is not very difficult to prove using calculus. We could simply find maximum/minimum of some function like $f(x)= \frac2e \sqrt{x} - \log x$ or $g(x)=\frac{\log x}{\sqrt x}$.


Are there some other methods how this inequality can be proved? Is there a way in which this inequality can be seen more directly, without having to calculate critical points of some auxiliary function?


Answer



With the substitution $x = e^{2(u+1)}$ the inequality $$ \log x \le \frac{2}{e} \, \sqrt{x} $$ becomes $$ e^u \ge 1 + u \tag{*} $$ which is a well-known estimate for the exponential function. Equality holds if and only if $u = 0$, corresponding to $x = e^2$ in the original inequality.



$(*)$ is trivial for $u \le -1$ and can for example be shown using the Taylor series for $u > -1$. It also follows – as Jack said in a comment – from the convexity of the exponential function: the graph lies above the tangent line at $u = 0$.


(This approach was inspired by Jack D'Aurizio's answer.)
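A quick numerical look at the inequality and its equality point $x=e^2$ (an added sketch):

import math

for x in (1.0, math.e**2, 100.0, 1e6):
    print(x, math.log(x), 2 / math.e * math.sqrt(x))  # log x <= (2/e) sqrt(x)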


complex numbers - Prove the following equality

So I have the following equality involving complex numbers:
$$\frac {\sqrt 3 -1}{1-i}(1+\sqrt 3 \,i)(\cos \alpha -i\,\sin\alpha)=2\sqrt {2-\sqrt 3}\left(\cos\left(\frac {7\pi}{12}-\alpha\right)+ i\,\sin\left(\frac {7\pi}{12}-\alpha\right)\right)$$




Guess I have to find the $\alpha$ that is missing. I thought of considering both sides of the equation as single complex numbers in trigonometric form, but it seemed only to get more difficult.
I would really appreciate your help and hints, thanks.

Tuesday, December 20, 2016

real analysis - Prove that $\lim_{n \to \infty} \frac{n}{(n!)^\frac{1}{n}} = e$


I have solved the problem in just 2 lines using a theorem which asserts that


"Let ${u_n}$ be a real sequence such that $u_n > 0 \forall n \in \mathbb{N}$ and $\lim_{n \to \infty} \frac{u_{n+1}}{u_n} = \ell$ (finite of infinite). Then $\lim_{n \to \infty} (u_n)^{1 \over n} =\ell$ "



To prove the aforesaid limit, I fix $u_n={n^n \over n!}$. Then $u_n>0 \forall n \in \mathbb{N}$ and $\lim_{n \to \infty}\frac{u_{n+1}}{u_n}= \lim_{n \to \infty}(1+{1 \over n})^n=e$.


Then it follows from the above theorem that $\lim_{n \to \infty} (u_n)^{1 \over n} =e$ i.e. $ \lim_{n \to \infty} \frac{n}{(n!)^\frac{1}{n}} = e $. (Proved)


But I am trying to prove it without using the theorem. I am trying to get a generic proof.


Can anyone provide a proper wayout for this?


Thanks for your help in advance.


Answer



EDIT: As pointed out in the comments, even though the final inequality is correct, it is insufficient since $(n+1)^{1/n} \to 1$ as $n \to \infty$. The lower bound can be obtained as shown in @adfriedman's answer.


Here's my take on it: Whenever $n \geq 3$, we have $$ n^n \geq (n+1)!, $$ and thus $$ n^n \geq (n+1)n! \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \geq (n+1)^{\frac{1}{n}}. $$ On the other hand, the Taylor expansion of $e^n$ gives $$ \frac{n^n}{n!} \leq \sum_{k=0}^{\infty} \frac{n^k}{k!} = e^n \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \leq e. $$ So, $$ (n+1)^{\frac{1}{n}} \leq \frac{n}{n!^{\frac{1}{n}}} \leq e. $$ Apply the Squeeze Theorem.


Monday, December 19, 2016

algebra precalculus - My dilemma about $0^0$





We know that $0^0$ is indeterminate.



But if do this:



$$(1+x)^n=(0+(1+x))^n=C(n,0)\cdot ((0)^0)((1+x)^n) + \cdots$$




we get
$$(1+x)^n=(0^0)\cdot(1+x)^n$$



So, $0^0$ must be equal to $1$.



What is wrong in this?
Or am I mistaken in calling $0^0$ indeterminate?



Other threads are somewhat similar but not exactly the one I am asking.



Answer



There's a common yet subtle misconception among mathematics students that algebraic laws are prior to the numerical values that they describe. For example, "why" is it that:



$$5(3+2)=5\cdot 3 + 5\cdot 2\tag1$$



The tendency is to say, oh, this is true because of the distributive law: $a(b+c)=ab+ac$. But actually, equation (1) is true because it's true, not because of an abstract law. That is, you can actually calculate the left hand side and the right hand side and check that they're equal. And it's the fact that that always works that justifies the abstract law.



In your case, the binomial theorem is considered true because it always works when you plug in actual numerical values. You know how to calculate $(2+3)^8$ without the binomial theorem, but the result you get is consistent with the binomial theorem. To compute an expression using a general theorem, and then conclude that a specific numerical expression has a specific value is backwards reasoning.



If we're starting from the point of view that $0^0$ is undefined, then when we apply the binomial theorem to $(0+a)^n$ and realize that it's asking us to compute $0^0$, the conclusion is not that the binomial theorem is true all of the time, and that therefore $0^0$ must have a certain value, the conclusion is that therefore the binomial theorem does not apply to $(a+b)^n$ when $a$ or $b$ is zero. If we want it to, we have to come up with a definition for $0^0$, and then re-prove the binomial theorem, making sure that our proof is consistent with our new definition.




And, in fact, if we define $0^0=1$, then it's possible to do that. Your reasoning almost amounts to a proof that given that definition, the binomial theorem works, but it's important to recognize that you're verifying that a given definition leads to a given theorem, you are not concluding that a definition is "true" as a consequence of a theorem.


calculus - how to prove that $\ln(1+x)< x$




I want to prove that: $\ln(x+1)< x$.




My idea is to define: $f(x) = \ln(x+1) - x$, so:



$f'(x) = \dfrac1{1+x} - 1 = \dfrac{-x}{1+x} < 0, \text{ for }x >0$.



Which leads to $f(x) < f(0) = 0$ for $x > 0$, i.e. $\ln(1+x) < x$.

Is that a valid proof?
Any other ideas?



Thanks.



Answer



I think your approach is correct but you need to add some more details. Based on your approach let $f(x) = \log(1 + x) - x$ so that $f(0) = 0$. Clearly $$f'(x) = -\frac{x}{1 + x}$$ and hence $f'(x) > 0$ if $-1 < x < 0$ and $f'(x) < 0$ if $x > 0$. It follows that that $f(x)$ in increasing in $(-1, 0]$ and decreasing in $[0, \infty)$. Thus we have $f(x) < f(0)$ if $-1 < x < 0$ and $f(x) < f(0)$ if $x > 0$. It thus follows that $f(x) \leq f(0) = 0$ for all $x > -1$ and there is equality only when $x = 0$. So we can write $$\log(1 + x) \leq x$$ for all $x > -1$ and there is equality only when $x = 0$.



Note: We have considered $x > -1$ because $\log(1 + x)$ is not defined if $x \leq -1$.


sequences and series - Does $a_n = \frac{1}{n}$ converge even though $\sum \frac{1}{n}$ diverges?

I've been studying sequences and series recently. As I understand it, convergence of a sequence is determined by whether the sequence has a (finite) limit. Now, in this example



$$a_n = \frac{3n^2 - 5n + 7}{3n^3 - 5n + 7}$$



I get $\lim_{n\to \infty} a_n = 0$ (for large $n$ the terms behave like $\frac{1}{n}$), which means this sequence converges. What confuses me is that it is known that the series $\sum \frac{1}{n}$ diverges. My question is, is it possible that $\frac{1}{n}$ converges when working with sequences, but diverges when working with series? Or does it diverge in both cases?




Thank you in advance
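
Indeed both statements are compatible: the sequence $\frac1n$ tends to $0$, while the partial sums of the series grow without bound (roughly like $\log n$). A minimal numerical illustration:

```python
import math

partial = 0.0
N = 10 ** 6
for n in range(1, N + 1):
    partial += 1.0 / n
print(1.0 / N)                  # ~1e-06: the terms tend to 0
print(partial, math.log(N))     # ~14.39 vs ~13.82: partial sums keep growing like log N
```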

calculus - How to tell if an integral can be integrated (has an elementary anti-derivative)?



When dealing with improper integrals I sometimes have to figure out whether or not to use the comparison test. Everything I read says something along the lines of "Also, there will be some integrals that we simply won’t be able to integrate and yet we would still like to know if they converge or diverge."



How do I actually figure out whether or not I can integrate without actually trying to integrate and failing?


Answer



In full generality this is an extremely hard problem; in fact, for rich enough classes of expressions it is known to be algorithmically undecidable.



In practice, by learning lots of integration techniques and integration criteria you develop better intuition and have more tools to answer the question. Also, knowing many examples of functions without elementary antiderivatives helps in quickly spotting more such functions. But there is no general method.
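
For experimentation, computer algebra systems implement partial decision procedures (variants of the Risch algorithm) and will often signal non-elementarity by answering in terms of special functions. A small SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2 -- erf is not elementary
print(sp.integrate(sp.sin(x) / x, x))  # Si(x) -- again a non-elementary special function
```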



real and imaginary part of square root of any complex number


Let $k\in\mathbb{C}$. What are the real and imaginary parts of a complex number $x$ such that $x^2=k$?


My first idea was writing $k$ and $x$ in polar form: $x=r(\cos{\phi}+i\sin{\phi})$;$k=r'(\cos{\psi}+i\sin{\psi})$. Then use De Moivre's formula such that: $x^2=r^2(\cos{2\phi}+i\sin{2\phi})=r'(\cos{\psi}+i\sin{\psi})$.


Any hints how to go on ?


Another idea could be using roots of unity: we know what $x$ looks like when $x^n=1$.


Answer



Well, you just answered yourself.


If $r_1 e^{i\theta_1} = r_2 e^{i \theta_2} $ then $r_1=r_2 , \theta_1 = \theta_2 + 2\pi n $. That means in this case that


$$ r^2 = r' \Rightarrow r=\sqrt{r'}$$ $$ 2\phi = \psi + 2\pi n \Rightarrow \phi = \frac{\psi}{2} , \frac{\psi}{2} + \pi $$ Meaning the solution will be $z= \pm \sqrt{r'} (\cos \frac{\psi}{2} + i \sin \frac{\psi}{2})$.
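
A small numerical illustration of this polar-form recipe (a sketch using Python's cmath; here $r'$ and $\psi$ are the modulus and argument of $k$):

```python
import cmath

k = 3 - 4j
r_prime, psi = cmath.polar(k)                          # k = r' e^{i psi}
root = cmath.sqrt(r_prime) * cmath.rect(1.0, psi / 2)  # sqrt(r') (cos(psi/2) + i sin(psi/2))
print(root, root ** 2)      # (2-1j); squaring recovers k up to rounding
print(-root, (-root) ** 2)  # the other square root
```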



analysis - Constructing a continuous function whose graph seems 'special'




I've been reading through Zorich's "Analysis I" book recently, and I came across this nice little exercise.



Let $f: [0,1]\to \mathbb R$ be a continuous function such that $f(0)=f(1)$. Show that




  • for any $n\in \mathbb N$ there exists a horizontal closed interval of length $\frac 1n$ with endpoints on the graph of this function;


  • if the number $\ell$ is not of the form $\frac 1n$, there exists a function of this form on whose graph one cannot inscribe a horizontal chord of length $\ell$.





The first part can be proven like this:
Consider $g: [0,(n-1)/n] \to \mathbb R$ given by $g(x) = f(x) - f(x+1/n)$. Then



$$\sum_{k=0}^{n-1} g(k/n) = 0$$



and therefore either all of these points are zero or there exists both a point where $g$ is positive and a point where $g$ is negative. By continuity, there must then also be a point where $g = 0$. So we are done.



Now, the second statement seemed rather counterintuitive, and I have given it some time now, but don't see a counterexample for $\ell < 1/2$.




(For $\ell > 1/2$ the function $f(x) = \sin(2\pi x)$ will do.)



Can anyone help me out?



Cheers,


Answer



Try $f(x) = \sin^2(\pi x/\ell) - x \sin^2(\pi/\ell)$
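
To see why this works: $f(0)=0$ and $f(1)=\sin^2(\pi/\ell)-\sin^2(\pi/\ell)=0$, while $f(x+\ell)-f(x)=-\ell\sin^2(\pi/\ell)$ is a constant, and it is nonzero exactly when $\ell$ is not of the form $\frac 1n$, so no horizontal chord of length $\ell$ can exist. A quick numerical confirmation (a sketch, with the arbitrary choice $\ell=0.3$):

```python
import math

l = 0.3   # not of the form 1/n
f = lambda x: math.sin(math.pi * x / l) ** 2 - x * math.sin(math.pi / l) ** 2
print(f(0.0), f(1.0))   # both 0 (up to rounding): f(0) = f(1)
for x in [0.0, 0.2, 0.5, 0.7]:
    # f(x + l) - f(x) equals the nonzero constant -l*sin(pi/l)^2:
    print(f(x + l) - f(x), -l * math.sin(math.pi / l) ** 2)
```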


Sunday, December 18, 2016

integration - why $\int \sqrt{(\sin x)^2}\, \mathrm{d}x = \int |\sin x| \,\mathrm{d}x$



May be this is a stupid question but why $$\int \sqrt{(\sin x)^2} \,\mathrm{d}x = \int |\sin x| \,\mathrm{d}x$$ instead of $$\pm \int \sin x \,\mathrm{d}x$$



I think maybe it's because it violates the rule that a function can't have more than one output for a single input, which brings me to my next question: does the integrand need to be a function?



Answer



The square root $\sqrt{\,\cdot\,}$ on the real numbers is defined as a function: given a non-negative number $y$, it returns the non-negative number $x$ such that $x^2 = y$. That's why we write $$x^2 = k \Rightarrow x = \pm\sqrt{k}$$ instead of $$x^2 = k \Rightarrow x = \sqrt{k}.$$



We have to put the $\pm$ sign because the square root itself only gives the non-negative answer.



Observation: if the square root returned both numbers, it couldn't be a function.
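
A one-line numerical confirmation that $\sqrt{\sin^2 x}$ and $|\sin x|$ agree everywhere (a sketch):

```python
import math

for x in [-1.0, 0.5, 2.0, 4.0]:
    s = math.sin(x)
    print(math.sqrt(s ** 2), abs(s))   # equal (up to rounding): sqrt(t^2) = |t|
```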


Saturday, December 17, 2016

elementary set theory - Proving $(A\le B)\vee (B\le A)$ for sets $A$ and $B$

For any pair of sets $A$ and $B$, we can define $A\le B$ iff there exists an injection $f\colon A\rightarrow B$. I am trying to prove that $$(A\le B)\vee (B\le A).$$


I have tried assuming $\neg (A\le B)$, then proving $B\le A$ by constructing the required injection, but I haven't been able to make any progress. Any hints, etc. would be appreciated.


EDIT


Assuming $\neg (A\le B)$, can you prove there exists a surjection $f: A\rightarrow B$? Then it would be easy, by applying AC, to construct an injection $g: B\rightarrow A$.

proof verification - Proving that $f_2+f_4+\cdots+f_{2n}=f_{2n+1}-1$ for Fibonacci numbers by induction





Given: $f_1 = f_2 = 1$ and for $n \in\mathbb{N}$, $f_{n+2} =f_{n+1} + f_n$.



Prove that $f_2 + f_4 + \dots + f_{2n} = f_{2n+1}- 1$.




Would you start with setting $f_2 + f_4 + \dots + f_{2n}= a_n$?



Then for the base case $n=1$: LHS $=f_2=1$ and RHS $=f_3-1=2-1=1$, so the base case holds.



Then the inductive hypothesis: Assume $f_2 + f_4 + \dots + f_{2n} = f_{2n+1}- 1$




$\textbf{NTS}$: $f_2 + f_4 + \dots + f_{2n} +f_{2n+2} = f_{2n+3}- 1$



Inductive step: By inductive hypothesis $f_2 + f_4 + \dots + f_{2n}=f_{2n+1}- 1$



So $f_{2n+1}- 1+f_{2n+1}$=$f_{2n+2}- 1$. As was to be shown.



Is this correct or did I need to show more algebra in my inductive step ?


Answer



Hint. The inductive step is rather

$$
f_2 + f_4 + \cdots + f_{2n}+\color{red}{f_{2n+2}}=\color{red}{f_{2n+3}}- 1,
$$ then using the inductive hypothesis, we have to prove that
$$
f_{2n+1}-1+\color{red}{f_{2n+2}}=\color{red}{f_{2n+3}}- 1.
$$ Can you take it from here?
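
A quick brute-force check of the identity for small $n$ (a sketch; the induction above is the actual proof):

```python
# f_1 = f_2 = 1, f_{n+2} = f_{n+1} + f_n; check f_2 + f_4 + ... + f_{2n} = f_{2n+1} - 1.
fib = [0, 1, 1]                    # fib[k] holds f_k (fib[0] is an unused placeholder)
while len(fib) < 32:
    fib.append(fib[-1] + fib[-2])
print(all(sum(fib[2 * k] for k in range(1, n + 1)) == fib[2 * n + 1] - 1
          for n in range(1, 15)))  # True
```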


real analysis - Constructing a continuous random variable on an atomless space



Let $(\Omega,\mathcal F,P)$ be a GIVEN probability space which is atomless. Assuming no topology or algebraic structure is given on the space (but you can impose one on it), how could I construct a (real-valued) continuous random variable on this space?




The only examples I could think of are infinite sums of indicator functions of events, but such a random variable would have countable image and hence be discrete. It is challenging enough for me just to construct a random variable with uncountable image. Can anyone help me? Thanks!



Edited: An atom in a space $(\Omega,\mathcal F,P)$ is a set $E\in \mathcal F$ such that $P(E)>0$ and that for each $F\subseteq E$, either $P(F)=0$ or $P(F)=P(E)$; A space is called atomless if it contains no atoms.



It is a fact that an atomless space must be uncountable, and by the axiom of choice we can prove that for each $0\leq\alpha\leq 1$, there is some $E\in\mathcal F$ with $P(E)=\alpha$. How could I utilize this fact to construct such a continuous random variable?


Answer



Note: Better answer below, leaving for posterity. Let's assume for now that $\Omega=\mathbb{R}$ and $\mathcal{F}$ is the Borel $\sigma$-algebra, and let $\mu$ be the given measure on $\mathbb{R}$. Then we can write $\mu((-\infty,x])=g(x)$ for some monotonic $g$, and as we have no atoms $g$ is continuous. Let $X^{-1}((-\infty,x])=(-\infty,f(x)]$ for some monotonic increasing $f(x)$ satisfying $f(\mathbb{R})=\mathbb{R}$, i.e. $\{X(\omega)\leq x\}\Leftrightarrow\{\omega\in(-\infty,f(x)]\}$. This is equivalent to defining $X(\omega)=y$ for all $\omega\in[\lim_{z\uparrow y}f(z),f(y)]$, which is well-defined if $f(\mathbb{R})=\mathbb{R}$. Then we can show that $\mathbb{P}[X\leq x]=g(f(x))$. So now the question becomes, can we make $g(f(x))$ differentiable on all of $\mathbb{R}$ for arbitrarily weird functions $g$ that are continuous and satisfy $g(-\infty)=0$, $g(\infty)=1$?



I believe the answer is yes, and I've heard claims to that effect, but I can't find an actual proof. It's more of an analysis question, and an elementary enough one, so if you post a question with the analysis tag one of those guys might be able to help you out.




Update: Thanks to @Did for the clarification. It's sufficient to be able to construct a random variable uniformly distributed on the set $\{\frac{i}{2^n}\}_{i=1}^{2^n}$ for any $n$. Use the axiom of choice to inductively create the following partitions. For $n=1$, let $E_1^{(1)}$, $E_2^{(1)}$ partition $\Omega$ and satisfy $\mathbb{P}[E_i^{(1)}]=\frac{1}{2}$ for $i=1,2$. For general $n\geq 2$ and $1\leq k\leq 2^{n-1}$, let $E_{2k-1}^{(n)},E_{2k}^{(n)}$ partition $E_k^{(n-1)}$ into two sets of equal probability. Let $X_n(\omega)=\frac{1}{2^n}\sum_{i=1}^{2^n}i1_{\omega\in E_i^{(n)}}$. Then $X_n$ is uniformly distributed on the set $\{\frac{i}{2^n}\}_{i=1}^{2^n}$. Furthermore for every $\omega\in\Omega$ $X_n(\omega)$ is monotonically decreasing as $n\rightarrow\infty$, so $X_n$ converges pointwise (and therefore almost surely) to some random variable $X$. But almost sure convergence implies convergence in distribution so $X$ must be a $\text{Uniform}[0,1]$ random variable.


Friday, December 16, 2016

real analysis - Look for a one-to-one function that maps a square to R

I am looking for a one-to-one function which maps $(0,1)^2$ to $\mathbb{R}$. It is preferable that the function doesn't involve trig functions.
I have tried several mappings like $\ln(\frac{x_2}{1-x_1})$, but they are not one-to-one. The challenge for me is the one-to-one requirement.




I have read Examples of bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$. I like the idea there, but I need to use this function to do further calculation, so it has to be in explicit form. Is it possible to find such a function?



I appreciate any ideas and comments.

elementary number theory - Prove that if $n$ is a positive integer then $2^{3n}-1$ is divisible by $7$.



I encountered a problem in a book that was designed for IMO trainees. The problem had something to do with divisibility.





Prove that if $n$ is a positive integer then $2^{3n}-1$ is divisible by $7$.




Can somebody give me a hint on this problem? I know that it can be done via the principle of mathematical induction, but I am looking for some other way (that is, if there is some other way).


Answer



Hint: Note that $8 \equiv 1~~~(\text{mod } 7)$.

So,
$$2^{3n}=(2^3)^n=8^n\equiv \ldots~~~(\text{mod } 7)=\ldots~~~(\text{mod } 7)$$

Try to fill in the gaps!



Solution:
Note that $8 \equiv 1~~(\text{mod } 7)$. This means that $8$ leaves a remainder of $1$ when divided by $7$. Now assuming that you are aware of some basic modular arithmetic,
$$2^{3n}=(2^3)^n=8^n\equiv 1^n ~~(\text{mod } 7)=1~~(\text{mod } 7)$$
Now if $2^{3n}\equiv 1~~(\text{mod } 7)$ then it follows that,

$$2^{3n}-1=8^n-1\equiv (1-1)~~(\text{mod } 7)~\equiv 0~~(\text{mod } 7)\\ \implies 2^{3n}-1\equiv 0~~(\text{mod } 7)$$
Or in other words, $2^{3n}-1$ leaves no remainder when divided by $7$ (i.e. $2^{3n}-1$ is divisible by $7$), as desired.
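
A one-line empirical check (a sketch; the modular argument above is the proof):

```python
print(all((2 ** (3 * n) - 1) % 7 == 0 for n in range(1, 300)))   # True
```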

calculus - Proving that $(\lim_{x \to a}f(x) = L) \Rightarrow (\lim_{x \to a}\frac{1}{f(x)} = \frac{1}{L})$ using Epsilon-Delta




Using the Epsilon-Delta definition of a limit, how can I rigorously prove that, assuming $L \neq 0$,
$$\left(\lim_{x \to a}f(x) = L\right) \Rightarrow \left(\lim_{x \to a}\frac{1}{f(x)} = \frac{1}{L}\right)$$
I tried to work backwards by simplifying:
$$\left|\frac{1}{f(x)}-\frac{1}{L} \right| = \left|\frac{ L - f(x)}{Lf(x)} \right| = \left| f(x) - L \right|\frac{1}{|Lf(x)|}$$



but I am not sure how to proceed from here or if this is even useful to the overall proof as a whole.


Answer



You are to prove that $\lim_{x \to a} \frac{1}{f(x)} = \frac 1L$. First of all, let $\epsilon > 0$.




We note that $\left|\frac{1}{f(x)} - \frac 1L\right| = \left|\frac{L - f(x)}{Lf(x)}\right| = \frac{|f(x) - L|}{|Lf(x)|}$.



Fix an $\epsilon_0 > 0$ that we will choose later. Let $\delta > 0$ be such that $|x - a| < \delta \implies |f(x) - L| < \epsilon_0$. Then, we see that $ L - \epsilon_0 < f(x) < L+ \epsilon_0$. Thus, if $\epsilon_0$ is chosen small enough (say $\left|\frac L2\right|$), then $f(x)$ cannot be zero if $|x - a| < \delta$, and will have the same sign as $L$. Assume for concreteness that $L > 0$ (the case $L < 0$ is symmetric). Then we also see that $L(L - \epsilon_0) < Lf(x)$, so $|Lf(x)| > L(L - \epsilon_0)$.

This implies :
$$
\frac{|f(x) - L|}{|Lf(x)|} \leq \frac{\epsilon_0}{L(L - \epsilon_0)}
$$



Now, choose $\epsilon_0$ at the start such that $\frac{\epsilon_0}{L(L - \epsilon_0)} = \epsilon$. If the resulting $\epsilon_0$ is greater than $\left|\frac L2\right|$, then we can take $\min(\epsilon_0,\left|\frac L2\right|)$ as the desired $\epsilon_0$, for which the $\delta$ that works for $\lim f(x) = L$ also works for the limit you need to prove.



Thursday, December 15, 2016

calculus - Evaluating $\int_0^{\pi/2} \log \left| \sin^2 x - a \right|$ where $a\in [0,1]$.





How to evaluate
$$
\displaystyle\int_0^{\pi/2} \log \left| \sin^2 x - a \right|\,dx
$$

where $a\in[0,1]$ ?




I think of this problem as a generalization of the following proposition
$$
\displaystyle\int_0^{\pi/2} \log \left(\sin x\right)\,dx =-\frac12\pi\log2

$$



My try



Put
$$
I(a)=\displaystyle\int_0^{\pi/2} \log \left| \sin^2 x - a \right|\,dx
$$

From the substitution $x \to \frac{\pi}{2}-x$ , we get
$$

\displaystyle\int_0^{\pi/2} \log \left| \sin^2 x - a \right|\,dx
=
\displaystyle\int_0^{\pi/2} \log \left| \cos^2 x - a \right|\,dx
$$

Thus
$$
\displaystyle\int_0^{\pi/2} \log \left| \sin^2 x - a \right|\,dx
=
\displaystyle\int_0^{\pi/2} \log \left| \sin^2 x - (1-a) \right|\,dx
$$


which means $$I(a)=I(1-a) \tag{1}$$



On the other hand,
\begin{align}
2I(a) &=
\displaystyle\int_0^{\pi/2} \log \left| (\sin^2 x - a)(\cos^2 x -a) \right|\,dx
\\ &=
\displaystyle\int_0^{\pi/2} \log \left| a^2-a+\sin^2 x \cos^2 x \right|\,dx
\\ &=
\displaystyle\int_0^{\pi/2} \log \left| 4(a^2-a)+\sin^2 (2x) \right|\,dx

-\pi \log 2
\\ &=
\frac{1}{2}\displaystyle\int_0^{\pi} \log \left| 4(a^2-a)+\sin^2 x \right|\,dx
-\pi \log 2
\\ &=
\displaystyle\int_0^{\pi/2} \log \left| 4(a^2-a)+\sin^2 x \right|\,dx
-\pi \log 2
\\ &=
\displaystyle\int_0^{\pi/2} \log \left| 1+4(a^2-a)-\sin^2 x \right|\,dx
-\pi \log 2

\\ &=
I((2a-1)^2)
-\pi \log 2
\end{align}

Thus
$$
2I(a)=I((2a-1)^2)-\pi \log 2 \tag{2}
$$



Letting $a=0$, we recover the proposition mentioned above, $\displaystyle\int_0^{\pi/2} \log \left(\sin x\right)\,dx =-\frac12\pi\log2.$




But how to move on? Can we solve the problem using only $(1)$ and $(2)$? Or what other properties should we use to evaluate it?



Looking forward to your new solutions as well.



Thank you in advance!



Added:



As pointed out in the comments, it seems that the integral is identically $-\pi\log 2$.




From $(1)$ and $(2)$ we can also find many numbers such that $I(a)=-\pi\log 2$.


Answer



Using the symmetries and developing the $\sin^2$ term, we can express
\begin{align}
I&=\int_0^{\pi/2} \log \left| \sin^2 x - a \right|\,dx\\
&=\frac{1}{4}\int_0^{2\pi} \log \left| \sin^2 x - a \right|\,dx\\
&=\frac{1}{4}\int_0^{2\pi} \log \left| \frac{1-2a}{2}-\frac{1}{2}\cos 2x\right|\,dx
\end{align}


By denoting $2a-1=\cos 2\alpha$,
\begin{align}
I&=\frac{1}{4}\int_0^{2\pi} \log \left| \frac{1}{2}\left( \cos 2\alpha+\cos 2x\right)\right|\,dx\\
&=\frac{1}{4}\int_0^{2\pi} \log \left| \cos \left( x+\alpha \right)\cos \left( x-\alpha \right)\right|\,dx\\
&=\frac{1}{4}\int_0^{2\pi} \log \left| \cos \left( x+\alpha \right)\right|\,dx+\frac{1}{4}\int_0^{2\pi} \log \left| \cos \left( x-\alpha \right)\right|\,dx
\end{align}

As the functions are periodic, the integration variables can be shifted, thus
\begin{equation}
I=\frac{1}{2}\int_0^{2\pi} \log \left| \cos \left( x \right)\right|\,dx
\end{equation}


Finally using the symmetries of the integrand,
\begin{align}
I&=2\int_0^{\pi/2} \log \left| \cos \left( x \right)\right|\,dx\\
&=2\int_0^{\pi/2} \log \left| \sin \left( x \right)\right|\,dx\\
&=-\pi\log 2
\end{align}

from the quoted result.
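
A numerical cross-check that $I(a)=-\pi\log 2$ for several interior values of $a$ (a sketch; the integrand has an integrable logarithmic singularity where $\sin^2 x=a$, which we pass to quad via its points argument):

```python
import numpy as np
from scipy.integrate import quad

for a in [0.1, 0.25, 0.5, 0.75, 0.9]:
    f = lambda x, a=a: np.log(np.abs(np.sin(x) ** 2 - a))
    sing = [np.arcsin(np.sqrt(a))]          # the point in (0, pi/2) where sin^2 x = a
    val, err = quad(f, 0.0, np.pi / 2, points=sing, limit=200)
    print(a, val)                           # each ≈ -2.17759
print(-np.pi * np.log(2))                   # -2.17758609...
```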


Wednesday, December 14, 2016

trigonometry - Prove the following trigonometric identity without a calculator involved

I have to prove the following statement.




$$1+\cos{2\pi\over5}+\cos{4\pi\over5}+\cos{6\pi\over5}+\cos{8\pi\over5}=0$$




I have tried to use the sum of angles formula for cosine, but didn't get to a point where I'd be able to show that it is equal to $0$.
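
One standard route, in case it helps: the numbers $e^{2\pi i k/5}$, $k=0,\dots,4$, are the fifth roots of unity, and they sum to $0$ because they are the roots of $z^5-1$, whose $z^4$-coefficient vanishes; the identity above is the real part of that sum. A quick numerical check (a sketch):

```python
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
print(sum(roots))                    # ~0: the fifth roots of unity sum to zero
print(sum(z.real for z in roots))    # ~0: the sum of cosines in the question
```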

real analysis - Finding a differentiable function from (0,1) to (0,1) dominating (point wise) a given continuous function from (0,1) to (0,1)




Suppose we have a continuous function $f: (0,1) \to (0,1)$. Does there exist a differentiable function $\phi: (0,1) \to (0,1)$ such that $f(x) \leq \phi(x)$ for all $x\in(0,1)$?



There does exist such a differentiable function if the function $f$ has finitely many non-differentiable points. Also this can be done if there exists an $\epsilon >0$, no matter how small, such that the function $f$ is differentiable in $(0,\epsilon)$.



Note that the real challenge lies in keeping the differentiable function $\phi$ strictly less than one at the same time. This seems very much possible, at least pictorially: the graph of $f$ lies strictly below the line $Y=1$, so one should be able to draw a smooth curve between the graph of $f$ and the line $Y=1$. But I cannot prove it rigorously. I need your help. Thanks in advance!


Answer



Pick monotonic sequences $a_n\to 0$ and $b_n\to 1$.
Since $f$ is bounded away from $1$ on $[a_{n},b_{n}]$, you can readily find smooth (in fact constant) $\psi_n\colon[a_n,b_n]\to (0,1)$ that is differentiable and $\ge f$ on this interval.
Now you can recursively glue together $\phi_n$ on $[a_{n},b_{n}]$ such that $\phi_n(x)=\phi_{n-1}(x)$ for $x\in[a_{n-2},b_{n-2}]$, $\psi_n(x)\ge\phi_n(x)\ge\phi_{n-1}(x)$ for $x\in[a_{n-1},b_{n-1}]\setminus[a_{n-2},b_{n-2}]$, and $\phi_n(x)=\psi_n(x)$ for $x\in[a_{n},b_{n}]\setminus [a_{n-1},b_{n-1}]$.
Finally let $\phi(x)=\phi_{n+1}(x)$ where $n$ is such that $x\in[a_{n},b_{n}]$.



real analysis - How do I prove the sequence $x_{n+1}=\sqrt{\alpha +x_n}$, with $x_0=\sqrt\alpha$, converges? (boundedness?)



I need to "study the limit behavior of the following sequence" and then compute the limit. I can compute the limit and prove the monotonicity, but I am having trouble proving boundedness. I tried to understand from other similar posts how this could be done, but since I didn't understand I'm asking it as a new question.
The sequence is $x_{n+1}=\sqrt{\alpha +x_n}, x_0=\sqrt \alpha, \alpha>0$.




It seems to be monotonically increasing.



Proof:



It can be shown by induction that the sequence is monotonically increasing.



Claim that $x_{n+1}\geq x_n$
$$x_1=\sqrt{\alpha+\sqrt\alpha}>\sqrt\alpha=x_0$$
Inductive step ($x_{n+1}\ge x_n \implies x_{n+2}\ge x_{n+1}$):

$$x_{n+2}=\sqrt{\alpha+x_{n+1}}\geq\sqrt{\alpha+x_n}=x_{n+1}$$
Boundedness:



Because the sequence is monotonically increasing and $x_0=\sqrt\alpha$, it is bounded below by $\sqrt\alpha$. Now comes the part where I have to prove that the sequence is bounded above. The problem is that I don't know how to start, so if anyone could give me a bit of a push I'd be grateful.



The limit:



I know that $\lim\ x_n=\lim \ x_{n+1}$, so letting $\lim\ x_n=x$ then
$$x=\sqrt{\alpha+x}$$ and solving would give $x=\frac{1+\sqrt{1+4\alpha}}{2}$.




May somebody please confirm my proof thus far, and help me prove the upper bound?
Thanks!


Answer



Instead of just proving the boundedness, let me show you the logic to construct the proof.



Step 1: guess the answer.




In this step, you don't need to be rigorous. You just use whatever intuition to guide you for a potential candidate of answer.





Let's say you want an upper bound $L$.



What does it mean? It means you want an $L$ such that for any $x \in (0,L]$, $\sqrt{\alpha+x} \le L$.



Since $\sqrt{\alpha+x}$ is an increasing function in $x$, you want



$$\begin{align}&\sqrt{\alpha+L} \le L\\
\iff &\alpha + L \le L^2\\
\iff &\alpha \le L^2 - L\\

\iff &\alpha + \frac14 \le (L - \frac12)^2\\
\implies &\sqrt{\alpha + \frac14} + \frac12 \le L\tag{*}
\end{align}$$
Notice $\sqrt{\alpha + \frac14}$ is bounded above by $\sqrt{\alpha} + \sqrt{\frac14} = \sqrt{\alpha} + \frac12$. This means if you have chosen $L$ such that $\sqrt{\alpha} + 1 \le L$, $(*)$ will be satisfied.



Step 2: verify your answer.




Once you have a potential candidate for an answer, you need to verify it. If your logic for finding the candidate is reasonable, you can usually reverse the steps to prove it is indeed what you want.




Let $L = \sqrt{\alpha}+1$, for any $0 \le x \le L$, we have:



$$\sqrt{\alpha + x} \le \sqrt{\alpha + L} = \sqrt{\alpha+ \sqrt{\alpha}+1} < \sqrt{\alpha + 2\sqrt{\alpha}+1} = \sqrt{\alpha} + 1 = L$$



This means if you start with a $x_n \le L$, you will get $x_{n+1} = \sqrt{\alpha + x_{n}} \le L$.



Since $x_0 \le L$, the boundedness of all $x_n$ is proved.




BTW, the rest of your steps look fine.
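
A quick numerical illustration of both the monotone increase and the bound $L=\sqrt{\alpha}+1$ (a sketch with the arbitrary choice $\alpha=2$, for which the limit $(1+\sqrt{1+4\alpha})/2$ is exactly $2$):

```python
import math

alpha = 2.0
bound = math.sqrt(alpha) + 1          # the upper bound L from the answer
x = math.sqrt(alpha)                  # x_0
for n in range(1, 13):
    x = math.sqrt(alpha + x)          # x_n
    print(n, x, x <= bound)           # increasing, and always below L
print("limit:", (1 + math.sqrt(1 + 4 * alpha)) / 2)
```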


calculus - Convergence of $\lim_{x \to 0}\frac{x - \sin{x}}{x^3}$

Without using L'Hôpital's Rule or the Taylor Series expansion for the sine function, I would like to show that the function
$$\frac{x - \sin{x}}{x^{3}}$$
has a limit at $0$. I understand that if it is assumed that the function has a limit at $0$, the limit must be $1/6$.
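
The value $\frac16$ is easy to probe numerically (a sketch only; it says nothing about why the limit exists, and pushing $x$ much smaller loses precision to the subtraction $x-\sin x$):

```python
import math

for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, (x - math.sin(x)) / x ** 3)   # approaches 1/6 = 0.1666...
```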

calculus - Closed Form for $~\lim\limits_{n\to\infty}~\sqrt n\cdot(-e)^{-n}\cdot\sum\limits_{k=0}^n\frac{(-n)^k}{k!}$





$\qquad\qquad\qquad$ Does $~\displaystyle\lim_{n\to\infty}\frac{\sqrt n}{(-e)^n}\cdot\sum_{k=0}^n\frac{(-n)^k}{k!}~$ possess a closed form expression ?




Inspired by this frequently asked question, I wondered what would happen if the sum were allowed to alternate. Numerically, it seems to converge to a value around $~\dfrac15$ . Unfortunately, I wasn't truly able to grasp any of the various approaches used to evaluate the other related limit $($yes, I actually read carefully through all of them$)$, so I haven't been successful in developing a viable method for expressing this one either. $($Perhaps a new, insightful answer will also help me cast some fresh light on older ones ?$)$


Answer



It was shown in the answers to this question that



$$

e^{-x}\sum_{k=0}^n\frac{x^k}{k!} = \frac{1}{n!}\int_x^\infty e^{-t}\,t^n\,dt,
$$



so setting $x=-n$ we have



$$
\begin{align}
e^{n}\sum_{k=0}^n\frac{(-n)^k}{k!} &= \frac{1}{n!}\int_0^\infty e^{-t}\,t^n\,dt + \frac{1}{n!}\int_{-n}^0 e^{-t}\,t^n\,dt \\
&= 1 + \frac{(-1)^n}{n!} \int_0^n e^u u^n\,du \\
&= 1 + \frac{(-1)^n n^{n+1}}{n!} \int_0^1 e^{n [v+\log v]}\,dv. \tag{$*$}

\end{align}
$$



The quantity $v+\log v$ is increasing and so has a maximum at $v=1$, and near there



$$
v+\log v = 1 + 2(v-1) + O\!\left((v-1)^2\right).
$$



By the Laplace method we therefore have




$$
\int_0^1 e^{n [v+\log v]}\,dv \sim \int_{-\infty}^1 e^{n[1 + 2(v-1)]}\,dv = \frac{e^n}{2n}.
$$



Using this and Stirling's formula



$$
n! \sim \left(\frac{n}{e}\right)^n \sqrt{2\pi n}
$$




we deduce from $(*)$ that



$$
\sum_{k=0}^n\frac{(-n)^k}{k!} \sim \frac{(-e)^n}{2\sqrt{2\pi n}}.
$$



The limit in the question is



$$

\lim_{n\to\infty}\frac{\sqrt n}{(-e)^n}\cdot\sum_{k=0}^n\frac{(-n)^k}{k!} = \frac{1}{2\sqrt{2\pi}} = 0.199471\ldots
$$
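
A high-precision numerical check of the limit (a sketch using mpmath at 60 significant digits, so the alternating sum stays trustworthy):

```python
from mpmath import mp, mpf, e, sqrt, factorial, pi

mp.dps = 60
n = 500
s = sum(mpf(-n) ** k / factorial(k) for k in range(n + 1))
print(sqrt(n) * s / (-e) ** n)   # ~0.1995..., drifting toward the limit as n grows
print(1 / (2 * sqrt(2 * pi)))    # 0.19947114...
```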


Tuesday, December 13, 2016

exponentiation - How can I intuitively understand complex exponents?



Could you help me fill in a gap in my understanding of maths?



As long as the exponent is rational, I can decompose it into something that always works with primitives I know.




Say I see a power $x^{-{a\over b}}$ for natural $a, b$.



Using the basic rules:



$$\begin {align}
x^{-a} =& {1 \over x^a} \\
x^{1\over b} =& \sqrt[b]{x} \\
({x^a})^b =& {x^{a b}}
\end{align}$$




I can always decompose rational exponents into $ x^{{1\over b} \cdot (-a)} = \sqrt[b]{x}^{-a} = {1 \over \sqrt[b]{x}^a }$, and integer-degree roots are something well within my grasp. An integer root is just the inverse of a number of multiplications: $\sqrt[a]{x}$ is the $y$ with $y \cdot y \cdot y \cdot \ldots \cdot y = x$, as many $y$'s as $a$ says.



If the exponent is irrational, it's trickier, but I can always take $e^{\pi} \approx e^{314159265358979323 \over 100000000000000000}$ and take more $\pi$ digits if needed, and even though the number of underlying multiplications becomes ridiculous, that's still something I can understand.



But I'm completely at a loss how to understand stuff like $x^{i\pi \over 4}$. I just can't perform an imaginary number of multiplications. I know how to convert complex numbers between the exponential and $a+ib$ forms, but I perform it like a mysterious voodoo recipe, without being able to wrap my mind around how that kind of exponentiation is supposed to work. I can still apply the old rules, $x^a \cdot x^b = x^{a+b}$ and $({x^a})^b = {x^{a b}}$, and obtain correct results, but I just don't understand the underlying mechanism: if $x^n$ is just $n$ multiplications of $x$, how can I understand what happens when $n$ is imaginary?


Answer



In brief: thinking of exponentiation as repeated multiplication is equivalent to the identity $a^{m+n}=a^ma^n$. But focusing on the latter equation instead of the former concept allows us to expand exponentiation beyond the concept of "multiply the base $n$ times", to negative, rational, real, and complex exponents. As you say in the OP question, "the basic rules" allows you to extend past the notion of repeated multiplication for counting numbers. Focusing instead on the equation $a^{m+n}=a^ma^n$ means following the mantra: exponentiation turns additions into multiplications, and rotations are a natural multiplication on the complex plane.







In more detail:



The notion of exponentiation as repeated multiplication is, for natural numbers $n,m$ equivalent to the identity $a^{m+n}=a^ma^n$, because



$$a^{n} = a^{\underbrace{1+\dotsb+1}_{n\text{ times}}}=\underbrace{a\cdot\dotsb\cdot a}_{n\text{ times}}.$$



By focusing on this identity more so than the notion of "multiplication repeated $n$ times", it also allows us to make sense of exponentiation of naturals, integers, rationals, reals, complexes, even matrices and more, whereas the repeated multiplication notion only makes sense for $n$ a natural number, a counting number. We only extend to exponentiation of zero, negatives, and rationals via the above identity (or other similar ones). Therefore we should view the identity $a^{m+n}=a^ma^n$ not just as a consequence of exponentiation as repeated multiplication, but as a complete and fundamental conceptual replacement. As our conceptual starting point. Exponentiation is, by definition and fundamental conception, the operation that turns addition into multiplication.



As you say in your question, you must extract an understanding of exponentiation of rational, negative, and real exponents via the "basic rules". This identity $a^{m+n}=a^ma^n$ is the most basic of our basic rules, and it will guide us in extracting an understanding of imaginary exponents as well.




Now to the matter at hand. On the real line, real numbers act additively by shifting, and multiplicatively by scaling away from zero.



On the complex plane, real numbers act additively by shifting along the real axis (horizontally), imaginary numbers act additively by shifting along the imaginary axis (vertically). Note that orbits of these two actions are orthogonal. Horizontal lines versus vertical.



Real numbers act multiplicatively by stretching away from the origin, while the imaginary unit $i$ acts by rotating $90^\circ$. Note that the orbits of these two actions are also orthogonal. Orbits of scalings are radial lines; orbits of rotations are circles.



Now we have decided that our most fundamental identity of exponentials is $a^{x+y} = a^xa^y$. Exponentiation turns adders into multipliers. It turns real adders (i.e. horizontal shifts) into real multipliers, i.e. scalings away from zero. Horizontal lines into radial lines.



And therefore what must exponentiation transform the orthogonal imaginary shifts, i.e. vertical shifts, into? They must transform into the multipliers which are orthogonal to the radial expansions, which are the rotations. Vertical lines into circles. So exponentiation with an imaginary exponent must be a rotation.




The base of the exponentiation sets the size scale of these stretchings and rotations, and exponentiation with base $e$ does natural rotations in radians, but this picture works with any base of exponentiation, so long as $a>1.$



This intuition is encoded in Euler's identity $e^{i\theta}=\cos\theta + i\sin\theta$. A special case is $e^{i\pi} = -1$, which just says that rotation by $180^\circ$ is the same thing as reflection. This intuitive point of view for understanding Euler's identity is explained in a popular 3Blue1Brown video.



So how do we understand an expression like $x^{\frac{i\pi}{4}}$? Well, assuming $x$ is real with $x>0$, since the exponent is imaginary, it's a rotation. How big a rotation? Well, it depends on the base $x$ and the exponent $\frac{i\pi}{4}$. Computing this magnitude could be thought of as an exercise in operations derived from repeated multiplication, as you set out in your question, but doing so is of limited utility.



Instead we should think of it as an adder $\frac{\pi}{4}$, turned into a vertical shift $\frac{i\pi}{4}$ by the complex unit $i$, and then turned into a rotation by exponentiation with base $x$ (as long as $x$ is real and $x>1.$) The magnitude of $x$ determines the speed of the rotation, or the units.
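
A concrete check of the "imaginary exponents rotate" picture (a sketch with the arbitrary base $x=2$): since $x^{i\theta}=e^{i\theta\ln x}$ for real $x>0$, the result has modulus $1$, i.e. it is a pure rotation whose angle depends on the base.

```python
import cmath, math

x = 2.0
z = x ** (1j * math.pi / 4)    # Python evaluates exp((i*pi/4) * ln x)
print(z, abs(z))               # |z| = 1: a pure rotation, no stretching
print(cmath.phase(z), (math.pi / 4) * math.log(x))   # rotation angle = (pi/4) ln 2
```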


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...