Monday, October 31, 2016

algebra precalculus - Asymptote to $frac{sin x}{x}$?


I have seen elsewhere that:


$$y=\frac{\sin x}{x}$$


has a horizontal asymptote of $y=0$, as it approaches that line as $x$ tends to $\pm \infty$.


Now, why does it not have an asymptote of $x=0$ or $y=1$, as the curve tends towards but never touches these lines? (Which satisfies the definition given by wolfram alpha)


Answer



Let's make our definition of an asymptote more clear.


An vertical asymptote for $x=a$ occurs if $\lim_{x\to a^+}f(x)=\pm\infty$ and $\lim_{x\to a^-}f(x)=\pm\infty$. The limit from the left does not have to equal the limit on the right, in fact there is an asymptote as long as one side goes to $\pm\infty$. Take the asymptote of $f(x)=\ln(x)$ for example.



A horizontal asymptote for $y=b$ occurs if $\lim_{x\to\infty}f(x)=b$ or if $\lim_{x\to-\infty}f(x)=b$, where $b$ is finite.


We can also have a curved asymptote. Say $f(x)$ is asymptotic to $g(x)$, then $\lim_{x\to\pm\infty}f(x)-g(x)=0$. The two functions must both get really close to each other as $x$ becomes arbitrarily large.


So to answer your questions, $y=0$ is an asymptote but $x=0$ and $y=1$ are not.


Connection between GCD and LCM of two numbers



These two exercises I encountered recently seem to develop some type of connection between GCD and LCM I can't quite figure out.



Exercise 1:





Find all the numbers $x$ and $y$ such that:



$a) \ GCD(x,y)=15, \ LCM(x,y)=150$ $b) \ GCD(x,y)=120 \ LCM(x,y)=1320$
$c) \ GCD(x,y)=100 \ LCM(x,y)=990$




Exercise 2:






Find all the numbers $m,n$ such that $GCD(m,n)=pq , \ LCM(m,n)=p^2qs$




where $p,q,s$ are prime




The first thing that is known to me is that $GCD(x,y) \cdot LCM(x,y)= x \cdot y$




Also $LCM(x,y)$ is at most $x \cdot y$ while $GCD(x,y)$ is at most $\max \{x,y\}$. Last thing is that $GCD(x,y)|LCM(x,y)$.



Using all this I tried to solve the first exercise:



$a)$ First two obvious pairs are $x=15, y=150$ and $y=15, x=150$. Now neither of the numbers can be bigger than $150$ or smaller than $15$. So we are looking for numbers in the range $15-150$ that satisfy $x \cdot y = 15 \cdot 150$ Another such pair is $(x,y)=(30,75), \ (x,y)=(75,30)$.



Similarly for $b)$ we find that the only possible values are permutations of the set {$120,1320$} and in $c)$ since $100$ does not divide $990$ no such numbers exist.



Now exercise 2 is what made me think there is actually another connection I'm not quite aware of since now it's about arbitrary prime numbers and the previous method doesn't work anymore. My intuition is that it has something to do with $GCD$ or $LCM$ of the $GCD(x,y), \ LCM(x,y)$


Answer




If you have two numbers with prime factorizations



$$x = p_1^{a_1}p_2^{a_2}p_3^{a_3}\cdots p_n^{a_n}$$
$$y = p_1^{b_1}p_2^{b_2}p_3^{b_3}\cdots p_n^{b_n}$$



then



$$GCD(x,y) = p_1^{min(a_1, b_1)}p_2^{min(a_2, b_2)}p_3^{min(a_3, b_3)}\cdots p_n^{min(a_n, b_n)}$$



and




$$LCM(x,y) = p_1^{max(a_1, b_1)}p_2^{max(a_2, b_2)}p_3^{max(a_3, b_3)}\cdots p_n^{max(a_n, b_n)}$$



where $min(a,b)$ and $max(a,b)$ are the minimum and maximum of $a$ and $b$, respectively.



Does this help?


calculus - evaluation of limit $limlimits_{xto infty}left(frac{2arctan(x)}{pi}right)^x $



I`m trying to evaluate this limit and I need some advice how to do that.
$$\lim\limits_{x\to \infty}\left(\frac{2\arctan(x)}{\pi}\right)^x $$
I have a feeling it has to do with a solution of form $1^\infty$ but do not know how to proceed. Any hints/solutions/links will be appreciated


Answer



It could be suitably modified into something involving the limit $(1+\frac1x)^x\rightarrow e$ for $x\to\infty$.

$$
\left(\frac{2\arctan x}{\pi}\right)^x
~=~
\left[1 + \left(\frac{2\arctan x}{\pi}-1\right)\right]^x
$$
Let $f(x)=\left(\frac{2\arctan x}{\pi}-1\right)$; clearly $f(x)\to 0$ for $x\to+\infty$, therefore
$$
\left[1+f(x)\right]^{\frac{1}{f(x)}}\longrightarrow e
$$
Let us focus on the limit of $xf(x)$: using l'Hospital's rule we get

$$
\lim_{x\to+\infty}x\,f(x)
~=~
\lim_{x\to+\infty}
\frac{\frac{2\arctan x-1}{\pi}}{\frac1x}
~\stackrel H=~
\lim_{x\to+\infty}
\frac{\frac{2}{\pi(1+x^2)}}{-\frac1{x^2}}
~=~
-\frac2\pi

$$
Now, putting all together:
$$
\lim_{x\to+\infty}
\left(\frac{2\arctan x}{\pi}\right)^x
~=~
\lim_{x\to+\infty}
\big(1+f(x)\big)^x
~=~
\lim_{x\to+\infty}

\left[\big(1+f(x)\big)^{\frac{1}{f(x)}}\right]^{xf(x)}
~=~
e^{-2/\pi}
$$
Generally, when you run into $1^\infty$ you can work it out in this way.


Sunday, October 30, 2016

abstract algebra - Right invertible and left zero divisor in matrix rings over a commutative ring


If a ring $R$ is commutative, I don't understand why if $A, B \in R^{n \times n}$, $AB=1$ means that $BA=1$, i.e., $R^{n \times n}$ is Dedekind finite.




Arguing with determinant seems to be wrong, although $\det(AB)=\det(BA ) =1$ but it necessarily doesn't mean that $BA =1$.




And is every left zero divisor also a right divisor ?



calculus - Confirm that $int_{0}^{infty}t^{-1}sin t dt=pi/2$




Confirm that $\int_{0}^{\infty}t^{-1}\sin t dt=\pi/2$.




The guide book I am using gives the following help:



Consider $\int_{\gamma}z^{-1}e^{iz}dz$, where for $0



Exercise IV$.4.20.$ For $r$ with $0

Using the hint, I know that $\int_{\gamma}z^{-1}e^{iz}dz=0$ for the Cauchy theorem, with which $\int_{[s,r]}z^{-1}e^{iz}dz+\int_{\gamma_r}z^{-1}e^{iz}dz+\int_{[-r,-s]}z^{-1}e^{iz}dz-\int_{\gamma_s}z^{-1}e^{iz}dz=0$, but I do not know what else to do here, could someone help me please? Thank you very much.


Answer



Define a path in the Complex Plane: enter image description here




Now consider $$\int_{C} \frac{e^{iz}}{z}dz=\int_{arc}\frac{e^{iz}}{z}dz+\int_{Arc}\frac{e^{iz}}{z}dz+\int_{-R}^{-r}\frac{e^{iz}}{z}dz+\int_{r}^{R}\frac{e^{iz}}{z}dz$$



By parametizing the integrals over the arcs, and letting $r\to0$ for $arc$ and $R\to\infty$ for $Arc$, we see that $\int_{arc}\to i\int_{\pi}^0d\theta$ and $\int_{Arc}$$\to0$.



So we have $$\int_{C} \frac{e^{iz}}{z}dz=PV\int_{-\infty}^{\infty}\frac{e^{iz}}{z}dz-\pi i$$ where $PV$ denotes the Cauchy Principal Value.



Since the contour does not enclose any poles, the entire contour integral is $0$.



So $$0=PV\int_{-\infty}^{\infty}\frac{e^{iz}}{z}dz-\pi i$$ $$PV\int_{-\infty}^{\infty}\frac{e^{iz}}{z}dz=\pi i$$ Note that due to Euler's Formula $$Im(PV\int_{-\infty}^{\infty}\frac{e^{iz}}{z}dz)=\int_{-\infty}^{\infty}\frac{sin(z)}{z}dz$$

And so $$\int_{-\infty}^{\infty}\frac{sin(z)}{z}dz=\pi$$
Lastly since $\frac{sin(z)}{z}$ is an even function:
$$\int_{0}^{\infty}\frac{sin(z)}{z}dz=\frac{\pi}{2}$$


lebesgue integral - Finite measure space & sigma-finite measure space

A measure space $(X, \Sigma, \mu)$ is finite if $\mu(X)<\infty$.



It is equivalent to saying that $(X, \Sigma, \mu)$ is finite if $\mu(E)<\infty$ for all $E \in \Sigma$



A measure space $(X, \Sigma, \mu)$ is $\sigma$-finite if X is a countable union of sets with finite measure.



My two questions is that





  1. Does $\sigma$-finiteness imply that $\mu(E)<\infty$ for all $E \in \Sigma$?


  2. If $\mu(E)<\infty$ for all $E \in \Sigma$, dose it imply $\sigma$-finiteness or finiteness of a measure space?




Thanks.

Saturday, October 29, 2016

linear algebra - Different interpretations of imaginary number

I went through a linear algebra course and I'm a bit confused..



I think I understand the geometric interpretation of imaginary numbers - multiplying by $i$ results in rotation by $90$ degrees in so that $1$ becomes $i$ and so forth. And this is where that $i^2 = -1$ comes from.




And then there's the matrix representation of $i$, which I understand emerged from a later generalization of complex numbers. I interpret the matrix representation as transform function which basically projects the imaginary axis to the real axis. I've thought of it as something very similar to vectors, with the difference that with vectors I write:



$P = x\mathbf{\hat{i}} + y\mathbf{\hat{j}}$ where $\mathbf{\hat{i}} = (1, 0)$ and $\mathbf{\hat{j}} = (0, 1)$



..and with complex numbers I can write:



$C = a + bi$ where $i$ = $2\times 2$ matrix, which represents the same $90$ degree transform logic by transformation.



Correct? Or at least close?




Anyways, as I understand, both of these interpretations of $i$ are actually later than $i=\sqrt{-1}$ itself. Is there an earlier interpretation? How did those who invented imaginary number prove that $i = \sqrt{-1}$ in the first place?



Thanks!

divisibility - Gcd number theory proof: $(a^n-1,a^m-1)= a^{(m,n)}-1$

Prove that if $a>1$ then $(a^n-1,a^m-1)= a^{(m,n)}-1$


where $(a,b) = \gcd(a,b)$


I've seen one proof using the Euclidean algorithm, but I didn't fully understand it because it wasn't very well written. I was thinking something along the lines of have $d= a^{(m,n)} - 1$ and then showing $d|a^m-1$ and $d|a^n-1$ and then if $c|a^m-1$ and $c|a^n-1$, then $c\le d$.


I don't really know how to show this though...


I can't seem to be able to get $d* \mathbb{K} = a^m-1$.



Any help would be beautiful!

Friday, October 28, 2016

trigonometry - How Can One Prove $cos(pi/7) + cos(3 pi/7) + cos(5 pi/7) = 1/2$




Reference: http://xkcd.com/1047/



We tried various different trigonometric identities. Still no luck.



Geometric interpretation would be also welcome.



EDIT: Very good answers, I'm clearly impressed. I followed all the answers and they work! I can only accept one answer, the others got my upvote.


Answer




Hint: start with $e^{i\frac{\pi}{7}} = \cos(\pi/7) + i\sin(\pi/7)$ and the fact that the lhs is a 7th root of -1.



Let $u = e^{i\frac{\pi}{7}}$, then we want to find $\Re(u + u^3 + u^5)$.



Then we have $u^7 = -1$ so $u^6 - u^5 + u^4 - u^3 + u^2 -u + 1 = 0$.



Re-arranging this we get: $u^6 + u^4 + u^2 + 1 = u^5 + u^3 + u$.



If $a = u + u^3 + u^5$ then this becomes $u a + 1 = a$, and rearranging this gives $a(1 - u) = 1$, or $a = \dfrac{1}{1 - u}$.




So all we have to do is find $\Re\left(\dfrac{1}{1 - u}\right)$.



$\dfrac{1}{1 - u} = \dfrac{1}{1 - \cos(\pi/7) - i \sin(\pi/7)} = \dfrac{1 - \cos(\pi/7) + i \sin(\pi/7)}{2 - 2 \cos(\pi/7)}$



so



$\Re\left(\dfrac{1}{1 - u}\right) = \dfrac{1 - \cos(\pi/7)}{2 - 2\cos(\pi/7)} = \dfrac{1}{2} $


elementary number theory - Find conditions on positive integers so that $sqrt{a}+sqrt{b}+sqrt{c}$ is irrational




Find conditions on positive integers
$a, b, c$
so that $\sqrt{a}+\sqrt{b}+\sqrt{c}$ is irrational.





My solution:
if $ab$ is not the square of an integer,
then the expression is irrational.
I find it interesting
that $c$ does not come into this
at all.



My solution is modeled
(i.e., copied with modifications)
from dexter04's solution

to Prove that $\sqrt{3}+ \sqrt{5}+ \sqrt{7}$ is irrational
.



Suppose $\sqrt{a}+\sqrt{b}+\sqrt{c} = r$
where $r$ is rational.
Then,
$(\sqrt{a}+\sqrt{b})^2
= (r-\sqrt{c})^2
\implies a+b+2\sqrt{ab}
= c+r^2-2r\sqrt{c}$.




So, $a+b-c-r^2+2\sqrt{ab} =-2r\sqrt{c}$.
Let $a+b-c-r^2 = k$,
which will be a rational number.
So,
$(k+2\sqrt{ab})^2 = k^2+ 4ab+4k\sqrt{ab} = 4cr^2$
or
$4k\sqrt{ab} = 4cr^2-k^2- 4ab$.



If $ab$ is not a square

of an integer,
then the LHS is irrational
while the RHS is rational.
Hence, we have a contradiction.


Answer



For $\sqrt{a}+\sqrt{b}+\sqrt{c}$ to be rational, $ab$ being a perfect square is only a necessary condition, but it is not sufficient (your proof is correct, only the conclusion that it is sufficient is wrong). The sufficient and necessary condition is $a,b,c$ are all perfect squares (easy to see that it is sufficient and I'm going to prove it is necessary).



Symmetry lets us conclude $ab=A^2,bc=B^2,ca=C^2$.



Let $(a,b)=d$. Then $a=da_1, b=db_1$ and since $d^2a_1b_1=A^2$, we have $a=da_2^2, b=db_2^2$.




$bc=B^2\implies db_2^2c=B^2\implies dc=b_3^2\tag{1}$



Let $(c,d)=D$. Then $c=Dc', d=Dd'$. $(1)\implies c=Dc_1^2, d=Dd_1^2$.



So $(a,b,c)=(D(d_1a_2)^2,D(d_1b_2)^2,Dc_1^2)$



$$\sqrt{a}+\sqrt{b}+\sqrt{c}=\sqrt{D}(|d_1a_2|+|d_1b_2|+|c_1|)\in\mathbb Q\iff \sqrt{D}\in\mathbb Q$$$$\iff \sqrt{D}\in\mathbb Z\iff D=D'^2$$



I used the fact that $\sqrt{a},a\in\mathbb Z$ is either an integer or irrational. If it is rational but not an integer, then $\sqrt{a}=\frac{a'}{b'}, a'\nmid b'\implies a=\frac{a'^2}{b'^2}$.
Contradiction, since LHS is an integer and RHS is not ($a'\nmid b'\iff a'^2\nmid b'^2$).




$(a,b,c)=((D'd_1a_2)^2,(D'd_1b_2)^2,(D'c_1)^2)\ \ \ \square$


complex numbers - Question about Euler's formula




I have a question about Euler's formula



$$e^{ix} = \cos(x)+i\sin(x)$$



I want to show



$$\sin(ax)\sin(bx) = \frac{1}{2}(\cos((a-b)x)-\cos((a+b)x))$$



and




$$ \cos(ax)\cos(bx) = \frac{1}{2}(\cos((a-b)x)+\cos((a+b)x))$$



I'm not really sure how to get started here.



Can someone help me?


Answer



$$\sin { \left( ax \right) } \sin { \left( bx \right) =\left( \frac { { e }^{ aix }-{ e }^{ -aix } }{ 2i } \right) \left( \frac { { e }^{ bix }-{ e }^{ -bix } }{ 2i } \right) } =\frac { { e }^{ \left( a+b \right) ix }-e^{ \left( a-b \right) ix }-{ e }^{ \left( b-a \right) ix }+{ e }^{ -\left( a+b \right) ix } }{ -4 } \\ =-\frac { 1 }{ 2 } \left( \frac { { e }^{ \left( a+b \right) ix }+{ e }^{ -\left( a+b \right) ix } }{ 2 } -\frac { { e }^{ \left( a-b \right) ix }+{ e }^{ -\left( a-b \right) ix } }{ 2 } \right) =\frac { 1 }{ 2 } \left( \cos { \left( a-b \right) x-\cos { \left( a+b \right) x } } \right) $$



same method you can do with $\cos { \left( ax \right) \cos { \left( bx \right) } } $







Edit:
$$\int { \sin { \left( ax \right) \sin { \left( bx \right) } } dx=\frac { 1 }{ 2 } \int { \left[ \cos { \left( a-b \right) x-\cos { \left( a+b \right) x } } \right] dx=\quad } } $$$$\frac { 1 }{ 2 } \int { \cos { \left( a-b \right) xdx } } -\frac { 1 }{ 2 } \int { \cos { \left( a+b \right) xdx= } } $$



now to order calculate $\int { \cos { \left( a+b \right) xdx } } $ write
$$t=\left( a+b \right) x\quad \Rightarrow \quad x=\frac { t }{ a+b } \quad \Rightarrow dx=\frac { 1 }{ a+b } dt\\ \int { \cos { \left( a+b \right) xdx=\frac { 1 }{ a+b } \int { \cos { \left( t \right) } dt=\frac { 1 }{ a+b } \sin { \left( t \right) = } } \frac { 1 }{ a+b } \sin { \left( a+b \right) x } } } +C\\ $$


elementary set theory - Specify a bijection from [0,1] to (0,1].

A) Specify a bijection from [0,1] to (0,1]. This shows that |[0,1]| = |(0,1]|



B) The Cantor-Bernstein-Schroeder (CBS) theorem says that if there's an injection from A to B and an injection from B to A, then there's a bijection from A to B (ie, |A| = |B|). Use this to come to again show that |[0;1]| = |(0;1]|

Thursday, October 27, 2016

real analysis - Using the Law of the Mean (MVT) to prove the inequality $log(1+x)

If $x \gt0$, then $\log(1+x) \lt x$.



My attempt at the proof thus far...



Let $f(x) = x-\log(1+x)$, then $f'(x) = 1-\frac{1}{1+x}$ for some $\mu \in (0,x)$ (since $x>0$)




MVT give us $f(x) = f(a) + f'(\mu)(x-a)$



So plug in our values we get:



$$x-\log(1+x) = 0+(1-\frac{1}{1+\mu})(x-0)$$



which we can reduce to



$$\log(1+x)=\frac{x}{1+\mu}$$




Now For $x\leq1$, then $0\lt\mu\lt1$



S0 $0\lt \frac{1}{1+\mu}\lt 1$, thus $\log(1+x)\lt x$



If $x>1$, then....



So I can see clearly that if $x>1$ is plugged into $\frac{x}{1+\mu}$



then $\log(1+x)


I would appreciate tips hints or proof completions.

integration - Evaluating the definite integral $int_0^infty frac{mathrm{e}^x}{left(mathrm{e}^x-1right)^2},x^n ,mathrm{d}x$



I am having difficulty evaluating definite integrals of the form $\int_0^\infty \frac{\mathrm{e}^x}{\left(\mathrm{e}^x-1\right)^2}\,x^n \,\mathrm{d}x$. I would appreciate any guidance that could be offered. I am aware that these evaluate to constant powers of $\pi$, but find the integration challenging. Thank you.


Answer



$$ \int_0^\infty \frac{e^x}{(e^x-1)^2} x^n dx = \int_0^\infty \frac{e^{-x}}{(1-e^{-x} )^2} x^n dx .$$




For $|z|<1 $ we have $\displaystyle \frac{1}{1-z} =\sum_{k=0}^{\infty} z^k $ so by differentiating we get $\displaystyle \frac{1}{(1-z)^2} = \sum_{k=1}^{\infty} k z^{k-1}. $ Thus for $x\in (0,\infty),$ $$ \frac{e^{-x}}{(1-e^{-x} )^2} = \sum_{k=1}^{\infty} k e^{-kx}.$$ Hence, (after applying the monotone convergence theorem) the desired integral is equal to $$ \sum_{k=1}^{\infty} k \int^{\infty}_0 x^n e^{-kx} dx .$$



If we let $u=kx $ we find that $$ \int^{\infty}_0 x^n e^{-kx} dx = \int^{\infty}_0 \left( \frac{u}{k} \right)^n e^{-u} \cdot \frac{1}{k} du = \frac{1}{k^{n+1}} \Gamma(n+1) = \frac{n!}{k^{n+1}}.$$



Thus, $$\int^{\infty}_0 \frac{e^x}{(e^x-1)^2} x^n dx = n! \zeta(n) .$$



Your suspicion that the value of the integral is a constant times a power of $\pi$ is correct for even integers, as $$(2n)! \cdot \zeta(2n) = (2n)! \cdot (-1)^{n+1} \frac{B_{2n} (2\pi)^{2n} }{2 (2n)!} = (-1)^{n+1} \frac{B_{2n} (2\pi)^{2n} }{2} $$



but for odd n no simpler form for $\zeta(n)$ is known (and it probably isn't simple powers of $\pi.$)


sequences and series - Mathematical Induction: Sum of first n even perfect squares

So the series is $$P_k: 2^2 + 4^2 + 6^2 + ... + (2k)^2 = \frac{2k(k+1)(2k+1)}3$$



and i have to replace $P_k$ with $P_{k+1}$ to prove the series.



I have to show that $$\frac{2k(k+1)(2k+1)}3 + [2(k+1)]^2 = \frac{2(k+1)(k+2)(2k+3)}3$$ but I don't know how.

lebesgue integral - Writing integration over abstract measure space as integration over $mathbb{R}$




Let $(X,\mathcal{M},\mu)$ be a $\sigma$-finite measure space and $f$ a measurable real valued function on $X$. Prove that
\begin{equation*}
\int_X e^{f(x)}\mathrm{d}\mu(x) =
\int_\mathbb{R} e^{t}\mu(E_t)\mathrm{d}t
\end{equation*}
where $E_t=\{x\mid f(x)>t\}$ for each $t\in\mathbb{R}$.



Can this be solved by a change of variable formula?


Answer



\begin{align*}

\int_{X}e^{f(x)}d\mu(x)& = \int_{X}\int_{-\infty}^{f(x)}e^{t}dt d\mu(x)\\
& = \int_{X}\int_{\mathbb{R}}I_{\{t< f(x)\}}(t)e^{t}dtd\mu(x)\\
& = \int_{X}\int_{\mathbb{R}}I_{\{f(x)>t\}}(x)e^{t}dtd\mu(x)\\
& = \int_{\mathbb{R}}\int_{X}I_{\{f(x)>t\}}(x)d\mu(x)e^tdt\\
& = \int_{\mathbb{R}}e^t\mu\left\{ f(x)>t \right\}dt.
\end{align*}


limits - Proof that $lim_{xto0}frac{sin x}x=1$

Is there any way to prove that
$$\lim_{x\to0}\frac{\sin x}x=1$$
only multiplying both numerator and denominator by some expression?

I know how to find this limit using derivatives, L'Hopital's rule, Taylor series and inequalities. The reason I tried to find it using only multiplying both numerator and denominator and then canceling out indeterminate terms is because the most other limits can be solved using this method.

This is an example:

$$\begin{align}\lim_{x\to1}\frac{\sqrt{x+3}-2}{x^2-1}=&\lim_{x\to1}\frac{x+3-4}{\left(x^2-1\right)\left(\sqrt{x+3}+2\right)}\\=&\lim_{x\to1}\frac{x-1}{(x+1)(x-1)\left(\sqrt{x+3}+2\right)}\\=&\lim_{x\to1}\frac{1}{(x+1)\left(\sqrt{x+3}+2\right)}\\=&\frac{1}{(1+1)\left(\sqrt{1+3}+2\right)}\\=&\frac18\end{align}$$
It is obvious that we firtly multiplied numerator and denominator by $\sqrt{x+3}+2$ and then canceled out $x-1$. So, in this example, we can avoid indeterminate form multiplying numerator and denominator by
$$\frac{\sqrt{x+3}+2}{x-1}$$
My question is can we do the same thing with $\frac{\sin x}x$ at $x\to0$? I tried many times, but I failed every time. I searched on the internet for something like this, but the only thing I found is geometrical approach and proof using inequalities and derivatives.

Edit
I have read this question before asking my own. The reason is because in contrast of that question, I do not want to prove the limit using geometrical way or inequalities.

real analysis - Find all roots of the equation :$(1+frac{ix}n)^n = (1-frac{ix}n)^n$



This question is taken from book: Advanced Calculus: An Introduction to Classical Analysis, by Louis Brand. The book is concerned with introductory real analysis.



I request to help find the solution.




If $n$ is a positive integer, find all roots of the equation :
$$(1+\frac{ix}n)^n = (1-\frac{ix}n)^n$$





The binomial expansion on each side will lead to:



$$(n.1^n+C(n, 1).1^{n-1}.\frac{ix}n + C(n, 2).1^{n-2}.(\frac{ix}n)^2 + C(n, 3).1^{n-3}.(\frac{ix}n)^3+\cdots ) = (n.1^n+C(n, 1).1^{n-1}.\frac{-ix}n + C(n, 2).1^{n-2}.(\frac{-ix}n)^2 + C(n, 3).1^{n-3}.(\frac{-ix}n)^3+\cdots )$$



$n$ can be odd or even, but the terms on l.h.s. & r.h.s. cancel for even $n$ as power of $\frac{ix}n$. Anyway, the first terms cancel each other.



$$(C(n, 1).1^{n-1}.\frac{ix}n + C(n, 3).1^{n-3}.(\frac{ix}n)^3+\cdots ) = (C(n, 1).1^{n-1}.\frac{-ix}n + C(n, 3).1^{n-3}.(\frac{-ix}n)^3+\cdots )$$



As the term $(1)^{n-i}$ for $i \in \{1,2,\cdots\}$ don't matter in products terms, so ignore them:




$$(C(n, 1).\frac{ix}n + C(n, 3).(\frac{ix}n)^3+\cdots ) = (C(n, 1).\frac{-ix}n + C(n, 3).(\frac{-ix}n)^3+\cdots )$$



$$2(C(n, 1).\frac{ix}n + C(n, 3).(\frac{ix}n)^3+\cdots ) = 0$$



$$C(n, 1).\frac{ix}n + C(n, 3).(\frac{ix}n)^3+\cdots = 0$$



Unable to pursue further.


Answer



Hint:

Put $$z=\frac{1+i\frac{x}{n}}{1-i\frac{x}{n}}$$ then $z$ will be a $n$-root of unity and solve for $x:$ $$z= \frac{1+i\frac{x}{n}}{1-i\frac{x}{n}}=\exp{\left(i\frac{2k\pi}{n}\right)},\quad k\in\{0,1,...,n-1\}$$


Wednesday, October 26, 2016

calculus - Find $f(x)$ for a function $f: R to R$, which satisfies condition $f(x+y^{3}) = f(x) + [f(y)]^{3}$ for all $x,y in R$ and $f'(0)≥0$.



Find $f(x)$ for a function $f: R \to R$, which satisfies condition $f(x+y^{3}) = f(x) + [f(y)]^{3}$ for all $x,y \in R$ and $f'(0)≥0$



My attempt:



Replacing $x$ and $y$ by $0$, $f(0)=0$



Replacing only x by $0$, $ f(y^{3}) = [f(y)]^{3}$




So $f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} $



$= \lim_{h \to 0} \frac{f(x) + [f(h^{1/3})]^{3} - f(x)}{h}$



$= \lim_{h \to 0} \frac{f(h)}{h}$



= $f'(0)$



Then I'm stuck.




How to proceed$?$


Answer



What you've shown so far is that $f'(x)$ is a constant, since $f'(x) = f'(0)$. And we know that $f(0) = 0$. This means that your solution is going to be something in the form $f(x) = ax$ for a non-negative constant $a$ (since you've specified that $f'(x) \geq 0$).



So what constants work? Well, we know that $f(x^3) = ax^3 = f(x)^3 = a^3x^3$. Cancel out the $x^3$ (since this must hold for any $x$, we can just pick whatever $x \neq 0$ we like) and you're left with $a = a^3$.



This equation has only two non-negative solutions: $a=0$ and $a=1$. These correspond to the functions $f(x) = 0$ and $f(x) = x$, respectively.



(There is of course a third option if you remove the $f'(x) \geq 0$ constraint, namely, $f(x) = -x$. Whoever wrote the question wanted to exclude this one specifically, for whatever unknown reason.)


Tuesday, October 25, 2016

combinatorics - How to simplify this triple summation containing binomial coefficients?


$$
\large\sum_{i=0}^{n} \sum_{j=i}^{n} \sum_{k=j}^{n}
\binom{i+m-1}{m-1}\binom{j+m-1}{m-1}\binom{k+m-1}{m-1}
$$





How to solve it when this involve more than thousand summation ?

functions - Bijection from $[0,1]^3$ to $[0,1]$?




Is there any bijection from $[0,1]^3$ to $[0,1]$? How can I construct it?


Answer



Hint:



If there exists a surjection between $A$ to $B$ and a surjection between $B$ to $A$, then there exists a bijection between $A$ to $B$. In your case, space filling curves are surjections from $[0,1]$ to $[0,1]^3$. It should be easy to find a surjection going the other way.


calculus - Evaluate $int_{0}^{infty} mathrm{e}^{-x^2-x^{-2}}, dx$



I have to find
$$I=\int_{0}^{\infty} \mathrm{e}^{-x^2-x^{-2}}\, dx $$
I think we could use
$$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2} $$ But I don't know how.
Thanks.


Answer



Consider
\begin{align}

x^{2} + \frac{1}{x^{2}} = \left( x - \frac{1}{x} \right)^{2} +2
\end{align}
for which
\begin{align}
I = \int_{0}^{\infty} e^{-\left(x^{2} + \frac{1}{x^{2}}\right)} \, dx = e^{-2} \, \int_{0}^{\infty} e^{-\left(x - \frac{1}{x}\right)^{2}} \, dx.
\end{align}
Now make the substitution $t = x^{-1}$ to obtain
\begin{align}
e^{2} I = \int_{0}^{\infty} e^{- \left( t - \frac{1}{t} \right)^{2}} \, \frac{dt}{t^{2}}.
\end{align}

Adding the two integral form leads to
\begin{align}
2 e^{2} I = \int_{0}^{\infty} e^{- \left( t - \frac{1}{t} \right)^{2}} \left(1 + \frac{1}{t^{2}} \right) \, dt = \int_{-\infty}^{\infty} e^{- u^{2}} \, du = 2 \int_{0}^{\infty} e^{- u^{2}} \, du = \sqrt{\pi},
\end{align}
where the substitution $u = t - \frac{1}{t}$ was made. It is now seen that
\begin{align}
\int_{0}^{\infty} e^{-\left(x^{2} + \frac{1}{x^{2}}\right)} \, dx = \frac{\sqrt{\pi}}{2 e^{2}}.
\end{align}


Monday, October 24, 2016

recurrence relations - How to solve the delay algebraic equation $xf(x) + alpha f(x - {x_0}) - alpha f(x + {x_0}) = 0$?


In the process of solving a problem, I am faced with the problem of finding a non-zero function $f:\mathbb{R} \to \mathbb{R}$ which satisfies the equation $$xf(x) + \alpha f(x - {x_0}) - \alpha f(x + {x_0}) = 0$$ for a known $x_0$ and $\alpha$. Unfortunately all I could find searching online is the topic of delay differential equation which seems to be more general than my question. Could anyone help me with some references, keywords or hints? Thanks in advance.


Answer




Let $f(x)=\int_a^be^{xs}K(s)~ds$ ,


Then $x\int_a^be^{xs}K(s)~ds+\alpha\int_a^be^{(x-x_0)s}K(s)~ds-\alpha\int_a^be^{(x+x_0)s}K(s)~ds=0$


$\int_a^bK(s)~d(e^{xs})+\alpha\int_a^be^{-x_0s}e^{xs}K(s)~ds-\alpha\int_a^be^{x_0s}e^{xs}K(s)~ds=0$


$[e^{xs}K(s)]_a^b-\int_a^be^{xs}~d(K(s))+\alpha\int_a^be^{-x_0s}e^{xs}K(s)~ds-\alpha\int_a^be^{x_0s}e^{xs}K(s)~ds=0$


$[e^{xs}K(s)]_a^b-\int_a^be^{xs}K'(s)~ds+\alpha\int_a^be^{-x_0s}e^{xs}K(s)~ds-\alpha\int_a^be^{x_0s}e^{xs}K(s)~ds=0$


$[e^{xs}K(s)]_a^b-\int_a^b(K'(s)+\alpha(e^{x_0s}-e^{-x_0s})K(s))e^{xs}~ds=0$


$\therefore K'(s)+\alpha(e^{x_0s}-e^{-x_0s})K(s)=0$


$K'(s)=\alpha(e^{-x_0s}-e^{x_0s})K(s)$


$\dfrac{K'(s)}{K(s)}=-2\alpha\sinh x_0s$


$\int\dfrac{K'(s)}{K(s)}~ds=-2\alpha\int\sinh x_0s~ds$



$\ln K(s)=-\dfrac{2\alpha\cosh x_0s}{x_0}+c_1$


$K(s)=ce^{-\frac{2\alpha\cosh x_0s}{x_0}}$


$\therefore f(x)=\int_a^bce^{xs-\frac{2\alpha\cosh x_0s}{x_0}}~ds$


But since the above procedure in fact suitable for any complex number $s$ ,


$\therefore f_n(x)=\int_{a_n}^{b_n}c_ne^{xk_nt-\frac{2\alpha\cosh x_0k_nt}{x_0}}~d(k_nt)=k_nc_n\int_{a_n}^{b_n}e^{k_nxt-\frac{2\alpha\cosh k_nx_0t}{x_0}}~dt$


For some $x$-independent real number choices of $a_n$ and $b_n$ and $x$-independent complex number choices of $k_n$ such that:


$\lim\limits_{t\to a_n}e^{k_nxt-\frac{2\alpha\cosh k_nx_0t}{x_0}}=\lim\limits_{t\to b_n}e^{k_nxt-\frac{2\alpha\cosh k_nx_0t}{x_0}}$


$\int_{a_n}^{b_n}e^{k_nxt-\frac{2\alpha\cosh k_nx_0t}{x_0}}~dt$ converges


One of the choice involving


$\int_{-\infty}^\infty e^{\frac{xt}{x_0}-\frac{2\alpha\cosh t}{x_0}}~dt$



$=\int_{-\infty}^0e^{\frac{xt}{x_0}-\frac{2\alpha\cosh t}{x_0}}~dt+\int_0^\infty e^{\frac{xt}{x_0}-\frac{2\alpha\cosh t}{x_0}}~dt$


$=\int_\infty^0e^{-\frac{xt}{x_0}-\frac{2\alpha\cosh(-t)}{x_0}}~d(-t)+\int_0^\infty e^{\frac{xt}{x_0}-\frac{2\alpha\cosh t}{x_0}}~dt$


$=\int_0^\infty e^{-\frac{xt}{x_0}-\frac{2\alpha\cosh t}{x_0}}~dt+\int_0^\infty e^{\frac{xt}{x_0}-\frac{2\alpha\cosh t}{x_0}}~dt$


$\propto\int_0^\infty e^{-\frac{2\alpha\cosh t}{x_0}}\cosh\dfrac{xt}{x_0}~dt$


$\propto K_\frac{x}{x_0}\left(\dfrac{2\alpha}{x_0}\right)$


You will find that this functional equation if fact is very similar to that of modified Bessel functions


In fact the general solution can consider as $f(x)=\Theta_1(x)I_\frac{x}{x_0}\left(\dfrac{2\alpha}{x_0}\right)+\Theta_2(x)K_\frac{x}{x_0}\left(\dfrac{2\alpha}{x_0}\right)$, where $\Theta_1(x)$ and $\Theta_2(x)$ are arbitrary periodic functions with period $|x_0|$ .


linear algebra - Does the sign of the characteristic polynomial have any meaning?



The characteristic polynomial of a matrix $A \in \mathbb{C}^{n \times n}$, $p_A (\lambda) = \det(A-\lambda \cdot E)$ can be used to find the eigenvalues of the linear function $\varphi:\mathbb{C}^n \rightarrow \mathbb{C}^n, \varphi(x) := A \cdot x$, as the eigenvalues are the roots of $p_A(\lambda)$. So, for finding the eigenvalues, the sign of the characteristic polynomial isn't important. At the moment, this is to only application of the characteristic polynomial that I know.



Do other applications of the characteristic polynomial exist, where the sign of it is important?




Can I make any statements about the matrix $A$ when I know the sign of its characteristic polynomial?


Answer



Actually the characteristic polynomial is often defined as
$$
\chi_A=\det(I_nX-A)\in k[X]
$$
so as to be always monic (of degree $n$); see for instance in Wikipedia. This differs by a sign (and by calling the identity matrix by the more ususal name of $I_n$) from the definition you cited. The fact that the two contradicting definitions coexist shows that the matter of a factor $(-1)^n$ is not considered of great importance.



In my experience however, in most applications of the characteristic polynomial other than just for searching eigenvalues, the fact that it is monic is of importance. (One such application is to show that certain values are algebraic integers over the base ring, i.e., solutions of a monic polynomial equation.) For sure, in those applications monic-up-to-a-sign will be easily seen to do the job as well, but it is more convenient if the characteristic polynomial is just monic, period.




Also consider the statement "the coefficient of degree $n-i$ of $\chi_A$ is the $i$-th symmetric function of minus the eigenvalues of $A$ (taken with thir algebraic multiplicities)". With the definition you gave, you'd need to throw in another "minus".


Sunday, October 23, 2016

sequences and series - Showing $sum _{k=1} 1/k^2 = pi^2/6$








I read my book of EDP, and there appears the next serie
$$\sum _{k=1} \dfrac{1}{k^2} = \dfrac{\pi^2}{6}$$
And, also, we prove that this series is equal $\frac{\pi^2}{6}$ for methods od analysis of Fourier, but...



Do you know other proof, any more simple or beautiful?

limits - Intuition: Why will $3^x$ always eventually overtake $2^{x+a}$ no matter how large $a$ is?



I have a few ways to justifiy this to myself.



I just think that since $3^x$ "grows faster" than $2^{x+a}$, it will always overtake it eventually. Another way to say this is that the slope of the tangent of $3^x$ will always eventually be greater for some $x$ than that of $2^{x+a}$, so that the rate of growth at that $x$ value will be greater for $3^x$, so at that point it's only a matter of "time" before it overtakes $2^{x+a}$.



Another way I think about it is that the larger $x$ becomes, the closer the ratio $x:(x+a)$ comes to $1$, in other words $\lim_{x \to \infty} (\frac{x}{x+a}) = 1$, so that the base of the exponents is what really matters asymptotically speaking.




However, I'm still not completely convinced and would like to rigorize my intuition. Any other way of thinking about this would be very helpful, whether geometric (visual intuition is typically best for me), algebraic, calculusic...anything.



This came to me because I was trying to convince myself that $\frac{b \cdot 2^x}{3^x}$ goes to $0$ no matter how large $b$ is, and I realized that $b$ can be thought of as $2^a$, and that it might be easier to see this way, but if you have some intuition fo this form: $\frac{b \cdot 2^x}{3^x}$, I welcome it also.


Answer



$3^x<2^{x+a}\iff$



$3^x<2^x\cdot2^a\iff$



$\frac{3^x}{2^x}<2^a\iff$




$1.5^x<2^a\iff$



$x<\log_{1.5}2^a\iff$



$x

$x

$x<1.8a$




So no matter how large $a$ is, when $x$ is larger than $1.8a$, $3^x$ is larger than $2^x$.


analysis - Complex equation simplification

Let $k$ be a positive integer and $c_0$ a positive constant. Consider the following expression:
\begin{equation} \left(2 i c_0-i+\sqrt{3}\right)^2 \left(-2 c_0+i \sqrt{3}+1\right)^k+\left(-2 i c_0+i+\sqrt{3}\right)^2 \left(-2 c_0-i \sqrt{3}+1\right)^k
\end{equation}
I would like to find a simple expression for the above in which only real numbers appear. It is clear that the above expression is always a real number since
\begin{equation}
\overline{\left(2 i c_0-i+\sqrt{3}\right)^2 \left(-2 c_0+i \sqrt{3}+1\right)^k}= \left(-2 i c_0+i+\sqrt{3}\right)^2 \left(-2 c_0-i \sqrt{3}+1\right)^k.
\end{equation}

But I am not able to simplify it. I am pretty sure I once saw how to do this in a complex analysis course but I cannot recall the necessary tools. Help is much appreciated.

Sum of a sequence which is neither arithmetic nor geometric




If you have a sequence which is not geometric or arithmetic or arithmetico–geometric. Is there any methodology to follow in order to have a formula for its sum ?



Take for example the following sequence: $\{0.9^{\frac{1}{2}(n-i+1)(i+n)}\}_{i=1}^n$. It is not a geometric or an arithmetic progression. I don't see how to split it into sums of sequences which are arithmetic or geometric. Is there any hints I can get to proceed with writing a formula for this sum ?



$$S_n = \sum_{i=1}^n 0.9^{\frac{1}{2}(n-i+1)(i+n)}$$


Answer



I hope you’ve played with this, and noticed:
1.$\quad$It’s not a sum, it’s many sums, and each sum is finite.
2.$\quad$The base, $0.9$ in this case, plays no particular role, so that you can use any base $r$.
3.$\quad$The first few values are
\begin{align}
S_0&=0\\
S_1&=r\\

S_2&=r^3+r^2\\
S_3&=r^6+r^5+r^3\\
S^4&=r^{10}+r^9+r^7+r^4\\
S_5&=r^{15}+r^{14}+r^{12}+r^9+r^5\\
S_n&=r^n(S_{n-1}+1)
\end{align}
I see no way of getting a closed-form expression for $S_n$, a polynomial in $r$ of degree $\frac12(n^2+n)$, and most certainly not a numerical value once you evaluate $r$ to, in your case, $r=0.9\,$.



I do wonder where or how you came across this—without context, it seems a most unnatural problem.


Saturday, October 22, 2016

algebra precalculus - Why $sqrt{-1 times {-1}} neq sqrt{-1}^2$?


I know there must be something unmathematical in the following but I don't know where it is:


\begin{align} \sqrt{-1} &= i \\ \\ \frac1{\sqrt{-1}} &= \frac1i \\ \\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \\ \sqrt{\frac1{-1}} &= \frac1i \\ \\ \sqrt{\frac{-1}1} &= \frac1i \\ \\ \sqrt{-1} &= \frac1i \\ \\ i &= \frac1i \\ \\ i^2 &= 1 \\ \\ -1 &= 1 \quad !!! \end{align}


Answer



Between your third and fourth lines, you use $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$. This is only (guaranteed to be) true when $a\ge 0$ and $b>0$.


edit: As pointed out in the comments, what I meant was that the identity $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ has domain $a\ge 0$ and $b>0$. Outside that domain, applying the identity is inappropriate, whether or not it "works."


In general (and this is the crux of most "fake" proofs involving square roots of negative numbers), $\sqrt{x}$ where $x$ is a negative real number ($x<0$) must first be rewritten as $i\sqrt{|x|}$ before any other algebraic manipulations can be applied (because the identities relating to manipulation of square roots [perhaps exponentiation with non-integer exponents in general] require nonnegative numbers).


This similar question, focused on $-1=i^2=(\sqrt{-1})^2=\sqrt{-1}\sqrt{-1}\overset{!}{=}\sqrt{-1\cdot-1}=\sqrt{1}=1$, is using the similar identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which has domain $a\ge 0$ and $b\ge 0$, so applying it when $a=b=-1$ is invalid.


real analysis - Nowhere differentiable continuous functions and local extrema


Consider any continuous function which is nowhere differentiable (such as Weierstarss function). If this function would be monotone on some interval $(a,b)$ then there is a result which states that in this interval this function would be almost everywhere differentiable which is contradiction. Therefore any such function cannot be differentiable in any interval. However this not settles the issue of having local extrema.



Is it possible to construct everywhere continuous, nowhere differentiable function $f:[-1,1] \to \mathbb{R}$ with the following property: $f$ has local minimum at $x_0=0$, $f$ has local maximum at $x_1=\frac12$, $f(0)>f(\frac12)$ and $x_0,x_1$ are only extremas of $f$



Answer



I explained in a comment to your question that you need instead to assume that $f(0) < f(1/2)$ for your statement to have a chance to be true. I show here that even then, such a function does not exist.


Assume the contrary, so $f : [-1,1] \to \mathbb{R}$ is a continuous nowhere differentiable function which only has precisely two local extrema in some interval $(- \epsilon, 1/2 + \epsilon) \subset (-1,1)$, namely $0$ as a local minimum and $1/2$ as a local maximum, with $f(0) < f(1/2)$. (Observe that $-1$ and $1$ might be local extrema of $f$ on $[-1,1]$).



Consider any $(c - \delta, c+ \delta) \subset (0, 1/2)$. Then $f$ achieves its maximum on $[c - \delta/2, c + \delta/2]$ in some point $c'$. We can't have $c' \in (c - \delta/2, c+ \delta/2)$, for otherwise it would be another local maximum of $f$ in $(-\epsilon, 1/2 + \epsilon)$. Therefore, $c' \in \{ c - \delta/2, c + \delta/2 \}$. Similarly, $f$ achieves its minimum on $[c - \delta/2, c + \delta/2]$ only on the boundary. The minimum and the maximal values have to be different, for otherwise the function would be constant on this interval and there would be infinitely many local extrema. Furthermore, the maximum cannot be achieved at $c - \delta/2$, for otherwise we would have $f(c - \delta/2) > f(c + \delta/2)$ and so $f$ would have a local minimum in the interval $[c - \delta/2, 1/2]$ located in fact in $(c- \delta/2, 1/2)$. Hence the maximum is achieved at $c+ \delta/2$.


As $c$ and $\delta$ were arbitrary, this shows that $f$ is (strictly) increasing on $[0, 1/2]$. As you mentioned in your question, this implies that $f$ is somewhere differentiable. A contradiction.


Incidentally, this shows that the local extrema of a continuous no-where differentiable function form a dense set of its domain.


Friday, October 21, 2016

Proof by induction help. I seem to be stuck and my algebra is a little rusty


Stuck on a homework question with mathematical induction, I just need some help factoring and am getting stuck.


$\displaystyle \sum_{1 \le j \le n} j^3 = \left[\frac{k(k+1)}{2}\right]^2$



The induction part is: $\displaystyle \left[\frac{k(k+1)}{2}\right]^2 +(k+1)^3$ is where I am having a problem.


If you could give me some hints as to where to go since I keep getting stuck or writing the wrong equation.


I'll get to $\displaystyle \left[{k^2+2k\over2}\right]^2 + 2{(k+1)^3\over2}$


Any push in the right direction will be appreciated.


Answer



$(\frac{k(k+1)}{2})^2+(k+1)^3$


$=\frac{k^2(k+1)^2}{4}+(k+1)(k+1)^2$


$=\frac{(k+1)^2}{4}(k^2+4k+4)$


$=\frac{(k+1)^2}{4}(k+2)^2$


algebra precalculus - How to find constant term in two quadratic equations

Let $\alpha$ and $\beta$ be the roots of the equation $x^2 - x + p=0$ and let $\gamma$ and $\delta$ be the roots of the equation $x^2 -4x+q=0$. If $\alpha , \beta , \gamma , \delta$ are in Geometric progression then what is the value of $p$ and $q$?



My approach:



From the two equations,
$$\alpha + \beta = 1$$,
$$\alpha \beta = p$$,
$$\gamma + \delta = 4$$, and,

$$\gamma \delta = q$$.
Since $\alpha , \beta , \gamma , \delta$ are in G. P., let $\alpha = \frac{a}{r^3}$, $\beta = \frac{a}{r^1}$, $\gamma = ar$,
$\delta = ar^3$.
$$\therefore \alpha \beta \gamma \delta = a^4 = pq$$
Now,
$$\frac{\alpha + \beta}{\gamma + \delta} = \frac{1}{r^4}$$
$$\frac{1}{4} = \frac{1}{r^4}$$
$$\therefore r = \sqrt(2)$$



From here I don't know how to proceed. Am I unnecessarily complicating the problem??

elementary number theory - Calculating Modular Multiplicative Inverse for negative values of a.



If I'm calculating $a^{-1} \equiv 1 \pmod n$ where $a$ is negative. Do I simply add $kn$ to $a'$ until $0\lt a' \lt n$?



I'm currently using the extended euclidean algorithm to calculate my modular multiplicative inverses since I already have to make sure that $a$ and $n$ are coprime. From what little number theory I understand $a'=a+kn$ is going to give me the same result as $a \pmod n$. So that should mean that $a' \equiv 1 \pmod n$ is the same as $a \equiv 1 \pmod n$



I've confirmed this with a few values below but don't know if I'm understanding this properly.




$a=-36 \;\; a'=14$



$9 = -36^{-1} \pmod{25}$



$9 = 14^{-1} \pmod {25}$



$a=-11\;\; a'=15$



$7 = -11^{-1} \pmod{26}$




$7 = 15^{-1} \pmod{26}$



Here's a link to my python code.
https://paste.debian.net/1117624/


Answer



Hint: $ $ like sums & products, inverses too are preserved on replacing argument(s) by a congruent one



Congruence Inverse Law
$\ \color{#c00}{a'\equiv a}\,\Rightarrow\,{a'}^{-1}\equiv a^{-1}\,$ if $\ a\,$ is invertible, i.e. $\, ab\equiv 1\,$ for some $b$.




Proof $\ $ Notice $\,\ \color{c00}ab\equiv 1\ \Rightarrow\ \color{#c00}{a'} b\equiv \color{#c00}ab\equiv 1\,$ by the Congruence Product Rule, therefore we conclude $\, {a'}^{-1}\!\equiv b\equiv a^{-1}\,$ by Uniqueness of Inverses.


elementary number theory - What is the correct way of expressiong remainder in Mathematics



$45/7 \quad$ remainder $=3$


What is the correct way of representing this mathematically? I am asking this question because in this site, many times, experts use different ways to denote remainders. I am giving it below


(a) $45$ mod $7$ $=3$


(b) $45$ mod $7$ $\equiv3$


(c) $45\%7$ $=3$ (I believe this is mostly for programming and cannot generally use for mathematics. there is a thread for it)


(d) $45\equiv 3\pmod 7$


It is true that we can easily understand from the last expression that $45$ divided by $7$ gives $3$ as remainder. But, this relation is actually used to tell $45$ and $3$ gives same remainder when divided with $7$.


So, my understanding is that we can only (a). Please tell if I am right or wrong.


Answer



To capture the nature of division of a number $a$ by another number $b$ (which seems to be what you're trying to convey in $a$, we can write $$a = qb + r$$ where q represents the unique quotient, and $r$ ($0\leq r\lt b$) represents the unique remainder.



We can also write $$a \equiv r \pmod b$$


The notation of the second form does not necessarily require that the '$r$' be such that $0\leq r \leq b$.


Thursday, October 20, 2016

calculus - How can I deduce the value of $frac{1}{sqrt{4pi t}}int_{-infty}^{infty}sin(y)e^{-frac{(x-y)^2}{4t} } dy$ without actually evaluating it?



How can I deduce that
$$
\frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty}\sin(y)\,e^{-\frac{(x-y)^2}{4t} } dy = e^{-t} \sin(x)
$$
without actually evaluating the definite integral?


Answer




I will assume the following result:




The solution to the Heat Equation



$$\frac{df}{dt} = \nabla^2f$$



with initial condition $f(x,0) = g(x)$ can be written



$$f(x,t) = \frac{1}{\sqrt{4\pi t}}\int_{-\infty}^\infty e^{-\frac{(x-y)^2}{4t}}g(y) dy$$





Now by inserting



$$f(x,t) = e^{-t}\sin(x)$$



into the Heat Equation we find that it does satisfy it with the initial condition $f(x,0) = \sin(x)$. From the result above it therefore follows that



$$e^{-t}\sin(x) = \frac{1}{\sqrt{4\pi t}}\int_{-\infty}^\infty e^{-\frac{(x-y)^2}{4t}}\sin(y) dy$$


elementary set theory - Proving that the union of two infinite disjoint sets with same cardinality is equipotent with either one



This question is almost a duplicate to the question Q, but I would like to prove it in a more personalised manner.



The fact, that the individual sets, say $A$ and $B$ are countable, as well as infinite, ensures the existence of some bijective function $f: \mathbb{N} \rightarrow A$ and $g: \mathbb{N} \rightarrow B$ .
Again, $A$ and $B$ being equipotent, there is some bijective function $h: A \rightarrow B$.



Now, consider their union (keeping in mind that they are disjoint). Define a map $\phi: \mathbb{N} \rightarrow A \cup B $ such that,



$\phi(n)= a_{n/2} $ when $n$ is even; $b_{n+1/ 2}$ when $n$ is odd [ $A=\{a_n\}$ and $B=\{b_n\}$ (enumerability permits this) ] Evidently, this mapping is a bijection.




Now, we consider the composition $f^{-1} \phi: A \rightarrow A \cup B$ and $g^{-1} \phi: B \rightarrow A \cup B$. Both of them are bijective. Furthermore, the equipotency between the sets $A$ and $B$ implies that their union is equipotent to either one of them.



We now use this to prove that $\mathbb{Q}$ and $\mathbb{Q}^+$ are of same cardinality.



We now split $\mathbb{Q}^*$ into $\mathbb{Q}^+$ and $\mathbb{Q}^-$ . Their elements are disjoint and both of them are equipotent. We use the previously proved proposition to deduce that $\mathbb{Q}^*$ is equipotent to $\mathbb{Q}^+$.




  1. Is the above proof correct?


  2. How do I extend the deduction to $\mathbb{Q}$ ?




Answer



I first want to note, that in your first comment where you suggest a bijection from $\{0/1,0/2,0/3,\dots\}$ is only correct if you regard $0/x$ as a syntactical object. If you refer to a classical fractional representation of a rational number, then $\{0/1,0/2,0/3,\dots\}=\{0\}$ which is most certainly not bijective with $\mathbb{N}$.






The proof that for countable disjoint $A,B$, there is a bijection between $A\cup B$ and $A,B$ respectively is fine. Let me add, that I think that the countable case is not really rich in terms of awaiting revelations, as a countable union of countable sets is again countable (as you showed for the case of a pair-union) and every countably infinite set is bijective with any other countably infinite set.



This is why I think there are many ways to arrive relatively immediate at your desired result of equipotency of $\mathbb{Q}$ and $\mathbb{Q}^+$:





  1. You may show that $\mathbb{Q}$ and $\mathbb{Q}^+$ are both countably infinite (e.g. via a diagonal argument a la Cantor), i.e. yielding bijections $f:\mathbb{N}\to\mathbb{Q}$ and $g:\mathbb{N}\to\mathbb{Q}^+$. Then $h:\mathbb{Q}\to\mathbb{Q}^+$, $q\to g(f^{-1}(q))$ may be checked by you to be bijective.

  2. You can proceed by splitting $\mathbb{Q}=\mathbb{Q}^-\cup(\mathbb{Q}^+\cup\{0\})$ or $\mathbb{Q}=(\mathbb{Q}^-\cup\{0\})\cup\mathbb{Q}^+$ and observe that they are both countably infinite and disjoint. Then you can apply your insight for pair-unions of disjoint countably infinite sets.






As I think these results are relatively immediate, to get a greater deal of new knowledge out of this question, maybe try as an exercise to generalize your results and the related insights as far as possible.


Wednesday, October 19, 2016

Simplifying the infinite series $sum_{n = 1}^{infty} left(frac{1}{2}right)^{3n}$

Does anyone know of a way to simplify




$$
\sum_{n = 1}^{\infty} \left(\frac{1}{2}\right)^{3n}
$$



to a number?

definition - Given real numbers: define integers?



I have only a basic understanding of mathematics, and I was wondering and could not find a satisfying answer to the following:




Integer numbers are just special cases (a subset) of real numbers. Imagine a world where you know only real numbers. How are integers defined using mathematical operations?





Knowing only the set of complex numbers $a + bi$, I could define real numbers as complex numbers where $b = 0$. Knowing only the set of real numbers, I would have no idea how to define the set of integer numbers.



While searching for an answer, most definitions of integer numbers talk about real numbers that don't have a fractional part in their notation. Although correct, this talks about notation, assumes that we know about integers already (the part left of the decimal separator), and it does not use any mathematical operations for the definition. Do we even know what integer numbers are, mathematically speaking?


Answer



There are several ways to interpret this question.



The naive way would be simply to give some sort of definition to the set of integers. This is not very difficult, we can recognize $1$ in the real numbers, because it is the unique number $x$ such that $r\cdot x=r$ for all $r\in\mathbb R$.



Now consider the set obtained by repeatedly adding $1$ to itself, or subtracting $1$ from itself. Namely $\{0,1,-1,1+1,-1-1,1+1+1,-1-1-1,\ldots\}$. One can show that this set is indeed the integers. Every integer is a finite summation of $1$ or the additive inverse of such set.




One could also define, like Nameless did, what is being an inductive set, then define $\mathbb N$ as the least inductive set, and $\mathbb Z$ as the least set containing $\mathbb N$ and closed under addition and subtraction.






However one could also interpret this question by asking "Is the set $\mathbb Z$ first-order definable in $\mathbb R$ in the language of ordered fields?", namely if you live inside the real numbers, is there some formula in the language $0,1,+,\cdot,<$ such that only integers satisfy it?



The answer to this question is negative. The integers are not first-order definable in $\mathbb R$. This is a nontrivial result in model theory. But it is important to note it, because it is a perfectly valid interpretation of the question, and it results in a completely different answer than the above one.



In particular it helps understanding first-order definability vs. second-order definability, and internal- vs. external-definability.







I am adding some more to the answer because of a recent comment by the OP to the original question. I feel that some clarification is needed to my answer here.



First we need to understand what does "define" mean from a logic-oriented point of view. It means that there is a language such that the real numbers interpret the language in a coherent way, and there is a formula in one free variable which is true if and only if we plug an integer into it.



For example we cannot define $\mathbb R$ within $\mathbb C$ if we only have $0,1,+,\times$, but we can do that if we also have the conjugation map in our vocabulary - as shown in the question.



Naively speaking, when we approach to mathematics we may think that everything is available to us, which is true to some extent. But when we want to talk about logic, and in particular first-order logic, then we need to first understand that only things within a particular language are available to us and we cannot expect people to guess what this language is if we don't specify that.




This question did not specify the language, which makes it not unreasonable to think that we are talking about the real numbers in the language of ordered fields. So in our language we only have $0,1,+,\times,<$ (and equality, we always have equality). In this language we cannot define the integers within the real numbers with a first-order formula.



Ah, but what is a first-order formula? Well, generally in a formula we can have variables which are objects of our structure (in this case, real numbers) and we can have sets of real numbers and we can have sets of sets of real numbers and so on. First-order formulas allow us only to quantify over objects, that is real numbers. So variables in first-order logic are elements of the universe, which in our case means simply real numbers. Second-order logic would allow us to quantify over sets (of real numbers) as well, but not sets of sets of real numbers, and so on.



So for example, we can write a definition for $2$ using a first-order formula, e.g. $x=1+1$. There is a unique element which satisfies this property and this is $2$. And we can write the definition of an inductive set using a second-order formula, $0\in A\land\forall x(x\in A\rightarrow x+1\in A)$.



But as it turns out we cannot express the property of being an inductive set (and we certainly cannot express the property of being the minimal inductive set) in first-order logic when we restrict ourselves only to the real numbers as an ordered field. The proof, as I remarked, is not trivial.



The comment I referred to says:





@WillieWong I don't know the real number field: to me real numbers form a line. – Virtlink




Which gives yet another way to interpret this approach. We can consider the real numbers simply as an ordered set. We ignore the fact it is a field, and we simply look at the order.



But this language has even less expressive powers than that of an ordered field, for example we cannot even define the addition and multiplication. In fact we can't even define $0$ and $1$ in this language. We only have the order to work with, and that's really not much to work with.



It is much easier to show that simply as an ordered set all the results about undefinability hold, I won't get into this but I'll just point out that definability is "immune to automorphisms", and $\mathbb R$ has plenty of automorphisms which preserve the order and move every element.




Beyond that one runs into philosophy of mathematics pretty quick. What are the real numbers? Are they sets? Are they any structure which interpret a particular language in a particular way? Do we construct the integers from the real numbers or do we construct the real numbers from the integers? (First we have to construct the rational numbers, of course.)



Those are interesting questions, but essentially moot if one simply wishes to talk about first-order definability in a particular language or another. But if one is approaching this in a "naive" way which allows higher order quantification, and the usage of any function we know about then the answer becomes quite simple (although it is possible to run into circularity if one is not careful enough).



I hope this explains, amongst other things, why I began my answer with "There are several ways to interpret this question". We simply can see the real numbers as different structure in different languages, and we may or may not allow formulas to use sets of real numbers. In each of these interpretation of the question we may have a different answer, and different reasons for this answer to be true!



(I should stop writing this monograph now, if you're read this far - you're a braver [wo]man than I am!)



But wait, there's more!





  1. True Definition of the Real Numbers

  2. FO-definability of the integers in (Q, +, <)

  3. What is definability in First-Order Logic?


calculus - limit $ lim limits_{n to infty} {left(frac{z^{1/sqrt n} + z^{-1/sqrt n}}{2}right)^n} $



Calculate the limit $ \displaystyle \lim \limits_{n \to \infty} {\left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n} $



I now the answer, it is $ \displaystyle e^\frac{\log^2z}{2} $, but I don't know how to prove it. It seems like this notable limit $\displaystyle \lim \limits_{x \to \infty} {\left(1 + \frac{c}{x}\right)^x} = e^c$ should be useful here. For example I tried this way: $$ (z^{1/\sqrt n} + z^{-1/\sqrt n}) = (z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2 + 2 $$



$$ \displaystyle \lim \limits_{n \to \infty} {\left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n} = \displaystyle \lim \limits_{n \to \infty} {\left(1 + \frac{(z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2}{2}\right)^n} $$




where $ (z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2 $ seems close to $ \frac{\log^2 z}{n} $.



Also we can say that $$ \left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n = e^{n \log {\left(1 + \frac{\left(z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)}\right)^2}{2}\right)}}$$ and $ \log {\left(1 + \frac{(z^{1/(2 \sqrt n)} - z^{-1/(2 \sqrt n)})^2}{2}\right)} $ can be expand in the Taylor series. But I can't finish this ways.



Thanks for the help!


Answer



Assume $z>0$. One may write, as $n \to \infty$,
$$
\begin{align}

z^{1/\sqrt n}=e^{(\log z)/\sqrt n}&=1+\frac{\log z}{\sqrt n}+\frac{(\log z)^2}{2n}+O\left(\frac1{n^{3/2}} \right)\\
z^{-1/\sqrt n}=e^{-(\log z)/\sqrt n}&=1-\frac{\log z}{\sqrt n}+\frac{(\log z)^2}{2n}+O\left(\frac1{n^{3/2}} \right)
\end{align}
$$ giving
$$
\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}=1+\frac{(\log z)^2}{2n}+O\left(\frac1{n^{3/2}} \right)
$$ and, as $n \to \infty$,
$$
\begin{align}
\left(\frac{z^{1/\sqrt n} + z^{-1/\sqrt n}}{2}\right)^n&=\left(1+\frac{(\log z)^2}{2n}+O\left(\frac1{n^{3/2}} \right)\right)^n\\\\

&=e^{(\log z)^2/2}+O\left(\frac1{n^{1/2}} \right) \to e^{(\log z)^2/2}
\end{align}
$$


soft question - Why non-real means only the square root of negative?



Once in 1150 AD, an Indian mathematician Bhaskara wrote in his work Bijaganita (algebra) that,




There is no square root of a negative quantity, for it is not a square




However later on in 1545 an Italian mathematician Gerolamo Cardano while solving the problem $x+y=10$ and $xy=40$ obtained $x=5+\sqrt{-15}$ and $y=5-\sqrt{-15}$, although he discarded this result saying that it was "useless". But later on mathematicians like Albert Girard, Euler and W. R Hamilton introduced these complex roots with purely mathematical definition.




So this was the tale of imaginary numbers which tells us that concept of imaginary numbers was initially adopted to compensate the theory of polynomial roots (number of roots is equal to the degree of polynomial). However later on some mathematicians proposed it's practical application through geometrical interpretation and other ways.



Now I want to know how, in mathematics, one can be so sure that the square roots of negative numbers are the set to be designated as the non-real numbers. Or, in a nutshell: couldn't there be some other definition of non-real numbers?



For example, $x^{\gamma k}=-x^{k}$ (where $\gamma$ is something similar to $\iota$ as in complex number)



or $\log{(-x)}=\gamma \log{x}$ (provided $x>0$)



Notice that I had taken the values in the functions where the input is not lying in the domain. Concept of imaginary numbers was similar to this (i.e. $\sqrt{-x}=\iota \sqrt{x}$). So in this way I'll be able to create hundreds or probably thousands of such non-real sets.




I am sorry if I am losing some logic in this, but this is more of a curiosity than a question.


Answer



If you posit $\log(-x)=\gamma \log(x)$ for all $x$, and if you want to allow the usual operations like division, you are going to be forced to conclude that $\gamma=\log(-x)/\log(x)$ for all $x$, and in particular that
$$\frac{\log(-2)}{\log(2)}=\frac{\log(-3)}{\log(3)}$$



But the various logs in this equation already have definitions, and according to those definitions, the equation in question is not true (for any of the various choices of $\log(-2)$ and $\log(-3)$).



Therefore, your $\gamma$ can exist only if you either ban division or change the definition of the log. Likewise for your other proposal $x^{\gamma k}=-x^k$.




This is why you can't just go adjoining new constants willy-nilly and declaring them to have whatever properties you want. In the case of $i$, the miracle is that you can define it in a way that does not require you to revise the existing rules of arithmetic. Such miracles are rare.


Tuesday, October 18, 2016

linear algebra - Determinant of a rank $1$ update of a scalar matrix, or characteristic polynomial of a rank $1$ matrix




This question aims to create an "abstract duplicate" of numerous questions that ask about determinants of specific matrices (I may have missed a few):





The general question of this type is




Let $A$ be a square matrix of rank$~1$, let $I$ be the identity matrix of the same size, and let $\lambda$ be a scalar. What is the determinant of $A+\lambda I$?





A clearly very closely related question is




What is the characteristic polynomial of a matrix $A$ of rank$~1$?



Answer



The formulation in terms of the characteristic polynomial leads immediately to an easy answer. For once, one uses knowledge about the eigenvalues to find the characteristic polynomial, instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so there is (unless $n=1$) an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as an eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since this holds for the coefficient of $X^{n-1}$ of any square matrix of size$~n$. So the answer to the second question is





The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.




The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$, in other words $A-cI$ is zero on the image of$~A$, which implies that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore




The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n>1$ is $X(X-c)$, where $c=\tr(A)$. In particular a rank$~1$ square matrix $A$ of size $n>1$ is diagonalisable if and only if $\tr(A)\neq0$.




See also this question.




For the first question we get from this (replacing $A$ by $-A$, which is also of rank$~1$)




For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.




In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (which is the most popular special case of a linear combination of a scalar and a rank-one matrix) one finds (using for $A$ the all-$b$ matrix, and $\lambda=a-b$) as determinant $(a-b)^{n-1}(a+(n-1)b)$.
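A quick numerical check of these formulas (a minimal sketch, assuming Python with NumPy; the rank-one matrix is built as an outer product):

    import numpy as np

    n, lam = 5, 2.3
    u = np.random.randn(n, 1)
    v = np.random.randn(n, 1)
    A = u @ v.T                                # a random rank-1 matrix (outer product)
    c = np.trace(A)

    lhs = np.linalg.det(A + lam * np.eye(n))
    rhs = lam ** (n - 1) * (lam + c)           # lambda^{n-1} (lambda + tr A)
    print(lhs, rhs)                            # the two values agree up to rounding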


integration - $\int e^{-x^2}\,dx$





How does one integrate $\int e^{-x^2}\,dx$? I read somewhere to use polar coordinates.


How is this done? What is the easiest way?
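For reference, a sketch of the polar-coordinates trick the question alludes to: it applies to the definite integral over the whole line ($e^{-x^2}$ has no elementary antiderivative), by squaring and switching to polar coordinates:

$$\left(\int_{-\infty}^{\infty}e^{-x^2}\,dx\right)^2=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}\,dx\,dy=\int_0^{2\pi}\int_0^{\infty}e^{-r^2}\,r\,dr\,d\theta=2\pi\cdot\frac12=\pi,$$

so $\int_{-\infty}^{\infty}e^{-x^2}\,dx=\sqrt{\pi}$.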

Monday, October 17, 2016

real analysis - How to show $|f|_{p}\rightarrow |f|_{\infty}$?








I was asked to show:



Assume $|f|_{r}<\infty$ for some $r<\infty$. Prove that $$
|f|_{p}\rightarrow |f|_{\infty}
$$ as $p\rightarrow \infty$.



I am stuck in the situation that $|f|_{p}<\infty$ for all $r<p<\infty$.


Could $|f|_{p}$ be fluctuating while $|f|_{\infty}=\infty$? I have proved that $|f|_{p}<\infty$ for every $p$ with $r<p<\infty$, but I cannot see how to pass to the limit.

real analysis - Limit using Poisson distribution




Show using the Poisson distribution that




$$\lim_{n \to +\infty} e^{-n} \sum_{k=1}^{n}\frac{n^k}{k!} = \frac {1}{2}$$


Answer



By the definition of the Poisson distribution, if in a given interval the expected number of occurrences of some event is $\lambda$, the probability that exactly $k$ such events happen is
$$
\frac {\lambda^k e^{-\lambda}}{k!}.
$$
Let $\lambda = n$. Then the probability that the Poisson variable $X_n$ with parameter $\lambda$ takes a value between $0$ and $n$ is
$$
\mathbb P(X_n \le n) = e^{-n} \sum_{k=0}^n \frac{n^k}{k!}.

$$
If $Y_i \sim \mathrm{Poi}(1)$ and the random variables $Y_i$ are independent, then $\sum\limits_{i=1}^n Y_i \sim \mathrm{Poi}(n) \sim X_n$, hence the probability we are looking for is actually
$$
\mathbb P\left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P( Y_1 + \dots + Y_n \le n) = \mathbb P(X_n \le n).
$$
By the central limit theorem, the variable $\frac {Y_1 + \dots + Y_n - n}{\sqrt n}$ converges in distribution towards the Gaussian distribution $\mathscr N(0, 1)$. The point is, since the Gaussian has mean $0$ and we want to know when it is less than or equal to $0$, the variance doesn't matter; the result is $\frac 12$. Therefore,
$$
\lim_{n \to \infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = \lim_{n \to \infty} \mathbb P(X_n \le n) = \lim_{n \to \infty} \mathbb P \left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P(\mathscr N(0, 1) \le 0) = \frac 12.
$$




Hope that helps,
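For those who want to see the convergence numerically, here is a minimal sketch (assuming Python; each term is computed via logarithms to avoid overflow for large $n$):

    import math

    def poisson_cdf_at_n(n):
        # e^{-n} * sum_{k=0}^{n} n^k / k!, term by term via logs
        return sum(math.exp(k * math.log(n) - n - math.lgamma(k + 1))
                   for k in range(n + 1))

    for n in [10, 100, 1000, 10000]:
        print(n, poisson_cdf_at_n(n))
    # prints roughly 0.583, 0.527, 0.508, 0.503: the distance to 1/2
    # decays like a constant times 1/sqrt(n), consistent with the CLT argument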


complex numbers - Proof of Euler's formula that doesn't use differentiation?

So I saw a 'proof' of the sine and cosine angle addition formulae, i.e. $\sin(x+y)=\sin x\cos y+\cos x \sin y$, using Euler's formula, $e^{ix}=\cos x+i\sin x$. By multiplying by $e^{iy}$, you can get the desired result.



However, this 'proof' appears to be circular reasoning, as all proofs I have seen of Euler's formula involve finding the derivative of the sine and cosine functions. But to find the derivative of sine and cosine from first principles requires the use of the sine and cosine angle addition formulae.


So is there any proof of Euler's formula that doesn't involve finding the derivative of sine or cosine? I know you can prove the trigonometric formulas geometrically, but it is more laborious to do.
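One standard way to break the circularity, sketched here, is to take the power series as the definitions of $\exp$, $\sin$ and $\cos$. Euler's formula then follows by rearranging an absolutely convergent series, with no differentiation involved:

$$e^{ix}=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k}}{(2k)!}+i\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k+1)!}=\cos x+i\sin x.$$

The remaining work is then to check that these series agree with the geometrically defined sine and cosine.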

Sunday, October 16, 2016

exponentiation - What's the link between logarithms and the zeroth power, if any?

I have noticed that in some instances, $\log(x)$ (with base $e$) appears to "fill the role" of $x^0$ in situations where $x^0$ would not create a sensible answer. Here are a few examples of what I'm talking about:



The Power Rule for Integration



Any first-year calculus student will be familiar with the power rule of integration, which states:
$$\int x^n \, dx = \frac1{n+1} x^{n+1}, n \ne -1$$

Essentially, integration of $x^n$ has the effect of incrementing the power of $x$. Normally, following this rule for $x^{-1}$ would result in $\frac10x^0$, but there's that suspicious-looking $n \ne -1$ there. A natural question many students might ask, then, would be what the integral of $x^{-1}$ really is. And of course:
$$\int x^{-1} \, dx = \log|x|$$
So in a sense, $\log(x)$ is "replacing" $x^0$ in the context of the power rule.



The Generalized Mean



Taken from the Wikipedia article for the generalized mean:




If $p$ is a non-zero real number, and $x_1,\dots,x_n$ are positive real numbers, then the generalized mean or power mean with exponent $p$ of these positive real numbers is: $$M_p(x_1,\dots,x_n) = \left( \frac{1}{n} \sum_{i=1}^n x_i^p \right)^{\frac{1}{p}}$$




Specific cases include the harmonic mean ($p = -1$), the arithmetic mean ($p = 1$), and the quadratic mean ($p = 2$). What about $p = 0$? Well, unfortunately, this would involve raising every individual data point to the zeroth power (which annihilates the data), adding the results, and raising the sum to the infinite-th power. Instead,




For $p=0$ we set it equal to the geometric mean (which is the limit of means with exponents approaching zero, as proved below):
$$M_0(x_1, \dots, x_n) = \sqrt[n]{\prod_{i=1}^n x_i}$$





This doesn't seem immediately connected to the logarithm, but it's possible to rephrase this formula somewhat.
$$M_0(x_1, \dots, x_n) = \exp\left( \frac{1}{n} \sum_{i=1}^n \log(x_i) \right)$$
As you can see, $\log(x)$ arises in place of $x^0$, and $\exp(x)$ replaces the exponent of $1/0$.
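To make this concrete, here is a small numerical illustration (a sketch, assuming Python; the data set is arbitrary):

    import math

    xs = [2.0, 3.0, 7.0, 11.0]

    def power_mean(xs, p):
        # generalized mean with exponent p (p != 0)
        return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

    geometric = math.exp(sum(math.log(x) for x in xs) / len(xs))   # M_0
    for p in [1.0, 0.1, 0.01, 0.001]:
        print(p, power_mean(xs, p))
    print("M_0 =", geometric)   # the power means approach this value as p -> 0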





Admittedly, this connection is a lot more tenuous than the previous two, but I have noticed that $x^0$ and $\log(x)$ are both undefined at 0, and defined everywhere else, assuming you allow complex numbers. This could just be a coincidence, though, since the same is true of negative powers (though I do wonder if $\log(0)$ is defined in the projectively extended real line, where negative powers are defined; if not, that could mean a stronger link to $0^0$).



The logarithmic function will grow slower than any positive, finite power; on the other hand, the exponential function will grow faster than any positive, finite power. Thus, if you were to "sort" the logarithmic, exponential, and all (positive finite) power functions by how quickly they grow, the logarithmic function would be sorted at 0 and the exponential function would be sorted at infinity.




If you want to play with the relationship $(x^p+y^p)^{1/p}$, I've created a Desmos calculator that plots this relationship with the numbers $a$ and $b$, along with the generalized mean shown above. The horizontal axis is the power $p$; at $p=0$, this "generalized addition" diverges to $0$ from the negative direction and $\infty$ from the positive direction, while the generalized mean approaches the geometric mean. In fact, if one were to remove the root from the expression, one would get $\exp(\log(a)+\log(b))$, which simplifies to $ab$.



Overall, I get the impression that $\log$ and $\exp$ act analogously to the "zeroth power" and a sort of "infinite power," and I think I've provided some evidence to support this intuition. Is there a fundamental reason as to why the logarithm tends to replace the zeroth power, or is this just a coincidence that I've over-analysed?

integration - A Complex approach to sine integral

This integral:



$$\int_0^{+\infty}\frac{\sin x}{x}\text{d}x=\frac{\pi}{2}$$



is very famous and has been discussed in this forum in the past, and I have learned some elegant ways to compute it, for example using the identity
$\int_0^{+\infty}e^{-xy}\sin x\,\text{d}x=\frac{1}{1+y^2}$ and $\int_0^{\infty}\int_0^{\infty}e^{-xy}\sin x\,\text{d}y\,\text{d}x$ together with Fubini's theorem. The link is here: Post concern with sine integral



In this post, I want to discuss another way to compute it. Since $$\int_0^{+\infty}\frac{\sin x}{x}\text{d}x=\frac{1}{2i}\int_{-\infty}^{+\infty}\frac{e^{ix}-1}{x}\text{d}x,$$
this fact inspired me to consider the complex integral $$\int_{\Gamma}\frac{e^{iz}-1}{z}\text{d}z$$

[Figure: the integration path $\Gamma$, consisting of the real segments $[-R,-\epsilon]$ and $[\epsilon,R]$, the small upper semicircle $\Gamma_\epsilon$ around the origin, and the large upper semicircle $\Gamma_R$]

where $\Gamma$ is the red path in the above figure, with counter-clockwise orientation. By Cauchy's theorem, we have
$$\int_{\Gamma}\frac{e^{iz}-1}{z}\text{d}z=0$$ the above integral can be written as:$$\int_{-R}^{-\epsilon}\frac{e^{ix}-1}{x}\text{d}x+\int_{\Gamma_{\epsilon}}\frac{e^{iz}-1}{z}\text{d}z+\int_{\epsilon}^{R}\frac{e^{ix}-1}{x}\text{d}x+\int_{\Gamma_{R}}\frac{e^{iz}-1}{z}\text{d}z$$
Let $R\rightarrow +\infty$ and $\epsilon \rightarrow 0$, we have:
$$\int_{-R}^{-\epsilon}\frac{e^{ix}-1}{x}\text{d}x+\int_{\epsilon}^{R}\frac{e^{ix}-1}{x}\text{d}x \rightarrow \int_{-\infty}^{+\infty}\frac{e^{ix}-1}{x}\text{d}x=2i\int_0^{+\infty}\frac{\sin x}{x}\text{d}x$$
and $$\int_{\Gamma_{\epsilon}}\frac{e^{iz}-1}{z}\text{d}z=\int_\pi^0\frac{e^{i\epsilon e^{i\theta}}-1}{\epsilon e^{i\theta}}i\epsilon e^{i\theta}\text{d}\theta=i\int_\pi^0(\cos(\epsilon e^{i\theta})+i\sin(\epsilon e^{i\theta})-1)\text{d}\theta \rightarrow 0$$ as $\epsilon \rightarrow 0$

So I am expecting that $$\int_{\Gamma_{R}}\frac{e^{iz}-1}{z}\text{d}z \rightarrow -i\pi \quad\text{as } R \rightarrow +\infty,$$ but I can't prove it. Could you help me? Thanks very much.
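As a numerical sanity check of that expectation, one can parametrize $\Gamma_R$ as $z=Re^{i\theta}$, $\theta\in[0,\pi]$, so the integral becomes $i\int_0^\pi\left(e^{iRe^{i\theta}}-1\right)\text{d}\theta$ (a minimal sketch, assuming Python with NumPy):

    import numpy as np

    def arc_integral(R, m=200_000):
        # integral of (e^{iz} - 1)/z over the upper semicircle z = R e^{i theta},
        # i.e. i * integral_0^pi (e^{i R e^{i theta}} - 1) dtheta
        theta = np.linspace(0.0, np.pi, m)
        integrand = 1j * (np.exp(1j * R * np.exp(1j * theta)) - 1.0)
        dtheta = theta[1] - theta[0]
        return np.sum((integrand[:-1] + integrand[1:]) / 2.0) * dtheta  # trapezoid rule

    for R in [10.0, 100.0, 1000.0]:
        print(R, arc_integral(R))
    # the values approach -pi*i: the -1/z part contributes exactly -i*pi,
    # while the e^{iz}/z part vanishes as R -> infinity (Jordan's lemma)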

Mathematical Induction Question - forgetting a simple rule?



I'm working on a mathematical induction problem for a Computer Science class and I'm trying to solve:



a. $$\frac{3(5^{k+1}-1)}{4} + 3(5^{k+1})$$



and I'm getting stuck at:



b. $$\frac{3(5^{k+1}-1+4 \cdot 5^{k+1})}{4}$$




I need to get to:



c. $$\frac{3(5^{k+2}-1)}{4}$$



I'm not having any real problem with the induction part, I'm getting hung up on the math.



I'm pretty far removed from my last real math course and I'm sure I'm just forgetting a simple rule. I have the solution in front of me and it jumps from b to c but I cannot jump along with it.



Any help would be appreciated.



Answer



$\frac{3(5^{k+1}-1+4 \cdot 5^{k+1})}{4}= \frac {3[(1 + 4)5^{k+1} - 1]}{4}=...$



Hint: $1+4 = 5$.
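Spelling out the hint, the jump from b to c is just
$$\frac{3[(1+4)5^{k+1}-1]}{4}=\frac{3(5\cdot 5^{k+1}-1)}{4}=\frac{3(5^{k+2}-1)}{4}.$$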


calculus - Show a function whose derivative is bounded is also bounded in an interval



Suppose $g$ is differentiable over $(a,b]$ (i.e. $g$ is defined and differentiable over $(a,c)$, where $(a,c)\supset(a,b]$), and $|g'(p)|\le M$ ($M$ a real number) for all $p$ in $(a,b]$. Prove that $|g(p)|\le Q$ for some real number $Q$ and all $p$ in $(a,b]$.



I looked at a similar solution here: prove that a function whose derivative is bounded also bounded
but I'm not sure if they are asking the same thing, and I'm having trouble figuring out the case when $x\in(a,x_0)$ (x here is point p). Could someone give a complete proof of this problem?



Answer



Use the mean value theorem: for each $x\in(a,b]$ there is a $c$ between $x$ and $b$ with $g(x) = (x - b)g'(c) + g(b)$. Since $|x-b|\le b-a$ and $|g'(c)|\le M$, this gives $|g(x)|\le |g(b)| + M(b-a)$, so the right-hand side is bounded.


real analysis - Show that $(x_n)$ converges to $0$



Show that if $x_n \geq 0$ and the limit of $((-1)^nx_n)$ exists, then $(x_n)$ converges to $0$.




I don't have a single clue how to solve the problem. I have looked back at monotone sequences, Cauchy sequences, and bounded ones, but none of them seem to lead anywhere. Please help. Regards


Answer



Since $x_n\ge 0$ we have $x_n = |(-1)^n x_n|$. If a sequence converges to $L$, then its absolute value converges to $|L|$, due to the continuity of the absolute value function.



In your case $L=0$, due to the oscillation of $(-1)^n x_n$: the even-indexed terms $x_{2n}\ge 0$ force $L\ge 0$, while the odd-indexed terms $-x_{2n+1}\le 0$ force $L\le 0$. Hence $x_n = |(-1)^n x_n| \to |L| = 0$.


Saturday, October 15, 2016

integration - Evaluating $\int_0^\infty \frac{dx}{1+x^4}$.





Can anyone give me a hint to evaluate this integral?



$$\int_0^\infty \frac{dx}{1+x^4}$$



I know it will involve the gamma function, but how?



Answer



HINT:



Putting $x=\frac1y,dx=-\frac{dy}{y^2}$



$$I=\int_0^\infty\frac{dx}{1+x^4}=\int_\infty^0\frac{-dy}{y^2\left(1+\frac1{y^4}\right)}$$
$$=-\int_\infty^0\frac{y^2dy}{1+y^4}=\int_0^\infty\frac{y^2dy}{1+y^4} \text{ as } \int_a^bf(x)dx=-\int_b^af(x)dx$$



$$I=\int_0^\infty\frac{y^2dy}{1+y^4}=\int_0^\infty\frac{x^2dx}{1+x^4}$$




$$\implies 2I=\int_0^\infty\frac{dx}{1+x^4}+\int_0^\infty\frac{x^2dx}{1+x^4}=\int_0^\infty\frac{1+x^2}{1+x^4}dx=\int_0^\infty\frac{\frac1{x^2}+1}{\frac1{x^2}+x^2} dx$$



Now the idea is to set $u = x-\frac1x$ (so that $du=\left(\frac1{x^2}+1\right)dx$ is exactly the numerator) and to express the denominator as a polynomial in $u$:



$$\text{The denominator =}\frac1{x^2}+x^2=\left(x-\frac1x\right)^2+2=u^2+2$$



Now complete the definite integral in terms of $u$.
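For completeness, the finish: as $x$ runs over $(0,\infty)$, $u=x-\frac1x$ runs over $(-\infty,\infty)$, so
$$2I=\int_{-\infty}^{\infty}\frac{du}{u^2+2}=\frac1{\sqrt2}\left[\arctan\frac{u}{\sqrt2}\right]_{-\infty}^{\infty}=\frac{\pi}{\sqrt2},\qquad\text{hence } I=\frac{\pi}{2\sqrt2}.$$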


real analysis - Continuity $\Rightarrow$ Intermediate Value Property. Why is the opposite not true?



Continuity $\Rightarrow$ Intermediate Value Property. Why is the opposite not true?



It seems to me like they are equal definitions in a way.




Can you give me a counter-example?



Thanks


Answer



I.



Some of the answers reveal a confusion, so let me start with the definition. If $I$ is an interval, and $f:I\to\mathbb R$, we say that $f$ has the intermediate value property iff whenever $a<b$ are points of $I$, and $c$ is a value strictly between $f(a)$ and $f(b)$, there is a $d$ between $a$ and $b$ with $f(d)=c$.



If $I=[\alpha,\beta]$, this is significantly stronger than asking that $f$ take all values between $f(\alpha)$ and $f(\beta)$:





  • For example, this implies that if $J\subseteq I$ is an interval, then $f(J)$ is also an interval (perhaps unbounded).

  • It also implies that $f$ cannot have jump discontinuities: For instance, if $\lim_{x\to t^-}f(x)$ exists and is strictly smaller than $f(t)$, then for $x$ sufficiently close to $t$ and smaller than $t$, and for $u$ sufficiently close to $f(t)$ and smaller than $f(t)$, $f$ does not take the value $u$ in $(x,t)$, in spite of the fact that $f(x)<u<f(t)$.
  • In particular, if $f$ is discontinuous at a point $a$, then there are $y$ such that the equation $f(x)=y$ has infinitely many solutions near $a$.



II.



There is a nice survey containing detailed proofs of several examples of functions that both are discontinuous and have the intermediate value property: I. Halperin, Discontinuous functions with the Darboux property, Can. Math. Bull., 2 (2), (May 1959), 111-118. It contains the amusing quote





Until the work of Darboux in 1875 some mathematicians believed that [the intermediate value] property actually implied continuity of $f(x)$.




This claim is repeated in other places. For example, here one reads




In the 19th century some mathematicians believed that [the intermediate value] property is equivalent to continuity.





This is very similar to what we find in A. Bruckner, Differentiation of real functions, AMS, 1994. On page 5 we read




This property was believed, by some 19th century mathematicians, to be equivalent to the property of continuity.




Though I have been unable to find a source expressing this belief, that this was indeed the case is supported by the following two quotes from Gaston Darboux's Mémoire sur les fonctions discontinues, Ann. Sci. Scuola Norm. Sup., 4, (1875), 161–248. First, on pp. 58-59 we read:




At the risk of being too long, I have above all endeavored, perhaps without success, to be rigorous. Many points that one would rightly regard as evident, or that one would grant in applications of the science to the usual functions, must be subjected to a rigorous critique in the exposition of propositions concerning the most general functions. For example, one will see that there exist continuous functions that are neither increasing nor decreasing on any interval, and that there are discontinuous functions that cannot pass from one value to another without passing through all the intermediate values.





The proof that derivatives have the intermediate value property comes later, starting on page 109, where we read:




Starting from the preceding remark, we will show that there exist discontinuous functions enjoying a property that is sometimes regarded as the distinctive characteristic of continuous functions: that of not being able to pass from one value to another without passing through all the intermediate values.




III.




Additional natural assumptions on a function with the intermediate value property imply continuity. For example, injectivity or monotonicity.



Derivatives have the intermediate value property (see here), but there are discontinuous derivatives: Let $$f(x)=\left\{\begin{array}{cl}x^2\sin(1/x)&\mbox{ if }x\ne0,\\0&\mbox{ if }x=0.\end{array}\right.$$ (The example goes back to Darboux himself.) This function is differentiable, its derivative at $0$ is $0$, and $f'(x)=2x\sin(1/x)-\cos(1/x)$ if $x\ne0$, so $f'$ is discontinuous at $0$.
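To see this discontinuity concretely, one can sample $f'(x)=2x\sin(1/x)-\cos(1/x)$ ever closer to the origin (a minimal sketch, assuming Python):

    import math

    def f_prime(x):
        # derivative of x^2 sin(1/x) for x != 0; by definition f'(0) = 0
        return 2.0 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

    # sample at x_k = 1/(k*pi), where sin(1/x) = 0 and cos(1/x) = (-1)^k
    for k in range(1, 9):
        x = 1.0 / (k * math.pi)
        print(x, f_prime(x))
    # the printed values alternate between (approximately) +1 and -1,
    # so f' has no limit at 0 even though f'(0) = 0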



This example allows us to find functions with the intermediate value property that are not derivatives: Consider first $$g(x)=\left\{\begin{array}{cl}\cos(1/x)&\mbox{ if }x\ne0,\\ 0&\mbox{ if }x=0.\end{array}\right.$$ This function (clearly) has the intermediate value property and indeed it is a derivative, because, with the $f$ from the previous paragraph, if $$h(x)=\left\{\begin{array}{cl}2x\sin(1/x)&\mbox{ if }x\ne 0,\\ 0&\mbox{ if }x=0,\end{array}\right.$$ then $h$ is continuous, and $g(x)=h(x)-f'(x)$ for all $x$. But continuous functions are derivatives, so $g$ is also a derivative. Now take $$j(x)=\left\{\begin{array}{cl}\cos(1/x)&\mbox{ if }x\ne0,\\ 1&\mbox{ if }x=0.\end{array}\right.$$ This function still has the intermediate value property, but $j$ is not a derivative. Otherwise, $j-g$ would also be a derivative, but $j-g$ does not have the intermediate value property (it has a jump discontinuity at $0$). For an extension of this theme, see here.



In fact, a function with the intermediate value property can be extremely chaotic. Katznelson and Stromberg (Everywhere differentiable, nowhere monotone, functions, The American Mathematical Monthly, 81, (1974), 349-353) give an example of a differentiable function $f:\mathbb R\to\mathbb R$ whose derivative satisfies that each of the three sets $\{x\mid f'(x)>0\}$, $\{x\mid f'(x)=0\}$, and $\{x\mid f'(x)<0\}$ is dense (they can even ensure that $\{x\mid f'(x)=0\}=\mathbb Q$); this implies that $f'$ is highly discontinuous. Even though their function satisfies $|f'(x)|\le 1$ for all $x$, $f'$ is not (Riemann) integrable over any interval.



On the other hand, derivatives must be continuous somewhere (in fact, on a dense set), see this answer.




Conway's base 13 function is even more dramatic: It has the property that $f(I)=\mathbb R$ for all intervals $I$. This implies that this function is discontinuous everywhere. Other examples are discussed in this answer.



Halperin's paper mentioned above includes examples with even stronger discontinuity properties. For instance, there is a function $f:\mathbb R\to\mathbb R$ that not only maps each interval onto $\mathbb R$ but, in fact, takes each value $|\mathbb R|$-many times on each uncountable closed set. To build this example, one needs a bit of set theory: Use transfinite recursion, starting with enumerations $(r_\alpha\mid\alpha<\mathfrak c)$ of $\mathbb R$ and $(P_\alpha\mid\alpha<\mathfrak c)$ of its perfect subsets, ensuring that each perfect set is listed $\mathfrak c$ many times. Now recursively select at stage $\alpha<\mathfrak c$, the first real according to the enumeration that belongs to $P_\alpha$ and has not been selected yet. After doing this, continuum many reals have been chosen from each perfect set $P$. List them in a double array, as $(s_{P,\alpha,\beta}\mid\alpha,\beta<\mathfrak c)$, and set $f(s_{P,\alpha,\beta})=r_\alpha$ (letting $f(x)$ be arbitrary for those $x$ not of the form $s_{P,\alpha,\beta}$).



To search for references: The intermediate value property is sometimes called the Darboux property or, even, one says that a function with this property is Darboux continuous.



An excellent book discussing these matters is A.C.M. van Rooij, and W.H. Schikhof, A second course on real functions, Cambridge University Press, 1982.


probability - Expected number of rolls to get all 6 numbers of a die




I've been thinking about this for a while but don't know how to go about obtaining the answer. Find the expected number of rolls needed to get all 6 numbers when rolling a 6-sided die.



Thanks


Answer



Intuitively



You first roll the die and get some value. The probability of getting a different value on the next roll is $\frac{5}{6}$, hence the expected number of rolls to see a second distinct value is $\frac{6}{5}$. You continue this way and get




$\frac{6}{6} + \frac{6}{5} + \frac{6}{4} + \frac{6}{3} + \frac{6}{2} + \frac{6}{1} = \color{blue}{14.7}$ rolls
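A quick Monte Carlo check of this number (a sketch, assuming Python):

    import random

    def rolls_to_collect_all(faces=6):
        # roll until every face has appeared at least once
        seen, count = set(), 0
        while len(seen) < faces:
            seen.add(random.randint(1, faces))
            count += 1
        return count

    trials = 100_000
    avg = sum(rolls_to_collect_all() for _ in range(trials)) / trials
    print(avg)   # hovers around 14.7 = 6*(1/6 + 1/5 + ... + 1/1)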


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...