Friday, August 31, 2018

elementary number theory - If $\gcd(a, b) = 1$, then $\gcd(ab, c) = \gcd(a, c) \cdot\gcd(b, c)$



How can I prove that if $\gcd(a, b) = 1$, then
$\gcd(ab, c) = \gcd(a, c) \times \gcd(b, c)$?



By the extended Euclidean algorithm there exist $x,y$ with $ax+by=1$, since $\gcd(a,b)=1$, so $a$ and $b$ are coprime. There also exist $k,j$ with $dk=a$ and $dj=b$, where $d=\gcd(a,b)=1$. This is all the information I have gathered from the question, but I don't know how to approach and solve it. Can anyone help explain to me how to arrive at the answer? Thanks!


Answer



Without using primes.

We show that $(ab,c) \mid (a,c)(b,c)$ and that $(a,c)(b,c)\mid (ab,c) $.



We have $ax+by=1$; multiplying by $c$ we get
$$acx+bcy=c.$$



Now $$(a,c)(b,c)\left[\frac{a}{(a,c)}\frac{c}{(b,c)}x+\frac{b}{(b,c)}\frac{c}{(a,c)}y\right]=c$$
where of course $\frac{a}{(a,c)}$ etc are integers. So $(a,c)(b,c)\mid c$.
It is clear that $(a,c)(b,c)\mid ab$ since $(a,c)\mid a$ and $(b,c)\mid b$. And therefore we have
$(a,c)(b,c)\mid (ab,c) $.




To show the other direction note that there are $p,q,r,s$ such that



$$ap+qc=(a,c)$$
and $$br+cs=(b,c)$$
thus
$$(a,c)(b,c)=(ap+qc)(br+cs)=abpr +(aps+brq+qsc)c,$$ and this latter is divisible by $(ab,c)$, since $(ab,c)\mid ab$ and $(ab,c)\mid c$.
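
A quick numeric sanity check of the identity (a minimal sketch in Python; the sampling ranges are arbitrary):

from math import gcd
from random import randint

# Check gcd(ab, c) == gcd(a, c) * gcd(b, c) whenever gcd(a, b) == 1.
for _ in range(10_000):
    a, b, c = randint(1, 500), randint(1, 500), randint(1, 500)
    if gcd(a, b) == 1:
        assert gcd(a * b, c) == gcd(a, c) * gcd(b, c)
print("identity holds on all sampled coprime pairs")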


trigonometry - Trigonometric Arithmetic Progression



If $a$, $b$, $c$ are in arithmetic progression, prove that

$$\cos A \cot\frac{A}{2} \qquad \cos B \cot \frac{B}{2} \qquad \cos C \cot\frac{C}{2}$$
are in arithmetic progression, too.



Here, $a$, $b$, $c$ represent the sides of a triangle and $A$, $B$, $C$ are the opposite angles of the triangle.


Answer



For better clarity, I'm adding another proof that $\displaystyle\cot\frac A2,\cot\frac B2,\cot\frac C2$ are also in AP if $a,b,c$ are so.



We have $\displaystyle 0<C<\pi\implies 0<\frac C2<\frac\pi2\implies\cot\frac C2>0$



So, $\displaystyle\cot\frac C2=\frac1{\tan\frac C2}=+\sqrt{\frac{1+\cos C}{1-\cos C}}$




Using Law of Cosines and on simplification, $\displaystyle\cot\frac C2=+\sqrt{\frac{s(s-c)}{(s-b)(s-a)}}$ where $2s=a+b+c$



$\displaystyle\cot\frac A2,\cot\frac B2,\cot\frac C2$ will be in AP



$\displaystyle\iff\sqrt{\frac{s(s-c)}{(s-b)(s-a)}}+\sqrt{\frac{s(s-a)}{(s-b)(s-c)}}=2\sqrt{\frac{s(s-b)}{(s-c)(s-a)}}$



$\displaystyle\iff s-a+s-c=2(s-b)\iff a+c=2b$ (dividing both sides of the previous line by $\sqrt{\frac{s}{(s-a)(s-b)(s-c)}}$)
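
For a concrete check (a minimal Python sketch; the triangle $4,5,6$ is an arbitrary choice of sides in AP):

from math import acos, tan, isclose

# Triangle with sides in arithmetic progression: a + c = 2b.
a, b, c = 4.0, 5.0, 6.0
# Angles from the law of cosines; A is opposite side a, etc.
A = acos((b*b + c*c - a*a) / (2*b*c))
B = acos((a*a + c*c - b*b) / (2*a*c))
C = acos((a*a + b*b - c*c) / (2*a*b))
cot = lambda t: 1.0 / tan(t)
# cot(A/2), cot(B/2), cot(C/2) should then be in AP as well.
assert isclose(cot(A/2) + cot(C/2), 2 * cot(B/2))
print("cot of the half-angles is in arithmetic progression")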


Thursday, August 30, 2018

algebra precalculus - How do I show $\sin^2(x+y)-\sin^2(x-y)\equiv\sin(2x)\sin(2y)$?



I really don't know where I'm going wrong, I use the sum to product formula but always end up far from $\sin(2x)\sin(2y)$. Any help is appreciated, thanks.



Answer



\begin{align}
\Big(\sin(x+y)\Big)^2 - \Big(\sin(x-y)\Big)^2 & = \Big( \sin x\cos y+\cos x\sin y \Big)^2 - \Big( \sin x\cos y-\cos x\sin y \Big)^2 \\[6pt]
& = \Big( \sin^2x\cos^2y + 2\sin x\cos y\cos x\sin y + \cos^2x\sin^2y \Big) \\[6pt]
&\phantom{{}=} {}- \Big( \sin^2x\cos^2y - 2\sin x\cos y\cos x\sin y + \cos^2x\sin^2y \Big) \\[6pt]
& = 4\sin x\cos y\cos x\sin y \\[6pt]
& = (2\sin x\cos x)(2\sin y\cos y) \\[6pt]
& = \sin(2x)\sin(2y).
\end{align}


calculus - Evaluating $\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2} \, \mathrm{d}x$



I've been trying to evaluate the following integral from the 2011 Harvard PhD Qualifying Exam. For all $n\in\mathbb{N}^+$ in general:



$$\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2} \ \mathrm{d}x$$



However, I'm not quite sure where to begin, even. There is a possibility that it is related to complex analysis, so I tried going at it with Cauchy's and also with residues, but I still haven't managed to get any further in solving it.



Answer



Beta function



I see @robjohn beat me to it.
The substitution is slightly different, so here it is.



Here's a simple approach using the beta function.



First, notice the integral diverges logarithmically for $n=1$, since the integrand goes like $1/x$ for large $x$.




Let $t=x^2$.
Then
$$\begin{eqnarray*}
\int_0^\infty dx\, \frac{x^{1/n}}{1+x^2}
&=& \frac{1}{2}\int_0^\infty dt\, \frac{t^{(1-n)/(2n)}}{1+t} \\
&=& \frac{1}{2}
\textstyle B\left(\frac{1}{2} + \frac{1}{2n}, \frac{1}{2} - \frac{1}{2n}\right) \\
&=& \frac{1}{2}
\textstyle\Gamma\left(\frac{1}{2} + \frac{1}{2n}\right)
\Gamma\left(\frac{1}{2} - \frac{1}{2n}\right) \\

&=& \frac{\pi}{2 \sin\left(\frac{\pi}{2} + \frac{\pi}{2n}\right)} \\
&=& \frac{\pi}{2} \sec \frac{\pi}{2n}.
\end{eqnarray*}$$



Some details



Above we use the integral representation for the beta function
$$B(x,y) = \int_0^\infty dt\, \frac{t^{x-1}}{(1+t)^{x+y}}$$
for $\mathrm{Re}(x)>0$, $\mathrm{Re}(y)>0$.
We also use Euler's reflection formula,

$$\Gamma(1-z)\Gamma(z) = \frac{\pi}{\sin\pi z}.$$
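
As a numeric cross-check of the closed form (a sketch using the third-party mpmath library, assuming it is installed):

import mpmath as mp

# Compare the integral against (pi/2) * sec(pi/(2n)) for a few n >= 2.
for n in range(2, 7):
    integral = mp.quad(lambda x: x**(mp.mpf(1)/n) / (1 + x**2), [0, mp.inf])
    closed_form = mp.pi / 2 * mp.sec(mp.pi / (2 * n))
    assert mp.almosteq(integral, closed_form)
    print(n, mp.nstr(integral), mp.nstr(closed_form))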



Addendum: A method with residue calculus



Let $t = x^{1/n}$. Then
$$\begin{eqnarray*}
I &=& \int_0^\infty dx\, \frac{x^{1/n}}{1+x^2} \\
&=& n\int_0^\infty dt\, \frac{t^n}{t^{2n}+1}
\end{eqnarray*}$$
Notice the last integral has no cuts for integral $n$.

The residues are at the roots of $t^{2n}+1=0$.
Consider the pie-shaped contour with one edge along the positive real axis, another edge along the line $e^{i\pi/n}t$ with $t$ real and positive, and the "crust" at infinity.



[image: pie-shaped contour]



There is one residue in the contour at $t_0 = e^{i\pi/(2n)}$.
The integral along the real axis is just $I$.
The integral along the other edge of the pie is
$$\begin{eqnarray*}
I' &=& n\int_\gamma dz\,\frac{z^n}{z^{2n}+1} \\

&=& n \int_\infty^0 dt e^{i\pi/n} \frac{t^n e^{i\pi}}{t^{2n}+1} \\
&=& -e^{i(n+1)\pi/n} I.
\end{eqnarray*}$$
The integral along the crust goes like $1/R^{2n-1}$ as the radius of the pie goes to infinity, and so vanishes in the limit.
Therefore,
$$\begin{eqnarray*}
I + I'
&=& 2\pi i\, n \,\mathrm{Res}_{t=t_0}\, \frac{t^n}{t^{2n}+1} \\
&=& 2\pi i \frac{t_0^n}{2 t_0^{2n-1}}.
\end{eqnarray*}$$

Using this and the formula for $I'$ in terms of $I$ above we find
$$I = \frac{\pi}{2} \sec \frac{\pi}{2n},$$
as before.


Wednesday, August 29, 2018

induction - Prove that $\cos\left(\frac{2\pi}{n}\right)+\cos\left(\frac{4\pi}{n}\right)+\ldots+\cos\left(\frac{2(n-1)\pi}{n}\right)=-1$


May you help on how to start, or where to look for the following question?


By using the $n$-th roots of the unity, show that:


  • $\cos\left(\frac{2\pi}{n}\right)+\cos\left(\frac{4\pi}{n}\right)+\ldots+\cos\left(\frac{2(n-1)\pi}{n}\right)=-1$

  • $\sin\left(\frac{2\pi}{n}\right)+\sin\left(\frac{4\pi}{n}\right)+\ldots+\sin\left(\frac{2(n-1)\pi}{n}\right)=0$

Can we prove the above by induction? Thank you very much in advance.


Answer




An answer not (explicitly) using roots of unity, which shows the geometry of what is going on.


Consider the points around a circle, $P_k=(\cos 2k\pi/n,\sin 2k\pi/n)$, $k=0,\dots,n-1$ (note, we include the case $k=0$, which is not in your sum.)


They form a regular $n$-gon, and a simple rotational symmetry argument shows that the center of mass:


$$\frac{1}{n}\left(P_0+P_1+\cdots + P_{n-1}\right)$$


is a point on the plane fixed by a rotation of $2\pi/n$ around $(0,0)$, and there is no such point other than $(0,0)$ (if $n>1$. If $n=1$...)


This means that $$P_1 + P_2+\cdots + P_{n-1} = -P_0=(-1,0).$$


This also explains why it is hard to find an inductive proof of this - the transformation step from a regular $n$-gon to a regular $n+1$-gon is not simply "adding a point." You'd have to do a lot of moving around of the points, which makes it hard to see how you could use the result for $n$ points to get a result for $n+1$ points.


(The division by $n$ is an unnecessary part above, but it lets us evoke the intuitive term "center of mass.")
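
A quick numeric illustration (a minimal Python sketch; $n=7$ is an arbitrary choice):

from cmath import exp, pi

n = 7
# The n-th roots of unity sum to zero, so the k = 1..n-1 terms sum to -P_0 = -1.
total = sum(exp(2j * pi * k / n) for k in range(1, n))
print(round(total.real, 12), round(total.imag, 12))  # -1.0 0.0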


real analysis - continuous functions on $\mathbb R$ such that $g(x+y)=g(x)g(y)$


Let $g$ be a function on $\mathbb R$ to $\mathbb R$ which is not identically zero and which satisfies the equation $g(x+y)=g(x)g(y)$ for $x$,$y$ in $\mathbb R$.


$g(0)=1$. If $a=g(1)$, then $a>0$ and $g(r)=a^r$ for all $r$ in $\mathbb Q$.


Show that the function is strictly increasing if $g(1)$ is greater than $1$, constant if $g(1)$ is equal to $1$ or strictly decreasing if $g(1)$ is between zero and one, when $g$ is continuous.


Answer



For $x,y\in\mathbb{R}$ and $m,n\in\mathbb{Z}$, $$ \eqalign{ g(x+y)=g(x)\,g(y) &\implies g(x-y)={g(x) \over g(y)} \\&\implies g(nx)=g(x)^n \\&\implies g\left(\frac{m}nx\right)=g(x)^{m/n} } $$ so that $g(0)=g(0)^2$ must be one (since if it were zero, then $g$ would be identically zero on $\mathbb{R}$), and with $a=g(1)$, it follows that $g(r)=a^r$ for all $r\in\mathbb{Q}$. All we need to do now is invoke the continuity of $g$ and the denseness of $\mathbb{Q}$ in $\mathbb{R}$ to finish.



For example, given any $x\in\mathbb{R}\setminus\mathbb{Q}$, there exists a sequence $\{x_n\}$ in $\mathbb{Q}$ with $x_n\to x$ (you could e.g. take $x_n=10^{-n}\lfloor 10^nx\rfloor$ to be the approximation of $x$ to $n$ decimal places -- this is where we're using that $\mathbb{Q}$ is dense in $\mathbb{R}$). Since $g$ is continuous, $y_n=g(x_n)\to y=g(x)$. But $y_n=a^{x_n}\to a^x$ since $a\mapsto a^x$ is also continuous.


Moral: a continuous function is completely determined by its values on any dense subset of the domain.


Basic theory about divisibility and modular arithmetic


I am awfully bad with number theory so if one can provide a quick solution of this, it will be very much appreciated!


Prove that if $p$ is a prime with $p \equiv 1 \pmod 4$, then there is an integer $m$ such that $p$ divides $m^2 +1$.


Answer



I will assume that you know Wilson's Theorem, which says that if $p$ is prime, then $(p-1)!\equiv -1\pmod{p}$.


Let $m=\left(\frac{p-1}{2}\right)!$. We show that if $p\equiv 1\pmod{4}$, then $m^2\equiv -1\pmod{p}$. This implies that $p$ divides $m^2+1$.


The idea is to pair $1$ with $p-1$, $2$ with $p-2$, $3$ with $p-3$, and so on until at the end we pair $\frac{p-1}{2}$ with $\frac{p+1}{2}$. To follow the argument, you may want to work with a specific prime, such as $p=13$. So we pair $1$ with $12$, $2$ with $11$, $3$ with $10$, $4$ with $9$, $5$ with $8$, and finally $6$ with $7$.


Thus for any $a$ from $1$ to $\frac{p-1}{2}$, we pair $a$ with $p-a$. Note that $a(p-a)\equiv -a^2\pmod{p}$.



So the product of all the numbers from $1$ to $p-1$ is congruent modulo $p$ to the product $$(-1^2)(-2^2)(-3^2)\cdots\left(-\left(\frac{p-1}{2}\right)^2\right).$$ This is congruent to $$(-1)^{\frac{p-1}{2}}m^2.$$ But $p-1$ is divisible by $4$, so $\frac{p-1}{2}$ is even, and therefore our product is congruent to $m^2$.


But our product is congruent to $(p-1)!$, and therefore, by Wilson's Theorem, it is congruent to $-1$.


We conclude that $m^2\equiv -1\pmod{p}$, which is what we wanted to show.
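
The construction is easy to test (a minimal Python sketch):

from math import factorial

# For p = 1 (mod 4), m = ((p-1)/2)! satisfies m^2 = -1 (mod p).
for p in [5, 13, 17, 29, 37, 41]:
    m = factorial((p - 1) // 2) % p
    assert (m * m + 1) % p == 0
    print("p =", p, " m =", m)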


elementary number theory - Congruence question with divisibility



[image: problem statement — given $a\equiv b\pmod m$ with $k\mid a$ and $k\mid b$, show that $\frac ak\equiv\frac bk\pmod{m/\gcd(k,m)}$]



I have this question and I have proved that $\frac ad\equiv\frac bd\pmod{\frac md}$.

However, I don't know how to go forward to prove that $\frac ak\equiv\frac bk\pmod{\frac md}$. Can anyone help me out? Thanks!


Answer



We have $a\equiv b\pmod m\iff a=b+c\cdot m$ where $c$ is some integer



Let $\displaystyle \frac aA=\frac bB=k$



$\implies k(A-B)=c\cdot m$



Let $(k,m)=D$ and $\displaystyle \frac kK=\frac mM=D\implies (K,M)=1$




$\displaystyle\implies K\cdot D(A-B)=c\cdot M\cdot D\iff K(A-B)=c\cdot M\implies A-B=\frac{c\cdot M}K$



As $(K,M)=1$ and $A-B$ is an integer, $K$ must divide $c,$



$\displaystyle\implies A\equiv B\pmod M\iff \frac ak\equiv \frac bk\pmod {\frac m{(k,m)}}$



as $\displaystyle M=\frac mD=\frac m{(k,m)}$


Tuesday, August 28, 2018

summation - What's the formula for the 365 day penny challenge?




Not exactly a duplicate since this is answering a specific instance popular in social media.



You might have seen the viral posts about "save a penny a day for a year and make $667.95!" The mathematicians here already get the concept while some others may be going, "what"? Of course, what the challenge is referring to is adding a number of pennies to a jar for what day you're on. So:




Day 1 = + .01
Day 2 = + .02
Day 3 = + .03
Day 4 = + .04


So that in the end, you add it all up like so:



1 + 2 + 3 + 4 + 5 + 6 + ... = 66795



The real question is, what's a simple formula for getting a sum of consecutive integers, starting at whole number 1, without having to actually count it all out?!


Answer



Have had a lot of friends ask about this lately, as it is all over FaceBook. The formula is actually quite simple:



(N (N + 1) ) / 2 where N = Highest value


Or Simply $\frac {n(n+1)}{2}$




Thus



(365 (365 + 1)) / 2 = 66795


Divide that by 100 (because there are 100 pennies in a dollar) and voilà! $667.95
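
A one-line check of the formula against the brute-force sum (a sketch in Python):

# Sum of 1..365 pennies, by formula and by direct summation.
n = 365
assert n * (n + 1) // 2 == sum(range(1, n + 1)) == 66795
print("$%.2f" % (66795 / 100))  # $667.95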



Now, this is OLD math (think 6th century BC), wherein these results are referred to as triangle numbers, in part because, as you add them up, you can stack the results in the shape of a triangle!




1 = 1
*
1 + 2 = 3
*
* *
1 + 2 + 3 = 6
*
* *
* * *
1 + 2 + 3 + 4 = 10

*
* *
* * *
* * * *






NoChance also has a fun story and answer to this question!



A little info on his lesson: -{for the super nerdy!}-

"...Carl Friedrich Gauss is said to have found this relationship in his
early youth, by multiplying n/2 pairs of numbers in the sum by the
values of each pair n+1. However, regardless of the truth of this
story, Gauss was not the first to discover this formula, and some find
it likely that its origin goes back to the Pythagoreans 5th century BC..." - wikipedia

"...The mathematical study of figurate numbers is said to have originated
with Pythagoras, possibly based on Babylonian or Egyptian precursors.
Generating whichever class of figurate numbers the Pythagoreans

studied using gnomons is also attributed to Pythagoras. Unfortunately,
there is no trustworthy source for these claims, because all surviving
writings about the Pythagoreans are from centuries later. It seems to
be certain that the fourth triangular number of ten objects, called
tetractys in Greek, was a central part of the Pythagorean religion,
along with several other figures also called tetractys. Figurate
numbers were a concern of Pythagorean geometry. ... - wikipedia








See? Fun stuff, numbers!


sequences and series - find $\frac{3}{6}+\frac{3\cdot5}{6\cdot9}+\frac{3\cdot5\cdot7}{6\cdot9\cdot12}+\cdots$




find $\frac{3}{6}+\frac{3\cdot5}{6\cdot9}+\frac{3\cdot5\cdot7}{6\cdot9\cdot12}+\cdots$




I had $(1-x)^{-\frac{p}{q}}$ in mind.
$$S=\frac{3}{6}+\frac{3\cdot5}{6\cdot9}+\frac{3\cdot5\cdot7}{6\cdot9\cdot12}+\cdots$$
$$S+1=1+\frac{3}{6}+\frac{3\cdot5}{6\cdot9}+\frac{3\cdot5\cdot7}{6\cdot9\cdot12}+\cdots$$
$$S+1=1+\frac{3}{2!\cdot3}+\frac{3\cdot(3+2)}{3!\cdot9}+\frac{3\cdot(3+2)\cdot(3+4)}{4!\cdot27}+\cdots$$
$$S+1=1+\frac{3}{2!}\left(\frac{\frac{2}{3}}{2}\right)+\frac{3\cdot(3+2)}{3!}\left(\frac{\frac{2}{3}}{2}\right)^2+\frac{3\cdot(3+2)\cdot(3+4)}{4!}\left(\frac{\frac{2}{3}}{2}\right)^3+\cdots$$
$$S+1=\left(1-\frac{2}{3}\right)^\frac{-3}{2}$$
I got $S=3\sqrt{3}-1$



But answer given is $S=3\sqrt{3}-4$


Answer




In the last step, you miss some multiple of $3$, and you miss one term of the expansion.



$$
S=\sum_{n\geq 1} \frac{(2n+1)!!}{(n+1)!3^n}=\sum_{n\geq 1} \binom{-\frac{1}{2}}{n+1}\frac{(-2)^{n+1}}{3^n}=3\sum_{n\geq 1} \binom{-\frac{1}{2}}{n+1}\left(\frac{-2}{3}\right)^{n+1}=3\left(\left(1-\frac{2}{3}\right)^{-\frac{1}{2}}-1-\frac{1}{3}\right)=3\sqrt{3}-4.
$$
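
Partial sums confirm the closed form numerically (a sketch in Python):

from math import sqrt, isclose

# n-th term of 3/6 + (3*5)/(6*9) + (3*5*7)/(6*9*12) + ...
def term(n):
    t = 1.0
    for k in range(1, n + 1):
        t *= (2*k + 1) / (3*(k + 1))
    return t

s = sum(term(n) for n in range(1, 200))
assert isclose(s, 3*sqrt(3) - 4)
print(s, 3*sqrt(3) - 4)  # both about 1.19615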


calculus - Mapping the Real Line to the Unit Interval

What is a continuous mapping of the real line $(-\infty, \infty)$ to the interval $[0, 1]$? I'm trying out logs and exponentials, but they don't seem to work.

linear algebra - If $AB = I$ then $BA = I$




If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$.





I do not understand anything more than the following.




  1. Elementary row operations.

  2. Linear dependence.

  3. Row reduced forms and their relations with the original matrix.



If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?




P.S.: Please avoid using the transpose and/or inverse of a matrix.


Answer



Dilawar says in 2. that he knows linear dependence! So I will give a proof, similar to that of TheMachineCharmer, which uses linear independence.



Suppose each matrix is $n$ by $n$. We consider our matrices to all be acting on some $n$-dimensional vector space with a chosen basis (hence isomorphism between linear transformations and $n$ by $n$ matrices).



Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.



Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Thus, since $B$ has full range, the matrix $I-BA$ gives $0$ on all vectors. But this means that it must be the $0$ matrix, so $I=BA$.



Monday, August 27, 2018

integration - Find $\lim\limits_{n\to+\infty}U_n$ where $U_n=\frac{1}{n}\int\limits_0^n\sin^2(t)\, dt$


Writing $$\int\limits_0^n\sin^2(t)dt=\int\limits_0^{\pi E(\frac{n}{\pi})}\sin^2(t)dt + \int\limits_{\pi E(\frac{n}{\pi})}^n\sin^2(t)dt$$ where $E(x)$ denotes the floor function of $x$.


Use the squeeze theorem to find $\lim\limits_{n\to+\infty}U_n$


I tried to evaluate the Integral but it's specifically asked to use $\pi E(\frac{n}{\pi})$


Answer



As we have:


$$\sin^2(t)=\frac12(1-\cos(2t))$$


It is clear that the function being integrated has a periodicity of $\pi$. Hence every integral through a whole $\pi$ period will have the same value. So, we have that:


$$\int\limits_0^\pi\sin^2(t)dt=\int\limits_0^\pi\frac12(1-\cos(2t))dt=\frac\pi2$$



Then, if we break down the integration as the function is suggesting:


$$\int\limits_0^n\sin^2(t)dt=\int\limits_0^{\pi \lfloor\frac{n}{\pi}\rfloor}\sin^2(t)dt + \int\limits_{\pi \lfloor\frac{n}{\pi}\rfloor}^n\sin^2(t)dt=\frac\pi2 \lfloor\frac{n}\pi\rfloor + \int\limits_{\pi \lfloor\frac{n}{\pi}\rfloor}^n\sin^2(t)dt$$


Notice that the integrand is always positive, so we can easily find a lower and an upper bound by dropping the leftover integral or letting it complete another full period, which means that:


$$\frac\pi2 \lfloor\frac{n}\pi\rfloor<\frac\pi2 \lfloor\frac{n}\pi\rfloor + \int\limits_{\pi \lfloor\frac{n}{\pi}\rfloor}^n\sin^2(t)dt<\frac\pi2 \lfloor\frac{n}\pi\rfloor+\frac\pi2$$


$$\frac1n\,\frac\pi2 \left\lfloor\frac{n}\pi\right\rfloor<U_n<\frac1n\left(\frac\pi2 \left\lfloor\frac{n}\pi\right\rfloor+\frac\pi2\right)$$


So, if we let $n \to \infty$, as both sides of the inequality tend to $1/2$, we have that


$$\lim_{n\to\infty}U_n=\frac12$$
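
Using the antiderivative $\int_0^n\sin^2(t)\,dt=\frac n2-\frac{\sin(2n)}4$, the limit is also easy to see numerically (a sketch in Python):

from math import sin

# U_n = (1/n) * integral_0^n sin^2(t) dt = 1/2 - sin(2n)/(4n)
for n in [10, 100, 1000, 10**6]:
    U = (n / 2 - sin(2 * n) / 4) / n
    print(n, U)  # tends to 0.5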


inequality - Proving Alternating Harmonic Series with Mathematical Induction



I'm stuck on this mathematical induction inequality and am not sure about what to do with it.
Prove using the principle of mathematical induction that:
$$1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-....+(-1)^{n-1}\frac{1}{n}>0 \quad\forall\;n\in\mathbb Z^+$$
I'm not sure how to even start it properly due to the alternating signs. I was thinking that I should work it out for $n=1$ and then $n=2$. But when it reaches the inductive step, I have no idea what to do.




Thanks a lot!!


Answer



O.K, first case $n=2$ which is $1-\frac{1}{2} >0 $ which is true.



Assume that its true for $n=2k$ for $k \geq 1$ which is $1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4} + \cdots + \frac{1}{2k-1}-\frac{1}{2k} >0$



We want to prove it for $n=2(k+1)$, which is $1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4} + \cdots + \frac{1}{2k-1}-\frac{1}{2k} +\frac{1}{2k+1}-\frac{1}{2k+2}>0$. By the assumption, the sum up to $-\frac{1}{2k}$ is bigger than $0$, so we bound it below by $0$; it remains to show $0+\frac{1}{2k+1}-\frac{1}{2k+2}>0$, which is $\frac{1}{2k+1} > \frac{1}{2k+2}$, which is clearly true. Thus for all even cases the inequality holds. (For odd cases: to get to the next even case we subtract the positive value $\frac{1}{2k+2}$, and the resulting even partial sum is known to be bigger than $0$.)



So if we have a value from which we subtract some smaller value and the result is still bigger than $0$, then the initial value was certainly bigger than $0$.




This concludes the proof.


Sunday, August 26, 2018

calculus - When a function contains a sequence, and how to find the function's limit?

Suppose $\lim \limits_{n \to \infty} a_n = 0$. Find the limit



$$\lim \limits_{n \to \infty} \left(1+a_n \frac{x}{n}\right)^n$$




It's kind of intuitive that the answer is $1$, but clearly I can't just say that the limit equals $\lim \limits_{n \to \infty} 1^n = 1$.



I feel like I should let $y =(1+a_n \frac{x}{n})^n$. Then take $\log$ so that $\log(y) = n\log(1+a_n \frac{x}{n})$. But L'Hôpital doesn't apply here either.

analysis - Convergence of Sequence with factorial

I want to show that $$ a_n = \frac{3^n}{n!} $$ converges to zero. I tried Stirling's formula; by it the fraction becomes $$ \frac{3^n}{\sqrt{2\pi n}\, (n^n/e^n)} $$ which equals $$ \frac{1}{\sqrt{2\pi n}} \left( \frac{3e}{n} \right)^n. $$ From this, can I conclude that it goes to zero because $\frac{3e}{n}$ and $\frac{1}{\sqrt{2\pi n}}$ approach zero?

linear algebra - determinant of a tricky matrix



I'm doing a research on matrix integrators and I ran into a problem in one particular case. To finish my proof the last thing remaining is to prove the nonsingularity of a specific matrix $$M_n: (m_{ij} = \frac{1}{a_i - a_j}, 1\leq i \leq n, 1\leq j \leq n,i\neq j;m_{ii} = \frac{c}{a_i - b} + \sum\limits_{k\neq i, 1\leq k \leq n}\frac{1}{a_i - a_k}),$$
where all $a_i, b$ are distinct.



To be more clear I provide $$M_2 = \begin{pmatrix}
\frac{c}{a_1 -b} + \frac{1}{a_1 - a_2} && \frac{1}{a_1 - a_2}\\
\frac{1}{a_2 - a_1} && \frac{c}{a_2 -b} + \frac{1}{a_2 - a_1}
\end{pmatrix}$$


$$M_3 = \begin{pmatrix}
\frac{c}{a_1 -b} + \frac{1}{a_1 - a_2} + \frac{1}{a_1 - a_3} && \frac{1}{a_1 - a_2} && \frac{1}{a_1 - a_3}\\
\frac{1}{a_2 - a_1} && \frac{c}{a_2 -b} + \frac{1}{a_2 - a_1} + \frac{1}{a_2 - a_3} && \frac{1}{a_2 - a_3}\\
\frac{1}{a_3 - a_1} && \frac{1}{a_3 - a_2} && \frac{c}{a_3 - b} + \frac{1}{a_3 - a_1} + \frac{1}{a_3 - a_2}
\end{pmatrix}$$



For $n \leq 7$ I calculated $\det(M_n) = \frac{c(c+1)\cdots(c + n -1)}{\prod\limits_{1\leq i\leq n}(a_i - b)}$, but I have no idea how to prove this in the general case.



In my particular case $c\in \mathbb N$, so this formula will prove the nonsingularity of $M_n$.




Any ideas and tips to prove the formula, or even to prove nonsingularity of $M_n$ in some other way, are very much appreciated.


Answer



The answer is a development of ideas from the comments of amsmath. The decisive step forward is derivation of the equation $(7)$ below.



Given a set of $n$ pairs of complex numbers
$$\{(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)\}$$ such that
$$
\forall\; i\ne j:\; x_i\ne x_j,
$$

define its interpolating polynomial as

$$
y(x)=\sum_{i}y_i\prod_{k\ne i}\frac{x-x_k}{x_i-x_k}.\tag1
$$



Differentiating the expression over $x$ and evaluating the result at $x_i$ one obtains:
$$
y'_i\equiv y'(x_i)=\sum_{j\ne i}\frac1{x_i-x_j}
\left[y_i-y_j\prod_{k\ne(i,j)}\frac{x_i-x_k}{x_j-x_k}\right].\tag2
$$

or

$$
\frac{y'_i}{\prod\limits_{k\ne i}x_i-x_k}
=\sum_{j\ne i}\frac1{x_i-x_j}\left[\frac{y_i}{\prod\limits_{k\ne i}x_i-x_k}
+\frac{y_j}{\prod\limits_{k\ne j}x_j-x_k}\right].\tag{2a}
$$

Introducing $f_i=\frac{y_i}{\prod\limits_{k\ne i}x_i-x_k}$, $f'_i=\frac{y'_i}{\prod\limits_{k\ne i}x_i-x_k}$ the equation $(\text{2a})$ can be rewritten in matrix notation as:
$$
\begin{pmatrix}
\sum\limits_{i\ne1}\frac{1}{x_1-x_i}& \frac1{x_1-x_2}&\cdots&\frac1{x_1-x_n}\\
\frac1{x_2-x_1}& \sum\limits_{i\ne2}\frac{1}{x_2-x_i}&\cdots&\frac1{x_2-x_n}\\

\vdots& \vdots& \ddots&\vdots\\
\frac1{x_n-x_1}&\frac1{x_n-x_2}&\cdots&\sum\limits_{i\ne n}\frac{1}{x_n-x_i}\\
\end{pmatrix}
\begin{pmatrix}
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f_1\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f_2\\
\vdots\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f_n\\
\end{pmatrix}=
\begin{pmatrix}

\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f'_1\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f'_2\\
\vdots\\
\vphantom{\sum\limits_{i\ne1}\frac{1}{x_1-x_i}}f'_n\\
\end{pmatrix}.\tag3
$$

or
$$
A f=f'.\tag4
$$




Assume now a special form of the interpolating polynomial:
$$
y_l(x)=(\lambda x+\beta)^l,\quad
\beta,\lambda\in\mathbb C,\,\lambda\ne 0;\; l\in\mathbb Z,\,0\le l<n,
$$

with corresponding $f$-vector components
$$
f_{li}=\frac{(\lambda x_i+\beta)^l}{\prod\limits_{k\ne i}x_i-x_k},\quad
f'_{li}=\frac{\lambda l(\lambda x_i+\beta)^{l-1}}{\prod\limits_{k\ne i}x_i-x_k}.\tag5

$$



Substituting the vectors $f_l$ and $f'_l$ into $(4)$ and multiplying both sides of the resulting equation from the left by diagonal matrix $D$ with elements
$$
D_{ii}=\lambda x_i+\beta,\tag6
$$

one obtains:
$$
DA f_l=\lambda l f_l.\tag7
$$




From this one concludes that $f_l$ are the eigenvectors of the matrix $DA$ with corresponding eigenvalues $\epsilon_l=\lambda l$. Observe that we have found all $n$ (distinct) eigenvalues of the matrix.



Since the determinant of the matrix $DA+Iz$ is the characteristic polynomial of the matrix $-DA$, one obtains:
$$\begin{align}
&\det (DA+I z)=\prod_{l=0}^{n-1} (z+\lambda l)\tag8\\
&\implies
\det (A+D^{-1}z)=\det{D^{-1}}\det (DA+Iz)
=\prod_{l=0}^{n-1}\frac{z+\lambda l}{\lambda x_{l+1}+\beta}.\tag9
\end{align}

$$



It remains only to observe that $A+D^{-1}z$ with $\lambda=1$, $\beta=-b$, $z=c$, $x_i=a_i$ is exactly your matrix $M$.


elementary number theory - Prove: $\gcd(n^a-1,n^b-1)=n^{\gcd(a,b)}-1$

I'm trying to prove the following statement:
$$\forall_{a,b\in\Bbb{N^{+}}}\gcd(n^a-1,n^b-1)=n^{\gcd(a,b)}-1$$




As for now I managed to prove that $n^{\gcd(a,b)}-1$ divdes $n^a-1$ and $n^b-1$:



Without loss of generality let $a>b$ (if $a=b$, the result is obvious), $n\neq1$ ($\gcd(0,0)$ makes no sense). Then if we let $a=kd, b=jd, d=\gcd(a,b)$, we see that for $n^{kd}-1$ we have $\frac{(n^d)^k-1}{n^d-1}=1+n^d+...+n^{d(k-1)}=C$ (it's a finite geometric series), so $n^a-1=(n^d-1)C$. Same works for $n^b-1$, so $n^d-1$ divides both of these numbers.



How can I prove that $n^d-1$ not only divides both numbers, but is the greatest common divisor?
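
A quick numeric sanity check of the identity (a minimal Python sketch; the ranges are arbitrary):

from math import gcd
from random import randint

# gcd(n^a - 1, n^b - 1) == n^gcd(a, b) - 1
for _ in range(1000):
    n, a, b = randint(2, 10), randint(1, 12), randint(1, 12)
    assert gcd(n**a - 1, n**b - 1) == n**gcd(a, b) - 1
print("identity holds on all samples")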

algebra precalculus - Prove $e^{i \pi} = -1$











I recently heard that $e^{i \pi} = -1$.



WolframAlpha confirmed this for me, however, I don't see how this works.


Answer



This identity follows from Euler's formula,
\begin{align}
e^{i \theta} = \cos \theta + i \sin \theta,
\end{align}

which has many proofs. The one that I like the most is the following (sketched). Define $f(\theta) = e^{-i \theta}(\cos \theta + i \sin \theta)$. Use the product rule to show that $f^{\prime}(\theta)= 0$, so $f(\theta)$ is constant in $\theta$. Evaluate $f(0)$ to prove that $f(\theta) = f(0)$ everywhere.



Take $\theta = \pi$ for your claim.
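
The identity can also be checked numerically (a sketch in Python):

import cmath

# exp(i*pi) should equal -1 up to floating-point error.
z = cmath.exp(1j * cmath.pi)
print(z)  # (-1+1.2246467991473532e-16j)
assert abs(z - (-1)) < 1e-12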


Saturday, August 25, 2018

calculus - Differentiate $\tan^3(x^2)$




Differentiate $\tan^3(x^2)$




I first applied the chain rule and made $u=x^2$ and $g=\tan^3u$. I then calculated the derivative of $u$, which is $$u'=2x$$ and the derivative of $g$, which is

$$g'=3\tan^2u$$



I then applied the chain rule and multiplied them together, which gave me



$$f'(x)=2x3\tan^2(x^2)$$



Is this correct? If not, any hints as to how to get the correct answer?


Answer



You are almost there! In this case you need to apply the Chain Rule three times.




We have that $$(\tan^3(x^2))'=3\tan^2(x^2)\cdot(\tan(x^2))'=3\tan^2(x^2)\cdot\sec^2(x^2)\cdot(x^2)'=6x\tan^2(x^2)\sec^2(x^2)$$
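
A symbolic check of this derivative (a sketch using the third-party SymPy library, assuming it is installed):

import sympy as sp

x = sp.symbols('x')
derivative = sp.diff(sp.tan(x**2)**3, x)
# sec^2 = 1 + tan^2, so the answer above equals:
expected = 6*x*sp.tan(x**2)**2 * (1 + sp.tan(x**2)**2)
assert sp.simplify(derivative - expected) == 0
print(derivative)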


linear algebra - Row reduction over any field?

EDIT: as stated in the first answer, my initial question was confused. Let me restate the question (I have to admit that it is now quite a different one):


Let's say we have a matrix $A$ with entries from $\mathbb{C}$. Such matrices form a vector space over $\mathbb{R}$ and also form another vector space over $\mathbb{C}$. If we want to row reduce $A$, we see that often we cannot row reduce it over $\mathbb{R}$, that is, using elementary operations involving only scalars which are real numbers; however, we can row reduce it using operations that involve complex numbers.


In conclusion, it seems to me that the row reduction of a matrix with elements from a field (all those matrices form a vector space) may or may not be possible, depending on the underlying field of that vector space. So this helpful tool (row reduction) is not always available for us to use once we move a little further from, let's say, the most "elementary" vector spaces.


Is my observation correct?


Question as initially stated (please ignore):
Consider the vector space of complex matrices over the field of (i) complex numbers and (ii) real numbers. It is straightforward to find examples where a complex matrix can be row-reduced (say, to the identity matrix) in case (i) but cannot be row-reduced in case (ii).


What gives? So, we can assume that a matrix may be invertible over $\mathbb{C}$ but not over $\mathbb{R}$?


Thanks in advance.

radicals - Are the square roots of all non-perfect squares irrational?




I was asked to define a non-perfect square. Now obviously, the first definition that comes to mind is a square that has a root that is not an integer. However, in the examples, 0.25 was considered a perfect square. And the square itself + its root were both not integers.




Is it that all non-perfect squares have irrational roots, e.g. $\sqrt{2}$?


Answer



In the integers, a perfect square is one that has an integral square root, like $0,1,4,9,16,\dots$ The square root of all other positive integers is irrational. In the rational numbers, a perfect square is one of the form $\frac ab$ in lowest terms where $a$ and $b$ are both perfect squares in the integers. So $0.25=\frac 14$ is a perfect square in the rationals because both $1$ and $4$ are perfect squares in the integers. Any rational that has a reduced form where one of the numerator and denominator is not a perfect square in the integers is not a perfect square. For example, $\frac 12$ is not a perfect square in the rationals. $1$ is a perfect square in the integers, but $2$ is not, and there is no rational that can be squared to give $\frac 12$.


logarithms - Graph of a Log Function



I am curious as to why Wolfram|Alpha is graphing a logarithm the way that it is. I was always taught that a graph of a basic logarithm function $\log{x}$ should look like this:




[image: graph of $\log x$ for $x>0$]



However, Wolfram|Alpha is graphing it like this:



[image: Wolfram|Alpha plot of $\log x$, showing a real part for $x<0$ and an imaginary part (orange)]



As you can see, there is a "real" range in the region $(-\infty, 0)$, and an imaginary part indicated by the orange line. Is there a part about log graphs that I am missing which would explain why Wolfram|Alpha shows the range of the log function as $\mathbb{R}$?


Answer



$\ln(x)$ is formally defined as the solution to the equation $e^y=x$.




If $x$ is positive, this equation has a unique real solution; however, if $x$ is negative it doesn't have a real solution, but it has complex roots.



Indeed, $\ln(x)= a+ib$ is equivalent to



$$x= e^{a+ib}= e^{a} (\cos(b)+i \sin (b)) \,.$$



If $x <0$ we need $e^{a}=|x|$, $\cos(b)=-1$ and $\sin(b)=0$.



Thus, $a= \ln(|x|)$ and $b=\pi+2k\pi$ for $k\in\mathbb Z$.
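
This matches the principal value that Python's cmath returns (a quick sketch):

import cmath

# Principal branch: log(-2) = ln(2) + i*pi, i.e. a = ln|x|, b = pi (k = 0).
print(cmath.log(-2))       # (0.6931471805599453+3.141592653589793j)
print(cmath.log(-2).imag)  # 3.141592653589793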



elementary set theory - Bijection between $[0,1]$ and $[0,1]times[0,1]$

I know that $|\mathbb R|=|\mathbb R\times\mathbb R|$, and that $|[0,1]|=|\mathbb R|$, which suggests that $|[0,1]|=|[0,1] \times [0,1]|$ but I would like to know a bijection between the interval and square.

elementary number theory - How to prove $\gcd(m, n) = \gcd(-m, n)$?



Beginner question here:




For a quiz on Elementary Number Theory in my Discrete Math course I was asked to prove if $\gcd(m, n) = \gcd(-m, n)$. I used the Euclidean Algorithm to show that the two expressions simplify to $\gcd(n,\ m\pmod{n})$ and $\gcd(n,\ -m\pmod{n})$ respectively. Then I went on to show (well I tried... but that's another question) that $-m\pmod{n} = m\pmod{n}$.



If I was able to do this correctly, does this approach result in a valid proof? If not, is there a different/better way to do it?



Thanks!


Answer



HINT $\rm\ \ d\ |\ m,\:n\ \iff\ d\ |\ {-}m,\:n\:.\: $ Thus $\rm\ m,n\ $ and $\rm\: -m,n\ $ have the same set of common divisors $\rm\:d\:$ hence the same greatest common divisor. $\ $ QED



Alternatively it's a special case $\rm\ k=-1\ $ in $\rm\ (k\:m,\:n)\ =\ ((k,n)\:m,\:n)\ $




Proof $\rm\quad\ \ (km,n)\ =\ (km,n(m,1))\ =\ (km,nm,n)\ =\ ((k,n)m,n)$



The above proof uses only basic gcd laws (associative, commutative, distributive) - see here.



Euclid's Lemma is the case $\rm\ (k,n) = 1\ \Rightarrow\ (k\:m,\:n)\ =\ (m,\:n)$



Friday, August 24, 2018

Convergence of a sequence (possibly Riemann sum)

Let $a_1, a_2, a_3, . . . , a_n$ be the sequence defined by
$$
a_n = 2\sqrt{n}-\sum_{k=1}^{n}\frac{1}{\sqrt{k}} = 2\sqrt{n} - \frac{1}{\sqrt{1}}-\frac{1}{\sqrt{2}}-...-\frac{1}{\sqrt{n}}
$$




show that the sequence $a_n$ is convergent to some limit $L$, and that $1<L<2$.




I tried looking at this as a Riemann sum. However, I failed to convert it to that. Any hints on that or an alternate solution? Thanks

real analysis - $\lim_{n\rightarrow\infty}\frac{a_{n}}{n}=1$ implies that $\lim_{n\rightarrow\infty}\sup_{0\leq k\leq n}\frac{|a_{k}-k|}{n}=0$




I'm trying to prove the following:




If $$\displaystyle\lim_{n\rightarrow\infty}\frac{a_{n}}{n}=1$$ then $$\displaystyle\lim_{n\rightarrow\infty}\sup_{0\leq k\leq n}\frac{|a_{k}-k|}{n}=0.$$




So, the hypothesis $\displaystyle\lim_{n\rightarrow\infty}\frac{a_{n}}{n}=1$ implies that $\forall\epsilon>0,\exists N:=N(\epsilon)\in\mathbb{N}$ such that $n\geq N$ implies $\frac{|a_{n}-n|}{n}<\epsilon.$



Fixed $\epsilon>0,$ we have $\displaystyle\sup_{n\geq N}\frac{|a_{n}-n|}{n}<\epsilon.$




Now, for such $N$ I'd like to prove that $\sup_{0\leq k\leq n}\frac{|a_{k}-k|}{n}<\epsilon$ $\forall n\geq N,$ but I'm stuck in this.



I think that if $n\geq N$ then $$\sup_{0\leq k\leq n}\frac{|a_{k}-k|}{n}\leq\sup_{0\leq k\leq n}\frac{|a_{k}-k|}{N}.$$



Then, for example, if $\sup_{0\leq k\leq n}\frac{|a_{k}-k|}{N}=\frac{|a_{N}-N|}{N}$ it's done because of the hypothesis. If $\sup_{0\leq k\leq n}|a_{k}-k|$ is achieved at some $k<N$, the numerator stays bounded as $n$ grows, so the quotient can be made small.

Because of the above I think such limit have sense and it's equal to $0$ but I don't get how to prove it exactly.



Any kind of help is thanked in advanced.


Answer




Let $\varepsilon>0$ be arbitrary. Then there exists $N_{1}\in\mathbb{N}$
such that $\left|\frac{a_{n}-n}{n}\right|<\varepsilon$ whenever $n\geq N_{1}$.
Let $M=\max_{0\leq k<N_{1}}|a_{k}-k|$ and choose $N_{2}\in\mathbb{N}$ such that $\frac{M}{N_{2}}<\varepsilon$. Let $N=\max(N_{1},N_{2})$.



Let $n\geq N$ be arbitrary. Let $0\leq k\leq n$. If $0\leq k<N_{1}$, we have that $\left|\frac{a_{k}-k}{n}\right|\leq\frac{M}{N}<\varepsilon$.
If $N_{1}\leq k\leq n$, then $\left|\frac{a_{k}-k}{n}\right|\leq\left|\frac{a_{k}-k}{k}\right|<\varepsilon$
because $k\geq N_{1}$. It follows that $\sup_{0\leq k\leq n}\left|\frac{a_{k}-k}{n}\right|<\varepsilon$.
Hence $\lim_{n}\sup_{0\leq k\leq n}\left|\frac{a_{k}-k}{n}\right|=0$.



calculus - Solve $\lim_{x\to 0} \frac{\sin x-x}{x^3}$





I'm trying to solve this limit




$$\lim_{x\to 0} \frac{\sin x-x}{x^3}$$




Solving using L'hopital rule, we have:



$$\lim_{x\to 0} \frac{\sin x-x}{x^3}= \lim_{x\to 0} \frac{\cos x-1}{3x^2}=\lim_{x\to 0} \frac{-\sin x}{6x}=\lim_{x\to 0} \frac{-\cos x}{6}=-\frac{1}{6}.$$




Am I right?



I'm trying to solve this using change of variables, I need help.



Thanks



EDIT



I didn't understand the answer and the commentaries, I'm looking for an answer using change of variables.


Answer




I suppose the below counts as a change of variable.



Assuming that the limit exists, then you can compute the limit as follows:



Replace $x$ by $3x$, then the limit (say $L$) is



$$L = \lim_{x\to 0}\frac{\sin 3x - 3x}{27x^3} = \lim_{x\to 0}\frac{3\sin x - 3x - 4\sin^3 x}{27x^3} = $$
$$\lim_{x\to 0}\frac{1}{9}\left(\frac{\sin x - x}{x^3}\right) - \lim_{x\to 0}\frac{4}{27}\left(\frac{\sin^3 x}{x^3}\right)$$



(we used the formula $\sin 3x = 3\sin x - 4 \sin^3 x$).




Thus we get



$$L = \frac{L}{9} - \frac{4}{27} \implies L = -\frac{1}{6}$$



Of course, we still need to prove that the limit exists.


real analysis - Show that the following function is continuous on Q:



Let $h : \mathbb{Q}\rightarrow\mathbb{R}$ be given by $$ h(x)= \begin{cases} 0, &\ x^2<2\\ \ \\ 1,&\ x^2>2 \end{cases} $$


I see why this function is continuous on $\mathbb{Q}$, but I can't show it. Maybe we could work with the epsilon-delta criterion?


So for every $\epsilon> 0$ we find a $\delta> 0$ with $| x - x_0 | <\delta$ implying $ | h(x) - h(x_0) | < \epsilon$ ? Am I right? How do I have to choose my $\delta$ then?


Answer



Let $q\in \mathbb Q$.


Since $q\ne\sqrt2$, then there exists $\delta>0$ such that $\sqrt2\not\in(q-\delta,q+\delta)$. So $h$ is constant on $(q-\delta,q+\delta)$, and thus $$ |h(x)-h(q)|=0<\varepsilon $$ for any $\varepsilon>0$ that you choose.


real analysis - For given $a \in \mathbb{R}$ there exists unique continuous function $f:\mathbb{R} \to \mathbb{R}$.


For given $a \in \mathbb{R}$ there exists a unique continuous function $f:\mathbb{R} \to \mathbb{R}$ that satisfies $f(x+y)=f(x)f(y)$ for $x,y \in \mathbb{R}$ and $f(1)=a$.





These theorems were discussed in my mathematical-modeling class and were given to us to prove.

Thursday, August 23, 2018

algebra precalculus - The drying water melon puzzle



I couldn't find an explanation to this problem that I could understand.



A watermelon consists of 99% water, and that water measures 2 litres. After a day in the sun the watermelon dries up and now consists of 98% water. How much water is left in the watermelon?



I know the answer is ~1 litre, but why is that? I've read a couple of answers but I guess I'm a bit slow because I don't understand why.



EDIT
I'd like you to assume that I know no maths. Explain it like you would explain it to a 10 year old.


Answer




At the beginning the solid material is $1\%$ of the total which is a trifle (to be neglected) more than $1\%$ of $99\%$ of the total, or $1\%$ of $2000\ {\rm cm}^3$. Therefore the solid material has volume $\sim20\ {\rm cm}^3$.



After one day in the sun these $20\ {\rm cm}^3$ solid material are still the same, but now they make up $2\%$ of the total. Therefore the total now will be $1000\ {\rm cm}^3$ or $1$ litre. $98\%$ of this volume, or almost all of it, will be water.


Wednesday, August 22, 2018

calculus - $\sqrt{c+\sqrt{c+\sqrt{c+\cdots}}}$, or the limit of the sequence $x_{n+1} = \sqrt{c+x_n}$



(Fitzpatrick Advanced Calculus 2e, Sec. 2.4 #12)




For $c \gt 0$, consider the quadratic equation
$x^2 - x - c = 0, x > 0$.



Define the sequence $\{x_n\}$ recursively by fixing $|x_1| \lt c$ and then, if $n$ is an index for which $x_n$ has been defined, defining



$$x_{n+1} = \sqrt{c+x_n}$$



Prove that the sequence $\{x_n\}$ converges monotonically to the solution of the above equation.



Note: The answers below might assume $x_1 \gt 0$, but they still work, as we have $x_3 \gt 0$.







This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.


Answer



Assuming that you know that a monotone, bounded sequence converges, you want to do two things. First, show that $\langle x_n:n\in\mathbb{Z}^+\rangle$ is monotone and bounded, and then show that its limit is the positive root of $x^2-x-c=0$.



If $c=x_1=1$, $x_2=\sqrt2>x_1$, while if $c=1$ and $x_1=2$, $x_2=\sqrt3<x_1$.


The positive root of the quadratic is $\frac12(1+\sqrt{1+4c})$, which I’ll denote by $r$. If $x_n\to r$, as claimed, and does so monotonically, it must be the case that the sequence increases monotonically if $x_1<r$ and decreases monotonically if $x_1>r$. In the examples in the last paragraph, $r=\frac12(1+\sqrt5)\approx 1.618$, so they behave as predicted.



This suggests that your first step should be to show that if $x_n<r$, then $x_n<x_{n+1}<r$, and that if $x_n>r$, then $x_n>x_{n+1}>r$; that would be enough to show that $\langle x_n:n\in\mathbb{Z}^+\rangle$ is both monotone and bounded and hence that it has a limit.



Suppose that $0\le x_n<r$. Then $x_n^2-x_n-c<0$, so $x_{n+1}^2=c+x_n>x_n^2$, and therefore $x_{n+1}>x_n$. Is it possible that $x_{n+1}\ge r$? That would require that $x_{n+1}^2-x_{n+1}-c\ge 0$ (why?) and hence that $$x_{n+1}^2\ge x_{n+1}+c>x_n+c=x_{n+1}^2\;,$$ which is clearly impossible. Thus, if $0\le x_n<r$, then $x_n<x_{n+1}<r$. I leave the case $x_n>r$ to you.



Once this is done, you still have to show that the limit of the sequence really is $r$. Let $f(x)=\sqrt{c+x}$; clearly $f$ is continuous, so if the sequence converges to $L$, we have $$L=\lim_{n\to\infty}x_n=\lim_{n\to\infty}x_{n+1}=\lim_{n\to\infty}f(x_n)=f(L)\;,$$ and from there it’s trivial to check that $L=r$.



Added: Note that although the problem gave us $x_1>0$, this isn’t actually necessary: all that’s needed is that $x_1\ge -c$, so that $x_2$ is defined, since $x_2=\sqrt{c+x_1}\ge 0$ automatically.
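
A quick numeric illustration of the convergence (a sketch in Python; $c=1$ makes $r$ the golden ratio):

from math import sqrt

c = 1.0
r = (1 + sqrt(1 + 4*c)) / 2  # positive root of x^2 - x - c = 0
for x1 in (0.5, 2.0):        # one start below r, one above
    x = x1
    for _ in range(50):
        x = sqrt(c + x)
    print(x1, x, r)          # both runs converge to r = 1.618...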



sequences and series - How to solve this indetermination when calculating a limit?



I'm stuck trying to find the limit of the sequence $$\frac{\sqrt{12 + a_n} - \sqrt{4a_n}}{a_n^2 - 2a_n - 8}$$



Where I'm given that $a_n > 4$ and $a_n \rightarrow 4$



Both the numerator and the denominator tend to 0, and I can't find how to solve this indetermination. I tried multiplying and dividing by the "reciprocal" of the numerator to get rid of the square roots in the numerator, but that doesn't seem to lead anywhere. What else can I try?


Answer



Hint:




$$b^2-2b-8=(b-4)(b+2)$$



$$\sqrt{12+b}-\sqrt{4b}=-\dfrac{3(b-4)}{\sqrt{12+b}+\sqrt{4b}}$$



If $b\to4,b\ne4\implies b-4\ne0$ hence can be cancelled safely


elementary number theory - Generalization of a Result on Modular Inverses



Yesterday, I attempted to solve the general system of linear congruences (I'm not sure why I've never tried this before.)
\begin{align*} x &\equiv a \pmod{A} \\
x &\equiv b \pmod{B}.\end{align*}
I let $x = a + pA$ and $x = b + qB$ for some $p,q\in\mathbb{Z}$, and I got

$$a + pA = b + qB \implies a \equiv b + qB \pmod{A} \implies q \equiv(a - b)B^{-1}_A \pmod{A}.$$
where $B^{-1}_A$ is the inverse of $B$ modulo $A$. Then $q = (a - b)\cdot B_A^{-1} + rA$ for some $r\in\mathbb{Z}$, so
$$ x = b + (a - b)B_A^{-1} B + rAB\implies x\equiv b + (a - b)B^{-1}_AB \pmod{AB}. $$
However, note that by symmetry, we can also conclude that
$$ x \equiv a + (b - a)A^{-1}_BA \pmod{AB}.$$
Thus,
$$ a + (b-a)A^{-1}_BA \equiv b + (a - b)B^{-1}_AB \pmod{AB}.$$
If $\gcd(a - b, AB) =1$, then
$$a - b = (a - b)(BB^{-1}_A + AA_B^{-1}) \pmod{AB} \implies BB^{-1}_A + AA_B^{-1}\equiv 1 \pmod{AB}.$$
Can this result be generalized to product rings? Also, if there is a generalization, does it have any uses?



Answer



I always use your method to solve these equations, but your result is just the standard formula for the Chinese remainder theorem.



If $\gcd(a-b,AB) = 1$:



  $\gcd(qB-pA,AB) = \gcd((x-pA)-(x-qB),AB) = 1$.



  Thus $\gcd(A,B) = 1$ and hence $A_B^{-1},B_A^{-1}$ exist [which you already assumed].



[Note that the converse is clearly not true! $A,B$ could be coprime while $a,b$ are both zero.]




If $\gcd(A,B) = 1$:



  Thus $A \mid AA_B^{-1} + BB_A^{-1} - 1$ and $B \mid AA_B^{-1} + BB_A^{-1} - 1$.



  Thus $AB \mid AA_B^{-1} + BB_A^{-1} - 1$ and hence $AA_B^{-1} + BB_A^{-1} \equiv 1 \pmod{AB}$.



[So the result you got holds under the usual more general condition that $A,B$ are coprime.]



The chinese remainder theorem also holds for principal ideal domains in the way you want, namely product rings, and generalizes to the product of any coprime quotient rings of a commutative rings. (You can take a look at the Wikipedia article.)



Finding factors of second order complex polynomial.



It concerns with finding roots of complex polynomial:
$x^2+(i-1)x+(2+i)$. One way is to find roots by factoring, as: $(x-i)(x -(1-2i))=(x-i)(x-1+2i)=(x^2 -x +ix +i +2)=x^2 +x(i-1)+(i+2)$



But, this is a guess game for me, with no formal process to get in this case.




The logical process is to take the roots of the quadratic equation, but I thought the quadratic formula does not work for complex coefficients.






Update -- Based on comments by @WillJagy, the square root is $\pm1\pm3i$, hence the equation is having roots as :$\frac{-(i-1)\pm\sqrt{-6i-8}}2\implies \frac{(1-i)\pm(\pm 1\pm 3i)}2$ can have possible values:
(i) for $1+3i$, get: $x_{11} = 1+i, x_{12} = -2i$
(ii) for $1-3i$, get: $x_{21} = 1-2i, x_{22} = i$
(iii) for $-1+3i$, get: $x_{11} = i, x_{21} = 1-2i$
(iv) for $-1-3i$, get: $x_{11} = -2i, x_{21} = 1+i$



So, the $4$ root pairs should be tried one by one:



(i) $1+3i =>(x-1-i)(x+2i) = x^2 +ix -x -2i +2$, mismatch
(ii) $1-3i => (x-i)(x-1+2i) = x^2 -x +ix +i +2$, matches original equation
(iii) same as for (ii),
(iv) same as for (i)




Why only two root pairs match, and the other don't?






Some answers on mse : 1






Update 2 -- To add to the answer by @Skip, where the fourth-quadrant angle was transformed to the first one by taking out $\sqrt{-1} = i$ as a common factor. For a mixed-sign discriminant, let $D = -6 +8i$; then the angle cannot be taken/manipulated to lie in the first quadrant, and instead lies in the 2nd or 3rd quadrant.
In the 2nd quadrant $\sin$ is positive while $\cos$ is negative, and in the 3rd quadrant both $\sin$ and $\cos$ are negative.



Answer



$$x^2+(i-1)x+2+i=0$$



So



$$x=\frac{-(i-1)\pm\sqrt{-8-6i}}{2}=\frac{-(i-1)\pm i\sqrt{8+6i}}{2}$$



$$\sqrt{8+6i}=\sqrt{10}\left(\cos\left(\frac{\tan^{-1}\left(\frac{3}{4}\right)+2\pi k}{2}\right)+i\sin\left(\frac{\tan^{-1}\left(\frac{3}{4}\right)+2\pi k}{2}\right)\right)\space\space\space\text{k=0, 1}$$



Take $k=0$ since if $k=1$, we'll just get the negative of when $k=0$ and the $\pm$ in front of the square root takes care of this for us




These are nonstandard trig arguments but



$$\cos\left(\frac{1}{2}\tan^{-1}\left(\frac{3}{4}\right)\right)=\frac{3}{\sqrt{10}}$$



$$i\sin\left(\frac{1}{2}\tan^{-1}\left(\frac{3}{4}\right)\right)=i\frac{1}{\sqrt{10}}$$



Therefore



$$\sqrt{8+6i}=3+i$$




So



$$x=\frac{1-i\pm (-1+3i)}{2}$$



$$\boxed{x=i}$$



$$\boxed{x=1-2i}$$



Quadratic formula works







I'll keep the above because that is what appeared in the original problem, but here's how to evaluate $\sqrt{-6+8i}$.



$-6+8i$ is in the second quadrant



$$\theta=\tan^{-1}\left(\frac{8}{-6}\right)$$



But this angle describes a complex number in the fourth quadrant since $tan^{-1}(x)$ has the range $-\frac{\pi}{2}< x<\frac{\pi}{2}$, so add $\pi$ to swing it to the second quadrant




$$\theta=\tan^{-1}\left(\frac{8}{-6}\right)+\pi$$



Its magnitude is still $10$, so for $k=0,\space 1$



$$\sqrt{-6+8i}=\sqrt{10}\left(\cos\left(\frac{\tan^{-1}\left(\frac{-4}{3}\right)+\pi+2\pi k}{2}\right)+i\sin\left(\frac{\tan^{-1}\left(\frac{-4}{3}\right)+\pi+2\pi k}{2}\right)\right)$$



These arguments are not standard by any means, so I admittedly used a calculator and will again for evaluating this one too, which is why there is no work shown in evaluating them. The point of this post though is that the quadratic formula works for complex numbers.



I like Siong's way for solving the square root; the results will be the same and it's nicer to work with in this case. Sometimes this way is easier in case the polynomial isn't easily factorable, it depends on the problem




Taking $k=0$



$$\cos\left(\frac{\tan^{-1}\left(\frac{-4}{3}\right)+\pi}{2}\right)=\frac{1}{\sqrt{5}}$$



$$i\sin\left(\frac{\tan^{-1}\left(\frac{-4}{3}\right)+\pi}{2}\right)=i\frac{2}{\sqrt{5}}$$



So



$$\sqrt{-6+8i}=\sqrt{2}(1+2i)$$




Taking $k=1$ will result in $\sqrt{-6+8i}=\sqrt{2}(-1-2i)$ but the plus and minus takes care of this for us



$$\pm\sqrt{2}(1+2i)=\mp\sqrt{2}(-1-2i)$$



Say if the radicand were $6-8i$, then this lies in the fourth quadrant, so we would not have added $\pi$ when finding its angle.
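
The complex quadratic formula is easy to verify numerically (a sketch in Python):

import cmath

# Roots of x^2 + (i-1)x + (2+i) via the quadratic formula.
a, b, c = 1, 1j - 1, 2 + 1j
d = cmath.sqrt(b*b - 4*a*c)
roots = ((-b + d) / (2*a), (-b - d) / (2*a))
print(roots)  # i and 1-2i, in some order
for x in roots:
    assert abs(a*x*x + b*x + c) < 1e-12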


calculus - $\lim_{x\to\infty} \frac{\sin x+\sin^2x+\sin^3x+\dotsb}{x\sin x}$, infinite series limit



I saw this question yesterday.




$$\lim_{x\to\infty} \frac{\sin x+\sin^2x+\sin^3x+\dotsb}{x\sin x}$$





I claim that the limit is $0$ because it can be written like the following.



$$\lim_{x\to\infty} \left(\frac{1}{x}+\frac{\sin x}{x}+\frac{\sin^2x}{x}+\dotsb\right)$$



Then I say that in this expression each numerator is bounded in $[-1,1]$, while the denominator of each term goes to infinity. Hence term by term we have $0$, and adding these up we have $0$ as the answer.



But then some guy says it's indeterminate, because



$$\lim_{x\to\infty} \frac{1+\sin x+\sin^2x+\dotsb}{x} = \lim_{x\to\infty} \frac{1-\sin^nx}{(1-\sin x)x}$$




He claims that what we have in the denominator is oscillating, so we cannot have a limit. Also, he says I'm wrong because an infinite sum of zeros need not equal zero.



I'd like to hear your ideas about the answer.


Answer



Let



$$f(x)=\frac{1+\sin x+\sin^2x+\ldots}x$$



for all $x\in\Bbb R$ for which this is defined. If $x>0$ and $\sin x\ne\pm 1$, then




$$f(x)=\frac{1}{x(1-\sin x)}\;.$$



Let $n$ be any odd positive integer. By taking $x$ sufficiently close to $\frac{n\pi}2$, we can make $f(x)$ arbitrarily large. Thus, there is a strictly increasing sequence $\langle x_n:n\in\Bbb N\rangle$ of positive real numbers such that $\lim\limits_{n\to\infty}f(x_n)=\infty$.



Since there is also such a sequence for which $f(x_n)=0$ for each $n\in\Bbb N$, the limit does not exist.


Tuesday, August 21, 2018

calculus - Orders of mean value theorem?



Q. Use Mean Value Theorem of appropriate order to prove that $\sin(x)\gt x-\dfrac{x^3}{3!}$




Now, I know the stated inequality was proved in a previous post, viz. Proof for $\sin(x)\gt x-\frac{x^3}{3!}$ but my question here is what does the problem poser (aka my calculus professor) mean by "mean value theorem of appropriate order" ?



I'm sorry if this is a naive question, I'm a beginner at differential calculus. Thanks for any help!


Answer



Quoting from this linked page,




Now let’s take an arbitrary function f that is n-times differentiable on an open interval containing $[x,x+h]$. To prove the mean value theorem, we subtracted a linear function so as to obtain a function that satisfied the hypotheses of Rolle’s theorem. Here, the obvious thing to do is to subtract a polynomial p of degree n to obtain a function that satisfies the hypotheses of our higher-order Rolle theorem.





The properties we need $p$ to have are that $p(x)=f(x)$, $p'(x)=f'(x)$, and so on all the way up to $p^{(n-1)}(x)=f^{(n-1)}(x)$, and finally $p(x+h)=f(x+h)$. It turns out that we can more or less write down such a polynomial, once we have observed that the polynomial $q_k(u)=(u-x)^k/k!$ has the convenient property that $q_k^{(j)}(x)=0$ except when $j=k$, when it is $1$. This allows us to build a polynomial that has whatever derivatives we want at $x$. So let’s do that. Define a polynomial $q$ by




$q(u)=f(x)+q_1(u)f'(x)+q_2(u)f''(x)+\dots+q_{n-1}(u)f^{(n-1)}(x)$



Then $q^{(k)}(x)=f^{(k)}(x)$ for $k=0,1,\dots,n-1$. A more explicit formula for $q(u)$ is



$\displaystyle f(x)+(u-x)f'(x)+\frac{(u-x)^2}{2!}f''(x)+\dots+\frac{(u-x)^{n-1}}{(n-1)!}f^{(n-1)}(x)$



Now $q(x+h)$ doesn’t necessarily equal $f(x+h)$, so we need to add a multiple of $(u-x)^n$ to correct for this. (Doing that won’t affect the derivatives we’ve got at $x$.) So we want our polynomial to be of the form




$p(u)=q(u)+\lambda(u-x)^n$



and we want $p(x+h)=f(x+h)$. So we want $q(x+h)+\lambda h^n$ to equal $f(x+h)$, which gives us $\lambda=h^{-n}(f(x+h)-q(x+h))$. That is,




$\displaystyle p(u)=q(u)+\frac{(u-x)^n}{h^n}(f(x+h)-q(x+h))$



A quick check: if we substitute in $x+h$ for $u$ we get $q(x+h)+(h^n/h^n)(f(x+h)-q(x+h))$, which does indeed equal $f(x+h)$.




For the moment, we can forget the formula for $p$. All that matters is its properties, which, just to remind you, are these.



$p$ is a polynomial of degree $n$.
$p^{(k)}(x)=f^{(k)}(x)$ for $k=0,1,\dots,n-1$.
$p(x+h)=f(x+h)$.
The second and third properties tell us that if we set $g(u)=f(u)-p(u)$, then $g^{(k)}(x)=0$ for $k=0,1,\dots,n-1$ and $g(x+h)=0$. Those are the conditions needed for our higher-order Rolle theorem. Therefore, there exists $\theta\in(0,1)$ such that $g^{(n)}(x+\theta h)=0$, which implies that $f^{(n)}(x+\theta h)=p^{(n)}(x+\theta h)$.



Let us just highlight what we have proved here.





Theorem. Let f be continuous on the interval $[x,x+h]$ and n-times differentiable on an open interval that contains $[x,x+h]$. Let $p$ be the unique polynomial of degree $n$ such that $p^{(k)}(x)=f^{(k)}(x)$ for $k=0,1,\dots,n-1$ and $p(x+h)=f(x+h)$. Then there exists $\theta\in(0,1)$ such that $f^{(n)}(x+\theta h)=p^{(n)}(x+\theta h)$.




Note that since $p$ is a polynomial of degree $n$, the function $p^{(n)}$ is constant. In the case $n=1$, the constant is $\frac{f(x+h)-f(x)}{h}$, the gradient of the line joining $(x,f(x))$ to $(x+h,f(x+h))$, and the theorem is just the mean value theorem.


calculus - Recognizing that a function has no elementary antiderivative

Is there a method to check whether a function is integrable?



Of-course trying to solve it is one but some questions in integration may be so tricky that I don't get the correct method to start off with those problems. So, is there a method to find correctly whether a function is integrable?



Clarification: I am asking about indefinite integrals which have no elementary anti derivative.

Monday, August 20, 2018

elementary number theory - A divisibility rule for 19


Proof the following divisibility test for 19:



Add two times the last digit to the remaining leading truncated number. If the result is divisible by 19, then so was the first number.


More mathematically:


Let $a, b \in \mathbb{Z}$. Proof that $10a+b$ is divisible by 19 if $a+2b$ is divisible by 19.


My guess is that we can proof this using congruences.


Answer



$10a+b$ is divisible by $19$ if and only if $20a+2b$ is divisible by $19$; of course, $20a+2b\equiv a+2b\pmod{19}$
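
The rule is easy to test exhaustively over a range (a sketch in Python):

# Divisibility test for 19: n = 10a + b is divisible by 19
# iff a + 2b is, where a is the leading part and b the last digit.
for n in range(1, 100_000):
    a, b = divmod(n, 10)
    assert (n % 19 == 0) == ((a + 2*b) % 19 == 0)
print("rule verified up to 100000")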


number theory - Divisibility of binomial coefficient by prime power - Kummer's theorem



Let's say we have binomial coefficient $\binom{n}{m}$. And we need to find the greatest power of prime $p$ that divides it.




Usually Kummer's theorem is stated in terms of the number of carries you perform while adding $m$ and $n-m$ in base $p$.



I found an equivalent statement of this theorem that reads like this: if we write
$$
\binom{n}{m}\equiv\binom{n_0}{m_0}\binom{n_1}{m_1}\ldots\binom{n_d}{m_d}\pmod{p},
$$
where $n = n_0 + n_1p + n_2p^2 + \ldots + n_dp^d$ and $m = m_0 + m_1p + m_2p^2 + \ldots + m_dp^d$, then the power of $p$ dividing $\binom{n}{m}$ is precisely the number of indices $i$ for which $n_i<m_i$.

Now let's take an example. Let's look at $\binom{25}{1}$ and $p=5$. We have

$$
\binom{25}{1}\equiv\binom{1}{0}\binom{0}{0}\binom{0}{1}\pmod{5}.
$$
We have only one index $i$ for which $n_i < m_i$, which is the last one. This suggests that $\binom{25}{1}$ can't be divided by $25$, which obviously isn't true.



Where's the problem? In case you wonder where I found this statement of Kummer's theorem, here is the link: http://www.dms.umontreal.ca/~andrew/PDF/BinCoeff.pdf



Thank you!


Answer



I've noticed this, too. I believe that you're correct and that Granville's "equivalent statement of this theorem" is wrong.




As evidence, I give you two items: first, the counter-example you mentioned, and second, the published version of your linked article by Andrew Granville does not contain this line that "the power dividing $n\choose{m}$ is precisely the number of indices $i$ for which $n_i < m_i$."



The published version is a bit hard to find, but here's a version on Google Books that I found by Googling "Andrew Granville Binomial Coefficients" from within Google Books. You'll noticed that this published version is an edited copy of the article on the Univ. of Montreal website.



When in doubt, look for the official version of the article. My guess is that the article on Andrew's Montreal website is a slightly earlier, unedited version.



EDIT: I just got an e-mail from Andrew himself on this question, and he said, "You should believe the published version -- I do vaguely remember making such a change. Thanks for pointing out the discrepancy."
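For the record, a short computational check (illustrative Python) confirms the counterexample: Kummer's carry count gives the correct exponent $2$ for $\binom{25}{1}$ with $p=5$, whereas the digit-comparison statement counts only one index.

    from math import comb

    def p_adic_valuation(x: int, p: int) -> int:
        """Exact power of p dividing x."""
        v = 0
        while x % p == 0:
            x //= p
            v += 1
        return v

    def carries(a: int, b: int, p: int) -> int:
        """Number of carries when adding a and b in base p (Kummer's theorem)."""
        carry = total = 0
        while a or b or carry:
            carry = 1 if (a % p) + (b % p) + carry >= p else 0
            total += carry
            a //= p
            b //= p
        return total

    n, m, p = 25, 1, 5
    print(p_adic_valuation(comb(n, m), p))  # 2, the true exponent
    print(carries(m, n - m, p))             # 2, matching Kummer's theorem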


elementary set theory - How to Prove $\mathbb R\times \mathbb R \sim \mathbb R$?

How to prove $\mathbb R\times \mathbb R \sim \mathbb R$?


I know you have to split the problem up into two claims, one for each direction, to prove that it is a bijection, but I don't know how to go much further than that...

How to turn the square root of a complex number into a function?

In my class, my instructor told us that the square root of a complex number is in general not a function because it is multi-valued. For example, $e^{i\pi/4}$ has two square roots, $e^{i\pi/8}$ and $e^{i\cdot 9\pi/8}$. He then added that if we restrict the domain of the polar angle to $(-\pi,\pi)$, the square root becomes a function. I don't see how this works. The two square roots of a number differ by $\pi$ in their polar angle. For example, both $e^{i\pi/4}$ and $e^{-i\cdot 3\pi/4}$ qualify as square roots of $e^{i\pi/2}$, and both of their angles lie in the restricted interval. I don't see where I am going wrong.

Sunday, August 19, 2018

sequences and series - Why does the sum of inverse squares equal $\pi^2/6$?





I've heard that $1+1/4+1/9+1/16+1/25+...$ converges to $\pi^2/6$. This was very surprising to me and I was wondering if there was a reason that it converges to this number?



I am also confused why $\pi$ would be involved. If someone could provide a proof of this or an intuitive reason/derivation, I would appreciate it. I am only able to understand high school maths, however (year 12).


Answer



Some of the proofs in the given link are somewhat technical, and I'll try to popularize a variant of one of them.




Consider the function $$f(x):=\frac{\sin\pi\sqrt x}{\pi\sqrt x}.$$



This function has roots for every perfect square $x=n^2$, and it can be shown to equal the infinite product of the binomials for the corresponding roots



$$p(x):=\left(1-\frac x{1^2}\right)\left(1-\frac x{2^2}\right)\left(1-\frac x{3^2}\right)\left(1-\frac x{4^2}\right)\cdots$$



(obviously, $p(0)=f(0)=1$ and $p(n^2)=f(n^2)=0$.)



If we expand this product to the first degree, we get




$$1-\left(\frac1{1^2}+\frac1{2^2}+\frac1{3^2}+\frac1{4^2}+\cdots\right)\,x+\cdots$$



On the other hand, the Taylor development to the first order is



$$f(0)+f'(0)\,x+\cdots=1-\frac{\pi^2}6x+\cdots$$ hence the claim by identification.



The plot shows the function $f$ in blue, the linear approximation in red, and the product of the first $4,5$ and $6$ binomials, showing that they coincide better and better.
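As a numerical sanity check, a few lines of Python (illustrative; the tail of the series after $N$ terms is roughly $1/N$):

    from math import pi

    N = 100_000
    partial = sum(1 / k**2 for k in range(1, N + 1))
    print(partial)     # ~ 1.64492...
    print(pi**2 / 6)   # ~ 1.64493...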






algebra precalculus - Zero to the zero power – is $0^0=1$?


Could someone provide me with a good explanation of why $0^0=1$?



My train of thought:


$x>0$


$0^x=0^{x-0}=0^x/0^0$, so


$0^0=0^x/0^x=\,?$


Possible answers:


  1. $0^0\cdot0^x=1\cdot0^0$, so $0^0=1$

  2. $0^0=0^x/0^x=0/0$, which is undefined

PS. I've read the explanation on mathforum.org, but it isn't clear to me.


Answer



In general, there is no good answer as to what $0^0$ "should" be, so it is usually left undefined.



Basically, if you consider $x^y$ as a function of two variables, then there is no limit as $(x,y)\to(0,0)$ (with $x\geq 0$): if you approach along the line $y=0$, then you get $\lim\limits_{x\to 0^+} x^0 = \lim\limits_{x\to 0^+} 1 = 1$; so perhaps we should define $0^0=1$? Well, the problem is that if you approach along the line $x=0$, then you get $\lim\limits_{y\to 0^+}0^y = \lim\limits_{y\to 0^+} 0 = 0$. So should we define it $0^0=0$?


Well, if you approach along other curves, you'll get other answers. Since $x^y = e^{y\ln(x)}$, if you approach along the curve $y=\frac{1}{\ln(x)}$, then you'll get a limit of $e$; if you approach along the curve $y=\frac{\ln(7)}{\ln(x)}$, then you get a limit of $7$. And so on. There is just no good answer from the analytic point of view. So, for calculus and algebra, we just don't want to give it any value, we just declare it undefined.
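To make the path-dependence tangible, here is a tiny numerical illustration (plain Python): along the curve $y=\ln(7)/\ln(x)$ the quantity $x^y$ is pinned at $7$ all the way down, even though $(x,y)\to(0,0)$.

    import math

    # Along the curve y = ln(7)/ln(x), we have x**y = 7 identically,
    # while y -> 0 (through negative values) as x -> 0+.
    for x in (1e-2, 1e-4, 1e-8):
        y = math.log(7) / math.log(x)
        print(x, y, x ** y)   # x**y stays at 7.0 (up to rounding)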


However, from a set-theory point of view, there actually is one and only one sensible answer to what $0^0$ should be! In set theory, $A^B$ is the set of all functions from $B$ to $A$; and when $A$ and $B$ denote "size" (cardinalities), then "$A^B$" is defined to be the size of the set of all functions from $B$ to $A$. In this context, $0$ is the empty set, so $0^0$ is the collection of all functions from the empty set to the empty set. And, as it turns out, there is one (and only one) function from the empty set to the empty set: the empty function. So the set $0^0$ has one and only one element, and therefore we must define $0^0$ as $1$. So if we are talking about cardinal exponentiation, then the only possible definition is $0^0=1$, and we define it that way, period.


Added 2: the same holds in Discrete Mathematics, when we are mostly interested in "counting" things. In Discrete Mathematics, $n^m$ represents the number of ways in which you can make $m$ selections out of $n$ possibilities, when repetitions are allowed and the order matters. (This is really the same thing as "maps from $\{1,2,\ldots,m\}$ to $\{1,2,\ldots,n\}$" when interpreted appropriately, so it is again the same thing as in set theory).


So what should $0^0$ be? It should be the number of ways in which you can make no selections when you have no things to choose from. Well, there is exactly one way of doing that: just sit and do nothing! So we make $0^0$ equal to $1$, because that is the correct number of ways in which we can do the thing that $0^0$ represents. (This, as opposed to $0^1$, say, where you are required to make $1$ choice with nothing to choose from; in that case, you cannot do it, so the answer is that $0^1=0$).


Your "train of thoughts" don't really work: If $x\neq 0$, then $0^x$ means "the number of ways to make $x$ choices from $0$ possibilities". This number is $0$. So for any number $k$, you have $k\cdot 0^x = 0 = 0^x$, hence you cannot say that the equation $0^0\cdot 0^x = 0^x$ suggests that $0^0$ "should" be $1$. The second argument also doesn't work because you cannot divide by $0$, which is what you get with $0^x$ when $x\neq 0$. So it really comes down to what you want $a^b$ to mean, and in discrete mathematics, when $a$ and $b$ are nonnegative integers, it's a count: it's the number of distinct ways in which you can do a certain thing (described above), and that leads necessarily to the definition that makes $0^0$ equal to $1$: because $1$ is the number of ways of making no selections from no choices.


Coda. In the end, it is a matter of definition and utility. In Calculus and algebra, there is no reasonable definition (the closest you can come up with is trying to justify it via the binomial theorem or via power series, which I personally think is a bit weak), and it is far more useful to leave it undefined or indeterminate, since otherwise it would lead to all sorts of exceptions when dealing with the limit laws. In set theory, in discrete mathematics, etc., the definition $0^0=1$ is both useful and natural, so we define it that way in that context. For other contexts (such as the one mentioned in mathforum, when you are dealing exclusively with analytic functions where the problems with limits do not arise) there may be both natural and useful definitions.


We basically define it (or fail to define it) in whichever way it is most useful and natural to do so for the context in question. For Discrete Mathematics, there is no question what that "useful and natural" way should be, so we define it that way.


Saturday, August 18, 2018

modular arithmetic - cryptographic RSA algorithm

I am a programmer. While I was studying RSA encryption, I ran into some difficulties with the mathematical side of it.


The RSA algorithm involves the following concepts:


Modulo, modular multiplicative inverse, prime numbers, and the totient $\phi$.



I understand all these concepts individually, but to make my question clear, let me take an example.


The steps to encrypt some text are :


First find two prime numbers


Let's take $5$ and $7$.


Then find the modulus by multiplying these numbers: $5*7 = 35$.


Now find the totient: $\phi(5*7) = (5-1)*(7-1) = 24$.


Now to find the public key, I have to find a number whose $\gcd$ with the totient equals one; $11$ works, since $\gcd(11,24) = 1$.


The private key is the modular multiplicative inverse of $11$ modulo the totient; in this case $107$ is such an inverse of $11$ because $11*107 \equiv 1 \pmod{24}$.


Now we have the public key $11$ and the private key $107$.


Let's denote the public key by $e$ and the private key by $d$. Now if we want to encrypt a message, the equation is $c \equiv m^e \pmod n$,



and if we want to decrypt the message the equation is $m \equiv c^d \pmod n$


Quick test: let's encrypt $m = 5$ and then decrypt it.


$5^{11} \bmod 35 = 10 \implies c= 10$


$10^{107} \bmod 35 = 5 \implies m= 5$
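To make the toy example concrete, here is a minimal Python sketch of the computation above (variable names are illustrative; real RSA uses much larger primes and proper padding):

    p, q = 5, 7
    n = p * q                  # modulus: 35
    phi = (p - 1) * (q - 1)    # totient: 24
    e = 11                     # public exponent, gcd(e, phi) == 1
    d = 107                    # private exponent, (e * d) % phi == 1

    m = 5                      # message
    c = pow(m, e, n)           # encrypt: 5**11 % 35 == 10
    assert c == 10
    assert pow(c, d, n) == m   # decrypt: 10**107 % 35 == 5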


This is magic. I actually understand each concept, and I see that $d$ is the inverse of $e$ because we got it using the modular multiplicative inverse, but I do not understand why the modulus in the encryption and decryption equations is $n$ and not $\phi$, and I do not understand why the public key must have $\gcd$ equal to $1$ with $\phi$ and not with $n$. I am confused and I need your help to explain that to me.

sequences and series - $\lim\limits_{n\to\infty} \frac{n}{\sqrt[n]{n!}} =e$


I don't know how to prove that $$\lim_{n\to\infty} \frac{n}{\sqrt[n]{n!}} =e.$$ Are there other (nontrivial) nice limits that give $e$, apart from this and the following? $$\sum_{k = 0}^\infty \frac{1}{k!} = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n = e$$


Answer




In the series for $$e^n=\sum_{k=0}^\infty \frac{n^k}{k!},$$ the $n$th and biggest(!) of the (throughout positive) summands is $\frac{n^n}{n!}$. On the other hand, all summands can be estimated as $$ \frac{n^k}{k!}\le \frac{n^n}{n!},$$ and especially those with $k\ge 2n$ can be estimated $$ \frac{n^k}{k!}<\frac{n^{k}}{(2n)^{k-2n}\cdot n^{n}\cdot n!}=\frac{n^{n}}{n!}\cdot \frac1{2^{k-2n}},$$ and thus we find $$\frac{n^n}{n!}\le e^n\le 2n\cdot\frac{n^n}{n!}+\frac{n^n}{n!}\sum_{j\ge0}\frac1{2^j}=(2n+2)\cdot\frac{n^n}{n!}.$$ Taking $n$th roots yields $$\frac{n}{\sqrt[n]{n!}}\le e\le\sqrt[n]{2n+2}\cdot\frac{n}{\sqrt[n]{n!}},$$ and since $\sqrt[n]{2n+2}\to1$, the squeeze gives $\lim\limits_{n\to\infty}\frac{n}{\sqrt[n]{n!}}=e$.
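A quick numerical illustration (plain Python; `lgamma(n + 1)` equals $\log n!$, which avoids overflowing floats for large $n$):

    from math import exp, lgamma, log

    for n in (10, 100, 1000, 10000):
        print(n, exp(log(n) - lgamma(n + 1) / n))   # increases slowly toward e ≈ 2.718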

Friday, August 17, 2018

calculus - $\lim_{n\to \infty} n^3(\frac{3}{4})^n$




I have the sequence $a_n=(n^3)\big(\frac{3}{4}\big)^n$
and I need to find whether it converges or not and the limit.




I computed the ratio of consecutive terms, $\frac {a_{n+1}}{a_n} = \frac{3 (n+1)^3}{4 n^3}$, and since this ratio tends to $\frac{3}{4}$ with $\big|\frac{3}{4}\big|<1$, the sequence converges.
I don't know how to find the limit from here.


Answer



If $\frac {a_{n+1}} {a_n} \to l$ with $|l|<1$ then $a_n \to 0$. (In fact the series $\sum a_n$ converges).



In this case $\frac {a_{n+1}} {a_n} =(1+\frac 1 n)^{3}(\frac 3 4) \to \frac 3 4$.
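A quick numerical look (illustrative Python) makes the conclusion concrete:

    for n in (10, 50, 100, 200):
        print(n, n**3 * 0.75**n)   # roughly 56.3, 7.1e-02, 3.2e-07, 8.2e-19: heading to 0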


estimation - a limit about exponential function




$\lim_{n\rightarrow\infty}\frac{1+\frac{n}{1!}+\cdots+\frac{n^n}{n!}}{e^n}=\frac12$




The numerator is the sum of the first $n+1$ terms of the Taylor series of $e^n$. Is the limit true or false? How can one prove it?


Answer



Write
$$a_n=e^{-n}\sum_{k=0}^n \frac{n^k}{k!}.$$ By the definition of the incomplete gamma function,
$$a_n=\frac{\Gamma (n+1,n)}{n \Gamma (n)}.$$
We have the relation $$\Gamma (n+1,n)=n \,\Gamma (n,n)+e^{-n}\, n^n,$$ which makes
$$a_n=\frac{ n^{n-1}}{e^n\,\Gamma (n)}+\frac{\Gamma (n,n)}{\Gamma (n)}.$$ The first term tends to $0$ when $n$ becomes large; to prove it, take its logarithm and use the Stirling approximation to get
$$\log\left(\frac{ n^{n-1}}{e^n\,\Gamma (n)} \right)=-\frac{1}{2} \log \left({2 \pi n}\right)-\frac{1}{12 n}+O\left(\frac{1}{n^{5/2}}\right).$$




For the second term, if you look here, you will notice the asymptotics
$$\Gamma(n,n) \sim n^n e^{-n} \sqrt{\frac{\pi}{2 n}}.$$ Dividing by Stirling's $\Gamma(n)\sim\sqrt{2\pi/n}\;n^n e^{-n}$ gives $\frac{\Gamma(n,n)}{\Gamma(n)}\to\frac12$, and since the first term tends to $0$, we conclude $$a_n\to\frac12.$$
For the rate of convergence one must be a little careful: the first term is itself $\frac{1}{\sqrt{2\pi n}}\left(1+O\left(\frac1n\right)\right)$ by the estimate above, so it cannot be neglected in the correction terms. If you use the better asymptotics given in the link, $$\Gamma(n,n) = n^n e^{-n} \left [ \sqrt{\frac{\pi}{2 n}} - \frac{1}{3 n} + O\left ( \frac{1}{n^{3/2}} \right ) \right ],$$ the same division gives $\frac{\Gamma(n,n)}{\Gamma(n)}=\frac12-\frac{1}{3 \sqrt{2 \pi n}}+O\left(\frac1n\right)$, and adding back the first term you should end with
$$a_n=\frac 12+\frac{2}{3 \sqrt{2 \pi n} }+O\left(\frac{1}{n}\right).$$
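A numerical check (illustrative Python; the terms are computed through logarithms to avoid overflow) agrees with this picture:

    from math import exp, lgamma, log

    def a(n: int) -> float:
        # e^{-n} * sum_{k=0}^{n} n^k / k!, each term as exp(k*log(n) - log(k!) - n)
        return sum(exp(k * log(n) - lgamma(k + 1) - n) for k in range(n + 1))

    for n in (10, 100, 1000):
        print(n, a(n))   # roughly 0.583, 0.527, 0.508, tending to 1/2 from above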


Thursday, August 16, 2018

complex analysis - Help with improper integral

I need help solving this integral:



$$\int_0 ^\infty \frac{\sin(x)}{x} dx$$



I have a hint that says to try to calculate the integral of $$\frac{e^{iz}}{z}$$ over a "proper path"... but I don't know how to use that; please help.
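While the contour-integral computation is the point of the exercise, a CAS can at least confirm the target value $\frac{\pi}{2}$; a sketch, assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(sp.sin(x) / x, (x, 0, sp.oo)))   # pi/2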

calculus - $\lim\frac{\cos{2x^2}-1}{x^2\sin{x^2}}$ as $x$ goes to $0$





Calculate $\displaystyle \lim_{x \to 0}\frac{\cos{2x^2}-1}{x^2\sin{x^2}}$




I tried L'Hôpital's rule, but the denominator gets more and more complicated.



How does one calculate this limit?


Answer



Recall that $\cos(2t) = 1-2\sin^2(t)$. Hence, we have

$$\cos(2x^2)-1 = -2\sin^2(x^2)$$
Hence, we have
$$\lim_{x \to 0} \dfrac{\cos(2x^2)-1}{x^2\sin(x^2)} = \lim_{x \to 0} \dfrac{-2\sin^2(x^2)}{x^2\sin(x^2)} = -2 \lim_{x \to 0} \dfrac{\sin(x^2)}{x^2} = -2$$


geometry - Vector path length of a hypotenuse

Figure 1



Consider the red path from A that zigzags to B, taking $n$ steps (with $n$ even), each of length $w$. The path length of the route, $P_n$, will be equal to:



$ P_n = P_x + P_y = \frac{n}{2}\times w + \frac{n}{2}\times w = n \times w $



But $\frac{n}{2}\times w = 1$ because it is the length of one of the sides of the triangle, so:



$P_n = 2$




This will be true no matter how many steps you take. However, in the limit $n \to \infty,\ w \to 0$, the path length $P_\infty$ suddenly becomes:



$P_\infty = \sqrt{1^2 + 1^2} = \sqrt{2}$



due to Pythagoras. Why is this the case? It seems the path length suddenly decreases by about $0.59$!

Wednesday, August 15, 2018

calculus - Evaluating $\int P(\sin x, \cos x) \text{d}x$


Suppose $\displaystyle P(x,y)$ a polynomial in the variables $x,y$.


For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$.



Is there a general method which allows us to evaluate the indefinite integral



$$ \int P(\sin x, \cos x) \text{d} x$$



What about the case when $\displaystyle P(x,y)$ is a rational function (i.e. a ratio of two polynomials)?


Example of a rational function: $\displaystyle \frac{x^2y + y^3}{x+y}$.



This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.


and here: List of Generalizations of Common Questions.


Answer



There are several general approaches to integrals that involve expressions with sines and cosines, polynomial expressions being the simplest ones.



Weierstrass Substitution


A method that always works is Weierstrass substitution, which will turn such an integral into an integral of rational functions, which in turn can always be solved, at least in principle, by the method of partial fractions. This works even for rational functions of sine and cosine, as well as functions that involve the other trigonometric functions.


Weierstrass substitution replaces sines and cosines (and by extension, tangents, cotangents, secants, and cosecants) by rational functions of a new variable. The identities begin by the trigonometric substitution $t = \tan\frac{x}{2}$, with $-\pi\lt x\lt \pi$, which yields $$\begin{align*} \sin x &= \frac{2t}{1+t^2}\\ \cos x &= \frac{1-t^2}{1+t^2}\\ dx &= \frac{2\,dt}{1+t^2}. \end{align*}$$ For example, if we have $$\int \frac{\sin x-\cos x}{\sin x+\cos x}\,dx$$ using the substitution above we obtain: $$\begin{align*} \int\frac{\sin x-\cos x}{\sin x+\cos x}\,dx &= \int\left(\frac{\quad\frac{2t}{1+t^2} - \frac{1-t^2}{1+t^2}\quad}{\frac{2t}{1+t^2} + \frac{1-t^2}{1+t^2}}\right)\left(\frac{2}{1+t^2}\right)\,dt\\ &= \int\left(\frac{\quad\frac{2t-1+t^2}{1+t^2}\quad}{\frac{1+2t-t^2}{1+t^2}}\right) \left(\frac{2}{1+t^2}\right)\,dt\\ &= \int\left(\frac{2t-1+t^2}{2t+1-t^2}\right)\left(\frac{2}{1+t^2}\right)\,dt\\ &= 2\int\frac{2t-1+t^2}{(1+t^2)(2t+1-t^2)}\,dt \end{align*}$$ which can then be integrated by the method of partial fractions.
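If you want to check such a computation mechanically, here is a small sketch, assuming SymPy is available (the CAS may return the antiderivative in a different but equivalent form):

    import sympy as sp

    x = sp.symbols('x')
    integrand = (sp.sin(x) - sp.cos(x)) / (sp.sin(x) + sp.cos(x))
    F = sp.integrate(integrand, x)
    print(F)                                        # e.g. -log(sin(x) + cos(x))
    print(sp.simplify(sp.diff(F, x) - integrand))   # 0 (up to simplification)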


Substitutions and Reduction formulas


However, there are usually faster methods, particularly for polynomial expressions. By breaking up the integral into a sum of integrals corresponding to the monomials, the problem reduces to solving integrals of the form $$\int \left(\sin x\right)^n \left(\cos x\right)^m\,dx$$ with $n$ and $m$ nonnegative integers. The standard methods then are:



  1. If $n$ is odd, then "reserve" one sine, and transform the others into cosines by using the identity $\sin^2 x = 1-\cos^2x$. Then do the change of variable $u=\cos x$ to transform the integral into the integral of a polynomial. For example, $$\int \left(\sin x\right)^5\left(\cos x\right)^2\,dx,$$ then take $(\sin x)^5$, and write it as $$\sin x(\sin x)^4 = \sin x(\sin^2x)^2 = \sin x(1-\cos^2 x)^2.$$ Then setting $u=\cos x$ and $du = -\sin x\,dx$, we get $$\int\left(\sin x\right)^5\left(\cos x\right)^2\,dx = \int \sin x\left(1-\cos^2x\right)^2\left(\cos x\right)^2\,dx = -\int (1-u^2)^2u^2\,du,$$ which can be solved easily.




  2. If $m$ is odd, then do the same trick by "reserving" one cosine and using the substitution $u=\sin x$. For example, $$\int \sin^2x\cos^3x\,dx = \int \sin^2x(\cos^2x)\cos x\,dx = \int(\sin^2x)(1-\sin^2x)\cos x\,dx$$ and then setting $u=\sin x$, $du = \cos x\,dx$, we get $$\int \sin^2x\cos^3x\,dx = \int u^2(1-u^2)\,du,$$ which can be solved easily again.





  3. If $n$ and $m$ are both even, then either replace all the sines with cosines or vice versa, using $\sin^2 x = 1 - \cos^2x$ or $\cos^2x = 1-\sin^2 x$, and expand. This will leave integrals of the form $$\int \sin^n x\,dx\qquad\text{or}\quad \int \cos^m x\,dx$$ with $n$ and $m$ positive and even. In that situation, one can use the reduction formulas, which can be obtained by using integration by parts: $$\begin{align*} \int \sin^n x\,dx &= - \frac{1}{n}\sin^{n-1} x\cos x + \frac{n-1}{n}\int \sin^{n-2}x\,dx,\\ \int \cos^m x\,dx &= \frac{1}{m}\cos^{m-1} x\sin x + \frac{m-1}{m}\int \cos^{m-2}x\,dx. \end{align*}$$ By repeated application of these formulas, one eventually ends up with an integral of the form $\int \,dx$ which can be solved directly.



The process can be shortened if you happen to spot or know some trigonometric identities; for example, the power reduction formulas allow you to replace powers of sines or cosines by expressions of multiple angles, e.g., $$\sin^4\theta = \frac{3-4\cos(2\theta)+\cos(4\theta)}{8}$$ could replace a single integral with three integrals that can be done fairly easily via substitution.


Other methods


If you are comfortable enough integrating functions with a complex variable in it, then the method described by Qiaochu will transform any integral that involves sines and cosines in any way into an integral that involves exponential functions instead.


Prove by Induction: $8^n - 3^n$ is divisible by $5$ for all $n \geq 1$




Prove by Induction that for all $n \geq 1$ we have



$$8^n - 3^n \text{is divisible by 5} ...(*)$$



My proof so far



Step 1: For $n=1$ we have $8^1 - 3^1 = 8 - 3 = 5$ which is divisible by 5.



Step 2: Suppose (*) is true for some $n=k\geq 1$ that is $8^k-3^k$ is divisible by 5.




Step 3: Prove that (*) is true for $n=k+1$, that is $8^{k+1} - 3^{k+1}$ is divisible by 5. We have



$$8^{k+1}-3^{k+1} = 8*8^k - 3*3^k$$



Can anyone explain the next logical expansion?



Update:



$$8^{k+1}-3^{k+1} = 8*8^k - 3*3^k = 5*8^k + 3*8^k - 3*3^k = 5*8^k + 3(8^k - 3^k)$$




Now we can say that $5*8^k$ is divisible by 5 since it has the form $5*p$ where $p$ is an integer $\geq 1$



And it is assumed that $8^k-3^k$ is divisible by $5$, then $3(8^k-3^k)$ is divisible by $5$ which means it has the form $5p$



so we reduce the expression to
$$5p+5p = 5(p+p)$$



which is of the form $5p$, which is divisible by $5$.




Is my proof correct?


Answer



There is, more or less, somewhat of a "standard" way of writing out these divisibility proofs involving induction. Given your recent questions on induction, I really would encourage you to take a look at this post on how to write a clear induction proof--I think it would force you to construct clear proofs and it would also force you to know what you are doing and why at each step. That said, see if the following proof makes sense (I am going to write it using the template provided in the linked post above):







For all $n\geq 1, 8^n-3^n$ is divisible by $5$; that is, $5\mid(8^n-3^n)$, and this notation simply means that "$5$ divides $8^n-3^n$."





Proof. For $n\geq 1$, let $S(n)$ denote the statement
$$
S(n):5\mid(8^n-3^n)\iff 8^n-3^n=5m, m\in\mathbb{Z}.
$$
Base case ($n=1$): $S(1)$ says that $5\mid (8^1-3^1)$, and this is true (all it is saying is that $5$ divides $5$, and that is clearly true).



Inductive step: Fix some $k\geq1$ and assume that
$$
S(k):5\mid(8^k-3^k)\iff 8^k-3^k=5\ell, \ell\in\mathbb{Z}
$$

is true. To be shown is that $S(k+1)$ follows where
$$
S(k+1):5\mid(8^{k+1}-3^{k+1})\iff 8^{k+1}-3^{k+1}=5\eta, \eta\in\mathbb{Z}.
$$
Beginning with the left-hand side of $S(k+1)$,
\begin{align}
8^{k+1}-3^{k+1}&= 8(8^k)-3(3^k)\tag{law of exponents}\\[0.5em]
&= (8^k-3^k)8+5(3^k)\tag{manipulate}\\[0.5em]
&= (5\ell)8+5(3^k)\tag{by $S(k)$, the ind. hyp.}\\[0.5em]
&= 5(8\ell+3^k)\tag{factor out the $5$}\\[0.5em]
&= 5\eta\tag{$\eta=8\ell+3^k, \eta\in\mathbb{Z}$}
\end{align}
we end up at the right-hand side of $S(k+1)$, completing the inductive step.



By mathematical induction, the proposition $S(n)$ is true for all $n\geq1$. $\blacksquare$






Does all of that make sense? Near the end of your own proof, you write that $5(p+p)$ is of the form $5p$ which is not actually true. The key is to realize how to generalize as above where you can use any number of symbols so long as you do it effectively. Feel free to comment if a step remains unclear.
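One last aside: before writing an induction proof, a brute-force sanity check of the claim itself can be reassuring. A couple of lines of illustrative Python suffice here:

    # Check the claim directly for the first few hundred n:
    assert all((8**n - 3**n) % 5 == 0 for n in range(1, 301))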


calculus - Convergence of series of positive terms: $\sum_{n=2}^{\infty}\frac{1}{(\log n)^{p}}$

I have applied the Cauchy condensation test to the series $\sum_{n=2}^{\infty}\frac{1}{(\log n)^{p}}$, where $p$ is a constant, and obtained $\frac{1}{(\log 2)^{p}}\sum_{k=1}^{\infty}\frac{2^{k}}{k^{p}}$. I do not understand for which values of $p$ the original series converges. I have also tried the Cauchy integral test, but could not evaluate the improper integral $\int_{2}^{\infty}\frac{1}{(\log x)^{p}}\,dx$.
Does the series converge, and if so, for which values of $p$? Please, someone help me. Thanks.

Tuesday, August 14, 2018

trigonometry - Use an expression for $\frac{\sin(5\theta)}{\sin(\theta)}$ to find the roots of the equation $x^4-3x^2+1=0$ in trigonometric form




Question: Use an expression for $\frac{\sin(5\theta)}{\sin(\theta)}$ ($\theta \neq k\pi$, $k$ an integer) to find the roots of the equation $x^4-3x^2+1=0$ in trigonometric form?





What I have done



By using de Moivre's theorem and expanding,


$$ cis(5\theta) = (\cos(\theta) + i\sin(\theta))^5$$


$$ \cos(5\theta) + i \sin(5\theta) = \cos^5(\theta) - 10\cos^3(\theta)\sin^2(\theta) + 5\cos(\theta)\sin^4(\theta) +i\left(5\cos^4(\theta)\sin(\theta)-10\cos^2(\theta)\sin^3(\theta) + \sin^5(\theta)\right)$$


Considering only $Im(z) = \sin(5\theta)$


$$ \sin(5\theta) = 5\cos^4(\theta)\sin(\theta)-10\cos^2(\theta)\sin^3(\theta) + \sin^5(\theta) $$


$$ \therefore \frac{\sin(5\theta)}{\sin(\theta)} = \frac{5\cos^4(\theta)\sin(\theta)-10\cos^2(\theta)\sin^3(\theta) + \sin^5(\theta)}{\sin(\theta)}$$


$$ \frac{\sin(5\theta)}{\sin(\theta)} = 5\cos^4(\theta) -10\cos^2(\theta)\sin^2(\theta) + \sin^4(\theta) $$


How should I proceed? I'm stuck trying to connect what I got to the given equation.


Answer



HINT:



Using Prosthaphaeresis Formula,


$$\sin5x-\sin x=2\sin2x\cos3x=4\sin x\cos x\cos3x$$


If $\sin x\ne0,$ $$\dfrac{\sin5x}{\sin x}-1=4\cos x\cos3x=4\cos x(4\cos^3x-3\cos x)=(4\cos^2x)^2-3(4\cos^2x)$$


OR replace $\sin^2x$ with $1-\cos^2x$ in your $$ 5\cos^4x-10\cos^2x\sin^2x + \sin^4x$$


Now if $\sin5x=0,5x=n\pi$ where $n$ is any integer


$x=\dfrac{n\pi}5$ where $n\equiv0,\pm1,\pm2\pmod5$


So, the roots of $\dfrac{\sin5x}{\sin x}=0\implies x=\dfrac{n\pi}5$ where $n\equiv\pm1,\pm2\pmod5$


But $$\dfrac{\sin5x}{\sin x}=(4\cos^2x)^2-3(4\cos^2x)+1$$
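To finish the thought numerically: putting $t=2\cos x$ turns $(4\cos^2x)^2-3(4\cos^2x)+1$ into $t^4-3t^2+1$, so the roots of $x^4-3x^2+1=0$ are $2\cos\dfrac{n\pi}5$ for $n=1,2,3,4$. A quick check (illustrative Python):

    from math import cos, pi

    for n in (1, 2, 3, 4):
        t = 2 * cos(n * pi / 5)
        print(n, t, t**4 - 3 * t**2 + 1)   # last column ≈ 0 for each root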


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...