Wednesday, November 30, 2016

real analysis - Using the definition of a limit, prove that $\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$





Using the definition of a limit, prove that $$\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$$




I know how I should start: I want to prove that given $\epsilon > 0$, there $\exists N \in \mathbb{N}$ such that $\forall n \ge N$



$$\left |\frac{n^2+3n}{n^3-3} - 0 \right | < \epsilon$$



but from here how do I proceed? I feel like I have to get rid of the $3n$ and $-3$ terms, but clearly the bound $$\left |\frac{n^2+3n}{n^3-3} \right | <\frac{n^2}{n^3-3}$$ does not hold.


Answer



This is not so much of an answer as a general technique.




What we do in this case is to divide the top and bottom by $n^3$:
$$
\dfrac{\frac{1}{n} + \frac{3}{n^2}}{1-\frac{3}{n^3}}
$$
Suppose we want this to be less than a given $\epsilon>0$. We know that $\frac{1}{n}$ can be made as small as we like. First, we split this into two parts:
$$
\dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}}
$$




The first thing we know is that for large enough $n$, say $n>N$, $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n}$. We will use this fact.



Let $\delta >0$ be so small that $\frac{\delta}{1-\delta} < \frac{\epsilon}{2}$. Now, let $n$ be so large that $\frac{1}{n} < \delta$, and $n>N$.



Now, note that $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n} < \delta$. Furthermore, $1- \frac{3}{n^3} > 1 - \frac{3}{n^2} > 1-\delta$.



Thus,
$$
\dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}}
< \frac{\delta}{1-\delta} + \frac{\delta}{1-\delta} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon

$$



for large enough $n$. Hence, the limit is zero.



I could have had a shorter answer, but you see that using this technique we have reduced powers of $n$ to this one $\delta$ term, and just bounded that $\delta$ term by itself, bounding all powers of $n$ at once.
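As a numerical sanity check (not a substitute for the proof), one can compute how large $N$ must be for a given $\epsilon$; the helper `find_N` below is mine, and it starts at $n=2$ so that the denominator is positive:

```python
# Numerical sanity check: for a given epsilon, find an N with
# (n^2 + 3n)/(n^3 - 3) < epsilon for all n >= N.  Starting at n = 2 keeps
# the denominator positive; the sequence is decreasing from there on.
def f(n):
    return (n**2 + 3*n) / (n**3 - 3)

def find_N(eps):
    n = 2
    while f(n) >= eps:
        n += 1
    assert all(f(k) < eps for k in range(n, n + 1000))  # spot-check the tail
    return n

for eps in (0.1, 0.01, 0.001):
    print(eps, find_N(eps))
```

The printed $N$ grows roughly like $1/\epsilon$, matching the fact that the dominant term after dividing by $n^3$ is $1/n$.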


abstract algebra - What should be the isomorphism class of $Aut(\mathbb{Z}_{p^a}\times \mathbb{Z}_{p^b q^c})$?



I am trying to get some ideas on the following problem, but with no results so far. Please show me the way.



Aut$(G)$ denotes the group of automorphisms of the group $G$. If $p, q$ are distinct primes then how shall we find the isomorphism class of
$$Aut(\mathbb{Z}_{p^a}\times \mathbb{Z}_{p^b q^c})$$

where $a, b, c\in \mathbb{N}$.



What I tried is the following: $\mathbb{Z}_{p^bq^c}\simeq \mathbb{Z}_{p^b}\times \mathbb{Z}_{q^c}$ and hence
$$Aut(\mathbb{Z}_{p^a}\times \mathbb{Z}_{p^b q^c})\simeq Aut(\mathbb{Z}_{p^a}\times \mathbb{Z}_{p^b})\times Aut(\mathbb{Z}_{q^c})$$
because we know that if $(|G_1|, |G_2|)=1$ then $Aut(G_1\times G_2)\simeq Aut(G_1)\times Aut(G_2)$.



Question is: what shall we do now? Are there any new ideas, or am I going in completely the wrong direction?



Thanks in advance


Answer




It is well-known that Aut($\mathbb{Z}_{q^c}$) has order $q^{c-1}(q-1)$, and it's cyclic except in the case $q=2$ and $c \ge 3$, when it's the product of two cyclic groups one of which has order 2. So the only question is, what is Aut($\mathbb{Z}_{p^a} \times \mathbb{Z}_{p^b}$)?



In the case $a=b$, it is not too hard to see that the answer is $GL(2,\mathbb{Z}_{p^a})$ (invertible 2x2 matrices over the ring $\mathbb{Z}_{p^a}$).



In the general case, I will assume $a \le b$. The automorphism group turns out to be a group of "2x2 matrices with mixed entries": $\begin{pmatrix}A & B\\C & D\end{pmatrix}$ where $A, B \in \mathbb{Z}_{p^a}$ ; $C, D \in \mathbb{Z}_{p^b}$; determinant not divisible by $p$; and $C \equiv 0 \mod{p^{b-a}}$.



These matrices act as automorphisms on $\mathbb{Z}_{p^a} \times \mathbb{Z}_{p^b}$ (as column vectors) in the usual way, and the matrices multiply by each other in the usual way. You might be worried that you're adding/multiplying elements from different rings, but it turns out everything is well-defined. (Lift everything to $\mathbb{Z}$, do the arithmetic in $\mathbb{Z}$, and project back down; then check that the result is independent of the lift.)



The condition $C \equiv 0 \mod{p^{b-a}}$ is important for things to work out. You can see that this condition is necessary because the image of $(1,0)$ is $(A,C)$, and the former has order $p^a$ so the latter must also, thus forcing $C \equiv 0 \mod{p^{b-a}}$.




For a precise treatment of the above, see this article on the group of automorphisms of an arbitrary finite abelian group.
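To make the mixed-entry matrix description concrete, here is a small brute-force cross-check (the function names and test cases are mine, and only tiny groups are tried): count the automorphisms of $\mathbb{Z}_{p^a}\times\mathbb{Z}_{p^b}$ directly, and count the matrices described above.

```python
from itertools import product

def aut_count_direct(pa, pb):
    # An endomorphism of Z_pa x Z_pb is determined by the images u, v of the
    # generators (1, 0) and (0, 1).  Well-definedness needs pa*u = 0
    # (pb*v = 0 is automatic when pa divides pb).  Count the bijective ones.
    G = list(product(range(pa), range(pb)))
    count = 0
    for u in G:
        if (pa * u[1]) % pb != 0:
            continue  # image of (1, 0) must have order dividing pa
        for v in G:
            image = {((x*u[0] + y*v[0]) % pa, (x*u[1] + y*v[1]) % pb)
                     for x, y in G}
            if len(image) == len(G):
                count += 1
    return count

def aut_count_matrices(p, a, b):
    # Matrices [[A, B], [C, D]] with A, B in Z_{p^a}, C, D in Z_{p^b},
    # C = 0 mod p^(b-a), and determinant not divisible by p.
    pa, pb = p**a, p**b
    return sum(1
               for A, B in product(range(pa), repeat=2)
               for C, D in product(range(pb), repeat=2)
               if C % p**(b - a) == 0 and (A*D - B*C) % p != 0)

print(aut_count_direct(2, 4), aut_count_matrices(2, 1, 2))   # both 8
print(aut_count_direct(2, 8), aut_count_matrices(2, 1, 3))   # both 16
```

The agreement of the two counts on these cases is consistent with the matrix description; it is of course not a proof.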


integration of $\sin^3(x)/x^3$ from $0$ to infinity. Can it be solved using Laplace or Fourier transform?

Integration of $\sin^3(x)/x^3 $ from $0$ to infinity. Can it be solved using Laplace or Fourier transform ?

Tuesday, November 29, 2016

real analysis - How do we calculate this sum $\sum_{n=1}^{\infty} \frac{1}{n(n+1)\cdots(n+p)}$?



I know that this sum $$\sum_{n=1}^{\infty} \frac{1}{n(n+1)\cdots(n+p)}$$ ($p$ fixed) converges which can be easily proved using the ratio criterion, but I couldn't calculate it.




I need help in this part.



Thanks a lot.


Answer



Hint:
$$\frac{p}{n(n+1)\cdots(n+p)}=\frac{1}{(n)(n+1)\cdots (n+p-1)}-\frac{1}{(n+1)(n+2)\cdots (n+p)}.$$
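Summing the hint's right-hand side from $n=1$ to $N$ telescopes, leaving only the first term, so the full sum is $\frac1p\cdot\frac{1}{1\cdot2\cdots p}=\frac{1}{p\cdot p!}$. That closed form is my conclusion from the hint; a quick numerical check:

```python
# Partial sums of sum_{n>=1} 1/(n(n+1)...(n+p)) approach 1/(p * p!),
# the value obtained by telescoping the hint.
from math import factorial

def term(n, p):
    prod = 1.0
    for k in range(p + 1):
        prod *= n + k
    return 1.0 / prod

for p in (1, 2, 3):
    partial = sum(term(n, p) for n in range(1, 5000))
    print(p, partial, 1 / (p * factorial(p)))
```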


real analysis - Show that the series $\sum_{n=1}^\infty a_nb_n$ converges.



Problem:





Let $\{a_n\}$ be a decreasing sequence of non-negative real numbers. Suppose that $$\lim_{n\to \infty} a_n = 0$$ Also, assume that the partial sum sequence $\{B_n\}$ of the series $\sum_{n=1}^\infty b_n $ is bounded. Show that the series $\sum_{n=1}^\infty a_nb_n$ converges.




My attempt:



To show that the series $\sum_{n=1}^\infty a_nb_n$ converges, I tried to show that sequence of partial sums $\{\sum_{i=1}^n a_ib_i\}_{n\in \mathbb{N}}$ converges. I tried to do this by showing that this sequence of partial sums is Cauchy.



Let $\varepsilon > 0$. Let $M$ be the bound on $\{B_n\}$. Since $\displaystyle\lim_{n\to \infty} a_n = 0$ we have that there exists $N\in \mathbb{N}$ such that for all $n > N$ we have $|a_n| < \dfrac{\varepsilon}{2M}$. Let $m,n > N$. We have that
\begin{align*}

\left|\sum_{i=1}^n a_ib_i - \sum_{i=1}^m a_ib_i\right|
&= \left|\sum_{i=1}^N a_i b_i + \sum_{i=N+1}^n a_i b_i -\sum_{i=1}^N a_i b_i - \sum_{i=N+1}^m a_ib_i \right|\\
&=\left| \sum_{i=N+1}^m a_ib_i - \sum_{i=N+1}^n a_i b_i\right|\\
&\leq \left| \sum_{i=N+1}^m a_ib_i\right| + \left|\sum_{i=N+1}^n a_i b_i\right|\\
&\leq \text{ something }\\
&\leq |a_N|M + |a_N|M \\
&<\varepsilon
\end{align*}



My problem is the middle step/s labeled "something". I'm not sure how to relate the bound $M$ to $\sum_{i=N+1}^n a_i b_i$. I'd appreciate any help. Is this the right idea for the problem?




Thanks in advance!


Answer



There is a formula called 'partial summation' formula or Abel's transformation which is useful in this case.
It basically states that:



$$\sum_{i=1}^n a_ib_i=\sum_{i=1}^{n-1}A_i(b_i-b_{i+1})+A_nb_n$$
where
$$A_n=\sum_{i=1}^n a_i$$




Here the roles of $a_n$ and $b_n$ are swapped relative to your notation: in your problem it is $a_n$ that decreases to $0$ and the partial sums of $b_n$ that are bounded. In the notation of the formula, you have
$$\lim_{n\to\infty}b_n=0$$ with $$b_n>b_{n+1},\forall n$$
and $$|A_n|\le M$$ for some $M\in\mathbb{R}$.



If you apply the formula to the series, then notice that $$A_nb_n\to0$$
And, you have
$$|A_n(b_n-b_{n+1})|\le M(b_n-b_{n+1})$$
so that the series $$\sum_1^\infty A_n(b_n-b_{n+1})$$ is (absolutely) convergent: the series of terms on the right side of the inequality telescopes, so it converges, and the comparison test applies.
Your claim now follows.
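A numerical illustration in the answer's notation (the example sequences are my choice): take $a_n=(-1)^{n+1}$, whose partial sums $A_n$ are bounded, and $b_n=1/n$ decreasing to $0$; then $\sum a_nb_n$ is the alternating harmonic series, converging to $\log 2$. The script also checks that summation by parts reproduces the same partial sum.

```python
import math

# Direct summation vs. Abel's summation-by-parts identity:
# sum_{n=1}^N a_n b_n = sum_{n=1}^{N-1} A_n (b_n - b_{n+1}) + A_N b_N.
N = 10**6
A = 0.0       # running partial sum A_n
total = 0.0   # sum of a_n b_n, computed directly
abel = 0.0    # sum of A_n (b_n - b_{n+1}); boundary term added at the end
for n in range(1, N + 1):
    a = 1.0 if n % 2 else -1.0
    b = 1.0 / n
    total += a * b
    A += a
    abel += A * (b - 1.0 / (n + 1))
abel += A * (1.0 / (N + 1))   # boundary term A_N * b_{N+1}
print(total, abel, math.log(2))
```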


number theory - Modular-arithmetic proofs





Read examples $3.2.2$ and $3.2.3$ and answer the following questions:



Example $3.2.2.$ Find a solution to the congruence $5x\equiv11\mod 19$



Solution. If there is a solution then, by Theorem $3.1.4$, there is a solution within the set $\{0,1,2,\dots,18\}$. If $x=0$, then $5x=0$, so $0$ is not a solution. Similarly, for $x=1$, $5x=5$; for $x=2$, $5x=10$; for $x=3$, $5x=15$; for $x=4$, $5x=20$; and for $x=5$, $5x=25$. None of these are congruent to $11\mod 19$, so we have not yet found a solution. However, when $x=6$, $5x=30$, which is congruent to $11\mod 19$. Thus, $x\equiv6\mod 19$ is a solution of the congruence.



Example $3.2.3$ Show that there is no solution to the congruence $x^2\equiv3\mod5$



Proof. If $x=0$, then $x^2=0$; if $x=1$, then $x^2=1$; if $x=2$, then $x^2=4$; if $x=3$, then $x^2=9$, which is congruent to $4\mod 5$; and if $x=4$, then $x^2=16$, which is congruent to $1\mod5$. If there were any solution, it would be congruent to one of $\{0,1,2,3,4\}$ by Theorem $3.1.4$. Thus, the congruence has no solution. $\tag*{$\square$}$

Theorem 3.1.4



For a given modulus $m$, each integer is congruent to exactly one of the numbers in the set $\{0,1,2,\dots,m-1\}.$



(from UTM "A Readable Introduction to Real Mathematics" Chapter 3)









Questions:



a) For any two integers $a$ and $b$, prove that $ab= 0$ implies $a= 0$ or $b= 0$. Prove that this is still true in mod prime numbers but not true in mod a composite number.



b) Here is how we prove $a^2=b^2$ implies $a=\pm b$:
$$a^2=b^2\Rightarrow a^2-b^2=0\Rightarrow(a-b)(a+b)=0$$
$$\Rightarrow a-b=0 \vee a+b=0$$
Is this conclusion valid in modular arithmetic $\bmod\ m$: does $a^2\equiv b^2\pmod m$ imply $a\equiv \pm b\pmod m$? Either prove it, or give a counterexample.



c) Given integers $m$ and $1< a < m$, with $a\mid m$, prove that the equation $ax\equiv1 \pmod m$ has no solution. (That is, if $m$ is composite and $a$ is a factor of $m$, then $a$ has no multiplicative inverse.)








a) The first part should be an easy proof,



But I'm not sure what is meant by $$\text{Prove that this is still true in mod prime numbers}$$ $$\text{but not true in mod a composite number.}$$



How does this relate to the first part?




Does it mean $$\forall a,b,m\in\mathbb{N},\text{prime}(m)\rightarrow (ab\equiv0\mod m\rightarrow (a\equiv0\mod m\vee b\equiv0\mod m))$$



And that if $m$ is not prime, the implication can fail?



b) $$\text{WTS }\forall a,b,m\in\mathbb{N},a^2\equiv b^2\mod m\rightarrow a\equiv \pm b\mod m$$



The converse is true, but my guess is there might be some counterexamples for this one.



c) $$\forall m\in\mathbb{Z},a\in(1,m)\cap\mathbb{Z},a\mid m\rightarrow ax\equiv1\mod m \text{ has no solution}$$




Where should I start for c)?



Any help or hint or suggestion would be appreciated.


Answer



Here's a counterexample for $b)$. Let $m=8, a=1$ and $b=3$. Then $a^2\cong b^2\pmod8$, but $a\not\cong\pm b\pmod8$.



For $c)$: $a\mid m$ and $1\lt a\lt m$ imply $m=ka$ with $k\not\cong0\pmod m$, so $ka\cong0\pmod m$. If $a$ had an inverse $a^{-1}$, then $0\cong kaa^{-1}\cong k\pmod m$, a contradiction. $\Rightarrow \Leftarrow$
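Both claims are easy to confirm by brute force over small moduli (the ranges below are arbitrary):

```python
# (b): counterexample check at m = 8, a = 1, b = 3.
m, a, b = 8, 1, 3
assert (a*a - b*b) % m == 0                    # a^2 is congruent to b^2 mod 8
assert (a - b) % m != 0 and (a + b) % m != 0   # yet a is not congruent to +-b

# (c): whenever a | m with 1 < a < m, no x satisfies ax = 1 (mod m).
for m in range(4, 60):
    for a in range(2, m):
        if m % a == 0:
            assert all((a * x) % m != 1 for x in range(m))
print("all checks passed")
```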


arithmetic - Is my book wrong? Fractional word problem.


Question: Mr. Cortez owns a 10 1/2-acre tract of land, and he plans to subdivide this tract into 1/4-acre lots. He must first set aside 1/6 of the TOTAL land for roads. Which of the following expressions shows how many lots this tract will yield?


(A). $\quad 10 \frac 1 2 \div \frac 1 4 - \frac 1 6$



(B). $\quad (10\frac1 2 - \frac 1 6) \div \frac 1 4$


(C). $\quad 10 \frac1 2 + \frac 1 6 \times \frac 1 4$


(D). $\quad 10 \frac1 2 \times \frac1 4 - \frac 1 6$


(E). Not enough information is given.


Book says answer is (B), which seems wrong because if the question is saying 1/6 of the total 10 1/2-acre land I don't think you should subtract.


So, is my book wrong in telling me the answer is (B)? If so, what would the correct expression look like? I'm thinking it would be something more like:


$10 \frac12 \times \frac56 \div \frac14$ ($\frac56$ being the fraction of land you have left after setting aside $\frac16$ of it).


So that's it, help would be much appreciated since there is only 1 other similar question like this in the whole book and I don't know if I'm doing it right or this book is just wrong. This wouldn't be the first time the book is wrong either.


Book is McGraw Hill's GED, page 774 question #8.


Answer




You're correct (I will write $10\frac12$ as $10.5$ for clarity).


You need $\dfrac16$ of $10.5 \text{ acres}$ for the roads. So you're left with $10.5-(10.5)\cdot\dfrac16=(10.5)\cdot\dfrac56$ acres of land.


To obtain the number of $\dfrac14\text{ acres}$ contained in $(10.5)\cdot\dfrac56\text{ acres}$, divide the latter by the former to obtain, $\dfrac{(10.5)\cdot\dfrac56\text{ acres}}{\dfrac14\text{ acres}}=10.5\cdot\dfrac{20}{6}=\dfrac{21}2\cdot\dfrac{20}{6}=35$.
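The arithmetic can be double-checked with exact fractions, including what option (B) would actually give:

```python
from fractions import Fraction

total = Fraction(21, 2)          # 10 1/2 acres
usable = total * Fraction(5, 6)  # after setting aside 1/6 for roads
lots = usable / Fraction(1, 4)   # number of 1/4-acre lots
print(lots)                      # 35

# Option (B) from the book, for comparison:
book = (total - Fraction(1, 6)) / Fraction(1, 4)
print(book)                      # 124/3 -- not even a whole number of lots
```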


Rational Functional Equations

Suppose $f(x)$ is a rational function such that
$3 f \left( \frac{1}{x} \right) + \frac{2f(x)}{x} = x^2$
for all $x \neq 0$. Find $f(-2)$.




I tried substituting different values of $x$ to get a system of equations to solve for $f(x)$, but this didn't work. How should I proceed from here?

Monday, November 28, 2016

elementary number theory - Prove without induction that $3^{4n}-2^{4n}$ is divisible by $65$


My brother asked me this (for some reason).


My solution is:



$(3^{4n}-2^{4n})\bmod{65}=$


$(81^{n}-16^{n})\bmod{65}=$


$((81\bmod{65})^{n}-16^{n})\bmod{65}=$


$(16^{n}-16^{n})\bmod{65}=$


$0\bmod{65}$



I think that this solution is mathematically flawless (please let me know if you think otherwise).


But I'm wondering if there's another way, perhaps with the binomial expansion of $(81-16)^{n}$.


In other words, something like:


$3^{4n}-2^{4n}=$


$81^{n}-16^{n}=$



$(81-16)^{n}+65k=$


$65^{n}+65k=$


$65(65^{n-1}+k)$


How would I go from "$81^{n}-16^{n}$" to "$(81-16)^{n}+65k$"?


Answer



You can use the formula $$a^n-b^n = (a-b)\left(a^{n-1}+a^{n-2}b+a^{n-3}b^2+\ldots+ab^{n-2}+b^{n-1}\right)$$
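With $a=81$ and $b=16$ the factor $a-b=65$ appears directly. A quick script (the helper name is mine) confirms both the identity and the divisibility:

```python
# Check the direct divisibility claim and the factorization identity
# a^n - b^n = (a - b)(a^(n-1) + a^(n-2) b + ... + b^(n-1)).
def geom(a, b, n):
    return sum(a**(n - 1 - k) * b**k for k in range(n))

for n in range(1, 12):
    assert (3**(4*n) - 2**(4*n)) % 65 == 0
    assert 81**n - 16**n == (81 - 16) * geom(81, 16, n)
print("divisible by 65 for n = 1..11")
```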


elementary number theory - highest power of prime $p$ dividing $\binom{m+n}{n}$



How to prove the theorem stated here.





Theorem. (Kummer, 1854) Let $p$ be a prime. The highest power of $p$ that divides the binomial coefficient $\binom{m+n}{n}$ is equal to the number of "carries" when adding $m$ and $n$ in base $p$.




So far, I know if $m+n$ can be expanded in base power as
$$m+n= a_0 + a_1 p + \dots +a_k p^k$$
and $m$ has coefficients $\{ b_0 , b_1 , \dots, b_i\}$ and $n$ can be expanded with coefficients $\{c_0, c_1 ,\dots , c_j\}$ in base $p$, then the highest power of the prime that divides $\binom{m+n}{n}$ can be expressed as
$$e = \frac{(b_0 + b_1 + \dots b_i )+ (c_0 + c_1 + \dots c_j )-(a_0 + a_1 + \dots a_k )}{p-1}$$
It follows from here, page number $4$. But how does it relate to the number of carries? I am not able to make the connection. Perhaps I am not understanding something very fundamental about addition.


Answer



If $b_{0} + c_{0} < p$, then $a_{0} = b_{0} + c_{0}$, there are no carries, and the term

$$
b_{0} + c_{0} - a_{0} = 0
$$
does not contribute to your $e$.



If $b_{0} + c_{0} \ge p$, then $a_{0} = b_{0} + c_{0} - p$, and this time $b_{0} + c_{0} - a_{0}$ gives a contribution of $p$ to the numerator of $e$. Plus, there is a contribution of $1$ to $a_{1}$, so the net contribution to the numerator of $e$ is $p -1$, and that to $e$ is $1$. Repeat.



As mentioned by Jyrki Lahtonen in his comment (which appeared while I was typesetting this answer), you may have propagating carries, but this is the basic argument.
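Kummer's theorem is easy to verify computationally on small cases; the helpers below are mine (a base-$p$ carry counter and a $p$-adic valuation):

```python
from math import comb

def carries(m, n, p):
    # Number of carries when adding m and n in base p.
    count, carry = 0, 0
    while m or n or carry:
        s = m % p + n % p + carry
        carry = 1 if s >= p else 0
        count += carry
        m //= p
        n //= p
    return count

def p_adic_val(x, p):
    # Exponent of the highest power of p dividing x.
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

# Kummer's theorem on a grid of small cases:
for p in (2, 3, 5):
    for m in range(1, 40):
        for n in range(1, 40):
            assert p_adic_val(comb(m + n, n), p) == carries(m, n, p)
print("Kummer's theorem verified on the grid")
```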


calculus - Limit of the sequence $\frac{a^n}{n!}$



I need to prove that $$\lim_{n \rightarrow \infty} \frac {a^n} {n!}=0$$



I have no condition on $a$, just that it is a real number. I thought of using L'Hôpital, but it's way too complicated for something that should be simpler. Same goes for the epsilon proof, and I'm running out of options for what to do with it.



Thanks!



PS: I could also use a hint on how to show $$\lim_{n \rightarrow \infty} \frac {n!} {n^n}=0$$



Answer



Compare it with the following geometric sequence:
$b_n=\left(\frac{|a|^m}{m!}\right)\left(\frac{|a|}{m+1}\right)^n,$ where $m$ is the smallest positive integer such that $m+1\gt |a|.$
Notice that $|a_{n+m}|\le b_n$, so that $\lim a_n=\lim b_n=0.$
Hope this helps.


probability - Does the cumulative distribution function determine the random variable?



I don't know if "determine" is the right word, but let me try to explain what I need to understand. :) So..
We know that if a function satisfies these conditions:





  • Monotonically non-decreasing for each of its variables

  • Right-continuous for each of its variables.



$$
0 \le F(x_1,\ldots,x_n) \le 1
$$
$$
\lim_{x_1,\ldots,x_n\to\infty} F(x_1,\ldots,x_n)=1
$$

$$
\lim_{x_i\to-\infty} F(x_1,\ldots,x_n) = 0,\text{ for all } i
$$
then the function is or can be a cumulative distribution function.



In this logic, does the cumulative distribution function determine the random variable? How can I prove it in a mathematical way? I understand it in my own way, but not mathematically.



Maybe we can start with the fact that the cumulative distribution function determines the probability distribution and vice versa. But how can I prove mathematically that the probability distribution determines the random variable?



Thanks for your explanation,

I am really grateful:)


Answer



In general the CDF does not determine the random variable. Consider for instance the uniform distributions over $[a,b]$ and over $(a,b)$: the underlying random variables are different, but it is straightforward to check that the CDFs are identical. The CDF does determine the law (probability distribution), but many different random variables share the same law.


calculus - Continuous function, finding its value

If a function $f: \mathbb{R}\to\mathbb{R}$ is continuous and $f(x+y) = f(x) + f(y)$ for all $x,y\in\mathbb{R}$, then what is this function $f(x)$?

laplace transform - Partial fraction for complex roots using the second order polynomial

$\frac{(s^2 +s +1)}{(s^2+4s+3)(s+1)}$



the answer has to be in the form $\frac{A}{s+1} + \frac{Bs+C}{s^2+4s+3}$.
However, I tried to solve it this way but ended up concluding that there is no solution for this problem. For instance, when solving for $A$ I get $A=1+A$, which is false. Please help me; I have spent a lot of time trying this question in different ways and have no ideas left.

probability - Change of summation limit for a Poisson distribution

I've been given a Poisson distribution for a question sheet and I'm trying to find the mean of the distribution. The solution reads:




$$ \langle x \rangle=\sum_{n=0}^\infty n \frac{a^n}{n!}e^{-a}=e^{-a}\sum_{n=1}^\infty \frac{a^n}{(n-1)!}=e^{-a}a\sum_{n=1}^\infty \frac{a^{n-1}}{(n-1)!}= e^{-a}ae^{a}=a$$



Where $a$ is a real constant. I'm struggling to understand why the lower boundary of the sum has changed during the second step. In the question we are also given the series expansion of the exponential function as:



$$e^x = \sum_{n=0}^\infty \frac{x^n}{n!} $$



Any help would be very much appreciated. Thanks a lot in advance!

Sunday, November 27, 2016

probability - Pick the highest of two (or $n$) independent uniformly distributed random numbers - average value?



With "random number" I mean an independent uniformly distributed random number.



One




Picking one random number is easy: When I pick a random number from $x$ to $y$, the average value is $(x+y)/2$.



Two



I'm no maths expert, nevertheless I was able to work out a solution for whole numbers: When I pick the highest of two random whole numbers from $x$ to $y$ ...



Let $n$ be the count of possible results $n = y - x + 1$.



I looked at the probability of every single possible outcome and noticed an arithmetic sequence. I knew the sequence had to start with the rarest possibility: rolling the lowest value twice in a row.




$$p_1 = \frac{1}{n^2}$$



And I knew the sequence had to have a sum of 100 %. This made it possible to calculate the last, the $n$th, element:



$$p_n = \frac{2}{n} - p_1$$



Based on that it was easy to calculate the difference between elements and subsequently the formula for the sequence.



$$p_i = \frac{2in-2i-n+1}{n^3-n^2}$$




All I had to do now was multiply the probabilities by their respective values and sum:



$$
\sum_{i=1}^{n} \frac{2in-2i-n+1}{n^3-n^2}(x+i-1)
$$



Questions



What sort of distribution is this, how is it called? I can't name it and I can hardly search for it.




What is the solution for picking the highest of two random real numbers from $x$ to $y$? I feel a little bit lost here, because my approach fails: there are infinitely many real numbers.



What is the solution for picking the highest of $c$ random real numbers from $x$ to $y$?


Answer



Here is exactly what you are looking for for all types of random variables.



For $n$ iid discrete uniform random variables on $\{a, a+1, \dots, b\}$, the PMF of the maximum is



$$P(Y_{max}=y)=\left(\frac{\lfloor y\rfloor-a+1}{b-a+1}\right)^n-\left(\frac{\lfloor y\rfloor-a}{b-a+1}\right)^n$$




For $n$ iid continuous uniform random variables the PDF is



$$f_{Y_{max}}(y)=\begin{cases}\frac{n}{b-a}\left(\frac{y-a}{b-a}\right)^{n-1} &,y\in[a,b]\\
0&,\text{otherwise}\\
\end{cases}$$
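A Monte Carlo sanity check of the continuous case (the parameters below are arbitrary): the CDF of the maximum of $n$ iid Uniform$(a,b)$ variables is $\left(\frac{y-a}{b-a}\right)^n$, and the standard expectation of the maximum is $a + (b-a)\frac{n}{n+1}$.

```python
import random

random.seed(0)
a, b, n, trials = 2.0, 5.0, 3, 200_000
maxima = [max(random.uniform(a, b) for _ in range(n)) for _ in range(trials)]

# Empirical mean vs. a + (b - a) * n/(n + 1)
mean_est = sum(maxima) / trials
mean_exact = a + (b - a) * n / (n + 1)
print(mean_est, mean_exact)

# Empirical CDF at y = 4 vs. ((y - a)/(b - a))^n
y = 4.0
cdf_est = sum(1 for m in maxima if m <= y) / trials
cdf_exact = ((y - a) / (b - a)) ** n
print(cdf_est, cdf_exact)
```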


real analysis - Function whose image of every open interval is $(-\infty,\infty)$




How to find a function from reals to reals such that the image of every open interval is the whole of R?



Is there one which maps rationals to rationals?


Answer



Though this can be done explicitly with enough cleverness (for example with the Conway base 13 function), I rather like the following choice-based argument, which requires almost no thought once you're familiar with transfinite recursion.



Consider the set consisting of all ordered triples of reals $(a,b,c)$ with $a < b$. This set has the cardinality of the continuum, so we can well-order it in such a way that every triple has fewer than continuum-many predecessors.

Now we build our function $f$ recursively along this well-order, fixing the value of $f$ at one new point at each step. At the step corresponding to $(a,b,c)$, we want to ensure that there exists a point $x$ in the open interval $(a,b)$ such that $f(x)=c$. Since we've only fixed $f$ at less-than-continuum-many points, and $(a,b)$ has cardinality continuum, we can choose an $x$ in $(a,b)$ such that $f(x)$ is not yet fixed, and fix $f(x)$ to be $c$.




This recursion gives us a partial function from $\mathbb{R}$ to $\mathbb{R}$ that already satisfies our requirements. We can make it total by just setting $f(x)$ to be $0$ (say) wherever $f(x)$ is not yet defined.



If we additionally want $f$ to map rationals to rationals, we can simply set $f(x)=0$ for every rational $x$ before commencing the recursion.


calculus - How to evaluate $\lim\limits_{x \rightarrow +\infty}{e^x \left(e - \left(1+\frac{1}{x}\right)^x\right)}$ without L'Hospital?

Using several times L'Hospital Rule I got $$\lim_{x \rightarrow +\infty}{e^x \left (e - \left(1+\dfrac{1}{x}\right )^x\right)} = +\infty.$$ Is it possible find this limit without L'Hospital?

elementary number theory - Solving simultaneous linear congruences




(a) $x≡5\pmod 7\;\;,\; x≡7\pmod{11}\;\;,\;\;x≡3\pmod{13}$



(b) $x≡3\pmod{10}\;\;,\;\; x≡8\pmod{15}\;\;,\;\;x≡5\pmod{84}$






for (a) I have a rough idea how to do it, its like:



$n_1=7,n_2=11,n_3=13$ then $n=7·11·13=1001$




${1001\over7}\cdot k_1\equiv1\pmod 7$



${1001\over11}\cdot k_2\equiv1\pmod{11}$



${1001\over13}\cdot k_3\equiv1\pmod{13}$



Hence, $k_1=[5], k_2=[4], k_3=[12]$



$x_0=(11\cdot13)\cdot5\cdot5+(7\cdot13)\cdot4\cdot7+(7\cdot11)\cdot(-1)\cdot3=5892\equiv887\pmod{1001}$




the solution set is $x=x_0+kn$ for $k\in\mathbb{Z}$, i.e. $x=887+1001k$.



I'm not sure if I'm correct, but I just followed the standard procedure.



(b) for this one, what should I do when the $n$'s are not pairwise coprime?



Thank you!!


Answer



You've got the idea...




Apply prime factorization to each of the moduli $n$, omit common factors, proceed in much the way you did for (a), and transform each statement in the system into equivalent congruences for the prime powers: (i.e. to a prime residue system). E.g. $$x \equiv 3 \pmod{10} \iff x \equiv 3 \pmod 2 \;\text{and}\; x \equiv 3 \pmod 5\,.$$



And of course, you'll then want to use the Chinese Remainder Theorem and/or the extended CRT.
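Both parts can be verified with a short script. The `crt` helper below is mine and assumes pairwise coprime moduli; for (b) the three congruences are first reduced to prime powers as described above.

```python
from math import prod

def crt(residues, moduli):
    # Chinese Remainder Theorem for pairwise coprime moduli.
    N = prod(moduli)
    x = 0
    for r, n in zip(residues, moduli):
        Ni = N // n
        x += r * Ni * pow(Ni, -1, n)   # pow(Ni, -1, n) = inverse of Ni mod n
    return x % N, N

# (a): pairwise coprime moduli, apply CRT directly.
x, N = crt([5, 7, 3], [7, 11, 13])
print(x, N)   # 887 mod 1001

# (b): 10 = 2*5, 15 = 3*5, 84 = 2^2*3*7 are not pairwise coprime.  Reducing
# each congruence to prime powers, checking the overlaps agree, and keeping
# the strongest condition for each prime gives:
# x = 1 (mod 4), x = 3 (mod 5), x = 2 (mod 3), x = 5 (mod 7).
x, N = crt([1, 3, 2, 5], [4, 5, 3, 7])
print(x, N)   # 173 mod 420
assert x % 10 == 3 and x % 15 == 8 and x % 84 == 5
```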


Saturday, November 26, 2016

inequality - Showing the "only if" direction of equality in a complex equality



How can I finish off the "only if" direction? I am just unable to prove the only if direction! Using the induction hypothesis and the triangle inequality is confusing me for some reason.




Show that
\begin{equation}
|z_1+z_2+\dots+z_n| = |z_1| + |z_2| + \dots + |z_n|
\end{equation}
if and only if $z_k/z_{\ell} \ge 0$ for any integers $k$ and $\ell$, $1 \le k, \ell \le n,$ for which $z_{\ell} \ne 0.$



We show the "if" direction first. Suppose that $z_k/z_{\ell} \ge 0.$ Without loss of generality, suppose that $z_1$ is nonzero. Otherwise, we could reduce to $|z_2+\dots+z_n| = |z_2| + \dots + |z_n|$, where $z_2, \dots, z_n$ are all nonzero. Then we have:
\begin{align*}
|z_1+z_2+\dots+z_n| \ &= |z_1|\left|1+\dfrac{z_2}{z_1}+\dots+\dfrac{z_n}{z_1}\right| \\
&= |z_1|\left(1+\dfrac{z_2}{z_1}+\dots+\dfrac{z_n}{z_1}\right) \ \ \ \ \ \ \ \ \ \ \ \mathrm{Since} \ \dfrac{z_i}{z_1} \ge 0 \\

&=|z_1|\left(1+\left|\dfrac{z_2}{z_1}\right|+\dots+\left|\dfrac{z_n}{z_1}\right|\right) \\
&=|z_1|\left(1+\dfrac{|z_2|}{|z_1|}+\dots+\dfrac{|z_n|}{|z_1|}\right)=|z_1| + |z_2| + \dots + |z_n|
\end{align*}
To show the "only if" direction, we use induction. For $n=2$, we want to
find a condition for which $|z_1+z_2|=|z_1|+|z_2|.$ From the book and class discussions, we see that equality occurs if $z_1$ and $z_2$ are collinear. Provided the valid assumption of $z_2 \ne 0,$ we have that a necessary and sufficient condition, for which $|z_1+z_2|=|z_1|+|z_2|$, is $z_1/z_2 \ge 0.$



Thanks!


Answer



If $z_k/z_l$ is not $\ge 0$ for some integers $k$ and $l$, let $k=1$ and $l=2$ without loss of generality. You already know that $|z_1+z_2|<|z_1|+|z_2|$. Then
\begin{align*}

|z_1+z_2+\cdots+z_n|&\le |z_1+z_2|+|z_3|+\cdots+|z_n|\\
&<|z_1|+|z_2|+|z_3|+\cdots+|z_n|,
\end{align*}
which means $|z_1+z_2+\cdots+z_n|= |z_1|+|z_2|+\cdots+|z_n|$ does not hold. Q.E.D.



The triangle inequality, which I used in the first line, can be shown by induction easily.


calculus - Calculating $\sum_{k=0}^{n}\sin(k\theta)$




I'm given the task of calculating the sum $\sum_{i=0}^{n}\sin(i\theta)$.



So far, I've tried converting each $\sin(i\theta)$ in the sum into its taylor series form to get:
$\sin(\theta)=\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}-\frac{\theta^7}{7!}...$
$\sin(2\theta)=2\theta-\frac{(2\theta)^3}{3!}+\frac{(2\theta)^5}{5!}-\frac{(2\theta)^7}{7!}...$
$\sin(3\theta)=3\theta-\frac{(3\theta)^3}{3!}+\frac{(3\theta)^5}{5!}-\frac{(3\theta)^7}{7!}...$
...
$\sin(n\theta)=n\theta-\frac{(n\theta)^3}{3!}+\frac{(n\theta)^5}{5!}-\frac{(n\theta)^7}{7!}...$




Therefore the sum becomes,
$\theta(1+...+n)-\frac{\theta^3}{3!}(1^3+...+n^3)+\frac{\theta^5}{5!}(1^5+...+n^5)-\frac{\theta^7}{7!}(1^7+...+n^7)...$



But it's not immediately obvious what the next step should be.



I also considered expanding each $\sin(i\theta)$ using the trigonometric identity for $\sin(A+B)$; however, I don't see a general form for $\sin(i\theta)$ to work with.


Answer



You may write, for any $\theta \in \mathbb{R}$ such that $\sin(\theta/2) \neq 0$,
$$
\begin{align}

\sum_{k=0}^{n} \sin (k\theta)&=\Im \sum_{k=0}^{n} e^{ik\theta}\\\\
&=\Im\left(\frac{e^{i(n+1)\theta}-1}{e^{i\theta}-1}\right)\\\\
&=\Im\left( \frac{e^{i(n+1)\theta/2}\left(e^{i(n+1)\theta/2}-e^{-i(n+1)\theta/2}\right)}{e^{i\theta/2}\left(e^{i\theta/2}-e^{-i\theta/2}\right)}\right)\\\\
&=\Im\left( \frac{e^{in\theta/2}\left(2i\sin((n+1)\theta/2)\right)}{\left(2i\sin(\theta/2)\right)}\right)\\\\
&=\Im\left( e^{in\theta/2}\frac{\sin((n+1)\theta/2)}{\sin(\theta/2)}\right)\\\\
&=\Im\left( \left(\cos (n\theta/2)+i\sin (n\theta/2)\right)\frac{\sin((n+1)\theta/2)}{\sin(\theta/2)}\right)\\\\
&=\frac{\sin(n\theta/2)\sin ((n+1)\theta/2)}{\sin(\theta/2)}.
\end{align}
$$
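The closed form is easy to spot-check numerically against the direct sum (the test values of $n$ and $\theta$ are arbitrary):

```python
import math

def direct(n, theta):
    # sum_{k=0}^{n} sin(k * theta), computed term by term
    return sum(math.sin(k * theta) for k in range(n + 1))

def closed(n, theta):
    # sin(n theta/2) sin((n+1) theta/2) / sin(theta/2)
    return (math.sin(n * theta / 2) * math.sin((n + 1) * theta / 2)
            / math.sin(theta / 2))

for n in (1, 5, 17):
    for theta in (0.3, 1.0, 2.5):
        assert abs(direct(n, theta) - closed(n, theta)) < 1e-12
print("closed form agrees on all spot checks")
```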


approximation - Famous fractions: Can any "special" numbers be approximated by simple ratios like $3.14\ldots$ as $22/7$?



The ratio $22/7$ dates back to antiquity as an approximation of $3.14\ldots$. I'm wondering whether there are any other "famous" numbers with a similar situation. That is, something like $e$ or $\phi$ (the golden ratio), which are usually represented as decimals, perhaps have instead at times been approximated by a useful fraction throughout history.


Answer



I'll describe a way to derive rational approximations of irrational numbers and then use it to recover some well-known examples.



Given a number like $\pi$, we set aside the integer part ($a_0 = 3$), and we seek to approximate the fractional part ($0.14159\!\ldots$) by some Egyptian fraction $\frac{1}{a_1}$, or just as well its reciprocal, $7.06251\!\ldots$ by an integer $a_1$. Rounding down gives the familiar approximation $\pi \approx 3 + \frac{1}{7} = \frac{22}{7}$. Iterating this process gives a series of approximations,
$$3, 3 + \frac{1}{7}, 3 + \frac{1}{7 + \frac{1}{15}}, 3 + \frac{1}{7 + \frac{1}{15 + \frac{1}{1}}}, 3 + \frac{1}{7 + \frac{1}{15 + \frac{1}{1 + \frac{1}{292}}}}, \ldots .$$

We call the formal limit of these fractions the continued fraction for $\pi$, and for readability sometimes just write out this quantity as the sequence $[3; 7, 15, 1, 292, \ldots]$. Simplifying the above fractions gives the convergents of $\pi$, a sequence of improving approximations:
$$3, \frac{22}{7}, \frac{333}{106}, \frac{355}{113}, \frac{103993}{33102} \ldots .$$
Evidently passing from a convergent just before a large term (e.g., $292$ in the above expansion) to the next one results in a relatively small adjustment, so, very roughly speaking, truncating the continued fraction just before a large term yields an unusually good approximation for the size of its denominator. Indeed, the convergent obtained by stopping just before $292$ gives the famous approximation $$\pi \approx \frac{355}{113}$$ discovered by Zu Chongzhi in the 5th Century A.D. and sometimes known as the Milü (密率); it is accurate to about one part in $10^7$.



Some more examples:




  • The continued fractions of some familiar numbers exhibit obvious patterns. For example, it follows from the fact that the Golden Ratio $\phi$ satisfies $\phi^2 = \phi + 1$ that its continued fraction is $[1; 1, 1, 1, \ldots]$. Its successive convergents are the successive ratios $F_{n + 1} / F_{n}$ of Fibonacci numbers:
    $$1, 2, \frac{3}{2}, \frac{5}{3}, \frac{8}{5}, \frac{13}{8}, \ldots .$$ The fact that all the terms in the continued fraction are $1$ means, roughly, that $\phi$ is difficult to approximate well by rational numbers.


  • The convergents of the disintegration constant $\log 2$ are

    $$0, 1, \frac{2}{3}, \frac{7}{10}, \ldots ,$$
    and the occurrence of $\frac{7}{10}$ can be taken as a motivation for the Rule of 70.


  • As lulu mentioned in the comments, approximations of $\log_2 3$ are important in music theory: One sometimes wants to work with two tones whose frequency ratio is close to $3 : 2$. In an equally tempered scale of $n$ notes to an octave, this means approximating $\log_2 3$ with some rational number $\frac{m}{n}$. The convergents of $\log_2 3$ are
    $$1, 2, \frac{3}{2}, \frac{8}{5}, \frac{19}{12}, \frac{65}{41}, \ldots ,$$ and the presence of the ratio $\frac{19}{12}$ corresponds to the fact that an interval of a perfect fifth in the familiar chromatic ($12$-note, evenly tempered) scale is a good approximation of a $3 : 2$ ratio of frequencies.


  • The continued fraction for $\sqrt{2}$ is $[1; 2, 2, 2, \ldots]$, and its convergents are $$1, \frac{3}{2}, \frac{7}{5}, \frac{17}{12}, \ldots ,$$
    and one can show that the numerator and denominator from every second convergent ($\frac{3}{2}, \frac{17}{12}, \ldots$) are solutions to the classical Pell equation $$x^2 - 2 y^2 = 1 ,$$ and the others are solutions to the Pell equation $x^2 - 2 y^2 = -1$. Similar observations hold for other square roots of integers.
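The expansions and convergents above are easy to reproduce programmatically. The `convergents` helper below is mine; it works from a floating-point value, so only the first several terms are trustworthy:

```python
from fractions import Fraction
import math

def convergents(x, k):
    # First k convergents of the continued fraction of x.
    result, terms = [], []
    for _ in range(k):
        a = math.floor(x)
        terms.append(a)
        # Evaluate [a0; a1, ..., an] from the back as an exact fraction.
        frac = Fraction(terms[-1])
        for t in reversed(terms[:-1]):
            frac = t + 1 / frac
        result.append(frac)
        if x == a:
            break
        x = 1 / (x - a)
    return result

print(convergents(math.pi, 5))
# [3, 22/7, 333/106, 355/113, 103993/33102]
```

Running it on $(1+\sqrt5)/2$ instead reproduces the Fibonacci ratios, and on $\sqrt2$ the Pell-equation convergents.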



algebra precalculus - Why are the given speed ratios irrelevant for this problem?

The ratio of the speeds of a goods train and a passenger train is 3:7. The two trains can cross each other in 40 sec. A man in the passenger train observes that the goods train crosses him in 25 sec. If the goods train is 275m long, what is the length of the passenger train?


I proceed like this: we have to find the two trains' crossing distance, because the time for the two trains to cross each other is given. So the crossing distance is $275 + X$. The other distance is the goods train's own length, i.e. $275$ m (the distance it covers in passing the man).


$$ \begin{split} &\text{Distance ratio }= (275+X)/275\\ &\text{Time ratio }= 40/25\text{ (given)}\\ &\to (275+X)/275 = 40/25\\ &\to X = 165m \end{split} $$


Why is the given speed ratio irrelevant? And why aren't the given times irrelevant instead?
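One way to see why the speed ratio drops out: assuming, as these problems usually do, that the trains run in opposite directions, both events happen at the same relative speed $v_g + v_p$, and that sum is already pinned down by the goods train passing the man. The 3:7 split of that sum never enters. A quick numerical check (variable names are mine):

```python
goods_len = 275.0   # m, length of the goods train
t_man = 25.0        # s, goods train passes the man in the passenger train
t_cross = 40.0      # s, the two trains cross each other

# Both events happen at the same relative speed v_g + v_p:
rel_speed = goods_len / t_man           # 11.0 m/s
total_len = rel_speed * t_cross         # 440.0 m = 275 + X
passenger_len = total_len - goods_len
print(passenger_len)                    # 165.0
```

Any split of the 11 m/s into speeds with ratio 3:7 (or any other ratio) gives the same answer, which is exactly why the ratio is irrelevant.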

Friday, November 25, 2016

calculus - Is the sum of positive divergent series always divergent?


If two positive terms series $\sum_{n=1}^{\infty} a_n, \sum_{n=1}^{\infty} b_n$ are divergent, $\sum_{n=1}^{\infty} (a_n+b_n)$ is also divergent.



I thought it was obvious, but I saw a counterexample to this problem, namely $a_n = n, b_n = -n.$ However, this is a little bit strange, because $\sum b_n$ is NOT a series of positive terms.


What's wrong with my thoughts?


Answer



If both are positive then yes, your thoughts are correct, for example by direct comparison. That example isn't relevant because, as you said, $b_n$ is not a positive sequence.


calculus - Evaluate $\lim_{x\to0}\frac{1-\cos3x+\sin 3x}x$ without L'Hôpital's rule


I've been trying to solve this question for hours. It asks to find the limit without L'Hôpital's rule. $$\lim_{x\to0}\frac{1-\cos3x+\sin3x}x$$ Any tips or help would be much appreciated.


Answer



If you are given that $\lim_{x \to 0}{\sin x \over x } = 1$, then since $1-\cos (3x) = 2 \sin^2 ({3 \over 2} x)$ (half angle formula), we have


\begin{eqnarray} {1 -\cos (3x) +\sin (3x) \over x } &=& {2 \sin^2 ({3 \over 2} x) \over x} +{\sin (3x) \over 3x} {3x \over x} \\ &=& 2 ({\sin ({3 \over 2} x) \over {3 \over 2} x})^2 { ({3 \over 2} x)^2 \over x} + 3 {\sin (3x) \over 3x} \\ &=& {9 \over 2} x ({\sin ({3 \over 2} x) \over {3 \over 2} x})^2 + 3 {\sin (3x) \over 3x} \end{eqnarray} Taking limits gives $3$.
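As a quick numerical sanity check on the value $3$ (the function name is mine):

```python
from math import cos, sin

def f(x):
    return (1.0 - cos(3*x) + sin(3*x)) / x

for x in (0.1, 0.01, 0.001):
    print(x, f(x))   # approaches 3 as x -> 0
```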


elementary set theory - Proving intervals are equinumerous to $\mathbb R$


Let $a$, $b$ elements of $\mathbb R$ with $a < b$. By combining the results of the past exercises and examples, show that each of the following intervals are equinumerous to the set $\mathbb R$ of all real numbers:




  1. $(a,b)$


  2. $(a,b]$


  3. $[a, b)$


  4. $[a, b]$


  5. $(a, \infty)$


  6. $[a, \infty)$



  7. $(-\infty, b)$


  8. $(-\infty, b]$





Do I need to find a bijection or what, from each interval to $(-\infty, \infty)$? We've shown in class that $(a,b)$ is equinumerous to $(0,1)$, and that $(0,1)$ is equinumerous to $\mathbb R$. We have gone over a proof that $[a, b]$ is equinumerous to $[0,1]$. We've also covered that $[a, b]$ and $(a,b)$ have the same cardinality.



Doing 8 proofs of intervals with a function $f$ being a bijection to $\mathbb R$ seems really tedious, so I'm asking if there's another way around that?



And if there isn't, I'd like some help on the proofs.

rationality testing - Is there a way to prove numbers irrational in general?

I'm familiar with the typical proof that $\sqrt2\not\in\mathbb{Q}$, where we assume it is equivalent to $\frac ab$ for some integers $a,b$, then prove that both $a$ and $b$ are divisible by $2$, repeat infinitely, proof by contradiction, QED. I'm also familiar with the fact that if you repeat this procedure for any radical, you can similarly prove that each $a^n,b^n$ are divisible by the radicand, where $n$ is the root of the radical (though things get tricky if you don't reduce first). In other words, the proof that $\sqrt2\not\in\mathbb{Q}$ generalizes to prove that any number of the form $\sqrt[m]{n}\not\in\mathbb{Q}$.




Further, since adding or multiplying a rational and an irrational yields an irrational, this is a proof for all algebraic irrationals. (Ex. I can prove $\phi=\frac{1+\sqrt5}{2}$ irrational just by demonstrating that the $\sqrt5$ term is irrational.)



Because this relies on the algebra of radicals, this won't help for transcendental numbers. Is there a proof that generalizes to all irrational numbers, or must each transcendental number be proven irrational independently?

sequences and series - Proving $\frac{\sin x}{x} =\left(1-\frac{x^2}{\pi^2}\right)\left(1-\frac{x^2}{2^2\pi^2}\right)\left(1-\frac{x^2}{3^2\pi^2}\right)\cdots$



How to prove the following product?
$$\frac{\sin(x)}{x}=
\left(1+\frac{x}{\pi}\right)
\left(1-\frac{x}{\pi}\right)
\left(1+\frac{x}{2\pi}\right)
\left(1-\frac{x}{2\pi}\right)
\left(1+\frac{x}{3\pi}\right)
\left(1-\frac{x}{3\pi}\right)\cdots$$


Answer



Real analysis approach.



Let $\alpha\in(0,1)$, then define on the interval $[-\pi,\pi]$ the function $f(x)=\cos(\alpha x)$ and $2\pi$-periodically extended it the real line. It is straightforward to compute its Fourier series. Since $f$ is $2\pi$-periodic and continuous on $[-\pi,\pi]$, then its Fourier series converges pointwise to $f$ on $[-\pi,\pi]$:
$$
f(x)=\frac{2\alpha\sin\pi\alpha}{\pi}\left(\frac{1}{2\alpha^2}+\sum\limits_{n=1}^\infty\frac{(-1)^n}{\alpha^2-n^2}\cos nx\right),
\quad x\in[-\pi,\pi]\tag{1}
$$

Now take $x=\pi$, then we get
$$
\cot\pi\alpha-\frac{1}{\pi\alpha}=\frac{2\alpha}{\pi}\sum\limits_{n=1}^\infty\frac{1}{\alpha^2-n^2},
\quad\alpha\in(-1,1)\tag{2}
$$
Fix $t\in(0,1)$. Note that for each $\alpha\in(0,t)$ we have $|(\alpha^2-n^2)^{-1}|\leq(n^2-t^2)^{-1}$ and the series $\sum_{n=1}^\infty(n^2-t^2)^{-1}$ is convergent. By Weierstrass $M$-test the series in the right hand side of $(2)$ is uniformly convergent for $\alpha\in(0,t)$. Hence we can integrate $(2)$ over the interval $[0,t]$. And we get
$$
\ln\frac{\sin \pi t}{\pi t}=\sum\limits_{n=1}^\infty\ln\left(1-\frac{t^2}{n^2}\right),
\quad t\in(0,1)
$$

Finally, substitute $x=\pi t$, to obtain
$$
\frac{\sin x}{x}=\prod\limits_{n=1}^\infty\left(1-\frac{x^2}{\pi^2 n^2}\right),
\quad x\in(0,\pi)
$$



Complex analysis approach



We will need the following theorem (due to Weierstrass).





Let $f$ be an entire function with infinite number of zeros $\{a_n:n\in\mathbb{N}\}$. Assume that $a_0=0$ is zero of order $r$ and $\lim\limits_{n\to\infty}a_n=\infty$, then
$$
f(z)=
z^r\exp(h(z))\prod\limits_{n=1}^\infty\left(1-\frac{z}{a_n}\right)
\exp\left(\sum\limits_{k=1}^{p_n}\frac{1}{k}\left(\frac{z}{a_n}\right)^{k}\right)
$$
for some entire function $h$ and sequence of positive integers $\{p_n:n\in\mathbb{N}\}$. The sequence $\{p_n:n\in\mathbb{N}\}$ can be chosen arbitrary with only one requirement $-$ the series
$$
\sum\limits_{n=1}^\infty\left(\frac{z}{a_n}\right)^{p_n+1}
$$ is uniformly convergent on each compact $K\subset\mathbb{C}$.




Now we apply this theorem to the entire function $\sin z$. In this case we have $a_n=\pi n$ and $r=1$. Since the series
$$
\sum\limits_{n=1}^\infty\left(\frac{z}{\pi n}\right)^2
$$
is uniformly convergent on each compact $K\subset \mathbb{C}$, then we may choose $p_n=1$. In this case we have
$$
\sin z=z\exp(h(z))\prod\limits_{n\in\mathbb{Z}\setminus\{0\}}\left(1-\frac{z}{\pi n}\right)\exp\left(\frac{z}{\pi n}\right)
$$
Let $K\subset\mathbb{C}$ be a compact which doesn't contain zeros of $\sin z$. For all $z\in K$ we have
$$
\ln\sin z=h(z)+\ln(z)+\sum\limits_{n\in\mathbb{Z}\setminus\{0\}}\left(\ln\left(1-\frac{z}{\pi n}\right)+\frac{z}{\pi n}\right)
$$
$$
\cot z=\frac{d}{dz}\ln\sin z=h'(z)+\frac{1}{z}+\sum\limits_{n\in\mathbb{Z}\setminus\{0\}}\left(\frac{1}{z-\pi n}+\frac{1}{\pi n}\right)
$$
It is known that (here you can find the proof)
$$
\cot z=\frac{1}{z}+\sum\limits_{n\in\mathbb{Z}\setminus\{0\}}\left(\frac{1}{z-\pi n}+\frac{1}{\pi n}\right).
$$
hence $h'(z)=0$ for all $z\in K$. Since $K$ is arbitrary then $h(z)=\mathrm{const}$. This means that
$$
\sin z=Cz\prod\limits_{n\in\mathbb{Z}\setminus\{0\}}\left(1-\frac{z}{\pi n}\right)\exp\left(\frac{z}{\pi n}\right)
$$
Since $\lim\limits_{z\to 0}z^{-1}\sin z=1$, then $C=1$. Finally,
$$
\frac{\sin z}{z}=\prod\limits_{n\in\mathbb{Z}\setminus\{0\}}\left(1-\frac{z}{\pi n}\right)\exp\left(\frac{z}{\pi n}\right)=
\lim\limits_{N\to\infty}\prod\limits_{n=-N,n\neq 0}^N\left(1-\frac{z}{\pi n}\right)\exp\left(\frac{z}{\pi n}\right)=
$$
$$
\lim\limits_{N\to\infty}\prod\limits_{n=1}^N\left(1-\frac{z^2}{\pi^2 n^2}\right)=
\prod\limits_{n=1}^\infty\left(1-\frac{z^2}{\pi^2 n^2}\right)
$$
This result is much stronger because it holds for all complex numbers. But in this proof I cheated, because the series representation for $\cot z$ given above requires additional effort and the use of Mittag-Leffler's theorem.
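Both closed forms lend themselves to a quick numerical check; a small sketch truncating the product at $N = 10^5$ factors (the truncation error is roughly $x^2/(\pi^2 N)$, so this tolerance is comfortable):

```python
from math import pi, sin

def sinc_product(x, terms):
    """Partial product of (1 - x^2 / (pi^2 n^2)), n = 1..terms."""
    p = 1.0
    for n in range(1, terms + 1):
        p *= 1.0 - x*x / (pi*pi * n*n)
    return p

for x in (1.0, 2.0):
    print(x, sinc_product(x, 100_000), sin(x) / x)   # the two columns agree
```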


algebra precalculus - Simplifying $\sqrt[4]{161-72 \sqrt{5}}$


$$\sqrt[4]{161-72 \sqrt{5}}$$


I tried to solve this as follows:


the resultant will be in the form of $a+b\sqrt{5}$ since 5 is a prime and has no other factors other than 1 and itself. Taking this expression to the 4th power gives $a^4+4 \sqrt{5} a^3 b+30 a^2 b^2+20 \sqrt{5} a b^3+25 b^4$. The integer parts of this must be equal to $161$ and the coeffecients of the roots must add to $-72$. You thus get the simultaneous system:


$$a^4+30 a^2 b^2+25 b^4=161$$ $$4 a^3 b+20 a b^3=-72$$


In an attempt to solve this, I first tried to factor stuff and rewrite it as:


$$\left(a^2+5 b^2\right)^2+10 (a b)^2=161$$ $$4 a b \left(a^2+5 b^2\right)=-72$$


Then letting $p = a^2 + 5b^2$ and $q = ab$ you get


$$4 p q=-72$$ $$p^2+10 q^2=161$$


However, solving this yields messy roots. Am I going on the right path?



Answer



$$\sqrt[4]{161-72\sqrt5}=\sqrt[4]{81-72\sqrt5+80}=\sqrt[4]{(9-4\sqrt{5})^2}=\sqrt{9-4\sqrt{5}}=\sqrt{4-4\sqrt{5}+5}=\sqrt{(2-\sqrt{5})^2}=|2-\sqrt5|=\sqrt5-2$$ The trick is to notice that $72$ factors as $2\cdot 9\cdot 4$, and since $9^2+(4\sqrt5)^2=161$, the expression under the fourth root is a perfect square.
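A quick floating-point check of the denesting (not a proof, but a good safeguard against sign slips):

```python
from math import sqrt

lhs = (161 - 72*sqrt(5)) ** 0.25
rhs = sqrt(5) - 2
print(lhs, rhs)   # both about 0.23607

# The two intermediate perfect squares used in the chain:
assert abs((161 - 72*sqrt(5)) - (9 - 4*sqrt(5))**2) < 1e-12
assert abs((9 - 4*sqrt(5)) - (2 - sqrt(5))**2) < 1e-12
```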


Thursday, November 24, 2016

elementary set theory - How do i prove that a given graph is a function?




Define a graph $\phi$ as;



$\phi(x)=|x|, \forall x\in [-1,1]$ and $\phi(x+2)=\phi(x)$.



I'm trying to prove that $\phi$ is a function, but it seems something is wrong in my argument.



It's not hard to see that "$\forall x\in \mathbb{R}, \exists y\in[0,1]$ such that $(x,y)\in \phi$"



I'm trying to prove that $x_1=x_2 \Rightarrow \phi(x_1)=\phi(x_2)$ and here's my argument below.




=============
Suppose $x_1=x_2$.



Then for each $i$ there exists a unique $n_i\in\mathbb{Z}$ such that $n_i \le \frac{x_i + 1}{2} < n_i + 1$, i.e. $x_i - 2n_i \in [-1,1)$, so that $\phi(x_i)=|x_i - 2n_i|$. Since $x_1 = x_2$, uniqueness gives $n_1 = n_2$.

Thus $\phi(x_1)=|x_1 - 2n_1|=|x_2 - 2n_2|=\phi(x_2)$. Q.E.D



===============




I'm sure something's wrong in my argument, but don't know what it is..



(This argument works even when $\phi(1)\neq 1$, so something's wrong here..)


Answer



Every real number is congruent to a unique real number in $(-1,1]$. (Note the open parenthesis on the left).



So if $\phi$ had been defined by $\phi(x)=|x|$ on $(-1,1]$, and $\phi(x+2)=\phi(x)$, there would be no issue whatsoever. (In principle one would need an easy induction to show that $\phi(x+2n)=\phi(x)$.)



But because $\phi(x)$ was defined as $|x|$ on the closed interval $[-1,1]$, we need to check that $\phi(x)$ is well-defined at odd integers.




The issue is that the definition, for example, simultaneously defines $\phi(5)$ as $\phi(-1)$ and as $\phi(1)$. But since $|-1|=|1|$, there is no problem.
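The well-definedness check can also be seen concretely. A small sketch (the closed-form choice of representative via $n = \lfloor (x+1)/2 \rfloor$ is mine):

```python
from math import floor

def phi(x):
    """|x| on [-1, 1], extended 2-periodically, using the representative
    x - 2n with n = floor((x + 1) / 2), which lies in [-1, 1)."""
    n = floor((x + 1) / 2)
    return abs(x - 2*n)

# 5 is congruent to both -1 and 1 mod 2; since |-1| = |1|, all agree:
print(phi(5), phi(-1), phi(1))          # all equal 1
print(phi(0.25), phi(2.25), phi(-1.75)) # all equal 0.25
```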


random variable takes only rational values with probability one



I have found an old exercise that seems very interesting: let $X_{1},X_{2},...$ be i.i.d. Bernoulli random variables with $\mathbb{P}(X_{n}=1) = \mathbb{P}(X_{n}=0) = 1/2$. Define $S_{n} := X_{1} + ... + X_{n}$. It is to show that the random variable



$$
M := \sup\limits_{n \in \mathbb{N}}\frac{S_{n}}{n}
$$
with probability $1$ only takes rational values in the interval $(1/2,1]$. Does anyone have an idea how to prove it?



Answer




Lemma Let $f: \mathbb{N} \to \mathbb{Q} \cap [0,1]$ be a mapping such that $\lim_{n \to \infty} f(n) = c \in (0,1)$ exists. Then $$M := \sup_{n \in \mathbb{N}} f(n)$$ satisfies $$M \in \{c\} \cup (\mathbb{Q} \cap [c,1]).$$




Proof: Since $\lim_{n \to \infty} f(n) = c$ we clearly have $M \geq c$. If $M=c$ we are done, and therefore we will from now on assume that $M>c$. Since $\lim_{n \to \infty} f(n)=c < M$, only finitely many of the values $f(n)$ exceed $\frac{c+M}{2}$, so the supremum is attained: $M = f(n_0)$ for some $n_0 \in \mathbb{N}$, and hence $M \in \mathbb{Q} \cap [c,1]$. $\square$




By the strong law of large numbers, we can apply the above lemma to $f(n) := S_n(\omega)/n$ with $c:=1/2$. This proves the assertion.



calculus - Can someone please simplify this, please.


After solving my previous question, click here for question page, I tried to go up a notch and complicate the question just a bit further, turns out $\int e^x\sin(x)\cos(x)dx$ is much more different than $\int xe^x\sin(x)dx$. Much much much much much more. It's extremely longer (at least it was the way I went about it), after a long excruciating amount of time, I had gotten my final result, which needed simplifying. Here it is without simplying: $$ I = \frac{1}{2}e^x(\sin(x)+\cos(x))-\frac{1}{2}(I + \frac{1}{2}e^x(x+\frac{1}{2}\sin(2x))-\frac{1}{2}(xe^x-e^x)+\frac{1}{20}e^x(\sin(2x)-2\cos(2x))) + C $$


and here is what I got after attempting to simplify it, keep in mind I'm not even sure if this is correct. $$ 20I =e^x(10(\sin(x)+\cos(x)-x+\frac{5}{4}(\sin(2x) -x)) +\sin(2x)-2\cos(x)) - \frac{I}{2} +C$$


I really hope I didn't mess it up too much; if I've made a mistake in any of the things I've posted, please call me out on it. I didn't know what else to do after I tried to simplify it. I know this is a lot to ask, and I'm sure you have better things to do, but if by chance of a miracle you feel like simplifying an extremely long equation, it would be really appreciated, and if not, then thank you for reading this far anyway. Thanks in advance. (Sorry for babbling on)


Answer



$\int e^x\sin(x)\cos(x)\,dx = \dfrac{1}{2}\int e^x\sin(2x)\, dx$



Then use integration by parts. You can easily solve the integral this way; there is no need for the long simplification.
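Following the hint, the standard result $\int e^{x}\sin(2x)\,dx = \tfrac{1}{5}e^{x}(\sin 2x - 2\cos 2x) + C$ gives $\int e^x\sin(x)\cos(x)\,dx = \tfrac{1}{10}e^{x}(\sin 2x - 2\cos 2x) + C$. A quick check that this candidate antiderivative differentiates back to the integrand:

```python
from math import exp, sin, cos

def integrand(x):
    return exp(x) * sin(x) * cos(x)

def F(x):
    # Candidate antiderivative from the hint:
    # (1/2) * integral of e^x sin(2x) dx = e^x (sin 2x - 2 cos 2x) / 10
    return exp(x) * (sin(2*x) - 2*cos(2*x)) / 10

# Verify F' = integrand via a central difference quotient:
h = 1e-6
errors = [abs((F(x + h) - F(x - h)) / (2*h) - integrand(x))
          for x in (-1.0, 0.3, 2.0)]
print(max(errors))   # tiny
```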


abstract algebra - Simple proof for independence of square roots of two primes



Consider the following problem:



Let $p$ and $q$ be two distinct prime numbers. Show that $\sqrt{p}$ and $\sqrt{q}$ are independent over $\mathbb{Q}$, which means that:


$\forall a,b \in \mathbb{Q}: a\sqrt{p} + b\sqrt{q} = 0 \Rightarrow a = b = 0$



I'm well aware how to prove this for a sequence $p_i$ of primes and thus a sequence $\sqrt{p_i}$ of prime roots using Galois theory, but I want to show some students a very elementary and short proof for just two prime roots. Those students are only at the beginning of an elementary algebra course and did not learn about anything like field traces. Is this possible?


Answer



I wanted to construct a proof of this using as elementary means as possible, avoiding if at all feasible "big gun" results such as the fundamental theorem of arithmetic, which in the following has been supplanted by repeated application of Bezout's identity:


If $\sqrt p$ and $\sqrt q$ are dependent over $\Bbb Q$, they satisfy a relation of the form


$r\sqrt p + s\sqrt q = 0, \; 0 \ne r, s \in \Bbb Q; \tag 0$



by clearing the denominators of $r$ and $s$ we find there exist $0 \ne a, b \in \Bbb Z$ with


$a\sqrt p + b\sqrt q = 0, \tag 1$


and we may clearly further assume


$\gcd(a, b) = 1; \tag 2$


from (1) we have, upon multiplication by $\sqrt p$,


$ap + b\sqrt{pq} = 0, \tag 3$


whence


$ap = -b\sqrt{pq}; \tag 4$


we square:


$a^2 p^2 = b^2 pq, \tag 5$



and divide through by $p$:


$a^2 p = b^2 q \Longrightarrow p \mid b^2 q; \tag 6$


now since $p, q \in \Bbb P$ are distinct, $p \ne q$, we have


$\gcd(p, q) = 1, \tag 7$


and thus


$\exists x, y \in \Bbb Z, \; xp + yq = 1, \tag 8$


which further implies


$xpb^2 + yqb^2 = b^2 \Longrightarrow p \mid b^2, \tag 9$


since


$p \mid pb^2, \; p \mid qb^2; \tag{10}$



now with $p \in \Bbb P$,


$p \not \mid b \Longrightarrow \gcd(p, b) = 1, \tag{11}$


whence


$\exists z, w \in \Bbb Z, \; zp + wb = 1, \tag{12}$


and so


$zpb + wb^2 = b \Longrightarrow p \mid b \Rightarrow \Leftarrow p \not \mid b, \tag{13}$


as assumed in (11); thus in fact


$p \mid b \Longrightarrow \exists c \in \Bbb Z, \; b = pc \Longrightarrow b^2 = p^2c^2, \tag{14}$


and thus (6) becomes


$a^2 p = c^2p^2 q \Longrightarrow a^2 = c^2pq \Longrightarrow p \mid a^2; \tag{15}$



now repeating in essence the argument of (11)-(13) proves that $p \mid a$, which is of course precluded by (2), lest $p \mid \gcd(a, b) = 1$.


We thus see that there can be no relation of the form (0) for $p, q \in \Bbb P$ distinct; $\sqrt p$ and $\sqrt q$ are independent over $\Bbb Q$.


The informed reader, upon careful scrutiny, will note that this demonstration also has much in common with the classic proof that $\sqrt 2 \notin \Bbb Q$, which truth be told inspired my conception of this answer.


Wednesday, November 23, 2016

calculus - how to integrate $\int_{-\infty}^{+\infty} \frac{\sin(x)}{x} \,dx$?






How can I do this integration using only calculus? (not laplace transforms or complex analysis)


$$ \int_{-\infty}^{+\infty} \frac{\sin(x)}{x} \,dx $$


I searched for solutions not involving laplace transforms or complex analysis but I could not find.


Answer



Putting rigor aside, we may do like this: $$\begin{align*} \int_{-\infty}^{\infty} \frac{\sin x}{x} \; dx &= 2 \int_{0}^{\infty} \frac{\sin x}{x} \; dx \\ &= 2 \int_{0}^{\infty} \sin x \left( \int_{0}^{\infty} e^{-xt} \; dt \right) \; dx \\ &= 2 \int_{0}^{\infty} \int_{0}^{\infty} \sin x \, e^{-tx} \; dx dt \\ &= 2 \int_{0}^{\infty} \frac{dt}{t^2 + 1} \\ &= \vphantom{\int}2 \cdot \frac{\pi}{2} = \pi. \end{align*}$$ The defects of this approach are as follows:


  1. Interchanging the order of two integral needs justification, and in fact this is the hardest step in this proof. (There are several ways to resolve this problem, though not easy.)

  2. It is nothing but a disguise of Laplace transform method. So this calculation contains no new information on the integral.
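In the same non-rigorous spirit, the key inner identity above, $\int_{0}^{\infty} \sin x \, e^{-tx}\,dx = \frac{1}{t^2+1}$, is easy to confirm numerically. A rough sketch (the truncation point and step count are my choices; for $t \ge 1/2$ the tail beyond $x=60$ is negligible):

```python
from math import exp, sin

def laplace_sin(t, upper=60.0, steps=200_000):
    """Trapezoidal approximation of the integral of sin(x) e^{-t x} over [0, upper]."""
    h = upper / steps
    total = 0.5 * (0.0 + sin(upper) * exp(-t * upper))  # endpoint terms
    for i in range(1, steps):
        x = i * h
        total += sin(x) * exp(-t * x)
    return total * h

for t in (0.5, 1.0, 2.0):
    print(t, laplace_sin(t), 1.0 / (t*t + 1.0))   # the two columns agree
```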

Proving limit doesn't exist using the $\epsilon$-$\delta$ definition




I want to find out $\displaystyle\lim_{x \to +\infty} x \sin x$. Now, this doesn't exist, but I'm not sure how to transform the definition of limit to something that lets me prove that the limit doesn't exist. This is the definition I use, for the record:




We say that $\displaystyle\lim_{x \to +\infty} f(x) = l$ if $\forall \epsilon > 0, \exists M > 0$ such that $x > M \implies |f(x)-l| < \epsilon$.




This isn't exactly $\epsilon-\delta$, it's more like $\epsilon - M$, but it's the same idea. My problem is: how to use this to prove that the limit doesn't exist? I know that I would have to begin like this:





We say that $\displaystyle\lim_{x \to +\infty} f(x)$ doesn't exist if
$\exists \epsilon>0$ such that $\forall M > 0$ . . .




And I don't know how to continue.



Edit: I want to clarify something: while I am indeed trying to prove the nonexistence of $\displaystyle\lim_{x \to +\infty} x \sin x$, the point of this question was to be able to use the definition to prove the nonexistence of any limit, not just this one.


Answer



$\lim\limits_{x\rightarrow\infty} f(x)\ne L$ would mean that there is an $\epsilon>0$ such that for any $M>0$, there is an $x>M$ so that $|f(x)-L|\ge \epsilon$.




To use the above to show that $\lim\limits_{x\rightarrow\infty} f(x)$ does not exist, you would have to show that $\lim\limits_{x\rightarrow\infty} f(x)\ne L$ for any number $L$.



For your purposes, with $f(x)=x\sin x$, let $L$ be any number. We will show that $\lim\limits_{x\rightarrow\infty} f(x)\ne L$. Towards this end, take $\epsilon=1$. Now fix a value of $M$. Using Alex's answer, you can find an $x>M$ so that $|f(x)-L|\ge1$.



Thus $\lim\limits_{x\rightarrow\infty} f(x)$ does not exist.



(The limit might be infinite (it isn't, see Alex's answer again); but this is another matter...)


Tuesday, November 22, 2016

sequences and series - How to prove $\sum_{k=1}^n \cos(\frac{2 \pi k}{n}) = 0$ for any $n>1$?



I can show for any given value of n that the equation



$$\sum_{k=1}^n \cos(\frac{2 \pi k}{n}) = 0$$


is true and I can see that geometrically it is true. However, I can not seem to prove it out analytically. I have spent most of my time trying induction and converting the cosine to a sum of complex exponential functions


$$\frac{1}{2}\sum_{k=1}^n [\exp(\frac{i 2 \pi k}{n})+\exp(\frac{-i 2 \pi k}{n})] = 0$$


and using the formula for finite geometric series


$$S_n = \sum_{k=1}^n r^k = \frac{r(1-r^n)}{(1-r)}$$


I have even tried the suggestion I have seen on the net of pulling out a factor of $\exp(i \pi k)$, but I have still not gotten zero.


Please assist.


Answer



We have $$\sum_{k=1}^{n}\cos\left(\frac{2\pi k}{n}\right)=\textrm{Re}\left(\sum_{k=1}^{n}e^{2\pi ik/n}\right) $$ and so $$\sum_{k=1}^{n}e^{2\pi ik/n}=\frac{e^{2\pi i/n}\left(1-e^{2\pi i}\right)}{1-e^{2\pi i/n}} $$ and notice that $$e^{2\pi i}=\cos\left(2\pi\right)+i\sin\left(2\pi\right)=1 $$ so the claim follows.
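A quick numerical check across several $n$ (not a proof, since floating-point sums only vanish up to rounding):

```python
from math import cos, pi

for n in range(2, 12):
    s = sum(cos(2*pi*k/n) for k in range(1, n + 1))
    print(n, s)   # zero up to rounding error
```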


elementary number theory - highest power of prime $p$ dividing $\binom{m+n}{n}$


How to prove the theorem stated here.



Theorem. (Kummer, 1854) Let $p$ be a prime. The highest power of $p$ that divides the binomial coefficient $\binom{m+n}{n}$ is equal to the number of "carries" when adding $m$ and $n$ in base $p$.



So far, I know that if $m+n$ is expanded in base $p$ as $$m+n= a_0 + a_1 p + \dots +a_k p^k$$ and $m$ has coefficients $\{ b_0 , b_1 , \dots, b_i\}$ and $n$ has coefficients $\{c_0, c_1 ,\dots , c_j\}$ in base $p$, then the highest power of $p$ that divides $\binom{m+n}{n}$ can be expressed as $$e = \frac{(b_0 + b_1 + \dots + b_i )+ (c_0 + c_1 + \dots + c_j )-(a_0 + a_1 + \dots + a_k )}{p-1}.$$ It follows from page $4$ here. But how does it relate to the number of carries? I am not able to make the connection. Perhaps I am not understanding something very fundamental about addition.


Answer



If $b_{0} + c_{0} < p$, then $a_{0} = b_{0} + c_{0}$, there are no carries, and the term $$ b_{0} + c_{0} - a_{0} = 0 $$ does not contribute to your $e$.


If $b_{0} + c_{0} \ge p$, then $a_{0} = b_{0} + c_{0} - p$, and this time $b_{0} + c_{0} - a_{0}$ gives a contribution of $p$ to the numerator of $e$. Plus, there is a contribution of $1$ to $a_{1}$, so the net contribution to the numerator of $e$ is $p -1$, and that to $e$ is $1$. Repeat.



As mentioned by Jyrki Lahtonen in his comment (which appeared while I was typesetting this answer), you may have propagating carries, but this is the basic argument.
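Kummer's statement itself is easy to test by machine, which may help build intuition for the carry-by-carry argument. A small sketch (helper names are mine), using Python's `math.comb`:

```python
from math import comb

def carries(m, n, p):
    """Count the carries when adding m and n in base p."""
    count, carry = 0, 0
    while m or n or carry:
        carry = 1 if (m % p + n % p + carry) >= p else 0
        count += carry
        m //= p
        n //= p
    return count

def p_adic_valuation(x, p):
    """Exponent of the highest power of p dividing x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

# Kummer: v_p(C(m+n, n)) equals the number of base-p carries in m + n.
for p in (2, 3, 5, 7):
    for m in range(1, 40):
        for n in range(1, 40):
            assert carries(m, n, p) == p_adic_valuation(comb(m + n, n), p)
print("verified")
```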


complex analysis - $\int_{0}^{\infty}\frac{dx}{1+x^n}$



My goal is to evaluate $$\int_{0}^{\infty}\frac{dx}{1+x^n}\;\;(n\in\mathbb{N},n\geq2).$$




Here is my approach:



Clearly, the integral converges.



Denote the value of the integral by $I_n$.



Now let $\gamma_R$ describe the section of a circle which goes from the origin to $R$ to $Re^{2\pi i/n}$ and back to the origin.



If we let $C_R$ denote the relevant circular arc, then
$$\left|\int_{C_R}\frac{dz}{1+z^n}\right|\leq \left(\frac{2\pi R}{n}\right)\left(\frac{1}{R^{n}-1}\right)\rightarrow0\;\;\;as\;\;R\rightarrow\infty.$$




Furthermore, $$\int_{[R,Re^{2\pi i/n}]}\frac{dz}{1+z^n}=\int_{R}^{0}\frac{e^{2\pi i/n}dr}{1+r^n}.$$



Hence $$\lim_{R\rightarrow\infty}\int_{\gamma_R}\frac{dz}{1+z^n}=\lim_{R\rightarrow\infty}\left(\int_{[0,R]}\frac{dz}{1+z^n}+\int_{[R,Re^{2\pi i/n}]}\frac{dz}{1+z^n}+\int_{C_R}\frac{dz}{1+z^n}\right)=(1-e^{2\pi i/n})I_n\;\;\;(1).$$



Thus if we can obtain the value of $\int_{\gamma_R}\frac{dz}{1+z^n}$ we can evaluate $I_n$.



Now the zeroes of $1+z^n$ are of the form $z=e^{i\pi/n+2\pi i m/n}\;\;(m\in\mathbb{N})$ from which it is clear that the only zero which lies within the contour occurs at $z=e^{i\pi/n}$ with multiplicity 1.
So all that remains to be done is to evaluate the residue of $\frac{1}{1+z^n}$ at $z=e^{i\pi/n}$.




However, if $z=e^{i\pi/n}u$ and $u\neq1$, we have
$$\frac{z^n+1}{z-e^{i\pi/n}}=\frac{1-u^n}{-e^{i\pi/n}(1-u)}
=-e^{-i\pi/n}\sum_{m=0}^{n-1}u^m\;\;\;(2).$$



In particular, (2) implies $$Res_{z=e^{i\pi/n}}\frac{1}{1+z^n}=-\frac{e^{i\pi/n}}{n}\;\;\;(3).$$



Finally, (1) and (3) imply
$$I_n=\frac{2\pi i (Res_{z=e^{i\pi/n}}\frac{1}{1+z^n})}{1-e^{2\pi i/n}}=\frac{-2\pi ie^{i\pi/n}}{n(1-e^{2\pi i/n})}=\frac{\pi/n}{\sin(\pi/n)}.$$



I have three questions:




One, is my method correct?



Two, is there a simpler/different method to evaluate the integral?



Three, is there an easier way to evaluate the residue of $\frac{1}{1+z^n}$ at $z=e^{i\pi/n}$?


Answer



Here is a different way. Let's more generally find the Mellin transform.



Consider $$I(\alpha,\beta)=\int_{0}^{\infty}\frac{u^{\alpha-1}}{1+u^{\beta}}du=\mathcal{M}\left(\frac{1}{1+u^{\beta}}\right)(\alpha)$$ Let $x=1+u^{\beta}$ so that $u=(x-1)^{\frac{1}{\beta}}$. Then we have $$I(\alpha,\beta)=\frac{1}{\beta}\int_{1}^{\infty}\frac{(x-1)^{\frac{\alpha-1}{\beta}}}{x}(x-1)^{\frac{1}{\beta}-1}dx.$$ Setting $x=\frac{1}{v}$ we obtain $$I(\alpha,\beta)=\frac{1}{\beta}\int_{0}^{1}v^{-\frac{\alpha}{\beta}}(1-v)^{\frac{\alpha}{\beta}-1}dv=\frac{1}{\beta}\text{B}\left(-\frac{\alpha}{\beta}+1,\ \frac{\alpha}{\beta}\right).$$




Using the properties of the Beta and Gamma functions, this equals $$\frac{1}{\beta}\frac{\Gamma\left(1-\frac{\alpha}{\beta}\right)\Gamma\left(\frac{\alpha}{\beta}\right)}{\Gamma(1)}=\frac{\pi}{\beta\sin\left(\frac{\pi\alpha}{\beta}\right)}.$$



Your question is the case where $\alpha =1$.



Also see Chandru's answer on a different thread. It is another nice solution, along the lines of what you did above. (See this previous question, where both solutions can be found)



Hope that helps,
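As a numerical cross-check of the closed form $I_n = \frac{\pi/n}{\sin(\pi/n)}$ (a rough sketch; the truncation at $x=200$ leaves a tail of at most $200^{1-n}/(n-1)$, which is about $0.005$ for $n=2$ and negligible for larger $n$):

```python
from math import pi, sin

def integral(n, upper=200.0, steps=200_000):
    """Trapezoidal approximation of the integral of 1/(1+x^n) over [0, upper]."""
    h = upper / steps
    total = 0.5 * (1.0 + 1.0 / (1.0 + upper**n))  # endpoint terms
    for i in range(1, steps):
        x = i * h
        total += 1.0 / (1.0 + x**n)
    return total * h

for n in (2, 3, 4, 6):
    print(n, integral(n), (pi / n) / sin(pi / n))   # the two columns agree
```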


algebra precalculus - Why the equation $3\cdot0=0$ needs to be proven



In Algebra by Gelfand Page 21 ( for anyone owning the book).
He tries to prove that: $3\cdot(-5) + 15 = 0$.
Here's his proof: $3\cdot(-5) + 15 = 3\cdot(-5) + 3\cdot5 = 3\cdot(-5+5) = 3\cdot0 = 0$. After that he said:




The careful reader will ask why $3\cdot0 = 0$.




Why does this equation need to be proven?
I asked somewhere and was told that $a\cdot0=0$ is an axiom which maybe Gelfand didn't assume was true during his proof.
But why does it need to be an axiom, when it's provable?
In the second step of his proof he converted $15$ to $3\cdot5$, so multiplication was defined, and
$a\cdot0 = \underbrace{0 + 0 + \cdots + 0}_{a\text{ times}} = 0$.
I'm aware multiplication is defined as repeated addition only for integers,
but $3$ is an integer so this definition works in my example.




In case my question wasn't clear it can be summed up as:
Why he takes $3\cdot5=15$ for granted but thinks $3\cdot0=0$ needs an explanation?


Answer



Gelfand doesn't really take $3 \cdot 5 = 15$ for granted; in the ordinary course of events, this would need just as much proof as $3 \cdot 0$.



But the specific value $15$ isn't important here; we're really trying to prove that if $3 \cdot 5 = 15$, then $3 \cdot (-5) = -15$. That is, we want to know that making one of the factors negative makes the result negative. If you think of this proof as a proof that $3 \cdot (-5) = -(3 \cdot 5)$, then there's no missing step.



The entire proof could be turned into a general proof that $x \cdot (-y) = -(x\cdot y)$ with no changes; I suspect that the authors felt that this would be more intimidating than using concrete numbers.



If we really cared about the specific value of $3 \cdot 5$, we would need proof of it. But to prove that $3 \cdot 5 = 15$, we need to ask: how are $3$, $5$, and $15$ defined to begin with? Probably as $1+1+1$, $1+1+1+1+1$, and $\underbrace{1+1+\dots+1}_{\text{15 times}}$, respectively, in which case we need the distributive law to prove that $3 \cdot 5 = 15$. Usually, we don't bother, because usually we don't prove every single bit of our claims directly from the axioms of arithmetic.




Finally, we don't usually make $x \cdot 0 = 0$ an axiom. For integers, if we define multiplication as repeated addition, we could prove it as you suggest. But more generally, we can derive it from the property that $x + 0 = x$ (which is usually taken as a definition of what $0$ is) and the other laws of multiplication and addition given in this part of the textbook.
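For completeness, the derivation of $x \cdot 0 = 0$ from $x + 0 = x$, distributivity, and the existence of additive inverses takes only two steps:

```latex
\begin{aligned}
x \cdot 0 &= x \cdot (0 + 0) = x \cdot 0 + x \cdot 0
  && \text{(using } 0 + 0 = 0 \text{ and distributivity)} \\
0 &= x \cdot 0
  && \text{(adding } -(x \cdot 0) \text{ to both sides)}
\end{aligned}
```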


Monday, November 21, 2016

algebra precalculus - Evaluate $\lim_{x \to 0} \frac{\tan(\tan x)-\sin(\sin x)}{\tan x-\sin x}$

Evaluate $$\lim_{x \to 0}\frac{\tan(\tan x)-\sin(\sin x)}{\tan x-\sin x}$$




First I tried using L'Hopital's rule..but it's very lengthy



Next I have written the limits as
$$L=\lim_{x \to 0}\frac{\tan(\tan x)-\sin(\sin x)}{\tan x-\sin x}=\frac{\lim_{x\to 0}\frac{\tan(\tan x)-\sin(\sin x)}{x^3}}{\lim_{x \to 0}\frac{\tan x-\sin x}{x^3}}=\frac{L_1}{L_2}$$



Now by L'Hopital's Rule we get $L_2=0.5$



$$L_1=\lim_{x\to 0}\frac{\tan(\tan x)-\sin(\sin x)}{x^3}$$



Now $L_1$ can also be evaluated using three applications of L'Hopital's Rule, but is there any other approach?
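One cheap alternative to repeated L'Hopital: expanding to third order, $\tan(\tan x) = x + \tfrac{2}{3}x^3 + O(x^5)$ and $\sin(\sin x) = x - \tfrac{1}{3}x^3 + O(x^5)$, so the numerator is $x^3 + O(x^5)$, giving $L_1 = 1$ and hence $L = L_1/L_2 = 2$. A numerical check (not a proof):

```python
from math import sin, tan

def ratio(x):
    return (tan(tan(x)) - sin(sin(x))) / (tan(x) - sin(x))

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))   # approaches 2 as x -> 0
```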

functions - Functional equation on $\mathbb{R}^+$



Let $f:\mathbb{R}^+\rightarrow\mathbb{R}^+$ be a function satisfying
$$f(x)f(yf(x))=f(x+y),\forall x,y\in\mathbb{R}^+$$

If $f(1)=\frac{1}{152}$, evaluate $f(4)$.






By inspection, we can see $f(x)=\frac{1}{151x+1}$ is a solution, from which we can easily get the answer. But how can we show that this is the only solution?



Here is my work:



Because $f$ is the reciprocal of a linear function, it would probably help to define $g(x)=\frac{1}{f(x)}$ (note that this is well defined as we are working in $\mathbb{R}^+$). Then the given equation becomes
$$g(x)g\left(\frac{y}{g(x)}\right)=g(x+y)$$




If we take $y=g(x)$, then this becomes
$$g(x)g(1)=g(x+g(x))\implies 152g(x)=g(x+g(x))$$



Not sure where to go from here. Any thoughts?


Answer



I'm going to proceed with your $g(x) = 1/f(x)$, starting with this
$$g(x)g\left(\frac{y}{g(x)}\right)=g(x+y)$$
but then re-setting in it $y = g(x)y_1$ (possible for any positive $y_1$) to get this basic equation
$$\tag1
g(x)g(y)=g(x+g(x)y)$$
(in which I renamed $y_1$ back to $y$). Applying this first with $x=1$ and then with some other $x = w > 1$, letting $B := g(w)$, gives
$$
g(1+152y) = 152g(y)\\
g(w+By) = Bg(y)
$$
Suppose $B < 152$. We can then pick a positive $y$ such that $1+152y = w + By$ so that the left-hand sides are equal, but the right-hand sides are not (since $g(y) > 0$) - contradiction. So $w > 1$ implies $g(w) \geq 152$.



Now consider $u = w + B(1+152y_0)$ and $v = 1+152(w+By_0)$ for some arbitrary $y_0 > 0$:
$$
g(u) = g(w + B + 152By_0) = Bg(1+152y_0) = 152Bg(y_0)\\
g(v) = g(1+ 152w +152By_0) = 152g(w+By_0) = 152Bg(y_0)
$$
Since the right-hand-sides are equal, $g(u) = g(v)$. This means that either $w+B = 1+152w$, meaning that
$$\tag2
g(w) = B = 1+151w
$$
as we wanted to prove, or, $g(u) = g(v)$ with $u \neq v$. We derive a contradiction in the latter case. WLOG, suppose $u < v$. Then setting in (1) $x = u, y = (v-u)/g(u) =: t$, we get
$$
g(u)g(t) = g(u + g(u)t) = g(v)\\
\therefore g(t) = 1
$$
Now set in (1) $x = t$, to get $g(t+y) = g(y)$ and since $g(t) = 1$, it follows by induction that $g(nt) = 1$ for any positive integer $n$. By choosing a sufficiently large $n$, we can make $nt > 1, g(nt) = 1$ which is a contradiction with the previous result that $w > 1 \Rightarrow g(w) \geq 152$. This leaves (2) as the unique solution.
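As a sanity check on the conclusion, one can verify numerically that $f(x) = \frac{1}{151x+1}$ satisfies the functional equation and gives $f(4) = \frac{1}{605}$:

```python
import random

def f(x):
    # Candidate solution: f(x) = 1 / (151 x + 1), so that f(1) = 1/152
    return 1.0 / (151.0 * x + 1.0)

random.seed(0)
for _ in range(1000):
    x = random.uniform(0.01, 10.0)
    y = random.uniform(0.01, 10.0)
    # The functional equation f(x) f(y f(x)) = f(x + y):
    assert abs(f(x) * f(y * f(x)) - f(x + y)) < 1e-12

print(f(4.0), 1 / 605)   # f(4) = 1/605
```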


functional equations - Let $f$ be a continuous function defined on $\mathbb R$ such that $\forall x,y \in \mathbb R: f(x+y)=f(x)+f(y)$

Let $f$ be a continuous function defined on $\mathbb R$ such that $\forall x,y \in \mathbb R :f(x+y)=f(x)+f(y)$.



Prove that :

$$\exists a\in \mathbb R , \forall x \in \mathbb R, f(x)=ax$$

limits - Proof that $\lim_{x\to0}\frac{\sin x}x=1$

Is there any way to prove that
$$\lim_{x\to0}\frac{\sin x}x=1$$
only multiplying both numerator and denominator by some expression?

I know how to find this limit using derivatives, L'Hopital's rule, Taylor series and inequalities. The reason I tried to find it by only multiplying numerator and denominator by something and then canceling out indeterminate terms is that most other limits can be solved using this method.

This is an example:
$$\begin{align}\lim_{x\to1}\frac{\sqrt{x+3}-2}{x^2-1}=&\lim_{x\to1}\frac{x+3-4}{\left(x^2-1\right)\left(\sqrt{x+3}+2\right)}\\=&\lim_{x\to1}\frac{x-1}{(x+1)(x-1)\left(\sqrt{x+3}+2\right)}\\=&\lim_{x\to1}\frac{1}{(x+1)\left(\sqrt{x+3}+2\right)}\\=&\frac{1}{(1+1)\left(\sqrt{1+3}+2\right)}\\=&\frac18\end{align}$$
It is obvious that we firtly multiplied numerator and denominator by $\sqrt{x+3}+2$ and then canceled out $x-1$. So, in this example, we can avoid indeterminate form multiplying numerator and denominator by
$$\frac{\sqrt{x+3}+2}{x-1}$$
My question is can we do the same thing with $\frac{\sin x}x$ at $x\to0$? I tried many times, but I failed every time. I searched on the internet for something like this, but the only thing I found is geometrical approach and proof using inequalities and derivatives.

Edit
I have read this question before asking my own. The reason is because in contrast of that question, I do not want to prove the limit using geometrical way or inequalities.

Sunday, November 20, 2016

trigonometry - Polar coordinates and extremes of integration

I have trouble understanding how to find the extremes of integration of $\theta$ when I pass to polar coordinates.



1° example - Let $(X,Y)$ be a random vector with density $f(x,y)=\frac{1}{2\pi}e^{-\frac{(x^2+y^2)}{2}}$.




Using the transformation $g=\left\{\begin{matrix}
x=r\cos\theta\\
y=r\sin\theta
\end{matrix}\right.$
and after calculating the determinant of the Jacobian matrix, I have $dx\,dy=r\,dr\,d\theta$, from which



$\mathbb{E}[g(X^2+Y^2)]=\int_{\mathbb{R}^2}g(x^2+y^2)f(x,y)\,dx\,dy=\frac{1}{2\pi}\int_{0}^{+\infty}g(r^2)e^{-\frac{r^2}{2}}\,r\,dr\int_{0}^{2\pi}d\theta$
$\Rightarrow X^2+Y^2\sim Exp(\frac{1}{2})$
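This conclusion can be checked by simulation (a sketch of my own, not part of the derivation): if $X,Y$ are independent standard normals, then $X^2+Y^2\sim Exp(\frac12)$, which has mean $2$ and $P(X^2+Y^2>2)=e^{-1}\approx 0.368$.

```python
import random

# Monte Carlo sanity check: sample X^2 + Y^2 for independent standard
# normals and compare the mean and a tail probability with Exp(1/2).
random.seed(0)
n = 200_000
samples = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(n)]
mean = sum(samples) / n           # should be close to 2
tail = sum(s > 2 for s in samples) / n  # should be close to exp(-1)
print(mean, tail)
```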



2° example - Why, for $\int\int_{B}\frac{y}{x^2+y^2}\,dx\,dy$ with $B$ the annulus of centre $(0,0)$ and radii $1$ and $2$, are the extremes of integration of $\theta$ equal to $(0,\pi)$?




3° example - Why, for $\int\int_{B}\sqrt{x^2+y^2}\,dx\,dy$ with $B$ a sector of the circle of centre $(0,0)$ and radii $1$ and $2$, are the extremes of integration of $\theta$ equal to $(0,\frac{\pi}{2})$?



4° example - Why, for $\int\int_{S}(x-y)\,dx\,dy$ with $S=\{(x,y)\in \mathbb{R}^2 : x^2+y^2\le r^2,\ y\geq 0\}$, are the extremes of integration of $\theta$ equal to $(0,\pi)$?



I hope I have made clear my difficulties.
Thanks in advance for any answer!

discrete mathematics - How to find the numbers of Bezout identity for two numbers

I'm having trouble finding two numbers $a,b$ such that $$288a+177b=3=\gcd(177,288) \tag{1}$$



I've been writing the equations of Euclid's algorithm one over another many times to get a pair that satisfies (1), but I still don't get it. I'm actually trying to solve $288x + 177y = 69$. I understand the theorem very well, but I really need help finding a particular solution. If someone can explain a method, or give me advice for finding a pair $(a,b)$, I would really appreciate it. Thanks for reading,
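For what it's worth, here is one concrete back-substitution pass through Euclid's algorithm for $288$ and $177$ (my own computation, worth double-checking):
$$288 = 1\cdot 177 + 111,\quad 177 = 1\cdot 111 + 66,\quad 111 = 1\cdot 66 + 45,\quad 66 = 1\cdot 45 + 21,\quad 45 = 2\cdot 21 + 3,\quad 21 = 7\cdot 3.$$
Unwinding from $3 = 45 - 2\cdot 21$ upward gives $3 = 8\cdot 288 - 13\cdot 177$, i.e. $(a,b)=(8,-13)$ in (1); multiplying through by $23$ then yields $288\cdot 184 + 177\cdot(-299) = 69$.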


Greetings!

sequences and series - Convergence/divergence of the limit of a sum

In order to prove asymptotic normality of the time series OLS estimator (this context is not important), I have done the following:


Assumptions:



• $\mathrm{var}\left(y_{t}\right)=\gamma_{0}\quad\forall t$


• $ \mathrm{cov}\left(y_{t},y_{t-j}\right)=\gamma_{j}\in\mathbb{R}^{k}\quad\forall j$


• $\underset{j\rightarrow\infty}{\lim}\mathrm{cov}\left(y_{t},y_{t-j}\right)=\underset{j\rightarrow\infty}{\lim}\gamma_{j}=0$


• $\sum_{j}\left|\gamma_{j}\right|<\infty$


Then:


$$\begin{align} L&= \underset{{\scriptstyle T\rightarrow\infty}}{\lim}\mathrm{\mathrm{var}}\left(\sqrt{T}\left(T^{-1}\sum_{t}y_{t}-\mu\right)\right) \\ &= \underset{{\scriptstyle T\rightarrow\infty}}{\lim}\mathrm{\mathrm{var}}\left(T^{-1/2}\sum_{t}y_{t}\right) \\ &=\underset{{\scriptstyle T\rightarrow\infty}}{\lim}T^{-1}\mathrm{\mathrm{var}}\left(\sum_{t}y_{t}\right) \\ &=\underset{{\scriptstyle T\rightarrow\infty}}{\lim}T^{-1}\sum_{t}\sum_{s}\mathrm{cov}\left(y_{t},y_{s}\right) \\ &=\underset{{\scriptstyle T\rightarrow\infty}}{\lim}T^{-1}\left[T\cdot\gamma_{0}+2\cdot\left(T-1\right)\cdot\gamma_{1}+2\cdot\left(T-2\right)\cdot\gamma_{2}+...+2\cdot\gamma_{T-1}\right] \\ &=\underset{{\scriptstyle T\rightarrow\infty}}{\lim}T^{-1}\left[T\gamma_{0}+2\sum_{1\leq j\leq T-1}\left(T-j\right)\gamma_{j}\right] \\ &=\underset{{\scriptstyle T\rightarrow\infty}}{\lim}\gamma_{0}+2\sum_{1\leq j\leq T-1}\left(1-\dfrac{j}{T}\right)\gamma_{j} \end{align}$$ I'm assuming the last step equals: $\gamma_{0}+2\sum_{j=1}^{\infty}\gamma_{j}$


since concluding from there is rather easy. The problem is that I am not sure this holds, since $j$ also tends to infinity with $T$.


Am I wrong? If so, why?
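The step in question can be illustrated numerically (a toy example of my own, with $\gamma_j = 2^{-j}$, so that $\gamma_0 + 2\sum_{j\ge1}\gamma_j = 3$):

```python
def weighted_sum(T, rho=0.5):
    # gamma_0 + 2 * sum_{j=1}^{T-1} (1 - j/T) * gamma_j  with  gamma_j = rho**j
    return 1 + 2 * sum((1 - j / T) * rho ** j for j in range(1, T))

# the (1 - j/T) weights wash out as T grows, so this approaches 3
for T in (10, 100, 10_000):
    print(T, weighted_sum(T))
```

For this absolutely summable choice of $\gamma_j$ the discrepancy behaves like $4/T$, which is the content of the dominated-convergence (Kronecker-lemma) step.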

integration - Taylor series of the complete elliptic integral of the first kind


I want to compute $K(k)$ as a Taylor series, for $k\in\mathbb{R}$ with $\vert k \vert < 1$. Can someone help me? $$ K(k):= \int^{\frac{\pi}{2}}_0 \dfrac{dt}{\sqrt{1-k^2 \sin^2 t}} $$ Results so far: $$ K(k)= \int^{\frac{\pi}{2}}_0 \dfrac{dt}{\sqrt{1-k^2 \sin^2 t}} = \int^{\frac{\pi}{2}}_0 (1-k^2 \sin^2 t)^{-\frac{1}{2}}\,dt $$ Using the binomial series we get $$ \int^{\frac{\pi}{2}}_{0} \sum^\infty_{\Phi=0} {-\frac{1}{2} \choose \Phi}(-k^{2})^{\Phi}{\sin^{2\Phi}t} \ dt = \sum^\infty_{\Phi=0} {-\frac{1}{2} \choose \Phi}(-k^{2})^{\Phi}\int^{\frac{\pi}{2}}_{0}\sin^{2\Phi}t \ dt $$ Since the exponent $2\Phi$ is even, Wallis' formula gives $$ \int^{\frac{\pi}{2}}_{0}\sin^{2\Phi}t \ dt = \frac{\pi}{2}\cdot\frac{1}{2}\cdot\frac{3}{4}\cdot\frac{5}{6}\cdots\frac{2\Phi-1}{2\Phi} =: S $$ thus we get: $$ \sum^\infty_{\Phi=0} {-\frac{1}{2} \choose \Phi}(-k^{2})^{\Phi}\cdot S \tag{1} $$ Now I need some help to compute (1) as a Taylor series. Can someone help?


Thanks! Landau.


Answer



The generating function for the central binomial coefficients is given on the MathWorld site.


\begin{align*} \int_{0}^{\frac \pi 2} \frac{1}{\sqrt{1 - 4 \left( \frac {k^2} 4 \sin^2 (t) \right )}} dt &= \int_0^{\frac \pi 2 } \sum_{n=0}^{\infty} \binom{2n}{n} \left(\frac {k^2} 4 \sin^2 (t) \right )^n dt \\ &= \sum_{n=0}^{\infty} \binom{2n}{n} \frac{k^{2n}}{4^n} \cdot \frac 1 2 \cdot \beta (n+1/2, 1/2) \\ &= \sum_{n=0}^\infty \frac{(2n)!}{(n!)^2 2^{2n} }\cdot \frac 1 2 \left( \frac{\Gamma(n+1/2)\Gamma(1/2)}{\Gamma(n+1)} \right ) k^{2n}\\ &= \sum_{n=0}^\infty \frac{(2n)!}{(n!)^2 2^{2n} }\cdot \frac 1 2 \left( \frac{(2n)! \sqrt{\pi} \sqrt{\pi}}{(n!)^2\, 2^{2n}} \right ) k^{2n} \\ &= \frac{ \pi }{2 } \sum_{n=0}^\infty \left( \frac{(2n)!}{(n!)^{2} 2^{2n}} \right )^2 k^{2n}\\ &= \frac{\pi}{2} \sum_{n=0}^\infty \left(P_{2n}(0)\right)^2 k^{2n} \end{align*}


Where $P_{2n}(0)$ is the value of the Legendre polynomial at $0$. Wolfram Alpha gives the sum on the right-hand side as EllipticK[k^2]. It is also given here on Wikipedia.
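As a numerical cross-check (my own sketch; `K_series` and `K_quad` are hypothetical helper names), the partial sums of $\frac{\pi}{2}\sum_{n\ge0}\bigl(\frac{(2n)!}{(n!)^2 2^{2n}}\bigr)^2 k^{2n}$ should match direct numerical integration of the defining integral:

```python
import math

def K_series(k, terms=60):
    # pi/2 * sum_n ((2n)!/(n!^2 4^n))^2 * k^(2n)
    s, c = 0.0, 1.0  # c = ((2n)!/(n!^2 4^n))^2, equal to 1 at n = 0
    for n in range(terms):
        s += c * k ** (2 * n)
        r = (2 * n + 1) / (2 * n + 2)  # ratio of consecutive (2n)!/(n!^2 4^n)
        c *= r * r
    return math.pi / 2 * s

def K_quad(k, steps=100_000):
    # midpoint rule for the defining integral on [0, pi/2]
    h = (math.pi / 2) / steps
    return sum(h / math.sqrt(1 - (k * math.sin((i + 0.5) * h)) ** 2)
               for i in range(steps))

print(K_series(0.5), K_quad(0.5))  # both close to 1.6858
```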


Friday, November 18, 2016

elementary number theory - How to use the Extended Euclidean Algorithm manually?



I've only found a recursive version of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?


Answer



Perhaps the easiest way to do it by hand is in analogy to Gaussian elimination or triangularization, except that, since the coefficient ring is not a field, one has to use the division / Euclidean algorithm to iteratively decrease the coefficients till zero. In order to compute both $\rm\,gcd(a,b)\,$ and its Bezout linear representation $\rm\,j\,a+k\,b,\,$ we keep track of such linear representations for each remainder in the Euclidean algorithm, starting with the trivial representation of the gcd arguments, e.g. $\rm\: a = 1\cdot a + 0\cdot b.\:$ In matrix terms, this is achieved by augmenting (appending) an identity matrix that accumulates the effect of the elementary row operations. Below is an example that computes the Bezout representation for $\rm\:gcd(80,62) = 2,\ $ i.e. $\ 7\cdot 80\: -\: 9\cdot 62\ =\ 2\:.\:$ See this answer for a proof and for conceptual motivation of the ideas behind the algorithm (see the Remark below if you are not familiar with row operations from linear algebra).


For example, to solve  m x + n y = gcd(m,n) one begins with
two rows [m 1 0], [n 0 1], representing the two
equations m = 1m + 0n, n = 0m + 1n. Then one executes
the Euclidean algorithm on the numbers in the first column,
doing the same operations in parallel on the other columns,

Here is an example: d = x(80) + y(62) proceeds as:


in equation form | in row form
---------------------+------------
80 = 1(80) + 0(62) | 80 1 0
62 = 0(80) + 1(62) | 62 0 1
row1 - row2 -> 18 = 1(80) - 1(62) | 18 1 -1
row2 - 3 row3 -> 8 = -3(80) + 4(62) | 8 -3 4
row3 - 2 row4 -> 2 = 7(80) - 9(62) | 2 7 -9
row4 - 4 row5 -> 0 = -31(80) +40(62) | 0 -31 40


The row operations above are those resulting from applying
the Euclidean algorithm to the numbers in the first column,

row1 row2 row3 row4 row5
namely: 80, 62, 18, 8, 2 = Euclidean remainder sequence
| |
for example 62-3(18) = 8, the 2nd step in Euclidean algorithm

becomes: row2 -3 row3 = row4 when extended to all columns.


In effect we have row-reduced the first two rows to the last two.
The matrix effecting the reduction is in the bottom right corner.
It starts as 1, and is multiplied by each elementary row operation,
hence it accumulates the product of all the row operations, namely:

$$ \left[ \begin{array}{ccc} 7 & -9\\ -31 & 40\end{array}\right ] \left[ \begin{array}{ccc} 80 & 1 & 0\\ 62 & 0 & 1\end{array}\right ] \ =\ \left[ \begin{array}{ccc} 2\ & \ \ \ 7\ & -9\\ 0\ & -31\ & 40\end{array}\right ] \qquad\qquad\qquad\qquad\qquad $$


Notice row 1 is the particular  solution  2 =   7(80) -  9(62)
Notice row 2 is the homogeneous solution 0 = -31(80) + 40(62),
so the general solution is any linear combination of the two:


n row1 + m row2 -> 2n = (7n-31m) 80 + (40m-9n) 62

The same row/column reduction techniques tackle arbitrary
systems of linear Diophantine equations. Such techniques
generalize easily to similar coefficient rings possessing a
Euclidean algorithm, e.g. polynomial rings F[x] over a field,
Gaussian integers Z[i]. There are many analogous interesting
methods, e.g. search on keywords: Hermite / Smith normal form,
invariant factors, lattice basis reduction, continued fractions,
Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.


Remark $ $ As an optimization, we can omit one of the Bezout coefficient columns (being derivable from the others). Then the calculations have a natural interpretation as modular fractions (though the "fractions" are multi-valued), e.g. follow the prior link.


Below I elaborate on the row operations to help readers unfamiliar with linear algebra.


Let $\,r_i\,$ be the Euclidean remainder sequence. Above $\, r_1,r_2,r_3\ldots = 80,62,18\ldots$ Given linear combinations $\,r_j = a_j m + b_j n\,$ for $\,r_{i-1}\,$ and $\,r_i\,$ we can calculate a linear combination for $\,r_{i+1} := r_{i-1}\bmod r_i = r_{i-1} - q_i r_i\,$ by substituting said combinations for $\,r_{i-1}\,$ and $\,r_i,\,$ i.e.


$$\begin{align} r_{i+1}\, &=\, \overbrace{a_{i-1} m + b_{i-1}n}^{\Large r_{i-1}}\, -\, q_i \overbrace{(a_i m + b_i n)}^{\Large r_i}\\[.3em] {\rm i.e.}\quad \underbrace{r_{i-1} - q_i r_i}_{\Large r_{i+1}}\, &=\, (\underbrace{a_{i-1}-q_i a_i}_{\Large a_{i+1}})\, m\, +\, (\underbrace{b_{i-1} - q_i b_i}_{\Large b_{i+1}})\, n \end{align}$$


Thus the $\,a_i,b_i\,$ satisfy the same recurrence as the remainders $\,r_i,\,$ viz. $\,f_{i+1} = f_{i-1}-q_i f_i.\,$ This implies that we can carry out the recurrence in parallel on row vectors $\,[r_i,a_i,b_i]$ representing the equation $\, r_i = a_i m + b_i n\,$ as follows


$$\begin{align} [r_{i+1},a_{i+1},b_{i+1}]\, &=\, [r_{i-1},a_{i-1},b_{i-1}] - q_i [r_i,a_i,b_i]\\ &=\, [r_{i-1},a_{i-1},b_{i-1}] - [q_i r_i,q_i a_i, q_i b_i]\\ &=\, [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-q_i b_i] \end{align}$$


which written in the tabular format employed far above becomes


$$\begin{array}{ccc} &r_{i-1}& a_{i-1} & b_{i-1}\\ &r_i& a_i &b_i\\ \rightarrow\ & \underbrace{r_{i-1}\!-q_i r_i}_{\Large r_{i+1}} &\underbrace{a_{i-1}\!-q_i a_i}_{\Large a_{i+1}}& \underbrace{b_{i-1}-q_i b_i}_{\Large b_{i+1}} \end{array}$$


Thus the extended Euclidean step is: compute the quotient $\,q_i = \lfloor r_{i-1}/r_i\rfloor$ then multiply row $i$ by $q_i$ and subtract it from row $i\!-\!1.$ Said componentwise: in each column $\,r,a,b,\,$ multiply the $i$'th entry by $q_i$ then subtract it from the $i\!-\!1$'th entry, yielding the $i\!+\!1$'th entry. If we ignore the 2nd and 3rd columns $\,a_i,b_i$ then this is the usual Euclidean algorithm. The above extends this algorithm to simultaneously compute the representation of each remainder as a linear combination of $\,m,n,\,$ starting from the obvious initial representations $\,m = 1(m)+0(n),\,$ and $\,n = 0(m)+1(n).\,$
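The tabular procedure above translates into a few lines of Python (my own illustration; the function name is arbitrary). Each row `[r, a, b]` maintains the invariant $r = a\,m + b\,n$, and every step performs the row operation $\text{row}_{i-1} - q_i\,\text{row}_i$ on all three columns:

```python
def ext_gcd(m, n):
    """Iterative extended Euclid: returns (g, x, y) with g = gcd(m, n) = x*m + y*n."""
    prev, curr = [m, 1, 0], [n, 0, 1]
    while curr[0] != 0:
        q = prev[0] // curr[0]  # quotient q_i = floor(r_{i-1} / r_i)
        # row_{i-1} - q_i * row_i, applied to every column in parallel
        prev, curr = curr, [p - q * c for p, c in zip(prev, curr)]
    return tuple(prev)

print(ext_gcd(80, 62))  # (2, 7, -9): the particular solution 2 = 7(80) - 9(62)
```

Running it on the worked example reproduces the final rows of the table: the returned triple is the particular solution, and the discarded last row `[0, -31, 40]` is the homogeneous one.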



arithmetic - Is my book wrong? Fractional word problem.




Question: Mr. Cortez owns a 10 1/2-acre tract of land, and he plans to subdivide this tract into 1/4-acre lots. He must first set aside 1/6 of the TOTAL land for roads. Which of the following expressions shows how many lots this tract will yield?



(A). $\quad 10 \frac 1 2 \div \frac 1 4 - \frac 1 6$



(B). $\quad (10\frac1 2 - \frac 1 6) \div \frac 1 4$



(C). $\quad 10 \frac1 2 + \frac 1 6 \times \frac 1 4$



(D). $\quad 10 \frac1 2 \times \frac1 4 - \frac 1 6$




(E). Not enough information is given.



The book says the answer is (B), which seems wrong: since the question says 1/6 of the total 10 1/2 acres, I don't think you should subtract a bare 1/6.



So, is my book wrong in telling me the answer is (B)? If so, what would the correct expression look like? I'm thinking it would be something more like:



$10\frac12 \times \frac56 \div \frac14$
($\frac56$ being the fraction of the land left after setting aside $\frac16$ of it).




So that's it; help would be much appreciated, since there is only one other similar question in the whole book and I don't know whether I'm doing it right or the book is just wrong. It wouldn't be the first time the book has been wrong, either.



Book is McGraw Hill's GED, page 774 question #8.


Answer



You're correct (I will write $10\frac12$ as $10.5$ for clarity).



You need $\dfrac16$ of $10.5 \text{ acres}$ for the roads. So you're left with $10.5-(10.5)\cdot\dfrac16=(10.5)\cdot\dfrac56$ acres of land.



To obtain the number of $\dfrac14\text{-acre}$ lots contained in $(10.5)\cdot\dfrac56\text{ acres}$, divide the latter by the former: $\dfrac{(10.5)\cdot\dfrac56\text{ acres}}{\dfrac14\text{ acres}}=10.5\cdot\dfrac{20}{6}=\dfrac{21}2\cdot\dfrac{20}{6}=35$.
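This arithmetic can be checked with exact rational arithmetic (an illustration; the variable names are mine):

```python
from fractions import Fraction

total = Fraction(21, 2)             # 10 1/2 acres
usable = total * Fraction(5, 6)     # left after setting aside 1/6 for roads
lots = usable / Fraction(1, 4)      # number of quarter-acre lots
print(lots)                         # 35

# The book's option (B), for comparison:
option_b = (total - Fraction(1, 6)) / Fraction(1, 4)
print(option_b)                     # 124/3 -- not even a whole number of lots
```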


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...