Tuesday, January 31, 2017

calculus - Find the limit of $\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$ without L'Hospital's rule


I have to find: $$\lim_{x\to0}{\frac{\ln(1+e^x)-\ln2}{x}}$$ and I want to calculate it without using L'Hospital's rule. With L'Hospital's I know that it gives $1/2$. Any ideas?


Answer



Simply differentiate $f(x)=\ln(e^x +1)$ at $x=0$ and you'll get the answer: the given limit is exactly the definition of the derivative of $f$ at $0$, since $f(0)=\ln 2$. Indeed, $f'(x)=\frac{e^x}{1+e^x}$, so $f'(0)=\frac{1}{2}$.
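As a quick numeric sanity check (my addition, not part of the original answer), the difference quotient can be evaluated for shrinking $h$:

```python
import math

# Difference quotient of f(x) = ln(1 + e^x) at x = 0, where f(0) = ln 2.
# It should approach f'(0) = 1/2 as h shrinks.
for h in [1e-1, 1e-3, 1e-5, 1e-7]:
    q = (math.log(1 + math.exp(h)) - math.log(2)) / h
    print(h, q)
```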


random variables - How many students would have to take the exam to ensure with probability at least $.9$ that the class average would be within $5$ of $75$?

I'm having trouble solving this problem:




From past experience, a professor knows that the test score of a student taking her final examination is a random variable with mean $75$. How many students would have to take the examination to ensure with probability at least $.9$ that the class average would be within $5$ of $75$? Use the central limit theorem.



The professor knows that the variance of a student's test score is $25$.




I'm not entirely sure on how to solve this problem.




Right now this is what I have:



We know: $\mu = 75$ and $\sigma^2 = 25$.



This is what I set up (by defn C.L.T): $\mathbb{P}(\frac{X_1+\cdots+X_n - 75n}{5\sqrt{n}}\le\frac{.9-75n}{5\sqrt{n}}) = 1-\mathbb{P}(\frac{X_1+\cdots+X_n - 75n}{5\sqrt{n}}>\frac{.9-75n}{5\sqrt{n}})$



I'm not sure how to solve for $n$. Thanks.
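Not part of the original post, but here is how the computation is usually finished, sketched in Python with SciPy. By the CLT the class average is approximately $N(75, 25/n)$, so the requirement becomes $2\Phi(\sqrt n)-1 \ge 0.9$:

```python
from scipy.stats import norm

# P(|mean - 75| <= 5) ~= 2*Phi(5 / (5/sqrt(n))) - 1 = 2*Phi(sqrt(n)) - 1.
# Find the smallest n meeting the 0.9 requirement.
n = 1
while 2 * norm.cdf(n ** 0.5) - 1 < 0.9:
    n += 1
print(n)  # 3, since Phi(sqrt(n)) >= 0.95 needs sqrt(n) >= 1.645, i.e. n >= 2.71
```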

sequences and series - Sum of all odd numbers to infinity




Starting from the idea that $$\sum_{n=1}^\infty n = -\frac{1}{12}$$
It's fairly natural to ask about the series of odd numbers $$\sum_{n=1}^{\infty} (2n - 1)$$
I worked this out in two different ways, and get two different answers. By my first method
$$\sum_{n=1}^{\infty} (2n - 1) + 2\bigg( \sum_{n=1}^\infty n \bigg) = \sum_{n=1}^\infty n$$

$$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = - \sum_{n=1}^\infty n = \frac{1}{12}$$
But then by the second
$$\sum_{n=1}^{\infty} (2n - 1) - \sum_{n=1}^\infty n = \sum_{n=1}^\infty n$$
$$\therefore ~\sum_{n=1}^{\infty} (2n - 1) = 2 \sum_{n=1}^\infty n = -\frac{1}{6}$$
Is there any reason to prefer one of these answers over the other? Or is the sum over all odd numbers simply undefined? In which case, was there a way to tell that in advance?



I'm also curious if this extends to other series of a similar form
$$\sum_{n=1}^{\infty} (an + b)$$
Are such series undefined whenever $b \neq 0$?



Answer



With the usual caveat that $$ \sum_{n=1}^\infty n \ne -\frac{1}{12}$$
we can do a similar zeta function regularization for the sum of odd integers. We start with the fact that $$ \sum_{n = 1}^\infty \frac{1}{(2n-1)^s} =(1-2^{-s})\zeta(s)$$ for $\Re(s) > 1$ and then analytically continue to $s=-1$ to get $$ \sum_{n=1}^\infty(2n-1) "=" (1-2)\zeta(-1) = \frac{1}{12}$$



Edit



Zeta function regularization and Ramanujan get the same answer here. As for why your first method gets the "right answer" and the second doesn't, note that the first is argued by the exact same formal steps used to derive $$ \sum_{n=1}^\infty\frac{1}{(2n-1)^s} = (1-2^{-s})\zeta(s)$$ while the second uses both linearity and index shifting which are generally not preserved by the regularization methods.
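As an aside (my addition), the regularized value can be checked numerically via mpmath's analytic continuation of $\zeta$:

```python
from mpmath import zeta

# (1 - 2^{-s}) * zeta(s) continues the odd-number Dirichlet series;
# evaluating the continuation at s = -1 gives (1 - 2) * (-1/12) = 1/12.
s = -1
print((1 - 2 ** (-s)) * zeta(s))  # 0.0833333...
```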


limits - Is $n\log(n)+m\log(m)=O(n\log(m)+m\log(n))$? And what about $n\log(m)+m\log(n)=O(n\log(n)+m\log(m))$?


I am trying to decide whether each of these two statements is true or false:



  • $n\log(n)+m\log(m)=O(n\log(m)+m\log(n))$




  • $n\log(m)+m\log(n)=O(n\log(n)+m\log(m))$




I've tried using some properties of logarithms so as to take these two statements to these inequalities:


  • For the first one, the statement is true if and only if there exist $c$ and $(n_0,m_0)$ such that for all $(n,m)>(n_0,m_0)$ (which means $n>n_0$ and $m>m_0$)

$$n^n\cdot m^m \leq c\,(m^n\cdot n^m)$$ $$n^{n-m}\leq c\,m^{n-m}$$


If I take $m=m_0+1$ and $n=(n_0+1)(m_0+1)c$ then $(n,m)>(n_0,m_0)$ but $$n^{n-m} > cm^{n-m}$$


From here it follows that the first statement is false.


  • As for the second one, the statement is true if and only if there exist $c, (n_0,m_0)$ such that for all $(n,m)>(n_0,m_0)$ $$m^n\cdot n^m \leq c\,(n^n\cdot m^m)$$ $$n^{m-n} \leq c\,m^{m-n}$$

I got stuck at this part, I would appreciate some help to prove or disprove the second statement and also to know if I've done the first part correctly. Thanks in advance.


Answer



Since the functions being compared are functions of two variables, $m$ and $n$, it seems the answer would depend entirely upon whether $m$ and $n$ are of the same order.


Let $m=nk$. Then $k=\dfrac{m}{n}$.



The question of whether


$$ n\log(n)+m\log(m)=O(n\log(m)+m\log(n)) $$


becomes a question of whether


$$ n\log(n)+nk(\log(n)+\log(k))=O(n\log(n)+n\log(k)+nk\log(n)) $$ which reduces to the question whether


$$ k\cdot(n\log(n))=O(n\log(n)) $$


And since $k=\dfrac{m}{n}$ this reduces to the question whether


$$ m\log(n)=O(n\log(n)) $$


Next consider the second question. Is


$$ n\log(m)+m\log(n)=O(n\log(n)+m\log(m)) $$


Taking the same approach as in the first question, this reduces to the question of whether



$$ n\log(n)=O(m\log(n)) $$


So it comes down to whether or not $m$ and $n$ are of the same order.


So, for example, if $m=O(x^3)$ and $n=O(x^2)$ then the two functions are not of the same order. But if $m=O(x^2)$ and $n=O(x^2)$ then the two functions are of the same order.
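To illustrate this empirically (my addition, mimicking the question's counterexample regime in which $m$ stays small while $n$ grows):

```python
import math

def lhs(n, m): return n * math.log(n) + m * math.log(m)
def rhs(n, m): return n * math.log(m) + m * math.log(n)

for n in [10, 100, 1000, 10000]:
    same = lhs(n, 2 * n) / rhs(n, 2 * n)   # m = 2n: same order, ratio stays bounded
    small = lhs(n, 3) / rhs(n, 3)          # m fixed: ratio grows like log(n)
    print(n, round(same, 3), round(small, 3))
```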


Monday, January 30, 2017

diophantine equations - How does modular arithmetic work - Fermat's last theorem near misses?



This question about Fermat's Last Theorem near misses uses modular arithmetic (converting all the terms to mod 100) to show why $$3987^{12}+4365^{12}\neq4472^{12}.$$



I've only just come across modular arithmetic, and the method looks really neat. But I'm puzzled as to why this works. In other words, I think I'm asking if an equation is true in modular form, how do we know it's true in ordinary integer form? Apologies in advance if this is a really dumb question.


Answer



Since you're new to modular arithmetic, here is a very simple explanation using a couple of examples.



There are three types of modular arithmetic you are already familiar with from grade school: mod 10, mod 5, and mod 2.




Mod 2 just refers to even numbers and odd numbers. Even numbers are those which are "equal to" (actually "congruent to") 0 (mod 2). Odd numbers are those which are congruent to 1 (mod 2).



For mod 5, discard all but the last digit of a number, then (if it is greater than 4), subtract 5 from the remaining digit.



For mod 10, just take the last digit of the number.






Consider the following equation. Is it true?




$5723784602 + 2893649283 = 8617433887$



Using modular arithmetic (specifically, mod 10) you can discard all but the final digit of each number, and state:



$2 + 3 \equiv 7 \mod 10$



This is obviously false. Therefore, the original equation is false.







How about the following equation?



$234343 \times 23845 = 5587908832$



Using the rules that you were probably taught in Grade School, if you take any number that ends in a five and multiply it by anything, the product must end in either a five or a zero. Therefore, this is false.



We can state this with modular arithmetic as follows:



$3 \times 0 \equiv 2 \mod 5$




Obviously this is false. Anything times zero must equal zero.






However, the reverse approach doesn't work:



$29834620934 + 293840239843 = 17$



If we check this with modular arithmetic (mod 10), we get:




$4 + 3 \equiv 7 \mod 10$



This is true, but the original equation is false.






In summary: You can use modular arithmetic to prove an equation false.



You can't use it (in this simplistic form) to prove an equation true.
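To tie this back to the Fermat near-miss that prompted the question, here is a small check (my addition) using Python's built-in modular exponentiation:

```python
# pow(a, b, m) computes a**b mod m efficiently.
print((pow(3987, 12, 100) + pow(4365, 12, 100)) % 100)  # 6
print(pow(4472, 12, 100))                               # 16, so the sides differ mod 100
# Mod 10 alone would not settle it: both sides end in 6.
print((pow(3987, 12, 10) + pow(4365, 12, 10)) % 10, pow(4472, 12, 10))
```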



real analysis - $f(\bigcap K_n)=\bigcap f(K_n)$? Where $K_n$ are compact



$f:X\rightarrow Y$ is a continuous map between metric spaces and $(K_n)$ is a nested sequence of nonempty compact subsets of $X$; we need to show the identity in the title.


Please tell me which result I should apply here. Regarding continuous maps and compact sets I know that the image of a compact set is compact, that bounds are attained, uniform continuity on compact sets, etc. Could we start by taking $y\in \bigcap f(K_n)$ and then showing that it is also in the left side?


Answer



Let $y \in \bigcap f(K_n)$. This means there is $x_n$ so that $f(x_n) = y$ and $x_n \in K_n$ for all $n$.


If $\{x_n\}$ is finite, we're done. If it's infinite, it has a limit point $x$ (why?). Use the continuity of $f$ to find $f(x)$ and conclude the proof.


algebra precalculus - Evaluate $\lim \limits_{n\to \infty }\sin^2 (\pi \sqrt{(n!)^2-(n!)})$





Evaluate $$\lim \limits_{n\to \infty }\sin^2 \left(\pi \sqrt{(n!)^2-(n!)}\right)$$




I tried Stirling's approximation
$$n! \approx \sqrt{2\pi n}\,n^n e^{-n}$$
but it led nowhere.



Any hint will be of great help.


Answer




$$\sin (\pi \sqrt{(n!)^2 -n!} )=\sin (\pi \sqrt{(n!)^2 -n!} -\pi\cdot n! )=\sin \left(\pi \frac{-n!}{\sqrt{(n!)^2 -n!} +n!}\right)=\sin \left(\pi \frac{-1}{\sqrt{1 -\frac{1}{n!}} +1}\right)\to -\sin\frac{\pi}{2} =-1$$

Here subtracting $\pi\cdot n!$ leaves the sine unchanged because $n!$ is even for $n\ge 2$, and the second equality comes from rationalizing $\sqrt{(n!)^2-n!}-n!$. Since the sine tends to $-1$, the original limit is $\lim\limits_{n\to\infty}\sin^2(\pi\sqrt{(n!)^2-n!})=1$.


analysis - Isn't that proof going the wrong way?


I'm currently working on the very well written book Understanding Analysis, by Stephen Abbott.


But I found a proof that looks wrong; I think it goes the wrong way (showing that A $\implies$ B while it should demonstrate that $B \implies A$).


Here is the theorem :


A function f : A $\to$ R, fails to be uniformly continuous if there exists a particular $\epsilon_0$ and two sequences $(x_n)$ and $(y_n)$ such that : $\lim(|x_n - y_n|) = 0$ and $|f(x_n) - f(y_n)| \geq \epsilon_0 $.


Here is the proof :


Negating the definition of uniform continuity gives the following: A function $f : A → R$ fails to be uniformly continuous on A if there exists $ε_0 > 0$ such that for all $δ > 0$ we can find two points $x$ and $y$ satisfying $|x − y| < δ$ but with $|f(x) − f(y)| ≥ \epsilon_0$.



The fact that no $δ$ “works” means that if we were to try $δ = 1$, we would be able to find points $x_1$ and $y_1$ where $|x_1 − y_1| < 1$ but $|f(x_1) − f(y_1)| ≥ ε_0.$ In a similar way, if we try $δ = 1/n$ where $n ∈ N$, it follows that there exist points $x_n$ and $y_n$ with $|x_n − y_n| < 1/n$ but where $|f(x_n) − f(y_n)| ≥ \epsilon_0$. The sequences $(x_n)$ and $(y_n)$ are precisely the ones described in theorem.


Comment :


I think that the proof demonstrated that : f not uniformly continuous $\implies$ $(x_n)$ and $(y_n)$ exist. While it should have been the other way around.


Do you agree that the proof is wrong?


Answer



Note that "$f\colon A\to \mathbb R$ is uniformly continuous" is in fact by definition equivalent to $$\forall \epsilon>0\colon \exists \delta>0\colon\forall x,y\colon (|x-y|<\delta\to |f(x)-f(y)|<\epsilon )$$ Hence the negation $$\exists \epsilon>0\colon \forall \delta>0\colon\exists x,y\colon (|x-y|<\delta\land|f(x)-f(y)|\ge\epsilon )$$ is also equivalent to the negation "$f\colon A\to \mathbb R$ fails to be uniformly continuous". So on the one hand you may have tripped over the custom that "if" in a definition is conceptually an "iff" and that on the other hand the argument does indeed show something stronger than "if", namely "iff".


sequences and series - The sum of all the odd numbers to infinity

We have this sequence:




S1: 1+2+3+4+5+6.. (to infinity)



It has been demonstrated that S1 = -1/12.



Now, what happens if i multiply by a factor of 2?



S2: 2+4+6+8+10+12.... (to infinity).



I have 2S1, which is equal to -1/6




From this, we can create an equation for the odd numbers:



S3: 1+3+5+7+9+11... (to infinity)



We know that each term in S3 is just the corresponding term in S2, minus 1.



Or: the sum of the even numbers minus the sum of infinitely many 1s.



So S3 = -1/6 - ∞




However, we also know that the odd numbers + the even numbers = The natural numbers.



So let's try it.



-1/6 - (-1/6 -∞)



We have -1/6 + 1/6 + ∞



Which is just ∞




So, there we have it: a paradox. S1 cannot be both -1/12 and ∞.

Sunday, January 29, 2017

limits - Summation of exponential series


Evaluate the limit: $$ \lim_{n \to \infty}e^{-n}\sum_{k = 0}^n \frac{n^k}{k!} $$ It is not as easy as it seems and the answer is definitely not 1.
Please help in solving it.


Answer



Given an event whose occurrence counts follow the Poisson distribution, with an average of $n$ occurrences per trial, the probability that it occurs $k$ times in a given trial is


$e^{-n} \frac{n^k}{k!}$.


So, the sum in the limit is the probability that the event (which now must have an integer average) occurs no more than the mean number of times. For large $n$, the Poisson distribution is well-approximated by the normal distribution (this can be made into a precise limiting statement). The normal distribution is symmetric about its mean, so the limit of the sum is the probability that a normally distributed random variable is less than the mean of the variable, namely $\frac{1}{2}$.


arithmetic progressions - How do I derive a formula for summation of this series?



I have a series in which the general term is represented by



$a_n = \frac{k^{n+1} - k - nk +n}{(k-1)^2} $




where $k$ is a constant.



How do I derive the formula for $\sum_{n=1}^{n=b} a_n$ (the sum of the first $b$ terms)?



I'm not really good at mathematics (high school level), so kindly explain it in the simplest way possible.


Answer



HINT



Note that you can divide and sum each part separately




$$a_n = \frac{k^{n+1} - k - nk +n}{(k-1)^2}=\frac{k^{n+1} - k }{(k-1)^2}-\frac{nk -n}{(k-1)^2}= \frac{k^{n+1} }{(k-1)^2}-\frac{k }{(k-1)^2}-\frac{n}{(k-1)}$$



and



$$\sum_{n=1}^{n=b} k^{n+1}=\sum_{n=0}^{n=b-1} k^{n+2}=k^2\sum_{n=0}^{n=b-1} k^{n}=k^2\frac{1-k^b}{1-k}$$



$$\sum_{n=1}^{n=b} n=\frac{b(b+1)}{2}$$
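Assembling the hint into a closed form and checking it against brute force (my addition; the values of $k$ and $b$ below are arbitrary):

```python
from fractions import Fraction

k, b = Fraction(3), 8  # illustrative values

def a(n):
    # General term a_n = (k^{n+1} - k - n*k + n) / (k-1)^2.
    return (k ** (n + 1) - k - n * k + n) / (k - 1) ** 2

brute = sum(a(n) for n in range(1, b + 1))

# Closed form from the hint: geometric piece minus linear piece.
geom = (k ** 2 * (1 - k ** b) / (1 - k) - b * k) / (k - 1) ** 2
lin = Fraction(b * (b + 1), 2) / (k - 1)
print(brute, geom - lin, brute == geom - lin)
```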


coupon collector - What is the probability of rolling $n$ dice until each side appears at least once?



We roll a die until each side appears at least once, then we stop.




What is the probability of rolling exactly $n$ dice?





I guess the answer is
$$6-6\left(\dfrac{5}{6}\right)^n\;,$$



but this may be wrong.



In trying to solve this problem, I read this paper, but I couldn't solve the problem.


Answer



I'm going to assume that what you are asking is: if we roll a die until all sides (i.e. faces) appear, what is the probability that we will roll $n$ times?



In order for this to happen, we need exactly five faces to have appeared in the first $n-1$ rolls, and then the one missing face must appear on the $n$-th roll. Using inclusion-exclusion, we can express this, for $n>1$, as

$$
p(n) = \frac{1}{6} \binom{6}{5} \left( \left(\frac{5}{6} \right)^{n-1} - \binom{5}{4} \left( \frac{4}{6} \right)^{n-1} + \binom{5}{3} \left( \frac{3}{6}\right)^{n-1} - \binom{5}{2} \left( \frac{2}{6} \right)^{n-1}+\binom{5}{1} \left( \frac{1}{6}\right) ^{n-1} \right)
$$
So, for example, $p(6)=5/324$, $p(7)=25/648$, and $p$ hits a maximum of $875/10368$ at $n=11$.



As a check, you can (numerically) verify that
$$ \sum_{i=6}^{\infty} ip(i) = 14.7,$$
the well-known expected number of throws needed to see all faces.
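Both the formula and the expectation are easy to cross-check numerically (my addition):

```python
from math import comb

def p(n):
    # Inclusion-exclusion formula from above (valid for n > 1).
    s = sum((-1) ** (5 - m) * comb(5, 5 - m) * (m / 6) ** (n - 1)
            for m in range(1, 6))
    return comb(6, 5) / 6 * s

print(p(6), 5 / 324)    # both ~= 0.0154321
print(p(7), 25 / 648)   # both ~= 0.0385802
print(sum(n * p(n) for n in range(2, 3000)))  # ~= 14.7
```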


Saturday, January 28, 2017

calculus - Find $N$ given a sequence $a_n$




In the sequence {$a_n$}, $$a_n=\frac{a_1+a_2+...+a_{n-1}}{n-1} \, , \quad n \ge 3$$
If $a_1+a_2 \ne 0$ and the sum of the first $N$ terms is $12(a_1+a_2)$, find $N$.




Kind of lost on where to start with this one. My initial thought was $$\sum^{N}_{n=3}\frac{a_1+a_2+...+a_{n-1}}{n-1}=12(a_1+a_2)$$ $$\sum^{N}_{n=3}\frac{a_1+a_2+...+a_{n-1}}{n-1}=\frac{1}{2}(a_1+a_2)+\frac{1}{3}(a_1+a_2+a_3)...$$ but something seems wrong here, or I do not know where to go from here.


Answer



\begin{align*}
S_2 &= a_1+a_2 \\
S_n &= a_1+a_2+\ldots+a_n \\
a_{n+1} &= \frac{a_1+\ldots+a_{n}}{n} = \frac{S_n}{n} \\
S_{n+1} &= S_{n}+\frac{S_n}{n} = \frac{n+1}{n} S_{n} \\
&= \frac{n+1}{n} \cdot \frac{n}{n-1} \cdot \ldots \cdot \frac{4}{3} \cdot \frac{3}{2} S_{2} = \frac{n+1}{2} S_2 \\
S_{N} &= \frac{N}{2} S_2 = \frac{N(a_1+a_2)}{2}
\end{align*}

Setting $S_N = \frac{N}{2}(a_1+a_2) = 12(a_1+a_2)$ and using $a_1+a_2\ne 0$ gives $N=24$.



Also $\forall n\ge 3$, $$a_{n}=\frac{a_1+a_2}{2}$$


calculus - How to prove that $\log(x)<x$ when $x>1$?

It's very basic but I'm having trouble to find a way to prove this inequality




$\log(x) < x$

when $x>1$



($\log(x)$ is the natural logarithm)



I can think about the two graphs, but I can't find another way to prove it; besides that, I don't understand why it should not hold if $x<1$.



Can anyone help me?




Thanks in advance.

Friday, January 27, 2017

calculus - Calculation of $\int_0^{\pi} \frac{\sin^2 x}{a^2+b^2-2ab \cos x}\,\mathrm dx$




Calculate the definite integral




$$
I=\int_0^{\pi} \frac{\sin^2 x}{a^2+b^2-2ab \cos x}\;\mathrm dx
$$



given that $a>b>0$.




My Attempt:




If we replace $x$ by $C$, then



$$
I = \int_{0}^{\pi}\frac{\sin^2 C}{a^2+b^2-2ab\cos C}\;\mathrm dC
$$



Now we can use the Cosine Formula ($A+B+C=\pi$). Applying the formula gives



$$
\begin{align}
\cos C &= \frac{a^2+b^2-c^2}{2ab}\\
a^2+b^2-2ab\cos C &= c^2
\end{align}
$$



From here we can use the formula $\dfrac{\sin A}{a} = \dfrac{\sin B}{b} = \dfrac{\sin C}{c}$ to transform the integral to



$$
\begin{align}
I &= \int_{0}^{\pi}\frac{\sin^2 C}{c^2}\;\mathrm dC\\
&= \int_{0}^{\pi}\frac{\sin^2A}{a^2}\;\mathrm dC\\
&= \int_{0}^{\pi}\frac{\sin^2 B}{b^2}\;\mathrm dC
\end{align}
$$



Is my process right? If not, how can I calculate the above integral?


Answer



We have
$$I_m(a,b)=\int_0^\pi\frac{\cos(mx)}{a^2-2ab\cos x+b^2}\,\mathrm dx=\begin{cases}\dfrac{\pi}{a^2-b^2} & \text{if } m=0\\[1.5ex]\dfrac{\pi}{a^2-b^2}\left(\dfrac{b}{a}\right)^m & \text{if } m\ne0\end{cases}$$
Proof can be seen here. Hence
\begin{align}
\int_0^{\pi} \frac{\sin^2 x}{a^2+b^2-2ab \cos x}\,\mathrm dx&=\frac{1}{2}\int_0^{\pi} \frac{1-\cos2 x}{a^2+b^2-2ab \cos x}\,\mathrm dx\\
&=\frac{1}{2}\left[\frac{\pi}{a^2-b^2}-\frac{\pi}{a^2-b^2}\left(\frac{b}{a}\right)^2\right]\\
&=\frac{\pi}{2 a^2}
\end{align}
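A numeric spot-check of the closed form (my addition; $a=2$, $b=1$ are arbitrary values with $a>b>0$):

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 1.0
val, _ = quad(lambda x: np.sin(x) ** 2 / (a ** 2 + b ** 2 - 2 * a * b * np.cos(x)),
              0, np.pi)
print(val, np.pi / (2 * a ** 2))  # both ~= 0.3927
```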


trigonometry - Prove that $\cos\frac {2\pi}{7}+ \cos\frac {4\pi}{7}+ \cos\frac {8\pi}{7}=-\frac{1}{2}$





Prove that
$$\cos\frac {2\pi}{7}+ \cos\frac {4\pi}{7}+ \cos\frac {8\pi}{7}=-\frac{1}{2}$$




My attempt



\begin{align}
\text{LHS}&=\cos\frac{2\pi}7+\cos\frac{4\pi}7+\cos\frac{8\pi}7\\
&=-2\cos\frac{4\pi}7\cos\frac\pi7+2\cos^2\frac{4\pi}7-1\\
&=-2\cos\frac{4\pi}7\left(\cos\frac\pi7-\cos\frac{4\pi}7\right)-1
\end{align}
Now, please help me to complete the proof.


Answer



$\cos(2\pi/7)+\cos(4\pi/7)+\cos(8\pi/7)$

$=\cos(2\pi/7)+\cos(4\pi/7)+\cos(6\pi/7)$ (the angles $8\pi/7$ and $6\pi/7$ add to give $2\pi$, so one is $2\pi$ minus the other and their cosines agree)



At this point, we'll make an observation




$\cos(2\pi/7)\sin(\pi/7) = \frac{\sin(3\pi/7) - \sin(\pi/7)}{2}$ ..... (A)

$\cos(4\pi/7)\sin(\pi/7) = \frac{\sin(5\pi/7) - \sin(3\pi/7)}{2}$ ..... (B)

$\cos(6\pi/7)\sin(\pi/7) = \frac{\sin(7\pi/7) - \sin(5\pi/7)}{2}$ ..... (C)



Now, add (A), (B) and (C) to get



$\sin(\pi/7)\left(\cos(2\pi/7)+\cos(4\pi/7)+\cos(6\pi/7)\right) = \frac{\sin(7\pi/7) - \sin(\pi/7)}{2} = -\frac{\sin(\pi/7)}{2}$




The $\sin(\pi/7)$ cancels out from both sides, leaving $-\frac{1}{2}$, which is your answer.
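A one-line numeric confirmation (my addition):

```python
import math

print(sum(math.cos(k * math.pi / 7) for k in (2, 4, 8)))  # ~= -0.5
```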


Thursday, January 26, 2017

elementary number theory - Prove the following inequality involving Modular Arithmetic

If $$ a-(a \bmod x)

Prime powers that divide a factorial


If we have some prime $p$ and a natural number $k$, is there a formula for the largest natural number $n_k$ such that $p^{n_k} \mid k!$?


This came up while doing an unrelated homework problem, but it is not itself homework. I haven't had any good ideas yet worth putting here.


The motivation came from trying to figure out what power of a prime you can factor out of a binomial coefficient. Like $\binom{p^m}{k}$.


Answer



This follows from a famous result of Kummer:


Theorem. (Kummer, 1854) The highest power of $p$ that divides the binomial coefficient $\binom{m+n}{n}$ is equal to the number of "carries" when adding $m$ and $n$ in base $p$.



Equivalently, the highest power of $p$ that divides $\binom{m}{n}$, with $0\leq n\leq m$ is the number of carries when you add $m-n$ and $n$ in base $p$.


As a corollary, you get


Corollary. For a positive integer $r$ and a prime $p$, let $[r]_p$ denote the exact $p$-divisor of $r$; that is, we say $[r]_p=a$ if $p^a|r$ but $p^{a+1}$ does not divide $r$. If $0\lt k\leq p^n$, then $$\left[\binom{p^n}{k}\right]_p = n - [k]_p.$$


Proof. To get a proof, assuming Kummer's Theorem: when you add $p^n - k$ and $k$ in base $p$, you get a $1$ followed by $n$ zeros. You start getting a carry as soon as you don't have both numbers in the column equal to $0$, which is as soon as you hit the first nonzero digit of $k$ in base $p$ (counting from right to left). So you really just need to figure out what is the first nonzero digit of $k$ in base $p$, from right to left. This is exactly the $([k]_p+1)$th digit. So the number of carries will be $(n+1)-([k]_p+1) = n-[k]_p$, hence this is the highest power of $p$ that divides $\binom{p^n}{k}$, as claimed. $\Box$
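A computational check of the corollary (my addition):

```python
from math import comb

def vp(r, p):
    # Exact p-adic valuation [r]_p of a positive integer r.
    a = 0
    while r % p == 0:
        r //= p
        a += 1
    return a

p, n = 3, 4
assert all(vp(comb(p ** n, k), p) == n - vp(k, p) for k in range(1, p ** n + 1))
print("corollary holds for p = 3, n = 4")
```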


real analysis - How to prove $\lim_{n\rightarrow \infty} \frac{a^n}{n!}=0$







As the topics, how to prove $\lim\limits_{n\rightarrow \infty} \dfrac{a^n}{n!}=0$




$\forall a \in \mathbb{R^+}$

sequences and series - Sum: $\sum_{n=1}^\infty\prod_{k=1}^n\frac{k}{k+a}=\frac{1}{a-1}$



For the past week, I've been mulling over this Math.SE question. The question was just to prove convergence of
$$\sum\limits_{n=1}^\infty\frac{n!}{\left(1+\sqrt{2}\right)\left(2+\sqrt{2}\right)\cdots\left(n+\sqrt{2}\right)}$$
but amazingly Mathematica told me it had a remarkably simple closed form: just $1+\sqrt{2}$. After some fiddling, I conjectured for $a>1$:



$$\sum\limits_{n=1}^\infty\prod\limits_{k=1}^n\frac{k}{k+a}=\sum\limits_{n=1}^\infty\frac{n!}{(1+a)(2+a)\cdots(n+a)}=\frac{1}{a-1}$$



I had been quite stuck until today when I saw David H's helpful answer to a similar problem. I have included a solution using the same idea, but I would be interested to know if anyone has another method.



Answer



Here is a completely elementary proof, which only needs introductory calculus concepts:



$$a_{n+1} = \frac{(n+1)!}{(a+1)(a+2)\dots(a+n+1)} = \frac{1}{a-1}\left(\frac{(n+1)!}{(a+1)(a+2) \dots (a+n)} -\frac{(n+2)!}{(a+1)(a+2) \dots (a+n+1)}\right) = b_{n+1} - b_{n+2}$$



where $a_n$ is the term of our series and $$b_n = \frac{1}{a-1}\left(\frac{n!}{(a+1)(a+2) \dots (a+n-1)}\right)$$ with $b_1 = \frac{1}{a-1}$



Thus the sum we seek, telescopes!



Giving us




$$\sum_{k=1}^{n} a_k = \sum_{k=1}^{n} (b_{k} - b_{k+1}) = b_1 - b_{n+1}$$



Thus we just need to compute $\lim b_n$



(we basically need to show that $b_n \to 0$ to match the limit being $\frac{1}{a-1}$)



Now we have that $a \gt 1$, so let $a = 1 + x$ for $x \gt 0$.



$$ (a-1)b_n =\frac{n!}{(a+1)(a+2) \dots (a+n-1)} = \frac{1}{(1 + x/2)(1+x/3)\dots(1+x/n)} $$




Let $ M = \lceil x \rceil$ and consider the product



$$p_n = \prod_{k=M}^{n} \left(1 + \frac{x}{k}\right)$$



It is enough to show that $\log p_n \to \infty$ (that proves that $b_n \to 0$).



Now we have that $\dfrac{1}{1+t} \gt 1-t$ for $0 \lt t \le 1$ and thus integrating between $0$ and $y$ (where $y \le 1$) we get that



$$ \log(1+y) \ge y - \dfrac{y^2}{2}$$




Now $$\log p_n = \sum_{k=M}^{n} \log (1 + \frac{x}{k})$$



$$ \ge \sum_{k=M}^{n} (\frac{x}{k} - \frac{x^2}{2k^2})$$



This goes to $\infty$ as the harmonic series diverges, and the sum of reciprocals of the squares converges.



Thus the sum of your sequence is $$b_1 = \frac{1}{a-1}$$
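A brute-force check of the closed form (my addition). For $a$ close to $1$ the terms decay like $k^{-a}$, so the partial sums converge slowly:

```python
import math

def partial_sum(a, N):
    term, total = 1.0, 0.0
    for k in range(1, N + 1):
        term *= k / (k + a)   # running product prod_{j<=k} j/(j+a)
        total += term
    return total

print(partial_sum(3.0, 10_000), 1 / (3.0 - 1))    # ~= 0.5, fast convergence
a = math.sqrt(2)                                  # the motivating example
print(partial_sum(a, 10 ** 6), 1 / (a - 1))       # creeps toward 1 + sqrt(2)
```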


algebra precalculus - Complicate the expression for $\frac{p-1}{p}x+\frac{a}{p}x^{1-p}-a^{1/p}$



I am interested in finding a way to go between the RHS and LHS of the following equation
$$\frac{p-1}{p}x+\frac{a}{p}x^{1-p}-a^{1/p}=(x-a^{1/p})\left[\frac{p-1}{p}-\frac{1}{p}\left(\sum_{n=1}^{p-1}\left(\frac{a^{1/p}}{x}\right)^n\right)\right]$$
In the book, it is said to be an "easily" derived formula; however, even after I reverse engineered the RHS (recognizing the geometric sum and simplifying as much as possible) I have been unable to find a way from the left side to the right side.


I feel like I must be missing something since this manipulation was supposed to be easy but I do not see it. How would I have gone from the LHS to the RHS?


Answer



Working the other way round is usually not easy.



Observe that



\begin{array}{rcl}
X^2-2X+1 &=& (X-1)(X-1) \\
2X^3-3X^2+1 &=& (X-1)(2X^2-X-1) \\
3X^4-4X^3+1 &=& (X-1)(3X^3-X^2-X-1) \\
4X^5-5X^4+1 &=& (X-1)(4X^4-X^3-X^2-X-1) \\
& \vdots & \\
(p-1)X^{p}-pX^{p-1}+1 &=& (X-1)[(p-1)X^{p-1}-X^{p-2}-\ldots-X-1] \\
X &=& \dfrac{x}{a^{1/p}} \\
\dfrac{p-1}{p}x+\dfrac{a}{p}x^{1-p}-a^{1/p} &=&
\dfrac{(p-1)X^{p}-pX^{p-1}+1}{p}\cdot \dfrac{a^{1/p}}{X^{p-1}} \\
&=& a^{1/p}(X-1)
\left[
\dfrac{p-1}{p}-\dfrac{1}{p}
\left( \dfrac{1}{X}+\dfrac{1}{X^2}+\ldots+\dfrac{1}{X^{p-1}} \right)
\right]
\end{array}



The result follows.




Further point to be noticed:
\begin{align*}
nx^{n+1}-(n+1)x^{n}+1 &= (x-1)(nx^{n}-x^{n-1}-\ldots-x-1) \\
&= (x-1)^2[nx^{n-1}+(n-1)x^{n-2}+\ldots+2x+1]
\end{align*}
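A symbolic spot-check of this last identity with SymPy (my addition):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(2, 7):
    lhs = n * x ** (n + 1) - (n + 1) * x ** n + 1
    rhs = (x - 1) ** 2 * sum((j + 1) * x ** j for j in range(n))
    assert sp.expand(lhs - rhs) == 0
print("factorization verified for n = 2..6")
```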



real analysis - Evaluate $\sum_{k=2}^{\infty}\left(\sum_{n=2}^{\infty} {1 \over k^n}\right)$



I am trying to evaluate



$$\sum_{k=2}^{\infty}\left(\sum_{n=2}^{\infty} {1 \over k^n}\right)$$
First observation: ${1 \over k^n} < 1$. So, in the limit, the sum of the first $p$ terms of the inner sum is
$$\lim \limits_{p \to \infty}\left({1 \over k^2}{1 - {1 \over k^p} \over 1 - {1 \over k}}\right) = {1 \over k^2}{1 \over 1 - {1 \over k}} = {1 \over k(k - 1)}$$
The outer sum now looks like this
$$\sum_{k=2}^{\infty}{1 \over k(k - 1)} = {1 \over 2} + {1 \over 6} + {1 \over 12} + {1 \over 20} + \dots$$

We know that $\sum {1 \over k^2}$ converges, so the rewritten sum will converge as well. Do you have any ideas how to find the sum?


Answer



Hint:



$$
\frac{1}{2}+
\color{blue}{\frac{1}{6}}+
\color{red}{\frac{1}{12}}+
\color{green}{\frac{1}{20}}+...=\frac{1}{2}+
\color{blue}{\frac{1}{2}-\frac{1}{3}}+
\color{red}{\frac{1}{3}-\frac{1}{4}}+
\color{green}{\frac{1}{4}-\frac{1}{5}}+...
$$



which is an example for a telescoping series



Btw: You have proven that $\sum_{n\geq2}(\zeta(n)-1)=1$, congratulations :)
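A quick numeric check of that identity (my addition); since $\zeta(n)-1$ decays like $2^{-n}$, a short truncation suffices:

```python
from mpmath import zeta

print(sum(zeta(n) - 1 for n in range(2, 60)))  # ~= 1.0
```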


Tuesday, January 24, 2017

analysis - Upper and Lower Bounds of $\emptyset$



From some reading, I've noticed that $\sup(\emptyset)=\min(S)$, but $\inf(\emptyset)=\max(S)$, given that $\min(S)$ and $\max(S)$ exist, where $S$ is the universe in which one is working. Is there some inherent reasoning/proof as to why this is? It seems strange to me that an upper bound of a set would be smaller than a lower bound of the same set.


Answer



The supremum is defined to be the least upper bound. An upper bound for a set $K$ is defined to be an element $b$ such that for all $k\in K$, $k\leq b$. If $K$ is empty, then the condition "for all $k\in K$, $k\leq b$" is true by vacuity (you have an implication of the form "if $k$ is an element of the empty set, then $\lt$something happens$\gt$", and the antecedent is always false so the implication is always true).




Therefore, every element of $S$ is an upper bound for $\emptyset$; since $\sup(\emptyset)$ is the least upper bound, that means that $\sup(\emptyset)=\min($upper bounds of $\emptyset) = \min(S)$.



Similarly, the infimum is the greatest lower bound, and every element of $S$ is a lower bound for $\emptyset$, so $\inf(\emptyset) = \max($lower bounds of $\emptyset) = \max(S)$.



Yes, it is somewhat counterintuitive that you have $\sup(\emptyset)\leq\inf(\emptyset)$, but in fact the empty set is the only one for which you can have $\sup(\emptyset)\lt \inf(\emptyset)$. The empty set often causes counterintuitive results.


trigonometry - Simplify a quick sum of sines





Simplify $\sin 2+\sin 4+\sin 6+\cdots+\sin 88$



I tried using the sum-to-product formulae, but it was messy, and I didn't know what else to do. Could I get a bit of help? Thanks.


Answer



The angles are in arithmetic progression. Use the formula




$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



See here for two proofs (using trigonometry, or using complex numbers).



In your case, $a=b=2$ and $n=44$.
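Evaluating both sides numerically, reading the angles in degrees (my addition):

```python
import math

n = 44
a = b = math.radians(2)
direct = sum(math.sin(math.radians(2 * k)) for k in range(1, n + 1))
closed = math.sin(n * b / 2) / math.sin(b / 2) * math.sin(a + (n - 1) * b / 2)
print(direct, closed)  # both ~= 28.15 = sin(44°) sin(45°) / sin(1°)
```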


abstract algebra - How to prove that $z\gcd(a,b)=\gcd(za,zb)$



I need to prove that $z\gcd(a,b)=\gcd(za,zb)$.



I tried a lot, for example, looking at set of common divisors of the two sides, but I can't conclude anything from that. Can you please give me some advice how I can handle this problem? And $a,b,z \in \mathbb{Z}$.


Answer



Below are a few proofs of the gcd distributive law $\rm\:(ax,bx) = (a,b)x\:$ using Bezout's identity, universal gcd laws, and unique factorization. In each proof the first line serves as a hint.



First we show that the gcd distributive law follows immediately from the fact that, by Bezout, the gcd may be specified by linear equations. Distributivity follows because such linear equations are preserved by scalings. Namely, for naturals $\rm\:a,b,c,x \ne 0$


$\rm\qquad\qquad \phantom{ \iff }\ \ \ \:\! c = (a,b) $



$\rm\qquad\qquad \iff\ \: c\:\ |\ \:a,\:b\ \ \ \ \ \ \&\ \ \ \ c\ =\ na\: +\: kb,\ \ \ $ some $\rm\:n,k\in \mathbb Z$


$\rm\qquad\qquad \iff\ cx\ |\ ax,bx\ \ \ \&\ \ \ cx = nax + kbx,\ \,$ some $\rm\:n,k\in \mathbb Z$


$\rm\qquad\qquad { \iff }\ \ cx = (ax,bx) $


The reader familiar with ideals will note that these equivalences are captured more concisely in the distributive law for ideal multiplication $\rm\:(a,b)(x) = (ax,bx),\:$ when interpreted in a PID or Bezout domain, where the ideal $\rm\:(a,b) = (c)\iff c = gcd(a,b)$



Alternatively, more generally, in any integral domain $\rm\:D\:$ we may employ the universal definitions of GCD, LCM to generalize the above proof.


Theorem $\rm\ \ (a,b)\ =\ (ax,bx)/x\ \ $ if $\rm\ (ax,bx)\ $ exists in $\rm\:D.$


Proof $\rm\quad\: c\ |\ a,b \iff cx\ |\ ax,bx \iff cx\ |\ (ax,bx) \iff c\ |\ (ax,bx)/x\ \ \ $ QED


Such universal definitions often serve to simplify proofs, e.g. see this proof of the GCD * LCM law.



Alternatively, comparing powers of primes in unique factorizations, it reduces to the following $$ \min(a+c,\,b+c)\ =\ \min(a,b) + c$$


The proof is precisely the same as the prior proof, replacing gcd by min, and divides by $\le$, and



$$\begin{eqnarray} {\rm employing}\quad\ c\le a,b&\iff& c\le \min(a,b)\quad&&\rm[universal\ definition\ of\ \ min]\\ \rm the\ analog\ of\quad\ c\ \, |\, \ a,b&\iff&\rm c\ \ |\ \ gcd(a,b)\quad&&\rm[universal\ definition\ of\ \ gcd] \end{eqnarray}$$
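A randomized sanity check of the distributive law (my addition):

```python
from math import gcd
from random import randint

for _ in range(10_000):
    a, b, z = (randint(1, 10 ** 6) for _ in range(3))
    assert z * gcd(a, b) == gcd(z * a, z * b)
print("z*gcd(a,b) == gcd(z*a, z*b) on 10000 random triples")
```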


Monday, January 23, 2017

Functional Equation $f(x+y)=f(x)+f(y)+f(x)f(y)$

I need to find all the continuous functions from $\mathbb R\rightarrow \mathbb R$ such that $f(x+y)=f(x)+f(y)+f(x)f(y)$. I know, what I assume to be, the general way to attempt these problems, but I got stuck and need a bit of help. Here is what I have so far:



Try out some cases:



Let $y=0$: $$ \begin{align}
f(x)&=f(x)+f(0)+f(x)f(0)
\\ 0&=f(0)+f(x)f(0)
\\0 & = f(0)[1+f(x)]
\end{align}$$
Observe that either $f(0)=0$ or $f(x)=-1$. So this gives me one solution, but I am having trouble finding the other solution(s). Somebody suggested to me that $f(x)=0$ is also a solution but I can't find a way to prove what they said is true. Can anyone please, without giving away the answer, give me a teeny hint? I really want to figure this out as much as I can. I've tried the case when $y=-x$ and $x=y$ but I don't feel like those cases help me towards the solution.



Thanks in advance

calculus - Without using L'Hospital's rule or series expansion find $\lim_{x\to0} \frac{x-x\cos x}{x-\sin x}$.


Is it possible to find $\displaystyle{\lim_{x\to 0} \frac{x-x\cos x}{x-\sin x}}$ without using L'Hopital's Rule or Series expansion.



I can't find it.If it is dublicated, sorry :)


Answer



$$\dfrac{x(1-\cos x)}{x-\sin x}=\dfrac{x^3}{x-\sin x}\cdot\dfrac1{1+\cos x}\left(\dfrac{\sin x}x\right)^2$$


For $\lim_{x\to0}\dfrac{x^3}{x-\sin x}$ use Are all limits solvable without L'Hôpital Rule or Series Expansion, which gives $\lim_{x\to0}\frac{x^3}{x-\sin x}=6$; combined with $\frac{1}{1+\cos x}\to\frac12$ and $\left(\frac{\sin x}{x}\right)^2\to1$, the original limit is $6\cdot\frac12\cdot1=3$.
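A numeric check of the value (my addition):

```python
import math

for x in [0.1, 0.01, 0.001]:
    print(x, (x - x * math.cos(x)) / (x - math.sin(x)))  # approaches 3
```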


Sunday, January 22, 2017

elementary number theory - What is the logic/theorem/derivation behind finding the exponent of $p$ in $n!$ by $[n/p] + [n/p^2] + [n/p^3] + \cdots$?



The exponent of the prime $3$ in $100!$ is $48$; it means $100!$ is divisible by $3^{48}$. $$E_3(100!) = \left\lfloor\frac{100}3\right\rfloor + \left\lfloor\frac{100}{3^2}\right\rfloor + \left\lfloor\frac{100}{3^3}\right\rfloor + \left\lfloor\frac{100}{3^4}\right\rfloor = 33+11+3+1 = 48$$ What is the derivation/math behind the above logic?



Answer



Think in this way....


If $p$ is a prime number, you actually want to calculate the highest power of $p$ that exactly divides $n!$. Let $n=100$ and $p=3$ in your case.


1st step: look for the multiples of $p$ up to $n$.

$p, 2p, 3p, \ldots$ — and to count how many multiples of $p$ there are, just divide $n/p$ and take the integer part.


Example: $3, 6, 9, 12, 15,18,..,99 $ will contain at least one power of $3$


2nd step: look for the multiples of $p^2$. Why? Because each of them contributes a second power of $p$ (this step applies if $p^2\le n$).

$p^2, 2p^2, 3p^2, \ldots$ — and to count how many multiples of $p^2$ there are, just divide $n/p^2$ and take the integer part.


Example: $ 9, 18, 27,...,99$ will contain at least two powers of $3$.


3rd step: look for the multiples of $p^3$. Why? Because each of them contributes a third power of $p$ (this step applies if $p^3\le n$).

$p^3, 2p^3, 3p^3, \ldots$ — and to count how many multiples of $p^3$ there are, just divide $n/p^3$ and take the integer part.


Example: $ 27,54,81$ will contain at least three powers of $3$.


4th step: look for the multiples of $p^4$. Why? Because each of them contributes a fourth power of $p$ (this step applies if $p^4\le n$).

$p^4$ — and to count how many multiples of $p^4$ there are, just divide $n/p^4$ and take the integer part.


Example: $81$ will contain at least four powers of $3$.


Step 5 and beyond: since $p^5>n$ (here $3^5=243>100$), there are no multiples of $p^5$ up to $n$, so no higher powers of $p$ contribute.

Final answer: add the counts from steps 1–4, giving $33+11+3+1=48$.


However, the number of steps depends entirely on $n$ and $p$.
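The four steps are exactly Legendre's formula; here is a short check against direct factor counting (my addition):

```python
def legendre_exponent(n, p):
    # Sum of floor(n / p^i) over i >= 1.
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def brute_force(n, p):
    # Count the factors of p in 1 * 2 * ... * n directly.
    e = 0
    for m in range(2, n + 1):
        while m % p == 0:
            m //= p
            e += 1
    return e

print(legendre_exponent(100, 3), brute_force(100, 3))  # both 48
```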


Saturday, January 21, 2017

calculus - how to prove that $\ln(1+x)< x$


I want to prove that: $\ln(x+1)< x$.


My idea is to define: $f(x) = \ln(x+1) - x$, so:


$f'(x) = \dfrac1{1+x} - 1 = \dfrac{-x}{1+x} < 0, \text{ for }x >0$.


Which leads to $f(x) < f(0) = 0$ for $x>0$, i.e. $\ln(x+1) < x$.

Is that a valid proof? Any other ideas?


Thanks.


Answer



I think your approach is correct but you need to add some more details. Based on your approach let $f(x) = \log(1 + x) - x$ so that $f(0) = 0$. Clearly $$f'(x) = -\frac{x}{1 + x}$$ and hence $f'(x) > 0$ if $-1 < x < 0$ and $f'(x) < 0$ if $x > 0$. It follows that that $f(x)$ in increasing in $(-1, 0]$ and decreasing in $[0, \infty)$. Thus we have $f(x) < f(0)$ if $-1 < x < 0$ and $f(x) < f(0)$ if $x > 0$. It thus follows that $f(x) \leq f(0) = 0$ for all $x > -1$ and there is equality only when $x = 0$. So we can write $$\log(1 + x) \leq x$$ for all $x > -1$ and there is equality only when $x = 0$.


Note: We have considered $x > -1$ because $\log(1 + x)$ is not defined if $x \leq -1$.


sequences and series - Bernoulli's representation of Euler's number, i.e. $e=\lim \limits_{x\to \infty} \left(1+\frac{1}{x}\right)^x$



Possible Duplicates:
Finding the limit of $n/\sqrt[n]{n!}$
How come such different methods result in the same number, $e$?



I've seen this formula several thousand times: $$e=\lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x $$


I know that it was discovered by Bernoulli when he was working with compound interest problems, but I haven't seen the proof anywhere. Does anyone know how to rigorously demonstrate this relationship?



EDIT: Sorry for my lack of knowledge in this, I'll try to state the question more clearly. How do we prove the following?


$$ \lim_{x\to \infty} \left(1+\frac{1}{x}\right)^x = \sum_{k=0}^{\infty}\frac{1}{k!}$$


Answer



From the binomial theorem


$$\left(1+\frac{1}{n}\right)^n = \sum_{k=0}^n {n \choose k} \frac{1}{n^k} = \sum_{k=0}^n \frac{n}{n}\frac{n-1}{n}\frac{n-2}{n}\cdots\frac{n-k+1}{n}\frac{1}{k!}$$


but as $n \to \infty$, each term in the sum increases towards a limit of $\frac{1}{k!}$, and the number of terms to be summed increases so


$$\left(1+\frac{1}{n}\right)^n \to \sum_{k=0}^\infty \frac{1}{k!}.$$
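Watching both sides converge numerically (my addition):

```python
import math

series = sum(1 / math.factorial(k) for k in range(20))  # partial sum, already ~= e
for n in [10, 1000, 100000]:
    print(n, (1 + 1 / n) ** n)  # increases toward e
print(series, math.e)
```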


real analysis - Understanding expansion on base $k$

There are some times when one might need to use expansion of real numbers on some base $k$. One example is when dealing with Cantor's set, one uses expansion of the numbers inside $[0,1]$ on base $3$. The point is that I simply can't understand these expansions. I'm asking here on the more general context of expansion on base $k$ in order to try to make the question more useful.



As I understood, expansion of a number $a\in \mathbb{R}$ in base 3 means to write




$$a = \sum_{n=1}^\infty \dfrac{a_n}{3^n} \quad a_n \in \{0,1,2\}.$$



In that setting, I imagine expansion of $a$ in base $k$ would mean to write



$$a = \sum_{n=1}^\infty \dfrac{a_n}{k^n} \quad a_n \in \{0,1,\dots, k-1\}.$$



Now, what do these expansions really mean? I simply can't understand: we are decomposing numbers as certain series, but what do these series really mean? Why would anyone consider doing these expansions? What are the coefficients $a_n$?



I believe this is related to decimal expansions, that is when we write a number $a = a_0.a_1a_2a_3\dots$ but I'm unsure how to make this connection rigorous. Also, I believe this would be true just for $k=10$, so for the other cases it would still be something hard to grasp.




In truth, I've seen this used quite a few times in some proofs, the Cantor set being the most well-known example. But up to now I never understood correctly what these expansions are and how to work with them.
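Not part of the original question, but the digits $a_n$ can be made concrete by the greedy algorithm: multiply by $k$, peel off the integer part, carry the remainder forward. A small sketch:

```python
def base_k_digits(x, k, count=10):
    # Greedy base-k expansion of x in [0, 1): each digit is floor(k * x),
    # and the fractional remainder is carried to the next step.
    out = []
    for _ in range(count):
        x *= k
        d = int(x)
        out.append(d)
        x -= d
    return out

print(base_k_digits(0.25, 3))  # 1/4 = 0.020202..._3, so [0, 2, 0, 2, ...]
```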

Thursday, January 19, 2017

Measure of set equal to zero when $n$ goes to infinity




Let $(X,A,\mu)$ be a measure space and $f:X\to\mathbb R$ measurable. For each $n\in\mathbb N$ let $E_n=\{x\in X:\vert f(x)\vert\gt\frac{1}{n}\}$. Prove that each $E_n$ is measurable and that if $\lim\mu(E_n)=0$ then $f=0$ a.e.



I suppose there is not much difficulty proving $E_n$ measurable, since if we split $\vert f(x)\vert\gt\frac{1}{n}$ then we get $f^{-1}(-\infty,-1/n)$ and $f^{-1}(1/n,\infty)$ for all $n$. As $f$ is measurable, both preimages are measurable.



I am not sure how to proceed on the second part:



if $\lim\mu(E_n)=0$ then $f=0$ a.e.



I can see that when $n\to\infty$ then $f=0$




Taking the measure, $\lim\mu(E_n)=\mu\{x\in X:f(x)=0\}=0$? This clearly does not fit the definition of a.e.


Answer



Since $E_1 \subset E_2 \subset E_3 \subset \cdots$ you have $\mu(E_1) \le \mu(E_2) \le \mu(E_3) \le \cdots$



The fact that $\lim_{n \to \infty} \mu(E_n) = 0$ means that $\mu(E_n) = 0$ for all $n$.



Since $\cup_n E_n = \{x : |f(x)| > 0\}$ you get
$$\mu(\{x : f(x) \not= 0\}) = \mu(\{x : |f(x)| > 0\}) \le \sum_n \mu(E_n) = 0.$$


calculus - Evaluate $\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}} \,\mathrm dx$




Evaluate
$$\displaystyle\int_0^4 \frac{\ln x}{\sqrt{4x-x^2}} \,\mathrm dx$$




How do I evaluate this integral? I know that the result is $0$, but I don't know how to obtain this. Wolfram|Alpha yields a non-elementary antiderivative for the indefinite integral, so I don't think I can directly integrate and then plug in the upper/lower limits.


Answer




First let $t = x-2$ this way $4x-x^2 = 4 - (x-2)^2 = 4-t^2$. Substitute,
$$ \int_{-2}^2 \frac{\log(t+2)}{\sqrt{4-t^2}} ~ dt $$
Now let, $\theta = \sin^{-1}\tfrac{t}{2}$ so that $2\sin \theta = t$ and hence, after substitute,
$$ \int_{-\pi/2}^{\pi/2} \frac{\log [2(1+\sin \theta)]}{2\cos \theta} 2\cos \theta ~ d\theta = \pi \log 2 + \int_{-\pi/2}^{\pi/2} \log(1+\sin \theta)~d\theta $$
To solve this integral, replace $\theta$ by $-\theta$,
$$ I = \int_{-\pi/2}^{\pi/2} \log(1+\sin \theta) ~d\theta= \int_{-\pi/2}^{\pi/2} \log(1-\sin \theta)~d\theta$$
Now,
$$ I + I = \int_{-\pi/2}^{\pi/2} \log(1-\sin^2 \theta) ~ d\theta = 4\int_{0}^{\pi/2} \log (\cos \theta) ~ d\theta$$
The last integral is a well-known integral that computes to $-\frac{\pi}{2}\log 2$.



Your final answer is $\pi \log 2 -\pi\log 2 = 0$.
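A numeric confirmation (my addition); the integrand has integrable endpoint singularities, which quad copes with here:

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.log(x) / np.sqrt(4 * x - x ** 2), 0, 4)
print(val, err)  # val ~= 0, up to the reported quadrature error
```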



calculus - Universal Chord Theorem



Let $f \in C[0,1]$ and $f(0)=f(1)$.



How do we prove $\exists a \in [0,1/2]$ such that $f(a)=f(a+1/2)$?




In fact, for every positive integer $n$, there is some $a$, such that $f(a) = f(a+\frac{1}{n})$.



For any other non-zero real $r$ (i.e not of the form $\frac{1}{n}$), there is a continuous function $f \in C[0,1]$, such that $f(0) = f(1)$ and $f(a) \neq f(a+r)$ for any $a$.



This is called the Universal Chord Theorem and is due to Paul Levy.



Note: the accepted answer answers only the first question, so please read the other answers too, and also this answer by Arturo to a different question: https://math.stackexchange.com/a/113471/1102







This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.


Answer



You want to use the intermediate value theorem, but not applied to $f$ directly. Rather, let $g(x)=f(x)-f(x+1/2)$ for $x\in[0,1/2]$. You want to show that $g(a)=0$ for some $a$. But $g(0)=f(0)-f(1/2)=f(1)-f(1/2)=-(f(1/2)-f(1))=-g(1/2)$. This gives us the result: $g$ is continuous and changes sign, so it must have a zero.
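To see the argument in action (my addition), one can bisect $g$ for a sample function; $f(x)=\sin(2\pi x)+x(1-x)$ below is an arbitrary choice with $f(0)=f(1)$:

```python
import math

def f(x):
    return math.sin(2 * math.pi * x) + x * (1 - x)

def g(x):
    return f(x) - f(x + 0.5)

# g(0) = -g(1/2), so g changes sign on [0, 1/2]; bisect to locate a zero.
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo, f(lo), f(lo + 0.5))  # f(a) == f(a + 1/2) at the root a
```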


real analysis - Length convergence paradox

I saw this paradox a long time ago but have never been able to find a resolution to it. In the diagram below, the length of the straight line is of course $\sqrt{2}$, but the length of the 'pyramid' curve is always $2\cdot n\cdot\frac{1}{n}= 2$ (shown below is $n=10$). In the limit $n \to\infty$ it appears that the lines converge, but of course their lengths do not. Why does this happen, and how can this apparent paradox be resolved?
[Diagram: staircase curve of $n$ steps over the diagonal of a unit square]

real analysis - If the condition of differentiability holds for the rationals then the function is differentiable?



$f:\mathbb{R}\rightarrow\mathbb{R}$ continuous, $a\in\mathbb{R}$. Suppose that there exists $L\in\mathbb{R}$ such that for every $\varepsilon>0$ there exists $r(\varepsilon)>0$ such that $|\frac{f(x)-f(a)}{x-a}-L|<\varepsilon$ for every $x\in\mathbb{Q}$ with $0<|x-a|<r(\varepsilon)$. Show that $f$ is differentiable at $a$ with $f'(a)=L$.

Answer



Fix a point $x\neq a$ such that $0 < |x - a| < r(\epsilon/2)$. Then, at the point x, the function $$g(y) = \frac{f(y) - f(a)}{y - a}$$ is continuous. Now, pick a point $p \in Q$ near $x$ such that $0< |p-a| < r(\epsilon/2)$ and $|g(p) - g(x)| < \epsilon/2$. This is possible by continuity. Now, use your condition to conclude via the triangle inequality, that
$$ \left|\frac{f(x) - f(a)}{x - a} - L\right| = |g(x) - L| \le |g(x) - g(p)| + |g(p) - L| < \epsilon/2 + \epsilon/2 = \epsilon.$$




It follows by definition that $f$ is differentiable at $a$ with $f'(a) = L$.


Wednesday, January 18, 2017

abstract algebra - Explicit construction of a finite field with $8$ elements


Give an explicit construction of the finite field $K$ containing $8$ elements, as a quotient of an appropriate polynomial ring. Include the multiplication table of the group $K^{*}=K\setminus \{0\},$ and write $K^{*}=\langle \alpha \rangle$ for some $\alpha \in K.$




I have no idea how to approach this problem. Can anyone guide me in the right direction? Thanks.
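Not an answer from the original thread, but a computational sketch of the standard construction $\mathbb{F}_2[X]/(X^3+X+1)$ may help (the cubic is irreducible over $\mathbb{F}_2$ since it has no root there):

```python
# Elements of F_8 as 3-bit integers: bit i holds the coefficient of x^i.
MOD = 0b1011  # x^3 + x + 1, irreducible over F_2

def mul(a, b):
    # Carry-less multiplication, then reduction mod x^3 + x + 1.
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= MOD << (i - 3)
    return r

# alpha = x (encoded as 0b010) generates the cyclic group K* of order 7.
powers, p = [], 1
for _ in range(7):
    powers.append(p)
    p = mul(p, 0b010)
print(powers)  # [1, 2, 4, 3, 6, 7, 5]: every nonzero element appears
```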

calculus - Assistance on beginning the integral $intfrac{dx}{(x+1)(n-x)}$

This is the integral
$$\int\frac{dx}{(x+1)(n-x)}=\int kdt$$
I just need some assistance on how to begin the left-side integral, and I will most likely be able to continue it from there. Thank you.
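One standard way to begin is partial fractions; SymPy can produce the split (my addition):

```python
import sympy as sp

x, n = sp.symbols('x n')
print(sp.apart(1 / ((x + 1) * (n - x)), x))
# The decomposition is 1/(n+1) * ( 1/(x+1) + 1/(n-x) ),
# after which each piece integrates to a logarithm.
```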

discrete mathematics - Prove $\sum_{k=0}^n k4^k = \frac 49((3n-1)4^n + 1)$ by induction

Prove that for every position integer $n$ that


$$ \sum_{k=0}^n k4^k = \frac 49((3n-1)4^n + 1) $$



Proof: Let $P(n)$ denote the above statement.


Base case: $n=1$ : Note that $$ \sum_{k=1}^1 k4^k = \frac 49((3(1)-1)4^{(1)} + 1) $$


$\frac 49((3(1)-1)4^{(1)} + 1) = \frac49((2)4+1) = \frac49(8+1) = \frac 49(9) = 4$


$k4^k = (1)4^{(1)} = 4$


So P(1) holds.


Inductive Step: Let $s\ge1$. Assume P(s), so


$$ \sum_{k=1}^s k4^k = \frac 49((3s-1)4^s + 1) $$


Note


$$ \sum_{k=1}^{s+1} k4^k = \sum_{k=1}^{s} k4^k + (s+1)4^{s+1} $$


and by inductive hypothesis:



$$ \frac 49((3s-1)4^s + 1) + (s+1)4^{s+1} $$


I'm afraid I'm stuck after this point. I know my endpoint needs to be:


$$ \sum_{k=1}^{s+1} k4^k = \frac 49((3(s+1)-1)4^{s+1} + 1) $$


but I don't know how to get from the asterisks to the above. Any help would be greatly appreciated.
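Not an answer from the thread, but an exact check (my addition) that the displayed expression equals the desired endpoint, which is what the remaining algebra must show:

```python
from fractions import Fraction

for s in range(1, 10):
    starred = Fraction(4, 9) * ((3 * s - 1) * 4 ** s + 1) + (s + 1) * 4 ** (s + 1)
    target = Fraction(4, 9) * ((3 * (s + 1) - 1) * 4 ** (s + 1) + 1)
    assert starred == target
print("inductive-step identity verified for s = 1..9")
```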

combinatorics - Alternative combinatorial proof for $\sum\limits_{r=0}^n\binom{n}{r}\binom{m+r}{n}=\sum\limits_{r=0}^n\binom{n}{r}\binom{m}{r}2^r$

I have a question with regards to combinatorics. I am supposed to show the following combinatorial identity: $\sum\limits_{r=0}^n\binom{n}{r}\binom{m+r}{n}=\sum\limits_{r=0}^n\binom{n}{r}\binom{m}{r}2^r$.



The algebraic way is to consider the coefficient of $t^m$ in the expansion of $(1+t)^n(1-t)^{-(n+1)}$. For $r\in\{0,1,\cdots,n\}$, we have the coefficient of $t^{n-r}$ in the expansion of $(1+t)^n$ to be equal to $\binom{n}{r}$, and the coefficient of $t^{m+r-n}$ in the expansion of $(1-t)^{-(n+1)}$ to be equal to $\binom{m+r}{n}$. By the addition and multiplication principles this will show that the coefficient of $t^m$ in the expansion of $(1+t)^n(1-t)^{-(n+1)}$ will be equal to $\sum\limits_{r=0}^n\binom{n}{r}\binom{m+r}{n}$.




On the other hand, we note that $(1+t)^n(1-t)^{-(n+1)}=(1+\frac{2t}{1-t})^n(1-t)^{-1}=\sum\limits_{r=0}^n\binom{n}{r}2^rt^r(1-t)^{-(r+1)}$. As the coefficient of $t^{m-r}$ is equal to $\binom{m}{r}$, this implies that the coefficient of $t^m$ in the expansion of $(1+t)^n(1-t)^{-(n+1)}$ is equal to $\sum\limits_{r=0}^n\binom{n}{r}\binom{m}{r}2^r$.



What I am thinking of, is to think of a way to prove this identity without the use of the generalized binomial series. I was trying to think of a combinatorial method to solve this problem. In particular, I was trying to prove the combinatorial identity as follows:



Firstly, let us be given a class of $n$ boys and a class of $m$ girls, where $n\leq m$. Choose a team $A$ of $r$ boys, and choose a team $B$ of $r$ girls, and finally choose any subset (may be empty) of boys from $A$ to clean the classroom, where $r\in\{0,1,\cdots,n\}$. For each value of $r$, we see that there are $\binom{n}{r}$ choices for $A$, $\binom{m}{r}$ choices for $B$, and subsequently there are $2^r$ ways to choose a subset from $A$. In this way we get the number of ways to do so to be equal to the expression on the RHS of the identity.



What I am struggling though, is to be able to count the number of desired ways in a way that will give me the expression on the LHS of the identity. I tried firstly to choose the boys to clean the classroom before choosing the sets $A$ and $B$, but I wound up with some messy expression that does not seem close to matching the expression on the LHS.



I would like to know if there is another way of looking at this method (or I could have been missing out on something), or if there is any other combinatorial method (or any other algebraic method that does not involve the use of generalized binomial series) that one could think of to show the desired identity.
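For what it's worth, the identity itself is easy to confirm by machine for small parameters (my addition):

```python
from math import comb

def lhs(n, m):
    return sum(comb(n, r) * comb(m + r, n) for r in range(n + 1))

def rhs(n, m):
    return sum(comb(n, r) * comb(m, r) * 2 ** r for r in range(n + 1))

assert all(lhs(n, m) == rhs(n, m) for n in range(8) for m in range(n, 12))
print("identity verified for small n <= m")
```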

trigonometry - Describing sine in terms of cosine



The problem says:





Show that $\sin{\theta} = \sqrt{1 - \cos^2{\theta}}$ if $\sin{\theta}$ is positive and $\sin{\theta} = -\sqrt{1 - \cos^2{\theta}}$ if $\sin{\theta}$ is negative.




I know that $\sin^2{\theta} + \cos^2{\theta}=1$ which can be proven with the Pythagorean Theorem. I also understand we can rearrange the above formula to get $\sin{\theta} = \sqrt{1 - \cos^2{\theta}}$, but how to show that if $\sin{\theta}$ is negative, then $\sin{\theta} = -\sqrt{1 - \cos^2{\theta}}$?



I can't bring that together with the Pythagorean Theorem (this was my first thought to use) to proof it.


Answer



Note that if $x^2=16$ we get $x=\pm 4$ where $4$ is positive and $-4$ is negative.




Similarly $$\sin ^2x = 1- \cos ^2 x \implies \sin x = \pm \sqrt {1-\cos ^2 x}$$



where the positive sign is for the positive $\sin x$ and the negative sign for the negative $\sin x$


real analysis - Baire application to sequence of functions

Let $\{f_k\}$ be a sequence of continuous functions $f_k:\mathbb{R} \mapsto [0,\infty).$
Which of those statements can be true? (Not simultaneously)




1) The sequence $\{f_k(x)\}$ is not bounded iff $x$ is in $\mathbb{Q}$



2) The sequence $\{f_k(x)\}$ is not bounded iff $x$ is not in $\mathbb{Q}$



3) $\lim_{k \to \infty} f_k (x)= \infty$ iff $x$ is not in $\mathbb{Q}$



Hint: use Baire theorem.
I have no clue how to approach this problem.

Eigenvalue decomposition of Cross Gramian matrix

For a symmetric state space system $G(s)=\left\{A,B,C,D\right\}$, the cross Gramian matrix $R$ is the solution of $$AR+RA+BC=0$$



Using eigenvalue decomposition, problem is to obtain a matrix U which diagonalizes the cross gramian matrix $R$, resulting a diagonal matrix $S$ such that $$S=U^{-1}RU$$



Note: For state space symmetric system, $A=A^T, C=B^T$ where $T$ is the transpose of a matrix.
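Not part of the original post, but both steps can be sketched numerically with NumPy/SciPy on a random hypothetical symmetric system:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = -(M + M.T) / 2 - 4 * np.eye(4)  # symmetric, shifted to be stable
B = rng.standard_normal((4, 1))
C = B.T                              # C = B^T for the symmetric system

R = solve_sylvester(A, A, -B @ C)    # solves AR + RA = -BC

# R inherits symmetry here, so an orthogonal U diagonalizes it: S = U^{-1} R U.
S, U = np.linalg.eigh(R)
print(np.allclose(np.linalg.inv(U) @ R @ U, np.diag(S)))  # True
```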

elementary number theory - Why is $a^n - b^n$ divisible by $a-b$?



I did some mathematical induction problems on divisibility




  • $9^n$ $-$ $2^n$ is divisible by 7.


  • $4^n$ $-$ $1$ is divisible by 3.

  • $9^n$ $-$ $4^n$ is divisible by 5.



Can these be generalized as $a^n - b^n = (a-b)N$, where $N$ is an integer? But why is $a^n - b^n = (a-b)N$?



I also see that $6^n - 5n + 4$ is divisible by $5$, which is $6-5+4$, and $7^n + 3n + 8$ is divisible by $9$, which is $7+3+8=18=9\cdot2$.




Are they just a coincidence or is there a theory behind?



Is it about modular arithmetic?


Answer



They are all special cases of the Factor Theorem: $\rm\: x-y\:$ divides $\rm\:f(x)-f(y),\:$ for $\rm\:f\:$ a polynomial with integer coefficients, i.e. $\rm\:f(x)-f(y) = (x-y)\:g(x,y)\:$ for a polynomial $\rm\:g\:$ with integer coefficients. The divisibility relation remains true when one evaluates the indeterminates $\rm\:x,y\:$ at integer values (this yields your examples if we let $\rm\:f(z) = z^n).$



Said simpler: $\rm\: mod\ x\!-\!y\!:\ \ x\equiv y\:\Rightarrow\: f(x)\equiv f(y)\ $ by the Polynomial Congruence Rule.



for example: $\ \rm\ mod\ 9\!-\!4\!:\ \ 9\equiv 4\:\Rightarrow\: 9^n\equiv 4^n$
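A quick randomized check of the divisibility (my addition):

```python
from random import randint

for _ in range(1000):
    a, b, n = randint(-50, 50), randint(-50, 50), randint(1, 10)
    if a != b:
        assert (a ** n - b ** n) % (a - b) == 0
print("a^n - b^n is divisible by a - b on all sampled cases")
```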


Tuesday, January 17, 2017

calculus - Suppose that $f$ is a real valued function such that its second derivative is discontinuous.Can you give some example?


In an interview somebody asked me the following question but I failed to give the answer.
Suppose that $f$ is a real valued function such that its second derivative is discontinuous. Can you give some example?


Answer



The answers so far give differentiable functions that fail to have a second derivative at some point. If you want to second derivative to exist everywhere and be discontinous somewhere, you can use the following function:


$$f(x) = \int_0^x t^2 \sin(\frac1t)\ dt $$


Its first derivative is $x \mapsto x^2 \sin(\frac1x)$, which is differentiable everywhere, and while its derivative (and hence the second derivative of $f$) is defined everywhere, it is discontinuous at $0$. Therefore, $f$, $f'$ and $f''$ are defined everywhere, and $f''$ is discontinuous at $0$.


convergence divergence - Simple Analogy to explain $\sum_{n=1}^\infty n = -1/12$

I'm looking for a simplified analogy to explain why the following formula does not actually mean what it seems to mean:


$\sum_{n=1}^\infty n = -\frac{1}{12}$


I get this question all the time from high school students, probably because it was popularized by Numberphile on Youtube. (See video here and see associated post here)


Since the concept of power series expansion is tough for a lot of high school students to get (let alone the zeta function or analytic continuation), I'd like to give them some accessible way to understand how that equal sign isn't really doing what we normally think it is.


I am partially indebted to the answer of user3002473 here.


Please let me know if my understanding of the problem is correct and if my analogy makes sense and is apt:


  • My understanding of the problem:

The zeta function exists everywhere in the complex plane, but its power series representation is valid only on a portion of the plane.


The equation $\sum_{n=1}^\infty n = -\frac{1}{12}$ is therefore the result of equating the "analytic continuation of the function" at a point outside the power series representation domain with the numerical value of the power series computed at that point.



In other words, each side of the equal sign is a valid representation of the zeta function, but neither representation is actually trying to communicate the numerical value of the function at that point.


  • My analogy:

Consider a non-vertical line in the plane with slope, $2$, and y-intercept, $3$. We can describe this line as a function between two sets of numbers, $f:X\rightarrow Y$


Clearly, we can see that the graph is one visual representation of the function, but we can also represent it as an equation in slope-intercept form: $f(x) = 2x+3$


Now, suppose I wanted to represent this line as a function with $x$ in the denominator. I could do that like this:


$g(x) = \frac{2x^2+3x}{x}$


This is clearly a valid representation of my linear function, but it breaks down at the point $x=0$, like so:


$g(0) = \frac{2\times 0^2+3\times 0}{0} = \frac{0}{0}$


As we all know, $\frac{0}{0}$ is "undefined".


Key point: The original function exists at $x=0$, but the specific representation $g(x)$ fails at that point.



Now, I could say, "We all know that $\frac{0}{0}$ is undefined. But for this function, we have enough information to 'define' it."


That is, we know that in the original function, the input $x=0$ is associated uniquely with the output $3$.


So, since $g(x)$ is a representation of our function, we could (strongly waving our hands) say that the output $g(0) = 3$


But then, we've just "proved": $3 = \frac{0}{0}$


No, not really. All we did was take equate two different representations, each valid over different domains, and equate them at a point they don't have in common.


Is this a reasonably accurate analogy to the trouble with $\sum_{n=1}^\infty n = -\frac{1}{12}$ ?

real analysis - Why is $1^{+\infty}$ not $1$, although $1^{+\infty}=\lim _{x\to 0+}(\frac{\sin x}{x})^{1/x}=1$?





It is well known that $1^{+\infty}$ is an indeterminate case. I have come across the following problem, which led me to say that $1^{+\infty}=1$.

$1^{+\infty}$ can be written as $1^{+\infty}=\lim _{x\to 0+}(\frac{\sin x}{x})^{1/x}$, which is $1$; then $1^{+\infty}=1$ and it's not an indeterminate case. I don't know where I'm wrong! Wolfram Alpha also says that $\lim _{x\to 0+}(\frac{\sin x}{x})^{1/x}=1$, which confused me.



Edit: I have edited the question to show what confused me on the limit-calculation side; I have not changed my question.


Answer



If you think that
$$\lim _{x\to 0^+}\bigg(\frac{\sin x}{x}\bigg)^{1/x} = \bigg(\lim_{x \to 0^+}\frac{\sin x}{x}\bigg)^{\lim_{x \to 0^+}1/x}$$
that is not correct. Here, you can find your answer I think: Why is $1^{\infty}$ considered to be an indeterminate form


real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

linear algebra - Monotone Convergence Theorem for nonnegative functions (not quite a decreasing sequence)



This question suggests that the MCT for functions is to be applied, but I can't see how this could be done.





Assume $g_n: X \to \bar{\mathbb{R}}$ is a sequence of nonnegative measurable functions satisyfing $$\int g_nd\mu < \frac{1}{n^2}$$
for each $n\geq1$. Using the Monotone Convergence Theorem for nonnegative functions, or otherwise, prove that $$\sum_{n=1}^{\infty}g_n(x)<+\infty$$ $\mu$-almost everywhere.




It seems to me that $g_n$ is similar to a decreasing sequence, although not necessarily for all $x$; and that as $n$ tends to infinity, the integral must tend to $0$, so somehow the 'limit' of $g_n$ must be equal to $0$ $\mu$-almost everywhere. But since $g_{n+1}$ is not necessarily less than $g_n$ for all $x$, I can't see that the pointwise limit even exists.


Answer



Let $f_n=\sum_{k=1}^ng_k$. Then the sequence $\{f_n\}$ is increasing because the $g_k$ are non-negative, hence by the monotone convergence theorem
$$ 0\leq \int \sum_{k=1}^{\infty}g_k\;d\mu=\sum_{k=1}^{\infty}\int g_k\;d\mu\leq \sum_{k=1}^{\infty}\frac{1}{k^2}<\infty$$



Since $\sum_{k=1}^{\infty}g_k$ is a non-negative function with finite integral, it must be finite almost everywhere.



abstract algebra - Construct a finite field of order 27

My thoughts for constructing a finite field of order $27$: we want a field with $p^n$ elements, where $p = 3$ and $n = 3$, so we are looking for a cubic polynomial in $\mathbb{F}_3[X]$ that does not factor.



Could this be thought of as looking for a cubic polynomial in $\mathbb{F}_3[X]$ with no roots in $\mathbb{F}_3$? Could this polynomial work: $x^3 + 2x^2 + 1$ ?
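A quick computational check (sketch) confirms that $x^3+2x^2+1$ has no roots in $\mathbb{F}_3$; since a cubic with no roots over a field is irreducible there, $\mathbb{F}_3[X]/(x^3+2x^2+1)$ is a field with $3^3=27$ elements:

    p = 3
    f = lambda t: (t**3 + 2*t**2 + 1) % p
    print([f(t) for t in range(p)])   # [1, 1, 2] -- no zeros, hence no roots in F_3

    # Elements of F_27: residues a + b*X + c*X^2 with a, b, c in F_3
    field = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
    print(len(field))                 # 27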

Boundedness and Riemann-integrability implies Continuity at some point (Analysis)





Let $f$ be bounded and Riemann-integrable on $[a,b]$. Prove that $f$ is continuous at some point $x \in [a,b]$ and is also continuous on a dense subset $D$ of $[a,b].$




Let $\epsilon >0$. Since $f$ is Riemann-integrable, there exists a partition $P$ such that $U(P,f,\alpha) - L(P,f,\alpha) < \epsilon$, where $\alpha$ is an increasing function. Can we conclude the first part of the proof (that $f$ is continuous at some point $x \in [a,b]$) by taking the partition $P$ to be $[a,b]$? If not, how should we proceed?



Some definitions:



A set $E$ is dense in $X$ if every point of $X$ is a limit point of $E$, or is a point of $E$ (or both).



Let $A \subset \mathbb{R}$, let $f:A \to \mathbb{R}$, and let $c \in A$. $f$ is continuous at $c$ if for any $\epsilon >0$ there exists $\delta >0$ such that $x \in A$ implies that if $|x-c| < \delta$ then $|f(x)-f(c)| < \epsilon$.




A function is Riemann-integrable if $\inf U(P,f,\alpha)
= \sup L(P,f,\alpha)$, where the inf and sup are taken over all partitions.



EDIT:



For any real function $f$ bounded on $[a,b]$ denote



$\displaystyle U(P,f,\alpha ) = \sum_{i=1}^n M_i \Delta \alpha _i$ and




$\displaystyle L(P,f,\alpha ) = \sum_{i=1}^n m_i \Delta \alpha _i$, where



$M_i = \sup \{ f(t): x_{i-1} \leq t \leq x_i \}$ and



$m_i = \inf \{ f(t): x_{i-1} \leq t \leq x_i \}$.



A function is Riemann-integrable if $\inf U(P,f,\alpha)
= \sup L(P,f,\alpha)$, where the inf and sup are taken over all partitions. (This is Rudin's definition of Riemann-integrable. $\alpha$ is an increasing function so that $\Delta \alpha _ i = \alpha (x_i) - \alpha (x_{i-1})$. For example, if $\alpha = x$, we get the definition of $\Delta x$.)


Answer



Define the oscillation $\omega_f(U)$ for an open set $U$ as $\omega_f(U)=\sup_{x\in U} f(x)-\inf_{x\in U}f(x)$. Then if $x\in \mathbb{R}$, define $$\omega_f(x) = \lim_{\epsilon\to 0} \omega_f(B_\epsilon(x)).$$




It's easy to see that $f$ is continuous at $x$ if and only if $\omega_f(x)=0$. Now consider $A_n:=\omega_f^{-1}([0,1/n))$ for $n > 0$. I claim that $A_n$ is open and dense for all $n$; then the set of points of continuity, $\bigcap_{n\in \Bbb{N}_+} A_n$, will be dense by the Baire category theorem.



Thus we just need to show that $A_n$ is open and dense.
Suppose $x\in A_n$. Then $x$ has a neighborhood $V$ such that $\omega_f(V)< 1/n$. Hence for all $y\in V$, $\omega_f(y)\le \omega_f(V)<1/n$, so $V$ is an open neighborhood of $x$ contained in $A_n$. Thus $A_n$ is open.

As for density, here we need to use the integrability of $f$. Suppose $(r,s)$ is a nonempty interval contained in the complement of $A_n$. Then $\omega_f(x) \ge 1/n$ for all $x\in (r,s)$, and hence $\omega_f\big((r,s)\big) \ge 1/n$. Now take any partition $P$ containing $r$ and $s$. Then $U(P,f,\alpha)- L(P,f,\alpha) \ge \frac{1}{n}(\alpha(s)-\alpha(r))$ for any such $P$, so the Riemann integral doesn't converge. (This assumes $\alpha$ is strictly increasing; if it weren't, bad stuff could happen where it remained constant.) Therefore $A_n$ is open and dense, so by the Baire category theorem the points of continuity are dense.
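To see the oscillation in action, here is a rough numerical estimator of $\omega_f$ on shrinking balls (a sketch only: a finite grid can underestimate the true sup minus inf). For a jump function it flags the discontinuity at $0$:

    def oscillation(f, x, eps_list=(1e-1, 1e-2, 1e-3), samples=1001):
        for eps in eps_list:
            pts = [x - eps + 2 * eps * k / (samples - 1) for k in range(samples)]
            vals = [f(t) for t in pts]
            print(eps, max(vals) - min(vals))

    step = lambda t: 1.0 if t >= 0 else 0.0
    oscillation(step, 0.0)   # stays at 1.0: omega_f(0) = 1, discontinuous at 0
    oscillation(step, 0.5)   # drops to 0.0: continuous at 0.5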


Monday, January 16, 2017

Basic divisibility of large numbers.

So I'm just going through KhanAcademy to refresh my basic pre-arithmetic and although it's embarassing I thought I'd get this thing checked up just for safety:



https://www.khanacademy.org/math/arithmetic/factors-multiples/divisibility_tests/v/divisibility-tests-for-2--3--4--5--6--9--10



In this video we check whether large numbers are divisible by 2, 3, 4, 5, 6, 9, 10. Checking divisibility by 2 was simply a matter of whether the last digit was even or odd. For 3 it was adding up all the digits in the number and seeing whether that sum was divisible by 3. My question is, does this method work for every number? Just add all the digits up and see whether the sum is divisible by the number? He used other methods for the other divisors, but I carried on with the digit-adding method and it seemed to work for the few I tried.



I appreciate any help, thanks a lot.



EDIT: Upon further tests it seems apparent that it's not. My bad.
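For the record, a small script (sketch) confirms that the digit-sum test characterizes divisibility exactly for $3$ and $9$ but not for other divisors such as $7$:

    def digit_sum(n):
        return sum(int(d) for d in str(n))

    for d in (3, 7, 9):
        ok = all((digit_sum(n) % d == 0) == (n % d == 0) for n in range(1, 10_000))
        print(d, ok)   # 3 True, 7 False, 9 True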

Sunday, January 15, 2017

sequences and series - How to calculate $S_{n}=sum_{i=1}^{n}frac{1}{i^{2}}$

In math lesson, my teacher told me that Euler once used a very delicate method to calculate $\displaystyle S_{n}=\sum_{i=1}^{n}\frac{1}{i^{2}}$ and wrote a paper about it.



I wonder how to calculate this. I need a precise answer, not the answer that it's less than a number such as 2.
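For reference, the limit Euler found is $\sum_{i=1}^{\infty} 1/i^2 = \pi^2/6$ (the Basel problem); the finite partial sum itself has no elementary closed form. A quick numeric look at the convergence (sketch):

    import math

    target = math.pi**2 / 6   # 1.6449...
    for n in [10, 100, 1000, 10_000]:
        s = sum(1.0 / i**2 for i in range(1, n + 1))
        print(n, s, target - s)   # the gap shrinks roughly like 1/n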

Saturday, January 14, 2017

probability - Expectation of nonnegative Random Variable




Show that $E\left[X\right]=\int_{0}^{\infty}\left[1-F\left(x\right)\right]dx$ for a nonnegative random variable $X$ with distribution function $F$.



Can someone help me give me some pointers as to how to prove this relation?


Answer



Let $p$ be the probability measure. We have that $\int_{0}^{\infty}\left[1-F\left(x\right)\right]dx=\int_{0}^{\infty}\Pr\left[X>x\right]dx=\int_{0}^{\infty}\left[\int 1_{X>x}\,dp\right]dx$. Using Fubini's theorem, we have $\int_{0}^{\infty}\left[\int 1_{X>x}\,dp\right]dx=\int\left[\int_{0}^{\infty}1_{X>x}\,dx\right]dp=\int X\,dp=E\left[X\right]$.
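A Monte Carlo sanity check of the identity (sketch; the exponential distribution is my illustrative choice): for $X\sim\mathrm{Exp}(1)$ we have $1-F(x)=e^{-x}$ and $\int_0^\infty e^{-x}\,dx = 1 = E[X]$.

    import math, random

    random.seed(0)
    xs = [random.expovariate(1.0) for _ in range(100_000)]
    mean = sum(xs) / len(xs)                                     # ~ 1.0

    dx = 0.001                                                   # crude quadrature of P(X > x)
    integral = sum(math.exp(-k * dx) * dx for k in range(50_000))
    print(mean, integral)                                        # both close to 1.0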


linear algebra - Relation between the rank of the matrix and its characteristic polynomial?



Is there some kind of relation between the rank of a matrix and its characteristic polynomial? I have searched through various posts without finding one.



Say $A \in M_{5}(\Bbb{R})$ and its characteristic polynomial is $\alpha x^5 + \beta x^4 + \gamma x^3$; then what is the rank of the matrix $A$?



I am unable to establish the relation. I know that from the characteristic polynomial I can obtain the eigenvalues, and hence the trace and determinant of the matrix. The question is: if I know the trace and determinant of the matrix, can I obtain some information about the rank (the number of linearly independent rows in the RREF)?



I was looking at this question but still i am not aware of any trick or relation.



Answer



If the matrix is diagonalizable, rank = degree of the characteristic polynomial minus the order of multiplicity of root 0 (in the example, the rank of the matrix is 5 - 3 = 2).



In fact, in this case, writing $M=PDP^{-1}$ with $D$ a diagonal matrix with $n-r$ zeros, and transforming it into $MP=PD$, it means that if the columns of $P$ are denoted $P_k$, we have $MP_k=\lambda_k P_k$ with, say, the last $n-r$ vectors associated with eigenvalue $0$ (and only them), i.e. we have exactly $n-r$ independent vectors belonging to the kernel.
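A quick numerical illustration (sketch): build a diagonalizable $5\times5$ matrix whose characteristic polynomial has the factor $x^3$, i.e. eigenvalue $0$ with multiplicity $3$, and check that the rank is $5-3=2$:

    import numpy as np

    D = np.diag([2.0, 5.0, 0.0, 0.0, 0.0])   # eigenvalues 2, 5, 0, 0, 0
    rng = np.random.default_rng(0)
    P = rng.normal(size=(5, 5))              # generically invertible
    A = P @ D @ np.linalg.inv(P)             # diagonalizable with those eigenvalues

    print(np.linalg.matrix_rank(A))          # 2 = 5 - (multiplicity of the root 0)
    print(np.round(np.poly(A), 6))           # char poly coefficients; last three ~ 0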


elementary number theory - Why $9$ (and $11)$ are special in testing divisibility by digit sums? (casting out nines & elevens)


I don't know if this is a well-known fact, but I have observed that every number, no matter how large, that is evenly divisible by $9$ will reduce to $9$ if you keep adding its digits until a single digit remains.


A quick example of what I mean:




$9*99 = 891$


$8+9+1 = 18$


$1+8 = 9$



This works even with really long numbers like $4376331$


Why is that? This doesn't work with any other number. Similarly for $11$ and alternating digit sums.


Answer



Not quite right, since $9\times 0 = 0$ and the digits don't add up to $9$; but otherwise correct.


The reason it works is that we write numbers in base $10$, and when you divide $10$ by $9$, the remainder is $1$. Take a number, say, $184631$ (I just made it up). Remember what that really means: $$184631 = 1 + 3\times 10 + 6\times 10^2 + 4\times 10^3 + 8\times 10^4 + 1\times 10^5.$$ The remainder when you divide any power of $10$ by $9$ is again just $1$, so adding the digits gives you a number that has the same remainder when dividing by $9$ as the original number does. Keep doing it until you get down to a single digit and you get the remainder of the original number when you divide by $9$, except that you get $9$ instead of $0$ if the number is a nonzero multiple of $9$.



But since every nonzero multiple of $9$ leaves remainder $0$ when divided by $9$, you will always get $9$.


Note that you have a similar phenomenon with $3$ (a divisor of $9$), since adding the digits of a multiple of $3$ will always result in one of the one-digit multiples of $3$: $3$, $6$, or $9$.


If we wrote in base $8$, instead of base $10$, then $7$ would have the property: if you write a number in base $8$ and add the digits (in base 8) until you get down to a single digit between $1$ and $7$, then the multiples of $7$ will always end yielding $7$, for precisely the same reason. And if we wrote in base $16$, then $15$ (or rather, F) would have the property. In general, if you write in base $b$, then $b-1$ has the property.


This is a special case of casting out nines, which in turn is a special case of modular arithmetic. It is what is behind many divisibility tests (e.g., for $2$, $3$, $5$, $9$, and $11$).


Coda. This reminds me of an anecdote a professor of mine used to relate: a student once came to him telling him he had discovered a very easy way to test divisibility of any number $N$ by any number $b$: write $N$ in base $b$, and see if the last digit is $0$. I guess, equivalently, you could write $N$ in base $b+1$, and add the digits to see if you get $b$ at the end.
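In code (sketch): the repeated digit sum, often called the digital root, equals $n \bmod 9$, except that nonzero multiples of $9$ give $9$; the closed form for $n\ge 1$ is $1+(n-1)\bmod 9$.

    def digital_root(n):
        while n >= 10:
            n = sum(int(d) for d in str(n))
        return n

    print(digital_root(891))       # 9
    print(digital_root(4376331))   # 9
    assert all(digital_root(n) == (n - 1) % 9 + 1 for n in range(1, 100_000))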


Friday, January 13, 2017

analysis - Proof for Strong Induction Principle



I am currently studying analysis and I came across the following exercise.








Proposotion 2.2.14
Let $m_0$ be a natural number and let $P(m)$ be a property pertaining to an arbitrary natural number $m$. Suppose that for each $m\geq m_0$, we have the following implication: if $P(m')$ is true for all natural numbers $m_0\leq m'< m$, then $P(m)$ is also true. (In particular, this means that $P(m_0)$ is true, since in this case the hypothesis is vacuous.) Then we can conclude that $P(m)$ is true for all natural numbers $m\geq m_0$.




Prove Proposition 2.2.14. (Hint: define $Q(n)$ to be the property that $P(m)$ is true for all $m_0\leq m < n$; note that $Q(n)$ is vacuously true when $n \leq m_0$.)




I have difficulty understanding how I should use the hint and in general what the framework of this proof would look like (probably an inductive proof; but on what variable do we induct, what will be the induction hypothesis and how would I go about proving the inductive step etc.?). Could anyone please provide me with some hints to help me get started?


Answer




Let $B$ be the subset of $N=\{m_0,m_0+1,\ldots\}$ such that $P(m)\iff m\in B$. This $B$ is not empty: since $P(m')$ holds vacuously for all $m_0\le m'<m_0$ (there is no such $m'$), the hypothesis gives $P(m_0)$, so $m_0\in B$. Now suppose $B\ne N$. Then $N\setminus B$ is non-empty and has a minimal element $m$. By minimality, $P(m')$ is true for every $m_0\le m'<m$, so the hypothesis forces $P(m)$, i.e. $m\in B$, a contradiction. Hence $B=N$, and $P(m)$ holds for all $m\ge m_0$.

Remark: This works for all sets $N$ where each non-empty subset has a minimal element with respect to a relation $R$. These sets are called well-founded.



If you want to use the hint, show that $Q(m_0)$ holds and that $Q(n)$ implies $Q(n+1)$. First, $Q(m_0)$ is vacuously true, since there is no $m$ with $m_0\le m< m_0$. Now assume $n$ is a natural number $\ge m_0$ such that $Q(n)$. This means that $P(m)$ holds for all $m_0\le m<n$, so by the hypothesis of the proposition $P(n)$ is also true. Hence $P(m)$ holds for all $m_0\le m<n+1$, which is exactly $Q(n+1)$. By ordinary induction, $Q(n)$ holds for every $n\ge m_0$, and therefore so does $P(n)$.

So we can prove the strong induction principle via the ordinary induction principle. However, the ordinary induction principle itself requires a proof, and that is the proof I wrote in the first paragraph. As mentioned, it works for all well-founded sets ($\mathbb N$ is such a set).
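A computational analogue of strong induction is strong recursion: to settle case $n$ you may appeal to every smaller case, not just $n-1$. The classic proof that each integer $n\ge 2$ is a product of primes has exactly this shape (sketch):

    def prime_factorization(n):
        # If n has a divisor a with 2 <= a <= sqrt(n), recurse on the strictly
        # smaller cases a and n // a -- the strong induction hypothesis.
        for a in range(2, int(n**0.5) + 1):
            if n % a == 0:
                return prime_factorization(a) + prime_factorization(n // a)
        return [n]   # no divisor found: n is prime (the base-like case)

    print(prime_factorization(360))   # [2, 2, 2, 3, 3, 5]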


elementary set theory - Prove that there is a bijection between the set of all subsets of $X$, $P(X)$, and the set of functions from $X$ to ${0,1}$.




Given any set $X$, let $P(X)$ be the set of all subsets of $X$, and let $\{0,1\}^X$ be the set of all functions $X \rightarrow \{0,1\}$. Construct a bijection (and its inverse) between $P(X)$ and $\{0,1\}^X$.




Let us define $f: P(X) \rightarrow \{0,1\}^X$ by $f(\alpha) = g_{\alpha}$ where $\alpha \in P(X)$ and $g_{\alpha} : X \rightarrow \{0,1\}$ is defined by



$$g_{\alpha}(x) =\begin{cases}
1 & x \in \alpha \\
0 & x \in X-\alpha
\end{cases}$$



We need to check that this function is well-defined. Let $\alpha, \beta \in P(X)$. If $\alpha=\beta$, then $\alpha$ and $\beta$ are the same set, and so correspond to the same function, since both $g_{\alpha}$ and $g_{\beta}$ send the elements of that set to $1$ and the elements of its complement to $0$.



If we have a function $g_{\alpha} \in \{0,1\}^X$, then $\alpha$ is the subset of $X$ whose elements are sent to 1. But since $P(X)$ is the set of all subsets of $X$, we must have $\alpha \in P(X)$. So the function is surjective.



Finally, if $g_{\alpha}=g_{\beta}$, then both $g_{\alpha}$ and $g_{\beta}$ send exactly the elements of $\alpha$ and of $\beta$ to 1. So $\alpha$ and $\beta$ must be the same set. In other words, the function is injective, and hence bijective.



The inverse function is $f^{-1}: \{0,1\}^X \rightarrow P(X)$ where $f^{-1}(g_{\alpha}) = \alpha$.







Another way to prove this is to first construct the inverse function, show that it is well defined, and then show that $ff^{-1}$ and $f^{-1}f$ are the identity functions, right?



So if we let $g_{\alpha}=g_{\beta}$, then $\alpha=\beta$, because otherwise there would be some $x$ in one set but not the other, say $x \in \beta \setminus \alpha$, and then $g_{\alpha}(x)=0$ while $g_{\beta}(x)=1$; a contradiction.



Now we have $f \circ f^{-1} (g_{\alpha}) = f(\alpha) = g_{\alpha}$ and $f^{-1} \circ f(\alpha) = f^{-1}(g_{\alpha}) = \alpha$ , and so $f$ and $f^{-1}$ must be bijective.


Answer



You need to show that the function $f$ that you defined is a bijection.




In order to show that $f$ is surjective, the argument you've posted begins by saying "If we have a function $g_\alpha,\ldots$". But that notation presupposes that there is such an $\alpha$, the very thing to be proved. I would say "If we have a function $g$ on $X$ (i.e. whose domain is $X$) taking values in $\{0,1\}$, then..." and set out to show there is some set $\alpha\subseteq X$ such that $g=g_\alpha$. You have to define that set by using the function $g$, without assuming in advance that such an $\alpha$ exists. And of course the set that will serve as $\alpha$ is $\{x\in X: g(x)=1\}$.



To show that $f$ is one-to-one, you need to show that $f(\alpha)=f(\beta)$ ONLY if $\alpha=\beta$. That's the same as saying if $\alpha\ne\beta$ then $f(\alpha)\ne f(\beta)$.
If $\alpha\ne\beta$ then either there exists $x\in\alpha$ that is not a member of $\beta$, or vice-versa. At this point you can observe that no generality is lost by assuming the first alternative: just call the one that has a member the other one doesn't have "$\alpha$". Then think about $g_\alpha(x)$ and $g_\beta(x)$ where $x$ is that one member.
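For a finite $X$ one can enumerate both sides and watch the bijection explicitly (sketch):

    from itertools import product

    X = ['a', 'b', 'c']
    subsets = [frozenset(x for x, bit in zip(X, bits) if bit)
               for bits in product([0, 1], repeat=len(X))]

    def f(alpha):                        # f : P(X) -> {0,1}^X, as indicator tuples
        return tuple(1 if x in alpha else 0 for x in X)

    images = {f(s) for s in subsets}
    print(len(subsets), len(images))     # 8 8 -- injective, and onto all 2^3 functions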


real analysis - finite measure and cauchy in measure



I've been trying to prove that if $(f_{n})_{n}$ is Cauchy in measure, i.e. $\mu(|f_{n}-f_{m}|\geq\delta)<\epsilon$ for $n,m$ sufficiently large, then it converges in measure to a measurable function. Now, if $\mu(\Omega)$ is finite, then it is enough to prove that the sequence (which is Cauchy in measure) converges a.e. to a measurable function, because in a space of finite measure, convergence almost everywhere implies convergence in measure. This might be a really basic question, but I haven't been able to prove it.



Any help would be really appreciated.
Thanks guys <3



Answer



Your idea may not work. Consider the probability measure, i.e. $\mu(\Omega)=1$. Define an independent sequence $\{X_n\}$ by
$$X_n=\begin{cases}
1 &\text{with probability } n^{-1},\\
0 &\text{with probability } 1-n^{-1}.
\end{cases}$$

You can check that $X_n\rightarrow 0$ in probability, i.e. in measure but $\{X_n\}$ does not converge almost surely.
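A simulation makes the counterexample tangible (sketch): since the $X_n$ are independent and $\sum 1/n=\infty$, the second Borel-Cantelli lemma says $X_n=1$ happens infinitely often almost surely, so the sequence cannot converge to $0$ pointwise, even though $P(X_n=1)=1/n\to 0$.

    import random

    random.seed(1)
    N = 100_000
    hits = [n for n in range(1, N + 1) if random.random() < 1.0 / n]
    print(len(hits), hits[-5:])   # typically ~ln(N) ones, some at very large n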




For the proof of your problem, see Cauchy in measure implies convergent in measure.


functional analysis - $f(x)$ if $f(xy)=f(x) +f(y) +frac{x+y-1}{xy}$




Let $f$ be a differentiable function satisfying the functional equation $f(xy)=f(x) +f(y) +\frac{x+y-1}{xy}$ for all $x,y \gt 0$, with $f'(1)=2$. Find $f(x)$.



My work



Putting $y=1$ gives $f(x)=f(x)+f(1)+\frac{x+1-1}{x}=f(x)+f(1)+1$, so



$$f(1)=-1$$
$$f'(x)=\lim_{h\to 0}\frac{f(x+h) - f(x)}{h}$$

But I don't know anything about $f(x+h)$, so what should I do in this problem?


Answer



Differentiate both sides with respect to $x$:
$$
yf'(xy)=f'(x)-\frac{1}{x^2}+\frac{1}{x^2y}
$$
For $y=1/x$, we get
$$
\frac{f'(1)}{x}=f'(x)-\frac{1}{x^2}+\frac{1}{x}
$$

so
$$
f'(x)=\frac{1}{x^2}+\frac{f'(1)-1}{x}
$$
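Since $f'(1)=2$, this becomes $f'(x)=\frac{1}{x^2}+\frac{1}{x}$; integrating and using $f(1)=-1$ gives $f(x)=\ln x-\frac{1}{x}$. A symbolic check (sketch) that this $f$ satisfies the functional equation and the derivative condition:

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    f = lambda t: sp.log(t) - 1 / t

    diff = f(x * y) - (f(x) + f(y) + (x + y - 1) / (x * y))
    print(sp.simplify(sp.expand_log(diff)))    # 0
    print(sp.diff(f(x), x).subs(x, 1))         # 2, matching f'(1) = 2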


functions - Is 4.99999......... exactly equal to 5?

I'm a student of 10th standard. Recently our teacher asked the question:

"Is 4.999... equal to 5 or not?" Everyone said that it isn't equal, or that it is only approximately equal, and the teacher agreed. But I didn't agree, and I opposed the teacher, because I think it is precisely equal to 5. I also gave a proof, but she still wasn't satisfied. Can you please explain what is right and what is wrong in this case? Please justify.




    Proof that I showed: let x = 4.999...        (a)
    therefore, 10x = 49.999...
    10x - x = 49.999... - 4.999...
    9x = 45
    x = 45/9
    x = 5                                        (b)
    hence (a) = (b),
    and therefore 4.999... = 5
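The algebra can also be checked symbolically (sketch): $4.999\ldots$ means $4+\sum_{k\ge 1} 9\cdot 10^{-k}$, and that geometric series sums to exactly $1$:

    import sympy as sp

    k = sp.symbols('k', integer=True, positive=True)
    value = 4 + sp.summation(9 * sp.Rational(1, 10)**k, (k, 1, sp.oo))
    print(value)   # 5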

Thursday, January 12, 2017

integration - How do you know if an integral is impossible?



Are there tricks one can use while looking at integral expressions to determine whether one is easily calculable or not? For example:
$$
A = \int \frac{dx}{\ln(1+x^2)} \quad B = \int \frac{dx}{x\ln(x^2)}
$$
Wolfram finds an answer for $B$, but not one for $A$, even though $A$ looks easier. I would like to know why and also how/if one can look at an integral and see if it is a solvable one or not. With "solvable", I mean for a university student and not using non-real numbers or very advanced maths. :) I know most integrals can't be solved analytically but I'm thinking of those that can appear on exams.


Answer



Integration is "difficult". There isn't one systematic approach that will work by hand for most integrals. When it comes to integrating in exams, your best bet is to just practice integrating lots of different things until you start to recognise patterns.




Many integrable functions are of the form:



$$\int f'(x)g'(f(x))\mathrm{d}x = g(f(x))+c$$



Your second example can be seen to be of this form:



$$\frac{1}{\ln(x^2)} = \frac{1}{2\ln(x)}$$



$$\frac{1}{2x\ln(x)} = \frac{1}{2}\frac{1}{x}\frac{1}{\ln(x)} = \frac{1}{2}\ln'(x)\ln'(\ln(x))$$




$$\int \frac{1}{x\ln(x^2)} = \int \frac{1}{2}\ln'(x)\ln'(\ln(x)) = \frac{1}{2}\ln(\ln(x))+c$$



The key is to be able to notice these patterns, and I think the only way you can do that is through practice.



As for integrability "in theory", this is entirely different to whether or not you can integrate by hand in an exam, and the comments below your question are quite good :)


algebra precalculus - Does $(p^{lambda_1}-1)cdots(p^{lambda_m}-1)$ divide $p^n-1$?

Let $p$ be a prime integer and $n\geq 3$ be a natural number such that $p^n-1$ has a primitive prime divisor. Also let $n=\lambda_1+\cdots+\lambda_m$ such that $n>\lambda_i\geq 1$ for all $i$. I want to show that $(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)$ does not divide $p^n-1$.

This is my suggested argument for proving the statement when $p>2$:
Both of $(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)$ and $p^n-1$ are polynomials in $p$ with the same degree. If $(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)\mid p^n-1$, we then have $p^n-1=c(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)$, where $c\in \mathbb{N}$. Since $p>2$, $p^{\lambda_i}-1>1$ for all $i$. Therefore, the coefficient of $p^{\lambda_1+...+\lambda_m}=p^n$ in the expansion of the product $\prod_{i=1}^{m}{(p^{\lambda_i}-1)}$ is 1. Also the coefficient of $p^n$ in $p^n-1$ is $1$, and so we must have $c=1$ . Therefore $p^n-1=(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)$. Now if we consider $q$ to be primitive prime divisor of $p^n-1$, then $q$ does not divide $(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)$, which is a contradiction. So if $p>2$, it is impossible for $(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)$ to divide $p^n-1$.

But as it was discussed in the comments, it seems that my argument is not correct.
Question 1: Does my argument work? If not, is there any possible way to prove the statement for $p>2$?
The remaining case is when $p=2$. According to the example mentioned in the comments, $(2-1)(2^2-1)(2^3-1) \mid 2^6-1$.
Question 2: Is there any example of a pair $(n, p)\neq(6,2)$ and a partition $\lambda=(1\leq\lambda_1\leq\cdots\leq\lambda_m<n)$ of $n$ such that $(p^{\lambda_1}-1)\cdots(p^{\lambda_m}-1)$ divides $p^n-1$? I would be grateful for any help.
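Question 2 can at least be probed by brute force (sketch). By Zsigmondy's theorem, for $n\ge 3$ the only pair without a primitive prime divisor is $(p,n)=(2,6)$, so any other hit found below is a genuine example; note that parts equal to $1$ are "free" when $p=2$, because $2^1-1=1$.

    from itertools import combinations_with_replacement

    def partitions(n):
        # crude: all multisets of parts in 1..n-1 summing to n
        for m in range(2, n + 1):
            for parts in combinations_with_replacement(range(1, n), m):
                if sum(parts) == n:
                    yield parts

    for p in (2, 3, 5):
        for n in range(3, 11):
            for lam in partitions(n):
                prod = 1
                for l in lam:
                    prod *= p**l - 1       # always >= 1, so no division by zero
                if (p**n - 1) % prod == 0:
                    print(p, n, lam)       # e.g. (2, 6, (1, 2, 3)) and (3, 4, (1, 1, 1, 1))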

calculus - How to evaluate this limit without using L'Hospital's rule?

How do I evaluate this limit without using L'Hospital's rule? For whatever strange reason, my teacher wants it done that way.



$$\lim_{x\to 5^-}\frac{e^x}{(x-5)^3}.$$
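A hedged observation: no indeterminacy is actually present here. As $x\to 5^-$ the numerator tends to $e^5>0$ while $(x-5)^3\to 0^-$, so the quotient diverges to $-\infty$. A numeric check (sketch):

    import math

    for h in [1e-1, 1e-3, 1e-5]:
        x = 5 - h
        print(x, math.exp(x) / (x - 5)**3)   # increasingly large negative values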

Wednesday, January 11, 2017

probability - Show the following relation: $mathbb{E}[X^{q}mathrm{1}_{{X>a}}]=a^{q}P(X>a)+qint_{a}^{infty}P(X>x)x^{q-1}dx $


Let $X$ be a non negative random variable, $a>0$ be a constant and $q\ge 1$ be a positive integer. How can I show the following: \begin{equation} \mathbb{E}[X^{q}\mathrm{1}_{\{X>a\}}]=a^{q}P(X>a)+q\int_{a}^{\infty}P(X>x)x^{q-1}dx ? \end{equation}



I have tried the computation using the complementary CDF trick


$$\mathbb{E}[X^{q}\mathrm{1}_{\{X>a\}}]=\int_{a}^{\infty}P(X^{q}>t)dt=q\int_{a^{1/q}}^{\infty}P(X>u)u^{q-1}du$$


which doesn't help. Any ideas appreciated.


Answer



You made two mistakes, both of which seem to be from trying to move too quickly.


First, we should have $$\mathbb{E}[X^q 1_{\{X>a\}}] = \int_0^\infty P(X^q 1_{\{X>a\}} > t) \,dt$$ and we should split the integral into two integrals, but the minimum $t$-value at which $$\{X^q 1_{\{X>a\}} > t\} = \{X^q > t\}$$ is not $t=a,$ it's actually $t=a^q.$


Therefore, the above equality becomes $$\mathbb{E}[X^q 1_{\{X>a\}}] = \int_0^{a^q} P(X^q1_{\{X>a\}} > t)\,dt + \int_{a^q}^\infty P(X^q > t)\,dt$$


Now simplify each of these. For the first, simplify the integrand and for the second, use substitution.
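As a sanity check (sketch; the exponential distribution is my illustrative choice), take $X\sim\mathrm{Exp}(1)$, $q=2$, $a=1$, so $P(X>x)=e^{-x}$; both sides of the identity come out to $5e^{-1}$:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    q, a = 2, 1

    lhs = sp.integrate(x**q * sp.exp(-x), (x, a, sp.oo))          # E[X^q 1_{X>a}]
    rhs = a**q * sp.exp(-a) + q * sp.integrate(sp.exp(-x) * x**(q - 1), (x, a, sp.oo))
    print(sp.simplify(lhs - rhs))                                  # 0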


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...