Thursday, December 31, 2015

linear algebra - What are the eigenvalues of a matrix with all elements equal to 1?




As in the subject: given a matrix $A$ of size $n$ with all elements equal to exactly $1$.



What are the eigenvalues of that matrix?


Answer



Suppose $\,\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}\,$ is an eigenvector of such a matrix corresponding to an eigenvalue $\,\lambda\,$, then



$$\begin{pmatrix}1&1&...&1\\1&1&...&1\\...&...&...&...\\1&1&...&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\...\\x_n\end{pmatrix}=\begin{pmatrix}x_1+x_2+...+x_n\\x_1+x_2+...+x_n\\.................\\x_1+x_2+...+x_n\end{pmatrix}=\begin{pmatrix}\lambda x_1\\\lambda x_2\\..\\\lambda x_n\end{pmatrix}$$



One obvious solution to the above is




$$W:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1+...+x_n=0\right\}\,\,\,,\,\,\lambda=0$$



For sure, $\,\dim W=n-1\,$ (no need to be a wizard to "see" this solution since the matrix is singular and thus one of its eigenvalues must be zero)



Another solution, perhaps not as trivial as the above but also pretty simple, imo, is



$$U:=\left\{\begin{pmatrix}x_1\\x_2\\..\\x_n\end{pmatrix}\;;\;x_1=x_2=...=x_n\right\}\,\,\,,\,\,\lambda=n$$



Again, it's easy to check that $\,\dim U=1\,$ .




Now, just pay attention to the fact that $\,W\cap U=\{0\}\,$ unless the dimension $n$ of the vector space $\,V\,$ we're working on is divisible by the characteristic of the field of definition (if you're used to real/complex vector spaces and you aren't sure what the characteristic of a field is, disregard the last comment).



Thus, assuming $\,W\cap U=\{0\}\,$, we get $\,\dim(W+U)=n=\dim V\Longrightarrow V=W\oplus U\,$, and we've thus found all the possible eigenvalues there are.



BTW, as a side effect of the above, we get that our matrix is diagonalizable.
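A quick numerical sanity check of this (a sketch in Python, assuming NumPy is available):

    import numpy as np

    n = 5
    A = np.ones((n, n))                     # the all-ones n x n matrix
    vals = np.sort(np.linalg.eigvalsh(A))   # A is symmetric, so eigvalsh applies

    # Expect n-1 eigenvalues equal to 0 and a single eigenvalue equal to n
    print(vals)  # [0. 0. 0. 0. 5.] up to rounding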


calculus - Why doesn't L'Hopital's rule work in this case?


I have a very simple question. Suppose I want to evaluate this limit:


$$\lim_{x\to \infty} \frac{x}{x-\sin x}$$


It is easy to evaluate this limit using the Squeeze theorem (the answer is $1$). But here both the numerator and the denominator are going to infinity as $x\to \infty$ so I tried using L'Hospital's rule: $$\lim_{x\to \infty} \frac{x}{x-\sin x}=\lim_{x\to \infty} \frac{1}{1-\cos x}$$


However there's no finite $L$ such that $$\lim_{x\to \infty} \frac{1}{1-\cos x}=L$$ which is a contradiction. I don't understand why in this case L'Hopital's rule doesn't work. Both the numerator and the denominator are differentiable everywhere and both are tending to infinity - which is all we need to use this rule.


Answer



Another condition, very often forgotten, is that in some neighbourhood of $a$ (here $a=+\infty$), except perhaps at $a$, $g'(x)\neq 0$. This is not the case here: $1-\cos x$ is $0$ infinitely many times.


This is a good illustration of why L'Hospital's rule is dangerous. One of the first things I learnt when I was a student is: ‘Avoid it. When it works, Taylor's polynomial at order 1 works as well.’


Here, the simplest way is via equivalents: $\,x-\sin x\sim_\infty x$ since $\sin x$ is bounded, hence $$\frac x{x-\sin x}\sim_\infty \frac x{x}=1.$$
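A two-line numerical illustration of the limit (a sketch in Python):

    import math

    # lim_{x->oo} x/(x - sin x) = 1, since |sin x| <= 1 makes the
    # denominator x + O(1); meanwhile 1/(1 - cos x) keeps oscillating.
    for x in [10.0, 100.0, 10_000.0]:
        print(x, x / (x - math.sin(x)))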


real analysis - Use L'Hopital's rule to show that $\lim_{x \rightarrow+\infty} \frac{f(x)}{g(x)}= \ell$

Let $f : \mathbb{R} \rightarrow \mathbb{R}$, $g : \mathbb{R} \rightarrow \mathbb{R}$, be both differentiable. Suppose that $\lim_{x \rightarrow + \infty} f(x) = \lim_{x \rightarrow + \infty} g(x) = 0$, that $g'(x) \neq 0$ for all $x \in \mathbb{R}$ and $\lim_{x \rightarrow +\infty} \frac{f'(x)}{g'(x)} = \ell \in \mathbb{R}$. Show that



$$\lim_{x \rightarrow+\infty} \frac{f(x)}{g(x)}= \ell$$



I'm immediately thinking of L'Hopital's rule, and of investigating what happens when $x$ tends to an element of $\mathbb{R}$. I just learnt this, however; how would I go about using it (assuming I do actually need to use L'Hopital's rule)?

Wednesday, December 30, 2015

logarithms - Why is $a^x = e^{x \log a}$?




Why is $ a^x = e^{x \log a}$, where $ a $ is a constant?



From my research, I understand that the natural log of a number is the constant you get when you differentiate the function of that number raised to the power $x$. For example, if we differentiate the function $2^x$, we get $2^x \log 2$. I also understand that the natural log of $e$ is just $1$. But I cannot connect the dots here.



I would really appreciate an intuitive explanation of why we can write a number raised to a power as $e$ raised to ($x$ times the natural logarithm of the number).


Answer



I think we can agree that



$$a=e^{\log a}$$




which arises from one of the properties of the logarithm. Therefore, it’s sufficient to say that



$$a^x=e^{\log a^x}$$



But one of the properties of the logarithm also dictates that



$$\log a^x=x\log a$$



Therefore




$$a^x=e^{x\log a}$$
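A quick numerical spot-check of the identity (a sketch in Python):

    import math

    # a^x equals e^{x log a} for any a > 0
    for a, x in [(2.0, 3.5), (10.0, -1.2), (0.5, 7.0)]:
        lhs = a ** x
        rhs = math.exp(x * math.log(a))
        assert math.isclose(lhs, rhs)
        print(a, x, lhs, rhs)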


calculus - Suppose that $f$ is a real valued function such that its second derivative is discontinuous. Can you give some example?



In an interview somebody asked me the following question but I failed to give the answer.
Suppose that $f$ is a real valued function such that its second derivative is discontinuous. Can you give some example?


Answer



The answers so far give differentiable functions that fail to have a second derivative at some point. If you want the second derivative to exist everywhere and be discontinuous somewhere, you can use the following function:




$$f(x) = \int_0^x t^2 \sin(\frac1t)\ dt
$$



Its first derivative is $x \mapsto x^2 \sin(\frac1x)$, which is differentiable everywhere, and while its derivative (and hence the second derivative of $f$) is defined everywhere, it is discontinuous at $0$. Therefore, $f$, $f'$ and $f''$ are defined everywhere, and $f''$ is discontinuous at $0$.
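A small numerical sketch of this behaviour (the closed form for $f''$ below is computed by hand from the $f'$ given in the answer):

    import math

    def f2(x):
        # By the product/chain rule on f'(x) = x^2 sin(1/x):
        # f''(x) = 2x sin(1/x) - cos(1/x) for x != 0, and f''(0) = 0.
        if x == 0:
            return 0.0
        return 2 * x * math.sin(1 / x) - math.cos(1 / x)

    # Along x_k = 1/(2*pi*k) -> 0 we get f''(x_k) = -1, not f''(0) = 0,
    # so f'' cannot be continuous at 0.
    for k in [1, 10, 1000]:
        xk = 1 / (2 * math.pi * k)
        print(xk, f2(xk))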


Tuesday, December 29, 2015

calculus - Evaluate $\lim_{n\rightarrow \infty} \Gamma(n+\frac{1}{2})/ \left( \sqrt{2n\pi}\, \Gamma(n) \right)$ using Stirling's formula.



I am working on the limit
$$
\displaystyle\lim_{n\rightarrow \infty} \frac{\Gamma(n+\frac{1}{2})}{ \sqrt{2n\pi}\, \Gamma(n)}\,.
$$



I am thinking I may be able to use Stirling's formula, but they are slightly different, and I am having trouble relating the two. Any help is appreciated.




Stirling's formula says that the following quantity tends to $1$ as $n$ approaches infinity:



$$\Gamma(n) / ( \sqrt{2\pi} n^{n - \frac{1}{2}}e^{-n})$$



In particular, how do I relate $\Gamma(n)$ to $n^{n}$ and $e^{-n}$? Not sure how to deal with those two terms.


Answer



You know that
$$
\lim_{n\to\infty} \frac{\Gamma(n)}{\sqrt{2\pi} n^{n - 1/2}e^{-n}} = 1.
\tag1

$$



It seems to me that this is how $\Gamma(n)$ relates to
$n^n$ and $e^{-n}$.
What you need to know, though, is whether this will help you
relate $\Gamma\left(n+\frac12\right)$ to $\Gamma(n)\sqrt{2n\pi},$
and if it does, how does it?



Notice what happens if we replace $n$ by $n+\frac12$ in Equation $(1).$
We get

$$
\lim_{n\to\infty} \frac{\Gamma\left(n+\frac12\right)}
{\sqrt{2\pi} \left(n+\frac12\right)^n e^{-(n+1/2)}} = 1. \tag2
$$



Let $f(n)$ be the left-hand side of $(1)$
and $g(n)$ be the left-hand side of $(2).$
What can you say about
$$
\lim_{n\to\infty} \frac{g(n)}{f(n)}?

$$
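For whatever it is worth, a numerical look at the original ratio (a sketch in Python, assuming SciPy is available; the computed values appear to approach $1/\sqrt{2\pi}\approx 0.3989$, which is consistent with working out the hint above):

    import math
    from scipy.special import gammaln  # log-Gamma avoids overflow for large n

    def ratio(n):
        # Gamma(n + 1/2) / (sqrt(2*n*pi) * Gamma(n)), computed in log space
        return math.exp(gammaln(n + 0.5) - gammaln(n)) / math.sqrt(2 * n * math.pi)

    for n in [10, 1_000, 1_000_000]:
        print(n, ratio(n))
    print(1 / math.sqrt(2 * math.pi))  # the values above approach this number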


calculus - Evaluate: $\int_{\gamma}\textrm{x}\cdot\textrm{n}(\textrm{x})\, ds(\textrm{x})$

Let $\textrm{x}=(x,y)\in\mathbb{R^2} $. Let $\textrm{n(x)}$ denotes the unit outward normal to the ellipse $\gamma$ whose equation is given by $$\frac{x^2}{4}+\frac{y^2}{9}=1$$ at point $\textrm{x}$ on it. Evaluate: $$\int_{\gamma}\textrm {x.n(x)} ds\textrm{(x)}.$$
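No answer is posted for this one; a sketch of the standard approach, assuming the divergence theorem is the intended tool: with $D$ the region enclosed by $\gamma$,

$$\int_{\gamma}\textrm{x}\cdot\textrm{n}(\textrm{x})\, ds(\textrm{x}) = \iint_{D}\operatorname{div}(\textrm{x})\, dA = \iint_{D} 2\, dA = 2\,\operatorname{Area}(D) = 2\cdot\pi\cdot 2\cdot 3 = 12\pi,$$

since $\operatorname{div}(x,y)=1+1=2$ and the ellipse has semi-axes $2$ and $3$ (area $\pi ab$).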

exponentiation - 0's Exponents are impossible?




I've had something that's been bugging me, and I tried research and asked my math teacher. None had sufficient answers.
The concept of $0$ is that when $0$ goes to any exponent except for $0$, it becomes $0$. For example,



$0^3 = 0$, but
$0^0 =$ undefined



However, the proof that $0^0$ is undefined is shown thus:

$$\frac{0^x}{0^x} = 0^{(x-x)} = 0^0 = \text{undefined}$$

You can apply this to any exponent though, such as:

$$\frac{0^6}{0^3}$$

Here $0^6 = 0$ and $0^3 = 0$, so this expression is equal to $0/0$, which should be undefined, right?



Am I doing something wrong here? Please help!
Gil



Answer



One should not say "equals undefined"; one should say "is undefined". This is the "is" of predication, not the "is" of equality.



$0^0$ is indeterminate in the sense that if $f(x)$ and $g(x)$ both approach $0$ as $x\to a$ then $f(x)^{g(x)}$ could approach any positive number or $0$ or $\infty$ depending on which functions $f$ and $g$ are.



But in some contexts it is important that $0^0$ be taken to be $1$. One of those contexts is in the identity
$$
e^z = \sum_{n=0}^\infty \frac{z^n}{n!}.
$$
This fails to hold when $z=0$ unless $0^0=1$, since the first term of the series is then $\dfrac{0^0}{0!}$.




In combinatorial enumeration it is also important in some contexts that $0^0$ is $1$, for the same reason $2^0$ is $1$: if you multiply by something $0$ times, it's the same as multiplying by $1$. That is also one way of looking at the reason why $0!$ is $1$.
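As a side note (an observation about common software conventions, not part of the original answer), many computing environments adopt the combinatorial convention. For example, in Python:

    import math

    print(0 ** 0)          # 1   (integer power follows the combinatorial convention)
    print(math.pow(0, 0))  # 1.0
    # By contrast, limits of f(x)**g(x) with f, g -> 0 can be anything:
    # e.g. x**(1/log(x)) is identically e, even though it "looks like" 0**0.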


combinatorics - Probability that n people collectively occupy all 365 birthdays


The problem is quite simple to formulate. If you have a large group of people (n > 365), and their birthdays are uniformly distributed over the year (365 days), what's the probability that every day of the year is someone's birthday?



I am thinking that the problem should be equivalent to finding the number of ways to place $n$ unlabeled balls into $k$ labeled boxes, such that all boxes are non-empty, but $\binom{(n-k)+k-1}{n-k}\big/\binom{n+k-1}{n}$ ($\binom{n}{k}$ being the binomial coefficient) does not yield the correct answer.


Answer



Birthday Coverage is basically a Coupon Collector's problem.


You have $n$ people who drew birthdays with repetition, and wish to find the probability that all $365$ different days were drawn among all $n$ people. ($n\geq 365$)


$$\mathsf P(T\leq n)= 365!\; \left\lbrace\begin{matrix}n\\365\end{matrix}\right\rbrace\; 365^{-n} $$


where the braces denote a Stirling number of the second kind, also written $\mathrm S(n, 365)$.
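The same probability can be cross-checked by inclusion–exclusion over the missed days (a sketch in Python; the exact-rational arithmetic is there only because this alternating sum cancels catastrophically in floating point):

    from fractions import Fraction
    from math import comb

    def p_all_covered(n, d=365):
        # P(all d days hit by n uniform draws), by inclusion-exclusion
        # over the set of days that are missed entirely
        total = sum((-1) ** j * comb(d, j) * Fraction(d - j, d) ** n
                    for j in range(d + 1))
        return float(total)

    for n in [365, 2000, 3000]:
        print(n, p_all_covered(n))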


intuition - Why does $e^{i\pi}=-1$?



I will first say that I fully understand how to prove this equation from the use of power series, what I am interested in though is why $e$ and $\pi$ should be linked like they are.



As far as I know $\pi$ comes from geometry (although it does have an equivalent analytical definition), and $e$ comes from calculus.


I cannot see any reason why they should be linked and the proof doesn't really give any insights as to why the equation works.


Is there some nice way of explaining this?


Answer



Euler's formula describes two equivalent ways to move in a circle.


  • Starting at the number $1$, see multiplication as a transformation that changes the number: $1 \cdot e^{i\pi}$.

  • Regular exponential growth continuously increases $1$ by some rate; imaginary exponential growth continuously rotates a number in the complex plane.

  • Growing for $\pi$ units of time means going $\pi\,\rm radians$ around a circle

  • Therefore, $e^{i\pi}$ means starting at 1 and rotating $\pi$ (halfway around a circle) to get to $-1$.

For more details explaining each step, read this article.
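And as a one-line numerical footnote (not from the article above):

    import cmath

    print(cmath.exp(1j * cmath.pi))  # (-1+1.2246e-16j), i.e. -1 up to rounding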



Monday, December 28, 2015

elementary number theory - How to solve $x = 5^{1345}\bmod 58$? Fermat's little theorem



$ x = 5^{1345}\bmod 58$.



I wrote a program that finds the period of the residues and builds a table. This table consists of $k$ lines, where $k$ is the number of residues in one repeating block, as the residues repeat periodically.
In other words, I represent this equation as $x = 5^{n \cdot k+m} \bmod 58$, where $m$ is the index of the residue in the table, and that residue is the answer.



But how to solve this equation mathematically? This algorithm is too complicated to be done on paper. I know that it's possible to use Fermat's little theorem here, but can't understand how. Hope someone will help me to understand this.


Answer



$5^{1345} = 5 \cdot 5^{1344} = 5 \cdot (5^{28})^{48} = 5 \cdot (5^{\phi(58)})^{48} = 5 \cdot 1^{48} = 5 \mod 58$.
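A one-line confirmation with Python's built-in modular exponentiation:

    print(pow(5, 1345, 58))  # 5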



integration - Can the definite integral $I(n) := \frac{2n}{\pi}\int_0^{2\sqrt{2n-1}} \frac{\ln(x)\sqrt{-x^2 + 8n-4}}{4n^2-x^2}dx$ be computed?


For $n \in \mathbb N$, $n \geq 1$, consider the following integral expression:


\begin{equation} I(n) := \frac{2n}{\pi}\int_0^{2\sqrt{2n-1}} \frac{\ln(x)\sqrt{-x^2 + 8n-4}}{4n^2-x^2} dx \end{equation}


My attempts to use any of the few obvious (elementary) methods that I am accustomed to (i.e. integration by parts, substitution, series expansion of the integrand) in order to find a general antiderivative were all futile. There might still be a way to compute an antiderivative using complex analysis, but this is currently beyond my capabilities. However, although Wolfram Alpha does not seem to be able either to compute a general antiderivative or to compute the above expression for general $n \in \mathbb N$, it will yield (after possibly some refreshing) an explicit value for any $n$ I've tested so far (all $n$ between $1$ and $15$). Namely, one gets the following result for $1 \leq n \leq 15$:


\begin{equation} I(n)= (n-\frac{1}{2}) \ln(2n-1) - (n-1) \ln(2n). \end{equation}


This suggests that there is indeed a way to compute the above expression explicitly. Any help is highly appreciated.


Answer



Hint. By making the change of variable, $$ x=2\sqrt{2n-1}\:u ,\quad dx=2\sqrt{2n-1}\:du,\quad u=\frac{x}{2\sqrt{2n-1}}, $$ one has $$ \frac{\pi}{2n}\cdot I(n)=\ln(2\sqrt{2n-1})\int_0^1\frac{\sqrt{1-u^2}}{a^2-u^2}\:du+\int_0^1\frac{\ln u \cdot\sqrt{1-u^2}}{a^2-u^2}\:du \tag1 $$ with $$ a:=\frac{n}{\sqrt{2n-1}}>1, \quad (n>1).\tag2 $$ The first integral on the right hand side of $(1)$ may be evaluated by two changes of variable, $u=\sin \theta$ and $t=\tan\theta$, obtaining, for $a>1$,




$$ \int_0^1\frac{\sqrt{1-u^2}}{a^2-u^2}\:du=\frac12\int_0^\infty\frac{1}{\left((a^2-1)t^2+a^2\right)\cdot\left(t^2+1\right)}\:dt=\frac{\pi \left(a-\sqrt{a^2-1}\right)}{2 a}.\tag3 $$



Similarly, the second integral on the right hand side of $(1)$ is such that $$ \int_0^1\frac{\ln u \cdot\sqrt{1-u^2}}{a^2-u^2}\:du= \frac{1}{2}\int_0^\infty\frac{2\ln t-\ln(1+t^2)}{\left((a^2-1)t^2+a^2\right)\cdot\left(t^2+1\right)}\:dt,\quad (a>1),\tag4 $$ transforming the integrand by a partial fraction decomposition, using the standard results $$ \begin{align} \int_0^\infty\frac{\ln t}{t^2+\alpha^2}\:dt&=\frac{\pi}{2}\cdot\frac{\ln\alpha}{\alpha},\quad \alpha>0, \qquad (\star) \\\int_0^\infty\frac{\ln (1+t^2)}{t^2+\alpha^2}\:dt&=\pi\cdot\frac{\ln(\alpha+1)}{\alpha},\quad \alpha>0,\qquad (\star \star) \end{align} $$ one gets



$$ \int_0^1\frac{\ln u \cdot\sqrt{1-u^2}}{a^2-u^2}\:du=\frac{\pi\sqrt{a^2-1}}{2 a} \cdot\ln\left(\frac{\sqrt{a^2-1}+a}{a}\right)-\frac{\pi}2 \ln 2,\quad (a>1).\tag5 $$



Combining $(3)$ and $(5)$ with $(2)$ gives



$$ I(n) := \frac{2n}{\pi}\int_0^{2\sqrt{2n-1}} \frac{\ln(x)\sqrt{-x^2 + 8n-4}}{4n^2-x^2} dx= \left(n-\frac{1}{2}\right) \ln(2n-1) - (n-1) \ln(2n) $$




for all real numbers $n$ such that $n>1$.



Edit. To prove $(\star)$, one may perform the change of variable $t=\alpha u$, $\alpha>0$, getting $$ \int_0^\infty\frac{\ln t}{t^2+\alpha^2}\:dt=\frac1\alpha \cdot\int_0^\infty\frac{\ln \alpha+ \ln u}{u^2+1}\:du=\frac{\pi}{2}\cdot\frac{\ln\alpha}{\alpha} $$ since $$ \int_0^\infty\frac{1}{u^2+1}\:du=\left[\frac{}{}\arctan u \frac{}{}\right]_0^\infty=\frac \pi2, \quad \int_0^\infty\frac{\ln u}{u^2+1}\:du=0 $$ (the latter is seen by making $u \to 1/u$).


To prove $(\star\star)$, one may perform the change of variable $t=\alpha u$, $\alpha>0$, getting $$ \int_0^\infty\frac{\ln (1+t^2)}{t^2+\alpha^2}\:dt=\frac1\alpha \cdot\int_0^\infty\frac{\ln (1+\alpha^2u^2)}{u^2+1}\:du $$ differentiating the latter integral with respect to $\alpha$ gives a classic evaluation $$ \frac{d}{d\alpha}\int_0^\infty\frac{\ln (1+\alpha^2u^2)}{u^2+1}\:du=\int_0^\infty\frac{2\alpha u^2}{(u^2+1)(1+\alpha^2u^2)}\:du=\frac{\pi}{\alpha+1} $$ then integrating one gets $(\star\star)$.
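A numerical cross-check of the final formula (a sketch in Python, assuming SciPy is available; the derivation above allows real $n>1$):

    import math
    from scipy.integrate import quad

    def I(n):
        upper = 2 * math.sqrt(2 * n - 1)
        # max(0, ...) guards against tiny negative rounding at the endpoint;
        # quad may warn about the integrable log singularity at 0
        integrand = lambda x: (math.log(x)
                               * math.sqrt(max(0.0, 8 * n - 4 - x * x))
                               / (4 * n * n - x * x))
        val, _ = quad(integrand, 0, upper)
        return 2 * n / math.pi * val

    def closed_form(n):
        return (n - 0.5) * math.log(2 * n - 1) - (n - 1) * math.log(2 * n)

    for n in [2, 3, 7.5]:
        print(n, I(n), closed_form(n))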


Sunday, December 27, 2015

dice - probability of false alarm for a loaded die

A loaded 6-sided die has probability $1/4$ for $3$ and $4$, and $1/8$ for $1, 2, 5, 6$. If I decide whether a die is loaded or not based on one roll, what is the probability of falsely classifying a fair die as loaded? What is the probability of classifying a loaded die as fair?



Not sure how to solve this. The probability of getting a $3$ or $4$ on a loaded die is $1/2$ and on a fair die it is $1/3$. The likelihood ratio is $3/2$. Where do I go from here?
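No answer is posted; here is a minimal sketch in Python, assuming the natural decision rule implied by the likelihood ratio (declare "loaded" exactly when the roll is a $3$ or a $4$):

    # Assumed decision rule: declare "loaded" iff the roll is 3 or 4,
    # since that's where the likelihood ratio (1/4)/(1/6) = 3/2 exceeds 1.
    p_false_alarm = 2 * (1 / 6)   # fair die shows 3 or 4:    1/3
    p_miss        = 4 * (1 / 8)   # loaded die shows 1,2,5,6: 1/2
    print(p_false_alarm, p_miss)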

geometry - The staircase paradox, or why $pine4$



What is wrong with this proof?






Is $\pi=4?$


Answer



This question is usually posed as the length of the diagonal of a unit square. You start going from one corner to the opposite one following the perimeter and observe the length is $2$, then take shorter and shorter stair-steps and the length is $2$ but your path approaches the diagonal. So $\sqrt{2}=2$.



In both cases, you are approaching the area but not the path length. You can make this more rigorous by breaking into increments and following the proof of the Riemann sum. The difference in area between the two curves goes nicely to zero, but the difference in arc length stays constant.



Edit: making the square more explicit. Imagine dividing the diagonal into $n$ segments and a stairstep approximation. Each triangle is $(\frac{1}{n},\frac{1}{n},\frac{\sqrt{2}}{n})$. So the area between the stairsteps and the diagonal is $n \frac{1}{2n^2}$ which converges to $0$. The path length is $n \frac{2}{n}$, which converges even more nicely to $2$.
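A numerical rendering of that edit (a sketch in Python):

    import math

    # Staircase with n steps along the diagonal of the unit square:
    # each step contributes 1/n horizontally and 1/n vertically.
    for n in [1, 10, 1000]:
        path_length = n * (2 / n)              # always 2
        area_between = n * (1 / (2 * n ** 2))  # -> 0
        print(n, path_length, area_between, math.sqrt(2))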


Saturday, December 26, 2015

induction - Prove that $\log(x) < x$ for $x > 0$, $x\in \mathbb{N}$.


I'm trying to prove $ \log(x) < x$ for $x > 0$ by induction.


Base case: $x = 1$


$\log (1) < 1$ ---> $0 < 1$ which is certainly true.


Inductive hypothesis: Assume $x = k$ ---> $\log(k) < k$ for $k > 0$


Inductive conclusion: Prove $\log(k+1) < k+1$


I don't know what to do after this. I mean the statement itself is quite obviously true, but how do I continue with the proof?


Answer



I don't know why you'd use induction (unless the domain of each function is $\mathbb{N}\setminus \{0\}$). Here is an alternative approach using calculus. If this is not helpful, I can delete this answer.

Let $g(x)= x- \log(x)$.


$g'(x) = 1 - \frac{1}{x} > 0 $

for all $ x >1$. So $g(x)$ is increasing on $(1,\infty)$.

At $x=1$, $g(x) = 1$; thus $x - \log(x) > 0$ for all $x \ge 1$ (use continuity and the known value at $x = 1$ together with what has just been shown about the monotonicity of $g$).

Now for $x\in (0,1)$, $\log(x) < 0$ and $x>0$ thus $x-\log(x) > 0$.

Thus $x-\log(x) > 0 $ for all $x \in (0,\infty)$. And conclude $x> \log(x) $ for all $x\in (0,\infty)$.

Added

If you want to use induction to show that for each $x\in \mathbb{N}\setminus \{0\}$, $x>\log(x)$, use your inductive hypothesis via: $$ k > \log(k) \longrightarrow \\ k+\log(1+\frac{1}{k})> \log(k)+\log(1+\frac{1}{k}) = \log(k+1) \\ k+\log(1+\frac{1}{k}) \le k + \log(2) \text{ and } \log(2) < 1 \text{ so } \\ k + \log(2) < k + 1 \text{ thus } \\ k+1 > k + \log(2) \ge k + \log(1+\frac{1}{k}) > \log(k+1) $$ Q.E.D.



How to prove that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$ without using Fourier Series




Can we prove that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$ without using Fourier series?


Answer



Yes. The most common way to do this is attributed to Euler. It does still require Maclaurin series, however.




Consider the Maclaurin polynomial for $\frac{\sin x}{x}$:



$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots$$



However, note that this is a polynomial $p(x)$ with zeroes $\{\pm k\pi\;|\;k \in \Bbb N\}$, and for which $p(0) = 1$. These two properties mean that



$$\frac{\sin x}{x} = \left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{2\pi}\right)\cdots$$



And by multiplying adjacent terms,




$$\frac{\sin x}{x} = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots$$



Equating the $x^2$ terms in the Maclaurin polynomial and its factored form yields



$$-\frac{x^2}{3!} = -x^2\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots\right)$$



And multiplying both sides by $-\frac{\pi^2}{x^2}$ gives us



$$\frac{\pi^2}{6} = 1 + \frac{1}{4} + \frac{1}{9} + \cdots$$
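A quick numerical check of the conclusion (a sketch in Python; the tail of the series is roughly $1/N$):

    import math

    N = 10_000
    partial = sum(1 / k ** 2 for k in range(1, N + 1))
    print(partial, math.pi ** 2 / 6)  # differ by roughly 1/N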



real analysis - When convolution of two functions has compact support?

It is well-known that,
if $f$ and $g$ are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous (Hörmander 1983, Chapter 1).




Next,
Suppose $f\in L^{1}(\mathbb R)$ is given.



My Question is:




Can we expect to choose $\phi \in C_{c}^{\infty}(\mathbb R)$ with $\int_{\mathbb R}\phi(t)dt=1$ and the support of $f\ast \phi$ is contained in a compact set, that is, $\operatorname{supp} f\ast \phi \subset K;$ where $K$ is some compact set in $\mathbb R$ ?




Thanks,

Friday, December 25, 2015

Improper integration involving complex analytic arguments



I am trying to evaluate the following:




$\displaystyle \int_{0}^{\infty} \frac{1}{1+x^a}dx$, where $a>1$ and $a \in \mathbb{R}$



Any help will be much appreciated.


Answer



Use the change of variables $1+x^\alpha=\frac{1}{t}$ to cast the integral in terms of the beta function



$$ \frac{1}{\alpha}\int_{0}^{1}t^{-1/\alpha}(1-t)^{1/\alpha-1}\,dt= \frac{1}{\alpha}\Gamma\left(\frac{1}{\alpha}\right)\Gamma\left(1-\frac{1}{\alpha}\right)=\frac{\pi}{\alpha\sin(\pi/\alpha)}, $$ where the last step uses the reflection formula $\Gamma(z)\Gamma(1-z)=\pi/\sin(\pi z)$.
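A numerical cross-check of the closed form (a sketch in Python, assuming SciPy is available):

    import math
    import numpy as np
    from scipy.integrate import quad

    for a in [1.5, 2.0, 5.0]:
        val, _ = quad(lambda x: 1 / (1 + x ** a), 0, np.inf)
        print(a, val, math.pi / (a * math.sin(math.pi / a)))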


Thursday, December 24, 2015

limits - $\lim_{x\to0} \frac{x-\sin x}{x-\tan x}$ without using L'Hopital



$$\lim_{x\to0} \frac{x-\sin x}{x-\tan x}=?$$



I tried using $\lim_{x\to0} \frac{\sin x}{x}=1$.



But it doesn't work :/


Answer




$$\frac{x - \sin(x)}{x - \tan(x)} = \frac{x - \sin(x)}{x^3} \cdot \frac{x^3}{x - \tan(x)}$$



Let $x = 3y$ and $x\to 0 \implies y\to 0$
$$\lim_{x\to0} \frac{x - \sin(x)}{x^3} = L $$
$$L = \lim_{y\to0}\frac{3y - \sin(3y)}{(3y)^3} = \lim_{y\to0} \frac 3 {27} \frac{y - \sin(y)}{y^3} + \lim_{y\to0} \frac{4}{27} \frac{\sin^3(y)}{y^3} = \frac{1}{9} L + \frac 4{27} $$



This gives
$$\lim_{x\to0}\frac{x - \sin(x)}{x^3} = \frac 1 6 $$



\begin{align*}

L &= \lim_{y\to0}\frac{ 3y - \tan(3y)}{27 y^3} \\
&= \lim_{y\to0} \frac{1}{(1 - 3\tan^2(y ))} \cdot \frac{3y(1 - 3\tan^2(y )) - 3 \tan(y) + \tan^3(y)}{27y^3}\\
&= \lim_{y\to0} \frac{1}{(1 - 3\tan^2(y ))} \cdot \left(
\frac 3 {27} \frac{y - \tan(y)}{y^3} + \frac 1 {27} \frac{\tan^3(y)}{y^3} - \frac 9 {27} \frac{y \tan^2(y)}{y^3 }
\right )\\
&= \frac {3L}{27} + \frac 1 {27} - \frac 1 3 \\
\end{align*}



This gives the other limit to be $-1/3$; putting the two together, $$\lim_{x\to0}\frac{x-\sin x}{x-\tan x}=\frac{1/6}{-1/3}=-\frac12.$$
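A quick numerical confirmation (a sketch in Python):

    import math

    # (x - sin x)/(x - tan x) approaches (1/6)/(-1/3) = -1/2 as x -> 0
    for x in [0.5, 0.1, 0.01]:
        print(x, (x - math.sin(x)) / (x - math.tan(x)))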


functions - On the bijection between the naturals and a countable number of countable sets.




I am assuming we are utilizing the axiom of choice because I read in the suggested questions that may have my answer that it's needed for the following bijection to work.



Suppose I am given a countable number of sets $A_1, A_2 \dots$ each with a countable number of elements, I will use the notation that given a set $A$ the first element will be represented as $A(1)$ and so forth.



I was shown a bijection $f$ that puts these in correspondence with the natural numbers as follows:



$$f(1) = A_1(1), \quad f(2) = A_1(2), \quad f(3) = A_2(1), \quad f(4) = A_1(3), \quad f(5) = A_2(2), \quad f(6) = A_3(1) \quad \dots $$



And I was told that the explicit bijection is $$f(\frac{n^2+2nx + x^2 -3x - n +2}{2}) = A_n(x)$$




Where $x$ takes values in the naturals.



How is the formula for this bijection derived?


Answer



The pattern being followed is shown here. As Gae. S. indicates, this works off of a particular map from $\Bbb N \to \Bbb N \times \Bbb N$ that follows this path:



[Figure: the diagonal path through the $\Bbb N \times \Bbb N$ grid — "countable times countable is countable" (taken from http://www.cut-the-knot.org/do_you_know/countRatsX.shtml)]




Someone apparently has taken the time to calculate this map is the inverse function of $$\rho\ :\ \Bbb N \times \Bbb N \to \Bbb N\ :\ (n, x) \mapsto \frac{n^2+2nx + x^2 -3x - n +2}{2}$$
(I've never seen anyone bother to figure that out, before. Usually, they just take it as obvious from the path that such a map exists, which is all that you need to prove the theorem.)



So far, the axiom of choice is not involved. The only place it comes up is this: All you know about the $A_i$ is that they are countable. But in proclaiming that there is a "least element" of $A_1$ to be called $A_1(1)$, etc. you are actually choosing for each $i$, a particular mapping from some initial segment of $\Bbb N$ to $A_i$. This requires using the Axiom of Choice (or at least, of Countable Choice, which is weaker). Once you have made these choices, then you can define a map from $$g\ :\ \Bbb N \times \Bbb N \to \bigcup_{i=1}^\infty A_i\ :\ (n, x) \mapsto A_n(x)$$
Then your $f = g\circ \rho^{-1}$.



But there are two problems with calling this a bijection, or even fully defined. "Countable" does not imply infinite. Finite sets are also countable. ("Denumerable" is the condition of being both countable and infinite.) If the number of sets $A_i$ is finite, or if any of the $A_i$ are themselves finite, then $f$ is not a bijection between $\Bbb N$ and $\bigcup_{i=1}^\infty A_i$. In fact, there are some $n$ for which $f(n)$ is not even defined. $f$ will only be a bijection when the collection $\{A_i\}$ is infinite, and each $A_i$ is also infinite. Or at least, if it doesn't run afoul of the second problem.



The second problem is that we don't know if $A_i$ are disjoint. It could even be that for all $i > 1, A_i \subseteq A_1$. Even if everything is denumerable, if there is even one element common to two of the sets, say $A_i(x) = A_j(y)$, then $f$ will not be an injection, as $$f(\frac{i^2+2ix + x^2 -3x - i + 2}{2}) = f(\frac{j^2+2jy + y^2 -3y - j + 2}{2})$$




Finally, to answer the exact question you asked, if you look at the figure above, the indexes for the first column are $1,3,6,10,15,...$. These are the triangular numbers. The $n$th such number is the sum from $1$ to $n$, and is well-known to be $\frac{n(n+1)}2$. To get the rest of the indices along a diagonal, subtract the column number less $1$ from the triangular number from the first column. For row $n$ column $x$, the triangular number is for $n + x - 1$, so it is $\frac{(n+x)(n+x-1)}2 - (x - 1)$, which simplifies to your formula.
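A brute-force confirmation that the formula really enumerates the diagonals bijectively (a sketch in Python):

    def rho(n, x):
        # the pairing map from the answer; n = row, x = column, both >= 1
        return (n * n + 2 * n * x + x * x - 3 * x - n + 2) // 2

    D = 10  # check the first D full diagonals (n + x - 1 <= D)
    values = sorted(rho(n, x) for n in range(1, D + 1)
                              for x in range(1, D + 2 - n))
    print(values == list(range(1, D * (D + 1) // 2 + 1)))  # True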


rational numbers - How can I explain $0.999\ldots=1$?





I have to explain $0.999\ldots=1$ to people who don't know limit.


How can I explain $0.999\ldots=1$?


The common procedure is as follows


\begin{align} x&=0.999\ldots\\ 10x&=9.999\ldots \end{align}


$9x=9$ so $x=1$.



Answer



What I always find the most simple explanation is: $$ \frac{1}{3} = 0.333\ldots \quad \Longrightarrow \quad 1 = 3 \cdot \frac{1}{3} = 3 \cdot 0.333\ldots = 0.999\ldots $$


elementary number theory - Prove by induction that $3^n +7^n −2$ is divisible by $8$ for all positive integers $n$...

Prove by induction that $3^n +7^n −2$ is divisible by $8$ for all positive integers $n$.



So far I have the base case completed, and believe I am close to completing the proof itself.




Base case:$(n=1)$



$3^1 + 7^1 - 2 = 8$, which is divisible by $8$ ($8/8 = 1$).



Inductive Hypothesis: Assume that $3^n +7^n −2$ is divisible by $8$ for some positive integer $n$.



Induction step $(n+1)$ case:



$$ 3^{n+1} + 7^{n+1} - 2 $$




$$3(3^{n}) + 7(7^{n}) - 2$$



$$3^n + 7^n = 8x $$



It seems to me that this could be the end of the proof, because whatever the answer is would be a multiple of $8$; but I am unsure, and any help is appreciated.
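No answer is posted here; for what it's worth, one sketch of how to finish from the inductive hypothesis $3^n+7^n-2=8x$:

$$3^{n+1} + 7^{n+1} - 2 = 3\left(3^{n} + 7^{n} - 2\right) + 4\cdot 7^{n} + 4 = 3\cdot 8x + 4\left(7^{n}+1\right),$$

and $7^n+1$ is even (an odd number plus one), so $4(7^n+1)$ is divisible by $8$, making the whole expression divisible by $8$.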

integration - Why do we treat differential notation as a fraction in u-substitution method




How did we come to know that treating the differential notation as a fraction will help us in finding the integral. And how do we know about its validity?

How can $\frac{dy}{dx}$ be treated as a fraction?

I want to know about how did u-substitution come about and why is the differential treated as a fraction in it?


Answer



It doesn't necessarily need to be.



Consider a simple equation $\frac{dy}{dx}=\sin(2x+5)$ and let $u=2x+5$. Then
$$\frac{du}{dx}=2$$
Traditionally, you will complete the working by using $du=2\cdot dx$, but if we were to avoid this, you could instead continue with the integral:

$$\int\frac{dy}{dx}dx=\int\sin(u)dx$$
$$\int\frac{dy}{dx}dx=\int\sin(u)\cdot\frac{du}{dx}\cdot\frac{1}{2}dx$$
$$\int\frac{dy}{dx}dx=\frac{1}{2}\int\sin(u)\cdot\frac{du}{dx}dx$$
$$y=c-\frac{1}{2}\cos(u)$$
$$y=c-\frac{1}{2}\cos(2x+5)$$



But why is this? Can we prove that this separation of the differentials is justified? As Gerry Myerson has mentioned, it's a direct consequence of the chain rule:



$$\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$$
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}\frac{du}{dx}dx$$

But then if you 'cancel', it becomes
$$\int\frac{dy}{dx}dx=\int\frac{dy}{du}du$$
Which is what you desired.
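As a cross-check of the worked example above (a sketch, assuming SymPy is installed):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(sp.sin(2 * x + 5), x))  # -cos(2*x + 5)/2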


Wednesday, December 23, 2015

number theory - Prime factor of $A=14^7+14^2+1$




Find a prime factor of $A=14^7+14^2+1$. Obviously without just computing it.


Answer




Hint: I've seen the 3rd cyclotomic polynomial too many times.



$$
\begin{aligned}
x^7+x^2+1&=(x^7-x^4)+(x^4+x^2+1)\\
&=x^4(x^3-1)+\frac{x^6-1}{x^2-1}\\
&=x^4(x-1)(x^2+x+1)+\frac{(x^3-1)(x^3+1)}{(x-1)(x+1)}\\
&=x^4(x-1)(x^2+x+1)+(x^2+x+1)(x^2-x+1)
\end{aligned}
$$
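Plugging in $x=14$, the common factor $x^2+x+1$ evaluates to $211$, a prime; a one-line check in Python:

    A = 14 ** 7 + 14 ** 2 + 1
    print(A % 211, A // 211)  # 0 499591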



proof by induction that $\sum_{k = n}^{m} \binom{k}{n} = \binom{m + 1}{n + 1}$

I am unable to solve following proof by induction.



$$\sum_{k = n}^{m} \binom{k}{n} = \binom{m + 1}{n + 1}$$



Can you please help me ?




a) show that it is true for $n=m$



b) show that it is true for $(m+1,n)$



c) give proof why it works for all other values of $m,n$



Thanks.
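No answer is posted; a sketch following steps a)–c): for a) ($m=n$), both sides equal $1$, since $\binom{n}{n}=1=\binom{n+1}{n+1}$. For b) and c), induct on $m$ with $n$ fixed: assuming the identity for $m$, Pascal's rule gives

$$\sum_{k=n}^{m+1}\binom{k}{n} = \binom{m+1}{n+1} + \binom{m+1}{n} = \binom{m+2}{n+1},$$

which is the identity for $m+1$.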

integration - Calculating the following integral using complex analysis: $\int_{0}^{\pi}e^{a\cos(\theta)}\cos(a\sin(\theta))\, d\theta$



I am trying to solve a question from my complex analysis test that
I didn't manage to do during the test in order to practice for the

next exam.



The problem is as follows:




Calculate $\int_{0}^{\pi}e^{a\cos(\theta)}\cos(a\sin(\theta))\,
d\theta$




Where the method used should be using complex analysis.




What I tried:



I have noticed $$e^{a\cos(\theta)}\cos(a\sin(\theta))=Re(e^{a\,\mathrm{cis}(\theta)})$$
where $\mathrm{cis}(\theta):=\cos(\theta)+i\sin(\theta)$



Hence the integral is $$\int_{0}^{\pi}Re(e^{a(\cos(\theta)+i\sin(\theta))})\, d\theta$$



I did the change of variables: $z=ae^{i\theta}$, $dz=iae^{i\theta}\, d\theta\implies d\theta=\frac{dz}{iz}$.




When $\theta=0$ we have that $z=a$ and when $\theta=\pi$ we get
$z=-ia$



and so I wrote that the integral is



$$\int_{a}^{-ia}Re(e^{z})\,\frac{dz}{iz}$$



Now I don't understand how I got $e^{z}$, which seems wrong to me,
but what the checker actually marked in red and wrote a question mark
by were the integration limits: $a,-ia$ .




Can someone please help me understand what was wrong with what the
integration limits, and how to actually solve this integral ?



Any help is greatly appreciated!


Answer



As noted in Emmet's comment, there is an error in your computation: when $\theta=\pi$, $z=-a$, not $-i\,a$ (since $e^{\pi i}=-1$).



The computation becomes easier by noting that the integrand is even, so that it is one half of the integral between $0$ and $2\,\pi$. After the change of variables the integral becomes a line integral along the unit circle, and you can apply Cauchy's theorem or the calculus of residues.




I will work backwards. Consider the integral
$$
\int_{|z|=1}\frac{e^{az}}{i\,z}\,dz.
$$
By Cauchy's theorem its value is $2\,\pi$. Now let $z=e^{it}$. Then
\begin{align*}
\int_{|z|=1}\frac{e^{az}}{i\,z}\,dz&=\int_0^{2\pi}e^{ae^{it}}\frac{i\,e^{it}\,dt}{i\,e^{it}}\\
&=\int_0^{2\pi}e^{a(\cos t+i\sin t)}dt\\
&=\int_0^{2\pi}e^{a\cos t}\cos(a\sin t)\,dt+i\int_0^{2\pi}e^{a\cos t}\sin(a\sin t)\,dt.
\end{align*}

Finally, we get
$$
\int_0^{\pi}e^{a\cos t}\cos(a\sin t)\,dt=\pi.
$$
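A numerical cross-check of the final value (a sketch in Python, assuming SciPy is available; the integral equals $\pi$ for every real $a$):

    import math
    from scipy.integrate import quad

    for a in [0.5, 1.0, 3.0]:
        val, _ = quad(lambda t: math.exp(a * math.cos(t)) * math.cos(a * math.sin(t)),
                      0, math.pi)
        print(a, val, math.pi)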


Functional Equations with Taylor Series



I am stuck on the functional equation

$$f(f(x))+f(x)=e^x$$
Somebody has suggested to me in the past that I can use Taylor Series to solve some of the harder functional equations, but I'm not sure how to use that to solve this. Help?



If Taylor Series is not the way to go for this functional equation, can somebody tell me what method I should use?



In general, can anybody recommend a few different methods that would be good to keep in mind when dealing with functional equations like this in which functional iteration is present?



Thanks!


Answer



I don't know whether there is a simple closed form for the function, but I can show how one would use a Taylor expansion here.




First, let us suppose there is a value $q$ which is a fixed point of the function f(x)
$$
f(q)=q
$$
If we fill this in the relation we find:
$$
f(f(q)) + f(q) = f(q) + q = 2 q = e^q
$$
Note, though, that $2q=e^q$ has no real solution: the minimum of $e^q-2q$ over the reals is $2-2\ln 2>0$, attained at $q=\ln 2\approx0.693147$. So the fixed point $q$ is necessarily complex, and the expansion below is to be read over $\mathbb C$.
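A quick check of this with mpmath (a sketch; substituting $-qe^{-q}=-1/2$ shows the solutions of $e^q=2q$ are $q=-W_k(-1/2)$ in terms of the Lambert $W$ function):

    from mpmath import lambertw, exp, mpf

    q = -lambertw(mpf(-1) / 2)   # principal branch; a complex fixed point
    print(q, exp(q) - 2 * q)     # the residual should be ~0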




Since we are interested in a smooth function $f(x)$ we know that we could in principle make a Taylor expansion at any point $x$. If we use the fixed point we would have something like:



$$
f(q+\delta) = q + \sum_{i=1}^\infty a_i \delta^i
$$
with some still unknown coefficients $a_i$. Having assumed a smooth function $f(x)$ there is at least a small range of values for $\delta$ where the series converges.



We can now insert this expansion into the relation we have and work all terms out up to some order ${\cal O}(\delta^n)$ and group terms with the same power of $\delta$. The first few terms would be :




$$
2 q + \left(a_1 + a_1^2\right) \delta + \left(a_2 + a_1 a_2 + a_1^2 a_2\right) \delta^2 + \dots = e^{q+\delta} = e^q + e^q \delta + e^q \frac{\delta^2}{2} + \dots
$$
and since for any small $\delta$ this should be the same on both sides we get a lot of equations in the unknown values $a_i$. The lowest order is
$$
2 q = e^q,
$$
which we already assumed. The next one is
$$
a_1 + a_1^2 = e^q = 2q

$$
from which we find
$$
a_1 = \frac{1 \pm \sqrt{1 + 8 q}}{2}
$$
The sign is not too important and only determines whether we choose $x$ to go to the left or the right. The next equation we get is
$$
a_2 + a_1 a_2 + a_1^2 a_2 = \frac{1}{2} e^q = q
$$
We can now insert the solution obtained for $a_1$ (or combine the equations) and determine that

$$
a_2 = \frac{q}{1 + 2 q}
$$
This procedure can be continued arbitrarily far and in this particular problem(!) the first equation you encounter for $a_k$ is always linear and easy to solve.



Unfortunately, going from a "determined" function in terms of a local Taylor expansion to a closed expression is in general not so easy.



A typical strategy to solve this type of problems would be to start looking for fixed points or symmetries that can be exploited.


calculus - $\sum_{k=1}^{\infty}\frac{1}{k(k+1)^{\frac{1}{n}}}>n$



I think the following question is true:



For each positive integer $n\geq 2$, prove




$$\sum_{k=1}^{\infty}\frac{1}{k(k+1)^{\frac{1}{n}}}>n$$



I try using by induction on $n$, but I think this is not easy with induction.



Do you have any idea or comment for proving it?



So thanks for any comment and help.


Answer



The main term behaves like $k^{-\left(1+\frac{1}{n}\right)}$, hence it should not be difficult to tackle the problem through creative telescoping (section 1 here). If $a>b>0$ and $n\geq 1$ we have
$$ n(a-b)b^{n-1}\leq a^n-b^n \leq n(a-b)a^{n-1} \tag{1}$$

by simply considering the expansion of $\frac{a^n-b^n}{a-b}=a^{n-1}+\ldots+b^{n-1}$.
If we pick $a$ as $\frac{1}{k^{1/n}}$ and $b$ as $\frac{1}{(k+1)^{1/n}}$ we get:



$$\small n\left(\frac{1}{k^{1/n}}-\frac{1}{(k+1)^{1/n}}\right)\frac{1}{(k+1)^{1-1/n}}\leq \frac{1}{k(k+1)}\leq n\left(\frac{1}{k^{1/n}}-\frac{1}{(k+1)^{1/n}}\right)\frac{1}{k^{1-1/n}}$$
from which:
$$ \frac{1}{k(k+1)^{1/n}}\geq n\left(\frac{1}{k^{1/n}}-\frac{1}{(k+1)^{1/n}}\right)\tag{2} $$
leading to the claim in a straightforward way:
$$ \sum_{k\geq 1}\frac{1}{k(k+1)^{1/n}}\geq n, $$ with strict inequality for $n\geq 2$, since $(2)$ is then strict for every $k$.
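A numerical look at the sum (a sketch, assuming mpmath is available; its nsum acceleration should handle the algebraic $k^{-(1+1/n)}$ decay):

    from mpmath import nsum, inf

    for n in [2, 3, 5]:
        s = nsum(lambda k: 1 / (k * (k + 1) ** (1 / n)), [1, inf])
        print(n, s)  # each value exceeds n, as claimed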


Tuesday, December 22, 2015

discrete mathematics - Proof of the Hockey-Stick Identity: $\sum\limits_{t=0}^n \binom tk = \binom{n+1}{k+1}$



After reading this question, the most popular answer use the identity
$$\sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1}.$$



What's the name of this identity? Is it a modified form of the Pascal's-triangle identity?



How can we prove it? I tried by induction, but without success. Can we also prove it algebraically?




Thanks for your help.






EDIT 01 : This identity is known as the hockey-stick identity because, on Pascal's triangle, when the addends represented in the summation and the sum itself are highlighted, a hockey-stick shape is revealed.



[Image: the hockey-stick pattern highlighted in Pascal's triangle]


Answer



This is purely algebraic. First of all, since $\dbinom{t}{k} =0$ when $k>t$ we can rewrite the identity in question as

$$\binom{n+1}{k+1} = \sum_{t=0}^{n} \binom{t}{k}=\sum_{t=k}^{n} \binom{t}{k}$$



Recall that (by the Pascal's Triangle),
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$



Hence
$$\binom{t+1}{k+1} = \binom{t}{k} + \binom{t}{k+1} \implies \binom{t}{k} = \binom{t+1}{k+1} - \binom{t}{k+1}$$



Let's get this summed by $t$:
$$\sum_{t=k}^{n} \binom{t}{k} = \sum_{t=k}^{n} \binom{t+1}{k+1} - \sum_{t=k}^{n} \binom{t}{k+1}$$




Let's factor out the last member of the first sum and the first member of the second sum:
$$\sum _{t=k}^{n} \binom{t}{k}
=\left( \sum_{t=k}^{n-1} \binom{t+1}{k+1} + \binom{n+1}{k+1} \right)
-\left( \sum_{t=k+1}^{n} \binom{t}{k+1} + \binom{k}{k+1} \right)$$



Obviously $\dbinom{k}{k+1} = 0$, hence we get
$$\sum _{t=k}^{n} \binom{t}{k}
=\binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}

-\sum_{t=k+1}^{n} \binom{t}{k+1}$$



Let's introduce $t'=t-1$, then if $t=k+1 \dots n, t'=k \dots n-1$, hence
$$\sum_{t=k}^{n} \binom{t}{k}
= \binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}
-\sum_{t'=k}^{n-1} \binom{t'+1}{k+1}$$



The latter two sums cancel each other, and you get the desired formulation
$$\binom{n+1}{k+1}

= \sum_{t=k}^{n} \binom{t}{k}
= \sum_{t=0}^{n} \binom{t}{k}$$
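A brute-force verification of the identity (a sketch in Python):

    from math import comb

    def hockey_stick(n, k):
        return sum(comb(t, k) for t in range(n + 1)) == comb(n + 1, k + 1)

    print(all(hockey_stick(n, k) for n in range(20) for k in range(20)))  # True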


Divergent series which is Abel summable but not Euler summable



It is said that:




Abel summation and Euler summation are not comparable.





We were able to find examples of divergent series which are Euler summable but not Abel summable, for instance
$$ 1-2+4-8+16-\dots$$



However, we couldn't find any example of a divergent series which is Abel summable but not Euler summable.



Do you know such an example?



Thank you!







EDIT: Dear Peter, this is the definition of Euler summation:




Let $\sum_{n=0}^\infty a_n$ be any series. The Euler transformation of this series is defined as:
\begin{equation*}
\sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n\quad\text{ with }\quad b_n:=\sum_{k=0}^n\left(\begin{array}[h]{c} n \\ k \end{array}\right) a_k
\end{equation*}




The series $\sum_{n=0}^\infty a_n$ is called Euler summable if the Euler transformation of this series
$$\sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n$$
converges in the usual sense.



The Euler sum is then given by
$$\sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n.$$
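To see this definition in action (an illustrative sketch, not part of the original question): for the series $1-2+4-8+\cdots$ mentioned above, $a_k=(-2)^k$, so $b_n=\sum_k\binom nk(-2)^k=(1-2)^n=(-1)^n$ by the binomial theorem, and the transformed series converges to $1/3$:

    from math import comb

    a = lambda k: (-2) ** k
    b = lambda n: sum(comb(n, k) * a(k) for k in range(n + 1))  # equals (-1)**n

    euler_sum = sum(b(n) / 2 ** (n + 1) for n in range(60))
    print(euler_sum)  # 0.333... = 1/3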



Answer



From the Wikipedia article,





Euler summation is essentially an explicit form of analytic continuation. If a power series converges for small complex $z$ and can be analytically continued to the open disk with diameter from $-1/2$ to $1$ and is continuous at $1$, then its value at $1$ is called the Euler sum of the series $a_0 + a_1 + \ldots$.




Whereas Abel summation consists of taking the limit of
$
f(z)\equiv \sum_{n=0}^{\infty} a_n z^n
$
as $z$ approaches $1$ from below along the real axis. For a series to be Abel-summable but not Euler-summable, it has to be that $f(z)$ has a limit as $z\rightarrow 1^-$ along the real axis, but $f(z)$ is not continuous at $z=1$. An example would be $f(z)=\exp\left(-\frac{z}{1-z}\right)$, which has an essential singularity at $z=1$, but for which $\lim_{z\rightarrow 1^-}$ exists and is equal to $0$.


modular arithmetic - Find all solutions; $17x \equiv 25 \ (\text{mod } 33)$





Find all solutions $x \in \mathbb{Z}_{m}$ of the following congruence
whereby $m$ is the modulus. If there doesn't exist a solution, state
why.$$17x \equiv 25 (\text{ mod } 33)$$




Alright, so I have big trouble doing this because I only know how to do it if there were a $1$ instead of a $25$ on the right side :/



You then put everything on one side and $33$ on the other side, use the Euclidean algorithm, and calculate it. But what can I do in this case, when there isn't a $1$?




Is it done completely differently, or the same way?


Answer



Since $17$ and $33$ are coprime, i.e. $\gcd(17,33)=1$, we can find integers $m$ and $n$ for which $17m+33n=1$. We can find $m$ and $n$ by using the Extended Euclidean Algorithm.



We see that $m=2$ and $n=-1$, which gives $17\times 2 + 33 \times(-1)=1$. Reducing both sides modulo $33$ gives $17 \times 2 \equiv 1 \bmod 33$, i.e. $2$ is the multiplicative inverse of $17$, modulo $33$.



Coming back to $17x \equiv 25 \bmod 33$. If we multiply both sides by the multiplicative inverse of $17$, i.e. $2$, we get $34x \equiv 50 \bmod 33$, i.e. $1x \equiv 17 \bmod 33$. Hence $x = 33k+17$, where $k$ is an integer.
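As a cross-check: Python (3.8+) computes modular inverses directly with three-argument pow:

    inv = pow(17, -1, 33)
    print(inv, (25 * inv) % 33)  # 2 17  -> x ≡ 17 (mod 33)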


fake proofs - Can this really happen?

Yesterday, while scribbling on the back page of my maths copy, I accidentally came across this



If $x\in \mathbb R$ and $x+x+x+\cdots = m$, where $m$ is any real number.



Then we can write



$ x+x+x+\cdots = m $



$\Rightarrow $ $ x+m=m $




$ \Rightarrow $ $ x=0 $



In the same way we can prove that $ 6=0 $ or $ 7=0 $



But how is this really possible? Please explain. I am in 10th standard.
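No answer is posted here; a sketch of the resolution: the $n$-th partial sum of $x+x+x+\cdots$ is

$$S_n = \underbrace{x+x+\cdots+x}_{n\ \text{times}} = nx,$$

and $nx$ converges to a real number $m$ only when $x=0$ (in which case $m=0$). For $x\neq 0$ the symbol $x+x+x+\cdots$ does not name any real number, so the step $x+m=m$ manipulates something that does not exist. The argument only shows that if the infinite sum were a real number, then $x$ would have to be $0$; it cannot prove $6=0$, because $6+6+6+\cdots$ has no real value $m$ to begin with.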

Monday, December 21, 2015

sequences and series - Complex Analysis Solution to the Basel Problem ($\sum_{k=1}^\infty \frac{1}{k^2}$)





Most of us are aware of the famous "Basel Problem":



$$\sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}$$



I remember reading an elegant proof for this using complex numbers to help find the value of the sum. I tried finding it again to no avail. Does anyone know a complex number proof for the solution of the Basel Problem?



Answer



The most straightforward way I know is to consider the contour integral
$$
\frac{1}{2\pi i}\oint\pi\cot(\pi z)\frac{1}{z^2}\mathrm{d}z\tag{1}
$$
around circles whose radii are $\frac12$ off an integer.



The function $\pi\cot(\pi z)$ has residue $1$ at every integer. Thus the integral in $(1)$ equals the residue of $\pi\cot(\pi z)\dfrac{1}{z^2}$ at $z=0$ plus twice the sum in question (one for the positive integers and one for the negative integers).



The integral in $(1)$ tends to $\color{blue}{0}$ as the radius goes to $\infty$.




The Laurent expansion of $\pi\cot(\pi z)\dfrac{1}{z^2}$ at $z=0$ is
$$
\frac{1}{z^3}-\frac{\pi^2}{3z}-\frac{\pi^4z}{45}-\frac{2\pi^6z^3}{945}-\dots\tag{2}
$$
The only term that contributes to the residue at $z=0$ is the $\dfrac1z$ term. That is, the residue at $z=0$ of $(2)$ is $\color{green}{-\frac{\pi^2}{3}}$. Thus, the sum in question must be $\color{red}{\frac{\pi^2}{6}}$ (so that $\color{green}{-\frac{\pi^2}{3}}+2\cdot\color{red}{\frac{\pi^2}{6}}=\color{blue}{0}$).
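The Laurent coefficients quoted in $(2)$ can be reproduced symbolically (a sketch, assuming SymPy is installed):

    import sympy as sp

    z = sp.symbols('z')
    print(sp.series(sp.pi * sp.cot(sp.pi * z) / z ** 2, z, 0, 4))
    # z**(-3) - pi**2/(3*z) - pi**4*z/45 - 2*pi**6*z**3/945 + O(z**4)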


trigonometry - Identity for a weighted sum of sines / sines with different amplitudes



I'm trying to simplify the following sum of sines with different amplitudes
$$
a \sin(\theta) + b \sin(\phi) = ??? \,\,\,\,\, (1)
$$




I know that
$$
a \sin(\theta) + a \sin(\phi) = a\cdot2\sin\left(\frac{\theta+\phi}{2}\right)\cos\left(\frac{\theta-\phi}{2}\right)
$$



But how do I do the same thing in the case where both sines have different amplitudes, as in (1).



Thanks!


Answer



I'll try to give both geometrical solution and a solution using complex numbers.




But perhaps you might have a look at Wikipedia first, their explanation is probably better than mine. (They certainly have nicer pictures.)



Wikipedia



You can find very similar formulas in Wikipedia's List of trigonometric identities. (Here is also a link to the current revision of the Wikipedia article. I hope I have not made a mistake somewhere and their formulas are equivalent to mine.)



A link to Phasor addition (current revision) is also given there. This Wikipedia article might help to visualize the whole thing.



Geometry




We have the situation shown in the following picture:
[Figure: two vectors of lengths $a$ and $b$ making angles $\theta$ and $\phi$ with the $x$-axis]



Note that the $y$-coordinates of the two vectors are precisely the values you want to add together. So we would like to know the length and the angle for the sum of these two vectors.



We simply rotate the situation. We denote $\alpha=\phi-\theta$.
[Figure: the rotated configuration; the resultant has length $c$ and makes angle $\varkappa$ with the first vector]



We want to find the values $c$ and $\varkappa$. Once we know them, we get

$$a\sin\theta+b\sin\phi= c\sin(\theta+\varkappa).$$



For $c$ we can simply use law of cosines:
$$c^2=a^2+b^2+2ab\cos\alpha$$
(Note that the angle in the bottom triangle is $\pi-\alpha$.)



We can calculate $\varkappa$ using law of sines:
$$\frac{\sin(\pi-\alpha)}{\sin\varkappa} = \frac cb \qquad \Rightarrow \qquad \sin\varkappa=\frac{b\sin\alpha}c$$



Note that this does not determine $\varkappa$ uniquely, but you can find out whether $\varkappa>\pi/2$ or $\varkappa<\pi/2$ by checking the sign of $x$-coordinate of the sum of the two vectors, which is $a+b\cos\alpha$.




Complex numbers



We want to get
$$ae^{i\theta}+be^{i\phi}=ce^{i(\theta+\varkappa)}\tag{1}$$
(If we get such an expression then the imaginary part is a formula for sines.)



This is equivalent to
$$a+be^{i(\phi-\theta)}=ce^{i\varkappa}.\tag{2}$$




To find $c$ we simply compute the absolute value of both sides. On the LHS we get $(a+be^{i(\phi-\theta)})(a+be^{-i(\phi-\theta)})=a^2+b^2+2ab\cos(\phi-\theta)$, hence
$$c^2=a^2+b^2+2ab\cos(\phi-\theta).$$



We also have
$$e^{i\varkappa}=\frac ac+\frac bce^{i(\phi-\theta)}\\
\cos\varkappa+i\sin\varkappa = \frac ac+\frac bc\cos(\phi-\theta)+i\frac bc\sin(\phi-\theta)$$

Comparing the imaginary parts yields $$\sin\varkappa=\frac bc\sin(\phi-\theta).$$



The remark that we have to do a little more to find which of the two possible values of $\varkappa$ we should choose applies here, too.
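A numerical check of the combined formula (a sketch in Python; atan2 resolves the quadrant ambiguity just discussed, since it sees both the sine and cosine parts):

    import math

    a, b = 2.0, 3.0
    theta, phi = 0.7, 2.1
    alpha = phi - theta

    c = math.sqrt(a * a + b * b + 2 * a * b * math.cos(alpha))
    kappa = math.atan2(b * math.sin(alpha), a + b * math.cos(alpha))

    # both prints should agree: a sin(theta) + b sin(phi) = c sin(theta + kappa)
    print(a * math.sin(theta) + b * math.sin(phi))
    print(c * math.sin(theta + kappa))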


Understanding the definitions of limit superior and limit inferior of a real sequence



I'm trying to understand the definitions of limit superior and limit inferior of a sequence $(x_n)$ in the extended real numbers. I know that $\lim \sup$ and $\lim \inf$ are the limits of the sequences defined by $a_m := \sup \{ x_k \ ; \ k \geq m \}$ and $b_m := \inf \{ x_k \ ; \ k \geq m \}$ respectively, that the sequences $(a_m)$ and $(b_m)$ are decreasing and increasing respectively, and that the $\lim \sup$ and $\lim \inf$ always exist since the sequence is defined on the extended real numbers. Finally, I know that if $(x_n)$ is a bounded sequence, then



$$\lim_{m \rightarrow \infty} \sup \{ x_k \ ; \ k \geq m \} = \inf_m \left\{ \sup \{ x_k \ ; \ k \geq m \} \right\} = \inf_m a_m$$



because $(a_m)$ is a decreasing sequence and this sequence is bounded (otherwise, $(x_n)$ would be unbounded).



Analogously,




$$\lim_{m \rightarrow \infty} \inf \{ x_k \ ; \ k \geq m \} = \sup_m \left\{ \inf \{ x_k \ ; \ k \geq m \} \right\} = \sup_m b_m$$



because $(b_m)$ is an increasing sequence and this sequence is bounded (otherwise, $(x_n)$ would be unbounded).



My doubt is why $\lim_{m \rightarrow \infty} \sup \{ x_k \ ; \ k \geq m \} = \inf_m \left\{ \sup \{ x_k \ ; \ k \geq m \} \right\}$ and $\lim_{m \rightarrow \infty} \inf \{ x_k \ ; \ k \geq m \} = \sup_m \left\{ \inf \{ x_k \ ; \ k \geq m \} \right\}$ make sense when $(x_n)$ is unbounded (i.e., $(x_n)$ doesn't have an upper bound, a lower bound, or both). I would like to know how to argue to prove these equalities.



Thanks in advance!


Answer



If the sequence has no upper bound then $\sup(\{x_k\mid k\geq m\})=+\infty$ for every $m$ so that:





  • $\lim_{m\to\infty}\sup(\{x_k\mid k\geq m\})=+\infty$

  • $\inf(\{\sup(\{x_k\mid k\geq m\})\mid m=1,2,\dots\})=\inf(\{+\infty,+\infty,\dots\})=+\infty$



From this we conclude that they are equal again (and both equal $+\infty$).



Same story for $\liminf$ where $-\infty$ takes the place of $+\infty$.




Actually this takes place in the extension $\overline{\mathbb R}=\{-\infty\}\cup\mathbb R\cup\{+\infty\}$.


Prove that if $x$ and $y$ are irrational numbers, there exists an irrational number $z$ such that $y < z < x$



My teacher proposed this question a few days ago along with the similar case for rational numbers. I've already figured out the proof for rational numbers (just prove that their arithmetic mean is rational), but I'm not really sure where to start with this proof. I guess I'm stuck because it isn't as easy to represent irrational numbers as it is for rational numbers. Also, the arithmetic mean of two irrational numbers isn't necessarily irrational, so that approach doesn't work either. Could someone help me get started?


Answer



If $\frac{x+y}{2}$ is irrational, there you go. If not, then $q=\frac{x+y}{2}$ is rational and what can we say about $\frac{q+x}{2}$?


calculus - Finding the limit of $\frac{n}{\sqrt[n]{n!}}$


I'm trying to find $$\lim_{n\to\infty}\frac{n}{\sqrt[n]{n!}} .$$



I tried a couple of methods: Stolz, Squeeze, D'Alembert.


Thanks!


Edit: I can't use Stirling.


Answer



Let $\displaystyle{a_n=\frac{n^n}{n!}}$. Then the power series $\displaystyle{\sum_{n=1}^\infty a_n x^n}$ has radius of convergence $R$ satisfying $\displaystyle{\frac{1}{R}=\lim_{n\to \infty} \sqrt[n]{a_n}=\lim_{n\to\infty}\frac{a_{n+1}}{a_n}}$, provided these limits exist. The first limit is what you're looking for, and the second limit is $\displaystyle{\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n}$.


Added: I just happened upon a good reference for the equality of limits above, which gives a more general result which is proved directly without reference to power series. Theorem 3.37 of Rudin's Principles of mathematical analysis, 3rd Ed., says:



For any sequence $\{c_n\}$ of positive numbers, $$\liminf_{n\to\infty}\frac{c_{n+1}}{c_n}\leq\liminf_{n\to\infty}\sqrt[n]{c_n},$$ $$\limsup_{n\to\infty}\sqrt[n]{c_n}\leq\limsup_{n\to\infty}\frac{c_{n+1}}{c_n}.$$



In the present context, this shows that $$\liminf_{n\to\infty}\left(1+\frac{1}{n}\right)^n\leq\liminf_{n\to\infty}\frac{n}{\sqrt[n]{n!}}\leq\limsup_{n\to\infty}\frac{n}{\sqrt[n]{n!}}\leq\limsup_{n\to\infty}\left(1+\frac{1}{n}\right)^n.$$ Assuming you know what $\displaystyle{\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n}$ is, this shows both that the limit in question exists (in case you didn't already know by other means) and what it is.




From the comments: User9176 has pointed out that the case of the theorem above where $\displaystyle{\lim_{n\to\infty}\frac{c_{n+1}}{c_n}}$ exists follows from the Stolz–Cesàro theorem applied to finding the limit of $\displaystyle{\frac{\ln(c_n)}{n}}$. Explicitly, $$\lim_{n\to\infty}\ln(\sqrt[n]{c_n})=\lim_{n\to\infty}\frac{\ln(c_n)}{n}=\lim_{n\to\infty}\frac{\ln(c_{n+1})-\ln(c_n)}{(n+1)-n}=\lim_{n\to\infty}\ln\left(\frac{c_{n+1}}{c_n}\right),$$ provided the latter limit exists, where the second equality is by the Stolz–Cesàro theorem.
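A numerical look at the limit (a sketch in Python; working in log space via lgamma avoids overflowing $n!$):

    import math

    # n / (n!)^(1/n), using lgamma(n+1) = log(n!); the values approach e
    for n in [10, 1_000, 1_000_000]:
        print(n, math.exp(math.log(n) - math.lgamma(n + 1) / n))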


What is wrong with this circle's area problem?



My solution and my book's solution don't match.



Is something wrong with my solution?
If so, where and why?



My book says:





The radius r of a circle increases by 50%.
In terms of r, what is the area of the circle
with the increased radius?




My solution:




  1. A = $\pi r^2\ $ => Area of any circle

  2. ir = $\ 3r/2 \ $ => Increased radius

  3. A$\ _{ir} = \pi ir^{2} \ $ => Area of circle with increased radius

  4. A$\ _{ir} = \pi (3r/2 )^{2} \ $ => Substituting ir with its value


  5. A$\ _{ir} = \pi (9r^2/4 ) \ $ => Square

  6. A$\ _{ir} = \ (9\pi r^2 )/4 \ $ => Result



Is the "in terms of $r$" part tricky?


Answer



There is nothing wrong with your answer!


discrete mathematics - Number of distinct digits

How many integers from 10 through 99 have distinct digits?



Solution using the Multiplication Rule:
[# of ints w/ dist. digits] = [# ways to pick digit 1] * [# ways to pick digit 2].

Since there are 9 ways to determine the 1st digit and (10 – 1 = 9) ways to determine the second digit,
[9] * [9] = 81
So there are 81 integers from 10-99 with distinct digits.



I'm not understanding the part where it says the "# of ways to pick digit 1 or 2". Are the numbers coming from the fact that the ones digit from 10 to 19 are 10 numbers total? But why 9? I've always been bad with numbers.. so apologies if this is common sense, but I really do not understand where these numbers came from (9*9 for example)
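No answer is posted; a brute-force confirmation of the $9 \cdot 9 = 81$ count (a sketch in Python):

    # a two-digit number has distinct digits iff its digit set has size 2
    count = sum(1 for n in range(10, 100) if len(set(str(n))) == 2)
    print(count)  # 81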

Sunday, December 20, 2015

sequences and series - Is there a natural number satisfying the given condition?

Is there a positive integer $n$ such that $\sum_{k=0}^{n}\sqrt{n+k}$ is also an integer?

calculus - Show that $f(tx)=t^pf(x),\;\forall\; t>0,\;\&\;x\in \Bbb{R}^n$ if and only if $f'(x)(x)=pf(x),\;\forall\; x\in \Bbb{R}^n$




Let $f:\Bbb{R}^n\to \Bbb{R}$ be a differentiable function such that for some $p>1,$
\begin{align}f(tx)=t^pf(x),\;\forall \;t>0,\;\&\;x\in \Bbb{R}^n \qquad (1)\end{align}
$i.$ I want to show that \begin{align}f'(x)(x)=pf(x),\;\forall \;x\in \Bbb{R}^n \qquad (2)\end{align}
$ii.$ Is the converse also true?



I believe that we can take \begin{align}\varphi(t)=f(tx),\;\forall \;t>0\end{align} and show that \begin{align}\frac{d}{dt}\big(\frac{\varphi(t)}{t^p}\big)=0,\end{align} but I don't know how to transform this into a proof. Any help please?


Answer



We can assume $n=1$ by restriction to an arbitrary line. Then
$$

f'(x) = \lim_{\epsilon\to 0}\frac{f((1+\epsilon)x)-f(x)}{\epsilon x} = \lim_{\epsilon\to 0}\frac{[(1+\epsilon)^p - 1]f(x)}{\epsilon x} = \frac{f(x)}{x}\frac{d}{d\epsilon}(1+\epsilon)^p|_{\epsilon=0} = p\frac{f(x)}{x}
$$
as desired. Note differentiability is guaranteed by this condition.



Similarly, if $\frac{df}{dx} = p\frac{f}{x}$ then $\frac{df}{f} = p\frac{dx}{x}$. Integrating both sides says
$$
\log f(x) - \log f(1) = p\log x = \log x^p
$$
so
$$

f(x) = C x^p,
$$
which provides the converse.
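A symbolic spot-check of the forward direction on a concrete homogeneous function (a sketch, assuming SymPy is installed; the example $f(x,y)=x^2y$ is mine, with $p=3$):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x ** 2 * y  # homogeneous of degree p = 3
    euler = x * sp.diff(f, x) + y * sp.diff(f, y)  # f'(x)(x) in coordinates
    print(sp.simplify(euler - 3 * f))  # 0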


elementary number theory - If $a^2 + 1$ is prime then the unitary digit of $a$ must be either $4$, $6$ or $0$, for all $a \geq 3$, $a\in \mathbb{N}$.

Let $a \geq 3$ and suppose $a^2 + 1$ is a prime number. How do I prove the unitary digit of $a$ must be one of $6, 4$ or $0$. I can see it's true for $a^2+1=17, 37, 101, 197, 257...$etc. where the $10^0$ digit of $a$ is $6, 4$ or $0$ and the pattern repeats but how do I show this and formulate it into a proof using modular arithmetic?
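No answer is posted; an empirical check plus the key mod-$10$ facts (a sketch in Python, assuming SymPy for the primality test): if $a$ is odd, $a^2+1$ is even and greater than $2$; if $a$ ends in $2$ or $8$, then $a^2$ ends in $4$, so $a^2+1$ ends in $5$ and is divisible by $5$. That leaves last digits $0$, $4$, $6$.

    from sympy import isprime

    bad = [a for a in range(3, 30_000)
           if isprime(a * a + 1) and a % 10 not in (0, 4, 6)]
    print(bad)  # []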

Saturday, December 19, 2015

How do I reduce this expression of binomial coefficients



I was solving a problem and am stuck with this expression. Any leads on how I can simplify this expression?



$$\frac{{\sum\limits_{x=Q}^{N-P+Q} (x-Q) \binom{x}{Q} \binom{N-x}{P-Q}}}{{\sum\limits_{x=Q}^{N-P+Q} \binom{x}{Q} \binom{N-x}{P-Q}}}$$



UPDATE: I realized a mistake; the expression is updated.


Answer



There is a variation of the Vandermonde identity that reads, for $k,m,n\in\mathbf N$:
$$

\sum_{i=0}^k\binom im\binom{k-i}n=\binom{k+1}{m+n+1}.
$$
Here is how you can remember it: let $0\leq a_0<\cdots<a_{m+n}\leq k$ be a choice of $m+n+1$ elements of $\{0,1,\ldots,k\}$; classifying these choices according to the value $i$ of the entry $a_m$ (there are $\binom im$ ways to choose the entries below it and $\binom{k-i}n$ ways to choose those above it) yields the left-hand side, while the right-hand side counts the choices directly.

One can restrict the range of $i$ to the values $m\leq i\leq k-n$, as other terms contribute $0$.



So your expression simplifies to
$$
\frac{{\sum\limits_{x=Q}^{N-P+Q} \binom{x-1}{Q} \binom{N-x}{P-Q}}}{{\sum\limits_{x=Q}^{N-P+Q} \binom{x}{Q} \binom{N-x}{P-Q}}}=
\frac{\binom{N}{P+1}}{\binom{N+1}{P+1}}=\frac{N-P}{N+1}.

$$
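A brute-force check of the identity quoted at the top of this answer (a sketch in Python):

    from math import comb

    def lhs(k, m, n):
        return sum(comb(i, m) * comb(k - i, n) for i in range(k + 1))

    ok = all(lhs(k, m, n) == comb(k + 1, m + n + 1)
             for k in range(12) for m in range(6) for n in range(6))
    print(ok)  # True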


calculus - Why can we treat infinitesimals as real numbers in integration by substitution?



During integration by substitution we normally treat infinitesimals as real numbers, though I have been made aware that they are not real numbers but merely symbolic, and yet we still can, apparently, treat them as real numbers. For instance, suppose we want to integrate $3x^3(x^4+1)^3$. A common way to do this is to let $u=x^4+1$, where $\frac{du}{dx}=4x^3$, and thus $du=4x^3\,dx$, which is appropriately used in our substitution to obtain $\frac34\int u^3\, du$, and then we simply directly integrate this new integrand. However, while I understand the process and why we do it in such a manner, I am perplexed as to why we can still rigorously treat the infinitesimals as real numbers. So, my question is if anyone can elaborate on exactly why it is logically rigorous to treat infinitesimals as real numbers during substitution for integration.



(Note: My question is not about what "$dx$" means in integration; it is specifically about treating derivatives as ratios in integration by substitution, which other questions do not specifically address.)


Answer



The issue is that there is a whole bunch of measure theory hidden behind those "legitimate" calculations. If you just learned how to integrate elementary functions, you have too much to learn before you get there.




The key in this is the Radon-Nikodym Theorem, which says that if you have two measures $\mu,\nu$ on a measurable space $X$ such that $\mu$ is absolutely continuous with respect to $\nu$, then there exists a unique density function (which we denote by $\frac{d\mu}{d\nu}$) such that for any function $g$ integrable with respect to both $\mu$ and $\nu$,
$$
\int_X g \, d\mu = \int_X g \frac{d\mu}{d\nu} \, d\nu.
$$
The function $\frac{d\mu}{d\nu} : X \to \mathbb R$ is called the Radon-Nikodym derivative of $\mu$ with respect to $\nu$.



So in other words, in the rigorous treatment, the infinitesimals are not treated as infinitely small quantities, but rather as measures; while doing a change of variables, it can be shown that the Radon-Nikodym derivative you obtain is in fact the derivative of the function you use to change variables (i.e. in $du = f'(x) dx$, $\, f'(x) = \frac{du}{dx}$ where $du$ and $dx$ are two measures on the real line and $u = f(x)$ ; $f$ must be a diffeomorphism).
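As a concrete check of the correspondence $du=f'(x)\,dx$ (a sketch of my own, assuming the sympy library is available), one can compare the $u$-substitution result with direct symbolic integration for the example from the question:

import sympy as sp

x, u = sp.symbols('x u')
direct = sp.integrate(4 * x**3 * (x**4 + 1)**3, x)   # integrate in x directly
via_u = sp.integrate(u**3, u).subs(u, x**4 + 1)      # u-substitution, then back
print(sp.simplify(sp.diff(direct - via_u, x)))        # 0: the antiderivatives agree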



Now let me assume that this was not satisfactory for you; there is also the field of mathematics called non-standard analysis, which defines hyperreal numbers; I suggest you look at the Wikipedia page on this for more details. It allows the treatment of infinitesimals as quantities, so you can play around with them.




Hope that helps,


reference request - The following is a necessary condition for a number to be prime, from its digit expansion. Has it been referred somewhere?




Concerning a number's digits we know some necessary conditions for the number to be prime, besides the last digit having to be odd (except for the prime $2$). For example, in decimal representation the last digit cannot be $5$, and the sum of the digits of the number cannot be divisible by $3$.
But has the following result been referred to somewhere?




1. If we take the digit expansion of a natural number $n$ in positional notation, with its $m$ digits divided into two contiguous concatenated strings $a$ and $b$, such that:



$$n=ar^k+b\text{ with }$$



    $$b=b_0r^0+\cdots+b_{k-1}r^{k-1}$$




    $$a=a_0r^0+\cdots+a_{m-k-1}r^{m-k-1}$$



$r=$ radix (the base in which the number is represented)



    $m=$ number of digits of $n$ ($m>k$)


2. Then we might state the following necessary condition on the number's digit expansion for it to be prime:



For any such split with $b>1$, there cannot be any $k$ such that $a \bmod b = 0$.



    Proof: If there were, we could write $n$ as a product:




    $$n=(\dfrac{a}{b}r^k+1)b$$




Example:



Take the number $637$ in base $10$. You have to check all possible two-string concatenation combinations with $a \geq b$.



In this case it's simple: it works dividing the number's digits the following way: $a=63$ and $b=7$.




Since $a/b = 63/7 = 9$, you can write $637=\big((63/7)\cdot 10 + 1\big)\cdot 7 = 91\cdot 7$.



So 637 cannot be prime.
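Here is a small sketch of the test in base $10$ (my own illustration, not part of the original post):

def digit_split_factor(n):
    # If some split n = a*10**k + b with b > 1 has b | a, then
    # n = ((a // b) * 10**k + 1) * b is a nontrivial factorization.
    s = str(n)
    for k in range(1, len(s)):
        a, b = int(s[:-k]), int(s[-k:])   # prefix and suffix of the digit string
        if b > 1 and a % b == 0:
            return ((a // b) * 10**k + 1, b)
    return None

print(digit_split_factor(637))   # (91, 7), since 637 = 91 * 7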



I realized this property while working in a “number system” different from our own, where the arithmetic is different. Then Gustavo Granja kindly arrived at this formulation, which is the equivalent in our “usual” number system.



It seems too easy not to be known. On MathOverflow I got the following helpful comment from Gerhard Paseman: "This is a specific version of something more general: If $n=ac +b$ with $a,b,c$ positive, and $b > 1$, then $b|a$ implies $n$ is not prime".



However, in spite of that general case, I find it interesting (and potentially helpful for factorization purposes) that sometimes we can realize this directly from the digits of a number, without divisibility tests involving any information other than the one we have right before our eyes.




So I wonder if this has been previously mentioned somewhere.


Answer



Your observation is a special case of content-based factorization of polynomials.



Suppose that $\,f(x)\,$ is a nonconstant polynomial whose coefficients are all nonnegative integers. $\ \ $ If $\,f\,$ has nontrivial content, i.e. if the gcd $\,d\,$ of all of the coefficients of $\,f\,$ is $>1,\,$ then it follows that $\,d\mid f(n)\,$ properly for all $\,n>1,\,$ so $\,f(n)\,$ is composite for all $\,n>1$.



This applies to the OP's observation because radix notation is a polynomial function of the radix (of the above type).
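A minimal sketch of the content criterion in base $10$ (my own illustration, not from the original answer):

from functools import reduce
from math import gcd

def digit_content(n):
    # gcd of the base-10 digits of n; if it exceeds 1 it divides n,
    # since n is a polynomial in 10 with the digits as coefficients.
    d = reduce(gcd, (int(c) for c in str(n)))
    return d if d > 1 else None

print(digit_content(3693))   # 3: every digit is divisible by 3, hence 3 | 3693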



Remark $\ $ There are many interesting relationships between the factorization properties of polynomials and their values. For example, in $1918$ Stackel published the following




Theorem If $\rm\, f(x)\,$ is a composite integer coefficient polynomial then $\rm\, f(n)\, $ is composite for all $\rm\,|n| > B,\, $ for some bound $\rm\,B.\,$ In fact $\rm\, f(n)\, $ has at most $\rm\, 2d\, $ prime values, where $\rm\, d = {\rm deg}(f)$.



The simple proof can be found online in Mott & Rose [3], p. 8.



Contrapositively, $\rm\, f(x)\, $ is prime (irreducible) if it assumes a prime value
for large enough $\rm\, |x|\, $.
As an example, Polya-Szego popularized A. Cohn's irreducibility test, which
states that $\rm\, f(x) \in \mathbb Z[x]\,$ is prime if $\rm\, f(b)\, $
yields a prime in radix $\rm\,b\,$ representation (so $\rm\,0 \le f_i < b).$




For example $\rm\,f(x) = x^4 + 6\, x^2 + 1 \pmod p\,$ factors for all primes $\rm\,p,\,$
yet $\rm\,f(x)\,$ is prime since $\rm\,f(8) = 10601$ octal $= 4481$ is prime.
Cohn's test fails if, in radix $\rm\,b,\,$ negative digits are allowed, e.g.
$\rm\,f(x)\, =\, x^3 - 9 x^2 + x-9\, =\, (x-9)\,(x^2 + 1)\,$ but $\rm\,f(10) = 101\,$ is prime.



Conversely Bouniakowski conjectured $(1857)$
that a prime $\rm\, f(x)\, $ assumes infinitely many prime values (excluding
cases where all the values of $\rm\,f\,$ have fixed common divisors, e.g. $\rm\, 2\: |\: x(x+1)+2\, ).$ However, except for linear polynomials (Dirichlet's theorem), this conjecture has never been proved for any polynomial of degree $> 1.$



Note that a result yielding the existence of one prime value extends to existence of infinitely many prime values, for any class of polynomials closed under shifts, viz. if $\rm\:f(n_1)\:$ is prime, then $\rm\:g(x) = f(x+ n_1\!+1)\:$ is prime for some $\rm\:x = n_2\in\Bbb N,\:$ etc.




For further detailed discussion of Bouniakowski's conjecture and related results, including heuristic and probabilistic arguments, see Chapter 6 of Ribenboim's The New Book of Prime Number Records.



[1] Bill Dubuque, sci.math 2002-11-12, On prime producing polynomials.



[2] Murty, Ram. Prime numbers and irreducible polynomials.
Amer. Math. Monthly, Vol. 109 (2002), no. 5, 452-458.



[3] Mott, Joe L.; Rose, Kermit. Prime producing cubic polynomials.
Ideal theoretic methods in commutative algebra, 281-317.
Lecture Notes in Pure and Appl. Math., 220, Dekker, New York, 2001.


complex numbers - How to solve $\sqrt {35 - 5i}$

Need some hints on how to solve $\sqrt {35 - 5i}$.



Attempt.



I factorized 5 out and it became $\sqrt {5(7-i)}$



I just want to know if it can be solved further.




Thanks.
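(Added sketch, not from the original thread: writing $\sqrt{35-5i}=a+bi$ and squaring gives the system $a^2-b^2=35$, $2ab=-5$; numerically one can check the principal root with Python's standard cmath module.)

import cmath

w = cmath.sqrt(35 - 5j)   # principal square root
print(w)                   # approximately (5.9311 - 0.4215j)
print(w * w)               # approximately (35 - 5j), as a check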

exponentiation - Complicated rational exponents

How is $\frac{m}{n}$ equal to $m \times \frac{1}{n}$? Is there a logical proof of this? It leads to the following conclusion:




A fractional exponent like $\frac{m}{n}$ means do the $m^{\text{th}}$ power, then take the $n^{\text{th}}$ root or take the $n^{\text{th}}$ root and then do the $m^{\text{th}}$ power.
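A concrete numeric instance of the rule (added for illustration), with base $8$, $m=2$ and $n=3$:

$$8^{2/3}=\left(8^{1/3}\right)^{2}=2^{2}=4=\left(8^{2}\right)^{1/3}=64^{1/3}=4.$$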


Friday, December 18, 2015

probability theory - Expectation of Random Variable and Indicator Function

I have to do the following problem:




Let $X$ be a random variable in $\mathcal{L}^{1}(\Omega,A,\mathbb P)$. Let $(A_n)_{n\geq 0}$ be a sequence of events in $A$ such that $\mathbb P(A_{n})\xrightarrow[n\rightarrow\infty]{}0$. Prove that $\mathbb E(X\mathbb{1}_{A_n})\xrightarrow[n\rightarrow\infty]{}0$.



I'd appreciate any help.

linear algebra - Group of row equivalent matrices - prove or disprove question



Let $S=\{A_1,A_2,...,A_k \}$ be a set of row equivalent matrices, where each $A_m$ is a square matrix.


Prove or disprove: if there is a linear combination of elements of $S$ such that this combination is an invertible matrix, then every element of $S$ is an invertible matrix as well.




I know that if matrices are row equivalent, then one can be changed to the other by a sequence of elementary row operations, meaning that I can take any $A_m \in S$ and find a sequence of elementary row operations until $A_m$ is the same as $A_1$.


A linear combination of the elements in $S$ will be $A_C=\lambda_1 A_1+\lambda_2 A_2+...+\lambda_k A_k$ (C for combination). If we can take every matrix as equal to $A_1$, then $A_C=(\lambda_1+\lambda_2+...+\lambda_k)A_1$.


Now, if we multiply an invertible matrix by a scalar, the result is still an invertible matrix; thus if $A_C$ is an invertible matrix, then $A_1$ is an invertible matrix. Every element of $S$ is row equivalent to the matrix $A_1$, thus invertible.


Are my claims correct? How should we solve this question?


Please help (I'm preparing for a test). Thank you!


Answer



What do we know about row equivalence, what do we know about invertibility of matrices, and can we make them join together somewhere?


Two row equivalent matrices will have the same row space. Also, an $n\times n$ matrix is invertible iff its row space is the whole space ($\Bbb R^n$?). So there you go! If there is a linear combination $A_C$ of matrices in $S$ which is invertible, then the row space of the linear combination must be $\Bbb R^n$.


But a linear combination of matrices with a common row space cannot extend said row space (each row in $A_C$ will just be a linear combination of rows from matrices in $S$, and row spaces, like any vector space, are closed under linear combinations). So if $A_C$ has row space $\Bbb R^n$, that means the matrices of $S$ must have that row space too, and thus they are invertible.
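A small numerical illustration (a sketch of my own, assuming numpy): start from a singular matrix, form a row-equivalent copy, and observe that any linear combination is still singular, since all rows stay in the same row space.

import numpy as np

A1 = np.array([[1., 2.], [2., 4.]])   # singular: row space is the line span{(1, 2)}
E = np.array([[1., 0.], [3., 1.]])    # elementary operation: R2 <- R2 + 3*R1
A2 = E @ A1                            # row equivalent to A1
C = 2 * A1 + 5 * A2                    # an arbitrary linear combination
print(np.linalg.det(A1), np.linalg.det(A2), np.linalg.det(C))   # 0.0 0.0 0.0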



sequences and series - famous Euler sum

Does anyone know how Euler in the 18th century proved that
$$
\sum_{n=1}^{\infty} \frac{H_n}{n^2}=2 \zeta(3)
$$

with $H_n$ being the $n$'th harmonic number?

Diophantine Equation Without Using Fermat's Last Theorem



I'm having trouble with a problem. The problem asks me to solve the equation $(x+1)^4-(x-1)^4=y^3$ in integers. I found that the only integer solution is $(0,0)$. I found this answer by setting $x=a^3$ and $x^2+1=b^3$, which gives the equation $a^6+1=b^3$, i.e. $(a^2)^3+1^3=b^3$; this assures me that there is no solution other than $(0,0)$ by Fermat's Last Theorem. However, I just realized that I am not supposed to use Fermat's Last Theorem. So far, I have simplified the equation to $8x^3+8x=y^3$. Please help me prove that the only integer solution is $(0,0)$ without using Fermat's Last Theorem.


Answer



Consider $y^3=8x(x^2+1)$. Then $y$ must be even (say $y=2z)$. Then we get $z^3=x(x^2+1)$. Then we have
$$z^3-x^3=x.$$
or equivalently

$$(z-x)(z^2+zx+x^2)=x$$
But if $x\neq 0$ then $z\neq x$, so $|z-x|\geq 1$, while the factor $z^2+zx+x^2=\big(z+\tfrac{x}{2}\big)^2+\tfrac{3x^2}{4}\geq \tfrac{3x^2}{4}$. Hence $|x|\geq \tfrac{3x^2}{4}$, forcing $|x|\leq 1$; and $x=\pm 1$ gives $z^3=\pm 2$, which has no integer solution. So $x=0=z$.
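(Added sketch: a quick exhaustive confirmation that $(0,0)$ is the only solution in a modest search window.)

sols = [(x, y)
        for x in range(-200, 201)
        for y in range(-402, 403)
        if (x + 1)**4 - (x - 1)**4 == y**3]
print(sols)   # [(0, 0)]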


Wednesday, December 16, 2015

"Good" subsets of natural numbers


We define a subset $A$ of positive integers as "Good" if it is possible to write its members as $a_1$, $a_2$, $a_3$, $\cdots$ so that the GCD of any two consecutive numbers $a_i$ and $a_{i+1}$ is greater than $1$. Verify and prove "Goodness" of the following two sets:





  1. Set of positive integers greater than $1$

  2. Set of squares greater than $1$




How to solve this problem?

linear algebra - Give the corresponding elementary matrix decomposition of A




Can you guys explain the question to me?



Put the following matrices into reduced row echelon form, indicating the row operations you use. Give the corresponding elementary matrix decomposition of A



$$ \left[
\begin{array}{ccc}
2&1&1\\
1&2&1\\
1&1&2

\end{array}
\right] $$



I put the matrix in RREF, but I don't know how to get the elementary matrices.


Answer



Whenever you perform elementary row operations, you are multiplying the matrix by an elementary matrix.



Suppose you perform $k$ operations.



$$E_k\ldots E_1A=R$$




Then we have
$$A=E_1^{-1}\ldots E_k^{-1}R$$



To get the elementary matrix, perform the same operation on the identity matrix.



Suppose the first operation is to multiply the first row by $\frac12$.



$$E_1=\begin{bmatrix} \frac12 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}$$




$E_1^{-1}$ can be obtained by asking: what is the reverse operation? It should be to multiply the first row by $2$.



$$E_1^{-1}=\begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}$$
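To make this concrete, here is a sketch (my own, assuming the sympy library) that records each row operation on the given matrix as an elementary matrix and then recovers the decomposition $A=E_1^{-1}\cdots E_k^{-1}R$; it assumes no row swaps are needed, which holds for this matrix:

import sympy as sp

A = sp.Matrix([[2, 1, 1], [1, 2, 1], [1, 1, 2]])
n = A.rows
R = A.copy()
Es = []   # elementary matrices in the order applied: E_k ... E_1 A = R

for i in range(n):
    E = sp.eye(n)
    E[i, i] = 1 / R[i, i]          # scale row i so the pivot becomes 1
    R = E * R
    Es.append(E)
    for r in range(n):
        if r != i and R[r, i] != 0:
            E = sp.eye(n)
            E[r, i] = -R[r, i]     # row r <- row r - R[r, i] * row i
            R = E * R
            Es.append(E)

P = sp.eye(n)
for E in Es:
    P = P * E.inv()                 # builds E_1^{-1} * E_2^{-1} * ... * E_k^{-1}
print(R)                            # the identity: this A is invertible
print(P == A)                       # True, since A = E_1^{-1}...E_k^{-1} R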


elementary number theory - $3$ never divides $n^2+1$


Problem: Is it true that $3$ never divides $n^2+1$ for every positive integer $n$? Explain.


Explanation: If $n$ is odd, then $n^2+1$ is even. Hence $3$ never divides $n^2+1$, when $n$ is odd.



If $n$ is even, then $n^2+1$ is odd. So $3$ could divide $n^2+1$.


And that is where I am stuck. I try to plug in numbers for $n$ but I want a more general form of showing that $3$ can't divide $n^2+1$ when $n$ is even.


Answer



Instead of considering whether $n$ is even or odd, consider the remainder when $n$ is divided by $3$; as an example, if the remainder is $1$, we have $$n = 3k + 1 \implies n^2 + 1 = 9k^2 + 6k + 2$$


which is not divisible by $3$. There are two more cases.
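The two remaining cases work the same way; for illustration (my own addition), a one-line check of all three residue classes:

print([(n % 3, (n * n + 1) % 3) for n in range(3)])   # [(0, 1), (1, 2), (2, 2)]

The second coordinate is never $0$, so $3$ never divides $n^2+1$.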


real analysis - Prove that $\frac{2^n}{n!}$ converges to $0$.






Prove that $\frac{2^n}{n!}$ converges to $0$.


I can see why, I just don't get how exactly to do convergence proofs. Right now I have:


For $n>6$, $|\frac{2^n}{n!}-0|=\frac{2^n}{n!}<\frac{2^n}{3^n}$


and


assuming $\frac{2^n}{3^n}<\epsilon$, $n<\frac{\ln\epsilon}{\ln\frac2 3}$


Not sure if the last step is even right...


(This was an exam question today)


Answer



I'm pretty sure that last one needs to be $n > \frac{\ln \varepsilon}{\ln \frac{2}{3}}$. But given that, this works: for every $\varepsilon$ you give an explicit way to find $N\left(= \frac{\ln \varepsilon}{\ln \frac{2}{3}}\right)$ such that for all $n > N$ we have $|x_n - 0| < \varepsilon$. Definitions, ta da!


Tuesday, December 15, 2015

gcd and lcm - Suppose $\gcd(a,y)=s$ and $\gcd(b,y)=t$. Prove that $\gcd(\gcd(a,b),y)=\gcd(s,t)$.



All I have so far is that $$s|a, s|y, t|b, \text{ and } t|y.$$ I also know



$$\gcd(\gcd(a,b),y)=\gcd(a,b,y)=\gcd(a,\gcd(b,y))$$



by the associative property of gcd. It would suffice to show $$\gcd(a,b,y)=\gcd(\gcd(a,y),\gcd(b,y)).$$
I'm just not sure how to prove it. Thanks for your help.


Answer



I would approach it a bit differently. Let $d=\gcd(\gcd(a,b),y)$. Then $d\mid\gcd(a,b)$, and $d\mid y$. Since $d\mid\gcd(a,b)$, we also know that $d\mid a$ and $d\mid b$. Since $d\mid a$ and $d\mid y$, we know that $d\mid s$; similarly, $d\mid t$, so $d\mid\gcd(s,t)$.




Now let $e=\gcd(s,t)$ and make a similar argument to show that $e\mid d$. Since $d,e\ge 1$, $d\mid e$, and $e\mid d$, it must be the case that $d=e$.
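(Added sketch: a quick randomized check of the identity, using the post's notation $s=\gcd(a,y)$, $t=\gcd(b,y)$.)

from math import gcd
from random import randint

for _ in range(10000):
    a, b, y = (randint(1, 10**9) for _ in range(3))
    s, t = gcd(a, y), gcd(b, y)
    assert gcd(gcd(a, b), y) == gcd(s, t)
print("checked 10000 random triples")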


linear algebra - why is a symmetric matrix always diagonalizable even when it has repeated eigenvalues?



I am studying linear algebra and I have a very basic question.




Why is a symmetric matrix always diagonalizable, even if it has repeated eigenvalues?



I've seen that sufficient orthonormal eigenvectors can be generated by applying the Gram-Schmidt process to the eigenspace of a repeated eigenvalue.



I know what this means. I have solved a bunch of exercise problems, and I've always been able to generate a full orthonormal basis of the eigenspace of a repeated eigenvalue.



But what I want to know is this: is the dimension of the eigenspace of a repeated eigenvalue with multiplicity $k$ always $k$? Is it impossible for the eigenspace of a repeated eigenvalue of a symmetric matrix to be a $1$-dimensional line?



Thank you.



Answer



If $A$ is symmetric, and zero is an eigenvalue with the dimension of the eigenspace less than the multiplicity of zero, then there will be a vector
with $A^2v=0$ but $Av\ne0$. But then $0\ne\|Av\|^2=(Av)^t Av=v^tA^2v=0$,
a contradiction.



For general eigenvalues $\lambda$, consider $A-\lambda I$ instead.
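A numerical illustration (a sketch of my own, assuming numpy): a symmetric matrix with a repeated eigenvalue still has a full orthonormal eigenbasis.

import numpy as np

A = np.array([[2., 1., 1.], [1., 2., 1.], [1., 1., 2.]])  # eigenvalues 1, 1, 4
vals, vecs = np.linalg.eigh(A)        # eigh is specialized to symmetric matrices
print(np.round(vals, 6))               # [1. 1. 4.]: the eigenvalue 1 is repeated
print(np.round(vecs.T @ vecs, 6))      # the identity: columns are orthonormal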


Infinite series $\sum_{n=2}^{\infty} \frac{1}{n \log (n)}$



Recently, I encountered a problem about infinite series.
So my question is how to know whether the infinite series $\sum _{n=2}^{\infty } \frac{1}{n \log (n)}$ is convergent?


Answer



To see whether $\sum_2^\infty 1/(n \log n)$ converges, we can use the integral test. This series converges if and only if this integral does:

$$
\int_2^\infty \frac{1}{x \log x} dx = \left[\log(\log x)\right]_2^\infty
$$
and in fact the integral diverges.



This is part of a family of examples worth remembering. Note that
$$
\frac{d}{dx} \log(\log(\log x)) = \frac{1}{\log (\log x)}\cdot\frac{d}{dx} \log(\log x) = \frac{1}{x \log x \log(\log x)}
$$
and $\log (\log (\log x)) \to \infty$ as $x \to \infty$, hence $\sum \frac{1}{n \log n \log (\log n)}$ diverges as well. Similarly, by induction we can put as many iterated $\log$s in the denominator as we want (i.e. $\sum \frac{1}{n \log n \log(\log n) \ldots \log (\ldots (\log n) \ldots )}$ where the $i$th log is iterated $i$ times), and it will still diverge. However, as you should check, $\sum \frac{1}{n \log^2 n}$ converges, and in fact (again by induction) if you square any of the iterated logs in $\sum \frac{1}{n \log n \log(\log n) \ldots \log (\ldots (\log n) \ldots )}$ the sum will converge.



elementary set theory - Bijective map from $\Bbb Z$ to $\Bbb Q$



There exists a map $f: \Bbb Z\rightarrow \Bbb Q $ such that $f$ is



A. Bijective and increasing



B. Onto and decreasing




C. Bijective and satisfies $f(n)\ge 0$ if $n\le 0$



D. Has an uncountable image



Now option D is absurd. Option C is given to be the correct answer. I was thinking that since both sets are countable, a bijection is obvious. Now why can it not be increasing? I could map $0$ to $0$, the negative integers to the negative rationals, and the positive integers to the positive rationals. And if an increasing map were possible, just interchanging signs would give the decreasing map. So neither is possible, but why?


Answer



The problem is that, while the cardinality of $\mathbb{Z}$ and $\mathbb{Q}$ is the same, the order structure is different. Consider the following: Given two integers $a$ and $b$ with $a < b$, can we say how many integers $c$ satisfy $a < c < b$? Now ask the same question for the rational numbers.



This leads to problems. For example, let's say $a$ and $b$ are two such integers with $k$ other integers between them. And we can even assume that $f(a) = r_{1} < r_{2} = f(b)$. But now we can find an increasing sequence of (at least) $k+1$ rational numbers between $r_{1}$ and $r_{2}$, and only $k$ potential integers to use as their preimages.


elementary set theory - How did Cantor demonstrate a bijection from $I=[0,1]$ to $I^n$?





"I See It, but I Don't Believe It."




Georg Cantor showed that sets of different dimensions can have the same cardinality; in particular, he demonstrated that there is a bijection between the interval $I= [0,1]$ and the $n$-fold product $I^{n} = I \times I \times \cdots \times I$.





Does anyone know specifically how this was done?


Answer



I am not sure if Cantor did it this way, but this argument works: any number $x$ in $[0,1]$ has an expansion to base $2$: $x=\sum \frac {a_k} {2^{k}}$ where $a_k =0$ or $a_k=1$ for each $k$. This expansion is not unique but it can be made unique by avoiding expansions with $a_k=1$ for all but finitely many $k$ (except when $x=1$). Now let $r \in \{1,2,...,n\}$ and form a sequence $(b_k^{(r)})$ using the coefficients $a_k$ with $k\equiv r \pmod{n}$. Let $x_r$ be the number whose expansion to base $2$ has the coefficient sequence $(b_k^{(r)})$. Then the map $x \to (x_1,x_2,...,x_n)$ is a bijection.



A correction: it has been pointed out that $x=1$ causes problem in this argument. (See comment by Henno Brandsma). As suggested we can use the proof to show that there is a bijection between $[0,1)$ and $[0,1) \times [0,1)\times \cdots\times [0,1)$ and use the fact that there are bijections between $[0,1)$ and $[0,1]$ as well as between $[0,1) \times [0,1)\times \cdots \times [0,1)$ and $[0,1] \times [0,1]\times \cdots \times [0,1]$
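A finite-precision sketch of the digit-splitting idea for $n=2$ (my own illustration; the exact digit bookkeeping, including the uniqueness convention above, is glossed over):

def split2(x, bits=40):
    # Send the even-indexed binary digits of x to x0, the odd-indexed ones to x1.
    x0 = x1 = 0.0
    w0 = w1 = 0.5
    for k in range(bits):
        x *= 2
        d = int(x)           # k-th binary digit of the original x
        x -= d
        if k % 2 == 0:
            x0 += d * w0
            w0 /= 2
        else:
            x1 += d * w1
            w1 /= 2
    return x0, x1

print(split2(0.8125))   # 0.8125 = 0.1101_2 maps to (0.10_2, 0.11_2) = (0.5, 0.75)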


real analysis - Show uniform convergence of the series of functions $\sum_{n=1}^\infty \frac{x^n \sin(nx)}{n}$





Show uniform convergence of the series of functions $\sum_{n=1}^\infty
\frac{x^n \sin(nx)}{n}$ on the interval $[-1,1]$




My attempt: I showed the series converges uniformly on the interval $[-1/2,1/2]$ using the Weierstrass M-test. I also showed the series converges uniformly on the interval $[1/2,1]$ by using Dirichlet's criterion, where I used that $\left|\sum_{k=1}^n \sin(kx)\right| \leq \frac{1}{|\sin(x/2)|}$



However, I'm stuck at showing it converges uniformly on the interval $[-1,-1/2]$. I tried to apply Dirichlet's criterion but can't conclude anything because of the behaviour of the term $x^n/n$ (which does not decrease monotonically).



Any ideas?


Answer




$$\sum_{n\geq 1}\frac{x^n \sin(nx)}{n}$$
is pointwise convergent for any $x\in[-1,1]$: that is trivial if $|x|<1$ and it follows from Dirichlet's test if $|x|=1$. In order to prove uniform convergence, it is enough to show that
$$ E(N) = \sup_{x\in[-1,1]}\left|\sum_{n\geq N}\frac{x^n \sin(nx)}{n}\right| $$
fulfills $\lim_{N\to +\infty}E(N)=0$. If $|x|<1$ we have
$$ \left|\sum_{n\geq N}\frac{x^n \sin(nx)}{n}\right|\leq \frac{1}{N}\sum_{n\geq N}|x|^n = \frac{|x|^N}{N(1-|x|)}.$$
For a fixed $N$, let
$ S_M(x) = \sum_{n=N}^{M}\sin(nx)$. We know that $|S_M(x)|\leq \frac{1}{|\sin(x/2)|}\leq\frac{\pi}{|x|}\leq 2\pi$ for any $x\in[-1,1]$ such that $|x|>\frac{1}{2}$, and by summation by parts
$$ \sum_{n\geq N}\frac{x^n\sin(nx)}{n} = \sum_{n\geq N}S_n(x)x^n\left(\frac{1}{n}-\frac{x}{n+1}\right). $$
If $x$ is negative the RHS can be bounded through the alternating series test, and it turns out to be $O\left(\frac{1}{N}\right)$. If $x$ is positive the RHS can be written as
$$\sum_{n\geq N}\frac{S_n(x)x^n}{n(n+1)}+\sum_{n\geq N}\frac{S_n(x)x^n(1-x)}{n+1} $$

where the first term is bounded by $\frac{2\pi}{N}$ in absolute value. We have
$$ \sup_{x\in[0,1]} x^n(1-x) \leq \frac{1}{en} $$
and this completes the proof that
$$ \left|\sum_{n\geq N}\frac{x^n \sin(nx)}{n}\right| = O\left(\frac{1}{N}\right). $$


elementary set theory - Bijecting a countably infinite set $S$ and its cartesian product $S \times S$



From Herstein's Topics in Algebra (exercise 14, section 1.2):




If $S$ is infinite and can be brought into one-to-one correspondence with the set of integers, prove that there is one-to-one correspondence between $S$ and $S \times S$.




So far I know there exists some bijection $\sigma : S \rightarrow \mathbb{Z}$. If I can define a bijection $\tau : \mathbb{Z} \rightarrow \mathbb{Z} \times \mathbb{Z}$, then a one-to-one correspondence between $S$ and $S \times S$ is given by the map $\mu$ with $\mu(s) = (\sigma^{-1}(a),\sigma^{-1}(b))$, where $a,b\in \mathbb{Z}$ are given by $\tau(\sigma(s)) = (a,b)$.




I'm having trouble defining $\tau$. I think a possible way to do so would be to create some kind of spiral (like the Ulam spiral), and assign each point to a different integer. I suppose this would be a one-to-one correspondence but I'm at a loss on how to prove it. Thanks a lot!


Answer



It's a little simpler to find a bijection $\mathbb{N}\to\mathbb{N}\times\mathbb{N}$ (e.g., think of the usual proof that $\mathbb{Q}$ is countable). There is also an easy bijection $\mathbb{Z}\to\mathbb{N}$, so you can use that to get a bijection $\mathbb{Z}\to\mathbb{Z}\times\mathbb{Z}$.



A couple of comments, though: Herstein is only asking you to show that such a bijection exists, not to explicitly construct one. You could do so using any number of tools, including some that are in principle constructive but which you may not want to actually carry out. For example, there are obvious and easy embeddings $\mathbb{Z}\hookrightarrow \mathbb{Z}\times\mathbb{Z}$. If you also have an embedding $\mathbb{Z}\times\mathbb{Z}\hookrightarrow \mathbb{Z}$ you get the existence of a bijection between the two by using Cantor-Bernstein. For instance, you could take:
$$(a,b)\longmapsto \left\{\begin{array}{ll}
2^a\times 3^b &\text{if $a,b\geq 0$;}\\
3^b\times 5^{|a|} &\text{if $a\lt 0$, $b\geq 0$;}\\
2^a\times 7^{|b|} &\text{if $a\geq 0$, $b\lt 0$;}\\

5^{|a|}\times 7^{|b|} &\text{if $a,b\lt 0$.}
\end{array}\right.$$
So with this you know there is a bijection $S\to S\times S$, even if you don't go through the rather annoying work of trying to write it out explicitly.



Also, it might be worth noting that if you drop the clause "and can be brought into one-to-one correspondence with the set of integers", then the resulting statement is equivalent to the Axiom of Choice (this is a result of Tarski's).


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...