Tuesday, July 31, 2018

sequences and series - A Question On Euler's Proof Of the Basel Problem



I've studied the proof that Euler gave for the famous Basel Problem, and it would seem that while it is technically correct, he does not justify all of his steps properly. Namely, he assumes that




$$\frac{\sin(x)}{x}=\left(1-\frac{x}{\pi}\right)\left(1+\frac{x}{\pi}\right)\left(1-\frac{x}{2\pi}\right)\left(1+\frac{x}{2\pi}\right)\dots$$



simply because they have the same roots, which is really not a strong enough condition. How do you really show that the equality holds?



He then notices that if you use the above equality, and consider it against the Taylor expansion for $\frac{\sin(x)}{x}$, then you can equate the coefficients of the two infinite expansions at each order, and the result of the Basel problem follows. But how do you know that if you have two different expansions for a function, then their coefficients at each order must be equal?



I would really appreciate if someone could show me how to make these two intuitive, yet informal, steps rigorous.


Answer



The following theorem can be used: http://en.wikipedia.org/wiki/Weierstrass_factorization_theorem



abstract algebra - Proof of Artin's Theorem (linearly independent functions)



Recently I have come across one of Artin's theorems and I have not been able to crack it quite yet. The theorem is stated as follows:




Let $G$ be a group, and let $f_1,\dots, f_n\colon G\to K^*$ be distinct homomorphisms of $G$ into the multiplicative group of a field. Prove that these functions are linearly independent over $K$.





Would anyone know a (if possible quite simple) proof of this theorem? It came up in a chapter regarding eigenvectors and eigenvalues, so I presume it has something to do with those.


Answer



Suppose there are nontrivial linear relations between the maps $f_1,\dots,f_n$ seen as elements of the vector space $K^G$; among them choose one with the minimum number of nonzero coefficients. Upon a reordering, we can assume it is
$$
\alpha_1f_1+\dots+\alpha_kf_k=0
$$
with all $\alpha_i\ne0$. This means that, for every $x\in G$,
$$
\alpha_1f_1(x)+\dots+\alpha_kf_k(x)=0
$$
Note that $k>1$ or we have a contradiction.



Fix $y\in G$; then also
$$
\alpha_1f_1(yx)+\dots+\alpha_kf_k(yx)=0
$$
and, since the maps are homomorphisms,
$$
\alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_k(y)f_k(x)=0\tag{1}
$$
for every $x\in G$ and
$$
\alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_1(y)f_k(x)=0\tag{2}
$$
By subtracting $(2)$ from $(1)$ we get
$$
\alpha_2(f_2(y)-f_1(y))f_2(x)+\dots+\alpha_k(f_k(y)-f_1(y))f_k(x)=0
$$
for all $x$, hence

$$
\alpha_2(f_2(y)-f_1(y))f_2+\dots+\alpha_k(f_k(y)-f_1(y))f_k=0
$$
which would be a shorter linear relation, so we conclude that
$$
f_2(y)=f_1(y),\quad
\dots,\quad
f_k(y)=f_1(y)
$$
Now, choose $y$ such that $f_1(y)\ne f_2(y)$ and you have your contradiction.



modular arithmetic - Find inverse modulo when modulo is smaller than the number



I know how to use the Euclidean algorithm to find the inverse modulo in most cases, but I can't wrap my head around the calculations when the modulus is smaller than the number I'd like to find the inverse for.




For example:



$$59x \equiv 1 \pmod{19} $$



has solution $$x \equiv 10 \pmod{19}$$ according to online calculators but I can't figure out why.


Answer



What you want is a solution to $59x \equiv 1 \pmod{19}$.



But $59x \equiv 1 \pmod{19} \Leftrightarrow (3(19)+2)x \equiv 1 \pmod{19} \Leftrightarrow 2x \equiv 1 \pmod{19}$.




I'm sure you can now solve the last equation, which is equivalent to the original.
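As a quick sanity check (not part of the original answer), Python's built-in pow supports modular inverses directly since version 3.8, and it confirms the value the online calculators give:

print(59 % 19)          # 2  -- reduce 59 first, as in the answer
print(pow(2, -1, 19))   # 10 -- inverse of 2 mod 19
print(pow(59, -1, 19))  # 10 -- same answer for 59 directly
print((59 * 10) % 19)   # 1  -- verification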


Monday, July 30, 2018

complex numbers - Verifying that $\sqrt{(R+j\omega L)(j\omega C)}=0.5\frac{R}{\sqrt{L/C}}+j\omega\sqrt{LC}$ for $R\lll \omega L$


So I have the following square root of this two complex numbers and my book provides this:


$$\sqrt{(R+j\omega L)(j\omega C)}=0.5\frac{R}{\sqrt{\frac{L}{C}}}+j\omega\sqrt{LC}$$


if $$R\lll\omega L$$


I have no freaking idea how they do this mathematically. I tried to apply distributive property, which leads to


$$\sqrt{jR\omega C-\omega^2LC}$$


And the second term of my expansion kind of looks like the second term of the expression $$\sqrt{-\omega^2LC}=j\omega\sqrt{LC}$$



But I don't know whether (a) this is correct, and (b) how I get the first term.


Thanks in advance.


Answer



The basic idea is that $\sqrt{1+x} \approx 1+\frac x2$ when $x \ll 1$. The right side is the first two terms of the Taylor series. If you expand the left you have $$\sqrt{(R+j\omega L)(j\omega C)}=\sqrt{Rj\omega C-\omega^2LC}\\ =j\omega\sqrt{LC}\sqrt {R\frac 1{j\omega L}+1}\\ \approx j\omega \sqrt{LC}\left(1+\frac {R}{2j\omega L}\right)\\ =j\omega \sqrt{LC}+\frac R2\sqrt{\frac {C}{L}}$$ They owe you an approximation sign when they do the Taylor series step.
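For readers who want to see the approximation numerically, here is a small Python check; the component values R, L, C and the frequency are made-up illustrative numbers chosen so that $R\ll\omega L$, not values from the question:

import cmath, math

R, L, C = 0.1, 1e-3, 1e-9
w = 2 * math.pi * 1e6                    # omega, chosen so w*L (about 6283) >> R

exact = cmath.sqrt((R + 1j * w * L) * (1j * w * C))
approx = 0.5 * R / math.sqrt(L / C) + 1j * w * math.sqrt(L * C)
print(exact)    # roughly 5.0e-05 + 6.2832j
print(approx)   # roughly 5.0e-05 + 6.2832j -- the two agree closely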


combinatorics - Prove ${n \choose k}^2 = \sum_{i=0}^{k}{n \choose i}{n-i \choose k-i}{n-k \choose k-i}$ using a combinatorial argument



I've been studying for a final exam. I have gotten stuck on this one question that asks us to prove using a double-counting proof that
$${n \choose k}^2 = \sum_{i=0}^{k}{n \choose i}{n-i \choose k-i}{n-k \choose k-i}$$



I wish I could give an attempted answer, but I don't really know where to begin with this. Could someone please help me out?



Thank You



Answer



Let us think about what the left-hand side counts, since it is the more natural expression. $\binom{n}{k}$ counts the number of ways of selecting $k$ objects from $n$ total, so $\binom{n}{k}^2$ counts the number of ways of selecting $k$ objects from one batch of $n$ objects and selecting another $k$ objects from a different batch of $n$ objects.



Let our hypothetical scenario be that we have two urns of $n$ labeled balls each, one urn containing red balls and one urn containing blue balls. Each ball will be labeled an integer $1$ through $n$ such that exactly one of each number appears in each urn.



The LHS counts then how many ways we can take $k$ red balls and $k$ blue balls.






Let us try to count this a different way so that it matches the RHS.




Let $i$ denote the number of numbers which match between the balls selected from the red urn and blue urn.



If there are $i$ numbers which match, let us pick which $i$ numbers those were. This can be accomplished in $\binom{n}{i}$ ways.



Once chosen, let us pick the remaining $k-i$ blue balls. This can be accomplished in $\binom{n-i}{k-i}$ ways.



Now that we have our selection of blue balls, let us pick the remaining $k-i$ balls from the red urn such that none of them match any of those numbers chosen for blue. This can be accomplished in $\binom{n-k}{k-i}$ ways.



Ranging over all possible values of $i$ yields the total number of ways to pick $k$ balls from each urn as $\sum\limits_{i=0}^k\binom{n}{i}\binom{n-i}{k-i}\binom{n-k}{k-i}$




Since both methods of counting find a count of the same scenario, the expressions must be equal. The LHS did it without regards to how many numbers on the balls chosen overlapped, whereas the RHS grouped the terms according to how many numbers on the balls chosen overlapped.
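A brute-force numerical check of the identity (not from the original answer; the values of $n$ and $k$ are arbitrary) takes a couple of lines of Python:

from math import comb

n, k = 10, 4
lhs = comb(n, k) ** 2
rhs = sum(comb(n, i) * comb(n - i, k - i) * comb(n - k, k - i) for i in range(k + 1))
print(lhs, rhs)   # both 44100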


calculus - Different ways of finding the derivative of $\sin$ and $\cos$.



I am looking for different ways of differentiating $\sin$ and $\cos$, especially when using the geometric definition, but ways that use other defintions are also welcome.



Please include the definition you are using.




I think there are several ways to do this I didn't find, that make clever use of trigonometric identities or derivative facts. I am looking for these for several reasons. Firstly, it is just interesting to see. Secondly, there may be interesting techniques that can also be applied in a clever way to other derivative problems. It is also interesting to see how proofs can come form completely different fields of mathematics or even from physics.



I have included several solutions in the answers.


Answer



Consider the image from this "proof without words",



Derivative of $\sin(x)$



Asymptotically, the angle between the black radius and the red vertical line is complementary to both angles marked as $\theta$. Thus, asymptotically, those angles are equal, and the two red triangles are similar. Therefore, by similar triangles,

$$
\frac{\mathrm{d}\sin(\theta)}{\mathrm{d}\theta}=\frac{\cos(\theta)}{1}
$$
To get the derivative of $\cos(\theta)$, recall that $\cos(\theta)=\sin\left(\frac\pi2-\theta\right)$ and $\sin(\theta)=\cos\left(\frac\pi2-\theta\right)$. Then the Chain Rule says
$$
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}\theta}\cos(\theta)
&=\frac{\mathrm{d}}{\mathrm{d}\theta}\sin\left(\frac\pi2-\theta\right)\\
&=-\cos\left(\frac\pi2-\theta\right)\\[3pt]
&=-\sin(\theta)
\end{align}
$$


Sunday, July 29, 2018

calculus - How to know whether an integral can be solved?

I just wonder: how do you know whether an integral can be solved in closed form? For example, the exponential integral cannot be expressed in terms of elementary functions.

The limit $\lim_{n\to \infty}{na^n}=0$

Let $a\in [0,1)$. I want to show that $$\lim_{n\to \infty}{na^n}=0$$



My try : $$na^n={n\over e^{-(\log{a})n}}$$ and the limit is $${+\infty\over +\infty}$$
Hence by l'Hopital's rule we have that
$$\lim_{n\to \infty}{1\over -(\log{a})e^{-(\log{a})n}}={1\over +\infty}=0$$ since $-\log a>0$.



Is there any other way to compute this limit? Thanks!

probability - When rolling multiple dice, what is the average value if you only keep the highest roll?

These questions come up regarding an RPG in which you roll multiple dice, but only keep the highest value. So, suppose I were to roll 5 dice, each of which has 12 sides (numbered from 1 to 12). If I only keep the highest roll, what would the average end up being?



To throw in an additional level of complication, what would happen if the dice "explode" or "penetrate"?
Explode/Penetrate=If you roll a die and roll the highest value (if you roll an 8 on an eight sided die), re-roll the die and add the results together. Repeat until you do not roll the maximum value.

limits - Find $\lim_{n \to \infty} \sum_{k=0}^{n} \frac{e^{-n}n^k}{k!}$


We need to find out the limit of,


$\lim_{n \to \infty} \sum_{k=0}^{n} \frac{e^{-n}n^k}{k!}$


One can see that $\frac{e^{-n}n^k}{k!}$ is the pmf of a Poisson distribution with parameter $n$, so the sum is its cdf evaluated at $n$.


Please give some hints on how to find out the limit.


Answer



It's a good start to try to solve it in a probabilistic way: notice that the Poisson random variable has the reproducibility property, that is, if $X_{k} \sim \text{Poisson}(1)$, $k = 1, 2, \ldots, n$ independently, then $$S_n = \sum_{k = 1}^n X_{k} \sim \text{Poisson}(n),$$ whose distribution function $F_{S_n}$ satisfies: $$F_{S_n}(n) = P[S_n \leq n] = \sum_{k = 0}^n e^{-n} \frac{n^k}{k!},$$ which is exactly the expression of interest. Hence this suggests linking this problem to central limit theorem.


By the classic CLT, we have $$\frac{S_n - n}{\sqrt{n}} \Rightarrow \mathcal{N}(0, 1).$$ Hence $$P[S_n \leq n] = P\left[\frac{S_n - n}{\sqrt{n}} \leq 0\right] \to P[Z \leq 0] = \frac{1}{2}$$ as $n \to \infty$.
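If you want to watch the convergence numerically, here is a small Python sketch (the helper name poisson_cdf_at_n is mine; the term-by-term update is just one convenient way to evaluate the sum for moderate $n$ -- very large $n$ would underflow $e^{-n}$ in double precision):

import math

def poisson_cdf_at_n(n):
    # sum_{k=0}^{n} e^{-n} n^k / k!, built up term by term
    term = total = math.exp(-n)
    for k in range(1, n + 1):
        term *= n / k
        total += term
    return total

for n in (10, 100, 500):
    print(n, poisson_cdf_at_n(n))   # slowly decreasing toward 1/2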


abstract algebra - Polynomial division: an obvious trick? [reducing mod $\textit{simpler}$ multiples]


The following question was asked on a high school test, where the students were given a few minutes per question, at most:



Given that, $$P(x)=x^{104}+x^{93}+x^{82}+x^{71}+1$$ and, $$Q(x)=x^4+x^3+x^2+x+1$$ what is the remainder of $P(x)$ divided by $Q(x)$?




The given answer was:



Let $Q(x)=0$. Multiplying both sides by $x-1$: $$(x-1)(x^4+x^3+x^2+x+1)=0 \implies x^5 - 1=0 \implies x^5 = 1$$ Substituting $x^5=1$ in $P(x)$ gives $x^4+x^3+x^2+x+1$. Thus, $$P(x)\equiv\mathbf0\pmod{Q(x)}$$





Obviously, a student is required to come up with a “trick” rather than doing brute force polynomial division. How is the student supposed to think of the suggested method? Is it obvious? How else could one approach the problem?


Answer



The key idea employed here is the method of simpler multiples - a very widely used technique. Note that $\,Q\,$ has a "simpler" multiple $\,QR = x^5\!-\!1,\,$ so we can first reduce $P$ modulo $\,x^{\large 5}\! -\! 1\,$ via $\!\bmod x^{\large 5}-1\!:\,\ \color{#c00}{x^{\large 5}\equiv 1}\Rightarrow\, x^{\large r+5q^{\phantom{|}}}\!\!\equiv x^{\large r}(\color{#c00}{x^{\large 5}})^{\large q}\equiv x^{\large r},\,$ then finally reduce that $\!\bmod Q,\,$ i.e.


$$P\bmod Q\, =\, (P\bmod QR)\bmod Q\qquad$$
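A quick verification of both the direct remainder and the two-step reduction, assuming SymPy is available (this snippet is only a check, not part of the argument):

from sympy import symbols, rem

x = symbols('x')
P = x**104 + x**93 + x**82 + x**71 + 1
Q = x**4 + x**3 + x**2 + x + 1

print(rem(P, Q, x))                      # 0, so Q divides P exactly
print(rem(rem(P, x**5 - 1, x), Q, x))    # 0, reducing mod the simpler multiple first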


This idea is ubiquitous, e.g. we already use it implicitly in grade school in radix $10$ to determine integer parity: first reduce mod $10$ to get the units digit, then reduce the units digits mod $2,\,$ i.e.


$$N \bmod 2\, = (N\bmod 2\cdot 5)\bmod 2\qquad\ $$


i.e. an integer has the same parity (even / oddness) as that of its units digit. Similarly since $7\cdot 11\cdot 13 = 10^{\large 3}\!+1$ we can compute remainders mod $7,11,13$ by using $\,\color{#c00}{10^{\large 3}\equiv -1},\,$ e.g. $\bmod 13\!:\,\ d_0+ d_1 \color{#c00}{10^{\large 3}} + d_2 (\color{#c00}{10^{\large 3}})^{\large 2}\!+\cdots\,$ $ \equiv d_0 \color{#c00}{\bf -} d_1 + d_2+\cdots,\,$ and, similar to the OP, by $\,9\cdot 41\cdot 271 = 10^{\large 5}\!-1\,$ we can compute remainders mod $41$ and $271$ by using $\,\color{#c00}{10^5\!\equiv 1}$


$$N \bmod 41\, = (N\bmod 10^{\large 5}\!-1)\bmod 41\quad $$


for example $\bmod 41\!:\ 10000\color{#0a0}200038$ $ \equiv (\color{#c00}{10^{\large 5}})^{\large 2}\! + \color{#0a0}2\cdot \color{#c00}{10^{\large 5}} + 38\equiv \color{#c00}1+\color{#0a0}2+38\equiv 41\equiv 0$



Such "divisibility tests" are frequently encountered in elementary and high-school and provide excellent motivation for this method of "divide first by a simpler multiple of the divisor" or, more simply, "mod first by a simpler multiple of the modulus".


This idea of scaling to simpler multiples of the divisor is ubiquitous, e.g. it is employed analogously in the method of rationalizing denominators and in Gauss's algorithm for computing modular inverses.


For example, to divide by an algebraic number we can employ its norm as a simpler multiple, e.g. for $\,w = a+b\sqrt{n}\,$ we use $\,ww' = (a+b\sqrt n)(a-b\sqrt n) = a^2-nb^2 = c\in\Bbb Q\ (\neq 0\,$ by $\,\sqrt{n}\not\in\Bbb Q),\,$ which reduces division by an algebraic to simpler division by a rational, i.e. $\, v/w = vw'/(ww'),$ e.g.


$$\dfrac{1}{a+b\sqrt n}\, =\, \dfrac{1}{a+b\sqrt n}\, \dfrac{a-b\sqrt n}{a-b\sqrt n}\, =\, \dfrac{a-b\sqrt n}{a^2-nb^2}\,=\, {\frac{\small 1}{\small c}}(a-b\sqrt n)\qquad $$


so-called rationalizing the denominator. The same idea works even with $\,{\rm\color{#c00}{nilpotents}}$


$$\color{#c00}{t^n = 0}\ \Rightarrow\ \dfrac{1}{a-{ t}}\, =\, \dfrac{a^{n-1}+\cdots + t^{n-1}}{a^n-\color{#c00}{t^n}}\, =\, a^{-n}(a^{n-1}+\cdots + t^{n-1})\qquad$$ Another example is Gauss' algorithm, where we compute fractions $\!\bmod m\,$ by iteratively applying this idea of simplifying the denominator by scaling it to a smaller multiple. Here we scale $\rm\color{#C00}{\frac{A}B} \to \frac{AN}{BN}\: $ by the least $\rm\,N\,$ so that $\rm\, BN \ge m,\, $ reduce mod $m,\,$ then iterate this reduction, e.g.


$$\rm\\ mod\ 13\!:\,\ \color{#C00}{\frac{7}9} \,\equiv\, \frac{14}{18}\, \equiv\, \color{#C00}{\frac{1}5}\,\equiv\, \frac{3}{15}\,\equiv\, \color{#C00}{\frac{3}2} \,\equiv\, \frac{21}{14} \,\equiv\, \color{#C00}{\frac{8}1}\qquad\qquad$$


Denominators of the $\color{#c00}{\rm reduced}$ fractions decrease $\,\color{#C00}{ 9 > 5 > 2> \ldots}\,$ so reach $\color{#C00}{1}\,$ (not $\,0\,$ else the denominator would be a proper factor of the prime modulus; it may fail for composite modulus)


See here and its $25$ linked questions for more examples similar to the OP (some far less trivial).


Worth mention: there are simple algorithms for recognizing cyclotomics (and products of such), e.g. it's shown there that $\, x^{16}+x^{14}-x^{10}-x^8-x^6+x^2+1$ is cyclotomic (a factor of $x^{60}-1),\,$ so we can detect when the above methods apply for arbitrarily large degree divisors.



Saturday, July 28, 2018

real analysis - Question regarding Lebesgue outer measure.



Given $m\geq1$, $0\leq s<\infty$, $0<\delta\leq\infty$ and $A\subseteq\mathbb{R}^{m}$ define: $$\mathcal{H}_{\delta}^{s}\left(A\right)=\inf\left\{ {\displaystyle \sum_{n=1}^{\infty}d\left(B_{n}\right)^{s}\,|\,\left\{ B_{n}\right\} _{n\in\mathbb{N}}}\:\mbox{is a covering of }A\:\mbox{and }d\left(B_{n}\right)<\delta\,\forall\, n\geq1\right\}$$
For ease of notation this refers to countable coverings but it also includes finite coverings. Now I've shown the following two things:




  1. For all $m\geq1$, $0\leq s<\infty$, $0<\delta\leq\infty$ this defines an outer measure on $\mathbb{R}^{m}$.

  2. For all $0\leq s<\infty$ and $A\subseteq\mathbb{R}^{m}$ the limit $ \mathcal{H}^{s}\left(A\right)=\lim\limits _{\delta\downarrow0}\mathcal{H}_{\delta}^{s}\left(A\right)$ exists.




I'm now trying to show these following two things:




  1. For all $ 0\leq s<\infty\quad$ $\mathcal{H}^{s}$ is a metric outer measure on $\mathbb{R}^{m}$.

  2. $\mathcal{H}^{0}$ is the counting measure on $\mathbb{R}^{m}$



Here is what I know so far regarding each of these goals:





  1. I want to show that for each $A,B\subseteq\mathbb{R}^{m}$ if $dist(A,B)>0$ then $$\mathcal{H}^{s}(A\cup B)=\mathcal{H}^{s}(A)+\mathcal{H}^{s}(B)$$I know that $dist(A,B)>0$ is equivalent to $\overline{A}\cap\overline{B}=\emptyset$ so I assume I should use that somehow but I'm not sure how.

  2. I've shown it works for finite sets but I'm having trouble with infinite sets. Obviously for $s=0$ we get that $\mathcal{H}_{\delta}^{0}\left(A\right)$ is a function only of the number of sets in the covering at which the infimum is attained. What I want to show is that given an infinite subset $A\subseteq\mathbb{R}^{m}$, as $\delta\downarrow0$ the number of sets required in order to cover $A$ by sets of diameter less than $\delta$ goes to infinity.
    At first this struck me as a bit odd since for a compact infinite set we know that for each $\delta>0$ there is a finite $\delta$-net covering $A$ but then I realized that doesn't mean the number of sets in these nets doesn't go to infinity.



Anyway, I'd really appreciate some thick hints or a full proof of these two claims, I've already spent a couple of hours wrecking my head over it alone :)


Answer



For (2) - for any $n$, an infinite set $A$ contains a subset of cardinality $n$. So as $H^0$ is an outer measure, $H^0(A)\geq n$. As this holds for all $n$, we are done.



modular arithmetic - Prove if $a^{(p)(p-1)} equiv 1$ (mod $p$), then $p$ divides $k$, where $k$ is an integer such that $a^{p(p-1)} - 1 = pk$, and $p$ is a prime number.



I am trying to prove that $a^{p(p-1)} \equiv 1$ (mod $p^2$). I have reduced the equation to essentially solving the the problem above, but I am not sure how to proceed. I've tried to use Fermat's Little Theorem and the fact that $a^{p(p-1)} \equiv 1$ (mod $p$) and that $a^{(p-1)} \equiv 1$ (mod $p$), but I haven't gotten anything concrete.


Answer



Write $a^{p-1}=1+pt$ with $t \in \mathbb Z$.
Then
$$a^{p(p-1)} = (1+pt)^p = \sum_{i=0}^p \binom{p}{i}(pt)^i \equiv 1+(pt)^p \equiv 1 \bmod p^2$$

because $\displaystyle\binom{p}{i}$ is a multiple of $p$ for $0 < i < p$.
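A numerical spot-check (not part of the proof; it assumes $\gcd(a,p)=1$, which is implicit in the question) using Python's three-argument pow:

from math import gcd

ok = all(pow(a, p * (p - 1), p * p) == 1
         for p in (3, 5, 7, 11, 13)
         for a in range(2, 40) if gcd(a, p) == 1)
print(ok)   # True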


Friday, July 27, 2018

calculus - I can't remember a fallacious proof involving integrals and trigonometric identities.



My calc professor once taught us a fallacious proof. I'm hoping someone here can help me remember it.



Here's what I know about it:




  • The end result was some variation of 0=1 or 1=2.

  • It involved (indefinite?) integrals.

  • It was simple enough for Calc II students to grasp.


  • The (primary?) fallacy was that the arbitrary constants (+ C) were omitted after integration.



I'm not certain, but I have a strong hunch it involved a basic trigonometric identity.


Answer



It's probably the classic
$$\int \sin 2x \;dx = \int 2\sin x\cos x \;dx$$




  • Doing a $u=\sin x$ substitution "gives" $$\int 2u \;du = u^2 = \sin^2 x$$



  • Alternatively, using $v = \cos x$ "gives" $$\int -2v \;dv = -v^2 = -\cos^2 x$$




Since the solutions must be equal, we have
$$\sin^2 x = -\cos^2 x \quad\to\quad \sin^2 x + \cos^2 x = 0 \quad\to\quad 1 = 0$$



As you note, the fallacy here is the failure to include "+ constant" to the indefinite integrals.







Note that there's also the substitution $w = 2x$, which "gives"
$$\begin{align}
\int \frac12 \; \sin w \; dw = -\frac12 \; \cos w = -\frac12\;\cos 2x &= -\frac12\;(2 \cos^2 x - 1 ) = -\cos^2 x + \frac12 \\[6pt]
&= -\frac12\;(1 - 2 \sin^2 x) = \phantom{-}\sin^2 x - \frac12
\end{align}$$
that leads to the same kind of apparent contradiction when compared to the other integrals.


number theory - Euclidean algorithm on $(a^n-1,a^m-1)$



I'm having trouble applying the Euclidean algorithm to
$$\gcd(a^n-1,a^m-1)=a^{\gcd(n,m)}-1$$
I've seen others do it such as in this solution but I'm having trouble figuring out what they are doing in each step.



Could someone help explain this to me?


Answer




Assume that $n=qm+r$. Then $ a^n-1 = a^r\cdot(a^m)^q-1 $, and since
$$ a^m \equiv 1 \pmod{a^m-1} \tag{1}$$
we have:
$$ (a^n-1) \equiv (a^r-1)\pmod{a^m-1}\tag{2} $$
proving:
$$ \gcd(a^n-1,a^m-1) = \gcd(a^r-1,a^m-1).\tag{3}$$
Now, repeat. The involved exponents transform like in the computation of
$$ \gcd(n,m) = \gcd(r,m) = \ldots \tag{4} $$
hence at last we get:
$$ \gcd(a^n-1,a^m-1) = a^{\gcd(n,m)}-1\tag{5} $$

qed.
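Here is a brute-force check of the identity over a small range (purely illustrative; the ranges for $a$, $n$, $m$ are arbitrary):

from math import gcd

ok = all(gcd(a**n - 1, a**m - 1) == a**gcd(n, m) - 1
         for a in (2, 3, 10)
         for n in range(1, 13)
         for m in range(1, 13))
print(ok)   # True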


Matrices: left inverse is also right inverse?

If $A$ and $B$ are square matrices, and $AB=I$, then I think it is also true that $BA=I$. In fact, this Wikipedia page says that this "follows from the theory of matrices". I assume there's a nice simple one-line proof, but can't seem to find it.



Nothing exotic, here -- assume that the matrices have finite size and their elements are real numbers.



This isn't homework (if that matters to you). My last homework assignment was about 45 years ago.

Thursday, July 26, 2018

abstract algebra - Find all irreducible monic polynomials in $mathbb{Z}/(2)[x]$ with degree equal or less than 5




Find all irreducible monic polynomials in $\mathbb{Z}/(2)[x]$ with degree equal or less than $5$.





This is what I tried:



It's evident that $x,x+1$ are irreducible. Then, use these to find all reducible polynomials of degree 2. The ones that can't be made are irreducible. Then use these to make polynomials of degree 3; the ones that can't be made are irreducible. Repeat until degree 5.



Doing it this way takes way too long, and I gave up when I was about to reach the degree 4 polynomials.



My question is: is there any easier way to find these polynomials?



P.S.: this is not exactly homework, but a question which I came across while studying for an exam.



Answer



Extrapolated Comments converted to answer:



First, we note that there are $2^n$ polynomials in $\mathbb{Z}_2[x]$ of degree $n$.



A polynomial $p(x)$ of degree $2$ or $3$ is irreducible if and only if it does not have linear factors. Therefore, it suffices to show that $p(0) = p(1) = 1$. This quickly tells us that $x^2 + x + 1$ is the only irreducible polynomial of degree $2$. This also tells us that $x^3 + x^2 + 1$ and $x^3 + x + 1$ are the only irreducible polynomials of degree $3$.



As hardmath points out, for a polynomial $p(x)$ of degree $4$ or $5$ to be irreducible, it suffices to show that $p(x)$ has no linear or quadratic factors. To rule out the linear factors, we can again throw out any polynomial not satisfying $p(0) = p(1) = 1$. That is, we can throw out any polynomial with constant term $0$, and we can throw out any polynomial with an even number of terms. This rules out $3/4$ of the polynomials. For example, the $4^{th}$ degree polynomials which do not have linear factors are:





  • $ x^4 + x^3 + x^2 + x + 1 $

  • $ x^4 + x^3 + 1 $

  • $ x^4 + x^2 + 1 $

  • $ x^4 + x + 1 $



The $5^{th}$ degree polynomials which do not contain linear factors are:




  • $x^5 + x^4 + x^3 + x^2 + 1$


  • $x^5 + x^4 + x^3 + x + 1$

  • $x^5 + x^4 + x^2 + x + 1$

  • $x^5 + x^3 + x^2 + x + 1$

  • $x^5 + x^4 + 1$

  • $x^5 + x^3 + 1$

  • $x^5 + x^2 + 1$

  • $x^5 + x + 1$



It still remains to check whether $x^2 + x + 1$ (which is the only quadratic irreducible polynomial in $\mathbb{Z}_2[x]$) divides any of these polynomials. This can be done by hand for sufficiently small degrees. Again, as hardmath points out, since $x^2 + x + 1$ is the only irreducible polynomial of degree $2$, it follows that $(x^2 + x + 1)^2 = x^4 + x^2 + 1$ is the only polynomial of degree $4$ which does not have linear factors and yet is not irreducible. Therefore, the other $3$ polynomials listed must be irreducible. Similarly, for degree $5$ polynomials, we can rule out




$$
(x^2 + x + 1)(x^3 + x^2 + 1) = x^5 + x + 1
$$



and



$$
(x^2 + x + 1)(x^3 + x + 1) = x^5 + x^4 + 1.
$$




The other $6$ listed polynomials must therefore be irreducible.



Notice that this trick of throwing out polynomials with linear factors, then quadratic factors, etc. (which hardmath called akin to the Sieve of Eratosthenes) is not efficient for large degree polynomials (even degree $6$ starts to be a problem, as a polynomial of degree $6$ can factor as a product of two polynomials of degree $3$). This method therefore only works for sufficiently small degree polynomials.



To recap, the irreducible polynomials in $\mathbb{Z}_2[x]$ of degree $\leq 5$ are:




  • $x$

  • $x+1$


  • $x^2 + x + 1$

  • $x^3 + x^2 + 1$

  • $x^3 + x + 1$

  • $ x^4 + x^3 + x^2 + x + 1 $

  • $ x^4 + x^3 + 1 $

  • $ x^4 + x + 1 $

  • $x^5 + x^4 + x^3 + x^2 + 1$

  • $x^5 + x^4 + x^3 + x + 1$

  • $x^5 + x^4 + x^2 + x + 1$

  • $x^5 + x^3 + x^2 + x + 1$


  • $x^5 + x^3 + 1$

  • $x^5 + x^2 + 1$


real analysis - How can I show that a r.v. with cumulative distribution is continuous?

I want to show that, if $F_X$ is the cumulative distribution function of a random variable $X$, then $X$ is absolutely continuous iff $F_X \in C^1(\mathbb{R})$.



I know absolutely continuous means that there exists $f(x) \ge 0, \int_{\mathbb{R}} f(x) dx = 1$ such that $P(X \in A) = \int_A f(x) dx$, and this happens if and only if $F_X (x) = \int_{-\infty} ^ x f(t)dt$



Now, we know $F_X (x) = \int_{-\infty}^x F'_X (t)dt$, and since $F_X \in C^1$, we know that $F_X'$ exists; it must be the density, and $X$ is absolutely continuous.



I am interested in a rigorous proof. Can this be considered rigorous?




Can some hypothesis be relaxed?

Wednesday, July 25, 2018

calculus - Find the value of: $\lim_{x\to\infty}\frac{\sqrt{x-1} - \sqrt{x-2}}{\sqrt{x-2} - \sqrt{x-3}}$



I'm trying to solve evaluate this limit



$$\lim_{x\to\infty}\frac{\sqrt{x-1} - \sqrt{x-2}}{\sqrt{x-2} - \sqrt{x-3}}.$$



I've tried to rationalize the denominator but this is what I've got



$$\lim_{x\to\infty}(\sqrt{x-1} - \sqrt{x-2})({\sqrt{x-2} + \sqrt{x-3}})$$




and I don't know how to remove these indeterminate forms $(\infty - \infty)$.



EDIT: without l'Hospital's rule (if possible).


Answer



Fill in details:



As $\;x\to\infty\;$ we can assume $\;x>0\;$ , so:



$$\frac{\sqrt{x-1}-\sqrt{x-2}}{\sqrt{x-2}-\sqrt{x-3}}=\frac{\sqrt{x-2}+\sqrt{x-3}}{\sqrt{x-1}+\sqrt{x-2}}=\frac{\sqrt{1-\frac2x}+\sqrt{1-\frac3x}}{\sqrt{1-\frac1x}+\sqrt{1-\frac2x}}\xrightarrow[x\to\infty]{}1$$




Further hint: the first step was multiplying by conjugate of both the numerator and the denominator.


integration - $\int^\infty_0 \frac{dx}{x^n + 1}$, $n$ odd

Well, I saw this integral :


$$\int^\infty_0 \frac{dx}{x^n + 1}$$


on some questions (like this one : Calculating a real integral using complex integration )


But it has always been for $n$ even. Is there something wrong with $n$ odd? If not, how do you compute this? By the same argument?


Thank you, and sorry if this is easy. I can't figure out how to do it since the integrand is neither odd nor even.

What's wrong with this proof? (infinite sequences)

So I just watched a video where they explained that the sum of all natural numbers is $-1/12$. However, there was an interesting comment:


S1 = 1 + 2 + 3 + 4 + 5 ... = - 1/12
S1 - S1 =
1 + 2 + 3 + 4 + 5 + 6 ...
- 1 - 2 - 3 - 4 - 5 ...
= 1 + 1 + 1 + 1 + 1 + 1 ...
Since S1 - S1 = - 1/12 - (- 1/12) = 0
It follows that 1 + 1 + 1 + 1 + 1 .... = 0
Let's name this sequence S2:

S2 = 1 + 1 + 1 + 1 + 1 ... = 0
Now let's subtract it from itself:
S2 - S2 =
1 + 1 + 1 + 1 + 1 ...
- 1 - 1 - 1 - 1 ....
= 1
Given that S2 equals 0, we can also write this as:
0 - 0 = 1

For those who are wondering about the video: https://www.youtube.com/watch?v=w-I6XTVZXww It does the same trick of "shifting". With infinity and stuff things get tricky. What is wrong with this proof? I am a math student, so involved answers with limits are fine.

Limit of $\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}$



How do I calculate this? $$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}$$

If I tried using l'Hopital's rule, it would become
$$\lim_{x\to0^+}\frac{\cos x}{\frac{1}{2\sqrt{x}}\cos \sqrt{x}}$$
which looks the same. I can't seem to find a way to proceed from here. Maybe it has something to do with $$\frac{\sin x}{x} \to 1$$
but I'm not sure what to do with it. Any advice?



Oh and I don't understand series expansions like Taylor's series.


Answer



By the equivalence $\sin x \sim x$ near zero we have
$$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{x}{\sqrt{x}}=0$$
or

$$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{\sin x}{x}\cdot\frac{\sqrt{x}}{\sin \sqrt{x}}\cdot\sqrt{x}=1\times1\times0=0$$


elementary set theory - How to show equinumerosity of the powerset of $A$ and the set of functions from $A$ to $\{0,1\}$ without cardinal arithmetic?

How to show equinumerosity of the powerset of $A$ and the set of functions from $A$ to $\{0,1\}$ without cardinal arithmetic?



Not homework, practice exercise.

analysis - Bijection from $mathbb{R} to mathbb{R} times mathbb{R}$?








I know it's possible to produce a bijection from $\mathbb{Z}$ to $\mathbb{Z}\times\mathbb{Z}$, but is it possible to do so from $\mathbb{R}$ to $\mathbb{R} \times \mathbb{R}$?

elementary number theory - Prove without induction that $3^{4n}-2^{4n}$ is divisible by $65$


My brother asked me this (for some reason).


My solution is:


$(3^{4n}-2^{4n})\bmod{65}=$


$(81^{n}-16^{n})\bmod{65}=$



$((81\bmod{65})^{n}-16^{n})\bmod{65}=$


$(16^{n}-16^{n})\bmod{65}=$


$0\bmod{65}$



I think that this solution is mathematically flawless (please let me know if you think otherwise).


But I'm wondering if there's another way, perhaps with the binomial expansion of $(81-16)^{n}$.


In other words, something like:


$3^{4n}-2^{4n}=$


$81^{n}-16^{n}=$


$(81-16)^{n}+65k=$


$65^{n}+65k=$



$65(65^{n-1}+k)$


How would I go from "$81^{n}-16^{n}$" to "$(81-16)^{n}+65k$"?


Answer



You can use the formula $$a^n-b^n = (a-b)\left(a^{n-1}+a^{n-2}b+a^{n-3}b^2+\ldots+ab^{n-2}+b^{n-1}\right)$$


real analysis - Limit of $L^p$ norm



Could someone help me prove that given a finite measure space $(X, \mathcal{M}, \sigma)$ and a measurable function $f:X\to\mathbb{R}$ in $L^\infty$ and some $L^q$, $\displaystyle\lim_{p\to\infty}\|f\|_p=\|f\|_\infty$?


I don't know where to start.


Answer



Fix $\delta>0$ and let $S_\delta:=\{x,|f(x)|\geqslant \lVert f\rVert_\infty-\delta\}$ for $\delta<\lVert f\rVert_\infty$. We have $$\lVert f\rVert_p\geqslant \left(\int_{S_\delta}(\lVert f\rVert_\infty-\delta)^pd\mu\right)^{1/p}=(\lVert f\rVert_\infty-\delta)\mu(S_\delta)^{1/p},$$ since $\mu(S_\delta)$ is finite and positive. This gives $$\liminf_{p\to +\infty}\lVert f\rVert_p\geqslant\lVert f\rVert_\infty.$$ As $|f(x)|\leqslant\lVert f\rVert_\infty$ for almost every $x$, we have for $p>q$, $$ \lVert f\rVert_p\leqslant\left(\int_X|f(x)|^{p-q}|f(x)|^qd\mu\right)^{1/p}\leqslant \lVert f\rVert_\infty^{\frac{p-q}p}\lVert f\rVert_q^{q/p},$$ giving the reverse inequality.
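As a concrete illustration of the statement (not part of the proof), take $X=[0,1]$ with Lebesgue measure and $f(x)=x$; then $\lVert f\rVert_p=(p+1)^{-1/p}$, which indeed tends to $\lVert f\rVert_\infty=1$:

for p in (1, 2, 10, 100, 1000):
    print(p, (1 / (p + 1)) ** (1 / p))   # 0.5, 0.577..., then creeping up toward 1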


Tuesday, July 24, 2018

vector spaces - Does existence of a non-continuous linear functional depend on Axiom of Choice?



Well, it is easy to construct a non-continuous linear functional on an arbitrary infinite-dimensional vector space (assuming Choice, and taking a basis etc.).



I think it is intuitive to say that:





Every non-finite-dimensional space has a non-continuous linear functional.




depends on the axiom of choice. But what if I am not so demanding, and just want an example of a vector space where there exists a non-continuous linear functional. Do I need choice for that?



(OBS: I'm restricting myself to normed spaces)


Answer



You do not need the axiom of choice to prove "There exists a normed space with a discontinuous linear functional". Consider the space $c_{00}$ of all sequences of real numbers with only finitely many nonzero terms, with the supremum norm. The functional $f(x) = \sum_{n=1}^\infty n x(n)$ is discontinuous. Everything in this argument is completely explicit and constructive, and we needed no choice whatsoever.




You do need the axiom of choice to prove "There exists a Banach space with a discontinuous linear functional". This is nontrivial. I found a reasonable overview in C. Rosendal, "Automatic continuity of Polish group homomorphisms", Bull. Symbolic Logic 15 (2009), no. 2, 184-214.



Very roughly: It is not hard to show it suffices to consider the case of a separable Banach space $X$, which is a Polish space (separable and completely metrizable). A subset $A$ of a Polish space has the property of Baire if it can be written $A = U \Delta M$ where $U$ is open and $M$ is meager. A function between Polish spaces is Baire measurable if the preimage of every open set has the property of Baire. There is a short but clever argument known as the Pettis lemma that shows that any linear functional on $X$ which is Baire measurable must in fact be continuous. Then, there is a deep result of Shelah and Solovay which says that it is consistent with the negation of the axiom of choice that every subset of every Polish space has the property of Baire. If so, then every function on $X$ is Baire measurable, and every linear functional on $X$ is therefore continuous.


Proof that there exists a bijective function

Let $S$ be a set. Consider the set of all functions from $S$ into $\{0,1\}$.




The set is $2^S$



How do I prove that there exists a bijective function from $P(S)$ to $2^S$?

trigonometry - Prove that $\sum\limits_{n=1}^N \cos(2\pi n/N)= 0$?





Is there a way to algebraically prove that $ \sum\limits_{n=1}^N \cos(2 \pi n/N) = 0 $ for any integer $N > 1$? (And if so, how?)


Answer



Hint :



$$\sum_{n=1}^N\cos\left(\frac{2\pi n}{N}\right)=\Re\left(\sum_{n=1}^N\exp\left(\frac{2i\pi n}{N}\right)\right)$$



and also :



$$\forall\theta\in\mathbb{R},\,\forall n\in\mathbb{N},\,\exp(i n\theta)=\left[\exp(i\theta)\right]^n$$
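A quick numerical check of the identity for several values of $N$ (just a sanity check, not a proof):

import math

for N in range(2, 9):
    s = sum(math.cos(2 * math.pi * n / N) for n in range(1, N + 1))
    print(N, round(s, 12))   # all zero up to rounding; N = 1 would give 1 instead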


algebra precalculus - What is going wrong in this "proof" of $0=1$?



\begin{align} -20 &= -20\\ 16-36 &= 25-45\\ 4^2-4\times 9&=5^2-5\times 9\\ 4^2-4\times 9+81/4&=5^2-5\times 9+81/4\\ 4^2-4\times 9+(9/2)^2&=5^2-5\times 9+(9/2)^2\\ \end{align}


Considering the formula $a^2-2ab+b^2=(a-b)^2$, one has \begin{align} (4-9/2)^2&=(5-9/2)^2\\ \sqrt{(4-9/2)^2}&=\sqrt{(5-9/2)^2}\\ 4-9/2&=5-9/2\\ 4&=5\\ 4-4&=5-4\\ 0&=1 \end{align}



Answer



Here let me simplify your process:


$$ \begin{align} (-2)^2 = 4 &\implies \sqrt{(-2)^2} = \sqrt{2^2} \\ &\implies -2 = 2 \\ &\implies -2 + 2 = 2 +2 \\ &\implies 0 = 4 \end{align} $$


QED. Do you see the mistake?


Monday, July 23, 2018

limits - Evaluating $\lim_{n\to\infty}{n\left(\ln(n+2)-\ln n\right)}$




I am trying to find$$\lim_{n\to\infty}{n\left(\ln(n+2)-\ln n\right)}$$
But I can't figure out any good way to solve this.
Is there a special theorem or method to solve such limits?


Answer



Why not elementary? $n(\ln(n+2)-\ln n)=\ln(\frac{n+2}{n})^n=\ln(1+\frac{2}{n})^n \to \ln e^2=2$


Sunday, July 22, 2018

calculus - How to find the limit $\displaystyle \lim_{n\to\infty}n\left(1-\tfrac{\ln n}{n}\right)^n$





How to find this limit $$\displaystyle \lim_{n\to\infty}n\left(1-\dfrac{\ln n}{n}\right)^n$$



Answer



Let $$f(n) = n\left(1 - \frac{\log n}{n}\right)^{n}\tag{1}$$ and we need to calculate the limit of $f(n)$ as $n \to \infty$ through integer values. The best approach would be to analyze the behavior of $\log f(n)$. Clearly we have $$\log f(n) = \log n + n\log\left(1 - \frac{\log n}{n}\right)\tag{2}$$ and if $n > 1$ we know that $$0 < \frac{\log n}{n} < 1\tag{3}$$ We also know that the following inequality $$x < -\log(1 - x) < \frac{x}{1 - x}\tag{4}$$ holds for $0 < x < 1$. Replacing $x$ with $(\log n)/n$ in the above inequality we get $$\frac{\log n}{\log n - n} < \log\left(1 - \frac{\log n}{n}\right) < -\frac{\log n}{n}$$ Multiplying by $n$ we get $$\frac{n\log n}{\log n - n} < n\log\left(1 - \frac{\log n}{n}\right) < -\log n\tag{5}$$ Using $(2)$ we now have $$\frac{(\log n)^{2}}{\log n - n} < \log f(n) < 0\tag{6}$$ Now we can see that
\begin{align}
A &= \lim_{n \to \infty}\frac{(\log n)^{2}}{\log n - n}\notag\\
&= \lim_{n \to \infty}\dfrac{(\log n)^{2}}{n\left(\dfrac{\log n}{n} - 1\right)}\notag\\
&= \lim_{n \to \infty}\dfrac{(\log n)^{2}}{n}\cdot\dfrac{1}{\dfrac{\log n}{n} - 1}\notag\\
&= 0\cdot\frac{1}{0 - 1} = 0\tag{7}
\end{align}



In the above derivation we have used the standard result that $$\lim_{n \to \infty}\frac{(\log n)^{a}}{n^{b}} = 0\tag{8}$$ for any positive numbers $a, b$. Using Squeeze theorem in equation $(6)$ and noting the equation $(7)$ we get that $\log f(n) \to 0$ as $n \to \infty$. Hence $f(n) \to 1$ as $n \to \infty$. The desired limit is therefore equal to $1$.



Update: Some other answers make use of the symbol $\sim$, but it is wrong unless provided with further justification. The definition of the symbol $\sim$ in the current context is like this. If $$\lim_{n \to \infty}\frac{a(n)}{b(n)} = 1$$ then we write $a(n) \sim b(n)$. And because of this definition we can replace $a(n)$ by $b(n)$ while calculating limits where $a(n)$ is used in the multiplicative context. To be more specific if we have $a(n) \sim b(n)$ then while calculating the limit of an expression like $a(n)c(n)$ we can replace $a(n)$ by $b(n)$ and just calculate the limit of $b(n)c(n)$ to get final answer. This is justified because we can write $$\lim_{n \to \infty}a(n)c(n) = \lim_{n \to \infty}\frac{a(n)}{b(n)}\cdot b(n)c(n) = \lim_{n \to \infty}1\cdot b(n)c(n)$$ Replacement of $a(n)$ by $b(n)$ in other contexts must be justified by further analysis and it may generate wrong answer also.



Further Update: In case you have access to powerful technique of series expansions then the limit can be calculated easily as follows:
\begin{align}
\log f(n) &= \log n + n\log\left(1 - \frac{\log n}{n}\right)\notag\\
&= \log n - n\left\{\frac{\log n}{n} + \frac{(\log n)^{2}}{2n^{2}} + o\left(\frac{(\log n)^{2}}{n^{2}}\right)\right\}\notag\\
&= -\frac{(\log n)^{2}}{2n} + o\left(\frac{(\log n)^{2}}{n}\right)\notag
\end{align}
Using the fact that $(\log n)^{2}/n \to 0$ as $n \to \infty$ we can see that $\log f(n) \to 0$ and hence $f(n) \to 1$ as $n \to \infty$. My preferred approach is to use simpler tools (theorems on algebra of limits, Squeeze theorem etc), but advanced tools like series expansions and L'Hospital give the answer very easily.
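A direct numerical check matches the analysis: the values approach $1$ very slowly, consistent with $\log f(n)\approx -(\log n)^2/(2n)$ (the snippet below is just an illustration):

import math

for n in (10**2, 10**4, 10**6, 10**8):
    print(n, n * (1 - math.log(n) / n) ** n)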


divisibility - How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes?






How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes?


Answer



The number of zeros at the end of $N!$ is given by $$\left \lfloor \frac{N}{5} \right \rfloor + \left \lfloor \frac{N}{5^2} \right \rfloor + \left \lfloor \frac{N}{5^3} \right \rfloor + \cdots$$ where $\left \lfloor \frac{x}{y} \right \rfloor$ is the greatest integer $\leq \frac{x}{y}$.


To make it clear, write $N!$ as a product of primes $N! = 2^{\alpha_2} 3^{\alpha_3} 5^{\alpha_5} 7^{\alpha_7} 11^{\alpha_{11}} \ldots$ where $\alpha_i \in \mathbb{N}$.


Note that $\alpha_5 < \alpha_2$, $\forall N$. (Why?)


The number of zeros at the end of $N!$ is the highest power of $10$ dividing $N!$


If $10^{\alpha}$ divides $N!$ and since $10 = 2 \times 5$, $2^{\alpha} | N!$ and $5^{\alpha} | N!$. Further since $\alpha_5 < \alpha_2$, the highest power of $10$ dividing $N!$ is the highest power of $5$ dividing $N!$ which is $\alpha_5$.


So you will find that for $N \leq 24$, the number of zeros will be less than or equal to 4. However when $N$ hits $25$ you will get 2 additional zeros courtesy $25$ since $25 \times 2^2 = 100$. Hence, there will be a jump when you go from $24$ to $25$.
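The jump is easy to see with a short Python helper implementing the formula above (the function name trailing_zeros is mine):

def trailing_zeros(n):
    # exponent of 5 (and hence of 10) in n!
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

print([trailing_zeros(n) for n in range(20, 31)])
# [4, 4, 4, 4, 4, 6, 6, 6, 6, 6, 7] -- the count skips 5 entirely at n = 25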



EDIT:


Note that there will be



  1. A jump of $1$ zero going from $(N-1)!$ to $N!$ if $5 || N$




  2. A jump of $2$ zero going from $(N-1)!$ to $N!$ if $5^2 || N$




  3. A jump of $3$ zero going from $(N-1)!$ to $N!$ if $5^3 || N$ and in general





  4. A jump of $k$ zero going from $(N-1)!$ to $N!$ if $5^k || N$



where $a || b$ means $a$ divides $b$ and gcd($a,\frac{b}{a}$) = 1


EDIT


Largest power of a prime dividing $N!$


In general, the highest power of a prime $p$ dividing $N!$ is given by


$$s_p(N!) = \left \lfloor \frac{N}{p} \right \rfloor + \left \lfloor \frac{N}{p^2} \right \rfloor + \left \lfloor \frac{N}{p^3} \right \rfloor + \cdots$$


The first term appears since you want to count the number of terms less than $N$ and are multiples of $p$ and each of these contribute one $p$ to $N!$. But then when you have multiples of $p^2$ you are not multiplying just one $p$ but you are multiplying two of these primes $p$ to the product. So you now count the number of multiple of $p^2$ less than $N$ and add them. This is captured by the second term $\displaystyle \left \lfloor \frac{N}{p^2} \right \rfloor$. Repeat this to account for higher powers of $p$ less than $N$.



In case of the current example, the largest prime dividing $10$ is $5$. Hence, the largest power of $10$ dividing $N!$ is the same as the largest power of $5$ dividing $N!$.


Largest power of a prime dividing other related products


In general, if we want to find the highest power of a prime $p$ dividing numbers like $\displaystyle 1 \times 3 \times 5 \times \cdots (2N-1)$, $\displaystyle P(N,r)$, $\displaystyle \binom{N}{r}$, the key is to write them in terms of factorials.


For instance, $$\displaystyle 1 \times 3 \times 5 \times \cdots (2N-1) = \frac{(2N)!}{2^N N!}.$$ Hence, the largest power of a prime, $p>2$, dividing $\displaystyle 1 \times 3 \times 5 \times \cdots (2N-1)$ is given by $s_p((2N)!) - s_p(N!)$, where $s_p(N!)$ is defined above. If $p = 2$, then the answer is $s_p((2N)!) - s_p(N!) - N$.


Similarly, $$\displaystyle P(N,r) = \frac{N!}{(N-r)!}.$$ Hence, the largest power of a prime, dividing $\displaystyle P(N,r)$ is given by $s_p((N)!) - s_p((N-r)!)$, where $s_p(N!)$ is defined above.


Similarly, $$\displaystyle C(N,r) = \binom{N}{r} = \frac{N!}{r!(N-r)!}.$$ Hence, the largest power of a prime, dividing $\displaystyle C(N,r)$ is given by $s_p((N)!) - s_p(r!) - s_p((N-r)!)$, where $s_p(N!)$ is defined above.


Is "Strong Induction" not actually stronger than normal induction?



I'm proving something via induction (which has turned into strong induction) and there's something I've never really fully understood about strong induction. The name "strong induction" does make it sound like a more 'powerful' version of induction - but surely this is somewhat counter-intuitive (at least in my mind) given the implications of strong induction?




Just going to the definition of strong induction, it lets you assume that not only does the inductive hypothesis hold for some integer, but also that it holds for all integers less than it as well.



In my mind, assuming something is true for more than one cases isn't as powerful as assuming it's true for only one value and using that as a basis. In my proof, I've had to use not only the $n=k$ assumption, but also the assumption it holds for $n = k-1$, and I really can't see how this is 'strong' or 'complete' induction.



I'm sure someone can enlighten me! Thanks everyone.


Answer



When you say "assuming something is true for more than one cases isn't as powerful as assuming it's true for only one value" you have the direction wrong. Assuming it is true for one value is one thing, assuming it is true for many values is clearly stronger-you have more to work with. As others have said, you can't prove anything from strong induction that you can't prove from weak induction, but that is a significant result in logic. Bigger assumptions are weaker than smaller ones, because you might need something that is in the big one and not in the small one.


Saturday, July 21, 2018

calculus - Why $\sum_{k=1}^{\infty} \frac{k}{2^k} = 2$?




Can you please explain why
$$
\sum_{k=1}^{\infty} \dfrac{k}{2^k} =
\dfrac{1}{2} +\dfrac{ 2}{4} + \dfrac{3}{8}+ \dfrac{4}{16} +\dfrac{5}{32} + \dots =
2
$$




I know $1 + 2 + 3 + ... + n = \dfrac{n(n+1)}{2}$


Answer



\begin{gather*}
|x|<1:\quad f(x)=\sum_{n=1}^{\infty} x^n=\frac{x}{1-x} \\
xf'(x)=\sum_{n=1}^{\infty} nx^n=\frac{x}{(1-x)^2}
\end{gather*}



Let $x=\frac{1}{2}$
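Numerically, the partial sums converge to $2$ quickly (a small check, not part of the derivation):

print([sum(k / 2**k for k in range(1, N + 1)) for N in (5, 10, 20, 40)])
# 1.78125, 1.98828125, then rapidly approaching 2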


real analysis - A sequence of functions which are non-uniformly lipschitz with lipschitz limit


I have a general question about a sequence of functions $(f_n)$ from $[0,1]$ to the reals which are non-uniformly Lipschitz (i.e., each has a Lipschitz constant $D_n$, but we don't know whether these are bounded) and converge uniformly to a function with Lipschitz constant $D$. I have been wondering whether this is sufficient to say that the set $\{D_n, n \in \mathbb{N}\}$ is bounded. I have seen examples here of non-Lipschitz functions converging to Lipschitz ones, so it seems this idea may not be correct. And indeed I was able to build a counterexample, which is


$f_n(x) = nxe^{-n^nx}$


The derivative of these functions at $0$ is $n$, so there are elements of this sequence with arbitrarily large Lipschitz constant (and each $f_n$ is indeed Lipschitz since their derivative is continuous in $[0,1]$). Moreover for these functions the maximum is achieved at $x=\frac{1}{n^n}$ with value $\frac{1}{en^{n-1}}$ so they uniformly converge to $0$.


Thus this builds a counterexample. Now my question is could one add some more assumptions to such a sequence so that the Lipschitz constants would be bounded (of course no easy assumptions like the functions being differentiable and their derivatives converging etc, it should be a topological property).



Thanks


Answer



If I understand correctly, the question is: given that $f_n\to f$ and $f$ is Lipschitz, what additional assumptions do we need to conclude that $(f_n)$ is a uniformly Lipschitz sequence?


Concerning



it should be a topological property



I remark that Lipschitz-ness is not a topological property, it is a metric property. If we are on a topological space without a metric, the concept of being Lipschitz does not exist.


Here is a metric assumption: the sequence $(f_n)$ converges in the Lipschitz norm $$ \|f\|_{\rm Lip} = |f(x_0)|+\sup_{x\ne y}\frac{|f(x)-f(y)|}{d(x,y)} \tag1$$ The definition of norm (1) involves choosing a base point $x_0$ in our metric space; the term $|f(x_0)|$ is necessary so that the constant functions get nonzero norm.


Let's check. Suppose $\|f_n-f\|_{\rm Lip}\to 0$ and $f$ is Lipschitz. Then there is $N$ such that for all $n\ge N$ we have $\|f_n-f\|_{\rm Lip}\le 1$. Hence $\|f_n\|_{\rm Lip}\le 1+\|f\|_{\rm Lip}$, which is a uniform upper bound on the Lipschitz constants of $f_n$.



permutations - How many factors of $10$ in $100!$






How many factors of 10 are there in $100!$ (IIT Question)?


Is it $26$, $25$, $24$, or some other value?


Please tell how you have done it


Answer



So first, we want to find how many factors of $5$ there are in $100!$. There are $20$ numbers divisible by $5$ from $1$ to $100$, so we start off our count at $20$. Then, we count how many numbers are divisble by $5^2$. There are four: $25, 50, 75, 100$, and so we add four to our count to get $24$ factors of $5$. (Note that we don't add eight fives - if we did so, we would be counting the first factors of five twice!)


Since $5^3>100$, we don't have to worry about third powers of five. There are at least $100/2=50$ factors of $2$ in $100!$, but we're only going to use $24$ of them to get our $24$ multiples of $10$, so we don't have to calculate the exact number of factors of $2$ in $100!$.


So basic method: To find how many factors of $a$ there are in $b!$, first decompose $a$ into its primes $p_n$, and then find out how many factors of each prime $p_n$ there are in $b!$, by using the method I described of checking for divisibility by $p_n$, then $p_n^2$, etc. Then, from this pool of factors, figure out how many you can take. In our example, to make $10^n$ we could take a maximum of $24$ fives and $24$ twos. If we wanted to find how many factors of $40$ ($=2^3\cdot 5$) there are in $100!$, we would have needed to find out exactly how many factors of $2$ there are in $100!$, and then either take $24\cdot 3$ twos if there are enough, or fewer if there aren't.
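For this particular case one can also just ask Python directly, since it handles big integers exactly (a verification only):

import math

f = str(math.factorial(100))
print(len(f) - len(f.rstrip("0")))   # 24 trailing zeros, i.e. 24 factors of 10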


See also: youtube Factors of Factorials Part 1



Friday, July 20, 2018

calculus - If $\lim_{x\to\infty}\frac{\sin x}{x} = 0$, why does it not work with L'Hopital's rule?

I just got a simple question regarding the use of L'Hopital's method for finding limits. Usually L'Hopital's method can be used to find limits like
$$\lim_{x\to0}\frac{\sin x}{x} = \lim_{x\to 0}\dfrac{\dfrac{d}{dx} \sin x}{\dfrac{d}{dx} x} = \lim_{x\to 0}\cos x$$



Here if we plug $0$, we can find the limit of the original function $\dfrac{\sin x}{x}$ at $0$ using the $\cos x$ function. Put $0$ in, and you will get $1$, which is correct. However, if we replace $x$ with $\infty$, we don't get the right limit.



$$\cos x$$
$$\cos(\infty)$$




Which is not right for the limit of the original function, as $$\lim_{x\to\infty}\frac{\sin x}{x} = 0$$ Using the new function which we get via L'Hopital's method does not help get that. Is this a special case? In what cases, then, does L'Hopital's rule not work?

real analysis - non-continuous function satisfies $f(x+y)=f(x)+f(y)$




As mentioned in this link, for any $f$ on the real line $\mathbb{R}^1$ satisfying $f(x+y)=f(x)+f(y)$, we have: $f$ continuous $\Leftrightarrow$ $f$ measurable.



But



how to show there exists such a non-measurable function satisfying $f(x+y)=f(x)+f(y)$?



I guess we may use the uniform boundedness principle and the fact that $f$ is continuous iff it is continuous at zero under the above assumption.



Thanks in advance!



Answer



Considering $\mathbb R$ as an infinite-dimensional $\mathbb Q$-vector space, any $\mathbb Q$-linear map that is not of the form $x\mapsto cx$ will do. For example, one can extend the function
$$f(x)=42a+666b\quad \text{ if } x=a+b\sqrt 2\text{ with }a,b\in \mathbb Q$$
defined on $\mathbb Q[\sqrt 2]$ to all of $\mathbb R$, if one extends the $\mathbb Q$-linearly independent set $\{1,\sqrt 2\}$ to a basis of $\mathbb R$. (This requires the Axiom of Choice, of course)


induction - For every natural number $n$, $ 3^{3n} - 1$ is divisible by $26$.


Use induction to prove that for every natural number $n$, $ 3^{3n} - 1$ is divisible by $26$.





I can see that for $n=1$, $ 3^{3} -1=26\cdot 1$. As for the inductive step, assuming that the statement holds for $n=k$ (that is, $3^{3k}-1 = 26m$ for some integer $m$), I want to show it for $n=k+1$ (that is, $3^{3(k+1)}-1$ is divisible by $26$). But how to proceed from here?

Thursday, July 19, 2018

Higher degree polynomial with complex roots



I'm working on the following problem:



$$ r^4 - 3r^2 -4r = 0 $$



I factor out one $r$, leaving me with $ r(r^3 - 3r -4) = 0 $. One real root is $r=0$, and I'm unable to find the other ones. I tried using synthetic division but it didn't help. I tried googling synthetic division with complex root problems, but all the videos use examples that are given a complex solution in order to solve the other roots. So what could be a good approach to this problem?


Answer




Hint. Applying Cardano's formula (see the link above) to the reduced equation
$$
r^3 - 3r -4=0,
$$ one gets the real root




$$
r_1=\left(2-\sqrt{3}\right)^{1/3}+\left(2+\sqrt{3}\right)^{1/3}
$$





and the two complex roots




$$
r_{2}^{\pm}=-\frac12 \left(2- \sqrt{3}\right)^{1/3} \left(1\pm i \sqrt{3}\right)-\frac{1}{2} \left(2+\sqrt{3}\right)^{1/3}\left(1\mp i \sqrt{3}\right) .
$$
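These closed forms can be compared against a numerical root finder; the snippet below (illustrative only) assumes NumPy is available:

import numpy as np

u, v = (2 + 3**0.5) ** (1 / 3), (2 - 3**0.5) ** (1 / 3)
r1 = u + v                                           # about 2.1958
r2 = -0.5 * v * (1 + 1j * 3**0.5) - 0.5 * u * (1 - 1j * 3**0.5)
print(np.roots([1, 0, -3, -4]))                      # roots of r^3 - 3r - 4
print(r1, r2)                                        # matches, up to ordering/conjugation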



calculus - Two questions about Euler's number $e$



I am on derivatives at the moment and I just bumped into this number $e$, "Euler's number". I am told that this number is special, especially when I take the derivative of $e^x$, because its slope at any point is 1. Also it is an irrational number ($2.71828\ldots$) whose decimal expansion never ends, like $\pi$.



So I have two questions, I can't understand





  1. What is so special about this fact that it's slope is always 1?

  2. Where do we humans use this number that is so useful, how did Mr Euler come up with this number?



And how come this number is a constant? Where can we find this number in nature?


Answer



You don't take the derivative of a constant. You could, but it's zero.



What you should be talking about is the exponential function, $ e^x $ commonly denoted by $ \exp(\cdot ) $. Its derivative at any point is equal to its value, i.e. $ \frac{d}{dx} e^x \mid_{x = a} = e^a $. That is to say, the slope of the function is equal to its value for all values of $ x $.




As for how to arrive at it, it depends entirely on definition. There are numerous ways to define $ e $, the exponential function, or the natural logarithm. One common definition is to define $$ \ln x := \int\limits_1^x \frac{1}{t} \ dt $$ From here, you can define $ e $ as the sole positive real $x$ such that $ \ln x = 1 $ and arrive at it numerically.



Another common definition is $ e = \lim\limits_{n \to \infty}\left(1 + \frac{1}{n}\right)^n $, although in my opinion it is easier to derive properties from the former definition.


Wednesday, July 18, 2018

algebra precalculus - Difference of 2 roots

$2003^2x^2 - (2002)(2004)x - 1 = 0$



$x^2+2002x - 2003 = 0$



What is the difference of the 1st equation's larger root and the smaller root of the 2nd equation?



I did not know how to approach this, so I used the quadratic formula but could not easily simplify the expressions. What properties will be used ?

linear algebra - Inverse of a symmetric block tridiagonal matrix



I am aware of existent discussion on the inverse of a block tridiagonal matrix on this website (for example, How to invert a block tridiagonal matrix?) and I have been googling articles about this topic, but I feel I may be interested in a slightly different setting and I cannot tell whether the references I looked so far discuss that, so I'm posting here.



Similar to the link above, I am interested in the last block along the diagonal, the block in $A^{-1}$ corresponding to $D_n$ in $A$. However, the size of the blocks may vary: I do not assume the $D_i$ are all of the same size, and I assume each $D_i$ is $n_i \times n_i$.



$$A = \begin{bmatrix}
D_1 & A_2^{\top} & & \\
A_2 & D_2 & A_3^{\top} & & & \\
& \ddots & \ddots & \ddots \\
& & A_{n-1} & D_{n-1} & A_n^{\top} \\
& & & A_n & D_n \\
\end{bmatrix}$$



One reference I looked at is https://epubs.siam.org/doi/pdf/10.1137/0613045 and Theorem 3.4 in it gives a general formula when $A$ is proper (i.e. the matrices $A_i$ are nonsingular). However, I am unsure if my setting fits the paper, as it says "block is of order n" (pg.8), and I wonder if the "order" here means $\Theta(n)$. If it actually means equal-size diagonal block, then I wonder if anyone could point some other reference to me for different-size block setting. Thank you!


Answer



For convenience, let $$T_k =
\begin{bmatrix}
D_1 & A_2^{\top} & & \\
A_2 & D_2 & A_3^{\top} & & & \\
& \ddots & \ddots & \ddots \\
& & A_{k-1} & D_{k-1} & A_k^{\top} \\
& & & A_k & D_k \\
\end{bmatrix}$$
for $k = 1,2,\ldots,m$, where I've let $m$ be the total number of diagonal blocks in the original matrix. This is to avoid confusion since the diagonal blocks are of size $n_1 \times n_1, \ldots, n_m \times n_m$. Our goal is to compute $T_m^{-1}$ as efficiently as possible.



Trivially, $T_1 = D_1$, so $T_1^{-1} = D_1^{-1}$, which can be computed in $O(n_1^3)$ operations.



Now, suppose we've already computed $T_{k-1}^{-1}$ and we wish to compute $T_k^{-1}$. We can partition $$T_k = \begin{bmatrix}T_{k-1} & Z_k^T \\ Z_k & D_k \end{bmatrix}$$ where $Z_k = \begin{bmatrix}0 & A_k\end{bmatrix}$. To invert $T_k$, we can apply the block matrix inverse formula to get $$T_k^{-1} = \begin{bmatrix}T_{k-1}^{-1} + T_{k-1}^{-1}Z_k^TS_kZ_kT_{k-1}^{-1} & -T_{k-1}^{-1}Z_k^TS_k \\ -S_kZ_kT_{k-1}^{-1}& S_k \end{bmatrix} \quad \text{where} \quad S_k = (D_k-Z_kT_{k-1}^{-1}Z_k^T)^{-1}.$$




With $T_{k-1}^{-1}$ already computed, we require the following steps:




  1. Form the triple product $Z_kT_{k-1}^{-1}Z_k^T$ -- $O(n_{k-1}^2n_k + n_{k-1}n_k^2)$ operations

  2. Subtract $Z_kT_{k-1}^{-1}Z_k^T$ from $D_k$ to get $D_k - Z_kT_{k-1}^{-1}Z_k^T$ -- $O(n_k^2)$ operations

  3. Invert $D_k - Z_kT_{k-1}^{-1}Z_k^T$ to get $S_k$ -- $O(n_k^3)$

  4. Multiply $S_k$ by $Z_k$ to get $S_kZ_k$ -- $O(n_{k-1}n_k^2)$ operations

  5. Multiply $Z_k^T$ by $S_k$ to get $Z_k^TS_k$ -- $O(n_{k-1}n_k^2)$ operations

  6. Multiply $-S_kZ_k$ by $T_{k-1}^{-1}$ to get $-S_kZ_kT_{k-1}^{-1}$ -- $O(n_k^2(n_1+\cdots+n_{k-1}))$ operations

  7. Multiply $T_{k-1}^{-1}$ by $-Z_k^TS_k$ to get $-T_{k-1}^{-1}Z_k^TS_k$ -- $O(n_k^2(n_1+\cdots+n_{k-1}))$ operations


  8. Multiply $Z_k^T$ by $S_kZ_kT_{k-1}^{-1}$ to get $Z_k^TS_kZ_kT_{k-1}^{-1}$ -- $O(n_k^2(n_1+\cdots+n_{k-1}))$ operations

  9. Multiply $T_{k-1}^{-1}$ by $Z_k^TS_kZ_kT_{k-1}^{-1}$ to get $T_{k-1}^{-1}Z_k^TS_kZ_kT_{k-1}^{-1}$ -- $O(n_k^2(n_1+\cdots+n_{k-1}))$ operations

  10. Add $T_{k-1}^{-1}$ and $T_{k-1}^{-1}Z_k^TS_kZ_kT_{k-1}^{-1}$ to get $T_{k-1}^{-1}+T_{k-1}^{-1}Z_k^TS_kZ_kT_{k-1}^{-1}$ -- $O((n_1+\cdots+n_{k-1})^2)$ operations



Note that many of the above steps take advantage of the fact that $Z_k = \begin{bmatrix}0 & A_k\end{bmatrix}$ and $S_kZ_k = \begin{bmatrix}0 & S_kA_k\end{bmatrix}$ are $n_k \times (n_1+\cdots+n_{k-1})$ matrices which have all zeros except for a block of size $n_k \times n_{k-1}$.



If all the blocks are the same size $n_1 = \cdots = n_m = n$, then the total cost of computing $T_k^{-1}$ from $T_{k-1}^{-1}$, $A_k$, and $D_k$ is $O((k-1)n^3+(k-1)^2n^2)$. Thus, the total cost of computing $T_m^{-1}$ recursively is $O(m^2n^3+m^3n^2)$ as opposed to $O(m^3n^3)$ by just direct inversion. If the blocks are not all the same size, it's a bit harder to analyze how much faster the above method is compared to direct inversion. However, I suspect the above method is still faster in many cases.
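Below is a minimal numpy sketch of this recursion (my own illustration; the function and variable names are hypothetical, not from the paper or from the answer above). It assumes the diagonal blocks are given as a list D of square arrays of possibly different sizes, and the sub-diagonal blocks as a list A with A[k] of shape $n_{k+1} \times n_k$ (0-based indexing).

import numpy as np

def block_tridiag_inverse(D, A):
    """Invert the symmetric block tridiagonal matrix with diagonal blocks
    D[0], ..., D[m-1] and sub-diagonal blocks A[0], ..., A[m-2]."""
    Tinv = np.linalg.inv(D[0])                            # T_1^{-1}
    for k in range(1, len(D)):
        nprev = D[k - 1].shape[0]
        Ak = A[k - 1]
        # Z_k = [0 ... 0 A_k], so Z_k T_{k-1}^{-1} only needs the last
        # n_{k-1} rows of T_{k-1}^{-1}.
        ZT = Ak @ Tinv[-nprev:, :]                        # Z_k T_{k-1}^{-1}
        S = np.linalg.inv(D[k] - ZT[:, -nprev:] @ Ak.T)   # S_k (inverse Schur complement)
        TZtS = ZT.T @ S                                   # T_{k-1}^{-1} Z_k^T S_k
        Tinv = np.block([[Tinv + TZtS @ ZT, -TZtS],
                         [-TZtS.T, S]])
    return Tinv

# quick check against direct inversion, with blocks of different sizes
rng = np.random.default_rng(0)
sizes = [2, 3, 1, 4]
D = [np.eye(n) * 10 + rng.standard_normal((n, n)) for n in sizes]
D = [(M + M.T) / 2 for M in D]                            # symmetric diagonal blocks
A = [rng.standard_normal((sizes[k + 1], sizes[k])) for k in range(len(sizes) - 1)]

N = sum(sizes)
off = np.cumsum([0] + sizes)
full = np.zeros((N, N))
for k, n in enumerate(sizes):
    full[off[k]:off[k + 1], off[k]:off[k + 1]] = D[k]
for k in range(len(sizes) - 1):
    full[off[k + 1]:off[k + 2], off[k]:off[k + 1]] = A[k]
    full[off[k]:off[k + 1], off[k + 1]:off[k + 2]] = A[k].T

print(np.allclose(block_tridiag_inverse(D, A), np.linalg.inv(full)))  # expect True

Note also that if only the last diagonal block of $A^{-1}$ is of interest (as in the question), the recursion collapses to $S_1 = D_1^{-1}$ and $S_k = (D_k - A_kS_{k-1}A_k^{\top})^{-1}$, since the bottom-right block of $T_{k-1}^{-1}$ is exactly $S_{k-1}$.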


integration - Transformation to $logleft(frac a2cos xright)$




Question: How does$$T=-\int\limits_{\tfrac {\pi}2}^{-\tfrac {\pi}2}dx\,\frac {\log\left(\tfrac {a^2}4\cos^2x\right)}{\sqrt{\tfrac {a^2}4\cos^2x}}\left(\frac a2\cos x\right)=4\int\limits_0^{\tfrac {\pi}2}dx\,\log\left(\frac a2\cos x\right)\tag1$$





I was evaluating an integral, namely$$I=\int\limits_0^adx\,\frac {\log x}{\sqrt{ax-x^2}}$$And I'm having trouble seeing how to get from the left-hand side to the right-hand side. Don't the denominator and numerator cancel to leave you with$$T=-\int\limits_{\tfrac {\pi}2}^{-\tfrac {\pi}2}dx\,\log\left(\frac {a^2}4\cos^2x\right)=-2\int\limits_{\tfrac {\pi}2}^{-\tfrac {\pi}2}dx\,\log\left(\frac a2\cos x\right)?$$But I don't see how that can be substituted to get the second expression of $(1)$. What kind of substitution should be made to make the transformation from where I left off to $(1)$?


Answer



First, $\cos(x)\ge 0$ and is even for $x\in[-\pi/2,\pi/2]$. Hence, $\sqrt{\cos^2(x)}=\cos(x)$. If $a>0$, then $\sqrt{a^2}=a$. Therefore,



$$\frac{\log\left(\frac{a^2}{4}\cos^2(x)\right)\left(\frac a2\cos(x)\right)}{\sqrt{\frac{a^2}{4}\cos^2(x)}}=\frac{\log\left(\left(\frac{a\cos(x)}{2}\right)^2\right)\left(\frac a2\cos(x)\right)}{\frac a2\cos(x)}=2\log\left(\frac{a\cos(x)}{2}\right)$$



Finally, exploiting the evenness of the cosine function, we find



$$\begin{align}
\int_{-\pi/2}^{\pi/2}\frac{\log\left(\frac{a^2}{4}\cos^2(x)\right)\left(\frac a2\cos(x)\right)}{\sqrt{\frac{a^2}{4}\cos^2(x)}}\,dx&=\int_{-\pi/2}^{\pi/2}2\log\left(\frac{a\cos(x)}{2}\right)\,dx\\\\

&=4\int_0^{\pi/2}\log\left(\frac{a\cos(x)}{2}\right)\,dx
\end{align}$$



as expected!
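As a small numerical sanity check of my own (not part of the answer), the pointwise simplification used above can be verified at a few sample points for an arbitrary $a>0$, say $a=3$:

import numpy as np

a = 3.0                                                   # arbitrary positive parameter
xs = np.linspace(-np.pi / 2 + 0.1, np.pi / 2 - 0.1, 7)    # stay away from the endpoints

lhs = (np.log((a**2 / 4) * np.cos(xs)**2) * (a / 2) * np.cos(xs)
       / np.sqrt((a**2 / 4) * np.cos(xs)**2))
rhs = 2 * np.log((a / 2) * np.cos(xs))

print(np.allclose(lhs, rhs))                              # expect True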


elementary set theory - Proving the Cantor Pairing Function Bijective

How would you prove the Cantor Pairing Function bijective? I only know how to prove a bijection by showing (1) if $f(x) = f(y)$, then $x=y$, and (2) for every $y$ there exists an $x$ such that $f(x) = y$.


How would you show that for a function like the Cantor pairing function?
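One illustration of my own (not part of the question): the pairing and its explicit inverse can be written down directly, and exhibiting such an inverse is one standard route to proving bijectivity.

import math

def cantor_pair(x, y):
    # maps a pair of non-negative integers to a single non-negative integer
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    # explicit inverse of the pairing function
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w(w+1)/2 <= z
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# round-trip check on a small grid
print(all(cantor_unpair(cantor_pair(x, y)) == (x, y)
          for x in range(50) for y in range(50)))   # expect True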

complex numbers - What did I do wrong? 1 = √1 = √(-1)(-1) = √(-1) √(-1) = i.i = i² = -1

I'm a simple man living my life and enjoying mathematics now and then. Today during lunch my friend asked me about complex numbers and $i$. I told him what I knew and we went back to work.




After work I decided to read up on complex numbers and I somehow ended up with this equation:



$$ 1 = \sqrt 1 = \sqrt{(-1)(-1)} = \sqrt{(-1)} \ \sqrt{(-1)} = i \cdot i = i^2 = -1 $$



Somehow I got that $1 = -1$, but I can't see where the argument goes wrong. Did I just break math? What happened? Where is my mistake?

polynomials - Finding the eigenvalues of $A=left(begin{smallmatrix} a & 1 & 1 \ 1 & a & 1 \ 1 & 1 & a \ end{smallmatrix}right)$



I would like to calculate the eigenvalues of the following matrix $A$, but the factorization of the characteristic polynomial does not seem to be easy to compute.



$A=\pmatrix{
a & 1 & 1 \\
1 & a & 1 \\
1 & 1 & a \\
},\ a\neq1,\ a\neq-2$




$f(\lambda)$ = Char$(A,\lambda)$ = $(a-\lambda)^3-3(a-\lambda)+2 = -\lambda^3 + 3a\lambda^2 + 3\lambda(1-a^2) + (a-1)^2(a+2)$



I have thought about using the Rational Root Theorem (RRT), so possible roots of $f(\lambda)$ are $(a-1),\ (-a+1),\ (a+2),\ (-a-2)$, and many more; for example, in the case $a=2$ we should also test whether $f(\pm2)=0$ or not. Am I wrong?



The eigenvalues of $A$ are $a-1$ and $a+2$ (computed with Wolfram Alpha). This result can be obtained using RRT, by computing $f(a-1)$ and $f(a+2)$ and observing that both are equal to zero; but is there an easier and more elegant way to find these eigenvalues?


Answer



Basically, you need to solve $(a-\lambda)^3-3(a-\lambda)+2 =0$ for $\lambda$. Don't expand the brackets, instead denote: $t=a-\lambda$. Then:
$$t^3-3t+2=0 \Rightarrow (t-1)^2(t+2)=0 \Rightarrow \\
t_1=1 \Rightarrow a-\lambda =1 \Rightarrow \lambda_1 =a-1\\

t_2=-2\Rightarrow a-\lambda =-2 \Rightarrow \lambda_2=a+2.$$
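A quick numerical spot check of my own for one arbitrary value, say $a=3$, where the answer predicts the eigenvalues $a-1=2$ (twice) and $a+2=5$:

import numpy as np

a = 3.0
A = np.array([[a, 1, 1],
              [1, a, 1],
              [1, 1, a]])
print(np.linalg.eigvalsh(A))   # expect approximately [2., 2., 5.]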


Does the Induction theorem fail here at Euler's conjecture?



I read in a book written by Raymond A. Barnett and Michael R. Ziegler about the way to prove conjectures for infinitely many members of a given set, namely mathematical induction. When I read about induction, I found it very interesting. The way to prove propositions using the induction theorem looks like this:



Step 1: Base case, i.e. prove the statement for $n=1$.




Step 2: Inductive step, i.e. assume the proposition is true for an arbitrary number $k$ and then prove it for $k+1$.



Then on another page I found a conjecture attributed to Euler: for each positive integer $n$, the number $n^2 - n + 41$ is prime. When I tried to prove it by the induction method stated above, it seemed to come out true for each positive integer; however, Euler's proposition fails when I put $n=41$. Is there any other method to check Euler's proposition, one that could reveal that it is not true for all $n$?


Answer



You can certainly show it is true for the base case $n=1$ which gives $41$ which is indeed prime.



You then assume that it is true for $n=k$ and get:$$k^2-k+41=p_k$$where $p_k$ is some prime number.



If we now use the inductive step and try to prove this is also true for $n=k+1$ then we get:$$(k+1)^2-(k+1)+41=k^2+2k+1-k-1+41=(k^2-k+41)+2k$$$$=p_k+2k$$Now how did you conclude that $p_k+2k$ is also prime?
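A quick brute-force check of my own (not part of the answer) makes the failure at $n=41$ concrete:

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(1, 50):
    value = n**2 - n + 41
    if not is_prime(value):
        print("first composite value at n =", n, "->", value)   # 1681 = 41 * 41
        break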



Tuesday, July 17, 2018

probability - throwing a dice repeatedly so that each side appear once.

Pratt is given a fair die. He repeatedly throws the die until he has seen each number (1 to 6) at least once.



Define the random variable $X$ to be the total number of times Pratt throws the die, i.e. how many throws he needs so that each side of the die appears at least once. Determine the expected value $E(X)$.
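No answer was recorded here. As a hedged illustration of my own, a Monte Carlo simulation can be compared against the standard coupon-collector expectation $6\left(1+\tfrac12+\tfrac13+\tfrac14+\tfrac15+\tfrac16\right)$:

import random

def throws_until_all_faces(rng):
    seen = set()
    count = 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        count += 1
    return count

rng = random.Random(0)
trials = 200_000
estimate = sum(throws_until_all_faces(rng) for _ in range(trials)) / trials

exact = 6 * sum(1 / k for k in range(1, 7))   # coupon-collector formula
print(estimate, exact)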

real analysis - Prove that sequence defined by $a_1 = sqrt{2}, a_{n+1} = sqrt{2 + sqrt{a_n}}$ is convergent or increasing?




Prove that the following sequence defined by the given recurrence relation is convergent:




\begin{align*}
a_1 &= \sqrt{2} \\
a_{n+1} &= \sqrt{2 + \sqrt{a_n}} \\
\end{align*}




First, I can prove that the sequence is bounded such that for all $n$, $\sqrt{2} \le a_n \le 2$.



If I can prove that the sequence is monotonically increasing, then textbook theorems guarantee that bounded monotone sequences must converge.




Is there an easy way to show that this sequence is increasing?



I can solve for the fixed point, which I presume is the limit, at approximately $1.8311772$. I can casually see that $a_{n+1} > a_n$ for any $a_n$ less than this fixed-point value, but that doesn't quite prove monotonicity, and it doesn't rule out the possibility of the sequence oscillating or cycling around the fixed point.



UPDATE: The linked question features the exact same sequence and recurrence relation, but is asking a very different question. I asked how to prove that the sequence is increasing, while the linked answer asked to find the value of convergence. I found that question quite simple, and it looks like the person who posted the linked question found my question easy. greedoid's answer completely solved my problem. So, from my perspective, I'm finished, this can be archived for future searches or deleted/closed. But it's definitely not the same question as the linked question.


Answer



Prove by induction that $a_{n} - a_{n-1}>0$ for all $n\ge 2$.



Base case: $a_2=\sqrt{2+\sqrt{\sqrt 2}}>\sqrt 2=a_1$. Induction step:



$$ a_{n+1}^2 - a_{n}^2 = \sqrt{a_n}-\sqrt{a_{n-1}}>0$$



by the induction hypothesis $a_n>a_{n-1}>0$. Since $$\underbrace{(a_{n+1} + a_{n})}_{>0}(a_{n+1} - a_{n})>0,$$



it follows that $a_{n+1}>a_n$, and we are done.
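A short numerical iteration (my own addition) illustrates the monotone increase toward the fixed point near $1.8311772$:

import math

a = math.sqrt(2)
for n in range(1, 11):
    print(n, a)
    a = math.sqrt(2 + math.sqrt(a))   # a_{n+1} = sqrt(2 + sqrt(a_n))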


Monday, July 16, 2018

diophantine equations - How does modular arithmetic work - Fermat's last theorem near misses?



This question about Fermat's Last Theorem near misses uses modular arithmetic (converting all the terms to mod 100) to show why $$3987^{12}+4365^{12}\neq4472^{12}.$$



I've only just come across modular arithmetic, and the method looks really neat. But I'm puzzled as to why this works. In other words, I think I'm asking if an equation is true in modular form, how do we know it's true in ordinary integer form? Apologies in advance if this is a really dumb question.


Answer



Since you're new to modular arithmetic, here is a very simple explanation using a couple of examples.



There are three types of modular arithmetic you are already familiar with from grade school: mod 10, mod 5, and mod 2.




Mod 2 just refers to even numbers and odd numbers. Even numbers are those which are "equal to" (actually "congruent to") 0 (mod 2). Odd numbers are those which are congruent to 1 (mod 2).



For mod 5, discard all but the last digit of a number, then (if it is greater than 4), subtract 5 from the remaining digit.



For mod 10, just take the last digit of the number.






Consider the following equation. Is it true?




$5723784602 + 2893649283 = 8617433887$



Using modular arithmetic (specifically, mod 10) you can discard all but the final digit of each number, and state:



$2 + 3 \equiv 7 \mod 10$



This is obviously false. Therefore, the original equation is false.







How about the following equation?



$234343 \times 23845 = 5587908832$



Using the rules that you were probably taught in Grade School, if you take any number that ends in a five and multiply it by anything, the product must end in either a five or a zero. Therefore, this is false.



We can state this with modular arithmetic as follows:



$3 \times 0 \equiv 2 \mod 5$




Obviously this is false. Anything times zero must equal zero.






However, the reverse approach doesn't work:



$29834620934 + 293840239843 = 17$



If we check this with modular arithmetic (mod 10), we get:




$4 + 3 \equiv 7 \mod 10$



This is true, but the original equation is false.






In summary: You can use modular arithmetic to prove an equation false.



You can't use it (in this simplistic form) to prove an equation true.
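For concreteness (my own addition, not part of the answer), the near miss from the question can be ruled out with exactly this kind of residue check, here mod 100 using Python's built-in modular exponentiation:

lhs = (pow(3987, 12, 100) + pow(4365, 12, 100)) % 100
rhs = pow(4472, 12, 100)
print(lhs, rhs, lhs == rhs)   # the residues differ, so the original equation cannot hold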


An interesting fact about the number 123456789 and its generalization in arbitrary base

The number $(12\ldots(b-1))$ in base $b$ has the property that when multiplied by any integer $1\le k\le b-1$ which is coprime to $b-1$, its digits are permuted. Why?



For example in base 10,
\begin{eqnarray}
123456789&*2=& 246913578\\
&*4=& 493827156\\
&*5=& 617283945\\
&*7=& 864197523\\
&*8=& 987654312

\end{eqnarray}
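A small check of the base-10 case (my own addition; the multipliers $1\le k\le 9$ coprime to $9$ are $1,2,4,5,7,8$):

from math import gcd

n = 123456789
for k in range(1, 10):
    if gcd(k, 9) == 1:
        print(k, n * k, sorted(str(n * k)) == sorted(str(n)))   # expect True each time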

sequences and series - Convergence of $sum_{n=1}^{infty} sum_{m=1}^{infty} frac{sin(n-m)}{n^2+m^2}$



Is the following series convergent or divergent?



$$\displaystyle{ \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} \frac{\sin(n-m)}{n^2+m^2}}$$



Even if it converges, I do not know how to prove it. However, I know, for example, that
$$\sum_n \frac{\sin n}{n}$$
converges. Then why shouldn't this double series converge as well?




Thanks in advance.


Answer



For fixed $n$, let $S(m,n) = \sum_{k=1}^{m}\sin (n-k).$ Then there exists $M$ such that $|S(m,n)|\le M$ for all $m,n.$ Summing by parts for fixed $n$ (and writing $S_m$ for $S(m,n)$) we find



$$\sum_{m=1}^{\infty}\frac{\sin(n-m)}{m^2+n^2} = \sum_{m=1}^{\infty}S_m[\frac{1}{m^2+n^2}- \frac{1}{(m+1)^2+n^2}]= \sum_{m=1}^{\infty}S_m\frac{2m+1}{(m^2+n^2)((m+1)^2 + n^2)}.$$



Slap absolute values everywhere in this last sum to see its absolute value is less than



$$M\sum_{m=1}^{\infty}\frac{3m}{(m^2+n^2)^2}
=\frac{3M}{n^2}\sum_{m=1}^{\infty}\frac{m/n}{((m/n)^2 + 1)^2}\frac{1}{n}.$$




As $n\to \infty,$ the last sum on $m$ tends to



$$\int_0^\infty \frac{x}{(x^2+1)^2}dx < \infty.$$ It follows that there is a positive constant $C$ such that



$$\left |\sum_{m=1}^{\infty}\frac{\sin(n-m)}{m^2+n^2} \right | \le \frac{C}{n^2}$$



for all $n.$ That implies the original double sum is convergent.


linear algebra - If row spaces of two $mtimes n$ matrices are equal, then matrices are row-equivalent



I'm trying to prove that if two $m\times n$ matrices $A$ and $B$ over a field $F$ have the same row space, then they are row-equivalent.



Intuitively, it makes sense. Indeed, if $\mathrm{row}(A) = \mathrm{row}(B)$, then each row $[a_{i1}, ..., a_{in}]$ of $A$ is generated by the rows of $B$ and vice versa; that is, for each $i = 1,...,m$ we have

$$[a_{i1},...,a_{in}] = c_{i1}[b_{i_11},...,b_{i_1n}] + ... + c_{it_i}[b_{i_{t_i}1},...,b_{i_{t_i}n}]$$
and
$$[b_{i1},...,b_{in}] = d_{i1}[a_{i'_11},...,a_{i'_1n}] + ... + d_{ip_i}[a_{i'_{p_i}1},...,a_{i'_{p_i}n}].$$



This seems similar to the condition of an elementary operation being applied to a row, but I can't seem to make the final step to make it rigorous. One idea was to prove by induction that if $A_k$ is equivalent to $A$, then $A_{k+1}$ is equivalent to $B$, where $(A_k)_{ij}$ is equal to $b_{ij}$ if $i < k$ and to $a_{ij}$ otherwise, but I have a feeling that this is wrong.



Edit: This question has been marked as a duplicate of another question for some reason. However, that question asked how to prove that two row-equivalent matrices have the same row space, while my question asks how to prove the converse.


Answer



First of all, elementary row operations can be realized as multiplication by elementary matrices, that is, matrices differing from the identity by an elementary row operation. Such matrices are invertible.




Also, elementary row operations don't change the row space.



If you perform Gaussian elimination on $A$, the matrix in row echelon form you get has $k$ nonzero rows and the other are zero (at the bottom), where $k$ is the dimension of the row space. This echelon form $U$ is row equivalent to $A$ and $U=SA$, for some invertible matrix $S$.



Similarly, you can get a row echelon form $V=TB$ from $B$. So it's not restrictive to assume that $A$ and $B$ are in row echelon form: indeed, if $V=LU$ for an invertible matrix $L$, then $TB=LSA$ and $B=T^{-1}LSA$.



Note also that the nonzero rows of a matrix in row echelon form are linearly independent.



Since $A$ and $B$ have the same row space, they have the same number of nonzero rows. If $a_1,a_2,\dots,a_k$ and $b_1,b_2,\dots,b_k$ are the nonzero rows in $A$ and $B$, then
$$

a_i=\sum_{j=1}^kc_{ij}b_j
$$

and the $k\times k$ matrix $C=[c_{ij}]$ is invertible (why?). Now consider the invertible block matrix
$$
L=\begin{bmatrix} C & 0 \\ 0 & I_{m-k} \end{bmatrix}
$$

(the bottom right corner is the identity matrix; if $k=m$, just consider $L=C$).



Compute $LB$.


Computing $gcd$ of very large numbers with powers

How to calculate $\gcd$ of $5^{2^{303} - 1} - 1$ and $5^{2^{309} - 1} - 1$?



I stumbled upon this interesting problem and tried elementary algebraic simplification and manipulation, but found no success.
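No answer was recorded here. As a hedged hint of my own: the standard identity $\gcd(a^m-1,\,a^n-1)=a^{\gcd(m,n)}-1$ (for an integer $a\ge2$) reduces the problem to computing $\gcd(2^{303}-1,\,2^{309}-1)$, which by the same identity equals $2^{\gcd(303,309)}-1=2^3-1=7$. A quick check of the identity on small cases:

from math import gcd

a = 5
ok = all(gcd(a**m - 1, a**n - 1) == a**gcd(m, n) - 1
         for m in range(1, 20) for n in range(1, 20))
print(ok)   # expect True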

real analysis - Prove that $lim_{n to infty} frac{n}{(n!)^frac{1}{n}} = e $



I have solved the problem in just 2 lines using a theorem which asserts that



"Let ${u_n}$ be a real sequence such that $u_n > 0 \forall n \in \mathbb{N}$ and $\lim_{n \to \infty} \frac{u_{n+1}}{u_n} = \ell$ (finite of infinite). Then $\lim_{n \to \infty} (u_n)^{1 \over n} =\ell$ "


To prove the aforesaid limit, I fix $u_n={n^n \over n!}$. Then $u_n>0 \forall n \in \mathbb{N}$ and $\lim_{n \to \infty}\frac{u_{n+1}}{u_n}= \lim_{n \to \infty}(1+{1 \over n})^n=e$.


Then it follows from the above theorem that $\lim_{n \to \infty} (u_n)^{1 \over n} =e$ i.e. $ \lim_{n \to \infty} \frac{n}{(n!)^\frac{1}{n}} = e $. (Proved)


But I am trying to prove it without using the theorem. I am trying to get a generic proof.


Can anyone provide a proper way out for this?


Thanks for your help in advance.


Answer



EDIT: As pointed out in the comments, even though the final inequality is correct, it is insufficient since $(n+1)^{1/n} \to 1$ as $n \to \infty$. The lower bound can be obtained as shown in @adfriedman's answer.


Here's my take on it: Whenever $n \geq 3$, we have $$ n^n \geq (n+1)!, $$ and thus $$ n^n \geq (n+1)n! \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \geq (n+1)^{\frac{1}{n}}. $$ On the other hand, the Taylor expansion of $e^n$ gives $$ \frac{n^n}{n!} \leq \sum_{k=0}^{\infty} \frac{n^k}{k!} = e^n \quad \Leftrightarrow \quad \frac{n}{n!^{\frac{1}{n}}} \leq e. $$ So, $$ (n+1)^{\frac{1}{n}} \leq \frac{n}{n!^{\frac{1}{n}}} \leq e. $$ Apply the Squeeze Theorem.
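As a numerical illustration of my own, the ratio $n/(n!)^{1/n}$ can be evaluated for large $n$ via $\log\Gamma$ to avoid overflow:

import math

for n in [10, 100, 10_000, 1_000_000]:
    ratio = n / math.exp(math.lgamma(n + 1) / n)   # n / (n!)^(1/n)
    print(n, ratio)

print("e =", math.e)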


algebra precalculus - In how many days $Y$ alone finish the work

$X$ and $Y$ do a piece of work in $30$ days .They work together for $6$ days and then $X$ quits and $Y$ finishes the work in $32$ more days.In how many days would $Y$ be able to finish the work alone?



note: $X+Y=\frac1{30}$ (together they complete $\frac1{30}$ of the work per day),
so in $6$ days they finish $\frac{6}{30}=\frac15$ of the work;
the remaining $1-\frac15=\frac45$ is done by $Y$ alone in $32$ more days,
so $Y$ alone would take $\frac54\cdot32=40$ days.
Is this correct?

Sunday, July 15, 2018

polynomials - Using induction for $x^n - 1$ is divisible by $x - 1$

Prove using induction that for all non-negative integers n and for all integers $ x > 1 $, $ x^n - 1 $ is divisible by $ x - 1 $.




Step 1: We will prove this using induction on n.
Step 2: Assume the claim is true when $ n = 1 $.
$$ x^{n+1} - 1 = x(x^n - 1) + (x - 1) $$



Can someone help me with this further?
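Since no answer was recorded here, a brief sketch of my own for finishing from the displayed identity (taking $n=0$ as the base case, since $n$ ranges over the non-negative integers):

Base case: for $n=0$ we have $x^0-1=0$, which is divisible by $x-1$. Inductive step: assume $(x-1)\mid(x^k-1)$. Then
$$x^{k+1}-1 = x(x^k-1)+(x-1),$$
and $x-1$ divides both terms on the right-hand side, hence $(x-1)\mid(x^{k+1}-1)$.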

Saturday, July 14, 2018

measure theory - If $mu(A)





Let $(X,\Sigma,\mu)$ be a finite measure space. Let $\mathcal{F}$ be a family of measurable functions $f:X\to\mathbb{R}$. Prove that if $$\lim_{t\to\infty}\left(\sup_{f\in\mathcal{F}}\int_{\{x\in X:|f(x)|\ge t\}}|f|d\mu \right)=0,$$



then $$\sup_{f\in\mathcal{F}}\int_X|f|d\mu<\infty,$$



and for all $\epsilon >0$ there exists $\delta >0$ such that: $$A\in\Sigma,\mu(A)<\delta\Longrightarrow \sup_{f\in\mathcal{F}}\int_A|f|d\mu<\epsilon.$$




For the first part.




Let $t>0$ such that: $\displaystyle\sup_{f\in\mathcal{F}}\int_{\{x\in X:|f(x)|\ge t\}}|f|d\mu<1$.



Fix $f\in\mathcal{F}$. Then $$\int_X|f|\,d\mu=\int_{\{|f|\ge t\}}|f|\,d\mu+\int_{\{|f|< t\}}|f|\,d\mu\le 1+t\mu(X).$$

And $1+t\mu(X)$ does not depend on $f$, so we get $\sup_{f\in\mathcal{F}}\int_X|f|d\mu<\infty$.



Is that correct?



I don't know how to do the second part. Could it be true that $v(A):=\sup_{f\in\mathcal{F}}\int_A|f|d\mu$ is a finite measure? I wanted to try something similar to that known result when $\mathcal{F}$ is just one function (some call it absolutely continuous of the measure $v$, I think).




Any hint? Thank you.


Answer



Your first part is correct.



For the second part, try to bound $\int_A |f|\,d\mu$ similarly to how you bounded $\int_X |f|\,d\mu$ in the first part:



$$\int_A |f|\,d\mu=\int_{A\cap \{|f|\ge t\}}|f|\,d\mu+\int_{A\cap\{|f|< t\}}|f|\,d\mu\le\int_{\{|f|\ge t\}}|f|\,d\mu+t\mu(A).$$

Then try to make the right side as small as possible, by choosing $t$ and $\delta$ appropriately.


linear algebra - Why do we need each elementary matrix to be invertible?




I've been presented with a proof that, given $Ax=b$, elementary row operations can be seen as multiplication by certain special matrices $E_i$. We can then prove that applying the same sequence of matrix products to the identity yields the inverse. Suppose that $A$ is invertible:



\begin{eqnarray*}
{AB}&=&{I} \\
{E_n\dots E_2E_1AB}&=&{E_n\dots E_2E_1I} \\
{IB}&=&{E_n\dots E_2E_1I} \\
\end{eqnarray*}



For each $E_i$, there is the requirement that it be invertible. Why is that needed? My guess is that if some $E_i$ were not invertible, then we could go back to a different $A$, and then, applying the $E_i$'s again, we could arrive at a different $B$?



Answer



If we can use elementary matrices to start from $A$ and find $B=A^{-1}$, we should be able to reverse the process starting with $B$ and finding $A=B^{-1}$. The reverse of each step in the process is just applying the inverse elementary matrix. If an elementary matrix is not invertible, then we cannot reverse the step.



Another reason that each elementary matrix must be invertible is that the determinant of a noninvertible matrix is zero, whereas invertible matrices have nonzero determinant. Therefore, if even one of the $E_i$ were not invertible, then $$\det(B)=\det(E_n...E_1 I)=\det(E_n)...\det(E_1)\det(I)=0.$$ Thus $B$ would not be invertible. But we know that $B^{-1}=A$; that is a contradiction.


linear algebra - Looking for a proof for the convergence of matrix geometric series


Consider a symmetric matrix $A$ with non-negative integer coefficients. It appears that the geometric series $\sum_{i \geq 0}A^i$ will converge to a matrix $B$ if the spectral radius (the largest eigenvalue in absolute value) is less than 1.


I would like to extend this result to a case where the matrix doesn't have integer coefficients but generating functions (with real coefficients) instead and to do so, I'd be interested in having a closer look at the proof of the standard case.


The questions I'm trying to figure out are:


1) Is there a standard proof, in the general case of a ring with a norm, that the geometric series has radius of convergence 1? Or what is the most obvious proof in the specific case of symmetric non-negative matrices?


2) what is the good equivalent of the case of matrices with integer coefficients that aren't necessarily symmetric. I'm quite interested in keeping the link to the eigenvalues and not in other matrix norms.


3) when dealing with matrices of generating functions (on one variable for a start), can I think of the eigenvalues as generating functions as well? Would that be the good way to think of this result on geometric series?


I'll keep posting as I work on all this (but I needed a place to write things down and possibly get some help or hints).


Answer




In any Banach algebra with identity over $\mathbb C$, the spectral radius of $x$ is $\rho(x) = \lim_{n \to \infty} \|x^n \|^{1/n}$. If $|t| \rho(x) < 1$, the geometric series $\sum_{n=0}^\infty t^n x^n$ converges to $(1 - t x)^{-1}$. This is standard material that you can find in any text that discusses Banach algebras.
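A small numerical illustration of my own for the matrix case: when the spectral radius is below 1, the partial sums of the geometric series approach $(I-A)^{-1}$.

import numpy as np

rng = np.random.default_rng(1)
M = rng.random((4, 4))
M = (M + M.T) / 2                                   # symmetric, non-negative entries
A = 0.9 * M / max(abs(np.linalg.eigvalsh(M)))       # rescale so the spectral radius is 0.9

partial = np.zeros_like(A)
power = np.eye(4)
for _ in range(200):                                # sum of A^i for i = 0, ..., 199
    partial += power
    power = power @ A

print(np.allclose(partial, np.linalg.inv(np.eye(4) - A)))   # expect True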


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...