Wednesday, January 31, 2018

elementary number theory - How to calculate $5^{3^{1000}}\bmod 101$?




I've just started a cryptography course, so I am not at all experienced in calculating with such big numbers. Clearly, I can't use a calculator, because the number is too big, so I have to calculate it by hand.



Since $101$ is a prime number, I think I should use Fermat's little theorem here. I found an example and tried to solve it this way, but I am not at all sure whether it is correct or whether this is the right approach.



Calculate $5^{3^{1000}}\bmod 101$.



First of all I think I should calculate $3^{1000}\bmod 101$. From Fermat's little theorem I get $3^{100}\equiv 1\bmod 101$.



Thus $1000=x\cdot 100+0$ with $x=10$.




$3^{1000}\equiv 3^{999^{10}} = 1 ^{10} \equiv 102\bmod 101 $



Later I have to calculate $5^{102}\bmod 101$.
Again by Fermat, $5^{100}\equiv 1\bmod 101$.



$$102=100\cdot 1 +2$$



Here I am not sure how to move on... I think that my solution is wrong, but I'd love to see your suggestions and remarks on how to solve the problem.
Thank you very much in advance!


Answer




By Fermat's little theorem we know that $5^{100} \equiv 1 \bmod 101$.



What exactly does this tell us? It tells us that the powers of $5$ when reduced mod $101$ repeat every $100$ times.



So to find out what $5^{3^{1000}}$ is mod $101$ we really need to find out what $3^{1000}$ is mod $100$.



You can use the generalisation of Fermat's little theorem (Euler's theorem) mentioned in another answer to see that $3^{40} \equiv 1 \bmod 100$, so that $3^{1000} = (3^{40})^{25} \equiv 1^{25} \equiv 1 \bmod 100$.



Alternatively you can do it by little step by step calculations.




Either way we find that $5^{3^{1000}} \equiv 5 \bmod 101$.
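If you want to sanity-check the result numerically, here is a quick sketch in Python (not part of the hand computation; the three-argument pow does modular exponentiation):

    # Reduce the exponent 3^1000 modulo 100 first (the period given by Fermat),
    # then raise 5 to that reduced exponent modulo 101.
    e = pow(3, 1000, 100)        # 3^1000 mod 100
    print(e)                     # 1
    print(pow(5, e, 101))        # 5^(3^1000 mod 100) mod 101  ->  5

    # Python integers are arbitrary precision, so a direct check also works:
    print(pow(5, 3**1000, 101))  # 5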


Tuesday, January 30, 2018

real analysis - Characterizing discontinuous derivatives



Apparently the set of discontinuity of derivatives is weird in its own sense. Following are the examples that I know so far:



$1.$ $$g(x)=\left\{ \begin{array}{ll}
x^2 \sin(\frac{1}{x}) & x \in (0,1] \\
0 & x=0 \end{array}\right.$$
$g'$ is discontinuous at $x=0$.



$2.$ The Volterra function defined on the ternary Cantor set is differentiable everywhere, but the derivative is discontinuous on the whole of the Cantor set, that is, on a nowhere dense set of measure zero.




$3.$ The Volterra function defined on the Fat Cantor set is differentiable everywhere, but the derivative is discontinuous on the whole of the Fat Cantor set, that is, on a set of positive measure, but not of full measure.



$4.$ I am yet to find a derivative which is discontinuous on a set of full measure.



Some good discussion about this can be found here and here.



Questions:



1. What are some examples of functions whose derivative is discontinuous on a dense set of zero measure, say on the rationals?




2. What are some examples of functions whose derivative is discontinuous on a dense set of positive measure, say on the irrationals?



Update: One can find a function which is differentiable everywhere but whose derivative is discontinuous on a dense set of zero measure here.


Answer



There is no everywhere differentiable function $f$ on $[0,1]$ such that $f'$ is discontinuous at each irrational there. That's because $f',$ being the everywhere pointwise limit of continuous functions, is continuous on a dense $G_\delta$ subset of $[0,1].$ This is a result of Baire. Thus $f'$ can't be continuous only on a subset of the rationals, a set of the first category.



But there is a differentiable function whose derivative is discontinuous on a set of full measure.



Proof: For every Cantor set $K\subset [0,1]$ there is a "Volterra function" $f$ relative to $K,$ which for the purpose at hand means a differentiable function $f$ on $[0,1]$ such that i)$|f|\le 1$ on $[0,1],$ ii) $|f'|\le 1$ on $[0,1],$ iii) $f'$ is continuous on $[0,1]\setminus K,$ iv) $f'$ is discontinuous at each point of $K.$




Now we can choose disjoint Cantor sets $K_n \subset [0,1]$ such that $\sum_n m(K_n) = 1.$ For each $n$ we choose a Volterra function $f_n$ as above. Then define



$$F=\sum_{n=1}^{\infty} \frac{f_n}{2^n}.$$



$F$ is well defined by this series, and is differentiable on $[0,1].$ That's because each summand above is differentiable there, and the sum of derivatives converges uniformly on $[0,1].$ So we have



$$F'(x) = \sum_{n=1}^{\infty} \frac{f_n'(x)}{2^n}\,\, \text { for each } x\in [0,1].$$



Let $x_0\in \cup K_n.$ Then $x_0$ is in some $K_{n_0}.$ We can write




$$F'(x) = \frac{f_{n_0}'(x)}{2^{n_0}} + \sum_{n\ne n_0}\frac{f_n'(x)}{2^n}.$$



Now the sum on the right is continuous at $x_0,$ being the uniform limit of functions continuous at $x_0.$ But $f_{n_0}'/2^{n_0}$ is not continuous at $x_0.$ This shows $F'$ is not continuous at $x_0.$ Since $x_0$ was an arbitrary point in $\cup K_n,$ $F'$ is discontinuous on a set of full measure as desired.


limits - $\lim\limits_{n\to \infty}\frac{n^k}{2^n}$ without l'Hôpital





Let $\varepsilon >0$. Let $N>?$. For any integer $n>N$ we have
$$\frac{n^k}{2^n}<\varepsilon.$$
I don't know how to proceed sensibly from here.



I'd say we have to start with



$$<\frac{n^k}{2^N}$$



But what do we do here about the $n^k$?




Remember, I don't want to use l'Hôpital or limit laws that are unproven for me, like "exponentials grow faster than powers".



Also, I don't want to use hidden l'Hôpital, i.e. arguing with derivatives, since we haven't even proven the derivative laws.


Answer



Let's take the logarithm. One gets



$$\log{n^k\over 2^n}=k\log{n}-n\log{2}=-n\left(\log{2}-k{\log{n}\over n}\right)$$



Now when $n\to\infty$ one has $\log{n}/n\to 0$ and so




$$\lim_{n\to\infty}\log{n^k\over 2^n}=-\infty$$



And so



$$\lim_{n\to\infty}{n^k\over 2^n}=0$$
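Purely as a numerical illustration of this argument (a small sketch, not part of the proof), one can watch $\log n/n$ and the original ratio shrink together:

    import math

    # log(n^k / 2^n) = k*log(n) - n*log(2); the -n*log(2) term dominates
    # because log(n)/n -> 0, so the ratio itself collapses to 0.
    k = 5
    for n in (10, 100, 1000):
        print(n, math.log(n) / n, math.exp(k * math.log(n) - n * math.log(2)))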


Monday, January 29, 2018

calculus - How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?


How can one prove the statement $$\lim_{x\to 0}\frac{\sin x}x=1$$ without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution.


This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.


Answer



[Figure: unit-circle diagram showing $\triangle ABC$, the colored circular wedge, and $\triangle ABD$.]



The area of $\triangle ABC$ is $\frac{1}{2}\sin(x)$. The area of the colored wedge is $\frac{1}{2}x$, and the area of $\triangle ABD$ is $\frac{1}{2}\tan(x)$. By inclusion, we get
$$ \frac{1}{2}\tan(x)\ge\frac{1}{2}x\ge\frac{1}{2}\sin(x)\tag{1} $$
Dividing $(1)$ by $\frac{1}{2}\sin(x)$ and taking reciprocals, we get
$$ \cos(x)\le\frac{\sin(x)}{x}\le1\tag{2} $$
Since $\frac{\sin(x)}{x}$ and $\cos(x)$ are even functions, $(2)$ is valid for any non-zero $x$ between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. Furthermore, since $\cos(x)$ is continuous near $0$ and $\cos(0) = 1$, we get that
$$ \lim_{x\to0}\frac{\sin(x)}{x}=1\tag{3} $$
Also, dividing $(2)$ by $\cos(x)$, we get that
$$ 1\le\frac{\tan(x)}{x}\le\sec(x)\tag{4} $$
Since $\sec(x)$ is continuous near $0$ and $\sec(0) = 1$, we get that
$$ \lim_{x\to0}\frac{\tan(x)}{x}=1\tag{5} $$


trigonometry - Sine series: angle multipliers add to 1


It is known that in a sine series with angles in arithmetic progression (I refer to this question):


$\sum_{k=0}^{n-1}\sin (a+k \cdot d)=\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1) \cdot d}{2}\biggr)$


What if $k$ does not go from $0$ to $n-1$, but its elements are strictly positive rational numbers,

with $0<k_i<1$ and $\sum_{i=1}^{n} k_i=1$,
and $k$ is monotonically increasing?


Is there a way to simplify:


$\sum_{i=1}^{n} \sin (a+k_i \cdot d)$



in a way analogous to the first formula? Maybe using the property that $k$ sums to $1$ and using $\sin(\pi/2)=1$?


e.g.:
$k_i = (0.067,0.133,0.200,0.267,0.333)$ (5 increasing elements between $0$ & $1$ which sum to 1)
$a=90$
$d=40$


Sum $= \sin(90+0.067\cdot40)+\sin(90+0.133\cdot40)+\sin(90+0.200\cdot40)+\sin(90+0.267\cdot40)+\sin(90+0.333\cdot40)$


Answer



There is no way to give this a general form. The definition of $k_i$ is too general. I have been trying to come up with a function that would give $k_i$ while also fitting your terms, but I am having difficulty. This would need to have specific values of $k_i$, and each term of the sum would need to be calculated individually. I did try to change the sum into something else (work below), but it seems that this is only more complicated.
$$\sum_{i=1}^n \sin (a + k_i d)$$
We can separate the sum using the trigonometric addition formula:
$$\sin (\alpha + \beta) = \sin\alpha \cos\beta + \sin\beta \cos\alpha$$
$$\sum_{i=1}^n [\sin (a) \cos (k_i d) + \sin (k_i d) \cos (a)]$$
$$= \sin a \sum_{i=1}^n [\cos(k_i d)] + \cos a \sum_{i=1}^n[\sin(k_i d)]$$
Past this point, there is no general form. You can attempt to use the multiple-angle formulas, but this only seems to complicate things. In conclusion, I believe your best option would be to just use the original sum and calculate it normally. Unless you have a more rigorous definition for $k_i$, there is no general form that will apply.
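For what it's worth, here is a small numerical sketch of the split above, using the example values from the question (I'm assuming the angles are meant in degrees, as $a=90$, $d=40$ suggest):

    import math

    k = [0.067, 0.133, 0.200, 0.267, 0.333]   # increasing, sums to 1
    a, d = 90.0, 40.0

    # Direct evaluation of sum_i sin(a + k_i d).
    direct = sum(math.sin(math.radians(a + ki * d)) for ki in k)

    # Same sum after the addition-formula split:
    # sin(a) * sum_i cos(k_i d) + cos(a) * sum_i sin(k_i d).
    split = (math.sin(math.radians(a)) * sum(math.cos(math.radians(ki * d)) for ki in k)
             + math.cos(math.radians(a)) * sum(math.sin(math.radians(ki * d)) for ki in k))

    print(direct, split)   # both ≈ 4.94; each k_i term still has to be
                           # computed individually, as the answer says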


calculus - How can we compute $\lim_{x\rightarrow 0}\frac{\sin (x)-x+x^3}{x^3}$ and $\lim_{x\rightarrow 0}\frac{e^x-\sin (x)-1}{x^2}$?

Could we compute the limits
$$\lim_{x\rightarrow 0}\frac{\sin (x)-x+x^3}{x^3} \\ \lim_{x\rightarrow 0}\frac{e^x-\sin (x)-1}{x^2}$$
without using the l'Hospital rule and the Taylor expansion?

abstract algebra - Suppose that $\beta$ is a zero of $f(x)=x^4+x+1$ in some field extension $E$ of $\mathbb Z_2$. Write $f(x)$ as a product of linear factors in $E[x]$




Suppose that $\beta$ is a zero of $f(x)=x^4+x+1$ in some field extension $E$ of $\mathbb Z_2$. Write $f(x)$ as a product of linear factors in $E[x]$.



Attempt: In $\mathbb Z_2: \beta^4+\beta+1=0$



Going by the long division method, I was able to factorize $f(x)=x^4+x+1 = (x-\beta)(x^3+\beta x^2+ \beta^2x + \beta^3+1)$.



if $g(x)=x^3+\beta x^2+ \beta^2x + \beta^3+1$, then $g(1)=\beta^2 =g(-1)~;~$



Is there any method other than trial and error for this problem? Please note that this problem is listed in the field extension chapter, even before the chapters on algebraic extensions and finite fields.




Thank you for your help.


Answer



For a problem with small coefficients like this, trial and error works. We have
$$f(x+y) = (x+y)^4 + x + y + 1 = f(x) + f(y) + 1$$ This implies
$$f(x+1) = f(x) + f(1) + 1 = f(x) + 1 + 1 = f(x)$$
So if $x$ is a root of $f$, then $x+1$ is another root of $f$. In particular $\beta+1$ is a root of $f$.



Besides $f(\beta^2) = (f(\beta))^2 = 0$, because the Frobenius is a field automorphism (as Jyrki Lahtonen told you in the comments). It follows that the roots are $\beta$, $\beta+1$, $\beta^2$, $\beta^2+1$.
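A small computational check of this (just a sketch; it represents elements of $E=\mathbb{Z}_2[x]/(x^4+x+1)$ as 4-bit integers, with $\beta$ encoded as $2$):

    # Elements of GF(16) = Z_2[x]/(x^4+x+1) encoded as integers 0..15,
    # bit i holding the coefficient of beta^i (so beta itself is 2).
    MOD = 0b10011          # x^4 + x + 1

    def mul(a, b):
        # carry-less multiplication followed by reduction mod x^4 + x + 1
        r = 0
        for i in range(4):
            if (b >> i) & 1:
                r ^= a << i
        for i in range(7, 3, -1):
            if (r >> i) & 1:
                r ^= MOD << (i - 4)
        return r

    def f(z):              # f(z) = z^4 + z + 1 evaluated in GF(16)
        z2 = mul(z, z)
        return mul(z2, z2) ^ z ^ 1

    beta = 0b0010
    roots = [beta, beta ^ 1, mul(beta, beta), mul(beta, beta) ^ 1]
    print([f(r) for r in roots])   # [0, 0, 0, 0]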


calculus - Limit of a trigonometric integral: $\lim\limits_{x \to \infty} \int_{0}^{\pi} \frac{\sin (t)}{1+\cos^{2}(xt)} \, \mathrm dt$



For all $x \in \mathbb{R}$, let
$$
{\rm f}\left(x\right)
=\int_{0}^{\pi}\frac{\sin\left(t\right)}{1 + \cos^{2}\left(xt\right)}\,{\rm d}t
$$

Compute the limit when $x\rightarrow +\infty$.



My attempt :



I tried the substitution $u=\sin(t)$ (and $u=\cos^2(xt)$) but it seems worse:
$$
\int _{0}^{1}\!{\frac {u}{ \left( 1+ \left( \cos \left( x\arcsin
\left( u \right) \right) \right) ^{2} \right) \sqrt {1-{u}^{2}}}}{du}
$$
I tried to use a subsequence $(x_n)$ which tends to $+\infty$ and use the dominated convergence theorem, but it didn't work either.




Sorry if my attempt doesn't make much sense.



Thank you in advance.


Answer



We can use a generalization of the Riemann-Lebesgue lemma that states that if $f(x)$ is a continuous function on $[a,b]$, where $0 \le a < b$, and $g(x)$ is a continuous $T$-periodic function on $[0, \infty)$, then $$\lim_{n \to \infty}\int_{a}^{b} f(x) g(nx) \, dx = \left(\frac{1}{T} \int_{0}^{T} g(x) \, dx \right) \left(\int_{a}^{b} f(x) \, dx \right).$$



A proof can be found in THIS PAPER.



The proof uses the fact that a primitive of $g(x)$ can be expressed as $$ \int_{0}^{x} g(t) \, dt = \frac{x}{T} \int_{0}^{T} g(t) \ dt + h(x),$$ where $h(x)$ is $T$-periodic function. (Lemma 2.2)




The regular Riemann-Lebesgue lemma is the case where the average value of $g(x)$ is $0$.



Applying the lemma to this particular integral, we get



$$ \lim_{x \to \infty} \int_{0}^{\pi} \frac{\sin t}{1+\cos^{2}(xt)} \ dt = \left( \frac{1}{\pi} \int_{0}^{\pi} \frac{1}{1+\cos^{2} t} \ dt \right) \left( \int_{0}^{\pi} \sin t \ dt \right). $$



The first integral can be evaluated by making the substitution $u = \tan t$.



This particular example actually appears in the paper, so I'll omit the details.




We end up with



$$ \lim_{x \to \infty} \int_{0}^{\pi} \frac{\sin t}{1+\cos^{2}(xt)} \ dt = \frac{1}{\pi} \left(\frac{\pi}{\sqrt{2}} \right) \left(2\right) = \sqrt{2} .$$
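A quick numerical sanity check of this value (a sketch using a trapezoidal sum on a fine grid; the choice $x=400$ and the grid size are arbitrary):

    import numpy as np

    # Approximate f(x) = ∫_0^π sin(t)/(1+cos²(xt)) dt for a large x and
    # compare with the claimed limit sqrt(2).
    x = 400.0
    t = np.linspace(0.0, np.pi, 2_000_001)      # fine enough to resolve the oscillations
    vals = np.sin(t) / (1.0 + np.cos(x * t) ** 2)
    approx = np.sum((vals[:-1] + vals[1:]) * np.diff(t) / 2)
    print(approx, np.sqrt(2.0))                 # ≈ 1.414..., 1.41421...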


calculus - Proper substitution help for integration appreciated.



$$\int\frac{\sqrt{2x - 1}}{2x + 3}dx$$



I'm completely stumped. I feel like my goal should be to get rid of the square root somehow, but I'm not sure how to accomplish that. As it stands, obviously a substitution for $\sqrt{2x - 1}$ just moves it down to the denominator and does nothing close to help, substitutions for $2x - 1$ or $2x + 3$ don't work either. I've tried integration by parts taking the numerator and denominator to be $u$ and $dv$ respectively and vice versa but that seems to make it more complicated.




A trigonometric substitution seems to be my best bet; for example I tried $x = \frac{\sec^2(u)}{2}$, which does indeed get rid of the square root but then leaves me with $$\int\frac{du}{\sec^4(u) + 3\sec^2(u)}$$ which I'm also not sure how to do... Nudges in the right direction appreciated greatly.


Answer



Your first idea was good: if you substitute $u=\sqrt{2x-1}$, you have:



$$\int\frac{\sqrt{2x - 1}}{2x + 3}dx=\int\frac{\sqrt{2x - 1}\sqrt{2x - 1}}{(2x + 3)\sqrt{2x - 1}}dx=\int\frac{u^2}{u^2+4}du$$



Because $du=\frac{dx}{\sqrt{2x-1}}$ and $2x+3=2x-1+4=u^2+4$.



Can you go on from there?
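In case it helps to see it through, one possible continuation (not part of the original answer, just the standard computation) is
$$\int\frac{u^2}{u^2+4}\,du=\int\left(1-\frac{4}{u^2+4}\right)du=u-2\arctan\frac{u}{2}+C=\sqrt{2x-1}-2\arctan\left(\frac{\sqrt{2x-1}}{2}\right)+C.$$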



group theory - Prove that $(n-1)! \equiv -1 \pmod{n}$ iff $n$ is prime [Wilson's Theorem]



How can I show that $(n-1)!$ is congruent to $-1 \pmod{n}$ if and only if $n$ is prime?



Thanks.


Answer



$$n\text{ is prime if }(n-1)! \equiv -1 \pmod n$$




This direction is easy. If $n$ is composite, then there exists $k$ with $k\mid n$ and $1\lt k\lt n$. So $k\mid(n-1)!$, and since $(n-1)! \equiv -1 \pmod n$ we also have $k \mid (n-1)!+1$. This means $k$ needs to divide $1$, a contradiction. So $n$ must be prime (or $1$, but we can eliminate this case by substitution).



$$(n-1)! \equiv -1\text{ if }n\text{ is prime}$$



Wikipedia contains two proofs of this result known as Wilson's theorem. The first proof only uses basic abstract algebra and so should be understandable with a good knowledge of modular arithmetic. Just in case, I prove below that each element $1, 2, ... n-1$ has a unique inverse $\mod n$.



They use the fact that the integers $\bmod\ p$ form a group, and hence that each element $x$ not congruent to $0$ has a multiplicative inverse (a number $y$ such that $xy \equiv 1 \bmod n$).
We show this as follows. Suppose $n \nmid x$, for $n$ prime. From the uniqueness of prime factorisations, $xn$ is the first multiple of $x$, after $0x$, that is divisible by $n$. If we look at the sequence $kx \bmod n$, it cycles and must have cycle length $n$. Therefore, the elements $x, 2x,\dots, nx$ must all be different modulo $n$, and exactly one of them, say $yx$, satisfies $xy \equiv 1 \bmod n$. So every element has a unique inverse (although $1$ and $-1$ are their own inverses).
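If you want to see the statement in action before working through the proofs, here is a quick brute-force check for small $n$ (a sketch only; it is not part of the argument):

    import math

    # (n-1)! mod n equals n-1 (that is, -1) exactly at the primes in this range.
    for n in range(2, 20):
        wilson = math.factorial(n - 1) % n == n - 1
        is_prime = all(n % d for d in range(2, n))
        print(n, wilson, is_prime)   # the two booleans agree for every n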


Sunday, January 28, 2018

Functional equation basic

Let $f$ be any function defined on the set $\Bbb N$. If $f(xy)=f(x)+f(y)$ and $f(2)=9$, then find $f(3)$. The answer for this question is $7$. Please say how to get it.

sequences and series - Is there any geometry behind the Basel problem?



I could find many beautiful and rigorous proofs for Euler's solution to the Basel problem here Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)



Basel problem solution



But I am curious to know whether there are proofs by using geometry.



If anyone has proofs by geometry, please do share it with us.


Answer




Funny you should ask this today. A great video by the YouTuber 3Blue1Brown was just posted today. (Aside: I recommend all his videos.)



The proof is based on the result mentioned by "3 revs" in the MO thread mentioned by user296602 above.


calculus - Limit of a set of fractions

I'm having trouble with this particular exercise in limits, and I just can't seem to find a way to crack it.


I saw a similar exercise online where they used integrals, but it's pretty early in the course so we're only supposed to use basic limit arithmetics and the Squeeze theorem (Oh boy, and I thought it had a bad name in MY language). That said, I already tried the Squeeze theorem and it doesn't work.



$$\lim_{n\to\infty} \left( \frac{1}{1\cdot4}+\frac{1}{4\cdot7}+...+\frac{1}{(3n-2)(3n+1)} \right)$$


What am I missing?


Thanks in advance.

geometry - Can I prove Pythagoras' Theorem using that $\sin^2(\theta)+\cos^2(\theta)=1$?




In any right-angled triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).




The theorem can be written as an equation relating the lengths of the sides $a$, $b$, and $c$, often called the Pythagorean equation:
$$a^2 + b^2 = c^2$$



Can I prove Pythagoras' Theorem in the following way?




Actually, my question is: does it violate any rules of mathematics, or is it alright?
Sorry, it may not be a valid question for this site. But I want to know. Thanks.





Answer



The usual proof of the identity $\cos^2 t+\sin^2 t=1$ uses the Pythagorean Theorem. So a proof of the Pythagorean Theorem by using the identity is not correct.



True, we can define cosine and sine purely "analytically," by power series, or as the solutions of a certain differential equation. Then we can prove $\cos^2 t+\sin^2 t=1$ without any appeal to geometry.




But we still need geometry to link these "analytically" defined functions to sides of right-angled triangles.



Remark: The question is very reasonable. The logical interdependencies between various branches of mathematics are usually not clearly described. This is not necessarily always a bad thing. The underlying ideas of the calculus were for a while quite fuzzy, but calculus was still used effectively to solve problems. Similar remarks can be made about complex numbers.


Saturday, January 27, 2018

calculus - By using the fact that $log(n)


By using the fact that $\log(n)<,evaluate $$\lim_{n\to \infty} n^{1/n}$$


How to use $\log(n)< to evaluate that?


Answer



HINT


Use that


$$\large {n^\frac1n = e^{\frac{\log n}n}}$$


limits - How to evaluate $\lim_{b \to \infty} \left[\frac{b^{t-1}}{e^b}\right]$?


I encountered this limit when I studied the gamma function. This limit arises when integrating the gamma function by parts. The author evaluates it as follows:


  1. $\lim\limits_{b \to \infty} \left[\dfrac{b^{t-1}}{e^b}\right] =-\lim\limits_{b \to \infty} \left\{\dfrac{\text{exp}[(t-1) \ln b]}{\text{exp}(b)}\right\}$

  2. $\lim\limits_{b \to \infty} \left[\dfrac{b^{t-1}}{e^b}\right] =-\lim\limits_{b \to \infty}\{\text{exp}[(t-1) \ln b-b]\}$

  3. $\lim\limits_{b \to \infty} \left[\dfrac{b^{t-1}}{e^b}\right] =-\lim\limits_{b \to \infty} \left\{\text{exp}\left[(t-1)b\left(\dfrac{ \ln b}{b}-1\right)\right]\right\}$

And finally, since by l'Hôpital's rule $$\lim_{b\to\infty}\dfrac{ \ln b}{b} = 0,$$ then $$\lim\limits_{b \to \infty} e^{(t-1)b(0-1)} = \lim\limits_{b \to \infty} e^{-\infty}=0.$$



I cannot make sense of the manipulation between steps 2 and 3. Can someone offer an explanation? Thanks in advance.


Source: https://onlinecourses.science.psu.edu/stat414/node/142


Answer



$$\frac{e^{(t-1)\log b}}{e^b}=e^{(t-1)\log b-b}$$


and now just factor out $\;b\;$ in the exponent:


$$(t-1)\log b-b=b\left((t-1)\frac{\log b}b-1\right)$$


...and there's a mistake, either in your writing or in the book, as the factor $\;(t-1)\;$ does not multiply the whole thing, only $\;\log b\;$ ...


calculus - How to show that $\lim_{n\to \infty } \left(\frac{(1 + \frac{1}{n^2})^{n^2}}{e}\right)^n = 1$?




I need to find the limit:
$$\lim_{n\to \infty } \left(\frac{(1 + \frac{1}{n^2})^{n^2}}{e}\right)^n$$
So I know that the limit is $1$.
Using Squeeze theorem
$$? \leq \left(\frac{(1 + \frac{1}{n^2})^{n^2}}{e}\right)^n \leq \left(\frac{e}{e}\right)^n \rightarrow\ 1 $$
What should go in place of the $?$ Is it possible to solve this in another way?
Unfortunately, I can't use l'Hôpital's rule or series expansions in this task.


Answer



Note that:



$$\left(\frac{(1 + \frac{1}{n^2})^{n^2}}{(1 + \frac{1}{n^2})^{n^2+1}}\right)^n=\left(1 + \frac{1}{n^2}\right)^{-n}\leq \left(\frac{(1 + \frac{1}{n^2})^{n^2}}{e}\right)^n \leq \left(\frac{e}{e}\right)^n=1$$
(The first inequality uses $\left(1+\frac{1}{n^2}\right)^{n^2+1}\geq e$, which holds because $\left(1+\frac1m\right)^{m+1}$ decreases to $e$.)




Since:



$$\left(1 + \frac{1}{n^2}\right)^{-n}\to 1$$



By Squeeze Theorem:



$$\lim_{n\to \infty } \left(\frac{(1 + \frac{1}{n^2})^{n^2}}{e}\right)^n=1$$


elementary set theory - Unions and Functions on Sets



Given these conditions, I seek a proof.




Let $f: A \rightarrow B$ be a function, and let $X$ and $Y$ be subsets of $A$.



Prove that $f(X \cup Y) = f(X) \cup f(Y)$.



I can't seem to figure it out. It appears obvious, but materializing a proof is troubling me. What is the best method of proof for this?


Answer



I suppose the best way to do this is to show the two inclusions (namely $f(X\cup Y)\subseteq f(X)\cup f(Y)$ and $f(X)\cup f(Y)\subseteq f(X\cup Y)$), and to use definitions. For the first inclusion, this gives :



Let $x\in f(X\cup Y)$. This means that there exists a $y\in X\cup Y$ such that $f(y)=x$. Now $y\in X\cup Y$ means that either $y\in X$ or $y\in Y$. Now if $y\in X$, then $x\in f(X)$, and if $y\in Y$, then $x\in f(Y)$. In any instance, $x\in f(X)\cup f(Y)$. We proved that any $x$ in $f(X\cup Y)$ is also in $f(X)\cup f(Y)$, that is precisely $f(X\cup Y)\subseteq f(X)\cup f(Y)$. The other inclusion proceeds similarly.
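If it helps to see the statement in a concrete case before writing the proof, here is a tiny illustration (a sketch only; it checks the equality for one particular $f$, $X$ and $Y$, and of course does not replace the proof):

    # image(S) below is the set {f(s) : s in S}.
    f = lambda n: n * n
    X, Y = {1, 2, 3}, {3, 4}

    image = lambda S: {f(s) for s in S}
    print(image(X | Y) == image(X) | image(Y))   # True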



abstract algebra - Constructing a multiplication table for a finite field


Let $f(x)=x^3+x+1\in\mathbb{Z}_2[x]$ and let $F=\mathbb{Z}_2(\alpha)$, where $\alpha$ is a root of $f(x)$. Show that $F$ is a field and construct a multiplication table for $F$.




Can you please help me approach this problem? I've tried searching around, but I don't really know what I'm looking for!


Thanks.

Friday, January 26, 2018

calculus - Find this limit without using L'Hospital's rule

I have to find this limit without using l'Hôpital's rule:



$$\lim_{x\to 0} \frac{\alpha \sin \beta x - \beta \sin \alpha x}{x^2 \sin \alpha x}$$



Using l'Hôpital's rule gives:




$$\frac{\beta}{6(\alpha^2 - \beta^2)}$$
I am stuck where to begin without using the rule.

algebra precalculus - Given $2^{x}=129$, why is it that I can use the natural logarithm to find $x$?


I've looked at an example in my textbook, it is:


$2^{x}=129$


$\ln \left( 2^{x}\right) =\ln \left( 129\right) $


$x\ln \left( 2\right) =\ln \left( 129\right) $


$ x=\dfrac {\ln \left( 129\right) }{\ln \left( 2\right) }$


My question is how is it that you can take logs to the base e and still obtain the right $x$? Shouldn't you have to take logs to base 2 to find the exponent $x$ that goes on 2 to give 129? Why is it that I can use the natural logarithm to find x in this instance? Also I've tried for other bases such as 3 as well and I get the right answer, why is this?



Furthermore when we usually solve an exponential function such as the above we would do:


$2^{x}=129$


$\log _{2}\left( 129\right) =x$


But how is it in this example they take logs of both sides?


I'm sorry if this doesn't make sense, I'm just very confused so take it easy on me, I'm studying at a pre-calculus mathematics level. Thank you for your help!




EDIT: Okay, I've opened a bounty on this question, partly because although I've received a lot of responses I still don't seem to understand this, and hopefully someone else will come along with a fresh perspective or perhaps build on what others have written beforehand in a way that's conducive to my understanding. I hope this does not offend any of the people who have answered beforehand; it's not your fault I cannot understand.



That said, what I would like to understand is the following:


(1) Why is it that if I have an equation $2^x=8$, that taking logs to any base b (where b>0) would always give me the same answer for x (i.e. 3)?:



$$\eqalign{ & {\log _2}({2^x}) = {\log _2}(8) \cr & {\log _3}({2^x}) = {\log _3}(8) \cr & {\log _4}({2^x}) = {\log _4}(8) \cr} $$


How is it they all give the value of $x=3$?


Shouldn't taking the equation $2^x=8$ to the base of say ${\log _2}$ give me a differing value to an equation taken to ${\log _4}$? So that leads me onto my next question:


(2) What property of logarithms does this? WHY is this so? I think, from what I've gathered here, it has to do with how one logarithm is proportional or scalable to another? But I don't understand how this applies; if someone could explain this I think I'd be able to understand.


I'd like any answers if possible to refer to the example I've already used, if possible. Thank you.


Answer



Note that if you solve $e^y=2$ you get $y=\ln(2)$.


This tells us that


$$2=e^{\ln(2)}$$


which you probably already know.



Now we can solve this equation two ways:


Method 1: as you solved it.


$$2^x=129 \Rightarrow x =\log_2(129) \,.$$


Method 2: Well, since $2=e^{\ln(2)}$ you can rewrite the equation as


$$(e^{\ln(2)})^x= 129 \,. $$


This is


$$e^{(x \ln (2))} =129 \,.$$


Since this is an exponential with base $e$, it suddenly makes sense to take the natural logarithm.


Then you get


$$x \ln(2) = \ln(129) \,.$$



Intuitively, this is the reason why you can take a different logarithm than the obvious one: in a hidden way you use that any number $a$ can be rewritten as $b^{\log_b(a)}$, and this way any exponential with base $a$ becomes an exponential with base $b$. This process is called "the change of base formula for logarithms", and it can be done much faster by the simple formula


$$\log_a(b)=\frac{\log_c(b)}{\log_c(a)} \,.$$
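A quick numerical sketch of the point being made (any base gives the same $x$, precisely because of this change-of-base formula):

    import math

    # Solve 2**x = 129 using logarithms to several different bases:
    # x = log_b(129) / log_b(2) comes out the same for every base b.
    for b in (2.0, math.e, 3.0, 10.0):
        x = math.log(129, b) / math.log(2, b)
        print(b, x, 2 ** x)   # x ≈ 7.0112 each time, and 2**x ≈ 129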


Thursday, January 25, 2018

logarithms - Using Stirling's approximation to show that $(\log(\log n))!$ is $O(n^k)$



I am trying to show the following:




Prove, using Stirling's approximation, that $(\log(\log n))!$ is $O(n^k)$ for some positive constant $k$. Stirling's approximation is
$$n!=\sqrt{2\pi n}\left(\frac{n}{e}\right)^n\left(1+\Theta\left(\frac{1}{n}\right)\right).$$




By the given formula,

$$(\log(\log n))!=\sqrt{2\pi\log(\log n)}\left(\frac{\log(\log n)}{e}\right)^{\log(\log n)}\left(1+\Theta\left(\frac{1}{\log(\log n)}\right)\right).$$



I can show that $\sqrt{2\pi\log(\log n)}$ is $O(n^{1/2})$ and that $1+\Theta\left(\frac{1}{\log(\log n)}\right)$ is $O(1)$, but I am having trouble showing that $\left(\frac{\log(\log n)}{e}\right)^{\log(\log n)}$ is $O(n^k)$.



I can take the logarithm of this term and get $\approx(\log(\log n))(\log(\log(\log n)))$, but I am not sure where to go from here.


Answer



It's easier if you redefine $t = \log \log n$; then the RHS becomes $e^{ke^{t}}$. Now apply Stirling's approximation to the LHS and take the logarithm. You'll see that the LHS is $t \log t -t +O(\log t)$, which is of course $O(e^{t})$, and hence eventually smaller than $ke^{t}$.


Wednesday, January 24, 2018

derivatives - How is an infinitesimal $dx$ different from $\Delta x$?





When I learned calc, I was always taught
$$\frac{df}{dx}= f'(x) = \lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{(x+h)-x}$$
But I have heard $dx$ is called an infinitesimal and I don't know what this means. In particular, I gather the validity of treating a ratio of differentials is a subtle issue and I'm not sure I get it.
Can someone explain the difference between $dx$ and $\Delta x$?



EDIT:



Here is a related thread:



Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?




I read that and this is what I don't understand:




There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things.




Can someone explain what specifically are these two classes of real numbers and how they are different?


Answer



Back in the days of non-standard analysis, the idea of differentiation was (informally) defined as the ratio between two infinitesimal values.




The real number system (often denoted $\mathbb{R}$) can be defined in terms of a field. It is a field with additional properties such as a total ordering (basically meaning that any two elements can be compared) and the Archimedean property (which states that any two positive elements are within an integer multiple of each other). $\mathbb{R}$ can, however, be extended. One example is to allow the existence of the imaginary number $\sqrt{-1}$, in which case you would have the complex numbers (and that is also a field).



Extending $\mathbb{R}$ by introducing an infinitesimal element would make it lose the Archimedean property.



So when Arturo Magidin talked about "two classes of real numbers", he is basically referring to $\mathbb{R}$ and an ordered field containing all elements of $\mathbb{R}$ and also infinitesimals, "numbers" defined as greater than $0$ but smaller than every unit fraction $1/n$.


Property of modular arithmetic.


(a / b) % c = ((a % c) * (b^{-1} % c)) % c




How to calculate b^{-1}? I know it is not 1/b. Is there more than one way to calculate this?

sequences and series - How to calculate: $\sum_{n=1}^{\infty} n a^n$

I've tried to calculate this sum:


$$\sum_{n=1}^{\infty} n a^n$$


The point of this is to try to work out the "mean" term in an exponentially decaying average.


I've done the following:


$$\text{let }x = \sum_{n=1}^{\infty} n a^n$$ $$x = a + a \sum_{n=1}^{\infty} (n+1) a^n$$ $$x = a + a (\sum_{n=1}^{\infty} n a^n + \sum_{n=1}^{\infty} a^n)$$ $$x = a + a (x + \sum_{n=1}^{\infty} a^n)$$ $$x = a + ax + a\sum_{n=1}^{\infty} a^n$$ $$(1-a)x = a + a\sum_{n=1}^{\infty} a^n$$



Lets try to work out the $\sum_{n=1}^{\infty} a^n$ part:


$$let y = \sum_{n=1}^{\infty} a^n$$ $$y = a + a \sum_{n=1}^{\infty} a^n$$ $$y = a + ay$$ $$y - ay = a$$ $$y(1-a) = a$$ $$y = a/(1-a)$$


Substitute y back in:


$$(1-a)x = a + a*(a/(1-a))$$ $$(1-a)^2 x = a(1-a) + a^2$$ $$(1-a)^2 x = a - a^2 + a^2$$ $$(1-a)^2 x = a$$ $$x = a/(1-a)^2$$


Is this right, and if so is there a shorter way?


Edit:


To actually calculate the "mean" term of a exponential moving average we need to keep in mind that terms are weighted at the level of $(1-a)$. i.e. for $a=1$ there is no decay, for $a=0$ only the most recent term counts.


So the above result we need to multiply by $(1-a)$ to get the result:


Exponential moving average "mean term" = $a/(1-a)$


This gives the results, for $a=0$, the mean term is the "0th term" (none other are used) whereas for $a=0.5$ the mean term is the "1st term" (i.e. after the current term).
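(A quick numerical check of the closed form above, added as a sketch: for $|a|<1$ the partial sums do approach $a/(1-a)^2$.)

    # Partial sum of sum_{n>=1} n a^n versus the closed form a/(1-a)^2.
    a = 0.5
    partial = sum(n * a ** n for n in range(1, 200))
    print(partial, a / (1 - a) ** 2)   # both ≈ 2.0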

combinatorics - How to remember the difference between "stars and bars" and multinomial coefficients?


Explanation: It is easy for me to remember the difference between permutation coefficients and combination coefficients verbally: one says that the former


"is the number of ways of choosing $k$ things from $n$ options without replacement where order matters"


and that the latter



"is the number of ways of choosing $k$ things from $n$ options without replacement where order doesn't matter".



Question: How can one quickly describe (verbally) the difference between multinomial coefficients and "stars and bars" (which will here henceforth be called multiset coefficients)? For which does order matter? For which does order not matter? For which is the choosing with/without replacement?



Attempt: Both involve a counting process with two considerations: (i) dividing objects of $n$ types into $i$ containers, (ii) each container with $k_i$ spots, so assigning $k = \sum_i k_i$ objects from the $n$ different types. So it seems like there might be some ambiguity, when one says "with/without replacement" or "order does/doesn't matter", is one referring to (i) or (ii)?


So it seems like there are at least four distinct possibilities:


  1. Assign $n$ types of object into $i$ containers, where the order of the containers doesn't matter, with $k = \sum_i k_i$ spots total, where the order within each container doesn't matter.

  2. Assign $n$ types of object into $i$ containers, where the order of the containers doesn't matter, with $k = \sum_i k_i$ spots total, where the order within each container does matter.

  3. Assign $n$ types of object into $i$ containers, where the order of the containers does matter, with $k = \sum_i k_i$ spots total, where the order within each container doesn't matter.

  4. Assign $n$ types of object into $i$ containers, where the order of the containers does matter, with $k = \sum_i k_i$ spots total, where the order within each container does matter.


IF this is true, then which of these options corresponds to multinomial coefficients, and which of these options corresponds to multiset coefficients?


Update: I think the difference is this: "multiset coefficients are the number of ways of distributing balls into boxes, where (i) the total number of balls is fixed, (ii) the number of balls in each container is not fixed, (iii) the balls are indistinguishable (their order doesn't matter, just the number in each container), (iv) the boxes are distinguishable (i.e. their order does matter)". (For (iv), e.g. $x^2 y \not= x y^2$.) While "multinomial coefficients are the number of ways of distributing balls into boxes, where (i) the total number of balls is fixed because (ii) the number of balls in each box is fixed, (iii) the balls are indistinguishable (hence the relationship to binomial coefficients), and (iv) the boxes are distinguishable (but it doesn't matter in the case of binomials because the solution to the Diophantine equation $k_1 + k_2 = k$ is already determined once $k_1$ is chosen)".


So TL;DR: in both cases, the balls are indistinguishable (their order doesn't matter), the boxes are distinguishable (their order does matter), the total number of balls is fixed, but (a) for multiset coefficients, the number of balls in each box is not fixed in advance, while (b) for multinomial coefficients the number of balls in each box is fixed in advance.


Thus the difference is whether one fixes in advance how many balls are to be placed into each box.


Answer



A multinomial coefficient is often written in the form $$ \binom{n}{k_1,k_2,\ldots,k_m} $$ where $k_1+k_2+\cdots+k_m = n.$ Because of that last equation, the $n$ is redundant. Binomial coefficients are technically multinomial coefficients as well, but instead of writing $\binom{n}{k_1,k_2}$ where $k_1+k_2 = n,$ we usually write either $\binom{n}{k_1}$ or $\binom{n}{k_2}.$ These all mean exactly the same thing: $$ \binom{k_1+k_2}{k_1,k_2}=\binom{k_1+k_2}{k_1} =\binom{k_1+k_2}{k_2} = \frac{(k_1+k_2)!}{k_1!\,k_2!}. $$


A multinomial coefficient can arise in a counting problem as follows:


  • You have $m$ containers, with sizes $k_1, k_2, \ldots, k_m$ respectively.

  • You have $n = k_1+k_2+\cdots+k_m$ distinguishable objects, which are just enough to fill every container when drawing from the $n$ objects without replacement.

  • The order of the containers matters.


  • The order of the objects within each container does not matter.


A multiset coefficient can arise as follows:


  • You have $n$ distinguishable objects.

  • You have a single container able to hold $k$ objects.

  • You draw objects from the $n$ objects with replacement, and put the copies of those objects into the container.

  • You continue putting copies of objects in the container until it is full.

  • The order of objects in the container does not matter.

A multiset coefficient can alternatively arise this way:


  • You have $k$ objects that are indistinguishable or that you choose not to distinguish.

  • You have $n$ containers, each with effectively infinite capacity (they never get full).


  • You draw the objects without replacement and put them in the containers.

  • The order of the containers matters.

  • The order of objects in a container cannot matter, because by assumption you cannot tell which object is which, only how many are in each container.

These are two quite different ways of modeling a multiset coefficient: in one model, the number of objects is the first number of the coefficient, while in the other model the number of objects is the second number. But there is a simple counting argument that shows both ways model the same total count: each time you choose one of the $n$ objects in the first model to occupy one of the $k$ unordered (hence indistinguishable) places in the container, you choose one of the $n$ containers in the second model to hold one of the $k$ indistinguishable objects. Just as the first model allows you to choose the same object as often as you want (until all spaces in the container are full), the second model allows you to put objects in the same container as many times as you like (until all objects have been placed).


While the term "multiset" is inspired by the first model, the second model is a frequent application of the "stars and bars" method. It's useful to know how to apply the multiset coefficient to either model as needed.


Tuesday, January 23, 2018

real analysis - Natural derivation of the complex exponential function?



Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log).



Notice that every deduction above follows from a natural question. We never need to guess anything to proceed.



Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows:



Derive the real exponential by some method (inverse function to the natural log, which is the integral of $1/t$ on the interval $[1,x)$, Bourbaki's method, or some other derivation), then show that it is analytic with infinite radius of convergence (where it converges uniformly and absolutely), which means that it is equal to its Taylor series at 0, which means that we can, by a general result of complex analysis, extend it to an entire function on the complex plane.




This derivation doesn't seem natural to me in the same sense as Bourbaki's derivation of the real exponential, since it requires that we notice some analytic properties of the function, instead of relying on its unique algebraic and topological properties.



Does anyone know of a derivation similar to Bourbaki's for the complex exponential?


Answer



I think essentially the same characterization holds. The complex exponential is the unique Lie group homomorphism from $\mathbb{C}$ to $\mathbb{C}^*$ such that the (real) derivative at the identity is the identity matrix.


complex analysis - On an alternative to computing $\int_{-\infty}^{\infty} \frac{\sin{x}}{x} \, dx$



I was looking through alternative methods of computing this integral rather than using the classic semi-circular contour avoiding the origin. I came across a method of computing



$$\int_{-\infty}^{\infty} \frac{\sin^2(x)}{x^2} \, dx$$



by shifting the poles up by a factor $i \epsilon$ and continuing the calculation as normal.




I want to employ the same idea with the sinc integral, noting that
$$\int_{-\infty}^{\infty} \frac{\sin(x)}{x} \, dx = \lim_{\epsilon \rightarrow 0}\int_{-\infty}^{\infty} \frac{\sin(x)}{x-i\epsilon} \, dx$$



We consider the integral
$$\int_{\Gamma} \frac{e^{iz}}{z-i \epsilon}\, dz$$



Where $\Gamma$ is the upper semi-circular arc, $[-R, R] \cup Re^{i[0, \pi]}$. Then we have



\begin{align*}
\int_{\Gamma} \frac{e^{iz}}{z-i \epsilon}\, dz &= \int_{-R}^{R} \frac{e^{it}}{t-i \epsilon} \, dt + iR\int_{0}^{\pi} \frac{e^{it}e^{iRe^{it}}}{Re^{it}-i \epsilon}\, dt \\
&= 2 \pi i e^{-\epsilon}
\end{align*}



by the residue theorem, since there is one pole inside the contour. The integral along the arc vanishes (and this is pretty easy to see) as $R \rightarrow \infty$. Letting $\epsilon \rightarrow 0$, we have



$$\lim_{\epsilon \rightarrow 0} \int_{-\infty}^{\infty}\frac{e^{it}}{t-i \epsilon} \, dt = 2 \pi i$$



This is where it gets murky for me. Expanding the integrand, we've



$$\lim_{\epsilon \rightarrow 0}\left(\int_{-\infty}^{\infty} \frac{x \cos (x)}{x^2+\epsilon ^2}-\frac{\epsilon \sin (x)}{x^2+\epsilon ^2} \, dx + i \int_{-\infty}^{\infty} \frac{x \sin (x)}{x^2+\epsilon ^2}+\frac{\epsilon \cos (x)}{x^2+\epsilon ^2} \, dx \right)= 2 \pi i$$




Using the linearity of the limit on the left hand side and equating real and imaginary parts, we have



$$\lim_{\epsilon \rightarrow 0 } \int_{-\infty}^{\infty} \frac{x \sin (x)}{x^2+\epsilon ^2}+\frac{\epsilon \cos (x)}{x^2+\epsilon ^2} \, dx = 2 \pi$$



and yet, at the same time,



$$ \int_{-\infty}^{\infty}\lim_{\epsilon \rightarrow 0 }\left(\frac{x \sin (x)}{x^2+\epsilon ^2}+\frac{\epsilon \cos (x)}{x^2+\epsilon ^2}\right) \, dx = \int_{-\infty}^{\infty}\frac{\sin(x)}{x} \, dx = \pi$$



I'd like to know why the exchange of limit and integral is making the difference here, is there something to do with the convergence of these integrals that is making this technique not work, or is there something I've plainly missed? One thing I can note is that I don't employ that idea I had originally, as the real and imaginary parts of the integrand above don't recover what I want to exploit.



Answer



I see this as a question about the intuition of the matter (since I think you are already aware of the "classic" actual proof, and you are already aware of that the place where the "fake" proof goes wrong is at the interchange of integral and limit), so allow me to give an answer which aims at an "intuitive" level, even if at the expense of rigorous detail.



Let's focus on the expression that causes the trouble in the end: $\frac{\epsilon \cos(x)}{x^2+\epsilon ^2}$. It causes trouble because of its behavior when $\epsilon$ and $x$ are close to $0$. Well, to be a little more specific, think of $\cos(x)$ as the sum of the main term $1$ and the higher-order part $h(x) = \cos(x)-1 = O(x^2)$; then it's only the main term that causes the trouble. In other words, $\frac{\epsilon}{x^2+\epsilon ^2}$ is the interesting part while $\epsilon \frac{h(x)}{x^2+\epsilon ^2}$ is the uninteresting part (uninteresting because it behaves nicely for $\epsilon$ and $x$ near $0$, and is happy to allow its integral and limit to be interchanged).



The "interesting" expression has the integral
$$
\int_{-\infty}^\infty \frac{\epsilon}{x^2+\epsilon ^2} dx
= \int_{-\infty}^\infty \frac{1}{\frac{x^2}{\epsilon^2} + 1} \frac{dx}{\epsilon}
= \int_{-\infty}^\infty \frac{1}{u^2 + 1} du
$$
which doesn't actually depend on $\epsilon$ - it's always $\pi$.



So, what we see is that $\frac{\epsilon}{x^2+\epsilon ^2}$ accounts for the difference between $\pi$ (the actual answer) and $2\pi$ (from the fake proof).



To put it another way, what happens is that
$$
\frac{\epsilon}{x^2+\epsilon ^2}
= \frac{1}{2i} \left( \frac{1}{x-i\epsilon} - \frac{1}{x+i\epsilon}\right)
$$

and
$$
\frac{1}{2i} \int_{a}^b \left( \frac{1}{x-i\epsilon} - \frac{1}{x+i\epsilon}\right) dx
\approx \frac{1}{2i} \oint \frac1z \, dz,
$$
which equals $\pi$ (if $0 \in (a,b)$).
In this light, the error of the fake proof is in neglecting to analyze the pole at $0$ (in contrast to the proof involving the "semi-circular contour avoiding the origin" you alluded to, which does tread carefully around the pole).



After a few more drinks, you might say that the true limit of the functions is given by
$$
\frac{\epsilon}{x^2+\epsilon ^2}
= \frac{1}{2i} \left( \frac{1}{x-i\epsilon} - \frac{1}{x+i\epsilon}\right)
\to \pi \delta(x)
$$
where $\delta$ is the Dirac delta "function". (The theories of "distributions" and "hyperfunctions" are the elaborations of this idea.) In a sense, the essence of the fake proof's error is in equating $\delta$ with $0$.



The point is, the interchange of the limit and integral doesn't work, as $\frac{\epsilon \cos(x)}{x^2+\epsilon ^2}$ carries the "missing mass" of $\pi$, in spite of its limit being $0$ for every individual $x$ pointwise (except at $x=0$).
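Numerically, the "missing mass" is easy to see (a sketch; the finite integration range and the values of $\epsilon$ are arbitrary choices):

    import numpy as np

    # ∫ eps*cos(x)/(x²+eps²) dx keeps carrying mass ≈ π however small eps gets,
    # even though the integrand tends to 0 pointwise for every fixed x ≠ 0.
    x = np.linspace(-100.0, 100.0, 2_000_001)
    for eps in (1.0, 0.1, 0.01):
        y = eps * np.cos(x) / (x ** 2 + eps ** 2)
        integral = np.sum((y[:-1] + y[1:]) * np.diff(x) / 2)   # trapezoidal rule
        print(eps, integral, np.pi * np.exp(-eps))             # ≈ π e^{-eps} in each case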


real analysis - Rigourous substitution formula for indefinite integral

By the fundamental theorem of calculus we are able to say that:




The Substitution Formula: If $f$ and $g'$ are continuous, then
$$ \int_{g(a)}^{g(b)}f(u)du= \int_a^b f\big( g(x) \big)\cdot g'(x)dx $$





This formula is often stated for indefinite integrals by:



$$ \int f(u)du= \int f\big( g(x) \big)\cdot g'(x)dx \quad \text{where} \quad u=g(x) $$



I was trying to find a rigorous proof to the second formulation, but have come up short so far. I was wondering whether it can indeed be rigorously proved, since it is often taught in early calculus class. I found a remark in Spivak's book on calculus saying that:




"This formula cannot be taken literally (after all, $\int f(u)du$ should mean a primitive of $f$ and the symbol $\int f\big( g(x) \big) g'(x)dx$ should mean a primitive of $ (f\circ g)\cdot g'$; these are certainly not equal)."





Is there a rigorous sense in which the second formulation has a proof, or is just a symbolic manipulation?

Monday, January 22, 2018

soft question - Visually deceptive "proofs" which are mathematically wrong

Related: Visually stunning math concepts which are easy to explain




Beside the wonderful examples above, there should also be counterexamples, where visually intuitive demonstrations are actually wrong. (e.g. missing square puzzle)



Do you know the other examples?

summation - Question on a tricky Arithmo-Geometric Progression



$$\dfrac{1}{4}+\dfrac{2}{8}+\dfrac{3}{16}+\dfrac{4}{32}+\dfrac{5}{64}+\cdots\infty$$



This summation has been irritating me from the start. I don't know how to attempt it, though I have made some unsuccessful attempts.



Answer



$$\begin{align} S&=\qquad \frac 14+\frac 28+\frac 3{16}+\frac 4{32}+\frac 5{64}+\cdots\tag{1}\\ 2S&=\frac 12+\frac 24+\frac 38+\frac 4{16}+\frac 5{32}\cdots\tag{2}\\ (2)-(1):\qquad\\ S&=\frac 12+\frac 14+\frac 18+\frac 1{16}+\frac 1{32}\cdots\\ &=\frac {\frac 12}{1-\frac 12}\\ &=\color{red}1 \end{align}$$


elementary number theory - CRT + Fermat's Little Theorem

I am seeking a proof for the following...


Suppose $p$ and $q$ are distinct primes. Show that $$ p^{q-1} + q^{p-1} \equiv 1 \quad (\text{mod } pq)$$ I gather from Fermat's Little Theorem the following: $$q^{p-1} \equiv 1 \quad (\text{mod } p)$$


and


$$p^{q-1} \equiv 1 \quad (\text{mod } q)$$


How can I use this knowledge to give a proof? I'm confident I can combine this with the Chinese Remainder Theorem, but I am stuck from here.

Sunday, January 21, 2018

combinatorics - Combinatorial identity's algebraic proof without induction.





How would you prove this combinatorial identity algebraically without induction?



$$\sum_{k=0}^n { x+k \choose k} = { x+n+1\choose n }$$



Thanks.


Answer



Here is an algebraic approach. In order to do so it's convenient to use the coefficient of operator $[z^k]$ to denote the coefficient of $z^k$ in a series. This way we can write e.g.
\begin{align*}
\binom{n}{k}=[z^k](1+z)^n
\end{align*}





We obtain



\begin{align*}
\sum_{k=0}^{n}\binom{x+k}{k}&=\sum_{k=0}^{n}\binom{-x-1}{k}(-1)^k\tag{1}\\
&=\sum_{k=0}^n[z^k](1+z)^{-x-1}(-1)^k \tag{2}\\
&=[z^0]\frac{1}{(1+z)^{x+1}}\sum_{k=0}^n\left(-\frac{1}{ z }\right)^k\tag{3}\\
&=[z^0]\frac{1}{(1+z)^{x+1}}\cdot \frac{1-\left(-\frac{1}{z}\right)^{n+1}}{1-\left(-\frac{1}{z}\right)}\tag{4}\\
&=[z^n]\frac{z^{n+1}+(-1)^n}{(1+z)^{x+2}}\tag{5}\\

&=(-1)^n[z^n]\sum_{k=0}^\infty\binom{-x-2}{k}z^k\tag{6}\\
&=(-1)^n\binom{-x-2}{n}\tag{7}\\
&=\binom{x+n+1}{n}\tag{8}
\end{align*}
and the claim follows.




Comment:





  • In (1) we use the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$.


  • In (2) we apply the coefficient of operator.


  • In (3) we do some rearrangements by using the linearity of the coefficient of operator and we also use the rule
    \begin{align*}
    [z^{p-q}]A(z)=[z^p]z^{q}A(z)
    \end{align*}


  • In (4) apply the formula for the finite geometric series.


  • In (5) we do some simplifications and use again the rule stated in comment (3).


  • In (6) we use the geometric series expansion of $\frac{1}{(1+z)^{x+2}}$. Note that we can ignore the summand $z^{n+1}$ in the numerator since it has no contribution to the coefficient of $z^n$.


  • In (7) we select the coefficient of $z^n$.



  • In (8) we use the rule stated in comment (1) again.
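A quick numerical confirmation of the identity for a few small values (a sanity check only, not a proof):

    from math import comb

    # sum_{k=0}^{n} C(x+k, k) == C(x+n+1, n) for non-negative integers x, n.
    for x in range(6):
        for n in range(6):
            lhs = sum(comb(x + k, k) for k in range(n + 1))
            assert lhs == comb(x + n + 1, n)
    print("identity holds for all tested x, n")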



real analysis - Can there be only one extension to the factorial?



Usually, when someone says something like $\left(\frac12\right)!$, they are probably referring to the Gamma function, which extends the factorial to any value of $x$.



The usual definition of the factorial is $x!=1\times2\times3\times\dots x$, but for $x\notin\mathbb{N}$, the Gamma function results in $x!=\int_0^\infty t^xe^{-t}dt$.



However, back a while ago, someone mentioned that there may be more than one way to define the factorial for non-integer arguments, and so, I wished to disprove that statement with some assumptions about the factorial function.







the factorial




  1. is a $C^\infty$ function for $x\in\mathbb{C}$ except at $\mathbb{Z}_{<0}$ because of singularities, which we will see later.


  2. is a monotone increasing function that is concave up for $x>1$.


  3. satisfies the relation $x!=x(x-1)!$


  4. and lastly $1!=1$








From $3$ and $4$, one can define $x!$ for $x\in\mathbb{N}$, and we can see that for negative integer arguments, the factorial is undefined. We can also see that $0!=1$.



Since we assumed $2$, we should be able to sketch the factorial for $x>1$, using our points found from $3,4$ as guidelines.



At the same time, when sketching the graph, we remember $1$, so there can be no jumps or gaps from one value of $x$ to the next.



Then we reapply $3$, correcting values for $x\in\mathbb{R}$, since all values of $x$ must satisfy this relationship.




Again, because of $1$, we must re-correct our graph, since having $3$ makes the derivative of $x!$ for $x\in\mathbb N$ undefined.



So, because of $1$ and $3$, I realized that there can only be one way to define the factorial for $x\in\mathbb R$.



Is my reasoning correct? And can there be only one extension to the factorial?






Oh, and here is a 'link' to how I almost differentiated the factorial only with a few assumptions, like that it is even possible to differentiate.




Putting that in mind, it could be possible to define the factorial with Taylor's theorem?


Answer



First, for a fixed $c\in\mathbb{C}$, let $$F_c(z):=\Gamma(z+1)\cdot\big(1+c\,\sin(2\pi z)\big)$$ for all $z\in\mathbb{C}\setminus \mathbb{Z}_{<0}$, which defines an analytic function $F_c:\mathbb{C}\setminus\mathbb{Z}_{<0}\to\mathbb{C}$ such that $$F_c(z)=z\cdot F_c(z-1)$$
for all $z\in\mathbb{C}\setminus\mathbb{Z}_{\leq 0}$ and that $F_c(0)=F_c(1)=1$ (whence $F_c(n)=n!$ for every $n\in\mathbb{Z}_{\geq 0}$). Excluding the essential singularity at $\infty$, the negative integers are the only singularities of $F_c$, which are simple poles.
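A small numerical sketch of this construction (using scipy's gamma function; the value $c=0.01$ is just one admissible choice):

    import numpy as np
    from scipy.special import gamma

    c = 0.01
    def F(z):
        # F_c(z) = Gamma(z+1) * (1 + c*sin(2*pi*z)), an alternative "factorial"
        return gamma(z + 1.0) * (1.0 + c * np.sin(2.0 * np.pi * z))

    # Agrees with n! at the non-negative integers (up to rounding)...
    print([F(n) for n in range(6)])      # 1, 1, 2, 6, 24, 120

    # ...and still satisfies F(z) = z * F(z-1) away from the integers.
    for z in (0.3, 1.7, 4.25):
        print(F(z), z * F(z - 1.0))      # the two columns agree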



Here are some results I checked with Mathematica. If $c$ is a positive real number less than $0.022752$, then $F_c'(z)>0$ for all $z>1$ and $F_c''(z)>0$ for all $z>-1$, making $F_c$ monotonically increasing on $(1,\infty)$ and convex on $(-1,\infty)$. It also appears that, with $0

what exactly is a function of a function? $f(x)=2x$, $g=3f$, what exactly is $g$?

The question: given a function $f(x)=2x$ with domain $[1,2]$, codomain $[2,4]$ and image $[2,4]$, when we take a transformation of this function like $g=3f$, what exactly is this function $g$?


My guess:


  1. We can think of $g$ as the function that takes the domain of $f$ as its own domain, so $g$ is the function $g(x)=6x$ with domain $[1,2]$ and codomain $[6,12]$, $x\in[1,2]$.

  2. We can think of $g$ as the composition $g=h\circ f$, where $f(x)=2x$ with domain $[1,2]$ and $h(x)=3x$ with domain $[2,4]$.

Both of them can make sense, so which one is right?

Series convergence/ divergence


Determine whether the following series is absolutely convergent, conditionally convergent or divergent.

$$\sum_{n=1}^\infty(-0.75)^n\dfrac{n+1}{2}$$




I tried using the divergence test and the alternating series test, but I can't seem to apply them to this problem, as I could not take the limit of the term.

Saturday, January 20, 2018

all prime numbers have irrational square roots


How can I prove that all prime numbers have irrational square roots?


My work so far: suppose that a prime $p = a\cdot a$; then $p$ is divisible by $a$. Contradiction. Did I begin correctly? How do I continue?


Answer



The standard proof for $p=2$ works for any prime number.
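To spell it out (the same argument as for $2$, with a general prime $p$): suppose $\sqrt{p}=a/b$ with $a,b$ integers having no common factor. Then
$$p\,b^2=a^2 \;\Rightarrow\; p\mid a^2 \;\Rightarrow\; p\mid a \;\Rightarrow\; a=pc \;\Rightarrow\; p\,b^2=p^2c^2 \;\Rightarrow\; b^2=p\,c^2 \;\Rightarrow\; p\mid b,$$
contradicting the assumption that $a$ and $b$ have no common factor. (The step $p\mid a^2\Rightarrow p\mid a$ is where primality is used, via Euclid's lemma.)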


calculus - The series $\sum_{n=2}^{\infty }\frac{1}{\log (n!)}$




I'm trying to find out whether this Series
$\sum_{n=2}^{\infty } a_{n}$ converges or not when
$$a_{n}=\frac{1}{\log (n!)}$$



I tried a couple of methods, among them d'Alembert's ratio test $\frac{a_{n+1}}{a_{n}}$ and the Cauchy condensation test $\sum_{n=2}^{\infty } 2^{n}a_{2^n}$, and neither of them worked for me.



Edit: I can't use Stirling's formula or the integral test.



Thank you


Answer




Hint: You can use $$a_n = \frac{1}{\log n!} = \frac{1}{\sum_{k=1}^n \log k} \geq
\frac{1}{n \log n}.$$
Then use the Cauchy condensation test...
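Filling in the suggested condensation step (just a sketch): with $b_n=\frac{1}{n\log n}$,
$$\sum_{n\ge 2} 2^{n}b_{2^n}=\sum_{n\ge 2}\frac{2^n}{2^n\log 2^n}=\sum_{n\ge 2}\frac{1}{n\log 2}=\infty,$$
so $\sum b_n$ diverges, and since $a_n\ge b_n$ the original series diverges by comparison.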


complex analysis - Evaluating the improper integral $\int_{0}^{\infty}{\frac{x^2}{x^{10} + 1}\,\mathrm dx}$

I am trying to solve the following integral, but I don't have a solution, and the answer I am getting doesn't seem correct.



So I am trying to integrate this:



$$ \int_{0}^{\infty}{\frac{x^2}{x^{10} + 1}\,\mathrm dx} $$




To integrate this, I want to use a contour that looks like a pizza slice, out of a pie of radius R. One edge of this pizza slice is along the positive x-axis, if that makes sense. Since $ z^{10} + 1 $ has 10 zeroes, the slice should only be one tenth of a whole circle. So let's call this contour $ C $. Then:



$$ \int_{C}{\frac{z^2}{z^{10} + 1}\,\mathrm dz} = 2 \pi i\,\operatorname{Res}(\frac{x^2}{x^{10} + 1}, e^{i \pi/10}) $$ This is because this slice contains only one singularity. Furthermore:



$$ \int_{C}{\frac{z^2}{z^{10} + 1}\,\mathrm dz} = \int_0^R{\frac{z^2}{z^{10} + 1}\,\mathrm dz} + \int_\Gamma{\frac{z^2}{z^{10} + 1}\,\mathrm dz} $$



And then, by the M-L Formula, we can say that $ \int_\Gamma{\frac{z^2}{z^{10} + 1}\,\mathrm dz} $ goes to $0$ as $R$ goes to infinity. Evaluating $ 2 \pi i\ \operatorname{Res}(\frac{x^2}{x^{10} + 1}, e^{i \pi/10}) $ I get $ \dfrac{\pi}{e^{i \pi/5}} $. Since this answer isn't real, I don't think this could be correct. What did I do wrong?

Friday, January 19, 2018

calculus - An alternative proof for sum of alternating series evaluates to $\frac{\pi}{4}\sec\left(\frac{a\pi}{4}\right)$



How does one prove the given series? $$\sum_{n=0}^\infty\left(\frac{(-1)^n}{4n-a+2}+\frac{(-1)^n}{4n+a+2}\right)=\frac{\pi}{4}\sec\left(\frac{a\pi}{4}\right)$$



This series came up in xpaul's calculation during the process of answering my homework problem. I really appreciate his help, but I am looking for a method to prove the above series using real analysis, because the link he cited (Ron G's answer) to help me prove it uses the residue theorem. I worked on this for a while today but was unsuccessful.
$$\begin{align}\sum_{n=0}^\infty\left(\frac{(-1)^n}{4n-a+2}+\frac{(-1)^n}{4n+a+2}\right)&=4\sum_{n=0}^\infty(-1)^n\frac{2n+1}{(4n+2)^2-a^2}\\&=\sum_{n=0}^\infty(-1)^n\frac{2n+1}{(2n+1)^2-\left(\frac{a}{2}\right)^2}\end{align}$$

I compared with the Taylor series for the secant, the hyperbolic secant, and other well-known series forms, but I could not get any of them to work; perhaps someone else can? I would like a nice proof avoiding the residue method in order to complete my homework's answer. Would you help me? Any help would be appreciated. Thanks in advance.


Answer



I answered this before in Closed form of $\int_{0}^{\infty} \frac{\tanh(x)\,\tanh(2x)}{x^2}\;dx$. However, that solution was too long, as Venus mentioned. Inspired by Jack D'Aurizio's answer, I have a simpler solution. It is easy to check that
\begin{eqnarray*}
&&\sum_{n=0}^\infty\left(\frac{(-1)^n}{4n-a+2}+\frac{(-1)^n}{4n+a+2}\right)\\
&=& \int_{0}^{1}\frac{x}{1+x^4}\left(x^a+x^{-a}\right)\,dx=\int_{0}^{\infty}\frac{x^{a+1}}{1+x^4}\,dx\\
&=&\frac{1}{a+2}\int_0^\infty\frac{1}{1+x^{4/(a+2)}}\,dx \qquad (\text{substituting } t=x^{a+2} \text{ and renaming } t \text{ back to } x).
\end{eqnarray*}
Now using the following well-known integral
$$ \int_0^\infty\frac{1}{1+x^n}dx=\frac{\pi}{n\sin(\pi/n)}, \text{ for }n>1, $$

(for example, see Prove $\int_0^{\infty}\! \frac{\mathbb{d}x}{1+x^n}=\frac{\pi}{n \sin\frac{\pi}{n}}$ using real analysis techniques only) it is easy to get the answer.
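To spell out the last step (this sentence is my addition): taking $n=\frac{4}{a+2}$ in the displayed formula (valid here since $|a|<2$, so $\frac{4}{a+2}>1$) gives
$$\frac{1}{a+2}\int_0^\infty\frac{dx}{1+x^{4/(a+2)}}\;=\;\frac{1}{a+2}\cdot\frac{\pi}{\frac{4}{a+2}\sin\frac{(a+2)\pi}{4}}\;=\;\frac{\pi}{4\sin\left(\frac{\pi}{2}+\frac{a\pi}{4}\right)}\;=\;\frac{\pi}{4}\sec\left(\frac{a\pi}{4}\right).$$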


number theory - if d divides n then prove that fibonacci of d divides fibonacci of n


Prove that if $d$ divides $n$, then the Fibonacci number $F(d)$ divides $F(n)$.


I have tried to write $F(n)$ as a multiple of $F(d)$, using the fact that $n = ad$ for some natural number $a$, but got nowhere.


Answer




By the addition law, $\rm\: f(x+y) = i\:f(x) + j\:f(y)\:$ for some $\rm\:i,j\in \mathbb Z$; hence, by $\,\rm\color{#c00}{induction}\,$, $\rm\: f(d)\mid f(nd)$:


$$\rm f((n\!+\!1)d)\, =\, f(nd\!+\!d)\, =\, i\: \color{#c00}{f(nd)} + j\: f(d)\, =\, i\, \color{#c00}{k\, f(d)}\! + j\: f(d)\, =\, (i\,k+j)\: f(d) $$
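(For concreteness, as my own aside: with the usual convention $F_1=F_2=1$, the addition law can be written $F_{x+y}=F_{x}F_{y+1}+F_{x-1}F_{y}$, so in the display one may take $\rm i = f(d+1)$ and $\rm j = f(nd-1)$.)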


Remark $\ $ More generally $\rm\, \gcd(f(k),f(n)) = f_{\,\gcd(k,n)}.\,$ For a proof see here.


linear algebra - Decompose $A$ to the product of elementary matrices. Use this to find $A^{-1}$



Decompose $A$ to the product of elementary matrices. Use this to find $A^{-1}$
$$
A =
\begin{bmatrix}
3 & 1\\
1 & -2
\end{bmatrix}

$$



I understand how to reduce this to row echelon form, but I'm not sure what it means to decompose it into a product of elementary matrices. I know what elementary matrices are, sort of (a row echelon form matrix with a row operation applied to it), but I'm not sure what a product of them means. Could someone demonstrate with an example, please? It'd be very helpful.


Answer



An elementary matrix $E$ is a square matrix that results from performing a single elementary row operation on $I$. For example, $\left[ \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix}\right]$ is an elementary matrix because adding the first row of $I$ to the second row of $I$ gives us this matrix. Moreover, matrix multiplication on the left by an elementary matrix is equivalent to performing the corresponding elementary row operation. Thus, with $A$ as in your question and $E$ as above, we have that
$$EA=\left[ \begin{matrix} 1& 0 \\ 1 & 1 \end{matrix}\right]\left[ \begin{matrix} 3 & 1 \\ 1 & -2 \end{matrix}\right]=\left[ \begin{matrix} 3 & 1 \\ 3+1 & 1+(-2) \end{matrix}\right] =\left[ \begin{matrix} 3 & 1 \\ 4 & -1 \end{matrix}\right]$$



A square matrix $A$ is invertible if and only if it can be reduced to the identity matrix, which is to say that by multiplying by finitely-many elementary matrices on the left we get the identity: $$E_nE_{n-1}\cdots E_2E_1A=I$$ so that $$A=E_1^{-1}E_2^{-1}\cdots E_{n-1}^{-1}E_n^{-1}.$$
The above is well-defined since every elementary matrix is invertible (its inverse corresponds to the elementary row operation that reverses the elementary row operation corresponding to the original elementary matrix).




Thus, the first step is to row-reduce $A$ to the identity $I$, keeping track of what operations we used, and then multiplying the corresponding inverses in the opposite order as indicated above.



For example, take $A=\left[ \begin{matrix} 1 & 2 \\ 2 & 1 \end{matrix}\right]$. We can reduce this to $I$ by subtracting two times the second row from the first row, giving us $\left[ \begin{matrix} -3 & 0 \\ 2 & 1 \end{matrix}\right]$. We can then add $2/3$ the first row to the second to give us $\left[ \begin{matrix} -3 & 0 \\ 0 & 1 \end{matrix}\right]$. Finally, we multiply the first row by $-1/3$. This gives us the decomposition
$$\left[ \begin{matrix} 1 & 2 \\ 2 & 1 \end{matrix}\right] =\left[ \begin{matrix} 1 & 2 \\ 0 & 1 \end{matrix}\right] \left[ \begin{matrix} 1 & 0 \\ -2/3 & 1 \end{matrix}\right] \left[ \begin{matrix} -3 & 0 \\ 0 & 1 \end{matrix}\right]$$
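Applying the same recipe to the matrix in the question (this worked example is my own addition; other sequences of row operations work just as well): reduce $A$ to $I$ via $R_2\to R_2-\tfrac13 R_1$, then $R_1\to R_1+\tfrac37 R_2$, then $R_1\to\tfrac13 R_1$, and finally $R_2\to-\tfrac37 R_2$. Writing $E_1,\dots,E_4$ for the corresponding elementary matrices, we get $A=E_1^{-1}E_2^{-1}E_3^{-1}E_4^{-1}$ and $A^{-1}=E_4E_3E_2E_1$, that is,
$$\begin{bmatrix} 3 & 1\\ 1 & -2 \end{bmatrix}=\begin{bmatrix} 1 & 0\\ \tfrac13 & 1 \end{bmatrix}\begin{bmatrix} 1 & -\tfrac37\\ 0 & 1 \end{bmatrix}\begin{bmatrix} 3 & 0\\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0\\ 0 & -\tfrac73 \end{bmatrix},\qquad
A^{-1}=\begin{bmatrix} 1 & 0\\ 0 & -\tfrac37 \end{bmatrix}\begin{bmatrix} \tfrac13 & 0\\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & \tfrac37\\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0\\ -\tfrac13 & 1 \end{bmatrix}=\frac{1}{7}\begin{bmatrix} 2 & 1\\ 1 & -3 \end{bmatrix}.$$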


real analysis - Bijection from (0,1) to [0,1)




I'm trying to solve the following question:




Let $f:(0,1)\to [0,1)$ and $g:[0,1)\to (0,1)$ be maps defined as



$f(x)=x$ and $g(x)=\frac{x+1}{2}$. Use these maps to build a bijection
$h:(0,1)\to [0,1)$




I've already proved that these maps are injective, and I've been following other questions on the site, such as




Continuous bijection from $(0,1)$ to $[0,1]$



How to define a bijection between $(0,1)$ and $(0,1]$?



I think I can find such an $h$, but the problem is that we have to use only $f$ and $g$ to build $h$.



I need help.



Thanks a lot.



Answer



The slick answer is




Since each of $f$ and $g$ is clearly injective, apply the Cantor-Bernstein theorem to $f$ and $g$. This gives a bijection $(0,1)\to[0,1)$.




It may be implicit in the question that you're supposed to "crank the handle" on the proof of Cantor-Bernstein in order to find an explicit description of the particular bijection it produces. In that case you'd start by finding the ranges (not merely the domain) of $f$ and $g$, namely $(0,1)$ and $[\frac12,1)$.



What happens next depends on the details of the proof of Cantor-Bernstein you're working with, but in one that is easiest to apply here, we look for the part of $[0,1)$ that is not in the range of $f$, which is the singleton $\{0\}$. If we iterate $f\circ g$ on this, we get $A=\{1-2^{-n}\mid n\in \mathbb N\}$. Then the bijection $H:[0,1)\to(0,1)$ is

$$ H(x) = \begin{cases} g(x) & x\in A \\ f^{-1}(x) & x\notin A \end{cases} $$
Now unfold this definition and invert it to get $h$.
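(Unfolding it explicitly, as my own addition to the answer above: since $g(A)=\{1-2^{-m}\mid m\ge 1\}$, inverting $H$ gives
$$h(y)=\begin{cases} 2y-1 & \text{if } y=1-2^{-m}\text{ for some integer } m\ge 1,\\ y & \text{otherwise,}\end{cases}$$
which sends $\tfrac12\mapsto 0$, $\tfrac34\mapsto\tfrac12$, $\tfrac78\mapsto\tfrac34$, and so on, and fixes every other point of $(0,1)$; one can check directly that this is a bijection $(0,1)\to[0,1)$.)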


Thursday, January 18, 2018

exponential function - The integral $\int_0^\infty e^{-t^2}\,dt$





My high-school teacher and I have argued about the limit for quite a long time.



We easily reached the conclusion that the integral from $0$ to $x$ of $e^{-t^2}\,dt$ has a limit somewhere between $0$ and $\pi/2$, using a little trick, namely the inequality $e^t>t+1$ for every real $t$. Replacing $t$ with $t^2$, taking reciprocals, and integrating from $0$ to $x$ gives a beautiful $\tan^{-1}$, and the bound $\pi/2$ comes naturally.



Next, the limit seemed impossible to find. One week later, after some Google searches, I found out what the limit is. This usually spoils the thrill of a problem, but in this case it only added to the curiosity. My teacher then explained that modern approaches, like a computerised approximation, might have been used to find the limit, since the erf is not elementary. I have argued that the result was too beautiful to be only the result of computer brute force.



After a really vague introduction to Fourier series that he provided, I understood that Fourier, in a sense, generalised the first inequality, the one I used to get the bounds for the integral, with more terms of higher powers.



To be on point: I wish to find a simple proof that the limit is indeed $\sqrt\pi/2$, using the same concepts I am familiar with. I do not know what Fourier analysis really does, but I am open to any new information.




Thank you for your time; I appreciate it a lot. I am also sorry for not using proper mathematical symbols, since I am using the app.


Answer



It's useless outside of this one specific integral (and its obvious variants), but here's a trick due to Poisson:
\begin{align*}
\left(\int_{-\infty}^\infty dx\; e^{-x^2}\right)^2
&= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-x^2}e^{-y^2} \\
&= \int_{-\infty}^\infty \int_{-\infty}^\infty \;dx\;dy\; e^{-(x^2 + y^2)} \\
&= \int_0^{2\pi} \!\!\int_0^\infty \;r\,dr\;d\theta\; e^{-r^2} \\
&= -\pi e^{-r^2}\Big\vert_{r=0}^\infty \\
&= \pi,

\end{align*}
switching to polar coordinates halfway through. Taking square roots gives $\int_{-\infty}^\infty e^{-x^2}\,dx=\sqrt{\pi}$, and since the integrand is even, the given one-sided integral is $\frac{1}{2}\sqrt{\pi}$.


Properties of the finite field with $729$ elements



I am trying to solve the following problem: let $K$ be a finite field with $729$ elements.




  • How many $\alpha\in K$ make $K^* = \langle \alpha\rangle$?

  • How many fields $E$ are such that $K|E$ is a field extension? How many elements does each have?


  • How many $\beta\in K$ satisfy $K = \mathbb F_3[\beta]$?

  • How many irreducible polynomials of degree $2$, $3$ and $6$ are in $\mathbb F_3[t]$?



And I have argued as follows:



Since $K$ has $729$ elements, and $729 = 3^6$, it follows that $K\cong \mathbb F_{3^6}$, the finite field with $3^6$ elements, so every $E$ such that $K|E$ is a field extension needs to satisfy $|E| = 3^k$ with $k\mid 6$. So there are, up to isomorphism, $4$ such fields $E\subseteq K$, giving the extensions $K|\mathbb F_3$, $K|\mathbb F_{3^2}$, $K|\mathbb F_{3^3}$ and $K|\mathbb F_{3^6}$.



Also, since $K^*$ is cyclic of order $p^n-1$, we have $K^* = \langle u\rangle$ for some $u\in K^*$, and each element $\alpha\in K^*$ can be written $\alpha = u^k$. For $\alpha\in K^*$ to satisfy $K^* = \langle \alpha \rangle$, we need $\gcd(k,p^{n}-1)=1$, since
$$ \mathrm{order}(u^k) = \frac{\mathrm{order}(u)}{\gcd(k,\mathrm{order}(u))} $$

so the number of $\alpha$'s with this property is $\varphi(728) = 288$.



This should give all the elements such that $K=\mathbb F_3[\alpha]$, since every element $\beta\in K$ except $0$ satisfies $\beta = \alpha^{k}\in \mathbb F_3[\alpha]$ and $|K|=|\mathbb F_3[\alpha]|$ (is this right?)



For the last part, I only know how to count the monic irreducible polynomials; I don't know how to count all irreducible polynomials of those degrees.



I would appreciate some hints or help. Thanks in advance.


Answer



Most of it looks good, except for the last part. You don't need $\beta$ to be a generator for $\mathbb{F}_{3}[\beta] = \mathbb{F}_{3}[\alpha]$. What you need is for the minimal polynomial of $\beta$ over $\mathbb{F}_{3}$ to have degree $6$. Equivalently, you need $\beta$ not to be in any proper subfield of $K$.




To count the number of irreducible polynomials, use the fact that the irreducible polynomials of these degrees in $\mathbb{F}_{3}[t]$ are going to factor completely in $K$. For example, an irreducible polynomial of degree $2$ is going to have two distinct conjugate roots in $\mathbb{F}_{3^{2}} \setminus \mathbb{F}_{3}$, so there should be $(9-3)/2$ of them (that are monic).
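(Filling in the numbers, as my own addition, using the standard inclusion-exclusion over subfields: the elements $\beta$ with $\mathbb{F}_3[\beta]=K$ are exactly those lying in no proper subfield, and the proper subfields are $\mathbb{F}_3$, $\mathbb{F}_{3^2}$ and $\mathbb{F}_{3^3}$, the last two meeting in $\mathbb{F}_3$, so there are $3^6-3^3-3^2+3=696$ such $\beta$. Each monic irreducible polynomial of degree $6$ over $\mathbb{F}_3$ has exactly $6$ of them as roots, giving $696/6=116$ monic irreducibles of degree $6$; likewise $(3^2-3)/2=3$ of degree $2$ and $(3^3-3)/3=8$ of degree $3$. If non-monic polynomials are to be counted too, multiply each count by $2$, the number of possible nonzero leading coefficients.)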


calculate the limit of this sequence $\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}}$






I am trying to calculate the limit of $a_n:=\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\cdots}}}}$, where $a_0:=1$ and $a_{n+1}:=\sqrt{1+a_n}$. I am badly stuck, not knowing how to find the limit of this sequence or where to start the proof. I did some calculations but still cannot figure out the formal way of finding the limit. What I tried is writing it as
$$(1+(1+(1+\cdots)^\frac{1}{2})^\frac{1}{2})^\frac{1}{2},$$ but I am totally stuck here.


Answer



We show (inductively) the following properties for the sequence given by $a_{n+1} = \sqrt{1 + a_n}$, $a_0 =1$:


  1. $a_n \ge 0$ for all $n\in \Bbb N$

  2. $(a_n)$ is monotonically increasing

  3. $(a_n)$ is bounded above by $2$

Then by the Monotone Convergence Theorem the sequence converges, hence the limit of the sequence exists. Let $\lim a_{n} = a$; then $\lim a_{n+1} = a$ as well. Using the Algebraic Limit Theorem, we get


$$ \lim a_{n+1} = \sqrt{1 + \lim a_n} \implies a = \sqrt {1 + a} $$



Solving the above equation gives the limit. Also, we note from the Order Limit Theorem that $a_n \ge 0 \implies \lim a_n \ge 0$.
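(To make the last step explicit, as my own addition: $a=\sqrt{1+a}$ gives $a^2-a-1=0$, so $a=\frac{1\pm\sqrt5}{2}$; since the limit is nonnegative, $a=\frac{1+\sqrt5}{2}$, the golden ratio.)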


Wednesday, January 17, 2018

intuition - Dominoes and induction, or how does induction work?


I've never really understood why math induction is supposed to work.


You have these 3 steps:



  1. Prove true for base case (n=0 or 1 or whatever)





  2. Assume true for n=k. Call this the induction hypothesis.




  3. Prove true for n=k+1, somewhere using the induction hypothesis in your proof.



In my experience the proof is usually algebraic, and you just manipulate the problem until you get the induction hypothesis to appear. If you can do that and it works out, then you say the proof holds.


Here's one I just worked out,


Show $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^n}{x} = 0$


So you go:




  1. Use L'Hospital's rule: $\displaystyle\lim_{x\to\infty} \frac{\ln x}{x} = 0$, since after one application it becomes $\displaystyle\lim_{x\to\infty} \frac{1}{x} = 0$.




  2. Assume true for $n=k$. $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^k}{x} = 0$.




  3. Prove true for $n=k+1$. You get $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^{k+1}}{x} = 0.$



Use L'Hospital again: $\displaystyle\lim_{x\to\infty} \frac{(k+1)(\ln x)^{k}}{x} = 0$.



Then you see the induction hypothesis appear, and you can say this is equal to $0$.


What I'm not comfortable with is this idea that you can just assume something to be true ($n=k$), and then, based on that assumption, form a proof for the $n=k+1$ case.


I don't see how you can use something you've assumed to be true to prove something else to be true.


Answer



The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.


It is a result of formal logic that if you can prove $P\rightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $P\rightarrow Q$.


We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").


We do that in Calculus: "if $\lim\limits_{x\to x_0}f(x) = a$ and $\lim\limits_{x\to x_0}g(x) = b$, then $\lim\limits_{x\to x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $x\to x_0$ is $a$, and that the limit of $g(x)$ as $x\to x_0$ is $b$. We assume the premise/antecedent, and proceed to try to prove the consequent.


What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds: $$\mbox{"It" holds for $k$}\rightarrow \mbox{"it" holds for $k+1$},$$ then in order to prove this implication we can begin by assuming that the antecedent is already true, and then proceed to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".


When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.



It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).


Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction as a row of standing dominoes falling. The inductive step is like arguing that all the dominoes will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since its length is ...", that is, you assume it falls in order to argue that the next one will then fall. This is the same with the inductive step.


In a sense you are right that it feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.



Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.

Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.


To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED


This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.



By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.


Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.


The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
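(A sketch of how that induction might go, added by me and not part of the original answer: induct on the number $n\ge 1$ of matchsticks. If $1\le n\le 3$, the first player simply takes all of them and wins. Now assume the claim for every number of matchsticks below $n$. If $4\mid n$, then whatever $k\in\{1,2,3\}$ the first player takes, $n-k$ is not divisible by $4$, so by the induction hypothesis the opponent, who now moves first, has a winning strategy; hence the second player wins. If $4\nmid n$, the first player takes $n\bmod 4$ matchsticks, leaving a multiple of $4$ for the opponent, and wins by the previous case.)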



logarithms - Inequality $\log x\le \frac{2}{e} \, \sqrt{x}$



The inequality $$\log x \le \frac{2}{e} \, \sqrt{x},$$ where $\log x$ denotes the natural logarithm, is used in the proof of Theorem 4.7 in Apostol's Analytic Number Theory.



It seems that the inequality is not very difficult to prove using calculus. We could simply find maximum/minimum of some function like $f(x)= \frac2e \sqrt{x} - \log x$ or $g(x)=\frac{\log x}{\sqrt x}$.



Are there some other methods how this inequality can be proved? Is there a way in which this inequality can be seen more directly, without having to calculate critical points of some auxiliary function?


Answer




With the substitution $x = e^{2(u+1)}$ the inequality
$$
\log x \le \frac{2}{e} \, \sqrt{x}
$$ becomes
$$
e^u \ge 1 + u \tag{*}
$$
which is a well-known estimate for the exponential function.
Equality holds if and only if $u = 0$, corresponding to
$x = e^2$ in the original inequality.
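(Checking the substitution, as my own remark: if $x=e^{2(u+1)}$ then $\log x = 2(u+1)$ and $\frac2e\sqrt x = \frac2e\,e^{u+1} = 2e^u$, so the original inequality is exactly $2(u+1)\le 2e^u$, i.e. $e^u\ge 1+u$.)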




$(*)$ is trivial for $u \le -1$ and can for example be shown using
the Taylor series for $u > -1$. It also follows – as Jack said in a comment –
from the convexity of the exponential function: the graph lies above
the tangent line at $u = 0$.



(This approach was inspired by Jack D'Aurizio's answer.)


integration - Treatise on non-elementary integrable functions



All of us mathematicians, after some time (and trial and error, of course), are able to guess with reasonable accuracy whether or not a given function is elementary integrable (test yourself:
$$\int\frac1{x\sin\bigl(\frac1x\bigr)}\,dx\quad\text{vs.}\quad\int\frac1{x^2\sin\bigl(\frac1x\bigr)}\,dx\ ;$$




surely the readers can give a lot more challenging and interesting examples).
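(As an aside of my own, and hopefully in the spirit of the "test yourself": the second integral is elementary, since the substitution $t=1/x$ turns it into $-\int\frac{dt}{\sin t}=-\log\left|\tan\frac t2\right|+C$, so $\int\frac{dx}{x^2\sin(1/x)}=-\log\left|\tan\frac1{2x}\right|+C$; the first one, presumably, is the member of the pair with no elementary antiderivative.)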



I would like to know what is the most comprehensive work (survey, book, whatever) dealing with the theory of integration in elementary terms. I know about the pioneering work of Liouville, as well as the classic paper by Rosenlicht, but what else? What about allowing certain "VIP" non-elementary functions (the erf function, for example)?


Answer



Here is the list of references I've seen on the topic of elementary integration. It is eclectic, and not intended to be complete, but contains most of what seem to be the relevant benchmarks, and several expository accounts in varying degrees of detail. Sadly, I am not familiar with Liouville's or Ostrowski's original papers. (Perhaps I'll use this as an excuse to track them down.)




  1. MR0223346 (36 #6394). Rosenlicht, Maxwell. Liouville's theorem on functions with elementary integrals. Pacific J. Math. 24 (1968), 153–161.


  2. MR0237477 (38 #5759). Risch, Robert H. The problem of integration in finite terms. Trans. Amer. Math. Soc. 139 (1969), 167–189.


  3. MR0269635 (42 #4530). Risch, Robert H. The solution of the problem of integration in finite terms. Bull. Amer. Math. Soc. 76, (1970), 605–608.



  4. MR0321914 (48 #279). Rosenlicht, Maxwell. Integration in finite terms. Amer. Math. Monthly 79 (1972), 963–972.


  5. MR0409427 (53 #13182). Risch, Robert H. Implicitly elementary integrals. Proc. Amer. Math. Soc. 57 (1), (1976), 1–7.


  6. MR0536040 (81b:12029). Risch, Robert H. Algebraic properties of the elementary functions of analysis. Amer. J. Math. 101 (4), (1979), 743–759.


  7. MR0815235 (87a:12009). Richtmyer, R. D. Integration in finite terms: a method for determining regular fields for the Risch algorithm. Lett. Math. Phys. 10 (2-3), (1985), 135–141.


  8. Matthew P Wiener. Elementary integration and $x^x$. Sci.Math post. February 21, 1995. (The pdf version was typed by Apollo Hogan).


  9. Manuel Bronstein. Symbolic Integration Tutorial. "Course notes of an ISSAC (International Symposium on Symbolic and Algebraic Computation) '98 tutorial."


  10. MR1960772 (2004c:12010). Van der Put, Marius; Singer, Michael F. Galois theory of linear differential equations. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 328. Springer-Verlag, Berlin, 2003. xviii+438 pp. ISBN: 3-540-44228-6.


  11. MR2106657 (2005i:68092). Bronstein, Manuel. Symbolic integration. I. Transcendental functions. Second edition. With a foreword by B. F. Caviness. Algorithms and Computation in Mathematics, 1. Springer-Verlag, Berlin, 2005. xvi+325 pp. ISBN: 3-540-21493-3.


  12. Brian Conrad. Integration in elementary terms. Unpublished note. (2011?).



  13. Moshe Kamensky. Differential Galois theory. "An introduction to Galois theory of linear differential equations."




Bronstein's book in particular is highly recommended.


Tuesday, January 16, 2018

linear algebra - Eigenvalues of an $n\times n$ matrix without calculations




I have a question about the following matrix:



$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
\end{bmatrix}

$$



Find the eigenvalues without calculations and justify your answer. Now, I was thinking about this problem, and I thought: OK, if you try the vector $(1,1,1)$, you can find $6$ as one eigenvalue (and I know you have the eigenvalue $0$ with multiplicity two as well). But then you are doing sort of guessing/calculation work.



I see that the columns are linearly dependent. So I know the dimension of the column space and of the null space.



Thank you in advance.



EDIT: follow up question:




OK, so you find that the dimension of the null space is $2$, so there are $2$ independent eigenvectors for the eigenvalue $0$. Now my question is: can the dimension of the eigenspace be bigger than the number of eigenvalues (counted with multiplicity)? I guess not. I know it can be smaller.


Answer



Notice that the rank is $1$, and hence $0$ is an eigenvalue of multiplicity $2$.
Since the trace equals the sum of the eigenvalues, the remaining eigenvalue is $6$.



It is also rather easy to find all eigenvectors without a lot of work. For $6$ the vector is $(1,1,1)$. For $0$ you can take basis $(2,-1,0),(3,0,-1)$.
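(One more way to see it, added by me: $A=vw^{T}$ with $v=(1,1,1)^{T}$ and $w=(1,2,3)^{T}$; for any rank-one matrix $vw^{T}$ the only possibly nonzero eigenvalue is $w^{T}v$, with eigenvector $v$, here $1+2+3=6$, while the kernel $\{x: w^{T}x=0\}$ is two-dimensional and accounts for the eigenvalue $0$ twice.)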


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...