Tuesday, February 28, 2017

Find the sum of the infinite series $\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+\cdots$


Find the sum of the series $\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+\cdots$. This type of question generally requires a trick of some kind, and I am not able to figure it out. My guess is that it has something to do with the exponential series or the binomial series. Any help?



Answer



Sorry guys, got it.
$\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot4\cdot5\cdot6}+...=\frac{1}{2}\cdot\frac{1}{1!}+\frac{1}{2^2}\cdot\frac{1}{2!}+\frac{1}{2^3}\cdot\frac{1}{3!}+... = e^\frac{1}{2}-1.$
The first equality holds after cancelling the common factors in each numerator and denominator: the $n$-th term is $\frac{1\cdot3\cdots(2n-1)}{(2n)!}=\frac{1}{2^n\,n!}$.
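As a quick numerical sanity check (an illustrative Python sketch, not part of the original solution): after cancellation the $n$-th term is $\frac{1}{2^n\,n!}$, so the partial sums should approach $e^{1/2}-1\approx 0.64872$.

```python
import math

def term(n):
    # n-th term: (1*3*...*(2n-1)) / (2n)!  ==  1 / (2^n * n!)
    odd_product = math.prod(range(1, 2 * n, 2))
    return odd_product / math.factorial(2 * n)

partial = sum(term(n) for n in range(1, 20))
target = math.sqrt(math.e) - 1
# the two agree to machine precision after ~20 terms
```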


algebra precalculus - Problem on multiplication formulae.



Given $a^3 + b^{3}+ c^{3}= (a+b+c)^{3} $.



Prove that for any natural number $n$,
$$a^{2n+1}+b^{2n+1}+c^{2n+1}=(a+b+c)^{2n+1}$$



I first tried mathematical induction but did not proceed anywhere.
Can the formula $(a+b+c)^{3}=a^{3}+b^{3}+c^{3}+3(a+b)(b+c)(c+a)$ be directly used?

Can this problem be solved using mathematical induction?


Answer



Solution 1
Using Induction



We have $$(a+b+c)^{3}=a^{3}+b^{3}+c^{3}+3(a+b)(b+c)(c+a)\implies(a+b)(b+c)(c+a)=0$$



Now, for $k$ we have



$$(a+b+c)^{2k+1}=a^{2k+1}+b^{2k+1}+c^{2k+1}$$

For $(k+1)$
$$(a+b+c)^{2k+3}=(a+b+c)^2(a+b+c)^{2k+1}=(a+b+c)^2(a^{2k+1}+b^{2k+1}+c^{2k+1})$$



$$(a+b+c)^{2k+3}=a^{2k+3}+b^{2k+3}+c^{2k+3}+(a+b)(b+c)(c+a)(\text{some factor})$$



(I've left the job of finding the factor to you; expand and observe the pattern.)



Using $(a+b)(b+c)(c+a)=0$
$$(a+b+c)^{2k+3}=a^{2k+3}+b^{2k+3}+c^{2k+3}$$




Solution 2



$$(a+b+c)^{3}=a^{3}+b^{3}+c^{3}+3(a+b)(b+c)(c+a)\implies(a+b)(b+c)(c+a)=0$$
So at least one of $a+b$, $b+c$, $c+a$ equals $0$.



$\implies (a+b+c)=a$ or $(a+b+c)=b$ or $(a+b+c)=c$



In any case, because the exponent $2n+1$ is odd, the vanishing pair cancels (e.g. if $a+b=0$ then $a^{2n+1}+b^{2n+1}=0$), so $$(a+b+c)^{2n+1}=a^{2n+1}+b^{2n+1}+c^{2n+1}$$
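To see Solution 2 in action numerically (hypothetical sample values chosen so that $a+b=0$; this is only an illustration):

```python
# With a + b = 0 the hypothesis a^3 + b^3 + c^3 = (a+b+c)^3 holds,
# and every odd-power identity follows because a^(2n+1) = -b^(2n+1).
a, b, c = 2, -2, 5
assert a**3 + b**3 + c**3 == (a + b + c)**3   # the given condition
for n in range(1, 10):
    e = 2 * n + 1
    assert a**e + b**e + c**e == (a + b + c)**e
```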


algebra precalculus - How to prove an Inequality



I'm a beginner with proofs and I got the following exercise:





Prove the inequality $$(a + b)\Bigl(\frac{1}{a} + \frac{4}{b}\Bigr) \ge 9$$ when $a > 0$ and $b > 0$. Determine when the equality occurs.




I'm lost; could you guys give me a tip on where to start, or maybe point me to a good resource for beginners in proofs?



Thanks in advance.


Answer



The first thing to do is simplify the expression on the lefthand side of the inequality.




$$(a + b)\left(\frac{1}{a} + \frac{4}{b}\right)=\frac{(a+b)(4a+b)}{ab}=\frac{4a^2+5ab+b^2}{ab}=\frac{4a}b+5+\frac{b}a\;.$$



Now notice that the resulting expression contains both $\frac{a}b$ and $\frac{b}a$; this is an indication that it might simplify matters to introduce a new quantity, $x=\frac{a}b$, and rewrite the inequality as $$4x+5+\frac1x\ge 9\;,$$ or $$4x+\frac1x\ge 4\;.\tag{0}$$



The natural thing to do now is to multiply through by $x$ to get rid of the fraction, but be careful: since this is an inequality, the sign of $x$ matters. If $x\ge 0$ we get $4x^2+1\ge 4x$, or $4x^2-4x+1\ge 0$, but if $x<0$ we get $4x^2+1\le 4x$, or $4x^2-4x+1\le 0$. In either case, though, we recognize that $4x^2-4x+1=(2x-1)^2$, so either $$x\ge 0\quad\text{and}\quad(2x-1)^2\ge 0\tag{1}$$ or $$x<0\quad\text{and}\quad(2x-1)^2\le 0\;.\tag{2}$$



Now $(2)$ is impossible: $(2x-1)^2\le 0$ if and only if $(2x-1)^2=0$, in which case $2x=1$, $x=\frac12$, and $x\not<0$. Thus, any solution must come from $(1)$: $x>0$, and $(2x-1)^2\ge 0$. If $2x\ne 1$, then $2x-1\ne0$, so $(2x-1)^2>0$, and we have a solution. If $2x=1$, then $x=\frac12>0$ and $(2x-1)^2=0\ge0$, and again we have a solution. In short, every positive $x$ is a solution, no negative $x$ is a solution, and $x$ can’t be $0$. (Why not?) We could actually have discovered this just by looking more closely at $(0)$, but one won’t always have so nice an inequality as that.



What does this mean in terms of $a$ and $b$? Recall that $x=\dfrac{a}b$; thus, $x>0$ if and only if $\dfrac{a}b>0$, which is true if and only if $a$ and $b$ have the same algebraic sign: both are positive, or both are negative. Since we were told that both are positive, we know that the inequality holds for all $a$ and $b$ in the given domain.




Finally, we have equality in $(0)$ if and only if $4x^2-4x+1=0$, or $(2x-1)^2=0$, i.e., if and only if $x=\frac12$. Since $x=\frac{a}b$, that’s equivalent to $\frac{a}b=\frac12$, or $b=2a$.
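A brute-force check of both the inequality and the equality case $b=2a$ (an illustrative sketch, not a substitute for the proof above):

```python
# scan a grid of positive a, b and confirm (a+b)(1/a + 4/b) >= 9
vals = [0.1 * k for k in range(1, 50)]
for a in vals:
    for b in vals:
        assert (a + b) * (1 / a + 4 / b) >= 9 - 1e-9

# equality exactly when b = 2a (equivalently x = a/b = 1/2)
a = 1.7
b = 2 * a
gap = (a + b) * (1 / a + 4 / b) - 9
```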


real analysis - $f(x+y)=f(x)f(y)$ for all $x,y$ . Show that $f'(x)=f'(0)f(x)$ and determine the value of $f'(0)$



The problem: suppose $f$ is differentiable everywhere and $f(x+y)=f(x)f(y)$ for all $x,y$. Show that $f'(x)=f'(0)f(x)$ and determine the value of $f'(0)$.



I can show $f'(x)=f'(0)f(x)$



but I don't know how to determine the value of $f'(0)$.




Please help!


Answer



It's impossible to determine the value of $f'(0)$ from the information given; perhaps you left something out, perhaps the question was stated somewhat differently, or perhaps it's a bad question.



If $a$ is any real number and $f(t)=e^{at}$ then $f$ is differentiable, $f(x+y)=f(x)f(y)$, and $f'(0)=a$. So the information given literally says nothing at all about the value of $f'(0)$; that value can be anything.


Monday, February 27, 2017

calculus - Deconstructing $0^0$





It is well known that $0^0$ is an indeterminate form. One way to see that is noticing that


$$\lim_{x\to0^+}\;0^x = 0\quad,$$


yet,


$$\lim_{x\to0}\;x^0 = 1\quad.$$


What if we make both terms go to $0$, that is, how much is


$$L = \lim_{x\to0^+}\;x^x\quad?$$



By taking $x\in \langle 1/k\rangle_{k\in\mathbb{N}^*}\,$, I concluded that it equals $\lim_{x\to\infty}\;x^{-1/x}$, but that's not helpful.


Answer



This is, unfortunately, not very exciting. Rewrite $x^x$ as $e^{x\log x}$ and take that limit. One l'Hôpital later, you get 1.
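Numerically the limit is easy to see as well (illustration only):

```python
# x**x = exp(x * log x), and x*log(x) -> 0 as x -> 0+, so x**x -> 1
samples = [0.1, 0.01, 0.001, 1e-6]
values = [x ** x for x in samples]
# the values climb toward 1 as x shrinks
assert values == sorted(values)
```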


algebra precalculus - Proving that a series is bounded from above through induction.


I am required to prove that the sequence $$a_1=0,\quad a_{n+1}=(a_n+1)/3,\quad n \in \mathbb{N}$$ is bounded from above and monotonically increasing, using induction, and to calculate its limit. Proving that it's monotonically increasing was simple enough, but I don't quite understand how I can prove through induction that it's bounded from above, although I can see that it is bounded. It's a fairly new topic to me. I would appreciate any help on this.


Answer



See that for any $a_n<\frac12$, we have


$$a_{n+1}=\frac{a_n+1}3<\frac{\frac12+1}3=\frac12$$


Thus, it is proven that since $a_1<\frac12$, then $a_2<\frac12$, etc., by induction.



We choose $\frac12$ since solving the fixed-point equation $a_{n+1}=a_n$ gives $a_n=\frac12$, the limit of our sequence.
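Iterating the recurrence shows both the invariant $a_n<\frac12$ and the limit (an illustrative sketch; exact fractions avoid any rounding issues):

```python
from fractions import Fraction

a = Fraction(0)                  # a_1 = 0
for _ in range(60):
    assert a < Fraction(1, 2)    # the bound proved by induction above
    a = (a + 1) / 3              # a_{n+1} = (a_n + 1) / 3
# the exact distance to the limit shrinks by a factor of 3 each step
err = Fraction(1, 2) - a
```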


Saturday, February 25, 2017

discrete mathematics - Induction on a sum



The left hand side has terms involving
$\binom{n}{m}= \dfrac{n!}{(n-m)!m!}$



$$1+\dfrac{1}{2}\binom{n}{1} +\frac{1}{3}\binom{n}{2}+\cdots+\frac{1}{n+1}\binom{n}{n} = \dfrac{2^{n+1}-1}{n+1}$$




I've done induction and proved $P(0)$ holds and also $P(1),P(2)$ holds (Just in case)



Now I've found $P(k+1)$, so the sum on the left is going to equal $\dfrac{2^{n+1}-1}{n+1}$ + $\dfrac{1}{n+2}$



and the right hand side is going to equal $\dfrac{2^{n+2}-1}{n+2}$



My problem is coming with the algebra though. I can't get those sides to equal. I can't get rid of the $n+1$ in the denominator.



Anyone have any ideas for me? They'd be much appreciated!




P.S. Someone gave me a hint and said that you can use the binomial theorem to solve this.


Answer



Recall that
$$(1+x)^n = \sum_{k=0}^n \dbinom{n}k x^k$$
Integrating this from $0$ to $1$, gives us
$$\left. \dfrac{(1+x)^{n+1}}{n+1} \right \vert_{0}^1 = \sum_{k=0}^n \dbinom{n}k \left. \dfrac{x^{k+1}}{k+1} \right \vert_0^1$$
Hence, we get what you want.
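The identity coming out of the integration can be confirmed numerically for small $n$ (illustration only):

```python
from math import comb

# check sum_{k=0}^{n} C(n,k)/(k+1) == (2^(n+1) - 1)/(n+1)
for n in range(0, 15):
    lhs = sum(comb(n, k) / (k + 1) for k in range(n + 1))
    rhs = (2 ** (n + 1) - 1) / (n + 1)
    assert abs(lhs - rhs) < 1e-9
```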


Friday, February 24, 2017

Sum of numbers between consecutive multiple numbers of $N$ proof


I need to see if I can generalize a proof: whether the sum of all numbers strictly between two consecutive multiples of $N$, where $N$ is a natural number with $N > 2$, is a multiple of $N$.


I started by taking two consecutive multiples of $N$:


$N$, and $Nn + N$


The sum of the numbers between them is:



$\ (Nn + 1) + (Nn + 2) + (Nn + 3) \ldots (Nn + (N - 3)) + (Nn + (N - 2)) + (Nn + (N - 1)) = N^2n + N$


But


$\ N^2n + N = {N(Nn + 1)}$



Edit:


I didn't sum correctly, the actual sum is:


$\ (2Nn+N)(N-1)/2 $



So the sum of all the numbers between the consecutive multiples of $N$, is a multiple of $N$.


I don't know if the process so far is good, and another problem I have is how to include the fact that $N$, has to be a natural number bigger than 2. How should I go about this?



Thanks in advance.


Answer



Your two consecutive multiples of $N$ should be $Nn$ and $Nn+N$. The sum of all the numbers between them is then $(N-1)Nn+\sum_{i=1}^{N-1}i=N(N-1)n+\frac 12(N-1)N$; you lost the factor $\frac 12$. If $2$ does not divide $N-1$, your proposition will fail. For example, let $N=4$, $n=3$. The numbers between $12$ and $16$ are $13,14,15$, with sum $42$, which is not divisible by $4$.
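Direct computation confirms both the corrected sum and the parity condition (an illustrative sketch; by the formula above, the sum is a multiple of $N$ exactly when $N$ is odd):

```python
for N in range(3, 12):
    for n in range(0, 5):
        between = range(N * n + 1, N * n + N)   # strictly between Nn and Nn+N
        s = sum(between)
        assert s == N * (N - 1) * n + (N - 1) * N // 2   # the corrected formula
        assert (s % N == 0) == (N % 2 == 1)              # multiple of N iff N odd
```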


Real Analysis: Use Intermediate value Thm to prove the functions




Let $f \colon [0,2] \to \Bbb R$ be a continuous function such that $f(0) = f(2)$. Use the intermediate value theorem to prove that there exist numbers $x, y \in [0, 2]$ such that $f (x) = f (y)$ and $|x − y| = 1$.



Hint: Introduce the auxiliary function $g \colon [0, 1] \to \Bbb R$ defined by $g(x) = f (x + 1) − f (x)$.





I still do not know how to prove it. Could anyone help?


Answer



The given function $\;g\;$ is continuous in $\;[0,1]\;$ and



$$\begin{cases}g(0)=f(1)-f(0)\\{}\\g(1)=f(2)-f(1)\end{cases}\implies g(0)g(1)<0\;\;\text{, unless}\;f(1)=f(0)\;\;\text{(why?)}$$



and so by the IVT there exists $\;c\in (0,1)\;$ s.t. $\;g(c)=0\;$... now finish the exercise.
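As a concrete illustration (with a made-up example $f$ satisfying $f(0)=f(2)$; this is not part of the proof), bisection locates the point $c$ promised by the IVT:

```python
import math

f = lambda x: math.cos(math.pi * x)   # example: f(0) = f(2) = 1
g = lambda x: f(x + 1) - f(x)         # the auxiliary function from the hint

lo, hi = 0.0, 1.0
assert g(lo) * g(hi) < 0              # opposite signs, so the IVT applies
for _ in range(60):                   # plain bisection
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
# f(c+1) == f(c) with |(c+1) - c| == 1, as required
```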


real analysis - Solving Basel Problem using euler infinite product and infinite product-sum equality

Trying to prove the Basel problem through the equality $\sin(x) = x\prod\limits_{k=1}^{+\infty}\left(1-\frac{x^{2}}{\pi^{2}k^{2}}\right)$,



I came across the following problem;




I was able to prove the following equality by induction in a finite case, I'd like to prove the general one,which is,for $\{a_{k}\} \subseteq \mathbb{R}$ :



$$\prod_{k \in \mathbb{N}_{+}}(1+a_{k}) = \sum\limits_{n \in \mathbb{N}}\Bigl(\sum\limits_{\substack{J \in F(\mathbb{N}_{+}) \\ \lvert J \rvert = n}} \prod_{k \in J}a_{k}\Bigr)$$



Where $F(\mathbb{N}_{+})$ denotes the finite subsets of $\mathbb{N}_{+}$.



Any tip,suggestion or sketch of the proof would be appreciated.

number theory - Showing equivalence in modular arithmetic




Prove for all odd $n \in \mathbb{Z}$: $a^{3(n+1)^2+1} \equiv a$ mod $21$.



I started this way: $21 = 3\cdot 7$ and $\gcd(3,7) = 1$. So
\begin{cases} x \equiv a^{3(n+1)^2+1} \, \text{mod} \, 7 \\ x \equiv a^{3(n+1)^2+1} \, \text{mod} \, 3 \end{cases}
I was trying to use the following theorem to solve the equation with mod $3$: $a^p \equiv a$ mod $p$ with $p$ a prime number. But I have no idea how to solve the first equation (the one with mod $7$) and how to use the fact that $n$ has to be an odd number.



Thanks in advance!


Answer



If $n$ is odd then $n+1$ is even, so $4\mid(n+1)^2$ and $12\mid 3(n+1)^2$, hence $3(n+1)^2 +1\equiv 1\pmod {12}$. (Write $3(n+1)^2 + 1 = 12k + 1$.)




By Fermat's little theorem $a^{3(n+1)^2+1} = a^{12k}a\equiv 1^{2k}a \equiv a$ if $7\not \mid a$ and if $7|a$ then $a^{3(n+1)^2+1}\equiv 0 \equiv a\pmod 7$.



Likewise $a^{3(n+1)^2+1} = a^{12k}a\equiv 1^{6k}a \equiv a$ if $3\not \mid a$ and if $3|a$ then $a^{3(n+1)^2+1}\equiv 0 \equiv a\pmod 3$



So $a^{3(n+1)^2 +1}\equiv a$ both $\pmod 3$ and $\pmod 7$ if $n$ is odd.



And since $a^{3(n+1)^2+1} \equiv a\pmod 3$ and $a^{3(n+1)^2+1} \equiv a \pmod 7$, then $a^{3(n+1)^2+1}\equiv a \pmod {3\cdot 7}$ by the Chinese remainder theorem.



That is to say: $a\equiv a\pmod {21}$ is certainly a solution to $x\equiv a\pmod 3$ and $x\equiv a\pmod 7$ because $a \equiv a$ modulo anything, and the CRT says that is the only solution $\bmod\ 21$.
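The congruence is easy to confirm by brute force (illustrative check):

```python
# check a^(3(n+1)^2 + 1) ≡ a (mod 21) for every residue a and odd n
for n in range(1, 40, 2):             # odd n only
    e = 3 * (n + 1) ** 2 + 1          # the exponent
    assert e % 12 == 1                # e ≡ 1 (mod 12), as in the answer
    for a in range(21):
        assert pow(a, e, 21) == a
```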


Thursday, February 23, 2017

inequality - Prove $\min\left(a+b+\frac1a+\frac1b\right) = 3\sqrt{2}$ given $a^2+b^2=1$


Prove that $$ \min\left(a+b+\frac1a+\frac1b\right) = 3\sqrt{2}$$ Given $$a^2+b^2=1 \quad(a,b \in \mathbb R^+)$$ Without using calculus.
$\mathbf{My\ Attempt}$
I tried the AM-GM, but this gives $\min = 4 $.


I used Cauchy-Schwarz to get $\quad (a+b)^2 \le 2(a^2+b^2) = 2\quad \Rightarrow\quad a+b\le \sqrt{2}$
But using Titu's Lemma I get $\quad \frac1a+\frac1b \ge \frac{4}{a+b}\quad \Rightarrow\quad \frac1a+\frac1b \ge 2\sqrt{2}$
I'm stuck here, any hint?


Answer



Notice $$\begin{align} a + b + \frac1a + \frac1b = &\; \left(a + \frac{1}{2a} + \frac{1}{2a}\right) + \left(b + \frac{1}{2b} + \frac{1}{2b}\right)\\ \\ \color{blue}{\rm AM \ge \rm GM \rightarrow\quad} \ge &\; 3\left[\left(\frac{1}{4a}\right)^{1/3} + \left(\frac{1}{4b}\right)^{1/3}\right]\\ \color{blue}{\rm AM \ge \rm GM \rightarrow\quad} \ge &\; 6 \left(\frac{1}{16ab}\right)^{1/6}\\ \color{blue}{a^2 + b^2 \ge 2 ab \rightarrow\quad} \ge &\; 6 \left(\frac{1}{8(a^2+b^2)}\right)^{1/6}\\ = &\; 6 \left(\frac18\right)^{1/6} = \frac{6}{\sqrt{2}} = 3\sqrt{2} \end{align} $$ Since the value $3\sqrt{2}$ is achieved at $a = b = \frac{1}{\sqrt{2}}$, we have


$$\min \left\{ a + b + \frac1a + \frac1b : a, b > 0, a^2+b^2 = 1 \right\} = 3\sqrt{2}$$


Notes



About the question how do I come up with this. I basically know the minimum is achieved at $a = b = \frac{1}{\sqrt{2}}$. Since the bound $3\sqrt{2}$ on RHS of is optimal, if we ever want to prove the inequality, we need to use something that is tight when $a = b$. If we want to use AM $\ge$ GM, we need to arrange the pieces so that all terms are equal. That's why I split $\frac1a$ into $\frac{1}{2a} + \frac{1}{2a}$ and $\frac1b$ into $\frac{1}{2b} + \frac{1}{2b}$ and see what happens. It just turns out that works.
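A numerical scan over the constraint set backs this up (an illustrative sketch; the parametrization $a=\sin t$, $b=\cos t$ is my own choice):

```python
import math

# parametrize a^2 + b^2 = 1 with a = sin(t), b = cos(t), t in (0, pi/2)
best = float("inf")
for k in range(1, 2000):
    t = k * math.pi / 4000
    a, b = math.sin(t), math.cos(t)
    best = min(best, a + b + 1 / a + 1 / b)
# the minimum over the grid sits at t = pi/4, i.e. a = b = 1/sqrt(2)
```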


calculus - Showing $\frac{x}{1+x}<\log(1+x)<x$

I want to show that $$\frac{x}{1+x}<\log(1+x)<x$$ for $x>0$ using the mean value theorem. I tried to prove the two inequalities separately.



$$\frac{x}{1+x}<\log(1+x) \Leftrightarrow \frac{x}{1+x} -\log(1+x) <0$$




Let $$f(x) = \frac{x}{1+x} -\log(1+x).$$ Since $$f(0)=0$$ and $$f'(x)= \frac{1}{(1+x)^2}-\frac{1}{1+x}<0$$ for all $x > 0$, $f(x)<0$ for all $x>0$. Is this correct so far?



I go on with the second part:
Let $f(x) = \log(x+1)$. Choose $a=0$ and $x>0$ so that there is, according to the mean value theorem, an $x_0$ between $a$ and $x$ with



$f'(x_0)=\frac{f(x)-f(a)}{x-a} \Leftrightarrow \frac{1}{x_0+1}=\frac{ \log(x+1)}{x}$.



Since $$x_0>0 \Rightarrow \frac{1}{x_0+1}<1.$$ $$\Rightarrow 1 > \frac{1}{x_0+1}= \frac{ \log(x+1)}{x} \Rightarrow x> \log(x+1)$$
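Both inequalities can be spot-checked numerically (illustration only; this is not a proof):

```python
import math

# x/(1+x) < log(1+x) < x for all x > 0
for k in range(1, 1000):
    x = k / 100.0
    assert x / (1 + x) < math.log(1 + x) < x
```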

functional equations - How to find such a function $f:3\mathbb{N}+1\to 4\mathbb{N}+1$

How to find a bijective function $f: 3\mathbb{N}+1\to 4\mathbb{N}+1$ such that $$f(xy)=f(x)f(y),\quad\forall x,y\in 3\mathbb{N}+1$$



If i let $x,y\in 3\mathbb{N}+1$ then there exists $n,m\in \mathbb{N}$ such that $x=3n+1,y=3m+1$



but I have no idea how to find such an $f$. Is there a method, please?

real analysis - Convergence in $L^{\infty}$ norm implies convergence in $L^1$ norm



Let $\{f_n\}_{n\in \mathbb{N}}$ be a sequence of measurable functions on a measure space and $f$ measurable. Assume the measure space $X$ has finite measure. If $f_n$ converges to $f$ in $L^{\infty}$-norm , then $f_n$ converges to $f$ in $L^{1}$-norm.



This is my approach:



We know $||f_n-f||_{\infty} \to 0$ and by definition $||f_n-f||_{\infty} =\inf\{M\geq 0: |f_n-f|\leq M \text{ a.e.}\}.$ Then \begin{align} ||f_n-f||_1 &=\int |f_n-f|\, dm \\ &\leq \int|f_n|\,dm+\int|f|\,dm \end{align}


I don't know how to proceed after that, any help would be appreciated.


Answer



For any function $g$, $||g||_1 = \int_X|g(m)|\,dm \leq \int_X||g||_\infty \,dm = \mu(X)\cdot||g||_\infty$ (as $|g(m)| \leq ||g||_\infty$ almost everywhere); thus $||g||_\infty \geq \frac{||g||_1}{\mu(X)}$, so if $||f_n-f||_\infty$ tends to zero, then $||f_n-f||_1$ tends to zero as well.


Wednesday, February 22, 2017

calculus - how to integrate $\int_{-\infty}^{+\infty} \frac{\sin(x)}{x} \,dx$?











How can I do this integration using only calculus
(not Laplace transforms or complex analysis)?



$$
\int_{-\infty}^{+\infty} \frac{\sin(x)}{x} \,dx

$$



I searched for solutions not involving Laplace transforms or complex analysis but I could not find any.


Answer



Putting rigor aside, we may do like this:
$$\begin{align*}
\int_{-\infty}^{\infty} \frac{\sin x}{x} \; dx
&= 2 \int_{0}^{\infty} \frac{\sin x}{x} \; dx \\
&= 2 \int_{0}^{\infty} \sin x \left( \int_{0}^{\infty} e^{-xt} \; dt \right) \; dx \\
&= 2 \int_{0}^{\infty} \int_{0}^{\infty} \sin x \, e^{-tx} \; dx dt \\
&= 2 \int_{0}^{\infty} \frac{dt}{t^2 + 1} \qquad \left(\text{since } \int_{0}^{\infty} \sin x \, e^{-tx} \; dx = \frac{1}{t^2+1}\right) \\
&= \vphantom{\int}2 \cdot \frac{\pi}{2} = \pi.
\end{align*}$$
The defects of this approach are as follows:




  1. Interchanging the order of the two integrals needs justification, and in fact this is the hardest step in this proof. (There are several ways to resolve this problem, though none is easy.)

  2. It is nothing but a disguise of the Laplace transform method. So this calculation contains no new information on the integral.
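Numerically, a truncated version of the integral already agrees with $\pi$ (an illustrative sketch; the cut-off $T=200.5\pi$ and step size are my own choices, the cut-off picked so that $\cos T=0$ and the discarded tail is only $O(1/T^2)$):

```python
import math

def sinc(x):
    # sin(x)/x, extended continuously by sinc(0) = 1
    return math.sin(x) / x if x else 1.0

T = 200.5 * math.pi
N = 400_000
h = T / N
# composite trapezoidal rule for 2 * integral_0^T sin(x)/x dx
s = 0.5 * (sinc(0.0) + sinc(T)) + sum(sinc(i * h) for i in range(1, N))
approx = 2 * h * s
```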


Monday, February 20, 2017

elementary number theory - How to compute $2^{2475} \bmod 9901$?


How to compute $2^{2475} \bmod 9901$?


My work: $$2^{2475} = 2^{5^2\cdot 9\cdot 11} = 1048576^{5\cdot 9\cdot 11} = (-930)^{5\cdot 9\cdot 11} \bmod 9901$$


but I got stuck after this. Any further computation continuing from where I got stuck results in numbers that are too large for me to work with.


Answer



$9901$ is prime and $9900=4\cdot2475$ then $$(2^{2475})^4=2^{9900}\equiv 1\pmod{9901}$$ Hence we can solve in the field $\Bbb F_{9901}$ the equation $$X^4=1$$ Wolfram gives $X=1000,X=8901,X=9900$ and we have to pick $\color{red}{X=1000}$ because it corresponds to $2^{2475}$.
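The claim is easy to confirm with fast modular exponentiation (illustrative check):

```python
# Python's three-argument pow does modular exponentiation directly
assert pow(2, 2475, 9901) == 1000
# 1000 is indeed a fourth root of unity modulo the prime 9901
assert pow(1000, 4, 9901) == 1
# and 2^9900 ≡ 1 (mod 9901), consistent with Fermat's little theorem
assert pow(2, 9900, 9901) == 1
```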


algebra precalculus - How can I show using mathematical induction that $\frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n} = \frac{2^n - 1}{2^n}$



How can I show using mathematical induction that $\frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n} = \frac{2^n - 1}{2^n}$



Edit: I'm specifically stuck on showing that $\frac{2^n - 1}{2^n} + \frac{1}{2^{n+1}} = \frac{2^{n+1}-1}{2^{n+1}}$.



What should the approach be here?


Answer



If $T(n) = \frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^n}$, then

$$T(n+1) - T(n) = \frac{1}{2^{n+1}}$$
hence if $T(n) = \frac{2^n-1}{2^n}$, then
$$T(n+1) = T(n) + \frac{1}{2^{n+1}} = \frac{2^n-1}{2^n}+\frac{1}{2^{n+1}} = \frac{2(2^n-1)}{2^{n+1}}+\frac{1}{2^{n+1}} = \frac{2^{n+1}-2+1}{2^{n+1}} = \frac{2^{n+1}-1}{2^{n+1}},$$
giving the desired formula.



Since $T(1) = \frac{1}{2}=\frac{2-1}{2}$, the result is established by induction.


Sunday, February 19, 2017

real analysis - Prove: if $\lim\limits_{x \rightarrow x_0} f(x) = L >0$, then $\lim\limits_{x \rightarrow x_0} \sqrt{f(x)}= \sqrt{L}$




Prove: if $\lim\limits_{x \rightarrow x_0} f(x) = L >0$, then $\lim\limits_{x \rightarrow x_0} \sqrt{f(x)}= \sqrt{L}$ (using the original definition of limit)




We assume that $f$ is defined in some deleted neighborhood of $x_0$; for every $\epsilon>0$, there is a $\delta>0$ such that:




$$|f(x)-L|<\epsilon$$
whenever
$$ 0<|x-x_0|<\delta$$



As $L>0$,



$$|f(x)-L|
=| \sqrt{f} \sqrt{f} - \sqrt{L} \sqrt{L}|
= |\sqrt{f} - \sqrt{L}| \cdot |\sqrt{f} + \sqrt{L}|

<\epsilon$$



It follows that for the same $\delta>0$, the following statement is true:
$$|f(x)-L|<|\sqrt{f} - \sqrt{L}|< \frac{\epsilon}{ |\sqrt{f} + \sqrt{L}|} < \epsilon$$






It shows that if $\sqrt{f}$ is defined on some deleted neighborhood of $x_0$, then for every $\epsilon>0$ there is a $\delta>0$ such that

$$|\sqrt{f}-\sqrt{L}|<\epsilon$$
whenever
$$ 0<|x-x_0|<\delta$$



Am I having the right reasoning? Can it be improved?



Any input is much appreciated.


Answer



${\epsilon}/{|\sqrt{f}+\sqrt{L}|}$ does not necessarily have to be smaller than $\epsilon$, because $|\sqrt{f}+\sqrt{L}|$ could be less than $1$. Instead, look for a bound for $1/{|\sqrt{f}+\sqrt{L}|}$:




Since $L>0$, also $L/2>0$, and there would be (by definition of the limit of $f$ at $x_0$) some $\delta_1>0$ such that if $0<|x-x_0|<\delta_1$, we must have $$|f(x)-L|<L/2 \implies -L/2<f(x)-L<L/2.$$ So we have $f(x)>L/2>0$ for all $x$ satisfying $0<|x-x_0|<\delta_1$. Then
$|\sqrt{f}+\sqrt{L}|>\sqrt{L/2}+\sqrt{L}$. Denote $\sqrt{L/2}+\sqrt{L}$ by $M$. Hence $1/{|\sqrt{f}+\sqrt{L}|}<1/M$ for all $0<|x-x_0|<\delta_1$.



Now we can show that desired limit:



Let $\epsilon>0$, then there would be some $\delta_2>0$ such that if $0<|x-x_0|<\delta_2$, we must have$$|f(x)-L|<\epsilon$$
Take $\delta=\min(\delta_1,\delta_2)$, then if $x$ is any number satisfying $0<|x-x_0|<\delta$, we must have$$|\sqrt{f}-\sqrt{L}|<\epsilon/|\sqrt{f}+\sqrt{L}|<\epsilon/M$$

Since $M$ is fixed and $\epsilon$ is arbitrary, so $\lim_{x\to x_0}\sqrt{f}=\sqrt{L}$.


elementary set theory - How to Prove $\mathbb R\times \mathbb R \sim \mathbb R$?

How to prove $\mathbb R\times \mathbb R \sim \mathbb R$?



I know you have to split the problem up into two claims, one for each direction, to prove that there is a bijection, but I don't know how to go much further than that...

elementary number theory - Not understanding Simple Modulus Congruency




Hi this is my first time posting on here... so please bear with me :P



I was just wondering how I can solve something like this:



$$25x ≡ 3 \pmod{109}.$$



If someone can give a break down on how to do it would be appreciated (I'm a slow learner...)!



Here is proof that I've attempted:





  1. Using definition of modulus we can rewrite $$25x ≡ 3 \pmod{109}$$ as $25x = 3 + 109y$ (for some integer $y$). We can rearrange that to $25x - 109y = 3$.


  2. We use Extended Euclidean Algorithm (not sure about this part, I keep messing things up), so this is where I'm stuck at.




Thanks!


Answer



The extended euclidean algorithm is used to find x and y such that ax + by = gcd of a and b.




In our case $a = 109$ and $b = 25$.



So we start as follows.



Find remainder and quotient when we divide $109$ by $25$ and write the remainder on the left hand side.



So we get



9 = 109 - 25*4.




Now we get two new numbers $25$ and $9$. Write the remainder on the left hand side again.



7 = 25 - 9*2.



So we have two new numbers, 9 and 7.



In the extended algorithm, we use the formula for 9 in the first step



7 = 25 - (109 - 25*4)*2 = 25*9 - 109*2.




Now



2 = 9 - 7*1



= (109-25*4) - (25*9 - 109*2) = 109*3 - 25*13



Now write



1 = 7 - 3*2




i.e.



1 = (25*9 - 109*2) - 3*(109*3 - 25*13)



i.e.
1 = 25*48 - 109*11



Thus $25x - 109y = 1$ for $x = 48$ and $y = 11$.



So $25x - 109y = 3$ for x = 48*3 = 144 and y = 11*3 = 33.




Therefore 144*25 = 3 (mod 109).



If you need a number $ \le 109,$



$144 = 109 + 35$.



So we have (109+35)*25 = 3 (mod 109).



Which implies 35*25 = 3 (mod 109).




Thus $x = 35$ is a solution to your equation, which we found using the extended euclidean algorithm.



Hope that helps.
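The steps above can be packaged as the recursive extended Euclidean algorithm (an illustrative sketch of the same method):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(25, 109)
assert g == 1 and 25 * x + 109 * y == 1
sol = (x * 3) % 109        # scale by 3 and reduce, as in the answer
assert (25 * sol) % 109 == 3
# sol == 35, matching the answer above
```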


Saturday, February 18, 2017

discrete mathematics - Use mathematical induction to show that $H_{2^n} \geq 1+ \frac{n}{2}$



An Inequality for Harmonic Numbers.
The harmonic numbers $H_j, j=1,2,3,...,$ are defined by $$H_j = 1 + \cfrac{1}{2}+\cfrac{1}{3}+...+\cfrac{1}{j}$$




Use mathematical induction to show that $$ H_{2^n} \geq 1+ \frac{n}{2} $$
whenever $n$ is a nonnegative integer.



BASIS STEP: $P(0)$ is true, because $H_{2^0}=H_1=1 \geq 1+\dfrac{0}{2}$



INDUCTIVE STEP: The inductive hypothesis is the statement that $P(k)$ is true, that is, $H_{2^k} \geq 1 +\dfrac{k}{2}$, where $k$ is an arbitrary nonnegative integer. We must show that if $P(k)$ is true, then $P(k+1)$, which states that $H_{2^{k+1}} \geq 1 +\dfrac{k+1}{2}$, is also true. So, assuming the inductive hypothesis, it follows that
$$H_{2^{k+1}} = 1+ \frac{1}{2} + \frac{1}{3} + ...+ \frac{1}{2^k} + \frac{1}{2^k+1}+...+ \frac{1}{2^{k+1}}$$
$$=H_{2^k}+\frac{1}{2^k+1}+...+ \frac{1}{2^{k+1}}$$
$$\geq (1+\frac{k}{2})+ \frac{1}{2^k+1}+...+ \frac{1}{2^{k+1}} \qquad ...(?)$$
$$\geq (1+\frac{k}{2})+2^k \cdot \frac{1}{2^{k+1}}\qquad ...(??)$$

$$\geq (1+\frac{k}{2}) + \frac{1}{2}$$
$$=1+\frac{k+1}{2} $$
I don't understand what is going on at lines $(?)$ and $(??)$. Why did it change from $=H_{2^k}$ to $\geq (1+\dfrac{k}{2})$? Can somebody explain it to me?


Answer



The first inequality is the inductive hypothesis.



As for the second, note that for all $j \in \{ 1, \dots 2^k\}$, $$\frac{1}{2^k+j} \ge \frac{1}{2^{k+1}}$$
So that
$$\sum_{j=1}^{2^k} \frac{1}{2^k+j} \ge \sum_{j=1}^{2^k} \frac{1}{2^{k+1}} = 2^k \frac{1}{2^{k+1}}$$
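A quick check of the inequality for small $n$ (illustration only):

```python
def H(j):
    # harmonic number H_j
    return sum(1.0 / i for i in range(1, j + 1))

# H_{2^n} >= 1 + n/2 for nonnegative n
for n in range(0, 13):
    assert H(2 ** n) >= 1 + n / 2
```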


linear algebra - A tricky problem about matrices with no ${-1,0,1}$ vector in their kernel



A Hankel matrix is a square matrix in which each ascending skew-diagonal from left to right is constant. Let us call a matrix partial Hankel if it consists of the first $m$ rows of an $n \times n$ Hankel matrix.


Does there exist an $m$ by $n$ partial Hankel matrix $M$ such that it has $m < n/2$ rows and there is no non-zero vector $v\in\{-1,0,1\}^n$ in its kernel?





Such matrices certainly do exist for non-Hankel $m$ by $n$ $\{-1,1\}$-matrices.



Other that writing computer code to exhaustively explore small matrices I am not sure how to approach this question.



I would be very grateful for any ideas at all.


Answer



It appears completely equivalent to ask this question for Toeplitz matrices instead of Hankel matrices. The reason is that arranging the $m$ rows of any partial Hankel matrix simply in reverse order turns it into a rectangular Toeplitz matrix but doesn't affect its kernel. I will give an answer for Toeplitz matrices below.



In another answer at MathOverflow I have argued that for "typical" $m$ by $n$ $\{0,1\}$-matrices $M$, regardless of whether they have Toeplitz (or circulant) form or not (in other words, they are chosen at random either among all $m$ by $n$ $\{0,1\}$-matrices, with uniform distribution, or within the subsets of Toeplitz or circulant matrices), the following holds:





If $w \in \{-1,0,1\}^n$ is a random vector with i.i.d. components, distributed with probabilities $P(w_i = -1) = P(w_i = 1) = \frac14$ and $P(w_i=0) = \frac12$, then the probability of $w$ lying in the kernel of $M$ is asymptotically (for large $n$) given by
$$
P(Mw=0) \sim
\begin{cases}
\quad 2^{-n} \ , \quad m > \frac{2n}{\log_2 ( \pi n/6)} \\
m^{-\frac12} \left( \frac{\pi n}6 \right)^{-\frac m2} \ , \quad m \leq \frac{2n}{\log_2 ( \pi n/6)}
\end{cases} \ .
\hspace{2cm} (1)

$$




An intermediate step of this argument effectively supports an even stronger claim, namely that for all $m$
$$
P(Mw=0) - 2^{-n} \sim m^{-\frac12} \left( \frac{\pi n}6 \right)^{-\frac m2} \ .
\hspace{2cm} (2)
$$
Since $w=0$ is always a solution of $Mw=0$ and has probability $P(w=0) = 2^{-n}$, the right hand side is essentially the asymptotic probability that $Mw=0$ with some $w \neq 0$. Consequently, since by definition every vector $w \in \{-1,0,1\}^n$ has at least probability $4^{-n}$, the probability that there exists at least one $w \neq 0$ with $Mw=0$ must then, with some suitable constant $\alpha>1$ and for large enough $n$, obey the following inequality:
$$

P(\exists w \neq 0 : Mw=0) \leq \alpha \, 4^n m^{-\frac12} \left( \frac{\pi n}6 \right)^{-\frac m2} \ .
\hspace{2cm} (3)
$$
Again, this supposes that $M$ is a random matrix, with uniform distribution over the set of all $m$ by $n$ $\{0,1\}$-matrices, or over the subset of Toeplitz matrices.



An adaptation of the above argument to $\{-1,1\}$-matrices $M$ is completely straightforward; the eigenvalues of $MM^{\rm T}$ are just scaled by a factor of $4$ (i.e. they asymptotically tend now to $n$ instead of $\frac n4$) and the single large eigenvalue $\sim \frac{mn}4$ disappears. The above inequality $(3)$ then becomes instead
$$
P(\exists w \neq 0 : Mw=0) \leq \alpha \, 4^n \left( \frac{2\pi n}3 \right)^{-\frac m2} \ .
\hspace{2cm} (4)
$$

The base 2 logarithm of the expression on the right hand side is $\log_2\alpha + 2n - \frac m2 \log_2 \left( \frac{2\pi n}3 \right)$, and so for $m > \beta \frac{4n}{\log_2 ( 2\pi n/3)}$ (with some constant $\beta>1$ of our choice) it will be bounded from above by $\alpha \, 4^{(1-\beta)n}$ and thus become arbitrarily small for large enough $n$. Recall that this is an upper bound on the probability that any randomly chosen $m$ by $n$ matrix $M$ (of the required form) will admit a nontrivial solution $w \in \{-1,0,1\}^n$ of the equation $Mw=0$. For given $n$ and $m$ there is a number $2^{n+m-1} \gg 1$ of such matrices of Toeplitz type (or $2^{nm} \gg 1$ of general form), so the probability that every single one of these will admit such a nontrivial solution must become overwhelmingly small for large $n$.




In conclusion, what the above argument tells us is that if we choose some $\beta>1$ and consider the $m$ by $n$ $\{-1,1\}$-matrices $M$, with sufficiently large $n$ and any $m$ obeying
$$
m > \beta \frac{4n}{\log_2 \left( \frac{2\pi n}3 \right)} \ ,
\hspace{2cm} (5)
$$
then with overwhelming probability there will be at least one among these matrices which has the wanted property (i.e. at least one $M$ which does not admit any nontrivial solution of $Mw=0$). This holds independently of whether we require in addition that $M$ has a Toeplitz, Hankel or circulant form. Moreover, not just one but probably the majority of all matrices of the specified form will have the wanted property.





Note that the condition $(5)$ is asymptotically weaker than $m>n/c$ for any real constant $c>1$, which implies that for any such $c$ and sufficiently large $n$ there should exist $m$ by $n$ Toeplitz (Hankel, ...) matrices with $m < n/c$ that have the wanted property.

A final note: Of course, in the present form the entire argument is not mathematically rigorous, since it relies on several uncontrolled approximations. I believe, however, that with a little effort these holes can be filled.


summation - Find closed form of sum of fraction of binomial coefficients



can somebody give me a hint for this exercise, where I have to find the specific closed form?




$\sum_{k=0}^m \frac{\binom{m}{k}}{\binom{n}{k}}, m,n\in\mathbb{N}$ and $m\leq n$





What I have done so far:



$\sum_{k=0}^m \frac{\binom{m}{k}}{\binom{n}{k}} = \sum_{k=0}^m \frac{\frac{m!}{(m-k)!\cdot k!}}{\frac{n!}{(n-k)!\cdot k!}} = \sum_{k=0}^m \frac{m!}{n!}\cdot \frac{(n-k)!}{(m-k)!} = \frac{m!}{n!}\cdot \sum_{k=0}^m \frac{(n-k)!}{(m-k)!}$



Look at example 1: $n=8, m=5$



$\frac{5!}{8!}\cdot(\frac{8!}{5!} + \frac{7!}{4!} +\frac{6!}{3!} +\frac{5!}{2!} + \frac{4!}{1!} +\frac{3!}{0!}) = \\\frac{5!}{8!} \cdot (8\cdot7\cdot6+7\cdot6\cdot5+6\cdot5\cdot4+5\cdot4\cdot3+4\cdot3\cdot2+3\cdot2\cdot1) = \\
\frac{5!}{8!} \cdot (336+210+120+60+24+6) = \frac{5!}{8!}\cdot 756 = 2.25$




I can't find a pattern.



Edit 1: Maybe there is an approach with recursion.



Edit 2: Okay I found the solution.



$\sum_{k=0}^m \frac{m!}{n!}\cdot \sum_{k=0}^m \frac{(n-k)!}{(m-k)!}= \frac{m!}{n!}\cdot\frac{1}{4}\cdot t\cdot(t+1)\cdot(t+2)\cdot(t+3)$ with $t=(n-m)!$



Edit 3: This formula works well for example 1, but fails for example 2: $n=9, m=3$




$\frac{3!}{9!}\cdot(\frac{9!}{3!} + \frac{8!}{2!} +\frac{7!}{1!} +\frac{6!}{0!}) = \\\frac{3!}{9!} \cdot (9\cdot8\cdot7\cdot6\cdot5\cdot4+8\cdot7\cdot6\cdot5\cdot4\cdot3+7\cdot6\cdot5\cdot4\cdot3\cdot2+6\cdot5\cdot4\cdot3\cdot2\cdot1) = \\
\frac{3!}{9!} \cdot (60480+20160+5040+720) = \frac{3!}{9!}\cdot 86400 = 1.428$



So I have still no general solution. Can someone help?


Answer



Let



$$S(m,n):=\sum_{k=0}^m \frac{\binom{m}{k}}{\binom{n}{k}}$$




We are going to prove:
$$S(m,n)=\frac{n+1}{n+1-m}.\tag{1}$$



Obviously (1) is valid for $m=0$ and arbitrary $n\ge 0$. Further, if (1) is valid for $(m-1,n-1)$, it is valid for $(m,n)$ as well:
$$
S(m,n):=1+\sum_{k=1}^m \frac{\frac mk\binom{m-1}{k-1}}{\frac nk\binom{n-1}{k-1}}
=1+\frac{m}{n} S(m-1,n-1)\\
\stackrel{I.H.}{=}1+\frac{m}{n}\frac{n}{n+1-m}=\frac{n+1}{n+1-m}.
$$




Thus by induction (1) is valid for arbitrary $0\le m \le n$.
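As a sanity check (not part of the proof), the closed form $(1)$ can be verified with exact rational arithmetic; the helper name `S` below mirrors the notation above:

```python
from fractions import Fraction
from math import comb

def S(m, n):
    # S(m, n) = sum_{k=0}^{m} C(m,k) / C(n,k), computed exactly
    return sum(Fraction(comb(m, k), comb(n, k)) for k in range(m + 1))

# check S(m, n) == (n+1)/(n+1-m) for all 0 <= m <= n <= 11
for n in range(12):
    for m in range(n + 1):
        assert S(m, n) == Fraction(n + 1, n + 1 - m)
```

In particular `S(5, 8)` gives $9/4 = 2.25$ and `S(3, 9)` gives $10/7 \approx 1.428$, matching the two worked examples in the question.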


Friday, February 17, 2017

probability - Expected value question.



This is a question from my lecture notes.



"""

Persons arrive at a copy machine according to a Poisson process with rate λ=1 per minute.
The number of copies made is uniformly distributed between 1 and 10. Each copy requires
3 seconds. Find the average waiting time in the queue and the average system waiting time.
"""



I know how to do the problem, but I am having trouble understanding why the following calculation of an expectation is correct.
$$
E[X^2] = \sum_{i = 1}^{10}(3i)^2 * \space Pr(X = 3i)
$$




I don't understand why there is a $(3i)^2$ factor and a $Pr(X = 3i)$ term in this expected value. I would have set it up as:



$$
3E[X^2] = \sum_{i = 1}^{10}(i)^2 * \space Pr(X = i)
$$
Can anyone please provide some intuition? I've looked online and through my textbooks and I can't find anything that helps me with this intuition. I know it has something to do with the 3 seconds in the problem statement, but I can't figure out how to make sense of this and therefore generalize this problem.



Thanks in advance!


Answer



I am guessing that you are trying to compute variance of $X$.




The problem is we did not define $X$.



In your lecture notes, $X$ seems to be defined as the waiting time, which comes in multiples of $3$ seconds in this setting.



We have $Var[X]=E[X^2]-E[X]^2$



On the other hand, it seems that you are trying to work with the number of copies. To avoid confusion, let's define it to be $Y$ instead. In particular, we have the relationship $$X=3Y$$ and if we square them and take expectation, we have



$$E[X^2]=3^2E[Y^2].$$




Again, we can compute $Var[Y]=E[Y^2]-E[Y]^2$



A point to consider is suppose you really prefer to work with $Y$ (the number of copies), can you compute $Var[X]$?



$$Var[X]=E[(3Y)^2]-E[3Y]^2=9(E[Y^2]-E[Y]^2)=3^2Var[Y]$$
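A quick exact-arithmetic check of these relationships for the distribution in the problem ($Y$ uniform on $\{1,\dots,10\}$, $X = 3Y$ seconds); this sketch is only an illustration:

```python
from fractions import Fraction

p = Fraction(1, 10)                               # P(Y = i) for i = 1..10
EY  = sum(i * p for i in range(1, 11))            # E[Y]
EY2 = sum(i**2 * p for i in range(1, 11))         # E[Y^2]

# E[X^2] computed directly from the lecture-note formula ...
EX2 = sum((3 * i)**2 * p for i in range(1, 11))   # sum_i (3i)^2 P(X = 3i)
assert EX2 == 9 * EY2                             # ... equals 3^2 E[Y^2]

# Var[X] = 9 Var[Y]
assert EX2 - (3 * EY)**2 == 9 * (EY2 - EY**2)
```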


real analysis - Prove that $f$ is continuous, $f'$ is bounded...


Suppose $\alpha$ and $\beta$ are real numbers and $\beta$ $\gt 0$. We define the function $f$ on $[-1,1]$ by $$f(x)=x^\alpha \sin(|x|^{-\beta}), x \neq0 $$

$$f(x)= 0, x=0.$$
Prove that
a. $f$ is continuous iff $\alpha \gt 0$.



b. $f'(0)$ exists iff $\alpha \gt 1$.



c. $f'$ is bounded iff $\alpha \ge 1+\beta$.



d. $f'$ is continuous iff $\alpha \gt 1+\beta$.



[You can use the standard properties of trig functions and their derivatives.]





I've been able to come up with something for a. and b. but I don't know how to do c. or d.



I'll post my a. and b.



a. $f$ is continuous iff $\forall ({x_n}) \rightarrow 0$ for $x_n \neq 0$. Then $x_n^\alpha\sin(|x_n|^{-\beta}) \rightarrow 0$ as $n \rightarrow \infty$.



I considered $x_n =\frac{1}{2n\pi + \frac{\pi}2} \gt 0$




$x_n \rightarrow 0$ as $n \rightarrow \infty$, hence $\alpha \gt 0$. $\alpha \neq 0$ because then $x_n^\alpha =1$. $\alpha \not \lt 0 $ because then $x_n^\alpha \rightarrow \infty$ as $n \rightarrow \infty$.



It is easy to see that $f$ is continuous on $[-1,1] \setminus \{0\}$. We find that $$ -|x^\alpha| \le x^\alpha \sin (|x|^{-\beta}) \le |x^\alpha|$$
(because sin varies between $-1$ and $1$). $|x^\alpha| \rightarrow 0$ as $x \rightarrow 0$ since $\alpha \gt 0$. Therefore $f$ is continuous everywhere.



b. A function needs to be continuous everywhere on $[-1,1]$ in order to be differentiable there. In the previous part we proved that $\alpha \gt 0$, so we know that $\alpha$ has to be at least this.



$f'(0)$ exists iff $x^{\alpha-1} \sin (|x|^{-\beta}) \rightarrow 0$ as $x \rightarrow0$. We see that it does, if $\alpha \gt 1$. Therefore $f'(0)=0$ and exists.



Is what I have correct? And how would I do parts c and d?




Thanks

algebra precalculus - Anything to zero power equals one?




I am in Adv. Algebra 2 and I have a question. Firstly, would like to say I haven't taken algebra in a year due to geometry (stupid order they do but oh well) and I have a question understanding this: $(x+5)^{0}$. That would be $x^{0} + 5^{0}$ which then, wouldn't that be $1 + 1$ since anything that has a power of $0 = 1$? Maybe I misunderstood but that's what I got.


Answer



First, exponents do not distribute over addition. To start with the simplest example, $(a+b)^2=(a+b)(a+b)$. Applying the distributive rule, we see that this is the same as $a(a+b)+b(a+b) = a^2 + ab + ab + b^2$, which is different from $a^2+b^2$. This can be seen geometrically, too: A square built on a side of length $a+b$ has greater area than the square with side length $a$, combined with the square with side length $b$.



You can also see your result if you consider order of operations, and substitute an actual number for $x$. Let's consider $x=3$. Then we have:



$$(x+5)^0=(3+5)^0=8^0=1,$$



because Parentheses come before Exponents in PEMDAS.
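The point can be checked directly in any language; a small Python illustration with $x=3$ (only the literal computations above, nothing more):

```python
x = 3
assert (x + 5)**0 == 1                      # parentheses first: 8**0 == 1
assert x**0 + 5**0 == 2                     # "distributing" the exponent gives something else
assert (x + 5)**2 == x**2 + 2*5*x + 5**2    # the correct expansion of a square
```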



real analysis - If $f(x+y)=f(x)+f(y) ,forall;x,yinBbb{R}$, then if $f$ is continuous at $0$, then it is continuous on $Bbb{R}.$



I know that this question has been asked here before but I want to use a different approach. Here is the question.



A function $f:\Bbb{R}\to\Bbb{R}$ is such that \begin{align} f(x+y)=f(x)+f(y) ,\;\;\forall\;x,y\in\Bbb{R}\qquad\qquad\qquad(1)\end{align} I want to show that if $f$ is continuous at $0$, it is continuous on $\Bbb{R}.$


MY WORK


Since $(1)$ holds for all $x\in \Bbb{R},$ we let \begin{align} x=x-y+y\end{align} Then, \begin{align} f(x-y+y)=f(x-y)+f(y)\end{align} \begin{align} f(x-y)=f(x)-f(y)\end{align} Let $x_0\in \Bbb{R}, \;\epsilon>0$ and $y=x-x_0,\;\;\forall\,x\in\Bbb{R}.$ Then, \begin{align} f(x-(x-x_0))=f(x)-f(x-x_0)\end{align} \begin{align} f(x_0)=f(x)-f(x-x_0)\end{align} \begin{align} f(y)=f(x)-f(x_0)\end{align}


HINTS BY MY PDF:


Let $x_0\in \Bbb{R}, \;\epsilon>0$ and $y=x-x_0,\;\;\forall\,x\in\Bbb{R}.$ Then, show that \begin{align} \left|f(x_0)-f(x)\right|=\left|f(y)-f(0)\right|\end{align} Using this equation and the continuity of $f$ at $0$, establish properly that \begin{align}\left|f(y)-f(0)\right|<\epsilon,\end{align} in some neighbourhood of $0$.


My problem is how to put this hint together to complete the proof. Please, I need assistance, thanks!


Answer



We want to show that


$$\forall \epsilon>0, \exists r>0:|x-y|<r\implies|f(x)-f(y)|<\epsilon.$$

But $f(x)-f(y)=f(x-y)$ because $f(y)+f(x-y)=f(y+(x-y))=f(x)$ as you have noticed.



Now, take $u=x-y$. By continuity at $0$, we can write:


$$\forall \epsilon>0, \exists r>0:|u-0|<r\implies|f(u)-f(0)|<\epsilon.$$

It's easy to see that $f(0)=0$, because $f(0)=f(0+0)=f(0)+f(0)$. Hence


$$\forall \epsilon>0, \exists r>0:|(x-y)-0|<r\implies|f(x-y)-f(0)|<\epsilon,$$ that is, $$\forall \epsilon>0, \exists r>0:|x-y|<r\implies|f(x)-f(y)|<\epsilon.$$

analysis - Bijection from $mathbb{R} to mathbb{R} times mathbb{R}$?







I know it's possible to produce a bijection from $\mathbb{Z}$ to $\mathbb{Z}\times\mathbb{Z}$, but is it possible to do so from $\mathbb{R}$ to $\mathbb{R} \times \mathbb{R}$?

galois theory - Degree 4 extension of $mathbb {Q}$ with no intermediate field




I am looking for a degree $4$ extension of $\mathbb {Q}$ with no intermediate field. I know such extension is not Galois (equivalently not normal). So I was trying to adjoin a root of an irreducible quartic. But I got stuck. Any hint/idea/solution?


Answer



With regard to Sebastian Schoennenbeck's comment, an extension of $\mathbb{Q}$ with Galois group $A_4$ (alternating group on 4 points) will do the trick.
Such an extension certainly exists, in fact all alternating groups are Galois groups over $\mathbb{Q}$.


Wednesday, February 15, 2017

sequences and series - Prove by induction that $sum_{n=1}^infty frac{1}{2^n} = 1$

The title explains the problem fairly well; is there a way to prove by induction that $\sum_{n=1}^\infty \frac{1}{2^n} = 1$. If not are there other ways?



I have thought of showing it by rewriting the series so that. $$\sum_{n=1}^\infty \frac{1}{2^n} = 1 \implies \sum_{n=1}^\infty \frac{1}{2}(\frac{1}{2})^{n-1} = 1$$



And then from there conclude that it is a geometric series with the values $r = 1/2$ and $a=1/2$ thus $$\sum_{n=1}^\infty \frac{1}{2^n} = \frac{1/2}{1-1/2} = 1$$



This seems like kind of a voodoo proof, so I was wondering if it's possible to do this by induction?
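For what it's worth, induction can only establish a statement about the partial sums, e.g. $\sum_{n=1}^{N} 2^{-n} = 1 - 2^{-N}$; the passage to the infinite sum is then a separate limit argument. A quick exact check of that partial-sum identity (an illustration, not a proof):

```python
from fractions import Fraction

s = Fraction(0)
for N in range(1, 31):
    s += Fraction(1, 2**N)
    # the identity an induction proof would establish
    assert s == 1 - Fraction(1, 2**N)
```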

elementary number theory - How to prove the divisibility rule for $3, $ [casting out threes]



The divisibility rule for $3$ is well-known: if you add up the digits of $n$ and the sum is divisible by $3$, then $n$ is divisible by three. This is quite helpful for determining if really large numbers are multiples of three, because we can recursively apply this rule:



$$1212582439 \rightarrow 37 \rightarrow 10\rightarrow 1 \implies 3\not\mid 1212582439$$
$$124524 \rightarrow 18 \rightarrow 9 \implies 3\mid 124524$$




This works for as many numbers as I've tried. However, I'm not sure how this may be proven. Thus, my question is:




Given a positive integer $n$ and that $3\mid\text{(the sum of the digits of $n$)}$, how may we prove that $3\mid n$?



Answer



HINT: Suppose that you have a four-digit number $n$ that is written $abcd$. Then



$$\begin{align*}

n&=10^3a+10^2b+10c+d\\
&=(999+1)a+(99+1)b+(9+1)c+d\\
&=(999a+99b+9c)+(a+b+c+d)\\
&=3(333a+33b+3c)+(a+b+c+d)\;,
\end{align*}$$



so when you divide $n$ by $3$, you’ll get



$$333a+33b+3c+\frac{a+b+c+d}3\;.$$




The remainder is clearly going to come from the division $\frac{a+b+c+d}3$, since $333a+33b+3c$ is an integer.



Now generalize: make a similar argument for any number of digits, not just four. (If you know about congruences and modular arithmetic, you can do it very compactly.)
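The recursive application of the rule from the question can be sketched in code (the function names are mine):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def divisible_by_3(n):
    # repeatedly replace n by its digit sum until one digit remains
    while n >= 10:
        n = digit_sum(n)
    return n in (0, 3, 6, 9)

assert not divisible_by_3(1212582439)   # 37 -> 10 -> 1
assert divisible_by_3(124524)           # 18 -> 9
assert all(divisible_by_3(n) == (n % 3 == 0) for n in range(3000))
```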


limits - What is $lim_{x to infty} x^2f(x)$ where $int_0^infty f(x),mathrm{d}x = 1 $

I encountered a question in a test:




If $F(x)$ is a cumulative distribution function of a continuous non-negative random variable $X$, then find $\int_0^\infty (1 - F(x))\,\mathrm{d}x$




After a bit of pondering, I thought that the answer should depend upon the density of the random variable, so I checked the "none of these" option, but the correct answer was $\,E(X)$. So later I tried to work out the question properly.



If the density of random variable $X$ is $f(x)$ then it is necessary that $f(x) > 0$ and $\int_0^\infty f(x)\,\mathrm{d}x = 1$

Doing the integration by parts $$\left. x\Big(1-F(x)\Big)\right|_0^\infty - \int_0^\infty x\left(-\frac{\mathrm{d}}{\mathrm{d}x}F(x)\right)\,\mathrm{d}x$$
which reduces to
$$ \lim_{x\to \infty} x\big(1\,- F(x)\big)\;+ \int_0^\infty xf(x) \, \mathrm{d}x $$



Now the $\int_0^\infty xf(x)\,\mathrm{d}x$ is clearly $E(X)$ but the limit is where my problem arises. Applying L'Hopital rule in the limit we have
$$\lim_{x\to\infty}\frac{1-F(x)}{\frac{1}{x}} = \lim_{x\to\infty}\;x^2f(x)$$.
Now is there any way to further reduce that limit to $0$ so that $E(X)$ is the correct answer or am I doing something wrong?

analysis - Why does $1+2+3+cdots = -frac{1}{12}$?

$\displaystyle\sum_{n=1}^\infty \frac{1}{n^s}$ only converges to $\zeta(s)$ if $\text{Re}(s) > 1$.




Why should analytically continuing to $\zeta(-1)$ give the right answer?

elementary number theory - Is $frac{sqrt7}{sqrt[3]15}$ rational or irrational?



Is $\frac{\sqrt7}{\sqrt[3]15}$ rational or irrational? Prove it.




I am having a hard time with this question. So far what I did was say, assume it's rational, then $$\frac{\sqrt7}{\sqrt[3]15}=\frac{x}{y} \Rightarrow \sqrt{7}y=x\sqrt[3]15$$
I then showed the product of a rational number and an irrational number is irrational so the expression above is irrational on the left and right side. I can't get to a contradiction to prove it's irrational so I'm currently thinking it is rational but I don't know how to go about proving it.


Answer



Factoring $\,7^{3} y^{6}=15^{2}x^{6}$ into primes, $\,7\,$ has odd power $\,3+6j\,$ on the LHS, but even power $\,6k\,$ on the RHS, a contradiction; hence the number is irrational.


probability - How many papers do you expect to hand in before you receive each possible grade at least once?

A particular professor is known for his arbitrary grading policies. Each paper receives a grade from the set {A, A-, B+, B, B-, C+}, with equal probability, independently of other papers. How many papers do you expect to hand in before you receive each possible grade at least once?

calculus - Solving $lim_{xto 0} frac{e^x-1-x}{x^2}$ without L'Hôpital's Rule.

I want to solve this limit without the use of L'Hôpital's rule:



$$\lim_{x\to 0} \frac{e^x-1-x}{x^2}.$$

algebra precalculus - 8^n - 3^n is divisible by 5

My computations:
Base Case= $8-3 = 5$ is divisible by $5$
Inductive Hypothesis$=$
$8^k - 3^k = 5M$
$8^k = 5M + 3^k$
$3^k = 8^k - 5M$



My final answer is $5(11M + 11^k)$

Please tell me if I have any mistake regarding my computations.

radicals - How do I get the square root of a complex number?


If I'm given a complex number (say $9 + 4i$), how do I calculate its square root?


Answer



The square root is not a well defined function on complex numbers. Because of the fundamental theorem of algebra, you will always have two different square roots for a given number. If you want to find out the possible values, the easiest way is probably to go with De Moivre's formula, that is, converting your number into the form $r(\cos(\theta) + i \sin(\theta))$, and then you will get $(r(\cos(\theta)+ i \sin(\theta)))^{1/2} = ±\sqrt{r}(\cos(\theta/2) + i \sin(\theta/2))$.
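A numerical illustration of this recipe for $9+4i$ (a sketch; the variable names are mine):

```python
import cmath
import math

z = 9 + 4j
r, theta = abs(z), cmath.phase(z)    # polar form: z = r (cos(theta) + i sin(theta))

# De Moivre: one square root of z; the other is its negative
root = math.sqrt(r) * (math.cos(theta / 2) + 1j * math.sin(theta / 2))

assert abs(root**2 - z) < 1e-9
assert abs((-root)**2 - z) < 1e-9
```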


calculus - Proof without using induction




How to prove that
$$1^2+2^2+...+n^2=\frac{n(n+1)(2n+1)}{6}$$
without using induction.



If we don't know the right side of this expression, how to get right expression. I tried with partial sums and binomial formula but can't get it.



So the problem is:

$$1^2+2^2+...+n^2=?$$



Thanks for replies.


Answer



Assume we know $\sum\limits_{k=1}^nk=\frac{n(n+1)}{2}$. Compute the following cubes



$$\begin{align}
1^3&=1\\
(1+1)^3&=1^3+3\cdot 1^2+3\cdot 1+1^3\\
\vdots&=\vdots\\

n^3&=(n-1)^3+3(n-1)^2+3(n-1)+1^3\\
(n+1)^3&=n^3+3n^2+3n+1^3\end {align}$$



Add these equations together and cancel the cubes you have on both sides you get



$$\begin{align}(n+1)^3&=1+3\sum_{k=1}^nk^2+3\sum_{k=1}^nk+n\\
&=(n+1)\frac{3n+2}{2}+3\sum_{k=1}^nk^2\end{align}$$



This yields




$$\sum_{k=1}^nk^2=\frac{n+1}{3}\left(n^2+2n+1-\frac{3n+2}{2}\right)=\frac{(n+1)(2n^2+n)}{6}$$



Factoring $n$ we get the result expected
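A quick machine check of the resulting formula for small $n$ (an illustration only):

```python
# verify 1^2 + 2^2 + ... + n^2 == n(n+1)(2n+1)/6 for n up to 200
for n in range(1, 201):
    assert 6 * sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1)
```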


Tuesday, February 14, 2017

real analysis - Functions whose derivative is not continuous on a dense subset

Are there differentiable functions $F:(a,b)\rightarrow \mathbb{R}$, where the set of points at which the derivative of $F$ is discontinuous, is dense in $(a,b)$?



So far I've only found functions where derivatives are discontinuous only at a finite number of points.

arithmetic - Number in tens place



What is the digit in the tens place of $4^{2015} \cdot 9^{2016}$?




Obviously without using calculator, though I doubt it could count with those high numbers.



By tens place I mean, for example if you have number $2451$, the number in tens place here is $5$.



I know the answer, but I don't know how to get it, so if you got any ideas, please share. :)


Answer



$$
4^{2015}\cdot9^{2016} = 9\cdot36^{2015}
$$




So look at $9\cdot36^n$ for $n=1,2,3,\ldots$:
\begin{align}
9\cdot36^1 & = 324 \\
& = \cdots24 \\[6pt]
9\cdot36^2 & = \cdots64 \\
9\cdot36^3 & = \cdots04 \\
9\cdot36^4 & = \cdots44 \\
9\cdot36^5 & = \cdots84 \\
9\cdot36^6 & = \cdots24 \longleftarrow\text{Now we're back to where we were when }n=1, \\

& \phantom{=\cdots24 \longleftarrow\text{a}}\text{so it starts over again.}
\end{align}
After five steps we return to where we started; thus at $n=1,6,11,16,21,26,\ldots$ we get $\cdots24$.



Of course, in order for this to make sense, you have to realize that when you multiply two numbers, the last two digits of the answer are determined by the last two digits of each of the numbers you're multiplying, and are not affected by any earlier digits. That is clear if you think about the algorithm for multiplication that you learned in elementary school, and you can also show it via some simple algebra.



So we get a number ending in $24$ whenever the exponent is congruent to $1$ mod $5$, i.e. its last digit is $1$ or $6$. So $9\cdot36^{2016}$ ends in $24$, and $9\cdot36^{2015}$ is one step earlier in the cycle, so it ends in $84$: the tens digit is $8$.
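Since only the last two digits of a product depend on the last two digits of its factors, the whole computation can also be checked by modular exponentiation (a quick verification, not the intended pencil-and-paper method):

```python
# work modulo 100: only the last two digits matter
last_two = (pow(4, 2015, 100) * pow(9, 2016, 100)) % 100
assert last_two == 84
tens_digit = last_two // 10
assert tens_digit == 8
```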


algebra precalculus - If $a(n)=n^2+1$ then $gcd(a_n,2^{d(a_n)})=1text{ or }2$?



Let $n\in\mathbf{N}$. I write $a_n=n^2+1$ and let $d(a_n)$ count the number of divisors of $a_n$. Set $$\Phi_n=\gcd\left(a_n,2^{d\left(a_n\right)}\right)$$ I would like to show and I believe it to be true that



$$\Phi_n =
\begin{cases}
1, & \text{if $n$ is even} \\[2ex]
2, & \text{if $n$ is odd}
\end{cases}$$




My gut instinct is to break it down by parity and then use Euclid's lemma. But I am not sure how to use Euclid's lemma.



To see a working example consider $n=15$. Then $a_n=226$, $d(a_n)=4$ and $$\text{ }\Phi_n=\gcd(226,2^{4})=\gcd(226,16)=2$$


Answer



Note that $2^{d(a_n)}$ can only be divisible by $1$ and powers of $2$.



If $n$ is even then $n^2+1$ is odd and in that case $\gcd=1$.



If $n$ is odd, then $n^2+1 \equiv 2 \pmod{4}$. Thus $n^2+1$ is not divisible by $4$, hence the $\gcd=2$.




Added explanation:



Using the division algorithm, we can write any integer $n=4k+r$, where $r \in \{0,1,2,3\}$. So when $n$ is odd, then it can only be of the form $4k+1$ or $4k+3$. Now consider the case when $n=4k+1$, then
$$n^2+1=(4k+1)^2+1=16k^2+8k+2=4(\text{some integer})+2.$$
Likewise when $n=4k+3$,we have
$$n^2+1=(4k+3)^2+1=16k^2+24k+8+2=4(\text{some integer})+2.$$



This means $n^2+1$ will always leave a remainder of $2$, when divided by $4$.
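A brute-force check of the claim for small $n$ (an illustration; the divisor-count helper `d` is a naive implementation):

```python
from math import gcd

def d(n):
    # naive divisor count
    return sum(1 for k in range(1, n + 1) if n % k == 0)

for n in range(1, 60):
    a = n * n + 1
    phi = gcd(a, 2 ** d(a))
    assert phi == (2 if n % 2 == 1 else 1)
```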


abstract algebra - primitive root of a finite field


This is a problem similar to one of my homework problems, but not on the homework. The problem states that:


Find a primitive root $\beta$ of $F_2[x]/(x^4+x^3+x^2+x+1)$.


Questions:



  1. I know what a primitive root of a prime number is, but what is a primitive root of a polynomial (or is it called something like a field extension here)?




  2. My book gives this hint: $[x]=\alpha$ doesn't work because $\alpha^5=1$. There are eight choices of $\beta$. I am basically lost on the hint. What is $\alpha$? Why doesn't it work? Why are there 8 choices for $\beta$? Wouldn't there be $2^4 =16$ choices? (or that's the number of polynomials in the field?)




Sorry for the long questions, since I am quite lost right now. Hopefully my questions make sense, and any help would be appreciated!


Answer



The multiplicative group of nonzero elements of a finite field is always cyclic. A "primitive root" of a finite field is a generator for the multiplicative group of nonzero elements.


Note that $x^4+x^3+x^2+x+1$ is irreducible over $\mathbb{F}_2$: it has no roots, and it is not the product of two irreducible quadratics (the quadratics are $x^2$, $x^2+1$, $x^2+x$, and $x^2+x+1$, and the only irreducible one is the latter; but $(x^2+x+1)^2 = x^4 + x^2 + 1$). So $\mathbb{F}_2[x]/(x^4+x^3+x^2+x+1)$ is a field of degree $4$ over $\mathbb{F}_2$ (hence, of order $2^4 = 16$). So you are looking for an element in the field of $16$ elements whose multiplicative order is exactly $15$. The book is noting that even though this field equals $\mathbb{F}_2(\alpha)$ (where $\alpha$ is the class of $x$ in the quotient), $\alpha$ doesn't work because it is of order $5$. That is: while it is true that if $\beta$ is a primitive root for the field $GF(p^k)$, then $GF(p^k) = \mathbb{F}_p(\beta)$, the converse does not necessarily hold as this example shows.
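To illustrate (a sketch, not part of the original answer: elements of the quotient are encoded here as 4-bit integers, with bit $i$ the coefficient of $x^i$), brute force confirms both that $\alpha$ has order $5$ and that exactly $\varphi(15)=8$ elements are primitive roots, the "eight choices of $\beta$" from the hint:

```python
MOD, DEG = 0b11111, 4   # x^4 + x^3 + x^2 + x + 1 over F_2

def gf_mul(a, b):
    # multiply in F_2[x]/(x^4+x^3+x^2+x+1); polynomials encoded as bit masks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    for shift in range(r.bit_length() - 1, DEG - 1, -1):   # reduce mod MOD
        if (r >> shift) & 1:
            r ^= MOD << (shift - DEG)
    return r

def order(a):
    # multiplicative order of a nonzero element
    k, p = 1, a
    while p != 1:
        p, k = gf_mul(p, a), k + 1
    return k

alpha = 0b10                        # the class of x
assert order(alpha) == 5            # so alpha is not a primitive root
primitive = [a for a in range(1, 16) if order(a) == 15]
assert len(primitive) == 8          # the eight choices of beta
```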


real analysis - Prove that if $g(x)$ is uniformly continuous in $(a,b]$ and $[b,c)$ then is uniformly continuous in $(a,c)$




I open this question to check my own proof and to ask a related question.



My proof: if $g(x)$ is uniformly continuous in $(a,b]$ and $[b,c)$ then




$$\forall\varepsilon_1>0,\exists\delta_1>0,\forall x\in (a,b]:|x-b|<\delta_1\implies|g(x)-g(b)|<\varepsilon_1$$
$$\forall\varepsilon_2>0,\exists\delta_2>0,\forall y\in [b,c):|y-b|<\delta_2\implies|g(y)-g(b)|<\varepsilon_2$$




If we call $\varepsilon_0=\varepsilon_1+\varepsilon_2$ and $\delta_0=\delta_1+\delta_2$, then by the triangle inequality





$$|x-y|\le|x-b|+|y-b|\\|g(x)-g(y)|\le|g(x)-g(b)|+|g(y)-g(b)|$$




Then we have the case that




$$\forall\varepsilon_0>0,\exists\delta_0>0,\forall x\in (a,b]\land\forall y\in [b,c):|x-y|<\delta_0\implies|g(x)-g(y)|<\varepsilon_0\tag{1}$$





And by the uniform continuity of $g(x)$ on each interval we also have that




$$\forall\varepsilon_0>0,\exists\delta_a>0,\forall x,y\in (a,b]:|x-y|<\delta_a\implies|g(x)-g(y)|<\varepsilon_0\tag{2}$$
$$\forall\varepsilon_0>0,\exists\delta_b>0,\forall x,y\in [b,c):|x-y|<\delta_b\implies|g(x)-g(y)|<\varepsilon_0\tag{3}$$




Because of $(1)$, $(2)$ and $(3)$, if we take $\delta_{\omega}=\min\{\delta_0,\delta_a,\delta_b\}$ then we can finally write





$$\forall\varepsilon_0>0,\exists\delta_{\omega}>0,\forall x,y\in (a,c):|x-y|<\delta_{\omega}\implies|g(x)-g(y)|<\varepsilon_0$$




Two questions:




  1. Is my proof right? I think is right but Im not completely sure.

  2. Can you tell me some different $\delta{-}\varepsilon$ proof for the same problem?




Thank you in advance.


Answer



Let $\varepsilon>0$. By uniform continuity on $(a,b]$ and $[b,c)$, there is $\delta_1>0$ s.t. $$\forall x,y\in (a,b],\ |x-y|<\delta_1\implies |g(x)-g(y)|<\frac{\varepsilon}{2}$$
and there is $\delta_2>0$ s.t. $$\forall x,y\in [b,c),\ |x-y|<\delta_2\implies |g(x)-g(y)|<\frac{\varepsilon}{2}.$$
Let $\delta=\min\{\delta_1,\delta_2\}$ and $x\leq b\leq y$ s.t. $|x-y|\leq \delta$. Then,
$$|g(x)-g(y)|\leq |g(x)-g(b)|+|g(y)-g(b)|\leq \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.$$



As you see, $\varepsilon$ is fixed at the beginning. Since it's unspecified, we have the result for all $\varepsilon>0$, and this proves the claim.


Monday, February 13, 2017

real analysis - Find the limit without using Maclaurin series



Find a limit,
$$\lim_{x\to 0} \frac{1-\cos{(1-\cos{(1-\cos x)})}}{x^8}$$
without using Maclaurin series. My attempt was to use L'hopital's rule but that's just too much, and chances of not making a mistake trough repetitive differentiation are very low.


Answer



Hint: repeatedly use the identity $1-\cos x=2\sin^2(x/2)$:
$$\newcommand{\b}[1]{\left(#1\right)}
\newcommand{\f}{\frac}

\newcommand{\t}{\text}
\newcommand{\u}{\underbrace}
$$
$$\lim_{x\to0}\frac{1-\cos{(1-\cos{(1-\cos x)})}}{x^8}
=\lim_{x\to0}\f{2\sin^2\b{\sin^2\b{\sin^2\b{\f x2}}}}{x^8}\\
=\lim_{x\to0}\f{2\sin^2\b{\color{red}{\sin^2\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}}}}{\b{\color{red}{\sin^2\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}}}^2}.\f{\b{\color{red}{\sin^2\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}}}^2}{\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}^4}.\f{\b{\color{blue}{\sin^2\b{\color{green}{\f x2}}}}^4}{\b{\color{green}{\f x2}}^8}.\b{\f 12}^8
\\=\lim_{x\to0}\f1{128}\b{\f{\sin\b{\color{fuchsia}{\sin^2\b{\sin^2\b{\f x2}}}}}{\color{fuchsia}{\sin^2\b{\sin^2\b{\f x2}}}}}^2.\b{\f{\sin\b{\color{purple}{\sin^2\b{\f x2}}}}{\color{purple}{\sin^2\b{\f x2}}}}^4.\b{\f{\sin\b{\color{crimson}{\f x2}}}{\color{crimson}{\f x2}}}^8
\\={\large\f1{128}}\quad\b{\because \f{\sin x}x=1}$$
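A numerical sanity check of the value $1/128$, reusing the same identity $1-\cos t = 2\sin^2(t/2)$ to avoid catastrophic cancellation (an illustration, not a proof):

```python
from math import sin

def one_minus_cos(t):
    # 1 - cos(t) = 2 sin^2(t/2), numerically stable for small t
    return 2 * sin(t / 2) ** 2

x = 0.01
value = one_minus_cos(one_minus_cos(one_minus_cos(x))) / x**8
assert abs(value * 128 - 1) < 1e-3   # value is close to 1/128 = 0.0078125
```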


Sunday, February 12, 2017

sequences and series - A sum involving Fibonacci numbers, $sum_{k=1}^infty F_k/k!$

Let $F_k$ be Fibonacci numbers. I am looking for a closed form of the sum $\sum_{k=1}^\infty F_k/k!$.



I tried to use Wolfram Alpha, but it is not doing the sum Fibonacci[k]/k! , k=1 to infinity.




Can someone tell what is the problem with WA and what this sum equals to?

logic - Paradox - What is wrong with this proof that proves a false assertion?



Theorem: Let $a_{n}=a_{n-1}+1, a_1=1$. For any $n$, in order to compute $a_n$, it is necessary to compute $a_i$ for each $i=1,\dots,n-1$, which takes $\Theta(n)$ time.



Proof: This is vacuously true for $n=1$. Assume true for $n=k-1$. Prove true for $n=k$. In order to compute $a_{k-1}+1$, it is necessary to compute $a_{k-1}$. Then since $a_k=a_{k-1}+1$, in order to compute $a_k$, it is necessary to compute $a_{k-1}$. By the induction hypothesis, in order to compute $a_{k-1}$, it is necessary to compute $a_i$ for each $i=1,\dots,k-2$. Hence, in order to compute $a_k$, it is necessary to compute $a_i$ for each $i=1\dots,k-1$. QED




What is wrong with this proof? It seems valid to me, even though the theorem is false.


Answer



In your induction step, you made the additional assumption (beyond the inductive hypothesis) that it was necessary to compute $a_{k-1}$ in order to compute $a_k$. That's hardly the case, as simple inspection of the recursive definition gives us the closed-form definition $a_n:=n$ for all $n$.


functional equations - Suppose a function $f : mathbb{R}rightarrow mathbb{R}$ satisfies $f(f(f(x)))=x$ for all $x$ belonging to $mathbb{R}$.



Suppose a function $f : \mathbb{R}\rightarrow\mathbb{R}$ satisfies $f(f(f(x)))=x$ for all $x$ belonging to $\mathbb{R}$. Show that



(a) $f$ is one-to-one.



(b) $f$ cannot be strictly decreasing, and



(c) if $f$ is strictly increasing, then $f(x)=x$ for all $x$ belonging to $\mathbb{R}$.




Now, I've done the first part(the simplest that is) and I need someone to shed some light on the next two.


Answer



(b) Suppose that $f$ is strictly decreasing, then $0 < 1 \Rightarrow f(0) > f(1) \Rightarrow f(f(0)) < f(f(1)) \Rightarrow f(f(f(0))) = 0 > 1 = f(f(f(1)))$. Contradiction.



(c) Suppose that $f$ is strictly increasing. Let $x \in \mathbb{R}$.




  • Suppose $f(x) > x$. Then $f(f(x)) > f(x) \Rightarrow f(f(f(x))) > f(f(x)) \Rightarrow x > f(f(x)) > f(x)$. This is absurd.


  • You can prove in the exact same way that $f(x) < x$ is absurd. Therefore $f(x) = x$.




elementary number theory - Extended Euclidean Algorithm, what is our answer?


I am learning Euclidean Algorithm and the Extended Euclidean Algorithm. The problem I have is:


Find the multiplicative inverse of 33 modulo n, for n = 1023, 1033, 1034, 1035.

Now I learned that a multiplicative inverse only exists if the gcd of two numbers is 1. Thus, there doesn't exist a multiplicative inverse for n = 1023, 1034, 1035 because their gcds are not 1.



gcd(33, 1023) = 33
gcd(33, 1033) = 1
gcd(33, 1034) = 11
gcd(33, 1035) = 3

So we move forward with n = 1033 using the Euclidean Algorithm:


33x = 1 mod 1033
x = 1/33 mod 1033
1033 = 33 * 31 + 10
33 = 10 * 3 + 3
10 = 3 * 3 + 1
gcd(33, 1033) = 1

Now we work our way back up using the Extended Euclidean Algorithm


//EEA
1 = 10 + 3(-3)
= 10 + (33 + 10(-3))(-3)
= 33(-3) + 10(10)
= 33(-3) + (1033 + 33(-31))(10)
1 = 33(-313) + 1033(10)


So now how do we get the multiplicative inverse from this final equation that we have here?


Also, I used this video as a reference to running through EA and EEA: https://www.youtube.com/watch?v=hB34-GSDT3k and I was wondering how he was able to use EEA when his gcd was 6, and not 1?


Answer



From the last line of your calculation $1 = 33(-313) + 1033(10)$, reduce mod $1033$ to see that


$$ 1 \equiv 33(-313) \pmod{1033}. $$


So the inverse of $33$ modulo $1033$ is the equivalence class of $-313$, the least non-negative representative of which is $-313+1033 = 720$.
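For reference, the whole computation can be packaged as a small recursive extended-Euclid routine (a sketch; the names `egcd` and `modinv` are mine):

```python
def egcd(a, b):
    # return (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, n):
    g, x, _ = egcd(a, n)
    if g != 1:
        raise ValueError("no inverse: gcd(a, n) != 1")
    return x % n

assert egcd(33, 1033) == (1, -313, 10)   # matches the hand computation above
assert modinv(33, 1033) == 720           # -313 mod 1033
assert (33 * 720) % 1033 == 1
```

For the other moduli, `egcd(33, 1023)[0] == 33`, `egcd(33, 1034)[0] == 11` and `egcd(33, 1035)[0] == 3`, so no inverse exists there, as noted.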


real analysis - Does there exist a function that is differentiable everywhere with everywhere discontinuous partial derivatives?

Does there exist a function that is differentiable everywhere with everywhere discontinuous partial derivatives?



I just had this doubt, talking about first order partials.

summation - Is this equation about binomial-coefficients true?



Question : Is the following true?



$$\sum_{r=k}^{n}\frac{\binom{q}{r}}{\binom{p}{r}}=\frac{p+1}{p-q+1}\left(\frac{\binom{q}{k}}{\binom{p+1}{k}}-\frac{\binom{q}{n+1}}{\binom{p+1}{n+1}}\right)$$

for $p\ge q\ge n\ge k\in\mathbb N$.



Motivation : I've known the following :
$$\sum_{r=1}^{n}\frac{\binom{n}{r}}{\binom{n+m}{r}}=\frac{n}{m+1}.$$
This is the case of $(p,q,k)=(n+m,n,1)$. Then, I reached the above expectation. However, I can neither prove that this expectation is true nor find any counterexample. Can anyone help?


Answer



The identity can be proved in a routine way by induction on $n$. This is not true of your motivating identity, so this seems to be a nice example where a more general problem is actually easier to solve.



For the inductive step we need




$$ \frac{\binom{q}{n+1}}{\binom{p}{n+1}} = - \frac{p+1}{p-q+1} \left( \frac{\binom{q}{n+2}}{\binom{p+1}{n+2}} - \frac{\binom{q}{n+1}}{\binom{p+1}{n+1}} \right) $$



for $p \ge q > n \ge k$.
Rewriting three binomial coefficients using the identities $\binom{q}{n+2} = \binom{q}{n+1}\frac{q-n-1}{n+2}$, $\binom{p}{n+1} = \binom{p+1}{n+1}\frac{p-n}{p+1}$ and $\binom{p+1}{n+2} = \binom{p+1}{n+1}\frac{p-n}{n+2}$ and then taking out $\binom{q}{n+1}/\binom{p+1}{n+1}$, this is equivalent to



$$ \frac{p+1}{p-n} = - \frac{p+1}{p-q+1} \left( \frac{\frac{q-n-1}{n+2}}{\frac{p-n}{n+2}} - \frac{1}{1} \right) $$



which is easily verified. The base case of the induction when $n=k$ can be checked by similar methods.
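The identity can also be spot-checked exhaustively for small parameters with exact arithmetic (a sketch; the helper names are mine):

```python
from fractions import Fraction
from math import comb

def lhs(k, n, q, p):
    return sum(Fraction(comb(q, r), comb(p, r)) for r in range(k, n + 1))

def rhs(k, n, q, p):
    return Fraction(p + 1, p - q + 1) * (Fraction(comb(q, k), comb(p + 1, k))
                                         - Fraction(comb(q, n + 1), comb(p + 1, n + 1)))

# check all p >= q >= n >= k in a small range
for p in range(1, 9):
    for q in range(p + 1):
        for n in range(q + 1):
            for k in range(n + 1):
                assert lhs(k, n, q, p) == rhs(k, n, q, p)
```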


Saturday, February 11, 2017

real analysis - Differentiability of the Cantor Function

I know that the Cantor function is differentiable a.e. but I want to prove it without using the theorem about monotonic functions. I have already proved that $f'(x) = 0$ for all $x \in [0,1] \backslash \mathbb{C}$ where $\mathbb{C}$ is the Cantor set.



But I'm not sure how to go about proving that if $x \in \mathbb{C}$ then $f$ is not differentiable at $x$.



Actually, upon reflection, I think I have already proved differentiability a.e. but I would still like to know how to finish this part.



Also, the definition I am using for the function:
$$f:[0,1] \to [0,1]$$
Let $x \in [0,1]$ with ternary expansion $0.a_1a_2...$ Let $N$ be the first $n \in \mathbb{N}$ such that $a_n = 1$. If for all $n \in \mathbb{N}$, $a_n \in \{0,2\}$, let $N = \infty$.




Now define $b_n = \frac{a_n}{2}$ for all $n < N$ and $b_N = 1$.
Then $$f(x) = \sum_{n=1}^{N} \frac{b_n}{2^n}.$$

probability - Expected value of the negative part of a random variable

I want to prove that, if $X$ is a real valued random variable with finite expected value, then:



$$\mathbb{E}[X]=\displaystyle \int_{0}^{\infty} \mathbf{P}(X \geq t)dt - \int_{-\infty}^{0} \mathbf{P} (X \leq t)dt.$$



We have that $$\mathbb{E}[X] = \mathbb {E} (X^{+}) - \mathbb{E} (X^{-})$$
and I know how to prove that if $Y$ is a non-negative r.v., then its expected value can be expressed as $$\mathbb{E}[Y]=\displaystyle \int_{0}^{\infty} \mathbf{P}(Y \geq t)dt.$$



I am having trouble expressing the second integral as the expectation of the negative part, $X^{-}$.




Can anyone help me with that?

limits - $limlimits_{nto infty}frac{n^k}{2^n}$ without l'Hopital




Let $\varepsilon >0$. Let $N>?$. For any integer $n>N$ we have
$$\frac{n^k}{2^n}<\varepsilon.$$
I don't know how to proceed here sensically.




I'd say we have to start with



$$<\frac{n^k}{2^N}$$



But what do we do here about the $n^k$?



Remember, I don't want to use l'Hopital or for me unproven limit laws like "exponents grow faster than powers"



Also, I don't want to use hidden l'Hopital, i.e. argumenting with derivatives. Since we don't even have proven derivative laws.


Answer




Let's take the logarithm. One gets



$$\log{n^k\over 2^n}=k\log{n}-n\log{2}=-n\left(\log{2}-k{\log{n}\over n}\right)$$



Now when $n\to\infty$ one has $\log{n}/n\to 0$ and so



$$\lim_{n\to\infty}\log{n^k\over 2^n}=-\infty$$



And so




$$\lim_{n\to\infty}{n^k\over 2^n}=0$$


Friday, February 10, 2017

abstract algebra - What does it mean for elements to be algebraically independent?




Wikipedia's definition:




"In abstract algebra, a subset S of a field L is algebraically independent over a subfield K if the elements of S do not satisfy any non-trivial polynomial equation with coefficients in K."



My textbook's definition:



"Let A be a subring of the commutative ring B. We say that the
elements $b_1, . . . , b_n$ of B are algebraically independent over A if the evaluation map
$ε_{b_1,...,b_n} : A[X_1, . . . , X_n] → B$ that evaluates each $X_i$ at $b_i$ is injective."





I'm a bit confused about how these two definitions are related. I think Wikipedia's definition is clearer, but I'm not sure how it relates to the definition in the textbook. In other words, why are we looking at this evaluation map in order to understand the definition of algebraic independence?



Thanks in advance


Answer



The evaluation map at a point $(b_1,\ldots,b_n)\in B^n$ takes a polynomial $f(x_1,\ldots,x_n)\in A[x_1,\ldots,x_n]$ to its value at the point: $f(b_1,\ldots,b_n)\in B$. Clearly the zero polynomial will map to zero, but if the $b_i$ are algebraically independent by the first definition, then only the zero polynomial will map to zero, making the evaluation map injective.
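A one-element example may help connect the two definitions (this illustration is an addition, not part of the original answer):

```latex
Take $A = \mathbb{Q}$, $B = \mathbb{R}$. The element $\sqrt{2}$ is \emph{not}
algebraically independent over $\mathbb{Q}$: the nonzero polynomial
$X^2 - 2 \in \mathbb{Q}[X]$ satisfies $\varepsilon_{\sqrt{2}}(X^2 - 2) = 0$,
so the evaluation map has nontrivial kernel and is not injective.
By contrast, $\pi$ is transcendental, so
$\varepsilon_{\pi}\colon \mathbb{Q}[X] \to \mathbb{R}$ is injective and
$\{\pi\}$ is algebraically independent over $\mathbb{Q}$.
```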


algebra precalculus - Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$?


We know $$i^2=-1 $$then why does this happen? $$ i^2 = \sqrt{-1}\times\sqrt{-1} $$ $$ =\sqrt{-1\times-1} $$ $$ =\sqrt{1} $$ $$ = 1 $$


EDIT: I see this has been dealt with before but at least with this answer I'm not making the fundamental mistake of assuming an incorrect definition of $i^2$.


Answer



From $i^2=-1$ you cannot conclude that $i=\sqrt{-1}$, just like from $(-2)^2 = 4$ you cannot conclude that $-2=\sqrt 4$. The symbol $\sqrt a$ is by definition the positive square root of $a$ and is only defined for $a\ge0$.


It is said that even Euler got confused with $\sqrt{ab} = \sqrt{a}\,\sqrt{b}$. Or did he? See "Euler's 'mistake'? The radical product rule in historical perspective" (Amer. Math. Monthly 114 (2007), no. 4, 273–285).
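The failure of $\sqrt{ab}=\sqrt{a}\,\sqrt{b}$ for negative $a,b$ can be seen directly with Python's cmath module, which implements the principal square root (this demonstration is an addition):

```python
import cmath

# With the principal square root, sqrt(-1) = i, so sqrt(-1)·sqrt(-1) = i² = -1,
# while sqrt((-1)·(-1)) = sqrt(1) = 1; the product rule breaks for two negatives.
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)   # i * i = -1
rhs = cmath.sqrt((-1) * (-1))           # sqrt(1) = 1
print(lhs, rhs)
```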



Thursday, February 9, 2017

limits - Solving $\lim_{n \to \infty}{\frac{n^a}{\log\left(\left| \log(n^a)\right|\right)}}$




I haven't practiced limits for years, now I need them to solve an exercise and I don't know whether I have come up with the right solution.



$$\lim_{n \to \infty}{\frac{n^a}{\log\left(\left| \log(n^a)\right|\right)}}$$



where $a$ is a fixed constant.



Since I have the form $\frac{\infty}{\infty}$, I apply the De L'Hopital theorem, so I derive both numerator and denominator, so:



$$\lim_{n \to \infty}{\frac{an^{a-1}}{\frac{a}{n\log(n^a)}}} = \lim_{n \to \infty}{n\log(n^a)} = \lim_{n \to \infty}{a n\log(n)} = \infty$$




Can you please give me any feedback?


Answer



The expression is defined only for $a\ne0$, because for $a=0$ the denominator would have $\log 0$.



So, assume $a>0$; then you can rewrite the denominator as
$$
\log(|\log n^a|)=\log(|a\log n|)=\log(|a|\log n)=\log|a|+\log\log n
$$
(at least for $n>1$, which is implied).




Since both numerator and denominator go to infinity, you can apply l'Hôpital's theorem:
$$
\lim_{n\to\infty}\frac{n^a}{\log|a|+\log\log n}=
\lim_{n\to\infty}\frac{an^{a-1}}{\frac{1}{\log n}\frac{1}{n}}=
\lim_{n\to\infty}an^{a-1}\cdot n\log n=
\lim_{n\to\infty}an^a\log n
$$
provided this last limit exists. Does it?



Don't forget to apply the chain rule when differentiating $\log\log n$.




For $a<0$ you have a slightly different, but easier, situation.
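A numeric look at both cases (this check is an addition; the exponents $a=\pm\tfrac12$ are chosen arbitrarily):

```python
import math

def f(n, a):
    # n^a / log(|log(n^a)|), defined for a ≠ 0 and n large enough
    return n**a / math.log(abs(math.log(n**a)))

# a > 0: the ratio blows up; a < 0: it decays to 0.
grow = [f(n, 0.5) for n in (10**2, 10**4, 10**6)]
decay = [f(n, -0.5) for n in (10**2, 10**4, 10**6)]
print(grow, decay)
```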


real analysis - Showing that $\int_{0}^{\infty} u^{-\alpha} \sin(u) \, du >0$ for $0<\alpha<1$

Does anyone know how to show $\int_{0}^{\infty} u^{-\alpha} \sin(u) \, du >0$ for $0<\alpha<1$ (without explicitly having to calculate the exact value)?
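Not a proof, but numeric evidence via the standard alternating-arches decomposition (this sketch is an addition; $\alpha=\tfrac12$ is chosen arbitrarily):

```python
import math

# Split the integral into arches [kπ, (k+1)π].  The unsigned arch integrals
# I_k = ∫ u^(−α) |sin u| du decrease in k (u^(−α) is decreasing), and the
# signs alternate, so the alternating partial sums stay positive.
alpha = 0.5

def arch(k, steps=20000):
    # Midpoint Riemann sum of u^(−α)|sin u| over [kπ, (k+1)π]
    a = k * math.pi
    h = math.pi / steps
    return sum(((a + (i + 0.5) * h) ** -alpha) * abs(math.sin(a + (i + 0.5) * h)) * h
               for i in range(steps))

arches = [arch(k) for k in range(20)]
partial = sum((-1) ** k * arches[k] for k in range(20))
print(arches[:3], partial)  # decreasing arch sizes, positive partial sum
```

Since $I_0 > I_1 > I_2 > \cdots$, the alternating series argument gives $\int_0^\infty u^{-\alpha}\sin u\,du \ge I_0 - I_1 > 0$, which is essentially the proof being asked for.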

Wednesday, February 8, 2017

galois theory - Extension field



Let $E$ be an extension field of $k$ of degree $n$. I want to know if for $\alpha\in E$ the minimal polynomial of $\alpha$ has degree $r\leq n$. I think it is true, but I could use some help.


Answer




Conclude the desired result from the following:



Observation 1: Let $W$ be a subspace of a finite dimensional vector space $V$. Then $$\dim W \leq \dim V.$$
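Spelling out how the observation yields the claim (this sketch is an addition to the original answer):

```latex
Since $[E:k] = n$, the $n+1$ elements $1, \alpha, \alpha^2, \ldots, \alpha^n$
cannot be linearly independent over $k$. Hence there are
$c_0, \ldots, c_n \in k$, not all zero, with
$$c_0 + c_1\alpha + \cdots + c_n\alpha^n = 0,$$
i.e.\ $\alpha$ is a root of a nonzero polynomial of degree at most $n$.
The minimal polynomial of $\alpha$ divides any such polynomial, so its degree
$r$ satisfies $r \le n$.
```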


real analysis - Find a bijection between 2 intervals


So I'm supposed to find a bijection between $[0,1) \longrightarrow [0,1]$. My attempt of solution is the function defined as


$$f(x):=\begin{cases} 2x, & \text{if} \,\, x=\dfrac{1}{2^n} \text{ for some } n\geq 1 \\ x, &\text{otherwise} \end{cases}$$


Using this we have that if $x=\frac{1}{2}$, then $f(1/2)=1$ and so we got that covered. And for $x=\frac{1}{4}$ we have $f(1/4)=1/2$ and so on. Is this correct? Is it possible to assume that there is a bijection, when $n$ goes to infinity?


I also got another question: define a bijection from $[0,1) \longrightarrow [0,1) \times [0,1) \times \ \dots \ \times [0,1)$ $(n-$times$)$. My solution is just to define the value of $f(x)$ as a vector, i.e take $x \in [0,1)$ and $$f(x):=(x,x,x,\dots , x)$$ Is this correct?


Thank you in advance!


Answer




For the first function you either compute the inverse and show that it is both a right and a left inverse, or you show that it is injective and surjective (that is, take two distinct elements and show that they are mapped to different images, and show that every element in $[0,1]$ has a pre-image).


The function is correct. But why do you say "assume there is a bijection"? What do you mean by "$n$ goes to infinity"? You should take care that $f$ is well-defined, which it is, since the condition $x= \frac{1}{2^n}$ is either true or false.


For the second function: is $(0,\frac{1}{2}, \cdots, 0)$ in the image? You should also say $n$-tuple instead of vector, since the codomain is not equipped with a vector space structure.
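A finite sanity check of the first map (this check is an addition; exact rationals via fractions avoid floating-point ties when testing $x=1/2^n$):

```python
from fractions import Fraction

# The proposed map f : [0,1) → [0,1]:
#   f(x) = 2x if x = 1/2^n for some n ≥ 1, else f(x) = x.
powers = {Fraction(1, 2**n) for n in range(1, 12)}

def f(x):
    return 2 * x if x in powers else x

# Sample of exact points: the dyadic powers plus some non-dyadic rationals.
sample = sorted(powers | {Fraction(0), Fraction(1, 3), Fraction(3, 10), Fraction(7, 9)})
images = [f(x) for x in sample]

# injective on the sample, and f reaches 1 (so 1 ∈ [0,1] is covered)
print(len(set(images)) == len(sample), Fraction(1) in images)
```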


Tuesday, February 7, 2017

calculus - The application of Squeeze Theorem to find the limit of a trigonometric problem

I've been learning Squeeze Theorem (and limits in general), but am having problems understanding how to apply it. I understand the basics of the theorem (I think), but I've come across a problem that I'm not even sure how to start solving. I realize that Squeeze Theorem is the way to solve it, but beyond that, I'm clueless.



I've searched around the site, and a few other places online, but I can't seem to find a similar problem.




So, here's my problem:



$$\lim_{x\to 0} \frac{3 - \sin(e^x)}{\sqrt{x^2 + 2}}$$



Looks easy enough, but I'm clearly missing something obvious. How do we approach a problem like this? I'd love to show my work, but I'm not sure where to start.
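A numeric probe of the limit (this probe is an addition, not part of the original question): since $\sin$, $\exp$, and the denominator are continuous and the denominator is nonzero at $x=0$, direct substitution suggests the candidate value $(3-\sin 1)/\sqrt{2}$.

```python
import math

def g(x):
    return (3 - math.sin(math.exp(x))) / math.sqrt(x * x + 2)

# Candidate limit from substituting x = 0 directly.
candidate = (3 - math.sin(1)) / math.sqrt(2)
probes = [g(10.0 ** -k) for k in (2, 4, 6)]
print(candidate, probes)  # probes approach the candidate
```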

trigonometry - Euler's formula simplification


I have been trying to simplify this expression:


$$\cos (\theta) - \frac{1}{\cos (\theta) + i \sin(\theta)}$$


into:


$$\ i \sin(\theta).$$


However I can't find what steps to take to get to this simplification.


Euler's formula states:


$$\ e^{i\theta}= \cos (\theta) + i \sin(\theta) $$


It is linked to this formula however I am not sure how to go about this.


Answer




$$\cos(\theta)-\frac{1}{\cos(\theta)+i\sin(\theta)}=\cos(\theta)-\frac{1}{e^{i\theta}}=$$ $$\cos(\theta)-e^{-i\theta}=\frac{1}{2}e^{i\theta}+\frac{1}{2}e^{-i\theta}-e^{-i\theta}=$$ $$=\frac{1}{2}e^{i\theta}-\frac{1}{2}e^{-i\theta}=i\Big(\frac{1}{2i}e^{i\theta}-\frac{1}{2i}e^{-i\theta}\Big)=i\sin(\theta)$$
using $\cos(\theta)=\frac{1}{2}\big(e^{i\theta}+e^{-i\theta}\big)$ in the second line and $\sin(\theta)=\frac{1}{2i}\big(e^{i\theta}-e^{-i\theta}\big)$ in the last.
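A quick numeric confirmation of the simplification at a few sample angles (this check is an addition):

```python
import cmath

# Check cos θ − 1/(cos θ + i sin θ) = i sin θ numerically.
def lhs(t):
    return cmath.cos(t) - 1 / (cmath.cos(t) + 1j * cmath.sin(t))

def rhs(t):
    return 1j * cmath.sin(t)

errs = [abs(lhs(t) - rhs(t)) for t in (0.1, 1.0, 2.5, -0.7)]
print(errs)  # all at floating-point noise level
```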


functional equations - Solution(s) to $f(x + y) = f(x) + f(y)$ (and miscellaneous questions...)



My lecturer was talking today (in the context of probability, more specifically Kolmogorov's axioms) about the additive property of functions, namely that:




$$f(x+y) = f(x) + f(y)$$



I've been trying to find what functions satisfy this. Intuition says that, for functions over $\mathbb{R}$, the only functions should be of the form $f(x) = ax$ for some real a. Unfortunately I've only shown this is true when the domain of the function is the rational multiples of a given real number.



My question is if it is possible to extend this result (that $f(x) = ax$ given additivity) to the real numbers, possibly without assuming the continuity of $f$. It seems to me that additivity introduces so many constraints on a function that nothing but the trivial case would be able to sneak through. The following is a summary of my thoughts to date, though they're obviously long and not 'compulsory reading'. :)



When x is rational - Preliminary Investigation



It is not hard to see that:




$$f(x + x + x) = 3f(x)$$



and more generally, for $a\in \mathbb{N}$,
$$f(ax) = af(x)$$
It is not too hard to prove (well, it took half a bus trip ... ) that this also applies first for $a\in \mathbb{Z}$ and then for $a\in \mathbb{Q}$; for the latter you just need to consider $a=m/n$ and then note that:



$$f\Big(\frac{m}{n}x\Big)=mf\Big(\frac{x}{n}\Big)=\frac{m}{n}\cdot nf\Big(\frac{x}{n}\Big)=\frac{m}{n}\cdot f\Big(n\frac{x}{n}\Big)=\frac{m}{n}\cdot f(x)$$



The reason this little equation is cool is that we can set $x = 1$ and get:




$$f(a)=a\cdot f(1)$$



which is equivalent to what was expected intuitively, namely (after changing $a$ to $y$ and $f(1)$ to $a$)



$$f(y) = a\cdot y$$



as long as y is rational



$y$ is a rational multiple of a real number




But we can do a bit better than that. If we substitute in $x = \sqrt{2}$ or any other real number in $f(ax) = af(x)$ (which we know for rational $a$), you can conduct the exact same argument above and show that, for instance



$$f(y) = \Big(\frac{f(\sqrt{2})}{\sqrt{2}}\Big)\cdot y=a\cdot y$$



Whenever $y = \frac{m}{n}\sqrt{2}$ i.e. whenever $y$ is a rational multiple of $\sqrt{2}$. Note however, that the value of the coefficient $a$ (i.e. the slope of the line) is apparently completely unrelated to the value taken in the case where $y$ is purely rational.



What I'm actually asking



We still haven't shown that $$f(x) = ax$$ for all $x \in \mathbb{R}$, as the slope of the line may change depending on what real number we are taking rational multiples of. As far as I've shown now, we might have $f(x) = x$ when $x$ is rational, $f(x) = 3x$ when $x$ is a rational multiple of $\sqrt{2}$, etc.




I still feel that $f(x) = ax$ for all $x \in \mathbb{R}$. One reason for thinking this comes from noting that $$f(2) = f(2-\sqrt{2})+f(\sqrt{2})$$



$2$, $2-\sqrt{2}$ and $\sqrt{2}$ are not rational multiples of each other, however the equation above gives a restraint on the slopes of the lines formed by their rational multiples (which we'll call $a_1, a_2$ and $a_3$ for the slopes on the rational multiples of $2, 2-\sqrt{2}$ and $\sqrt{2}$ respectively). We have $2a_1 = (2-\sqrt{2}) a_2 + \sqrt{2} a_3$



There are so many constraints here: all the rational multiples have the same coefficient, and whenever two (or more) numbers which aren't rational multiples of each other are added together, we get another constraint on their coefficients. The trivial solution is just that $$f(x) = ax$$



over $x \in \mathbb{R}$ and I really struggle to see how any other solution could possible squeeze through all these constraints.



Is there an additive function on $\mathbb{R}$ not of the form $f(x) = ax$?


Answer




Yes. See the Wikipedia article on Cauchy's Functional Equation.
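To expand on the pointer to Wikipedia (this sketch is an addition to the original answer): assuming the axiom of choice, a Hamel basis yields non-linear additive functions.

```latex
Let $B$ be a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$, so every real $x$
has a unique finite representation $x = \sum_i q_i b_i$ with $q_i \in \mathbb{Q}$
and $b_i \in B$. For any function $g\colon B \to \mathbb{R}$, the
$\mathbb{Q}$-linear extension
$$f\Big(\sum_i q_i b_i\Big) := \sum_i q_i\, g(b_i)$$
is additive. Choosing $g$ with $g(b_1)/b_1 \neq g(b_2)/b_2$ for two basis
elements yields an additive $f$ that is not of the form $f(x) = ax$
(such an $f$ is necessarily discontinuous and non-measurable).
```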


sequences and series - Error with the proof that all solutions to the Cauchy Functional Equation are linear



If $f(x)$ is continuous, it is known that $f(x+y)=f(x)+f(y)$ implies that $f(x)$ is linear, and non-continuous solutions are discussed in these links. (1, 2,3, 4)


However, what is wrong with this proof that all solutions to the Cauchy Functional Equation are of the form $f(x)=cx$?


If $x$ is rational, it is known that $f(x)=cx$ for some fixed constant $c$, as seen here.


If $x$ is irrational let us assume that $x=n+\alpha$, where $0 \le \alpha <1$.


$f(x)=f(n+\alpha)=f(n)+f(\alpha)$.


Because of the upper result, $f(n)=cn$.


Let the decimal expansion of $\alpha$ be $\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } } $


Note that $\frac { { a }_{ i } }{ { 10 }^{ i } } $ is rational.


Then, $$f(\alpha)=f(\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } })=\sum _{ i=1 }^{ \infty }{ f(\frac { { a }_{ i } }{ { 10 }^{ i } } ) } =c\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } }=c\alpha $$


Therefore $f(x)=cn+c\alpha=cx$. What did I do wrong?



Answer



The answer is in the comments:


How do you prove that $f(\sum _{ i=1 }^{ \infty }{ \frac { { a }_{ i } }{ { 10 }^{ i } } })$ equals $\sum _{ i=1 }^{ \infty }{ f(\frac { { a }_{ i } }{ { 10 }^{ i } } ) }$ without assuming $f$ continuous?


Exactly. Let $b_n=\sum_{j=1}^n a_j10^{-j}$. Then $f(\sum_{j=1}^{\infty}a_j10^{-j})=\sum _{j=1}^{\infty}f(a_j10^{-j})$ is equivalent to $f(\lim_{n\to \infty}b_n)=\lim_{n\to \infty}f(b_n)$. This assumes $f$ is continuous at $\alpha$.


Monday, February 6, 2017

linear algebra - Proving that $(A^n)^{-1} = (A^{-1})^n$ for invertible matrix $A$.



I have seen a proof of the fact that for an invertible matrix $A$, $A^n$ is also invertible and
$$
(A^n)^{-1} = (A^{-1})^n.
$$

The proof was by induction, and it was mentioned that one has to use induction because the statement is claimed for all $n$.



I am wondering why one has to use induction. Why can't one just say that
$$
(A^n)(A^{-1})^n = (AA^{-1})^n = I^n =I
$$
where one has used that $A$ and $A^{-1}$ by definition commute. Isn't this enough to show that $A^n$ is also invertible and $(A^n)^{-1} = (A^{-1})^n$?


Answer



Your proof is fine (to me, at least). I think if you wanted to be super rigorous, you'd need induction anyway to fully show that $(A^n)(A^{-1})^n=(AA^{-1})^n$, because even though $A$ and $A^{-1}$ commute, you need to be sure it holds for arbitrarily large products of the two.




Honestly though, like I said, your proof is fine how it is.
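A concrete check of the identity (this example is an addition; the matrix is chosen arbitrarily with determinant 1 so its inverse is exact and integer arithmetic avoids floating-point issues):

```python
# Check that (A^n)(A^{-1})^n = I for a 2×2 integer matrix with det = 1.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(X, n):
    R = [[1, 0], [0, 1]]  # identity
    for _ in range(n):
        R = matmul(R, X)
    return R

A = [[2, 1], [1, 1]]        # det = 1
A_inv = [[1, -1], [-1, 2]]  # exact inverse of A

n = 5
prod = matmul(matpow(A, n), matpow(A_inv, n))
print(prod)  # identity matrix
```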


number theory - How to prove that any (integer)$^{1/n}$ that isn't an integer, is irrational?




Is my proof below correct and complete?




I wanted to prove that for any $n$th root of an integer, if it's not an integer, then it's irrational:
$$\begin{cases}
m,n\in \mathbb{N}\\\sqrt[n]{m}\notin \mathbb{N}
\end{cases}\implies \sqrt[n]{m}\notin \mathbb{Q}.$$



I start by assuming that $m^{\frac 1n}$ is rational and non-integer. So there exist co-prime integers $a,b$ so that $$\sqrt[n]{m}=\frac{a}{b}$$ $$\implies
m=\frac{a^n}{b^n}\in\mathbb{N}.$$
But since $a$ and $b$ have no common factor, $a^n$ and $b^n$ also have no common factor, and $b>1$ because $\sqrt[n]{m}$ is not an integer. So:
$$\frac{a^n}{b^n}\notin\mathbb{N},$$

a contradiction.


Answer



Your proof is fine. You can use essentially the same idea to prove the following more general statement:



Theorem. If $ P(X) \in \mathbf Z[X] $ is a monic polynomial, then any rational roots of $ P $ are integers. In other words, $ \mathbf Z $ is integrally closed.



Proof. Assume that $ q = a/b $ is a rational root with $ a, b $ coprime, and let $ P(X) = X^n + c_{n-1} X^{n-1} + \ldots + c_0 $. We have $ P(q) = 0 $, which gives



$$ a^n + c_{n-1} a^{n-1} b + \ldots + c_0 b^n = 0 $$




In other words, $ a^n $ is divisible by $ b $. This is a contradiction unless $ b = \pm 1 $, since then any prime dividing $ b $ also divides $ a $, contradicting coprimality. Hence, $ b = \pm 1 $ and $ q \in \mathbf Z $.


elementary number theory - Find the last two digits of $2^{2156789}$

Find the last two digits of $2^{2156789}$.



My try:




As $2156789 = 4\cdot 539197 + 1$, the units digit of $2^{2156789}$ is the same as the units digit of $2^{4n+1}$, which is equal to 2. But I'm unable to find the tens digit. Please help me.
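For what it's worth, the value is easy to verify computationally (this verification is an addition, not part of the original post); Python's three-argument pow performs modular exponentiation directly:

```python
# The last two digits of 2^2156789 are 2^2156789 mod 100.  By hand one works
# mod 4 and mod 25: the order of 2 mod 25 is 20, 2156789 ≡ 9 (mod 20),
# 2^9 = 512 ≡ 12 (mod 25), and 12 ≡ 0 (mod 4), so the answer is 12.
last_two = pow(2, 2156789, 100)
print(last_two)  # 12
```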

logic - Who first proved that the second-order theory of real numbers is categorical?

The second-order theory of real numbers is obtained by taking the axioms of ordered fields and adding a (Dedekind) completeness axiom, which states that every set which has an upper bound has a least upper bound. This theory is categorical, meaning that it has a unique model up to isomorphism.



My question is, who first managed to prove this fact? Let me give a proof here, so we can be clear as to what we're talking about. Let us first define the natural numbers in our theory. A hereditary set is a set which contains x+1 whenever it contains x, and a natural number is a real number which is an element of every hereditary set containing 0. Let me show that these natural numbers satisfy the axioms of second-order Peano arithmetic.





  1. To prove that 0 is a natural number, we must prove that 0 belongs to every hereditary set containing 0. That's trivial because 0 belongs to every set containing 0.


  2. We must prove that whenever x is a natural number, so is x+1. Consider any hereditary set X containing 0. Then by definition of natural number, x must belong to X, and thus by definition of hereditary, x+1 must belong to X. Since X was arbitrary, x+1 belongs to every hereditary set containing 0, and thus x+1 is a natural number.


  3. The fact that x+1=y+1 implies x=y follows directly from the field axioms, which are part of the second-order theory of real numbers.


  4. We must prove that there exists no natural number x such that x+1=0, which in our case is equivalent to the statement that -1 is not a natural number. Well, just consider the set of all natural numbers greater than or equal to 0 (which is of course all of them, but we don't know that yet). Then this is a hereditary set containing 0, and yet -1 doesn't belong to it, so -1 is not a natural number.


  5. The induction axiom says that if X contains 0 and contains x+1 whenever it contains x, then X contains all the natural numbers. But that is just saying that if X is a hereditary set containing 0, all natural numbers belong to it, which is trivial given our definition of natural number.


  6. The defining axioms of addition and multiplication in Peano arithmetic follow easily from the field axioms.




Since the axioms of second-order Peano arithmetic are categorical, it follows that the sets of natural numbers in any two models of the second-order theory of real numbers must be isomorphic, and thus the sets of rational numbers in the two models must also be isomorphic. (The isomorphism can be extended straightforwardly to the rational numbers.) Let’s say we have two models R and S, with sets of rational numbers Q_R and Q_S, and with the isomorphism g from Q_R to Q_S. Then let us define the map f from R to S as follows: f(x) is the least upper bound of the set g({q in Q_R: q < x}). In other words, f(x) is the least upper bound of the Dedekind cut in Q_S which corresponds to the Dedekind cut of x in Q_R. We can easily check that f is an isomorphism, so any two models of the second-order theory of real numbers are isomorphic.




Who was the first person to write down this proof or something like it?



Any help would be greatly appreciated.



Thank You in Advance.

Proving that a function is a bijection




Let $m_1,\ldots,m_n$ be pairwise coprime and let $m=m_1m_2\cdots m_n$. Show that the map



\begin{align}
\theta\,\colon \mathbb{Z}_m &\to \prod_{i=1}^n \mathbb{Z}_{m_i}\\
a+m\mathbb{Z} &\mapsto (a_1+m_1\mathbb{Z},\ldots,a_n+m_n\mathbb{Z})
\end{align}



is an isomorphism of rings assuming you know it is well defined and a homomorphism of rings.







So it is left to show that the function is a bijection. If we take



$$(a_1+m_1\mathbb{Z},\ldots,a_n+m_n\mathbb{Z})\in \prod_{i=1}^n \mathbb{Z}_{m_i}$$



Since $(m_i,m_j)=1$ for $i\not=j$ (the moduli are pairwise coprime), by the Chinese remainder theorem there exists a unique $x$ modulo $m$ such that



$$x\equiv a_i\pmod{m_i}$$




since this is true for any $1\le i\le n$ we have that



$$x\equiv a\pmod{m}\implies x = a + m\mathbb{Z}$$



So there exists an inverse map:



\begin{align}
\theta^{-1}\,\colon \prod_{i=1}^n \mathbb{Z}_{m_i} &\to \mathbb{Z}_m\\
(a_1+m_1\mathbb{Z},\ldots,a_n+m_n\mathbb{Z}) &\mapsto a+m\mathbb{Z}
\end{align}




Hence $\theta$ is surjective. Since $x$ is unique, it is also injective. Therefore $\theta$ is bijective.






Is this a sufficient proof for such a question?


Answer



It is left to show that the function is a bijection. Take



$$(a_1+m_1\mathbb{Z},\ldots,a_n+m_n\mathbb{Z})\in \prod_{i=1}^n \mathbb{Z}_{m_i}$$




Since $(m_i,m_j)=1$ for $i\not=j$ (the moduli are pairwise coprime), by the Chinese remainder theorem there exists a unique $x$ modulo $m$ such that



$$x\equiv a_i\pmod{m_i}$$



since this is true for any $1\le i\le n$ we have that



$$x\equiv a\pmod{m}\implies x = a + m\mathbb{Z}$$



So there exists an inverse map:




\begin{align}
\theta^{-1}\,\colon \prod_{i=1}^n \mathbb{Z}_{m_i} &\to \mathbb{Z}_m\\
(a_1+m_1\mathbb{Z},\ldots,a_n+m_n\mathbb{Z}) &\mapsto a+m\mathbb{Z}
\end{align}



Hence $\theta$ is surjective. Since $x$ is unique, it is also injective. Therefore $\theta$ is bijective.
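A finite sanity check of the bijection (this check is an addition; the pairwise-coprime moduli 3, 4, 5 are chosen for illustration):

```python
# θ : Z_60 → Z_3 × Z_4 × Z_5 sends a mod 60 to (a mod 3, a mod 4, a mod 5).
# Since domain and codomain both have 3·4·5 = 60 elements, counting distinct
# images suffices: 60 distinct images means θ is injective, hence bijective.
moduli = (3, 4, 5)
m = 60

images = {tuple(a % mi for mi in moduli) for a in range(m)}
print(len(images) == m)  # True
```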


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...