Wednesday, May 31, 2017

linear algebra - Write 3 by 4 matrix as a product of elementary matrices and a row echelon form matrix

I want to write $A$ as a product of $4$ matrices $B$,$C$,$D$, and $E$ such that $B$,$C$, and $D$ are elementary matrices and $E$ is row-echelon form.



$$
A = \begin{bmatrix}
0&1&7&8\\
1&3&3&8\\
-2&5&1&8
\end{bmatrix}
$$



Thanks
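Not an answer, but the kind of factorization being asked for is easy to sanity-check numerically. The three row operations below are one possible choice (a sketch using numpy; the specific operations are my own, and any valid sequence of three elementary row operations reaching an echelon form works equally well):

    import numpy as np

    A = np.array([[ 0., 1., 7., 8.],
                  [ 1., 3., 3., 8.],
                  [-2., 5., 1., 8.]])

    # Three elementary row operations that reduce A to row echelon form:
    E1 = np.eye(3); E1[[0, 1]] = E1[[1, 0]]   # swap rows 1 and 2
    E2 = np.eye(3); E2[2, 0] = 2.0            # add  2 * (row 1) to row 3
    E3 = np.eye(3); E3[2, 1] = -11.0          # add -11 * (row 2) to row 3

    E = E3 @ E2 @ E1 @ A                      # row echelon form of A
    B, C, D = map(np.linalg.inv, (E1, E2, E3))
    assert np.allclose(B @ C @ D @ E, A)      # A = B C D E, as required
    print(E)        # [[1,3,3,8],[0,1,7,8],[0,0,-70,-64]]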

calculus - Proving that a continuous function has a solution $f:[0,1]\to \mathbb R$





Let $f:[0,1]\to \mathbb R$ be a continuous function such that $f(0)=f(1)$.



Prove that $f(x)=f\left(x+\frac12\right)$ has a solution for $x\in [0,\frac12]$.





This question has to do with continuity and the intermediate value theorem.



I observed that $f(0)=f\left(\frac12\right)=f(1)$, but I don't see how to show that the difference goes through zero (i.e. that the equation has a solution); for all we know, $f$ could be a straight line parallel to the $x$-axis on $[0,1]$.


Answer



Hint: put $g(x)=f(x)-f\left(x+\dfrac{1}{2}\right)$, which is also continuous.



Then $g(0)=f(0)-f\left(\dfrac{1}{2}\right)$ and $g\left(\dfrac{1}{2}\right)=f\left(\dfrac{1}{2}\right)-f(1)=-\left(f(0)-f\left(\dfrac{1}{2}\right)\right)$, so that $g(0)g\left(\dfrac{1}{2}\right)\le 0$. If $g(0)=0$ we are done; otherwise $g(0)g\left(\dfrac{1}{2}\right)<0$ and the intermediate value theorem gives a zero of $g$ in $\left(0,\dfrac{1}{2}\right)$.


exponentiation - For each irrational number $b$, does there exist an irrational number $a$ such that $a^b$ is rational?



It is well known that there exist two irrational numbers $a$ and $b$ such that $a^b$ is rational.




By the way, I've been interested in the following two propositions.



Proposition 1 : For each irrational number $a\gt 0$, there exists an irrational number $b$ such that $a^b$ is rational.



Proposition 2 : For each irrational number $b$, there exists an irrational number $a$ such that $a^b$ is rational.



I got the following :





Proposition 1 is true.



Suppose that both $\frac{\ln 2}{\ln a}$ and $\frac{\ln 3}{\ln a}$ are rational. Then there exists a set of four non-zero integers $(m_1,m_2,n_1,n_2)$ such that $\frac{\ln 2}{\ln a}=\frac{n_1}{m_1}$ and $\frac{\ln 3}{\ln a}=\frac{n_2}{m_2}$. Since one has $a=2^{m_1/n_1}=3^{m_2/n_2}$, one has $2^{m_1n_2}=3^{m_2n_1}$. This contradicts unique factorization, since a nonzero power of $2$ cannot equal a nonzero power of $3$. It follows that either $\frac{\ln 2}{\ln a}$ or $\frac{\ln 3}{\ln a}$ is irrational. Hence, either setting $b=\frac{\ln 2}{\ln a}$ or setting $b=\frac{\ln 3}{\ln a}$ works.
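As a purely numeric illustration of this argument (the choice $a=\pi$ is mine, only for demonstration):

    import math

    a = math.pi                    # an irrational a > 0 (illustrative choice)
    b = math.log(2) / math.log(a)  # the candidate exponent from the argument above
    print(a ** b)                  # 2.0000000000000004, i.e. a^b = 2, a rational value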




Then, I began to consider if proposition 2 is true.



To prove that proposition 2 is true, it is sufficient to show that for each irrational number $b$, there exists a rational number $c$ such that $c^{1/b}$ is irrational.



This seems true, but I have not been able to prove that. So, my question is the following :





Question : Is proposition 2 true? If yes, how can we show that? If no, what is a counterexample?



Proposition 2 : For each irrational number $b$, there exists an irrational number $a$ such that $a^b$ is rational.



Answer



What you want to prove is: given an irrational number $b$, at least one of the numbers
$$r^{\frac{1}{b}},\qquad r\in \Bbb Q, $$
is irrational. This seems to be an attainable result given what we know nowadays about the transcendence of numbers; we have, for example, the following theorem:





Six Exponentials Theorem: Let $(x_1,x_2)$ and $(y_1,y_2,y_3)$ be two sets of complex numbers, each linearly independent over the rationals. Then at least one of
$$e^{x_1y_1},e^{x_1y_2},e^{x_1y_3},e^{x_2y_1},e^{x_2y_2},e^{x_2y_3}$$
is transcendental.




Given an irrational number $x$, let $x_1=1,x_2=x$ and $y_1=\ln(p_1),y_2=\ln(p_2),y_3=\ln(p_3)$ for some distinct primes $p_1,p_2,p_3$. Using this theorem we have that at least one of
$$p_1,p_2,p_3,p_1^x,p_2^x,p_3^x $$
is transcendental; since the $p_i$ themselves are rational, at least one of $p_1^x,p_2^x,p_3^x$ is irrational. This gives us the following well-known consequence:





Six Exponentials Theorem (special case). If $x$ is a real number such that $p_1^x$ , $p_2^x$ and $p_3^x$ are rational numbers for three distinct primes $p_1, p_2$ and $p_3$, then $x\in \Bbb Z$




If we use this theorem, one of the numbers $2^{\frac{1}{b}},3^{\frac{1}{b}},5^{\frac{1}{b}}$, say $k^{\frac{1}{b}}$, is irrational (here $\frac1b$ is irrational, hence not an integer). Hence you can take $a=k^{\frac{1}{b}}$, and then $a^b$ is an integer among $\{2,3,5\}$, which of course implies that it is rational.



Comment: Maybe there is a much simpler answer which does not use this strong theorem.


integration - Evaluate $\int\limits_{|z|=1}\frac{\sin{(1/z)}}{(z-2)^2}dz.$


This is a question from an old exam. The two integrals are seemingly similar, but the first one seems quite tedious compared to the other one. It seems, according to the professor's solution, that the hard part in the first integral is computing the residue at $z=0.$



Evaluate the integrals


a) $$\int\limits_{|z|=1}\frac{\sin{(1/z)}}{(z-2)^2} \ dz$$


b) $$\int\limits_{|z|=3}\frac{\sin{(1/z)}}{(z-2)^2} \ dz$$




In the first, we have $2$ poles: the first one is $z_1=0$ and the second one is a pole of order $2$, which is $z_2=2.$ However, only $z_1$ is inside our unit circle, so we only need to compute $\text{Res}_{z_1}(f(z))$ and apply the residue theorem.


Since our function is of the form $f(z)=\frac{g(z)}{(z-z_1)^k}$ the residue is given by


$$\text{Res}_{z_1}(f(z))=\frac{g^{(k-1)}(z_1)}{(k-1)!},$$


and here $k=2$ and $(\sin(1/z))'=-\cos(1/z)/z^2$, so


$$\text{Res}_{z_1}(f(z))=-\frac{\cos\frac{1}{z_1}}{z_1^2}=...\text{this is the moment I realised I'm screwed.}$$


However, for the other integral I can compute the residue at $z_2=2$ using conventional methods, but not here.


Can someone break down the main difference between these integrals and show how to find the residue at $z_1=0?$


EDIT:


I need to solve $a)$ using Laurent series.



Answer



Re $(a)$ and series, given $$\sin{z}=z-\frac{z^3}{3!}+\frac{z^5}{5!}-\frac{z^7}{7!}+\frac{z^9}{9!}-\cdots= \sum\limits_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}z^{2k+1} \tag{1}$$ we have $$\sin{\frac{1}{z}}=\sum\limits_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}\frac{1}{z^{2k+1}}$$ then $$\int\limits_{|z|=1}\frac{\sin{(1/z)}}{(z-2)^2} dz=\sum\limits_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}\int\limits_{|z|=1}\frac{1}{z^{2k+1}(z-2)^2}dz \tag{2}$$


Using Cauchy's integral formula:


$$f^{(n)}(a)=\frac{n!}{2\pi i} \int\limits_{\gamma}\frac{f(z)}{(z-a)^{n+1}}dz \tag{3}$$ where $f(z)=\frac{1}{(z-2)^2}$, because $2$ is outside $|z|=1$, we have $$f^{(2k)}(0)=\frac{(2k)!}{2\pi i}\int\limits_{|z|=1}\frac{1}{z^{2k+1}(z-2)^2}dz$$ then $(2)$ becomes $$\int\limits_{|z|=1}\frac{\sin{(1/z)}}{(z-2)^2} dz=2\pi i\left(\sum\limits_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}\frac{f^{(2k)}(0)}{(2k)!}\right) \tag{4}$$ But $$f'(z)=-\frac{2}{(z-2)^3}$$ $$f''(z)=\frac{2\cdot3}{(z-2)^4}$$ $$f'''(z)=-\frac{2\cdot3\cdot4}{(z-2)^5}$$ $$f^{(4)}(z)=\frac{2\cdot3\cdot4\cdot5}{(z-2)^6}$$ $$...$$ $$f^{(2k)}(z)=\frac{(2k+1)!}{(z-2)^{2k+2}} \tag{5}$$ and $(4)$ becomes $$\int\limits_{|z|=1}\frac{\sin{(1/z)}}{(z-2)^2} dz= 2\pi i\left(\sum\limits_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}\frac{1}{(-2)^{2k+2}}\right)= 2\pi i \frac{1}{4}\left(\sum\limits_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}\frac{1}{2^{2k}}\right) \tag{6}$$ but $$\sum\limits_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}\frac{1}{2^{2k}}=\cos{\frac{1}{2}}$$ from the cosine series expansion, thus $$\int\limits_{|z|=1}\frac{\sin{(1/z)}}{(z-2)^2} dz= \frac{\pi i}{2} \cos{\frac{1}{2}} \tag{7}$$
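If one wants reassurance about the final value, the contour integral is easy to check numerically (a sketch; on a uniform grid the trapezoidal rule is spectrally accurate for this periodic integrand):

    import numpy as np

    # Parametrize |z| = 1 by z = exp(i*theta); then dz = i z dtheta.
    N = 4096
    theta = np.linspace(0, 2*np.pi, N, endpoint=False)
    z = np.exp(1j * theta)
    integrand = np.sin(1 / z) / (z - 2)**2 * 1j * z
    numeric = integrand.sum() * 2 * np.pi / N
    exact = 0.5j * np.pi * np.cos(0.5)        # (pi i / 2) cos(1/2), from (7)
    print(numeric, exact)                     # both approx 1.3785j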


linear algebra - Intuitive explanation of left- and right-inverse

I am reading about right-inverse and left-inverse matrices. According to theory, if a matrix $A_{m\times n}(\mathbb{R})$ has full row rank, then it has a right-inverse; that is, $AC=I_{m}$. Similarly, if $A$ has full column rank, then it has a left-inverse; that is, $BA=I_{n}$. I have the following questions:



  1. Taking $AC=I_{m}\iff A^TAC=A^T \iff C=(A^TA)^{-1}A^T$ but this satisfies $CA=I$, contradiction. Similarly, taking $BA=I_{n}\iff BAA^T=A^T \iff B=A^T(AA^T)^{-1}$ but this satisfies $AB=I$, contradiction. How is that possible?




  2. Moreover, and most importantly, what is the intuitive explanation of the left and right inverse? Is there any connection with the rows or columns or any of the four fundamental subspaces of $A$?



Thank you very much!

Tuesday, May 30, 2017

algebra precalculus - How to find $\log{x}$ close to exact value in two digits with these methods?


I'm trying to find the result of $\log{x}$ (base 10) close to exact value in two digits with these methods:


The methods below are doing by hand. I appreciate you all who already give answers for computer method.


As suggested by Praktik Deoghare's answer



If number N (base 10) is n-digit then $$n-1 \leq \log_{10}(N) < n$$ Then logarithm can be approximated using $$\log_{10}(N) \approx n-1 + \frac{N}{10^{n} - 10^{n-1}}$$ Logarithm maps numbers from 10 to 100 in the range 1 to 2 so log of numbers near 50 is about 1.5. But this is only a linear approximation, good for mental calculation and toy projects but not that good for serious research.




This method is cool for me, but it's not that close to the exact value: $\log_{10}(53)$ is about $1.72$, while this method gives about $1.59$.


As suggested by Pedro Tamaroff's answer



One can get very good approximations by using $$\frac 1 2 \log \left|\frac{1+x}{1-x}\right| =x+\frac {x^3} 3+ \frac {x^5}5+\cdots$$ Say you want to get $\log{3}$. Then take $x=1/2$. Then you get $$\log 3 \approx 2\left( \frac 1 2 +\frac 1 {24} + \frac 1 {160} \right)=1.0958333\dots$$ The real value is $\log 3 \approx 1.0986123\dots$



This one is also cool for me, but it's to find natural logarithm, not base-10 logarithm.


As suggested by Kaleb's answer



This can be done by recourse to Taylor series. For $\ln(x)$ centered at $1$, i.e. where $0 < x \leq 2$: $$\ln(x)= \sum_{n=1}^\infty \frac{(-1)^{n+1}(x-1)^n}{n}= (x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \frac{1}{4}(x-1)^4 + \cdots$$




The method is for calculating $\ln(x)$, not $\log{x}$. I don't know the Taylor series to calculate $\log{x}$, especially when trying to find the log result close to the exact value in two digits (with a similar method).


As suggested by Glenn's answer



The Wikipedia article "Generalized continued fraction" has a Khovanskiĭ-based algorithm that differs only in substituting $x/y$ for $z$, and showing an intermediate step: $$\log \left( 1+\frac{x}{y} \right) = \cfrac{x} {y+\cfrac{1x} {2+\cfrac{1x} {3y+\cfrac{2x} {2+\cfrac{2x} {5y+\cfrac{3x} {2+\ddots}}}}}} $$



This method is very slow for me. When I stop at $3y$, the result (log calculation) is still far from the exact value.


Anyone who can improve all of the above methods so I can precisely get log result close to exact value in two digits?


Answer



Listing a few tricks that work for mental arithmetic, mostly to get the OP to comment on whether this is at all what they expect. I write $\log$ for $\log_{10}$ to save a few keystrokes.



You need to memorize a few logarithms and play with those. We have all seen $\log 2\approx 0.30103$ often enough to have memorized it. Consequently by mental arithmetic we get for example the following $$ \begin{aligned} \log4&=2\log2&\approx 0.602,\\ \log5&=\log(10/2)&\approx 0.699,\\ \log1.6&=\log(2^4/10)&\approx 0.204,\\ \log1.024&=\log(2^{10})-3&\approx 0.0103.\\ \end{aligned} $$ Using these is based on spotting numerical near matches.


You should also be aware of the first order Taylor series approximation $$ \log(1+x)\approx\frac{x}{\ln 10}\approx\frac{x}{2.3}\approx0.434 x, $$ which implies (plug in $x=0.01$) that if you change the value of $x$ by 1 per cent, then its logarithm changes by approximately $0.0043$.


As an example let's do $\log 53$ and $\log7$. Here $x=53$ is $6\%$ larger than $50$, so a first order approximation would be $$ \log53\approx\log 50+6\cdot 0.0043=\log(10^2/2)+6\cdot0.0043\approx 2-0.30103+0.0258\approx1.725. $$ With $7$ we can spot that $7^2=49$ is $2\%$ less than $50$, so $$\log 7=\frac12\,\log49\approx\frac12(2-0.301-2\cdot0.0043)\approx\frac{1.690}2=0.845. $$ Here the third decimal of $\log53$ is off by one, but $\log7$ has three correct digits - both well within your desired accuracy.
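These tricks are easy to script for drill purposes; a small sketch (the helper name and the anchor values are mine, while the constants $0.30103$ and $0.434$ are the memorized ones from above):

    import math

    LOG2 = 0.30103                       # the one constant worth memorizing

    def mental_log10(x, anchor, log_anchor):
        # First-order rule: log10(x) ~ log10(anchor) + 0.434*(x - anchor)/anchor
        return log_anchor + 0.434 * (x - anchor) / anchor

    log50 = 2 - LOG2                                      # log10(100/2)
    print(mental_log10(53, 50, log50), math.log10(53))    # 1.72501 vs 1.72428

    log49 = 2 - LOG2 - 2 * 0.0043                         # 49 is 2% below 50
    print(log49 / 2, math.log10(7))                       # 0.84519 vs 0.84510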


abstract algebra - Is the group isomorphism $\exp(\alpha x)$ from the group $(\mathbb{R},+)$ to $(\mathbb{R}_{>0},\times)$ unique?


I'm having a problem trying to find the simplest way of proving this, which has most probably been solved a hundred of times but I am unable to find a good reference.


I have two groups, $(\mathbb{R},+)$ and $(\mathbb{R}_{>0},\times)$. I am trying to prove that the only isomorphisms between them are those in the class $F = \{f: f(x) = \exp(\alpha x) \text{ for some } \alpha \in \mathbb{R}_{>0}\}$. Existence is easy to prove; what I'm having trouble with is a clean algebraic uniqueness proof.


Does anyone know the proof or a reference containing this proof?



Thanks in advance!


Answer



Wait, it may not be true.


Consider $(\Bbb R,+)$ as a vector space of infinite (continuum) dimension over $\Bbb Q$, and fix a basis (a Hamel basis).


Then, any automorphism of this vector space (for example permuting the basis) will be an automorphism of $(\Bbb R,+)$, and you can compose this with any exponential.


elementary set theory - Proving the Cardinality of a set in R

Let $A\subset \mathbb R$ have the following characteristic:



For all $a,b \in A$ with $a\neq b$, $\frac{a+b}{2} \notin A$.



Prove that there exists a maximal such set $A$. Prove its cardinality is $\aleph$.




The first part is relatively simple using Zorn's Lemma: take any chain (under the inclusion relation) of sets with the said characteristic, and bound it above by its union.



As for the second part: let $M$ be a maximal such set. $M\subset \mathbb R$, so $|M| \leq \aleph$.



My question is, how can I find a set of cardinality $\aleph$ with the stated characteristic (or prove one exists), to serve as a lower bound for $|M|$?

Monday, May 29, 2017

calculus - Evaluate $\lim\limits_{x \to \infty } {\left( {\int_0^{\pi /6} {{{(\sin t)}^x}dt} } \right)^{1/x}}$

It is given that the following limit
$\mathop {\lim }\limits_{x \to \infty } {\left( {\int\limits_0^{\pi /6} {{{(\sin t)}^x}dt} } \right)^{1/x}}$ exists. Evaluate the limit.




I've tried tackling this problem but I can't seem to get started. Any hint is appreciated, thanks!
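Not an answer, but a numeric probe can suggest what to aim for (a sketch; midpoint rule on a fine grid, with growing $x$):

    import numpy as np

    # Midpoint-rule estimate of ( integral_0^{pi/6} (sin t)^x dt )^(1/x).
    N = 200_000
    t = (np.arange(N) + 0.5) * (np.pi / 6) / N
    for x in [10, 50, 200, 1000]:
        integral = np.sum(np.sin(t)**x) * (np.pi / 6) / N
        print(x, integral**(1 / x))   # 0.376..., 0.457..., 0.486..., 0.496...

The values creep upward toward $\sin(\pi/6)=\frac12$, which is consistent with the usual supremum heuristic for limits of this shape.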

set theory - A question concerning the axiom of choice and the Cauchy functional equation


The Cauchy functional equation: $$f(x+y)=f(x)+f(y)$$ has solutions called 'additive functions'. If no conditions are imposed on $f$, there are infinitely many functions that satisfy the equation, called 'Hamel' functions. This is considered valid if and only if Zermelo's axiom of choice is accepted as valid.



My question is: suppose we do not consider the axiom of choice valid; does this mean that we have a finite number of solutions? Or are the 'Hamel' functions still valid?


Thanks for any hints or answers.


Answer



What you wrote is not true at all. The argument is not valid "if and only if the axiom of choice holds".



  1. Note that there are always continuous functions of this form, which all look like $f(x)=ax$ for some real number $a$. There are infinitely many of those.




  2. The axiom of choice implies that there are discontinuous functions like this, furthermore a very very weak form of the axiom of choice implies this. In fact there is very little "choice" which can be inferred from the existence of discontinuous functions like this, namely the existence of non-measurable sets.





  3. Even if the axiom of choice is false, it can still hold for the real numbers (i.e. the real numbers can be well-ordered even if the axiom of choice fails badly in the general universe). However even if the axiom of choice fails at the real numbers it need not imply that there are no such functions in the universe.




  4. We know that there are models in which all functions which have this property must be continuous, for example models in which all sets of real numbers have the Baire property. There are models of ZF in which all sets of reals have the Baire property, but there are non-measurable sets. So we cannot even infer the existence of discontinuous solutions from the existence of non-measurable sets.




  5. Observe that if there is one discontinuous solution then there are many different ones, since if $f,g$ are two additive functions then $f\circ g$ and $g\circ f$ are also additive functions. The correct question to ask is whether or not the algebra of additive functions is finitely generated over $\mathbb R$, but to this I do not know the answer (and I'm not sure if it is known at all).






ordinary differential equations - function representation of power series

What is the function representation of this power series?




$$\sum_{n=0}^{\infty}\frac{(n+1)!}{n!}x^n$$



The solution is $(1-x)^{-2}$ but how???



I know that $\sum_{n=0}^{\infty}(x^n)/n! = e^x$, but I don't know how to get to the solution from there.

real analysis - square root of 2 irrational - alternative proof



I have found the following alternative proof online.
[Image: proof of the irrationality of $\sqrt{2}$]



It looks amazingly elegant but I wonder if it is correct.



I mean: should it not state that $(\sqrt{2}-1)\cdot k \in \mathbb{N}$ to be able to talk about a contradiction?




Does anybody know who thought of this proof (whom should I credit)? I couldn't find a reference on the web.


Answer



The proof is correct, but you could say it's skipping over a couple of steps: In addition to pointing out that $(\sqrt2-1)k\in\mathbb{N}$ (because $\sqrt2k\in\mathbb{N}$ and $k\in\mathbb{N}$), one might also want to note that $1\lt\sqrt2\lt2$, so that $0\lt\sqrt2-1\lt1$, which gives the contradiction $0\lt(\sqrt2-1)k\lt k$.



As for the source of the proof, you might try looking at the references in an article on the square root of 2 by Martin Gardner, which appeared in Math Horizons in 1997.


calculus - How can I calculate the limit of exponential divided by factorial?


I suspect this limit is 0, but how can I prove it?


$$\lim_{n \to +\infty} \frac{2^{n}}{n!}$$


Answer



The easiest way to do this is to do the following: Assume $n \ge 4$. Then $$0 \le \frac{2^n}{n!} = \prod_{i=1}^n \frac{2}{i} = \frac{2\cdot 2\cdot 2}{1 \cdot 2 \cdot 3} \cdot \prod_{i=4}^n \frac{2}{i} \le \frac{8}{6} \cdot \prod_{i=4}^n \frac{2}{4} = \frac{8}{6 \cdot 2^{n-3}}.$$ Applying the squeeze theorem gives the result.
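A quick numeric check of the bound used in the squeeze (a sketch):

    from math import factorial

    # For n >= 4 the answer's estimate gives 2^n/n! <= 8/(6*2^(n-3)).
    for n in [4, 8, 16, 32]:
        ratio = 2.0**n / factorial(n)
        bound = 8 / (6 * 2.0**(n - 3))
        print(n, ratio, bound, ratio <= bound)   # bound holds; both tend to 0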


Sunday, May 28, 2017

calculus - Area of north cap of a sphere



Problem



Derive the formula for the area of a cap of a sphere, where $h$ is the height of the cap; that is, derive $A=2\pi Rh$.




Hint: (by rotating the function $f(x)=\sqrt{R^2-x^2}$ over $R-h\le x\le R$)



Attempt to solve



Now, the area of a surface of revolution should be:



$$ A=2\pi \int_{a}^{b}|f(x)|\sqrt{1+f'(x)^2}dx $$



I've tried to find where this formula comes from, but I wasn't able to find a proof. If someone knows how to prove that this formula is valid (or that it isn't), that would be great. (This is possibly the cause of my confusion on this problem.)




We have the function given by the hint:



$$ f(x)=\sqrt{R^2-x^2} $$
$$ f'(x)=\frac{d}{dx}\left(\sqrt{R^2-x^2}\right)=-\frac{x}{\sqrt{R^2-x^2}} $$



Now, to find the area, would it simply be a matter of inserting the function into the formula?



$$ A=2\pi \int_{R-h}^{R}\sqrt{R^2-x^2}\sqrt{1+(-\frac{x}{\sqrt{R^2-x^2}})^2}dx=2\pi Rh $$



I have tried to calculate the indefinite integral




$$ A=2\pi \int\sqrt{R^2-x^2}\sqrt{1+(-\frac{x}{\sqrt{R^2-x^2}})^2}dx=2\pi Rh $$
$$ A=2\pi\left(\frac{1}{4}(2x+1)\sqrt{R^2-x^2}\sqrt{\frac{-R^2+x^2+x}{x^2-R^2}}-\frac{1}{8}(4R^2+1)\tan^{-1}\left(\frac{(2x+1)\sqrt{R^2-x^2}\sqrt{\frac{-R^2+x^2+x}{x^2-R^2}}}{2(-R^2+x^2+x)}\right)\right) +c$$
I've tried to integrate this but with little success. However, this is supposed to be the same as $2\pi Rh$ (hence the equality at the end), but I don't have a definite proof of it.



You can notice that $2\pi$ is only a constant and does not come from the integration itself. So the integration of the rest will most likely produce $Rh$:



$$ \int_{R-h}^{R}\sqrt{R^2-x^2}\sqrt{1+(-\frac{x}{\sqrt{R^2-x^2}})^2}dx=Rh $$







To sum up the (possible) problems:



(a) Is the formula for the area correct here? If it is, where does it come from?



(b) If we assume the formula is correct, then there has to be a problem with my integration.


Answer



First, the formula is correct. You can find the derivation of the formula here.



Second, you made some mistakes in simplifying the expressions. In particular,

$$f'(x) = -\frac{x}{\sqrt{R^2-x^2}} \implies \sqrt{1+f'(x)^2} = \frac{R}{\sqrt{R^2-x^2}}.$$
Consequently,
$$A = 2\pi \int_{R-h}^{R}|f(x)|\sqrt{1+f'(x)^2}dx = 2\pi\int_{R-h}^{R}R\,dx=2\pi Rh.$$
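A numeric check that the unsimplified integrand really is the constant $R$, and that the integral comes out to $Rh$ (a sketch; the endpoint $x=R$ is dropped to avoid a $0\cdot\infty$ evaluation):

    import numpy as np

    R, h = 2.0, 0.5
    x = np.linspace(R - h, R, 100_001)[:-1]      # left-endpoint grid on [R-h, R)
    f = np.sqrt(R**2 - x**2)
    integrand = f * np.sqrt(1 + (x / f)**2)      # equals R at every grid point
    print(np.mean(integrand) * h, R * h)         # both 1.0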


limits - Is $\frac{\sin(x)}{x}$ continuous at $x=0$? What's the value at $x=0$?

Is $\dfrac{\sin(x)}{x}$ at $x = 0$ continuous?

What's the value at $x=0~?$

calculus - example of discontinuous function having directional derivatives


Is there a function (not defined piecewise, unlike the one below) which is discontinuous but has directional derivatives at a particular point? I have a manual that says the following function has directional derivatives at $(0,0)$ but is not continuous at $(0,0)$. $$f(x,y) = \begin{cases} \frac{xy^2}{x^2+y^4} & \text{ if } x \neq 0\\ 0 & \text{ if } x= 0 \end{cases}$$


Can anyone give me a few examples which are not defined piecewise as above?


Answer



$$f(x,y)=\lim_{u\to0}\frac{xy^2+u^2}{x^2+y^4+u^2}$$


Functional equation, find particular value given $f(ab)=bf(a)+af(b)$

Let $f(x)$ be a function such that $f(ab) = bf(a) + af(b)$ for all nonzero real numbers $a$ and $b$. Given that $f(4) = 3$, which of the following is a possible value of $f(2018)$?




(A) $0\quad$ (B) $\dfrac34\quad$ (C) $\dfrac43\quad$ (D) $1512\quad$ (E) $2688\quad$



By substitution, it's easy to find that $f(1) = 0$ and $f(2) = \dfrac34$. But how can I get to $f(2018)$?

Saturday, May 27, 2017

probability - Expected time to roll all 1 through 6 on a die


What is the average number of times it would take to roll a fair 6-sided die and get all numbers on the die? The order in which the numbers appear does not matter.


I had this question explained to me by a professor (not a math professor), but it was not clear in the explanation. We were given the answer $(1-(\frac56)^n)^6 = .5$, or $n = 12.152$.


Can someone please explain this to me, possibly with a link to a general topic?


Answer



The time until the first result appears is $1$. After that, the random time until a second (different) result appears is geometrically distributed with parameter of success $5/6$, hence with mean $6/5$ (recall that the mean of a geometrically distributed random variable is the inverse of its parameter). After that, the random time until a third (different) result appears is geometrically distributed with parameter of success $4/6$, hence with mean $6/4$. And so on, until the random time of appearance of the last and sixth result, which is geometrically distributed with parameter of success $1/6$, hence with mean $6/1$. This shows that the mean total time to get all six results is $$\sum_{k=1}^6\frac6k=\frac{147}{10}=14.7.$$



Edit: This is called the coupon collector problem. For a fair $n$-sided die, the expected number of attempts needed to get all $n$ values is $$n\sum_{k=1}^n\frac1k,$$ which, for large $n$, is approximately $n\log n$. This stands in contrast with the mean time needed to complete only some proportion $cn$ of the whole collection, for some fixed $c$ in $(0,1)$, which, for large $n$, is approximately $-\log(1-c)n$. One sees that most of the $n\log n$ steps needed to complete the full collection are actually spent completing the last one per cent, or the last one per thousand, or the last whatever percentage of the collection.
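The $14.7$ figure is also easy to confirm by simulation (a Monte Carlo sketch):

    import random

    def rolls_to_see_all(sides=6):
        seen, count = set(), 0
        while len(seen) < sides:
            seen.add(random.randrange(sides))
            count += 1
        return count

    trials = 100_000
    print(sum(rolls_to_see_all() for _ in range(trials)) / trials)   # ~14.7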


real analysis - Proof that a continuous function respects sequential continuity



Let $A\subset\mathbb{R}^n$, $f:A\to \mathbb{R}^m$ be a continuous function. I want to prove that if a sequence $(x_k)\subset A$ converges to some $a\in \mathbb{R}^n$ then $f(x_k)$ converges to $f(a)$. Here's my attempt:



Since $f$ is continuous, $\forall \varepsilon>0,\exists \delta >0 $ such that $\|f(x)-f(a)\|<\varepsilon$ whenever $\|x-a\|<\delta$. Also, since $(x_k)$ converges to $a$, $\forall\varepsilon' > 0$, $\exists k_0 \in\mathbb{N}$ such that $\|x_k-a\|<\varepsilon'$ whenever $k\ge k_0$. Let $\varepsilon'>0$ and set $\varepsilon'' = \varepsilon+\varepsilon'$, and fix $k\ge k_0$. Since $f$ is continuous, then $\exists \delta' >0$ such that $\|f(x)-f(x_k)\|<\varepsilon'$ whenever $\|x-x_k\|<\delta'$. Now, $\|f(x_k)-f(a)\|=\|f(x_k)-f(x)+f(x)-f(a)\|\le$
$\|f(x)-f(x_k)\|+\|f(x)-f(a)\|<\varepsilon'+\varepsilon= \varepsilon''$. Hence, $f$ respects sequential continuity.



Do you think my proof is OK? Thanks.


Answer



The proof does not look sound. Saying $\|f(x)-f(x_k)\|<\varepsilon'$ whenever $\|x-x_k\|<\delta'$ makes sense if $x_k\to x$. The tails of the sequence cannot be close to $x$ and $a$ at the same time unless $x=a$ (otherwise $\varepsilon''$ cannot be made arbitrarily small).




The idea is more straightforward here: by definition




  1. Since $f$ continuous we have
    $$
    \forall \varepsilon>0\,\exists \color{red}\delta >0\text{ such that } \|f(\color{blue}x)-f(a)\|<\varepsilon,\text{ for all }\color{blue}x\colon \|\color{blue}x-a\|<\color{red}\delta.
    $$

  2. Since $x_k\to a$ we have
    $$
    \forall\color{red}\delta>0\,\exists k_0 \in\mathbb{N}\text{ such that } \|\color{blue}{x_k}-a\|<\color{red}\delta,\text{ for all } k\ge k_0.
    $$



We see that for any $\varepsilon>0$ we can find $\color{red}\delta>0$ in 1, then for this $\color{red}\delta$ find $k_0\in\Bbb N$ in 2. The tails of $\color{blue}{x_k}$ satisfy then $\|\color{blue}{x_k}-a\|<\color{red}\delta$, so we can take $\color{blue}x=\color{blue}x_k$ in 1 to conclude that for the tails $\|f(\color{blue}{x_k})-f(a)\|<\varepsilon$. We started from any $\varepsilon$ and ended up with the proper $k_0$, thus, obtained the sequential continuity by definition.


algebra precalculus - Square roots of complex number in exponential form



The complex number $z$ is defined by $z=\frac{9\sqrt{3}+9i}{\sqrt{3}-i}$. Find the square roots of $z$, giving your answers in the form $re^{i\theta}$.where $r>0$ and $-\pi < \theta \leq \pi$.



I found that $z=9e^{\frac{\pi}{3}i}$. How do I find the square roots of $z$?


Answer



Assuming your calculation of $\;z\;$ is correct, observe that




$$z=9e^{\frac{\pi i}3}=9e^{\frac{\pi i}3+2k\pi i}\;,\;\;k\in\Bbb Z\implies z^{1/2}=\left(9e^{\frac{\pi i}3+2k\pi i}\right)^{1/2}=3e^{\frac{\pi i}6\left(6k+1\right)}$$



and this time we only need $\;k=0,1\;$ since the roots repeat themselves, so the roots are



$$\begin{cases}&z_1:=3e^{\frac{\pi i}6}=3\left(\frac{\sqrt3}2+\frac12i\right)=\frac{3\sqrt3}2+\frac32i\\{}\\&z_2=3e^{\frac{7\pi i}6}=3\left(-\frac{\sqrt3}2-\frac12i\right)=-\frac{3\sqrt3}2-\frac32i=-z_1\end{cases}$$

(To write $z_2$ with an argument in the required range $-\pi<\theta\le\pi$, note that $3e^{\frac{7\pi i}6}=3e^{-\frac{5\pi i}6}$.)


Thursday, May 25, 2017

How to apply modular division correctly?




As described on Wikipedia:
$$\frac{a}{b} \bmod{n} = \left((a \bmod{n})(b^{-1} \bmod n)\right) \bmod n$$



When I apply this formula to the case $(1023/3) \bmod 7$:
$$\begin{align*}
(1023/3) \bmod 7 &= \left((1023 \bmod 7)((1/3) \bmod 7)\right) \bmod 7 \\
&= ( 1 \cdot (1/3)) \bmod 7 \\
&= (1/3) \bmod 7 \\
&= 1/3
\end{align*}
$$
However, the real answer should be $(341) \bmod 7 = \mathbf{5}$.



What am I missing? How do you find $(a/b) \bmod n$ correctly?


Answer



$\frac{1}{3}\bmod 7 = 3^{-1}\bmod 7$




You need to solve below for finding $3^{-1}\mod 7$ : $$3x\equiv 1\pmod 7$$



Find an integer $x$ that satisfies the above congruence
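In Python 3.8+, for instance, `pow` accepts a negative exponent together with a modulus and returns exactly this modular inverse, which makes the whole computation a one-liner (a sketch):

    inv3 = pow(3, -1, 7)
    print(inv3)                      # 5, since 3*5 = 15 = 2*7 + 1
    print((1023 % 7) * inv3 % 7)     # 5, matching (341) mod 7 = 5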


real analysis - Prove: Given $x_0$ is a cluster point of a set $S$ and $f:S \to \mathbb{R}$ then $f$ can have at most one limit as $x \to x_0$


Essentially, I need to prove that, given a cluster point $x_0$ of a set $S \subseteq \mathbb{R}$, as $x$ converges to $x_0$ the values $f(x)$ can converge to at most one limit. This is poking at the idea that for every input $x$, there cannot be more than one output $f(x)$.


This is something that most of us knew from algebra 1, but I need to prove this statement using the definition of cluster points, continuity, etc.


Cluster point: if $x_0$ is a cluster point, then $\forall \epsilon \gt 0$, $(x_0- \epsilon , x_0+ \epsilon ) \cap (S \setminus \{x_0\}) \neq \emptyset$



definition of continuity: $\forall \epsilon \gt 0$, $\exists \delta \gt 0$ such that $0 \lt |x-x_0| \lt \delta$, $x \in S$, implies $|f(x)-f(x_0)| \lt \epsilon$.


I'm not too sure how I would approach this!


Answer



First, the existence of limit at a point (say, $x_0$) does not depend on the value of the function at $x_0$. This is important because, without this idea, limit and continuity become more or less useless.


Now to show that $f$ has at most one limit as $x \to x_0$, we need to show that either the limit does not exist ("zero" limit), or the limit is unique if it exists ("one" limit).


In other words, we can assume that the limit exists, and let


$$ \lim_{x \to x_0} f(x) = l_1$$ and $$ \lim_{x \to x_0} f(x) = l_2$$. Now we want to show that $l_1 = l_2$. Start from the definition.


Given $\epsilon > 0$, $\exists \delta_1 > 0$ such that $$|f(x)-l_1| < \frac{\epsilon}{2}, \forall x \in V_{\delta_1}(x_0)\cap{S}\setminus{\{x_0 \}}$$ Also, $\exists \delta_2>0$ such that $$|f(x)-l_2|<\frac{\epsilon}{2}, \forall x \in V_{\delta_2}(x_0)\cap{S}\setminus{\{x_0\}}$$


Now take $\delta = \min\{\delta_1, \delta_2\}$.


$$|l_1-l_2| = |l_1-f(x)+f(x)-l_2| \le |l_1-f(x)|+|f(x)-l_2| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon, \quad \forall x \in V_\delta(x_0) \cap S\setminus \{x_0\}.$$ Since $x_0$ is a cluster point, such an $x$ exists, so $|l_1-l_2|$ is arbitrarily small (but still non-negative). Can you conclude from here? (An extra hint: Suppose $\forall \epsilon > 0,\ 0\le a < \epsilon$; can you prove by contradiction that $a = 0$?)



probability - Coupon collector problem with partial collection of a specific set of coupons

I am very new in probability and combinatorics and have a naive question around a variation on the coupon collector problem with partial collection.



Let's assume we have a box with 45 coupons labeled 1-45. Now, in this case, I would like to adjust the CCP such that I can calculate the expected value (number of draws necessary) to collect 10 specific items, for example items 1-10. How do I adjust my model such that I can calculate the number of draws necessary to collect each item $n$ times?




I assume that I have to adjust CCP2 in the following post (Coupon Collector's Problem with Partial Collections and Coupon Packages) to include the fact that the probability of catching a wanted item is 10/45.



All tips and tricks are welcome!
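In the meantime, a Monte Carlo baseline is useful for checking any adjusted formula against (a sketch for the case of collecting each wanted item once; the $45\cdot(\frac11+\frac12+\cdots+\frac1{10})\approx 131.8$ prediction comes from the same geometric-waiting-time argument as in the full-collection problem):

    import random

    # Expected draws from 45 coupons until coupons 1..10 have each appeared once.
    def draws_needed(total=45, wanted=10):
        missing, draws = set(range(wanted)), 0
        while missing:
            missing.discard(random.randrange(total))
            draws += 1
        return draws

    trials = 20_000
    print(sum(draws_needed() for _ in range(trials)) / trials)   # ~131.8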



Thanks for your help

Wednesday, May 24, 2017

calculus - When are differentials actually useful?



I think of differentials as a way to approximate $\Delta y$ in a function $y = f(x)$ for a certain $\Delta x$.



The way I understood it, the derivative itself is not a ratio because you can't get $\frac{dy}{dx}$ by taking the ratio of the limits of the numerator and denominator separately.



However, once you do have $\frac{dy}{dx}$, you can then think of $dx$ as $\Delta x$ and of $dy$ as the change in $y$ for a slope $\frac{dy}{dx}$ over a certain $\Delta x$.



My problem is that I don't know why differentials are useful. The examples I saw are along the lines of approximating the max error in a sphere's volume if we know that the radius we're given (let's say 20) has a possible error of let's say 0.01.




In this kind of example, it seems to me we're better off computing $V(20.01) - V(20)$, instead of $\Delta V \approx dV = V' \cdot \Delta x$.



In the first case at least we get an exact maximum error instead of an approximation.
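For concreteness, here is the sphere example computed both ways (a quick check; the radius 20 and error 0.01 are from the example above):

    from math import pi

    V = lambda r: 4 / 3 * pi * r**3
    r, dr = 20.0, 0.01
    print(V(r + dr) - V(r))      # 50.2906...  exact change
    print(4 * pi * r**2 * dr)    # 50.2655...  differential estimate dV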



So my question is: When are differentials actually useful? Are they ever anything more than at best a time saver over hand computations? Are they useful at all in today's world with Wolfram Alpha and other strong computational engines around?


Answer



The list is endless, as the comments have started to indicate.



An example of an extremely important real-world application, on which literally trillions of dollars a day depends, comes from the risk management (hedging) of derivatives contracts from the point of view of a large bank or dealer. Say you call up Goldman Stanley's equity desk to purchase an equity call option, where the underlying equity has a continuously quoted market price of $S_{t}$ and the pricing function for the derivative contract is $f(S_{t},t)$ (at least according to the pricing models the dealer depends on and uses), then the "delta" of the derivative is

$$\frac{\partial f}{\partial S}(S(t),t)$$
and represents a first order approximation of the amount of risk a dealer has from selling the option to their client (i.e., as a first approximation, it quantifies how much money the dealer will gain or lose when $S$ changes by an amount $\Delta S$). In particular,
$$\Delta f_{t}\approx\frac{\partial f}{\partial S}\Delta S_{t}.$$
The market-maker is thus constantly buying and selling $|\partial f/\partial S|$ of the underlying asset in order to hedge (eliminate $S$-risk of) their exposure. They make money from the initial transaction costs/spreads, provided they effectively execute the hedging strategy, and avoid taking bets that their short position will be in the money and their clients' long position out of the money.



Obviously this differential is only a first-order approximation (and we're ignoring dependence on $t$ as well). To neutralize risk even more effectively, traders also try to be "gamma-neutral" in addition to "delta-neutral" by adjusting their hedging strategy according to the second-order approximation
$$\Delta f_{t}\approx\frac{\partial f}{\partial S}\Delta S_{t}+\frac{1}{2}\frac{\partial^{2}f}{\partial S^{2}}(\Delta S)^{2}.$$
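As a minimal sketch of how such approximations are checked in practice (the textbook Black-Scholes call value stands in for the dealer's pricing model, the parameter values are made up, and the Greeks are taken by finite differences rather than in closed form):

    from math import log, sqrt, exp, erf

    def Phi(x):                            # standard normal CDF
        return 0.5 * (1 + erf(x / sqrt(2)))

    def call_price(S, K=100.0, r=0.01, sigma=0.2, T=1.0):
        # Black-Scholes value of a European call -- a stand-in for f(S, t).
        d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * Phi(d1) - K * exp(-r * T) * Phi(d2)

    S, dS, h = 100.0, 1.0, 0.01
    delta = (call_price(S + h) - call_price(S - h)) / (2 * h)
    gamma = (call_price(S + h) - 2 * call_price(S) + call_price(S - h)) / h**2

    actual = call_price(S + dS) - call_price(S)
    print(actual)                              # exact repricing
    print(delta * dS)                          # first-order (delta) estimate
    print(delta * dS + 0.5 * gamma * dS**2)    # delta + gamma estimate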



Other differentials that traders commonly use to manage their risk involve the quantities "vega", "charm", etc., these nicknames just being cute trader-speak for the partial derivatives of the pricing function with respect to other market variables for which it depends, including volatility, time, etc., respectively.


elementary set theory - How to define the conception of a sum without the operation of addition?

In short: I am looking for a definition of a sum of any number of natural numbers in the terms of pure set theory. Until now, I have neither found such a definition in books nor invented it myself.




In details:



Let there be $n$ piles of apples on a table (${n}\in\mathbb{N}_{>0}$). Let $x_i$ be the number of apples in each pile (${x_i}\in\mathbb{N}_{>0}$, ${i}=1…n$). How to define the conception of “total number of apples on the table” through ${x_i}$, without using the operation of arithmetic addition?



All sources known to me reduce this conception to arithmetic addition one way or another. But that seems not quite correct to me: addition doesn’t reflect the main point of the conception; it is only one of the possible operations for calculating this “total number”. Besides that, the entity of “total number of apples on the table” exists regardless of whether we perform any operations to calculate it.



Furthermore, addition is defined for two or more addends, while “total number of apples on the table” exists and is computable even if $n=1$.



I am interested in a definition in terms of pure set theory. Individual natural numbers (for example, $n$ and each of ${x_i}$) can be defined, e.g. as finite ordinals. I look for a definition of “total number” also in the context of set theory (e.g. as a result of unions, intersections and other set operations).




Is this possible?

real analysis - Constructing A Space Filling Curve that fills the Unit Square



I'm reading Neal Carothers' Real Analysis, and he constructs a curve defined over $[0,1]$ that fills the unit square as follows:



Let $f$ be a real-valued function over $[0,1]$ such that $f$ is $0$ over $[0,\dfrac{1}{3}]$, $3t-1$ over $(\dfrac{1}{3},\dfrac{2}{3})$, and $1$ over $[\dfrac{2}{3},1]$. Now extend $f$ over $\mathbb{R}$ by specifying that $f$ is even and $2$-periodic.



Here is the equation I don't understand: $$f(3^kt)=f(0.(2a_k)(2a_{k+1})(2a_{k+2})...)=a_k$$ if $t$ is a number in the Cantor Set and $0.(2a_0)(2a_{1})(2a_{2})...$ is the base $3$ decimal expansion of $t$, where $(2a_i)$ is the $(i+1)^{th}$ digit in the expansion. Since it's base $3$ and $t$ is in the Cantor Set, each $a_i$ is either $0$ or $1$.




I tried to prove the equation by writing the decimal expansion as a series multiplied by $3^k$ but I can't seem to make the math come out. In the book, Carothers prefaces the equation with "since $f$ is periodic with period $2$;" I don't know what to make of this hint since $t$ is fixed.



Any help at all would be much appreciated!


Answer



We're in base $3$, so multiplying by $3^k$ shifts the decimal $k$ digits to the right.



So $f(3^kt)=f\big((2a_0)(2a_1)\cdots(2a_{k-1}).(2a_k)(2a_{k+1})\cdots\big)$. We also note that $f$ has period $2$, and that $(2a_0)(2a_1)\cdots(2a_{k-1})$ is an even number and is hence a multiple of the period of $f$.



So we are left with




$f(3^kt)=f(0.(2a_k)(2a_{k+1})\cdots)$



Then if $a_k=0$, we will have that $f(0.(0)(2a_{k+1})\cdots)=0$, because (in base $3$) the argument lies within the interval $[0,\frac{1}{3}]$ and $f$ is equal to $0$ there (since that's how $f$ was defined). On the other hand, if $a_k=1$, then we will have that $f(0.(2)(2a_{k+1})\cdots)=1$, since (in base $3$) the argument is at least $\frac{2}{3}$ and $f$ was defined to be equal to $1$ on the interval $[\frac{2}{3},1]$.



So if $a_k=0$ then $f(3^kt)=a_k=0$, but if $a_k=1$,then $f(3^kt)=a_k=1$, so either way we have that $f(3^kt)=a_k.$


Tuesday, May 23, 2017

summation - How to get to the formula for the sum of squares of first n numbers?








I know that the sum of the squares of the first n natural numbers is $\frac{n(n + 1)(2n + 1)}{6}$. I know how to prove it inductively. But how, presuming I have no idea about this formula, should I determine it? The sequence $a(n)=1^2+2^2+...+n^2$ is neither geometric nor arithmetic. The difference between the consecutive terms is 4, 9, 16 and so on, which doesn't help. Could someone please help me and explain how I should get to the well-known formula, assuming I didn't know it and was on some desert island?

Monday, May 22, 2017

summation - Finite Sum $\sum_{i=1}^n\frac i {2^i}$




I'm trying to find the sum of :



$$\sum_{i=1}^n\frac i {2^i}$$



I've tried letting $i$ run from $1$ to $\infty$, and found that the sum is $2$, i.e.:



$$\sum_{i=1}^\infty\frac i {2^i} =2$$



since :




$$(1/2 + 1/4 + 1/8 + \cdots) + (1/4 + 1/8 + 1/16 +\cdots) + (1/8 + 1/16 + \cdots) +\cdots = 1+ 1/2 + 1/4 + 1/8 + \cdots = 2 $$



But when I only run $i$ up to $n$, it's a little bit different. Can anyone please explain?



Regards


Answer



For the sake of generality, and more importantly to make typing easier, we use $t$ instead of $1/2$.



We want to find the sum

$$S(t)=t+2t^2+3t^3+4t^4+ \cdots +(n-1)t^{n-1}+nt^n.$$
Multiplying both sides by $t$, we get
$$tS(t)=t^2+2t^3+3t^4 +4t^5+ \cdots +(n-1)t^{n}+nt^{n+1}.$$
Subtract, and rearrange a bit. We get
$$(1-t)S(t)=(t+t^2+t^3+\cdots +t^n)-nt^{n+1}.\tag{$\ast$}$$
Recall that for $t\ne 1$, we have $t+t^2+t^3+\cdots +t^n=t\frac{1-t^n}{1-t}$.
If we do not recall the sum of a finite geometric series, we can find it by a trick similar to (but simpler) than the trick that got us to $(\ast)$.



Substitute, and solve for $S(t)$. (The method breaks down when $t=1$ because of a division by $0$ issue. But $t=1$ is easy to handle separately.)
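Carrying out that last step, solving $(\ast)$ for $t\ne 1$ gives $$S(t)=\frac{t(1-t^n)}{(1-t)^2}-\frac{nt^{n+1}}{1-t},$$ which for the OP's case $t=\frac12$ simplifies to $2-\frac{n+2}{2^n}$. A quick numeric check of this claim:

    # Closed form obtained by solving (*) for S(t), valid for t != 1.
    def S_closed(t, n):
        return t * (1 - t**n) / (1 - t)**2 - n * t**(n + 1) / (1 - t)

    def S_direct(t, n):
        return sum(k * t**k for k in range(1, n + 1))

    for n in [1, 5, 20]:
        print(n, S_direct(0.5, n), S_closed(0.5, n), 2 - (n + 2) / 2**n)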



Remark: Now that we have obtained an expression for $\sum_{k=1}^n kt^k$, we can use this expression, and the same basic trick, to find $\sum_{k=1}^n k^2t^k$, and then $\sum_{k=1}^n k^3t^k$. Things get rapidly more unpleasant, and to get much further one needs to introduce new ideas.



Prime powers that divide a factorial




If we have some prime $p$ and a natural number $k$, is there a formula for the largest natural number $n_k$ such that $p^{n_k} | k!$.




This came up while doing an unrelated homework problem, but it is not itself homework. I haven't had any good ideas yet worth putting here.



The motivation came from trying to figure out what power of a prime you can factor out of a binomial coefficient. Like $\binom{p^m}{k}$.


Answer



This follows from a famous result of Kummer:



Theorem. (Kummer, 1854) The highest power of $p$ that divides the binomial coefficient $\binom{m+n}{n}$ is equal to the number of "carries" when adding $m$ and $n$ in base $p$.



Equivalently, the highest power of $p$ that divides $\binom{m}{n}$, with $0\leq n\leq m$ is the number of carries when you add $m-n$ and $n$ in base $p$.




As a corollary, you get



Corollary. For a positive integer $r$ and a prime $p$, let $[r]_p$ denote the exact $p$-divisor of $r$; that is, we say $[r]_p=a$ if $p^a|r$ but $p^{a+1}$ does not divide $r$. If $0\lt k\leq p^n$, then
$$\left[\binom{p^n}{k}\right]_p = n - [k]_p.$$



Proof. To get a proof, assuming Kummer's Theorem: when you add $p^n - k$ and $k$ in base $p$, you get a $1$ followed by $n$ zeros. You start getting a carry as soon as you don't have both numbers in the column equal to $0$, which is as soon as you hit the first nonzero digit of $k$ in base $p$ (counting from right to left). So you really just need to figure out what is the first nonzero digit of $k$ in base $p$, from right to left. This is exactly the $([k]_p+1)$th digit. So the number of carries will be $(n+1)-([k]_p+1) = n-[k]_p$, hence this is the highest power of $p$ that divides $\binom{p^n}{k}$, as claimed. $\Box$
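A brute-force check of both the carry count and the corollary (a sketch; `math.comb` requires Python 3.8+):

    from math import comb

    def carries(a, b, p):
        # Number of carries when adding a and b in base p (Kummer's theorem).
        count, carry = 0, 0
        while a or b or carry:
            carry = 1 if a % p + b % p + carry >= p else 0
            count += carry
            a, b = a // p, b // p
        return count

    def val(p, m):                     # exact p-divisor [m]_p
        v = 0
        while m % p == 0:
            m //= p
            v += 1
        return v

    p, n = 3, 4
    for k in range(1, p**n + 1):
        assert val(p, comb(p**n, k)) == carries(p**n - k, k, p) == n - val(p, k)
    print("verified for p = 3, n = 4")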


real analysis - Zero to power Zero (Zero ^ Zero) indeterminable or not?

I want to know whether zero to the power zero equals $1$ or is indeterminate. I think it cannot exist. Please explain with proper mathematical definitions.

real analysis - Removing closed sets to form Cantor Middle Third Set



The Cantor Middle Third set in $[0,1]$ is formed by removing middle third open interval at each stage. Then Cantor set is the intersection of those remaining intervals.




More precisely, given $[0,1],$ let $F_1=[0,\frac{1}{3}]\cup[\frac{2}{3},1],$ where we remove middle third open interval $(\frac{1}{3},\frac{2}{3})$ from $[0,1].$
Proceed in similar fashion, remove middle third open interval from $[0,\frac{1}{3}]$ and $[\frac{2}{3},1]$ and $F_2$ be the remaining sets.
The Cantor set is $C:=\bigcap_{n=1}^{\infty}F_n.$




Question: Instead of removing open middle third interval, what happens if we remove closed middle third interval at each stage and take intersection like Cantor set? How many elements are left?



Answer



The answer is uncountably many. It is well-known that the original Cantor set consists of all numbers in the unit interval that can be written in ternary using no $1$'s (including the ones that end in infinitely many $2$'s, if that lets you get away from having a $1$ in there).




If you remove closed intervals instead of the open ones, that means that you're also taking away all the numbers that can be written with a terminating ternary expansion, of which there are only countably many (since they form a subset of the rational numbers).


Sunday, May 21, 2017

Induction Proof: Fibonacci Numbers Identity with Sum of Two Squares

Using induction, how can I show the following identity about the Fibonacci numbers? I'm having trouble with the simplification when doing the induction step.



Identity: $$f_n^2 + f_{n+1}^2 = f_{2n+1}$$



I get to:




$$f_{n+1}^2 + f_{n+2}^2$$



Should I replace $f_{n+2}$ using the recursion? When I do that, I end up with a product of terms, and that just doesn't seem right. Any guidance on how to manipulate things during the induction step?



Thanks!
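For what it's worth, a quick numeric sanity check of the identity (and of the indexing convention $f_1=f_2=1$) before attempting the induction:

    fib = [0, 1]                       # f_0 = 0, f_1 = 1
    for _ in range(60):
        fib.append(fib[-1] + fib[-2])
    assert all(fib[n]**2 + fib[n+1]**2 == fib[2*n + 1] for n in range(1, 25))
    print("f_n^2 + f_{n+1}^2 = f_{2n+1} holds for n = 1..24")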

combinatorics - How many distinct arrangements of the letters in HEELLOOOP are there in which the first two letters include a H or a P (or both)?

CONTEXT: Question made up by uni lecturer.



How many distinct arrangements of the letters in HEELLOOOP are there in which the first two letters include a H or a P (or both)?



Note: There are 9 letters in total (one H, one P, two E's, two L's and three O's)



When attempting this question, I tried splitting it up into different cases:




  1. First letter H, second letter P


  2. First letter P, second letter H

  3. First letter H, second letter not P (either an E, L or O)

  4. First letter P, second letter not H (either an E, L or O)

  5. First letter not H (either an E, L or O), second letter P

  6. First letter not P (either an E, L or O), second letter H



I know for cases (1) and (2), there are $2!\cdot\frac{7!}{3!\cdot2!\cdot2!}=420$ ways to arrange it since there are $2!$ ways to arrange H and P, and for each, there are $\frac{7!}{3!\cdot2!\cdot2!}$ distinct ways to arrange the remaining 7 letters (two E's, two L's and three O's).



It is cases (3) to (6) where I get a bit lost, because the letters you get to choose from for the 7 remaining positions depend on which letter is chosen to accompany the H or P in the first and second position.




For example, in case (3), the first letter is an H, and the second letter can either be an E, L or O. If, say, it is an O, then the seven remaining letters will consist of one P, two L's, two O's and two E's. But, if it were an E, then the seven remaining letters would consist of one P, two L's, three O's and one E. The existence of these two different scenarios is what gets me.



Any help on how to approach this would be greatly appreciated.

Show that the linear map $I-L$ is invertible where $L:V \rightarrow V$ and $L^3 = 0$. Find the inverse in terms of a polynomial.



Let $L:V \rightarrow V$ be a linear map such that $L^3 = 0$ (i.e. $L^3$ is the zero map). Show that $I-L$ is invertible and find $(I-L)^{-1}$ in terms of a polynomial in $L$.




This question is giving me fits. How do I show this? Furthermore, how do I find the inverse in terms of a polynomial in $L$? I know, by the Invertible Matrix Theorem, that the following are equivalent for an $n \times n$ square matrix:




  • A is an invertible matrix

  • A is row equivalent to the $n \times n$ identity matrix

  • A has n pivot positions.

  • $Ax=0$ has only the trivial solution.

  • The columns of A form a linearly independent set.

  • The linear transformation $x \rightarrow Ax$ is one-to-one.


  • The columns of A span $\mathbb{R}^n$

  • The linear transformation $x \rightarrow Ax$ maps $\mathbb{R}^n$ onto $\mathbb{R}^n$.

  • There is an $n \times n$ matrix $C$ such that $CA=I$.

  • There is an $n \times n$ matrix $D$ such that $AD=I$.

  • $A^T$ is an invertible matrix.



and so on.



New to linear algebra. Usually I can give a bit more in my questions.




Any help is appreciated.


Answer



Suppose



$L^k = 0, \; k \ge 1; \tag 1$



then consider the identity, which holds for any $m \ge 1$,



$L^m - I = (L - I)(\displaystyle \sum_0^{m - 1} L^j) = (L - I)(L^{m - 1} + L^{m - 2} + \ldots + L + I); \tag 2$




this equation may easily be proved (by induction on $m$ if you like), and is quite likely familiar to the reader either from high-school algebra or the study of roots of unity in field theory. Be that as it may, with (1) in place we see that (2) becomes, with $m = k$,



$-I = (L - I)(L^{k - 1} + L^{k - 2} + \ldots + L + I), \tag 3$



which shows that $I - L$ is invertible with inverse



$(I - L)^{-1} = L^{k - 1} + L^{k - 2} + \ldots + L + I. \tag 4$



The particular case at hand may be resolved by taking $k = 3$.



Saturday, May 20, 2017

calculus - Convergence of $\sum_n^\infty (-1)^n\frac{\sin^2 n}n$




Could anyone give a hint how to prove the convergence of the following sum?



$$\sum_n^\infty (-1)^n\frac{\sin^2 n}n$$



I tried writing it like this instead:



$$\sum_n^\infty \frac1n (-1)^n \sin^2 n.$$



From here, it is easy to see that $\frac1n$ is a bounded and strictly decreasing sequence. It would be sufficient to prove that the sequence of partial sums of $(-1)^n\sin^2 n$ is bounded.




From here, I get that $(-1)^n\sin^2 n = (-1)^n\frac{1 - \cos 2n}2 = \frac{(-1)^n}2 - \frac{(-1)^n \cos2n}2$, where the sequence of partial sums of $\frac{(-1)^n}2$ is bounded as well as the sequence of partial sums of $\frac{\cos 2n}2$. Unfortunately, I cannot tell anything about $\frac{(-1)^n\cos 2n}2$.



Thank you.


Answer



The series is not absolutely convergent, so the study of convergence is of interest.



We have
$$\sin^2n=1-\cos^2n=1-\frac{1+\cos(2n)}2=\frac 12-\frac{\cos(2n)}2.$$
Since $$\sum_{n=1}^\infty\frac{(-1)^n}n\mbox{ is convergent},$$
we only have to address the convergence of $$\sum_{n=1}^\infty (-1)^n\frac{\cos(2n)}n,$$

which can be done by a summation by parts. Indeed, we define $s_n:=\sum_{k=0}^n(-1)^k$. Then
$$\sum_{k=M}^N(-1)^k\frac{\cos(2k)}k=\sum_{n=M}^Ns_n\frac{\cos(2n)}n-\sum_{n=M-1}^{N-1}s_n\frac{\cos(2(n+1))}{n+1}.$$
Since the series $\sum_k\frac 1{k^2}$ is convergent, we actually only have to show that the series
$$\sum_{n=1}^\infty s_n\frac{\cos(2n)}{n}\mbox{ and }\sum_{n=1}^\infty s_n\frac{\cos(2(n+1))}{n}$$
are convergent. (Indeed, $\frac{\cos(2n)}n-\frac{\cos(2(n+1))}{n+1}=\frac{\cos(2n)-\cos(2(n+1))}n-\cos(2(n+1))\left(\frac 1n-\frac 1{n+1}\right) $.) Since $s_{2k+1}=0$, it's enough to establish the convergence of
$$\sum_{n=1}^{\infty}\frac{\cos(4n)}n\mbox{ and }\sum_{n=1}^{\infty}\frac{\cos(2(2n+1))}n.$$
It can be done by (an other!) summation by parts.


elementary set theory - Is the powerset of the reals any "more uncountable" (in some sense) than the reals are?



I know that $\mathbb{N}$ is countable and has cardinality $\aleph_0$, and that $\mathbb{R}$ has cardinality $2^{\aleph_0} = \text{C}$ and is uncountable.



Are sets with cardinalities greater than $\text{C}$ (like $2^{\mathbb{R}}$, for instance) "more uncountable" in some sense than the reals are?



Edit: I am familiar with the proof of the fact that there is no bijection from a set to its powerset. What I'm looking for is this: do we lose some more properties when we go from $\mathbb{R}$ to $2^{\mathbb{R}}$, like we lose countability when we go from $\mathbb{N}$ to $\mathbb{R}$? Are there any notions of "higher countability", or some sort of analog of countability, that $\mathbb{R}$ has, but which we miss when we consider the powerset of the reals?


Answer



You use the phrase "cardinalities greater than $C$," so I assume you know that Cantor's diagonal argument shows that for any set $X$, $\mathcal{P}(X)$ is strictly larger than $X$ (in that $X$ injects into $\mathcal{P}(X)$ but does not surject onto $\mathcal{P}(X)$).




Based on this, I think the real question is: what do you mean by "more uncountable"?



One possible answer is the following: there may be sets with combinatorial properties which are characteristic of extremely large objects, which "reasonable" infinite sets like the naturals and the reals cannot have. For example, measurability: a cardinal $\kappa$ is measurable iff there is a countably complete ultrafilter on $\kappa$ which is not principal. By combining "countably complete" with "nonprincipal," this is clearly an "uncountability property" if anything is!



I would guess that you would find large cardinals very interesting; and, I suspect that in general large cardinal properties provide positive answers to your question.






For a related question - given a set $X$ with cardinality in between $\omega$ and $C$, is $X$ "closer" to $\omega$ or $C$? - you should check out cardinal characteristics of the continuum.



elementary number theory - Prove that if $n$ is a positive integer then $2^{3n}-1$ is divisible by $7$.



I encountered a problem in a book that was designed for IMO trainees. The problem had something to do with divisibility.




Prove that if $n$ is a positive integer then $2^{3n}-1$ is divisible by $7$.




Can somebody give me a hint on this problem. I know that it can be done via the principle of mathematical induction, but I am looking for some other way (that is if there is some other way)



Answer



Hint: Note that $8 \equiv 1~~~(\text{mod } 7)$.

So,
$$2^{3n}=(2^3)^n=8^n\equiv \ldots~~~(\text{mod } 7)=\ldots~~~(\text{mod } 7)$$
Try to fill in the gaps!



Solution:
Note that $8 \equiv 1~~(\text{mod } 7)$. This means that $8$ leaves a remainder of $1$ when divided by $7$. Now assuming that you are aware of some basic modular arithmetic,
$$2^{3n}=(2^3)^n=8^n\equiv 1^n ~~(\text{mod } 7)=1~~(\text{mod } 7)$$
Now if $2^{3n}\equiv 1~~(\text{mod } 7)$ then it follows that,


$$2^{3n}-1=8^n-1\equiv (1-1)~~(\text{mod } 7)~\equiv 0~~(\text{mod } 7)\\ \implies 2^{3n}-1\equiv 0~~(\text{mod } 7)$$
Or in other words, $2^{3n}-1$ leaves no remainder when divided by $7$ (i.e. $2^{3n}-1$ is divisible by $7$). As desired

Friday, May 19, 2017

complex analysis - Find the real and Imaginary part of $z^{z}$.

Find the real and Imaginary part of $z^{z}$.




My approach: If $z=re^{i\theta}$, then $$z^{z}=\exp{(z\ln(z))}=\exp{(re^{i\theta}(\ln(r)+i(\theta+2k\pi)))}$$
$$=\exp{(r(\cos(\theta)+i\sin(\theta))(\ln(r)+i(\theta+2k\pi)))}$$
$$=\exp(r(\cos(\theta)\ln(r)-\sin(\theta)(\theta+2k\pi))+ir(\cos(\theta)(\theta+2k\pi)+\sin(\theta)\ln(r)))$$



And continuing with this development, I can find the imaginary and real parts, but is this correct? Is there an easier approach? Regards!

algebra precalculus - Solving $\sin \theta + \cos \theta=1$ in the interval $0^\circ\leq \theta\leq 360^\circ$



Solve in the interval $0^\circ\leq \theta\leq 360^\circ$ the equation $\sin \theta + \cos \theta=1$.



I've got the two angles in the interval to be $0^\circ$ and $90^\circ$. It's not an answer I'm after; I'd just like to see different approaches one could take with a problem like this. Thank you!


Sorry, my approach:



$$\begin{align} \frac{1}{\sqrt 2}\sin \theta + \frac{1}{\sqrt 2}\cos \theta &= \frac{1}{{\sqrt 2 }} \\ \cos 45^\circ\sin \theta + \sin 45^\circ\cos \theta &= \frac{1}{\sqrt 2} \\ \sin(\theta + 45^\circ) &= \frac{1}{\sqrt 2} \\ \theta + 45^\circ &= 45^\circ,\ 135^\circ \\ \theta &= 0^\circ, \ 90^\circ \end{align}$$


Answer



A slightly 'expanded-upon' version of user67418's answer:


[Image: the unit circle $(\cos\theta,\sin\theta)$ and the line $x+y=1$]


The circle here represents the parametric curve $(x=\cos\theta, y=\sin\theta)$, and the line is the line $x+y=1$, so their points of intersection are the points where $\cos\theta+\sin\theta=1$; at least for me, this is the clearest way of seeing that there are only the two solutions already mentioned.


elementary number theory - If $k^2-1$ is divisible by $8$, how can we show that $k^4-1$ is divisible by $16$?



All is in the title:




If $k^2-1$ is divisible by $8$, how can we show that $k^4-1$ is divisible by $16$?




I can't conclude from the fact that $k^2 - 1$ is divisible by $8$, that then $k^4-1$ is divisible by $16$.



Answer



Hint: $$k^4 - 1 = (k^2 - 1)(k^2 + 1) = (k^2 - 1)\Big((k^2 - 1) + 2\Big)$$






ADDED per comment: So yes, we have that if $(k^2 - 1)$ is divisible by $8$, then $$k^4 - 1 = (k^2 - 1)(k^2 + 1) = 8b(k^2 + 1)$$ for some integer $b$.



And now, if $k^2 - 1$ is divisible by $8$, it is even, and then so is $k^2 + 1$.



That is, $k^2 + 1 = (k^2 - 1) + 2 = 8b + 2$. So $$(k^2 - 1)(k^2 + 1) = 8b(8b + 2) = 16b (4b + 1)$$



Thursday, May 18, 2017

Functional equation extended solution

The question is: Find all functions $f:\mathbb R \to \mathbb R$ such that $$f(x+y)f(x-y)=(f(x)+f(y))^2-4x^2f(y).$$ Taking $x=y=0$, we get $f(0)^2=4f(0)^2 \implies f(0)=0$. Now take $x=y$, which immediately gives $$4f(x)^2=4x^2f(x)\\\implies f(x)(f(x)-x^2)=0\\\implies f(x)=0 \text{ or } f(x)=x^2 \quad \forall x \in \mathbb R$$ This was my solution. But I was stunned when I looked at the official solution. [Image: official solution]




Why do we need to continue to do anything after getting what I got, isn't it sufficient?

Wednesday, May 17, 2017

real analysis - The n-th root of a prime number is irrational

If $p$ is a prime number, how can I prove by contradiction that the equation $x^{n}=p$ doesn't admit solutions in $\mathbb {Q}$, where $n\ge2$?

probability - Explain why $E(X) = \int_0^\infty (1-F_X (t)) \, dt$ for every nonnegative random variable $X$




Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show,
$$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$
when $X$ has : a) a discrete distribution, b) a continuous distribution.




I assumed that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, then $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. Although how useful integrating that is, I really have no idea.


Answer




For every nonnegative random variable $X$, whether discrete or continuous or a mix of these,
$$
X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt,
$$
hence




$$
\mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt.
$$








Likewise, for every $p>0$, $$
X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt,
$$
hence





$$
\mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt.
$$
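As a numerical sanity check of both identities, one can compare sample moments against the tail-probability integrals, say for an exponential random variable. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10**6)   # nonnegative X with E(X) = 2

t = np.arange(0.0, 60.0, 0.01)
xs = np.sort(x)
# Empirical tail P(X > t), read off the sorted sample by binary search.
tail = 1.0 - np.searchsorted(xs, t, side="right") / x.size

dt = 0.01
print(np.mean(x), tail.sum() * dt)           # both approximately 2

p = 2                                        # E(X^2) = 8 for this X
print(np.mean(x**p), (p * t**(p - 1) * tail).sum() * dt)
```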



arithmetic - Multiplication of repeating decimal $0.3333\overline{3}$ by $3$




Let's start considering a simple fractions like $\dfrac {1}{2}$ and $\dfrac {1}{3}$.



If I choose to represent those fraction using decimal representation, I get, respectively, $0.5$ and $0.3333\overline{3}$ (a repeating decimal).



That is where my question begins.



If I multiply either $\dfrac {1}{2}$ or $0.5$ by $2$, I end up with $1$, and likewise if I multiply $\dfrac {1}{3}$ by $3$.




Nonetheless, if I decide to multiply $0.3333\overline{3}$ by $3$, I will not get $1$, but instead, $0.9999\overline{9}$



What am I missing here?



Note that my question is different from the question Adding repeating decimals.


Answer



Hint: compute the difference between $1$ and $0.9\bar9$. How much is that? What do you conclude?
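To see the hint in action with exact arithmetic, here is a small Python sketch: with rationals, $\frac13\cdot 3$ is exactly $1$, while the finite truncations $0.3\ldots3$ times $3$ fall short of $1$ by exactly $10^{-n}$, a gap that vanishes in the limit.

```python
from fractions import Fraction

print(Fraction(1, 3) * 3)                 # exactly 1

# Truncations 0.3, 0.33333, 0.3333333333, ...
for n in (1, 5, 10):
    trunc = Fraction(10**n // 3, 10**n)   # 0.33...3 with n threes
    print(n, 3 * trunc, 1 - 3 * trunc)    # gap is exactly 1/10^n
```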


calculus - Integral of $\int \frac{x^2 - 5x + 16}{(2x+1)(x-2)^2}\,dx$




I am trying to find the integral of this by using integration of rational functions by partial fractions.



$$\int \frac{x^2 - 5x + 16}{(2x+1)(x-2)^2}dx$$



I am not really sure how to start this but the book gives some weird formulas to memorize with no explanation of why: $\frac {A}{(ax+b)^i}$ and $\frac {Ax + B}{(ax^2 + bx +c)^j}$



I am not sure at all what this means and there is really no explanation of any of it, I am guessing $i$ is for imaginary number, and $j$ is just a representation of another imaginary number that is not the same as $i$. $A$, $B$ and $C$, I have no idea what that means and I am not familiar with capital letters outside of triangle notation so I am guessing that they are angles of lines for something.


Answer



See first Arturo's excellent answer to Integration by partial fractions; how and why does it work?








I am guessing i is for imaginary number, and j is just a representation of another imaginary number that is not the same as i.



I don't know what an indice or natural number is and it is not mentioned anywhere in the text. (in a comment)





The numbers $i$ and $j$ are natural numbers, i.e. they are positive integers $1,2,3,\dots,n,\dots .$ Their set is denoted by $\mathbb{N}$.




$A$, $B$ and $C$, I have no idea what that means and I am not familiar
with capital letters outside of triangle notation so I am guessing
that they are angles of lines for something.




In this context the letters $A$, $B$ and $C$ are constants, i.e. independent of the variable $x$.





  • Let
    $$\frac{P(x)}{Q(x)}:=\frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right)^{2}}.\tag{1}$$ The denominator $Q(x):=\left( 2x+1\right) \left( x-2\right) ^{2}$ has factors of the form $(ax+b)^{i}$ only. Each such factor gives rise to $i\in\mathbb{N}$ partial fractions
    $$\frac{A_{i}}{(ax+b)^{i}}+\frac{A_{i-1}}{(ax+b)^{i-1}}+\ldots +\frac{A_{1}}{ax+b},\tag{2}$$ whose integrals can be computed recursively and/or found in tables of integrals (see $(6),(7),(8)$ below for the present case). The exponent of the factor $\left( x-2\right) ^{2}$ is $i=2$ and that of the factor $2x+1$ is $i=1$. Therefore we should find the constants $A_{1}$, $A_{2}$, $B$ such that
    $$\frac{P(x)}{Q(x)}=\frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right)^{2}}=\frac{B}{2x+1}+\frac{A_{2}}{\left( x-2\right) ^{2}}+\frac{A_{1}}{x-2}.\tag{3}$$


  • One method† is to reduce the RHS to a common denominator:
    $$\frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right) ^{2}}=\frac{B\left(x-2\right) ^{2}+A_{2}\left( 2x+1\right) +A_{1}\left( x-2\right) \left(2x+1\right) }{\left( 2x+1\right) \left( x-2\right) ^{2}}.\tag{3a}$$
    [See remark below.] This means that the polynomials in the numerators must be equal on both sides of this last equation. Expanding the RHS and grouping the terms of the same degree,
    $$\begin{eqnarray*}
    P(x) &:=&x^{2}-5x+16=B\left( x-2\right) ^{2}+A_{2}\left( 2x+1\right)
    +A_{1}\left( x-2\right) \left( 2x+1\right) \\
    &=&\left( Bx^{2}-4Bx+4B\right) +\left( 2A_{2}x+A_{2}\right) +\left(
    2A_{1}x^{2}-3A_{1}x-2A_{1}\right) \\
    &=&\left( B+2A_{1}\right) x^{2}+\left( -4B+2A_{2}-3A_{1}\right) x+\left(
    4B+A_{2}-2A_{1}\right),
    \end{eqnarray*}\tag{3b}$$
    and equating the coefficients of $x^{2}$, $x^{1}$ and $x^{0}$, we conclude that they must satisfy‡ the following system of 3 linear equations [see (*) for a detailed solution of the system]:
    $$\left\{
    \begin{array}{c}
    B+2A_{1}=1 \\
    -4B+2A_{2}-3A_{1}=-5 \\
    4B+A_{2}-2A_{1}=16
    \end{array}
    \right. \Leftrightarrow \left\{
    \begin{array}{c}
    A_{1}=-1 \\
    A_{2}=2 \\
    B=3.
    \end{array}
    \right.\tag{3c}$$ In short, this method reduces to solving a linear system. So, we have
    $$\frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right) ^{2}}=\frac{3}{2x+1}+\frac{2}{\left( x-2\right) ^{2}}-\frac{1}{x-2}.\tag{4}$$


  • We are now left with the integration of each partial fraction:
    $$\int \frac{x^{2}-5x+16}{\left( 2x+1\right) \left( x-2\right) ^{2}}dx=3\int \frac{1}{2x+1}dx+2\int \frac{1}{\left( x-2\right) ^{2}}dx-\int \frac{1}{x-2}dx.\tag{5}$$





Can you proceed from here? Remember these basic indefinite integral formulas:



$$\int \frac{1}{ax+b}dx=\frac{1}{a}\ln \left\vert ax+b\right\vert +C, \tag{6}$$



$$\int \frac{1}{\left( x-r\right) ^{2}}dx=-\frac{1}{x-r}+C,\tag{7}$$



$$\int \frac{1}{x-r}dx=\ln \left\vert x-r\right\vert +C.\tag{8}$$
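For readers who want to check the decomposition $(4)$ and the resulting antiderivative mechanically, here is a short sketch assuming SymPy is available (an illustration, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 5*x + 16) / ((2*x + 1) * (x - 2)**2)

# Partial fraction decomposition; compare with (4).
print(sp.apart(f, x))

# Antiderivative, assembled from formulas like (6)-(8).
F = sp.integrate(f, x)
print(F)
print(sp.simplify(sp.diff(F, x) - f))   # 0, confirming F' = f
```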



--




† Another method is to evaluate both sides of $(3)$ at 3 different values, e.g. $x=-1,0,1$ and obtain a system of 3 equations. Another one is to compute $P(x)$



$$\begin{equation*}
P(x)=x^{2}-5x+16=B\left( x-2\right) ^{2}+A_{2}\left( 2x+1\right)
+A_{1}\left( x-2\right) \left( 2x+1\right)
\end{equation*}$$



first at the zeros of each term, i.e. $x=2$ and $x=-1/2$
$$\begin{eqnarray*}
P(2) &=&10=5A_{2}\Rightarrow A_{2}=2 \\
P\left( -1/2\right) &=&\frac{75}{4}=\frac{25}{4}B\Rightarrow B=3;
\end{eqnarray*}$$



and then at e.g. $x=0$
$$\begin{equation*}
P(0)=16=4B+A_{2}-2A_{1}=12+2-2A_{1}\Rightarrow A_{1}=-1.
\end{equation*}$$



For additional methods see this Wikipedia entry




‡ If $B+2A_{1}=1,-4B+2A_{2}-3A_{1}=-5,4B+A_{2}-2A_{1}=16$, then $x^{2}-5x+16=\left( B+2A_{1}\right) x^{2}+\left( -4B+2A_{2}-3A_{1}\right) x+\left(4B+A_{2}-2A_{1}\right)$ for all $x$ and $(3a)$ is an identity.






REMARK in response to a comment below by OP. For $x=2$ the RHS of $(3a)$ is not defined. But we can compute as per $(3b,c)$ or as per †, because we are not plugging $x=2$ into the fraction $(3a)$. In $(3c)$ we ensure that the numerators of $(3a)$, $$x^{2}-5x+16$$ and $$B\left( x-2\right) ^{2}+A_{2}\left( 2x+1\right)+A_{1}\left( x-2\right) \left( 2x+1\right),$$ are identically equal, i.e. they must have equal coefficients of $x^2,x,x^0$.







(*) Detailed solution of $(3c)$. Please note that we cannot find $A,B$ and $C$ with one equation only, as you tried below in a comment ("$16=2b+A_1−A_2$ I have no idea how to solve this.")
$$\begin{eqnarray*}
&&\left\{
\begin{array}{c}
B+2A_{1}=1 \\
-4B+2A_{2}-3A_{1}=-5 \\
4B+A_{2}-2A_{1}=16
\end{array}
\right. \\
&\Leftrightarrow &\left\{
\begin{array}{c}
B=1-2A_{1} \\
-4\left( 1-2A_{1}\right) +2A_{2}-3A_{1}=-5 \\
4\left( 1-2A_{1}\right) +A_{2}-2A_{1}=16
\end{array}
\right. \Leftrightarrow \left\{
\begin{array}{c}
B=1-2A_{1} \\
-4+5A_{1}+2A_{2}=-5 \\
4-10A_{1}+A_{2}=16
\end{array}
\right. \\
&\Leftrightarrow &\left\{
\begin{array}{c}
B=1-2A_{1} \\
A_{2}=-\frac{1+5A_{1}}{2} \\
4-10A_{1}-\frac{1+5A_{1}}{2}=16
\end{array}
\right. \Leftrightarrow \left\{
\begin{array}{c}
B=1-2A_{1} \\
A_{2}=-\frac{1+5A_{1}}{2} \\
A_{1}=-1
\end{array}
\right. \\
&\Leftrightarrow &\left\{
\begin{array}{c}
B=1-2\left( -1\right) \\
A_{2}=-\frac{1+5\left( -1\right) }{2} \\
A_{1}=-1
\end{array}
\right. \Leftrightarrow \left\{
\begin{array}{c}
A_{1}=-1 \\
A_{2}=2 \\
B=3
\end{array}
\right.
\end{eqnarray*}$$
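The same $3\times 3$ system can also be handed to a numerical linear solver; a minimal sketch assuming NumPy, with the unknowns ordered $(A_1,A_2,B)$:

```python
import numpy as np

# System (3c), rows in the order given there:
#    2*A1        +   B =  1
#   -3*A1 + 2*A2 - 4*B = -5
#   -2*A1 +   A2 + 4*B = 16
M = np.array([[ 2.0, 0.0,  1.0],
              [-3.0, 2.0, -4.0],
              [-2.0, 1.0,  4.0]])
rhs = np.array([1.0, -5.0, 16.0])

print(np.linalg.solve(M, rhs))   # [-1.  2.  3.], i.e. A1 = -1, A2 = 2, B = 3
```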







Comment below by OP




I watched the MIT lecture on this and they use the "cover up" method to solve systems like this and I am attempting to use that here. I have $$\frac{A}{2x+1} + \frac{B}{x-2} + \frac{C}{(x-2)^2}$$ Is there anything wrong so far? It appears to me to be correct. Now I try to find B by making x = 2 and multiplying by x-2, which gets rid of C and A since it makes them zero, and then the RHS cancels out and leaves me with B = 2; but that also works for C I think, so I am confused, and for A I get 55/6 which I know is wrong, but the method works and I am doing the math right, so what is wrong?




Starting with $$\frac{x^{2}-5x+16}{(2x+1)(x-2)^{2}}=\frac{A}{2x+1}+\frac{B}{x-2}+\frac{C}{(x-2)^{2}}\tag{3'}$$




we can multiply it by $(x-2)^{2}$



$$\frac{x^{2}-5x+16}{2x+1}=\frac{A(x-2)^{2}}{2x+1}+B(x-2)+C.$$



To get rid of $A$ and $B$ we make $x=2$ and obtain $C$



$$\frac{2^{2}-5\cdot 2+16}{2\cdot 2+1}=\frac{A(2-2)^{2}}{2\cdot 2+1}+B(2-2)+C$$



$$\Rightarrow 2=0+0+C\Rightarrow C=2$$




We proceed by multiplying $(3')$ by $2x+1$



$$\frac{x^{2}-5x+16}{(x-2)^{2}}=A+\frac{B(2x+1)}{x-2}+\frac{C(2x+1)}{(x-2)^{2}}$$



and making $x=-1/2$ to get rid of $B$ and $C$



$$\frac{\left( -1/2\right) ^{2}-5\left( -1/2\right) +16}{(-1/2-2)^{2}}=A+
\frac{B(2\left( -1/2\right) +1)}{-1/2-2}+\frac{C(2\left( -1/2\right) +1)}{
(-1/2-2)^{2}}$$




$$\Rightarrow 3=A+0+0\Rightarrow A=3$$



Substituting $A=3,C=2$ in $(3')$, we have



$$\frac{x^{2}-5x+16}{(2x+1)(x-2)^{2}}=\frac{3}{2x+1}+\frac{B}{x-2}+\frac{2}{
(x-2)^{2}}$$



Making e.g. $x=1$ (it could be e.g. $x=0$)



$$\frac{1^{2}-5+16}{(2+1)(1-2)^{2}}=\frac{3}{2+1}+\frac{B}{1-2}+\frac{2}{(1-2)^{2}},$$



$$\Rightarrow 4=1-B+2\Rightarrow B=-1.$$



Thus



$$\frac{x^{2}-5x+16}{(2x+1)(x-2)^{2}}=\frac{3}{2x+1}-\frac{1}{x-2}+\frac{2}{(x-2)^{2}},\tag{3''}$$



which is the same decomposition as $(4)$.


Tuesday, May 16, 2017

monotone functions - Discontinuous derivative, positive on a dense set

Does there exist a continuous monotone function $f: [0,1]\rightarrow [0,1]$ with the following properties:



(1) $f$ is strictly increasing, $f(0)=0$ and $f(1)=1$;




(2) there is no interval $A\subset [0,1]$, where the derivative $f'$ (a) exists, (b) is continuous and (c) is finite;



(3) there exists a set $B\subset [0,1]$ dense in $[0,1]$, where the derivative $f'$ (a) exists, (b) is positive and (c) is finite?



I can construct an example of a function $f:[0,1]\rightarrow [0,1]$ satisfying condition (1) whose derivative is either $0$ or $+\infty$ (at the points where the derivative exists). It follows from Lebesgue's theorem that $f'$ is zero on some set $A\subset[0,1]$ which has Lebesgue measure $1$ and is hence dense in $[0,1]$. Clearly, since the derivative is either $0$ or $+\infty$ (in the example which I can construct), and the function is invertible, $f'$ cannot vanish on any whole interval.



In other words, I can answer "yes" to my question if I omit the word "positive" in condition (3).



I hope that my question is "natural", but I have found on the internet neither a proof, nor a counterexample, nor the question itself (stated, for example, as an open problem).

abstract algebra - In an extension field, is there any difference between the original field and its isomorphic copy in the extension field?




I recently came to the topic of field extensions in my abstract algebra course, and there has been a slight issue which has been bothering me that I was hoping I might be able to clear up.



We have defined an extension field for a field F to be a field E such that $F \subseteq E$ and that $F$ is a field under the operations of $E$ restricted to $F$.



Sounds easy enough and I realize that we have been using objects like this for a long time. For example we know that $\mathbb{C}$ is an extension field of $\mathbb{R}$.



Something that has been bothering me a little bit though is that we have started proving theorems where we need to construct extension fields, but these extension fields don't seem to contain the original field $F$ but rather an isomorphic copy of $F$.



For example if our field was $F$, then $F[x]/(p(x))$ is a field if $p(x)$ is irreducible, which contains a subfield isomorphic to $F$. It seems strange that in the theorems (Gallian's Text) that $F[x]/(p(x))$ is considered an extension field for $F$ even though it doesn't really contain $F$ as a subset, but rather another set which is isomorphic.




I don't think I would normally have thought of this as much of a problem, but I remember that earlier in the text Gallian seems to mention that even when structures are isomorphic and behave essentially the same, we need to keep in mind that they are not exactly the same.



If this distinction does matter, why not make the definition of an extension field just say that $E$ is an extension field of $F$ if $E$ has a subfield isomorphic to $F$? This would seem to include all cases. Is this largely a historical issue related to how mathematicians thought about isomorphic structures in the past?


Answer



Firstly, a short answer. If you are just interested in a field extension of $F$, then you must first realise that you should be quite content with a field extension of any other field $F'$, as long as $F$ and $F'$ are isomorphic and you know the isomorphism $F\to F'$. This is something we do often in mathematics, yes sometimes without sufficient care, namely ignoring the part that says "if you know the isomorphism...". Often you'd hear people say "we'll just identify these two things since they are isomorphic", though this is not really a healthy thing to do (nor do we actually do that). What we do often is "identify these two things since they are isomorphic and we know precisely which isomorphism we mean for the identification". That is healthy. So, for your field $F$ and the somewhat incorrect claim that $F[x]/(p(x))$ is a field extension of it, what is really going on is that $F[x]/(p(x))$ is a field extension of an isomorphic copy of $F$, and we know precisely which isomorphism we are talking about, so it's OK to identify them. More precisely, we pretend the original $F$ is the isomorphic copy we actually have an extension of.



As long as you are considering just a few objects of study this is usually quite fine. Trouble starts when you are considering infinitely many objects. For instance, knowing how to obtain the splitting field extension of a polynomial, it is tempting to obtain the algebraic closure of $F$ by 'simply' using a Zorn's lemma argument, each time splitting one more polynomial. It is instructive to try to work out the details and see where it fails (there are lots of difficulties because of those identifications above).



As for your final suggestion, to speak of a field extension of $F$ as meaning that $F'$ contains an isomorphic copy of $F$: you can do that, but you'll have to specify the isomorphism explicitly (since there could be different isomorphic copies, and they can be isomorphic in different ways). But that does not really help, or matter much, since any such superficially broader extension can be replaced, by identifying along the given isomorphism, by the good old notion of extension.



integration - When does a function NOT have an antiderivative?




I know this question may sound naïve but why can't we write $\int e^{x^2} dx$ as $\int e^{2x} dx$? The former does not have an antiderivative, while the latter has.



In light of this question, what are sufficient conditions for a function NOT to have an antiderivative. That is, do we need careful examination of a function to say it does not have an antiderivative or is there any way that once you see the function, you can right away say it does not have an antiderivative?


Answer



As you might have realised, exponentiation is not associative:



$$\left(a^b\right)^c \ne a^\left(b^c\right)$$



So what should $a^{b^c}$ mean? The convention is that exponentiation is right associative:




$$a^{b^c} = a^\left(b^c\right)$$



Because the otherwise left-associative exponentiation is just less useful and redundant, as it can be represented by multiplication inside the power (again as you might have realised):



$$a^{bc} = \left(a^b\right)^c$$



Wikipedia on associativity of exponentiation.
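Python's `**` operator follows the same right-associative convention, so the point is easy to demonstrate (a quick sketch):

```python
# Exponentiation is right-associative in Python, as in mathematics.
print(2 ** 3 ** 2)     # 512, parsed as 2 ** (3 ** 2)
print((2 ** 3) ** 2)   # 64, the left-associative reading
print(2 ** (3 * 2))    # 64, since (a**b)**c equals a**(b*c)
```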


Monday, May 15, 2017

real analysis - Differentiability of $f(x+y) = f(x)f(y)$

Let $f$: $\mathbb R$ $\to$ $\mathbb R$ be a function such that $f(x+y)$ = $f(x)f(y)$ for all $x,y$ $\in$ $\mathbb R$. Suppose that $f'(0)$ exists. Prove that $f$ is a differentiable function.




This is what I've tried:
Using the definition of differentiability and taking arbitrary $x_0$ $\in$ $\mathbb R$.



$$\lim_{h\to 0} \frac{f(x_0 + h)-f(x_0)}{h} = \cdots = f(x_0)\lim_{h\to 0} \frac{f(h) - 1}{h}.$$



Then, since $x_0$ is arbitrary, and using $f(x_0+0) = f(x_0) = f(x_0)f(0)$ with $y = 0$, can I finish the proof?
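As a numerical sanity check of that limit identity (not a proof), one can try it on a sample solution of the functional equation, $f(x)=a^x$; the choice $a=2$ below is purely illustrative:

```python
import math

a = 2.0
f = lambda x: a ** x                  # satisfies f(x + y) = f(x) * f(y)

x0, h = 1.7, 1e-6
lhs = (f(x0 + h) - f(x0)) / h         # difference quotient at x0
rhs = f(x0) * (f(h) - 1.0) / h        # f(x0) times the quotient at 0
print(lhs, rhs, f(x0) * math.log(a))  # all three agree to ~6 digits
```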

Sunday, May 14, 2017

real analysis - Evaluation of $\sum_{n=1}^\infty \frac{(-1)^{n-1}\eta(n)}{n}$ without using the Wallis Product



In THIS ANSWER, I showed that



$$2\sum_{s=1}^{\infty}\frac{1-\beta(2s+1)}{2s+1}=\ln\left(\frac{\pi}{2}\right)-2+\frac{\pi}{2}$$




where $\beta(s)=\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^s}$ is the Dirichlet Beta Function.



In the development, it was noted that



$$\begin{align}
\sum_{n=1}^\infty(-1)^{n-1}\log\left(\frac{n+1}{n}\right)&=\log\left(\frac21\cdot \frac23\cdot \frac43\cdot \frac45\cdots\right)\\\\
&=\log\left(\prod_{n=1}^\infty \frac{2n}{2n-1}\frac{2n}{2n+1}\right)\\\\
&=\log\left(\frac{\pi}{2}\right) \tag 1
\end{align}$$




where I used Wallis's Product for $\pi/2$.






If instead of that approach, I had used the Taylor series for the logarithm function, then the analysis would have led to



$$\sum_{n=1}^\infty(-1)^{n-1}\log\left(\frac{n+1}{n}\right)=\sum_{n=1}^\infty \frac{(-1)^{n-1}\eta(n)}{n} \tag 2$$



where $\eta(s)=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s}$ is the Dirichlet eta function.




Given the series on the right-hand side of $(2)$ as a starting point, it is evident that we could simply reverse steps and arrive at $(1)$.




But, what are some other distinct ways that one can take to evaluate the right-hand side of $(2)$?




For example, one might try to use the integral representation



$$\eta(s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}}{1+e^x}\,dx$$




and arrive at



$$\sum_{n=1}^\infty \frac{(-1)^{n-1}\eta(n)}{n} =\int_0^\infty \frac{1-e^{-x}}{x(1+e^x)}\,dx =\int_1^\infty \frac{x-1}{x^2(x+1)\log(x)}\,dx \tag 3$$



Yet, neither of these integrals is trivial to evaluate (without reversing the preceding steps).




And what are some other ways to handle the integrals in $(3)$?




Answer



Another way to handle $(2)$ is using the identity $$\eta\left(s\right)=\left(1-\frac{1}{2^{s-1}}\right)\zeta\left(s\right),$$ hence, since $\eta\left(1\right)=\log\left(2\right)$, $$\sum_{n\geq1}\frac{\left(-1\right)^{n-1}}{n}\eta\left(n\right)=\log\left(2\right)+\sum_{n\geq2}\frac{\left(-1\right)^{n-1}}{n}\zeta\left(n\right)-\sum_{n\geq2}\frac{\zeta\left(n\right)}{n}\left(-\frac{1}{2}\right)^{n-1},$$ and now we can use the identity $$\sum_{n\geq2}\frac{\zeta\left(n\right)}{n}\left(-x\right)^{n}=x\gamma+\log\left(\Gamma\left(x+1\right)\right),\quad -1<x\leq 1,$$ which can be proved by taking the log of the Weierstrass product of the Gamma function. So $$\sum_{n\geq1}\frac{\left(-1\right)^{n-1}}{n}\eta\left(n\right)=\log\left(2\right)+2\log\left(\frac{\sqrt{\pi}}{2}\right)=\log\left(\frac{\pi}{2}\right).$$


limits - Using the Squeeze Theorem on $\lim_{x\to 0}\frac{\sin^2x}{x^2}$

$$\lim_{x\to 0}\frac{\sin^2x}{x^2}$$




I'm trying to evaluate this limit using Squeeze Theorem. However, looking at the graph I know it approaches $1$, but I am getting $0$ using the Squeeze Theorem.



$$-\frac{1}{x^2} < \frac{\sin^2x}{x^2} < \frac{1}{x^2}$$



when I sub in $0$ it's just $0$. What am I doing wrong?



Edit: Wait, it's not zero! The upper and lower bounds are indeterminate. So I can't use squeeze theorem, correct?

Saturday, May 13, 2017

Does the series $\sum_{n=1}^\infty \frac{n}{\sqrt[3]{8n^5-1}}$ Converge?


$$\sum_{n=1}^\infty \frac{n}{\sqrt[3]{8n^5-1}}$$


From the tests that I know of:


Divergence Test: the terms tend to $0$, so the test is inconclusive.


Geometric series: I don't think this could be written in that manner.


Comparison Test/Lim Comparison: Compare to $$\frac{n}{8n^{\frac{5}{3}}}$$


Integral Test: I can't think of a integration method that would work here.


Alternating Series/Root Test don't apply.


Ratio Test: The limit is 1 so inconclusive.


Perhaps I'm making a mistake throughout the methods I've tried, but I'm lost. Using these tests, is it possible to find whether or not it converges or diverges?



Answer



Use the comparison test: $$\frac{n}{\sqrt[3]{8n^5-1}} \ge \frac{n}{\sqrt[3]{8n^5}} = \frac{1}{2n^{\frac53-1}}=\frac{1}{2n^{\frac23}}$$


Now use the $p$-series test (here $p=\frac23 \le 1$) to conclude that the series diverges.
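The divergence is slow: by the comparison above, the partial sums grow roughly like $\frac32 n^{1/3}$. A quick numeric sketch:

```python
# Partial sums of n / (8 n^5 - 1)^(1/3), compared with (3/2) n^(1/3).
s = 0.0
for n in range(1, 10**6 + 1):
    s += n / (8.0 * n**5 - 1.0) ** (1.0 / 3.0)
    if n in (10**2, 10**4, 10**6):
        print(n, s, 1.5 * n ** (1.0 / 3.0))
```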


elementary number theory - Proof : $aequiv b pmod n implies a^iequiv b^i pmod n$




The congruence $a\equiv b \pmod n$ implies $\exists k \in \mathbb{Z},\ a -b = kn$.
Taking the $i$-th power of both sides: $(a -b)^i = k^in^i$.




$(a-b)(a-b)^{i-1}=k^in^i$.
$n\mid (a-b), (a-b) \mid (a-b)^i \implies n\mid (a-b)^i \implies \exists m \in \mathbb {Z}, (a-b)^i = mn$.



But, my proof is incomplete, as it does not show still that $a^i - b^i = kn$.






All the comments and answers have ignored how to get the term $a^i - b^i$ in the first place.
It seems that all have taken the approach of simply treating $a^i- b^i=kn$ as a new equality, not derived from the original one; it just uses the property $n \mid (a-b)$ of the original one.



Answer



It might be easier to use $a = kn + b$ and the Binomial theorem. Then $a^i = (kn+b)^i = \sum_{j=0}^i \binom{i}{j}(kn)^jb^{i-j}$. All of these terms have a factor of $n$ in them except for the $0^{th}$ one, so this equation may be written in the form $a^i = pn + b^i$. Hence the claim.
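A randomized spot-check of the claim (a sanity test only):

```python
import random

# If a = b + k*n then a ≡ b (mod n); check that n divides a^i - b^i.
random.seed(1)
for _ in range(10_000):
    n = random.randint(2, 50)
    b = random.randint(-50, 50)
    a = b + random.randint(-10, 10) * n
    i = random.randint(1, 8)
    assert (a**i - b**i) % n == 0
print("ok")
```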


real analysis - Limit of the nested radical $\sqrt{7+\sqrt{7+\sqrt{7+\cdots}}}$










Where does this sequence converge?
$\sqrt{7},\sqrt{7+\sqrt{7}},\sqrt{7+\sqrt{7+\sqrt{7}}}$,...


Answer




For a proof of convergence,



Define the sequence as



$\displaystyle x_{0} = 0$



$\displaystyle x_{n+1} =\sqrt{7 + x_n}$




Note that $\displaystyle x_n \geq 0 \ \ \forall n$.



Notice that $\displaystyle x^2 - x - 7 = (x-a)(x-b)$ where $\displaystyle a \lt 0$ and $\displaystyle b \gt 0$.



We claim the following:



i) $\displaystyle x_n \lt b \Longrightarrow x_{n+1} \lt b$
ii) $\displaystyle x_n \lt b \Longrightarrow x_{n+1} \gt x_n$



For a proof of i)




We have that



$\displaystyle x_n \lt b = b^2 - 7$ and so $x_n +7 \lt b^2$ and thus by taking square roots $x_{n+1} \lt b$



For a proof of ii)



We have that




$\displaystyle (x_{n+1})^2 - (x_n)^2 = -(x^2_n - x_n -7) = -(x_n-a)(x_n-b) \gt 0$ if $x_n \lt b$.



Thus $\displaystyle \{x_{n}\}$ is monotonically increasing and bounded above and so is convergent.



By setting $L = \sqrt{7+L}$, we can easily see that the limit is $\displaystyle b = \dfrac{1 + \sqrt{29}}{2}$






In fact, we can show that the convergence is linear.




$\displaystyle \dfrac{b-x_{n+1}}{b-x_n} = \dfrac{b^2 - (7+x_n)}{(b+\sqrt{7+x_n})(b-x_n)} = \dfrac{1}{b + x_{n+1}}$



Thus $\displaystyle \lim_{n\to \infty} \dfrac{b-x_{n+1}}{b-x_n} = \dfrac{1}{2b}$.



We can also show something a bit stronger:



Let $\displaystyle t_n = b - x_n$.




We have shown above that $\displaystyle t_n \gt 0$ and $\displaystyle t_n \lt b^2$.



We have that



$\displaystyle b - t_{n+1} = \sqrt{7 + b - t_n} = \sqrt{b^2 - t_n}$



Dividing by $\displaystyle b$ throughout we get




$\displaystyle 1 - \dfrac{t_{n+1}}{b} = \sqrt{1 - \dfrac{t_n}{b^2}}$



Using $\displaystyle 1 - \dfrac{x}{2} \gt \sqrt{1-x} \gt 1 - x$ for $0 \lt x \lt 1$, we have that



$\displaystyle 1 - \dfrac{t_n}{2b^2} \geq 1 - \dfrac{t_{n+1}}{b} \geq 1 - \dfrac{t_n}{b^2}$



And so




$\displaystyle \dfrac{t_n}{2b} \leq t_{n+1} \leq \dfrac{t_n}{b}$



This gives us that $\displaystyle b - \dfrac{b}{b^n} \leq x_n \leq b - \dfrac{b}{(2b)^n}$
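The iteration, the limit $b=\dfrac{1+\sqrt{29}}{2}$, and the two-sided bound just derived are easy to observe numerically (a small Python sketch):

```python
import math

b = (1 + math.sqrt(29)) / 2          # positive root of x^2 - x - 7
x = 0.0                              # x_0
for n in range(1, 11):
    x = math.sqrt(7 + x)             # x_n
    lo = b - b / b**n                # lower bound derived above
    hi = b - b / (2 * b)**n          # upper bound derived above
    print(n, lo <= x <= hi, b - x)   # bound holds; error shrinks geometrically
print("limit b =", b)
```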


set theory - For any two sets $A,B$, $|A|\leq|B|$ or $|B|\leq|A|$


Let $A,B$ be any two sets. I really think that the statement $|A|\leq|B|$ or $|B|\leq|A|$ is true. Formally:



$$\forall A\forall B[\,|A|\leq|B| \lor\ |B|\leq|A|\,]$$



If this statement is true, what is the proof ?


Answer




This claim, the principle of cardinal comparability (PCC), is equivalent to the Axiom of Choice.


If the Axiom of Choice is true then Zorn's Lemma is true and a proof of the PCC is a classical application of Zorn's Lemma.


If PCC holds then, using Hartogs' Lemma, it is quite easy to show that the Well-Ordering Principle holds, which in turn (easily) implies the Axiom of Choice.


A complete presentation of these (and some other) equivalences, treated in a rather elementary fashion but including all the details, can be found in the book http://www.staff.science.uu.nl/~ooste110/syllabi/setsproofs09.pdf starting on page 31.


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...