Friday, November 30, 2018

calculus - Find the following finite sum


$f(x)=c_{2014}x^{2014}+c_{2013}x^{2013}+\dots+c_1x+c_0$ has 2014 roots $a_1,\dots,a_{2014}$ and $g(x)=c_{2014}x^{2013}+c_{2013}x^{2012}+\dots+c_1$. Given that $c_{2014}=2014$ and $f'(x)$ is the derivative of $f(x)$, find the sum $\sum_{n=1}^{2014}\frac{g(a_n)}{f'(a_n)}$.





$f(x)=2014(x-a_1)(x-a_2)\dots (x-a_{2014})$



$f'(x)=2014^2x^{2013}+2013\cdot c_{2013}x^{2012}+\dots+c_1$



Is there any relation between $f(a_n)$, $g(a_n)$ and $f'(a_n)$, or do we need a different approach to solve this?
Edit
As Gerry suggested
now I have

$\frac{-c_0}{2014}\left(\frac1{a_1\prod_{i\neq 1} (a_1-a_i)}+\frac1{a_2\prod_{i\neq 2} (a_2-a_i)}+\dots+\frac1{a_{2014}\prod_{i\neq2014} (a_{2014}-a_i)}\right)$



It would be helpful if someone could tell me what to do next.
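In the meantime, a numeric experiment (my own sketch, assuming numpy is available, with a small degree standing in for 2014) gives a concrete value to aim for with the partial-fraction algebra:

# Numeric sanity check (not a proof): build a small analogue of f with random
# roots, form g(x) = (f(x) - c_0)/x, and evaluate the sum over the roots.
# N, lead, and the root range are illustrative choices of mine.
import numpy as np

N = 6                                   # small stand-in for 2014
lead = float(N)                         # c_N = N, mirroring c_2014 = 2014
roots = np.random.uniform(0.5, 2.0, N)  # a_1, ..., a_N, kept away from 0
f = np.poly1d(lead * np.poly(roots))    # f(x) = N * (x - a_1)...(x - a_N)
fp = f.deriv()
c0 = f.coeffs[-1]

g = lambda x: (f(x) - c0) / x           # g(x) = c_N x^{N-1} + ... + c_1

print(sum(g(a) / fp(a) for a in roots)) # compare with whatever the algebra yields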

linear algebra - How to find orthogonal eigenvectors if some of the eigenvalues are the same?



I have an example:
$$A=\begin{pmatrix} 2 & 2 & 4 \\ 2 & 5 & 8 \\ 4 & 8 & 17 \end{pmatrix}$$
The eigenvalues I found are $\lambda_1=\lambda_2=1$ and $\lambda_3=22$.
For $\lambda=1$,
$$\begin{pmatrix} x\\ y \\ z \end{pmatrix}=\begin{pmatrix} -2\\ 1 \\ 0 \end{pmatrix}y+\begin{pmatrix} -4\\ 0 \\ 1 \end{pmatrix}z$$
For $\lambda=22$,
$$\begin{pmatrix} x\\ y \\ z \end{pmatrix}=\begin{pmatrix} 1/4\\ 1/2 \\ 1 \end{pmatrix}z$$

However, those eigenvectors I found are not orthogonal to each other. The goal is to find an orthogonal matrix P and diagonal matrix Q so that $A=PQP^T$.


Answer



One thing we know is that eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal. So, if we find eigenvectors $v_1,v_2,v_3$ for $\lambda_1< \lambda_2< \lambda_3$ we are done. On the other hand, we have eigenvalues $\lambda_1=\lambda_2=1$ and $\lambda_3=22$, so that there are not $3$ distinct eigenvalues and the situation becomes somewhat more complicated.



Suppose we found $v_1,v_2\in E(A,\lambda_1)$ which are linearly independent (and hence a basis for the eigenspace). We know that $v_1\perp v_3$ and $v_2\perp v_3$. This means $\langle v_1,v_3\rangle=\langle v_2,v_3\rangle=0$. By bilinearity of the inner product, we get that $\langle av_1+bv_2,v_3\rangle =0$ for all $a,b\in \mathbb{R}$. The upshot is that the entire eigenspace $E(A,\lambda_1)$ is orthogonal to $v_3$. So, we are free to choose any basis of eigenvectors for $E(A,\lambda_1)$ and proceed from there. Well, just apply Gram-Schmidt to $v_1,v_2$. Define
$$ u_1=\frac{v_1}{\lVert v_1\rVert}$$
$$ u_2=\frac{v_2-\langle v_2, u_1\rangle u_1}{\lVert v_2-\langle v_2, u_1\rangle u_1\rVert}.$$
A quick check shows that these two vectors form an orthonormal basis for $E(A,\lambda_1)$. Then, if we take any nonzero $v_3\in E(A,\lambda_3)$ and set
$$ u_3=\frac{v_3}{\lVert v_3\rVert}$$
we can see that $(u_1,u_2,u_3)$ is an orthonormal eigenbasis of $\mathbb{R}^3\cong E(\lambda_1,A)\oplus E(\lambda_3,A)$ with respect to $A$. You've already found the vectors $v_1,v_2,v_3$. Once you compute $u_1,u_2,u_3$, the matrix $P=[u_1,u_2,u_3]$ is orthogonal and

$$
A=P
\begin{bmatrix}
1&0&0\\
0&1&0\\
0&0&22
\end{bmatrix}
P^T.
$$
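For completeness, here is a small numeric check of the recipe (a sketch assuming numpy; the vectors are the ones found in the question):

import numpy as np

A = np.array([[2., 2., 4.],
              [2., 5., 8.],
              [4., 8., 17.]])

# Eigenvectors read off from the question:
v1 = np.array([-2., 1., 0.])     # eigenvalue 1
v2 = np.array([-4., 0., 1.])     # eigenvalue 1
v3 = np.array([0.25, 0.5, 1.])   # eigenvalue 22

u1 = v1 / np.linalg.norm(v1)
w  = v2 - (v2 @ u1) * u1         # the Gram-Schmidt step
u2 = w / np.linalg.norm(w)
u3 = v3 / np.linalg.norm(v3)

P = np.column_stack([u1, u2, u3])
Q = np.diag([1., 1., 22.])

print(np.allclose(P @ Q @ P.T, A))       # True
print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal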


divisibility - Show that the number $n$ is divisible by $7$

How can I prove that $n = 8709120$ is divisible by $7$?
I have tried a couple methods, but I can't show that. Can somebody please help me?

real analysis - Convergence in $L^{\infty}$ norm implies convergence in $L^1$ norm




Let $\{f_n\}_{n\in \mathbb{N}}$ be a sequence of measurable functions on a measure space and $f$ measurable. Assume the measure space $X$ has finite measure. If $f_n$ converges to $f$ in $L^{\infty}$-norm , then $f_n$ converges to $f$ in $L^{1}$-norm.




This is my approach:




We know $||f_n-f||_{\infty} \to 0 $ and by definition $||f_n-f||_{\infty} =\inf\{M\geq 0: |f_n-f|\leq M \text{ a.e.}\}.$ Then
\begin{align}
||f_n-f||_1 &=\int |f_n-f| \,dm\\
&\leq \int|f_n|\,dm+\int|f|\,dm
\end{align}



I don't know how to proceed after that, any help would be appreciated.


Answer



For any function $g$, $||g||_1 = \int_X|g(m)|\,dm \leq \int_X||g||_\infty \,dm = \mu(X)\cdot||g||_\infty$ (as $|g(m)| \leq ||g||_\infty$ almost everywhere); equivalently, $||g||_\infty \geq \frac{||g||_1}{\mu(X)}$. So if $||f_n-f||_\infty$ tends to zero, then $||f_n-f||_1$ tends to zero as well.



Thursday, November 29, 2018

calculus - Formula for $r+2r^2+3r^3+...+nr^n$




Is there a formula to get $r+2r^2+3r^3+\dots+nr^n$ provided that $|r|<1$? This seems like the geometric "sum" $r+r^2+\dots+r^n$ so I guess that we have to use some kind of trick to get it, but I cannot think of a single one. Can you please help me with this problem?



Answer



We have
$$1+r+r^2 + \cdots + r^n = \dfrac{r^{n+1}-1}{r-1}$$
Differentiating this once, we obtain
$$1+2r + 3r^2 + \cdots + nr^{n-1}= \dfrac{nr^{n+1}-(n+1)r^n + 1}{(r-1)^2}$$
Multiply the above by $r$ to obtain what you want.
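A quick numeric spot check of the resulting closed form (my own verification script):

# Compare the sum r + 2r^2 + ... + n r^n against r times the differentiated
# geometric sum, as derived above.
def direct(r, n):
    return sum(k * r**k for k in range(1, n + 1))

def closed(r, n):
    return r * (n * r**(n + 1) - (n + 1) * r**n + 1) / (r - 1)**2

r, n = 0.37, 25
print(direct(r, n), closed(r, n))   # the two agree to floating-point accuracy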


Limit of goniometric function without l'Hospital's rule




I'm trying to figure this out without l'Hospital's rule, but I don't know how I should start. Any hint, please?



$$\lim_{x\to \frac{\pi}2} \frac {1-\sin x}{\left(\frac\pi2 -x\right)^2 }$$


Answer



Set $t=\frac \pi2 - x,$
$$\lim_{x\to {\pi\over 2}} \frac {1-\sin x}{(\frac\pi2 -x)^2}=\lim_{t\to {0}} \frac {1-\cos t}{t^2}=\lim_{t\to {0}} \frac {2 \sin ^2(t/2)}{4(t/2)^2}={1\over 2}$$


probability theory - Let $X$ be a positive random variable with distribution function $F$. Show that $E(X)=\int_0^\infty(1-F(x))\,dx$

Let $X$ be a positive random variable with distribution function $F$. Show that $$E(X)=\int_0^\infty(1-F(x))dx$$



Attempt



$\int_0^\infty(1-F(x))\,dx= \int_0^\infty(1-F(x))\cdot 1\,dx = x (1-F(x))\big|_0^\infty + \int_0^\infty x\,dF(x)$ (integration by parts)



$=0 + E(X)$, where the boundary term at $\infty$ is zero since $F(x)\rightarrow 1$ as $x\rightarrow \infty$ (more precisely, one needs $x(1-F(x))\rightarrow 0$, which holds whenever $E(X)<\infty$)



Is my proof correct?
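One way to build confidence in the identity itself is a numeric check against a distribution with a known mean; here is a sketch using the exponential distribution, where $F(x)=1-e^{-\lambda x}$ and $E(X)=1/\lambda$ (my own illustrative choice):

import math

lam = 2.0
dx = 1e-4
# Riemann sum of the integral of (1 - F(x)) = exp(-lam*x), truncated at x = 20
integral = sum(math.exp(-lam * i * dx) * dx for i in range(200000))
print(integral, 1 / lam)   # both ~0.5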

Wednesday, November 28, 2018

real analysis - Solution space to a functional equation



This question comes from my attempts at understanding an example presented by Bill Gasarch on his blog. The example is of a continuous strictly increasing function whose derivative is zero almost everywhere. The example is apparently discussed in the book Probability and Measure by Patrick Billingsley, but I currently do not have access to it.



Gasarch explains the background very well. Given a parameter $p \in (0,1)$, he describes a continuous function $F:[0,1]\to[0,1]$ which is increasing, $F(0) = 0$, $F(1) = 1$, but such that $F' = 0$ a.e. The derivative $f = F'$, which is only defined almost everywhere, must be nonnegative and should satisfy the following functional equation $$f(x) = \begin{cases} 2pf(2x) & \text{when $x \in (0,1/2)$,} \\ 2(1-p)f(2x-1) & \text{when $x \in (1/2,1)$,} \end{cases}$$ almost everywhere. It would seem, from the claimed example, that a nonnegative measurable function that satisfies this functional equation almost everywhere must be $0$ almost everywhere. However, this is not true since every constant function satisfies this functional equation when $p = 1/2$. Thinking about the dynamic properties of the transformation $$T(x) = \begin{cases} 2x & \text{when $x \in [0,1/2)$,} \\ 1/2 & \text{when $x = 1/2$ (say),} \\ 2x-1 & \text{when $x \in (1/2,1]$,} \end{cases}$$ it does seem that the space of solutions to the above equation is heavily constrained.




Is there a nice characterization of the space of solutions to the above functional equation? A general characterization would be best but a characterization for special cases (e.g. $f \geq 0$, $p = 1/2$) would be welcome.


Answer



Here is the proof that $F'=0$ almost everywhere, if $p\neq \frac 1 2$ :



We have
$F(x) = pF(2x)$ if $x \in [0,0.5]$ and $F(x) = p + (1-p)F(2x-1)$ if $x \in [0.5,1]$.
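(Before the proof, a side note: these two relations already determine $F$ numerically. A minimal recursive evaluator, my own sketch, truncates the binary expansion at a fixed depth.)

# Evaluate F from its self-similar relations; `depth` bounds the recursion,
# which corresponds to cutting off the binary expansion of x.
def F(x, p, depth=50):
    if depth == 0 or x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    if x < 0.5:
        return p * F(2 * x, p, depth - 1)
    return p + (1 - p) * F(2 * x - 1, p, depth - 1)

print(F(0.25, 0.3), F(0.5, 0.3), F(0.75, 0.3))  # increasing from 0 toward 1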



Let $x \in (0,1)$ and suppose $x$ is normal in binary. Then $F'(x)=0$:
Let $k>0$, and suppose the first $k$ digits of $x$ contain $a$ digits $0$, $b$ digits $1$, and that the length of the streak of consecutive $0$s (or consecutive $1$s) at the end of those $k$ digits is $c$. For example, suppose the $k$th digit is $0$ (so there are $c$ consecutive $0$s).
If $2^{-(k+1)} \le |x-y| < 2^{-k}$, then $y$ has the same $k-c$ first digits as $x$,
thus $|F(x)-F(y)| = p^{a-c} (1-p)^b |F(?)-F(?)| \le p^{a-c} (1-p)^b$,

so the slope between those points is bounded by $\frac{|F(x)-F(y)|}{|x-y|} \le p^{a-c} (1-p)^b 2^{k+1} = 2 (2p)^a (2(1-p))^b p^{-c} = 2(4p(1-p))^{k/2} (\frac{1-p}p)^{b-a}p^{-c}$.
When $p \neq \frac 1 2$, $4p(1-p) < 1$.
Moreover, since $x$ is normal, $(a-b)/k$ converges to $0$ as $k$ grows, as does $c/k$, so after taking logarithms it becomes clear that this quantity converges to $0$ as $k$ grows.






Now, about the positive nonzero solutions of your functional equation.
If you write $x$ in binary, the orbit of $x$ when acted on by the two transformations of the functional equation is the set of $y$ such that the binary expansions of $x$ and $y$ are the same up to removing/adding/changing a finite number of digits. Basically they are equivalent if you can truncate them (not necessarily at the same place) so that you are left with two identical infinite strings.



One problem is that if a string is ultimately periodic (so if $x \in \mathbb Q$), then it makes a nontrivial cycle.
The functional equation implies that $f(x) = (2p)^a (2(1-p))^b f(x)$ where the repeating portion of the expansion of $x$ has $a$ digits $1$ and $b$ digits $0$.

If this multiplier $(2p)^a (2(1-p))^b$ is not one, then $f(x)$ has to be $0$.
Thus, for most values of $p$, $f=0$ on $\mathbb Q$.



Then from this point, $f$ is determined by its value on any set of representatives of the equivalence classes of binary strings.



If (again) $p \neq \frac 1 2$, then $f$ is unbounded on any open interval, and if $\log {\frac p {1-p}}$ is irrational, its graph is even dense in $[0,1] \times \mathbb R^+$:



Suppose $x \in (0,1)$ and that $f(x) > 0$. The functional equation basically says that if we prepend $a$ digits $0$ and $b$ digits $1$ (in any order) to the binary expansion of $x$ to get a number $y$, then $f(y) = (2p)^{a}(2(1-p))^{b}f(x)$. By fixing the first digits of $y$ you can force $y$ to be in any open interval you want, and then by prepending additional digits, since $\log(2p)$ and $\log(2(1-p))$ are of different sign, you can make the multiplier as high (or as low) as you need to get above (or under) a specific value.
And in case their quotient is irrational you can make the multiplier as close to a specific value as you need.




If $p= \frac 1 2$, then the value of $F(x)$ doesn't change if you change/add/remove a finite amount of digits in the binary expansion of $x$, so the solutions are simply functions from the set of equivalence classes of binary strings into $\mathbb R$. They are still not good-looking, except the continuous ones which are constant.


sequences and series - Sum of all triangle numbers

Does anyone know the sum of all triangle numbers? I.e.,
$1+3+6+10+15+21+\dots$
I've tried everything, but it might help you if I tell you one useful discovery I've made:



I know that the sum of alternating triangle numbers, $1-3+6-10+\dots$, is equal to $1/8$, and that to change
$1+3+6+\dots$ into $1-3+6-\dots$ you would subtract $6+20+42+70+\dots$, which is every other triangular number (not the hexagonals) multiplied by two.



$1/8$ plus this value is $1+3+6+10+\dots$




A final note: I tried to split the triangle numbers into hexagonals and that series, and then I got the squares of the odd numbers. Using Dirichlet lambda functions this gave me $0$, but I don't think this could be right. A number of other sums gave me $-1/24$ and $3/8$, but I have no idea.

How are Quaternions derived from Complex numbers or Real numbers?



I understand how complex numbers are derived from real numbers. Namely when you have a sqrt of a negative number you must have an answer of some kind, but this answer cannot be in the real number system, therefore you need another number system which you call complex.




What I do not understand is which problem in calculation, either calculation in real numbers or in complex numbers, you cannot solve within those systems, for which you need another system like the quaternion.
Thus I fail to see how the quaternion system can be derived from the complex or real number system from a perspective of calculation.



To me it therefore seemed that someone wanted a 3dimensional number system to represent problems in graphing. This person then found out that the 3rd dimension in the number system made no sense (the relation of this 3rd dimension with complex and real numbers is unclear) and therefore defined a 4th dimension in order to make sense of this 3rd dimension. What I think at this moment is that he has not derived quaternions from complex and real numbers but instead simply chose to define another system.



However: Since I am a noob and this guy was clearly a genius, I presume that I fail to see something. And the relevant thing that I fail to see is: How are quaternions derived from complex and real numbers from a calculation perspective. (Not from a geometrical perspective).


Answer



The "someone" you're describing was the Irish physicist and mathematician William Rowan Hamilton.



What he wanted was exactly a 3-dimensional number system -- not particularly for graphing as such, but more generally to get something that would make it possible to use algebraic techniques in space geometry as easily as the complex plane had by then already made it for plane geometry.




In modern terms one might say that what Hamilton was really looking for was what we know as three-dimensional vectors, but in those days there was a general feeling that you needed to have a well-behaved multiplication rule for your system before you were "allowed to" use algebraic notation for your calculations with it. The modern concept of a vector space (where we're quite comfortable with having things that can be added, but don't have a multiplication that satisfies the same rules as multiplication of real numbers) did not exist yet.



Hamilton later wrote that he had tried for a long time to find a multiplication rule that would work for three-dimensional quantities, but without luck (later it was proved to be impossible). Then in a sudden flash of inspiration he realized that a four-dimensional system would be possible. There's a plaque at the exact spot in Dublin where he said this insight occurred.



It is therefore not so much a matter of "deriving" the quaternions, as of thinking about the problem for a long time (and by that time Hamilton no doubt had a keen intuitive experience with the ways a multiplication rule can fail to work) and then suddenly noticing that this particular rule happens to work. Once it works, it doesn't need to have a neat story of how you found it -- though Frobenius later proved that the quaternions are the only finite-dimensional associative division algebra over $\mathbb R$ other than the real and complex numbers, so sooner or later someone would have found it.


algebra precalculus - Given that $(a-1/a)=3$, determine the value of $(a+1/a)^2$ without solving for $x$



Please excuse my noobery but I'm very new to high school maths for reasons unimportant.



I'm using this site as my study guide to get through high school math and the questions (close to the bottom of the page) that I'm having trouble with go like this:





  1. Answer the following:




(a) Expand: $(a+\frac{1}{a})^2$.



(b) Given that $(a+\frac{1}{a})=3$, determine the value of $(a+\frac{1}{a})^2$ without solving for $x$.



(c) Given that $(a-\frac{1}{a})=3$, determine the value of $(a+\frac{1}{a})^2$ without solving for $x$.




I can do (a) and (b) but I get stuck at (c) even though the website gives the answers but their explanation doesn't make any sense and I think the problem is that the site doesn't explain how to do these sorts of problems, what these problems are about, where to go find out what all this is, etc. so I'm hoping someone can please explain to me how to do this step for step so that my puny mind can understand it too.




Edit: If you'd like me to post the answers according to the website, please let me know, I only exclude them because of how much space that would take.


Answer



Hint: just observe that



$$(a+1/a)^2=(a-1/a)^2+4$$
because



$$(a+1/a)^2=$$
$$a^2+1/a^2+2\cdot a\cdot(1/a)=$$
$$a^2+1/a^2+2$$
and
$$(a-1/a)^2=$$
$$a^2+1/a^2-2\cdot a\cdot (1/a) =$$
$$a^2+1/a^2-2=$$
$$a^2+1/a^2+2 -4$$
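A one-line numeric check (my own, choosing the positive root of $a-1/a=3$):

import math

a = (3 + math.sqrt(13)) / 2     # solves a - 1/a = 3, i.e. a^2 - 3a - 1 = 0
print(a - 1 / a)                # ~3.0
print((a + 1 / a) ** 2)         # ~13.0 = 3^2 + 4, matching the hint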


real analysis - Find the sum $\sum_{j=0}^{n}j$

Find the sum $\sum_{j=0}^{n}j$




for all $n=0,1,2,3,\dots$.



How do I find/evaluate this sum?
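(For a sanity check: the well-known closed form is $\frac{n(n+1)}{2}$; the snippet below, my own, verifies it for small $n$.)

for n in range(0, 20):
    assert sum(range(n + 1)) == n * (n + 1) // 2
print("closed form n(n+1)/2 confirmed for n = 0..19")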

Tuesday, November 27, 2018

calculus - Integral and convergence of the sinc function: $\lim_{t \to \infty} \int_{1/t}^t \frac{\sin(ax)}{x}\,dx$



I am asked to show that $$\lim_{t \to \infty} \int_\frac{1}{t}^t \frac{\sin(ax)}{x}dx $$ converges for all real numbers $a$ and that the value of the converged integral is the same for all $a>0$.



For the first part, I want to apply a previously proven result: if $f$ is continuous and $g'$ is locally integrable on $[a,b)$ with $\lim_{x \to b^- }g(x) = 0$, $F(t)=\int_a^t f$ is bounded on $[a,b)$, and $g'$ has a constant sign, then $\int_a^b fg$ converges. Can I use this theorem here? If we take $g(x)=\frac{1}{x}$ and $f(x)=\sin(ax)$ then clearly $F(t)$ is bounded, $g'$ has a constant sign, and $g(x)$ tends to $0$. But the condition of local integrability is throwing me off.



If I cannot use this theorem. How may I alternatively prove that the above integral converges? additionally, the proof that the value of the integral is the same for all positive $a$ is eluding me. Any help is greatly appreciated.


Answer



We assume $a>0$, $t>0$.




Hint. One may first perform an integration by parts,
$$
\int_{1/t}^t \frac{\sin(ax)}{x}dx =\left[\frac1{x}\cdot\frac{1- \cos(ax)}{a} \right]_{1/t}^t+\int_{1/t}^t \frac{1- \cos(ax)}{ax^2}dx
$$ giving
$$
\lim_{t \to \infty}\int_{1/t}^t \frac{\sin(ax)}{x}dx =\lim_{t \to \infty}\int_{1/t}^t \frac{1- \cos(ax)}{ax^2}dx\tag1
$$ the latter integrand may be extended as a continuous function over $[0,b]$ ($b\ge1$) satisfying
$$
\left| \frac{1- \cos(ax)}{ax^2}\right| \le \frac{2}{ax^2}, \qquad x\ge1,

$$ the latter dominating function being integrable over $[1,\infty)$, one deduces the convergence of the initial integral.



From $(1)$, using a standard result one has




$$
\lim_{t \to \infty}\int_{1/t}^t \frac{\sin(ax)}{x}dx =\int_0^\infty \frac{1- \cos(ax)}{ax^2}dx=\frac{\pi}{2} ,\quad a>0.
$$
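One can corroborate the $\pi/2$ value numerically (a sketch assuming scipy is available):

import numpy as np
from scipy.integrate import quad

a, t = 3.0, 200.0
val, err = quad(lambda x: np.sin(a * x) / x, 1 / t, t, limit=1000)
print(val, np.pi / 2)   # ~1.556 here; the gap closes as t grows toward infinity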



Monday, November 26, 2018

Prove Inequality with Cauchy Schwarz




Use the Cauchy-Schwarz Inequality to show that for any positive integer n, $\frac{2n}{n+1} \leq 1 + \frac{1}{2} + \cdots + \frac{1}{n}$



I'm having some trouble understanding how the Cauchy Schwarz Inequality can be applied to this. I've tried separating the $\frac{2n}{n+1}$ into two parts, but I'm getting nowhere with that.


Answer



Hint: Take as vectors in $\mathbb{R}^n$ $\displaystyle u=(1,\frac{1}{\sqrt{2}},\cdots,\frac{1}{\sqrt{n}})$, $\displaystyle v=(1,\sqrt{2},\cdots, \sqrt{n})$, and compute $\langle u,v\rangle$, $\|u\|$ and $\|v\|$. Recall that $\displaystyle 1+2+\cdots+n=\frac{n(n+1)}{2}$.
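To see the hint in action, a small numeric check that the resulting bound holds (my own script):

# Check 2n/(n+1) <= 1 + 1/2 + ... + 1/n for small n.
for n in range(1, 50):
    H = sum(1 / k for k in range(1, n + 1))
    assert 2 * n / (n + 1) <= H
print("inequality holds for n = 1..49")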


What's the difference between simple induction and strong induction?

I just started to learn induction in my first year course. I'm having a difficult time grasping the concept. I believe I understand the basics but could someone summarize simple induction and strong induction and explain what the differences are? The video I'm watching explains that if $P(k)$ is true then $P(k+1)$ is true for simple induction, and for strong induction if $P(i)$ is true for all $i$ less than equal to $k$ then $P(k+1)$ is true. I don't really know what that means.

limits - $f(x+y) = f(x)+f(y)$ continuous at $x_0=0 \implies f(x)$ is continuous over $\mathbb R$?





Let $x,y \in \mathbb R$



$f(x+y) = f(x)+f(y)$



Is it true that if $f$ is continuous at $x_0=0$, then $f$ is continuous on $\mathbb R$?



Answer



At any arbitrary $x_1\in\mathbb{R}$ and any $\Delta\neq 0$, we have
$$
f(x_1+\Delta)-f(x_1)=f(\Delta)=f(\Delta+x_0)-f(x_0).
$$
As $\Delta\to 0$, the rightmost expression above goes to $0$ due to continuity at $x_0$, so the leftmost expression also goes to $0$. This implies continuity at $x_1$ and therefore in $\mathbb{R}$. Note we don't need the fact that $x_0=0$.


Sunday, November 25, 2018

calculus - Deconstructing $0^0$











It is well known that $0^0$ is an indeterminate form. One way to see that is noticing that



$$\lim_{x\to0^+}\;0^x = 0\quad,$$



yet,




$$\lim_{x\to0}\;x^0 = 1\quad.$$



What if we make both terms go to $0$, that is, how much is



$$L = \lim_{x\to0^+}\;x^x\quad?$$



By taking $x = 1/k$ with $k\in\mathbb{N}^*$, I concluded that it equals $\lim_{x\to\infty}\;x^{-1/x}$, but that's not helpful.


Answer



This is, unfortunately, not very exciting. Rewrite $x^x$ as $e^{x\log x}$ and take that limit. One l'Hôpital later, you get 1.
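Numerically the convergence is easy to see (a quick check of my own):

for x in [0.1, 1e-3, 1e-6, 1e-9]:
    print(x, x ** x)   # 0.794..., 0.993..., 0.99998..., approaching 1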


Find the limit as $x\rightarrow 0$ of $f(x)=x^2\cdot\left({\sin{\frac 1 x}}\right)^2$



I have following function:



$$f(x)=x^2\cdot\left({\sin{\frac 1 x}}\right)^2$$



I want to find the limit of the function for $x\rightarrow0^\pm$. First I analyze $\frac 1 x$:




  • $\frac {1}{x}\rightarrow +\infty$ for $x\rightarrow0^+$




but the $\sin$ of infinity does not exist. Then I use the comparison theorem (I am not sure what it is called in English; the squeeze theorem) and conclude that, because



$$\left|{x^2\left({\sin{\frac 1 x}}\right)}^2 \right| \le \frac{1}{x^2}\rightarrow0^+$$



therefore the initial function tends to $0$. Is this reasoning correct? Are there better ways?


Answer



If you meant $\left\lvert x^2\sin^2\left(\frac1x\right)\right\rvert\leqslant x^2$, then yes, it is correct. It follows from this that $\lim_{x\to 0}\left\lvert x^2\sin^2\left(\frac1x\right)\right\rvert=0$ and that therefore $\lim_{x\to 0}x^2\sin^2\left(\frac1x\right)=0$.


elementary set theory - Order Preserving Bijection Also Preserves Number of Predecessors



Let $(A_1, <_1)$ and $(A_2,<_2)$ be ordered sets for which there exists an order preserving bijection $f:A_1 \rightarrow A_2$. Let $x \in A_1$ and define $Pd(x) = \{y \in A_1 ~|~ y <_1 x\}$ (i.e., the set containing all the predecessors of $x$), and take $Pd(f(x)) = \{y \in A_2 ~|~ y <_2 f(x) \}$ (sorry if this is terrible notation; perhaps someone could improve it, if they are so inclined).




My question is, will $|Pd(x)| = |Pd(f(x))|$? That is, when $x$ is mapped to $f(x)$, will it have the same number of predecessors? I am inclined to think so, but I can't immediately see why. I need this to be true in order to obtain a contradiction in something else I am trying to prove. What I am trying to show is that there is no order preserving bijection between $\{1,2\} \times \mathbb{N}$ and $\mathbb{N} \times \{1,2\}$, when both are given the dictionary ordering.



EDIT:



I do know that if $a \in Pd(x)$, then $f(a) \in Pd(f(x))$, due to the order preserving property. But am I allowed to conclude that they have the same cardinality? If so, that would seem like a cheesy proof.



EDIT:



Wait! I am being a knucklehead! I just need to show there exists a bijection between the two sets. Give me a few moments to see if I can construct one!




EDIT:



Okay, I defined $g_x : Pd(x) \rightarrow Pd(f(x))$ to be $g_x(a) = f(a)$, and was able to show injectivity easily enough, but I think I need help with the surjectivity part.


Answer



HINT: Suppose that $y<_2f(x)$; $f$ is a bijection, so there is some $a\in A_1$ such that $g_x(a)=f(a)=y$. Is it possible that $a=x$ or that $x<_1a$, given the assumptions on $f$?


Saturday, November 24, 2018

Prove sequence $a_n=n^{1/n}$ is convergent




How to prove that the sequence $a_n=n^{1/n}$ is convergent using definition of convergence?


Answer



Noticing that $n^\frac{1}{n} > 1$ for all $n$, it all comes down to showing that for any $\epsilon > 0$, there is an $n$ such that $(1+\epsilon) \geq n^\frac{1}{n}$, or by rearranging, that



$$
(1+\epsilon)^n \geq n
$$




Now, let's first of all choose an $m$ such that $(1+\epsilon)^{m}$ is some number bigger than $2$, say the smallest power greater than $3$ that you can get. From here, swap $m$ for $2m$. This will make the left side a little over $3$ times larger, and the right side $2$ times larger. The next doubling will still double the right side, but the left side will increase roughly $9$-fold. Repeating, we can easily see that the left side will at some point overtake the right side, and we have our $n$.
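A quick numeric look at the sequence (my own check) supports this:

for n in [10, 100, 10_000, 1_000_000]:
    print(n, n ** (1 / n))   # 1.258..., 1.047..., 1.00092..., 1.0000138...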


reference request - Picard's Little Theorem Proofs



Picard's little theorem says that





If there exist two complex numbers $a,b$ such that $f: \Bbb{C} \to \Bbb{C}\setminus \{a,b\}$ is holomorphic then $f$ is constant.




I am interested in proofs for this theorem. Until now I found at least two, one in W. Rudin's Real and Complex Analysis and another one in S. Krantz, Geometric Function Theory. Both of them need some preparation before someone not very advanced in Complex Analysis could understand them, especially the one in S. Krantz's book.



My questions are





  1. How many proofs are there for Picard's little theorem? (references if possible)



  2. Is there a "simple" proof for Picard's little theorem? Simple means that it could be presented to an audience which had a one semester course in complex analysis.





Thank you.


Answer



There is an essentially elementary proof that can be presented to an audience having only little background in complex analysis. Apart from miraculous trickery and some simple estimates, the only ingredients are Cauchy's integral formula and the existence of holomorphic logarithms on simply connected domains.



A much more complete exposition can be found in Remmert, Classical Topics in Complex Function Theory, Springer GTM 172, chapter 10. Let me emphasize: the following is only a distillate of the parts from Remmert's chapter 10 needed for a proof of Picard's little theorem. Said chapter contains a lot more: extensive historical remarks and references, variants of the proofs and further developments, improvements of the results, some nice applications of Picard's theorem and it culminates in a proof of Picard's great theorem.




I once presented this argument to a group of talented students in a two-hour "Christmas special lecture" and I think it worked quite well, but admittedly it is ambitious and the argument is flabbergasting at various points.



The main ingredient in the proof is the amazing:




Theorem (Bloch). If $f$ is holomorphic in a neighborhood of the closed unit disk $\overline{\mathbb D}$ and $f'(0) = 1$ then $f(\mathbb{D})$ contains a disk of radius $\frac{3}{2} - \sqrt 2 \gt 0$.




Remmert prefaces the section containing this result by a statement of J.E. Littlewood:





One of the queerest things in mathematics, ... the proof itself is crazy.




I'll give a proof at the end of this answer.



The way this is applied is:




Exercise. If $f: \mathbb{C} \to \mathbb{C}$ is holomorphic and non-constant then $f(\mathbb{C})$ contains disks of arbitrary radius.




Hint: If $f'(0) \neq 0$ then $g(z) = \frac{f(rz)}{r |f'(0)|}$ satisfies the hypothesis of Bloch's theorem.







There's a second tricky ingredient, due to Landau and refined by König:




Let $G \subset \mathbb{C}$ be a simply connected domain and let $f: G \to \mathbb{C}$ be holomorphic. If $f(G)$ does not contain $0$ and $1$ then there is a holomorphic $g: G \to \mathbb{C}$ such that $$f = \frac{1}{2}\big(1+ \cos{(\pi\cos{(\pi g)})}\big).$$

Moreover, if $g$ is any such function then $g(G)$ does not contain a disk of radius one.




Simple connectedness is used in guise of existence of roots and logarithms of holomorphic functions omitting the value $0$. Let us show first that for a function $h$ on a simply connected domain $G$ such that $\pm 1 \notin h(G)$ there is a holomorphic $H:G \to \mathbb{C}$ such that $h = \cos{H}$: The trick is that $1-h^2$ has no zero, hence there exists $k$ such that $k^2 = 1-h^2$, so $1 = h^2 + k^2 = (h+ik)(h-ik)$. But this means that $h+ik$ doesn't have a zero either, hence it has a logarithm: $h+ik = e^{iH}$ and thus $h = \frac{1}{2}(e^{iH}+e^{-iH})$. Applying this to $h = 2f-1$ (which leaves out the values $\pm 1$ by hypothesis) we get an $F$ such that $h=\cos{(\pi F)}$, but $F$ must leave out all integer values in its range, hence $F = \cos{(\pi g)}$ and unwinding the construction gives us the desired $f=\frac{1}{2}\big(1+ \cos{(\pi\cos{(\pi g)})}\big)$.



The “moreover” part follows from the observation that $g(G)$ must not hit the set $$A = \left\{m \pm \frac{i}{\pi} \log{\big(n+\sqrt{n^2 - 1}\big)},\;m\in\mathbb{Z},\;n \in \mathbb{N}\smallsetminus\{0\}\right\}$$
since for $a \in A$ we have $\cos{(\pi a)} = (-1)^m \cdot n$ by a short calculation. Thus $\cos{(\pi\cos{(\pi a)})} = \pm 1$ and if there were $z \in G$ such that $g(z) \in A$ we would have $f(z) \in \{0,1\}$ contradicting the assumptions. It is not hard to convince oneself that every point $w \in \mathbb{C}$ is within distance $\lt 1$ of some point of $A$ (a picture would help!), hence $g(G)$ can't contain a disk of radius $1$.







Armed with these two ingredients the proof of Picard's little theorem is immediate:




Picard's little theorem. If there exist two complex numbers $a,b$ such that $f: \mathbb{C} \to \mathbb{C}\smallsetminus \{a,b\}$ is holomorphic then $f$ is constant.




Proof. We may assume $\{a,b\} = \{0,1\}$. By the Landau–König theorem we have $f(z) = \frac{1}{2}\big(1+ \cos{(\pi\cos{(\pi g)})}\big)$ for some $g$ whose image does not contain a disk of radius $1$ and by the exercise to Bloch's theorem $g$ must be constant.







Now for the proof of Bloch's theorem:




Lemma. Let $f$ be holomorphic in a neighborhood of the closure of the disk $D = B_r(a)$ and assume that $|f'(z)| \lt 2|f'(a)|$ for $z \in D$. Put $\rho = (3-2\sqrt{2})\cdot r \cdot |f'(a)|$ then $B_{\rho}(f(a)) \subset f(D)$.




Proof. Assume $a = f(a) = 0$ for simplicity of notation and write $C = \sup\limits_{z \in D}{\,|f'(z)|}$.



Put $A(z) = f(z) - f'(0)\cdot z$. Then $A(z) = \int_{0}^{z} (f'(w) - f'(0))\,dw$, so
$$|A(z)| \leq \int_{0}^{1} |f'(zt) - f'(0)|\,|z|\,dt.$$

For $d \in D$ we have by Cauchy's integral formula
$$f'(d) - f'(0) = \frac{d}{2\pi i} \int_{|w|= r} \frac{f'(w)}{w(w-d)} \,dw,$$
hence
$$|f'(d) - f'(0)| \leq \frac{|d|}{r - |d|} C$$
and thus
$$|A(z)| \leq \int_{0}^{1} \left(\frac{|zt|}{r - |zt|}C\right)|z|\,dt \leq \frac{1}{2} \frac{|z|^2}{r - |z|} C.$$
Let $x = |z| \in (0,r)$ and observe that $|f(z) - f'(0)z| \geq |f'(0)| x - |f(z)|$. The last inequality together with the hypothesis $C \leq 2 |f'(0)|$ gives
$$|f(z)| \geq \underbrace{\left(x - \frac{x^2}{r- x}\right)}_{h(x)} |f'(0)|.$$
Now $h(x)$ assumes its maximum $(3 - 2\sqrt{2})r$ at the point $\tilde{x} = (1-\frac{\sqrt{2}}{2})r$. Thus we have shown that for $|z| = \tilde{x}$ we have
$$|f(z)| \geq (3 - 2\sqrt{2})\cdot r \cdot |f'(0)| = \rho.$$

But this implies that $B_{\rho}(f(0)) \supset f(B_{\tilde{x}}(0))$. Why? This is because $B_{\tilde{x}}(0)$ is a domain whose boundary is mapped outside the ball $B_{\rho}(f(0))$ by $f$, as $f(0) = 0$.



Proof of Bloch's theorem. Assume that $f$ is holomorphic in a neighborhood of the closed unit disk and assume that $f'(0) = 1$. Consider the function $z \mapsto |f'(z)|(1-|z|)$. It takes on its maximum at some point $p \in \mathbb{D}$. Putting $t = \frac{1}{2}(1-|p|)$ we have $B_{t}(p) \subset \mathbb{D}$ and $1-|z| \geq t$ for all $z \in B_{t}(p)$. Therefore $|f'(z)|(1-|z|) \leq 2t|f'(p)|$ and hence $|f'(z)| \leq 2|f'(p)|$ for all $z \in B_t(p)$. Hence the lemma gives us $B_{\rho}(f(p)) \subset f(\mathbb{D})$ for $\rho = (3-2\sqrt{2})\, t\, |f'(p)| \geq \frac{3}{2} - \sqrt{2}$; the last inequality holds because maximality at $p$ gives $2t|f'(p)| = (1-|p|)\,|f'(p)| \geq |f'(0)| = 1$, i.e. $t|f'(p)| \geq \frac{1}{2}$.


Friday, November 23, 2018

induction - Proving that two summations are equivalent: $\sum_{i=1}^n i^3 = \left(\sum_{i=1}^n i\right)^2$




Give a constructive proof to show that for all $n \geq 1$ ,



$\sum\limits_{i=1}^n i^3 = (\sum\limits_{i=1}^n i)^2$



Observe that $(n+1)^4 - n^4 = 4n^3 + 6n^2 + 4n + 1$ .







Now, the two following equalities are obvious:



$\sum\limits_{i=1}^n i^3 = 1^3 + 2^3 + 3^3 + ... + n^3$



$(\sum\limits_{i=1}^n i)^2 = (1 + 2 + 3 + ... + n)^2$



And they appear to be equal, judging by the first few test cases:



$\sum\limits_{i=1}^n i^3 = A(n)$





  • $A(1) = 1^3 = 1$

  • $A(2) = 1^3 + 2^3 = 1 + 8 = 9$

  • $A(3) = 1^3 + 2^3 + 3^3 = 9 + 27 = 36$



$(\sum\limits_{i=1}^n i)^2 = B(n)$





  • $B(1) = (1)^2 = 1$

  • $B(2) = (1 + 2)^2 =9 $

  • $B(3) = (1 + 2 + 3)^2 = 36$



Now, I am thinking of finding the closed-forms for both functions in the hopes that they are indeed the same. Then I would prove those closed forms to work by induction. But:




  1. I don't know if that would be a sound way to do it.

  2. I don't know if this would even qualify as constructive, as the question requests.




As you may tell, I am no math major. I am a Computer Science major, though. This is a computing fundamentals class. I took discrete 1.5 years ago, so my knowledge is about as fresh as a litter box. I've been in quite a rut for a few hours over this.


Answer



Your goal is to prove the statement $S(n)$ for all $n\geq 1$ where
$$
S(n) : 1^3 + 2^3 +3^3 +\cdots + n^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Using $\Sigma$-notation, we may rewrite $S(n)$ as follows:
$$

S(n) : \sum_{r=1}^n r^3 = \left[\frac{n(n+1)}{2}\right]^2.
$$
Base step: The statement $S(1)$ says that $(1)^3 = (1)^2$ which is true because $1=1$.



Inductive step [$S(k)\to S(k+1)$]: Fix some $k\geq 1$, where $k\in\mathbb{N}$. Assume that
$$
S(k) : \sum_{r=1}^k r^3 = \left[\frac{k(k+1)}{2}\right]^2
$$
holds. To be proved is that
$$

S(k+1) : \sum_{r=1}^{k+1} r^3 = \left[\frac{(k+1)((k+1)+1)}{2}\right]^2
$$
follows. Beginning with the left side of $S(k+1)$,
\begin{align}
\sum_{r=1}^{k+1}r^3 &= \sum_{r=1}^k r^3 + (k+1)^3\tag{evaluate sum for $r=k+1$}\\[1em]
&= \left[\frac{k(k+1)}{2}\right]^2+(k+1)^3\tag{by $S(k)$}\\[1em]
&= \frac{(k+1)^2}{4}[k^2+4(k+1)]\tag{factor out $\frac{(k+1)^2}{4}$}\\[1em]
&= \frac{(k+1)^2}{4}[(k+2)(k+2)]\tag{factor quadratic}\\[1em]
&= \frac{(k+1)^2(k+2)^2}{4}\tag{multiply and rearrange}\\[1em]
&= \left[\frac{(k+1)(k+2)}{2}\right]^2\tag{rearrange}\\[1em]

&= \left[\frac{(k+1)((k+1)+1)}{2}\right]^2,\tag{rearrange}
\end{align}
one arrives at the right side of $S(k+1)$, thereby showing that $S(k+1)$ is also true, completing the inductive step.



By mathematical induction, it is proved that for all $n\geq 1$, where $n\in\mathbb{N}$, that the statement $S(n)$ is true.



Note: The step where $\dfrac{(k+1)^2}{4}$ is factored out is an important one. If we do not factor this out and, instead, choose to expand $(k+1)^3$, the problem becomes much more messy than it needs to be.
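A brute-force confirmation of the identity for small $n$ (not a substitute for the induction, just a sanity check):

for n in range(1, 30):
    assert sum(i ** 3 for i in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
print("identity verified for n = 1..29")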


calculus - Gaussian Definite Integral for Gaussian random variables


We have the following:



For $s>0$, $$\int_s^{\infty} e^{-x^2/2}\,\mathrm{d}x \le \frac 1 s e^{-s^2/ 2}, \quad \int_s^{\infty} e^{-x^2/2}\,\mathrm{d}x \le \sqrt {\frac \pi 2} e^{-s^2/ 2}.$$
Then prove that for a random variable $X \sim N(0,1)$ and $s>0$, $$ \Pr(X>s) \le \frac 1 {\sqrt {2\pi}}\min\left({\frac 1 s, \sqrt {\frac \pi 2}}\right)e^{-s^2/2}.$$




So far I have the following: $\Pr(X>s) = \frac 1 {\sqrt{2\pi}}\int_s^{\infty} e^{-x^2/2}\,\mathrm{d}x \le \frac 1 {\sqrt{2\pi}}\min\left(\frac 1 s, \sqrt{\frac \pi 2}\right) e^{-s^2/2}$, which does not need any further proof once I get the two inequalities right?



I got the first part where it is $\le 1/s$, but for the second part I tried converting to polar coordinates. I did $$\int_s^{\infty} e^{-x^2/2}\,\mathrm{d}x \int_s^{\infty} e^{-y^2/2}\,\mathrm{d}y \le \int_0^{\pi/2} \int_s^{\infty} e^{-r^2/2}\,r\mathrm{d}rd\theta \to \int_s^{\infty} e^{-x^2/2}\,\mathrm{d}x \le \sqrt {\frac \pi 2 e^{-s^2/ 2}}$$? Am I doing something wrong?
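For reference, a numeric comparison of the tail against the bound as stated above (a sketch assuming scipy is available):

import math
from scipy.stats import norm

for s in [0.5, 1.0, 2.0, 3.0]:
    tail = norm.sf(s)   # P(X > s) for X ~ N(0, 1)
    bound = min(1 / s, math.sqrt(math.pi / 2)) * math.exp(-s * s / 2) / math.sqrt(2 * math.pi)
    print(s, tail, bound)   # tail <= bound in each row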

trigonometry - Proving identity $\cos^2 a= \sinh^2 b$ if $\sin(a+ib)=\cos b + i\sin b$




This is my first question on StackExchange. I have some trouble with proving this identity with the following condition:




If: $\sin(a+ib)= \cos b + i \sin b$



Prove: $\cos^2 (a) = \sinh^2 (b)$




I tried using different methods, but the furthest I've come to is this:




$\sinh^2 (b) = \sin^2 (b)/\cos^2 (a)$



I expanded $\sin(a+ib)$ and rearranged the equation to isolate $\sinh(b)$. I then squared it (to get $\sinh^2 (b)$) and got the above statement.



I would like to know whether I'm on the right track, or if there are any mistakes. Thanks a lot.


Answer



\begin{align}
\sin(a+ib)
&=\sin a\cos ib+\cos a\sin ib\\
&=\sin a\cosh b+i\cos a\sinh b\\

&= \cos b + i \sin b
\end{align}
so taking real and imaginary parts of both sides gives us
$$\sin a\cosh b= \cos b$$
$$\cos a\sinh b= \sin b$$
then squaring two equations and adding concludes
$$\sin^2 a\cosh^2 b+\cos^2 a\sinh^2 b=1$$
$$(1-\cos^2 a)(1+\sinh^2 b)+\cos^2 a\sinh^2 b=1$$
which gives $\color{blue}{\cos^2 a = \sinh^2 b}$.
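One can also confirm the identity on a concrete solution of the hypothesis. The construction below is entirely my own, for testing only: setting $s=\cos a=\sinh b$ reduces the hypothesis to $\sin(\operatorname{asinh}(s)) = s^2$, which is solved by bisection.

import cmath
import math

def h(s):
    return math.sin(math.asinh(s)) - s * s

lo, hi = 0.5, 1.0          # h(0.5) > 0 > h(1.0), so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if h(mid) > 0:
        lo = mid
    else:
        hi = mid
s = (lo + hi) / 2
a, b = math.acos(s), math.asinh(s)

print(cmath.sin(a + 1j * b), complex(math.cos(b), math.sin(b)))  # these match
print(math.cos(a) ** 2, math.sinh(b) ** 2)                       # and so do these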


We all use mathematical induction to prove results, but is there a proof of mathematical induction itself?

I just realized something interesting. At schools and universities you get taught mathematical induction. Usually you jump right into using it to prove something like



$$1+2+3+\cdots+n = \frac{n(n+1)}{2}$$



However.



I just realized that at no point is mathematical induction proved itself? What's the mathematical induction's proof? Is mathematical induction proved using mathematical induction itself? (That would be mind blowing.)

trigonometry - If $0 \lt x \lt \frac{\pi}{2}$, is $\cos(x) \le \frac{\sin(x)}{x} \le \frac{1}{\cos(x)}$?

How can I prove the following inequality using elementary trigonometry?




Suppose $0 \lt x \lt \frac{\pi}{2}$
is an angle measured in radians. Use the trigonometric circle and
show that $\cos(x) \le \frac{\sin(x)}{x} \le \frac{1}{\cos(x)}$.





I have been trying to solve this question. I can't figure out whether or not the solution requires a trigonometric circle or if it can be done using another method.

Thursday, November 22, 2018

elementary number theory - Prove That $3^n + 8^n$ is Not Divisible by $5$ (Using Induction)



Prove that $3^n+8^n$ is not divisible by 5.



I know that this can be proved by using congruence and I am providing the proof by congruence below. But is there any way to Prove It By Induction.




The proof by congruence goes like this:



$3\equiv 3\pmod 5 \\ 3^2 \equiv 4\pmod 5 \\ 3^3\equiv 2\pmod 5 \\ 3^4\equiv 1\pmod 5 \\ 3^5\equiv 3\pmod 5$



Also,



$8\equiv 3\pmod 5 \\ 8^2 \equiv 4\pmod 5 \\ 8^3\equiv 2\pmod 5 \\ 8^4\equiv 1\pmod 5 \\ 8^5\equiv 3\pmod 5$



Adding the congruences up (since the same cycle repeats after the 4th power), none of the resulting sums is divisible by $5$, i.e., none is congruent to $0 \pmod 5$.




But I need a proof by Induction.



Any help will be appreciated.


Answer



=====Answer 3:======



It's important that one realizes that whenever they use an argument that a pattern repeats or an observation will recur indefinitely, they are fundamentally relying upon and using the principle of induction. To wit:



They are showing something is true for a few base cases; they are showing (hopefully; sometimes this step is weak) that if it is true for some cases it will follow for the next cases; and they make it clear that this will repeat and be true for an infinite or indefinite number of iterations.




So your argument is an argument of induction.



You've shown, for the base cases $n = 1,2,3,4,5$, that $3^n + 8^n$ is not divisible by $5$.



You state that the cycle repeats. (You actually need to give a reason why the cycle repeats; that is, why if $3^n + 8^n\equiv K \pmod 5$ then $3^{n+4} + 8^{n+4}$ is also $\equiv K \pmod 5$. You just noticed that $3^{5} \equiv 3^{1}$ and $8^{5} \equiv 8^{1}$ and assumed that means it is true for all $n$ and $n + 4$. You have to justify this.)



And therefore you concluded it is true for all $n$.



It is the principle of induction that allows you to conclude this.




So if you can give a reason why $3^{n+4}\equiv 3^{n}$ and $8^{n+4}\equiv 8^n$ you would be done.



(Hint: $3^{n+4} = 3^{n-1}3^5\equiv 3^{n-1}3^1 \equiv 3^n\pmod 5$. That is, after all, the reason you assumed the cycle repeated, isn't it?)



===== Answer 2: =======



You DID a proof by induction!



Notice the key phrase in you proof and the phrase that assures that you are done is:





since the same cycle repeats after the 4th power




This means that if it is true for $3^n + 8^n$ it will be true for $3^{n+4} + 8^{n+4}$ and so by induction:



As you showed a base case that it is true for $n = 1,2,3,4$ (as well as $n=5$), and an induction case that if it is true for some $n$ then it is true for $n+4$, we can conclude it is true for all $n = 1+4k, 2+4k, 3+4k, 4+4k$. Which means it is true for all $n$.



That IS a proof by induction.




....



But another proof by induction follows.



===== Answer 1: ======



Well, follow the rules of a proof by induction.



Base case: $n=1$




$3^1 + 8^1 =11 $ which is not divisible by $5$.



Base case done:



Inductive case:



Assume that $3^n + 8^n$ is not divisible $5$.



Now we need to prove that therefore $3^{n+1} + 8^{n+1}$ is not divisible by $5$.




Now my advice, when you need to prove something about $P(n+1)$, is to put it in terms of $P(n)$ and use what you know about $P(n)$.



$3^{n+1} + 8^{n+1} = 3\cdot 3^n + 8\cdot 8^n = 3\cdot 3^n + 3\cdot 8^n + 5\cdot 8^n= 3\cdot(3^n + 8^n) + 5\cdot 8^n$ and....



$5$ is prime. $5\not \mid 3$ and $5\not \mid (3^n + 8^n)$ and $5\mid 5\cdot 8^n$, so $5 \not \mid 3\cdot(3^n+8^n) + 5\cdot 8^n$.



Induction step done.



The principle of induction declares we are done. Base case: $3^n + 8^n$ is not divisible by $5$ for $n = 1$. Induction case: if $3^n+8^n$ is not divisible by $5$ for a value of $n$, then it will not be divisible by $5$ for the next value of $n$. Therefore, as we can get to all values of $n$ by starting at $1$ and then going to the next, and the next after that, and so on, it must be true that $3^n +8^n$ is not divisible by $5$ for any natural $n$.
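A brute-force check of the claim itself, for whatever reassurance it adds:

for n in range(1, 25):
    assert (3 ** n + 8 ** n) % 5 != 0
print("3^n + 8^n is not divisible by 5 for n = 1..24")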




By the way...


algebra precalculus - Solve a simple equation: $x+x\sqrt{2x+2}=3$





$x+x\sqrt{(2x+2)}=3$




I must solve this, but I always get to a point where I don't know what to do. The answer is 1.



Here is what I did:



$$\begin{align}
3&=x(1+\sqrt{2(x+1)}) \\
\frac{3}{x}&=1+\sqrt{2(x+1)} \\

\frac{3}{x}-1&=\sqrt{2(x+1)} \\
\frac{(3-x)^{2}}{x^{2}}&=2(x+1) \\
\frac{9-6x+x^{2}}{2x^{2}}&=x+1 \\
\frac{9-6x+x^{2}-2x^{2}}{2x^{2}}&=x \\
\frac{9-6x+x^{2}-2x^{2}-2x^{3}}{2x^{2}}&=0
\end{align}$$



Then I got:
$-2x^{3}-x^{2}-6x+9=0$


Answer




As in the comment, $x-1$ is a factor of $f(x) = -2x^3 -x^2-6x+9$. After an easy long division we get $f(x) = (x-1)(-2x^2-3x-9)$. From this we can use the quadratic formula to see that the other two roots are not real.



We can do the long division in the following way:



We need to divide $-2x^3-x^2-6x+9$ by $x-1$ so we can ask what times $x-1$ gives the first term $-2x^3$? This is clearly $\boxed{-2x^2}$. But this also gives $+2x^2$ that we didn't want. So we need to take this away from what is left: we now have $-x^2-6x+9 - (+2x^2) = -3x^2-6x +9$. Again we look for a term that when multiplied with $x-1$ gives the highest order term ($-3x^2$). This is $\boxed{-3x}$ but this gives an extra $+3x$. Now we are left with $-6x+9 - (+3x) = -9x+9$. Here we see that this is $\boxed{-9}$ times $(x-1)$. Adding together the terms that we used along the way, we have $-2x^2-3x-9$.
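The same division can be checked mechanically (a sketch assuming numpy is available):

import numpy as np

# Divide -2x^3 - x^2 - 6x + 9 by x - 1; coefficients are highest degree first.
quotient, remainder = np.polydiv([-2, -1, -6, 9], [1, -1])
print(quotient)    # [-2. -3. -9.]  i.e.  -2x^2 - 3x - 9
print(remainder)   # [ 0.]          i.e.  x - 1 divides exactly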


Can a finite set with a non prime number of elements be a field?



I understand that as typically defined (using modular arithmetic) finite fields require a prime number of elements.



But I recall hearing someone say that if you modify the way addition and multiplication is defined on a set with a non-prime number of elements, say 4 elements, then it could still be a field.



Is this true? How would this set look and how would you define the addition and multiplication?


Answer




For any prime $p$ and integer $k\geq 1$, there is, up to isomorphism, exactly one field of order $p^k$.



In the case of $2^2$ elements, one usually denotes the elements as $0,1,x,x+1$ (or something similar), with addition done modulo $2$. The multiplication table looks like this:
$$
\begin{array}{|c|cccc|}\hline
&0&1&x&x+1\\\hline
0&0&0&0&0\\
1&0&1&x&x+1\\
x&0&x&x+1&1\\
x+1&0&x+1&1&x\\\hline
\end{array}
$$

In general, you can find a multiplication table the following way: Start with $\Bbb Z_p$, the integers modulo $p$ (also known as the field with $p$ elements), and an irreducible polynomial $f$ of degree $k$ with coefficients in $\Bbb Z_p$. Then take the polynomial ring $\Bbb Z_p[x]$, and divide out by the ideal generated by $f$. Any element of our $p^k$-element field will correspond to a polynomial of degree less than $k$, with addition as normal. Multiplication is defined by reducing modulo $f$.




In our example, we have $\Bbb Z_2$, $k=2$ and $f(x)=x^2+x+1$. The elements are as given above, and addition is done as for regular polynomials with coefficients in $\Bbb Z_2$. As for multiplication, let's look at $x(x+1)$ as an example. With regular polynomials we have $x(x+1)=x^2+x$. Then reducing modulo $f$ basically means either




  • Subtract multiples of $f$ until the degree is lower than $k=2$.

  • $f(x)=0$ means $x^2=x+1$. Substitute this, repeatedly if necessary, until the degree is lower than $k=2$.



In either case, $x^2+x$ is reduced to $1$.
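To make the recipe concrete, here is a tiny implementation of the multiplication in the $4$-element field (my own sketch), representing $a_0+a_1x$ as a pair over $\Bbb Z_2$ and reducing with $x^2 = x+1$:

def mul(p, q):
    # p = (a0, a1) represents a0 + a1*x with coefficients in {0, 1}
    a0, a1 = p
    b0, b1 = q
    c0 = a0 * b0
    c1 = a0 * b1 + a1 * b0
    c2 = a1 * b1                            # coefficient of x^2, to be reduced
    return ((c0 + c2) % 2, (c1 + c2) % 2)   # substitute x^2 -> x + 1, mod 2

x, x1 = (0, 1), (1, 1)
print(mul(x, x))      # (1, 1) = x + 1
print(mul(x, x1))     # (1, 0) = 1
print(mul(x1, x1))    # (0, 1) = x, matching the table above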


Wednesday, November 21, 2018

limits - How to find $\lim_{n\to \infty} \frac{(n!)^{1/n}}{n}$?


How to find $\lim_{n\to \infty} \frac{(n!)^{1/n}}{n}$? I tried taking logarithms to bring the expression into sum form and then tried l'Hospital's rule, but it's not working. Please help!


This is what Wolfram Alpha is showing, but it's not providing the steps!


BTW if someone can tell me a method without using integration, I'd love to know!


Answer



Note


\begin{align}\frac{(n!)^{1/n}}{n} &= \left[\left(1 - \frac{0}{n}\right)\left(1 - \frac{1}{n}\right)\left(1 - \frac{2}{n}\right)\cdots \left(1 - \frac{n-1}{n}\right)\right]^{1/n}\\ &= \exp\left\{\frac{1}{n}\sum_{k = 0}^{n-1} \log\left(1 - \frac{k}{n}\right)\right\} \end{align}


and the last expression converges to


$$\exp\left\{\int_0^1\log(1 - x)\, dx\right\} = \exp(-1) = \frac{1}{e}.$$


Alternative: If you want to avoid integration, consider the fact that if $\{a_n\}$ is a sequence of positive real numbers such that $\lim\limits_{n\to \infty} \frac{a_{n+1}}{a_n} = L$, then $\lim\limits_{n\to \infty} a_n^{1/n} = L$.


Now $\frac{(n!)^{1/n}}{n} = a_n^{1/n}$, where $a_n = \frac{n!}{n^n}$. So



$$\frac{a_{n+1}}{a_n} = \frac{(n+1)!}{(n+1)^{n+1}}\cdot \frac{n^n}{n!} = \frac{n+1}{n+1}\cdot\frac{n^n}{(n+1)^n} = \left(\frac{n}{n+1}\right)^n = \left(\frac{1}{1 + \frac{1}{n}}\right)^n = \frac{1}{\left(1 + \frac{1}{n}\right)^n}.$$


Since $\lim\limits_{n\to \infty} (1 + \frac{1}{n})^n = e$, then $$\lim_{n\to \infty} \frac{a_{n+1}}{a_n} = \frac{1}{e}.$$


Therefore $$\lim_{n\to \infty} \frac{(n!)^{1/n}}{n} = \frac{1}{e}.$$
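A numeric illustration via the log-gamma function, to avoid overflowing $n!$ (my own check):

import math

for n in [10, 100, 1000, 100000]:
    print(n, math.exp(math.lgamma(n + 1) / n) / n)   # tends to 1/e ~ 0.36788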


calculus - How to evaluate integral $\int_0^{\infty} e^{-x^2} \frac{\sin(a x)}{\sin(b x)}\, dx$?

I came across the following integral:



$$\int_0^{\infty} e^{-x^2} \frac{\sin(a x)}{\sin(b x)} dx$$



while trying to calculate the inverse Laplace transform




$$ L_p^{-1} \left[ \frac{\sinh(\alpha\sqrt{p})}{\sinh(\beta\sqrt{p})}
\frac{e^{-\gamma\sqrt{p}}}{\sqrt{p}} \right], |\alpha|<\beta, \gamma>0$$



using the Bromwich integral approach. The contour I used is the following:



enter image description here



the above mentioned integral arises while doing integration over the segments $L_1^+,L_2^+,\cdots$ and $L_1^-,L_2^-,\cdots$.



I have searched for this integral in Prudnikov et al., Integrals and Series, v. 1, but found nothing. I have also tried to evaluate the integral using the residue theorem, but could not quite decide which contour to use.




Any help is greatly appreciated!



P.S. The ILT can be calculated by noticing that
$$ F[p] = \frac{\sinh (\sqrt{p} \alpha)}{\sinh (\sqrt{p} \beta)}
\frac{e^{-\gamma\sqrt{p}}}{\sqrt{p}}
= \sum_{n=0}^{\infty}
\left(
\frac{e^{-(-\alpha+\beta+\gamma+2n\beta)\sqrt{p}}}{\sqrt{p}}
-\frac{e^{-(\alpha+\beta+\gamma+2n\beta)\sqrt{p}}}{\sqrt{p}}

\right)$$
using
$$L_p^{-1} \left[ \frac{e^{-\alpha\sqrt{p}}}{\sqrt{p}} \right] = \frac{1}{\sqrt{\pi t}} e^{-\frac{\alpha^2}{4t}}$$
we get
$$\begin{align*}
f(t)
&= L_p^{-1}[F(p)] \\
&= \sum_{n=0}^{\infty}
\left(
\frac{ e^{-(-\alpha+\beta+\gamma+2n\beta)^2/4t} }{\sqrt{\pi t}}

- \frac{ e^{-(\alpha+\beta+\gamma+2n\beta)^2/4t} }{\sqrt{\pi t}}
\right).
\end{align*}$$
Here I am more interested in calculating the above ILT using the Bromwich integral approach.

summation - Having trouble understanding why $\sum_{i=1}^n i^2= \frac{n(n+1)(2n+1)}{6}$

So I understand $\sum\limits_{i=1}^{n}i^2 = \frac{n(n+1)(2n+1)}{6}$, but I'm not sure how to come to that conclusion. I'm having trouble understanding where the formula comes from.
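A brute-force check of the formula for small $n$ may at least convince you it is right before you study a proof:

for n in range(1, 40):
    assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
print("formula holds for n = 1..39")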

calculus - Is there a "simple" way of proving Stirling's formula?

Is there any way to derive Stirling's formula that only requires some undergraduate knowledge of calculus, real analysis, and perhaps some identities involving the gamma function, maybe the Wallis product, and things along those lines? If not, and I know this is a rather vague question, what is the simplest but still sufficiently rigorous way of deriving it? I had a look at Stirling's formula: proof? but the comments seem quite messy. Wikipedia was not particularly helpful either, since I have not learned about Laplace's method, Bernoulli numbers or the trapezoidal rule.

Tuesday, November 20, 2018

number theory - Conditions for which $(\sum p_i)(\sum \frac{1}{p_i})$ over an arbitrary $i$ for a set of primes $\{p_i\}$ is unique?

I am looking for conditions (if any are needed beyond properties of primes) for which $(\sum p_i)(\sum \frac{1}{p_i})$ over an arbitrary $i$ for a set of primes $\{p_i\}$ is unique in that there is no other set of primes $p_j$ for which $$(\sum p_i)(\sum \frac{1}{p_i}) = (\sum p_j)(\sum \frac{1}{p_j})$$



It is a bit difficult to describe so I will give an example:




I would like to know if there is a proof (if it is even true) that given any primes $p_1$ and $p_2$ that there are no other set of primes $\{p_3$,$p_4,..,p_k\}$ such that
$$(p_1+p_2)(\frac{1}{p_1}+\frac{1}{p_2}) = (p_3+p_4+..+p_k)(\frac{1}{p_3}+\frac{1}{p_4}+..+\frac{1}{p_k})$$



And in general (again I don't know if this is true), that



$$(p_1+p_2+..+p_n)(\frac{1}{p_1}+\frac{1}{p_2}+..+\frac{1}{p_n}) \neq (p_3+p_4+..+p_s)(\frac{1}{p_3}+\frac{1}{p_4}+..+\frac{1}{p_s})$$



So basically whether or not I can find a set of two sets of primes of any length which satisfy the equality.




I have been trying to figure out a clever way to reduce this, and would guess that there is some trick using the uniqueness of primes, but I don't see it.



Thanks,



Brian
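For what it's worth, a brute-force search over small prime sets (my own sketch; exact arithmetic via fractions avoids rounding issues). Any value produced by two different sets would be a counterexample:

from itertools import combinations
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17, 19]
seen = {}
for r in (2, 3, 4):
    for s in combinations(primes, r):
        val = sum(s) * sum(Fraction(1, p) for p in s)
        seen.setdefault(val, []).append(s)

# Print any value hit by more than one set of primes.
print({v: ss for v, ss in seen.items() if len(ss) > 1})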

Repunit prime numbers

Consider all numbers that are written with only ones in base $10$, that is, numbers of the form
$$
p_n=\sum_{i=1}^{n} 10^{i-1}=\frac{10^n-1}{9}=\underbrace{1.....1}_\text{$n$ $1$s}.
$$

Here, $n$ is the number of $1$s in that number. For example, $p_2=11$ and $p_5=11111$.




For which values of $n$ is $p_n$ a prime number? I feel there should be an infinite number of values, but is this true? For example, after a brief computation, I've concluded that, for $1\leq n\leq 10^4 $, $p_n$ is prime if and only if
$$
n\in\{2,19,23,317,1031\},
$$

which are also prime numbers.



In some way, it seems that such primes stop here, but it might simply be the case that the next prime is way bigger than $p_{1031}$. If there are indeed infinitely many primes in such form, is there an efficient way of testing whether $p_n$ is a prime, given $n$?
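The computation above is easy to reproduce for small $n$ (a sketch assuming sympy; at this size `isprime` runs a strong probable-prime test):

from sympy import isprime

repunit_prime_indices = [n for n in range(2, 400) if isprime((10**n - 1) // 9)]
print(repunit_prime_indices)   # [2, 19, 23, 317]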

Monday, November 19, 2018

elementary number theory - Prove that all even integers $n \neq 2^k$ are expressible as a sum of consecutive positive integers



How do I prove that any even integer $n \neq 2^k$ is expressible as a sum of consecutive positive integers (more than $2$ consecutive positive integers)?
For example:



14 = 2 + 3 + 4 + 5
84 = 9 + 10 + ... + 15

n = sum (k + k+1 + k+2 + ...)
n ≠ 2^k


Answer



The sum of the integers from $1$ to $n$ is $n(n+1)/2$. The sum of the integers from $k$ to $k+n$ is then
$$\begin{align*}
k+(k+1)+\cdots+(k+n) &= (n+1)k + 1+\cdots+n\\
& = (n+1)k + \frac{n(n+1)}{2} \\
&= \frac{(n+1)(2k+n)}{2}.\end{align*}$$



Therefore, $a$ can be expressed as the sum of consecutive integers if and only if $2a$ can be factored as $(n+1)(2k+n)$.




Suppose that $a$ is a power of $2$. Then $2a$ is a power of $2$, so $(n+1)(2k+n)$ must be a power of $2$. If we want to avoid negatives, and also avoid the trivial expression as a sum with one summand, we must have $n\geq 1$ and $k\gt 0$. But the parities of $n+1$ and of $2k+n$ are opposite, so this product cannot be a power of $2$ unless either $n+1=1$ (which requies $n=0$) or $2k+n=1$ (which requires $k=0$). Thus, no power of $2$ can be expressed as a sum of at least two consecutive positive integers. In particular, $8$, $16$, $32$, etc cannot be so expressed.



On the other hand, suppose that $a$ is even but not a power of $2$. If we can write $2a = pq$ with $p\gt 1$ and odd, $q$ even, and $q\geq p+1$, then setting $n=p-1$ and $k=(q-p+1)/2$ gives the desired decomposition. If this cannot be done, then every time we factor $2a$ as $pq$ with $p\gt 1$ odd, we have $q\lt p+1$. Then we can set $n=q-1$ and $k = (p+1-q)/2$.



Thus, the powers of $2$ are the only even numbers that are not expressible as the sum of at least two consecutive positive integers.



Added. The OP has now excluded powers of $2$, but has also required that the sum contains strictly more than two summands; i.e., $k\gt 0$ and $n\gt 1$. With the above decompositions, the only case in which we could have $n=1$ is if $2a=pq$ with $p$ odd, $p\gt 1$, and $q=2$. But this is impossible, since $a$ is assumed to be even, and this leads to $2a = 2p$ with $p$ odd.
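The proof translates directly into a small search procedure (my own sketch) that finds a decomposition for every even non-power-of-two and fails exactly on the powers of two:

def decompose(a):
    # Find k >= 1, n >= 1 with a = k + (k+1) + ... + (k+n),
    # using 2a = (n+1)(2k+n) from the answer above.
    for n in range(1, 2 * a):
        num = 2 * a - n * (n + 1)
        if num <= 0:
            break
        if num % (2 * (n + 1)) == 0:
            return list(range(num // (2 * (n + 1)), num // (2 * (n + 1)) + n + 1))
    return None

for a in [14, 84, 20, 8, 16]:
    print(a, decompose(a))   # the powers of 2 come back None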


calculus - Limits: How to evaluate $\lim\limits_{x\rightarrow \infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x$



This is being asked in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions, and here: List of abstract duplicates.






What methods can be used to evaluate the limit $$\lim_{x\rightarrow\infty} \sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x.$$




In other words, if I am given a polynomial $P(x)=x^n + a_{n-1}x^{n-1} +\cdots +a_1 x+ a_0$, how would I find $$\lim_{x\rightarrow\infty} P(x)^{1/n}-x.$$



For example, how would I evaluate limits such as $$\lim_{x\rightarrow\infty} \sqrt{x^2 +x+1}-x$$ or $$\lim_{x\rightarrow\infty} \sqrt[5]{x^5 +x^3 +99x+101}-x.$$


Answer



Your limit can be rewritten as
$$\lim_{x\rightarrow\infty}\left(\frac{\sqrt[n]{1+\frac{a_{n-1}}{x}+\cdots+\frac{a_{0}}{x^{n}}}-1}{1 \over x}\right)$$
Or equivalently,
$$\lim_{y\rightarrow 0}\left(\frac{\sqrt[n]{1+{a_{n-1}}{y}+\cdots+{a_{0}}{y^{n}}}-1}{y}\right)$$
This, by the definition of derivative, is the derivative of the function $f(y) = {\sqrt[n]{1+{a_{n-1}}{y}+\cdots+{a_{0}}{y^{n}}}}$ at $y = 0$, which evaluates via the chain rule to ${a_{n-1} \over n}$.
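Numerically, both example limits behave as the formula $\frac{a_{n-1}}{n}$ predicts (a quick check of my own):

f1 = lambda x: (x**2 + x + 1) ** 0.5 - x                 # expect 1/2
f2 = lambda x: (x**5 + x**3 + 99 * x + 101) ** 0.2 - x   # expect 0/5 = 0
for x in [10.0, 1e3, 1e6]:
    print(x, f1(x), f2(x))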


calculus - How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?


How can one prove the statement $$\lim_{x\to 0}\frac{\sin x}x=1$$ without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution.


This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.


Answer



[Figure: unit circle with the triangles and arc comparing $\sin x$, $x$, and $\tan x$.]


The area of $\triangle ABC$ is $\frac{1}{2}\sin(x)$. The area of the colored wedge is $\frac{1}{2}x$, and the area of $\triangle ABD$ is $\frac{1}{2}\tan(x)$. By inclusion, we get $$ \frac{1}{2}\tan(x)\ge\frac{1}{2}x\ge\frac{1}{2}\sin(x)\tag{1} $$ Dividing $(1)$ by $\frac{1}{2}\sin(x)$ and taking reciprocals, we get $$ \cos(x)\le\frac{\sin(x)}{x}\le1\tag{2} $$ Since $\frac{\sin(x)}{x}$ and $\cos(x)$ are even functions, $(2)$ is valid for any non-zero $x$ between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. Furthermore, since $\cos(x)$ is continuous near $0$ and $\cos(0) = 1$, we get that $$ \lim_{x\to0}\frac{\sin(x)}{x}=1\tag{3} $$ Also, dividing $(2)$ by $\cos(x)$, we get that $$ 1\le\frac{\tan(x)}{x}\le\sec(x)\tag{4} $$ Since $\sec(x)$ is continuous near $0$ and $\sec(0) = 1$, we get that $$ \lim_{x\to0}\frac{\tan(x)}{x}=1\tag{5} $$
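A numeric illustration of the squeeze in $(2)$ (my own check):

import math

for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, math.cos(x), math.sin(x) / x, 1 / math.cos(x))  # middle column squeezed toward 1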


Sunday, November 18, 2018

real analysis - Looking for differentiable function $f:\mathbb R \to \mathbb R$ whose derivative is nowhere continuous

Does there exist a differentiable function $f: \mathbb R \to \mathbb R$ such that its derivative $f'$ is nowhere continuous ?

discrete mathematics - Strong Induction: parity of sum of odd numbers



This is the question:




Use strong mathematical induction to prove that for any integer $n \ge 2$, if $n$ is even, then any sum of $n$ odd integers is even, and if $n$ is odd, then any sum of $n$ odd integers is odd.



I know that $P(n)$ is the sentence:



“If $n$ is even, then any sum of $n$ odd integers is even, and if $n$ is odd, then any sum of $n$ odd integers is odd.”



If anyone could guide me a bit or provide some sort of formula, it would be much appreciated!


Answer



The first step is to get this into mathematical form. I would write it like this:

Let $(a_i)_{i=1}^n$ be odd integers. Then, for any positive integer $m$, $\sum_{i=1}^{2m} a_i$ is even and $\sum_{i=1}^{2m-1} a_i$ is odd.

Proof.

Note: All variables are integers.

The basic facts needed are that (1) every even number $a$ can be written in the form $a = 2b$; (2) every odd number $a$ can be written in the form $a = 2b+1$; (3) all numbers are either even or odd; (4) the sum of two even numbers is even; (5) the sum of an odd and an even number is odd; (6) the sum of two odd numbers is even.

Note: Facts (1) and (2) are definitions. A good exercise is to prove facts (3) through (6).

For $m=1$, the claim states that $a_1$ is odd and $a_1+a_2$ is even. The first is true by assumption. The second is true because the sum of two odd integers is even.

For the induction step, suppose the claim is true for $m$. The statement for $m+1$ is that $\sum_{i=1}^{2(m+1)} a_i$ is even and $\sum_{i=1}^{2(m+1)-1} a_i$ is odd.

For the first,

$$\begin{aligned}
\sum_{i=1}^{2(m+1)} a_i
&=\sum_{i=1}^{2m+2} a_i\\
&=\sum_{i=1}^{2m} a_i+a_{2m+1}+a_{2m+2}\\
&=\left(\sum_{i=1}^{2m} a_i\right)+(a_{2m+1}+a_{2m+2})
\end{aligned}$$

and this is even (using fact 4) because $\sum_{i=1}^{2m} a_i$ is even by the induction hypothesis and $a_{2m+1}+a_{2m+2}$ is even by fact 6.

For the second,

$$\begin{aligned}
\sum_{i=1}^{2(m+1)-1} a_i
&=\sum_{i=1}^{2m+1} a_i\\
&=\sum_{i=1}^{2m-1} a_i+a_{2m}+a_{2m+1}\\
&=\left(\sum_{i=1}^{2m-1} a_i\right)+(a_{2m}+a_{2m+1})
\end{aligned}$$

and this is odd (using fact 5) because $\sum_{i=1}^{2m-1} a_i$ is odd by the induction hypothesis and $a_{2m}+a_{2m+1}$ is even by fact 6.

You could also group the sum as $\sum_{i=1}^{2m} a_i+a_{2m+1}$; in this grouping, the first sum is even and $a_{2m+1}$ is odd, so their sum is odd.

algebra precalculus - Proof by induction: $\sum\limits_{i=0}^n i \cdot i! = (n+1)!-1$





Let $n$ be a natural number, $n\ge 0$. Then $\displaystyle\sum_{i=0}^n i \cdot i!= (n+1)!-1$.




Here is my attempt. I'm not going to write the base case because I understand that part.



Assuming $\displaystyle\sum_{i=0}^n i \cdot i!= (n+1)!-1$ is true, we wish to show that

$\displaystyle \sum_{i=0}^{n+1} i \cdot i!= (n+2)!-1$. Thus:




$\displaystyle\sum_{i=0}^{n+1} i \cdot i!= \left(\sum_{i=0}^n i \cdot i!\right) + \cdots$ This is where I get stuck.


Answer



Your last step should read $$\sum_{i=0}^{n+1} \left( i \cdot i! \right)= \sum_{i=0}^n \left(i \cdot i!\right) + (n+1)(n+1)! $$ Now use your induction assumption to get $$\sum_{i=0}^{n+1} \left(i \cdot i! \right)= (n+1)! - 1 + (n+1)(n+1)! = (n+1)!\,(n+2) -1 = (n+2)! - 1$$
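The identity is easy to sanity-check numerically (my own Python sketch, not from the original answer):

    from math import factorial

    # Check sum_{i=0}^{n} i*i! == (n+1)! - 1 for small n.
    for n in range(10):
        assert sum(i * factorial(i) for i in range(n + 1)) == factorial(n + 1) - 1
    print("identity verified for n = 0, ..., 9")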


real analysis - Representing an integral as a finite sum


Question: If $a$ is in an arbitrary commensurable ratio to $\pi$, that is $a=\tfrac {m\pi}n$, then if $m+n$ is odd$$\int\limits_0^1\mathrm dy\,\frac {y^x\sin a}{1+2y\cos a+y^2}=\frac 12\sum\limits_{i=1}^{n-1}(-1)^{i-1}\sin ia\left[\frac {\mathrm d\Gamma\left(\frac {x+n+i}{2n}\right)}{\mathrm dx}-\frac {\mathrm d\Gamma\left(\frac {x+i}{2n}\right)}{\mathrm dx}\right]$$and when $m+n$ is even$$\int\limits_0^1\mathrm dy\,\frac {y^x\sin a}{1+2y\cos a+y^2}=\frac 12\sum\limits_{i=1}^{(n-1)/2}(-1)^{i-1}\sin ia\left[\frac {\mathrm d\Gamma\left(\frac {x+n+i}n\right)}{\mathrm dx}-\frac {\mathrm d\Gamma\left(\frac {x+i}n\right)}{\mathrm dx}\right]$$




I'm just having difficulty finding where to start. Since the integral equals an infinite sum, it might be wise to start off with a Taylor expansion of some sort. However, I'm not sure which function to expand.



If you guys have any ideas, I would be happy to hear them. Thanks!

probability theory - Finite expectation of a random variable

Let $X \geq 0$ be a random variable defined on $(\Omega,\mathcal{F},P)$. Show that $\mathbb{E}[X]<\infty \iff \sum_{n=1}^\infty P(X>n) < \infty$.


I got the reverse direction, but I am struggling with the "$\implies$" direction. So far, I have the following worked out:


$\mathbb{E}[X]<\infty$


$\implies \int_0^\infty (1-F(x)) dx < \infty$ (where $F$ is the distribution function of the random variable X)


$\implies \int_0^\infty (1-P(X\leq x)) dx < \infty$


$\implies \int_0^\infty P(X>x) dx < \infty$



Consider $\int_0^\infty P(X>x) dx$


$= \sum_{n=1}^\infty \int_{n-1}^n P(X>x)\, dx$


This is the point I am stuck at. Any help will be deeply appreciated!
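One way to finish from here (a sketch): since $x\mapsto P(X>x)$ is non-increasing, $P(X>x)\ge P(X>n)$ for all $x\in[n-1,n]$, hence
$$\sum_{n=1}^\infty P(X>n)\le\sum_{n=1}^\infty\int_{n-1}^n P(X>x)\,dx=\int_0^\infty P(X>x)\,dx<\infty.$$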

Evaluation of a series (possibly related to Binomial Theorem)

I have the following series:




$$1 + \frac{2}{3}\cdot\frac{1}{2} + \frac{2\cdot5}{3\cdot6}\cdot\frac{1}{2^2} + \frac{2\cdot5\cdot8}{3\cdot6\cdot9}\cdot\frac{1}{2^3} + \ldots$$




I have to find the value of this series, and I have four options:
(A) $2^{1/3}$ (B) $2^{2/3}$ (C) $3^{1/2}$ (D) $3^{3/2}$



I can't seem to find a general term for this. I tried:




$$S = 1 + \frac{(1 - \frac{1}{3})}{1!}(\frac{1}{2}) + \frac{(1 - \frac{1}{3})(2 - \frac{1}{3})}{2!}(\frac{1}{2})^2 + \frac{(1 - \frac{1}{3})(2 - \frac{1}{3})(3 - \frac{1}{3})}{3!}(\frac{1}{2})^3 + \ldots$$



But this doesn't seem to get me anywhere.



Any help?






This may be a telescopic series, because a similar question we solved in class ended up being telescopic:





$$ \frac{3}{2^3} + \frac{4}{2^4\cdot3} + \frac{5}{2^6\cdot3} + \frac{6}{2^7\cdot5} + \ldots$$



$=\displaystyle\sum\limits_{r=1}^\infty\frac{r+2}{2^{r+1}r(r+1)}$



$=\displaystyle\sum \bigg(\frac{1}{2^r r} - \frac{1}{2^{r+1}(r+1)}\bigg) = \frac{1}{2}$




P.S.: This problem was included in my set of questions for the Binomial Theorem, which is why I thought it might be related to it.
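For what it's worth, here is a sketch that follows exactly this rewriting: the general term matches the binomial series
$$(1-x)^{-a}=\sum_{n=0}^\infty\frac{a(a+1)\cdots(a+n-1)}{n!}\,x^n$$
with $a=\frac{2}{3}$ and $x=\frac{1}{2}$, because $a(a+1)\cdots(a+n-1)=\left(1-\tfrac13\right)\left(2-\tfrac13\right)\cdots\left(n-\tfrac13\right)$ when $a=\tfrac23$. Hence
$$S=\left(1-\tfrac{1}{2}\right)^{-2/3}=2^{2/3},$$
which is option (B).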

probability - Moments and non-negative random variables?



I want to prove that for non-negative random variables with distribution $F$:
$$E(X^{n}) = \int_0^\infty n x^{n-1} P(\{X\geq x\})\, dx$$




Is the following proof correct?



$$R.H.S = \int_0^\infty n x^{n-1} P(\{X\geq x\})\, dx = \int_0^\infty n x^{n-1} (1-F(x))\, dx$$



using integration by parts:
$$R.H.S = [x^{n}(1-F(x))]_0^\infty + \int_0^\infty x^{n} f(x) dx = 0 + \int_0^\infty x^{n} f(x) dx = E(X^{n})$$



If not correct, then how to prove it?



Answer



Here's another way. (As the others point out, the statement is true if $E[X^n]$ actually exists.)



Let $Y = X^n$. $Y$ is non-negative if $X$ is.



We know
$$E[Y] = \int_0^{\infty} P(Y \geq t) dt,$$
so
$$E[X^n] = \int_0^{\infty} P(X^n \geq t) dt.$$
Then, perform the change of variables $t = x^n$. This immediately yields

$$E[X^n] = \int_0^{\infty} n x^{n-1} P(X^n \geq x^n) dx = \int_0^{\infty} n x^{n-1} P(X \geq x) dx.$$
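A numeric check of the formula (my own Python sketch, using $X\sim\mathrm{Exp}(1)$, for which $P(X>x)=e^{-x}$ and $E[X^n]=n!$):

    import math

    # Midpoint-rule approximation of the tail integral for X ~ Exp(1).
    def tail_moment(n, upper=50.0, steps=200000):
        h = upper / steps
        return h * sum(n * ((k + 0.5) * h) ** (n - 1) * math.exp(-(k + 0.5) * h)
                       for k in range(steps))

    for n in (1, 2, 3):
        print(n, round(tail_moment(n), 4), math.factorial(n))  # should match n!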


Saturday, November 17, 2018

induction - Prove $4^{n} -1$ is divisible by $3$ for all $ngeq1$




I'm doing some homework and can't seem to get my head around this inductive proof.




So far I have got:



Atom case:
$4^1 -1 = 3$, so the base case is proved.



Closure:



Assume $4^k - 1$ is divisible by $3$, Show that $4^{k+1} -1$ is divisible by $3$.




So: $4^{k+1}-1 = 4^k\cdot4 -1$, but this is where I get stuck, unsure where to go from here.



Any help much appreciated!



Thanks


Answer



Hint $ $ To induct, multiply the first two congruences below (by the Congruence Product Rule)



$$\begin{align} \bmod 3\!:\quad\ \ 4&\equiv 1\\[.3em]
4^{\large n}&\equiv 1\ \ \ {\rm i.e.}\ \ P(n)\\
\overset{\rm multiply}\Longrightarrow\ 4^{\large n+1} &\equiv 1\ \ \ {\rm i.e.}\ \ P(n\!+\!1)
\end{align}$$



Remark $\ $ The proof is a special case of the inductive proof of the Congruence Power Rule, which is the straightforward induction extension of the Product Rule.



If congruences are unfamiliar then you can instead use the Product Rule in Divisibility form:



$$\begin{align} {\rm mod}\,\ m\!:\, A\equiv a,\, B\equiv b&\ \ \,\Longrightarrow\,\ \ AB\equiv ab\qquad\text{Congruence Product Rule}\\[3pt]
m\mid A-a,\ B-b&\,\Rightarrow\, m\mid AB-ab\qquad\text{Divisibility Product Rule}\\[4pt]
{\bf Proof}\quad (A-a)B+a(B&-b)\, = AB-ab\end{align}$$




Then the original proof becomes $\,\ 3\,\mid\, \underbrace{4-1}_{\large A\,-\,a},\, \underbrace{4^n-1}_{\large B\,-\,b}\,\Rightarrow\, 3\,\mid\, \underbrace{4\cdot 4^{\large n}-1\cdot 1}_{\large AB\,-\,ab} = 4^{\large n+1}-1$
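An empirical check, for good measure (my own one-liner, not a proof):

    # 3 divides 4^n - 1 for every tested n.
    assert all((4 ** n - 1) % 3 == 0 for n in range(1, 100))
    print("3 | 4^n - 1 holds for n = 1, ..., 99")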


Solve the equation $ z^2 + \left\vert z \right\vert = 0 $, where $z$ is a complex number.

I've tried solving this, but I'm stuck at one point.



Here's what I did:



Let $ z = x + yi $, where $x, y \in \mathbf R$




Then, $ (x + yi)^2 + \sqrt{x^2 + y^2} = 0 $



Or, $x^2 + {(yi)}^2 + 2xyi + \sqrt{x^2 + y^2} = 0 $



Or, $ x^2 - y^2 + 2xyi + \sqrt{x^2 + y^2} = 0 + 0i$



Thus, $ x^2 - y^2 + \sqrt{x^2 + y^2} = 0\qquad\qquad\qquad\qquad\qquad\qquad\ (i)$
and $2xy = 0 \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad(ii)$



If $2xy = 0$, then either $x = 0 $ or $y = 0$




Now, if I take $ x = 0$ and substitute in $(i)$, I get $y = 0$ or $y = \pm 1$.



So far, so good, but if I take $y = 0$ and substitute in $(i)$:



We have $x^2 + \sqrt{x^2} = 0$



so $x^2 = -\sqrt{x^2} $



or $x^2 = -x$




or $\frac{x^2}{x} = -1 $



or $x = -1$



However, this solution doesn't satisfy the equation $x^2 + \sqrt{x^2} = 0$ or the original equation.



What am I doing wrong here?
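(A hint at the gap, offered as a sketch: $\sqrt{x^2}=|x|$, not $x$. With $y=0$, equation $(i)$ reads $x^2+|x|=0$, a sum of two non-negative terms, so $x=0$ is the only possibility. Writing $x^2=-x$ silently assumes $x\ge 0$, and dividing by $x$ assumes $x\ne 0$; together these two hidden assumptions manufacture the spurious root $x=-1$.)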

probability - Coupon Collector's Problem -- Expected Value of each item



So I guess my problem is based on the famous coupon collector's problem, which is, in case you are not familiar with it, the following:



Given $n$ different coupons from which coupons are being drawn independently, with equal probability and with replacement: how many coupons do you expect to need to draw before having drawn each coupon at least once? (Wikipedia has a well-written article about it.)




My problem is the following: I need to know the expected number of copies of each coupon once we have collected at least one copy of every coupon.



More formally: we label the $n$ coupons with the numbers $1,\dots,n$. Let $X_i$ be the random variable that counts how often coupon $i$ has been drawn by that time. What is $\mathbb{E}[X_i]$?


Answer



The expected value of the number of purchases until you have obtained the complete collection is $E_n=\sum\limits_{i=1}^{n}\frac{n}{n-i+1}=n\sum\limits_{k=1}^{n}\frac1k$. For reasons of symmetry $E(X_i)=\frac{E_n}{n}=\sum\limits_{k=1}^{n}\frac1k$ for all $i$.
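A Monte Carlo sanity check (my own Python sketch; the parameter values are arbitrary):

    import random

    # For n coupons, the expected count of any fixed coupon at the moment
    # the collection is completed should equal H_n = 1 + 1/2 + ... + 1/n.
    n, trials = 5, 20000
    total = 0
    for _ in range(trials):
        counts = [0] * n
        while min(counts) == 0:          # draw until every coupon appears
            counts[random.randrange(n)] += 1
        total += counts[0]               # copies of coupon 1 at completion
    H_n = sum(1 / k for k in range(1, n + 1))
    print(total / trials, H_n)           # both should be near 2.2833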


complex analysis - Find $\sum_{k=1}^{\infty}\frac{1}{z_k^2}$



Let $z_1, z_2,\dots, z_k,\dots$ be all the roots of $e^z=z$.



Let $C_N$ be the square in the plane centered at the origin with sides parallel to the axes, each of length $2\pi N$.




Assume that $\lim_{N\to \infty}\int_{C_N}\frac{e^z-1}{z^2(e^z-z)}\,dz=0$.



Find $\sum_{k=1}^{\infty}\frac{1}{z_k^2}$






My solution attempt is too trivial, so I didn't write it here. Please solve the question more explicitly. Thank you.


Answer



Let us call $f(z) = e^z - z$. Then the integral can be written




$$\int_{C_N} \frac{f'(z)}{z^2 f(z)}\,dz.$$



If $g$ is holomorphic on $\mathbb{C}\setminus D$, where $D$ is a closed discrete set in $\mathbb{C}$, then



$$\int_{C_N} g(z)\,dz$$



is $2\pi i$ times the sum of the residues of $g$ in the points of $D$ that are enclosed by $C_N$ (provided none of the points of $D$ lies on the contour $C_N$).



Here, the integrand $\dfrac{f'(z)}{z^2 f(z)}$ has singularities at the $z_k$ and at $0$. The residue at $z_k$ is $\dfrac{1}{z_k^2}$ since $f$ has only simple zeros, so




$$\int_{C_N} \frac{f'(z)}{z^2 f(z)}\,dz = 2\pi i \left(\operatorname{Res} \left(\frac{f'(z)}{z^2f(z)}; 0\right) + \sum_{z_k \in S_N} \frac{1}{z_k^2}\right),$$ where $S_N$ denotes the set of roots $z_k$ enclosed by $C_N$.



Since the integral tends to $0$ for $N\to \infty$, we obtain



$$\sum_{k=1}^\infty \frac{1}{z_k^2} = -\operatorname{Res} \left(\frac{f'(z)}{z^2f(z)}; 0\right).$$



It remains to find that residue. Since $f(0) = 1$, we need the residue of $\dfrac{e^z-1}{z^2}$ at $0$, which is easily seen to be $1$.
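Spelling out that last computation (a sketch): from $e^z=1+z+\frac{z^2}{2}+\frac{z^3}{6}+\cdots$,
$$\frac{e^z-1}{z^2}=\frac{1}{z}+\frac{1}{2}+\frac{z}{6}+\cdots,$$
so the residue at $0$ is $1$, and therefore
$$\sum_{k=1}^\infty\frac{1}{z_k^2}=-1.$$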


Friday, November 16, 2018

measure theory - Cauchy's Functional Equation

Consider Cauchy's Functional Equation




$$\phi(t+s)=\phi(t)+\phi(s).$$



Can we say that any right continuous with left limits (cadlag) solution is Borel measurable? Obviously continuous solutions are Borel measurable; however, discontinuous solutions are not. I've also read that measurability can be reduced to the cadlag case, but I don't see how.



Any clarification is greatly appreciated.



In addition, we can observe that solutions to Cauchy's functional equation are deterministic analogues of Lévy processes; however, in the definition of Lévy processes, Cauchy's functional equation is satisfied only up to equality in distribution. Does this enable solutions to become Borel or Lebesgue measurable?
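(A sketch of the reduction asked about, offered as an editorial note: a cadlag function is automatically Borel measurable, since right continuity gives
$$\phi(t)=\lim_{n\to\infty}\phi\!\left(\frac{\lceil nt\rceil}{n}\right),$$
and each map $t\mapsto\phi(\lceil nt\rceil/n)$ is a step function taking countably many values, hence Borel; a pointwise limit of Borel functions is Borel. Combined with the classical fact that measurable solutions of Cauchy's equation are linear, every cadlag solution has the form $\phi(t)=ct$.)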

calculus - When is an elliptic integral expressible in terms of elementary functions?



After seeing this recent question asking how to calculate the following integral



$$ \int \frac{1 + x^2}{(1 - x^2) \sqrt{1 + x^4}} \, dx $$



and some of the comments that suggested that it was an elliptic integral, I tried reading a little bit on the Wikipedia article about elliptic integrals.
It seems that the point is that most elliptic integrals cannot be expressed in terms of elementary functions. The Wikipedia article defines an elliptic integral as an integral of the form




$$\int R \left( x, \sqrt{ P(x) } \right ) \, dx$$



where $R(x, y)$ is a rational function and $P(x)$ is a polynomial of degree $3$ or $4$ with no repeated roots.



Now, the article does mention in its introductory section that two exceptions in which the elliptic integrals can be expressed in terms of elementary functions are when the polynomial $P(x)$ has repeated roots or when the rational function $R(x, y)$ does not contain odd powers of $y$.



In the example in question we have $P(x) = 1 + x^4$ and



$$R(x, y) = \frac{1 + x^2}{(1 - x^2)y}$$




so certainly it does not correspond to the two exceptions mentioned before. Thus I have a couple of questions about this:




1) What are the conditions for an elliptic integral (as defined in the Wikipedia article) to be expressible in terms of elementary functions? More specifically, are the two above cited conditions the only exceptions or are there any others which may explain why the above integral is expressible in terms of elementary functions?



2) Depending on the answer to my first question, why is it that the above "elliptic integral" can be expressed in terms of elementary functions?




Note: I'm not sure, but I suppose that some conditions must be put on the rational function $R(x, y)$ so as to avoid trivial cases; I don't want to speculate.




Thank you very much in advance.


Answer



A consideration of Aryabhata's answer to the linked question shows that there is a map from the elliptic curve $y^2 = P(x)$ to the conic $v^2 = u^2 + 2$ given by
$$(x,y) \mapsto \left(x - \dfrac{1}{x}, \dfrac{y}{x}\right),$$
and the differential
$$\dfrac{1+x^2}{(1-x^2)\sqrt{1 + x^4}} \,\mathrm dx$$
on the elliptic curve is the pull-back of the differential
$$\dfrac{1}{u v}\,\mathrm du$$
on the conic.




Since a conic has genus zero (i.e. it can be parameterized by a single variable,
using a classical "$t$-substitution"), the integral of a differential on a conic can always be expressed via elementary functions. Thus the same is true
of the integral of the original differential on the elliptic curve.



The answer to the general question is the same: if the differential in question can be pulled back from a map to a rational curve (i.e. a genus zero curve),
then the "elliptic integral" in question can be in fact integrated via elementary functions.



For example, any elliptic curve $y^2 = P(x)$ has a map to the $x$-line given
by $(x,y) \mapsto x$. So if the integral only involves rational functions of $x$ (which will be the case when $y$ appears to even powers, since we can always

substitute $P(x)$ for $y^2$) then it can be computed in elementary terms. Also,
if $P(x)$ has repeated roots, then the curve $y^2 = P(x)$ itself is actually rational (it can be parameterized by a variation of the classical $t$-substitution for conics), and so any "elliptic integral" is actually elementarily integrable.



P.S. I have used some geometric terminology here (pull-back, differential, elliptic curve, rational curve) because the modern point of view on this material is via algebraic geometry. If some of this is unfamiliar, leave a comment and I (or someone else) can elaborate.



Added: If we have a curve $C$ (which could be our elliptic curve $y^2 = P(x)$,
or our rational curve $v^2 = u^2 + 2$, or just the $x$-line, or ...) and if $\omega$ is a differential on $C$, then finding the indefinite integral of $\omega$ means finding some function $f$ on $C$ such that $\omega = df$.



Now if $\varphi: C' \to C$ is a map of curves, then $\varphi^* \omega
= \varphi^* d f = d (f\circ \varphi).$ So $f\circ \varphi$ is an indefinite

integral of the pulled back differential $\varphi^*\omega$.



In particular, if $f$ is an elementary function of the coordinates on $C$,
and $\varphi$ is given by expressions which are elementary functions of the
coordinates, then the composite $f\circ \varphi$ will again be given by
elementary functions of the coordinates.



This is what is happening in your example.
Explicitly, on our curve $v^2 = u^2 + 2,$ we had for example the differential
$$\dfrac{1}{u v} \,\mathrm du = \frac{1}{2 u^2 v}\,\mathrm d (u^2 + 2) = \frac{1}{2(v^2-2)v}\,\mathrm d(v^2) = \dfrac{1}{v^2-2}\,\mathrm dv = \dfrac{1}{2\sqrt{2}}\mathrm d\log\bigl( \frac{v-\sqrt{2}}{v+\sqrt{2}}\bigr).$$

Now pulling back this differential via our map $\varphi:(x,y)\mapsto \left(x-\dfrac{1}{x}, \dfrac{y}{x}\right)$ we obtain
$$\dfrac{1 + x^2}{(1-x^2)y}\,\mathrm dx = \dfrac{1}{2\sqrt{2}}\mathrm d\log\bigl(\frac{y-\sqrt{2}x}{y+\sqrt{2}x} \bigr).$$



As this example shows, pulling back is just the theoretical terminology
for making a substitution, just as a map of curves is just theoretical terminology for a change of variables.



If memory serves, Miles Reid's wonderful book Undergraduate algebraic geometry discusses some of this, and in particular gives some of the history of how the analytic theory of elliptic integrals turned into the
algebro-geometric theory of elliptic curves. (If you don't know this book, don't be fooled by the title --- it is a great introduction to the subject for someone at any level, not just undergraduates!) A much more detailed history can be found in Dieudonne's book on the history of algebraic geometry, but that book is probably not very readable unless you already have some feeling for algebraic geometry as a subject.


Finite fields of complex numbers




[Image: the problem statement and its posted solution, not reproduced here]



I don't understand why $(1-i)$ is in the set of elements of $R_2$ in the solution.



My understanding is that $\mathbb{Z}/2\mathbb{Z} = \{0,1\}$, endowed with addition and multiplication mod 2. So wouldn't the elements of $R_2$ be $\{0,1,i,1+i\}$? And shouldn't the multiplication table of $R_2$ be something like this?



[Image: the asker's proposed multiplication table for $R_2$]


Answer



Yes, you are right: the elements of $R_2$ are $0$, $1$, $i$, and $1+i$. And since $1=-1$ in $\mathbb{Z}/2\mathbb{Z}$, we have $1+i=1-i$ there.
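For concreteness, here is the full multiplication table (a sketch, assuming $R_2=(\mathbb{Z}/2\mathbb{Z})[i]$ with coefficients reduced mod $2$, so that $i^2=-1=1$):
$$\begin{array}{c|cccc}
\cdot & 0 & 1 & i & 1+i \\ \hline
0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & i & 1+i \\
i & 0 & i & 1 & 1+i \\
1+i & 0 & 1+i & 1+i & 0
\end{array}$$
In particular, $(1+i)^2=1+2i+i^2=1+0+1=0$, so $R_2$ has zero divisors and is therefore not a field.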



functional analysis - Hamel bases without (too much) axiomatic set theory


Hamel bases have cropped up on the periphery of my mathematical interests a few times over my mathematical career, but I have never found the time or had a real need to look into them at any depth. Most of what I know comes from the 1966 book "A First Course in Functional Analysis" by M. Davies, in which he uses them to prove the existence of discontinuous solutions of the functional equation $f(x+y) = f(x) + f(y)$.


My questions/queries:




  • Is it possible to talk (meaningfully) about Hamel bases without invoking the axiom of choice?




  • am I correct in my primitive, intuition-led understanding: "we can't explicitly exhibit a Hamel basis because that would be "equivalent" (in some obscure way that I cannot define precisely) to explicitly exhibiting a "choice function"?




  • can anyone give me a nice reference where an old-fashioned analyst (well, old, at least) could read up on such matters without getting too heavily involved in axiomatic set theory or foundations of maths texts?



Answer




In general, you can't even posit the existence of a Hamel basis without the axiom of choice; and even with the axiom of choice, you can't get your hands on them. If you were to say "pick a basis of $\mathbb{R}$ over $\mathbb{Q}$" then you'd already have invoked the axiom of choice; and then you wouldn't be able to write down what a general vector in the basis looks like. This is characteristic of any construction in mathematics that depends on the axiom of choice, since it's the only axiom which says "this thing exists" without also saying what it looks like.


The sense in which the existence of Hamel bases is equivalent to the axiom of choice is as follows: if every vector space has a (Hamel) basis, then the axiom of choice holds; and if the axiom of choice holds, then every vector space has a basis. This equivalence means that, if the axiom of choice fails, then there is a vector space without a basis; likewise, if there exists a vector space without a basis, then the axiom of choice fails. But the question of whether the axiom of choice holds or fails (and hence of whether Hamel bases exist or don't exist) is unprovable, in a very concrete sense... the response of most mathematicians to this fact is "we'll just admit the axiom of choice".


It's not true that the existence of a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$ implies the full axiom of choice. However, it does imply a weaker form of the axiom of choice which is [still] unprovable.


Beyond this, I don't know what to say: it's tough to answer an axiomatic set theoretic question without talking about axiomatic set theory.


Thursday, November 15, 2018

solve modular arithmetic equation

How do I solve this equation




$L = D + [D:4]$




  • $L$ is a known integer obtained previously;

  • $D$ is an integer;

  • $[D:4]$ is the quotient part of $D/4$.



At first glance, I did not know how to solve this equation as I have never seen this type in my calculus studies. After searching online I found that this is modular arithmetic for two reasons:





  1. The original relation is a congruence equation (Zeller's Rule) dealing with cyclical calendar numbers and I understand that modular arithmetic deals with cyclical integers.

  2. $D/4=\operatorname{quotient}(D/4)+\operatorname{remainder}(D/4)/4$, and since $D \bmod 4=\operatorname{remainder}(D/4)$, that also means we are dealing with modular arithmetic.



    $∴ [D:4]=\dfrac{D-(D \bmod 4)}{4}$



    $∴ L=\dfrac{5D-(D \bmod 4)}{4}$ ……………….. (1)








How do I go further with eq. (1)?



Although I read the modular arithmetic rules and practiced a little, I wasn't sure if I was going down the right path.
I tried to eliminate the $D \bmod 4$ part by multiplying it by its inverse, according to the following rule:



Calculate $A \cdot D \pmod 4$ for $A$ values $0$ through $(4-1)$; the modular inverse of $D \pmod 4$ is the $A$ value that makes $A \cdot D\equiv 1\pmod 4$, and only numbers that share no prime factors with $4$ have a modular inverse $\pmod 4$.



From this point, I can obtain an inverse (I think), but it makes no sense to me.




Can anyone help, please?
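Since $L$ is known, one concrete way to recover $D$ is a bounded search (my own Python sketch; the function name and the sample value of $L$ are illustrative, not from the post):

    # Invert L = D + [D:4] = D + D//4 by brute force.
    def solve_for_D(L):
        # D + D//4 >= D, so any solution satisfies D <= L.
        return [D for D in range(L + 1) if D + D // 4 == L]

    print(solve_for_D(25))  # [20], since 20 + 20//4 = 25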

analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...