Tuesday, May 31, 2016

continuity - Find a discontinuous function defined on $\mathbb{Q}$ embedded in $\mathbb{R}$


I am reading Chapter 1, Example 11 of 'Counterexamples in Analysis' by Gelbaum and Olmsted. This section illustrates counterexamples, for functions defined on $\mathbb{Q}$ embedded in $\mathbb{R}$, to statements that are usually true for functions defined on a real domain. Almost all examples assume that the function (defined on a rational domain) is continuous; for example, the book gives a counterexample of:



A function continuous and bounded on a closed interval but not uniformly continuous.



My questions are, what is an example of a discontinuous real function defined on $\mathbb{Q}$, that is: $f:\mathbb{Q}\rightarrow\mathbb{R}$? Are all functions defined on $\mathbb{Q}$ discontinuous (similar to how functions defined on the set of natural numbers are always continuous)?


Answer



1). All functions defined on $\mathbb{N}$ are continuous (not discontinuous), since every point of $\mathbb{N}$ is isolated.


2). An example of a function $f: \mathbb{Q} \to \mathbb{R}$ that is discontinuous is $f = \chi_{\{0\}}$, i.e. $f(x) = 1$ iff $x = 0$ ($x \in \mathbb{Q}$). One can see that this is discontinuous by noting that $f(\frac{1}{n}) = 0$ for each $n \ge 1$, while $f(\lim_n \frac{1}{n}) = f(0) = 1$.
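As a quick numerical illustration of this answer (a sketch I am adding, not part of the original post), one can evaluate $f=\chi_{\{0\}}$ along the rational sequence $1/n \to 0$:

```python
from fractions import Fraction

# Characteristic function of {0}, viewed as a map Q -> R.
def f(q: Fraction) -> int:
    return 1 if q == 0 else 0

# Along the rational sequence 1/n -> 0, the values f(1/n) stay at 0 ...
values = [f(Fraction(1, n)) for n in range(1, 101)]
assert all(v == 0 for v in values)

# ... yet the value at the limit point is 1, so f is not continuous at 0.
assert f(Fraction(0)) == 1
print("f(1/n) = 0 for all n, but f(0) = 1")
```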


linear algebra - Eigenvalues of tridiagonal symmetric matrix with diagonal entries 2 and subdiagonal entries 1




Problem:



Let A be a square matrix with all diagonal entries equal to 2, all entries directly above or below the main diagonal equal to 1, and all other entries equal to 0. Show that every eigenvalue of A is a real number strictly between 0 and 4.





Attempt at solution:




  • Since A is real and symmetric, we already know that its eigenvalues are real numbers.


  • Since the entries in the diagonal of A are all positive (all 2), A is positive definite iff the determinants of all the upper left-hand corners of A are positive. I think this can be proven via induction (showing that each time the dimension goes up, the determinant goes up too)


  • Since A is symmetric and positive definite, eigenvalues are positive, i.e. greater than 0.





But I can't get the upper bound of 4. Any help would be appreciated. Thank you.


Answer



The characteristic polynomial of $A-2I$ is the $n\times n$ determinant $D_n(X)$ of the matrix with entries $-1$ directly above or below the main diagonal, entries $X$ on the main diagonal, and entries $0$ everywhere else, hence
$$
D_{n+2}(X)=XD_{n+1}(X)-D_n(X),
$$
for every $n\geqslant0$ with $D_1(X)=X$ and the convention $D_0(X)=1$.
This recursion is obviously related to Chebyshev polynomials and one can prove:





For every $u$ in $(0,\pi)$ and $n\geqslant0$, $D_{n}(2\cos(u))=\dfrac{\sin((n+1)u)}{\sin(u)}$.




Assume that this holds for $n$ and $n-1$ for some $n\geqslant1$; then
$$
D_{n+1}(2\cos(u))\sin(u)=2\cos(u)\sin((n+1)u)-\sin(nu)=\sin((n+2)u).
$$
Since $D_1(2\cos(u))=2\cos(u)=\sin(2u)/\sin(u)$ and $D_0(2\cos(u))=1=\sin(u)/\sin(u)$, this proves the claim. Hence $x=2\cos(k\pi/(n+1))$ solves $D_n(x)=0$ for every $1\leqslant k\leqslant n$. These $n$ different values are the eigenvalues of $A-2I$.



Since the eigenvalues of $A-2I$ are all in the interval $[-2\cos(\pi/(n+1)),+2\cos(\pi/(n+1))]$, the eigenvalues of $A$ are all in the open interval $(0,4)$.
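One can check the conclusion numerically with the three-term determinant recursion (a sketch I am adding; pure Python, no linear-algebra library assumed):

```python
import math

def det_A_minus_lambda(n: int, lam: float) -> float:
    """Determinant of the n x n matrix A - lam*I, where A has 2 on the
    diagonal and 1 on the sub/superdiagonals, via the recursion
    d_k = (2 - lam) d_{k-1} - d_{k-2}, with d_0 = 1."""
    d_prev, d_curr = 1.0, 2.0 - lam  # d_0, d_1
    for _ in range(2, n + 1):
        d_prev, d_curr = d_curr, (2.0 - lam) * d_curr - d_prev
    return d_curr if n >= 1 else d_prev

n = 8
eigs = [2 + 2 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]
for lam in eigs:
    assert 0 < lam < 4                              # strictly inside (0, 4)
    assert abs(det_A_minus_lambda(n, lam)) < 1e-9   # each is a root of det(A - xI)
print("all", n, "claimed eigenvalues lie in (0, 4) and annihilate det(A - xI)")
```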



calculus - Differentiation of $2\arccos \left(\sqrt{\frac{a-x}{a-b}}\right)$



Okay so the question is:




Show that the derivative of the function
$$2\arccos \left(\sqrt{\dfrac{a-x}{a-b}}\right)$$

is equal to
$$\frac{1}{\sqrt{(a-x)(x-b)}} .$$




I started by changing the arccosine into inverse cosine, then attempted to apply chain rule but I didn't get very far.
Then I tried substituting the derivative for arccosine in and then applying chain rule. Is there another method besides chain rule I should use? Any help is appreciated.


Answer



$$\dfrac{d}{du}\, 2\arccos u = - \dfrac{2}{\sqrt{1 - u^2}}$$



See the Proof Wiki for a proof of this.




In this problem, we have $u = \sqrt{\dfrac{a-x}{a-b}}$, and we need to find $\dfrac{du}{dx}$, so we have:



$$ \dfrac{d}{dx} \left(\sqrt{\dfrac{a-x}{a-b}} \right) = -\dfrac{\sqrt{\dfrac{a-x}{a-b}}}{2 (a-x)} = -\dfrac{1}{2 \sqrt{(a - b)(a - x)}}$$



So, let's put these two together.



$\dfrac{d}{dx}\left(2 \arccos u \right) =-\dfrac{2}{\sqrt{1 - u^2}} \,\dfrac{du}{dx} = -\dfrac{2}{\sqrt{1 - \left(\sqrt{\dfrac{a-x}{a-b}}\right)^2}} \left(-\dfrac{1}{2 \sqrt{(a - b)(a - x)}} \right)$



We can reduce this to:




$$\dfrac{d}{dx} \left(2 \arccos \left(\sqrt{\dfrac{a-x}{a-b}}\right)\right)=\dfrac{1}{\sqrt{(a-x)(x-b)}}$$
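A quick numerical spot check of the final formula (my addition; the sample values $a=3$, $b=1$, $x=2$ are arbitrary, chosen so that $b<x<a$):

```python
import math

# Sample values with b < x < a, so (a-x)/(a-b) lies in (0,1) and arccos is defined.
a, b, x = 3.0, 1.0, 2.0

def F(t):
    return 2 * math.acos(math.sqrt((a - t) / (a - b)))

# Central-difference approximation of F'(x) versus the claimed closed form.
h = 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)
claimed = 1 / math.sqrt((a - x) * (x - b))

assert abs(numeric - claimed) < 1e-6
print(numeric, claimed)
```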


lie algebras - Defining the Lie Bracket on $\mathfrak{g} \otimes_{\Bbb{R}} \Bbb{C}$



I already know how to do the complexification of a real Lie algebra $\mathfrak{g}$ by the usual process of taking $\mathfrak{g}_\Bbb{C}$ to be $\mathfrak{g} \oplus i\mathfrak{g}$. Now suppose I take the approach of trying to complexify things using tensor products. I look at $\mathfrak{g} \otimes_\Bbb{R} \Bbb{C}$ with the $\Bbb{R}$ - linear map




$$\begin{eqnarray*} f : &\mathfrak{g}& \longrightarrow \mathfrak{g} \otimes_\Bbb{R} \Bbb{C} \\
&v&\longmapsto v \otimes 1. \end{eqnarray*}$$



Now suppose I have an $\Bbb{R}$ - linear map $h : \mathfrak{g} \to \mathfrak{h}$ where $\mathfrak{h}$ is any other complex Lie algebra. Then I can define a $\Bbb{C}$ - linear map $g$ from the complexification $\mathfrak{g} \otimes_\Bbb{R} \Bbb{C}$ to $\mathfrak{h}$ simply by defining the action on elementary tensors as



$$g(v \otimes i) = ih(v).$$



I have checked that $g$ is a $\Bbb{C}$ - linear map. Now my problem comes now in that my $f,g,h$ have to somehow be compatible with the bracket on $[\cdot,\cdot]_\mathfrak{g}$ of $\mathfrak{g}$ and $[\cdot,\cdot]_\mathfrak{h}$ of $\mathfrak{h}$. This is because I don't want them to just be linear maps but also Real/Complex Lie algebra homomorphisms.






My question is: How do we define the bracket on the complexification? A reasonable guess would be $[v \otimes i,w \otimes i] = \left([v,w] \otimes [i,i]\right)$ but this is zero.





Edit: Perhaps I should add, in the usual way of defining the complexification, the bracket on $\mathfrak{g}$ extends uniquely to one on the complexification $\mathfrak{g} \oplus i\mathfrak{g}$. Should it not be the case now that my bracket on $\mathfrak{g}$ extends uniquely to one on the tensor product then?



Edit: How do we know that the Lie Bracket defined by MTurgeon is well-defined? Does it follow from the fact that we are tensoring vector spaces, and so there is one and only one way to represent a vector in here?


Answer




First of all, it seems the right extension is the following:
$$[v\otimes\lambda,w\otimes\mu]:=[v,w]\otimes\lambda\mu.$$



This satisfies bilinearity, and Jacobi's identity. However, how can we show that this is the unique extension of the Lie bracket? We have the following result (taken, for example, from Bump's Lie groups):



Proposition: If $V$ and $U$ are real vector spaces, any $\mathbb R$-bilinear map $V\times V\to U$ extends uniquely to a $\mathbb C$-bilinear map $V_{\mathbb C}\times V_{\mathbb C}\to U_{\mathbb C}$.



Proof: This basically follows from the properties of tensor products. Any $\mathbb R$-bilinear map $V\times V\to U$ corresponds to a unique $\mathbb R$-linear map $V\otimes_{\mathbb R} V\to U$. But any $\mathbb R$-linear map extends uniquely to a $\mathbb C$-linear map of the complexified vector spaces (this is easy to prove). Hence, we have a $\mathbb C$-linear map $(V\otimes_{\mathbb R} V)_{\mathbb C}\to U_{\mathbb C}$. But we have the following isomorphism:
$$(V\otimes_{\mathbb R} V)_{\mathbb C}\cong V_{\mathbb C}\otimes_{\mathbb C} V_{\mathbb C};$$
on the left-hand side, the tensor product is over $\mathbb R$, and on the right-hand side, it is over $\mathbb C$. Finally, our $\mathbb C$-linear map $V_{\mathbb C}\otimes_{\mathbb C} V_{\mathbb C}\to U_{\mathbb C}$ corresponds to a unique $\mathbb C$-bilinear map $V_{\mathbb C}\times V_{\mathbb C}\to U_{\mathbb C}$.
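To see the bracket $[v\otimes\lambda,w\otimes\mu]=[v,w]\otimes\lambda\mu$ in a concrete case (a sketch I am adding, not from the answer), identify $\mathfrak g=\mathfrak{so}(3)\cong(\mathbb R^3,\times)$; its complexification is then $\mathbb C^3$ with the complex cross product, and one can check the scaling rule and the Jacobi identity numerically:

```python
import random

# Complexification of so(3) = (R^3, cross product): the bracket
# [v ⊗ λ, w ⊗ μ] = [v, w] ⊗ λμ becomes the cross product of complex vectors.
def bracket(v, w):
    return [v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0]]

def scale(c, v):  # c in C acting on v in C^3
    return [c * t for t in v]

def add(u, v):
    return [s + t for s, t in zip(u, v)]

random.seed(0)
rc = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
v = [random.uniform(-1, 1) for _ in range(3)]   # real vectors: elements of g
w = [random.uniform(-1, 1) for _ in range(3)]
lam, mu = rc(), rc()

# [v⊗λ, w⊗μ] = [v,w]⊗(λμ): scalars pull out of the bracket.
lhs = bracket(scale(lam, v), scale(mu, w))
rhs = scale(lam * mu, bracket(v, w))
assert all(abs(s - t) < 1e-12 for s, t in zip(lhs, rhs))

# Jacobi identity for three random complex vectors.
x, y, z = [[rc() for _ in range(3)] for _ in range(3)]
jac = add(add(bracket(x, bracket(y, z)), bracket(y, bracket(z, x))),
          bracket(z, bracket(x, y)))
assert all(abs(c) < 1e-12 for c in jac)
print("bracket is C-bilinear on elementary tensors and satisfies Jacobi")
```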



calculus - Integrating the formula for the sum of the first $n$ natural numbers



I was messing around with some math formulas today and came up with a result that I found pretty neat, and I would appreciate it if anyone could explain it to me.



The formula for a finite arithmetic sum is $$\sum_{i=1}^{n}a_i=\frac{n(a_1+a_n)}{2},$$ so if you want to find the sum of the natural numbers from $1$ to $n$, this expression becomes $$\frac{n^2+n}{2},$$ and the roots of this quadratic are at $n=-1$ and $0$. What I find really interesting is that $$\int_{-1}^0 \frac{n^2+n}{2}\,dn=-\frac{1}{12}.$$ There are a lot of people who claim that the sum of all natural numbers is $-\frac{1}{12}$, so I was wondering if this result is a complete coincidence or if there's something else to glean from it.


Answer



We have Faulhaber's formula:



$$\sum_{k=1}^n k^p = \frac1{p+1}\sum_{j=0}^p (-1)^j\binom{p+1}jB_jn^{p+1-j},~\mbox{where}~B_1=-\frac12$$




$$\implies f_p(x)=\frac1{p+1}\sum_{j=0}^p(-1)^j\binom{p+1}jB_jx^{p+1-j}$$



We integrate the RHS from $-1$ to $0$ to get



$$I_p=\int_{-1}^0f_p(x)~\mathrm dx=\frac{(-1)^{p+1}}{p+1}\sum_{j=0}^p\binom{p+1}j\frac{B_j}{p+2-j}$$



Using the recursive definition of the Bernoulli numbers,



$$I_p=(-1)^p\frac{B_{p+1}}{p+1}=-\frac{B_{p+1}}{p+1}$$




Using the well known relation $B_p=-p\zeta(1-p)$, we get



$$I_p=\zeta(-p)$$



So no coincidence here!
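The first few cases can be verified with exact rational arithmetic (a sketch I am adding; the Faulhaber polynomials $f_p$ for $p=1,2,3$ are hardcoded from the usual sum formulas):

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers with the convention B_1 = -1/2, via the recursion
# sum_{j=0}^{m-1} C(m+1, j) B_j = -(m+1) B_m, with B_0 = 1.
B = [Fraction(1)]
for m in range(1, 6):
    B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))

def integral_neg1_0(coeffs):
    """Exact integral over [-1, 0] of the polynomial sum_i coeffs[i] x^i."""
    return sum(c * Fraction((-1) ** i, i + 1) for i, c in enumerate(coeffs))

# Faulhaber polynomials f_p (with f_p(n) = 1^p + ... + n^p) for p = 1, 2, 3,
# as coefficient lists [c_0, c_1, ...].
faulhaber = {
    1: [Fraction(0), Fraction(1, 2), Fraction(1, 2)],                  # x(x+1)/2
    2: [Fraction(0), Fraction(1, 6), Fraction(1, 2), Fraction(1, 3)],  # x(x+1)(2x+1)/6
    3: [Fraction(0), Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)],  # (x(x+1)/2)^2
}
zeta_neg = {1: Fraction(-1, 12), 2: Fraction(0), 3: Fraction(1, 120)}

for p, coeffs in faulhaber.items():
    I_p = integral_neg1_0(coeffs)
    assert I_p == -B[p + 1] / (p + 1) == zeta_neg[p]
print("I_p = -B_(p+1)/(p+1) = zeta(-p) for p = 1, 2, 3")
```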


category theory - How to understand quasi-inverse of a function f∘g∘f = f?

Recently I was studying the quasi-inverse. Before I studied the quasi-inverse, I revisited the inverse and the left-right inverse.


inverse function:


Let $f : X → Y$. Then $g : Y → X$ is an inverse of $f$ if and only if $f∘g = id_{Y}$ and $g∘f = id_{X}$.


It is easy to understand.


right-inverse function:


Let $f : X → Y$. Then $g : Y → X$ is a right-inverse of $f$ (or a section of $f$) if and only if $f∘g = id_{Y}$.


It means that $f$ must be surjective and $g$ must be injective. It is also very intuitive.



Now I start to study quasi-inverse:


One thing I have to explain here is that "quasi-inverse" does not seem to be a precise piece of terminology, and I can't find any information about quasi-inverses on Wikipedia or the nLab. (I study it because the form of the "quasi-inverse" appears in many branches of mathematics; e.g. in category theory, adjoint functors need to satisfy the triangle identities. Although they are completely different, they are similar in form.)


Here, I use the definition of quasi-inverse from https://planetmath.org/QuasiinverseOfAFunction



Let f:X→Y be a function from sets X to Y. A quasi-inverse g of f is a function g such that



  1. g:Z→X where ran⁡(f)⊆Z⊆Y, and




  2. f∘g∘f=f, where ∘ denotes functional composition operation.




Note that ran⁡(f) is the range of f.



In order to understand this formula intuitively, I drew the following diagram


(diagram omitted)


This formula seems to tell us that


A function $g$ is a quasi-inverse of a function $f$, if the restriction of $g$ to $ran(f)$ is the right-inverse of $f$, i.e.


$f ∘ g ∘ j_{ran(f)} = j_{ran(f)}$


Note: $j_{S}$ denote identity function on $S$.


My first question is, is this conclusion correct? i.e.



$f ∘ g ∘ j_{ran(f)} = j_{ran(f)} \Leftrightarrow f∘g∘f=f $


If this conclusion is correct, how to prove it?


It is easy to prove $\Rightarrow$, but how to prove the opposite?


If this conclusion is wrong, anyone can give me an example which satisfies $f∘g∘f=f$ but not satisfies $f ∘ g ∘ j_{ran(f)} = j_{ran(f)}$?


I may have missed some key things...


The second question is: if I have $f∘g∘f=f$ and $g∘f∘g=g$, is there any interesting conclusion? E.g., can it be concluded that $f$ and $g$ are bijections?


Many thanks.


PS: The reference of https://planetmath.org/QuasiinverseOfAFunction mentioned a book "Probabilistic Metric Spaces". In this book, the author mentioned another definition of quasi-inverse, which is stronger than the two quasi-inverses here, but it is another topic.
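Since no answer was posted, here is a brute-force exploration on small finite sets (my addition, a sketch rather than a proof) of both questions: it checks the equivalence between $f∘g∘f=f$ and "$f∘g$ is the identity on $ran(f)$" exhaustively, and exhibits $f,g$ with $f∘g∘f=f$ and $g∘f∘g=g$ that are not bijections:

```python
from itertools import product

X = [0, 1, 2]
Y = ['a', 'b']

def compose(outer, inner):
    return {k: outer[v] for k, v in inner.items()}

# Exhaustively check: f∘g∘f = f  iff  f(g(y)) = y for every y in ran(f).
for f_vals, g_vals in product(product(Y, repeat=len(X)), product(X, repeat=len(Y))):
    f = dict(zip(X, f_vals))
    g = dict(zip(Y, g_vals))
    ran_f = set(f.values())
    lhs = compose(f, compose(g, f)) == f          # f∘g∘f = f
    rhs = all(f[g[y]] == y for y in ran_f)        # f∘g = id on ran(f)
    assert lhs == rhs

# Both f∘g∘f = f and g∘f∘g = g can hold with neither f nor g a bijection:
f = {0: 'a', 1: 'a', 2: 'a'}
g = {'a': 0, 'b': 0}
assert compose(f, compose(g, f)) == f
assert compose(g, compose(f, g)) == g
print("equivalence verified on all 72 pairs; a non-bijective quasi-inverse pair exists")
```

(The forward direction of the equivalence is the easy one; the converse follows because any $y \in ran(f)$ is $f(x)$ for some $x$, so $f(g(y)) = (f∘g∘f)(x) = f(x) = y$.)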

Monday, May 30, 2016

algebraic number theory - Does the equation $x^2+23y^2=2z^2$ have integer solutions?





I would like to show that the image of the norm map $\text N : \mathbb Z \left[\frac{1 + \sqrt{-23}}{2} \right] \to \mathbb Z$ does not include $2.$ I first thought that the norm map $\mathbb Q(\sqrt{-23}) \to \mathbb Q$ does not attain $2$ either, so I tried to solve the Diophantine equation $$x^2 + 23y^2 = 2z^2$$ in integers.




After taking congruences with respect to several integers, such as $2, 23, 4, 8$ and even $16,$ I still cannot say that this equation has no integer solutions. Then I found out that the map $\text{N}$ has a simpler expression and can be easily shown not to map to $2.$



But I still want to know about the image of $\text N,$ and any help will be greatly appreciated, thanks in advance.


Answer



$x^2+23y^2=2z^2\iff x^2+(5y)^2=2(z^2+y^2)$.




Since the solutions of the equation $X^2+Y^2=2Z^2$ are given by the identity
$$(a^2+2ab-b^2)^2+(a^2-2ab-b^2)^2=2(a^2+b^2)^2,$$ we can try $(y,z)=(a,b)$, taking care that one of $a^2+2ab-b^2$ or $a^2-2ab-b^2$ equals $5b$.



Taking for example $(y,z)=(1,4)$ we get the solution $(x,y,z)=(3,1,4)$.



Thus the proposed equation has solutions (which can be parametrized, but I stop here).
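A short script (my addition) confirms the identity $(a^2+2ab-b^2)^2+(a^2-2ab-b^2)^2=2(a^2+b^2)^2$ and the solution $(3,1,4)$ by brute force:

```python
import random

# Randomized check of the identity parametrizing X^2 + Y^2 = 2Z^2.
random.seed(1)
for _ in range(200):
    a, b = random.randint(-50, 50), random.randint(-50, 50)
    assert (a*a + 2*a*b - b*b)**2 + (a*a - 2*a*b - b*b)**2 == 2*(a*a + b*b)**2

# Brute-force search confirms x^2 + 23 y^2 = 2 z^2 has nontrivial solutions,
# including the (3, 1, 4) found above.
solutions = [(x, y, z)
             for x in range(1, 30) for y in range(1, 30) for z in range(1, 30)
             if x*x + 23*y*y == 2*z*z]
assert (3, 1, 4) in solutions
print(solutions[:5])
```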


exponential - Limit involving a trigonometric function




$$\lim_{x\rightarrow \infty}\left[\cos\left(2\pi\left(\frac{x}{x+1}\right)\right)^{\alpha}\right]^{x^2}$$ where $\alpha\in \mathbb{Q}$




Try: $$l=\lim_{x\rightarrow \infty}\bigg[\cos\bigg(2\pi\bigg(1-\frac{1}{x+1}\bigg)\bigg)^{\alpha}\bigg]^{x^2}$$



Put $\displaystyle \frac{1}{x+1}=t.$ Then the limit becomes $$\displaystyle l=\lim_{t\rightarrow 0}\bigg(\cos (2\pi t)^{\alpha}\bigg)^{\left(\frac{1}{t}-1\right)^2}$$




$$\ln(l)=\lim_{t\rightarrow 0}\left(\frac{1}{t}-1\right)^2\ln(\cos(2\pi t)^{\alpha})$$



Could someone help me solve it? Thanks.


Answer



Hint:



Put $1/x=t$ to find



$$\lim_{t\to0}\left(\cos\dfrac{2\pi }{t+1}\right)^{1/t^2}$$




$=\lim_{t\to0}\left(\cos\left(2\pi-\dfrac{2\pi }{t+1}\right)\right)^{1/t^2}$ as $\cos(2\pi-x)=\cos x$



$$=\lim_{t\to0}\left(\cos\dfrac{2\pi t}{t+1}\right)^{1/t^2}$$



$$=\left[\lim_{t\to0}\left(1-2\sin^2\dfrac{\pi t}{t+1}\right)^{-\dfrac1{2\sin^2\dfrac{\pi t}{t+1}}}\right]^{-2\lim_{t\to0}\left(\dfrac{\sin\dfrac{\pi t}{t+1}}t\right)^2}$$
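Following the hint through (with $\alpha=1$), the inner bracket tends to $e$ and the outer exponent to $-2\pi^2$, suggesting the value $e^{-2\pi^2}$; a numeric check I am adding:

```python
import math

# Numerically, [cos(2π x/(x+1))]^(x^2) approaches e^(-2π^2) ≈ 2.675e-9
# (taking α = 1), matching the hint's decomposition.
def f(x):
    return math.cos(2 * math.pi * x / (x + 1)) ** (x * x)

target = math.exp(-2 * math.pi ** 2)
val = f(1e5)
assert abs(val - target) / target < 1e-2
print(val, target)
```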


algebra precalculus - How to determine linear function from "at a price (p) of 220 the demand is 180"




In my math book I can look up the answers to the exercises. And as I do, I have no idea how I would solve the following example. Probably my mind is stuck, as I can't find a new way to think about the issue.



"The demand function of a good is a linear function. At a price (p) of 220 the demand is 180 units. At a price of 160 the demand is 240 units."




  1. Determine the demand function.



"Also the supply function is linear. At a price of 150 the supply is 100 units and at a price of 250 the supply is 300 units".





  2. Determine the supply function.



Could someone explain to me how I would approach these two questions, as the book doesn't provide explanations but only the answers? Thank you.


Answer



You know that the demand function is a linear function of price $p$, say $D(p)=\alpha\cdot p+\beta$ for suitable parameters $\alpha,\beta\in\mathbb R$. From the conditions given in your problem, you know that



$$

D(220)=\boldsymbol{220\alpha+\beta=180}\qquad\text{and}\qquad
D(160)=\boldsymbol{160\alpha+\beta=240}.
$$



From the bold equations (a system of two linear equations in two variables), one simply obtains the coefficients $\alpha=-1$, $\beta=400$, which enable you to write down the demand function as $D(p)=400-p$.



In fact, the same can be done for the supply function.
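Both systems are small enough to solve mechanically; here is a sketch I am adding (the helper name `line_through` is mine) that recovers $D(p)=400-p$ and, from the supply data in the problem, $S(p)=2p-200$:

```python
def line_through(p1, q1, p2, q2):
    """Coefficients (alpha, beta) of the linear function q = alpha*p + beta
    through the points (p1, q1) and (p2, q2)."""
    alpha = (q2 - q1) / (p2 - p1)
    beta = q1 - alpha * p1
    return alpha, beta

# Demand: 180 units at price 220, 240 units at price 160.
a_d, b_d = line_through(220, 180, 160, 240)
assert (a_d, b_d) == (-1.0, 400.0)          # D(p) = 400 - p

# Supply: 100 units at price 150, 300 units at price 250.
a_s, b_s = line_through(150, 100, 250, 300)
assert (a_s, b_s) == (2.0, -200.0)          # S(p) = 2p - 200
print(f"D(p) = {a_d}*p + {b_d},  S(p) = {a_s}*p + {b_s}")
```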


calculus - Are all limits solvable without L'Hôpital's Rule or Series Expansion

Is it always possible to find the limit of a function without using L'Hôpital's rule or a series expansion?



For example,



$$\lim_{x\to0}\frac{\tan x-x}{x^3}$$



$$\lim_{x\to0}\frac{\sin x-x}{x^3}$$




$$\lim_{x\to0}\frac{\ln(1+x)-x}{x^2}$$



$$\lim_{x\to0}\frac{e^x-x-1}{x^2}$$



$$\lim_{x\to0}\frac{\sin^{-1}x-x}{x^3}$$



$$\lim_{x\to0}\frac{\tan^{-1}x-x}{x^3}$$

Sunday, May 29, 2016

How is $\zeta(0)=-1/2$?







Fermat's Dream by Kato et al. gives the following:





  1. $\zeta(s)=\sum\limits_{n=1}^{\infty}\frac{1}{n^s}$ (the standard Zeta function) provided the sum converges.


  2. $\zeta(0)=-1/2$




Thus, $1+1+1+...=-1/2$ ? How can this possibly be true? I guess I'm under the impression that $\sum 1$ diverges.

trigonometry - Prove that the series with terms $(-1)^k\cos^3(k)$ has a bounded sequence of partial sums.



I found that $(-1)^k\cos^3(k)$ equals $$(1-\sin^2(k))\cos((\pi+1)k)$$ or $$\cos((\pi+3)k) + \sin(k)\sin((\pi+2)k) + \sin(k)\cos(k)\sin((\pi+1)k).$$



I wanted to flatten it into such a sum of terms that each has bounded partial sums, so the final sequence of partial sums is bounded too.



But I am left with the $\sin(k)$ and $\sin(k)\cos(k)$ factors.




I should use that $(-1)^k$ equals $\cos(\pi k)$, and the trigonometric identities.


Answer



$$
\begin{align}
&\left|\,\sum_{k=0}^{n-1}(-1)^k\cos^3(k)\,\right|\\
&=\left|\,\sum_{k=0}^{n-1}\frac{(-1)^k}8\left(e^{ik}+e^{-ik}\right)^3\,\right|\\
&=\left|\,\sum_{k=0}^{n-1}\frac{(-1)^k}8\left(e^{3ik}+3e^{ik}+3e^{-ik}+e^{-3ik}\right)\,\right|\\
&\le\frac18\left|\,\frac{1-\left(-e^{3i}\right)^n}{1+e^{3i}}\,\right|+\frac38\left|\,\frac{1-\left(-e^{i}\right)^n}{1+e^{i}}\,\right|+\frac38\left|\,\frac{1-\left(-e^{-i}\right)^n}{1+e^{-i}}\,\right|+\frac18\left|\,\frac{1-\left(-e^{-3i}\right)^n}{1+e^{-3i}}\,\right|\\
&\le\frac1{4\cos\left(\frac32\right)}+\frac3{4\cos\left(\frac12\right)}

\end{align}
$$
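A numeric check I am adding: the partial sums indeed stay below the bound $\frac1{4\cos(3/2)}+\frac3{4\cos(1/2)}\approx 4.39$ derived above:

```python
import math

# Partial sums of sum_{k>=0} (-1)^k cos^3(k), compared to the derived bound.
bound = 1 / (4 * math.cos(1.5)) + 3 / (4 * math.cos(0.5))
partial, max_abs = 0.0, 0.0
for k in range(10000):
    partial += (-1) ** k * math.cos(k) ** 3
    max_abs = max(max_abs, abs(partial))
assert max_abs <= bound
print(f"max |partial sum| = {max_abs:.4f}  <=  bound = {bound:.4f}")
```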


real analysis - A series involving harmonic numbers




How do we get a closed form for
$$\sum_{n=1}^\infty\frac{H_n}{(2n+1)^2}$$


Answer



Here's another solution. I'll denote various versions of the sum



$$
\sum_{k=1}^\infty\sum_{j=1}^k\frac1j\frac1{k^2}
$$



by an $S$ with two subscripts indicating which parities are included, the first subscript referring to the parity of $j$ and the second to the parity of $k$, with '$\mathrm e$' denoting only the even terms, '$\mathrm o$' denoting only the odd terms, '$+$' denoting the sum of the even and odd terms, i.e. the regular sum, and '$-$' denoting the difference between the even and the odd terms, i.e. the alternating sum. Then




$$
\begin{align}
\sum_{n=1}^\infty\frac{H_n}{(2n+1)^2}
&=
2\sum_{n=1}^\infty\sum_{i=1}^n\frac1{2i}\frac1{(2n+1)^2}
\\
&=
2S_{\mathrm{eo}}
\\

&=
2(S_{++}-S_{\mathrm o+}-S_{\mathrm{ee}})
\\
&=
2\left(S_{++}-S_{\mathrm o+}-\frac18S_{++}\right)
\\
&=
2\left(\frac38S_{++}+\left(\frac12S_{++}-S_{\mathrm o+}\right)\right)
\\
&=

\frac34S_{++}+S_{-+}
\\
&=
\frac32\zeta(3)+\sum_{k=1}^\infty\sum_{j=1}^k\frac{(-1)^j}j\frac1{k^2}\;,
\end{align}
$$



where I used the result $\sum_nH_n/n^2=2\zeta(3)$ from the blog post Aeolian linked to and reduced the present problem to finding the analogue of that result with the sign alternating with $j$, which we can rewrite as



$$

\begin{align}
\sum_{k=1}^\infty\sum_{j=1}^k\frac{(-1)^j}j\frac1{k^2}
&=
\sum_{k=1}^\infty\sum_{j=1}^\infty\frac{(-1)^j}j\frac1{k^2}-\sum_{k=1}^\infty\sum_{j=k+1}^\infty\frac{(-1)^j}j\frac1{k^2}
\\
&=
-\zeta(2)\log2+\sum_{j=1}^\infty\frac{(-1)^j}{j+1}\sum_{k=1}^j\frac1{k^2}\;.
\end{align}
$$




This last double sum can be evaluated by the method applied in the blog post, making use of the fact that summing the coefficients of a power series in $x$ corresponds to dividing it by $1-x$:



$$
\begin{align}
\sum_{j=1}^\infty x^j\sum_{k=1}^j\frac1{k^2}=\def\Li{\operatorname{Li}}\frac{\Li_2(x)}{1-x}\;,
\end{align}
$$



where $\Li_2$ is the dilogarithm. Thus




$$
\begin{align}
\sum_{j=1}^\infty\frac{(-1)^j}{j+1}\sum_{k=1}^j\frac1{k^2}
&=
\int_0^1\sum_{j=1}^\infty (-x)^j\sum_{k=1}^j\frac1{k^2}\mathrm dx
\\
&=
\int_0^1\frac{\Li_2(-x)}{1+x}\mathrm dx
\\
&=

\left[\Li_2(-x)\log(1+x)\right]_0^1+\int_0^1\frac{\log^2(1+x)}x\mathrm dx
\\
&=-\frac{\zeta(2)}2\log2+\frac{\zeta(3)}4\;,
\end{align}
$$



where the boundary term is evaluated using $\Li_2(-1)=-\eta(2)=-\zeta(2)+2\zeta(2)/4=-\zeta(2)/2$ and the integral in the second term is evaluated in this separate question. Putting it all together, we have



$$
\begin{align}

\sum_{n=1}^\infty\frac{H_n}{(2n+1)^2}
&=
\frac74\zeta(3)-\frac32\zeta(2)\log2
\\
&=
\frac74\zeta(3)-\frac{\pi^2}4\log2\;.
\end{align}
$$



I believe all the rearrangements can be justified, despite the series being only conditionally convergent in $j$, by considering the partial sums with $j$ and $k$ both going up to $M$; then all the rearrangements can be carried out within that finite square of the grid, and the sums of the remaining terms go to zero with $M\to\infty$.
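As a numerical sanity check (my addition, not part of the derivation), partial sums approach $\frac74\zeta(3)-\frac{\pi^2}4\log 2\approx 0.3931$:

```python
import math

N = 200_000
zeta3 = sum(1.0 / k**3 for k in range(1, N))            # ζ(3), truncated
closed = 1.75 * zeta3 - (math.pi ** 2 / 4) * math.log(2)

# Partial sums of sum_{n>=1} H_n / (2n+1)^2, accumulating H_n as we go.
H, s = 0.0, 0.0
for n in range(1, N):
    H += 1.0 / n
    s += H / (2 * n + 1) ** 2

assert abs(s - closed) < 1e-4
print(s, closed)
```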



Saturday, May 28, 2016

calculus - Limit of a trig function. (Without using L'Hopital)

I'm having trouble figuring out what to do here, I'm supposed to find this limit:




$$\lim_{x\rightarrow0} \frac{x\cos(x)-\sin(x)}{x^3}$$



But I don't know where to start, any hint would be appreciated, thanks!

discrete mathematics - Induction Proof: $2$ divides $n^2 + n$ for each $n \in \mathbb{N}$



So I am looking at some induction questions and I am trying to solve them on my own, but I am getting stumped and frustrated. There was a previous question that was answered, but I changed it to see if I could solve it. I am not getting that far.



How do I show by mathematical induction that $2$ divides $n^2+n$ for all $n$ belonging to the set of Natural Numbers. Here is what I have so far. Could I be pointed in the right direction? You can see below where I am stumped. This is where I am having the issue.




Basis: $n=1, \qquad P(1)$ is true, 2 divides $1^2+1 = 2$



Induction Hypothesis: $2$ divides $k^2+k$ for some $k \in \mathbb{Z}$, $k \geq 1$



Induction Step: $(k+1)^2+(k+1)=k^2+3k+2=$


Answer



Alright, so our inductive hypothesis is that $k^2 + k$ is a multiple of $2$ for some $k$. Then consider $(k+1)^2 + (k+1) = k^2 + 2k + 1 + k + 1 = (k^2 + k) + 2k + 2 = (k^2 + k) + 2(k + 1)$.



By our inductive hypothesis, $k^2 + k$ was an even number: because we are adding an even number, $2(k+1)$, to it, we still have an even number.




Therefore, $(k+1)^2 + (k+1)$ must be even as well. This completes our proof.
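The statement itself is easy to spot-check empirically (my addition; a check, not a proof):

```python
# Empirical spot check of the claim: n^2 + n is even for the first few
# thousand natural numbers.
for n in range(1, 5001):
    assert (n * n + n) % 2 == 0
print("2 divides n^2 + n for n = 1..5000")
```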


Friday, May 27, 2016

linear algebra - Proof on Endomorphism

Let $A$ be an endomorphism of $V$ (where $V$ is a finite dimensional vector space over $F$) such that $A$ is onto.
Assume that there exists a function
$B: V \to V$ such that $BA = I$. Prove that $AB = I$.





  • Can you give me a hint on how to prove this problem? Thanks.



Here is my work so far. Since $A$ is onto, for every $v \in V$ there exists $x \in V$ such that $A(x) = v$.



We need to show that $AB = I$.



$(AB)(v) = A(B(v)) = A(B(A(x)))$, then I don't know what's next

real analysis - $f$ is linear and continuous at a point $\implies f$ should be $f(x)=ax$ for some $a \in \mathbb R$

Let $f$ be a real valued function defined on $\mathbb R$ such that $f(x+y)=f(x)+f(y)$. Suppose there exists at least one element $x_0 \in \mathbb R$ such that $f$ is continuous at $x_0.$ Then prove that $f(x)=ax,\ \text{for some}\ a \in \mathbb R.$


Hints will be appreciated.

elementary set theory - Prove $F(F^{-1}(B)) = B$ for onto function

Suppose that $f:X \to Y$ is an onto function. Prove that for all subsets $B$ subset of $Y$, $f(f^{-1}(B)) = B$. I don't know how to do this if the function is not also one to one, which it is not. Any help proving this would be greatly appreciated.

Thursday, May 26, 2016

real analysis - Measure of big discontinuities




Let $D\subset\left[ 0,1\right] $ be a dense set, and $\mu$ Lebesgue
measure on $\left[ 0,1\right] .$



Suppose $f:\left[ 0,1\right] \rightarrow\left[ 0,1\right] $ is continuous
at each point in $D.$ Let $\overline{G}$ be the closure of the graph of $f$ on $\left[
0,1\right] ^{2}.$



Is it true that $\mu\left\{ x_{0}:\overline{G}\cap\left\{ x=x_{0}\right\}
\text{ is infinite}\right\} =0?$



Answer



Here's a counterexample: Let $D$ be the complement of a fat Cantor set $C \subseteq [0, 1]$ so $D$ is dense. Construct $f:[0, 1] \rightarrow [0, 1]$ such that $f$ is zero on $D$ and the graph of $f$ is dense in $C \times [0, 1]$.


calculus - $\lim\limits_{n \to \infty} \frac{\tan(n)}{n^k}$

What is the minimum value of $k$ for which the following limit exists:
$$\lim_{n \to \infty} \frac{\tan(n)}{n^k}$$
I know that $$\lim_{n \to \infty} \frac{\tan(n)}{n}$$
doesn't exist, and $$\lim_{n \to \infty} \frac{\tan(n)}{n^8} = 0.$$
But I don't know the minimum value of $k$ for which the limit exists.
(Note that here the $n$'s are positive integers, not real numbers.)

real analysis - Evaluate the sum $\sum\limits_{n=1}^{\infty}n\left(\frac{2}{3}\right)^n$












How can you compute the limit of
$\sum \limits_{n=1}^{\infty} n(2/3)^n$



Evidently it is equal to $6$ by WolframAlpha, but how could you compute such a sum analytically?


Answer



$$
\begin{align*}
\sum_{n=1}^\infty n(2/3)^n &=
\sum_{m=1}^\infty \sum_{n=m}^\infty (2/3)^n \\ &=
\sum_{m=1}^\infty \frac{(2/3)^m}{1-2/3} \\ &=

\frac{2/3}{(1-2/3)^2} = 6.
\end{align*}
$$
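A quick numeric confirmation (my addition) that the partial sums converge to $x/(1-x)^2$ at $x=2/3$, i.e. to $6$:

```python
# Partial sums of sum_{n>=1} n x^n at x = 2/3; the tail decays geometrically,
# so 200 terms are far more than enough.
x = 2 / 3
partial = sum(n * x ** n for n in range(1, 200))
assert abs(partial - x / (1 - x) ** 2) < 1e-10
assert abs(partial - 6) < 1e-10
print(partial)
```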


analysis - Summing the series $(-1)^k \frac{(2k)!!}{(2k+1)!!} a^{2k+1}$




How does one sum the series $$ S = a -\frac{2}{3}a^{3} + \frac{2 \cdot 4}{3 \cdot 5} a^{5} - \frac{ 2 \cdot 4 \cdot 6}{ 3 \cdot 5 \cdot 7}a^{7} + \cdots $$



This was asked to me by a high school student, and I am embarrassed that I couldn't solve it. Can anyone give me a hint?!


Answer



HINT $\quad \:\;\;\rm (a^2+1) \: S' = 1 - a \: S \;\:$ by transmuting the coefficient recurrence to a differential equation.



$\rm\;\Rightarrow\; 1 = (a^2+1) \: S' + a \: S \; = \; f \: (f \; S)' \;\;$ for $\rm\;\; f = (a^2+1)^{1/2}$



$\rm\displaystyle\;\Rightarrow\; S = f^{-1} \int \; f^{-1} = \frac{\sinh^{-1}(a)}{(a^2+1)^{1/2}}$
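A numeric check of the closed form (my addition): partial sums of the series match $\sinh^{-1}(a)/\sqrt{a^2+1}$:

```python
import math

# Partial sums of a - (2/3)a^3 + (2·4)/(3·5) a^5 - ... with the coefficient
# (2k)!!/(2k+1)!! built up iteratively, versus asinh(a)/sqrt(1+a^2).
def partial_sum(a, terms=100):
    s, coef = 0.0, 1.0           # coef = (2k)!!/(2k+1)!!, starting at k = 0
    for k in range(terms):
        s += (-1) ** k * coef * a ** (2 * k + 1)
        coef *= (2 * k + 2) / (2 * k + 3)
    return s

for a in (0.1, 0.5, 0.9):
    closed = math.asinh(a) / math.sqrt(1 + a * a)
    assert abs(partial_sum(a) - closed) < 1e-8
print("series matches asinh(a)/sqrt(1+a^2) for |a| < 1")
```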


field theory - embedding of a finite algebraic extension



In one of my courses we are proving something (so far, not surprising) and using the fact:
if $F$ is a finite algebraic field extension of $K$, there is an embedding of $F$ into $K$. Well, it doesn't seem to me that we can really embed $F$ into $K$, since $F$ is bigger, but can we at least prove there is a homomorphism from $F$ to $K$?



Answer



Any homomorphism of fields must be zero or an embedding as there are no nontrivial ideals of any field. There is always the natural inclusion $i: K\rightarrow F$ if $K\subseteq F$, but rarely do we have an embedding $F \rightarrow K$.



For a simple example, there is no embedding $\Bbb C\rightarrow \Bbb R$, as only one has a root of $x^2+1$ and an embedding would preserve roots of this polynomial. There are in fact examples of algebraic extensions $K\subseteq F$ with embeddings $F\rightarrow K$ (e.g. $K(x^p)\subseteq K(x)$ with the embedding $K(x)\rightarrow K(x^p)$).


discrete mathematics - How do I find a flaw in this false proof that $7n = 0$ for all natural numbers?

This is my last homework problem and I've been looking at it for a while. I cannot nail down what is wrong with this proof, even though it's obviously wrong based on its conclusion. Here it is:




Find the flaw in the following bogus proof by strong induction that
for all $n \in \Bbb N$, $7n = 0$.




Let $P(n)$ denote the statement that $7n = 0$.



Base case: Show $P(0)$ holds.



Since $7 \cdot 0 = 0$, $P(0)$ holds.



Inductive step: Assume $7·j = 0$ for all natural numbers $j$ where $0 \le j \le k$ (induction hypothesis). Show $P(k + 1)$: $7(k + 1) = 0$.



Write $k + 1 = i + j$, where $i$ and $j$ are natural numbers less than $k + 1$. Then, using the induction hypothesis, we get $7(k + 1) = 7(i + j) = 7i + 7j = 0 + 0 = 0$. So $P(k + 1)$ holds.




Therefore by strong induction, $P(n)$ holds for all $n \in \Bbb N$.




So the base case is true and I would be surprised if that's where the issue is.



The inductive step is likely where the flaw is. I don't see anything wrong with the strong induction declaration and hypothesis, though, and the math adds up! I feel like it's so obvious that I'm just jumping over it in my head.

Wednesday, May 25, 2016

real analysis - Existence of a Strictly Increasing, Continuous Function whose Derivative is 0 a.e. on $mathbb{R}$



This proof is almost done except for the step of showing that the function's derivative is $0$ a.e.



Let $I = \{[p_n, q_n]\}$ denote the set of all closed intervals in $\mathbb{R}$ with rational endpoints $p_i, q_i \in \mathbb{Q}$.



Let $\phi_n$ denote a variant of the Cantor Ternary Function (i.e., Devil's Staircase Function) extended to all of $\mathbb{R}$ which satisfies $\phi_n(x) = 0$ for all $x < p_n$, $\phi_n(x) = 1/2^n$ for all $x > q_n$ and is non-decreasing, continuous, and of zero derivative a.e. on $[p_n,q_n]$, so that $\phi_n$ satisfies





  1. $\phi_n'(x) = 0$ a.e. on $\mathbb{R}$

  2. $\phi_n$ non-decreasing, continuous on $\mathbb{R}$

  3. $\phi_n$ is bounded above by $1/2^n$

  4. $\phi_n (p_n) < \phi_n (q_n)$ (trivial but important later)



Now let $\sum \phi_n = \phi$.



$\fbox{Claim}$





  1. $\sum \phi_n \rightarrow \phi$ uniformly

  2. $\phi$ is strictly increasing and continuous on $\mathbb{R}$ yet also satisfies $\phi'(x) = 0$ a.e. on $\mathbb{R}$.



$\fbox{Attempted Proof}$




  1. First, since $|\phi_n| \le 1/2^n$ for all $n \in \mathbb{N}$ with $\sum 1/2^n < \infty$, it follows that $\sum \phi_n \rightarrow \phi$ uniformly (the Weierstrass $M$-test).



  2. Consider that the finite sum of $k$ continuous functions is itself a continuous function. Since all of the $\phi_n$ are continuous functions, we therefore have that $f_k = \sum_{n=1}^k \phi_n$ is also a continuous function. Moreover, since (1) implies that $f_k \rightarrow \phi$ uniformly, we now have that $\phi$ is also continuous (via another fact from Real Analysis: a uniformly convergent sequence of continuous functions converges to a continuous function).


  3. To show $\phi$ is strictly increasing, consider $x,y \in \mathbb{R}$ s.t. $x < y$. Since $\exists k \in \mathbb{N}$ s.t. $x < p_k < q_k < y$, we have that $\phi_k(x) \le \phi_k(p_k) < \phi_k(q_k) \le \phi_k(y)$ implies that $\phi_k(x) < \phi_k(y)$ so that for $N \ge k$ we have $\sum_{n = 1}^N \phi_n(x) < \sum_{n = 1}^N \phi_n(y)$ and more generally $\phi(x) < \phi(y)$. This shows that indeed $\phi$ is strictly increasing on $\mathbb{R}$.


  4. To show that $\phi'(x) = 0$ a.e. on $\mathbb{R}$, we will show that $\phi' = \sum \phi_n' = \sum 0_n$, where each $0_n$ is the derivative function of $\phi_n$ which is $0$ almost everywhere.




But I'm not sure how to complete (4)?


Answer



From your answer I see that it is ok to use standard facts from real analysis, so let's use them!





Fubini's theorem on differentiation. Assume $(f_n)_{n\in\mathbb{N}}$ is a sequence of non-decreasing functions on $[a,b]$, and the series $\sum_{n=1}^\infty f_n(x)$ converges for all $x\in [a,b]$, then
$$
\left(\sum_{n=1}^\infty f_n(x)\right)'=\sum_{n=1}^\infty f_n'(x)
$$
a.e. on $[a,b]$.




Now we turn to the original problem. Consider an arbitrary interval $[a,b]$. By assumption $\phi_n'=0$ a.e. on $\mathbb{R}$ and a fortiori on $[a,b]$. By Fubini's theorem on differentiation we get
$$
\phi'(x)=\left(\sum_{n=1}^\infty \phi_n(x)\right)'=\sum_{n=1}^\infty \phi_n'(x)=\sum_{n=1}^\infty 0=0

$$
a.e. on $[a,b]$. Since $[a,b]$ is an arbitrary interval, then $\phi'=0$ a.e. on $\mathbb{R}$.


limits - Finding the value of variables from a given piecewise continuous function.

Given that $f(x)= \dfrac{\sin((a+1)x)+ \sin x} {x}$ if $x<0.$



$$f(x)= c \text{ if } x=0.$$




$$f(x)= \frac{{\sqrt{x+bx^2}}-\sqrt {x}}{b \sqrt{x^3}} \text{ if } x>0.$$



Also given that the function is continuous at $x=0.$ Then find $a,b,c.$
I tried simplifying the left-hand limit of the function and equating it to $f(0)$, and ended up with $a-c=-2$.
Now I am having trouble simplifying the right-hand limit of the function. Please help. Also, after finding the left-hand limit and equating it to $f(0)$, how should I proceed to find the required values?

calculus - Compute $int_0^1frac {sqrt{x}}{(x+3)sqrt{x+3}}dx.$




Evaluate $\int_0^1\frac {\sqrt{x}}{(x+3)\sqrt{x+3}}dx.$





I tried so many substitutions but none of them led me to the right answer:



$u=\frac 1{\sqrt{x+3}}$, $u=\frac 1{x+3}$, $u=\sqrt{x}$... I even got to something like $\int_0^1 \frac {u^2}{(u^2+3)^{\frac 32}}du$ or $\int_0^1 \frac {\sqrt{1-3u^2}}{u}du$... and I don't know how to solve these...


Answer



If you change $u=\sqrt{x} \Rightarrow u^2=x \Rightarrow 2udu=dx$, then:
$$\int_0^1\frac {\sqrt{x}}{(x+3)\sqrt{x+3}}dx=\int_0^1\frac {2u^2}{(u^2+3)^{3/2}}du=\int_0^1\frac {2u^2+6-6}{(u^2+3)^{3/2}}du=\\
2\int_0^1\frac {1}{(u^2+3)^{1/2}}du-6\int_0^1\frac {1}{(u^2+3)^{3/2}}du.$$
Both integrals can then be evaluated with the substitution $u=\sqrt{3}\tan t$.
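Carrying those two pieces through gives $2\operatorname{arcsinh}\frac{1}{\sqrt3}-1=\ln 3-1$ (my own evaluation of the closed form, worth double-checking); a crude midpoint rule on the original integral agrees:

```python
import math

def f(x):
    # integrand sqrt(x) / ((x + 3) * sqrt(x + 3))
    return math.sqrt(x) / ((x + 3) * math.sqrt(x + 3))

def midpoint(g, a, b, n):
    # simple midpoint rule; avoids evaluating at the endpoints
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

numeric = midpoint(f, 0.0, 1.0, 100_000)
closed_form = math.log(3) - 1  # 2*arcsinh(1/sqrt(3)) - 1
print(numeric, closed_form)
```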



elementary number theory - Solving Non-Linear Congruences




I'm trying to solve an exercise from Niven I.M "An Introduction to Theory of Numbers" on page 106, problem 10. The problem wants you to find all the solutions to the congruence \begin{equation*}
x^{12} \equiv 16 \quad (\text{mod }17)
\end{equation*}

Here is my attempt;



First I found that $3$ is a primitive root $(\text{mod }17)$, i.e., $3$ has order $16$, so in particular $3^{16} \equiv 1 \quad (\text{mod }17)$.



This means that we can write $16 \equiv 3^{8} \quad (\text{mod }17)$. So we have \begin{equation*}
x^{12} \equiv 3^{8} \quad (\text{mod }17)
\end{equation*}


Then multiplying the congruence by $3^{16}$ we see that
\begin{equation*}
x^{12} \equiv 3^{24} \quad (\text{mod } 17)
\end{equation*}

We see that $x=9$ is a solution because $9=3^2$.



To find the remaining solution I think we need to have \begin{equation*}
x^{12} \equiv 3^{8+16k} \quad (\text{mod }17)
\end{equation*}

for $k \in \mathbb{Z}/17\mathbb{Z}$.




So we need $12|(8+16k)$.
However, I'm not sure about my last argument that $12|(8+16k)$. Is it right or wrong?
Any help is appreciated.


Answer



$k$ would be in $\mathbb{Z}$. You can also note that both $12$ and $16k+8$ are divisible by $4$. This means $3$ would need to divide $4k+2$. Working mod $3$, we get $k$ congruent to $1$ mod $3$. $k=1$ gives $x=3^2=9$, $k=4$ gives $15$, $k=7$ gives $8$, and $k=10$ gives $2$. Your intuition works, but your reduction could have gone further.
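With a modulus as small as $17$, the full solution set can be confirmed by brute force in Python:

```python
# Brute-force check of x^12 ≡ 16 (mod 17): test all residues mod 17.
solutions = [x for x in range(17) if pow(x, 12, 17) == 16]
print(solutions)  # [2, 8, 9, 15]
```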


elementary number theory - Solving congruences



I've the following congruence system:



\begin{align*}

I \quad 2x \equiv 0\mod 7 \\
II \quad x \equiv 1 \mod 5\\
III \quad x \equiv 3 \mod 4
\end{align*}



Now I tried to solve it:



\begin{align*}
II \quad &x \equiv 1 \mod 5 \Rightarrow x=5x_1+1\\
\stackrel{I}{\Rightarrow} 2(5x_1+1) &\equiv 0 \mod 7 \\

\Leftrightarrow 10x_1+2 &\equiv 0 \mod 7\\
\Leftrightarrow 10x_1 &\equiv -2 \mod 7\\
\Leftrightarrow 10x_1 &\equiv 12 \mod 7\\
\Rightarrow 5x_1 &\equiv 6 \mod 7 \Rightarrow 5x_1=7x_2+6
\end{align*}



and now



$$x=5x_1+1=7x_2+6+1=7x_2+7 \Leftrightarrow x \equiv 0 \mod 7$$




This result is obviously not a solution. If I try to solve it with the Euclidean algorithm, I get the correct result. Now I am trying to understand why the first idea is wrong. In general I understood the way of solving congruence systems, but never thought about why it works.



Any help is appreciated.


Answer



It's a good idea to try a pedestrian way to solve your problem.
Here's mine.



Let $x\in\mathbb{Z}$ be a solution of your system. Then
there exists $a,b,c\in\mathbb{Z}$ such that:
$$\begin{cases}2x=7a\\x=1+5b\\x=3+4c.\end{cases}$$

We must then have1:
$$\begin{cases}40x=140a\\28x=28+140b\\35x=105+140c\end{cases}$$
Now2, $3\times40-3\times28-35=1$, hence:
$$x=-3\times28-105+140(3a-3b-c)=-189+140(3a-3b-c),$$
i.e.,
$$x=91+140(3a-3b-c-2).$$



Hence a necessary condition for $x\in\mathbb{Z}$ to be a solution of your system is:
$$x=91\mod 140.$$
It is now easy to check that this condition is also sufficient: let $m\in\mathbb{Z}$ and set $x=91+140m$. Then:

$$x=7\times(13+20m)$$
hence $x=0\mod 7$.
$$x=1+5\times(18+28m)$$
hence $x=1\mod 5$.
$$x=3+4\times(22+35m)$$
hence $x=3\mod 4$.








In general I understood the way of solving congruence systems, but never thought about why it works.




Hope this helps…






1$140$ naturally appears as the least common multiple of $4$, $5$ and $7$.



2We're lucky to find such a simple relation, aren't we?
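Since $\operatorname{lcm}(7,5,4)=140$, the whole analysis can be confirmed by scanning one period in Python (note that $2x\equiv 0 \mod 7$ is equivalent to $x\equiv 0 \mod 7$, because $2$ is invertible mod $7$):

```python
# Scan one full period of length lcm(7, 5, 4) = 140.
solutions = [x for x in range(140)
             if (2 * x) % 7 == 0 and x % 5 == 1 and x % 4 == 3]
print(solutions)  # the unique residue mod 140
```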



real analysis - Proving convergence of a sequence in an inequality by saying "because the inequality is large"




(The problem statement and its solution were attached as images, not reproduced here.)



I'm currently taking an optimization course where we have a chapter on topology, with no prior experience in it (not a pre-requisite). I generally understand set open-ness, closure, and boundness. With regards to closure, I understand how if the limit of a sequence originally in the set is still in the set, then the set is closed.



Our professor has given simple examples of like a set where x>2, and if you take the sequence x=2+1/n with the limit as n approaches infinity, it does not lie in the set (as it converges to 2), so the set is not closed. But when we got to sets involving polynomials, the prof simply says "because the inequality is large, we can pass to the limit" and poof, the sequences all converge and the inequality still holds true (see problem above).



I've had friends who've taken topology courses call this a "half-assed proof" (which it very much does look like to me), but our professor just hasn't taught us any other way to do it and doesn't answer our questions about it (he just pretty much says "isn't it obvious"). This is pretty dangerous to me, conceptually, as me just assuming the inequality will always hold as you take the limit will definitely not always be the case.




Given my very simple understanding of convergence and closure (which is all he expects us to know), what does "because the inequality is large" actually mean? Any help would be phenomenal


Answer




what does "because the inequality is large" actually mean?




This is a short way to express the classic (I assume that, though topology is not a prerequisite for your course, Calculus is) theorem from Calculus:





Assume that $x_n$ satisfies $x_n\leqslant M$ for any $n$ large enough, and that $x_n$ converges to $\ell$. Then $\ell\leqslant M$.




This is usually proved using $\epsilon-\delta$ definition of limits. You may be more familiar with a version about functions:




Assume that $f(x)$ satisfies $f(x)\leqslant M$ for any $x$ near a point $a$, and that $\ell=\lim_{x\to a} f(x)$ exists. Then $\ell\leqslant M$.




You can change the large inequality $\leqslant$ to $\geqslant$ to obtain





Assume that $x_n$ satisfies $x_n\geqslant M$ for any $n$ large enough, and that $x_n$ converges to $\ell$. Then $\ell\geqslant M$.




But you cannot change $\leqslant$ to $<$ as the theorem would become false. Indeed, $x_n=2-\frac{1}{n}<2$ for any $n\geqslant 1$, but $\ell=\lim_{n\to\infty}x_n=2$.
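This counterexample is easy to watch numerically: every term of $x_n=2-\frac1n$ satisfies the strict inequality, yet the limit lands on the boundary.

```python
# Every term of x_n = 2 - 1/n is strictly below 2 ...
terms = [2 - 1 / n for n in range(1, 10_001)]
print(all(t < 2 for t in terms))
# ... but the terms approach 2, so "<" does not survive passage to the limit.
print(terms[-1])
```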



You can generalize the above results using the terminology of topology: if $A$ is a subset of $\mathbb R^n$, we denote by $\overline{A}$ the closure of $A$. Then,





if $x_n$ is in $A$ for $n$ large enough and converges to $\ell$, then $\ell=\lim_{n\to\infty}x_n$ is in $\overline{A}$.




As we know that $A=(-\infty,3]$ is closed, we have $\overline{A}=A=(-\infty,3]$.



Note: "$x_n$ satisfies the property $P$ for any $n$ large enough" is a half-assed terminology meaning "there exists $n_0$ such that $x_n$ satisfies $P$ for any $n\geqslant n_0$".


Square root of complex numbers



I’m studying complex analysis.



I was reading a section about roots of complex numbers, and I found that $\sqrt{i}$ has two values. ($i$ : imaginary unit) However, for a non-zero real number, $\sqrt{a}$ is always one value. Moreover, $\sqrt{a}$ is a real number.



However, $\sqrt{i}=\pm(\sqrt{0.5}+\sqrt{0.5}i)$.
This is not a number!! I think a number should have one value. If $\sqrt{i}$ is the number having a positive sign, then it shouldn't have the negative sign unless it is zero.




I understand those are the numbers satisfying $x^2=i$.
But.. what does $\sqrt{i}$ represent for?



Is it different from the square root of real numbers?


Answer



Every complex and real number except $0$ has two square roots. If $r$ is one of the square roots, $-r$ is the other.



So it is not true that "$\sqrt i$ has two values"; rather, $i$ has two square roots. And it's not true that positive real numbers have one square root. They have two.




So what does the symbol $\sqrt{\ }$ mean, and what does "the square root" mean? Well, nothing intrinsic. It's a convenience to have a single-valued "square root function", so we arbitrarily chose the positive square root of a positive real to be "the" square root, and the negative square root to be the "other".



We could do the same for complex square roots. We could arbitrarily decide that the root with a non-negative real part is "the" square root and the one with a negative real part is the "other". But what would be the point?



The main reason we do this for the reals is convenience, really. And none of that convenience carries over to the complex numbers. The complex numbers don't have a greater/less-than ordering. You are going to learn that complex exponentiation is cyclic and that each number has multiple logarithms (don't worry about that; you'll learn it later).



So basically we say $\sqrt z$ to mean the set of the two complex numbers $w$ and $-w$ such that $w^2=(-w)^2=z$. Or we write $\sqrt z =\pm w$ to mean that the number to be considered a square root of $z$ could be either of $w$ or $-w$.
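Python's `cmath.sqrt` makes the same arbitrary choice for complex numbers that we make for positive reals: it returns the principal root (the one with non-negative real part), even though both roots square back to $i$.

```python
import cmath

w = cmath.sqrt(1j)  # principal square root of i: sqrt(1/2) * (1 + 1j)
print(w)
print(w ** 2, (-w) ** 2)  # both square back to i
```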


Tuesday, May 24, 2016

calculus - $sin(x)$ infinite product formula: how did Euler prove it?



I know that $\sin(x)$ can be expressed as an infinite product, and I've seen proofs of it (e.g. Infinite product of sine function). I found How was Euler able to create an infinite product for sinc by using its roots? which discusses how Euler might have found the equation, but I wonder how Euler could have proved it.



$$\sin(x) = x\prod_{n=1}^\infty \left(1-\frac{x^2}{n^2\pi^2}\right)$$



So how did Euler derive this? I've seen a proof that requires Fourier series (something not formally known by Euler, I guess). I also know that this equation can be motivated intuitively: the product really does have the same roots as the sine function. However, it's not clear that the product actually converges to the sine function. So, even if Euler guessed it, how was his proof accepted, allowing this formula to be used to calculate the zeta function at even integers?




$$\zeta(2n) = (-1)^{n+1} \frac{B_{2n}(2\pi )^{2n}}{2(2n)!}$$



I've checked a proof of this result, and it requires the sine infinite product.



Also, the Basel problem (solved by him) used this infinite product too, and he got famous by this proof, so the sine infinite product might have been accepted by the mathematical community at that time.


Answer



Check out this link: Ed Sandifer: How Euler Did It, March 2004



And here: Wikipedia: Basel Problem




The Wikipedia link shows there actually does exist an "elementary" proof of the generalized Basel problem (the evaluation of the Riemann zeta function for positive even integers). Graham, Knuth, and Patashnik, in the text Concrete Mathematics, also give an outline of another elementary proof which is easy to follow, using properties of exponential generating functions and some basic calculus. Although it is not how Euler went about it, the approach certainly would have been within his scope of knowledge.
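Independently of how Euler argued, the convergence of the product is easy to observe numerically: truncating it at $10^5$ factors already matches $\sin 1$ to several digits.

```python
import math

def sine_partial_product(x, terms):
    # x * prod_{n=1}^{terms} (1 - x^2 / (n^2 * pi^2))
    p = x
    for n in range(1, terms + 1):
        p *= 1 - x * x / (n * n * math.pi * math.pi)
    return p

approx = sine_partial_product(1.0, 100_000)
print(approx, math.sin(1.0))
```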


Find the remainder of a number with a large exponent

I have to find the remainder of $10^{115}$ divided by 7.



I was following the way my book did it in an example but then I got confused. So far I have,
$\overline{10}^{115}=\overline{10}^{7\cdot 16+3}=(\overline{10}^{7})^{16}\cdot\overline{10}^{3}$,
and that's where I'm stuck.



Also, I don't fully understand what it means to have a bar over a number.
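The bar here typically denotes the residue class of the number mod $7$. As for the computation: by Fermat's little theorem $10^{6}\equiv 1 \pmod 7$ (so the exponent should be reduced mod $6$, rather than decomposed using $7$), and $115=6\cdot 19+1$, giving remainder $3$. Python's three-argument `pow` confirms this:

```python
# Direct check with three-argument pow (modular exponentiation).
remainder = pow(10, 115, 7)
print(remainder)

# Fermat's little theorem route: 10^6 ≡ 1 (mod 7), and 115 = 6*19 + 1.
print(pow(10, 6, 7), 115 % 6)
```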

binomial coefficients - Prove that $sumlimits_{k=0}^rbinom{n+k}k=binom{n+r+1}r$ using combinatoric arguments.





Prove that $\binom{n+0}0 + \binom{n+1}1 +\binom{n+2}2
+\ldots+\binom{n+r}r = \binom{n+r+1}r$ using combinatoric arguments.




(EDITED)




I want to see if I understood Brian M. Scott's approach so I will try again using an analogical approach.



$\binom{n+0}0 + \binom{n+1}1 +\binom{n+2}2
+\ldots+\binom{n+r}r = \binom{n+r+1}r$ can be rewritten as $\binom{n+0}n + \binom{n+1}n +\binom{n+2}n
+\ldots+\binom{n+r}n = \binom{n+r+1}{n+1}$



We can use the analogy of people lining up to buy tickets to see a concert. Let's say there are only $n+1$ tickets available for sale. "Choosing" who gets to attend the concert can be done in two ways.



The first way (the RHS): we have $n+1$ tickets for sale but $n+r+1$ people who want to buy them. Thus, there are $\binom{n+r+1}{n+1}$ ways to "choose" who gets to attend the concert.




The second way (the LHS) is to select the last person in line to buy the first ticket (I think this was the step I missed in my first attempt). Then, we choose $n$ from the remaining $n+r$ people to buy tickets. Or we can ban the last person in line from buying a ticket and choose the second-to-last person in line to buy the first ticket. Then, we have $\binom{n+r-1}n$ ways. This continues until we reach the case where we choose the $n+1$ person in line to buy the first ticket (banning everyone behind him/her from buying a ticket). This can be done in $\binom{n+0}n$ ways.



Therefore, adding up each case on the LHS is equal to the RHS.


Answer



You’re on the right track, but you have a discrepancy between choosing $r$ from $n+r+1$ on the right, and choosing $r$ from $n+r$ on the left, so what you have doesn’t quite work. Here’s an approach that does work and is quite close in spirit to what you’ve tried.



Let $A=\{0,1,\ldots,n+r\}$; clearly $|A|=n+r+1$, so $\binom{n+r+1}r$ is the number of $r$-sized subsets of $A$. Now let’s look at a typical term on the left-hand side. The term $\binom{n+k}k$ is the number of ways to choose a $k$-sized subset of $\{0,1,\ldots,n+k-1\}$; how does that fit in with choosing an $r$-sized subset of $A$?



Let $n+k$ be the largest member of $A$ that we do not pick for our $r$-sized subset; then we’ve chosen all of the $(n+r)-(n+k)=r-k$ members of $A$ that are bigger than $n+k$, so we must fill out our set by choosing $k$ members of $A$ that are smaller than $n+k$, i.e., $k$ members of the set $\{0,1,\ldots,n+k-1\}$. In other words, there are $\binom{n+k}k$ ways to choose our $r$-sized subset of $A$ so that $n+k$ is the largest member of $A$ that is not in our set. And that largest number not in our set cannot be any smaller than $n$, so the choices for it are $n+0,\ldots,n+r$. Thus, $\sum_{k=0}^r\binom{n+k}k$ counts the $r$-sized subsets of $A$ by classifying them according to the largest member of $A$ that they do not contain.







It may be a little easier to see what’s going on if you make use of symmetry to rewrite the identity as



$$\sum_{k=0}^r\binom{n+k}n=\binom{n+r+1}{n+1}\;.\tag{1}$$



Let $A$ be as above; the right-hand side of $(1)$ is clearly the number of $(n+1)$-sized subsets of $A$. Now let $S$ be an arbitrary $(n+1)$-sized subset of $A$. The largest element of $S$ must be one of the numbers $n,n+1,\ldots,n+r$, i.e., one of the numbers $n+k$ for $k=0,\ldots,r$. And if $n+k$ is the largest element of $S$, there are $\binom{n+k}n$ ways to choose the $n$ smaller members of $S$. Thus, the left-hand side of $(1)$ also counts the $(n+1)$-sized subsets of $A$, classifying them according to their largest elements.



The relationship between the two arguments is straightforward: the sets that I counted in the first argument are the complements in $A$ of the sets that I counted in the second argument. There’s a bijection between the $r$-subsets of $A$ and their complementary $(n+1)$-subsets of $A$, so your identity and $(1)$ are essentially saying the same thing.
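For readers who want a quick mechanical confirmation of the identity itself, `math.comb` makes the check a one-liner over many $(n,r)$ pairs:

```python
from math import comb

def hockey_stick_holds(n, r):
    # sum_{k=0}^{r} C(n+k, k) should equal C(n+r+1, r)
    return sum(comb(n + k, k) for k in range(r + 1)) == comb(n + r + 1, r)

print(all(hockey_stick_holds(n, r) for n in range(12) for r in range(12)))
```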


algebra precalculus - Negative Squared Root on Quadratic Equation formula?

I have this basic problem:




In a farm, $X$ animals are added to the farm. These animals gain weight according to the equation: $500 - 2X$ gr. Which interval of animals can the farm take, if the total weight gain is greater than $30,600$ Kg?




Original (Spanish - Español):





En un criadero de cuyes se integran $x$ cuyes, si se tiene presente que los cuyes ganan peso en promedio de $(500 - 2x)$ gramos. ¿Qué intervalo de cuyes puede aceptar esta granja si la ganancia total de peso de los cuyes es mayor a $30 600$ Kg?




Step 1:



Total animals: $x$.



Weigthgain = $(500 - 2x)$ gr.




TotalWeightgain > $30 600(1000)$ converting kg to gr.



Step 2:



$$x(500 - 2x) > 30600000$$



Step 3:



$$0 > 2x^2 - 500x + 30 600 000$$




Step 4:



$$ x = \frac{-(-500) \pm \sqrt{(-500)^2 - 4(2)(30 600 000)}}{2(2)}$$



But as you can see, the expression under the square root (the discriminant) is negative, so no real roots exist.
What should be the next step?
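A quick computation confirms the negative discriminant with the numbers as stated. It also shows (purely my own guess about the intended reading, not part of the problem) that if the bound were $30\,600$ grams instead of kilograms, the inequality would have a sensible solution interval:

```python
import math

# As stated: 2x^2 - 500x + 30,600,000 < 0 has no real solutions.
disc_stated = 500 ** 2 - 4 * 2 * 30_600_000
print(disc_stated)  # negative

# Hypothetical reading with a bound of 30,600 grams:
disc_alt = 500 ** 2 - 4 * 2 * 30_600
r1 = (500 - math.sqrt(disc_alt)) / 4
r2 = (500 + math.sqrt(disc_alt)) / 4
print(r1, r2)  # the inequality holds for integers strictly between the roots
```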

real analysis - show that the recursive sequence converges to $sqrt r$




Let $r>0$. Show that starting with any $x_1 \neq 0 $, the sequence defined by $x_{n+1}=x_n-\frac {x^2_n-r}{2x_n}$ converges to $\sqrt r$ if $x_1 >0$ and $-\sqrt r$ if $x_1<0$.



Proof: 1. show the sequence is bounded when $x_1>0$.
By induction, I can show that $x_n>0.$
2. show that the sequence is monotone decreasing so that I can use the theorem that the monotone bounded sequence has a limit.
If $x_{n+1}\le x_n$, $x^2_n-r\ge0$. But, here I cannot show that $x^2_1-r \ge 0$. How can I proceed from this step? or Is there another way to approach this question?



Thank you in advance.


Answer




First, let us rewrite the recursive definition by
$$x_{n+1}= \frac{x_n}{2}+\frac{r}{2x_n}$$
and
$$2x_{n+1} x_n = x_n^2+r.$$
If $x_1>0$, we see by induction (from the second equation) that $x_n > 0$ for all $n \in \mathbb{N}$. On the other hand, if $x_1 <0$, then $x_n <0$ for all $n \in \mathbb{N}$.



Let $x_1 >0$. If $x_1 = \sqrt{r}$, then $x_2 = \sqrt{r}$ and so on, so the sequence is constant and we are done. Assume $x_1 \neq \sqrt{r}$.

By the AM-GM inequality,
$$x_{n+1}= \frac{1}{2}\left(x_n+\frac{r}{x_n}\right) \ge \sqrt{x_n \cdot \frac{r}{x_n}} = \sqrt{r},$$
with equality only when $x_n = \sqrt{r}$. Hence $x_n \ge \sqrt{r}$ for every $n \ge 2$, so from the second term on the sequence is bounded below by $\sqrt{r}$. (Note that even a starting value $x_1 < \sqrt{r}$ is mapped above $\sqrt{r}$ after one step.)

Moreover, for $n \ge 2$,
$$x_{n+1}-x_n = \frac{r-x_n^2}{2x_n} \le 0,$$
since $x_n^2 \ge r$. Thus $(x_n)_{n \ge 2}$ is monotone decreasing and bounded below, so it converges to some limit $\ell \ge \sqrt{r} > 0$. Passing to the limit in the recursion gives $\ell = \frac{\ell}{2}+\frac{r}{2\ell}$, i.e. $\ell^2 = r$, and therefore $\ell = \sqrt{r}$.



The case $x_1 <0$ can be deduced from the previous one: set $y_1 = -x_1$ and $y_n = - x_n$. Note that $y_{n+1} = y_n/2 + r/(2y_n)$. Thus, $y_n$ converges towards $\sqrt{r}$, and hence $x_n$ converges towards $-\sqrt{r}$.



Additional Comment: This method of computing square roots is known as Babylonian method.
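Running the iteration in Python shows this behaviour concretely: a start below $\sqrt r$ jumps above $\sqrt r$ after one step and then decreases, and a negative start converges to $-\sqrt r$.

```python
import math

def babylonian(x1, r, steps=60):
    # iterate x -> x/2 + r/(2x) starting from x1
    x = x1
    for _ in range(steps):
        x = x / 2 + r / (2 * x)
    return x

print(babylonian(5.0, 2.0))   # converges to sqrt(2)
print(babylonian(0.5, 2.0))   # overshoots above sqrt(2) at step one, then decreases
print(babylonian(-0.1, 2.0))  # negative start: converges to -sqrt(2)
```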


Monday, May 23, 2016

probability theory - Let $X$ be a positive random variable with distribution function $F$. Show that $E(X)=int_0^infty(1-F(x))dx$

Let $X$ be a positive random variable with distribution function $F$. Show that $$E(X)=\int_0^\infty(1-F(x))dx$$



Attempt



$\int_0^\infty(1-F(x))\,dx= \int_0^\infty(1-F(x))\cdot 1\,dx = x (1-F(x))\big|_0^\infty + \int_0^\infty x\,dF(x) $ (integration by parts)




$=0 + E(X)$ where boundary term at $\infty$ is zero since $F(x)\rightarrow 1$ as $x\rightarrow \infty$



Is my proof correct?

Sunday, May 22, 2016

polynomials - Quadratic equation including Arithmetic Progression



For $a, b, c$ are real. Let $\frac{a+b}{1-ab}, b, \frac{b+c}{1-bc}$ be in arithmetic progression . If $\alpha, \beta$ are roots of equation $2acx^2+2abcx+(a+c) =0$ then find the value of $(1+\alpha)(1+\beta)$


Answer



The A.P. condition gives $2abc=a+c$ on simplifying, so after dividing by $2ac$ the quadratic is effectively $x^2+bx+b=0$, since $\frac{a+c}{2ac}=\frac{2abc}{2ac}=b$. As $(1+\alpha)(1+\beta) = 1+(\alpha+\beta) + (\alpha \beta)$, using Vieta we have this equal to $1+(-b)+b=1$.
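A concrete numerical check: picking $a=1$, $c=2$ (illustrative values of mine) forces $b=\frac{a+c}{2ac}=\frac34$ by the A.P. condition, the quadratic becomes $4x^2+3x+3=0$, and the product $(1+\alpha)(1+\beta)$ comes out to $1$ even though the roots are complex.

```python
import cmath

a, c = 1.0, 2.0
b = (a + c) / (2 * a * c)  # enforce the A.P. condition 2abc = a + c
A, B, C = 2 * a * c, 2 * a * b * c, a + c  # coefficients of the quadratic
disc = cmath.sqrt(B * B - 4 * A * C)
alpha = (-B + disc) / (2 * A)
beta = (-B - disc) / (2 * A)
value = (1 + alpha) * (1 + beta)
print(value)  # 1 (up to rounding)
```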


algebra precalculus - Why $sqrt{-1 times -1} neq sqrt{-1}^2$?




We know $$i^2=-1,$$ then why does this happen?
$$
i^2 = \sqrt{-1}\times\sqrt{-1}
=\sqrt{-1\times-1}
=\sqrt{1}
= 1
$$



EDIT: I see this has been dealt with before but at least with this answer I'm not making the fundamental mistake of assuming an incorrect definition of $i^2$.


Answer



From $i^2=-1$ you cannot conclude that $i=\sqrt{-1}$, just like from $(-2)^2 = 4$ you cannot conclude that $-2=\sqrt 4$. The symbol $\sqrt a$ is by definition the positive square root of $a$ and is only defined for $a\ge0$.




It is said that even Euler got confused with $\sqrt{ab} = \sqrt{a}\,\sqrt{b}$. Or did he? See Euler's "mistake''? The radical product rule in historical perspective (Amer. Math. Monthly 114 (2007), no. 4, 273–285).


elementary number theory - Why is $a^n - b^n$ divisible by $a-b$?



I did some mathematical induction problems on divisibility


  • $9^n - 2^n$ is divisible by $7$.

  • $4^n - 1$ is divisible by $3$.

  • $9^n - 4^n$ is divisible by $5$.

Can these be generalized as $a^n - b^n = (a-b)N$, where $N$ is an integer? But why is $a^n - b^n = (a-b)N$?


I also see that $6^n - 5n + 4$ is divisible by $5$, which is $6-5+4$, and $7^n + 3n + 8$ is divisible by $9$, where $7+3+8=18=9\cdot2$.


Are they just a coincidence or is there a theory behind?


Is it about modular arithmetic?


Answer



They are all special cases of the Factor Theorem: $\rm\: x-y\:$ divides $\rm\:f(x)-f(y),\:$ for $\rm\:f\:$ a polynomial with integer coefficients, i.e. $\rm\:f(x)-f(y) = (x-y)\:g(x,y)\:$ for a polynomial $\rm\:g\:$ with integer coefficients. The divisibility relation remains true when one evaluates the indeterminates $\rm\:x,y\:$ at integer values (this yields your examples if we let $\rm\:f(z) = z^n).$



Said simpler: $\rm\: mod\ x\!-\!y\!:\ \ x\equiv y\:\Rightarrow\: f(x)\equiv f(y)\ $ by the Polynomial Congruence Rule.


for example: $\ \rm\ mod\ 9\!-\!4\!:\ \ 9\equiv 4\:\Rightarrow\: 9^n\equiv 4^n$
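The divisibility can be spot-checked over a grid of integer values of $a$, $b$, $n$:

```python
def divides(d, m):
    return m % d == 0

# check that a - b divides a^n - b^n (take a > b so a - b is positive)
checks = all(divides(a - b, a ** n - b ** n)
             for a in range(2, 12)
             for b in range(1, a)
             for n in range(1, 8))
print(checks)
```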


Saturday, May 21, 2016

geometry - arithmetic progression of triangle sides


Let $gcd(a,b,c)=1$ such that $a^2, b^2, c^2$ are in arithmetic progression. Show they can be written in the form


$a=-p^2+2pq+q^2$
$b=p^2+q^2$
$c=p^2+2pq-q^2$


for relatively prime integers $p,q$ of different parities.



so if $a^2, b^2, c^2$ are in arithmetic progression, then $a^2=x, b^2=x+d, c^2=x+2d$, for $x,d\in\mathbb{Z}$
That means $x+x+d=x+d+d \Rightarrow 2x+d=x+2d \Rightarrow x=d$.
So then $a^2=x, b^2=2x, c^2=3x$


Now with primitive pythagorean triples, we have $(2pq, p^2-q^2, p^2+q^2)$. I just don't see how to go from here....i'm sure it's easy, but I'm not seeing something.


Answer



Since $a^2, b^2, c^2$ are in arithmetic progression, $a^2+c^2=2b^2$. If $a$ is even, then so is $c$, so $4 \mid a^2+c^2=2b^2$, so $b$ is also even, giving a contradiction.


Thus $a$ is odd. Similarly $c$ is odd, so $b$ is also odd.


$$(a-b)(a+b)=a^2-b^2=b^2-c^2=(b-c)(b+c)$$


$$\frac{a-b}{2}\frac{a+b}{2}=\frac{b-c}{2}\frac{b+c}{2}$$


By factoring lemma, there exists integers $w, x, y, z$ such that $\frac{a-b}{2}=wx, \frac{a+b}{2}=yz, \frac{b-c}{2}=wy, \frac{b+c}{2}=xz$. Now


$$a=wx+yz, b=yz-wx=wy+xz, c=xz-wy$$


$y(z-w)=x(z+w)$. Again by factoring lemma, there exists integers $d, e, f, g$ such that $y=de, z-w=fg, x=df, z+w=eg$. Now



$$z=\frac{eg+fg}{2}=g\frac{e+f}{2}, w=\frac{eg-fg}{2}=g\frac{e-f}{2}$$ $$a=wx+yz=dfg\frac{e-f}{2}+deg\frac{e+f}{2}=\frac{dg}{2}(e^2+2ef-f^2)$$ $$b=wy+xz=deg\frac{e-f}{2}+dfg\frac{e+f}{2}=\frac{dg}{2}(e^2+f^2)$$ $$c=xz-wy=dfg\frac{e+f}{2}-deg\frac{e-f}{2}=\frac{dg}{2}(-e^2+2ef+f^2)$$


If $dg$ is divisible by $4$ or an odd prime $p$, then $a, b, c$ will not be relatively prime. Thus $dg=\pm 1$ or $\pm 2$.


If $dg=\pm 2$, then $a=\pm (e^2+2ef-f^2), b=\pm (e^2+f^2), c=\pm (-e^2+2ef+f^2)$. Clearly $\gcd(e, f) \mid \gcd(a, b, c)=1$, so $e, f$ are relatively prime. If $e, f$ have the same parity, then $a, b, c$ are all even, a contradiction, so $e, f$ have different parities.


If $dg= \pm 1$, then $a=\pm \frac{e^2+2ef-f^2}{2}, b=\pm \frac{e^2+f^2}{2}, c=\pm \frac{-e^2+2ef+f^2}{2}$. Thus $e, f$ must have the same parity. Put $e+f=2p, e-f=2q$, then $e=p+q, f=p-q$, and


$$a=\pm \frac{e^2+2ef-f^2}{2}=\pm \frac{(p+q)^2+2(p+q)(p-q)-(p-q)^2}{2}=\pm (p^2+2pq-q^2)$$ $$b=\pm \frac{e^2+f^2}{2}=\pm \frac{(p+q)^2+(p-q)^2}{2}=\pm (p^2+q^2)$$ $$c=\pm \frac{-e^2+2ef+f^2}{2}=\pm \frac{-(p+q)^2+2(p+q)(p-q)+(p-q)^2}{2}=\mp(-p^2+2pq+q^2)$$


Again, $p, q$ must be relatively prime and be of different parities.


Finally, combining the 2 cases, $a, b, c$ can be written as


$$a=\epsilon_a (p^2+2pq-q^2), b=\epsilon_b (p^2+q^2), c=\epsilon_c (-p^2+2pq+q^2)$$


where each $\epsilon_a, \epsilon_b, \epsilon_c$ are $1$ or $-1$.
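The final parametrization is easy to verify computationally: for coprime $p,q$ of different parities, the squares of $a,b,c$ form an arithmetic progression and $\gcd(a,b,c)=1$.

```python
from math import gcd

def triple(p, q):
    # the parametrization derived in the answer
    a = -p * p + 2 * p * q + q * q
    b = p * p + q * q
    c = p * p + 2 * p * q - q * q
    return a, b, c

ok = True
for p in range(2, 20):
    for q in range(1, p):
        if gcd(p, q) == 1 and (p - q) % 2 == 1:  # coprime, different parities
            a, b, c = triple(p, q)
            ok = ok and a * a + c * c == 2 * b * b  # squares in A.P.
            ok = ok and gcd(abs(a), gcd(abs(b), abs(c))) == 1
print(ok)
```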


Friday, May 20, 2016

real analysis - Is $sum_{n=1}^{+ infty}left(frac {1}{n}-frac{1}{p_n}right)$ convergent?

It is known that $$\sum_{n=1}^{+ \infty} \frac {1}{n}$$ is divergent. Also, it is known that $$\sum_{n=1}^{+ \infty} \frac {1}{p_n}$$ is divergent where $p_n$ is $n$-th prime number.



I was thinking what would happen (in the sense of convergence) if we termwise subtract these two series to obtain $$\sum_{n=1}^{+ \infty} \left(\frac {1}{n}-\frac{1}{p_n}\right)$$



Is $$\sum_{n=1}^{+ \infty} \left(\frac {1}{n}-\frac{1}{p_n}\right)$$ convergent?

calculus - What are other methods to evaluate $int_0^1 sqrt{-ln x} mathrm dx$



$$\int_0^1 \sqrt{-\ln x} dx$$
I'm looking for alternative methods to the one I have used below to evaluate this integral.
$$y=-\ln x$$



$$\bbox[8pt, border:1pt solid crimson]{e^y=e^{-\ln x}=e^{\ln\frac{1}{x}}=\frac{1}{x}}$$




$$\color{#008080}{dx= -\frac{dy}{e^y}}$$



$$\int_0^1 \sqrt{-\ln x} dx=\int_{\infty}^0 \sqrt{y} \left(-\frac{dy}{e^y}\right)=\int_0^{\infty} e^{-y} y^{\frac{1}{2}}dy=\left(\frac{1}{2}\right)!$$



$$\bbox[8pt, border:1pt solid crimson]{\left(\frac{1}{2}\right)!=\frac{1}{2} \sqrt{\pi}}$$



$$\Large{\color{crimson}{\int_0^1 \sqrt{-\ln x} dx=\frac{1}{2} \sqrt{\pi}}}$$


Answer



As far as I know, there are essentially two ways for proving that $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$.




The first way is to use integration by parts, leading to $\Gamma(z+1)=z\Gamma(z)$, in order to relate $\Gamma\left(\frac{1}{2}\right)$ to the Wallis product. The latter can be computed by exploiting the Weierstrass product for the sine function:
$$\frac{\sin z}{z}=\prod_{n=1}^{+\infty}\left(1-\frac{z^2}{\pi^2n^2}\right)$$
by evaluating it at $z=\frac{\pi}{2}$. The duplication formula for the $\Gamma$ function follows from this approach.



The second way is to use some substitutions in order to relate $\Gamma\left(\frac{1}{2}\right)$ to the gaussian integral
$$\int_{-\infty}^{+\infty}e^{-x^2}\,dx$$
that can be evaluated through Fubini's theorem and polar coordinates.
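Neither route is needed for a quick numerical sanity check: Python's `math.gamma` gives $\Gamma(\tfrac32)=(\tfrac12)!$ directly, and a midpoint rule applied to the original integral agrees with $\tfrac12\sqrt\pi$.

```python
import math

# (1/2)! = Gamma(3/2); math.gamma takes the Gamma-function argument directly
print(math.gamma(1.5), math.sqrt(math.pi) / 2)

# midpoint rule on the original integral of sqrt(-ln x) over (0, 1)
n = 200_000
h = 1.0 / n
approx = h * sum(math.sqrt(-math.log((i + 0.5) * h)) for i in range(n))
print(approx)
```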


calculus - Evaluate $ int_{0}^{infty} frac{1}{sqrt{x(1+e^x)}}mathrm dx $


I would like to evaluate: $$ \int_{0}^{\infty} \frac{1}{\sqrt{x(1+e^x)}}\mathrm dx $$ Given that I can't find an antiderivative $ \int \frac{1}{\sqrt{x(1+e^x)}}\mathrm dx $, a substitution is needed: I tried $$ u=\sqrt{x(1+e^x)} $$ and $$ u=\frac{1}{\sqrt{x(1+e^x)}} $$ but I could not get rid of $x$ in the new integral. Do you have ideas for a substitution?


Answer



$$ \begin{align} \int_0^\infty\frac{1}{\sqrt{x(1+e^x)}}\mathrm{d}x &=2\int_0^\infty\frac{1}{\sqrt{1+e^{x^2}}}\mathrm{d}x\\ &=2\int_0^\infty(1+e^{-x^2})^{-1/2}e^{-x^2/2}\;\mathrm{d}x\\ &=2\int_0^\infty\sum_{k=0}^\infty(-\tfrac{1}{4})^k\binom{2k}{k}e^{-(2k+1)x^2/2}\;\mathrm{d}x\\ &=\sum_{k=0}^\infty(-\tfrac{1}{4})^k\binom{2k}{k}\sqrt{\frac{2\pi}{2k+1}} \end{align} $$
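The exchange of sum and integral can be sanity-checked numerically: after the substitution $x=t^2$ the integral is smooth and fast-decaying, and the alternating series (with the central binomial factors $\binom{2k}{k}/4^k$ generated iteratively) converges to the same value, if slowly:

```python
import math

# numeric value: after x = t^2 the integral equals 2 * ∫_0^∞ dt / sqrt(1 + e^{t^2})
n, upper = 40_000, 10.0
h = upper / n
integral = 2 * h * sum(1 / math.sqrt(1 + math.exp(((i + 0.5) * h) ** 2))
                       for i in range(n))

# partial sum of sum_k (-1/4)^k * C(2k, k) * sqrt(2*pi / (2k+1))
series, c = 0.0, 1.0  # c tracks C(2k, k) / 4^k
for k in range(5000):
    series += (-1) ** k * c * math.sqrt(2 * math.pi / (2 * k + 1))
    c *= (2 * k + 1) / (2 * k + 2)
print(integral, series)
```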


complex analysis - Curve with one negative and one positive sine-like impulse




Is it possible to create an algebraic function that is smooth and continuous (i.e., a function in the form $f(x)$ using algebraic functions, with no curly braces that stipulate different behaviour for different domains of $x$, and not piecewise-continuous, smoothly continuous even in the limit) that has the following properties:



LIST EDITED FOR CLARITY:



For positive integers $a$ and $b$ with $a < b$:




  • In the range $x$ equals $-\infty$ to $-b$, the function yields $0$

  • In the range $x$ equals $-b$ to $-a$, the function yields a single negative 'pulse' of some form with only one local minimum (an example would be $-\cos\bigl(\frac{1}{b-a}\pi (x+\frac{b-a}{2})\bigr)-1$.)


  • In the range $x$ equals $-a$ to $a$, the function yields $0$

  • In the range $x$ equals $a$ to $b$, the function yields a single positive 'pulse' of some form with only one local maximum (an example would be $\cos\bigl(\frac{1}{b-a}\pi (x+\frac{b-a}{2})\bigr)+1$.)

  • In the range $x$ equals $b$ to $\infty$, the function yields $0$



END OF EDIT



The result would be a curve with vaguely sinusoidal behaviour: a single (i.e., non-oscillating) negative pulse between $x=(-b,-a)$ and a positive pulse between $x=(a,b)$, and for other values of $x$, the curve either oscillates or flat-lines, but integrates to $0$ across the ranges specified.



I suspect this requires some form of Fourier analysis, and I am sure that the solution will require complex analysis, which is fine.




I understand (to some degree!) complex numbers. But Fourier analysis is something I simply don't know how to do. I just want to find such a curve so I can play with it :-)



$\color{red}{\text{Edits:}}$



An example that doesn't quite work is $\operatorname{sinc}(\pi (x-n))-\operatorname{sinc}(\pi (x+n))$. This creates a curve that could easily be adjusted to create an area of $1$ between points $a$ and $b$ (and the negative equivalent), but it's hard to hit the points $a$ and $b$ precisely (it needs to cross $y=0$ at precisely those points), and the integrals over the other ranges are not zero.



If the answer is that it's impossible, that's useful info too.



Note that the function should be smoothly continuous, at least in the limit ($\operatorname{sinc}$ is an example).




If behaviour is different in the complex plane, that's also fine. So long as it's continuous, and the real part has the behaviour above.


Answer



By substitution $x = At + B$, you can shift and compress/expand the location of the bumps in this function to be at any two points you like. The function itself seems as if it captures what you're looking for:



$$
y = \frac{\arctan(t)}{1 + t^2}
$$



Here's a plot via desmos.com:

enter image description here



====



Oops. Now that I look at your question more clearly, I see that I've produced something more interesting than what you asked for. I'm gonna leave it there anyhow.



There's a small problem because you say that the function should be "odd", but I think that you mean that it should be symmetric (in an "even" sense) around the point $\frac{a+b}{2}$; otherwise most of what you write is inconsistent in the case $a = -1, b = 1$.



The way I derived my example was to take the derivative of $\arctan^2(t)$ (and then drop a factor of $2$ to simplify). Starting with arctan-like stuff is generally a good way to work things like this.




But in this case, it's even easier: take a look at the function $y = \frac{\sin(x)}{x}$, where you have to define $y = 1$ when $x = 0$; this (or a scaled version of it) is commonly given the name $\operatorname{sinc}$, and it has an everywhere-convergent power series without any "cases" in it, namely
$$
\operatorname{sinc}(x) = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \ldots
$$

This function, or better,
$$
f(x) = \frac{\sin(\pi x)}{\pi x}
$$

has a bump at the origin, and minima on either side of the origin, around $x = \pm 1.43$. Let's call that $x = \pm c$. Then you can arrange, by multiplying by a small constant $k$, to make $\int_{-c}^{c} kf(x)\, dx = 1$. The problem is that the integral over the "tails" won't be $0$. But it'll be pretty close to zero, and you can make a small adjustment to make it work: you just add in a little bit of $\frac{1}{1+x^2}$. So what I'm claiming is that if you look at
$$

u(x) = A\left( \frac{\sin(\pi x)}{\pi x} + \frac{B}{1+x^2} \right),
$$

you'll have, for $A$ somewhere near $2$, and $B$ around $0.2$, a function that has the properties you want. Using the intermediate value theorem, you can prove that there are values for $A$ and $B$ that achieve your goals (assuming that $a$ and $b$ are approximately $\pm 1.5$). Once you achieve this, you can scale vertically and horizontally to make it work for $a = -1, b = +1$, and then scale and translate to make it work for any $a$ and $b$.
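As a quick check of the claimed location of the minima (my addition; the grid resolution is an arbitrary choice), a brute-force search confirms they sit near $x \approx \pm 1.43$:

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x)/(pi x), with the removable singularity filled in
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# brute-force grid search for the first minimum to the right of the origin
xs = [i / 100000 for i in range(1, 200000)]  # grid over (0, 2)
x_min = min(xs, key=sinc)
print(x_min)  # close to 1.4303, where tan(pi x) = pi x
```

By symmetry the other minimum is at $-x_{\min}$, so $c \approx 1.43$ as stated.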


Thursday, May 19, 2016

algebra precalculus - Imaginary numbers and exponent arithmetic

This is a very basic question but it's been driving me crazy for a while... I know that $i^3 = i^2 \cdot i = (-1)i = -i$.


But why does the following reasoning give me the wrong answer? $i^3 = (i^2)^{3/2} = (-1)^{3/2} = (-1)^{1/2} = i$


Thanks

discrete mathematics - Proof of the Hockey-Stick Identity: $\sum\limits_{t=0}^n \binom tk = \binom{n+1}{k+1}$



After reading this question, the most popular answer use the identity
$$\sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1}.$$




What's the name of this identity? Is it a modified form of the identity from Pascal's triangle?



How can we prove it? I tried by induction, but without success. Can we also prove it algebraically?



Thanks for your help.






EDIT 01 : This identity is known as the hockey-stick identity because, on Pascal's triangle, when the addends represented in the summation and the sum itself are highlighted, a hockey-stick shape is revealed.




Hockey-stick


Answer



This is purely algebraic. First of all, since $\dbinom{t}{k} =0$ when $k>t$ we can rewrite the identity in question as
$$\binom{n+1}{k+1} = \sum_{t=0}^{n} \binom{t}{k}=\sum_{t=k}^{n} \binom{t}{k}$$



Recall that, by Pascal's rule,
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$



Hence

$$\binom{t+1}{k+1} = \binom{t}{k} + \binom{t}{k+1} \implies \binom{t}{k} = \binom{t+1}{k+1} - \binom{t}{k+1}$$



Now let's sum this over $t$:
$$\sum_{t=k}^{n} \binom{t}{k} = \sum_{t=k}^{n} \binom{t+1}{k+1} - \sum_{t=k}^{n} \binom{t}{k+1}$$



Let's factor out the last member of the first sum and the first member of the second sum:
$$\sum _{t=k}^{n} \binom{t}{k}
=\left( \sum_{t=k}^{n-1} \binom{t+1}{k+1} + \binom{n+1}{k+1} \right)
-\left( \sum_{t=k+1}^{n} \binom{t}{k+1} + \binom{k}{k+1} \right)$$




Obviously $\dbinom{k}{k+1} = 0$, hence we get
$$\sum _{t=k}^{n} \binom{t}{k}
=\binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}
-\sum_{t=k+1}^{n} \binom{t}{k+1}$$



Let's introduce $t'=t-1$; as $t$ runs over $k+1, \dots, n$, $t'$ runs over $k, \dots, n-1$, hence
$$\sum_{t=k}^{n} \binom{t}{k}
= \binom{n+1}{k+1}
+\sum_{t=k}^{n-1} \binom{t+1}{k+1}

-\sum_{t'=k}^{n-1} \binom{t'+1}{k+1}$$



The last two sums cancel each other, and you get the desired formulation
$$\binom{n+1}{k+1}
= \sum_{t=k}^{n} \binom{t}{k}
= \sum_{t=0}^{n} \binom{t}{k}$$
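The identity is also easy to spot-check numerically; here is a minimal sketch using Python's `math.comb` (the ranges are arbitrary small values):

```python
from math import comb

# verify sum_{t=0}^{n} C(t, k) == C(n+1, k+1) on a grid of small n and k
for n in range(20):
    for k in range(n + 1):
        assert sum(comb(t, k) for t in range(n + 1)) == comb(n + 1, k + 1)
print("hockey-stick identity holds for all 0 <= k <= n < 20")
```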


Wednesday, May 18, 2016

arithmetic - What is the name of this summation formula?

So recently I derived a formula (obviously not the first; it already existed, but that is what got me into summations) that quickly adds all the numbers from $1$ to $n$. However, I recently derived another formula (also not the first, I am guessing) that adds all the numbers from any starting number, not just $1$, up to $n$ (e.g. $14+15+16+17$).



where $i$ = starting number and $n$ = ending number:




$$\sum_{k=i}^{n} k = \left( n-i+1 \right) \cdot \frac{n+i}{2}$$



What I want to know is: what is this formula called? Mine is very complicated looking, so is there a more compact way to write it?
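As a sanity check of the formula in code (the helper name `range_sum` is my own choice), note that the integer division by $2$ is exact, because one of the two factors $n-i+1$ and $n+i$ is always even:

```python
def range_sum(i, n):
    # (n - i + 1) terms, each averaging (n + i)/2
    return (n - i + 1) * (n + i) // 2

print(range_sum(14, 17))  # 14 + 15 + 16 + 17 = 62
```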

integration - Integral from $0$ to Infinity of $e^{-3x^2}$?

How do you calculate the integral from $0$ to Infinity of $e^{-3x^2}$? I am supposed to use a double integral. Can someone please explain? Thanks in advance.
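This doesn't carry out the double integral, but for reference the polar-coordinates trick gives $\int_0^\infty e^{-ax^2}\,dx = \tfrac12\sqrt{\pi/a}$, so for $a=3$ the value should be $\tfrac12\sqrt{\pi/3} \approx 0.5117$. A crude midpoint-rule check (my addition; the cutoff and step count are arbitrary choices):

```python
import math

a, n, upper = 3.0, 100_000, 10.0  # tail beyond 10 is below e^(-300), negligible
h = upper / n
approx = h * sum(math.exp(-a * ((k + 0.5) * h) ** 2) for k in range(n))
exact = 0.5 * math.sqrt(math.pi / a)
print(approx, exact)  # both ≈ 0.5117
```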

Tuesday, May 17, 2016

polynomials - Prove $a^n+1$ is divisible by $a + 1$ if $n$ is odd

Prove $a^n+1$ is divisible by $a + 1$ if $n$ is odd:



We know $a$ cannot be $-1$ and that $n \in \mathbb{N}$.
Since $n$ must be odd, we can rewrite $n$ as $2k+1$. Now we assume the statement holds for $n = 2k+1$ and prove that it holds for the next odd number, $n = 2(k+1)+1$.



$$a^{2(k+1)+1}+1$$
$$=a^{2k+3}+1$$
$$=a^3\cdot a^{2k}+1$$
$$=(a^3+1)\cdot a^{2k} -a^{2k}+1$$




I'm not sure what to do next. The exponent of $a^{2k}$ is even, so I can't apply the fact that $a^n+1$ is divisible by $a+1$ when $n$ is odd to that term.
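A numerical spot-check of the claim being proved (not a proof, of course; the ranges are arbitrary):

```python
# (a + 1) should divide (a^n + 1) for every odd n
for a in range(2, 12):
    for n in range(1, 16, 2):
        assert (a ** n + 1) % (a + 1) == 0
print("divisibility verified for a in 2..11 and odd n < 16")
```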

proof verification - The square root of a prime number is irrational


If $p$ is a prime number, then $\sqrt{p}$ is irrational.




I know that this question has been asked but I just want to make sure that my method is clear. My method is as follows:





Let us assume that the square root of the prime number $p$ is rational. Hence we can write $\sqrt{p} = \frac{a}{b}$. (In their lowest form.) Then $p = \frac{a^2}{b^2}$, and so $p b^2 = a^2$.



Hence $p$ divides $a^2$, so $p$ divides $a$. Substituting $a = pk$, we find that $p$ divides $b$ as well. This is a contradiction, since $a$ and $b$ should be relatively prime, i.e., $\gcd(a,b)=1$.


Monday, May 16, 2016

integration - Using Complex Analysis to Compute $int_0 ^infty frac{dx}{x^{1/2}(x^2+1)}$



I am aware that there is a theorem which states that for $0

My attempt is the following: Let's take $f(z)=\frac{1}{z^{1/2}(z^2+1)}$ as the complexification of our integrand. Define $\gamma_R=\gamma _R^1+\gamma_R^2+\gamma_R^3$, where $\gamma_R^1(t)=t$ for $t$ from $1/R$ to $R$; $\gamma_R^2(t)=\frac{1}{R}e^{it}$, where $t$ goes from $\pi /4$ to $0$ ; and $\gamma_R^3(t)=e^{\pi i/4}t, $ where $t$ goes from $R$ to $1/R$ (see drawing).



The poles of the integrand are at $0,\pm i$, but those are not contained in the contour, so by the residue theorem $\int_{\gamma_R}f(z)dz=0$. On the other hand, $\int_{\gamma_R}=\int_{\gamma_R^1}+\int_{\gamma_R^2}+\int_{\gamma_R^3}$. As $R\to \infty$, $\int_{\gamma_R^1}f(z)dz\to \int_0 ^\infty \frac{1}{x^{1/2}(x^2+1)}dx$. Also, $\vert \int_{\gamma_R^2}f(z)dz\vert \le \frac{\pi }{4R}\cdot \frac{1}{R^2-1}$, and the latter expression tends to $0$ as $R\to \infty$. However, $\int_{\gamma_R^3}f(z)dz=i\int_R ^{1/R}t\,dt=\frac{i/R^2-iR^2}{2}$, which is unbounded in absolute value for large $R$.




Is there a better contour to choose? If so, what is the intuition for finding a good contour in this case?


Answer



For this, you want the keyhole contour $\gamma=\gamma_1 \cup \gamma_2 \cup \gamma_3 \cup \gamma_4$, which passes along the positive real axis ($\gamma_1$), circles the origin at a large radius $R$ ($\gamma_2$), and then passes back along the positive real axis $(\gamma_3)$, then encircles the origin again in the opposite direction along a small radius-$\epsilon$ circle ($\gamma_4$). Picture (borrowed from this answer):



$\hspace{4.4cm}$[keyhole contour figure]



$\gamma_1$ is green, $\gamma_2$ black, $\gamma_3$ red, and $\gamma_4$ blue.



It is easy to see that the integrals over the large and small circles tend to $0$ as $R \to \infty$, $\epsilon \to 0$, since the integrand times the length is $O(R^{-3/2})$ and $O(\epsilon^{1/2})$ respectively. The remaining integral tends to
$$ \int_{\gamma_1 \cup \gamma_3} = \int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx + \int_{\infty}^0 \frac{(xe^{2\pi i})^{-1/2}}{1+(xe^{2\pi i})^2} \, dx, $$

because we have walked around the origin once, and chosen the branch of the square root based on this. This simplifies to
$$ (1-e^{-\pi i})\int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx = 2I. $$
Now you need to compute the residues of the function at the two poles, using the same branch of the square root. The residues of $1/(1+z^2) = \frac{1}{(z+i)(z-i)}$ are at $z=e^{\pi i/2},e^{3\pi i/2}$, so you find
$$ 2I = 2\pi i \left( \frac{(e^{\pi i/2})^{-1/2}}{2i} +\frac{(e^{3\pi i/2})^{-1/2}}{-2i} \right) = 2\pi \sin{\frac{1}{4}\pi} = \frac{2\pi}{\sqrt{2}} $$
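As a numerical cross-check of this value (my addition, not part of the original answer): the substitution $x = e^s$ turns the integral into a smooth, exponentially decaying one that a plain trapezoid-style sum handles extremely well:

```python
import math

def integrand(s):
    # after x = e^s, dx = e^s ds:  x^(-1/2)/(1+x^2) dx  ->  e^(s/2)/(1+e^(2s)) ds
    return math.exp(0.5 * s) / (1.0 + math.exp(2.0 * s))

s_max, n = 80.0, 200_000
h = 2.0 * s_max / n
approx = h * sum(integrand(-s_max + k * h) for k in range(n + 1))
print(approx)  # ≈ 2.2214, i.e. pi/sqrt(2)
```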






However, I do recommend that you don't attempt to use contour integration for all such problems: imagine trying to do
$$ \int_0^{\infty} \frac{x^{s-1}}{(a+bx^n)^m} \, dx, $$
for general $a,b,s,m,n$ such that it converges, using that method! No, the useful thing to know is that

$$ \frac{1}{A^n} = \frac{1}{\Gamma(n)}\int_0^{\infty} \alpha^{n-1}e^{-\alpha A} \, d\alpha, $$
which enables you to do more general integrals of this type. Contour integration's often a quick and cheap way of doing simple integrals, but becomes impractical in some general cases.


Uniform convergence of series $\sum\limits_{n=2}^\infty\frac{\sin n x}{n\log n}$




Using Dirichlet series test I proved that the series $\displaystyle\sum\limits_{n=2}^\infty\frac{\sin n x}{n\log n}$ converges for all $x\in\mathbb{R}$.



How to determine whether this series converges uniformly on $\mathbb{R}$?


Answer



Let $\left\{a_n\right\}$ be a decreasing sequence of real numbers such that $n\cdot a_n\to 0$. Then the series $\sum_{n\geqslant 2}a_n\sin (nx)$ is uniformly convergent on $\mathbb R$.



Thanks to Abel transform, we can show that the convergence is uniform on $[\delta,2\pi-\delta]$ for all $\delta>0$. Since the functions are odd, we only have to
prove the uniform convergence on $[0,\delta]$. Put $M_n:=\sup_{k\geqslant n}ka_k$, and
$R_n(x)=\sum_{k=n}^{\infty}a_k\sin (kx)$. Fix $x\neq 0$ and $N$ such that $\frac 1N\leqslant x<\frac 1{N-1}$. Put for $n\lt N$:
$$A_n(x)=\sum_{k=n}^{N-1}a_k\sin (kx)\mbox{ and }B_n(x):=\sum_{k=\max(n,N)}^{+\infty}a_k\sin (kx),$$

and for $n\geq N$, $A_n(x):=0$.



Since $|\sin t|\leqslant t$ for $t\geq 0$ we have
$$|A_n(x)|\leqslant \sum_{k=n}^{N-1}a_kkx\leqslant M_nx(N-n)\leqslant \frac{N-n}{N-1}M_n,$$
so $|A_n(x)|\leqslant M_n$.



If $N> n$, we have after writing $D_k=\sum_{j=0}^k\sin jx$, $|D_k(x)|\leqslant \frac cx$ on $(0,\delta]$ for some constant $c$. Indeed, we have
$|D_k(x)|\leqslant \frac 1{\sqrt{2(1-\cos x)}}$ and $\cos x=1-\frac{x^2}2(1+\xi)$ where
$|\xi|\leqslant \frac 12$ so $2(1-\cos x)\geqslant \frac{x^2}2$ and $|D_k(x)|\leqslant \frac{\sqrt 2}x$. Therefore
$$|B_n(x)|\leqslant \frac{\sqrt 2}x\sum_{k=N}^{+\infty}(a_k-a_{k+1})+a_N\frac{\sqrt 2}x=\frac{2\sqrt 2}xa_N\leqslant 2\sqrt 2 Na_N\leqslant 2\sqrt 2M_n.$$

We get the same bound if $N\leqslant n$. Finally $|R_n(x)|\leqslant (2\sqrt 2+1)M_n$ for all $0\leqslant x\leqslant \delta$, so the convergence is uniform on $\mathbb R$.






Added later: it's an example of a Fourier series which is uniformly convergent on the real line, but not absolutely convergent at any point of $(0,2\pi)$. Indeed take $x\in (0,2\pi)$. Since $|\sin(nx)|\geqslant \sin^2(nx)$, absolute convergence would imply the convergence of $\sum_{n\geqslant 2}\frac{\sin^2(nx)}{n\log n}$. We have $\sin ^2(nx)= -\frac 14(e^{inx}-e^{-inx})^2=-\frac 14 (e^{2inx}+e^{-2inx}-2)=\frac 12-\frac 12\cos (2nx)$, and an Abel transform shows that the series $\sum_{n\geqslant 2}\frac{\cos(2nx)}{n\log n}$ is convergent. So the series $\sum_{n\geqslant 2}\frac 1{n\log n}$ would be convergent, which is not the case, as the integral test shows.


real analysis - Using the definition of a limit, prove that $\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$




Using the definition of a limit, prove that $$\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$$





I know how i should start: I want to prove that given $\epsilon > 0$, there $\exists N \in \mathbb{N}$ such that $\forall n \ge N$



$$\left |\frac{n^2+3n}{n^3-3} - 0 \right | < \epsilon$$



but from here how do I proceed? I feel like I have to get rid of the $3n$ and $-3$ terms, but clearly $$\left |\frac{n^2+3n}{n^3-3} \right | <\frac{n^2}{n^3-3}$$ is not true.


Answer



This is not so much of an answer as a general technique.




What we do in this case, is to divide top and bottom by $n^3$:
$$
\dfrac{\frac{1}{n} + \frac{3}{n^2}}{1-\frac{3}{n^3}}
$$
Suppose we want this to be less than a given $\epsilon>0$. We know that $\frac{1}{n}$ can be made as small as we like. First, we split this into two parts:
$$
\dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}}
$$



The first thing we know is that for large enough $n$, say $n>N$, $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n}$. We will use this fact.




Let $\delta >0$ be so small that $\frac{\delta}{1-\delta} < \frac{\epsilon}{2}$. Now, let $n$ be so large that $\frac{1}{n} < \delta$, and $n>N$.



Now, note that $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n} < \delta$. Furthermore, $1- \frac{3}{n^3} > 1 - \frac{3}{n^2} > 1-\delta$.



Thus,
$$
\dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}}
< \frac{\delta}{1-\delta} + \frac{\delta}{1-\delta} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon
$$




For large enough $n$. Hence, the limit is zero.



I could have had a shorter answer, but you see that using this technique we have reduced powers of $n$ to this one $\delta$ term, and just bounded that $\delta$ term by itself, bounding all powers of $n$ at once.
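As an illustration (my addition; the function name `f` and $\epsilon = 10^{-3}$ are arbitrary), one can find a concrete witness $N$ for a given $\epsilon$ mechanically and confirm the tail stays below $\epsilon$:

```python
def f(n):
    return (n * n + 3 * n) / (n ** 3 - 3)

eps = 1e-3
# first n >= 2 with f(n) < eps; f is decreasing on this range
N = next(n for n in range(2, 10**6) if f(n) < eps)
assert all(f(n) < eps for n in range(N, N + 5000))
print(N, f(N))  # N == 1003 for eps = 1e-3
```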


complex analysis - Evaluate $P.V. \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)}\, dx$ where $0 < \alpha < 1$


Evaluate
$$P.V. \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)} dx $$ where $0 < \alpha <1$



Thm. Let $P$ and $Q$ be polynomials of degree $m$ and $n$, respectively, where $n \geq m+2$. If $Q(x)\neq 0$ for $x > 0$, $Q$ has a zero of order at most $1$ at the origin, and $f(z)= \frac{z^\alpha P(z)}{Q(z)}$, where $0 < \alpha <1$, then $$P.V. \int^{\infty}_0 \frac{x^ \alpha P(x)}{Q(x)} dx= \frac{2 \pi i}{1- e^{i \alpha 2 \pi }} \sum^{k}_{j=1} Res [f,z_j] $$ where $z_1,z_2 ,\dots , z_k$ are the nonzero poles of $\frac{P}{Q}$.



Attempt


I found that $P(x)=1$, whose degree is $m=0$, and $Q(x)=x(x+1)$, whose degree is $n=2$, so the hypothesis $n \geq m+2$ does hold, since $2 \geq 0+2$.


Answer



We assume $0<\alpha<1$. We have



$$ P.V. \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)} dx =\frac{\pi}{\sin(\alpha \pi)}. $$



Hint. One may prove that
$$
\begin{align}
& \int^{\infty}_{0} \frac{x^\alpha }{x(x+1)}\, dx
\\&=\int^1_{0} \frac{x^\alpha }{x(x+1)}\, dx+\int^{\infty}_{1} \frac{x^\alpha }{x(x+1)}\, dx
\\&=\int^1_{0} \frac{x^{\alpha-1} }{x+1}\, dx+\int^0_{1} \frac{x^{1-\alpha} }{1+\frac1x}\cdot \left(- \frac{dx}{x^2}\right)
\\&=\int^1_{0} \frac{x^{\alpha-1} }{1+x}\, dx+\int^1_{0} \frac{x^{-\alpha} }{1+x}\,dx
\\&=\int^1_{0} \frac{x^{\alpha-1} (1-x)}{1-x^2}\, dx+\int^1_{0} \frac{x^{-\alpha}(1-x) }{1-x^2}\,dx
\\&=\frac12\psi\left(\frac{\alpha+1}2\right)-\frac12\psi\left(\frac{\alpha}2\right)+\frac12\psi\left(1-\frac{\alpha}2\right)-\frac12\psi\left(\frac{1-\alpha}2\right)
\\&=\frac{\pi}{\sin(\alpha \pi)}
\end{align}
$$
where we have used the classic integral representation of the digamma function
$$
\int^1_{0} \frac{1-t^{a-1}}{1-t}\, dt=\psi(a)+\gamma, \quad a>-1,\tag 1
$$
and the properties
$$
\psi(a+1)-\psi(a)=\frac1a,\qquad \psi(a)-\psi(1-a)=-\pi\cot(a\pi).
$$
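A numerical confirmation of the closed form (my addition; $\alpha = 1/3$ is an arbitrary test value). The substitution $x = e^s$ makes the integrand smooth and exponentially decaying, so a plain Riemann sum on a truncated interval is very accurate:

```python
import math

def beta_integral(alpha, s_max=60.0, n=100_000):
    # ∫_0^∞ x^(α-1)/(1+x) dx  ->  ∫_ℝ e^(αs)/(1+e^s) ds  under x = e^s
    h = 2.0 * s_max / n
    return h * sum(math.exp(alpha * (-s_max + k * h)) / (1.0 + math.exp(-s_max + k * h))
                   for k in range(n + 1))

alpha = 1.0 / 3.0
print(beta_integral(alpha), math.pi / math.sin(math.pi * alpha))  # both ≈ 3.6276
```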



analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...