Tuesday, September 29, 2015

fake proofs - imaginary number $i$ equals $-6/3.4641$?

$$(-4)^3 = -64$$
so the cube root of $-64$ should be $-4$ then.
$$\sqrt[3]{-64} = -4$$

But if you calculate the cube root of $-64$ with WolframAlpha ( http://www.wolframalpha.com/input/?i=third+root+of+-64 ),
you get a complex number with a real part of $2$ and an imaginary part of $3.4641016151$:
$$\sqrt[3]{-64} \approx 2 + 3.4641016151\,i$$



so if the cube root of $-64$ equals $-4$ AND $2 + 3.46410162 i$ (which I know is a bit foolish), then you could actually rearrange it like this
$$\sqrt[3]{-64} \approx 2 + 3.46410162\,i \qquad\big|\ -2$$
$$\sqrt[3]{-64} - 2 \approx -6 \approx 3.46410162\,i \qquad\big|\ \div 3.46410162$$
$$\frac{\sqrt[3]{-64} - 2}{3.46410162} \approx \frac{-6}{3.46410162} \approx i$$



and this has to be totally wrong, so my question is: where exactly is the mistake?
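A quick numerical sketch in Python (assuming NumPy is available) makes the situation concrete: WolframAlpha reports the principal cube root, while $-4$ is a different one of the three cube roots, and equating two different roots is exactly where the argument breaks.

```python
import cmath
import numpy as np

# Principal cube root (branch cut along the negative real axis); this is
# what WolframAlpha returns: 2 + 3.4641...i = 4*exp(i*pi/3).
print((-64) ** (1 / 3))                 # Python 3 also returns the principal root
print(cmath.exp(cmath.log(-64) / 3))    # same value

# The real cube root -4 is a *different* one of the three cube roots:
print(np.cbrt(-64.0))                   # -4.0

# All three cube roots of -64: -4 and 2 + 3.4641i both appear, but they are
# distinct numbers, so the step that equates them is where the proof breaks.
print([4 * cmath.exp(1j * (cmath.pi + 2 * cmath.pi * k) / 3) for k in range(3)])
```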

Monday, September 28, 2015

number theory - Proving $\gcd(a,b)=1$, $a\mid c$ and $b\mid c$ implies $ab\mid c$ WITHOUT Euclid's or Bezout's lemma.

I want to prove the following statement:
For any $a,b,c\in\mathbb Z$, if $a,b$ are coprime and both $a$ and $b$ divide $c$, then $ab$ has to divide $c$ as well.



Before marking this as a duplicate - I've spent some time searching and I couldn't find an elementary proof that does not rely on Euclid's or Bezout's lemma.



My approach is as follows:




Since $b$ divides $c$, there exists an integer $k$, such that $kb=c$.



Since $a$ divides $c$ and $c=kb$, it also divides $kb$.



I now want to show that assuming $a$ doesn't divide $k$ implies the existence of a common divisor $d>1$ of $a$ and $b$; however, I'm walking in circles trying to prove this step.



I'm not 100% sure there's an easy way to prove it, but I'd appreciate any hint on how to prove the statement without using Euclid's or Bezout's lemma, just using the definition of divisibility



$x\mid y\ \Rightarrow\ \exists\ z\in\mathbb Z:zx=y$




and Euclidean division



$\forall\ x\in\mathbb Z,y\in\mathbb Z^\times\ \exists\ q\in\mathbb Z,r\in\{0,\ldots,y-1\}:x=qy+r$.
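The statement itself can be sanity-checked by brute force; a small Python sketch over a limited range:

```python
from math import gcd

# Brute-force sanity check over a small range: whenever gcd(a, b) = 1,
# a | c and b | c, we should also have ab | c.
for a in range(1, 25):
    for b in range(1, 25):
        if gcd(a, b) == 1:
            for c in range(400):
                if c % a == 0 and c % b == 0:
                    assert c % (a * b) == 0, (a, b, c)
print("no counterexample found")
```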

elementary number theory - Show that if $\gcd(abc,d^2)=1$, then $\gcd(a,d)=\gcd(b,d)=\gcd(c,d)=1$.

Let $a,b,c$ be integers. Show that if $\gcd(abc,d^2)=1$, then $\gcd(a,d)=\gcd(b,d)=\gcd(c,d)=1$.



Here is my way of approaching this question:




Suppose $\gcd(abc,d^2)=1$. Then there exist integers $x,y$ such that $abcx+d^2y=1$.



$a(bcx)+d(dy)=1$, which implies that $\gcd(a,d)=1$



$b(acx)+d(dy)=1$, which implies that $\gcd(b,d)=1$



$c(abx)+d(dy)=1$, which implies that $\gcd(c,d)=1$



So far I don't really know if this is the way to answer this question. Any help would be appreciated.

calculus - Evaluating $\lim_{x\to 0} \frac{\tan x - \sin x}{x^3}$. I get $0$, but book says $1/2$

$$\lim_{x\to\ 0} \frac{\tan x - \sin x}{x^3} $$



I am getting $0$. I even tried to plug in small values for $x$ and got the same answer. But the book's answer is $1/2$.



I don't understand. Am I doing something wrong? Please help!
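For reference, one standard route to the book's value is the factorization $\tan x - \sin x = \tan x\,(1-\cos x)$ together with the limits $\frac{\tan x}{x}\to 1$ and $\frac{1-\cos x}{x^2}\to\frac12$:

$$\frac{\tan x-\sin x}{x^3}=\frac{\tan x}{x}\cdot\frac{1-\cos x}{x^2}\ \longrightarrow\ 1\cdot\frac{1}{2}=\frac{1}{2}\qquad(x\to0).$$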

algebra precalculus - Do odd imaginary numbers exist?



Is the concept of an odd imaginary number defined/well-defined/used in mathematics? I searched around but couldn't find anything. Thanks!


Answer



"Odd" has several meanings in mathematics: you have odd integers (those which are not multiples of $2$); you have odd functions (those that satisfy $f(-x) = -f(x)$ for all $x$); and possibly others.




If you want to stick to the first meaning, then there's two things to keep in mind: even for just real numbers, "odd" in the sense of "not a multiple of $2$" doesn't really work very well, because every real number is a multiple of $2$: given $r\in\mathbb{R}$, $r = 2\left(\frac{r}{2}\right)$, and $\frac{r}{2}$ is a real number. The same is true for complex numbers: if $a+bi\in\mathbb{C}$, then $a+bi = 2\left(\frac{a}{2} + \frac{b}{2}i\right)$, so every complex number would be "a multiple of $2$", so no complex number would be odd. So this concept does not really do much for complex numbers as a whole.



On the other hand, you can restrict to those complex numbers which have integer real and imaginary parts: $a+bi$ with $a,b\in\mathbb{Z}$ (instead of $a,b\in\mathbb{R}$, like in $\mathbb{C}$). These are called the Gaussian integers because they were first studied by Gauss.



For these numbers, you can talk about "multiples of $2$": a Gaussian integer $a+bi$ is a multiple of $2$ if and only if both $a$ and $b$ are even: because if $a+bi = 2(x+yi)$ with $a,b,x,y\in\mathbb{Z}$, then $a=2x$ is even and $b=2y$ is also even. So the "odd Gaussian integers" would be all Gaussian integers that are not multiples of $2$, namely the $a+bi$ that have either $a$ or $b$ odd. Note that $1$ would be an "odd Gaussian integer" (which looks good, because $1$ is an odd integer), but then again so would $1+2i$ (which may not look so good).



There is also a slight wrinkle: in the integers, if you add two integers of the same parity, you will always get something that is "even", and if you add two integers of different parity you will get something that is "odd." This does not happen with the above notion of "even" in the Gaussian integers. For instance, $1+2i$ is "odd", and so is $2+i$; if we add them, we get $3+3i$ which is also "odd." In fact, we have four different kinds of Gaussian integers: the "even" ones (both real and imaginary parts are even); the "even-odd" ones (real part even, imaginary part odd); the "odd-even" ones (real part odd, imaginary part even); and the "odd" ones (both real and imaginary part odd). It is only if you add two of the same kind that you will get an "even" Gaussian, and if you add two different kinds you will get an "odd" Gaussian. So this concept of "even" and "odd" does not seem to behave like it does in the integers. Added. What is worse, as Bill points out in comments, even this does not work well with multiplication, since for example the product of an "even-odd" by an "even-odd" gives an "odd-even", not an "even-odd".



We might, then, want to look at another possibility.




Another possibility is to notice that $2i = (1+i)^2$, and $i$ is invertible in the Gaussian integers. So instead of looking at the multiples of $2$, you can try looking at the multiples of $1+i$ (just like you don't define "odd" in terms of multiples of $4$ in the integers; I bring up $4$ because $4=2^2$). When is a Gaussian integer a multiple of $1+i$?
$$(x+yi)(1+i) = (x-y) + (x+y)i.$$
Can we recognize such numbers? I claim they are precisely those Gaussian integers $a+bi$ with $a+b$ even.



Indeed, if $a+bi$ is a multiple of $1+i$, then as above we have $a=x-y$ and $b=x+y$ for some integers $x$ and $y$, so $a+b = (x-y)+(x+y) = 2x$ is even. Conversely, suppose that $a+bi$ has $a+b$ even, $a+b = 2k$. Then $a-b$ is also even (since $a-b = (a+b)-2b$), so we can write $a-b = 2\ell$. Then
\begin{align*}
(k -\ell i)(1+i) &= (k+\ell) + (k-\ell)i\\
&= \left( \frac{a+b}{2} + \frac{a-b}{2}\right) + \left(\frac{a+b}{2} - \frac{a-b}{2}\right)i\\
&= a + bi,
\end{align*}
so $a+bi$ is a multiple of $1+i$. So if you define "odd" in terms of "multiple of $1+i$", then it corresponds precisely to whether or not $a\equiv b\pmod{2}$: if $a$ and $b$ have the same parity, then $a+bi$ is "even"; if $a$ and $b$ have different parity then $a+bi$ is "odd".



It also has the advantage of mirroring a bit better what happens with parity in the integers: if you add two "even" or two "odd" Gaussian integers (under this definition), then the sum is "even"; and if you add an "even" and an "odd" Gaussian integer you get an "odd" Gaussian integer. Also, if you multiply an "even" Gaussian by any Gaussian you get an "even" Gaussian: for if $a$ and $b$ have the same parity, then
$$(a+bi) (x+yi) = (ax-by) + (ay+bx)i.$$
If both $a$ and $b$ are even, then so are $ax-by$ and $ay+bx$, so the result is even. If both $a$ and $b$ are odd, then either $x$ and $y$ have the same parity, in which case both $ax-by$ and $ay+bx$ are even; or else $x$ and $y$ have different parity, so that both $ax-by$ and $ay+bx$ are odd. Either way, the product is "even." Similarly, if you multiply two "odd" Gaussians, the result will be "odd."



So I think the latter concept is a bit more intuitive, but that may be just me.



Post data. There is in fact a lot of very interesting stuff in the background of the above; considering $1+i$ instead of $2$ in the Gaussian integers comes from Algebraic Number Theory.
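The $(1+i)$-parity rules described above are easy to test numerically; a small Python sketch (random trials, not a proof):

```python
import random

# "Even" Gaussian integer = multiple of 1+i, i.e. a+bi with a+b even.
def parity(a, b):
    return (a + b) % 2          # 0 = "even", 1 = "odd"

for _ in range(10_000):
    a, b, x, y = (random.randint(-20, 20) for _ in range(4))
    # sums behave like integer parity: odd+odd = even, even+odd = odd, ...
    assert parity(a + x, b + y) == (parity(a, b) + parity(x, y)) % 2
    # ... and so do products: (a+bi)(x+yi) = (ax-by) + (ay+bx)i
    assert parity(a * x - b * y, a * y + b * x) == parity(a, b) * parity(x, y)
print("parity via 1+i follows the usual even/odd rules")
```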



riemann integration - Continuity of a Function $f$

I've been studying different types of functions and I came across one on What is an example that a function is differentiable but derivative is not Riemann integrable, but I can't figure out why $f(x)=x^{3/2}\sin(\frac{1}{x})$ on $[0,1]$ is continuous, because it seems that it doesn't exist at $x=0$. But I know it is differentiable on $(0,1)$, and that its derivative is not Riemann integrable. Some clarification, please?

Sunday, September 27, 2015

integration - How to solve the following integral $\int_0^{\frac{\pi}{2}}\sqrt[3]{\sin^8x\cos^4x}\,dx$?




How to solve the following integral?



$$\int_0^{\frac{\pi}{2}}\sqrt[3]{\sin^8x\cos^4x}\,dx$$



Preferably without the universal substitution $$\sin(t) = \dfrac{2\tan(t/2)}{1+\tan^2(t/2)}$$


Answer



Using $\operatorname{B}(a,\,b)=2\int_0^{\pi/2}\sin^{2a-1}x\cos^{2b-1}xdx$, your integral is$$\frac12\operatorname{B}\left(\frac{11}{6},\,\frac{7}{6}\right)=\frac{\Gamma\left(\frac{11}{6}\right)\Gamma\left(\frac{7}{6}\right)}{2\Gamma(3)}=\frac{5}{144}\Gamma\left(\frac{5}{6}\right)\Gamma\left(\frac{1}{6}\right)=\frac{5\pi}{144}\csc\frac{\pi}{6}=\frac{5\pi}{72}.$$Here the first $=$ uses $\operatorname{B}(a,\,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$, the second $\Gamma(a+1)=a\Gamma(a)$, the third $\Gamma(a)\Gamma(1-a)=\pi\csc\pi a$.
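A numerical cross-check of the closed form, as a sketch assuming NumPy and SciPy:

```python
import numpy as np
from scipy import integrate

val, _ = integrate.quad(lambda x: (np.sin(x)**8 * np.cos(x)**4)**(1/3), 0, np.pi/2)
print(val, 5 * np.pi / 72)      # both ≈ 0.218166
```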


linear algebra - Maximum eigenvalue of a hollow symmetric matrix




Is the maximum eigenvalue (or spectral radius) of a matrix of the following form equal to a row or column sum of the matrix?




$$
A=\left( \begin{array}{cccc}
0 & a & \cdots & a \\
a & 0 & \cdots & a \\
\vdots & \vdots & \ddots & \vdots \\
a & a & \cdots & 0\end{array} \right) $$



The matrix is square with dimension $n \times n$ where $n = 2,3,4,...$, hollow (all elements in the principal diagonal = 0), symmetric and all off diagonal elements have the same value.




Is the spectral radius of such matrices = $(n-1)\times a$? Why?


Answer



Start with the matrix $A$ of all $a$'s, whose eigenvalues are zero except for eigenvalue $na$ having multiplicity one (because $\operatorname{rank}(A) = 1$).



Now subtract $aI$ from $A$ to get your matrix. The eigenvalues of $A-aI$ are those of $A$ shifted down by $a$. We get an eigenvalue $(n-1)a$ of multiplicity one and an eigenvalue $-a$ with multiplicity $n-1$.



So the spectral radius (largest absolute value of an eigenvalue) of $A$ is $|na|$, and the spectral radius of $A-aI$ is $\max(|(n-1)a|,|a|)$. The latter is simply $|(n-1)a|$ unless $n=1$.
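A quick NumPy check of the shift argument, with illustrative values $n=6$, $a=2.5$:

```python
import numpy as np

n, a = 6, 2.5
A = a * (np.ones((n, n)) - np.eye(n))    # hollow symmetric matrix from the question
eigs = np.linalg.eigvalsh(A)
print(eigs)                              # -a with multiplicity n-1, and (n-1)a
print(max(abs(eigs)), (n - 1) * abs(a))  # spectral radius equals (n-1)|a|
```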


Saturday, September 26, 2015

measure theory - Proof of Poincare's Inclusion-Exclusion Indicator Function Formula by Induction



Poincaré's inclusion-exclusion formula is given by



\begin{align} \Bbb{I}_{\bigcup_{1\leq j\leq n}A_j}=\sum_{1\leq j\leq n}\Bbb{I}_{A_j}+\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}\end{align}
where $\Bbb{I}$ represents the indicator function. I want to prove this formula by induction. Below is my work



MY TRIAL



Assume true for $n,$ that is for $P_{n}$




\begin{align} \Bbb{I}_{\bigcup_{1\leq j\leq n}A_j}=\sum_{1\leq j\leq n}\Bbb{I}_{A_j}+\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}\end{align}
Now \begin{align} \bigcup_{1\leq j\leq n+1}A_j=\left(\bigcup_{1\leq j\leq n}A_j\right)\cup A_{n+1}\end{align}
Applying the indicator function rule,



\begin{align} \Bbb{I}_{\bigcup_{1\leq j\leq n+1}A_j}=\Bbb{I}_{\bigcup_{1\leq j\leq n}A_{j}}+\Bbb{I}_{A_{n+1}}-\Bbb{I}_{\bigcup_{1\leq j\leq n}A_{j}}\Bbb{I}_{A_{n+1}}\end{align}
So, we apply $P_n$ to get



\begin{align} \Bbb{I}_{\bigcup_{1\leq j\leq n+1}A_j}=&\sum_{1\leq j\leq n}\Bbb{I}_{A_j}+\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}+\Bbb{I}_{A_{n+1}}-\Bbb{I}_{\bigcup_{1\leq j\leq n}A_{j}}\Bbb{I}_{A_{n+1}}\\=&\sum_{1\leq j\leq n+1}\Bbb{I}_{A_j}+\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}-\Bbb{I}_{\bigcup_{1\leq j\leq n}A_{j}}\Bbb{I}_{A_{n+1}}
\end{align}

Again applying $P_{n}$ to the last term, we get



\begin{align} \Bbb{I}_{\bigcup_{1\leq j\leq n+1}A_j}=&\sum_{1\leq j\leq n+1}\Bbb{I}_{A_j}+\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}-\left(\sum_{1\leq j\leq n}\Bbb{I}_{A_j}+\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}\right)\Bbb{I}_{A_{n+1}}\\=&\sum_{1\leq j\leq n+1}\Bbb{I}_{A_j}+\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}-\sum_{1\leq j\leq n}\Bbb{I}_{A_j}\Bbb{I}_{A_{n+1}}-\sum^{n}_{r=2}(-1)^{r+1}\sum_{1\leq i_1<\cdots<i_r\leq n}\Bbb{I}_{A_{i_1}\cap\cdots\cap A_{i_r}}\Bbb{I}_{A_{n+1}}\end{align}
From here, I'm not sure how to continue. Any help please? Thanks!


Answer



At your last line observe that
$$\Bbb{I}_{A_j}\Bbb{I}_{A_{n+1}}=\Bbb{I}_{A_{j}\cap A_{n+1}}$$

and
$$\Bbb{I}_{A_{i_1}\cap A_{i_2}\cap \cdots\cap A_{i_r} }\Bbb{I}_{A_{n+1}}
=\Bbb{I}_{A_{i_1}\cap A_{i_2}\cap \cdots\cap A_{i_r}\cap A_{n+1}}.$$

These terms provide the indicator functions of the multiple intersections
of the $A_i$ that involve $A_{n+1}$.
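The identity can also be tested numerically; a Python sketch over random subsets of a small universe:

```python
import random
from itertools import combinations

U = range(20)
sets = [set(random.sample(U, 8)) for _ in range(4)]
for x in U:
    lhs = int(any(x in A for A in sets))
    rhs = sum((-1) ** (r + 1) * sum(all(x in sets[i] for i in idx)
              for idx in combinations(range(4), r)) for r in range(1, 5))
    assert lhs == rhs
print("inclusion-exclusion identity holds pointwise")
```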


geometry - Length of diagonal compared to the limit of lengths of stair-shaped curves converging to it

[image: stair-shaped curves converging to the diagonal of a square]


I see this post and I am stunned. I think this is fallacious, but I can't figure out where the fallacy is.


If you know the fallacy, please post an answer.

Parameterize a polynomial with no real roots


An even polynomial with a constant term of 1 will have no real roots if the coefficients of the powers (the c's below) are non-negative. So


$$1 + c_2x^2 + c_4x^4 + c_6x^6$$


has no real roots. Is there a general way to parameterize an nth order polynomial with a constant term of 1 so that it has no real roots? I know that the above conditions (even powers, with non-negative coefficients) are more restrictive than necessary. The application is fitting (x,y) data where y is always positive with a polynomial in x.



Answer



Let $p(x)=1+c_1x+\dots +c_nx^n$. Since $p(0)=1>0$, if $p$ does not have real roots, it must be positive. This implies $c_n>0$ (otherwise there would be a positive root) and $n$ even (otherwise there would be a negative root.) Applying Descartes' rule of signs to $p(x)$ and $p(-x)$ we get the following necessary condition: the sequences of coefficients $$ 1,\,c_1,\,c_2,\,c_3,\dots,c_n,\quad\text{and}\quad 1,\,-c_1,\,c_2,\,-c_3,\dots,c_n $$ must both have an even number of sign changes.
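A small numerical illustration of the necessary condition, using the assumed example $p(x)=1+x+x^2$ (not from the original post):

```python
import numpy as np

# Example: p(x) = 1 + x + x^2 has no real roots, and Descartes' necessary
# condition holds for it.
def sign_changes(coeffs):
    s = [c for c in coeffs if c != 0]
    return sum(1 for u, v in zip(s, s[1:]) if u * v < 0)

print(np.roots([1, 1, 1]))              # a complex-conjugate pair, no real root
print(sign_changes([1, 1, 1]))          # coefficients of p(x):  0 changes (even)
print(sign_changes([1, -1, 1]))         # coefficients of p(-x): 2 changes (even)
```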


sequences and series - Is there any geometry behind the Basel problem?



I could find many beautiful and rigorous proofs for Euler's solution to the Basel problem here Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)



Basel problem solution



But I am curious to know whether there are proofs by using geometry.




If anyone has proofs by geometry, please do share it with us.


Answer



Funny you should ask this today. A great video by the YouTuber 3Blue1Brown was just posted today. (Aside: I recommend all his videos.)



The proof is based on the result mentioned by "3 revs" in the MO thread mentioned by user296602 above.


Friday, September 25, 2015

real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

calculus - Prove that $x_n=1+\frac{2}{4}+\frac{3}{16}+\dots+\frac{n}{4^{n-1}}$ converges



So I have got a sequence $$x_n=1+\frac{2}{4}+\frac{3}{16}+\frac{4}{64}+\dots+\frac{n}{4^{n-1}}$$
and I have to prove that it actually converges to some point. Just by looking at it, it is clear to me that it does converge, if I would take its limit $$\lim_{n \to \infty}\sum_{k=1}^{n}{\frac{k}{4^{k-1}}}$$



as $n$ increases the numerator becomes much smaller than the
denominator, so from that point on it should converge, but how would I prove it?


Answer



Use comparison:

$$\sum_{k=0}^{\infty}{\frac{k}{4^{k-1}}}<\sum_{k=0}^{\infty}\frac{2^k}{4^{k-1}}=4\sum_{k=0}^{\infty}\bigg(\frac{2}{4}\bigg)^k$$
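For comparison, the exact value of the series is $16/9$, from $\sum_{k\ge1}k\,x^{k-1}=\frac{1}{(1-x)^2}$ at $x=\frac14$, comfortably below the bound of $8$ that the comparison gives; a one-line numerical check:

```python
# The comparison bounds the series by 8; its exact value is 16/9, from
# sum_{k>=1} k x^(k-1) = 1/(1-x)^2 evaluated at x = 1/4.
print(sum(k / 4 ** (k - 1) for k in range(1, 40)), 16 / 9)
```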


circles - Maximize the area of an ellipse inscribed in a semicircle.



An ellipse inscribed in a semicircle touches the circular arc at two distinct points and also touches the bounding diameter; its major axis is parallel to the bounding diameter. When the ellipse has the maximum possible area, find its eccentricity.



I tried to approach this problem using coordinate geometry and tried to maximise the area of ellipse after construction of the area function using derivatives. The area function comes out to be $\displaystyle \frac{\pi a^2 \mathrm R}{\sqrt{\mathrm {R^2}+a^2}}$. Here $\mathrm R$ is the radius of semicircle and $a$ is semi-major axis of ellipse. But its derivative came out to be positive i.e. area will be maximised when $a$ is maximum. Here I am stuck since I can't find the maximum value of $a$ in terms of radius $\mathrm R$. Please help me with this.



Thanks!


Answer



If an ellipse with centre $(0,b>0)$ is tangent to the $x$-axis at the origin then its equation is given by $$ \frac{x^2}{a^2}+\frac{(y-b)^2}{b^2} = 1 $$ and if such an ellipse is additionally tangent to the circle $x^2+y^2=1$, then the discriminant of the quadratic polynomial $\frac{1-y^2}{a^2}+\frac{(y-b)^2}{b^2}-1$ equals zero, hence $b^2=a^2-a^4$ and the area enclosed by the ellipse equals $\pi a b = \pi a^2\sqrt{1-a^2}$, which by the AM-GM (or Cauchy-Schwarz) inequality attains its maximum at $a=\sqrt{\frac{2}{3}}$, $b=\frac{\sqrt{2}}{3}$. It follows that the eccentricity of the solution (depicted below) is given by $$ e = \sqrt{1-\left(\tfrac{b}{a}\right)^2}=\sqrt{\tfrac{2}{3}}. $$ [image: the maximal inscribed ellipse]


In particular, the largest ellipse inscribed in a half-circle covers approximately $77\%$ of the area of the half-circle.
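A numerical check of these numbers (a sketch, with $R=1$):

```python
import numpy as np

a, b = np.sqrt(2 / 3), np.sqrt(2) / 3
print(np.sqrt(1 - (b / a) ** 2))         # eccentricity: 0.8165 = sqrt(2/3)
print(np.pi * a * b / (np.pi / 2))       # ellipse area / semicircle area ≈ 0.77
```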









real analysis - Increasing, bounded and continuous is uniformly continuous




Let $f:(x,y)\to \mathbb{R}$ be increasing, bounded and continuous on $(x,y)$. Prove that $f$ is uniformly continuous on $(x,y)$.





Since it is bounded, there exists an $M\in \mathbb{R}$ such that $|f(x)| \le M$ for all $x$ in the interval, and continuity means $$\forall \epsilon > 0, \forall x \in X, \exists \delta > 0 : |x - y| < \delta \implies |f(x) - f(y)| < \epsilon.$$ To show it is uniformly continuous I must show: $$\forall\ \epsilon >0\ \exists\ \delta>0\ \forall\ (x_0,x)\in I:\{|x -x_0|<\delta \implies |f(x)-f(x_0)|\le \epsilon\}$$



but I am not sure how I can show that?


Answer



Since $f$ is continuous on $(a,b)$, bounded and increasing, there's a unique continuous extension of $f$ to $[a,b]$. This works because both limits $f(b) := \lim_{x\to b^-} f(x)$ and $f(a) := \lim_{x\to a^+} f(x)$ are guaranteed to exist, since every bounded and increasing (respectively bounded and decreasing) sequence converges. To prove this, simply observe that for an increasing and bounded sequence, all $x_m$ with $m > n$ have to lie within $[x_n,M]$ where $M=\sup_n x_n$ is the upper bound. Add to that the fact that by the very definition of $\sup$, there are $x_n$ arbitrarily close to $M$.



You can then use the fact that continuity on a compact set implies uniform continuity, and you're done. This theorem, btw, isn't hard to prove either (and the proof shows how powerful the compactness property can be). The proof goes like this:




First, recall the if $f$ is continuous then the preimage of an open set, and in particular of an open interval, is open. Thus, for $x \in [a,b]$ all the sets $$
C_x := f^{-1}\left(\left(f(x)-\frac{\epsilon}{2},f(x)+\frac{\epsilon}{2}\right)\right) $$
are open. The crucial property of these $C_x$ is that for all $y \in C_x$ you have $|f(y)-f(x)| < \frac{\epsilon}{2}$ and thus $$
|f(u) - f(v)| = |(f(u) - f(x)) - (f(v)-f(x))|
\leq \underbrace{|f(u)-f(x)|}_{<\frac{\epsilon}{2}}
+ \underbrace{|f(v)-f(x)|}_{<\frac{\epsilon}{2}} < \epsilon
\text{ for all } u,v \in C_x
$$
Now recall that an open set contains an open interval around each of its points. Each $C_x$ thus contains an open interval around $x$, and you may wlog assume that it's symmetric around $x$ (just make it smaller if it isn't). Thus, there are $$
\delta_x > 0 \textrm{ such that }

B_x := (x-\frac{\delta_x}{2},x+\frac{\delta_x}{2})
\subset (x-\delta_x,x+\delta_x)
\subset C_x
$$
Note how we made $B_x$ artifically smaller than seems necessary, that will simplify the last stage of the proof. Since $B_x$ contains $x$, the $B_x$ form an open cover of $[a,b]$, i.e. $$
\bigcup_{x\in[a,b]} B_x \supset [a,b] \text{.}
$$
Now we invoke compactness. Behold! Since $[a,b]$ is compact, every covering with open sets contains a finite covering. We can thus pick finitely many $x_i \in [a,b]$ such that we still have $$
\bigcup_{1\leq i \leq n} B_{x_i} \supset [a,b] \text{.}
$$

We're nearly there, all that remains are a few applications of the triangle inequality. Since we're only dealing with finitely many $x_i$ now, we can find the minimum of all their $\delta_{x_i}$. Like in the definition of the $B_x$, we leave ourselves a bit of space to maneuver later, and actually set $$
\delta := \min_{1\leq i \leq n} \frac{\delta_{x_i}}{2} \text{.}
$$



Now pick arbitrary $u,v \in [a,b]$ with $|u-v| < \delta$.
Since our $B_{x_1},\ldots,B_{x_n}$ form a cover of $[a,b]$, there's an $i \in {1,\ldots,n}$ with $u \in B_{x_i}$, and thus $|u-x_i| < \frac{\delta_{x_i}}{2}$. Having been conservative in the definition of $B_x$ and $\delta$ pays off, because we get $$
|v-x_i| = |v-((x_i-u)+u)| = |(v-u)-(x_i-u)|
< \underbrace{|u-v|}_{<\delta\leq\frac{\delta_{x_i}}{2}}
+ \underbrace{|x_i-u|}_{<\frac{\delta_{x_i}}{2}}
< \delta_{x_i} \text{.}

$$
This doesn't imply $v \in B_{x_i}$ (the distance would have to be $\frac{\delta_{x_i}}{2}$ for that), but it does imply $v \in C_{x_i}$! We thus have $u \in B_{x_i} \subset C_{x_i}$ and $v \in C_{x_i}$, and by definition of $C_x$ (see the remark about the crucial property of $C_x$ above) thus $$
|f(u)-f(v)| < \epsilon \text{.}
$$


integration - Closed form for $\int_0^\infty \frac{x^n}{1 + x^m}\,dx$



I've been looking at



$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$



It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:



$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$




$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$



$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$



So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{{\pi x}}{{\sin \pi x}}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.






UPDATE:




The integral reduces to finding



$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$



With $a =\dfrac{n+1}{m}$ which converges only if



$$0 < a < 1$$



Using series I find the solution is





$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$




Can this be put in terms of the Digamma Function or something of the sort?


Answer



I would like to make a supplementary calculation on BR's answer.



Let us first assume that $0 < \mu < \nu$ so that the integral

$$ \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx $$
converges absolutely. By the substitution $x = \tan^{2/\nu} \theta$, we have
$$ \frac{dx}{1+x^{\nu}} = \frac{2}{\nu} \tan^{(2/\nu)-1} \theta \; d\theta. $$
Thus
$$ \begin{align*}
\int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx
& = \int_{0}^{\frac{\pi}{2}} \frac{2}{\nu} \tan^{\frac{2\mu}{\nu}-1} \theta \; d\theta \\
& = \frac{1}{\nu} \beta \left( \frac{\mu}{\nu}, 1 - \frac{\mu}{\nu} \right) \\
& = \frac{1}{\nu} \Gamma \left( \frac{\mu}{\nu} \right) \Gamma \left( 1 - \frac{\mu}{\nu} \right) \\
& = \frac{\pi}{\nu} \csc \left( \frac{\pi \mu}{\nu} \right),

\end{align*} $$
where the last equality follows from Euler's reflection formula.
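The final formula is easy to test numerically; a sketch assuming SciPy, using the $x^2/(1+x^5)$ example from the question ($\mu=3$, $\nu=5$):

```python
import numpy as np
from scipy import integrate

mu, nu = 3, 5          # the x^2/(1 + x^5) example from the question
val, _ = integrate.quad(lambda x: x ** (mu - 1) / (1 + x ** nu), 0, np.inf)
print(val, np.pi / nu / np.sin(np.pi * mu / nu))    # both ≈ 0.6607
```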


Thursday, September 24, 2015

trigonometry - How Can One Prove $\cos(\pi/7) + \cos(3\pi/7) + \cos(5\pi/7) = 1/2$


Reference: http://xkcd.com/1047/


We tried various different trigonometric identities. Still no luck.


Geometric interpretation would be also welcome.


EDIT: Very good answers, I'm clearly impressed. I followed all the answers and they work! I can only accept one answer, the others got my upvote.


Answer



Hint: start with $e^{i\frac{\pi}{7}} = \cos(\pi/7) + i\sin(\pi/7)$ and the fact that the lhs is a 7th root of -1.


Let $u = e^{i\frac{\pi}{7}}$, then we want to find $\Re(u + u^3 + u^5)$.


Then we have $u^7 = -1$ so $u^6 - u^5 + u^4 - u^3 + u^2 -u + 1 = 0$.


Re-arranging this we get: $u^6 + u^4 + u^2 + 1 = u^5 + u^3 + u$.


If $a = u + u^3 + u^5$ then this becomes $u a + 1 = a$, and rearranging this gives $a(1 - u) = 1$, or $a = \dfrac{1}{1 - u}$.



So all we have to do is find $\Re\left(\dfrac{1}{1 - u}\right)$.


$\dfrac{1}{1 - u} = \dfrac{1}{1 - \cos(\pi/7) - i \sin(\pi/7)} = \dfrac{1 - \cos(\pi/7) + i \sin(\pi/7)}{2 - 2 \cos(\pi/7)}$


so


$\Re\left(\dfrac{1}{1 - u}\right) = \dfrac{1 - \cos(\pi/7)}{2 - 2\cos(\pi/7)} = \dfrac{1}{2} $
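A one-line numerical confirmation (a sketch assuming NumPy):

```python
import numpy as np
print(sum(np.cos(k * np.pi / 7) for k in (1, 3, 5)))   # 0.49999999...
```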


abstract algebra - Constructing finite fields of order $8$ and $27$ or any non-prime

I want to construct a field with $8$ elements and a field with $27$ elements for an ungraded exercise.



For $\bf 8$ elements: So we can't just have $\Bbb Z/8\Bbb Z$ since this is not even an integral domain. But rather we can construct $\Bbb F_2 \oplus \Bbb F_2 \oplus\Bbb F_2 \oplus \Bbb F_2 = \{0,1,\alpha,\alpha+1,\beta,\beta+1,\gamma,\gamma+1\}$.



This line of thinking seems to break down from what I tried. Is there a better way to construct these things?


I saw this answer: Construct a finite field of order 27


We pick an irreducible polynomial and take the quotient of $\Bbb Z_3[x]$, but this wasn't helpful in me understanding the general idea/method.
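For what it's worth, here is a minimal sketch of the quotient construction for $8$ elements, $\Bbb F_2[x]/(x^3+x+1)$, with elements encoded as $3$-bit integers (the encoding is only an illustration):

```python
# GF(8) realized as F_2[x] / (x^3 + x + 1), elements stored as 3-bit ints
# (bit i = coefficient of x^i).  The polynomial x^3 + x + 1 is irreducible
# over F_2 since it has no root (f(0) = f(1) = 1).
MOD = 0b1011                        # x^3 + x + 1

def gf8_mul(p, q):
    r = 0
    while q:
        if q & 1:
            r ^= p
        q >>= 1
        p <<= 1
        if p & 0b1000:              # degree reached 3: reduce by the modulus
            p ^= MOD
    return r

# x * x^2 = x^3 = x + 1 in this field, because x^3 + x + 1 = 0 there:
print(bin(gf8_mul(0b010, 0b100)))   # 0b11
```

The same recipe gives $27$ elements: take $\Bbb Z_3[x]$ modulo a cubic polynomial that is irreducible over $\Bbb F_3$.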

algebra precalculus - Easy way to solve $w^2 = -15 + 8i$


Solve $w^2=−15+8i$, where $w$ is complex.




Normally, I would convert this into polar coordinates, but the problem is that is too slow.


What is another alternative?
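For the record, a sketch of the usual real/imaginary comparison: write $w=a+bi$ with $a,b$ real, so that
$$a^2-b^2=-15,\qquad 2ab=8\ \Rightarrow\ b=\frac{4}{a}\ \Rightarrow\ a^4+15a^2-16=0\ \Rightarrow\ (a^2+16)(a^2-1)=0,$$
hence $a=\pm1$, $b=\pm4$ and $w=\pm(1+4i)$; indeed $(1+4i)^2=1+8i-16=-15+8i$.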

calculus - What is the sum of the series $\sum\limits_{k=1}^\infty \frac{1}{k^2}$?




What is $\lim \limits_{n\to\infty} \sum\limits_{k=1}^n \frac{1}{k^2}$ as an exact value?

algebra precalculus - Calculating amount of time to wait to stream an internet video.



Eric wants to stream a 41 minute show over the internet. However, he is having some connection issues, and the show is loading slowly. Specifically, he sees that after 5 minutes of waiting, only 1 minute and 10 seconds of the show has loaded. He is able to play the show while later parts of the show are loading, but he does not want to pause the show once he has started playing. He is willing to wait a certain amount of time before he starts watching the show; what is the MINIMUM WHOLE TOTAL NUMBER OF MINUTES he has to wait before he can play the show without any breaks?



I was given this problem in a math competition, but I couldn't figure out how to solve it.



My reasoning:
Using the slope formula, I found that for every minute he waits, the computer loads $7/30$ minutes of the show. I tried visualizing this as a piecewise function with the functions $y=\frac{7}{30}x$ and $y=-\frac{23}{30}x$ (because at some time he is going to play the video and I combined $\frac{7}{30}x$ with $-x$). Somehow I arrived at the answer 54 minutes, which is incorrect.



I'm not sure if this is the best way of solving this problem, however. Can someone show me how should I go about doing this? Thank you.



Answer



If you've waited $t_{wait}$ minutes, you've downloaded $\frac{7}{30}t_{wait}$ of the show. You've got that.



So at the optimal point, $\frac{7}{30}(t_{wait} + 41) = 41.$ In other words, the time you wait, plus the time of the show (during which you're still downloading) must be long enough to download the entire show.



Then $t_{wait} \approx 134.71$, so your answer is $135$ minutes.
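The same arithmetic as a short Python sketch:

```python
from math import ceil

rate = (1 + 10 / 60) / 5       # 7/30 minutes of show loaded per minute of waiting
t_wait = 41 / rate - 41        # solve rate * (t_wait + 41) = 41
print(t_wait, ceil(t_wait))    # 134.714..., so 135 whole minutes
```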


discrete mathematics - Show the closed form of the sum $\sum_{i=0}^{n-1} i x^i$

Can anybody help me to show that when $x\neq 1$



$$\large \sum_{i=0}^{n-1} i\, x^i = \frac{x\left(1-n\, x^{n-1}+(n-1)\,x^n\right)}{(1-x)^2}$$
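With the factor of $x$ shown above, the identity can be verified exactly for sample values (a Python sketch):

```python
from fractions import Fraction

x = Fraction(1, 3)
for n in range(1, 10):
    lhs = sum(i * x ** i for i in range(n))
    rhs = x * (1 - n * x ** (n - 1) + (n - 1) * x ** n) / (1 - x) ** 2
    assert lhs == rhs, (n, lhs, rhs)
print("closed form verified exactly for x = 1/3, n = 1..9")
```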

Wednesday, September 23, 2015

trigonometry - Simplify a quick sum of sines




Simplify $\sin 2+\sin 4+\sin 6+\cdots+\sin 88$



I tried using the sum-to-product formulae, but it was messy, and I didn't know what else to do. Could I get a bit of help? Thanks.


Answer



The angles are in arithmetic progression. Use the formula



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$




See here for two proofs (using trigonometry, or using complex numbers).



In your case, $a=b=2$ and $n=44$.
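A numerical check with these values, reading the angles in degrees (a sketch assuming NumPy):

```python
import numpy as np

deg = np.deg2rad
lhs = sum(np.sin(deg(k)) for k in range(2, 90, 2))   # sin 2° + sin 4° + ... + sin 88°
a = b = 2
n = 44
rhs = np.sin(deg(n * b / 2)) / np.sin(deg(b / 2)) * np.sin(deg(a + (n - 1) * b / 2))
print(lhs, rhs)                                      # both ≈ 28.15
```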


sequences and series - $\lim_{n\rightarrow \infty} (a_n+b_n)=0$ and $\lim_{n\rightarrow \infty} c_n=L$ imply $\lim_{n\rightarrow \infty} \exp(a_n)*c_n-L*\exp(-b_n)=0$




Consider the sequences $\{a_n\}_{\forall n \in \mathbb{N}}<0$, $\{b_n\}_{\forall n \in \mathbb{N}}>0$, $\{c_n\}_{\forall n \in \mathbb{N}}>0$ and suppose
$$
\begin{cases}
\lim_{n\rightarrow \infty} (a_n+b_n)=0\\
\lim_{n\rightarrow \infty} c_n=L<\infty
\end{cases}
$$



Could you help me to show that
$$

\lim_{n\rightarrow \infty} [\exp(a_n)*c_n-L*\exp(-b_n)]=0
$$
?






I know that by assumption
$$
\lim_{n\rightarrow \infty} [\exp(a_n)*c_n-L*\exp(-b_n)]= \lim_{n\rightarrow \infty} [\exp(-b_n+o(1))*(L+o(1))-L*\exp(-b_n)]
$$

where $o(1)$ is a number going to zero as $n\rightarrow \infty$. How can I proceed from here?






Let me add another assumption (thanks to a comment below)
$$
\exp(a_n)\equiv \Pi_{k=1}^{2n} x_{n,k}
$$
where $x_{n,k}\in [0,1]$ and $\lim_{n\rightarrow \infty} x_{n,k}=1$ $\forall k$


Answer




You can just compute
$$ |e^{a_n}c_n - L e^{-b_n}| = e^{a_n}|c_n - L e^{-(a_n+b_n)}| \leq |c_n - L e^{-(a_n+b_n)}|. $$
Now $c_n\to L$ and $a_n+b_n\to 0$. So, because $e^x$ and $|x|$ are continuous functions, you can "pass to the limit inside the functions" and get
$$ |e^{a_n}c_n - L e^{-b_n}| \leq |c_n - L e^{-(a_n+b_n)}| \to |L-Le^{-0}| = 0. $$
By the comparison principle, you get convergence of that guy to 0.


Extended Euclidean Algorithm with negative numbers minimum non-negative solution


I came through a problem in programming which needs Extended Euclidean Algorithm, with $a*s + b*t = \gcd(|a|,|b|)$ for $b \leq 0$ and $a \geq 0$


With the help of this post: extended-euclidean-algorithm-with-negative-numbers


I know we can just move the sign towards $t$ and just use normal Extended Euclidean Algorithm and use $(s,-t)$ as solution



However in my scenario, there is one more condition: I would like to find the minimum non-negative solution, i.e. $(s,t)$ for $s,t\geq 0$


And my question is how to find such minimum $(s,t)$?


Sorry if it sounds too obvious as I am dumb :(


Thanks!


Answer



Fact 1: One nice property of the Extended Euclidean Algorithm is that it already gives minimal solution pairs, that is, if $a, b \geq 0$, $|s| \lt \frac{b}{\gcd(a,b)}$ and $|t| \lt \frac{a}{\gcd(a,b)}$


Fact 2: If $(s,t)$ is a solution then $(s+k*\frac{b}{\gcd(a,b)},t-k*\frac{a}{\gcd(a,b)}), k \in \mathbb{Z}$ is also a solution.


Combining the two facts above, for your case in which $a \geq 0$ and $b \leq 0$, compute the pair $(s,t)$ using the extended algorithm on $(|a|,|b|)$, then either:


  1. $s \geq 0, t \leq 0$, in this case $(s,-t)$ is the solution you want.

  2. $s \leq 0, t \geq 0$, in this case $(s+\frac{|b|}{\gcd(|a|,|b|)},-t+\frac{|a|}{\gcd(|a|,|b|)})$ is your desired solution.
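Combining the two facts in code, a minimal Python sketch (the helper names are illustrative, not from the post):

```python
def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b), for a, b >= 0."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def min_nonneg_solution(a, b):
    """For a >= 0, b <= 0, return s, t >= 0 with s*a + t*b = gcd(|a|, |b|)."""
    g, s, t = ext_gcd(abs(a), abs(b))
    if s >= 0 and t <= 0:               # case 1 above
        return s, -t
    return s + abs(b) // g, -t + abs(a) // g   # case 2 above

a, b = 240, -46
s, t = min_nonneg_solution(a, b)
print(s, t, s * a + t * b)   # 14 73 2, and gcd(240, 46) == 2
```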


discrete mathematics - Prove two sets have same Cardinality by writing down bijection

a. Prove that the interval $A = [1,3]$ has the same cardinality as $B = [1,5]$ by writing down a bijection from $A \to B$. Don't prove it is a bijection.


b. Consider the following infinite set: $A = \left\{1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4},\ldots, \frac{1}{n},\ldots\right\}$. Prove the set $A$ has the same cardinality as the integers by writing down a bijection from $A$ onto $\mathbb{Z}$.


I don't know how to find a function that is a bijection from one set to another. Can anyone help by explaining the thought process behind it, I've been having trouble with these types of problems? Thanks.

complex numbers - Question about Euler's formula



I have a question about Euler's formula




$$e^{ix} = \cos(x)+i\sin(x)$$



I want to show



$$\sin(ax)\sin(bx) = \frac{1}{2}(\cos((a-b)x)-\cos((a+b)x))$$



and



$$ \cos(ax)\cos(bx) = \frac{1}{2}(\cos((a-b)x)+\cos((a+b)x))$$




I'm not really sure how to get started here.



Can someone help me?


Answer



$$\sin { \left( ax \right) } \sin { \left( bx \right) =\left( \frac { { e }^{ aix }-{ e }^{ -aix } }{ 2i } \right) \left( \frac { { e }^{ bix }-{ e }^{ -bix } }{ 2i } \right) } =\frac { { e }^{ \left( a+b \right) ix }-e^{ \left( a-b \right) ix }-{ e }^{ \left( b-a \right) ix }+{ e }^{ -\left( a+b \right) ix } }{ -4 } \\ =-\frac { 1 }{ 2 } \left( \frac { { e }^{ \left( a+b \right) ix }+{ e }^{ -\left( a+b \right) ix } }{ 2 } -\frac { { e }^{ \left( a-b \right) ix }+{ e }^{ -\left( a-b \right) ix } }{ 2 } \right) =\frac { 1 }{ 2 } \left( \cos { \left( a-b \right) x-\cos { \left( a+b \right) x } } \right) $$



The same method works for $\cos { \left( ax \right) \cos { \left( bx \right) } } $.







Edit:
$$\int \sin(ax)\sin(bx)\,dx=\frac{1}{2}\int\left[\cos((a-b)x)-\cos((a+b)x)\right]dx=\frac{1}{2}\int\cos((a-b)x)\,dx-\frac{1}{2}\int\cos((a+b)x)\,dx$$



now, in order to calculate $\int \cos((a+b)x)\,dx$, write
$$t=\left( a+b \right) x\quad \Rightarrow \quad x=\frac { t }{ a+b } \quad \Rightarrow\quad dx=\frac { 1 }{ a+b } dt,\qquad \int \cos((a+b)x)\,dx=\frac { 1 }{ a+b } \int \cos ( t )\, dt=\frac { 1 }{ a+b } \sin ( t ) =\frac { 1 }{ a+b } \sin ((a+b)x) +C$$


Tuesday, September 22, 2015

A problem regarding polynomials with prime values



The problem is as follows:




Prove that there is no non-constant polynomial $P(x)$ with integer coefficients such that $P(n)$ is a prime number for all positive integers $n$.





I cannot solve it. I can't even find the exact definition of a non-constant polynomial. Any help would be appreciated.


Answer



There are still some gaps, but I'd suggest something like the following. There must be quite a few other approaches, I'd expect, and I hope others will provide some of these.



Suppose such a polynomial exists; then $P(1)$ is prime, say $p$. Moreover, $P(1+np)\equiv P(1)\equiv 0 \pmod p$ for all natural numbers $n$, since $P(1+np)-P(1)$ is a multiple of $p$. Since these values must all be prime, $P(1+np) = p$. There are infinitely many positive integers $n$, so the polynomial $P(x)-p$ has infinitely many roots, and therefore this is only possible if $P(n) = p$ for all $n\in\mathbb{N}$, which is a constant polynomial.
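The mechanism is easy to watch on Euler's polynomial $n^2+n+41$ (an illustrative example, not part of the original answer); a sketch assuming SymPy:

```python
from sympy import isprime

P = lambda n: n**2 + n + 41          # Euler's polynomial, prime for n = 0..39
p = P(1)                             # P(1) = 43, a prime
print([P(1 + k * p) % p for k in range(1, 6)])        # [0, 0, 0, 0, 0]
print([isprime(P(1 + k * p)) for k in range(1, 6)])   # all False: multiples of 43
```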


calculus - Recognizing that a function has no elementary antiderivative

Is there a method to check whether a function is integrable?




Of course, trying to solve it is one, but some questions in integration may be so tricky that I don't get the correct method to start off with those problems. So, is there a method to find correctly whether a function is integrable?



Clarification: I am asking about indefinite integrals which have no elementary antiderivative.

discrete mathematics - Prove by induction that $1^3 + 2^3 + 3^3 + \cdots + n^3= \frac{n^2(n+1)^2}{4}$ for all $n\geq1$.

Use mathematical induction to prove that $1^3 + 2^3 + 3^3 + \cdots + n^3= \frac{n^2(n+1)^2}{4}$ for all $n\geq1$.




Can anyone explain? Because I have no clue where to begin. I mean, I know I must show that $1^3+ 2^3 +\cdots+ (k+1)^3=\frac{(k+1)^2(k+2)^2}{4}$, but then I don't know where to go. I need further explanation to prove it.



thank you so much for help



Sincerely
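For reference, the inductive step is a short computation once the hypothesis is added to $(k+1)^3$:

$$1^3+\cdots+k^3+(k+1)^3=\frac{k^2(k+1)^2}{4}+(k+1)^3=\frac{(k+1)^2\left(k^2+4k+4\right)}{4}=\frac{(k+1)^2(k+2)^2}{4}.$$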

sequences and series - Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)



As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem)
$$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$

However, Euler was Euler and he gave other proofs.



I believe many of you know some nice proofs of this, can you please share it with us?


Answer



OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from the book" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9
(EDIT: ...which is actually the proof that I read in Aigner & Ziegler).



When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus
$$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$
Note that $1/\tan^2 x = 1/\sin^2 x - 1$.

Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum
the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$:
$$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$
Denoting the sum on the right-hand side by $S_n$, we can write this as
$$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$



Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with,
$$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$
Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short,
$$S_n = 4 S_{n-1} + 2.$$

Since $S_1=2$, the solution of this recurrence is
$$S_n = \frac{2(4^n-1)}{3}.$$
(For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)



We now have
$$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$
Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
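Both the closed form for $S_n$ and the squeeze can be checked numerically; a NumPy sketch:

```python
import numpy as np

n = 10
k = np.arange(1, 2 ** n)
S = np.sum(1.0 / np.sin((np.pi / 2) * k / 2 ** n) ** 2)
print(S, 2 * (4 ** n - 1) / 3)                         # S_n matches 2(4^n - 1)/3
print(np.pi ** 2 / 4 ** (n + 1) * S, np.pi ** 2 / 6)   # the squeeze closes in on pi^2/6
```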


analysis - Show that the map is bijective

Let $X$ be a set. We consider the map \begin{equation*}\Phi : \ \mathcal{P}(X)\rightarrow \{0,1\}^X, \ \ A\mapsto 1_A\end{equation*} that maps a subset $A\subset X$ to its characteristic function $1_A$.




I want to show that $\Phi$ is bijective by giving explicitly an inverse map.



Could you give me a hint how we can show that? I don't really have an idea how to find the inverse one.






If we want to show the bijectivity by proving that the map is injective and surjective, we do the following, or not?



$\Phi$ is surjective because for every element in the range, i.e. $0$ and $1$, there is a preimage in $\mathcal{P}(X)$, because either one element is contained in the set $A$ or not.




$\Phi$ is injective because every element of $\Phi (X)$ has an image in $\{0,1\}$.



So, $\Phi$ is bijective.



Is everything correct? Could I improve something?

Monday, September 21, 2015

Can anyone help me in finding this integral? Without using differentiation under the integral sign.

Solve the integral $I = \int_0^{\infty} \frac{\sin x}{x}\, dx$

complex analysis - Evaluating the improper integral $\int_{0}^{\infty}\frac{x^2}{x^{10} + 1}\,\mathrm dx$

I am trying to solve the following integral, but I don't have a solution, and the answer I am getting doesn't seem correct.



So I am trying to integrate this:




$$ \int_{0}^{\infty}{\frac{x^2}{x^{10} + 1}\,\mathrm dx} $$



To integrate this, I want to use a contour that looks like a pizza slice, out of a pie of radius R. One edge of this pizza slice is along the positive x-axis, if that makes sense. Since $ z^{10} + 1 $ has 10 zeroes, the slice should only be one tenth of a whole circle. So let's call this contour $ C $. Then:



$$ \int_{C}{\frac{z^2}{z^{10} + 1}\,\mathrm dz} = 2 \pi i\,\operatorname{Res}(\frac{x^2}{x^{10} + 1}, e^{i \pi/10}) $$ This is because this slice contains only one singularity. Furthermore:



$$ \int_{C}{\frac{z^2}{z^{10} + 1}\,\mathrm dz} = \int_0^R{\frac{z^2}{z^{10} + 1}\,\mathrm dz} + \int_\Gamma{\frac{z^2}{z^{10} + 1}\,\mathrm dz} $$



And then, by the M-L Formula, we can say that $ \int_\Gamma{\frac{z^2}{z^{10} + 1}\,\mathrm dz} $ goes to $0$ as $R$ goes to infinity. Evaluating $ 2 \pi i\ \operatorname{Res}(\frac{x^2}{x^{10} + 1}, e^{i \pi/10}) $ I get $ \dfrac{\pi}{e^{i \pi/5}} $. Since this answer isn't real, I don't think this could be correct. What did I do wrong?
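For reference, the closed form derived in the post "Closed form for $\int_0^\infty \frac{x^n}{1+x^m}\,dx$" above predicts $\frac{\pi}{10}\csc\frac{3\pi}{10}\approx0.3883$ here; a numerical sketch assuming SciPy confirms it:

```python
import numpy as np
from scipy import integrate

val, _ = integrate.quad(lambda x: x ** 2 / (x ** 10 + 1), 0, np.inf)
print(val, np.pi / 10 / np.sin(3 * np.pi / 10))    # both ≈ 0.38833
```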

calculus - How to show that the limit $\lim_{n \in\mathbb N}\sqrt[n]{n}$ exists using epsilon-delta method.

Finding the limit using L'Hopital's method might seem fine to me, but how can you prove that the limit exists:



$\lim_{n \in\mathbb N}\sqrt[n]n$

Sunday, September 20, 2015

abstract algebra - How do I find the order of an element if I am given the minimal polynomial of the element?

For example, let's say I am given an element $\alpha$ in a field of characteristic $2$. Further, I am given the minimal polynomial of $\alpha$ with respect to $GF(2)$. Let's say that minimal polynomial is $f(x) = x^9 + x + 1$. How would I then deduce the order of $\alpha$?


This is for a coding theory assignment, but I cannot deduce from the literature how to go in this direction. Most things I have read seem to indicate how to find a minimal polynomial, but don't seem to take that any further.
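One concrete way to do it, as a Python sketch: represent polynomials over $GF(2)$ as bit masks, multiply modulo $f$, and test the divisors of $2^9-1=511=7\cdot73$ (when $f$ is irreducible of degree $9$, the order of $\alpha$ must divide $2^9-1$):

```python
# Polynomials over GF(2) encoded as ints (bit i = coefficient of x^i).
F = (1 << 9) | (1 << 1) | 1            # f(x) = x^9 + x + 1

def mulmod(p, q):
    r = 0
    while q:
        if q & 1:
            r ^= p
        q >>= 1
        p <<= 1
        if (p >> 9) & 1:               # degree reached 9: reduce modulo f
            p ^= F
    return r

def powmod(p, e):
    r = 1
    while e:
        if e & 1:
            r = mulmod(r, p)
        p = mulmod(p, p)
        e >>= 1
    return r

alpha = 0b10                           # the residue class of x, a root of f
# The order of alpha divides |GF(2^9)^*| = 511 = 7 * 73, so test its divisors:
print(next(d for d in (1, 7, 73, 511) if powmod(alpha, d) == 1))
```

For this particular $f$ the loop reports $73$, so $\alpha$ generates a subgroup of order $73$ rather than the whole multiplicative group ($f$ is irreducible but not primitive).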

infinity - How to explain indeterminations, and some approaches to $+\infty$ or $-\infty$, for middle school students?






Question: how to explain the undefined expressions $0^0$ and $\frac{0}{0}$ to middle school students?







I am a math teacher and I don't know how to answer properly when students ask me why some operations are undefined or indeterminate (the most frequent are $0^0$ and $\frac{0}{0}$), or why division by zero results in infinity. So most of the time I avoid answering, because I am afraid to confuse them more with too complicated explanations. Some students understand most explanations, but others have more difficulties.



To explain division by zero I try to use their intuition, making divisions by factors that get smaller each time, so I get a kind of limit without mentioning it (I say: "dividing by a number that gets smaller each time, what you get is always a bigger one, tending to a huge number, the $\pm\infty$"). But still, some continue asking me: "I understand that dividing nothing by any number, the result is nothing in each case" ($\frac{0}{n} = 0$, division of zero by a finite number). "Why is it that, if I divide any number by no one, I should get infinity?" (these students continue to think that we should expect no change after such a division).



I tried once to explain $\frac{0}{0}$ with a simple equation like $\frac{a}{0}=b$. In this case we can use algebra to write $a = 0\cdot b$, which means $a$ was already determined ($a=0$), and we can say nothing about $b$; that is, it is undefined (in fact, I am not quite sure this is a satisfactory answer).



And what about the other indeterminations if some clever student asks me? Can someone help me out with this doubt? I hope I made myself clear.



I was searching for other similar questions but didn't find what I was looking for. Some interesting posts related are:

Ways to solve indeterminations; Solving indetermination in limit; Two square roots in an indeterminate. See also Mathematics in Wikipedia.


Answer



Here is what I would suggest as an informal explanation for some kinds of indeterminate forms, though it may be less helpful for others.



If you try to evaluate $0^0$ by concentrating on the exponent, you would probably say, "anything to the power $0$ is $1$, therefore the answer is $1$". On the other hand, if you concentrated on the base, you would probably say "$0$ to any power is $0$, therefore the answer is $0$". The fact that you can get contradictory answers in this way is what makes it an indeterminate form.



Similarly, for "$\frac00$", concentrating on the numerator suggests an answer of $0$ while concentrating on the denominator suggests an answer of $\infty$. In this case however, I would be very careful not to let the students believe that $\infty$ is ever a sensible answer to an arithmetic question.


What is an example of a proof by minimal counterexample?



I was reading about proof by infinite descent, and proof by minimal counterexample. My understanding of it is that we assume the existence of some smallest counterexample $A$ that disproves some proposition $P$, then go on to show that there is some smaller counterexample to this, which to me seems like a mix of infinite descent and 'reverse proof by contradiction'.



My question is, how do we know that there might be some counterexample? Furthermore, are there any examples of this?


Answer



Consider, for instance, the statment





Every $n\in\mathbb{N}\setminus\{1\}$ can be written as a product of prime numbers (including the case in which there's a single prime number appearing only once).




Suppose otherwise. Then there would be a smallest $n\in\mathbb{N}\setminus\{1\}$ that would not be possible to express as a product of prime numbers. In particular, this implies that $n$ cannot be a prime number. Since $n$ is also different from $1$, it can be written as $a\times b$, where $a,b\in\{2,3,\ldots,n-1\}$. Since $n$ is the smallest counterexample, neither $a$ nor $b$ are counterexamples and therefore both of them can be written as a product of prime numbers. But then $n(=a\times b)$ can be written in such a way too.


Saturday, September 19, 2015

algebra precalculus - Bounds / Approximation to sum of squares of sum

Can we define any tight upper / lower bound or approximation to the expression,



$\sum_{i = 1}^{N}|x_{i} + y_{i}|^{2}$



in terms of $\sum_{i = 1}^{N} |x_{i}|^{2}$ and $\sum_{i = 1}^{N} |y_{i}|^{2}$, where $x_{i}, y_{i} \in \mathbb{C}, \forall i \in \{1, 2, \ldots, N\}$.



The bound should be tight enough to represent $\exp \left\{ - \left( \sum_{i = 1}^{N}|x_{i} + y_{i}|^{2} \right) \right\}$ in terms of $\exp \left( - \sum_{i = 1}^{N} |x_{i}|^2 \right)$ and $\exp \left( - \sum_{i = 1}^{N} |y_{i}|^2 \right)$.
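For what it's worth, the triangle (Minkowski) inequality in $\ell^2$ gives a two-sided bound that cannot be tightened in general, since both sides are attained when $y$ is a real scalar multiple of $x$:

$$\left(\sqrt{\sum_{i=1}^N|x_i|^2}-\sqrt{\sum_{i=1}^N|y_i|^2}\right)^{\!2}\ \le\ \sum_{i=1}^{N}|x_i+y_i|^2\ \le\ \left(\sqrt{\sum_{i=1}^N|x_i|^2}+\sqrt{\sum_{i=1}^N|y_i|^2}\right)^{\!2}.$$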

elementary number theory - Finitely many Supreme Primes?

A challenge on codegolf.stackexchange is to find the highest "supreme" prime: https://codegolf.stackexchange.com/questions/35441/find-the-largest-prime-whose-length-sum-and-product-is-prime



A supreme prime has the following properties:




  • the number itself is prime

  • the number of digits is prime

  • the sum of digits is prime


  • the product of digits is prime



Are there finitely many "supreme" primes? Are there infinitely many? Currently the highest one found is ~$10^{72227}$

calculus - Prove with the mean value theorem that $x-\frac{x^2}{2} < \ln(1+x)$




Prove with the mean value theorem that $x-\frac{x^2}{2}<\ln(1+x)<x$ in $(0,\infty)$



Approach
$f(x) := \ln(1+x) $ with the mean value theorem in $[0,x]$



$\frac{1}{1+\xi}= \frac{\ln(1+x)-0}{x-0}$




$\frac{1}{1+\xi}$ takes the biggest value when $\xi$ is $0$



and so $\frac{\ln(1+x)}{x} < 1$; multiply with $x$ and you get
$\ln(1+x)<x$



I can't prove the other part.


Answer



By the MVT,



$$\ln (1+x)+\frac{x^2}{2}-(\ln 1+0)=x\cdot\left(\frac{1}{1+c}+c\right)>x \quad c\in(0,x)$$




Indeed



$$\frac{1}{1+c}+c=\frac{c^2+c+1}{1+c}>1$$


Easy functional equation



Find all functions $f:\mathbb{R} \rightarrow \mathbb{R}$ such that:




$$f(2f(x)+f(y))=2x+f(y)\qquad \forall x,y \in \mathbb{R}.$$





If you put $x=y=0$, you get $f(3f(0))=f(0)$. What deductions about $f(0)$ can you then make?



Clearly from above $f(0)=0$ is a solution . . . so,



Putting $x=0$ gives $f(2f(0)+f(y))=f(y)$



$\rightarrow$ $f(f(y))=f(y)$



So $f(x)=x$ is a solution, but is it the only one?




I think it probably is, but how to prove?


Answer



$$f(2f(x)+f(y))=2x+f(y)\qquad \forall x,y \in \mathbb{R}.$$
Interchanging $x$ and $y$ you get



$$f(f(x)+2f(y))=f(x)+2y \,.$$



Claim 1: $f(x)$ is 1 to 1.




Indeed, if $f(x)=f(y)$ then



$$2x+f(x)=2x+f(y)=f(2f(x)+f(y))=f(f(x)+2f(y))=f(x)+2y $$



This implies that $x=y$.



Now, you can do part of what you did:



$$f(2f(0)+f(y))=f(y)\qquad \forall y \in \mathbb{R}.$$




Since $f$ is 1 to 1 you get



$$2f(0)+f(y)=y \,.$$



Thus



$$f(y)=y-2f(0)\,.$$



Setting $y=0$ you get $f(0)=0$ and thus $f(x)=x$ is the only solution.


Friday, September 18, 2015

derivatives - When not to treat dy/dx as a fraction in single-variable calculus?

While I do know that $\frac{dy}{dx}$ isn't a fraction and shouldn't be treated as such, in many situations, doing things like multiplying both sides by $dx$ and integrating, cancelling terms, doing things like $\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$ works out just fine.



So I wanted to know: Are there any particular cases (in single-variable calculus) we have to look out for, where treating $\frac{dy}{dx}$ as a fraction gives incorrect answers, in particular, at an introductory level?



Note: Please provide specific instances and examples where treating $\frac{dy}{dx}$ as a fraction fails

Thursday, September 17, 2015

analysis - Why does $1+2+3+\cdots = -\frac{1}{12}$?

$\displaystyle\sum_{n=1}^\infty \frac{1}{n^s}$ only converges to $\zeta(s)$ if $\text{Re}(s) > 1$.


Why should analytically continuing to $\zeta(-1)$ give the right answer?

sequences and series - Find $A=+\frac{3}{4×8}-\frac{3×5}{4×8×12}+\frac{3×5×7}{4×8×12×16}-\cdots$

Find $A$:



$$A=+\frac{3}{4×8}-\frac{3×5}{4×8×12}+\frac{3×5×7}{4×8×12×16}-···$$




My Try :



$$a_1=\frac{3}{4×8}-\frac{3×5}{4×8×12}=\frac{3×12-3×5}{4×8×12}=\frac{3(12-5)}{4×8×12}=\frac{3(7)}{4×8×12}$$



$$a_2=\frac{3(7)}{4×8×12}-\frac{3×5×7}{4×8×12×16}=\frac{3×7×16-3×5×7}{4×8×12×16}=\frac{3×7(16-7)}{4×8×12×16}\\=\frac{3×7(8)}{4×8×12×16}$$



now?
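A numerical look may help before hunting for a closed form; a Python sketch with exact rational partial sums (the term recurrence is read off the displayed series):

```python
from fractions import Fraction

term = Fraction(3, 4 * 8)             # first term 3/(4*8)
total, sign = Fraction(0), 1
for k in range(1, 40):
    total += sign * term
    term *= Fraction(2 * k + 3, 4 * (k + 2))   # next factor pair: (2k+3)/(4(k+2))
    sign = -sign
print(float(total))                   # ≈ 0.066497
```

The partial sums settle at $0.066497\ldots$, which matches $\sqrt{2/3}-\frac34$ and suggests comparing the series with the binomial expansion of $(1+x)^{-1/2}$.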

discrete mathematics - How to prove if log is rational/irrational

I'm an English major, now doubling in computer science. The first course I'm taking is Discrete Mathematics for Computer Science, using the MIT 6.042 textbook.




Within the first chapter of the book's practice problems, they ask us multiple times to prove that some log function is either rational or irrational.



Specific cases make more sense than others, but I would really appreciate any advice on how to approach these problems. Not how to carry them out algebraically, but what thought constructs are necessary to consider a log being (ir)rational.



For example, in the case of $\sqrt{2}^{2\log_2 3}$, proving that $2\log_23$ is irrational (and therefore $a^b$, when $a=\sqrt{2}$ and $b=2\log_23$, is rational) is not an easily solvable problem. I understand the methods of proofs, but the rules of logs are not intuitive to me.



A section from my TF's solution is not something I would know myself to construct:



Since $2 < 3$, we know that $\log_23$ is positive

(specifically it is greater than $1$), and hence so is $2\log_23$. Therefore, we can assume that $a$ and $b$ are two positive integers.
Now $2\log_2 3=a/b$ implies $2^{2\log_2 3}=2^{a/b}$.
Thus $$2^{a/b}=2^{2\log_2 3} = 2^{\log_2 3^2} =3^2 =9\text{,}$$
and hence $2^a = 9^b$.



Any advice on approaching thought construct to logs would be greatly appreciated!

Wednesday, September 16, 2015

discrete mathematics - System of congruences with polynomials

How do I go about solving exercises such as this one:



Find all polynomials $f(x)$ in $\mathbb{Z}_3$ that satisfy



$$f(x) \equiv 1 \space \space \mathrm{mod} \space \space x^2 + 1$$
$$f(x) \equiv x \space \space \mathrm{mod} \space \space x^3 + 2x + 2$$




in $\mathbb{Z}_3.$



I know about the Chinese Remainder Theorem, but only how to apply it to system of congruences where there are no polynomials involved.



I realise that $f_1(x) \equiv f_2(x) \space \space \mathrm{mod} \space g(x)$ means that $f_1(x) - f_2(x)$ is divisible by $g(x)$, but that's about as far as I've come with this problem.



Also, if anyone has any advice as to where I can read about modular arithmetic involving polynomials, I'd be happy to hear about it, because the literature I have doesn't say much about it at all, and I would like to learn.
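As a concrete sanity check, here is a sketch using SymPy, where the modulus=3 option is assumed to make the polynomial routines work over $\Bbb Z_3$:

```python
from sympy import symbols, gcdex, rem, expand

x = symbols('x')
m1, m2 = x**2 + 1, x**3 + 2*x + 2      # both irreducible over GF(3), so coprime
r1, r2 = 1, x

# Extended Euclid over GF(3): s*m1 + t*m2 = 1
s, t, g = gcdex(m1, m2, x, modulus=3)
assert g == 1

# Standard CRT combination, reduced modulo m1*m2 over GF(3):
f = rem(expand(r1 * t * m2 + r2 * s * m1), expand(m1 * m2), x, modulus=3)
print(f)
assert rem(f - r1, m1, x, modulus=3) == 0
assert rem(f - r2, m2, x, modulus=3) == 0
```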

calculus - Find the infinite sum $\sum_{n=1}^{\infty}\frac{1}{2^n-1}$



How to evaluate this infinite sum?
$$\sum_{n=1}^{\infty}\frac{1}{2^n-1}$$


Answer




I think you wanna see this:



Ramanujan’s Notebooks Part I



Click me and try Entry $14$ (ii) on page $146$, where you set $x=\ln2$



Chris.
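For a quick numerical impression (the value is the Erdős-Borwein constant):

```python
# Numerically the series converges quickly; its value is the
# Erdos-Borwein constant 1.6066951524...
print(sum(1 / (2 ** n - 1) for n in range(1, 60)))
```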


abstract algebra - On the existence of an algebraically closed field containing other fields




This question arose while I was reading a paper I found on the web.
It might be very simple, but I don't know the answer.
Let $\mathbb{R}$ be the set of real numbers and $\mathbb{Q}_p$ the set of all $p$-adic numbers.



My question is: how can I construct (or at least guarantee the existence of) an algebraically closed field $\Omega$ of characteristic $0$ containing both $\mathbb{R}$ and $\mathbb{Q}_p$ for all primes $p$?



More generally, given a finite or infinite family of fields with the same characteristic (and possibly a common subfield), can I prove the existence of such a field? If not, under which conditions does it hold?



Thank you in advance for your help.




Edit: about my background, my level is basic; that is, I know what an algebraically closed field is and basic facts about Field Theory from a basic Galois theory course


Answer



It is possible to embed the algebraic closure of $\Bbb Q_p$ into $\Bbb C$, if you want. We also can consider the completion $\Bbb C_p$ of $\overline{\Bbb Q_p}$. This field is called the field of $p$-adic complex numbers.



The details have been already discussed at this site, e.g., here:



Is there an explicit embedding from the various fields of p-adic numbers $\mathbb{Q}_p$ into $\mathbb{C}$?



The embedding is guaranteed by the axiom of choice.


probability - Number of die rolls needed for average to converge to roll expectancy?


I'm aware there are similar questions, but I haven't been able to find what I'm asking.


Say we have an $n$-sided die, labeled with 1 thru $n$ and roll it $N$ times. We take the average, call it $m$.


The die is fair, so the expectancy for the die roll is $E=\frac{n+1}{2}$.


How large must $N$ be for the average to be within $\epsilon$ from $E$ with a probability, say, $p$?


For example, 20-sided die: $E=10.5$, choose $\epsilon = 0.01$, and $p=0.99$.


So how many times do I have to roll the 20-sided die for the average to lie in the interval $[10.49, 10.51]$ with 99% probability?



Answer



The variance of a single roll is $\frac{n^2-1}{12}$ so the standard deviation of the average of $N$ rolls is $\sqrt{\frac{n^2-1}{12N}}$.


For a normal distribution, the probability of being within $\Phi^{-1}\left(\frac{p +1}{2}\right)$ standard deviations of the mean is $p$, where $\Phi^{-1}$ is the inverse of the cumulative distribution of a standard normal.


For large $N$ you can use the central limit theorem as an approximation, so you want $\sqrt{\frac{n^2-1}{12N}}\Phi^{-1}\left(\frac{p +1}{2}\right) \le \epsilon$, i.e. $$N \ge \left(\frac{n^2-1}{12}\right) \left(\frac{\Phi^{-1}\left(\frac{p +1}{2}\right)}{\epsilon}\right)^2. $$


So in your numerical example $\left(\frac{p +1}{2}\right)=0.995$, $\Phi^{-1}\left(\frac{p +1}{2}\right) \approx 2.5758 $, $\epsilon=0.01$ and $n=20$ so $$N \ge 2206103.1$$ which is certainly large.
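The same numbers in code, as a sketch assuming SciPy:

```python
from scipy.stats import norm

n, p, eps = 20, 0.99, 0.01
z = norm.ppf((p + 1) / 2)                 # ≈ 2.5758
N = (n ** 2 - 1) / 12 * (z / eps) ** 2
print(z, N)                               # N ≈ 2.2061e6 rolls
```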


Summation Notation Confusion

I am unclear about what the following summation means, given $\lambda_i$ for all $i \in \{1,2,\ldots, n\}$:



$\mu_{4:4} = \sum\limits_{i=1}^{4} \lambda_i + \mathop{\sum\sum}_{1\leq i_1 < i_2 \leq 4}(\lambda_{i_1} + \lambda_{i_2}) + \mathop{\sum\sum\sum}_{1\leq i_1 < i_2 < i_3 \leq 4}(\lambda_{i_1} + \lambda_{i_2} + \lambda_{i_3}) + \cdots$

I understand how this term expands:




$\sum\limits_{i=1}^{4} \lambda_i = \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4$.



But, I don't understand how this term expands



$\mathop{\sum\sum}_{\substack{1\leq i_1 < i_2 \leq 4}}(\lambda_{i_1} + \lambda_{i_2})$



Nor do I understand how this term expands



$\mathop{\sum\sum\sum}_{\substack{1\leq i_1 < i_2 < i_3 \leq 4}}(\lambda_{i_1} + \lambda_{i_2} + \lambda_{i_3})$




Any help in these matters would be appreciated.
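Concretely, the multiple sums just run over the $2$- and $3$-element subsets of $\{1,2,3,4\}$; a Python sketch that lists the expanded terms:

```python
from itertools import combinations

lam = {i: f"L{i}" for i in range(1, 5)}   # stand-ins for lambda_1..lambda_4

# Double sum: all pairs 1 <= i1 < i2 <= 4, i.e. C(4,2) = 6 terms
print(" + ".join(f"({lam[i]}+{lam[j]})" for i, j in combinations(range(1, 5), 2)))

# Triple sum: all triples 1 <= i1 < i2 < i3 <= 4, i.e. C(4,3) = 4 terms
print(" + ".join(f"({lam[i]}+{lam[j]}+{lam[k]})"
                 for i, j, k in combinations(range(1, 5), 3)))
```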

matrices - Looking for a proof that the resultant is the product of the differences of roots



I'm trying to find a general proof to an exercise given in Garrity et al's book, Algebraic Geometry: A problem-solving approach.



The problem is this: given two polynomials $f$ and $g$, show that for each pair of roots $r$, $s$ with $f(r) = 0$ and $g(s) = 0$, the difference $(r - s)$ divides the resultant.




There is a book of selected answers, but somewhat disappointingly, the solution is given as a brutal appeal to algebra. Moreover, the result is only given for quadratic polynomials.



It seems cited in a few places that the resultant, defined as the determinant of the Sylvester matrix of two polynomials $f = \lambda_1\prod_i (x - r_i)$ and $g = \lambda_2 \prod_j (x - s_j)$, is equal, up to a factor involving the leading coefficients, to the product $\prod_{i,j} (r_i - s_j)$. But so far, I have been unable to find a general proof of this fact.



Would anyone either mind sketching the proof, or else pointing me to a resource which does?


Answer



The following is really only a sketch. Feel free to ask for more details.



The coefficients of a polynomial $f$ are equal to elementary symmetric polynomials in the roots of $f$. Since the resultant is a polynomial function in the coefficients of two polynomials $f$ and $g$, it is a symmetric polynomial function in the roots of $f$ and $g$. The definition of the resultant is made in such a way that $\operatorname{res}(f,g)=0$ if (and only if) $f$ and $g$ share a common root. Now, we use the following:




Lemma. Let $R$ be an integral domain and denote by $K$ the algebraic closure of its quotient field. Let $p\in R[X,Y]$ be a polynomial such that $p(a,a)=0$ for all $a\in K$. Then $(Y-X)$ divides $p$.



Proof. Write $p\in R[X][Y]$ as a polynomial in $Y$, i.e. $p=\sum_{i=0}^n p_i Y^i$ with certain $p_i\in R[X]$. In this integral domain, perform division with remainder of $p$ by $Y-X$ to obtain $p=q(Y-X)+r$ for $q\in R[X,Y]$ and $r\in R[X]\subseteq R[X,Y]$. Since
$r(a)=r(a,a)=p(a,a)=0$ for all $a\in K$, we must have $r=0$. Indeed, $K$ is an infinite field because it is algebraically closed, and any nonzero polynomial has only finitely many roots. Consequently, $p=(Y-X)\cdot q$ as required.



Applying this lemma to the resultant as a polynomial in the zeros of $f$ and $g$, you get the desired statement.
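As a concrete sanity check of the product formula for small cases, here is a sketch using SymPy with two monic quadratics:

from sympy import symbols, resultant, expand

x, r1, r2, s1, s2 = symbols('x r1 r2 s1 s2')

f = (x - r1) * (x - r2)  # monic with roots r1, r2
g = (x - s1) * (x - s2)  # monic with roots s1, s2

res = resultant(f, g, x)  # determinant of the Sylvester matrix
target = (r1 - s1) * (r1 - s2) * (r2 - s1) * (r2 - s2)

print(expand(res - target))  # 0, i.e. res(f, g) = prod_{i,j} (r_i - s_j)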


soft question - Unexpected examples of natural logarithm

Quite often, mathematics students become surprised by the fact that for a mathematician, the term “logarithm” and the expression $\log$ nearly always mean natural logarithm instead of the common logarithm. Because of that, I have been gathering examples of problems whose statement have nothing to do with logarithms (or the exponential function), but whose solution does involve natural logarithms. The goal is, of course, to make the students see how natural the natural logarithms really are. Here are some of these problems:




  1. The sum of the series $1-\frac12+\frac13-\frac14+\cdots$ is $\log2$.

  2. If $x\in(0,+\infty)$, then $\lim_{n\to\infty}n\bigl(\sqrt[n]x-1\bigr)=\log x$.

  3. What's the average distance from a point of a square with the side of length $1$ to the center of the square? The question is ambiguous. Is the square a line or a two-dimensional region? In the first case, the answer is $\frac14\bigl(\sqrt2+\log\bigl(1+\sqrt2\bigr)\bigr)$; in the second case, the answer is smaller (of course): $\frac16\bigl(\sqrt2+\log\bigl(1+\sqrt2\bigr)\bigr)$.

  4. The length of an arc of a parabola can be expressed using logarithms.

  5. The area below an arc of the hyperbola $y=\frac1x$ (and above the $x$-axis) can be expressed using natural logarithms.

  6. Suppose that there is an urn with $n$ different coupons, from which coupons are being collected, equally likely, with replacement. How many coupons do you expect you need to draw (with replacement) before having drawn each coupon at least once? The answer is about $n\log(n)+\gamma n+\frac12$, where $\gamma$ is the Euler–Mascheroni constant.


  7. For each $n\in\mathbb N$, let $P_p(n)$ be the number of primitive Pythagorean triples whose perimeter is smaller than $n$. Then $\displaystyle P_p(n)\sim\frac{n\log2}{\pi^2}$. (By the way, this is also an unexpected use of $\pi$.)



Could you please suggest some more?
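For anyone who wants to see the first two items numerically before proving them, a minimal Python check:

import math

# Item 1: partial sums of 1 - 1/2 + 1/3 - ... approach log 2
s = sum((-1) ** (k + 1) / k for k in range(1, 10**6 + 1))
print(s, math.log(2))  # ~0.6931467 vs ~0.6931472

# Item 2: n*(x^(1/n) - 1) -> log x; tried here for x = 5
x, n = 5.0, 10**8
print(n * (x ** (1 / n) - 1), math.log(x))  # both ~1.6094379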

integration - Any simple way for proving $\int_{0}^{\infty} \operatorname{erf}(x)\operatorname{erfc}(x)\, dx = \frac{\sqrt 2-1}{\sqrt\pi}$?



How to prove
$$\int_{0}^{\infty} \operatorname{erf}(x)\operatorname{erfc}(x)\, dx = \frac{\sqrt 2-1}{\sqrt\pi},$$ where $\operatorname{erfc}(x)$ is the complementary error function? I have tried integration by parts, but I did not succeed.


Answer



The given integral equals




$$ \frac{4}{\pi}\int_{0}^{+\infty}\int_{0}^{x}e^{-a^2}\,da \int_{x}^{+\infty}e^{-b^2}\,db\,dx =\frac{4}{\pi}\iiint_{0\leq a\leq x\leq b} e^{-(a^2+b^2)}\,da\,db\,dx$$



or



$$\frac{4}{\pi}\iint_{0\leq a\leq b}(b-a)e^{-(a^2+b^2)}\,da\,db = \frac{4}{\pi}\int_{0}^{+\infty}\int_{0}^{\pi/4}(\cos\theta-\sin\theta)\rho^2 e^{-\rho^2}\,d\theta \,d\rho$$
or
$$ \frac{4}{\pi}(\sqrt{2}-1)\int_{0}^{+\infty}\rho^2 e^{-\rho^2}\,d\rho = \frac{4}{\pi}(\sqrt{2}-1)\frac{\sqrt{\pi}}{4}=\color{red}{\frac{\sqrt{2}-1}{\sqrt{\pi}}}.$$
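A quick numerical confirmation of the result (a sketch assuming SciPy):

import math
from scipy.integrate import quad
from scipy.special import erf, erfc

val, err = quad(lambda x: erf(x) * erfc(x), 0, math.inf)
print(val, (math.sqrt(2) - 1) / math.sqrt(math.pi))  # both ~0.23368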


complex numbers - Find square roots of $8 - 15i$

Find the square roots of:
$8-15i.$



Could I get some working out to solve it?




Also what are different methods of doing it?
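One standard method: write $(a+bi)^2 = 8-15i$ and compare parts, giving $a^2-b^2=8$ and $2ab=-15$; combining with $a^2+b^2=|8-15i|=\sqrt{8^2+15^2}=17$ yields $a^2=\frac{25}{2}$ and $b^2=\frac92$, so the square roots are $\pm\frac{1}{\sqrt2}(5-3i)$. A quick check of that value with Python's standard library:

import cmath

w = (5 - 3j) / cmath.sqrt(2)
print(w * w)                # (8-15j), up to rounding
print(cmath.sqrt(8 - 15j))  # the principal root; equals w up to rounding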

Tuesday, September 15, 2015

integration - Why are gauge integrals not more popular?



A recent answer reminded me of the gauge integral, which you can read about here.




It seems like the gauge integral is more general than the Lebesgue integral, e.g. if a function is Lebesgue integrable, it is gauge integrable. (EDIT - as Qiaochu Yuan points out, I should clarify this to mean that the set of Lebesgue integrable functions is a proper subset of gauge integrable functions.)




My question is this: What mathematical properties, if any, make the gauge integral (aka the Henstock–Kurzweil integral) less useful than the Lebesgue or Riemann integrals?




I have just a cursory overview of the properties that make Lebesgue integration more useful than Riemann in certain situations and vice versa. I was wondering if any corresponding overview could be given for the gauge integral, since I don't quite have the background to tackle textbooks or articles on the subject.


Answer



I would have written this as a comment, but for lack of reputation this has become an answer. Not long ago I posed the same question to a group of analysts, and they gave me more or less these answers:




1) The gauge integral is only defined for (subsets of) $\mathbb R^n$. It can easily be extended to manifolds but not to a more general class of spaces. It is therefore not of use in (general) harmonic analysis and other fields.



2) It lacks a lot of the very nice properties the Lebesgue integral has. For example, the implication $f \in \mathcal L^1 \Rightarrow |f| \in \mathcal L^1$ has no analogue for the gauge integral: a function can be gauge integrable without $|f|$ being gauge integrable.



3) And probably most important: as far as I know (also according to Wikipedia), there is no known natural topology for the space of gauge integrable functions.


probability - Expected value with nine-sided die

You have a fair nine-sided die, where the faces are numbered from $1$ to $9.$ You roll the die repeatedly, and write the number consisting of all your rolls so far, until you get a multiple of $3.$ For example, you could roll an $8,$ then a $2,$ then a $5.$ You would stop at this point, because $825$ is divisible by $3$, but $8$ and $82$ are not.




Find the expected number of times that you roll the die.



I am fairly new to the concept of expected value, and I don't really know how to go about solving this. It would be great if someone could help.
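A hint, plus a simulation to check it against: since $10 \equiv 1 \pmod 3$, appending a digit $d$ changes the running number's residue mod $3$ by $d$; and among the digits $1,\dots,9$, each residue class mod $3$ occurs with probability $\frac13$. So at every roll the running number becomes divisible by $3$ with probability $\frac13$, independently of the past, which makes the number of rolls geometric with mean $3$. A small sketch in Python supports this:

import random

def rolls_until_multiple_of_3(rng):
    residue, n = 0, 0
    while True:
        residue = (residue + rng.randint(1, 9)) % 3  # 10 = 1 (mod 3)
        n += 1
        if residue == 0:
            return n

rng = random.Random(0)
trials = 10**5
print(sum(rolls_until_multiple_of_3(rng) for _ in range(trials)) / trials)  # ~3.0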

probability - Expected value from popping bubbles




This is a fairly simple math problem from a programming challenge but I'm having trouble wrapping my head around it. What follows is a paraphrase of the problem:



Kundu has a Bubble Wrap and, like all of us, she likes popping it.
The Bubble Wrap has dimensions NxM, i.e. it has N rows and each row has
M cells, each of which holds a bubble. Initially all bubbles are filled
with air and can be popped.

What Kundu does is randomly pick one cell and try to pop it;
it may be that the bubble she selected is already popped.
In that case she has to ignore this. Both of these steps take 1 second of
time. Tell the total expected number of seconds in which Kundu would
be able to pop them all.



So I know that the expected value is the sum over values of the random variable $x$ of $x$ times the probability of that value occurring, but I'm having trouble parameterising the problem statement. What's the $x$ here, and how does time play into it?


Answer



One approach, which I like because it is rather general, is to use a Markov chain.



There are $B$ total bubbles. After $t$ attempts, $U(t)$ of them are left. She pops a bubble with probability $U(t)/B$ and finds a popped bubble with probability $1-U(t)/B$. Call $\tau$ the random number of attempts that it takes to pop all the bubbles. Let $F(u)=E_u(\tau)$, where $E_u$ means that we take the expectation with $U(0)=u$. Then by the total expectation formula



$$F(u)=(F(u-1)+1)(u/B)+(F(u)+1)(1-u/B).$$



This is coupled to the natural boundary condition $F(0)=0$. This recurrence for $F$ turns out to be very easy to solve, as you can see by rearranging it. Your solution is $F(B)$.
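For concreteness, rearranging gives $F(u)=F(u-1)+\frac{B}{u}$, so $F(B)=B\sum_{u=1}^{B}\frac1u$, i.e. $B$ times the $B$-th harmonic number. A minimal sketch in Python:

from fractions import Fraction

def expected_seconds(B):
    # F(u) = F(u-1) + B/u with F(0) = 0, hence F(B) = B * H_B
    F = Fraction(0)
    for u in range(1, B + 1):
        F += Fraction(B, u)
    return F

print(expected_seconds(4), float(expected_seconds(4)))  # 25/3, ~8.33 for a 2x2 wrap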


calculus - Does a bijective map from $(-\pi,\pi)\to\mathbb{R}$ exist?



I'm having trouble proving that $\mathbb R$ is equinumerous to $(-\pi,\pi)$. I'm thinking about using a trigonometric function such as $\cos$ or $\sin$, but those are bounded, taking values only in $[-1,1]$. Could someone help me define a bijective map from $(-\pi,\pi)\to\mathbb R$?


Answer



You could use a trigonometric function such as $\tan$, although you must first divide the number by $2$ to instead get $\tan$ applied to a number in the interval $(-\pi/2,\pi/2)$.
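Explicitly, one such map is $f:(-\pi,\pi)\to\mathbb R$ given by $f(x)=\tan\frac x2$; it is continuous, strictly increasing, and takes every real value, hence a bijection.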



Monday, September 14, 2015

elementary number theory - Is it possible to do modulo of a fraction

I am trying to figure out how to take the modulo of a fraction.



For example: 1/2 mod 3.



When I type it in google calculator I get 1/2. Can anyone explain to me how to do the calculation?
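In modular arithmetic, $\frac12 \bmod 3$ is usually read as the multiplicative inverse of $2$ modulo $3$, which is $2$, because $2\cdot 2=4\equiv 1 \pmod 3$; Google's calculator instead treats $1/2$ as the real number $0.5$, which is why it echoes $1/2$ back. A quick check in Python (three-argument pow accepts negative exponents from Python 3.8 on):

inv = pow(2, -1, 3)   # multiplicative inverse of 2 mod 3
print(inv)            # 2
print((2 * inv) % 3)  # 1, confirming 2 * 2 = 1 (mod 3)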

complex analysis - Let $f(x) = p(x)/q(x)$ with $\deg(p) = \deg(q) - 1$. Show that $\int_{-\infty}^\infty f(x)\,dx = 0$

I would like to show that $\int_{-\infty}^\infty f(x)\,dx = 0$ where $f(x) = \dfrac{p(x)}{q(x)}$ and $\deg(p) = \deg(q) - 1$. Also, $q(x)$ has no real roots. I was considering integrating along the contour $C_R$, where $C_R$ consists of the real line segment from $-R$ to $R$ and the upper semicircle, in which case



$$\lim_{R \to \infty} \int_{C_R} f(z)\,dz = \lim_{R \to \infty} \left(\int_{-R}^R f(x)\, dx+\int_{\Gamma_R}f(z)\,dz\right) = 2\pi i\sum_{k}\operatorname{Res}(f, z_k)$$



where $z_k$ are the zeroes of $q(x)$ in the upper half plane, and $\Gamma_R$ is the upper semicircle. However, I'm not sure how to proceed from here.



Any help would be appreciated.

Real roots of a quintic polynomial



Consider a real quintic polynomial
$$p(x;\, \alpha, \beta)=a_0 (\alpha,\beta) + a_1 (\alpha,\beta) x + a_2 (\alpha,\beta) x^2+ a_3 (\alpha,\beta) x^3 + a_4 (\alpha,\beta) x^4 - x^5$$
with real valued functions $a_i$ defined by
$$\forall i \in \{0,\ldots, 4\}\quad a_i:\Omega \to \mathbb{R}, $$
where $\Omega \subset \mathbb{R}^2$.



I'd like to prove that $p$ has only real roots in $x$ for all $(\alpha,\beta) \in \Omega$. A proof relying on Sturm's theorem seems infeasible, as the given functions $a_i$ are quite complex expressions themselves. Is there an easier method to accomplish this?


Answer




I assume all $a_i$ are continuous.
Compute the discriminant $D(\alpha,\beta)$ of the polynomial. If the set $D^{-1}(0)\subseteq \Omega$ has no interior points, it is sufficient to check a single $(\alpha,\beta)$ per connected component of $\Omega\setminus D^{-1}(0)$.
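A small sketch of that procedure on a hypothetical toy family, made up purely for illustration (SymPy for the symbolic discriminant, NumPy for the per-component root check):

import numpy as np
from sympy import symbols, discriminant

x, a, b = symbols('x a b')

# toy family: p(x; a, b) = x^5 - a*x^3 + b*x
p = x**5 - a * x**3 + b * x
D = discriminant(p, x)  # D(a, b) vanishes exactly where roots collide

# check one sample point per connected component of {D != 0},
# e.g. (a, b) = (5, 4), where p factors as x(x^2 - 1)(x^2 - 4):
roots = np.roots([1, 0, -5, 0, 4, 0])
print(D.subs({a: 5, b: 4}) != 0, np.max(np.abs(roots.imag)) < 1e-9)  # True True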


combinatorics - Find a combinatorial proof for $\binom{n+1}{k} = \binom{n}{k} + \binom{n-1}{k-1} + \cdots + \binom{n-k}{0}$





Let $n$ and $k$ be integers with $n \geq k \geq 0$. Find a combinatorial proof for



$$\binom{n+1}{k} = \binom{n}{k} + \binom{n-1}{k-1} + \cdots + \binom{n-k}{0} .$$





My approach:
I was thinking of using the binomial formula, as in $$2^n = \sum{\binom{n}{k}1^k1^{n-k}} .$$
I also tried to use Pascal's Identity $\binom{n}{r}=\binom{n-1}{r-1}+\binom{n-1}{r}$.


Answer



I think you can try to interpret this identity using the following real-life problem:




Imagine you have a plate of $n+1$ treats: $k$ of them are chocolate pieces, and the rest are Brussels sprouts. In how many ways can you eat all of them, one by one? Assume the chocolate pieces are indistinguishable from each other, and so are Brussels sprouts.





The answer is, of course, $\binom{n+1}{k}$. However, let's count differently, depending on what you eat first:




  • You bravely go straight on to a Brussels sprout, before eating any chocolate. There are $\binom{n}{k}$ ways to do that, as there will be $n$ treats left and $k$ of them are chocolate pieces;

  • You first eat one piece of chocolate, and then go on to eating a Brussels sprout: there are $\binom{n-1}{k-1}$ ways to do that, as there will be $n-1$ treats left, $k-1$ of them chocolate;

  • You first eat two pieces of chocolate, then go on to a Brussels sprout: similarly, it can be done in $\binom{n-2}{k-2}$ ways;

  • You first eat three pieces of chocolate...




etc. until:




  • You eat $k-1$ chocolate pieces first, then one Brussels sprout. Then, only one of the remaining $n-(k-1)$ treats is a chocolate, and the number of ways to do that is $\binom{n-(k-1)}{1}$;

  • You eat all the chocolate pieces first, and then you eat all of the Brussels sprouts: it can be done in only one way, which can also be written as $1=\binom{n-k}{0}$.



Generally, if you first eat $m$ pieces of chocolate ($0\le m \le k$) before getting on with your first Brussels sprout, there are $\binom{n-m}{k-m}$ ways to do it: after eating the initial chocolate pieces and the Brussels sprout, there will be $n-m$ treats left on the plate, $k-m$ of them being chocolate, so you can eat them in $\binom{n-m}{k-m}$ ways.




Altogether all those numbers must add to our original result $\binom{n+1}{k}$, i.e.



$$\binom{n+1}{k}=\binom{n}{k}+\binom{n-1}{k-1}+\binom{n-2}{k-2}+\cdots+\binom{n-(k-1)}{1}+\binom{n-k}{0}=\sum_{m=0}^k\binom{n-m}{k-m}$$
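As a quick machine check of the identity (a Python sketch using math.comb):

from math import comb

ok = all(
    comb(n + 1, k) == sum(comb(n - m, k - m) for m in range(k + 1))
    for n in range(0, 30)
    for k in range(0, n + 1)
)
print(ok)  # True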


integration - $\int e^{-x^2}\,dx$








How does one integrate $\int e^{-x^2}\,dx$? I read somewhere to use polar coordinates.



How is this done? What is the easiest way?
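In brief: there is no elementary antiderivative; by definition of the error function, $\int e^{-x^2}\,dx=\frac{\sqrt\pi}{2}\operatorname{erf}(x)+C$. The polar-coordinates remark refers to the definite integral over the whole real line: writing $I=\int_{-\infty}^{\infty}e^{-x^2}\,dx$, one squares it,

$$I^2=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2+y^2)}\,dx\,dy=\int_0^{2\pi}\int_0^{\infty}e^{-r^2}\,r\,dr\,d\theta=2\pi\cdot\frac12=\pi,$$

so $I=\sqrt\pi$, taking the positive root since the integrand is positive.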

calculus - Simpler way to compute a definite integral without resorting to partial fractions?



I found the method of partial fractions very laborious to solve this definite integral :
$$\int_0^\infty \frac{\sqrt[3]{x}}{1 + x^2}\,dx$$



Is there a simpler way to do this ?


Answer




Perhaps this is simpler.



Make the substitution $\displaystyle x^{2/3} = t$, giving us



$\displaystyle \frac{2 x^{1/3}}{3 x^{2/3}} dx = dt$, i.e $\displaystyle x^{1/3} dx = \frac{3}{2} t dt$



This gives us that the integral is



$$I = \frac{3}{2} \int_{0}^{\infty} \frac{t}{1 + t^3} \ \text{d}t$$




Now make the substitution $t = \frac{1}{z}$ to get



$$I = \frac{3}{2} \int_{0}^{\infty} \frac{1}{1 + t^3} \ \text{d}t$$



Add them up, cancel the $\displaystyle 1+t$, write the denominator ($\displaystyle t^2 - t + 1$) as $\displaystyle (t+a)^2 + b^2$ and get the answer.
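Carrying that last step out, $2I=\frac32\int_0^\infty\frac{dt}{t^2-t+1}=\frac32\cdot\frac{4\pi}{3\sqrt3}$, so $I=\frac{\pi}{\sqrt3}$. A quick numerical confirmation (a sketch assuming SciPy):

import math
from scipy.integrate import quad

val, err = quad(lambda x: x ** (1 / 3) / (1 + x**2), 0, math.inf)
print(val, math.pi / math.sqrt(3))  # both ~1.8138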


Sunday, September 13, 2015

elementary number theory - Prove $\forall n \in \mathbb{N}$, $6\mid (n^3-n)$. (strong induction)



A pretty straightforward statement is to be proven here, but I'm trying to grasp the fundamentals of what one would consider "strong induction". Please see my proof below; my questions are: is this a valid proof, and is it stylistically appropriate for a proof via strong induction?




EDIT: Sorry, this was not clear in my original question, but this post was mainly meant to help me grasp the fundamentals of strong induction. Thanks, though, for pointing out the simpler method of realizing that $(n^3-n)$ is simply the product of three consecutive integers and is thus divisible by both $2$ and $3$.



Q: Prove $\forall n \in \mathbb {N}$, $6\vert (n^3-n)$.



$Proof.$ We will prove this via mathematical induction.



Base Case. Let $n=1$ and observe that $6\vert 0$ is true. Let $n=2$ and observe that $6\vert 6$ is true. Finally, let $n=3$ and observe that $6\vert 24$ is true. Thus for $n\in\mathbb{N}$ where $1\le n \le 3$ our proposition holds.



Inductive Step. Assume our proposition is true for $n=j$ where $1\le j \le k$ and $k \ge 3$. Since $6\vert (k^3-k)$ we know $k^3-k=6l$ for some $l\in\mathbb{Z}$.




Then



$$\begin{align}(k-2)^3-(k-2)&= k^3-6k^2+12k-8-k+2\\
&=(k^3-k)-6k^2+12k-6\\
&=6l-6k^2+12k-6\\
&=6(l-k^2+2k-1).
\end{align}$$



Thus $\exists m\in\mathbb{Z},(k-2)^3-(k-2)=6m\Rightarrow 6\vert (k-2)^3-(k-2)$. It follows by mathematical induction that $\forall n \in \mathbb {N}$, $6\vert (n^3-n)$. $\Box$



Answer



I'm not sure that I understand your inductive step. You say that for all $j$ such that $1\leq j\leq k$ we have $6\mid j^3-j$, where $k\geq 3$. Then you should now show that this property holds for $k+1$, that is, $6\mid (k+1)^3-(k+1)$. Instead, it looks like you show me that $6\mid (k-2)^3-(k-2)$.



Instead, consider that $(k+1)^3-(k+1)=k^3+3k^2+2k$ and $(k-1)^3-(k-1)=k^3-3k^2+2k$. Compare the two.



Note that while this isn't the induction you're used to (sometimes called weak induction, or the first principle of induction), this "strong" induction isn't much stronger. The proof only needs the case that $k-1$ has the desired property to show that $k+1$ has the property as well. The idea of strong induction is that you would use the fact that all $j$ from $1$ to $k$ use the desired property.



A good example would be the proof of the fundamental theorem of arithmetic; to show that every number greater than $1$ has a prime factorization, we first say that $2$ is prime (the base case). Now let $k>2$, and say that for all $j$ such that $2\leq j\leq k$, $j$ has a prime factorization. If $k+1$ is prime, it is its own prime factorization. If not, it has a prime factor less than it, say $p$. Then $\frac{k+1}{p}$ has a prime factorization by the strong induction hypothesis, so $k+1$ does as well. Here, the entire assumption for all $j$ is needed, because you don't know exactly what $\frac{k+1}{p}$ is, just that it's less than $k+1$.
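As an aside, the statement itself is easy to spot-check empirically in Python:

# empirical check that 6 | n^3 - n for the first 10^5 positive integers
print(all((n**3 - n) % 6 == 0 for n in range(1, 10**5 + 1)))  # True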


sequences and series - Help in evaluating $\sum\limits_{k=1}^\infty \frac1{k}\sin\left(\frac{a}{k}\right)$




I would like to try to evaluate



$$\sum\limits_{k=1}^\infty \frac{\sin (\frac{a}{k})}{k}$$



However, all of my attempts have been fruitless. Even Wolfram Alpha cannot evaluate this sum. Can someone help me evaluate this interesting sum?


Answer



This is the Hardy-Littlewood function (see this and this as well), which is known to be very slowly convergent. Walter Gautschi, in this article, shows that



$$\sum_{k=1}^\infty \frac1{k}\sin\frac{x}{k}=\int_0^\infty \frac{\operatorname{bei}(2\sqrt{xu})}{\exp\,u-1}\mathrm du$$




where $\operatorname{bei}(x)=\operatorname{bei}_0(x)$ is a Kelvin function, through Laplace transform techniques (i.e., $\mathcal{L} \{\operatorname{bei}(2\sqrt{xt})\}=\sin(x/s)/s$), and gives a few methods for efficiently evaluating the integral.
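A numerical cross-check of that representation (a sketch assuming mpmath's Kelvin function bei and its accelerated summation nsum):

import mpmath as mp

a = 1.0  # the parameter x in the formula above

lhs = mp.nsum(lambda k: mp.sin(a / k) / k, [1, mp.inf])
rhs = mp.quad(lambda u: mp.bei(0, 2 * mp.sqrt(a * u)) / (mp.exp(u) - 1), [0, mp.inf])
print(lhs, rhs)  # should agree to working precision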






Here's a plot of the Hardy-Littlewood function:



[plot of the Hardy-Littlewood function]


Saturday, September 12, 2015

Negative solution for a positive continued fraction



$$
x=1+\cfrac{1}{1+\cfrac{1}{1+...}}\implies x=1+\frac{1}{x}\implies x=\frac{1\pm \sqrt{5}}{2}
$$
Can the negative solution be considered as a solution? If yes, how is it possible to have a negative solution for a positive continued fraction? If no, how do we prove that it can't be a solution?




Edit 1: I want to understand the assumption we make while forming the equation, which is what produces the "extraneous solution".


Answer



No, the negative number is not a solution. You showed that if $x$ is equal to that fraction, then it is either $\frac{1+\sqrt 5}{2}$ or $\frac{1-\sqrt5}{2}$. You calculated possible candidates for solutions, not the solution itself.



You can prove that $x$ must be positive by arguing that $x$ is the limit of a sequence all of whose terms are at least $1$, so the limit (if it exists, which should also be proven) must be at least $1$, and in particular positive.
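Numerically, iterating $x\mapsto 1+\frac1x$ from $x=1$ makes this visible; every iterate is at least $1$:

x = 1.0
for _ in range(50):
    x = 1 + 1 / x
print(x, (1 + 5**0.5) / 2)  # both ~1.6180339887, the positive root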


algebra precalculus - Can a finite sum of square roots be an integer?




Can a sum of a finite number of square roots of integers be an integer? If yes can a sum of two square roots of integers be an integer?



The square roots need to be irrational.


Answer



I think this link is a pretty good answer to your question. However, it might be at a level which is too advanced for you, since this is a pretty natural question to ask relatively early on in life, but it takes some significantly more difficult mathematics to prove.



The direct, yes/no answer to the question is "Yes, but only if the numbers inside the square roots were already perfect squares," or equivalently "If you've already done all the simplifying that you can do, then no."
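For instance, $\sqrt2+\sqrt8=3\sqrt2$ is irrational, whereas $\sqrt4+\sqrt9=2+3=5$ is an integer only because each square root was already an integer.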


calculus - Evaluating $\int 3x\sin\left(\frac x4\right) \, dx$.




$\displaystyle\color{darkblue}{3\int x\sin\left(\dfrac x4\right)\,\mathrm dx}$



$$\begin{align}
\dfrac{-4x}{x}\cos\dfrac x4 \,\,\boldsymbol\Rightarrow\,\, & -4\cos\left(\dfrac x4\right)-\int \dfrac{-4}{x}\cos\left(\dfrac x4 \right)\,\mathrm dx\\\,\\

& 3\left(-4\cos\left(\dfrac x4\right)+\int\dfrac4x\cos\left(\dfrac x4\right)\,\mathrm dx\right)\\\,\\
&\int\dfrac{4\cos\left(\frac x4\right)}{x}\mathrm dx
\end{align}$$ $\displaystyle \color{darkblue}{uv-\int v\dfrac{\mathrm du}{\mathrm dx}\,\mathrm dx }$



$\displaystyle\boxed{\displaystyle\,\,-4\cos\left(\dfrac x4\right)+4\int\dfrac{\cos x/4}{x}\,\mathrm dx\,\,}$



$\displaystyle 3\left[-4\cos\left( \dfrac x4\right) +4\left(\cos\left( \dfrac x4\right)\ln(x)\right)\right]$



$\displaystyle -12\cos (x/4) + 12\cos (x/4) \ln(x) \rightarrow \text{ wrong.}$







$\displaystyle\color{darkblue}{\int 3x\sin\left(\dfrac x4\right)\,\mathrm dx}$
$\quad\quad\quad\quad\displaystyle\int\dfrac{\cos(x/4)}{x}\rightarrow\dfrac1x\int\cos(x/4)\mathrm dx$



$\displaystyle3\left[-4\cos(x/4)+\dfrac4x\sin(x/4)\right]$



$\displaystyle -12\cos(x/4)+\dfrac{48}{x}\sin(x/4)$





In the text above is my work done to solve the following question:





Find the indefinite integral of: $\left[3x\sin\left(\dfrac x4\right)\right]$





The bordered area is the furthest I got (there should be a $3$ at the front multiplying the whole expression, but I usually remember to add that at the end). The part where I wrote "wrong" is what I thought the answer was. What I'm having trouble with is integrating $\frac{\cos(x/4)}{x}$. Would I need to integrate it by parts to get that integral? Thanks in advance; I hope I made some sense in what I'm trying to achieve. I guess what I'm looking for is a way to integrate $$ \frac{\cos(x/4)}{x}$$


Answer




I think you have made a mistake in the application of integration by parts. Taking $u$ as $x$ and $\frac{\mathrm{d}v}{\mathrm{d}x}$ as $\sin \frac{x}{4}$, we should get
\begin{align*} 3 \int {x} \sin \frac{x}{4} \mathrm{d}x &= 3 ( -4x \cos \frac{x}{4}- \int -4 \cos \frac{x}{4} \mathrm{d}x) \\ &= -12x \cos \frac{x}{4} + 48 \sin \frac{x}{4} + C.
\end{align*}
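A quick computer-algebra confirmation of that antiderivative (a sketch using SymPy):

from sympy import symbols, sin, integrate

x = symbols('x')
print(integrate(3 * x * sin(x / 4), x))  # -12*x*cos(x/4) + 48*sin(x/4)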


Friday, September 11, 2015

calculus - Is it possible to give the sum of the convergent series $\sum_{j=1}^{\infty}\frac{1}{1+2^j}$?



It is easy to show that the series $\sum_{j=1}^{\infty}\frac{1}{1+2^j}$ is convergent. Now my question is: is it possible to give the sum of this series in closed form? I have tried to evaluate it with Mathematica, and the output is:




In[1]:= Sum[1/(1 + 2^j), {j, 1, Infinity}]


Out[1]= (-Log[2] + QPolyGamma[0, 1 - (I [Pi])/Log[2], 1/2])/Log[2]



In[2]:= Re[(-Log[2] + QPolyGamma[0, 1 - (I [Pi])/Log[2], 1/2])/Log[2]]



Out[2]= (-Log[2] + Re[QPolyGamma[0, 1 - (I [Pi])/Log[2], 1/2]])/Log[2]




In[3]:= N[(-Log[2] + Re[QPolyGamma[0, 1 - (I [Pi])/Log[2], 1/2]])/
Log[2]]



Out[3]= 0.7645



I also tried to consider some series of functions, such as $\sum_{j=1}^{\infty}\frac{1}{1+x^j},$ hoping to estimate the desired sum by differentiation/integration term-by-term. But perhaps that is very hard, and my attempts were in vain. Can someone give me some clues? Many thanks. I guess maybe we can sum the series $\sum_{j=1}^{\infty}\frac{1}{1+a^j}$ explicitly for all $a>1$ if we can do it for $a=2.$


Answer



Notice $$\frac{x}{1+x} = \frac{x(1-x)}{1-x^2} = \frac{x(1+x)-2x^2}{1-x^2} = \frac{x}{1-x} - \frac{2x^2}{1-x^2}$$
Let $q = \frac12$, we have
$$\sum_{n=1}^\infty

\frac{1}{1+2^n} =
\sum_{n=1}^\infty \frac{q^n}{1+q^n}
= \sum_{n=1}^\infty \frac{q^n}{1-q^n} - 2\sum_{n=1}^\infty \frac{q^{2n}}{1-q^{2n}}
= L(q) - 2L(q^2)
$$

where $L(x)$ is a Lambert series
defined by
$$L(x) \stackrel{def}{=} \sum_{n=1}^\infty \frac{x^n}{1-x^n}
= \frac{\psi_{x}(1) + \log(1-x)}{\log x}
$$


This Lambert series can be expressed in terms of q-polygamma function $\psi_\beta(x)$.
The sum we want becomes



$$\sum_{n=1}^\infty \frac{1}{1+2^n}
= \frac{\log(3/2) - \psi_{1/2}(1) + \psi_{1/4}(1)}{\log 2}$$



By throwing the command (Log[3/2] - QPolyGamma[1,1/2] + QPolyGamma[1,1/4])/Log[2] to WA,
one finds
$$\sum_{n=1}^\infty \frac{1}{1+2^n}
\approx 0.76449978034844420919131974725549848...$$
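The quoted digits are easy to reproduce with a direct high-precision summation (a sketch assuming mpmath):

import mpmath as mp

mp.mp.dps = 40
print(mp.nsum(lambda n: 1 / (1 + mp.mpf(2) ** n), [1, mp.inf]))
# 0.76449978034844420919131974725549848...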



analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...