Monday, November 30, 2015

logic - Show that a sequence of elements each realizing an isolated type over the previous realizes an isolated type




I'm trying to prove the following result which seems correct to me, but I'm not sure how to proceed. The proposition is:




Let $M$ be a structure, $A\subseteq M$, $(a_1,\dots,a_n)$ be a sequence from $M$ such that for all $i\le n$ the type $tp(a_i\,\vert\,A\cup\{a_1,\dots,a_{i-1}\})$ is isolated by $\varphi_i$, then $tp(a_1,\dots,a_n\,\vert\,A)$ is isolated by $\bigwedge_{i=1}^n\varphi_i$.




What I have so far:



Let $\Phi_i$ denote $\bigwedge_{j=1}^i\varphi_j$. By induction, in the base case $tp(a_1\,\vert\,A)$ is isolated by $\Phi_1=\varphi_1$ by hypothesis.




Now suppose that $\Phi_i$ isolates $tp(a_1,\dots,a_i\,\vert\,A)$ and $\varphi_{i+1}$ isolates $tp(a_{i+1}\,\vert\,A\cup\{a_1,\dots,a_i\})$. Let $\psi\in tp(a_1,\dots,a_{i+1}\,\vert\,A)$ and $(c_1,\dots,c_{i+1})$ from $M$ such that $M\vDash\Phi_i(c_1,\dots,c_i)\land\varphi_{i+1}(a_1,\dots,a_i,c_{i+1})$. We want to show that $M\vDash\psi(c_1,\dots,c_i,c_{i+1})$. Now since $\psi(a_1,\dots,a_i,x)\in tp(a_{i+1}\,\vert\,A\cup\{a_1,\dots,a_i\})$, we have $M\vDash\psi(a_1,\dots,a_i,c_{i+1})$.



Now because $M\vDash\Phi_i(c_1,\dots,c_i)$ I know that $tp(c_1,\dots,c_i\,\vert\,A)=tp(a_1,\dots,a_i\,\vert\,A)$, but it seems that I need to show that the two tuples realize the same type over $A\cup\{c_{i+1}\}$ in order to conclude. I'd appreciate any hint to how to proceed from here, or a counterexample if the result is not correct.


Answer



There's an issue with the statement of the proposition you're trying to prove: The formula $\varphi_i$ is a formula over $A\cup\{a_1,\dots,a_{i-1}\}$, not over $A$, so it can't appear as a conjunct in a formula purporting to isolate $\text{tp}(a_1,\dots,a_n/A)$.



It is true that if $\text{tp}(a_i/A\cup\{a_1,\dots,a_{i-1}\})$ is isolated for all $i$, then $\text{tp}(a_1,\dots,a_n/A)$ is isolated, and an inductive argument of the type you wrote down will work. I would recommend first trying to prove this, and then extracting the formula that works from the proof.
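For reference, here is a sketch of the key step (the two-variable case; the general case iterates it), in my own notation rather than necessarily the answerer's. Suppose $\psi(x)$ isolates $\text{tp}(a/A)$ and $\varphi(x,y)$ is a formula over $A$ such that $\varphi(a,y)$ isolates $\text{tp}(b/Aa)$. Then $\psi(x)\wedge\varphi(x,y)$ isolates $\text{tp}(a,b/A)$: given $\theta(x,y)\in\text{tp}(a,b/A)$, we have $\theta(a,y)\in\text{tp}(b/Aa)$, so $M\vDash\forall y\,(\varphi(a,y)\to\theta(a,y))$; the formula $\forall y\,(\varphi(x,y)\to\theta(x,y))$ is over $A$ and belongs to $\text{tp}(a/A)$, hence is implied by $\psi(x)$. So whenever $M\vDash\psi(c)\wedge\varphi(c,d)$ we get $M\vDash\theta(c,d)$.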


algebra precalculus - Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$?


I know there must be something unmathematical in the following but I don't know where it is:


\begin{align} \sqrt{-1} &= i \\ \\ \frac1{\sqrt{-1}} &= \frac1i \\ \\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \\ \sqrt{\frac1{-1}} &= \frac1i \\ \\ \sqrt{\frac{-1}1} &= \frac1i \\ \\ \sqrt{-1} &= \frac1i \\ \\ i &= \frac1i \\ \\ i^2 &= 1 \\ \\ -1 &= 1 \quad !!! \end{align}


Answer



Between your third and fourth lines, you use $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$. This is only (guaranteed to be) true when $a\ge 0$ and $b>0$.



edit: As pointed out in the comments, what I meant was that the identity $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ has domain $a\ge 0$ and $b>0$. Outside that domain, applying the identity is inappropriate, whether or not it "works."


In general (and this is the crux of most "fake" proofs involving square roots of negative numbers), $\sqrt{x}$ where $x$ is a negative real number ($x<0$) must first be rewritten as $i\sqrt{|x|}$ before any other algebraic manipulations can be applied (because the identities relating to manipulation of square roots [perhaps exponentiation with non-integer exponents in general] require nonnegative numbers).


This similar question, focused on $-1=i^2=(\sqrt{-1})^2=\sqrt{-1}\sqrt{-1}\overset{!}{=}\sqrt{-1\cdot-1}=\sqrt{1}=1$, is using the similar identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which has domain $a\ge 0$ and $b\ge 0$, so applying it when $a=b=-1$ is invalid.
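As a quick numerical illustration of why the domain matters (a minimal Python sketch using the principal square root from cmath; not part of the original answer):

import cmath

a, b = -1, -1
print(cmath.sqrt(a) * cmath.sqrt(b))   # (-1+0j): sqrt(-1)*sqrt(-1) = i*i
print(cmath.sqrt(a * b))               # (1+0j):  sqrt((-1)*(-1)) = sqrt(1)

# Inside the identity's domain (a >= 0, b >= 0) the two sides agree:
a, b = 2.0, 3.0
print(cmath.sqrt(a) * cmath.sqrt(b), cmath.sqrt(a * b))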


complex analysis - Finding $\int_0^{2\pi}\frac{\mathrm{d}x}{4\cos^2x+\sin^2x}$




I was checking some old complex analysis homework and I found the following definite integral $$\int_0^{2\pi}\frac{\mathrm{d}x}{4\cos^2x+\sin^2x},$$
had to be found with the residue theorem. Back at the time I thought it was trivial; however, now that I'm trying to do it, I have no idea how to start. Could anyone please give me a hint on how to start?



Thanks.


Answer



Hint: Let $z=e^{i x}$; $dx=-i dz/z$; $\cos{x}=(z+z^{-1})/2$. The integral is then equal to



$$-i \oint_{|z|=1} \frac{dz}{z} \frac{1}{1+\frac{3}{4} (z+z^{-1})^2} $$




Multiply out, determine the poles, figure out which poles, if any, lie within the unit circle, find the residues of those poles, multiply the sum of those residues (there may only be one, or none) by $i 2 \pi$, and you are done.
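If I have carried out that program correctly, the residues inside the unit circle combine to give the value $\pi$; a numerical sanity check (a SciPy sketch, not part of the original answer):

from scipy.integrate import quad
import numpy as np

val, err = quad(lambda x: 1.0 / (4*np.cos(x)**2 + np.sin(x)**2), 0, 2*np.pi)
print(val, np.pi)   # both approximately 3.141592653589793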


elementary number theory - Is this proof, that $\sqrt{n}$ is irrational for all non-square $n \in \mathbb{N}$, correct or not?

Prove that the square root of all non-square numbers $n \in \mathbb{N}$ is irrational




I have made an attempt to prove this, I don't know if it's correct though:



Take a non-square number $n \in \mathbb{N}$, and we'll assume that $\sqrt{n}$ is rational.



$\sqrt{n} = \dfrac{p}{q}$ , $p,q \in \mathbb{N}$ and they have no common factors.



$$nq^2=p^2$$ Let's say that $z$ is a prime factor of $q$; it must also be a prime factor of $q^2$. However, it then must ALSO be a prime factor of $p^2$ because of the equality above, and this is a contradiction.

number theory - Write a single congruence?


Write a single congruence that is equivalent to the pair of congruences:



$x\equiv 1 \pmod 4$ and $x\equiv 2 \pmod 3$.




I am new to Number Theory/ Modular Arithmetic. Just started reading the theory from a book yesterday. Earlier I had read about it a little on the internet. I know the basic definitions and some properties. I do not understand the question properly. What does writing a single congruence mean? Does it mean I have to find $x$? How do I solve this question. Do I need to use any theorem like the Chinese Remainder Theorem or something else? I am stuck. Please help.
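"Write a single congruence" means: find one congruence $x \equiv a \pmod m$ with the same solution set as the pair; since $\gcd(4,3)=1$, the Chinese Remainder Theorem guarantees such a congruence with $m = 4\cdot 3 = 12$. A brute-force Python sketch (my own illustration, not part of the question) makes this concrete:

sols = [x for x in range(60) if x % 4 == 1 and x % 3 == 2]
print(sols)   # [5, 17, 29, 41, 53] -- exactly the x with x ≡ 5 (mod 12)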

integration - Finding indefinite integral by partial fractions



$$\displaystyle \int{dx\over{x(x^4-1)}}$$



Can this integral be calculated using the Partial Fractions method.


Answer



HINT:




We need to use Partial Fraction Decomposition



Method $1:$



As $x^4-1=(x^2-1)(x^2+1)=(x-1)(x+1)(x^2+1),$



$$\text{Put }\frac1{x(x^4-1)}=\frac Ax+\frac B{x-1}+\frac C{x+1}+\frac {Dx+E}{x^2+1}$$







Method $2:$
$$I=\int \frac1{x(x^4-1)}dx=\int \frac{xdx}{x^2(x^4-1)} $$



Putting $x^2=y,2xdx=dy,$



$$I=\frac12\int \frac{dy}{y(y^2-1)}$$



$$\text{ Now, put }\frac1{y(y^2-1)}=\frac A y+\frac B{y-1}+\frac C{y+1}$$







Method $3:$



$$I=\int \frac1{x(x^4-1)}dx=\int \frac{x^3dx}{x^4(x^4-1)} $$



Putting $x^4=z,4x^3dx=dz,$



$$I=\frac14\int \frac{dz}{z(z-1)}$$



$$\text{ Now, put }\frac1{z(z-1)}=\frac Az+\frac B{z-1}$$




$$\text{ or by observation, }\frac1{z(z-1)}=\frac{z-(z-1)}{z(z-1)}=\frac1{z-1}-\frac1z$$



Observe that the last method is susceptible to generalization.



$$J=\int\frac{dx}{x(x^n-a)}=\int\frac{x^{n-1}dx}{x^n(x^n-a)}$$



Putting $x^n=u,nx^{n-1}dx=du,$



$$J=\frac1n\int \frac{du}{ u(u-a)}$$

$$\text{ and }\frac1{u(u-a)}=\frac1a\cdot\frac{u-(u-a)}{u(u-a)}=\frac1a\left(\frac1{u-a}-\frac1u\right)$$
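Assembling Method $3$ (my own assembly of the steps above, with $z=x^4$), the antiderivative works out to $\frac14\ln\left|\frac{x^4-1}{x^4}\right|+C$; a quick numerical cross-check in Python:

from scipy.integrate import quad
import numpy as np

F = lambda x: 0.25 * np.log((x**4 - 1) / x**4)   # antiderivative, valid for x > 1
val, _ = quad(lambda x: 1.0 / (x * (x**4 - 1)), 2, 3)
print(val, F(3) - F(2))   # the two values agree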


Sunday, November 29, 2015

complex analysis - How to finish proof of ${1 \over 2}+ \sum_{k=1}^n \cos(k\varphi) = {\sin({n+1 \over 2}\varphi)\over 2 \sin {\varphi \over 2}}$


I'm trying to prove the identity $$ {1 \over 2}+ \sum_{k=1}^n \cos(k\varphi ) = {\sin({n+1 \over 2}\varphi)\over 2 \sin {\varphi \over 2}}$$


What I've done so far:


From the geometric series $\sum_{k=0}^{n-1}q^k = {1-q^n \over 1 - q}$ for $q = e^{i \varphi}$ and taking the real part on both sides I got


$$ \sum_{k=0}^{n-1}\cos (k\varphi ) = {\sin {\varphi \over 2} - \cos (n\varphi) + \cos (n-1)\varphi \over 2 \sin {\varphi \over 2}}$$


I checked all my calculations twice and found no mistake. Then I used trigonometric identities to get



$$ {1 \over 2} + \sum_{k=1}^{n-1}\cos (k\varphi ) + {\cos n\varphi \over 2 } = {\sin \varphi \sin (n \varphi ) \over 2 \sin {\varphi \over 2}}$$



How to finish this proof? Is there a way to rewrite


$\sin \varphi \sin (n \varphi )$ as


$\sin({n+1 \over 2}\varphi) - \sin {\varphi \over 2} \cos (n \varphi) $?



Answer



There is a mistake in the real part.


$$ \frac{q^n - 1}{q - 1} = \frac{e^{in\phi} - 1}{e^{i\phi} - 1} = \frac{e^{i(n-1/2)\phi} - e^{-i\phi/2}}{e^{i\phi/2} - e^{-i\phi/2}} = \frac{- i e^{i(n-1/2)\phi} + i e^{-i\phi/2}} {2\sin{\phi/2}} $$ the real part is $$ \frac{\sin ((n-1/2)\phi) + \sin(\phi/2)} {2\sin{\phi/2}} $$ yielding the right result.



However, there is a simpler solution: $$ 1 + 2\sum_{k=1}^n \cos k\phi = 1+ \sum_{k=1}^n \left(e^{i k\phi} + e^{-i k\phi}\right) = \sum_{k=-n}^n e^{i k\phi} $$ which simplifies to $$ e^{-i n\phi} \frac{1 - e^{i (2n+1)\phi}}{1 - e^{i\phi}} = \frac{e^{-i (n + 1/2)\phi} - e^{i (n + 1/2)\phi}} { e^{-i\phi/2} - e^{i\phi/2}} = \frac{\sin((n + 1/2)\phi)}{\sin(\phi/2)} $$
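Both the corrected real part and the final identity are easy to sanity-check numerically (a small sketch of my own):

import numpy as np

n, phi = 7, 0.9
lhs = 0.5 + sum(np.cos(k * phi) for k in range(1, n + 1))
rhs = np.sin((n + 0.5) * phi) / (2 * np.sin(phi / 2))
print(lhs, rhs)   # agree to machine precision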



complex analysis - Intuition behind euler's formula










Hi, I've been curious for quite a long time whether it is actually possible to have an intuitive understanding of euler's apparently magical formula: $$e^{ \pm i\theta } = \cos \theta \pm i\sin \theta$$




I've obviously seen the taylor series/differential equation based proofs, and perhaps I'm just going to have to accept that it's not possible to have an intuition on what it means to raise a number to an imaginary power. I obviously realise that the formula implies that an exponential with a variable imaginary part can be visualised as a complex function going around in a unit circle about the origin of the complex plane. But WHY is this? And why is e so special that it moves at just a fast enough rate so that the argument of the exponential is equal to the arc length of the path made by the locus (i.e. the angle in radians we've moved around the circle)? Is there any way anyone out there 'understand' this?



Thank you!


Answer



If I recall from reading Analysis of the Infinite (very nice book, at least Volume $1$ is), Euler got it from looking at
$$\left(1+\frac{i}{\infty}\right)^{\infty}$$
whose expansion is easy to find using the Binomial Theorem with exponent $\infty$.



There is a nice supposed quote from Euler, which can be paraphrased as "Sometimes my pencil is smarter than I am." He freely accepted the results of his calculations. But of course he was Euler.
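One can watch Euler's $\left(1+\frac{i}{\infty}\right)^{\infty}$ heuristic converge numerically (a sketch; the choice $\theta = 1$ is mine):

import cmath

theta = 1.0
for n in (10, 1000, 100000):
    print(n, (1 + 1j * theta / n) ** n)
print(cmath.exp(1j * theta))   # the limit: cos(1) + i sin(1)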


linear algebra - T/F: If $A$ and $B$ are matrices such that AB=$I_n$, then $A$ and $B$ are square matrices.

In my book (Linear Algebra by Steven Andrilli 4th ed.) There is a true/false question in the chapter 2 review that's all about "systems of linear equations." The question asks:



T/F: If $A$ and $B$ are matrices such that AB=$I_n$, then $A$ and $B$ are square matrices.



I said that this was true. The correct answer according to the book is False.




If $A$ is an $m \times n$ non-square matrix and $B$ is an $n \times p$ matrix, then $AB$ is an $m \times p$ matrix. I don't understand how it can be square if $A$ or $B$ is non-square.

number theory - Classification of the positive integers not being the sum of four non-zero squares



It is well known that every positive integer is the sum of at most four perfect squares (including $1$).





But which positive integers are not the sum of four non-zero perfect squares ($1$ is still allowed as a perfect square) ?




I showed that the numbers $2^k$, $2^k\cdot 3$ and $2^k\cdot 7$ with odd positive integer $k$ have this property. I checked the numbers up to $10^4$ and, above $41$, no examples other than those of the mentioned forms occurred. So my question is whether additional positive integers with the desired property exist.


Answer



The complete list is on page 140 of Conway's little book:
$$ 1,3,5,9,11,17,29,41, \; 2 \cdot 4^m \; , \; 6 \cdot 4^m \; , \; 14 \cdot 4^m \; . $$
The proof is on the same page, with preparatory material in the previous few pages.



The first detail: any number $\equiv 3 \pmod 8$ is the sum of three squares, and those squares must all be odd, therefore nonzero. The square of any number that is divisible by $4$ becomes $0 \pmod 8.$ As a result, any number $\equiv 6 \pmod 8$ is the sum of three squares $(2A)^2 + B^2 + C^2$, where $A,B,C$ must be odd, therefore nonzero.




10 June: Second detail: if $x^2 + y^2 + z^2 \equiv 0 \pmod 4,$ then $x,y,z$ are all even. This means that any number $\equiv 12 \pmod{32}$ is the sum of three nonzero squares. Same for $24 \pmod{32}$.
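Conway's list is easy to confirm by brute force for small numbers (a Python sketch; my own check up to 200):

import itertools

def is_sum_of_four_nonzero_squares(n):
    squares = [k * k for k in range(1, int(n ** 0.5) + 1)]
    return any(sum(c) == n for c in
               itertools.combinations_with_replacement(squares, 4))

print([n for n in range(1, 201) if not is_sum_of_four_nonzero_squares(n)])
# [1, 2, 3, 5, 6, 8, 9, 11, 14, 17, 24, 29, 32, 41, 56, 96, 128]
# i.e. the odd exceptions plus 2*4^m, 6*4^m, 14*4^m in this range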


functional equations - Find all functions $f:mathbb Rto mathbb R$ such that $f(a^2+b^2)=f(a^2-b^2)+f(2ab)$ for every real $a$,$b$

I guessed $f(a)=a^2$ and $f(a)=0$, but have no idea how to get to the solutions in a good way.



Edit: I did what was suggested:



from $a=b=0$



$f(0)=0$



The function is even, because from $b=-a$




$f(2a^2)=f(-2a^2)$.

Find sum of infinite anharmonic(?) series


I need help with this:


$$ \frac{1}{1\cdot2\cdot3\cdot4}+\frac{1}{2\cdot3\cdot4\cdot5}+\frac{1}{3\cdot4\cdot5\cdot6}+\dots $$ I don't know how to compute the sum of this series. It is similar to the standard anharmonic series so it should have a similar solution. However I can't work it out.


Answer



What you're evaluating is the infinite series


$$\sum_{n = 1}^\infty \frac{1}{n(n+1)(n+2)(n+3)}.$$



Since


$$\frac{1}{n(n+3)} = \frac{1}{3n}-\frac{1}{3(n+3)},$$


then


$$\frac{1}{n(n+1)(n+2)(n+3)} = \frac{1}{3n(n+1)(n+2)}-\frac{1}{3(n+1)(n+2)(n+3)}.$$


So your series telescopes to


$$\frac{1}{3(1)(2)(3)} = \frac{1}{18}.$$
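A partial-sum check (a quick sketch):

s = sum(1.0 / (n * (n + 1) * (n + 2) * (n + 3)) for n in range(1, 100001))
print(s, 1 / 18)   # both 0.055555...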


Functional Equation Problems

If $\Bbb N$ denotes all positive integers. Then find all functions $f: \mathbb{N} \to \mathbb{N}$ which are strictly increasing and such for all positive integers $n$, we have:



$$f(f(n)) = n+2$$



So far I know that $f(n)$ is greater than $n$ because the function is strictly increasing, but I'm not sure how to use this in order to solve the equation.

Saturday, November 28, 2015

calculus - Convergence of the series $\sum\limits_{n=1}^{\infty}\frac{\cos(\alpha\sqrt{n})}{n^q}$



Determine for which real values of $\alpha$ and $q$ the following series converges $\sum\limits_{n=1}^{\infty}\frac{\cos(\alpha\sqrt{n})}{n^q}$?
So far I managed to prove that 1) for $q\leqslant0,\alpha\in\mathbb{R}$ the series diverges; 2) for $q>1,\alpha\in\mathbb{R}$ the series converges absolutely; 3) for $0

How to approach the problem for the case $0

Answer



By the Euler-MacLaurin formula, we have



$$S(k) := \sum_{n = 1}^k \cos (\alpha\sqrt{n}) = \int_1^k \cos (\alpha \sqrt{t})\,dt + O(1).$$




Further,



\begin{align}
\int_1^k \cos (\alpha \sqrt{t})\,dt
&= 2\int_1^{\sqrt{k}} u\cos (\alpha u)\,du\\
&= \frac{2}{\alpha}\bigl[u\sin (\alpha u)\bigr]_1^{\sqrt{k}} - \frac{2}{\alpha}\int_1^{\sqrt{k}} \sin (\alpha u)\,du
\end{align}



shows $\lvert S(k)\rvert \leqslant \frac{4}{\lvert\alpha\rvert}\sqrt{k} + O(1) \leqslant C\sqrt{k}$ for some constant $C \in (0,+\infty)$.




Then a summation by part shows



\begin{align}
\Biggl\lvert\sum_{n = m}^k \frac{\cos (\alpha \sqrt{n})}{n^q}\Biggr\rvert
&= \Biggl\lvert\sum_{n = m}^k \bigl(S(n) - S(n-1)\bigr) n^{-q}\Biggr\rvert\\
&\leqslant \frac{\lvert S(k)\rvert}{k^q} + \frac{\lvert S(m-1)\rvert}{m^q} + \Biggl\lvert \sum_{n = m}^{k-1} S(n) \biggl(\frac{1}{n^q} - \frac{1}{(n+1)^q}\biggr)\Biggr\rvert\\
&\leqslant C\Biggl( k^{\frac{1}{2}-q} + m^{\frac{1}{2}-q} + \sum_{n = m}^{k-1}\frac{q}{n^{q+\frac{1}{2}}}\Biggr)\\
&\leqslant \tilde{C}\cdot m^{\frac{1}{2}-q}
\end{align}




for $q > \frac{1}{2}$.



So as D. Thomine expected, the series converges for $q > \frac{1}{2}$ and $\alpha \in \mathbb{R}\setminus \{0\}$. The argument given in the comment shows (when the details are carried out) that the series is divergent for $q \leqslant \frac{1}{2}$.
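The bound $\lvert S(k)\rvert \leqslant C\sqrt{k}$ is also visible numerically (a rough sketch with $\alpha = 1$; my own check):

import numpy as np

alpha = 1.0
n = np.arange(1, 10**6 + 1)
S = np.cumsum(np.cos(alpha * np.sqrt(n)))
print(np.abs(S / np.sqrt(n)).max())   # stays bounded by a modest constant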


integration - Absolutely continuous and differentiable almost everywhere

I've read the following claim and I wonder if someone can direct me to or provide me with a proof of it:



"A strongly absolutely continuous function which is differentiable almost everywhere is the indefinite integral of strongly integrable derivative"




It was in the context of Bochner integrable functions so I'm assuming that "strongly" means with respect to the norm.


Thanks!

nonlinear system - Non-linear functional form satisfying $g(x-y) = \frac{g(x)}{g(y)}$

Is there a general non-linear mapping function family, which obeys this general rule:


$$g(x-y) = \frac{g(x)}{g(y)}.$$


I am looking at identifying a classification of admissible kinetics relaxation functions.

real analysis - How can you prove that a function has no closed form integral?


I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:


  • addition/subtraction

  • multiplication/division

  • raising to powers and roots

  • trigonometric functions


  • exponential functions

  • logarithmic functions

, which when differentiated gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.


What sort of techniques are used to prove statements like this? What is this branch of mathematics called?



Merged with "How to prove that some functions don't have a primitive" by Ismael:


Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions.


I wonder how we can prove that kind of assertion?


Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative


$$\int f(x)\exp(g(x)) \, \mathrm dx$$



can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution to the differential equation:


$$f = h' + hg'$$


$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.


I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf


Liouville's original paper:



Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.



Michael Spivak's book on Calculus also has a section with a discussion of this.
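One can at least observe the phenomenon (though of course not prove it) with a computer algebra system: SymPy expresses these antiderivatives via the non-elementary special functions Si and erfi, and returns $x^x$ unevaluated, which is consistent with, though not a proof of, non-elementarity (a sketch):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sin(x) / x, x))    # Si(x)
print(sp.integrate(sp.exp(x**2), x))     # sqrt(pi)*erfi(x)/2
print(sp.integrate(x**x, x))             # Integral(x**x, x), left unevaluated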


functional equations - Does a function that satisfies the equality $f(a+b) = f(a)f(b)$ have to be exponential?


I understand the other way around, where if a function is exponential then it will satisfy the equality $f(a+b)=f(a)f(b)$. But is every function that satisfies that equality always exponential?


Answer



First see that $f(0)$ is either 0 or 1. If $f(0)=0$, then for all $x\in\mathbb R$, $f(x)=f(0)f(x)=0$. In this case $f(x)=0$ a constant function.


Let's assume $f(0)=1$. See that for positive integer $n$, we have $f(nx)=f(x)^n$ which means $f(n)=f(1)^n$. Also see that: $$ f(1)=f(n\frac 1n)=f(\frac 1n)^n\implies f(\frac 1n)=f(1)^{1/n}. $$ Therefore for all positive rational numbers: $$ f(\frac mn)=f(1)^{m/n}. $$ If the function is continuous, then $f(x)=f(1)^x$ for all positive $x$. For negative $x$ see that: $$ f(0)=f(x)f(-x)\implies f(x)=\frac{1}{f(-x)}. $$ So in general $f(x)=a^x$ for some $a>0$.



Without continuity, consider the relation: $xRy$ if $\frac xy\in \mathbb Q$ (quotient group $\mathbb R/\mathbb Q$). This relation forms an equivalence class and partitions $\mathbb R$ to sets with leaders $z$. In each partition the function is exponential with base $f(z)$.


Divisibility proof with co-prime numbers




Let $a,b$ be co-prime. Prove that for every integer $n > a$ it is true that $a \mid (n + kb)$ for some $k$ with $0 \leq k < a$.



I have a feeling this is very basic stuff and I feel like there is a very simple intuitive proof and it's on the tip of my tongue but I just can't express it.


Answer



Recall that for any integers $a,b$ (not both zero) the extended Euclidean algorithm allows you to find integers $x,y \in \mathbb{Z}$ such that $ax+by=d$ where $d$ is the greatest common divisor of $a,b$.



So in your case, there exist $x,y \in \mathbb{Z}$ such that $ax+by=1$ (since $a,b$ are relatively prime). Thus $ax=1-by$. Multiply through by $n$ and you have your result. :)
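Concretely, since $by \equiv 1 \pmod a$, the choice $k \equiv -ny \pmod a$ gives $n + kb \equiv n - nby \equiv 0 \pmod a$. A small Python sketch (the helper and the sample numbers are mine):

def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, n = 21, 8, 100           # gcd(21, 8) = 1, n > a
g, x, y = extended_gcd(a, b)
k = (-n * y) % a               # 0 <= k < a
print(k, (n + k * b) % a)      # prints 19 0, i.e. 21 | 100 + 19*8 = 252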


Friday, November 27, 2015

sequences and series - Sum of $\frac{1}{n^{2}}$ for $n = 1, 2, 3, \ldots$?








I just got the "New and Revised" edition of "Mathematics: The New Golden Age", by Keith Devlin. On p. 64 it says the sum is $\pi^2/6$, but that's way off. $\pi^2/6 \approx 1.64493406685$ whereas the sum in question is $\approx 1.29128599706$. I'm expecting the sum to be something interesting, but I've forgotten how to do these things.

Thursday, November 26, 2015

algebra precalculus - Does the size of the sum of squares say anything about the size of original sum?

Suppose I am told I have N numbers and that the sum of the

square of those numbers is S. Can anything be said about the sum of those
original N numbers?

number theory - Find the least non negative residues mod 7,11 and 13 of 12345678.



Find the least non negative residues mod 7,11 and 13 of 12345678.



my attempt: A number N is congruent modulo 7, 11, or 13, to the alternating sum of its digits in base 1000. (For example, 123456789 ≡ 789 − 456 + 123 ≡ 456 mod 7, 11, or 13.)


Answer



If I have understood the question well, what I can do is the direct computation:




$$12345678=949667\cdot13+\color{red}7=1122334\cdot11+\color{red}4=1763668\cdot7+\color{red}2$$ where the asked residues are in red.
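Both the direct residues and the base-$1000$ alternating-sum shortcut from the attempt are one-liners to check (a sketch):

n = 12345678
print(n % 13, n % 11, n % 7)        # 7 4 2

alt = 678 - 345 + 12                # alternating sum of base-1000 digits
print(alt % 13, alt % 11, alt % 7)  # 7 4 2 again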


limits - Compute $\lim\limits_{x\to0} \left(\sin x + \cos x\right)^{1/x}$




I hit a snag while solving exponential functions whose limits are given.



Question:



$$\lim_{x\to0} \left(\sin x + \cos x\right)^{1/x}$$



My Approach:



I am using the following relation to solve questions of this type.




$$\lim_{x\to0} \left(1 + x\right)^{1/x} = e \qquad(2)$$



But now how should I convert the question above so that I can apply the rule mentioned in $(2)$?



Conclusion:



First of all, any help will be appreciated.



Second, how does one solve limits of this kind quickly?




Thanks,



P.S.(Feel free to edit my question if you find any errors or mistakes in my question)


Answer



In this case the best idea is to take the logarithm and then use De l'Hôpital. $$\lim_{x \to 0} \frac{\ln (\sin x + \cos x)}{x} = \lim_{x \to 0} \frac{\cos x - \sin x}{\sin x + \cos x} = 1$$ Hence the answer is $e^{1}=e$.
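A numerical look at the limit (a small sketch):

import math

for x in (0.1, 0.01, 0.001):
    print(x, (math.sin(x) + math.cos(x)) ** (1 / x))
print(math.e)   # the values approach e = 2.718281...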


discrete mathematics - Sum of cubes proof

Prove that for any natural number n the following equality holds:


$$ (1+2+ \ldots + n)^2 = 1^3 + 2^3 + \ldots + n^3 $$


I think it has something to do with induction?

Wednesday, November 25, 2015

linear algebra - eigenvalues of symmetric matrix

Suppose we have $n \times n$ symmetric matrix $A$ with all diagonal entries zero and remaining entries are non negative. Let $B$ be the matrix obtained from $A$ by deleting its $k$th row and $k$th column and remaining entries of $B$ are less equal the corresponding entries of $A$. Then the largest eigenvalue of $B$ is less equal the largest eigenvalue of $A$ and the smallest eigenvalue of $A$ is less equal the smallest eigenvalue of $B$.

Tuesday, November 24, 2015

Probability not to get a coupon : Coupon Collector's Problem

We buy coupons for $m$ rounds (no matter if we have already collected them all or not and we buy one coupon each round). What is the probability that we will not get the coupon number 1 in any of the $m$ rounds?




Assume we have the Coupon Collector's Problem with $1 \dots n$ Coupons.



Assume the event $X_1 = \text{"We don't get the first coupon"}$, so I think $X_1 \sim Bernoulli(1 - \frac{1}{n})$. After $m$ rounds we have probability $(1 - \frac{1}{n})^m$ of not getting the first coupon - right? And with Bernoulli's inequality we get $(1 - \frac{1}{n})^m \geq 1 - \frac{m}{n}$.



How can I calculate the expected value of the number of coupons that have not yet been collected after $m = n \ln(n) + t$ rounds (with $m$ an integer)? Is it $E \geq m \cdot (1 - \frac{m}{n})$?
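A quick simulation supports both $(1-\frac1n)^m$ for missing coupon $1$ and, by linearity of expectation, $n(1-\frac1n)^m$ for the expected number of uncollected coupons (a sketch; the parameters are mine):

import random

n, m, trials = 20, 60, 100_000
miss_first, uncollected = 0, 0
for _ in range(trials):
    seen = {random.randrange(n) for _ in range(m)}
    miss_first += (0 not in seen)
    uncollected += n - len(seen)
print(miss_first / trials, (1 - 1/n) ** m)           # P(coupon 1 never drawn)
print(uncollected / trials, n * (1 - 1/n) ** m)      # E[# uncollected coupons]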

Monday, November 23, 2015

calculus - Understanding Lebesgue Integration

I have started studying Lebesgue integration and I have a few of questions regarding the Lebesgue integral:



  1. In the wikipedia entry of "Lebesgue integration" they define the Lebesgue integral as: $\int f d\mu = \int_{0}^{\infty}f^{*}(t)dt$ where $f^{*}(t) = \mu(\{x |f(x) > t\})$. The Lebesgue integration notes that I am studying first defines the integral of the positive simple function on a measure space, then the positive measurable function followed by the sign changing measurable function. I want to know if this wiki definition is equivalent to the integral constructed from simple functions, if so how can this be easily shown?





  2. Secondly , I want to know how the Lebesgue integral is used exactly. I understand how it is defined in terms of simple functions, in the same way I know how Riemann integration is defined by the limit of the Riemann sum. But I also want to know if there are equivalent theorems and techniques for Lebesgue integration as in Riemann integration used when actually computing a given integral. For example how does the following translate to Lebesgue integration: the evaluation theorem, using anti-derivatives to evaluate indefinite integrals, the fundamental theorem of calculus, the substitution rule and integration by parts?




  3. Lastly as an example for the integral $\int x^{2} dx$ evaluated as a Lebesgue integral, would you evaluate it as you would as a Riemann integral by taking the anti derivative? Is this always the case?



Thanks a lot for any assistance.

complex analysis - How to show that for $|a|<1$, $\int_0^\infty\frac{x^a}{(x+2)^2}\,dx=\frac{a\pi2^{a-1}}{\sin\pi a}$

Show that for $|a|<1$:
$$\int_0^\infty\frac{x^a}{(x+2)^2}\,dx=\frac{a\pi2^{a-1}}{\sin\pi a}$$

discrete mathematics - Show that the number $2^{2^n}+1$, $\forall n\in\{\mathbb{N}:n\geq 2\}$, has the last digit $7$


Problem: Show that the number $2^{2^n}+1, \ \forall n\in\{\mathbb{N}:n\geq 2\}$ has the last digit $7$ by induction and without using modular arithmetic.




I know how induction works and I've tried the basecase $n=2$ which holds true.


Setting $p(n)=2^{2^n}+1$ and computing $p(2)=17,$ which indeed has the last digit seven.


Now I need the inductive step, so I assume this holds true for a number $n=m\in\{\mathbb{N}:m>2\}.$ Our goal here is to verify that $p$ also holds for the following integer $n=m+1.$ Our induction hypothesis states that


$$p(m)=2^{2^m}+1=? \quad \quad (1)$$


Here is where I run into trouble. If the problem statement was to show that it's divisible by $7$, then in $(1)$ I could have set the RHS to $7k$ for some $k\in\mathbb{N}$ and proceed. But in this case, I'm lost.

complex analysis - Second Derivative of Univalent Function

While doing some self-study today, I was reading through some proofs that the derivative of a Univalent Function vanishes nowhere. However, I couldn't find mention of the second derivative of a univalent function.


I thus presumed that there must be trivial counterexamples for second derivatives never vanishing; however, after testing a few simple, univalent maps I've failed to find a counterexample.


My question is thus as follows:


Can second derivatives of Univalent functions vanish somewhere?

integration - Integrating $\int^{\infty}_0 e^{-x^2}\,dx$ using Feynman's parametrization trick



I stumbled upon this short article last weekend; it introduces an integral trick that exploits differentiation under the integral sign. On its last page, the author, Mr. Anonymous, left several exercises without any hints, one of them is to evaluate the Gaussian integral
$$
\int^\infty_0 e^{-x^2} \,dx= \frac{\sqrt{\pi}}{2}
$$
using this parametrization trick. I had been evaluating it through trial and error using different paramatrizations, but no luck so far.







Here are what I have tried so far:




  • A first instinct would be to do something like:$$
    I(b) = \int^\infty_0 e^{-f(b)x^2}\,dx
    $$
    for some permissible function $f(\cdot)$, differentiating it will lead to a simple solvable ode:
    $$
    \frac{I'(b)}{I(b)} = -\frac{f'(b)}{2f(b)}
    $$
    which gives:
    $$
    I(b) = \frac{C}{\sqrt{f(b)}}.
    $$
    However, finding this constant $C$ basically is equivalent to evaluating the original integral, we are stuck here without leaving this parametrization trick framework.


  • A second try involves an exercise on the same page:
    $$
    I(b) = \int^\infty_0 e^{-\frac{b^2}{x^2}-x^2}dx.
    $$
    Taking derivative and rescaling the integral using change of variable we have:
    $$
    I'(b) = -2I(b).
    $$
    This gives us another impossible to solve constant $C$ in:
    $$
    I(b) = C e^{-2b}
    $$
    without leaving this framework yet again.



  • The third try is trying modify Américo Tavares's answer in this MSE question:
    $$
    I(b) = \int^\infty_0 be^{-b^2x^2}\,dx.
    $$
    It is easy to show that:
    $$
    I'(b) = \int^\infty_0 e^{-b^2x^2}\,dx - \int^\infty_0 2b^2 x^2 e^{-b^2x^2}\,dx = 0
    $$
    by an integration by parts identity:
    $$
    \int^\infty_0 x^2 e^{- c x^2}\,dx = \frac{1}{2c}\int^\infty_0 e^{- c x^2}\,dx .
    $$
    Then $I(b) = C$, ouch, stuck again at this constant.







Notice that in the Proving $\displaystyle\int_{0}^{\infty} e^{-x^2} dx = \frac{\sqrt \pi}{2}$ question, Bryan Yock's answer is somewhat similar to the idea of parametrization; however, he has to introduce another parametric integration to produce a definite integral leading to $\arctan$.



Is there such a one shot parametrization trick solution like the author Anonymous claimed to be "creative parameterizations and a dose of differentiation under the integral"?



Answer



Just basically independently reinvented Bryan Yock's solution as a more 'pure' version of Feynman.



Let $$I(b) = \int_0^\infty \frac {e^{-x^2}}{1+(x/b)^2} \mathrm d x = \int_0^\infty \frac{e^{-b^2y^2}}{1+y^2} b\,\mathrm dy$$ so that $I(0)=0$, $I'(0)= \pi/2$ and $I(\infty)$ is the thing we want to evaluate.



Now note that rather than differentiating directly, it's convenient to multiply by some stuff first to save ourselves some trouble. Specifically, note



$$\left(\frac 1 b e^{-b^2}I\right)' = -2b \int_0^\infty e^{-b^2(1+y^2)} \mathrm d y = -2 e^{-b^2} I(\infty)$$



Then usually at this point we would solve the differential equation for all $b$, and use the known information at the origin to infer the information at infinity. Not so easy here because the indefinite integral of $e^{-x^2}$ isn't known. But we don't actually need the solution in between; we only need to relate information at the origin and infinity. Therefore, we can connect these points by simply integrating the equation definitely; applying $\int_0^\infty \mathrm d b$ we obtain




$$-I'(0)= -2 I(\infty)^2 \quad \implies \quad I(\infty) = \frac{\sqrt \pi} 2$$


Sunday, November 22, 2015

geometry - Can this shape contain a circle?

I have a shape, made up of interconnected squares. Their sizes are all equal and known. Think tetris blocks, but the shapes can be made of any number of squares, greater than or equal to one.




I need to know whether it is possible for this shape to contain ANY circle, that comes into contact, or encloses ALL the squares in the shape, and NO part of the circle is outside the shape. This includes the circle's interior as well as its circumference. A circle being tangent to a square does not count as being in contact with it.



For example:




  • A 2x2 shape can fit a circle, with maximum diameter of 2 square units, therefore it is valid.

  • A 3 square L shape can. It can fit a circle that lies tangent (internally) to the sides of the 'elbow' square and hits the opposite corner. Image: http://imgur.com/a/lE0wP


  • A 3x1 shape cannot. The maximum size of the possible circle is 1 unit, and it's not possible for the circle to be in contact with all three squares.





Things I do know so far:




  • If the difference between the width and height of the shape is more than one square, it is not valid.

  • If there is a hole in the shape, its not valid.



I need some kind of algorithm, as I will likely need to program this in Java.




I have been told that using the Möbius Transformation could potentially be useful, but the maths behind it is beyond my level.



Is this possible to do algorithmically, or by checking against a set of rules?



(Sorry for no images, the image uploader does not seem to be working for me)

calculus - Formula for $r+2r^2+3r^3+...+nr^n$




Is there a formula to get $r+2r^2+3r^3+\dots+nr^n$ provided that $|r|<1$? This seems like the geometric "sum" $r+r^2+\dots+r^n$ so I guess that we have to use some kind of trick to get it, but I cannot think of a single one. Can you please help me with this problem?


Answer



We have
$$1+r+r^2 + \cdots + r^n = \dfrac{r^{n+1}-1}{r-1}$$
Differentiating this once, we obtain
$$1+2r + 3r^2 + \cdots + nr^{n-1}= \dfrac{nr^{n+1}-(n+1)r^n + 1}{(r-1)^2}$$
Multiply the above by $r$ to obtain what you want.
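A numerical check of both the resulting closed form $\sum_{n\geq 0}(n+1)x^n = \frac{1}{(1-x)^2}$ and the original series (a sketch):

x = 1 / 3
print(sum((n + 1) * x**n for n in range(200)), 1 / (1 - x)**2)   # sum = 1/(1-x)^2
print(sum(2 * n / 3**(n + 1) for n in range(1, 200)), 0.5)       # the original series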



Saturday, November 21, 2015

probability - Set $Z=X+Y$ and calculate the density function $f_Z$ for Z. Given solution doesn't match mine. (Continuous R.V)



I'm having difficulties with task 2, since given solution doesn't equal mine:




$f_{XY}(x,y) = \begin{cases} 6·e^{-3x}·e^{-2y} & \text{if } x>0,\ y>0 \\ 0 & \text{otherwise} \end{cases}$



Task 1: Show that X and Y are independent



Task 2: Set $Z=X+Y$ and calculate the density function $f_Z$ for Z




I'll post the entire task, since others might seek help for a similar problem.



Task 1:




To show this we use that "Two continuous random variables X and Y are independent if:



$f_{XY}(x,y)=f_X(x)·f_Y(y),\space for\space all \space x,y$



We find the marginal PDF's



$f_X(x)=\int_0^\infty6·e^{-3x}·e^{-2y}\,\mathrm{d}y=3e^{-3x}$



$f_Y(y)=\int_0^\infty6·e^{-3x}·e^{-2y}\,\mathrm{d}x=2e^{-2y}$




We multiply the two:



$f_Y(y)·f_X(x)=3e^{-3x}·2e^{-2y}=6·e^{-3x}·e^{-2y}$



Which is the same as the joint PDF. We conclude they are independent.



Task 2:



We solve it by finding the CDF and differentiate this to find the PDF:




$P(Z\leq z)$



$P(x+y\leq z)$



$P(y\leq z-x)$



We solve the double integral:



$\int_0^\infty \int_0^{z-x}6·e^{-3x}·e^{-2y}\,\mathrm{d}y\,\mathrm{d}x=1-3·e^{-2z}$




We now have the CDF:



$F_{Z}(z) = \begin{cases} 0 & \text{if } z<0 \\ 1-3·e^{-2z} & \text{if } z\geq 0 \end{cases}$



To find the PDF we differentiate the CDF:




$\frac{d}{dz}(1-3·e^{-2·z})=6·e^{-2z} $



Giving us a PDF of:



$f_{Z}(z) = \begin{cases} 6·e^{-2z} & \text{if } z>0 \\ 0 & \text{otherwise} \end{cases}$



However the solution provided is:




$f_Z(z)=6·e^{-2z}·(1-e^{-z})$



What am I doing wrong?



Besides this, I'm unsure of:



The intervals in the CDF


Answer



$Y = z-X = -X + z$ is a linear equation with $y$-intercept $z$ and slope $-1$. The region where $Y \leq -X + z$ is in green below. Note where the line intersects with the $X$ and $Y$ axes.




[Figure: the region $\{(x,y): x \geq 0,\ y \geq 0,\ y \leq -x+z\}$, the triangle with vertices $(0,0)$, $(z,0)$ and $(0,z)$.]



Thus, the CDF is, according to the graph above,
$$\int_{0}^{z} \int_{0}^{-x+z}6e^{-3x}e^{-2y}\text{ d}y\text{ d}x = 2e^{-3z}-3e^{-2z}+1$$
from which we obtain
$$f_{Z}(z) = -6e^{-3z}+6e^{-2z}=6e^{-2z}-6e^{-3z}=6e^{-2z}(1-e^{-z})$$
for $z > 0$ as desired.
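A Monte Carlo check of this density (a sketch; note that NumPy's exponential sampler takes the scale $1/\lambda$, and here $X$, $Y$ have rates $3$ and $2$):

import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1/3, size=10**6)    # density 3 e^{-3x}
Y = rng.exponential(scale=1/2, size=10**6)    # density 2 e^{-2y}
Z = X + Y

z = 0.8
print((Z <= z).mean())                        # empirical CDF at z
print(1 + 2*np.exp(-3*z) - 3*np.exp(-2*z))    # the CDF 2e^{-3z} - 3e^{-2z} + 1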


calculus - Prove $\int_{0}^\infty \mathrm{d}y\int_{0}^\infty \sin(x^2+y^2)\,\mathrm{d}x=\int_{0}^\infty \mathrm{d}x\int_{0}^\infty \sin(x^2+y^2)\,\mathrm{d}y=\pi/4$



How can we prove that
\begin{aligned}
&\int_{0}^\infty \mathrm{d}y\int_{0}^\infty \sin(x^2+y^2)\mathrm{d}x\\
=&\int_{0}^\infty \mathrm{d}x\int_{0}^\infty \sin(x^2+y^2)\mathrm{d}y\\=&\cfrac{\pi}{4}
\end{aligned}




I can prove these two are integrable but how can we calculate the exact value?


Answer



I do not know if you are supposed to know this. So, if I am off-topic, please forgive me.



The whole problem revolves around the Fresnel integrals. So, using the basic definitions, $$\int_{0}^t \sin(x^2+y^2)dx=\sqrt{\frac{\pi }{2}} \left(C\left(\sqrt{\frac{2}{\pi }} t\right) \sin\left(y^2\right)+S\left(\sqrt{\frac{2}{\pi }} t\right) \cos\left(y^2\right)\right)$$ where the sine and cosine Fresnel integrals appear. $$\int_{0}^\infty \sin(x^2+y^2)dx=\frac{1}{2} \sqrt{\frac{\pi }{2}} \left(\sin \left(y^2\right)+\cos\left(y^2\right)\right)$$ Integrating a second time, $$\frac{1}{2} \sqrt{\frac{\pi }{2}}\int_0^t \left(\sin \left(y^2\right)+\cos\left(y^2\right)\right)dy=\frac{\pi}{4} \left(C\left(\sqrt{\frac{2}{\pi }} t\right)+S\left(\sqrt{\frac{2}{\pi }} t\right)\right)$$ $$\frac{1}{2} \sqrt{\frac{\pi }{2}}\int_0^\infty \left(\sin \left(y^2\right)+\cos\left(y^2\right)\right)dy=\frac{\pi}{4} $$


calculus - How to compute the integral $\int_{-\infty}^\infty e^{-x^2}\,dx$?




How to compute the integral $\int_{-\infty}^\infty e^{-x^2}\,dx$ using polar coordinates?


Answer



Hint: Let $I=\int_{-\infty}^\infty e^{-x^2}\,dx.$ Then $$I^2=\left(\int_{-\infty}^\infty e^{-x^2}\,dx\right)\left(\int_{-\infty}^\infty e^{-y^2}\,dy\right)=\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-x^2}e^{-y^2}\,dx\,dy=\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-(x^2+y^2)}\,dx\,dy.$$ Now switch to polar coordinates.
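For completeness, carrying out the hint (my own write-up of the standard computation): in polar coordinates $dx\,dy = r\,dr\,d\theta$, so $$I^2=\int_0^{2\pi}\!\int_0^\infty e^{-r^2}\,r\,dr\,d\theta=2\pi\left[-\tfrac12 e^{-r^2}\right]_0^\infty=\pi,$$ and hence $I=\sqrt{\pi}$, taking the positive root since the integrand is positive.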


sequences and series - How can I evaluate $\sum_{n=0}^\infty(n+1)x^n$?


How can I evaluate $$\sum_{n=1}^\infty\frac{2n}{3^{n+1}}$$? I know the answer thanks to Wolfram Alpha, but I'm more concerned with how I can derive that answer. It cites tests to prove that it is convergent, but my class has never learned these before. So I feel that there must be a simpler method.


In general, how can I evaluate $$\sum_{n=0}^\infty (n+1)x^n?$$


Answer




No need to use Taylor series, this can be derived in a similar way to the formula for geometric series. Let's find a general formula for the following sum: $$S_{m}=\sum_{n=1}^{m}nr^{n}.$$


Notice that \begin{align*} S_{m}-rS_{m} & = -mr^{m+1}+\sum_{n=1}^{m}r^{n}\\ & = -mr^{m+1}+\frac{r-r^{m+1}}{1-r} \\ & =\frac{mr^{m+2}-(m+1)r^{m+1}+r}{1-r}. \end{align*}
Hence $$S_m = \frac{mr^{m+2}-(m+1)r^{m+1}+r}{(1-r)^2}.$$
This equality holds for any $r$, but in your case we have $r=\frac{1}{3}$ and a factor of $\frac{2}{3}$ in front of the sum. That is \begin{align*} \sum_{n=1}^{\infty}\frac{2n}{3^{n+1}} & = \frac{2}{3}\lim_{m\rightarrow\infty}\frac{m\left(\frac{1}{3}\right)^{m+2}-(m+1)\left(\frac{1}{3}\right)^{m+1}+\left(\frac{1}{3}\right)}{\left(1-\left(\frac{1}{3}\right)\right)^{2}} \\ & =\frac{2}{3}\frac{\left(\frac{1}{3}\right)}{\left(\frac{2}{3}\right)^{2}} \\ & =\frac{1}{2}. \end{align*}


Added note:


We can define $$S_m^k(r) = \sum_{n=1}^m n^k r^n.$$ Then the sum above considered is $S_m^1(r)$, and the geometric series is $S_m^0(r)$. We can evaluate $S_m^2(r)$ by using a similar trick, and considering $S_m^2(r) - rS_m^2(r)$. This will then equal a combination of $S_m^1(r)$ and $S_m^0(r)$ which already have formulas for.


This means that given a $k$, we could work out a formula for $S_m^k(r)$, but can we find $S_m^k(r)$ in general for any $k$? It turns out we can, and the formula is similar to the formula for $\sum_{n=1}^m n^k$, and involves the Bernoulli numbers. In particular, the denominator is $(1-r)^{k+1}$.


real analysis - Alternatives to percentage for measuring relative difference?



If the value of something changes from $\;a\;$ to $\;b\;$, their relative difference can be expressed as a percentage:

$$
\newcommand{\upto}{\mathop\nearrow}
(D0) \;\;\; a \upto b \;=\; (b-a)/a \color{gray}{\times 100\%}
$$



(In this question $\;a\;$, $\;b\;$, and $\;c\;$ are positive (non-zero) real numbers throughout, and $\;a \upto b\;$ is a real number.)



However, percentages are not anti-symmetrical, i.e., the above definition does not satisfy
$$
(P0) \;\;\; a \upto b \;=\; -(b \upto a)

$$
This could be fixed by instead defining
$$
(D1) \;\;\; a \upto b \;=\; (b-a)/(b+a)
$$
(By the way, does this difference measure have a standard name?)



However, neither of the above definitions add up: they don't satisfy the more general property
$$
(P1) \;\;\; a \upto b \;+\; b \upto c \;=\; a \upto c

$$
After some attempts I discovered that
$$
(D2) \;\;\; a \upto b \;=\; \log_q (b/a)
$$
(for an arbitrary base $\;q > 1\;$) does satisfy $(P1)$. And it also satisfies the following other properties of percentage $(D0)$ and of $(D1)$ which seem desirable for any relative difference:
\begin{align}
(P2) \;\; & a \upto a \;=\; 0 \\
(P3) \;\; & \textit{sgn}(a \upto b) \;=\; \textit{sgn}(b - a) \\
(P4) \;\; & (s \times a) \upto (s \times b) \;=\; a \upto b \text{ for all $\;s > 0\;$} \\

(P5) \;\; & a \upto b \to M \text{ for } a \to 0 \text{ for some $\;M\;$ (which may be $\;\infty\;$)} \\
(P6) \;\; & a \upto b \to N \text{ for } b \to \infty \text{ for some $\;N\;$ (which may be $\;\infty\;$)} \\
\end{align}
(I mention $(P2)$ only for completeness; it follows of course from $(P0)$.)



My question: Apart from $(D2)$, what other definitions (if any) satisfy all of the above properties?


Answer



The general (real-valued) solution of axioms P1 and P2 is to take a function $F$ on the set $V$ of values that $a,b,\cdots$ could possibly assume, and define $a \upto b = F(b) - F(a)$.



If $F$ is injective, the converse of P2 is satisfied: $a \upto b = 0$ only for $a=b$.




If $F$ is increasing, P3 is satisfied.



If $F$ is continuous, P5 and P6 are satisfied. (Assume here that the set of values $V$ is an interval, or put the order topology on the set.)



This shows that there is a large family of solutions of the axioms, one for every increasing continuous function on $V$, if you do not assume homogeneity, P4. Homogeneity is a strong requirement that cuts down the space of continuous solutions to the logarithmic ones stated in the question.



Homogeneity of $a \upto b$ is the functional equation $F(sb) - F(sa) = F(b) - F(a)$. Solutions $F$ and $F+c$ are equivalent for constant $c$, and we can assume $F(1)=0$ by adjusting the constant. Taking $a=1$ this is $F(sb)=F(s)+F(b)$ and all continuous solutions (on intervals) are well-known to be multiples of the logarithm.



Assuming that you meant to work with open intervals of values, and $a \upto x$ to be an increasing continuous function of $x$, the not necessarily homogeneous solutions correspond to one parameter groups of homeomorphisms of the interval.



Friday, November 20, 2015

calculus - What are $\int\sqrt{a^2-x^2}\,\textrm{d}x$, $\int\sqrt{x^2+a^2}\,\textrm{d}x$, $\int\sqrt{x^2-a^2}\,\textrm{d}x$?

Can someone confirm these equations below? I got it from my college textbook, unfortunately there are no proofs and more importantly I cannot seem to find any other sources that say have these equations.



$\displaystyle\int\sqrt{a^2-x^2}\,\textrm{dx}=\frac{x\sqrt{a^2-x^2}}{2}+\frac{a^2}{2}\sin^{-1}\left(\frac{x}{a}\right) + C$



$\displaystyle\int\sqrt{x^2+a^2}\,\text{dx}=\frac{x\sqrt{x^2+a^2}}{2}+\frac{a^2}{2}\ln\left(x+\sqrt{x^2+a^2}\right)+C$



$\displaystyle\int\sqrt{x^2-a^2}\,\textrm{dx}=\frac{x\sqrt{x^2-a^2}}{2}-\frac{a^2}{2}\ln\left(x+\sqrt{x^2-a^2}\right)+C$
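They can be confirmed by differentiating the right-hand sides, e.g. with SymPy (a sketch checking the first formula; the other two are analogous):

import sympy as sp

x, a = sp.symbols('x a', positive=True)
F = x * sp.sqrt(a**2 - x**2) / 2 + a**2 / 2 * sp.asin(x / a)
print(sp.simplify(sp.diff(F, x) - sp.sqrt(a**2 - x**2)))   # 0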

number theory - Bezout Coefficients produced by Extended Euclidean Algorithm for $a$ and $b$

I was trying the Extended Euclidean Algorithm on various pairs of numbers to find a logic on the Bezout Coefficients produced. But, I am confused about the nature of the coefficients.




I found the GCD of $21$ and $14$ and then found the Bezout Coefficients; it was the diophantine equation $$7 = 21 \cdot 1 + 14 \cdot (-1)$$ What I don't understand is why the coefficients produced were those. One can easily represent them as $$7 = 14 \cdot 2 + 21 \cdot (-1)$$ Similarly, $$3 = 21 \cdot 2 + 39 \cdot (-1)$$ can also be written as $$3 = 39 \cdot 6 + 21 \cdot (-11)$$ I want to know why the Bezout Coefficients produced by the Extended Euclidean Algorithm are those. It seems that the Bezout Coefficients produced for $\gcd(a, b) = ax + by$ are those where $|x| + |y|$ is the lowest. Is there a proof for that statement and are there some papers about it?



Furthermore, consider that $a$ and $b$ are both positive and the equation $$\gcd(a, b) = ax + by$$ We know that as $x$ increases, $y$ decreases. I want to know of the importance of the two solutions $(x_1, y_1)$ and $(x_2, y_2)$ where $x_1 < y_1$ but $x_2 > y_2$ and they are the solutions immediately next to each other.
It is the point of inversion of the larger of the solutions.



Thanks.

Thursday, November 19, 2015

algorithms - Finding irreducible polynomials over GF(2) with the fewest terms




I'm investigating an idea in cryptography that requires irreducible polynomials with coefficients of either 0 or 1 (i.e. over GF(2)). Essentially I am mapping bytes to polynomials. For this reason, the degree of the polynomial will always be an integer multiple of 8.



I would like to build a table of such irreducible polynomials for various degrees (e.g. degrees from 8, 16, 24, ..., 1024). Because there are multiple irreducible polynomials for a given degree, I'd like the one with the fewest number of terms since I will hard code the non-zero terms. For example, for degree 16, both of these polynomials are irreducible:



$x^{16} + x^{14} + x^{12} + x^7 + x^6 + x^4 + x^2 + x + 1$



and



$x^{16} + x^5 + x^3 + x + 1$



Obviously, the latter one is preferred because it requires less space in code and is more likely to be right (e.g. that I wouldn't have made a copy/paste error).



Furthermore, I've noticed that to at least degree 1024 where the degree is a multiple of 8, there are irreducible polynomials of the form:



$x^n + x^i + x^j + x^k + 1$ where $n = 8*m$ and $0 < i,j,k < 25$



Is there an good algorithmic way of finding these polynomials (or ones that have even fewer terms)? Again, the purpose is to keep the non-zero terms in a look-up table in code.




Thanks in advance for any help!



UPDATE:



This Mathematica code generates all pentanomials for degrees that are multiples of 8 up to degree 1024:



IrreducibleInGF2[x_] := IrreduciblePolynomialQ[x, Modulus -> 2]

ParallelTable[

Select[
Sort[
Flatten[
Table[
x^n + x^a + x^b + x^c + 1,
{a, 1, Min[25, n - 1]}, {b, 1, a - 1}, {c, 1, b - 1}
]
]
],
IrreducibleInGF2, 1],

{n, 8, 1024, 8}]


(I sorted the list of polynomials to make sure I always got the one with the overall smallest degrees first). However, it takes quite a bit of time to run. For example, it took over 26 minutes for the case of $x^{984} + x^{24} + x^9 + x^3 + 1$ .



UPDATE #2



The HP paper "Table of Low-Weight Binary Irreducible Polynomials" has been incredibly helpful. It lists up to $x^{10000}$ and reiterates a proof by Swan that there are no trinomials when the degree is a multiple of 8 (which matches my findings). I've spot checked that their results match mine up to $x^{1024}$ so I'll just need to double check their results up to 10000 which should be much easier than finding them myself.


Answer



According to a paper "Optimal Irreducible Polynomials for $GF(2^m)$ Arithmetic" by M. Scott,

"it is in practise always possible to chooose as an irreducible polynomial either a trinomial... or a pentanomial." [talk slides] [PDF link]



In random number generators irreducible trinomials of various degrees with three nonzero binary coefficients are associated with the names Tausworthe-Lewis-Payne.



Added: It has been known since Gauss that there are lots of irreducible polynomials over a finite field, basically the analog of the Prime Number Theorem for such polynomials. Among the $2^m$ (monic) polynomials over Z/2Z of degree $m$, approximately $1/m$ of them are irreducible.



We can eliminate the possibility of first degree factors by inspection, for divisibility by $x$ or $x+1$ would imply respectively a zero constant term or an even number of nonzero terms in the polynomial. So the first case to test for irreducibility is the trinomials of degree $m$. With leading and constant coefficients accounting for two of the three nonzero terms, there are but $m-1$ possibilities to test, and by symmetry of $x$ → $1/x$ substitution, we can restrict the middle terms to degree ≤ $m/2$.



If none of those pan out, we have the richer supply of pentanomials to test. Indeed you seem to have hit upon a seam of cases where trinomials will never work out, namely degree $m$ a multiple of 8 [PS] (Swan, 1962).




The work then comes down to testing all the $\binom{m-1}{3}$ binary pentanomials $p(x)$ until we find one that's irreducible. Your application might make other conditions, perhaps similar to those considered in Scott's paper above, attractive. Given the modest degrees you are working with, trial division (taking $p(x) \; mod \; q(x)$ for all $q(x)$ of degree ≤ $m/2$) should be fast enough. [Remember, we shouldn't have to test more than O(m) possibilities before we find success.]



There is a fancier way [PDF] to test polynomials for irreducibility over GF(2). A necessary condition for binary polynomial $p(x)$ to be irreducible over $GF(2)$ is that:
$$x^{2^m} \equiv x \pmod{p(x)}$$



In fact Gauss showed for prime q that $x^{q^m} - x$ is precisely the product
of all monic irreducible polynomials over $GF(q)$ whose degrees divide
$m$. [From this he deduced the count of monic irreducible polynomials of
degree exactly $m$ is asymptotically $q^m/m$ as $m \rightarrow \infty$.]



For $q = 2$ it follows that if $p(x)$ is irreducible of degree $m$, it divides $x^{2^m} - x$ over $GF(2)$, i.e. the congruence above.



Rubin (1980) proposed a necessary and sufficient test for irreducibility,
combining the above with some additional steps to rule out the possibility
that $p(x)$ might be the product of some irreducible factors whose degrees
properly divide $m$. [While the degrees of the irreducible factors would
naturally sum to $m$, having all the irreducible factors' degrees divide
$m$ would be somewhat special, unless of course there is only one factor.]



The additional "sufficiency" steps are to check, for each prime factor $d$ of $m$, that:
$$\gcd(x^{2^{m/d}} - x, p(x)) = 1$$



That is, if $p(x)$ were to have an irreducible factor of degree $k$ properly
dividing $m$, it would crop up when taking the gcd of $x^{2^{m/d}} - x$ and
$p(x)$ if $k$ divides $m/d$.
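A compact way to prototype this test is to represent a GF(2) polynomial as a Python integer bit-mask (bit $i$ holding the coefficient of $x^i$). The sketch below is my own illustration of the criterion just described, not production code:

def gf2_mod(a, b):
    # remainder of GF(2) polynomial a modulo b (both as bit-masks)
    db = b.bit_length()
    while a.bit_length() >= db:
        a ^= b << (a.bit_length() - db)
    return a

def gf2_mulmod(a, b, p):
    # product of GF(2) polynomials a, b reduced modulo p
    deg = p.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= p
    return r

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def prime_factors(n):
    out, d = set(), 2
    while d * d <= n:
        if n % d == 0:
            out.add(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def is_irreducible(p):
    m = p.bit_length() - 1
    x = 2                               # the polynomial x
    r = x
    for _ in range(m):                  # r = x^(2^m) mod p by repeated squaring
        r = gf2_mulmod(r, r, p)
    if r != x:                          # necessary condition fails
        return False
    for d in prime_factors(m):          # rule out factors of degree dividing m/d
        r = x
        for _ in range(m // d):
            r = gf2_mulmod(r, r, p)
        if gf2_gcd(r ^ x, p) != 1:      # r ^ x is x^(2^(m/d)) - x over GF(2)
            return False
    return True

p = (1 << 16) | (1 << 5) | (1 << 3) | (1 << 1) | 1   # x^16 + x^5 + x^3 + x + 1
print(is_irreducible(p))                             # True, as the question notes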



Since then a lot of ingenuity has been applied to efficiently doing these steps. Of course the necessary condition lends itself to repeated squaring, computing $x^{2^{k+1}} = (x^{2^k})^2$ mod $p(x)$ for $k$ up to $m-1$. We can take advantage here of the fact that the multiplication we're doing is a squaring, and of the sparsity of our $p(x)$ as a pentanomial when doing reduction mod $p(x)$.



As the report by Brent of work with Zimmerman (linked above) points out, this repeated squaring gives with each (fairly inexpensive) step linear progress toward the "exponent of the exponent" $m$. There is also a way to progress farther with greater computational effort by modular composition.




That is, suppose we've already arrived at:
$$f(x) = x^{2^k} \mod p(x)$$
and
$$g(x) = x^{2^j} \mod p(x)$$
Then:
$$f(g(x)) = x^{2^{k+j}} \mod p(x)$$



Thus composition of two polynomials $f(x)$ and $g(x)$ mod $p(x)$ can replace a number of repeated squaring steps. But composition mod $p(x)$ is more expensive than squaring, or even than multiplication generally, mod $p(x)$. So as Brent points out, the practical advantage of modular composition lies at the final stage(s) of evaluating the necessary condition. E.g. at the end one modular composition might replace $m/2$ repeated squarings.



As far as the "sufficiency" conditions go, Gao and Panario outlined an improvement over a naive implementation of Rubin's tests in this 1997 paper [PDF], basically sequencing the gcd computations in an expedient order.



Wednesday, November 18, 2015

elementary set theory - Specify a bijection from [0,1] to (0,1].

A) Specify a bijection from [0,1] to (0,1]. This shows that |[0,1]| = |(0,1]|



B) The Cantor-Bernstein-Schroeder (CBS) theorem says that if there's an injection from A to B and an injection from B to A, then there's a bijection from A to B (i.e., |A| = |B|). Use this to again show that $|[0,1]| = |(0,1]|$.

linear algebra - how to calculate the remainder when the number is divided by 1000


Let $n=(2^{2014}-1)/3$. What is the remainder when $n$ is divided by $1000$? I wrote $2$ as $3-1$ and proceeded, but that only helped me prove $n$ is an integer; I don't know how to find the remainder on dividing it by $1000$. I'm guessing it involves some number theory theorem; please guide me. (I was thinking of using Euler's totient theorem but could not do so.)


Answer



Without being clever, let's brute force our way through.


We want to understand $2^{2014} - 1 \pmod{1000}$. Notice that $\gcd(3,1000) = 1$, so $3$ has a modular inverse (a quick check shows that it's $667$).



Now, how do we find $2^{2014} - 1 \pmod {1000}$? As $\gcd(2,1000) = 2$, we should worry about the $2$ factor. So we think of $1000$ as $2^3 \cdot 5^3$.


Clearly $2^{2014} \equiv 0 \pmod {2^3}$. And as $\varphi(5^3) = 100$, we know that $2^{2014} \equiv 2^{14} \equiv 9 \pmod {125}$. Putting these together, perhaps with the Chinese Remainder Theorem or by quick brute force (there are only 8 things to check, afterall), we see that $2^{2014} \equiv 384 \pmod{1000}$.


Thus $2^{2014} - 1 \equiv 383 \pmod{1000} $. And thus $\frac{2^{2014}-1}{3} \equiv 667\cdot(2^{2014}-1) \equiv 667 \cdot 383 \equiv 461 \pmod{1000}$.


So the remainder is $461$.
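The whole computation is a few lines to verify (a sketch; pow with a negative exponent needs Python 3.8+):

print(pow(2, 2014, 1000))               # 384
print((pow(2, 2014, 1000) - 1) % 1000)  # 383
print(pow(3, -1, 1000))                 # 667
print(667 * 383 % 1000)                 # 461
print(((2**2014 - 1) // 3) % 1000)      # 461, direct big-integer check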


linear algebra - Vector spaces - Multiplying by zero scalar yields zero vector

Please rate and comment. I want to improve; constructive criticism is highly appreciated.
Please take style into account as well.



The following proof is solely based on vector space related axioms.
Axiom names are italicised.

They are defined in Wikipedia (see vector space article).



Vector spaces - Multiplying by zero scalar yields zero vector



\begin{array}{lrll}
\text{Let} & \dots & \text{be} & \dots \\
\hline
& F && \text{a field.} \\
& V && \text{a vector space over $F$.} \\
& 0 && \text{an identity element of addition of $F$.} \\
& \mathbf{0} && \text{an identity element of addition of $V$.} \\
& \mathbf{v} && \text{an arbitrary vector in $V$.} \\
\end{array}



$$\text{Then, }0\mathbf{v} = \mathbf{0}.$$



Proof. We will denote by $1$ an identity element of scalar multiplication;
we will denote by $(-\mathbf{v})$ an additive inverse of $\mathbf{v}$.
\begin{align*}
0\mathbf{v}
&= 0\mathbf{v} + \mathbf{0} && \text{by }\textit{Identity element of vector addition} \\
&= 0\mathbf{v} + (\mathbf{v} + (-\mathbf{v})) && \text{by }\textit{Inverse elements of vector addition} \\
&= (0\mathbf{v} + \mathbf{v}) + (-\mathbf{v}) && \text{by }\textit{Associativity of vector addition} \\
&= (0\mathbf{v} + 1\mathbf{v}) + (-\mathbf{v}) && \text{by }\textit{Identity element of scalar multiplication} \\
&= ((0 + 1)\mathbf{v}) + (-\mathbf{v}) && \text{by }\textit{Distributivity of scalar multiplication (field addition)} \\
&= ((1 + 0)\mathbf{v}) + (-\mathbf{v}) && \text{by }\textit{Commutativity of field addition} \\
&= (1\mathbf{v}) + (-\mathbf{v}) && \text{by }\textit{Identity element of field addition} \\
&= \mathbf{v} + (-\mathbf{v}) && \text{by }\textit{Identity element of scalar multiplication} \\
&= \mathbf{0} && \text{by }\textit{Inverse elements of vector addition} \\
\end{align*}

QED

summation - Why is $(1+2+3+4+...+n)^2$ equal to $1^3+2^3+3^3...+n^3$?

I noticed that the sum of the first $n$ cubes is equal to the square of sum of the first $n$ natural numbers:
$$
\sum\limits_{i=1}^n i^3=\frac{n^2(n+1)^2}{4}=\left(\frac{n(n+1)}{2}\right)^2=\left(\sum\limits_{i=1}^n i\right)^2
$$



Is there a clever way to explain (not prove) this identity?

Worthwhile freshman-level definite-integrals-"proper"

If one mentions the topic of evaluating definite integrals without the fundamental theorem of calculus, I think of things like $$ \int_0^\infty \frac{\sin x} x \, dx \quad \text{ or } \quad \int_{-\infty}^\infty e^{-x^2}\,dx $$ i.e. "definite integral[s] proper" defined in this paper as "integral[s] whose value can be expressed in finite terms, although the indefinite integral of the subject of integration cannot be so determined."


However, at the freshman level, sometimes there is a point to evaluating integrals without the fundamental theorem. For example, one can show by one of the simplest substitutions that $$ \int_0^{\pi/2} \sin^2\theta\,d\theta = \int_0^{\pi/2} \cos^2\theta\,d\theta, \tag 1 $$ and by a trivial trigonometric identity that their sum is $$ \int_0^{\pi/2} 1\,d\theta, $$ and some who post answers here go on to say that that is $\left[\vphantom{\dfrac 1 1}\ \theta\ \right]_0^{\pi/2}$, but that is silly: one is just finding the area of a rectangle. Thus one evaluates both of $(1)$ without antidifferentiating anything.


Q: What other examples exist of freshman-level integrals doable without antidifferentiating, and where some worthwhile pedagogical purpose can be served by proceeding without antidifferentiating?

Evaluate using complex integration: $\int_{-\infty}^\infty \frac{dx}{(x^2+1)(x^2+9)}$





Evaluate $$\int_{-\infty}^\infty \frac{dx}{(x^2+1)(x^2+9)}$$




Firstly, I found the residues of this function:



$$Res(i)=-i/16$$



$$Res(-i)=i/16$$




$$Res(3i)=i/48$$



$$Res(-3i)=-i/48$$



I then closed the contour using the upper half semicircle $C$, parametrized by $Re^{it}$ where $0\leq t\leq \pi$ and $R$ is the radius, which I'll take to infinity.



Using partial fractions:



$$\frac{1}{(z^2+1)(z^2+9)}=\frac{1}{8(z^2+1)}-\frac{1}{8(z^2+9)}$$




The ML lemma (I think) gives me that the integrals of the two partial fractions over $C$ are $\leq \pi R / (R^2-1)$ and $\leq \pi R / (R^2-9)$ respectively.



But then the residue theorem:



$2\pi i (i/16+i/48)$ is negative. What is my mistake?


Answer



Since complex integration and I do not get along, I would just do this with real integration. I know that this isn't what the OP asked for, but it can be useful to solve a problem in more than one way.



$$\int_0^{\infty} \frac{dx}{x^2+a^2} = \frac1{a}\arctan\left(\frac{x}{a}\right)\Big|_0^{\infty} = \frac{\pi}{2a},$$
so
$$\int_{-\infty}^{\infty} \frac{dx}{x^2+a^2} = \frac{\pi}{a}.$$



Using your partial fraction decomposition
$$\frac{1}{(z^2+1)(z^2+9)} = \frac1{8}\left(\frac{1}{z^2+1}-\frac{1}{z^2+9}\right),$$
I get
$$\frac{\pi}{8}\left(1-\frac1{3}\right) = \frac{\pi}{12}.$$



Note that $\int \frac{dx}{x^2-a^2} = -\frac1{a}\tanh^{-1}\left(\frac{x}{a}\right) + C$.
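
A remark on the original question's sign error (added): the residues in the upper half-plane are $\operatorname{Res}(i)=-i/16$ and $\operatorname{Res}(3i)=i/48$, so the residue theorem gives $2\pi i\left(-\frac{i}{16}+\frac{i}{48}\right)=2\pi\left(\frac{1}{16}-\frac{1}{48}\right)=\frac{\pi}{12}$; the question used $+i/16$ in the final sum. A numeric cross-check, assuming SciPy is available:

```python
import math
from scipy.integrate import quad

# Numeric cross-check of pi/12 over the whole real line.
val, err = quad(lambda x: 1.0 / ((x**2 + 1) * (x**2 + 9)), -math.inf, math.inf)
print(val, math.pi / 12)  # both ~0.2617993877991494
```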


Tuesday, November 17, 2015

Proof of the power series $1 + x + x^2 + \ldots + x^n \to \frac{1}{1-x}$

Can anyone show me the proof of why if $|x|<1$ then:



$$
\lim_{n \to \infty} \left(1 + x + x^2 + \ldots + x^n\right) = \frac{1}{1-x}
$$
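
The standard telescoping derivation (added here as a sketch): let $s_n = 1 + x + x^2 + \ldots + x^n$. Then
$$(1-x)s_n = 1 - x^{n+1}, \qquad\text{so}\qquad s_n = \frac{1-x^{n+1}}{1-x},$$
and since $|x|<1$ forces $x^{n+1}\to 0$, we get $s_n \to \frac{1}{1-x}$.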

real analysis - 1-periodic and continuous is uniformly continuous

Let $f$ be a $1$-periodic function that is continuous on $\mathbb{R}$. Then $f$ is uniformly continuous on $\mathbb{R}$.


Indeed, since $f$ is continuous on $\mathbb{R}$, it is continuous in $[0,1]$ which is compact; therefore $f$ is uniformly continuous there, that is, for $\varepsilon>0$ there exists $\delta(\varepsilon)>0$ such that for all $x,y \in [0,1]$ $|x-y|<\delta\implies|f(x)-f(y)|<\varepsilon$. Now, let $x,y$ be two points of $\mathbb{R}$ such that $|x-y|<\delta(\varepsilon/2)$. We distinguish two cases:


Case 1: there exists an integer $n$ between $x,y$. Then without loss of generality we may assume $x \le n \le y$, so $|x-n|<\delta(\varepsilon/2)$ and $|n-y|<\delta(\varepsilon/2)$. By periodicity, $|f(x)-f(n)|=|f(x-n+1)-f(1)|<\varepsilon/2$ and $|f(n)-f(y)|=|f(0)-f(y-n)|<\varepsilon/2$, since all four points involved lie in $[0,1]$; hence $|f(x)-f(y)|<\varepsilon$.

Case 2: there is no integer between $x,y$. Then without loss of generality (periodicity) we may assume that $x,y\in (0,1)$ and we have nothing to prove.


Any objections?

linear algebra - Characteristic polynomial of a matrix in block form



Let $ A_1, ...,A_n$ be square matrices over a field $F$, and let $A$ have the block form
$$ A= \begin{bmatrix} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_n \end{bmatrix} $$
where all other off-diagonal entries are zero. Show that the characteristic polynomial $\Delta_A$ of $A$ is $\Delta_A = \Delta_{A_1} \Delta_{A_2} \cdots \Delta_{A_n}$.



Firstly, I observed that $ \Delta_A(A) = 0 \quad \Leftrightarrow \quad \Delta_A(A_1)=\cdots=\Delta_A(A_n) =0 $, but I could not proceed from here.


Answer



The characteristic polynomial is a determinant.
$$\lambda I - A =
\begin{bmatrix} \lambda I_1 - A_1 & & & \\ & \lambda I_2 - A_2 & & \\ & & \ddots & \\ & & & \lambda I_n - A_n \end{bmatrix}$$




where $I_j$ is the identity matrix of the same size as $A_j$. The determinant of a block-diagonal matrix is the product of the blocks' determinants:
$$ \Delta_A(\lambda) = \det (\lambda I - A) = \det(\lambda I_1 - A_1) \det(\lambda I_2 - A_2) \cdots \det(\lambda I_n - A_n). $$
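
A quick numeric confirmation of the block-determinant fact (added sketch, assuming NumPy; `np.poly` returns the coefficients of a matrix's characteristic polynomial):

```python
import numpy as np

# Characteristic polynomial of a block-diagonal matrix equals the
# product of the blocks' characteristic polynomials.
rng = np.random.default_rng(0)
A1 = rng.integers(-3, 4, (2, 2)).astype(float)
A2 = rng.integers(-3, 4, (3, 3)).astype(float)
A = np.block([[A1, np.zeros((2, 3))], [np.zeros((3, 2)), A2]])

p_A = np.poly(A)                          # det(lambda I - A)
p_prod = np.polymul(np.poly(A1), np.poly(A2))
print(np.allclose(p_A, p_prod))           # True
```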


factoring - Applying the idea of Dixon's method to discrete logs

I want to see if a technique like Dixon's method for integer factorization would work for discrete logs. If the base is 2 (which could be generalized), to find the log of $r$, set up equations like the following (in which the first 7 primes are the factor base):



$r^{A_{i1}}2^{A_{i2}}3^{A_{i3}}5^{A_{i4}}7^{A_{i5}}11^{A_{i6}}13^{A_{i7}}17^{A_{i8}}\equiv1 \pmod p$



Then use row operations on the rows of the matrix $A$ to find a linear combination of these rows with $0$s in every entry other than the first two. This gives an equation of the form $r^x2^y\equiv1$ mod p, from which it's easy to obtain the log of $r$ base $2$.
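
A minimal sketch of the relation-collection step just described (added; function and variable names are illustrative): pick random exponents $e$, reduce $r^e \bmod p$, and keep $e$ whenever the residue is smooth over the factor base, recording the row $(e, -e_2, \dots, -e_8)$ so that the row's product is $\equiv 1 \pmod p$.

```python
import random

def trial_factor(n, base):
    """Exponent vector of n over the factor base, or None if n is not smooth."""
    exps = []
    for q in base:
        e = 0
        while n % q == 0:
            n //= q
            e += 1
        exps.append(e)
    return exps if n == 1 else None

p, r = 4327, 1277
base = [2, 3, 5, 7, 11, 13, 17]
rows = []
while len(rows) < 8:
    e = random.randrange(1, p - 1)
    vec = trial_factor(pow(r, e, p), base)
    if vec is not None:
        # r^e * 2^{-e2} * ... * 17^{-e8} == 1 (mod p): a row of the matrix A
        rows.append([e] + [-x for x in vec])
for row in rows:
    print(row)
```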




The question I have is, whether the row operations can be performed so $x$ and $y$ are non-zero mod $p-1$. I don't have much idea how to guarantee this, and there is a strong tendency to get $0$s. For example, I picked $p=4327$ and $r=1277$, and generated a matrix $A$ in which each row satisfies



$1277^{A_{i1}}2^{A_{i2}}3^{A_{i3}}5^{A_{i4}}7^{A_{i5}}11^{A_{i6}}13^{A_{i7}}17^{A_{i8}}\equiv1 \pmod{4327}$



$$A = \begin{bmatrix}
1 & -1 & -1 & 1 & -3 & 0 & 0 & 0 \\
1 & -3 & -2 & 1 & 0 & -1 & 1 & 0 \\
1 & 1 & 1 & -1 & 0 & 0 & 1 & -1 \\
1 & -10 & -1 & 0 & 0 & 2 & 0 & 0 \\

1 & -1 & 5 & 0 & -1 & 0 & -1 & -1 \\
1 & 2 & 0 & 3 & 0 & -1 & -1 & -1 \\
1 & 3 & 2 & 0 & -1 & -1 & -1 & 1 \\
1 & 1 & -2 & -2 & 2 & 0 & -1 & 1 \\
\end{bmatrix}$$



Lo and behold, the determinant of $A$ is $0$ mod $p-1$ (-4326 to be exact). Had this not been the case, an absurd result would have ensued: We could have shown that $1277^x2^y\equiv1$ mod $p$ for any ratio $x/y$ of our choosing, by inverting $A$. So the determinant is bound to be $0$ mod $p-1$.

calculus - Find $\lim\limits_{x\to 0}\frac{\log\left(\cos x\right)}{x^2}$ without L'Hopital



$$\lim_{x\to 0}\frac{\log\left(\cos x\right)}{x^2}$$



I've been trying to:





  1. show $\displaystyle -\frac{\pi}{2}


  2. find a function so that $\displaystyle f(x)<\frac{\log\left(\cos x\right)}{x^2}$ and $\displaystyle \lim\limits_{x\to0}f(x) = -\frac{1}{2}$




And then apply the squeeze principle, but haven't managed any of these.


Answer



HINT:



$$\dfrac{\log(\cos x)}{x^2}=\dfrac{\log(\cos^2x)}{2x^2}=-\dfrac12\cdot\dfrac{\log(1-\sin^2x)}{-\sin^2x}\cdot\left(\dfrac{\sin x}x\right)^2$$
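
Carrying the hint to the end (added for completeness): with $u=-\sin^2 x\to 0$, the middle factor tends to $1$ because $\lim_{u\to 0}\frac{\log(1+u)}{u}=1$, and $\frac{\sin x}{x}\to 1$, so
$$\lim_{x\to 0}\dfrac{\log(\cos x)}{x^2}=-\dfrac12\cdot 1\cdot 1^2=-\dfrac12.$$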


Monday, November 16, 2015

real analysis - Partial sums of exponential series



What is known about $f(k)=\sum_{n=0}^{k-1} \frac{k^n}{n!}$ for large $k$?



Obviously it is a partial sum of the series for $e^k$ -- but this partial sum doesn't reach close to $e^k$ itself because we're cutting off the series right at the largest terms. In the full series, the $(k+i-1)$th term is always at least as large as the $(k-i)$th term for $1\le i\le k$, so $f(k)< e^k/2$. Can we estimate more precisely how much smaller than $e^k$ the function is?




It would look very nice and pleasing if, say, $f(k)\sim e^{k-1}$ for large $k$, but I have no real evidence for that hypothesis.



(Inspired by this question and my answer thereto).


Answer



This appears as problem #96 in Donald J Newman's excellent book: A Problem Seminar.



The problem statement there is:





Show that



$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^n}{n!} \sim
\frac{e^n}{2}$$




where $a_n \sim b_n$ means $\lim \frac{a_n}{b_n} = 1$.



Thus we can estimate your sum (I have swapped $n$ and $k$) as




$$ 1 + \frac{n}{1!} + \frac{n^2}{2!} + \dots + \frac{n^{n-1}}{(n-1)!} \sim
\frac{e^n}{2}$$



as by Stirling's formula, $\dfrac{n^n}{n!e^n} \to 0$.



The solution in the book proceeds as follows:



The remainder term for the Taylor series of a function $f$ is



$$ R_n(x) = \int_{0}^{x} \frac{(x-t)^n}{n!} f^{(n+1)}(t) \ \text{d}t$$




which for our purposes comes out as



$$\int_{0}^{n} \frac{(n-t)^n}{n!} e^t \ \text{d}t$$



Making the substitution $n-t = x$ gives us the integral



$$ \int_{0}^{n} \frac{x^n}{n!} e^{-x} \ \text{d}x$$



In an earlier problem (#94), he shows that




$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$



which using the substitution $n+x = t$ gives



$$ \int_{n}^{\infty} t^n e^{-t} \ \text{d}t \sim \frac{n^n}{e^n} \sqrt{\frac{\pi n}{2}}$$



Using $\int_{0}^{\infty} x^n e^{-x}\ \text{d}x = n!$ and Stirling's formula now gives the result.



To prove that




$$\int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \sim \sqrt{\frac{\pi n}{2}}$$



He first makes the substitution $x = \sqrt{n} t$ to obtain



$$ \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x \ = \sqrt{n} \int_{0}^{\infty} \left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \ \text{d}t$$



Now $$\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t} \le (1+t)e^{-t}$$ and thus by the dominated convergence theorem,



$$ \lim_{n\to \infty} \frac{1}{\sqrt{n}} \int_{0}^{\infty} \left(1 + \frac{x}{n}\right)^n e^{-x} \ \text{d}x $$




$$= \int_{0}^{\infty} \left(\lim_{n \to \infty}\left(1 + \frac{t}{\sqrt{n}}\right)^n e^{-\sqrt{n} t}\right) \ \text{d}t$$



$$ = \int_{0}^{\infty} e^{-t^2/2} \ \text{d}t = \sqrt{\frac{\pi}{2}} $$
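
A quick numeric illustration of Newman's asymptotic (added sketch; the inner loop accumulates $n^k/k!$ iteratively to avoid huge factorials):

```python
import math

# Check that (1 + n/1! + ... + n^n/n!) / e^n -> 1/2.
for n in (10, 100, 500):        # e^n overflows a float for n around 710
    s, term = 0.0, 1.0          # term holds n^k / k!
    for k in range(n + 1):
        s += term
        term *= n / (k + 1)
    print(n, s / math.exp(n))   # tends to 0.5
```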


arithmetic - "Proof" that $sqrt{x^2} = x$ by using index laws. Why is this wrong?


This is a very short argument that I have often seen about the definition of $|x|=\sqrt{x^2}$ for $x\in \mathbb{R}$. I'm not quite sure what the flaw in the argument below is; can someone please point out at which line it goes wrong, and which "restriction" of the original index law was broken?


Claim: $\sqrt{x^2} = x$ where $x\in\mathbb{R}$.


Proof: Note that $$\begin{align*} \sqrt{x^2} &= (x^2)^{\frac{1}{2}} \\ &= x^{\frac{2}{2}} \\ &= x^{1}=x.\end{align*}$$


What is wrong with this?


Answer




First, a counterexample: $x = -1$, since $\sqrt{(-1)^2} = 1 \neq -1$. As for the error in your reasoning: the identity $(a^b)^c = a^{bc}$ is only guaranteed to hold when $a \geq 0$.


polynomials - Determining how many roots are real, and finding all roots of a quintic: $-2y^5 +4y^4-2y^3-y=0$

Using a computer we can see that the only real root of $f(y)=-2y^5 +4y^4-2y^3-y$ is $0$. Furthermore, we know from algebra that since this polynomial lives in $\Bbb R[y]$, its non-real roots come in complex conjugate pairs; that is, there are either $1$, $3$, or $5$ real roots. Furthermore, if we could guess factors, we could use polynomial long division to break up the polynomial. Noting that $y=0$ is a root, we want the $4$ roots of:



$$-2y^4+4y^3-2y^2-1=0$$





  • How would we deduce that the remaining roots are not real?

  • How would we find these by hand?
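
One added observation (a sketch, not from the original post): $-2y^4+4y^3-2y^2 = -2y^2(y-1)^2 \le 0$, so the quartic equals $-2y^2(y-1)^2-1 \le -1 < 0$ for all real $y$; hence none of its four roots is real. A numeric confirmation, assuming NumPy:

```python
import numpy as np

# Roots of -2y^4 + 4y^3 - 2y^2 - 1; all four should be non-real.
roots = np.roots([-2, 4, -2, 0, -1])
print(roots)
print(all(abs(r.imag) > 1e-9 for r in roots))  # True
```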

sequences and series - Evaluating $\sum\limits_{i=1}^{a-1} i = \frac{a(a-1)}{2}$

I'm pretty sure $$\sum\limits_{i=1}^{a-1} i = \frac{a(a-1)}{2}$$ using the relationship $\sum\limits_{i=1}^{n} i = \frac{n(n+1)}{2}$.



It looks similar to $$\sum\limits_{i=a}^{n} i = \frac{(n+a)(n-a+1)}{2}$$ but I'm not sure how or why does$$\sum\limits_{i=a}^{n} i = \frac{(n+a)(n-a+1)}{2}?$$
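
One way to see the general formula (added sketch): subtract the first $a-1$ terms from the full sum,
$$\sum\limits_{i=a}^{n} i = \sum\limits_{i=1}^{n} i - \sum\limits_{i=1}^{a-1} i = \frac{n(n+1)}{2}-\frac{a(a-1)}{2} = \frac{n^2-a^2+n+a}{2} = \frac{(n+a)(n-a+1)}{2},$$
using $n^2-a^2+n+a=(n+a)(n-a)+(n+a)=(n+a)(n-a+1)$.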

Sunday, November 15, 2015

probability theory - Prove that $F:\mathbb{R} \to \mathbb{R}$ nondecreasing, right continuous, with limits $0$ at $-\infty$ and $1$ at $\infty$, is a CDF for a random variable $X$



Prove that $F: \mathbb{R} \to \mathbb{R}$ nondecreasing, right continuous, $\lim_{t \to -\infty} F(t) = 0$, $\lim_{t \to \infty} F(t) = 1$ is a CDF for some random variable $X$, i.e. there exists a random variable $X$ such that $F_X = F$, where $F_X$ denotes the CDF of $X$.



The hint was to look at $\left(\Omega=(0,1), \mathcal{F}=\mathcal{B}((0,1)), \lambda_{|(0,1)}\right)$.



I know that the CDF of any random variable has those properties, but I fail to see how to link $F_X$ with $F$.



Answer



For $\omega \in (0,1)$ define $X(\omega)=\inf \{t: F(t) \geq \omega\}$. First note that $F(t) \geq \omega$ implies that $X(\omega) \leq t$.



Now suppose $F(t) <\omega$. Then $F(s) <\omega $ for all $s \leq t$. This implies that $X(\omega) \geq t$. In other words, $X(\omega) < t$ implies $F(t) \geq \omega$.



Thus $(0,F(t)] \subseteq \{X \leq t\}$, so $F_X(t) = P(X\leq t) \geq \lambda((0,F(t)])=F(t)$; and $\{X < t\} \subseteq (0,F(t)]$, so $F_X(t-) \leq F(t)$. These two inequalities imply that $F_X(t)=F(t)$ at all points where $F_X$ is continuous. Since both are right-continuous functions, it follows that $F_X=F$.
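
The construction above is the quantile-function (inverse-CDF) method; here is a minimal numeric sketch (added; the exponential CDF is chosen only because its infimum has a closed form):

```python
import math
import random

# X(omega) = inf{t : F(t) >= omega} for F(t) = 1 - exp(-t), t >= 0.
def X(omega):
    return -math.log(1.0 - omega)

# omega uniform on (0,1) => X has CDF F, as the proof asserts.
samples = [X(random.random()) for _ in range(100_000)]
t = 1.0
print(sum(s <= t for s in samples) / len(samples), 1 - math.exp(-t))  # ~0.632 both
```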


trigonometry - How were these values of sin x found?

I'm working on Trigonometry problems that have to do with inverse functions.



So I have an example problem that goes like this:



$$\ 10\sin^2x = \sin x$$




and apparently the solution set is:



$\sin x = 0 $ if $ x = 0.0 , 3.1$



and $\sin x = 1/10$ if $ x = 0.1, 3.0$



I know how to do these problems in terms of radians, but I don't know what these integers are referring to on the unit circle, because when I assume that 1 = 360 degrees, the equations still don't make sense.



Can someone please help me with how these values were computed? I can't find any examples in my textbook.




Thank you!
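
A plausible reading (added note, not from the textbook): the values are in radians, rounded to one decimal place. $\sin x=0$ holds at $x=0$ and $x=\pi\approx 3.1$, while $\sin x=1/10$ holds at $x=\arcsin(0.1)\approx 0.1$ and $x=\pi-\arcsin(0.1)\approx 3.0$. A quick check:

```python
import math

# Solutions of 10 sin^2 x = sin x, i.e. sin x = 0 or sin x = 1/10, in [0, pi].
print(round(0.0, 1), round(math.pi, 1))                               # 0.0 3.1
print(round(math.asin(0.1), 1), round(math.pi - math.asin(0.1), 1))   # 0.1 3.0
```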

Convergence of the series $\sum\ln\left(1+\frac{(-1)^n}{n+1}\right)$

I want to show that the series whose nth term is
$a_n=\ln\left(1+\frac{(-1)^n}{n+1}\right)$ is convergent. I wanted to use the limit comparison test to compare it to the $p$-series, but $a_n$ is not positive. I thought of writing the power series representation of $a_n$ using the power series representation of $\ln(1+x)$ with $x=b_n=\frac{(-1)^n}{n+1}$; we find that
$$a_n=b_n-\frac{1}{2}b_n^2+\frac{1}{3}b_n^3-\frac{1}{4}b_n^4+\cdots$$
Now the series $\sum b_n$ is convergent by the alternating series test, and the other terms are all terms of absolutely convergent series, but there are infinitely many of them. Can I argue this way? I mean, is an infinite sum of convergent series a convergent series? Is this correct, and is there any other way to do it?
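
One added observation (a sketch of an alternative route): since $1+\frac{(-1)^n}{n+1}=\frac{n+1+(-1)^n}{n+1}$, the partial sums telescope. Indeed $a_{2k}+a_{2k+1}=\ln\frac{2k+2}{2k+1}+\ln\frac{2k+1}{2k+2}=0$, so $s_{2m+1}=a_1=\ln\frac12$ for every $m$, while $s_{2m}=\ln\frac12+\ln\frac{2m+2}{2m+1}\to\ln\frac12$. Hence the series converges, with sum $-\ln 2$.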

calculus - Find $\lim_{x\rightarrow 0}\frac{x}{[x]}$



Find $\lim_{x\rightarrow0}\frac{x}{[x]}$



$[x]$ represents the greatest integer less than or equal to $x$.



The right-hand limit is not defined, since $[0^+]=0$; the left-hand limit is zero, since $[0^-]=-1$.
I want to know whether we can say the limit exists or not, because the left-hand limit $\ne$ right-hand limit.


Answer



For $x\to 0$ the expression $\frac{x}{[x]}$ is not well defined, since for $0<x<1$ we have $[x]=0$.


As you noticed, we can only consider



$$\lim_{x\rightarrow0^-}\frac{x}{[x]}=0$$


Saturday, November 14, 2015

analysis - Prove that region under graph of function is measurable



In the measure theory book that I am studying, we consider the 'area' under (i.e. the product measure of) the graph of a function as an example of an application of Fubini's Theorem for integrals (with respect to measures).




The setting: $(X,\mathcal{A}, \mu)$ is a $\sigma$-finite measure space, $\lambda$ is Lebesgue measure on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ (Borel $\sigma$-algebra), $f:X \to [0,+\infty]$ is $\mathcal{A}$-measurable, and we are considering the region under the graph of $f$,



$E=\{(x,y)\in X \times \mathbb{R}|0\leq y < f(x)\}$.



I need to prove $E \in \mathcal{A} \times \mathcal{B}(\mathbb{R})$. I thought to write $E=g^{-1}((0,+\infty])\cap(X \times [0,+\infty])$ where $g(x,y)=f(x)-y$ but I can't see why $g$ must be $\mathcal{A} \times \mathcal{B}(\mathbb{R})$-measurable. Any help would be appreciated.


Answer



$g=k\circ h$ where $h(x,y)=(f(x),y)$ and $k(a,b)=a-b$. [ Here $h:X\times \mathbb R \to \mathbb R^{2}$ and $k:\mathbb R^{2} \to \mathbb R$]. $k:\mathbb R^{2} \to \mathbb R$ is Borel measurable because it is continuous. To show that $h$ is measurable it is enough to show that $h^{-1} (A \times B) \in \mathcal A \times B(\mathbb R)$ for $A,B \in \mathcal B(\mathbb R)$. This is clear because $h^{-1} (A \times B)=f^{-1}(A) \times B$.



I have assumed that $f$ takes only finite values. To handle the general case let $g(x)=f(x)$ if $f(x) <\infty$ and $0$ if $f(x)=\infty$. Let $F=\{(x,y):0\leq y < g(x)\}$. Then $E=(f^{-1}\{\infty\}\times [0,\infty)) \cup [(f^{-1}(\mathbb R)\times \mathbb R) \cap F]$.


proof verification - The square root of a prime number is irrational


If $p$ is a prime number, then $\sqrt{p}$ is irrational.



I know that this question has been asked but I just want to make sure that my method is clear. My method is as follows:



Let us assume that the square root of the prime number $p$ is rational. Hence we can write $\sqrt{p} = \frac{a}{b}$ (in lowest terms). Then $p = \frac{a^2}{b^2}$, and so $p b^2 = a^2$.



Hence $p$ divides $a^2$, so $p$ divides $a$. Substituting $a = pk$, we find that $p$ divides $b$. This is a contradiction, as $a$ and $b$ should be relatively prime, i.e., $\gcd(a,b)=1$.


Power Series $0^{0}$



My textbook explains that the power series $\sum_{n=0}^{\infty} x^{n}/n!$ converges for $x=0$ because the terms of the series take the value $0$.



My problem with this argument is the first term, which is $0^{0}$. But isn't this undefined? Can someone explain this?


Answer



In the context described in the question, it is a convention that $0^0 = 1$.


discrete mathematics - prove by induction that $(a^n - b^n)$ is a multiple of $(a - b)$ for $n geq 1$





Okay so I just finished a final in discrete mathematics and I could not figure out how to finish this proof:



"Prove by mathematical induction, that $(a^n - b^n)$ is a multiple of $(a - b)$ when $a$ and $b$ are integers and $n \geq 1$.



$\underline{Base case: n = 1}$




$(a^1 - b^1) = (a - b)x$



$x = 1$



$a - b = a - b$



$\underline{Inductive Hypothesis:}$



$n = k$ for some $k$




$(a^k - b^k) = (a - b)x$



$\underline{Inductive Step:}$



$n = k + 1$



$(a^{k + 1} - b^{k + 1}) = (a - b)x_1$



$(a \cdot a^{k}) - (b \cdot b^{k}) = (a - b) x_1$




$a \cdot [(a - b)x + b^k] + b \cdot [(a - b)x - a^k]= (a-b)x_1$



$(a -b)xa + ab^k + (a - b)xb - ba^k = (a - b)x_1$



$(a -b)xa + (a - b)xb - ab^k + ba^k = (a - b)x_1$



$(a -b)xa + (a - b)xb + ba^k - ab^k = (a - b)x_1$



.....




I don't know if I did it right but I couldn't get any further than this.


Answer



Assume $a^k-b^k=(a-b)m$ for some $m\in\Bbb Z$, so that $a^k=b^k+(a-b)m$.



We have
\begin{align*}
a^{k+1}-b^{k+1} &= a\cdot a^k-b^{k+1} = a[b^k+(a-b)m]-b^{k+1}\\
&= am(a-b)+ab^k-b^{k+1}\\
&= am(a-b)+b^k(a-b)\\
&= (am+b^k)(a-b).
\end{align*}
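
A spot check of the statement (added illustration):

```python
import random

# (a^n - b^n) is divisible by (a - b) for random integers a != b, n >= 1.
for _ in range(1000):
    a, b, n = random.randint(-50, 50), random.randint(-50, 50), random.randint(1, 10)
    if a != b:
        assert (a**n - b**n) % (a - b) == 0
print("ok")
```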


Friday, November 13, 2015

Prove the convergence of $\prod\limits^{n}_{k=1}{\left(1+\frac{k}{n^p}\right)}$ and Find Its Limit




Suppose $p> 1$ and the sequence $\{x_n\}_{n=1}^{\infty}$ has a general term of
$$x_n=\prod\limits^{n}_{k=1}{\left(1+\frac{k}{n^p}\right)}, \qquad n=1,2,3,\cdots$$
Show that the sequence $\{x_n\}_{n=1}^{\infty}$ converges, and hence find
$$\lim_{n\rightarrow\infty}{x_n}$$

which is related to $p$ itself.




I have attempted to show the convergence of the sequence using the ratio test, but failed. The general term has a form similar to the $p$-series, and finding the limit seems difficult because the denominator involves a $p$-th power. How do I deal with this?


Answer



We have that



$$\prod\limits^{n}_{k=1}{\left(1+\frac{k}{n^p}\right)}=e^{\sum^{n}_{k=1}{\log \left(1+\frac{k}{n^p}\right)}}$$



and




$$\sum^{n}_{k=1}{\log \left(1+\frac{k}{n^p}\right)}=\sum^{n}_{k=1} \left(\frac{k}{n^p}+O\left(\frac{k^2}{n^{2p}}\right)\right)=\frac{n(n+1)}{2n^{p}}+O\left(\frac{n^3}{n^{2p}}\right)=\frac{1}{2n^{p-2}}+O\left(\frac{1}{n^{p-1}}\right)+O\left(\frac{1}{n^{2p-3}}\right)$$



therefore the sequence converges for $p\ge 2$




  • for $p=2 \implies x_n \to \sqrt e$

  • for $p>2 \implies x_n \to 1$




and diverges for $1<p<2$.
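
A numeric illustration of the $p=2$ case (added sketch):

```python
import math

# x_n = prod_{k=1}^n (1 + k/n^2) should approach sqrt(e) ~ 1.64872.
for n in (10, 100, 1000, 10000):
    x = 1.0
    for k in range(1, n + 1):
        x *= 1 + k / n**2
    print(n, x)
print(math.sqrt(math.e))
```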







complex analysis - Why is $\sin x$ the imaginary part of $e^{ix}$?

Most of us who are studying mathematics are familiar with the famous $e^{ix}=\cos(x)+i\sin(x)$. Why is it that we have $e^{ix}=\cos(x)+i\sin(x)$ and not $e^{ix}=\sin(x)+i\cos(x)$? I haven't studied Complex Analysis to know the answer to this question. It pops up in Linear Algebra, Differential Equations, Multivariable Equations and many other fields. But I feel like textbooks and teachers just expect us students to take it as given without explaining it to a certain extent. I also couldn't find any good article that explains this.

calculus - How to prove that $e = \lim_{n \to \infty} (\sqrt[n]{n})^{\pi(n)} = \lim_{n \to \infty} \sqrt[n]{n\#}$?

While reading this post, I stumbled across these definitions (Wiki_german)


$$e = \lim_{n \to \infty} \sqrt[n]{n\#}$$


and


$$e = \lim_{n \to \infty} (\sqrt[n]{n})^{\pi(n)}.$$


The last one seems interesting, since $ \lim_{n \to \infty} (\sqrt[n]{n})=1$, proven here.



How to prove these?
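
A numeric look at both limits (added sketch; assumes SymPy, where `primorial(n, nth=False)` is $n\#$, the product of all primes $\le n$). Convergence is quite slow, driven by $\pi(n)\ln n/n \to 1$:

```python
import math
from sympy import primepi, primorial

for n in (10, 100, 1000, 10000):
    via_pi = (n ** (1 / n)) ** int(primepi(n))                            # (n^(1/n))^pi(n)
    via_primorial = math.exp(math.log(int(primorial(n, nth=False))) / n)  # (n#)^(1/n)
    print(n, via_pi, via_primorial)
print(math.e)  # both columns drift slowly toward e
```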

elementary number theory - Name of 'mod' or 'remainder' operation in mathematics


I am trying to write the Euclidean algorithm in the following way:


$A = \lfloor A \div B \rfloor \times B + (\text{remainder of}) \: A \div B $


Now is there any symbol I can use to say "remainder of A $\div$ B"? I know that in the C programming language there is the operator % for modulus; is that a valid symbol in maths? Can I write A % B? Or is there some other way?


Answer



Per request, I post my comment(s) as an answer, and add some further remarks.


The notation $\, a\bmod b\, $ denotes the remainder when dividing $\,a\,$ by $\,b\,$ using the division algorithm. The same notation is used in other rings that have an analogous (Euclidean) Division Algorithm, e.g. polynomials with coefficients over a field, e.g. we can write the Polynomial Remainder Theorem as $\,f(a) = f(x)\bmod x\!-\!a,\,$ or higher-degree forms like $\,f(i) = (f(x)\bmod x^2\!+\!1)\bmod x\!-\!i$.


Also $\!\bmod\!$ is used as a ternary relation (vs. above binary operation) when dealing with congruences, i.e. $\ a\equiv b\pmod{\! n}\iff n\mid a-b\,$ (an equivalence relation for a fixed modulus $\,n).$


These two denotations of $\!\bmod\!$ are related as follows


$$ a\equiv b\!\!\!\pmod{\!n}\iff a\bmod n\, =\, b\bmod n$$



so $\,a\bmod n\,$ serves as a normal form or canonical representative for the entire equivalence class $\,[a]_n = a + n\Bbb Z\,$ of all integers $\,\equiv a\pmod{\!n}$. So the above displayed equivalence means that testing congruence of integers reduces to testing equality of their normal forms (= remainders $\!\bmod n),\,$ just as we can test equivalence of fractions by testing equality of their least-terms normal forms. Similarly we can view the remainder as a "least terms" rep: it is the least nonnegative integer in the class $[a]_n.\,$


The operational use of mod is often more convenient in computational contexts, whereas the relational use often yields more flexibility in theoretical contexts. The difference amounts to whether it is more convenient to work with general equivalence classes vs. canonical / normal representatives ("reps") thereof. For example, it would be quite cumbersome to state the laws of fraction arithmetic if we required that all fractions to be in normal (reduced) form, i.e. in lowest terms. Instead, it proves more convenient to have the flexibility to work with arbitrary equivalent fractions. For example, this allows us to state the fraction addition rule in very simple form by first choosing convenient reps having a common denominator.


Analogously, in modular arithmetic the canonical remainder $\,a\bmod n\,$ may not be the most convenient choice of representative of the equivalence class $\,[a]_n =\, a + n\,\Bbb Z.\,$ For example, casting out elevens exploits that $\ {\rm mod}\ 11\!:\ 10\equiv -1\,\Rightarrow\,10^{\large k}\equiv (-1)^{\large k}\equiv \pm1,\,$ which involves choosing a rep of least magnitude $\,\color{#c00}{\bf -1}\,$ vs. $\,\color{#0a0}{10}\in [10]_{11}\! = \{\ldots,\, -23,-12,\color{#c00}{\bf -1},\color{#0a0}{10},21,\,\ldots\}.\,$ Or, as here, we can choose reps that conveniently make a quotient be exact when computing modular fractions, e.g. $\!\bmod 11\!:\,\ 9/13\equiv -2/2\equiv -1.\,$ Hence, analogous to fraction addition, we chose reps which simplified arithmetic. Using least-magnitude reps often simplifies other computations too, e.g. it can halve the number of steps in the Euclidean algorithm. Thus the use of congruence classes (vs. canonical reps) provides much greater flexibility, which may yield great simplifications - not only computationally, but also theoretically, which becomes clearer when one studies quotient rings, which yield (algebraic) structure reifications of the congruence rules (= compatibility of congruences with addition and multiplication).


Beware that some authors omit the parentheses in $\, a\equiv b\pmod{\!n}$ instead writing them as follows $\,a\equiv b\mod n\ $ or $\ a = b\mod n,\ $ using \mod vs. \pmod in $\TeX$. These might easily be confused with $\,a = b\bmod n\,$ i.e. $\,a = (b\bmod n),\,$ so one should keep in mind such possible ambiguities in contexts where both forms of $\!\bmod\!$ are in use.


The name '%' for the normal form $\!\bmod\!$ operation (as in the C programming language) has not percolated to the mathematical community as far as I can tell. I recall many questions on sci.math regarding the meaning of $\rm\, a\,\%\, b.\, $ As such, if you use this notation in a mathematical forum then I recommend that you specify its meaning. This would not be necessary for $\!\bmod\!$ since that notation is ubiquitous in mathematics (currently more so for congruence than operator form). Be aware, however, that some mathematicians look down on the operational use of mod in the case when it would be more natural to use the congruence form. Apparently the mathematical Gods do too, since doing so can make some proofs quite more difficult (much more so than the above simple case of fraction addition).
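
For comparison with the programming-language convention (added note): in Python, `%` is the remainder operator and `divmod` packages the division algorithm directly, so the identity the question writes down holds verbatim for $B \neq 0$:

```python
# Division algorithm: A == (A // B) * B + A % B.
A, B = 47, 5
q, r = divmod(A, B)           # q = A // B, r = A % B
print(q, r, A == q * B + r)   # 9 2 True
```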


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...