Sunday, January 31, 2016

abstract algebra - Irrationality of $\sqrt[n]{2}$




I know how to prove the result for $n=2$ by contradiction, but does anyone know a proof for a general integer $n$?



Thank you for your answers.



Marcus


Answer



Suppose that $\sqrt[n]2$ is rational for some $n\ge 3$. Then, for some positive integers $p,q$,




$$\sqrt[n]2=\frac{p}{q}\implies 2=\frac{p^n}{q^n}\implies p^n=2q^n=q^n+q^n.$$



This contradicts Fermat's Last Theorem. (For $n=2$ the classical argument applies, and $\sqrt[1]2=2$ is rational.)


reference request - how do we know that integral is non-elementary?





Is there a criterion that tells us when an indefinite integral is non-elementary?


Answer



There is a decision procedure called the Risch algorithm that will either tell you that the integral is non-elementary or produce an elementary antiderivative. It is not an easy algorithm to execute by hand, or even to implement in a computer algebra system (although the latter has been done), so there is little hope of finding a simple criterion for the existence of an elementary antiderivative.


general topology - Does there exist a homeomorphism between any pair of $(0,1),(0,1],[0,1]$?



As in the title: does there exist a homeomorphism between any pair of $(0, 1)$, $(0,1]$, $[0,1]$?


Answer



No two of the three spaces are homeomorphic. One way to see this is to note that $(0,1)$ has no non-cut points, $(0,1]$ has one non-cut point, and $[0,1]$ has two. (A non-cut point is one whose removal does not disconnect the space.) Another way to see that $[0,1]$ is not homeomorphic to either of the others is to note that $[0,1]$ is compact, and they are not. $(0,1)$ and $(0,1]$ can also be distinguished by the fact that the one-point compactification of $(0,1)$ is homeomorphic to the circle $S^1$, while that of $(0,1]$ is homeomorphic to $[0,1]$.



integration - Different ways to tackle the integral $\int_0^1\sqrt{\frac{x}{1-x}}\,dx$



$$\int_0^1\sqrt\frac x{1-x}\,dx$$ I saw in my book that the solution uses the substitution $x=\cos^2u$, so that $dx=-2\cos u\sin u\ du$.
I would like to see different approaches, can you provide them?


Answer



Here is another way that involves rationalising the numerator first.


For $0 \leqslant x < 1$ we can write \begin{align*} \int_0^1 \sqrt{\frac{x}{1 - x}} \, dx &= \int_0^1 \sqrt{\frac{x}{1 - x}} \cdot \frac{\sqrt{x}}{\sqrt{x}} \, dx\\ &= \int^1_0 \frac{x}{\sqrt{x - x^2}} \, dx \end{align*} Now rewriting the numerator in terms of the derivative of the denominator we have \begin{align*} \int_0^1 \sqrt{\frac{x}{1 - x}} \, dx &= -\frac{1}{2} \int^1_0 \frac{(1 - 2x) - 1}{\sqrt{x - x^2}} \, dx\\ &= -\frac{1}{2} \int^1_0 \frac{1 - 2x}{\sqrt{x - x^2}} \, dx + \frac{1}{2} \int^1_0 \frac{dx}{\sqrt{x - x^2}}\\ &= I_1 + I_2 \end{align*} The first of these integrals can be found using the substitution $x = u + 1/2$. The result is $$I_1 = \int^{1/2}_{-1/2} \frac{u}{\sqrt{1/4 - u^2}} \, du = 0,$$ as the integrand is odd between symmetric limits.


The second integral can be found by first completing the square. As $$x - x^2 = \frac{1}{4} - \left (x - \frac{1}{2} \right )^2,$$ we have $$I_2 = \frac{1}{2} \int^1_0 \frac{dx}{\sqrt{\frac{1}{2^2} - \left (x - \frac{1}{2} \right )^2}} = \frac{1}{2} \sin^{-1} (2x - 1) \Big{|}^1_0 = \frac{\pi}{2}.$$ Thus $$\int_0^1 \sqrt{\frac{x}{1 - x}} \, dx = \frac{\pi}{2}.$$
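As a sanity check, the value $\pi/2$ can be confirmed numerically. This is a stdlib-only sketch (not part of the original answer): the substitution $x=\sin^2 t$ turns the integrand into $2\sin^2 t$ on $[0,\pi/2]$ and removes the singularity at $x=1$.

```python
import math

def integral_numeric(steps=100_000):
    """Midpoint rule for the integral of sqrt(x/(1-x)) over [0, 1],
    after substituting x = sin^2(t), which gives 2 sin^2(t) on [0, pi/2]."""
    h = (math.pi / 2) / steps
    return h * sum(2 * math.sin((k + 0.5) * h) ** 2 for k in range(steps))

print(integral_numeric())  # ≈ 1.5707963... = π/2
```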


Saturday, January 30, 2016

calculus - Compute $\lim_{n \to \infty}\left(\frac{a_n+b_n}{2}\right)^n$



I am trying to solve the following problem:



Compute $\lim_{n \to \infty}(\frac{a_n+b_n}{2})^n$ when $\lim_{n \to \infty} a_n^n=a>0$ and $\lim_{n \to \infty} b_n^n=b>0$ such that $a_n,b_n>0 \ \forall \ n \ \in \mathbb{N}$.



I tried to use the Sandwich Theorem to come up with an answer, but my upper bound was not tight:



$\max(a_n,b_n)\ge(\frac{a_n+b_n}{2}) \ge \sqrt{a_nb_n}$




Raising to the $n$-th power and passing to the limit, I got the following:



$\max(a,b)\ge \lim_{n \to \infty}\left(\frac{a_n+b_n}{2}\right)^n \ge \sqrt{ab}$



But this doesn't help me at all. How could I actually compute the limit?


Answer



This is done in steps. First we have to note that both $a_n, b_n$ tend to $1$. This follows from the fact that $n\log a_n\to \log a$ and thus $\log a_n\to 0$.



Next let $x_n$ denote the expression whose limit is to be evaluated. Then we have $$\log x_n=n\log \left(1+\frac{a_n+b_n-2}{2}\right)$$ and, since $\log(1+t)\sim t$ as $t\to 0$, the limit of the above expression is the same as that of $$\frac{1}{2}\cdot\{n(a_n-1)+n(b_n-1)\}$$ Next we use the fact that $n\log a_n\to\log a$, which implies $n(a_n-1)\to\log a$ (and similarly $n(b_n-1)\to\log b$). The limit of $\log x_n$ is thus $$\frac{\log a +\log b} {2}$$ It follows that $x_n\to\sqrt{ab} $.




The above argument makes use of the standard limit $\lim\limits_{x\to 1}\dfrac{\log x} {x-1}=1$.
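For a quick numerical illustration (stdlib only; I take $a_n=a^{1/n}$ and $b_n=b^{1/n}$ as the simplest sequences satisfying the hypotheses — a sample choice for the demo, not part of the original problem):

```python
import math

def midpoint_power(a, b, n):
    """((a_n + b_n)/2)^n with the sample choice a_n = a**(1/n), b_n = b**(1/n)."""
    an, bn = a ** (1.0 / n), b ** (1.0 / n)
    return ((an + bn) / 2) ** n

print(midpoint_power(4.0, 9.0, 10**6))  # ≈ 6.0 = sqrt(4*9)
```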


integration - Difficult infinite integral involving a Gaussian, Bessel function and complex singularities

I've come across the following integral in my work. $$\intop_{0}^{\infty}dk\, e^{-ak^{2}}J_{0}\left(bk\right)\frac{k^{3}}{c^{2}+k^{4}}$$



where $a$, $b$, $c$ are all positive.



I've seen and evaluated similar integrals without the denominator using resources like Watson's Theory of Bessel Functions, but I've had no luck finding anything resembling this integral. Evaluating it would give me a very nice result. Does anyone have any ideas on how to approach it?



edit: Some of my attempts so far include:



1) Using "Infinite integrals involving Bessel functions by an improved approach of contour integration and the residue theorem" by Qiong-Gui Lin. This (like other residue-theorem approaches) doesn't seem to work, since the Gaussian blows up on the imaginary axis.




2) I recall seeing expressions like $Z_u(ax)X_v(b\sqrt x)$, where $Z,X$ are some variation of Bessel functions, in Gradshteyn–Ryzhik. This inspired me to write $e^{-ax^2}=\sum_{k=-\infty}^{\infty}I_{k}(-ax^{2})$, substitute $t=x^2$, and integrate term by term. This hasn't gotten me anywhere either.

calculus - Difficult Substitution/Log Integral



Okay, so I'm currently working on a strange proof of the volume of a cone and could use some guidance with a difficult integral. I've done many other, easier proofs involving rotational solids and triple integrals, so please don't just tell me to do it that way; I already have.




I believe the following integral should give the volume of a cone with radius $r$ and height $h$:



$$V(r,h)=2\int_0^r\delta(y)\;dy$$



$$\delta(y)=2h\int_0^{\sqrt{r^2-y^2}}\left[1-\frac{\sqrt{x^2+y^2}}{r}\right]\;dx$$



The first trouble I had was solving the density integral $\delta$. I simplified it to the following and tried using a typical trig sub with tangent.



$$\delta(y)=2h\sqrt{r^2-y^2}-\frac{2h}{r}y^2\int_0^{\arctan\left(\frac{\sqrt{r^2-y^2}}{y}\right)}\sec^3\theta\;d\theta$$




After a while I managed to solve the integral using integration by parts:



$$\delta(y)=2h\sqrt{r^2-y^2}-\frac{2h}{r}y^2\left[\frac{\sec\theta\tan\theta+\color{red}{\ln|\sec\theta+\tan\theta|}}{2}\right]_0^{\arctan\left(\frac{\sqrt{r^2-y^2}}{y}\right)}$$



$$\delta(y)=h\sqrt{r^2-y^2}-\frac{h}{r}y^2\ln(r+\sqrt{r^2-y^2})-\frac{h}{r}y^2\ln(y)$$



However, upon plugging into the initial volume integral, I got the following:



$$V(r,h)=2h\int_0^r\sqrt{r^2-y^2}\;dy-\color{red}{\frac{2h}{r}\int_0^ry^2\ln(r+\sqrt{r^2-y^2})\;dy}-\frac{2h}{r}\int_0^ry^2\ln y\;dy$$




The first integral can be solved fairly easily, and the third can be done with integration by parts and a little work. However, the second appears quite daunting and I don't know that it can be solved using elementary techniques. I've considered trying Laisant's inverse method, converting to polar, or using some trig identity a few steps back to alter this particular integrand but just don't know if I'd get anywhere.



The other thing I've considered is using a hyperbolic trig sub, but I'm less familiar with these and don't know how I'd reapply bounds to even solve the density integral, much less the remaining integral with respect to $y$. I've gotten a little past here but it seems to simply dead-end.



$$\delta(y)=2h\sqrt{r^2-y^2}-\color{red}{\frac{2h}{r}y^2\int_0^{\text{arcsinh}\left(\frac{\sqrt{r^2-y^2}}{y}\right)}\cosh^2\theta\;d\theta}$$



Any advice for proceeding (i.e. integrating that second log integral, or the hyperbolic version, above)? I'd really appreciate it, and may assign a bounty if it proves excessively daunting. I'd prefer the full solution not be posted, but if it necessitates non-elementary techniques feel free to do so, as you've earned the right to post imo.



By the way if you do solve the problem and post it here, I am planning on putting this proof in a collection I am writing and would like to cite your name with your work if/when I submit it for jobs and other things.







Here's an outline of the final workings:



We consider a cone of height $h$ and radius $r$ formed by the following equation:



$$z(x,y)=h-\frac{h\sqrt{x^2+y^2}}{r}$$
(Note the transformations placing the cone upright on the $xy$-plane.)




At some point $y=c$, we assign a density $\delta$ corresponding to the area of the hyperbola formed by the intersection of $z$, $y=c$, and the $xy$-plane.



$$\delta(y)=2h\int_0^{\sqrt{r^2-y^2}}\left[1-\frac{\sqrt{x^2+y^2}}{r}\right]\;dx$$
The volume of the cone is given by the following integral:



$$V(r,h)=2\int_0^r\delta(y)\;dy$$
$$V(r,h)=2\int_0^r2h\int_0^{\sqrt{r^2-y^2}}\left[1-\frac{\sqrt{x^2+y^2}}{r}\right]\;dx\;dy$$

First, let's integrate $\delta$ by parts:



$$\delta(y)=2h\sqrt{r^2-y^2}-\frac{2h}{r}\int_0^{\sqrt{r^2-y^2}}\sqrt{x^2+y^2}\;dx$$
Substituting $x=y\tan\theta$:



$$\delta(y)=2h\sqrt{r^2-y^2}-\frac{2h}{r}y^2\int_0^{\arctan\left(\frac{\sqrt{r^2-y^2}}{y}\right)}\sqrt{y^2(1+\tan^2\theta)}\cdot y\sec^2\theta\;d\theta$$
Simplifying:




$$\delta(y)=2h\sqrt{r^2-y^2}-\frac{2h}{r}y^2\int_0^{\arctan\left(\frac{\sqrt{r^2-y^2}}{y}\right)}\sec^3\theta\;d\theta$$
Integrating with respect to $\theta$:
$$\delta(y)=2h\sqrt{r^2-y^2}-\frac{2h}{r}y^2\left[\frac{\sec\theta\tan\theta+\ln|\sec\theta+\tan\theta|}{2}\right]_0^{\arctan\left(\frac{\sqrt{r^2-y^2}}{y}\right)}$$
Applying bounds and simplifying:
$$\delta(y)=2h\sqrt{r^2-y^2}-\frac{h}{r}y^2\left[\frac{r}{y}\cdot\frac{\sqrt{r^2-y^2}}{y}+\text{arcsech}\left(\frac{y}{r}\right)\right]$$
Simplifying:
$$\delta(y)=2h\sqrt{r^2-y^2}-h\sqrt{r^2-y^2}-\frac{h}{r}y^2\text{arcsech}\left(\frac{y}{r}\right)$$




$$\delta(y)=h\sqrt{r^2-y^2}-\frac{h}{r}y^2\text{arcsech}\left(\frac{y}{r}\right)$$
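This closed form can be spot-checked numerically against the defining integral (a stdlib-only sketch, not part of the original post; note that $\operatorname{arcsech}(u)=\operatorname{acosh}(1/u)$ and that the argument of arcsech is $y/r$):

```python
import math

def delta_integral(y, r, h, steps=200_000):
    """Midpoint rule for the defining integral
    delta(y) = 2h * ∫_0^sqrt(r^2-y^2) [1 - sqrt(x^2+y^2)/r] dx."""
    upper = math.sqrt(r * r - y * y)
    dx = upper / steps
    total = sum(1 - math.sqrt(((k + 0.5) * dx) ** 2 + y * y) / r
                for k in range(steps))
    return 2 * h * total * dx

def delta_closed(y, r, h):
    """delta(y) = h*sqrt(r^2-y^2) - (h/r)*y^2*arcsech(y/r),
    with arcsech(u) = acosh(1/u)."""
    return h * math.sqrt(r * r - y * y) - (h / r) * y * y * math.acosh(r / y)

print(delta_integral(0.5, 1.0, 1.0), delta_closed(0.5, 1.0, 1.0))  # both ≈ 0.5368
```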
Substituting $\delta$ into $V(r,h)$:



$$V(r,h)=2\int_0^r\left[h\sqrt{r^2-y^2}-\frac{h}{r}y^2\text{arcsech}\left(\frac{y}{r}\right)\right]\;dy$$



Breaking up:



$$V(r,h)=2h\int_0^r\sqrt{r^2-y^2}\;dy-\frac{2h}{r}\int_0^ry^2\text{arcsech}\left(\frac{y}{r}\right)\;dy$$

Substituting $y=r\sin\phi$ in the first integral:
$$V(r,h)=2hr^2\int_0^{\pi/2}\cos^2\phi\;d\phi-\frac{2h}{r}\int_0^ry^2\text{arcsech}\left(\frac{y}{r}\right)\;dy$$



Using the double angle formula:



$$V(r,h)=hr^2\int_0^{\pi/2}(1-\cos2\phi)\;d\phi-\frac{2h}{r}\int_0^ry^2\text{arcsech}\left(\frac{y}{r}\right)\;dy$$



Integrating with respect to $\phi$:



$$V(r,h)=hr^2\left[\phi-\frac{\sin 2\phi}{2}\right]_0^{\pi/2}-\frac{2h}{r}\int_0^ry^2\text{arcsech}\left(\frac{y}{r}\right)\;dy$$




Simplifying:



$$V(r,h)=\frac{\pi r^2 h}{2}-\frac{2h}{r}\int_0^ry^2\text{arcsech}\left(\frac{y}{r}\right)\;dy$$



Integrating by parts:



$$V(r,h)=\frac{\pi r^2 h}{2}-\frac{2h}{r}\left[ \frac{y^3}{3}\text{arcsech}\left(\frac{y}{r}\right)\right]_0^r-\frac{2h}{3} \int_0^r \frac{y^2}{\sqrt{r^2-y^2}} \;dy$$



Simplifying (the bracketed term vanishes at both limits):




$$V(r,h)=\frac{\pi r^2h}{2}-\frac{2h}{3}\int_0^r\frac{y^2}{\sqrt{r^2-y^2}}\;dy$$



Substituting $y=r\sin\alpha$:



$$V(r,h)=\frac{\pi r^2h}{2}-\frac{2r^2h}{3}\int_0^{\pi/2}\sin^2\alpha\;d\alpha$$



Using the double angle formula:



$$V(r,h)=\frac{\pi r^2h}{2}-\frac{r^2h}{3}\int_0^{\pi/2}(1-\cos2\alpha)\;d\alpha$$




Integrating with respect to $\alpha$:



$$V(r,h)=\frac{\pi r^2h}{2}-\frac{r^2h}{3}\left[\alpha-\frac{\sin 2\alpha}{2}\right]_0^{\pi/2}$$



Applying bounds:



$$V(r,h)=\frac{\pi r^2h}{2}-\frac{r^2h}{3}\cdot\frac{\pi}{2}$$



Simplifying:




$$V(r,h)=\frac{\pi r^2h}{2}-\frac{\pi r^2h}{6}$$



$$V(r,h)=\frac{1}{3}\pi r^2h$$


Answer



HINT:
I think there is a sign error in your post. I found that
$$V(r,h)=2h\int_0^r\sqrt{r^2-y^2}\;dy-\frac{2h}{r} \left( \int_0^ry^2\ln(r+\sqrt{r^2-y^2})\;dy-\int_0^ry^2\ln y\;dy \right) $$
which differs in the sign of the last term when compared with your formula above.




Now by lumping the last two terms together I get



$$V(r,h)=2h\int_0^r\sqrt{r^2-y^2}\;dy-\frac{2h}{r} \int_0^r y^2 \, \operatorname{sech}^{-1}\left(\frac{y}{r}\right)\;dy $$



which on integrating by parts seems to lead to a sensible result.



Edited to add some details of the integration by parts



Taking $u=\operatorname{sech}^{-1}\left(\frac{y}{r}\right)$; $\frac{du}{dy}=-\frac{r}{y\sqrt{r^2-y^2}}$ and $\frac{dv}{dy}=y^2$ ; $v=\frac{y^3}{3}$ gives




$$\frac{2h}{r} \int_0^r y^2 \, \operatorname{sech}^{-1}\left(\frac{y}{r}\right)\;dy =\frac{2h}{r} \left[ \frac{y^3}{3}\operatorname{sech}^{-1}\left(\frac{y}{r}\right)\right]_0^r+\frac{2h}{3} \int_0^r \frac{y^2}{\sqrt{r^2-y^2}} \;dy$$



The first resulting term is zero, so the integral of real interest is



$$\int_0^r \frac{y^2}{\sqrt{r^2-y^2}} \;dy = \left[-\frac{y}{2}\sqrt{r^2-y^2}+\frac{r^2}{2}\sin^{-1}\left(\frac{y}{r}\right)\right]_0^r$$



The only other integral of interest is the first one:



$$\int_0^r\sqrt{r^2-y^2}\;dy=\left[ \frac{y}{2}\sqrt{r^2-y^2}+\frac{r^2}{2}\sin^{-1}\left(\frac{y}{r}\right)\right]_0^r$$


calculus - How do I solve $\lim_{x \to \infty} x(e^{1/x}-1)$ without L'Hopital?



I don't have experience with L'Hopital's rule or series, and that's what most solutions use. Is there another method that can be used to solve this limit? I thought about trying the first-principles definition of the derivative, but I don't know where to begin. I need some help pointing me in the right direction.


Answer



This is precisely $$\lim_{x \to +\infty} \frac{\exp\left(\frac 1x\right) - 1}{\frac 1x} = \lim_{u \to 0^{+}} \frac{\exp\left(u \right) - 1}{u}$$




Does that remind you of some derivative?
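(It is the derivative of $\exp$ at $0$, i.e. $1$.) A quick numerical check — stdlib only; `math.expm1` computes $e^u-1$ accurately for small $u$:

```python
import math

def g(x):
    """x * (e^(1/x) - 1), i.e. the difference quotient (e^u - 1)/u at u = 1/x."""
    return x * math.expm1(1.0 / x)

for x in (10.0, 1e3, 1e6):
    print(x, g(x))  # approaches 1 as x grows
```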


How many numbers below $100$ can be expressed as a difference of two perfect squares in only one way?




How many numbers below $100$ can be expressed as a difference of two
perfect squares in only one way?




Please explain your approach.



ADDED: I can determine whether a number can be represented as a difference of two squares, say $N = a^2-b^2$ with $a,b,N \in \mathbb{N}$, which gives $N = (a+b)(a-b)$; hence $(a+b)$ and $(a-b)$ must be both odd or both even. By observing the prime factorization of $N$ we can determine whether such a representation is possible, but I am not sure whether I can use this for this problem.



Answer



The answer depends on whether you count $0^2$ as a perfect square or not. For the analysis below, $0^2$ is a perfect square. And we interpret "number below $100$" as meaning positive integer below $100$. Also, note that $5=3^2-2^2=(-3)^2-2^2$. Representations will always be non-unique if we allow sign changes. So we rule out using squares of negative integers.



The easy way to solve this problem is to look at $1$ to $99$, one after the other. For each, check whether it satisfies the condition, and keep a running tally. After a little while, we notice some patterns, and then the work is quick. As a partial compromise, we could look up a general result, and use that. But that is not interesting, so we develop all the theory we need.



Let $n$ be a positive integer. We look for non-negative integer solutions of
$$x^2-y^2=n, \qquad\text{that is, of}\qquad (x-y)(x+y)=n.$$



We must then have
$$x-y=d\qquad x+y=\frac{n}{d}$$

where $d$ is a (positive) divisor of $n$.



Solving for $x$ and $y$, we obtain
$$x=\frac{d+\frac{n}{d}}{2}\qquad\text{and}\qquad y=\frac{-d+\frac{n}{d}}{2}.$$



From the above equations, $x$ and $y$ are integers precisely when $d$ and $n/d$ are both even or both odd. And since $y$ must be non-negative, we need to have $d \le n/d$, that is, $d\le \sqrt{n}$. Any factor $d$ of $n$ satisfying those conditions gives us a solution, and different factors yield different solutions.



Thus the number of ways of expressing $n$ as a difference of $2$ squares is precisely the same as the number of divisors $d$ of $n$ such that $d\le \sqrt{n}$ and $d$ and $n/d$ have the same parity.
In particular, if $n$ is divisible by $2$ but not by $4$, there is no representation of $n$.




Look first at the case $n$ odd. There is only one factorization of $n$ of the desired type iff $n$ is an odd prime or $n=1$.



Look next at the case $n$ even.
The number of representations of $n$ is the number of ways of splitting $n$ into two even factors $d$ and $n/d$, with $d \le n/d$.
This is the same as the number of ways of expressing $n'=n/4$ as a product of $d'$ and $n'/d'$, where $d' \le n'/d'$.



There is only one such factorization precisely if $n'$ is a prime or $n'=1$.



Now counting is easy, though somewhat tedious. For the odds, count $1$, plus the odd primes $\le 99$.
There are $24$ odd primes less than $100$, for a total of $25$.




For the evens, count $1$, plus all primes $\le 24$. There are $9$ such primes, giving a total of $10$ even numbers.



So overall $35$ numbers qualify.



Comment: If we don't allow $0^2$ as a perfect square, some changes need to be made. The most obvious ones are that $1$ and $4$ should not be counted. But the situation is more complicated than that. For the odd case, if $p$ is a prime, then $p^2$ now only has one representation, since $p^2-0^2$ is not allowed, so it should be counted. Similarly, in the even case, now $4p^2$ only has one representation.
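The count of $35$ (with $0^2$ allowed as a perfect square) can be confirmed by brute force — a stdlib-only sketch enumerating representations $n=x^2-y^2$ with $x>y\ge 0$:

```python
import math

def representations(n):
    """All (x, y) with x^2 - y^2 = n and x > y >= 0 (0^2 allowed as a square)."""
    reps = []
    for x in range(1, (n + 1) // 2 + 1):  # x - y >= 1 and x + y <= n force x <= (n+1)/2
        y_sq = x * x - n
        if y_sq < 0:
            continue
        y = math.isqrt(y_sq)
        if y * y == y_sq:
            reps.append((x, y))
    return reps

count = sum(1 for n in range(1, 100) if len(representations(n)) == 1)
print(count)  # 35
```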


Friday, January 29, 2016

sequences and series - Expressing array response $A(Z) = \sum_{-N}^{N} w_n Z^n$ as a sine function




The array-response of an antenna can be defined as:



$$A(Z) = \sum_{-N}^{N} w_n Z^n$$



where $Z = \exp(-i \omega \Delta t) = \exp(-ik\Delta x \sin \alpha)$



According to my textbook, if we let $w_n = 1$ for all $n$, the expression above can be written as:



$$A(Z) = \frac{\sin [(2N+1)k \Delta x \sin \alpha /2]}{(2N+1)\sin[k \Delta x \sin \alpha /2]}$$




where we have divided by $(2N+1)$ so that $A(1) = 1$.



I tried to work this out algebraically to see how we get this expression, but can't seem to figure it out. If we let $w_n = 1$ for all $n$, then we have:



$$A(Z) = \sum_{-N}^{N} Z^n$$



which is a geometric series which can be expressed as:



$$A(Z) = \frac{Z^{-N} - Z^{N+1}}{1 - Z}$$




However, I can't seem to get further than this. I tried plugging in $Z = \exp(-ik\Delta x \sin \alpha)$ but only get a very complicated expression that I am unable to simplify. If anyone can give me any tips as to how to proceed with this derivation once I have obtained $A(Z) = \frac{Z^{-N} - Z^{N+1}}{1 - Z}$, I would be very grateful!


Answer



If we have $w_n=1$ for all $n$, we get



$$A(Z)=\sum_{n=-N}^N Z^n=Z^{-N}\sum_{n=0}^{2N}Z^n=Z^{-N}\,\frac{1-Z^{2N+1}}{1-Z}=\frac{Z^{-(2N+1)/2}-Z^{(2N+1)/2}}{Z^{-1/2}-Z^{1/2}}$$



If you substitute $Z=\exp(-ik\Delta x\sin \alpha)$ and divide by $2N+1$, you finally get the expression given in your question.




(Of course you need to know that $2i\sin x = e^{ix}-e^{-ix}$).
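A quick numerical check of this closed form (stdlib `cmath`; here the whole phase $k\Delta x\sin\alpha$ is lumped into a single variable $x$ for the demo):

```python
import cmath
import math

def array_sum(N, x):
    """Direct sum over n = -N..N of Z^n with Z = exp(-ix), normalised by 2N+1."""
    Z = cmath.exp(-1j * x)
    return sum(Z ** n for n in range(-N, N + 1)) / (2 * N + 1)

def array_closed(N, x):
    """Closed form sin((2N+1)x/2) / ((2N+1) * sin(x/2))."""
    return math.sin((2 * N + 1) * x / 2) / ((2 * N + 1) * math.sin(x / 2))

print(array_sum(4, 0.7).real, array_closed(4, 0.7))  # equal (the sum is real)
```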


limits - Evaluate $\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}$.











Evaluate
$$\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}.$$



Can anyone help me with this? I have no idea how to start. Thank you.


Answer



Let's work it out elementarily by wisely applying Cauchy-d'Alembert criterion:




$$\lim_{n\to\infty} \frac{n!^{\frac{1}{n}}}{n}=\lim_{n\to\infty}\left(\frac{n!}{n^n}\right)^{\frac{1}{n}} = \lim_{n\to\infty} \frac{(n+1)!}{(n+1)^{(n+1)}}\cdot \frac{n^{n}}{n!} = \lim_{n\to\infty} \frac{n^{n}}{(n+1)^{n}} =\lim_{n\to\infty} \frac{1}{\left(1+\frac{1}{n}\right)^{n}}=\frac{1}{e}. $$
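Numerically (stdlib only, working with logarithms so that $n!$ never overflows):

```python
import math

def root_ratio(n):
    """(n!)^(1/n) / n, computed as exp((1/n) * sum(ln k) - ln n)."""
    log_factorial = sum(math.log(k) for k in range(1, n + 1))
    return math.exp(log_factorial / n - math.log(n))

print(root_ratio(10**5), 1 / math.e)  # ≈ 0.36790 vs 0.36787944...
```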



Also notice that by applying the Stolz–Cesàro theorem you get the celebrated limit:



$$\lim_{n\to\infty} (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}} = \frac{1}{e}.$$



The sequence $L_{n} = (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}}$ is called the Lalescu sequence, after the great Romanian mathematician Traian Lalescu.



Q.E.D.


elementary number theory - How to use the Extended Euclidean Algorithm manually?


I've only found a recursive version of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any ideas?


Answer



Perhaps the easiest way to do it by hand is in analogy to Gaussian elimination or triangularization, except that, since the coefficient ring is not a field, one has to use the division / Euclidean algorithm to iteratively decrease the coefficients until they reach zero. In order to compute both $\rm\,gcd(a,b)\,$ and its Bezout linear representation $\rm\,j\,a+k\,b,\,$ we keep track of such linear representations for each remainder in the Euclidean algorithm, starting with the trivial representation of the gcd arguments, e.g. $\rm\: a = 1\cdot a + 0\cdot b.\:$ In matrix terms, this is achieved by augmenting (appending) an identity matrix that accumulates the effect of the elementary row operations. Below is an example that computes the Bezout representation for $\rm\:gcd(80,62) = 2,\ $ i.e. $\ 7\cdot 80\: -\: 9\cdot 62\ =\ 2\:.\:$ See this answer for a proof and for conceptual motivation of the ideas behind the algorithm (see the Remark below if you are not familiar with row operations from linear algebra).


For example, to solve  m x + n y = gcd(m,n) one begins with
two rows [m 1 0], [n 0 1], representing the two

equations m = 1m + 0n, n = 0m + 1n. Then one executes
the Euclidean algorithm on the numbers in the first column,
doing the same operations in parallel on the other columns,

Here is an example: d = x(80) + y(62) proceeds as:

in equation form | in row form
---------------------+------------
80 = 1(80) + 0(62) | 80 1 0
62 = 0(80) + 1(62) | 62 0 1

row1 - row2 -> 18 = 1(80) - 1(62) | 18 1 -1
row2 - 3 row3 -> 8 = -3(80) + 4(62) | 8 -3 4
row3 - 2 row4 -> 2 = 7(80) - 9(62) | 2 7 -9
row4 - 4 row5 -> 0 = -31(80) +40(62) | 0 -31 40

The row operations above are those resulting from applying
the Euclidean algorithm to the numbers in the first column,

row1 row2 row3 row4 row5
namely: 80, 62, 18, 8, 2 = Euclidean remainder sequence

for example 62 - 3(18) = 8, the 2nd step in the Euclidean algorithm,

becomes: row2 -3 row3 = row4 when extended to all columns.

In effect we have row-reduced the first two rows to the last two.
The matrix effecting the reduction is in the bottom right corner.
It starts as 1, and is multiplied by each elementary row operation,
hence it accumulates the product of all the row operations, namely:


$$ \left[ \begin{array}{ccc} 7 & -9\\ -31 & 40\end{array}\right ] \left[ \begin{array}{ccc} 80 & 1 & 0\\ 62 & 0 & 1\end{array}\right ] \ =\ \left[ \begin{array}{ccc} 2\ & \ \ \ 7\ & -9\\ 0\ & -31\ & 40\end{array}\right ] \qquad\qquad\qquad\qquad\qquad $$


Notice row 1 is the particular  solution  2 =   7(80) -  9(62)
Notice row 2 is the homogeneous solution 0 = -31(80) + 40(62),
so the general solution is any linear combination of the two:

n row1 + m row2 -> 2n = (7n-31m) 80 + (40m-9n) 62

The same row/column reduction techniques tackle arbitrary
systems of linear Diophantine equations. Such techniques
generalize easily to similar coefficient rings possessing a

Euclidean algorithm, e.g. polynomial rings F[x] over a field,
Gaussian integers Z[i]. There are many analogous interesting
methods, e.g. search on keywords: Hermite / Smith normal form,
invariant factors, lattice basis reduction, continued fractions,
Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.

Remark $ $ As an optimization, we can omit one of the Bezout coefficient columns (being derivable from the others). Then the calculations have a natural interpretation as modular fractions (though the "fractions" are multi-valued), e.g. follow the prior link.


Below I elaborate on the row operations to help readers unfamiliar with linear algebra.


Let $\,r_i\,$ be the Euclidean remainder sequence. Above $\, r_1,r_2,r_3\ldots = 80,62,18\ldots$ Given linear combinations $\,r_j = a_j m + b_j n\,$ for $\,r_{i-1}\,$ and $\,r_i\,$ we can calculate a linear combination for $\,r_{i+1} := r_{i-1}\bmod r_i = r_{i-1} - q_i r_i\,$ by substituting said combinations for $\,r_{i-1}\,$ and $\,r_i,\,$ i.e.


$$\begin{align} r_{i+1}\, &=\, \overbrace{a_{i-1} m + b_{i-1}n}^{\Large r_{i-1}}\, -\, q_i \overbrace{(a_i m + b_i n)}^{\Large r_i}\\[.3em] {\rm i.e.}\quad \underbrace{r_{i-1} - q_i r_i}_{\Large r_{i+1}}\, &=\, (\underbrace{a_{i-1}-q_i a_i}_{\Large a_{i+1}})\, m\, +\, (\underbrace{b_{i-1} - q_i b_i}_{\Large b_{i+1}})\, n \end{align}$$



Thus the $\,a_i,b_i\,$ satisfy the same recurrence as the remainders $\,r_i,\,$ viz. $\,f_{i+1} = f_{i-1}-q_i f_i.\,$ This implies that we can carry out the recurrence in parallel on row vectors $\,[r_i,a_i,b_i]$ representing the equation $\, r_i = a_i m + b_i n\,$ as follows


$$\begin{align} [r_{i+1},a_{i+1},b_{i+1}]\, &=\, [r_{i-1},a_{i-1},b_{i-1}] - q_i [r_i,a_i,b_i]\\ &=\, [r_{i-1},a_{i-1},b_{i-1}] - [q_i r_i,q_i a_i, q_i b_i]\\ &=\, [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-q_i b_i] \end{align}$$


which written in the tabular format employed far above becomes


$$\begin{array}{ccc} &r_{i-1}& a_{i-1} & b_{i-1}\\ &r_i& a_i &b_i\\ \rightarrow\ & \underbrace{r_{i-1}\!-q_i r_i}_{\Large r_{i+1}} &\underbrace{a_{i-1}\!-q_i a_i}_{\Large a_{i+1}}& \underbrace{b_{i-1}-q_i b_i}_{\Large b_{i+1}} \end{array}$$


Thus the extended Euclidean step is: compute the quotient $\,q_i = \lfloor r_{i-1}/r_i\rfloor$ then multiply row $i$ by $q_i$ and subtract it from row $i\!-\!1.$ Said componentwise: in each column $\,r,a,b,\,$ multiply the $i$'th entry by $q_i$ then subtract it from the $i\!-\!1$'th entry, yielding the $i\!+\!1$'th entry. If we ignore the 2nd and 3rd columns $\,a_i,b_i$ then this is the usual Euclidean algorithm. The above extends this algorithm to simultaneously compute the representation of each remainder as a linear combination of $\,m,n,\,$ starting from the obvious initial representations $\,m = 1(m)+0(n),\,$ and $\,n = 0(m)+1(n).\,$


Thursday, January 28, 2016

Can someone explain the following modular arithmetic?

$7^{-1} \bmod 120 = 103$




I would like to know how $7^{-1} \bmod 120$ results in $103$.
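In short: the extended Euclidean algorithm solves $7x+120y=1$; one solution is $x=103$, $y=-6$ (indeed $7\cdot 103-6\cdot 120=721-720=1$), so $7^{-1}\equiv 103\pmod{120}$. In Python (3.8+), the built-in `pow` with exponent $-1$ computes exactly this:

```python
inv = pow(7, -1, 120)        # modular inverse via pow (Python 3.8+)
print(inv, (7 * inv) % 120)  # 103 1
```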

probability theory - I think I've found an invariant distribution for a transient discrete Markov chain - Where is my mistake?



Let




  • $E$ be an at most countable set equipped with the discrete topology $\mathcal E$


  • $X=(X_n)_{n\in\mathbb N_0}$ be a discrete Markov chain with values in $(E,\mathcal E)$, distributions $(\operatorname P_x)_{x\in E}$ and transition matrix $$p=\left(p(x,y)\right)_{x,y\in E}:=\left(\operatorname P_x\left[X_1=y\right]\right)\;.$$

  • $\tau_x^1:=\inf\left\{n\in\mathbb N:X_n=x\right\}$ and $$\varrho(x,y):=\operatorname P_x\left[\tau_y^1<\infty\right]$$




A measure $\mu$ on $(E,\mathcal E)$ is called invariant $:\Leftrightarrow$ $$\mu p=\mu\;,\tag 1$$ where $$\mu p\left(\left\{x\right\}\right):=\sum_{y\in E}\mu\left(\left\{y\right\}\right)p(y,x)\;\;\;\text{for }x\in E\;.$$ If $\mu$ is a probability measure with $(1)$, it's called an invariant distribution. Moreover, $x\in E$ is called transient $:\Leftrightarrow$ $$\varrho(x,x)<1\;.$$




The German Wikipedia page states that if $X$ is transient, i.e. all states are transient, then there exists no invariant distribution.




However, I think, that I've found a counterexample:




  • Consider the random walk on $\mathbb Z$ with transition probabilities $$p(x,x+1)=r\;\text{and}\;p(x,x-1)=1-r\;\;\;\text{for }x\in\mathbb Z$$ for some $r\in (0,1)$

  • Let $$\mu_1\left(\left\{x\right\}\right):=1\;\text{and}\;\mu_2\left(\left\{x\right\}\right):=\left(\frac r{1-r}\right)^x\;\;\;\text{for }x\in\mathbb Z$$ and $\mu_1(\emptyset):=\mu_2(\emptyset):=0$



It's easy to see that $\mu_1$ and $\mu_2$ both satisfy $(1)$. Moreover, if $r\ne 1/2$, the random walk is transient and $\mu_1\ne \mu_2$. In addition, each non-negative linear combination of $\mu_1$ and $\mu_2$ is an invariant measure, too.





While these combinations are not invariant distributions in general, at least $\mu_1$ should be one. So I think I've found an invariant distribution of a transient discrete Markov chain. Is this the end of the world, or did I make a mistake?



Answer



You made more than one mistake. You mistranslated the German Wikipedia article – it doesn't talk about measures (Maß) but about distributions (Verteilung). You also seem to be equivocating between measures, probability measures and distributions in your arguments in English. The counting measure is indeed invariant under these transitions, but it's neither a probability measure nor a distribution.


Determining the conditions on the real and imaginary parts of a complex function using its modulus



So if I have a function like the following, where $f$ is a complex function and $B$ is real:
$$S(\lambda) = 1 + iBf(\lambda)$$
Suppose that I know that its modulus must be equal to 1, therefore:




Now how do I determine the conditions for the real part and its imaginary part?
I know that:
$$|1 + iBf(\lambda)|^2 = 1$$



From here:
$$|1 + iB [\operatorname{Re}(f(\lambda)) + i\operatorname{Im}(f(\lambda))]|^2 = 1$$



Then I can say that the imaginary and real parts of the function would be:
$$Im(S(\lambda)) = 1+ BRe(f(\lambda))$$
$$Re(S(\lambda)) = -BIm(f(\lambda))$$




Is my reasoning correct?


Answer



For the modulus to be $1,$ you only need the product $Bf$ to be $0,$ for since $$|1+iBf|=1,$$ it follows that $$(1+iBf)(1-iBf)=1+(Bf)^2=1,$$ and the result I claimed follows.






Of course I assumed $B$ and $f$ are real, since you do not say anything about them. Otherwise what I say above needs to be modified to be true.







OK, you have specified that $f(\lambda)$ is complex-valued. Thus, write $f=a+ib,$ where $a=a(\lambda),\,\, b=b(\lambda).$ Then we have that $$1+iBf=1+iB(a+ib)=1-Bb+iBa.$$



For its modulus to be $1,$ we must have $$(1-Bb+iBa)(1-Bb-iBa)=1,$$ or $$(1-Bb)^2+(Ba)^2=1.$$ For $B\neq 0$ this gives $$B=\frac{2\Im f(\lambda)}{|f(\lambda)|^2}.$$
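A numerical spot-check of that formula (stdlib `complex`; the sample value of $f$ is arbitrary, not from the original problem):

```python
def unit_modulus_B(f):
    """B = 2*Im(f)/|f|^2, the (non-zero) value making |1 + i*B*f| = 1."""
    return 2 * f.imag / abs(f) ** 2

f = 3 + 4j
B = unit_modulus_B(f)
print(abs(1 + 1j * B * f))  # 1.0 (up to rounding)
```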


Wednesday, January 27, 2016

linear algebra - Function that satisfies $f(x+ y) = f(x) + f(y)$ but not $f(cx)=cf(x)$


Is there a function from $ \Bbb R^3 \to \Bbb R^3$ such that $$f(x + y) = f(x) + f(y)$$ but not $$f(cx) = cf(x)$$ for some scalar $c$?


Is there one such function even in one dimension? If so, what is it? If not, why?


I came across a function from $\Bbb R^3$ to $\Bbb R^3$ such that $$f(cx) = cf(x)$$ but not $$f(x + y) = f(x) + f(y)$$ and I was wondering whether there is one with the converse.


Although there is another post titled Overview of the Basic Facts of Cauchy valued functions, I do not understand it. If someone can explain in simplest terms the function that satisfy my question and why, that would be great.


Answer



Take a $\mathbb Q$-linear function $f:\mathbb R\rightarrow \mathbb R$ that is not $\mathbb R$-linear and consider the function $g(x,y,z)=(f(x),f(y),f(z))$.


To see that such a function $f$ exists, notice that $\{1,\sqrt{2}\}$ is linearly independent over $\mathbb Q$, so there is a $\mathbb Q$-linear function $f$ that sends $1$ to $1$ and $\sqrt{2}$ to $1$. So clearly $f$ is not $\mathbb R$-linear. (Zorn's lemma is used for this, to extend $\{1,\sqrt 2\}$ to a Hamel basis of $\mathbb R$ over $\mathbb Q$.)


elementary number theory - $x^a - 1$ divides $x^b - 1$ if and only if $a$ divides $b$



Let $x > 1$ and $a$, $b$ be positive integers. I know that $a$ divides $b$ implies that $x^a - 1$ divides $x^b - 1$. If $b = qa$, then




$$x^b - 1 = (x^a)^q - 1^q = (x^a - 1)((x^a)^{q-1} + \ldots + x^a + 1).$$



I'm interested in the converse of the statement. If $x^a - 1$ divides $x^b - 1$, does this imply that $a$ divides $b$?


Answer



Suppose $a$ does not divide $b$, and write $b=a \cdot q+r$, where $0 < r < a$.



Then



$$x^b-1=x^b-x^r+x^r-1=x^r(x^{aq}-1)+x^r-1 \,.$$




Use that $x^a-1$ divides $x^{aq}-1$.
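Both directions of the equivalence are easy to spot-check by brute force; here with the arbitrary base $x=3$ (the ranges are just illustrative):

```python
def divides(m, n):
    """True when m divides n."""
    return n % m == 0

x = 3  # any integer base > 1 works here
for a in range(1, 8):
    for b in range(1, 30):
        # x^a - 1 divides x^b - 1 exactly when a divides b
        assert divides(x**a - 1, x**b - 1) == divides(a, b)
print("equivalence holds for all tested exponents")
```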


sequences and series - limit $\lim\limits_{n\to\infty}\left(\sum\limits_{i=1}^{n}\frac{1}{\sqrt{i}} - 2\sqrt{n}\right)$



Calculate below limit $$\lim_{n\to\infty}\left(\sum_{i=1}^{n}\frac{1}{\sqrt{i}} - 2\sqrt{n}\right)$$


Answer



As a consequence of Euler's Summation Formula, for $s > 0$, $s \neq 1$ we have $$ \sum_{j =1}^n \frac{1}{j^s} = \frac{n^{1-s}}{1-s} + \zeta(s) + O(|n^{-s}|), $$ where $\zeta$ is the Riemann zeta function. In your situation, $s=1/2$, so $$ \sum_{j =1}^n \frac{1}{\sqrt{j}} = 2\sqrt{n} + \zeta(1/2) + O(n^{-1/2}) , $$ and we have the limit $$ \lim_{n\to \infty} \left( \sum_{j =1}^n \frac{1}{\sqrt{j}} - 2\sqrt{n} \right) = \lim_{n\to \infty} \big( \zeta(1/2) + O(n^{-1/2}) \big) = \zeta(1/2). $$
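Numerically, the partial sums do approach $\zeta(1/2)\approx-1.4603545$, with the predicted $O(n^{-1/2})$ error visibly shrinking (a quick stdlib check):

```python
import math

def partial(n):
    # sum_{i=1}^n 1/sqrt(i) - 2*sqrt(n), which should tend to zeta(1/2)
    return sum(1 / math.sqrt(i) for i in range(1, n + 1)) - 2 * math.sqrt(n)

ZETA_HALF = -1.4603545088  # numerical value of zeta(1/2)
for n in [10**3, 10**4, 10**5]:
    print(n, partial(n), partial(n) - ZETA_HALF)
```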


elementary number theory - Highest power contained in the expression.



Let's say we have a number $N$ expressed as a polynomial in a prime $p$, with all coefficients less than $p$. We are asked to prove that the highest power of $p$ contained in $N!$ would be $\frac{N-s}{p-1}$, where $s$ is the sum of all the coefficients.



I tried to do the question by making up the expression of $N$
$$N=a_0+a_1p^1+a_2p^2.......+a_np^n$$
where $a_0+a_1+a_2+.........+a_n=s$
Now the highest power of p contained would be
$$[N/p]+[N/p^2]+[N/p^3]+......+[N/p^n]$$
$$[N/p]=a_0/p+a_1+a_2p+.....+a_np^{n-1}$$

$$[N/p^2]=a_0/p^2+a_1/p+a_2+.....+a_np^{n-2}$$
$$\vdots$$ $$[N/p^n]=a_0/p^n+a_1/p^{n-1}+a_2/p^{n-2}+\dots+a_n$$
adding them all would give
$$[N/p]+[N/p^2]+[N/p^3]+\dots+[N/p^n]=(a_1+a_2+a_3+\dots+a_n)+\tfrac1p(a_0+a_1+a_2+\dots+a_{n-1})+\tfrac1{p^2}(a_0+a_1+a_2+\dots+a_{n-2})+\dots+\tfrac1{p^n}(a_0)+p(a_2+a_3+\dots+a_n)+p^2(a_3+a_4+\dots+a_n)+\dots+p^n(a_n)$$



Though the things can be summed up to a large extent by the use of GP and writing $a_0+a_1+a_2+a_3....a_n=s$ but the expression is becoming pretty much complicated and I am unable to extract as the expression has become so much complicated.



Please provide the solution/hint(more preferable) to the question using only basic number theory expressions as it was in the first exercise of my book and the author wants me to solve it the simple way only.


Answer



The integer parts do not include fractions, so for example

$$\lfloor N/p\rfloor=a_1+a_2p+...+a_np^{n-1}\\ \lfloor N/p^2\rfloor=a_2+a_3p+...+a_np^{n-2}$$
When you add them up, try collecting the $a_i$ instead of $p$. For example, $a_3$ is multiplied by $p^2+p+1$.
Multiply the full sum by $p-1$ and try to make the result equal $N-s$.
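The target identity, $v_p(N!)=\frac{N-s}{p-1}$ with $s$ the sum of the base-$p$ digits, can be checked mechanically against Legendre's formula (function names here are mine, for illustration):

```python
def vp_factorial(N, p):
    """Exponent of the prime p in N!, by Legendre's formula."""
    e, q = 0, p
    while q <= N:
        e += N // q
        q *= p
    return e

def digit_sum(N, p):
    """Sum of the base-p digits of N (the coefficients a_i)."""
    s = 0
    while N:
        s += N % p
        N //= p
    return s

for p in [2, 3, 5, 7]:
    for N in range(1, 500):
        assert vp_factorial(N, p) == (N - digit_sum(N, p)) // (p - 1)
print("v_p(N!) = (N - s)/(p - 1) for all tested N, p")
```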


integration - does $\int_0^\infty x/(1+x^2 \sin^2 x)\, \mathrm dx$ converge or diverge?



$$\int_0^\infty x/(1+x^2\sin^2x) \mathrm dx$$



I'd be very happy if someone could help me out and tell me, whether the given integral converges or not (and why?). Thanks a lot.


Answer



Hint: The integral diverges, there is trouble when $x$ is large. For detail, use the fact that if $x \ge 1$, then
$1+x^2\sin^2 x \le 2x^2$ and therefore

$$\frac{x}{1+x^2\sin^2 x} \ge \frac{1}{2x}.$$


Tuesday, January 26, 2016

discrete mathematics - Prove if a (mod n) has a multiplicative inverse, then it's unique



Assume that an integer $a$ has a multiplicative inverse modulo an integer $n$. Prove that this inverse is unique modulo $n$.



I was given a hint that proving this Lemma: \begin{align} n \mid ab \ \wedge \ \operatorname{gcd}\left(n,a\right) = 1 \qquad \Longrightarrow \qquad n \mid b \end{align} should help me in finding the answer.


Here are my steps in trying to solve the problem: \begin{align} \operatorname{gcd}\left(n,a\right) = 1 & \qquad \Longrightarrow \qquad sn + ta = 1 \qquad \Longrightarrow \qquad sn = 1 - ta \\ & \qquad \Longrightarrow \qquad 1 \equiv ta \mod n \qquad \Longrightarrow \qquad ta \equiv 1 \mod n . \end{align} I know that having the GCD of m and a equal to 1 proves there is a multiplicative inverse mod n, but I'm not sure on how to prove $n \mid b$ with and how it helps prove the multiplicative inverse is unique.


Answer



For the first part note as $(n,a) = 1$, then there exist $x,y \in \mathbb{Z}$, s.t. $nx + ay = 1$. Multiply both sides by $b$ and you will get $nxb + (ab)y = b$. The LHS is obviously divisible by $n$, so then the RHS must be too. Hence $n \mid b$.


This thing proves that the inverse exist. To note that it's unique modulo $n$ assume that $aa_1 \equiv 1 \pmod n$ and $aa_2 \equiv 1 \pmod n$. Then we have that there exist integer $x,y$ s.t. $aa_1 + nx = 1$ and $aa_2 + ny = 1$. Subtracting the two equations we have that $a(a_1 - a_2) = n(y-x) \implies a(a_1 - a_2) \equiv 0 \pmod n \implies a_1 \equiv a_2 \pmod n$. Hence the multiplicative inverse is unique in $\mathbb{Z}_n$
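To see the argument in action, a hedged sketch (the helper names are mine, not from the post) that computes the inverse via the extended Euclidean algorithm and brute-force checks that exactly one residue class works:

```python
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse(a, n):
    g, s, _ = egcd(a % n, n)
    assert g == 1, "an inverse exists only when gcd(a, n) = 1"
    return s % n

n, a = 26, 7
inv = inverse(a, n)
# brute force: exactly one m in {0,...,n-1} satisfies a*m = 1 (mod n)
candidates = [m for m in range(n) if (a * m) % n == 1]
print(inv, candidates)
```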


limits - Sum of all the positive integers problem

The staff of Numberphile has shown that the sum of all the integers from $0$ to $\infty$ is $-\frac1{12}$. Recently I was looking for the sum of all the (positive) integers from $0$ to $n$ and I found that:
$$\sum_{i=0}^n i=\frac{n(n+1)}{2}$$
So I decided to take the limit:
$$\lim_{n\to \infty}\frac{n(n+1)}{2}$$
but that tends towards $\infty$ when I expected that to be $-\frac1{12}$!
Where did I go wrong? (The result is also confirmed by Wolfram Alpha.)

Monday, January 25, 2016

Difficult Induction Question Using Trigonometry and Inequalities

please put me out of my misery and give some hints as to how to complete this one:



Given that $$\sum_{k=1}^n x_k < \frac{\pi}{2}$$



Where $0\leq x_i \leq \frac{\pi}{2}$ for $i=1,2,3,4,\ldots,n$



Prove by mathematical induction that for $n=1,2,3,\ldots$ that:
$$\tan(x_1+\ldots+x_n)\geq \tan x_1 + \ldots + \tan x_n $$

functional equations - If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some t




Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$.




I am not very experienced with proof. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + g(y)$




I looked up the hint and it says let $g(x) = \log f(a^x) $



The wikipedia page for functional equations only states the form of the solutions without proof.



Attempt
Using the hint (which was like pulling a rabbit out of the hat)



Restricting the codomain $f:(0,+\infty)\rightarrow (0,+\infty)$
so that we can define the real function $g(x) = \log f(a^x)$ and we
have $$g(x+y) = g(x)+ g(y)$$




i.e $g(x) = xg(1)$ as $g(x)$ is continuous (assuming $f$ is).



Letting $\log_a f(a) = p$ we get $f(a^x) =a^p $. I do not have a rigorous argument but I think I can conclude that $f(x) = x^p$ (please fill any holes or unspecified assumptions) Different solutions are invited


Answer



So, we assume $f$ is continuous. Letting $g(x) = \ln(f(a^x))$, we get
$$
\begin{align*}
g(x+y) &= \ln(f(a^{x+y})) = \ln(f(a^xa^y)) = \ln(f(a^x)f(a^y))\\
&= \ln(f(a^x)) + \ln(f(a^y))\\

&= g(x)+g(y).
\end{align*}$$
So $g$ satisfies the Cauchy functional equation; if you assume $f$ is continuous, then so is $g$, hence $g(x) = xg(1)$ for all $x\in\mathbb{R}$.



Since $g(1) = \ln(f(a))$, we have
$$f(a^x) = e^{g(x)} = e^{g(1)x} = (e^{x})^{g(1)}.$$
Given $r\in \mathbb{R}$, $r\gt 0$, we have $r = a^{\log_a(r)}$, hence
$$\begin{align*}
f(r) &= f\left(a^{\log_a(r)}\right)\\
&= \left(e^{\log_a(r)}\right)^{g(1)}\\

&= \left(e^{\ln(r)/\ln(a)}\right)^{g(1)}\\
&= \left(e^{\ln(r)}\right)^{g(1)/\ln(a)}\\
&= r^{g(1)/\ln(a)},
\end{align*}$$
where we have used the change-of-base formula for the logarithm,
$$\log_a(r) = \frac{\ln r}{\ln a}.$$
Finally, since $g(1) = \ln(f(a))$, we have
$$f(r) = r^{\ln(f(a))/\ln(a)}.$$
As this works for any positive $a$, $a\neq 1$, taking $a=e$ we get
$$f(r) = r^{\ln(f(e))}.$$
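A numerical illustration of the conclusion (with an arbitrarily chosen exponent $p=2.5$): any $f(x)=x^p$ is multiplicative on the positives, and the exponent is recovered as $\ln f(a)/\ln a$ independently of the base $a$:

```python
import math

p = 2.5                      # arbitrary illustrative exponent
f = lambda x: x ** p

# multiplicativity on a few positive pairs
for x, y in [(0.3, 7.0), (2.0, 5.5), (1.7, 1.7)]:
    assert math.isclose(f(x * y), f(x) * f(y))

# the exponent is ln f(a) / ln a for any a > 0, a != 1
for a in [2.0, math.e, 10.0]:
    print(a, math.log(f(a)) / math.log(a))  # always 2.5
```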



Sunday, January 24, 2016

real analysis - Calculate $\lim_{x \to 0} \frac{\ln(1+2x)}{x^2}$ with the help of l'Hospital's and Bernoulli's rule.




Task:


Calculate $$\lim_{x \to 0} \frac{\ln(1+2x)}{x^2}$$ with the help of l'Hospital's and Bernoulli's rule.



My thoughts:


Because $\mathcal{D}(f)=\{x\mid x\in\mathbb{R} \land x>-\tfrac12 \land x\neq 0\}$ (the logarithm needs $1+2x>0$), the function is undefined at $0$ and therefore I need to find out whether the function has a limit or only one-sided limits. In order to do that, I'll just calculate the one-sided limits: $$\lim_{x \to 0^+} \frac{\ln(1+2x)}{x^2}\neq \lim_{x \to 0^-} \frac{\ln(1+2x)}{x^2} \implies \lim_{x \to 0} \frac{\ln(1+2x)}{x^2} \text{ doesn't exist}$$


$\lim_{x \to 0^-} \frac{\ln(1+2x)}{x^2}\overbrace{=}^{\text{l'Hospital}}\lim_{x \to 0^-} \frac{2/(2x+1)}{2x}=\lim_{x \to 0^-}\frac{1}{x(2x+1)}\overbrace{=}^{\text{product rule}}\underbrace{\lim_{x \to 0^-}\frac{1}{2x+1}}_{=1}\cdot \underbrace{\lim_{x \to 0^-}\frac{1}{x}}_{(*)}=1\cdot (*)=(*)$


$(*)$: As $x\to 0^-$, $1/x$ becomes arbitrarily large in magnitude and negative. Let $M>0$ and let $\delta = 1/M$. Then $\frac1x<\frac{-1}{\delta}=-M$ for all $x\in(-\delta,0)$. Since $M$ can be arbitrarily large: $$\lim_{x \to 0^-} \frac1x=-\infty$$


The case $x\to 0^+$ is analogous: $$\lim_{x \to 0^+} \frac{\ln(1+2x)}{x^2} = \cdots = \lim_{x \to 0^+} \frac1x=\infty$$ $\implies \lim_{x \to 0}$ doesn't exist.


Is this proof correct?


Answer




Yes your evaluation is fine, to check it by standard limits, we have that


$$ \frac{\ln(1+2x)}{x^2}=\frac{\ln(1+2x)}{2x}\frac{2}{x}\to 1\cdot\pm\infty$$


therefore the limit doesn't exist.


For a proof of the standard limit $\lim_{t\to 0}\frac{\ln(1+t)}{t}=1$, refer to the linked question.


algebra precalculus - Ratio of number of days in which $A,B,C$ finish the work alone

To complete a certain work, a workman $A$ alone would take $m$ times as many days as $B$ and
$C$ working together; $B$ alone would take $n$ times as many days as $A$ and $C$ together; $C$ alone
would take $p$ times as many days as $A$ and $B$ together : show that the numbers of days in
which each would do it alone are as $m + 1 : n + 1 : p + 1$



I just need hint as how to approach this problem as I am not able to initiate. Please provide some guidance

functions - Bijection from $[0,1]^3$ to $[0,1]$?





Is there any bijection from $[0,1]^3$ to $[0,1]$? How can I construct it?


Answer



Hint:



If there exists a surjection from $A$ to $B$ and a surjection from $B$ to $A$, then there exists a bijection between $A$ and $B$. In your case, space filling curves are surjections from $[0,1]$ to $[0,1]^3$. It should be easy to find a surjection going the other way.


elementary number theory - A simple modular arithmetic query.

Given $a,b,c\in\Bbb N$ with $\mathsf{gcd}(a,b)=\mathsf{gcd}(b,c)=\mathsf{gcd}(c,a)=1$ we know that there are $m_1,m_2,m_3\in\Bbb N$ such that $a\equiv m_1a^2\bmod abc$, $ab\equiv m_2ab\bmod abc$ and $b\equiv m_3b^2\bmod abc$ holds.


It is also easy to see there is a single $m$ such that $$a\equiv ma\bmod ab,\quad b\equiv mb\bmod ab$$ holds.


However how to find a single $m$ coprime to $c$ such that $$1\equiv ma\bmod abc,\quad 1\equiv mb\bmod abc$$ holds?



At least how to find a single $m$ such that $$\ell_1 a\equiv ma^2\bmod abc, \quad \ell_2 ab\equiv mab\bmod abc,\quad\ell_3b\equiv mb^2\bmod abc$$ holds where $0<\ell_1,\ell_2,\ell_3<\log abc$ holds and at least one of $\ell_1,\ell_2,\ell_3$ is distinct?


If not how small can we make $\max(\ell_1,\ell_2,\ell_3)$ where $a\nmid\ell_1$ and $b\nmid\ell_3$ holds?

Saturday, January 23, 2016

arithmetic - "Proof" that $\sqrt{x^2} = x$ by using index laws. Why is this wrong?




This is a very short argument that I have often seen about the definition of $|x|=\sqrt{x^2}$ for $x\in \mathbb{R}$. I'm not quite sure what is the flaw in this argument below, can someone please illustrate at what line this goes wrong and reference what "restriction" was broken from the original index law.



Claim: $\sqrt{x^2} = x$ where $x\in\mathbb{R}$.



Proof: Note that $$\begin{align*} \sqrt{x^2} &= (x^2)^{\frac{1}{2}} \\ &= x^{\frac{2}{2}} \\ &= x^{1}=x.\end{align*}$$



What is wrong with this?


Answer



First, a counterexample: $x = -1$, since $\sqrt{(-1)^2}=1\neq-1$.
The error in your reasoning is the following: the index law $(a^b)^c = a^{bc}$ is only guaranteed to hold if $a \geq 0$.



Friday, January 22, 2016

combinatorics - Binom - Combinatorical/Algebraic Proof



I've been given a couple of questions for an upcoming exams. We're having trouble answering these types of questions.



a. Calculate the value $\sum_{k=50}^{100} {100 \choose k}{k \choose 50}$ explicitly (without using the summation sign).



b. Calculate the value $\sum_{k=0}^{100} {100 \choose k}{200 \choose 200-k}$ explicitly (without using the summation sign).



c. Calculate the value $\sum_{k=2}^{50} k(k-1){50\choose k}$ explicitly (without using the summation sign).




d. Calculate the value $\sum_{k=0}^{20}{50\choose k}{50\choose 20- k}$ explicitly (without using the summation sign).



How do I solve these?



Thanks


Answer



We will give combinatorial arguments to evaluate these sums. Note that the numbers are specific to your sums but identical arguments can be used to prove the general cases.



A) Suppose there is a room of $100$ people and we want to select a cohort of at least $50$ them, say $k$. Moreover, we want to select $50$ of these $k$ people to get a special prize. The sum represents all ways to select a cohort of $50$ to $100$ people where $50$ get a special prize.




How else can we think of this? Instead, first pick the $50$ people to get a special prize. There are $\binom{100}{50}$ ways to do this. Then of the remaining $50$ people, select anywhere between $0$ and $50$ of them to also be in the cohort, which can be done in $2^{50}$ ways. Hence



$$
\sum_{k=50}^{100} \binom{100}{k} \binom{k}{50} = 2^{50} \binom{100}{50}.
$$



B) Consider a room of $300$ people, $100$ of which are male and $200$ are female. Suppose we want to select a committee of $200$ people. We can select $k$ males and $200-k$ females, of which there are $\binom{100}{k} \binom{200}{200-k}$ ways to do so. Summing from $k=0$ to $100$ gives all of the possible committees. Thus



$$
\sum_{k=0}^{100} \binom{100}{k} \binom{200}{200-k} = \binom{300}{200}.

$$



C) Suppose there are $50$ people and we must select a team of $k$ of them (between $2$ and $50$) where we designate a president and a vice president. There are $\binom{50}{k}$ ways to select the team, and $k(k-1)$ ways to select the president and vice president. Your sum is all ways to select such a team.



Equivalently, suppose first we chose a president and a vice president from the group of $50$. There are $50 \cdot 49$ ways to do so. Then we pick from $0$ to $48$ of the remaining people to be on the team. This can be done in $2^{48}$ ways. Hence



$$
\sum_{k=2}^{50} k(k-1)\binom{50}{k} = 50 \cdot 49 \cdot 2^{48}.
$$




D) This is identical to B) but with different numbers. The answer is



$$
\sum_{k=0}^{20} \binom{50}{k} \binom{50}{20-k} = \binom{100}{20}.
$$
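All four closed forms can be verified exactly with Python's `math.comb`:

```python
from math import comb

a = sum(comb(100, k) * comb(k, 50) for k in range(50, 101))
assert a == 2**50 * comb(100, 50)          # part a

b = sum(comb(100, k) * comb(200, 200 - k) for k in range(101))
assert b == comb(300, 200)                 # part b (Vandermonde)

c = sum(k * (k - 1) * comb(50, k) for k in range(2, 51))
assert c == 50 * 49 * 2**48                # part c

d = sum(comb(50, k) * comb(50, 20 - k) for k in range(21))
assert d == comb(100, 20)                  # part d
print("all four identities verified exactly")
```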


Thursday, January 21, 2016

elementary number theory - Proof that $\sqrt[3]{17}$ is irrational



Consider $\sqrt[3]{17}$. Like the famous proof that $\sqrt2$ is irrational, I also wish to prove that this number is irrational. Suppose it is rational, then we can write:



$$ 17 = \frac{p^3}{q^3}.$$
and then
$$ 17q^3 = p^3$$




With the proof of $\sqrt2$ we used the fact that we got an even number at this step in the proof and that $p$ and $q$ were in lowest terms. However, 17 is a prime number, somehow we could use this fact and the fact that every number has a unique prime factorisation to arrive at a contradiction, but I don't quite see it yet.


Answer



The argument that works with $2$ also works with $17$. Since $17q^3=p^3$, $17\mid p^3$ and therefore $17\mid p$. Can you take it from here?


functional equations - A non-zero function satisfying $g(x+y) = g(x)g(y)$ must be positive everywhere

Let $g: \mathbf R \to \mathbf R$ be a function which is not identically zero and which satisfies the equation
$$
g(x+y)=g(x)g(y) \quad\text{for all } x,y \in \mathbf{R}.
$$

Show that $g(x)\gt0$ for all $x \in \mathbf{R}$.

probability - throwing a dice repeatedly so that each side appear once.

Pratt is given a fair die. He repeatedly
throws the die until he gets
each number (1 to 6) at least once.




Define the random variable $X$ to be the total number of times that
Pratt throws the die, i.e.,
how many times he has to throw the die so that each side appears at least once.
Determine the expected value $E(X)$.
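This is the classic coupon-collector setup, so any closed form you derive should land near $6\left(1+\frac12+\dots+\frac16\right)=14.7$; a short stdlib simulation (fixed seed, helper name is mine) to check:

```python
import random

def rolls_until_all_sides(rng):
    seen, rolls = set(), 0
    while len(seen) < 6:
        seen.add(rng.randint(1, 6))
        rolls += 1
    return rolls

rng = random.Random(0)          # fixed seed for reproducibility
trials = 20000
est = sum(rolls_until_all_sides(rng) for _ in range(trials)) / trials
print(est)                      # should be close to 14.7
```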

proof writing - Mathematical induction (sum of the first few even numbers)

So the question was basically:
"Suppose that there are $n$ teams in a rugby league competition. Every team A
plays every other team B twice, once at the home ground for team A, and the other time
at the home ground for team B."



$2(n-1) + 2(n-2) + 2(n-3) + \dots + 6 + 4 + 2$ is given



a) Write the expression in summation notation.
b) Use mathematical induction to prove it for $n\geq2$.



So I got this expression for (a): $\sum_{i=1}^{n}2(n-i)$, where $n$ is the number of teams.




Part B



Proof:



Let $P(n)$ denote the statement $\sum_{i=1}^{n}2(n-i)=n^2-n$, for $n\geq2$.



Consider $P(2)$:
$\sum_{i=1}^{2}2(2-i)=2(2-1)+2(2-2)=2=2^2-2$
∴ it is true when $n=2$




We will now assume it is true for $P(k)$:



$\sum_{i=1}^{k}2(k-i)=k^2-k$ for some integer $k \geq 2$



Consider $P(k+1)$:



$\sum_{i=1}^{k+1}2(k+1-i)=2k+2(k-1)+2(k-2)+\dots+2+0$




Since we have assumed that $P(k)$ is true,



we know: $P(k+1)=P(k)+2((k+1)-1)=P(k)+2k$

ANSWER (I can't answer my own question for 8 hrs, so here it is):



$P(n)=n^2-n$



$P(k+1)=P(k)+2((k+1)-1)$



$P(k+1)=(k^2-k)+2(k+1)-2$



$P(k+1)=(k^2-k)+2k+2-2$



$P(k+1)=(k^2-k)+2k$



$P(k+1)=k^2+k$




$P(k+1)=(k+1)^2-(k+1)$



Therefore by induction the formula has been proven.



Thanks to @P..
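The claim $\sum_{i=1}^{n}2(n-i)=n^2-n$ is also easy to spot-check numerically (a throwaway helper name):

```python
def total_games(n):
    # 2(n-1) + 2(n-2) + ... + 4 + 2 (the i = n term is 0)
    return sum(2 * (n - i) for i in range(1, n + 1))

for n in range(2, 60):
    assert total_games(n) == n * n - n
print("sum equals n^2 - n for all tested n")
```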

Wednesday, January 20, 2016

algebra precalculus - $\lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} + \dots + \sqrt{n}}{n\sqrt{n}}$



How do I find the following limit?



$$
\lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} + ... + \sqrt{n}}{n\sqrt{n}}
$$



The answer (from Wolfram) is $\frac{2}{3}$, but I'm not sure how to proceed.




Is this an application of the Squeeze theorem? I'm not quite sure.


Answer



$$\lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^{n}\sqrt{\frac{k}{n}} = \int_{0}^{1}\sqrt{x}\,dx = \color{red}{\frac{2}{3}}$$
by Riemann sums and the integrability of $\sqrt{x}$ over $[0,1]$.



For a more elementary approach, notice that $\sqrt{k}$ is pretty close to $\frac{2}{3}\left[\left(k+\frac{1}{2}\right)^{3/2}-\left(k-\frac{1}{2}\right)^{3/2}\right]$ and apply creative telescoping and squeezing.
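A quick numerical check of the Riemann-sum limit (the values approach $2/3$ from above, empirically like $2/3+\frac{1}{2n}$; that rate is an observation, not needed for the argument):

```python
import math

def ratio(n):
    return sum(math.sqrt(k) for k in range(1, n + 1)) / (n * math.sqrt(n))

for n in [10, 1000, 100000]:
    print(n, ratio(n))  # tends to 2/3
```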


elementary set theory - Bijection between [a,b) and [a,b] intervals.

I need help with finding a bijection between these two intervals: $[a,b)$ and $[a,b]$. It is given that $a,b$ are in $\mathbb R$ (real numbers). I know how to construct a bijection if $a,b$ are integers, but I have no idea how to do it if they are real. Any ideas?


Edit: If anyone is still watching, can you tell me if I'm doing this right?


So the bijection will be f(x) = 1) a if x = a and 2) b/p-1 if x != a and x = b/p , where p is in R - {0}.



By taking the p from R - {0} and not from N, I think this could work for real numbers.


Because a < b, every number from a to b can be written as b/p. So the function moves every number upwards if I express myself that way, thus reaching the b.

calculus - Convergence of $\sum_{n=1}^{\infty} \frac{\sin(n)}{n}$



I am trying to argue that
$$
\sum_{n=1}^{\infty} \frac{\sin(n)}{n}
$$

is divergent. I figured it must be divergent because $\sin(n)$ is bounded and there is an $n$ on the bottom. But I have to use one of the tests in Stewart's Calculus book and I can't figure it out. I can't use the Comparison Tests or the Integral Test because they require positive terms. I can't take absolute values; that would only show that it is not absolutely convergent (and so it might still be convergent). The Divergence Test also doesn't work.



I see from this question:



Evaluate $ \sum_{n=1}^{\infty} \frac{\sin \ n}{ n } $ using the fourier series



that the series is actually convergent, but using some math that I don't know anything about. My questions are



(1) Is this series really convergent?




(2) Can this series be handled using the tests in Stewart's Calculus book?


Answer



One may apply the Dirichlet test, noticing that




  • $\displaystyle \frac1{n+1} \le \frac1{n}$


  • $\displaystyle \lim_{n \rightarrow \infty}\frac1{n} = 0$


  • $\displaystyle \left|\sum^{N}_{n=1}\sin
    n\right|=\left|\text{Im}\sum^{N}_{n=1}e^{in}\right| \leq
    \left|e^i\frac{1-e^{iN}}{1-e^i}\right| \leq \frac{2}{|1-e^i|}<\infty,\qquad N\ge1,$




giving the convergence of the given series.


combinatorics - why is $\sum\limits_{k=1}^{n} k^m$ a polynomial with degree $m+1$ in $n$



why is $\sum\limits_{k=1}^{n} k^m$ a polynomial with degree $m+1$ in $n$?




I know this is well-known. But how to prove it rigorously? Even mathematical induction does not seem so straight-forward.



Thanks.


Answer



Let $V$ be the space of all polynomials $f : \mathbb{N}_{\ge 0} \to F$ (where $F$ is a field of characteristic zero). Define the forward difference operator $\Delta f(n) = f(n+1) - f(n)$. It is not hard to see that the forward difference of a polynomial of degree $d$ is a polynomial of degree $d-1$, hence defines a linear operator $V_d \to V_{d-1}$ where $V_d$ is the space of polynomials of degree at most $d$. Note that $\dim V_d = d+1$.



We want to think of $\Delta$ as a discrete analogue of the derivative, so it is natural to define the corresponding discrete analogue of the integral $(\int f)(n) = \sum_{k=0}^{n-1} f(k)$. But of course we need to prove that this actually sends polynomials to polynomials. Since $(\int \Delta f)(n) = f(n) - f(0)$ (the "fundamental theorem of discrete calculus"), it suffices to show that the forward difference is surjective as a linear operator $V_d \to V_{d-1}$.



But by the "fundamental theorem," the image of the integral is precisely the subspace of $V_d$ of polynomials such that $f(0) = 0$, so the forward difference and integral define an isomorphism between $V_{d-1}$ and this subspace.




More explicitly, you can observe that $\Delta$ is upper triangular in the standard basis, work by induction, or use the Newton basis $1, n, {n \choose 2}, {n \choose 3}, ...$ for the space of polynomials. In this basis we have $\Delta {n \choose k} = {n \choose k-1}$, and now the result is really obvious.



The method of finite differences provides a fairly clean way to derive a formula for $\sum n^m$ for fixed $m$. In fact, for any polynomial $f(n)$ we have the "discrete Taylor formula"



$$f(n) = \sum_{k \ge 0} \Delta^k f(0) {n \choose k}$$



and it's easy to compute the numbers $\Delta^k f(0)$ using a finite difference table and then to replace ${n \choose k}$ by ${n \choose k+1}$. I wrote a blog post that explains this, but it's getting harder to find; I also explained it in my notes on generating functions.
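A sketch of the discrete Taylor formula in code (the function name is mine): compute $\Delta^k f(0)$ for $f(n)=n^m$ from a finite difference table, then "integrate" by shifting $\binom nk$ to $\binom n{k+1}$:

```python
from math import comb

def power_sum(m, n):
    """0^m + 1^m + ... + (n-1)^m via the discrete Taylor formula."""
    vals = [j**m for j in range(m + 2)]      # f(0), ..., f(m+1)
    coeffs = []
    while vals:
        coeffs.append(vals[0])               # Delta^k f(0)
        vals = [b - a for a, b in zip(vals, vals[1:])]
    # "integrate": replace C(n, k) by C(n, k+1)
    return sum(c * comb(n, k + 1) for k, c in enumerate(coeffs))

for m in range(6):
    for n in range(20):
        assert power_sum(m, n) == sum(j**m for j in range(n))
print("discrete Taylor power sums match brute force")
```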


elementary number theory - What is the largest power of 2 that divides $200!/100!$.


What is the largest power of 2 that divides $200!/100!$.


No use of a calculator is allowed. I had proceeded in a brute-force way which I now regret. I would like to know your methods.


Answer




Find highest power of $2$ in $200!$ and $100!$, using Legendre's formula


In $200!$, highest power of $2$


$$=\lfloor 200/2 \rfloor +\lfloor 200/4 \rfloor +\lfloor 200/8 \rfloor +\lfloor 200/16 \rfloor +\lfloor 200/32 \rfloor +\lfloor 200/64 \rfloor +\lfloor 200/128 \rfloor $$


$$=100+50+25+12+6+3+1=197$$


In $100!$, highest power of $2$


$$=\lfloor 100/2 \rfloor +\lfloor 100/4 \rfloor +\lfloor 100/8 \rfloor +\lfloor 100/16 \rfloor +\lfloor 100/32 \rfloor +\lfloor 100/64 \rfloor$$


$$= 50 + 25+12+6+3+1 =97$$


Now, just subtract the two, and we get $100$ as the answer.
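The two Legendre sums are easy to confirm programmatically:

```python
def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

print(legendre(200, 2), legendre(100, 2), legendre(200, 2) - legendre(100, 2))
# prints: 197 97 100
```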


Tuesday, January 19, 2016

measure theory - Prove that $\lim_{n \to \infty} \int_{-\infty}^{\infty} \sin(nt) f(t)\, \mathrm dt = 0$.

I am trying to prove that $ \lim_{n \to \infty} \int_{-\infty}^{\infty} \sin(nt) f(t) d t = 0 $ for every Lebesgue integrable function $ f $ on $ \mathbb{R} $. My first thoughts were to use Dominated Convergence Theorem but I realised that there is no pointwise limit of the sequence of functions $ f_n = \sin(nt) f(t) $. I do not know how to proceed.


Any help would be appreciated. Thanks!

calculus - What is the limit of the sequence $\left(\frac{3-4n}{1+n}\right)\left(1+\frac1n\right)^n$?



I am trying to find the limit of the sequence
$$
\left(\frac{3-4n}{1+n}\right)\left(1+\frac1n\right)^n
$$
I am aware that if one sequence converges and another sequence converges then the multiplication of two sequences also converge. The limit of the first sequence is $-4$. However I do not know how to calculate the limit of the second sequence.



Answer



Here is one approach
$$ \left( 1+\frac{1}{n}\right)^{n}=e^{ n \ln(1+\frac{1}{n}) } = e^{ n (\frac{1}{n}-\frac{1}{2 n^2}+\dots) } = e^{1-\frac{1}{2n}+\dots}\underset{\infty}{\longrightarrow} e $$
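Combining this with the first factor's limit of $-4$, the sequence should tend to $-4e\approx-10.873$; a numerical sanity check:

```python
import math

def term(n):
    return (3 - 4 * n) / (1 + n) * (1 + 1 / n) ** n

print(term(10**6), -4 * math.e)  # the two values should be very close
```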


Compute the square root of a complex number


This is a follow up to a previous question. I solved the equation $z^4 - 6z^2 + 25 = 0$ and I found four answer to be $z = \pm\sqrt{3 \pm 4i}$.


However someone in the comment said that the answer is going to be $2+i$, $2-i$, $-2+i$, $-2-i$. I cannot understand how we can find these roots from the answer that I found. How are we supposed to compute the square root of a complex number?


Answer



Hint:


Let $x + yi = \sqrt{a + bi}$. Then $(x+ yi)^2 = a + bi$. Then solve for $x$ and $y$ and you will generally have two sets of values for the square root $ \sqrt{a + bi}$



Example:


Say you want to compute $\sqrt{3 + 4i}$. Then assume the square root is $a + bi$. That is $a + bi = \sqrt{3 + 4i} \implies (a + bi)^2 = (a^2 - b^2) + 2abi = 3 + 4i$. Now solve the equations $ (a^2 - b^2) = 3$ and $2ab = 4$ to find $a$ and $b$.
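As a check of the worked example (the equations $a^2-b^2=3$, $2ab=4$ give $a=\pm2$, $b=\pm1$), Python's `cmath` agrees, and all four values solve the original quartic:

```python
import cmath

r = cmath.sqrt(3 + 4j)          # principal square root, approximately 2+1j
print(r, -r)

# check all four candidate roots of z^4 - 6z^2 + 25 = 0
for z in (2 + 1j, 2 - 1j, -2 + 1j, -2 - 1j):
    assert abs(z**4 - 6 * z**2 + 25) < 1e-9
print("all four values are roots")
```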


elementary number theory - Find $21^{1234}\pmod{100}\equiv\, ?$



The I'm having trouble to do this only by hand (no software or calculator). I tried the following:



\begin{align}21^{1234}(\text{mod} \ 100) &= 21^{1000}21^{200}21^{30}21^4(\text{mod} \ 100)\equiv41^{500}41^{100}41^{15}41^2(\text{mod} \ 100)\\
\end{align}



It's not reasonable to continue taking powers of 21, takes to long with pen and paper. Is there a more efficient way?



Yes I know about the Euler theorem and his totient function but please I don't want t use it, only elementary methods.


Answer



Does the binomial theorem count as an elementary method? If so, we can just do $21^{1234} \equiv (20+1)^{1234} \equiv 20^{1234} + \binom{1234}{1} \cdot 20^{1233} + ... + 20\cdot \binom{1234}{1} + 1 \pmod{100}$. Then, if the exponent of $20$ is greater than or equal to $2$, it is divisible by $100$, so we simply get $20 \cdot 1234 + 1 \equiv 81 \pmod{100}$.



Otherwise, just break it down $\pmod{4}$ and $\pmod{25}$, and evaluate the first few terms, which should give you $1 \pmod{4}$ and $6 \pmod{25} \implies 81 \pmod{100}$.
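Both the final residue and the two CRT pieces mentioned at the end can be confirmed with Python's three-argument `pow`:

```python
print(pow(21, 1234, 100))   # 81

# the mod-4 / mod-25 cross-check
assert pow(21, 1234, 4) == 1
assert pow(21, 1234, 25) == 6
assert pow(21, 1234, 100) == 81
```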


Monday, January 18, 2016

functional equations - Show that f is linear



Let $f : \mathbb R \to \mathbb R$ be a solution of the additive Cauchy functional equation satisfying the condition
$$f(x) = x^2 f(1/x)\quad \forall x \in \mathbb R\setminus \{0\}.$$
Then show that $f(x) = cx,$ where $c$ is an arbitrary constant.


Answer



Let $F(x)=f(x)-xf(1)$




For some $x\neq 0$, $F(\frac{1}{x})=f(\frac{1}{x})-\frac{1}{x}f(1)$.



Hence for $x\neq 0$ $$\begin{align}
x^2F(\frac{1}{x})&=x^2f(\frac{1}{x})-xf(1)\\&=f(x)-xf(1)\\&=F(x)\end{align}$$ and of course $F(1)=0$ and $F$ is additive.






Let us prove that $\forall x\in \mathbb R, F(x)=-F(-x)$



Indeed, $0=F(1)=F(x+1-x)=F(x)+F(1)+F(-x)=F(x)+F(-x)$







Also, for any $x\neq 0,-1$,



$$\begin{align}
F(x)=F(x+1)
&=(x+1)^2F\left(\frac{1}{x+1}\right)\\
&=(x+1)^2F\left(1-\frac{x}{x+1}\right)\\
&=-(x+1)^2F\left(\frac{x}{x+1}\right)\\

&=-(x+1)^2\left(\frac{x}{x+1}\right)^2F\left(\frac{x+1}{x}\right)\\
&=-x^2F\left(1+\frac{1}{x}\right)\\
&=-x^2F\left(\frac{1}{x}\right)\\
&=-F(x)\\\end{align}$$



This also holds for $x=-1$ since $F(-1)=-F(1)=0$, and for $x=0$ since $F(0)=0$ by additivity.



Hence $2F=0$.



Hence $F=0$ and we're done.



calculus - The limit of $\lim\limits_{x \to \infty}\sqrt{x^2+3x-4}-x$



I tried all I know and I always get to $\infty$, Wolfram Alpha says $\frac{3}{2}$. How should I simplify it?



$$\lim\limits_{x \to \infty}\sqrt{x^2+3x-4}-x$$




I tried multiplying by its conjugate, taking the squared root out of the limit, dividing everything by $\sqrt{x^2}$, etc.



Obs.: Without using l'Hôpital's.


Answer



Note that
\begin{align}
\sqrt{x^2+3x-4} - x & = \left(\sqrt{x^2+3x-4} - x \right) \times \dfrac{\sqrt{x^2+3x-4} + x}{\sqrt{x^2+3x-4} + x}\\
& = \dfrac{(\sqrt{x^2+3x-4} - x)(\sqrt{x^2+3x-4} + x)}{\sqrt{x^2+3x-4} + x}\\
& = \dfrac{x^2+3x-4-x^2}{\sqrt{x^2+3x-4} + x} = \dfrac{3x-4}{\sqrt{x^2+3x-4} + x}\\
& = \dfrac{3-4/x}{\sqrt{1+3/x-4/x^2} + 1}

\end{align}
Now we get
\begin{align}
\lim_{x \to \infty}\sqrt{x^2+3x-4} - x & = \lim_{x \to \infty} \dfrac{3-4/x}{\sqrt{1+3/x-4/x^2} + 1}\\
& = \dfrac{3-\lim_{x \to \infty} 4/x}{1 + \lim_{x \to \infty} \sqrt{1+3/x-4/x^2} } = \dfrac{3}{1+1}\\
& = \dfrac32
\end{align}


group theory - What is $\operatorname{Aut}(\mathbb{R},+)$?


I was solving some exercises about automorphisms. I was able to show that $\operatorname{Aut}(\mathbb{Q},+)$ is isomorphic to $\mathbb{Q}^{\times}$. The isomorphism is given by $\Psi(f)=f(1)$, but when I try to do the same thing with $\operatorname{Aut}(\mathbb{R},+)$ I got stuck.


My question is: What is $\operatorname{Aut}(\mathbb{R},+)$?


I would appreciate your help.


Answer



First note that for every $\alpha\in\mathbb R^\times$ we have that $x\mapsto\alpha\cdot x$ is an automorphism of $(\mathbb R,+)$.


Also note that if $f(x+y)=f(x)+f(y)$ then $f(2)=f(1)+f(1)$, and by induction $f(n)=n\cdot f(1)$ for $n\in\mathbb N$, equally $f(k)=k\cdot f(1)$ for $k\in\mathbb Z$. This carries to rationals as well, so $f\left(\frac{p}{q}\right)=\frac{p}{q}\cdot f(1)$.


Now, if $f$ is continuous then for every $x\in\mathbb R$ we have $f(x)=x\cdot f(1)$. So setting $\alpha=f(1)$ gives us that the continuous solutions are the solutions defined by $\mathbb R^\times$, therefore $\mathrm{Aut}(\mathbb R,+)\cap\{f\in\mathbb R^\mathbb R\mid f\text{ continuous}\}\cong(\mathbb R^\times,\cdot\ )$.


The existence of non-continuous solutions requires some axiom of choice, since such solutions generate Lebesgue non-measurable sets. So if we assume that every set is Lebesgue measurable (e.g. in Solovay's model, or in models of Determinacy) then indeed there are no other solutions.



However, assuming the axiom of choice we can generate a basis for the vector space $\mathbb R$ over $\mathbb Q$. Note that every permutation of this basis can be extended to an automorphism of the vector space, namely $(\mathbb R,+)$.


The cardinality of such a basis (known as a Hamel basis) is $2^{\aleph_0}$, and we have $2^{2^{\aleph_0}}$ many non-continuous solutions if we assume that such a basis exists.


For further reading:


  1. Is there a non-trivial example of a $\mathbb Q-$endomorphism of $\mathbb R$?

  2. Horst Herrlich, The Axiom of Choice. Springer, 2006. (In particular section 5.1)

probability - Expected number of rolls?

Suppose a fair six-sided die has the following sides: 1, 1, 1, 1, 4, 4. The die is rolled twice. The mixed outcomes [1,4] and [4,1] are considered "successes" while the outcomes [1,1] and [4,4] are considered "failures." What is the expected number of rolls to achieve the first success?


I am having trouble here because the die is rolled twice and am not quite sure how to calculate this expectation.
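A quick Monte Carlo sketch (helper names are mine) to sanity-check whatever expectation you derive: each trial consumes two rolls, and a pair succeeds with probability $2\cdot\frac46\cdot\frac26=\frac49$, so the estimate should land near $2\cdot\frac94=4.5$ rolls:

```python
import random

SIDES = [1, 1, 1, 1, 4, 4]

def rolls_to_first_success(rng):
    rolls = 0
    while True:
        pair = (rng.choice(SIDES), rng.choice(SIDES))
        rolls += 2
        if pair[0] != pair[1]:      # one 1 and one 4, in either order
            return rolls

rng = random.Random(1)              # fixed seed for reproducibility
trials = 50000
est = sum(rolls_to_first_success(rng) for _ in range(trials)) / trials
print(est)
```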

sequences and series - Value of this sum?




Let $S_k$, $k=1,2,3,4...,100$ denote the sum of infinite geometric series whose first term is $\frac{k-1}{k!}$ and the common ratio is $\frac{1}{k}$. Then the value of $$\frac{100^2}{100!}+\sum^{100}_{k=1}|(k^2-3k+1)S_k| \text{ is?}$$



All I could figure out in this one is:



$S_k=\frac{(k-1)k}{k!(k-1)}$



But I'm not sure on how to continue. Please help


Answer



What you found for $S_k$ simplifies to $S_k={1\over(k-1)!}$ for $k\ge2$; for $k=1$ every term of the series is $0$ (the first term is ${0\over1!}=0$), so $S_1=0$. Now $k^2-3k+1\gt0$ if $k\ge3$, so you are asking for the value of




$${100^2\over100!}+|{-1}|S_1+|{-1}|S_2+\sum_{k=3}^{100}{k^2-3k+1\over(k-1)!}={100\over99!}+0+1+\sum_{k=3}^{100}{(k-1)^2-k\over(k-1)!}$$



But



$$\sum_{k=3}^{100}{(k-1)^2-k\over(k-1)!}=\sum_{k=3}^{100}{k-1\over(k-2)!}-\sum_{k=3}^{100}{k\over(k-1)!}=\sum_{k=2}^{99}{k\over(k-1)!}-\sum_{k=3}^{100}{k\over(k-1)!}={2\over1!}-{100\over99!}$$



and thus



$${100^2\over100!}+\sum_{k=1}^{100}|(k^2-3k+1)S_k|={100\over99!}+0+1+2-{100\over99!}=3$$
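The closed form for $S_k$ and the telescoping step can be double-checked with exact rational arithmetic (a quick sanity check in Python using the `fractions` module):

```python
from fractions import Fraction
from math import factorial

def S(k):
    # geometric series with first term (k-1)/k! and common ratio 1/k (k >= 2)
    first = Fraction(k - 1, factorial(k))
    return first / (1 - Fraction(1, k))

# closed form S_k = 1/(k-1)! for k >= 2
assert all(S(k) == Fraction(1, factorial(k - 1)) for k in range(2, 101))

# telescoping step: sum_{k=3}^{100} ((k-1)^2 - k)/(k-1)! = 2 - 100/99!
lhs = sum(Fraction((k - 1) ** 2 - k, factorial(k - 1)) for k in range(3, 101))
assert lhs == 2 - Fraction(100, factorial(99))
```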


Polynomials with integer coefficients such that $p(a)=b,p(b)=c,p(c)=a$

While studying polynomials a question struck in my mind that



To have a polynomial $p(x)$ with integer coefficients and $a,b,c$ are three distinct integers, then is it possible to have $$p(a)=b$$$$p(b)=c$$$$p(c)=a$$
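Not a proof, but a brute-force experiment: searching over small quadratics (which also covers linear and constant polynomials when the leading coefficients are zero) and small integer triples turns up no example:

```python
from itertools import product

# search p(x) = u*x^2 + v*x + w with integer coefficients in [-3, 3]
# for distinct integers a, b, c in [-5, 5] with p(a)=b, p(b)=c, p(c)=a
found = []
for u, v, w in product(range(-3, 4), repeat=3):
    p = lambda x: u * x * x + v * x + w
    for a, b, c in product(range(-5, 6), repeat=3):
        if a != b != c != a and p(a) == b and p(b) == c and p(c) == a:
            found.append((u, v, w, a, b, c))
assert found == []  # no 3-cycle shows up in this range
```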

calculus - Another proof of $limlimits_{theta to 0} frac{sintheta}{theta} = 1$



I am sorry if this is a duplicate question, but as far as I searched I have not come across this question.





$\lim\limits_{\theta \to 0} \frac{\sin\theta}{\theta} = 1$. This formula is famously proven by geometrical means using area of a circle and so on..




I want to know whether this proof via L'Hôpital's rule is also acceptable.
$$\lim\limits_{\theta \to 0} \frac{\sin\theta}{\theta} = \lim_{\theta \to 0} \frac{\cos \theta}{1} = 1$$
Is there any other method of proof apart from the two above? Please share. Thank you!


Answer



$$\sin\theta=\sum_{n=0}^\infty \frac{(-1)^n{\theta}^{2n+1}}{(2n+1)!}=\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}-\cdots$$then$$\frac{\sin\theta}{\theta}=1-\frac{\theta^2}{3!}+\frac{\theta^4}{5!}-\cdots$$ and the limit as $\theta\rightarrow 0$ is $1$.
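As a numerical sanity check of the series manipulation (truncating the series at ten terms, which is far more than enough for small $\theta$):

```python
import math

theta = 0.1
# partial sum of sin(theta)/theta = sum_{n>=0} (-1)^n theta^(2n) / (2n+1)!
s = sum((-1) ** n * theta ** (2 * n) / math.factorial(2 * n + 1) for n in range(10))
assert abs(s - math.sin(theta) / theta) < 1e-12
# and the ratio itself is close to 1 for small theta
assert abs(math.sin(1e-6) / 1e-6 - 1) < 1e-9
```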


Sunday, January 17, 2016






functional equations - Do there exist functions satisfying $f(x+y)=f(x)+f(y)$ that aren't linear?



Do there exist functions $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $f(x+y)=f(x)+f(y),$ but which aren't linear? I bet you they exist, but I can't think of any examples.



Furthermore, what hypotheses do we need to put on $f$ before no such functions exist? I feel continuity should be enough.


Answer



Yes continuity is enough: You can quickly show that $f(x)=x\cdot f(1)$ for $x\in\mathbb N$, then for $x\in\mathbb Z$ and then for $x\in\mathbb Q$; assuming continuity, this implies validity for all $x\in\mathbb R$.




Any other functions only exist per Axiom of Choice: View $\mathbb R$ as a vector space over $\mathbb Q$ and take any $\mathbb Q$-linear map (which need not be $\mathbb R$-linear).


sequences and series - Why does the sum of inverse squares equal $pi^2/6$?


I've heard that $1+1/4+1/9+1/16+1/25+...$ converges to $\pi^2/6$. This was very surprising to me and I was wondering if there was a reason that it converges to this number?


I am also confused about why $\pi$ would be involved. If someone could provide a proof or an intuitive reason/derivation, I would appreciate it. I am only able to understand high school maths, however (year 12).


Answer



Some of the proofs in the given link are somewhat technical, so I'll try to present an accessible variant of one of them.



Consider the function $$f(x):=\frac{\sin\pi\sqrt x}{\pi\sqrt x}.$$


This function has roots for every perfect square $x=n^2$, and it can be shown to equal the infinite product of the binomials for the corresponding roots


$$p(x):=\left(1-\frac x{1^2}\right)\left(1-\frac x{2^2}\right)\left(1-\frac x{3^2}\right)\left(1-\frac x{4^2}\right)\cdots$$


(obviously, $p(0)=f(0)=1$ and $p(n^2)=f(n^2)=0$.)


If we expand this product to the first degree, we get


$$1-\left(\frac1{1^2}+\frac1{2^2}+\frac1{3^2}+\frac1{4^2}+\cdots\right)\,x+\cdots$$


On the other hand, the Taylor development to the first order is


$$f(0)+f'(0)\,x+\cdots=1-\frac{\pi^2}6x+\cdots$$ hence the claim by identification.


The plot shows the function $f$ in blue, the linear approximation in red, and the products of the first $4$, $5$ and $6$ binomials, showing that they coincide better and better.


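As a numerical sanity check, the partial sums of $\sum 1/n^2$ do approach $\pi^2/6$ (truncating at $10^6$ terms leaves a tail of roughly $10^{-6}$):

```python
import math

# partial sum of the reciprocal squares; the tail after N terms is about 1/N
N = 10 ** 6
s = sum(1 / n ** 2 for n in range(1, N + 1))
assert abs(s - math.pi ** 2 / 6) < 1e-5
```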



abstract algebra - Field and Field Axioms.



I wanted to ask: what are fields and the field axioms? I have tried looking on Wikipedia and Wolfram, but they are too advanced and I can't understand them, so any help would be much appreciated. There is also a question in my book: "What is the difference between the real and the complex fields?" As I don't know anything about fields, I don't know the answer and can't think of one either, so again any help in understanding fields and the field axioms would be much appreciated.
Thanks very much.


Answer



A field is a set together with two operations, addition and multiplication, which satisfy the following calculation rules:



Addition rules





  1. Addition has the neutral element $0$

  2. Addition has an inverse (adding the negative of a number)

  3. Addition is associative and commutative



Multiplication rules




  1. Multiplication has the neutral element $1$

  2. Multiplication is invertible for every element except $0$ (by division)

  3. Multiplication is associative and commutative

  4. And another important fact is that the distributive law holds!




The real field consists of the real numbers with the usual addition and multiplication, while the complex field consists of the complex numbers.
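To make the axioms concrete, here is a small sanity check verifying them numerically for the rational numbers, using Python's exact `Fraction` type (a random sample is of course only an illustration, not a proof):

```python
from fractions import Fraction
import random

random.seed(0)
nums = [Fraction(random.randint(-9, 9), random.randint(1, 9)) for _ in range(15)]

for a in nums:
    assert a + 0 == a and a * 1 == a      # neutral elements
    assert a + (-a) == 0                  # additive inverse
    if a != 0:
        assert a * (1 / a) == 1           # multiplicative inverse (a != 0 only!)
    for b in nums:
        assert a + b == b + a and a * b == b * a  # commutativity
        for c in nums[:5]:
            assert (a + b) + c == a + (b + c)     # associativity of +
            assert (a * b) * c == a * (b * c)     # associativity of *
            assert a * (b + c) == a * b + a * c   # distributive law
```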


integration - Integrating $int^{infty}_0 e^{-x^2},dx$ using Feynman's parametrization trick


I stumbled upon this short article last weekend; it introduces an integral trick that exploits differentiation under the integral sign. On its last page, the author, Mr. Anonymous, left several exercises without any hints, one of which is to evaluate the Gaussian integral $$ \int^\infty_0 e^{-x^2} \,dx= \frac{\sqrt{\pi}}{2} $$ using this parametrization trick. I have been attacking it through trial and error with different parametrizations, but no luck so far.



Here are what I have tried so far:



  • A first instinct would be do something like:$$ I(b) = \int^\infty_0 e^{-f(b)x^2}\,dx $$ for some permissible function $f(\cdot)$, differentiating it will lead to a simple solvable ode: $$ \frac{I'(b)}{I(b)} = -\frac{f'(b)}{2f(b)} $$ which gives: $$ I(b) = \frac{C}{\sqrt{f(b)}}. $$ However, finding this constant $C$ basically is equivalent to evaluating the original integral, we are stuck here without leaving this parametrization trick framework.




  • A second try involves an exercise on the same page: $$ I(b) = \int^\infty_0 e^{-\frac{b^2}{x^2}-x^2}dx. $$ Taking derivative and rescaling the integral using change of variable we have: $$ I'(b) = -2I(b). $$ This gives us another impossible to solve constant $C$ in: $$ I(b) = C e^{-2b} $$ without leaving this framework yet again.





  • The third try is trying modify Américo Tavares's answer in this MSE question: $$ I(b) = \int^\infty_0 be^{-b^2x^2}\,dx. $$ It is easy to show that: $$ I'(b) = \int^\infty_0 e^{-b^2x^2}\,dx - \int^\infty_0 2b^2 x^2 e^{-b^2x^2}\,dx = 0 $$ by an integration by parts identity: $$ \int^\infty_0 x^2 e^{- c x^2}\,dx = \frac{1}{2c}\int^\infty_0 e^{- c x^2}\,dx . $$ Then $I(b) = C$, ouch, stuck again at this constant.




Notice that in the question Proving $\displaystyle\int_{0}^{\infty} e^{-x^2} dx = \frac{\sqrt \pi}{2}$, Bryan Yock's answer is somewhat similar to the idea of parametrization; however, he has to introduce another parametric integration to produce a definite integral leading to $\arctan$.


Is there such a one shot parametrization trick solution like the author Anonymous claimed to be "creative parameterizations and a dose of differentiation under the integral"?


Answer



Just basically independently reinvented Bryan Yock's solution as a more 'pure' version of Feynman.


Let $$I(b) = \int_0^\infty \frac {e^{-x^2}}{1+(x/b)^2} \mathrm d x = \int_0^\infty \frac{e^{-b^2y^2}}{1+y^2} b\,\mathrm dy$$ so that $I(0)=0$, $I'(0)= \pi/2$ and $I(\infty)$ is the thing we want to evaluate.


Now note that rather than differentiating directly, it's convenient to multiply by some stuff first to save ourselves some trouble. Specifically, note


$$\left(\frac 1 b e^{-b^2}I\right)' = -2b \int_0^\infty e^{-b^2(1+y^2)} \mathrm d y = -2 e^{-b^2} I(\infty)$$


Then usually at this point we would solve the differential equation for all $b$, and use the known information at the origin to infer the information at infinity. Not so easy here because the indefinite integral of $e^{-x^2}$ isn't known. But we don't actually need the solution in between; we only need to relate information at the origin and infinity. Therefore, we can connect these points by simply integrating the equation definitely; applying $\int_0^\infty \mathrm d b$ we obtain



$$-I'(0)= -2 I(\infty)^2 \quad \implies \quad I(\infty) = \frac{\sqrt \pi} 2$$
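As a numerical cross-check of the final value (a midpoint-rule quadrature on $[0,8]$; the tail of the integrand beyond $8$ is below $e^{-64}$ and can be ignored):

```python
import math

# midpoint rule on [0, 8]; the tail beyond 8 is under e^{-64}, so negligible
n = 200000
h = 8 / n
integral = h * sum(math.exp(-(((i + 0.5) * h) ** 2)) for i in range(n))
assert abs(integral - math.sqrt(math.pi) / 2) < 1e-8
```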


elementary number theory - Problems With Divisors



While surfing on the internet I found a divisibility rule for $47$:





Subtract 14 times the last digit from the remaining truncated number. Repeat the step as necessary. If the result is divisible by 47, the original number is also divisible by 47. This too is difficult to apply for people who are not comfortable with the table of 14.




I was wondering if we could prove it. Any help is appreciated.


Answer



As $47\cdot3-140=1$



$$47\cdot3a-14(10a+b)=a-14b$$




$$47\mid(10a+b)\iff47\mid(a-14b)$$



See also : Divisibility criteria for $7,11,13,17,19$
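The rule can also be checked mechanically (a small script; taking the absolute value handles the case where $a-14b$ is negative, which does not affect divisibility):

```python
def divisible_by_47(n):
    """Apply the rule: replace 10a + b by a - 14b until the number is small."""
    n = abs(n)
    while n >= 100:
        a, b = divmod(n, 10)
        n = abs(a - 14 * b)  # sign does not matter for divisibility
    return n % 47 == 0

# the rule agrees with ordinary modular arithmetic
assert all(divisible_by_47(n) == (n % 47 == 0) for n in range(1, 5000))
```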


Saturday, January 16, 2016

analysis - How to show that $lim_{n to +infty} n^{frac{1}{n}} = 1$?



I've spent the better part of this day trying to show from first principles that this sequence tends to 1. Could anyone give me an idea of how I can approach this problem?


$$ \lim_{n \to +\infty} n^{\frac{1}{n}} $$


Answer



You can use $\text{AM} \ge \text{GM}$ on the $n$ numbers consisting of $n-2$ ones and two copies of $\sqrt{n}$ (whose product is $n$).


$$\frac{1 + 1 + \dots + 1 + \sqrt{n} + \sqrt{n}}{n} \ge n^{1/n} \ge 1$$


$$ 1 - \frac{2}{n} + \frac{2}{\sqrt{n}} \ge n^{1/n} \ge 1$$

Both outer bounds tend to $1$, so by the squeeze theorem $n^{1/n}\to1$.
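Numerically, the upper bound from AM-GM does squeeze $n^{1/n}$ down to $1$ (a quick check for a few values of $n$):

```python
import math

for n in [10, 100, 10 ** 4, 10 ** 6]:
    upper = 1 - 2 / n + 2 / math.sqrt(n)  # from AM >= GM
    assert 1 <= n ** (1 / n) <= upper
# both bounds tend to 1, squeezing n^(1/n) to 1
n = 10 ** 6
assert abs(n ** (1 / n) - 1) < 1e-4
```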


sequences and series - Is $sum_{n=1}^{+infty}frac{(-1)^n log n}{n!}$ a positive sum?




The series below converges by the ratio test, but I am not able to tell whether it has a positive sum, and I have not succeeded in finding a closed form. So my question is:




Question:
Is this : $\displaystyle\sum_{n=1}^{+\infty}\frac{(-1)^n \log n}{n!}$ a positive sum ?



Answer



For $n \geq 2$, the expression $\frac{\log n}{n!}$ is strictly decreasing, as $\frac{\log (n+1)}{(n+1)!} < \frac{\log n}{n!} \iff \log (n+1) < (n+1) \log n \iff \log_n (n+1) < n+1$, which is true, because $\log_n (n+1) < \log_n (n^2) = 2$. Therefore, $$\sum_{n=1}^\infty \frac{(-1)^n \log n}{n!}$$ is an alternating series with terms that strictly decrease in magnitude after the first nonzero term, so the series has the same sign as the first nonzero term, which is positive.
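Numerically, the partial sums settle at roughly $0.21$, confirming the sign (a quick check; $50$ terms are far more than needed given the factorial decay):

```python
import math

# log(1) = 0, so the first nonzero term is +log(2)/2! > 0
total = sum((-1) ** n * math.log(n) / math.factorial(n) for n in range(1, 51))
assert 0.2 < total < 0.22  # the sum is about 0.2100, hence positive
```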


real analysis - Differentiability of $f(x+y) = f(x)f(y)$

Let $f$: $\mathbb R$ $\to$ $\mathbb R$ be a function such that $f(x+y)$ = $f(x)f(y)$ for all $x,y$ $\in$ $\mathbb R$. Suppose that $f'(0)$ exists. Prove that $f$ is a differentiable function.


This is what I've tried: Using the definition of differentiability and taking arbitrary $x_0$ $\in$ $\mathbb R$.


$\lim_{h\to 0}$ ${f(x_0 + h)-f(x_0)\over h}$ $=$ $\cdots$ $=$ $f(x_0)$$\lim_{h\to 0}$ ${f(h) - 1\over h}$.


Then since $x_0$ arbitrary, using $f(x_0+0) = f(x_0) = f(x_0)f(0)$ for $y = 0$, can I finish the proof?

calculus - Limits - Indeterminate forms

I have struggled with understanding indeterminate forms for quite some time. In this post, I put forward my understanding and hope to see it expanded (or corrected).



Some limits of functions are clear at first observation; others are not. These "others", at a first look, take indeterminate forms as the independent variable gets sufficiently close to a point. By "first look", I mean that an indeterminate form may crop up when we attempt direct substitution. We see that their limits take the following forms.



$0^0$; $1^\infty$; $0\cdot\infty$; $\frac{\infty}{\infty}$; $\infty - \infty$; $\frac{0}{0}$




The term indeterminate, if I understand correctly, means that we are not yet sure what the original value is (the original limit, in the context of this post). The original limit could be any real number, infinity, or undefined (does not exist). To convey this concept properly, I have constructed some examples. I shall deal with the above cases.



1) $0^0$



$\lim_{x\rightarrow 0} x^0 = 1$; $\lim_{x\rightarrow 0} 0^{|x|} =0$.



At first look, the above two limits take indeterminate forms. Their limits, however, are different. This tells us that an indeterminate form can take multiple real values.



2) $\frac{0}{0}$




$\lim_{x\rightarrow 0}\frac{\sin(x)}{x}=1$; $\lim_{x\rightarrow 0}\frac{|x|}{x}$ is undefined.



While both the limits take indeterminate forms at first look, the first takes a real value and the second's limit is undefined.



3) $1^\infty$



$\lim_{x\rightarrow 1} x^{\frac{1}{x-1}}=e$; $\lim_{x\rightarrow 1} x^{\frac{1}{|x-1|}}$ does not exist.



4) $\frac{\infty}{\infty}$




$\lim_{x\rightarrow 0} \frac{\frac{1}{x}}{\frac{1}{x}}=1$; $\lim_{x\rightarrow 0} \frac{\frac{1}{|x|}}{\frac{1}{x}}$ does not exist.



5) $0\cdot\infty$



$\lim_{x\rightarrow 0} x\cdot\frac{1}{x}=1$; $\lim_{x\rightarrow 0} \sin(x)\cdot\frac{1}{|x|}$ does not exist.



6) $\infty - \infty$



$\lim_{x\rightarrow 0} \frac{1}{x}-\frac{1}{x}=0$; $\lim_{x\rightarrow 0} \frac{1}{x}-\frac{1}{|x|}$ does not exist.




From the above, we see that indeterminate forms can take several values. Often while computing limits we run into these forms; we then perform algebraic manipulations to arrive at the original limit (which may or may not exist). I haven't included an infinite limit in my examples. The following is one.



$\lim_{x\rightarrow 0^+} \frac{x}{x^2}=+\infty$ is an infinite (one-sided) limit that, at first look, takes the indeterminate form $\frac{0}{0}$.
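A numerical sampling of a few of the examples above (with the caveat that sampling near the limit point can only suggest a limit, never prove it):

```python
import math

for x in [1e-3, 1e-6, 1e-9]:
    assert abs(math.sin(x) / x - 1) < 1e-5        # a 0/0 form with limit 1
    assert abs(x * (1 / x) - 1) < 1e-12           # a 0*inf form with limit 1
x = 1 + 1e-6
assert abs(x ** (1 / (x - 1)) - math.e) < 1e-3    # a 1^inf form with limit e
```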



I end this post with a question. Are there other indeterminate forms?

calculus - Evaluate $lim_{xto infty} cos (sqrt {x+1})-cos (sqrt x)$


Evaluate $$\lim_{x\to \infty} \cos (\sqrt {x+1})-\cos (\sqrt x)$$



For this question I have tried using the squeeze theorem and some trigonometric manipulations such as using $$2\sin A\sin B=\cos (A-B) -\cos (A+B)$$. But they were of no use. I also tried using the substitutions like $$x=1/y$$ Hence as $x\to \infty$ , $y\to 0$ but still no success.



I could've also tried using L'Hospital rule but I am not able to convert the function in appropriate indeterminate form. Any help would be greatly appreciated
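Not an answer, but a numerical sanity check suggesting what the limit should be: $\sqrt{x+1}-\sqrt x=\frac1{\sqrt{x+1}+\sqrt x}\to0$, and $|\cos a-\cos b|\le|a-b|$, so the difference should shrink like $\frac1{2\sqrt x}$:

```python
import math

# |cos(sqrt(x+1)) - cos(sqrt(x))| <= sqrt(x+1) - sqrt(x) = 1/(sqrt(x+1)+sqrt(x))
for x in [1e4, 1e8, 1e12]:
    diff = math.cos(math.sqrt(x + 1)) - math.cos(math.sqrt(x))
    assert abs(diff) <= 1 / math.sqrt(x)
```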

Friday, January 15, 2016

geometry - Sides of a triangle given perimeter and two angles



Consider a triangle with angles $\alpha$, $\beta$ and $\gamma$, and let $p$ be its semiperimeter. How can I prove that the length of the side opposite to the angle $\alpha$ is



$$ \frac{ p\sin(\frac{\alpha}{2})}{ \cos(\frac{\beta}{2})\cos(\frac{\gamma}{2}) }$$



Using properties of the area and the inradius ($A = pr$, where $r$ is the radius of the inscribed circle, and Heron's formula $A = \sqrt{p(p-a)(p-b)(p-c)}$), I can't solve the question. How can I proceed?



Answer



If you use the formulae for $\sin\frac{\alpha}{2}$, $\cos\frac{\beta}{2}$ and $\cos\frac{\gamma}{2}$ [for example, from here:



https://en.wikibooks.org/wiki/Trigonometry/Solving_triangles_by_half-angle_formulae ]



the derivation is pretty easy. Please let me know if you understood.
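Following that suggestion, the identity checks out numerically (a quick script using the law of cosines to recover the angles; the tolerance only absorbs floating-point error):

```python
import math

def check(a, b, c):
    # recover the angles via the law of cosines
    alpha = math.acos((b * b + c * c - a * a) / (2 * b * c))
    beta = math.acos((a * a + c * c - b * b) / (2 * a * c))
    gamma = math.pi - alpha - beta
    p = (a + b + c) / 2  # semiperimeter
    rhs = p * math.sin(alpha / 2) / (math.cos(beta / 2) * math.cos(gamma / 2))
    assert abs(rhs - a) < 1e-9

check(3, 4, 5)
check(7, 8, 9)
check(2, 2, 3)
```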


inequality - Simplest or nicest proof that $1+x le e^x$

The elementary but very useful inequality that $1+x \le e^x$ for all real $x$ has a number of different proofs, some of which can be found online. But is there a particularly slick, intuitive or canonical proof? I would ideally like a proof which fits into a few lines, is accessible to students with limited calculus experience, and does not involve too much analysis of different cases.

elementary number theory - Show that for a and b non-zero integers and c different from zero, then gcd(ca,bc) = |c|gcd(a,b)

I did:


$$ca = cb * k + r \Leftrightarrow \\ ca - cb*k = r \\ c(a-bk)=r \\ a-bk = r/c \\ a = bk +r/c$$


So, $gcd(a,b) = r/c$



What do I do next?

elementary set theory - Ordered sets $langle mathbb{N} times mathbb{Q}, le_{lex} rangle$ and $langle mathbb{Q} times mathbb{N}, le_{lex} rangle$ not isomorphic



I'm doing this exercise:
Prove that the ordered sets $\langle \mathbb{N} \times \mathbb{Q}, \le_{lex} \rangle$ and $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$ are not isomorphic ($\le_{lex}$ means the lexicographic order).



I don't know how to start (I know that to prove that ordered sets are isomorphic I would make a monotonic bijection, but how to prove they aren't isomorphic?).


Answer



Recall that the lexicographic order on $A\times B$ is essentially to take $A$ and replace each point with a copy of $B$.




So only in one of these lexicographic orders every element has an immediate successor. And having an immediate successor is preserved under isomorphisms.



(Generally, to show two orders are not isomorphic you need to show either that there is no bijection between the sets (e.g. $\Bbb Q$ and $\Bbb R$ are not isomorphic because there is no bijection between them), or that some property preserved under isomorphisms holds for one ordered set and not for the other, such as having a minimum, being a linear order, or having immediate successors.)


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...