Tuesday, April 30, 2019

abstract algebra - When is a field a nontrivial field of fractions?



If we take any integral domain, then we can define a field of fractions by taking equivalence classes of ordered pairs of elements, the same way that the rational numbers are constructed from the integers. My question is:





What fields (of characteristic $0$) are isomorphic to the field of fractions of some integral domain (that's not a field)?




For instance, is the field of constructible real numbers a nontrivial field of fractions? What about the algebraic real numbers? What about arbitrary real closed fields? And what about if we restrict ourselves to integral domains which are models of Peano arithmetic, or models of Robinson arithmetic? (EDIT: for those less acquainted with logic and model theory, let me ask this: what if we restricted the integral domains to ones that are discretely ordered rings?) I should mention that my motivation for asking these sorts of questions is my MathOverflow question.



Any help would be greatly appreciated.



Thank You in Advance.


Answer




This is a partial answer (it answers your first question).



Every field of characteristic zero is the fraction field of some integral domain which is not a field. Indeed, let $k$ be your field and let $(X_i)$ be a transcendence basis for $k$ over $\mathbb{Q}$. Consider the ring $R$ which is the integral closure of $\mathbb{Z}[\{X_i\}]$ in $k$. Note then that $R\ne k$ (since integral extensions preserve dimension), but it's a common fact that $k=\text{Frac}(R)$.


algebra precalculus - Converting Numbers to Exponential Form




Can someone explain to me how problems such as:



[image: five expressions, labelled (a)–(e), to be converted to exponential form]



are converted into exponential form? I am interested in the logic and method behind this, rather than the answer. Much thanks in advance.


Answer



a) $x^{\frac{1}{2}}$



b) $y^{-1}$




c) $(x+3)^{\frac{1}{6}}$



d) $(2x-3)^{-11}$



e) $y^\frac{7}{3}$



Notice that $\frac{1}{x^m}=x^{-m}$ and $\sqrt[m] x=x^{\frac{1}{m}}$
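As a numeric spot-check of these conversions (the radical and reciprocal forms being checked are inferred from the posted answers, since the original image is unavailable):

```python
import math

x, y = 5.0, 2.0

# The two identities from the answer, at a sample point:
m = 7
assert math.isclose(1 / x**m, x**(-m))                      # 1/x^m   = x^(-m)
assert math.isclose(math.exp(math.log(x) / m), x**(1 / m))  # m-th root of x = x^(1/m)

# Checking a few of the posted answers against the forms they presumably
# came from (inferred, since the image is missing):
assert math.isclose(math.sqrt(x), x**0.5)            # (a)  sqrt(x)   = x^(1/2)
assert math.isclose(1 / y, y**(-1.0))                # (b)  1/y       = y^(-1)
assert math.isclose((y**7) ** (1 / 3), y**(7 / 3))   # (e)  cbrt(y^7) = y^(7/3)
```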


integration - On the integral $\int\sqrt{1+\cos x}\,\text{d} x$




I've been wondering about the following integral recently:
$$
I = \int\sqrt{1+\cos x}\,\text{d}x
$$
The way I integrated it is I used the identity $\sqrt{1+\cos x} = \sqrt2\cos\frac x2$, and so
$$
I = 2\sqrt2\sin\frac x2 + C
$$




The problem is that $\sqrt{1+\cos x}$ is actually integrable over the entire real line, but the derivative of $2\sqrt2\sin\frac x2$ is only equal to $\sqrt{1+\cos x}$ on certain intervals. This is because the actual identity is $\sqrt{1+\cos x} = \sqrt2\left|\cos\frac x2\right|$. Now, I wasn't exactly sure how to integrate the absolute value, so I thought I would "fix" the function after integration.



The first fix we can do is making sure that the sign of the result of integration is correct:
$$
I = 2\sqrt2\sin\frac x2\text{sgn}\cos\frac x2 + C
$$
The problem with just $\sin\frac x2$ was that sometimes it was "flipped": on certain intervals it was actually the antiderivative of $-\sqrt{1+\cos x}$ (this is because we dropped the absolute value signs).



There is one further problem with this, however. The above function is not continuous, meaning it's not continuously differentiable with derivative equal to $\sqrt{1+\cos x}$ everywhere. Namely, it is discontinuous at $x=(4n\pm1)\pi$ where $n\in\mathbb{Z}$.




I noticed, however, that the limit of the derivative on either side of $x=(4n\pm1)\pi$ existed and was equal to each other. Hence, I figured that I could somehow "stitch" these continuous sections end to end and get a continuous result whose derivative was $\sqrt{1+\cos x}$ everywhere. The resulting function I got was
$$
I = 2\sqrt2\sin\frac x2\text{sgn}\cos\frac x2 + 4\sqrt2\left\lfloor\frac1{2\pi}x+\frac12\right\rfloor + C
$$



Now, I was wondering, is there any way to arrive at this result just by integrating $\sqrt{1+\cos x}$ using the usual techniques? The method I used can be boiled down to "integrating" then "fixing", but I'm just wondering if you can arrive at a result that is continuous and differentiable on the entire real line by doing just the "integrating" part.



Any help would be appreciated. Thanks!



Edit: To be clear, I'm not looking for exactly the function above, but rather simply any function that is "nice", continuous and differentiable on $\mathbb{R}$, and has derivative equal to $\sqrt{1+\cos x}$ everywhere, and which is attainable through "simple" integration methods.



Answer



The function to be integrated is
$$
\sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right|
$$
so we can as well consider the simpler
$$
\int\lvert\cos t\rvert\,dt
$$
or, with $t=u+\pi/2$, the even simpler

$$
\int|\sin u|\,du
$$
One antiderivative is
$$
\int_0^u\lvert\sin v\rvert\,dv
$$
Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0<\pi$. Then
$$
\int_0^u\lvert\sin v\rvert\,dv=

\int_0^{k\pi}\lvert\sin v\rvert\,dv+
\int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv
\\=2k+\int_0^{u_0}\sin v\,dv=
2k+1-\cos(u-k\pi)
$$
Now do the back substitution; note that $k=\lfloor u/\pi\rfloor$, or equivalently $u_0=u\bmod\pi$, if you prefer.
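The closed form can be sanity-checked numerically against a midpoint-rule integral of $\lvert\sin v\rvert$ (the helper names here are mine):

```python
import math

def abs_sin_integral(u, steps=200_000):
    """Midpoint-rule approximation of the integral of |sin v| over [0, u]."""
    h = u / steps
    return h * sum(abs(math.sin((i + 0.5) * h)) for i in range(steps))

def closed_form(u):
    # 2k + 1 - cos(u - k*pi) with k = floor(u/pi), as derived above.
    k = math.floor(u / math.pi)
    return 2 * k + 1 - math.cos(u - k * math.pi)

for u in (0.5, 3.0, 5.0, 12.7):
    assert math.isclose(abs_sin_integral(u), closed_form(u), abs_tol=1e-6)
```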


Monday, April 29, 2019

geometry - What is the smallest circle such that an arbitrary set of circles can be placed on the circumference without overlapping?



I have a set of circles of arbitrary radii: $r_1, r_2, r_3, ... r_n$.




I wish to arrange them around an inner circle so that they are all touching the perimeter of the inner circle, and do not overlap each other.



What I don't know how to do is figure out the inner radius $r_{inner}$.



I can figure out the angle each circle will use given an inner radius: $\theta_i = 2\sin^{-1} \frac{r_i}{r_i+r_{inner}}$, so I can test whether an inner radius is correct.



My first guess was to solve $2\pi = \sum_i (2\sin^{-1} \frac{r_i}{r_i+r_{inner}})$ for $r_{inner}$, but that's beyond my skills.



I also considered whether the sum of the diameters would equal the circumference of the circle, but that's a set of line segments rather than a smooth arc, and correcting that is also beyond me.




How do I figure out $r_{inner}$?



Numbers of circles around a circle is related, but assumes the circles are identical, which mine are not.


Answer



Using the cosine rule, the angle at the centre of the inner circle between lines through the centres of two adjacent touching circles is given by
$$
\varphi(i,{i+1}) \;=\;
\cos^{-1}\left(\frac{R^2+R r_i+R r_{i+1}-r_i r_{i+1}}{(R+r_i) (R+r_{i+1})}\right)
$$

where I've used $R$ instead of $r_{inner}$ for simplicity.



So, we need
$$
\qquad\qquad\qquad
\qquad\qquad\qquad
\sum\limits_{i=1}^n\varphi(i,{i+1})\;=\;2\pi
\qquad\qquad\qquad
\qquad\qquad\qquad(1)
$$

where we consider $r_{n+1}=r_1$.



Using trigonometric identities, for a given $n$, it is possible to convert this into a very complex expression full of square roots, but there seems to be no easy way of solving the result in general (although for $n=3$ and $r_i=1,2,3$, we somewhat surprisingly get $R=\frac{6}{23}$).



Numerical solution is probably the best approach. (The diagram below was generated using Mathematica by solving numerically.)
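The surprising value $R=\frac{6}{23}$ for $n=3$, $r_i=1,2,3$ can be reproduced with a few lines of bisection on equation $(1)$; this is just a sketch, and the function names are mine:

```python
import math

def phi(R, ri, rj):
    # Central angle between two adjacent tangent circles of radii ri, rj
    # around an inner circle of radius R (cosine rule, as above).
    num = R * R + R * ri + R * rj - ri * rj
    return math.acos(num / ((R + ri) * (R + rj)))

def inner_radius(radii, lo=1e-9, hi=1e6, iters=200):
    # The total angle decreases as R grows, so bisect on R.
    def total(R):
        n = len(radii)
        return sum(phi(R, radii[i], radii[(i + 1) % n]) for i in range(n))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if total(mid) > 2 * math.pi:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

R = inner_radius([1, 2, 3])
assert math.isclose(R, 6 / 23, rel_tol=1e-9)
```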



Moreover, although this is a necessary condition, it is not sufficient in general because a sequence of small circles between two large circles may not fit tightly, causing the solution above to give an incorrect result (with the large circles overlapping):



[figure: an arrangement in which two large circles overlap because the small circles between them do not fit tightly]




So for all $i\neq j$, for the above solution to be correct we also need
$$
\varphi(i,j) \;\geqslant\; \sum_{k=i}^{j-1}\varphi(k,{k+1})
$$
with appropriate wrapping around of indexes if $i>j$. If this does not hold for some $i$ and $j$, then the correct solution for $R$ is obtained from $(1)$ by discarding circles $i+1,\ldots,j-1$. Due to the possibility of nested overlapping, the testing and discarding needs to be done iteratively, considering increasing values of $|i-j|$ in order.


linear algebra - How to calculate this determinant?




How to calculate this determinant?





$$A=\begin{bmatrix}n-1&k&k&\cdots&k\\k&n-1&k&\cdots&k\\\vdots&\vdots&\ddots&\ddots&\vdots\\k&k&\cdots&n-1&k\\k&k&\cdots&k&n-1\end{bmatrix}_{n\times n}$$




where $n,k\in \Bbb N$ are fixed.



I tried for $n=3$ and got the characteristic polynomial as $(x-2+k)^2(x-2-2k).$




How to find it for general $n\in \Bbb N$?


Answer



Here I've followed the same initial step as K. Miller. Instead of using a determinant identity, I examine the eigenvalues of $A$ and consider their product.



If $J$ denotes the $n\times n$ matrix of all $1$'s, then the eigenvalues of $J$ are $0$ with multiplicity $n-1$ and $n$ with multiplicity $1$. This can be seen by noting that $J$ has an $n-1$ dimensional kernel and trace $n$.



Your matrix $A$ is exactly $kJ+(n-k-1)I$ where $I$ denotes the $n\times n$ identity matrix. The eigenvalues of $A$ are therefore $n-k-1$ with multiplicity $n-1$ and $nk+n-k-1$ with multiplicity $1$. The determinant of $A$ is then $(nk+n-k-1)(n-k-1)^{n-1}$.
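The closed form is easy to spot-check with an exact determinant over the rationals; `det` below is a plain Gaussian-elimination helper of mine:

```python
from fractions import Fraction

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

# Compare against (nk + n - k - 1)(n - k - 1)^(n-1) for several n, k:
for n in (2, 3, 5, 8):
    for k in (0, 1, 2, 7):
        A = [[n - 1 if i == j else k for j in range(n)] for i in range(n)]
        assert det(A) == (n * k + n - k - 1) * (n - k - 1) ** (n - 1)
```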


trigonometry - Using De Moivre's Theorem to prove $\cos(3\theta) = 4\cos^3(\theta) - 3\cos(\theta)$ trig identity


I am stuck on trying to prove a trig identity using De Moivre's theorem.


I have to prove, $$\cos(3\theta) = 4\cos^3(\theta) - 3\cos(\theta)$$


I am not sure where to even start, I broke the LHS down to $$\cos(3\theta) + i\sin(3\theta)$$


but I have no idea where to go from here, or if this is fully correct.



If I could get some pointers or a simple worked example that I could follow it would be great.


Thanks


Answer



De Moivre's formula reads $$(\cos\theta+i\sin\theta)^n=\cos(n\theta)+i\sin(n\theta)$$ Of course this identity implies the real parts should also be equal. That is $$\cos(n\theta)=\Re\{(\cos\theta+i\sin\theta)^n\}$$ Hence we have $$\cos(3\theta)=\Re\{\cos^3\theta+3i\cos^2\theta\sin\theta-3\cos\theta\sin^2\theta-i\sin^3\theta\}=\cos^3\theta-3\cos\theta\sin^2\theta$$ Finally, substituting $\sin^2\theta=1-\cos^2\theta$ gives $$\cos(3\theta)=\cos^3\theta-3\cos\theta(1-\cos^2\theta)=4\cos^3\theta-3\cos\theta$$
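Both the De Moivre computation and the triple-angle identity itself can be checked numerically at a few sample angles:

```python
import cmath
import math

for theta in (0.3, 1.0, 2.5, -0.7):
    lhs = math.cos(3 * theta)
    # Real part of (cos θ + i sin θ)^3, as in De Moivre's formula:
    assert math.isclose(lhs, (cmath.exp(1j * theta) ** 3).real)
    # The identity being proved:
    assert math.isclose(lhs, 4 * math.cos(theta) ** 3 - 3 * math.cos(theta))
```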


calculus - Convergence of $\sum_{n=1}^\infty\frac{n!\, i^n}{n^n}$



I need to study this sum: $\sum_{n=1}^{\infty} \frac{n!\, i^n}{n^n}$. Taking: $$\lim_{n\rightarrow \infty} \; \left|\left(\frac{n!\: i^n}{n^n}\right)^{\frac{1}{n}}\right|$$
$$\rightarrow \lim_{n\rightarrow \infty} \left|\left(\frac{n!}{n^n}\right)^{\frac{1}{n}}\right|$$



Using Stirling approximation considering the limit:



$$\ln n! \approx n \ln\:n-n$$

then $n! \approx (\frac{n}{e})^n$ (is this correct?):



$$\rightarrow \lim_{n\rightarrow \infty} \left|\frac{n}{n\, e}\right| = \frac{1}{e}$$ This doesn't make too much sense because I know the series diverges.



I think I'm missing a $\sqrt{2\pi n}$ in Stirling approximation but I don't understand why that pops out taking the $e^{(\;)}$ from the first expression.


Answer



The series is absolutely convergent. You don't need Stirling's approximation. Just apply the ratio test: $\frac {|a_{n+1}|} {|a_n|} \to \frac 1 e$.
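The ratio test is easy to verify numerically: with $|a_n| = n!/n^n$, the ratio simplifies algebraically to $(n/(n+1))^n$, which tends to $1/e < 1$ (the helper name is mine):

```python
import math

# |a_n| = n! / n^n; the ratio |a_{n+1}|/|a_n| simplifies to (n/(n+1))^n.
def ratio(n):
    return (n / (n + 1)) ** n

a10 = math.factorial(10) / 10**10
a11 = math.factorial(11) / 11**11
assert math.isclose(a11 / a10, ratio(10))

# (n/(n+1))^n -> 1/e < 1, so the series converges absolutely:
assert math.isclose(ratio(10**5), 1 / math.e, rel_tol=1e-4)
assert ratio(10**5) < 1
```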


discrete mathematics - Proof by Induction: Solving $1+3+5+\cdots+(2n-1)$



The question asks to verify that each equation is true for every positive integer n.



The question is as follows:




$$1+ 3 + 5 + \cdots + (2n - 1) = n^2$$



I have solved the base step which is where $n = 1$.



However now once I proceed to the inductive step, I get a little lost on where to go next:



Assuming the statement is true for $k$, consider the case $k+1$:

$$(2k - 1) + (2(k+1) - 1)$$

$$(2k - 1) + (2k+2 - 1)$$

$$(2k - 1) + (2k + 1)$$


This is where I am stuck. Do I factor these further to obtain a polynomial of some sort? Or am I missing something?


Answer



Assume true for $k$. Then consider the case $k+1$, you got $$1+3+\cdots+(2k-1)+(2(k+1)-1)$$ which is equal by inductive hypothesis $$k^2+(2k+1)=(k+1)^2$$ and this closes the induction.
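Both the identity and the inductive step can be checked mechanically for a range of values:

```python
# The identity 1 + 3 + ... + (2n-1) = n^2, and the inductive step used above:
for n in range(1, 200):
    assert sum(2 * j - 1 for j in range(1, n + 1)) == n * n
for k in range(1, 200):
    assert k * k + (2 * k + 1) == (k + 1) ** 2
```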


sequences and series - Evaluate $\sum_{n=1}^\infty \frac{1}{n(n+2)(n+4)}$?


How can I evaluate this?


$$\sum_{n=1}^\infty \frac{1}{n(n+2)(n+4)} = \frac{1}{1\cdot3\cdot5}+\frac{1}{2\cdot4\cdot6}+\frac{1}{3\cdot5\cdot7}+ \frac{1}{4\cdot6\cdot8}+\cdots$$


I have tried:


$$\frac{1}{1\cdot3\cdot5}+\frac{1}{3\cdot5\cdot7}+\frac{1}{2\cdot4\cdot6}+ \frac{1}{4\cdot6\cdot8}+\cdots = \frac{1}{3\cdot5}\left(1+\frac{1}{7}\right)+\frac{1}{4\cdot6}\left(\frac{1}{2}+\frac{1}{8}\right)+\cdots$$


and so on... Been stuck for a while. Result should be $\dfrac{11}{96}$



Answer



With partial fractions: $$\frac1{k(k+2)(k+4)}=\frac18\biggl(\frac1k-\frac2{k+2}+\frac1{k+4}\biggr),$$ we get a telescoping sum whose $n$-th partial sum simplifies to $$\frac18\biggl(1+\frac12-\frac13-\frac14-\frac1{n+1}-\frac1{n+2}+\frac1{n+3}+\frac1{n+4}\biggr),$$ which tends to $\frac18\cdot\frac{11}{12}=\frac{11}{96}$ as $n\to\infty$.
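The partial-fraction identity, the telescoped partial sum, and the limit $\frac{11}{96}$ can all be verified exactly with `fractions.Fraction` (the helper name `partial` is mine):

```python
from fractions import Fraction

# The partial-fraction identity, checked exactly:
for k in range(1, 100):
    assert Fraction(1, k * (k + 2) * (k + 4)) == Fraction(1, 8) * (
        Fraction(1, k) - Fraction(2, k + 2) + Fraction(1, k + 4))

# The telescoped n-th partial sum, compared with the direct sum:
def partial(n):
    return Fraction(1, 8) * (1 + Fraction(1, 2) - Fraction(1, 3) - Fraction(1, 4)
                             - Fraction(1, n + 1) - Fraction(1, n + 2)
                             + Fraction(1, n + 3) + Fraction(1, n + 4))

for n in (1, 2, 10, 50):
    assert partial(n) == sum(Fraction(1, k * (k + 2) * (k + 4))
                             for k in range(1, n + 1))

# The limit is 11/96:
assert abs(float(partial(10**9)) - 11 / 96) < 1e-8
```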


Sunday, April 28, 2019

sequences and series - Question regarding proof for $\sum_{k=0}^n\binom{n-k}{k} = F_{n+1}$



I am currently working through the various properties of the binomial coefficient and was up to the identity



$\sum_{k=0}^n\binom{n-k}{k} = F_{n+1}$




The proof is provided elsewhere on this website, here specifically. I have reproduced the user Adi Dani's proof below.



$$\begin{align} F_{n+1}&=\sum_{k=0}^n\binom{n-k}{k}\\ &=\sum_{k=0}^{n}\Bigg(\binom{n-k-1}{k}+\binom{n-k-1}{k-1}\Bigg)\\ &=\sum_{k=0}^{n-1}\Bigg(\binom{(n-1)-k}{k}+\binom{n-2-(k-1)}{(k-1)}\Bigg)\\ &=\sum_{k=0}^{n-1}\binom{(n-1)-k}{k}+\sum_{k=0}^{n-1}\binom{n-2-(k-1)}{k-1}\\ &=\sum_{k=0}^{n-1}\binom{(n-1)-k}{k}+\sum_{j=0}^{n-2}\binom{(n-2)-j}{j}\\ &=F_{n}+F_{n-1} \end{align}$$



It's all clear to me except for the changing of the index on the summation. How are you able to reduce a summation by one or two members in the series without consequence?



As an addendum to this question, I've noted that the only proofs I've had trouble with so far have involved changing the index on a summation. As such, if anyone had some good resources for looking at summations in more detail (especially if there are exercises) then that would be massively appreciated.


Answer



I believe that proof uses the convention that $\binom{n}{k} = 0$ for all $k > n$ and for all $k < 0$. This of course means that most of the terms in the sum are equal to zero.




I've annotated the proof a bit for you.



\begin{align} F_{n+1}&=\sum_{k=0}^n\binom{n-k}{k} \text{ (induction hypothesis) } \\ &=\sum_{k=0}^{n}\Bigg(\binom{n-k-1}{k}+\binom{n-k-1}{k-1}\Bigg) \text{ (recursive formula for binom. coefficients) }\\ &=\sum_{k=0}^{n}\Bigg(\binom{(n-1)-k}{k}+\binom{n-2-(k-1)}{(k-1)}\Bigg)\text{ $(n-2-(k-1)= n-k-1)$} \\
&=\sum_{k=0}^{n-1}\Bigg(\binom{(n-1)-k}{k}+\binom{n-2-(k-1)}{(k-1)}\Bigg)\text{ (the $k=n$ Term is equal to zero)} \\ &=\sum_{k=0}^{n-1}\binom{(n-1)-k}{k}+\sum_{k=0}^{n-1}\binom{n-2-(k-1)}{k-1} \text{ (splitting the sum into two sums) } \\ &=\sum_{k=0}^{n-1}\binom{(n-1)-k}{k}+\sum_{j=-1}^{n-2}\binom{(n-2)-j}{j} \text{ (substitute $j = k-1$) } \\ &=\sum_{k=0}^{n-1}\binom{(n-1)-k}{k}+\sum_{j=0}^{n-2}\binom{(n-2)-j}{j} \text{ (the $j = -1$ Term is equal to zero) } \\
&=F_{n}+F_{n-1} \end{align}



See wikipedia for the recursive formula. The article also mentions the convention I described earlier. In fact it also lists the formula for the Fibonacci numbers as
$$F(n+1) = \sum_{k=0}^{\lfloor{n/2}\rfloor} \binom{n-k}{k}.$$


calculus - Does a bijective map from $(-\pi,\pi)\to\mathbb R$ exist?



I'm having trouble proving that $\mathbb R$ is equinumerous to $(-\pi,\pi)$. I'm thinking about using a trigonometric function such as $\cos$ or $\sin$, but those take values in a bounded interval. Could someone help me define a bijective map from $(-\pi,\pi)\to\mathbb R$?


Answer



You could use a trigonometric function such as $\tan$, although you must first divide the number by $2$ to instead get $\tan$ applied to a number in the interval $(-\pi/2,\pi/2)$; that is, $x\mapsto\tan(x/2)$ is such a bijection.
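The map described above, $f(x)=\tan(x/2)$, can be checked numerically at sample points: its inverse is $g(y)=2\arctan(y)$, and composing either way returns the input:

```python
import math

# f maps (-pi, pi) to R; g = 2*atan is its inverse.
f = lambda x: math.tan(x / 2)
g = lambda y: 2 * math.atan(y)

for x in (-3.1, -1.0, 0.0, 0.5, 2.9):
    assert -math.pi < x < math.pi
    assert math.isclose(g(f(x)), x, abs_tol=1e-12)
for y in (-1000.0, -2.0, 0.0, 3.5, 1000.0):
    assert -math.pi < g(y) < math.pi
    assert math.isclose(f(g(y)), y, rel_tol=1e-9, abs_tol=1e-12)
```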


calculus - Evaluate $\lim\limits_{x \to 0+}\left[\frac{x^{\sin x}-(\sin x)^{x}}{x^3}+\frac{\ln x}{6}\right].$


Problem


Evaluate $$\lim_{x \to 0+}\left[\frac{x^{\sin x}-(\sin x)^{x}}{x^3}+\frac{\ln x}{6}\right].$$


Attempt


First, we may obtain $$\lim_{x \to 0+}\left[\frac{x^{\sin x}-(\sin x)^{x}}{x^3}+\frac{\ln x}{6}\right]=\lim_{x \to 0+}\frac{6e^{\sin x\ln x}-6e^{x\ln\sin x}+x^3\ln x}{6x^3}.$$ Here, you can apply L'Hôpital's rule, but it's too complicated. Moreover, you can also apply Taylor's formula, for example $$e^{\sin x\ln x}=1+\sin x\ln x+\frac{1}{2}(\sin x\ln x)^2+\cdots,\\e^{x\ln\sin x}=1+x\ln\sin x+\frac{1}{2}(x\ln\sin x)^2+\cdots,$$ but you cannot cancel the terms, thus you cannot avoid differentiating, either. Is there any elegant solution?


P.S. Please don't suspect the existence of the limit. The result equals $\dfrac{1}{6}.$



Answer



The key point is that $x\log \sin x \to 0$ and $\sin x \log x \to 0$ then by Taylor's series we have


  • $x^{\sin x}=e^{\sin x \log x}=1+x\log x+\frac12x^2\log^2 x+\frac16x^3\log x(\log^2 x -1)+O(x^4\log^2 x)$

  • $(\sin x)^{x}=e^{x \log (\sin x)}=1+x\log x+\frac12x^2\log^2 x+\frac16x^3(\log^3 x -1)+O(x^4\log x)$

then


$$\frac{x^{\sin x}-(\sin x)^{x}}{x^3}+\frac{\ln x}{6}=\frac{\frac16x^3\log^3 x-\frac16x^3\log x-\frac16x^3\log^3 x +\frac16x^3+O(x^4\log x)}{x^3}+\frac{\ln x}{6}=$$


$$=\frac16+O(x\log x) \to \frac16$$



To see how to obtain the Taylor expansion, let us consider the first one. Since


  • $\sin x =x-\frac16 x^3+O(x^5) \implies \sin x \log x=x\log x-\frac16 x^3\log x+O(x^5\log x)$

  • $e^t = 1+t+\frac12 t^2+\frac16t^3+O(t^4)$


we obtain that


$$x^{\sin x}=e^{\sin x \log x} =1+\left(x\log x-\frac16 x^3\log x+O(x^5\log x)\right)+\frac12\left(x\log x-\frac16 x^3\log x+O(x^5\log x)\right)^2+\frac16\left(x\log x-\frac16 x^3\log x+O(x^5\log x)\right)^3+O(x^5\log^4 x)=$$


$$=1+x\log x-\frac16 x^3\log x+\frac12x^2\log^2x-\frac16x^4\log^2x+\frac16x^3\log^3x+O(x^4\log^2x)=$$


$$=1+x\log x+\frac12x^2\log^2x+\frac16x^3\log x(\log^2x-1)+O(x^4\log^2x)$$


and for the second one since


  • $\log (1+t)= t-\frac12t^2+\frac13t^3+O(t^4)$

  • $\sin x =x-\frac16 x^3+O(x^5)\implies \frac{\sin x}x=1-\frac16 x^2+O(x^4)$

  • $\log \sin x=\log x+\log \frac{\sin x}x=\log x+\log \left(1-\frac16 x^2+O(x^4)\right)=\log x-\frac16 x^2+O(x^4)$

  • $x\log \sin x=x\log x-\frac16 x^3+O(x^5)$

we obtain that



$$(\sin x)^x=e^{x\log \sin x}=1+\left(x\log x-\frac16 x^3+O(x^5)\right)+\frac12\left(x\log x-\frac16 x^3+O(x^5)\right)^2+\frac16\left(x\log x-\frac16 x^3+O(x^5)\right)^3+O(x^4\log^4x)$$


$$=1+x\log x-\frac16 x^3+\frac12x^2\log^2x-\frac16x^4\log x+\frac16x^3\log^3 x+O(x^4\log x)=$$


$$=1+x\log x+\frac12x^2\log^2x+\frac16x^3(\log^3 x-1)+O(x^4\log x)$$
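The limit $\frac16$ can be sanity-checked in floating point at a moderately small $x$. Note that $x$ cannot be taken too small: the numerator is a difference of two nearly equal quantities, and cancellation destroys the result well before $x=10^{-5}$; at $x=10^{-4}$ both the rounding error and the $O(x\log^2 x)$ remainder are small:

```python
import math

x = 1e-4  # small enough for the remainder, large enough to survive cancellation
val = (x ** math.sin(x) - math.sin(x) ** x) / x**3 + math.log(x) / 6
assert abs(val - 1 / 6) < 0.05
```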


Saturday, April 27, 2019

summation - Evaluate the sum of this series




Please help me find the sum of this series:




$$1 + \frac{2}{3}\cdot\frac{1}{2} + \frac{2}{3}\cdot\frac{5}{6}\cdot\frac{1}{2^2} + \frac{2}{3}\cdot\frac{5}{6}\cdot\frac{8}{9}\cdot\frac{1}{2^3} + \cdots$$



All I could figure out was to find the $n^{\text{th}}$ term as:



$$a_n = \frac{2 \cdot (2+3) \cdots(2+3(n-1))}{3 \cdot 6 \cdot 9 \cdots 3n} \cdot\frac{1}{2^{n}}$$



What to do after that, I don't know. Please help.


Answer



Let $S$ denote the sum. We write each term (with indices starting at $0$) as




$$ \left( \prod_{k=1}^{n} \frac{3k-1}{3k} \right) \frac{1}{2^n}
= \frac{\prod_{k=0}^{n-1} (-\frac{2}{3}-k)}{n!} \left(-\frac{1}{2}\right)^n
= \binom{-2/3}{n} \left(-\frac{1}{2}\right)^n. $$



Then we easily recognize $S$ as a binomial series and hence



$$ S = \sum_{n=0}^{\infty}\binom{-2/3}{n} \left(-\frac{1}{2}\right)^n = \left(1 - \frac{1}{2}\right)^{-2/3} = 2^{2/3}.$$
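The value $2^{2/3}$ can be confirmed by accumulating the series numerically, multiplying each term by $\frac{3n-1}{3n}\cdot\frac12$ to get the next:

```python
import math

# Partial sums of 1 + (2/3)(1/2) + (2/3)(5/6)(1/2^2) + ...  vs 2^(2/3):
s, term = 1.0, 1.0
for n in range(1, 200):
    term *= (3 * n - 1) / (3 * n) / 2   # multiply by (3n-1)/(3n) * 1/2
    s += term
assert math.isclose(s, 2 ** (2 / 3), rel_tol=1e-12)
```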


field theory - Prove that $x^3+2y^3+4z^3\equiv 6xyz \pmod{7} \Rightarrow x\equiv y\equiv z\equiv 0 \pmod{7}$



[Old] Wants: $x^3+2y^3+4z^3\equiv6xyz \pmod{7} \Rightarrow x\equiv y\equiv z\equiv 0 \pmod{7}$



[Old] My attempt: With modular arithmetic, I can show that if $x\not\equiv0$, then $y\not\equiv0$, and $z\not\equiv0$; if $x\equiv0$, then $x\equiv y \equiv z \equiv 0 \pmod{7}.$



But I don't see how to show that $x^3+2y^3+4z^3-6xyz\not\equiv0\pmod{7}$ when $x,y,z\not\equiv0$, other than plugging in all possible nonzero values of $x,y,z \pmod{7}$ to derive a contradiction. I hope to prove the desired result with least effort.



Update: As @Lubin pointed out in his answer, "the Norm of a nonzero element of the big field is necessarily nonzero in the base field". But I cannot find a reference on this specific statement. Could anyone tell me why is the quoted sentence above true?


Answer




Someone played a dirty trick on you.



The form $x^3-6xyz + 2y^3+4z^3$ is the “norm form” for the extension $\Bbb F_{7^3}\supset\Bbb F_7$. That is, if you take a generator $\zeta=\sqrt[3]2$ of the big field, which is all right, since $X^3-2$ is irreducible over $\Bbb F_7$, then the Norm of $x+y\zeta+z\zeta^2$ is $w=x^3-6xyz+2y^3+4z^3$, when $x,y,z\in\Bbb F_7$. But the Norm of a nonzero element of the big field is necessarily nonzero in the base field. And that does it.



How did I know this? Many hand computations of the Norm, over the years. Pencil and paper, friends, pencil and paper.
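Since $\Bbb F_7$ is finite, the norm-form argument can also be confirmed by brute force: the form vanishes modulo $7$ only at the origin.

```python
# Brute-force check over F_7: the norm form vanishes only at (0, 0, 0).
solutions = [(x, y, z)
             for x in range(7) for y in range(7) for z in range(7)
             if (x**3 + 2 * y**3 + 4 * z**3 - 6 * x * y * z) % 7 == 0]
assert solutions == [(0, 0, 0)]
```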


Friday, April 26, 2019

elementary number theory - Why does the extended euclidean algorithm allow you to find modular inverse?



Why is it that by working backwards from the Euclidean algorithm one can find the modular inverse of a number?



Further, there is also another method for finding inverses discussed here which seems similar to the extended euclidean algorithm method but is much shorter. How is this method related to the extended euclidean algorithm one, and why does this method work?


Answer



Using the Euclidean algorithm for two coprime numbers gives you
$$a=q_1b+r_1\ ,\quad b=q_2r_1+r_2\ ,\ldots,\quad r_{n-2}=q_nr_{n-1}+r_n\ ,$$

where $r_n=1$. Therefore
$$1=r_{n-2}-q_nr_{n-1}\ ,$$
and then by substituting for $r_{n-1}$ in terms of $r_{n-2}$ and $r_{n-3}$, and so on, you eventually get
$$1=ax+by$$
for some $x,y$. Doing a few examples should convince you that this always works; if you want a more formal proof, you can do it by induction on $n$, the number of steps in the algorithm. We now have
$$1\equiv ax\pmod b\ ,$$
which means by definition that $x$ is the inverse of $a$ modulo $b$.



With regard to the other algorithm that you linked, if you do it the Euclidean algorithm way starting with the same numbers $811$ and $216$ you will see that the remainders you get are just the same as in the other method: this is why they are related, and it works for essentially the same reason as above. Once again, you could prove it formally by induction.




By the way, I'm not convinced that the other method is very much shorter than the Euclidean method - maybe a little. Note that in the page you linked they just wrote down the values of $f,e,d,c,b,a$, but there is actually some calculation to be done here which they didn't show. If you do the same example using the "reverse Euclidean" method I think you will find that the arithmetic work is fairly similar.
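The procedure described above can be sketched in a few lines (the function names are mine); the example pair $811$ and $216$ is the one from the linked page:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    g, x, _ = ext_gcd(a, m)
    if g != 1:
        raise ValueError("a and m are not coprime")
    return x % m

# The inverse of 216 modulo 811:
inv = mod_inverse(216, 811)
assert (216 * inv) % 811 == 1
assert inv == pow(216, -1, 811)  # cross-check with Python's built-in (3.8+)
```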


functions - $f:X\rightarrow X$ such that $f(f(x))=x$


Let $X$ be a metric space and $f:X\rightarrow X$ be such that $f(f(x))=x$, for all $x\in X$.



Then $f$




  1. is one-one and onto;

  2. is one-one but not onto;


  3. is onto but not one-one;

  4. need not be either.




From the given condition I have that $f^2=i$ where $i$ is the identity function. If $f$ itself is the identity function then the conditions are satisfied as well as $f$ is bijection. Is that the only such function or are there other possibilities ?



My guess is that it will be a bijection, i.e. option $1$ will be correct.



For see, if $$f(x_1)=y \ \text{and}\ f(x_2)=y \ \text{then} \ f(y)=x_1\ \text{and}\ f(y)=x_2$$ will be possible iff $x_1=x_2$. So this is injective.




Now an injection from a set to itself is trivially surjective so it is bijective. Is my proof correct?

calculus - Why is Euler's formula true?











I need to know why Euler's formula is true. I mean, why is the following true:
$$
e^{ix} = \cos(x) + i\sin(x)
$$


Answer



Hint: Notice $$i\sin (x) = ix - i\frac{x^3}{3!} + i\frac{x^5}{5!} - i\frac{x^7}{7!} + \cdots $$ and $$\cos (x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots $$ Now add them and use the fact that $i^2 = -1$, $i^3 = -i$, $i^4 = 1$ to recognize the result as the series for $e^{ix}$. Also notice: $$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$
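Euler's formula can be checked numerically by comparing a truncated power series of $e^{ix}$ with $\cos x + i\sin x$ (the helper name is mine):

```python
import cmath
import math

def exp_series(z, terms=40):
    # Partial sum of e^z = sum_{n} z^n / n!
    return sum(z ** n / math.factorial(n) for n in range(terms))

for x in (0.0, 1.0, -2.3, 3.1):
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(exp_series(1j * x) - rhs) < 1e-12   # series for e^{ix}
    assert abs(cmath.exp(1j * x) - rhs) < 1e-12    # library value agrees too
```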


Thursday, April 25, 2019

sequences and series - Why does $\sum_{k=1}^{\infty}\frac{\sin(k)}{k}=\frac{\pi-1}{2}$?



Inspired by this question (and far more straightforward, I am guessing), Mathematica tells us that $$\sum_{k=1}^{\infty}\dfrac{{\sin(k)}}{k}$$ converges to $\dfrac{\pi-1}{2}$.



Presumably, this can be derived from the similarity of the Leibniz expansion of $\pi$ $$4\sum_{k=1}^{\infty}\dfrac{(-1)^{k+1}}{2k-1}$$to the expansion of $\sin(x)$ as $$\sum_{k=0}^{\infty}\dfrac{(-1)^k}{(2k+1)!}x^{2k+1},$$ but I can't see how...




Could someone please explain how $\dfrac{\pi-1}{2}$ is arrived at?


Answer



Here is one way, but it does not use the series you mention so much. I hope that's OK.



The series is:



$$\sin(1)+\frac{\sin(2)}{2}+\frac{\sin(3)}{3}+\cdot\cdot\cdot $$



$$\Im\left[e^{i}+\frac{e^{2i}}{2}+\frac{e^{3i}}{3}+\cdot\cdot\cdot \right]$$




Let $\displaystyle x=e^{i}$.



$$\Im\left[x+\frac{x^2}{2}+\frac{x^3}{3}+\cdot\cdot\cdot \right]$$



Differentiate:



$$\Im \left[1+x+x^{2}+x^{3}+\cdot\cdot\cdot \right]$$



This is a geometric series, $\displaystyle \frac{1}{1-x}$




$$\Im [\frac{1}{1-x}]$$



Integrate:



$$-\Im[\ln(x-1)]=-\Im [\ln(e^{i}-1)]$$



Now, suppose $$\ln(e^{i}-1)=a+bi.$$ Then



$$e^{i}-1=e^{a}e^{bi}$$




$$\cos(1)-1+i\sin(1)=e^{a}\left[\cos(b)+i\sin(b)\right]$$



Equate real and imaginary parts:



$$\cos(1)-1=e^{a}\cos(b)\\ \sin(1)=e^{a}\sin(b)$$



Divide the second equation by the first:



$$\frac{\sin(1)}{\cos(1)-1}=\frac{e^{a}\sin(b)}{e^{a}\cos(b)}$$




$$-\cot(1/2)=\tan(b)$$



$$b=\tan^{-1}(-\cot(1/2))=\frac{1}{2}-\frac{\pi}{2}$$



But we need the negative of this, so finally:



$$\frac{\pi}{2}-\frac{1}{2}$$
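The value $\frac{\pi}{2}-\frac12\approx 1.0708$ is easy to confirm with a partial sum; by Abel summation the tail beyond $N$ is $O(1/N)$, so a few hundred thousand terms suffice:

```python
import math

target = (math.pi - 1) / 2          # ≈ 1.0707963...
N = 200_000
s = sum(math.sin(k) / k for k in range(1, N + 1))
assert abs(s - target) < 1e-3
```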


real analysis - How can you prove that a function has no closed form integral?



I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:




  • addition/subtraction

  • multiplication/division

  • raising to powers and roots

  • trigonometric functions

  • exponential functions


  • logarithmic functions



, which when differentiated gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.



What sort of techniques are used to prove statements like this? What is this branch of mathematics called?






Merged with "How to prove that some functions don't have a primitive" by Ismael:




Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions.



I wonder how we can prove that kind of assertion?


Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative



$$\int f(x)\exp(g(x)) \, \mathrm dx$$



can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution to the differential equation:




$$f = h' + hg'$$



$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.



I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf



Liouville's original paper:





Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.




Michael Spivak's book on Calculus also has a section with a discussion of this.


linear algebra - Determinant of Large Matrix

$A=\left[\begin{array}{ccccc}{-2} & {-1} & {\cdots} & {-1} & {-1} \\ {-1} & {-2} & {\cdots} & {-1} & {-1} \\ {\vdots} & {} & {\ddots} & {} & {\vdots} \\ {-1} & {-1} & {\cdots} & {-2} & {-1} \\ {-1} & {-1} & {\cdots} & {-1} & {-2}\end{array}\right] \in \mathbb{R}^{53 \times 53}$



So we want to find the determinant of this big matrix. I tried some small cases and noticed a pattern: for even dimension the determinant is $n+1$, and for odd dimension it is $-n-1$, so the answer should be $-54$, I guess. But what is the formal method for this calculation? The idea I have in mind is to find the eigenvalues; their product will then give me the determinant.
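The conjectured pattern, and the value $-54$ for the $53\times 53$ case, can be confirmed with an exact determinant (the helper names here are mine):

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)  # these matrices are nonsingular
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def A(n):
    # -2 on the diagonal, -1 elsewhere, i.e. -(J + I).
    return [[-2 if i == j else -1 for j in range(n)] for i in range(n)]

# The pattern det A = (-1)^n (n+1), and the 53x53 case:
for n in (2, 3, 4, 5):
    assert det(A(n)) == (-1) ** n * (n + 1)
assert det(A(53)) == -54
```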





Wednesday, April 24, 2019

integration - Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?



I have seen the Fresnel integral



$$\int_0^\infty \sin x^2\, dx = \sqrt{\frac{\pi}{8}}$$



evaluated by contour integration and other complex analysis methods, and I have found these methods to be the standard way to evaluate this integral. I was wondering, however, does anyone know a real analysis method to evaluate this integral?


Answer




Let $u=x^2$; then the integral becomes
$$
\int_0^\infty \sin(u) \frac{\mathrm{d} u}{2 \sqrt{u}}
$$
The real analysis way of evaluating this integral is to consider a parametric family:
$$\begin{eqnarray}
I(\epsilon) &=& \int_0^\infty \frac{\sin(u)}{2 \sqrt{u}} \mathrm{e}^{-\epsilon u} \mathrm{d} u = \frac{1}{2} \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}\int_0^\infty u^{2n+\frac{1}{2}} \mathrm{e}^{-\epsilon u} \mathrm{d} u \\ &=& \frac{1}{2} \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} \Gamma\left(2n+\frac{3}{2}\right) \epsilon^{-\frac{3}{2}-2n} \\
&=& \frac{1}{2 \epsilon^{3/2}} \sum_{n=0}^\infty \left(-\frac{1}{\epsilon^2}\right)^n\frac{\Gamma\left(2n+\frac{3}{2}\right)}{\Gamma\left(2n+2\right)} \\
&\stackrel{\Gamma-\text{duplication}}{=}&\frac{1}{2 \epsilon^{3/2}} \sum_{n=0}^\infty \left(-\frac{1}{\epsilon^2}\right)^n\frac{\Gamma\left(n+\frac{3}{4}\right)\Gamma\left(n+\frac{5}{4}\right)}{\sqrt{2} n! \Gamma\left(n+\frac{3}{2}\right)} \\
&=& \frac{1}{(2 \epsilon)^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{5}{4}\right)}{\Gamma\left(\frac{3}{2}\right)} {}_2F_1\left(\frac{3}{4}, \frac{5}{4}; \frac{3}{2}; -\frac{1}{\epsilon^2}\right) \\

&\stackrel{\text{Euler integral}}{=}& \frac{1}{(2 \epsilon)^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{5}{4}\right)}{\Gamma\left(\frac{3}{2}\right)} \frac{1}{\operatorname{B}\left(\frac{5}{4}, \frac{3}{2}-\frac{5}{4}\right)} \int_0^1 x^{\frac{5}{4}-1} (1-x)^{\frac{3}{2}-\frac{5}{4} -1} \left(1+\frac{x}{\epsilon^2}\right)^{-3/4} \mathrm{d} x \\
&=& \frac{1}{2^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)\Gamma\left(\frac{5}{4}\right)}{\Gamma\left(\frac{3}{2}\right)} \frac{\Gamma\left(\frac{3}{2}\right)}{\Gamma\left(\frac{5}{4}\right) \Gamma\left(\frac{1}{4}\right)} \int_0^1 x^{\frac{5}{4}-1} (1-x)^{\frac{1}{4} -1} \left(\epsilon^2+x\right)^{-3/4} \mathrm{d} x
\end{eqnarray}
$$
Now we are ready to compute $\lim_{\epsilon \to 0} I(\epsilon)$:
$$\begin{eqnarray}
\lim_{\epsilon \to 0} I(\epsilon) &=& \frac{1}{2^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)} \int_0^1 x^{\frac{1}{2}-1} \left(1-x\right)^{\frac{1}{4}-1} \mathrm{d} x = \frac{1}{2^{3/2}} \frac{\Gamma\left(\frac{3}{4}\right)}{\Gamma\left(\frac{1}{4}\right)} \frac{\Gamma\left(\frac{1}{2}\right) \Gamma\left(\frac{1}{4}\right)}{\Gamma\left(\frac{3}{4}\right)} \\ &=& \frac{1}{2^{3/2}} \Gamma\left(\frac{1}{2}\right) = \frac{1}{2} \sqrt{\frac{\pi}{2}}
\end{eqnarray}
$$


Laplace transform of $ t^{1/2}$ and $ t^{-1/2}$



Prove the following Laplace transforms:



(a) $ \displaystyle{\mathcal{L} \{ t^{-1/2} \} = \sqrt{\frac{ \pi}{s}}} ,s>0 $



(b) $ \displaystyle{\mathcal{L} \{ t^{1/2} \} =\frac{1}{2s} \sqrt{\frac{ \pi}{s}}} ,s>0 $




I did (a) as follows:



(a) $ \displaystyle{\mathcal{L} \{ t^{-1/2} \} = \int_{0}^{\infty} e^{-st} t^{-1/2}dt }$. Substituting $st=u^2$ and using the fact that $\displaystyle { \int_{0}^{\infty} e^{-u^2}du=\frac{\sqrt{\pi}}{2} }$ we are done.



Is there a similar way for (b)? Can we make a substitution to reduce it to the form in (a)?



edit: I know the formula $ \displaystyle \mathcal{L} \{ t^n \} = \frac{\Gamma (n+1)}{s^{n+1}}, n>-1 ,s>0$ , but I would like to see a solution without this.



Thanks in advance!


Answer




Like I mentioned earlier, there is the rule $\mathcal{L}\{tf(t)\}=-F'(s)$, here applicable with $f(t)=t^{-1/2}$.



Or just directly apply $d/ds$ to part (a). Integration by parts is equivalent ($u=t^{1/2},dv=e^{-ts}dt$).
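A quick numerical cross-check of both transforms (my own sketch, not part of the original answer): the substitution $st=u^2$ turns (a) into a Gaussian integral and (b) into a second Gaussian moment, and even a crude midpoint rule reproduces both closed forms.

```python
import math

def integrate(f, a, b, n=100_000):
    """Midpoint-rule quadrature for a smooth integrand on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

s = 2.0
# (a) With t = u^2/s, L{t^(-1/2)} = (2/sqrt(s)) * ∫_0^∞ e^{-u^2} du = sqrt(pi/s).
La = (2 / math.sqrt(s)) * integrate(lambda u: math.exp(-u * u), 0.0, 10.0)
# (b) The same substitution gives (2/s^(3/2)) * ∫_0^∞ u^2 e^{-u^2} du = sqrt(pi/s)/(2s).
Lb = (2 / s ** 1.5) * integrate(lambda u: u * u * math.exp(-u * u), 0.0, 10.0)

print(La, math.sqrt(math.pi / s))
print(Lb, math.sqrt(math.pi / s) / (2 * s))
```

Truncating at $u=10$ is harmless here since $e^{-100}$ is far below machine precision.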


trigonometry - Evaluation of limit at infinity: $\lim_{x\to\infty} x^2 \sin(\ln(\cos(\frac{\pi}{x})^{1/2}))$

$$\lim_{x\to\infty} x^2 \sin(\ln(\cos(\frac{\pi}{x})^{1/2}))$$



What I tried: writing $1/x=t$, so that the limit is taken as $t\to 0$, and rewriting the cosine term in terms of sine.

abstract algebra - $\DeclareMathOperator{\Aut}{Aut}$ $\Aut(\mathbb{Z}/{n\mathbb{Z}})$ and $\Aut(\mathbb{Z}/{2n\mathbb{Z}})$ are isomorphic for $n \in \mathbb{N}$ odd



Let $n \in \mathbb{N}$ be odd. Show that: $$\Aut(\mathbb{Z}/{n\mathbb{Z}}) \cong \Aut(\mathbb{Z}/{2n\mathbb{Z}})$$



$\DeclareMathOperator{\Aut}{Aut}$ My attempt:


An automorphism $f \in \Aut(\mathbb{Z}/{n\mathbb{Z}})$ is uniquely determined by $f(1)$, since $1$ generates $\mathbb{Z}/{n\mathbb{Z}}$. $f(1)$ has to be a generator of $\mathbb{Z}/{n\mathbb{Z}}$, which means $n$ and $f(1)$ are relatively prime.


Thus, $\Aut(\mathbb{Z}/{n\mathbb{Z}})$ and $(\mathbb{Z}/{n\mathbb{Z}})^\times$ are actually isomorphic via the isomorphism $i \mapsto f_i$, where $f_i$ is an automorphism of $\mathbb{Z}/{n\mathbb{Z}}$ having $f_i(1) = i$.


Since we can similarly deduce that $\Aut(\mathbb{Z}/{2n\mathbb{Z}}) \cong (\mathbb{Z}/{2n\mathbb{Z}})^\times$, we have reduced the problem to showing that $$(\mathbb{Z}/{n\mathbb{Z}})^\times \cong (\mathbb{Z}/{2n\mathbb{Z}})^\times$$


Since $n$ and $2$ are relatively prime, we have:



$$\phi(2n) = \phi(2)\phi(n) = \phi(n)$$


Hence, both groups are of the same order, $\phi(n)$.


Now, I'm aware of the fact that $(\mathbb{Z}/{p\mathbb{Z}})^\times$ is a cyclic group if $p$ is prime, but that is certainly not the case for $2n$, so we cannot easily conclude that they are isomorphic.


The only thing that comes to mind is to try to find what the elementary divisors for an Abelian group of order $\phi(n)$ could be. For example, $\phi(n)$ is even for $n \ge 3$ so there exists a unique element of order $2$ in both groups, which is $-1$. So the isomorphism would send $-1$ to $-1$.


How should I proceed here?


Answer



If $n$ is odd, then $\newcommand{\Z}{\Bbb Z}\Z/2n\Z$ is isomorphic to $\Z/2\Z\times\Z/n\Z$. As $n$ is odd, any automorphism of $\Z/2\Z\times\Z/n\Z$ preserves this factorization, so comes from an automorphism of $\Z/2\Z$ and one of $\Z/n\Z$. But $\Z/2$ has only the trivial automorphism.
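A computational sanity check of this conclusion (my own sketch; `unit_orders` is an invented name): for finite abelian groups the sorted multiset of element orders determines the isomorphism type, so it suffices to compare those multisets for $(\mathbb{Z}/n\mathbb{Z})^\times$ and $(\mathbb{Z}/2n\mathbb{Z})^\times$ with $n$ odd.

```python
from math import gcd

def unit_orders(n):
    """Sorted multiset of element orders in the unit group (Z/nZ)^x."""
    orders = []
    for a in range(1, n + 1):
        if gcd(a, n) != 1:
            continue
        x, k = a % n, 1
        while x != 1:          # multiply by a until we return to 1
            x = x * a % n
            k += 1
        orders.append(k)
    return sorted(orders)

for n in (3, 5, 9, 15, 21, 45):
    assert unit_orders(n) == unit_orders(2 * n)
print("order multisets agree for the tested odd n")
```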


Tuesday, April 23, 2019

sequences and series - $\lim\limits_{n\to\infty} \frac{n}{\sqrt[n]{n!}} = e$




I don't know how to prove that
$$\lim_{n\to\infty} \frac{n}{\sqrt[n]{n!}} =e.$$

Are there other different (nontrivial) nice limit that gives $e$ apart from this and the following
$$\sum_{k = 0}^\infty \frac{1}{k!} = \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n = e\;?$$


Answer



In the series for $$e^n=\sum_{k=0}^\infty \frac{n^k}{k!},$$
the $n$th and biggest(!) of the (throughout positive) summands is $\frac{n^n}{n!}$.
On the other hand, all summands can be estimated as
$$ \frac{n^k}{k!}\le \frac{n^n}{n!}$$
and especially those
with $k\ge 2n$ can be estimated
$$ \frac{n^k}{k!}<\frac{n^{k}}{(2n)^{k-2n}\cdot n^{n}\cdot n!}=\frac{n^{n}}{n!}\cdot \frac1{2^{k-2n}}$$

and thus we find
$$\frac{n^n}{n!} \;\le\; e^n \;=\; \sum_{k=0}^\infty \frac{n^k}{k!} \;\le\; (2n+1)\,\frac{n^n}{n!} + \frac{n^n}{n!}\sum_{k=2n+1}^{\infty}\frac1{2^{k-2n}} \;\le\; (2n+3)\,\frac{n^n}{n!}.$$
Taking $n$th roots we find
$$ \frac n{\sqrt[n]{n!}}\le e\le \sqrt[n]{2n+3}\cdot\frac n{\sqrt[n]{n!}}.$$
Because $\sqrt[n]{2n+3}\to 1$ as $n\to \infty$, we obtain $$\lim_{n\to\infty}\frac n{\sqrt[n]{n!}}=e$$
from squeezing.
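As an illustrative numerical check (mine, not the answerer's), one can watch $\frac{n}{\sqrt[n]{n!}}$ creep up toward $e$; computing the factorial in log space avoids overflow:

```python
import math

def a(n):
    """n / (n!)^(1/n), computed via logarithms to avoid overflow."""
    log_fact = sum(math.log(k) for k in range(1, n + 1))
    return math.exp(math.log(n) - log_fact / n)

for n in (10, 100, 1000, 10000):
    print(n, a(n))  # slowly approaches e ≈ 2.71828
```

The convergence is slow (the error decays like $\ln n / n$ by Stirling's formula), which is visible in the printed values.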


real analysis - Convergence in $L_{infty}$ norm implies uniform convergence



I'm trying to prove the following claim:



If $f_n \in C_c$, where $C_c$ is the set of continuous functions with compact support, then $\mathrm{lim}_{n \rightarrow \infty} || f_n - f||_{\infty} = 0$ implies $f_n(x) \rightarrow f(x)$ uniformly.



So, according to my understanding,



$$ \mathrm{lim}_{n \rightarrow \infty} || f_n - f||_{\infty} = 0 $$
$$ \Leftrightarrow $$

$$ \forall \varepsilon > 0 \exists N: n > N \Rightarrow |f_n(x) - f(x)| \leq || f_n - f||_{\infty} < \varepsilon$$ $\mu$-almost everywhere on $X$.



Now my problem is that I don't see how uniform convergence follows from pointwise convergence $\mu$ almost everywhere. Can someone give me a hint? I guess I have to use the fact that they have compact support but I don't see how to put this together.



Thanks for your help!


Answer



You were not very specific about your hypotheses - I assume you're working on $\mathbb{R}^{n}$ with Lebesgue measure.



Suppose there exists a point $x$ such that $|f_{n}(x) - f(x)| > \varepsilon$. Then there exists an open set $U$ containing $x$ such that for all $y \in U$ you have $|f_{n}(y) - f(y)| > \varepsilon$ by continuity. But this contradicts the a.e. statement you gave.




In case you don't know yet that $f$ is continuous (or rather: has a continuous representative in $L^\infty$), a similar argument shows that $(f_{n})$ is a uniform Cauchy sequence (with the sup-norm, not only the essential sup-norm), hence $f$ will be continuous (in fact, uniformly continuous).



Note that I haven't used compact support at all, just continuity.



If you're working in a more general setting (like a locally compact space), you'd have to require that the measure gives positive mass to each non-empty open set.



Finally, note that $f$ need not have compact support. It will only have the property that it will be arbitrarily small outside some compact set ("vanish at infinity" is the technical term). For instance, $\frac{1}{1+|x|}$ can easily be uniformly approximated by functions with compact support.


sequences and series - converges or diverges? $\sum_{n=1}^\infty \sin^2(\frac{\pi}{n})$



$$\sum_{n=1}^\infty \sin^2(\frac{\pi}{n}) $$



First, I tried Divergence test, but the limit is $0$, so that's inconclusive. I don't think I can do integral test, since it's not a clean integration.




Did I do this right? If it's wrong, can you steer me in the right direction? I tried substituting $1-\cos^2(x)$ for $\sin^2(x)$. This would get:



$$\sum_{n=1}^\infty \sin^2(\frac{\pi}{n})$$
$$= \sum_{n=1}^\infty \left(1-\cos^2(\frac{\pi}{n})\right)$$
$$ = \sum_{n=1}^\infty 1 - \sum_{n=1}^\infty \cos^2(\frac{\pi}{n})$$
But that first sigma is divergent to infinity, so does the original series also diverge to infinity?



The tests I know are: Geometric, p-series, Divergence (nth term) test, Integral test, Direct Comparison test, Alternating Series Test, Absolute convergence, and Ratio test.


Answer




Using $\sin(x)\le x$ for all $x\ge0$, we have $\sin^2(\frac{\pi}{n})\le \frac{\pi^2}{n^2}$, so by comparison with the convergent series $\sum\frac1{n^2}$ we conclude that the given series converges.


Sunday, April 21, 2019

real analysis - Functions whose derivative is not continuous on a dense subset

Are there differentiable functions $F:(a,b)\rightarrow \mathbb{R}$, where the set of points at which the derivative of $F$ is discontinuous, is dense in $(a,b)$?


So far I've only found functions where derivatives are discontinuous only at a finite number of points.

probability - Expected number of rolls to get all 6 numbers of a die




I've been thinking about this for a while but don't know how to go about obtaining the answer. Find the expectation of the times of rolls if u want to get all 6 numbers when rolling a 6 sided die.



Thanks


Answer



Intuitively



You first roll a die. You get some value. Now, the probability of getting a different value on the next roll is $\frac{5}{6}$. Hence the expected number of rolls until a second distinct value appears is $\frac{6}{5}$. You continue this and get




$\frac{6}{6} + \frac{6}{5} + \frac{6}{4} + \frac{6}{3} + \frac{6}{2} + \frac{6}{1} = \color{blue}{14.7}$ rolls
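The coupon-collector argument above is easy to confirm by simulation; this is my own sketch (the function name is invented):

```python
import random

def rolls_until_all_faces(sides=6, rng=random):
    """Roll a fair die until every face has appeared; return the roll count."""
    seen, rolls = set(), 0
    while len(seen) < sides:
        seen.add(rng.randint(1, sides))
        rolls += 1
    return rolls

random.seed(0)                       # fixed seed for reproducibility
trials = 100_000
average = sum(rolls_until_all_faces() for _ in range(trials)) / trials
exact = sum(6 / k for k in range(1, 7))   # 6/6 + 6/5 + ... + 6/1 = 14.7
print(average, exact)
```

With $10^5$ trials the sample mean lands within a few hundredths of $14.7$.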


functional equations - Are there any real-valued functions, besides logarithms, for which $f(x*y) = f(x) + f(y)$?

Is there any real-valued function, $f$, which is not a logarithm, such that $∀ x,y$ in $ℝ$ , $f(x*y) = f(x) + f(y)$?



So far, all I can think of is $z$ where $z(x) = 0$ $∀ x$ in $ℝ$




EDIT:



Functions having a domain of $ℝ^+$ or a domain of $ℝ$/{0} are acceptable as well.



What are examples of functions, $f$, from $ℝ$/{0} to $ℝ$ which are not logarithms, such that
$∀ x,y$ in $ℝ$, $f(x*y) = f(x) + f(y)$?

Saturday, April 20, 2019

calculus - Evaluating the definite integral $int_{-infty}^{+infty} mathrm{e}^{-x^2}x^n,mathrm{d}x$



I recognize that the $\int_0^\infty \mathrm{e}^{-x}x^n\,\mathrm{d}x = \Gamma(n+1)$ and $\int_{-\infty}^{+\infty} \mathrm{e}^{-x^2}\,\mathrm{d}x = \sqrt{\pi}$. I am having difficulty, however with $\int_{-\infty}^{+\infty} \mathrm{e}^{-x^2}x^n\,\mathrm{d}x$. By the substitution $u=x^2$, this can be equivalently expressed as $\frac{1}{2} \int_{-\infty}^{+\infty} \mathrm{e}^{-u}u^{\frac{n-1}{2}}\,\mathrm{d}u$. This integral is similar to the first one listed (which equates to the $\Gamma$ function), except that its domain spans $\mathbb{R}$ like the second integral (which equates to $\sqrt{\pi}$). Any pointers on how to evaluate this integral would be helpful.


Answer




Let $I_n:=\int_{-\infty}^{+\infty}e^{-x^2}x^ndx$. If $n$ is odd then $I_n=0$ and for $p\geq 1$:
\begin{align}
I_{2p}&=\int_0^{+\infty}e^{-x^2}x^{2p}dx+\int_{-\infty}^0e^{-x^2}x^{2p}dx\\
&=\int_0^{+\infty}e^{-t^2}t^{2p}dt+\int_0^{+\infty}e^{-t^2}(-t)^{2p}dt\quad (\mbox{left: } t=x,\mbox{right: } t=-x)\\
&=2\int_0^{+\infty}e^{-t^2}t^{2p}dt\\
&=2\int_0^{+\infty}e^{-s}s^p\frac 1{2\sqrt s}ds \quad (s=t^2)\\
&=\int_0^{+\infty}e^{-s}s^{p-1/2}ds\\
&=\left[-e^{-s}s^{p-1/2}\right]_0^{+\infty}+\left(p-\frac 12\right)\int_0^{+\infty}e^{-s}s^{p-1/2-1}ds\\
&=\left(p-\frac 12\right)I_{2(p-1)}.
\end{align}

Finally we get $I_{2p+1}=0$ and $I_{2p}=\sqrt \pi\prod_{j=1}^p\left(j-\frac 12\right)$ for all $p\geq 0$.
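A numerical spot-check of these moment formulas (my own sketch): truncating the integral at $|x|=8$ loses only about $e^{-64}$, so a midpoint rule matches $\sqrt{\pi}\prod_{j=1}^p\left(j-\frac12\right)$ closely, and the odd moments vanish by symmetry.

```python
import math

def gaussian_moment(n, L=8.0, steps=200_000):
    """Midpoint rule for ∫_{-L}^{L} e^{-x^2} x^n dx (tails beyond |x|=8 negligible)."""
    h = 2 * L / steps
    return sum(math.exp(-x * x) * x ** n
               for x in (-L + (i + 0.5) * h for i in range(steps))) * h

for p in range(4):
    exact = math.sqrt(math.pi) * math.prod(j - 0.5 for j in range(1, p + 1))
    print(2 * p, gaussian_moment(2 * p), exact)        # even moments
    print(2 * p + 1, gaussian_moment(2 * p + 1))       # odd moments ≈ 0
```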


Friday, April 19, 2019

find the square root of the complex number 1 + 2i?


Find the square root of the complex number $1 + 2i$



I was trying this question many times, but I could not get it and I don't know where to start.


Actually, after squaring $1+2i$ I got $-3+4i$, and I also tried multiplying by $1+i$; however, I don't know the exact answer.


Thanks in advance

How to use mathematical induction with inequalities?



I've been using mathematical induction to prove propositions like this:
$$1 + 3 + 5 + \cdots + (2n-1) = n^2$$



Which is an equality. I am, however, unable to solve inequalities. For instance, this one:



$$ 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \leq \frac{n}{2} + 1 $$



Every time my books solves one, it seems to use a different approach, making it hard to analyze. I wonder if there is a more standard procedure for working with mathematical induction (inequalities).




There are a lot of questions related to solving this kind of problem. Like these:





Can you give me a more in depth explanation of the whole procedure?


Answer



The inequality certainly holds at $n=1$. We show that if it holds when $n=k$, then it holds when $n=k+1$. So we assume that for a certain number $k$, we have
$$1+\frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{k} \le \frac{k}{2}+1.\tag{$1$}$$
We want to prove that the inequality holds when $n=k+1$. So we want to show that

$$1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{k}+\frac{1}{k+1}\le\frac{k+1}{2}+1.\tag{$2$}$$



How shall we use the induction assumption $(1)$ to show that $(2)$ holds? Note that the left-hand side of $(2)$ is pretty close to the left-hand side of $(1)$. The sum of the first $k$ terms in $(2)$ is just the left-hand side of $(1)$. So the part before the $\frac{1}{k+1}$ is, by $(1)$, $\le \frac{k}{2}+1$.



Using more formal language, we can say that by the induction assumption,
$$1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{k}+\frac{1}{k+1}\le \frac{k}{2}+1+\frac{1}{k+1}.$$



We will be finished if we can show that
$$\frac{k}{2}+1+\frac{1}{k+1}\le \frac{k+1}{2}+1.$$
This is equivalent to showing that

$$\frac{k}{2}+1+\frac{1}{k+1}\le \frac{k}{2}+\frac{1}{2}+1.$$
The two sides are very similar. We only need to show that
$$\frac{1}{k+1}\le \frac{1}{2}.$$
This is obvious, since $k\ge 1$.



We have proved the induction step. The base step $n=1$ was obvious, so we are finished.
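For what it's worth, the inequality is also easy to spot-check numerically before attempting the induction (my own sketch):

```python
# Check 1 + 1/2 + ... + 1/n <= n/2 + 1 for the first 500 values of n.
H = 0.0
for n in range(1, 501):
    H += 1 / n
    assert H <= n / 2 + 1
print("inequality verified for n = 1, ..., 500")
```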


algebra precalculus - I'm having trouble with induction. Prove $1^3 + 2^3 + 3^3 + \cdots + n^3 = \frac{n^2(n+1)^2}{4}$





I started a new course and I'm expected to know this stuff, and I'm having trouble learning some on my own.
I'm stuck with this problem:
Prove $1^3 + 2^3 + 3^3 + \cdots + n^3 = \frac{n^2(n+1)^2}{4}$ using induction (the $1^3+2^3+\cdots$ is written in sum notation, although I don't know how to enter that here, sorry).
I started substituting $n \rightarrow n+1$ but I don't know what to do next. Any help would be extremely appreciated.


Answer



$$1+2^3+...+n^3+(n+1)^3\underset{Hyp.}{=}\frac{n^2(n+1)^2}{4}+(n+1)^3=\frac{n^2(n+1)^2+4(n+1)^3}{4}=\frac{(n+1)^2(n^2+4n+4)}{4}=\frac{(n+1)^2(n+2)^2}{4}.$$
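The identity being proved can also be checked directly for small $n$ (a quick script of mine, separate from the induction proof):

```python
# The claim: 1^3 + 2^3 + ... + n^3 = n^2 (n+1)^2 / 4, checked by brute force.
for n in range(1, 200):
    assert sum(k ** 3 for k in range(1, n + 1)) == n ** 2 * (n + 1) ** 2 // 4
print("identity verified for n < 200")
```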


probability - expected value of a function of two random variables



I am trying to calculate this sum (which is expected value of a function of two independent Poisson random variables):



$$\displaystyle\sum_{x=0}^{\infty}\sum_{y=0}^{\infty}\frac{1}{a^x+a^y-a^{x+y}}\frac{e^{-\lambda}(\lambda)^x}{x!}\frac{e^{-\lambda}(\lambda)^y}{y!}$$




Is there any closed form solution for this?


Answer



I calculated this in Mathematica, but it couldn't find a closed form for this sum, nor for the integral over $x$ and $y$ from $0$ to $\infty$, so I guess a nice closed form doesn't exist.


Algebra with complex numbers

Let $m$ be the minimum value of $|z|$, where $z$ is a complex number such that

$$ |z-3i | + |z-4 | = 5. $$
Then $m$ can be written in the form $\dfrac{a}{b}$, where $a$ and $b$ are relatively prime positive integers. Find the value of $a+b$.



I cannot seem to get rid of the absolute values, making it impossible to solve this. Is it possible for someone to give me some help?



Thank you!

Thursday, April 18, 2019

calculus - Function as a "constant of integration"



I'm reading a book Differential Equations with Applications and Historical Notes, 3rd edition, specifically section 8 about exact equations. The author is trying to prove that iff $\partial M/\partial y = \partial N/\partial x$ then equation
\begin{equation}
M(x,y)dx + N(x,y)dy = 0

\end{equation}

is exact differential equation.
At some point we integrate equation
\begin{equation}
\frac{\partial f(x,y)}{\partial x} = M(x,y)
\end{equation}

to get
\begin{equation}
f(x, y) = \int M(x,y)dx + g(y)
\end{equation}

The author states that the function $g(y)$ appears as a constant of integration because if we take the derivative of both sides with respect to $x$, $g(y)$ disappears, since it doesn't depend on $x$.
That's the part I have trouble with: $y$ is a dependent variable and $x$ is an independent variable, so wouldn't the derivative of $g(y)$ with respect to $x$ be

\begin{equation}
\frac{d\,g(y)}{dy} \frac{dy}{dx}
\end{equation}

and not $0$ ?


Answer



This is a common poorly written part in differential equations textbooks, because they don't want to spend time discussing differential forms.



At this point we forget that $y$ depends on $x$. Of course then the equation $M(x,y)dx+N(x,y)dy=0$ looks weird, and indeed it's wrong. What is meant there is that if we have a dependence of $x$ and $y$, a curve on $x$-$y$ plane, denoted $\gamma$, then the pullback of $M(x,y)dx+N(x,y)dy$ on $\gamma$ is $0$. For example, if we can parametrize $\gamma$ by $x$ (i.e. we can write $y$ as a function of $x$), then this condition says $\frac{dy}{dx} = -\frac{M(x,y)}{N(x,y)}$. That's why we want to find such $\gamma$.



The exactness condition means that $df=M(x,y)dx+N(x,y)dy$. Then the level sets of $f$, $\{(x,y)\mid f(x,y)=c\}$, give us such $\gamma$'s. Note that exactness follows from closedness on simply connected domains.




So, one can separate this problem into two stages, where $x$ and $y$ are independent, and then were we look for a required dependence.



Alternatively, instead of using differential forms, one can think of $(M,N)$ as a vector field on the $x$-$y$ plane, perpendicular to the $\gamma$'s, which are the level sets of the function $f$ whose gradient is $(M,N)$.
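A small numerical illustration of the two-stage view (entirely my own example, not from the book): take $M=2xy$ and $N=x^2+3y^2$, which satisfy $\partial M/\partial y = \partial N/\partial x = 2x$. Integrating $M$ in $x$ gives $x^2y+g(y)$, and matching $N$ forces $g(y)=y^3$. A finite-difference check confirms $\partial f/\partial x = M$ and $\partial f/\partial y = N$ with $x$ and $y$ treated as independent:

```python
def f(x, y):
    # Potential for the exact equation (2xy) dx + (x^2 + 3y^2) dy = 0:
    # integrating M = 2xy in x gives x^2 y + g(y); matching N fixes g(y) = y^3.
    return x * x * y + y ** 3

def M(x, y): return 2 * x * y
def N(x, y): return x * x + 3 * y * y

h = 1e-6
x0, y0 = 1.3, -0.7
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)  # ∂f/∂x, y held fixed
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)  # ∂f/∂y, x held fixed
print(fx, M(x0, y0))
print(fy, N(x0, y0))
```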


trigonometry - Sine series: angle multipliers add to 1


It is known that in a sine series with angles in arithmetic progression (I refer to this question):


$\sum_{k=0}^{n-1}\sin (a+k \cdot d)=\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1) \cdot d}{2}\biggr)$


What if $k$ does not go from $0$ to $n-1$, but its elements are strictly positive rational numbers,


with $0 < k_i < 1$, and $\sum_{i=1}^{n} k_i=1$,
and $k$ is monotonically increasing


Is there a way to simplify:


$\sum_{i=1}^{n} \sin (a+k_i \cdot d)$


in a way analogous to the first formula? Maybe using the property that k adds to $1$ and using $\sin(\pi/2)=1$ ??


e.g.:
$k_i = (0.067,0.133,0.200,0.267,0.333)$ (5 increasing elements between $0$ & $1$ which sum to 1)
$a=90$
$d=40$



Sum $= \sin(90+0.067\cdot40)+\sin(90+0.133\cdot40)+\sin(90+0.200\cdot40)+\sin(90+0.267\cdot40)+\sin(90+0.333\cdot40)$


Answer



There is no way to give this a general form. The definition of $k_i$ is too general. I have been trying to come up with a function that would give $k_i$ while also fitting your terms, but I am having difficulty. This would need specific values of $k_i$, and each term of the sum would need to be calculated individually. I did try to change the sum into something else (work below), but it seems that this is only more complicated.

$$\sum_{i=1}^n \sin (a + k_i d)$$

We can separate the sum using the trigonometric addition formula $$\sin (\alpha + \beta) = \sin\alpha \cos\beta + \sin\beta \cos\alpha:$$ $$\sum_{i=1}^n [\sin (a) \cos (k_i d) + \sin (k_i d) \cos (a)] = \sin a \sum_{i=1}^n \cos(k_i d) + \cos a \sum_{i=1}^n \sin(k_i d)$$

Past this point, there is no general form. You can attempt to use the multiple-angle formulas, but this only seems to complicate things. In conclusion, I believe your best option would be to just use the original sum and calculate it normally. Unless you have a more rigorous definition for $k_i$, there is no general form that will apply.
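The decomposition in the answer is easy to verify numerically with the example data from the question (my own sketch):

```python
import math

a, d = math.radians(90), math.radians(40)
k = [0.067, 0.133, 0.200, 0.267, 0.333]  # the k_i from the question

direct = sum(math.sin(a + ki * d) for ki in k)
split = (math.sin(a) * sum(math.cos(ki * d) for ki in k)
         + math.cos(a) * sum(math.sin(ki * d) for ki in k))
print(direct, split)  # the two values agree to machine precision
```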


Wednesday, April 17, 2019

calculus - Evaluate $\lim_{n\to\infty} \left( \frac{n!}{n^n} \right)^{1/n}$

How do I go about evaluating this?



$$\lim_{n\to\infty} \left( \dfrac{n!}{n^n} \right)^{\dfrac{1}{n}}$$

Euclidean Algorithm for polynomials

I know how to use the extended euclidean algorithm for finding the GCD of integers but not polynomials. I can't really find any good explanations of it online. The question here is to find the GCD of

$m(x) = x^3+6x+7$ and $n(x) = x^2+3x+2$.



I try to use it the same way as for integers, but I don't really get anywhere: I just get huge expressions without ever reducing them or getting closer to the GCD.
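The algorithm really is the same as for integers: divide, keep the remainder, repeat until the remainder is zero. Below is a minimal sketch of mine (coefficient lists over the rationals; `poly_mod` and `poly_gcd` are invented names) applied to the two polynomials from the question:

```python
from fractions import Fraction

def poly_mod(a, b):
    """Remainder of polynomial a divided by b (coeff lists, highest degree first)."""
    a = [Fraction(c) for c in a]
    while len(a) >= len(b):
        if a[0] != 0:
            factor = a[0] / b[0]
            for i in range(1, len(b)):
                a[i] -= factor * b[i]   # cancel the leading term of a against b
        a.pop(0)
    while a and a[0] == 0:              # strip leading zeros of the remainder
        a.pop(0)
    return a

def poly_gcd(a, b):
    """Monic gcd via the Euclidean algorithm, exactly as for integers."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b:
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]        # normalize the leading coefficient to 1

m = [1, 0, 6, 7]   # x^3 + 6x + 7
n = [1, 3, 2]      # x^2 + 3x + 2
print(poly_gcd(m, n))  # coefficients of x + 1
```

Here the remainder chain is $13x+13$, then $0$, so the script reports the monic gcd $x+1$.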

Tuesday, April 16, 2019

elementary number theory - Find the remainder $4444^{4444}$ when divided by 9

Find the remainder $4444^{4444}$ when divided by 9


When a number is divided by $9$, the possible remainders are $0, 1, 2, 3, 4, 5, 6, 7, 8$; we know that $0$ is not a possible answer here. My friend told me the answer is $7$, but how?


I am thinking of dividing $4444$ by $9$, which leaves a remainder of $7$, so $4444 \equiv 7 \pmod 9$. Based on that I am guessing $4444^{4444} \equiv 7^{4444} \pmod 9$, so to me, no matter how big the power is, the remainder will still be $7$. Is it correct to think like that?
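One caution (my own note, not from the post): the exponent does matter in general, because powers of $7$ cycle mod $9$ with period $3$; it is only because $4444 \equiv 1 \pmod 3$ that the remainder comes out to $7$ here. A one-line machine check:

```python
# 7^k mod 9 cycles with period 3 (7, 4, 1, ...), so the exponent matters:
print([pow(7, k, 9) for k in range(1, 7)])  # [7, 4, 1, 7, 4, 1]
# 4444 = 3*1481 + 1, so 7^4444 ≡ 7^1 ≡ 7 (mod 9):
print(pow(4444, 4444, 9))  # 7
```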

calculus - Sin properties - Why is this undefined?

Why is sin(-π/2)^(1/2) undefined? sin(-π/2) is -1, so I thought the answer would be the same, -1^(1/2)=-1? But my calculator and the answer sheet both say undefined...

calculus - Calculating $sum_{k=0}^{n}sin(ktheta)$




I'm given the task of calculating the sum $\sum_{i=0}^{n}\sin(i\theta)$.




So far, I've tried converting each $\sin(i\theta)$ in the sum into its taylor series form to get:
$\sin(\theta)=\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}-\frac{\theta^7}{7!}...$
$\sin(2\theta)=2\theta-\frac{(2\theta)^3}{3!}+\frac{(2\theta)^5}{5!}-\frac{(2\theta)^7}{7!}...$
$\sin(3\theta)=3\theta-\frac{(3\theta)^3}{3!}+\frac{(3\theta)^5}{5!}-\frac{(3\theta)^7}{7!}...$
...
$\sin(n\theta)=n\theta-\frac{(n\theta)^3}{3!}+\frac{(n\theta)^5}{5!}-\frac{(n\theta)^7}{7!}...$



Therefore the sum becomes,
$\theta(1+...+n)-\frac{\theta^3}{3!}(1^3+...+n^3)+\frac{\theta^5}{5!}(1^5+...+n^5)-\frac{\theta^7}{7!}(1^7+...+n^7)...$



But it's not immediately obvious what the next step should be.



I also considered expanding each $\sin(i\theta)$ using the trigonemetry identity $\sin(A+B)$, however I don't see a general form for $\sin(i\theta)$ to work with.


Answer



You may write, for any $\theta \in \mathbb{R}$ such that $\sin(\theta/2) \neq 0$,

$$
\begin{align}
\sum_{k=0}^{n} \sin (k\theta)&=\Im \sum_{k=0}^{n} e^{ik\theta}\\\\
&=\Im\left(\frac{e^{i(n+1)\theta}-1}{e^{i\theta}-1}\right)\\\\
&=\Im\left( \frac{e^{i(n+1)\theta/2}\left(e^{i(n+1)\theta/2}-e^{-i(n+1)\theta/2}\right)}{e^{i\theta/2}\left(e^{i\theta/2}-e^{-i\theta/2}\right)}\right)\\\\
&=\Im\left( \frac{e^{in\theta/2}\left(2i\sin((n+1)\theta/2)\right)}{\left(2i\sin(\theta/2)\right)}\right)\\\\
&=\Im\left( e^{in\theta/2}\frac{\sin((n+1)\theta/2)}{\sin(\theta/2)}\right)\\\\
&=\Im\left( \left(\cos (n\theta/2)+i\sin (n\theta/2)\right)\frac{\sin((n+1)\theta/2)}{\sin(\theta/2)}\right)\\\\
&=\frac{\sin(n\theta/2)\sin ((n+1)\theta/2)}{\sin(\theta/2)}.
\end{align}

$$
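A quick numerical check of the closed form derived above (my own sketch):

```python
import math

def sin_sum(n, theta):
    """Direct sum of sin(k*theta) for k = 0, ..., n."""
    return sum(math.sin(k * theta) for k in range(n + 1))

def closed_form(n, theta):
    # sin(nθ/2) sin((n+1)θ/2) / sin(θ/2), valid when sin(θ/2) != 0
    return (math.sin(n * theta / 2) * math.sin((n + 1) * theta / 2)
            / math.sin(theta / 2))

for n, theta in [(5, 0.3), (25, 0.37), (100, 2.1)]:
    print(n, theta, sin_sum(n, theta), closed_form(n, theta))
```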


Monday, April 15, 2019

sequences and series - Tricky question on binomial

Let's say there's a series of the form $$S=\frac{1}{10^2}+\frac{1\cdot3}{1\cdot2\cdot10^4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot10^6}+...$$ Now i had written the rth term as $$T_r=\frac{1\cdot3\cdot5....(2r-1)}{1\cdot2\cdot3.... r\cdot10^{2r}}=\frac{2r!}{r!\cdot r!\cdot2^r\cdot10^{2r}}$$ I came to the second equivalence by mutliplying and dividing the first expression with $2\cdot4\cdot6....2r\;$and then taking out a power of 2 from each of the even numbers multiplied in the denomininator. From the looks of it, these expressions tend to give the idea of being solved using binomial most probably the expansion for negative indices but I don't understand how to get to the result from here

number theory - Calculating ${34}^{429} mod 431$

I am trying to calculate $${34}^{429} \mod 431$$ by hand. (This follows from $34^{-1}\mod 431$).



I think I have made mistakes in my working, and have three different answers thus far from the attempts:



$$351, 306, 134$$




Is one of these correct?
If none of the above is correct please provide an answer with working.
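Since $431$ is prime, Fermat's little theorem gives $34^{430} \equiv 1 \pmod{431}$, so $34^{429} \equiv 34^{-1} \pmod{431}$. A direct machine computation (my own check, not part of the post) returns $393$, which suggests none of the three candidates above is correct:

```python
# 431 is prime, so 34^430 ≡ 1 (mod 431) and hence 34^429 ≡ 34^(-1) (mod 431).
r = pow(34, 429, 431)
print(r)             # 393
print(34 * r % 431)  # 1, confirming r is indeed the inverse of 34
```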

complex analysis - Show that $\int\nolimits^{\infty}_{0} x^{-1} \sin x\, dx = \frac\pi2$

Show that $\int^{\infty}_{0} x^{-1} \sin x dx = \frac\pi2$ by integrating $z^{-1}e^{iz}$ around a closed contour $\Gamma$ consisting of two portions of the real axis, from -$R$ to -$\epsilon$ and from $\epsilon$ to $R$ (with $R > \epsilon > 0$) and two connecting semi-circular arcs in the upper half-plane, of respective radii $\epsilon$ and $R$. Then let $\epsilon \rightarrow 0$ and $R \rightarrow \infty$.


[Ref: R. Penrose, The Road to Reality: a complete guide to the laws of the universe (Vintage, 2005): Chap. 7, Prob. [7.5] (p. 129)]


Note: Marked as "Not to be taken lightly", (i.e. very hard!)


Update: correction: $z^{-1}e^{iz}$ (Ref: http://www.roadsolutions.ox.ac.uk/corrections.html)

real analysis - Evaluating the limit of a sequence given by recurrence relation $a_1=sqrt2$, $a_{n+1}=sqrt{2+a_n}$. Is my solution correct?


Problem


The sequence $(a_n)_{n=1}^\infty$ is given by recurrence relation:


  • $a_1=\sqrt2$,

  • $a_{n+1}=\sqrt{2+a_n}$.

Evaluate the limit $\lim_{n\to\infty} a_n$.



Solution


  • Show that the sequence $(a_n)_{n=1}^\infty$ is monotonic. The statement $$V(n): a_n < a_{n+1}$$ holds for $n = 1$, that is $\sqrt2 < \sqrt{2+\sqrt2}$. Let us assume the statement holds for $n$ and show that $V(n) \implies V(n+1)$. We have that $$a_n < a_{n+1}.$$ Adding 2 to both sides and taking square roots, we have that $$\sqrt{2+a_n} < \sqrt{2+a_{n+1}},$$ that is $a_{n+1} < a_{n+2}$ by definition.

  • Find bounds for $a_n$. The statement $$W(n): 0 < a_n < 2$$ holds for $n=1$, that is $0 < \sqrt2 < 2$. Let us assume the statement holds for $n$ and show that $W(n) \implies W(n+1)$. We have that $$0 < a_n < 2.$$ Adding two and taking square roots, we have that $$0 < \sqrt2 < \sqrt{2+a_n} < \sqrt4 = 2.$$

  • The limit $\lim_{n\to\infty} a_n$ exists, because $(a_n)_{n=1}^\infty$ is a bounded monotonic sequence. Let $A = \lim_{n\to\infty} a_n$.

  • Therefore the limit $\lim_{n \to\infty} a_{n+1}$ exists as well and $\lim_{n \to\infty} a_{n+1} = A$. (For $(n_k)_{k=1}^\infty = (2,3,4, \dots)$, we have that $(a_{n_k})_{k=1}^\infty$ is a subsequence of $(a_n)_{n=1}^\infty$, from which the statement follows.)

  • We have that $a_{n+1} = f(a_n)$. That means that $A = \lim_{n\to\infty} a_n = \lim_{n \to\infty} {f(a_n)} = f(\lim_{n \to\infty} a_n) = f(A) = \sqrt{2 + A}$. Solving the equation $A = \sqrt{2 + A}$, we get $A = -1 \lor A = 2$.

  • Putting it all together, we get that $A = 2$, because the terms of the sequence are increasing and $a_1 > 0$.

Is my solution correct?


Answer



Looks great. Here is a fun trick I've seen to answer this question.



Using the half angle formula, notice the following:


$$\cos\left(\frac{\pi}{4}\right)=\frac{1}{2}\sqrt 2\\\cos\left(\frac{\pi}{8}\right)=\sqrt{\frac{1}{2}(1+\frac{1}{2}\sqrt 2)}=\frac{1}{2}\sqrt{2+\sqrt 2}\\\cos\left(\frac{\pi}{16}\right)=\sqrt{\frac{1}{2}(1+\frac{1}{2}\sqrt{ 2+\sqrt2})}=\frac{1}{2}\sqrt{2+\sqrt {2+\sqrt 2}}\\\vdots\\\cos\left(\frac{\pi}{2^{n+1}}\right)=\underbrace{\frac{1}{2}\sqrt{2+\sqrt{2+\sqrt{2+{\ldots}}}}}_\text{n times}=\frac{1}{2}a_{n}$$


Now let $n$ approach infinity.


Sunday, April 14, 2019

limits - Which function grows faster?



Which function grows faster



$f(n)= 2^{n^2+3n}$ and $g(n) = 2^{n+1}$



by using the limit theorem I will first simplify




then I will just get $$\lim_{n \to \infty} \dfrac{2^{n^2+3n}}{2^{n+1}}=\lim_{n \to \infty} 2^{n^2+3n-n-1}=\lim_{n \to \infty} 2^{n^2+2n-1}=\infty$$



Is this enough?
I say it then goes to infinity, so $f(n)$ grows faster. I am asking this because I have to find it by using limits, but I didn't need to use l'Hôpital's rule!


Answer



Before Edit:
Your idea was correct, but you didn’t simplify the limit properly.
$$\lim_{n \to \infty} \frac{2^{n^2+3n}}{2^n+1}$$
It is enough to divide both the numerator and denominator by $2^n$.

$$\lim_{n \to \infty} \frac{\frac{2^{n^2+3n}}{2^n}}{\frac{2^n+1}{2^n}} = \lim_{n \to \infty} \frac{2^{n^2+3n-n}}{2^{n-n}+\frac{1}{2^n}} = \lim_{n \to \infty} \frac{2^{n^2+2n}}{1+\frac{1}{2^n}}$$
As $n \to \infty$, it becomes clear that the limit tends to $\infty$ since the numerator tends to $\infty$ while the denominator tends to $1$.



After Edit: Yes, your way is correct.


My proof of Cauchy functional equation?



Although I have not formally studied functional equations, I came upon the Cauchy functional equation and tried to prove it. Here is what I have done:




We are given the condition, $f(x+y)=f(x)+f(y)$.



So, for some constant $a$ and another constant $f(a)=b$, we have $f(x+a)=f(x)+b$.



Differentiating both sides wrt $x$, we have $f'(x+a)=f'(x)$.



But this result is valid for any constant $a$, and hence $f'(x)=c$, for some constant $c$. This gives us $f(x)=cx+d$. Putting this result into original condition, we have $c(x+y)+d = cx +d+ cy +d$. Hence $d=0$ and $f(x)=cx$, for some constant $c$.



Is my proof right or are there holes which needs to be filled? I asked here because it is different from the proof I found. My main concern is that I have assumed that the function is differentiable. Is there any elementary way to patch up for non differentiable functions? What about some other points?



Answer



There are many conditions you could assume in order to get the linear function. For instance continuity.



Hint: Naturals $\to$ Rationals $\to$ Reals (Continuity)



You could also replace continuity with the following:
$f(xy) = f(x)f(y)$



These are the ones I know of that guarantee a linear solution. However as it is, without another assumption, it is wrong to say that only linear functions satisfy above. I'll post a reference when I get one.




Edit: Your proof is correct if you assume differentiability everywhere.


algebra precalculus - Intuition behind sum of multiplication arithmetic sequence

Maybe this is a stupid question, but please guide and enlighten me patiently. I have just learned something that quite shocked me. Let's start from this simple fact:

$$\sum_{k=1}^n k=\frac{n(n+1)}{2}\tag{1}$$
The summation above is the sum of an arithmetic progression with common difference $1$, which I already knew. Then, it turns out (I realized these when playing with Wolfram|Alpha)
$$\begin{align}\sum_{k=1}^n k(k+1)&=\frac{n(n+1)(n+2)}{3}\tag{2}\\\sum_{k=1}^n k(k+1)(k+2)&=\frac{n(n+1)(n+2)(n+3)}{4}\tag{3}\\\sum_{k=1}^n k(k+1)(k+2)(k+3)&=\frac{n(n+1)(n+2)(n+3)(n+4)}{5}\tag{4}\\\end{align}$$
and it seems (I haven't proved it yet)
$$\sum_{k=1}^n k(k+1)(k+2)\cdots(k+r)=\frac{n(n+1)(n+2)(n+3)(n+4)\cdots(n+r+1)}{r+2}\tag{5}$$
We have an obvious pattern here. I know the intuition behind $(1)$, but I am wondering: what are the intuitions for the other sums, $(2),\,(3),\,(4),\,(5)$?



I can derive $(2)$ using well-known formulas for the arithmetic series and the square pyramidal numbers, but how are the other formulas, $(3),\,(4),\,(5)$, derived? Do they use Faulhaber's formula?
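Pattern $(5)$ is at least easy to test by brute force (a quick script of mine, not a proof):

```python
import math

def lhs(n, r):
    # sum_{k=1}^n k(k+1)...(k+r)
    return sum(math.prod(range(k, k + r + 1)) for k in range(1, n + 1))

def rhs(n, r):
    # n(n+1)...(n+r+1) / (r+2); the division is always exact
    return math.prod(range(n, n + r + 2)) // (r + 2)

for n in range(1, 30):
    for r in range(0, 6):
        assert lhs(n, r) == rhs(n, r)
print("identity (5) holds for the tested n and r")
```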

Saturday, April 13, 2019

integration - How to show $\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}$





I am trying to show $\displaystyle{\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}}.$



Any help?
(I am having trouble using the infinite half-circle contour.)




Or more specifically, what is the residue $\text{res} \left(\frac{1}{z^3+1},z_0=e^\frac{\pi i}{3} \right )$



Thanks!


Answer



There are already two answers showing how to find the integral using just calculus. It can also be done by the Residue Theorem:



It sounds like you're trying to apply RT to the closed curve defined by a straight line from $0$ to $A$ followed by a circular arc from $A$ back to $0$. That's not going to work, because there's no reason the integral over the semicircle should tend to $0$ as $A\to\infty$.



How would you use RT to find $\int_0^\infty dt/(1+t^2)$? You'd start by noting that $$\int_0^\infty\frac{dt}{1+t^2}=\frac12\int_{-\infty}^\infty\frac{dt}{1+t^2},$$and apply RT to the second integral.




You can't do exactly that here, because the function $1/(1+t^3)$ is not even. But there's an analogous trick available.



Hint: Let $$f(z)=\frac1{1+z^3}.$$If $\omega=e^{2\pi i/3}$ then $$f(\omega z)=f(z).$$(Now you're going to apply RT to the boundary of a certain sector of opening $2\pi/3$... be careful about the "$dz$"...)


linear algebra - Proving that a right (or left) inverse of a square matrix is unique using only basic matrix operations

Proving that a right (or left) inverse of a square matrix is unique using only basic matrix operations



-- i.e. without any reference to higher-order matters like rank, vector spaces or whatever ( :)).




More precisely, armed with the knowledge only of:




  • rules of matrix equality check, addition, multiplication, distributive law and friends

  • Gauss-Jordan elimination and appropriate equation system solution cases for the reduced row-echelon form



Thanks in advance.

Friday, April 12, 2019

sequences and series - Is the sum of natural numbers equal to $-\frac{1}{8}$?

I came across the following video on YouTube: Sum of all natural numbers (- 1/8).


Basically what happens is: \begin{align*} 1+2+3+\dotsb &= N \\ 1+(2+3+4)+(5+6+7)+\dotsb &= N\\ 1+9+18+27+\dotsb &= N\\ 1+9(1+2+3+4+\dotsb)&= N\\ 1+9N &= N \end{align*} and therefore $N=-\frac{1}{8}$.


This is directly in contradiction with the well-known result of $-\frac{1}{12}$.


What is the problem with this reasoning? Was this result discovered earlier? Is this a consequence of Riemann's Rearrangement Theorem? Thanks in advance.


This is a repost of my previous question, because some people said it was a duplicate of "Why is the sum of natural numbers $-1/12$?"

elementary number theory - Find the maximum positive integer that divides $n^7+n^6-n^5-n^4$





Find the maximum positive integer that divides all the numbers of the form $$n^7+n^6-n^5-n^4 \ \ \ \mbox{with} \ n\in\mathbb{N}-\left\{0\right\}.$$




My attempt



I can factor the polynomial



$n^7+n^6-n^5-n^4=n^4(n-1)(n+1)^2\ \ \ \forall n\in\mathbb{N}.$




If $n$ is even, then there exists $k\in\mathbb{N}-\left\{0\right\}$ such that $n=2k$, so:



$n^4(n-1)(n+1)^2=2^4 k^4(2k-1)(1+2k)^2$



If $n$ is odd, then there exists $k\in\mathbb{N}$ such that $n=2k+1$, so



$$n^4(n-1)(n+1)^2=2^3\,k\,(2k+1)^4(k+1)^2$$



Can I conclude that the maximum positive integer that divides all these numbers is $N=2^3$? (Please help me improve my English too, thanks!)







Note: I correct my "solution" after a correction... I made a mistake :\


Answer



we can use some common sense.



If $n$ is even, $16\mid n^4$. If $n$ is odd, then $2\nmid n^4$, but $n+1$ and $n-1$ are both even, so $2^3\mid(n-1)(n+1)^2$; moreover, one of $n-1$ and $n+1$ is divisible by $4$, so in fact $2^4\mid(n-1)(n+1)^2$. So $2^4$ will divide all such numbers.



Can any higher powers of $2$ divide it? Why should they? Example: if $n=2$ then the number is $16\cdot 1\cdot 3^2$, which is not divisible by any higher power of $2$.




But are there any other factors that must divide it? One of $n$, $n-1$, $n+1$ is divisible by $3$, so $3\mid n^4(n-1)(n+1)^2$. But a single factor of $3$ is all that is guaranteed (if $n-1 = 3$, i.e. $n = 4$, then $9 \nmid n^4(n-1)(n+1)^2$).



So $3*2^4$ will always be a factor.



Are there any other prime factors that must occur? Well, if $p$ is a prime with $p > 3$ and we let $n = p+2$, then $n^4(n-1)(n+1)^2 = (p+2)^4(p+1)(p+3)^2$, and $p$ is not a factor of that.



So $48 = 3\cdot 2^4$ is the largest integer that must divide all $n^4(n-1)(n+1)^2$.
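The answer above can be checked empirically: the greatest common divisor of the first couple hundred values of $n^7+n^6-n^5-n^4$ is exactly $48$. A short Python sketch:

```python
from math import gcd

# Running gcd of n**7 + n**6 - n**5 - n**4 over many n.
# n = 1 gives 0, which gcd ignores, so start at 2.
g = 0
for n in range(2, 200):
    g = gcd(g, n**7 + n**6 - n**5 - n**4)
print(g)  # stabilizes at 48 = 3 * 2**4
```

The gcd can only decrease as more terms are included, and since the argument above shows $48$ divides every term, it stays at $48$.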


sequences and series - If $\lim_{n \to \infty} a_n=L$, then $\lim_{n \to \infty} a_n^{1/k}=L^{1/k}$. [Edited]




I need some help proving the following statement:



[EDIT]: forgot to mention that:



$a_n \ge 0$ , $L \ge 0$.



If the limit of a sequence $\lim_{ \ n \to \infty} a_n=L$,
then, for any $k \in \mathbb{N}$, the limit of the sequence $\lim_{n \to \infty} \sqrt[k]{a_n} = \sqrt[k]{L}$.



I already proved that the sequence $\{a_n^k\}$ converges to $L^k$ by induction on $k$,

but when I tried the same approach for the exponent $1/k$, it just doesn't work.


Answer



Let $a_{n} \geq 0$ for all $n \geq 1$, so that, if $a_{n} \to L$, then $L \geq 0$. This allows $a_{n}^{1/k}$ to be meaningful for all $n,k \geq 1$.



Let $k \geq 1$. If $L=0$, then the statement is obvious; for, we have $a_{n} < \varepsilon$ iff $a_{n}^{1/k} < \varepsilon^{1/k}$. Suppose $L > 0$. Then there is some $N_{1} \geq 1$ such that $a_{n} > 0$ for all $n \geq N_{1}$. If $n \geq N_{1}$, then
$$
|a_{n}^{1/k} - L^{1/k}| = \frac{|a_{n}-L|}{ \sum_{j=0}^{k-1}a_{n}^{\frac{j}{k}}L^{\frac{k-1-j}{k}}}.
$$
On the other hand, there is some $N_{2} \geq N_{1}$ such that $n \geq N_{2}$ implies
$|a_{n} - L| < L/2$, implying $L/2 < a_{n}$, implying that

$$
\sum_{j=0}^{k-1} a_{n}^{\frac{j}{k}} L^{\frac{k-1-j}{k}} > \sum_{j=0}^{k-1} (L/2)^{\frac{j}{k}} L^{\frac{k-1-j}{k}} = L^{\frac{k-1}{k}}\sum_{j=0}^{k-1}{2^{-\frac{j}{k}}} =: M.
$$
If $\varepsilon > 0$, then there is some $N_{3} \geq N_{2}$ such that $n \geq N_{3}$ implies $|a_{n}-L| < M\varepsilon$.
We have proved that, for every $\varepsilon > 0$, there is some $N \geq 1$ such that $n \geq N$ implies $|a_{n}^{1/k} - L^{1/k}| < \varepsilon$.
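A quick numeric illustration of the statement (not a proof): with the sample sequence $a_n = 2 + (-1)^n/n$, which is nonnegative and converges to $L = 2$, the cube roots converge to $2^{1/3}$ at the same rate. The sequence and $k$ here are arbitrary choices:

```python
# Illustration: a_n -> 2 forces a_n**(1/3) -> 2**(1/3).
L, k = 2.0, 3
a = lambda n: L + (-1)**n / n          # sample nonnegative sequence with limit L
errs = [abs(a(n)**(1 / k) - L**(1 / k)) for n in (10, 100, 1000, 10000)]
print(errs)  # shrinking toward 0
```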


Thursday, April 11, 2019

Proof by Induction $n^2+n$ is even



I'm not entirely sure if I'm going about proving $n^2+n$ is even for all the natural numbers correctly.




$P(n): = n^2+n$




$P(1) = 1^2+1 = 2 \equiv 0 \pmod 2$, so $P(1)$ holds.




Inductive step for $P(n+1)$:




$\begin{align}P(n+1) &= (n+1)^2+(n+1)\\
&= n^2+2n+1+n+1\\
&= n^2+n+2(n+1)\end{align}$




Does this prove $n^2+n$ is even as it's divisible by $2$? Thanks!


Answer



I see other answers provide different (possibly simpler) proofs. To finish off your proof:



by the induction hypothesis $n^2+n$ is even. Hence $n^2 + n = 2k$ for some integer $k.$ We have $$n^2+n+2(n+1) = 2k + 2(n+1) = 2(k+n+1) = 2\times\text{an integer} = \text{even}.$$





Does this prove $n^2+n$ is even as it's divisible by $2$?




The key here is to remember stating & using the induction hypothesis.
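The claim itself is easy to sanity-check by brute force; note also that $n^2+n = n(n+1)$ is a product of two consecutive integers, one of which is always even:

```python
# Direct check of the claim for a range of n.
assert all((n * n + n) % 2 == 0 for n in range(1, 10_000))
print("n^2 + n is even for n = 1..9999")
```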


number theory - What is the effective lower bound on gaps between zeta zeros?

In this question here:
Upper bound on differences of consecutive zeta zeros
by Charles it is said that: "There are many papers giving lower bounds to:



$$\limsup_n\ \delta_n\frac{\log\gamma_n}{2\pi}$$



unconditionally or on RH or GRH." RH stands of course for the Riemann hypothesis.



Therefore I am asking: What is the best unconditional effective lower bound for gaps $$\delta_n=|\gamma_{n+1}-\gamma_n|$$ between consecutive non-trivial Riemann zeta function zeros?

calculus - Prove: $\lim_{x\rightarrow \infty} \left (\frac{x+\log 9}{x-\log 9} \right )^{x}=81$




Over the real line, prove that



$\lim_{x\rightarrow \infty} \left (\frac{x+\log 9}{x-\log 9} \right )^{x}=81$



I've tried L'Hopital's rule, which gets too complicated, and I'm trying the squeeze theorem now.


Answer



Hint:
$$\frac{x+a}{x-a}=1+\frac{2a}{x-a}$$
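Carrying the hint through (a sketch, using the standard limit $(1+a/u)^u \to e^a$):

```latex
\left(\frac{x+\log 9}{x-\log 9}\right)^{x}
  = \left(1+\frac{2\log 9}{x-\log 9}\right)^{x}
  \;\xrightarrow[x\to\infty]{}\; e^{2\log 9} = 9^{2} = 81,
% via (1 + a/u)^{u} \to e^{a} with a = 2\log 9 and u = x - \log 9,
% together with x/u \to 1 to absorb the shift in the exponent.
```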


limits - Prove that $\lim\limits_{n \to \infty} \frac{x^n}{n!} = 0$, $x \in \Bbb R$.

Why is



$$\lim_{n \to \infty} \frac{2^n}{n!}=0\text{ ?}$$



Can we generalize it to any exponent $x \in \Bbb R$? This is to say, is




$$\lim_{n \to \infty} \frac{x^n}{n!}=0\text{ ?}$$




This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.


and here: List of abstract duplicates.
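For reference, one standard comparison argument (a sketch): past a fixed index the ratios of consecutive terms are at most $1/2$, so the terms decay geometrically.

```latex
% Fix x \in \mathbb{R} and pick an integer m > 2|x|.  For n > m,
\left|\frac{x^{n}}{n!}\right|
  = \frac{|x|^{m}}{m!}\,\prod_{j=m+1}^{n}\frac{|x|}{j}
  \;\le\; \frac{|x|^{m}}{m!}\left(\frac{1}{2}\right)^{n-m}
  \;\xrightarrow[n\to\infty]{}\; 0 .
```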

Wednesday, April 10, 2019

linear algebra - Coefficients of characteristic polynomial of a $3times 3$ matrix



Let $A$ be a $3\times 3$ matrix over reals. Then its characteristic polynomial $\det(xI-A)$ is of the form $x^3+a_2x^2+a_1x+a_0$. It is well known that
$$-a_2=\mbox{trace}(A) \mbox{ and } -a_0=\det(A).$$
Note that these constants are expressed as functions of $A$ without referring to eigenvalues of $A$.




Q. What is interpretation of $a_1$ in terms of $A$ without considering its eigenvalues?






This could be trivial, but I have never seen it.


Answer



If your matrix is:
$$A=\begin{bmatrix} A_{11} & A_{12} & A_{13}\\A_{21} & A_{22} & A_{23}\\A_{31} & A_{32} & A_{33}\end{bmatrix}$$
Then your $a_1$ is:

$$a_1=\det\left(\begin{bmatrix} A_{22} & A_{23}\\A_{32} & A_{33}\end{bmatrix}\right)+\det\left(\begin{bmatrix} A_{11} & A_{13}\\A_{31} & A_{33}\end{bmatrix}\right)+\det\left(\begin{bmatrix} A_{11} & A_{12}\\A_{21} & A_{22}\end{bmatrix}\right)$$
This quantity is the trace of the adjugate: $a_1=\operatorname{tr}(\operatorname{adj}(A))$.
Calculating it is similar to calculating the determinant: you compute the principal $2\times 2$ minors (the sub-determinants obtained by deleting row $i$ and column $i$) for $i = 1, 2, 3$, and sum them.
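The coefficient formula $\det(xI-A) = x^3 - \operatorname{tr}(A)\,x^2 + a_1 x - \det(A)$ can be spot-checked numerically; the matrix below is an arbitrary sample:

```python
# Compare det(xI - A) with the coefficient formula for a sample 3x3 matrix.
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 10.0]]

def det3(M):
    # Cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

tr = A[0][0] + A[1][1] + A[2][2]
# a1 = sum of the three principal 2x2 minors.
a1 = ((A[1][1] * A[2][2] - A[1][2] * A[2][1])
    + (A[0][0] * A[2][2] - A[0][2] * A[2][0])
    + (A[0][0] * A[1][1] - A[0][1] * A[1][0]))
detA = det3(A)

def charpoly_direct(x):
    M = [[(x if i == j else 0.0) - A[i][j] for j in range(3)] for i in range(3)]
    return det3(M)

for x in (-2.0, 0.0, 1.5, 3.0):
    assert abs(charpoly_direct(x) - (x**3 - tr * x**2 + a1 * x - detA)) < 1e-9
print("coefficients match")
```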


discrete mathematics - Prove $\frac{1\cdot 3\cdot 5\cdots (2n - 1)}{2^n(n + 1)!}\cdot 4^n= \frac{1}{n+1} \binom{2n}{n}$



Prove:
$$

\frac{1\times 3\times 5\times \cdots \times (2n - 1)}{2^n (n + 1)!} \times 4^n = \frac{1}{n+1} \binom{2n}{n}
$$





I'm having a bit of trouble proving this, I tried to prove this by induction and would get stuck. Any help is appreciated in advance.


Answer



By multiplying both numerator and denominator by $\;2\cdot 4\cdot 6 \cdot \ldots \cdot (2n)=2^nn!$ we get
\begin{align}
\frac{1\cdot 3\cdot 5\cdot \ldots \cdot (2n-1)}{2^n(n+1)!}&=\frac{1\cdot \color{red}{2}\cdot 3\cdot \color{red}{4}\cdot 5\cdot \color{red}{6}\cdot \ldots \cdot (2n-1)\cdot\color{red}{2n}}{2^n(n+1)!\cdot \color{red}{2^nn!}}\\

&=\frac{1}{4^n(n+1)}\cdot\frac{(2n)!}{n!\cdot n!}\\
&=\color{blue}{\frac{1}{4^n(n+1)}{2n \choose n}}
\end{align}



Then
$$\boxed{\color{blue}{4^n\left[\frac{1\cdot 3\cdot 5\cdot \ldots \cdot (2n-1)}{2^n(n+1)!}\right]=\frac{1}{n+1}{2n \choose n}}}$$
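The boxed identity can be verified exactly for small $n$ with rational arithmetic, avoiding floating point entirely:

```python
from fractions import Fraction
from math import comb, factorial

# Exact check of 4^n * (1*3*5*...*(2n-1)) / (2^n (n+1)!) = C(2n,n)/(n+1).
for n in range(1, 30):
    odds = 1
    for j in range(1, 2 * n, 2):   # 1 * 3 * 5 * ... * (2n-1)
        odds *= j
    lhs = Fraction(odds * 4**n, 2**n * factorial(n + 1))
    rhs = Fraction(comb(2 * n, n), n + 1)
    assert lhs == rhs
print("identity holds for n = 1..29")
```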


Tuesday, April 9, 2019

linear algebra - Help Determinant Binary Matrix


I was messing around with some matrices and found the following result.



Let $A_n$ be the $(2n) \times (2n)$ matrix consisting of elements $$a_{ij} = \begin{cases} 1 & \text{if } (i,j) \leq (n,n) \text{ and } i \neq j \\ 1 & \text{if } (i,j) > (n,n) \text{ and } i \neq j \\ 0 & \text{otherwise}. \end{cases} $$ Then, the determinant of $A_n$ is given by $$\text{det}(A_n) = (n-1)^2.$$



Example: $$A_2 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}, A_3 = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ \end{pmatrix},$$ with det$(A_2)$ and det$(A_3)$ being $1$ and $4$ respectively. I was wondering if anybody could prove this statement for me.


Answer



Your matrix $A_n$ has the block diagonal structure


$$ A_n = \begin{pmatrix} B_n & 0 \\ 0 & B_n \end{pmatrix} $$



where $B_n \in M_n(\mathbb{F})$ is the matrix which has diagonal entries zero and all other entries $1$. Hence, $\det(A_n) = \det(B_n)^2$ so it is enough to calculate $\det(B_n)$. To do that, let $C_n$ be the matrix in which all the entries are $1$ (so $B_n = C_n - I_n$).


The matrix $C_n$ is a rank-one matrix so we can find its eigenvalues easily. Let us assume for simplicity that $n \neq 0$ in $\mathbb{F}$. Then $C_n$ has an $n - 1$ dimensional kernel and $(1,\dots,1)^T$ is an eigenvector of $C_n$ associated to the eigenvalue $n$. From here we see that the characteristic polynomial of $C_n$ must be $\det(\lambda I - C_n) = \lambda^{n-1}(\lambda - n)$ and hence $$\det(B_n) = \det(C_n - I_n) = (-1)^n \det(I_n - C_n) = (-1)^{n} 1^{n-1}(1 - n) = (-1)^n(1 - n) = (-1)^{n-1}(n-1).$$


In fact this formula works even if $n = 0$ in $\mathbb{F}$, because in this case $C_n^2 = nC_n = 0$, so $C_n$ is nilpotent and $\det(\lambda I - C_n) = \lambda^n$.
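The claimed value $\det(A_n) = (n-1)^2$ is easy to brute-force for small $n$; a sketch using naive Laplace expansion (fine at these sizes):

```python
# Brute-force check of det(A_n) = (n-1)^2 for small n.
def det(M):
    # Laplace expansion along the first row (exponential, but fine for small M).
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def A(n):
    # 2n x 2n block-diagonal matrix: 1 off the diagonal within each n x n block.
    m = 2 * n
    return [[1 if (i != j) and ((i < n) == (j < n)) else 0
             for j in range(m)] for i in range(m)]

for n in (2, 3, 4):
    assert det(A(n)) == (n - 1)**2
print("det(A_n) = (n-1)^2 verified for n = 2, 3, 4")
```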


number theory - Closed form for $K(n)=[0;overline{1,2,3,...,n}]$


I just started playing around with fairly simple periodic continued fractions, and I have a question. The fractions can be represented "linearly": for $n\in\Bbb N$, $$K(n)=[0;\overline{1,2,3,...,n}]$$ I am seeking for a closed form for $K(n)$. I found the first few.


$n=1$: $$K(1)=\frac1{1+K(1)}$$ $$\Rightarrow K(1)=\frac{-1\pm\sqrt{5}}2$$ $n=2$: $$K(2)=\frac1{1+\frac1{2+K(2)}}$$ $$\Rightarrow K(2)=-1\pm\sqrt{3}$$ $n=3$: $$K(3)=\frac1{1+\frac1{2+\frac1{3+K(3)}}}$$ $$\Rightarrow K(3)=\frac{-4\pm\sqrt{37}}3$$ $n=4$: $$K(4)=\frac1{1+\frac1{2+\frac1{3+\frac1{4+K(4)}}}}$$ $$\Rightarrow K(4)=\frac{-9\pm2\sqrt{39}}5$$ As you may be able to tell, these results are all found by simplifying the fraction until one has a quadratic in $K(n)$, at which point the quadratic formula may be applied.



I would be surprised if there wasn't a closed form expression for $K(n)$, as they can all be found the same way. I've failed to recognize any numerical patterns in the results, however.


So, I have two questions:


$1)$: How does one express $K(n)$ in the $\operatorname{K}_{i=i_1}^\infty \frac{a_i}{b_i}$ notation? I was thinking something like $$K(n)=\operatorname{K}_{i\geq0}\frac1{1+\operatorname{mod}(i,n)}$$ $2)$: What is a closed form for $K(n)$?


Thanks.


Update:


I'm pretty sure that all the $\pm$ signs in the beginning of the question should be changed to a $+$ sign.


Answer



Following up on Daniel Schepler's comment. Let $$P_n(x) = \frac{1}{1 + \frac{1}{2 + \ddots \frac{1}{n+x}}}.$$ This is basically the RHS of the recurrence equation for $K(n)$. Then: \begin{align*} P_1(x) &= \frac{1}{x+1} \\ P_2(x) &= \frac{x+2}{x+3} \\ P_3(x) &= \frac{2x+7}{3x+10} \\ P_4(x) &= \frac{7x+30}{10x+43} \\ P_5(x) &= \frac{30x+157}{43x+225} \\ P_6(x) &= \frac{157x+972}{225x+1393}. \end{align*} Note that $P_n(x) = P_{n-1}\left( \frac{1}{x+n}\right)$. Therefore, if $P_{n-1}(x) = \frac{ax+b}{cx+d}$, then \begin{align*}P_n(x) &= \frac{\frac{a}{x+n} + b}{\frac{c}{x+n} + d} \\ &= \frac{bx + (a+bn)}{dx + (c+dn)} \end{align*} Thus in general, we may write $$P_n(x) = \frac{a_n x + a_{n+1}}{b_n x + b_{n+1}}$$ where $a$ and $b$ satisfy the recurrence $a_{n+1} = a_{n-1} + n a_n$ and likewise for $b$, with the initial conditions $a_1 = 0, a_2 = b_1 = b_2 = 1$. This recurrence gives the OEIS sequences linked by Jean-Claude Arbaut. $K(n)$ is a solution to $x - P_n(x) = 0$, or a root of the quadratic $$b_n x^2 + (b_{n+1} - a_n) x - a_{n+1} = 0.$$
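The recurrence and the closing quadratic can be checked numerically: solve $b_n x^2 + (b_{n+1}-a_n)x - a_{n+1} = 0$ for its positive root, and compare with the continued fraction $[0;\overline{1,2,\dots,n}]$ evaluated by deep truncation. A sketch (the truncation depth is an arbitrary choice):

```python
from math import sqrt

def coeffs(n):
    # a[1] = 0, a[2] = 1, b[1] = b[2] = 1; index 0 unused.
    a = [0, 0, 1]
    b = [0, 1, 1]
    for m in range(3, n + 2):
        a.append(a[m - 2] + (m - 1) * a[m - 1])   # a_{m} = a_{m-2} + (m-1) a_{m-1}
        b.append(b[m - 2] + (m - 1) * b[m - 1])
    return a, b

def K_root(n):
    # Positive root of b_n x^2 + (b_{n+1} - a_n) x - a_{n+1} = 0.
    a, b = coeffs(n)
    A_, B_, C_ = b[n], b[n + 1] - a[n], -a[n + 1]
    return (-B_ + sqrt(B_ * B_ - 4 * A_ * C_)) / (2 * A_)

def K_iter(n, depth=3000):
    # Evaluate [0; 1,2,...,n, 1,2,...,n, ...] from the bottom up.
    x = 0.0
    for i in range(depth, 0, -1):
        x = 1.0 / (((i - 1) % n + 1) + x)
    return x

for n in (1, 2, 3, 4):
    assert abs(K_root(n) - K_iter(n)) < 1e-9
print(K_root(3), (-4 + sqrt(37)) / 3)   # should agree
```

This also confirms the values computed by hand in the question, e.g. $K(2)=\sqrt{3}-1$ and $K(3)=\frac{-4+\sqrt{37}}{3}$, with the $+$ sign as noted in the update.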


complex analysis - Why is $\sin x$ the imaginary part of $e^{ix}$?

Most of us who are studying mathematics are familiar with the famous $e^{ix}=\cos(x)+i\sin(x)$. Why is it that we have $e^{ix}=\cos(x)+i\sin(x)$ and not $e^{ix}=\sin(x)+i\cos(x)$? I haven't studied Complex Analysis, so I don't know the answer to this question. It pops up in Linear Algebra, Differential Equations, Multivariable Calculus and many other fields. But I feel like textbooks and teachers just expect us students to take it as given without explaining it to any extent. I also couldn't find any good article that explains this.

real analysis - Finding a bijective Map


I need to find a bijective map from $A=[0,1)$ to $B=(0,1).$ Is there a standard method for coming up with such a function, or does one just try different functions until one fits the requirements?


I've considered some "variation" of the floor function, but not sure if $\left \lfloor{x}\right \rfloor$ is bijective. Now i'm thinking of maybe some trig function that takes elements of $A$ and maps them to $B$.


I know how to show that a function is injective and surjective, but how do I find such a function? Am I overthinking this or am I under thinking this?


Any help would be greatly appreciated!!!



Lastly, since we are to find a bijective map from $A$ to $B$, assuming one exists, is this equivalent to saying that these two sets have the same cardinality?


Thank you


Answer



Consider the function $\psi: A \rightarrow B$ given by $$ \psi(x) = \begin{cases} x & \text{if } x \neq 0 \text{ and } x \neq \frac{1}{n} \text{ for every } n \in \mathbb{N} \\ \frac{1}{2} & \text{if } x = 0 \\ \frac{1}{n+1} & \text{if } x = \frac{1}{n} \text{ for some } n \in \mathbb{N} \end{cases} $$ It is not difficult to verify that $\psi$ is bijective. The problem with constructing a bijection using our favorite elementary functions is that no continuous bijection between the sets exists. This is due to the differences in character (or topology) of the two sets. So unfortunately one must turn their attention to different functions.


The function above relies on the fact that we can get this bijection by taking a sequence in $(0,1)$, say $\{x_n\}$ and mapping $x_1$ to the extra point $0$, and then mapping $x_n$ to $x_{n-1}$ for all other $n$. Hopefully this trick helps with your future work.
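The map $\psi$ can be written out in exact arithmetic to see the trick in action; the sample points below are arbitrary:

```python
from fractions import Fraction

# The piecewise map psi : [0,1) -> (0,1) from the answer.
def psi(x):
    x = Fraction(x)
    if x == 0:
        return Fraction(1, 2)
    if x.numerator == 1:            # x = 1/n for some integer n >= 2
        return Fraction(1, x.denominator + 1)
    return x

sample = [Fraction(0), Fraction(1, 2), Fraction(1, 3), Fraction(2, 5), Fraction(7, 9)]
images = [psi(x) for x in sample]
print(images)
assert len(set(images)) == len(images)    # injective on the sample
assert all(0 < y < 1 for y in images)     # lands in (0,1)
```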


real analysis - How do I compute $lim_{x to 0}{(sin(x) + 2^x)^frac{cos x}{sin x}}$ without L'Hopital's rule?


What I've tried so far is to use the exponent and log functions: $$\lim_{x \to 0}{(\sin(x) + 2^x)^\frac{\cos x}{\sin x}}= \lim_{x \to 0}e^ {\ln {{(\sin(x) + 2^x)^\frac{\cos x}{\sin x}}}}=\lim_{x \to 0}e^ {\frac{1}{\tan x}{\ln {{(\sin(x) + 2^x)}}}}$$.


From here I used the expansion for $\tan x$ but the denominator turned out to be zero. I also tried expanding $\sin x$ and $\cos x$ with the hope of simplifying $\frac{\cos x}{\sin x}$ to a constant term and a denominator without $x$ but I still have denominators with $x$.


Any hint on how to proceed is appreciated.


Answer



Take the logarithm and use standard first order Taylor expansions: $$ \lim_{x\to0} \frac{\log\bigl(\sin(x)+2^x\bigr)}{\tan(x)} =\lim_{x\to0} \frac{\log\bigl(\sin(x)+2^x\bigr)}{x+o(x)} =\lim_{x\to0} \frac{x+\log(2)x+o(x)}{x+o(x)} = 1+\log(2). $$ Then $$ \lim_{x\to0} \bigl(\sin(x)+2^x\bigr)^{\cot(x)} = e^{1+\log(2)} = 2e. $$




EDIT


Maybe it's important to clarify why $\log\bigl(\sin(x)+2^x\bigr)=x+\log(2)x+o(x)$. I'm using the following facts:


  • $\log(1+t) = t+o(t)$ as $t\to0$,

  • $\sin(x)+2^x = 1+x+\log(2)x+o(x)$ as $x\to0$.
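A numeric confirmation of the value $2e$ (an illustration, not part of the Taylor argument):

```python
import math

# f(x) = (sin x + 2^x)^(cos x / sin x), expected to approach 2e as x -> 0.
def f(x):
    return (math.sin(x) + 2**x) ** (math.cos(x) / math.sin(x))

target = 2 * math.e
for x in (1e-3, 1e-5, 1e-7):
    print(x, f(x))
```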

analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...