Monday, April 30, 2018

linear algebra - How to find characteristic polynomial of this matrix?



Let $A=\begin{bmatrix} 4&0&1&0\\1&1&1&0\\0&1&1&0 \\0&0&0&4 \end{bmatrix}$. Knowing that $4$ is one of its eigenvalues, find the characteristic polynomial of $A$.



Well, if $4$ is an eigenvalue of $A$, one should have $|A-4I_{4}|=0$. And so,


$\begin{vmatrix} 0&0&1&0\\1&-3&1&0\\0&1&-3&0 \\0&0&0&0 \end{vmatrix}=0$


It's clear that the previous equation is true (the determinant of $A-4I_{4}$ is $0$). Now that the factor $(\lambda-4)$ has been pulled out, one gets a new matrix by removing the null row and null column.



$A'=\begin{bmatrix} 0&0&1\\1&-3&1\\0&1&-3\end{bmatrix}$


The characteristic polynomial of $A'$ will be a $3^{rd}$-degree polynomial, whose product with $(\lambda-4)$ gives the desired $4^{th}$-degree polynomial.


Now, in order to find the characteristic polynomial of $A'$, one must solve the characteristic equation:


$\begin{vmatrix} -\lambda&0&1\\1&-3-\lambda&1\\0&1&-3-\lambda\end{vmatrix}=0$


My doubt is in finding this determinant. I already tried Laplace expansion and elementary row/column operations in order to produce a null row or column, but I couldn't do it.


Can you give me a clue? Thanks.


Answer



...and how to find the characteristic polynomial of the original matrix, $A$: $$\begin{align}\mathrm{det}(A - \lambda \mathrm{I}) &= 0 \tag{1}\\ \begin{vmatrix}(4-\lambda)&0&1&0 \\ 1&(1-\lambda)&1&0\\ 0&1&(1-\lambda)&0\\ 0&0&0&(4-\lambda) \end{vmatrix} &= 0 \tag{2}\\ (4-\lambda)\begin{vmatrix}(4-\lambda)&0&1\\ 1&(1-\lambda)&1\\ 0&1&(1-\lambda) \end{vmatrix} &= 0 \tag{3}\\ (4-\lambda) \left[(4-\lambda) \left[(1-\lambda)^2 -1 \right] + 1\right] &= 0\tag{4} \\ (4-\lambda) \left[(4-\lambda)(\lambda^2 -2\lambda) + 1\right] &= 0\tag{5} \\ (4-\lambda) \left[(4\lambda^2 -8\lambda -\lambda^3+2\lambda^2) + 1\right] &= 0 \tag{6}\\ (4-\lambda)(-\lambda^3 + 6\lambda^2 - 8\lambda +1 ) &= 0 \tag{7}\\ (\lambda -4)(\lambda^3 - 6\lambda^2 + 8\lambda - 1)&= 0\tag{8}\\ \end{align}$$
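
For anyone who wants to double-check $(8)$, a one-liner in a CAS does it; a minimal Mathematica sketch (the symbol $t$ is mine):

A = {{4, 0, 1, 0}, {1, 1, 1, 0}, {0, 1, 1, 0}, {0, 0, 0, 4}};
Factor[CharacteristicPolynomial[A, t]]   (* (t - 4) (t^3 - 6 t^2 + 8 t - 1), up to ordering *)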


Set Theory: Cardinality of functions on a set have higher cardinality than the set



I'm independently working my way through Elements of the theory of functions and functional analysis by Kolmogorov and Fomin. At the moment, I'm stuck on the following exercise (on page 11), the question states:



Prove that the set of all numerical functions defined on a set M has a greater cardinal number than the cardinal number of the set M. Hint: Use the fact that the set of all characteristic functions (i.e. functions assuming only the values of 0 and 1) defined on M is equivalent to the set of all subsets of M.



Defining $f: M \rightarrow \{0,1\}$, I've figured out that $\prod_{k} (1-f_{\cap_{k} M_k}) = 1-f_{\cup_{k} M_k}=f_{M-\cup_{k}M_k}$ where $M_k$ are subsets of M.




At this point, I'm not sure what I've proved is helpful at all and if it is, how I should connect the dots between what I've proved and what needs to be proved. My idea is that I need to first prove that the set of all characteristic functions has the same cardinality as the power set of $M$ then realise the connection between numerical functions and the characteristic functions. Both of these would then imply the result. Any ideas on how to go about this? Note that I'm quite new to set theory and am aiming to learn some real analysis for an undergraduate class.


Answer



The goal is to find a bijection between characteristic functions and subsets of $M$.



Try to map each function to a subset and each subset to a function in a natural and bijective way. In other words, how can you see a characteristic function as a description of a subset of $M$? (you probably saw it already in different contexts)
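
If it helps to experiment, here is a minimal sketch of that correspondence over a small finite set (Mathematica; the helper names are made up for illustration):

M = Range[0, 4];
toIndicator[A_List] := Function[x, Boole[MemberQ[A, x]]];
toSubset[f_] := Select[M, f[#] == 1 &];
toSubset[toIndicator[{0, 2}]]   (* {0, 2} *)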


analysis - Is there a continuous function from $[0, 1]$ onto $(0, 1)$?


If there is none, why?


And for the other side, what about open set $(0, 1)$ to closed set $[0, 1]$ with a continuous function?



Thanks


Answer



HINT: For the first one, use the fact that the continuous image of a compact set is compact.


Sunday, April 29, 2018

modular arithmetic - Show that if $n > 0$ is an integer, then $n^2$ is congruent to only $0,1,2$ or $4$ modulo $7$



No solution please, I just need some guidance. I've tried various approaches so far, to no avail.




  • I've looked at small number cases and tried to identify something interesting. Couldn't find much though.

  • I've thought about how $n$ can only be odd or even, hence I took the forms $n = 2k$ and $n = 2k+1$ and squared each respectively. This was interesting because the odd case took the form $4k^2 + 4k + 1$ after being squared, which seems relevant because $n^2$ can then be congruent to $2$, $4$ and $1$ mod $7$. But I can't really advance from there, or tell if this is even relevant.




I'm hoping to get a sense of direction from you. Thanks.


Answer



You only have to consider $7$ numbers:



$$0,1,2,3,4=7-3, 5=7-2, 6=7-1$$



Square each of them and you are done.




Another way to phrase it is that each number can be written as $7k+r$ where $r \in \{0,1,2,3,4,5,6\}$; just expand $(7k+r)^2$ to convince yourself that it suffices to study these $7$ numbers.
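
For reference, once you are ready to check your work, the whole computation fits in one line (Mathematica):

Union[Mod[Range[0, 6]^2, 7]]   (* {0, 1, 2, 4} *)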


Saturday, April 28, 2018

divergent series - When does (Riemann) regularization work?



I've seen the Wikipedia articles on how to sum $1+1+1+1+\cdots=-1/2$ or $1+2+3+4+\cdots=-1/12$.



Is there a theory behind it, or is it a random trick? Does it basically use analytic continuation of the Riemann zeta?



The article says it may be used in physical applications. So I wonder, what are the conditions that this type of regularization will give me a useful and consistent result?




There are other series which can be summed differently: $1+2+4+8+\dots=-1$. Is it possible to obtain the same result with Riemann regularization? Do different types of regularization give the same results (when they both exist)? If not, how can I know which one will work for my (physical?) calculation?


Answer



One of the keys is surely that any method of summation must be consistent with the results of a conventional sum if the summation method is applied to convergent sums/series. I find it fairly obvious (but in case a source in the literature is sought: Konrad Knopp explained it in his monograph on infinite series) that "it must be made clear what the symbol 1+2+3+4+... really means"; the first thing we must do is to find/define an expression for each term in the sum relative to its index. Otherwise 1+1+1+1+... and 1+0+1+0+1+0+... cannot be distinguished from each other. In the geometric series with base q we have the general term $q^k$ depending on its index k; in the zeta-series we have the general term $k^{-s}$ with the fixed exponent s, and so on.



So we should first write $s(p)=\sum_{k=1}^\infty a(k,p)$ and define $a$ as depending on $k$ and possibly on another external parameter $p$, to have a description of the general term.



After that we might either find





  • telescoping effects: subsequent terms cancel (possibly only partially) and the whole expression then reduces to a finite sum

  • continuous intervals for some parameter p, for which $s(p)$ is a convergent series with a closed-form expression $g(p)$ dependent on that parameter, where it might be consistent to extend $g(p)$ also to the cases where the series $s(p)$ would be divergent

  • $\cdots$



In the latter case it might happen that we find such an extension and it seems to be a sensible re-expression, but it is found later that there is also another reformulation, for instance as a sum with a telescoping effect, which then reduces to another value of the (originally divergent) sum. Then that summation/regularization must be revised - and in general this is still a field of open research. There are accepted summation procedures even for complete classes of infinite series - accessible for instance by Abel-, Cesàro-, Euler-, Borel-, Ramanujan-summation, to name only the classical ones; but there are arbitrarily many series for which we do not have an accepted summation procedure.
L. Euler's zeta-"regularization", for instance, used that
$$s(p) = \sum_{k=1}^\infty k^p $$
can seemingly be written as
$$\sum_{k=1}^\infty (-1)^{k-1} k^p +2\cdot \sum_{k=1}^\infty (2k)^p$$
(splitting off the even-indexed terms; note the alternating series must start with $+1$, since $\sum_{odd}k^p-\sum_{even}k^p+2\sum_{even}k^p=s(p)$) and then
$$\sum_{k=1}^\infty (-1)^{k-1} k^p +2\cdot 2^p \cdot \sum_{k=1}^\infty k^p $$
and then, with $t(p)=\sum_{k=1}^\infty (-1)^{k-1} k^p$, $$ s(p) = t(p) + 2 \cdot 2^p \cdot s(p)$$
$$ s(p)(1-2 \cdot 2^p) = t(p)$$
$$ s(p) = {t(p) \over (1-2 \cdot 2^p)} $$
where $t(p)$ can be approximated for a wider range of the parameter p. But that this works in general depends on a) there being a continuous range for the parameter p where this is convergent (and allows the same simplification/reformulation), and b) this range being continuously extendable while preserving the meaningfulness of finite values for $s(p)$.
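
As an illustration (using Abel summation to assign a value to the alternating series, a step assumed here): for $p=1$ one has $t(1)=1-2+3-4+\cdots=\frac14$, since $\sum_{k\ge1}(-1)^{k-1}k\,x^{k-1}=\frac1{(1+x)^2}\to\frac14$ as $x\to1^-$, and therefore
$$ s(1) = \frac{t(1)}{1-2\cdot 2^1} = \frac{1/4}{-3} = -\frac1{12}, $$
matching the value $1+2+3+4+\cdots=-\frac1{12}$ from the question.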


sequences and series - Is there a formula for calculating the sum of all products of $p$ different integers $\le n$?



For example:



$n=3, p=2$ then the sum I'm looking for is: $1\cdot2+1\cdot3+2\cdot3$.



Of course this can be easily calculated on a computer. But having a formula would allow me to use it in the derivation of other formulas.
So far I've found nothing on the internet.




I have found this nice formula for the sum of all products of arbitrary many distinct integers :



$1+2+3+1\cdot2+1\cdot3+2\cdot3+1\cdot2\cdot3=4!-1=(n+1)!-1$ .



But I would like to be able to separate this into the parts mentioned above. I hope someone can point me in the right direction. Thanks in advance!


Answer



A recursive formula is fairly easy, as you have



$$f(n,1) = \frac{n(n+1)}{2}$$
$$f(n,n) = n!$$

$$f(n,p) = nf(n-1,p-1)+f(n-1,p)$$



If you start with $f(0,0)=1$ and $f(0,p)=0$ for $p \not = 0$ then you can reduce this to just $f(n,p) = nf(n-1,p-1)+f(n-1,p)$; note that you then get $f(n,0)=1$



These are essentially unsigned Stirling numbers of the first kind, and you have $f(n,p)=\left[{n+1 \atop n+1-p}\right]$ (for example, $f(3,2)=1\cdot2+1\cdot3+2\cdot3=11=\left[{4 \atop 2}\right]$)
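
A minimal sketch of the recursion in Mathematica (memoized, with the base cases above), together with the Stirling-number check for $n=3$, $p=2$:

f[0, 0] = 1; f[0, p_] := 0;
f[n_, p_] := f[n, p] = n f[n - 1, p - 1] + f[n - 1, p]
{f[3, 2], Abs[StirlingS1[4, 2]]}   (* {11, 11} *)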


Friday, April 27, 2018

real analysis - Bijection from $\mathcal{P}(\mathbb{N})$ to $(0,1)$



How can we construct a bijection from $\mathcal{P}(\mathbb{N})$ to $(0,1)$?


Here is what I know:



  • $\mathcal{P}(\mathbb{N}) = \{A | A \text{ is a subset of } \mathbb{N}\}$




  • Both $\mathcal{P}(\mathbb{N})$ and $(0,1)$ are uncountable, so such bijection exists <--- incorrect




  • There are uncountably many functions from $(0,1)$ to $\mathbb{N}$




So I seek a bijection that takes a set $A \in \mathcal{P}(\mathbb{N})$ to an $x \in (0,1)$ and back... I don't see how this can be done.


Is anyone familiar with this result?


Answer



The result is well known. There's a straightforward program for establishing it (although the last step is irritating):


  • Subsets correspond to indicator functions

  • Functions on $\mathbb{N}$ correspond to sequences

  • Relate Boolean sequences to binary numerals

  • Patch up the argument to deal with the countably many real numbers that have two different binary representations

e.g. the set of even natural numbers would correspond to the binary numeral



$$ 0.101010101010\ldots$$


(which is $2/3$). The set of primes would correspond to


$$ 0.001101010001\ldots$$


Without the patch, both the set $\{ 0 \}$ and the set of all positive natural numbers would correspond to $1/2$: i.e. to the numerals


$$ 0.100000\ldots $$ $$ 0.011111\ldots $$
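
Both example expansions are easy to confirm (Mathematica):

Sum[2^-(2 k + 1), {k, 0, Infinity}]     (* even naturals: bits at positions 0, 2, 4, ...; gives 2/3 *)
{2^-1, Sum[2^-k, {k, 2, Infinity}]}     (* the two representations colliding at 1/2: {1/2, 1/2} *)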


sequences and series - Convergence of $\sum_{n=1}^\infty \frac{\sin^2(n)}{n}$



Does the series $$ \sum_{n=1}^\infty \frac{\sin^2(n)}{n} $$ converge?



I've tried to apply some tests, and I don't know how to bound the general term, so I must have missed something. Thanks in advance.


Answer



Hint:


Each interval of the form $\bigl[k\pi+{\pi\over6}, (k+1)\pi-{\pi\over6}\bigr)$ contains an integer $n_k$. We then have, for each $k$, that ${\sin^2(n_k)\over n_k}\ge {(1/2)^2\over (k+1)\pi}$. Now use a comparison test to show your series diverges.



Real Matrices with Real Eigenvalue pre- and Post multiplied by a Diagonal Matrix



Suppose all the eigenvalues of $A\in \mathbb{R}^{n\times n}$ (not necessarily symmetric) are real. Let $D\in \mathbb{R}^{n\times n}$ be a diagonal matrix with positive diagonal entries. Prove/disprove that $A+D$ and $DAD$ have only real eigenvalues.


Answer



I played around with Maple and came up with a counterexample. I'm not going to prove it's a counterexample as the mathematics is tedious.



Take

$$A = \begin{pmatrix} 1 & 3 & 2 \\ -1 & 1 & 4 \\ 1 & 2 & 7 \end{pmatrix}^{-1}\begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 3 \end{pmatrix}\begin{pmatrix} 1 & 3 & 2 \\ -1 & 1 & 4 \\ 1 & 2 & 7 \end{pmatrix},$$
and
$$D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1\end{pmatrix}.$$
Then $DAD$ and $A + D$ have non-real eigenvalues.
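
The counterexample is quick to check numerically; a Mathematica sketch (the symbol $D$ is renamed to D0, since D is reserved):

P = {{1, 3, 2}, {-1, 1, 4}, {1, 2, 7}};
A = Inverse[P].DiagonalMatrix[{1, -1, 3}].P;
D0 = DiagonalMatrix[{1, 3, 1}];
{Eigenvalues[N[D0.A.D0]], Eigenvalues[N[A + D0]]}   (* per the claim above, both lists contain non-real pairs *)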


Thursday, April 26, 2018

analysis - Can we describe all group isomorphisms from $(\mathbb R ,+)$ to $(\mathbb R^+ , \cdot)$ ?

Can we describe all group isomorphisms from $(\mathbb R ,+)$ to $(\mathbb R^+ , \cdot)$? I have tried the following: if $f$ is such an isomorphism, then $f(x)>0$, and $f(r)=(f(1))^r , \forall r \in \mathbb Q$, but nothing else. Please help.

real analysis - Prove linear functions that are not multiplications by a constant are unbounded on every interval



An exercise in a book on logic and set theory is as follows:





Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be such that $\forall x, y : f(x + y) = f(x) + f(y)$, but $f$ is not a multiplication by a constant. Prove that it is unbounded on every interval.


While looking for hints I've come across Herrlich's "Axiom of Choice", where (Theorem 5.5) he proves a related result that non-continuous solutions to this functional equation are non-measurable. This, coupled with continuous solutions being necessarily a multiplication by a constant and some real analysis (which is rusty in my case), should yield the required result if I understand correctly.



But considering whether the functions are measurable requires adding a lot more "structure" to $\mathbb{R}$ that the book I'm using typically considers, so perhaps there is a more concise and straightforward solution. So, my question is: is there one?



The exercise is in a section on Hamel basis, so I guess it's fine to prove this just for Hamel-related functions, like $f(x)$ being a projection of $x$ on a basis vector in a Hamel basis of $\mathbb{R}$ considered a vector space over $\mathbb{Q}$.


Answer




First of all we can observe that an $f$ of this kind is not continuous. In fact, we can put $f(1) = a$, and from the additivity property it follows that $f(q) = aq$ for every $q \in \mathbb{Q}$. If $f$ were continuous, then, since $\mathbb{Q}$ is dense in $\mathbb{R}$, we could prove that $f(x) = ax$ for every $x \in \mathbb{R}$, but that contradicts our hypothesis.



From that we can prove that $f$ is discontinuous at $0$. If $f$ were continuous at $0$, then for every $x \in \mathbb{R}$ we would have:
$$ \lim_{h \to 0} [f(x +h) -f(x)] = \lim_{h \to 0} f(h) = 0$$
And $f$ would be continuous at every point, which is absurd.



Now we can prove our thesis. Let $[a, b]$ be an interval of $\mathbb{R}$. Since $f$ is discontinuous at $0$, there exists an $\epsilon > 0$ such that for every $\delta > 0$ there exists an $x_{\delta}$ with $0 < x_{\delta} < \delta$ such that $f(x_{\delta}) > \epsilon$. (It could also be $f(x_{\delta}) < -\epsilon$, but the proof would be analogous, and to avoid confusion let us restrict to the first case.)



Now, given any $n \in \mathbb{N}$, let $c$ be a positive integer such that $c \epsilon > n - f(a)$, and put $\delta < \frac{b - a}{c}$. So we have an $x_0$ with $0 < x_0 < \delta$ such that $f(x_0) > \epsilon$. From that it follows that $f(cx_0) > c \epsilon > n - f(a)$ and $cx_0 < b - a$.




Now we can see that $a + cx_0 \in [a,b]$ and $$f(a + cx_0) = f(a) + f(cx_0) > f(a) + n - f(a) = n$$
And we proved our proposition since for every $n \in \mathbb{N}$ we have found an $x \in [a, b]$ such that $f(x) > n$.



I hope this helped you; I know my English isn't perfect, so excuse me if I made some mistakes.


integration - A closed-form expression for $\int_0^\infty \frac{\ln (1+x^\alpha) \ln (1+x^{-\beta})}{x} \, \mathrm{d} x$



I have been trying to evaluate the following family of integrals:





$$ f:(0,\infty)^2 \rightarrow \mathbb{R} \, , \, f(\alpha,\beta) = \int \limits_0^\infty \frac{\ln (1+x^\alpha) \ln (1+x^{-\beta})}{x} \, \mathrm{d} x \, . $$




The changes of variables $\frac{1}{x} \rightarrow x$, $x^\alpha \rightarrow x$ and $x^\beta \rightarrow x$ yield the symmetry properties
$$ \tag{1}
f(\alpha,\beta) = f(\beta,\alpha) = \frac{1}{\alpha} f\left(1,\frac{\beta}{\alpha}\right) = \frac{1}{\alpha} f\left(\frac{\beta}{\alpha},1\right) = \frac{1}{\beta} f\left(\frac{\alpha}{\beta},1\right) = \frac{1}{\beta} f\left(1,\frac{\alpha}{\beta}\right) $$
for $\alpha,\beta > 0$ .



Using this result one readily computes $f(1,1) = 2 \zeta (3)$ . Then $(1)$ implies that

$$ f(\alpha,\alpha) = \frac{2}{\alpha} \zeta (3) $$
holds for $\alpha > 0$ . Every other case can be reduced to finding $f(1,\gamma)$ for $\gamma > 1$ using $(1)$.



An approach based on xpaul's answer to this question employs Tonelli's theorem to write
$$ \tag{2}
f(1, \gamma) = \int \limits_0^\infty \int \limits_0^1 \int \limits_0^1 \frac{\mathrm{d}u \, \mathrm{d}v \, \mathrm{d}x}{(1+ux)(v+x^\gamma)} = \int \limits_0^1 \int \limits_0^1 \int \limits_0^\infty \frac{\mathrm{d}x \, \mathrm{d}u \, \mathrm{d}v}{(1+ux)(v+x^\gamma)} \, .$$
The special case $f(1,2) = \pi \mathrm{C} - \frac{3}{8} \zeta (3)$ is then derived via partial fraction decomposition ($\mathrm{C}$ is Catalan's constant). This technique should work at least for $\gamma \in \mathbb{N}$ (it also provides an alternative way to find $f(1,1)$), but I would imagine that the calculations become increasingly complicated for larger $\gamma$ .



Mathematica manages to evaluate $f(1,\gamma)$ in terms of $\mathrm{C}$, $\zeta(3)$ and an acceptably nice finite sum of values of the trigamma function $\psi_1$ for some small, rational values of $\gamma > 1$ (before resorting to expressions involving the Meijer G-function for larger arguments). This gives me some hope for a general formula, though I have not yet been able to recognise a pattern.




Therefore my question is:




How can we compute $f(1,\gamma)$ for general (or at least integer/rational) values of $\gamma > 1$ ?




Update 1:



Symbolic and numerical evaluations with Mathematica strongly suggest that
$$ f(1, n) = \frac{1}{n (2 \pi)^{n-1}} \mathrm{G}_{n+3, n+3}^{n+3,n+1} \left(\begin{matrix} 0, 0, \frac{1}{n}, \dots, \frac{n-1}{n}, 1 , 1 \\ 0,0,0,0,\frac{1}{n}, \dots, \frac{n-1}{n} \end{matrix} \middle| \, 1 \right) $$

holds for $n \in \mathbb{N}$ . These values of the Meijer G-function admit an evaluation in terms of $\zeta(3)$ and $\psi_1 \left(\frac{1}{n}\right), \dots, \psi_1 \left(\frac{n-1}{n}\right) $ at least for small (but likely all) $n \in \mathbb{N}$ .



Interesting side note: The limit
$$ \lim_{\gamma \rightarrow \infty} f(1,\gamma+1) - f(1,\gamma) = \frac{3}{4} \zeta(3) $$
follows from the definition.



Update 2:



Assume that $m, n \in \mathbb{N} $ are relatively prime (i.e. $\gcd(m,n) = 1$). Then the expression for $f(m,n)$ given in Sangchul Lee's answer can be reduced to
\begin{align}
f(m,n) &= \frac{2}{m^2 n^2} \operatorname{Li}_3 ((-1)^{m+n}) \\
&\phantom{=} - \frac{\pi}{4 m^2 n} \sum \limits_{j=1}^{m-1} (-1)^j \csc\left(j \frac{n}{m} \pi \right) \left[\psi_1 \left(\frac{j}{2m}\right) + (-1)^{m+n} \psi_1 \left(\frac{m + j}{2m}\right) \right] \\
&\phantom{=} - \frac{\pi}{4 n^2 m} \sum \limits_{k=1}^{n-1} (-1)^k \csc\left(k \frac{m}{n} \pi \right) \left[\psi_1 \left(\frac{k}{2n}\right) + (-1)^{n+m} \psi_1 \left(\frac{n + k}{2n}\right) \right] \\
&\equiv F(m,n) \, .
\end{align}
Further simplifications depend on the parity of $m$ and $n$.



This result can be used to obtain a solution for arbitrary rational arguments: For $\frac{n_1}{d_1} , \frac{n_2}{d_2} \in \mathbb{Q}^+$ equation $(1)$ yields
\begin{align}
f\left(\frac{n_1}{d_1},\frac{n_2}{d_2}\right) &= \frac{d_1}{n_1} f \left(1,\frac{n_2 d_1}{n_1 d_2}\right) = \frac{d_1}{n_1} f \left(1,\frac{n_2 d_1 / \gcd(n_1 d_2,n_2 d_1)}{n_1 d_2 / \gcd(n_1 d_2,n_2 d_1)}\right) \\
&= \frac{d_1 d_2}{\gcd(n_1 d_2,n_2 d_1)} f\left(\frac{n_1 d_2}{\gcd(n_1 d_2,n_2 d_1)},\frac{n_2 d_1}{\gcd(n_1 d_2,n_2 d_1)}\right) \\
&= \frac{d_1 d_2}{\gcd(n_1 d_2,n_2 d_1)} F\left(\frac{n_1 d_2}{\gcd(n_1 d_2,n_2 d_1)},\frac{n_2 d_1}{\gcd(n_1 d_2,n_2 d_1)}\right) \, .
\end{align}



Therefore I consider the problem solved in the case of rational arguments. Irrational arguments can be approximated by fractions, but if anyone can come up with a general solution: you are most welcome to share it. ;)


Answer



Only a comment. We have



$$ \int_{0}^{\infty} \frac{\log(1+\alpha x)\log(1+\beta/x)}{x} \, dx = 2\operatorname{Li}_3(\alpha\beta) - \operatorname{Li}_2(\alpha\beta)\log(\alpha\beta) $$




which is valid initially for $\alpha, \beta > 0$ and extends to a larger domain by the principle of analytic continuation. Then for integers $m, n \geq 1$ we obtain



\begin{align*}
f(m, n)
&=\int_{0}^{\infty} \frac{\log(1+x^m)\log(1+x^{-n})}{x}\,dx \\
&\hspace{6em} = \sum_{j=0}^{m-1}\sum_{k=0}^{n-1} \left[ 2\operatorname{Li}_3\left(e^{i(\alpha_j+\beta_k)}\right) - i(\alpha_j+\beta_k)\operatorname{Li}_2\left(e^{i(\alpha_j+\beta_k)}\right) \right],
\end{align*}



where $\alpha_j = \frac{2j-m+1}{m}\pi$ and $\beta_k = \frac{2k-n+1}{n}\pi$. (Although we cannot always split complex logarithms, this happens to work in the above situation.) By the multiplication formula, this simplifies to





\begin{align*}
f(m, n)
&= \frac{2\gcd(m,n)^3}{m^2n^2}\operatorname{Li}_3\left((-1)^{(m+n)/\gcd(m,n)}\right) \\
&\hspace{2em} - \frac{i}{n} \sum_{j=0}^{m-1} \alpha_j \operatorname{Li}_2\left((-1)^{n-1}e^{in\alpha_j}\right) \\
&\hspace{2em} - \frac{i}{m} \sum_{k=0}^{n-1} \beta_k \operatorname{Li}_2\left((-1)^{m-1}e^{im\beta_k}\right).
\end{align*}




Here, $\gcd(m,n)$ is the greatest common divisor of $m$ and $n$.







The following code tests the above formula.



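
(The original snippet is not preserved in this copy, so here is a minimal Mathematica stand-in; it checks the special value $f(1,2)=\pi \mathrm{C} - \frac{3}{8} \zeta (3)$ quoted in the question.)

f[m_, n_] := NIntegrate[Log[1 + x^m] Log[1 + x^(-n)]/x, {x, 0, Infinity}]
f[1, 2] - (Pi Catalan - 3/8 Zeta[3])   (* ≈ 0 *)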


polynomials - Using the fifth roots of unity to find the roots of $(z+1)^5=(z-1)^5$


The question I am working on starts of with:




Find the five fifth roots of unity and hence solve the following problems




I have done that and solved several questions using this, however when I came to the last question (the one in the title) I got stumped.



The five fifth roots of unity are $z= \cos({2\pi k \over 5})+i\sin({2\pi k \over 5}), k\in \mathbb{Z}$, or more simply put $w=\cos({2\pi \over 5})+i\sin({2\pi \over 5})$ and the roots are $z=1, w, w^2, w^3, w^4$.


Now, using this information we are supposed to find the roots of $(z+1)^5=(z-1)^5$. However, having tried some different approaches, I don't know how to proceed. I tried simplifying it to $5z^4+10z^2+1=0$, from which I suppose I can use the quadratic formula, but that does not utilize the five fifth roots of unity, and so I won't get my answer in terms of $w$ (which is the requirement).


Answer



Having calculated the fifth roots of unity $1, w, w^2, w^3, w^4$, i.e. the solutions to the equation $ u^5 = 1$, we can recast our equation as $ \left( \frac{z+1}{z-1} \right) ^5 = 1$ provided $z$ is not equal to $1$.


It is easy to verify that $z = 1$ is not a solution to the equation $(z+1)^5 = (z-1)^5$. So for $z$ not equal to $1$, we have that $ \frac{z+1}{z-1} = 1, w, w^2, w^3, w^4 $. Simple manipulation will allow you to express $z$ in terms of these roots.
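
Explicitly, solving $\frac{z+1}{z-1} = w^k$ for $z$ gives $z = \frac{w^k+1}{w^k-1}$, $k = 1, 2, 3, 4$.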


Note that setting $ \frac{z+1}{z-1} = 1 $ will not yield a solution. This can be expected, since the equation we are solving is quartic and so has at most 4 distinct roots.


sequences and series - First Success distribution PMF sum problem




I have the following problem:




Each toss of a coin results in a head with probability $p$. The coin is tossed until the first head appears. Let $X$ be the total number of tosses (including the toss on which the first head appears).



What is $P(X < m)$ for $m = 1,2, \dots$?




It is important to clarify that, in this context, the variable $X$ is said to have the first success distribution (which is defined to include the toss of the first success), as opposed to the Geometric distribution (which is defined to exclude the toss of the first success). Therefore, if we have $X \sim \text{FS}(p)$ and $Y \sim \text{Geom}(p)$, then $P(X = Y + 1 = k) = P(Y = k - 1) = (1 - p)^{k - 1} p$ for all tosses $k = 1, 2, \dots$, since the PMF for Geometric random variables is $(1 - p)^k p$.




I have that



$$P(X < m = 1) = 0$$



and



$$\begin{align} P(X < m = 2, 3, \dots) &= \sum_{k = 1}^{m - 1} (1 - p)^{k - 1}p \\ &= p + (1 - p)p + \dots + (1 - p)^{m - 2}p \\ &= p(1 + (1 - p) + \dots + (1 - p)^{m - 2})\end{align}$$



The solution has that





$$\begin{align} P(X < m = 2, 3, \dots) &= \sum_{k = 1}^{m - 1} (1 - p)^{k - 1}p \\ &= p + (1 - p)p + \dots + (1 - p)^{m - 2}p \\ &= p(1 + (1 - p) + \dots + (1 - p)^{m - 2}) \\ &= p\dfrac{1 - (1 - p)^{m-1}}{1 - (1 - p)} \\ &= 1 - (1 - p)^{m-1} \end{align}$$




How did the author get that $p(1 + (1 - p) + \dots + (1 - p)^{m - 2}) = p\dfrac{1 − (1 − p)^{m−1}}{1 - (1 - p)} = 1 − (1 − p)^{m−1}$?



I would greatly appreciate it if people could please take the time to clarify this.


Answer



This is a finite geometric series, and finite geometric series can be easily summed:
$$a+ar+\dots+ar^n=a\frac{1-r^{n+1}}{1-r}$$

Thus
$$1+(1-p)+\dots+(1-p)^{m-2}=\frac{1-(1-p)^{m-1}}{1-(1-p)}$$
and the result follows.
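
(To see why the closed form holds, multiply the sum by $1-r$: $(1-r)(a+ar+\dots+ar^n)=a-ar^{n+1}$, then divide by $1-r$.)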


real analysis - Differentiable, but not continuous



Let $f:\mathbb R^2 \to\mathbb R$ be defined by

$$f(x,y) =
\begin{cases}(x^2 + y^2)\sin(\frac{1}{\sqrt{x^2 + y^2}}) &\mbox{when } (x, y) \ne (0, 0) \\
0 &\mbox{when } (x, y) = (0, 0)
\end{cases}$$
Show that $f$ is differentiable at $(0,0)$ but the $i$th partial derivative of $f$ is not continuous at $(0,0)$



My guess is that the derivative at $(0,0)$ will be $0$ since the function is $0$ at the point. However, I am facing some trouble proving it with the help of the definition. And I am also confused about what to do once I have proven it using the definition.


Answer



The differential at origin is $Df(0,0)=(0,0)$ (not $0\in\Bbb R$) iff the following limit is zero:
$$
\lim_{(x,y)\to(0,0)}{f(x,y)-f(0,0)-Df(0,0)(x,y)\over||(x,y)||}=
\lim_{(x,y)\to(0,0)}{(x^2 + y^2)\sin(1/(\sqrt{x^2 + y^2})) \over||(x,y)||}=
$$
$$
\lim_{(x,y)\to(0,0)}\sqrt{x^2 + y^2}\sin(1/(\sqrt{x^2 + y^2}))=0.
$$
But for $(x,y)\ne(0,0)$
$$
{\partial f\over\partial x}(x,y)=2\,x\,\sin \left({{1}\over{\sqrt{x^2+y^2}}}\right)-{{x\,\cos \left(
{{1}\over{\sqrt{x^2+y^2}}}\right)}\over{\sqrt{x^2+y^2}}}
$$
and ${\partial f\over\partial x}$ is discontinuous at (0,0) because
$$
{\partial f\over\partial x}(0,0)=0\ne
\lim_{x\to 0}{\partial f\over\partial x}(x,0)=
\lim_{x\to 0}2\,x\,\sin\left({1\over|x|}\right)-{{x\,\cos \left(
{{1}\over |x|}\right)}\over|x|}=0-\not\exists=\not\exists.
$$
(the 2D limit cannot exist if some directional limit does not exist)


Wednesday, April 25, 2018

calculus - Prove $\lim_{n\to\infty} \frac{n^x}{n!}=0$.

I've been having trouble using the definition of a limit to prove limits, and at the moment I am trying to prove that
$$\lim_{n\to\infty} \frac{n^x}{n!}=0$$



for all $x \in \mathbb{N}$.
I'm able to start the usual setup, namely let $0<\epsilon$ and attempt to obtain $\left\lvert\dfrac {n^x}{n!}\right\rvert <\epsilon$. I don't really feel like this is correct, and I have absolutely no idea how to go about proving this. Any help at all would be very much appreciated!

Why are the elements of a galois/finite field represented as polynomials?



I'm new to finite fields - I have been watching various lectures and reading about them, but I'm missing a step. I can understand what a group, ring, field and prime field are, no problem.



But when we have a prime extension field, suddenly the elements are no longer numbers; they are polynomials. I'm sure there are some great mathematical tricks which show that we can (or must?) use polynomials to be able to satisfy the rules of a field within a prime extension field, but I haven't been able to find a coherent explanation of this step. People I have asked in person don't seem to know either; it's just assumed that that is the way it is.



So I have two questions:




What is a clear explanation of "why polynomials?".



Has anyone tried using constructs other than polynomials to satisfy the same field rules?



Thanks in advance.


Answer



In any ring, finite or infinite, we have two operations: $+$ and $\cdot$. The idea of a ring extension is this: let $R$ be a ring and $x$ be some element that we want to add to $R$ (maybe $R\subset S$ and $x\in S$, or $x$ is some formal element).





We need $R[x]$ to be closed under addition and multiplication.




This means that any element that can be formed by manipulating elements of the set $R\cup\{x\}$ by $+$ and $\cdot$ must be an element of $R[x]$.




Polynomials are a general way of expressing such manipulations.




An arbitrary polynomial in $x$ is a completely general manipulation of $x$ using only the operations valid in a ring. Moreover, any element of a ring can be expressed as a polynomial in terms of the generators of the ring.




Let's see how this works: start with some element $a\in R$. We can add or multiply by any other element of $R$, but this just gives us some $a'\in R$. Or we can multiply by $x$ any number of times to get $a'x^n$ for some $n$. And given different elements of this form, we can add them together to get a polynomial.



In many rings, because the elements $x$ satisfy non-trivial relations (e.g. in $\mathbb C=\mathbb R[i]$, $i^2+1=0$), there are neater ways to express elements, and we can avoid polynomials, even though they lurk in the background. In finite fields, polynomials happen to be the easiest and most intuitive way to express what's going on.
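
As a concrete illustration of the construction (a Mathematica sketch of $GF(9)=GF(3)[x]/(x^2+1)$; the choice of irreducible polynomial is mine):

Factor[x^2 + 1, Modulus -> 3]                     (* stays x^2 + 1, i.e. irreducible over GF(3) *)
PolynomialMod[(x + 2) (2 x + 1), {x^2 + 1, 3}]    (* a product of two field elements: 2 x *)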


Limit problem with an exponential function without L'Hopital

How can I calculate this limit without L'Hospital rule and Taylor series?



$${\lim_{x \to 1} \big(4^x - 3^x\big)^{\frac{1}{x - 1}}}$$

calculus - Show that $\sum_{m \in \mathbb{Z}} e^{-(x-m)^4}$ converges for all $x \in \mathbb{R}$



how can I rigorously show that

$$\sum_{m \in \mathbb{Z}} e^{-(x-m)^4}$$ converges for all real $x$?



I am aware of convergence criteria for ordinary series, but not for $\sum_{m \in \mathbb{Z}}$. Does anybody have an idea?



In particular, I am also looking for $A,B>0$ such that



$$A \le \sum_{m \in \mathbb{Z}} e^{-(x-m)^4} \le B$$ for all $x \in \mathbb{R}.$



If you have any questions, please let me know.


Answer




For $x\in [0,1],$ note $|x-m|\ge |m|-|x|\ge |m|-1.$ Also $|m|-1 \ge |m|/2$ for $|m|\ge 2.$ So for any such $x$ your sum is less than $3 + 2\sum_{m=2}^{\infty}e^{-(|m|/2)^4},$ and that last series is convergent (by a mile).



The proof for other $x$ is similar, although it might be more relaxing to note that the sum as a function of $x$ is periodic.
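
If one wants to see numbers, a quick experiment (Mathematica; by the periodicity remark it suffices to sample $x \in [0,1]$):

s[x_] := NSum[Exp[-(x - m)^4], {m, -Infinity, Infinity}]
{s[0], s[1/2]}   (* two sample values; both finite and positive, consistent with bounds A, B existing *)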


complex analysis - $\int_{0}^{\infty}\frac{dx}{1+x^n}$


My goal is to evaluate $$\int_{0}^{\infty}\frac{dx}{1+x^n}\;\;(n\in\mathbb{N},n\geq2).$$



Here is my approach:


Clearly, the integral converges.


Denote the value of the integral by $I_n$.


Now let $\gamma_R$ describe the section of a circle which goes from the origin to $R$ to $Re^{2\pi i/n}$ and back to the origin.


If we let $C_R$ denote the relevant circular arc, then $$\left|\int_{C_R}\frac{dz}{1+z^n}\right|\leq \left(\frac{2\pi R}{n}\right)\left(\frac{1}{R^{n}-1}\right)\rightarrow0\;\;\;as\;\;R\rightarrow\infty.$$


Furthermore, $$\int_{[R,Re^{2\pi i/n}]}\frac{dz}{1+z^n}=\int_{R}^{0}\frac{e^{2\pi i/n}dr}{1+r^n}.$$


Hence $$\lim_{R\rightarrow\infty}\int_{\gamma_R}\frac{dz}{1+z^n}=\lim_{R\rightarrow\infty}\int_{[0,R]}\frac{dx}{1+x^n}+\int_{[R,Re^{2\pi i/n}]}\frac{dz}{1+z^n}+\int_{C_R}\frac{dz}{1+z^n}=(1-e^{2\pi i/n})I_n\;\;\;(1).$$


Thus if we can obtain the value of $\int_{\gamma_R}\frac{dz}{1+z^n}$ we can evaluate $I_n$.


Now the zeroes of $1+z^n$ are of the form $z=e^{i\pi/n+2\pi i m/n}\;\;(m\in\mathbb{N})$ from which it is clear that the only zero which lies within the contour occurs at $z=e^{i\pi/n}$ with multiplicity 1. So all that remains to be done is to evaluate the residue of $\frac{1}{1+z^n}$ at $z=e^{i\pi/n}$.


However, if $z=e^{i\pi/n}u$ and $u\neq1$, we have $$\frac{z^n+1}{z-e^{i\pi/n}}=\frac{1-u^n}{-e^{i\pi/n}(1-u)} =-e^{-i\pi/n}\sum_{m=0}^{n-1}u^m\;\;\;(2).$$



In particular, (2) implies $$Res_{z=e^{i\pi/n}}\frac{1}{1+z^n}=-\frac{e^{i\pi/n}}{n}\;\;\;(3).$$


Finally, (1) and (3) imply $$I_n=\frac{2\pi i (Res_{z=e^{i\pi/n}}\frac{1}{1+z^n})}{1-e^{2\pi i/n}}=\frac{-2\pi ie^{i\pi/n}}{n(1-e^{2\pi i/n})}=\frac{\pi/n}{\sin(\pi/n)}.$$


I have three questions:


One, is my method correct?


Two, is there a simpler/different method to evaluate the integral?


Three, is there an easier way to evaluate the residue of $\frac{1}{1+z^n}$ at $z=e^{i\pi/n}$?


Answer



Here is a different way. Let's more generally find the Mellin transform.


Consider $$I(\alpha,\beta)=\int_{0}^{\infty}\frac{u^{\alpha-1}}{1+u^{\beta}}du=\mathcal{M}\left(\frac{1}{1+u^{\beta}}\right)(\alpha)$$ Let $x=1+u^{\beta}$ so that $u=(x-1)^{\frac{1}{\beta}}$. Then we have $$I(\alpha,\beta)=\frac{1}{\beta}\int_{1}^{\infty}\frac{(x-1)^{\frac{\alpha-1}{\beta}}}{x}(x-1)^{\frac{1}{\beta}-1}dx.$$ Setting $x=\frac{1}{v}$ we obtain $$I(\alpha,\beta)=\frac{1}{\beta}\int_{0}^{1}v^{-\frac{\alpha}{\beta}}(1-v)^{\frac{\alpha}{\beta}-1}dv=\frac{1}{\beta}\text{B}\left(-\frac{\alpha}{\beta}+1,\ \frac{\alpha}{\beta}\right).$$


Using the properties of the Beta and Gamma functions, this equals $$\frac{1}{\beta}\frac{\Gamma\left(1-\frac{\alpha}{\beta}\right)\Gamma\left(\frac{\alpha}{\beta}\right)}{\Gamma(1)}=\frac{\pi}{\beta\sin\left(\frac{\pi\alpha}{\beta}\right)}.$$



Your question is the case where $\alpha =1$.
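
For what it's worth, the closed form is easy to confirm numerically (Mathematica):

Table[NIntegrate[1/(1 + x^n), {x, 0, Infinity}] - Pi/(n Sin[Pi/n]), {n, 2, 6}]   (* all entries ≈ 0 *)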


Also see Chandru's answer on a different thread. It is another nice solution, along the lines of what you did above. (See this previous question, where both solutions can be found)


Hope that helps,


Tuesday, April 24, 2018

calculus - Find $\lim\limits_{n \to \infty} \frac{x_n}{n}$ when $\lim\limits_{n \to \infty} x_{n+k}-x_{n}$ exists




Let $(x_n)_{n \geq 1}$ be a sequence with real numbers and $k$ a fixed natural number such that $$\lim_{n \to \infty}(x_{n+k}-x_n)=l$$



Find
$$\lim_{n \to \infty} \frac{x_n}{n}$$





I have a strong guess that the limit is $\frac{l}{k}$ and I tried to prove it using the sequence $y_n=x_{n+1}-x_n$. We know that $\lim_{n \to \infty}(y_n+y_{n+1}+\dots+y_{n+k-1})=l$ and if we found $\lim_{n \to \infty}y_n$ we would have from the Cesaro Stolz lemma that $$\lim_{n \to \infty}\frac{x_n}{n}=\lim_{n \to \infty}y_n$$


Answer



For fixed $m \in \{ 1, \ldots, k \}$ the sequence $(y_n)$
defined by $y_n = x_{m+kn}$ satisfies
$$
y_{n+1} - y_n = x_{(m+kn) + k} - x_{m+kn} \to l \, ,
$$
so that Cesaro Stolz can be applied to $(y_n)$. It follows that $\frac{y_n}{n} \to l$ and

$$
\frac{x_{m+kn}}{m+kn} = \frac{y_n}{n} \cdot \frac{n}{m+kn} \ \to \frac{l}{k} \text{ for } n \to \infty \, .
$$
This holds for each $m \in \{ 1, \ldots, k \}$, and therefore
$$
\lim_{n \to \infty} \frac{x_n}{n} = \frac lk \, .
$$


derivatives - Is there a difference in domain between the two ways to notate the reciprocal function?

Whilst exploring basic differentiation, I noticed something peculiar.



We know that the derivative of any constant is 0. However, does this derivative exist for the whole domain?



Consider the function $f(x)=x^2$. We know $f'(x)=2x$, $f''(x)=2$, and $f'''(x)=0$.



If you write the functions literally, it would come out to this:




$f(x)=x^2$



$f'(x)=2*x^1=2x$



$f''(x)=1*2x^0=2$



$f'''(x)=0*2x^{-1}=0$



The issue sprouts from this last derivative; since there is an $x^{-1}$, then wouldn't the derivative not exist if $x=0$? If it does exist, this brings up the question if this is mathematical syntax that allows it to exist. Another example is the following two functions, which are equivalent mathematically, but not by function.




$f(x)=\sqrt{\frac{x^3}{x}}$ Domain: $(-\infty, 0) \cup (0, \infty)$



$g(x)=\frac{\sqrt{x^3}}{\sqrt{x}}$ Domain: $(0,\infty)$



These two functions are taught to be mathematically equivalent, but do not yield the same domain. This is why the question of mathematical syntax has come up. Do these two functions yield the same domain?



$f(x)=1/x$



$g(x)=x^{-1}$

calculus - Evaluate $\int_0^{{\pi}/{2}} \log(1+\cos x)\, dx$



Find the value of $\displaystyle \int_0^{{\pi}/{2}} \log(1+\cos x)\ dx$



I tried to put $1+ \cos x = 2 \cos^2 \frac{x}{2} $, but I am unable to proceed further.
I think the following integral can be helpful:
$\displaystyle \int_0^{{\pi}/{2}} \log(\cos x)\ dx =-\frac{\pi}{2} \log2 $.


Answer




Using Weierstrass substitution
$$
t=\tan\frac x2\qquad;\qquad\cos x=\frac{1-t^2}{1+t^2}\qquad;\qquad dx=\frac{2}{1+t^2}\ dt
$$
we obtain
\begin{align}
\int_0^{\Large\frac\pi2}\ln(1+\cos x)\ dx&=2\underbrace{\int_0^1\frac{\ln2}{1+t^2}\ dt}_{\color{blue}{\text{set}\ t=\tan\theta}}-2\color{red}{\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}\\
&=\frac{\pi}{2}\ln2-2\color{red}{\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}.\tag1
\end{align}
Consider

\begin{align}
\int_0^\infty\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt&=\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt+\underbrace{\int_1^\infty\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}_{\large\color{blue}{t\ \mapsto\ \frac1t}}\\
&=2\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt-2\int_0^1\frac{\ln t}{1+t^2}\ dt\\
\color{red}{\int_0^1\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}&=\frac12\underbrace{\int_0^\infty\frac{\ln\left(1+t^2\right)}{1+t^2}\ dt}_{\color{blue}{\text{set}\ t=\tan\theta}}+\int_0^1\frac{\ln t}{1+t^2}\ dt\\
&=-\underbrace{\int_0^{\Large\frac\pi2}\ln\cos\theta\ d\theta}_{\color{blue}{\Large\text{*}}}+\sum_{k=0}^\infty(-1)^k\underbrace{\int_0^1 t^{2k}\ln t\ dt}_{\color{blue}{\Large\text{**}}}\\
&=\frac\pi2\ln2-\sum_{k=0}^{\infty} \frac{(-1)^k}{(2 k+1)^2}\\
&=\frac\pi2\ln2-\text{G},\tag2
\end{align}
where $\text{G}$ is Catalan's constant.




$(*)$ can be proven by using the symmetry of $\ln\cos\theta$ and $\ln\sin\theta$ in the interval $\left[0,\frac\pi2\right]$ and $(**)$ can be proven by using formula
$$
\int_0^1 x^\alpha \ln^n x\ dx=\frac{(-1)^n n!}{(\alpha+1)^{n+1}}, \qquad\text{for }\ n=0,1,2,\ldots
$$

Thus, plugging in $(2)$ to $(1)$ yields
\begin{align}
\int_0^{\Large\frac\pi2}\ln(1+\cos x)\ dx
=\large\color{blue}{2\text{G}-\frac{\pi}{2}\ln2}.
\end{align}
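
The closed form agrees with direct numerical evaluation (Mathematica):

{NIntegrate[Log[1 + Cos[x]], {x, 0, Pi/2}], N[2 Catalan - Pi/2 Log[2]]}   (* both ≈ 0.74314 *)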


linear algebra - Why is the left inverse of a matrix equal to the right inverse?

Given a square matrix $A$ that has full row rank we know that the matrix is invertible. So there is a matrix $B$ such that




$$
AB=1
$$



writing this in component notation,



$$
A_{ij}B_{jk}=\delta_{ik}
$$




Now, we tend to write $A^{-1}$ instead of $B$ but let's leave it like that for now.



My question is how can we show that $BA=1$? We mechanically jump to the conclusion that if the inverse exists, $AA^{-1}=A^{-1}A=1$ but how to show that? Equivalently why is the left inverse equal to the right inverse? It seems intuitively obvious!



Thanks a bunch, I appreciate.

real analysis - Find a one-to-one correspondence between $[0,1]$ and $(0,1)$.








Establish a one-to-one correspondence between the closed interval $[0,1]$ and the open interval $(0,1)$.



this is a problem in real analysis.



Thanks

Infinite series sum which have infinite terms





Sum of series $\displaystyle \frac{15}{16}+\frac{15}{16}\cdot\frac{21}{24}+\frac{15}{16}\cdot\frac{21}{24}\cdot \frac{27}{32}+\cdots\cdots $



What I tried:



$\displaystyle S =\frac{15}{16}+\frac{15\cdot 21}{16\cdot 24}+\frac{15\cdot 21\cdot 27}{16\cdot 24\cdot 32}+\cdots \cdots $




I am trying to convert the numerator and denominator terms into arithmetic progressions:



$\displaystyle \frac{9S}{8}=\frac{9\cdot 15}{8\cdot 16}+\frac{9\cdot 15\cdot 21}{8\cdot 16\cdot 24}+\cdots \cdots $



$\displaystyle \frac{9S}{8}+\frac{9}{8}+1=1+\frac{9}{8}+\frac{9\cdot 15}{8\cdot 16}+\frac{9\cdot 15\cdot 21}{8\cdot 16\cdot 24}+\cdots \cdots $



but this is a divergent series,



and I do not know how to solve the original infinite series.




Please help me with how to solve it.


Answer



Based on the hint of lab bhattacharjee



$$S=\frac{5}{2}\left(\frac{3}{8}\right)+\frac{5\cdot7}{2\cdot3}\left(\frac{3}{8}\right)^2+\frac{5\cdot7\cdot9}{2\cdot3\cdot4}\left(\frac{3}{8}\right)^3+ \cdots\\
=\frac{4}{3}\cdot\frac{2}{3}\sum_{n=2}^\infty
\frac{-\frac{3}{2}\left(-\frac{3}{2}-1\right)\cdots\left(-\frac{3}{2}-n+1\right)}{n!}\left(-\frac{3}{4}\right)^n\\
=\frac{8}{9}\left[\left(1-\frac{3}{4}\right)^{-\frac{3}{2}}-1-\frac{-\frac{3}{2}}{1!}\left(-\frac{3}{4}\right)\right]\\=\frac{8}{9}\left[8-1-\frac{9}{8}\right]=\frac{47}{9}.
$$
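
The value $\frac{47}{9} \approx 5.2222$ can be confirmed numerically (Mathematica; the inner product reproduces the partial products $\frac{15}{16}, \frac{15}{16}\cdot\frac{21}{24}, \dots$):

NSum[Product[(9 + 6 i)/(8 + 8 i), {i, 1, m}], {m, 1, Infinity}]   (* ≈ 5.22222 *)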







Generally for arbitrary $a\in\mathbb{R}_+$, $k\in\mathbb{Z}_+$, $|x|<1$:
$$
\sum_{N=1}^\infty \prod_{n=1}^N\left(1+\frac{a-1}{k+n}\right)x^N=
\frac{k!}{a(a+1)\cdots(a+k-1)x^k}\sum_{N=k+1}^\infty\frac{a(a+1)\cdots(a+N-1)}{N!}x^N\\
=\left[\binom{a+k-1}{k}x^k\right]^{-1}\left[\left(1-x\right)^{-a}-\sum_{N=0}^k\binom{a+N-1}{N}x^N\right].
$$



For your problem: $a=\frac{3}{2}$, $k=1$, $x=\frac{3}{4}$.




Note:
$$
\binom{a}{k}:=\frac{1}{k!}\prod_{i=0}^{k-1} (a-i);\quad
\binom{a+k-1}{k}x^k\equiv \binom{-a}{k}(-x)^k.
$$


Monday, April 23, 2018

functions - Proof Regarding an iff Statement





Let $f:X \to Y$ be a function.



Then $f$ is one-to-one iff for all subsets $A$ and $B$ of $X$, $f(A\cap B) = f(A) \cap f(B)$.



Any proofs or guidance would be greatly appreciated


Answer




An incomplete proof of the "$\Rightarrow$" part of the biconditional (sketch): the inclusion $f(A\cap B) \subseteq f(A)\cap f(B)$ holds for any $f$. For the reverse inclusion, take $y \in f(A)\cap f(B)$, so $y = f(a) = f(b)$ with $a \in A$ and $b \in B$; one-to-one-ness gives $a = b \in A\cap B$, hence $y \in f(A\cap B)$.


calculus - Proving and deriving a Gamma function

I'm having a hard time trying to prove this Gamma function identity and trying to derive the duplication formula:



a.) Prove that



$$\frac{\Gamma (p)\Gamma (p)}{\Gamma (2p)} = 2\int_0^{1/2}x^{p-1}(1-x)^{p-1}\mathrm dx$$



b.) Make a suitable change of variable (a) and derive the duplication formula for the Gamma function:




$$\Gamma (2p)\Gamma\left(\frac12\right) = 2^{2p-1}\Gamma (p)\Gamma\left(p+\frac12\right)$$

real analysis - Test the convergence of the integral $\int_{-\infty}^{\infty}\frac{e^{-x}}{1+x^2}$.



Test the convergence of the following integral: $$\int_{-\infty}^{\infty}\frac{e^{-x}}{1+x^2}\,dx.$$ I cannot find the indefinite integral of the integrand, so I cannot check the limits at $-\infty$ and $\infty$. Also, I cannot apply any theorem about convergence, like Abel's test, Dirichlet's test, etc. Can anyone help me?


Answer



Follow @Solitary comment. Let $f(x)$ be the integrand function.
Notice that
$$
\lim_{x\to -\infty} f(x) = +\infty
$$
hence there is some $b<0$ such that $f(x)\ge 1$ for all $x\le b$; hence for all $a<b$
$$
\int_{a}^{b} f(x)\,dx \ge b-a \to +\infty
$$
as $a\to -\infty$.



Hence
$$
\int_{-\infty}^0 f(x)\,dx = +\infty.
$$
This assumes that you are speaking of improper integrals. If you are speaking of Lebesgue integrals the solution is even simpler...


field theory - Are logarithms of prime numbers quadratically independent over $\mathbb Q$?



It is well-known that the logarithms of prime numbers are linearly independent over $\mathbb Q$. It is also known that
the question whether the logarithms are algebraically independent over $\mathbb Q$ is an open problem.



What is known about the next to linear by complexity case? Are the logarithms of primes quadratically independent over $\mathbb Q$, i.e.
$$\sum_{i,j\le N}a_{ij} \log p_i \log p_j = 0, \quad a_{ij} \in \mathbb Q \quad \implies \quad a_{ij} = -a_{ji}\,?$$


Answer



It seems highly probable that this is an open question, for the following indirect reason.




For non-negative integers $m_p, n_p$ ($p$ prime), almost all zero, if $a = \prod_pp^{m_p}$ and $b = \prod_pp^{n_p}$, then
\begin{multline*}
\log_2a = \log_3b \iff \frac{\sum_p m_p\log{p}}{\log2} = \frac{\sum_p n_p\log{p}}{\log3} \\
\iff -n_2(\log2)^2 + (m_2 - n_3)\log2\log3 + m_3(\log3)^2 \\ - \sum_{p\geqslant5}n_p\log2\log{p} + \sum_{p\geqslant5} m_p\log3\log{p} = 0,
\end{multline*}

and if the logarithms of the primes were known to be quadratically independent over $\mathbb{Q}$, this would imply $a = 2^n$, $b = 3^n$ for some non-negative integer $n$; but as this would settle the notorious open problem If $2^x $and $3^x$ are integers, must $x$ be as well?, someone would surely have noticed by now!


Sunday, April 22, 2018

calculus - Did I choose the correct method for solving this integral?(Integral Techniques)



I'm currently studying for my calc exam, and I've stumbled across a rather odd problem (at least for me). I've been doing integrals non-stop for 2 weeks now and I haven't seen anything like this, so I would like to know if my approach is correct. I feel like I'm not doing it correctly, since my answer conflicts with my professor's answer key. It is as follows:



$$\int\sqrt{x^4+x^7}\;dx \quad\text{or}\quad \int(x^4+x^7)^\frac{1}{2}\;dx$$



Since I've been doing (mostly) complex trig sub and integration by parts, I didn't immediately know what to do. I decided to convert the integral to $\int(x^4+x^7)^{1/2}\,dx$ and multiply the exponents:




$$\int\sqrt{x^4+x^7} = \int(x^2+x^\frac{7}{2})\;dx$$



Then I use basic integration to yield:



$$\frac13x^3+\frac29x^\frac{9}{2}+\;C$$



Taking the derivative to check:



$$\frac {d}{dx}(\frac 13x^3 + \frac29x^\frac{9}{2}) = x^2+x^\frac{7}{2}$$




Seems to give me what I started with, but my answer key has this as the answer: $$\frac29(1+x^3)^\frac32+C$$



I can see some similarities to my answer, but it makes me feel like I made a mistake in my technique. Symbolab isn't capable of computing this integral for some reason, and WolframAlpha gives an answer far, far different than either of the integrals I (or my professor) have. Any input would be greatly appreciated, as I just want to be as prepared for my exam as possible. Thanks!


Answer



HINT



Take $x^4$ out as a common factor from the square root and then put $x^3=t$.
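
In other words (a short sketch of where the hint leads, valid where $1+x^3 \ge 0$): $\sqrt{x^4+x^7}=x^2\sqrt{1+x^3}$, and with $t=1+x^3$, $dt=3x^2\,dx$,
$$\int x^2\sqrt{1+x^3}\,dx=\frac13\int\sqrt{t}\,dt=\frac29(1+x^3)^{3/2}+C,$$
which matches the professor's answer key.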


Saturday, April 21, 2018

elementary number theory - Prove: $\gcd(n^a-1,n^b-1)=n^{\gcd(a,b)}-1$

I'm trying to prove the following statement:
$$\forall_{a,b\in\Bbb{N^{+}}}\gcd(n^a-1,n^b-1)=n^{\gcd(a,b)}-1$$



As for now I managed to prove that $n^{\gcd(a,b)}-1$ divdes $n^a-1$ and $n^b-1$:



Without loss of generality let $a>b$ (if $a=b$, the result is obvious), $n\neq1$ ($\gcd(0,0)$ makes no sense). Then if we let $a=kd, b=jd, d=\gcd(a,b)$, we see that for $n^{kd}-1$ we have $\frac{(n^d)^k-1}{n^d-1}=1+n^d+...+n^{d(k-1)}=C$ (it's a finite geometric series), so $n^a-1=(n^d-1)C$. Same works for $n^b-1$, so $n^d-1$ divides both of these numbers.



How do I prove that $n^d-1$ not only divides both numbers, but is their greatest common divisor?

Summation of series with factorial



(The series was given as an image; its general term is identified in the answer below.)




I tried breaking the terms into differences or finding a generalised term but did not get it right. Can someone please help me to proceed with this?


Answer



Assuming the first term is $1$ (and not $2$ as written), the general term of the series is, for $n\geq 1$,
$$
a_n \stackrel{\rm def}{=} \frac{\prod_{k=2}^{n}(2k+1)}{n!3^{n-1}}
= \frac{\prod_{k=1}^{n}(2k+1)}{n!3^{n}}
= \frac{(2n+1)!}{n!3^{n}\prod_{k=1}^n(2k)}
= \frac{(2n+1)!}{n!3^{n}2^nn!}
= \frac{(2n+1)!}{(n!)^26^{n}}
$$

or, equivalently, $a_n= (2n+1)\binom{2n}{n}\left(\frac{1}{6}\right)^n$.



Now, either you work towards finding the general form for $$f(x) = \sum_{n=1}^\infty (2n+1) \binom{2n}{n}x^n$$
(a power series with radius of convergence $1/4$), which you can find by relating it to both
$
g(x) = \sum_{n=1}^\infty n\binom{2n}{n}x^{n-1}
$
(recognize a derivative) and $
h(x) = \sum_{n=1}^\infty \binom{2n}{n}x^{n}
$, since $$f(x) = 2xg(x)+h(x)\,;$$

or, by other means (there may be?) you establish that
$f(1/6) = 3\sqrt{3}-1$ (note the missing $n=0$ term: $\sum_{n\ge0}(2n+1)\binom{2n}{n}x^n=(1-4x)^{-3/2}$, which equals $3\sqrt{3}$ at $x=1/6$), leading to
$$
\sum_{n=1}^\infty a_n = 3\sqrt{3}-1.
$$
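
A quick numerical check (Mathematica):

NSum[(2 n + 1) Binomial[2 n, n]/6^n, {n, 1, Infinity}]   (* ≈ 4.19615, i.e. 3 Sqrt[3] - 1 *)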


combinatorics - Analysis of how-many-squares and rectangles are are there on a chess board?




I know the formula $$f(n) = \frac{n \cdot (n + 1) \cdot (2n + 1) } 6$$



Since chess board consists $8\times 8$, hence here $n=8$.



But I want to know how this is derived. Also, how does one tackle the number of rectangles?


Answer



Here is an easy way to count the number of rectangles. There are $9$ horizontal lines on the chessboard, and $9$ vertical lines. Choose two distinct horizontal lines, and two distinct vertical lines. These determine a unique rectangle. And any rectangle determines a pair of horizontal lines and a pair of vertical lines.



So the number of rectangles is $\binom{9}{2}^2$. That is $1296$.




Exactly the same idea can be used to count the number of rectangles on an $m$ by $n$ chessboard. It is
$$\binom{m+1}{2}\binom{n+1}{2}.$$



The number of squares is a bit less nice. It is easy to see that there are $8^2$ small $1\times 1$ squares, $7^2$ $2\times 2$ squares, and so on down to $1^2$ $8\times 8$ squares, for a total of
$$1^2+2^2+3^2+\cdots+8^2.$$
Now we can add up, but there is also a simple formula for the sum of the first $n$ squares. The same idea works for counting the squares on an $n \times n$ chessboard.
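
Both counts for the $8\times 8$ board in one line (Mathematica):

{Binomial[9, 2]^2, Sum[k^2, {k, 1, 8}]}   (* {1296, 204} *)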


galois theory - Construction of non-prime finite fields

I am new to Galois field theory and I am struggling with some definitions. To construct any non-prime finite field $GF(p^n)$ with $p$ prime and $n \in \mathbb{N}$, one has to find an irreducible polynomial $g(x)$ over $GF(p)$ and then calculate $GF(p^n) = GF(p)[x] / (g(x))$.



Assuming I want to construct $GF(9) = GF(3^2)$: why do I have to do the stuff above? Doesn't $GF(3^2)$ simply contain elements ranging from 0 to 8? What is the above construction rule about?

Can every divergent series be regularized?




The following words reflect my understanding(an elementary one) of the divergent series. We first define an infinite series as follows:



$L = \sum_{n=0}^{\infty}a_n \Leftrightarrow L = \lim_{k \rightarrow \infty} S_k.$



Where $S_k$ is the partial sum of the infinite series from $a_0$ to $a_k$. A series whose limit exists is said to be convergent, if not, then it's called divergent.



By this former definition, series like:



$1-1+1-...$ and $1+2+3+...$ are divergent.




Then we have the notion of a regularized sum, where we look for a new definition of infinite series that allows us to assign real values to some divergent series. Also, series that are normally convergent under the definition $L = \sum_{n=0}^{\infty}a_n \Leftrightarrow L = \lim_{k \rightarrow \infty} S_k$ remain convergent under the new definition, and the two definitions yield the same exact limit $L$ for the normally convergent series. Although I'm not sure of the following: different summation methods always assign the same value to a divergent series (in case a value can be assigned at all), so that $1-1+1-...=1/2$ under Cesàro summation and Abel's and any other summation that assigns a value to such a series.



In addition to that, there are series like $1+2+3+...$ that are not Cesàro or Abel summable, but can be summed under other methods like zeta regularization; this implies that a series that is not summable under a certain summation method (say Cesàro's) can be summable under other summation methods (like zeta).



This last fact leads me to my question:



-Can every divergent series be regularized? That is, for every series that is not summable under certain summation methods, can we find a new summation method that sums it up?



-If the answer is yes to the last question, then, does there exist a summation method such that it can sum(regularize) every single divergent series?


Answer




In the most general sense, a summation is a partial function from the set of summand sequences to $\mathbb R$ (or $\mathbb C$). This sounds like we could assign more or less arbitrary values and if we want we really can.
However, certain properties of summations are preferred to hold, such as




  • Regularity: that is, our summation method should be an extension of the standard convergent-sequence-of-partial-sums method

  • Linearity: that is, if we define $\sum a_n$ and $\sum b_n$ then we also define $\sum(ca_n+b_n)$ and have $\sum(ca_n+b_n)=c\sum a_n+\sum b_n$

  • Stability: $\sum a_n$ is defined if and only if $\sum a_{n+1}$ is defined, and we have $\sum a_n=a_1+\sum a_{n+1}$



To repeat: not all summation methods (not even all methods in practical use) obey all three criteria. But if we concentrate on methods obeying all three, then indeed we often get that certain (classically) divergent series are always assigned the same value under any summation method. For example, $\sum x^n=\frac1{1-x}$ follows for all $x\ne 1$ for which we define the sum, merely by playing around with stability and linearity.
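
Concretely: write $S=\sum_{n\ge0}x^n$; stability gives $S=1+\sum_{n\ge0}x^{n+1}$, linearity gives $\sum_{n\ge0}x^{n+1}=xS$, hence $S=1+xS$, i.e. $S=\frac1{1-x}$.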




So how far can we go? We can use Zorn's lemma to find a maximal regular, linear, stable summation method. But will "maximal" imply "total", i.e., that all series become summable? And will the summation thus obtained be well-defined? Unfortunately, the answer to both is no. This can already be exemplified with $\sum 1$, which has to be a solution of $\sum 1 = 1+\sum 1$ per stability. (Then again, you may have read that regularization can assign $1+1+1+\ldots =-\frac12$; apparently those methods are not linear or not stable ...)


calculus - Find the value of $\lim_{n\to \infty}\left(1+\frac{1}{n}\right)\left(1+\frac{2}{n}\right)^{1/2}\ldots(2)^{1/n}$



Find the value of
$$\lim_{n\to \infty}\bigg(1+\dfrac{1}{n}\bigg)\bigg(1+\dfrac{2}{n}\bigg)^{\frac12}\ldots(2)^{\frac{1}{n}}$$




My work:
$\bigg(1+\dfrac{1}{n}\bigg)=\bigg\{\bigg(1+\dfrac{1}{n}\bigg)^n\bigg\}^{\frac{1}{n}}=e^{\frac{1}{n}}$
$\bigg(1+\dfrac{2}{n}\bigg)^{\frac12}=\bigg\{\bigg(1+\dfrac{2}{n}\bigg)^{\frac{n}{2}}\bigg\}^{\frac{1}{n}}=e^{2\cdot\frac12\cdot\frac{1}{n}}=e^\frac{1}{n}$
$~~~~~~~~~~~~\vdots$
$~~~~~~~~~~~~\vdots$
$\bigg(1+\dfrac{n}{n}\bigg)^{\frac{1}{n}}=e^{\frac{1}{n}}$
So, $L=e$
But, the answer says $L=e^{\frac{\pi^2}{12}}$.
I do not know where I am going wrong, is the answer a typo or I am doing wrong. Please help.


Answer



This seems to be the reasoning in your argument
$$
\begin{align}
\lim_{n\to\infty}\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}
&=\lim_{n\to\infty}\left(\prod_{k=1}^n\left(1+\frac kn\right)^{n/k}\right)^{1/n}\tag{1}\\
&=\lim_{n\to\infty}\left(\prod_{k=1}^n\lim_{n\to\infty}\left[\left(1+\frac kn\right)^{n/k}\right]\right)^{1/n}\tag{2}\\
&=\lim_{n\to\infty}\left(\prod_{k=1}^n\ e\right)^{1/n}\tag{3}\\[12pt]
&=\ e\tag{4}
\end{align}
$$
All of the steps are fine except $(2)$. It is not, in general, allowed to take the limit of an inner part like that. For example consider
$$
\begin{align}
\lim_{n\to\infty}\left(\frac1n\cdot n\right)
&=\lim_{n\to\infty}\left(\lim_{n\to\infty}\left[\frac1n\right] n\right)\tag{5}\\
&=\lim_{n\to\infty}\left(0\cdot n\right)\tag{6}\\[3pt]
&=\lim_{n\to\infty}\ 0\tag{7}\\[2pt]
&=\ 0\tag{8}
\end{align}
$$
Step $(5)$ is the same as step $(2)$, but that step allows us to show that $1=0$.



To see why this affects your limit adversely, notice that no matter how big $n$ gets in the limit, when $k$ is near $n$, $\left(1+\frac kn\right)^{n/k}$ is close to $2$, not $e$. Thus, the terms of the product are between $2$ and $e$. Not all of them tend to $e$.






What we need to do is use the continuity of $\log(x)$ as viplov_jain suggests.

$$
\begin{align}
\log\left(\lim_{n\to\infty}\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}\right)
&=\lim_{n\to\infty}\log\left(\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}\right)\tag{9}\\
&=\lim_{n\to\infty}\sum_{k=1}^n\frac1k\log\left(1+\frac kn\right)\tag{10}\\
&=\lim_{n\to\infty}\sum_{k=1}^n\frac nk\log\left(1+\frac kn\right)\frac1n\tag{11}\\
&=\int_0^1\frac1x\log(1+x)\,\mathrm{d}x\tag{12}\\
&=\int_0^1\sum_{k=0}^\infty(-1)^k\frac{x^k}{k+1}\,\mathrm{d}x\tag{13}\\
&=\sum_{k=0}^\infty\frac{(-1)^k}{(k+1)^2}\tag{14}\\
&=\frac{\pi^2}{12}\tag{15}
\end{align}
$$
Step $(12)$ uses the idea of approximating a Riemann Sum by an integral. $(15)$ tells us that
$$
\lim_{n\to\infty}\prod_{k=1}^n\left(1+\frac kn\right)^{1/k}=e^{\pi^2/12}\tag{16}
$$
Notice that
$$
2\lt2.27610815162573\doteq e^{\pi^2/12}\lt e\tag{17}
$$
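
Both the integral in $(12)$ and the numerical value in $(17)$ are easy to confirm (Mathematica):

Integrate[Log[1 + x]/x, {x, 0, 1}]   (* Pi^2/12 *)
N[E^(Pi^2/12), 15]                   (* 2.27610815162573 *)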



Friday, April 20, 2018

calculus - How to show $\lim\limits_{n\to\infty} \frac{a_n}{n}=2 \Rightarrow \lim\limits_{n\to\infty}(a_n-n)=\infty$?




I find it hard to answer the question below. I just don't know how to use the fact that $$\lim_{n\to\infty} \frac{a_n}{n}=2.$$ Maybe with limit arithmetic?




Let $(a_n)$ be a sequence, where $$\lim_{n\to\infty} \frac{a_n}{n}=2$$



Is it correct that $$\lim_{n\to\infty}(a_n-n)=\infty$$




I think it is correct since from limit arithmetic I can get to the conclusion that $$\lim_{n\to\infty}a_n=2\lim_{n\to\infty}n$$




But I just can't prove it.



Thanks.


Answer



Hint: Prove that $\dfrac{a_n}{n} > 1.5$ for all $n$ sufficiently large.



Solution:





Since $\dfrac{a_n}{n} \to 2$, taking $\varepsilon=0.5$, we get that $\dfrac{a_n}{n} > 1.5$ for all $n$ sufficiently large. This implies that $a_n-n> 0.5 n$ for all $n$ sufficiently large and so $a_n -n \to \infty$.



calculus - Question about a discontinuous function with directional derivatives at a point

Suppose a function has directional derivatives along some lines through a point $a$, but is discontinuous at $a$. Does that mean that along those lines the function is continuous, but along some other directions it is not? What does the graph of such a function look like? Continuous in some directions but discontinuous in others?

Thursday, April 19, 2018

algorithms - Finding irreducible polynomials over GF(2) with the fewest terms



I'm investigating an idea in cryptography that requires irreducible polynomials with coefficients of either 0 or 1 (i.e., over GF(2)). Essentially I am mapping bytes to polynomials. For this reason, the degree of the polynomial will always be an integer multiple of 8.



I would like to build a table of such irreducible polynomials for various degrees (e.g. degrees 8, 16, 24, ..., 1024). Because there are multiple irreducible polynomials for a given degree, I'd like the one with the fewest terms, since I will hard-code the non-zero terms. For example, for degree 16, both of these polynomials are irreducible:



$x^{16} + x^{14} + x^{12} + x^7 + x^6 + x^4 + x^2 + x + 1$




and



$x^{16} + x^5 + x^3 + x + 1$



Obviously, the latter one is preferred because it requires less space in code and is less error-prone (e.g. less chance of a copy/paste mistake).



Furthermore, I've noticed that to at least degree 1024 where the degree is a multiple of 8, there are irreducible polynomials of the form:



$x^n + x^i + x^j + x^k + 1$ where $n = 8*m$ and $0 < i,j,k < 25$



Is there a good algorithmic way of finding these polynomials (or ones that have even fewer terms)? Again, the purpose is to keep the non-zero terms in a look-up table in code.



Thanks in advance for any help!



UPDATE:



This Mathematica code generates all pentanomials for degrees that are multiples of 8 up to degree 1024:




IrreducibleInGF2[x_] := IrreduciblePolynomialQ[x, Modulus -> 2]

ParallelTable[
  Select[
    Sort[
      Flatten[
        Table[
          x^n + x^a + x^b + x^c + 1,
          {a, 1, Min[25, n - 1]}, {b, 1, a - 1}, {c, 1, b - 1}
        ]
      ]
    ],
    IrreducibleInGF2, 1
  ],
  {n, 8, 1024, 8}
]


(I sorted the list of polynomials to make sure I always got the one with the overall smallest degrees first). However, it takes quite a bit of time to run. For example, it took over 26 minutes for the case of $x^{984} + x^{24} + x^9 + x^3 + 1$ .



UPDATE #2




The HP paper "Table of Low-Weight Binary Irreducible Polynomials" has been incredibly helpful. It lists up to $x^{10000}$ and reiterates a proof by Swan that there are no trinomials when the degree is a multiple of 8 (which matches my findings). I've spot checked that their results match mine up to $x^{1024}$ so I'll just need to double check their results up to 10000 which should be much easier than finding them myself.


Answer



According to a paper "Optimal Irreducible Polynomials for $GF(2^m)$ Arithmetic" by M. Scott,
"it is in practise always possible to chooose as an irreducible polynomial either a trinomial... or a pentanomial." [talk slides] [PDF link]



In random number generation, irreducible trinomials of various degrees are associated with the names Tausworthe, Lewis, and Payne.



Added: It has been known since Gauss that there are lots of irreducible polynomials over a finite field, basically the analog of the Prime Number Theorem for such polynomials. Among the $2^m$ (monic) polynomials over Z/2Z of degree $m$, approximately $1/m$ of them are irreducible.



We can eliminate the possibility of first degree factors by inspection, for divisibility by $x$ or $x+1$ would imply respectively a zero constant term or an even number of nonzero terms in the polynomial. So the first case to test for irreducibility is the trinomials of degree $m$. With leading and constant coefficients accounting for two of the three nonzero terms, there are but $m-1$ possibilities to test, and by symmetry of $x$ → $1/x$ substitution, we can restrict the middle terms to degree ≤ $m/2$.




If none of those pan out, we have the richer supply of pentanomials to test. Indeed you seem to have hit upon a seam of cases where trinomials will never work out, namely degree $m$ a multiple of 8 [PS] (Swan, 1962).



The work then comes down to testing all the $\binom{m-1}{3}$ binary pentanomials $p(x)$ until we find one that's irreducible. Your application might make other conditions attractive, perhaps similar to those considered in Scott's paper above. Given the modest degrees you are working with, trial division (taking $p(x) \bmod q(x)$ for all $q(x)$ of degree ≤ $m/2$) should be fast enough. [Remember, we shouldn't have to test more than O(m) possibilities before we find success.]
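As an illustration of the trial-division approach, here is a minimal Python sketch; a binary polynomial is stored as an integer bitmask with bit $i$ holding the coefficient of $x^i$, and the function names are made up for this sketch:

def pmod(a, m):
    # remainder of a modulo m in GF(2)[x]; subtraction is XOR
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def is_irreducible_trial(p):
    # p of degree m >= 2 is irreducible iff no q with 1 <= deg q <= m/2 divides it
    m = p.bit_length() - 1
    return all(pmod(p, q) != 0 for q in range(2, 1 << (m // 2 + 1)))

# sanity check: x^8 + x^4 + x^3 + x + 1 (the AES polynomial) is irreducible
print(is_irreducible_trial(0b100011011))  # True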



There is a fancier way [PDF] to test polynomials for irreducibility over GF(2). A necessary condition for binary polynomial $p(x)$ to be irreducible over $GF(2)$ is that:
$$x^{2^m} \equiv x \pmod{p(x)}$$



In fact Gauss showed for prime q that $x^{q^m} - x$ is precisely the product
of all monic irreducible polynomials over $GF(q)$ whose degrees divide
$m$. [From this he deduced the count of monic irreducible polynomials of

degree exactly $m$ is asymptotically $q^m/m$ as $m \rightarrow \infty$.]



For $q = 2$ it follows that if $p(x)$ is irreducible of degree $m$,
it divides $x^{2^m} - x$ over $GF(2)$, i.e. the congruence above.



Rubin (1980) proposed a necessary and sufficient test for irreducibility,
combining the above with some additional steps to rule out the possibility
that $p(x)$ might be the product of some irreducible factors whose degrees
properly divide $m$. [While the degrees of the irreducible factors would
naturally sum to $m$, having all the irreducible factors' degrees divide

$m$ would be somewhat special, unless of course there is only one factor.]



The additional "sufficiency" steps are to check for each prime factor
$d_i$ of $m$ that:
$$GCD(x^{2^{m/d}} - x, p(x)) = 1$$



That is, if $p(x)$ were to have an irreducible factor of degree $k$ properly
dividing $m$, it would crop up when taking the gcd of $x^{2^{m/d}} - x$ and
$p(x)$ if $k$ divides $m/d$.




Since then a lot of ingenuity has been applied to efficiently doing these
steps. Of course the necessary condition lends itself to repeated squaring,
computing $x^{2^{k+1}} = (x^{2^k})^2 \bmod p(x)$, for $k$ up to $m-1$. We
can take advantage here of the fact that the multiplication we're doing
is a squaring, and of the sparsity of our $p(x)$ as a pentanomial
when reducing mod $p(x)$.
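Putting these pieces together, here is a minimal Python sketch of the squaring-based necessary condition combined with the gcd "sufficiency" checks described above (bitmask representation again; all names are made up for this sketch):

def pmul(a, b):
    # carry-less multiplication in GF(2)[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    # remainder of a modulo m in GF(2)[x]
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def pgcd(a, b):
    # Euclidean algorithm in GF(2)[x]
    while b:
        a, b = b, pmod(a, b)
    return a

def prime_factors(n):
    f, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            f.add(d)
            n //= d
        d += 1
    if n > 1:
        f.add(n)
    return f

def x_pow_pow2(e, p):
    # x^(2^e) mod p by e repeated squarings
    t = 0b10  # the polynomial x
    for _ in range(e):
        t = pmod(pmul(t, t), p)
    return t

def is_irreducible(p):
    # assumes deg p = m >= 2
    m = p.bit_length() - 1
    if x_pow_pow2(m, p) != 0b10:        # necessary: x^(2^m) = x mod p(x)
        return False
    for d in prime_factors(m):          # sufficiency: rule out factors whose
        t = x_pow_pow2(m // d, p)       # degrees properly divide m
        if pgcd(t ^ 0b10, p) != 1:      # gcd(x^(2^(m/d)) - x, p) must be 1
            return False
    return True

# the degree-16 pentanomial from the question should test as irreducible
print(is_irreducible((1 << 16) | (1 << 5) | (1 << 3) | (1 << 1) | 1))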



As the report by Brent of work with Zimmerman (linked above) points out,
this repeated squaring gives, with each fairly inexpensive step, linear
progress toward the "exponent of the exponent" $m$. There is also a way
to progress farther per step, at greater computational effort, by modular
composition.



That is, suppose we've already arrived at:
$$f(x) = x^{2^k} \mod p(x)$$
and
$$g(x) = x^{2^j} \mod p(x)$$
Then:
$$f(g(x)) = x^{2^{k+j}} \mod p(x)$$




Thus composition of two polynomials $f(x)$ and $g(x)$ mod $p(x)$ can
replace a number of repeated squaring steps. But composition mod $p(x)$
is more expensive than squaring or even than multiplication generally
mod $p(x)$. So as Brent points out, practical advantage for using
modular composition lies at the final stage(s) of evaluating the
necessary condition. E.g. at the end one modular composition might
replace $m/2$ repeated squarings.



As far as the "sufficiency" conditions go, Gao and Panario outlined
an improvement over a naive implementation of Rubin's tests in this


1997 paper [PDF]
, basically sequencing the gcd computations in
an expedient order.


Wednesday, April 18, 2018

elementary number theory - Not understanding Simple Modulus Congruency



Hi this is my first time posting on here... so please bear with me :P



I was just wondering how I can solve something like this:




$$25x ≡ 3 \pmod{109}.$$



If someone can give a break down on how to do it would be appreciated (I'm a slow learner...)!



Here is proof that I've attempted:




  1. Using definition of modulus we can rewrite $$25x ≡ 3 \pmod{109}$$ as $25x = 3 + 109y$ (for some integer $y$). We can rearrange that to $25x - 109y = 3$.


  2. We use the Extended Euclidean Algorithm (not sure about this part; I keep messing things up), so this is where I'm stuck.





Thanks!


Answer



The extended Euclidean algorithm is used to find $x$ and $y$ such that $ax + by = \gcd(a, b)$.



In our case $a = 109$ and $b = 25$.



So we start as follows.



Find remainder and quotient when we divide $109$ by $25$ and write the remainder on the left hand side.




So we get



9 = 109 - 25*4.



Now we get two new numbers $25$ and $9$. Write the remainder on the left hand side again.



7 = 25 - 9*2.



So we have two new numbers, 9 and 7.




In the extended algorithm, we use the formula for 9 in the first step



7 = 25 - (109 - 25*4)*2 = 25*9 - 109*2.



Now



2 = 9 - 7*1



= (109-25*4) - (25*9 - 109*2) = 109*3 - 25*13




Now write



1 = 7 - 3*2



i.e.



1 = (25*9 - 109*2) - 3*(109*3 - 25*13)



i.e.

1 = 25*48 - 109*11



Thus $25x - 109y = 1$ for $x = 48$ and $y = 11$.



So $25x - 109y = 3$ for x = 48*3 = 144 and y = 11*3 = 33.



Therefore 144*25 = 3 (mod 109).



If you need a number $\lt 109$,




$144 = 109 + 35$.



So we have (109+35)*25 = 3 (mod 109).



Which implies 35*25 = 3 (mod 109).



Thus $x = 35$ is a solution to your equation, which we found using the extended euclidean algorithm.



Hope that helps.
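The whole computation can be packaged up; here is a minimal Python sketch of the extended Euclidean algorithm (the function name is illustrative):

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = ext_gcd(25, 109)   # g = 1, x = 48, y = -11
print((x * 3) % 109)         # -> 35, the solution found above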


summation - Why is $\sum_{j = 1}^{n} \dfrac{1}{n} = 1$


I encountered $\sum_{j = 1}^{n} \dfrac{1}{n} = 1$ in my textbook, but I do not understand why this summation $= 1$. My textbook provides no reasoning as to why this is the case.


My understanding is that, since nothing in $\dfrac{1}{n}$ depends on $j$, it seems we are just adding $\dfrac{1}{n}$ to itself $n$ times. However, I'm not sure how to interpret this and why it equals $1$.


I apologise if there is already a question on this, but my searches have encountered nothing that addresses this specific summation. If I am mistaken, I would appreciate it if someone could please redirect me.



I would greatly appreciate it if people could please take the time to explain the reasoning behind this.


Answer



Note that $$ \sum_{j=1}^n \frac 1n = \overbrace{\frac 1n + \frac 1n + \cdots + \frac 1n}^{n\text{ times}} = n \cdot \frac 1n = 1 $$


Tuesday, April 17, 2018

algebra precalculus - Method to solve $|x| + |2-x| leq x+ 1$


Even if it seems really easy, I'm struggling to solve $$|x| +|2-x |\leq x+1.$$


The book says that $ x \in [1,3] $.



I first rewrote as $x+(2-x)\leq x+1$ with $x\geq 0$ and $-x-(2-x)\leq x+1$ with $x<0$. Then I solved.


For the first, I got $1\leq x$ and for the 2nd, $-3\leq x$


$$x\in ]-3;0[\cup[1;+\infty[ $$


It is maybe a very stupid question, but I can't see what I did wrong? Thanks for your help.


Answer



There are 3 cases:


a) $x\leq 0$ (this case turns out to be empty, since the bound it produces contradicts the assumption):


$$|x| +|2-x |\leq x+1\implies -x+ 2-x\leq x+1$$ $$-3x\leq -1$$ $$3x\geq1$$ $$x\geq\frac{1}{3}$$ which contradicts $x\leq 0$, so this case contributes no solutions.


b) $0 < x \leq 2$:

$$|x| +|2-x |\leq x+1\implies x+2-x\leq x+1$$ $$x\geq 1$$



c) $x > 2$:


$$|x| +|2-x |\leq x+1\implies x+x-2\leq x+1$$ $$2x\leq x+3\implies x\leq 3$$


Now, you know the lower bound of $x$, from case b, $x_{min}=1$, and you know the upper bound of $x$ from case c, $x_{max}=3$. Therefore, $1\leq x\leq 3\implies x\in [1;3]$
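As a quick sanity check, one can test the inequality on a grid of points (a minimal Python sketch):

# points x in [-10, 10] (step 0.01) satisfying |x| + |2 - x| <= x + 1
sols = [t / 100 for t in range(-1000, 1001)
        if abs(t / 100) + abs(2 - t / 100) <= t / 100 + 1]
print(min(sols), max(sols))  # -> 1.0 3.0, consistent with x in [1, 3]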


calculus - How do I derive $1 + 4 + 9 + \cdots + n^2 = \frac{n (n + 1) (2n + 1)}{6}$





I am introducing my daughter to calculus/integration by approximating the area under $y = x^2$ by calculating small rectangles below the curve.


This is very intuitive and I think she understands the concept; however, what I need now is an intuitive way to arrive at $\frac{n (n + 1) (2n + 1)} 6$ when I start from $1 + 4 + 9 + \cdots + n^2$.


In other words, just how did the first ancient mathematician come up with this formula? What were the first steps leading to this equation? That is what I am interested in, not the actual proof (that would be the second step).


Answer



Just as you can prove $1+2+\cdots+n = n(n+1)/2$ with the picture


*oooo
**ooo
***oo
****o

you can prove $\frac{n (n + 1) (2n + 1)} 6$ by building a box out of 6 pyramids:


[diagram: six step pyramids assembled into an $n \times (n+1) \times (2n+1)$ box]


Sorry the diagram is not great (someone can edit if they know how to make a nicer one). If you just build 6 pyramids you can easily make the $n \times (n+1) \times (2n+1)$ box out of them.


  • make 6 pyramids (1 pyramid = $1 + 2^2 + 3^2 + 4^2 + ...$ blocks)

  • try to build a box out of them

  • measure the lengths and count how many pyramids you used; that gives you the formula


Using these (glued): [image of small cubes glued into step pyramids]
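Once the formula is conjectured from the pyramid picture, it is easy to confirm numerically before proving it; a minimal Python sketch:

# check 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6 for several n
for n in (1, 2, 3, 10, 100, 1000):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
print("formula verified for the sampled n")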


algebra precalculus - Simplify a rational expression

Suppose I want to simplify this expression:

$$\frac{bx-bc-dx+ad}{a-c}$$
More specifically, I want to minimize the number of operations. Counting each addition, subtraction, multiplication, and division, the expression requires 9 operations to compute.



The expression $$b+\frac{(x-a)(b-d)}{a-c}$$ is equivalent to the previous expression, yet requires only 6 operations to compute.



So, my real question is, starting with the top expression, how do I derive the bottom expression? I want to know step-by-step. I can't think of any factoring techniques that would help me here. I know how to go from the bottom expression to the top expression, but not the other way around.

summation - Simple proof by induction: $1^3 + 2^3 +3^3 +...+ n^3 = [ (n(n+1))/2 ]^2 $

I am rather illiterate when it comes to mathematics, I am afraid. In an effort to change that, I grabbed a copy of 'What is mathematics? : An elementary approach to ideas and methods' and have already encountered some difficulty. It seems silly to waste so much time trying to solve it myself, so I decided to ask for some help in solving and illuminating the taken steps, so that I can solve some more on my own. I shall review some of the basics while awaiting answers, and hope that my session tomorrow shall be more productive...




The problem is as follows:



Prove by induction that $1^3 + 2^3 + 3^3 + \cdots + n^3 = \left[\frac{n(n+1)}{2}\right]^2$.



As is, at the point I decided to seek help and look up material for review, I have taken the following steps:



solved for the base case: n=1, $1^3 = [2/2]^2 $



$$ 1 = 1^2, 1=1 $$




Then after proving the basis, I stated the assumption that:



$1^3 + 2^3 ... k^3 = [ (k(k+1)) / 2 ]^2$ is true.



Then I tried to solve for the next case, $(k+1)^3$:



$$ 1^3 + 2^3 + 3^3 ... k^3 + (k+1)^3 = [ ( (k(k+1)) / 2 ) + (k+1) ]^2 $$



$$ [ (k(k+1)) / 2 ]^2 + (k+1)^3 = [ (k(k+1) + 1(k+1) ) / 2 ]^2 $$




at this point after failing for a while and having spent quite some time looking up material, I sought help. I hope I'm going down the right path here, but I suspect I shall soon find out...



Help is greatly appreciated.
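For reference, one possible route through the remaining algebra is to factor $(k+1)^2$ out of the left-hand side:

$$\left[\frac{k(k+1)}{2}\right]^2 + (k+1)^3 = (k+1)^2\left[\frac{k^2}{4} + (k+1)\right] = (k+1)^2\cdot\frac{k^2+4k+4}{4} = \left[\frac{(k+1)(k+2)}{2}\right]^2,$$

which is the claimed formula with $n = k+1$.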

Monday, April 16, 2018

probability theory - Let $X$ be a positive random variable with distribution function $F$. Show that $E(X)=int_0^infty(1-F(x))dx$

Let $X$ be a positive random variable with distribution function $F$. Show that $$E(X)=\int_0^\infty(1-F(x))dx$$




Attempt



$\int_0^\infty(1-F(x))\,dx= \int_0^\infty(1-F(x))\cdot 1\,dx = x(1-F(x))\big|_0^\infty + \int_0^\infty x\,dF(x)$ (integration by parts)



$=0 + E(X)$ where boundary term at $\infty$ is zero since $F(x)\rightarrow 1$ as $x\rightarrow \infty$



Is my proof correct?

sequences and series - Proof of the formula $1+x+x^2+x^3+\cdots+x^n =\frac{x^{n+1}-1}{x-1}$





Proof to the formula $$1+x+x^2+x^3+\cdots+x^n = \frac{x^{n+1}-1}{x-1}.$$


Answer



Let $S=1+x+x^2+\cdots+x^n$ and suppose $x\neq 1$. Then $xS=x+x^2+\cdots+x^{n+1}=1+x+x^2+\cdots+x^n+(x^{n+1}-1)=S+x^{n+1}-1$. So $xS-S=x^{n+1}-1$, and therefore $S=\frac{x^{n+1}-1}{x-1}$.


linear algebra - Calculate the dimension of a space of operators



This is a homework question.



First, consider the ring of real polynomials in $n$ variables, $P_n=\mathbb{R}[x_1,\ldots,x_n]$ ($n\geq 2$), and let $S_n$ act on $P_n$ by algebra automorphisms, permuting the variables: $\sigma(x_i)=x_{\sigma(i)}$



For each $i=1,\ldots,n-1$, let $s_i$ be the transposition $(i\ i+1)$ (so $s_i(x_i)=x_{i+1}$ and $s_i(x_{i+1})=x_i$).




Finally, for each $i$, consider the operator $\Delta_i:P_n\to P_n$ given by
$$\Delta_i(f)=\frac{f-s_i(f)}{x_i-x_{i+1}}$$




(a) Show that $\Delta_i$ is a linear operator on $P_n$ that satisfies the rule
$$\Delta_i(fg)=\Delta_i(f)g+s_i(f)\Delta_i(g)$$
(b) Let $D_n$ be the subalgebra generated by $Id_P,\Delta_1,\ldots,\Delta_{n-1}$. Calculate the dimension of $D_3$.





This is what I have so far: It is easy to check that $\Delta_i$ is a well-defined operator and that the rule above is satisfied. Moreover, in the basis $\left\{x_1^{\alpha_1}\cdots x_n^{\alpha_n}\right\}$, we have the following (let's write $x(\alpha)=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ for $\alpha=(\alpha_1,\ldots,\alpha_n)$)
$$\Delta_i(x(\alpha))=x_1^{\alpha_1}\cdots x_{i-1}^{\alpha_{i-1}}x_i^{\min(\alpha_i,\alpha_{i+1})}x_{i+1}^{\min(\alpha_i,\alpha_{i+1})}\left(\sum_{k=0}^{|\alpha_i-\alpha_{i+1}|-1}x_i^kx_{i+1}^{|\alpha_i-\alpha_{i+1}|-1-k}\right)x_{i+2}^{\alpha_{i+2}}\cdots x_n^{\alpha_n}$$
(the sum is $0$ if $\alpha_i=\alpha_{i+1}$)



The problem is finding the dimension of $D_3$. I checked that $\Delta_i^2=0$, so $D_3$ is generated (as a vector space) by $Id_P,\Delta_1,\Delta_1\Delta_2,\Delta_1\Delta_2\Delta_1,\ldots,\Delta_2,\Delta_2\Delta_1,\Delta_2\Delta_1\Delta_2,\ldots$, but I couldn't find any other useful relation between $\Delta_1$ and $\Delta_2$ to obtain an upper bound for the dimension of $D_3$ (which I'm guessing is finite).


Answer



My previous answer was incorrect; here is the correct answer, obtained by trying to fix the previous one :)



Claim: $\Delta_1 \Delta_2 \Delta_1 = \Delta_2 \Delta_1 \Delta_2$.




Note that the polynomial $h = (x_1-x_2)(x_2-x_3)(x_3-x_1)$ has the property that $s_i(h) = -h$. It follows that $\Delta_i h^2 p = h^2 \Delta_i p$ for any $p$.



A slightly mundane computation shows that
$$\Delta_1 \Delta_2 h(x_1,x_2,x_3) x_1^a x_2^b x_3^c = - x_1^a x_3^b x_2^{c+1}+x_1^a x_2^b x_3^{c+1}+x_1^{a+1} x_3^b x_2^c-\\x_1^{a+1} x_2^b x_3^c+x_3^a x_1^b x_2^{c+1}-x_2^a x_1^b x_3^{c+1}+x_2^a x_1^{b+1} x_3^c-x_3^a x_1^{b+1} x_2^c =: g(x_1,x_2,x_3).$$



Now, grouping terms with the same power of $x_1$ together and applying $\Delta_2$ shows (with some work) that:
$$ \Delta_2 g(x_1,x_2,x_3) = x_1^a x_3^b x_2^c-x_1^a x_2^b x_3^c-x_3^a x_1^b x_2^c+x_2^a x_1^b x_3^c+x_3^a x_2^b x_1^c-x_2^a x_3^b x_1^c$$
A fully analogous argument gives the same answer if we start with $\Delta_2 \Delta_1 h(x_1,x_2,x_3) x_1^a x_2^b x_3^c$. It follows that:
$$ \Delta_1 \Delta_2 \Delta_1 h(x_1,x_2,x_3) x_1^a x_2^b x_3^c = \Delta_2 \Delta_1 \Delta_2 h(x_1,x_2,x_3) x_1^a x_2^b x_3^c.$$
It follows that $\Delta_1 \Delta_2 \Delta_1 h p = \Delta_2 \Delta_1 \Delta_2 h p$ for any polynomial $p$. Thus, also $\Delta_1 \Delta_2 \Delta_1 h^2 p = \Delta_2 \Delta_1 \Delta_2 h^2 p$. We can now bring out $h^2$, and conclude that $\Delta_1 \Delta_2 \Delta_1 = \Delta_2 \Delta_1 \Delta_2$ on all of $P_3$.




It follows immediately (together with OP's work) that $ \Delta_1\Delta_2\Delta_1\Delta_2 = \Delta_2\Delta_1\Delta_2\Delta_1 =0$. It remains to check that $\mathrm{Id},\Delta_1,\Delta_2,\Delta_1 \Delta_2, \Delta_2 \Delta_1, \Delta_1\Delta_2\Delta_1$ are linearly independent. They are, and it is enough to look at some low degree polynomials to check that. So, the sought dimension is $6$.
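For readers who want to double-check the relation $\Delta_1\Delta_2\Delta_1=\Delta_2\Delta_1\Delta_2$ by machine, here is a minimal sympy sketch that verifies it on a few sample monomials:

from sympy import symbols, cancel

x1, x2, x3 = symbols('x1 x2 x3')
xs = [x1, x2, x3]

def delta(f, i):
    # divided difference: (f - s_i(f)) / (x_i - x_{i+1}), with i zero-indexed
    swapped = f.subs([(xs[i], xs[i + 1]), (xs[i + 1], xs[i])], simultaneous=True)
    return cancel((f - swapped) / (xs[i] - xs[i + 1]))

for a, b, c in [(3, 1, 0), (2, 2, 5), (4, 0, 1)]:
    f = x1**a * x2**b * x3**c
    lhs = delta(delta(delta(f, 0), 1), 0)  # Delta_1 Delta_2 Delta_1 f
    rhs = delta(delta(delta(f, 1), 0), 1)  # Delta_2 Delta_1 Delta_2 f
    assert cancel(lhs - rhs) == 0
print("relation holds on the sampled monomials")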


complex numbers - $z=e^{2\pi i/5}$ solves $1+z+z^2+z^3+z^4=0$


What is the best way to verify that $$1+z+z^2+z^3+z^4=0$$ given $z=e^{2\pi i/5}$?


I tried using Euler's formula before substituting this in, but the work got messy real fast.


Answer




Clearly(?) $z^5=1$ and $z\ne 1$. Also, $$z^ 5-1=(z-1)(1+z+z^2+z^3+z^4)$$
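A quick numerical confirmation in Python:

import cmath

z = cmath.exp(2j * cmath.pi / 5)
print(abs(1 + z + z**2 + z**3 + z**4))  # ~ 1e-16, i.e. zero up to rounding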


Sunday, April 15, 2018

improper integrals - Convergence of $\int_0^\infty \sin(t)/t^\gamma \,\mathrm{d}t$




For what values of $\gamma\geq 0$ does the improper integral $$\int_0^\infty \frac{\sin(t)}{t^\gamma} \mathrm{d}t$$ converge?




In order to avoid two "critical points" $0$ and $+\infty$ I've thought that it would be easier to test the convergence of the sum (is this coherent?):
$$
\int_0^1 \frac{\sin(t)}{t^\gamma}\mathrm{d}t +
\int_1^\infty \frac{\sin(t)}{t^\gamma}\mathrm{d}t.

$$
For the second integral, it converges if $\gamma > 1$ (comparison) and also converges if $0 <\gamma \leq 1$. I'm stuck on proving the last part and the fact that the first integral converges for $\gamma < 2$. Any help would be appreciated. Thanks in advance.



PD: I've checked the answers for this question but I would not like to solve this integral using $(n\pi,(n+1)\pi)$ intervals.


Answer



I was not going to answer, but the previous answers left me a bit anxious for $t$ near $\infty$.



Integrate by parts to get
$$
\int_1^\infty\frac{\sin(t)}{t^\gamma}\,\mathrm{d}t

=\left.\frac{-\cos(t)}{t^\gamma}\right]_1^\infty
-\gamma\int_1^\infty\frac{\cos(t)}{t^{\gamma+1}}\,\mathrm{d}t
$$
and both converge at $\infty$ when $\gamma\gt0$.



Of course, as the previous answers have said
$$
\int_0^1\frac{\sin(t)}{t^\gamma}\,\mathrm{d}t
$$
converges when $\gamma\lt2$ by comparison with $\dfrac{t}{t^\gamma}=\dfrac1{t^{\gamma-1}}$.




This shows that the interval of convergence is $(0,2)$.


probability - Expected value of the minimum (discrete case)



Maybe related to this question



In the comments of this question they say that it gets easier if the variables are identically and independently distributed.
But I don't see how, because in my case the variable is discrete.



Here is my problem :

I toss 4 dice and keep the 3 best results. What is the expected value of the result?



I think tossing 4 dice and keeping the 3 best is the same as tossing 4 dice and removing the minimum.




  • Let X be the result of a standard die.

  • Let Y be tossing 4 dice and keeping the 3 best



Is that correct: $E(Y) = 4\cdot E(X) - E(\min)$?




So how do I calculate $E(\min)$?
I know that if the variable were uniform on $[0,1]$ I could have started with $F_Y = 1 - ( 1-F_X )^p$, where $p$ is the number of dice I toss, but here the variable is discrete, so I don't know where to start.



Generalization: how do I calculate the expected value of the minimum of $k$ realizations of a discrete random variable on $[0,n]$?



It's been a while since I studied probability, so my basic calculation may be wrong. Also, English is not my mother tongue, so please forgive my mistakes.




edit : spelling mistakes


Answer



For clarity, suppose that the dice have ID numbers $1,2,3,4$. Let $X_i$ be the result on die $i$. Let $Y$ be the sum of the three largest of the $X_i$, and let $W$ be the minimum of the $X_i$.



Then $Y=X_1+X_2+X_3+X_4-W$. By the linearity of expectation, it follows that
$$E(Y)=E(X_1)+E(X_2)+E(X_3)+E(X_4)-E(W).$$
The linearity of expectation is a very useful result. Note that linearity always holds: independence is not required.



The expectation of the minimum can be calculated by first finding the distribution of the minimum $W$.




The minimum is $1$ unless the dice all show a number $\ge 2$. The probability of this is $1-\left(\frac{5}{6}\right)^4$. We rewrite this as $\frac{6^4-5^4}{6^4}$.



The minimum is $2$ if all the dice are $\ge 2$ but not all are $\ge 3$. The probability of this is $\frac{5^4-4^4}{6^4}$.



The minimum is $3$ if all results are $\ge 3$ but not all are $\ge 4$. This has probability $\frac{4^4-3^4}{6^4}$.



And so on. Now use the ordinary formula for expectation. We get that the expectation of $W$ is
$$\frac{1}{6^4}\left(1(6^4-5^4)+ 2(5^4-4^4)+3(4^4-3^4)+4(3^4-2^4)+5(2^4-1^4)+6(1^4-0^4) \right).$$
We leave you the task of computing. Before computing, simplify!




Generalization: Suppose we toss $k$ "fair" $(n+1)$-sided dice, with the numbers $0$ to $n$ written on them. For $i=1$ to $k$, let $X_i$ be the number showing on the $i$-th die. Let $S$ be the sum of the dice. Then $S=X_1+\cdots+X_k$. The expectation of $X_i$ is $\frac{0+1+\cdots +n}{n+1}$. By the usual expression for the sum of consecutive integers, $E(X_i)=\frac{n}{2}$ and therefore $E(S)=\frac{kn}{2}$.



The analysis of the minimum $W$ goes along the same lines as the earlier one. The probability that the minimum is $j$ is $\frac{(n+1-j)^k -(n-j)^k}{(n+1)^k}$. If we use the ordinary formula for expectation, and simplify, we find that
$$E(W)=\frac{1^k+2^k+\cdots+n^k}{(n+1)^k}.$$



A nice way to find $E(W)$: The following is a useful general result. Let $X$ be a random variable that only takes non-negative integer values. Then
$$E(X)=\sum_{i=1}^\infty \Pr(X\ge i).$$
We apply that to the case of the random variable $W$ which is the minimum of $X_1,\dots,X_4$. The probability that $W\ge i$ in that case is $\frac{(7-i)^4}{6^4}$.



The same procedure works for the more general situation you asked about.
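Since the sample space has only $6^4 = 1296$ equally likely outcomes, everything above is easy to confirm by exhaustive enumeration; a minimal Python sketch:

from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=4))
EW = Fraction(sum(min(r) for r in rolls), len(rolls))           # E(min)
EY = Fraction(sum(sum(r) - min(r) for r in rolls), len(rolls))  # E(best three)
print(EW)                             # 2275/1296
print(EY == 4 * Fraction(7, 2) - EW)  # True: E(Y) = 4 E(X) - E(W)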



real analysis - Convergence from $L^p$ to $L^infty$



If $f$ is a function such that $f \in L^\infty \cap L^{p_0}$, where $L^\infty$ is the space of essentially bounded functions and $0 < p_0 < \infty$, show that $\| f\| _{L^p} \to \|f \| _{L^\infty}$ as $p \to \infty$, where $\|f\|_{L^\infty}$ is the least $M \in \mathbb{R}$ such that $|f(x)| \le M$ for almost every $x \in X$.



The hint says to use the monotone convergence theorem, but I can't even see any pointwise convergence of functions. Any help is appreciated.


Answer



Hint: Let $M\lt\|f\|_{L^\infty}$ and consider $$ \int_{E_M}\left|\frac{f(x)}{M}\right|^p\,\mathrm{d}x $$ where $E_M=\{x:|f(x)|\gt M\}$. I believe the Monotone Convergence Theorem works here.


Further Hint: $M\lt\|f\|_{L^\infty}$ implies $E_M$ has positive measure. On $E_M$, $\left|\frac{f(x)}{M}\right|^p$ tends to $\infty$ pointwise. MCT says that for some $p$, the integral above exceeds $1$.
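To see the hint in action on a concrete example, take $f(x)=x$ on $[0,1]$, so that $\|f\|_{L^p}=(p+1)^{-1/p}$ and $\|f\|_{L^\infty}=1$; a small Python check of the convergence:

# ||f||_p for f(x) = x on [0, 1] is (1/(p+1))^(1/p), which tends to 1
for p in (1, 2, 10, 100, 1000):
    print(p, (1 / (p + 1)) ** (1 / p))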


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...