Thursday, January 9, 2020

analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$?




I know that every function is surjective when its codomain is restricted to its image, but I am not sure whether I can do what I did.

sequences and series - How do you prove $\sum_{n=0}^\infty \frac{(-1)^n}{n!} = \frac{1}{e}$?

Prove that

$$\sum_{n=0}^\infty \frac{(-1)^n}{n!} = \frac{1}{e}$$



One of the solutions to a problem I was looking at had this sum and directly concluded $1/e$ from it. I don't understand how you get that; I used my calculator and it indeed equals $1/e$, but I'm interested in how you derive this by hand.
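One standard route, sketched: substitute $x=-1$ into the Taylor series of the exponential function, which converges for all real $x$:

$$e^{x}=\sum_{n=0}^\infty \frac{x^n}{n!}\qquad\Longrightarrow\qquad e^{-1}=\sum_{n=0}^\infty \frac{(-1)^n}{n!}.$$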

Wednesday, January 8, 2020

Discarding random variables in favor of a domain-less definition?

Probabilists don't care what exactly the domain of a random variable is. Here is an extreme comment that exemplifies this: "You should soon see, if you learn more stochastic stuff, that specifying the underlying probability space is a BAD IDEA (what happens when you add a new head/tails?), and quite useless."



If specifying the underlying probability space domain of a random variable (in short "domain" from now on) is such a useless, bad idea for most scenarios, I'm wondering why no one in the long history of probability theory and statistics has come up with a better, slicker definition of random variables that avoids this inelegant we-have-a-domain-but-we-won't-talk-about-it situation?



It seems the only reason to keep the domain $\Omega$ is to enable a coupling of random variables, so that we can speak of their independence. But can't such a coupling be realized in a more elegant way than using a space that we don't want to define in the first place?




As soon as I read texts that go beyond very elementary probability, it seems to me that such domains are treated like the crazy uncle at family parties: never shown, but everyone knows he's there.

logarithms - Inequality $\log x\le \frac{2}{e} \, \sqrt{x}$



The inequality $$\log x \le \frac{2}{e} \, \sqrt{x},$$ where $\log x$ denotes the natural logarithm, is used in the proof of Theorem 4.7 in Apostol's Analytic Number Theory.



It seems that the inequality is not very difficult to prove using calculus. We could simply find maximum/minimum of some function like $f(x)= \frac2e \sqrt{x} - \log x$ or $g(x)=\frac{\log x}{\sqrt x}$.




Are there some other methods how this inequality can be proved? Is there a way in which this inequality can be seen more directly, without having to calculate critical points of some auxiliary function?


Answer



With the substitution $x = e^{2(u+1)}$ we have $\log x = 2(u+1)$ and $\sqrt{x} = e^{u+1}$, so the inequality
$$
\log x \le \frac{2}{e} \, \sqrt{x}
$$ becomes $2(u+1) \le 2e^{u}$, that is,
$$
e^u \ge 1 + u \tag{*}
$$
which is a well-known estimate for the exponential function.

Equality holds if and only if $u = 0$, corresponding to
$x = e^2$ in the original inequality.



$(*)$ is trivial for $u \le -1$ and can for example be shown using
the Taylor series for $u > -1$. It also follows – as Jack said in a comment –
from the convexity of the exponential function: the graph lies above
the tangent line at $u = 0$.



(This approach was inspired by Jack D'Aurizio's answer.)


real analysis - How can you prove that a function has no closed form integral?



I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:




  • addition/subtraction


  • multiplication/division

  • raising to powers and roots

  • trigonometric functions

  • exponential functions

  • logarithmic functions



which, when differentiated, gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.



What sort of techniques are used to prove statements like this? What is this branch of mathematics called?







Merged with "How to prove that some functions don't have a primitive" by Ismael:



Sometimes we are told that some functions like $\dfrac{\sin(x)}{x}$ don't have an indefinite integral, or that it can't be expressed in term of other simple functions.



I wonder how we can prove that kind of assertion?


Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative




$$\int f(x)\exp(g(x)) \, \mathrm dx$$



can be expressed in terms of elementary functions if and only if there exists some rational function $h$ such that it is a solution to the differential equation:



$$f = h' + hg'$$



$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.



I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf




Liouville's original paper:




Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.




Michael Spivak's book on Calculus also has a section with a discussion of this.


abstract algebra - Greatest common divisor of polynomials over $\mathbb{Q}$

I have two polynomials: $f: x^3 + 2x^2 - 2x -1$ and $g: x^3 - 4x^2 + x + 2$. I have to do two things: find $\gcd(f,g)$ and find polynomials $a,b$ such that $\gcd(f,g) = a \cdot f + b \cdot g$. I have guessed their greatest common divisor: $(x-1)$, but I did it by looking for roots of both polynomials, and now I am stuck. How do I find the greatest common divisor using the Euclidean algorithm? I started with $f(x) = g(x) + 3(2x^2 - x - 1)$, but then things go nuts, and I can't use Bézout's identity to bring it all back to $\gcd(f,g) = a \cdot f + b \cdot g$.

gcd and lcm - using extended euclidean algorithm to find s, t, r


I have been stuck for many hours and I don't understand how to use the extended Euclidean algorithm. I calculated the gcd using the regular algorithm, but I don't get how to calculate it properly to obtain $s,t,r$.


I understand that from the gcd I can get a linear combination representation, but I don't get how to do it using the algorithm.


How can I find $s,t,r$ for $a=154, b= 84$?



If it is of any importance, the algorithm I am referring to is from the book Cryptography: Theory and Practice.


Thank you very much. I've become hopeless because of this.


Answer



Using the Euclidean algorithm, we have


$$ \begin{align} 154&=1\cdot84+70\tag{1}\\ 84&=1\cdot70+14\tag{2}\\ 70&=5\cdot14+0 \end{align} $$ The last nonzero remainder is 14. So $\gcd(154,84)=14$. Now $$ \begin{align*} 14&=84-70\qquad\text{(using 2)}\\ &=84-(154-84)\qquad\text{(using 1)}\\ &=2\cdot84-1\cdot154 \end{align*} $$ So $14=2\cdot84-1\cdot154$.
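A minimal Python sketch of the same $s,t,r$ bookkeeping (the function name and layout are illustrative, not from the book), which reproduces the computation above:

    def egcd(a, b):
        """Return (r, s, t) with s*a + t*b == r == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        r, s, t = egcd(b, a % b)
        # unwind: r = s*b + t*(a - (a//b)*b) = t*a + (s - (a//b)*t)*b
        return r, t, s - (a // b) * t

    r, s, t = egcd(154, 84)
    print(r, s, t)                    # 14 -1 2, i.e. 14 = -1*154 + 2*84
    assert s * 154 + t * 84 == r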


abstract algebra - The degree of the field extension $\mathbb{Q}(\sqrt{5},w): \mathbb{Q}$





Compute the degree of the field extension $$\mathbb{Q}(\sqrt{5},w): \mathbb{Q} ,$$ where $ w = e^{2\pi i / 3}$.




I consider the tower of fields $\mathbb{Q} \subset \mathbb{Q}(\sqrt{5}) \subset \mathbb{Q}(\sqrt{5})(w)$. Now, $\mathbb{Q}(\sqrt{5}) : \mathbb{Q}$ has degree $2$, so I am trying to find the degree of $\mathbb{Q}(\sqrt{5})(w) : \mathbb{Q}(\sqrt{5})$; it is $\leq 2$ since $w$ satisfies $w^2+w+1 = 0$. I am trying to show that it is exactly $2$ - I know that $w \notin \mathbb{Q}(\sqrt{5})$ but I don't see how I can justify it is exactly $2$ from here.


Answer



Since $[\Bbb Q(\sqrt 5)(w) : \Bbb Q(\sqrt 5)]$ is $\leq 2$ (as $w^2 + w + 1 = 0$), it is either $1$ or $2$, and it is $1$ iff $w \in \Bbb Q(\sqrt 5)$. Since $\Bbb Q(\sqrt 5) \leq \Bbb R$ but $w \not \in \Bbb R$, we must have $[\Bbb Q(\sqrt 5)(w) : \Bbb Q(\sqrt 5)] = 2$.


Tuesday, January 7, 2020

Evaluate the integral: $\int_0^{\infty}\frac{\tan^{-1}(tx)}{x\left(1+x^2\right)} \mathrm{d}x$



I've been trying to evaluate the integral for a while now, and I've been unable to find it anywhere...
I tried substituting $\tan^{-1}(tx)$ as $u$ but got nowhere... I have done dozens of other substitutions...

I've been told to use the properties of Gaussian integrals and the gamma function, but can't seem to figure out a way to bring in $\mathrm{e}$...
A few hints that point in the right direction to think will really help!



It would be really helpful if you could solve this using only the properties of Gaussian and gamma functions!


Answer



Let:
$$
I(t)=\int_0^\infty\frac{\arctan tx}{x(1+x^2)}dx.
$$

Then

$$
I'(t)=\int_0^\infty\frac{1}{(1+t^2x^2)(1+x^2)}dx=
\frac1{t^2-1}\int_0^\infty\left[\frac{t^2}{1+t^2x^2}-\frac{1}{1+x^2}\right]dx\\
=\frac1{t^2-1} \left[t\arctan(tx)-\arctan x\right]_0^\infty=\frac\pi2\frac{|t|-1}{t^2-1}=\frac\pi2\frac1{|t|+1}.
$$

Finally integrating the last expression over $t$ one obtains:
$$
I(t)=I(0)+\frac\pi2\text {sgn}(t)\log(|t|+1)=\text {sgn}(t)\frac\pi2\log(|t|+1).
$$
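A quick numerical sanity check of the closed form (a sketch; it assumes the mpmath library, but any quadrature routine would do):

    # check I(t) = sgn(t) * (pi/2) * log(|t|+1) at a sample point
    from mpmath import mp, quad, atan, log, pi, inf

    mp.dps = 20
    t = 2.5
    lhs = quad(lambda x: atan(t * x) / (x * (1 + x**2)), [0, inf])
    rhs = (pi / 2) * log(t + 1)
    print(lhs, rhs)   # both approximately 1.9679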


calculus - Finding $\lim_{x\to +\infty}(\frac{x+\ln x}{ x-\ln x})^{\frac{x}{\ln x}}$



Find $\lim_{x\to +\infty}(\frac{x+\ln x}{ x-\ln x})^{\frac{x}{\ln x}}$. I tried using l'Hôpital's rule together with the continuity of the exponential function. I also tried using a Taylor expansion, with no success. What should I do? Thank you.



Answer



\begin{align*}
\lim_{x\to+\infty}\left(\frac{x+\log x}{x-\log x}\right)^{\frac x{\log x}}
=&\lim_{x\to+\infty}\left(1+\frac{2\log x}{x-\log x}\right)^{\frac x{\log x}}\\
=&\lim_{x\to+\infty}\left(1+\frac{2}{\frac{x}{\log x}-1}\right)^{\frac x{\log x}}\\
=&\lim_{x\to+\infty}\left(1+\frac{2}{\frac{x}{\log x}-1}\right)^{\frac x{\log x}-1}\left(1+\frac{2}{\frac{x}{\log x}-1}\right)\\
=&e^2
\end{align*}

since $\frac{x}{\log x}-1\to\infty$, the first factor tends to $e^2$ (as $(1+\frac2u)^u\to e^2$ when $u\to\infty$) and the second factor tends to $1$.


elementary number theory - Prove that if $d$ is a common divisor of $a$ and $b$, then $\gcd(a,b)/d = \gcd(a/d,b/d)$.




Prove that, given $a,b,d$ integers, if $d$ is a common divisor of $a$ and $b$, then $\frac{\gcd(a,b)}{d}=\gcd(\frac{a}{d},\frac{b}{d})$.





So far I have what was given:



$a=dk$, $b=dk'$



Now I give the $\gcd$ a name, $\gcd(a,b)=t$, and so I know that




  • $t$ divides $a$, so $a=t\cdot t'$


  • $t$ divides $b$, so $b=t\cdot t''$

  • $d$ divides $t$, so $t=rd$

  • $t$ can be expressed as the linear combination $t=ax +by$



That is all the information that I have.



I don't know how to start. I tried to start by



$\frac{\gcd(a,b)}{d}=\frac{t}{d}=\dots$ but I keep getting nowhere.





How should I proceed?



Answer



We know that there exist $x,y \in \mathbb{Z}$ s.t.



$$\gcd(a,b) = ax + by$$



Now divide both sides by $d$ to get that:




$$\frac{\gcd(a,b)}{d} = \frac ad x + \frac bd y$$



From here we conclude that $\gcd\left(\frac ad, \frac bd\right)$ divides $ \frac{\gcd(a,b)}{d}$



Similarly we have that there exist $s,t \in \mathbb{Z}$ s.t



$$\gcd\left(\frac ad, \frac bd\right) = \frac ad s + \frac bd t$$



Multiply both sides by $d$ to get that




$$d \cdot\gcd\left(\frac ad, \frac bd\right) = a s + b t$$



From here we get that $\gcd(a,b)$ divides $d \cdot\gcd\left(\frac ad, \frac bd\right)$



From both relations we conclude that $\gcd\left(\frac ad, \frac bd\right)= \frac{\gcd(a,b)}{d}$, which completes the proof.


Monday, January 6, 2020

integration - Closed formula for integral



Mathematica can evaluate



$$\int\limits_0^\infty \frac{ \ln^{p} {x} \sin^{q} {x}}{x^r}dx $$



for all $p,q,r \in \mathbb{N}$ when $q \geq r>c, c \equiv q-r \ (\textrm{mod} \ 2) $, but can't give a general formula. Any references would be appreciated. Also, any hints on how to do the $p=q=r=1$ case? All results are some combination of a fraction, the Euler–Mascheroni constant, and logarithms.



So, is there a formula in spirit of :




[image: the conjectured general formula was shown as an image in the original post]



UPDATE



I changed the notation a bit here to match Raymond Manzoni's first reference; the integral converges for $p\geq q$:




  1. for $p$ odd (even) and $q$ even (odd)




$$\int\limits_0^\infty \frac{\ln^{r}\! x \sin^{p} \! x }{x^q} \mathrm{d}x = \frac{(-1)^{r} (-1)^{\lfloor \frac{p-q}{2} \rfloor} }{ \Gamma(q) 2^p} \sum\limits_{i=0}^{r} \binom{r}{i} \frac{(-1)^{r-i} (r-i)!}{2^{r-i}} \sum\limits_{k=0}^{\lfloor \frac{p}{2} \rfloor - \text{mod}(2p-q,2)} \! \! \! \! \! \! \! \! \! \! 2(-1)^k \binom{p}{k}(p-2k)^{q-1} \sum\limits_{t=0}^{\lfloor \frac{r-i+1}{2} \rfloor} \frac{\text{Li}_{2t}(-1)\ln^{r-i+1-2t}\left(\frac{1}{(p-2k)^2}\right) }{(r-i+1-2t)!} \sum\limits_{\lambda \vdash i} \psi_0^{m_1}(q) \psi_1^{m_2}(q)\cdot \ldots\cdot \psi_{i-1}^{m_{l(\lambda)} }(q) \frac{\binom{i}{m_1,m_2,\ldots,m_{l(\lambda)}}}{m_1! m_2! \cdot \ldots\cdot m_{l(\lambda)}!} (-1)^{m_1+m_2+\ldots +m_{l(\lambda)}}$$




  2. for $p$ odd (even) and $q$ odd (even)



$$\int\limits_0^\infty \frac{\ln^{r}\! x \sin^{p} \! x }{x^q} \mathrm{d}x = \frac{(-1)^{r} (-1)^{ \frac{2p-q + \text{mod}(p,2)}{2}} }{ \Gamma(q) 2^{p-1}} \sum\limits_{i=0}^{r} \binom{r}{i} (-1)^{r-i} (r-i)! \sum\limits_{k=1}^{ \frac{p +\text{mod}(p,2)}{2}} \! \! \! \! \! \! 2(-1)^k \binom{p}{\frac{p +\text{mod}(p,2)}{2}- k}(2k-\text{mod}(p,2))^{q-1} \sum\limits_{t=0}^{\lfloor \frac{r-i+1}{2} \rfloor} \frac{\text{Li}_{2t}(-1) \sum\limits_{m=0}^{\lfloor \frac{r-i-2t}{2} \rfloor} \binom{r-i-2t+1}{2m+1} \left( \frac{\pi}{2} \right)^{2m+1} \ln^{r-i-2t-2m} \left( \frac{1}{2k -\text{mod}(p,2) } \right) }{(r-i-2t+1)!} \sum\limits_{\lambda \vdash i} \psi_0^{m_1 }(q) \psi_1^{m_2 }(q)\cdot \ldots\cdot \psi_{i-1}^{m_{l(\lambda)} }(q) \frac{\binom{i}{m_1,m_2,\ldots,m_{l(\lambda)}}}{m_1! m_2! \cdot \ldots\cdot m_{l(\lambda)}!} (-1)^{m_1+m_2+\ldots +m_{l(\lambda)}}$$



where $\text{Li}$ is polylogarithm, $\psi$ is polygamma, and the last sum is over all partitions $\lambda = \left(1^{m_1} 2^{m_2} \ldots \right) $ and $l(\lambda)$ is partition length.




Feel free to tidy up and improve if you want; I don't have the will right now.


Answer



Here is the proof and Mathematica code to play with. It took me some time to write all this. Thank you again Raymond Manzoni for that nice Edwards book(s) reference; I couldn't have done it without it. It would be nice to see a proof for identities (37) and (38) in the paper; one proof is in Edwards' book but it's not too elegant. One way is to reduce it to the hypergeometric identity http://functions.wolfram.com/07.31.03.0032.01 but I don't know how to prove that identity. Also the final expressions are equivalent to the ones I wrote above without proof, but more elegant.



You can download it here: download link



and the .tex file here: download link



This code is also much nicer than the last one; you can download the Mathematica notebook here: download link



probability - Expected Value of Die until I roll an even number



I roll a standard die repeatedly until I roll an even number. What is the probability that the last number rolled is $2$?



The probability of rolling an even number is $\frac12$, but how would you work out the probability that the last number rolled is $2$? Would it be the intersection of the two events?


Answer



By the principle of indifference, every even number is equally likely to be the last number rolled. Since a standard die has $6$ sides, there are $3$ even numbers, and their equal probabilities must add up to $1$, so each of them must be $\frac13$.
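A small simulation sketch (illustrative only) that backs up the indifference argument:

    # roll a fair die until an even number appears; record that number
    import random

    def last_roll():
        while True:
            r = random.randint(1, 6)
            if r % 2 == 0:
                return r

    trials = 100_000
    count = sum(last_roll() == 2 for _ in range(trials))
    print(count / trials)   # close to 1/3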



probability - Continuous and Discrete random variable distribution function



I have a very basic question in probability. It pertains to the difference between a continuous random variable distribution function and a discrete one. This question has confused me many times.


Suppose a continuous r.v. has distribution function $F_X(x)$. Then, as $F_X$ is both right- and left-continuous, I can say that


(i) $P(a \leq X \leq b)=F_X(b)-F_X(a)$


(ii) $P(a < X \leq b)=F_X(b)-F_X(a)$


(iii) $P(a \leq X < b)=F_X(b)-F_X(a)$


(iv) $P(a < X < b)=F_X(b)-F_X(a)$


Equations (i) to (iv) hold simply because $F_X$ is continuous. However, how to compute these probabilities in the discrete case has always confused me.


Suppose a discrete r.v. has distribution function $F_X(x)$. Then, as $F_X$ is ONLY right-continuous, I can say that


(i) $P(a \leq X \leq b)=F_X(b)-F_X(a) + P(X=a)$


(ii) $P(a < X \leq b)=F_X(b)-F_X(a)$



(iii) $P(a \leq X < b)=???$


(iv) $P(a < X < b)=???$


Please don't downvote just because it's easy. If an answer already exists, do point it out to me.


Thanks in advance.


Answer



The CDF of a discrete random variable $X$ is continuous everywhere except at those discrete points $x_i$ for which $P\{X=x_i\} > 0$. At these points of discontinuity, the limit from the right exists as does the limit from the left, but these two limits have different values.


Let $F_X(a^{-}) = \lim_{x\uparrow a} F_X(x)$ denote the limiting value of $F_X(x)$ as $x$ approaches $a$ from the left (or from below) and let $F_X(a^+)= \lim_{x\downarrow a} F_X(x)$ denote the limit of $F_X(x)$ as $x$ approaches $a$ from the right (or from above).



  • If $F_X(x)$ is continuous at $x=a$, then $F_X(a^-)=F_X(a^+)$ and the value of $F_X(a)$ is this limit. It does not matter whether we call it the limit from the right or the limit from the left, but for consistency with other cases, it is convenient to insist that $F_X(a)$ is the limiting value of $F_X(x)$ as $x$ approaches $a$ from the right: $F_X(a) = F_X(a^+)$





  • As you correctly state, the value of $F_X(x)$ at $x=a$ is defined to be $F_X(a^+)$. If $F_X(x)$ is discontinuous at $x=a$, then since $F_X(x)$ is a nondecreasing function, we have that $F_X(a^+) > F_X(a^-)$ and the difference $F_X(a^+) - F_X(a^-)$ is the value of $P\{X=a\}$. That is, $$\text{For each real number}~ a, P\{X=a\} = F_X(a^+) - F_X(a^-). \tag{1}$$ Note that this result is applicable to continuous random variables as well and leads to the conclusion that $P\{X=a\} = 0$ for every real number $a$: a statement that confuses many beginners in probability theory.



If you like this notation and decide that you will use it, then note that your four probabilities differ only in whether $P\{X=a\}$ and $P\{X=b\}$ are included in the probability of interest. For example, your


(i) $P\{a \leq X \leq b\} = P\{a < X \leq b\} + P\{X = a\} = [F_X(b)-F_X(a)] + P\{X=a\}$ can be expressed via $(1)$ as $$P\{a \leq X \leq b\} = F_X(b) - F_X(a^-) = F_X(b^+)-F_X(a^-)$$ and similarly for the other cases.
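For completeness, the same bookkeeping settles the two cases left open in the question:
$$P\{a \leq X < b\} = F_X(b^-) - F_X(a^-), \qquad P\{a < X < b\} = F_X(b^-) - F_X(a^+).$$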


Showing convergence of an infinite series?

I'm trying to show whether the below series converges or diverges, and I have very little clue on how to do it. I know about the Comparison Test, but I can't think of a sequence $b_n > a_n$ to perform a comparison with.




Essentially, my question is more about how I should approach this problem: is there a less "guess-based" approach for finding a $b_n$?
$$\sum_{n=2}^\infty a_n =\sum_{n=2}^\infty\frac{1}{\log(\log(n))\log(n)n}$$
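For reference, a sketch of one standard approach: for $n\ge 16$ the summand is positive and decreasing, and by the integral test (substituting $u=\log\log x$)
$$\int_{16}^{\infty}\frac{\mathrm{d}x}{x\log x\,\log\log x}=\Big[\log\log\log x\Big]_{16}^{\infty}=\infty,$$
so the series diverges, and no convergent comparison series $b_n$ can exist.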

probability - Two players are playing a game by rolling a single die.



Two players are playing a game by rolling a single die. The first player rolls the die first. If the top surface of the die shows 1 or 2, then he wins. If not, then the second player rolls the die. If the top surface of the die shows 3, 4, 5 or 6, then he wins. Otherwise, the game continues until there is a winner.



(i) Who will be the most likely winner?




(ii) Find the probability that the first player wins the game given that the game has ended.



(iii) What is the most likely length of the game (number of rolls) to produce a winner?



My answer and solution, want to do some checking, correct me if any mistakes.



i)
Let A = first player wins the game within one round
Let B = second player wins the game within one round

Probability that the first player wins within one round:
P(A) = 1/6+1/6
= 1/3
Probability that the second player wins within one round:
P(B) = P(A’) × (1/6+1/6+1/6+1/6)
= 2/3 × 4/6
= 4/9



Hence, the second player will be the most likely winner.




ii)



Probability that the first player wins the game given that the game has ended
P(A1) + P(A2) + P(A3) + ...
= 1/3 + (1/3)(2/9) + (1/3)(2/9)^2 + (1/3)(2/9)^3 + ...



By using the formula for the sum of an infinite geometric series,
S = a/(1-r)
a = 1/3, r = 2/9
S = (1/3)/(1 - 2/9)

= 3/7



iii.



P(C) = P(A) +P(B)
= 1/3 + ( 2/3 ) ( 2/3 )
= 1/3 + 4/9
= 7/9
This probability is smaller than 1.
Therefore, one is the most likely length of the game to produce a winner.



Answer



For (i) I think you need to consider the players winning after $k$ rounds with $k=1,2,3,...$



Note that for player 1 to win in $k$ rounds, it means that he lost his first $k-1$ rounds, player 2 lost his first $k-1$ rounds, and player 1 won his $k$th round. By independence, this is ($A_k=$"player 1 wins after k rounds", $B_k$ is defined similarly for player 2):



$$P(A_k)=(4/6)^{k-1}(2/6)^{k-1}(2/6)=(2/9)^{k-1}(1/3)$$



Hence $$P(\text{1 wins})=P\Big(\bigcup_{k=1}^\infty A_k\Big)=\sum_{k=1}^\infty P(A_k)=(1/3)\sum_{k=1}^\infty (2/9)^{k-1}={1 \over 3}\cdot{1 \over 1- {2 \over 9}}={3 \over 7}$$



Now for $B_k$ to occur player 1 must lose his first $k$ rounds, player $2$ his first $k-1$, and player 2 must win his $k$th round, which gives:




$$P(B_k)=(4/6)^k(2/6)^{k-1}(4/6)$$



$$P(\text{2 wins})={4 \over 9}\sum_{k=1}^\infty (2/9)^{k-1}={4 \over 9}\cdot{1 \over 1- {2 \over 9}}={4 \over 7}$$



It follows that $P(\text{1 wins})=3/7<4/7=P(\text{2 wins})$: the second player is the more likely winner, confirming your conclusion in (i).



ii) is fine.



For iii) again, the probability of a game won by player 1 in round $k$ is $(2/9)^{k-1}(1/3)$ and by player 2 in round $k$ is $(4/9)(2/9)^{k-1}$. Hence longer games are less probable, and since $4/9>1/3$ the single most likely outcome is player 2 winning in the first round, i.e. a game of length $2$ (two rolls).
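A quick Monte Carlo sketch (illustrative, not from the original answer) that checks the values $3/7\approx 0.4286$, $4/7\approx 0.5714$, and the modal game length of $2$ rolls:

    import random
    from collections import Counter

    def play():
        rolls = 0
        while True:
            rolls += 1
            if random.randint(1, 6) <= 2:      # player 1 rolls 1 or 2
                return 1, rolls
            rolls += 1
            if random.randint(1, 6) >= 3:      # player 2 rolls 3, 4, 5 or 6
                return 2, rolls

    results = [play() for _ in range(100_000)]
    print(sum(w == 1 for w, _ in results) / len(results))   # about 3/7
    print(Counter(r for _, r in results).most_common(3))    # length 2 most common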



measure theory - Show that for every $\epsilon > 0 $ there exists $h \in \mathcal{L}^1(X)$ non-negative and $\delta > 0$ such that:



I am working through some practice questions, and I think I have gotten the first two parts, but I am having trouble deriving the third part:




Let $(X,\mathcal{A},\mu)$ be a finite measure space. Suppose that
$(f_k)$ is a sequence of measurable functions $X \rightarrow
\mathbb{R}$ such that for every $\epsilon > 0$ there exists $h \in

\mathcal{L}^1(X)$ non-negative such that:



$$ \int_{[|f_k|\ge h]} |f_k|d\mu< \epsilon$$



for all $k \in \mathbb{N}$. Where $[|f_k|\ge h] = \{ x \in X : |f_k(x)| \ge h(x) \} $



(1) Show that there exists $P>0$ such that:



$$ \int_X |f_k|d\mu \le P$$ for all $k \in \mathbb{N}$




(2) Show that for every $A \in \mathcal{A}$ and every $h \in \mathcal{L}^1(X)$ non-negative:



$$ \int_A |f_k|d\mu \le \int_{[|f_k|\ge h]} |f_k|d\mu + \int_A h
d\mu$$
(3) Using part (2), show that for every $\epsilon > 0 $ there exists $h \in
\mathcal{L}^1(X)$ non-negative and $\delta > 0$ such that:



$A \in \mathcal{A}$ and $\int_A h d\mu < \delta \implies \int_A |f_k|d
\mu < \epsilon $ for all $k \in \mathbb{N}$





For part (1), I have written the integral on the left-hand side as a sum over the disjoint sets $[|f_k|\ge h]$ and $[|f_k| < h]$; the second integral is smaller than $\int_{[|f_k| < h]} h $, since it is taken precisely over the $x$'s for which $h > |f_k|$. And since we know the integral of $h$ is finite, this yields the result.



For part (2), I have done a similar construction, splitting the problem into two cases, where $A$ and $[|f_k|\ge h]$ intersect and where they do not. I am able to derive the inequalities. Is this the right approach to this problem?



Part(3) is where I am having the most trouble, by part(2) it seems that I can immediately derive that $\int_A |f_k|d\mu < \epsilon + \delta $, but how to show it is just $< \epsilon$?



Any help would be very gratefully received!


Answer




Part 3: Given $\epsilon>0$, there is $h$ such that
$$
\int_{[|f_k|\ge h]} |f_k|d\mu< \epsilon
$$
For this $h$, by absolute continuity of the integral (see the post "for each $\epsilon >0$ there is a $\delta >0$ such that whenever $m(A)<\delta$, $\int_A f(x)dx <\epsilon$"):



given $\epsilon$, there is an $\eta>0$ such that for any $A$ with $\mu(A)<\eta$,
$$
\int_A h \,d\mu < \epsilon
$$

So
$$
\int_A |f_k|\,d\mu=\int_{A\cap [|f_k|\ge h]} |f_k|\,d\mu+\int_{A\cap [|f_k|< h]} |f_k|\,d\mu\leqslant \int_{[|f_k|\ge h]} |f_k|\,d\mu+\int_A h \,d\mu<2\epsilon
$$


Sunday, January 5, 2020

functions - Is 4.99999......... exactly equal to 5?

I'm a student of 10th std. Recently our teacher asked a Question that


"Is 4.999...equal to 5 or not?" Everyone said that is isn't equal or it is approximately equal. Teacher too agreed to that. But I did't agree. I opposed the teacher as I think it is precisely equal to 5. I also prove but then too she isn't satisfied. Can you please explain what is right and what is wrong in this case?Please Justify.



   Proof that I showed : let x = 4.999...  (a)
therefore, 10x = 49.999...
10x - x = 49.999... - 4.999...
9x = 45
x = 45/9
x = 5 (b)
hence (a) = (b),
and therefore, ***4.999... = 5***
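For reference, the same conclusion via a geometric series:
$$4.999\ldots = 4 + 9\sum_{k=1}^{\infty} 10^{-k} = 4 + 9\cdot\frac{1/10}{1-1/10} = 4 + 1 = 5.$$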

How to prove $T(n) = T\left(\frac n4\right) + T\left(\frac{3n}4\right) + n$ is $\mathcal O(n\log n)$ using induction?

How would you go about proving the recursion
$$T(n) = T\left(\frac n4\right) + T\left(\frac{3n}4\right) + n$$is $\mathcal O(n\log n)$ using induction?



Thanks!
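A sketch of the standard guess-and-check argument (assuming a suitable base case and ignoring floors): suppose $T(m)\le c\,m\log m$ for all $m<n$. Then
$$T(n)\le c\,\frac n4\log\frac n4 + c\,\frac{3n}4\log\frac{3n}4 + n = c\,n\log n-\beta c\,n+n,\qquad \beta=\tfrac14\log 4+\tfrac34\log\tfrac43>0,$$
so $T(n)\le c\,n\log n$ for any constant $c\ge 1/\beta$, which closes the induction.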

Discrete logarithm tables for the fields $\Bbb{F}_8$ and $\Bbb{F}_{16}$.



The smallest non-trivial finite field of characteristic two is
$$
\Bbb{F}_4=\{0,1,\beta,\beta+1=\beta^2\},
$$

where $\beta$ and $\beta+1$ are primitive cubic roots of unity, and zeros of the
polynomial $x^2+x+1$. Here the multiplication table is given once we know
how to write the non-zero elements as powers of $\beta$. Extend the idea to fields of eight and sixteen elements.



Those fields can be constructed as
$$
\Bbb{F}_8=\Bbb{F}_2[\alpha], \quad\text{and}\quad \Bbb{F}_{16}=\Bbb{F}_2[\gamma],
$$
where $\alpha$ has minimal polynomial $x^3+x+1$, and $\gamma$ has minimal polynomial $x^4+x+1$, both irreducible in $\Bbb{F}_2[x]$.




Task:




Calculate the tables for base $\alpha$ discrete logarithm of $\Bbb{F}_8$ and base $\gamma$ discrete logarithm of $\Bbb{F}_{16}$.



Answer



A (base-$g$) discrete logarithm of a finite field $\Bbb{F}_q$, is a function
$$
\log_g:\Bbb{F}_q^*\to\Bbb{Z}_{q-1}
$$

defined via the equivalence $g^j=x\Leftrightarrow \log_g(x)=j$. For this to be well-defined it is imperative that $g$ is a primitive element, i.e. a generator of $\Bbb{F}_q^*$, and that the domain of $\log_g$ is the
residue class ring of integer modulo $q-1$, as $g^{q-1}=g^0=1$.



It immediately follows that the discrete logarithm satisfies the familiar rules
$$
\begin{aligned}
\log_g(x\cdot y)&=\log_g(x)+\log_g(y),\\
\log_g(x^n)&=n\cdot\log_g(x)
\end{aligned}
$$

for all elements $x,y\in \Bbb{F}_q^*$ and all integers $n$. The arithmetic
on the r.h.s. is that of the ring $\Bbb{Z}_{q-1}$.






It is known that when $q=8$, a zero $\alpha$ of $x^3+x+1$ generates $\Bbb{F}_8^*$. This is proven by the following calculation, where we repeatedly
use the fact that we are working in characteristic two, and that we have the
relation $\alpha^3=\alpha+1$.
$$
\eqalign{

\alpha^0&=&&=1,\\
\alpha^1&=&&=\alpha,\\
\alpha^2&=&&=\alpha^2,\\
\alpha^3&=&&=1+\alpha,\\
\alpha^4&=&\alpha\cdot\alpha^3=\alpha(1+\alpha)&=\alpha+\alpha^2,\\
\alpha^5&=&\alpha\cdot\alpha^4=\alpha(\alpha+\alpha^2)=\alpha^2+\alpha^3=\alpha^2+(1+\alpha)&=1+\alpha+\alpha^2,\\
\alpha^6&=&\alpha\cdot\alpha^5=\alpha(1+\alpha+\alpha^2)=\alpha+\alpha^2+\alpha^3=
\alpha+\alpha^2+(1+\alpha)&=1+\alpha^2,\\
\alpha^7&=&\alpha\cdot\alpha^6=\alpha(1+\alpha^2)=\alpha+\alpha^3=\alpha+(1+\alpha)&=1.
}$$




We see from the end results in the last column that all the non-zero
quadratic polynomials evaluated at $\alpha$ appear. This is yet another confirmation of the fact that $\alpha$ is a primitive element.



The discrete logarithm is used to replace the cumbersome multiplication (and raising
to an integer power) of the field with more familiar integer arithmetic. Exactly like the old-timers used logarithm tables to replace the error-prone multiplication with the easier addition.



For example
$$
(1+\alpha)(1+\alpha+\alpha^2)=\alpha^3\cdot\alpha^5=\alpha^8=\alpha^7\cdot\alpha=\alpha.

$$
Note that both the base-$\alpha$ discrete logarithms and its inverse mapping are needed. I generate such a table as a part of the program initialization, whenever I carry out extensive computer-aided calculations involving a finite field. The above table gives the discrete logarithm when read from right to left, and the inverse mapping (that we actually produced above) when read from left to right.
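A small Python sketch of such an initialization, representing elements of $\Bbb{F}_8$ as 3-bit integers (reduction by $x^3+x+1$, i.e. the bit pattern 0b1011; the function names are illustrative):

    # build the base-alpha log/antilog tables for GF(8)
    MOD = 0b1011          # x^3 + x + 1
    ALPHA = 0b010         # the element alpha = x

    def gf8_mul(a, b):
        """Carry-less multiply, then reduce modulo x^3 + x + 1."""
        p = 0
        for i in range(3):
            if (b >> i) & 1:
                p ^= a << i
        for i in range(4, 2, -1):          # the product has degree <= 4
            if (p >> i) & 1:
                p ^= MOD << (i - 3)
        return p

    antilog, log = {}, {}
    x = 1
    for j in range(7):
        antilog[j], log[x] = x, j
        x = gf8_mul(x, ALPHA)

    # the worked example: (1+alpha)(1+alpha+alpha^2) = alpha^(3+5 mod 7) = alpha
    assert antilog[(log[0b011] + log[0b111]) % 7] == ALPHA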






Similarly with $q=16$ we use $\gamma$, a zero of $x^4+x+1$. This time the table looks like
$$
\begin{aligned}
\gamma^0&=&1\\
\gamma^1&=&\gamma\\

\gamma^2&=&\gamma^2\\
\gamma^3&=&\gamma^3\\
\gamma^4&=&\gamma+1\\
\gamma^5&=\gamma(\gamma+1)=&\gamma^2+\gamma\\
\gamma^6&=\gamma(\gamma^2+\gamma)=&\gamma^3+\gamma^2\\
\gamma^7&=\gamma^4+\gamma^3=&\gamma^3+\gamma+1\\
\gamma^8&=(\gamma^4)^2=&\gamma^2+1\\
\gamma^9&=\gamma(\gamma^2+1)=&\gamma^3+\gamma\\
\gamma^{10}&=\gamma^4+\gamma^2=&\gamma^2+\gamma+1\\
\gamma^{11}&=&\gamma^3+\gamma^2+\gamma\\

\gamma^{12}&=\gamma^4+\gamma^3+\gamma^2=&\gamma^3+\gamma^2+\gamma+1\\
\gamma^{13}&=\gamma^4+\gamma^3+\gamma^2+\gamma=&\gamma^3+\gamma^2+1\\
\gamma^{14}&=\gamma^4+\gamma^3+\gamma=&\gamma^3+1\\
(\gamma^{15}&=\gamma^4+\gamma=&1)
\end{aligned}
$$



Thus for example
$$
(\gamma^3+1)(\gamma^2+1)=\gamma^{14}\cdot\gamma^8=\gamma^{22}=\gamma^7=\gamma^3+\gamma+1.

$$






As another example of the use of this table I want to discuss the problem of factorizing $x^4+x+1$ over $\Bbb{F}_4$. To that end we need to first identify a copy of $\Bbb{F}_4$ as a subfield of $\Bbb{F}_{16}$. We just saw that $\gamma$ is of order fifteen. Therefore $\gamma^5=\gamma^2+\gamma$ and $\gamma^{10}=\gamma^2+\gamma+1$ are third roots of unity. It is then trivial to check that we have a homomorphism of fields $\sigma:\Bbb{F}_4\to\Bbb{F}_{16}$ given by $\sigma(\beta)=\gamma^5$. Note that composing this (from either end) by the Frobenius automorphism gives an alternative embedding $\beta\mapsto \gamma^{10}$.



Basic Galois theory tells us that
$$
x^4+x+1=(x-\gamma)(x-\gamma^2)(x-\gamma^4)(x-\gamma^8)
$$

as we get the other roots by repeatedly applying the Frobenius automorphism $F:x\mapsto x^2$. Here we see that the factor
$$
(x-\gamma)(x-\gamma^4)=x^2+x(\gamma+\gamma^4)+\gamma^5=x^2+x+\gamma^5
$$
is stable under the automorphism $F^2$, and thus (as we also see directly!) has its
coefficients in the subfield $\sigma(\Bbb{F}_4)$. The same holds for the remaining factor
$$
(x-\gamma^2)(x-\gamma^8)=x^2+x(\gamma^2+\gamma^8)+\gamma^{10}=x^2+x+\gamma^{10}.
$$
Pulling back the effect of $\sigma$ we get the desired factorization

$$
x^4+x+1=(x^2+x+\beta)(x^2+x+\beta+1)
$$
in $\Bbb{F}_4[x]$.






Here is a local version of similar tables for $\Bbb{F}_{256}$


Saturday, January 4, 2020

real analysis - Where does this sequence $\sqrt{7}$, $\sqrt{7+ \sqrt{7}}$, $\sqrt{7+\sqrt{7+\sqrt{7}}}$, ... converge?




The given sequence is $\sqrt{7}$,$\sqrt{7+ \sqrt{7}}$,$\sqrt{7+\sqrt{7+\sqrt{7}}}$,.....and so on.



The sequence is increasing, so to converge it must be bounded above. It looks like the terms will not exceed $7$. The given options are




  1. ${1+\sqrt{33}}\over{2}$


  2. ${1+\sqrt{32}}\over{2}$



  3. ${1+\sqrt{30}}\over{2}$


  4. ${1+\sqrt{29}}\over{2}$




How should I proceed from here?
Thanks for any help.


Answer



Trick: Let $X = \sqrt{ 7 + \sqrt{ 7 + ... } } $. We have $X = \sqrt{ 7 + X } $ and so $X^2 = 7 + X $. Now you solve the quadratic equation.
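For reference, finishing the computation (once convergence of the increasing, bounded sequence is established): $X^2 = X + 7$ with $X>0$ gives
$$X=\frac{1+\sqrt{1+28}}{2}=\frac{1+\sqrt{29}}{2},$$
which is option 4.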


real analysis - Which of the following is uncountable?


Which of the following is uncountable?



$1.\{f|f:\{0,1\}\to\mathbb{Z}\}$


$2.\{f|f:\mathbb{Z}\to\{0,1\}\}$


My attempt: I think the first is countable because we can make a bijection from it to $\mathbb{Z}^2$; am I correct? About the second option I do not have any idea.


Thanks.


Answer



Hint for 2: $\{f\mid f:\mathbb{Z}\to\{0,1\}\}$ has the same cardinality as the power set of $\mathbb{Z}$.


elementary number theory - $\gcd(b^x - 1, b^y - 1, b^z - 1,\ldots) = b^{\gcd(x, y, z,\ldots)} -1$




Dear friends,


Given that $b$, $x$, $y$, $z$, $\ldots$ are integers greater than 1, how can we prove that $$ \gcd (b ^ x - 1, b ^ y - 1, b ^ z - 1 ,\ldots)= b ^ {\gcd (x, y, z, \ldots)} - 1 $$ ?


Thank you!


Paulo Argolo

algebra precalculus - Easy way to solve $w^2 = -15 + 8i$


Solve $w^2=−15+8i$, where $w$ is complex.




Normally, I would convert this into polar coordinates, but the problem is that it is too slow.



What is another alternative?
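One quick route, sketched: write $w=a+bi$ with $a,b$ real and compare real and imaginary parts:
$$a^2-b^2=-15,\qquad 2ab=8.$$
The second equation gives $b=4/a$; substituting into the first yields $a^4+15a^2-16=0$, i.e. $(a^2+16)(a^2-1)=0$, so $a=\pm1$, $b=\pm4$, and
$$w=\pm(1+4i),$$
which indeed squares to $-15+8i$.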

linear algebra - If $AB = I$ then $BA = I$: is my proof right?




I want to prove that for matrices $A,B \in M_n (\mathbb K)$ where $\mathbb K \in \{\mathbb R, \mathbb C, \mathbb H\}$ if $AB = I$ then $BA = I$.




My proof is really short so I'm not sure it's right:



If $AB = I$ then $(BA)B = B$ and therefore $BA=I$?


Answer



The implication $(BA)B=B \Rightarrow BA=I$ is a little quick and not always true...



But observe that
$$1=\det(AB)= \det(A)\det(B)$$
thus $B$ is invertible and it follows that
$$BA= BA(BB^{-1}) = B(AB)B^{-1}=BB^{-1}=I.$$



sequences and series - How to derive the formula for the sum of the first $n$ perfect squares?

How do you derive the formula for the sum of the series when $S_n = \sum_{j=1}^n j^2$?



The relationship $(n, S_n)$ can be determined by a polynomial in $n$. You are supposed to use finite differences to determine the degree and then use a calculator to derive the function. But how?
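For reference, one self-contained derivation (telescoping, rather than finite differences): summing $(k+1)^3-k^3=3k^2+3k+1$ over $k=1,\dots,n$ gives
$$(n+1)^3-1 = 3S_n + 3\cdot\frac{n(n+1)}{2} + n \quad\Longrightarrow\quad S_n=\frac{n(n+1)(2n+1)}{6}.$$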

Bézout's identity and Extended Euclidean algorithm for polynomials

I'm trying to find the $U, V$ such that $a(x)U + m(x)V = d(x)$.



\begin{align*}
a(x) &= x^4+1\\

m(x) &= x^3+3x+1\\
d(x) &= \gcd(a(x),m(x))
\end{align*}



I found $d(x)$ by using Euclidean algorithm
\begin{align*}
x^4+1 &= (x^3+3x+1) \cdot x -(3x^2+x-1)\\
x^3+3x+1 &= (-3x^2-x+1) \cdot (\frac{-x}{3}+\frac{1}{9}) + (\frac{31x}{9}+\frac{8}{9})\\
-3x^2-x+1 &= (\frac{31x}{9}+\frac{8}{9})\cdot (\frac{-27x}{31} - \frac{63}{961})+ \frac{1017}{961}\\
\end{align*}

so because the remainder is equal to a constant, $d(x)= 1$.



Then I try to solve the equation $a(x)U + m(x)V = d(x)$ by using the Extended Euclidean algorithm, and more precisely by using the table (shown here: https://www.numbertheory.org/php/euclid.html)



and with all the values put in, the table looks like this: https://imgur.com/a/8Wku6 (sorry, I don't know how to put tables into Stack Exchange)



I know the answer should be $(x^3+3x+1) \cdot (-\frac{31x^3}{113}+\frac{8x^2}{113}- \frac{13x}{113}+\frac{7}{113}) + (x^4+1) \cdot (\frac{31x^2}{113} - \frac{8x}{113} + \frac{106}{113}) = 1$



Where $$ U = (-\frac{31x^3}{113}+\frac{8x^2}{113}- \frac{13x}{113}+\frac{7}{113})$$
$$ V = (\frac{31x^2}{113} - \frac{8x}{113} + \frac{106}{113}) $$




But I don't know what I'm doing wrong with my Euclidean algorithm or Euclidean extended algorithm.



Thanks in advance
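A short SymPy sketch (assuming SymPy is available) that runs the extended Euclidean algorithm over $\mathbb{Q}[x]$ and can be used to cross-check the hand computation; the bookkeeping mirrors the integer case:

    from sympy import symbols, div, expand, simplify

    x = symbols('x')
    a = x**4 + 1
    m = x**3 + 3*x + 1

    # invariant maintained throughout: s*a + t*m == r
    r0, s0, t0 = a, 1, 0
    r1, s1, t1 = m, 0, 1
    while r1 != 0:
        q, r = div(r0, r1, x)                    # polynomial long division
        r0, s0, t0, r1, s1, t1 = (r1, s1, t1,
                                  r, expand(s0 - q*s1), expand(t0 - q*t1))

    # r0 is a nonzero constant, so normalize to make the gcd equal to 1
    U, V = expand(s0 / r0), expand(t0 / r0)      # U*a + V*m == 1
    assert simplify(U*a + V*m) == 1
    print(U, V)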

real analysis - non-continuous function satisfies $f(x+y)=f(x)+f(y)$


As mentioned in this link, for any $f$ on the real line $\mathbb{R}^1$ satisfying $f(x+y)=f(x)+f(y)$, $f$ continuous $\Leftrightarrow$ $f$ measurable.



But


how can we show there exists a non-measurable function satisfying $f(x+y)=f(x)+f(y)$?


I guess we may use the uniform boundedness principle and the fact that, under the above assumption, $f$ is continuous iff it is continuous at zero.


Thanks in advance!


Answer



Considering $\mathbb R$ as an infinite-dimensional $\mathbb Q$-vector space, any $\mathbb Q$-linear map that is not $\mathbb R$-linear will do. For example, one can extend the function $$f(x)=42a+666b\quad \text{ if } x=a+b\sqrt 2\text{ with }a,b\in \mathbb Q$$ defined on $\mathbb Q[\sqrt 2]$ to all of $\mathbb R$, if one extends the $\mathbb Q$-linearly independent set $\{1,\sqrt 2\}$ to a basis of $\mathbb R$. (This requires the Axiom of Choice, of course.)


Friday, January 3, 2020

calculus - Is there a general formula for $I(m,n)$?




Consider the integral




$$I(m,n):=\int_0^{\infty} \frac{x^m}{x^n+1}\,\mathrm dx$$



For $m=0$, a general formula is $$I(0,n)=\frac{\frac{\pi}{n}}{\sin\left(\frac{\pi}{n}\right)}$$



Some other values are $$I(1,3)=\frac{2\pi}{3\sqrt{3}}$$ $$I(1,4)=\frac{\pi}{4}$$ $$I(2,4)=\frac{\pi}{2\sqrt{2}}$$



For natural $m,n$ the integral exists if and only if $n\ge m+2$.





Is there a general formula for $I(m,n)$ with integers $m,n$ and $0\le m\le n-2$ ?



Answer



We can use contour integration to arrive at the general result. Note that



$$\begin{align}
\oint_C \frac{z^m}{z^n+1}\,dz&=2\pi i \text{Res}\left(\frac{z^m}{z^n+1}, z=e^{i\pi/n}\right)\\\\
&=-2\pi i \frac{e^{i\pi(m+1)/n}}{n}\tag 1
\end{align}$$




where $C$ is the "pie slice" contour comprised of (i) the real-line segment from $0$ to $R$, where $R>1$, (ii) the circular arc of radius $R$ that begins at $R$ and ends at $Re^{i2\pi/n}$, and (iii) the straight line segment from $Re^{i2\pi/n}$ to $0$.



Then, we can write



$$\oint_C \frac{z^m}{z^n+1}\,dz=\int_0^R \frac{x^m}{x^n+1}\,dx+\int_0^{2\pi/n}\frac{R^me^{im\phi}}{R^ne^{in\phi}+1}\,iRe^{i\phi}\,d\phi-\int_0^R \frac{x^me^{i2\pi m/n}}{x^n+1}e^{i2\pi/n}\,dx \tag 2$$



If $n>m+1$, then as $R\to \infty$, the second integral on the right-hand side of $(2)$ vanishes and we find that



$$\bbox[5px,border:2px solid #C0A000]{\int_0^\infty \frac{x^m}{x^n+1}\,dx=2\pi i\frac{e^{i\pi(m+1)/n}}{n(e^{i2\pi(m+1)/n}-1)}=\frac{\pi/n}{\sin(\pi(m+1)/n)}}$$
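A quick numerical check of the boxed formula (a sketch; it assumes the mpmath library):

    from mpmath import mp, quad, sin, pi, inf

    mp.dps = 20
    m, n = 2, 4
    numeric = quad(lambda x: x**m / (x**n + 1), [0, inf])
    closed = (pi / n) / sin(pi * (m + 1) / n)
    print(numeric, closed)   # both pi/(2*sqrt(2)) = 1.1107...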


Sum of series $\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots$

What is the sum of the series



$$\frac {4}{10}+\frac {4\cdot7}{10\cdot20}+ \frac {4\cdot7\cdot10}{10\cdot20\cdot30}+\cdots?$$



I know how to check whether a series is convergent or not. Is there any technique to find the sum of a series like this, where each term grows by a pattern?
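A sketch of one standard technique, assuming the visible pattern continues (numerators $4\cdot7\cdots(3n+1)$, denominators $10^n\,n!$): the general term is $\left(\tfrac43\right)_n\frac{(3/10)^n}{n!}$ in Pochhammer notation, and the binomial series $\sum_{n\ge0}(a)_n\frac{x^n}{n!}=(1-x)^{-a}$ with $a=\tfrac43$, $x=\tfrac3{10}$ gives
$$\sum_{n=1}^{\infty}\frac{4\cdot7\cdots(3n+1)}{10\cdot20\cdots(10n)}=\left(\frac{10}{7}\right)^{4/3}-1\approx 0.6089.$$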

algebra precalculus - Simplify the expression $\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k}{k}$





Simplify the expression $\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k}{k}$



My attempt: Using the formula $\binom{n+1}{k}=\binom{n}{k}+\binom{n}{k-1}$



$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots \binom{n+k-1}{k-1}+\binom{n+k}{k}$




=$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +\binom{n+k-1}{k-1}+(\binom{n+k-1}{k}+\binom{n+k-1}{k-1})$



=$\binom{n}{0}+\binom{n+1}{1}+\binom{n+2}{2}+\cdots +2\binom{n+k-1}{k-1}+\binom{n+k-1}{k}$



I can again use the same formula for the term $2\binom{n+k-1}{k-1}$, and in the next step for the term $3\binom{n+k-2}{k-2}$.



But I don't think this way the expression will get simplified.



Any help is appreciated.



Answer



First show that $\large{n\choose r}={{n-1}\choose {r-1}}+{{n-1}\choose r}$,



from which we get $\large{{n+r+1}\choose r}={{n+r}\choose r}+{{n+r}\choose {r-1}}$



By the same rule, $\large{{n+r}\choose {r-1}}={{n+r-1}\choose {r-1}}+{{n+r-1}\choose {r-2}}$



$\large{{n+r-1}\choose {r-2}}={{n+r-2}\choose {r-2}}+{{n+r-2}\choose {r-3}}$



$\vdots$



$\large{{n+3}\choose {2}}={{n+2}\choose {2}}+{{n+2}\choose {1}}$



$\large{{n+2}\choose {1}}={{n+1}\choose {1}}+{{n+1}\choose {0}}$



Adding, we get $\large{n\choose 0}+{{n+1}\choose 1}+...+{{n+r-1}\choose {r-1}}+{{n+r}\choose r}={{n+r+1}\choose r}$







Alternatively, we can fix any $r$ of the $(n+r+1)$ objects given. Label them $A_1,A_2,...A_r$. Now our choices of $r$ objects from the $(n+r+1)$ objects may or may not contain any or all of the set $\{A_1,A_2,...,A_r\}$.



Case I: It does not contain $A_1$



This will happen in $\large{{n+r}\choose r}$ ways as the $r$ things have to be chosen from the remaining $(n+r)$ things.



Case II: It contains $A_1$ but does not contain $A_2$.




This will happen in $\large{{n+r-1}\choose {r-1}}$ ways, because having chosen $A_1$ and rejected $A_2$, we have only $(n+r-1)$ things to choose from and we need only $(r-1)$.



$...$



Case r: It contains $A_1,A_2,...,A_{r-1}$ but does not contain $A_r$.



This will happen in $\large {{n+1}\choose 1}$ ways.



Case $(r+1)$: It contains $A_1,A_2,...,A_r$.




This will happen in $\large{n\choose 0}=1$ way.



Hence, $\displaystyle\sum_{k=0}^{r}{{n+k}\choose k}={{n+r+1}\choose r}$
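A two-line numerical check of the identity (sketch):

    from math import comb

    n, r = 6, 4
    assert sum(comb(n + k, k) for k in range(r + 1)) == comb(n + r + 1, r)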


probability - On average, how many friends would I need to have to have at least one friend's birthday every day?

I know that because of the birthday problem, even after 365 friends you're going to have a lot of doubles, and there's also an infinitesimal chance that even with infinitely many friends one day is left out. But I was curious how many friends you'd need on average to have every day represented (this is ignoring leap day and assuming birthdays are equally distributed). Or to generalize it further: given n unique boxes, and placing balls into them with an equal 1/n chance for each ball to go into any box, how many balls would you have to place on average before every box had at least one ball?
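For reference, this is the coupon collector's problem: with $n$ equally likely boxes, the expected number of balls needed is $nH_n=n\sum_{k=1}^n 1/k$. A sketch of the computation for $n=365$ (the number below comes from this formula, not a simulation):

    # expected friends needed to cover all 365 birthdays (coupon collector)
    n = 365
    H_n = sum(1 / k for k in range(1, n + 1))
    print(n * H_n)   # about 2364.6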

abstract algebra - How many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$?


I know that for every $n\in\mathbb{N}$, $n\ge 1$, there exists $p(x)\in\mathbb{F}_p[x]$ s.t. $\deg p(x)=n$ and $p(x)$ is irreducible over $\mathbb{F}_p$.



I am interested in counting how many such $p(x)$ there exist (that is, given $n\in\mathbb{N}$, $n\ge 1$, how many irreducible polynomials of degree $n$ exist over $\mathbb{F}_p$).



I don't have a counting strategy and I don't expect a closed formula, but maybe we can find something like "there exist $X$ irreducible polynomials of degree $n$ where $X$ is the number of...".


What are your thoughts ?


Answer




Theorem: Let $\mu(n)$ denote the Möbius function. The number of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ is the necklace polynomial $$M_n(q) = \frac{1}{n} \sum_{d | n} \mu(d) q^{n/d}.$$


(To get the number of irreducible polynomials just multiply by $q - 1$.)


Proof. Let $M_n(q)$ denote the number in question. Recall that $x^{q^n} - x$ is the product of all the monic irreducible polynomials of degree dividing $n$. By counting degrees, it follows that $$q^n = \sum_{d | n} d M_d(q)$$


(since each polynomial of degree $d$ contributes $d$ to the total degree). By Möbius inversion, the result follows.


As it turns out, $M_n(q)$ has a combinatorial interpretation for all values of $q$: it counts the number of aperiodic necklaces of length $n$ on $q$ letters, where a necklace is a word considered up to cyclic permutation and an aperiodic necklace of length $n$ is a word which is not invariant under a cyclic permutation by $d$ for any $d < n$. More precisely, the cyclic group $\mathbb{Z}/n\mathbb{Z}$ acts by cyclic permutation on the set of functions $[n] \to [q]$, and $M_n(q)$ counts the number of orbits of size $n$ of this group action. This result also follows from Möbius inversion.


One might therefore ask for an explicit bijection between aperiodic necklaces of length $n$ on $q$ letters and monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$ when $q$ is a prime power, or at least I did a few years ago and it turns out to be quite elegant.


Let me also mention that the above closed form immediately leads to the "function field prime number theorem." Let the absolute value of a polynomial of degree $d$ over $\mathbb{F}_q$ be $q^d$. (You can think of this as the size of the quotient $\mathbb{F}_q[x]/f(x)$, so in that sense it is analogous to the norm of an element of the ring of integers of a number field.) Then the above formula shows that the number of monic irreducible polynomials $\pi(n)$ of absolute value less than or equal to $n$ satisfies $$\pi(n) \sim \frac{n}{\log_q n}.$$


elementary number theory - Prove for all $n \in \mathbb N$, $\gcd(2n+1,9n+4)=1$

Question: Prove for all $n \in \mathbb N$, $\gcd(2n+1,9n+4)=1$



Attempt: I want to use Euclid's Algorithm because it seemed to be easier than what my book was doing which was manually finding the linear combination.




Euclid's Algorithm states: let $a,b \in \mathbb N$. By applying the Division Algorithm repeatedly, $\gcd(a,b) = r_j$ will be the last non-zero remainder. By the Well-Ordering Principle, there is a smallest element of the set of natural numbers less than or equal to $r_j$.



I have used long division, but since I can't get it to show up here, I will type what I've done.



Starting at $\gcd(2n+1,9n+4)$, I divide $9n+4$ by $2n+1$:



$\frac{9n+4}{2n+1}$



I can multiply $2n+1$ by $4$ and I would have a remainder of $n$, because $(9n+4)-4(2n+1) = 9n+4-8n-4=n$,




so we have $9n+4 = 4 \cdot (2n+1)+n$ if I apply Euclid's Algorithm.



For $\gcd(2n+1, n)$,



$\frac{2n+1}{n}$



I can multiply $n$ by $2$ and I will have only $1$ as the remainder, because $(2n+1)-2n = 1$



Therefore, we have $2n+1 = 2 \cdot n +1$




and $\gcd(n,1)$ which is $1$



Since the end result is $1, \gcd(2n+1,9n+4)=1$



I followed an example from this link



http://cms.math.ca/crux/v33/n5/public_page274-276.pdf



Am I doing this correctly?

Thursday, January 2, 2020

real analysis - Characterizing discontinuous derivatives


Apparently the set of discontinuity of derivatives is weird in its own sense. Following are the examples that I know so far:


$1.$ $$g(x)=\left\{ \begin{array}{ll} x^2 \sin(\frac{1}{x}) & x \in (0,1] \\ 0 & x=0 \end{array}\right.$$ $g'$ is discontinuous at $x=0$.


$2. $ The Volterra function defined on the ternary Cantor set is differentiable everywhere but the derivative is discontinuous on the whole of Cantor set ,that is on a nowhere dense set of measure zero.


$3.$ The Volterra function defined on the Fat-Cantor set is differentiable everywhere but the derivative is discontinuous on the whole of Fat-Cantor set ,that is on a set of positive measure, but not full measure.


$4.$ I have yet to find a derivative which is discontinuous on a set of full measure.


Some good discussion about this can be found here and here.


Questions:


1.What are some examples of functions whose derivative is discontinuous on a dense set of zero measure , say on the rationals?



2.What are some examples of functions whose derivative is discontinuous on a dense set of positive measure , say on the irrationals?


Update: One can find a function which is differentiable everywhere but whose derivative is discontinuous on a dense set of zero measure here.


Answer



There is no everywhere differentiable function $f$ on $[0,1]$ such that $f'$ is discontinuous at each irrational there. That's because $f',$ being the everywhere pointwise limit of continuous functions, is continuous on a dense $G_\delta$ subset of $[0,1].$ This is a result of Baire. Thus $f'$ can't be continuous only on a subset of the rationals, a set of the first category.


But there is a differentiable function whose derivative is discontinuous on a set of full measure.


Proof: For every Cantor set $K\subset [0,1]$ there is a "Volterra function" $f$ relative to $K,$ which for the purpose at hand means a differentiable function $f$ on $[0,1]$ such that i) $|f|\le 1$ on $[0,1],$ ii) $|f'|\le 1$ on $[0,1],$ iii) $f'$ is continuous on $[0,1]\setminus K,$ iv) $f'$ is discontinuous at each point of $K.$


Now we can choose disjoint Cantor sets $K_n \subset [0,1]$ such that $\sum_n m(K_n) = 1.$ For each $n$ we choose a Volterra function $f_n$ as above. Then define


$$F=\sum_{n=1}^{\infty} \frac{f_n}{2^n}.$$


$F$ is well defined by this series, and is differentiable on $[0,1].$ That's because each summand above is differentiable there, and the sum of derivatives converges uniformly on $[0,1].$ So we have


$$F'(x) = \sum_{n=1}^{\infty} \frac{f_n'(x)}{2^n}\,\, \text { for each } x\in [0,1].$$



Let $x_0\in \cup K_n.$ Then $x_0$ is in some $K_{n_0}.$ We can write


$$F'(x) = \frac{f_{n_0}'(x)}{2^{n_0}} + \sum_{n\ne n_0}\frac{f_n'(x)}{2^n}.$$


Now the sum on the right is continuous at $x_0,$ being the uniform limit of functions continuous at $x_0.$ But $f_{n_0}'/2^{n_0}$ is not continuous at $x_0.$ This shows $F'$ is not continuous at $x_0.$ Since $x_0$ was an arbitrary point in $\cup K_n,$ $F'$ is discontinuous on a set of full measure, as desired.


sequences and series - Closed form expressions for harmonic sums



It is well known that there are deep connections between harmonic sums (discrete infinite sums that involve generalized harmonic numbers) and poly-logarithms. Bearing this in mind we have calculated the following sum:
\begin{equation}
S_a(t) := \sum\limits_{m=1 \vee a} H_m \cdot \frac{t^{m+1-a}}{m+1-a}
\end{equation}
where $t\in (-1,1)$ and $a \in {\mathbb Z}$. The result reads:

\begin{eqnarray}
S_a(t) =
\left\{
\begin{array}{lll}
\frac{1}{2} [\log(1-t)]^2 + \sum\limits_{j=1}^{a-1} \frac{1}{j \cdot t^j} \left( \sum\limits_{m=1}^j \frac{t^m}{m} + (1-t^j) \log(1-t) \right) + Li_2(t) 1_{a\ge 1} & \mbox{if $a \ge 0$} \\
\frac{1}{2} [\log(1-t)]^2 -\sum\limits_{j=1}^{|a|} \frac{1}{j } \left( \sum\limits_{m=1}^j \frac{t^m}{m} + (1-t^j) \log(1-t) \right) & \mbox{if $a < 0$} \\
\end{array}
\right.
\end{eqnarray}
Unfortunately it took me a lot of time to derive and thoroughly check the result, even though all the calculations are at an elementary level. It is always helpful to use Mathematica. Indeed, for particular values of $a$, Mathematica "after long thinking" comes up with solutions; however, from those it is hard to find the generic result given above. Besides, in more complicated cases Mathematica just fails.




In view of the above, my question is the following. Can we prove that every infinite sum whose coefficients are a rational function of $m$ times products of positive powers of generalized harmonic numbers is always given in closed form by means of elementary functions and poly-logarithms? If this is not the case, can we give a counterexample?


Answer



Even though this is not conceived as an answer to your specific question, which concerns the class of functions needed to represent your sum, it also contributes to it, as it exhibits a broader class than you mentioned.



Also, I think it is an interesting result in itself when it comes to closed forms.



Compact closed expression



I have found that your sum




$$S_a(t) := \sum\limits_{m=1} ^{ \infty} H_m \cdot \frac{t^{m+1-a}}{m+1-a}\tag{1}$$



can be expressed in more compact closed forms. The first one I derived is the following; for the second one see the paragraph "Derivation".



$$f(a,t) = -\frac{1}{12} \left(12 a \, _4F_3(1,1,1,a+1;2,2,2;1-t)-12 a t \, _4F_3(1,1,1,a+1;2,2,2;1-t)-12 a \log (1-t) \, _3F_2(1,1,a+1;2,2;1-t)+12 a t \log (1-t) \, _3F_2(1,1,a+1;2,2;1-t)+6 \psi ^{(0)}(1-a)^2+12 \gamma \psi ^{(0)}(1-a)-6 \psi ^{(1)}(1-a)+6 \gamma ^2+\pi ^2\right)+\frac{1}{2} \log ^2(1-t)$$



It consists of hypergeometric, polygamma and log functions, and some well known constants abundant in this field.



The graph shows $f(a,t=1/2)$ as a function of $a$




[plot: $f(a,t=1/2)$ as a function of $a$]



Validity checks



I have checked the validity of both $f(a,t)$ and $f_{1}(a,t)$ by plotting them together with a partial sum of $S_a(t)$ as a function of $a$ for $t=1/2$. All three curves agree reasonably.



Unfortunately, my attempts to perform an independent validity check by comparing power series in $t$ failed. This might be due to difficulties in Mathematica and needs further study.



Derivation




We observe that the derivative of $S_a(t)$ with respect to $t$ is a simple function



$$\frac{\partial S_a(t)}{\partial t}=\sum _{m=1}^{\infty } H_m t^{m-a}=-\frac{t^{-a} \log (1-t)}{1-t}\tag{2}$$



Hence $S_a(t)$ is given by the integral



$$-\int_0^t \frac{u^{-a} \log (1-u)}{1-u} \, du\tag{3}$$



Mathematica gives for this integral the following expression




$$f_{1}(a,t) =\pi (-1)^{a-1} H_{a-1} \csc (\pi a)-\frac{1}{a^2}\left(\frac{1}{t-1}\right)^a \, _3F_2\left(a,a,a;a+1,a+1;\frac{1}{1-t}\right)-\frac{1}{a}\left(\frac{1}{t-1}\right)^a \log (1-t) \, _2F_1\left(a,a;a+1;\frac{1}{1-t}\right)$$



This is equivalent to $f(a,t)$ which I have derived first in the following more complicated manner.



Substituting $u\to 1-s$ in $(3)$ leads to



$$-\int_{1-t}^1 \frac{(1-s)^{-a} \log (s)}{s} \, ds\tag{4}$$



Expanding $(1-s)^{-a}$ into a binomial series, interchanging the summation and integration, doing the integral, and then the sum gives $f(a,t)$.




Mathematica expressions



In order to avoid possible typing errors, here are the Mathematica expressions for the functions obtained



    (* the compact closed form f[a,t] derived above *)
    f[a_, t_] := 1/2 Log[1 - t]^2 -
      1/12 (6 EulerGamma^2 + \[Pi]^2 +
         12 a HypergeometricPFQ[{1, 1, 1, 1 + a}, {2, 2, 2}, 1 - t] -
         12 a t HypergeometricPFQ[{1, 1, 1, 1 + a}, {2, 2, 2}, 1 - t] -
         12 a HypergeometricPFQ[{1, 1, 1 + a}, {2, 2}, 1 - t] Log[1 - t] +
         12 a t HypergeometricPFQ[{1, 1, 1 + a}, {2, 2}, 1 - t] Log[1 - t] +
         12 EulerGamma PolyGamma[0, 1 - a] +
         6 PolyGamma[0, 1 - a]^2 - 6 PolyGamma[1, 1 - a])

    (* the alternative closed form f1[a,t] returned by the integral (3) *)
    f1[a_, t_] := (-1)^(a - 1) \[Pi] Csc[a \[Pi]] HarmonicNumber[a - 1] -
      1/a^2 (1/(-1 + t))^a HypergeometricPFQ[{a, a, a}, {1 + a, 1 + a}, 1/(1 - t)] -
      1/a (1/(-1 + t))^a Hypergeometric2F1[a, a, 1 + a, 1/(1 - t)] Log[1 - t]

definite integrals - How to show that $\int_0^\infty \frac{1}{\sqrt {t(t^2+21t+112)}} dt=\frac{1}{4\pi\sqrt {7}}\Gamma(\frac{1}{7})\Gamma(\frac{2}{7})\Gamma(\frac{4}{7})$

I already know that

$$ \int \limits_0^\infty \frac{1}{\sqrt {t(t^2+21t+112)}} \mathrm{d}t=\frac{1}{4\pi\sqrt {7}}\Gamma(\frac{1}{7})\Gamma(\frac{2}{7})\Gamma(\frac{4}{7})$$



To get this answer, I substituted $u=\sqrt{t}$, then got



$$ \int \limits_0^\infty \frac{1}{\sqrt {t(t^2+21t+112)}} \mathrm{d}t=2\int \limits_0^\infty \frac{1}{\sqrt {u^4+21u^2+112}} \mathrm{d}u$$



$$2\int \limits_0^\infty \frac{1}{\sqrt {u^4+21u^2+112}} \mathrm{d}u=2\int \limits_0^\infty \frac{1}{\sqrt {u^2+\frac{21}{2}+\frac{\sqrt{7}}{2}i}\,\sqrt {u^2+\frac{21}{2}-\frac{\sqrt{7}}{2}i}} \mathrm{d}u$$



and it looked a lot like an incomplete elliptic integral of the first kind.




But, how to carry on?

calculus - Limits for sine and cosine functions




Recently I took a test where I was given these two limits to evaluate:
$\lim_\limits{h \to 0}\frac{\sin(x+h)-\sin{(x)}}{h}$ and $\lim_\limits{h \to 0}\frac{\cos(x+h)-\cos{(x)}}{h}.$ I used sine and cosine addition formulas and found the value of each limit individually, eventually canceling out $\sin x\cdot \frac1h$ and $\cos x\cdot \frac1h$ because I learned that we can evaluate limits piece by piece. As a result, I got $\cos x $ and $-\sin x$ as my two answers. However, my teacher marked it wrong saying that we cannot cancel $\sin{x}\cdot\frac1h$ or $\cos{x}\cdot\frac1h$ because those limits do not exist. Can someone please explain why this doesn't work? I thought that we can cancel those limits out since we never look at $0,$ just around $0,$ when evaluating these two limits.



Sine: $$\frac{\sin(x+h)-\sin(x)}h=\frac{\sin(x)\cos(h)+\sin(h)\cos(x)}h-\frac{\sin(x)}h$$



$$=\sin(x)\frac1h+\cos(x)-\sin(x)\frac1h=\cos(x)$$



Cosine: $$\frac{\cos(x+h)-\cos(x)}h=\frac{\cos(x)\cos(h)-\sin(x)\sin(h)}h-\cos(x)\frac1h$$




$$=\cos(x)\frac1h-\sin(x)\cdot1-\cos(x)\frac1h=-\sin(x)$$



Note: I am allowed to assume $\lim\limits_{h\to 0} \frac{\sin(h)}h=1,\ \lim\limits_{h\to 0} \frac{\cos(h)-1}h=0.$


Answer



You broke up the limits incorrectly. That can be done only when the individual limits exist. $\color{red}{\lim\limits_{h \to 0} \frac{\sin x}{h}}$ and $\color{red}{\lim\limits_{h \to 0}\frac{\cos x}{h}}$ do NOT exist.



Here is how you can solve the first limit properly.



$$\lim_{h \to 0}\frac{\sin(x+h)-\sin x}{h}$$




$$= \lim_{h \to 0}\frac{\color{blue}{\sin x\cos h}+\cos x\sin h\color{blue}{-\sin x}}{h}$$



$$= \lim_{h \to 0}\frac{\color{blue}{\sin x(\cos h-1)}+\cos x\sin h}{h}$$



$$= \lim_{h \to 0}\frac{\sin x(\cos h-1)}{h}+\lim_{h \to 0}\frac{\cos x\sin h}{h}$$



$$= \sin x\cdot\lim_{h \to 0}\frac{\cos h-1}{h}+\cos x\cdot\lim_{h \to 0}\frac{\sin h}{h}$$



Using $\lim\limits_{h \to 0} \frac{\sin h}{h} = 1$ and $\lim\limits_{h \to 0}\frac{\cos h-1}{h} = 0$, you get




$$= \sin x\cdot 0 + \cos x\cdot 1 = \cos x$$



Notice how the individual limits exist. Therefore, the procedure is correct. Can you use the same way to solve for the second limit?


Wednesday, January 1, 2020

summation - Prove that $1^3 + 2^3 + ... + n^3 = (1+ 2 + ... + n)^2$


This is what I've been able to do:


Base case: $n = 1$


$L.H.S: 1^3 = 1$


$R.H.S: (1)^2 = 1$


Therefore it's true for $n = 1$.


I.H.: Assume that, for some $k \in \Bbb N$, $1^3 + 2^3 + ... + k^3 = (1 + 2 +...+ k)^2$.


Want to show that $1^3 + 2^3 + ... + (k+1)^3 = (1 + 2 +...+ (k+1))^2$


$1^3 + 2^3 + ... + (k+1)^3$


$ = 1^3 + 2^3 + ... + k^3 + (k+1)^3$



$ = (1+2+...+k)^2 + (k+1)^3$ by I.H.


Annnnd I'm stuck. Not sure how to proceed from here on.


Answer



HINT: You want that last expression to turn out to be $\big(1+2+\ldots+k+(k+1)\big)^2$, so you want $(k+1)^3$ to be equal to the difference


$$\big(1+2+\ldots+k+(k+1)\big)^2-(1+2+\ldots+k)^2\;.$$


That’s a difference of two squares, so you can factor it as


$$(k+1)\Big(2(1+2+\ldots+k)+(k+1)\Big)\;.\tag{1}$$


To show that $(1)$ is just a fancy way of writing $(k+1)^3$, you need to show that


$$2(1+2+\ldots+k)+(k+1)=(k+1)^2\;.$$


Do you know a simpler expression for $1+2+\ldots+k$?



(Once you get the computational details worked out, you can arrange them more neatly than this; I wrote this specifically to suggest a way to proceed from where you got stuck.)
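For reference, with the closed form $1+2+\ldots+k=\frac{k(k+1)}{2}$ the remaining step is immediate:
$$2\cdot\frac{k(k+1)}{2}+(k+1)=k(k+1)+(k+1)=(k+1)^2\;.$$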


elementary number theory - Find $11^{644} \bmod 645$


Can someone just explain to me the basic process of what is going on here? I understand everything until we start adding $1$'s; after that it all goes to hell. I just need some guidance. The problem with the solution is attached.


Thanks in advance.


Find $11^{644} \bmod 645$


[image: the textbook's worked solution, not reproduced here]


Answer



Note that $645=3\cdot 5\cdot 43$. Then



$$11^{644}\equiv (-1)^{2\cdot 322}\equiv 1^{322}\equiv 1\pmod 3$$ $$11^{644}\equiv 1^{644}\equiv 1\pmod 5$$ For the last modulus, we should determine the order of $11\pmod{43}$. To this end we first try $11^q$ for $q\mid p-1$: $$11^2\equiv 35\equiv -8, 11^3\equiv -8\cdot 11\equiv -2, 11^7\equiv (-8)^2\cdot(-2)\equiv -128\equiv 1.$$ So with this $$11^{644}\equiv 11^{7\cdot 92}\equiv 1^{92}\equiv 1\pmod{43} $$ and so by the Chinese Remainder Theorem also $11^{644}\equiv 1\pmod {645}$
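A one-line sanity check with Python's built-in modular exponentiation (sketch):

    assert pow(11, 644, 645) == 1      # matches the CRT computation
    assert pow(11, 7, 43) == 1         # the order of 11 mod 43 divides 7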


calculus - Rudin's Chain Rule

Rudin's chain rule theorem goes like this:




Suppose $f$ is continuous on ${[a,b]}$, $f'(x)$ exists at some point $x\in [a,b], g$ is defined on an interval $I$ which contains the range of $f$, and $g$ is differentiable at the point $f(x)$. If $$h(t)=g(f(t)) \quad (a\le t \le b)$$ then $h$ is differentiable at $x$, and $$h'(x)=g'(f(x))f'(x)$$




First note that the interval $I$ is closed by Rudin's definition (he calls an open interval $(a,b)$ a segment). Also, Rudin defines differentiability to be in closed intervals, taking the left/right derivative at endpoints.




Since the interval $I$ taken affects the derivative $g'$ (for example, let $g(x) = |x|$, then the derivative can be $1$ or $-1$), I'm wondering if it's true that taking any interval $I$ containing the range of $f$ in which $g$ is (perhaps only left/right) differentiable at $f(x)$ is ok. And in case $g$ is only left-differentiable at $f(x)$ in $I$, we just take the left derivative to be $g'$. Will $h'(x)$ always be the same regardless of which interval $I$ we pick?



Lastly, aside to the question, I wonder if it is a good idea to define differentiability on closed intervals like Rudin does. It does seem to make theorems like this one more complicated. I've found many online sources like Wikipedia defining it on open sets only, but Rudin's book is considered a classic and Terence Tao also defines it the same way. There seems to be no consensus on this issue, so is there a "best practice"?
