Monday, December 31, 2018

calculus - Functions cannot be integrated as simple functions





Ever since I was a college student, I have been told that there are many functions that cannot be integrated in terms of simple functions. (I'll give the definition of simple functions at the end of the article.) As a TA for calculus now, I've been asked to integrate various functions, and certainly most of them are integrable (in the sense of simple functions). However, how can I know that a given function is not integrable in this sense, rather than merely beyond my ability to integrate? (There was one time when a single integral on the question sheet daunted all the TAs I asked.)


Does anyone know THEORETICAL REASONS why certain functions cannot be integrated in terms of simple functions? Could you refer me to a reference containing such material? Or could you show by example that a certain "good" function actually doesn't have a "good" integral? I think one famous example could be $\frac{\sin x}{x}$.


Simple functions: the functions obtained by sums (differences), products (quotients), and compositions of the following functions (as well as the functions generated by these operations): $x, \sin x , \cos x, \log x, \sqrt[n]{x}, e^x$.


Answer




A search for "integration in finite terms" will get you many useful results. This paper by Rosenlicht is a very good place to start.


Bibliographic details: Maxwell Rosenlicht, Integration in Finite Terms, American Mathematical Monthly 79 (1972) 963-972.


all prime numbers have irrational square roots




How can I prove that all prime numbers have irrational square roots?



My work so far: suppose that a prime $p = a \cdot a$; then $p$ is divisible by $a$. Contradiction. Did I begin correctly? How do I continue?


Answer



The standard proof for $p=2$ works for any prime number.


real analysis - Evaluate $\lim_{n\to\infty}\frac{x_n}{\sqrt n}$



Question: $x_1>0$, $x_{n+1}=x_n+\dfrac1{x_n}$, $n\in\Bbb N$. Evaluate $$\lim_{n\to\infty}\frac{x_n}{\sqrt n}.$$



What I know so far is that $\dfrac1{x_n}\le\dfrac12$ when $n\ge2$, that $\{x_n\}$ is monotonically increasing, and that $x_n\ge 2$ when $n\ge 2$.


I have tried to use the Stolz theorem, and I found I could not use Squeeze theorem.


Could you please give some instructions? Thank you!



Answer



We have $$x_{n+1}^2=\left(x_n+\frac1{x_n}\right)^2=x_n^2+\frac1{x_n^2}+2\implies x_{n+1}^2-x_n^2=\frac1{x_n^2}+2.$$


Obviously, $x_n$ is increasing and $x_n\to\infty$ as $n\to\infty$. Apply the Stolz theorem, \begin{align*} \left(\lim_{n\to\infty}\frac{x_n}{\sqrt n}\right)^2&=\lim_{n\to\infty}\frac{x_n^2}{n}\\ (\text{Stolz})&=\lim_{n\to\infty}\frac{x_n^2-x_{n-1}^2}{n-(n-1)}\\ &=\lim_{n\to\infty}\left(\frac1{x_{n-1}^2}+2\right)=0+2=2. \end{align*} $$\therefore \lim_{n\to\infty}\frac{x_n}{\sqrt n}=\sqrt 2.$$
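A quick numerical sanity check of this result (a sketch; the starting value $x_1$ is arbitrary):

```python
# Iterate x_{n+1} = x_n + 1/x_n and watch x_n / sqrt(n) approach sqrt(2).
import math

x, n = 0.5, 1          # any x_1 > 0 works
for _ in range(10**6):
    x += 1 / x
    n += 1

print(x / math.sqrt(n))  # ~1.4142136
print(math.sqrt(2))      # 1.4142135...
```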


Sunday, December 30, 2018

integration - Triangle inequality for integrals proof



I'm having a bit of trouble proving the following inequality, using a graphic proof.


$$ \left| \int_a^b f \right | \le \int_a^b |f| $$ It's difficult for me to imagine these sorts of problems so I'm quite lost. Thank you


Answer



Let us look at an example of $$\left| \int_a^b f \right | \le \int_a^b |f|$$


For $f(x) =x$ on the interval $[ -1,1]$ what is $ \int_a^b f $ ?


The answer is $\int _{-1}^{1} xdx = x^2/2|_{-1}^{1} =0$.


What about $|x|$ on the same interval? Now your function looks like a $V$, and it is defined as $f(x) = x$ for $x\ge 0$ and $f(x) = -x$ for $x\le 0$.


Now you take the integral and you come up with $$\int _{-1}^{1}|x|\,dx = \int _{-1}^{0}(-x)\,dx + \int _{0}^{1}x\,dx = 1.$$


Clearly $0 \le 1$.


The important point is that $f(x)$ may take negative values, but $|f(x)|$ only takes non-negative values.



elementary set theory - Proving a bijection(injection and surjection) over a function

I need some help proving bijections:



Suppose f is a function from $$ \mathbb R^2 \rightarrow \mathbb R^2$$



Defined by



$$f(x,y) = (ax-by,bx+ay)$$



Where a,b are numbers with $$ a^2 + b^2 \neq 0 $$




Prove that f is a bijection.



I understand that a function f is a bijection if it is both an injection and a surjection so I would need to prove both of those properties.



Could you give me a hint on how to start proving injection and surjection?



Thanks.

calculus - Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$



How to prove
$$\int_{0}^{\infty} \mathrm{e}^{-x^2}\, dx = \frac{\sqrt \pi}{2}$$


Answer




This is an old favorite of mine.
Define $$I=\int_{-\infty}^{+\infty} e^{-x^2} dx$$
Then $$I^2=\bigg(\int_{-\infty}^{+\infty} e^{-x^2} dx\bigg)\bigg(\int_{-\infty}^{+\infty} e^{-y^2} dy\bigg)$$
$$I^2=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}e^{-(x^2+y^2)} dxdy$$
Now change to polar coordinates
$$I^2=\int_{0}^{+2 \pi}\int_{0}^{+\infty}e^{-r^2} rdrd\theta$$
The $\theta$ integral just gives $2\pi$, while the $r$ integral succumbs to the substitution $u=r^2$
$$I^2=2\pi\int_{0}^{+\infty}e^{-u}du/2=\pi$$
So $$I=\sqrt{\pi}$$ and your integral is half this by symmetry



I have always wondered if somebody found it this way, or did it first using complex variables and noticed this would work.
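A quick numerical check (a sketch; truncating at $x=10$ is safe because the Gaussian tail beyond it is negligible):

```python
# Midpoint-rule approximation of the integral of e^{-x^2} over [0, ∞),
# compared with sqrt(pi)/2.
import math

N, b = 200_000, 10.0
h = b / N
total = h * sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(N))

print(total)                   # ~0.8862269...
print(math.sqrt(math.pi) / 2)  # 0.8862269...
```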


Prescribing norm and trace of elements in a finite field.

Let $\mathbb{F}_{q}$ be the finite field with $q$ elements, where $q$ is a prime power.



Let $n \geq 1$ be an integer and consider $\mathbb{F}_{q^n}|\mathbb{F}_{q}$.



There is a theorem that says the following:



Theorem: There is always an element $\alpha \in \mathbb{F}_{q^n}$ that is primitive and normal over $\mathbb{F}_{q}$.



We say that one can prescribe the norm and the trace of a primitive and normal (over $\mathbb{F}_{q}$) element $\alpha \in \mathbb{F}_{q^n}$ if, for every $a,b \in \mathbb{F}_{q}^\ast$, with $b$ primitive, there is a primitive and normal element $\alpha \in \mathbb{F}_{q^n}$ such that $Tr_{\mathbb{F}_{q^n}|\mathbb{F}_{q}}(\alpha) = a$ and $N_{\mathbb{F}_{q^n}|\mathbb{F}_{q}}(\alpha) = b$.




The assumption that $a$ is non-zero is because normal elements cannot have zero trace and a primitive element $\alpha \in \mathbb{F}_{q^n}$ must have norm a primitive element of $\mathbb{F}_{q}$.



My point is, the article I'm reading asserts that if $n \leq 2$, such $\alpha$ is already prescribed by its trace and norm, but I cannot see this. Can anyone help me?



In the case $\mathbb{F}_{q^2}|\mathbb{F}_{q}$ we have $Tr(\alpha) = \alpha + \alpha^q$ and $N(\alpha) = \alpha^{q+1}$. I cannot see why all possible values for the norm and trace in $\mathbb{F}_{q}$ are achieved by primitive normal elements of $\mathbb{F}_{q^2}$.



Edit: I'm trying to think about its minimal polynomial. There is a fact (I will not prove here but it is true): in $\mathbb{F}_{q^2} | \mathbb{F}_{q}$, every primitive element is also normal. So, the minimal polynomial of a primitive normal element of $\mathbb{F}_{q^2}$ must be



$$X^2 -aX + b,$$ where $a = Tr(\alpha)$ and $b = N(\alpha)$. I still cannot see why every possible value for $N(\alpha)$ (any primitive element of $\mathbb{F}_{q}$) and $Tr(\alpha)$ (any non-zero element of $\mathbb{F}_{q}$) can be achieved.

real analysis - find the limit of $(y_{n})$, $y_{n}=1+\frac{1}{3^{2}}+\dots+\frac{1}{(2n-1)^{2}}$

I have the following sequence $(x_{n})$, $x_{n}=1+\frac{1}{2^{2}}+\dots+\frac{1}{n^{2}}$, which has the limit $\frac{\pi ^{2}}{6}$. I need to calculate the limit of the sequence $(y_{n})$, $y_{n}=1+\frac{1}{3^{2}}+\dots+\frac{1}{(2n-1)^{2}}$.


I don't know how to start. I think I need to find the limit for the whole sequence (even $n$ + odd $n$), and then from the "big limit" I should subtract $\frac{\pi ^{2}}{6}$, right?
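Your plan is nearly right: the even-indexed terms sum to $\frac14\cdot\frac{\pi^2}{6}=\frac{\pi^2}{24}$, so one subtracts the even part from the big limit, giving $y_n\to\frac{\pi^2}{6}-\frac{\pi^2}{24}=\frac{\pi^2}{8}$. A numeric check of that value (a sketch):

```python
# Partial sum of y_n = 1 + 1/3^2 + 1/5^2 + ... versus pi^2/8.
import math

y = sum(1 / (2 * k - 1) ** 2 for k in range(1, 10**6))
print(y)               # ~1.2337003
print(math.pi**2 / 8)  # 1.2337005...
```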

contest math - Inequality with $a,b,c\in\mathbb{R}$.



Prove that for every positive real numbers $a,b$ and $c$ we have



$$(a+b+c)^5\ge 81(a^2+b^2+c^2)abc.$$



I tried using the u,v,w method by substituting




$$a+b+c=3u$$
$$ab+bc+ca=3v^2$$
$$abc=w^3$$



From which it suffices to show $u^5-3u^2w^3+2v^2w^3\geq0$. I'm quite stuck here and unable to proceed. Also I know that equality occurs when $a=b=c$.


Answer



EDIT: I tried adding vvnitram's condition, but it doesn't sit well with me, because by Cauchy-Schwarz $(1 + 1 + 1)(a^2 + b^2 + c^2) \geq (a + b + c)^2$



This inequality should only work for non negative numbers. From AM-GM

\begin{align}
\frac{a + b + c}{3} &\geq (abc)^{\frac{1}{3}}\\
(a + b + c)^3 &\geq 27abc
\end{align}
Now for positive numbers $a$, $b$ and $c$ it follows that
\begin{align}
(a + b + c)^2 = a^2 + b^2 +c^2 + 2ab + 2ac + 2bc \geq a^2 + b^2 + c^2
\end{align}
Using vvnitram's condition, assuming $a \geq b \geq c$ then
\begin{equation}

(a + b + c)^2 \geq 3(a^2 + b^2 + c^2)
\end{equation}
(because $ab > b^2$)
If $a \geq b$ and $c \geq d$ for positive numbers then $ac \geq bd$. So using the above two inequalities
\begin{align}
(a + b + c)^3 \cdot (a + b + c)^2 &\geq 27abc \cdot 3(a^2 + b^2 + c^2)\\
(a + b + c)^5 &\geq 81abc(a^2 + b^2 + c^2)
\end{align}
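Independent of the algebra above, a brute-force numeric probe of the inequality itself (a sketch; random sampling only tests, it does not prove):

```python
# Spot-check (a+b+c)^5 >= 81*(a^2+b^2+c^2)*abc on random positive triples.
import random

for _ in range(100_000):
    a, b, c = (random.uniform(0.01, 100.0) for _ in range(3))
    lhs = (a + b + c) ** 5
    rhs = 81 * (a * a + b * b + c * c) * a * b * c
    assert lhs >= rhs * (1 - 1e-12)  # tiny slack for floating point

print("no counterexample found; equality holds at a = b = c")
```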


limits - "Proving" that $0^0 = 1$




I know that $0^0$ is one of the seven common indeterminate forms of limits, and I found on Wikipedia two very simple examples in which one limit equals $1$ and the other $0$. I also saw here: Prove that $0^0 = 1$ using binomial theorem —
that you can define $0^0$ as $1$ if you'd like.



Even so, I was curious, so I did some work and seemingly demonstrated that $0^0$ always equals 1.



My Work:



$$y=\lim_{x\rightarrow0^+}{(x^x)}$$




$$\ln{y} = \lim_{x\rightarrow0^+}{(x\ln{x})} $$



$$\ln{y}= \lim_{x\rightarrow0^+}{\frac{\ln{x}}{x^{-1}}} = \frac{-\infty}{\infty} $$



$\implies$ Use L'Hôpital's Rule



$$\ln{y}=\lim_{x\rightarrow0^+}\frac{x^{-1}}{-x^{-2}} $$
$$\ln{y}=\lim_{x\rightarrow0^+} -x = 0$$
$$y = e^{0} = 1$$




What is wrong with this work? Does it have something to do with using $x^x$ rather than $f(x)^{g(x)}$? Or does it have something to do with using operations inside limits? If not, why is $0^0$ considered indeterminate at all?


Answer



Someone said that $0^0=1$ is correct, and got a flood of downvotes and a comment saying it was simply wrong. I think that someone, me for example, should point out that while saying $0^0=1$ is correct is an exaggeration, calling that "simply wrong" isn't quite right either. There are many contexts in which $0^0=1$ is the standard convention.



Two examples. First, power series. If we say $f(t)=\sum_{n=0}^\infty a_nt^n$ that's supposed to entail that $f(0)=a_0$. But $f(0)=a_0$ depends on the convention that $0^0=1$.



Second, elementary set theory: Say $|A|$ is the cardinality of $A$. The cardinality of the set of all functions from $A$ to $B$ should be $|B|^{|A|}$. Now what if $A=B=\emptyset$? Here as well we want to say $0^0=1$; otherwise we could only say the cardinality of the set of all maps is $|B|^{|A|}$ unless $A$ and $B$ are both empty.



(Yes, there is exactly one function $f:\emptyset\to\emptyset$...)




Edit: Seems to be a popular answer, but I just realized that it really doesn't address what the OP said. For the record, of course the OP is nonetheless wrong in claiming to have proved that $0^0=1$. It's often left undefined, and in any case one does not prove definitions...
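For what it's worth, the $0^0=1$ convention is also what Python's power operator uses, and it is exactly what makes evaluating a power series at $t=0$ come out right (a small sketch):

```python
# Python adopts the 0^0 = 1 convention for both integers and floats:
print(0**0, 0.0**0.0)  # 1 1.0

# which is what makes f(0) = a_0 for f(t) = sum a_n t^n:
a = [7, 3, 5]  # a_0 = 7
f = lambda t: sum(an * t**n for n, an in enumerate(a))
print(f(0))    # 7, relying on 0**0 == 1
```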


elementary set theory - Cardinality of all injective functions from $\mathbb{N}$ to $\mathbb{R}$.



What is the cardinality of $$ A = \left\{ f:\mathbb{N}\to\mathbb{R} : f \text{ is injective} \right\}? $$



Trying to prove it using Cantor–Bernstein–Schroeder theorem, I have the obvious side:
$$A \subseteq \{f \mid f:\mathbb{N}\to\mathbb{R}\}$$



Hence,
$$\left|A\right| \le \aleph$$




I need to find an injection from a set with cardinality of $\aleph$ to $A$, but couldn't think of a proper one. It's tricky.



Any idea?



Thanks.


Answer



HINT: Prove that $\{f\in A\mid\operatorname{range}(f)\subseteq\Bbb N\}$ has size $\aleph$.


Saturday, December 29, 2018

real analysis - How do I prove that $f(x)f(y)=f(x+y)$ implies that $f(x)=e^{cx}$, assuming f is continuous and not zero?



This is part of a homework assignment for a real analysis course taught out of "Baby Rudin." Just looking for a push in the right direction, not a full-blown solution. We are to suppose that $f(x)f(y)=f(x+y)$ for all real x and y, and that f is continuous and not zero. The first part of this question let me assume differentiability as well, and I was able to compose it with the natural log and take the derivative to prove that $f(x)=e^{cx}$ where c is a real constant. I'm having a little more trouble only assuming continuity; I'm currently trying to prove that f is differentiable at zero, and hence all real numbers. Is this an approach worth taking?


Answer



First note that $f(x) > 0$, for all $x \in \mathbb{R}$. This can be seen from the fact that
$$f(x) = f\left(\dfrac{x}2 + \dfrac{x}2\right) = f \left(\dfrac{x}2\right)^2$$ Further, you can eliminate the case $f(x) = 0$, since this would mean $f \equiv 0$.




One way to go about is as follows.



$1$. Prove that $f(m) = f(1)^m$ for $m \in \mathbb{Z}^+$.



$2$. Now prove that $f(m) = f(1)^m$ for $m \in \mathbb{Z}$.



$3$. Now prove that $f(p/q) = f(1)^{p/q}$ for $p \in \mathbb{Z}$ and $q \in \mathbb{Z} \backslash \{0\}$.



$4$. Now make use of the fact that rationals are dense in $\mathbb{R}$, and hence you can find a sequence of rationals $r_n \in \mathbb{Q}$ such that $r_n \to r \in \mathbb{R} \backslash \mathbb{Q}$. Now use continuity to conclude that $f(x) = f(1)^x$ for all $x \in \mathbb{R}$. You will see that you need only continuity at one point to conclude that $f(x) = f(1)^x$.


sequences and series - Tricky question on binomial

Let's say there's a series of the form $$S=\frac{1}{10^2}+\frac{1\cdot3}{1\cdot2\cdot10^4}+\frac{1\cdot3\cdot5}{1\cdot2\cdot3\cdot10^6}+...$$
Now I had written the $r$th term as $$T_r=\frac{1\cdot3\cdot5\cdots(2r-1)}{1\cdot2\cdot3\cdots r\cdot10^{2r}}=\frac{(2r)!}{r!\cdot r!\cdot2^r\cdot10^{2r}}$$

I came to the second equality by multiplying and dividing the first expression by $2\cdot4\cdot6\cdots2r$ and then taking out a power of $2$ from each of the even numbers in the denominator.
From the looks of it, these expressions suggest a binomial approach, most probably the expansion for negative indices, but I don't understand how to get to the result from here.
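One possible route (a hedged sketch, not necessarily the intended one): with $x=\frac1{200}$ the terms are $T_r=\binom{2r}{r}x^r$, and the negative-index binomial series $\sum_{r\ge0}\binom{2r}{r}x^r=(1-4x)^{-1/2}$ then gives $S=\left(\frac{49}{50}\right)^{-1/2}-1=\frac{5\sqrt2}{7}-1$. A numeric check:

```python
# Compare the partial sum of S with the closed form 5*sqrt(2)/7 - 1.
import math

S = sum(math.comb(2 * r, r) / 200**r for r in range(1, 60))
print(S)                         # ~0.01015254...
print(5 * math.sqrt(2) / 7 - 1)  # 0.01015254...
```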

Ratio test for convergent series - Does that mean that the series $1 +\frac{1}{2} + \frac{1}{3} +\dotsb$ converges?



  1. An infinite series is convergent if from and after some fixed term, the ratio of each term to the preceding term is numerically less than some quantity which is itself numerically less than unity.
Let the series beginning from the fixed term be denoted by $$u_1+u_2+u_3+u_4+\dotsb,$$ and let $$\frac{u_2}{u_1}<r,\quad \frac{u_3}{u_2}<r,\quad \frac{u_4}{u_3}<r,\quad \dotsc,$$ where $r<1$. Then \begin{align*} &u_1+u_2+u_3+u_4+\dotsb\\ &=u_1\left(1+\frac{u_2}{u_1}+\frac{u_3}{u_2}\cdot\frac{u_2}{u_1}+\frac{u_4}{u_3}\cdot\frac{u_3}{u_2}\cdot\frac{u_2}{u_1}+\dotsb\right)\\ &<u_1\left(1+r+r^2+r^3+\dotsb\right). \end{align*} Hence the given series is convergent.


Does that mean that the series $1 +\frac{1}{2} + \frac{1}{3} +\dotsb$ should be a convergent one? $$S_n = \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\dotsb$$ Here, we have \begin{gather} \frac{u_n}{u_{n-1}} = \frac{1/n}{1/(n-1)} = \frac{n-1}{n} = 1-\frac{1}{n}\\ \therefore\boxed{\frac{u_n}{u_{n-1}}<1}\\ \therefore\text{Series should be convergent} \end{gather}


Answer




You need $\frac{u_n}{u_{n-1}} < r < 1$ for some $r < 1$ and all $n $.


For any $r <1$ we can find an $n$ with $r < \frac {u_n}{u_{n-1}} < 1$. So we cannot find an appropriate $r <1$.


So we failed the hypothesis. The ratio test fails.


calculus - How do I find this limit without using L'Hôpital's rule?



Finding this limit using L'Hôpital's rule is easy, but how to do it without using L'Hôpital's rule?



$$\lim_{x \rightarrow 0} \frac{(1+\sin x)^{\csc x}-e^{x+1}}{\sin (3x)}$$


Answer



We may proceed as follows and reduce the complicated limit expression to a simple one before applying series expansions
$$\begin{aligned}L&=\lim_{x \to 0}\frac{(1 + \sin x)^{\csc x} - e^{x + 1}}{\sin 3x}\\
&=\lim_{x\to 0}\frac{\exp\{\csc x\log(1+\sin x)\} -\exp(1+x)}{\sin 3x}\\
&=\lim_{x\to 0}\frac{\exp(1+x)\left\{\exp\left(\csc x\log(1+\sin x) -1 -x\right) -1\right\}}{\sin 3x}\\

&=\lim_{x\to 0}\frac{\exp(1+x)\left\{\exp\left(\csc x\log(1+\sin x) -1 -x\right) -1\right\}}{3x}\cdot\frac{3x}{\sin 3x}\\
&=\frac{e}{3}\lim_{x\to 0}\frac{\exp\{\csc x\log(1+\sin x) -1 -x\} -1}{x}\\
&=\frac{e}{3}\lim_{x\to 0}\frac{e^{t} -1}{t}\cdot\frac{t}{x}\\
&=\frac{e}{3}\lim_{x\to 0}\frac{t}{x}\\
&=\frac{e}{3}\lim_{x\to 0}\frac{\csc x\log(1+\sin x)-1-x}{x}\\
&=\frac{e}{3}\left(\lim_{x\to 0}\frac{\log(1+\sin x)-\sin x}{x\sin x}-1\right)\\
&=\frac{e}{3}\left(\lim_{x\to 0}\frac{\log(1+\sin x)-\sin x}{\sin^{2} x}\cdot\frac{\sin x}{x}-1\right)\\
&=\frac{e}{3}\left(\lim_{z\to 0}\frac{\log(1+z)-z}{z^{2}}-1\right)\\
&=\frac{e}{3}\left(\lim_{z\to 0}\dfrac{\left(z - \dfrac{z^{2}}{2} + \cdots\right)-z}{z^{2}}-1\right)\\
&=\frac{e}{3}\cdot\frac{-3}{2}=-\frac{e}{2}\end{aligned}$$

In the above derivation we have $z = \sin x$ and $$\begin{aligned}t&=\csc x\log(1+\sin x) -1-x\\
&= \frac{\log(1 + \sin x)}{\sin x} - 1 - x\\
&= \frac{\log(1 + z)}{z} - 1 - x\end{aligned}$$ so that both $t$ and $z$ tend to $0$ as $x\to 0$.
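A numeric probe of the final value (a sketch; very small $x$ eventually loses floating-point precision, so only moderate values are used):

```python
# Probe ((1+sin x)^(1/sin x) - e^(x+1)) / sin(3x) as x -> 0; expect -e/2.
import math

for x in (1e-2, 1e-3, 1e-4):
    val = ((1 + math.sin(x)) ** (1 / math.sin(x)) - math.exp(x + 1)) / math.sin(3 * x)
    print(x, val)

print(-math.e / 2)  # -1.3591409...
```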


sequences and series - Are there any mathematical or physical situations in which $1+2+3+\ldots=-\frac{1}{12}$ shows itself?


I just saw the proof that $$1+2+3+\cdots=-\frac{1}{12}$$ and my brain still hurts.


I completely understood the proof and my question is NOT actually about the proof itself. At the end of the proof, they said that this answer shows itself in quantum physics (e.g. it has a role in why there are 26 dimensions in string theory).


Sadly though, I am no expert in quantum physics and I'm wondering are there any other physical and/or mathematical situations that this counter intuitive result shows itself?


P.S. Needless to say, it would be great if your answer doesn't need a very deep background in physics or math.


Answer



The identity as you've stated it isn't quite correct. We usually define an infinite sum by taking the limit of the partial sums. So



$$1+2+3+4+5+\dots $$


would be what we get as the limit of the partial sums


$$1$$


$$1+2$$


$$1+2+3$$


and so on. Now, it is clear that these partial sums grow without bound, so traditionally we say that the sum either doesn't exist or is infinite.


So, to make the claim in your question title, you must adopt a nontraditional method of summation. There are many such methods available, but the one used in this case is Zeta function regularization. That page might be too advanced, but it is good to at least know the name of method under discussion.


You ask why this nontraditional approach to summation might be useful in physics. The answer is that sometimes this approach gives the physically correct result. A simple example is the Casimir effect. Suppose we place two metal plates a very short distance apart (in a vacuum, with no gravity, and so on -- we assume idealized conditions). Classical physics predicts they will just be still. However, there is actually a small attractive force between them. This can be explained using quantum physics, and calculation of the magnitude of the force uses the sum you discuss, summed using zeta function regularization.


number theory - Explanation of Zeta function and why 1+2+3+4+... = -1/12











I found this article on Wikipedia which claims that $\sum\limits_{n=0}^\infty n=-1/12$. Can anyone give a simple and short summary on the Zeta function (never heard of it before) and why this odd result is true?


Answer



The answer is much more complicated than $\lim_{x \to 0} \frac{\sin(x)}{x}$.



The idea is that the series $\sum_{n=1}^\infty \frac{1}{n^z}$ is convergent when $\operatorname{Re}(z) >1$, and this works also for complex numbers $z$.




The limit is a nice function (analytic) and can be extended in a unique way to a nice function $\zeta$. This means that



$$\zeta(z)=\sum_{n=1}^\infty \frac{1}{n^z} \,;\, Re(z) >1 \,.$$



Now, when $z=-1$, the right side is NOT convergent, still $\zeta(-1)=\frac{-1}{12}$. Since $\zeta$ is the ONLY way to extend $\sum_{n=1}^\infty \frac{1}{n^z}$ to $z=-1$, it means that in some sense



$$\sum_{n=1}^\infty \frac{1}{n^{-1}} =-\frac{1}{12}$$



and this is exactly what that means. Note that, in order for this to make sense, on the LHS we don't have convergence of series; we have a much more subtle type of convergence: we actually ask that the function $\sum_{n=1}^\infty \frac{1}{n^z}$ be differentiable as a function of $z$ and let $z \to -1$...




In some sense, the phenomenon is close to the following:



$$\sum_{n=0}^\infty x^n =\frac{1}{1-x} \,;\, |x| <1 .$$



Now, the LHS is not convergent for $x=2$, but the RHS function makes sense at $x=2$. One could say that this means that in some sense $\sum_{n=0}^\infty 2^n =-1$.



Anyhow, because of the analyticity of the Riemann zeta function, the statement about $\zeta(-1)$ is actually much more subtle, and true on a more formal level, than this geometric statement...
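One can see the analytic continuation numerically; mpmath's `zeta` implements it (a sketch, assuming the mpmath library is available):

```python
# The analytically continued Riemann zeta function at z = -1.
from mpmath import zeta

print(zeta(-1))   # -0.0833333... = -1/12
print(-1 / 12)    # -0.08333333...
```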


Friday, December 28, 2018

elementary number theory - Show that if $d_1e_1=d_2e_2$ and $\gcd(e_1,e_2)=1$, then $\operatorname{lcm}(d_1,d_2)=d_1e_1=d_2e_2$



Show that for positive integers, if $d_1e_1=d_2e_2$ and $\gcd(e_1,e_2)=1$, then $\operatorname{lcm}(d_1,d_2)=d_1e_1=d_2e_2$.
We know that $$\operatorname{lcm}(d_1,d_2)\gcd(d_1,d_2)=d_1d_2,$$ but this doesn't help much!
What's the trick?


Answer



Let $k=\gcd(d_1,d_2)$. Then $d_1=kd_1'$ and $d_2=kd_2'$ where $d_1'$ and $d_2'$ are relatively prime. Thus from the relationship you stated we have $$\text{lcm}(d_1,d_2)k=k^2d_1'd_2',$$
and therefore
$$\text{lcm}(d_1,d_2)=kd_1'd_2'.$$

It remains to show that $kd_1'd_2'=d_1e_1=kd_1'e_1$, or equivalently $d_2'=e_1$.



We were told that $d_1e_1=d_2e_2$, or equivalently that $d_1'e_1=d_2'e_2$. Since $e_1$ and $e_2$ are relatively prime, we have $e_1\mid d_2'$. And because $d_1'$ and $d_2'$ are relatively prime, we have $d_1'\mid e_2$.



But because $d_1'e_1=d_2'e_2$, it follows that $e_1=d_2'$ and $d_1'=e_2$.
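A brute-force confirmation over small integers (a sketch; the loop just instantiates the hypotheses and checks the conclusion):

```python
# Check: d1*e1 == d2*e2 and gcd(e1, e2) == 1  implies  lcm(d1, d2) == d1*e1.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for d1 in range(1, 40):
    for e1 in range(1, 40):
        n = d1 * e1
        for e2 in range(1, 40):
            if n % e2 == 0 and gcd(e1, e2) == 1:
                d2 = n // e2          # then d1*e1 == d2*e2 by construction
                assert lcm(d1, d2) == n

print("verified for all d1, e1, e2 < 40")
```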


probability - Explain why $E(X) = \int_0^\infty (1-F_X(t)) \, dt$ for every nonnegative random variable $X$



Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show, $$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$ when $X$ has : a) a discrete distribution, b) a continuous distribution.



I assumed that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, then $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. Although how useful integrating that is, I really have no idea.


Answer



For every nonnegative random variable $X$, whether discrete or continuous or a mix of these, $$ X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt, $$ hence



$$ \mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt. $$





Likewise, for every $p>0$, $$ X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt, $$ hence



$$ \mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt. $$
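A quick illustration for one concrete nonnegative distribution (a sketch using $X\sim\operatorname{Exp}(1)$, where $1-F_X(t)=e^{-t}$ and $E(X)=1$):

```python
# E(X) as a sample mean versus the integral of the survival function e^{-t}.
import math, random

samples = [random.expovariate(1.0) for _ in range(200_000)]
print(sum(samples) / len(samples))   # ~1.0, the sample mean

h, T = 1e-3, 20.0
survival_integral = h * sum(math.exp(-(k + 0.5) * h) for k in range(int(T / h)))
print(survival_integral)             # ~1.0, the integral of P(X > t)
```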



Thursday, December 27, 2018

What 's the short proof that for square matrices $AB = I$ implies $BA = I$?







I'm trying to remember the one line proof that for square matrices $AB = I$ implies $BA = I$.
I think it uses only elementary matrix properties and nothing else. Does anyone know the proof?



I remember it beginning with $BAB = B$ and the result following almost immediately.

polynomials - Find parameter so that the equation has roots in arithmetic progression



Find the parameter $m$ so that the equation
$$x^8 - mx^4 + m^4 = 0$$
has four distinct real roots in arithmetic progression.



I tried the substitution $x^4 = y$, so the equation becomes



$$y^2 -my + m^4 = 0$$




I don't know what condition should I put now or if this is a correct approach.



I have also tried to use Viete's by the notation $$2x_2=x_1+x_2, 2x_3=x_2 +x_4$$ but I didn't get much out of it.


Answer



Zero can't be a root, else $m=0$, in which case all the roots would be zero.


If $r$ is any root, so is $ri$, hence there must be exactly $4$ real roots, and $4$ pure imaginary roots.


Also, if $r$ is a root, so is $-r$, hence the real roots sum to zero.


Ordering the real roots in ascending order, let $d > 0$ be the common difference.


Then the four real roots are



$$-\frac{3}{2}d,\;-\frac{1}{2}d,\;\frac{1}{2}d,\;\frac{3}{2}d$$



and the other four roots are



$$-\frac{3}{2}di,\;-\frac{1}{2}di,\;\frac{1}{2}di,\;\frac{3}{2}di$$




Since the $4$-th powers of the roots satisfy the quadratic



$$y^2 - my + m^4 = 0$$



Vieta's formulas yields the equations



$$\left(\frac{1}{2}d\right)^4+\left(\frac{3}{2}d\right)^4 = m$$



$$\left(\frac{1}{2}d\right)^4\left(\frac{3}{2}d\right)^4 = m^4$$




$$\text{or, in simpler form}$$



$$\frac{41}{8}d^4 = m\tag{eq1}$$



$$\frac{81}{256}d^8 = m^4\tag{eq2}$$



Solving $(\text{eq}1)$ for $d^4$, substituting the result into $(\text{eq}2)$, and then solving for $m$ yields
$$m = \frac{9}{82}$$
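A numeric confirmation (a sketch assuming NumPy) that $m=\frac{9}{82}$ indeed yields four real roots in arithmetic progression:

```python
# Roots of x^8 - m x^4 + m^4 for m = 9/82: four real roots in AP,
# plus four purely imaginary ones.
import numpy as np

m = 9 / 82
roots = np.roots([1, 0, 0, 0, -m, 0, 0, 0, m**4])
real = np.sort(roots[np.abs(roots.imag) < 1e-8].real)
print(real)           # [-3d/2, -d/2, d/2, 3d/2]
print(np.diff(real))  # three equal gaps: the common difference d
```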


Wednesday, December 26, 2018

If 0 / 0 is indeterminate, are all clauses "0 / 0 != x" true

Elsewhere arose a discussion about logical clauses that can be made from indeterminate forms, in this case, namely $0 / 0$. Since $0 / 0$ is indeterminate form, can we make these logical clauses:





  • $0 / 0 = 1$ is false?

  • $0 / 0 \neq 1$ is true?



Or in more general form, if $x$ is not just undefined, but indeterminate form, and $y$ is defined and $y \in \mathbb{R}$, can we say that:




  • $x = y$ is false?

  • $x \neq y$ is true?




My thinking goes, that since $x$ cannot be determined in any way, first one is intuitively correct: any clause that tries to say that $x$ is something must be false. Based on simple logics, it follows that also all clauses that $x$ is something else than something that is defined, are true.



Counterargument is, that if $x$ is indeterminate form, we cannot say that it is not $y$, for that it would make a clause that $x$ is something, because it is at least not $y$, which cannot be true if $x$ is truly indeterminate (i.e. if $x$ is indeterminate, we cannot say that it isn't $y$. To my thinking, this leads to also back to that $x = y$ is at least not true. But with the same argumentation, it cannot be false either, and hence both logical clauses are themselves undefined in answer, not true but not false either.



And counter-counterargument is that, if we make a clause that says $x \neq y$, it does not make $x$ any less indeterminate, because it only says that $x$ is at least not that particular defined form, but still leaves open possibilities that $x$ is something else than $y$ or still completely indeterminate.



My thinking goes that if $x$ is indeterminate form, it means that $x \notin \mathbb{R}$ (or $\mathbb{R}$, for that matter), and hence all clauses that $x \neq y \in \mathbb{R}$, are true, because what we can say about indeterminate forms is that they are not in real number space. But that might be untrue as well, if it is also indeterminate whether indeterminate forms exist in real numbers or not.



What comes to undefined numbers, e.g. if instead of $0 / 0$ we talk about $a / 0$, where $a \neq 0$, it is more clear that they do not exist in real numbers, so if $x$ was only undefined, I believe $x \neq y$ is without doubt true. I.e. clauses that undefined numbers are always not equal to any or all of defined numbers are necessarily true, because it is basically the same clause that $x \notin \mathbb{R}$, which is true if $x$ is undefined.




The answer to this is probably very much not indeterminate, but with my basic knowledge of algebra a definitely correctly-argumented answer cannot be made.

general topology - What is the order type of [0,1)?



My question is pretty simple, but I want to make sure I'm understanding it correctly, because otherwise my misconception could go on a while.




What is the "order type" of [0,1)?



I know the set has a smallest element, does not have a largest element, and for any two elements in the set x < y we can find another element z so that x < z < y.



Are these the properties referred to by the phrase "order type of [0,1)"? Is there anything I am missing?



This was inspired by a homework problem from Munkres #12 from section 24 needing to show that [a,c) has the same order type as [0,1) iff both [a,b) and [b,c) have the same order type as [0,1).



Thanks!


Answer




Since the open interval $(0,1)$ is isomorphic to the real line, its order type is $\lambda$, the order type of the real line. The half-open interval $[0,1)$ therefore has order type $1+\lambda$; similarly, the order type of the half-open interval $(0,1]$ is $\lambda+1$, and the order type of the closed interval $[0,1]$ is $1+\lambda+1$.




I know the set has a smallest element, does not have a largest element, and for any two elements in the set x < y we can find another element z so that x < z < y.

Are these the properties referred to by the phrase "order type of [0,1)"?




No. The set $\mathbb Q\cap[0,1)$ of all rational numbers in $[0,1)$ has all the properties you listed, but is not isomorphic to $[0,1)$ (different cardinalities), so it does not have the same order type; its order type is $1+\eta$.


linear algebra - Maximum eigenvalue of a hollow symmetric matrix


Is the maximum eigenvalue (or spectral radius) of the matrix with the following form equalled to row or column sum of the matrix?


$$ A=\left( \begin{array}{cccc} 0 & a & ... & a \\ a & 0 & ... & a \\ : & : & ...& : \\ a & a & ... & 0\end{array} \right) $$


The matrix is square with dimension $n \times n$ where $n = 2,3,4,...$, hollow (all elements in the principal diagonal = 0), symmetric and all off diagonal elements have the same value.


Is the spectral radius of such matrices = $(n-1)\times a$? Why?


Answer



Start with the matrix $A$ of all $a$'s, whose eigenvalues are zero except for eigenvalue $na$ having multiplicity one (because rank$(A) = 1$).


Now subtract $aI$ from $A$ to get your matrix. The eigenvalues of $A-aI$ are those of $A$ shifted down by $a$. We get an eigenvalue $(n-1)a$ of multiplicity one and eigenvalue $-a$ with multiplicity $n-1$.


So the spectral radius (largest absolute value of an eigenvalue) of $A$ is $|na|$, and the spectral radius of $A-aI$ is $\max(|(n-1)a|,|a|)$. The latter is simply $|(n-1)a|$ unless $n=1$.
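A quick check with NumPy (a sketch; $n$ and $a$ here are arbitrary):

```python
# Eigenvalues of the n x n hollow matrix with off-diagonal entries a:
# -a with multiplicity n-1 and (n-1)a with multiplicity one.
import numpy as np

n, a = 6, 2.5
A = a * (np.ones((n, n)) - np.eye(n))
print(np.sort(np.linalg.eigvalsh(A)))  # [-2.5 -2.5 -2.5 -2.5 -2.5 12.5]
print((n - 1) * a)                     # 12.5
```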


probability - Calculating the expected value of a random variable that's a function of a random variable



I am working on the following problem:




[The full problem statement was given as an image. From the work below: two machines have maintenance costs $X$ and $Y$, each equal to $0$ with probability $\frac23$ and uniform on $[0,4000]$ otherwise, and the insurer pays the total cost up to a cap of $6000$.]



I'm having a hard time putting all of this information together:



The cost of the maintenance is $Z = X + Y$, where $X$ is the cost of the first machine and $Y$ is the cost of the second machine.



The payment $T$ by the insurer is $Z$ if $0 \leq Z \leq 6000$ and $6000$ if $Z > 6000$.



$$
T(Z) = \left\{

\begin{array}{lr}
Z & \text{if } 0 \leq Z \leq 6000\\
6000 & \text{if } Z > 6000
\end{array}
\right.
$$



We want to find $E[T(Z)]$, weighting each value of $T(Z)$ by its probability.



From here I am uncertain about how to proceed. I know that a uniform random variable on $[0, 4000]$ has expected value $\frac{4000}{2} = 2000$. This has a probability of $\frac{1}{3}$ of occurring.




I would think $E[X] = E[Y] = \frac{4000}{6}$, and so $E[Z] = \frac{4000}{3}$.



I am just unsure of how to factor in what the insurer is actually paying.


Answer



The probability of $(0,0)$ occurring is $\frac23\cdot\frac23$; the pair $(0,y)$, for $0<y<4000$, occurs with density $\frac23\cdot\frac1{12000}$ (and similarly for $(x,0)$); and the pair $(x,y)$, for $0<x,y<4000$, occurs with density $\frac1{12000^2}$. The expected payment is therefore
$$
\frac49\cdot0 + \int_0^{4000} \frac{y\,dy}{18000} + \int_0^{4000} \frac{x\,dx}{18000} + \int_0^{4000} \int_0^{4000} \frac{\min\{x+y,6000\}\,dx\,dy}{12000^2}.
$$
Solving this calculus problem (splitting the last integral at the line $x+y=6000$) gives the answer $\frac{35750}{27} \approx 1324.07$.
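A Monte Carlo cross-check of that value (a sketch; the mixture weights follow the setup above):

```python
# E[min(X + Y, 6000)] with X, Y iid: 0 w.p. 2/3, else Uniform[0, 4000].
import random

def cost():
    return 0.0 if random.random() < 2 / 3 else random.uniform(0, 4000)

N = 2_000_000
est = sum(min(cost() + cost(), 6000.0) for _ in range(N)) / N
print(est)         # ~1324.07
print(35750 / 27)  # 1324.074...
```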



number theory - Arithmetic progression divisibility



Suppose $b|a$ and $\frac{a}{b} \neq \frac{v}{y}$, $a, b, v, y \in \mathbb{N}$ arbitrary. Is there a nice clean intuitive proof to show that it is never true that $b+yk|a+vk$ for all $k \in \mathbb{N}$? Or is it true sometimes after all (I strongly feel like not)?


Answer



Assume for contradiction that for all $k\in \Bbb N$, $a+vk|b+yk$ and consider the sequence $u_k\in \Bbb Z$ such that $u_k(a+vk) = b+yk$. Then since $u_k = {b+yk\over a+vk} \to {y\over v}$ as $k$ goes to $+\infty$, $u_k$ approaches ${y\over v}$ arbitrarily closely, which therefore must be an integer, and thus $v|y$. Let $u \in \mathbb{Z}$ be such that $y = vu$. Then for all $k\in \Bbb N$ we have $a+vk |b+vuk$ and therefore also $a+vk |b-au$. However since $a+vk$ is unbounded and $b-au$ does not depend on $k$, what follows is that $b-au = 0$, which rewrites as ${a\over b} = {v \over y}$.



The contrapositive of this is your statement.


Tuesday, December 25, 2018

exponentiation - What is 0 raised to 0 ???!!!!




I have read many articles on this confusion but I am still confused...



My simple question is -




What is $0^0$?



What is the present agreement to this?



I feel that it should be 1 as anything to the power zero is one....



I am currently a school student, so I would like more of a school-based answer.



So in case it comes up in my exam I should know what to write :)



Answer



$0^0$ is most often undefined. The reason is that it is not possible to define it in a good enough way. Notice the following examples:



$0^x$



Whenever $x \neq 0$, this expression should equal $0$. However,



$x^0$



should be $1$ whenever $x\neq 0$. Thus, if we define $0^0$ as either $0$ or $1$, then we get problems with these functions not being continuous (without jumps if you plot them) where they are defined, which is why we keep $0^0$ undefined in most cases.



elementary set theory - what is the cardinality of the injective functions from $\mathbb{R}$ to $\mathbb{R}$?

What is the cardinality of the injective functions from $\mathbb{R}$ to $\mathbb{R}$?
Let's say $A = \{\text{the injective functions from } \mathbb{R} \text{ to } \mathbb{R}\}$.
Obviously, $|A| \le 2^{\aleph}$.
I have no idea from which set I have to find an injective function to $A$ to show (by the Cantor-Schroeder-Bernstein theorem) that $|A| \ge 2^{\aleph}$.

calculus - Is it possible to find the limit without l'Hopital's Rule or series

Is it possible to find $$\lim_{x\to0}\frac{\sin(1-\cos(x))}{x^2e^x}$$ without using L'Hopital's Rule or Series or anything complex but just basic knowledge (definition of a limit, limit laws, and algebraic expansion / cancelling?)
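A numeric probe (a sketch): since $1-\cos x\sim x^2/2$ and $\sin u\sim u$, one might expect the value $\frac12$, and the numbers agree:

```python
# Evaluate sin(1 - cos x) / (x^2 e^x) for shrinking x; values approach 0.5.
import math

for x in (1e-1, 1e-2, 1e-3):
    print(x, math.sin(1 - math.cos(x)) / (x**2 * math.exp(x)))
```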

elementary number theory - $(1 \cdot 2 \cdot \ldots \cdot m)^w + (2 \cdot 3 \cdot \ldots \cdot (m+1))^w + \ldots + (n \cdot (n+1) \cdot \ldots \cdot (n+m-1))^w = ?$

I started to read some book on elementary number theory and preliminary chapter asks to establish some formulas by mathematical induction:



There is this formula:



$$1+2+...+n= \dfrac {n(n+1)}{2}$$



And this one:




$$ 1\cdot 2 + 2\cdot 3 + ... + n\cdot (n+1)= \dfrac {n(n+1)(n+2)}{3}$$



With some pencil-and-paper work I established that this should hold:



$$1\cdot 2 \cdot ... \cdot m + 2\cdot 3 \cdot ... \cdot (m+1) + ... + n \cdot (n+1) \cdot ... \cdot (n+m-1)= \dfrac {n(n+1)...(n+m-1)(n+m)}{m+1}$$



I did not prove this formula that I established but just checked some cases and it seems to hold.



It can be written in the form of a hockey-stick identity, I think, so it holds.




Now, I know about generalization of first formula that goes like this (Faulhaber´s formula):



$$1^w + 2^w + ... + n^w=\dfrac {1}{w+1} \cdot \displaystyle \sum_{j=0}^{w} { {w+1} \choose j} B_j n^{w+1-j}$$



What does the generalization $$(1 \cdot 2 \cdot \ldots \cdot m)^w+ (2 \cdot 3 \cdot \ldots \cdot (m+1))^w+\dots+(n \cdot(n+1) \cdot \ldots \cdot (n+m-1))^w=?$$ look like? That is, what is on the right side?

Monday, December 24, 2018

algebra precalculus - Why the equation $3cdot0=0$ needs to be proven




In Algebra by Gelfand Page 21 ( for anyone owning the book).
He tries to prove that: $3\cdot(-5) + 15 = 0$.
Here's his proof: $3\cdot(-5) + 15 = 3\cdot(-5) + 3\cdot5 = 3\cdot(-5+5) = 3\cdot0 = 0$. After that he said:




The careful reader will ask why $3\cdot0 = 0$.




Why does this equation need to be proven?
I asked somewhere and was told that $a\cdot0=0$ is an axiom which maybe Gelfand didn't assume was true during his proof.
But why does it need to be an axiom? It's provable:
in the second step of his proof he converted $15$ to $3\cdot5$, so multiplication was already defined, and
$a\cdot0 = \underbrace{0 + 0 + \cdots + 0}_{a \text{ times}} = 0$.
I'm aware multiplication is defined as repeated addition only for integers,
but $3$ is an integer, so this definition works in my example.



In case my question wasn't clear it can be summed up as:
Why he takes $3\cdot5=15$ for granted but thinks $3\cdot0=0$ needs an explanation?


Answer




Gelfand doesn't really take $3 \cdot 5 = 15$ for granted; in the ordinary course of events, this would need just as much proof as $3 \cdot 0$.



But the specific value $15$ isn't important here; we're really trying to prove that if $3 \cdot 5 = 15$, then $3 \cdot (-5) = -15$. That is, we want to know that making one of the factors negative makes the result negative. If you think of this proof as a proof that $3 \cdot (-5) = -(3 \cdot 5)$, then there's no missing step.



The entire proof could be turned into a general proof that $x \cdot (-y) = -(x\cdot y)$ with no changes; I suspect that the authors felt that this would be more intimidating than using concrete numbers.



If we really cared about the specific value of $3 \cdot 5$, we would need proof of it. But to prove that $3 \cdot 5 = 15$, we need to ask: how are $3$, $5$, and $15$ defined to begin with? Probably as $1+1+1$, $1+1+1+1+1$, and $\underbrace{1+1+\dots+1}_{\text{15 times}}$, respectively, in which case we need the distributive law to prove that $3 \cdot 5 = 15$. Usually, we don't bother, because usually we don't prove every single bit of our claims directly from the axioms of arithmetic.



Finally, we don't usually make $x \cdot 0 = 0$ an axiom. For integers, if we define multiplication as repeated addition, we could prove it as you suggest. But more generally, we can derive it from the property that $x + 0 = x$ (which is usually taken as a definition of what $0$ is) and the other laws of multiplication and addition given in this part of the textbook.
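For completeness, that derivation takes one line:
$$x \cdot 0 = x \cdot (0 + 0) = x \cdot 0 + x \cdot 0,$$
and subtracting $x \cdot 0$ from both sides leaves $x \cdot 0 = 0$.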


Modular arithmetic and linear congruences


Assuming a linear congruence:


$ax\equiv b \pmod m$


It's safe to say that one solution would be:


$x\equiv ba^{-1} \pmod m$


Now, the first condition I memorized for a number $a$ to have an inverse $\bmod m$ was:



$\gcd(a,m) = 1$


This stems from the fact (and correct me here) that a linear congruence has a solution if that gcd divides $b$. Since in an inverse calculation we have:


$ax\equiv 1 \pmod m$


The only number that divides one is one itself, so it makes sense.


Now comes the part that annoys me most:


"If the condition that tells us that there is an inverse $mod (m)$ for $a$ says that $\gcd(a,m)=1$, then how come linear congruences where the $\gcd(a,m) \ne 1$ have solution? Why do we say that a linear congruence where $\gcd(a,m) = d > 1$ has d solutions? If you can't invert $a$, then you can't do this:"


$ax\equiv b \pmod m \implies x\equiv ba^{-1} \pmod m $


Please help on this one. It's kinda tormenting me :(


Answer



The long and the short of it is: $ax \equiv b \pmod m$ has solutions iff $\gcd(a,m)$ divides $b$.



As you said, if $\gcd(a,m) = 1$, then we can multiply by the inverse of $a$ to get our (unique!) solution.


But if $\gcd(a,m) = d > 1$, we still have a chance at finding solutions, even though there is no inverse of $a$ mod $m$.



Assume $d \mid b$.


Then there are integers $a', m', b'$ such that $a = da'$, $b = db'$, and $m = dm'$. $$ax \equiv b \pmod m \iff a'x \equiv b' \pmod{m'} $$


But since $d$ was the GCD of $a$ and $m$, we know $\gcd(a', m') = 1$, and we can construct a solution mod $m'$. For notation's sake, let $c$ be an integer such that $ca' \equiv 1 \pmod {m'}$. $$a'x \equiv b' \pmod{m'} \iff x \equiv b' c \pmod {m'}$$


Now we can "lift" our solution mod $m'$ to many solutions mod $m$.: $$ x \equiv b' c \pmod {m'} \iff \exists k \quad x \equiv b'c + km' \pmod m $$



Say there is a solution to the congruence. Then $ax \equiv b \pmod m$ implies that for some $k$, $ax + km = b$. But if $d$ divides $a$ and $m$, it must also divide $b$.



So it is a necessary and sufficient condition.
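A small script (a sketch) that carries out exactly this recipe and lists the $d$ solutions:

```python
# Solve a x ≡ b (mod m): divide through by d = gcd(a, m), invert a' mod m',
# then lift the unique solution mod m' to d solutions mod m.
from math import gcd

def solve_congruence(a, b, m):
    d = gcd(a, m)
    if b % d:
        return []                  # no solutions when d does not divide b
    a1, b1, m1 = a // d, b // d, m // d
    c = pow(a1, -1, m1)            # inverse exists since gcd(a', m') = 1
    x0 = (b1 * c) % m1
    return [(x0 + k * m1) % m for k in range(d)]

print(solve_congruence(6, 9, 15))  # d = 3 solutions: [4, 9, 14]
```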


Double finite field extension



Suppose we are given the field $\mathbb{F}_5$ and
$p(X) = X^2-2 \in \mathbb{F}_5[X]$, an irreducible polynomial over $\mathbb{F}_5$.
Let $\mathbb{K}$ denote the extension of $\mathbb{F}_5$ in which $p(X)$ has a root $\alpha$. $\mathbb{K}$ is an extension of degree $2$ and of cardinality $5^2=25$.




It is easily verified that $\alpha$ is not a square in $\mathbb{K}$, so $q(Y) = Y^2 - \alpha \in \mathbb{K}[Y]$ is an irreducible polynomial over $\mathbb{K}$. We can therefore form an extension of $\mathbb{K}$ in which $q(Y)$ has a root $\beta$, i.e. an element such that $\beta^2=\alpha$. I will call this extension $\mathbb{L}$. $\mathbb{L}$ is an extension of degree $2$ over $\mathbb{K}$ and of cardinality $5^4 = 625$.



I am trying to find the minimal polynomial of $\beta$ over $\mathbb{F}_5$ which gives rise to the same extension $\mathbb{L}$, i.e. the polyomial $s(Z) \in \mathbb{F}_5[Z]$ such that $s(\beta)=0$.



$s(Z)$ should be of degree $4$ and $\mathbb{L}$, being the splitting field of $s(Z)$, will contain all its roots which are given by $\beta, \beta^5, \beta^{25}, \beta^{125}$. So, one way to find $s(Z)$ is to compute and simpify the product $s(Z) = (Z-\beta)(Z-\beta^5)(Z-\beta^{25})(Z-\beta^{125})$. I got $s(Z)=Z^4+3$.



My question: is there another, more intelligent, method to find $s(Z)$ that would work even when the involved fields are of greater cardinality?


Answer



You're adding in $\sqrt{2}$ and then $\sqrt{\sqrt{2}}$. So intuitively it's obvious that really you're just adding a root of $x^4-2$.




This is the same polynomial as you got.



More formally speaking, you could verify the isomorphism of rings:



$\mathbb{F}_5[x,y]/\langle x^2 - 2, y^2 -x \rangle \cong \mathbb{F_5}[y]/\langle y^4-2 \rangle$



This amounts to checking that the equality of the ideals



$\langle x^2 - 2, y^2 -x \rangle = \langle y^4-2, y^2 - x\rangle$




The only non-trivial step there is that $x^2-2 \in \langle y^4-2, y^2 - x\rangle$.
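A brute-force confirmation in plain Python (a sketch) that $s(Z)=Z^4+3$, which is the same as $Z^4-2$ over $\mathbb{F}_5$, has no linear or quadratic factors and is therefore irreducible:

```python
# Irreducibility of Z^4 + 3 over F_5: no roots, and no factorization
# into two monic quadratics.
p = 5

# no linear factors: z^4 + 3 != 0 for every z in F_5
assert all((z**4 + 3) % p != 0 for z in range(p))

def mul(f, g):
    """Multiply polynomials mod p; coefficient lists, lowest degree first."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % p
    return h

target = [3, 0, 0, 0, 1]  # 3 + Z^4
quads = [[c0, c1, 1] for c0 in range(p) for c1 in range(p)]
assert not any(mul(f, g) == target for f in quads for g in quads)
print("Z^4 + 3 is irreducible over F_5")
```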


Sunday, December 23, 2018

elementary number theory - RSA with small encryption exponent

In RSA: Fast factorization of $N$ if $d$ and $e$ are known, a comment under the OP's question stated that if the encryption exponent $e$ is small compared to $N=p\cdot q$ for the RSA primes $p,q$ (like $e<\sqrt{N}$), one could find $k$ in $de=1+k\varphi(N)$ by rounding $\frac{de}{N}$, as $k$ is an integer and $\varphi(N)\approx N$.




I don't know how to show this rigorously, as I don't find a way to use $e<\sqrt{N}$, and I'm unsure whether the floor function or some other approximation was meant. (For the following I went with the floor, as it seemed the most reasonable.)



As $\varphi(N)\le N$ I know that:
$$k=\left\lfloor \frac{1}{N}+k\right\rfloor\ge\left\lfloor \frac{de}{N}\right\rfloor$$
As $\varphi(N)=N-p-q+1$ is also know that:
\begin{align*}
\left\lfloor\frac{de}{N}\right\rfloor&=\left\lfloor\frac{1}{N}+k\frac{\varphi(N)}{N}\right\rfloor\\
&=\left\lfloor\frac{1}{N}+k\frac{pq-p-q+1}{pq}\right\rfloor\\
&=\left\lfloor\frac{1}{N}+k\left(\frac{1}{N}-\frac{1}{q}-\frac{1}{p}\right)\right\rfloor+k
\end{align*}


So it would be nice to have $\left\lfloor\frac{1}{N}+k\left(\frac{1}{N}-\frac{1}{q}-\frac{1}{p}\right)\right\rfloor=0$ as this would show the preposition.



Does this result even hold true?
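A toy instance (a sketch; tiny primes, nothing cryptographic) showing why plain flooring was bothering you: since $\frac{de}{N}=\frac1N+k\frac{\varphi(N)}{N}$ sits just below $k$, it is rounding up, not flooring, that recovers $k$ when $e$ is small compared to $\sqrt N$:

```python
# Recover k in d*e = 1 + k*phi(N) from d*e/N for a small encryption exponent.
from math import ceil, floor

p, q = 101, 113
N, phi = p * q, (p - 1) * (q - 1)
e = 3
d = pow(e, -1, phi)      # 7467
k = (d * e - 1) // phi   # true k = 2

print(d * e / N)                              # 1.9627..., just below k
print(floor(d * e / N), ceil(d * e / N), k)   # 1 2 2
```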

sequences and series - Proof that $\lim_{n\to\infty} n\left(\frac{1}{2}\right)^n = 0$


Please show how to prove that $$\lim_{n\to\infty} n\left(\frac{1}{2}\right)^n = 0$$


Answer



Consider extending the sequence $\{n/2^n\}$ to the function $f(x)=x/2^x$.


Then use L'Hôpital's rule: $\lim_{x\to\infty} x/2^x$ has indeterminate form $\infty/\infty$. Taking the limit of the quotient of derivatives we get $\lim_{x\to\infty} 1/(\ln 2\cdot 2^x)=0$. Thus $\lim_{x\to\infty} x/2^x=0$ and so $n/2^n\to 0$ as $n\to\infty$.



Saturday, December 22, 2018

Intuition behind logarithm inequality: $1 - \frac1x \leq \log x \leq x-1$




One of fundamental inequalities on logarithm is:
$$ 1 - \frac1x \leq \log x \leq x-1 \quad\text{for all $x > 0$},$$
which you may prefer write in the form of
$$ \frac{x}{1+x} \leq \log{(1+x)} \leq x \quad\text{for all $x > -1$}.$$



The upper bound is very intuitive -- it's easy to derive from Taylor series as follows:
$$ \log(1+x) = \sum_{n=1}^\infty (-1)^{n+1}\frac{x^n}{n} \leq (-1)^{1+1}\frac{x^1}{1} = x.$$



My question is: "what is the intuition behind the lower bound?" I know how to prove the lower bound of $\log (1+x)$ (maybe by checking the derivative of the function $f(x) = \frac{x}{1+x}-\log(1+x)$ and showing it's decreasing) but I'm curious how one can obtain this kind of lower bound. My ultimate goal is to come up with a new lower bound on some logarithm-related function, and I'd like to apply the intuition behind the standard logarithm lower-bound to my setting.


Answer




Take the upper bound:
$$
\ln {x} \leq x-1
$$
Apply it to $1/x$:
$$
\ln \frac{1}{x} \leq \frac{1}{x} - 1
$$
This is the same as
$$

\ln x \geq 1 - \frac{1}{x}.
$$


complex analysis - Proof: For every $\alpha \in \mathbb{C}$ with $\alpha \not= 0$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha \beta = 1$




I am trying to prove the following:




For every $\alpha \in \mathbb{C}$ with $\alpha \not= 0$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha \beta = 1$.




There are two steps to this proof:




  1. I first need to prove that, for every $\alpha \in \mathbb{C}$, where $\alpha \not= 0$, there does exist $\beta \in \mathbb{C}$ such that $\alpha\beta = 1$.


  2. I then prove that, for every $\alpha \in \mathbb{C}$, where $\alpha \not= 0$, there exists a unique $\beta \in \mathbb{C}$ such that $\alpha\beta = 1$.



My proof follows.



Let $\alpha \in \mathbb{C}$, where $\alpha \not= 0$, and $\beta \in \mathbb{C}$. Also, I will assume that it is valid to assume knowledge of the fact that the inverse of a complex number is also a complex number. Otherwise, I'm not sure how one would do the proof.




  1. \begin{alignat}{2}
    \alpha\beta &= \alpha \alpha^{-1} & &\text{(The inverse of a complex number is also a complex number.} \\

    & & &\text{Therefore, since $\alpha \in \mathbb{C}$ and $\beta \in \mathbb{C}$, we can set $\beta = \alpha^{-1}.)$} \\
    &= 1 \tag*{$\blacksquare$}
    \end{alignat}


  2. Here, I prove that $\beta$ is unique by assuming that there are two objects $\beta_1$ and $\beta_2$, and then show that they must therefore be the same object.




$$\begin{align} \alpha \beta_1 = 1, \\ \alpha \beta_2 = 1 \end{align}$$



Subtracting the two equations, we get




$$\begin{align} &\alpha \beta_1 - \alpha \beta_2 = 0 \\ &\Rightarrow \alpha(\beta_1 - \beta_2) = 0, \\ &\Rightarrow \beta_1 - \beta_2 = 0 \ \ \ \text{(Since $\alpha \not= 0$.)} \\ &\Rightarrow \beta_1 = \beta_2 \ \ \ \tag*{$\blacksquare$} \end{align}$$



I would greatly appreciate it if people could please take the time to review my proof. If there are any errors, please point out the specific error and explain the correct way.


Answer



As José Carlos Santos points out in his answer, the fact that every $\alpha \in \mathbb{C}$ with $\alpha \ne 0$ has an inverse is precisely what you have to prove, so you can't assume it in your proof.



J. W. Tanner's answer is technically correct, but it does rather assume that you already "magically" know what the inverse $\beta$ of $\alpha$ is.



In contrast, Dr. Sonnhard Graubner's answer shows how you could "discover" for yourself that the inverse $\beta$ of $\alpha$ exists, and that it has the form given in J. W. Tanner's answer.




However, I find some aspects of his answer confusing, so - with apologies - I present a slightly different version of essentially the same argument. (I hope that I haven't just introduced confusions of my own!)



First, I make some further remarks on the proof you've presented:






One flaw in your proof that hasn't been mentioned is that you repeat essentially the same argument in 1 and 2.



In 1, you conclude from $\alpha\beta = \alpha\alpha^{-1}$ that $\beta = \alpha^{-1}$, but you don't give a reason why.




In 2, you conclude from $\alpha\beta_1 = \alpha\beta_2$ that $\beta_1 = \beta_2$, and this time you do give a reason. However, as José Carlos Santos points out, you give no justification for concluding that if $\alpha(\beta_1 - \beta_2) = 0$ then $\beta_1 - \beta_2 = 0$. The only obvious way to justify this is by multiplying both sides by an inverse of $\alpha$. In 2 (unlike 1), you can do this without logical circularity. Still, the proof is unnecessarily complicated.






Let $\alpha = x + iy$. We are given that $\alpha \ne 0$, i.e. that $x \ne 0$ or $y \ne 0$.



Let $\beta = u + iv$: then $\alpha\beta = (xu - yv) + i(xv + yu)$.



The equation $\alpha\beta = 1$ is therefore equivalent to two simultaneous linear equations in real variables $u$ and $v$:
\begin{align*}

xu - yv & = 1, \\
xv + yu & = 0.
\end{align*}

It is up to you now to solve these equations for $u$ and $v$, with the help of the condition that $x \ne 0$ or $y \ne 0$.



The fact that a solution exists means that an inverse $\beta$ of $\alpha$ exists.



The fact that the solution is unique means that the inverse $\beta$ is unique - and now one can denote it by $\alpha^{-1}$.


algebra precalculus - Why does cancelling change this equation?

The equation



$$2t^2 + t^3 = t^4$$



is satisfied by $t = 0$



But if you cancel a $t^2$ on both sides, making it

$$2 + t = t^2$$
$t = 0$ is no longer a solution.



What gives? I thought nothing really changed, so the same solutions should apply.



Thanks
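One way to see what happened (a sketch): dividing by $t^2$ silently assumes $t\neq0$; factoring keeps every root:
$$2t^2 + t^3 = t^4 \;\Longleftrightarrow\; t^2\,(t^2 - t - 2) = 0 \;\Longleftrightarrow\; t^2\,(t-2)(t+1) = 0,$$
so the solution set is $\{-1, 0, 2\}$, and the cancelled factor $t^2$ is exactly where the solution $t=0$ lived.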

calculus - Solving the infinite sum $\sum_{k=0}^{\infty} \left(\frac{1}{3}\right)^{k+3}$



I'm stuck on the question $\sum_{k=0}^{\infty} \left(\frac{1}{3}\right)^{k+3}$



I know that $\sum_{k=0}^{\infty} \left(\frac{1}{3}\right)^k$ is solved by using $\sum_{k=0}^{\infty} a^{k} = \frac{1}{1-a}$ and the answer is $\frac{3}{2}$.



So is there a way I could apply that to the above question or is there a different way to approach the problem?


Answer



$$\sum_{k=0}^n\left(\frac{1}{3}\right)^{k+3}=\frac{1}{3^3}\sum_{k=0}^n\left(\frac{1}{3}\right)^{k}\to\frac{1}{27}\frac{1}{1-\frac{1}{3}}=\frac{1}{27}\frac{3}{2}=\frac{1}{18}$$


Friday, December 21, 2018

Limit $\lim_{x\to\infty}x\tan^{-1}(f(x)/(x+g(x)))$

I am investigating the limit


$$\lim_{x\to\infty}x\tan^{-1}\left(\frac{f(x)}{x+g(x)}\right)$$


given that $f(x)\to0$ and $g(x)\to0$ as $x\to\infty$. My initial guess is the limit exists since the decline rate of $\tan^{-1}$ will compensate the linearly increasing $x$. But I'm not sure if the limit can be non zero. My second guess is the limit will always zero but I can't prove it. Thank you.


EDIT 1: this problem can be reduced to proving that $\lim_{x\to\infty}x\tan^{-1}(M/x)=M$ for any $M\in\mathbb{R}$, which I cannot prove yet.


EDIT 2: indeed $\lim_{x\to\infty}x\tan^{-1}(M/x)=M$ for any $M\in\mathbb{R}$. Observe that



$$\lim_{x\to\infty}x\tan^{-1}(M/x)=\lim_{x\to0}\frac{\tan^{-1}(Mx)}{x}.$$ By using L'Hopital's rule, the right hand side gives $M$. So the limit which is being investigated is equal to zero for any $f(x)$ and $g(x)$ as long as $f(x)\to0$ and $g(x)\to0$ as $x\to\infty$. The problem is solved.

logarithms - $\sqrt{\log n}$ vs $\log\sqrt{n}$




I have to check if $\sqrt{\log(n)} = \Theta(\log\sqrt n)$.
Following log rules I can write:
Following log rules I can write:



$ \sqrt{ \log(n) } = \log(n)^{\frac{1}{2}}$
$\log({\sqrt n)} = \log(n^{\frac{1}{2}}) = \frac{1}{2}\log(n) $



Looking at graphs I can see the $O$ notation is correct but not the $\Omega$.
I would appreciate some help with how to disprove it, as I have been stuck on this for quite some time.



Thanks in advance


Answer




You want to prove that it's not true that $\sqrt{\log(n)} = \Omega(\log\sqrt n)$. It is enough to prove that $\frac{\log\sqrt n}{\sqrt{\log n}}$ is unbounded. But this is true, simply because this ratio is
$$ \frac{\log\sqrt n}{\sqrt{\log n}} = \frac{\sqrt{\log n}}{2} \to \infty,$$
for $n$ large.


Fourth degree polynomial with rational coefficients and a real root



If a quartic has rational coefficients and one real root, how would one go about showing that the real root is rational?



I understand that the condition is equivalent to showing that having a polynomial with one irrational double root and two imaginary roots or one irrational quadruple root is impossible. The latter seems straightforward, since the ratio of the second coefficient to the first (which is rational) is the sum of the roots (which is irrational). Is there a "nice" way of showing the former?


Answer



If $r$ is a double root and the other two roots are complex, i.e.
$p(x) = (x-r)^2 (x - w) (x - \overline{w})$, then $x - r = \gcd(p(x), p'(x))$ which has rational coefficients.
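A concrete instance (a sketch assuming SymPy; the quartic is my own example, not from the question):

```python
# gcd(p, p') extracts the rational double root of a quartic with rational
# coefficients, one real double root, and two complex-conjugate roots.
from sympy import symbols, gcd, diff, expand

x = symbols('x')
p = expand((x - 3)**2 * (x**2 + 1))  # double root 3, complex roots ±i
print(gcd(p, diff(p, x)))            # x - 3
```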


modular arithmetic - How can I tell if a number in base 5 is divisible by 3?


I know of the sum of digits divisible by 3 method, but it seems to not be working for base 5.


How can I check if a number in base 5 is divisible by 3 without converting it to base 10 (or 3, for that matter)?


Answer



Divisibility rules generally rely on the remainders of the weights of digits having a certain regularity. The standard method for divisibility by $3$ in the decimal system works because the weights of all digits have remainder $1$ modulo $3$. The same is true for $9$. For $11$, things are only slightly more complicated: Since odd digits have remainder $1$ and even digits have remainder $-1$, you need to take the alternating sum of digits to test for divisibility by $11$.



In base $5$, we have the same situation for $3$ as we have for $11$ in base $10$: The remainder of the weights of odd digits is $1$ and that of even digits is $-1$. Thus you can check for divisibility by $3$ by taking the alternating sum of the digits.


More generally, in base $b$ the sum of digits works for the divisors of $b-1$ and the alternating sum of digits works for the divisors of $b+1$.
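A small script applying the alternating-sum rule in base $5$ and checking it against ordinary arithmetic (a sketch):

```python
# Divisibility by 3 in base 5 via the alternating digit sum,
# which works because 5 ≡ -1 (mod 3).
def div3_base5(digits):
    """digits: base-5 digits, most significant first."""
    # weight the least significant digit +1, then alternate signs
    alt = sum(d if i % 2 == 0 else -d
              for i, d in enumerate(reversed(digits)))
    return alt % 3 == 0

for n in range(10_000):
    digits, m = [], n
    while True:
        digits.append(m % 5)
        m //= 5
        if m == 0:
            break
    digits.reverse()
    assert div3_base5(digits) == (n % 3 == 0)

print("alternating-sum rule agrees with n % 3 for all n < 10000")
```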


elementary set theory - Is there a simple, constructive, 1-1 mapping between the reals and the irrationals?



Is there a simple, constructive, 1-1 mapping between the reals and the irrationals?


I know that the Cantor–Bernstein–Schroeder theorem implies the existence of a 1-1 mapping between the reals and the irrationals, but the proofs of this theorem are nonconstructive.


I wondered if a simple (not involving an infinite set of mappings) constructive (so the mapping is straightforwardly specified) mapping existed.


I have considered things like mapping the rationals to the rationals plus a fixed irrational, but then I could not figure out how to prevent an infinite (possible uncountably infinite) regression.


Answer



Map numbers of the form $q + k\sqrt{2}$ for some $q\in \mathbb{Q}$ and $k \in \mathbb{N}$ to $q + (k+1)\sqrt{2}$ and fix all other numbers.


Thursday, December 20, 2018

induction - How can I prove that $4^{n} + 5$ is divisible by $3$.



I am trying to prove that $4^{n} + 5$ is divisible by $3$.




I've already proved the base case, so I'm working on the inductive step.



I've done the following:



$4^{n} + 5$



$4^{n+1} + 5$



$4\cdot4^{n} + 5$




But I am unsure where to go from here to prove that it is divisible by 3 since I am unsure how to get a $3$ or multiple of $3$ from this.


Answer



From @JMoravitz and continuing from the question above,



$4 \cdot 4^{n} + 5$



$(3 + 1)4^{n} + 5$



$3(4^{n}) + (4^{n} + 5)$




From the induction hypothesis, we know $(4^{n} + 5)$ is divisible by $3$, and trivially $3(4^{n})$ is also divisible by $3$.



$Q.E.D.$


Wednesday, December 19, 2018

calculus - Convergence of the integral $\int_0^\infty \frac{\sin^2x}{x^2}~\mathrm dx$.





Determine whether the integral $$\int_0^\infty \frac{\sin^2x}{x^2}~\mathrm dx$$ converges.




I know it converges, since in general we can use complex analysis, but I'd like to know if there is a simpler method that doesn't involve complex numbers. But I cannot come up with a function that I could compare the integral with.


Answer



Hint: $$x>1\implies 0\le\frac{\sin^2(x)}{x^2}\le\frac1{x^2},\qquad 0<x\le1\implies 0\le\frac{\sin^2(x)}{x^2}\le 1.$$
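Numerically the integral comes out to $\pi/2$, consistent with convergence (a sketch; the truncation at $T$ accounts for the small gap):

```python
# Midpoint-rule approximation of the integral of sin^2(x)/x^2 over [0, ∞),
# compared with pi/2.
import math

h, T = 1e-3, 2000.0
total = h * sum(math.sin((k + 0.5) * h) ** 2 / ((k + 0.5) * h) ** 2
                for k in range(int(T / h)))
print(total)        # ~1.57055
print(math.pi / 2)  # 1.5707963...
```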

calculus - Finding $\limsup_{n\rightarrow\infty} n^{\frac{\log(n)}{n}}$

I was attempting to show that the power series


\begin{equation*} \sum_{n=1}^\infty n^{\log(n)}z^n \end{equation*}


has a radius of convergence of $1$.


In order to do this I decided to use the $\alpha$ method. This meant evaluating the limit


\begin{equation*} \limsup_{n\rightarrow\infty} n^{\frac{\log(n)}{n}} \end{equation*}


I was able to prove that


\begin{equation*} \lim_{n\rightarrow\infty} \dfrac{\log(n)}{n} = 0 \end{equation*}


but I realized that was necessary but not sufficient to show that the limsup in question is $1$.


I then was able to prove that


\begin{equation*} \lim_{n\rightarrow\infty} n^{\frac{1}{n}} = 1 \end{equation*}



However I could not figure out any way of using that fact either.


I realized I could rewrite this limit as


\begin{equation*} \exp\left(\lim_{n\rightarrow\infty} \dfrac{\log(n)^2}{n}\right) \end{equation*}


However since I have not proven L'hôpital's rule, I have no way of evaluating this limit either.


This has left me pretty stuck. I'm not sure how I could tackle this problem from here. Where should I start for this limit? Is there a way to do it without L'hôpital's rule?


I'd really rather not know the whole proof if possible.
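
As a sanity check (numerical evidence only, not the proof I am after), the quantity does seem to approach $1$:

```python
import math

# n^(log(n)/n) for growing n, natural log as in the exp(log(n)^2 / n) rewrite
for n in [10, 10**3, 10**6, 10**9]:
    print(n, n ** (math.log(n) / n))
# 1.699..., 1.0489..., 1.00019..., 1.0000004... -> consistent with a limit of 1
```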

linear algebra - Eigenvalue decomposition of $D , A , D$ with $A$ symmetric and $D$ diagonal



Let $A$ be a real, symmetric matrix. It admits the eigenvalue decomposition



$A = U \Lambda U^T$



where the eigenvectors are chosen to be orthogonal. Let $D$ be a diagonal matrix and



$B = D A D = D U \Lambda U^T D = (DU) \Lambda (DU)^T = V \Lambda V^T. \tag{$\ast$}$




Assume none of $A$, $B$, and $D$ is the identity matrix, $I$. In general, we have that



$V^T = U^T D \neq V^{-1} = U^T D^{-1}$ and



$V V^T = D^2 \neq I$.



I would like to clarify my understanding of the situation. Are the following statements correct in general?





  1. Eq. ($\ast$) is not an eigenvalue decomposition of $B$.

  2. Eq. ($\ast$) is not a diagonalization of $B$.

  3. $A$ and $B$ have different sets of eigenvalues, and one can say nothing about the eigenvalues of $B$ based on the knowledge of $\Lambda$.



Thank you.



Regards,
Ivan


Answer




Note that $U^T = U^{-1}$, so the congruence



$$A = U \Lambda U^T = U \Lambda U^{-1}$$



is also a similarity relation. In other words, the eigenvalue decomposition is a unitary similarity of $A$ and $\Lambda$.



Since the same cannot be said about $V := DU$, relation $(*)$ remains a congruence that is not a similarity, hence it need not preserve the eigenvalues. However, it is a diagonalization by congruence (usually, when we say "diagonalization" without the additional "by something", we assume that it's "by similarity").



However, there is a relation between the eigenvalues of $A$ and $B$, albeit a weaker one. By Sylvester's law of inertia, if $D$ is nonsingular, then $A$ and $B$ have the same number of negative, zero, and positive eigenvalues.
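
A small NumPy experiment (the matrices are illustrative choices of mine) makes both points concrete: the spectra differ, but the inertia agrees:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2                    # real symmetric
D = np.diag([2.0, -1.0, 0.5])        # diagonal, nonsingular, D != I
B = D @ A @ D

eigA = np.linalg.eigvalsh(A)
eigB = np.linalg.eigvalsh(B)
print(eigA, eigB)                    # different eigenvalues in general
print(np.sign(eigA), np.sign(eigB))  # same counts of negative/zero/positive
```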


algebra precalculus - If $costheta=frac{cosalpha+cos beta}{1+cosalphacosbeta}$, then prove that one value of $tan(theta/2)$ is $tan(alpha/2)tan(beta/2)$




If $$\cos\theta = \frac{\cos\alpha + \cos \beta}{1 + \cos\alpha\cos\beta}$$ then prove that one of the values of $\tan{\frac{\theta}{2}}$ is $\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}$.





I don't even know how to start this question. Please help.


Answer



I think I have found out how to approach it. I hope this proof is satisfactory.



Using the half angle formula,
$$\tan{\frac{\theta}{2}} = \pm \sqrt{\frac{1 - \cos\theta}{1 + \cos\theta}} \longrightarrow \text{eq.1}$$
Evaluating $\frac{1 - \cos\theta}{1 + \cos\theta}$ first,
$$\frac{1 - \cos\theta}{1 + \cos\theta} = \frac{1 - (\frac{\cos\alpha + \cos\beta}{1 + \cos\alpha\cos\beta})}{1 + (\frac{\cos\alpha + \cos\beta}{1 + \cos\alpha\cos\beta})}\text{ }[\text{since } \cos\theta= \frac{\cos\alpha +\cos\beta}{1 + \cos\alpha\cos\beta}]$$
$$=\frac{1+\cos\alpha\cos\beta - \cos\alpha -\cos\beta}{1+\cos\alpha\cos\beta + \cos\alpha+ \cos\beta}$$
$$=\frac{(1-\cos\alpha)(1-\cos\beta)}{(1+\cos\alpha)(1+\cos\beta)}$$



Substituting this value of $\frac{1 - \cos\theta}{1 + \cos\theta}$ into equation 1,
$$\tan{\frac{\theta}{2}} = \pm\sqrt{\frac{(1-\cos\alpha)(1-\cos\beta)}{(1+\cos\alpha)(1+\cos\beta)}}$$

$$=\sqrt{\frac{(1-\cos\alpha)^2(1-\cos\beta)^2}{(1-\cos^2\alpha)(1-\cos^2\beta)}}$$
$$=\pm\frac{(1-\cos\alpha)(1-\cos\beta)}{\sin\alpha\sin\beta}$$
Taking the positive value of $\tan{\frac{\theta}{2}}$,
$$\frac{(1-\cos\alpha)(1-\cos\beta)}{\sin\alpha\sin\beta} = \frac{4\sin^2\frac{\alpha}{2}\sin^2\frac{\beta}{2}}{4\sin\frac{\alpha}{2}\cos{\frac{\alpha}{2}}\sin{\frac{\beta}{2}}\cos{\frac{\beta}{2}}} \text{(using half angle and double angle formula)}$$
$$=\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}$$



Therefore, $\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}$ is one of the values of $\tan{\frac{\theta}{2}}$.
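
A numerical spot-check (with sample angles of my choosing) agrees:

```python
import numpy as np

alpha, beta = 0.7, 1.1   # arbitrary test angles
cos_theta = (np.cos(alpha) + np.cos(beta)) / (1 + np.cos(alpha) * np.cos(beta))
theta = np.arccos(cos_theta)                 # one admissible value of theta
print(np.tan(theta / 2))                     # the two printed values agree,
print(np.tan(alpha / 2) * np.tan(beta / 2))  # both ≈ 0.2238
```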


probability - Coupon Collector's Problem with X amount of coupons already collected.





I am having an issue with understanding how to calculate a specific case of the Coupon Collector's Problem. Say I have a set of 198 coupons. I learned how to find the expected number of draws needed to see all 198 coupons, using the following formula:



$$n \sum_{k=1}^n\frac1k$$



It turns out that for $n = 198$, the expected number of draws is approximately 1162. Let's assume, however, that I already have some of the coupons, say 50. How should I go about solving the same problem, given that I've already collected $X$ of them?


Answer




Based on the corresponding thread on Wikipedia. The expected time to draw all $n$ coupons equals:



$$E[T] = E[t_1] + E[t_2] + \ldots + E[t_n]$$



with $t_i$ the time needed to collect the $i^{th}$ coupon once $i-1$ coupons have been drawn. Once $i-1$ coupons have been drawn, there are $n-i+1$ unseen coupons left. The probability $p_i$ of selecting a new coupon thus equals $\frac{n-i+1}{n}$, and the expected number of draws needed to draw a new coupon equals $\frac{1}{p_i} = \frac{n}{n-i+1}$. As such, the expected value for the time needed to draw all $n$ coupons can be calculated as:



$$E[T] = \frac{n}{n} + \frac{n}{n-1} + \ldots + \frac{n}{1} = n \sum_{k=1}^{n}{\frac{1}{k}}$$



In this case, however, we have already drawn $X$ unique coupons. As such, the estimated number of draws needed to find all $n$ coupons equals:




$$E[T] = E[t_{X+1}] + E[t_{X+2}] + \ldots + E[t_n] = n \sum_{k=1}^{n-X} \frac{1}{k}$$
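
A short sketch of the computation (exact harmonic sums via fractions; the function name is mine):

```python
from fractions import Fraction

# Expected number of draws to finish n coupons when `already` are collected:
# n * H_{n - already}, the tail of the harmonic sum derived above.
def expected_draws(n, already=0):
    return n * sum(Fraction(1, k) for k in range(1, n - already + 1))

print(float(expected_draws(198)))      # ≈ 1161.9, the full-collection figure
print(float(expected_draws(198, 50)))  # ≈ 1104.4 draws once 50 are in hand
```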


complex analysis - Improper integration of sin(z)/z

I need to calculate $\int_{-\infty}^\infty \frac{\sin(z)}{z}\,dz$ using the residue of $\frac{\sin(z)}{z}$.


Then I write $\int_{-\infty}^\infty \frac{\sin(z)}{z}\,dz = \lim_{R\to\infty}\operatorname{Im}\left(\int_{-R}^R \frac{e^{iz}}{z}\,dz+\int_{C_R}\frac{e^{iz}}{z}\,dz\right)=2\pi i \operatorname{Res}\left(\frac{e^{iz}}{z},0\right)$, where $C_R$ is the semicircle of radius $R>0$ in the upper half-plane.


Then I need to show that $\lim_{R\to\infty}\left|\int_{C_R}\frac{e^{iz}}{z}\,dz\right|=0$.
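
Numerically the arc term does appear to vanish (evidence only, not the bound I need; a sketch assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

# On C_R, z = R e^{it} for t in [0, pi] and dz = i R e^{it} dt, so the R's
# cancel and the arc integral is \int_0^pi i * exp(i R e^{it}) dt.
def arc(R):
    g = lambda t: 1j * np.exp(1j * R * np.exp(1j * t))
    re, _ = quad(lambda t: g(t).real, 0, np.pi, limit=400)
    im, _ = quad(lambda t: g(t).imag, 0, np.pi, limit=400)
    return abs(re + 1j * im)

for R in [1, 10, 100]:
    print(R, arc(R))   # decreases toward 0 as R grows
```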


Can someone help me?

integration - Frullani 's theorem in a complex context.



It is possible to prove that $$\int_{0}^{\infty}\frac{e^{-ix}-e^{-x}}{x}dx=-i\frac{\pi}{2},$$ and in this case Frullani's theorem does not apply as stated: for the function $f(x)=e^{-x}$ it covers $$\int_{0}^{\infty}\frac{e^{-ax}-e^{-bx}}{x}dx$$ only for $a,b>0$. Yet if we formally apply the theorem with $a=i$ and $b=1$, we get $$\int_{0}^{\infty}\frac{e^{-ix}-e^{-x}}{x}dx=\log\left(\frac{1}{i}\right)=-i\frac{\pi}{2},$$ which is the right result.





Questions: is it only a coincidence? Is it possible to generalize the theorem to complex numbers? Is it a known result? And if it is, where can I find a proof of it?




Thank you.


Answer




The following development provides a possible way forward to generalizing Frullani's Theorem for complex parameters.




Let $a$ and $b$ be complex numbers such that $\arg(a)\ne \arg(b)+n\pi$, $ab\ne 0$, and let $\epsilon$ and $R$ be positive numbers.




In the complex plane, let $C$ be the closed contour defined by the line segments (i) from $a\epsilon$ to $aR$, (ii) from $aR$ to $bR$, (iii) from $bR$ to $b\epsilon$, and (iv) from $b\epsilon$ to $a\epsilon$.



Let $f$ be analytic in and on $C$ for all $\epsilon$ and $R$. Using Cauchy's Integral Theorem, we can write



$$\begin{align}
0&=\oint_{C}\frac{f(z)}{z}\,dz\\\\
&=\int_\epsilon^R \frac{f(ax)-f(bx)}{x}\,dx\\\\
&+\int_0^1 \frac{f(aR+(b-a)Rt)}{a+(b-a)t}\,(b-a)\,dt\\\\
&-\int_0^1 \frac{f(a\epsilon+(b-a)\epsilon t)}{a+(b-a) t}\,(b-a)\,dt\tag1

\end{align}$$



Rearranging $(1)$ reveals that



$$\begin{align}
\int_\epsilon^R \frac{f(ax)-f(bx)}{x}\,dx&=\int_0^1 \frac{f(a\epsilon+(b-a)\epsilon t)}{a+(b-a) t}\,(b-a)\,dt\\\\ &-\int_0^1 \frac{f(aR+(b-a)Rt)}{a+(b-a)t}\,(b-a)\,dt \tag 2
\end{align}$$



If $\lim_{R\to \infty}\int_0^1 \frac{f(aR+(b-a)Rt)}{a+(b-a)t}\,(b-a)\,dt=0$, then we find that




$$\begin{align}
\int_0^\infty \frac{f(ax)-f(bx)}{x}\,dx&=f(0)(b-a)\int_0^1\frac{1}{a+(b-a)t}\,dt\\\\
&=f(0)\log(|b/a|)\\\\
&+if(0)\left(\arctan\left(\frac{\text{Re}(a\bar b)-|a|^2}{\text{Im}(a\bar b)}\right)-\arctan\left(\frac{|b|^2-\text{Re}(a\bar b)}{\text{Im}(a\bar b)}\right)\right) \tag 3
\end{align}$$



Since $(b-a)\int_0^1 \frac{1}{a+(b-a)t}\,dt$, $ab\ne 0$, is continuous in $a$ and $b$, $(3)$ is valid for $\arg(a)=\arg(b)+n\pi$ also.








Note that the tangent of the term in large parentheses on the right-hand side of $(3)$ is



$$\begin{align}
\frac{\text{Im}(\bar a b)}{\text{Re}(\bar a b)}&=\tan\left(\arctan\left(\frac{\text{Re}(a\bar b)-|a|^2}{\text{Im}(a\bar b)}\right)-\arctan\left(\frac{|b|^2-\text{Re}(a\bar b)}{\text{Im}(a\bar b)}\right)\right)\\\\
&=\tan\left(\arctan\left(\frac{\text{Im}(b)}{\text{Re}(b)}\right)-\arctan\left(\frac{\text{Im}(a)}{\text{Re}(a)}\right)\right)
\end{align}$$
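
As a sanity check of $(3)$, here is a sketch assuming SciPy, with $f(x)=e^{-x}$, $a=1$, $b=1+i$ chosen so that the integral converges absolutely (since $f(0)=1$, the prediction is $\log(b/a)$):

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 1.0 + 1.0j
g = lambda x: (np.exp(-a * x) - np.exp(-b * x)) / x  # finite limit b - a at 0

re, _ = quad(lambda x: g(x).real, 0, np.inf)
im, _ = quad(lambda x: g(x).imag, 0, np.inf)
print(re + 1j * im)   # ≈ 0.3466 + 0.7854j
print(np.log(b / a))  # log(sqrt(2)) + i*pi/4 ≈ 0.3466 + 0.7854j
```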



real analysis - How discontinuous can a derivative be?



There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many "highly" discontinuous Darboux functions are known--the only one I am aware of being the Conway base 13 function--and few (none?) of these are derivatives of differentiable functions. In fact they generally cannot be since an application of Baire's theorem gives that the set of continuity points of the derivative is dense $G_\delta$.



Is it known how sharp that last result is? Are there known Darboux functions which are derivatives and are discontinuous on "large" sets in some appropriate sense?


Answer



What follows is taken (mostly) from more extensive discussions in the following sci.math posts:



http://groups.google.com/group/sci.math/msg/814be41b1ea8c024 [23 January 2000]




http://groups.google.com/group/sci.math/msg/3ea26975d010711f [6 November 2006]



http://groups.google.com/group/sci.math/msg/05dbc0ee4c69898e [20 December 2006]



Note: The term interval is restricted to nondegenerate intervals (i.e. intervals containing more than one point).



The continuity set of a derivative on an open interval $J$ is dense in $J.$ In fact, the continuity set has cardinality $c$ in every subinterval of $J.$ On the other hand, the discontinuity set $D$ of a derivative can have the following properties:





  1. $D$ can be dense in $\mathbb R$.


  2. $D$ can have cardinality $c$ in every interval.


  3. $D$ can have positive measure. (Hence, the function can fail to be Riemann integrable.)


  4. $D$ can have positive measure in every interval.


  5. $D$ can have full measure in every interval (i.e. measure zero complement).


  6. $D$ can have a Hausdorff dimension zero complement.


  7. $D$ can have an $h$-Hausdorff measure zero complement for any specified Hausdorff measure function $h.$




More precisely, a subset $D$ of $\mathbb R$ can be the discontinuity set for some derivative if and only if $D$ is an $F_{\sigma}$ first category (i.e. an $F_{\sigma}$ meager) subset of $\mathbb R.$




This characterization of the discontinuity set of a derivative can be found in the following references: Benedetto [1] (Chapter 1.3.2, Proposition, 1.10, p. 30); Bruckner [2] (Chapter 3, Section 2, Theorem 2.1, p. 34); Bruckner/Leonard [3] (Theorem at bottom of p. 27); Goffman [5] (Chapter 9, Exercise 2.3, p. 120 states the result); Klippert/Williams [7].



Regarding this characterization of the discontinuity set of a derivative, Bruckner and Leonard [3] (bottom of p. 27) wrote the following in 1966: "Although we imagine that this theorem is known, we have been unable to find a reference." I have found the result stated in Goffman's 1953 text [5], but nowhere else prior to 1966 (including Goffman's Ph.D. Dissertation).



Interestingly, in a certain sense most derivatives have the property that $D$ is large in all of the ways listed above (#1 through #7).



In 1977 Cliff Weil [8] published a proof that, in the space of derivatives with the sup norm, all but a first category set of such functions are discontinuous almost everywhere (in the sense of Lebesgue measure). When Weil's result is paired with the fact that derivatives (being Baire $1$ functions) are continuous almost everywhere in the sense of Baire category, we get the following:



(A) Every derivative is continuous at the Baire-typical point.




(B) The Baire-typical derivative is not continuous at the Lebesgue-typical point.



Note that Weil's result is stronger than simply saying that the Baire-typical derivative fails to be Riemann integrable (i.e. $D$ has positive Lebesgue measure), or even stronger than saying that the Baire-typical derivative fails to be Riemann integrable on every interval. Note also that, for each of these Baire-typical derivatives, $\{D, \; {\mathbb R} - D\}$ gives a partition of $\mathbb R$ into a first category set and a Lebesgue measure zero set.



In 1984 Bruckner/Petruska [4] (Theorem 2.4) strengthened Weil's result by proving the following: Given any finite Borel measure $\mu,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has $\mu$-measure zero.



In 1993 Kirchheim [6] strengthened Weil's result by proving the following: Given any Hausdorff measure function $h,$ the Baire-typical derivative is such that the set $D$ is the complement of a set that has Hausdorff $h$-measure zero.



[1] John J. Benedetto, Real Variable and Integration With Historical Notes, Mathematische Leitfäden. Stuttgart: B. G. Teubner, 1976, 278 pages. [MR 58 #28328; Zbl 336.26001]




[2] Andrew M. Bruckner, Differentiation of Real Functions, 2nd edition, CRM Monograph Series #5, American Mathematical Society, 1994, xii + 195 pages. [The first edition was published in 1978 as Springer-Verlag's Lecture Notes in Mathematics #659. The second edition is essentially unchanged from the first edition with the exception of a new chapter on recent developments (23 pages) and 94 additional bibliographic items.] [MR 94m:26001; Zbl 796.26001]



[3] Andrew M. Bruckner and John L. Leonard, Derivatives, American Mathematical Monthly 73 #4 (April 1966) [Part II: Papers in Analysis, Herbert Ellsworth Slaught Memorial Papers #11], 24-56. [MR 33 #5797; Zbl 138.27805]



[4] Andrew M. Bruckner and György Petruska, Some typical results on bounded Baire $1$ functions, Acta Mathematica Hungarica 43 (1984), 325-333. [MR 85h:26004; Zbl 542.26004]



[5] Casper Goffman, Real Functions, Prindle, Weber & Schmidt, 1953/1967, x + 261 pages. [MR 14,855e; Zbl 53.22502]



[6] Bernd Kirchheim, Some further typical results on bounded Baire one functions, Acta Mathematica Hungarica 62 (1993), 119-129. [MR 94k:26008; Zbl 786.26002]




[7] John Clayton Klippert and Geoffrey Williams, On the existence of a derivative continuous on a $G_{\delta}$, International Journal of Mathematical Education in Science and Technology 35 (2004), 91-99.



[8] Clifford Weil, The space of bounded derivatives, Real Analysis Exchange 3 (1977-78), 38-41. [Zbl 377.26005]


Tuesday, December 18, 2018

algebra precalculus - Find consumer demand as a function of time, given the demand equation and price


An importer of Brazilian coffee estimates that local consumers will buy approximately $Q(p)= 4374/p^2$ kg of the coffee per week when the price is $p$ dollars per kg. It is estimated that $t$ weeks from now the price of this coffee will be $p(t) = 0.04t^2 + 0.2t + 12$ dollar per kg.




a) Express the weekly consumer demand for the coffee as a function of $t$.



b) How many kg of the coffee will consumers be buying from the importer $10$ weeks from now?



c) When will the demand for the coffee be $30.375$ kg?




Here's my solution for a):
\begin{align*}
q(p) & = \frac{4374}{p^2} = 4374\,p^{-2}\\
& = (4374 - 2)\,p^{-3}\\
& = \frac{4372}{p^3}
\end{align*}
and I don't really know how to continue. Please help me.
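
For reference, part (a) asks for the composition $Q(p(t))$, not a derivative. A minimal sketch (function names are mine):

```python
def p(t):
    return 0.04 * t**2 + 0.2 * t + 12   # price (dollars per kg) after t weeks

def Q(price):
    return 4374 / price**2              # weekly demand (kg) at a given price

def demand(t):
    return Q(p(t))                      # part (a): demand as a function of t

print(p(10), demand(10))   # p(10) = 18, so demand(10) = 4374/324 = 13.5 kg
print(demand(0))           # 4374/144 = 30.375 kg, relevant to part (c)
```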

binomial coefficients - Proving that $nmid(nCr)$ for all $r$ ($1 leq r leq n-1$), only if $n$ is prime



I'm trying to prove that $n\mid(nCr)$ for all $r$ ($1 \leq r \leq n-1$) if and only if $n$ is prime.



Now proving that if $n$ is prime then $n\mid(nCr)$ is pretty easy, but how would you go about proving that $n\mid(nCr)$ only if $n$ is prime?



Could you show that if $n$ is not prime, then there exists an $r$ such that $n$ does not divide $nCr$? If so how would you go about doing that? I thought it would be easy but then I realized that for some $r, n\mid(nCr)$ even if $n$ isn't prime which made things a little bit more complicated.




I've managed to show that $n\mid(n-1)!$ for composite $n \geq 6$, so even after $n$ cancels against factors of $r!(n-r)!$, in some cases another pair of factors in $(n-1)!$ supplies a multiple of $n$, and sometimes it doesn't. Maybe there's some kind of pattern, but unfortunately I can't find it.



(I'd prefer a method that doesn't use undergraduate and above level maths)


Answer



Suppose n is not prime, and let $n=mp$ where $p$ is a prime. Then $n\not|\binom{n}{p}$, since
$$\binom{n}{p}=n \bigg[\frac{(n-1)!}{p!(n-p)!}\bigg]=n\bigg[\frac{(mp-1)!}{p!((m-1)p)!}\bigg]=n\bigg[\frac{(mp-1)(mp-2)(mp-3)\cdots((m-1)p+1)}{p!}\bigg],$$ where the expression in brackets is not an integer since
$(mp-1)(mp-2)(mp-3)\cdots((m-1)p+1)$ is not divisible by $p$, while $p!$ is.
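
The argument is easy to test numerically (the helper name is mine; $p$ is taken to be the smallest prime factor of $n$):

```python
from math import comb

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# For composite n and a prime p | n, the answer shows n does not divide C(n, p).
for n in [4, 6, 9, 12, 15, 20, 21]:
    p = smallest_prime_factor(n)
    print(n, p, comb(n, p) % n)   # the remainder is never 0
```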


analysis - Convergence/Divergence of Sequence defined by a recurrence relation



Given the following sequence:
$$
a_{n+1} = a_n(2 - a_n)
$$

for which values $a_1 \in \mathbb{R}$ does this sequence converges or diverges.



By trial and error I found that for $a_1 \in (0, 2)$ it converges to $1$, for $a_1 \in \{ 0 , 2 \}$ it converges to $0$ and for all other values it goes to $-\infty$.



But how can we prove these facts? If a limit exists then I can show that it must be $1$ or $0$ by the following calculation. It holds
\begin{align*}
\lim_{n \to \infty} a_{n+1} = \lim_{n\to \infty} a_n(2-a_n)
\end{align*}
So if $\lim_{n\to \infty} a_n = a$, then
$$

a = 2a - a^2 \Leftrightarrow a^2 = a \Leftrightarrow a = 1 \lor a = 0.
$$
But how to show that a limit exists when $a_1 \in (0,2)$, and there is no limit when $a_1 < 0$ or $a_1 > 2$?


Answer



Now that you have a guess for what the limit should be, try making a substitution to reflect that. In this case, let



$$
a_n = 1 + b_n
$$




for all $n$. Substituting this in, the original recurrence $a_{n+1} = a_n(2 - a_n)$ becomes



$$
1 + b_{n+1} = (1 + b_n)(1 - b_n) = 1 - b_n^2,
$$



so that



$$
b_{n+1} = -b_n^2.

$$



By iterating this recurrence we can solve it explicitly as



$$
b_{n+1} = -b_1^{2^n}.
$$



Thus if $|b_1| > 1$ we see that $b_n \to -\infty$. This is equivalent to the statement




$$
a_1 \in (-\infty,0) \cup (2,\infty) \Longrightarrow a_n \to -\infty.
$$



If $|b_1| = 1$ then $b_n = -1$ for all $n > 1$. This is equivalent to the statement



$$
a_1 \in \{0,2\} \Longrightarrow a_n = 0 \text{ for all } n > 1.
$$




Finally if $|b_1| < 1$ then $b_n \to 0$, and this is equivalent to



$$
a_1 \in (0,2) \Longrightarrow a_n \to 1.
$$
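
A short simulation (starting values of my choosing) matches all three cases:

```python
def iterate(a1, steps=10):
    a = a1
    for _ in range(steps):
        a = a * (2 - a)   # a_{n+1} = a_n (2 - a_n)
    return a

for a1 in [0.1, 1.9, 2.0, 2.1, -0.5]:
    print(a1, iterate(a1))
# 0.1, 1.9 -> 1.0 (inside (0,2)); 2.0 -> 0.0; 2.1, -0.5 -> huge negative values
```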


Monday, December 17, 2018

Explanation on the Proof of $W(t)=W(t_0)expleft(int_{t_0}^{t} text{tr}(underline{A}(s)) dsright)$



In my study of Floquet Theory, I have been given a sketch proof of the following formula for the Wronskian, $$W(t)=W(t_0)\exp\left(\int_{t_0}^{t} \text{tr}(\underline{A}(s)) \ ds\right).$$




Proof:\begin{align}
\underline{x}(t)&=\underline{x}(t_0)+(t-t_0)\underline{x'}(t_0)+O(t-t_0)^2 \\
&=\underline{x}(t_0)+(t-t_0)\underline{A}(t_0)\underline{x}(t_0)+O(t-t_0)^2,

\end{align}

as $\underline{x'}=\underline{A}\underline{x}$ and thus $\underline{X'}=\underline{A}\underline{X}$ where $\underline{X}$ denotes the fundamental matrix.
Now, \begin{align}
W(t)&=\det(\underline{X}(t)) \\
&=\det((\underline{I}+(t-t_0)\underline{A}(t_0))\underline{X}(t_0)+O(t-t_0)^2) \tag{1}\\
&=W(t_0)\left(1+(t-t_0)\text{tr}(\underline{A}(t_0))+O(t-t_0)^2\right) \tag{2}.
\end{align}

Using Taylor expansion:
$$W(t)=W(t_0)+(t-t_0)W'(t_0)+O(t-t_0)^2. \tag{3}$$ Letting $t\rightarrow t_0$, $$W'(t)=W(t)\text{tr}(\underline{A}(t))\implies W(t)=W(t_0)\exp\left(\int_{t_0}^{t} \text{tr}(\underline{A}(s)) \ ds\right)$$





There are many parts that I do not understand. Are there any resources that could help explain this? For instance, how is the last line derived (where does $W'(t)$ come from)?


Answer



So, firstly the Taylor series expansion of $X(t)$ near $t=t_0$ is given by



$$X(t)=X(t_0)+X'(t_0)(t-t_0)+\mathcal{O}\big((t-t_0)^2\big).$$



Next, we can use $X'(t_0)=A(t_0)X(t_0)$ to get



$$X(t)=\big(I+(t-t_0)A(t_0)\big)X(t_0)+\mathcal{O}\big((t-t_0)^2\big). $$




To take the determinant of both sides, we can use the multiplicativity of the determinant, $\det\big(B\,X(t_0)\big)=\det B\,\det\big(X(t_0)\big)$:



$$ \begin{array}{ll} W(t) & =W(t_0)\det\big(I+(t-t_0)A(t_0)+\mathcal{O}((t-t_0)^2)\big). \\ & =W(t_0)\det\big(I+(\color{Red}{t-t_0})\big[\color{Blue}{A(t_0)+\mathcal{O}(t-t_0)}\big]\big) \end{array}$$



Note when you factor $X(t_0)$ out of $\mathcal{O}((t-t_0)^2)$, you just get $\mathcal{O}((t-t_0)^2)$ since $X(t_0)$ is constant.



Next we can use the fact $\det(I+\color{Red}{\varepsilon} \color{Blue}{X})=1+\color{Red}{\varepsilon}\mathrm{tr}(\color{Blue}{X})+\mathcal{O}(\color{Red}{\varepsilon}^2)$ (which follows from the Leibniz formula for the determinant, itself the final result of expansion by minors), where $\varepsilon=t-t_0$. Note factoring $(t-t_0)$ out of $\mathcal{O}((t-t_0)^2)$ will be $\mathcal{O}(t-t_0)$, but everything gets absorbed back in the end:



$$ \begin{array}{ll} W(t) & =W(t_0)\big[1+(\color{Red}{t-t_0})\mathrm{tr}\big(\color{Blue}{A(t_0)+\mathcal{O}(t-t_0)}\big)+\mathcal{O}((\color{Red}{t-t_0})^2)\big] \\ & =W(t_0)\big[1+(t-t_0)\mathrm{tr}\,A(t_0)+\mathcal{O}((t-t_0)^2)\big]. \end{array}$$




On the other hand, the Taylor expansion of $W(t)$ is



$$ W(t)=W(t_0)+W'(t_0)(t-t_0)+\mathcal{O}((t-t_0)^2). $$



Equating coefficients of $(t-t_0)$ yields $W'(t_0)=W(t_0)\mathrm{tr}\,A(t_0)$. Equivalently, from the other expansion of $W(t)$ we can subtract $W(t_0)$ from both sides and divide by $t-t_0$ to obtain



$$ \frac{W(t)-W(t_0)}{t-t_0}=W(t_0)\mathrm{tr}\,A(t_0)+\mathcal{O}(t-t_0). $$



Letting $t\to t_0$ yields $W'(t_0)=W(t_0)\mathrm{tr}\,A(t_0)$. Now replace $t_0$ with $t$ (since it was arbitrary):




$$ W'(t)=\mathrm{tr}\big(A(t)\big)\,W(t). $$



This is a separable ODE. Divide by $W(t)$ and notice that $W'(t)/W(t)$ is the derivative of $\ln W(t)$, then integrate from $t_0$ to $t$:



$$ W'(t)/W(t)=\mathrm{tr}\,A(t) $$



$$ \ln W(t)-\ln W(t_0)=\int_{t_0}^t \mathrm{tr}\,A(s)\,\mathrm{d}s $$



The LHS is $\ln(W(t)/W(t_0))$, so exponentiate and multiply by $W(t_0)$ to get




$$ W(t)=W(t_0)\exp\left(\int_{t_0}^t \mathrm{tr}\,A(s)\,\mathrm{d}s\right). $$
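
A numerical illustration (a sketch assuming SciPy; the $2\times2$ system with $\operatorname{tr} A(t) = -t$ is my own example):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate X' = A(t) X with X(0) = I and compare det X(t) against
# W(0) * exp(int_0^t tr A(s) ds) = exp(-t^2 / 2).
def A(t):
    return np.array([[0.0, 1.0], [-1.0, -t]])

def rhs(t, x):
    return (A(t) @ x.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, 2.0), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
W_numeric = np.linalg.det(sol.y[:, -1].reshape(2, 2))
W_formula = np.exp(-2.0 ** 2 / 2)
print(W_numeric, W_formula)   # both ≈ 0.13534
```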


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...