Saturday, August 31, 2019

coding theory - Extended Euclidean Algorithm to find multiplicative inverse of two polynomials



I asked Using Extended Euclidean Algorithm to find multiplicative inverse earlier, and I understand how to use the EEA for two integers. Now I'd like to do it with polynomials, but I'm getting some very large fractions following the same procedure as I did with the integers. The problem I'm working on is:
$(x^5 + x^2 + 1)^{-1} \bmod (x^{10} + x^3 + 1)$



$$\begin{align}
x^{10} + x^3 + 1&=(x^5-x^2-1)\cdot(x^5+x^2+1)+(x^4+x^3+2x^2+2)\\
(x^5+x^2+1)&=(x-1)\cdot(x^4+x^3+2x^2+2)+(-x^3+3x^2-2x+3)\\
(x^4+x^3+2x^2+2)&=(-x-4)\cdot(-x^3+3x^2-2x+3)+(12x^2-5x+14)\\
(-x^3+3x^2-2x+3)&=\left(\frac{31}{144}-\frac{x}{12}\right)\cdot(12x^2-5x+14)+\left(\frac{35x}{144}-\frac{1}{72}\right)\\
12x^2-5x+14&=\left(\frac{1728x}{35}-\frac{21744}{1225}\right)\cdot\left(\frac{35x}{144}-\frac{1}{72}\right)+\frac{16848}{1225}
\end{align}$$
Continuing from there, I start to get some big fractions, which doesn't seem correct. I know the GCD is 1, but it doesn't look like I'm going to get that following the above procedure. What is the issue here?


Answer



The GCD of two polynomials is only unique up to multiplication by an (invertible) element of the base field. You can freely renormalise to work with smaller coefficients.



$$\begin{align}
x^{10} + x^3 + 1&=(x^5-x^2-1)\cdot(x^5+x^2+1)+(x^4+x^3+2x^2+2)\\
(x^5+x^2+1)&=(x-1)\cdot(x^4+x^3+2x^2+2)+(-x^3+3x^2-2x+3)\\
(x^4+x^3+2x^2+2)&=(-x-4)\cdot(-x^3+3x^2-2x+3)+(12x^2-5x+14)\\
(-x^3+3x^2-2x+3)&=\frac{1}{144}\left((31-12x)\cdot(12x^2-5x+14)+(35x-2) \right)\\
12x^2-5x+14&=\frac{1}{1225}\left((420x - 151)(35x - 2) + 16848 \right)
\end{align}$$



The numbers aren't much smaller in this case, but the point is that the GCD is a constant, so the two polynomials are coprime.
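As a quick sanity check, the whole computation can be reproduced in a few lines; the sketch below assumes SymPy is available and works over the rationals, exactly as in the thread (in a coding-theory setting one would typically work over $\Bbb{F}_2$ instead, but the coprimality conclusion is the same):

```python
# Minimal sketch: SymPy's gcdex returns (s, t, h) with s*f + t*g = h = gcd(f, g),
# so s is the inverse of f modulo g when h = 1.
from sympy import symbols, gcdex, expand

x = symbols('x')
f = x**5 + x**2 + 1
g = x**10 + x**3 + 1

s, t, h = gcdex(f, g, x)
print(h)                  # 1: the polynomials are coprime, so f is invertible mod g
print(expand(s*f + t*g))  # 1: the Bezout identity checks out
```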


linear algebra - Matrix as a product of elementary matrices?


[Problem image: express the matrix $A$ below as a product of elementary matrices.]





So



$$A = \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}$$



and the matrix can be reduced in these steps:



$$\begin{bmatrix} 1 & 2 \\ 0 & -5 \end{bmatrix}$$



via an elementary matrix that looks like this:




$$ E_1 = \begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix}$$



next:



$$\left [ \begin{matrix} 1 & 0 \\ 0 & -5 \end{matrix} \right ] $$



via an elementary matrix that looks like this:



$$ E_2 = \left [ \begin{matrix} 1 & \frac{2}{5} \\ 0 & 1 \end{matrix} \right ] $$




next:



$$\left [ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right ] $$



via an elementary matrix that looks like this:



$$ E_3 = \left [ \begin{matrix} 1 & 0 \\ 0 & \frac{-1}{5} \end{matrix} \right ] $$



So...




$$E_1^{-1} = \left [ \begin{matrix} 1 & 0 \\ 3 & 1 \end{matrix} \right ] $$
$$E_2^{-1} = \left [ \begin{matrix} 1 & \frac{-2}{5} \\ 0 & 1 \end{matrix} \right ] $$
$$E_3^{-1} = \left [ \begin{matrix} 1 & 0 \\ 0 & -5 \end{matrix} \right ] $$



And if I multiply them together I get:



$$E_1^{-1} * E_2^{-1} = \left [ \begin{matrix} 1 & \frac{-2}{5} \\ 3 & \frac{-1}{5} \end{matrix} \right ] = C$$



and




$$C * E_3^{-1} = \left [ \begin{matrix} 1 & 2 \\ 3 & 1 \end{matrix} \right ] $$



So this works out. Is this right?
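As a quick sanity check of the factorization (a minimal sketch, assuming NumPy is available):

```python
# Verify that E1^{-1} E2^{-1} E3^{-1} = A for the matrices above.
import numpy as np

E1_inv = np.array([[1, 0], [3, 1]])
E2_inv = np.array([[1, -2/5], [0, 1]])
E3_inv = np.array([[1, 0], [0, -5]])

print(E1_inv @ E2_inv @ E3_inv)   # [[1. 2.] [3. 1.]], which is A
```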



Also, a question: the underlying premise of all of this is that $A$ is invertible, right? And if $A$ is invertible, that means that a series of row operations can change it into the identity matrix. Why is this? This doesn't make intuitive sense to me.



Also, why does the product of elementary matrices equal $A$? What is the underlying theorem?

probability - Expected value of a continuous random variable (understanding of the proof)



I need a little help to understand the steps of this proof. (I'm currently studying double integrals) but also i don't understand the first equalities. Thank you in advance.



Let $X$ be an absolutely continuous random variable with density function $f_X$. Suppose that $g$ is a nonnegative function. Then
$$E(g(X))=\int_{0}^{\infty}P(g(X)>y)\,dy-\int_{0}^{\infty}P(g(X)\leqslant -y)\,dy$$
$$=\int_{0}^{\infty}P(g(X)>y)\,dy=\int_{0}^{\infty} \left(\int_{B} f_X(x)\, dx\right)dy$$
where $B:=\{x:g(x)>y\}$. Therefore
$$E(g(X))=\int_0^\infty \int_0^{g(x)} f_X(x)\,dy\,dx= \int_0^\infty g(x)f_X(x)\,dx.$$


Answer




The first equality can be skipped if you just use the tail sum formula for nonnegative random variables: $E[Y] = \int_0^\infty P(Y > y) \mathop{dy}$ if $Y$ is continuous and nonnegative. Applying this to $Y=g(X)$ immediately yields $E[g(X)] = \int_0^\infty P(g(X) > y) \mathop{dy}$.



[The first equality is a generalization, and can be proven by writing $Y=Y \cdot 1_{Y \ge 0} - (-Y) \cdot 1_{Y < 0}$ and applying the tail sum probability to each term both of which are nonnegative.]






The next part is simply the definition of $B$. $$P(g(X)>y) = \int_B f_X(x) \mathop{dx}$$







Finally, switch the order of the two integrals.
I think you forgot to mention that $X$ is also nonnegative?
The region you are integrating over is $\{y \ge 0\} \cap \{x \ge 0 :g(x) > y\}$, which can be rewritten as $\{x \ge 0\} \cap \{y : 0 \le y < g(x)\}$.



$$\int_0^\infty \int_B f_X(x) \mathop{dx} \mathop{dy} = \int_0^\infty \int_0^{g(x)} f_X(x) \mathop{dy} \mathop{dx}.$$


probability - Coupon Collector's Problem with X amount of coupons already collected.




I am having an issue with understanding how to calculate a specific case of the Coupon Collector's Problem. Say I have a set of 198 coupons. I learned how to find the estimated amount of draws to see all 198 coupons, using the following equation:



$$n \sum_{k=1}^n\frac1k$$




It turns out that for $n = 198$, the expected number of draws is approximately 1162. Let's assume, however, that I already have some of the coupons, say 50. How should I go about solving the same problem, given that I've already collected $X$ of them?


Answer



Based on the corresponding article on Wikipedia. The expected time to draw all $n$ coupons equals:



$$E[T] = E[t_1] + E[t_2] + \ldots + E[t_n]$$



with $t_i$ the time needed to collect the $i^{th}$ coupon once $i-1$ coupons have been drawn. Once $i-1$ tickets have been drawn, there are $n-i+1$ unseen coupons left. The probability $p_i$ of selecting a new coupon thus equals $\frac{n-i+1}{n}$, and the expected number of draws needed to draw a new coupon equals $\frac{1}{p_i} = \frac{n}{n-i+1}$. As such, the expected value for the time needed to draw all $n$ coupons can be calculated as:



$$E[T] = \frac{n}{n} + \frac{n}{n-1} + \ldots + \frac{n}{1} = n \sum_{k=1}^{n}{\frac{1}{k}}$$




In this case, however, we have already drawn $X$ unique coupons. As such, the estimated number of draws needed to find all $n$ coupons equals:



$$E[T] = E[t_{X+1}] + E[t_{X+2}] + \ldots + E[t_n] = n \sum_{k=1}^{n-X} \frac{1}{k}$$
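A short simulation is an easy way to sanity-check this formula; here is a minimal sketch using only the Python standard library (the function names are illustrative):

```python
# Expected remaining draws with X distinct coupons already collected,
# compared against a simple Monte Carlo estimate.
import random

def expected_draws(n, X):
    return n * sum(1/k for k in range(1, n - X + 1))

def simulate(n, X, trials=2000):
    total = 0
    for _ in range(trials):
        seen, draws = X, 0
        while seen < n:
            draws += 1
            # a draw yields a fresh coupon with probability (n - seen)/n
            if random.random() < (n - seen) / n:
                seen += 1
        total += draws
    return total / trials

print(expected_draws(198, 0), simulate(198, 0))    # both ~1162
print(expected_draws(198, 50), simulate(198, 50))  # both ~1104
```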


Friday, August 30, 2019

linear algebra - Prove that if $V$ = $R^{n,n}$, then the set of all diagonal matrices is a subspace of $V$.



I am reading the book, Applied Linear Algebra and Matrix Analysis.
When I was doing the exercise of Section3.2 Exercise 19, I was puzzled at some of it. Here is the problem description:





Prove that if $V$ = $R^{n,n}$, then the set of all diagonal matrices is a
subspace of $V$.




I know it is not hard to show that the set of all diagonal matrices is closed under matrix addition and scalar multiplication.
BUT, it confused me how to see that it contains the zero element of $V$,
SO I checked the reference answer, which is as follows:




Let $A$ and $B$ be $n\times n$ diagonal matrices. Then $cA$ is a diagonal matrix and $A+B$ is a diagonal matrix, so the set of diagonal matrices is closed under matrix addition and scalar multiplication.





It doesn't explain anything about zero elements.
And a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero, which means it can't be equal to the zero matrix.
And I wonder whether the definition of subspace is the only way to prove it in an abstract subspace situation.
If you don't mind, could anyone help me and give me some inspiration?
Thanks in advance.


Answer



In general, a set $V\subset X$ is a subspace if it is




  1. closed under addition

  2. closed under scalar multiplication.

  3. non-empty.




There is no need to demand that $0\in V$, because that is a consequence of the three properties above, since, if $x\in V$, then $0=x+(-1)\cdot x\in V$.






Very often, as is your case, the third property, i.e. that $V$ is non-empty, is obvious, and left out of the proof. Technically, it's better to at least mention that of course, $V$ is not empty.






Just a final remark:





And a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero, which means it can't be equal to the zero matrix.




I don't understand this sentence at all.


calculus - How to evaluate the following integral $\int_0^{\pi/2}\sin{x}\cos{x}\ln{(\sin{x})}\ln{(\cos{x})}\,dx$?



How to evaluate the following integral
$$\int_0^{\pi/2}\sin{x}\cos{x}\ln{(\sin{x})}\ln{(\cos{x})}\,dx$$
It seems that it evaluates to$$\frac{1}{4}-\frac{\pi^2}{48}$$
Is this true? How would I prove it?


Answer



Find this




$$I=\int_{0}^{\frac{\pi}{2}}\sin{x}\cos{x}\ln{(\cos{x})}\ln{(\sin{x})}dx$$



Solution



Since
$$\sin(2x) = 2\sin(x)\cos(x)$$



then
$$I=\dfrac{1}{8}\int_{0}^{\frac{\pi}{2}}\ln{(\sin^2{x})}

\ln{(\cos^2{x})}\sin{(2x)}dx$$



Let $\cos{(2x)}=y$, and since
$$\cos(2x) = 2\cos^2x - 1 = 1 - 2\sin^2x$$
we get
$$I=\dfrac{1}{16}\int_{-1}^{1}\ln{\left(\dfrac{1-y}{2}\right)}
\ln{\left(\dfrac{1+y}{2}\right)}dy$$



Let $\dfrac{1-y}{2}=z$, then we have
\begin{align*}I&=\dfrac{1}{8}\int_{0}^{1}\ln{z}\ln{(1-z)}dz=\dfrac{-1}{8}\sum_{n=1}^{\infty}\dfrac{1}{n}

\int_{0}^{1}z^n\ln{z}dz\\
&=\dfrac{1}{8}\sum_{n=1}^{\infty}
\dfrac{1}{n(n+1)^2}=\dfrac{1}{8}\sum_{n=1}^{\infty}
\left(\dfrac{1}{n}-\dfrac{1}{n+1}\right)-\dfrac{1}{8}\sum_{n=1}^{\infty}
\dfrac{1}{(n+1)^2}\\
&=\dfrac{1}{4}-\dfrac{\pi^2}{48}
\end{align*}


probability - Expectation of nonnegative Random Variable





[Image: the claim to prove, $E[X]=\int_{0}^{\infty}\left[1-F(x)\right]dx$ for a nonnegative random variable $X$.]



Can someone give me some pointers as to how to prove this relation?


Answer



Let $p$ be the probability measure. We have that $\int_{0}^{\infty}\left[1-F\left(x\right)\right]dx=\int_{0}^{\infty}\Pr\left[X>x\right]dx=\int_{0}^{\infty}\left[\int 1_{X>x}\,dp\right]dx$. Using Fubini's theorem, we have $\int_{0}^{\infty}\left[\int 1_{X>x}\,dp\right]dx=\int\left[\int_{0}^{\infty}1_{X>x}\,dx\right]dp=\int X\,dp=E\left[X\right]$.


elementary number theory - Why does $\lfloor\frac{n}{x}\rfloor$ have at most $2\sqrt{n}$ values when $x=1,2,\dots n$?



The question is very short and clear:



Why does $\lfloor\frac{n}{x}\rfloor$ (floor of $\frac nx$) have at most $2\sqrt{n}$ values when $x = 1, 2,\dots, n $?



I saw this statement in the tutorial for problem 449A on Codeforces. I really want to know how we get that, and since the number of values should be an integer, why $2\sqrt{n}$?




And I don't know the specific category of this problem so I just tag it as elementary number theory.


Answer



Over the range $1 \le x \le \sqrt{n}$, $x$ can take on only $\sqrt{n}$ distinct integer values. Thus, $\left\lfloor\dfrac{n}{x}\right\rfloor$ can only take on $\sqrt{n}$ distinct values, one for each distinct value of $x$.



Over the range $\sqrt{n} \le x \le n$, we have that $1 \le \dfrac{n}{x} \le \sqrt{n}$. Thus, $\left\lfloor\dfrac{n}{x}\right\rfloor$ can only take on $\sqrt{n}$ distinct values, one for each integer between $1$ and $\sqrt{n}$.



This adds up to $2\sqrt{n}$ values over the entire range $1 \le x \le n$.



Note that this is an upper bound and not an exact number. For instance, if $n = 6$, then we have at most $2\sqrt{6} \approx 4.89$ values. Indeed, $\lfloor\frac{6}{1}\rfloor = 6$, $\lfloor\frac{6}{2}\rfloor = 3$, $\lfloor\frac{6}{3}\rfloor = 2$, $\lfloor\frac{6}{4}\rfloor = \lfloor\frac{6}{5}\rfloor = \lfloor\frac{6}{6}\rfloor = 1$, so we have $4$ distinct values.
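An empirical check of the bound, using only the standard library:

```python
# Count distinct values of floor(n/x) for x = 1..n and compare with 2*sqrt(n).
import math

def distinct_floor_values(n):
    return len({n // x for x in range(1, n + 1)})

for n in [6, 100, 10**4, 10**6]:
    print(n, distinct_floor_values(n), 2*math.sqrt(n))
```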



Thursday, August 29, 2019

real analysis - Why is $\lim\limits_{n \to +\infty }{\sqrt[n]{a_1 a_2 \ldots a_n}} =\lim\limits_{n \to +\infty}{a_n}$



Solving some problems regarding limits and sequence convergence, I stumbled upon a task whose solution relies on, and I quote: "We now use a well-known theorem:

$$\lim_{n \to +\infty }{\sqrt[n]{a_1 a_2 \ldots a_n}} = \lim_{n \to +\infty}{a_n}$$



This isn't really intuitive (at least to me) and I don't know how to prove it.
The original task was to find the limit of
$$\lim_{n \to +\infty }{\sqrt[n]{\bigg{(}1+\frac{1}{1}\bigg{)} \bigg{(}1+\frac{1}{2}\bigg{)}^2 \ldots \bigg{(}1+\frac{1}{n}\bigg{)}^n}} $$
which of course, using the expression above is just $e$.


Answer



Take the logarithm of both sides. Then you want to prove



$$\lim_{n\rightarrow \infty} \frac{1}{n}\sum_1^n a_i = \lim_{n\rightarrow \infty} a_n.$$




This is a standard result about Cesàro means.


calculus - Computing $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3}$ without L'Hôpital's rule.






Computing $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3}$ without L'Hopital




Say $\lim_{x\to{0+}}\frac{\tan(x)-x}{x^3} = L$




For $L$:
$$L=\lim_{x\to0}\frac{\tan x-x}{x^3}\\
L=\lim_{x\to0}\frac{\tan 2x-2x}{8x^3}\\
4L=\lim_{x\to0}\frac{\frac12\tan2x-x}{x^3}\\
3L=\lim_{x\to0}\frac{\frac12\tan{2x}-\tan x}{x^3}\\
=\lim_{x\to0}\frac{\tan x}x\frac{\frac1{1-\tan^2x}-1}{x^2}\\
=\lim_{x\to0}\frac{(\tan x)^3}{x^3}=1\\
\large L=\frac13$$



I found that in another Q, can someone tell me why




$$L=\lim_{x\to0}\frac{\tan x-x}{x^3}=\lim_{x\to0}\frac{\tan 2x-2x}{8x^3}$$


Answer



If $x = 2y$ then $y\rightarrow 0$ when $x\rightarrow 0$, so $\lim_{x\rightarrow0} f(x) = \lim_{y\rightarrow 0} f(2y)$.


trigonometry - Trigonometric Identities: $\frac{\sin^2\theta}{1+\cos\theta}=1-\cos\theta$


$\dfrac{\sin^2\theta}{1+\cos\theta}=1-\cos\theta$


Right Side: $1-\cos\theta$ either stays the same, or can be $1-\dfrac{1}{\sec\theta}$



Left Side: $$\begin{align*} \dfrac{\sin^2\theta}{1+\cos\theta} &= \dfrac{1-\cos^2\theta}{1+\cos\theta}\\ &= \dfrac{(1-\cos\theta)(1+\cos\theta)}{1+\cos\theta}\\ &= 1-\cos\theta \end{align*}$$


Is this correct?


Answer



Perhaps slightly simpler and shorter (FYI, what you did is correct): $$\frac{\sin^2x}{1+\cos x}=1-\cos x\Longleftrightarrow \sin^2x=(1-\cos x)(1+\cos x)\Longleftrightarrow \sin^2x=1-\cos^2x$$ And since the last equality is just the Pythagorean trigonometric identity, we're done.


Wednesday, August 28, 2019

calculus - simple limit but I forget how to prove it

I have to calculate the following limit



$$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - x$$




it is in an indeterminate form.



I tried to rewrite it as follows:



$$\lim_{x\rightarrow -\infty} \sqrt{x^2+2x+2} - \sqrt{|x|^2}$$



but seems a dead road.



Can anyone suggest a solution?




thanks for your help

calculus - Universal Chord Theorem




Let $f \in C[0,1]$ and $f(0)=f(1)$.



How do we prove $\exists a \in [0,1/2]$ such that $f(a)=f(a+1/2)$?



In fact, for every positive integer $n$, there is some $a$, such that $f(a) = f(a+\frac{1}{n})$.



For any other non-zero real $r$ (i.e not of the form $\frac{1}{n}$), there is a continuous function $f \in C[0,1]$, such that $f(0) = f(1)$ and $f(a) \neq f(a+r)$ for any $a$.



This is called the Universal Chord Theorem and is due to Paul Levy.




Note: the accepted answer answers only the first question, so please read the other answers too, and also this answer by Arturo to a different question: https://math.stackexchange.com/a/113471/1102






This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.



and here: List of abstract duplicates.


Answer



You want to use the intermediate value theorem, but not applied to $f$ directly. Rather, let $g(x)=f(x)-f(x+1/2)$ for $x\in[0,1/2]$. You want to show that $g(a)=0$ for some $a$. But $g(0)=f(0)-f(1/2)=f(1)-f(1/2)=-(f(1/2)-f(1))=-g(1/2)$. This gives us the result: $g$ is continuous and changes sign, so it must have a zero.
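To see the argument in action, here is a minimal numeric sketch (assuming SciPy; the sample $f$ is arbitrary, subject only to $f(0)=f(1)$): it locates the promised sign change of $g$ with a root finder.

```python
# Find a with f(a) = f(a + 1/2) by locating a zero of g(x) = f(x) - f(x + 1/2).
import math
from scipy.optimize import brentq

f = lambda x: math.sin(2*math.pi*x) + x*(1 - x)   # f(0) = f(1) = 0
g = lambda x: f(x) - f(x + 0.5)
a = brentq(g, 0, 0.5)          # g(0) = -g(1/2), so a sign change exists
print(a, f(a), f(a + 0.5))     # f(a) and f(a + 1/2) agree
```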



summation - induction proof: $sum_{k=1}^nk^2 = frac{n(n+1)(2n+1)}{6}$




I encountered the following induction proof on a practice exam for calculus:



$$\sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$$



I have to prove this statement with induction.



Can anyone please help me with this proof?


Answer



If $P(n): \sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6},$




we see $P(1): 1^2=1$ and $\frac{1(1+1)(2\cdot1+1)}{6}=1$ so, $P(1)$ is true



Suppose $P(m)$ is true: $$\sum_{k=1}^mk^2 = \frac{m(m+1)(2m+1)}{6}$$



For $P(m+1),$



$$ \frac{m(m+1)(2m+1)}{6}+(m+1)^2$$



$$=\frac{m(m+1)(2m+1)+6(m+1)^2}6$$




$$=\frac{(m+1)\{m(2m+1)+6(m+1)\}}6$$



$$=\frac{(m+1)(m+2)\{2(m+1)+1\}}6$$ as $m(2m+1)+6(m+1)=2m^2+7m+6=(m+2)(2m+3)$



So, $P(m+1)$ is true whenever $P(m)$ is true, which completes the induction.


real analysis - Jump discontinuity implies nonexistence of a primitive




Let $I$ be an interval and $f:I\to \mathbb{R}$ function that has jump discontinuity at some interior point $x_0\in I$, i.e limits $f(x_0^+)$ and $f(x_0^-)$ exist, but are not equal. Prove that $f$ does not have a primitive on $I.$





Comment. Proving this answers another question of mine, whether the Intermediate Value Property (a necessary condition for the existence of a primitive) is sufficient for the existence of a primitive. Then the answer would be no: take $f(x)=x$ on $[0,1]$ and $f(x)=3-x$ on $(1,2]$, which enjoys the IVP but has a jump discontinuity at $x=1$ and therefore no primitive exists (of course that may have been done from scratch).



Getting back to the question, by contradiction, if a primitive of $f$ existed, then $f$ would enjoy the IVP. But how does this contradict the jump discontinuity of $f$?



Thanks in advance.


Answer



If $f$ has a primitive $F$, by definition $f$ satisfies $F'=f$.
Now, by Darboux theorem $f$ satisfies the IVP.




A function with a jump discontinuity cannot satisfy the IVP:




  • First proof: analytical



    let $\varepsilon=\frac{|f(x_0^-)-f(x_0^+)|}{3}$. By the limit definition, there exist $\delta_1,\delta_2>0$ such that $x_0-\delta_1<x<x_0$ implies $|f(x)-f(x_0^-)|<\varepsilon$, and $x_0<x<x_0+\delta_2$ implies $|f(x)-f(x_0^+)|<\varepsilon$.
    Setting $\delta=\min(\delta_1,\delta_2)$ we obtain that





$$f\big((x_0-\delta,x_0+\delta)\big)\subset \big(f(x_0^-)-\varepsilon,f(x_0^-)+\varepsilon\big)\cup \big(f(x_0^+)-\varepsilon,f(x_0^+)+\varepsilon\big)\cup \{f(x_0)\}$$



But this contradicts the IVP.




  • Second proof: topological



We claim that a function $h:\mathbb{R}\to \mathbb{R}$ satisfies the IVP only if it maps closed intervals to intervals (in $\mathbb{R}^*$). The fact that $h$ cannot have a jump discontinuity then trivially follows.




To prove our claim, let $h([a,b])$ be a set that is not an interval (and thus it is not path connected, since the only path connected sets in $\mathbb{R}^*$ are the intervals). Then there is a point $y\in[h(a),h(b)]$ that is not in $h([a,b])$, and this clearly contradicts the IVP.




  • Note: On the existence of a primitive



To answer your other question: the IVP is not enough to ensure the existence of a primitive. Actually, it is not even enough to ensure the boundedness of the function on any interval (see, for example, the Base 13 function).




  • Note: A proof of Darboux Theorem




The case in which $f=F'$ is constant is trivial. Let $F$ be a differentiable non-constant function defined on a closed interval $[a,b]$, let $f=F'$, and let $y$ be a number in $(f(a),f(b))$ (if instead $f(b)<f(a)$, the proof is no different).
Now, let $h(x):=F(x)-xy$. This function is continuous and so, by Weierstrass theorem, it has a maximum and a minimum. They can't both be on the boundary of the interval, since this would imply the fact that $F$ is constant. Thus, at least one of those point is on the interior of the interval, and by Fermat theorem in this point (let us call it $\zeta$) we have $h'(\zeta)=0\rightarrow f(\zeta)-y=0\rightarrow f(\zeta)=y$.


real analysis - Let $f(x)$ be a differentiable function at $x=0$ & $f\left(\frac{x+y}{k}\right)=\frac{f(x)+f(y)}k \left(k\in R, k\ne 0,2\right)$. Show following

Let $f(x)$ be a derivable function at $x=0$ & $f\left(\dfrac{x+y}{k}\right)=\dfrac{f(x)+f(y)}{k} \left(k\in R, k\ne 0,2\right)$. Show that $f(x)$ is either a zero or odd linear function.



My attempt is as follows:-




Putting $x=0,y=0$ in the functional equation



$$f(0)=\dfrac{2f(0)}{k}$$
$$f(0)(k-2)=0$$
$$f(0)=0 \text { as it is given $k \ne 2$ }$$



Replace $y=-x$ in the functional equation.



$$f(0)=\dfrac{f(x)+f(-x)}{k}$$
$$f(x)+f(-x)=0$$

$$f(-x)=-f(x)$$



Thus $f(x)$ is definitely an odd function.



As $f(x)$ is a derivable function at $x=0$ then following limit should exist:-



$$\lim_{h\to 0}\dfrac{f(h)-f(0)}{h}$$
$$\lim_{h\to 0}\dfrac{f(h)}{h}$$



Assuming $f(x)$ to be polynomial function.




If $f(x)$ is a linear function, then limit should be non-zero because if its a polynomial of degree greater than $1$, then limit would be zero.



Like if $f(x)=ax^2+bx+c$



Applying L' Hospital rule:-
$$\lim_{h\to 0}(2ah+b)=0$$



If $f(x)$ is a constant function other than $0$, then $f(x)=-f(x)$ will not hold.




So we just have to prove that $$\lim_{h\to 0}\dfrac{f(h)}{h} \ne 0$$



Applying L' Hospital rule as we have
$\dfrac{0}{0}$ form and function is derivable at $x=0$



$$\lim_{h\to 0}f'(h)=f'(0)$$



If we can just prove that $f'(0) \ne 0$, then we will be done.



I am not getting how to prove this fact. Any hints?

integration - Solving the integral $\int_0^{\pi/2}\log\left(\frac{2+\sin2x}{2-\sin2x}\right)\mathrm dx$



I am in the process of proving
$$I=\int_0^\infty \frac{\arctan x}{x^4+x^2+1}\mathrm{d}x=\frac{\pi^2}{8\sqrt{3}}-\frac23G+\frac\pi{12}\log(2+\sqrt{3})$$
And I have gotten as far as showing that
$$2I=\frac{\pi^2}{4\sqrt{3}}+J$$
Where

$$J=\int_0^\infty \log\bigg(\frac{x^2-x+1}{x^2+x+1}\bigg)\frac{\mathrm{d}x}{1+x^2}$$
Then we preform $x=\tan u$ to see that
$$J=\int_0^{\pi/2}\log\bigg(\frac{2+\sin2x}{2-\sin2x}\bigg)\mathrm dx$$
Which I have been stuck on for the past while. I tried defining
$$k(a)=\int_0^{\pi/2}\log(2+\sin2ax)\mathrm dx$$
Which gives
$$J=k(1)-k(-1)$$
Then differentiating under the integral:
$$k'(a)=2\int_0^{\pi/2}\frac{x\cos2ax}{2+\sin2ax}\mathrm dx$$
We may integrate by parts with $u=x$ to get a differential equation

$$ak'(a)+k(a)=\frac\pi2\log(2+\sin\pi a)$$
With initial condition
$$k(0)=\frac\pi2\log2$$
And from here I have no idea what to do.



I also tried tangent half angle substitution, but that just gave me the original expression for $J$.



I'm hoping that there is some really easy method that just never occurred to me... Any tips?



Edit




As was pointed out in the comments, I could consider
$$P(a)=\frac12\int_0^\pi \log(a+\sin x)\mathrm dx\\\Rightarrow P(0)=-\frac\pi2\log2$$
And
$$
\begin{align}
Q(a)=&\frac12\int_0^\pi \log(a-\sin x)\mathrm dx\\
=&\frac12\int_0^\pi\log[-(-a+\sin x)]\mathrm dx\\
=&\frac12\int_0^\pi\bigg(\log(-1)+\log(-a+\sin x)\bigg)\mathrm dx\\
=&\frac{i\pi}2\int_0^\pi\mathrm{d}x+\frac12\int_0^\pi\log(-a+\sin x)\mathrm dx\\

=&\frac{i\pi^2}2+P(-a)
\end{align}
$$

Hence
$$J=P(2)-Q(2)=P(2)-P(-2)-\frac{i\pi^2}2$$
So now we care about $P(a)$. Differentiating under the integral, we have
$$P'(a)=\frac12\int_0^\pi \frac{\mathrm{d}x}{a+\sin x}$$
With a healthy dose of Tangent half angle substitution,
$$P'(a)=\int_0^\infty \frac{\mathrm{d}x}{ax^2+2x+a}$$
completing the square, we have

$$P'(a)=\int_0^\infty \frac{\mathrm{d}x}{a(x+\frac1a)^2+g}$$
Where $g=a-\frac1a$. With the right trigonometric substitution,
$$P'(a)=\frac1{\sqrt{a^2-1}}\int_{x_1}^{\pi/2}\mathrm{d}x$$
Where $x_1=\arctan\frac1{\sqrt{a^2-1}}$. Then using
$$\arctan\frac1x=\frac\pi2-\arctan x$$
We have that
$$P'(a)=\frac1{\sqrt{a^2-1}}\arctan\sqrt{a^2-1}$$
So we end up with something I don't know how to deal with (what a surprise)
$$P(a)=\int\arctan\sqrt{a^2-1}\frac{\mathrm{d}a}{\sqrt{a^2-1}}$$
Could you help me out with this last one? Thanks.



Answer



$$J=\int_0^{\pi/2}\ln\left(\frac{2+\sin2x}{2-\sin2x}\right)\mathrm dx\overset{2x=t}=\frac12 \int_0^\pi \ln\left(\frac{1+\frac12\sin t}{1-\frac12\sin t}\right)\mathrm dt=\int_0^\frac{\pi}{2}\ln\left(\frac{1+\frac12\sin x}{1-\frac12\sin x }\right)\mathrm dx$$
Now let's consider the following integral:
$$I(a)=\int_0^\frac{\pi}{2}\ln\left(\frac{1+\sin a\sin x}{1-\sin a\sin x}\right)dx\Rightarrow I'(a)=2\int_0^\frac{\pi}{2} \frac{\cos a\sin x}{1-\sin^2a\sin^2 x}dx$$
$$\overset{u=\cos x}{=}2\cos a\int_0^1 \frac{du}{\cos^2 a+u^2\sin^2 a}=\frac{2}{\sin a}\arctan\left(u\tan a\right)\bigg|_0^1=\frac{2a}{\sin a}$$
$$I(0)=0 \Rightarrow J=I\left(\frac{\pi}{6}\right)=2\int_0^\frac{\pi}{6}\frac{x}{\sin x}dx$$
$$=2\int_0^{\frac{\pi}{6}} x \left(\ln\left(\tan \frac{x}{2}\right)\right)'dx=2x \ln\left(\tan \frac{x}{2}\right)\bigg|_0^{\frac{\pi}{6}} -2{\int_0^{\frac{\pi}{6}} \ln\left(\tan \frac{x}{2}\right)dx}=$$
$$\overset{\frac{x}{2}=t}=\frac{\pi}{3}\ln(2-\sqrt 3) -4\int_0^\frac{\pi}{12}\ln (\tan t)dt=\frac{\pi}{3}\ln(2-\sqrt 3) +\frac{8}{3}G$$ $G$ is the Catalan's constant and for the last integral see here.
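A numeric confirmation of the value of $J$ (a minimal sketch, assuming SciPy; Catalan's constant is hard-coded):

```python
# Check J against the closed form (pi/3)ln(2 - sqrt(3)) + (8/3)G.
import numpy as np
from scipy.integrate import quad

G = 0.9159655941772190  # Catalan's constant
J_quad, _ = quad(lambda x: np.log((2 + np.sin(2*x))/(2 - np.sin(2*x))), 0, np.pi/2)
J_closed = np.pi/3*np.log(2 - np.sqrt(3)) + 8*G/3
print(J_quad, J_closed)   # both ~1.0636
```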







Also note that there's a small mistake. After integrating by parts you should have: $$2I=\frac{\pi^2}{4\sqrt 3}- \int_0^\infty\frac{(x^2-1)\arctan x}{x^4+x^2+1}dx=\frac{\pi^2}{4\sqrt 3}-\frac12\underbrace{\int_0^\infty \ln\bigg(\frac{x^2-x+1}{x^2+x+1}\bigg)\frac{dx}{1+x^2}}_{=J}$$


Tuesday, August 27, 2019

elementary set theory - Specify a bijection from [0,1] to (0,1].

A) Specify a bijection from [0,1] to (0,1]. This shows that |[0,1]| = |(0,1]|



B) The Cantor-Bernstein-Schroeder (CBS) theorem says that if there's an injection from A to B and an injection from B to A, then there's a bijection from A to B (i.e., |A| = |B|). Use this to show again that |[0,1]| = |(0,1]|.

calculus - Cauchy-Schwarz inequality application



I am having trouble verifying the first step where the author makes use of Cauchy-Schwarz inequality.
https://gbas2010.wordpress.com/2011/10/16/inequality-53-vo-quoc-ba-can/



I am unsure how he chooses the terms from $( a + b + c )$ to square and multiply on the LHS.


Answer



He is using the two variable form of the inequality, that is
\begin{equation}

(x^2 + y^2)(z^2 + w^2) \geq (xz + yw)^2
\end{equation}
Now set $x = \sqrt{a^2 + b^2}$, $y = c$, $z = \frac{a + b}{\sqrt{a^2 + b^2}}$ and $w = 1$ and we get
\begin{align}
((\sqrt{a^2 + b^2})^2 + c^2)((\frac{a + b}{\sqrt{a^2 + b^2}})^2 + 1 ) &\geq (\sqrt{a^2 + b^2} \frac{a + b}{\sqrt{a^2 + b^2}} + c \cdot 1)^2\\
(a^2 + b^2 + c^2)(\frac{(a + b)^2}{a^2 + b^2} + 1 ) &\geq (a + b + c)^2
\end{align}
Which is the first step of the solution.
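A quick random spot check of the derived inequality (standard library only):

```python
# (a^2 + b^2 + c^2)((a+b)^2/(a^2+b^2) + 1) >= (a+b+c)^2 for positive a, b, c.
import random

for _ in range(10000):
    a, b, c = (random.uniform(0.1, 10.0) for _ in range(3))
    lhs = (a*a + b*b + c*c) * ((a + b)**2/(a*a + b*b) + 1)
    assert lhs >= (a + b + c)**2 - 1e-9
print("inequality held on all samples")
```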


Monday, August 26, 2019

sequences and series - An intuitive reasoning for 1+2+3+4+5..... + ∞ = -1/12?

I was just watching this video: http://www.youtube.com/watch?v=w-I6XTVZXww


In it, a professor working at the University of Nottingham (Dr. Ed Copeland, I think) shows how $1+2+3+4+5+\cdots+\infty = -1/12$. Is this a joke? Or does it contain somewhere within it a flaw?


Now, I know that this channel and the people running it are researchers by profession, so it does make me wonder whether this is true.


but at the same time, you are adding positive numbers on one hand and getting a negative sum?


The video also shows how $1-1+1-1+1-1+\cdots = 1/2$, and that is counterintuitive too: how can you get a fraction when you are only adding together integers? I do, however, painfully accept this as the truth, because this is an infinite G.P. with $r=-1$, and I can get the answer using the formula.


what I am asking is:



  1. Is this actually true, or altogether false? Or is it a limitation of the current system of mathematics, accepted as true only because otherwise mathematics would be proven wrong?





  2. Is there any intuitive explanation for this (both of the series mentioned), or can I only see it using mathematics?




  3. The video mentions that there are actual uses in physics for this. What are they?



I have read other answers and the opinions in them are conflicting; some say it is false and some say that it is true. Also, other questions don't seem to answer all three of the questions posed.

integration - Find $\lim_{n \rightarrow \infty} \int_0^n (1+ \frac{x}{n})^{n+1} \exp(-2x) \, dx$




Find:



$$\lim_{n \rightarrow \infty} \int_0^n \left(1+ \frac{x}{n}\right)^{n+1} \exp(-2x) \, dx$$



The sequence $\left(1+ \frac{x}{n}\right)^{n+1} \exp{(-2x)}$ converges pointwise to $\exp{(-x)}$. So if we could apply Lebesgue's monotone convergence theorem, we have:



\begin{align}
\lim_{n \rightarrow \infty} \int_0^n \left(1+ \frac{x}{n}\right)^{n+1} \exp(-2x) \, dx &=\lim_{n \rightarrow \infty} \int_{\mathbb{R}} I_{(0,n)}(x) \left(1+ \frac{x}{n}\right)^{n+1} \exp(-2x) \, dx \\[10pt]
&= \int_{- \infty}^\infty \exp(-x) \, dx= +\infty
\end{align}



Can someone help me to prove that the sequence of integrands is monotone?


Answer



Using the indicator function, we can write



$$\int_0^n \left(1+\frac xn\right)^{n+1}e^{-2x}\,dx=\int_0^\infty \left(1+\frac xn\right)^{n+1}e^{-2x}\xi_{[0,n]}(x)\,dx$$



Then, note that for $0\le x\le n$, we have




$$\left(1+\frac xn\right)^{n+1}=\left(1+\frac xn\right)\left(1+\frac xn\right)^{n}\le 2e^{x}$$




EDIT: The OP requested a source for the preceding inequality.
In THIS ANSWER, I showed using only the limit definition of the exponential function along with Bernoulli's Inequality that the exponential function satisfies the inequality $$e^x\ge \left(1+\frac xn\right)^{n}$$for $x>-n$.




We can assert, therefore, that



$$\left(1+\frac xn\right)^{n+1}e^{-2x}\xi_{[0,n]}(x)\le 2e^{-x}$$




Applying the Dominated convergence theorem, we have



$$\begin{align}
\lim_{n\to \infty} \int_0^n \left(1+\frac xn\right)^{n+1}e^{-2x}\,dx&=\int_0^\infty \lim_{n\to \infty}\left(\left(1+\frac xn\right)^{n+1}\xi_{[0,n]}(x)\right)\,e^{-2x}\,dx\\\\
&=\int_0^\infty e^{x}\,(1)\,e^{-2x}\,dx\\\\
&=1
\end{align}$$
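A numeric illustration that the integrals indeed approach $1$ (a minimal sketch, assuming SciPy):

```python
# The truncated integrals tend to 1, as dominated convergence predicts.
import numpy as np
from scipy.integrate import quad

for n in [10, 100, 1000]:
    val, _ = quad(lambda x: (1 + x/n)**(n + 1) * np.exp(-2*x), 0, n)
    print(n, val)   # -> 1
```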


sequences and series - Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem)


As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}.$$ However, Euler was Euler and he gave other proofs.


I believe many of you know some nice proofs of this, can you please share it with us?


Answer




OK, here's my favorite. I thought of this after reading a proof from the book "Proofs from the book" by Aigner & Ziegler, but later I found more or less the same proof as mine in a paper published a few years earlier by Josef Hofbauer. On Robin's list, the proof most similar to this is number 9 (EDIT: ...which is actually the proof that I read in Aigner & Ziegler).


When $0 < x < \pi/2$ we have $0<\sin x < x < \tan x$ and thus $$\frac{1}{\tan^2 x} < \frac{1}{x^2} < \frac{1}{\sin^2 x}.$$ Note that $1/\tan^2 x = 1/\sin^2 x - 1$. Split the interval $(0,\pi/2)$ into $2^n$ equal parts, and sum the inequality over the (inner) "gridpoints" $x_k=(\pi/2) \cdot (k/2^n)$: $$\sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k} - \sum_{k=1}^{2^n-1} 1 < \sum_{k=1}^{2^n-1} \frac{1}{x_k^2} < \sum_{k=1}^{2^n-1} \frac{1}{\sin^2 x_k}.$$ Denoting the sum on the right-hand side by $S_n$, we can write this as $$S_n - (2^n - 1) < \sum_{k=1}^{2^n-1} \left( \frac{2 \cdot 2^n}{\pi} \right)^2 \frac{1}{k^2} < S_n.$$


Although $S_n$ looks like a complicated sum, it can actually be computed fairly easily. To begin with, $$\frac{1}{\sin^2 x} + \frac{1}{\sin^2 (\frac{\pi}{2}-x)} = \frac{\cos^2 x + \sin^2 x}{\cos^2 x \cdot \sin^2 x} = \frac{4}{\sin^2 2x}.$$ Therefore, if we pair up the terms in the sum $S_n$ except the midpoint $\pi/4$ (take the point $x_k$ in the left half of the interval $(0,\pi/2)$ together with the point $\pi/2-x_k$ in the right half) we get 4 times a sum of the same form, but taking twice as big steps so that we only sum over every other gridpoint; that is, over those gridpoints that correspond to splitting the interval into $2^{n-1}$ parts. And the midpoint $\pi/4$ contributes with $1/\sin^2(\pi/4)=2$ to the sum. In short, $$S_n = 4 S_{n-1} + 2.$$ Since $S_1=2$, the solution of this recurrence is $$S_n = \frac{2(4^n-1)}{3}.$$ (For example like this: the particular (constant) solution $(S_p)_n = -2/3$ plus the general solution to the homogeneous equation $(S_h)_n = A \cdot 4^n$, with the constant $A$ determined by the initial condition $S_1=(S_p)_1+(S_h)_1=2$.)


We now have $$ \frac{2(4^n-1)}{3} - (2^n-1) \leq \frac{4^{n+1}}{\pi^2} \sum_{k=1}^{2^n-1} \frac{1}{k^2} \leq \frac{2(4^n-1)}{3}.$$ Multiply by $\pi^2/4^{n+1}$ and let $n\to\infty$. This squeezes the partial sums between two sequences both tending to $\pi^2/6$. Voilà!
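A small numeric check of the recurrence and the squeeze (standard library only):

```python
# S_n = sum over inner gridpoints of 1/sin^2(x_k) should equal 2(4^n - 1)/3.
import math

def S(n):
    return sum(1/math.sin(math.pi/2 * k/2**n)**2 for k in range(1, 2**n))

for n in range(1, 6):
    print(n, round(S(n), 6), 2*(4**n - 1)/3)
```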


number theory - Any relationship between $a\bmod b$ and $a\bmod c$?

Let's say we have $a\equiv x\mod b$ and $a\equiv y\mod c$. Is there any way to find the relationship between $x$ and $y$ for nontrivial (a>b and/or a>c) moduli?


How about a specific example, such as


$a\equiv x\mod b$ vs $a\equiv y\mod 2b$,


or


$a\equiv x\mod b$ vs $a\equiv y\mod 2b-1$.

Sunday, August 25, 2019

real analysis - How do I evaluate this limit: $\lim_{x \to 0} \frac{e^x-x-1}{x^2}$ without using Taylor expansion




I would like to evaluate this limit without using Taylor expansion:




$$\lim_{x \to 0} \frac{e^x-x-1}{x^2}$$ .



Note: by Taylor expansion I got $\frac{1}{2}$.



Thank you for any help!


Answer



METHODOLOGY $1$: Use L'Hospital's Rule Successively



Repeated use of L'Hospital's Rule reveals




$$\begin{align}
\lim_{x\to 0}\frac{e^x-x-1}{x^2}&=\lim_{x\to 0}\frac{e^x-1}{2x}\\\\
&=\lim_{x\to 0}\frac12 e^x=\frac12
\end{align}$$






METHODOLOGY $2$: Integral representation of the numerator




Note that we can write the numerator as



$$\begin{align}
e^x-x-1&=\int_0^x \int_0^t e^s \,ds\,dt\\\\
&=\int_0^x \int_s^x e^s\,dt\,ds\\\\
&=\int_0^x (x-s)e^s\,ds
\end{align}$$



Now, we can use the Mean-Value-Theorem for integrals to reveal




$$\begin{align}
e^x-x-1&=e^{s^*(x)}\int_0^x(x-s)\,ds\\\\
&=\frac12 x^2e^{s^*(x)}
\end{align}$$



for some value of $s^*(x) \in (0,x)$.



Finally, exploiting the continuity of the exponential function yields the coveted limit



$$\begin{align}

\lim_{x\to 0}\frac{e^x-x-1}{x^2}&=\lim_{x\to 0}\frac{\frac12 x^2e^{s^*(x)}}{x^2}\\\\
&=\frac12
\end{align}$$



as expected!
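A one-line numeric check of the limit:

```python
# The difference quotient approaches 1/2 as x -> 0.
import math
for x in [0.1, 0.01, 0.001]:
    print(x, (math.exp(x) - x - 1) / x**2)   # -> 0.5
```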


Limit of $\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}$



How do I calculate this? $$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}$$
If I tried using l'Hopital's rule, it would become
$$\lim_{x\to0^+}\frac{\cos x}{\frac{1}{2\sqrt{x}}\cos \sqrt{x}}$$
which looks the same. I can't seem to find a way to proceed from here. Maybe it has something to do with $$\frac{\sin x}{x} \to 1$$

but I'm not sure what to do with it. Any advice?



Oh and I don't understand series expansions like Taylor's series.


Answer



By the equivalence $\sin x\sim x$ near zero we have
$$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{x}{\sqrt{x}}=0$$
or
$$\lim_{x\to0^+}\frac{\sin x}{\sin \sqrt{x}}=\lim_{x\to0^+}\frac{\sin x}{x}\cdot\frac{\sqrt{x}}{\sin \sqrt{x}}\cdot\sqrt{x}=1\times1\times0=0$$
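Numerically, the ratio indeed behaves like $\sqrt{x}$ near $0^+$:

```python
# sin(x)/sin(sqrt(x)) ~ sqrt(x) -> 0 as x -> 0+.
import math
for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(x)/math.sin(math.sqrt(x)), math.sqrt(x))
```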


complex analysis - Integral with contour integration



I want to evaluate the integral:
$$\int_{-\infty}^{0}\frac{2x^2-1}{x^4+1}\,dx$$




using contour integration.



I re-wrote it as: $\displaystyle \int_{0}^{\infty}\frac{2x^2-1}{x^4+1}\,dx$. I am considering of integrating on a semicircle contour with center at the origin. I considered the function $\displaystyle f(z)=\frac{2z^2-1}{z^4+1}$ which has $4$ simple poles but only two of them lie on the upper half plane and included in the contour which are: $\displaystyle z_1=\frac{1+i}{\sqrt{2}}, \;\; z_2=\frac{-1+i}{\sqrt{2}}$.



The residue at $\displaystyle z_1$ equals $\displaystyle \mathfrak{Res}\left ( f; z_1 \right )=-\frac{2i-1}{2\sqrt{2}}$ while the residue at $z_2$ equals $\displaystyle -2\sqrt{2}i-2\sqrt{2}$. (if I have done the calculations right)



Now, I don't know how to continue. Should I find the residues at the other poles as well and then say $\displaystyle \oint_{C}f(z)=2\pi i \sum \mathrm{res}$, where $C$ is the semicircle contour, and then expand it? That is:



$$\oint_{C}f(z)\,dz=\int_{0}^{a} + \int_{{\rm arc}}$$




Then let $a \to +\infty$; the arc integral would then go to zero.
But I don't know how to proceed.






I had dealt with this integral using residues by converting it into an integral from minus infinity to infinity, but with contours I am having a bit of a problem.



Therefore I'd like some help.


Answer




You could also use the following contour.
[Figure: a quarter-circle contour in the first quadrant, with arc $\Gamma_1$ of radius $R$, segment $\Gamma_2$ along the positive real axis, and segment $\Gamma_3$ along the positive imaginary axis.]



$$\int_{-\infty}^0 \frac{2x^2-1}{x^4+1}\operatorname dx =\int_{0}^{+\infty} \frac{2x^2-1}{x^4+1}\operatorname dx $$



Has an analytic continuation as $$\int_{\Gamma} \frac{2z^2-1}{z^4+1}\operatorname dz $$ with 4 poles, but just one pole inside the contour.



$$\operatorname*{res}_{z=e^{i\frac{\pi}{4}}} f(z) = \frac{3\sqrt{2}}{8} -i \frac{\sqrt{2}}{8} $$



I think you made a calculation error here (?)




Using $$\int_{\Gamma} \frac{2z^2-1}{z^4+1}\operatorname dz = \color{blue}{\int_{\Gamma_1}\frac{2z^2-1}{z^4+1}\operatorname dz} + \int_{\Gamma_2}\frac{2z^2-1}{z^4+1}\operatorname dz + {\color{red}{\int_{\Gamma_3}\frac{2z^2-1}{z^4+1}\operatorname dz}}$$




  • Now $\color{blue}{\int_{\Gamma_1} \to 0}$ as $R\to +\infty$ which can be proven using the triangle inequality.

  • Use $\Gamma_2 \leftrightarrow z(x) = x$ and $x:0\to R$

  • Use $\color{red}{\Gamma_3 \leftrightarrow z(y) = iy}$ and $y: R \to 0$



Which finally results in (as $R \to +\infty$)

$$2\pi i \left(\frac{3\sqrt{2}}{8} -i \frac{\sqrt{2}}{8}\right) = \color{blue}{0}+\int_{0}^{+\infty}\frac{2x^2-1}{x^4+1}\operatorname dx + \color{red}{ i \int_{+\infty}^0\frac{-2y^2-1}{y^4+1}\operatorname dy}$$



Here you can read off the real parts, which results in:



$$\int_{0}^{+\infty}\frac{2x^2-1}{x^4+1}\operatorname dx = \frac{\pi\sqrt{2}}{4}$$
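A numeric confirmation of the final value (a minimal sketch, assuming SciPy):

```python
# int_0^inf (2x^2 - 1)/(x^4 + 1) dx should equal pi*sqrt(2)/4 ~ 1.1107.
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: (2*x*x - 1)/(x**4 + 1), 0, np.inf)
print(val, np.pi*np.sqrt(2)/4)
```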


Saturday, August 24, 2019

real analysis - Intermediate Value Property and Discontinuous Functions

This is a general question to which I need help finding a concrete example so that I may understand the concept/strategy better, and any help will be greatly appreciated.



If given a function $F$ that is not continuous, how can I show that the given function satisfies the intermediate value property? A hint that was given by the Professor was to find an auxiliary function $f$ such that $f'=F$.



I know that all continuous functions have the intermediate value property (Darboux's property), and from reading around I know that all derivatives have the Darboux property, even the derivatives that are not continuous.




Here is what I could make sense of the Professor's hint:



If I could find a suitable function $f$ which was differentiable, and $f'=F$, the derivative would have the intermediate value property (since all derivatives have the intermediate value property), and thus the original discontinuous function $F$ would also have the intermediate value property.



Can anyone please tell me if my reasoning is correct and/or please provide me with a discontinuous function that I can practice on (or perhaps direct me to another question with such a function that I may have overlooked)?



Thanks a lot in advance!

Friday, August 23, 2019

complex numbers - How to solve the equation $ z^2 = i-1 $?




$\ z^2 = i-1 \ $
Hey guys, I couldn't find a way to solve this problem. The question suggests I replace $\ z\ $ with $\ x+iy\ $ and then go from there, but I always end up having to complete the square and end up with a completely different answer to the back of the book. Can you please help?
Thanks


Answer




let $$z=x+iy$$ then we get
$$x^2-y^2+2xyi=i-1$$ so we get
$$x^2-y^2+1=0$$
and $$2xy-1=0$$
Can you solve this?
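A quick check with Python's cmath (the principal square root is one of the two solutions):

```python
# The two solutions of z^2 = -1 + i are +/- cmath.sqrt(-1 + 1j).
import cmath

z = cmath.sqrt(-1 + 1j)
print(z)     # ~0.455 + 1.099j, consistent with x^2 - y^2 + 1 = 0 and 2xy - 1 = 0
print(z*z)   # ~(-1 + 1j)
```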


real analysis - Find a differentiable $f$ such that $\mathrm{Zeros}(f)=\mathbb{any\; closed\; set}$



Let $B\subset \mathbb{R}^2$ be a closed set.



How to prove that there is a differentiable function $f:\mathbb{R}^2\longrightarrow\mathbb{R}$ such that $$Z(f)=B$$



where $$Z(f)=\{x\in\mathbb{R}^2:f(x)=0\}$$




Any hints would be appreciated.


Answer



I will adapt the MO answer by Harald Hanche-Olsen, filling in some details, and taking into account that you don't ask for $C^\infty$, but only for a differentiable function.



Let
$$E_0=\{x\colon\operatorname{dist}(x,B)\ge 1 \}$$
and for $k=1,2,\ldots$ let
$$E_k=\{x\colon 2^{-k}\le \operatorname{dist}(x,B)\le 2^{1-k}\}$$
These sets cover the complement of $B$. Also let
$$F_k=\{x\colon \operatorname{dist}(x,E_k)\le 2^{-k-2}\}$$

be an "enlargement" of $E_k$ which still stays away from $B$.



Let $\omega$ be a smooth function on $\mathbb R^2$ such that $\omega\ge 0$, $\omega(x)=0$ when $|x|\ge 1/2$, and $\omega(x)>0$ when $|x|\le 1/4$. Let $\omega_k(x) = \omega( 2^{ k}x)$.



The convolution of $\chi_{F_k}$ with $\omega_k$ has the following properties:




  • it is as smooth as $\omega$ is

  • it is nonnegative

  • it is zero on the set $ \{x\colon \operatorname{dist}(x,F_k) > 2^{-k-1}\}$, which contains the set $ \{x\colon \operatorname{dist}(x,B ) < 2^{-k-2}\}$


  • it is strictly positive on $E_k$

  • it does not exceed $4^{-k}\int_{\mathbb R^2} \omega$.



Define
$$f=\sum_{k=0}^\infty (\chi_{F_k}*\omega_k) \tag{1}$$
and observe that




  • $f$ is strictly positive on the complement of $B$, and vanishes on $B$.

  • $f$ satisfies an estimate of the form $f(x)\le C(\operatorname{dist}(x,B ))^2$, because at distance about $2^{-k}$ from $B$
    it takes values about $4^{-k}$


  • By the above, $f$ is differentiable on $B$.

  • $f$ is also differentiable on the complement of $B$, because every point of this complement has a neighborhood in
    which only finitely many terms of (1) are nonzero.






As Harald Hanche-Olsen notes, introducing a rapidly decaying weight one can make the sum $C^\infty$ smooth, e.g.,
$$f=\sum_{k=0}^\infty 2^{-k^2} (\chi_{F_k}*\omega_k )$$


real analysis - Evaluation of an improper integral: $\int_0^\infty \frac{t\sin{t}}{u^2+t^2}\,dt$



I want to calculate the integral below for positive $u$.
$$\int_0^\infty \dfrac{t\sin{t}}{u^2+t^2}dt$$
Wolframalpha gives me a simple answer :
$\dfrac{\pi}{2}e^{-u}$.



but I cannot arrive at that.

Can anyone solve the above without using complex integration (e.g. residue integration), since I'm a beginner in analysis?



Thanks.


Answer



Hint. Let's consider the Laplace transform of $\displaystyle I(a):=\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}\:dx$. We have
$$
\begin{aligned}
\mathcal{L}\left(I(a)\right)(s)&=\mathcal{L}\left(\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}\:dx\right)(s)
\\& = \int_{0}^{\infty}\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}e^{-as}\:da\:dx
\\&= \int_{0}^{\infty}\frac{s}{(x^2+1)(s^2+x^2)}\;{dx}

\\&= \frac{\pi}{2(s+1)}
\end{aligned}
$$giving
$$
I(a)=\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}\:dx=\mathcal{L}^{-1}\left(\frac{\pi}{2(s+1)}\right) =\frac{\pi}{2}e^{-a},\qquad a>0, \tag2
$$ then by differentiating $(2)$ with respect to $a$, one gets
$$
\int_{0}^{\infty}\frac{x\sin(ax)}{x^2+1}\:dx=\frac{\pi}{2}e^{-a},\qquad a>0. \tag3
$$ as given by Wolfram alpha.
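A numeric check (a minimal sketch, assuming SciPy; the oscillatory tail is handled with quad's Fourier weight):

```python
# Check int_0^inf x sin(ax)/(x^2 + 1) dx = (pi/2) e^{-a} for a sample a > 0.
import numpy as np
from scipy.integrate import quad

a = 2.0
val, _ = quad(lambda x: x/(x*x + 1), 0, np.inf, weight='sin', wvar=a)
print(val, np.pi/2*np.exp(-a))   # both ~0.2126
```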


linear algebra - Matrix with zeros on diagonal and ones in other places is invertible



Hi guys, I am working with this and I am trying to prove to myself that $n \times n$ matrices with zeros on the diagonal and ones everywhere else are invertible.




I ran some cases and looked at the determinant and came to the conclusion that we can easily find the determinant by using the following $\det(A)=(-1)^{n+1}(n-1)$. To prove this I do induction



n=2 we have the $A=\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}$ $\det(A)=-1$ and my formula gives me the same thing (-1)(2-1)=-1



Now assume it holds for $n \times n$, i.e. $\det(A)=(-1)^{n+1}(n-1)$



Now to show it for a matrix $B$ of size $(n+1) \times (n+1)$: I am not sure; I was thinking of taking the determinants of the $n \times n$ minors, but maybe someone can help me. Also, is there an easier way to see this is invertible other than the determinant? I am curious.



Answer



This is easy to calculate by row reduction:



Add all rows to first:
$$\det(A) =\det \begin{bmatrix}
0 & 1 & 1 &...&1 \\
1 & 0 & 1 &...&1 \\
1 & 1 & 0 &...&1 \\
... & ... & ... &...&... \\
1 & 1 & 1 &...&0 \\

\end{bmatrix}=\det \begin{bmatrix}
n-1 & n-1 & n-1 &...&n-1 \\
1 & 0 & 1 &...&1 \\
1 & 1 & 0 &...&1 \\
... & ... & ... &...&... \\
1 & 1 & 1 &...&0 \\
\end{bmatrix} \\
=(n-1)\det \begin{bmatrix}
1 & 1 & 1 &...&1 \\
1 & 0 & 1 &...&1 \\

1 & 1 & 0 &...&1 \\
... & ... & ... &...&... \\
1 & 1 & 1 &...&0 \\
\end{bmatrix}=(n-1)\det \begin{bmatrix}
1 & 1 & 1 &...&1 \\
0 & -1 & 0 &...&0 \\
0 & 0 & -1 &...&0 \\
... & ... & ... &...&... \\
0 & 0 & 0 &...&-1 \\
\end{bmatrix}$$




where in the last row operation I subtracted the first row from each other row.



This shows
$$\det(A)=(n-1)(-1)^{n-1}$$
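An empirical check of the formula (a minimal sketch, assuming NumPy):

```python
# det of the all-ones-minus-identity matrix vs (n-1)(-1)^(n-1).
import numpy as np

for n in range(2, 8):
    A = np.ones((n, n)) - np.eye(n)
    print(n, round(np.linalg.det(A)), (n - 1)*(-1)**(n - 1))
```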


Thursday, August 22, 2019

calculus - Proving $\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$ without induction


I was looking at: $$\sum_{k=1}^n{k^2}=\frac{n(n+1)(2n+1)}{6}$$


It's pretty easy to prove the above using induction, but I was wondering how one actually derives this equation in the first place.


Answer



$$n^{3}-(n-1)^{3}=3n^{2}+3n+1$$ $$(n-1)^{3}-(n-2)^{3}=3(n-1)^{2}+3(n-1)+1$$ $$\vdots$$ $$2^{3}-1^{3}=3(1)^{2}+3(1)+1$$


Now use telescopic cancellation.


Here are some "proof without words"(I find them more elegant):


Sum of squares


Sum of Squares(2)


Finally a more generalized form:$$1^{k}+2^{k}+\cdots+n^{k}=\sum\limits_{i=1}^{k}S(k,i)\binom{n+1}{i+1}i!$$ Where S(k,i) represents the Stirling number of the second kind.
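A quick check of the closed form for a few values of $n$:

```python
# Direct verification of 1^2 + ... + n^2 = n(n+1)(2n+1)/6.
for n in [1, 5, 10, 100]:
    assert sum(k*k for k in range(1, n + 1)) == n*(n + 1)*(2*n + 1)//6
print("closed form verified")
```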


calculus - Identification of a curious function

During computation of some Shapley values (details below), I encountered the following function:

$$
f\left(\sum_{k \geq 0} 2^{-p_k}\right) = \sum_{k \geq 0} \frac{1}{(p_k+1)\binom{p_k}{k}},
$$
where $p_0 > 0$ and $p_{k+1} > p_k$ for all $k$. In other words, the input to $f$ is the binary expansion of a real number in the range $[0,1]$, and the $p_k$ correspond to the positions of $1$s in the binary expansion.



For example, $f(2^{-t}) = 1/(t+1)$, so $f(1/2) = 1/2$, $f(1/4) = 1/3$ and so on. More complicated examples are $f(5/8) = f(2^{-1} + 2^{-3}) = 1/2 + 1/(4\cdot 3) = 7/12$ and
$$ f(2/3) = f\left(\sum_{k \geq 0}2^{-(2k+1)}\right) = \sum_{k \geq 0} \frac{1}{(2k+2)\binom{2k+1}{k}} = \frac{\pi}{\sqrt{27}}.$$



The function $f$ is a continuous increasing function satisfying $f(0) = 0$, $f(1) = 1$, and $f(1-t) = 1-f(t)$ for $t \in [0,1]$. It has vertical tangents at dyadic points.




Here is a plot of $f$: [plot of $f$ omitted]




Is the function $f$ known?







Here is where $f$ came from. Let $n \geq 1$ be an integer and let $t \in [0,1]$. For a permutation $\pi$ of the numbers $\{ 2^{-m} : 0 \leq m \leq n-1 \}$ satisfying $\pi^{-1}(1) = i$, we say that $\pi$ is pivotal if $\sum_{j<i}\pi(j) < t \leq \sum_{j\leq i}\pi(j)$; let $f_n(t)$ denote the probability that a uniformly random permutation is pivotal.


For example, take $n = 4$. The permutation $1/8,1/2,1,1/4$ is pivotal for $t \in (5/8,1]$. For all $n \geq 2$ we have $f_n(1/2) = 1/2$, since $\pi$ is pivotal iff $1$ appears before $1/2$ in $\pi$. The general formula for $f$ is derived in a similar way.



We leave it to the reader to figure out how $f_n$ measures some Shapley value. The functions $f_n$ are step functions with steps of length $1/2^{n-1}$. They are left-continuous, and are equal to $f$ at the breakpoints.
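For the record, $f$ is easy to evaluate numerically from the definition; the sketch below (standard library only) scans the binary digits of $t$ and reproduces the examples above:

```python
# Evaluate f from the first N binary digits of t in [0, 1].
from math import comb

def f(t, N=50):
    total, k = 0.0, 0
    for p in range(1, N + 1):   # p runs over binary digit positions of t
        t *= 2
        bit = int(t)
        t -= bit
        if bit:                 # a 1 at position p plays the role of p_k
            total += 1/((p + 1)*comb(p, k))
            k += 1
    return total

print(f(0.5), f(0.25), f(5/8), f(2/3))   # 0.5, 1/3, 7/12, ~0.60460 = pi/sqrt(27)
```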

Wednesday, August 21, 2019

Compute the square root of a complex number


This is a follow-up to a previous question. I solved the equation $z^4 - 6z^2 + 25 = 0$ and I found the four answers to be $z = \pm\sqrt{3 \pm 4i}$.


However, someone in the comments said that the answers are going to be $2+i$, $2-i$, $-2+i$, $-2-i$. I cannot understand how we can find these roots from the answer that I found. How are we supposed to compute the square root of a complex number?


Answer



Hint:


Let $x + yi = \sqrt{a + bi}$. Then $(x+ yi)^2 = a + bi$. Then solve for $x$ and $y$ and you will generally have two sets of values for the square root $ \sqrt{a + bi}$


Example:


Say you want to compute $\sqrt{3 + 4i}$. Then assume the square root is $a + bi$. That is $a + bi = \sqrt{3 + 4i} \implies (a + bi)^2 = (a^2 - b^2) + 2abi = 3 + 4i$. Now solve the equations $ (a^2 - b^2) = 3$ and $2ab = 4$ to find $a$ and $b$.
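Indeed, those equations give $a=2$, $b=1$ (up to sign), which is immediate to verify:

```python
# (2 + i)^2 = 3 + 4i, so sqrt(3 + 4i) = +/-(2 + i).
z = complex(2, 1)
print(z*z)   # (3+4j)
```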


Discrete logarithm tables for the fields $Bbb{F}_8$ and $Bbb{F}_{16}$.




The smallest non-trivial finite field of characteristic two is
$$
\Bbb{F}_4=\{0,1,\beta,\beta+1=\beta^2\},
$$
where $\beta$ and $\beta+1$ are primitive cubic roots of unity, and zeros of the
polynomial $x^2+x+1$. Here the multiplication table is given once we know
how to write the non-zero elements as powers of $\beta$. Extend the idea to fields of eight and sixteen elements.



Those fields can be constructed as

$$
\Bbb{F}_8=\Bbb{F}_2[\alpha], \quad\text{and}\quad \Bbb{F}_{16}=\Bbb{F}_2[\gamma],
$$
where $\alpha$ has minimal polynomial $x^3+x+1$, and $\gamma$ has minimal polynomial $x^4+x+1$, both irreducible in $\Bbb{F}_2[x]$.



Task:




Calculate the tables for base $\alpha$ discrete logarithm of $\Bbb{F}_8$ and base $\gamma$ discrete logarithm of $\Bbb{F}_{16}$.




Answer



A (base-$g$) discrete logarithm of a finite field $\Bbb{F}_q$, is a function
$$
\log_g:\Bbb{F}_q^*\to\Bbb{Z}_{q-1}
$$
defined via the equivalence $g^j=x\Leftrightarrow \log_g(x)=j$. For this to be well-defined it is imperative that $g$ is a primitive element, i.e. a generator of $\Bbb{F}_q^*$, and that the domain of $\log_g$ is the
residue class ring of integer modulo $q-1$, as $g^{q-1}=g^0=1$.



It immediately follows that the discrete logarithm satisfies the familiar rules
$$

\begin{aligned}
\log_g(x\cdot y)&=\log_g(x)+\log_g(y),\\
\log_g(x^n)&=n\cdot\log_g(x)
\end{aligned}
$$
for all elements $x,y\in \Bbb{F}_q^*$ and all integers $n$. The arithmetic
on the r.h.s. is that of the ring $\Bbb{Z}_{q-1}$.







It is known that when $q=8$, a zero $\alpha$ of $x^3+x+1$ generates $\Bbb{F}_8^*$. This is proven by the following calculation, where we repeatedly
use the fact that we are working in characteristic two, and that we have the
relation $\alpha^3=\alpha+1$.
$$
\eqalign{
\alpha^0&=&&=1,\\
\alpha^1&=&&=\alpha,\\
\alpha^2&=&&=\alpha^2,\\
\alpha^3&=&&=1+\alpha,\\
\alpha^4&=&\alpha\cdot\alpha^3=\alpha(1+\alpha)&=\alpha+\alpha^2,\\

\alpha^5&=&\alpha\cdot\alpha^4=\alpha(\alpha+\alpha^2)=\alpha^2+\alpha^3=\alpha^2+(1+\alpha)&=1+\alpha+\alpha^2,\\
\alpha^6&=&\alpha\cdot\alpha^5=\alpha(1+\alpha+\alpha^2)=\alpha+\alpha^2+\alpha^3=
\alpha+\alpha^2+(1+\alpha)&=1+\alpha^2,\\
\alpha^7&=&\alpha\cdot\alpha^6=\alpha(1+\alpha^2)=\alpha+\alpha^3=\alpha+(1+\alpha)&=1.
}$$



We see from the end results in the last column that all the non-zero
quadratic polynomials evaluated at $\alpha$ appear. This is yet another confirmation of the fact that $\alpha$ is a primitive element.



The discrete logarithm is used to replace the cumbersome multiplication (and raising

to an integer power) of the field with more familiar integer arithmetic. Exactly like the old-timers used logarithm tables to replace the error-prone multiplication with the easier addition.



For example
$$
(1+\alpha)(1+\alpha+\alpha^2)=\alpha^3\cdot\alpha^5=\alpha^8=\alpha^7\cdot\alpha=\alpha.
$$
Note that both the base-$\alpha$ discrete logarithms and its inverse mapping are needed. I generate such a table as a part of the program initialization, whenever I carry out extensive computer-aided calculations involving a finite field. The above table gives the discrete logarithm when read from right to left, and the inverse mapping (that we actually produced above) when read from left to right.
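Such tables are also easy to generate programmatically; here is a minimal sketch (my helper names, standard library only) that packs field elements into integers, bit $i$ holding the coefficient of $\alpha^i$. It covers the $\Bbb{F}_{16}$ table below as well, and the discrete logarithm is just the inverse dictionary of the table it builds:

```python
# Powers-of-the-generator table for GF(2^n), where the minimal polynomial
# gives the reduction alpha^n = poly_mask (as a bit mask of lower powers).
def power_table(poly_mask, nbits):
    elem, table = 1, {}
    for k in range(2**nbits - 1):
        table[k] = elem                # alpha^k as a bit mask
        elem <<= 1                     # multiply by alpha
        if elem >> nbits:              # degree overflow: reduce
            elem = (elem ^ (1 << nbits)) ^ poly_mask
    return table

gf8 = power_table(0b011, 3)     # x^3 + x + 1:  alpha^3 = alpha + 1
gf16 = power_table(0b0011, 4)   # x^4 + x + 1:  gamma^4 = gamma + 1
print(gf8[4] == 0b110)          # alpha^4 = alpha + alpha^2
print(gf16[7] == 0b1011)        # gamma^7 = gamma^3 + gamma + 1
log8 = {v: k for k, v in gf8.items()}   # the base-alpha discrete log table
```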







Similarly with $q=16$ we use $\gamma$, a zero of $x^4+x+1$. This time the table looks like
$$
\begin{aligned}
\gamma^0&=&1\\
\gamma^1&=&\gamma\\
\gamma^2&=&\gamma^2\\
\gamma^3&=&\gamma^3\\
\gamma^4&=&\gamma+1\\
\gamma^5&=\gamma(\gamma+1)=&\gamma^2+\gamma\\
\gamma^6&=\gamma(\gamma^2+\gamma)=&\gamma^3+\gamma^2\\

\gamma^7&=\gamma^4+\gamma^3=&\gamma^3+\gamma+1\\
\gamma^8&=(\gamma^4)^2=&\gamma^2+1\\
\gamma^9&=\gamma(\gamma^2+1)=&\gamma^3+\gamma\\
\gamma^{10}&=\gamma^4+\gamma^2=&\gamma^2+\gamma+1\\
\gamma^{11}&=&\gamma^3+\gamma^2+\gamma\\
\gamma^{12}&=\gamma^4+\gamma^3+\gamma^2=&\gamma^3+\gamma^2+\gamma+1\\
\gamma^{13}&=\gamma^4+\gamma^3+\gamma^2+\gamma=&\gamma^3+\gamma^2+1\\
\gamma^{14}&=\gamma^4+\gamma^3+\gamma=&\gamma^3+1\\
(\gamma^{15}&=\gamma^4+\gamma=&1)
\end{aligned}

$$



Thus for example
$$
(\gamma^3+1)(\gamma^2+1)=\gamma^{14}\cdot\gamma^8=\gamma^{22}=\gamma^7=\gamma^3+\gamma+1.
$$






As another example of the use of this table I want to discuss the problem of factorizing $x^4+x+1$ over $\Bbb{F}_4$. To that end we need to first identify a copy of $\Bbb{F}_4$ as a subfield of $\Bbb{F}_{16}$. We just saw that $\gamma$ is of order fifteen. Therefore $\gamma^5=\gamma^2+\gamma$ and $\gamma^{10}=\gamma^2+\gamma+1$ are third roots of unity. It is then trivial to check that we have a homomorphism of fields $\sigma:\Bbb{F}_4\to\Bbb{F}_{16}$ given by $\sigma(\beta)=\gamma^5$. Note that composing this (from either end) by the Frobenius automorphism gives an alternative embedding $\beta\mapsto \gamma^{10}$.




Basic Galois theory tells us that
$$
x^4+x+1=(x-\gamma)(x-\gamma^2)(x-\gamma^4)(x-\gamma^8)
$$
as we get the other roots by repeatedly applying the Frobenius automorphism $F:x\mapsto x^2$. Here we see that the factor
$$
(x-\gamma)(x-\gamma^4)=x^2+x(\gamma+\gamma^4)+\gamma^5=x^2+x+\gamma^5
$$
is stable under the automorphism $F^2$, and thus (as we also see directly!) has its

coefficients in the subfield $\sigma(\Bbb{F}_4)$. The same holds for the remaining factor
$$
(x-\gamma^2)(x-\gamma^8)=x^2+x(\gamma^2+\gamma^8)+\gamma^{10}=x^2+x+\gamma^{10}.
$$
Pulling back the effect of $\sigma$ we get the desired factorization
$$
x^4+x+1=(x^2+x+\beta)(x^2+x+\beta+1)
$$
in $\Bbb{F}_4[x]$.







Here is a local version of similar tables for $\Bbb{F}_{256}$


calculus - How to prove $\operatorname{si}(0) = -\pi/2$ without contour











How to prove $\operatorname{si}(0) = -\pi/2$ without contour integration ?
Where $\operatorname{si}(x)$ is the sine integral.


Answer



HINT:



Note that our integral may be rewritten as
$$\int_{0}^{\infty} \int_{0}^{\infty} e^{-xy} \sin x \ dy \ dx = \int_{0}^{\infty} \frac{\sin x}{x} \ dx$$
but integrating with respect to x we get that
$$\int_{0}^{\infty} \int_{0}^{\infty} e^{-xy} \sin x \ dx \ dy = \int_{0}^{\infty} \frac{1}{1+y^2} \ dy$$

Hence I hope you can handle it on your own.
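The remaining integral is elementary; as a numeric finish (a minimal sketch, assuming SciPy):

```python
# int_0^inf dy/(1 + y^2) = arctan(y) |_0^inf = pi/2.
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda y: 1/(1 + y*y), 0, np.inf)
print(val, np.pi/2)   # both ~1.5708
```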


real analysis - Proving an alternating Euler sum: $\sum_{k=1}^{\infty} \frac{(-1)^{k+1} H_k}{k} = \frac{1}{2} \zeta(2) - \frac{1}{2} \log^2 2$



Let $$A(p,q) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}H^{(p)}_k}{k^q},$$
where $H^{(p)}_n = \sum_{i=1}^n i^{-p}$, the $n$th $p$-harmonic number. The $A(p,q)$'s are known as alternating Euler sums.




Can someone provide a nice proof that
$$A(1,1) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} H_k}{k} = \frac{1}{2} \zeta(2) - \frac{1}{2} \log^2 2?$$





I worked for a while on this today but was unsuccessful. Summation by parts, swapping the order of summation, and approximating $H_k$ by $\log k$ were my best ideas, but I could not get any of them to work. (Perhaps someone else can?) I would like a nice proof in order to complete my answer here.



Bonus points for proving $A(1,2) = \frac{5}{8} \zeta(3)$ and $A(2,1) = \zeta(3) - \frac{1}{2}\zeta(2) \log 2$, as those are the other two alternating Euler sums needed to complete my answer.





Added: I'm going to change the accepted answer to robjohn's $A(1,1)$ calculation as a proxy for the three answers he gave here. Notwithstanding the other great answers (especially the currently most-upvoted one, the one I first accepted), robjohn's approach is the one I was originally trying. I am pleased to see that it can be used to do the $A(1,1)$, $A(1,2)$, and $A(2,1)$ derivations.

Answer



$A(1,1)$:
$$
\begin{align}

\sum_{n=1}^N\frac{(-1)^{n-1}}{n}H_n
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\sum_{n=2}^N\frac{(-1)^{n-1}}{n}H_{n-1}\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{n=2}^N\sum_{k=1}^{n-1}\frac{(-1)^{n-1}}{n}\left(\frac1k+\frac1{n-k}\right)\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{n=2}^N\sum_{k=1}^{n-1}\frac{(-1)^{n-1}}{k(n-k)}\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{k=1}^{N-1}\sum_{n=k+1}^N\frac{(-1)^{n-1}}{k(n-k)}\\
&=\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}+\frac12\sum_{k=1}^{N-1}\sum_{n=1}^{N-k}\frac{(-1)^{n+k-1}}{kn}\\
&=\color{#00A000}{\sum_{n=1}^N\frac{(-1)^{n-1}}{n^2}}
-\color{#0000FF}{\frac12\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=1}^{N-1}\frac{(-1)^{n-1}}{n}}\\
&+\color{#C00000}{\frac12\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}}\tag{1}
\end{align}

$$
where, using the Alternating Series Test, we have
$$
\begin{align}
&\color{#C00000}{\frac12\left|\sum_{k=1}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|}\\
&\le\frac12\left|\sum_{k=1}^{N/2}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|
+\frac12\left|\sum_{k=N/2}^{N-1}\frac{(-1)^{k-1}}{k}\sum_{n=N-k+1}^{N-1}\frac{(-1)^{n-1}}{n}\right|\\
&\le\frac12\cdot1\cdot\frac2N+\frac12\cdot\frac2N\cdot1\\
&=\frac2N\tag{2}
\end{align}

$$
Applying $(2)$ to $(1)$ and letting $N\to\infty$, we get
$$
\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}H_n=\color{#00A000}{\frac12\zeta(2)}-\color{#0000FF}{\frac12\log(2)^2}\tag{3}
$$
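A quick numerical check of $(3)$ by partial sums (a sketch in Python; convergence is slow, so only a few digits agree):

    # Partial sums of the alternating Euler sum vs. zeta(2)/2 - log(2)^2/2.
    import math

    N = 200000
    H, s = 0.0, 0.0
    for n in range(1, N + 1):
        H += 1.0 / n                  # harmonic number H_n
        s += (-1) ** (n - 1) * H / n  # partial sum up to n

    target = math.pi ** 2 / 12 - math.log(2) ** 2 / 2
    print(s, target)  # agree to roughly four decimal places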


Tuesday, August 20, 2019

Can someone explain the following modular arithmetic?

$7^{-1} \bmod 120 = 103$



I would like to know how $7^{-1} \bmod 120$ results in $103$.
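As a quick machine check of the claim (Python 3.8 and later compute modular inverses directly with three-argument pow):

    print(pow(7, -1, 120))  # 103
    print(7 * 103 % 120)    # 1, confirming that 103 inverts 7 modulo 120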

algebra precalculus - Prove $e^{i pi} = -1$





I recently heard that $e^{i \pi} = -1$.


WolframAlpha confirmed this for me; however, I don't see how this works.


Answer



This identity follows from Euler's formula, \begin{align} e^{i \theta} = \cos \theta + i \sin \theta, \end{align} which has many proofs. The one that I like the most is the following (sketched). Define $f(\theta) = e^{-i \theta}(\cos \theta + i \sin \theta)$. Use the product rule to show that $f^{\prime}(\theta)= 0$, so $f(\theta)$ is constant in $\theta$. Evaluate $f(0)$ to prove that $f(\theta) = f(0)$ everywhere.



Take $\theta = \pi$ for your claim.
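A one-line numerical check in Python (the tiny imaginary part is just floating-point error in $\sin\pi$):

    import cmath
    print(cmath.exp(1j * cmath.pi))  # (-1+1.2246467991473532e-16j)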


Monday, August 19, 2019

probability - Number of die rolls needed for average to converge to roll expectancy?



I'm aware there are similar questions, but I haven't been able to find what I'm asking.




Say we have an $n$-sided die, labeled with 1 thru $n$ and roll it $N$ times.
We take the average, call it $m$.



The die is fair, so the expectancy for the die roll is $E=\frac{n+1}{2}$.



How large must $N$ be for the average to be within $\epsilon$ from $E$ with a probability, say, $p$?



For example, 20-sided die:
$E=10.5$, choose $\epsilon = 0.01$, and $p=0.99$.




So how many times do I have to roll the 20-sided die for the average to lie in the interval $[10.49, 10.51]$ with 99% probability?


Answer



The variance of a single roll is $\frac{n^2-1}{12}$ so the standard deviation of the average of $N$ rolls is $\sqrt{\frac{n^2-1}{12N}}$.



For a normal distribution, the probability of being within $\Phi^{-1}\left(\frac{p +1}{2}\right)$ standard deviations of the mean is $p$, where $\Phi^{-1}$ is the inverse of the cumulative distribution of a standard normal.



For large $N$ you can use the central limit theorem as an approximation, so you want $\sqrt{\frac{n^2-1}{12N}}\Phi^{-1}\left(\frac{p +1}{2}\right) \le \epsilon$, i.e. $$N \ge \left(\frac{n^2-1}{12}\right) \left(\frac{\Phi^{-1}\left(\frac{p +1}{2}\right)}{\epsilon}\right)^2. $$



So in your numerical example $\left(\frac{p +1}{2}\right)=0.995$, $\Phi^{-1}\left(\frac{p +1}{2}\right) \approx 2.5758 $, $\epsilon=0.01$ and $n=20$ so $$N \ge 2206103.1$$ which is certainly large.
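The same computation as a short script (a sketch assuming SciPy is available; norm.ppf is the inverse normal CDF $\Phi^{-1}$):

    from scipy.stats import norm

    n, eps, p = 20, 0.01, 0.99
    z = norm.ppf((p + 1) / 2)              # ~2.5758 for p = 0.99
    N = (n ** 2 - 1) / 12 * (z / eps) ** 2
    print(z, N)                            # ~2.5758, ~2206103.1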



elementary set theory - Proof of cardinality inequality: $m_1\le m_2$, $k_1\le k_2$ implies $k_1m_1\le k_2m_2$



I have this homework question I am struggling with:



Let $k_1,k_2,m_1,m_2$ be cardinalities. Prove that if $m_1\le m_2$ and $k_1\le k_2$, then $k_1 m_1\le k_2 m_2$.



Can anyone please help me prove this?

Thanks!


Answer



First:




  • What does it mean that $k_1\le k_2$? It means there exists a one-to-one function $f\colon k_1\to k_2$.

  • What does $k_1 m_1$ mean? It is the cardinality of $A\times B$ where $|A|=k_1$ and $|B|=m_1$.



Suppose $k_1\le k_2$ and $m_1\le m_2$, we abuse the notation and assume that $k_i,m_i$ are also the sets given in the cardinalities at hand.




Now we need to find a function from $k_1\times m_1$ which is one-to-one, into $k_2\times m_2$. Since $k_1\le k_2$ there exists a one-to-one $f\colon k_1\to k_2$, and likewise $g\colon m_1\to m_2$ which is one-to-one.



Let $h\colon k_1\times m_1\to k_2\times m_2$ be defined as:
$$h(\langle k,m\rangle) = \langle f(k),g(m)\rangle$$



$h$ is well-defined, since for every $\langle k,m\rangle\in k_1\times m_1$ we have that $f(k)\in k_2$ and $g(m)\in m_2$, therefore $h(\langle k,m\rangle)\in k_2\times m_2$.



We need to show that $h$ is injective. Suppose $h(\langle a,b\rangle) = h(\langle c,d\rangle)$, then $\langle f(a),g(b)\rangle=\langle f(c),g(d)\rangle$. Therefore $f(a)=f(c)$ and $g(b)=g(d)$.




Since $f,g$ are both injective, we have that $a=c, b=d$ that is $\langle a,b\rangle=\langle c,d\rangle$.



It is a very standard exercise to prove the basic properties of the cardinal order, for example:



$A\le B$ and $C\le D$, then:




  1. $A+C\le B+D$,

  2. $A\cdot C\le B\cdot D$,

  3. $A^C\le B^D$.




And so forth. These are easily proved by the above method of composing the injective functions witnessing $A\le B$ and $C\le D$ into functions witnessing these properties.


calculus - Why isn't it mathematically rigorous to treat dx's and dy's as variables?




If I do something like:



$$\frac{dy}{dx} = D$$



$$dy = D \times dx$$



People would often say that it is not rigorous to do so. But if we start from the definition of the derivative:



$$\lim_{h \to 0}{\frac{f(x + h) - f(x)}{h}} = D$$




And by using the properties of limits we can say:



$$\frac{\lim_{h \to 0}{f(x + h) - f(x)}}{\lim_{h \to 0}{h}} = D$$



And then finally:



$$\lim_{h \to 0}(f(x + h) - f(x)) = D \times (\lim_{h \to 0} h)$$



Isn't this the same? Or am I missing something?



Answer



I can spot the following mistake in your attempt:





  1. And by using the properties of limits we can say:



    $$\frac{\lim_{h \to 0}{f(x + h) - f(x)}}{\lim_{h \to 0}{h}} = D$$






You cannot actually do that, as smcc has said. You must note that $$\lim_{x\to 0} \frac{f(x)}{g(x)}=\frac{\lim\limits_{x\to 0} f(x)}{\lim\limits_{x\to 0} g(x)}\quad\text{iff}\quad\lim\limits_{x\to 0} g(x)\neq 0.$$
So what you have said is not right.
So what you have said is not right.






Now coming to the actual question of whether $dx$ and $dy$ can be treated as variables:
most mathematicians treat $\frac{d}{dx}$ as a mathematical operator (like $+,-,*,/$) which acts on the variable $y$. That way, you can clearly understand what is a variable and what is not.



However, if you are strict enough to observe from the "limit" viewpoint, then observe that $\frac{dy}{dx}$ is nothing but $\lim\limits_{\Delta x\to 0}\frac{\Delta y}{\Delta x}$. Now $\frac{\Delta y}{\Delta x}$ is a fraction with $\Delta y$ and $\Delta x$ in the numerator and denominator. So you can view them as variables now.




Looks a bit weird, I know, but it entirely depends on how you want to support your argument and from which point of view you want to make your claim.


real analysis - Evaluation of an improper integral: $\int_0^\infty \frac{t\sin{t}}{u^2+t^2}\,dt$



I want to calculate the integral below for positive $u$.

$$\int_0^\infty \dfrac{t\sin{t}}{u^2+t^2}dt$$
Wolfram Alpha gives me a simple answer:
$\dfrac{\pi}{2}e^{-u}$.



but I cannot see how to get there.
Can anyone solve the above without using complex integration (e.g. residue methods)? I'm a beginner in analysis.



Thanks.


Answer



Hint. Let's consider the Laplace transform of $\displaystyle I(a):=\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}\:dx$. We have

$$
\begin{aligned}
\mathcal{L}\left(I(a)\right)(s)&=\mathcal{L}\left(\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}\:dx\right)(s)
\\& = \int_{0}^{\infty}\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}e^{-as}\:da\:dx
\\&= \int_{0}^{\infty}\frac{s}{(x^2+1)(s^2+x^2)}\;{dx}
\\&= \frac{\pi}{2(s+1)}
\end{aligned}
$$
giving
$$
I(a)=\int_{0}^{\infty}\frac{\cos(ax)}{x^2+1}\:dx=\mathcal{L}^{-1}\left(\frac{\pi}{2(s+1)}\right) =\frac{\pi}{2}e^{-a},\qquad a>0, \tag2
$$
then by differentiating $(2)$ with respect to $a$, one gets
$$
\int_{0}^{\infty}\frac{x\sin(ax)}{x^2+1}\:dx=\frac{\pi}{2}e^{-a},\qquad a>0. \tag3
$$
as given by Wolfram Alpha.
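A numerical check of $(3)$ at $a=1$ (a sketch assuming SciPy; the 'sin' weight invokes QUADPACK's Fourier-integral routine):

    import numpy as np
    from scipy.integrate import quad

    val, _ = quad(lambda x: x / (x * x + 1.0), 0, np.inf, weight='sin', wvar=1.0)
    print(val, np.pi / 2 * np.exp(-1.0))  # both ~0.5778636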


Sunday, August 18, 2019

combinatorics - Finding the maximum value of elements to be selected in a grid - ZIO $2009$, P$1$



[grid of integers from the original problem statement; image not preserved]




Hello Community! The above problem you see is a problem I got wrong. :( This is ZIO $2009$, P$1$.



I tried the problem and miserably got the wrong answer of $20$. Here is how my approach goes for part (a): notice that the largest element in the whole grid is $16$, which appears twice. It may be a good decision to start there to maximize the score, but unfortunately it is surrounded only by negative numbers. Still, if we start with the upper $16$, we get the value $16 - 9 + 13 = 20$. Starting with the other big numbers, we observe that the value is even smaller, so the answer must be $\boxed{20}$. However, like most trial-and-error attempts at optimization problems, this is wrong, as the answer is $29$.



Now the main question I have for this problem is: how do we ensure we have found the maximum value? Is there some sort of algorithm we can follow that is guaranteed to find the maximum? Note that this problem is from a pen-and-paper exam where you are given 10 minutes per sub-case (that is, 30 minutes for the whole problem), so complete trial and error is of no use at all.



I asked a similar problem on MSE: link, but haven't got any answers so far. Any help there would also be appreciated.



The answers are $29, 9, 20$.




I would be grateful if anyone could help.. Thanks!


Answer



Starting in the upper left corner, replace each number $x$ with $x$ plus the larger of the number above it and the number to the left of it. In (a) this results in $$\matrix{-2&-1&-4&0&-4\cr10&-6&6&-6&2\cr-6&7&-7&1&-2\cr1&3&19&4&14\cr-8&19&10&23&7\cr}$$ You must exit at the bottom row or the rightmost column, and you want to exit at the biggest exit number, which is the $23$ in the bottom row. Now trace your way back to the left and up from that $23$, always choosing the larger of the two possible numbers. This takes you left to $10$, then left to $19$ (or up to $19$, it doesn't matter), then up to $3$, up to $7$, left (or up) to $-6$, up to $10$, up to $-2$. The smallest number on the way was the $-6$, so that path will give you $23-(-6)=29$, which is the maximum.
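The procedure above mechanizes nicely. Below is a sketch of the same dynamic program in Python, run on a made-up grid, since the original contest grid is only available as an image. Assumed rules, per my reading of the answer: move only right or down, start at any cell, exit on the bottom row or rightmost column, and the score is the sum of the visited cells; taking max(0, ...) plays the same role as subtracting the minimal prefix in the traceback above.

    def best_path_score(grid):
        rows, cols = len(grid), len(grid[0])
        # best[i][j] = maximum sum of a right/down path ending at (i, j);
        # the max(0, ...) lets a path start fresh at (i, j).
        best = [[0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                up = best[i - 1][j] if i > 0 else 0
                left = best[i][j - 1] if j > 0 else 0
                best[i][j] = grid[i][j] + max(0, up, left)
        exits = [best[rows - 1][j] for j in range(cols)]   # bottom row
        exits += [best[i][cols - 1] for i in range(rows)]  # rightmost column
        return max(exits)

    # Hypothetical example grid (not the contest grid):
    demo = [[-2, 1, -3, 4, -4],
            [12, -8, 7, -6, 2],
            [-9, 13, -5, 8, -4]]
    print(best_path_score(demo))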


real analysis - I want to show that $f(x)=x\,f(1)$ where $f:\mathbb{R}\to\mathbb{R}$ is additive.











I know that if $f$ is continuous at one point then it is continuous at every point.

From this I want to show that $f(x)=xf(1)$.
Can anybody help me prove this?


Answer



HINTS:




  1. Look at $0$ first: $f(0)=f(0+0)=f(0)+f(0)$, so $f(0)=0=0\cdot f(1)$.


  2. Use induction to prove that $f(n)=nf(1)$ for every positive integer $n$, and use $f(0)=0$ to show that $f(n)=nf(1)$ for every negative integer as well.


  3. $f(1)=f\left(\frac12+\frac12\right)=f\left(\frac13+\frac13+\frac13\right)=\dots\;$.


  4. Once you’ve got it for $f\left(\frac1n\right)$, use the idea of (2) to get it for all rationals (spelled out after this list).



  5. Then use continuity at a point.
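Spelling out hints 3 and 4 for a rational $q=\frac{m}{n}$ with $m\in\Bbb Z$ and $n\ge 1$ (a small worked step, not part of the original answer):
$$f(1)=f\left(\underbrace{\tfrac1n+\cdots+\tfrac1n}_{n\text{ times}}\right)=n\,f\left(\tfrac1n\right)\implies f\left(\tfrac1n\right)=\tfrac1n\,f(1),$$
$$f\left(\tfrac{m}{n}\right)=m\,f\left(\tfrac1n\right)=\tfrac{m}{n}\,f(1).$$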



Convergence of the series $\sum_{n = 1}^{+\infty}{\left(n\sin{\frac{1}{n}}\right)^n}$



I have to study the convergence of the series



$$
\sum_{n = 1}^{+\infty}{\left(n\sin{\frac{1}{n}}\right)^n}
$$



and



$$
\sum_{n = 1}^{+\infty}{\left(\left(n\sin{\frac{1}{n}}\right)^n - 1\right)}.
$$



I know I should study the limit




$$
\lim_{n\to +\infty}{\left(n\sin{\frac{1}{n}}\right)^n}
$$



and that



$$
\lim_{n\to +\infty}{n\sin{\frac{1}{n}}} = 1
$$




but I don't see how it helps. Any ideas ?



Thank you in advance !


Answer



On the interval $(0,1)$ we have
$$ 1-\frac{x^2}{6} \leq \frac{\sin x}{x}\leq e^{-x^2/6} $$
hence $\left(n\sin\frac{1}{n}\right)^n$ behaves like $e^{-\frac{1}{6n}}=1-\frac{1}{6n}+O\left(\frac{1}{n^2}\right)$ for large values of $n$ and the given series are divergent.
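A quick numerical illustration of both divergences (a sketch in Python):

    import math

    # The terms approach 1 (so the first series diverges), at the predicted
    # rate e^{-1/(6n)}.
    for n in (10, 100, 1000, 10000):
        print(n, (n * math.sin(1.0 / n)) ** n, math.exp(-1.0 / (6 * n)))

    # Partial sums of the second series drift like -(1/6) log N.
    N = 100000
    s = sum((n * math.sin(1.0 / n)) ** n - 1 for n in range(1, N + 1))
    print(s, -math.log(N) / 6)  # same order of magnitude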


Saturday, August 17, 2019

calculus - Limit of $\frac{n^2}{4^n}$

How can I prove that the limit of $n^2/4^n$ as $n$ approaches infinity is $0$?
I want to solve it without using l'Hospital's rule. I tried some inequalities, but I couldn't find a nice solution.

linear algebra - Primitive elements of GF(8)



I'm trying to find the primitive elements of $GF(8),$ the minimal polynomials of all elements of $GF(8)$ and their roots, and calculate the powers of $\alpha^i$ for $x^3 + x + 1.$



If I did my math correct, I found the minimal polynomials to be $x, x + 1, x^3 + x + 1,$ and $x^3 + x^2 + 1,$ and the primitive elements to be $\alpha, \dots, \alpha^6 $



Would the powers of $\alpha^i$ as a polynomial (of degree at most two) be: $\alpha, \alpha^2, \alpha+ 1, \alpha^2 + \alpha, \alpha^2 + \alpha + 1,$ and $\alpha^2 + 1$?




Am I on the right track?


Answer



Those are all correct. Here's everything presented in a table:



$$\begin{array}{lll}
\textbf{element} & \textbf{reduced} & \textbf{min poly} \\
0 & 0 & x \\
\alpha^0 & 1 & x+1 \\
\alpha^1 & \alpha & x^3+x+1 \\
\alpha^2 & \alpha^2 & x^3+x+1 \\
\alpha^3 & \alpha+1 & x^3+x^2+1 \\
\alpha^4 & \alpha^2+\alpha & x^3+x+1 \\
\alpha^5 & \alpha^2+\alpha+1 & x^3 + x^2 + 1 \\
\alpha^6 & \alpha^2+1 & x^3 + x^2 + 1 \\
\end{array}$$
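The arithmetic behind this table is easy to mechanize. Here is a small sketch in Python, representing each element of $GF(8)$ as a 3-bit mask of its polynomial over $GF(2)$ and reducing products modulo $x^3+x+1$:

    MOD = 0b1011  # x^3 + x + 1

    def gf8_mul(a, b):
        """Multiply two GF(8) elements given as bitmasks of polynomials."""
        r = 0
        while b:
            if b & 1:
                r ^= a          # addition in GF(2^k) is XOR
            b >>= 1
            a <<= 1
            if a & 0b1000:      # degree hit 3: reduce using x^3 = x + 1
                a ^= MOD
        return r

    # Powers of alpha = x (mask 0b010) cycle through all 7 nonzero elements,
    # matching the "reduced" column of the table above.
    a, alpha = 1, 0b010
    for i in range(7):
        print(f"alpha^{i} = {a:03b}")
        a = gf8_mul(a, alpha)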


algebra precalculus - Rigorous proof that $\frac{1}{3} = 0.333\ldots$



I'm a PreCalculus student trying to find a rigorous proof that $\displaystyle\frac{1}{3} = 0.333\ldots$, but I couldn't find it. I think (just think) that this proof would start by proving that



$\displaystyle\sum_{i=1}^{\infty}3\cdot10^{-i} = \frac{1}{3}$. My guesses (assuming that proving that $\displaystyle\sum_{i=1}^{\infty}\left(\frac{1}{10}\right)^i$ converges is trivial):




$\displaystyle\sum_{i=1}^{\infty}3\cdot10^{-i} = 3\cdot\sum_{i = 1}^{\infty}10^{-i} = 3\cdot\sum_{i=1}^{\infty}\left(\frac{1}{10}\right)^i = 3\cdot\left(\frac{1}{1 - \frac{1}{10}}-1\right) = 3\cdot\left(\frac{10}{9}-1\right) = \frac{1}{3}$.



Questions: is this completely rigorous? Which flaws could be found in this proof? How can I improve it?



PS. I'm not sure how to tag this. Feel free to edit, if necessary.


Answer



Since $\sum_{i=1}^\infty 3\cdot 10^{-i}$ is what the notation "$0.333...$" means, your argument is perfectly good. It's not just the "start of a proof", it is all there is to it.



Okay, perhaps it is not really trivial to prove that the geometric series converges, but straightforward it is. Just plug in the definition of the sum of a series and crank the handle, using standard tricks to rewrite each of the partial sums in turn.
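For completeness, here is the handle-cranking on the partial sums written out (a standard computation):
$$\sum_{i=1}^{N} 3\cdot 10^{-i} = 3\cdot\frac{\frac{1}{10}\left(1-10^{-N}\right)}{1-\frac{1}{10}} = \frac{1}{3}\left(1-10^{-N}\right)\to\frac{1}{3}\quad\text{as }N\to\infty.$$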



algebra precalculus - $8^n - 3^n$ is divisible by $5$

My computations:
Base case: $8-3 = 5$ is divisible by $5$.
Inductive hypothesis:
$8^k - 3^k = 5M$

$8^k = 5M + 3^k$
$3^k = 8^k - 5M$



My final answer is $5(11M + 11^k)$.
Please tell me if I have any mistakes regarding my computations.

elementary number theory - Linear Diophantine Equations in Three Variables

$$
3x+6y+5z=7
$$

The general solution to this linear Diophantine equation, as described
here (Page 7-8), is:



$$
x = 5k+2l+14
$$
$$
y = -l
$$
$$
z = -7-3k
$$
$$
k,l \in \mathbb{Z}
$$



If I plug the original equation into Wolfram Alpha, the solution is:
$$
y = 5n+2x+2
$$

$$
z =-6n-3x-1
$$
$$
n \in \mathbb{Z}
$$



I can rewrite this as:



$$
x = l
$$
$$
y = 5k+2l+2
$$
$$
z = -6k-3l-1
$$
$$
k,l \in \mathbb{Z}
$$



However, now two equations depend on the two variables $k,l$ and one equation depends on the single variable $l$.
In the first solution, one equation depends on two variables and two equations each depend on a single variable.



Questions:



How can I get from a representation of the general solution like the one from Wolfram Alpha to one where every equation except one depends on a single distinct variable?



Is there always such a representation?
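As a sanity check (a short Python sketch), both parametrizations, with $z=-7-3k$ in the first one, do satisfy the original equation:

    # Verify 3x + 6y + 5z = 7 for both parametrizations over a small range.
    for k in range(-3, 4):
        for l in range(-3, 4):
            x1, y1, z1 = 5 * k + 2 * l + 14, -l, -7 - 3 * k        # first form
            x2, y2, z2 = l, 5 * k + 2 * l + 2, -6 * k - 3 * l - 1  # Wolfram form
            assert 3 * x1 + 6 * y1 + 5 * z1 == 7
            assert 3 * x2 + 6 * y2 + 5 * z2 == 7
    print("both parametrizations check out")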

Linear Algebra- Independence [Probably a Stupid Question]


Given two vector spaces $V \subset W$ over a field $\mathbb{F}$ (where $V$ is a proper subspace of $W$), suppose we have three elements $x,y,z \in W$. Are the following two statements true? (I can't find any reason for them not to be true, but it seems strange that both would hold.)


(a) If $x,y,z$ are linearly independent as elements in $V$, then they are also independent in $W$.


(b) If $x,y,z$ are linearly independent as elements in $W$, then they are also independent in $V$.


What do you think? Is it true that both statements are correct?


Thanks in advance


Answer



Yes, both are correct.


Both just say that if $\alpha x+\beta y+\gamma z=\vec 0$, where $\alpha,\beta,\gamma\in\Bbb F$, then $\alpha=\beta=\gamma=0_{\Bbb F}$. This works because the scalar multiplication, vector addition, and zero vector are the same in $V$ and $W$.



(And no, it’s not a stupid question: this is the sort of picky detail that you should worry about.)


elementary number theory - Bezout identity of 720 and 231 by hand impossible?

Is it possible to solve this by hand, without using the Extended Euclidean Algorithm?



We do the Euclidean algorithm and we get:



720 = 231 * 3 + 27



231 = 27 * 8 + 15




27 = 15 * 1 + 12



15 = 12 * 1 + 3



12 = 3 * 4 + 0



The GCD is 3.



We now isolate the remainders:




27 = 720 - 231 * 3



15 = 231 - 27 * 8



12 = 27 - 15 * 1



3 = 15 - 12 * 1



We proceed by substitution:




3 = 15 - 12 * 1



What now? How can we proceed when the factor is just $1$? Is no substitution possible?



Help! Thanks!
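Whatever one does on paper, a quick machine check (a Python sketch of the back-substitution the question is attempting) confirms that it does terminate, with $3 = 53\cdot 231 - 17\cdot 720$:

    def egcd(a, b):
        """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
        if b == 0:
            return a, 1, 0
        g, s, t = egcd(b, a % b)
        return g, t, s - (a // b) * t

    g, s, t = egcd(720, 231)
    print(g, s, t)                  # 3 -17 53
    assert s * 720 + t * 231 == g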

Friday, August 16, 2019

exponentiation - Why is X raised to the power of 0 = 1?


So as the title states, why is $5^0 = 1$ and not $5$ or $0$? Is it only because the other exponent laws wouldn't work otherwise?


Answer



$5^0$ is the product of zero $5$s, which is the empty product, and there are plenty of reasons to define that as $1$.


What's wrong with that proof?




What is wrong with this proof?



$(-1)=(-1)^{\frac{2}{2}}=(-1)^{2\times \frac{1}{2}}=\sqrt{1}=1$ then $1=-1$


Answer



$x^{\frac{1}{2}}$ is a multiple-valued "function", since in general $x$ has two square roots. One could also write:



$$\sqrt1=-1$$
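Equivalently, in principal-value terms over the reals (a complementary remark): the rule $(x^a)^b=x^{ab}$ is only guaranteed for $x>0$, and for real $x$ one has
$$\left(x^2\right)^{1/2}=\sqrt{x^2}=|x|,$$
so $\left((-1)^2\right)^{1/2}=1\neq -1$, which is exactly where the proof breaks.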


calculus - How come such different methods result in the same number, $e$?



I guess the proof of the identity
$$
\sum_{n = 0}^{\infty} \frac{1}{n!} \equiv \lim_{x \to \infty} \left(1 + \frac{1}{x}\right)^x
$$
explains the connection between such different calculations. How is it done?


Answer



This identity, in isolation, is certainly confusing. It becomes less so once you know that it is just a special case of a more general identity




$$\sum_{n \ge 0} \frac{x^n}{n!} = \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n.$$



Why does this hold? Well, let's call the function on the left $e^x$. Then by inspection, $\frac{d}{dx} e^x = e^x$. General theory tells you that the solution to $y' = y$ is unique provided you fix the value of $y(0)$ (and this is also physically intuitive), and here $e^0 = 1$ by inspection, so that tells you that $e^x$ is the unique function satisfying $e^0 = 1$ and $\frac{d}{dx} e^x = e^x$.






Chandru1's deleted answer reminded me that the proof of this is actually completely elementary. Suppose $f$ is a function which satisfies $f'(x) = f(x)$. Then



$$\frac{d}{dx} \left( e^{-x} f(x) \right) = e^{-x} f'(x) - e^{-x} f(x) = 0$$




hence $e^{-x} f(x) = C_f$ for some constant $C_f$. Substituting $f(x) = e^x$ gives $e^{-x} e^x = C$, and evaluating at $x = 0$ gives $C = 1$, so $e^{-x} e^x = 1$. Now it follows that $f(x) = C_f e^x$.






Why does the function on the RHS also satisfy this property? Let's call this function $\exp(x)$. Then



$$\exp(cx) = \lim_{n \to \infty} \left( 1 + \frac{cx}{n} \right)^n = \lim_{m \to \infty} \left( 1 + \frac{x}{m} \right)^{mc} = \exp(x)^c$$



where $m = \frac{n}{c}$. Setting $c = 1 + \frac{y}{x}$, this gives




$$\exp(x + y) = \exp(x)^{1 + \frac{y}{x}} = \exp(x) \exp(x)^{ \frac{y}{x} } = \exp(x) \exp(y)$$



from which it follows that



$$\frac{d}{dx} \exp(x) = \lim_{h \to 0} \frac{\exp(x) \exp(h) - \exp(x)}{h} = \exp(x) \lim_{h \to 0} \frac{\exp(h) - 1}{h}$$



so it follows that $\exp(x)$ satisfies $\exp(0) = 1$ and $\frac{d}{dx} \exp(x) = \exp(x)$, which means it must equal $e^x$ - at least, as soon as we know that



$$\lim_{h \to 0} \frac{\exp(h) - 1}{h} = 1.$$




But



$$\exp(h) = \lim_{n \to \infty} \left( 1 + \frac{h}{n} \right)^n = \lim_{n \to \infty} \left( 1 + {n \choose 1} \frac{h}{n} + {n \choose 2} \frac{h^2}{n^2} + ... \right) = \lim_{n \to \infty} \left( 1 + h + R(h, n) \right)$$



where $|R(h, n)| \le \sum_{k \ge 2} |h|^k = O(|h|^2)$ by the ratio test, from which the conclusion follows.






That last step, by the way, is an echo of the standard proof, where one expands




$$\left( 1 + \frac{x}{n} \right)^n = \sum_{k=0}^n {n \choose k} \frac{x^k}{n^k}$$



and uses the fact that $\frac{1}{n^k} {n \choose k} \to \frac{1}{k!}$ as $n \to \infty$. While this is fairly hands-on, in my opinion it obscures an underlying principle behind this problem, which is that $e^x$ is a very special function and all of the equivalent ways of defining it are based on one of its special properties.



One really nice way to interpret the above identity is that it is precisely what you get by applying Euler's method to the ODE $y' = y$ with initial conditions $y(0) = 1$.
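That interpretation is easy to see numerically: Euler's method for $y'=y$, $y(0)=1$ with $n$ steps of size $x/n$ multiplies by $(1+x/n)$ at each step, so it produces exactly $\left(1+\frac{x}{n}\right)^n$. A short Python sketch:

    import math

    def euler(x, n):
        """Euler's method for y' = y, y(0) = 1, with n steps of size x/n."""
        y, h = 1.0, x / n
        for _ in range(n):
            y += h * y      # y_{k+1} = y_k + h*y_k = (1 + h)*y_k
        return y

    x = 1.0
    for n in (10, 1000, 100000):
        print(n, euler(x, n), (1 + x / n) ** n)   # agree up to rounding
    print("series:", sum(1 / math.factorial(k) for k in range(20)), "e:", math.e)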


calculus - Why the limit of $frac{sin(x)}{x}$ as $x$ approaches 0 is 1?

I need a rigorous proof that verifies why the limit of $\dfrac{\sin(x)}{x}$ as $x$ approaches $0$ is $1$.
I tried before, but I do not know how to start this proof.
I would appreciate it if somebody could help me. Thanks.

Thursday, August 15, 2019

calculus - a continuous function, satisfying $f(\alpha) = f(\beta) + f(\alpha-\beta)$ for any $\alpha, \beta \in \mathbb{R}$




Hi, I need some help with this problem:



Assume $f : \mathbb{R} \to \mathbb{R}$ is a continuous function satisfying $f(\alpha) = f(\beta) + f(\alpha-\beta)$ for any $\alpha, \beta \in \mathbb{R}$, and $f(0) = 0$. Then $f(\alpha) = \alpha f(1)$.




Any hints? Thank you.


Answer



Note that $f(-\beta)=f(0-\beta)=f(0)-f(\beta)=-f(\beta)$; then $f(\alpha-\beta)=f(\alpha)+f(-\beta)$, and also $f(\alpha+\beta)=f(\alpha-(-\beta))=f(\alpha)+f(-(-\beta))=f(\alpha)+f(\beta)$, so $f$ satisfies the Cauchy functional equation together with the continuity condition.



See the hint of @Nate, or see exercise 1.6.2 in the book "Kaczor and Nowak - Problems in Mathematical Analysis II (2000)", and then conclude your proof.


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...