Thursday, May 31, 2018

convergence divergence - Find the partial sum of a given series?

I found this thread, but since it says that I can't ask for help there, I'm making a new one.
How to find the partial sum of a given series?



I have this series (WolframAlpha):
$$ \sum_{n = 3}^\infty \frac{1}{n(n - 2)} $$




And I need the partial sum formula. I don't get how it is derived. I got to the partial fraction decomposition described in the linked thread. I made it to the point where I had to put the newly found coefficients into the sum. This is what I got:
(WolframAlpha).
Ignore the output. Now how do I get from it to the partial sum formula in the first Wolfram Alpha link?
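
For reference, here is one way the partial sum formula can be derived by telescoping (a sketch, not taken from the linked thread). Partial fractions give $\frac{1}{n(n-2)}=\frac{1}{2}\left(\frac{1}{n-2}-\frac{1}{n}\right)$, so

$$\sum_{n=3}^{N}\frac{1}{n(n-2)}=\frac{1}{2}\sum_{n=3}^{N}\left(\frac{1}{n-2}-\frac{1}{n}\right)=\frac{1}{2}\left(1+\frac{1}{2}-\frac{1}{N-1}-\frac{1}{N}\right)=\frac{3}{4}-\frac{1}{2(N-1)}-\frac{1}{2N},$$

which tends to $\frac{3}{4}$ as $N\to\infty$.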

linear algebra - Practical use of matrix right inverse

Consider a matrix $A \in \mathbb{R}^{m \times n}$. When $rank(A) = n$, this implies the existence of a left inverse: $A_l^{-1} = (A^\top A)^{-1}A^\top$ such that $A_l^{-1}A = I_{n\times n}$.


Similarly if $rank(A) = m$ then this implies the existence of a right inverse: $A_r^{-1} = A^\top(A A^\top )^{-1}$ such that $A A_r^{-1} = I_{m\times m}$.


I understand how the concept of a left inverse naturally follows in, say, the solution of a least squares problem $Ax = b$ with $rank(A)=n < m$. But what would be a practical use of the right inverse?

I expect it to involve the fact that in this case $rank(A)=m < n$, so that there are now infinitely many solutions to $Ax = b$, and that the right inverse somehow seeks the solution which minimises the length of the solution vector?
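
That guess is right; here is a short sketch of the standard fact (not from the original post). With $rank(A) = m < n$, the vector $x^* = A_r^{-1}b = A^\top(AA^\top)^{-1}b$ satisfies $Ax^* = b$. For any other solution $x$, the difference $x - x^*$ lies in the null space of $A$, while $x^*$ lies in the row space of $A$, so the two are orthogonal and

$$\|x\|^2 = \|x^*\|^2 + \|x - x^*\|^2 \ge \|x^*\|^2.$$

Hence the right inverse produces the minimum-norm solution of the underdetermined system.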

real analysis - Proving that $\lim_{n\rightarrow \infty} \frac{n^k}{2^n}=0$

I need to prove that $$\lim_{n\rightarrow \infty} \frac{n^k}{2^n}=0$$ where $k\in \mathbb{N}$. All I can think of is to use something like L'Hopital's rule, but I suppose there must be another, simpler way. I would much appreciate it if someone could give me a hint. Thanks
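
One standard elementary hint (a sketch, not from the original thread): for $n \ge 2k+2$ the binomial theorem gives

$$2^n = (1+1)^n \ge \binom{n}{k+1} \ge \frac{(n-k)^{k+1}}{(k+1)!} \ge \frac{(n/2)^{k+1}}{(k+1)!},$$

so that

$$0 \le \frac{n^k}{2^n} \le \frac{(k+1)!\,2^{k+1}}{n} \xrightarrow[n\to\infty]{} 0.$$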

algebra precalculus - How to find the sum of the series by treating the denominator so as to split the fraction $\frac{1}{a_1a_2a_3} + \frac{1}{a_2a_3a_4}+\cdots$

This is a series built from an A.P. (arithmetic progression):



$\frac{1}{a_1a_2} + \frac{1}{a_2a_3}+\frac{1}{a_3a_4}+\cdots+\frac{1}{a_{n}a_{n+1}}$ (where $a_1, a_2, a_3, \ldots$ are terms of an A.P.)



When we sum such a series, we use the common difference to split each fraction into two parts, one negative and one positive, i.e.




$\frac{1}{d}\left[ \frac{a_2-a_1}{a_1a_2}+\frac{a_3-a_2}{a_2a_3}+\cdots+\frac{a_{n+1}-a_{n}}{a_{n}a_{n+1}}\right]$ (where $d$ is the common difference, i.e. the difference of two consecutive terms)



$=\frac{1}{d}\left[ \frac{1}{a_1} -\frac{1}{a_2}+\frac{1}{a_2}-\frac{1}{a_3}+\cdots+\frac{1}{a_{n}} -\frac{1}{a_{n+1}}\right]$, which telescopes to



$\frac{1}{d}\left[ \frac{1}{a_1}-\frac{1}{a_{n+1}}\right]$ $= \frac{1}{d}\left[ \frac{a_{n+1}-a_1}{a_1a_{n+1}}\right]$ (here $a_{n+1}$ is the $(n+1)$th term)



My question is: do we have some method with the help of which we can split the following series (whose terms come from the same arithmetic progression)?



$\frac{1}{a_1a_2a_3} + \frac{1}{a_2a_3a_4}+\frac{1}{a_3a_4a_5}+\cdots+\frac{1}{a_{n}a_{n+1}a_{n+2}}$




If yes, then please suggest that method. Thanks
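
The same trick extends directly (a sketch added for completeness, not part of the original post). Since $a_{k+2}-a_k = 2d$,

$$\frac{1}{a_k a_{k+1}} - \frac{1}{a_{k+1} a_{k+2}} = \frac{a_{k+2}-a_k}{a_k a_{k+1} a_{k+2}} = \frac{2d}{a_k a_{k+1} a_{k+2}},$$

so the sum telescopes to

$$\sum_{k=1}^{n}\frac{1}{a_k a_{k+1} a_{k+2}} = \frac{1}{2d}\left(\frac{1}{a_1 a_2} - \frac{1}{a_{n+1} a_{n+2}}\right).$$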

matrices - Max eigenvalue of symmetric matrix and its relation to diagonal values

I saw a few questions about this, but I still can't understand.



Let $A$ be a symmetric matrix and $\lambda_{\max}$ its largest eigenvalue. Is the following true for all $A$?



$$
\lambda_{\max} \ge a_{ii} \quad \forall i
$$



That is, is the largest eigenvalue of a symmetric matrix always greater than or equal to each of its diagonal entries?




Is it somehow related to spectral radius and the following equation?



$$
\rho(A)=\max|\lambda_i|.
$$
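
A sketch of why this holds (standard argument, added for completeness): for a real symmetric $A$, the Rayleigh quotient characterisation gives

$$\lambda_{\max} = \max_{\|x\|=1} x^\top A x \ge e_i^\top A e_i = a_{ii} \quad \text{for every } i,$$

where $e_i$ is the $i$th standard basis vector. So the answer is yes. The spectral radius identity $\rho(A)=\max|\lambda_i|$ is related but not sufficient on its own, since it only controls absolute values.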

Wednesday, May 30, 2018

real analysis - Natural derivation of the complex exponential function?



Bourbaki shows in a very natural way that every continuous group isomorphism of the additive reals to the positive multiplicative reals is determined by its value at $1$, and in fact, that every such isomorphism is of the form $f_a(x)=a^x$ for $a>0$ and $a\neq 1$. We get the standard real exponential (where $a=e$) when we notice that for any $f_a$, $(f_a)'=g(a)f_a$ where $g$ is a continuous group isomorphism from the positive multiplicative reals to the additive reals. By the intermediate value theorem, there exists some positive real $e$ such that $g(e)=1$ (by our earlier classification of continuous group homomorphisms, we notice that $g$ is in fact the natural log).




Notice that every deduction above follows from a natural question. We never need to guess anything to proceed.



Is there any natural way like the above to derive the complex exponential? The only way I've seen it derived is as follows:



Derive the real exponential by some method (inverse function to the natural log, which is the integral of $1/t$ on the interval $[1,x)$, Bourbaki's method, or some other derivation), then show that it is analytic with infinite radius of convergence (where it converges uniformly and absolutely), which means that it is equal to its Taylor series at 0, which means that we can, by a general result of complex analysis, extend it to an entire function on the complex plane.



This derivation doesn't seem natural to me in the same sense as Bourbaki's derivation of the real exponential, since it requires that we notice some analytic properties of the function, instead of relying on its unique algebraic and topological properties.



Does anyone know of a derivation similar to Bourbaki's for the complex exponential?


Answer




I think essentially the same characterization holds. The complex exponential is the unique Lie group homomorphism from $\mathbb{C}$ to $\mathbb{C}^*$ such that the (real) derivative at the identity is the identity matrix.


probability - Dice: Expected highest value with a tricky condition

I know how to calculate the expected value $E$ of a roll of $n$ $k$-sided dice if we are supposed to keep the highest number rolled. If I am not wrong, the formula is:



$$E = k - \frac{1^n + 2^n + \cdots + (k-1)^n}{k^n}$$




But what I would like to know is how the following condition would affect that expected value: every 1 that is rolled cancels the highest remaining number rolled. If the number of 1s is at least half of the total number of dice, then the value of the roll is 0.



For example, rolling 6 10-sided dice:




  • Roll #1: 7, 4, 1, 10, 1, 9 ----> The 1 cancels the 10, the second 1 cancels the 9, value of the roll is 7


  • Roll #2: 1, 9, 6, 1, 1, 3 ----> The first 1 cancels the 9, the second 1 cancels the 6, the third 1 cancels the 3, there are no more dice, the value of the roll is 0


  • Roll #3: 8, 1, 1, 1, 6, 1 ----> The first 1 cancels the 8, the second 1 cancels the 6, the only remaining dice are 1s which count as fails, so the value of the roll is 0.





As I said, I know how to calculate the expected value of the highest number in a roll of n dice, but I do not know where to start to add the condition of the 1s canceling out the highest numbers, so I would appreciate some help.



Thanks a lot.
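
Although it does not give a closed form, a quick Monte Carlo simulation can estimate the modified expected value and sanity-check any formula you derive. Below is a minimal sketch (my own illustration, not from the thread; the parameters $n=6$ dice and $k=10$ sides match the example). It relies on the fact that, after sorting a roll in ascending order, the 1s occupy the lowest positions and each 1 cancels one die from the top:

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const int n = 6, k = 10;          // number of dice, sides per die
    const long trials = 1000000;      // Monte Carlo sample size
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> die(1, k);

    double total = 0.0;
    for (long t = 0; t < trials; ++t) {
        std::vector<int> roll(n);
        int ones = 0;
        for (int &r : roll) { r = die(rng); if (r == 1) ++ones; }
        if (2 * ones >= n) continue;  // at least half are 1s: roll is worth 0
        std::sort(roll.begin(), roll.end());
        // the `ones` highest dice are cancelled; the value is the next one down
        total += roll[n - 1 - ones];
    }
    std::cout << "estimated E = " << total / trials << '\n';
    return 0;
}

Note that the guard guarantees $2\cdot\text{ones} < n$, so the index $n-1-\text{ones}$ always lands on a die greater than 1.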

Tuesday, May 29, 2018

real analysis - Proving that R is uncountable

Is the following proof for the uncountability of $\Bbb{R}$ sufficient? We first assume that the interval $(0,1)$ is countable. So we can define a bijection $f:\Bbb{N}\rightarrow(0,1)$



$$x_{1}=0.x_{11}x_{12}x_{13}\ldots\\x_{2}=0.x_{21}x_{22}x_{23}\ldots\\x_3=0.x_{31}x_{32}x_{33}\ldots\\\vdots$$



where $x_{ij}$ is the digit in the $j$th decimal place of the $i$th number in the list. Now construct some number $y$ whose $j$th decimal place is $y_{j}=x_{jj}+1$ when $x_{jj}\neq9$ and $0$ otherwise. But $y$, while clearly in $(0,1)$, is not in the list, for it differs from $x_{1}$ in the first decimal place, from $x_2$ in the second, and so on. So $f:\Bbb{N}\rightarrow(0,1)$ is not surjective, and so not a bijection. $(0,1)$ is therefore not countable, and so neither is $\Bbb{R}$.

calculus - What is the difference between differentiability of a function and continuity of its derivative?



I am sort of confused regarding differentiable functions, continuous derivatives, and continuous functions. And I just want to make sure I'm thinking about this correctly.




(1) If you have a function that's continuous everywhere, then this doesn't necessarily mean its derivative exists everywhere, correct? E.g., $$f(x) = |x|$$ has an undefined derivative at $x=0$.



(2) So this above function, even though it's continuous, does not have a continuous derivative?



(3) Now say you have a derivative that's continuous everywhere, then this doesn't necessarily mean the underlying function is continuous everywhere, correct? For example, consider
$$
f(x) = \begin{cases}
1 - x & x<0 \\
2 - x & x \geq 0
\end{cases}
$$

So its derivative is -1 everywhere, hence continuous, but the function itself is not continuous?



So what does a function with a continuous derivative say about the underlying function?


Answer



A function may or may not be continuous.



If it is continuous, it may or may not be differentiable. $f(x) = |x|$ is a standard example of a function which is continuous, but not (everywhere) differentiable. However, any differentiable function is necessarily continuous.



If a function is differentiable, its derivative may or may not be continuous. This is a bit more subtle, and the standard example of a differentiable function with discontinuous derivative is a bit more complicated:

$$
f(x) = \begin{cases}x^2\sin(1/x) & \text{if } x\neq 0\\
0 & \text{if } x = 0\end{cases}
$$

It is differentiable everywhere, $f'(0) = 0$, but $f'(x)$ oscillates wildly between (a little less than) $-1$ and (a little more than) $1$ as $x$ comes closer and closer to $0$, so it isn't continuous.
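
For completeness, the standard computation behind that claim (not spelled out in the original answer): for $x \neq 0$ the product and chain rules give

$$f'(x) = 2x\sin\left(\frac1x\right) - \cos\left(\frac1x\right),$$

while at the origin

$$f'(0) = \lim_{h\to 0}\frac{h^2\sin(1/h)}{h} = \lim_{h\to 0}h\sin\left(\frac1h\right) = 0.$$

The $\cos(1/x)$ term is what oscillates between $-1$ and $1$ without settling down as $x\to 0$.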


sequences and series - finding $a_1$ in an arithmetic progression



Given an arithmetic progression such that: $$a_{n+1}=\frac{9n^2-21n+10}{a_n}$$



How can I find the value of $a_1$?



I tried using $a_{n+1}=a_1+nd$ but I think it's a loop..



Thanks.


Answer




We have
$$a_2=\frac{9-21+10}{a_1}\Rightarrow a_1a_2=-2\tag1$$
and
$$a_3=\frac{36-42+10}{a_2}\Rightarrow a_2a_3=4\tag2$$



Since we have $a_1+a_3=2a_2$, with $(1)(2)$, we have
$$a_1+\frac{4}{a_2}=2a_2\Rightarrow a_1a_2+4=2a_2^2\Rightarrow a_2=\pm1.$$
Can you take it from here?
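
One way to finish from there (my continuation, not part of the original answer): from $(1)$, $a_1 = -2/a_2$, so $a_2 = 1$ gives $a_1 = -2$ (with $d = 3$, progression $-2, 1, 4, \ldots$), while $a_2 = -1$ gives $a_1 = 2$ (with $d = -3$, progression $2, -1, -4, \ldots$). Hence $a_1 = \mp 2$.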


calculus - How do I approach solving this indeterminate limit? $\lim\limits_{h\to 0}\frac{1}{h}\ln\left(\frac{2+h}{2}\right)$

Disclaimer: I am a middle aged adult learning Calculus. This is not a student posting his homework assignment. Thank humanity for this great forum!


$$\lim_\limits{h\to 0}\frac{1}{h}\ln\left(\frac{2+h}{2}\right)$$


1) Can't directly sub in $h$. So, normally you reduce and cancel. Can you point me in the right direction? The directions say "manipulate the expression so L'Hopital's is used", so I think L'Hopital's is involved. I'm just not sure how to deal with the $\frac{1}{h}$.


$$\lim_\limits{x\to \infty}\frac{x^k}{e^x}$$


2) Also, any tips on the one above? If $k$ is a positive integer, what is the limit above?


Thanks for any guidance.
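
A hint for 1) (a sketch, not from the original post): move the $\frac1h$ into a quotient, so the limit becomes a genuine $\frac00$ form,

$$\lim_{h\to 0}\frac{\ln\left(\frac{2+h}{2}\right)}{h} \overset{\text{L'H}}{=} \lim_{h\to 0}\frac{\frac{1}{2+h}}{1} = \frac12.$$

Alternatively, recognise $\frac{\ln(2+h)-\ln 2}{h}$ as the difference quotient of $\ln x$ at $x=2$, which gives the same $\frac12$. For 2), the limit is $0$; see the question "How to prove that exponential grows faster than polynomial?" further down this page.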

Monday, May 28, 2018

calculus - Sum of infinite series $1+\frac22+\frac3{2^2}+\frac4{2^3}+\cdots$





How do I find the sum of $\displaystyle 1+{2\over2} + {3\over2^2} + {4\over2^3} +\cdots$



I know the sum is $\sum_{n=0}^\infty \frac{n+1}{2^n}$ and the ratio of consecutive terms is $\frac{n+2}{2(n+1)}$, but I don't know how to continue from here.


Answer



After establishing convergence, you could do the following:

$$S = 1 + \frac22 + \frac3{2^2}+\frac4{2^3}+\dots$$
$$\implies \frac12S = \frac12 + \frac2{2^2} + \frac3{2^3}+\frac4{2^4}+\dots$$
$$\implies S - \frac12S = 1+\frac12 + \frac1{2^2} + \frac1{2^3}+\dots$$
which is probably something you can recognise easily...
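
To finish (not in the original answer): the right-hand side is $1$ plus a geometric series, so

$$\frac12 S = 1 + \frac{1/2}{1-1/2} = 2 \quad\Longrightarrow\quad S = 4.$$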


real analysis - How to prove that exponential grows faster than polynomial?



In other words, how to prove:





For all real constants $a$ and $b$ such that $a > 1$,



$$\lim_{n\rightarrow\infty}\frac{n^b}{a^n} = 0$$




I know the definition of limit but I feel that it's not enough to prove this theorem.


Answer



We could prove this by induction on integers $k$:




$$
\lim_{n \to \infty} \frac{n^k}{a^n} = 0.
$$



The case $k = 0$ is straightforward. I will leave the induction step to you. To see how this implies the statement for all real $b$, just note that every real number is less than some integer. In particular, $b \leq \lceil b \rceil$. Thus,



$$
0 \leq \lim_{n \to \infty} \frac{n^b}{a^n} \leq \lim_{n \to \infty} \frac{n^{\lceil b \rceil}}{a^n} = 0.
$$




The first inequality follows since all the terms are positive. The last equality follows from the induction we established previously.
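
One way to carry out the induction step that the answer leaves to the reader (a sketch, with the statement quantified over all $a > 1$): the cases $k=0$ and $k=1$ are straightforward (for $k=1$, write $a = 1+h$ and use $a^n \ge \binom{n}{2}h^2$). For the step, note that

$$\frac{n^{k+1}}{a^n} = \frac{n^k}{(\sqrt a)^n}\cdot\frac{n}{(\sqrt a)^n},$$

where $\sqrt a > 1$; the first factor tends to $0$ by the induction hypothesis applied with base $\sqrt a$, and the second by the case $k=1$, so the product tends to $0$.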


real analysis - If $\{a_{n}\}>0$ and $\sum\limits_{n=1}^{\infty}a_n$ diverges

Suppose $\{a_{n}\}>0$ and $\sum\limits_{n=1}^{\infty}a_n$ diverges.



Do the following series converge, diverge, or neither?



$\sum\limits_{n=1}^{\infty} \dfrac{a_n}{1 + a_{n^2}}$, $\sum\limits_{n=1}^{\infty} \dfrac{a_n}{1 + na_{n}}$ and $\sum\limits_{n=1}^{\infty} \dfrac{a_n}{a_{n} + n^2 a_{n}}$?



1) $ \sum\limits_{n=1}^{\infty} \dfrac{a_n}{1 + na_{n}}$




Let $a_{n} = \frac{1}{n}$. Then $\frac{a_n}{1 + na_{n}} = \frac{1}{2n}$, so



$ \sum\limits_{n=1}^{\infty} \dfrac{a_n}{1 + na_{n}} = \sum\limits_{n=1}^{\infty} \frac{1}{2n}$, which diverges.



Is this reasoning correct?

number theory - Show that $3p^2=q^2$ implies $3|p$ and $3|q$




This is a problem from "Introduction to Mathematics - Algebra and Number Systems" (specifically, exercise set 2 #9), which is one of my math texts. Please note that this isn't homework, but I would still appreciate hints rather than a complete answer.



The problem reads as follows:




If $3p^2 = q^2$, where $p,q \in \mathbb{Z}$, show that $3$ is a common divisor of $p$ and $q$.




I am able to show that 3 divides $q$, simply by rearranging for $p^2$ and showing that




$$p^2 \in \mathbb{Z} \Rightarrow q^2/3 \in \mathbb{Z} \Rightarrow 3|q$$



However, I'm not sure how to show that 3 divides p.






Edit:



Moron left a comment below in which I was prompted to apply the solution to this question as a proof of $\sqrt{3}$'s irrationality. Here's what I came up with...




[incorrect solution...]



...is this correct?



Edit:



The correct solution is provided in the comments below by Bill Dubuque.


Answer



Write $q$ as $3r$ and see what happens.
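
Spelling the hint out (added for completeness): substituting $q = 3r$ into $3p^2 = q^2$ gives

$$3p^2 = 9r^2 \quad\Longrightarrow\quad p^2 = 3r^2,$$

so $3 \mid p^2$, and since $3$ is prime, $3 \mid p$ by the same argument used for $q$.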


Sunday, May 27, 2018

discrete mathematics - Prove $(n^5-n)$ is divisible by 5 by induction.



So I started with a base case $n = 1$. This yields $5\mid 0$, which is true since zero is divisible by any nonzero number. I let $n = k \ge 1$ and assumed $5\mid A$, where $A = k^5-k$. Now I want to show that $5\mid B$, where $B = (k+1)^5-(k+1)$, is true....




After that I get lost.



I was given a supplement that provides a similar example, but that confuses me as well.



Here it is if anyone wants to take a look at it:



Prove that for all $n \in \mathbb N$, $27\mid(10^n + 18n - 1)$.



Proof:
We use the method of mathematical induction. For $n = 1$, $10^1+18\cdot1-1 = 27$.

Since $27|27$, the statement is correct in this case.



Let $n = k \ge 1$ and let $27|A$, where $A = 10^k + 18k - 1$.



We wish to show that $27|B$, where $B = 10^{k+1} + 18(k + 1) - 1 = 10^{k+1} + 18k + 17$.



Consider $C = B - 10A$. ***I don't understand why A is multiplied by 10.
$C = (10^{k+1} + 18k + 17) - (10^{k+1} + 180k - 10)$



$= -162k + 27 = 27(-6k + 1)$.



Then $27|C$, and $B = 10A+C$. Since $27|A$ (inductive hypothesis) and $27|C$, then
$B$ is the sum of two addends each divisible by $27$. By Theorem 1 (iii), $27|B$, and
the proof is complete.


Answer



Your induction hypothesis is that $5\mid k^5-k$, which means that $k^5-k=5n$ for some integer $n$. Now



$$\begin{align*}
(k+1)^5-(k+1)&=\left(k^5+5k^4+10k^3+10k^2+5k+1\right)-(k+1)\\
&=k^5+5k^4+10k^3+10k^2+5k-k\\
&=(k^5-k)+5k^4+10k^3+10k^2+5k
\end{align*}$$



Can you see why that must be a multiple of $5$?
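
To spell out the last step (added for completeness): by the induction hypothesis $k^5-k = 5n$, so

$$(k+1)^5-(k+1) = 5n + 5\left(k^4+2k^3+2k^2+k\right),$$

which is visibly a multiple of $5$.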


polynomials - Prove $x^n-1$ is divisible by $x-1$ by induction




Prove that for all natural number $x$ and $n$, $x^n - 1$ is divisible by $x-1$.




So here are my thoughts:
it is true for $n=1$; then, assuming it is true for $n-1$, I want to prove that it is also true for $n$



then I use long division, I get:



$x^n -1 = x(x^{n-1} -1 ) + (x-1)$



so the left side is divisible by $x-1$ by hypothesis, what about the right side?


Answer




So first, you can't assume that the left-hand side is divisible by $x-1$. For the right-hand side, we have that $x-1$ divides $x-1$, and by the induction hypothesis $x-1$ divides $x^{n-1}-1$. So what can you conclude about the left-hand side?


real analysis - Evaluation of the sum $\sum\limits_{n=1}^{\infty}\frac1n\sin\frac1n$




I am trying to evaluate the sum $\displaystyle\sum_{n=1}^{\infty}\dfrac1n\sin\dfrac1n$.



This was given in my real analysis test yesterday.



I have proved that the sum exists:




We know for any non-negative real $x$, $\sin x\le x$.



Hence $$\displaystyle\sum_{n=1}^{\infty}\dfrac1n\sin\dfrac1n\le \displaystyle\sum_{n=1}^{\infty}\dfrac1n\cdot\dfrac1n=\displaystyle\sum_{n=1}^{\infty}\dfrac1{n^2}=\dfrac{\pi^2}{6}$$




But how can I find the sum?


Answer



I cannot say there is no closed form, I just hope this gives you an idea.



\begin{align*}\sum_{n=1}^\infty \frac1n\sin\frac1n&=\sum_{n=1}^\infty \frac1n\bigg[\frac1n-\frac1{3!n^3}+\frac1{5!n^5}-\frac1{7!n^7}+\cdots\bigg]\\
&=\sum_{n=1}^\infty\bigg[\frac1{n^2}-\frac1{3!n^4}+\frac1{5!n^6}-\frac1{7!n^8}+\cdots\bigg]\\
&=\zeta(2)-\frac16\zeta(4)+\frac1{120}\zeta(6)-\frac1{5040}\zeta(8)+\cdots\end{align*}




When $k$ gets large, $\zeta(k)$ gets closer and closer to $1$; I believe this gives faster convergence for computing the sum.


Saturday, May 26, 2018

real analysis - Let $f: [0,1] \rightarrow \mathbb{R}$ be continuous with $f(0) = f(1)$ *note, there is a part b*

(a) Show that there must exist $x,y \in [0,1] $ satisfying $|x-y| = \frac{1} {2}$ and $f(x) = f(y)$



I can start by defining a function $g(x) = f(x + \frac{1} {2}) - f(x)$ to guarantee $x,y$ with $|x-y| = \frac{1} {2}$. But how do I show that $f(x) = f(y)$?



(b) Show that for each $n \in \mathbb{N}$ $\exists x_n ,y_n \in [0,1]$ with $|x_n - y_n| = \frac{1} {n}$, and $f(x_n) = f(y_n)$



Actually I'm not sure where to start here. Any help is greatly appreciated.
Thanks!
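
A sketch for part (a) along the lines already started (not from the original post): $g(x) = f(x+\frac12)-f(x)$ is continuous on $[0,\frac12]$, and

$$g(0) + g\left(\tfrac12\right) = \left[f\left(\tfrac12\right)-f(0)\right] + \left[f(1)-f\left(\tfrac12\right)\right] = f(1)-f(0) = 0,$$

so $g(0)$ and $g(\frac12)$ have opposite signs (or one of them is zero). By the intermediate value theorem $g(x_0)=0$ for some $x_0\in[0,\frac12]$; take $x=x_0$, $y=x_0+\frac12$. Part (b) follows the same pattern with $g(x)=f(x+\frac1n)-f(x)$, using the observation that $\sum_{i=0}^{n-1} g(\frac in)=0$ forces a sign change.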

Hyperbolic Diophantine Equations: Application of Euclidean Algorithm?



I'm trying to determine whether or not I can find the integer solutions to $(x+a)$$(x+b)$ $=$ $x(x-1)$ + $x(a-b)$ (with a known $x$ value you choose, e.g. $707$). Plugging my example value for $x$ into Wolfram Alpha reveals the form of a hyperbola, but can I use the Euclidean algorithm (http://mathworld.wolfram.com/EuclideanAlgorithm.html) to help me generalize solutions for hyperbolic equations of this form?



Presumably plugging the function into Wolfram Alpha and choosing my integer $x$ value for every solution is not the only way to do this?



Answer



$$(x+a)(x+b)=x(x-1) + x(a-b),$$
$$(x+a)(x+b)-x(x-1) - x(a-b)= 0,$$ cancel stuff and
$$ (2b+1)x + ab = 0. $$



Introduce a variable
$$ c = a + 2x, $$ so that
$$ a = c - 2x. $$
Then $ (2b+1)x + ab = 0 $ becomes
$$ x + bc = 0, $$

$$ bc = -x. $$
Therefore, find all divisors $d$ of $-x,$ both positive and negative, so that $$ d \in \{ -|x|, \ldots, -1, 1, \ldots, |x| \}. $$
For each such $d$, let
$$ c = d, $$ so that
$$ a = d - 2x, \; \; b = -x / d. $$



Finding all positive divisors of $|x|$ is a matter of completely factoring $|x|;$ for example, your $707 = 7 \cdot 101.$ The positive divisors are $1,7,101,707,$ and all divisors are $$d \in \{-707, -101,-7,-1,1,7,101,707 \}.$$ For each such $d$, let $a = d - 1414$ and $b = -707/d.$



#include <set>
#include <string>
#include <gmpxx.h>   // GMP C++ bindings: mpz_class

// Helper evidently assumed by the original code: convert a number to a string.
// (Only called with int arguments below.)
template <typename T>
std::string stringify(const T &x) { return std::to_string(x); }

// Returns the set of all positive divisors of |i|, accumulating a human-readable
// factorization string `fac` along the way. The template arguments on std::set
// were apparently eaten by the HTML extraction and are restored here.
std::set<mpz_class> mp_Divisors(mpz_class i)
{
    std::set<mpz_class> sofar, more;
    std::set<mpz_class>::iterator iter;
    sofar.insert(1);
    more.clear();
    std::string fac;
    fac = " = ";
    mpz_class p = 2;
    mpz_class temp = i;
    if (temp < 0)
    {
        temp *= -1;
        fac += " -1 * ";
    }
    if (1 == temp) fac += " 1 ";
    if (temp > 1)
    {
        int primefac = 0;
        while (temp > 1 && p * p <= temp) // trial division up to sqrt(temp)
        {
            if (temp % p == 0)
            {
                ++primefac;
                if (primefac > 1) fac += " ";
                // fac += stringify(p);
                temp /= p;
                int exponent = 1;
                mpz_class power = p;
                for (iter = sofar.begin(); iter != sofar.end(); ++iter)
                    more.insert(power * *iter);
                while (temp % p == 0)
                {
                    temp /= p;
                    ++exponent;
                    power *= p;
                    for (iter = sofar.begin(); iter != sofar.end(); ++iter)
                        more.insert(power * *iter);
                } // while p is a factor
                if (exponent > 1)
                {
                    fac += "^";
                    fac += stringify(exponent);
                }
                for (iter = more.begin(); iter != more.end(); ++iter)
                    sofar.insert(*iter);
                more.clear();
            } // if p is a factor
            ++p;
        } // while p
        if (temp > 1 && primefac >= 1) fac += " ";
        // if (temp > 1) fac += stringify(temp.get_ui());
        if (temp > 1) fac += temp.get_str();
        if (temp > 1) // temp is now a single remaining prime factor
        {
            for (iter = sofar.begin(); iter != sofar.end(); ++iter)
                more.insert(temp * *iter);
            for (iter = more.begin(); iter != more.end(); ++iter)
                sofar.insert(*iter);
            more.clear();
        }
    } // temp > 1

    return sofar;
} // mp_Divisors

elementary number theory - If $p \neq 5$ is an odd prime, prove that either $p^2+1$ or $p^2-1$ is divisible by $10$?



I was able to find a solution to this problem, but only by using a couple of extra tools that appear later in the book$^{1}$. So far the book has only covered basic divisibility, $\gcd$, and the fundamental theorem of arithmetic; it did not cover modular arithmetic, and although we did cover the division algorithm, we did not cover divisibility rules (e.g. that a number ending in $5$ or $0$ is divisible by $5$). Is there any way of proving this with only the above tools? (I will point out what I used from future chapters in my solution.)



My Solution




Suppose $10 \nmid p^2-1 = (p+1)(p-1)$. Then $5 \nmid (p+1)$ and $5 \nmid (p-1)$.



Odd primes can only have last digits of $1, 3, 7, 9$ (I used the divisibility rule that a number ending in $0$ or $5$ is divisible by $5$, which is in the next chapter). Since $5 \nmid (p+1)$ and $5 \nmid (p-1)$, the last digit of $p$ is either $3$ or $7$. If we write $p$ as $10n+3$ or $10n+7$, then square and add $1$, we get a multiple of $10$. (The fact that any integer with a last digit of $k$ can be written as $10n+k$ is also something from a future chapter)






Elementary Number Theory by David Burton, 6th ed., Section 3.1 #10


Answer



If $p\neq 5$ is an odd prime, its square $p^2$ is also odd, thus $p^2-1$ and $p^2+1$ are both even.




Now, since an odd prime $p\neq 5$ must (as you mention in your post) be: $$p\equiv1,3,7 \textrm{ or }9 \mod 10$$ its square will be
$$
p^2\equiv1,-1,-1\textrm{ or }1 \mod 10
$$
which answers your question.


sequences and series - Sum of First $n$ Squares Equals $\frac{n(n+1)(2n+1)}{6}$


I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals:


$$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$


I really have no idea why this statement is true. Can someone please explain why this is true and if possible show how to arrive at one given the other?


Answer



You can easily prove it by induction.


One way to find the coefficients, assuming we already know that it's a degree $3$ polynomial, is to calculate the sum for $n=0,1,2,3$. This gives us four values of a degree $3$ polynomial, and so we can find it.


The better way to approach it, though, is through the identity $$ \sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1}. $$ This identity holds since, in order to choose a $(k+1)$-subset of $\{1,\dots,n+1\}$, you first choose its largest element $t+1$, and then a $k$-subset of $\{1,\dots,t\}$.


We therefore know that $$ \sum_{t=0}^n A + Bt + C\binom{t}{2} = A(n+1) + B\binom{n+1}{2} + C\binom{n+1}{3}. $$ Now choosing $A=0,B=1,C=2$, we have $$ A+Bt + C\binom{t}{2} = t^2. $$ Therefore the sum is equal to $$ \binom{n+1}{2} + 2\binom{n+1}{3}. $$



Friday, May 25, 2018

elementary number theory - Prove that if $\gcd(a,b)=1$, then $\gcd(a\cdot b,c) = \gcd(a,c)\cdot \gcd(b,c)$.

Let $a,b,c \in \mathbb{Z}$, prove that if $\gcd(a,b)=1$, then $\gcd(a\cdot b,c) = \gcd(a,c)\cdot \gcd(b,c)$.

integration - Tricky integral? $\int_0^{\frac{\pi}{2}}\arccos(\sin x)\,dx$ My answer doesn't match an online calculator



I tried to calculate this integral:
$$\int_0^{\frac{\pi}{2}}\arccos(\sin x)dx$$
My result was $\dfrac{{\pi}^2}{8}$, but actually, according to https://www.integral-calculator.com/, the answer is $-\dfrac{{\pi}^2}{8}$.



It doesn't make sense to me as the result of the integration is $$x\arccos\left(\sin x\right)+\dfrac{x^2}{2}+C$$
and after substituting $x$ with $\dfrac{{\pi}}{2}$ and $0$, the result is a positive number.




Can someone explain it? Thanks in advance!


Answer



Yes, your result is correct. For $x\in[-1,1]$,
$$\arccos(x)=\frac{\pi}{2}-\arcsin(x).$$
Hence
$$\int_0^{\pi/2}\arccos(\sin(x))dx=
\int_0^{\pi/2}\left(\frac{\pi}{2}-x\right)dx=\int_0^{\pi/2}tdt=\left[\frac{t^2}{2}\right]_0^{\pi/2}=\frac{\pi^2}{8}.$$



P.S. WA gives the correct result. Moreover $t\to \arccos(t)$ is positive in $[-1,1)$ so the given integral has to be POSITIVE!


integration - How to show $\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}$





I am trying to show $\displaystyle{\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}}.$



Any help?
(I am having trouble using the half-circle infinite contour.)



Or more specifically, what is the residue $\text{res} \left(\frac{1}{z^3+1},z_0=e^\frac{\pi i}{3} \right )$




Thanks!


Answer



There are already two answers showing how to find the integral using just calculus. It can also be done by the Residue Theorem:



It sounds like you're trying to apply RT to the closed curve defined by a straight line from $0$ to $A$ followed by a circular arc from $A$ back to $0$. That's not going to work, because there's no reason the integral over the semicircle should tend to $0$ as $A\to\infty$.



How would you use RT to find $\int_0^\infty dt/(1+t^2)$? You'd start by noting that $$\int_0^\infty\frac{dt}{1+t^2}=\frac12\int_{-\infty}^\infty\frac{dt}{1+t^2},$$and apply RT to the second integral.



You can't do exactly that here, because the function $1/(1+t^3)$ is not even. But there's an analogous trick available.




Hint: Let $$f(z)=\frac1{1+z^3}.$$If $\omega=e^{2\pi i/3}$ then $$f(\omega z)=f(z).$$(Now you're going to apply RT to the boundary of a certain sector of opening $2\pi/3$... be careful about the "$dz$"...)
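
As for the specific residue asked about (a short computation, added for completeness): since the pole is simple and $z_0^3=-1$,

$$\operatorname{res}\left(\frac{1}{z^3+1}, z_0=e^{\pi i/3}\right) = \frac{1}{3z_0^2} = \frac{z_0}{3z_0^3} = -\frac{z_0}{3} = -\frac{e^{\pi i/3}}{3}.$$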


probability - Related to conditional expectation of the product of two independent random variables



I have two random variables $X$ and $Y$. The random variables have non-negative support, meaning that their PDFs are defined only for non-negative values. I know the PDFs of $X$ and $Y$. I want to find the expectation of $XY^j$ (where $j$ is a real number) given that $XY^j < w$ for some threshold $w$.

Answer



To be clear, you are looking for $\mathsf E(XY^j\mid XY^j < w)$ for some constant $w$?

I get:



$$\begin{align}\mathsf E(XY^j\mid XY^j<w) ~=~& \dfrac{\mathsf E (XY^j~\mathbf 1_{XY^j<w})}{\mathsf P(XY^j<w)}
\\[1ex]~=~&\dfrac{\mathsf E (X~\mathsf E(Y^j~\mathbf 1_{Y<(w/X)^{1/j}}\mid X)) }{ \mathsf P(Y<(w/X)^{1/j})} &\text{only if}~&j > 0
\\[1ex] ~=~& \bbox[gainsboro]{\dfrac{\int\limits_0^\infty~x~\mathsf E( Y^j~\mathbf 1_{Y<(w/x)^{1/j}}\mid X=x)~f_X(x)~\mathrm d x }{\int\limits_0^\infty~\mathsf P(Y<(w/x)^{1/j}\mid X=x)~f_X(x)~\mathrm d x }}
\\[1ex] ~=~& \dfrac{\int\limits_0^\infty~x~f_X(x)\int\limits_0^{(w/x)^{1/j}} y^j~f_{Y\mid X}(y\mid x)~\mathrm d y~\mathrm d x }{\int\limits_0^\infty~f_X(x)\int\limits_0^{(w/x)^{1/j}} f_{Y\mid X}(y\mid x)~\mathrm d y~\mathrm d x }
\end{align}$$


calculus - How to prove that $\lim\limits_{n\to\infty} \frac{n!}{n^2}$ diverges to infinity?




$\lim\limits_{n\to\infty} \dfrac{n!}{n^2} = \lim\limits_{n\to\infty}\dfrac{\left(n-1\right)!}{n}$



I can understand that this will go to infinity because the numerator grows faster.



I am trying to apply L'Hôpital's rule to this; however, I have not been able to figure out how to take the derivative of $\left(n-1\right)!$



So how does one take the derivative of a factorial?


Answer



you could introduce the gamma function!




Just a joke, as $n!>n^3$ for $n>100$ you know that
$$\frac{n!}{n^2} > \frac{n^3}{n^2}=n$$


algebra precalculus - Show that the solutions of the equation $(1+x)^{2n}+(1-x)^{2n}=0$ are $x=\pm i ~\tan{\frac{(2r-1)\pi}{4n}}, ~~r=1,2,\cdots, n.$




Show that the solutions of the equation $(1+x)^{2n}+(1-x)^{2n}=0$ are $$x=\pm i~ \tan{\frac{(2r-1)\pi}{4n}}, ~~r=1,2,\cdots n.$$




Attempt




Clearly, $$\frac{1+x}{1-x}=(-1)^{1/2n}=(\cos{(2k+1)\pi}+i \sin{(2k+1)\pi})^{1/2n}=\cos{\frac{(2k+1)\pi}{2n}}+i \sin{\frac{(2k+1)\pi}{2n}}$$
(by de Moivre's formula)



By componendo dividendo and some simplification,



I am getting $$x=i\tan{\frac{2k+1}{4n}\pi}, ~~k=1,2,3,\cdots 2n$$



How to get the desired result? I am getting $2n$ solutions, and I understand that the set of solutions I have obtained is equal to the desired one. But how to get the desired form from the solution that I have obtained? Please provide the mathematical steps.



Is there any other method of solving it that gets the answer directly?



Answer



For any $\alpha$, $\tan\alpha=\tan(\alpha-\pi)$.



For $n\leq k<2n$:



$$\begin{align}i\tan\frac{2k+1}{4n}\pi &= i\tan\frac{2(k-2n)+1}{4n}{\pi}
\\
&=-i\tan\frac{2(2n-k)-1}{4n}\pi
\end{align}$$




So $k=n,n+1,\dots,2n-1$ corresponds to $r=2n-k$ and the minus sign.



$k=2n$ corresponds to $r=1$ with the positive sign.



$k=1,2,\dots,n-1$ corresponds to $r=k+1$ with the positive sign.


Thursday, May 24, 2018

complex analysis - $\int_0^{\infty} \frac{x^{\frac{1}{4}}}{1+x^3}\, dx = \frac{\pi}{3 \sin\left( \frac{5\pi}{12} \right)}$



I want to evaluate following integral
\begin{align}
\int_0^{\infty} \frac{x^{\frac{1}{4}}}{1+x^3} dx = \frac{\pi}{3 \sin\left( \frac{5\pi}{12} \right)}
\end{align}



A simple attempt at this integral is to use a branch cut and apply the residue theorem.



The usual procedure gives, for $0 < \alpha < 1$, with $Q(x)$ of degree $n$ and $P(x)$ of degree $m$, and $Q(x) \neq 0$ for $x>0$:




\begin{align}
\int_0^{\infty} \frac{x^\alpha P(x)}{Q(x)} dx = \frac{2\pi i}{1- e^{i\alpha 2 \pi}} \sum_j Res[\frac{z^\alpha P(z)}{Q(z)} , z_j]
\end{align}



where the $z_j$ are the poles of $\frac{z^\alpha P(z)}{Q(z)}$ (the zeros of $Q$ not cancelled by $P$).
This formula comes from Mathews and Howell's complex analysis textbook.
And it is nothing but applying a branch cut to make $x^{\frac{1}{4}}$ a single-valued function. I think this formula works for the above improper integral, but the result seems different.



Applying $\alpha=\frac{1}{4}$ and taking the poles $z_0=-1$, $z_1 = e^{\frac{i \pi}{3}}$, $z_2 = e^{\frac{i5 \pi}{3}}$, I got something different.




Am I doing this right?






\begin{align}
\frac{2\pi i}{1-i}\frac{1}{3} \left( e^{\frac{1}{4} \pi i} + e^{-\frac{7}{12}\pi i} + e^{-\frac{35}{12} \pi i}\right)
\end{align}


Answer



You may simply remove the branch cut by setting $x=z^4$:

$$ I = 4 \int_{0}^{+\infty}\frac{z^4\,dz}{1+z^{12}} = 2\int_{-\infty}^{+\infty}\frac{z^4\,dz}{1+z^{12}} \tag{1}$$
and by evaluating the residues at the roots of $1+z^{12}$ in the upper half-plane,
$$ I = \frac{2\pi}{3\sqrt{2+\sqrt{3}}}=\frac{\pi}{3}\left(\sqrt{6}-\sqrt{2}\right)\tag{2} $$
follows. By setting $\frac{1}{1+z^{12}}=u$, the integral $(1)$ can also be evaluated through Euler's Beta function and the reflection formula for the $\Gamma$ function, since:
$$ I = \frac{1}{3}\int_{0}^{1}u^{-5/12}(1-u)^{-7/12}\,du = \frac{1}{3}\,\Gamma\left(\frac{5}{12}\right)\,\Gamma\left(\frac{7}{12}\right)=\frac{\pi}{3\sin\frac{5\pi}{12}}.\tag{3}$$
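
To see that $(2)$ and $(3)$ agree (a check added for completeness): $\sin\frac{5\pi}{12} = \sin 75^\circ = \frac{\sqrt6+\sqrt2}{4}$, so

$$\frac{\pi}{3\sin\frac{5\pi}{12}} = \frac{4\pi}{3(\sqrt6+\sqrt2)} = \frac{4\pi(\sqrt6-\sqrt2)}{3\cdot 4} = \frac{\pi}{3}\left(\sqrt6-\sqrt2\right).$$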


Wednesday, May 23, 2018

calculus - Limits problem to find the values of constants $a$ and $b$: If $\lim_{x \to \infty}\left(1+\frac{a}{x}+\frac{b}{x^2}\right)^{2x}=e^2$, find the value of $a$ and $b$.

Problem :




If $\lim\limits_{x \to \infty}\left(1+\frac{a}{x}+\frac{b}{x^2}\right)^{2x}=e^2$ Find the value of $a$ and $b$.



Please suggest how to proceed this problem :



If we know that $\lim\limits_{x \to \infty}\left( 1+\frac{1}{x}\right)^x= e$



Will this work somehow... please suggest
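
Yes, that fact is the key. A sketch (not from the original post): taking logarithms and using $\ln(1+t) = t + O(t^2)$ as $t \to 0$,

$$2x\ln\left(1+\frac ax+\frac b{x^2}\right) = 2x\left(\frac ax + O\!\left(\frac1{x^2}\right)\right) \xrightarrow[x\to\infty]{} 2a,$$

so the limit equals $e^{2a}$ for every fixed $b$. Hence $a = 1$ and $b$ can be any real number.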

linear algebra - Determinant of a rank $1$ update of a scalar matrix, or characteristic polynomial of a rank $1$ matrix



This question aims to create an "abstract duplicate" of numerous questions that ask about determinants of specific matrices (I may have missed a few):





The general question of this type is





Let $A$ be a square matrix of rank$~1$, let $I$ the identity matrix of the same size, and $\lambda$ a scalar. What is the determinant of $A+\lambda I$?




A clearly very closely related question is




What is the characteristic polynomial of a matrix $A$ of rank$~1$?




Answer



The formulation in terms of the characteristic polynomial leads immediately to an easy answer. For once one uses knowledge about the eigenvalues to find the characteristic polynomial instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so there is (unless $n=1$) an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since this holds for the coefficient of $X^{n-1}$ of any square matrix of size$~n$. So the answer to the second question is




The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.




The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$, in other words $A-cI$ is zero on the image of$~A$, which implies that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore





The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n>1$ is $X(X-c)$, where $c=\tr(A)$. In particular a rank$~1$ square matrix $A$ of size $n>1$ is diagonalisable if and only if $\tr(A)\neq0$.




See also this question.



For the first question we get from this (replacing $A$ by $-A$, which is also of rank$~1$)




For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.





In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (which is the most popular special case of a linear combination of a scalar and a rank-one matrix) one finds (using for $A$ the all-$b$ matrix, and $\lambda=a-b$) as determinant $(a-b)^{n-1}(a+(n-1)b)$.


number theory - Using the fundamental theorem of arithmetic to prove $\sqrt{\text{prime}}\notin\Bbb Q$




I need to use the fundamental theorem of arithmetic to show that





if $p$ is prime then $\sqrt p$ is irrational.




So far I've stated that $\sqrt p=m/n$ where $m,n$ are positive integers, then $pn^2=m^2$. Now I can factor $m$ and $n$ into primes but I don't know where to go on from there.


Answer



Given a prime $p$ and some $n\in\mathbb{N}^*$, let we define:
$$\nu_p(n)=\max\{k\in\mathbb{N}: p^k\mid n\}.\tag{1}$$
Since $\mathbb{Z}$ is a UFD we have $\nu_p(ab)=\nu_p(a)+\nu_p(b)$. In particular, $\nu_p$ of a square is always an even number. If we assume that $\sqrt{p}=\frac{m}{n}$ with $m,n\in\mathbb{N}^*$, we also have

$$ p n^2 = m^2. \tag{2} $$
However, such identity cannot hold, since $\nu_p(\text{RHS})$ is an even number and $\nu_p(\text{LHS})$ is an odd number. It follows that $\sqrt{p}\not\in\mathbb{Q}$ as wanted.


linear algebra - In general find: eigenvalues and eigenvectors of symmetric matrices with all row sums equal | Specific example (where all non-diagonal elements are equal)




If you have a situation where every row-sum is equal in a matrix A, this sum equals one of the eigenvalues of the matrix. It might make a difference in cases where the matrix is symmetrical?



Using this fact, is there any easy procedure/shortcut for finding the rest of the eigenvalues?



Or do you have to solve the characteristic equation? You could use the eigenvalue we found first as a factor when doing long division on the characteristic equation; however, when the long division doesn't end up with a nice expression for the factorization, you start to wonder if this really is the right way to go (in the case of polynomials of degree 3 or more). I got this characteristic equation for the matrix below (with general $n$):
$\lambda^{3} - 3 \lambda^{2} +3(1-n^2) \lambda + 3n^2 - 2n^3 - 1$



EDIT:
Now going over to a specific example with a symmetric matrix where all row sums are equal, but where everything not on the main diagonal is equal to $n$ (any real number). If we said that $n = 1$, this would be a duplicate of another thread here, but $n$ can be any real number; would the same apply if $n$ were equal to 5 instead?

This thread, for instance, is similar; however, the OP's non-main-diagonal elements are in the range $0 < r < 1$, which does not cover all real numbers:
Find the eigenvalues of a matrix with ones in the diagonal, and all the other elements equal



So the matrix would looks like this:



$ A = \left(\begin{array}{rrr}
1 & n & n\\
n & 1 & n\\
n & n & 1
\end{array}\right) $




Finding the characteristic equation for this matrix returned a pretty ugly result, which makes it a nightmare to do long division on the expression using the row sum $1 + 2n$ as the first eigenvalue.



Therefore I'm convinced there must be some shorter way for finding the eigenvalues/eigenvectors, wondering what it might be? Any link to any exisiting thread that might adress or apply to this specific scenario? Perhaps here?: Determinant of a rank $1$ update of a scalar matrix, or characteristic polynomial of a rank $1$ matrix



So I tried with some different values for n and solved in a computer program.



We know that one of the eigenvalues equals the row sum.
tr(A) = sum of all eigenvalues = 3 for any real n




So for n=1 we get eigenvalues 0, 3 and 0.



So for n=2 we get eigenvalues -1, 5 and -1.



So for n=3 we get eigenvalues -2, 7 and -2.



But I can't seem to see how to present a general solution for any real n, for the reasons mentioned further above.


Answer



Nothing else can be said exactly, without running an eigenvalue routine (which usually is not solving the characteristic equation in practice).




Indeed, the spectral theory of left-stochastic matrices, meaning matrices with row sum $1$ and nonnegative entries, is a subject of significant interest and complexity in probability theory. Yet this is itself a special case of your situation.


limits - Evaluating and proving $\lim\limits_{x\to\infty}\frac{\sin x}x$

I've just started learning about limits. Why can we say $$ \lim_{x\rightarrow \infty} \frac{\sin x}{x} = 0 $$ even though $\lim_{x\rightarrow \infty} \sin x$ does not exist?



It seems like the fact that sin is bounded could cause this, but I'd like to see it algebraically.



$$ \lim_{x\rightarrow \infty} \frac{\sin x}{x} =
\frac{\lim_{x\rightarrow \infty} \sin x} {\lim_{x\rightarrow \infty} x}
= ? $$




L'Hopital's rule gives a fraction whose numerator doesn't converge. What is a simple way to proceed here?
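
The boundedness of $\sin$ is exactly what makes this work, via the squeeze theorem (a sketch, not from the original post): for $x > 0$,

$$-\frac1x \le \frac{\sin x}{x} \le \frac1x,$$

and both bounds tend to $0$ as $x\to\infty$, so $\lim_{x\to\infty}\frac{\sin x}{x} = 0$. The quotient rule for limits written above does not apply here, precisely because $\lim_{x\to\infty}\sin x$ does not exist.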

Tuesday, May 22, 2018

totient function - Euler's theorem (modular arithmetic) for non-coprime integers



I am trying to calculate $10^{130} \bmod 48$ but I need to use Euler's theorem in the process.




I noticed that 48 and 10 are not coprime, so I couldn't directly apply Euler's theorem. I tried breaking it down into $5^{130}2^{130} \bmod 48$ and I was successfully able to get rid of the 5 using Euler's theorem, but now I'm stuck with $2^{130} \bmod 48$. $2^{130}$ is still a large number, and unfortunately 2 and 48 are not coprime.



Could someone lead me in the right direction regarding how would I proceed from here?


Answer



Calculate $\mod 48$ using the Chinese Remainder Theorem. Or, informally:
Clearly $2^{130}$ is divisible by $16$ so modulo $48$ this is one of $0, 16, 32$, which one of the three it is depends on what it is modulo $3$.
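
Carrying the hint through (a worked completion, not part of the original answer): $2 \equiv -1 \pmod 3$, so $2^{130}\equiv(-1)^{130}\equiv1\pmod3$, and of $0, 16, 32$ only $16\equiv1\pmod3$; hence $2^{130}\equiv16\pmod{48}$. Since $\varphi(48)=16$ and $130\equiv2\pmod{16}$, Euler gives $5^{130}\equiv5^2=25\pmod{48}$, so

$$10^{130}\equiv 25\cdot16 = 400 \equiv 16 \pmod{48}.$$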


algebra precalculus - Work Problem that deals with Number of Men, Days, Leaving


A project can be done by 70 men in 100 days. There were 80 men at the start of the project but after 50 days, 20 of them had to be transferred to another project. How long will it take the remaining workforce to complete the job?


The correct answer is 50.


Any hints on how to go about this? I have encountered work problems before with the general formula $$\frac1A + \frac1B + \dots = \frac1T.$$


There's also problems with time involved:



$$t_A\left(\frac1A + \frac1B\right) + t_B\left(\frac1C + \frac1D\right) \dots = 1.$$


This problem incorporates people leaving and remaining days, but I am not sure how to combine the two concepts.


Answer



Think about the required amount of work in man-days. The project requires $70*100=7000$ man-days of total work. After $80*50=4000$ man-days of work, there are $7000-4000=3000$ man-days of work remaining, and there are $60$ remaining workers, so the project will take another $3000/60=50$ days.


Upper bound on the magnitude of the roots of a complex polynomial



Problem:



Let $z_0$ be a root of the complex polynomial $z^n + a_{n-1}z^{n-1} + ... + a_0 $ $ (a_k \in \mathbb{C})$.

Prove that $|z_0| \le \zeta$, where $\zeta$ is the only positive root of $z^n - |a_{n-1}|z^{n-1} - ... - |a_0|$.



(the preceding problem - which I've solved - was to prove that the second polynomial has in fact exactly one positive root; to be precise we'd have to assume that at least one of the $a_i$ are not equal to 0 or allow the root to be zero)



I have no idea how to approach this problem. The statement seems to be that "for given nonnegative real $a_i$ the complex polynomial with the greatest root whose coefficients are of magnitude $a_i$ is $z^n -...-a_0$", and I've tried proving this by "rotating" the coefficients one by one and observing how the roots behave, but I've had no success. (maybe it's just because I have no experience at all with complex polynomials)



I haven't studied complex analysis, so it would be great to find a solution that doesn't use results from that area. Hints would be great as well. :)


Answer



If $z_0$ is a root then $-z_0^n = a_0 + \ldots + a_{n-1}z_0^{n-1}$. Now the triangle inequality says that $|z_0|^n \le |a_0| + \ldots + |a_{n-1}||z_0|^{n-1}$.




Since for positive real $x$, $x^n - |a_0| - \ldots - |a_{n-1}|x^{n-1}$ has a unique positive root $\zeta$, this quantity is negative for $0 \le x < \zeta$ and positive for $x > \zeta$.



This shows that $|z_0| \le \zeta$.


real analysis - Convergence almost uniformly to zero with certain conditions.



This problem is related to Convergence in measure to zero with certain conditions.




(Let $\{f_n\}_{n\in \mathbb{N}}$ be a sequence of measurable functions on a measure space and $f$ measurable.



For $c_n>0$ such that either $\lim_{n\to \infty}c_n=0$, or $c_n\geq c>0$ for all $n$, and measurable sets $E_n$ with $m(E_n)>0$ consider the sequence $f_n(x):=c_n\mathcal{X}_{E_n}(x).$ )



For the same sequence (same assumptions!) $\{f_n\}_{n\in\mathbb{N}}$: $f_n$ converges almost uniformly to zero iff $c_n\to 0$ as $n\to \infty$ or $m(\cup_{n\geq N}E_n)\to 0$ as $N\to \infty.$



My approach:
($\Rightarrow$) By definition of almost uniformly, we have that for all $\epsilon>0$ there exists $A_\epsilon $ such that $m(A_\epsilon)<\epsilon$ such that $f_n$ converges to uniformly to $0$ on $A_\epsilon^c$. Observe that we have $A_\epsilon \subset \cup_{n\geq N}E_n$, so $\epsilon\geq m(A_\epsilon)\leq m(\cup_{n\geq N}E_n)$. Now if $x\in A_\epsilon$ then clearly $m(\cup_{n\geq N}E_n)\to 0$ as $N\to \infty$ and if $x\notin A_\epsilon$ then again clearly as $f_n(x)=c_n\mathcal{X}_{E_n}(x)\to 0$, $c_n\to 0.$




($\Leftarrow$) I think to prove this direction is hard.



Any help, comments and suggestions will be appreciated.


Answer



As in your other question, I think it is a good idea to suppose for $(\Rightarrow)$ that we have $c_n \not\to 0$ (and therefore $c_n \ge c > 0$). I don't understand what you mean when you write "Now if $x \in A_\epsilon$ then clearly $m(\bigcup_{n\ge N} E_n) \to 0$ as ...". How can you conclude from $x \in A_\epsilon$ something about the measure of a set which doesn't depend on $x$? And why $A_\epsilon \subseteq \bigcup_{n\ge N} E_n$?



For $(\Rightarrow)$: Suppose $c_n \ge c > 0$. We want to prove $m(\bigcup_{n\ge N} E_n) \to 0$. For $\epsilon > 0$ choose $A_\epsilon$ as you did. Now $f_n \to 0$ uniformly on $A_\epsilon^c$. Now use again (as in the other task) that $f_n \ge c\chi_{E_n}$. So $\chi_{E_n\cap A_\epsilon^c} \to 0$ uniformly. Does this tell you something?



$(\Leftarrow)$ is IMO more direct. If $c_n \to 0$ use $|f_n| \le c_n$ to conclude uniform convergence, otherwise for some $\epsilon$ you can find a $N$ with $m(\bigcup_{n\ge N} E_n) < \epsilon$. Can you show convergence outside this set?




Hope this helps, otherwise feel free to ask more.


Saturday, May 19, 2018

online resources - Overview of basic facts about Cauchy functional equation

The Cauchy functional equation asks about functions $f \colon \mathbb R \to \mathbb R$ such that $$f(x+y)=f(x)+f(y).$$ It is a very well-known functional equation, which appears in various areas of mathematics ranging from exercises in freshman classes to constructing useful counterexamples for some advanced questions. Solutions of this equation are often called additive functions.


Also a few other equations related to this equation are often studied. (Equations which can be easily transformed to Cauchy functional equation or can be solved by using similar methods.)


Is there some overview of basic facts about Cauchy equation and related functional equations - preferably available online?

calculus - Find the value of: $\lim_{x\to\infty}\frac{\sqrt{x-1} - \sqrt{x-2}}{\sqrt{x-2} - \sqrt{x-3}}$



I'm trying to evaluate this limit



$$\lim_{x\to\infty}\frac{\sqrt{x-1} - \sqrt{x-2}}{\sqrt{x-2} - \sqrt{x-3}}.$$




I've tried to rationalize the denominator but this is what I've got



$$\lim_{x\to\infty}(\sqrt{x-1} - \sqrt{x-2})({\sqrt{x-2} + \sqrt{x-3}})$$



and I don't know how to remove these indeterminate forms $(\infty - \infty)$.



EDIT: without l'Hospital's rule (if possible).


Answer



Fill in details:




As $\;x\to\infty\;$ we can assume $\;x>0\;$ , so:



$$\frac{\sqrt{x-1}-\sqrt{x-2}}{\sqrt{x-2}-\sqrt{x-3}}=\frac{\sqrt{x-2}+\sqrt{x-3}}{\sqrt{x-1}+\sqrt{x-2}}=\frac{\sqrt{1-\frac2x}+\sqrt{1-\frac3x}}{\sqrt{1-\frac1x}+\sqrt{1-\frac2x}}\xrightarrow[x\to\infty]{}1$$



Further hint: the first step was multiplying by conjugate of both the numerator and the denominator.


Assigning values to divergent series



I have been looking at divergent series on wikipedia and other sources and it seems people give finite "values" to specific ones. I understand that these values sometimes reflect the algebraic properties of the series in question, but do not actually represent what the series converges to, which is infinity. Why is it usefull to assign values to divergent series?



The only theory I could come up with, is this:



Say you have 2 divergent series, series $A$ and $B$, and you assign each a value,




Series ($A=
\sum_{n=0}^\infty a_n$), which I assigned the value Q



and series ($B=
\sum_{n=0}^\infty b_n$ ), which I assigned the value P



But it just so happens that series $C=A-B=
\sum_{n=0}^\infty (a_n-b_n)$ converges.
Could that imply that the actual value of series $C$ is the difference of the two assigned values to $A$ and $B$, that is $\sum_{n=0}^\infty (a_n-b_n)=Q-P$ ?




If so, then that would make some sense to me, as to why people sometimes assign values to divergent series.


Answer



The most common situation with a divergent series is this: an infinite series with a variable $z$ is given, which converges for some values of $z$ in the complex plane $\mathbb C.$ On the region of convergence, the series defines a holomorphic function, call it $f(z).$ Then the analytic continuation of the function $f(z)$ is correctly described. As a result, there is a well-defined value $f(z)$ for $z$ values that would cause the original series to diverge.



The best example is ZETA. I guess Euler found values of $\zeta(-n)$
at negative integers, and wrote these down in the style of divergent series. So people get an impression that one assigns a value to a divergent series by clever manipulation. This is not the general case, however. When the radius of convergence is strictly exceeded, we are simply reporting the value given by the analytic continuation. Not assigning.



Here is an elementary example: Let us take
$$ f(z) = \frac{1}{1 + z^2}. $$ Now, for $|z| < 1,$ we know
$$ f(z) = 1 - z^2 + z^4 - z^6 + z^8 - z^{10} \cdots $$

If I wrote
$$ 1 - 9 + 81 - 729 + 6561 - 59049 \cdots = \frac{1}{10} $$ you would have every reason to be suspicious as the series obviously diverges. But if I instead wrote
$$ f(3) = \frac{1}{10} $$ you would think that was probably alright.


Friday, May 18, 2018

trigonometry - Finite Series - reciprocals of sines




Find the sum of the finite series
$$\sum _{k=1}^{k=89} \frac{1}{\sin(k^{\circ})\sin((k+1)^{\circ})}$$
This problem was asked in a test in my school.
The answer seems to be $\dfrac{\cos1^{\circ}}{\sin^21^{\circ}}$ but I do not know how. I have tried reducing it using sum to product formulae and found out the actual value and it agrees well. Haven't been successful in telescoping it.


Answer



HINT:



$$\frac{1}{\sin k^\circ\sin(k+1)^\circ}=\frac1{\sin1^\circ}\frac{\sin (k+1-k)^\circ}{\sin k^\circ\sin(k+1)^\circ}$$
$$=\frac1{\sin1^\circ}\cdot\frac{\cos k^\circ\sin(k+1)^\circ-\sin k^\circ\cos(k+1)^\circ}{\sin k^\circ\sin(k+1)^\circ}=\frac1{\sin1^\circ}\left(\cot k^\circ-\cot(k+1)^\circ\right)$$




Can you recognize Telescoping series / sum?
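
Carrying the hint to the end (added for completeness): the sum telescopes,

$$\sum_{k=1}^{89}\frac{1}{\sin k^\circ\sin(k+1)^\circ} = \frac{1}{\sin1^\circ}\left(\cot1^\circ-\cot90^\circ\right) = \frac{\cot1^\circ}{\sin1^\circ} = \frac{\cos1^\circ}{\sin^21^\circ},$$

since $\cot90^\circ = 0$, which matches the answer stated in the question.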


trigonometry - Prove the following trigonometric identity without a calculator involved

I have to prove the following statement.




$$1+\cos{2\pi\over5}+\cos{4\pi\over5}+\cos{6\pi\over5}+\cos{8\pi\over5}=0$$





I have tried to use the sum of angles formula for cosine, but didn't get to a point where I'd be able to show that it is equal to $0$.
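
A possible hint (a sketch, not from the original post): the five summands are the real parts of the five fifth roots of unity, and those roots sum to zero by the geometric series formula:

$$1+\cos\frac{2\pi}{5}+\cos\frac{4\pi}{5}+\cos\frac{6\pi}{5}+\cos\frac{8\pi}{5} = \Re\sum_{k=0}^{4} e^{2\pi i k/5} = \Re\,\frac{e^{2\pi i}-1}{e^{2\pi i/5}-1} = 0.$$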

Geometric series and complex numbers


I'm new to this site, English is not my mother tongue, and I'm just learning LaTeX. I'm basically a noob, so please be indulgent if I break any rules or habits.


I'm stuck at proving the following equation. I suppose I should use the formula for geometric series ($\sum\limits_{k=0}^{n}q^k=\frac{1-q^{n+1}}{1-q}$), and also use somewhere that $e^{i\theta}=\cos(\theta)+i\sin(\theta)$.


So here is the equation I have to prove: $$\frac{1}{2}+\cos(\theta)+\cos(2\theta)+\dots+\cos(n\theta)=\frac{\sin\left(\left(n+\frac{1}{2}\right)\theta\right)}{2\sin\left(\frac{\theta}{2}\right)}$$


Thanks for your help


Answer




$$ \begin{aligned} \dfrac{1}{2} + \sum_{r=1}^{n} \cos(r\theta) & = \Re \left\{ \dfrac{1}{2} + \sum_{r=1}^{n} e^{ir\theta} \right\} \\ & = \Re \left\{ -\dfrac{1}{2} + \sum_{r=0}^{n} e^{ir\theta} \right\} \\ & = \Re \left\{ -\dfrac{1}{2} + \dfrac{1-e^{i(n+1)\theta}}{1-e^{i\theta}} \right\} \\ & = \Re \left\{ \dfrac{1-2e^{i(n+1)\theta}+e^{i\theta}}{2(1-e^{i\theta})} \right\} \\ & = \Re \left\{ \dfrac{e^{-i\theta/2}-2e^{i(n+1/2)\theta}+e^{i\theta/2}}{2(e^{-i\theta/2}-e^{i\theta/2})} \right\} \\ & = \Re \left\{ \dfrac{2\cos\left(\frac{\theta}{2}\right)-2\cos\left(\left(n+\frac12\right)\theta\right)-2i\sin\left(\left(n+\frac12\right)\theta\right)}{-4i\sin\left(\frac{\theta}{2}\right)} \right\} \end{aligned} $$




Multiply through by $1=\frac{i}{i}$, take the real part and there you have it. ^_^


Thursday, May 17, 2018

functional analysis - $f_n \rightarrow 0$ in $L^1$ $\implies \sqrt{f_n} \rightarrow 0$ also?


Let $(X,\Sigma,\mu)$ be a finite measure space, and let $\{f_n : n \in \mathbb{N} \}$ be a sequence of non-negative measurable functions converging in the $L^1$ sense to the zero function. Show that the sequence $\{\sqrt{f_n}:n \in \mathbb{N} \}$ also converges in the $L^1$ sense to the zero function.


So I have to somehow show that



$$ \lim_{n \to \infty}\int_X\lvert\sqrt{f_n(x)}\rvert\;\mathbb{d}\mu(x) = 0 $$


If I'm honest I don't really know where to start. I think it's an easy question, but I'm new to this stuff. Any help appreciated!


Answer



You have the right choice of $p = q = 2$. However, choose $u = 1$, $v = \sqrt{f_n}$, then


$$\int_X \left|\sqrt{f_n}\right| \ d\mu = \left\| 1\cdot\sqrt{f_n} \right\|_1 \ \leq \ \left\| 1 \right\|_2 \ \left\| \sqrt{f_n}\right\|_2 = \ ...$$
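
Filling in the dots (a completion of the hint, not part of the original answer): $\left\|1\right\|_2 = \mu(X)^{1/2}$ is finite because the measure space is finite, and since $f_n \ge 0$,

$$\left\|\sqrt{f_n}\right\|_2 = \left(\int_X f_n \, d\mu\right)^{1/2} = \|f_n\|_1^{1/2} \xrightarrow[n\to\infty]{} 0,$$

so the whole bound tends to $0$.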


Wednesday, May 16, 2018

calculus - derivative of $x\cdot|\sin x|$



I have the function $f(x)=x|\sin x|$, and I need to see at which points the function has a derivative.
I tried to solve it by using the definition as a limit, but it's complicated. It's pretty obvious that the function has no derivative where $|\sin x|=0$, but I don't know how to show it.
I thought maybe to calculate the derivative of $f(x)$, but I didn't learn how to calculate the derivative of $|\sin x|$.
How can I solve it without knowing the derivative of $|\sin x|$? Or, a better question: how to calculate the derivative of $|\sin x|$?



edited:I didn't learn the derivative of $|f(x)|$.


Answer



I'm going to make this as simple as I can. So, of course, I'll be assuming $x\in \mathbb R$.




Your first question is: for what values of $x$ is the function differentiable?



There are nice algebraic ways to find it but why waste time in an explicit proof if all one needs is to convince one's peers that one's logic is right.



Let's just observe the differentiability of $f(x) = x\cdot |\sin x|$ through its graph.
But oh, wait, you must not know how to plot its graph. So, let's take baby steps to find it.



Take what we know. The standard graph of $y = \sin x$




[graph of $y = \sin x$]



Note that the roots (i.e., $\sin x = 0$) are $x = n\pi,\quad n\in\mathbb Z$.



Now, let's graph $y = |\sin x|$. How?
There's a method to get $|f(x)|$ from $f(x)$, and it goes something like this:




Step 1: Make $y = 0$ a mirror which reflects all of $y<0$ into the half-plane $y>0$.
Step 2: Eliminate the portion of the graph which lies in $y < 0$.
Step 3: Be amazed that by executing the above two steps precisely, you got the right graph.





Learn why this works



[graph of $y = |\sin x|$]



Now we have to multiply this with $x$. There's no real method for this, it only takes a slight bit of thinking and understanding of what multiplication does to a graph.



Usually when we multiply a function with a constant, say $c$.





  • The graph diminishes for $c\in(0,1)$

  • Enlarges for $c>1$

  • Turns upside down for $c<0$ and follows the above two observations once again.



Since we're multiplying by a variable and not a pure scalar, the graph is distorted such that all the above can be seen at once and in increasing degree with increase in the magnitude of $x$.
[graph of $y = x|\sin x|$]



Now, it is obvious that the roots of this graph are the same as that of $\sin x$
and you know we can't differentiate a function at sharp points. (Why?)




Notice that the sharp point at $x = 0$ has been smoothed over by the inversion in the graph for $x<0$. But is it differentiable here at $x=0$?



To prove that it's differentiable at this point,



$$f'(0) = \lim_{h \to 0} \frac{f(0+h) - f(0)}{h}
= \lim_{h \to 0} \frac{h|\sin h|}{h}
= \lim_{h \to 0} |\sin h| = 0
$$
(Why?)




$\therefore $ Derivative exists @ $x = 0$ $\implies$ Differentiable @ $x = 0$



So, we can now safely say that we can differentiate $f(x) = x\cdot|\sin x|\quad \forall \space x\in\mathbb R - \{n\pi\}, \quad n\in\mathbb Z-\{0\}$



Or more easily in words,
$f(x) = x|\sin x|$ is differentiable at $x \neq n\pi ,\quad n \neq 0$



The following is how I would differentiate the function:
$$
\frac{d}{dx} x\cdot|\sin x|\\
= \frac{d}{dx} x\cdot\sqrt{\sin^2 x}
\quad ,\quad \{\because\space|x| = \sqrt{x^2}\} \\
= x\frac{d}{dx}\sqrt{\sin^2 x} + |\sin x|\frac{dx}{dx} \quad ,\quad \{\because\space (uv)' = uv' + u'v\}\\
= x\cdot\frac{1}{2\sqrt{\sin^2x}}\cdot (2\sin x)\cdot (\cos x) + |\sin x|
\quad , \quad \{\because\text{Chain Rule }\}\\
=\frac{x\cdot\sin 2x}{2|\sin x|} + |\sin x|$$

This isn't totally simplified but hopefully this is helpful.




Now, to further clarify on the derivitive of $|x|$:
$$\frac{d}{dx} |x|
= \frac{d}{dx} \sqrt{x^2}
= \frac{1}{2}(x^2)^{\frac{1}{2} - 1} \cdot 2x
= \frac{2x\cdot (x^2)^{-\frac{1}{2}}}{2}
=\frac{1}{2\sqrt{x^2}}
= \frac{x}{|x|}
\equiv \frac{|x|}{x}
= \text{sgn}(x)$$




Here is more information on $\text{sgn}(x)$ and a better more explicit way of finding the derivative of the absolute value



Exercise: Can you try to get the derivative of $x|\sin x|$ with the sign-function included?


Tuesday, May 15, 2018

calculator - What is a logarithm & how can a log table be constructed?

I'm studying properties of the logarithm, but I don't understand how base $e$ works. Base 10 looks simple when doing calculations with numbers that are multiples of 10, but since other numbers are not multiples of 10, how can one calculate without using a log table? That's why I want to know how a log table is constructed and how base $e$ works.
I'm a very basic user; I don't get integration, derivatives, or summations. Please give an answer theoretically or using raw concepts, since I want to construct a log table by myself.

calculus - Evaluating the integral, $int_{0}^{infty} lnleft(1 - e^{-x}right) ,mathrm dx $



I recently got stuck on evaluating the following integral. I do not know an effective substitution to use. Could you please help me evaluate:



$$\int_{0}^{\infty} \ln\left(1 - e^{-x}\right) \,\mathrm dx $$


Answer




One route to evaluating the integral is
$$-\int_0^\infty \ln(1-e^{-x})dx=\int_0^\infty\left(e^{-x}+\frac{e^{-2x}}{2}+\frac{e^{-3x}}{3}+\cdots\right)dx $$
$$=\int_0^\infty e^{-x}dx+\frac{1}{2}\int_0^\infty e^{-2x}dx+\frac{1}{3}\int_0^\infty e^{-3x}dx+\cdots$$
$$=1+\frac{1}{2}\cdot\frac{1}{2}+\frac{1}{3}\cdot\frac{1}{3}+\frac{1}{4}\cdot\frac{1}{4}\cdots $$
$$=\zeta(2)=\frac{\pi^2}{6}.$$



I don't know if a straightforward substitution could get you the answer, what with this being the Riemann zeta function and all, but you can see the integrals on the linked page and try your own hand at finding one. (Or someone else can try their hand.)
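A quick numerical cross-check of the value (my addition; assumes NumPy and SciPy are installed):

```python
import numpy as np
from scipy.integrate import quad

# log1p(-exp(-x)) = log(1 - exp(-x)), computed accurately near 0
val, err = quad(lambda x: np.log1p(-np.exp(-x)), 0, np.inf)
print(val, -np.pi**2 / 6)      # both ~ -1.6449340668...
```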


Monday, May 14, 2018

elementary set theory - Equinumerousity of operations on cardinal numbers

I want to prove for all Cardinal numbers $a$, $b$, $c$ that:




  1. $(a \cdot b)^c =_c a^c \cdot b^c$




  2. $a^{(b+c)} =_c a^b \cdot a^c$




  3. $(a^b)^c =_c a^{b \cdot c}$



I know that for 1. it's enough to show that $(c \rightarrow a \times b) =_c (c \rightarrow a) \times (c \rightarrow b)$ because my teacher told me so.


I think that I have to show the relation "$\leqslant$" first and then "$\geqslant$" by finding an injective function in both cases. For the latter I'm thinking: for every $(f_1, f_2) \in (c \rightarrow a) \times (c \rightarrow b)$, let $f: c \rightarrow a \times b$ be defined as $f(x) = (f_1(x), f_2(x))$, which gives the injective map $(f_1, f_2) \mapsto f$, but I don't know how to verify it. For "$\leqslant$" I tried to do it the other way around, but it makes no sense.

complex analysis - Derivation of a series expansion for Riemann Zeta function




Though there is a vast amount of literature on Riemann's zeta function, I was struck by this pair of series, which is given in Wikipedia. Even after looking through several threads on this site as well as others, I am unable to find their derivation. If this is a duplicate, or the question has been asked before, please let me know. The question is: how are the two series below, given in the link above, derived? $$\zeta(s)=\frac{1}{s-1}\sum_{n=1}^{\infty}\left(\frac{n}{(n+1)^s}-\frac{n-s}{n^s}\right)\forall \Re(s)>0$$ and $$\zeta(s)=\frac{1}{s-1}\sum_{n=1}^{\infty}\frac{n(n+1)}{2}\left(\frac{2n+3+s}{(n+1)^{s+2}}-\frac{2n-1-s}{n^{s+2}}\right)\forall \Re(s)>-1$$



I once got a comment that these formulae are untrue. Is that claim right? Since they appear in Wikipedia, they deserve some careful examination. The series are instructive in that, if they are true, they give us analytic continuation without the use of integration. I guess the derivation uses the Mittag-Leffler theorem, since the reference in the article points to Knopp's Theory of Functions (of which I have no copy), but I am unsure. Thanks beforehand.


Answer



For $Re(s) > 2$ : $$\sum_{n=1}^{\infty}\left(\frac{n}{(n+1)^s}-\frac{n-s}{n^s}\right)= \sum_{n=1}^\infty\left( (n+1)^{1-s}-(n+1)^{-s}- n^{1-s}+s n^{-s} \right)$$ $$= \zeta(s-1)-1-(\zeta(s)-1)-\zeta(s-1)+s \zeta(s) = (s-1)\zeta(s)$$ since $\frac{n}{(n+1)^s}-\frac{n-s}{n^s} = sn\int_n^{n+1} (n^{-s-1}-x^{-s-1})dx$ $=s n\int_n^{n+1} \int_n^x (s+1)t^{-s-2}dtdx= \mathcal{O}(n^{-s-1})$



$\sum_{n=1}^{\infty}\left(\frac{n}{(n+1)^s}-\frac{n-s}{n^s}\right)$ converges and is analytic for $Re(s) > 0$, and by analytic continuation $\sum_{n=1}^{\infty}\left(\frac{n}{(n+1)^s}-\frac{n-s}{n^s}\right) = (s-1) \zeta(s)$ stays true for $Re(s) > 0$.



The second formula follows the same idea.
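To address the doubt in the question, a numerical spot check of the first formula inside the strip $0<\Re(s)<1$ is easy; a small sketch with mpmath (my addition, assuming the mpmath package is available):

```python
from mpmath import mp, mpf, zeta, nsum, inf

mp.dps = 20
s = mpf('0.5')     # a point where the ordinary Dirichlet series diverges
lhs = nsum(lambda n: n/(n + 1)**s - (n - s)/n**s, [1, inf])
print(lhs)                   # the two printed values agree,
print((s - 1) * zeta(s))     # supporting the formula for Re(s) > 0
```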



Prove that the sequence $a_{n+1} =frac{1}{2}left(a_{n}+frac{c}{a_{n}}right)$ is convergent and find its limit



Let $c>0$, $a_{1} = 1$, and
$$a_{n+1} =\frac{1}{2}\left(a_{n}+\frac{c}{a_{n}}\right)$$




I need to:




  1. Show that $a_{n}$ is defined for every $n\geq 1$

  2. Show that this sequence is convergent.

  3. Find its limit.



I proved the first part by showing by induction that this sequence is positive for every $n$. To show that this sequence is convergent I'm thinking of showing that it is a Cauchy sequence, yet I can't figure out how.




For the third part I'm clueless at the moment.


Answer



Hints:




  1. One can prove that if $a_1>0$ and $n \geq 2$ then $a_n \geq \sqrt{c}$.

  2. One can prove that if $a_1>0$ and $n \geq 2$ then $a_{n+1} \leq a_n$. Knowing that and the bound in 1, convergence follows.

  3. Once you know that a recursive sequence is convergent, its limit can only be a fixed point of the recursion mapping, i.e. in your case a solution to $\frac{x}{2}+\frac{c}{2x}=x$; see the numerical sketch just below.
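A brief numerical sketch of the hints (my addition): starting from $a_1 = 1$, the iterates jump above $\sqrt c$ at $n=2$ and then decrease monotonically to it.

```python
import math

def heron(c, steps=8):
    a = 1.0                        # a_1 = 1
    seq = [a]
    for _ in range(steps):
        a = 0.5 * (a + c / a)      # a_{n+1} = (a_n + c/a_n)/2
        seq.append(a)
    return seq

print(heron(3.0))      # 1.0, 2.0, 1.75, 1.732..., decreasing from a_2 on
print(math.sqrt(3))    # 1.7320508075688772
```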


real analysis - Limit of $L^p$ norm



Could someone help me prove that given a finite measure space $(X, \mathcal{M}, \mu)$ and a measurable function $f:X\to\mathbb{R}$ in $L^\infty$ and some $L^q$, $\displaystyle\lim_{p\to\infty}\|f\|_p=\|f\|_\infty$?




I don't know where to start.


Answer



Fix $\delta\in(0,\lVert f\rVert_\infty)$ and let $S_\delta:=\{x : |f(x)|\geqslant \lVert f\rVert_\infty-\delta\}$. We have
$$\lVert f\rVert_p\geqslant \left(\int_{S_\delta}(\lVert f\rVert_\infty-\delta)^pd\mu\right)^{1/p}=(\lVert f\rVert_\infty-\delta)\mu(S_\delta)^{1/p},$$
and since $\mu(S_\delta)$ is finite and positive, $\mu(S_\delta)^{1/p}\to 1$ as $p\to+\infty$. As $\delta$ was arbitrary, this gives
$$\liminf_{p\to +\infty}\lVert f\rVert_p\geqslant\lVert f\rVert_\infty.$$
As $|f(x)|\leqslant\lVert f\rVert_\infty$ for almost every $x$, we have for $p>q$, $$
\lVert f\rVert_p\leqslant\left(\int_X|f(x)|^{p-q}|f(x)|^qd\mu\right)^{1/p}\leqslant \lVert f\rVert_\infty^{\frac{p-q}p}\lVert f\rVert_q^{q/p},$$

and letting $p\to+\infty$ (so that $\frac{p-q}{p}\to1$ and $\frac qp\to0$) gives $\limsup_{p\to+\infty}\lVert f\rVert_p\leqslant\lVert f\rVert_\infty$, the reverse inequality.
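A concrete illustration (my addition, assuming NumPy): take $X=[0,1]$ with Lebesgue measure and $f(x)=x$, so $\lVert f\rVert_\infty = 1$ and $\lVert f\rVert_p=(p+1)^{-1/p}$, which indeed climbs to $1$.

```python
import numpy as np

x = np.linspace(0, 1, 200001)
f = x                                        # ||f||_inf = 1 on [0,1]
for p in [1, 2, 10, 100, 1000]:
    norm_p = np.mean(np.abs(f)**p)**(1/p)    # Riemann-sum approximation
    print(p, norm_p)                         # climbs toward 1 as p grows
```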


Sunday, May 13, 2018

real analysis - Solution of Cauchy functional equation which has an antiderivative



Let $f\colon\mathbb R\to\mathbb R$ be a function such that
$$f(x+y)=f(x)+f(y)$$
for any $x,y\in\mathbb R$
i.e., it fulfills Cauchy functional equation.



Additionally, suppose that $F'=f$ for some function $F\colon\mathbb R\to\mathbb R$, i.e., $f$ has a primitive function.




How can I show that every such function must by of the form $f(x)=cx$ for some constant $c\in\mathbb R$?






I have seen an exercise in a book on real analysis, where I would be able to use this fact. I could use the argument that every derivative belongs to the first Baire class and consequently it is measurable. Every measurable solution of Cauchy functional equation is a linear function, nice proof is given, for example, in Herrlich's Axiom of Choice, p.119.



The fact that derivative is Baire function was mentioned in the book before the chapter with this exercise. But measurability is done in this book only later. For this reason (and also out of curiosity) I wonder whether there is a proof not using measurability of $f$.


Answer



By the functional equation, it suffices to prove that $f$ is continuous at one point.




The fact that $f$ is of first Baire class is very straightforward:
$$
f(x) = \lim_{n \to \infty} \frac{F(x+1/n)-F(x)}{1/n}
$$
is a pointwise limit of continuous functions.



Now a function of first Baire class has a comeager $G_\delta$-set of points of continuity. Done.







Indeed, enumerate the open intervals with rational endpoints as $\langle I_n \mid n \in \omega\rangle$. Then
$$
f \text{ is discontinuous at }x \iff \exists n\in \omega : x \in f^{-1}[I_n] \setminus \operatorname{int}f^{-1}[I_n]
$$
Since $f$ is of first Baire class, $f^{-1}[I_n]$ is an $F_\sigma$ and so is $f^{-1}[I_n] \setminus \operatorname{int} f^{-1}[I_n]$. Therefore we can write
$$
f^{-1}[I_n] \setminus \operatorname{int} f^{-1}[I_n] = \bigcup_{k \in \omega} F_{k}^{n}
$$
for some sequence $\langle F_{k}^n \mid k \in \omega\rangle$ of closed sets. Observe that $F_{k}^n$ has no interior, so the set of points of discontinuity of $f$ is

$$
\bigcup_{n \in \omega} f^{-1}[I_n] \setminus \operatorname{int}f^{-1}[I_n] = \bigcup_{n\in\omega} \bigcup_{k\in\omega} F_{k}^n,
$$
a countable union of closed and nowhere dense sets.


Saturday, May 12, 2018

elementary number theory - If $p equiv 1 mod{4}$ is prime, how to find a quadratic nonresidue modulo $p$?



If $p \equiv 3 \mod{4}$ is prime, then $-1$ is a quadratic non-residue modulo $p$. This is not the case when $p \equiv 1 \mod{4}$. How can we find a quadratic non-residue in this case?



At least one will exist, since half of the elements of $\mathbb{Z}_p^*$ are quadratic residues and the other half are non-residues. What I am looking for is an algorithm (other than brute force or a probabilistic algorithm) or a formula that would produce some quadratic non-residue modulo $p$.


Answer



If $p \equiv 5 \pmod{8}$, $2$ is a quadratic non-residue $\pmod{p}$. If $p \equiv 1 \pmod{8}$, the smallest quadratic non-residue has to be an odd prime $q$, and by quadratic reciprocity $\left(\frac{q}{p}\right)=\left(\frac{p}{q}\right)$, so you can just take prime $q$ and test whether $p$ is a quadratic residue $\pmod{q}$. Do this for $q=3, 5, 7 \ldots$.




Edit: An efficient way to compute $ \left(\frac{q}{p}\right)$ would be to use the Jacobi symbol. Note: this is much faster than using the Legendre symbol.



This algorithm terminates quite quickly, since it is known that the smallest quadratic non-residue is $<\sqrt{p}+1$; and if the extended Riemann hypothesis is true, then the smallest quadratic non-residue is $<\frac{3(\ln{p})^2}{2}$.
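A minimal sketch of the search loop described above (my code, not the answerer's). For simplicity it applies Euler's criterion with fast modular exponentiation instead of the Jacobi-symbol shortcut; the structure of the algorithm is the same:

```python
def smallest_nonresidue(p):
    # a is a non-residue mod an odd prime p  iff  a^((p-1)/2) = -1 (mod p)
    a = 2
    while pow(a, (p - 1) // 2, p) == 1:
        a += 1
    return a

print(smallest_nonresidue(97))   # 97 = 1 (mod 8); prints 5
```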


integration - Fundamental Theorem of Calculus for $limlimits_{xto 0}frac{int_0^x(x-t)sin t^2 dt}{xsin^3x}$


How to integrate this?




Evaluate $$\lim_{x\to0}\frac{\displaystyle\int_0^x(x-t)\sin t^2\ dt}{x\sin^3x}.$$



I've had difficulty using L'Hopital's rule; at the same time I failed to really understand how to differentiate, or evaluate the limits of, $x\sin^3x$ and $\int_0^x(x-t)\sin t^2\ dt$.


I would appreciate your help.


Answer



Use repeatedly $\sin u=u\,{\rm sinc}(u)$, whereby $\lim_{u\to0}{\rm sinc}(u)={\rm sinc}(0)=1$. We have $$\int_0^x (x-t)\sin(t^2)\>dt=x^4\int_0^1(1-\tau)\,\tau^2{\rm sinc}(x^2\tau^2)\>d\tau$$ and $$\>x\sin^3 x=x^4\>{\rm sinc}^3(x)\ ,$$ so that $${\int_0^x (x-t)\sin(t^2)\>dt \over x\sin^3 x}={\int_0^1(1-\tau)\,\tau^2\bigl(1+r(x,\tau)\bigr)\>d\tau\over 1+\bar r(x)}\ ,$$ whereby $\lim_{x\to0}r(x,\tau)=0$ uniformly in $\tau$, and $\lim_{x\to0}\bar r(x)=0$ as well. It follows that the limit in question is $$\int_0^1(1-\tau)\,\tau^2\>d\tau={1\over12}\ .$$
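Numerically the ratio indeed settles near $1/12 \approx 0.0833$ as $x\to0$; a short check (my addition, assuming NumPy and SciPy):

```python
import numpy as np
from scipy.integrate import quad

def ratio(x):
    num, _ = quad(lambda t: (x - t) * np.sin(t**2), 0, x,
                  epsabs=1e-14, epsrel=1e-12)
    return num / (x * np.sin(x)**3)

for x in [0.5, 0.2, 0.05]:
    print(x, ratio(x))           # tends to 1/12 = 0.08333...
```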


Friday, May 11, 2018

sequences and series - Does $zeta(-1)=-1/12$ or $zeta(-1) to -1/12$?

I saw NumberPhile channel on Youtube, and they proved $1+2+3+\cdots=-1/12$. Also, I read This.



So, which one is correct?


$$\zeta(-1)=-1/12\\ \text{or} \\\zeta(-1) \to -1/12$$


Equivalent to:


$$1+2+3+\cdots=-1/12\\ \text{or} \\1+2+3+\cdots \to -1/12$$




My question: Does it "equal" or "converge"?



Question Explanation:


By "$\to$" I mean "approaches", as in: $x\to a $ means $\forall \epsilon>0, |x-a|<\epsilon.$

real analysis - How to define a bijection between $(0,1)$ and $(0,1]$?





How to define a bijection between $(0,1)$ and $(0,1]$?
Or any other open and closed intervals?




If the intervals are both open like $(-1,2)\text{ and }(-5,4)$ I do a cheap trick (don't know if that's how you're supposed to do it):
I make a function $f : (-1, 2)\rightarrow (-5, 4)$ of the form $f(x)=mx+b$ by
\begin{align*}
-5 = f(-1) &= m(-1)+b \\
4 = f(2) &= m(2) + b
\end{align*}
Solving for $m$ and $b$ I find $m=3\text{ and }b=-2$ so then $f(x)=3x-2.$



Then I show that $f$ is a bijection by showing that it is injective and surjective.


Answer



Choose an infinite sequence $(x_n)_{n\geqslant1}$ of distinct elements of $(0,1)$. Let $X=\{x_n\mid n\geqslant1\}$, hence $X\subset(0,1)$. Let $x_0=1$. Define $f(x_n)=x_{n+1}$ for every $n\geqslant0$ and $f(x)=x$ for every $x$ in $(0,1)\setminus X$. Then $f$ is defined on $(0,1]$ and the map $f:(0,1]\to(0,1)$ is bijective.



To sum up, one extracts a copy of $\mathbb N$ from $(0,1)$ and one uses the fact that the map $n\mapsto n+1$ is a bijection between $\mathbb N\cup\{0\}$ and $\mathbb N$.
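To make the construction concrete, here is a sketch with the particular choice $x_n=1/(n+1)$ (my choice, for illustration): then $X=\{1/2,1/3,\dots\}$, $x_0=1$, the sequence gets shifted along, and every other point is fixed. Restricted to exact rationals:

```python
from fractions import Fraction

def f(x):                          # bijection (0,1] -> (0,1), rationals only
    if x == 1:                     # x_0 = 1 maps to x_1 = 1/2
        return Fraction(1, 2)
    if x.numerator == 1:           # x_n = 1/(n+1) maps to x_{n+1} = 1/(n+2)
        return Fraction(1, x.denominator + 1)
    return x                       # points outside X are fixed

print(f(Fraction(1)), f(Fraction(1, 2)), f(Fraction(3, 4)))  # 1/2 1/3 3/4
```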


calculus - To prove a sequence is Cauchy




I have a sequence:
$ a_{n}=\sqrt{3+ \sqrt{3 + \cdots + \sqrt {3} } } $ , with $n$ nested radicals.




and I have to prove that it is a Cauchy sequence.
So I did this:
One theorem says that every convergent sequence is also Cauchy, so I proved that the sequence is bounded between $ \sqrt{3}$ and $ 3 $ (with this one I am not sure; please check whether I am right). And I also proved that this sequence is monotonic (by induction I showed $ a_{n} \leq a_{n+1} $),
so if it's bounded and monotonic, it is therefore convergent and Cauchy.
I am just wondering whether this already proves it or not, and also whether the upper bound (the supremum, if you wish) is chosen correctly.
I appreciate all the help I get.


Answer



$a_{n+1}=\sqrt{a_n+3}\ \Rightarrow\ a_{n+1}^2=a_n+3$. Letting $n\to\infty$ on both sides (once convergence is established) and writing $x$ for the limit gives $x^2=x+3$, i.e. $x^2-x-3=0$, so the sequence converges to the positive root $x=\frac{1+\sqrt{13}}{2}$.
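One can watch the iteration approach that root (my addition):

```python
import math

a = math.sqrt(3)                    # a_1
for _ in range(30):
    a = math.sqrt(a + 3)            # a_{n+1} = sqrt(a_n + 3)
print(a)                            # 2.302775637731995
print((1 + math.sqrt(13)) / 2)      # the positive root of x^2 - x - 3 = 0
```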


calculus - If $f:[0,infty)to [0,infty)$ and $f(x+y)=f(x)+f(y)$ then prove that $f(x)=ax$




Let $\,f:[0,\infty)\to [0,\infty)$ be a function such that $\,f(x+y)=f(x)+f(y),\,$ for all $\,x,y\ge 0$. Prove that $\,f(x)=ax,\,$ for some constant $a$.




My proof :



We have , $\,f(0)=0$. Then ,

$$\displaystyle f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{h\to 0}\frac{f(h)}{h}=\lim_{h\to 0}\frac{f(h)-f(0)}{h}=f'(0)=a\ \text{ (a constant)}.$$



Then, $\,f(x)=ax+b$. As, $\,f(0)=0$ so $b=0$ and $f(x)=ax.$



Is my proof correct?


Answer



In your proof you assume that $f$ is differentiable, which is not given.



Let me suggest how to obtain the formula of $f$:




Step I. Show that $\,f(px)=p\,f(x),\,$ when $p$ is a positive rational and $x$ a non-negative real. (First show this for $p$ a positive integer.) We also obtain that $\,f(0)=0$.



Step II. Observe that $f$ is increasing, since, for $y>x$, we have
$$
f(y)=f(x)+f(y-x)\ge f(x).
$$



Step III.
Since $f$ is increasing, the limit $\,\lim_{x\to 0^+}f(x)\,$ exists. However
$$
\lim_{x\to 0^+}f(x)=\lim_{n\to\infty}f\Big(\frac{1}{n}\Big)
=\lim_{n\to\infty}\frac{1}{n}\,f(1)=0.
$$



Step IV. Pick an arbitrary $x\in(0,\infty)$, and a decreasing sequence
$\{q_n\}\subset\mathbb Q$ tending to $x$. Then
$$
f(q_n)=q_n\,f(1)
$$
and

$$
x\,f(1)\longleftarrow q_n\,f(1)=f(q_n)=f(x)+f(q_n-x)\longrightarrow f(x),
$$
since $\,\,q_n-x\to 0^+$, and thus $\,\,\lim_{n\to\infty}f(q_n-x)=0$.



Therefore, $\,f(x)=x\,f(1),\,$ for all $x\in[0,\infty)$, and hence $\,f'(x)=f(1)$.


Thursday, May 10, 2018

linear algebra - If Brauer characters are $bar{mathbb{Q}}$-linearly independent, why are they $mathbb{C}$-linearly independent?




If Brauer characters are $\bar{\mathbb{Q}}$-linearly independent, why are they $\mathbb{C}$-linearly independent?



I think this is a linear algebra fact showing up when proving the irreducible Brauer characters on a finite group are linearly independent over $\mathbb{C}$. The proof I've seen observes that the characters take values in the ring of algebraic integers, and then proves linear independence over $\bar{\mathbb{Q}}$.



Why is it sufficient to only check linear independence over $\bar{\mathbb{Q}}$? It seems like something could go wrong when extending the field all the way up to $\mathbb{C}$.



The proof I'm reading is Theorem 15.5 in Isaacs' Character Theory of Finite Groups.






Answer



If $E/F$ is a field extension, we have $F^n\subset E^n$, and if a subset of $F^n$ is $F$-linearly independent, then it is also $E$-linearly independent. A nice, super easy way to see it: extend the subset to a basis for $F^n$. Form the matrix whose columns are elements of this basis. Its determinant is nonzero. But this shows that the columns form a basis for $E^n$ since the determinant has the same formula regardless of the field you work over.



The space of class functions of a finite group can be identified with $F^n$ in an obvious way ($n$= number of conjugacy classes).


continuity - Real analysis: Continuous Function


Let $ f: {\mathbb{R}^n} \rightarrow {\mathbb{R}}$ be continuous and let $a$ and $b$ be points in ${\mathbb{R}^n}$. Let the function $g: {\mathbb{R}} \rightarrow {\mathbb{R}}$ be defined as: $$ g(t) = f(ta+(1-t)b) $$ Show that $g$ is continuous.



If I define a function $ h(t)=ta+(1-t)b$, then I have that $g(t)=f(h(t))$. I know that $f$ is continuous, so I have to prove that $h(t)$ is continuous, since a composition of two continuous functions is also continuous.


How do I prove that $h(t)$ is continuous as a map from ${\mathbb{R}}$ into ${\mathbb{R}^n}$?


Answer



If $t_1,t_2\in\mathbb R$, then\begin{align}\bigl\|h(t_2)-h(t_1)\bigr\|&=\bigl\|t_2a+(1-t_2)b-t_1a-(1-t_1)b\bigr\|\\&=\bigl\|(t_2-t_1)a-(t_2-t_1)b\bigr\|\\&=|t_2-t_1|\cdot\|a-b\|.\end{align}If $a=b$, $h$ is constant and therefore it is continuous. Otherwise, if $\varepsilon>0$ then take $\delta=\frac{\varepsilon}{\|a-b\|}$. Then$$|t_2-t_1|<\delta\implies\bigl\|h(t_2)-h(t_1)\bigr\|<\varepsilon.$$


divisibility - The method of solving for a factor of $90!$


If $90! = (90)(89)(88)...(2)(1)$, then what is the exponent of the highest power of $2$ which will divide $90!$ ?




How would I apply one of the easiest methods from Here?



I need help applying the linked method to this question.




I do not understand which one explains my case, or how I can solve it using the method.



I would appreciate it if someone showed how the method is applied to this problem.
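Since no worked answer is attached here, a hedged sketch of the standard method from the linked page (Legendre's formula): the exponent of $2$ in $90!$ is $\sum_{k\ge1}\lfloor 90/2^k\rfloor$.

```python
def legendre(n, p):
    # exponent of the prime p in n! : sum of floor(n / p^k) over k >= 1
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

print(legendre(90, 2))   # 45 + 22 + 11 + 5 + 2 + 1 = 86
```

So $2^{86}$ divides $90!$ but $2^{87}$ does not.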

abstract algebra - How are the integral parts of $(9 + 4sqrt{5})^n$ and $(9 − 4sqrt{5})^n$ related to the parity of $n$?



I am stuck on this question,




The integral parts of $(9 + 4\sqrt{5})^n$ and $(9 − 4\sqrt{5})^n$ are:





  1. even and zero if $n$ is even;

  2. odd and zero if $n$ is even;

  3. even and one if $n$ is even;

  4. odd and one if $n$ is even.




I think either the problem or the options are wrong. To me it seems that the answer should be odd irrespective of $n$. Consider the following:



$$
\begin{align*}
(9 \pm 4 \sqrt{5})^4 &= 51841 \pm 23184\sqrt{5} \\
(9 \pm 4 \sqrt{5})^5 &= 930249 \pm 416020\sqrt{5}
\end{align*}$$



Am I missing something?


Answer



The idea is to see that $(9+4\sqrt{5})^n+(9-4\sqrt{5})^n=2d_n$ is an even number for every $n$. This can be seen using the binomial expansion formula: the terms containing odd powers of $4\sqrt{5}$ appear once with $+$ and once with $-$ and cancel, while the terms with even powers are integers and appear twice with $+$.



Moreover, $0<(9-4\sqrt{5})=9-\sqrt{80}=\frac{1}{\sqrt{81}+\sqrt{80}}<1$. This means that the integer part of the second term is zero. And for the other one, think of it like this:




$$2d_n-1<(9+4\sqrt{5})^n<2d_n$$ so the integer part of the first term is always odd. The correct answer would be the second one (although it holds for every $n$, not just even $n$).



[edit] As Arturo Magidin wrote in his comment, the integer part of a real number
$x$, often denoted $\lfloor x \rfloor$ is the unique integer $\lfloor x \rfloor=k$ such that $k \leq x < k+1$, and it does not equal $a$ from the expansion $(9\pm 4\sqrt{5})^n=a\pm b\sqrt{5},\ a,b \in \Bbb{Z}$.
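The parity claim can be verified exactly, avoiding floating point (my addition): $s_n=(9+4\sqrt5)^n+(9-4\sqrt5)^n$ satisfies $s_0=2$, $s_1=18$, $s_n=18s_{n-1}-s_{n-2}$ (the numbers $9\pm4\sqrt5$ are the roots of $x^2-18x+1$), and $\lfloor(9+4\sqrt5)^n\rfloor=s_n-1$.

```python
s_prev, s_cur = 2, 18              # s_0, s_1
for n in range(1, 8):
    int_part = s_cur - 1           # floor((9 + 4*sqrt(5))**n), exactly
    print(n, int_part, int_part % 2)   # last column is always 1 (odd)
    s_prev, s_cur = s_cur, 18 * s_cur - s_prev
```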


question about the proof about the square root of natural numbers

Could someone please help me to prove that for $t \in \mathbb{N}$, $\sqrt{t} \in \mathbb{Q} $ if and only if $\sqrt{t} \in \mathbb{N}$?

calculus - Why does $lim_{xrightarrow 0}frac{sin(x)}x=1$?


I am learning about the derivative formula $\frac{d}{dx}[\sin(x)] = \cos(x)$.


The proof stated: From $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$...


I realized I don't know why, so I wanted to learn why that part is true before moving on. But unfortunately I don't have the complete notes for this proof.


  1. It started with a unit circle, and then drew a triangle with a vertex at $(1, \tan(\theta))$

  2. It showed that the area of the big triangle is $\frac{\tan\theta}{2}$


  3. It showed that this area is greater than that of the sector, which is $\frac{\theta}{2}$. Here is my question: how does this "section" of the circle have area $\frac{\theta}{2}$? (It looks like a pizza slice.)


  4. From there, it stated that the area of the smaller triangle is $\frac{\sin(\theta)}{2}$. I understand this part, since the area of a triangle is $\frac{1}{2}(\text{base} \times \text{height})$.




  5. Then they multiply each expression by $\frac{2}{\sin(\theta)}$ to get $\frac{1}{\cos(\theta)} \ge \frac{\theta}{\sin(\theta)} \ge 1$



And the incomplete notes ended here; I am not sure how the teacher got to the conclusion $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$. I thought it might have something to do with reversing the inequality... Is the answer obvious from this point? And how does the calculation in step 3 work?


Answer



Draw the circle of radius $1$ centered at $(0,0)$ in the Cartesian plane.



Let $\theta$ be the length of the arc from $(1,0)$ to a point on the circle. The radian measure of the corresponding angle is $\theta$ and the height of the endpoint of the arc above the coordinate axis is $\sin\theta$.


Now look at what happens when $\theta$ is infinitesimally small. The length of the arc is $\theta$ and the height is also $\theta$, since that infinitely small part of the circle looks like a vertical line (you're looking at the neighborhood of $(1,0)$ under a microscope).


Since $\theta$ and $\sin\theta$ are the same when $\theta$ is infinitesimally small, it follows that $\dfrac{\sin\theta}\theta=1$ when $\theta$ is infinitesimally small.


That is how Leonhard Euler viewed the matter in the 18th century.


Why does the sector of the circle have area $\theta/2$?


The whole circle has area $\pi r^2=\pi 1^2 = \pi$. The fraction of the circle in the sector is $$ \frac{\text{arc}}{\text{circumference}} = \frac{\theta}{2\pi}. $$ So the area is $$ \frac \theta {2\pi}\cdot \pi = \frac\theta2. $$
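Combining the question's step 5 with $\cos\theta \to 1$ as $\theta\to0$ squeezes $\frac{\theta}{\sin\theta}$ (and hence its reciprocal) to $1$; numerically (my addition) the pinch is easy to see:

```python
import math

# cos(t) <= sin(t)/t <= 1 for small t > 0; both bounds tend to 1
for t in [0.5, 0.1, 0.01, 0.001]:
    print(t, math.cos(t), math.sin(t) / t)
```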


Wednesday, May 9, 2018

calculus - Proving that $limlimits_{xtoinfty}f'(x) = 0$ when $limlimits_{xtoinfty}f(x)$ and $limlimits_{xtoinfty}f'(x)$ exist



I've been trying to solve the following problem:


Suppose that $f$ and $f'$ are continuous functions on $\mathbb{R}$, and that $\displaystyle\lim_{x\to\infty}f(x)$ and $\displaystyle\lim_{x\to\infty}f'(x)$ exist. Show that $\displaystyle\lim_{x\to\infty}f'(x) = 0$.


I'm not entirely sure what to do. Since there's not a lot of information given, I guess there isn't very much one can do. I tried using the definition of the derivative and showing that it went to $0$ as $x$ went to $\infty$ but that didn't really work out. Now I'm thinking I should assume $\displaystyle\lim_{x\to\infty}f'(x) = L \neq 0$ and try to get a contradiction, but I'm not sure where the contradiction would come from.


Could somebody point me in the right direction (e.g. a certain theorem or property I have to use?) Thanks


Answer



Hint: If you assume $\lim _{x \to \infty } f'(x) = L \ne 0$, the contradiction would come from the mean value theorem (consider $f(x)-f(M)$ for a fixed but arbitrary large $M$, and let $x \to \infty$).


Explained: if the limit of $f(x)$ exists, the graph has a horizontal asymptote, so heuristically the function flattens out as $x\to\infty$; since $\lim_{x\to\infty}f'(x)$ is assumed to exist, that limit can only be zero.


decimal expansion - I'm puzzled with 0.99999











After reading all the kind answers for this previous question of mine,
I wonder... How do we get a fraction whose decimal expansion is the simple $0.\overline{9}$?



I don't mean to look like I'm kidding or joking (of course, one can teach math with fun so it becomes more interesting), but this series has really raised a flag here, because $\frac{9}{9}$ won't solve this case, although the pattern works for all other digits (e.g. $0.\overline{8}=\frac{8}{9}$ and so on).




Thanks!
Beco.


Answer



The number $0.9999\cdots$ is in fact equal to $1$, which is why you get $\frac{9}{9}$. See this previous question.



To see it is equal to $1$, you can use any number of ideas:




  1. The hand-wavy but convincing one: Let $x=0.999\cdots$. Then $10x = 9.999\cdots = 9 + x$. So $9x = 9$, hence $x=1$.



  2. The formal one. The decimal expansion describes an infinite series. Here we have that
    $$ x = \sum_{n=1}^{\infty}\frac{9}{10^n}.$$
    This is a geometric series with common ratio $\frac{1}{10}$ and initial term $\frac{9}{10}$, so
    $$x = \sum_{n=1}^{\infty}\frac{9}{10^n} = \frac{\quad\frac{9}{10}}{1 - \frac{1}{10}} = \frac{\frac{9}{10}}{\quad\frac{9}{10}\quad} = 1.$$




In general, a number whose decimal expansion terminates (has a "tail of 0s") always has two decimal expansions, one with a tail of 9s. So:
$$\begin{align*}
1.0000\cdots &= 0.9999\cdots\\
2.480000\cdots &= 2.4799999\cdots\\
1938.01936180000\cdots &= 1938.019361799999\cdots
\end{align*}$$
etc.


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...