Friday, March 31, 2017

elementary number theory - A simple problem on congruences.



This is a problem on congruences from Andy Liu's book "Arithmetical Wonderland".



The teacher asked Ace, Beatrice and Cecil to write down some $4$-digit number, write down a second $4$-digit number obtained from the first by reversing the order of the digits, and then add the two numbers. Ace’s answer was $5985$, Beatrice’s $2212$ and Cecil’s $4983$.
(a) Without looking at their work, the teacher said that both Ace and Beatrice had made mistakes. How could she tell?
(b) If Cecil had not made mistakes, what was his initial number, given that it was not divisible by $10$ and was greater than the number obtained by digit reversal?







My attempt:
Working in base $10$ arithmetic, if there is a $4$-digit number 'abcd', then its reverse is 'dcba'. Adding the two (when no carrying occurs) gives the digits $(a+d)(b+c)(b+c)(a+d)$. This means that the first and last digits of the sum are the same, and similarly for the middle digits.



Based on this analysis, all three numbers contain mistakes. But it is given that Cecil did not make any. So, consider the other two conditions: (i) the original number is bigger than its reverse, (ii) it does not have the digit $0$ at the end.



I suspect the second condition refers to the sum, as the sum "cannot" have a units digit of $0$ anyway: the leading digit would then also be $0$.
So, is the second condition redundant?



It is also not clear to me how this is even a problem on congruence arithmetic.



Answer



The $4$-digit number can be written as $$abcd=1000a+100b+10c+d$$ where $a,b,c,d\in\{0,1,...,9\}$ and $a\neq0$. So the sum required is $$abcd+dcba=1000(a+d)+100(b+c)+10(b+c)+(a+d)=1001(a+d)+110(b+c)$$ Ace's answer is $5985$: since $1001(a+d)\le5985$ and $a+d\equiv5\pmod{10}$, we need $a+d=5$, but then $110(b+c)=5985-5005=980$, which is not a multiple of $110$, so Ace is wrong. The argument for Beatrice is similar: $a+d=2$ forces $110(b+c)=2212-2002=210$, again not a multiple of $110$.



For Cecil's answer, the same reasoning gives $a+d=3$, and then $110(b+c)=4983-3003=1980$, so $b+c=18$, forcing $b=c=9$. Since the initial number is not divisible by $10$, we have $d\neq0$ (and $a\neq0$ because the number has $4$ digits), so either $(a,d)=(1,2)$ or $(2,1)$. The condition that $abcd$ must be larger than $dcba$ means that it is the latter. $$abcd=2991\implies abcd+dcba=2991+1992=4983$$
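Not part of the original answer, but a quick brute-force check (a Python sketch) confirms that $2991$ is the only possibility:

# Search all 4-digit numbers not divisible by 10 whose digit reversal
# is smaller and whose sum with that reversal is 4983.
for n in range(1000, 10000):
    r = int(str(n)[::-1])
    if n % 10 and n > r and n + r == 4983:
        print(n)  # prints 2991 only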


sequences and series - Evaluate the Sum $S=\frac{1}{4}+\frac{1\cdot3}{4\cdot6}+\frac{1\cdot3\cdot5}{4\cdot6\cdot8}+\cdots$

Evaluate the Sum




$$S=\frac{1}{4}+\frac{1\cdot3}{4\cdot6}+\frac{1\cdot3\cdot5}{4\cdot6\cdot8}+\cdots$$



My try: We have the $n$-th term as


$$T_n=\frac{1\cdot3\cdot5 \cdots (2n-1)}{4\cdot6\cdot8 \cdots (2n+2)}$$ $\implies$


$$T_n=\frac{1\cdot3\cdot5 \cdots (2n-1)}{2^n \times (n+1)!}$$


$$T_n=\frac{(2n)!}{4^n \times n! \times (n+1)!}$$


$$T_n=\frac{\binom{2n}{n-1}}{n \times 4^n}$$


Any clue here?
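No answer is recorded for this post. One observation (this note's, not from the thread): the last form says $T_n=\frac{C_n}{4^n}$, where $C_n=\frac{1}{n+1}\binom{2n}{n}$ is the $n$-th Catalan number, whose generating function $\sum_{n\ge0}C_nx^n=\frac{1-\sqrt{1-4x}}{2x}$ equals $2$ at $x=\frac14$; dropping the $n=0$ term gives $S=1$. A numerical sanity check in Python:

from math import comb

# Partial sums of T_n = C_n / 4^n; they creep toward 1 (the tail decays
# like n^(-3/2), so convergence is slow).
def T(n):
    return comb(2 * n, n) / ((n + 1) * 4.0 ** n)

print(sum(T(n) for n in range(1, 500)))  # about 0.95, approaching 1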

Calculating the probability that throwing two dice will yield a higher number than throwing one die

How do I work out the probability that Andy, rolling two dice, will roll a number higher than Candy, rolling one die?



The highest number you can roll with one die is six. The probability of rolling any number is 1/6. The highest number you can roll using two dice is 12. The sum of probabilities that Andy will roll a number higher than six is 59%.




  • 17% for rolling a 7

  • 14% for rolling an 8


  • 11% for rolling a 9

  • 8% for rolling a 10

  • 6% for rolling an 11

  • 3% for rolling a 12



So is it fair to say that Andy has a 59% chance to roll a number higher than Candy? How do I represent that in a formula? Or have I gotten my math all wrong?



This might be a knucklehead question, and for that you'll have to excuse me; I haven't used any math beyond elementary algebra for 15 years.
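No answer is recorded for this post. As a sanity check (a Python sketch, not from the thread), enumerating all $6^3=216$ equally likely outcomes shows the $59\%$ figure undercounts, because Andy can also beat Candy with totals of six or less:

from itertools import product

# Count outcomes where Andy's two-dice total strictly beats Candy's die.
wins = sum(1 for a1, a2, c in product(range(1, 7), repeat=3) if a1 + a2 > c)
print(wins, "/ 216 =", wins / 216)  # 181 / 216, about 83.8%

As a formula: $P(\text{Andy wins})=\frac{1}{6}\sum_{c=1}^{6}P(X_1+X_2>c)=\frac{181}{216}$.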

sequences and series - Find the limit of $\frac{(n+1)^{\sqrt{n+1}}}{n^{\sqrt{n}}}$.



Find $$\lim_{n\to \infty}\frac{(n+1)^\sqrt{n+1}}{n^\sqrt{n}}$$



First I tried by taking $\ln y_n=\ln \frac{(n+1)^\sqrt{n+1}}{n^\sqrt{n}}=\sqrt{n+1}\ln(n+1)-\sqrt{n}\ln(n),$



which does not seem to take me anywhere. Then I tried to use the squeeze theorem, since $\frac{(n+1)^\sqrt{n+1}}{n^\sqrt{n}}\geq 1$, but now I need an upper bound. I have been trying for a while to come up with one, but I am stuck. Can you help me, please?


Answer



Notice that for $f(x)=\sqrt x \ln x$ you have

$$f'(x)=\frac{\ln x}{2\sqrt x}+\frac1{\sqrt x}.$$
Now by the mean value theorem
$$f(n+1)-f(n) = f'(\theta_n)$$
for some $\theta_n$ with $n<\theta_n<n+1$.

If we notice that $\lim\limits_{x\to\infty} f'(x) = 0$ we get that
$$\lim\limits_{n\to\infty} (\sqrt{n+1}\ln(n+1)-\sqrt{n}\ln n) = \lim\limits_{n\to\infty} f'(\theta_n) = 0,$$
and hence $\frac{(n+1)^{\sqrt{n+1}}}{n^{\sqrt n}}=e^{f(n+1)-f(n)}\to e^0=1$.


real analysis - Is there a bijection between $[0, \infty)$ and $(0,1)$

I tried $\tan(x)$ and $\log(x)$, but neither seems to work, so I wonder: is there a bijection or not?

calculus - A sine integral $\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$



The following question comes from Some integral with sine post
$$\int_0^{\infty} \left(\frac{\sin x }{x }\right)^n\,\mathrm{d}x$$
but now I'd be curious to know how to deal with it by methods of complex analysis.
Some suggestions, hints? Thanks!!!



Sis.


Answer




Here's another approach.



We have
$$\begin{eqnarray*}
\int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \int_{-\infty}^\infty dx\,
\left(\frac{\sin x}{x-i\epsilon}\right)^n \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \int_{-\infty}^\infty dx\,
\frac{1}{(x-i\epsilon)^n}
\left(\frac{e^{i x}-e^{-i x}}{2i}\right)^n \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \frac{1}{(2i)^n} \int_{-\infty}^\infty dx\,
\frac{1}{(x-i\epsilon)^n}
\sum_{k=0}^n (-1)^k {n \choose k} e^{i x(n-2k)} \\
&=& \lim_{\epsilon\to 0^+}
\frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^n (-1)^k {n \choose k}
\int_{-\infty}^\infty dx\, \frac{e^{i x(n-2k)}}{(x-i\epsilon)^n}.
\end{eqnarray*}$$
If $n-2k \ge 0$ we close the contour in the upper half-plane and pick up the residue at $x=i\epsilon$.
Otherwise we close the contour in the lower half-plane and pick up no residues.
The upper limit of the sum is thus $\lfloor n/2\rfloor$.
Therefore, using the Cauchy differentiation formula, we find
$$\begin{eqnarray*}
\int_0^\infty dx\, \left(\frac{\sin x}{x}\right)^n
&=& \frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k}
\frac{2\pi i}{(n-1)!}
\left.\frac{d^{n-1}}{d x^{n-1}} e^{i x(n-2k)}\right|_{x=0} \\
&=& \frac{1}{2} \frac{1}{(2i)^n}
\sum_{k=0}^{\lfloor n/2\rfloor}
(-1)^k {n \choose k}
\frac{2\pi i}{(n-1)!} (i(n-2k))^{n-1} \\
&=& \frac{\pi}{2^n (n-1)!}
\sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k {n \choose k} (n-2k)^{n-1}.
\end{eqnarray*}$$
The sum can be written in terms of the hypergeometric function but the result is not particularly enlightening.


algebra precalculus - Prove by induction $\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$ for $n\ge1$

Prove the following statement $S(n)$ for $n\ge1$:


$$\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$$


To prove the basis, I substitute $1$ for $n$ in $S(n)$:



$$\sum_{i=1}^1 i^3=1=\frac{1^2(2)^2}{4}$$


Great. For the inductive step, I assume $S(n)$ to be true and prove $S(n+1)$:


$$\sum_{i=1}^{n+1}i^3=\frac{(n+1)^2(n+2)^2}{4}$$


Considering the sum on the left side:


$$\sum_{i=1}^{n+1}i^3=\sum_{i=1}^ni^3+(n+1)^3$$


I make use of $S(n)$ by substituting its right side for $\sum_{i=1}^ni^3$:


$$\sum_{i=1}^{n+1}i^3=\frac{n^2(n+1)^2}{4}+(n+1)^3$$


This is where I get a little lost. I think I expand the equation to be


$$=\frac{(n^4+2n^3+n^2)}{4}+(n+1)^3$$


but I'm not totally confident about that. Can anyone provide some guidance?
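One way to finish from here (standard algebra, not part of the original post): factor $(n+1)^2$ out of both terms instead of expanding fully,
$$\frac{n^2(n+1)^2}{4}+(n+1)^3=\frac{(n+1)^2\left(n^2+4(n+1)\right)}{4}=\frac{(n+1)^2(n+2)^2}{4},$$
which is exactly the right side of $S(n+1)$.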

Thursday, March 30, 2017

calculus - $\sqrt{c+\sqrt{c+\sqrt{c+\cdots}}}$, or the limit of the sequence $x_{n+1} = \sqrt{c+x_n}$


(Fitzpatrick Advanced Calculus 2e, Sec. 2.4 #12)


For $c \gt 0$, consider the quadratic equation $x^2 - x - c = 0, x > 0$.


Define the sequence $\{x_n\}$ recursively by fixing $|x_1| \lt c$ and then, if $n$ is an index for which $x_n$ has been defined, defining


$$x_{n+1} = \sqrt{c+x_n}$$


Prove that the sequence $\{x_n\}$ converges monotonically to the solution of the above equation.


Note: The answers below might assume $x_1 \gt 0$, but they still work, as we have $x_3 \gt 0$.





Answer



Assuming that you know that a monotone, bounded sequence converges, you want to do two things. First, show that $\langle x_n:n\in\mathbb{Z}^+\rangle$ is monotone and bounded, and then show that its limit is the positive root of $x^2-x-c=0$.


If $c=x_1=1$, then $x_2=\sqrt2>x_1$, while if $c=1$ and $x_1=2$, then $x_2=\sqrt3<x_1$: the sequence may increase or decrease, depending on the starting point.

The positive root of the quadratic is $\frac12(1+\sqrt{1+4c})$, which I’ll denote by $r$. If $x_n\to r$, as claimed, and does so monotonically, it must be the case that the sequence increases monotonically if $x_1<r$ and decreases monotonically if $x_1>r$. In the examples in the last paragraph, $r=\frac12(1+\sqrt5)\approx 1.618$, so they behave as predicted.


This suggests that your first step should be to show that if $x_n<r$, then $x_n<x_{n+1}<r$, and that if $x_n>r$, then $x_n>x_{n+1}>r$; that would be enough to show that $\langle x_n:n\in\mathbb{Z}^+\rangle$ is both monotone and bounded and hence that it has a limit.


Suppose that $0\le x_n<r$. Then $x_n^2-x_n-c<0$, so $x_{n+1}^2=c+x_n>x_n^2$, and therefore $x_{n+1}>x_n$. Is it possible that $x_{n+1}\ge r$? That would require that $x_{n+1}^2-x_{n+1}-c\ge 0$ (why?) and hence that $$x_{n+1}^2\ge x_{n+1}+c>x_n+c=x_{n+1}^2\;,$$ which is clearly impossible. Thus, if $0\le x_n<r$, then $x_n<x_{n+1}<r$; I leave the case $x_n>r$ to you.


Once this is done, you still have to show that the limit of the sequence really is $r$. Let $f(x)=\sqrt{c+x}$; clearly $f$ is continuous, so if the sequence converges to $L$, we have $$L=\lim_{n\to\infty}x_n=\lim_{n\to\infty}x_{n+1}=\lim_{n\to\infty}f(x_n)=f(L)\;,$$ and from there it’s trivial to check that $L=r$.


Added: Note that although the problem gave us $x_1>0$, this isn’t actually necessary: all that’s needed is that $x_1\ge -c$, so that $x_2$ is defined, since $x_2=\sqrt{c+x_1}\ge 0$ automatically.


Wednesday, March 29, 2017

Determining a limit without L'Hospital's Rule

I'm trying to solve the limit $$\lim_{h\to0}\frac{\sin(\pi+h)}{h(\pi+h)}$$ without using L'Hospital's rule. It's part of a problem for which I am trying to prove that $$\lim_{h\to0}\frac{\frac{\sin(\pi+h)}{|\pi+h|}-\frac{h}{\pi}}{|h|}$$ exists. After a bit of simplifying and assuming $h>0$ I have arrived at the first expression.
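No answer is recorded for this post; for reference, a standard route (a sketch) uses $\sin(\pi+h)=-\sin h$ together with $\lim_{h\to0}\frac{\sin h}{h}=1$:
$$\lim_{h\to0}\frac{\sin(\pi+h)}{h(\pi+h)}=\lim_{h\to0}\left(-\frac{\sin h}{h}\cdot\frac{1}{\pi+h}\right)=-\frac{1}{\pi}.$$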

Convergence and sum of an infinite series: $\sum_{i=1}^{\infty}\frac{6}{24 i-4 i^2-35}$

Determine whether the following series is convergent or divergent. If convergent, find the sum. $$\sum_{i=1}^{\infty}\frac{6}{24 i-4 i^2-35}$$


Since the limit of the terms is zero, the divergence test is inconclusive (it can only ever establish divergence).


How do I prove that the series is convergent and, furthermore, find the sum?


I rewrote the series (using partial fraction decomposition) as:



$$\sum_{i=1}^{\infty}\frac{6}{24 i-4 i^2-35}=\sum_{i=1}^{\infty}\frac{-6}{(2i-5)(2i-7)}=\sum_{i=1}^{\infty}\left(\frac{3}{2i-5}-\frac{3}{2i-7}\right)$$


But I don't know what to do from here.
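Since no answer is recorded here, this note sketches the telescoping step the decomposition sets up. Because $\frac{3}{2i-7}=\frac{3}{2(i-1)-5}$ is the leading piece of the previous term, the partial sums collapse:
$$\sum_{i=1}^{N}\left(\frac{3}{2i-5}-\frac{3}{2i-7}\right)=\frac{3}{2N-5}-\frac{3}{-5}=\frac{3}{2N-5}+\frac{3}{5}\;\longrightarrow\;\frac{3}{5}\quad(N\to\infty),$$
so the series converges, with sum $\frac{3}{5}$.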

Tuesday, March 28, 2017

integration - Integrating definite integral $\int_a^\infty \frac{1}{\log (t)} \, dt$ and then $\int_a^\infty \frac{e^{-st}}{\log (t)} \, dt$



The differentiation of $\operatorname{Li}(t)$ gives $$\frac{1}{\log t}$$



In Mathematica, using D[LogIntegral[t], t] confirms this, as one would expect.




But I'm having difficulty integrating:



$$\int_a^\infty \frac 1 {\log (t)} \, dt$$



I'm not sure how to do this by hand, and it certainly won't compute in Mathematica, even though Mathematica differentiates $\operatorname{Li}(t)$. I can see that with the lower limit $a$ set to zero there would be a problem with it going to infinity as it approaches the $y$-axis; something like $a=2$ would not present that problem.



A similar question was posed here:
Convergence or Divergence using Limits
but only established that it is divergent and did not explain that it came from $\operatorname{Li}(t)$ and what would be needed to get back.




Further I cannot integrate $$\int_a^\infty \frac{e^{-st}}{\log (t)} \, dt$$
again with $a=0$ this would be a problem I suppose, but $a=2$ should be okay.


Answer



$$\int_a^\infty \frac{dt}{\log t} \tag{$*$}$$ does not converge: $$\log t < t \implies \frac{1}{\log t}>\frac{1}{t}\quad(t>1),$$ and as $$\int_a^\infty \, \frac{dt}{t}$$ diverges, so does $(*)$.



The second integral converges for any $s>0,\;a>1$, but I can't find a closed form, and I think one doesn't exist.



Hope this helps


abstract algebra - Suppose that $\beta$ is a zero of $f(x)=x^4+x+1$ in some field extension $E$ of $\mathbb{Z}_2$. Write $f(x)$ as a product of linear factors in $E[x]$


Suppose that $\beta$ is a zero of $f(x)=x^4+x+1$ in some field extension $E$ of $\mathbb Z_2$. Write $f(x)$ as a product of linear factors in $E[x]$.


Attempt: Since $\beta$ is a root, working over $\mathbb Z_2$: $\beta^4+\beta+1=0$


Going by the long division method, I was able to factorize $f(x)=x^4+x+1 = (x-\beta)(x^3+\beta x^2+ \beta^2x + \beta^3+1)$.


if $g(x)=x^3+\beta x^2+ \beta^2x + \beta^3+1$, then $g(1)=\beta^3+\beta^2+\beta=g(-1)$ (note $-1=1$ in characteristic $2$);


Is there any other method than trial and error for this problem? Please note that this problem is listed in the field extension chapter, even before the chapters on algebraic extensions and finite fields.


Thank you for your help.



Answer



For a problem with small coefficients like that, trial and error works. We have $$f(x+y) = (x+y)^4 + x + y + 1 = f(x) + f(y) + 1$$ This implies $$f(x+1) = f(x) + f(1) + 1 = 0 + 1 + 1 = f(x)$$ So if $x$ is a root of $f$, $x+1$ is another root of $f$. In particular $\beta+1$ is a root of $f$.


Besides $f(\beta^2) = (f(\beta))^2 = 0$, because the Frobenius is a field automorphism (as Jyrki Lahtonen told you in the comments). It follows that the roots are $\beta$, $\beta+1$, $\beta^2$, $\beta^2+1$.


radicals - How do I get the square root of a complex number?


If I'm given a complex number (say $9 + 4i$), how do I calculate its square root?


Answer



The square root is not a well-defined function on complex numbers. Because of the fundamental theorem of algebra, you will always have two different square roots for a given nonzero number. If you want to find out the possible values, the easiest way is probably to go with De Moivre's formula, that is, converting your number into the form $r(\cos(\theta) + i \sin(\theta))$, and then you will get $(r(\cos(\theta)+ i \sin(\theta)))^{1/2} = ±\sqrt{r}(\cos(\theta/2) + i \sin(\theta/2))$.
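A practical footnote (not part of the original answer): most languages expose the principal square root directly, e.g. Python's standard cmath module:

import cmath

# Principal square root of 9 + 4i; the other square root is its negative.
w = cmath.sqrt(9 + 4j)
print(w, -w)   # about (3.0699 + 0.6515j) and its negative
print(w * w)   # back to (9 + 4j), up to rounding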



Monday, March 27, 2017

calculus - Infinite Geometric Series Issue



I have come across a series and am trying to find its sum, using the fact that if it converges and its common ratio $r$ satisfies $-1 < r < 1$, then I can use the formula $\frac{a}{1-r}$, i.e., the first term of the series over $1$ minus the common ratio.




here is the series
$\sum_{n=1}^{\infty}\frac{2n-1}{2^n}$



I manipulated it this way to prove its convergence: $\sum_{n=1}^{\infty}\frac{2n-1}{2^n}=\sum_{n=1}^{\infty}(2n-1)\frac{1}{2^n}=\sum_{n=1}^{\infty}(2n-1)\left(\frac{1}{2}\right)^n$



$\frac{a}{1-r}=\frac{\frac{1}{2}}{1-\frac{1}{2}}=\frac{\frac{1}{2}}{\frac{1}{2}}=1$



Using it I get the result $1$, which actually should be $3$.


Answer




$$\sum_{n=1}^{\infty}\frac{2n-1}{2^n}=\sum_{n=1}^{\infty}\frac{2n}{2^n}-\sum_{n=1}^{\infty}\frac{1}{2^n}=\sum_{n=1}^{\infty}\frac{n}{2^{n-1}}-1.$$



We use the Maclaurin series of the function $\frac{1}{1-x}$:
$$\frac{1}{1-x}=1+x+x^2+x^3+\dots = \sum_{n=0}^{\infty} x^{n}$$
Differentiating both sides of this equation we get that
$$\frac{1}{(1-x)^2}=1+2x+3x^2+4x^3+\dots = \sum_{n=1}^{\infty}n x^{n-1}$$
if $x=\frac 12$ then $\sum_{n=1}^{\infty}n (\frac 12)^{n-1}=\frac{1}{(1-\frac 12)^2}=4$
$$\sum_{n=1}^{\infty}\frac{2n-1}{2^n}=\sum_{n=1}^{\infty}\frac{n}{2^{n-1}}-1=3$$
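A quick numerical confirmation (not part of the original answer):

# Partial sums approach 3 quickly, since the terms decay like n / 2^n.
print(sum((2 * n - 1) / 2 ** n for n in range(1, 60)))  # 3.0 to machine precision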


complex analysis - To evaluate square roots of $1+2i$.



Here is as far as I got.




First we write $1+2i$ in the polar form which is $\sqrt{5}e^{i\alpha}$ ($\alpha$ is the argument of $1+2i$ which turns out to be $\arctan2$). Therefore the square roots are $\pm \sqrt{\sqrt{5}} (\cos{\alpha/2}+i\sin{\alpha/2})$.



The answer given at the back is $\pm \left(\sqrt{\frac{\sqrt{5}+1}{2}}+i\sqrt{\frac{\sqrt{5}-1}{2}}\right)$. How do I get it into this form?


Answer



setting $$\sqrt{1+2i}=a+bi$$ we get the system $$1=a^2-b^2$$ and $$2=2ab$$
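The answer stops at the system; completing it (standard algebra, not in the original post): from $2=2ab$ we get $b=\frac1a$, and substituting into $1=a^2-b^2$ gives $a^4-a^2-1=0$, so $a^2=\frac{1+\sqrt5}{2}$ (the positive root) and $b^2=a^2-1=\frac{\sqrt5-1}{2}$. Since $ab=1>0$, $a$ and $b$ have the same sign, so the square roots are $$\pm\left(\sqrt{\frac{\sqrt5+1}{2}}+i\sqrt{\frac{\sqrt5-1}{2}}\right),$$ matching the answer in the back of the book.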


calculus - To prove a sequence is Cauchy




I have a sequence:
$ a_{n}=\sqrt{3+ \sqrt{3 + \cdots + \sqrt{3}}} $, where the $3$ appears $n$ times.



and I have to prove that it is a Cauchy sequence.
So I did this:
A theorem says that every convergent sequence is also Cauchy, so I proved that the sequence is bounded between $\sqrt{3}$ and $3$ (I am not sure about this one; please check whether I am right), and I also proved that it is monotonic (by induction I proved $a_{n} \leq a_{n+1}$).
So, being bounded and monotonic, it is convergent and therefore Cauchy.
I am just wondering whether this already proves it or not, and also whether the upper bound (the supremum, if you wish) is chosen correctly.

I appreciate all the help I get.


Answer



Since ${a}_{n+1}=\sqrt{a_n+3}$, we have $a_{n+1}^2=a_n+3$. Letting $n\to\infty$ and writing $x$ for the limit (which exists by your monotone-bounded argument), $x^2=x+3$, i.e. $x^2-x-3=0$, and the sequence converges to the positive root $x=\frac{1+\sqrt{13}}{2}$.


sequences and series - how can we show $\frac{\pi^2}{8} = 1 + \frac1{3^2} +\frac1{5^2} + \frac1{7^2} + \cdots$?

Let $f(x) = \frac4\pi \cdot (\sin x + \frac13 \sin (3x) + \frac15 \sin (5x) + \dots)$. For $x=\frac\pi2$, we have
$$f(x) = \frac{4}{\pi} ( 1 - \frac13 +\frac15 - \frac17 + \dots) = 1$$
then obviously :
$$ 1 - \frac13 +\frac15 - \frac17 + \dots=\frac{\pi}{4}$$
Now how can we prove that:

$$\frac{\pi^2}{8} = 1 + \frac1{3^2} +\frac1{5^2} + \frac1{7^2} + \dots$$

algebra precalculus - Why isn't $-2$ solution for $x$?



I came across a logarithm problem recently. I don't know why the solution to this problem cannot be $-2$. Now, don't downvote yet, because you don't know why I'm asking this. I know that logarithms' arguments must be greater than $0$; they can't take a negative number. Just read the whole post and wait until I get to the point to see why I'm asking this "ridiculous question".



This is the problem:
$$\log_{10}x=1-\log_{10}(x-3)$$
You can solve it like this:
$$\log_{10}x=\log_{10}10-\log_{10}(x-3)$$
$$\log_{10}x=\log_{10}\frac{10}{x-3}$$

$$x=\frac{10}{x-3}$$
$$x(x-3)=10$$
$$x^2-3x-10=0$$
$$(x+2)(x-5)=0$$
$$x\stackrel{?}{=}-2,\,\,x\stackrel{?}{=}5$$
If you plug in $5$, it will work just fine. You will get when simplified $\log_{10}5=\log_{10}5$. But if you plug in $-2$, you get something like this:
$$\log_{10}-2=\log_{10}\frac{10}{-2-3}$$
$$\log_{10}-2=\log_{10}-2$$
I know that logarithms of negative numbers (in any base) don't exist, but I thought of something. There's a logarithm property which states:
$$\log_by=\log_bx \Rightarrow y=x$$

Using this property, we can say:
$$\log_{10}-2=\log_{10}-2 \Rightarrow -2=-2$$
And, now (I hope) everyone will agree that $-2$ is indeed equal to $-2$ (itself).



So why, besides $5$, isn't $-2$ a solution to this problem? Even if the property that I stated doesn't apply to this case, could I theoretically invent a new imaginary unit $v_b=\log_b(-1)$, like someone back in the day did with $i=\sqrt{-1}$, and state that $\log_{10}(-2)=v_{10}+\log_{10}2$?


Answer



Okay, we've talked a lot about complex logarithms; let's try to solve the problem over the complex numbers. (I think that is what the OP is really interested in; it has been reiterated that the only solution over the reals is $5$.)



We have:




$\log_{10} (-2)$



The change of base formula is consistent for all complex numbers, of which $10$ is one ($0i+10$). But we must be careful about branch selection. In this exercise we will denote by $L(x)$ the principal branch of the natural complex logarithm.



Therefore:



$\log_{10} (-2)$ =$ \frac{L(-2)}{L(10)}$



By the definition of the principal branch:




$\log_{10} (-2)$ =$ \frac{\ln(2)+iπ}{\ln(10)}$



As @uqtredd1 has noted, this solution is not one that should be submitted in the context of the presented problem, but is completely extraneous. The complex logarithm is a multivalued function, which is why branch selection was so important. There are many more possible "answers" to what $\log_{10} (-2)$ is in the complex sense.
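A quick check of this value (a Python sketch, not from the original answer), using the principal branch that the standard cmath module implements:

import cmath, math

print(cmath.log(-2, 10))                             # about (0.3010 + 1.3644j)
print(complex(math.log(2), math.pi) / math.log(10))  # the same number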


Sunday, March 26, 2017

calculus - Can the derivative of a function be such that it is not continuous?


My guess is that all derivatives ought to be continuous, but perhaps there's some obscure example of a function for which this is not the case. Is there any theorem that answers this, perhaps?


Answer



The standard counterexample is $f(x)=x^2\sin(\frac{1}{x})$ for $x\neq 0$, with $f(0)=0$.


This function is everywhere differentiable, with $f^{\prime}(x)=2x\sin(\frac{1}{x})-\cos(\frac{1}{x})$ if $x\neq 0$ and $f^{\prime}(0)=0$. However, $f^{\prime}$ is not continuous at zero because $\lim_{x\to0}\cos(\frac{1}{x})$ does not exist.


While $f^{\prime}$ need not be continuous, it does satisfy the intermediate value property. This is known as Darboux's theorem.


integration - Integrating this improper integral to test for convergence?


I'm trying to integrate this:



$$\int^\infty_0 \frac{8}{\sqrt{e^{x}-x}} \,dx$$


And use the Direct Comparison Test to find out whether it diverges or converges.


I looked at a similar problem:


another improper integral, and I can see how the integrals on the left-hand side are less than the integrals on the right-hand side, since the rightmost right-side integral is the square of the rightmost left-side integral, but:


Why is the 5 dropped? Is it because a small added constant ultimately wouldn't affect the behavior of x on its way to infinity?


Why is the integral from 1 to infinity squared, out of all the possible operations we could perform on it?


And is this the correct next step in my own integration?


$$\int^\infty_0 \frac{8}{\sqrt{e^{x}-x}} \,dx = \int^1_0 \frac{8}{\sqrt{e^{x}-x}} \,dx + \int^\infty_1 \frac{8}{\sqrt{e^{x}-x}} \,dx < \int^1_0 \frac{8}{\sqrt{e^{x}-x}} \,dx + \int^\infty_1 \frac{1}{\sqrt{e^{x}}} \,dx $$


Thank you in advance if you're able to help clarify this.


Answer




Note that $e^x-x \geq x^4$ for all sufficiently large $x$. So there exists some $N > 0$ such that $e^x-x \geq x^4$ for all $x \geq N$. Since $$ \sqrt{e^x-x} \geq \sqrt{x^4} = x^2 \quad \Rightarrow \quad \frac{1}{\sqrt{e^x-x}} \leq \frac{1}{x^2} $$ for all $x \geq N$, we have \begin{align*} \int_0^\infty \frac{dx}{\sqrt{e^x-x}} & = \int_0^N \frac{dx}{\sqrt{e^x-x}} + \int_N^\infty \frac{dx}{\sqrt{e^x-x}} \\ & \leq \int_0^N \frac{dx}{\sqrt{e^x-x}} + \int_N^\infty \frac{dx}{x^2}. \end{align*} The two integrals are finite so the integral you consider is convergent.


linear algebra - Looking for solution to "lights out" puzzle variant with multiple states




Recently in World of Warcraft, there is a puzzle that is very similar to the "lights out" puzzle where a player needs to flip switches to turn all the lights into a specific color (in this case yellow, green, red, white). I have seen other solution to the lights out problem using linear algebra however all these uses only 2 states (on or off).



I haven't ever run into a system of linear equations with modular operations before, and would like some help solving something like:



  L_1 = ((s_1 + s_2 + ...s_n + c_1 ) mod 4)

L_2 = ((s_1 + s_2 + ...s_n + c_2 ) mod 4)

...


L_n = ((s_1 + s_2 + ...s_n + c_n ) mod 4)


where each $L_i$ is some linear combination of the $s_j$ plus a constant, mod $4$


Answer



You can solve the mod 4 version like two instances of the mod 2 version.



First treat states $1$ and $3$ as if they are switched-on lights, and states $0$ and $2$ as if they are switched-off lights. Solve this as the normal lights out. You are essentially just working mod $2$, making everything even. At the end of this stage, all lights are either $0$ or $2$.



Then solve the rest as another two-state lights out puzzle, where $2$ is the switched-on state and $0$ is a switched-off state. The only difference is that the moves you do consist of double button presses. A double button press skips over the $1$ and $3$ states, and toggles lights between the $0$ and $2$ state.
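For the mechanics of each stage (a Python sketch with hypothetical names, not from the original answer): each stage is a linear system $A\,s=b$ over GF(2), where column $j$ of $A$ records which lights button $j$ toggles, and Gaussian elimination mod 2 solves it.

def solve_gf2(A, b):
    """Solve A s = b over GF(2) by Gaussian elimination.
    A: list of rows of 0/1 entries; b: list of 0/1.
    Returns one 0/1 solution, or None if the system is inconsistent."""
    rows, cols = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(rows)]    # augmented matrix
    pivots, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue                                # free column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] for row in M[r:]):               # a 0 = 1 row: no solution
        return None
    s = [0] * cols
    for i, c in enumerate(pivots):
        s[c] = M[i][-1]                             # free variables stay 0
    return s

For the two-stage reduction above, call this once with $b_i=(\text{state of light }i)\bmod 2$, and then on the remaining $0/2$ pattern with $b_i=(\text{state of light }i)/2\bmod 2$, doubling the button presses found in the second stage.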



real analysis - Prove the sequence $x_{n+1}=\frac{1}{4-x_n}, x_1=3$ converges.


I try to use induction to show that the sequence is monotonically decreasing. My induction hypothesis is $$P(n):x_{n+1}<x_n$$

$$x_{n+2}-x_{n+1}=\frac{1}{4-x_{n+1}}-\frac{1}{4-x_n}=\frac{x_{n+1}-x_n}{(4-x_{n+1})(4-x_n)}$$ I get stuck here I know that numerator is less than $0$ but how do I show that denominator is positive so that the entire thing is less than $0$.


I am trying to show that the sequence is monotonically decreasing and also bounded below, to show that it converges.


Hints first


Answer



Hint: use as your inductive hypothesis the following statement: $x_{n+1} < x_n < 4$.



How to come up with this hypothesis? I first asked myself whether you were asking for something that's even true: is the denominator positive? The answer is obviously yes by working out the first few values. Why is the denominator positive? Because it's the product of positive things. Can we show that we're always taking the product of positive things? In order to do that, we'd need…



calculus - Function that is uniformly continuous but not bounded?



I've been given the following question but I'm unsure if there are actually any answers:




Give examples of functions $f,g: \mathbb{R}\to\mathbb{R}$ which are uniformly continuous such that $f$ is not bounded but $g$ is bounded.





I know that if $f:(a,b)\to\mathbb{R}$ is uniformly continuous then $f$ is bounded so surely there is no such example for $f$ ?


Answer



What about $f(x)=x$ and $g(x)=A$?


Saturday, March 25, 2017

real analysis - Examples of differentiable functions with bounded but discontinuous derivatives

I am looking for examples of functions $f:\mathbb{R} \rightarrow \mathbb{R}$ that:


  • are differentiable, and


  • have discontinuous but bounded derivatives.

I believe that a function satisfying these conditions must also be (globally) Lipschitz continuous.


The only example I know of is $f(x) = \left\{ \begin{array}{c l} x^2\cdot \sin\left(\frac{1}{x}\right) & ,\quad x\neq0\\ 0 & ,\quad x=0 \end{array} \right.$, so examples besides this one would be most helpful.


My question is inspired by having recently proven that if a differentiable function $f$ has a bounded derivative, then $f$ is Lipschitz. A differentiable function with a continuous derivative is also Lipschitz, but requiring $f'$ to be continuous is not necessary for Lipschitz continuity. This exercise made me curious about examples of differentiable functions that meet the bounded derivative requirement but not the (stricter) continuous derivative requirement.


Thanks in advance for any suggestions!

elementary number theory - How to find last two digits of $2^{2016}$



What should the 'efficient' way of finding the last two digits of $2^{2016}$ be? The way I found them was by multiplying powers of $2$, because $2016=1024+512+256+128+64+32$. I heard that one way would be via the Chinese Remainder Theorem, but I don't really know how I should start.


Answer



Essentially we need $2^{2016}\pmod{100}$



As $(2^{2016},100)=4$



let us find $2^{2016-2}\pmod{100/4}$




Now as $2^{10}\equiv-1\pmod{25}$



$2^{2014}=2^{201\cdot10+4}=(2^{10})^{201}\cdot2^4\equiv(-1)^{201}\cdot2^4\equiv9\pmod{25}$



$$\implies2^2\cdot2^{2014}\equiv2^2\cdot9\pmod{2^2\cdot25}$$
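A one-line check with Python's built-in modular exponentiation (not part of the original answer):

print(pow(2, 2016, 100))  # 36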


How to prove that if a continuous function satisfies $f(a b)=f(a) + f(b)$, this function must be a log function?

How to prove that if a continuous function satisfies $f(ab)=f(a)+f(b)$ and both $a$ and $b$ are positive real numbers, this function must be a log function? i.e., proof of uniqueness. Thanks
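No answer is recorded for this post; a sketch of the usual reduction: set $g(x)=f(e^x)$ for $x\in\mathbb{R}$. Then $$g(x+y)=f(e^xe^y)=f(e^x)+f(e^y)=g(x)+g(y),$$ so $g$ is a continuous solution of Cauchy's functional equation and hence linear, $g(x)=cx$. Therefore $f(t)=g(\ln t)=c\ln t$ for all $t>0$, which is $\log_b t$ with $b=e^{1/c}$ when $c\neq0$ (and $f\equiv0$ when $c=0$).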

Friday, March 24, 2017

Is there any good reason not to define $0^0=1$ , such as contradictions in algebra or arithmetic?



Math people:



The title is the question: Is there any good reason not to define $0^0=1$ , such as contradictions in algebra or arithmetic?



I searched for similar questions before I posted this question, and couldn't find any. After I posted it, I got some comments citing similar questions. There is a similar question at What values of $0^0$ would be consistent with the Laws of Exponents? . I checked some of the other questions in the links in the comments and the links posted with those links. There is a closer match at How to define the $0^0$? That question was closed as a duplicate, but the older, duplicated question was not identified. I could not find a convincing answer anywhere to my essential question: "does defining $0^0=1$ lead to contradictions in algebra and arithmetic?" I'll leave it up to others to decide whether my question is a duplicate. If it is, maybe you can close this question and give a better (in my opinion) answer to one of the older questions.



Let me get one thing out of the way up front: yes, I know "$0^0$" is an indeterminate form. That is, if $f$ and $g$ are real-valued functions with $f(t) \to 0^+$ as $t \to 0$ and $g(t)\to 0$ as $t \to 0$, then you don't know what $\lim_{t\to 0}f(t)^{g(t)}$ is, or even whether exists, without more information. I don't consider this a good reason to declare that $0^0$ itself must be considered undefined. I know many people will disagree with me here. I expect at least one answer and some comments arguing why this is a good reason for $0^0$ to be considered undefined. Everyone is entitled to their opinion, and you are free to leave such an answer. I will not attempt to change your mind, beyond what I write in this question.




If you define $f(x,y) = x^y$, then $f$ cannot be continuous on $[0,\infty) \times \mathbb{R}$ no matter what value, including $1$, you assign to $f(0,0)$. But why should every function have to be continuous?



If the mathematical community ever does come to the consensus that $0^0=1$, and I were teaching calculus students about limits involving indeterminate forms, I probably would not even mention the question of whether $0^0$ itself had a value, because it would probably just confuse the students. They probably wouldn't even notice the omission.



To me, a "good reason" not to define $0^0=1$ would be if this definition resulted in a contradiction, when used in expressions involving multiplication and exponentiation of real numbers and the rules used to simplify such expressions. Here is an attempt to produce such a contradiction: assuming $0^0=1$, $(0^0)^2=1^2=1$, and $(0^0)^2=0^{(0*2)}=0^0=1$. No contradiction. In constrast, if you define $0/0 = 1$ and you want the associative property to hold (a reasonable expectation), then you can derive the contradiction $1=0/0=(2*0)/0=2*(0/0)=2*1=2$.



It just occurred to me that there is another good reason for not declaring officially that $0^0$ must always equal $1$: defining $0^0=0$ does not lead to contradictions in algebra or arithmetic, either.



I am not claiming $0^0 = 0/0$. Of course you can never divide by zero, or raise zero to a negative power.




Of course, when people use power series, they use $0^0=1$ all the time, and no one complains. I have read that "$0^0=1$" is used often in combinatorics, but I don't know much about combinatorics.



Based on what I have seen in the older questions, their answers, and the answers and comments to this question, it seems that no one has discovered any way in which defining $0^0$ to be $1$ leads to contradictions when using the usual rules of multiplication and exponentiation. It also seems that defining $0^0$ to be $0$ does not lead to such a contradiction. So I'm guessing it is impossible to produce such a contradiction. But I have never heard of anyone wanting to define $0^0$ as $0$.


Answer



At the very least, the answer $0^0 = 1$ is consistent with cardinal arithmetic on the set $\{0, 1, 2, \ldots\}$. Under this interpretation, the number $m^n$ is defined to be the number of functions from an $n$-element set to an $m$-element set. There is exactly one function from a $0$-element set to a $0$-element set, so in this interpretation, $0^0 = 1$. The laws $a^m a^n = a^{m+n}$ and $(a^m)^n = a^{mn}$ can be proven with that definition. Thus, exponentiation as repeated multiplication can be recovered. (In one of your links, Matt N. gives this same idea as an answer.)



The first criticism I heard involved the subtraction law $\frac{a^x}{a^y} = a^{x-y}$. The idea is that $\frac{0^x}{0^x} = 0^0$, yet $\frac{0^x}{0^x} = \frac{0}{0}$, so $0^0$ is undefined. The problem here isn't with $0^0$, it's with saying $\frac{0^x}{0^x}$ is equal to anything, when it is undefined. We could use the same reasoning to say that $0^2$ is undefined, since $0^2 = \frac{0^7}{0^5} = \frac{0}{0}$. At worst, the subtraction law has to be modified to say that it only applies when $a^y \neq 0$. We make such modifications for all of our other laws involving division, so why should this be any different?


elementary set theory - Why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$



I have a question about the set of functions from a set to another set. I am wondering about the degenerate cases. Suppose $X^Y$ denotes the set of functions from a set $Y$ to a set $X$, why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$?


Answer



The definition of $A^B$ is "the set of all functions with domain $B$ and codomain $A$".



A function $f$ from $B$ to $A$ is a set of ordered pairs such that:





  1. If $(x,y)\in f$, then $x\in B$ and $y\in A$.

  2. For every $b\in B$ there exists $a\in A$ such that $(b,a)\in f$.

  3. If $(b,a)$ and $(b,a')$ are in $f$, then $a=a'$.



Now, what happens if $B=\emptyset$? Well, then there can be no pair in $f$, because you cannot have $x\in B$. But notice that in that case, 2 is satisfied "by vacuity" (if it were false, you would be able to exhibit a $b\in\emptyset$ for which there is no $a\in A$ with $(b,a)\in f$; but there are no $b\in\emptyset$, so you cannot make such an exhibition; the statement is true because the premise, "$b\in\emptyset$", can never hold). Likewise 3 holds by vacuity. So it turns out that if we take $f=\emptyset$, then $f$ satisfies 1, 2, and 3, and therefore it is by all rights a "function from $\emptyset$ to $A$". But this is the only possible function from $\emptyset$ to $A$, because only the empty set works.



By contrast, if $A=\emptyset$, but $B\neq\emptyset$, then no set $f$ can satisfy both 1 and 2, so no set can be a function from $B$ to $A$.



That means that $Y^{\emptyset}$ always contains exactly one element, namely the "empty function", $\emptyset$. But if $Y\neq\emptyset$, then $\emptyset^Y$ contains no elements; that is, it is empty.




Therefore, since $Y^{\emptyset}$ has exactly one element, $|Y^{\emptyset}|=1$ regardless of what $Y$ is. But if $Y\neq\emptyset$, then $\emptyset^{Y}$ is empty, so $|\emptyset^{Y}| = 0$.


measure theory - Existence of non-decreasing sequence of continuous functions approximating $f$ in $L_p(0,\infty)$

I know that the continuous functions $f:(0,\infty) \rightarrow \mathbb R$ are dense in $L_p(0,\infty)$ with respect to the norm $\| \cdot \|_p$. Therefore, if $f\in L_p(0,\infty)$ then there exists a sequence of continuous functions $\{f_n\}$ in $L_p$ such that $f_n \rightarrow f$. I'm wondering if there exists a sequence that does this but also is non-decreasing, meaning $f_{n+1}\geq f_n$ pointwise for each $n$. I believe this to be true, but I haven't been able to prove it. So, is this true? If it is, I would appreciate any tips on how to prove it.



Thanks!

proof writing - Prove by induction that $10^n -1$ is divisible by 11 for every even natural number



Prove by induction that $10^n -1$ is divisible by $11$ for every even natural number $n$ (here $0 \notin \mathbb{N}$).




Base Case: n = 2, since it is the first even natural number. $10^2 -1 = 99$ which is divisible by 11.



Assume $n =k $ is true for some $k \in N$. Now prove $n=k+1$ is true.



$10^k -1$



I know I have to put k+1 instead of k, but I do not know how to relate the induction hypothesis with k+1.


Answer



HINT:




If $f(n)=10^n-1$



$f(n+2)-f(n)=10^n(10^2-1)\equiv0\pmod{11}$



So, $f(n+2)$ will be divisible by $11$ if $f(n)$ is divisible by $11$



What is the base case $n=2$, i.e., $f(2)$?


elementary number theory - prove that $3$ does not divide $n^2+1$

How do I prove that $3$ does not divide $n^2+1$ for all $n\in\mathbb{Z}$? I thought about separate cases but did not get anywhere, and induction also did not work.
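For the record (no answer is recorded), the three-case argument works: every integer satisfies $n\equiv 0,1,2\pmod 3$, giving $n^2\equiv 0,1,1\pmod 3$ respectively, so $n^2+1\equiv 1$ or $2\pmod 3$, never $0$; hence $3\nmid n^2+1$.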

Cauchy functional equation over the complex field

It is known that the only measurable solutions to the Cauchy functional equation $f(x+y) =f(x)+f(y)$ are the linear ones ($x,y\in \mathbb{R}$). Does the same hold if we take $x,y \in \mathbb{C}$?
Edit: After the first answer, I rephrase my question: Are the only measurable functions $f:\mathbb{C} \to \mathbb{C}$ which satisfy the Cauchy functional equation linear or anti-linear (that is of the form $f(z)=a \bar{z}+b)$?

complex numbers - Finding modulus of $\sqrt{6} - \sqrt{6}\,i$

I found the real part $=\sqrt{6}$.



But I don't know how to find the imaginary part. I thought it was whatever part of the expression involved $i$, with the $i$ removed; therefore the imaginary part would be $-\sqrt{6}$.




Meaning the modulus is equal to
\begin{align}
\sqrt{ (\sqrt{6})^2 + (-\sqrt{6})^2} = \sqrt{12}.
\end{align}
The answer was $2\sqrt{3}$, which agrees: $\sqrt{12}=\sqrt{4\cdot 3}=2\sqrt{3}$.

number theory - why $\sum_{h=0}^{\infty}{\dfrac{h}{2^h}} = 2$




There is a summation in analysis of an algorithm which is the following:



$$\sum_{h=0}^{\infty}{\dfrac{h}{2^h}} = \dfrac{1/2}{(1-1/2)^2} = 2$$




But I can't solve this. I would appreciate it if anyone could solve it.



thanks.


Answer



Not sure what you need to solve (there doesn't seem to be an unknown in your expression), but assuming you want to derive the result:



$\sum_{n=0}^{\infty}x^n=\frac{1}{1-x}$ for $|x|<1$. Differentiate both sides to get $\sum_{n=0}^{\infty}nx^{n-1}=\frac{1}{(1-x)^2}$. Multiply both sides by $x$ to finally get:



$\sum_{n=0}^{\infty}nx^n=\frac{x}{(1-x)^2}$


geometry - Newbie: determine if line *segment* intersects circle

I've read related posts, including:



How to tell if a line segment intersects with a circle?
where the suggestions are probably relevant, but above my level, and the final solution is actually not what I need, and




Circle and Line segment intersection
Which may be what I need, but assumes more math knowledge than is in my brain.



Context: I have two circles in powerpoint, each of which have 8 points (anchors) on the perimeter. I am cycling through all 64 combinations in VBA to identify which two lines provide the longest point-to-point connectors that do not go through either circle. For convenience, I can check each line against each circle separately.



Basically, I am trying to create code that will allow me to automatically create something like this http://www.harrisgolfonline.com/userfiles/image/wlcc%201932%20bylaws%20callout.jpg except I'll also have a small circle around the enlarged part of the original image. I'm trying to get the math to figure out which circle anchors will give me those two outer lines.



So for example, if I draw the shortest possible line segment between the two closest connectors, I should not intersect with either circle. A line between the furthest away connectors would have to travel through both circles, and therefore fail either circle test.



Anyway, I'm hoping someone can help me with the equation setup, since the two pages I reference put in multiple equations and then say "then just solve it". I'm not exactly sure how to combine those formulas to solve. What I'm hoping for is something more like this equation, except this evaluates based on the whole line and not just a line segment:




Private Function LineCrossesCircle(x1, y1, x2, y2, cx, cy, cr) As Boolean



x1 = x1 - cx

y1 = y1 - cy

x2 = x2 - cx

y2 = y2 - cy


LineCrossesCircle = 0 <= cr ^ 2 * ((x2 - x1) ^ 2 + (y2 - y1) ^ 2) - (x1 * y2 - x2 * y1) ^ 2


End Function



Reading the answers, I do understand that there may be multiple conditions, e.g. it may be something similar to the above AND some other equation or two.



My connector points will always be on the circumference of the circle, so I was adjusting my calculated radius value to be [r*.95] otherwise they might all intersect at the endpoints. That means that my endpoints will never fall within the circles, but part of the line will fall in one or both circles probably 80% of the time. Once I find the lines that don't, I can check their lengths and add them as connectors.




Any help translating the previous solutions into one or more equations that just take the two endpoints (x1, y1, x2, y2) and circle info (cx, cy, cr) and return a boolean for whether there is an intersect would be greatly appreciated!!



===================================================================================



Sample code based on OAO's helpful post. I think I got something wrong, because it reports no connectors with intersects, when I know about 80% should have an intersect.



EDIT: updated the formulas for MA and MB (I had some of the elements reversed I think) but still showing no intersects



Private Function IntersectCheck(ByVal Cx As Single, _
ByVal Cy As Single, _

ByVal Radius As Single, _
ByVal Ax As Single, _
ByVal Ay As Single, _
ByVal Bx As Single, _
ByVal By As Single) As Boolean


'distance between endpoints



d = (((Bx - Ax) ^ 2) + ((By - Ay) ^ 2)) ^ 0.5




'projection parameter: fraction along AB of the point nearest C (0 at A, 1 at B)

alpha = (1 / (d ^ 2)) * ((Bx - Ax) * (Cx - Ax) + (By - Ay) * (Cy - Ay))



'point nearest circle center



mx = Ax + ((Bx - Ax) * alpha)



my = Ay + ((By - Ay) * alpha)




MC = ((Cx - mx) ^ 2 + (Cy - my) ^ 2) ^ 0.5



MA = ((Ax - mx) ^ 2 + (Ay - my) ^ 2) ^ 0.5



MB = ((Bx - mx) ^ 2 + (By - my) ^ 2) ^ 0.5



AC = ((Cx - Ax) ^ 2 + (Cy - Ay) ^ 2) ^ 0.5



BC = ((Cx - Bx) ^ 2 + (Cy - By) ^ 2) ^ 0.5




IntersectCheck = False



If MC > Radius Then



'there is no intersect



Else



If (MA < d) And (MB < d) Then


'there is an intersect

IntersectCheck = True

Else

If AC <= Radius Or BC <= Radius Then

'there is an intersect


IntersectCheck = True

End If

End If


End If




End Function
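For comparison, here is a compact reference version of the segment test in Python (a sketch, not the original VBA): project the center onto the line, clamp the projection to the segment, and compare the closest distance with the radius.

def segment_hits_circle(ax, ay, bx, by, cx, cy, r):
    """True if segment AB passes within distance r of center C."""
    dx, dy = bx - ax, by - ay
    d2 = dx * dx + dy * dy
    if d2 == 0:
        t = 0.0                      # degenerate segment: a single point
    else:
        # fraction along AB of the point nearest C, clamped to [0, 1]
        t = max(0.0, min(1.0, ((cx - ax) * dx + (cy - ay) * dy) / d2))
    mx, my = ax + t * dx, ay + t * dy    # nearest point on the segment
    return (cx - mx) ** 2 + (cy - my) ** 2 <= r * r

Note the clamp is what distinguishes the segment test from the whole-line test in the earlier snippet.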

Thursday, March 23, 2017

calculus - convergence of $a_n$ knowing that $a_n^n$ converges



One can prove the following statement:



Let
$(1) a_1, a_2, a_3, ... a_n, ...$
be a sequence of non-negative numbers.
Let also $k>1$ be an integer.



If the sequence
$(2) a_1^k, a_2^k, a_3^k, ... a_n^k, ...$
converges and its limit is $a$, then the sequence (1) also converges and its limit is $\sqrt[k]{a}$.




But what if instead of (2) we know that the sequence
$(3) a_1^1, a_2^2, a_3^3, ... a_n^n, ...$
converges and its limit is b. Can we then state something about (1), and about its convergence, and possibly about its limit (if such a limit exists)?


Answer



Take $(a_n)$ defined by :



$a_n = 0$ if n is even.



$a_n = \frac{1}{2}$ if n is odd.



This sequence satisfies (3) but it does not converge.



Geometrically, what does raising a real number to a complex number do on the complex plane?




I know the formula to raise a real number to a complex number:
$a^{b+ic}=a^b\cdot(\cos(c \ln a) + i\sin(c \ln a))$
but I don't understand how it's derived. I know what the trig functions are doing but I'm not quite understanding what the natural log is doing and how this relationship transforms on the plane.


Answer



The derivation of $a^{b+ic}$ comes from Euler's Identity: $$a^{b+ic}=a^ba^{ic}=a^b e^{\ln(a)ic}=a^b(\cos(c\ln a)+i\sin(c\ln a))$$



The geometric interpretation is that of a rotation of the vector $\langle a^b, 0\rangle$ by an angle of $c\ln a$.


sequences and series - Sum of First $n$ Squares Equals $\frac{n(n+1)(2n+1)}{6}$



I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals:



$$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$



I really have no idea why this statement is true. Can someone please explain why this is true and if possible show how to arrive at one given the other?


Answer



You can easily prove it by induction.




One way to find the coefficients, assuming we already know that it's a degree $3$ polynomial, is to calculate the sum for $n=0,1,2,3$. This gives us four values of a degree $3$ polynomial, and so we can find it.



The better way to approach it, though, is through the identity
$$ \sum_{t=0}^n \binom{t}{k} = \binom{n+1}{k+1}. $$
This identity is true since, in order to choose a $(k+1)$-subset of $\{1,\dots,n+1\}$, you first choose its largest element $t+1$, and then a $k$-subset of $\{1,\dots,t\}$.



We therefore know that
$$ \sum_{t=0}^n A + Bt + C\binom{t}{2} = A(n+1) + B\binom{n+1}{2} + C\binom{n+1}{3}. $$
Now choosing $A=0,B=1,C=2$, we have
$$ A+Bt + C\binom{t}{2} = t^2. $$

Therefore the sum is equal to
$$ \binom{n+1}{2} + 2\binom{n+1}{3}. $$
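To connect this back to the stated closed form (a routine check, not in the original answer):
$$\binom{n+1}{2}+2\binom{n+1}{3}=\frac{n(n+1)}{2}+\frac{n(n+1)(n-1)}{3}=\frac{n(n+1)\bigl(3+2(n-1)\bigr)}{6}=\frac{n(n+1)(2n+1)}{6}.$$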


discrete mathematics - Sum of cubes proof

Prove that for any natural number n the following equality holds:




$$ (1+2+ \ldots + n)^2 = 1^3 + 2^3 + \ldots + n^3 $$



I think it has something to do with induction?
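Yes, induction works (a sketch, since no answer is recorded): using $1+2+\cdots+n=\frac{n(n+1)}{2}$, the inductive step only needs
$$\left(\frac{(n+1)(n+2)}{2}\right)^2-\left(\frac{n(n+1)}{2}\right)^2=\frac{(n+1)^2\bigl((n+2)^2-n^2\bigr)}{4}=\frac{(n+1)^2(4n+4)}{4}=(n+1)^3,$$
i.e. the left side grows by exactly $(n+1)^3$ when $n$ increases by $1$.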

Wednesday, March 22, 2017

algebra precalculus - Intuition behind multiplication



I recently read this post and the highest-voted comment, and it got me thinking. How does one think about multiplication when the factors are decimals?



For example, if we have $3.9876542 \times 2.3156479$, then how would we multiply that? It doesn't make a lot of sense to add $3.9876542$ to itself $2.3156479$ times. So how would you think about multiplying that, i.e., what's the intuition behind it?




Thanks!


Answer



A rectangle with sides $3.9876542\,\mathrm m$ and $2.3156479\,\mathrm m$ can be viewed as $3987654200$ by $2315647900$ nanometers instead. Then you can actually count all those tiny square nanometers (or simplify this by repeated addition!) and obtain an area of $9234003074156180000\,\mathrm{nm}^2$. Since there are $1000000000000000000\,\mathrm{nm}^2$ in each $\mathrm m^2$, you end up with $9.23400307415618\,\mathrm m^2$ and thus we should have $3.9876542\cdot 2.3156479 = 9.23400307415618$.


real analysis - Integral on a completion of measure space



Let $(X,{\cal M}, \mu)$ be a measure space with completion $(X,\bar{{\cal M}},\bar{\mu})$. If $f \in {\cal L}^1(X,{\cal M},\mu)$, show that $f \in {\cal L^1}(X,\bar{\cal M},\bar{\mu})$ and $\int fd\mu = \int f d\bar{ \mu}$



My attempt: firstly to show that $f$ is $\bar {\cal M}$ measurable, let $E \in {\cal B}_{\mathbb{C}}$, then $f^{-1}(E) \in \cal{M}$, by the definition of completion, $\bar{\cal M} := \{F \cup G: F \in {\cal M}, G \subseteq N, N \in {\cal M}, \mu(N) = 0\}$, if we set $G = \emptyset $, then we can have $f^{-1}(E) \in \bar {\cal M}$, hence $f$ is $\bar {\cal M}$ measurable.



Furthermore, $f \in {\cal L}^1(X, \bar{\cal M},\bar{\mu})$ will directly follow from $\int f d\mu = \int f d \bar{\mu}$, but how to show this? I appreciate your help!


Answer



It is enough to prove it for nonnegative $f$.




Let $S$ be the set of finite sums $\sum_{k=1}^na_k\mu(A_k)$ where the sets $A_k$ are $\mathcal M$-measurable, the $a_k$ are nonnegative, and finally $\sum_{k=1}^na_k1_{A_k}\leq f$.



Let $\bar S$ be the set of finite sums $\sum_{k=1}^na_k\bar\mu(A_k)$ where the sets $A_k$ are $\bar{\mathcal M}$-measurable, the $a_k$ are nonnegative, and finally $\sum_{k=1}^na_k1_{A_k}\leq f$.



Then $\int fd\mu=\sup S$ and $\int fd\bar{\mu}=\sup\bar S$.



It is evident that $S\subseteq\bar S$ so $\int fd\mu\leq\int fd\bar{\mu}$.



For every $s=\sum_{k=1}^na_k\bar\mu(A_k)\in\bar S$ we can find sets $B_k\in\mathcal M$ such that $B_k\subseteq A_k$ and $\mu(B_k)=\bar{\mu}(A_k)$. Then $s=\sum_{k=1}^na_k\bar\mu(A_k)=\sum_{k=1}^na_k\mu(B_k)\in S$, since $\sum_{k=1}^na_k1_{B_k}\leq\sum_{k=1}^na_k1_{A_k}\leq f$.




So we also have $\bar S\subseteq S$ so that $\int fd\bar{\mu}\leq\int fd\mu$.


calculus - The non-existence of $\lim\limits_{x \to 0} \sin {1 \over x}$



Regarding to the definition of the limit of a function, employing a step by step approach using tautologies in logic, here it was proved that the limit of a function $f(x)$ does not exist at a point $x=a$ if and only if



$$\forall L,\exists \varepsilon > 0:\left( \forall \delta > 0,\exists x:(0 < \left| x - a \right| < \delta \wedge |f(x) - L| \ge \varepsilon \right))\tag{1}$$



or




$$\nexists L:\left( {\forall \varepsilon > 0,\exists \delta > 0:\left( {\forall x,0 < \left| {x - a} \right| < \delta \to \left| {f(x) - L} \right| < \varepsilon } \right)} \right)\tag{2}$$



Statements $(1)$ and $(2)$ are logically equivalent. I want to use just $(1)$ to prove that $\lim\limits_{x \to 0} \sin {1 \over x}$ does not exist. However, proofs using $(2)$ can also be interesting!



My thought



I just know that I must find an $\varepsilon $ which may depend on $L$ and one $x$ which may depend on both $L$ and ${\delta}$ such that for every $L$ and $\delta > 0$ we must have



$$0 < |x| < \delta \wedge \left| \sin {1 \over x} - L \right| \ge \varepsilon \tag{3} $$




I don't know how to proceed!


Answer



Solution using an epsilon-delta argument directly:
If $\sin {1 \over x}$ had a limit $L$ at $x=0$, then for $\varepsilon={1\over 2}$ we must have $\delta$ such that:
$$0<|{x - 0}| < \delta \Rightarrow |\sin {1 \over x} - L| < {1\over 2}$$
But as you can see from my last answer (two sequences), there are two numbers $x_1=\frac{1}{n\pi}$ and $x_2=\frac{2}{(4n+1)\pi}$ (there are infinitely many of them, but I need just two!), with $n$ large enough, such that
$$|{x_1}| < \delta \,\, , \,\,|{x_2}| < \delta \,\, ,\,\,\sin {1 \over x_1}=0\,\, , \,\,\sin {1 \over x_2}=1$$
So
$$|0 - L| < {1\over 2}\,\, ,\,\,|1 - L| < {1\over 2}$$
Or
$${-1\over 2} < L < {1\over 2}\,\, ,\,\,{1\over 2} < L < {3\over 2}$$
That's a contradiction, since both cannot hold. :)


Tuesday, March 21, 2017

intuition - Why does $e^{i\pi}=-1$?





I will first say that I fully understand how to prove this equation from the use of power series, what I am interested in though is why $e$ and $\pi$ should be linked like they are.



As far as I know $\pi$ comes from geometry (although it does have an equivalent analytical definition), and $e$ comes from calculus.



I cannot see any reason why they should be linked and the proof doesn't really give any insights as to why the equation works.



Is there some nice way of explaining this?


Answer



Euler's formula describes two equivalent ways to move in a circle.





  • Starting at the number $1$, see multiplication by $e^{i\pi}$ as a transformation applied to it: $1 \cdot e^{i\pi}$.

  • Regular exponential growth continuously increases $1$ by some rate; imaginary exponential growth continuously rotates a number in the complex plane.

  • Growing for $\pi$ units of time means going $\pi\,\rm radians$ around a circle

  • Therefore, $e^{i\pi}$ means starting at 1 and rotating $\pi$ (halfway around a circle) to get to $-1$.



For more details explaining each step, read this article.


elementary set theory - Difference between equality and isomorphism



I don't really understand the difference between sets being equal and isomorphic. I know that two sets $A$ and $B$ are equal if $A \subseteq B$ and $B \subseteq A$. I know two sets are isomorphic if there is a homomorphic bijection between the two sets, i.e., they have the same cardinality and the elements of each can be identified in such a way that they behave in exactly the same way. (In other words, the two sets have the same structure.)



Is equality stronger? I guess my main question is:




If two sets $A$ and $B$ are equal, does this imply they are isomorphic? How do $A \subseteq B$ and $B \subseteq A$, which are purely set-theoretic statements, imply that the structure of the two sets is the same, i.e., that their elements can be identified in such a way that they act the same way?


Answer



There seem to be two problems here.



First, if $A$ and $B$ have the same elements (every element of $A$ is an element of $B$ and vice versa), then they are the same set, the strongest possible form of equivalence.



And the identity function (which sends every $x\in A$ to itself) is certainly a bijection $A\to A$ and preserves whichever structure we care to equip $A$ with. So $A$ is always isomorphic to itself.



Second, the word "isomorphic" denotes many different concepts, depending on which kind of structure we require the isomorphism to preserve. If we say that $A$ and $B$ are isomorphic as sets, we only require that it's a bijection, and "isomorphic" is then the same as "has the same cardinality".




But we can also speak about being isomorphic as groups, or as rings, or as partially ordered sets, or as graphs, or as a lot of other things. In each of those cases we're strictly speaking using sloppy language to speak not only of $A$ and $B$ in themselves as sets (that is, which elements they have), but also additionally (and implicitly) about some structure we have chosen to consider for each of $A$ and $B$. The structure might be a binary operation on the set (when we're speaking of isomorphic-as-groups), or two binary operations (for isomorphic-as-rings), or an order relation (for isomorphic-as-posets), and so forth.



For example, if we say that $A$ and $B$ are isomorphic as groups, then what we really mean is that we have chosen operations $*:A\times A\to A$ and $\circledast:B\times B\to B$ such that $\langle A,{*}\rangle$ and $\langle B,{\circledast}\rangle$ are groups (and having made those choices is neccessary before we can even ask whether $A$ and $B$ are isomorphic groups) and that there is an $f:A\to B$ that is a bijection and satisfies $f(a_1*a_2)=f(a_1)\circledast f(a_2)$.



What we really should be saying is "the groups $\langle A,{*}\rangle$ and $\langle B,{\circledast}\rangle$ are isomorphic". But we're employing an informal shorthand where we can say $A$ when we mean $\langle A,{*}\rangle$, provided that it is clear which ${*}$ we mean.



Note that it is possible for $A$ and $B$ to be the same set, yet because we have chosen different $*$ and $\circledast$ they are not isomorphic by structure. For example, $\langle \mathbb R,{+}\rangle$ and $\langle \mathbb R,{\times}\rangle$ are not isomorphic as monoids, even though the underlying set $\mathbb R$ is the same.


Sunday, March 19, 2017

exponentiation - Why does any nonzero number to the zeroth power = 1?


I can't properly wrap my head around this odd concept, though I feel I'm almost there.


A non-zero base raised to the power of 0 always results in 1. After some thinking, I figured this is the proof: $\frac{{x}^{2}}{{x}^{2}}= x^{2-2}=x^{0}=1$


Assuming that's true, would it be correct to assume that anything raised to 0 is a "whole" (1)? Because if $\frac{{x}^{2}}{{x}^{2}}=1$, then no matter what x is, it will always result in 1.


I would like to understand this concept intuitively and deeply, rather than just memorizing that $x^{0}=1$


EDIT: Thank you all for the answers. Each and everyone of them have been insightful and I've now gained a deeper understanding. This is a new account, so it seems I can't upvote, but if I could I would upvote each and everyone of you. Thanks :)


Answer



Here's a good heuristic that helped me feel better about it back in the day:



Consider any number $x$ to any power. We will choose, say, $5^3$.


$5^3 = 125$. Divide both sides by $5$. You get $5^2=25$, which we know. Again. Then you get $5^1=5$. We usually leave exponents of $1$ off but we'll keep it here. Do you see what happens? The power goes down by $1$ each time. We can do it again. Then following our pattern, $5^0=1$. But $5$ wasn't special!


You can do this with any real number. Of course, for irrationals it can get a little fuzzy.


By the way, we can keep going. Divide by $5$ again. You get $5^{-1}=\frac{1}{5}$. I hope this helps!


Saturday, March 18, 2017

sequences and series - Compute $1 \cdot \frac {1}{2} + 2 \cdot \frac {1}{4} + 3 \cdot \frac {1}{8} + \cdots + n \cdot \frac {1}{2^n} + \cdots$


I have tried to compute the first few terms to try to find a pattern but I got


$$\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}$$


but I still don't see any obvious pattern(s). I also tried to look for a pattern in the question, but I cannot see any pattern (possibly because I'm overthinking it?) Please help me with this problem.


Answer



$$I=\frac{1}{2}+\frac{2}{4}+\frac{3}{8}+\frac{4}{16}+\frac{5}{32}+\frac{6}{64}+\cdots$$ $$2I=1+1+\frac{3}{4}+\frac{4}{8}+\frac{5}{16}+\frac{6}{32}+\cdots$$ $$2I-I=1+\left(1-\frac 12 \right)+\left(\frac 34 -\frac 24 \right)+\left(\frac 48 -\frac 38 \right)+\left(\frac {5}{16} -\frac {4}{16} \right)+\cdots$$ $$I=1+\frac 12+\frac 14+\frac 18+\cdots=2$$


number theory - Easy explanation of analytic continuation

Today, as I was flipping through my copy of Higher Algebra by Barnard and Child, I came across a theorem which said,




The series $$ 1+\frac{1}{2^p} +\frac{1}{3^p}+...$$ diverges for $p\leq 1$ and converges for $p>1$.



But later I found out that the zeta function is defined for all complex values other than 1. Now I know that Riemann analytically continued this function to fit all complex values, but how do I explain, to a layman, that $\zeta(0)=1+1+1+...=-\frac{1}{2}$?


The Wiki articles on these topics go way over my head. I'd appreciate it if someone could explain to me what analytic continuation actually is, and which functions can be analytically continued.



Edit


If the series diverges for $p\leq1$, how is WolframAlpha able to compute $\zeta(1/5)$? Shouldn't it give out infinity as the answer?

calculus - Prove the following inequality using Mean Value Theorem



First off, I've seen a couple questions similar to this one (different inequalities, same principle) but didn't really understand the answers.
Here are a couple of those questions:



Prove inequality using Mean Value Theorem 2




Prove inequality using Mean Value Theorem Mean Value theorem problem?(inequality)



$$1 + 2x < e^{2x} < (1-2x)^{-1}, \qquad \forall x \in \left]0,\tfrac{1}{2}\right[$$


Answer



When using the Mean Value Theorem to prove inequalities, remember the conclusion of the MVT:
$$
\frac{f(b)-f(a)}{b-a} = f'(t)
$$
for some $t$ between $a$ and $b$. Replacing $b$ by a variable $x$, and applying some algebra, we get
$$
f(x) = f(a) + f'(t)(x-a)
$$
The case $a=0$ is particularly useful; it says:
$$
f(x) = f(0) + f'(t)x
$$
for some $t$ with $0 < t < x$. If you can give upper and/or lower bounds for $f'(t)$, then you have an inequality for $f(x)$ in terms of $x$.



Your example suggests $f(x) = e^{2x}$. Since $f'(t) = 2e^{2t}$, and $e^{2t} > 1$ for all $t > 0$, we know $f'(t) > 2$ at the intermediate point $t \in (0,x)$. So
$$
e^{2x} = 1 + f'(t)\,x > 1 + 2x.
$$



What about the other part of the inequality? $e^{2x} < (1-2x)^{-1}$ doesn't look like it fits the pattern above. But again with some algebra,
$$
e^{2x} < \frac{1}{1-2x} \iff e^{-2x} > 1 - 2x \qquad\text{(both sides positive, since $0<x<\tfrac12$)}
$$
and now you might see how to adapt the previous case.


Friday, March 17, 2017

fake proofs - Why is $i^3$ (the complex number "$i$") equal to $-i$ instead of $i$?




$$i^3=iii=\sqrt{-1}\sqrt{-1}\sqrt{-1}=\sqrt{(-1)(-1)(-1)}=\sqrt{-1}=i
$$



Please take a look at the equation above. What am I doing wrong that leads me to $i^3 = i$ instead of $-i$?


Answer



We cannot say that $\sqrt{a}\sqrt{b}=\sqrt{ab}$ for negative $a$ and $b$. If this were true, then $1=\sqrt{1}=\sqrt{\left(-1\right)\cdot\left(-1\right)} = \sqrt{-1}\sqrt{-1}=i\cdot i=-1$. Since this is false, we have to say that $\sqrt{a}\sqrt{b}\neq\sqrt{ab}$ in general when we extend it to accept negative numbers. The correct evaluation avoids that rule altogether: $i^3 = i^2\cdot i = (-1)\cdot i = -i$.


Thursday, March 16, 2017

polynomials - How to evaluate GF(256) element

I wonder, is there any easy way to evaluate elements of GF$(256)$? Meaning that I would like to know what $\alpha^{32}$ or $\alpha^{200}$ is in polynomial form. I am assuming that the primitive polynomial is $D^8+D^4+D^3+D^2+1$. For example, in GF$(8)$, to calculate $\alpha^3$ we divide it by $\alpha^3+\alpha+1$ and get the remainder $\alpha+1$; but here in GF$(256)$ this will be really tedious, so I would like to know whether there is any way to calculate the above expressions, or similar ones like $\alpha^{100}$, in GF$(256)$.



Thanks.

general topology - Construct a set of real numbers whose limit points comprise the set of integers $\mathbb{Z}$



My thought process is the following: Let $S=\{ m + \frac{1}{n} \mid m \in \mathbb{Z}, n \in \mathbb{N} \}$. Then I need to show that the limit points of $S$ are indeed the integers and that these are the only limit points. I don't know where to go from here.


Answer




Your example is correct, because $\lim_{n\to \infty} \left(m+\frac{1}{n}\right) = m \in \mathbb{Z}$ for all $m\in \mathbb{Z}$, so every integer is a limit point of $S$. To complete the argument, you must also show that the integers are the only limit points: any real number that is not an integer has a neighborhood containing only finitely many points of $S$, so it cannot be a limit point.


Counting the number of zeros

I am stuck at the question:



How many zeros are there at the end when the numbers from 1 to 100, inclusive, are multiplied together? Devise some technique for this.



Regards

Wednesday, March 15, 2017

numerical methods - Greatest common divisor of more than two numbers

I'm coming from a programming aspect of this issue. So in Scheme code



(define (gcd a b)
  (if (= b 0)
      a
      (gcd b (modulo a b))))


works and uses recursion, i.e., the ...(gcd b (modulo a b)))) recursively calls the function gcd again (and again) until the condition b = 0 is met, to give the answer a. So to use this function (gcd 12 20) gives 4.



Now, if I do this for more than two numbers, say $2, 12, 20, 120$



(gcd 2 (gcd 12 (gcd 20 120)))



I get the right answer too, i.e., 2. Can someone tell me why this is working correctly from a math standpoint?




On Wikipedia's Euclidean Algorithm article, it says




The GCD of three or more numbers equals the product of the prime
factors common to all the numbers,[11] but it can also be calculated
by repeatedly taking the GCDs of pairs of numbers.[12] For example,




gcd(a, b, c) = gcd(a, gcd(b, c)) = gcd(gcd(a, b), c) = gcd(gcd(a, c), b)



I'm guessing it has something to do with the commutable "product of all prime factors." But still, this is recursion inside of recursion pair-wise. Why does this work?

calculus - Proving the inequality: $x-\frac{x^2}{2} < \ln(1+x) < x$ for $x>0$



Prove the inequality:

$$x-\frac{x^2}{2} < \ln(1+x) < x, \qquad x>0$$






The right side is easy: I used the Taylor expansion to show that $e^x > 1+x$, since
$e^x = 1 + x + x^2/2 + x^3/3! +\cdots$



The left side I expanded $\ln(1+x)$ to $x-x^2/2 + x^3/3 - x^4/4 +\cdots$
moved $(x-x^2/2)$ to the left and got this:




$$0 < x^3/3 - x^4/4 \cdots $$



The only thing I can think of here is that since the first term is positive and this series goes on forever, the sum will always stay positive because the later terms cancel each other out...
Not sure this is mathematically coherent or even correct.



thanks.


Answer



For the left inequality, since you know that $x-\frac{x^2}{2}=\ln(1+x)$ when $x=0$ you need only check the derivatives: $\frac{d}{dx}(x-\frac{x^2}{2})=1-x$ while $\frac{d}{dx}(\ln(1+x))=\frac{1}{1+x}$, and so
$$\frac{d}{dx}(\ln(1+x))-\frac{d}{dx}(x-\frac{x^2}{2})=\frac{1}{1+x}-(1-x)=\frac{1-(1+x)(1-x)}{1+x}=\frac{x^2}{(1+x)}$$
which is positive for positive $x$, thus for $x>0$, $\ln(1+x)$ is always growing faster than $x-\frac{x^2}{2}$ so it must be greater than $x-\frac{x^2}{2}$.



combinatorics - Given that at least $n$ elements from $A$ must be members of $C$, how many different sets $C$ can be created by combining elements from $A$ and $B$?




Generalized Problem:



Let $A$ and $B$ be two sets where $|A|=a$ and $|B|=b$.



Let $C\subset A\cup B$, with $|C|=c$.



Given that at least $n$ elements from $A$ must be members of $C$, how many distinct sets $C$ can be created by combining a total of $c$ elements from $A\cup B$?



Example: A team of 23 football players and 3 trainers have been awarded 10 tickets to a concert. To increase their probability of receiving a ticket, the cunning trainers deceitfully tell the players that at least 1 trainer must attend the concert. In how many different ways can the tickets be distributed among the 26 team members?




When initially trying to solve this problem I penned the following: $${3\choose1} {25\choose9}=3\times2042975=6128925$$



I thought that ${3\choose1}$ gives the number of ways to choose one of the trainers; and that ${25\choose9}$ gives the number of ways to choose nine people from the remaining 23 players and 2 trainers. Then, I thought that the product ${3\choose1} {25\choose9}$ would equal the total number of ways to distribute the tickets, due to the rule of product. (This method does not provide the correct answer.)



After attempting to solve the problem with the above method, I tried to solve it in a different way:



$$\text{number of allowed distributions} = \text{total distributions} - \text{distributions with no trainers} = {26\choose10} - {23\choose10} = 4167669$$



This method yields the correct answer.




My Questions:




  • Why does the first method fail? Where is my reasoning flawed?

  • What combinations arise when using the first method that do not arise when using the second method?


Answer



There is multiple counting in the first method: some distributions are counted more than once.



Let's call the trainers $A,B,C$ and the players $1,2,\dots23$.




Then possibility $A,B,1,2,3,4,5,6,7,8$ is counted if $A$ is the "chosen" trainer and $B$ is among the other $9$, but it is also counted if $B$ is the "chosen" trainer and $A$ is among the other $9$.


elementary set theory - What's the cardinality of all sequences with coefficients in an infinite set?



My motivation for asking this question is that a classmate of mine asked me some kind of question that made me think of this one. I can't recall his exact question because he is kind of messy (both when talking about math and when thinking about math).




I'm kind of stuck though. I feel like the set $A^{\mathbb{N}} = \{f: \mathbb{N} \rightarrow A, f \text{ is a function} \}$ should have the same cardinality as the power set of A, if A is infinite. On the other hand, in this post, it is stated that the sequences with real coefficients have the same cardinality as the reals.



It's easy to see that $|A^{\mathbb{N}}| \leq |\mathcal{P}(A)|$ for infinite $A$, but (obviously) I got stuck on the reverse inequality. Is there any general result that says anything else? References would be appreciated.



EDIT To clarify the intention of this question: I want to know if there are any general results on the cardinality of $A^{\mathbb{N}}$ other than that it is at most that of the power set of $A$.



Also, I was aware that the other inclusion isn't true in general (as the post on here I linked to gave a counterexample), but thanks for pointing out why too. :)


Answer



From Jech's Set Theory, we have the following theorems on cardinal exponentiation (a Corollary on page 49):




Theorem. For all $\alpha,\beta$, the value of $\aleph_{\alpha}^{\aleph_{\beta}}$ is always either:




  • $2^{\aleph_{\beta}}$; or

  • $\aleph_{\alpha}$; or

  • $\aleph_{\gamma}^{\mathrm{cf}\;\aleph_{\gamma}}$ for some $\gamma\leq\alpha$ where $\aleph_{\gamma}$ is such that $\mathrm{cf}\;\aleph_{\gamma}\leq\aleph_{\beta}\lt\aleph_{\gamma}$.



Here, $\mathrm{cf}\;\aleph_{\gamma}$ is the cofinality of $\aleph_{\gamma}$: the cofinality of a cardinal $\kappa$ (or of any limit ordinal) is the least limit ordinal $\delta$ such that there is an increasing $\delta$-sequence $\langle \alpha_{\zeta}\mid \zeta\lt\delta\rangle$ with $\lim\limits_{\zeta\to\delta} \alpha_{\zeta} = \kappa$. The cofinality is always a cardinal, so it makes sense to understand the operations above as cardinal operations.




Corollary. If the Generalized Continuum Hypothesis holds, then
$$\aleph_{\alpha}^{\aleph_{\beta}} = \left\{\begin{array}{lcl}
\aleph_{\alpha} &\quad & \mbox{if $\aleph_{\beta}\lt\mathrm{cf}\;\aleph_{\alpha}$;}\\
\aleph_{\alpha+1} &&\mbox{if $\mathrm{cf}\;\aleph_{\alpha}\leq\aleph_{\beta}\leq\aleph_{\alpha}$;}\\
\aleph_{\beta+1} &&\mbox{if $\aleph_{\alpha}\leq\aleph_{\beta}$.}
\end{array}\right.$$



So, under GCH, all cardinals $\kappa$ with cofinality greater than $\aleph_0$ satisfy $\kappa^{\aleph_0} = \kappa$, and for cardinals $\kappa$ with cofinality $\aleph_0$ (e.g., $\aleph_0$, $\aleph_{\omega}$), we have $\kappa^{\aleph_0} = 2^{\kappa}$. (In particular, it is not the case that the cardinality of $A^{\mathbb{N}}$ is necessarily less than the cardinality of $\mathcal{P}(A)$.)



Then again, GCH is usually considered "boring" by set theorists, from what I understand.



proof verification - Mathematical Induction Problem solving

$1^3 + 2^3 + 3^3 + \cdots + n^3 = (1 + 2 + 3 + \cdots + n)^2$



I start with $P(1)$ and get $1 = 1$.



Then I do it with $P(n+1)$ and I get stuck.



$1^3 + 2^3 + 3^3 + \cdots + n^3 + (n+1)^3 = (1 + 2 + 3 + \cdots + n + (n+1))^2$




then I've tried substituting in both directions and I cannot find anywhere to go with the problem.



$(1 + 2 + 3 + \cdots + n)^2 + (n+1)^3 = (1 + 2 + 3 + \cdots + n +(n+1))^2$
$OR$



$1^3 + 2^3 + 3^3 + \cdots + n^3 + (n+1)^3 = (1^3 + 2^3 + 3^3 + \cdots + n^3 +(n+1))^2$



I know that the sum of the first $n$ positive integers is $\cfrac{n(n+1)}{2}$. I'm not sure if that's of any use. I'm pretty sure there's a mathematical identity that I can't remember that'll clean up this problem, so any insight would be greatly appreciated.

Tuesday, March 14, 2017

elementary number theory - Computing Linear Congruence -Uncertainty Of Answer

given:
$$17x\equiv 3\pmod{2\cdot3\cdot5\cdot7}$$



after running the extended Euclidean algorithm on $(17,210)$ I got:



$$1=3\cdot210+17\cdot(-37)$$ now multiplying both the RHS and LHS by $3$ I get:



$$210\cdot9+17\cdot(-111)=3$$




then, reducing both sides mod $210$ I get:



$$17\cdot(-111)\equiv 3\pmod{210}$$



so the answer is supposed to be $-111$, but the given answer for $x$ is $99$.



My question: is it valid to add $210$ just to $-111$, without touching the rest of the expression $17\cdot(-111)$?

real analysis - Discontinuous derivative.

Could someone give an example of a ‘very’ discontinuous derivative? I myself can only come up with examples where the derivative is discontinuous at only one point. I am assuming the function is real-valued and defined on a bounded interval.

how does remainder vary with multiples of a given number




Knowing the remainder of an unknown number for a given number, is it possible to calculate the remainder of the unknown number for a (known) multiple of the given number? I.e., for $r$, $g$ and $f$ known and $x$ unknown in the following,



$$r = x \bmod g, \qquad R = x \bmod (fg)$$



find R.


Answer



Reworded in terms of modular arithmetic, you say you know that $x\equiv r\pmod{g}$ and from this you wish to know the value of $x\pmod{f\cdot g}$? There is not enough information yet, but if you had a few more pieces of information, namely the value of $x\pmod{f}$ as well as knowing that $\gcd(f,g)=1$, you could then use the Chinese Remainder Theorem to get your final answer.



An example of why we don't have enough information without these:




Consider $g=10, f=10, r=5$. We are told that $x\pmod{10}=5$, i.e. the last digit of $x$ is $5$. We want to know what $x\pmod{10\cdot 10}$ is, i.e. we want to know the last two digits of $x$. If all we know is the last digit of $x$, we have no way of knowing what the second to last digit of $x$ is. For example, both $15$ and $25$ have their last digit equal to $5$ but they have different second to last digits.



Similarly, if $g=2, f=5, r=1$. We are told that $x\pmod{2}=1$, i.e. that $x$ is odd. We want to know what $x\pmod{2\cdot 5}$ is, i.e. we want to know the last digit of $x$. If all we know is that $x$ is odd, we cannot determine uniquely which of the digits $1,3,5,7,9$ is the last digit of $x$.



If we knew however that $g=2,f=5,r=1$ and that $s=2$ where $x\equiv s\pmod{f}$, then we would simultaneously know $x\equiv 1\pmod{2}$ and $x\equiv 2\pmod{5}$, which by the Chinese Remainder Theorem would imply that $x\equiv 7\pmod{10}$






If your question was instead about finding the value of $(x\cdot f)\pmod{g}$ given the information that $x\equiv r\pmod{g}$, then this is simply a matter of arithmetic. You would have $x\cdot f\equiv r\cdot f\pmod{g}$.



Sunday, March 12, 2017

real analysis - Prove that if $\lim_{n\to\infty} x_n = 1$ for $x_n > 0$ then $\lim_{n\to\infty} \sqrt[n]{x_1x_2\cdots x_n} = 1$




Given a sequence $x_n$ and the fact that:
$$
\lim_{n\to\infty} x_n = 1,\qquad x_n > 0,\qquad n\in\Bbb N
$$

Prove
$$
\lim_{n\to\infty} \sqrt[n]{x_1x_2\cdots x_n} = 1
$$




I'm having some difficulties finishing this proof. I've shown while solving another problem that:

$$
\lim_{n\to\infty} x_n = a \implies \lim_{n\to\infty}{1\over n}\sum_{k=1}^nx_k = a
$$



Using this we may state that:
$$
\lim_{n\to\infty}x_n = 1 \implies \lim_{n\to\infty}{1\over n}\sum_{k=1}^nx_k = 1
$$



On the other hand by AM-GM we have that:

$$
\frac{x_1 + x_2 + \cdots + x_n}{n} \ge \sqrt[n]{x_1x_2\cdots x_n}
$$



Since $x_n > 0$:
$$
\frac{x_1 + x_2 + \cdots + x_n}{n} \ge \sqrt[n]{x_1x_2\cdots x_n} \ge 0
$$



We know that:

$$
\lim_{n\to\infty}\frac{x_1 + x_2 + \cdots + x_n}{n} = 1
$$



Therefore:
$$
1 \ge \lim_{n\to\infty}\sqrt[n]{x_1x_2\cdots x_n} \ge 0
$$



My idea was to use Monotone Convergence theorem, but since $x_n$ is only constrained by $x_n > 0$ we can not make any conclusions on the monotonicity of:

$$
y_n = \sqrt[n]{x_1x_2\cdots x_n}
$$

(or can we?).



Apparently my idea to use MCT is not applicable here. So the question is what would be the proper way to prove the above?


Answer



Consider the logarithm and use the Stolz–Cesàro theorem to deduce that
$$
\lim_{n\to \infty}\log\sqrt[n]{x_1\dotsb x_n}=\lim_{n\to \infty}\frac{1}{n}\sum_{i=1}^n\log x_i=\lim_{n\to \infty}\log x_n=0
$$
since $x_n\to 1$, from which the claim follows.


arithmetic - Can the 0 power rule be conceptualized?

A review of the rule that $n^0$ is always 1, when n is not 0, has made me question if all math rules can be visualized or conceptualized with a general intuition gained from viewing the natural world. It seems that me that there is no natural representation of this rule or a representation that can be conceptualized in terms gained from experiencing reality.
I have come across three methods of justification that are used to determine that $n^0=1$. One uses a somewhat altered, or added to, definition of exponents, the other relies on maintaining internal consistency with other math rules, and the last relies on a pattern of division seen with single terms exponentiated. I’ll list the justifications now:




  • 0 power rule justified with exponent subtraction


  • The second justification has to do with assuming that the definition of exponents always starts with a multiplication of 1. Therefore $x^3=1*x*x*x$ and $x^0=1$


  • The last justification tries to justify the rule from a pattern: $3^3=27$, $3^2=9$, $3^1=3$. Each time the following result is the former result divided by 3. If we keep going, we get $3^0=1$.





None of these methods make very much conceptual sense to me, because none of the justifications attempt to show a conceptual picture; instead, they are abstracted away from real-life examples and intuition. If we think about exponentiation as repeated multiplication, similar to how multiplication is repeated addition, I cannot think of an everyday situation which would repeat multiplication 0 times and get 1. As a result, I cannot grasp an intuitive sense of the rule. With multiplication by 0, it is easy to see that when you repeat addition of X 0 times, you get 0.



So, my question is, is there a more intuitive explanation that I am missing or not understanding? If not, then I feel like I have been treating my math studies in the wrong way. I have been trying to intuitively understand mathematical concepts in the same way a philosopher mathematician might have conceptualized them early on in their brains. Is this the wrong way to go about thinking about math? Should I instead just treat math as a series of arbitrary rules that are built upon each other to create a system? Edit: Or as LittleO said, "convenient rules".



Edit 1: Called the three justifications "rules" on accident.



Also thank you to the kind admin/moderator who helped clean up the math formatting.

limits - Finding $\lim_{n \to \infty} \frac{\sqrt{n!}}{2^n}$



I'm looking for a way to find this limit:



$\lim_{n \to \infty} \frac{\sqrt{n!}}{2^n}$




I think I have found that it diverges, by plugging numbers into the formula and "sandwich" the result. However I can't find way to prove it.



I know that $n! \approx \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n$ when $n \to \infty$ (Stirling's approximation, I think).



However, I don't know how I could possibly use it. I mean, I tried using l'Hôpital's rule after applying that approximation, but I didn't get any further.


Answer



$$
\frac{\sqrt{n!}}{2^n}\sim\frac{(2\pi\,n)^{1/4}\Bigl(\dfrac{n}{e}\Bigr)^{n/2}}{2^n}=(2\pi\,n)^{1/4}\Bigl(\frac{\sqrt n}{2\sqrt e}\Bigr)^{n}.
$$
Since $\dfrac{\sqrt n}{2\sqrt e}$ and $n^{1/4}$ converge to $\infty$, so does $\dfrac{\sqrt{n!}}{2^n}$.



Saturday, March 11, 2017

Calculate limit (exponential distribution)




As I was trying to show the variance of $X$ with the exponential distribution and parameter $\lambda$, I got the following bit,



$[-x^2e^{-\lambda x}]_{0}^{\infty}=\lim_{a\to\infty}[-x^2e^{-\lambda x}]_{0}^{a}$,



which is apparently equal to zero. However, we have that $x^2$ goes to $\infty$ and $e^{-\lambda x}$ goes to $0$, as $x\to\infty$, so this is an indeterminate product.



I tried proving this limit:



Let $(x_n)$ be a sequence with limit $+\infty$. We want to show that for each $\epsilon>0$,




$\exists N\in\mathbb N: n>N\implies\lvert x_n^2\cdot e^{-\lambda x_n}\rvert<\epsilon$.



But from here on I'm not sure how to proceed. I can see that $\lvert x_n^2\cdot e^{-\lambda x_n}\rvert=x_n^2\cdot e^{-\lambda x_n}$, but that's all I can think of... Could someone give me a hint?


Answer



Hint: first show that $e^{\lambda x_n} > \dfrac{\lambda^3x_n^3}{6}$, which is evidently true from the Maclaurin expansion of $e^{\lambda x}$, and then use the squeeze theorem to settle the result.


Trying to iterate Cauchy-Schwarz inequality to prove an inequality

Let us consider the following function :



$$f(x)=\frac{1}{\sqrt{1+x}}+\frac{1}{\sqrt{1+a}}+\sqrt{\frac{ax}{ax+8}}$$



In a particular Olympiad problem I have to prove that for any $a>0$,




$$1 < f(x) < 2$$

So, I started up by assuming that the range of the function should be real numbers, because in case of complex numbers it would not make sense to speak of inequalities.



Then, the next step was to conclude that $x$ cannot be less than $-1$, as that would not give real values for the first term in $f(x)$.



And similarly, in the last term the sign of the denominator and the numerator must be the same, which restricts $x$ from assuming values in $(-8,0)$. And hence $x$ should only have non-negative values.



Now, I viewed $f(x)$ as a dot-product of vectors

$$\left( 1, 1, \sqrt{\frac{ax}{ax+8}} \right) \quad \text{and} \quad
\left( \frac{1}{\sqrt{1+x}}, \frac{1}{\sqrt{1+a}}, 1 \right)$$



Now, the square of the dot product of these vectors must be at most the product of the squares of the magnitudes of these vectors, which I guess is one way of stating the Cauchy–Schwarz inequality.



So



$$f(x)^2 \le
\left( \frac{1}{1+x}+\frac{1}{1+a}+1 \right)
\left( 1+1+\frac{ax}{ax+8} \right)$$




At this stage, I can see that both terms in the RHS of the inequality decrease as $x$ and $a$ increase.



So the max value can be said to be $6$. But I could not tighten the bound to $4$.



I tried further by taking a term like



$\dfrac{1}{1+x}$ and making an inequality:



$\dfrac{1}{1+x} < \dfrac{1}{2\sqrt{x}}$ by applying AM-GM inequality




and then seeing $\dfrac{1}{\sqrt{x}}$ as a product and applying AM-GM to get
$\dfrac{1}{\sqrt{x}} < \dfrac{1}{2} \left( 1 + \dfrac{1}{x} \right)$



But this couldn't tighten the upper bound.



Is there any way to proceed further by using Cauchy-Schwarz inequality without using any tools from calculus?







Basically, what I want to know is whether I can prove that $f(x)$ will always be less than $2$ by using the Cauchy–Schwarz and AM–GM inequalities in the way I have tried to.

elementary set theory - How to prove that from "Every infinite cardinal satisfies $a^2=a$" we can prove that $b+c=bc$ for any two infinite cardinals $b,c$?



Prove that if $a^2=a$ for each infinite cardinal $a$ then $b + c = bc$ for any two infinite cardinals $b,c$.




I tried $b+c=(b+c)^2=b^2+2bc+c^2=b+2bc+c$, but then I'm stuck there.


Answer



If you can compare $b$ and $c$, then suppose without loss of generality that $b\leq c$. Then $$bc \leq cc = c \leq b+c \leq c+c = 2c \leq bc.$$


The last inequality holds because we are assuming $b$ and $c$ are both infinite, so $2\leq b$; the equality $cc=c$ holds by assumption.


Added${}^{\mathbf{2}}$. From $a^2=a$ for all infinite cardinals, one may deduce the Axiom of Choice (this is a theorem of Tarski's), which in turn is equivalent to the fact that we can compare $b$ and $c$.


If you don't want to go through AC in order to assume comparability of $b$ and $c$, then I would have had some trouble with the problem, though Apostolos's answer together with your $b+c = b+bc+c\geq bc$ does tie it together neatly.


Added. In ZF, the Axiom of Choice is equivalent to the statement that given any two sets $A$ and $B$, either $A$ injects into $B$ or $B$ injects into $A$ (that is, the cardinalities of $A$ and $B$ are comparable). Whether or not we can assume that two cardinals are comparable may depend on what one means by "cardinal".


By "cardinal", I understood a cardinal number, which means an ordinal that is not bijectable with any strictly smaller ordinal, where ordinals are ordered by $\in$ (this is the definition in Jech's Set Theory, under Alephs, page 24: "An ordinal $\alpha$ is called a cardinal number if $|\alpha|\neq|\beta|$ for all $\beta\lt\alpha$." The definition precedes the discussion of the Axiom of Choice, which begins in page 38). The ordering among cardinals is induced by the ordering of ordinals. If that is what we mean by "cardinal", then any two cardinals are certainly comparable (even in ZF without AC), so we will either have $b\leq c$ or $c\leq b$, and there is no loss in generality in assuming the first. But as Asaf points out, when dealing with cardinal numbers in this sense, both $a^2=a$ and $b+c=bc=\max\{b,c\}$ are theorems in ZF.


Friday, March 10, 2017

functional equations - If $f(xy)=f(x)f(y)$ then show that $f(x) = x^t$ for some t



Let $f(xy) =f(x)f(y)$ for all $x,y\geq 0$. Show that $f(x) = x^p$ for some $p$.




I am not very experienced with proof. If we let $g(x)=\log (f(x))$ then this is the same as $g(xy) = g(x) + g(y)$


I looked up the hint and it says let $g(x) = \log f(a^x) $


The wikipedia page for functional equations only states the form of the solutions without proof.


Attempt Using the hint (which was like pulling a rabbit out of the hat)


Restricting the codomain $f:(0,+\infty)\rightarrow (0,+\infty)$ so that we can define the real function $g(x) = \log f(a^x)$ and we have $$g(x+y) = g(x)+ g(y)$$


i.e. $g(x) = xg(1)$, as $g(x)$ is continuous (assuming $f$ is).


Letting $\log_a f(a) = p$ we get $f(a^x) = a^{px} = (a^x)^p$. I do not have a rigorous argument, but I think I can conclude that $f(x) = x^p$ (please fill in any holes or unspecified assumptions). Different solutions are invited.


Answer



So, we assume $f$ is continuous. Letting $g(x) = \ln(f(a^x))$, we get $$ \begin{align*} g(x+y) &= \ln(f(a^{x+y})) = \ln(f(a^xa^y)) = \ln(f(a^x)f(a^y))\\ &= \ln(f(a^x)) + \ln(f(a^y))\\ &= g(x)+g(y). \end{align*}$$ So $g$ satisfies the Cauchy functional equation; if you assume $f$ is continuous, then so is $g$, hence $g(x) = xg(1)$ for all $x\gt 0$.


Since $g(1) = \ln(f(a))$, we have $$f(a^x) = e^{g(x)} = e^{g(1)x} = (e^{x})^{g(1)}.$$ Given $r\in \mathbb{R}$, $r\gt 0$, we have $r = a^{\log_a(r)}$, hence $$\begin{align*} f(r) &= f\left(a^{\log_a(r)}\right)\\ &= \left(e^{\log_a(r)}\right)^{g(1)}\\ &= \left(e^{\ln(r)/\ln(a)}\right)^{g(1)}\\ &= \left(e^{\ln(r)}\right)^{g(1)/\ln(a)}\\ &= r^{g(1)/\ln(a)}, \end{align*}$$ where we have used the change-of-base formula for the logarithm, $$\log_a(r) = \frac{\ln r}{\ln a}.$$ Finally, since $g(1) = \ln(f(a))$, we have $$f(r) = r^{\ln(f(a))/\ln(a)}.$$ As this works for any positive $a$, $a\neq 1$, taking $a=e$ we get $$f(r) = r^{\ln(f(e))}.$$



analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...