Can you explain this please
$$T(n) = (n-1)+(n-2)+\dots+1= \frac{(n-1)n}{2}$$
I am really bad at maths but need to understand this for software engineering.
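Since the motivation is software engineering, one concrete way to see where this sum comes from (a Python sketch, not part of the original post): it counts the iterations of the classic all-pairs double loop.

```python
def pair_comparisons(n):
    """Count iterations of the classic all-pairs loop.

    The inner loop body runs (n-1) + (n-2) + ... + 1 times in total,
    which is exactly T(n) = (n-1)n/2.
    """
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count

# The closed form matches the loop count for every n.
for n in range(50):
    assert pair_comparisons(n) == (n - 1) * n // 2
```

This is why algorithms that compare every pair of $n$ items (e.g. the worst case of a naive sort) are called $O(n^2)$.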
If the sides of a triangle are in Arithmetic progression and the greatest and smallest angles are $X$ and $Y$, then show that
$$4(1- \cos X)(1-\cos Y) = \cos X + \cos Y$$
I tried using sine rule but can't solve it.
Answer
Let $a-d,a,a+d$ (with $a>d>0$) be the three sides of the triangle; since the identity is symmetric in $X$ and $Y$, let $X$ be the angle opposite the side of length $a-d$ and $Y$ the angle opposite the side of length $a+d$. Using the cosine rule,
\begin{align*}
\cos X & = \frac{(a+d)^2+a^2-(a-d)^2}{2a(a+d)}=\frac{a+4d}{2(a+d)}\\
\cos Y & = \frac{(a-d)^2+a^2-(a+d)^2}{2a(a-d)}=\frac{a-4d}{2(a-d)}
\end{align*}
Then
$$\cos X +\cos Y=\frac{a^2-4d^2}{a^2-d^2}=4 \frac{(a-2d)}{2(a+d)}\frac{(a+2d)}{2(a-d)}=4(1-\cos X)(1-\cos Y).$$
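As a sanity check, the identity can be verified numerically straight from the law of cosines (a Python sketch; the side parameters are my own choices, with $a>2d$ so the sides form a valid triangle):

```python
import math

def angles_check(a, d):
    """Sides a-d, a, a+d with a > 2d (triangle inequality).

    X is the angle opposite a-d, Y the angle opposite a+d; return both
    sides of the claimed identity 4(1-cosX)(1-cosY) = cosX + cosY.
    """
    s1, s2, s3 = a - d, a, a + d
    # Law of cosines: cosine of the angle opposite a given side.
    cos_X = (s2**2 + s3**2 - s1**2) / (2 * s2 * s3)
    cos_Y = (s1**2 + s2**2 - s3**2) / (2 * s1 * s2)
    return 4 * (1 - cos_X) * (1 - cos_Y), cos_X + cos_Y

for a, d in [(4, 1), (10, 3), (7, 2)]:
    lhs, rhs = angles_check(a, d)
    assert math.isclose(lhs, rhs)
```

The case $(a,d)=(4,1)$ is the $3$-$4$-$5$ right triangle, where both sides equal $0.8$.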
I'm trying to compute the infinite sum
$\sum_{n=1}^{\infty}n(\frac{1}{2})^n$
which I believe should represent the expected amount of coin flips needed to get a head. Can someone remind me how to do this?
Answer
The key is that the series $\sum x^n$ converges to $\frac 1{1 - x}$ for $|x| < 1$, and differentiating this equality term by term shows that $\sum nx^{n-1}$ converges to the derivative of $\frac 1{1 - x}$, namely $\frac 1{(1-x)^2}$, on the same interval. Multiplying by $x$ gives $\sum nx^{n} = \frac{x}{(1-x)^2}$, which is the sum you are looking for with $x = \frac 12$; this value satisfies $|x|<1$, so the formula applies.
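A quick numerical check of this (a Python sketch; the names are mine):

```python
# Partial sums of sum_{n>=1} n*x^n versus the closed form x/(1-x)^2,
# at x = 1/2; the value 2 is the expected number of flips to see a head.
x = 0.5
partial = sum(n * x**n for n in range(1, 200))
closed = x / (1 - x) ** 2
assert abs(partial - closed) < 1e-12
assert abs(closed - 2.0) < 1e-12
```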
Find $S=\frac{1}{7}+\frac{1\cdot3}{7\cdot9}+\frac{1\cdot3\cdot5}{7\cdot9\cdot11}+\cdots$ upto 20 terms
I first multiplied and divided $S$ with $1\cdot3\cdot5$ $$\frac{S}{15}=\frac{1}{1\cdot3\cdot5\cdot7}+\frac{1\cdot3}{1\cdot3\cdot5\cdot7\cdot9}+\frac{1\cdot3\cdot5}{1\cdot3\cdot5\cdot7\cdot9\cdot11}+\cdots$$ Using the expansion of $(2n)!$ $$1\cdot3\cdot5\cdots(2n-1)=\frac{(2n)!}{2^nn!}$$ $$S=15\left[\sum_{r=1}^{20}\frac{\frac{(2r)!}{2^rr!}}{\frac{(2(r+3))!}{2^{r+3}(r+3)!}}\right]$$ $$S=15\cdot8\cdot\left[\sum_{r=1}^{20}\frac{(2r)!}{r!}\cdot\frac{(r+3)!}{(2r+6)!}\right]$$ $$S=15\sum_{r=1}^{20}\frac{1}{(2r+5)(2r+3)(2r+1)}$$
How can I evaluate the above expression? Or is there a simpler/faster method?
Answer
Hint:
$\frac{1}{(2r+5)(2r+3)(2r+1)}=\frac{1}{4}\left(\frac{1}{(2r+3)(2r+1)}-\frac{1}{(2r+5)(2r+3)}\right)$
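Exact arithmetic confirms both the reduction and the telescoping (a Python sketch using `fractions`; the telescoped sum collapses to the two end terms and yields $S=\frac{32}{129}$):

```python
from fractions import Fraction

# Direct sum of the first 20 terms: 1/7 + (1*3)/(7*9) + ...
S = Fraction(0)
num, den = Fraction(1), Fraction(1)
for r in range(1, 21):
    num *= 2 * r - 1          # 1, 1*3, 1*3*5, ...
    den *= 2 * r + 5          # 7, 7*9, 7*9*11, ...
    S += num / den

# The reduced form 15 * sum 1/((2r+1)(2r+3)(2r+5)); via the hint it
# telescopes to (1/4)(1/(3*5) - 1/(43*45)).
T = 15 * Fraction(1, 4) * (Fraction(1, 3 * 5) - Fraction(1, 43 * 45))
assert S == T == Fraction(32, 129)
```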
I am currently studying proving by induction but I am faced with a problem.
I need to solve by induction the following question.
$$1+2+3+\ldots+n=\frac{1}{2}n(n+1)$$
for all $n \ge 1$.
Any help on how to solve this would be appreciated.
This is what I have done so far.
Show truth for $N = 1$
Left Hand Side = 1
Right Hand Side = $\frac{1}{2} (1) (1+1) = 1$
Suppose truth for $N = k$
$$1 + 2 + 3 + ... + k = \frac{1}{2} k(k+1)$$
Show that the equation is true for $N = k + 1$:
$$1 + 2 + 3 + ... + k + (k + 1)$$
Which is Equal To
$$\frac{1}{2} k (k + 1) + (k + 1)$$
This is where I'm stuck, I don't know what else to do. The answer should be:
$$\frac{1}{2} (k+1) (k+1+1)$$
Which is equal to:
$$\frac{1}{2} (k+1) (k+2)$$
Right?
By the way sorry about the formatting, I'm still new.
Answer
Basic algebra is what's causing the problems: you reached the point
$$\frac{1}{2}K\color{red}{(K+1)}+\color{red}{(K+1)}\;\;\;\:(**)$$
Now just factor out the red terms:
$$(**)\;\;\;=\color{red}{(K+1)}\left(\frac{1}{2}K+1\right)=\color{red}{(K+1)}\left(\frac{K+2}{2}\right)=\frac{1}{2}(K+1)(K+2)$$
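Matching coefficients in the last line gives $A=0$, $B=-3$, i.e. $z_p=-3x\cos x$ and $y_p=-3xe^{2x}\cos x$. A numerical sanity check of that particular solution (a Python sketch using finite differences):

```python
import math

# From 2A cos x - 2B sin x = 6 sin x we get A = 0, B = -3, so
# z_p = -3 x cos x and y_p = e^{2x} z_p = -3 x e^{2x} cos x.
def y_p(x):
    return -3 * x * math.exp(2 * x) * math.cos(x)

def residual(x, h=1e-4):
    """Central-difference estimate of y'' - 4y' + 5y - 6 e^{2x} sin x."""
    d1 = (y_p(x + h) - y_p(x - h)) / (2 * h)
    d2 = (y_p(x + h) - 2 * y_p(x) + y_p(x - h)) / h**2
    return d2 - 4 * d1 + 5 * y_p(x) - 6 * math.exp(2 * x) * math.sin(x)

# The residual should vanish up to discretization error.
for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(residual(x)) < 1e-3
```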
I am studying branches of the logarithm. I have learned that there are infinitely many branches of the logarithm, given by $\log z = \log |z| + i (\arg z +2k\pi)$, $k \in \mathbb Z$, $z \neq 0$. Now, for each $\alpha \in [0,2\pi)$, if we restrict $\arg z$ to lie in $(\alpha , \alpha + 2\pi)$, this yields a branch of the logarithm with branch cut $\theta = \alpha$, which is analytic in the cut plane $D_{\alpha} = \mathbb C \setminus \{r e^{i\alpha} : r \geq 0 \}$. For each such branch there is a principal logarithmic function with $k=0$, i.e. $\log z =\log |z| + i \arg_{\alpha} z$ for $z \neq 0$, where $\arg_{\alpha}$ is the restriction of the argument function to $(\alpha,\alpha+2\pi)$, for some $\alpha \in [0,2\pi)$. The principal branch of the logarithm corresponds to $k=0$ with $\arg=\arg_{\pi}$, which is known as the principal argument function.
Now my question is :
"Is the same true for square root?" As we know that $z^{\frac {1} {2}} = \exp (\frac {1} {2} \log z)$. As we know that logarithm has infinitely many branches, each of which is analytic in some certain cut plane. So we can say that $z^{\frac {1} {2}}$ is analytic on a certain cut plane of the corresponding logarithmic branch. But I don't know whether it is analytic on any point on the cut plane of the corresponding logarithmic branch or not!! If it is not so then clearly there are infinitely many branches of square root function. Corresponding to each branch there are two square root functions. One is $z \mapsto |z|^{\frac {1} {2}} e^{\frac {i\arg_{\alpha} z} {2}}$ and the other is $z \mapsto -|z|^{\frac {1} {2}} e^{\frac {i\arg_{\alpha} z} {2}}$ for each $\alpha \in [0,2\pi)$. But for that I need the answer to the question whether $z^{\frac {1} {2}}$ is analytic on the points of the cut plane of the corresponding logarithmic branch or not. If the answer to that question is "no" then only we can extend the concept of logarithmic function to the square root function. I only know that the principal square root function is not continuous on $\mathbb C \setminus \{0 \}$.
Is it true or not? I am in a fix. Please help me.
Thank you in advance.
Answer
The two concepts match. Let us at first revisit the logarithmic function:
The multivalued logarithm is defined as
\begin{align*}
\log(z)=\log|z|+i\arg(z)+2k\pi i\qquad\qquad k\in\mathbb{Z}\tag{1}
\end{align*}
In order to make single-valued branches of $\log $ we make a branch cut from $0$ to infinity, the most common being the negative real axis. This way we define the single-valued principal branch or principal value of $\log$ denoted with $\mathrm{Log}$ and argument $\mathrm{Arg}$. We obtain
\begin{align*}
\mathrm{Log}(z)=\log |z|+i\mathrm{Arg}(z)\qquad\qquad -\pi <\mathrm{Arg}(z)\leq \pi\tag{2}
\end{align*}
Now let's look at the square root function:
The two-valued square root is defined as
\begin{align*}
z^{\frac{1}{2}}&=|z|^{\frac{1}{2}}e^{i\frac{\arg(z)+2k\pi}{2}}\\
&=|z|^{\frac{1}{2}}e^{i\frac{\arg(z)}{2}}(-1)^k\qquad\qquad k\in\mathbb{Z}\tag{3}
\end{align*}
In order to make single-valued branches of $z^{\frac{1}{2}}$ we make again a branch cut from $0$ to infinity along the negative real axis. This way we define the single-valued principal branch or principal value of $z^{\frac{1}{2}}$ denoted with $\left[z^{\frac{1}{2}}\right]$ and argument $\mathrm{Arg}$. We obtain
\begin{align*}
\left[z^{\frac{1}{2}}\right]&=|z|^{\frac{1}{2}}e^{i\frac{\mathrm{Arg}(z)}{2}}\qquad\qquad -\pi <\mathrm{Arg}(z)\leq \pi\tag{4}
\end{align*}
Now we are ready to calculate $e^{\frac{1}{2}\log(z)}$
We obtain from (1)
\begin{align*}
\color{blue}{e^{\frac{1}{2}\log(z)}}&=e^{\frac{1}{2}\left(\log|z|+i\arg(z)+2k\pi i\right)}\\
&=|z|^{\frac{1}{2}}e^{\frac{1}{2}\left(i\arg(z)+2k\pi i\right)}\\
&=|z|^{\frac{1}{2}}e^{i\frac{\arg(z)}{2}}(-1)^k\\
&\color{blue}{=z^{\frac{1}{2}}}
\end{align*}
which coincides with (3).
Taking the principal value $\mathrm{Log}$ we obtain from (2)
\begin{align*}
\color{blue}{e^{\frac{1}{2}\mathrm{Log}(z)}}&=e^{\frac{1}{2}\left(\log |z|+i\mathrm{Arg}(z)\right)}\\
&=|z|^{\frac{1}{2}}e^{i\frac{\mathrm{Arg}(z)}{2}}\\
&\color{blue}{=\left[z^{\frac{1}{2}}\right]}
\end{align*}
which coincides with (4).
We also see the relationship
\begin{align*}
e^{\frac{1}{2}\log(z)}=\left[z^{\frac{1}{2}}\right](-1)^k
\end{align*}
Conclusion: The concepts of logarithm and square root match in the sense that the infinitely many branches of the logarithm yield precisely the two branches of the square root.
Note: This answer is mostly based upon chapter VI from Visual Complex Analysis by T. Needham.
I'm having trouble with this particular exercise in limits, and I just can't seem to find a way to crack it.
I saw a similar exercise online where they used integrals, but it's pretty early in the course so we're only supposed to use basic limit arithmetics and the Squeeze theorem (Oh boy, and I thought it had a bad name in MY language). That said, I already tried the Squeeze theorem and it doesn't work.
$$\lim_{n\to\infty} \left( \frac{1}{1\cdot4}+\frac{1}{4\cdot7}+...+\frac{1}{(3n-2)(3n+1)} \right)$$
What am I missing?
Thanks in advance.
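In case it helps, a standard first guess here (my suggestion, not stated in the thread) is partial fractions: $\frac{1}{(3n-2)(3n+1)}=\frac13\left(\frac{1}{3n-2}-\frac{1}{3n+1}\right)$, so the partial sums telescope to $\frac13\left(1-\frac{1}{3N+1}\right)\to\frac13$. A quick numerical check (Python):

```python
# Partial sums of sum 1/((3n-2)(3n+1)) versus the telescoped closed form
# (1/3)(1 - 1/(3N+1)), which tends to 1/3.
def partial_sum(N):
    return sum(1.0 / ((3 * n - 2) * (3 * n + 1)) for n in range(1, N + 1))

for N in (10, 100, 1000):
    assert abs(partial_sum(N) - (1 - 1 / (3 * N + 1)) / 3) < 1e-12
assert abs(partial_sum(10**5) - 1 / 3) < 1e-4
```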
Finding value of $$\lim_{n\rightarrow \infty}\lim_{m\rightarrow \infty}\sum^{n}_{r=1}\sum^{mr}_{k=1}\frac{m^2n^2}{(m^2n^2+k^2)(n^2+r^2)}$$
What I tried:
$$\lim_{m\rightarrow \infty}\lim_{n\rightarrow \infty}\sum^{mr}_{k=1}\frac{m^2n^2}{m^2n^2+k^2}\cdot \frac{1}{n}\sum^{n}_{r=1}\frac{n}{n^2+r^2}$$
$$\lim_{n\rightarrow \infty}\sum^{n}_{r=1}\frac{n}{n^2+r^2}=\int^{1}_{0}\frac{1}{1+x^2}dx = \frac{\pi}{4}$$ How do I evaluate the first summation?
Please help.
I got this question, and I'm totally lost as to how I solve it!
Any help is appreciated :)
When 100! is written out in full, it equals
100! = 9332621...000000.
Without using a calculator, determine the number of 0 digits at the end of this number
EDIT:
Just want to confirm this is okay --
I got 24 by splitting products into 2 cases 1) multiples of 10 and 2) multiples of 5
Case I
(1*3*4*6*7*8*9*10)(100,000,000,000)--> 12 zeroes
Similarly got 12 zeroes for Case 2.
So 24 in total? Is that correct?
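The standard count goes through powers of $5$: each trailing zero needs a factor $10=2\cdot5$, and factors of $5$ are the bottleneck, so the count is $\lfloor 100/5\rfloor+\lfloor 100/25\rfloor=20+4=24$. So the total of $24$ is right, and it can be confirmed directly (a Python check):

```python
import math

# Legendre-style count: multiples of 5 contribute one factor of 5 each,
# multiples of 25 contribute a second one; 125 > 100, so stop there.
by_legendre = 100 // 5 + 100 // 25      # 20 + 4 = 24

# Direct check against the decimal digits of 100!.
digits = str(math.factorial(100))
direct = len(digits) - len(digits.rstrip("0"))
assert by_legendre == direct == 24
```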
Let $\kappa_1, \kappa_2, m$ be cardinals with $\kappa_1 \leq \kappa_2$. Prove: $\kappa_1 \cdot m \leq \kappa_2 \cdot m$.
Hi, I would be happy if someone could help me with this. What I did until now: I replaced the cardinals with sets: $|K_1|=\kappa_1$, $|K_2|=\kappa_2$, $|M|=m$. From the assumption it follows that there is an injection $f:K_1\to K_2$. Now I need to prove there is an injection $g:K_1\cdot M \to K_2\cdot M$, which by the definition of cardinal multiplication means an injection $g:K_1\times M \to K_2\times M$. Now how do I show that?
I just started to learn this subject so would be happy to get a complete answer. Thanks!
One of the basic (and frequently used) properties of cardinal exponentiation is that $(a^b)^c=a^{bc}$.
What is the proof of this fact?
As Arturo pointed out in his comment, in computer science this is called currying.
Notation: Let A and B be sets. The set of all functions $f:A \rightarrow B$ is denoted by $B^A$.
Problem: Let A, B, and C be sets. Show that there exists a bijection from $(A^B)^C$ into $A^{B \times C} $. You should first construct a function and then prove that it is a bijection.
Actually this question was not originally posted by me; it has already been answered and closed as Find a bijection from $(A^B)^C$ into $A^{B \times C}$.
I don't agree, since that answer doesn't seem correct to me. Maybe I haven't fully understood it, but in my view the correct answer should be the following, keeping the same letters for the functions:
my Answer
Let $f \in (A^B)^C, g \in A^{B \times C}$. Define $\Phi: (A^B)^C \to A^{B \times C}$ by setting
$$\Phi(f)(b,c) = f(c)(b)$$
This is a bijection because it has an inverse $\Psi: A^{B \times C} \to (A^B)^C$
$$\Psi(g)(c)(b) = g(b,c)$$
I would like to know if my edits to the functions really answer the question, or if the previous answer Find a bijection from $(A^B)^C$ into $A^{B \times C}$ was indeed correct.
Thanks.
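For what it's worth, this is exactly currying, and the two maps can be exercised on small finite sets (a Python sketch of my own, with functions encoded as dictionaries):

```python
# Finite sets A, B, C; an element of (A^B)^C is a dict C -> (dict B -> A),
# and an element of A^(B x C) is a dict over pairs (b, c).
A, B, C = [0, 1], ["x", "y"], [10, 20]

def phi(f):
    """Phi(f)(b, c) = f(c)(b)."""
    return {(b, c): f[c][b] for b in B for c in C}

def psi(g):
    """Psi(g)(c)(b) = g(b, c)."""
    return {c: {b: g[(b, c)] for b in B} for c in C}

# Round trips on a sample f: Psi is a two-sided inverse of Phi.
f = {10: {"x": 0, "y": 1}, 20: {"x": 1, "y": 1}}
g = phi(f)
assert psi(g) == f
assert phi(psi(g)) == g
```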
Hey guys, I need help with this problem:
$y''-4y'+5y=6e^{2x}\sin x$
I get the roots from the characteristic equation: $r^2-4r+5=0 \implies r=2\pm i $
Then I set up:
$y_{h}=c_{1}e^{2x}\sin x+c_{2}e^{2x}\cos x$
And I know that:
$y_p=e^{2x}(A\sin x+B\cos x)x $
But the problem arrives when I differentiate. The problem is the $x$ at the end; I don't know how to work with it, though I can solve the other differential equation types. For this problem I end up with:
$$-xB\cos x-2B\cos x+4A\cos x-2xB\sin x-xA\sin x+2B\sin x$$
after differentiating and substituting back into the equation. But I don't know what to do with the $x$'s.
Answer
If I may suggest.
You can make your life much easier if you start from the very beginning with the substitution $$y=e^{2x}z \implies y'=e^{2 x} \left(z'+2 z\right)\implies y''=e^{2 x} \left(z''+4 z'+4z\right)$$ This turns the differential equation into $$z''+z=6\sin(x)$$ and then, almost as you wrote, the particular solution will be $$z_p=x(A \sin(x)+B \cos(x))$$ $$z'_p=\sin (x) (A-B x)+\cos (x) (A x+B)$$ $$z''_p=\cos (x) (2 A-B x)-\sin (x) (A x+2 B)$$ $$z_p''+z_p=2 A \cos (x)-2 B \sin (x)=6\sin(x)$$ which is easy to solve.
I have $M:=\sqrt{\frac{a\cdot(b+ic)}{de}}$ and all variables $a,b,c,d,e$ are real. Now I am looking for the real and imaginary part of this, but this square root makes it kind of hard.
Answer
$$\sqrt{\frac{a(b+ic)}{de}}=\sqrt{\frac{a}{de}}\cdot\sqrt{b+ic}\qquad\left(\text{assuming }\frac{a}{de}>0\right)$$
Let $$\sqrt{b+ic}=x+iy$$
$$\implies b+ic=(x+iy)^2=x^2-y^2+2xyi$$
Equating the real & the imaginary parts, $b=x^2-y^2, c=2xy$
So, $b^2+c^2=(x^2-y^2)^2+(2xy)^2=(x^2+y^2)^2\implies x^2+y^2=\sqrt{b^2+c^2}$
We have $$x^2-y^2=b$$
$$\implies 2x^2=\sqrt{b^2+c^2}+b\implies x^2=\frac{\sqrt{b^2+c^2}+b}2$$
$$\implies x=\pm\frac{\sqrt{\sqrt{b^2+c^2}+b}}{\sqrt2}$$
and $$\implies y^2=x^2-b=\frac{\sqrt{b^2+c^2}-b}2$$
$$\implies y=\pm\frac{\sqrt{\sqrt{b^2+c^2}-b}}{\sqrt2}$$
Now, the sign of $y=$ sign of $x\cdot$ sign of $c$
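A quick check of these formulas against the principal complex square root (a Python sketch; the sign conventions follow the last line, taking $x\ge0$ and the sign of $y$ from $c$):

```python
import cmath
import math

def sqrt_xy(b, c):
    """Square root of b + ic via the derived formulas,
    choosing x >= 0 and sign(y) = sign(c) (the principal root)."""
    r = math.hypot(b, c)                      # sqrt(b^2 + c^2)
    x = math.sqrt((r + b) / 2)
    y = math.copysign(math.sqrt((r - b) / 2), c)
    return complex(x, y)

for b, c in [(3.0, 4.0), (-1.0, 1.0), (2.0, -5.0)]:
    root = sqrt_xy(b, c)
    assert cmath.isclose(root**2, complex(b, c))
    assert cmath.isclose(root, cmath.sqrt(complex(b, c)))
```

For example, $\sqrt{3+4i}=2+i$: here $\sqrt{b^2+c^2}=5$, $x=\sqrt{(5+3)/2}=2$, $y=\sqrt{(5-3)/2}=1$.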
How can I prove that a nonzero integer linear combination of two rationally independent irrational numbers is still irrational? That is to say, given two irrational numbers $a$ and $b$ such that $a/b$ is also irrational, why is $ma+nb$ irrational for all nonzero integers $m,n$?
Answer
That's not true: Take $a=\sqrt{2} -1$, $b=\sqrt{2}$. Then $\frac{a}{b} = 1 - \frac{1}{\sqrt{2}}$ isn't rational, but $b-a=1$.
How many ways can the digits $2,3,4,5,6$ be arranged to get a number divisible by $11$
I know that by the divisibility rule the alternating sum of the digits should be divisible by $11$. Also, the total number of ways the digits can be arranged is $5! = 120$.
Answer
Hint. By the divisibility rule by $11$ we have to count the arrangements $d_1,d_2,d_3,d_4,d_5$ of the digits $2,3,4,5,6$ such that $d_1+d_3+d_5-(d_2+d_4)$ is divisible by $11$. Notice that
$$-2=2+3+4-(5+6)\leq d_1+d_3+d_5-(d_2+d_4)\leq 4+5+6-(2+3)=10$$
therefore we should have $d_1+d_3+d_5=d_2+d_4=\frac{2+3+4+5+6}{2}=10$.
In how many ways can we do that?
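Brute force confirms the count implied by the hint (a Python sketch):

```python
from itertools import permutations

# All 120 arrangements of 2,3,4,5,6, tested directly for divisibility by 11.
count = sum(1 for p in permutations((2, 3, 4, 5, 6))
            if int("".join(map(str, p))) % 11 == 0)

# The hint's condition: the odd positions sum to 10 (which forces the
# even positions to sum to 10 as well, since all digits sum to 20).
by_hint = sum(1 for p in permutations((2, 3, 4, 5, 6))
              if p[0] + p[2] + p[4] == 10)
assert count == by_hint == 12
```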
I know there must be something unmathematical in the following but I don't know where it is:
\begin{align} \sqrt{-1} &= i \\ \\ \frac1{\sqrt{-1}} &= \frac1i \\ \\ \frac{\sqrt1}{\sqrt{-1}} &= \frac1i \\ \\ \sqrt{\frac1{-1}} &= \frac1i \\ \\ \sqrt{\frac{-1}1} &= \frac1i \\ \\ \sqrt{-1} &= \frac1i \\ \\ i &= \frac1i \\ \\ i^2 &= 1 \\ \\ -1 &= 1 \quad !!! \end{align}
Answer
Between your third and fourth lines, you use $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$. This is only (guaranteed to be) true when $a\ge 0$ and $b>0$.
edit: As pointed out in the comments, what I meant was that the identity $\frac{\sqrt{a}}{\sqrt{b}}=\sqrt{\frac{a}{b}}$ has domain $a\ge 0$ and $b>0$. Outside that domain, applying the identity is inappropriate, whether or not it "works."
In general (and this is the crux of most "fake" proofs involving square roots of negative numbers), $\sqrt{x}$ where $x$ is a negative real number ($x<0$) must first be rewritten as $i\sqrt{|x|}$ before any other algebraic manipulations can be applied (because the identities relating to manipulation of square roots [perhaps exponentiation with non-integer exponents in general] require nonnegative numbers).
This similar question, focused on $-1=i^2=(\sqrt{-1})^2=\sqrt{-1}\sqrt{-1}\overset{!}{=}\sqrt{-1\cdot-1}=\sqrt{1}=1$, is using the similar identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which has domain $a\ge 0$ and $b\ge 0$, so applying it when $a=b=-1$ is invalid.
Let $a,b,n \in Z$ with $n > 0$ and $a \equiv b \mod n$. Also, let $c_0,c_1,\ldots,c_k \in Z$. Show that :
$c_0 + c_1a + \ldots + c_ka^k \equiv c_0 + c_1b + \ldots + c_kb^k \pmod n$.
For the proof I tried :
$a = b + ny$ for some $y \in Z$.
If I multiply both sides by $c_1 + \ldots + c_k$ I obtain:
$c_1a + c_2a + \ldots + c_ka = (c_1b + c_2b + \ldots + c_k) (b + ny)$.
However, I can't see how to make this work for the higher powers $a^2, \ldots, a^k$.
Answer
1) Prove that $ca \equiv cb \pmod n$ for any integer $c$.
2) Show that by induction that means $a^k \equiv b^k \pmod n$ for any natural $k$.
3) Show that if $a\equiv b\pmod n$ and $a' \equiv b'\pmod n$ that $a+a'\equiv b +b' \pmod n$.
4) Show your result follows by induction and combination of 1)–3).
....
Or. Note that $a^k - b^k = (a-b)(a^{k-1}+a^{k-2}b + .... +ab^{k-2} + b^{k-1})$.
And that $(c_0 + c_1a + ... + c_ka^k) - (c_0 + c_1b + ... + c_kb^k)=$
$c_1(a-b) + c_2(a^2 - b^2) + \cdots + c_k(a^k - b^k) =$
.... And therefore......
I am reading Chapter 1 Example 11 of 'Counterexamples in Analysis' by Gelbaum and Olmstead. This section illustrates counterexamples of functions defined on $\mathbb{Q}$ embedded in $\mathbb{R}$ of statements that are usually true for functions defined on a real domain. Almost all examples have the assumption that the function (defined on a rational domain) is continuous, for example, the book gives a counterexample of:
A function continuous and bounded on a closed interval but not
uniformly continuous.
My questions are, what is an example of a discontinuous real function defined on $\mathbb{Q}$, that is: $f:\mathbb{Q}\rightarrow\mathbb{R}$? Are all functions defined on $\mathbb{Q}$ discontinuous (similar to how functions defined on the set of natural numbers are always continuous)?
Answer
1). All functions defined on $\mathbb{N}$ are continuous (not discontinuous).
2). An example of a function $f: \mathbb{Q} \to \mathbb{R}$ that is discontinuous is $f = \chi_{\{0\}}$, i.e. $f(x) = 1$ iff $x = 0$ ($x \in \mathbb{Q}$). One can see that this is discontinuous by noting that $f(\frac{1}{n}) = 0$ for each $n \ge 1$, while $f(\lim_n \frac{1}{n}) = f(0) = 1$.
This inequality should be fairly easy to show. I think I'm just having trouble looking at it the right way (It's used in a proof without explanation).
$$(1-\frac{1}{\log ^{2} n})^{(2 \log n) -1}\geq e^{-2/\log n}$$
Any help is much appreciated. Thanks. (Edit: $\log$ is base $2$.)
Answer
Assuming $n>2$ is a natural number and $\log = \log_2$:
Note that $(1-\frac 1x)^{x-1}$ is decreasing for $x > 1$ and tends to $e^{-1}$ as $x \to \infty$, so $(1-\frac 1x)^{x-1} > e^{-1}$ for all $x > 1$.
Taking $x = \log^2 n$ (and $\log^2 n > 1$ for $n > 2$) gives
$( 1 - \frac 1{\log^2 n})^{\log^2 n - 1} > e^{-1}$
Raising both sides to the positive power $\frac{2\log n - 1}{\log^2 n - 1}$ gives
$( 1 - \frac 1{\log^2 n})^{2\log n - 1} > e^{-\frac{2\log n - 1}{\log^2 n - 1}}$
Finally, $\frac{2\log n - 1}{\log^2 n - 1} \le \frac{2}{\log n}$ is equivalent to $\log n\,(2\log n - 1) \le 2(\log^2 n - 1)$, i.e. to $\log n \ge 2$, which holds for $n \ge 4$. Hence for $n \ge 4$
$( 1 - \frac 1{\log^2 n})^{2\log n - 1} > e^{\frac{-2}{\log n}} $
and the remaining case $n = 3$ is easily checked directly.
If $n = 2$ then $\log n = 1$, so the left side is $( 1 - \frac 1{\log^2 n})^{2\log n - 1} = 0^1 = 0$ while the right side is $e^{-2} > 0$, so the inequality fails at $n=2$.
Likewise if $n=1$ we have division by $0$.
Perhaps $\log = \ln =\log_e$?
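Whatever the intended base, a direct numerical check (Python, with $\log=\log_2$) suggests the inequality does hold for every $n \ge 3$:

```python
import math

# Check (1 - 1/log2(n)^2)^(2*log2(n) - 1) >= exp(-2/log2(n)) for 3 <= n < 1000.
for n in range(3, 1000):
    L = math.log2(n)
    lhs = (1 - 1 / L**2) ** (2 * L - 1)
    rhs = math.exp(-2 / L)
    assert lhs >= rhs
```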
Why does the following hold:
\begin{equation*}
\displaystyle \sum\limits_{n=0}^{\infty} 0.7^n=\frac{1}{1-0.7} = 10/3 ?
\end{equation*}
Can we generalize the above to
$\displaystyle \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ ?
Are there some values of $x$ for which the above formula is invalid?
What if we take only a finite number of terms? Is there a simpler formula?
$\displaystyle \sum_{n=0}^{N} x^n$
Is there a name for such a sequence?
This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.
and here: List of abstract duplicates.
Answer
By definition, a "series" (an "infinite sum")
$$\sum_{n=k}^{\infty} a_n$$
is defined to be a limit, namely
$$\sum_{n=k}^{\infty} a_n= \lim_{N\to\infty} \sum_{n=k}^N a_n.$$
That is, the "infinite sum" is the limit of the "partial sums", if this limit exists. If the limit exists, equal to some number $S$, we say the series "converges" to the limit, and we write
$$\sum_{n=k}^{\infty} a_n = S.$$
If the limit does not exist, we say the series diverges and is not equal to any number.
So writing that
$$\sum_{n=0}^{\infty} 0.7^n = \frac{1}{1-0.7}$$
means that we are asserting that
$$\lim_{N\to\infty} \sum_{n=0}^N0.7^n = \frac{1}{1-0.7}.$$
So what your question is really asking is: why is this limit equal to $\frac{1}{1-0.7}$? (Or rather, that is the only way to make sense of the question).
In order to figure out the limit, it is useful (but not strictly necessary) to have a formula for the partial sums,
$$s_N = \sum_{n=0}^N 0.7^n.$$
This is where the formulas others have given come in. If you take the $N$th partial sum and multiply by $0.7$, you get
$$\begin{array}{rcrcrcrcrcrcl}
s_N &= 1 &+& (0.7) &+& (0.7)^2 &+& \cdots &+& (0.7)^N\\
(0.7)s_N &= &&(0.7) &+& (0.7)^2 &+&\cdots &+&(0.7)^N &+& (0.7)^{N+1}
\end{array}$$
so that
$$(1-0.7)s_N = s_N - (0.7)s_N = 1 - (0.7)^{N+1}.$$
Solving for $s_N$ gives
$$s_N = \frac{1 - (0.7)^{N+1}}{1-0.7}.$$
What is the limit as $N\to\infty$? The only part of the expression that depends on $N$ is $(0.7)^{N+1}$. Since $|0.7|\lt 1$, then $\lim\limits_{N\to\infty}(0.7)^{N+1} = 0$. So,
$$\lim_{N\to\infty}s_N = \lim_{N\to\infty}\left(\frac{1-(0.7)^{N+1}}{1-0.7}\right) = \frac{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}(0.7)^{N+1}}{\lim\limits_{N\to\infty}1 - \lim\limits_{N\to\infty}0.7} = \frac{1 - 0}{1-0.7} = \frac{1}{1-0.7}.$$
Since the limit exists, then we write
$$\sum_{n=0}^{\infty}(0.7)^n = \frac{1}{1-0.7}.$$
More generally, a sum of the form
$$a + ar + ar^2 + ar^3 + \cdots + ar^k$$
with $a$ and $r$ constant is said to be a "geometric series" with initial term $a$ and common ratio $r$. If $a=0$, then the sum is equal to $0$. If $r=1$, then the sum is equal to $(k+1)a$. If $r\neq 1$, then we can proceed as above. Letting
$$S = a +ar + \cdots + ar^k$$
we have that
$$S - rS = (a+ar+\cdots+ar^k) - (ar+ar^2+\cdots+ar^{k+1}) = a - ar^{k+1}$$
so that
$$(1-r)S = a(1 - r^{k+1}).$$
Dividing through by $1-r$ (which is not zero since $r\neq 1$), we get
$$S = \frac{a(1-r^{k+1})}{1-r}.$$
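The finite formula is easy to test exhaustively in exact arithmetic (a Python sketch using `fractions`):

```python
from fractions import Fraction

def geom_sum(a, r, k):
    """Closed form a(1 - r^(k+1))/(1 - r), valid for r != 1."""
    return a * (1 - r ** (k + 1)) / (1 - r)

# Compare against the term-by-term sum for several a, r (all r != 1), k.
for a in (Fraction(1), Fraction(3), Fraction(-2)):
    for r in (Fraction(1, 2), Fraction(-3), Fraction(7, 10)):
        for k in range(6):
            direct = sum(a * r**n for n in range(k + 1))
            assert geom_sum(a, r, k) == direct
```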
A series of the form
$$
\sum_{n=0}^{\infty}ar^{n}
$$
with $a$ and $r$ constants is called an infinite geometric series.
If $r=1$, then
$$
\lim_{N\to\infty}\sum_{n=0}^{N}ar^{n}
= \lim_{N\to\infty}\sum_{n=0}^{N}a
= \lim_{N\to\infty}(N+1)a
= \pm\infty \qquad (a \neq 0),
$$
so the series diverges when $a\neq 0$ (and trivially sums to $0$ when $a=0$). If $r\neq 1$, then using the formula above we have:
$$
\sum_{n=0}^{\infty}ar^n = \lim_{N\to\infty}\sum_{n=0}^{N}ar^{n} = \lim_{N\to\infty}\frac{a(1-r^{N+1})}{1-r}.
$$
The limit exists if and only if $\lim\limits_{N\to\infty}r^{N+1}$ exists. Since
$$
\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}
0 &\mbox{if $|r|\lt 1$;}\\
1 & \mbox{if $r=1$;}\\
\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}
\end{array}\right.
$$
it follows that:
$$
\begin{align*}
\sum_{n=0}^{\infty}ar^{n} &=\left\{\begin{array}{ll}
0 &\mbox{if $a=0$;}\\
\text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\
\lim\limits_{N\to\infty}\frac{a(1-r^{N+1})}{1-r} &\mbox{if $r\neq 1$;}\end{array}\right.\\
&= \left\{\begin{array}{ll}
\text{diverges}&\mbox{if $a\neq 0$ and $r=1$;}\\
\text{diverges}&\mbox{if $a\neq 0$, and $r=-1$ or $|r|\gt 1$;}\\
\frac{a(1-0)}{1-r}&\mbox{if $|r|\lt 1$;}
\end{array}\right.\\
&=\left\{\begin{array}{ll}
\text{diverges}&\mbox{if $a\neq 0$ and $|r|\geq 1$;}\\
\frac{a}{1-r}&\mbox{if $|r|\lt 1$.}
\end{array}\right.
\end{align*}
$$
Your particular example has $a=1$ and $r=0.7$.
Since this recently came up (09/29/2011), let's provide a formal proof that
$$
\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}
0 &\mbox{if $|r|\lt 1$;}\\
1 & \mbox{if $r=1$;}\\
\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}
\end{array}\right.
$$
If $r\gt 1$, then write $r=1+k$, with $k\gt0$. By the binomial theorem, $r^n = (1+k)^n \gt 1+nk$, so it suffices to show that for every real number $M$ there exists $n\in\mathbb{N}$ such that $nk\gt M$. This is equivalent to asking for a natural number $n$ such that $n\gt \frac{M}{k}$, and this holds by the Archimedean property; hence if $r\gt 1$, then $\lim\limits_{n\to\infty}r^n$ does not exist. From this it follows that if $r\lt -1$ then the limit also does not exist: given any $M\gt 0$, there exists $n$ such that $r^{2n}\gt M$ and $r^{2n+1}\lt -M$, so $\lim\limits_{n\to\infty}r^n$ does not exist if $r\lt -1$.
If $r=-1$, then for every real number $L$ either $|L-1|\gt \frac{1}{2}$ or $|L+1|\gt \frac{1}{2}$. Thus, for every $L$ and for every $M$ there exists $n\gt M$ such that $|L-r^n|\gt \frac{1}{2}$, proving the limit cannot equal $L$; thus, the limit does not exist. If $r=1$, then $r^n=1$ for all $n$, so for every $\epsilon\gt 0$ we can take $N=1$, and for all $n\geq N$ we have $|r^n-1|\lt\epsilon$, hence $\lim\limits_{n\to\infty}1^n = 1$. Similarly, if $r=0$, then $\lim\limits_{n\to\infty}r^n = 0$ by taking $N=1$ for any $\epsilon\gt 0$.
Next, assume that $0\lt r\lt 1$. Then the sequence $\{r^n\}_{n=1}^{\infty}$ is strictly decreasing and bounded below by $0$: we have $0\lt r \lt 1$, so multiplying by $r\gt 0$ we get $0\lt r^2 \lt r$. Assuming $0\lt r^{k+1}\lt r^k$, multiplying through by $r$ we get $0\lt r^{k+2}\lt r^{k+1}$, so by induction we have that $0\lt r^{n+1}\lt r^n$ for every $n$.
Since the sequence is bounded below, let $\rho\geq 0$ be the infimum of $\{r^n\}_{n=1}^{\infty}$. Then $\lim\limits_{n\to\infty}r^n =\rho$: indeed, let $\epsilon\gt 0$. By the definition of infimum, there exists $N$ such that $\rho\leq r^N\lt \rho+\epsilon$; hence for all $n\geq N$,
$$|\rho-r^n| = r^n-\rho \leq r^N-\rho \lt\epsilon.$$
Hence $\lim\limits_{n\to\infty}r^n = \rho$.
In particular, $\lim\limits_{n\to\infty}r^{2n} = \rho$, since $\{r^{2n}\}_{n=1}^{\infty}$ is a subsequence of the converging sequence $\{r^n\}_{n=1}^{\infty}$. On the other hand, I claim that $\lim\limits_{n\to\infty}r^{2n} = \rho^2$: indeed, let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$, $r^n - \rho\lt\epsilon$. Moreover, we can assume that $\epsilon$ is small enough so that $\rho+\epsilon\lt 1$. Then
$$|r^{2n}-\rho^2| = |r^n-\rho||r^n+\rho| = (r^n-\rho)(r^n+\rho)\lt (r^n-\rho)(\rho+\epsilon) \lt r^n-\rho\lt\epsilon.$$
Thus, $\lim\limits_{n\to\infty}r^{2n} = \rho^2$. Since a sequence can have only one limit, and the sequence of $r^{2n}$ converges to both $\rho$ and $\rho^2$, then $\rho=\rho^2$. Hence $\rho=0$ or $\rho=1$. But $\rho=\mathrm{inf}\{r^n\mid n\in\mathbb{N}\} \leq r \lt 1$. Hence $\rho=0$.
Thus, if $0\lt r\lt 1$, then $\lim\limits_{n\to\infty}r^n = 0$.
Finally, if $-1\lt r\lt 0$, then $0\lt |r|\lt 1$. Let $\epsilon\gt 0$. Then there exists $N$ such that for all $n\geq N$ we have $|r^n| = ||r|^n|\lt\epsilon$, since $\lim\limits_{n\to\infty}|r|^n = 0$. Thus, for all $\epsilon\gt 0$ there exists $N$ such that for all $n\geq N$, $| r^n-0|\lt\epsilon$. This proves that $\lim\limits_{n\to\infty}r^n = 0$, as desired.
In summary,
$$\lim_{N\to\infty}r^{N+1} = \left\{\begin{array}{ll}
0 &\mbox{if $|r|\lt 1$;}\\
1 & \mbox{if $r=1$;}\\
\text{does not exist} &\mbox{if $r=-1$ or $|r|\gt 1$}
\end{array}\right.$$
The argument suggested by Srivatsan Narayanan in the comments to deal with the case $0\lt|r|\lt 1$ is less clumsy than mine above: there exists $a\gt 0$ such that $|r|=\frac{1}{1+a}$. Then we can use the binomial theorem as above to get that
$$|r^n| = |r|^n = \frac{1}{(1+a)^n} \leq \frac{1}{1+na} \lt \frac{1}{na}.$$
By the Archimedean Property, for every $\epsilon\gt 0$ there exists $N\in\mathbb{N}$ such that $Na\gt \frac{1}{\epsilon}$, and hence for all $n\geq N$, $\frac{1}{na}\leq \frac{1}{Na} \lt\epsilon$. This proves that $\lim\limits_{n\to\infty}|r|^n = 0$ when $0\lt|r|\lt 1$, without having to invoke the infimum property explicitly.
Having only recently encountered conjugation (in the group-theory sense) in my math adventures/education, I can't help but ask: why? It isn't clear (at first glance) why it's worthwhile defining such a term/homomorphism/idea. What does it really tell us about group structure? In $S_n$ conjugacy has the nice interpretation of equivalent cycle structures, and for finite groups, conjugate elements can be thought of as having the same cycle structure in an encompassing symmetric group. But since a group of $n$ elements is far smaller than the $n!$ elements of that symmetric group, this interpretation isn't so useful.
Can someone offer an interpretation of what these equivalence classes are in a general group? Is the only reason to define them so that we can define quotient groups?
Thanks for any help. Sorry if the post is broad/verbose/may not have an answer.
So I ran into this problem today. It asks me to use an identity to simplify the sum.
$$\sum_{j=7}^{27}\ln\left(\frac{j+1}{j}\right)$$
I have no idea where to start. I don't know any identity that fits this formation. Thanks.
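The relevant identity is $\ln\frac{j+1}{j}=\ln(j+1)-\ln j$, which makes the sum telescope to $\ln 28-\ln 7=\ln 4$. A quick numerical check (Python):

```python
import math

# ln((j+1)/j) = ln(j+1) - ln(j): consecutive terms cancel, leaving
# ln(28) - ln(7) = ln(28/7) = ln(4).
total = sum(math.log((j + 1) / j) for j in range(7, 28))
assert math.isclose(total, math.log(4))
```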
The integral $\displaystyle\int\limits_0^{\infty}\frac {\mathrm dx}{\sqrt{1+x^4}}$ is equal to $\displaystyle \frac{\Gamma \left(\frac{1}{4}\right)^2}{4 \sqrt{\pi }}$.
It is calculated or verified with a computer algebra system that $\displaystyle \frac{\Gamma \left(\frac{1}{4}\right)^2}{4 \sqrt{\pi }} = K\left(\frac{1}{2}\right)$ , where $K(m)$ is the complete elliptic integral of the first kind. This is in relation to what is called the elliptic integral singular value.
It is also known or verified that
$\displaystyle K\left(\frac{1}{2}\right) =\displaystyle \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{1-\frac{\sin ^2(t)}{2}}} \, dt= \frac{1}{2} \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{\sin (t) \cos (t)}} \, dt$.
Can one prove directly or analytically that
$\displaystyle\int\limits_0^{\infty}\frac {\mathrm dx}{\sqrt{1+x^4}} =\frac{1}{2} \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{\sin (t) \cos (t)}} \, dt =\displaystyle \int_0^{\frac{\pi }{2}} \frac{1}{\sqrt{1-\frac{\sin ^2(t)}{2}}} \, dt = K\left(\frac{1}{2}\right) $ ?
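Not a proof, but the quantities can at least be compared numerically (a Python sketch; `simpson` is my own helper, and the tail of the first integral is folded into $[0,1]$ via $x\mapsto 1/x$, which maps $\int_1^\infty\frac{dx}{\sqrt{1+x^4}}$ to $\int_0^1\frac{dt}{\sqrt{1+t^4}}$):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# int_0^inf dx/sqrt(1+x^4) = 2 * int_0^1 dx/sqrt(1+x^4) after folding.
I = 2 * simpson(lambda x: 1 / math.sqrt(1 + x**4), 0, 1)
# K(1/2) in the smooth trigonometric form (no endpoint singularity).
K_half = simpson(lambda t: 1 / math.sqrt(1 - math.sin(t) ** 2 / 2),
                 0, math.pi / 2)
gamma_form = math.gamma(0.25) ** 2 / (4 * math.sqrt(math.pi))
assert abs(I - gamma_form) < 1e-9
assert abs(K_half - gamma_form) < 1e-9
```

All three agree to high precision (about $1.8540746$).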
Let $A$ and $B$ be sets, where $f : A \rightarrow B$ is a function. Show that the following properties are equivalent*:
- $f$ is injective.
- For all $X, Y \subset A$ is valid: $f(X \cap Y)=f(X)\cap f(Y)$
- For all $Y \subset X \subset A$ is valid: $f(X \setminus Y)=f(X) \setminus f(Y)$.
I do know what injective means, but I thought properties (2) and (3) were valid for any kind of function. Just to see if I understood this right:
$f(X\cap Y)$ means: first take the intersection of $X$ and $Y$, and then map it to $B$ via $f$.
$f(X)\cap f(Y)$ means: take the images $f(X)$ and $f(Y)$, and then intersect them.
Aren't those both properties valid for all functions? I can't think of a counter example. Thanks in advance guys!
*edited by SN.
Answer
No they are not true for any function:
Take a function $f:\{0,1\}\to\{0,1\}$ such that $f(0)=1$ and $f(1)=1$. Then $f[\{0\}\cap\{1\}]=f[\varnothing]=\varnothing$ but $f[\{0\}]\cap f[\{1\}]=\{1\}\cap\{1\}=\{1\}$. This function provides a counterexample for the second case as well: $f[\{0,1\}\setminus\{0\}]=\{1\}$, while $f[\{0,1\}]\setminus f[\{0\}]=\{1\}\setminus\{1\}=\varnothing$.
Note that for any function $f$ it is true that $f[X\cap Y]\subseteq f[X]\cap f[Y]$ and it is also true that $f[X]\setminus f[Y]\subseteq f[X\setminus Y]$.
As for the equivalence, a function $f$ is called injective exactly when $x\neq y$ implies $f(x)\neq f(y)$ (or equivalently $f(x)=f(y)$ implies $x=y$). They are sometimes called $1-1$:
$1\Rightarrow 2$. Let $f:A\to B$ be injective. We just need to show that $f[X]\cap f[Y]\subseteq f[X\cap Y]$. Let $x\in f[X]\cap f[Y]$. Then there is some $a\in X$ and some $b\in Y$ such that $f(a)=f(b)=x$. By the definition of injective functions $a=b$ thus $a\in X\cap Y$ or $x\in f[X\cap Y]$.
$2\Rightarrow 1$. Now let $f[X]\cap f[Y]=f[X\cap Y]$. Let $f(a)=f(b)$. We have that $f[\{a\}\cap\{b\}]=f[\{a\}]\cap f[\{b\}]$. Thus $f[\{a\}\cap\{b\}]$ is not empty (since it is equal to $f[\{a\}]\cap f[\{b\}]$ which is not). Therefore $\{a\}\cap\{b\}$ is not empty, which means that $a=b$.
$1\Rightarrow 3$. Let $f:A\to B$ be injective. We just need to show that $f[X\setminus Y]\subseteq f[X]\setminus f[Y]$. Let $x\in f[X\setminus Y]$. Of course $x\in f[X]$. We have that there is some $a\in X\setminus Y$ such that $f(a)=x$. For every $b\in Y$ we have that $a\neq b$ thus $f(a)\neq f(b)$. Thus $x\notin f[Y]$ and thus $x\in f[X]\setminus f[Y]$.
$3\Rightarrow 1$. Conversely assume that $f[X\setminus Y]= f[X]\setminus f[Y]$. Let $f(a)=f(b)$. Then $f[\{a,b\}\setminus\{b\}]=f[\{a,b\}]\setminus f[\{b\}]$. The second set is empty, thus $f[\{a,b\}\setminus\{b\}]$ is empty. Then $\{a,b\}\setminus\{b\}$ is empty, which means $a=b$.
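The counterexample and the general inclusion are easy to check mechanically; here is a small illustrative sketch (not part of the proof), with a dict standing in for $f$:

```python
def image(f, s):
    """Image of the set s under the function f (given as a dict)."""
    return {f[x] for x in s}

# The function f(0) = f(1) = 1 from the answer.
f = {0: 1, 1: 1}
X, Y = {0}, {1}

# f[X ∩ Y] = f[∅] = ∅, but f[X] ∩ f[Y] = {1}: equality fails.
assert image(f, X & Y) == set()
assert image(f, X) & image(f, Y) == {1}

# The inclusion f[X ∩ Y] ⊆ f[X] ∩ f[Y] holds for every function.
assert image(f, X & Y) <= image(f, X) & image(f, Y)
```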
$$\int_0^{\infty} \frac{dx}{1+x^3}$$
So far I have found the indefinite integral, which is:
$$-\frac{1}{6} \ln |x^2-x+1|+\frac{1}{\sqrt{3}} \arctan\left(\frac{2x-1}{\sqrt{3}}\right)+\frac{1}{3}\ln|x+1|$$
Now what do I need to do in order to calculate the improper integral?
Answer
Next, simplify
$$
F(x)=-\frac{1}{6}\ln|x^2-x+1|+\frac{1}{\sqrt{3}}\arctan{\frac{2x-1}{\sqrt{3}}}+\frac{1}{3}\ln|x+1|
$$
$$
=\frac{1}{\sqrt{3}}\arctan\left(\frac{2x-1}{\sqrt{3}}\right)+\frac{1}{3}\ln|x+1|-\frac{1}{3}\ln\sqrt{|x^2-x+1|}
$$
$$
=\frac{1}{\sqrt{3}}\arctan\left(\frac{2x-1}{\sqrt{3}}\right)+\frac{1}{3}\ln\left(\frac{|x+1|}{\sqrt{|x^2-x+1|}}\right).
$$
Then
$$\int_0^\infty \frac{dx}{1+x^3}=\lim_{X\rightarrow\infty}F(X)-F(0).$$
Compute the limit, and you are done.
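As a sanity check (a numeric sketch, not part of the exercise): the logarithmic part of $F$ tends to $0$ and the arctangent tends to $\frac{\pi}{2}$, so the limit works out to $\frac{2\pi}{3\sqrt3}$. Evaluating $F$ at a large argument should agree:

```python
import math

def F(x):
    # The antiderivative found above.
    return (-math.log(abs(x * x - x + 1)) / 6
            + math.atan((2 * x - 1) / math.sqrt(3)) / math.sqrt(3)
            + math.log(abs(x + 1)) / 3)

approx = F(1e8) - F(0)                    # F(X) - F(0) for large X
exact = 2 * math.pi / (3 * math.sqrt(3))  # the value of the limit
assert abs(approx - exact) < 1e-6
```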
How to find infinite sum How to find infinite sum $$1+\dfrac13+\dfrac{1\cdot3}{3\cdot6}+\dfrac{1\cdot3\cdot5}{3\cdot6\cdot9}+\dfrac{1\cdot3\cdot5\cdot7}{3\cdot6\cdot9\cdot12}+\dots? $$
I can see that 3 cancels out after 1/3, but what next? I can't go further.
Answer
As the denominator of the $n$th term $T_n$ is $\displaystyle3\cdot6\cdot9\cdot12\cdots(3n)=3^n \cdot n!$
(Setting the first term to be $T_0=1$)
and the numerator of $n$th term is $\displaystyle1\cdot3\cdot5\cdots(2n-1)$ which is a product of $n$th terms of an Arithmetic Series with common difference $=2,$
we can write
$\displaystyle1\cdot3\cdot5\cdots(2n-1)=-\frac12\cdot\left(-\frac12-1\right)\cdots\left(-\frac12-(n-1)\right)\cdot(-2)^n$
which suitably resembles the numerator of Generalized binomial coefficients
$$\implies T_n=\frac{-\frac12\cdot\left(-\frac12-1\right) \cdots\left(-\frac12-(n-1)\right)}{n!}\left(-\frac23\right)^n$$
So, here $\displaystyle z=-\frac23,\alpha=-\frac12$ in $\displaystyle(1+z)^\alpha$
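So the sum is $(1+z)^\alpha=\left(\frac13\right)^{-1/2}=\sqrt3$. A short numeric sketch confirming this, using the term ratio $T_n/T_{n-1}=\frac{2n-1}{3n}$:

```python
import math

total, term = 1.0, 1.0               # T_0 = 1
for n in range(1, 60):
    term *= (2 * n - 1) / (3 * n)    # T_n = T_{n-1} * (2n-1)/(3n)
    total += term

assert abs(total - math.sqrt(3)) < 1e-9
```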
Let $A,B$ be any two sets. I really think that the statement $|A|\leq|B|$ or $|B|\leq|A|$ is true. Formally:
$$\forall A\forall B[\,|A|\leq|B| \lor\ |B|\leq|A|\,]$$
If this statement is true, what is the proof ?
Answer
This claim, the principle of cardinal comparability (PCC), is equivalent to the Axiom of Choice.
If the Axiom of Choice is true then Zorn's Lemma is true and a proof of the PCC is a classical application of Zorn's Lemma.
If PCC holds then using Hartogs' Lemma it is quite easy to show that the Well-Ordering Principle holds, which in turn implies (easily) the Axiom of Choice.
A complete presentation of these (and some other) equivalences, is treated in a rather elementary fashion but including all the details, in the book: http://www.staff.science.uu.nl/~ooste110/syllabi/setsproofs09.pdf starting on page 31.
How can I prove $\sqrt{\sqrt2}$ to be irrational?
I know that $\sqrt2$ is an irrational number, it can be proved by contradiction, but I'm not sure how to prove that $\sqrt{\sqrt2} = \sqrt[4]{2}$ is irrational as well.
Answer
Suppose $x= \sqrt{ \sqrt 2}$ was rational, then so is its square $x^2=\sqrt 2$ which you have shown is irrational. Contradiction!
It all about maths I don't understand, how can I solve this exercise, someone say that is a $2$-dimensional problem, but I can not figure out for myself, I already understand what to do, but I don't know why is the formula, can someone explain this exercise for a dummy? http://www.codeforces.com/contest/1/problem/A
Why is the solution of this problem $\left\lfloor\dfrac{m+a-1}{a}\right\rfloor\times \left\lfloor\dfrac{n+a-1}{a}\right\rfloor$?
Help really appreciated.
Answer
So break it into one dimensional problems. How many flagstones of length $a$ does it take to cover a path of $m$ meters? If you can trust $a$ and $m$ to be integers, $m/a$ is correct if $a$ divides $m$. Otherwise you need $\lfloor{m/a}\rfloor+1$. One way to combine these is $\lfloor(m+a-1)/a\rfloor$. Try it with some small numbers to see how it works.
For two dimensions, just multiply two one dimensional problems.
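A minimal sketch of the two-dimensional count (the function name is mine, not from the problem statement):

```python
def flagstones(n, m, a):
    # Ceiling division without floats: ceil(x / a) == (x + a - 1) // a
    return ((n + a - 1) // a) * ((m + a - 1) // a)

assert (7 + 3 - 1) // 3 == 3      # ceil(7/3) = 3: a partial stone is still a stone
assert (6 + 3 - 1) // 3 == 2      # exact division is unaffected
assert flagstones(6, 6, 4) == 4   # the sample test of the linked problem
```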
I am wondering whether we have for $$f(x):=\sum_{k=0}^{\infty} \frac{|x|^{2k}}{(k!)^2} $$ that
$$\lim_{x \rightarrow \infty} \frac{e^{\varepsilon |x|^{\varepsilon}}}{f(x)} = \infty$$
for any $\varepsilon>0$?
I assume that this is true, as factorials should somehow outgrow powers, but I do not see how to show this rigorously.
Does anybody have an idea?
I have seen the following observation. Please give a proof of it.
We know that a number is divisible by 11 if and only if the difference between the sum of the odd-numbered digits (1st, 3rd, 5th, ...) and the sum of the even-numbered digits (2nd, 4th, ...) is divisible by 11. I have checked the analogue for other divisors in different base systems. For example, suppose we want to know whether 27 is divisible by 3.
To check divisibility by 3, take 1 less than 3 (i.e., 2) as the base and proceed as shown below:
now 27 = 2 × 13 + 1, and then
13 = 2 × 6 + 1, and then
6 = 2 × 3 + 0, and then
3 = 2 × 1 + 1, and then
1 = 2 × 0 + 1
The remainders give the base-2 representation:
27 = 11011
The difference between the sums of alternate digits is $(1 + 0 + 1) - (1 + 1) = 0$.
So, 27 is divisible by 3.
What I want to say is this: to check the divisibility of a number by $K$, write the number in base $K-1$ and then apply the divisibility rule for 11. Why does this method work? Please give me the proof. Thanks in advance.
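The observation can be stated as code; this sketch (my own, for experimentation) tests divisibility by $k$ via the alternating digit sum in base $k-1$:

```python
def divisible_by(x, k):
    """Test k | x by the 11-style rule applied in base k - 1."""
    digits = []
    n = x
    while n:
        digits.append(n % (k - 1))   # successive remainders = base-(k-1) digits
        n //= k - 1
    alternating = sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))
    return alternating % k == 0

# The worked example: 27 is divisible by 3 (base-2 digits 11011).
assert divisible_by(27, 3)
# Agreement with plain modular arithmetic on a range of cases:
for x in range(1, 400):
    for k in (3, 7, 11):
        assert divisible_by(x, k) == (x % k == 0)
```

The underlying reason: the base $b=k-1$ satisfies $b\equiv -1\pmod k$, so $\sum d_i b^i \equiv \sum d_i(-1)^i \pmod k$, exactly as $10\equiv -1\pmod{11}$ for the base-10 rule.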
I need to show that the following real sequence is convergent. Let $r,l>0$ be constant then the sequence $(a_n)_{n\in \mathbb{N}}$ is defined by
$$ a_n=\sin^{-1}\left(\frac{r}{2n}\right)\sqrt{1+n^2l^2}.$$
Furthermore, I also need to determine the limit (which is $\frac{r}{2}l$).
Thank you in advance for your answers and ideas.
Answer
Since $\lim _{ x\rightarrow 0 }{ \frac { \arcsin { x } }{ x } } =1$, we have $\lim _{ n\rightarrow \infty }{ \frac { \arcsin { \left( \frac { r }{ 2n } \right) } }{ \left( \frac { r }{ 2n } \right) } } =1$, so
$$\lim _{ n\rightarrow \infty }{ \arcsin { \left( \frac { r }{ 2n } \right) } \sqrt { 1+n^{ 2 }l^{ 2 } } } =\lim _{ n\rightarrow \infty }{ \frac { \arcsin { \left( \frac { r }{ 2n } \right) } }{ \frac { r }{ 2n } } \cdot \frac { r }{ 2n } \cdot \sqrt { 1+n^{ 2 }l^{ 2 } } } =\\ =\lim _{ n\rightarrow \infty }{ \frac { r }{ 2n } \cdot n\cdot \sqrt { \frac { 1 }{ { n }^{ 2 } } +l^{ 2 } } } =\color{red}{\frac { lr }{ 2 }} $$
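A quick numeric sanity check of the limit (the values of $r$ and $l$ are arbitrary):

```python
import math

r, l = 3.0, 2.0
n = 10**7
a_n = math.asin(r / (2 * n)) * math.sqrt(1 + (n * l) ** 2)
assert abs(a_n - r * l / 2) < 1e-6   # the limit is lr/2 = 3 here
```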
This question originates from the definition of the Cox point process, but I suspect it might be a more general one.
If we define
$$Q(\cdot) = \int_{\mathcal M} P_{\Lambda}(\cdot)Q_{\Psi}(d\Lambda)$$
Then
$$\int_{\mathcal N} \mu(B) Q(d\mu) \stackrel{(\ast)}= \int_{\mathcal M} \int_{\mathcal N} \mu(B) P_{\Lambda}(d\mu) Q_{\Psi}(d\Lambda)$$
Where
$\mathcal M$ is a set of locally finite measures
$\mathcal N$ is a set of locally finite integer-valued measures
$P_\Lambda$ is the distribution of a Poisson process with intensity measure $\Lambda$
$\Psi$ is a random (diffusion) measure with distribution $Q_\Psi$
$B$ is a Borel set on the measurable space $X$ on which the measures in $\mathcal N$ are defined.
My question is: how can one justify the equality $(\ast)$? Intuitively it makes sense. Possibly this could be contrasted with integration w.r.t. $\nu(E) = \int_E f\; d\mu$, which gives $\int_E g \; d\nu = \int_E fg \; d\mu$, if such a contrast is helpful in answering the question.
Thank you.
Answer
Almost exactly two years later, I find myself wondering the same thing, only to find my own question on the topic. Anyway, here's a more rudimentary approach to answering it.
In fact, it's simply an application of the standard measure-theoretic approach (indicator function $\to$ simple function $\to$ non-negative function).
The definition
$$Q(D) = \int_{\mathcal M} P_{\Lambda}(D)\,Q_{\Psi}(d\Lambda), \qquad D \subseteq \mathcal N \text{ measurable}$$
is the special case for the indicator function. Take $f=1_D$, then we have
$$\int_{\mathcal N} f(\mu) Q(d\mu) = \int_{\mathcal M} \int_{\mathcal N} f(\mu) P_\Lambda(d\mu) Q_\Psi(d\Lambda)$$
from which we obtain the same equality for all non-negative $f:\mathcal N \to \mathbb R$ by the standard measure-theoretic argument.
The equality $(\ast)$ is then only an application of that equality for $f(\mu) = \mu(B)$, sometimes called the projection of measure $\mu$.
Let $H_n$ denote the $n$th harmonic number; i.e., $H_n = \sum\limits_{i=1}^n \frac{1}{i}$. I've got a couple of proofs of the following limiting expression, which I don't think is that well-known: $$\lim_{n \to \infty} \left(H_n - \frac{1}{2^n} \sum_{k=1}^n \binom{n}{k} H_k \right) = \log 2.$$
I'm curious about other ways to prove this expression, and so I thought I would ask here to see if anybody knows any or can think of any. I would particularly like to see a combinatorial proof, but that might be difficult given that we're taking a limit and we have a transcendental number on one side. I'd like to see any proofs, though. I'll hold off from posting my own for a day or two to give others a chance to respond first.
(The probability tag is included because the expression whose limit is being taken can also be interpreted probabilistically.)
(Added: I've accepted Srivatsan's first answer, and I've posted my two proofs for those who are interested in seeing them.
Also, the sort of inverse question may be of interest. Suppose we have a function $f(n)$ such that $$\lim_{n \to \infty} \left(f(n) - \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} f(k) \right) = L,$$ where $L$ is finite and nonzero. What can we say about $f(n)$? This question was asked and answered a while back; it turns out that $f(n)$ must be $\Theta (\log n)$. More specifically, we must have $\frac{f(n)}{\log_2 n} \to L$ as $n \to \infty$.)
Answer
I made a quick estimate in my comment. The basic idea is that the binomial distribution $2^{-n} \binom{n}{k}$ is concentrated around $k= \frac{n}{2}$. Simply plugging this value into the limit expression, we get $H_n-H_{n/2} \sim \ln 2$ for large $n$. Fortunately, formalizing the intuition isn't that hard.
Call the giant sum $S$. Notice that $S$ can be written as $\newcommand{\E}{\mathbf{E}}$
$$
\sum_{k=0}^{n} \frac{1}{2^{n}} \binom{n}{k} (H(n) - H(k)) = \sum_{k=0}^{n} \Pr[X = k](H(n) - H(k)) = \E \left[ H(n) - H(X) \right],
$$
where $X$ is distributed according to the binomial distribution $\mathrm{Bin}(n, \frac12)$. We need the following two facts about $X$:
Since the function $x \mapsto H(n) - H(x)$ is monotone decreasing, we have
$$
S \leqslant \color{Red}{H(n)} \color{Blue}{-H\left( \frac{n(1-\varepsilon)}{2} \right)} + \color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}.
$$
Plugging in the standard estimate $H(n) = \ln n + \gamma + O\Big(\frac1n \Big)$ for the harmonic sum, we get:
$$
\begin{align*}
S
&\leqslant \color{Red}{\ln n + \gamma + O \Big(\frac1n \Big)} \color{Blue}{- \ln \left(\frac{n(1-\varepsilon)}{2} \right) - \gamma + O \Big(\frac1n \Big)} +\color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}
\\ &\leqslant \ln 2 - \ln (1- \varepsilon) + o_{n \to \infty}(1)
\leqslant \ln 2 + O(\varepsilon) + o_{n \to \infty}(1). \tag{1}
\end{align*}
$$
An analogous argument gets the lower bound
$$
S \geqslant \ln 2 - \ln (1+\varepsilon) - o_{n \to \infty}(1) \geqslant \ln 2 - O(\varepsilon) - o_{n \to \infty}(1). \tag{2}
$$
Since the estimates $(1)$ and $(2)$ hold for all $\varepsilon > 0$, it follows that $S \to \ln 2$ as $n \to \infty$.
I'm unsure as to how to evaluate:
$$\lim\limits_{x\to 0} \frac{\sin x - x + \frac{x^3}{6}}{x^3}$$
The limits as $x \to 0$ of both the numerator and denominator equal $0$. Taking the derivative of the numerator and the denominator, we get:
$$\lim\limits_{x\to 0} \frac{x^2 + 2\cos x -2}{6x^2}$$
But I don't know how to evaluate this?
Many thanks for any help.
Answer
You can use l'Hospital as many times as needed as long as the indeterminate forms conditions are fulfilled. In this case, using Taylor series can be helpful, too:
$$\sin x = x - \frac{x^3}6 + \frac{x^5}{120} - \ldots = x - \frac{x^3}6 + \mathcal O(x^5)$$
$$\implies \frac{\sin x - x + \frac{x^3}6}{x^3} = \frac{\mathcal O(x^5)}{x^3} = \mathcal O(x^2) \xrightarrow[x \to 0]{} 0$$
Let $P(X) = (X-x_1)\ldots(X-x_N)$ be a complex polynomial with simple roots.
I define
$$Q(X) = P(X-a)-bP(X),$$
with $a\in\mathbb{C}$ and $b\neq 1$ so that $Q(X)$ is also a polynomial of degree $N$.
Let me note $y_1,\ldots,y_N$ the roots of $Q(X)$. I would like to prove that
I am not sure this is the case for all $a$ and $b$, but in the context of my problem it seems $Q$ has to verify these conditions.
I thought it would be a simple exercise, but I keep struggling on it.
Any help is much appreciated ! :)
Answer
It's not true.
As a counterexample, letting
we get
$\;\;\;Q(x)=5(x+1)^2$.
Staying with the case $n=2$, let $P(x)=(x-r)(x-s)$ with $r\ne s$.
Then for $b\ne 1$, the polynomial
$$Q(x)=P(x-a)-bP(x)$$
has simple roots if and only if either $b=0$ or
$$a^2\ne -\frac{(b-1)^2(r-s)^2}{4b}$$
Let's try an example with $n=3$ . . .
Let $P(x)=(x-1)x(x+1)$.
Then for $b\ne 1$, the polynomial
$$Q(x)=P(x-a)-bP(x)$$
has simple roots if and only if
$$
4b^4+(36a^2-16)b^3+(-27a^6+108a^4-72a^2+24)b^2+(36a^2-16)b+4
$$
is nonzero.
The results of that example suggest that for the general case, trying to find usable necessary and sufficient conditions on $a,b$ for $Q(x)$ to have simple roots is not likely to succeed.
As a more reasonable goal, one might try to find sufficient conditions for $Q(x)$ to have simple roots, expressed in terms of inequalities relating $|a|,|b|$.
For example, if $a$ is fixed, then $Q(x)$ will have simple roots if
or
So I am asked to find this limit
$$\lim_{x\to 0} \; \frac{2\cos(a+x)-\cos(a+2x)-\cos(a)}{x^2}$$
I know I am supposed to use the trig identity $\cos(u+v)=\cos(u)\cos(v)-\sin(u)\sin(v)$ but I am having trouble with the denominator. I am trying to use the standard limit $\lim_{x \to 0} \frac{\sin x}{x}=1$, but if I expand out every term, I don't have a sine everywhere. How would I deal with that? I tried simplifying but I am honestly getting nowhere.
Could I have some help with how I would simplify? Thanks.
Answer
HINT:
Using Prosthaphaeresis Formula,
$$\cos(a+2x)+\cos a=2\cos(a+x)\cos x$$
Now $\dfrac{1-\cos x}{x^2}=\left(\dfrac{\sin x}x\right)^2\cdot\dfrac1{1+\cos x}$
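Putting the hint together: the numerator becomes $2\cos(a+x)\,(1-\cos x)$, and since $\frac{1-\cos x}{x^2}\to\frac12$, the limit is $\cos a$. A numeric sketch (the value of $a$ is arbitrary):

```python
import math

a, x = 0.7, 1e-4
value = (2 * math.cos(a + x) - math.cos(a + 2 * x) - math.cos(a)) / (x * x)
assert abs(value - math.cos(a)) < 1e-3   # difference quotient ≈ cos(a)
```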
The following is a homework question:
Let $P(x)$ be a polynomial with integer coefficients and $P(x_1)=P(x_2)=P(x_3)=P(x_4)=P(x_5)=P(x_6)=P(x_7)=7$ where $x_i$ are distinct integers. Determine if $P(x)$ has integer zeros.
I've never done questions like this before. I started with this:
If $\deg(P) = 7$,
$$P(x)=\alpha(x-x_1)(x-x_2)(x-x_3)(x-x_4)(x-x_5)(x-x_6)(x-x_7)+7$$
where $\alpha$ is some integer.
However, the question doesn't state that the polynomial must be of the seventh degree. Even then, I don't see how I can determine if $P(x)$ has integer zeros without knowing all the $x_i$.
Can someone please help me? Thanks.
Edit:
Is this a valid solution?
$$P(x)=Q(x)(x-x_1)(x-x_2)(x-x_3)(x-x_4)(x-x_5)(x-x_6)(x-x_7)+7$$
If $P(n)=0$,
$$Q(n)(n-x_1)(n-x_2)(n-x_3)(n-x_4)(n-x_5)(n-x_6)(n-x_7)=-7$$
where $Q(x)$ is a polynomial of integer coefficients (therefore $Q(n)$ is an integer)
And since all the terms on the LHS are integers and the $x_i$ are distinct, the seven factors $(n-x_i)$ are seven distinct integers, each dividing $-7$. But $-7$ has only four integer divisors, $\pm 1$ and $\pm 7$, so seven distinct factors are impossible. Since $7$ is prime, $n$ cannot be a zero.
Answer
Hint:
$P(x)= Q(x) (x-x_1)(x-x_2)(x-x_3)(x-x_4)(x-x_5)(x-x_6)(x-x_7)+7$
Idea: you can't get $-7$ by multiplying $7$ distinct integers.
I am trying to show that
$|f(x)-f(y)|<|x-y|$,
for the function $f$ to be defined as $f:[0,+\infty)\mapsto [0,+\infty)$, $f(x)=(1+x^2)^{1/2}$, using the mean value theorem.
I have done this:
Since $f$ is differentiable on $[0,+\infty)$, there is a point $x_0$ with $x<x_0<y$ such that $f(x)-f(y)=(x-y)f'(x_0)$, by the mean value theorem. Hence, $|f(x)-f(y)|=|x-y||f'(x_0)|=|x-y||x_0 (1+{x_0} ^2)^{-1/2}|\leq|x-y||x_0|\leq|x-y|M<|x-y|$ where $M$ is a constant. Can someone tell me if this is correct?
Answer
Hint:
First prove the general result: if $f:\mathbb{R}\to \mathbb{R}$ is a differentiable function and $|f'(x)| < M$ for all $x\in\mathbb{R}$ then for all $x,y\in\mathbb{R}$ the inequality $$|f(x)-f(y)| < M|x-y|$$ holds. The proof is very similar to what you have done in the question.
Next prove that if $f(x) = \sqrt{1+x^2}$ then $|f'(x)| < 1$. To do this consider $f'(x)^2 = \frac{x^2}{1+x^2}$.
Combining the two results above gives the desired result.
Let $(X, \mathcal{A}, \mu)$ be a $\sigma$-finite measure space, and let $f: X \to \mathbb{R}$ be measurable. Then, $\Gamma(f)$, the graph of $f$ defined as
$$\Gamma = \{(x,y) \in X \times \mathbb{R}: f(x) = y\}$$
is measurable in the $\sigma$-algebra $\mathcal{A \times L}$ where $(\mathbb{R},\mathcal{L},m)$ is the measure space composed of the Lebesgue $\sigma$-algebra ($\mathcal{L}$) on $\mathbb{R}$ and the Lebesgue measure $m$.
Furthermore, prove that the product measure is $0$.
For the first part, I am trying to find the measurable rectangle to prove this is measurable in the product sigma algebra.
I know that $\Gamma = X \times \{f(x)\}$
$X \in \mathcal{A}$ trivially. Also, is the reason why $\{f(x)\} \in \mathcal{L}$ the fact that $f$ is measurable? I know that $f$ being measurable means that
$$\{x:f(x) > a\} \in \mathcal{A} \ \ \forall a \in \mathbb{R}$$
How does this translate to $\{f(x)\}$ being measurable on $\mathcal{L}$?
Furthermore, assuming this is proven. Let $\chi_A$ be the indicator function of some set $A$.
We have that the measure of $\Gamma$, by definition is
$$(\mu \times m) (\Gamma)=\int_\Gamma \mathrm{d}(\mu \times m) = \int_{X\times\mathbb{R}} \chi_\Gamma ((x,y)) \mathrm{d}(\mu \times m)$$
and since the indicator function is, by definition, non-negative, we can use Fubini's theorem to get
$$(\mu \times m)(\Gamma)=\int_X\int_\mathbb{R} \chi_{\{(x,y):f(x)=y\}} ((x,y)) \mathrm{d}m \mathrm{d}\mu$$
But here I have no idea on how to do the first integral or how to proceed in general from here.
Thank you so much!
An exercise asks to describe (i.e. basically tell what it is isomorphic to, rather than listing the automorphisms explicitly) the Galois group of $\mathbb{Q}(\sqrt{2},\sqrt{3})$ and suggests computing its degree over $\mathbb{Q}$. I already know it's $4$ and it is easy to prove. But I don't understand why this is needed, as we saw in class that the Galois group associated with an irreducible polynomial acts transitively on its roots and this is enough to conclude.
I would solve the exercise the following way:
$\mathbb{Q}(\sqrt{2},\sqrt{3})$ is the splitting field of the polynomial $(X^2-2)(X^2-3)$, each factor of degree $2$ being irreducible over $\mathbb{Q}$ using Eisenstein's criterion with $p=2, 3.$ Since an automorphism preserves each of these two polynomials, any permutation of the roots of $(X^2-2)(X^2-3)$ is a product of a permutation of the roots of $(X^2-2)$ and a permutation of the roots of $(X^2-3)$, and they are disjoint. (If they were not disjoint, i.e. if a root of a factor were to be sent to a root of the other, the polynomial factor wouldn't be preserved, since the two factors have different roots...)
Therefore, the Galois group is a subgroup of $S_2\times S_2$. However, since each of the polynomial factor is irreducible, its Galois group's action on its roots must be transitive, so the Galois group is actually isomorphic to $G_1\times G_2$ where $G_1$, $G_2$ are transitive groups of $S_2$, which is $S_2$ itself. Therefore, the Galois group is isomorphic to $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. $\square$
I think the same method works for $\mathbb{Q}(\sqrt{p_1},...,\sqrt{p_n})$ where the $p_i$ are distinct primes. A corollary would be that these square roots are linearly independent over $\mathbb{Q}$.
Is there anything wrong with my proof? I think this is interesting because if this works then it allows the description of the Galois group of such extensions without having to bother showing linear independence of the square roots (i.e. computing the degree of the extension, which can be troublesome)
Answer
(Moved from comments)
You need to know that $\sqrt{2}$ and $\sqrt{3}$ are linearly independent, otherwise you could have something like $\sqrt{3} = a\sqrt{2} + b$ for some $a, b \in \mathbb{Q}$. Then you would not be able to choose an automorphism that sends $\sqrt{2}$ to its conjugate while fixing $\sqrt{3}$. So it would turn out that the Galois group is just $S_2$.
For example the argument you gave would carry out just as well for the polynomial $(X^2 - 2)(X^2 - 18)$ (both factors are irreducible by the rational roots test). But the Galois group here is actually just $S_2$, and this is because $\sqrt{18} = 3\sqrt{2}$ so they are linearly dependent. The splitting field is $\mathbb{Q}(\sqrt{2}, \sqrt{18}) = \mathbb{Q}(\sqrt{2})$, so the degree of the extension is only $2$.
It is relatively easy to show that if $p_1$, $p_2$ and $p_3$ are distinct primes then $\sqrt{p_1}+\sqrt{p_2}$ and $\sqrt{p_1}+\sqrt{p_2}+\sqrt{p_3}$ are irrational, but the only proof I can find that $\sqrt{p_1}+\sqrt{p_2}+...+\sqrt{p_n}$ is irrational for distinct primes $p_1$, $p_2$, ... , $p_n$ requires we consider finite field extensions of $\mathbb{Q}$.
Does an elementary proof that $\sqrt{p_1}+\sqrt{p_2}+...+\sqrt{p_n}$ is irrational exist?
(By elementary, I mean only using arithmetic and the fact that $\sqrt{m}$ is irrational if $m$ is not a square number.)
The cases $n=1$, $n=2$, $n=3$ can be found in the MSE question "sum of square root of primes 2", and I am hoping for a similar proof for larger $n$.
Background: I'm looking at old exams in abstract algebra. The factor ring described was described in one question and I'd like to understand it better.
Question: Let $F = \mathbb{Z}_2[x]/(x^4+x+1)$. As the polynomial $x^4+x+1$ is irreducible over $\mathbb{Z}_2$, we know that $F$ is a field. But what does it look like? By that I am asking if there exists some isomorphism from $F$ into a well-known field (or where it is straightforward to represent the elements) and about the order of $F$.
In addition: is there something we can in general say about the order of fields of the type $\mathbb{Z}_2[x]/p(x)$ (with $p(x)$ being irreducible in $\mathbb{Z}_2[x]$)?
Answer
The elements of $F$ are $\{ f(x) + (x^4 + x + 1) \mid f(x) \in \mathbb{Z}_2[x], \deg f < 4 \}$. There are $2^4$ of them. Any field of order $2^4$ is isomorphic to $F$.
In general, if $p(x) \in \mathbb{Z}_2[x]$ is irreducible of degree $k$, then $\mathbb{Z}_2[x]/(p(x))$ is a field of order $2^k$.
There is a notation that makes this field more convenient to work with. Let $\alpha = x + (x^4 + x + 1) \in F$. Then for $f(x) \in \mathbb{Z}_2[x]$, $f(\alpha) = f(x) + (x^4 + x + 1)$. So, for example, we can write the element $x^2 + 1 + (x^4 + x + 1)$ as $\alpha^2 + 1$. In this notation,
$$F = \{ f(\alpha) \mid f(x) \in \mathbb{Z}_2[x], \deg f < 4 \}.$$
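As an illustration (a sketch of my own, not from the exam), arithmetic in $F$ can be carried out with bit operations, encoding $f(\alpha)$ as an integer whose bit $i$ is the coefficient of $\alpha^i$:

```python
MOD = 0b10011  # x^4 + x + 1 over Z_2

def gf16_mul(a, b):
    """Multiply two elements of Z_2[x]/(x^4 + x + 1), coefficients as bits."""
    result = 0
    while b:
        if b & 1:
            result ^= a       # addition of polynomials over Z_2 is XOR
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD          # reduce as soon as the degree reaches 4
    return result

alpha = 0b0010                # the class of x
a2 = gf16_mul(alpha, alpha)   # alpha^2
a4 = gf16_mul(a2, a2)         # alpha^4
assert a4 == 0b0011           # alpha^4 = alpha + 1, since x^4 ≡ x + 1

# x^4 + x + 1 is primitive, so alpha generates the 15-element group F*.
p = 1
for _ in range(15):
    p = gf16_mul(p, alpha)
assert p == 1
```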
An isomorphic field is the nimber field of nimbers less than 16. The representation of the elements is simpler, but I'm finding nim-multiplication to be harder than polynomial multiplication (maybe there's a trick to it that I don't know).
How can one prove the statement
$$\lim_{x\to 0}\frac{\sin x}x=1$$
without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution.
This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.
Answer
The area of $\triangle ABC$ is $\frac{1}{2}\sin(x)$. The area of the colored wedge is $\frac{1}{2}x$, and the area of $\triangle ABD$ is $\frac{1}{2}\tan(x)$. By inclusion, we get
$$
\frac{1}{2}\tan(x)\ge\frac{1}{2}x\ge\frac{1}{2}\sin(x)\tag{1}
$$
Dividing $(1)$ by $\frac{1}{2}\sin(x)$ and taking reciprocals, we get
$$
\cos(x)\le\frac{\sin(x)}{x}\le1\tag{2}
$$
Since $\frac{\sin(x)}{x}$ and $\cos(x)$ are even functions, $(2)$ is valid for any non-zero $x$ between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. Furthermore, since $\cos(x)$ is continuous near $0$ and $\cos(0) = 1$, we get that
$$
\lim_{x\to0}\frac{\sin(x)}{x}=1\tag{3}
$$
Also, dividing $(2)$ by $\cos(x)$, we get that
$$
1\le\frac{\tan(x)}{x}\le\sec(x)\tag{4}
$$
Since $\sec(x)$ is continuous near $0$ and $\sec(0) = 1$, we get that
$$
\lim_{x\to0}\frac{\tan(x)}{x}=1\tag{5}
$$
I don't even know how to start it...
$$ \lim_{n\to \infty } \frac{2^n}{n!} = $$
Answer
HINT
Prove that $$0 < \dfrac{2^n}{n!} \leq \dfrac4n$$ for all $n \in \mathbb{Z}^+$ using induction and then use squeeze theorem.
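A quick check of the hinted inequality over a range of $n$ (this verifies instances; the induction itself is still yours to write):

```python
import math

for n in range(1, 50):
    value = 2**n / math.factorial(n)
    assert 0 < value <= 4 / n   # the hinted bound 0 < 2^n/n! <= 4/n
```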
There was a previous problem in my homework that basically demonstrated that:
${}_{10}C_7 = {}_9C_6 + {}_8C_6 + {}_7C_6 + {}_6C_6$
And our question is:
"Use that fact to derive a summation formula involving expressions nC1."
I'm not entirely sure what this means, but I'm assuming we are to use Sigma. This is what I came up with:
$${}_nC_r = \sum_{i=r - 1}^{n-1} {}_iC_{r-1}$$
I'm not sure if I'm even using legal notation here, so any help would be greatly appreciated.
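The notation is legal, and the formula is an instance of the hockey-stick identity. A quick sketch verifying it for small values (using `math.comb` for $\binom{n}{r}$):

```python
from math import comb

# The motivating instance: C(10,7) = C(9,6) + C(8,6) + C(7,6) + C(6,6).
assert comb(10, 7) == comb(9, 6) + comb(8, 6) + comb(7, 6) + comb(6, 6)

# The proposed general formula: C(n, r) = sum_{i=r-1}^{n-1} C(i, r-1).
for n in range(1, 15):
    for r in range(1, n + 1):
        assert comb(n, r) == sum(comb(i, r - 1) for i in range(r - 1, n))
```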
Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show,
$$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$
when $X$ has : a) a discrete distribution, b) a continuous distribution.
I assumed that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, then $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. Although how useful integrating that is, I really have no idea.
Answer
For every nonnegative random variable $X$, whether discrete or continuous or a mix of these,
$$
X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt,
$$
hence
$$
\mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt.
$$
Likewise, for every $p>0$, $$
X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt,
$$
hence
$$
\mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt.
$$
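A concrete sketch for the discrete case: for an integer-valued $X$ the integral collapses to $\sum_{k\ge 0}\mathrm P(X>k)$, checked here for a fair six-sided die:

```python
faces = [1, 2, 3, 4, 5, 6]
expectation = sum(faces) / len(faces)   # E(X) = 3.5

# P(X > k) summed over k = 0, 1, ..., 5 (the tail vanishes for k >= 6).
tail_sum = sum(sum(1 for f in faces if f > k) / len(faces) for k in range(6))

assert abs(tail_sum - expectation) < 1e-12
```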
I'm being told that because the following series is the tail end of a convergent series, it converges to zero as $n$ gets large:
$$\sum\limits_{k=n+1}^\infty\frac{r^{2k+1}}{(2k+1)!}$$
The tail end of which convergent series? $e^r$? If so, then the above series is actually every other term of the tail end of the power series for $e^r$, right?
Or how else to see that the above series converges to $0$? Or does the series sum to zero simply because as $n$ gets large, the number of terms get arbitrarily small?
Answer
The series $\displaystyle \sum_{k=0}^\infty \frac{r^{2k+1}}{(2k+1)!}$ is the Taylor series of $\sinh r$, the hyperbolic sine (see Wikipedia), which converges for all real $r$.
But even the observation that it's half of the terms of $e^r$ is sufficient in this case: simply apply the squeeze theorem with the zero series and the $e^r$ series:
In terms of the formulation in the link, put:
\begin{align}
x_n &= \sum_{k=n+1}^\infty \frac{r^{2k+1}}{(2k+1)!}\\
y_n &= 0\\
z_n &= \sum_{k=n+1}^\infty \frac{r^{2k+1}}{(2k+1)!} + \sum_{k=n+1}^\infty \frac{r^{2k+2}}{(2k+2)!} = \sum_{j=2n+3}^\infty\frac{r^j}{j!}
\end{align}
Since $z_n$ is the tail of $e^r$, it converges to zero, and the squeeze theorem tells us that $\lim\limits_{n\to\infty} x_n = 0$, as desired.
As to "the number of terms gets arbitrarily small", that's quite incorrect. It's like saying that there are only finitely many natural numbers.
If this were correct, we would have $\displaystyle \sum_{k=n+1}^\infty 1 = 0$ as well, which of course is not true.
This is from a youtube video on the Chinese Remainder Theorem - https://www.youtube.com/watch?v=ru7mWZJlRQg
What the author has done thus far is basically
1. Make sure that the moduli 3, 4, 5 are pairwise relatively prime by showing that gcd(3,4) = 1, gcd(3,5) = 1 and gcd(4,5) = 1.
2. Set up a table with mod 3, mod 4, mod 5 as the columns. In each column he put the product of the other two moduli, so that each column's value reduces to zero under the other two moduli.
3. Here's the part that I have a question about. The author states that the first linear congruence
$x \equiv 2 \pmod 3$ must be satisfied, and to do so he reduces all the values mod 3. The only non-zero value will be the value in column 1 (because of the last step).
My question is by the definition of a is congruent to b modulo m(below)
Shouldn't the author have to subtract 2 from all the values first and then reduce mod 3, so that he gets 18 mod 3, 13 mod 3, and 9 mod 3, or 0, 1, and 0? Is there a reason he doesn't have to do this? To me, this isn't consistent with the definition of congruence.
Answer
First that's not the only definition there is of congruence. For example you may define it as :
Two integers $a$ and $b$ are said to be congruents modulo m if the remainders of the division of $a$ and $b$ by $m$ are equal.
Which is equivalent to your definition, and for some more intuitive.
Second, notice that congruence is an equivalence relation and also we may sum and multiply through congruence.
Third you may represent a residue class by
$$ \overline {a} = \{x \in \mathbb Z ; x \equiv a \mod m\} $$
and the basic properties of arithmetic you are used to with the integers is also valid here. With this in mind we may "apply" congruence modulo $3$ to obtain
$$\overline{x} = \overline {20 + 15 + 12} = \overline {20} + \overline {15} + \overline {12} = \overline {2} + \overline {0} + \overline {0}$$
Because according to definition we gave here $$20 = 6 \cdot 3 + \color{#f05}2\ ,\ 15 = 5 \cdot 3 + \color{#f05}0\ ,\ 12 = 4 \cdot 3 + \color{#f05}0\ \ \text{and}\ \ 2 = 0 \cdot 3 + \color{#f05}2$$
In other words, $$ x \equiv 20 \equiv 2 \mod 3$$
Also you may notice that, of course,
$$(20 -2) \equiv 0 \mod 3$$
Because $$18 = 6 \cdot 3 + \color{#f05} 0$$
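The system in the video appears to be $x\equiv2\pmod3$, $x\equiv3\pmod4$, $x\equiv2\pmod5$ (inferred from the table values 20, 15, 12 and $x=47$; treat that as an assumption). A brute-force sketch confirms it:

```python
def smallest_solution(congruences):
    """Brute-force the least non-negative x with x ≡ r (mod m) for each pair."""
    modulus = 1
    for _, m in congruences:
        modulus *= m          # moduli are pairwise coprime, so search 0..M-1
    for x in range(modulus):
        if all(x % m == r for r, m in congruences):
            return x
    return None

x = smallest_solution([(2, 3), (3, 4), (2, 5)])
assert x == 47
assert x % 3 == 2 and x % 4 == 3 and x % 5 == 2
```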
Problem 32 on page 630 in Larson Calculus 9e.
In Exercies 29-36, test for convergence or divergence, using each test at least one. Identify which test was used.
$$32. \sum \limits_{n=2}^{\infty} \dfrac{1}{n^3-8} $$
When $n=2$, $a_2 = \frac{1}{2^3-8} = \frac{1}{0}$, and it is not possible to divide by zero.
I know it converges for $\sum \limits_{n=3}^{\infty} \dfrac{1}{n^3-8} = -\dfrac{1}{3}$
From the comments, it is very likely a typo.
Thus, I will present some problems for more clarity.
$$ \sum \limits_{n=0}^{\infty} \dfrac{1}{n^3-8} = -\dfrac{1}{3}$$ Will this not converge because at $n = 2$ there is a singularity?
That is, can no series converge if there exists a singularity?
Answer
In this case, as many pointed out, it seems like a typo.
Regarding:
Thus, I will present some problems for more clarity.
$$ \sum \limits_{n=0}^{\infty} \dfrac{1}{n^3-8} = -\dfrac{1}{3}$$ Will this not converge because at $n = 2$ there is a singularity?
That is, can no series converge if there exists a singularity?
Notice that
$$ \sum \limits_{n=0}^{\infty} \dfrac{1}{n^3-8} = -\dfrac{1}{3}$$
is just a (no sense) formula, so it cannot converge or diverge.
The question
Does
$$\sum_{k=0}^\infty a_k$$
converge?
It's a shorthand for
Does the sequence
$$\left(\sum_{k=0}^n a_k\right)_{n\in\Bbb N}$$
converge?
In this case
$$\left(\sum_{k=0}^n \frac1{k^3-8}\right)_{n\in\Bbb N}$$
doesn't even make sense (you have already noted why), so it can neither converge nor diverge.
I know it's possible to produce a bijection from $\mathbb{Z}$ to $\mathbb{Z}\times\mathbb{Z}$, but is it possible to do so from $\mathbb{R}$ to $\mathbb{R} \times \mathbb{R}$?
I have to solve - $$\sum_{i=1}^\infty \left(\frac{5}{12}\right)^i$$ - geometric series?
The geometric series formula I know is - $$\sum_{i=0}^\infty x^i= \frac{1}{1-x}$$
However in my assignment, the series starts from $i=1$.
The solution I have is - $$\sum_{i=1}^\infty \left(\frac{5}{12}\right)^i = \frac{1}{1-\frac{5}{12}}-1$$
Can you explain please why is that the solution?
Answer
HINT:
$$\sum_{i=0}^\infty x^i= \frac{1}{1-x} = x^0 + \sum_{i=1}^\infty x^i = 1 + \sum_{i=1}^\infty x^i$$
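Numerically, subtracting the $i=0$ term matches a long partial sum (a quick sketch):

```python
x = 5 / 12
partial = sum(x**i for i in range(1, 200))   # the series starting at i = 1
closed_form = 1 / (1 - x) - 1                # full geometric series minus the i = 0 term
assert abs(partial - closed_form) < 1e-12
```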
Suppose that the function $f:[0,1] \to \mathbb{R}$ is continuous on
$[0,1]$ and $f(0)=f(1)$. Prove that for each natural number $n$, there
exists $x_n \in \mathbb{R}$ such that $0 \leq x_n \leq 1-\frac{1}{n}$
and $f(x_n)=f(x_n+\frac{1}{n})$.
Though I don't know what the proof would look like, I have a strong feeling that it has something to do with the Intermediate Value Theorem, judging by the continuity of $f$ and the existence of such an $x_n$. So I guess I'm supposed to define a function $g(x)=f(x)-f(x+\frac{1}{n})$ on $[0,1-\frac{1}{n}]$ and try to claim that $g(x_n)=0$ for some $x_n \in [0,1-\frac{1}{n}]$. Unfortunately I don't know how to proceed from here, maybe because I haven't made use of the fact that $f(0)=f(1)$.
Any hint and suggestion is much appreciated. Thank you!
Answer
Consider $g_n(x)=f(x+\frac{1}{n})-f(x)$ for $x\in [0,\frac{n-1}{n}]$.
Now $0=f(1)-f(0) = g_n(0) + g_n(\frac{1}{n}) + \dots + g_n(\frac{n-1}{n})$. If $g_n(\frac{k}{n})=0$ for some $k$, you are done. On the other hand, if not, then since their sum is $0$, at least one of the values $g_n(k/n)$ must be positive and at least one must be negative.
Now use the intermediate value theorem.
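Not part of the original answer, but a numerical illustration of the argument, using my own example $f(x)=\sin(2\pi x)$ (which satisfies $f(0)=f(1)$): scan for a sign change of $g_n$ and bisect to a point $x_n$ with $f(x_n)=f(x_n+\frac1n)$.

```python
import math

# Locate x_n in [0, 1 - 1/n] with f(x_n) = f(x_n + 1/n), via a sign change
# of g_n(x) = f(x + 1/n) - f(x) followed by bisection.
def find_x_n(f, n, grid=10000):
    g = lambda x: f(x + 1.0 / n) - f(x)
    xs = [i * (1.0 - 1.0 / n) / grid for i in range(grid + 1)]
    for a, b in zip(xs, xs[1:]):
        if g(a) == 0.0:
            return a
        if g(a) * g(b) < 0:              # sign change: bisect to a root of g
            for _ in range(60):
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
    return None

f = lambda x: math.sin(2 * math.pi * x)   # my example: f(0) = f(1)
for n in (2, 3, 5, 10):
    x_n = find_x_n(f, n)
    print(n, x_n, abs(f(x_n) - f(x_n + 1.0 / n)))   # last column ≈ 0
```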
\begin{gather}
3x\equiv1 \pmod 7 \tag 1\\
2x\equiv10 \pmod {16} \tag 2\\
5x\equiv1 \pmod {18} \tag 3
\end{gather}
Hi everyone, just a little bit stuck on this one. I think I am close, but I must be getting tripped up somewhere. Here is what I have so far:
from (2), $2x=10+16k \implies x=5+8k$
Putting this into (1):
\begin{align*}
3(5+8k) &\equiv 1 \pmod 7 \\
15+24k &\equiv 1 \pmod 7 \\
24k &\equiv -14 \pmod 7
\end{align*}
By co-prime cancellation, I get $12k\equiv -7 \pmod 7$
And since $12k \equiv 5k \pmod 7 \implies 5k \equiv -2k \pmod 7$ and $-7 \equiv 0 \pmod 7$, we conclude that
$ -2k \equiv 0 \pmod 7 \implies -2k = 7l$ for some integer $l$.
Multiplying by $-4 \implies 8k = -28l$
It follows that $x = 5 + 8k = 5-28l \implies x \equiv 5 \pmod{28}$.
So now, solving (1), (2) and (3) is equivalent to solving:
$x \equiv 5 \pmod{28}$ (4)
$5x\equiv1 \pmod{18}$ (3)
Then substitute $x = 5-28l$ into (3),
\begin{align*}
5(5-28l) &\equiv 1 \pmod{18} \\
25 - 140l &\equiv 1 \pmod{18} \\
140l &\equiv 24 \pmod{18}
\end{align*}
And since $140l \equiv 14l \pmod{18} \implies 14l \equiv -4l \pmod{18}$ and $24 \equiv 6 \pmod{18}$,
we have $-4l \equiv 6 \pmod{18} \implies -4l = 6 + 18M$ for some integer $M$.
Multiplying by $7$ gives $-28l = 42 + 126M$.
Finally, substituting this back into $x$: $x = 5-28l \implies x = 5+42+126M = 47+126M \implies x \equiv 47 \pmod{126}$
But when I substitute $x = 47$ back into my original equations, it works for (1) and (3), but fails for (2).
Can anyone tell me where I went wrong? Many thanks!!
Answer
This is not an answer, but a guide to a simplification. The first congruence is fine. Note that the second congruence is equivalent to $x\equiv 5\pmod{8}$. Any solution of this congruence must be odd.
Now look at the third congruence. Note that as long as we know that $x$ is odd, we automatically have $5x\equiv 1\pmod{2}$. So in the presence of the second congruence, the third one can be replaced by $5x\equiv 1\pmod{9}$.
Thus we are looking at the congruences $3x\equiv 1 \pmod{7}$, $x\equiv 5\pmod{8}$, and $5x\equiv 1\pmod{9}$. Now the moduli are pairwise relatively prime. Relatively prime moduli are easier to handle; there is less risk of error.
There will be a unique solution modulo $(7)(8)(9)$. In particular, your final modulus of $126$ cannot be right.
It is probably worthwhile to simplify the congruences still further. Note that the congruence $3x\equiv 1\pmod{7}$ has the solution $x\equiv 5\pmod{7}$. And the congruence $5x\equiv 1\pmod{9}$ is equivalent to $x\equiv 2\pmod{9}$.
We got lucky and ended up with $x\equiv 5\pmod{7}$ and $x\equiv 5\pmod{8}$, whose only solution is $x\equiv 5\pmod{56}$.
So we are trying to solve $x\equiv 5\pmod{56}$, $x\equiv 2\pmod{9}$.
One can find a solution by a short search. Or else we want $x=56a+5=9b+2$. That gives $9b=56a+3$, so $3$ must divide $a$, say $a=3c$. We arrive at $56c+1=3b$. Clearly $c=1$ works, so $a=3$ and $x=56(3)+5=173$. The solution is therefore $x\equiv 173\pmod{(56)(9)}$.
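As a check (my own addition, not part of the answer), one can verify $x=173$ against the original three congruences directly, and confirm by brute force that it is the unique solution modulo $7\cdot 8\cdot 9 = 504$:

```python
# Check the answer's solution x = 173 against the ORIGINAL three congruences,
# and that it is the unique solution modulo 7*8*9 = 504.
x = 173
assert (3 * x) % 7 == 1      # (1): 3x ≡ 1  (mod 7)
assert (2 * x) % 16 == 10    # (2): 2x ≡ 10 (mod 16)
assert (5 * x) % 18 == 1     # (3): 5x ≡ 1  (mod 18)

solutions = [y for y in range(504)
             if (3 * y) % 7 == 1 and (2 * y) % 16 == 10 and (5 * y) % 18 == 1]
print(solutions)             # [173]
```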
I need a rigorous proof that verifies why the limit of $\dfrac{\sin(x)}{x}$ as $x$ approaches $0$ is $1$.
I have tried, but I do not know how to start the proof.
I would appreciate it if somebody could help me. Thanks.
Question:
Prove $$\ 1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^2}\le 2-\frac{1}{n},
\text{ for all natural } n$$
My attempt:
Base case: $n=1$ is true, since $1 \le 2-\frac{1}{1}=1$.
I.H: Suppose $1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{k^2}\le 2-\frac{1}{k},$ for some natural $k.$
Now we prove true for $n = k+1$
$$ 1+\frac{1}{4}+\cdots+\frac{1}{k^2}+\frac{1}{\left(k+1\right)^2}\le 2-\frac{1}{k}+\frac{1}{\left(k+1\right)^2},\text{ by induction hypothesis} $$
Now how do I show that $2-\frac{1}{k}+\frac{1}{\left(k+1\right)^2}\le 2-\frac{1}{\left(k+1\right)}\text{ ?}$
Have I done everything correctly up until here?
If yes, how do I show this inequality is true?
Any help would be appreciated.
Answer
You are right!
We need to prove that:
$$\frac{1}{(k+1)^2}<\frac{1}{k}-\frac{1}{k+1}$$ or
$$\frac{1}{(k+1)^2}<\frac{1}{k(k+1)},$$
which is obvious.
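The induction proves the inequality for every natural $n$; as a quick empirical cross-check (my own addition), it can also be verified numerically for many values of $n$:

```python
# Empirical check of  1 + 1/4 + ... + 1/n^2 <= 2 - 1/n  for n = 1, ..., 10000
s = 0.0
for n in range(1, 10001):
    s += 1.0 / n ** 2
    assert s <= 2.0 - 1.0 / n        # holds with equality at n = 1
print("inequality verified for n = 1, ..., 10000")
```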
Let $\sum_{k=0}^{\infty}a_k(x-a)^k$ be a power series with real coefficients $a_k$, the constant $a \in \mathbb{R}$ and the positive radius of convergence $R>0$. Let
\begin{equation}
D =
\begin{cases}
]a-R, \ a+R[ & \text{if } R < \infty \\
\mathbb{R} & \text{if } R = \infty \
\end{cases}
\end{equation}
and $ \ f: D \rightarrow \mathbb{R}$, $ \ f(x) := \sum_{k=0}^{\infty}a_k(x-a)^k$. $ \ f$ is then continuous on $D$.
This is from my lecture notes, and I am not really sure what is happening here. Do I understand this correctly? A power series has a certain radius of convergence $R$. Depending on $R$, the set $D$ is defined. $D$ is the set where the power series converges (?). $D$ is then taken as the domain of a function which is defined by the power series. $f$ is then continuous on $D$.
Maybe somebody can explain it more clearly or add a helpful image.
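A concrete instance of the setup, using my own choice of coefficients (not from the notes): $a_k = 1$, $a = 0$ gives the geometric power series, which has $R = 1$, so $D = \left]-1, 1\right[$ and $f(x) = \frac{1}{1-x}$ on $D$ (whether the series also converges at the endpoints $a \pm R$ varies from series to series; those points are excluded from $D$). Partial sums approximate $f$ at any point inside $D$:

```python
# Geometric power series (a_k = 1, a = 0): partial sums approximate
# f(x) = 1/(1 - x) at points of D = ]-1, 1[.
def partial_sum(x, terms=500):
    return sum(x ** k for k in range(terms))

for x in (-0.9, 0.0, 0.5, 0.9):
    print(x, partial_sum(x), 1.0 / (1.0 - x))   # columns 2 and 3 agree closely
```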
I've been looking at
$$\int\limits_0^\infty {\frac{{{x^n}}}{{1 + {x^m}}}dx }$$
It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example:
$$\displaystyle \int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^3}}}dx = } \frac{\pi }{3}\frac{1}{{\sin \frac{\pi }{3}}} = \frac{{2\pi }}{{3\sqrt 3 }}$$
$$\int\limits_0^\infty {\frac{{{x^1}}}{{1 + {x^4}}}dx = } \frac{\pi }{4}$$
$$\int\limits_0^\infty {\frac{{{x^2}}}{{1 + {x^5}}}dx = } \frac{\pi }{5}\frac{1}{{\sin \frac{{2\pi }}{5}}}$$
So I guess there must be a closed form; the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{\pi}{\sin \pi x}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas.
UPDATE:
The integral reduces to finding
$$\int\limits_{ - \infty }^\infty {\frac{{{e^{a t}}}}{{{e^t} + 1}}dt} $$
with $a =\dfrac{n+1}{m}$, which converges only if
$$0 < a < 1$$
Using series I find the solution is
$$\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}}} $$
Can this be put in terms of the Digamma function or something of the sort?
Answer
I would like to add a supplementary calculation to BR's answer.
Let us first assume that $0 < \mu < \nu$ so that the integral $$ \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx $$ converges absolutely. By the substitution $x = \tan^{2/\nu} \theta$, we have $$ \frac{dx}{1+x^{\nu}} = \frac{2}{\nu} \tan^{(2/\nu)-1} \theta \; d\theta. $$ Thus $$ \begin{align*} \int_{0}^{\infty} \frac{x^{\mu-1}}{1+x^{\nu}} \; dx & = \int_{0}^{\frac{\pi}{2}} \frac{2}{\nu} \tan^{\frac{2\mu}{\nu}-1} \theta \; d\theta \\ & = \frac{1}{\nu} \beta \left( \frac{\mu}{\nu}, 1 - \frac{\mu}{\nu} \right) \\ & = \frac{1}{\nu} \Gamma \left( \frac{\mu}{\nu} \right) \Gamma \left( 1 - \frac{\mu}{\nu} \right) \\ & = \frac{\pi}{\nu} \csc \left( \frac{\pi \mu}{\nu} \right), \end{align*} $$ where the last equality follows from Euler's reflection formula.
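A numerical cross-check of the closed form (my own addition, pure standard library): compare a midpoint-rule evaluation of $\int_0^\infty \frac{x^{\mu-1}}{1+x^\nu}\,dx$ with $\frac{\pi}{\nu}\csc\frac{\pi\mu}{\nu}$, using the substitution $x = \frac{t}{1-t}$ to map $(0,\infty)$ onto $(0,1)$.

```python
import math

# Midpoint-rule check of  ∫_0^∞ x^(μ-1)/(1+x^ν) dx = (π/ν) csc(πμ/ν),
# valid for 0 < μ < ν.  Substituting x = t/(1-t) maps (0, ∞) onto (0, 1).
def integrand(t, mu, nu):
    x = t / (1.0 - t)
    return x ** (mu - 1) / (1.0 + x ** nu) / (1.0 - t) ** 2   # includes dx/dt

def integral(mu, nu, steps=100000):
    h = 1.0 / steps
    return h * sum(integrand((i + 0.5) * h, mu, nu) for i in range(steps))

def closed_form(mu, nu):
    return (math.pi / nu) / math.sin(math.pi * mu / nu)

# n = 1, m = 3  ->  mu = 2, nu = 3: both ≈ 2π/(3√3) ≈ 1.2092
print(integral(2, 3), closed_form(2, 3))
# n = 1, m = 4  ->  mu = 2, nu = 4: both ≈ π/4
print(integral(2, 4), closed_form(2, 4))
```

These reproduce the examples in the question; here $\mu = n+1$ and $\nu = m$, matching $a = \frac{n+1}{m}$ from the update.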
I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...