Tuesday, October 31, 2017

real analysis - $\lim\limits_{n \to +\infty} \sqrt[n]{n!}$ is infinite




How do I prove that $ \displaystyle\lim_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite?


Answer



By considering Taylor series, $\displaystyle e^x \geq \frac{x^n}{n!}$ for all $x\geq 0,$ and $n\in \mathbb{N}.$ In particular, for $x=n$ this yields $$ n! \geq \left( \frac{n}{e} \right)^n .$$



Thus $$\sqrt[n]{n!} \geq \frac{n}{e} \to \infty.$$
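As a quick numerical sanity check (not part of the proof), the bound $n! \ge (n/e)^n$ and the growth of $\sqrt[n]{n!}$ can be observed with `math.lgamma`, which avoids overflowing on large factorials:

```python
import math

# Check the bound n! >= (n/e)^n, i.e. n!^(1/n) >= n/e, numerically.
# log(n!) = lgamma(n + 1), so n!^(1/n) = exp(lgamma(n + 1) / n).
for n in [10, 100, 1000]:
    log_fact = math.lgamma(n + 1)      # log(n!)
    nth_root = math.exp(log_fact / n)  # n!^(1/n)
    assert nth_root >= n / math.e      # the bound from the answer
    print(n, nth_root, n / math.e)
```

The ratio $\sqrt[n]{n!}\,/\,(n/e)$ stays close to $1$, consistent with Stirling's formula.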


algebra precalculus - Why would I want to multiply two polynomials?

I'm hoping that this isn't such a basic question that it gets completely laughed off the site, but why would I want to multiply two polynomials together?



I flipped through some algebra books and have googled around a bit, and whenever they introduce polynomial multiplication they just say 'Suppose you have two polynomials you wish to multiply', or sometimes it's just as simple as 'find the product'. I even looked for some example story problems, hoping that might let me in on the secret, but no dice.




I understand that a polynomial is basically a set of numbers (or, if you'd rather, a mapping of one set of numbers to another). In another way of thinking about it, two polynomials are functions, and their product is a new function that lets you apply a single function once, in place of applying each original function to the number and then multiplying the results together.



Elementary multiplication can be described as 'add $X$ to itself $Y$ times', where $Y$ is a nice integer number of times. When $Y$ is not a whole number, it doesn't seem to make as much sense.



Any ideas?

real analysis - Integral representation of Euler's constant


Prove that : $$ \gamma=-\int_0^{1}\ln \ln \left ( \frac{1}{x} \right) \ \mathrm{d}x.$$


where $\gamma$ is Euler's constant ($\gamma \approx 0.57721$).




This integral is mentioned on Wikipedia and on MathWorld, but the solutions I've found use corollaries of this theorem. Can you give me a simple solution (one not using very advanced theorems), or at least some hints?


Answer



In this answer, it is shown that since $\Gamma$ is log-convex, $$ \frac{\Gamma'(x)}{\Gamma(x)}=-\gamma+\sum_{k=1}^\infty\left(\frac1k-\frac1{k+x-1}\right)\tag{1} $$ Setting $x=1$ yields $$ \Gamma'(1)=-\gamma\tag{2} $$ The integral definition of $\Gamma$ says $$ \begin{align} \Gamma(x)&=\int_0^\infty t^{x-1}\,e^{-t}\,\mathrm{d}t\\ \Gamma'(x)&=\int_0^\infty\log(t)\,t^{x-1}\,e^{-t}\,\mathrm{d}t\\ \Gamma'(1)&=\int_0^\infty\log(t)\,e^{-t}\,\mathrm{d}t\tag{3} \end{align} $$ Putting together $(2)$ and $(3)$ gives $$ \int_0^\infty\log(t)\,e^{-t}\,\mathrm{d}t=-\gamma\tag{4} $$ Substituting $t\mapsto\log(1/t)$ transforms $(4)$ to $$ \int_0^1\log(\log(1/t))\,\mathrm{d}t=-\gamma\tag{5} $$
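As a rough numerical sanity check of $(4)$ (a throwaway midpoint-rule sketch, not part of the proof):

```python
import math

# Midpoint-rule approximation of  integral_0^inf log(t) e^(-t) dt,
# truncated at t = 50 (the tail beyond that is negligible).
N = 200_000
a, b = 0.0, 50.0
h = (b - a) / N
total = 0.0
for i in range(N):
    t = a + (i + 0.5) * h
    total += math.log(t) * math.exp(-t)
integral = total * h
print(integral)  # close to -gamma = -0.5772...
```

The logarithmic singularity at $t=0$ is integrable, so a fine uniform grid is enough for a few digits of agreement with $-\gamma$.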


algebra precalculus - word problem - how to find area from work done and days




A farmer planned to plough a field by doing 120 hectares a day. After
two days of work he increased his daily productivity by 25% and he
finished the job two days ahead of schedule.



a) What is the area of the field?



b) In how many days did the farmer get the job done?




c) In how many days did the farmer plan to get the job done?




The only thing I am able to work out from here is $25\%$ of $120$ hectares a day, which is $30$, so from day 3 onward he ploughs $150$ hectares a day and finishes the job two days earlier than planned, but I am not sure how to put this into algebra and work out the area.


Answer



If the number of expected days (ploughing 120 hectares a day) is $x$ you have $120x = A$ where $A$ is the total area.



Instead the farmer increased the workload after two days to $150$, as you stated.




In the first two days he ploughed $240$ hectares, so in this new setup the total area can be seen as $240 + 150n = A$, where $n$ is the number of days he ploughs $150$ hectares. You can see that $x - 2 = n + 2$, so $n = x-4$.



You now have $240 + 150(x-4) = 120x$ which you can rearrange to give $x = 12$. From here it follows that $A = 1440$, and $n = 8$. (Actual days spent ploughing $=10$)
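A small check of the arithmetic above (a throwaway sketch):

```python
# Farmer word problem: x = planned days at 120 ha/day,
# n = days worked at 150 ha/day after the first two days.
x = 12
n = x - 4
A = 120 * x
assert A == 240 + 150 * n == 1440  # both expressions give the same area
assert 2 + n == x - 2              # actual days (10) is two fewer than planned
print(A, 2 + n, x)
```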


complex numbers - How to solve quadratic function with degree higher than two?


I am struggling to solve the function $z^4 - 6z^2 + 25 = 0$ mostly because it has a degree of $4$. This is my solution so far:


Let $y = z^2 \Longrightarrow y^2 - 6y + 25 = 0$.


Now when we solve for y we get: $y=3 \pm 4i$.


So $z^2 = 3 \pm 4i$. Consequently $z = \sqrt{3 \pm 4i}$



But I know this is not the right answer, because an equation of degree four is supposed to have four solutions, and I only get one. What am I doing wrong?


Answer



You are very close to the answer. Each of $z^2 = 3 + 4i$ and $z^2 = 3 - 4i$ has two square roots, so just put a plus or minus in front: $z = \pm\sqrt{3 \pm 4i}$, which gives the four solutions. It's always the simple things.
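A quick check of the four roots with Python's `cmath` (a sketch, not part of the answer):

```python
import cmath

# The four roots of z^4 - 6 z^2 + 25 = 0:  z = ±sqrt(3 + 4i), ±sqrt(3 - 4i)
roots = []
for y in (3 + 4j, 3 - 4j):      # the two solutions of y^2 - 6y + 25 = 0
    r = cmath.sqrt(y)
    roots.extend([r, -r])

for z in roots:
    assert abs(z**4 - 6 * z**2 + 25) < 1e-9
print(roots)  # ±(2+1j) and ±(2-1j)
```

Indeed $(2+i)^2 = 3+4i$, so the four roots are $\pm(2+i)$ and $\pm(2-i)$.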


Prove that $C(r, r) + C(r+1, r) +\dotsb+C(n, r) = C(n+1, r+1)$ using combinatoric arguments.





Prove that $C(r, r) + C(r+1, r) +\dotsb+C(n, r) = C(n+1, r+1)$ using combinatoric arguments.




I know we can use an analogy of picking people to form a committee, but I can't see how to prove it.


Answer



We have $n+1$ different doughnuts lined up in a row. In how many ways can we choose $r+1$ of them for breakfast? It can be done in $\binom{n+1}{r+1}$ ways. Now let us count another way.



We could pick the leftmost one, and then choose $r$ from the remaining $n$. This can be done in $\binom{n}{r}$ ways.



Or else we could skip the first, pick the second, and then $r$ from the remaining $n-1$. This can be done in $\binom{n-1}{r}$ ways.




And so on. Finally, we could skip all up to the one $r+1$ from the end, choose it, and then choose $r$ from the remaining $r$. This can be done in $\binom{r}{r}$ ways.



Add up. We get our sum of binomial coefficients, regrettably backwards.



Remark: One could alternately talk about committees. Too bureaucratic for my taste, I have been on too many Department committees.
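The doughnut argument can be spot-checked with `math.comb` (a throwaway verification, not part of the proof):

```python
from math import comb

# Hockey-stick identity: C(r,r) + C(r+1,r) + ... + C(n,r) = C(n+1, r+1)
for n in range(1, 20):
    for r in range(0, n + 1):
        lhs = sum(comb(k, r) for k in range(r, n + 1))
        assert lhs == comb(n + 1, r + 1)
print("identity verified for n < 20")
```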


linear algebra - Intuition behind symmetric and antisymmetric tensors



I've been studying multilinear algebra on Kostrikin's "Linear Algebra and Geometry" and he says the following. If $V$ is a linear space, $T^q_0(V)=V^{\otimes q}$ and if $f_\sigma :T^{q}_0(V)\to T^q_0(V)$ is given by:



$$f_\sigma(v_1\otimes\cdots\otimes v_q)=v_{\sigma(1)}\otimes\cdots \otimes v_{\sigma(q)}$$




Then the subspace of symmetric tensors is generated by the map



$$\frac{1}{q!}\sum_{\sigma\in S_q}f_\sigma:T^q_0(V) \to T^q_0(V)$$



I understood his proof of this. However I can't see the intuition behind it. What's the motivation of summing all those functions and why we must include that factor $1/q!$ it's what I don't understand. The same happens with the alternation operator, which he defines as:



$$\frac{1}{q!}\sum_{\sigma \in S_q}\operatorname{sgn}(\sigma)f_\sigma : T^q_0(V) \to T^q_0(V)$$



Again he proves that the image of this map is really the space of antisymmetric tensors. But again, what's the intuition behind this? How can we see intuitively that any symmetric tensor can be written as the image of an arbitrary tensor under the action of this function? And again, I can't understand why this factor with $q!$ appears.




I believe there is some intuition, although the author doesn't spell it out. It's like the tensor product: the intuition there is that we want to construct a pair $(S,T)$ of a vector space $S$ and a multilinear map $T$ such that the pair possesses the universal property. So we do all that work with the free vector space, and there are good motivations, good reasons to construct things the way they are. It's something where, after we read about the motivations, we can think: "those are good reasons, I can see why someone thought of them!". I believe there must be good reasons too for introducing these operators as they are.



How can we see intuitively that it must be like that? Can someone help with this?



Thanks very much in advance!


Answer



For the start let's have a look at matrices.



Let $A \in \mathbb{R^{n\times n }}$ be any matrix.




Then $\text{sym}(A)=\frac{1}{2}(A+A^T)$ is its symmetric part and $\text{skew}(A)=\frac{1}{2}(A-A^T)$ is its antisymmetric part. Note that any matrix can be written as the sum of its symmetric and antisymmetric parts: $A=\text{sym}(A) + \text{skew}(A)$.



Now I will explain why there is the factor $\frac{1}{2}$.
If the matrix $A$ is already symmetric you want $\text{sym}(A) = A$. Without the factor you would get $\text{sym}(A) = 2A$. Similarly for the antisymmetric part.
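A tiny numeric illustration of these two facts, in plain Python with a $2\times 2$ matrix (a sketch; `numpy` would do the same more compactly):

```python
# Symmetric/antisymmetric decomposition of a small matrix.
def transpose(A):
    return [[A[j][i] for j in range(len(A))] for i in range(len(A))]

def combine(A, B, s):
    # (A + s*B) / 2  entrywise
    return [[(A[i][j] + s * B[i][j]) / 2 for j in range(len(A))]
            for i in range(len(A))]

A = [[1.0, 2.0], [5.0, 3.0]]
S = combine(A, transpose(A), +1)   # sym(A)
K = combine(A, transpose(A), -1)   # skew(A)

# A = sym(A) + skew(A)
assert all(S[i][j] + K[i][j] == A[i][j] for i in range(2) for j in range(2))
# sym is idempotent: sym(sym(A)) = sym(A), so the 1/2 factor is right
assert combine(S, transpose(S), +1) == S
print(S, K)
```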



You can think about matrices as bilinear forms, $x^T A y = A(x,y)$. The matrix $A$ is symmetric iff $A(x,y)=A(y,x)$ for all $x,y$. This brings us to the idea of what a symmetric tensor might be: a multilinear form (or tensor) $T$ is symmetric iff $T(...,x,...,y,...) = T(...,y,...,x,...)$ for all $x,y$ and for all positions where you perform the 'swap'. As with matrices, you can take the symmetric part of a tensor $T$:



$$\text{sym}(T)(x_1,...,x_n) = \frac{1}{n!} \sum_{\sigma\in S_n} T(x_{\sigma(1)},..,x_{\sigma(n)})$$



Again the natural requirement is that if you have a symmetric tensor $T$ then $\text{sym}(T)=T$; this explains the factor $\frac{1}{n!}$.




And why bother with symmetric and antisymmetric tensors? Well, I'll give a few remarks from my studies.



In physics you encounter tensors quite a lot and they are often symmetric or antisymmetric; for example, the electromagnetic tensor is antisymmetric.
Or in general relativity you often calculate something like $\sum_{ij} A_{ij}B_{ij}$. When $A$ is symmetric, $\sum_{ij} A_{ij}B_{ij}=\sum_{ij} A_{ij}\text{sym}(B)_{ij}$, which is useful.



Or in differential geometry you define differential forms, which are antisymmetric and of great importance.



I hope I shed a little bit of light on this topic.


Monday, October 30, 2017

derivatives - How is an infinitesimal $dx$ different from $\Delta x$?





When I learned calc, I was always taught
$$\frac{df}{dx}= f'(x) = \lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{(x+h)-x}$$
But I have heard $dx$ called an infinitesimal and I don't know what this means. In particular, I gather that the validity of treating $\frac{df}{dx}$ as a ratio of differentials is a subtle issue and I'm not sure I get it.
Can someone explain the difference between $dx$ and $\Delta x$?




EDIT:



Here is a related thread:



Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?



I read that and this is what I don't understand:




There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things.





Can someone explain what specifically are these two classes of real numbers and how they are different?


Answer



Back in the days of non-standard analysis, the idea of differentiation was (informally) defined as the ratio between two infinitesimal values.



The real number system (often denoted $\mathbb{R}$) can be defined as a field with additional properties such as a total ordering (basically, any two elements are comparable) and the Archimedean property (which states that any two positive elements are within an integer multiple of each other). $\mathbb{R}$ can, however, be extended. One example is to allow the existence of the imaginary unit $\sqrt{-1}$, in which case you obtain the complex numbers (which also form a field).



Extending $\mathbb{R}$ by introducing an infinitesimal element would make it lose the Archimedean property.




So when Arturo Magidin talked about "two classes of real numbers", basically he is referring to $\mathbb{R}$ and an ordered field containing all elements in $\mathbb{R}$ and also infinitesimal, a "number" defined as greater than 0 but smaller than any integer unit fraction.


calculus - Finding a function based on its Derivative without Integrating



My question revolves around finding a function based on its derivative of the type below :







Problem :
The limit below represents the derivative of some real-valued function $f$ at some real-number $a$. State such an $f$ and $a$ in this case.



$$\lim_{h \ \to \ 0} \frac{\sqrt{9+h}-3}{h}$$






Now this type of problem is slightly unique. In general we can always find a function based on its derivative by taking the indefinite integral of the derivative; however, in this case we don't have the derivative in a general form, we only have the value of the derivative at some point $a$, and there are a large number of $f$'s which can produce that value for the derivative at that point. Am I correct in saying this?




This problem above is easily solvable, anyone can see already that the function is obviously going to be $f(x) = \sqrt{x}$, but the way I seem to solve it is a very heuristic method, which bothers me greatly (i.e. similar to guessing a function and working from there). I'm trying to find a methodical way of solving problems of this type, as the way I solve it (shown below) will definitely break down for harder examples.






This is my solution :



By the definition of a derivative :



$$f'(x) = \lim_{h \to\ 0} \frac{f(x+h)-f(x)}{h}$$




In the above case we can see that $\ f(a+h) = \sqrt{9+h}\ $ and $\ f(a)=3 = \sqrt{9}$



This implies that $a = 9$ and $f(x) = \sqrt{x}$, or written more formally, $f : x \mapsto \sqrt{x}, \ \forall x\in\mathbb{R^{+}}$.






As you can see that is a very haphazard solution, and I would like to find a better way to solve problems of these types, but it eludes me.



Is there a more methodical approach (or formal approach) to solving problems of this type, where we are given a limit representing the derivative of some real-valued function $f$ at some real-number $a$, and asked to find $f$ and $a$?




If you have any other suggestions for solving these types of problems, please comment below.


Answer



You are correct in saying there is a large number of maps $f$ and points $a$ satisfying this. Take any real number $a$ and any map $f$ differentiable at $a$; then this limit equals $g'(a)$ where $\forall x \in \mathbb{R}, g(x) = f(x) +(\frac{1}{6} - f'(a))x$. This exercise is just there for you to remember that some limits are better calculated when seen as derivatives at some point. One typical example is $\lim \limits_{x \to 0} \frac{\sin(x)}{x} = 1$, though depending on how you define $\sin$ in the first place, this one might be trivial.
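Numerically, the difference quotient does approach $f'(9) = \frac{1}{6}$ for $f(x)=\sqrt{x}$ (a quick sketch):

```python
import math

# The difference quotient (sqrt(9 + h) - 3)/h approaches f'(9) = 1/6
# for f(x) = sqrt(x).
for h in [1e-2, 1e-4, 1e-6]:
    q = (math.sqrt(9 + h) - 3) / h
    print(h, q)
assert abs(q - 1 / 6) < 1e-6
```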


Multiples of 2 numbers that differ by 1

I have 2 known positive integers, $a$ and $b$. Is there a 'standard' formula to find the lowest (if possible) positive integers $x$ and $y$ such that the following is true?




$$xa = yb + 1$$
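No answer is recorded for this post, but note that $xa = yb + 1$ says $xa \equiv 1 \pmod b$, i.e. $x$ is the modular inverse of $a$ modulo $b$, which exists iff $\gcd(a,b)=1$. A minimal sketch of that observation (the helper name is mine, not from the post; Python 3.8+ supports `pow(a, -1, b)` for modular inverses):

```python
from math import gcd

def smallest_xy(a, b):
    """Smallest positive x with x*a = y*b + 1, or None if gcd(a,b) != 1."""
    if gcd(a, b) != 1:
        return None                    # x*a - y*b = 1 requires gcd(a, b) = 1
    x = pow(a, -1, b) if b > 1 else 1  # modular inverse of a mod b
    return x, (x * a - 1) // b         # y may be 0 in degenerate cases (a = 1)

print(smallest_xy(7, 5))  # (3, 4): 3*7 = 4*5 + 1
print(smallest_xy(4, 6))  # None: gcd is 2
```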


Sunday, October 29, 2017

real analysis - Sequence such that $\lim\limits_{n\to \infty} \frac{x_1^2+x_2^2+...+x_n^2}{n}=0$



Let $(x_n) _{n\ge 1}$ be a sequence of real numbers such that $\lim\limits_{n\to \infty} \frac{x_1^2+x_2^2+...+x_n^2}{n}=0$. Prove that $\lim\limits_{n\to \infty} \frac{x_1+x_2+...+x_n}{n}=0$.
I used the definition of the limit to conclude that $\exists N\in \mathbb{N} $ such that $|\frac{x_1^2+x_2^2+...+x_n^2}{n}|<\frac{1}{n^2}$, $\forall n\ge N$. Hence, we get that $|x_1^2+x_2^2+...+x_n^2|<\frac{1}{n}$.
Now here comes the part where I am not really sure. I think that this implies that $\sum_{n=1}^{\infty}x_n^2=0$, and as a result $x_n\to 0$,which solves the problem because if we use the Stolz-Cesaro lemma we get that $\lim\limits_{n\to \infty} \frac{x_1+x_2+...+x_n}{n}=\lim\limits_{n\to \infty} x_n=0$.


Answer



$\lim_{n\to \infty} \frac{x_1^2+x_2^2+...+x_n^2}{n}=0$
does not imply that $|\frac{x_1^2+x_2^2+...+x_n^2}{n}|<\frac{1}{n^2}$ for sufficiently large $n$. Also $\sum_{n=1}^{\infty}x_n^2=0$ would be true only if all $x_n$ are zero, so that approach cannot work, unfortunately.




But the Cauchy-Schwarz inequality gives
$$
\left |\sum_{k=1}^n 1\cdot x_k \right| \le \sqrt n \cdot \sqrt{\sum_{k=1}^n x_k^2}
$$

and therefore
$$
\left |\frac 1n \sum_{k=1}^n x_k \right| \le \sqrt{\frac 1n \sum_{k=1}^n x_k^2}
$$



This is also a special case of the Generalized mean inequality:

$$
\sqrt[p]{\frac 1n \sum_{k=1}^n x_k^p} \le \sqrt[q]{\frac 1n \sum_{k=1}^n x_k^q}
$$

for non-negative real numbers $x_1, \ldots, x_n$ and $0 < p < q$.
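A quick numerical check of the displayed Cauchy-Schwarz bound, with a random sequence (a throwaway sketch):

```python
import math
import random

# Check |mean(x_k)| <= sqrt(mean(x_k^2))   (Cauchy-Schwarz / power means)
random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(1000)]
n = len(xs)
lhs = abs(sum(xs) / n)
rhs = math.sqrt(sum(x * x for x in xs) / n)
assert lhs <= rhs
print(lhs, rhs)
```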


Saturday, October 28, 2017

Proof involving functional equation



I'm trying to prove that if $$f(x+n)=f(x)f(n)$$ for all $x\in \Bbb R$ and $n \in \Bbb N$, then it also holds for $x,n \in \Bbb R$. One "argument" I came up with was regarding the symmetry. There's no reason why one should be constrained to the integers, while the other can be any real number, but that's not a proper argument.







Another thing I thought of is this: If we set $n=1$ then we get $$\begin{align} f(x+1) &= f(1)f(x) \tag{1} \end{align}$$ which is true for all $x \in \Bbb R$, now if we instead set $x=1$ then we get $f(n+1)=f(n)f(1)$ which must also be true for $n \in \Bbb R$ because $(1)$ is. What keeps me from being satisfied is that $n\in \Bbb R$ under the assumption that $x=1$.



Is my reasoning valid or not?



Edit: Forgot an important bit of information: $f$ is a continuous function.


Answer



Your proof is not valid; you are noticing that if $x$ is an integer then $n$ could vary anywhere in the reals, but this is just stating that the relation $f(x+n)=f(x)f(n)$ has a symmetry where we swap $x$ and $n$ and doesn't tell us anything about whether $f(x+y)=f(x)f(y)$ when both are real.



More directly, the statement you're trying to prove is false, so obviously your proof of it is not valid. Let $f$ be the function

$$f(x)=a^{x+g(x)}$$
where $g(x)$ is a function such that $g(x+1)=g(x)$ and $g(0)=0$; clearly, for any integer $n$, it holds that $g(x+n)=g(x)$ and $g(n)=0$. Then, for integer $n$ and real $x$, it holds that
$$f(x+n)=a^{x+n+g(x+n)}$$
but we can write $x+n+g(x+n)=(x+g(x))+(n+g(n))$ so we have



$$f(x+n)=a^{x+g(x)}a^{n+g(n)}=f(x)f(n).$$
However, it's easy to choose reals for which $f(x+y)=f(x)f(y)$ does not hold; for instance, choosing $x=y=\frac{1}2$ and letting $k=g(\frac{1}2)+\frac{1}2$, we get
$$f(1)=f\left(\frac{1}2\right)f\left(\frac{1}2\right)$$
$$a = a^k\cdot a^k = a^{2k}$$
which does not hold if $a\neq 1$ and $g(\frac{1}2)\neq 0$.
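The counterexample can be checked numerically. Here is a sketch with $a=2$ and the particular choice $g(x)=\sin^2(\pi x)$, which satisfies $g(x+1)=g(x)$, $g(0)=0$ and $g(\frac12)=1\neq0$ (this specific $g$ is my choice, not from the answer):

```python
import math

a = 2.0

def g(x):
    return math.sin(math.pi * x) ** 2   # period 1, g(0) = 0, g(1/2) = 1

def f(x):
    return a ** (x + g(x))

# f(x + n) = f(x) f(n) holds for integer n ...
for x in [0.3, 1.7, -0.25]:
    for n in [1, 2, 5]:
        assert abs(f(x + n) - f(x) * f(n)) < 1e-9 * f(x + n)

# ... but fails for real arguments: f(1/2)^2 = 2^3 = 8, while f(1) = 2
assert abs(f(0.5) ** 2 - 8) < 1e-9
assert abs(f(1) - 2) < 1e-9
print(f(0.5) ** 2, f(1))
```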



trigonometry - Trigonometric equation cos sin and power



The problem is $2\cos t - 3\sin^2t +2 = 0$.
I get to $2\cos t -3\sin^2t =-2$
I think that I need to use a trigonometric identity like $\cos(x+y)$ and to divide $2\cos t -3\sin^2t$ by $\sqrt{2^2+3^2}$.



Do you know how to solve this?


Answer



$$2 \cos t - 3 \sin^2t +2 = 0$$ Write $$\sin^2 t=1-\cos^2 t, $$ then you have a quadratic equation. Solve for $\cos t$: $$ 2\cos t -3(1-\cos^2 t)+2=0\\3\cos ^2 t+2\cos t-3+2=0\\\cos t=-1,\;\frac{1}{3}. $$
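Both roots can be verified directly (a throwaway check):

```python
import math

# cos t = -1  ->  t = pi ;   cos t = 1/3  ->  t = arccos(1/3)
for t in [math.pi, math.acos(1 / 3)]:
    val = 2 * math.cos(t) - 3 * math.sin(t) ** 2 + 2
    assert abs(val) < 1e-12
print("both roots satisfy the equation")
```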


algorithms - GCD and LCM of three numbers



Given two positive integers $G$ and $L$, how many solutions $(x, y, z)$ are there satisfying $\gcd(x, y, z) = G$ and $\operatorname{lcm}(x, y, z) = L$? Here $\gcd(x, y, z)$ means the greatest common divisor of $x$, $y$ and $z$, while $\operatorname{lcm}(x, y, z)$ means their least common multiple. Also, $(1,2,3)$ and $(1,3,2)$ are two different solutions, while $(1,2^*,2)$ and $(1,2,2^*)$ are the same solution ($*$ is just a symbol). Are there any ideas about this? Thanks in advance!


Answer



Let's write $\displaystyle x = \prod_{i=1}^n p_i^{a_i}$, $\displaystyle y = \prod_{i=1}^n p_i^{b_i}$, $\displaystyle z = \prod_{i=1}^n p_i^{c_i}$, where the $p_i$ are the primes that are factors of at least one of $x, y, z$.


Then $\displaystyle \gcd(x,y,z) = \prod_{i=1}^n p_i^{\min(a_i,b_i,c_i)}$ and $\displaystyle \mathrm{lcm}(x,y,z) = \prod_{i=1}^n p_i^{\max(a_i,b_i,c_i)}$.


Now, suppose that $n=1$ for simplicity. Then the number of $(x,y,z)$ with gcd $p^q$ and lcm $p^r$ is just the number of triples $(a,b,c)$ with minimum $q$ and maximum $r$. If $r=q$ this is just $1$, while if $q<r$, inclusion-exclusion over the triples with entries in $[q,r]$ gives $(r-q+1)^3 - 2(r-q)^3 + (r-q-1)^3 = 6(r-q)$.

So define $f(s) = 0$ if $s<0$, $f(0) = 1$, and $f(s) = 6s$ if $s>0$.


Now let $\displaystyle G = \prod_{i=1}^n p_i^{q_i}$ and $\displaystyle L = \prod_{i=1}^n p_i^{r_i}$. Then one can show that the number of triples with gcd $G$ and lcm $L$ is $\displaystyle \prod_{i=1}^n f(r_i-q_i)$
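The formula can be checked against brute force for small $G$ and $L$ (a sketch; the helper names are mine):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def brute_count(G, L):
    # count ordered triples (x, y, z) with gcd G and lcm L
    return sum(1
               for x in range(1, L + 1)
               for y in range(1, L + 1)
               for z in range(1, L + 1)
               if reduce(gcd, (x, y, z)) == G
               and reduce(lcm, (x, y, z)) == L)

def f(s):
    return 1 if s == 0 else (6 * s if s > 0 else 0)

# G = 2, L = 24 = 2^3 * 3: exponent gaps are 3 - 1 = 2 and 1 - 0 = 1,
# so the formula gives f(2) * f(1) = 12 * 6 = 72
assert brute_count(2, 24) == f(2) * f(1) == 72
print(brute_count(2, 24))
```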


Friday, October 27, 2017

calculus - Can the derivative of a function be such that it is not continuous?





My guess is that all derivatives ought to be continuous but perhaps there's some obscure example of a function for which this is not the case. Is there any theorem that answers this perhaps?


Answer



The standard counterexample is $f(x)=x^2\sin(\frac{1}{x})$ for $x\neq 0$, with $f(0)=0$.



This function is everywhere differentiable, with $f^{\prime}(x)=2x\sin(\frac{1}{x})-\cos(\frac{1}{x})$ if $x\neq 0$ and $f^{\prime}(0)=0$. However, $f^{\prime}$ is not continuous at zero because $\lim_{x\to0}\cos(\frac{1}{x})$ does not exist.



While $f^{\prime}$ need not be continuous, it does satisfy the intermediate value property. This is known as Darboux's theorem.
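The failure of continuity can be seen numerically: $f'$ takes values near $-1$ and $+1$ on two sequences tending to $0$, while $f'(0)=0$ (a sketch):

```python
import math

def fprime(x):
    # derivative of x^2 sin(1/x) away from 0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Two sequences tending to 0 on which f' approaches -1 and +1:
for n in [10, 100, 1000]:
    x1 = 1 / (2 * math.pi * n)        # cos(1/x1) = 1   ->  f'(x1) ~ -1
    x2 = 1 / ((2 * n + 1) * math.pi)  # cos(1/x2) = -1  ->  f'(x2) ~ +1
    print(x1, fprime(x1), x2, fprime(x2))

assert abs(fprime(1 / (2 * math.pi * 1000)) + 1) < 1e-3
assert abs(fprime(1 / ((2 * 1000 + 1) * math.pi)) - 1) < 1e-3
```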


A Non-Standard Functional Equation



Suppose that $a\in(0,1)$ and that $f:\mathbb R \to \mathbb R$. I am trying to solve for $f$ such that



$$ (1+f(x^{-1}))(f(x)-f(x)^{-1})=\frac{(x-a)(1-ax)}{x} .$$




I really don't know where to start from though. I've tried the method of undetermined coefficients supposing that $f$ is linear and quadratic, but that did not work. I think I need a better guess for a functional form. I am not really looking for the solution, but for some help with where to start from. References are very welcome.









Suggestion from the comments:



The power series,

$$ f(x) = \sum_{i=0}^\infty b_ix^i $$
leads to the following system of equations:



\begin{align}
&\;\vdots\\
[x^4] &: \;\;\sum_{i=3}^{\infty}\left(\sum_{j=0}^{i}b_{j}b_{i-j}\right)b_{i-3}=(1+a^2)b_3-a(b_2+b_4)-2(b_0 b_3+b_1 b_2)\\
[x^3] &: \;\;\sum_{i=2}^{\infty}\left(\sum_{j=0}^{i}b_{j}b_{i-j}\right)b_{i-2}=(1+a^2)b_2-a(b_1+b_3)-(b_1^2+2b_0 b_2)\\
[x^2] &: \;\;\sum_{i=1}^{\infty}\left(\sum_{j=0}^{i}b_{j}b_{i-j}\right)b_{i-1}=(1+a^2)b_1-a(b_0+b_2)-2b_0 b_1\\
[x^1] &: \;\;\sum_{i=0}^{\infty}\left(\sum_{j=0}^{i}b_{j}b_{i-j}\right)b_{i}=1+(a^2-b_0)b_0-ab_1\\
[x^0] &: \;\;\sum_{i=0}^{\infty}\left(\sum_{j=0}^{i}b_{j}b_{i-j}\right)b_{i+1}=b_1-ab_0\\

[x^{-1}] &: \;\;\sum_{i=0}^{\infty}\left(\sum_{j=0}^{i}b_{j}b_{i-j}\right)b_{i+2}=b_2\\
[x^{-2}] &: \;\;\sum_{i=0}^{\infty}\left(\sum_{j=0}^{i}b_{j}b_{i-j}\right)b_{i+3}=b_3\\
&\; \vdots
\end{align}


Answer



Hint  (while assuming $x \ne 0, f(x) \ne 0$ etc) ... First note that the RHS is symmetric with respect to $x \mapsto \frac{1}{x}$ which becomes obvious if rewritten as $\frac{(x-a)(1-ax)}{x} = (x-a)(\frac{1}{x}-a)$. This suggests taking the $x \mapsto \frac{1}{x}$ counterpart of the identity and equating the two to get:



$$
\begin{align}
& \left(1+f\left(\frac{1}{x}\right)\right) \left(f(x)-\frac{1}{f(x)}\right) = \Big(1+f(x)\Big) \left(f\left(\frac{1}{x}\right)-\frac{1}{f\left(\frac{1}{x}\right)}\right) \\

\iff \quad & \frac{f(x) \big(f(x)+1\big)}{\big(f(x)\big)^2 -1} = \frac{f(\frac{1}{x}) \big(f(\frac{1}{x})+1\big)}{\big(f(\frac{1}{x})\big)^2 -1} \\
\iff \quad & \frac{f(x)}{f(x) -1} = \frac{f(\frac{1}{x})}{f(\frac{1}{x})-1} \\
\iff \quad & f(x) = f(\frac{1}{x})
\end{align}
$$



Then, with $y = f(x) = f(\frac{1}{x})$, the original identity becomes:



$$\Big(1+y\Big)\Big(y - \frac{1}{y}\Big) = \Big(x-a\Big)\Big(\frac{1}{x} - a\Big)$$




which gives a cubic in $y$ that can be solved to obtain a closed form for $y=f(x)$ though it's not obvious it would be a "nice" one.
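A small numeric sketch of this construction, with arbitrary sample values $a=0.3$, $x=2.7$ (the bisection bracket is an assumption of mine, not from the answer):

```python
# Check RHS symmetry under x -> 1/x, then solve (1 + y)(y - 1/y) = c
# for y by bisection.
def rhs(x, a):
    return (x - a) * (1 - a * x) / x

a, x = 0.3, 2.7
c = rhs(x, a)
assert abs(c - rhs(1 / x, a)) < 1e-12     # RHS is symmetric in x <-> 1/x

def F(y):
    return (1 + y) * (y - 1 / y) - c

lo, hi = 1.0 + 1e-9, 10.0                 # assumed bracket: F(lo) < 0 < F(hi)
for _ in range(200):
    mid = (lo + hi) / 2
    if F(mid) < 0:
        lo = mid
    else:
        hi = mid
y = (lo + hi) / 2
assert abs((1 + y) * (y - 1 / y) - rhs(x, a)) < 1e-9
print(y)
```

So with $f(x)=f(1/x)=y$, the original identity is satisfied at this sample point.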


number theory - $x^2 + 3x + 7 \equiv 0 \pmod{37}$




I'm trying to solve the following



$x^2 + 3x + 7 \equiv 0 \pmod {37}$



What I've tried -



I've tried making the left side as a square and then I know how to solve



but couldn't make it as a square root..




We also learned in class that you can multiply the left side and the modulo by $4a$



(that is $4\cdot 1 = 1$) and continue somehow - which I can't figure out how.



any help will be appreciated.


Answer



In the real numbers, a method of finding a solution to a quadratic equation is to complete the square. This would involve adding and subtracting $(b/2)^2$. Here $b=3$, and remember that $1/2 \equiv 19 \pmod{37}$ since $2 \cdot 19 = 38 \equiv 1$.



Specifically notice: $$(x+3 \cdot 19)^2 \equiv x^2 + 2\cdot 3 \cdot 19 x + (3 \cdot 19)^2$$ $$\equiv x^2 + 3x + (20)^2 \mod 37$$




Note that $3 \cdot 19 \equiv 20 \mod 37$. Also $20^2 = 400 \equiv 30 \mod 37$.



Thus the method of completing the square is as follows $$x^2 + 3x + 7 \equiv x^2 + 3x + 20^2 - 20^2 + 7 \equiv (x+20)^2 - 23 \mod 37$$



Finally this means you need to solve $$(x+20)^2 \equiv 23 \mod 37$$
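The completing-the-square step can be verified exhaustively, and the remaining congruence brute-forced (a quick sketch):

```python
# Check that x^2 + 3x + 7 == (x + 20)^2 - 23 (mod 37) for every residue x,
# and brute-force the congruence s^2 == 23 (mod 37).
p = 37
for x in range(p):
    assert (x * x + 3 * x + 7) % p == ((x + 20) ** 2 - 23) % p

sqrts = [s for s in range(p) if (s * s) % p == 23]
print(sqrts)  # empty would mean 23 is a non-residue mod 37
```

For these particular numbers the search turns up no square roots of $23$, so $23$ is a quadratic non-residue mod $37$ and the original congruence has no solutions.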


Thursday, October 26, 2017

calculus - Does Intermediate Value Theorem $\Rightarrow$ continuous?

I am trying to understand the Intermediate Value Theorem and wonder whether its converse holds. I mean, if we know that for every $c$ with $f\left(a\right)\le c\le f\left(b\right)$ there exists $x_0\in \left[a,b\right]$ such that $f\left(x_0\right)=c$, is $f$ then continuous on $\left[a,b\right]$? Thanks!



EDITED: Continuity $\Rightarrow$ Intermediate Value Property. Why is the converse not true? There is an absolutely fantastic answer to this!

Wednesday, October 25, 2017

combinatorics - Sum of combinations with varying $n$




What is the sum of the number of ways of choosing $n$ elements from $(n+r)$ elements, where $r$ is fixed and $n$ varies from $1$ to $m$? Can this be reduced to a formula?



$$ \sum ^m _{n=1} \binom{n + r}n $$


Answer



Yes, your formula is correct, and:
$$\sum_{n=1}^m \dbinom{n+r}n=\sum_{n=1}^m\dbinom{n+r}{r}$$



and you can prove by induction that this equals $\dbinom{m+r+1}{r+1}-1$.
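A quick check of the closed form with `math.comb` (a sketch, in place of the induction):

```python
from math import comb

# sum_{n=1}^{m} C(n + r, n) == C(m + r + 1, r + 1) - 1
for m in range(1, 15):
    for r in range(0, 10):
        s = sum(comb(n + r, n) for n in range(1, m + 1))
        assert s == comb(m + r + 1, r + 1) - 1
print("closed form verified")
```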



real analysis - Prove that $f$ is differentiable at $x_0$ if and only if $g$ is differentiable at $x_0$



Suppose $f: D \rightarrow \mathbb{R}$ and $g: E \rightarrow \mathbb{R}$ and that $x_0$ is in the domain of both functions and an accumulation point of their intersection. Suppose further that there exists $\mu >0$ such that the intersection of the $\mu$-neighborhood of $x_0$ with $D$ is equal to the intersection of the $\mu$-neighborhood with $E$ and for all $x$ in that intersection, $f(x)=g(x)$. Prove that $f$ is differentiable at $x_0$ if and only if $g$ is differentiable at $x_0$







The concept of this proof is not really clicking with me. Given that most of my theorems and lemmas about differentiability are stated for a single function, I'm lost on how to approach the proof.


Answer



Hint: write down the definition of $f'(x_0)$ and $g'(x_0)$.






Denote the "$\mu$-neighborhood" of $x_0$ as $H$. Then $S:=H\cap D=H\cap E$. Also $f=g$ on $S$.




Now
$$
f'(x_0)=\lim_{\substack{x\to x_0\\x\in S\setminus\{x_0\}}}\frac{f(x)-f(x_0)}{x-x_0}
=\lim_{\substack{x\to x_0\\x\in S\setminus\{x_0\}}}\frac{g(x)-g(x_0)}{x-x_0}
$$


real analysis - Show that $\lim\limits_{n\to\infty}\frac{1}{n}\sum\limits_{k=0}^{n-1}e^{ik^2}=0$



TL;DR : The question is how do I show that $\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}e^{ik^2}=0$ ?



More generaly the question would be : given an increasing sequence of integers $(u_k)$ and an irrational number $\alpha$, how do I tell if $\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi \alpha u_k}=0$ ? I'm not asking for a criterium for completely general sequences, an answer for sequences like $u_k=k^2$, $v_k=k!$ or $w_k=p(k)$ with $p\in \mathbf Z [X]$ would already be awesome.




A little explanation about this question :



In Real and Complex Analysis by Rudin there is the folowing exercise :



Let $f$ be a continuous, complex valued, $1$-periodic function and $\alpha$ an irrational number. Show that
$\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\alpha k)=\int_0^1f(x)\mathrm d x$. (We say that $(\alpha k)_k$ is uniformly distributed in $\mathbf R / \mathbf Z$)



With the hint given by Rudin the proof is pretty straightforward : First one show that this is true for every $f_j=\exp(2i\pi j\cdot)$ with $j\in \mathbf{Z} $. Then using density of trigonometric polynomials in $(C^0_1(\mathbf{R}),\|\cdot\|_\infty)$ and the fact that the $0$-th Fourier coefficient of $f$ is it's integral over a period, one can conclude using a $3\varepsilon$ argument. This proof is possible because one can compute explicitly the sums $$\displaystyle \frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi j \alpha k}=\frac{1}{n}\cdot\frac{1-e^{2i\pi j\alpha n}}{1-e^{2i\pi j\alpha}}\longrightarrow 0 \text{ when }n\to\infty \text{ and }j\in \mathbf{Z}^*.$$



Now using a different approach (with dynamical systems and ergodic theorems) Tao show in his blog that $(\alpha k^2)_k $ is uniformly distributed in $\mathbf R / \mathbf Z$ (corollary 2 in this blog). I'd like to prove this result using the methods of the exercice of Rudin, but this reduce to show that $$\displaystyle \frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi j \alpha k^2}\longrightarrow 0 \text{ when }n\to\infty \text{ and }j\in \mathbf{Z}^*.$$ Hence my question.




P.S. When I ask Wolfram Alpha to compute $\sum_{k\geq0}e^{ik^2}$ it answers with some particular value of the Jacobi theta function. Of course the series is not convergent, but maybe it's some kind of resummation technique or analytic continuation. I'm not familiar with these things but it might be interesting to look in that direction.


Answer



Gauss sums



Your sum is strongly related to the Gauss sum. The usual trick is to compute the modulus. This works particularly smoothly over $\mathbf{Z}/p\mathbf{Z}$ as with usual Gauss sums, but essentially it works here too: If $S = \sum_{k=0}^{n-1} e^{ik^2},$ then
\begin{align}
|S|^2 &= \sum_{k=0}^{n-1} \sum_{k'=0}^{n-1} e^{i(k'^2 - k^2)}\\
&= \sum_{h=-n+1}^{n-1}\ \sum_{\substack{0\leq k<n\\ 0\leq k+h<n}} e^{i(2kh+h^2)}
\end{align}

where we have written $h=k'-k$. Now the inner sum is a geometric series with at most $n$ terms and with common ratio $e^{i2h}$, so we have
\begin{equation}
\left|\sum_{\substack{0\leq k<n\\ 0\leq k+h<n}} e^{i(2kh+h^2)}\right| \leq \min\left(\frac2{|1-e^{i2h}|},n\right).
\end{equation}
Thus
\begin{equation}
|S|^2 \leq 2\sum_{h=0}^{n-1} \min\left(\frac2{|1-e^{i2h}|},n\right).
\end{equation}
Now fix $\epsilon>0$. Since $(h/\pi)_{h=1}^\infty$ is equidistributed mod $1$ the number of $h=0,\dots,n-1$ for which $|1-e^{i2h}| \leq \epsilon$ is $O(\epsilon n)$, so
\begin{equation}

|S|^2 \leq 2\sum_{\substack{0\leq h < n\\ |1-e^{i2h}| \leq \epsilon}}n + 2\sum_{\substack{0\leq h < n\\ |1-e^{i2h}| > \epsilon}} \frac2{|1-e^{i2h}|} \leq O(\epsilon n^2) + O(\epsilon^{-1} n).
\end{equation}
Since $\epsilon$ was arbitrary this implies $|S|^2=o(n^2)$, and hence $|S|=o(n)$.
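The decay $|S|=o(n)$ can be observed numerically (a sketch; the threshold in the assertion is just a loose sanity bound, since typically $|S|$ is of order $\sqrt{n}$):

```python
import cmath

def weyl_avg(n):
    # |(1/n) * sum_{k=0}^{n-1} e^{i k^2}|
    s = sum(cmath.exp(1j * k * k) for k in range(n))
    return abs(s) / n

for n in [100, 1000, 10000]:
    print(n, weyl_avg(n))

assert weyl_avg(10000) < 0.2   # the averages visibly shrink toward 0
```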



The van der Corput trick



The only thing we really used about $k^2$ here is that for fixed $h$ we understand the behaviour of the sequence $(k+h)^2 - k^2$, and indeed if you repeat the above calculation but with a judicious application of the Cauchy--Schwarz inequality then you prove a general fact called van der Corput's difference theorem (aka Weyl's differencing trick): if $(u_k)$ is a sequence such that for each $h\geq 1$ the sequence $(u_{k+h}-u_k)$ is equidistributed modulo $1$, then $(u_k)$ is equidistributed modulo $1$. See for example Corollary 2 on Tao's blog here. This implies for example that $\sum_{k=0}^{n-1} e^{i2\pi p(k)} = o(n)$ for every nonconstant polynomial $p$ with irrational leading coefficient.



Other sequences




In general there is no hard and fast rule about $\lim \frac1n \sum_{k=0}^{n-1} e^{i2\pi \alpha u_k}$, i.e., about equidistribution of $(u_k)$, and in fact the other sequence you mention, $k!$, is indeed very different. To take a slightly simpler example which is qualitatively similar, consider $u_k = 2^k$. Let $f_n(\alpha)$ be the exponential sum $\frac1n \sum_{k=1}^n e^{i2\pi \alpha 2^k}$. Then it is a well known consequence of the ergodic theorem that $f_n(\alpha)$ converges to $0$ for almost every $\alpha$. On the other hand clearly $f_n(\alpha)\to 1$ for every dyadic rational $\alpha$, as $\alpha 2^k$ is eventually constantly $0$ mod $1$. But then by Baire category theorem we must have for a comeagre set of $\alpha$ that $f_n(\alpha)$ does not converge to $0$. Thus it's difficult to say anything too general about $f_n(\alpha)$, especially for particular $\alpha$. For instance, proving $\lim_{n\to\infty} f_n(\sqrt{2})=0$ is a famous open problem.



Test your understanding



Here are some related problems to think about, not all of which I know off-hand how to answer!




  1. Is $(\sqrt{n})$ equidistributed mod $1$?

  2. What about $(\log n)$?

  3. Show that there are some $\alpha$ for which $f_n(\alpha)$ does not converge.


  4. Determine $\{z: f_n(\alpha)\to z~\text{for some}~\alpha\}$.

  5. Let $g_n(\alpha) = \frac1n \sum_{k=1}^n e^{i2\pi \alpha k!}$. Prove statements for $g_n$ analogous to those we proved for $f_n$.

  6. Is there a power of $2$ with at least $7$ decimal digits equal to $7$?

  7. Think of other silly (but not open) problems like these ones.


Tuesday, October 24, 2017

If it is an arithmetic progression then find the value of $s$


The value of $s$ is given by a series shown in an image (not reproduced here; its general term appears in the answer below). I started this question by forming an A.P., since the common difference is the same, and concluded that I need the number of terms to proceed further. But my value for the number of terms comes out as a fraction, which is not possible. I tried many times but end up with the same value, so I can't proceed, as I don't know the value of $n$ to put into the sum formula for an arithmetic progression.


Answer



Hint:


The general term is $$\frac{1}{(5r-3)(5r+2)}$$


We use partial fraction decomposition, so:


$$\frac{1}{(5r-3)(5r+2)}=\frac{A}{5r-3}+\frac{B}{5r+2}$$



Can you find $A$ and $B$? (Hint 2: $A+B=0$)


Because $A+B=0$, can you see that between the decomposition of $$\frac{1}{(5r-3)(5r+2)}$$ and $$\frac{1}{(5(r+1)-3)(5(r+1)+2)}=\frac{1}{(5r+2)(5r+7)}$$


some terms are cancelled? Figure out which ones will not be cancelled and sum them.


This method is known as telescoping.
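As a check on the telescoping (snippet mine, not part of the original hint): with $A=\frac15$, $B=-\frac15$ the partial sum collapses to $\frac15\left(\frac12-\frac1{5n+2}\right)$, which exact rational arithmetic confirms.

```python
from fractions import Fraction

def direct_sum(n):
    # sum of the general term 1/((5r-3)(5r+2)) for r = 1..n, computed exactly
    return sum(Fraction(1, (5*r - 3) * (5*r + 2)) for r in range(1, n + 1))

def telescoped(n):
    # closed form after cancellation: (1/5) * (1/2 - 1/(5n+2))
    return Fraction(1, 5) * (Fraction(1, 2) - Fraction(1, 5*n + 2))

print(direct_sum(10), telescoped(10))
```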


calculus - Evaluating $\lim_{x\to +\infty} \sqrt{x^2+4x+1} - x$





I'm looking to evaluate



$$\lim_{x\to +\infty} \sqrt{x^2+4x+1} -x$$



The answer in the book is $2$. How do I evaluate this limit simply?




I usually solve limits such as this one with shortcut rules: if the numerator's degree is less than the denominator's, the limit is $0$; if the degrees are equal, take the ratio of the leading coefficients; if the numerator's degree is greater, take the leading terms, simplify algebraically, and then plug in $-\infty$ or $+\infty$.
Please keep in mind that I do not know L'Hôpital's rule.


Answer



$$\begin{align}\lim_{x\to \infty}\sqrt{x^2+4x+1}-x&=\lim_{x\to\infty}(\sqrt{x^2+4x+1}-x)\cdot\frac{\sqrt{x^2+4x+1}+x}{\sqrt{x^2+4x+1}+x}\\&=\lim_{x\to\infty}\frac{(x^2+4x+1)-x^2}{\sqrt{x^2+4x+1}+x}\\&=\lim_{x\to\infty}\frac{4+\frac 1x}{\sqrt{1+\frac 4x+\frac{1}{x^2}}+1}\\&=\frac{4}{1+1}=2\end{align}$$
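A quick numerical check of this algebra (my own snippet, not from the answer): expanding the conjugate form gives $2 - \frac{3}{2x} + O(1/x^2)$, so the values should approach $2$ from below as $x$ grows.

```python
import math

def f(x):
    # sqrt(x^2 + 4x + 1) - x; asymptotically 2 - 3/(2x) + O(1/x^2)
    return math.sqrt(x*x + 4*x + 1) - x

for x in (10.0, 1e3, 1e6):
    print(x, f(x))
```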


Is it possible to have a positive exponential function that starts below zero?



I'm working on a project for my math class. We need to make an image on our calculators (Texas Instruments) using the DrawF function (which graphs functions as y=). I need an exponential function that starts below zero. From what I understand, they can't (according to my Algebra II textbook and a few Google searches).


Is it possible to draw the line I need with an exponential?


Side note: I would rather use this than, say, mushing it together with other types of equations because we need at least two exponential functions, and I can't find a better place to use them.


Answer



Multiplying an exponential function by any real number still gives an exponential-type function.


Take $$f(x) = -e^{x}$$


Then $f(0) = -1$. On the other hand, if you want purely a function of the form $f(x) = a^x$, you will need to use complex numbers, but then there is no real concept of a number being "negative".


Monday, October 23, 2017

abstract algebra - Describe the elements in $\mathbb{Q}(\pi)$



Describe the elements in $\mathbb{Q}(\pi)$



Attempt: $\mathbb{Q}(\pi)$ is the smallest field which contains $\mathbb{Q}$ and $\pi$



We know that $\nexists~ f(x) \in \mathbb{Q}[x]$ such that $f(\pi)=0$



Hence, $\mathbb{Q}[x]/\langle p(x) \rangle \not\approx \mathbb{Q}(\pi)~~\forall~~p(x) \in \mathbb{Q}[x]~~ ;~~ p(x)$ is irreducible in $\mathbb{Q}[x]$.




How do I move ahead? Please note that this exercise happens to be in the field extension chapter before algebraic extensions or finite fields are introduced.



EDIT: So it means that the mapping $f: \mathbb{Q}[x] \rightarrow \mathbb{Q}[\pi]$ is an isomorphism, since $\ker f=\{0\}$. Hence the fields of fractions of these integral domains are also isomorphic. I get up to this point, but how do we relate this to $\mathbb Q[\pi]$? Does this $\implies \mathbb{Q}(\pi)= \{a_0+a_1 \pi + a_2 \pi^2 + \cdots + a_n \pi^n ~~|~~a_0,a_1,\cdots, a_n \in \mathbb Q\}$?



Thank you for your help.


Answer



If $\alpha$ is a complex number that is "transcendental over" $\mathbb{Q}$ then $\mathbb{Q}(\alpha)$ is isomorphic to $\mathbb{Q}(x)$ i.e. the field of rational functions over $\mathbb{Q}$.



To see why, define the map
$f: \mathbb{Q}[x] \to \mathbb{Q}[\alpha]$, where $\mathbb{Q}[\alpha]$ is the smallest ring containing $\alpha$, by $f(p(x)) = p(\alpha)$ for polynomials $p(x)\in \mathbb{Q}[x]$. This mapping is a homomorphism, clearly surjective, and most importantly injective. The reason is that if there were a non-trivial polynomial $q(x)$ such that $f(q(x)) = 0$, it would mean $q(\alpha) = 0$, which is impossible as we assumed that $\alpha$ is transcendental. Thus we conclude that $\mathbb{Q}[x]$ and $\mathbb{Q}[\alpha]$ are isomorphic as rings. Since they both are integral domains, we can form their fields of fractions and get the claimed result.



probability theory - Expected value of visits in a state of a discrete Markov chain





Let




  • $X=(X_n)_{n\in\mathbb N_0}$ be a Markov chain with values in a at most countable Polish space $E$ and $\mathcal E$ be the Borel $\sigma$-algebra on $E$

  • $(\operatorname P_x)_{x\in E}$ be the distributions of $X$

  • $N(y)=\sum_{n\in\mathbb N_0}1_{\left\{X_n=y\right\}}$ be the number of visits of $X$ in $y\in E$



Clearly, $$\operatorname E_x[N(y)]=\sum_{n\in\mathbb N_0}\operatorname P_x[X_n=y]\;.$$




I've read that it holds $$\operatorname E_x[N(y)]=\sum_{k\in\mathbb N}\operatorname P_x[N(y)\ge k]\;,$$ but I don't understand why this is true. Is it a typo, with "$=$" really meant instead of "$\ge$"?


Answer



Here's a formula that is useful in lots of places: if $N$ is a non-negative integer-valued random variable, then $E[N] =\sum_{k=1}^\infty P[N\ge k]$. To see this write
$$
\sum_{k=1}^\infty P[N\ge k]=\sum_{k=1}^\infty \sum_{j=k}^\infty P[N=j]=\sum_{j=1}^\infty \sum_{k=1}^j P[N=j]=\sum_{j=1}^\infty j\cdot P[N=j]=E[N]
$$
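Here is a small exact check of the tail-sum formula with a toy distribution (the pmf is my own example, not from the answer):

```python
from fractions import Fraction

# P[N = j] for a non-negative integer-valued random variable (uniform on {0,1,2,3})
pmf = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4)}

expectation = sum(j * p for j, p in pmf.items())
# E[N] = sum_{k>=1} P[N >= k]; sum the tail probabilities directly
tail_sum = sum(sum(p for j, p in pmf.items() if j >= k) for k in range(1, max(pmf) + 1))
print(expectation, tail_sum)
```

Both come out to $3/2$, as the interchange-of-summation argument predicts.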


Sunday, October 22, 2017

calculus - For which $p>0$ does $\sum_{n=3}^{\infty}\frac{\log(n)}{n^p}$ converge?

For which $p>0$ does $\sum_{n=3}^{\infty}\frac{\log(n)}{n^p}$ converge? I tried all the criteria for series convergence I know, but I'm not getting any further with this exercise. I'm not asking someone to do my homework for me, but could somebody tell me what criteria I should try please or how to proceed?

linear algebra - Can we prove $BA=E$ from $AB=E$?




I was wondering if $AB=E$ ($E$ is identity) is enough to claim $A^{-1} = B$ or if we also need $BA=E$. All my textbooks define the inverse $B$ of $A$ such that $AB=BA=E$. But I can't see why $AB=E$ isn't enough. I can't come up with an example for which $AB = E$ holds but $BA\ne E$.

I tried some stuff but I can only prove that $BA = (BA)^2$.



Edit: For $A,B \in \mathbb{R}^{n \times n}$ and $n \in \mathbb{N}$.


Answer



If $AB = E$, then (the linear map associated to) $A$ has a right inverse, so it is surjective; and as the dimension is finite, surjectivity and injectivity are equivalent, so $A$ is bijective and has an inverse $A^{-1}$. Multiplying $AB=E$ on the left by $A^{-1}$ gives $B=A^{-1}$, and hence $BA=E$ as well.
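A concrete illustration (my own, with a $2\times 2$ example; the answer's dimension argument is what guarantees this in general): construct a right inverse via the adjugate formula and observe that it is also a left inverse.

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[Fraction(2), Fraction(1)], [Fraction(5), Fraction(3)]]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
# right inverse from the adjugate formula: A * B = E
B = [[A[1][1]/det, -A[0][1]/det], [-A[1][0]/det, A[0][0]/det]]

E = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
print(matmul(A, B) == E, matmul(B, A) == E)
```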


Saturday, October 21, 2017

independence - roots of prime numbers are linearly independent over $\mathbb{Q}$

How can we prove by mathematical induction that $1,\sqrt{2}, \sqrt{3}, \sqrt{5},\ldots, \sqrt{p_n}$ ($p_n$ is the $n^{\rm th}$ prime number) are linearly independent over the rational numbers ?



$\underline{\text{base case (n=1)}}$: $1,\sqrt{2}$ are linearly independent over the field $\mathbb{Q}$; otherwise $a\cdot 1+b\sqrt{2}=0 \Leftrightarrow \sqrt{2}=-\frac{a}{b}$ (with $b\neq 0$), which is absurd.



Then I am stuck.

combinatorics - Prove the identity $\sum^{n}_{k=0}\binom{m+k}{k} = \binom{n+m+1}{n}$




Let $n,m \in \mathbb{N}$. Prove the identity $$\sum^{n}_{k=0}\binom{m+k}{k} = \binom{n+m+1}{n}$$




This seems very similar to Vandermonde identity, which states that for nonnegative integers we have $\sum^{m}_{k=0}\binom{m}{k}\binom{n}{r-k} = \binom{m+n}{r}$. But, clearly this identity is somehow different from it. Any ideas?


Answer




We can write $\displaystyle \sum^{n}_{k=0}\binom{m+k}{k} = \sum^{n}_{k=0}\binom{m+k}{m} = \binom{m+0}{m}+\binom{m+1}{m}+\cdots+\binom{m+n}{m}.$



Now we use that the coefficient of $x^r$ in $(1+x)^{t}$ is $\displaystyle \binom{t}{r}.$



So we can write the above series as the following:



Coefficient of $x^m$ in $$\displaystyle \left[(1+x)^m+(1+x)^{m+1}+\cdots+(1+x)^{m+n}\right] = \frac{(1+x)^{m+n+1}-(1+x)^{m}}{(1+x)-1} = \frac{(1+x)^{m+n+1}-(1+x)^{m}}{x}$$



Above we have used the formula for the sum of a geometric progression.




So we get that the coefficient of $x^{m+1}$ in $\displaystyle \left[(1+x)^{m+n+1}-(1+x)^{m}\right]$ is $\displaystyle \binom{m+n+1}{m+1} = \binom{m+n+1}{n}.$


calculus - Usage of mean value theorem; bounded derivative and open interval



Let $f : (0,1) \to \mathbb{R}$ be a function such that $ |f'(x)| \leq 5 $ on the open interval $(0,1)$. Prove that $\lim_{x \to 1^-} f(x)$ exists.



It involves the derivative and the actual function itself, so I think I have to somehow use the mean value theorem.. Also, $f$ is continuous on $(0,1)$ and differentiable on $(0,1)$ ( because the derivative exists there ).



But then, the function is defined on an open interval, so the requirements for the mean value theorem aren't satisfied. I'm guessing we have to consider intervals of the form $[a,b]$ with $a > 0$ and $b < 1$.



Lastly, I don't see the significance of the $5$ ... Is it only there to establish that the derivative is bounded, or does the number itself have some significance (would the same thing hold if we had $3$, for example)?




Please give me a hint, not the solution. Something like "consider the mean value theorem on intervals of the form ... " would be very helpful.


Answer



Pick a sequence $(x_{n})\subseteq(0,1)$ such that $x_{n}\rightarrow 1$. Then
\begin{align*}
|f(x_{n})-f(x_{m})|=|f'(\eta_{n,m})||x_{n}-x_{m}|\leq 5|x_{n}-x_{m}|,
\end{align*}
where $\eta_{n,m}$ is chosen by the Mean Value Theorem. So $(f(x_{n}))$ is convergent. For another sequence $(y_{n})$ such that $y_{n}\rightarrow 1$, consider the sequence $(z_{n})$ defined by $z_{2n}=x_{n}$, $z_{2n+1}=y_{n}$ to claim that the limits of $(f(x_{n}))$ and $(f(y_{n}))$ are the same. So $\lim_{x\rightarrow 1^{-}}f(x)$ exists.


Friday, October 20, 2017

real analysis - Absolutely convergent series of complex numbers

If $\sum_{n=0}^{\infty} a_n$ is an absolutely convergent series of complex numbers and $a_n \ne -1$ $\forall$ $n$, prove that the series



$\sum_{n=0}^{\infty} \frac{a_n}{1+a_n}$ is absolutely convergent.



I'm not sure about this series in $\Bbb C$... I haven't worked with complex series before, so I'm not quite sure how to prove this. I'm trying to start with a proof in $\Bbb R$, but I'm not getting very far. Placing a series in a series is tough to wrap my mind around when the only thing I know is that $a_n \ne -1$, which is an obvious requirement, since otherwise the fraction would be undefined.

elementary number theory - How to use the Extended Euclidean Algorithm manually?


I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?


Answer



Perhaps the easiest way to do it by hand is in analogy to Gaussian elimination or triangularization, except that, since the coefficient ring is not a field, one has to use the division / Euclidean algorithm to iteratively decrease the coefficients till zero. In order to compute both $\rm\,gcd(a,b)\,$ and its Bezout linear representation $\rm\,j\,a+k\,b,\,$ we keep track of such linear representations for each remainder in the Euclidean algorithm, starting with the trivial representation of the gcd arguments, e.g. $\rm\: a = 1\cdot a + 0\cdot b.\:$ In matrix terms, this is achieved by augmenting (appending) an identity matrix that accumulates the effect of the elementary row operations. Below is an example that computes the Bezout representation for $\rm\:gcd(80,62) = 2,\ $ i.e. $\ 7\cdot 80\: -\: 9\cdot 62\ =\ 2\:.\:$ See this answer for a proof and for conceptual motivation of the ideas behind the algorithm (see the Remark below if you are not familiar with row operations from linear algebra).


For example, to solve  m x + n y = gcd(m,n) one begins with

two rows [m 1 0], [n 0 1], representing the two
equations m = 1m + 0n, n = 0m + 1n. Then one executes
the Euclidean algorithm on the numbers in the first column,
doing the same operations in parallel on the other columns,

Here is an example: d = x(80) + y(62) proceeds as:

in equation form | in row form
---------------------+------------
80 = 1(80) + 0(62) | 80 1 0

62 = 0(80) + 1(62) | 62 0 1
row1 - row2 -> 18 = 1(80) - 1(62) | 18 1 -1
row2 - 3 row3 -> 8 = -3(80) + 4(62) | 8 -3 4
row3 - 2 row4 -> 2 = 7(80) - 9(62) | 2 7 -9
row4 - 4 row5 -> 0 = -31(80) +40(62) | 0 -31 40

The row operations above are those resulting from applying
the Euclidean algorithm to the numbers in the first column,

row1 row2 row3 row4 row5

namely: 80, 62, 18, 8, 2 = Euclidean remainder sequence
| |
for example 62-3(18) = 8, the 2nd step in Euclidean algorithm

becomes: row2 -3 row3 = row4 when extended to all columns.

In effect we have row-reduced the first two rows to the last two.
The matrix effecting the reduction is in the bottom right corner.
It starts as 1, and is multiplied by each elementary row operation,
hence it accumulates the product of all the row operations, namely:


$$ \left[ \begin{array}{ccc} 7 & -9\\ -31 & 40\end{array}\right ] \left[ \begin{array}{ccc} 80 & 1 & 0\\ 62 & 0 & 1\end{array}\right ] \ =\ \left[ \begin{array}{ccc} 2\ & \ \ \ 7\ & -9\\ 0\ & -31\ & 40\end{array}\right ] \qquad\qquad\qquad\qquad\qquad $$


Notice row 1 is the particular  solution  2 =   7(80) -  9(62)
Notice row 2 is the homogeneous solution 0 = -31(80) + 40(62),
so the general solution is any linear combination of the two:

n row1 + m row2 -> 2n = (7n-31m) 80 + (40m-9n) 62

The same row/column reduction techniques tackle arbitrary
systems of linear Diophantine equations. Such techniques

generalize easily to similar coefficient rings possessing a
Euclidean algorithm, e.g. polynomial rings F[x] over a field,
Gaussian integers Z[i]. There are many analogous interesting
methods, e.g. search on keywords: Hermite / Smith normal form,
invariant factors, lattice basis reduction, continued fractions,
Farey fractions / mediants, Stern-Brocot tree / diatomic sequence.

Remark $ $ As an optimization, we can omit one of the Bezout coefficient columns (being derivable from the others). Then the calculations have a natural interpretation as modular fractions (though the "fractions" are multi-valued), e.g. follow the prior link.


Below I elaborate on the row operations to help readers unfamiliar with linear algebra.


Let $\,r_i\,$ be the Euclidean remainder sequence. Above $\, r_1,r_2,r_3\ldots = 80,62,18\ldots$ Given linear combinations $\,r_j = a_j m + b_j n\,$ for $\,r_{i-1}\,$ and $\,r_i\,$ we can calculate a linear combination for $\,r_{i+1} := r_{i-1}\bmod r_i = r_{i-1} - q_i r_i\,$ by substituting said combinations for $\,r_{i-1}\,$ and $\,r_i,\,$ i.e.



$$\begin{align} r_{i+1}\, &=\, \overbrace{a_{i-1} m + b_{i-1}n}^{\Large r_{i-1}}\, -\, q_i \overbrace{(a_i m + b_i n)}^{\Large r_i}\\[.3em] {\rm i.e.}\quad \underbrace{r_{i-1} - q_i r_i}_{\Large r_{i+1}}\, &=\, (\underbrace{a_{i-1}-q_i a_i}_{\Large a_{i+1}})\, m\, +\, (\underbrace{b_{i-1} - q_i b_i}_{\Large b_{i+1}})\, n \end{align}$$


Thus the $\,a_i,b_i\,$ satisfy the same recurrence as the remainders $\,r_i,\,$ viz. $\,f_{i+1} = f_{i-1}-q_i f_i.\,$ This implies that we can carry out the recurrence in parallel on row vectors $\,[r_i,a_i,b_i]$ representing the equation $\, r_i = a_i m + b_i n\,$ as follows


$$\begin{align} [r_{i+1},a_{i+1},b_{i+1}]\, &=\, [r_{i-1},a_{i-1},b_{i-1}] - q_i [r_i,a_i,b_i]\\ &=\, [r_{i-1},a_{i-1},b_{i-1}] - [q_i r_i,q_i a_i, q_i b_i]\\ &=\, [r_{i-1}-q_i r_i,\ a_{i-1}-q_i a_i,\ b_{i-1}-q_i b_i] \end{align}$$


which written in the tabular format employed far above becomes


$$\begin{array}{ccc} &r_{i-1}& a_{i-1} & b_{i-1}\\ &r_i& a_i &b_i\\ \rightarrow\ & \underbrace{r_{i-1}\!-q_i r_i}_{\Large r_{i+1}} &\underbrace{a_{i-1}\!-q_i a_i}_{\Large a_{i+1}}& \underbrace{b_{i-1}-q_i b_i}_{\Large b_{i+1}} \end{array}$$


Thus the extended Euclidean step is: compute the quotient $\,q_i = \lfloor r_{i-1}/r_i\rfloor$ then multiply row $i$ by $q_i$ and subtract it from row $i\!-\!1.$ Said componentwise: in each column $\,r,a,b,\,$ multiply the $i$'th entry by $q_i$ then subtract it from the $i\!-\!1$'th entry, yielding the $i\!+\!1$'th entry. If we ignore the 2nd and 3rd columns $\,a_i,b_i$ then this is the usual Euclidean algorithm. The above extends this algorithm to simultaneously compute the representation of each remainder as a linear combination of $\,m,n,\,$ starting from the obvious initial representations $\,m = 1(m)+0(n),\,$ and $\,n = 0(m)+1(n).\,$
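The row recurrence above translates directly into code. Here is a minimal sketch (the function name is my own) that maintains rows $[r_i, a_i, b_i]$ with the invariant $r_i = a_i m + b_i n$, exactly as in the tabular computation:

```python
def extended_gcd(m, n):
    # rows [r, a, b] satisfy r = a*m + b*n throughout, as in the table above
    prev, curr = [m, 1, 0], [n, 0, 1]
    while curr[0] != 0:
        q = prev[0] // curr[0]
        prev, curr = curr, [p - q * c for p, c in zip(prev, curr)]
    g, a, b = prev
    return g, a, b

print(extended_gcd(80, 62))  # matches the worked example: 2 = 7*80 - 9*62
```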


Finding the Modular Multiplicative Inverse of a large number



I am practicing some modular arithmetic and I am trying to find the multiplicative inverse of a large number.
Here is the problem:




345^-1 mod 76408



I'm not sure how to go about solving this problem. I set it up the following way:



x = 345^-1 mod 76408



345x = 1 mod 76408



76408 = 345 * 221 + 163




345 = 163 * 2 + 19



163 = 19 * 8 + 11



19 = 11 * 1 + 8



11 = 8 * 1 + 3



8 = 3 * 2 + 2




3 = 2 * 1 + 1



2 = 1 * 2



I watched a video where i would then use the extended euclidean algorithm, but I'm unsure as to how to do it.



Any help/advice to solve this would be appreciated! Thanks.


Answer



Here is a way of writing things: $q_i$ and $r_i$ are the quotient and the remainder of the $i$-th division, $u_i$ and $v_i$ are Bézout's coefficients of $r_i$ relative to $76408$ and $345$ respectively.




The recurrence relations are $$u_{i+1}=u_{i-1}-q_iu_i, \quad v_{i+1}=v_{i-1}-q_iv_i.$$



$$ \begin{array}[t]{r@{\qquad}r@{\qquad}r@{\qquad}r}
r_i & u_i & v_i & q_i\\
\hline
76408 & 1 & 0 & \\
345 & 0 & 1 & 221\\
\hline
163 & 1 & -221 & 2\\

19 & -2 & 443 & 8 \\
11 & 17 & -3765 & 1 \\
8 & -19 & 4208 & 1\\
3 & 36 & -7973 & 2\\
2 & -91 & 20154 & 1\\
1 & 127 & -28127 \\
\hline
\end{array}$$



So $\,1=127\cdot 76408-28127\cdot 345$ from which we conclude $$ 345^{-1}\equiv -28127\equiv 48281\mod 76408. $$
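For what it's worth, modern Python can check this result directly: since version 3.8, the built-in `pow` accepts a negative exponent together with a modulus and computes the modular inverse.

```python
# Python 3.8+: pow with exponent -1 computes the modular inverse
inv = pow(345, -1, 76408)
print(inv)  # 48281, matching the table above
```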



Thursday, October 19, 2017

Find a complex number $w$ such that $w^2=-\sqrt{3} - i$



This is a problem in my undergrad foundations class.
\begin{equation}
w^2=-\sqrt{3} - i \\w=(-\sqrt{3}-i)^{\frac{1}{2}} \\w=\sqrt{2}\bigg(\cos\frac{5\pi}{12}+i\sin\frac{5\pi}{12}\bigg)
\end{equation}
So I get to here and the next step is
\begin{equation}
\sqrt{2}\bigg(\cos\bigg(\frac{5\pi}{12}+\pi\bigg)+i\sin\bigg(\frac{5\pi}{12}+\pi\bigg)\bigg)

\end{equation}
and this is equal to
\begin{equation}
w(\cos\pi+i\sin\pi)=-w
\end{equation}
Can someone help me with understanding the last two steps. Thanks


Answer



$\omega^2 = 2\left(-\frac {\sqrt 3}{2} - i\, \frac 12\right) =
2\left(\cos \frac {7\pi}{6} + i \sin \frac {7\pi}{6}\right)$



$\omega = \sqrt 2(\cos \frac {7\pi}{12} + i \sin \frac {7\pi}{12})$




And here you are....



When we take the square root of the square of something, there are always two solutions. For example $x^2 = 9 \implies x = \pm 3$



We can put in the $\pm$ symbol here, but that technique breaks down with higher roots.



A more general approach...



$\omega^2 =2(\cos (\frac {7\pi}{6}+2n\pi) + i \sin (\frac {7\pi}{6}+2n\pi))$




Takes advantage of the period nature of the trig functions.



$\omega =\sqrt 2(\cos (\frac {7\pi}{12}+n\pi) + i \sin (\frac {7\pi}{12}+n\pi))$



And now we can look at just the solutions where the argument is in $[0,2\pi)$



$\omega =\sqrt 2(\cos \frac {7\pi}{12} + i \sin \frac {7\pi}{12}),\sqrt 2(\cos (\frac {7\pi}{12}+\pi) + i \sin (\frac {7\pi}{12}+\pi))$
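A numerical cross-check with `cmath` (my own snippet): the principal square root and its negative both square back to $-\sqrt3 - i$, and each has modulus $\sqrt2$.

```python
import cmath
import math

z = complex(-math.sqrt(3), -1)  # the number -sqrt(3) - i, of modulus 2
w = cmath.sqrt(z)               # principal square root
print(w, -w)                    # the two square roots
```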


probability - Explain why $E(X) = \int_0^\infty (1-F_X (t)) \, dt$ for every nonnegative random variable $X$




Let $X$ be a non-negative random variable and $F_{X}$ the corresponding CDF. Show,
$$E(X) = \int_0^\infty (1-F_X (t)) \, dt$$
when $X$ has : a) a discrete distribution, b) a continuous distribution.





I assumed that for the case of a continuous distribution, since $F_X (t) = \mathbb{P}(X\leq t)$, we have $1-F_X (t) = 1- \mathbb{P}(X\leq t) = \mathbb{P}(X> t)$. But I have no idea how useful integrating that is.


Answer



For every nonnegative random variable $X$, whether discrete or continuous or a mix of these,
$$
X=\int_0^X\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,\mathrm dt,
$$
hence





$$
\mathrm E(X)=\int_0^{+\infty}\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}\mathrm P(X\geqslant t)\,\mathrm dt.
$$







Likewise, for every $p>0$, $$
X^p=\int_0^Xp\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\gt t}\,p\,t^{p-1}\,\mathrm dt=\int_0^{+\infty}\mathbf 1_{X\geqslant t}\,p\,t^{p-1}\,\mathrm dt,

$$
hence




$$
\mathrm E(X^p)=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\gt t)\,\mathrm dt=\int_0^{+\infty}p\,t^{p-1}\,\mathrm P(X\geqslant t)\,\mathrm dt.
$$
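As a numerical illustration of the $p$-th moment identity (my own sketch, using $X\sim\mathrm{Uniform}(0,1)$, for which $\mathrm P(X>t)=1-t$ on $[0,1]$ and $\mathrm E(X^p)=\frac1{p+1}$):

```python
# E[X^p] = integral of p * t^(p-1) * P(X > t) dt, approximated by a left Riemann sum
p = 3
dt = 1e-5
integral = sum(p * (k * dt)**(p - 1) * (1 - k * dt) * dt for k in range(100_000))
print(integral)  # close to E[X^3] = 1/4
```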



probability - Expected value of game involving 100-sided die

The following question is from a Jane Street interview.


You are given a 100-sided die. After you roll once, you can choose to either get paid the dollar amount of that roll OR pay one dollar for one more roll. What is the expected value of the game? (There is no limit on the number of rolls.)


P.S. I think the question assumes that we are rational.

geometry - Form a Circle with Circles



I need to form a perfect circle out of circles.



Given $N$ circles, each with radius $R$, how can I find the distance away from the center?


Answer



Sorry, this is too brief, there should be a picture. Let the little circles all have radius $r$. Suppose there are $n$ of them, where $n\ge 3$. Let $R$ be the distance from the centre of the big circle to the centre of each little circle. It turns out that
$$R\sin\left(\frac{180^\circ}{n}\right)=r,\tag{$1$}$$
so now we can compute $R$.




To see that Formula $(1)$ is correct, draw two consecutive little circles, with centres $A$ and $B$ respectively. Let the big circle have centre $O$. By the definition of $R$, the big circle passes through $A$ and $B$.
Drop a perpendicular from $O$ to the midpoint $M$ of $AB$. The two little circles touch at $M$.



Note that $\angle AOM$ is $\dfrac{180^\circ}{n}$ and $AM=r$. The formula now follows by trigonometry.



The question specifically asked not for $R$, but for the (nearest) distance from the centre of the big circle to the little circles. This is $R-r$.



Remark: I should have called the radius of the little circles $R$, to use the notation of the OP. But it is a little circle, so it should be $r$. Then one can reserve $R$ for the big one.
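The formula is easy to put into code (names are my own). For example, six unit circles around a common centre sit at distance $R=2$, the classical "six pennies around one" configuration:

```python
import math

def centre_distance(n, r):
    # R from R * sin(pi/n) = r: distance from the big centre to each little centre
    return r / math.sin(math.pi / n)

n, r = 6, 1.0
R = centre_distance(n, r)
print(R, R - r)  # R, and the nearest distance R - r asked for in the question
```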


Wednesday, October 18, 2017

integration - $\int\limits_0^{10}e^{-0.04t}\cdot e^{-0.001t^2}\,dt$

I need to find the following integral




$$\int\limits_0^{10}e^{-0.04t ~-0.001t^2}dt$$
This integral seems to "scream" for the error function, but I have never worked with the error function yet, so I have no idea how to do this. Can anyone please show me how this definite integral can be determined?



Update:
After following the hints given, I have the following:



\begin{align}\int\limits_0^{10}e^{\frac{-1}{1000}(t^2+40t)}dt &= e^{\frac{400}{1000}}\int\limits_0^{10} e^{\frac{-1}{1000}(t+20)^2}dt\end{align}



Now, let $u = t+20$, then our integral changes to

\begin{align*}e^{\frac{400}{1000}}\int\limits_{20}^{30}e^{\frac{-1}{1000}u^2}du\end{align*}
I am stuck here though, since I only know how to use the error function for $\int_0^xe^{-at^2}dt$, which is different from what I have, since the lower limit is not zero. Please help!

Tuesday, October 17, 2017

analysis - Show that $\sum_{n=0}^\infty a_n z^n$ converges $\forall z\in\mathbb{C}.$



Assume that $\sum_{n=0}^\infty b_n z^n$ converges $\forall z\in\mathbb{C},$ and suppose that $x=\lim_{ n\rightarrow\infty}|\frac{a_n}{b_n}|$ exists. Show that $\sum_{n=0}^\infty a_n z^n$ converges $\forall z\in\mathbb{C}.$



Proof:



Since $\sum_{n=0}^\infty b_n z^n$ converges $\forall z\in\mathbb{C},$ the radius of convergence $R_{(b_n)}=\infty, \mbox{ i.e. } \limsup_{n\rightarrow\infty} |b_n|^{1/n} = 0.$ As $x=\lim_{ n\rightarrow\infty}|\frac{a_n}{b_n}|$ exists, we can write $x=\lim_{ n\rightarrow\infty}|\frac{a_n}{b_n}|=\limsup_{n\rightarrow\infty}|\frac{a_n}{b_n}|.$ I am stuck here.



I know that to show $\sum_{n=0}^\infty a_n z^n$ converges $\forall z\in\mathbb{C},$ I need to prove that $\limsup_{n\rightarrow\infty}|a_n|^{1/n}=0.$ Any suggestions?


Answer




Since the limit $\lim_{n\to \infty}\left|\frac{a_n}{b_n}\right|$ exists it follows that the sequence $\left|\frac{a_n}{b_n}\right|$ is bounded.
Therefore there exists a positive number $M$ such that $\left|a_n\right|\leq M\left|b_n\right|$ for all $n\in\mathbb N$.
We conclude that $\left|a_nz^n\right|\leq M\left|b_nz^n\right|$ for all $n\in\mathbb N$ and $z\in\mathbb C$ and the result follows.


calculus - Are all limits solvable without L'Hôpital Rule or Series Expansion

Is it always possible to find the limit of a function without using L'Hôpital's rule or series expansion?


For example,


$$\lim_{x\to0}\frac{\tan x-x}{x^3}$$


$$\lim_{x\to0}\frac{\sin x-x}{x^3}$$


$$\lim_{x\to0}\frac{\ln(1+x)-x}{x^2}$$


$$\lim_{x\to0}\frac{e^x-x-1}{x^2}$$



$$\lim_{x\to0}\frac{\sin^{-1}x-x}{x^3}$$


$$\lim_{x\to0}\frac{\tan^{-1}x-x}{x^3}$$

calculus - What's wrong with my calculation for the area of a circle?

This is my calculation for a simple circle of radius 1. Take a pizza slice of the circle. From geometry, the cone has an area of $r^2\pi * 2\pi / \theta$



Now to get the area of the circle, $$2\pi^2 \int_0^{2\pi} 1/\theta \, d\theta$$ which yields a nonsensical answer.

Monday, October 16, 2017

limit involving expressions of the form $n^x \log n$



In doing calculations involving computational complexity the following function came up:
$$f(x) = \lim_{N \rightarrow \infty} \frac{\sum_{n=1}^N n^x \log n}{N^{x+1} \log N}$$
for $x \in \mathbb{R}^+$. It appears to be true that
$$f(x) = \frac{1}{x+1}$$
but I am not sure how to prove this. Could anyone suggest an approach?


Answer



$$\begin{align} f(x) &= \lim_{N\to \infty}\frac{1}{N\log N} \sum_{n=1}^N \left(\frac{n}{N}\right)^x\log n \\ &= \lim_{N\to \infty}\frac{1}{N\log N}\left( \sum_{n=1}^N \left(\frac{n}{N}\right)^x\log\left(\frac{n}{N}\right)+\log N \sum_{n=1}^N\left(\frac{n}{N}\right)^x\right) \\ &= \lim_{N\to\infty}\frac{\int_{0}^1 t^x\log t\,dt}{\log N}+ \int_{0}^1 t^x\,dt = 0+\frac{1}{1+x}=\frac{1}{1+x} \end{align}$$
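A quick numerical check of this Riemann-sum argument (snippet mine): the ratio approaches $\frac1{x+1}$, with a slowly decaying $O(1/\log N)$ correction coming from the $\int_0^1 t^x\log t\,dt$ term.

```python
import math

def ratio(x, N):
    # (sum_{n=1}^N n^x log n) / (N^(x+1) log N)
    num = sum(n**x * math.log(n) for n in range(1, N + 1))
    return num / (N**(x + 1) * math.log(N))

for N in (100, 1000, 10000):
    print(N, ratio(2.0, N))  # tends to 1/3
```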


functions - Bijection between $[1,2]$ and $[3,5)$.

Construct a bijection between $[1,2]$ and $[3,5)$.



What I usually do is pair the two endpoints with each other, so $3=1m+b$ and $5=2m+b$, and just solve for $m$ and $b$; but since $5$ is not included, I am not sure how to construct a map that is still a bijection. Thanks for any help!



Edit: this is not from $(0,1] \rightarrow (0,1)$ and also the brackets are different.

calculus - Find $\lim_{n\rightarrow\infty}\sqrt{n}\int_{-\infty}^{+\infty}\frac{\cos(x)}{(1+x^2)^{n}}dx$?

I don't know from which point I have to start finding this limit $$\lim_{n\rightarrow\infty}\sqrt{n}\int_{-\infty}^{+\infty}\frac{\cos(x)}{(1+x^2)^{n}}dx$$

elementary number theory - Help with congruence and divisibility exercise



I'm starting to solve some problems of congruence and integer division, so the exercise is quite simple but I'm not sure I'm on the right track. I need to prove that the following is true for all $n \in \Bbb N$:
$$9\ |\ 7 \cdot 5^{2n}+ 2^{4n+1}$$




This is what I have so far:



$$7 \cdot 5^{2n} + 2^{4n+1} \equiv 0 \ (9)$$



So I try to see what each side of the sum is congruent to: $7 \equiv -2 \ (9)$ and $5^{2n} \equiv 4^{2n} \ (9)$, hence $7 \cdot 5^{2n} \equiv -2 \cdot 4^{2n} \ (9)$, and the left side is also congruent to $-2 \cdot 4^{2n} \equiv 7^n \cdot (-2) \ (9)$, which leaves me with:



$$7 \cdot 5^{2n} \equiv 7^n \cdot -2 \ (9)$$



As for the other side:




$$2^{4n+1} \equiv 7^{4n} \cdot\ 2\ (9)$$



Finally combining them:



$$7 \cdot 5^{2n} + 2^{4n+1} \equiv 7^n \cdot (-2) + 7^{4n} \cdot\ 2\ (9)$$



Am I right so far? Any hint on how to continue?
Thanks!


Answer




Alternative proof by induction.






First, show that this is true for $n=0$:



$7\cdot5^{2\cdot0}+2^{4\cdot0+1}=9$



Second, assume that this is true for $n$:




$7\cdot5^{2n}+2^{4n+1}=9k$



Third, prove that this is true for $n+1$:



$7\cdot5^{2(n+1)}+2^{4(n+1)+1}=$



$16\cdot(\color\red{7\cdot5^{2n}+2^{4n+1}})+63\cdot5^{2n}=$



$16\cdot\color\red{9k}+63\cdot5^{2n}=$




$9\cdot(16k+7\cdot5^{2n})$






Please note that the assumption is used only in the part marked red.
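The induction can also be double-checked by brute force for small $n$ (snippet mine):

```python
# verify 9 | 7*5^(2n) + 2^(4n+1) for n = 0..49
for n in range(50):
    assert (7 * 5**(2 * n) + 2**(4 * n + 1)) % 9 == 0
print("divisible by 9 for n = 0..49")
```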


Sunday, October 15, 2017

real analysis - Determine the limit and give an $\epsilon-N$ proof of $\lim_{n\rightarrow\infty}\frac{3+(-4)^n}{n^2}$.



I'm having difficulty working out this particular limit. My intuition is that it diverges because of the oscillating $(-4)^n$, which clearly grows much faster than the $n^2$ in the denominator. So in the limit the sequence would oscillate between very large positive numbers and very large negative numbers. With that said, my strategy would be to show that the limit of the reciprocal converges to $0$, since I have a theorem that says a sequence diverges to infinity if and only if its reciprocal converges to $0$, so
\begin{align*}
\lim_{n\rightarrow\infty}\frac{n^2}{3+(-4)^n}=0.
\end{align*}
is what I'm working with. So far I have yet to come up with any promising avenue to construct an $\epsilon-N$ proof. Any ideas would be greatly appreciated.


Answer




How about showing $\lim_{m\to\infty}\frac{3+(-4)^{2m}}{(2m)^2}=\infty$?


calculus - Calculate the limit: $\lim_{n\rightarrow\infty}\left(\frac{1^{2}+2^{2}+...+n^{2}}{n^{3}}\right)$




Calculate the limit: $$\lim_{n\rightarrow\infty}\left(\frac{1^{2}+2^{2}+...+n^{2}}{n^{3}}\right)$$





I'm planning to change the numerator to something else.



I know that $1+2+3+...n = \frac{n(n+1)}{2}$



And now similar just with $2$ as exponent but I did many tries on paper and always failed..



The closest I had is this but it still seems wrong:




$1^{2}+2^{2}+...+n^{2} = \frac{n(n^{2}+1)}{2}$



Well, the idea is to replace the numerator with a closed form and then simplify, making the limit easy to calculate. But I cannot find the correct expression for the numerator.



Any ideas?


Answer



For variety,



$$\begin{align}
\lim_{n \to \infty} \frac{1^2 + 2^2 + \ldots + n^2}{n^3}

&=
\lim_{n \to \infty} \frac{1}{n} \left( \left(\frac{1}{n}\right)^2
+ \left(\frac{2}{n}\right)^2 + \ldots + \left(\frac{n}{n}\right)^2
\right)
\\&= \int_0^1 x^2 \mathrm{d}x = \frac{1}{3}
\end{align}
$$
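The same limit can be sanity-checked numerically, and the partial sums checked exactly against the standard closed form $\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}6$ (a known fact, not derived in the post; snippet mine):

```python
def ratio(n):
    # (1^2 + 2^2 + ... + n^2) / n^3 -> 1/3
    return sum(k * k for k in range(1, n + 1)) / n**3

print(ratio(10), ratio(1000))
```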


Saturday, October 14, 2017

measure theory - Limit inferior/superior of sequence of sets



Let $(\Omega, \mathcal{A}, \mu)$ be a measure space, where $\mu(\Omega)< \infty$. Further, let $(A_n)_{n \in \mathbb{N}}$ be a sequence of $\mathcal{A}$-measurable sets. I want to prove that




$$ \mu ( \liminf_{n \rightarrow \infty} A_n) \leq \liminf_{n \rightarrow \infty} \mu (A_n) \leq \limsup_{n \rightarrow \infty} \mu (A_n) \leq \mu (\limsup_{n \rightarrow \infty} A_n)$$
holds for any sequence $(A_n)_{n \in \mathbb{N}}$.
I have no experience working with the limit superior/inferior. Clearly
$$\mu ( \liminf_{n \rightarrow \infty} A_n) \leq \mu (\limsup_{n \rightarrow \infty} A_n)$$
holds, since it is easy to prove that the one is a superset of the other. Also
$$\liminf_{n \rightarrow \infty} \mu (A_n) \leq \limsup_{n \rightarrow \infty} \mu (A_n)$$
holds for any sequence. But I am stuck on how to show the connection. Using the definitions, I get
$$ \mu \left(\bigcup_{n=1}^\infty \bigcap_{k=n}^\infty A_k\right) \leq \lim_{n \rightarrow \infty} \inf_{k \geq n} \mu(A_k) \leq \inf_{n \geq 1} \sup_{k \geq n} \mu(A_k) \leq \mu \left(\bigcap_{n=1}^\infty \bigcup_{k=n}^\infty A_k\right) $$
But I don't know if this helps. Anyone got a hint on how to go on?



Answer



Consider sets
$$
B_n=\bigcap\limits_{k=n}^\infty A_k
$$
Obviously, $B_n\subset A_k$ for all $k\geq n$, so $\mu(B_n)\leq \mu(A_k)$ for all $k\geq n$.
After taking the infimum we get $\mu(B_n)\leq\inf_{k\geq n}\mu(A_k)$. Since $B_n\subset B_{n+1}$ for all $n\in\mathbb{N}$, the sequence $\{\mu(B_n):n\in\mathbb{N}\}$ is non-decreasing, so $\lim_{n\to\infty}\mu(B_n)$ exists. Similarly the sequence $\{\inf_{k\geq n}\mu(A_k):n\in\mathbb{N}\}$ is non-decreasing, hence $\lim_{n\to\infty}\inf_{k\geq n}\mu(A_k)$ exists. Since existence of the limits is justified, we can say
$$
\lim\limits_{n\to\infty}\mu(B_n)\leq
\lim\limits_{n\to\infty}\inf\limits_{k\geq n}\mu(A_k)=\liminf\limits_{n\to\infty}\mu(A_n)\tag{1}

$$
Again recall that $B_n\subset B_{n+1}$ for all $n\in\mathbb{N}$, so
$$
\mu\left(\bigcup\limits_{n=1}^\infty B_n\right)=\lim\limits_{n\to\infty}\mu(B_n)\tag{2}
$$
It remains to note that
$$
\liminf\limits_{n\to\infty}A_n=
\bigcup\limits_{n=1}^\infty \bigcap\limits_{k=n}^\infty A_k=
\bigcup\limits_{n=1}^\infty B_n\tag{3}

$$
From $(1)$, $(2)$ and $(3)$ we have
$$
\mu\left(\liminf\limits_{n\to\infty}A_n\right)\leq \liminf\limits_{n\to\infty}\mu(A_n)
$$
Now try to prove in the similar way the second inequality.
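A tiny concrete example may help build intuition (an illustration under stated assumptions, not part of the proof): take $\Omega=\{0,1\}$ with counting measure and $A_n=\{n \bmod 2\}$. Then $\liminf_n A_n=\emptyset$ and $\limsup_n A_n=\Omega$, and the chain of inequalities reads $0\le 1\le 1\le 2$.

```python
# Omega = {0, 1} with counting measure mu(A) = |A|, and A_n = {n mod 2}.
# liminf A_n = union over n of intersections of the tails  = empty set,
# limsup A_n = intersection over n of unions of the tails  = {0, 1}.
def A(n):
    return {n % 2}

N = 50  # truncation level; enough for this eventually-periodic sequence
liminf = set().union(*(set.intersection(*(A(k) for k in range(n, N)))
                       for n in range(N - 1)))
limsup = set.intersection(*(set.union(*(A(k) for k in range(n, N)))
                            for n in range(N - 1)))
mu = len  # counting measure

assert (mu(liminf), mu(A(0)), mu(limsup)) == (0, 1, 2)
```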


Proof Complex Number Square Roots


I need to be able to prove that a non-zero complex number always has exactly two square roots, though I'm uncertain how to prove this. When I tried doing it algebraically, letting $(a+ib)^2 = x+iy$ and solving the resulting simultaneous equations, I ended up with $4$ potential roots from all the combinations of plus and minus signs.


How do you prove this algebraically?


Is there another method of proving it that is simpler and quicker?


Thanks


Answer




Expanding your expression:


$$a^2+2abi-b^2=x+iy$$


Comparing real and imaginary parts:


$$a^2-b^2=x\text{ and }2ab=y$$


$$(a^2-b^2)^2+(2ab)^2=x^2+y^2$$


$$a^4-2a^2b^2+b^4+4a^2b^2=x^2+y^2$$


$$a^4+2a^2b^2+b^4=x^2+y^2$$


$$(a^2+b^2)^2=x^2+y^2$$


$$a^2+b^2=\sqrt{x^2+y^2}$$


Note: The negative root can be ignored as $a^2+b^2\ge0$



$$a^2=\frac{1}{2}\left(a^2+b^2+a^2-b^2\right)=\frac{1}{2}\left(\sqrt{x^2+y^2}+x\right)$$


$$b^2=\frac{1}{2}\left(a^2+b^2-(a^2-b^2)\right)=\frac{1}{2}\left(\sqrt{x^2+y^2}-x\right)$$


The right-hand sides of both these equations are non-negative, so both square roots exist:


$$a=\pm\sqrt{\frac{1}{2}\left(\sqrt{x^2+y^2}+x\right)}$$


$$b=\pm\sqrt{\frac{1}{2}\left(\sqrt{x^2+y^2}-x\right)}$$


Now while there are four combinations we know that $2ab=y$ so two of these combinations would be invalid as the product of $ab$ must have the same sign as $y$.
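The closed-form expressions above can be checked numerically (a sketch; `cmath.sqrt` is used only as an independent reference for the principal root):

```python
import cmath
import math

def complex_sqrt(z):
    """One square root of z = x + iy via the real formulas derived above."""
    x, y = z.real, z.imag
    r = math.hypot(x, y)           # sqrt(x^2 + y^2)
    a = math.sqrt((r + x) / 2)
    b = math.sqrt((r - x) / 2)
    if y < 0:                      # choose signs so that 2ab = y
        b = -b
    return complex(a, b)

w = complex_sqrt(3 - 4j)
assert abs(w * w - (3 - 4j)) < 1e-12
assert (abs(w - cmath.sqrt(3 - 4j)) < 1e-12
        or abs(w + cmath.sqrt(3 - 4j)) < 1e-12)
```

The other root is $-w$, which is exactly the two-roots claim of the question.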


How do I express 0.999(9) as a fraction?

I'm a noob in math. If 0.333(3) is 1/3 and 0.666(6) is 2/3, then what is 0.999(9)?


If 3/3 and 0.999(9) is the same, then how can I express one of them without expressing the other?

integration - Evaluate:$I=int^infty_0 frac {e^{-ax} sin bx}{x},dx$




How would I evaluate the integral:
$$I=\int^\infty_0 \frac {e^{-ax}\ \sin bx}{x}\,dx$$
I started the problem by integration by parts, but that approach seems lengthy. Since it is a definite integral, some property might help, but I can't figure out which one. Any suggestion will be highly appreciated. Thanks!


Answer



Notice, using property of Laplace transform as follows
$$L\left(\frac{1}{t}f(t)\right)=\int_{s}^{\infty}L(f(t))dt$$
$$L(\sin bt)=\int_{0}^{\infty}e^{-st}\sin bt\, dt=\frac{b}{b^2+s^2}$$



Now, we have
$$\int_{0}^{\infty} \frac{e^{-ax}\sin bx}{x}dx$$

$$=\int_{a}^{\infty} L(\sin bx)dx$$
$$=\int_{a}^{\infty}\frac{b}{b^2+x^2} dx$$
$$=b\int_{a}^{\infty}\frac{dx}{b^2+x^2} $$
$$=b\left[\frac{1}{b}\tan^{-1}\left(\frac{x}{b}\right)\right]_{a}^{\infty} $$
$$=\left[\tan^{-1}\left(\infty\right)-\tan^{-1}\left(\frac{a}{b}\right)\right] $$ $$=\frac{\pi}{2}-\tan^{-1}\left(\frac{a}{b}\right)$$ Hence, we have



$$\bbox[5px, border:2px solid #C0A000]{\color{red}{\int_{0}^{\infty} \frac{e^{-ax}\sin bx}{x}dx=\frac{\pi}{2}-\tan^{-1}\left(\frac{a}{b}\right)}}$$
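The boxed result can be verified numerically (a sketch; a plain composite Simpson's rule on a truncated interval, since the tail beyond $T$ is of size roughly $e^{-aT}$):

```python
import math

def integrand(x, a, b):
    # e^{-a x} sin(b x) / x, with the removable singularity filled in at 0
    return b if x == 0 else math.exp(-a * x) * math.sin(b * x) / x

def integral(a, b, T=60.0, n=60000):
    # composite Simpson's rule on [0, T]; n must be even
    h = T / n
    s = integrand(0, a, b) + integrand(T, a, b)
    s += 4 * sum(integrand((2*k - 1) * h, a, b) for k in range(1, n//2 + 1))
    s += 2 * sum(integrand(2*k*h, a, b) for k in range(1, n//2))
    return s * h / 3

a, b = 1.0, 2.0
expected = math.pi/2 - math.atan(a / b)
assert abs(integral(a, b) - expected) < 1e-6
```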


Friday, October 13, 2017

convergence divergence - Prove that $x^n/n!$ converges to $0$ for all $x$

Prove that $a_n=x^n/n! \to 0$ for all $x$



Here is what I tried, but it seems to lead to nowhere.




Choose $\epsilon > 0$. We need to show that there exists $N\in \mathbb{N}$ such that for all $n>N$ we have $|a_n| < \epsilon$



So $\left|x^n/n!\right| < \epsilon \implies |x^n| < n!\cdot \epsilon$ (since $n!$ is positive we can drop its absolute value signs), and hence $|x|/\epsilon^{1/n} < (n!)^{1/n}$.
Now I am stuck solving this for $n$, and hence finding $N$ ...
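Not a proof, but a numerical illustration of the usual route (a hedged sketch): the ratio of consecutive terms is $|x|/(n+1)$, which drops below $\tfrac12$ once $n+1>2|x|$, so the terms eventually decay at least geometrically.

```python
# Terms |x|^n / n!, computed incrementally to avoid overflow of n!.
def a(n, x):
    t = 1.0
    for k in range(1, n + 1):
        t *= abs(x) / k
    return t

x = 10.0
N = int(2 * abs(x))  # beyond this index each ratio |x|/(n+1) is < 1/2
ratios = [a(n + 1, x) / a(n, x) for n in range(N, N + 20)]
assert all(r < 0.5 for r in ratios)
assert a(200, x) < 1e-100  # the terms collapse toward 0
```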

trigonometry - $sum cos$ when angles are in arithmetic progression





Prove $$\cos(\alpha) + \cos(\alpha + \beta) + \cos(\alpha + 2\beta) + \dots + \cos[\alpha + (n-1)\beta] = \frac{\cos(\alpha + \frac{n-1}{2}\beta) \cdot \sin\frac{n\beta}{2}}{\sin\frac{\beta}{2}} $$

calculus - Evaluate $lim_{xtoinfty} (1+frac{2}{x})^{5x}$ without L'Hopital



I'm trying to evaluate the following limit



$$\lim_{x\to\infty} \left(1+\frac{2}{x}\right)^{5x}$$



I recognize a part of this limit because it resembles the limit for $e$ but I don't know anything other than that. I have no idea where to start. Some hints would be much appreciated.




Thanks!


Answer



Write $y = \frac{x}{2}$. Then we want $\displaystyle \lim_{y\rightarrow\infty}\left(1+\frac{1}{y}\right)^{10y}$. Since the function $g(z) = z^{10}$ is continuous, this is $$\lim_{y\rightarrow\infty}g\left(\left(1+\frac{1}{y}\right)^{y}\right) = g\left(\lim_{y\rightarrow\infty}\left(1+\frac{1}{y}\right)^{y}\right) = \left(\lim_{y\rightarrow\infty}\left(1+\frac{1}{y}\right)^y\right)^{10} = e^{10}.$$
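A numerical sanity check of the value $e^{10}$ (a sketch, not part of the argument):

```python
import math

def f(x):
    return (1 + 2 / x) ** (5 * x)

# f(x) -> e^10 as x grows; the relative error behaves like 10/x.
for x in (1e3, 1e5, 1e7):
    assert abs(f(x) - math.exp(10)) / math.exp(10) < 30 / x
```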


Thursday, October 12, 2017

calculus - Can definite integrals be indeterminate?



Consider the expression $$\frac 10 - \frac 10$$



This is an indeterminate expression, and even though it's of the form $a - a$, we can't say that it evaluates to $0$ because $a$ itself is undefined here.



This got me thinking about how a definite integral might be indeterminate. I have two questions:




1. Consider a function $f(x)$ that is continuous everywhere but at $x=x_1$, where it is undefined. Is $$ \int_{x_1}^{x_2} f(x) \, \mathrm dx$$ defined? If so, what is its value? I was thinking that if $f$ is undefined at $x_1$, then $F$ is undefined there also, and so by the definition of the integral, this would be undefined.



Yet, if I remember that there is no differentiation (here I mean "differentiation" in a non-mathematical sense) between $ \int_{x_1}^{x_2} f(x) \, \mathrm dx$ and $ \int_{x_1+\delta}^{x_2} f(x) \, \mathrm dx$, then I become inclined to think that $$ \int_{x_1}^{x_2} f(x) \, \mathrm dx = \lim_{h \rightarrow x_1} \int_{h}^{x_2} f(x) \, \mathrm dx$$



But this creates a problem, doesn't it? If a function that is undefined at certain points can still be integrated, then it's not very practical to talk about integrability because we can just break any otherwise non-integrable function up into several integrals of which we take the limit.



**Subquestion:** Can we make any non-integrable range integrable by splitting the range at non-integrable points and taking the limit at each discontinuity? If so, does the handedness of the limit matter?







2. Consider a function $f(x)$ that is continuous everywhere but at $x=x_1$, where it is undefined. Is $$ \int_{x_1}^{x_1} f(x) \, \mathrm dx$$ defined? If so, what is its value?



From a geometric standpoint (thinking of a definite integral as an area), then it seems obvious that this should be zero regardless of whether or not $f$ is defined at $x_1$. But perhaps this is only "obvious" in the same way that $\frac 10 - \frac 10 = a - a$ would "obviously" be zero, which we know is not the case. Is evaluating this integral tantamount to evaluating $\frac 10 - \frac 10$?


Answer



You are essentially asking how an improper (Riemann) integral can be defined. Typically, an integral
$$
\int_{a}^{b} f(x)\,\mathrm{d}x
$$
is said to be improper if either $a$ or $b$ is infinite, or if $\lim_{x\to a} f(x)$ or $\lim_{x\to b} f(x)$ is infinite. The Riemann integral is not exactly well-suited for asking these kinds of questions, but we can generally get away with considering the limit, as you have. That is, define the improper integral (in the case where $a=-\infty$ or the limit of $f$ at $a$ is infinite) by setting
$$

\int_{a}^{b} f(x)\,\mathrm{d}x
:= \lim_{t\to a^{+}} \int_{t}^{b} f(x)\,\mathrm{d}x.
$$
As long as $f$ is integrable on every interval of the form $[t,b]$ and the limit exists, then this is a reasonable approach, and one that (I think) is appropriate for your question.






More generally, the "right" approach might be measure-theoretic. Instead of working with the Riemann integral, we can work with the Lebesgue integral. An entire discussion of the definition of the Lebesgue integral is far beyond the scope of a single answer on MSE, so let me get to the punchline:



In the Lebesgue theory, we integrate over sets, rather than intervals. In notation,

$$
\int_{A} f(x)\,\mathrm{d}\mu(x)
$$
is the integral of the function $f$ over the set $A$, with respect to something called a measure $\mu$. The measure isn't really playing a role in the current discussion, so don't worry about it too much; for now, you might think of it as a bit of decoration that distinguishes a Riemann integral from a Lebesgue integral.



There are certain sets that are very small (called null sets), which can be thrown away without changing the value of the integral. Singleton points, countable sets (i.e. the integers) and the usual middle-third Cantor set are examples of such "small sets". In other words, if $N$ is a null set, then
$$
\int_{N} f(x)\,\mathrm{d}\mu(x) = 0,
\qquad\text{and}\qquad
\int_{A\setminus N} f(x)\,\mathrm{d}\mu(x) = \int_{A} f(x)\,\mathrm{d}\mu(x).

$$
Moreover, it turns out that if a function is (properly) Riemann integrable, then it is also Lebesgue integrable, and the two methods of integration give the same number.



So suppose that we want to evaluate
$$
\int_{a}^{b} f(x)\,\mathrm{d}x
$$
as a Riemann integral, but $f$ is not defined at $a$. Assuming that $f$ is not too pathologically behaved, we can instead interpret this as a Lebesgue integral, i.e.
$$
\int_{a}^{b} f(x)\,\mathrm{d}x

= \int_{[a,b]} f(x)\,\mathrm{d}\mu(x).
$$
But the singleton set $\{a\}$ is a null set, and so we have
$$
\int_{[a,b]} f(x)\,\mathrm{d}\mu(x)
= \int_{(a,b]} f(x)\,\mathrm{d}\mu(x).
$$







In short, depending on the nature of $f$, it might be appropriate to regard the integral as an improper Riemann integral and take limits, or to regard it as a Lebesgue integral and apply that theory instead.


asymptotics - Help proving $sum_{nle x}{ln{n}}=xln{x}-x+O(ln{x})$

Just learning a bit about big O notation and have come across this exercise.


The notation used is $$\sum_{n\le x}{\ln{n}}=x\ln{x}-x+O(\ln{x})$$ and I am assuming that is equivalent to $$\sum_{n=1}^{x}{\ln{n}}=x\ln{x}-x+O(\ln{x}).$$



I know that $$\int_1^x{\ln{t}}\operatorname{dt}=x\ln{x}-x+1$$ so I feel like I am almost there, just not entirely sure how to make the connection, in particular exactly how the big O comes into this.


Edit: I should make it clear, I can see (I have done a quick sketch) that the summation is an 'overshoot' of the integral, and that the difference between the summation and the integral is bounded by something, but I'm not sure how to get that it is precisely $\ln{x}$.


Also I have no idea what tags to use, feel free to change them.
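A numeric check of the claimed error size (a sketch; for integer $x$ the sum is $\ln\lfloor x\rfloor!$, and `math.lgamma(n + 1)` computes $\ln n!$). By Stirling, $\ln n! = n\ln n - n + \tfrac12\ln(2\pi n) + O(1/n)$, and the correction term $\tfrac12\ln(2\pi n)$ is indeed $O(\ln n)$:

```python
import math

def log_factorial(n):
    return math.lgamma(n + 1)  # ln(n!)

for n in (10, 10**3, 10**6):
    err = log_factorial(n) - (n * math.log(n) - n)
    # The discrepancy is about (1/2) ln(2*pi*n), hence O(ln n).
    assert 0 < err <= math.log(n) + 1
```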

trigonometry - Simplify a quick sum of sines




Simplify $\sin 2+\sin 4+\sin 6+\cdots+\sin 88$



I tried using the sum-to-product formulae, but it was messy, and I didn't know what else to do. Could I get a bit of help? Thanks.


Answer



The angles are in arithmetic progression. Use the formula



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



See here for two proofs (using trigonometry, or using complex numbers).




In your case, $a=b=2$ and $n=44$.


abstract algebra - When are u and 1-u never both units?

I was trying to generalize a solution to a problem (if $P\in \mathbb Z[x], n\in\mathbb Z,$ then $P(P(P(n)))\neq n$ unless $P(n)=n$), and I found a sufficient condition to replace $\mathbb Z$ with a domain $R$: $u$ and $1-u$ are never simultaneously units. Equivalently, $u(1-u)$ is never a unit in $R$ for $u\in R$. Unfortunately, the only examples I can find of rings with this property are the integers and the Gaussian integers.



Are there other rings with this property, or perhaps a more general condition that implies this property? What if we restrict ourselves to rings of integers of number fields?

calculus - Limit of: $lim_{xrightarrowinfty}left(frac{x+2}{x+1}right)^{x/2}$

Without the use of L'hospitals rule, solve the following:
$$\lim_{x\rightarrow\infty}\left(\frac{x+2}{x+1}\right)^{x/2}$$




I'm trying to apply the limit that says $$\lim_{x\rightarrow\pm\infty} \left(1+\frac{1}{x}\right)^x = e$$



However, I'm confused because the exponent is now $x/2$, $x$ approaches only positive infinity in the limit rather than $\pm\infty$, and the base is a rational expression rather than $1+\frac{1}{x}$.



Thank you in advance

trigonometry - If $frac{3-tan^2fracpi7}{1-tan^2fracpi7}=kcosfracpi7$, find $k$


I need help solving this question: $$ \text{If }\frac{3-\tan^2\frac\pi7}{1-\tan^2\frac\pi7}=k\cos\frac\pi7\text{, find k.} $$ I simplified this down to:



$$ \frac{4\cos^2\frac\pi7-1}{2\cos^2\frac\pi7-1} $$


But I am unable to proceed further. The value of $k$ is given as $4$, but I am unable to derive that result.


Kindly provide me with some insight, or with a step-by-step solution.


Thanks in advance,


Abhigyan


Answer



\begin{align} k&=\frac{4\cos^2\frac\pi7-1}{\cos\frac{\pi}{7}(2\cos^2\frac\pi7-1)}\\ &=\frac{2\cos\frac{2\pi}{7}+1}{\cos\frac{2\pi}{7}\cos\frac{\pi}{7}}\\ &=\frac{2\cos\frac{2\pi}{7}+1}{\frac{1}{2}(\cos\frac{3\pi}{7}+\cos\frac{\pi}{7})}\\ &=\frac{4\sin\frac{\pi}{7}\cos\frac{2\pi}{7}+2\sin\frac{\pi}{7}}{\sin\frac{\pi}{7}\cos\frac{3\pi}{7}+\sin\frac{\pi}{7}\cos\frac{\pi}{7}}\\ &=\frac{2\sin\frac{3\pi}{7}-2\sin\frac{\pi}{7}+2\sin\frac{\pi}{7}}{\frac{1}{2}(\sin\frac{4\pi}{7}-\sin\frac{2\pi}{7}+\sin\frac{2\pi}{7})}\\ &=\frac{4\sin\frac{3\pi}{7}}{\sin\frac{4\pi}{7}}\\ &=4 \end{align}
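A quick numerical confirmation of the final value (a sketch, independent of the derivation above):

```python
import math

t = math.tan(math.pi / 7) ** 2
k = (3 - t) / ((1 - t) * math.cos(math.pi / 7))
assert abs(k - 4) < 1e-12
```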


calculus - Taking derivative of an absolute function



Let $$f(x) = \left|\frac{μ}{μ-1}-\frac{1}{x}\right|^\frac{1}{μ-1}.$$ If I want to get the derivative of this absolute value function, how would I go about doing this?



I tried using the fact that $\sqrt{x^2} = |x|$, and applied the chain rule to obtain an answer as the following:
$$f'(x) = \frac{\left(\left(\sqrt{\dfrac{μ}{μ-1}-\dfrac{1}{x}}\right)^2\right)^\frac{-μ+2}{μ-1}\left(\dfrac{μ}{μ-1}-\dfrac{1}{x}\right)}{x^2(μ-1)\left|\dfrac{μ}{μ-1}-\dfrac{1}{x}\right|},$$ where $μ$ is just a constant such as $μ = 2,3,\cdots, 100$.




The derivative I have works for $μ = 2$, but does not work for other values. Is my algebra off, or is there something else that I should note? Am I on the right path to getting $f'(x)$?


Answer



For $\frac{\mu}{\mu-1}-\frac{1}{x}>0\iff \frac{\mu}{\mu-1}>\frac1x\iff x>1-\frac1\mu$, $$f'(x)=\frac d{dx}\left(\left(\frac{\mu}{\mu-1}-\frac{1}{x}\right)^\frac1{\mu-1}\right)=\frac1{\mu-1}\left(\frac{\mu}{\mu-1}-\frac{1}{x}\right)^{\frac1{\mu-1}-1}\cdot\frac1{x^2}$$



For $x<1-\frac1\mu$, $$f'(x)=\frac d{dx}\left(\left(-\frac{\mu}{\mu-1}+\frac{1}{x}\right)^\frac1{\mu-1}\right)=\frac1{\mu-1}\left(-\frac{\mu}{\mu-1}+\frac{1}{x}\right)^{\frac1{\mu-1}-1}\cdot\left(-\frac1{x^2}\right)$$



For $x=1-\frac1\mu$, plug this value into each of the above derivatives. If they are the same, that is the derivative at this $x$. If they are different, then the derivative does not exist.
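A finite-difference spot check of the two branch formulas, using the sample constant $\mu=3$ (an illustration, not a proof):

```python
# f(x) = |mu/(mu-1) - 1/x|^(1/(mu-1)) and its derivative on each branch.
mu = 3
c = mu / (mu - 1)

def f(x):
    return abs(c - 1 / x) ** (1 / (mu - 1))

def fprime(x):
    # valid on either branch, i.e. for x != 1 - 1/mu
    u = c - 1 / x
    sign = 1 if u > 0 else -1
    return sign * (1 / (mu - 1)) * abs(u) ** (1 / (mu - 1) - 1) / x**2

h = 1e-6
for x in (2.0, 0.5):  # one point on each side of x = 1 - 1/mu = 2/3
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-5
```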


Wednesday, October 11, 2017

discrete mathematics - Proof using double counting


Give two proofs that $$ (2n)! = \binom{2n}{n} \cdot (n!)^2 $$




I've already determined how to prove it algebraically (I think):



$\binom{2n}{n} = \frac{(2n)!}{(n!)^2}$



$(2n)! = \frac{(2n)!}{(n!)^2} *(n!)^2$




$(2n)! = (2n)!$



But how would you go about proving it through double counting? Any pointers on how to formulate the proof would be appreciated, thanks.
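One standard double count (sketched here as a pointer, not a full write-up): both sides count the orderings of $2n$ distinct objects. The right side first chooses which $n$ of the $2n$ positions receive the first $n$ objects ($\binom{2n}{n}$ ways), then orders each group of $n$ within its positions ($n!$ ways each). The identity itself is easy to spot-check:

```python
import math

# (2n)! = C(2n, n) * (n!)^2 for small n
for n in range(1, 10):
    assert math.factorial(2 * n) == math.comb(2 * n, n) * math.factorial(n)**2
```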

probability - Showing $int_0^{infty}(1-F_X(x))dx=E(X)$ in both discrete and continuous cases


Ok, according to some notes I have, the following is true for a random variable $X$ that can only take on non-negative values, i.e. $P(X<0)=0$:


$\int_0^{\infty}(1-F_X(x))dx=\int_0^{\infty}P(X>x)dx$


$=\int_0^{\infty}\int_x^{\infty}f_X(y)dydx$


$=\int_0^{\infty}\int_0^{y}dxf_X(y)dy$


$=\int_0^{\infty}yf_X(y)dy=E(X)$


I'm not seeing the steps here clearly. The first line is obvious and the second makes sense to me, as we are using the fact that the probability of a random variable being greater than a given value is just the density evaluated from that value to infinity.


Where I'm lost is why $\int_0^{\infty}\int_x^{\infty}f_X(y)\,dy\,dx=\int_0^{\infty}\int_0^{y}f_X(y)\,dx\,dy$, i.e. why the inner region of integration changes from $[x,\infty)$ to $[0,y]$.


Also, doesn't the last line equal E(Y) and not E(X)?



How would we extend this to the discrete case, where the pmf is defined only for values of X in the non-negative integers?


Thank you


Answer



The region of integration for the double integral is $x,y \geq 0$ and $y \geq x$. If you express this integral by first integrating with respect to $y$, then the region of integration for $y$ is $[x, \infty)$. However if you exchange the order of integration and first integrate with respect to $x$, then the region of integration for $x$ is $[0,y]$. The reason why you get $E(X)$ and not something like $E(Y)$ is that $y$ is just a dummy variable of integration, whereas $X$ is the actual random variable that defines $f_X$.
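For the discrete case, the same exchange of summation gives $E(X)=\sum_{n=0}^{\infty}P(X>n)$ for a non-negative integer-valued $X$. A quick check on a small made-up distribution (a sketch):

```python
# X takes values 0..4 with the pmf below; compare E[X] computed directly
# with the tail-sum formula sum over n >= 0 of P(X > n).
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.25, 4: 0.15}

mean = sum(x * p for x, p in pmf.items())
tail_sum = sum(sum(p for x, p in pmf.items() if x > n) for n in range(4))

assert abs(mean - tail_sum) < 1e-12
```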


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...