Sunday, June 30, 2019

elementary number theory - Proving $\sqrt2$ is irrational

I used the method of contradiction by assuming that $\sqrt 2$ is a rational number. Then, by the definition of rational number, there exist two integers $p$ and $q$ whose ratio equals $\sqrt 2$. Thus,
$$\frac pq = \sqrt2\tag{x}$$



Squaring both sides,
$$p^2/q^2 = 2\tag{a}$$




or $$p^2 = 2q^2\tag{b}$$
This means that $p^2$ is an even number, implying that $p$ is even.



Now, any even integer can be written as $2^kf$ where $f$ is an odd integer and $k$ is some positive integer (the minimum value of both $k$ and $f$ is $1$, since the even numbers start from $2 = 2^1\cdot1$). For odd numbers, $k=0$.



For example, $8 = 2^3 \cdot 1$, $4= 2^2 \cdot 1$, $18= 2^1 \cdot 9$, $24 = 2^3 \cdot 3$, $-12= 2^2 \cdot(-3)$, etc.



Now, from $(b)$, $q^2$ can be even or odd.




Case 1: $q^2$ is even (thus $q$ is even). Then write $p= 2^{k_1} \cdot f_1$ and $q = 2^{k_2} \cdot f_2$. Note that in this case the conditions $k_1=k_2$ and $f_1=f_2$ can't both hold simultaneously, since that would mean $p/q =1$, while here $p/q =\sqrt2$ (from $(x)$). Let's consider the condition where $k_1=k_2=k$ but $f_1\ne f_2$. Then
$$\frac{p^2}{q^2}=\frac{(2^kf_1)^2}{(2^kf_2)^2} = \frac{f_1^2}{f_2^2} =2$$
(from $(a)$), i.e., an odd-over-odd ratio ($f_1$ and $f_2$ are odd) can never equal $2$.



Now let's consider $f_1=f_2=f$ but $k_1\ne k_2$; thus $(2^{k_1}f)^2/(2^{k_2}f)^2= 2^{2(k_1-k_2)}= 2$, meaning $k_1 - k_2 = 0.5$; but $k_1,k_2$ are integers, so their difference can't be $0.5$. (Also, $k_1-k_2$ would have to be at least $1$ here: since $2^{k_1-k_2}=\sqrt 2$ and $\sqrt 2>1$, $k_1-k_2$ can't be negative; but $k_1-k_2\ge1$ can't satisfy $(a)$, since the minimum value of $p/q$ in that case would be $2$, which is surely greater than $\sqrt2$.)



Now let's consider $f_1\ne f_2$ and $k_1\ne k_2$; thus $\frac{(2^{k_1} f_1)^2}{(2^{k_2} f_2)^2}= 2^{2(k_1-k_2)}\frac{f_1^2}{f_2^2}$ can never equal $2$, since $\frac{f_1^2}{f_2^2}$ is either odd or odd-over-odd, i.e., it doesn't contain $2$ as a factor, while $2^{2(k_1-k_2)}$ is a power of $2$. So in $2^{2(k_1-k_2)}\frac{f_1^2}{f_2^2}$ there is no chance of the odd factor $\frac{f_1^2}{f_2^2}$ cancelling.



Case 2: $q^2$ is odd (thus meaning $q$ is odd). $q=f'$ (say, here $k=0$ for $q$ but not for $p$), thus from $(a)$ $2^{2k}\frac{f^2}{f'^2} = 2$ but this isn't possible since a power of $2$ multiplied with odd factor can't equal $2$.




Thus, both case 1 and 2 suggest that for any possible combination of $k_1,k_2,f_1$ and $f_2$, $p/q\ne\sqrt2$, i.e.: for no value of integers $p$ and $q$, $p/q = \sqrt2$. Thus, this contradicts our assumption that $\sqrt2$ is rational. Therefore it must be irrational.



Please note, to further explain the third condition of Case 1: $\frac{f_1^2}{f_2^2}$ is either an odd integer or a fraction of odd/odd or $1/\text{odd}$ form, so it does not contain $2$ as a factor. For any $2^n$ to be reduced to $2$, it must be multiplied by $1/2^{n-1}$; but $\frac{f_1^2}{f_2^2}$ can't be of the form $1/2^{n-1}$, since for odd numbers the exponent in the $2^k\cdot f$ notation is $k=0$, so the factor $2^k$ becomes $1$, and therefore $f_1/f_2$ can't be represented as $1/2^{n-1}$ with $n\ge 1$. Hence $2^{2(k_1-k_2)}\frac{f_1^2}{f_2^2}$ can never equal $2$.



Difficulty: is my approach correct? This proof, I thought, is different from the proofs found on the internet or in books, since I have used $p$ and $q$ as arbitrary integers, which may have a common factor. So I am not sure if I am on the right track. Will someone please check this out and make kind comments?

Saturday, June 29, 2019

integration - Rigorous definition of differentials in the context of integrals.


When using the substitution rule in integration of an integral $\displaystyle \int f(x)\,\mathrm dx$, one turns the integral from the form $$\displaystyle \int f(x(t))\,\mathrm dx \quad(*)$$ into the form $$\int f(x(t))\,\frac{\mathrm dx}{\mathrm dt}\mathrm dt. \quad(**)$$ This transformation is usually accomplished by differentiating the substitution $x = x(t)$, such that $\dfrac{\mathrm dx}{\mathrm dt} = \dfrac{\mathrm d x(t)}{\mathrm dt}$. Now, at this point, one turns this into a differential form by means of magic, s.t. $\mathrm dx = \dfrac{\mathrm dx(t)}{\mathrm dt}\mathrm dt$. This then substitutes the differential term $\mathrm dx$ in the original expression $(*)$ with the one in the transformed expression $(**)$.


I'd like to learn that magic step a bit more rigorously, so that I can better understand it. It is often explained as "multiplication" by $\mathrm dt$, which does make sense, but it does not explain the nature of differentials: when is "multiplication" allowed? It seems there should be a more rigorous way of explaining it, perhaps by defining the "multiplication".



So, in what ways can differentials like $\mathrm dx$ and $\mathrm dt$ be formalized in this context? I've seen them being compared to small numbers, which often work, but can this analogy fail? (And what are the prerequisites needed to understand them?)


Answer



Here's one way:


Consider $x$ and $t$ as coordinate systems on $\mathbb{R}$. If we wish to change coordinate systems, we have to look at how they transform into one another. If we consider $t$ to be a reference coordinate system and let the coordinate transformation be defined by $x(t) = 2t$, then for any value of $t$, the corresponding $x$-coordinate is twice that value.


Now, since $(\mathbb{R}, + , \cdot)$ is a vector space, it has a dual $\mathbb{R}^*$. Using this space, we can start defining the elements $dx, dt$. Specifically, $dt$ will be a basis for $\mathbb{R}^*$ if $t$ is the basis vector for $\mathbb{R}$ . The elements of the dual space are called 1-forms. 1-forms of $\mathbb{R}^*$ "eat" vector elements of $\mathbb{R}$ and return a measure along that direction (only 1 dimension, so one direction). In this case you can consider elements of $\mathbb{R}^*$ as "row" vectors and multiply column vectors in $\mathbb{R}$ (which is the dot product of two vectors).


We can define a different basis for $\mathbb{R}$ and $\mathbb{R}^*$ with a coordinate change. For this example, if $dt$ eats a one dimensional vector $a$, it will return $a$. But when $dx$ eats $a$ it returns $2a$ in the $t$ coordinate system. That is $dx = 2dt$. For a general coordinate transformation, a 1-form can be described by $dx = \frac{dx}{dt} dt$.


This provides us with a way to talk about $dx$ and $dt$ meaningfully. Since $f: \mathbb{R} \to \mathbb{R}$ then $f(x)dx$ is $dx$ "eating" the vector $f(x)$ with regards to the $x$ coordinate system. Sometimes $f$ is easier to think of in a different coordinate system and so we wish to change it. $f(x)$ then becomes $f(x(t))$ and $dx$ becomes $\frac{dx}{dt}dt$. Now $dt$ is eating vectors $f(x(t))$ in its own coordinate system.


Consider how a uniformly subdivided interval $(a,b)$ looks in a new coordinate system. For example, $\{(0,\frac{1}{2}), (\frac{1}{2},1), (1,\frac{3}{2})\}$ in $t$ looks like $\{(0,1), (1,2), (2,3)\}$ in $x$ under the example coordinate transformation. $\frac{dx}{dt}$ tells us precisely how the intervals change under our transformation.
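Not part of the original answer, but here is a quick numerical sanity check (in Python, assuming SciPy is available) of the substitution rule being formalized above; the integrand `f` and the transform $x(t)=2t$ are just illustrative choices:

```python
# Check numerically that int f(x) dx = int f(x(t)) * (dx/dt) dt
# for the example transform x(t) = 2t used above.
from scipy.integrate import quad

f = lambda x: x**2                            # an arbitrary test integrand
lhs, _ = quad(f, 0, 1)                        # integral over x in (0, 1)
rhs, _ = quad(lambda t: f(2*t) * 2, 0, 0.5)   # same region in t, weighted by dx/dt = 2
print(lhs, rhs)                               # both 1/3
```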


integration - prove that $\int_{-\infty}^{\infty} \frac{x^4}{1+x^8}\,dx= \frac{\pi}{\sqrt 2} \sin \frac{\pi}{8}$






prove that $$\int_{-\infty}^{\infty} \frac{x^4}{1+x^8} dx= \frac{\pi}{\sqrt 2} \sin \frac{\pi}{8}$$




My attempt:

Let $C$ be the semicircle in the upper half of the complex plane.



The simple poles $e^{i\frac{\pi}{8}}, e^{i\frac{3\pi}{8}},e^{i\frac{5\pi}{8}},e^{i\frac{7\pi}{8}}$ lie inside the contour formed by the upper semicircle $C$ and the real axis.



Given integral value $= 2\pi i \cdot (\text{sum of residues}) = 2 \pi i \left(\frac{-1}{8}\right) \left[e^{i\frac{5\pi}{8}}+e^{i\frac{15\pi}{8}}+e^{i\frac{25\pi}{8}}+e^{i\frac{35\pi}{8}}\right] = 0.27059 \pi$



This is numerically equal to $\frac{\pi}{\sqrt 2} \sin \frac{\pi}{8}$. But how can one get this expression without using a calculator?


Answer



Just extending what you've got so far. Let's write $z=e^{i\frac{5\pi}{8}}$ and recall that $\cos\theta=\frac{e^{i\theta}+e^{-i\theta}}{2}$; then:
$$2 \pi i \left(\frac{-1}{8}\right) \left[e^{i\frac{5\pi}{8}}+e^{i\frac{15\pi}{8}}+e^{i\frac{25\pi}{8}}+e^{i\frac{35\pi}{8}}\right]=

2 \pi i \left(\frac{-1}{8}\right) \left[z+z^3+z^5+z^7\right]=\\
2 \pi i \left(\frac{-1}{8}\right) z \left[1+z^2+z^4+z^6\right] =
2 \pi i \left(\frac{-1}{8}\right) z \left[1+z^2+z^4(1+z^2)\right]=\\
2 \pi i \left(\frac{-1}{8}\right) z (1+z^2)(1+z^4)=
2 \pi i \left(\frac{-1}{8}\right) z^4 (z^{-1}+z)(z^{-2}+z^2)=\\
2 \pi i \left(\frac{-1}{8}\right) e^{i\frac{5\pi}{2}} 2\cos\left(\frac{5\pi}{8}\right)2 \cos\left(\frac{5\pi}{4}\right)=\pi i(-1)i \cos\left(\frac{\pi}{2}+\frac{\pi}{8}\right)\left(-\frac{1}{\sqrt{2}}\right)=\\
\frac{\pi}{\sqrt{2}}\sin\left(\frac{\pi}{8}\right)$$
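A quick numerical check of the final value (not part of the original answer; assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: x**4 / (1 + x**8), -np.inf, np.inf)
print(val)                                      # ~0.8501
print(np.pi / np.sqrt(2) * np.sin(np.pi / 8))   # ~0.8501
```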


Friday, June 28, 2019

Limit of infinite sum (Taylor series)



Let's say I want to show
$$
\lim_{x \to \infty} e^{-x} = 0

$$
using Taylor series. I can expand
$$
e^{-x} = \sum_{k=0}^\infty \frac{(-x)^k}{k!}
$$
so I've got to consider
$$
\lim_{x\to\infty}\sum_{k=0}^\infty \frac{(-x)^k}{k!}.
$$




How do I actually show this is equal to $0$?



My first thought is to bound it in a squeeze theorem kind of way, but
$$
\sum_{k=0}^\infty \left\vert\frac{(-x)^k}{k!}\right\vert = \sum_{k=0}^\infty \frac{x^k}{k!} = e^x \to \infty
$$
as $x \to \infty$ so that doesn't help.



This sum is absolutely convergent so I can exchange the limit and sum, but that also doesn't help as $\lim_{x \to \infty} x^k = \infty$ .




How can I directly show this limit using this infinite series? I'd also like something that applies in general for these sorts of Taylor series evaluations, rather than something relying on a unique property of the exponential function. In general I've got a more complicated series $f(x) = \sum_{k=0}^\infty a_kx^k$ that I want to find $\lim_{x \to \infty} f(x)$ for, but when I tried to do this simple case I got stuck so I suspect I'm missing some basic facts about working with sums and limits like these.


Answer



You say you need to do this using that series. Here's one way:
\begin{align}
& \sum_{n=0}^\infty \frac {(-x)^n}{n!} \cdot\sum_{m=0}^\infty \frac{x^m}{m!} \\[10pt]
= {} & \sum_{n=0}^\infty\left( \frac {(-x)^n}{n!} \cdot\sum_{m=0}^\infty \frac{x^m}{m!} \right) & & \text{This can be done because the second sum} \\
& & & \text{does not depend on $n.$} \\[10pt]
= {} & \sum_{n=0}^\infty\sum_{m=0}^\infty \left( \frac {(-x)^n}{n!}\cdot\frac{x^m}{m!} \right) & & \text{This can be done because the first fraction} \\
& & & \text{does not depend on $m.$} \\[10pt]
= {} & \sum_{p=0}^\infty \left( \sum_{\{\,(m,\,n)\,:\,m+n=p\,\}} \frac {(-x)^n}{n!} \cdot\frac{x^m}{m!} \right) & & \text{(The same terms in a different order.)} \\[10pt]

= {} & \sum_{p=0}^\infty \sum_{n=0}^p \frac {(-x)^n}{n!} \cdot\frac{x^{p-n}}{(p-n)!} \\[10pt]
= {} & \sum_{p=0}^\infty \sum_{n=0}^p \frac 1 {p!} \binom p n (-x)^n x^{p-n} \\[10pt]
= {} & \sum_{p=0}^\infty \left( \frac 1 {p!} \sum_{n=0}^p \binom p n (-x)^n x^{p-n} \right) & & \text{This can be done because that fraction} \\
& & & \text{does not change as $n$ goes from $0$ to $p.$} \\[10pt]
= {} & \sum_{p=0}^\infty \frac 1 {p!} \big((-x)+x\big)^p & & \text{by the binomial theorem} \\[10pt]
= {} & \sum_{p=0}^\infty \frac{0^p}{p!} \\[10pt]
= {} & 1 + 0 + 0 + 0 + \cdots = 1.
\end{align}
Therefore the two series we started with are reciprocals of each other.




The second series clearly is everywhere positive and everywhere increasing and approaches $+\infty$ as $x\to+\infty.$



Therefore the first series is everywhere positive and everywhere decreasing and approaches $0$ as $x\to+\infty.$
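Not part of the original answer, but the reciprocal argument is easy to see numerically: the partial sums of the two series multiply to something approaching $1$.

```python
# Partial sums of sum (-x)^k / k! and sum x^k / k! multiply to ~1.
from math import factorial

x, N = 3.0, 60
s_minus = sum((-x)**k / factorial(k) for k in range(N))  # ~ e^{-x}
s_plus = sum(x**k / factorial(k) for k in range(N))      # ~ e^{x}
print(s_minus * s_plus)                                  # ~1.0
```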


relations - Set Theory - Given 2 sets, are they order-isomorphic



We are given the sets $A=(1,2]\cup ((3,4)\cap \mathbb Q)$ and $B=(1,2)\cup ((3,4)\cap \mathbb Q)$ with the standard order $\leq$ of the reals.



Are they order-isomorphic? Meaning, is there a bijective function $f:A\to B$ such that for $a_1, a_2 \in A$, $a_1 \leq a_2$ implies $f(a_1) \leq f(a_2)$?




Answer: There isn't.



The reason for this (this is what the teacher said) is that the set $A^{*} = \{x\in A| |\{a\in A| a \geq x\}|\leq \aleph_0\}$ has a minimal value with the standard order. While $B^{*}=\{x\in B| |\{b\in B| b \geq x\}|\leq \aleph_0\}$ does not.



Firstly, I don't understand at all why this is true. And second, even if it is true, why does that imply that there isn't an order-preserving isomorphism between $A$ and $B$? I don't see the relation between the two statements.


Answer



The difference between $A$ and $B$ is what happens at '$2$'. There is a hint, which is to prove that there is no order isomorphism, so we try to find something that is true in $A$, not true in $B$, and that concerns orders.



A little digression as an example of what your teacher is trying to do: is there a homeomorphism between an infinite line and an infinite plane? Answer: no. Because if you remove a point from a line you get two different connected sets, whereas this does not hold for a plane.




Here the teacher tries to find such a property (with sets that are countable/not countable) that would have to hold in both $A$ and $B$ if there were such an order isomorphism.


sequences and series - Does an exponential decay faster than a polynomial, in the limit of an infinite power?



We know that
$$ \lim\limits_{x \rightarrow \infty} \mathrm{e}^{-x}\, x^n = 0$$
for any $n$. But I assume that usually, this is stated with the understanding that $n$ is finite. But what happens when we take the limit
$$ \lim\limits_{n \rightarrow \infty} \lim\limits_{x \rightarrow \infty} \mathrm{e}^{-x}\, x^n = 0\,?$$
The context is that I have an infinite sum of the form
$$ \lim\limits_{n \rightarrow \infty} \sum_{i=0}^n \mathrm{e}^{-x}\, x^i,$$ and I want to study its behavior as $x \rightarrow \infty$. In summary,





Does
$$ \lim\limits_{x \rightarrow \infty} \sum_{i=0}^\infty \mathrm{e}^{-x}\, x^i,$$
converge?




This question seems to indicate that the answer might be yes, but I wonder if taking $n \rightarrow \infty$ messes anything up?


Answer



The issue is one of interchanging the order of limits. Note that we have




$$\begin{align}
\lim_{n\to\infty}\lim_{x\to\infty}\sum_{i=0}^n e^{-x}x^i&=\lim_{n\to\infty} \sum_{i=0}^n \lim_{x\to\infty}\left(e^{-x}x^i\right)\\\\
&=\lim_{n\to\infty} \sum_{i=0}^n (0)\\\\
&=0
\end{align}$$



Here, we first hold $n$ fixed and let $x\to\infty$. The result of the inner limit is $0$ for any $n$. Then, letting $n\to\infty$ produces $0$ as the result.



However, if the order of the limits is interchanged, then we have




$$\begin{align}
\lim_{x\to\infty}\lim_{n\to\infty} \sum_{i=0}^n e^{-x}x^i&=\lim_{x\to\infty}\lim_{n\to\infty} e^{-x}\left(\frac{x^{n+1}-1}{x-1}\right)\\\\
\end{align}$$



which diverges since $\lim_{n\to\infty}x^n=\infty$ for $x>1$. In this case, we first hold $x>1$ fixed and take the limit as $n\to\infty$. The resultant limit diverges and renders the outer limit as $x\to\infty$ meaningless.






Aside, we ask what is the limit, if it exists, of $e^{-x}x^x$ as $x\to\infty$? We find that




$$\begin{align}
\lim_{x\to\infty}e^{-x}x^x&=\lim_{x\to\infty}e^{-x} e^{x\log(x)}\\\\
&=\lim_{x\to\infty}e^{x\log(x/e)} \\\\
&=\infty
\end{align}$$


real analysis - Show that the sequence $\left(\frac{2^n}{n!}\right)$ has a limit.





Show that the sequence $\left(\frac{2^n}{n!}\right)$ has a limit.



I initially inferred that the question required me to use the definition of the limit of a sequence, because a sequence is convergent if it has a limit: $$\left|\frac{2^n}{n!} - L \right| < \epsilon$$



I've come across approaches that use the squeeze theorem, but I'm not sure whether it's applicable to my question. While I have found answers on this site to similar questions containing this sequence, they all assume the limit is $0$.



I think I need to show $a_n \geq a_{n+1},\forall n \geq 1$, so



$$a_{n+1} = \frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\cdot\frac{2^{n}}{n!} = \frac{2}{n+1}\,a_n \le a_n \quad\text{for } n\ge 1.$$

A monotonic decreasing sequence that is bounded below is convergent, and this particular sequence is bounded below by zero since its terms are positive. I'm not sure whether or not I need to do more to answer the question.


Answer



It is easy to prove that
$$\sum_{n=1}^{\infty}\frac{2^n}{n!} \lt \infty$$
e.g. with the ratio test you have
$$\frac{a_{n+1}}{a_n}=\frac{n!}{2^n}\cdot\frac{2^{n+1}}{(n+1)!}=\frac{2}{n+1}\longrightarrow 0$$
Then $\lim\limits_{n\rightarrow\infty}\frac{2^n}{n!}$ has to be $0$







If you do not know anything about series, you might use the fact that $n!>3^n$ if $n\gt N_0$ for some fixed $N_0\in\mathbb{N}$. Therefore you have, for those $n$,
$$0<\frac{2^n}{n!} \lt\frac{2^n}{3^n}=\left(\frac{2}{3}\right)^n\longrightarrow 0$$
Thus $$\lim\limits_{n\longrightarrow\infty} \frac{2^n}{n!} =0 $$
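A quick numerical illustration (not part of the original answer) that the terms do shrink to $0$:

```python
# 2^n / n! decreases to 0, as both arguments above show.
from math import factorial

for n in (1, 2, 5, 10, 20, 40):
    print(n, 2**n / factorial(n))  # values shrink rapidly toward 0
```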


number theory - division of powers: $(n^r -1)$ divides $(n^m -1)$ if and only if $r$ divides $m$

Let $n > 1$ and $m$ and $r$ be positive integers. Prove that $(n^r −1)$ divides $(n^m −1)$ if and only if $r$ divides $m$.

abstract algebra - What is $(x^7-x)\bmod(x^6-3)$ equal to?



I'm trying to use Rabin's test for irreducibility over finite fields, but part of the test requires calculating $\gcd(f,\,x^{p^{n_i}}-x \bmod f)$, where in my case $p=7$ and $n_i=6,3,2$, as I'm testing whether $f(x)=x^6-3$ is irreducible over $\mathrm{GF}(7)$.



My trouble is I don't know how to calculate this modulo. I know how to do it for integers, and I know that in my case it implies that $x^6=3$. But after this I'm stuck.



Could anyone walk me through how to find what $(x^7-x)\bmod(x^6-3)$ is equal to?



Also, is Rabin's test a good go-to for testing whether a polynomial is irreducible over a finite field? Or are there perhaps less cumbersome methods for higher degrees of $f(x)$, where $\deg f(x)>3$, so that $f$ doesn't strictly need to factor into linear polynomials in order to be reducible? (Just suggestions would suffice.)



Answer



Division algorithm:



$$x^7 - x = (x^6 - 3) (x) + (2x)$$



and this is valid because $\deg (2x) < \deg (x^6 - 3)$



So the remainder is $2x$.
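For what it's worth, the division can also be reproduced with SymPy (assumed available); the quotient and remainder have integer coefficients, so the same identity holds mod $7$:

```python
from sympy import symbols, div

x = symbols('x')
q, r = div(x**7 - x, x**6 - 3, x)  # polynomial division over the rationals
print(q, r)                        # x and 2*x: x^7 - x = (x^6 - 3)*x + 2x
```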


soft question - What does a "convention" mean in mathematics?



We all know that $0!=1$, that the degree of the zero polynomial equals $-\infty$, that the intervals $[a,a)=(a,a]=(a,a)=\emptyset$ ... and so on, are conventions in mathematics. So is a convention something that we can't prove with mathematical logic, or is it just intuition, or something that mathematicians agree about?
Are they the same as axioms? What does "convention" mean in mathematics?
And is $i^2 = -1$ a convention? If not, how can we prove the existence of such a number?


Answer




To answer the question in the title, I would say: 'convention' in mathematics means exactly the same as in ordinary English.



As for your examples: $0!:=1$ and $[a,a):=\emptyset$ are definitions. It is a convention not to use a different definition, or to leave it undefined. Of course in this sense, every definition is a convention.



I think that informally, one says a certain definition (such as the two above) is '(just) convention' to mean that it covers 'extreme' or 'degenerate' cases: leaving them undefined would still make the theory go through, but it is more convenient to define them anyway (for example, to prevent having to exclude these extreme cases in the statements of theorems). For example, I think you could get by without defining $[a,a)$, or $[a,b]$ for $b<a$, but then you would constantly have to exclude these cases, which could be tiresome.


probability - throwing a die repeatedly so that each side appears once.

Pratt is given a fair die. He repeatedly throws the die until he has gotten each number (1 to 6) at least once.



Define the random variable $X$ to be the total number of times Pratt throws the die; in other words, how many times he has to throw the die so that each side appears at least once. Determine the expected value $E(X)$.
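This is the classic coupon-collector setup. Not part of the original post, but a quick Monte Carlo sketch (in Python) of the quantity being asked for:

```python
import random

def average_throws(trials=100_000):
    """Estimate the expected number of throws until all six faces appear."""
    total = 0
    for _ in range(trials):
        seen, throws = set(), 0
        while len(seen) < 6:
            seen.add(random.randint(1, 6))
            throws += 1
        total += throws
    return total / trials

print(average_throws())  # ~14.7, consistent with 6*(1 + 1/2 + ... + 1/6)
```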

Thursday, June 27, 2019

elementary number theory - Prove that $gcd(a,b) = gcd (a+b, gcd(a,b))$



I started by saying that $\gcd(a,b) = d_1$ and $\gcd(a+b,\gcd(a,b)) = d_2$



Then I tried to show that $\ d_1 \ge d_2, d_1 \le d_2$.



I know that $d_2 \mid d_1$ (since $d_2 = \gcd(a+b, d_1)$), hence $d_2 \le d_1$.



How do I prove that $\ d_2 \ge d_1$ ?


Answer




If $\gcd(a,b)=d_1$ then $a = d_1 x$ and $b= d_1 y$, where $x,y$ are integers. Consequently,
$$\gcd(a+b,\gcd(a,b))=\gcd(d_1(x+y),d_1) = d_1\gcd(x+y,1)=d_1.$$
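A quick empirical check of the identity (not part of the original answer):

```python
import math
import random

for _ in range(10_000):
    a, b = random.randint(1, 10**6), random.randint(1, 10**6)
    assert math.gcd(a, b) == math.gcd(a + b, math.gcd(a, b))
print("gcd(a, b) == gcd(a + b, gcd(a, b)) held on all sampled pairs")
```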


Wednesday, June 26, 2019

Continuous probability - calculate probability of r.v and distribution function

This is the question:





$X$ is a continuous random variable whose probability density function
is given by
$$f(x)=\begin{cases}
\frac{1}{9}x^2 & \text{if $0\leq x \leq 3$}.\\
0 & \text{otherwise}. \end{cases}$$



(a) what is the probability that $X$ is less than $1$?
(b) Write down the distribution function $F_{X}(x)$ for $X$ (remember to
include the values for $F_{X}$ for all real $x$).





So for (a), I am looking for
$P\{X < 1\} = \frac{1}{9}\int_{-\infty}^{1}x^2\,dx$
however how do I compute the definite integral with negative infinity? Am I allowed to replace the negative infinity with $0$ since the function ranges from $0$ to $3$?



For (b), I am not sure how to do this exactly; well, my book says "derive and then differentiate the distribution function".
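For what it's worth, the instinct in (a) is right: since $f$ vanishes outside $[0,3]$, the lower limit can be replaced by $0$, giving

$$P\{X < 1\} = \int_{-\infty}^{1} f(x)\,dx = \frac{1}{9}\int_{0}^{1} x^2\,dx = \frac{1}{27};$$

and for (b), integrating the density up to $x$ gives $F_X(x)=0$ for $x<0$, $F_X(x)=x^3/27$ for $0\le x \le 3$, and $F_X(x)=1$ for $x>3$.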

abstract algebra - Finding a basis for $\Bbb{Q}(\sqrt{2}+\sqrt{3})$ over $\Bbb{Q}$.

I have to find a basis for $\Bbb{Q}(\sqrt{2}+\sqrt{3})$ over $\Bbb{Q}$.



I determined that $\sqrt{2}+\sqrt{3}$ satisfies the equation $(x^2-5)^2-24$ in $\Bbb{Q}$.



Hence, the basis should be $1,(\sqrt{2}+\sqrt{3}),(\sqrt{2}+\sqrt{3})^2$ and $(\sqrt{2}+\sqrt{3})^3$.



However, this is not rigorous. How can I be certain that $(x^2-5)^2-24$ is the minimal polynomial that $\sqrt{2}+\sqrt{3}$ satisfies over $\Bbb{Q}$? What if the situation were more complicated? In general, how can we ascertain that a given polynomial is irreducible over a field?



Moreover, checking for linear independence of the basis elements may also prove to be a hassle. Is there a more convenient way of doing this?




Thanks.

calculus - Integrating $\frac{x^k}{1+\cosh(x)}$



In the course of solving a certain problem, I've had to evaluate integrals of the form:



$$\int_0^\infty \frac{x^k}{1+\cosh(x)} \mathrm{d}x $$



for several values of $k$. I've noticed that, for $k$ a positive integer other than $1$, the result is seemingly always a dyadic rational multiple of $\zeta(k)$, which is not particularly surprising given some of the identities for $\zeta$ ($k=7$ is the first case in which that multiple is not an integer).



However, I've been unable to find a nice way to evaluate this integral. I'm reasonably sure there's a way to change this expression into $\int \frac{x^{k-1}}{e^x+1} \mathrm{d}x$, but all the things I tried didn't work. Integration by parts also got too messy quickly, and Mathematica couldn't solve it (though it could calculate for a particular value of k very easily).




So I'm looking for a simple way to evaluate the above integral.


Answer



Just note that
$$ \frac{1}{1 + \cosh x} = \frac{2e^{-x}}{(1 + e^{-x})^2} = 2 \frac{d}{dx} \frac{1}{1 + e^{-x}} = 2 \sum_{n = 1}^{\infty} (-1)^{n-1} n e^{-n x}.$$
Thus we have
$$ \begin{eqnarray*}\int_{0}^{\infty} \frac{x^k}{1 + \cosh x} \, dx
& = & 2 \sum_{n = 1}^{\infty} (-1)^{n-1} n \int_{0}^{\infty} x^{k} e^{-n x} \, dx \\
& = & 2 \sum_{n = 1}^{\infty} (-1)^{n-1} \frac{\Gamma(k+1)}{n^k} \\
& = & 2 (1 - 2^{1-k}) \zeta(k) \Gamma(k+1).
\end{eqnarray*}$$

This formula works for all $k > -1$, where we understand that the Dirichlet eta function $\eta(s) = (1 - 2^{1-s})\zeta(s)$ is defined, by analytic continuation, for all $s \in \mathbb{C}$.
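A numerical check of the closed form (not part of the original answer; assumes NumPy and SciPy; the upper limit $100$ stands in for $\infty$, since the integrand is negligible beyond it):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

k = 4
val, _ = quad(lambda x: x**k / (1 + np.cosh(x)), 0, 100)
closed = 2 * (1 - 2**(1 - k)) * zeta(k) * gamma(k + 1)
print(val, closed)  # both ~45.457
```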


Tuesday, June 25, 2019

real analysis - If $f$ is uniformly differentiable then $f '$ is uniformly continuous?



The following theorem is true?



Theorem. Let $U\subset \mathbb{R}^m$ (open set) and $f:U\longrightarrow \mathbb{R}^n$ a differentiable function.




If $f$ is uniformly differentiable $ \Longrightarrow$ $f':U\longrightarrow \mathcal{L}(\mathbb{R}^m,\mathbb{R}^n)$ is uniformly continuous.



Note that $f$ is uniformly differentiable if



$\forall \epsilon>0\,,\exists \delta>0:|\!|h|\!|<\delta,\color{blue}{[x,x+h]\subset U} \Longrightarrow |\!|f(x+h)-f(x)-f'(x)(h)|\!|<\epsilon |\!|h|\!| $ (edited)



$\forall \epsilon>0\,,\exists \delta>0:|\!|h|\!|<\delta,\color{blue}{x,x+h\in U} \Longrightarrow |\!|f(x+h)-f(x)-f'(x)(h)|\!|<\epsilon |\!|h|\!|\qquad \checkmark$



Any hints would be appreciated.



Answer



Let's build off of Tomas' last remark, slightly modified:



Let $t>0$ be small. Then
\begin{eqnarray}
\|f'(x)-f'(y)\| &=& \frac{1}{t}\sup_{\|w\|=1}\|\langle f'(x)-f'(y),tw\rangle\| \nonumber \\
&\leq& \frac{1}{t}\sup_{\|w\|=1}\|f(x+tw)-f(x)-[f(y+tw)-f(y)]\| + 2\epsilon \nonumber
\end{eqnarray}



It suffices to show that this weighted combination of four close points on a parallelogram can be bounded by $C\epsilon t$.




Let us bound $\|f(x+h) - f(x) + f(x+k) - f(x+h+k)\|_2 \leq C\epsilon(\|h\|+\|k\|)$, and then in this case $\|h\|=t$ and $\|k\|\leq \delta$, so if $t=\delta$ the whole expression is bounded by a constant times $\epsilon$.



Note applying uniform differentiability three times in directions $h,k,$ and $h+k$, for small $\|h\|,\|k\|$ we have



\begin{eqnarray*}
\|f(x+h) - f(x) + f(x+k) - f(x+h+k)\| &\leq& \|f'(x)h + f'(x)k - f'(x)(h+k)\|_2 + 3\epsilon(\|h\|+\|k\|)\\
&=& 3\epsilon(\|h\|+\|k\|)
\end{eqnarray*}


Monday, June 24, 2019

functional equations - Let $f$ be a continuous function defined on $\mathbb R$ such that $\forall x,y: f(x+y)=f(x)+f(y)$

Let $f$ a continuous function defined on $\mathbb R$ such that $\forall x,y \in \mathbb R :f(x+y)=f(x)+f(y)$




Prove that :
$$\exists a\in \mathbb R , \forall x \in \mathbb R, f(x)=ax$$

induction - Summation problem involving odd numbers and binomial coefficients. Prove $\sum_{k=0}^{m}\binom{n}{k}(n-2k)(-1)^k=0$



Recently I came across a finite sum which appears to be zero for all odd numbers. The sum is defined as follows:





$$\sum_{k=0}^{m}~\binom{n}{k}(n-2k)(-1)^k$$
where $n=2m+1$.




For the first few $m$ this sum always equals zero. I tried to prove this by induction, with the inductive step from $m=k$ to $m=k+1$, but I did not get far with this approach. So I am asking for a proof of this formula with an explanation.







I have found more of these sums and I would be interested in a more general result. It seems to me that in general the exponent of the term $(n-2k)$ is irrelevant as long as it is an odd number, so that



$$\sum_{k=0}^{m}~\binom{n}{k}(n-2k)^3(-1)^k$$
$$\sum_{k=0}^{m}~\binom{n}{k}(n-2k)^5(-1)^k$$
$$...$$



all equal zero. I guess that's a fact, but I have no clue how to derive this.







I have to add that it does not seem necessary for the exponent to be odd - all these sums should work out for even exponents too. So the following should also equal zero.



$$\sum_{k=0}^{m}~\binom{n}{k}(n-2k)^2(-1)^k$$
$$\sum_{k=0}^{m}~\binom{n}{k}(n-2k)^4(-1)^k$$
$$...$$


Answer



We have that for $n=2m+1>1$, then
$$\begin{align}
\sum_{k=0}^{m}~\binom{n}{k}(n-2k)(-1)^k&=\sum_{k=0}^{m}~\binom{n}{k}(n-k)(-1)^k-\sum_{k=0}^{m}~\binom{n}{k}k(-1)^k\\
&=\sum_{k'=n-m}^{n}~\binom{n}{n-k'}k'(-1)^{n-k'}-\sum_{k=0}^{m}~\binom{n}{k}k(-1)^k\\

&=-\sum_{k'=m+1}^{n}~\binom{n}{k'}k'(-1)^{k'}-\sum_{k=0}^{m}~\binom{n}{k}k(-1)^k\\
&=-\sum_{k=0}^{n}~\binom{n}{k}k(-1)^{k}=n\sum_{k=1}^{n}~\binom{n-1}{k-1}(-1)^{k-1}\\
&=n(1+(-1))^{n-1}=0\end{align}$$
where at the second step, we rewrite the first sum with respect to the new index $k':=n-k$.



P.S. If $d$ is odd then again by letting $k'=n-k$,
$$\sum_{k=0}^{m}\binom{n}{k}(n-2k)^d(-1)^k=\sum_{k'=m+1}^{n}\binom{n}{n-k'}(n-2(n-k'))^d(-1)^{n-k'}\\=\sum_{k'=m+1}^{n}\binom{n}{k'}(n-2k')^d(-1)^{k'}$$
which implies that
$$\sum_{k=0}^{m}\binom{n}{k}(n-2k)^d(-1)^k=\frac{1}{2}\sum_{k=0}^{n}\binom{n}{k}(n-2k)^d(-1)^k.$$
This sum is NOT always zero. For example, when $n=d$, by Tepper's identity, we have that

$$\sum_{k=0}^{n}\binom{n}{k}(n-2k)^n(-1)^k=2^n\cdot n!.$$
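A quick SymPy check of both claims (not part of the original answer): the sums with odd exponent $d$ vanish once $n=2m+1$ exceeds $d$, the $n=d$ case gives half of Tepper's $2^n\cdot n!$, and even exponents need not vanish.

```python
from sympy import binomial

def S(m, d):
    n = 2*m + 1
    return sum(binomial(n, k) * (n - 2*k)**d * (-1)**k for k in range(m + 1))

print([S(m, 1) for m in range(1, 8)])  # all 0
print([S(m, 3) for m in range(2, 8)])  # all 0 (here n = 2m+1 > 3)
print(S(1, 3))                         # 24 = (1/2) * 2^3 * 3!, the n = d case
print([S(m, 2) for m in range(1, 8)])  # not all zero: even exponents need not vanish
```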


elementary number theory - Bezout identity of 720 and 231 by hand impossible?

Is it possible to solve this by hand, without using the extended Euclidean algorithm?


We do the Euclidean algorithm and we get:


720 = 231 * 3 + 27



231 = 27 * 8 + 15


27 = 15 * 1 + 12


15 = 12 * 1 + 3


12 = 3 * 4 + 0


The GCD is 3.


We now isolate the remainders:


27 = 720 - 231 * 3


15 = 231 - 27 * 8


12 = 27 - 15 * 1


3 = 15 - 12 * 1



We proceed by substitution:


3 = 15 - 12 * 1


What now? How can we proceed when we have $\cdot\,1$? Is there no substitution possible?


Help! Thanks!
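For what it's worth, the $\cdot\,1$ factors are harmless and the back-substitution does go through; a sketch (not part of the original post):

$$\begin{align*}
3 &= 15 - 12 = 15 - (27 - 15) = 2\cdot 15 - 27\\
&= 2\cdot(231 - 27\cdot 8) - 27 = 2\cdot 231 - 17\cdot 27\\
&= 2\cdot 231 - 17\cdot(720 - 231\cdot 3) = 53\cdot 231 - 17\cdot 720,
\end{align*}$$

and indeed $53\cdot 231 - 17\cdot 720 = 12243 - 12240 = 3$.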

Sunday, June 23, 2019

geometry - The circumcircles of the triangles $OAD$, $OBE$ & $OCF$ have another common point besides $O$!

Given a convex hexagon $ABCDEF$ circumscribing a circle $(O)$. Assume that $O$ is the circumcenter of the triangle $ACE$.



I see that the circumcircles of the triangles $OAD$, $OBE$ & $OCF$ have another common point besides $O$. But I can't tell what this point is, nor how to prove what I observe. Please help, this is my first post! Thanks!

abstract algebra - $[L_1 L_2 : k] = [L_1 : k] [L_2 : k]$ for two finite field extensions $L_1/k$ and $L_2/k$ with $L_1 \cap L_2 = k$



I want to prove or disprove the following:





Let $L_1$ and $L_2$ be two finite extensions of field $k$ inside of an
extension $L/k$. Moreover, $L_1 \cap L_2 = k$. Then the degree of the
composite field $L_1 L_2$ is $[L_1 L_2 : k] = [L_1 : k] [L_2 : k]$.




I want to solve this problem with basic field theory (I haven't studied Galois theory). Thanks in advance.






Below is what I've done so far.




Let $[L_1 : k] = m$, $[L_2 : k] = n$, and write $L_1$, $L_2$ as $L_1 = k(\alpha_1, \dots, \alpha_m)$, $L_2 = k(\beta_1, \dots, \beta_n)$, where $\alpha_1, \dots, \alpha_m$ is a $k$-basis for $L_1$, and $\beta_1, \dots, \beta_n$ is a $k$-basis for $L_2$. Then we can easily show that



$$[L_1 L_2 : k] \le [L_1 : k] [L_2 : k],$$



where the equality is achieved iff $\beta_1, \dots, \beta_n$ are linearly independent over $L_1$.



I tried to use $L_1 \cap L_2 = k$ to prove the linear independence but failed.


Answer



The answer turns out to be negative.




Consider $k = \mathbb{Q}$, $L_1 = \mathbb{Q}(\alpha)$, and $L_2 = \mathbb{Q}(\alpha \zeta_3)$, where $\alpha$ is the real root of $x^3 - 2$, and $\zeta_3$ is a primitive cube root of unity. Then $[L_1 : k] = [L_2 : k] = 3$, and it is easy to show that $L_1 \cap L_2 = k$ (one way is to consider the imaginary parts). However, $L_1 L_2 = \mathbb{Q}(\alpha, \zeta_3)$ is the splitting field of $x^3 - 2$, which has degree $6$ over $\mathbb{Q}$, not $9 = [L_1 : k][L_2 : k]$.



It turns out that if one of $L_1/k$ and $L_2/k$ is Galois, then the argument would be true.


logic - Mathematical induction question: why can we "assume $P(k)$ holds"?


So I see that the process for proof by induction is the following (using the following statement for sake of example: $P(n)$ is the formula for the sum of natural numbers $\leq n$: $0 + 1 + \cdots +n = (n(n+1))/2$ )


  1. Show that the base case is trivially true: $P(0)$: let $n = 0$. Then $0 = (0(0+1))/2$, which is true.

  2. Show that if $P(k)$ holds then $P(k+1)$ also holds. Assume $P(k)$ holds for some unspecified value of $k$. It must be then shown that $P(k+1)$ holds.

This is the part I don't get: 'Assume $P(k)$ holds'


We are just assuming something is true and then 'proving' that something else is true based on that assumption. But we never proved the assumption itself is true, so it doesn't seem to hold up to me. Obviously proof by induction works, so am I viewing the process incorrectly?


Answer



The "inductive step" is a proof of an implication: $$\mathbf{if}\ P(k),\ \mathbf{then}\ P(k+1).$$ So we are trying to prove an implication.


When proving an implication, instead of proving the implication, we usually assume the antecedent (the clause after "if" and before "then"), and then use that to prove the consequent (the clause after "then"). There are several reasons why this is reasonable, and one reason why it is valid.


Reasonable:




  1. An implication, $P\to Q$, is false only in the case where $P$ is true but $Q$ is false. In any other combination of "truth values", the implication is true. So in order to show that $P\to Q$ is valid (always true), it is enough to consider the case when $P$ is already true: if $P$ is false, then the implication will be true regardless of what $Q$ is.




  2. More informally: in proving $P\to Q$, we can say: "if $P$ is false, then it doesn't matter what happens to $Q$, and we are done; if $P$ is true, then..." and proceed from there.



Why is it a valid method of proof?


There is something called the Deduction Theorem. What it says is that if, from the assumption that $P$ is true, you can produce a valid proof that shows that $Q$ is true, then there is a recipe that will take that proof and transform it into a valid proof that $P\to Q$ is true. And, conversely, if you can produce a valid proof that $P\to Q$ is true, then from the assumption that $P$ is true you can produce a proof that shows that $Q$ is true.


The real interesting part of the Deduction Theorem is the first part, though: that if you can produce a proof of $Q$ from the assumption that $P$ is true, then you can produce a proof of $P\to Q$ without assuming anything about $P$ (or about $Q$). It justifies the informal argument given above.


That's why, in mathematics, whenever we are trying to prove an implication, we always assume the antecedent is already true: the Deduction Theorem tells us that this is a valid method of proving the implication.



algebra precalculus - Show that $\frac {37 - \sqrt{1357}}{3} = \frac {4}{37 + \sqrt{1357}}$.


Solving the cubic equation:


$$3x^3 - 74x^2 + 4x = 0$$


I found the following roots:


$$x = 0, \frac {37 - \sqrt{1357}}{3}, \frac {37 + \sqrt{1357}}{3}$$


On the Wolfram Alpha website one of the roots is shown differently.


Instead of the root:


$$x = \frac {37 - \sqrt{1357}}{3}$$


The site shows:



$$x = \frac {4}{37 + \sqrt{1357}}$$


I rapidly established that:


$$\frac {37 - \sqrt{1357}}{3} = \frac {4}{37 + \sqrt{1357}} \approx 0.054$$


But I can't see what mathematical steps would turn $\frac {37 - \sqrt{1357}}{3}$ into $\frac {4}{37 + \sqrt{1357}}$ or indeed vice versa. Can someone explain what steps would make this transformation please?


Also...


It seems to me that if an equation has 2 roots which differ only in whether they have a plus or minus sign as a result of taking the square root of both sides of a quadratic equation, then it's best to express the 2 roots in the same way with only the plus-or-minus sign differing (it's clearer that way right?). What are the reasons for expressing one of them differently as Wolfram Alpha has done in this case?


Thanks.


Answer



$$\frac {37 - \sqrt{1357}}{3}= \frac{(37-\sqrt{1357})(37+\sqrt{1357})}{3(37+\sqrt{1357})}=\frac{37^2-1357}{3(37+\sqrt{1357})}$$ $$=\frac{12}{3(37+\sqrt{1357})}=\frac{4}{(37+\sqrt{1357})}$$


Saturday, June 22, 2019

calculus - Why does $\frac{1}{2}\lim_{x \to 0}\frac{x}{\sin x}$ equal $\frac{1}{2}\frac{1}{\lim_{x \to 0}\frac{\sin{x}}{x}}$?




I've come across the following transformation:



$$\frac{1}{2}\lim_{x \to 0}\frac{x}{\sin x}=\frac{1}{2}\frac{1}{\lim_{x \to 0}\frac{\sin{x}}{x}}$$



But I can't quite understand why and how it works. I would be grateful if someone explained why it's correct.



Thanks!


Answer



The function $x\mapsto \frac1x$ is continuous away from $0$. Therefore, for any function $f(x)$ and any value $a\in[-\infty,\infty]$, we have $$\lim_{x\to a}\frac1{f(x)}=\frac1{\lim_{x\to a}f(x)}$$ as long as the expressions involved exist; in particular, $\lim_{x\to a}f(x)$ must be nonzero.


functional equations - Does a function that satisfies the equality $f(a+b) = f(a)f(b)$ have to be exponential?





I understand the other way around, where if a function is exponential then it will satisfy the equality $f(a+b)=f(a)f(b)$. But is every function that satisfies that equality always exponential?


Answer



First see that $f(0)$ is either 0 or 1. If $f(0)=0$, then for all $x\in\mathbb R$, $f(x)=f(0)f(x)=0$. In this case $f(x)=0$ a constant function.



Let's assume $f(0)=1$. See that for positive integer $n$, we have $f(nx)=f(x)^n$ which means $f(n)=f(1)^n$. Also see that:
$$
f(1)=f(n\frac 1n)=f(\frac 1n)^n\implies f(\frac 1n)=f(1)^{1/n}.
$$
Therefore for all positive rational numbers:
$$

f(\frac mn)=f(1)^{m/n}.
$$
If the function is continuous, then $f(x)=f(1)^x$ for all positive $x$. For negative $x$ see that:
$$
f(0)=f(x)f(-x)\implies f(x)=\frac{1}{f(-x)}.
$$
So in general $f(x)=a^x$ for some $a>0$.







Without continuity, consider the relation $xRy$ if $\frac xy\in \mathbb Q$. This relation forms equivalence classes and partitions the nonzero reals into sets $\mathbb Q\cdot z$ with representatives $z$. On each class the function is exponential with base $f(z)$, in the sense that $f(qz)=f(z)^q$ for rational $q$.


real analysis - Find Borel functions $f,g$ that agree on a dense subset of $\mathbb R$ but not at $\lambda$-almost every $x\in \mathbb R$



Here is the homework question, verbatim:


Find Borel functions $f,g: \mathbb{R} \to \mathbb{R}$ that agree on a dense subset of $\mathbb{R}$ but are such that $f(x) \neq g(x)$ holds at $\lambda$-almost every $x\in \mathbb{R}$


I interpreted the latter part to mean, "... holds $\lambda$-almost everywhere in $\mathbb{R}$."


I also understand $\lambda$ to actually be $\lambda^{\ast}$ - Lebesgue outer measure.


I also think the question says that $f(x) \neq g(x)$ holds $\lambda$-almost everywhere in $\mathbb{R}$ and means that for only a set of outer Lebesgue measure zero is it true that $f(x) = g(x)$. This latter set happens to be the dense subset of $\mathbb{R}$ mentioned in the problem.


So what I have so far is that $f(x) = g(x)$ on a set $A$ s.t. $\lambda \big(A:=\{x \in \mathbb{R} : f(x) = g(x) \}\big)=0$. So $A$ is dense, meaning that any neighborhood $N(x,\epsilon)$ of any $x \in \mathbb{R}$ contains at least one point from $A$. This to me means that, since $\operatorname{int}(A^c) = \varnothing$, these functions could only agree at the points ${}^{\pm}\infty$ of the extended real number line. But I don't see how $\{{}^-\infty\}$ and $\{{}^+\infty\}$ can be dense....?


So does this mean two different functions (classes of functions?) that only share one or both infinite limits?


Thanks much for any guidance!


nate


Answer




Thanks all for the help with this - it may yet need editing? @Gerry ?


So set $g(x)=0$ identically and treat $f(x)$ as an indicator/characteristic function on the set where $f(x) \neq g(x)$. The dense subset of $\mathbb{R}$ where the two functions are equal has the properties, $$f(x) = g(x) \longleftrightarrow \lambda^{\ast}(\left\{ x \in \mathbb{R} : f(x) = g(x) \right\} ) = \lambda^{\ast}\big(H^c:= dense\;\;subset\;\;of\;\;\mathbb{R} \big) = 0,$$ and the function is defined as, $$f(x) = \left\{ \begin{array}{ll} 1,& x \in H \subseteq \mathbb{R} \\ 0, & x \notin H \end{array} \right. .$$ So choose $H$ s.t. $f(x) \neq 0\;\;a.e.$.


Let $H:=\mathbb{R}\backslash \mathbb{Q} \Longleftrightarrow H^c = \mathbb{Q}$, where $\mathbb{Q}$ is dense (and countable). Then the indicator function becomes, $$f(x) = \left\{ \begin{array}{ll} 1, & x \in \mathbb{R}\backslash \mathbb{Q} \\ 0, & x \in \mathbb{Q} \end{array} \right. .$$ As a check then, on $\mathbb{Q}$, $f(x) = 0$ and $g(x) = 0$ (though $g(x) = 0$, being identically 0 implies it is the additive identity of the underlying group, and $f(x)=0$ implies that $\left[f(x)\right] = 0$ as a class of functions - are they really the same?).


On $\mathbb{R} \backslash \mathbb{Q}$, $f(x) = 1$ and $g(x) = 0$.


Basic mathematical induction question.



Prove using induction:




For any natural number $n$ there is a natural number $m$ such that $n\le m^2\le 2n$.




Obviously letting $n$ and $m$ equal $1$ satisfies the first part of mathematical induction. I'm stuck on the second part. I believe we assume the inequality holds for $n=k$, but I am stuck on where to go next. I know we have to prove the inequality holds for $k+1$, but I'm not sure how to go about that.



Answer



We assume that for $k$ there is an $m$ such that $k \le m^2 \le 2k$. Now we want to prove there is a $p$ such that $k+1 \le p^2 \le 2(k+1)$. If $k+1 \le m^2$ we can take $p=m$ as a witness, since $2(k+1) \gt 2k \ge m^2$. If $k+1 \gt m^2$ we have $k=m^2$, so $k+1 =m^2+1 \lt (m+1)^2=m^2+2m+1=k+1+2m\le 2(k+1)$ as long as $2m \le k+1$, which is always true when $k=m^2$ (since $m^2+1-2m=(m-1)^2\ge 0$).


Friday, June 21, 2019

number theory - Elementary proof that $4$ never divides $n^2 - 3$



I would like to see a proof that for all integers $n$, $4$ never divides $n^2 - 3$. I have searched around and found some things about quadratic reciprocity, but I don't know anything about that. I am wondering if there is a more elementary proof.




For example, I managed to show that $4$ never divides $x^2 - 2$ by saying that if $4$ does divide $x^2 - 2$, then $x^2 - 2$ is even. And then $x^2$ is even, which means that $x$ is even. So $x = 2m$ for some integer $m$, and so $x^2 - 2 = 4m^2 - 2$ is not divisible by $4$. So I would like to see a similar proof that $4$ doesn't divide $n^2 -3$.


Answer



$n $ is odd $\implies n=2k+1\implies n^2-3=2(2k^2+2k-1)$ where $2k^2+2k-1$ is odd and hence can't have $2$ as a factor.



In order for $4$ to divide $n^2-3$, it would need to have $4=2\cdot 2$ as a factor, but note that $2$ appears as a factor only once if $n$ is odd.



$n$ is even $\implies n^2-3=4k^2-3$ which is odd


calculus - How can something be proved unsolvable?




My question specifically deals with certain real indefinite integrals such as $$\int e^{-x^2} {dx} \ \ \text{and} \ \ \int \sqrt{1+x^3} {dx}$$ Books and articles online have only ever said that these cannot be expressed in terms of elementary functions. I was wondering how this could be proved? I know this is a naive way of thinking, but it seems to me like these are just unsolved problems, not unsolvable ones.


Answer



The trick is to make precise the meaning of "elementary": essentially, these are functions, which are expressible as finite combinations of polynomials, exponentials and logarithms. It is then possible to show (by algebraically tedious disposition of cases, though not necessarily invoking much of differential Galois theory - see e.g. Rosenthal's paper on the Liouville-Ostrowski theorem) that functions admitting elementary derivatives can always be written as the sum of a simple derivative and a linear combination of logarithmic derivatives. One consequence of this is the notable criterion that a (real or complex) function of the form $x\mapsto f(x)e^{g(x)}$, where $f, g$ are rational functions, admits an elementary antiderivative in the above sense if and only if the differential equation $y'+g'y=f$ admits a rational solution. The problem of showing that $e^{x^2}$ and the lot have no elementary indefinite integrals is then reduced to simple algebra. In any case, this isn't an unsolved problem and there is not much mystery to it once you've seen the material.


Thursday, June 20, 2019

calculus - Evaluate $\int_0^{1}\cos(\frac{\pi t}2)\,dt$



Evaluate the definite integral


$$\int_0^{1}{\cos(\frac{\pi t}2)}dt$$


I've been doing indefinite integrals like this:


$$\int{\frac{\cos x}{\sin ^2x}}\,dx$$ so I could do this:


$$u=\sin x$$

$$du=\cos x\,dx \dots$$


And things would work out, but with $$\int_0^{1}{\cos(\frac{\pi t}2)}\,dt$$ I'm having trouble figuring out what to substitute. $$u=\frac{\pi t}{2}$$ doesn't seem right, because then

$$du=\frac\pi 2\,dt$$ and that doesn't fit in my integral anywhere.


Is this right?


So

$$\frac{\sin u}{\frac\pi 2} = \frac{\sin \frac {\pi t} 2}{\frac \pi 2}\Bigg|_0^1 = \frac{\sin \frac {\pi (1)} 2}{\frac \pi 2} - 0 = \frac 2 \pi$$


Answer



Try using subsitution rule.


$$u = \frac{\pi}2 t \text{ and } du = \frac{\pi}2 \, dt \implies \frac 2{\pi} du = dt$$


And since this is a definite integral, change your limits accordingly: $$u(0)=\frac{\pi}2 \cdot 0=0 \text{ and }u(1)=\frac{\pi}2 \cdot 1=\frac{\pi}2$$


Finally, \begin{align*} \int_0^1 \cos\left(\frac {\pi}2 t \right) \, dt&=\frac 2{\pi} \int_0^{\pi/2} \cos u \, du \\ \end{align*} Can you take it from here?


Question regarding the Cauchy functional equation


Is it true that, if a real function $f$ satisfies $f(x+y) = f(x) + f(y)$ and vanishes at some $k \neq 0$, then $f(x) = 0$? Over the rationals(or, allowing certain conditions like continuity or monotonicity), this is clear since it is well known that the only solutions to this equation are functions of the form $f(x) = cx$. The reason I'm asking is to see whether or not there's "weird" solutions other than the trivial one.


Some observations are that $f(x) = f(x+k) = -f(k-x)$, i.e., $f$ is periodic with period $k$.


It is easy to see that the function also vanishes at $x=\frac{k}{2}$, and so, iterating this process, the function vanishes at, and has a "period" of, $\frac{k}{2^n}$ for all $n$. If the period can be made arbitrarily small, I want to say that this implies the function is constant, but of course I don't know how to preclude pathological functions.


Answer



The values of $f$ can be assigned arbitrarily on the elements of a Hamel basis for the reals over the rationals, and then extended to all of $\mathbb{R}$ by $\mathbb{Q}$-linearity. So (assuming the Axiom of Choice) there are indeed weird solutions.



Cauchy functional equation without choice



Assume ZF+ not AC. Then how many solutions are there for Cauchy functional equation?



Thank you


Answer



The answer is undecidable. We know it could be $2^{\aleph_0}$ and it could be $2^{2^{\aleph_0}}$. I am unaware of results that it could be an intermediate cardinality, though.



It is true that there are always the continuous ones (and there are $2^{\aleph_0}$ of those), but it is consistent that there are only the continuous ones. For example in Solovay's model or in Shelah's model of $\sf ZF+DC+BP$ (the last one denotes "all sets of reals have the Baire property").




Assuming only that the axiom of choice fails is not enough to conclude in what manner it fails, and whether or not the real numbers are even well-orderable or not.



It is consistent that the axiom of choice fails and the real numbers are well-orderable, in which case the usual proof as if the axiom of choice holds shows that there are $2^{2^{\aleph_0}}$ solutions.


Wednesday, June 19, 2019

complex analysis - How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?



Could you provide a proof of Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?


Answer



Assuming you mean $e^{ix}=\cos x+i\sin x$, one way is to use the MacLaurin series for sine and cosine, which are known to converge for all real $x$ in a first-year calculus context, and the MacLaurin series for $e^z$, trusting that it converges for pure-imaginary $z$ since this result requires complex analysis.



The MacLaurin series:

\begin{align}
\sin x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}x^{2n+1}=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots
\\\\
\cos x&=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots
\\\\
e^z&=\sum_{n=0}^{\infty}\frac{z^n}{n!}=1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots
\end{align}



Substitute $z=ix$ in the last series:
\begin{align}

e^{ix}&=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=1+ix+\frac{(ix)^2}{2!}+\frac{(ix)^3}{3!}+\cdots
\\\\
&=1+ix-\frac{x^2}{2!}-i\frac{x^3}{3!}+\frac{x^4}{4!}+i\frac{x^5}{5!}-\cdots
\\\\
&=1-\frac{x^2}{2!}+\frac{x^4}{4!}+\cdots +i\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots\right)
\\\\
&=\cos x+i\sin x
\end{align}


integration - References on Breaking Integrals into Logarithms

I've seen that (tough) integrals may be broken into answers in logarithmic form. In other words, many integrals have an alternate answer that is in the form of a function involving logarithms. An example is this question, which gives an alternate answer in terms of logarithms.


I'd like to know much more about breaking integrals into logarithms. Is there a method that can accomplish this without luck? I've read a reference (actually pictures of a book, I believe) that stated something like any integral can be broken into this logarithmic form. I'd like to know what is known about this, and I'd be delighted if someone could reference this research.


I'm looking into an algorithm to do very tough integration, and wonder if this technique is anywhere close to feasible.

limits - Is the sequence $\{S_n\}$ convergent?

Let $$S_n=e^{-n}\sum_{k=0}^n\frac{n^k}{k!}$$



Is the sequence $\{S_n\}$ convergent?



The following is my answer, but it is not correct. Please give some hints.



For all $x\in\mathbb{R}$, $$\lim_{n\rightarrow\infty}\sum_{k=0}^n\frac{x^k}{k!}=e^x.$$

Then



$$\lim_{n\rightarrow\infty}e^{-n}\sum_{k=0}^n\frac{n^k}{k!}=1.$$

Tuesday, June 18, 2019

How to simplify this fraction?

How do I simplify this fraction?

$$\frac{R_1+R_2}{\frac1{R_1} + \frac1{R_2}}$$




The answer is $R_1R_2$. I just don't know how to get there.
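One way to see it (a worked sketch, not part of the original post): put the denominator over a common denominator and cancel,

$$\frac{R_1+R_2}{\frac{1}{R_1} + \frac{1}{R_2}} = \frac{R_1+R_2}{\frac{R_2+R_1}{R_1R_2}} = (R_1+R_2)\cdot\frac{R_1R_2}{R_1+R_2} = R_1R_2.$$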

Limit of a sequence including infinite product. $\lim\limits_{n \to\infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)$





I need to find the limit of the following sequence:
$$\lim\limits_{n \to\infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)$$


Answer




PRIMER:



In THIS ANSWER, I showed using only the limit definition of the exponential function and Bernoulli's Inequality that the logarithm function satisfies the inequalities



$$\bbox[5px,border:2px solid #C0A000]{\frac{x-1}{x}\le \log(x)\le x-1} \tag 1$$




for $x>0$.







Note that we have



$$\begin{align}
\log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)&=\sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)\tag 2
\end{align}$$




Applying the right-hand side inequality in $(1)$ to $(2)$ reveals



$$\begin{align}
\sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)&\le \sum_{k=1}^n \frac{k}{n^2}\\\\
&=\frac{n(n+1)}{2n^2} \\\\
&=\frac12 +\frac{1}{2n}\tag 3
\end{align}$$



Applying the left-hand side inequality in $(1)$ to $(2)$ reveals




$$\begin{align}
\sum_{k=1}^n \log\left(1+\frac{k}{n^2}\right)&\ge \sum_{k=1}^n \frac{k}{k+n^2}\\\\
&\ge \sum_{k=1}^n \frac{k}{n+n^2}\\\\
&=\frac{n(n+1)}{2(n^2+n)} \\\\
&=\frac12 \tag 4
\end{align}$$



Putting $(2)-(4)$ together yields




$$\frac12 \le \log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)\le \frac12+\frac{1}{2n} \tag 5$$



whereby application of the squeeze theorem to $(5)$ gives



$$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty} \log\left(\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)\right)=\frac12}$$



Hence, we find that



$$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty}\prod_{k=1}^n \left(1+\frac{k}{n^2}\right)=\sqrt e}$$




And we are done!
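A quick numerical check (not part of the original answer) that the product approaches $\sqrt e\approx 1.6487$:

```python
import math

def product(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= 1 + k / n**2
    return p

for n in (10, 100, 10_000):
    print(n, product(n))  # decreases toward ~1.6487 as n grows
print(math.sqrt(math.e))  # 1.6487...
```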


Basic Mathematical Induction


I'm not quite sure how to approach this question.


I need to prove that for $$n\ge1$$


$$1^2+2^2+3^2+\dots+n^2=\frac16n(n+1)(2n+1)$$


Do I just plug in $1$ and see if $$\frac16(1)((1)+1)(2(1)+1) = 1^2\text{ ?}$$


Answer



Well, what we want to do with induction is to show that for the set $S$ of numbers satisfying our desired property, $S = \mathbb{N}$. We can do this on the naturals thanks to a very special property: as part of the Peano axioms, we know that $\forall x \in \mathbb{N}$, $x$ has a successor also in $\mathbb{N}$, of the form $\text{s}(x) = x+1$. So if we can show that the base natural $1$ is in $S$, and that whenever our property holds for an arbitrary $x$ it also holds for its successor, we prove the property over the naturals. This works because if $1\in S$ and $x\in S$ implies $\text{s}(x)\in S$, we can let $1$ be our $x$, and then we get that the property holds for $\text{s}(1)$, so it holds for $\text{s}(\text{s}(1))$, and so on.


So to the problem at hand. Let $S = \left\{ x \in \mathbb{N} : 1^2 + 2^2 + ... + x^2 = \frac{x(x+1)(2x+1)}{6} \right\}$. First we show the base case $1 \in S$. So, $$1^2 = 1 = \frac{1(1+1)(2*1+1)}{6} = \frac{6}{6} = 1$$ therefore $1 \in S$.



Next we form our inductive hypothesis and assume that some natural $k \in S$ which implies $1^2 + 2^2 + ... + k^2 = \frac{k(k+1)(2k+1)}{6}$. We will use this to make our inductive step and show $k+1 \in S$. So,


\begin{align*} 1^2 + 2^2 + ...+k^2 + (k+1)^2 =& \frac{k(k+1)(2k+1)}{6} + (k+1)^2\\ =&\frac{k(k+1)(2k+1)}{6} + \frac{6(k+1)^2}{6}\\ =&\frac{(k+1)[k(2k+1) + 6(k+1)]}{6}\\ =&\frac{(k+1)[2k^2+k + 6k+6)]}{6}\\ =&\frac{(k+1)[2k^2+7k+6)]}{6}\\ =&\frac{(k+1)[(k+2)(2k+3)]}{6}\\ =&\frac{(k+1)[(k+1+1)(2k+2+1)]}{6}\\ =&\frac{(k+1)((k+1)+1)(2(k+1)+1)}{6}\\ \end{align*} Which is what we trying to show! $\blacksquare$


Side note: I should probably conclude that we showed that when $k \in S$ that $k+1 \in S$ which means that $S = \mathbb{N}$.


calculus - Finding an integral using the Laplace transform

I have to evaluate the following integral by using the Laplace transform:



$$\int_0^\infty \frac{\sin^4 (tx)}{x^3}\,\mathrm{d}x.$$



How am I supposed to approach this question by using the Laplace transform?

real analysis - Can someone help me solve this problem please.




For the real numbers $x=0.9999999\dots$ and $y=1.0000000\dots$ it is the case that $x^2 < y^2$.

Answer



Since $x=y$ (that is, $y = 0.\overline{9}=\sum_{n=1}^\infty \frac9{10^n}=1=x$), it must be the case that $x^2=y^2$. Thus, your statement is false.


What are the "moments" of the Riemann zeta function?



I have been reading about the applications of the Riemann zeta function in physics and came across something called a "moment". I have never heard of such a property of the Riemann zeta function so I tried to find information on it on the Internet, without success. Neither the Wikipedia page nor other articles define what a $\zeta$ "moment" is.



For instance, on the website of the University of Bristol, there is the following text:





Now, there are certain attributes of the Riemann zeta function called
its moments which should give rise to a sequence of numbers.




One could not be more vague. Further on, one can read:




(...) only two of these moments were known: 1, as calculated by Hardy and

Littlewood in 1918; and 2, calculated by Ingham in 1926.




I was unable to find references to these "calculations" by H&L and Ingham. Even more puzzling is:




The next number in the series was suggested as 42 by Conrey (...). The challenge for the quantum physicists then, was to use their quantum methods to check the number 42.




This makes no sense at all. What "quantum methods" are we talking about and what does "check the number 42" mean? I understand they didn't want to go into too much detail but this is suitably vague to confuse any reader.




So what is a "moment" of the Riemann zeta function and why is it important?


Answer



Radziwiłł has a paper about the moments of the Riemann zeta function:



The 4.36-th moment of the Riemann Zeta function



Both Ingham's paper of 1926 and Hardy-Littlewood's paper of 1918 are in the references.



Random matrix theory (and quantum billiards) is known to be related to the spacings between the non-trivial roots of the Riemann zeta function:




Is there an equivalent statement of Riemann Hypothesis in term of Random Matrix or physics theory?



Edit: To address the question why the moments of the Riemann zeta function are important - this really involves a lot of analytic number theory and the Riemann hypothesis. In fact, the quality of the asymptotic formula depends on RH. Since the latter is one of the most important conjectures in number theory, the moments and other properties of the zeta function are important as well. The exact connections are a bit technical. The introduction of the paper Moments of the Riemann zeta function by Soundararajan gives a good survey of this. There is also a connection to random matrix theory, i.e., to the link above. This makes it important, too.


algebra precalculus - Why is it important to define that a logarithm and exponential function is one-to-one?


I'm currently studying the properties of logarithm in an open source pre-calculus textbook that can be found here (Page 438). Before the text goes on to the Algebraic properties of exponential and logarithmic functions it defines the "one-to-one" properties of exponential function and logarithmic functions (Theorem 6.4). As you can see:




[screenshot of Theorem 6.4, the one-to-one properties, omitted]



And on the next page it describes the algebraic properties of logarithms...



[screenshot of the algebraic properties of logarithms omitted]



Is there any particular reason for stating the one-to-one property of the logarithmic and exponential functions beforehand? I know the logarithmic function is the inverse of an exponential function, and as such it can only exist for a one-to-one function, but is there any other reason why this was mentioned? I have the feeling that this distinction is extremely important with regard to the algebraic properties of logarithms that come after (i.e. product rule, quotient rule, power rule), but I cannot see why.


I'm sorry if this was long-winded, and thank you for taking the time to read this. For whoever is so kind as to answer, could you bear in mind that I'm studying at a pre-calculus level. Thanks.


Answer



No, there's no connection between the injectivity (what you call "one-to-one") property and the various algebraic properties. The theorems just happen to be stated one after another in this particular book.



Knowing that the exponential and logarithmic functions are injective is just a useful fact. Really, knowing that any function $f$ is injective is useful if you plan on using $f$ a lot, because it allows you to cancel $f$: if in your calculations you end up with


$$f(x)=f(y)$$


You can immediately conclude that:


$$x=y$$


This is only valid if $f$ is injective, of course. For instance, $x^2$ is not injective, which is why $x^2=y^2$ does not imply $x=y$.


Monday, June 17, 2019

linear algebra - Can we prove $BA=E$ from $AB=E$?


I was wondering if $AB=E$ ($E$ is the identity) is enough to claim $A^{-1} = B$, or if we also need $BA=E$. All my textbooks define the inverse $B$ of $A$ such that $AB=BA=E$. But I can't see why $AB=E$ isn't enough. I can't come up with an example for which $AB = E$ holds but $BA\ne E$. I tried some things but could only prove that $BA = (BA)^2$.


Edit: For $A,B \in \mathbb{R}^{n \times n}$ and $n \in \mathbb{N}$.


Answer



If $AB = E$, then (the linear map associated with) $A$ has a right inverse, so it's surjective; as the dimension is finite, surjectivity and injectivity are equivalent, so $A$ is bijective and has an inverse $A^{-1}$. Multiplying $AB=E$ on the left by $A^{-1}$ gives $B=A^{-1}$, so $BA=E$ as well.
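
As a quick numerical illustration (a sketch, not a proof; the matrix size and seed are arbitrary choices), one can check that a right inverse of a random square matrix is also a left inverse:

    # Sanity check, not a proof: for an (almost surely invertible) random
    # square matrix A, a right inverse B also satisfies BA = E.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = np.linalg.inv(A)          # satisfies A @ B = E up to rounding

    E = np.eye(4)
    print(np.allclose(A @ B, E))  # True
    print(np.allclose(B @ A, E))  # True: the right inverse is also a left inverse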


I'm having trouble getting the number of terms for the sum of this geometric progression.


This question has been making me mad all day! It's in an advanced maths textbook and my teacher asked us to do it for homework.
Here's the question:

How many terms of the sequence 4, 3, 2.25, ... can you add before the sum exceeds 12?

Here's my working out (image omitted). The answer I got is $n=-2$, and it's incorrect. I checked the answer for this question at the back of the textbook and it was $n=4$. I tried and tried but still got $n=-2$. Please help!



Answer



Hint. From the line $$ 1-\left(\frac34 \right)^n>\frac34 $$ you get $$ \left(\frac34 \right)^n<\frac14 $$ giving $$ n>\frac{\log(1/4)}{\log(3/4)}=4.8\ldots $$ that is $$n=5.$$
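
To make the hint concrete, here is a small numerical check (a sketch; it assumes first term $a=4$ and ratio $r=3/4$, as in the question):

    # Partial sums of 4 + 3 + 2.25 + ...: the sum first exceeds 12 at n = 5,
    # so 4 terms can be added before the sum exceeds 12 (the book's n = 4).
    a, r, total = 4.0, 0.75, 0.0
    for n in range(1, 7):
        total += a * r ** (n - 1)
        print(n, total)   # n=4 -> 10.9375, n=5 -> 12.203125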


calculus - Prove that for every natural $n$, $5^n - 2^n$ is divisible by 3


How do I prove, using induction, that for every natural $n$, $$5^n - 2^n$$ is divisible by 3?


Answer



  1. Setting $n=1$: $5^1-2^1=3$ is divisible by $3$. Thus, the number $5^n-2^n$ is divisible by $3$ for $n=1$.

  2. Assume that for $n=k$ the number $5^n-2^n$ is divisible by $3$; then $$\color{blue}{5^k-2^k=3m}$$ where $m$ is some integer.

  3. Setting $n=k+1$: $$5^{k+1}-2^{k+1}=5\cdot 5^k-2\cdot 2^k$$ $$=5\cdot 5^k-5\cdot 2^k+3\cdot 2^k$$ $$=5(\color{blue}{5^k-2^k})+3\cdot 2^k$$ $$=5(\color{blue}{3m})+3\cdot 2^k$$ $$=3(5m+2^k).$$ Since $(5m+2^k)$ is an integer, the number $3(5m+2^k)$ is divisible by $3$.




Hence, $5^n-2^n$ is divisible by $3$ for all integers $n\ge 1$
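
For anyone who wants to see the claim in action before proving it, a quick empirical check (a sketch in Python; of course it verifies only finitely many cases and is not a proof):

    # Check 5^n - 2^n ≡ 0 (mod 3) for the first few n.
    for n in range(1, 11):
        assert (5**n - 2**n) % 3 == 0
    print("5^n - 2^n is divisible by 3 for n = 1..10")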


probability - finding expected value using CDF



If it is known that the random variable X has a CDF:



$$F_X(t)=\begin{cases} 0 && t\lt -1 \\ \frac{1}{2} && -1\le t \lt 0 \\ \frac{3}{4}+t^2 && 0\le t \lt \frac{1}{2} \\ 1 && \frac{1}{2} \le t \end{cases}$$



I wish to find $E[(X-a)^2]$ for all $a\in \mathbb R$




I found that $E[(X-a)^2]=E[X^2]-2aE[X]+a^2$



my teacher solved it this way (solution image not reproduced here):



what I'm trying to understand is the reason for the integrals used to find $E[X],E[X^2]$. Why does this work?


Answer



The first equality in the computation of $\mathbb E\left[X^2\right]$ follows from the fact that, for a non-negative random variable $Y$ with CDF $G$, we have that




$$ \mathbb E [Y]= \int_0^\infty y \,\mathrm d G(y) = \int_0^\infty (1-G(t))\,\mathrm d t. $$



You can find a proof of this here.



To see the first equality for $\mathbb E[X]$, we use the above result, along with the fact that for a non-positive random variable $Y$ with CDF $G$,



$$ \mathbb E[Y] = \int_{-\infty}^0 y \,\mathrm d G(y) = -\int_{-\infty}^0 G(t) \, \mathrm d t. $$



You can prove this in a way similar to the analogous result for non-negative random variables.




Putting the two facts together allows us to write



\begin{align*}
\mathbb E [X] &= \int_{-\infty}^\infty x \,\mathrm d F_X (x) \\
&= \int_0^\infty x\,\mathrm d F_X (x) +\int_{-\infty}^0 x \mathrm d F_X (x) \\
&= \int_0^\infty (1-F_X(t)) \,\mathrm d t - \int_{-\infty}^0 F_X(t)\,\mathrm d t.
\end{align*}
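
A numerical cross-check of the two computations may help (a sketch using scipy; it assumes the CDF above, i.e. point masses $1/2$ at $-1$ and $1/4$ at $0$, plus density $2t$ on $(0,\tfrac12)$):

    # Compare E[X] from the tail formula with a direct computation.
    from scipy.integrate import quad

    def F(t):
        # the CDF given in the question
        if t < -1:  return 0.0
        if t < 0:   return 0.5
        if t < 0.5: return 0.75 + t * t
        return 1.0

    # E[X] = int_0^inf (1 - F) dt - int_{-inf}^0 F dt; F is 0 below -1
    # and 1 above 1/2, so finite limits suffice.
    tail = quad(lambda t: 1 - F(t), 0, 0.5)[0] - quad(F, -1, 0)[0]

    # direct: jump of 1/2 at -1, jump of 1/4 at 0, density 2t on (0, 1/2)
    direct = -1 * 0.5 + 0 * 0.25 + quad(lambda t: t * 2 * t, 0, 0.5)[0]

    print(tail, direct)   # both ≈ -5/12 = -0.41666...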


elementary number theory - Diophantine equation

There were 63 equal piles of plantain fruit put together and 7 single fruits. They were divided evenly among 23 travelers. What is the number in each pile? Consider the Diophantine equation $63x+7=23y$.



So I found one solution: -7= 6(-28)+23(7)
so then I plugged it into the formula to find all solutions and got
x=-28-23t and y=-28-6t



so then I got $t<-1.2$ and $t<-4.7$, but this range is huge... where did I go wrong?
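
Since no answer is recorded here, a brute-force sanity check at least shows what the solutions look like (a sketch; it simply scans small $x$ in $63x+7=23y$):

    # Scan for nonnegative integer solutions of 63x + 7 = 23y.
    solutions = [(x, (63 * x + 7) // 23) for x in range(80)
                 if (63 * x + 7) % 23 == 0]
    print(solutions)   # [(5, 14), (28, 77), (51, 140)]

So the smallest positive solution is $x=5$, $y=14$ (indeed $63\cdot 5+7=322=23\cdot 14$), and the general solution is $x=5+23t$, $y=14+63t$; comparing against this should reveal where the sign or formula slipped.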

discrete mathematics - Proving $(0,1)$ and $[0,1]$ have the same cardinality

Prove $(0,1)$ and $[0,1]$ have the same cardinality.



I've seen questions similar to this but I'm still having trouble. I know that for $2$ sets to have the same cardinality there must exist a bijection from one set to the other. I think I can create a bijection from $(0,1)$ to $[0,1]$, but I'm not sure how to do the opposite. I'm having trouble creating a function from $[0,1]$ to $(0,1)$. The best I can think of would be something like $\frac{x}{2}$.




Help would be great.

Sunday, June 16, 2019

elementary number theory - Does $3$ divide $2^{2n}-1$?

Prove or find a counter example of : $3$ divide $2^{2n}-1$ for all $n\in\mathbb N$.



I computed $2^{2n}-1$ up to $n=5$ and it seems to work, but how can I prove it? I tried contradiction, but couldn't conclude. Any ideas?
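
A quick empirical check before hunting for the proof (a sketch; the comment points at the standard route, $2^{2n}=4^n\equiv 1^n\pmod 3$, which is a well-known congruence fact):

    # Check that 3 divides 2^(2n) - 1 for the first several n (not a proof;
    # the underlying reason is 4 ≡ 1 (mod 3), so 4^n ≡ 1 (mod 3)).
    print(all((2 ** (2 * n) - 1) % 3 == 0 for n in range(50)))  # True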

Saturday, June 15, 2019

Proving mathematical induction with arbitrary base using (weak) induction



I attempted a proof of mathematical induction using an arbitrary base case, but was unsuccessful (and hence this question). Below is what I was trying to do and along with my thinking; if anyone can point me in the right direction I'd appreciate it.


The induction I am using: Let $P$ be a property about the natural numbers $\mathbb{N}$ with $0\in\mathbb{N}$, and let $P(n)$ denote the statement that the property $P$ holds for $n\in\mathbb{N}$. Suppose $P(0)$. Furthermore suppose that for each natural number $k$, $P(k)$ implies $P(k+1)$. Then $\forall nP(n)$.


What I am trying to prove: Let $P$ be a property about the natural numbers $\mathbb{N}$ with $0\in\mathbb{N}$, and let $P(n)$ denote the statement that the property $P$ holds for $n\in\mathbb{N}$. Suppose for $n_0\in\mathbb{N}$, $P(n_0)$. Furthermore suppose that for each natural number $k\geq n_0$, $P(k)$ implies $P(k+1)$. Then $\forall n\geq n_0 P(n)$.


My attempted proof: define $Q(n)$ to be $n\geq n_0 \to P(n)$. Then we wish to prove $\forall nQ(n)$. We induct on $n$.


Base case (for ordinary induction): $Q(0)$ is $0\geq n_0\to P(0)$. Since $n_0\in\mathbb{N}$, $0\geq n_0$ implies that $n_0=0$. Since $P(n_0)$, $P(0)$, which proves the base case.


Inductive step: we want to show $\forall n(Q(n)\to Q(n+1))$. To do this, we assume $k\geq n_0 \to P(k)$ and try to show $k+1\geq n_0 \to P(k+1)$.


Since $k\geq n_0 \to P(k)$, we first prove the case where $k\geq n_0$ and $P(k)$. By the hypothesis of the proof, we see that $P(k+1)$, which proves the case for $k+1$.


This is where I am having trouble: for $k < n_0$, the hypothesis $k\geq n_0 \to P(k)$ is vacuously true, and I don't see how to establish $k+1\geq n_0 \to P(k+1)$ from it (in particular when $k+1 = n_0$).

So my questions would be: (1) is the overall approach for the proof correct? (2) If so, how might I go on to prove the case when $k < n_0$?

Thanks in advance. (This is not homework, by the way.)



Answer



No, you’re off on the wrong track even with the base case. If $n_0>0$, $Q(0)$ is true not because $n_0=0$, but because $0\not\ge n_0$, and the implication $0\ge n_0\to P(0)$ is vacuously true. A much better idea is to let $Q(n)$ be the statement $P(n+n_0)$ for each $n\in\Bbb N$ and prove $\forall n Q(n)$. I’ll leave it at that for now to give you a chance to finish it off on your own.


Added: A version of your argument can be made to work, but it's easier if you replace your $Q(n)$ by the logically equivalent $Q'(n)$: $n < n_0$ or $P(n)$.


  • Base Case: $Q'(0)$ says that $0 < n_0$ or $P(0)$. If $n_0>0$, this is certainly true. If $n_0=0$, then $P(0)$ is $P(n_0)$, which is true by hypothesis, so in this case $Q'(0)$ is again true. There are no other possibilities.




  • Induction Step: Assume $Q'(k)$ for some $k\ge 0$. We want to show $Q'(k+1)$, i.e., that $k+1 < n_0$ or $P(k+1)$. If $k+1 < n_0$, this is immediate. If $k+1=n_0$, then $P(k+1)$ is $P(n_0)$, which is true by hypothesis, so $Q'(k+1)$ is true in this case. (Note that up to here the argument is very similar to that of the base case.) Otherwise, $k+1>n_0$, and therefore $k\ge n_0$. Now we finally use the induction hypothesis $Q'(k)$, which says that $k < n_0$ or $P(k)$; since $k\ge n_0$, it must be $P(k)$ that holds, and the hypothesis of the theorem then gives $P(k+1)$, so $Q'(k+1)$ is true in this case as well.


We can now conclude that $\forall n\, Q'(n)$, i.e., $\forall n \Big(n < n_0 \text{ or } P(n)\Big)$, which is exactly the desired statement $\forall n \ge n_0\, P(n)$.

calculus - Evaluating $\lim_{b\to\infty} \int_0^b \frac{\sin x}{x}\, dx= \frac{\pi}{2}$





Using the identity $$\lim_{a\to\infty} \int_0^a e^{-xt}\, dt = \frac{1}{x}, x\gt 0,$$ can I get a hint to show that $$\lim_{b\to\infty} \int_0^b \frac{\sin x}{x} \,dx= \frac{\pi}{2}.$$


Answer



Hint: $$\begin{align} \lim_{b\to \infty}\int_{0}^{b}\frac{\sin x}{x}dx &= \lim_{a,b\to \infty}\int_{0}^{b}\int_{0}^{a}e^{-xt}dt\sin x dx\\& = \lim_{a,b\to \infty}\int_{0}^{b}dt\int_{0}^{a}e^{-xt}\frac{e^{ix}-e^{-ix}}{2i} dx \\&=\lim_{a,b\to \infty}\int_{0}^{b}dt\int_{0}^{a}\frac{e^{-(t-i)x}-e^{-(i+t)x}}{2i} dx\end{align}$$.


real analysis - Using the definition of a limit, prove that $\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$



Using the definition of a limit, prove that $$\lim_{n \rightarrow \infty} \frac{n^2+3n}{n^3-3} = 0$$



I know how i should start: I want to prove that given $\epsilon > 0$, there $\exists N \in \mathbb{N}$ such that $\forall n \ge N$


$$\left |\frac{n^2+3n}{n^3-3} - 0 \right | < \epsilon$$


but from here how do I proceed? I feel like I have to get rid of the $3n$ and $-3$, but clearly $$\left |\frac{n^2+3n}{n^3-3} \right | <\frac{n^2}{n^3-3}$$ is not true.


Answer



This is not so much of an answer as a general technique.


What we do in this case, is to divide top and bottom by $n^3$: $$ \dfrac{\frac{1}{n} + \frac{3}{n^2}}{1-\frac{3}{n^3}} $$ Suppose we want this to be less than a given $\epsilon>0$. We know that $\frac{1}{n}$ can be made as small as we like. First, we split this into two parts: $$ \dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}} $$



The first thing we know is that for large enough $n$, say $n>N$, $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n}$. We will use this fact.


Let $\delta >0$ be so small that $\frac{\delta}{1-\delta} < \frac{\epsilon}{2}$. Now, let $n$ be so large that $\frac{1}{n} < \delta$, and $n>N$.


Now, note that $\frac{3}{n^3} < \frac{3}{n^2} < \frac{1}{n} < \delta$. Furthermore, $1- \frac{3}{n^3} > 1 - \frac{3}{n^2} > 1-\delta$.


Thus, $$ \dfrac{\frac{1}{n}}{1-\frac{3}{n^3}} + \dfrac{\frac{3}{n^2}}{1-\frac{3}{n^3}} < \frac{\delta}{1-\delta} + \frac{\delta}{1-\delta} < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon $$ for large enough $n$. Hence, the limit is zero.


I could have had a shorter answer, but you see that using this technique we have reduced powers of $n$ to this one $\delta$ term, and just bounded that $\delta$ term by itself, bounding all powers of $n$ at once.


indeterminate forms - What is undefined times zero?

Einstein's energy equation (after substituting the equation of relativistic momentum) takes this form: $$E = \frac{1}{{\sqrt {1 - {v^2}/{c^2}} }}{m_0}{c^2}$$ Now if you apply this form to a photon (I know this is controversial, in fact I would not do it, but I just want to understand the consequences), you get the following: $$E = \frac{1}{0}\cdot 0\cdot{c^2}$$ On another note, I understand that after dividing by zero:


  • If the numerator is any number other than zero, you get an "undefined" = no solution, because you are breaching mathematical rules.

  • If the numerator is zero, you get an "indeterminate" number = any value.

Here it seems we would have an "indeterminate" [if (1/0) times 0 equals 0/0], although I would prefer to have an "undefined" (because I think that applying this form to a photon breaches physical/logical rules, so I would like the outcome to breach mathematical rules as well...) and to support this I have read that if a subexpression is undefined (which would be the case here with gamma = 1/0), the whole expression becomes undefined (is this right and if so does it apply here?).


So what is the answer in strict mathematical terms: undefined or indeterminate?

limits - How may I show $x!$ grows faster than $(x+1)^{n-1}$

I am trying to show that the factorial function grows faster than its respective power function.
I started by defining $f(x) = \frac {x!} {x^{n}}$ and then looked at $\frac {f(x+1)} {f(x)}$; I got $\frac {x^n} {(x+1)^{(n-1)}}$, took the limit as $n$ goes to infinity, and got $0$.



If I recall, that's inconclusive, yes?




Note: As I'm typing this, I feel that maybe I should have taken the limit as x goes to infinity, since I am trying to show the argument of $x$ grows.

linear algebra - Rational numbers as vectors in infinite dimensional space with the basis $(\log 2, \log 3, \log 5, \log 7, \dots, \log p, \dots)$



Since every natural number can be represented as $a=2^{n_1}3^{n_2}5^{n_3}7^{n_4}\cdots p_k^{n_k}\cdots$ it makes sense to represent natural numbers by vectors, using the properties of logarithms:



$$\log a=n_1 \log 2+n_2 \log3+n_3 \log5+\cdots$$




This space appears to be similar to the usual Euclidean space if we extend it to an infinite number of dimensions.



If we allow negative coordinates, we can also put all rational numbers in this space. For example, here is part of the plane $(\log 2, \log 3)$:



(image of the lattice in the $(\log 2, \log 3)$ plane omitted)




Does this space have any application in number theory? If it is studied, then how is it usually defined? Are the usual Cartesian vector dot product and the usual Euclidean norm used? Or does it make sense to use a different norm (for example, taxicab norm)?




Answer



There is a generalization of vector spaces called "modules" which allows any ring to serve as the scalars. When you use the integers as the ring of scalars, a "module" is the same thing as an "abelian group".



The group of 'factorizations' is indeed a free abelian group, which is the kind of abelian group that behaves most similarly to a vector space.



Factorizations are indeed important in number theory. More generally, rather than the rationals you might consider number fields or even global fields. You would then consider things like prime ideals or places instead of prime numbers.



Formally taking logarithms like you are is somewhat superfluous — what you're doing is mainly just changing the notation of the group operation to $+$ so that it's easier to think about it in terms of linear algebra.



It can indeed be useful to extend to real coefficients rather than merely integer coefficients. e.g. after restricting to a finite set of primes, number theorists like to view the group of factorizations as a lattice contained in the corresponding vector space $\mathbb{R}^n$ and use geometric methods to study things.




The most natural norm to take here is a weighted $L^1$ norm



$$ \left\| \sum_{n=0}^{\infty} a_n \log p_n \right\| = \sum_{n=0}^{\infty} |a_n| \ln p_n $$



This way, the norm of the factorization of an integer is precisely the natural logarithm of the magnitude of that integer. More generally, if $a$ and $b$ are relatively prime nonzero integers, then the norm of the factorization of $a/b$ is $\ln|ab|$, since the primes of the numerator and of the denominator both contribute positively to the sum.
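
A small numerical illustration of that last claim (a sketch; sympy's factorint and the sample fraction $45/56$ are just convenient choices):

    # Weighted L^1 norm of the factorization of a/b for coprime a, b:
    # numerator and denominator primes both contribute |exponent| * ln p,
    # so the norm equals ln(a*b).
    from math import log, isclose
    from sympy import factorint

    a, b = 45, 56                      # coprime; 45/56 = 3^2 * 5 * 2^-3 * 7^-1
    exps = dict(factorint(a))
    for p, e in factorint(b).items():
        exps[p] = exps.get(p, 0) - e   # denominator primes: negative exponents

    norm = sum(abs(e) * log(p) for p, e in exps.items())
    print(isclose(norm, log(a * b)))   # True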


Friday, June 14, 2019

limits - Prove $[\sin x]' = \cos x$ without using $\lim\limits_{x\to 0}\frac{\sin x}{x} = 1$

I came across this question: How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$?


From the comments, Joren said:




L'Hopital's Rule is easiest: $\displaystyle\lim_{x\to 0}\sin x = 0$ and $\displaystyle\lim_{x\to 0} x = 0$, so $\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = \lim_{x\to 0}\frac{\cos x}{1} = 1$.



To which Ilya readily answered:



I'm extremely curious how will you prove then that $[\sin x]' = \cos x$



My question: is there a way of proving that $[\sin x]' = \cos x$ without using the limit $\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = 1$? Also, without using any other result $E$ whose proof uses that limit or $[\sin x]' = \cos x$.



All I want is to be able to use L'Hopital in $\displaystyle\lim_{x\to 0}\frac{\sin x}{x}$. And for this, $[\sin x]'$ has to be evaluated first.



Alright... the definition that some requested.



Definition of sine and cosine: take the unit circle centered at the origin of Cartesian coordinates and a point $(x, y)$ on it. The point relates to the angle as $(x, y) = (\cos\theta, \sin\theta)$, so that if $\theta = 0$ then the point is $(1, 0)$.


Basically, it's a geometric definition. Feel free to use trigonometric identities as you want; they are all provable from geometry.

Find the remainder of a number with a large exponent

I have to find the remainder of $10^{115}$ divided by 7.



I was following the way my book did it in an example but then I got confused. So far I have $$\overline{10}^{115}=\overline{10}^{7\cdot 73+4}=\left(\overline{10}^{7}\right)^{73}\cdot\overline{10}^{4},$$ and that's where I'm stuck.




Also, I don't fully understand what it means to have a bar over a number.
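
(The bar presumably denotes the residue class of the number modulo $7$, as is standard in such textbooks.) Whatever the book's intermediate steps, the final answer can be sanity-checked in one line (a sketch; Python's built-in three-argument pow does modular exponentiation):

    # pow(base, exp, mod) computes base**exp % mod efficiently.
    print(pow(10, 115, 7))   # 3

This is consistent with Fermat's little theorem: $10^6\equiv 1\pmod 7$ and $115 = 6\cdot 19 + 1$, so $10^{115}\equiv 10\equiv 3\pmod 7$.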

Thursday, June 13, 2019

calculus - Evaluating the integral $\int_0^\infty \frac{\sin x}{x} \,\mathrm dx = \frac{\pi}{2}$?


A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral: $$\displaystyle\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$


Well, can anyone prove this without using residue theory? I actually thought of doing this: $$\int_0^\infty \frac{\sin x} x \, dx = \lim_{t \to \infty} \int_0^t \frac{1}{t} \left( t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots \right) \,\mathrm dt$$ but I don't see how $\pi$ comes up here, since we need the answer to equal $\dfrac{\pi}{2}$.



Answer



Here's another way of finishing off Derek's argument. He proves $$\int_0^{\pi/2}\frac{\sin(2n+1)x}{\sin x}dx=\frac\pi2.$$ Let $$I_n=\int_0^{\pi/2}\frac{\sin(2n+1)x}{x}dx= \int_0^{(2n+1)\pi/2}\frac{\sin x}{x}dx.$$ Let $$D_n=\frac\pi2-I_n=\int_0^{\pi/2}f(x)\sin(2n+1)x\ dx$$ where $$f(x)=\frac1{\sin x}-\frac1x.$$ We need the fact that if we define $f(0)=0$ then $f$ has a continuous derivative on the interval $[0,\pi/2]$. Integration by parts yields $$D_n=\frac1{2n+1}\int_0^{\pi/2}f'(x)\cos(2n+1)x\ dx=O(1/n).$$ Hence $I_n\to\pi/2$ and we conclude that $$\int_0^\infty\frac{\sin x}{x}dx=\lim_{n\to\infty}I_n=\frac\pi2.$$
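
A numerical illustration of the convergence $I_n\to\pi/2$ (a sketch using scipy; note that np.sinc(x/np.pi) equals $\sin x/x$, and the quadrature settings are pragmatic choices):

    # I_n = \int_0^{(2n+1)\pi/2} sin(x)/x dx approaches pi/2.
    import numpy as np
    from scipy.integrate import quad

    for n in (1, 10, 100):
        I_n = quad(lambda x: np.sinc(x / np.pi), 0,
                   (2 * n + 1) * np.pi / 2, limit=500)[0]
        print(n, I_n)      # -> 1.5707... = pi/2
    print(np.pi / 2)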


combinatorics - Evaluating $\lim\limits_{n\to \infty }\sum\limits_{k=0}^n\,(2n)^{-k}\binom{n}{k}$



$$\lim _{n\to \infty }\sum _{k=0}^n\:\frac{\binom{n}{k}}{\left(2n\right)^k}$$
I've got to the form:
$$\lim _{n\to \infty }\frac{2^n\left(2n-1\right)}{\left(2n\right)^{n+1}-1}=\lim _{n\to \infty }\frac{2^{n+1}n-2^n}{2^{n+1}n^{n+1}-1}$$



And it should be $e^{1/2}$ but I always get $0$. I know it's supposed to be easy but I don't get it.



Answer



First use the binomial theorem:



$$\sum_{k=0}^n\binom{n}k\left(\frac1{2n}\right)^k=\left(1+\frac1{2n}\right)^n\;.$$



Now



$$\lim_{n\to\infty}\left(1+\frac1{2n}\right)^n=\lim_{n\to\infty}\left(\left(1+\frac1{2n}\right)^{2n}\right)^{1/2}=\left(\lim_{n\to\infty}\left(1+\frac1{2n}\right)^{2n}\right)^{1/2}\;,$$



and you should know what the last limit there is.
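
A quick numerical confirmation (a sketch; math.comb needs Python 3.8+):

    # The sum equals (1 + 1/(2n))^n by the binomial theorem; both tend
    # to e^(1/2) = 1.6487...
    from math import comb, exp

    for n in (10, 100, 1000):
        s = sum(comb(n, k) / (2 * n) ** k for k in range(n + 1))
        print(n, s, (1 + 1 / (2 * n)) ** n)
    print(exp(0.5))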



Wednesday, June 12, 2019

calculus - Convergence of $\sum_{n=1}^{\infty} \frac{\sin(n)}{n}$



I am trying to argue that
$$
\sum_{n=1}^{\infty} \frac{\sin(n)}{n}
$$
is divergent. I figured it must be divergent because $\sin(n)$ is bounded and there is an $n$ on the bottom. But I have to use one of the tests in Stewart's Calculus book and I can't figure it out. I can't use the Comparison Tests or the Integral Test because they require positive terms. I can't take absolute values; that would only show that it is not absolutely convergent (and so it might still be convergent). The Divergence Test also doesn't work.



I see from this question:




Evaluate $ \sum_{n=1}^{\infty} \frac{\sin \ n}{ n } $ using the fourier series



that the series is actually convergent, but using some math that I don't know anything about. My questions are



(1) Is this series really convergent?



(2) Can this series be handled using the tests in Stewart's Calculus book?


Answer



One may apply the Dirichlet test, noticing that





  • $\displaystyle \frac1{n+1} \le \frac1{n}$


  • $\displaystyle \lim_{n \rightarrow \infty}\frac1{n} = 0$


  • $\displaystyle \left|\sum^{N}_{n=1}\sin n\right|=\left|\text{Im}\sum^{N}_{n=1}e^{in}\right| \leq \left|e^i\frac{1-e^{iN}}{1-e^i}\right| \leq \frac{2}{|1-e^i|}<\infty,\qquad N\ge1,$



giving the convergence of the given series.
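
A numerical look at the two ingredients (a sketch; the closed-form value $(\pi-1)/2$ of the series is a known Fourier-series fact, quoted only for comparison):

    # Partial sums of sin(n) stay bounded by 2/|1 - e^i|, while the
    # partial sums of sin(n)/n approach (pi - 1)/2 = 1.0707...
    import cmath, math

    bound = 2 / abs(1 - cmath.exp(1j))
    S, T = 0.0, 0.0
    for n in range(1, 100001):
        S += math.sin(n)
        T += math.sin(n) / n
    print(bound)             # ≈ 2.0858
    print(abs(S) <= bound)   # True
    print(T, (math.pi - 1) / 2)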


Tuesday, June 11, 2019

algebra precalculus - Can you explain this please: $T(n) = (n-1)+(n-2)+\dots+1= \frac{(n-1)n}{2}$








Can you explain this please
$$T(n) = (n-1)+(n-2)+…1= \frac{(n-1)n}{2}$$



I am really bad at maths but need to understand this for software engineering.
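
One classic way to see it: pair the first and last terms, $(n-1)+1$, $(n-2)+2$, and so on; each pair sums to $n$, and the $n-1$ terms make up $(n-1)/2$ such pairs, giving $\frac{(n-1)n}{2}$. A quick check in code (just a sketch):

    # Verify T(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2 for small n.
    for n in range(2, 10):
        assert sum(range(1, n)) == n * (n - 1) // 2
    print("formula checked for n = 2..9")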

topological groups - Is there a "nice" discontinuous, bijective homomorphism $f: (\mathbb{R},+) \to (\mathbb{R},+)$?



Consider $(\mathbb{R},+)$ as a topological group. Using the axiom of choice, we can construct a $\mathbb{Q}$-basis for $\mathbb{R}$ and using this basis, we can define a discontinuous, bijective homomorphism from $(\mathbb{R},+)$ to itself.


Is it possible to find such a homomorphism without using the axiom of choice?


Answer



The answer is no, we need some choice to construct such a homomorphism, because it is consistent with ZF (without choice) that every function $\phi:\Bbb R\rightarrow\Bbb R$ satisfying $\phi(x+y)=\phi(x)+\phi(y)$ is continuous. You can find far more details in this MO answer.


Here you can see a handful of collected facts about the nontrivial solutions, provided any exist.


real analysis - Why do we consider that $p$ & $q$ are co-primes when proving square root of a prime number is irrational?




When we prove that square root of any prime number is irrational, we assume that there exists some rational number $r=\frac{p}{q};\space p,q\in\mathbb{Z}, q\neq0$ s.t. $\frac{p^2}{q^2}=p_1$ where $p_1$ is prime and then prove by contradiction.



Why do we consider that $p$ & $q$ are co-primes?


Answer



The simple yet elegant answer is given on page 2 of Abbott's Understanding Analysis (the cited image of the passage is not reproduced here).


calculus - How does $a^2 + b^2 = c^2$ work with ‘steps’?





We all know that $a^2+b^2=c^2$ in a right-angled triangle, and therefore that $c < a+b$, so that walking along the red line would be shorter than using the two black lines to get from top left to bottom right in the following graphic:





Now, let's assume that the direct way using the red line is blocked, but instead, we can use the green way in the following picture:






Obviously, the green way isn't any shorter than the black one; it's just $a/2+b/2+a/2+b/2 = a+b$. Now, we can divide the green path again, just like the black path, and get the purple path. Dividing that one in two halves again, we get the yellow path:





Now obviously, the yellow path is still as long as the black path from the beginning; it's just $8\cdot\frac{a}{8}+8\cdot\frac{b}{8}=a+b$. But if we do this segmentation again and again, we approximate the red line - without making the way any shorter. Why is this so?


Answer



Essentially, it is because the distance of the stepped curve from the line does not get small compared to the length of the steps.

An example where the limit is properly found is dividing a circle into $n$ equal parts and computing the sum of the line segments connecting the endpoints of the arcs. This does converge to the length of the circle, because the height of each arc gets arbitrarily small compared to the length of each arc as $n$ gets large.


calculus - Dirichlet integral.




I want to prove $\displaystyle\int_0^{\infty} \frac{\sin x}x \,\mathrm{d}x = \frac \pi 2$, and $\displaystyle\int_0^{\infty} \frac{|\sin x|}x \,\mathrm{d}x \to \infty$.



I found a proof on Wikipedia, but I can't understand it: I haven't learned differential equations, the Laplace transform, or even the inverse trigonometric functions.

So please explain it as simply as possible.


Answer



About the second integral: set $x_n = 2\pi n + \pi / 2$. Since $\sin(x_n) = 1$ and $\sin$ is continuous in the vicinity of $x_n$, there exist $\epsilon \in (0,1)$ and $\delta \in (0, \pi/2]$ such that $\sin(x) \ge 1 - \epsilon$ for $|x-x_n| \le \delta$. On each such interval $x \le x_n + \delta \le 2\pi(n+1)$, so:

$$\int_0^{+\infty} \frac{|\sin x|}{x} dx \ge \sum_{n = 0}^{+\infty} \frac{2\delta(1 - \epsilon)}{x_n+\delta} \ge \frac{2\delta(1-\epsilon)}{2\pi}\sum_{n=0}^{+\infty} \frac{1}{n + 1} \rightarrow \infty $$


calculus - Is there a faster or more elegant way to evaluate $\int_0^{+\infty} \frac{\cos(\pi x)\, \text{d}x}{e^{2\pi \sqrt{x}}-1}$?


$$\int_0^{+\infty} \frac{\cos(\pi x)\ \text{d}x}{e^{2\pi \sqrt{x}}-1}$$




First attempt





  • $x\to t^2$


  • Geometric series by writing the denominator as $e^{2\pi t}(1 - e^{-2\pi t})$


  • $\cos(\pi t^2) = \Re e^{i\pi t^2}$




This leads me to



$$2\sum_{k = 0}^{+\infty} \int_0^{+\infty} t e^{i\pi t^2}e^{-\alpha t}\ \text{d}t$$



Where $\alpha = 2\pi (k+1)$.




Now I thought about writing it again as



$$-2\sum_{k = 0}^{+\infty}\frac{d}{d\alpha} \int_0^{+\infty} e^{i\pi t^2}e^{-\alpha t}\ \text{d}t$$



The last integral can be evaluated with the use of the Imaginary Error Function, hence a Special Function method.



Yet it doesn't seem to me the best way.



Second Attempt




Basically like the previous one with the difference that




  • $\cos( \cdot )$ stays as it is;


  • $\pi t^2 \to z$;




And this brings




$$-\frac{1}{\sqrt{\pi}}\frac{d}{d\alpha} \sum_{k = 0}^{+\infty}\int_0^{+\infty} \frac{\cos(z)}{z} e^{-\alpha \sqrt{\frac{z}{\pi}}}\ \text{d}z$$



But in both cases what I end up with are just numerical methods. Or at least I could give the stationary phase a try, but... meh.



I don't know if I can use residues for this, actually. Even if taking a look at the initial integral, there is this additional way:



$$\frac{1}{e^{2\pi t} -1} = \frac{1}{(e^{\pi t}+1)(e^{\pi t}-1)}$$



Which for example has a pole at $t = +i$...




But using residues I would obtain



$$\pi \cos(\pi)$$



whereas the correct numerical result (which I checked with Mathematica) is



$$\color{blue}{0.0732233(...)}$$



And it seems there is not a closed form for this.




Any hint/help?
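
For whoever wants to reproduce the quoted value, here is a numerical sketch with mpmath (substituting $x=t^2$ first removes the $1/\sqrt{x}$ singularity at the origin; the 20-digit precision is an arbitrary choice):

    # \int_0^inf cos(pi*x)/(e^{2 pi sqrt(x)} - 1) dx, with x = t^2.
    from mpmath import mp, quad, cos, exp, pi, inf

    mp.dps = 20
    f = lambda t: 2 * t * cos(pi * t ** 2) / (exp(2 * pi * t) - 1)
    print(quad(f, [0, inf]))   # 0.0732233...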

Monday, June 10, 2019

calculus - Read Binary and Write Ternary



Working with the Cantor set I came up with the function $f : [0,1] \longrightarrow \Bbb{R}$ defined as follows: let $0.a_1a_2a_3\cdots$ be the binary expansion of $x \in [0,1]$ and define $f(x) := 0.a_1a_2a_3\cdots$ (read in base 3), i.e., $f(x) = \sum_{n=1}^\infty \frac{a_n}{3^n}$. Note that for $f$ to be well-defined we never consider expansions ending with infinitely many $1$'s. Some interesting questions about $f$ would be as follows:




  1. Determination of the set of points which $f$ is differentiable at them.


  2. Evaluation of $\int_0^1 f(x)dx$. (Since $f$ is monotone, it is integrable)



Answer




$2.$ Let,



$$\underline{\int}_0^1 f(x)dx,\quad\quad\bar{\int}_0^1 f(x)dx,$$



be the lower and upper Riemann integrals of $f$ over $[0,1]$, respectively (for formal definitions see, for example, the beginning of the chapter on the Riemann integral in Principles of Mathematical Analysis by Rudin -- I think it was chapter $6$). Pick any natural number $n$. Consider partitioning $[0,1]$ into intervals of length $1/2^n$, ($[0,1/2^n]$, $[1/2^n,1/2^{n-1}]$, etc.). Because $f$ is increasing on $[0,1]$,



$$L_n:=\sum_{i=1}^{2^n}f\left(\frac{i-1}{2^n}\right)\frac{1}{2^n}\leq \underline{\int}_0^1 f(x)dx \leq \bar{\int}_0^1 f(x)dx \leq \sum_{i=1}^{2^n}f\left(\frac{i}{2^n}\right)\frac{1}{2^n}=:U_n.$$



Letting $n$ tend to infinity we have that




$$L_\infty:=\lim_{n\to\infty}\sum_{i=1}^{2^n}f\left(\frac{i-1}{2^n}\right)\frac{1}{2^n}\leq \underline{\int}_0^1 f(x)dx \leq \bar{\int}_0^1 f(x)dx \leq \lim_{n\to\infty}\sum_{i=1}^{2^n}f\left(\frac{i}{2^n}\right)\frac{1}{2^n}=:U_\infty.$$



Let's try to compute $L_\infty$. The trick to doing so is to note that for any $1\leq i\leq 2^n$, $f\left(\frac{i-1}{2^n}\right)$ is a finite sum of terms of the form $1/3^k$ where $1\leq k \leq n$. Thus, we can re-write $L_n$ as



$$L_n=\frac{1}{2^n}\sum_{i=1}^{n}\frac{1}{3^i}N_n\left(\frac{1}{3^i}\right).$$



where $N_n(1/3^i)$ denotes the total number of times the term $\frac{1}{3^i}$ appears in the sum $L_n$. Each number $(i-1)/2^n$ contributes a $\frac{1}{3^k}$ to the sum if and only if it has a $1$ in the $k^{th}$ place of its binary expansion. Thus, $N_n(1/3^k)$ is simply the number of values $(i-1)/2^n$, with $1\leq i\leq 2^n$, that have a $1$ in the $k^{th}$ place of their binary expansion. Since the numbers of said form are exactly those whose binary expansions terminate after $n$ places, $N_n(1/3^k)$ is simply the number of sequences of zeros and ones of length $n-1$ (which is $2^{n-1}$). Thus,



$$\sum_{i=1}^{n}\frac{1}{3^i}N_n\left(\frac{1}{3^i}\right)=\sum_{i=1}^{n}\frac{1}{3^i}2^{n-1}=2^{n-1}\sum_{i=1}^{n}\frac{1}{3^i} =2^{n-2}\left(1-\frac{1}{3^n}\right).$$




So,



$$L_\infty=\lim_{n\to\infty}\frac{1}{4}\left(1-\frac{1}{3^n}\right)=\frac{1}{4}.$$



Since, for any $n$, $U_n=L_n-f(0)/2^n+f(1)/2^n=L_n+1/2^n$, we have that $L_\infty=U_\infty$. Thus, $f$ is Riemann integrable with integral of $1/4$.
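
A numerical sanity check of the value $1/4$ (a sketch; the helper below extracts binary digits with floating-point arithmetic, so the depth and grid size are pragmatic choices, not part of the argument):

    # Approximate f on a dyadic grid and take the left Riemann sum L_n.
    def f(x, depth=60):
        total, y = 0.0, x
        for k in range(1, depth + 1):
            y *= 2
            bit = int(y)           # k-th binary digit of x
            y -= bit
            total += bit / 3.0 ** k
        return total

    n = 2 ** 12                    # power of two, so i/n is exactly dyadic
    L = sum(f(i / n) for i in range(n)) / n
    print(L)                       # ≈ 0.25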






In case it helps with $1$, here's a set on which $f$ is not even continuous (and hence not differentiable). Let $A$ denote the set of all numbers with finite binary representations (the dyadic fractions contained in $[0,1]$). That is,




$$A:=\{0.a_1a_2a_3\dots\in[0,1]:\ a_m=1\text{ and }a_n=0\text{ for all }n>m,\ \text{for some }m\}.$$



Let $a=0.a_1a_2a_3\dots\in A$ and let $m$ denote the last member of the expansion of $a$ that is a one. To show that $f$ is discontinuous at $a$ it is enough to show that for any $N>m$ we can find a number $b=0.b_1b_2b_3\dots$ such that



$$|a-b|\leq\frac{1}{2^N},\quad f(a)-f(b)\geq\frac{1}{2\cdot3^{m}}.$$



To do this, set $b_n=a_n$ for all $n<m$, $b_m=0$, $b_n=1$ for $m<n\leq N$, and $b_n=0$ for $n>N$. Then

$$a-b=\sum_{n=1}^m\frac{a_n}{2^n}-\left(\sum_{n=1}^{m-1}\frac{a_n}{2^n}+\sum_{n=m+1}^N\frac{1}{2^n}\right)=\frac{1}{2^m}-2\left(\frac{1}{2^{m+1}}-\frac{1}{2^{N+1}}\right)=\frac{1}{2^N}.$$




But,



$$f(a)-f(b)=\sum_{n=1}^m\frac{a_n}{3^n}-\left(\sum_{n=1}^{m-1}\frac{a_n}{3^n}+\sum_{n=m+1}^N\frac{1}{3^n}\right)=\frac{1}{3^m}-\frac{3}{2}\left(\frac{1}{3^{m+1}}-\frac{1}{3^{N+1}}\right)$$



$$=\frac{1}{2\cdot3^{m}}+\frac{1}{2\cdot3^N}\geq \frac{1}{2\cdot3^{m}}.$$



This is as far as I got, here are some final remarks:




  • $A$ is dense in $[0,1]$ but has a measure of $0$.


  • There is a theorem that states that a monotone function is differentiable almost everywhere (see Theorem 1.6.25 in here). Hence, there is at most a set of measure zero disjoint from $A$ on which $f$ is not differentiable. I suspect that $A$ is indeed the set of points on which $f$ is not differentiable, but this is nothing more than a hunch (do let me know if you ever figure it out!).


analysis - Injection, making bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$? I know that every function i...