Thursday, January 9, 2020

analysis - Injection, making a bijection

I have an injection $f \colon A \rightarrow B$ and I want to get a bijection. Can I just restrict the codomain to $f(A)$?




I know that every function is surjective when its codomain is restricted to its image, but I am not sure whether I am allowed to do what I did.
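As a concrete sanity check of the idea (a minimal Python sketch with made-up finite sets, not part of the original question): model the function as a dict, restrict the codomain to the image $f(A)$, and confirm the restriction is a bijection.

    # Finite sketch: an injection f : A -> B becomes a bijection
    # once the codomain is restricted to the image f(A).
    # The sets and values here are illustrative, not from the question.

    A = {1, 2, 3}
    B = {"a", "b", "c", "d"}        # codomain strictly larger than the image
    f = {1: "a", 2: "b", 3: "c"}    # injective: distinct inputs, distinct outputs

    image = set(f.values())         # f(A) = {"a", "b", "c"}

    injective = len(image) == len(A)                            # outputs never collide
    surjective = all(any(f[a] == y for a in A) for y in image)  # every y in f(A) is hit

    print(injective and surjective)  # True: f : A -> f(A) is a bijection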

sequences and series - How do you prove $\sum_{n=0}^\infty \frac{(-1)^n}{n!} = \frac{1}{e}$?

Prove that

$$\sum_{n=0}^\infty \frac{(-1)^n}{n!} = \frac{1}{e}$$



One of the solutions to a problem I was looking at contained this sum and directly replaced it with $1/e$. I don't understand how you get that; I checked with my calculator and it does indeed equal $1/e$, but I'm interested in how to derive it by hand.
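For what it's worth, the sum is the Taylor series $e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$ evaluated at $x = -1$. A quick numerical look at the partial sums (a sanity check, not a proof) confirms the convergence to $1/e$:

    # Partial sums of sum_{n=0}^N (-1)^n / n! versus 1/e.
    from math import e, factorial

    for N in (5, 10, 15, 20):
        partial = sum((-1) ** n / factorial(n) for n in range(N + 1))
        print(N, partial, abs(partial - 1 / e))   # error shrinks rapidly with N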

Wednesday, January 8, 2020

Discarding random variables in favor of a domain-less definition?

Probabilists don't care what exactly the domain of random variables is. Here is an extreme comment that exemplifies this: "You should soon see, if you learn more stochastic stuff, that specifying the underlying probability space is a BAD IDEA (what happens when you add a new head/tails?), and quite useless."



If specifying the underlying probability space of a random variable (in short, its "domain") is such a useless, bad idea in most scenarios, I'm wondering why no one in the long history of probability theory and statistics has come up with a better, slicker definition of random variables that avoids this inelegant we-have-a-domain-but-we-won't-talk-about-it situation.



It seems the only reason to keep the domain $\Omega$ is to enable a coupling of random variables, so that we can speak of their independence. But can't such a coupling be realized in a more elegant way than by using a space that we don't want to specify in the first place?




As soon as I read texts that go beyond very elementary probability, it seems to me that such domains are treated like the crazy uncle at family parties: never shown, but everyone knows he is there.

logarithms - Inequality $\log x \le \frac{2}{e} \, \sqrt{x}$



The inequality $$\log x \le \frac{2}{e} \, \sqrt{x},$$ where $\log x$ denotes the natural logarithm, is used in the proof of Theorem 4.7 in Apostol's Analytic Number Theory.



It seems that the inequality is not very difficult to prove using calculus. We could simply find the maximum/minimum of some auxiliary function like $f(x)= \frac2e \sqrt{x} - \log x$ or $g(x)=\frac{\log x}{\sqrt x}$.




Are there other methods by which this inequality can be proved? Is there a way to see the inequality more directly, without having to compute critical points of some auxiliary function?


Answer



With the substitution $x = e^{2(u+1)}$ we have $\log x = 2(u+1)$ and $\sqrt{x} = e^{u+1}$, so the inequality
$$
\log x \le \frac{2}{e} \, \sqrt{x}
$$ becomes $2(u+1) \le \frac{2}{e}\,e^{u+1} = 2e^u$, that is,
$$
e^u \ge 1 + u \tag{*}
$$
which is a well-known estimate for the exponential function.

Equality holds if and only if $u = 0$, corresponding to
$x = e^2$ in the original inequality.



$(*)$ is trivial for $u \le -1$ (the right-hand side is then $\le 0$) and can, for example, be shown using the Taylor series for $u > -1$. It also follows – as Jack said in a comment –
from the convexity of the exponential function: the graph lies above
the tangent line at $u = 0$.
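For readers who want to double-check numerically, here is a small Python sketch testing the original inequality on a grid and the equality case $x = e^2$ (a sanity check, not part of the proof):

    # Check log x <= (2/e) * sqrt(x) on a grid, and equality at x = e^2.
    from math import log, sqrt, e, isclose

    assert all(log(x) <= (2 / e) * sqrt(x) + 1e-12      # tolerance for float error
               for x in (0.01 * k for k in range(1, 100_000)))
    assert isclose(log(e ** 2), (2 / e) * sqrt(e ** 2))  # equality: both sides are 2
    print("checks passed")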



(This approach was inspired by Jack D'Aurizio's answer.)


real analysis - How can you prove that a function has no closed form integral?



I've come across statements in the past along the lines of "function $f(x)$ has no closed form integral", which I assume means that there is no combination of the operations:




  • addition/subtraction


  • multiplication/division

  • raising to powers and roots

  • trigonometric functions

  • exponential functions

  • logarithmic functions



which, when differentiated, gives the function $f(x)$. I've heard this said about the function $f(x) = x^x$, for example.



What sort of techniques are used to prove statements like this? What is this branch of mathematics called?







Merged with "How to prove that some functions don't have a primitive" by Ismael:



Sometimes we are told that some functions, like $\dfrac{\sin(x)}{x}$, don't have an indefinite integral, or rather that their antiderivative can't be expressed in terms of other simple functions.



I wonder how one can prove that kind of assertion.


Answer



It is a theorem of Liouville, reproven later with purely algebraic methods, that for rational functions $f$ and $g$, $g$ non-constant, the antiderivative




$$\int f(x)\exp(g(x)) \, \mathrm dx$$



can be expressed in terms of elementary functions if and only if there exists some rational function $h$ satisfying the differential equation



$$f = h' + hg'$$



$e^{x^2}$ is another classic example of such a function with no elementary antiderivative.
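To sketch how the criterion rules this out (filling in a step left implicit above): for $\int e^{x^2} \, \mathrm dx$ take $f = 1$ and $g = x^2$, so we would need a rational function $h$ with $1 = h' + 2xh$. Any pole of $h$ of order $k$ gives $h'$ a pole of order $k+1$ that the term $2xh$ cannot cancel, so $h$ would have to be a polynomial; but then $2xh$ has degree $\deg h + 1$ while $h'$ has degree $\deg h - 1$, so $h' + 2xh$ cannot equal the constant $1$. Hence no such $h$ exists, and the antiderivative is not elementary.

If SymPy is available, one can also see such antiderivatives expressed through non-elementary special functions (an illustration, not a proof; SymPy's integrator implements, among other things, parts of the Risch algorithm):

    # Both antiderivatives exist but involve non-elementary special functions.
    from sympy import symbols, integrate, exp, sin

    x = symbols("x")
    print(integrate(exp(x**2), x))   # sqrt(pi)*erfi(x)/2
    print(integrate(sin(x)/x, x))    # Si(x)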



I don't know how much math you've had, but some of this paper might be comprehensible in its broad strokes: http://www.sci.ccny.cuny.edu/~ksda/PostedPapers/liouv06.pdf




Liouville's original paper:




Liouville, J. "Suite du Mémoire sur la classification des Transcendantes, et sur l'impossibilité d'exprimer les racines de certaines équations en fonction finie explicite des coefficients." J. Math. Pure Appl. 3, 523-546, 1838.




Michael Spivak's book on Calculus also has a section with a discussion of this.


abstract algebra - Greatest common divisor of polynomials over $\mathbb{Q}$

I have two polynomials: $f = x^3 + 2x^2 - 2x - 1$ and $g = x^3 - 4x^2 + x + 2$. I have to do two things: find $\gcd(f,g)$ and find polynomials $a,b$ such that $\gcd(f,g) = a \cdot f + b \cdot g$. I have guessed their greatest common divisor, $(x-1)$, but I did it by looking for roots of both polynomials, and now I am stuck. How do I find the greatest common divisor using the Euclidean algorithm? I started with $f(x) = g(x) + 3(2x^2 - x - 1)$, but then things go nuts, and I can't use Bézout's identity to bring it all back to $\gcd(f,g) = a \cdot f + b \cdot g$.
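If it helps to check the hand computation: SymPy's gcdex performs exactly this extended Euclidean algorithm over $\mathbb{Q}[x]$. A small verification sketch, not part of the original question:

    # Verify gcd(f, g) = x - 1 and find a, b with a*f + b*g = gcd(f, g).
    from sympy import symbols, gcdex, expand

    x = symbols("x")
    f = x**3 + 2*x**2 - 2*x - 1
    g = x**3 - 4*x**2 + x + 2

    a, b, d = gcdex(f, g, x)       # returns (a, b, d) with a*f + b*g = d, d monic
    print(d)                       # x - 1
    print(expand(a*f + b*g - d))   # 0, confirming Bezout's identity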

gcd and lcm - Using the extended Euclidean algorithm to find $s, t, r$


I have been stuck for many hours and I don't understand how to use the extended Euclidean algorithm. I calculated the gcd using the regular algorithm, but I don't get how to run the extended version properly to obtain $s, t, r$.


I understand that the gcd can be written as a linear combination, but I don't see how to find that representation using the algorithm.


How can I find $s, t, r$ for $a = 154$, $b = 84$?



If it is of any importance, the algorithm I am referring to is from the book Cryptography: Theory and Practice.


Thank you very much; I was becoming hopeless over this.


Answer



Using the Euclidean algorithm, we have


$$
\begin{align}
154&=1\cdot84+70\tag{1}\\
84&=1\cdot70+14\tag{2}\\
70&=5\cdot14+0
\end{align}
$$

The last nonzero remainder is $14$, so $\gcd(154,84)=14$. Now back-substitute:

$$
\begin{align*}
14&=84-70\qquad\text{(using 2)}\\
&=84-(154-84)\qquad\text{(using 1)}\\
&=2\cdot84-1\cdot154
\end{align*}
$$

So $14=2\cdot84-1\cdot154$.
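For completeness, here is a short generic implementation of the extended Euclidean algorithm in Python. The $(r, s, t)$ naming follows the question, though this is a standard textbook version, not necessarily the exact pseudocode from Cryptography: Theory and Practice:

    # Extended Euclidean algorithm: returns (r, s, t) with
    # r = gcd(a, b) and s*a + t*b = r.
    def extended_gcd(a, b):
        r0, r1 = a, b
        s0, s1 = 1, 0
        t0, t1 = 0, 1
        while r1 != 0:
            q = r0 // r1
            r0, r1 = r1, r0 - q * r1   # remainders, as in the divisions above
            s0, s1 = s1, s0 - q * s1   # running coefficient of a
            t0, t1 = t1, t0 - q * t1   # running coefficient of b
        return r0, s0, t0

    r, s, t = extended_gcd(154, 84)
    print(r, s, t)                 # 14 -1 2, i.e. 14 = -1*154 + 2*84
    assert s * 154 + t * 84 == r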

