Saturday, March 31, 2018

elementary set theory - Suppose that $A_1\subseteq A$ and $B_1\subseteq B$, and that $A\sim B$ and $A_1\sim B_1$. Then $A\setminus A_1 \sim B\setminus B_1$.



Let $A,B,A_1,B_1$ be infinite sets. Suppose that $A_1\subseteq A$ and $B_1\subseteq B$, and that $A\sim B$ and $A_1\sim B_1$. Then $A\setminus A_1 \sim B\setminus B_1$.




My attempt:



We write $A\sim B$ to denote that $A$ and $B$ are equinumerous.


Since $A\sim B$, there is a bijection $f:A\to B$. Similarly, there is a bijection $h:A_1\to B_1$.



From here, I don't know how to proceed to define a bijection from $A\setminus A_1$ to $B\setminus B_1$. I think the proof may require the Axiom of Choice, but I don't know how. Please shed some light!



Answer



A counterexample. Let $\mathbb N = \{1,2,3,4,\dots\}$ be the natural numbers, and let $\mathbb E = \{2,4,6,\cdots\}$ be the even natural numbers. Then our counterexample is: $$ A_1 = \mathbb N, \quad A=\mathbb N,\quad B_1 = \mathbb E,\quad B = \mathbb N . $$ We have $A_1 \sim B_1$ and $A \sim B$ but $A \setminus A_1$ is empty and $B\setminus B_1$ is the set of all odd numbers.


probability - What are the chances of getting at least 3 fours when rolling 5 dice?

What are the chances that at least three of the rolls show a "four" or a higher number when rolling a fair six-sided die five times?



Actually, this is a problem whose answer I am curious about, and I can't find it anywhere. I would be glad to have it explained.


The question can be generalized to getting at least $A$ times a number $B$ or higher when rolling $C$ dice of $D$ sides. I would like to know the function that returns this probability given $A$, $B$, $C$ and $D$ parameters.


Examples of success: $(1,4,4,3,4)$, $(2,6,4,3,5)$, $(6,1,5,5,4)$;


Examples of failure: $(3,4,4,1,1)$, $(6,5,1,2,1)$, $(5,1,4,3,3)$.


Thanks.
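No answer was recorded for this post, but the generalization the poster asks for is a standard binomial tail: one die shows $B$ or higher with probability $p = (D-B+1)/D$, and we want at least $A$ successes among $C$ independent dice. A minimal Python sketch (my own illustration; the function name and signature are made up):

```python
from math import comb

def p_at_least(A, B, C, D):
    """Probability that at least A dice show B or higher
    when rolling C fair D-sided dice (binomial tail)."""
    p = (D - B + 1) / D          # chance one die shows B or higher
    return sum(comb(C, k) * p**k * (1 - p)**(C - k) for k in range(A, C + 1))

print(p_at_least(3, 4, 5, 6))    # 0.5 for the example in the question
```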

elementary number theory - $x^a - 1$ divides $x^b - 1$ if and only if $a$ divides $b$



Let $x > 1$ and $a$, $b$ be positive integers. I know that $a$ divides $b$ implies that $x^a - 1$ divides $x^b - 1$. If $b = qa$, then



$$x^b - 1 = (x^a)^q - 1^q = (x^a - 1)((x^a)^{q-1} + \ldots + x^a + 1).$$




I'm interested in the converse of the statement. If $x^a - 1$ divides $x^b - 1$, does this imply that $a$ divides $b$?


Answer



Let $b=a \cdot q+r$, where $0 < r < a$.



Then



$$x^b-1=x^b-x^r+x^r-1=x^r(x^{aq}-1)+x^r-1 \,.$$



Use that $x^a-1$ divides $x^{aq}-1$: then $x^a-1$ would have to divide $x^r-1$ as well, which is impossible because $0 < x^r-1 < x^a-1$. Hence $r=0$, i.e. $a$ divides $b$.



Friday, March 30, 2018

statistics - How do you calculate the probability of rolling all faces of a die after $n$ rolls?





I'm pretty new to Stack Exchange; I posted this in statistics, then discovered this site and thought it was much more appropriate, so here I go again:


It is fairly easy to figure out the average number of rolls it would take to roll all faces of a die [$1 + 6/5 + 6/4 + 6/3 + 6/2 + 6/1 = 14.7$], but that got me thinking of a seemingly more complicated problem.


Say you roll a die 1-5 times; the odds of ALL faces showing are obviously 0. If you roll a die 6 times, the odds of all faces showing can easily be calculated like so:


1 * (5/6) * (4/6) * (3/6) * (2/6) * (1/6) = .0154 or 1.54%


Now is where I get stuck: how do I do it for 7 or more rolls, and calculate it as a function of $n$?


Any tips are helpful!
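No answer made it into this post either, but one standard route is inclusion-exclusion over the faces that never appear: $$P(n)=\sum_{j=0}^{6}(-1)^j\binom{6}{j}\left(\frac{6-j}{6}\right)^n.$$ A short Python sketch of that formula (my own addition, not from the original thread):

```python
from math import comb

def p_all_faces(n, faces=6):
    """P(every face appears at least once in n rolls),
    by inclusion-exclusion over the faces that are missed entirely."""
    return sum((-1)**j * comb(faces, j) * ((faces - j) / faces)**n
               for j in range(faces + 1))

print(p_all_faces(6))   # ~0.0154, matching the 6-roll value above
print(p_all_faces(7))   # the n = 7 case the poster asks about
```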

Thursday, March 29, 2018

calculus - How to understand the notion of a differential of a function

In elementary calculus (and often in courses beyond) we are taught that a differential of a function, $df$ quantifies an infinitesimal change in that function. However, the notion of an infinitesimal is not well-defined and is nonsensical (I mean, one cannot define it in terms of a limit, and it seems nonsensical to have a number that is smaller than any other real number - this simply doesn't exist in standard analysis). Clearly the definition $$df=\lim_{\Delta x\rightarrow 0}\Delta f =f'(x)dx$$ makes no sense, since, in the case where $f(x)=x$ we have that $$ dx=\lim_{\Delta x\rightarrow 0}\Delta x =0.$$



All of this leaves me confused on how to interpret expressions such as $$df=f'(x)dx$$ Should it be seen simply as a definition, quantifying the first-order (linear) change in a function about a point $x$? i.e. a new function that is dependent both on $x$ and a finite change in $x$, $\Delta x$, $$df(x,\Delta x):=f'(x)\Delta x$$ then one can interpret $dx$ as $$dx:=dx(x,\Delta x)= \Delta x$$ such that $$\Delta f=f'(x)dx+\varepsilon =df +\varepsilon$$ (in which $\varepsilon$ quantifies the error between this linear change in $f$ and the actual change in $f$, with $\lim_{\Delta x\rightarrow 0}\varepsilon =0$).



I feel that there must be some sort of rigorous treatment of the notion of differentials, since these kinds of manipulations are used all the time, at least in physics?!



I've had some exposure to differential geometry in which one has differential forms, in particular $1-$forms which suggestively notationally "look like" differentials, for example $$\omega =df$$ but as I understand it these are defined as linear maps, members of a dual space to some vector space, $V$, which act on elements of $V$, mapping them to real numbers. Furthermore, the basis $1-$forms are suggestively written as what in elementary calculus one would interpret as an infinitesimal change in $x$, $dx$. But again, this is simply symbolic notation, since the basis $1-$forms simply span the dual space and are themselves linear maps which act on elements of $V$.



I've heard people say that differential forms make the notion of a differential of a function mathematically rigorous, however, in my mind I can't seem to reconcile how this is the case, since at best they specify the direction in which the differential change in a function occurs, via $$df(v)=v(f)$$ (since $v(f)$ is the directional derivative of a function $f$ along the vector $v$).




If someone could enlighten me on this subject I'd really appreciate it.

calculus - prove that for every natural $n$, $5^n - 2^n$ is divisible by 3


How do I prove, using induction, that for every natural $n$: $$5^n - 2^n$$ is divisible by 3?


Answer



  1. setting $n=1$, $\implies 5^1-2^1=3$ is divisible by $3$

Thus, the number $5^n-2^n$ is divisible by $3$ for $n=1$



  2. assume for $n=k$, the number $5^n-2^n$ is divisible by $3$; then $$\color{blue}{5^k-2^k=}\color{blue}{3m}$$ where $m$ is some integer




  3. setting $n=k+1$, $$5^{k+1}-2^{k+1}=5\cdot 5^k-2\cdot 2^k$$ $$=5\cdot 5^k-5\cdot 2^k+3\cdot 2^k$$ $$=5(\color{blue}{5^k-2^k})+3\cdot 2^k$$ $$=5(\color{blue}{3m})+3\cdot 2^k$$ $$=3(5m+2^k)$$ since $(5m+2^k)$ is an integer, the above number $3(5m+2^k)$ is divisible by $3$




Hence, $5^n-2^n$ is divisible by $3$ for all integers $n\ge 1$


Wednesday, March 28, 2018

trigonometry - sinc function centered at $x=c$ that goes to zero at $x=0$?

The sinc function is ordinarily defined as



$$
\operatorname{sinc}(x) = \frac{\sin x} x \text{ if } x \neq 0 \text{ else } 1.
$$




I want a sinc function that is shifted away from the origin such that it's centered at some value $c$, and also equals zero at $x=0$. How can I define this function?

real analysis - Does a bijective map from $(-\pi/2,\pi/2)\to (0,1)$ exist?




Does a bijective map from $(-\pi/2,\pi/2)\to (0,1)$ exist?





My first guess was using the sine function, but that doesn't give the required bijection, since it maps $(-\pi/2,\pi/2)\to (-1,1)$. Am I missing something with the sine function, or is there another way I can achieve this bijection?


Answer



Consider the function $f(x) = \frac{x}{\pi} + \frac{1}{2}$. It is linear with positive slope, hence injective, and it maps $(-\pi/2,\pi/2)$ onto $(0,1)$.


divisibility - divisors of natural numbers



I have a problem with an exercise.
I have to determine whether the two following statements are true or false.




If $a

If $a



I think the first is true. But I can't understand what the difference is when removing the brackets.



Thanks.



EDITED!
I was mistaken copying the question. Sorry! Now it's correct.


Answer



The first one is true even if you remove the hypothesis that $a


I also see no difference between the assertions.


Tuesday, March 27, 2018

calculus - Is the function continuous - $f(x) = \frac{1}{\sin x} + \frac{1}{x-1}$



I have an assignment in which I have to prove that a function "receives every real value, where $x\in (0,1)$".



Here is the function:



$$f(x) = \frac{1}{\sin x} + \frac{1}{x-1}$$



I don't know the answer yet, however I wanted to know if the function is continuous. I think it is, with the following proof:




I know that if $f(x)$ and $g(x)$ are continuous then $f(x) / g(x)$ is continuous as well (wherever $g(x)\ne 0$), and the same goes for $f(x) + g(x)$.



Let $f(x)=1$ and $g(x)=\sin x$; they are both continuous, and since $x \in (0,1)$ implies $\sin x\ne 0$, I can say that $\frac{1}{\sin x}$ is continuous as well.



Similarly, I do this for $\frac{1}{x-1}$, and I can say the whole function is continuous.



Am I correct?



I don't know how this brings me closer to proving the claim above, but I have to start with something.




Thanks,



Alan


Answer



Your initial step is correct, $f$ is continuous on $(0,1)$. I will assume the domain of $f$ is restricted to be $(0,1)$.



Note that $\lim_{x \to 0^{+}} f(x)=\infty$ and $\lim_{x \to 1^{-}} f(x)=-\infty$, in other words near 0 and 1 the function takes arbitrarily large (or small) values.



Since the function is continuous and defined on an interval, its image is an interval (the intermediate value theorem). The only interval that contains arbitrarily large and small values is $(-\infty, \infty)$.



sequences and series - Show that $Y_n$ converges in probability

[image: the problem statement]




I attempted part ii of the question with the following:



To show that $Y_n$ converges in probability, I started off with $P(|Y_n-0|>\epsilon)$. I then wrote, for $0<\epsilon<\theta$,



$P(Y_n<-\epsilon)$. However, trying to evaluate this led me nowhere. Upon looking at the solutions:
[image: the book's solution]



It says that $P(Y_n>\epsilon)$. Why is this so? How do I know that $Y_n$ is positive?

logarithms - Inequality $\log x\le \frac{2}{e} \, \sqrt{x}$


The inequality $$\log x \le \frac{2}{e} \, \sqrt{x},$$ where $\log x$ denotes the natural logarithm, is used in the proof of Theorem 4.7 in Apostol's Analytic Number Theory.


It seems that the inequality is not very difficult to prove using calculus. We could simply find maximum/minimum of some function like $f(x)= \frac2e \sqrt{x} - \log x$ or $g(x)=\frac{\log x}{\sqrt x}$.


Are there some other methods how this inequality can be proved? Is there a way in which this inequality can be seen more directly, without having to calculate critical points of some auxiliary function?


Answer



With the substitution $x = e^{2(u+1)}$ the inequality $$ \log x \le \frac{2}{e} \, \sqrt{x} $$ becomes $$ e^u \ge 1 + u \tag{*} $$ which is a well-known estimate for the exponential function. Equality holds if and only if $u = 0$, corresponding to $x = e^2$ in the original inequality.


$(*)$ is trivial for $u \le -1$ and can for example be shown using the Taylor series for $u > -1$. It also follows – as Jack said in a comment – from the convexity of the exponential function: the graph lies above the tangent line at $u = 0$.


(This approach was inspired by Jack D'Aurizio's answer.)


calculus - Rigorous definition of a limit




Suppose that $L$ is a real number and $f$ is a real-valued function defined on some interval $(b, \infty)$. We say that $\displaystyle{\lim_{x \to \infty} f(x) =L}$ if for every positive real number $\epsilon$, there is a real number $M$ such that if $x>M$ then $|f(x) -L| < \epsilon$.




Is this statement correct, or should it be amended to imply that a limit can exist at L (i.e. it is possible for a limit to exist at L), but does not have to be the limit of the function? For example, we can prove from this definition that $\displaystyle{\lim_{x \to \infty} \frac{4}{x^2}=0}$, but can't one also prove that $\displaystyle \lim_{x \to \infty} \frac{4}{x^2}=-0.001$, $\displaystyle \lim_{x \to \infty} \frac{4}{x^2}=-0.0001$ and other false claims by application of this definition?


Answer



The statement is correct.




(Note also that we usually talk about a limit existing at $a$ to refer to the point that the variable $x$ is approaching, rather than what the values of the function are approaching; to refer to what the function is approaching, we talk about the limit being $L$, or equaling $L$).



In your example, you cannot prove that $\lim\limits_{x\to\infty}\frac{4}{x^2} = -.0001$: given any $L\gt 0$, let $\epsilon = \frac{L}{2}$. Then for any $N\gt 0$, pick $x\gt\max\{N, \sqrt{\frac{8}{L}}\}$. Then
$$\frac{8}{L}\lt x^2,\text{ therefore }\frac{4}{x^2}\lt\frac{L}{2}.$$
And therefore, we have that
$$\left|L-\frac{4}{x^2}\right| = L - \frac{4}{x^2} \gt L-\frac{L}{2} = \epsilon.$$
We have therefore proven that if $L\gt 0$, then:





For every $N\gt 0$ there exists $x\gt N$ such that $|L-f(x)|\gt\frac{L}{2}$.




This proves that the limit definition cannot be satisfied, since the condition fails for at least one $\epsilon$.



If $L\lt 0$, pick $\epsilon=\frac{-L}{2}$ and a similar computation shows that you can always find $x$ greater than any given $N$ that will show the property is not satisfied.


Limit of recurrence sequence


I have to find a limit (or prove it doesn't exist) for the following recurrence sequence.


$a_1 = 2; a_{n+1} = \frac{1}{2}(a_n + \frac{2}{a_n})$


Now I know, in order to find the limit, I first need to prove that the sequence is monotonic and bounded. I've made a partial table of values and concluded that the sequence is decreasing, thus to prove monotonicity, I've written down:


$ a_{n+1} < a_n \rightarrow a_n > \sqrt{2} $



And that's all I could think of. I don't think the inequality above proves anything so I don't know how to continue. I tried to calculate limit of the sequence by using limits of elements as follows:


$ \lim a_{n+1} = \frac{1}{2}(\lim a_n + \lim \frac{2}{a_n}) = a\Rightarrow a = \sqrt{2}$


But without proving monotonicity and bounding, there's no proof the limit exists at all.


Thank you for any help in advance.


Answer



That is the Babylonian algorithm used to extract the square root of a real number, so the limit you found is correct. Have you seen how to study sequences of the form $a_{n+1}=f\left(a_n\right)$? You can also look at $\displaystyle \frac{a_{n+1}}{a_n}$, since the sequence is never zero.
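To see the monotone convergence described above, here is a quick numerical sketch of the recurrence (my own addition, not part of the original answer):

```python
a = 2.0                        # a_1 = 2
for n in range(1, 7):
    print(n, a)
    a = 0.5 * (a + 2.0 / a)    # a_{n+1} = (a_n + 2/a_n) / 2
# the iterates decrease monotonically toward sqrt(2) = 1.41421356...
```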


Monday, March 26, 2018

probability - What is missing in my solution of "from PDF to CDF and $P(X > 0.5)$"?





Task:



The continuous random variable $X$ is described with the following
probability density function (pdf):



$$f_X(x) = \begin{cases} \frac{1}{9}\big(3 + 2x - x^2 \big) \; : 0 \leq x \leq 3 \\ 0 \; \;: x < 0 \; \lor \; x > 3\end{cases}$$




Find the cumulative distribution function $F_X$ and the probability $P(X > 0.5)$.




I start by verifying that the given pdf is in fact a valid pdf. I am checking two conditions:




  1. Is the pdf nonnegative on all of its domain? Yes, hence we can write:




$$\forall_{x \in \mathbb{R}}\;f_X(x) \geq 0$$




  2. The pdf has to be integrable and its total area under the curve has to equal $1$:



$$\begin{align*} &\int_{\mathbb{R}}f_X = 1 \\ &\color{red}{\int_{-\infty}^{\infty}f_X(x)dx = 1} \\ \end{align*}$$



(for now assume the condition is true)




PDF plot: [image]






Computing CDF which is defined as:



$$F_X(x) = \int_{-\infty}^{x}f_X(t)dt$$



Therefore:




If $x < 0$:



$$F_X(x) = \int_{-\infty}^{x} 0dt = 0$$



If $x \geq 0 \; \land \; x \leq 3$:



$$\begin{align*}F_X(x) &= \int_{-\infty}^{0}0dt + \int_{0}^{x}\frac{1}{9}\big(3 + 2t - t^2\big)dt = \\ &= 0 + \frac{1}{9}\Big(3t + t^2 - \frac{1}{3}t^3 \Big)\Bigg|^{x}_0 = \\ &= \frac{1}{9} \Big(3x + x^2 - \frac{1}{3}x^3 \Big)\end{align*}$$



If $x \geq 3$:




$$\begin{align*} F_X(x) &= \int_{-\infty}^{0}0dt + \int_{0}^{3}\frac{1}{9}\Big(3 + 2t - t^2 \Big)dt + \int_{3}^{x}0dt \\ &= 0 + \frac{1}{9}\Big(3t + t^2 - \frac{1}{3}t^3 \Big)\Bigg|^3_0 + 0 = \\ &= 1 \end{align*}$$



(this implicitly confirms the $\color{red}{\text{red}}$ condition)



Finally the CDF is defined as:



$$F_X(x) = \begin{cases} 0 \; \; : x < 0 \\ \frac{1}{9} \Big(3x + x^2 - \frac{1}{3}x^3 \Big) \; \; : x \geq 0 \; \land \; x \leq 3 \\ 1 \; \; : x > 3 \end{cases}$$







The CDF result agrees with:



$$\lim_{x \to \infty}F_X(x) = 1 \; \land \; \lim_{x \to -\infty}F_X(x) = 0 $$



Also the function is non-decreasing and continuous.



CDF plot: [image]







Calculating $P(X > 0.5)$:



$$\begin{align*}P(X > 0.5) &= \int_{0.5}^{\infty}f_X(x)dx = \\ &= \int_{0.5}^{3}\frac{1}{9}(3+2x-x^2)dx + \int_{3}^{\infty}0dx = \\ &= \frac{1}{9} \Big(3x + x^2 - \frac{1}{3}x^3 \Big)\Bigg|^3_{0.5} + 0 = \\ &= \frac{175}{216} \approx 0.81\end{align*}$$






This probability solution does not agree with the book's solution.




The book says $P(X > 0.5) = 1 - F_X(0.5) = \frac{41}{216} \approx 0.19$, so it's my solution "complemented".






My questions:




  • Which final probability solution is correct?

  • Is this any special kind of probability distribution, e.g. Poisson or Chi Square (well, not these)?


  • Can you please point out all minor or major mistakes I have made along the way? (perhaps aside from plots that are not perfect). This is the most important for me.

  • What have I forgotten to mention or calculate for my solution to make more sense? Especially something theoretical, perhaps e.g. a definition for $X$.


Answer




My questions:




  • Which final probability solution is correct?





Your answer is right and the book's isn't. They presumably have mistakenly computed $\mathbb P(X < 0.5)$ instead of $\mathbb P(X > 0.5)$.





  • Is this any special kind of probability distribution, e.g. Poisson or Chi Square (well, not these)?





Not a common one, no. I found this page on "U-quadratic distributions" (a term I've never heard before), and this would be the vertical inverse of one of these described in the "related distributions" section, but I don't think this is a particularly common term or distribution.



EDIT: Whoops, this isn't even quite the vertical inverse of a U-quadratic distribution, is it? Such a distribution would apparently not truncate the left side of the parabola as this one does. The better answer to your question is: "No, this distribution is neither named nor important."





  • Can you please point out all minor or major mistakes I have made along the way? (perhaps aside from plots that are not perfect). This is the most important for me.





I'd love to, but I didn't find any!





  • What have I forgotten to mention or calculate for my solution to make more sense? Especially something theoretical, perhaps e.g. a definition for $X$.




I didn't spot any holes or anything that needs to be improved.




EDIT: One thing you could do to clean this up a bit: when you compute $\mathbb P(X > 0.5)$, you're redoing the integration you already did in your CDF. Instead, you could just use that result that you already obtained:
$$\mathbb P(X > 0.5) = 1 - \mathbb P(X \leq 0.5) = 1 - F_X(0.5) = 1 - \frac{1}{9}\Big(3(0.5) + (0.5)^2 - \frac{1}{3}(0.5)^3\Big) = \dots $$
That said, your answer isn't wrong, it's just a bit inefficient.
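Following up on that cleanup, the arithmetic can be checked exactly with Python's `fractions` module (my own verification sketch, not part of the original answer):

```python
from fractions import Fraction

def F(x):
    """CDF on [0, 3]: F(x) = (3x + x^2 - x^3/3) / 9."""
    x = Fraction(x)
    return (3*x + x**2 - x**3 / 3) / 9

print(F(Fraction(1, 2)))       # 41/216
print(1 - F(Fraction(1, 2)))   # 175/216, the poster's P(X > 0.5)
```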


induction - Inequality in Algebra: $1 \leq x_1 x_2 \cdots x_n$ implies that $2^{n} \leq (1 + x_1)(1+x_2) \cdots (1 + x_n).$

How do I prove that if $x_1, \ldots, x_n$ are positive real numbers, then

$$1 \leq x_1 x_2 \cdots x_n \text{ implies that } 2^{n} \leq (1 + x_1)(1+x_2) \cdots (1 + x_n).$$



I attempted a proof by induction but am not able to nail the inductive step. Any help would be appreciated!

summation - Combination of quadratic and arithmetic series



Problem:





Calculate $\dfrac{1^2+2^2+3^2+4^2+\cdots+23333330^2}{1+2+3+4+\cdots+23333330}$.







Attempt:



I know the denominator is an arithmetic series and equals

$$\frac{n}{2}(T_1+T_n)=\frac{23333330}{2}(1+23333330)=272222156111115,$$
but how do I calculate the numerator without using a calculator?


Answer



Intuitively,
\begin{align}
S_1&=\frac{1^2}{1}=1=\frac{3}{3}\\
S_2&=\frac{1^2+2^2}{1+2}=\frac{5}{3}\\
S_3&=\frac{1^2+2^2+3^2}{1+2+3}=\frac{7}{3}\\
S_4&=\frac{1^2+2^2+3^2+4^2}{1+2+3+4}=3=\frac{9}{3}\\
\vdots\\
\large\color{blue}{S_n}&\color{blue}{=\frac{2n+1}{3}}.
\end{align}
Indeed, $\dfrac{\sum_{k=1}^n k^2}{\sum_{k=1}^n k}=\dfrac{n(n+1)(2n+1)/6}{n(n+1)/2}=\dfrac{2n+1}{3}$.


real analysis - Is every function with the intermediate value property a derivative?



As is well known, every continuous function has the intermediate value property, but even some discontinuous functions like

$$f(x)=\left\{
\begin{array}{cl}
\sin\left(\frac{1}{x}\right) & x\neq 0 \\
0 & x=0
\end{array}
\right.$$
have this property.
In fact, we know that a derivative always has this property.



My question is whether for every function $f$ with the intermediate value property there exists a function $F$ such that $F'=f$. And if so, is $F$ unique (up to an additive constant)?




My attempts till now: (Just skip them if you feel so)



The most natural way of finding those functions would be integrating, so I guess the question can be reduced to whether functions with the intermediate value property can be integrated.
This depends heavily on how you define when a function is integrable; we (my analysis class) said that we call a function integrable when it is a regulated function (the one-sided limits $\lim_{x\to x_0^+} f(x)$ and $\lim_{x\to x_0^-} f(x)$ exist).
As my example above shows, not every function with the intermediate value property is a regulated function. But if we say a function is integrable when every Riemann sum converges, the above function is integrable, so it seems like this would be a better definition for my problem.



Edit: As Kevin Carlson points out in a comment, being a derivative is different from being Riemann-integrable; he even gave an example of a function which is differentiable but whose derivative is not Riemann-integrable. So we can't show that those functions are Riemann-integrable, as they are not in general. Now I have no clue how to find an answer.


Answer



If you compose $ \tan^{-1} $ with Conway’s Base-$ 13 $ Function, then you get a bounded real-valued function on the open interval $ (0,1) $ that satisfies the Intermediate Value Property but is discontinuous at every point in $ (0,1) $. Therefore, by Lebesgue’s theorem on the necessary and sufficient conditions for Riemann-integrability, this function is not Riemann-integrable on any non-degenerate closed sub-interval of $ (0,1) $.



Now, it cannot be the derivative of any function either, because by the Baire Category Theorem, if a function defined on an open interval has an antiderivative, then the function must be continuous on a dense set of points. This thread may be of interest to you. :)


Sunday, March 25, 2018

calculus - Can one solve $\int_{0}^{\infty} \frac{\sin(x)}{x} dx$ *from its Taylor series antiderivative directly*?

This question was inspired by this question:



Evaluating the integral $\int_{0}^{\infty} \frac{\sin{x}}{x} \ dx = \frac{\pi}{2}$?





Well, can anyone prove this without using Residue theory. I actually thought of doing this:
$$\int_{0}^{\infty} \frac{\sin x}{x} \, dx = \lim_{t \to \infty} \int_{0}^{t} \frac{1}{t} \left( t - \frac{t^3}{3!} + \frac{t^5}{5!} + \cdots \right) \, dt$$
but I don't see how $\pi$ comes here, since we need the answer to be equal to $\frac{\pi}{2}$.




Answers were given to the stated question -- how to prove without using Residue theory. Yet the quote suggests an obvious follow-up question: can you prove the integral from the Taylor series expansion directly, somehow?

elementary number theory - solving and manipulating linear congruences



I need to find a multiple of $5$, call it $5k$, with the following properties:




  1. $5k \equiv 3 $ mod $6$

  2. $5k \equiv 2 $ mod $7$



My first instinct is to start with the Chinese Remainder Theorem, but I do not know how to start. Could I get a few hints? Please explain the modular arithmetic manipulations needed to solve this problem.



Answer



To make the computations easy to understand, we first consider general equations.



Let $a,b,n$ be integers.
Suppose gcd$(a, n) = 1$.
Consider the following equation.



$ax \equiv b$ (mod $n$)



Since gcd$(a, n) = 1$, we can solve $ay \equiv 1$ (mod $n$) by Euclid's algorithm.

Then $x = by$ (mod $n$) is the solution.



Let $a,b,n,m$ be integers.
Suppose gcd$(n, m) = 1$.
Consider the following equations.



$x \equiv a$ (mod $n$)



$x \equiv b$ (mod $m$)




Since gcd$(n, m) = 1$, we can find, by Euclid's algorithm, integers $k, l$ such that



$mk \equiv 1$ (mod $n$)



$nl \equiv 1$ (mod $m$)



Then $x = amk + bnl$ (mod $nm$) is the solution.



Now let's solve the given equations




$5k \equiv 3$ (mod $6$)



$5k \equiv 2$ (mod $7$)



We get (by Euclid's algorithm, or just by testing)



$k \equiv 3$ (mod $6$)



$k \equiv 6$ (mod $7$)




Then we can apply the above method to find $k$.
Since



$7 \equiv 1$ (mod $6$)



$-6 \equiv 1$ (mod $7$)



$k = 3\cdot7 -6\cdot6 = -15 \equiv 27$ (mod $42$)
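The same computation transcribed into code (a sketch; the helper `inv` implements the extended Euclidean algorithm mentioned above):

```python
def inv(a, n):
    """Modular inverse of a mod n via the extended Euclidean algorithm
    (assumes gcd(a, n) = 1)."""
    old_r, r, old_s, s = a, n, 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % n

# 5k = 3 (mod 6) and 5k = 2 (mod 7)  reduce to  k = 3 (mod 6), k = 6 (mod 7)
k = (3 * 7 * inv(7, 6) + 6 * 6 * inv(6, 7)) % 42
print(k, 5 * k % 6, 5 * k % 7)   # 27 3 2
```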


trigonometry - Proving $\sin ((n-1/2)\phi) + \sin(\phi/2)=\sin({n+1 \over 2}\phi)$



I am trying to show that



$\sin ((n-1/2)\phi) + \sin(\phi/2)=\sin({n+1 \over 2}\phi)$




I tried to apply $\sin (x-y) = \sin x \cos y - \cos x \sin y$ to it and I got



$\sin ((n-1/2)\phi) + \sin(\phi/2)=\sin \phi / 2 \sin n \phi / 2 + \sin n \phi \cos \phi / 2$



but how can I proceed from there?


Answer



You need the prosthaphaeresis/sum-to-product formulae. Adding the expanded forms of $\sin{(x-y)}$ and $\sin{(x+y)}$, you find
$$ \sin{(x-y)} + \sin{(x+y)} = 2\sin{x}\cos{y}, $$
and changing variables,

$$ \sin{A}+\sin{B} = 2\sin{\tfrac{1}{2}(A+B)}\cos{\tfrac{1}{2}(A-B)}. $$
Then the left-hand side of your identity is
$$ \sin{(n-\tfrac{1}{2})\phi} + \sin{\tfrac{1}{2}\phi} = 2\sin{\tfrac{n}{2}\phi} \cos{\tfrac{n-1}{2}\phi}, $$
which is not often equal to what you have on the right...


integration - How to show $\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}$




I am trying to show $\displaystyle{\int_{0}^{\infty} \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}}}.$



Any help?
(I am having troubles using the half circle infinite contour)



Or more specifically, what is the residue $\text{res} \left(\frac{1}{z^3+1},z_0=e^\frac{\pi i}{3} \right )$




Thanks!


Answer



There are already two answers showing how to find the integral using just calculus. It can also be done by the Residue Theorem:



It sounds like you're trying to apply RT to the closed curve defined by a straight line from $0$ to $A$ followed by a circular arc from $A$ back to $0$. That's not going to work, because there's no reason the integral over the semicircle should tend to $0$ as $A\to\infty$.



How would you use RT to find $\int_0^\infty dt/(1+t^2)$? You'd start by noting that $$\int_0^\infty\frac{dt}{1+t^2}=\frac12\int_{-\infty}^\infty\frac{dt}{1+t^2},$$and apply RT to the second integral.



You can't do exactly that here, because the function $1/(1+t^3)$ is not even. But there's an analogous trick available.




Hint: Let $$f(z)=\frac1{1+z^3}.$$If $\omega=e^{2\pi i/3}$ then $$f(\omega z)=f(z).$$(Now you're going to apply RT to the boundary of a certain sector of opening $2\pi/3$... be careful about the "$dz"$...)
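Following the hint through, here is a sketch of the computation (my own addition, answering the poster's specific question about the residue). Inside the sector $0 \le \arg z \le 2\pi/3$ of radius $R$, the only pole of $f$ is the simple pole $z_0 = e^{\pi i/3}$, and $$\operatorname{res}\left(\frac{1}{z^3+1}, z_0\right)=\frac{1}{3z_0^2}=\frac{z_0}{3z_0^3}=-\frac{z_0}{3}=-\frac{e^{\pi i/3}}{3}.$$ The outgoing ray contributes $I=\int_0^R f$, the returning ray contributes $-\omega I$ (substitute $z=\omega t$ and use $f(\omega z)=f(z)$), and the arc is $O(R^{-2})$, so letting $R\to\infty$, $$(1-\omega)\,I = 2\pi i\left(-\frac{e^{\pi i/3}}{3}\right)\quad\Longrightarrow\quad I=\frac{2\pi i\, e^{\pi i/3}}{3\left(e^{2\pi i/3}-1\right)}=\frac{2\pi}{3\sqrt 3}.$$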


geometry - Two paradoxes: $\pi = 2$ and $\sqrt 2 = 2$












Can anyone explain how to properly resolve two paradoxes in this YouTube video by James Tanton?


Answer



First: It is not a paradox: it is just wrong. The reasoning is wrong.



About $\pi = 2$ he says: "Well clearly we are approaching the diameter of the circle". That is a statement that he doesn't prove and which is false.



The same problem arises with the $\sqrt{2} = 2$ when he says: "Well clearly this geometric construction approaches the diagonal of the square". How does he know that?



All that this proves is that we have to be careful when we talk about finding limits from purely looking at pictures.




"Just because the sun sets in the west doesn't mean that it has to rise in the west as well.



Edit: There are plenty of examples of proofs that seem right, but turn out to be wrong when we go over them in more detail. Take for example the proof that for complex numbers
$$
1 = \sqrt{1} = \sqrt{(-1)\cdot(-1)} = \sqrt{-1}\sqrt{-1} = i\cdot i = -1$$
Here again, the argument is invalid because the rule $\sqrt{ab} = \sqrt{a}\sqrt{b}$ doesn't hold for complex numbers.


Saturday, March 24, 2018

real analysis - Why can we work with $[0,1]$ instead of $\mathbb{R}$?




I would like to know why the following are true. I believe there is some sort of relationship between the two spaces, an isomorphism of sorts?



There is a proof showing that the set of real numbers is uncountable. The proof uses $[0,1]$ instead of $\mathbb{R}$. Why can we work with $[0,1]$ instead of $\mathbb{R}$?



Another instance of this:



There is a proof of the Weierstrass Approximation Theorem, which can be proved by proving Bernstein's Theorem first. In the proof for Bernstein's Theorem, although we want to show that $B_n(f)$ converges to $f$ uniformly for all $f\in C[a,b]$, we merely suppose $f\in C[0,1]$ instead, where $(B_n(f))(x)=\sum_{k=0}^nf\left(\frac{k}{n}\right)\cdot
{n\choose k}x^k(1-x)^{n-k}$, for $x\in[0,1]$, and where $C[a,b]$ is the set of all continuous functions on the interval $[a,b]$.


Answer




The first result your mention is set-theoretic. It is easily verified that if $A\subseteq B$ and $A$ is uncountable, then $B$ is also uncountable. So, showing that $[0,1]$ is uncountable suffices to show the uncountability of the reals. For Cantor's diagonal argument it is easier to work with $[0,1]$, which is why we choose to do so. It should be noted the proof is typographical. It is about sequences of sequences of digits.



As for the second result, I'm not entirely sure what you are asking, but here there is no essential difference between any two closed intervals. Again, for Bernstein's proof (which is probabilistic) it is easier to normalize things and work with $[0,1]$. Using any linear function $[0,1]\to [a,b]$ then translates Weierstrass's approximation result to any closed interval. Note that then it also implies that the cardinality of $[0,1]$ is the same as that of $[a,b]$ for all $a<b$.

Friday, March 23, 2018

calculus - Using the Eulerian integrals evaluate $\int_0^\infty \frac{\ln^2{x}}{1+x^4} \mathrm{d}x$

The question asks to evaluate the integral: $$\int_0^\infty \frac{\ln^2{x}}{1+x^4} \mathrm{d}x.$$



I have tried a few substitutions but am not getting anywhere.




Thanks in advance!

combinatorics - Incorrect combinatorial argument- 5 card hand with at least 3 red cards



How many 5 card hands can be made with at least three red cards? Of course, we're using a standard deck of 52. I know how to answer this, but I frequently see this argument, producing a different answer. I know it's wrong but I can't explain exactly what's wrong with it?



"First, there are $_{52}C_3$ of choosing three red cards. Since the other two cards can be black or red, we can choose them from any of the 49 unused cards. I.e. $_{49}C_2$ ways. So the final answer should be $_{52}C_3$ x $_{49}C_2$"


Answer




This counting argument is incorrect because it will multiple-count configurations for which more than three cards are red; e.g., $$\{10\color{red}\heartsuit, J\color{red}\heartsuit, Q\color{red}\heartsuit, K\color{red}\heartsuit, A\color{red}\heartsuit\} = \{\{10\color{red}\heartsuit, J\color{red}\heartsuit, Q\color{red}\heartsuit \}, \{K\color{red}\heartsuit, A\color{red}\heartsuit\}\} = \{\{10\color{red}\heartsuit, J\color{red}\heartsuit\}, \{ Q\color{red}\heartsuit, K\color{red}\heartsuit, A\color{red}\heartsuit\}\}$$ but under such an enumeration scheme, the middle and right hand expressions are considered distinct.
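A quick numerical contrast of the overcount versus the true count (my own sketch, using Python's `math.comb`):

```python
from math import comb

# the flawed count from the question: 26C3 red triples times 49C2 leftovers
wrong = comb(26, 3) * comb(49, 2)

# correct count: split by the exact number of red cards (3, 4, or 5)
right = sum(comb(26, k) * comb(26, 5 - k) for k in range(3, 6))

print(wrong, right)   # the first is much larger, due to the duplicates above
```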


abstract algebra - Order of element in field extension



Given a field extension $\mathbb{F}_{p^n}$ and some element $\alpha \in \mathbb{F}_{p^n}$ which is not contained within any proper subfield of $\mathbb{F}_{p^n}$, is there a lower bound on the order of $\alpha$?


I understand that the nonzero elements of a finite field form a cyclic group generated by some primitive element $\beta \in \mathbb{F}_{p^n}$. However, if we don't know whether $\alpha$ is primitive, what can we say about its order (without actually computing anything)?


Answer



From the fact that $\alpha$ is not contained within any proper subfield, we know it is not fixed by any power of Frobenius, so that the elements $\alpha, \alpha^p, \alpha^{p^2},...,\alpha^{p^{n-1}}$ are all distinct, i.e. there exist $n$ distinct powers of $\alpha$ not equal to the identity, so $\alpha$ must have order greater than $n$.


complex numbers - I am getting $i^{-1}=\pm i$.


I am trying to find $i^{-1}$. I already know that the answer is $-i$, but I can't figure out a way to determine that using math. This is what I am doing: $$i^{-1}$$ $$\frac1i$$ $$\sqrt{\left(\frac1i\right)^{2}}$$ $$\sqrt{\frac{1}{-1}}$$ $$\sqrt{-1}$$ $$\pm i$$ What am I doing wrong?


Answer



I'm not sure where your 3rd expression comes from. In general, to find $\frac{1}{a+bi}$, where $a,b\in \mathbb{R}$, multiply numerator and denominator by the conjugate $\overline{a+bi}=a-bi$ to obtain $$\frac{1}{a+bi}=\frac{a-bi}{(a+bi)(a-bi)}=\frac{a-bi}{a^2+b^2}$$ In your case, take $a=0,b=1$ to obtain $\frac{1}{i}=\frac{-1i}{1}=-i$.


integration - Simple explanation of difference between $\cos(x)$ and $\cos(x^2)$ integrals.



I'm not looking for the solved integrals, I'm just looking for a simple explanation for why




$\displaystyle\int\cos(x^2)dx$



is so much more complicated than



$\displaystyle\int\cos(x)dx$



With simple I mean something I can say to explain the difference to a person just starting to learn about integration. Perhaps something visual?


Answer



For one thing, look at their graphs. Since $x$ increases at a constant rate, $\cos(x)$ has a constant period but $x^2$ increases faster and faster for larger $x$ so the "period" of $\cos(x^2)$ keeps getting smaller and smaller.



algebra precalculus - How to find the magnitude squared of square root of a complex number


I'm trying to simplify the expression



$$\left|\sqrt{a^2+ibt}\right|^2$$


where $a,b,t \in \Bbb R$.


I know that by definition


$$\left|\sqrt{a^2+ibt}\right|^2 = \sqrt{a^2+ibt}\left(\sqrt{a^2+ibt}\right)^*$$


But how do you find the complex conjugate of the square root of a complex number? And what is the square root of a complex number (with arbitrary parameters) for that matter?


Answer



For any complex number $z$, and any square root $\sqrt{z}$ of $z$ (there are two), we have $$\bigl|\sqrt{z}\bigr|=\sqrt{|z|}$$ Therefore $$\bigl|\sqrt{a^2+ibt}\bigr|^2=\sqrt{|a^2+ibt|^2}=|a^2+ibt| = \sqrt{a^4+b^2t^2}$$


sequences and series - Comparison test $\sum_{k=1}^{\infty} \frac{1 + \ln(k)}{k}$



$$\sum_{k=1}^{\infty} \,\frac{1 + \ln(k)}{k}$$



Determine whether it converges or diverges.




I don't think I could do limit comparison test because the $\ln(k)$ messed me up. Pretty sure I could do this with integral test but I think this is possible with comparison test as well. Could someone tell me if it is? For instance I'm looking for a $b_k$ value that is $$0 \leq a_k \leq b_k$$



My textbook uses $b_k = \frac{1}{k}$, but how is the hypothesis met with this? $$\frac{1+\ln(k)}{k} \,\geq\, \frac{1}{k}$$



That's wrong; it should be $a_k \leq b_k$ $\forall k \geq 1$.


Answer



The book is correct. Note that $1+\log(k)\ge 1$ for all $k\ge 1$. Hence,



$$\frac{1+\log(k)}{k}\ge \frac1k$$




Since the harmonic series diverges, the series $\sum_{k=1}^\infty \frac{1+\log(k)}{k}$ diverges by comparison. That is to say, the series of interest dominates the divergent harmonic series.


calculus - What is $\lim\limits_{k \to 0}{f(k) = 2 + k^{\frac{3}{2}}\cos {\frac{1}{k^2}}}$

Just want to check this one:



I got:



$$\displaystyle \lim_{k \to 0}{f(k) = 2} \;+\; \lim_{k \to 0}{k^{\frac{3}{2}}\cos {\frac{1}{k^2}}}$$



Since $\cos{\frac{1}{k^2}}$ is bounded between $-1$ and $1$, using the squeeze theorem, I have $\lim\limits_{k \to 0} k^{\frac{3}{2}}\cos{\frac{1}{k^2}} = 0$.



So
$$\begin{align*}
\lim_{k \to 0}f(k) &= 2 + \lim_{k \to 0}k^{\frac{3}{2}}\cos\left(\frac{1}{k^2}\right)\\
&= 2 + 0\\
&= 2
\end{align*}$$



Is this correct?



Thanks!

calculus - Prove $\lim_{x\to 0}\frac{\ln(\cos x)}{\ln\left(1-\frac{x^2}{2}\right)}=1$ without L'Hopital


Prove that:$$\lim_{x\to 0}\frac{\ln(\cos x)}{\ln\left(1-\frac{x^2}{2}\right)}=1$$



without L'Hopital's rule.




I don't know if this is possible.



WolframAlpha agrees with this limit.




There's a similar limit without L'Hopital's rule $\lim_{x\to 0^+}\frac{\ln(\sin x)}{\ln (x)}=1$, which is easier to prove and some proofs can be seen in this question.



In $\lim_{x\to 0^+}\frac{\ln(\sin x)}{\ln (x)}=1$ you can see $x\to 0^+$, which is important, but in this case $\frac{\ln(\cos x)}{\ln\left(1-\frac{x^2}{2}\right)}$ is an even function and we can use $x\to 0$.



I've tried some of the methods given in the linked question and none of them worked.



I first saw this problem in this answer, where I also discussed the methods I've tried to solve this.



We could use $\lim_{x\to 0}\frac{\cos x}{1-\frac{x^2}{2}}=1$.







Edit: I've noticed that also



$$\lim_{x\to 0}\frac{\cos (\cos x)}{\cos\left(1-\frac{x^2}{2}\right)}=1$$



See WolframAlpha here.



And also $$\lim_{x\to 0}\frac{\sin (\cos x)}{\sin\left(1-\frac{x^2}{2}\right)}=1$$




See WolframAlpha here.



And also



$$\lim_{x\to 0}\frac{\tan (\cos x)}{\tan\left(1-\frac{x^2}{2}\right)}=1$$



See WolframAlpha here..



And also $$\lim_{x\to 0}\frac{\arcsin(\cos x)}{\arcsin\left(1-\frac{x^2}{2}\right)}=1$$




See WolframAlpha here.



Etc.

Thursday, March 22, 2018

summation - Formula for sum of first $n$ odd integers

I'm self-studying Spivak's Calculus and I'm currently going through the pages and problems on induction. This is my first encounter with induction and I would like for someone more experienced than me to give me a hint and direction.
The first problem is as follows:



Find a formula for $$\sum_{i=1}^n(2i-1)=1+3+5+...+(2n-1)$$
And the related following problem:




Find a formula for $$\sum_{i=1}^n(2i-1)^2=1^2+3^2+5^2+...+(2n-1)^2$$



The given hints are: "What do these expressions have to do with $1+2+3+...+2n$ and $1^2+2^2+3^2+...+(2n)^2$?"



I recognize that the above sums are the sum of the first $n$ odd integers and the sum of the squares of the first $n$ odd integers, respectively. My question is, in problems like these does one just do a bunch of trial and error, as I have done for quite a while now, or is there a more clever way to go about it?

geometry - Method of Exhaustion applied to Parabolic Segment in Apostol's Calculus

In section I1.3 of Apostol's Calculus (2nd Ed., Vol. 1, pages 3 to 7), Apostol details how to apply the method of exhaustion to the parabolic segment of $x^{2}$. I understand the process of applying the method, however I'm not sure how he proceeds from a certain step to another. In particular, it is the step in which he introduces the inequality needed to single out $b^3 \over 3$ as the value of the area of the parabolic segment. I've attempted to provide as much context as possible below so that reference to the book isn't necessary.



So in his explanation, he employs the method of exhaustion on the parabolic segment of $x^2$ from $x=0$ to $x=b$ with outer rectangles and inner rectangles, also known and upper and lower rectangles in other books.




The areas of the inner and outer rectangles are:



$$ S_{inner}= {b^3 \over n^3}[1^{2}+2^{2}+...+(n-1)^{2}] $$



$$ S_{outer}= {b^3 \over n^3}[1^{2}+2^{2}+...+n^{2}] $$



He then says that calculating the sum of the squares is inconvenient, and introduces the identity:



$$ (I.3) \ 1^{2}+2^{2}+...+n^{2} = {n^3 \over 3}+{n^2 \over 2}+{\frac n6}$$




and says that for the summation of squares from $1$ to $(n-1)$:



$$ (I.4) \ 1^{2}+2^{2}+...+(n-1)^{2} = {n^3 \over 3}-{n^2 \over 2}+{\frac n6}$$



He says, "For our purposes, we do not need the exact expressions given in the right-hand members of (I.3) and (I.4). All we need are the two inequalities:



$$1^{2}+2^{2}+...+(n-1)^{2} < {n^{3} \over 3} < 1^{2}+2^{2}+...+n^{2}$$



What I don't understand is where he obtained the idea to use ${n^{3} \over 3}$ in the inequality, from the expressions I've provided above. It seems to me that he used his prior knowledge of the value of the area to continue the proof, which leaves the reader wondering where he got the value from. He could have also used the following valid inequality:




$$1^{2}+2^{2}+...+(n-1)^{2} < {n^{3} \over 3}+ \frac n6 < 1^{2}+2^{2}+...+n^{2}$$



This is true if you simply remove the $n^2 \over 2$ term in I.3 and I.4 above. But he instead chose to use the first inequality. If there is a reason for why, I'd like to know.



Finally, and this is certainly pedantic on my part, I'd like to note that I think his proof of the area of a parabolic segment is inappropriately named the method of exhaustion. Isn't the reason Archimedes' method of analysis was heralded as ahead of its time is because he used the notion of the limiting process (albeit indirectly)? When I first encountered this method, the values of the inner and outer rectangles I provided above were subjected to the limiting process which results in the area under the parabolic segment. That seems more akin to the process with which Archimedes approached the problem (which was simply adding more and more triangles to fill in the empty space between the circle, and the polygon inscribed in the circle).

Wednesday, March 21, 2018

Prove Fibonacci identity by induction


I am having trouble with the induction step of this proof, any nudges in the right direction or pointers where my reasoning is wrong are greatly appreciated.


I should prove the following equality:


For all $ n, n_1 \in \mathbb N, n_1 \ge 4, F_0 = 1, F_1 = 2, F_n = F_{n-1} + F_{n-2}$ $$ F_{n_1-1} = 2 + \sum_{i=0}^{n_1-3} F_i $$


My work so far:


Base case $ n_1 = 4 $: $$F_3 = 2 + \sum_{i=0}^{1} F_i = 2 + F_0 + F_1 $$ $$= 2 + 1 + 2$$ $$=5$$


IA: Assume for a particular $k \in \mathbb N, k \ge 4$: $$F_{k-1} = 2 + \sum_{i=0}^{k-3} F_i$$



IS: Here I should show that this applies for $k+1$. $$ F_{(k+1)-1} = 2 + \sum_{i=0}^{(k+1)-3} F_i $$ $$=2 + \sum_{i=0}^{k-2} F_i$$ And this is where I'm a bit stuck. I think I need to show that this is the same thing as the sum for k, plus a bit...right? And that "bit" should basically be enough to get us to the next term in the Fibonacci sequence?


Because we defined the nth Fibonacci term as $F_n = F_{n-1} + F_{n-2}$, I broke up the sum for $k+1$ like so: $$(2 + \sum_{i = 0}^{((k+1)-1) - 3}F_i) + (2 + \sum_{i=0}^{((k+1)-2)-3}F_i)$$ $$= (2 + \sum_{i = 0}^{k-3}F_i) + (2 + \sum_{i=0}^{k-4}F_i)$$ And then I can see that the first operand is actually the sum for $k$. But here I feel stuck. How do I show that the second operand is enough to get us to the term after $F_{k-1}$?


Answer



Let $F_0 = 0, F_1 = 1$ and $F_{n+1} = F_n + F_{n-1}$. Then $$ F_{n+2}-1 = \sum_{k=1}^n F_k. $$ I'm guessing you want to prove something along these lines. That's not too bad via induction.


Base Case: Exercise.


Induction Step: Suppose $F_{n+2} -1 = \sum_{k=1}^n F_k$ for a natural $n$. Observe $$ \sum_{k=1}^{n+1} F_k = \sum_{k=1}^n F_k + F_{n+1} = (F_{n+2}-1)+F_{n+1} = F_{n+3} - 1.$$ That's it.


discrete mathematics - Proving $(0,1)$ and $[0,1]$ have the same cardinality

Prove $(0,1)$ and $[0,1]$ have the same cardinality.




I've seen questions similar to this but I'm still having trouble. I know that for $2$ sets to have the same cardinality there must exist a bijection from one set to the other. I think I can create a bijection function from $(0,1)$ to $[0,1]$, but I'm not sure how to do the opposite. I'm having trouble creating a function that maps $[0,1]$ to $(0,1)$. The best I can think of would be something like $x \over 2$.



Help would be great.

complex numbers - Why is $\cos(i)-i \sin(i)=e$?

When I was typing $\cos(i)-i \sin(i)$ into the calculator, I found out that it is equal to $e$ (Euler's number). I was amazed by that "discovery" so I checked on the internet and there were no results. Someone please explain the connection between imaginary numbers and Euler's number.

summation - Prove: $\sum_{x=0}^{n} (-1)^x {n \choose x} = 0$




Is there a quick, fancy, way of proving sums such as this?



Prove that:
$$\sum_{x=0}^{n} (-1)^x {n \choose x} = 0$$



A recent homework assignment I turned in had a couple problems similar to the above. For the most part, I used a proof by induction to solve them. They take forever to write up that way and I was wondering if there were any manipulations possible to speed these types of problems up.



Any suggestions would be greatly appreciated!


Answer




$$(1-1)^n=\sum_{x=0}^n {n\choose x}(-1)^x=0$$



Consider the following expansion: $$(x-a)^n=\sum_{i=0}^n{n \choose i}x^{n-i}(-a)^i.$$ In your question, $x =1$ and $a=1$.


Tuesday, March 20, 2018

elementary number theory - 'Gauss's Algorithm' for computing modular fractions and inverses


There is an answer on the site for solving simple linear congruences via so called 'Gauss's Algorithm' presented in a fractional form. Answer was given by Bill Dubuque and it was said that the fractional form is essentially Gauss, Disquisitiones Arithmeticae, Art. 13, 1801.


Now I have studied the article from the book, but I am not seeing the connection to the fractional form. What Gauss does is reduce $b$ via $p\bmod b= p - qb$, and I do not see that happening in the fractional form, nor do I see how it computes an inverse. I have already talked with Bill about this via comments, but decided to open a new question so he or anyone else can help me more intuitively understand what is going on here. This article is supposed to give an algorithm to compute inverses in a prime modulus, yet I have no idea how.


Edit:


Actual question for Bill:



I may have been asking some stupid questions up till now so I will give something concrete and hopefully you can provide an answer to that.


Let's take your sci.math example for this:


So we are looking for a multiplicative inverse $x$ of $60$ in modulo $103$


$$60x \equiv 1 \pmod{103}$$


The tool we can use for this is, as Bill has said, a special case of the Euclidean algorithm which iterates $(p\bmod b,\, p)$ instead of the usual Euclidean algorithm that iterates $(p \bmod b,\, b)$.


This is the result of that algorithm:


$$103=60 \cdot \color{#c00} 1 + 43 = 43 \cdot \color{#c00}2 + 17 = 17 \cdot \color{#c00} 6 + 1$$


And then this translates into the following in mod $103$: $$60 \cdot \color{#c00}{(-1)} \equiv 43 \rightarrow 43 \cdot \color{#c00}{(-2)} \equiv 17 \rightarrow 17 \cdot \color{#c00}{(-6)} \equiv 1$$


Producing the numbers in red which when multiplied give an inverse:


$$60 \cdot \color{#c00}{(-1)(-2)(-6)} \equiv 1 \pmod{103}$$ $$x \equiv-12 \pmod{103}$$



And this is fine and I see it works, of course only when the number and modulo are coprime.


Now my question is why this works. I am not interested in optimisations and different ways of reaching the inverse, but specifically why do the same values of the numbers in red(the coefficients of the algorithm descent) produce an inverse? This method of reusing the coefficients does not work via the normal Euclidean algorithm, but only with this special case. What is special about this? I would like to see a generalized proof or reason as to why the generated numbers produced via this special algorithm have this property.


Answer



Below we compare the related forms. First is the iterated descent $\,a\to 103\bmod a\,$ used by Gauss. Second is that rearranged into the form of descending multiples of $60.\,$ Third is the fractional view, and fourth is the graph of the descending multiples of $60$ (denominator descent graph).


$$\begin{align} 103\bmod{60} &= 103 - 1(60) = 43\\ 103\bmod 43 &= 103\color{#0a0}{-2(43)=17}\\ 103\bmod 17 &= 103-6(17) = 1 \end{align}\qquad\qquad\quad$$


$$\begin{array}{rl} \bmod{103}\!:\qquad\ (-1)60\!\!\!\! &\equiv\, 43 &\Rightarrow\ 1/60\equiv -1/43\\[.3em] \smash[t]{\overset{\large\color{#0a0}{*(-2)}}\Longrightarrow}\ \ \ \ \ \ \ \ \ \ (-2)(-1)60\!\!\!\! &\equiv \color{#0a0}{(-2)43\equiv 17}\!\! &\Rightarrow\ 1/60\equiv\ \ \ 2/17\\[.3em] \smash[t]{\overset{\large *(-6)}\Longrightarrow}\ \ \color{#c00}{(-6)(-2)(-1)}60\!\!\!\! &\equiv (-6)17\equiv 1 &\Rightarrow\ 1/60 \equiv {\color{#c00}{-12}}/1\\ \end{array}$$


$$ \begin{align} &\dfrac{1}{60}\ \,\equiv\ \ \dfrac{-1}{43}\, \ \equiv\, \ \dfrac{2}{17}\, \equiv\, \dfrac{\color{#c00}{-12}}1\ \ \ \rm[Gauss's\ algorithm]\\[.3em] &\, 60\overset{\large *(-1)}\longrightarrow\color{#0a0}{43}\overset{\large\color{#0a0}{*(-2)}}\longrightarrow\,\color{#0a0}{17}\overset{\large *(-6)}\longrightarrow 1\\[.4em] \Rightarrow\ \ &\,60*(-1)\color{#0a0}{*(-2)}*(-6)\equiv 1\ \Rightarrow\ 60^{-1}\rlap{\equiv (-1)(-2)(-6)\equiv \color{#c00}{-12}} \end{align}$$


The translation from the first form (iterated mods) to the second (iterated smaller multiples) is realized by viewing the modular reductions as modular multiplications, e.g.


$$\ 103\color{#0a0}{-2(43) = 17}\,\Rightarrow\, \color{#0a0}{-2(43) \equiv 17}\!\!\pmod{\!103} $$


This leads to the following simple recursive algorithm for computing inverses $\!\bmod p\,$ prime.



$\begin{align}\rm I(a,p)\ :=\ &\rm if\ \ a = 1\ \ then\ \ 1\qquad\qquad\ \ \ ; \ \ a^{-1}\bmod p,\,\ {\rm for}\ \ a,p\in\Bbb N\,\ \ \&\,\ \ 0 < a < p\ prime \\[.5em] &\rm else\ let\ [\,q,\,r\,]\, =\, p \div a\qquad ;\, \ \ p = q a + r\ \Rightarrow \color{#0a0}{-qa\,\equiv\, r}\!\!\pmod{\!p},\ \ 0 < r < a\,\\[.2em] &\rm\ \ \ \ \ \ \ \ \ ({-}q*I(r,p))\bmod p\ \ \ ;\ \ because\ \ \ \dfrac{1}a \equiv \dfrac{-q}{\color{#0a0}{-qa}}\equiv \dfrac{-q}{\color{#0a0}r}\equiv -q * I(r,p)\ \ \ \ \ \color{#90f}{[\![1]\!]} \end{align} $


Theorem $\ \ I(a,p) = a^{-1}\bmod p$


Proof $\ $ Clear if $\,a = 1.\,$ Let $\,a > 1\,$ and suppose for induction the theorem holds true for all $\,n < a$. Since $\,p = qa+r\,$ we must have $\,r > 0\,$ (else $\,r = 0\,\Rightarrow\,a\mid p\,$ and $\,1< a < p,\,$ contra $\,p\,$ prime). Thus $\,0 < r < a\,$ so induction $\,\Rightarrow\,I(r,p)\equiv \color{#0a0}{r^{-1}}$ so reducing equation $\color{#90f}{[\![1]\!]}\bmod p\,$ yields the claim.
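For readers who want to run the worked example, a direct transcription of $I(a,p)$ into Python (a sketch; it assumes $0 < a < p$ with $p$ prime, as in the theorem):

```python
def I(a, p):
    """Recursive modular inverse from the answer: 1/a = -q * I(r, p) mod p,
    where p = q*a + r with 0 < r < a."""
    if a == 1:
        return 1
    q, r = divmod(p, a)
    return (-q * I(r, p)) % p

print(I(60, 103))   # 91, i.e. -12 mod 103, matching the worked example above
```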


calculus - show that $\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$


show that


$$\int_{0}^{\infty} \frac {\sin^3(x)}{x^3}dx=\frac{3\pi}{8}$$


using different methods.



Thanks to all!


Answer



Let $$f(y) = \int_{0}^{\infty} \frac{\sin^3{yx}}{x^3} \mathrm{d}x$$ Then, $$f'(y) = 3\int_{0}^{\infty} \frac{\sin^2{yx}\cos{yx}}{x^2} \mathrm{d}x = \frac{3}{4}\int_{0}^{\infty} \frac{\cos{yx} - \cos{3yx}}{x^2} \mathrm{d}x$$ $$f''(y) = \frac{3}{4}\int_{0}^{\infty} \frac{-\sin{yx} + 3\sin{3yx}}{x} \mathrm{d}x$$ Therefore, $$f''(y) = \frac{9}{4} \int_{0}^{\infty} \frac{\sin{3yx}}{x} \mathrm{d}x - \frac{3}{4} \int_{0}^{\infty} \frac{\sin{yx}}{x} \mathrm{d}x$$


Now, it is quite easy to prove that $$\int_{0}^{\infty} \frac{\sin{ax}}{x} \mathrm{d}x = \frac{\pi}{2}\mathop{\mathrm{signum}}{a}$$


Therefore, $$f''(y) = \frac{9\pi}{8} \mathop{\mathrm{signum}}{y} - \frac{3\pi}{8} \mathop{\mathrm{signum}}{y} = \frac{3\pi}{4}\mathop{\mathrm{signum}}{y}$$ Then, $$f'(y) = \frac{3\pi}{4} |y| + C$$ Note that, $f'(0) = 0$, therefore, $C = 0$. $$f(y) = \frac{3\pi}{8} y^2 \mathop{\mathrm{signum}}{y} + D$$ Again, $f(0) = 0$, therefore, $D = 0$.


Hence, $$f(1) = \int_{0}^{\infty} \frac{\sin^3{x}}{x^3} = \frac{3\pi}{8}$$
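As a numerical sanity check on the final value (my own addition, not part of the answer): the tail beyond $x=200$ is bounded by $\int_{200}^{\infty} x^{-3}\,dx = 1.25\times 10^{-5}$, so a crude midpoint rule on $[0,200]$ already confirms $3\pi/8$:

```python
from math import sin, pi

def f(x):
    # integrand sin(x)^3 / x^3, with the removable singularity at 0 filled in
    return (sin(x) / x)**3 if x else 1.0

N, X = 200_000, 200.0                 # midpoint rule with step 0.001
h = X / N
approx = h * sum(f((i + 0.5) * h) for i in range(N))
print(approx, 3 * pi / 8)             # both ~ 1.178097...
```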


error function - Elegant proof for an inequality involving erf



I'm trying to prove the following inequality for $0 < x < 1$:




$$\operatorname{erf}\left(\frac{(1+x)\sqrt{\ln{(1+x)}}}{\sqrt{(1+x)^2 - 1}}\right) - \operatorname{erf}\left(\frac{\sqrt{\ln{(1+x)}}}{\sqrt{(1+x)^2 - 1}}\right) \geq \frac{x}{4}$$



Proof by WolframAlpha: http://goo.gl/15mrM



I could also construct a Proof by Mathematica, without too much trouble.



However, I'm looking for a more elegant proof of this inequality. My approach was going to involve showing that this holds for $x = 0$ and $x = 1$, and then show the function is concave. However, taking the second derivative yields the following monstrosity: http://goo.gl/fKxca



Is there a more elegant way to prove this? I wouldn't mind showing a weaker inequality of the form $\geq \frac{x}{c}$ (for some explicit $c$) if the proof was sufficiently simple.


Answer




I don't know if it's the elegant approach you're looking for, but here's a suggestion: fix any $x_0 > 0$, and define
$$
f_{x_0}\colon y > 0 \mapsto \operatorname{erf}\left(\frac{(1+y)\sqrt{\ln(1+x_0)}}{\sqrt{(1+x_0)^2-1}}\right)
$$
Now, what you want to prove is
$$\begin{equation}
f_{x_0}\!(x_0)-f_{x_0}\!(0) \geq \frac{x_0}{4} \tag{1}
\end{equation}$$
so it is sufficient to prove that for all $y\in[0,2x_0]$,
$$f_{x_0}\!(y)-f_{x_0}\!(0) \geq \frac{y}{4}$$
i.e.
$$\begin{equation}
\frac{f_{x_0}\!(y)-f_{x_0}\!(0)}{y} \geq \frac{1}{4} \tag{2}
\end{equation}$$
Since $f_{x_0}$ is concave, you can use the usual arguments about concavity/convexity (e.g., a concave function has a decreasing slope).



Does that make sense? (I'm not sure it is easy, but the whole point is "just" to reduce the problem to an actual concave function, for which (2) might be easier.)


calculus - The notation for partial derivatives



Today, in my lesson, I was introduced to partial derivatives. One of the things that confuses me is the notation. I hope that I am wrong and hope the community can contribute to my learning. In single-variable calculus, we know that, given a function $y =f(x)$, the derivative of $y$ is denoted as $\frac {dy}{dx}$. I understand this as the relative change in $y$, $\delta y$ given a small change in $x$, $\delta x$.



However, in today's lesson on partial derivative, my professor constantly used this notation.




Given a function $z = f(x,y)$, the first derivative with respect to $x$ is written as



$$ \frac{\partial z}{\partial x} $$



So, for example



$$
z = 5x+3y\\
\frac{\partial z}{\partial x} = 5
$$




Why can't I just write it as
$$
z = 5x+3y\\
\frac{d z}{d x} = 5
$$



Is it some convention or am I not understanding something in the notation?


Answer



First, rest assured that you're not the only one who's confused by the standard notation for partial derivatives. See this answer for a collection of answers I've written in response to such confusions.




The problem is that the standard notation doesn't indicate which variables are being held constant. It assumes that you've defined a function of a certain set of variables, and that everyone remembers what these are. That's fine if you only introduce a single function and write its partial derivatives as



$$
\frac{\partial f(x,y,z)}{\partial x}
$$



and the like, since the arguments for the function evaluation make up for what the notation for the partial derivative is missing, but it becomes a problem when you start writing things like



$$\frac{\partial f}{\partial x}$$



and especially when you have lots of things like $x,y,z$ floating around that all look like variables and the notation doesn't contain the slightest clue which of these are being treated as functions and which as independent variables being held constant.



In a certain sense, you're right that you could always regard $\dfrac{\partial f}{\partial x}$ as $\dfrac{\mathrm df}{\mathrm dx}$ of a univariate function, namely by regarding all other variables as parameters. That is, given a function $f(x,y)$ of two variables, you can regard $y$ as a fixed parameter and write $g(x)=f(x,y)$, and then $\dfrac{\mathrm dg}{\mathrm dx}=\dfrac{\partial f}{\partial x}$. Then if you feel things aren't quite confusing enough already, you can instead call this new univariate function by the same name as the multivariate function $f$ and write $\vphantom{\dfrac{\partial f}{\partial x}}f(x)=f(x,y)$, and then indeed $\dfrac{\mathrm df}{\mathrm dx}=\dfrac{\partial f}{\partial x}$, but you need to remember what you mean by that: $\dfrac{\mathrm df(x)}{\mathrm dx}=\dfrac{\partial f(x,y)}{\partial x}$, with two different uses of the symbol $f$.



However, this view is rarely very helpful, since the variables of a multivariate function are usually variables on an equal footing for good reason, and one would usually have introduced them as fixed parameters in the first place if that were the natural way to think of them. So usually it's better to view the univariate function that you get by keeping all but one variable fixed as a more temporary construct that's used only for defining and thinking about the partial derivative, but not as something that should appear in the notation as a univariate function in its own right.


Monday, March 19, 2018

bernoulli numbers - Connection between Faulhaber's formula and Riemann zeta function



Denote by $F_k(n)$ the sum of the $k$-th powers of the first $n$ natural numbers, given in closed form by Faulhaber's formula. Let's give a few examples:
$$F_1(n)=1+2+3+...+n=\frac{n(n+1)}{2}$$
$$F_2(n)=1+4+9+...+n^2=\frac{n(n+1)(2n+1)}{6}$$
Now we know that the Riemann Zeta function is defined as follows for $s>1$
$$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}$$
From the analytic continuation we know that $\zeta(-2n)=0$ for all natural $n$. Also $\zeta(-n)=(-1)^n \frac{B_{n+1}}{n+1}$ where $B_n$ is $n$-th Bernoulli number. Now what i found was, that if take the definite integral from $-1$ to $0$
I get the riemann zeta function at $-k$:

$$\int_{-1}^{0}{F_k(n)}\,dn=\zeta(-k)$$ Let me give a few examples:
$$\int_{-1}^{0}{F_1(n)}dn=\int_{-1}^{0}{\frac{n^2}{2}+\frac{n}{2}}dn=-\frac{1}{12}$$
$$\int_{-1}^{0}{F_2(n)}dn=\int_{-1}^{0}{\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}}dn=0$$
and so on...
Now I know these sums of powers are somehow related to the Bernoulli numbers, and so are the values of the Riemann zeta function at negative integers, but I can't seem to find the connection between them. Would anyone please give me an argument for why this holds?


Answer



You can do it using the Faulhaber's polynomials, the binomial series, and some analytic continuation.



By induction we have the polynomials




$$F_k(N) = \sum_{n=1}^N n^k = \sum_{m=0}^{k+1} c_{m,k} N^m$$



For $|x| < 1$ we have the Taylor series
$$(1+x)^{-s} = \sum_{l=0}^\infty {-s \choose l} x^l, \qquad {-s \choose l} = \prod_{j=0}^{l-1} \frac{-s-j}{j+1}, \\ n^{-s} - (n+1)^{-s} = n^{-s} (1-(1+n^{-1})^{-s})=-n^{-s} \sum_{l=1}^\infty {-s \choose l} n^{-l}$$
At first for $\Re(s) > k+1$ and by analytic continuation for every $s$
$$\zeta(s-k) = \sum_{n=1}^\infty n^k n^{-s} = \sum_{n=1}^\infty F_k(n) (n^{-s}-(n+1)^{-s})\\=1-2^{-s}+ \sum_{n=2}^\infty F_k(n) (n^{-s}-(n+1)^{-s})=1-2^{-s} -\sum_{n=2}^\infty \sum_{m=0}^{k+1} c_{m,k} n^m n^{-s} \sum_{l=1}^\infty {-s \choose l} n^{-l}$$
$$ =1-2^{-s} -\sum_{m=0}^{k+1} c_{m,k} \sum_{l=1}^\infty {-s \choose l} (\zeta(s+l-m)-1) \tag{1}$$



Also $\zeta(s) = \sum_{n=1}^\infty \int_n^\infty s x^{-s-1}dx =s \int_1^\infty \lfloor x \rfloor x^{-s-1}dx =\frac{s}{s-1}+s \int_1^\infty (\lfloor x \rfloor -x)x^{-s-1}dx$ thus $\lim_{s \to 1} (s-1)\zeta(s) = 1$ and together with $(1)$ it shows $(s-1)\zeta(s)$ is analytic everywhere.




Therefore, letting $s \to 0$ in $(1)$: every term with $l-m \ne 1$ vanishes, since ${-s \choose l} \to 0$ while $\zeta(s+l-m)-1$ stays bounded, and for $l=m+1$ the factor $-s$ in ${-s \choose m+1}$ cancels the simple pole of $\zeta(s+1)$, so that $\lim_{s \to 0} {-s \choose m+1}\zeta(s+1) = -\prod_{j=1}^{m} \frac{-j}{j+1}$. Hence
$$\zeta(0-k) =\sum_{m=0}^{k+1} c_{m,k}\prod_{j=1}^{m} \frac{-j}{j+1}= \sum_{m=0}^{k+1} \frac{(-1)^m c_{m,k}}{m+1} = \int_{-1}^0 F_k(t)\,dt $$
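A quick symbolic check of the identity for the first few $k$ (my own sketch; sympy knows the Faulhaber closed forms and evaluates $\zeta$ exactly at negative integers):

import sympy as sp

n, i = sp.symbols('n i')

for k in range(1, 7):
    Fk = sp.summation(i**k, (i, 1, n))     # Faulhaber polynomial F_k(n)
    lhs = sp.integrate(Fk, (n, -1, 0))     # integral of F_k over [-1, 0]
    rhs = sp.zeta(-k)                      # e.g. zeta(-1) = -1/12, zeta(-2) = 0
    print(k, lhs, rhs, sp.simplify(lhs - rhs) == 0)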


limits - Evaluate $\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}$.











Evaluate
$$\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}.$$



Can anyone help me with this? I have no idea how to start. Thank you.


Answer



Let's work it out elementarily by wisely applying the Cauchy–d'Alembert criterion:




$$\lim_{n\to\infty} \frac{n!^{\frac{1}{n}}}{n}=\lim_{n\to\infty}\left(\frac{n!}{n^n}\right)^{\frac{1}{n}} = \lim_{n\to\infty} \frac{(n+1)!}{(n+1)^{(n+1)}}\cdot \frac{n^{n}}{n!} = \lim_{n\to\infty} \frac{n^{n}}{(n+1)^{n}} =\lim_{n\to\infty} \frac{1}{\left(1+\frac{1}{n}\right)^{n}}=\frac{1}{e}. $$



Also notice that by applying the Stolz–Cesàro theorem you get the celebrated limit:



$$\lim_{n\to\infty} (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}} = \frac{1}{e}.$$



The sequence $L_{n} = (n+1)!^{\frac{1}{n+1}} - (n)!^{\frac{1}{n}}$ is called the Lalescu sequence, after the great Romanian mathematician Traian Lalescu.



Q.E.D.
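A numerical sanity check (my own sketch), using math.lgamma so that $n!$ never overflows:

import math

for n in (10, 100, 1000, 10000):
    val = math.exp(math.lgamma(n + 1) / n) / n    # (n!)^(1/n) / n
    print(n, val)
print("1/e =", 1 / math.e)                        # 0.36787944117...

The printed values drift down towards $1/e$, as the Cauchy–d'Alembert computation predicts.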



probability - On average, how many friends would I need to have to have at least one friend's birthday every day?

I know that because of the birthday problem, even with 365 friends you're going to have a lot of doubled birthdays, and that even with infinitely many friends there's a vanishing chance that some day is left out. But I was curious how many friends you'd need on average to have every day represented (ignoring leap day and assuming birthdays are equally distributed). To generalize further: given $n$ unique boxes, and placing balls with an equal $1/n$ chance of landing in any box, how many balls would you have to place on average before every box had at least one ball?
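This is the classical coupon collector's problem: for $n$ boxes the expected number of balls is exactly $nH_n=n\left(1+\frac12+\cdots+\frac1n\right)$, which for $n=365$ is about $2364.65$. A Monte Carlo sketch (my own) agrees:

import random

def friends_needed(n=365):
    seen, count = set(), 0
    while len(seen) < n:                 # add friends until every day is hit
        seen.add(random.randrange(n))
        count += 1
    return count

runs = 2000
print(sum(friends_needed() for _ in range(runs)) / runs)   # roughly 2365
print(365 * sum(1 / k for k in range(1, 366)))             # exact: ~2364.65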

elementary number theory - Prove by induction that $a-b|a^n-b^n$




Given $a,b,n \in \mathbb N$, prove that $a-b|a^n-b^n$. I am thinking about induction. The assertion is obviously true for $n=1$. If I assume the assertion is true for a given $k \in \mathbb N$, i.e. $a-b|a^k-b^k$, I should be able to show that $a-b|a^{k+1}-b^{k+1}$, but I can't do it. Any help is welcome. Thanks!



Answer



To complete the induction, note that



$a^{k + 1} - b^{k + 1} = a^{k + 1} - a^kb + a^kb - b^{k + 1} = a^k(a - b) + b(a^k - b^k), \tag{1}$



then simply observe that



$(a - b) \mid a^k(a - b), \tag{2}$



which is obvious, and that




$(a - b) \mid (a^k -b^k) \tag{3}$



by the induction hypothesis



$(a - b) \mid (a^k - b^k). \tag{4}$



Since $a - b$ divides both summands, it divides their sum. QED



Hope this helps. Cheerio,




and as always,



Fiat Lux!!!
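A quick numeric spot-check of the divisibility (my own sketch; obviously no substitute for the induction):

import random

for _ in range(1000):
    a, b = random.randint(1, 50), random.randint(1, 50)
    n = random.randint(1, 10)
    if a != b:
        assert (a**n - b**n) % (a - b) == 0
print("all checks passed")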


elementary number theory - Bezout identity of 720 and 231 by hand impossible?

Is it possible to solve this by hand, without using the extended Euclidean algorithm?


We do the Euclidean algorithm and we get:


720 = 231 * 3 + 27


231 = 27 * 8 + 15


27 = 15 * 1 + 12


15 = 12 * 1 + 3


12 = 3 * 4 + 0



The GCD is 3.


We now isolate the remainders:


27 = 720 - 231 * 3


15 = 231 - 27 * 8


12 = 27 - 15 * 1


3 = 15 - 12 * 1


We proceed by substitution:


3 = 15 - 12 * 1


What now? How can we proceed when the remainder is only multiplied by $1$? Is there no substitution possible?


Help! Thanks!
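For what it's worth, the substitution does go through: replace the $12$ in $3 = 15 - 12\cdot1$ using $12 = 27 - 15\cdot1$ to get $3 = 2\cdot15 - 27$, then substitute $15 = 231 - 27\cdot8$ to get $3 = 2\cdot231 - 17\cdot27$, and finally $27 = 720 - 231\cdot3$ to get $3 = 53\cdot231 - 17\cdot720$. This bookkeeping is exactly the extended Euclidean algorithm; a minimal sketch of it (my own):

def extended_gcd(a, b):
    # invariant: old_r == old_s*a + old_t*b at every step
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

g, x, y = extended_gcd(720, 231)
print(g, x, y)               # 3 -17 53
print(720 * x + 231 * y)     # 3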

matrices - computing polynomial determinants

Let $A$ be a $3\times3$ complex matrix and $B$ its transpose. Let $a$ be a complex number such that $a \neq 1$ and $\det(A+aB)=0$. Compute $\det(A+B)$ in terms of $a$ and $\det(A)$.



I tried to use the polynomial expansion $\det(A+xB)=\det A + qx + wx^2 + \det B\,x^3$, valid for any $3\times3$ matrices $A,B$. Probably I should have found some relations between the coefficients $q$ and $w$ because $B$ is $A$ transposed, but I got stuck.
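A sketch of one approach (my own, so treat it as a hint rather than the intended solution): since $\det(M^T)=\det(M)$, we have $\det(A+xB)=\det\big((A+xB)^T\big)=\det(B+xA)=x^3\det\big(A+\tfrac1x B\big)$ for $x\neq0$, so the cubic is palindromic, i.e. $\det B=\det A$ and $q=w$. Writing $d=\det A$, the polynomial is $p(x)=d(1+x^3)+q(x+x^2)$; the condition $p(a)=0$ (assuming additionally $a\neq-1,0$) gives $q=-d\,\frac{a^2-a+1}{a}$, and hence $\det(A+B)=p(1)=2d+2q=-\frac{2\det(A)(a-1)^2}{a}$. The palindromic symmetry can be checked symbolically:

import sympy as sp

x = sp.symbols('x')
A = sp.Matrix(3, 3, sp.symbols('a0:9'))
p = (A + x * A.T).det()
c3, c2, c1, c0 = sp.Poly(sp.expand(p), x).all_coeffs()   # det B, w, q, det A
print(sp.expand(c3 - c0) == 0, sp.expand(c2 - c1) == 0)  # True True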

integer linear combination of irrational number is irrational number?



How can I prove that




a nonzero integer linear combination of two rationally independent irrational numbers is still an irrational number? That is to say, given two irrational numbers $a$ and $b$ such that $a/b$ is irrational too, why is it that for any nonzero integers $m,n$ the number $ma+nb$ is irrational?


Answer



That's not true: take $a=\sqrt{2} -1$, $b=\sqrt{2}$. Then $\frac{a}{b} = 1 - \frac{1}{\sqrt{2}}$ isn't rational, but $b-a=1$.


Sunday, March 18, 2018

finite fields - How to find the minimal polynomial for an element in $\mathrm{GF}(2^m)$?

I'm new to finite field theory. Could someone please explain how minimal polynomials are generated for each element in $\mathrm{GF}(2^m)$? I searched the website but couldn't find any clue.
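The standard recipe: the minimal polynomial of $\alpha\in\mathrm{GF}(2^m)$ is $\prod_i\big(X-\alpha^{2^i}\big)$, the product running over the distinct Frobenius conjugates $\alpha,\alpha^2,\alpha^4,\dots$ (equivalently, over a cyclotomic coset of exponents); its coefficients automatically land in $\mathrm{GF}(2)$. Below is a from-scratch sketch for $\mathrm{GF}(2^3)=\mathrm{GF}(2)[x]/(x^3+x+1)$; the choice of modulus and all names are my own.

M, m = 0b1011, 3            # x^3 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    # product in GF(2^m): carry-less multiply, reduced mod M
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= M
    return r

def min_poly(a):
    # coefficients (lowest degree first) of the minimal polynomial of a
    orbit, c = [], a
    while c not in orbit:   # Frobenius orbit: a, a^2, a^4, ...
        orbit.append(c)
        c = gf_mul(c, c)
    poly = [1]
    for c in orbit:         # multiply out (X + c); in characteristic 2, -c = c
        shifted = [0] + poly
        scaled = [gf_mul(c, p) for p in poly] + [0]
        poly = [u ^ v for u, v in zip(shifted, scaled)]
    return poly

print(min_poly(0b010))      # [1, 1, 0, 1], i.e. 1 + X + X^3

For the element $x$ (bit-mask 0b010) this prints the coefficients of $1+X+X^3$, the defining polynomial itself, as expected.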

sequences and series - arithmetic progression, division and remainder. finding pattern length



When all elements of an arithmetic progression are divided by a constant number $k$ and the remainders are written down, you'll quickly notice that a repeating series of numbers appears. A simple example:



7   42  77  112 147 182 217 252 287 322 357 392 (progression, 35 added each time)


7 42 17 52 27 2 37 12 47 22 57 32 (remainder after division by 60)


When 35 is added again, that gives 392+35=427. The remainder after division by 60 is again 7. The pattern



7   42  17  52  27  2   37  12  47  22  57  32


repeats itself. In this example, this pattern consists of 12 numbers.




Can you calculate the length of that pattern with a formula when you know the constant difference of the progression and the divisor? I tried to figure this out, but failed.


Answer



The period in the example is the smallest positive integer $k$ with the property $$35k\equiv 0 \pmod{60}$$



We get $$k=\frac{60}{\gcd(35,60)}=\frac{60}{5}=12$$



This way you can find the period in general.
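A brute-force check of this formula against the numbers from the question (my own sketch):

from math import gcd

d, k, start = 35, 60, 7                  # difference, divisor, first term
remainders, seen, x = [], set(), start
while x % k not in seen:
    seen.add(x % k)
    remainders.append(x % k)
    x += d
print(remainders)                        # [7, 42, 17, 52, 27, 2, 37, 12, 47, 22, 57, 32]
print(len(remainders), k // gcd(d, k))   # 12 12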


summation - Sum of divergent series




I saw a lot of articles on Math SE like "Why does $1+2+3+\cdots=-1/12$?" and "$S=1+10+100+1000+10000+\dots=-1/9$? How is that?", and a lot of others. I also saw one on Ramanujan summation, but I do not get the contradiction.



I do not want an explanation of how the sum of such a series is calculated, since I have read those articles; I want an explanation of the logic behind these series.




  • Are these contradictory results?

  • Where is the logic behind such series?

  • How come the sum of infinite positive numbers is equal to a negative one?

  • Is the problem with infinity $\infty$?

  • If someone uses this result, then they can derive a lot of absurd results (e.g. $1=0$); how can this be explained?




I appreciate your help. Thanks.


Answer



L. Euler explained his assumptions about infinite series, convergent or divergent, with the following idea (just paraphrasing; I don't have the article at hand, but you can look at the treatise "De seriebus divergentibus" in the Euler archives): the evaluation of an infinite series is different from a finite sum. But whenever we want to assign a value to such a series, we should do it in the sense that it is the result of an infinitely applied arithmetic operation; so the geometric series (to which we meanwhile assign a value) occurs as the result of the infinite formal long division $s(x) = {1 \over 1-x } \to s(x) = 1 + x + x^2 + \cdots$, and then we insert the value for $x$ in the finite rational formula.



Possibly this is meant in the sense that we can similarly discuss infinite periodic continued fractions as representations of finite expressions like $\sqrt{1+x}$ and others. It is somehow "compatible" with an axiom that we require for number theory: that we can have a closed-form representation for a general infinitely repeated (symbolic) algebraic operation. (In the German translation of E247 this occurs in §11 and §12.)



From this, I think, for instance Euler-summation and other manipulations on infinite (convergent and divergent) series by L. Euler can be nicely understood.




[update] The Euler archives seem to have moved to the MAA; the original links, for instance //www.eulerarchive.com/, have been taken over by completely unrelated advertising. A seemingly valid link to Ed Sandifer's column "How Euler did it" exists, however it is only accessible via internal MAA access (but I think via webarchive.org one can still access the formerly openly available pages).



[update 2]: here is a currently valid link to Ed Sandifer's article


real analysis - How does $(x_n)$ converge for $x_1=1$, $x_{n+1}=\frac{1}{x_n+3}$ for $n=1,2,\ldots$?



Show that the sequence $(x_n)$ defined by $$x_1=1\quad \text{and}\quad x_{n+1}=\frac{1}{x_n+3} \quad (n=1,2,\ldots)$$ converges and determine its limit.



I tried to show that $(x_n)$ is a Cauchy sequence, or that $(x_n)$ is a decreasing (or increasing) and bounded sequence, but I failed at every step.


Answer



Hint: For $x,y \geq 0$ we have $\left\vert\frac{1}{x+3} - \frac{1}{y+3}\right\vert = \left\vert\frac{y-x}{(x+3)(y+3)}\right\vert \leq \frac{1}{9}\vert x-y\vert$.
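Since the hint makes the map $x\mapsto\frac{1}{x+3}$ a contraction with constant $\frac19$, the Banach fixed-point theorem gives convergence to the unique fixed point, the positive solution of $L=\frac{1}{L+3}$, namely $L=\frac{\sqrt{13}-3}{2}\approx0.3028$. A quick numerical sketch (my own):

import math

x = 1.0
for _ in range(30):
    x = 1 / (x + 3)               # x_{n+1} = 1/(x_n + 3)
print(x)                          # 0.30277563773...
print((math.sqrt(13) - 3) / 2)    # the fixed point of L = 1/(L+3)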


calculus - Proving that ${n \over 2^n}$ converges to $0$


I'm completely clueless on this one. I can easily calculate the limit using L'Hôpital's rule, but proving that the sequence converges to $0$ is far more tricky.


$$a_n= {n \over 2^n}$$


Any help?


Answer



Using the fact that $2^n>n^2$ for $n > 4$, we have: $$0 \leq a_n < \frac{n}{n^2}=\frac{1}{n}.$$


Hence, by the squeeze theorem, $\displaystyle \lim_{n \to \infty}a_n=0.$


real analysis - Intermediate Value Property + BV nearly everywhere = continuous?



It is well known that a function $f:[a,b]\to\mathbb R$ which has the intermediate value property (i.e. if $a',b'\in[a,b]$, then for each $c$ between $f(a')$ and $f(b')$ there is some $x$ between $a'$ and $b'$ such that $f(x)=c$) cannot have jump discontinuities, and that a function $g:[a,b]\to\mathbb R$ of bounded variation can only have jump discontinuities. Therefore, a function having the intermediate value property and being of bounded variation must be continuous. I wonder if the BV-condition can be relaxed a little, so my question is:



Is there a discontinuous function $f:[a,b]\to\mathbb R$ and a function $g:[a,b]\to\mathbb R$ of bounded variation, such that $f$ has the intermediate value property and agrees with $g$ at every point of $[a,b]$ except at countably many?



Any hints/help is highly appreciated. Thanks in advance!


Answer



Let $x\in[a,b]$ be a discontinuity of $f$.

We assume without loss of generality that $x\in(a,b)$, the case of the boundary can be dealt with similarly.



Since $g$ is of bounded variation, the limits $l=\lim_{y\to x^-}g(y)$ and $L=\lim_{y\to x^+}g(y)$ exist and are finite.
On the other hand, since $f$ is Darboux and discontinuous at $x$, at least one of $\lim_{y\to x^-}f(y)$ and $\lim_{y\to x^+}f(y)$ must not exist, say the latter.



We hence have that



\begin{align}\forall \epsilon>0,\,&\exists\delta>0,\,\forall y \in(x,x+\delta),\,\,|g(y)-L|<\epsilon\tag{1}\\
\exists \epsilon_0>0,\,&\forall\delta>0,\,\exists y \in(x,x+\delta),\,\,|f(y)-L|\geq\epsilon_0\tag{2}\end{align}




Take $\epsilon=\frac{\epsilon_0}{2}$ in $(1)$, so that there is some $\delta_0>0$ with the property that $|g(y)-L|<\frac{\epsilon_0}{2}$ whenever $y \in (x,x+\delta_0)$.
By $(2)$, there is some $y_1\in (x,x+\delta_0)$ with $|f(y_1)-L|\geq \epsilon_0$.



If $g=f$ except on a countable subset of $(x,x+\delta_0)$, there must be some $y_2\in(x,y_1)$ with $g(y_2)=f(y_2)$, since the interval $(x,y_1)$ is uncountable.
Hence, $|g(y_2)-L|=|f(y_2)-L|<\frac{\epsilon_0}{2}$.



Now, since $f$ is Darboux, it must assume all values between $f(y_2)$ and $f(y_1)$ on $(y_2,y_1)$.
In particular, it must assume an uncountable range of values $v$ with $\frac{\epsilon_0}{2}\leq |v-L|\leq \epsilon_0$.
This means there is some $y_3\in(y_2,y_1)\subset(x,x+\delta_0)$ with $\frac{\epsilon_0}{2}\leq |g(y_3)-L|\leq \epsilon_0$, which contradicts the defining property of $\delta_0$. $\square$


Saturday, March 17, 2018

Analysis Proof - different conditions.



A continuous function on $[a,b]$ is also uniformly continuous on $[a,b]$.



The following tries to illustrate what happens when the interval is not closed:




Show: $f(x) = \frac{1}{x}$ is not uniformly continuous on the half-open interval $(0, 1]$.



Proof:



Take $\epsilon = 1$. We show that there is no $\delta>0$ such that for all $p$ and all $x$ with $|p - x| < \delta$ we have $|f(p) - f(x)| < \epsilon$.
Take the sequences $x_n =\frac{1}{n}$ and $y_n =\frac{1}{n+1}$. Then $|f(x_n) - f(y_n)| = 1$, but $|x_n - y_n| \rightarrow 0$.
So for any $\delta > 0$, there exists $n$ such that $|x_n - y_n| < \delta$ but $|f(x_n) - f(y_n)| \not< 1$. So $f$
is not uniformly continuous.




I looked at this proof and followed the definition/theorem chase, pleased that I understood, until I realised: wait... where have we used the fact that this is the half-open interval $(0,1]$? If I didn't know any better, I would have believed this as a claimed disproof of the uniform continuity of $f$ on $[0,1]$.
Could someone please explain?


Answer



$f$ is not defined at $0$, so it cannot be continuous at zero, and the theorem for closed intervals does not apply. The half-open interval enters the proof implicitly: the sequences $x_n, y_n$ lie in $(0,1]$ but accumulate at the excluded endpoint $0$, exactly where $f$ blows up. No continuous function on $[0,1]$ can restrict to $1/x$ on $(0,1]$, since $1/x$ is unbounded near $0$, and on a closed bounded interval continuity forces uniform continuity.


algebra precalculus - Polynomial with real roots



Consider the polynomial: $$f=X^4+4X^3+6X^2+aX+b$$
We know that $f$ has four real roots. Let $x_1,x_2,x_3,x_4$ be the roots of this polynomial. How can one compute $$x_1^{2015}+x_2^{2015}+x_3^{2015}+x_4^{2015}?$$
If $a=4$ and $b=1$, we obtain a self-reciprocal (palindromic) polynomial. We can write $f=(X+1)^4$, thus $x_1=x_2=x_3=x_4=-1$. Hence the sum computes to $-4$.
Are there any other cases ($a,b$) to consider? I thought of using the formula for the quartic equation and paying attention to the cases where we have only real roots.

Any ideas? Thank you!


Answer



We know the following to be true:



$$
\tag1x_1+x_2+x_3+x_4 = -4
$$
$$
\tag2x_1x_2+x_2x_3+x_3x_4+x_4x_1+x_1x_3+x_2x_4 = 6
$$

$$
\tag3x_1 x_2x_3+x_2x_3x_4+x_3x_4x_1+x_4x_1x_2 = -a
$$
$$
\tag4x_1x_2x_3x_4 = b
$$



Now we can square $(1)$ to get



$$
\begin{align}
(x_1+x_2+x_3+x_4)^2 = {}&x^2_1+x^2_2+x^2_3+x^2_4\\
&+2[x_1x_2+x_2x_3+x_3x_4+x_4x_1+x_1x_3+x_2x_4]
\end{align}
$$



Substituting in the values from $(1)$ and $(2)$ gives



$$
\begin{align}

(-4)^2 &= x^2_{1}+x^2_{2}+x^2_{3}+x^2_{4}+2(6)\\
16&=x^2_{1}+x^2_{2}+x^2_{3}+x^2_{4}+12\\
4&=x^2_{1}+x^2_{2}+x^2_{3}+x^2_{4}
\end{align}
$$



Using the Cauchy–Schwarz inequality, we get



$$
\left(x^2_{1}+x^2_{2}+x^2_{3}+x^2_{4}\right)(1^2+1^2+1^2+1^2)\geq (x_{1}+x_{2}+x_{3}+x_{4})^2

$$



where equality holds when



$$
\tag5x_{1} = x_{2} = x_{3} = x_{4}
$$



Thus, the equality condition holds, because $4\cdot 4= (-4)^2$. So from $(1)$ and $(5)$, we have $$x_{1} = x_{2} = x_{3} = x_{4}=-1$$




Setting the values of $x_{1} = x_{2} = x_{3} = x_{4}=-1$ in $(3)$ and $(4)$, we get $(a,b)=(4,1)$, which is thus the unique solution. So we can evaluate the sum as



$$
\begin{align}
x^{2015}_{1}+x^{2015}_{2}+x^{2015}_{3}+x^{2015}_{4} &= (-1)+(-1)+(-1)+(-1)\\
&=-4
\end{align}
$$
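A short sympy confirmation of the conclusion (my own sketch):

import sympy as sp

X = sp.symbols('X')
f = X**4 + 4*X**3 + 6*X**2 + 4*X + 1                      # the case (a, b) = (4, 1)
print(sp.factor(f))                                        # (X + 1)**4
roots = sp.roots(f)                                        # {-1: 4}
print(sum(r**2015 * mult for r, mult in roots.items()))    # -4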


Solving an unsolvable integral ??





I recently stumbled upon the indefinite integral $\int \frac{\sin x}{x}\,dx$. (Another similar one is $\int \sqrt{x}\,\sin x \,dx$.)
However, if we expand $\sin x$ as its Maclaurin series, we get an infinite series of easily integrable monomials. What's the catch?


Answer



The quote you're referring to is "It is worth mentioning that integration by parts is not applicable to product of functions in all cases. For instance, the method does not work for $\int \sqrt{x} \sin\; x \; dx$.
The reason is that there does not exist any function whose derivative is

$\sqrt{x} \sin \; x$."



As stated, this is simply wrong. A correct statement would be that there is not an elementary function whose derivative is $\sqrt{x} \sin\; x$. Integration by parts "works" on $\sqrt{x} \sin\; x$, but it doesn't give you a closed form for the antiderivative. It will say, e.g., that



$$ \int \sqrt{x} \sin \; x \; dx = \frac{2}{3} x^{3/2} \sin\; x - \frac{2}{3} \int x^{3/2} \cos\; x\; dx $$



which is correct, but not really helpful: it leaves you no closer to finding a formula than before.
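Both points can be seen with sympy (my own sketch): the antiderivative of $\frac{\sin x}{x}$ comes back as the non-elementary special function $\operatorname{Si}(x)$, and the integration-by-parts identity above survives differentiation.

import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(sp.sin(x) / x, x))      # Si(x): a named special function

# differentiating the by-parts identity: d/dx of (2/3) x^(3/2) sin x
lhs = sp.Rational(2, 3) * x**sp.Rational(3, 2) * sp.sin(x)
rhs = sp.sqrt(x) * sp.sin(x) + sp.Rational(2, 3) * x**sp.Rational(3, 2) * sp.cos(x)
print(sp.simplify(sp.diff(lhs, x) - rhs))  # 0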


reference request - Golden ratio / Fibonacci which branch of math?

Friends,



Under which branch of math are the golden ratio and the Fibonacci sequence studied?



Can you recommend some good textbooks on the subject?




Thanks

linear algebra - If $AB = I$ then $BA = I$




If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$.





I do not understand anything more than the following.




  1. Elementary row operations.

  2. Linear dependence.

  3. Row reduced forms and their relations with the original matrix.




If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?



P.S.: Please avoid using the transpose and/or inverse of a matrix.


Answer



Dilawar says in 2. that he knows linear dependence! So I will give a proof, similar to that of TheMachineCharmer, which uses linear independence.



Suppose each matrix is $n$ by $n$. We consider our matrices to all be acting on some $n$-dimensional vector space with a chosen basis (hence isomorphism between linear transformations and $n$ by $n$ matrices).



Then $AB$ has range equal to the full space, since $AB=I$. Thus the range of $B$ must also have dimension $n$. For if it did not, then a set of $n-1$ vectors would span the range of $B$, so the range of $AB$, which is the image under $A$ of the range of $B$, would also be spanned by a set of $n-1$ vectors, hence would have dimension less than $n$.




Now note that $B=BI=B(AB)=(BA)B$. By the distributive law, $(I-BA)B=0$. Thus, since $B$ has full range, the matrix $I-BA$ gives $0$ on all vectors. But this means that it must be the $0$ matrix, so $I=BA$.
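A numerical illustration of the theorem (my own sketch; a sanity check, not a proof): construct a right inverse of a random square matrix and observe that it is a left inverse too.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))            # almost surely invertible
B = np.linalg.solve(A, np.eye(5))          # chosen so that A @ B = I
print(np.allclose(A @ B, np.eye(5)))       # True
print(np.allclose(B @ A, np.eye(5)))       # True: BA = I as well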


induction - Show that $(\sqrt{2}+1)^{1/n} + (\sqrt{2}-1)^{1/n}\notin\mathbb Q$



How could we prove that for every positive integer $n$, the number
$$({\sqrt{2}+1})^{1/n} + ({\sqrt{2}-1})^{1/n}$$ is irrational?




I think it could be done inductively from a more general expression, but I don't know how.



I made an effort trying to solve it using many different methods.


Answer



Assume that for a certain positive integer $n$
$$
\big(\sqrt{2}-1\big)^{1/n}+\big(\sqrt{2}+1\big)^{1/n}\in \mathbb Q.
$$
Clearly, $\,\big(\sqrt{2}-1\big)^{1/n}\cdot\big(\sqrt{2}+1\big)^{1/n}=1$, and thus
$$

\left(\big(\sqrt{2}-1\big)^{1/n}+\big(\sqrt{2}+1\big)^{1/n}\right)^2=
\big(\sqrt{2}-1\big)^{2/n}+2+\big(\sqrt{2}+1\big)^{2/n}
\in \mathbb Q,
$$
and hence
$$
\big(\sqrt{2}-1\big)^{2/n}+\big(\sqrt{2}+1\big)^{2/n}
\in \mathbb Q.
$$
Next

$$
\left(\big(\sqrt{2}-1\big)^{1/n}+\big(\sqrt{2}+1\big)^{1/n}\right)^3=
\big(\sqrt{2}-1\big)^{3/n}+\big(\sqrt{2}+1\big)^{3/n}+3\left(\big(\sqrt{2}-1\big)^{1/n}+\big(\sqrt{2}+1\big)^{1/n}\right),
$$
and hence
$$
\big(\sqrt{2}-1\big)^{3/n}+\big(\sqrt{2}+1\big)^{3/n}\in\mathbb Q.
$$
Let $s_k=\big(\sqrt{2}-1\big)^{k/n}+\big(\sqrt{2}+1\big)^{k/n}$. Pairing the terms $j$ and $k-j$ of the binomial expansion of $s_1^k$ and using $\big(\sqrt{2}-1\big)^{1/n}\big(\sqrt{2}+1\big)^{1/n}=1$, we get
$$
s_1^k=s_k+\binom{k}{1}s_{k-2}+\binom{k}{2}s_{k-4}
+\cdots+\binom{k}{\lfloor k/2\rfloor}s_{k-2\lfloor k/2\rfloor},
$$
(for even $k$ the last term should be read as $\frac{1}{2}\binom{k}{k/2}s_0$), which means that we can inductively show that
$$
s_1,s_2,\ldots,s_n\in\mathbb Q.
$$
But
$$
s_n=\big(\sqrt{2}-1\big)+\big(\sqrt{2}+1\big)=2\sqrt{2}\not\in\mathbb Q,

$$
and hence
$$
s_1=\big(\sqrt{2}-1\big)^{1/n}+\big(\sqrt{2}+1\big)^{1/n}\not\in\mathbb Q.
$$



Generalization. If $a,b\in\mathbb R$, such that
$a,b, a+b\not\in\mathbb Q$ and $ab\in\mathbb Q$, then using exactly this method, we obtain that
$a^{1/n}+b^{1/n}\not\in\mathbb Q$, for all $n\in\mathbb N$.


Proving $3^n \geq 3n$ using mathematical induction



So I have to prove that $3^n$ is greater than or equal to $3n$ using induction. The base case is not a problem, but I can't seem to figure out how to get from $(n-1)$ to $n$. I've tried saying:
$$3^n=3\cdot3^{n-1}>3\cdot3(n-1)$$
$$3\cdot3(n-1)=9n-9$$



I'm pretty sure my end goal is $3n$, but I'm not really sure how to get there.
Any suggestions would be much appreciated.


Answer




I will show that we may assume that the inequality holds for some $k$ and use that to show that it holds for $k+1$.



Use the base case $n=2$, $3^2>3(2)$, which is obviously true.



Now, assume that for $n=k$ that $3^k>3k$. This is called the induction hypothesis. Now, we must prove the inequality for $k+1$.



$3^k>3k$ via our induction hypothesis.



$3\cdot3^k>3\cdot3k$ multiplying by $3$ on both sides.




$3^{k+1}>3\cdot 3k = 9k > 3k+3=3(k+1),$ where $9k>3k+3$ because $6k>3$ for $k\geq 1$.



Thus, the inductive step and our proof are complete.


trigonometry - Prove that $\sin\frac{\pi}{14}$ is a root of $8x^3 - 4x^2 - 4x + 1=0$



Prove that $\sin\frac{\pi}{14}$ is a root of $8x^3 - 4x^2 - 4x + 1=0$.



I have no clue how to proceed. I tried to show that the expression becomes $0$ when $\sin\frac{\pi}{14}$ is substituted for $x$, but couldn't get any further. I'm not sure whether the equation is stated correctly; if it is, please help me solve it, and if it is wrong, please modify it and solve that instead.


Answer



First, you have, for any integer $n>0$ and any real $a, b$ with $b \not \equiv 0 \pmod{2\pi}$,




$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



I'll prove this formula a bit later, but it's a rather classic one.



With $n=7$, $a=\frac{\pi}{14}$ and $b=\frac{2\pi}{7}$, you get



$$\sin \frac{\pi}{14} + \sin \frac{5\pi}{14} + \sin \frac{9\pi}{14} + \sin \frac{13\pi}{14} + \sin \frac{17\pi}{14} + \sin \frac{21\pi}{14} + \sin \frac{25\pi}{14}=0$$



Now, using $\sin(\pi - \theta) = \sin \theta$ and $\sin (\theta \pm \pi)= - \sin \theta$, notice that
$$\sin \frac{9\pi}{14} = \sin \frac{5\pi}{14}$$

$$\sin \frac{13\pi}{14} = \sin \frac{\pi}{14}$$
$$\sin \frac{17\pi}{14} = - \sin \frac{3\pi}{14}$$
$$\sin \frac{21\pi}{14} = - 1$$
$$\sin \frac{25\pi}{14} = - \sin \frac{3\pi}{14}$$



So, the equality becomes



$$\sin \frac{\pi}{14} - \sin \frac{3\pi}{14} + \sin \frac{5\pi}{14} = \frac{1}{2}$$



Using $\sin p - \sin q = 2 \sin \frac{p-q}{2} \cos \frac{p+q}{2}$, you get




$$\sin \frac{\pi}{14} + 2 \sin \frac{\pi}{14} \cos \frac{4\pi}{14} = \frac{1}{2}$$



Or



$$\sin \frac{\pi}{14} \left(1 + 2 \cos \frac{4\pi}{14}\right) = \frac{1}{2}$$



But



$$\cos \frac{4\pi}{14} = \sin \left(\frac{\pi}{2} - \frac{4\pi}{14} \right) = \sin \frac{3\pi}{14}$$




And we have also $\sin 3\theta = 3\sin \theta - 4 \sin^3 \theta$, so



$$\cos \frac{4\pi}{14} = 3\sin \frac{\pi}{14} - 4\sin^3 \frac{\pi}{14}$$



Let $x=\sin \frac{\pi}{14}$, we have then the equation



$$x (1+6x-8x^3) = \frac{1}{2}$$



Or




$$16 x^4-12x^2-2x+1=0$$



But $-\frac{1}{2}$ is a trivial root of $16 x^4-12x^2-2x+1$, so this polynomial is divisible by $2x+1$. And since obviously $\sin \frac{\pi}{14} \neq -\frac{1}{2}$, we can do polynomial division, and the equation becomes



$$8x^3-4x^2-4x+1=0$$



And we are done.



As a side comment, by changing slightly the proof, starting from $\sin \frac{\pi}{14} - \sin \frac{3\pi}{14} + \sin \frac{5\pi}{14} = \frac{1}{2}$, you would discover that the other two roots are $-\sin \frac{3\pi}{14}$ and $\sin \frac{5\pi}{14}$.
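A quick numerical check of the three roots (my own sketch):

import math

p = lambda t: 8*t**3 - 4*t**2 - 4*t + 1
for r in (math.sin(math.pi/14), -math.sin(3*math.pi/14), math.sin(5*math.pi/14)):
    print(r, p(r))            # p(r) is on the order of 1e-16 in each case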







Now, the proof of the sum



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



There is an almost trivial proof using complex numbers, but here we are asked not to use them, so we will do this by induction.



First, the formula is true for $n=1$, for it amounts to $ \sin a = \sin a$.

Let's suppose it's true for $n$, we will compute



$$A=\frac{\sin \frac{(n+1)b}{2}}{\sin \frac{b}{2}} \sin \left( a+ n\frac{b}{2}\right) - \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$



Using $2\sin \theta \sin \phi = \cos (\theta - \phi) - \cos (\theta + \phi)$, we have



$$A=\frac{1}{2\sin \frac{b}{2}} \left[\cos \left(\frac{b}{2} - a \right) - \cos \left(a+ (2n+1)\frac{b}{2} \right) - \cos \left(\frac{b}{2} - a \right) + \cos \left(a+ (2n-1)\frac{b}{2} \right) \right]$$



$$A=\frac{1}{2\sin \frac{b}{2}} \left[ \cos \left(a+ (2n-1)\frac{b}{2} \right) - \cos \left(a+ (2n+1)\frac{b}{2} \right) \right]$$




And, since $\cos q - \cos p = 2\sin \frac{p+q}{2} \sin \frac{p-q}{2}$,



$$A=\frac{1}{\sin \frac{b}{2}} \left[ \sin (a+nb) \sin \frac{b}{2} \right] = \sin (a+nb)$$



Thus,



$$\sum_{k=0}^{n} \sin (a+kb) = \sum_{k=0}^{n-1} \sin (a+kb) + \sin(a+nb)$$
$$=\frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right) + A$$
$$=\frac{\sin \frac{(n+1)b}{2}}{\sin \frac{b}{2}} \sin \left( a+ n\frac{b}{2}\right)$$




And the induction step is proved, so the formula is true for all $n>0$.






The "complex numbers" proof of the sum runs like this:



$$S = \sum_{k=0}^{n-1} e^{i(a+kb)} = e^{ia} \sum_{k=0}^{n-1} z^k = e^{ia} \frac{z^n - 1}{z - 1}$$



with $z=e^{ib}$ (and $z\neq1$ because $b \not \equiv 0 \pmod{2\pi}$)




Hence



$$S = e^{ia} \frac{e^{inb} - 1}{e^{ib} - 1}$$



There is a well-known trick to simplify, write:



$$S = e^{ia} \frac{e^{inb/2}}{e^{ib/2}} \frac{e^{inb/2} - e^{-inb/2}}{e^{ib/2} - e^{-ib/2}} = e^{ia} \frac{e^{inb/2}}{e^{ib/2}} \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}}$$
$$S = e^{i(a + (n-1)b/2)} \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}}$$




Thus, taking real and imaginary parts, you get:



$$\sum_{k=0}^{n-1} \cos (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \cos \left( a+ (n-1)\frac{b}{2}\right)$$



$$\sum_{k=0}^{n-1} \sin (a+kb) = \frac{\sin \frac{nb}{2}}{\sin \frac{b}{2}} \sin \left( a+ (n-1)\frac{b}{2}\right)$$


analysis - Injection, making bijection

I have injection $f \colon A \rightarrow B$ and I want to get bijection. Can I just resting codomain to $f(A)$? I know that every function i...