Columns: url (string, 13–4k chars) · text (string, 100–1.01M chars) · date (timestamp[s]) · meta (dict)
http://mathhelpforum.com/algebra/98142-i-m-stuck-losing-sleep.html
# Math Help - I'm stuck and losing sleep!

1. ## I'm stuck and losing sleep!

Help if anyone can provide it! I'm stuck on a problem that I have the answer to but don't know how they got it. Here it is:

A wire is cut into three equal parts. The resulting segments are then cut into 4, 6, and 8 parts respectively. If each of the resulting segments has an integer length, what is the minimum length of the wire?

The answer is 72, but I don't understand the rationale that was given. I do understand why each piece must be "able" to be cut into 8 pieces and would therefore be 24 inches (8 x 3), but how does this end up being 72? Or, how and why does this get multiplied by 3? I think it has to do with multiples, but it's not sinking in.

Thanks, Rob

2. Originally Posted by GRE-Rob (question quoted above)

Let the wire be $x$ units of length long. Cutting it into three equal parts gives $\frac{x}{3}$ for each piece. Each of these thirds is then cut up into

$\frac{x}{3} \times \frac{1}{4} = \frac{x}{12}$, $\frac{x}{3} \times \frac{1}{6} = \frac{x}{18}$, $\frac{x}{3} \times \frac{1}{8} = \frac{x}{24}$

For all of these pieces to have integer length, $x$ must be divisible by 12, 18 and 24, so we need to find the LCM (lowest common multiple) of 12, 18 and 24, which happens to be 72 (see the spoiler for how to show this).

Spoiler: To do this, split each number into prime factors:

• $12 = 2 \times 2 \times 3$
• $18 = 2 \times 3 \times 3$
• $24 = 2 \times 2 \times 2 \times 3$

Then multiply together the highest power of each prime that appears: $2 \times 2 \times 3 \times 2 \times 3 = 72$. The first three factors come from the prime factors of 12, the extra 2 from the 24, and the extra 3 from the 18.

Therefore $x$ must be 72 or a multiple thereof, but as the question asks for the lowest value, it must be 72.

3. ## Thank you e^(i*pi)

Thank you for your help. I'll sleep better now.
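A quick way to sanity-check the LCM reasoning above (a sketch added here, not part of the original thread) is a few lines of Python:

```python
from math import lcm  # math.lcm accepts multiple arguments in Python 3.9+

# Each third of the wire must split evenly into 4, 6, and 8 pieces,
# so the total length x must be divisible by 3*4=12, 3*6=18, and 3*8=24.
x = lcm(12, 18, 24)
print(x)  # 72

# Verify: each third (72/3 = 24) divides evenly into 4, 6, and 8 parts.
third = x // 3
print([third % parts == 0 for parts in (4, 6, 8)])  # [True, True, True]
```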
2015-05-23T15:58:03
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/98142-i-m-stuck-losing-sleep.html", "openwebmath_score": 0.6497178673744202, "openwebmath_perplexity": 302.59698022445804, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639694252315, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132544250377 }
http://mathhelpforum.com/calculus/97279-more-volume-resulting-solid.html
# Thread: more volume of resulting solid

1. ## more volume of resulting solid

The region between the graphs of $y = x^2$ and $y = 2x$ is rotated around the line $x = 2$. Find the volume of the resulting solid.

OK, I need some more help with this one. I tried

$\pi \int_0^2 \left[ (2-x^2)^2-(2-2x)^2 \right] dx$

and I'm not sure what I'm doing wrong here.

2. Originally Posted by dat1611 (question quoted above)

You have to use cylindrical shells with respect to x ...

$V = 2\pi \int_0^2 (2-x)(2x-x^2) \, dx$

... or washers with respect to y:

$V = \pi \int_0^4 \left[ \left(2-\frac{y}{2}\right)^2 - (2-\sqrt{y})^2 \right] \, dy$
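Both integrals evaluate to $\frac{8\pi}{3}$; here is a short SymPy check (a sketch added here, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.symbols('y', nonnegative=True)

# Shells w.r.t. x: radius (2 - x), height (2x - x^2)
V_shells = 2*sp.pi*sp.integrate((2 - x)*(2*x - x**2), (x, 0, 2))

# Washers w.r.t. y: outer radius (2 - y/2), inner radius (2 - sqrt(y))
V_washers = sp.pi*sp.integrate((2 - y/2)**2 - (2 - sp.sqrt(y))**2, (y, 0, 4))

print(V_shells, V_washers)  # 8*pi/3 8*pi/3
```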
2016-10-21T12:06:29
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/97279-more-volume-resulting-solid.html", "openwebmath_score": 0.5817983150482178, "openwebmath_perplexity": 1813.7454180418435, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639694252315, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132544250377 }
http://openstudy.com/updates/560c137fe4b0659107565c62
## nthenic_oftime one year ago

Please help... idk how to even figure this out, so steps would be nice too. Find the specified vector or scalar: u = -4i + 1j and v = 4i + 1j; find ||u+v||. A. Sqrt34 B. 8 C. 5 D. 2

1. phi: First add the "corresponding" terms, then find the length of the resulting vector (length is sqrt(sum of the square of each term)).

2. phi: For example, the length of 3i+4j is $\sqrt{3^2+4^2} = \sqrt{9+16}= \sqrt{25} = 5$

3. nthenic_oftime: Hey, thank you for that answer. Can you show me how you got the length of the vector?

4. phi: What did you get for u+v?

5. nthenic_oftime: Wait, I misunderstood. I thought that was the same thing. I'm so confused.

6. nthenic_oftime: First adding corresponding terms gives us... i+2j?

7. nthenic_oftime: idk how to find the length of a vector

8. phi: -4i + 1j and 4i + 1j

9. phi: -4i + 4i is not i

10. nthenic_oftime: It's 0?

11. nthenic_oftime: I don't understand the whole vector and scalar thing.

12. phi: Yes, the i's "go away"; you are left with 2j.

13. nthenic_oftime: Which is positive points?

14. phi: In 2 dimensions, you can think of the vector as an <x,y> pair of numbers. For example, u = -4i + 1j (which can also be written <-4,1>). In a graph, it looks like this: [drawing]

15. phi: And the length of the vector is the length of the line from (0,0) (the origin) to the point (-4,1). We use Pythagoras to find its length.

16. nthenic_oftime: Okay, so u is a vector and the length would be to the point.

17. nthenic_oftime: Oh okay, I didn't see your message.

18. phi: u+v = 2j (or 0i+2j, or (0,2)). In a graph it looks like this: [drawing]

19. nthenic_oftime: [drawing]

20. phi: We could use Pythagoras, $\sqrt{0^2+2^2}= \sqrt{0+4}= \sqrt{4}= 2$, but we can see the length is just 2.

21. nthenic_oftime: Oh cool, good, so our drawings of this match. Okay, so 0^2 because we have no i left, and 2^2 because of the 2j.

22. nthenic_oftime: I understand, thank you so much... Can I ask what i and j stand for? Why not use x and y? I got behind and I am trying to catch up on things, as it's obvious I am missing a few things in my knowledge.

23. phi: The interesting thing is this same idea works for 3-D or even higher dimensions (though visualizing a vector with more than 3 components is beyond me).

24. phi: I'm not sure why people use i, j, k, but it does not matter; the idea is that they are different "dimensions", for example sideways and up/down. The other way people write vectors is as a "tuple" such as (1,2) or <1,2>, where it is understood that each number is the distance along each dimension.

25. nthenic_oftime: Okay, so vectors would be written as plot points or coordinates, just signified as different dimensions. Thanks for the help on the problem and that little info there; it caught me up a little bit. Medal and fan for you :)
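The whole computation fits in a few lines of Python (a sketch added here, not part of the original discussion):

```python
import math

u = (-4, 1)   # u = -4i + 1j as an (x, y) pair
v = (4, 1)    # v =  4i + 1j

# Add corresponding components, then apply Pythagoras for the length
s = (u[0] + v[0], u[1] + v[1])     # (0, 2)
length = math.hypot(s[0], s[1])    # sqrt(0^2 + 2^2)
print(s, length)                   # (0, 2) 2.0 -> answer D
```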
2017-01-17T17:42:55
{ "domain": "openstudy.com", "url": "http://openstudy.com/updates/560c137fe4b0659107565c62", "openwebmath_score": 0.8404390811920166, "openwebmath_perplexity": 1755.924296382822, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.653213253871466 }
http://math.stackexchange.com/questions/133041/how-can-i-solve-xx-5-for-x
# How can I solve $x^x = 5$ for $x$? [duplicate]

Possible Duplicate: Is $x^x=y$ solvable for $x$?

I've been playing with this equation for a while now and can't figure out how to isolate $x$. I've gotten to $x \ln x = \ln 5$, which seems like it would be easier to work with, but I can't figure out where to go from there. Is it possible to solve this algebraically? If not, how can I find the value of $x$?

## marked as duplicate by Guess who it is., Nate Eldredge, t.b., Bill Cook, Asaf Karagila Apr 18 '12 at 13:15

@J.M. Definitely. I figured it was asked before, but couldn't fathom how to search for it. – StrixVaria Apr 17 '12 at 18:19

In addition to the answers: a quick numeric solution can be obtained iteratively via $x=5^{1/x}$. Or, quicker: $x= \sqrt{x \, 5^{1/x}}$ – leonbloy Apr 17 '12 at 19:15

We can find the result using the Lambert W function. Let's define $y\,e^y=t$. Then $y=W(t)$, where $W(t)$ is the Lambert W function. With $x=e^y$,

$$x^x= (e^y)^{e^y}=e^{y\,e^y}=5$$

Taking the natural logarithm of both sides,

$$y\,e^y=\log 5$$

Thus $t=\log 5$, and from my first definition $y=W(\log 5)$, so $x=e^y=e^{W(\log 5)}$. It can be expressed in another way too. Since

$$x\,y=y\,e^y=t=\log 5$$

we have

$$x=\frac{\log 5}{y}=\frac{\log 5}{W(\log 5)}$$

I asked Wolfram Alpha what the numerical value is, and it said $x\approx2.129372$.

It's not possible to solve this algebraically. Look at the Lambert W function. In your case, the solution is $x=\frac{\ln(5)}{W(\ln(5))}$.
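Both closed forms are easy to check numerically with SciPy's Lambert W (a sketch added here, not part of the original answers):

```python
import math
from scipy.special import lambertw

t = math.log(5)
x = t / lambertw(t, 0).real   # principal branch k = 0
print(x, x**x)                # 2.1293... 5.0000...

# leonbloy's fixed-point iteration x = 5**(1/x) converges to the same value
x_it = 2.0
for _ in range(60):
    x_it = 5 ** (1 / x_it)
print(x_it)                   # 2.1293...
```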
2015-09-02T05:28:43
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/133041/how-can-i-solve-xx-5-for-x", "openwebmath_score": 0.86131352186203, "openwebmath_perplexity": 288.74865566355084, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.653213253871466 }
http://mathhelpforum.com/advanced-statistics/59216-poisson-distribution-help-v2-0-a.html
## Poisson Distribution Help v2.0

This problem is confusing me too.

In a lengthy manuscript, it is discovered that only 13.5% of the pages contain no typing errors. If we assume that the number of errors per page is a random variable with a Poisson distribution, find the percentage of pages that have exactly one error.

I get two different answers depending upon which Poisson formula I use, and I have no idea which is correct (if either is, that is!).
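The thread ends without a reply; for reference, here is a sketch of the standard approach (an addition, not part of the original post). Since $P(X=0)=e^{-\lambda}=0.135$, we get $\lambda=-\ln 0.135\approx 2$, so $P(X=1)=\lambda e^{-\lambda}\approx 0.27$, i.e. about 27% of the pages:

```python
import math

p0 = 0.135                 # P(X = 0) = exp(-lam)
lam = -math.log(p0)        # ~2.00 errors per page
p1 = lam * math.exp(-lam)  # P(X = 1) = lam * exp(-lam)
print(lam, p1)             # ~2.0025, ~0.2703 -> about 27% of pages
```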
2016-08-28T21:03:12
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/advanced-statistics/59216-poisson-distribution-help-v2-0-a.html", "openwebmath_score": 0.9681388735771179, "openwebmath_perplexity": 348.3346576763156, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.653213253871466 }
https://www.ffi.no/en/research/n-vector/vector-symbols-explained
# Vector symbols explained

Here you will find the mathematical notation used for the n-vector page. The notation system used for the n-vector page and in the files for download is presented in Chapter 2 of the following thesis: Gade (2018): Inertial Navigation - Theory and Applications. A simplified presentation is given here.

## Coordinate frame

A coordinate frame has a position (origin) and three axes (basis vectors) x, y and z (orthonormal). Thus, a coordinate frame can represent both position and orientation, i.e. 6 degrees of freedom. It can be used to represent a rigid body, such as a vehicle or the Earth, and it can also be used to represent a "virtual" coordinate frame such as North-East-Down.

Coordinate frames are designated with capital letters, e.g. the three generic coordinate frames A, B, and C. We also have specific names for some common coordinate frames:

Coordinate frame | Description
--- | ---
E | Earth
N | North-East-Down
B | Body, i.e. the vehicle

Note that it is no problem to only use the position or the orientation of a coordinate frame. E.g., in some cases, we just care about the position of points B and C (and sometimes we only care about the orientation of N).

## General vector

A 3D vector given with numbers is written e.g. $[2, 4, 6]^T$. The three numbers are the vector components along the x-, y- and z-axes of a coordinate frame. If the name of the vector is k, and the coordinate frame is A, we will use bold k with A as trailing superscript, i.e. $\mathbf{k}^A$. Thus

$\mathbf{k}^A = [2, 4, 6]^T$

is the 3D vector that is constructed by going 2 units along the x-axis of coordinate frame A, 4 units along the y-axis, and 6 along the z-axis. We say that the vector k is decomposed in A.

## Position vector

Instead of the general vector k, we can have a specific vector that goes from A to B. This vector can be decomposed in C (A, B, and C are three arbitrary coordinate frames). We would write this vector:

$\mathbf{p}_{AB}^C$

In program code: p_AB_C

The letter p is used since this is a position vector (the position of B relative to A, decomposed/resolved in the axes of C).

### Example a)

$\mathbf{p}_{EB}^E = [0, 0, 6371]^T \text{ km}$

From the subscript, we see that this is the vector that goes from E (center of the Earth) to B (the vehicle). The superscript tells us that it is decomposed in E, which we now assume has its z-axis pointing towards the North Pole. From the values, we see that the vector goes 6371 km towards the North Pole, and zero in the x and y directions. If we assume that the Earth is a sphere with radius 6371 km, we see that B is at the North Pole.

### Example b)

$\mathbf{p}_{BC}^N = [50, 60, -5]^T \text{ m}$

The vector goes from B, e.g. an aircraft, to C, e.g. an object. The vector is decomposed in N (which has North-East-Down axes). This means that C is 50 m north of B and 60 m east, and C is also 5 m above B.

### Properties of the position vector

For the general position vector $\mathbf{p}_{AB}^C$, we have the property:

$\mathbf{p}_{BA}^C = -\mathbf{p}_{AB}^C$

I.e. swapping the coordinate frames in the subscript gives a vector that goes in the opposite direction. We also have:

$\mathbf{p}_{AD}^C = \mathbf{p}_{AB}^C + \mathbf{p}_{BD}^C$

I.e., going from A to D is the same as first going from A to B, then from B to D. From the equation, we see that B is cancelled out. A, B, C, and D are arbitrary coordinate frames.

## Rotation matrix

In addition to the coordinate frame A used to decompose $\mathbf{k}$, we could also have a coordinate frame B, with different orientation than A. The same vector k could be expressed by components along the x-, y- and z-axes of B instead of A, i.e. it can also be decomposed in B, written $\mathbf{k}^B$. Note that the length of $\mathbf{k}^B$ equals the length of $\mathbf{k}^A$. We will now have the relation:

$\mathbf{k}^A = \mathbf{R}_{AB}\,\mathbf{k}^B$

$\mathbf{R}_{AB}$ is the 9 element (3x3) rotation matrix (also called direction cosine matrix) that transforms vectors decomposed in B to vectors decomposed in A.

Note that the B in $\mathbf{R}_{AB}$ should be closest to the vector decomposed in B (following the "rule of closest frames", see Section 2.5.3 in Inertial Navigation - Theory and Applications for details). If we need to go in the other direction, we have:

$\mathbf{k}^B = \mathbf{R}_{BA}\,\mathbf{k}^A$

Now we see that the A in $\mathbf{R}_{BA}$ is closest to $\mathbf{k}^A$.

### Properties of the rotation matrix

We have that

$\mathbf{R}_{AB} = \mathbf{R}_{BA}^T$

where the T means matrix transpose. We also have the following property (closest frames are cancelled):

$\mathbf{R}_{AC} = \mathbf{R}_{AB}\,\mathbf{R}_{BC}$

If we compare these properties with the position vector, we see that they are very similar: minus is replaced by transpose, and plus is replaced by matrix multiplication. A, B, and C are three arbitrary coordinate frames.

## n-vector

The n-vector is in almost all cases decomposed in E, and in the simplest form, we will write it $\mathbf{n}^E$. This simple form can be used in cases where there is no doubt about what the n-vector expresses the position of. In such cases, we can also express the position using e.g. the variables lat and long, without further specification. However, if we are interested in the position of multiple objects, e.g. A and B, we must specify which of the two, both for n-vector and for latitude/longitude. In this case we will write $\mathbf{n}_{EA}^E$ and $\mathbf{n}_{EB}^E$ (program code: n_EA_E and n_EB_E), and $lat_A$, $long_A$ and $lat_B$, $long_B$.

The subscript E might seem redundant here; it could be sufficient to use only A or B. However, we have chosen to also include the E, since both n-vector and latitude/longitude depend on the reference ellipsoid that is associated with E (see Section 4.1 in Gade (2010) for more about this).

Note however, that the subscript rules (swapping and cancelling) we had for $\mathbf{p}_{AB}^C$ and $\mathbf{R}_{AB}$ cannot be used for n-vector or lat/long.

For a spherical Earth, we have a simple relation between $\mathbf{p}_{EB}^E$ and $\mathbf{n}_{EB}^E$:

$\mathbf{p}_{EB}^E = (r_{Earth} + h_B)\,\mathbf{n}_{EB}^E$

where $r_{Earth}$ is the radius of the Earth and $h_B$ is the height of B.
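To make the notation concrete, here is a small Python sketch (added here, not from the original page; the rotation used is a hypothetical example, and the variable names follow the page's program-code convention):

```python
import numpy as np

def R_AB_about_z(angle):
    """Hypothetical example: R_AB for frames A and B that differ by a
    rotation 'angle' about a shared z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R_AB = R_AB_about_z(np.deg2rad(30))
k_B = np.array([2.0, 4.0, 6.0])   # vector k decomposed in B
k_A = R_AB @ k_B                  # k^A = R_AB k^B (B closest to k^B)

R_BA = R_AB.T                     # R_BA = R_AB^T
assert np.allclose(R_BA @ k_A, k_B)
assert np.isclose(np.linalg.norm(k_A), np.linalg.norm(k_B))  # lengths agree

# Position-vector rules: p_BA_C = -p_AB_C and p_AD_C = p_AB_C + p_BD_C
p_AB_C = np.array([1.0, 2.0, 3.0])
p_BD_C = np.array([4.0, 0.0, -1.0])
p_AD_C = p_AB_C + p_BD_C
p_BA_C = -p_AB_C
```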
2023-01-28T14:21:00
{ "domain": "ffi.no", "url": "https://www.ffi.no/en/research/n-vector/vector-symbols-explained", "openwebmath_score": 0.8428168296813965, "openwebmath_perplexity": 736.4020758903888, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132538714659 }
https://dsp.stackexchange.com/questions/30701/inverse-laplace-transform-using-inversion-formula
# Inverse Laplace transform Using Inversion Formula Use the complex inversion formula to calculate the inverse Laplace transform $f(t)$ of the following Laplace transform: $$F_L (s) = \frac{1}{(s+2)(s^2 +4)}.$$ When the region of convergence is: \begin{align}(1)& \quad Re(s)<-2;\\(2)&\quad -2<Re(s)<0;\\(3)&\quad Re(s)>0.\end{align} Attempt: Here is an explanation of the complex inversion formula. Plugging the function into the formula: $$f(t) = \frac{1}{j2\pi} \int^{\sigma + j \infty}_{\sigma - j \infty} \frac{e^{st}}{(s+2)(s^2+4)}\ ds \tag{1}$$ • So, how do I need to choose $\sigma$? • And how do I evaluate this for each of the three regions? P. S. I tried to solve this without the complex inversion formula, just to see what the answer should look like. I started out by expanding using partial fractions as: \begin{align} \frac{1}{(s+2)(s^2 +4)}&= \frac{1}{8} \left( \frac{1}{s+2} + \frac{1}{s^2+4} \right)\\ &=\frac{1}{8} \left( \frac{1}{s+2} + \frac{1}{(s+2j)(s-2j)} \right)\\ &=\frac{1}{8} \left( \frac{1}{s+2} + \frac{j}{4(s+2j)} - \frac{j}{4(s-2j)} \right). \end{align} Looking at a Laplace transform table, $\frac{1}{s-a} \leftrightarrow e^{at},$ so $$f(t) = \frac{1}{8} \left( e^{-2t} + \frac{j}{4} \left(e^{-2jt} + e^{2jt}\right) \right).$$ • Is this correct? • If so, how can I get to this using the complex inversion formula? In engineering practice, the complex inversion integral is hardly ever used. As an engineer, you will almost exclusively need to invert rational functions, and this can be done by partial fraction expansion and elementary inversions. So first I'll show you how to obtain the inverse Laplace transform by partial fraction expansion, then I'll explain the evaluation of the inversion integral using Cauchy's residue theorem. You have an error in the partial fraction expansion. Furthermore, you don't need to split up the complex pole pair. I would rewrite the Laplace transform like this: $$F(s)=\frac{1}{(s+2)(s^2+4)}=\frac{A}{s+2}+\frac{Bs+C}{s^2+4}\tag{1}$$ with $A=\frac18$, $B=-\frac18$, and $C=\frac14$. The terms on the right-hand side of $(1)$ are elementary Laplace transforms. Now you just have to consider the different regions of convergence (ROC): \begin{align}\frac{1}{s+2}&\Longleftrightarrow e^{-2t}u(t),&\quad \text{Re}\{s\}>-2\\\frac{1}{s+2}&\Longleftrightarrow -e^{-2t}u(-t),&\quad \text{Re}\{s\}<-2\\ \frac{s}{s^2+4}&\Longleftrightarrow \cos(2t)u(t),&\quad\text{Re}\{s\}>0\\ \frac{s}{s^2+4}&\Longleftrightarrow -\cos(2t)u(-t),&\quad\text{Re}\{s\}<0\\ \frac{1}{s^2+4}&\Longleftrightarrow \frac12\sin(2t)u(t),&\quad\text{Re}\{s\}>0\\ \frac{1}{s^2+4}&\Longleftrightarrow -\frac12\sin(2t)u(-t),&\quad\text{Re}\{s\}<0 \end{align} So for the ROC $\text{Re}\{s\}<-2$ you get the anti-causal signal $$f(t)=\frac18\left[-e^{-2t}+\cos(2t)-\sin(2t)\right]u(-t)\tag{2}$$ For the ROC $-2<\text{Re}\{s\}<0$ you get the two-sided signal $$f(t)=\frac18\left[e^{-2t}u(t)+(\cos(2t)-\sin(2t))u(-t)\right]\tag{3}$$ And, finally, for the ROC $\text{Re}\{s\}>0$ you get the causal signal $$f(t)=\frac18\left[e^{-2t}-\cos(2t)+\sin(2t)\right]u(t)\tag{4}$$ If you need to use the inversion formula then it is very helpful to know Cauchy's residue theorem, which says that $$\frac{1}{2\pi j}\oint_Cf(s)ds=\sum_kR_k\tag{5}$$ where $f(s)$ is analytic with finitely many poles, $C$ is a positively oriented closed curve, and $R_k$ are the residues of the poles inside $C$. 
It can be shown that the inversion integral equals a contour integral if the curve $C$ is chosen appropriately:

$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds= \frac{1}{2\pi j}\oint_CF(s)e^{st}ds\tag{6}$$

In the case of a rational function $F(s)$ the curve $C$ is chosen as a Bromwich contour, as shown here in Fig.2. The straight line part of the curve is the actual integration path we're interested in. The contribution from the circular part of $C$ approaches zero. Depending on the chosen ROC, we have to choose the position of the straight line (i.e., the value of $\sigma$) differently.

For ROC $\text{Re}\{s\}<-2$ (i.e., the anti-causal solution), the straight line is anywhere to the left of the left-most pole, and so the Bromwich contour enclosing all poles is negatively oriented, which results in a sign change:

$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds=-\sum_kR_k,\quad\sigma<-2,\quad t<0\tag{7}$$

where $R_k$ are the residues corresponding to the poles of the function $f(s)=F(s)e^{st}$.

For ROC $\text{Re}\{s\}>0$ (i.e., the causal solution), the straight line is anywhere to the right of the right-most pole, and the Bromwich contour enclosing all poles is positively oriented. Consequently, we have

$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds=\sum_kR_k,\quad\sigma>0,\quad t>0\tag{8}$$

For the two-sided solution with ROC $-2<\text{Re}\{s\}<0$ we need to choose two curves with the straight line inside the ROC. One encloses the pole to its left at $s=-2$, so the curve is positively oriented, and the other one encloses the two poles at $s=2j$ and $s=-2j$ to the right of the straight line, so it is negatively oriented (which adds a negative sign to the corresponding residues). Let $R_1$ be the residue corresponding to the pole at $s=-2$, and let $R_2$ and $R_3$ be the residues of the two poles at $\pm 2j$, respectively. The inversion integral is then given by:

$$\frac{1}{2\pi j}\int_{\sigma-j\infty}^{\sigma+j\infty}F(s)e^{st}ds=\begin{cases}R_1,&t>0\\-R_2-R_3,&t<0\end{cases},\quad-2<\sigma<0\tag{9}$$

What remains is the computation of the residues. The residue at pole $p_k$ is given by

$$R_k=\lim_{s\rightarrow p_k}(s-p_k)f(s)\tag{10}$$

With $f(s)=F(s)e^{st}$ we get for $p_1=-2$

$$R_1=\lim_{s\rightarrow -2}(s+2)F(s)e^{st}=\lim_{s\rightarrow -2}\frac{e^{st}}{s^2+4}=\frac{e^{-2t}}{8}\tag{11}$$

In a similar manner the other residues are obtained:

\begin{align}R_2&=-\frac{e^{2jt}}{8}\frac{1+j}{2}\\ R_3&=-\frac{e^{-2jt}}{8}\frac{1-j}{2}\end{align}\tag{12}

Now $(7)-(9)$ can be evaluated, and the results are of course the same as $(2)-(4)$.

• Thank you very much. But the question is specifically asking to use the complex inversion formula. So if I have $$f(t) = \frac{1}{j2 \pi} \int^{\sigma + j \infty}_{\sigma - j \infty} \frac{e^{st}}{(s+2)(s^2+4)} \ ds,$$ for instance for the region of absolute convergence $Re\{ s \} < -2,$ how exactly do I choose $\sigma$? The thing we start with is real, so the answer we end up with should be real as well. – Merin May 10 '16 at 21:37
• @Merin: I've added the evaluation of the complex integral. – Matt L. May 11 '16 at 8:22
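The residue computation in $(11)$–$(12)$ can be verified symbolically (a sketch added here, not part of the original answer):

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
F = 1 / ((s + 2) * (s**2 + 4))

# Causal solution (ROC Re{s} > 0) as the sum of residues of F(s)*exp(s*t), eq. (8)
poles = [-2, 2*sp.I, -2*sp.I]
f_causal = sum(sp.residue(F * sp.exp(s*t), s, p) for p in poles)

target = sp.Rational(1, 8) * (sp.exp(-2*t) - sp.cos(2*t) + sp.sin(2*t))
print(sp.simplify(sp.expand_complex(f_causal - target)))  # 0

# Cross-check with SymPy's built-in (one-sided) inverse Laplace transform
print(sp.inverse_laplace_transform(F, s, t))
```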
2021-05-08T10:16:40
{ "domain": "stackexchange.com", "url": "https://dsp.stackexchange.com/questions/30701/inverse-laplace-transform-using-inversion-formula", "openwebmath_score": 0.9766323566436768, "openwebmath_perplexity": 262.25413467172365, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.6532132538714659 }
https://math.stackexchange.com/questions/2984134/can-someone-explain-how-this-is-not-self-adjoint
# Can someone explain how this is not self adjoint?

The matrix representation of a linear operator T: $\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}$ is given with respect to a particular basis:

$$\begin{bmatrix}-5 & 2 \\ 2 & 5 \end{bmatrix}$$

is the matrix of T when written with respect to the basis $(1,0)$ and $(1,1)$. We take the standard dot product as the inner product.

I'm getting that it is self-adjoint, since $\langle Tv_1,v_2 \rangle = \langle v_1,Tv_2 \rangle$ where $v_1 = (1,0)$ and $v_2=(1,1)$, but I was told that it wasn't. Could someone please shed some light on this? Something about the conjugate transpose not being equal?

• There's a good chance that what you were told was actually: trying with just two particular vectors is not enough to prove that the operator is self-adjoint. – Henning Makholm Nov 4 '18 at 12:29

HINT: let $u:=u_1e_1+u_2e_2$ and $v:=v_1e_1+v_2e_2$ for some basis $e_1,e_2$ of $\Bbb R^2$ and $T$ a linear operator from $\Bbb R^2$ to itself. Then from the linearity of $T$ and the definition of self-adjoint,

$$\forall u,v\in\Bbb R^2:\langle Tu,v\rangle=\langle u,Tv\rangle \iff \forall j,k\in\{1,2\}:\langle Te_k,e_j\rangle=\langle e_k,Te_j\rangle$$

So you only need to check whether the right-hand side above holds or not for some basis of $\Bbb R^2$, with $T$ represented using the standard orthonormal basis of $\Bbb R^2$, because the standard inner product is defined using the standard orthonormal basis.
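To make the hint concrete, here is a numerical check (added here, not part of the original thread): converting the given matrix to the standard basis yields a non-symmetric matrix, so T is not self-adjoint with respect to the dot product.

```python
import numpy as np

# Columns of P are the basis vectors (1,0) and (1,1) in standard coordinates
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
M = np.array([[-5.0, 2.0],
              [ 2.0, 5.0]])     # matrix of T in the basis (1,0), (1,1)

# Matrix of T in the standard basis: A = P M P^{-1}
A = P @ M @ np.linalg.inv(P)
print(A)                        # [[-3. 10.] [ 2.  3.]] -- not symmetric
print(np.allclose(A, A.T))      # False => not self-adjoint w.r.t. the dot product
```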
2019-12-08T00:38:45
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2984134/can-someone-explain-how-this-is-not-self-adjoint", "openwebmath_score": 0.9419352412223816, "openwebmath_perplexity": 82.22937372979423, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132538714659 }
https://kratos-wiki.cimne.upc.edu/index.php?title=Resolution_of_the_1D_Poisson%27s_equation_using_local_Shape_Functions&diff=prev&oldid=3725
# Resolution of the 1D Poisson's equation using local Shape Functions

The problem to solve is:

$A(\varphi) = \frac{d}{dx} \left( k \frac{d \varphi}{dx} \right) + Q = 0 \qquad \text{in } 0 \le x \le l$

$B(\varphi) = \begin{cases} \varphi - \overline{\varphi} = 0 & \text{in } x = 0 \\ k \frac{d \varphi}{dx} + \overline{q} = 0 & \text{in } x = l \end{cases}$

with the following analytical solution:

$\varphi (x) = - \frac{Q}{2 k} x^2 + \frac{- \overline q + Q l}{k} x + \overline \varphi$

$k \nabla \varphi (x) = Q (l - x) - \overline q$

The weak form of the integral equation is:

$\int_0^l \frac{d W_i}{dx} k \sum_{j=1}^n \frac{d N_j}{dx} a_j \, dx = \int_0^l W_i Q \, dx + \left[ W_i q \right ]_{x=0} - \left[ W_i \overline q \right ]_{x=l}$

and the approximate solution is written by using the shape functions as:

$\varphi (x) \cong \hat \varphi (x) = \sum_{i=1}^n N_i (x) \varphi_i$

## Resolution using 1 element with two nodes

A single element means covering the entire domain with the local functions:

$\hat \varphi (x) = N_1 (x) \varphi_1 + N_2 (x) \varphi_2$

$\frac{ d \hat \varphi (x)}{dx} = \frac{ d N_1 (x) }{dx} \varphi_1 + \frac{ d N_2 (x) }{dx} \varphi_2$

By using the Galerkin method ($W_i = N_i$), we obtain:

$\int_0^l \frac{d N_i}{dx} k \left ( \frac{d N_1}{dx} \varphi_1 + \frac{d N_2}{dx} \varphi_2 \right ) dx = \int_0^l N_i Q \, dx + \left[ N_i q \right ]_{x=0} - \left[ N_i \overline q \right ]_{x=l} \qquad i=1,2$

For i=1:

$\int_0^l \frac{d N_1}{dx} k \left ( \frac{d N_1}{dx} \varphi_1 + \frac{d N_2}{dx} \varphi_2 \right ) dx = \int_0^l N_1 Q \, dx + q_0$

because $N_1(0)=1$ and $N_1(l)=0$.

For i=2:

$\int_0^l \frac{d N_2}{dx} k \left ( \frac{d N_1}{dx} \varphi_1 + \frac{d N_2}{dx} \varphi_2 \right ) dx = \int_0^l N_2 Q \, dx - \overline q$

because $N_2(0)=0$ and $N_2(l)=1$.

In matrix form:

$\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix} \begin{Bmatrix} \varphi_1 \\ \varphi_2 \end{Bmatrix} = \begin{Bmatrix} f_1 \\ f_2 \end{Bmatrix}$

with:

$K_{ij}=\int_0^{l} \frac{d N_i(x)}{dx} k \frac{d N_j(x)}{dx} dx$

$f_{1}=\int_0^{l} N_1(x) Q \, dx + q_0$

$f_{2}=\int_0^{l} N_2(x) Q \, dx - \overline q$

To be more practical, this system of equations can be particularised for the linear polynomial shape functions:

$N_1^{(e)}(x) = \frac{x_2^{(e)} - x}{l^{(e)}} \qquad \frac{d N_1^{(e)}(x)}{dx} = -\frac{1}{l^{(e)}}$

$N_2^{(e)}(x) = \frac{x - x_1^{(e)}}{l^{(e)}} \qquad \frac{d N_2^{(e)}(x)}{dx} = \frac{1}{l^{(e)}}$

Therefore:

$K_{11}^{(e)} = K_{22}^{(e)} = - K_{12}^{(e)} = - K_{21}^{(e)} = \frac{k}{l^{(e)}}$

$f_1 = \frac{Q l}{2} + q_0 \qquad f_2 = \frac{Q l}{2} - \overline q$

$\frac{k}{l} \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \begin{Bmatrix} \varphi_1 \\ \varphi_2 \end{Bmatrix} = \begin{Bmatrix} \frac{Q l}{2} + q_0 \\ \frac{Q l}{2} - \overline q \end{Bmatrix}$

Imposing the boundary condition $\varphi_1 = \overline \varphi$ and solving the second equation gives:

$\varphi_1 = \overline \varphi \qquad \varphi_2 = \frac{Q l^2}{2 k} - \frac{\overline q l}{k} + \overline \varphi$

$\varphi_2$ is the same value that is obtained with the analytical solution. Nevertheless, the global solution is clearly different:

$\hat \varphi (x) = \frac{l - x}{l} \varphi_1 + \frac{x}{l} \varphi_2 = \overline \varphi + \frac{1}{k} \left ( \frac{Q l}{2} - \overline q \right ) x \ne - \frac{Q}{2 k} x^2 + \frac{- \overline q + Q l}{k} x + \overline \varphi = \varphi(x)$

To obtain the reaction at the node with the fixed $\overline \varphi$ value, the first equation of the matrix system can be used:

$\frac{k}{l} (\varphi_1 - \varphi_2) = \frac{Q l}{2} + q_0 \qquad \Rightarrow \qquad \frac{k}{l} \left( \overline \varphi - \left( \frac{Q l^2}{2 k} - \frac{\overline q l}{k} + \overline \varphi \right) \right) = \frac{Q l}{2} + q_0$

$q_0 = \overline q - Q l \qquad \Rightarrow \qquad \underbrace{ q_0 + Q l }_{inflow} - \underbrace{ \overline q }_{outflow} = 0$

## Resolution using two elements with two nodes

Two elements with two nodes each means the use of three global nodes:

$\hat \varphi (x) = N_1 (x) \varphi_1 + N_2 (x) \varphi_2 + N_3 (x) \varphi_3$

Locally, this is equivalent to:

$N_1 = N_1^{(1)}$ for $0 \le x \le l/2$, $\quad N_1 = 0$ for $l/2 \le x \le l$

$N_2 = N_2^{(1)}$ for $0 \le x \le l/2$, $\quad N_2 = N_1^{(2)}$ for $l/2 \le x \le l$

$N_3 = 0$ for $0 \le x \le l/2$, $\quad N_3 = N_2^{(2)}$ for $l/2 \le x \le l$

$\int_0^l \frac{d N_i}{dx} k \left ( \frac{d N_1}{dx} \varphi_1 + \frac{d N_2}{dx} \varphi_2 + \frac{d N_3}{dx} \varphi_3 \right ) dx = \int_0^l N_i Q \, dx + \left[ N_i q \right ]_{x=0} - \left[ N_i \overline q \right ]_{x=l} \qquad i=1, 2, 3$

For i=1:

$\int_0^{l/2} \frac{d N_1^{(1)}}{dx} k \left ( \frac{d N_1^{(1)}}{dx} \varphi_1 + \frac{d N_2^{(1)}}{dx} \varphi_2 \right ) dx = \underbrace{\int_0^{l/2} N_1^{(1)} Q \, dx }_{f_1^{(1)}} + q_0$

For i=2:

$\int_0^{l/2} \frac{d N_2^{(1)}}{dx} k \left ( \frac{d N_1^{(1)}}{dx} \varphi_1 + \frac{d N_2^{(1)}}{dx} \varphi_2 \right ) dx + \int_{l/2}^{l} \frac{d N_1^{(2)}}{dx} k \left ( \frac{d N_1^{(2)}}{dx} \varphi_2 + \frac{d N_2^{(2)}}{dx} \varphi_3 \right ) dx = \underbrace{\int_0^{l/2} N_2^{(1)} Q \, dx }_{f_2^{(1)}} + \underbrace{\int_{l/2}^{l} N_1^{(2)} Q \, dx }_{f_1^{(2)}}$

For i=3:

$\int_{l/2}^{l} \frac{d N_2^{(2)}}{dx} k \left ( \frac{d N_1^{(2)}}{dx} \varphi_2 + \frac{d N_2^{(2)}}{dx} \varphi_3 \right ) dx = \underbrace{\int_{l/2}^{l} N_2^{(2)} Q \, dx }_{f_2^{(2)}} - \overline q$

In matrix form:

$\underbrace{ \begin{bmatrix} K_{11}^{(1)} & K_{12}^{(1)} & 0 \\ K_{21}^{(1)} & K_{22}^{(1)} + K_{11}^{(2)} & K_{12}^{(2)} \\ 0 & K_{21}^{(2)} & K_{22}^{(2)} \end{bmatrix} }_{K} \underbrace{ \begin{Bmatrix} \varphi_1 = \overline \varphi \\ \varphi_2 \\ \varphi_3 \end{Bmatrix} }_{a} = \underbrace{ \begin{Bmatrix} f_1^{(1)} + q_0 \\ f_2^{(1)} + f_1^{(2)} \\ f_2^{(2)} - \overline q \end{Bmatrix} }_{f}$

with:

$K_{ij}^{(e)}=\int_{l^{(e)}} \frac{d N_i^{(e)}(x)}{dx} k \frac{d N_j^{(e)}(x)}{dx} dx$

$f_{i}^{(e)}=\int_{l^{(e)}} N_i^{(e)}(x) Q \, dx$

In this case:

$K_{11}^{(e)} = K_{22}^{(e)} = - K_{12}^{(e)} = - K_{21}^{(e)} = \frac{k}{l^{(e)}}$

$f_1^{(e)} = f_2^{(e)} = \frac{Q l^{(e)}}{2}$

$\begin{bmatrix} \left( \frac{k}{l} \right ) ^{(1)} & -\left( \frac{k}{l} \right ) ^{(1)} & 0\\ -\left( \frac{k}{l} \right ) ^{(1)} & \left( \frac{k}{l} \right ) ^{(1)} + \left( \frac{k}{l} \right ) ^{(2)} & -\left( \frac{k}{l} \right ) ^{(2)} \\ 0 & - \left( \frac{k}{l} \right ) ^{(2)} & \left( \frac{k}{l} \right ) ^{(2)} \end{bmatrix} \begin{Bmatrix} \varphi_1 \\ \varphi_2 \\ \varphi_3 \end{Bmatrix} = \begin{Bmatrix} \frac{Q l^{(1)}}{2} + q_0 \\ \frac{Q l^{(1)}}{2} + \frac{Q l^{(2)}}{2} \\ \frac{Q l^{(2)}}{2} - \overline q \end{Bmatrix}$

If $l^{(e)}=l/2$:

$\frac{2 k}{l} \begin{bmatrix} 1 & -1 & 0\\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix} \begin{Bmatrix} \varphi_1 \\ \varphi_2 \\ \varphi_3 \end{Bmatrix} = \begin{Bmatrix} \frac{Q l}{4} + q_0 \\ \frac{Q l}{2} \\ \frac{Q l}{4} - \overline q \end{Bmatrix}$

$\varphi_1 = \overline \varphi \qquad \varphi_2 = \frac{3 Q l^2}{8 k} - \frac{\overline q l}{2 k} + \overline \varphi \qquad \varphi_3 = \frac{Q l^2}{2 k} - \frac{\overline q l}{k} + \overline \varphi$

and again $q_0 = \overline q - Q l$.

Come back to the Resolution of the Poisson's equation using local Shape Functions
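The two-element system above is easy to verify numerically (a sketch added here, not part of the original wiki page; the material data below are arbitrary):

```python
import numpy as np

k, l, Q, phi_bar, q_bar = 2.0, 1.0, 3.0, 1.0, 0.5  # arbitrary test values

# Assembled stiffness matrix and load vector for two equal elements (l_e = l/2)
K = (2*k/l) * np.array([[ 1., -1.,  0.],
                        [-1.,  2., -1.],
                        [ 0., -1.,  1.]])
f = np.array([Q*l/4, Q*l/2, Q*l/4 - q_bar])  # reaction q_0 unknown in row 1

# Impose the Dirichlet BC phi_1 = phi_bar and solve rows 2-3 for phi_2, phi_3
A = K[1:, 1:]
b = f[1:] - K[1:, 0]*phi_bar
phi2, phi3 = np.linalg.solve(A, b)

# Compare with the closed-form nodal values derived above
print(np.isclose(phi2, 3*Q*l**2/(8*k) - q_bar*l/(2*k) + phi_bar))  # True
print(np.isclose(phi3, Q*l**2/(2*k) - q_bar*l/k + phi_bar))        # True

# Reaction from row 1: q_0 = K[0,:] @ phi - Q*l/4, which should be q_bar - Q*l
phi = np.array([phi_bar, phi2, phi3])
q0 = K[0] @ phi - Q*l/4
print(np.isclose(q0, q_bar - Q*l))  # True
```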
2022-08-12T00:51:35
{ "domain": "upc.edu", "url": "https://kratos-wiki.cimne.upc.edu/index.php?title=Resolution_of_the_1D_Poisson%27s_equation_using_local_Shape_Functions&diff=prev&oldid=3725", "openwebmath_score": 1.0000100135803223, "openwebmath_perplexity": 5853.603797156058, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.6532132538714659 }
https://de.maplesoft.com/support/help/view.aspx?path=geometry/hyperbola&L=G
hyperbola - Maple Help

geometry

hyperbola

define a hyperbola

Calling Sequence

hyperbola(p, [A, B, C, E, F], n)
hyperbola(p, ['directrix'=dir, 'focus'=fou, 'eccentricity'=ecc], n)
hyperbola(p, ['foci'=foi, 'vertices'=ver], n)
hyperbola(p, ['foci'=foi, 'distancev'=disv], n)
hyperbola(p, ['vertices'=ver, 'distancef'=disf], n)
hyperbola(p, eqn, n)

Parameters

p - the name of the hyperbola
A, B, C, E, F - five distinct points
'directrix'=dir - dir is the line which is the directrix of the hyperbola
'focus'=fou - fou is a point which is the focus of the hyperbola
'eccentricity'=ecc - ecc is a constant bigger than one denoting the eccentricity of the hyperbola
'vertices'=ver - ver is a list of two points which are the vertices of the hyperbola
'foci'=foi - foi is a list of two points which are the foci of the hyperbola
'distancev'=disv - disv is the distance between the two vertices
'distancef'=disf - disf is the distance between the two foci
eqn - the algebraic representation of the hyperbola (i.e., a polynomial or an equation)
n - (optional) a list of two names representing the names of the horizontal-axis and vertical-axis

Description

• A hyperbola is the set of all points in the plane, the difference of whose distances from two fixed points is a given positive constant that is less than the distance between the fixed points.

• The two fixed points are called the foci. The line through the foci is called the focal axis, and the line through the center and perpendicular to the focal axis is called the conjugate axis. The hyperbola intersects the focal axis at two points, called vertices.

• Associated with every hyperbola is a pair of lines, called the asymptotes of the hyperbola. These lines intersect at the center of the hyperbola and have the property that as a point P moves along the hyperbola away from the center, the distance between P and one of the asymptotes approaches zero.

• A hyperbola p can be defined as follows:

– from five distinct points. The input is a list of five points. Note that a set of five distinct points does not necessarily define a hyperbola.

– from the directrix, focus, and eccentricity. The input is a list of the form ['directrix'=dir, 'focus'= fou, 'eccentricity' = ecc] where dir, fou, and ecc are explained above.

– from the foci and vertices. The input is a list of the form ['foci' = foi, 'vertices' = ver] where foi and ver are explained above.

– from the foci and the distance between the two vertices. The input is a list of the form ['foci' = foi, 'distancev' = disv] where foi and disv are explained above.

– from the vertices and the distance between the two foci. The input is a list of the form ['vertices' = ver, 'distancef' = disf] where ver and disf are explained above.

– from its internal representation eqn. The input is an equation or a polynomial.

If the optional argument n is not given, then:

– if the two environment variables _EnvHorizontalName and _EnvVerticalName are assigned two names, these two names will be used as the names of the horizontal-axis and vertical-axis respectively.

– if not, Maple will prompt for input of the names of the axes.

• To access the information relating to a hyperbola p, use the following function calls:

form(p) returns the form of the geometric object (i.e., hyperbola2d if p is a hyperbola).
center(p) returns the name of the center of p.
foci(p) returns a list of the two foci of p.
vertices(p) returns a list of the two vertices of p.
asymptotes(p) returns a list of two asymptotes of p. Equation(p) returns the equation that represents the hyperbola p. HorizontalName(p) returns the name of the horizontal-axis; or FAIL if the axis is not assigned a name. VerticalName(p) returns the name of the vertical-axis; or FAIL if the axis is not assigned a name. detail(p) returns a detailed description of the hyperbola p. • The command with(geometry,hyperbola) allows the use of the abbreviated form of this command. Examples > $\mathrm{with}\left(\mathrm{geometry}\right):$ define hyperbola h1 from its algebraic representation: > $\mathrm{hyperbola}\left(\mathrm{h1},9{y}^{2}-4{x}^{2}=36,\left[x,y\right]\right):$ > $\mathrm{center}\left(\mathrm{h1}\right),\mathrm{coordinates}\left(\mathrm{center}\left(\mathrm{h1}\right)\right)$ ${\mathrm{center_h1}}{,}\left[{0}{,}{0}\right]$ (1) > $\mathrm{foci}\left(\mathrm{h1}\right),\mathrm{map}\left(\mathrm{coordinates},\mathrm{foci}\left(\mathrm{h1}\right)\right)$ $\left[{\mathrm{foci_1_h1}}{,}{\mathrm{foci_2_h1}}\right]{,}\left[\left[{0}{,}{-}\sqrt{{13}}\right]{,}\left[{0}{,}\sqrt{{13}}\right]\right]$ (2) > $\mathrm{vertices}\left(\mathrm{h1}\right),\mathrm{map}\left(\mathrm{coordinates},\mathrm{vertices}\left(\mathrm{h1}\right)\right)$ $\left[{\mathrm{vertex_1_h1}}{,}{\mathrm{vertex_2_h1}}\right]{,}\left[\left[{0}{,}{-2}\right]{,}\left[{0}{,}{2}\right]\right]$ (3) > $\mathrm{asymptotes}\left(\mathrm{h1}\right),\mathrm{map}\left(\mathrm{Equation},\mathrm{asymptotes}\left(\mathrm{h1}\right)\right)$ $\left[{\mathrm{asymptote_1_h1}}{,}{\mathrm{asymptote_2_h1}}\right]{,}\left[{y}{+}\frac{{2}{}{x}}{{3}}{=}{0}{,}{y}{-}\frac{{2}{}{x}}{{3}}{=}{0}\right]$ (4) define hyperbola h2 from its foci and vertices: > $\mathrm{hyperbola}\left(\mathrm{h2},\left['\mathrm{vertices}'=\mathrm{vertices}\left(\mathrm{h1}\right),'\mathrm{foci}'=\mathrm{foci}\left(\mathrm{h1}\right)\right],\left[a,b\right]\right):$ > $\mathrm{Equation}\left(\mathrm{h2}\right)$ ${64}{}{{a}}^{{2}}{-}{144}{}{{b}}^{{2}}{+}{576}{=}{0}$ (5) define hyperbola h3 from its foci and distance between the two vertices: > $\mathrm{hyperbola}\left(\mathrm{h3},\left['\mathrm{foci}'=\mathrm{foci}\left(\mathrm{h1}\right),'\mathrm{distancev}'=\mathrm{distance}\left(\mathrm{op}\left(\mathrm{vertices}\left(\mathrm{h1}\right)\right)\right)\right],\left[m,n\right]\right):$ > $\mathrm{detail}\left(\mathrm{h3}\right)$ $\begin{array}{ll}{\text{name of the object}}& {\mathrm{h3}}\\ {\text{form of the object}}& {\mathrm{hyperbola2d}}\\ {\text{center}}& \left[{0}{,}{0}\right]\\ {\text{foci}}& \left[\left[{0}{,}{-}\sqrt{{13}}\right]{,}\left[{0}{,}\sqrt{{13}}\right]\right]\\ {\text{vertices}}& \left[\left[{0}{,}{-2}\right]{,}\left[{0}{,}{2}\right]\right]\\ {\text{the asymptotes}}& \left[{n}{+}\frac{{2}{}{m}}{{3}}{=}{0}{,}{n}{-}\frac{{2}{}{m}}{{3}}{=}{0}\right]\\ {\text{equation of the hyperbola}}& {64}{}{{m}}^{{2}}{-}{144}{}{{n}}^{{2}}{+}{576}{=}{0}\end{array}$ (6) define hyperbola h4 from its vertices and distance between the two foci: > $\mathrm{hyperbola}\left(\mathrm{h4},\left['\mathrm{vertices}'=\mathrm{vertices}\left(\mathrm{h1}\right),'\mathrm{distancef}'=\mathrm{distance}\left(\mathrm{op}\left(\mathrm{foci}\left(\mathrm{h1}\right)\right)\right)\right],\left[u,v\right]\right):$ > $\mathrm{Equation}\left(\mathrm{h4}\right)$ ${64}{}{{u}}^{{2}}{-}{144}{}{{v}}^{{2}}{+}{576}{=}{0}$ (7) define hyperbola h5 from five distinct points: > 
$\mathrm{point}\left(A,1,\frac{2}{3}\mathrm{sqrt}\left(10\right)\right),\mathrm{point}\left(B,2,-\frac{2}{3}\mathrm{sqrt}\left(13\right)\right),\mathrm{point}\left(C,3,2\mathrm{sqrt}\left(2\right)\right),\mathrm{point}\left(E,4,-\frac{10}{3}\right),\mathrm{point}\left(F,5,\frac{2}{3}\mathrm{sqrt}\left(34\right)\right):$ > $\mathrm{hyperbola}\left(\mathrm{h5},\left[A,B,C,E,F\right],\left[\mathrm{t1},\mathrm{t2}\right]\right):$ do some simplifications: > $\mathrm{remove}\left(\mathrm{type},\mathrm{radnormal}\left(\mathrm{op}\left(1,\mathrm{Equation}\left(\mathrm{h5}\right)\right)\right),\mathrm{constant}\right)$ ${4}{}{{\mathrm{t1}}}^{{2}}{-}{9}{}{{\mathrm{t2}}}^{{2}}{+}{36}$ (8) define hyperbola h6 from its directrix, focus and eccentricity: > $\mathrm{line}\left(l,x=-2,\left[x,y\right]\right):$$\mathrm{point}\left(f,1,0\right):$$e≔\frac{3}{2}:$ > $\mathrm{hyperbola}\left(\mathrm{h6},\left['\mathrm{directrix}'=l,'\mathrm{focus}'=f,'\mathrm{eccentricity}'=e\right],\left[c,d\right]\right):$ > $\mathrm{eq}≔\mathrm{Equation}\left(\mathrm{h6}\right)$ ${\mathrm{eq}}{≔}{-}\frac{{5}}{{4}}{}{{c}}^{{2}}{-}{11}{}{c}{+}{{d}}^{{2}}{-}{8}{=}{0}$ (9)
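The values in the Maple output for h1 can be cross-checked with a short Python computation (added here, not part of the help page): for $9y^2-4x^2=36$, i.e. $y^2/4 - x^2/9 = 1$, we have $a=2$, $b=3$, and $c=\sqrt{a^2+b^2}=\sqrt{13}$.

```python
import math

# 9*y**2 - 4*x**2 = 36  <=>  y**2/4 - x**2/9 = 1 (transverse axis along y)
a, b = 2.0, 3.0
c = math.hypot(a, b)                  # sqrt(13)
print([(0, -c), (0, c)])              # foci:     [0, -sqrt(13)], [0, sqrt(13)]
print([(0, -a), (0, a)])              # vertices: [0, -2], [0, 2]
print(f"asymptotes: y = +/- {a/b} x") # y = +/- (2/3) x
```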
2022-08-19T14:47:09
{ "domain": "maplesoft.com", "url": "https://de.maplesoft.com/support/help/view.aspx?path=geometry/hyperbola&L=G", "openwebmath_score": 0.7988901138305664, "openwebmath_perplexity": 988.6805630706816, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132538714659 }
https://www.clutchprep.com/physics/practice-problems/147234/suppose-a-1-6x105-kg-airplane-has-engines-that-produce-125-mw-of-power-part-a-ho
# Problem: Suppose a 1.6x10^5-kg airplane has engines that produce 125 MW of power.

Part (a). How many seconds would it take the airplane to reach a speed of 265 m/s and an altitude of 13.5 km, if air resistance were negligible?

Part (b). How much power did the plane use, in megawatts, if this process actually took 825 s?

Part (c). Using the power consumption from part (b), what would be the magnitude of the average force, in newtons, of the air resistance, if the airplane took 1050 s and traveled in a straight line at a constant acceleration to the specified altitude?

###### Expert Solution

From the conservation of energy, the total mechanical energy is given by the sum of the kinetic energy and the potential energy:

$E_{mech} = \frac{1}{2} m v^2 + m g h$
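As an illustrative check of parts (a) and (b) (a sketch added here, not part of the original page; assuming g = 9.8 m/s^2):

```python
m = 1.6e5        # aircraft mass, kg
P = 125e6        # engine power, W
v = 265.0        # target speed, m/s
h = 13.5e3       # target altitude, m
g = 9.8          # gravitational acceleration, m/s^2

E = 0.5*m*v**2 + m*g*h   # total mechanical energy gained, J
t = E / P                # part (a): time if all engine power goes into E
print(E, t)              # ~2.68e10 J, ~214 s

# Part (b): average power actually delivered over 825 s, in MW
print(E / 825 / 1e6)     # ~32.5 MW
```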
2021-01-25T21:31:03
{ "domain": "clutchprep.com", "url": "https://www.clutchprep.com/physics/practice-problems/147234/suppose-a-1-6x105-kg-airplane-has-engines-that-produce-125-mw-of-power-part-a-ho", "openwebmath_score": 0.7919530868530273, "openwebmath_perplexity": 744.395665445281, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132538714658 }
https://stats.stackexchange.com/questions/308224/expectation-of-x-given-x-c?noredirect=1
# Expectation of $X$ given $X < c$

Let $X$ be a random variable with PDF $f(\cdot)$ and CDF $\Phi(\cdot)$. I want to compute $E(X \mid X < c)$, where $c$ is some constant. Using the definition of the expected value,

$$E(X \mid X < c) = \int_{-\infty}^{+\infty}xf(x\mid x<c)dx.$$

I know that the conditional density should simplify to

$$f(x\mid x<c) = \frac{f(x)}{\Phi(c)},$$

but I can't derive it. I found a similar question, but I am still confused. Using the (Kolmogorov) definition of conditional probability, I get

$$P(X=x \mid X<c) = \frac{P\left(\{X = x\}\cap\{X<c\}\right)}{P(X<c)}.$$

But I don't see how $P\left(\{X = x\}\cap\{X<c\}\right)$ simplifies to $P(X=x)$.

• Do you have a specific density $f$ given to you, or is this a general question for any $f$? – Greenparker Oct 16 '17 at 14:32
• @Greenparker, right now I am working with a case where $X$ is normally distributed, but I would also like to know the answer for the general case. – tosik Oct 16 '17 at 14:40
• What definition of conditional probability are you working with? – whuber Oct 16 '17 at 15:05
• @whuber, I guess the Kolmogorov definition. – tosik Oct 16 '17 at 15:10
• What happens when you apply it to your situation? This definition writes every conditional probability as a quotient, so by inspecting the intended answer you should easily be able to identify the events that are involved. For instance, the presence of $\Phi(c)$ in the denominator is a strong hint that the conditioning event is $X\le c$. – whuber Oct 16 '17 at 15:17

A general solution: let $X$ be a random vector with density $f$ and $A=\{X\in B_0\}$, for some $n$-dimensional Borel set $B_0$, with $\Pr(A)>0$. The conditional density, denoted by $f(x\mid A)$, must be such that

$$\Pr\{X\in B\mid A\} = \int_B f(x\mid A)\,dx \qquad (*)$$

for every $n$-dimensional Borel set $B$. From this it's clear that

$$f(x\mid A) = \frac{f(x)}{\Pr(A)}\,I_{B_0}(x)$$

almost everywhere, where $I_{B_0}$ is the indicator function of $B_0$: $I_{B_0}(x)=1$ if $x\in B_0$, and $I_{B_0}(x)=0$ if $x\notin B_0$. Here is why: the left hand side of $(*)$ is just

$$\Pr\{X\in B\mid X\in B_0\} = \frac{\Pr\{X\in B\cap B_0\}}{\Pr\{X\in B_0\}},$$

and integrating the right hand side of $(*)$ we have

$$\int_B \frac{f(x)}{\Pr(A)}\,I_{B_0}(x)\, dx = \frac{1}{\Pr(A)} \int_{B\cap B_0} f(x)\,dx = \frac{\Pr\{X\in B\cap B_0\}}{\Pr\{X\in B_0\}}.$$

In your specific case,

$$f(x\mid X \leq c) = \frac{f(x)}{\Phi(c)}\,I_{(-\infty,c]}(x).$$

• Sorry, but it is not really clear to me how you get the expression for $f(x\mid A)$. Also, what does $I_{B_0}(x)$ stand for? – tosik Oct 16 '17 at 14:52
• @DeltaIV: In this case you can, of course. But if you think about it, this reverse order has the merit of giving a little flavor of the general descriptive definition of conditional probability/expectation given a $\sigma$-field. In the general case there is no universal constructive recipe/procedure; one has to verify that the integral equation holds for a candidate (which you may have to guess). With that said, your one-liner is probably much more adequate as an explanation to a beginner than my answer. Maybe this is why the OP didn't accept the answer... Thank you very much for your comment! – Zen Oct 17 '17 at 10:31
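For the normal case the asker mentions, the formula is easy to check by simulation (a sketch added here, not part of the thread). For a standard normal, $E[X\mid X<c] = -\varphi(c)/\Phi(c)$, where $\varphi$ is the standard normal PDF:

```python
import numpy as np
from scipy.stats import norm

# Check E[X | X < c] = (1/Phi(c)) * int_{-inf}^{c} x f(x) dx for X ~ N(0,1);
# since phi'(x) = -x*phi(x), that integral equals -phi(c).
rng = np.random.default_rng(0)
c = 0.5
x = rng.standard_normal(2_000_000)
print(x[x < c].mean())              # Monte Carlo estimate, ~ -0.509
print(-norm.pdf(c) / norm.cdf(c))   # closed form, ~ -0.509
```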
2020-04-05T18:38:09
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/308224/expectation-of-x-given-x-c?noredirect=1", "openwebmath_score": 0.9020839929580688, "openwebmath_perplexity": 150.8546479301871, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132538714658 }
http://mathhelpforum.com/calculus/216668-help-lagrange-multipliers-print.html
# Help with Lagrange Multipliers

• Apr 4th 2013, 12:21 PM
Confucius

Help with Lagrange Multipliers

I am new to the forums, so please forgive any errors in formatting a request for help. Thanks to everyone who views this!

Here is the question: "Use Lagrange multipliers to find the point (a, b) on the graph $y = e^x$, where the value $ab$ is as small as possible."

I am unsure how to solve this. Any help would be greatly appreciated, since I would like to actually understand the solution and how it is solved! Once again, thank you!

• Apr 4th 2013, 04:12 PM
Soroban

Re: Help with Lagrange Multipliers

Hello, Confucius!

Quote: Use Lagrange multipliers to find the point $(x, y)$ on the graph $y = e^x$, where the value $xy$ is as small as possible.

We want to minimize $f(x,y) \,=\,xy$ subject to the constraint $e^x - y \,=\,0$.

We have: . $F(x,y,\lambda) \;=\;xy + \lambda(e^x-y)$

Set the partial derivatives equal to zero, and solve.

. . $\begin{array}{cccccccc}F_x &=& y + \lambda e^x &=& 0 & [1] \\ F_y &=& x - \lambda &=& 0 & [2] \\ F_{\lambda} &=& e^x-y &=& 0 &[3] \end{array}$

From [2]: . $\lambda \,=\,x$

Substitute into [1]: . $y + xe^x \:=\:0 \quad\Rightarrow\quad y \:=\:-xe^x$

Substitute into [3]: . $e^x - (-xe^x) \:=\:0 \quad\Rightarrow\quad e^x + xe^x \:=\:0$

. . . . . . . . . . . . . . . . . $e^x(1+x) \:=\:0 \quad\Rightarrow\quad x \:=\:-1$

Substitute into [3]: . $e^{-1} - y \:=\:0 \quad\Rightarrow\quad y \:=\:\tfrac{1}{e}$

Therefore: . $(x,y) \;=\;\left(-1,\,\frac{1}{e}\right)$

• Apr 5th 2013, 07:27 AM
Confucius

Re: Help with Lagrange Multipliers

Thank you very much!
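Soroban's result is easy to double-check by substituting the constraint into the objective (a sketch added here, not part of the thread): on $y=e^x$ the product is $g(x)=xe^x$, which is minimized where $g'(x)=e^x(1+x)=0$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
g = x * sp.exp(x)                # the product xy with y = e^x substituted in

crit = sp.solve(sp.diff(g, x), x)
print(crit)                       # [-1]
print(g.subs(x, crit[0]))         # -exp(-1), the minimal value of xy
print(sp.diff(g, x, 2).subs(x, crit[0]))  # exp(-1) > 0, so it is a minimum
```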
2016-09-25T16:50:57
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/216668-help-lagrange-multipliers-print.html", "openwebmath_score": 0.9092636108398438, "openwebmath_perplexity": 897.3627803256613, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132538714658 }
https://fr.maplesoft.com/support/help/maple/view.aspx?path=odeadvisor%2Fquadrature
Solving ODEs That Are in Quadrature Format - Maple Programming Help

Solving ODEs That Are in Quadrature Format

Description

• An ODE is said to be in quadrature format when the following conditions are met:

1) the ODE is of first order and the right hand sides below depend only on x or y(x):

$\mathrm{quadrature\_1\_x\_ode} := \frac{d}{dx}\,y(x) = F(x)$ (1)

$\mathrm{quadrature\_1\_y\_ode} := \frac{d}{dx}\,y(x) = F(y(x))$ (2)

2) the ODE is of higher order and the right hand side depends only on x. For example:

$\mathrm{quadrature\_h\_x\_ode} := \frac{d^4}{dx^4}\,y(x) = F(x)$ (3)

where F is an arbitrary function. These ODEs are just integrals in disguise, and are solved mainly by integrating both sides.

Examples

$[\mathrm{odeadvisor}, \mathrm{symgen}]$ (4)

$[\mathrm{\_quadrature}]$ (5)

$y(x) = \int F(x)\,dx + \mathrm{\_C1}$ (6)

$[\mathrm{\_quadrature}]$ (7)

$x - \left(\int^{y(x)} \frac{1}{F(\mathrm{\_a})}\,d\mathrm{\_a}\right) + \mathrm{\_C1} = 0$ (8)

$[\mathrm{\_\xi} = 0, \mathrm{\_\eta} = 1]$ (9)

$[\mathrm{\_\xi} = 1, \mathrm{\_\eta} = 0]$ (10)
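For comparison (an addition, not part of the Maple page), the first-order quadrature case can be reproduced in Python with SymPy, which likewise just integrates the right hand side:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
F = sp.Function('F')  # arbitrary function, as in the Maple examples

# y'(x) = F(x) is solved by direct integration
sol = sp.dsolve(sp.Eq(y(x).diff(x), F(x)), y(x))
print(sol)  # Eq(y(x), C1 + Integral(F(x), x))
```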
2021-04-15T01:48:19
{ "domain": "maplesoft.com", "url": "https://fr.maplesoft.com/support/help/maple/view.aspx?path=odeadvisor%2Fquadrature", "openwebmath_score": 0.9553918838500977, "openwebmath_perplexity": 331.9397852848475, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132538714658 }
https://mathlesstraveled.com/
## Book review: Opt Art [Disclosure of Material Connection: Princeton Press kindly provided me with a free review copy of this book. I was not required to write a positive review. The opinions expressed are my own.] Opt Art: From Mathematical Optimization to Visual Design Robert Bosch Princeton University Press, 2019 I recently finished reading Robert Bosch’s new book, Opt Art. It was a quick read, both because it’s not actually that long, but also because it was fascinating and beautiful and I didn’t want to put it down! The central theme of the book is using linear optimization (aka “linear programming”) to design and generate art. The resulting art can be simply beautiful for its own sake, or can also give us insight into underlying mathematics. Linear optimization is something I knew about in a general sense, but after reading Bosch’s book I understand it much better—both the details of how the simplex algorithm works, and especially the various ways linear optimization can be applied. I think Bosch does a fantastic job explaining things in a way that gives real insight but doesn’t get bogged down in too much detail. (In a few places I personally wish there had been a few more details—but it’s quite possible that adding more detail would have made the book better for me but worse for a bunch of other people, i.e. it would not be a global optimum!) A Celtic knot pattern created out of a single continuous TSP tour Another thing the book explains really well is how the Travelling Salesman Problem (TSP) can be solved using linear optimization. I had no idea there was a connection between the two topics. I’m sure the connection is explained in great detail in the TSP book by William Cook, which I read 7 years ago, but for some reason when I read that I guess it didn’t really click. But from reading Bosch’s book I feel like I now know enough to put together the details and implement a basic TSP solver myself if I wanted to (maybe I will)! I’m definitely inspired to use some of Bosch’s techniques to make my own artwork—if I do, I will obviously post about it here! ## More on Human Randomness In a post a few months ago I asked whether there is a way for a human to reliably generate truly random numbers. I got a lot of great responses and I think it’s worth summarizing them here! ## Randomness in poker strategies Robert Anderson noted that poker players sometimes use the second hand of a watch to introduce some randomness into their strategy. I assumed this would be something like getting a random bit based on whether the number of seconds is even or odd, but Pete McAllister chimed in to say that it is usually something more like dividing a minute into chunks, and making a decision based on which chunk the current second is in. For example, if you want to make one choice 20 percent of the time and another choice 80 percent of the time, you could just make the first choice if the second hand is between 0–12 seconds, and the other choice otherwise. In game theory this is called a “mixed” strategy, and this kind of strategy can arise naturally as the Nash equilibrium of certain kinds of games, so it’s not surprising to me that it would show up in high-level poker. I found conflicting advice about this online; some people were claiming that you should not use randomness when playing poker, but I did find a website that talked about implementing this kind of mixed strategy using the second hand of a watch, and it seemed to be a website with pretty high-level poker advice. 
In any case, if you have a phone or a watch with you, this does suggest some strategies for generating random numbers: for example, look at the last digit of the seconds to get a random number from 0–9, or whether it is even or odd to get a bit. Or you could just take the number of seconds directly as a random number between 0–59. Of course this only works once and then you have to wait a while before you can do it again. Also, it turns out that my phone doesn’t show seconds by default. Taking the ones digit of the minutes as a random number from 0–9 should work too, but the tens digit of the minutes seems like it’s “not random enough”, in the sense that it might be correlated with whatever it is that I’m doing. Of course, a phone or watch counts as an “aid”, but most people tend to carry around something like this all the time, so it’s relatively practical. On the other hand, if you’re going to use a phone anyway, you should just use an app for generating random numbers. ## Bodies and clothing • Naren Sundar commented that hair is pretty random, but admitted that it would be hard to measure. • Frederik suggested spitting, or throwing your shoe in the air and seeing which way the toe points when it lands. I like the shoe idea, but on the other hand it’s somewhat obtrusive to take your shoe off and throw it in the air every time you want a random bit! And what if you’re not wearing shoes? I’m also afraid I might throw my shoe in the same way every time; I’m not sure how random it would be in practice. ## Minds and memorization Kaligule suggested taking whatever song is currently running through your head, stopping it at a random point, and getting a random bit by seeing whether the number of consonants in the next word is even or odd. This is a cool idea, and is the only proposal that really meets my criterion of generating randomness “without aids”. I think for some people it could work quite well. “Stopping at a random point” is somewhat problematic—you might be biased to stop at certain points more than others—but it’s pretty hard to know how many consonants are in a word before you count, so I’m not sure this would really bias the results that much. Unfortunately, however, it won’t work for me because, although I do always have some kind of music running through my head, it often has no lyrics! Kaligule suggested using whether the melody goes up or down, but this is obvious (unlike number of consonants in a word) and too easy to “cheat”, i.e. pick a stopping point that gives me the bit I “want”. This suggested another idea to me, however: just pre-generate some random data and put some up-front effort into memorizing it. Whenever you need some randomness, use the next part of the random sequence you memorized. When you use it up, generate another and memorize that instead. This leaves a number of questions: • How do you reliably keep track of where you are in the sequence? I don’t actually have a good answer to this. I think in practice I would get confused and forget whether I had already used a certain part or not. Though maybe this doesn’t really matter that much. • What format would be most effective, and how do you go about memorizing it? Some ideas: • My first idea is to generate a sequence of random bits, and then write a story where sequential words have even or odd numbers of letters corresponding to the bits in your sequence. 
Unfortunately, this seems like a relatively inefficient way to memorize data, but writing a story that corresponds to a given sequence of bits does sound like a fun exercise in constrained writing.
• Alternatively, one could simply generate a random sequence of digits (or hexadecimal digits) and memorize them using whatever sort of memorization technique you like (e.g. a memory palace). This is less fun but probably more effective. Memorizing a story sounds like it would be easier, but I don’t think it is, especially since you would have to memorize it word-for-word and you only get one bit per word memorized, as opposed to something like e.g. four bits per hexadecimal digit.

I have generated some random hexadecimal digits but haven’t gotten around to trying to memorize them yet. If I do I will definitely report on the experience. In the meantime, I’m also open to more ideas!

## Book review: The Mathematics of Various Entertaining Subjects, Volume 3

I have a bunch of books in the queue to review—I hope to begin writing these more regularly again!

[Disclosure of Material Connection: Princeton Press kindly provided me with a free review copy of this book. I was not required to write a positive review. The opinions expressed are my own.]

The Mathematics of Various Entertaining Subjects, Volume 3: The Magic of Mathematics
Jennifer Beineke and Jason Rosenhouse, eds.
Princeton University Press, 2019

The MOVES conference takes place every two years in New York. MOVES is an acronym for “The Mathematics of Various Entertaining Subjects”, and the conference is a celebration of math that isn’t necessarily considered an Important Research Topic, and doesn’t necessarily have Important Applications—but simply math that is fun for its own sake. (Although in hindsight, math that starts out as Just For Fun often seems to end up with important applications too—for example, think of graph theory or probability theory.) The most recent conference took place just a few months ago, in August 2019; the next one will be in August 2021 (you can already register if you like to plan that far ahead!).

This book is basically the conference proceedings from 2017—a collection of papers that were presented at the conference, published all together in book form. So it’s important to state at the outset that although the topics are entertaining, this really is a collection of research papers. Overall this is definitely not a book written for a general audience! I had to work hard to understand some of the papers, and some of them lost me completely. However, there’s some great stuff in here that rewards patient study. Some of my favorites that are more generally accessible include:

• A chapter on “Wiggly Games and Burnside’s Lemma” that does a great job explaining Burnside’s Lemma—a classic result about counting things with symmetry, at the intersection of combinatorics and group theory—via applications to counting the number of possible tiles in several different games.
• “Solving Puzzles Backwards” has some nice puzzles and a discussion of elegant ways to approach their solutions.
• “Should we Call Them Flexa-Bands?” has some interesting reflections on the topology of different types of flexagons.
Some other things I particularly enjoyed but which are not so accessible without some background include a chapter on the computational complexity of losing at checkers, a chapter on “Kings, sages, hats, and codes” that I wish I understood better, and a chapter on the combinatorics of Legos. There’s so much other stuff in there on such wildly varying topics that it’s impossible to summarize. In any case, definitely recommended if you are a professional mathematician looking for some fun yet still technically meaty reading; definitely not recommended if you’re looking for a casual read of a popular math book. And if you’re somewhere in between—that is, you’re not a professional mathematician but you aspire to read and understand things on that level—this could honestly be a great place to start! ## A combinatorial proof: PIE a la mode! Continuing from my last post in this series, we’re trying to show that $S = n!$, where $S$ is defined as $\displaystyle S = \sum_{i=0}^n (-1)^i \binom{n}{i} (k+(n-i))^n$ Recall that we defined $M_a$ as the set of all functions from a set of size $n$ (visualized as $n$ blue dots) to a set of size $k + n$ (visualized as $k$ yellow dots on top of $n$ blue dots) such that the blue dot numbered $a$ is missing. I also explained in my previous post that the functions with at least one blue dot missing from the output are exactly the “bad” functions, that is, the functions which do not correspond to a one-to-one matching between the blue dots on the left and the blue dots on the right. As an example, the function pictured above is an element of $M_1$, as well as an element of $M_3$. (That means it’s also an element of the intersection $M_1 \cap M_3$—this will be important later!) Let $F$ be the set of all functions from $n$ to $k+n$, and let $P$ be the set of “good” functions, that is, the subset of $F$ consisting of matchings (aka Permutations—I couldn’t use $M$ for Matchings because $M$ is already taken!) between the blue sets. We already know that the number of matchings between two sets of size $n$, that is, $|P|$, is equal to $n!$. However, let’s see if we can count them a different way. Every function is either “good” or “bad”, so we can describe the set of good functions as what’s left over when we remove all the bad ones: $\displaystyle P = F - \bigcup_{a=1}^n M_a$ (Notice how we can’t just write $P = F - M_1 - M_2 - \dots - M_n$, because the $M_a$ sets overlap! But if we union all of them we’ll get each “bad” function once.) In other words, we want to count the functions that aren’t in any of the $M_a$. But this is exactly what the Principle of Inclusion-Exclusion (PIE) is for! PIE tells us that the size of this set is $\displaystyle |P| = |F| - \left|\bigcup_{a=1}^n M_a \right| = \sum_{T \subseteq \{1 \dots n\}} (-1)^{|T|}\left| \bigcap_{a \in T} M_a \right|,$ that is, we take all possible intersections of some of the $M_a$, and either add or subtract the size of each intersection depending on whether the number of sets being intersected is even or odd. We’re getting close! To simplify this more we’ll need to figure out what those intersections $\bigcap_{a \in T} M_a$ look like. ## Intersections What does $M_a \cap M_b$ look like? The members of $M_a \cap M_b$ are exactly those functions which are in both $M_a$ and $M_b$, so $M_a \cap M_b$ contains all the functions that are missing both $a$ and $b$ (and possibly other elements). Likewise, $M_a \cap M_b \cap M_c$ contains all the functions that are missing (at least) $a$, $b$, and $c$; and so on. 
Last time we argued that $|M_a| = (k + (n-1))^n$, since functions from $n$ to $k+n$ that are missing $a$ can be put into a 1-1 matching with arbitrary functions from $n$ to $k + (n-1)$, just by deleting or inserting element $a$: So what about an intersection—how big is $M_a \cap M_b$ (assuming $a \neq b$)? By a similar argument, it must be $(k + (n-2))^n$, since we can match up each function in $M_a \cap M_b$ with a function from $n$ to $k+(n-2)$: just delete or insert both elements $a$ and $b$, like this: Generalizing, if we have a subset $T \subseteq \{1, \dots, n\}$ and intersect all the $M_a$ for $a \in T$, we get the set of functions whose output is missing all the elements of $T$, and we can match them up with functions from $n$ to $k + (n-|T|)$. In formal notation, $\displaystyle \left| \bigcap_{a \in T} M_a \right| = (k + (n-|T|))^n$ Substituting this into the previous expression for the number of blue matchings $|P|$, we get $\displaystyle |P| = \sum_{T \subseteq \{1 \dots n\}} (-1)^{|T|}(k + (n-|T|))^n$ ## Counting subsets Notice that the value of $(-1)^{|T|}(k + (n-|T|))^n$ depends only on the size of the subset $T$ and not on its specific elements. This makes sense: the number of functions missing some particular number of elements is the same no matter which specific elements we pick to be missing. So for each particular size $|T| = i$, we are adding up a bunch of copies of the same value $(-1)^i (k + (n-i))^n$—as many copies as there are different subsets of size $i$. The number of subsets $T$ of size $i$ is $\binom n i$, the number of ways of choosing exactly $i$ things out of $n$. Therefore, if we add things up size by size instead of subset by subset, we get $\begin{array}{rcl} |P| &=& \displaystyle \sum_{\text{all possible sizes } i} (\text{number of subsets of size } i) \cdot \left[(-1)^{i}(k + (n-i))^n \right] \\[1em] &=& \displaystyle \sum_{i=0}^n \binom n i (-1)^{i}(k + (n-i))^n\end{array}$ But this is exactly the expression for $S$ that we came up with earlier! And since we already know $|P| = n!$ this means that $S = n!$ too. And that’s essentially it for the proof! I think there’s still more to say about the big picture, though. In a future post I’ll wrap things up and offer some reflections on why I find this interesting and where else it might lead. ## A few words about PWW #27 The images in my last post were particular realizations of the famous Sieve of Eratosthenes. The basic idea of the sieve is to repeatedly do the following: • Circle the next number bigger than $1$ that is not yet crossed out, call it $p$. • Cross out every $p$th number after $p$, that is, all the multiples of $p$. These are not prime since they are multiples of $p$. This is an efficient way to find all the primes up to a given limit. Note that it doesn’t require doing any division or factoring, just adding. Here’s the image of the $10 \times 10$ sieve again: Some questions for you to ponder: • Why can we always cross out multiples of each prime $p$ using parallel straight lines? • When will the lines be vertical? When will they be diagonal? • Is there only one way to cross out multiples of each $p$ with straight lines, or could we have chosen different lines? • Why are the primes on the top row circled, while the rest of the primes are in boxes? What’s the difference? And just for fun here’s the sieve diagram for $n = 29$, one of my favorites. Click here for a larger version. 
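The two-step sieve procedure above transcribes almost directly into code; here is a minimal illustrative sketch (not part of the original post), which indeed uses only additions via the range step:

```python
def sieve(limit):
    """Sieve of Eratosthenes: all primes up to `limit`."""
    crossed_out = [False] * (limit + 1)
    primes = []
    for p in range(2, limit + 1):
        if not crossed_out[p]:                     # "circle" the next survivor
            primes.append(p)
            for multiple in range(2 * p, limit + 1, p):
                crossed_out[multiple] = True       # cross out every p-th number
    return primes

print(sieve(100))   # [2, 3, 5, 7, 11, ..., 97]
```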
## Post without words #27

(This was an image-only post.)

## Order of operations considered harmful

[The title is a half-joking reference to Edsger Dijkstra’s classic paper, Go To Statement Considered Harmful; see here for more context.]

Everyone is probably familiar with the so-called “order of operations”, which is

a collection of rules that reflect conventions about which procedures to perform first in order to evaluate a given mathematical expression.

(the above quote is from the Wikipedia page on Order of Operations). If you grew up in the US, like me, you might have memorized the acronym PEMDAS; I have recently learned that people in other parts of the world use other acronyms, like BEDMAS. In any case, these mnemonics help you remember the order in which you should conventionally perform various arithmetic operations, right? For example:

$\begin{array}{rl} & 1 + 3 \times (5 - 3)^4/2 \\ = & 1 + 3 \times 2^4 / 2 \\ = & 1 + 3 \times 16 / 2 \\ = & 1 + 48 / 2 \\ = & 1 + 24 \\ = & 25.\end{array}$

Makes sense, right? We did the stuff in parentheses first, then the exponent, then the multiplication and division (from left to right), then the addition.

OK, pop quiz: what is the value of $3 + 0 \times 37^{984}$? Of course it is $3$, you say. Aha, but did you follow the order of operations? I thought not. Supposedly the order of operations tells you that you have to perform the exponentiation before the multiplication, but I am willing to bet that you skipped straight over the exponentiation and did the multiplication first! Really, you should be ashamed of yourself.

Another pop quiz: what is the value of $2 \times 7 + 3 \times 5 + 4 \times 6$? Well, let’s see:

$\begin{array}{rl}& 2 \times 7 + 3 \times 5 + 4 \times 6 \\[0.5em] =& 14 + 3 \times 5 + 4 \times 6 \\[0.5em] =& 14 + 15 + 4 \times 6 \\[0.5em] =& 29 + 4 \times 6 \\[0.5em] =& 29 + 24 \\[0.5em] =& 53\end{array}$

Easy peasy. But wait, did you notice what I did there? I did one of the additions before I did the last multiplication! According to the “order of operations” this is not allowed; you are supposed to perform multiplication before addition, right??

One more quiz: solve for $y$ in the equation $20 = 2 + 3y$. Of course we can proceed by subtracting $2$ from both sides, resulting in $18 = 3y$, and then dividing both sides by $3$, finding that $y = 6$ is the unique solution. How did we know not to add the $2$ and the $3$, resulting in the bogus equation $20 = 5y$? Because of the order of operations, of course… but wait, what does the order of operations even mean here? Did you notice that we never actually performed the multiplication or addition at all?

My point is this: casting the “order of operations” in performative terms—as in, the order we should perform the operations—is grossly misleading. You might think all my examples can be explained away easily enough—and I agree—but if you take literally the idea of the order of operations telling us what order to perform the operations (as many students will, and do), they don’t make sense. In fact, I would argue that saying the order of operations is about the order of performing the operations is worse than misleading, it is just plain wrong.

## So what is it, really?

I made the title of this post provocative on purpose, but of course I am not actually arguing against the order of operations in and of itself. We certainly do need to agree on whether $1 + 2 \times 3$ should be $7$ or $9$.
But we need a better way of explaining it than saying it is the “order in which we perform the operations”. Any mathematical expression is fundamentally a tree, where each operation is a node in the tree with the things it operates on as subtrees. For example, consider the example expression $1 + 3 \times (5 - 3)^4/2$ from the beginning of the post. As a tree, it looks like this: This tells us that at the very top level, the expression consists of an addition, specifically, the addition of the number $1$ and some other expression; that other expression is a division (quick check: do you see why the division should go here, and not the multiplication?), and so on. However, pretty much all the writing systems we humans have developed are linear, that is, they consist of a sequence of symbols one after the other. But when you go to write down a tree as a linear sequence of symbols you run into problems. For example, which tree does $3 \mathbin{\triangle} 5 \mathbin{\square} 2$ represent? Without further information there’s no way to tell; the expression $3 \mathbin{\triangle} 5 \mathbin{\square} 2$ is ambiguous. There are two ways to resolve such ambiguity. One is just to add parentheses around every operation. For example, when fully parenthesized, the example expression from before looks like this: $(1 + ((3 \times ((5 - 3)^4))/2))$ With one set of parentheses for every tree node, this is an unambiguous way to represent a tree as a linear sequence of symbols. For example, in the case of $3\mathbin{\triangle} 5 \mathbin{\square} 2$, we would be forced to write either $(3\mathbin{\triangle} (5 \mathbin{\square} 2))$ or $((3\mathbin{\triangle} 5) \mathbin{\square} 2)$, fully specifying which tree we mean. But, of course, fully parenthesized expressions are quite tedious to read and write. This leads to the second method for resolving ambiguity: come up with some conventions that specify how to resolve ambiguity when it arises. For example, if we had a convention that says $\square$ has a higher precedence than (i.e. “comes before”, i.e. “binds tighter than”) $\triangle$, then $3 \mathbin{\triangle} 5 \mathbin{\square} 2$ is no longer ambiguous: it must mean the left-hand tree (with the $\triangle$ at the top), and if we wanted the other tree we would have to use explicit parentheses, as in $(3 \mathbin{\triangle} 5) \mathbin{\square} 2$. Of course, this is exactly what the “order of operations” is: a set of conventions that tells us how to interpret otherwise ambiguous linear expressions as unambiguous trees. In particular, the operations that we usually talk of being “performed first” really just have higher precedence than the other operations. Think of the operations as “magnets” trying to attract things to them; higher precedence means stronger magnets. I wouldn’t necessarily phrase it this way to a student, though. I have never taught elementary or middle school math, or whichever level it is where this is introduced, but I think if I did I would just tell them: The order of operations tells us where to put parentheses.
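One way to see the tree hiding behind the conventions is to ask a real parser. A small illustrative Python snippet (not from the original post; `ast.dump`'s `indent` argument needs Python 3.9+):

```python
import ast

# The parser applies the precedence conventions and hands back the tree:
# addition at the root, with the division (not the multiplication) as its
# right child, exactly as in the tree pictured above.
tree = ast.parse("1 + 3 * (5 - 3) ** 4 / 2", mode="eval")
print(ast.dump(tree, indent=2))

print(eval("1 + 3 * (5 - 3) ** 4 / 2"))  # 25.0
```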
2019-12-08T15:09:14
{ "domain": "mathlesstraveled.com", "url": "https://mathlesstraveled.com/", "openwebmath_score": 0.6345001459121704, "openwebmath_perplexity": 362.3442499335215, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785088, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132533178939 }
https://puzzling.stackexchange.com/questions/2049/guess-the-function-for-scatterplot-of-number-theoretic-function/2050
# Guess the Function for Scatterplot of Number Theoretic Function

To my knowledge, this puzzle is not previously published (except by me on google+ recently), but I would be interested to hear of any info otherwise. The following graph was generated using a simple ruby program and gnuplot. It graphs a "basic/fundamental" number theoretic property of the natural numbers. Can you figure out what it is? Increasingly detailed hints below if you want more help. The answer and further background will be posted as a comment after some time.

hint1: "fundamental" as in "fundamental theorem of arithmetic"

hint2: one word: prime

hint3: prime decomposition

• What does "graphs a 'basic fundamental' number theoretic property of the natural numbers" mean? Can you give an example of how you graph a property of numbers? – klm123 Aug 8 '14 at 20:30
• let x be a natural number and y=f(x) be some simple number-theoretic function of x. – vzn Aug 8 '14 at 20:37
• and what then? How do you graph it? Do you take all possible natural numbers x and put (x, y) points on the graph? – klm123 Aug 8 '14 at 20:50
• as the axis labels on the graph state, x ∈ [1..1000] – vzn Aug 8 '14 at 21:09

$x$ against its largest prime factor. The points on the top line are primes, the next line down are primes times two, then primes times three, and so on, with the scattering at the bottom being the result of numbers that are a product of more than two primes.
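The question says the picture came from a simple ruby program and gnuplot; here is a rough Python/matplotlib equivalent of the spoiled answer (a reconstruction, not the original program):

```python
import matplotlib.pyplot as plt

def largest_prime_factor(n):
    largest, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return n if n > 1 else largest   # any leftover n > 1 is the biggest prime factor

xs = range(2, 1001)
plt.scatter(xs, [largest_prime_factor(x) for x in xs], s=2)
plt.xlabel("x"); plt.ylabel("largest prime factor of x")
plt.show()   # top line: primes; next line: 2*prime; then 3*prime; ...
```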
2019-10-23T08:48:52
{ "domain": "stackexchange.com", "url": "https://puzzling.stackexchange.com/questions/2049/guess-the-function-for-scatterplot-of-number-theoretic-function/2050", "openwebmath_score": 0.4510771632194519, "openwebmath_perplexity": 896.1373275971803, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132533178939 }
https://mathoverflow.net/questions/229553/does-every-automorphism-of-a-separably-rationally-connected-variety-have-a-fixed
# Does every automorphism of a separably rationally connected variety have a fixed point? Let $k$ be an algebraically closed field. Let $X$ be a smooth, projective variety over $k$ that is separably rationally connected, i.e., there exists a $k$-morphism $u:\mathbb{P}^1_k \to X$ such that $u^*T_X$ is isomorphic to $\mathcal{O}_{\mathbb{P}^1_k}(a_1)\oplus \dots \oplus \mathcal{O}_{\mathbb{P}^1_k}(a_n)$ for positive integers $a_1,\dots,a_n$. Let $f:X\to X$ be a $k$-automorphism. Question. Does there exist a $k$-point of $X$ that is fixed by $f$? This is true if $k$ is of characteristic $0$ by the Atiyah-Bott fixed point theorem. That theorem was extended to positive characteristic by many authors; one reference is Corollaire 6.12 (Appendice), Exposé III, SGA 5. This extension does not quite give the result above; it does give the result if $h^i(X,\mathcal{O}_X)$ vanishes for all $i>0$, but that is unknown for arbitrary separably rationally connected, smooth, projective varieties. By a theorem of Kollár, this is also true in positive characteristic if $f$ has finite order. By the "spreading out" technique, this implies the result if there exists an ample invertible sheaf $\mathcal{L}$ such that $f^*\mathcal{L}$ is isomorphic to $\mathcal{L}$, i.e., if for some $n>0$, the iterate $f^n$ is in the identity component of the automorphism group scheme of $X$. In particular, the result is true if $X$ is separably rationally connected and Fano. However, there do exist pairs $(X,f)$ with $X$ a separably rationally connected variety and $f$ an automorphism that preserves no ample divisor class, e.g., translations on the elliptic surface obtained from $\mathbb{P}^2$ by blowing up the base locus of a pencil of plane cubics. This question is related to the following questions: let $p$ be a prime integer, let $q$ be $p^r$, and let $Y/\mathbb{F}_q$ be a smooth, projective variety whose base change to $\overline{\mathbb{F}}_q$ is separably rationally connected. Let $g:Y \to Y$ be an $\mathbb{F}_q$-automorphism. What can we say about the induced permutation of the finite set $Y(\mathbb{F}_q)$ (whose cardinality is congruent to $1$ modulo $q$, by work of Esnault)? Does there exist an integer $N$ depending only on $p$ (not $q$) and geometric properties of $Y$ and $g$ such that there exists an orbit of size $\leq N$? This is not a full answer, but a partial result. Recall that a rationally chain connected variety has rational decomposition of the diagonal: there exists $N \in \mathbb Z_{> 0}$ and a cycle $Z$ supported on $X \times D$ such that $$N \cdot \Delta_X = N \cdot (\{x\} \times X) + Z \in \operatorname{CH}^n(X \times X).$$ This always forces $H^0(X,\Omega_X^i) = 0$ for $i > 0$ (in characteristic $p > 0$ we need $(N,p) = 1$, cf. Totaro [Tot], Lemma 2.2). (But for separably rationally connected, we always have $H^0(X,\Omega_X^i) = 0$ for $i > 0$, even in positive characteristic.) In characteristic $0$, this forces $H^i(X,\mathcal O_X) = 0$ for $i > 0$, by Hodge symmetry. This would imply the claim, as you note, using a Woods Hole trace formula. In analogy with Totaro's theorem, we have the following: Theorem. Suppose $(N,p) = 1$, i.e. $X$ has decomposition of the diagonal in $\operatorname{CH} \otimes \mathbb Z/p\mathbb Z$. Then $H^i(X,\mathcal O_X) = 0$ for all $i > 0$. Proof. Working in $\operatorname{CH} \otimes \mathbb Z/p\mathbb Z$, we may assume $N = 1$ after multiplying by its inverse. We use the cycle class map to Hodge cohomology, cf. Chatzistamatiou–Rülling [CR]. 
By Proposition 3.2.2 (1) of [loc cit], the map $$\bigoplus_{i,j} H^i(X,\Omega_X^j) \to \bigoplus_{i,j} H^i(X,\Omega_X^j)$$ induced by $Z$ vanishes on the part where $j = 0$. Thus, on $H^i(X,\mathcal O_X)$, the map induced by $\{x\} \times X$ is the identity. But the map induced by $\{x\} \times X$ factors through $H^i(\{x\}, \mathcal O_{\{x\}})$, which is $0$ for $i > 0$. $\square$ Thus, any rationally connected variety with $H^i(X, \mathcal O_X) \neq 0$ needs to have $N$ (as above) divisible by $p$. We do not know if this is possible when $X$ is separably rationally connected (which is what you're interested in), but it certainly is possible for merely rationally connected: Example. Let $X$ be a supersingular K3 surface. Then $X$ is unirational (at least if $p \geq 5$), by Liedtke [Lie]. However, since $X$ is a K3 surface, we have $h^2(X,\mathcal O_X) = h^0(X,\Omega_X^2) = 1$. But of course $X$ cannot be separably rationally connected. To conclude, the difference between rational, mod $p$, and integral decomposition of the diagonal, as well as between rationally (chain) connected and separably rationally connected, remains very subtle. But maybe this gives some ideas for how to construct counterexamples: we need some $p$ stuff happening. References. [CR] Andre Chatzistamatiou and Kay Rülling. Higher direct images of the structure sheaf in positive characteristic. Algebra & Number Theory 5 (2011), no. 6, 693–775. MR2923726 [Lie] Christian Liedtke. Supersingular K3 surfaces are unirational. Invent. Math. 200 (2015), no. 3, 979--1014. MR3348142 [Tot] Burt Totaro. Hypersurfaces that are not stably rational. J. Amer. Math. Soc. 29 (2016), no. 3, 883–891. MR3486175. • That is interesting. My own search for counterexamples focuses on separably rationally connected varieties arising together with an automorphism fixing no ample divisor. Nov 12, 2016 at 10:20
2023-04-01T07:19:01
{ "domain": "mathoverflow.net", "url": "https://mathoverflow.net/questions/229553/does-every-automorphism-of-a-separably-rationally-connected-variety-have-a-fixed", "openwebmath_score": 0.9210120439529419, "openwebmath_perplexity": 109.32244560118517, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132533178939 }
https://www.bartleby.com/solution-answer/chapter-10-problem-1012ex-accounting-27th-edition/9781337272094/revision-of-depreciation-a-building-with-a-cost-of-1200000-has-an-estimated-residual-value-of/39289024-98dc-11e8-ada4-0ee91056875a
# Revision of depreciation

A building with a cost of $1,200,000 has an estimated residual value of $250,000, has an estimated useful life of 40 years, and is depreciated by the straight-line method.

(a) What is the amount of the annual depreciation?
(b) What is the book value at the end of the twenty-eighth year of use?
(c) If at the start of the twenty-ninth year it is estimated that the remaining life is 10 years and that the residual value is $180,000, what is the depreciation expense for each of the remaining 10 years?

(Accounting, 27th Edition, Warren et al., Cengage Learning, ISBN 9781337272094; Chapter 10, Problem 10.12EX.)

## Solution

Straight-line depreciation: under the straight-line method of depreciation, the same amount of depreciation is allocated every year over the estimated useful life of an asset. The formula to calculate the depreciation cost of the asset using the salvage (residual) value is:

Depreciation cost = (original cost of the asset − residual value) / estimated useful life of the asset

Here the cost of the building is $1,200,000, the estimated residual value is $250,000, and the estimated useful life is 40 years. Part (b) asks for the book value at the end of the twenty-eighth year of use, and part (c) for the revised annual depreciation for each of the remaining 10 years.
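The scraped page breaks off before the numbers, so here is the completed arithmetic (a straightforward computation following the formula above):

```python
cost, residual, life = 1_200_000, 250_000, 40

annual = (cost - residual) / life          # (a) 950,000 / 40 = 23,750 per year
book_value_28 = cost - 28 * annual         # (b) 1,200,000 - 665,000 = 535,000

new_residual, remaining_life = 180_000, 10
revised = (book_value_28 - new_residual) / remaining_life   # (c) 355,000 / 10 = 35,500
print(annual, book_value_28, revised)
```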
2020-10-24T10:32:05
{ "domain": "bartleby.com", "url": "https://www.bartleby.com/solution-answer/chapter-10-problem-1012ex-accounting-27th-edition/9781337272094/revision-of-depreciation-a-building-with-a-cost-of-1200000-has-an-estimated-residual-value-of/39289024-98dc-11e8-ada4-0ee91056875a", "openwebmath_score": 0.4483798146247864, "openwebmath_perplexity": 1029.053660916787, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551474, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.6532132527643221 }
http://icpc.njust.edu.cn/Problem/CF/117A/
# Elevator

Time Limit: 3 seconds
Memory Limit: 256 megabytes

## Description

And now the numerous qualifying tournaments for one of the most prestigious Russian contests, the Russian Code Cup, are over. All n participants who have made it to the finals found themselves in a huge m-floored 10^8-star hotel. Of course the first thought to come in a place like this is "How about checking out the elevator?".

The hotel's elevator moves between floors according to one never changing scheme. Initially (at the moment of time 0) the elevator is located on the 1-st floor, then it moves to the 2-nd floor, then — to the 3-rd floor and so on until it reaches the m-th floor. After that the elevator moves to floor m − 1, then to floor m − 2, and so on until it reaches the first floor. This process is repeated infinitely. We know that the elevator has infinite capacity; we also know that on every floor people get on the elevator immediately. Moving between the floors takes a unit of time.

For each of the n participants you are given s_i, which represents the floor where the i-th participant starts, f_i, which represents the floor the i-th participant wants to reach, and t_i, which represents the time when the i-th participant starts on the floor s_i. For each participant print the minimum time of his/her arrival to the floor f_i.

If the elevator stops on the floor s_i at the time t_i, then the i-th participant can enter the elevator immediately. If the participant starts on the floor s_i and that's the floor he wanted to reach initially (s_i = f_i), then the time of arrival to the floor f_i for this participant is considered equal to t_i.

## Input

The first line contains two space-separated integers n and m (1 ≤ n ≤ 10^5, 2 ≤ m ≤ 10^8). Next n lines contain information about the participants in the form of three space-separated integers s_i f_i t_i (1 ≤ s_i, f_i ≤ m, 0 ≤ t_i ≤ 10^8), described in the problem statement.

## Output

Print n lines each containing one integer — the time of the arrival for each participant to the required floor.

## Sample 1

Input:

7 4
2 4 3
1 2 0
2 2 0
1 2 1
4 3 5
1 2 2
4 2 0

Output:

9
1
0
7
10
7
5

## Sample 2

Input:

5 5
1 5 4
1 3 1
1 3 4
3 1 5
4 2 5

Output:

12
10
10
8
7

## Hint

Let's consider the first sample. The first participant starts at floor s = 2 by the time equal to t = 3. To get to the floor f = 4, he has to wait until the time equals 7, that's the time when the elevator will go upwards for the second time. Then the first participant should get on the elevator and go two floors up. In this case the first participant gets to the floor f at time equal to 9.

The second participant starts at the time t = 0 on the floor s = 1, enters the elevator immediately, and arrives to the floor f = 2.

The third participant doesn't wait for the elevator, because he needs to arrive to the same floor where he starts.
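No editorial is included in the scrape; here is one possible approach (a sketch, checked against both samples): the elevator's position is periodic with period 2(m − 1), and each floor is visited at fixed offsets within the period, so boarding and arrival times can be computed directly without simulation.

```python
def first_visit(floor, t0, m):
    """Smallest time T >= t0 at which the elevator is on `floor`."""
    period = 2 * (m - 1)
    # Offsets within one period at which the elevator is on `floor`
    # (the two offsets coincide for floors 1 and m).
    offsets = {(floor - 1) % period, (2 * m - 1 - floor) % period}
    base = (t0 // period) * period
    return min(base + k * period + r
               for k in (0, 1) for r in offsets
               if base + k * period + r >= t0)

def arrival(s, f, t, m):
    if s == f:
        return t
    return first_visit(f, first_visit(s, t, m), m)

# First sample, m = 4:
for s, f, t in [(2,4,3), (1,2,0), (2,2,0), (1,2,1), (4,3,5), (1,2,2), (4,2,0)]:
    print(arrival(s, f, t, 4))   # 9 1 0 7 10 7 5
```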
2020-10-29T22:37:27
{ "domain": "edu.cn", "url": "http://icpc.njust.edu.cn/Problem/CF/117A/", "openwebmath_score": 0.3634292781352997, "openwebmath_perplexity": 649.4073053572095, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551474, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.653213252764322 }
https://shopgdt.com/blog/archive.php?5b3d1c=projectile-motion-formula
# Projectile motion formula

A projectile is an object that is in motion, in the air, and has no force acting upon it other than gravity; in particular it cannot be self-propelled, and throughout we neglect air resistance. Galileo was the first person to fully comprehend this and to describe projectile motion accurately, by breaking the motion down into a horizontal and a vertical component and realizing that the plot of any projectile's motion is always a parabola. Resolving this two-dimensional motion into two independent one-dimensional motions lets us solve for the desired quantities; the x- and y-motions are then recombined to give the total velocity at any given point on the trajectory.

**Setting up.** Determine a coordinate system, with x horizontal and y vertical. If the problem gives a launch angle and an initial velocity, use trigonometry to find the horizontal and vertical velocity components:

$v_x = v_0 \cos\theta_0, \qquad v_{0y} = v_0 \sin\theta_0.$

For example, for a launch at $v_0 = 25.0$ m/s and $\theta_0 = 35^\circ$, the horizontal component is $v_x = (25.0\ \text{m/s})(\cos 35^\circ) = 20.5$ m/s. The displacement vector s has components x and y along the horizontal and vertical axes; its magnitude is s, and it makes an angle θ with the horizontal.

**Horizontal motion.** The horizontal motion is simple, because $a_x = 0$ and $v_x$ is thus constant: there is no change in horizontal speed, as air resistance is assumed to be negligible.

**Vertical motion.** Vertically the projectile undergoes constant acceleration $-g$, so the standard kinematics equations apply, e.g. $y = y_0 + v_{0y}t - \frac{1}{2}gt^2$ and $v^2 = v_0^2 + 2a(x - x_0)$.

**Time of flight.** The projectile lands when its vertical distance from the ground equals 0. In the case where the initial height is 0, this condition reads $v_{0y}t - \frac{g t^2}{2} = 0$. In general it is a quadratic in $t$, whose solutions are given by the quadratic formula $t=\frac{-b\pm \sqrt{{b}^{2}-4ac}}{2a}$. For instance, a projectile with an initial vertical velocity of 14.3 m/s that lands 20.0 m below its starting altitude yields the two solutions t = 3.96 and t = −1.03; discarding the negative root, it spends 3.96 s in the air. The final vertical velocity just before it hits the ground is negative, and hence downward, as you would expect because the final altitude is lower than the initial altitude.

**Range.** On level ground, since $2\sin\theta \cos\theta =\sin 2\theta$, the range is $R=\frac{v_0^2\sin 2\theta_0}{g}$. For $\theta_0 = 45^\circ$ this is maximal, $R=\frac{v_0^2}{g}$, giving R = 91.9 m for $v_0$ = 30 m/s, R = 163 m for $v_0$ = 40 m/s, and R = 255 m for $v_0$ = 50 m/s. (If the maximum height of the projectile is already known, an explicit range formula is not strictly needed: from the highest point the object is just in free fall under gravity.) Note that the maximum height depends only on the vertical component of the initial velocity, so any projectile with a 67.6 m/s initial vertical component of velocity reaches a maximum height of 233 m, neglecting air resistance. In practice, air resistance is not completely negligible, and so the initial velocity would have to be somewhat larger than that given to reach the same height. Note also that most players will use a large initial angle rather than a flat shot because it allows for a larger margin of error.

**Projectile motion on an inclined plane.** When any object is thrown with velocity u making an angle α from the horizontal, at a plane inclined at an angle β from the horizontal, the initial velocity along the inclined plane is u cos(α − β).

**Exercise fragments preserved from the scraped page:**
- A goalkeeper can give the ball a speed of 30 m/s (an accompanying answer fragment gives the distance as about 95 m).
- Suppose the extension of the legs from the crouch position in a standing long jump is 0.600 m and the acceleration achieved from this position is 1.25 times the acceleration due to gravity, g. How far can they jump?
- How many buses can a daredevil clear if the top of the takeoff ramp is at the same height as the bus tops and the buses are 20.0 m long?
- The cannon on a battleship can fire a shell a maximum distance of 32.0 km. (a) Calculate the height at which the shell explodes. (b) The ocean is not flat, because the Earth is curved: how many meters lower will its surface be 32.0 km from the ship along a horizontal line parallel to the surface at the ship?
- (a) If a gun is sighted to hit targets that are at the same height as the gun and 100.0 m away, how low will the bullet hit if aimed directly at a target 150.0 m away? (b) Discuss qualitatively how a larger muzzle velocity would affect this problem and what would be the effect of air resistance.
- Is the acceleration ever in the same direction as a component of velocity?
- Considering factors that might affect the ability of an archer to hit a target, such as wind, explain why the smaller angle (closer to the horizontal) is preferable.
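Tying the formulas together, here is a small illustrative computation (a sketch using the numbers quoted above, with g taken as 9.8 m/s²):

```python
import math

g = 9.8
v0, theta = 25.0, math.radians(35)     # the 25.0 m/s, 35 degree example

vx  = v0 * math.cos(theta)             # ~20.5 m/s, constant throughout the flight
vy0 = v0 * math.sin(theta)             # initial vertical component

t_flight = 2 * vy0 / g                 # from vy0*t - g*t**2/2 = 0 (level ground)
R = v0**2 * math.sin(2 * theta) / g    # range formula, maximal at 45 degrees
print(vx, t_flight, R)

# Range grows with launch speed; at theta = 45 degrees, R = v0**2 / g:
for v in (30, 40, 50):
    print(v, v**2 / g)                 # roughly 92, 163, 255 m
```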
2021-01-27T13:01:48
{ "domain": "shopgdt.com", "url": "https://shopgdt.com/blog/archive.php?5b3d1c=projectile-motion-formula", "openwebmath_score": 0.5525017976760864, "openwebmath_perplexity": 562.3435474987284, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551474, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.653213252764322 }
https://math.stackexchange.com/questions/1640803/how-to-count-the-closed-left-hand-turn-paths-of-planar-bicubic-graphs
# How to count the closed left-hand turn paths of planar bicubic graphs?

When you draw a planar cubic bipartite graph $\Gamma$ and 3-color its edges you can use this as an orientation $\mathcal O$.

Definition. A left-hand turn path on $(\Gamma, \mathcal O)$ is a closed path on $\Gamma$ such that, at each vertex, the path turns left in the orientation $\mathcal O$.

I want to calculate the number of left-hand turn paths of $\Gamma$ without drawing them. I found the following: When you look at a vertex with the given (planar) edge-coloring, i.e. orientation, there are two situations that can happen:

(figure: the two possible cyclic orderings of the three edge colors around a vertex)

Let's start with the left figure: When you come from the color-1 edge and you want to go left you end at the color-2 edge. Coming from 2 you end at 3, and from 3 to 1. Fine; in the right figure the orientation is inverted, so left is right here. So if we come from the color-1 edge we end at (surprise, surprise) the color-2 edge. And so forth... So after 1 follows 2, after that 3, and then 1 again, no matter if we reach a left- or right-oriented vertex.

Now, the adjacency matrix of the graph $A_\Gamma$ splits up into three different color submatrices, with $A_\Gamma=A_1+A_2+A_3$. The $A_k$ are permutation matrices with $A_k^2=1$. So the number of left-hand turn paths can be calculated when you look at the number of unique solutions of $$(A_3A_2A_1) v_kv_{k+1} =v_kv_{k+1},$$ where $v_k$ can be any vertex as starting point and $v_kv_{k+1}$ indicates the starting edge. Vertices are allowed multiple times. Edges may be traversed in the opposite directions as well...

Is this correct, and if so, are there other ways to do it?

• Isn't your claim equivalent to "an orientation on a cubic graph is the same as an edge 3-colouring"? Some cubic graphs do not admit edge 3-colourings. – SashaKolpakov Feb 5 '16 at 13:11
• @SashaKolpakov I restrict to planar cubic ones, where a 3-edge coloring is guaranteed by the 4-color theorem. An orientation can be chosen without respect to the edge coloring, e.g. all orientations can be left (or + as in my other post)... – draks ... Feb 5 '16 at 14:46
• some colouring is guaranteed, yes. Would you like the number of left turn paths with the orientation induced by that colouring? I'm worried by the fact that orientation means colouring half-edges around each vertex, and thus an orientation will produce an edge colouring only when all half-edge colours agree on the halves of all edges. – SashaKolpakov Feb 6 '16 at 15:36
• @SashaKolpakov yes, how many paths for a given coloring / orientation. Is my proposed way correct? I think your worries are no problem for planar graphs... – draks ... Feb 6 '16 at 15:51
• Do you want to count cycles (closed loops) or paths (which can start and end anywhere)? If paths, do you allow a vertex to be used more than once? Or an edge? – Matt Feb 14 '16 at 22:30
Clearly each cycle in $\pi_M$ corresponds to a unique "left hand turn" cycle in the graph, and vice versa. You might be worried that $\pi_M$ could have cycles where we return to a vertex but we are traveling in the opposite direction and so it is not really a cycle. But this cannot happen, since every vertex in a cycle of $\pi_M$ is always being traversed in the $3\rightarrow 1$ direction. If we return to a vertex, we are passing through it in exactly the same way. • Thanks, do you think it is possible to calculate the number cycles with some properties of $M$ like eigenvalues...? – draks ... Feb 15 '16 at 15:38 • Different cycles of the same length will yield the same eigenvalues, see this answer. Finding the cycles directly (and counting them) is faster and simpler than computing eigenvalues anyway. The number of cycles is a property of $M$. What are you looking for: something easy to implement in a particular programming language, or perhaps a formula with a certain mathematical form? – Matt Feb 15 '16 at 16:56 • Thanks, for the links. A formula with a certain mathematical form sounds appealing. I agree that cycles are counted multiple times To me it feels like I could go through all pairs $v_kv_{k+1}$, .i.e. edges $e_j$ cycle through them and count. Do you think this would be easier if I use the line graph of $\Gamma$? – draks ... Feb 16 '16 at 6:38
2019-08-22T20:28:06
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1640803/how-to-count-the-closed-left-hand-turn-paths-of-planar-bicubic-graphs", "openwebmath_score": 0.7901081442832947, "openwebmath_perplexity": 258.646527947837, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551475, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.653213252764322 }
http://mathhelpforum.com/calculus/77345-time-velocity-position-particle-print.html
# time, velocity, position of a particle • March 7th 2009, 09:21 AM LexiRae time, velocity, position of a particle at t time a particle is moving along the x axis at position x. the relationship between x and t is given by tx=x^2 +8. at x = 2 the velocity of the particle is: • March 7th 2009, 09:38 AM Mentia Did you mean to write t(x) = x^2 + 8 instead of tx = x^2 + 8? I hope so because that makes it easier. Then: $t=x^{2}+8$ then $x=\pm \sqrt[ ]{ t-8}$ then $velocity=\frac{dx }{dt }=\pm \frac{1}{2}\frac{1}{\sqrt[]{t-8}} = \frac{1}{2x}$ Then if x = 2, velocity = 1/4 Seem reasonable? • March 7th 2009, 10:09 AM HallsofIvy Quote: Originally Posted by Mentia Did you mean to write t(x) = x^2 + 8 instead of tx = x^2 + 8? I hope so because that makes it easier. Then: $t=x^{2}+8$ then $x=\pm \sqrt[ ]{ t-8}$ then $velocity=\frac{dx }{dt }=\pm \frac{1}{2}\frac{1}{\sqrt[]{t-8}} = \frac{1}{2x}$ Then if x = 2, velocity = 1/4 Seem reasonable? I think simpler: from $t= x^2+ 8$, $1= 2x dx/dt$. At x= 2, $1= 4 dx/dt$ so $v= dx/dt= 1/4$.
2014-12-25T22:11:14
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/77345-time-velocity-position-particle-print.html", "openwebmath_score": 0.6628741025924683, "openwebmath_perplexity": 1120.3789149781207, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639669551474, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.653213252764322 }
http://mathhelpforum.com/algebra/119194-factoring-polynomials.html
# Math Help - Factoring Polynomials 1. ## Factoring Polynomials Hey there, I'm new here, and am having some trouble with factoring. I have already done quadratic trinomials but am having a lot of trouble figuring out polynomials. Here is one of the questions: 16x³ - 48x² - 9x + 27 Any help would be much appreciated. Thanks, Mike 2. Originally Posted by MikeMcC Hey there, I'm new here, and am having some trouble with factoring. I have already done quadratic trinomials but am having a lot of trouble figuring out polynomials. Here is one of the questions: 16x³ - 48x² - 9x + 27 Any help would be much appreciated. Thanks, Mike factor by grouping $(16x^3-48x^2)+(-9x+27)=$ $16x^2(x-3)-9(x-3)=(x-3)(16x^2-9)=(x-3)(4x-3)(4x+3)$ 3. Hi Mike, you should consider the factor theorem in this problem. Factor theorem - Wikipedia, the free encyclopedia Your constant term is 27, so try the theorem with its factors 4. Cool, thanks a lot guys. Doesn't take too long to get an answer around here at all does it? This helps a lot!!
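For readers who want to check a factorization like this mechanically, here is a one-line sympy verification (an addition, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(16*x**3 - 48*x**2 - 9*x + 27))
# (x - 3)*(4*x - 3)*(4*x + 3)
```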
2016-02-11T06:41:21
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/119194-factoring-polynomials.html", "openwebmath_score": 0.6560004949569702, "openwebmath_perplexity": 950.3160014546132, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639669551474, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.653213252764322 }
http://mathhelpforum.com/algebra/66889-sort-game-print.html
# Sort of a game

• Jan 4th 2009, 10:00 PM
Kai
Sort of a game
Someone asked me how to get 6 with 2 2 2 using any mathematical operations (+, -, *, /, ^2, ^3, ..., square root, cube root, ...)

For 2 2 2, 2+2+2=6
For 3 3 3, 3*3 - 3 = 6 (or 3+3 * (3^0))
For 4 4 4, square root 4 + square root 4 + square root 4 = 6
For 5 5 5, 5 + (5/5) = 6
For 7 7 7, 7 - (7/7) = 6
For 8 8 8, cube root 8 + cube root 8 + cube root 8 = 6
For 9 9 9, (9+9)/(square root 9) OR square root 9 * square root 9 - square root 9

Now I am stuck on the last part, that is 1 1 1 to get 6 (Thinking)
• Jan 5th 2009, 12:11 AM
Moo
Hello,
Quote:

Originally Posted by Kai (quoted above)

(1+1+1)! (factorial)
3!=3x2x1=6 :D
• Jan 5th 2009, 02:29 AM
craig
Was just about to provide an answer with my first post, got beaten to it though (Wink)
Just to be awkward I'll do it with 3 zeros (Happy)
$(\cos(0))!$
• Jan 25th 2009, 08:57 AM
tom ato
Quote:

Originally Posted by craig
Just to be awkward I'll do it with 3 zeros (Happy)
$(\cos(0))!$

Wouldn't it be $(\cos(0)+\cos(0)+\cos(0))!$ ?
2017-07-29T12:17:10
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/66889-sort-game-print.html", "openwebmath_score": 0.8708091378211975, "openwebmath_perplexity": 3526.510884296058, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551472, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132527643219 }
https://openmath.org/cd/fns3.html
# OpenMath Content Dictionary: fns3

Canonical URL: http://www.openmath.org/cd/fns3.ocd
CD Base: http://www.openmath.org/cd
CD File: fns3.ocd
CD as XML Encoded OpenMath: fns3.omcd
Defines: function, specification
Date: 2004-06-01
Version: 1 (Revision 1)
Review Date: 2006-06-01
Status: experimental

This CD holds further functions concerning functions themselves. A particularly interesting function is function, which constructs a function with a given domain and range.

## function

Description: This symbol denotes a function constructor. When applied to at least two arguments, which are sets, the first argument is the domain and the second the range of the function. When applied to at least three arguments, the first two of which are sets and the third of which is a lambda expression, the third argument gives the function specification.

Commented Mathematical property (CMP): The domain of the function f constructed this way is the first argument

Formal Mathematical property (FMP): $\mathrm{domain}\left(\mathrm{function}\left(X,Y,Z\right)\right)=X$

Commented Mathematical property (CMP): The range of the function f constructed this way is the second argument

Formal Mathematical property (FMP): $\mathrm{range}\left(\mathrm{function}\left(X,Y,Z\right)\right)=Y$

Example: The following object defines a function from the natural numbers into the integers, specified by the fact that n maps to n(n+1)/2. $\mathrm{function}\left(\mathbb{N},\mathbb{Z},\lambda n.\frac{n\left(n+1\right)}{2}\right)$

Signatures: sts

## specification

Description: This symbol denotes the specification of a function. It is a unary function. When applied to its argument, which should be a function applied to three arguments, it returns the third argument of the function, that is, the function specification.

Formal Mathematical property (FMP): $\mathrm{specification}\left(\mathrm{function}\left(\mathbb{N},\mathbb{Z},f\right)\right)=f$

Example: The following object defines a function from the natural numbers into the integers, specified by the fact that n maps to n(n+1)/2. $\mathrm{specification}\left(\mathrm{function}\left(\mathbb{N},\mathbb{Z},\lambda n.\frac{n\left(n+1\right)}{2}\right)\right)=\lambda n.\frac{n\left(n+1\right)}{2}$

Signatures: sts
2021-09-28T03:48:29
{ "domain": "openmath.org", "url": "https://openmath.org/cd/fns3.html", "openwebmath_score": 0.893739640712738, "openwebmath_perplexity": 1985.289602777174, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551472, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132527643219 }
http://math.stackexchange.com/questions/97728/derivative-of-a-complicated-inverse-function
# Derivative of a complicated inverse function $\Phi(\cdot,0,1)$ and $\phi(\cdot,0,1)$ are cdf and pdf of standard normal distribution. $$y=F_\text{mix}(x,\mu,\sigma)=\sum\limits_{i=1}^{K}\lambda_i\Phi\left(\frac{x-\mu_i}{\sigma_i},0,1\right).$$ $x=Q(y)$ is the inverse function of $F_\text{mix}$. $$\mu_i=\bar{\mu_i}'w,\qquad \sigma_i^2 =w'\Sigma_i w.$$ What is the derivative of $Q(y)$ with respect to $w$? - I'll assume that by $x=Q(y)$ you mean $x=Q(y,\mu,\sigma)$, where $\mu$ and $\sigma$ are treated as parameters and $F_\text{mix}$ is inverted with respect to $x$. Also, from your expressions for $\mu_i$ and $\sigma_i^2$ I'm guessing that $w$ is a vector, $\Sigma_i$ is a matrix, $\bar{\mu_i}$ is a vector and a prime denotes transposition. (All these things should preferably have been explained in the question.) This is a great exercise for becoming familiar with partial derivatives and understanding how important it is to keep clear about what's being varied and what's being held fixed. I'll use vertical lines to indicate the variables being held fixed. First, $$\def\deriv#1#2{\frac{\mathrm d#1}{\mathrm d#2}}\def\pderiv#1#2#3{\left.\frac{\partial#1}{\partial#2}\right|_{#3}}\deriv{Q(y,\mu,\sigma)}w = \pderiv x\mu{y,\sigma} \deriv\mu w+\pderiv x\sigma{y,\mu}\deriv\sigma w\;.$$ To find the derivatives of $x$, differentiate the defining equation for $y$: $$\begin{eqnarray} 0 &=& \pderiv y\mu{y,\sigma} \\ &=& \pderiv yx{\mu,\sigma}\pderiv x\mu{y,\sigma}+\pderiv y\mu{x,\sigma}\pderiv \mu\mu{y,\sigma}+\pderiv y\sigma{x,\mu}\pderiv \sigma\mu{y,\sigma} \\ &=& \pderiv yx{\mu,\sigma}\pderiv x\mu{y,\sigma}+1\cdot\pderiv y\mu{x,\sigma}+0\cdot\pderiv y\sigma{x,\mu} \\ &=& \pderiv yx{\mu,\sigma}\pderiv x\mu{y,\sigma}+\pderiv y\mu{x,\sigma}\;, \end{eqnarray}$$ and thus $$\pderiv x\mu{y,\sigma}=-\pderiv y\mu{x,\sigma}\left(\pderiv yx{\mu,\sigma}\right)^{-1}\;,$$ and likewise for $\partial x/\partial\sigma$. Thus we have $$\deriv{Q(y,\mu,\sigma)}w =-\left(\pderiv yx{\mu,\sigma}\right)^{-1}\left( \pderiv y\mu{x,\sigma} \deriv\mu w+\pderiv y\sigma{x,\mu}\deriv\sigma w\right)\;.$$ Perhaps you can take it from there, since these are all derivatives of the functions given explicitly; let me know if you need further assistance. -
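As a numerical sanity check of the key identity $\partial x/\partial\mu = -(\partial y/\partial\mu)\left(\partial y/\partial x\right)^{-1}$ for a two-component mixture (an illustrative sketch; the mixture weights and parameters below are made-up test values):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

lam   = np.array([0.3, 0.7])
mu    = np.array([0.0, 2.0])
sigma = np.array([1.0, 0.5])

F = lambda x, mu: np.sum(lam * norm.cdf((x - mu) / sigma))       # mixture cdf
Q = lambda y, mu: brentq(lambda x: F(x, mu) - y, -50.0, 50.0)    # its inverse in x

y, h, e1 = 0.6, 1e-6, np.array([1.0, 0.0])
x0 = Q(y, mu)

dF_dmu1 = (F(x0, mu + h*e1) - F(x0, mu - h*e1)) / (2*h)
dF_dx   = (F(x0 + h, mu)    - F(x0 - h, mu))    / (2*h)
print(-dF_dmu1 / dF_dx)                               # implicit-function-theorem value
print((Q(y, mu + h*e1) - Q(y, mu - h*e1)) / (2*h))    # direct finite difference; agrees
```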
2016-06-28T06:07:23
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/97728/derivative-of-a-complicated-inverse-function", "openwebmath_score": 0.956185519695282, "openwebmath_perplexity": 77.34233439850244, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639665434667, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132524875359 }
http://math.stackexchange.com/questions/25054/expected-value-for-min-estimated-entropy
# expected value for min estimated entropy

We have a random generator that generates independent random bits with probability $P(x=1) = P$ and $P(x=0)=1-P$. Given $N$ random independent bits, we estimate $P$ by $\hat{P} = N_1/(N_0+N_1)$, where $N_0$ is the number of $0$'s and $N_1$ is the number of $1$'s. The expected value for $\hat{P}$ can be simply shown to be $P$.

a. What is the expected value for the estimated entropy defined as follows: $\hat{H}=-[ \hat{P} \log(\hat{P}) + (1-\hat{P}) \log(1-\hat{P}) ]$

b. If we take $M$ independent sets of $N$ random bits as above and each time estimate the entropy using the above equation, what is the expected value for the smallest estimated entropy among those $M$ sets?

thanks, MG

P.S. If the solution to the integral for general P is too complicated, a solution to the special case $P=1/2$ would be appreciated as well.

a. for large $N$, $\langle \hat H \rangle$ will approach $-[P \log P + (1-P) \log (1-P)]$. – Fabian Mar 4 '11 at 22:49

For the second part, the estimated entropy depends on the deviation of the estimated P from the real P. You can estimate this since $\hat{P}$ has a binomial (and so approximately normal) distribution. Presumably you can estimate the minimal $\hat{H}$ this way (just take the largest deviation from $P$). – Yuval Filmus Mar 4 '11 at 23:17

@Fabian: Thanks, that's true. – mghandi Mar 6 '11 at 4:08

@Yuval: Thanks, it makes sense, so you are suggesting to approximate $\hat{P}$ distribution with normal, and then calculate the expected value (solve the integral for that?) – mghandi Mar 6 '11 at 4:14

If you can solve the integral... – Yuval Filmus Mar 6 '11 at 7:26

As noted in the comments, there is no general exact formula for $\langle \hat H_N\rangle$ but, for large $N$, one can approach it by a $\chi^2$-type limit. To wit, $N_1=pN+v\sqrt{N}Z_N$ with $v^2=p(1-p)$, $\langle Z_N\rangle=0$ and $\langle Z_N^2\rangle=1$ for every $N$ and, when $N\to+\infty$, $Z_N$ converging in distribution to a standard normal random variable. Hence $\hat P_N=p+U_N$ and $1-\hat P_N=q-U_N$ with $q=1-p$ and $U_N=vZ_N/\sqrt{N}$, and $$\hat H_N=-(p+U_N)\log(p+U_N)-(q-U_N)\log(q-U_N).$$ Using the expansions $$\log(p+U_N)=\log(p)+U_N/p-U_N^2/(2p^2)+o(U_N^2),$$ and $$\log(q-U_N)=\log(q)-U_N/q-U_N^2/(2q^2)+o(U_N^2),$$ one gets $$\hat H_N=-p\log(p)-q\log(q)+U_N\log(q/p)-U_N^2/(2pq)+o(U_N^2).$$ Since $\langle U_N\rangle=0$, $\langle U_N^2\rangle=v^2/N$ and $v^2=pq$, one gets $$\langle \hat H_N\rangle=-p\log(p)-q\log(q)-1/(2N)+o(1/N).$$
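A quick Monte Carlo check of the $1/(2N)$ bias term is easy to run; the sketch below is my own addition and the constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, N, M = 0.3, 50, 200_000          # illustrative values
H = -(p * np.log(p) + (1 - p) * np.log(1 - p))

n1 = rng.binomial(N, p, size=M)
phat = n1 / N
# Terms with phat in {0, 1} contribute 0 via the 0*log(0) = 0 convention.
with np.errstate(divide='ignore', invalid='ignore'):
    Hhat = -(phat * np.log(phat) + (1 - phat) * np.log(1 - phat))
Hhat = np.nan_to_num(Hhat)

print(H - Hhat.mean(), 1 / (2 * N))  # observed bias vs. predicted 1/(2N)
```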
2016-06-25T05:15:57
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/25054/expected-value-for-min-estimated-entropy", "openwebmath_score": 0.9813743233680725, "openwebmath_perplexity": 218.2853621081281, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.971563966131786, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.6532132522107501 }
https://math.stackexchange.com/questions/967840/when-lim-n-rightarrow-infty-int-01fx-sinnx-0
# When $\lim_{n\rightarrow\infty}\int_0^1f(x)\sin(nx)=0$ What is a good sufficient condition for $f:[0,1]\rightarrow\mathbb{R}$ such that: $$\lim_{n\rightarrow\infty}\int_0^1f(x)\sin(nx)dx=0$$ If $f$ is differentiable then by integral by part we can easily prove. But I think differentiablity is a too strong condition. Can we make it less strict? • Yes, use $$\left\lvert \int_0^1 f(x)\sin (nx)\,dx - \int_0^1 g(x)\sin (nx)\,dx\right\rvert \leqslant \int_0^1 \lvert f(x) - g(x)\rvert\,dx.$$ – Daniel Fischer Oct 11 '14 at 13:08 • Lebesgue integrability is enough. See this. – David Mitra Oct 11 '14 at 13:08 • @DanielFischer How do you proceed further? – Leaning Oct 11 '14 at 13:26 • You approximate an integrable function by a continuously differentiable function in the integral norm to obtain the conclusion for all integrable functions. – Daniel Fischer Oct 11 '14 at 13:37 • Yeah I understand that argument! Is there any other nice way to show directly by some calculations? – Leaning Oct 11 '14 at 13:38 To be sure that everything makes sense, we assume that $f$ is Lebesgue integrable. If we show the result for the characteristic function of a Borel subset, it OK. And by approximation, we can do it for finite disjoint unions of intervals, for which the result is true. Hence $f$ Lebesgue integrable is a sufficient condition. • I really don't understand anything but thanks, I can see the beauty :) – Leaning Oct 11 '14 at 13:32 • I guess you did not study yet measure theory. – Davide Giraudo Oct 11 '14 at 13:40 • Yes I haven't. I hope to obtain a beautiful proof in the case of continuity maybe and in the context of normal Riemann integral. Hope I will get to measure theory in the future... – Leaning Oct 11 '14 at 13:41 I'm inspired by another answer here Computing $\lim_{n \rightarrow\infty} \int_{a}^{b}\left ( f(x)\left | \sin(nx) \right | \right )$ with $f$ continuous on $[a,b]$ So I will write down a proof for $f$ continous. Take $I_n=\int_0^1f(x)\sin(nx)dx=\int_0^1g(x)dx$ We have $$I_n=\int_0^{\pi/n}g(x)dx+\int_{\pi/n}^{2\pi/n}g(x)dx+\dots+\int_{(k-1)\pi/n}^{k\pi/n}g(x)dx+\int_{k\pi/n}^1g(x)dx$$ where $1-\frac{k\pi}{n}<\frac{\pi}{n}$ or $k=[\frac{n}{\pi}]$. Using mean value theorem we have $$\int_{(i-1)\pi/n}^{i\pi/n}f(x)\sin(nx)dx=f(u_i)\int_{(i-1)\pi/n}^{i\pi/n}\sin(nx)dx=\frac{f(u_i)}{n}\int_{(i-1)\pi}^{i\pi}\sin ydy=2\frac{f(u_i)}{n}(-1)^i$$ with $u_i\in [(i-1)\pi/n,i\pi/n]$. Therefore we have: $$I_n=\sum_{i=1}^k2\frac{f(u_i)}{n}(-1)^i+\int_{k\pi/n}^1g(x)dx$$ $$I_n=\frac{1}{\pi}\sum_{i=1}^{[k/2]}\frac{2\pi f(u_{2i})}{n}-\frac{1}{\pi}\sum_{i=1}^{[(k+1)/2]}\frac{2\pi f(u_{2i-1})}{n}+\int_{k\pi/n}^1g(x)dx$$ Then using the definition of Riemann Integral we have: $$\lim_{n\rightarrow\infty} I_n=\frac{1}{\pi}\int_0^1f(x)dx-\frac{1}{\pi}\int_0^1f(x)dx+\lim_{n\rightarrow\infty}\int_{k\pi/n}^1g(x)dx$$ $$=\lim_{n\rightarrow\infty}\int_{k\pi/n}^1g(x)dx=\lim_{n\rightarrow\infty}f(v)\int_{k\pi/n}^1\cos(nx)dx=0$$ where $v\in[k\pi/n,1]$ (since $|\int_{k\pi/n}^1\cos(nx)dx|\le 1-k\pi/n<\pi/n$ and $f$ is continous so $|f(v)-f(1)|$ is small enough) Is this proof true?
2019-09-22T05:52:50
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/967840/when-lim-n-rightarrow-infty-int-01fx-sinnx-0", "openwebmath_score": 0.9182623624801636, "openwebmath_perplexity": 393.2027496310644, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.971563966131786, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.6532132522107501 }
https://www.transtutors.com/questions/three-bonds-have-face-value-of-rs-1-000-coupon-rate-of-12-per-cent-and-maturity-of-5-820158.htm
# Three bonds have face value of Rs 1,000, coupon rate of 12 per cent and maturity of 5 years. 1 answer below » Three bonds have face value of Rs 1,000, coupon rate of 12 per cent and maturity of 5 years. One pays interest annually, one pays interest half-yearly, and one pays interest quarterly. Calculate the prices of bonds if the required rate of return is (a) 10 per cent, (b) 12 per cent and (c) 16 per cent. Ankita G Face value of bond $1,000 Annual coupon 12% Annual coupon$                                           120 1000*12% Period of bonds 5 Years Interest paid annually PV of annuity for making pthly payment P = PMT x (((1-(1 + r) ^- n)) / i) Where: P = the present value of an annuity stream PMT = the dollar amount of each annuity payment r = the effective interest rate (also known as the discount rate) i=nominal Interest rate n = the number of periods in which payments will be made Part-1 Part-2 Part-3 10% 12% 16% PV of coupon payments= PMT x (((1-(1 + r) ^- n)) / i) PV of coupon payments= 120*(((1-(1 + 10%) ^- 5)) / 10%) 120*(((1-(1 + 12%) ^- 5)) / 12%) 120*(((1-(1 + 16%) ^- 5)) /... ## Plagiarism Checker Submit your documents and get free Plagiarism report Free Plagiarism Checker Looking for Something Else? Ask a Similar Question
2021-08-04T03:00:44
{ "domain": "transtutors.com", "url": "https://www.transtutors.com/questions/three-bonds-have-face-value-of-rs-1-000-coupon-rate-of-12-per-cent-and-maturity-of-5-820158.htm", "openwebmath_score": 0.32755017280578613, "openwebmath_perplexity": 5509.757733901341, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.65321325221075 }
https://socratic.org/questions/how-do-you-evaluate-10-4
# How do you evaluate (10!)/(4!)? May 7, 2017 (10!)/(4!) = 151,200 #### Explanation: By hand you need to know that 10! = 10 * 9 * 8 * 7 * 6 * 5 * 4! So 10! = (10 * 9 * 8 * 7 * 6 * 5 * cancel(4!))/(cancel(4!)) = 10 * 9 * 8 * 7 * 6 * 5 = 151,200 A TI graphing calculator can do factorials: $10 \text{ MATH > PRB " 4 -: 4 " MATH > PRB } 4$ ENTER 10! "/" 4! " " 151,200
2022-08-14T12:23:02
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-evaluate-10-4", "openwebmath_score": 0.5084953904151917, "openwebmath_perplexity": 326.4927813826981, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.65321325221075 }
https://math.stackexchange.com/questions/3133473/partial-derivative-of-multi-variable-function
# partial derivative of multi-variable function This is my original post. In the derivation, I come across to the point that I need to decompose $$\frac{\partial f}{\partial v_x}$$ to obtain $$\frac{\partial f}{\partial v}$$, $$\frac{\partial f}{\partial \theta}$$,$$\frac{\partial f}{\partial \phi}$$ terms. One SE user pointed me where I made mistake in the derivation. This is his/her suggestion: $$\frac{\partial f}{\partial v_x}=\sin(\theta)\cos(\phi)\frac {\partial f}{\partial v}-\frac1v \cos(\theta)\cos(\phi)\frac {\partial f}{\partial \theta}+\frac{\sin(\phi)}{r\sin(\theta)}\frac{\partial f}{\partial \phi}$$ My question/purpose in this post is to understand how to derive/obtain that. He suggested me to look into his/her other answer for more detail, which is a very good answer to remind me about the derivative of an inverse function. But still I cannot understand how to do that. If I write $$\frac{\partial v_x}{\partial f_o} = \sin\theta\cos\phi \frac{\partial v}{\partial f_o} + v\cos\theta\cos\phi \frac{\partial \theta}{\partial f_o}- v\sin\theta\sin\phi \frac{\partial \phi}{\partial f_o}$$ and try to find the inverse with the determinant = $$-v^2 \sin^2\theta \cos\theta \sin\phi \cos^2\phi$$, still I do not get a form close to his answer.
2019-12-10T13:36:06
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3133473/partial-derivative-of-multi-variable-function", "openwebmath_score": 0.8957258462905884, "openwebmath_perplexity": 354.7922862843991, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.971563966131786, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.65321325221075 }
http://am207.info/wiki/dataaug.html
The idea is to construct iterative algorithms for sampling based on the introduction of unobserved data or hidden variables. Does the iterative part sound familiar? We did that in Gibbs sampling. We'll soon see a deterministic version of this idea when we talk about the Expectation Maximization Algorithm (Dempster, Laird, and Rubin (1977)). Here we'll see a stochastic version from Tanner and Wong's (1987) Data Augmentation algorithm for posterior sampling. This was also explored in physics by Swendsen and Wang's (1987) algorithm for sampling from Ising and Potts models. (Look it up, it relates to your homework!)

Indeed the general idea of introducing a hidden variable will also be exploited to introduce slice sampling and Hamiltonian Monte Carlo. Thus we shall see that the method is useful not only in "theory" to understand the decomposition of outcomes through hidden factors, but also in a practical way to construct sampling algorithms.

The difference from Gibbs sampling here is that we are thinking of a 1 (or lower) dimensional distribution or posterior we want to sample from, say $x$, and the other variable, say $y$, is to be treated as latent. The game is, like in Gibbs, to construct a joint $p(x,y)$ such that we can sample from $p(x \vert y)$ and $p(y \vert x)$, and then find the marginal

$$p(x)=\int p(x,y)\,dy.$$

The simplest form of a Data Augmentation algorithm looks like this:

1. Draw $Y\sim p_{Y \vert X}(. \vert x)$ and call the observed value y
2. Draw $X_{n+1} \sim p_{X \vert Y}(. \vert y)$

Here is an example.

```python
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
import seaborn as sns
```

## Example

Suppose that $p_X$ is the standard normal density, i.e., $p(x) = e^{-x^2/2}/\sqrt{2\pi}$. We'll pretend we don't know how to sample from it.

Take

$$p(x,y)=\frac{1}{\sqrt{2}\,\pi}\,e^{-x^2+\sqrt{2}xy-y^2},$$

which is a bivariate normal density with means equal to zero, variances equal to one, and correlation equal to $1/\sqrt{2}$. The two conditionals are normal, as we can see by completing the square and neglecting the part of the function that only depends on the variable not being conditioned upon ($e^{-y^2}$ and $e^{-x^2}$ respectively for the conditionals below):

$$X \vert Y=y \sim N\left(\frac{y}{\sqrt{2}},\frac{1}{2}\right),\qquad Y \vert X=x \sim N\left(\frac{x}{\sqrt{2}},\frac{1}{2}\right).$$

The x-marginal is

$$p(x)=\int p(x,y)\,dy=\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2},$$

and clearly thus gets back the old normal we want to draw from.

```python
N = 100000
x = np.zeros(N)
x[0] = np.random.rand()  # INITIALIZE
s = np.sqrt(0.5)         # conditional sd is sqrt(1/2); 0.5 is the variance
for i in np.arange(1, N):
    Y = sp.stats.norm.rvs(x[i-1]/np.sqrt(2), s)
    x[i] = sp.stats.norm.rvs(Y/np.sqrt(2), s)

plt.hist(x, bins=30, alpha=0.3, density=True)  # 'normed' is deprecated
sns.kdeplot(x)
plt.xlabel('x')
```

## Data Augmentation is a Markov Chain Monte Carlo

Let's start from the "transition kernel" that we identified when we learnt about Gibbs sampling:

$$h(x' \vert x)=\int_Y p_{X \vert Y}(x' \vert y)\,p_{Y \vert X}(y \vert x)\,dy,$$

where we have:

$$\int_X h(x' \vert x)\,p(x)\,dx=p(x'),$$

the stationarity condition.

Since we are dealing with probability densities, $h$ is always positive. Also note

$$\int_X h(x' \vert x)\,dx'=\int_X\!\int_Y p_{X \vert Y}(x' \vert y)\,p_{Y \vert X}(y \vert x)\,dy\,dx'=\int_Y p_{Y \vert X}(y \vert x)\,dy=1.$$

Therefore for each fixed $x$, $h(x' \vert x)$ is non-negative and integrates to 1. The function $h$ therefore could be a Markov Chain transition density and if the current state is $x_n$ then the density of the next state would be $h(. \vert x_n)$.

Also note that $h(x' \vert x)\, p(x)$ is symmetric in $(x,x')$:

$$h(x' \vert x)\,p(x)=\int_Y p_{X \vert Y}(x' \vert y)\,p_{Y \vert X}(y \vert x)\,p(x)\,dy=\int_Y p_{X \vert Y}(x' \vert y)\,p_{X \vert Y}(x \vert y)\,p(y)\,dy.$$

The rhs is symmetric in $(x,x')$ and so is $h(x' \vert x) p(x)$.
The Markov chain generated with transition probability $h(x' \vert x)$ is REVERSIBLE and thus supports detailed balance.

## Sequential Simulation

Now consider the practical issue of simulating the Markov chain $X$. Given that the current state of the chain is $X_n = x$, how do we draw $X_{n+1}$ from $h(. \vert x)$? The answer is based on a sequential simulation technique that we now describe.

Suppose we would like to simulate a random vector from some pdf, $p_U(u)$, but we cannot do this directly. Suppose further that $p_U$ is the u-marginal of the joint pdf $p_{U,V}(u, v)$ and that we have the ability to make draws from $p_V(v)$ and from $p_{U \vert V}(u \vert v)$ for fixed $v$. If we draw $V\sim p_V(.)$, and then, conditional on $V = v$, we draw $U \sim p_{U \vert V}(. \vert v)$, then the observed pair, $(u, v)$, is a draw from $p_{U,V}$, which means that $u$ is a draw from $p_U$.

We now can explain how this is used to simulate from $h(. \vert x)$. Define

$$H(x', y \vert x)=p_{X \vert Y}(x' \vert y)\,p_{Y \vert X}(y \vert x).$$

We apply the procedure above with $h(. \vert x)$ and $H(.,. \vert x)$ playing the roles of $p_U(.)$ and $p_{U,V}(.,.)$ respectively. We of course need the marginal of $H(x', y \vert x)$, which is $p(y \vert x)$, and the conditional density of $X'$ given $Y=y$, which is

$$\frac{H(x', y \vert x)}{p_{Y \vert X}(y \vert x)}=p_{X \vert Y}(x' \vert y),$$

which gives us the procedure above:

1. Draw $Y\sim p_{Y \vert X}(. \vert x)$ and call the observed value y
2. Draw $X_{n+1} \sim p_{X \vert Y}(. \vert y)$
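The two-step recipe is easy to package generically. A minimal sketch (my own, mirroring the loop in the example above) that takes the two conditional samplers as callables:

```python
import numpy as np
import scipy.stats as st

def data_augmentation(sample_y_given_x, sample_x_given_y, x0, n_samples):
    """Alternate the two conditional draws; return the x-chain."""
    x = np.empty(n_samples)
    x[0] = x0
    for i in range(1, n_samples):
        y = sample_y_given_x(x[i - 1])
        x[i] = sample_x_given_y(y)
    return x

# The bivariate-normal example above, with conditional sd sqrt(1/2):
s = np.sqrt(0.5)
chain = data_augmentation(
    lambda x: st.norm.rvs(x / np.sqrt(2), s),
    lambda y: st.norm.rvs(y / np.sqrt(2), s),
    x0=np.random.rand(), n_samples=10_000)
print(chain.mean(), chain.std())   # approximately 0 and 1
```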
2019-01-17T23:28:07
{ "domain": "am207.info", "url": "http://am207.info/wiki/dataaug.html", "openwebmath_score": 0.8455761671066284, "openwebmath_perplexity": 384.60065636701853, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132522107499 }
https://socratic.org/questions/55e9c56511ef6b433254aa5a
# Question 4aa5a

Sep 4, 2015

48 km/hr

#### Explanation:

Pick some distance the motorcyclist might have traveled (it doesn't matter what distance, but for ease of calculation I chose 120 km).

Time spent traveling 120 km at 40 km/hr: $120\ \text{km} \div 40\ \text{km/hr} = 3\ \text{hours}$

Time spent traveling 120 km at 60 km/hr: $120\ \text{km} \div 60\ \text{km/hr} = 2\ \text{hours}$

Total distance traveled: $120 \times 2 = 240\ \text{km}$

Total time spent traveling: $3\ \text{hours} + 2\ \text{hours} = 5\ \text{hours}$

Average speed: $240\ \text{km} \div 5\ \text{hours} = 48\ \text{km/hr}$

Sep 4, 2015

The answer is (b) $\text{48 km/h}$.

#### Explanation:

I want to provide an alternative approach to figuring out the average speed of the motorcyclist.

So, you know that the motorcyclist travels the same distance, let's say $d$, for both his trips. Let's assume that it takes him $t_1$ hours to travel the distance at $\text{40 km/h}$, and $t_2$ hours to travel the same distance at $\text{60 km/h}$.

This means that you can write

$$d = 40 \cdot t_1 \quad \text{and} \quad d = 60 \cdot t_2$$

This is equivalent to

$$40 \cdot t_1 = 60 \cdot t_2 \implies t_1 = \frac{3}{2} \cdot t_2$$

The average speed of the motorcyclist can be thought of as the total distance he covered divided by the total time needed. Since he goes from $A$ to $B$ for one trip, and from $B$ to $A$ for the second, the total distance covered will be

$$d_{\text{total}} = 2 \cdot d$$

The total time will be

$$t_{\text{total}} = t_1 + t_2 = \frac{3}{2} \cdot t_2 + t_2 = \frac{5}{2} \cdot t_2$$

The average speed will thus be

$$\overline{v} = \frac{2d}{\frac{5}{2} \cdot t_2} = \frac{4}{5} \cdot \frac{d}{t_2}$$

But $\frac{d}{t_2}$ is equal to the speed of his return trip, $\text{60 km/h}$. This means that you have

$$\overline{v} = \frac{4}{5} \cdot 60 = \text{48 km/h}$$
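A third route: for equal distances the average speed is the harmonic mean of the two speeds, $\overline{v} = \frac{2 v_1 v_2}{v_1 + v_2}$. A one-line check (my addition):

```python
v1, v2 = 40, 60
print(2 * v1 * v2 / (v1 + v2))   # 48.0 km/h
```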
2019-10-18T16:14:04
{ "domain": "socratic.org", "url": "https://socratic.org/questions/55e9c56511ef6b433254aa5a", "openwebmath_score": 0.8841671347618103, "openwebmath_perplexity": 1138.9750089758434, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132522107499 }
https://socratic.org/questions/569d2cd611ef6b5b63a0cffc
# Question 0cffc

Jan 19, 2016

$\text{210 cm}^3$

#### Explanation:

The density of a substance tells you the mass of that substance per unit of volume.

$$\text{density} = \frac{\text{mass}}{\text{unit of volume}}$$

Simply put, you can look at a substance's density and tell what mass is associated with one unit of volume of that substance.

In your case, the block of wood is said to have a density of $\text{0.60 g/cm}^3$. This means that the unit of volume here will be $\text{1 cm}^3$.

So, a density of $\text{0.60 g/cm}^3$ means that every $\text{cm}^3$ of wood will have a mass of $\text{0.60 g}$.

You are told that a particular block of wood has a mass of $\text{126 g}$. Well, if you get $\text{0.60 g}$ for every $\text{1 cm}^3$ of wood, you can say that $\text{126 g}$ will correspond to a volume of

$$126\ \cancel{\text{g}} \times \frac{1\ \text{cm}^3}{0.60\ \cancel{\text{g}}} = 210\ \text{cm}^3$$

Rounded to two sig figs, the number of sig figs you have for the density of the wood, the answer will be

$$V = 210\ \text{cm}^3$$
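The same unit cancellation as a quick script (my addition, not part of the original answer):

```python
mass_g, density_g_per_cm3 = 126, 0.60
volume = mass_g / density_g_per_cm3   # V = m / rho
print(round(volume), "cm^3")          # 210 cm^3 to two significant figures
```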
2019-11-15T17:24:28
{ "domain": "socratic.org", "url": "https://socratic.org/questions/569d2cd611ef6b5b63a0cffc", "openwebmath_score": 0.7487611174583435, "openwebmath_perplexity": 404.6115029597012, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132522107499 }
http://mathhelpforum.com/calculus/36651-integration-math-solution.html
Math Help - Integration math solution 1. Integration math solution Find the first-quadrant area bounded by the axes and the functions f(x) = 0.5x + 2; g(x) = 2x - 4 2. Hello, urmiprincess! Did you make a sketch? Find the first-quadrant area bounded by the axes and the functions: . . $\begin{array}{cccc}f(x) &= & \frac{1}{2}x +2 & {\color{blue}[1]}\\ g(x) &=& 2x - 4 & {\color{blue}[2]}\end{array}$ Code: | B | ...o (4,4) | ...*:::* : | ...*:::::::* : | ...*:::::::::::* : A o2::::::::::::::* : |:::::::::::::* : |:::::::::D:* : E - - o - - - - o - - - - - - + - O| * 2 4 | * | * | * C *-4 | $\text{Line }{\color{blue}[1]} \:=\:AB$ $\text{Line }{\color{blue}[2]} \:=\:CB$ . . They intersect at $B(4,4)$ We want the area of quadrilateral $ABDO$ . . Drop perpendicular $BE$ Trapezoid $ABEO$ has: . $h = OE = 4,\;b_1 = AO = 2,\;b_2 = BE = 4$ . . Its area is: . $\frac{4}{2}(2 + 4) \:=\:12$ Right triangle BED has: . $b = DE = 2,\;h = BE = 4$ . . Its area is: . $\frac{1}{2}(2)(4) \:=\:4$ Therefore: . $\text{Area}(ABDO) \;=\;12 - 4 \:=\:\boxed{8}$
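The same area also falls out of a direct integration, which is a useful cross-check on the geometry (my addition): the region lies under $f$, and between $x=2$ and $x=4$ the strip under $g$ is cut away.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Rational(1, 2) * x + 2
g = 2 * x - 4

# f meets g at x = 4 (point B); g meets the x-axis at x = 2 (point D).
area = sp.integrate(f, (x, 0, 4)) - sp.integrate(g, (x, 2, 4))
print(area)   # 8
```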
2015-04-27T00:27:55
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/36651-integration-math-solution.html", "openwebmath_score": 0.7660218477249146, "openwebmath_perplexity": 7653.506199495174, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132522107499 }
https://stats.stackexchange.com/questions/152670/how-to-represent-distribution-dependencies-in-bayesian-graphical-models
# How to represent distribution dependencies in Bayesian graphical models?

In a Bayesian graphical model, suppose that we have a random variable $B$ whose parent is the random variable $A$. So there is an arrow from $A$ to $B$, and this means that the joint distribution is mechanistically factored as: $\Pr(A,B) = \Pr(A) \Pr(B|A)$. In particular, the probability of any realization $B=b$ is specified by single realizations of $A$, e.g. $\Pr(B=b | A=a)$.

I'd like to know how to graphically represent a situation where $B$ depends not just on a realization of $A$ but also on the expectation value of $A$. So I want something like $\Pr(B=b | A=a, \langle A\rangle)$. An example might be that $B$ is normally distributed given $A=a$ with the mean $\mu = \langle A\rangle - a$.

As far as I can tell, there isn't a simple way to graphically represent this sort of dependence. One route would be to introduce another random variable $C$ which is a distribution over distributions of $A$. But since my interest is always with a single joint distribution $\Pr(A,B)$, it seems silly to introduce some peaked distribution over distributions. Even then, I don't know how I'd show the dependence on a particular realization of $A$.

So what is a clear and useful way to represent a distributional dependency in a directed graphical model?

• This works for the simple situation I described. Any thoughts on how it could be extended to handle a case such as: $A \to B \to C \to D \to E$. I would want $\Pr(E | D=d)$ to also depend on the distribution $\Pr(D)$, not just on the realization $d$. In this situation, the only input that relates to your suggestion goes into $A$ and then this induces a distribution $\Pr(D)$.
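One concrete reading of the comment's suggestion, sketched in plain NumPy (model and names are hypothetical, purely illustrative): treat $\langle A\rangle$ as a deterministic node computed from $A$'s parameters, so the graph gains an extra deterministic parent of $B$ instead of a distribution over distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generative story: A ~ Normal(mu_A, 1), and B depends on both
# the realization a and the (deterministic) expectation <A> = mu_A.
mu_A = 2.0                      # deterministic node: <A>
a = rng.normal(mu_A, 1.0)       # realization of A
b = rng.normal(mu_A - a, 1.0)   # B | A=a, <A>  ~  Normal(<A> - a, 1)

print(a, b)
```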
2022-01-20T09:40:16
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/152670/how-to-represent-distribution-dependencies-in-bayesian-graphical-models", "openwebmath_score": 0.8176276087760925, "openwebmath_perplexity": 126.01286985063393, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084245, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.653213251657178 }
http://lemon.cs.elte.hu/trac/lemon/browser/glpk-cmake/doc/glpk04.tex?rev=1
# source:glpk-cmake/doc/glpk04.tex@1:c445c931472f

%* glpk04.tex *%

\section{Background}
\label{basbgd}

Using vector and matrix notations LP problem (1.1)---(1.3) (see Section
\ref{seclp}, page \pageref{seclp}) can be stated as follows:

\medskip

\noindent
\hspace{.5in} minimize (or maximize)
$$z=c^Tx_S+c_0\eqno(3.1)$$
\hspace{.5in} subject to linear constraints
$$x_R=Ax_S\eqno(3.2)$$
\hspace{.5in} and bounds of variables
$$\begin{array}{l@{\ }c@{\ }l@{\ }c@{\ }l}
l_R&\leq&x_R&\leq&u_R\\
l_S&\leq&x_S&\leq&u_S\\
\end{array}\eqno(3.3)$$
where:

\noindent
$x_R=(x_1,\dots,x_m)$ is the vector of auxiliary variables;

\noindent
$x_S=(x_{m+1},\dots,x_{m+n})$ is the vector of structural variables;

\noindent
$z$ is the objective function;

\noindent
$c=(c_1,\dots,c_n)$ is the vector of objective coefficients;

\noindent
$c_0$ is the constant term (``shift'') of the objective function;

\noindent
$A=(a_{11},\dots,a_{mn})$ is the constraint matrix;

\noindent
$l_R=(l_1,\dots,l_m)$ is the vector of lower bounds of auxiliary variables;

\noindent
$u_R=(u_1,\dots,u_m)$ is the vector of upper bounds of auxiliary variables;

\noindent
$l_S=(l_{m+1},\dots,l_{m+n})$ is the vector of lower bounds of structural
variables;

\noindent
$u_S=(u_{m+1},\dots,u_{m+n})$ is the vector of upper bounds of structural
variables.

\medskip

From the simplex method's standpoint there is no difference between auxiliary
and structural variables. This allows combining all these variables into one
vector, which leads to the following problem statement:

\medskip

\noindent
\hspace{.5in} minimize (or maximize)
$$z=(0\ |\ c)^Tx+c_0\eqno(3.4)$$
\hspace{.5in} subject to linear constraints
$$(I\ |-\!A)x=0\eqno(3.5)$$
\hspace{.5in} and bounds of variables
$$l\leq x\leq u\eqno(3.6)$$
where:

\noindent
$x=(x_R\ |\ x_S)$ is the $(m+n)$-vector of (all) variables;

\noindent
$(0\ |\ c)$ is the $(m+n)$-vector of objective coefficients;\footnote{Subvector
0 corresponds to objective coefficients at auxiliary variables.}

\noindent
$(I\ |-\!A)$ is the {\it augmented} constraint
$m\times(m+n)$-matrix;\footnote{Note that due to auxiliary variables matrix
$(I\ |-\!A)$ contains the unity submatrix and therefore has full rank. This
means, in particular, that the system (3.5) has no linearly dependent
constraints.}

\noindent
$l=(l_R\ |\ l_S)$ is the $(m+n)$-vector of lower bounds of (all) variables;

\noindent
$u=(u_R\ |\ u_S)$ is the $(m+n)$-vector of upper bounds of (all) variables.

\medskip

By definition an {\it LP basic solution} geometrically is a point in the space
of all variables, which is the intersection of planes corresponding to active
constraints\footnote{A constraint is called {\it active} if in a given point it
is satisfied as an equality, otherwise it is called {\it inactive}.}. The space
of all variables has the dimension $m+n$; therefore, to define some basic
solution we have to define $m+n$ active constraints. Note that the $m$
constraints (3.5), being linearly independent equalities, are always active,
so the remaining $n$ active constraints can be chosen only from bound
constraints (3.6).
A variable is called {\it non-basic} if its (lower or upper) bound is active,
otherwise it is called {\it basic}. Since, as was said above, exactly $n$
bound constraints must be active, in any basic solution there are always $n$
non-basic variables and $m$ basic variables. (Note that a free variable can
also be non-basic. Although such a variable has no bounds, we can think of it
as the difference between two non-negative variables, which are both non-basic
in this case.)

Now consider how to determine numeric values of all variables for a given
basic solution.

Let $\Pi$ be an appropriate permutation matrix of the order $(m+n)$. Then we
can write:
$$\left(\begin{array}{@{}c@{}}x_B\\x_N\\\end{array}\right)=
\Pi\left(\begin{array}{@{}c@{}}x_R\\x_S\\\end{array}\right)=\Pi x,\eqno(3.7)$$
where $x_B$ is the vector of basic variables, $x_N$ is the vector of non-basic
variables, and $x=(x_R\ |\ x_S)$ is the vector of all variables in the
original order. In this case the system of linear constraints (3.5) can be
rewritten as follows:
$$(I\ |-\!A)\Pi^T\Pi x=0\ \ \ \Rightarrow\ \ \ (B\ |\ N)
\left(\begin{array}{@{}c@{}}x_B\\x_N\\\end{array}\right)=0,\eqno(3.8)$$
where
$$(B\ |\ N)=(I\ |-\!A)\Pi^T.\eqno(3.9)$$
Matrix $B$ is a square non-singular $m\times m$-matrix, which is composed of
the columns of the augmented constraint matrix corresponding to basic
variables. It is called the {\it basis matrix} or simply the {\it basis}.
Matrix $N$ is a rectangular $m\times n$-matrix, which is composed of the
columns of the augmented constraint matrix corresponding to non-basic
variables.

From (3.8) it follows that:
$$Bx_B+Nx_N=0,\eqno(3.10)$$
therefore,
$$x_B=-B^{-1}Nx_N.\eqno(3.11)$$
Thus, the formula (3.11) shows how to determine numeric values of basic
variables $x_B$ assuming that non-basic variables $x_N$ are fixed on their
active bounds.

The $m\times n$-matrix
$$\Xi=-B^{-1}N,\eqno(3.12)$$
which appears in (3.11), is called the {\it simplex tableau}.\footnote{This
definition corresponds to the GLPK implementation.} It shows how basic
variables depend on non-basic variables:
$$x_B=\Xi x_N.\eqno(3.13)$$

The system (3.13) is equivalent to the system (3.5) in the sense that they
both define the same set of points in the space of (primal) variables which
satisfy these systems. If, moreover, the values of all basic variables satisfy
their bound constraints (3.3), the corresponding basic solution is called
{\it (primal) feasible}, otherwise {\it (primal) infeasible}. It is understood
that any (primal) feasible basic solution satisfies all constraints (3.2) and
(3.3).

The LP theory says that if an LP has an optimal solution, it has (at least
one) basic feasible solution which corresponds to the optimum. And the most
natural way to determine whether a given basic solution is optimal or not is
to use the Karush---Kuhn---Tucker optimality conditions.
\def\arraystretch{1.5}

For the problem statement (3.4)---(3.6) the optimality conditions are the
following:\footnote{These conditions can be applied to any solution, not only
to a basic solution.}
$$(I\ |-\!A)x=0\eqno(3.14)$$
$$(I\ |-\!A)^T\pi+\lambda_l+\lambda_u=\nabla z=(0\ |\ c)^T\eqno(3.15)$$
$$l\leq x\leq u\eqno(3.16)$$
$$\lambda_l\geq 0,\ \ \lambda_u\leq 0\ \ \mbox{(minimization)}\eqno(3.17)$$
$$\lambda_l\leq 0,\ \ \lambda_u\geq 0\ \ \mbox{(maximization)}\eqno(3.18)$$
$$(\lambda_l)_k(x_k-l_k)=0,\ \ (\lambda_u)_k(x_k-u_k)=0,\ \ k=1,2,\dots,
m+n\eqno(3.19)$$
where:
$\pi=(\pi_1,\pi_2,\dots,\pi_m)$ is an $m$-vector of Lagrange multipliers for
the equality constraints (3.5);
$\lambda_l=[(\lambda_l)_1,(\lambda_l)_2,\dots,(\lambda_l)_n]$ is an $n$-vector
of Lagrange multipliers for the lower bound constraints (3.6);
$\lambda_u=[(\lambda_u)_1,(\lambda_u)_2,\dots,(\lambda_u)_n]$ is an $n$-vector
of Lagrange multipliers for the upper bound constraints (3.6).

Condition (3.14) is the {\it primal} (original) system of equality constraints
(3.5).

Condition (3.15) is the {\it dual} system of equality constraints. It requires
the gradient of the objective function to be a linear combination of normals
to the planes defined by constraints of the original problem.

Condition (3.16) is the primal (original) system of bound constraints (3.6).

Condition (3.17) (or (3.18) in case of maximization) is the dual system of
bound constraints.

Condition (3.19) is the {\it complementary slackness condition}. It requires,
for each original (auxiliary or structural) variable $x_k$, that either its
(lower or upper) bound must be active, or the zero bound of the corresponding
Lagrange multiplier ($(\lambda_l)_k$ or $(\lambda_u)_k$) must be active.

In GLPK the two multipliers $(\lambda_l)_k$ and $(\lambda_u)_k$ for each
primal (original) variable $x_k$, $k=1,2,\dots,m+n$, are combined into one
multiplier:
$$\lambda_k=(\lambda_l)_k+(\lambda_u)_k,\eqno(3.20)$$
which is called a {\it dual variable} for $x_k$. This {\it cannot} lead to
ambiguity, because both the lower and upper bounds of $x_k$ cannot be active
at the same time,\footnote{If $x_k$ is a fixed variable, we can think of it as
a double-bounded variable $l_k\leq x_k\leq u_k$, where $l_k=u_k$.} so at least
one of $(\lambda_l)_k$ and $(\lambda_u)_k$ must be equal to zero, and because
these multipliers have different signs, the combined multiplier, which is
their sum, uniquely defines each of them.
\def\arraystretch{1}

Using dual variables $\lambda_k$, the dual system of bound constraints (3.17)
and (3.18) can be written in the form of the so-called ``rule of signs'' as
follows:

\begin{center}
\begin{tabular}{|@{\,}c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|
@{$\,$}c|c@{$\,$}|@{$\,$}c@{$\,$}|@{$\,$}c@{$\,$}|}
\hline
Original bound&\multicolumn{3}{c|}{Minimization}&\multicolumn{3}{c|}
{Maximization}\\
\cline{2-7}
constraint&$(\lambda_l)_k$&$(\lambda_u)_k$&$(\lambda_l)_k+
(\lambda_u)_k$&$(\lambda_l)_k$&$(\lambda_u)_k$&$(\lambda_l)_k+
(\lambda_u)_k$\\
\hline
$-\infty<x_k<+\infty$&$=0$&$=0$&$\lambda_k=0$&$=0$&$=0$&$\lambda_k=0$\\
$x_k\geq l_k$&$\geq 0$&$=0$&$\lambda_k\geq 0$&$\leq 0$&$=0$&$\lambda_k\leq0$\\
$x_k\leq u_k$&$=0$&$\leq 0$&$\lambda_k\leq 0$&$=0$&$\geq 0$&$\lambda_k\geq0$\\
$l_k\leq x_k\leq u_k$&$\geq 0$&$\leq 0$&$-\infty\!<\!\lambda_k\!<\!+\infty$
&$\leq 0$&$\geq 0$&$-\infty\!<\!\lambda_k\!<\!+\infty$\\
$x_k=l_k=u_k$&$\geq 0$&$\leq 0$&$-\infty\!<\!\lambda_k\!<\!+\infty$&
$\leq 0$&$\geq 0$&$-\infty\!<\!\lambda_k\!<\!+\infty$\\
\hline
\end{tabular}
\end{center}

Note that each primal variable $x_k$ has its dual counterpart $\lambda_k$ and
vice versa. This allows applying the same partition to the vector of dual
variables as in (3.7):
$$\left(\begin{array}{@{}c@{}}\lambda_B\\\lambda_N\\\end{array}\right)=
\Pi\lambda,\eqno(3.21)$$
where $\lambda_B$ is the vector of dual variables for the basic variables
$x_B$, and $\lambda_N$ is the vector of dual variables for the non-basic
variables $x_N$.

By definition, bounds of basic variables are inactive constraints, so in any
basic solution $\lambda_B=0$. Corresponding values of the dual variables
$\lambda_N$ for the non-basic variables $x_N$ can be determined in the
following way. From the dual system (3.15) we have:
$$(I\ |-\!A)^T\pi+\lambda=(0\ |\ c)^T,\eqno(3.22)$$
so multiplying both sides of (3.22) by matrix $\Pi$ gives:
$$\Pi(I\ |-\!A)^T\pi+\Pi\lambda=\Pi(0\ |\ c)^T.\eqno(3.23)$$
From (3.9) it follows that
$$\Pi(I\ |-\!A)^T=[(I\ |-\!A)\Pi^T]^T=(B\ |\ N)^T.\eqno(3.24)$$
Further, we can apply the partition (3.7) also to the vector of objective
coefficients (see (3.4)):
$$\left(\begin{array}{@{}c@{}}c_B\\c_N\\\end{array}\right)=
\Pi\left(\begin{array}{@{}c@{}}0\\c\\\end{array}\right),\eqno(3.25)$$
where $c_B$ is the vector of objective coefficients at basic variables, and
$c_N$ is the vector of objective coefficients at non-basic variables. Now,
substituting (3.24), (3.21), and (3.25) into (3.23) leads to:
$$(B\ |\ N)^T\pi+(\lambda_B\ |\ \lambda_N)^T=(c_B\ |\ c_N)^T,\eqno(3.26)$$
and transposing both sides of (3.26) gives the system:
$$\left(\begin{array}{@{}c@{}}B^T\\N^T\\\end{array}\right)\pi+
\left(\begin{array}{@{}c@{}}\lambda_B\\\lambda_N\\\end{array}\right)=
\left(\begin{array}{@{}c@{}}c_B\\c_N\\\end{array}\right),\eqno(3.27)$$
which can be written as follows:
$$\left\{
\begin{array}{@{\ }r@{\ }c@{\ }r@{\ }c@{\ }l@{\ }}
B^T\pi&+&\lambda_B&=&c_B\\
N^T\pi&+&\lambda_N&=&c_N\\
\end{array}
\right.\eqno(3.28)$$
Lagrange multipliers $\pi=(\pi_i)$ correspond to the equality constraints
(3.5) and therefore can have any sign.
This allows resolving the first subsystem of (3.28) as
follows:\footnote{$B^{-T}$ means $(B^T)^{-1}=(B^{-1})^T$.}
$$\pi=B^{-T}(c_B-\lambda_B)=-B^{-T}\lambda_B+B^{-T}c_B,\eqno(3.29)$$
and substitution of $\pi$ from (3.29) into the second subsystem of (3.28)
gives:
$$\lambda_N=-N^T\pi+c_N=N^TB^{-T}\lambda_B+(c_N-N^TB^{-T}c_B).\eqno(3.30)$$
The latter system can be written in the following final form:
$$\lambda_N=-\Xi^T\lambda_B+d,\eqno(3.31)$$
where $\Xi$ is the simplex tableau (see (3.12)), and
$$d=c_N-N^TB^{-T}c_B=c_N+\Xi^Tc_B\eqno(3.32)$$
is the vector of so-called {\it reduced costs} of the non-basic variables.

\pagebreak

Above it was said that in any basic solution $\lambda_B=0$, so $\lambda_N=d$,
as follows from (3.31).

The system (3.31) is equivalent to the system (3.15) in the sense that they
both define the same set of points in the space of dual variables $\lambda$
which satisfy these systems. If, moreover, the values of all dual variables
$\lambda_N$ (i.e. the reduced costs $d$) satisfy their bound constraints
(i.e. the ``rule of signs''; see the table above), the corresponding basic
solution is called {\it dual feasible}, otherwise {\it dual infeasible}. It is
understood that any dual feasible solution satisfies all constraints (3.15)
and (3.17) (or (3.18) in case of maximization).

It can be easily shown that the complementary slackness condition (3.19) is
always satisfied for {\it any} basic solution. Therefore, a basic
solution\footnote{It is assumed that a complete basic solution has the form
$(x,\lambda)$, i.e. it includes primal as well as dual variables.} is
{\it optimal} if and only if it is primal and dual feasible, because in this
case it satisfies all the optimality conditions (3.14)---(3.19).

\def\arraystretch{1.5}

The meaning of the reduced costs $d=(d_j)$ of the non-basic variables can be
explained in the following way. From (3.4), (3.7), and (3.25) it follows that:
$$z=c_B^Tx_B+c_N^Tx_N+c_0.\eqno(3.33)$$
Substituting $x_B$ from (3.11) into (3.33), we can eliminate the basic
variables and express the objective only through the non-basic variables:
$$\begin{array}{r@{\ }c@{\ }l}
z&=&c_B^T(-B^{-1}Nx_N)+c_N^Tx_N+c_0=\\
&=&(c_N^T-c_B^TB^{-1}N)x_N+c_0=\\
&=&(c_N-N^TB^{-T}c_B)^Tx_N+c_0=\\
&=&d^Tx_N+c_0.
\end{array}\eqno(3.34)$$
From (3.34) it is seen that the reduced cost $d_j$ shows how the objective
function $z$ depends on the non-basic variable $(x_N)_j$ in the neighborhood
of the current basic solution, i.e. while the current basis remains unchanged.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\section{LP basis routines}
\label{lpbasis}

\subsection{glp\_bf\_exists---check if the basis factorization exists}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_bf_exists(glp_prob *lp);
\end{verbatim}

\subsubsection*{Returns}

If the basis factorization for the current basis associated with the specified
problem object exists and therefore is available for computations, the routine
\verb|glp_bf_exists| returns non-zero. Otherwise the routine returns zero.

\subsubsection*{Comments}

Let the problem object have $m$ rows and $n$ columns. In GLPK the {\it basis
matrix} $B$ is a square non-singular matrix of the order $m$, whose columns
correspond to basic (auxiliary and/or structural) variables.
It is defined by the following main equality:\footnote{For more details see
Subsection \ref{basbgd}, page \pageref{basbgd}.}
$$(B\ |\ N)=(I\ |-\!A)\Pi^T,$$
where $I$ is the unity matrix of the order $m$, whose columns correspond to
auxiliary variables; $A$ is the original constraint $m\times n$-matrix, whose
columns correspond to structural variables; $(I\ |-\!A)$ is the augmented
constraint $m\times(m+n)$-matrix, whose columns correspond to all (auxiliary
and structural) variables following in the original order; $\Pi$ is a
permutation matrix of the order $m+n$; and $N$ is a rectangular
$m\times n$-matrix, whose columns correspond to non-basic (auxiliary and/or
structural) variables.

For various reasons it may be necessary to solve linear systems with matrix
$B$. To provide this possibility the GLPK implementation maintains an
invertible form of $B$ (that is, some representation of $B^{-1}$) called the
{\it basis factorization}, which is an internal component of the problem
object. Typically, the basis factorization is computed by the simplex solver,
which keeps it in the problem object to be available for other computations.

It should be noted that any changes in the problem object which affect the
basis matrix (e.g. changing the status of a row or column, changing a basic
column of the constraint matrix, removing an active constraint, etc.)
invalidate the basis factorization. So before calling any API routine which
uses the basis factorization, the application program must make sure (using
the routine \verb|glp_bf_exists|) that the factorization exists and therefore
is available for computations.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{glp\_factorize---compute the basis factorization}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_factorize(glp_prob *lp);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_factorize| computes the basis factorization for the
current basis associated with the specified problem object.\footnote{The
current basis is defined by the current statuses of rows (auxiliary variables)
and columns (structural variables).}

The basis factorization is computed ``from scratch'' even if it exists, so the
application program may use the routine \verb|glp_bf_exists| and, if the basis
factorization already exists, not call the routine \verb|glp_factorize|, to
prevent extra work.

The routine \verb|glp_factorize| {\it does not} compute components of the
basic solution (i.e. primal and dual values).

\subsubsection*{Returns}

\begin{tabular}{@{}p{25mm}p{97.3mm}@{}}
0 & The basis factorization has been successfully computed.\\
\verb|GLP_EBADB| & The basis matrix is invalid, because the number of basic
(auxiliary and structural) variables is not the same as the number of rows in
the problem object.\\
\verb|GLP_ESING| & The basis matrix is singular within the working
precision.\\
\verb|GLP_ECOND| & The basis matrix is ill-conditioned, i.e.
its condition number is too large.\\
\end{tabular}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\subsection{glp\_bf\_updated---check if the basis factorization has\\
been updated}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_bf_updated(glp_prob *lp);
\end{verbatim}

\subsubsection*{Returns}

If the basis factorization has just been computed ``from scratch'', the
routine \verb|glp_bf_updated| returns zero. Otherwise, if the factorization
has been updated at least once, the routine returns non-zero.

\subsubsection*{Comments}

{\it Updating} the basis factorization means recomputing it to reflect changes
in the basis matrix. For example, on every iteration of the simplex method
some column of the current basis matrix is replaced by a new column that gives
a new basis matrix corresponding to the adjacent basis. In this case computing
the basis factorization for the adjacent basis ``from scratch'' (as the
routine \verb|glp_factorize| does) would be too time-consuming.

On the other hand, since the basis factorization update is a numeric
computational procedure, applying it many times may lead to accumulating
round-off errors. Therefore the basis is periodically refactorized
(reinverted) ``from scratch'' (with the routine \verb|glp_factorize|), which
allows improving its numerical properties.

The routine \verb|glp_bf_updated| allows determining whether the basis
factorization has been updated at least once since it was computed ``from
scratch''.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\subsection{glp\_get\_bfcp---retrieve basis factorization control parameters}

\subsubsection*{Synopsis}

\begin{verbatim}
void glp_get_bfcp(glp_prob *lp, glp_bfcp *parm);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_get_bfcp| retrieves the control parameters which are
used on computing and updating the basis factorization associated with the
specified problem object.

Current values of the control parameters are stored in a \verb|glp_bfcp|
structure, which the parameter \verb|parm| points to. For a detailed
description of the structure \verb|glp_bfcp| see comments to the routine
\verb|glp_set_bfcp| in the next subsection.

\subsubsection*{Comments}

The purpose of the routine \verb|glp_get_bfcp| is two-fold. First, it allows
the application program to obtain current values of the control parameters
used by the internal GLPK routines which compute and update the basis
factorization.

The second purpose of this routine is to provide proper values for all fields
of the structure \verb|glp_bfcp| in the case when the application program
needs to change some control parameters.

\subsection{glp\_set\_bfcp---change basis factorization control parameters}

\subsubsection*{Synopsis}

\begin{verbatim}
void glp_set_bfcp(glp_prob *lp, const glp_bfcp *parm);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_set_bfcp| changes the control parameters which are used
by internal GLPK routines on computing and updating the basis factorization
associated with the specified problem object.

New values of the control parameters should be passed in a structure
\verb|glp_bfcp|, which the parameter \verb|parm| points to.
For a detailed description of the structure \verb|glp_bfcp| see the paragraph
``Control parameters'' below.

The parameter \verb|parm| can be specified as \verb|NULL|, in which case all
control parameters are reset to their default values.

\subsubsection*{Comments}

Before changing some control parameters with the routine \verb|glp_set_bfcp|
the application program should retrieve current values of all control
parameters with the routine \verb|glp_get_bfcp|. This is needed for backward
compatibility, because in the future new members may appear in the structure
\verb|glp_bfcp|.

Note that new values of control parameters come into effect on the next
computation of the basis factorization, not immediately.

\subsubsection*{Example}

\begin{verbatim}
glp_prob *lp;
glp_bfcp parm;
. . .
/* retrieve current values of control parameters */
glp_get_bfcp(lp, &parm);
/* change the threshold pivoting tolerance */
parm.piv_tol = 0.05;
/* set new values of control parameters */
glp_set_bfcp(lp, &parm);
. . .
\end{verbatim}

\subsubsection*{Control parameters}

This paragraph describes all basis factorization control parameters currently
used in the package. Symbolic names of the control parameters are the names of
the corresponding members in the structure \verb|glp_bfcp|.

\def\arraystretch{1}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt int type} (default: {\tt GLP\_BF\_FT})} \\
&Basis factorization type:\\
&\verb|GLP_BF_FT|---$LU$ + Forrest--Tomlin update;\\
&\verb|GLP_BF_BG|---$LU$ + Schur complement + Bartels--Golub update;\\
&\verb|GLP_BF_GR|---$LU$ + Schur complement + Givens rotation update.\\
&In case of \verb|GLP_BF_FT| the update is applied to matrix $U$, while in
cases of \verb|GLP_BF_BG| and \verb|GLP_BF_GR| the update is applied to the
Schur complement.
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt int lu\_size} (default: {\tt 0})} \\
&The initial size of the Sparse Vector Area, in non-zeros, used on computing
$LU$-factorization of the basis matrix for the first time. If this parameter
is set to 0, the initial SVA size is determined automatically.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt double piv\_tol} (default: {\tt 0.10})} \\
&Threshold pivoting (Markowitz) tolerance, 0 $<$ \verb|piv_tol| $<$ 1, used on
computing $LU$-factorization of the basis matrix. Element $u_{ij}$ of the
active submatrix of factor $U$ fits to be pivot if it satisfies the stability
criterion $|u_{ij}| >= {\tt piv\_tol}\cdot\max|u_{i*}|$, i.e. if it is not
very small in magnitude among other elements in the same row. Decreasing this
parameter may lead to better sparsity at the expense of numerical accuracy,
and vice versa.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt int piv\_lim} (default: {\tt 4})} \\
&This parameter is used on computing $LU$-factorization of the basis matrix
and specifies how many pivot candidates need to be considered on choosing a
pivot element, \verb|piv_lim| $\geq$ 1.
If \verb|piv_lim| candidates have been considered, the pivoting routine
prematurely terminates the search with the best candidate found.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt int suhl} (default: {\tt GLP\_ON})} \\
&This parameter is used on computing $LU$-factorization of the basis matrix.
Being set to {\tt GLP\_ON} it enables applying the following heuristic
proposed by Uwe Suhl: if a column of the active submatrix has no eligible
pivot candidates, it is no longer considered until it becomes a column
singleton. In many cases this allows reducing the time needed for pivot
searching. To disable this heuristic the parameter \verb|suhl| should be set
to {\tt GLP\_OFF}.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt double eps\_tol} (default: {\tt 1e-15})} \\
&Epsilon tolerance, \verb|eps_tol| $\geq$ 0, used on computing
$LU$-factorization of the basis matrix. If an element of the active submatrix
of factor $U$ is less than \verb|eps_tol| in magnitude, it is replaced by
exact zero.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt double max\_gro} (default: {\tt 1e+10})} \\
&Maximal growth of elements of factor $U$, \verb|max_gro| $\geq$ 1, allowable
on computing $LU$-factorization of the basis matrix. If on some elimination
step the ratio $u_{big}/b_{max}$ (where $u_{big}$ is the largest magnitude of
elements of factor $U$ appeared in its active submatrix during all the
factorization process, and $b_{max}$ is the largest magnitude of elements of
the basis matrix to be factorized) exceeds \verb|max_gro|, the basis matrix is
considered as ill-conditioned.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt int nfs\_max} (default: {\tt 100})} \\
&Maximal number of additional row-like factors (entries of the eta file),
\verb|nfs_max| $\geq$ 1, which can be added to $LU$-factorization of the basis
matrix on updating it with the Forrest--Tomlin technique. This parameter is
used only once, before $LU$-factorization is computed for the first time, to
allocate working arrays. As a rule, each update adds one new factor to the eta
file, so this parameter limits the number of updates between
refactorizations.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt double upd\_tol} (default: {\tt 1e-6})} \\
&Update tolerance, 0 $<$ \verb|upd_tol| $<$ 1, used on updating
$LU$-factorization of the basis matrix with the Forrest--Tomlin technique. If
after updating the magnitude of some diagonal element $u_{kk}$ of factor $U$
becomes less than ${\tt upd\_tol}\cdot\max(|u_{k*}|, |u_{*k}|)$, the
factorization is considered as inaccurate.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt int nrs\_max} (default: {\tt 100})} \\
&Maximal number of additional rows and columns, \verb|nrs_max| $\geq$ 1, which
can be added to $LU$-factorization of the basis matrix on updating it with the
Schur complement technique. This parameter is used only once, before
$LU$-factorization is computed for the first time, to allocate working arrays.
As a rule, each update adds one new row and
column to the Schur complement, so this parameter limits the number of updates
between refactorizations.\\
\end{tabular}

\medskip

\noindent\begin{tabular}{@{}p{17pt}@{}p{120.5mm}@{}}
\multicolumn{2}{@{}l}{{\tt int rs\_size} (default: {\tt 0})} \\
&The initial size of the Sparse Vector Area, in non-zeros, used to store
non-zero elements of additional rows and columns introduced on updating
$LU$-factorization of the basis matrix with the Schur complement technique.
If this parameter is set to 0, the initial SVA size is determined
automatically.\\
\end{tabular}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\subsection{glp\_get\_bhead---retrieve the basis header information}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_get_bhead(glp_prob *lp, int k);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_get_bhead| returns the basis header information for the
current basis associated with the specified problem object.

\subsubsection*{Returns}

If basic variable $(x_B)_k$, $1\leq k\leq m$, is $i$-th auxiliary variable
($1\leq i\leq m$), the routine returns $i$. Otherwise, if $(x_B)_k$ is $j$-th
structural variable ($1\leq j\leq n$), the routine returns $m+j$. Here $m$ is
the number of rows and $n$ is the number of columns in the problem object.

\subsubsection*{Comments}

Sometimes the application program may need to know which original (auxiliary
or structural) variable corresponds to a given basic variable, or, which is
the same, which column of the augmented constraint matrix $(I\ |-\!A)$
corresponds to a given column of the basis matrix $B$.

\def\arraystretch{1}

The correspondence is defined as follows:\footnote{For more details see
Subsection \ref{basbgd}, page \pageref{basbgd}.}
$$\left(\begin{array}{@{}c@{}}x_B\\x_N\\\end{array}\right)=
\Pi\left(\begin{array}{@{}c@{}}x_R\\x_S\\\end{array}\right)
\ \ \Leftrightarrow
\ \ \left(\begin{array}{@{}c@{}}x_R\\x_S\\\end{array}\right)=
\Pi^T\left(\begin{array}{@{}c@{}}x_B\\x_N\\\end{array}\right),$$
where $x_B$ is the vector of basic variables, $x_N$ is the vector of
non-basic variables, $x_R$ is the vector of auxiliary variables following in
their original order,\footnote{The original order of auxiliary and structural
variables is defined by the ordinal numbers of the corresponding rows and
columns in the problem object.} $x_S$ is the vector of structural variables
following in their original order, and $\Pi$ is a permutation matrix (which is
a component of the basis factorization).

Thus, if $(x_B)_k=(x_R)_i$ is $i$-th auxiliary variable, the routine returns
$i$, and if $(x_B)_k=(x_S)_j$ is $j$-th structural variable, the routine
returns $m+j$, where $m$ is the number of rows in the problem object.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\subsection{glp\_get\_row\_bind---retrieve row index in the basis header}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_get_row_bind(glp_prob *lp, int i);
\end{verbatim}

\subsubsection*{Returns}

The routine \verb|glp_get_row_bind| returns the index $k$ of basic variable
$(x_B)_k$, $1\leq k\leq m$, which is $i$-th auxiliary variable (that is, the
auxiliary variable corresponding to $i$-th row), $1\leq i\leq m$, in the
current basis associated with the specified problem object, where $m$ is the
number of rows. However, if $i$-th auxiliary variable is non-basic, the
routine returns zero.
\subsubsection*{Comments}

The routine \verb|glp_get_row_bind| is an inverse to the routine
\verb|glp_get_bhead|: if \verb|glp_get_bhead|$(lp,k)$ returns $i$,
\verb|glp_get_row_bind|$(lp,i)$ returns $k$, and vice versa.

\subsection{glp\_get\_col\_bind---retrieve column index in the basis\\
header}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_get_col_bind(glp_prob *lp, int j);
\end{verbatim}

\subsubsection*{Returns}

The routine \verb|glp_get_col_bind| returns the index $k$ of basic
variable $(x_B)_k$, $1\leq k\leq m$, which is $j$-th structural
variable (that is, the structural variable corresponding to $j$-th
column), $1\leq j\leq n$, in the current basis associated with the
specified problem object, where $m$ is the number of rows, $n$ is the
number of columns. However, if $j$-th structural variable is non-basic,
the routine returns zero.

\subsubsection*{Comments}

The routine \verb|glp_get_col_bind| is an inverse to the routine
\verb|glp_get_bhead|: if \verb|glp_get_bhead|$(lp,k)$ returns $m+j$,
\verb|glp_get_col_bind|$(lp,j)$ returns $k$, and vice versa.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\subsection{glp\_ftran---perform forward transformation}

\subsubsection*{Synopsis}

\begin{verbatim}
void glp_ftran(glp_prob *lp, double x[]);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_ftran| performs forward transformation (FTRAN),
i.e. it solves the system $Bx=b$, where $B$ is the basis matrix
associated with the specified problem object, $x$ is the vector of
unknowns to be computed, and $b$ is the vector of right-hand sides.

On entry to the routine elements of the vector $b$ should be stored in
locations \verb|x[1]|, \dots, \verb|x[m]|, where $m$ is the number of
rows. On exit the routine stores elements of the vector $x$ in the same
locations.

\subsection{glp\_btran---perform backward transformation}

\subsubsection*{Synopsis}

\begin{verbatim}
void glp_btran(glp_prob *lp, double x[]);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_btran| performs backward transformation (BTRAN),
i.e. it solves the system $B^Tx=b$, where $B^T$ is the matrix
transposed to the basis matrix $B$ associated with the specified
problem object, $x$ is the vector of unknowns to be computed, and $b$
is the vector of right-hand sides.

On entry to the routine elements of the vector $b$ should be stored in
locations \verb|x[1]|, \dots, \verb|x[m]|, where $m$ is the number of
rows. On exit the routine stores elements of the vector $x$ in the same
locations.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\subsection{glp\_warm\_up---``warm up'' LP basis}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_warm_up(glp_prob *P);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_warm_up| ``warms up'' the LP basis for the
specified problem object using current statuses assigned to rows and
columns (that is, to auxiliary and structural variables).

This operation includes computing factorization of the basis matrix
(if it does not exist), computing primal and dual components of basic
solution, and determining the solution status.
\subsubsection*{Returns}

\begin{tabular}{@{}p{25mm}p{97.3mm}@{}}
0 & The operation has been successfully performed.\\
\verb|GLP_EBADB| & The basis matrix is invalid, because the number of
basic (auxiliary and structural) variables is not the same as the number
of rows in the problem object.\\
\verb|GLP_ESING| & The basis matrix is singular within the working
precision.\\
\verb|GLP_ECOND| & The basis matrix is ill-conditioned, i.e. its
condition number is too large.\\
\end{tabular}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\section{Simplex tableau routines}

\subsection{glp\_eval\_tab\_row---compute row of the tableau}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_eval_tab_row(glp_prob *lp, int k, int ind[],
      double val[]);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_eval_tab_row| computes a row of the current
simplex tableau (see Subsection 3.1.1, formula (3.12)), which (row)
corresponds to some basic variable specified by the parameter $k$ as
follows: if $1\leq k\leq m$, the basic variable is $k$-th auxiliary
variable, and if $m+1\leq k\leq m+n$, the basic variable is $(k-m)$-th
structural variable, where $m$ is the number of rows and $n$ is the
number of columns in the specified problem object. The basis
factorization must exist.

The computed row shows how the specified basic variable depends on
non-basic variables:
$$x_k=(x_B)_i=\xi_{i1}(x_N)_1+\xi_{i2}(x_N)_2+\dots+\xi_{in}(x_N)_n,$$
where $\xi_{i1}$, $\xi_{i2}$, \dots, $\xi_{in}$ are elements of the
simplex table row, and $(x_N)_1$, $(x_N)_2$, \dots, $(x_N)_n$ are
non-basic (auxiliary and structural) variables.

The routine stores column indices and corresponding numeric values of
non-zero elements of the computed row in unordered sparse format in
locations \verb|ind[1]|, \dots, \verb|ind[len]| and \verb|val[1]|,
\dots, \verb|val[len]|, respectively, where $0\leq{\tt len}\leq n$ is
the number of non-zero elements in the row returned on exit.

Element indices stored in the array \verb|ind| have the same sense as
index $k$, i.e. indices 1 to $m$ denote auxiliary variables while
indices $m+1$ to $m+n$ denote structural variables (all these variables
are obviously non-basic by definition).

\subsubsection*{Returns}

The routine \verb|glp_eval_tab_row| returns \verb|len|, which is the
number of non-zero elements in the simplex table row stored in the
arrays \verb|ind| and \verb|val|.

\subsubsection*{Comments}

A row of the simplex table is computed as follows. At first, the
routine checks that the specified variable $x_k$ is basic and uses the
permutation matrix $\Pi$ (3.7) to determine index $i$ of basic variable
$(x_B)_i$, which corresponds to $x_k$.

The row to be computed is $i$-th row of the matrix $\Xi$ (3.12),
therefore:
$$\xi_i=e_i^T\Xi=-e_i^TB^{-1}N=-(B^{-T}e_i)^TN,$$
where $e_i$ is $i$-th unity vector. So the routine performs BTRAN to
obtain $i$-th row of the inverse $B^{-1}$:
$$\varrho_i=B^{-T}e_i,$$
and then computes elements of the simplex table row as inner products:
$$\xi_{ij}=-\varrho_i^TN_j,\ \ j=1,2,\dots,n,$$
where $N_j$ is $j$-th column of matrix $N$ (3.9), which (column)
corresponds to non-basic variable $(x_N)_j$.
The permutation matrix $\Pi$ is used again to convert indices $j$ of
non-basic columns to original ordinal numbers of auxiliary and
structural variables.

\subsection{glp\_eval\_tab\_col---compute column of the tableau}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_eval_tab_col(glp_prob *lp, int k, int ind[],
      double val[]);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_eval_tab_col| computes a column of the current
simplex tableau (see Subsection 3.1.1, formula (3.12)), which (column)
corresponds to some non-basic variable specified by the parameter $k$:
if $1\leq k\leq m$, the non-basic variable is $k$-th auxiliary variable,
and if $m+1\leq k\leq m+n$, the non-basic variable is $(k-m)$-th
structural variable, where $m$ is the number of rows and $n$ is the
number of columns in the specified problem object. The basis
factorization must exist.

The computed column shows how the basic variables depend on the
specified non-basic variable $x_k=(x_N)_j$:
$$
\begin{array}{r@{\ }c@{\ }l@{\ }l}
(x_B)_1&=&\dots+\xi_{1j}(x_N)_j&+\dots\\
(x_B)_2&=&\dots+\xi_{2j}(x_N)_j&+\dots\\
.\ \ .&.&.\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\\
(x_B)_m&=&\dots+\xi_{mj}(x_N)_j&+\dots\\
\end{array}
$$
where $\xi_{1j}$, $\xi_{2j}$, \dots, $\xi_{mj}$ are elements of the
simplex table column, and $(x_B)_1$, $(x_B)_2$, \dots, $(x_B)_m$ are
basic (auxiliary and structural) variables.

The routine stores row indices and corresponding numeric values of
non-zero elements of the computed column in unordered sparse format in
locations \verb|ind[1]|, \dots, \verb|ind[len]| and \verb|val[1]|,
\dots, \verb|val[len]|, respectively, where $0\leq{\tt len}\leq m$ is
the number of non-zero elements in the column returned on exit.

Element indices stored in the array \verb|ind| have the same sense as
index $k$, i.e. indices 1 to $m$ denote auxiliary variables while
indices $m+1$ to $m+n$ denote structural variables (all these variables
are obviously basic by definition).

\subsubsection*{Returns}

The routine \verb|glp_eval_tab_col| returns \verb|len|, which is the
number of non-zero elements in the simplex table column stored in the
arrays \verb|ind| and \verb|val|.

\subsubsection*{Comments}

A column of the simplex table is computed as follows. At first, the
routine checks that the specified variable $x_k$ is non-basic and uses
the permutation matrix $\Pi$ (3.7) to determine index $j$ of non-basic
variable $(x_N)_j$, which corresponds to $x_k$.

The column to be computed is $j$-th column of the matrix $\Xi$ (3.12),
therefore:
$$\Xi_j=\Xi e_j=-B^{-1}Ne_j=-B^{-1}N_j,$$
where $e_j$ is $j$-th unity vector, $N_j$ is $j$-th column of matrix
$N$ (3.9). So the routine performs FTRAN to transform $N_j$ to the
simplex table column $\Xi_j=(\xi_{ij})$ and uses the permutation matrix
$\Pi$ to convert row indices $i$ to original ordinal numbers of
auxiliary and structural variables.
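\medskip

\noindent For illustration, a minimal calling sketch is shown below.
This fragment is a hedged example rather than part of the reference
text: the helper name \verb|print_tab_row| is hypothetical, while the
GLPK calls themselves are the documented ones; a tableau row has at
most $n$ non-zeros, which bounds the array sizes:

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <glpk.h>

/* hypothetical helper: print the non-zero entries of the simplex
   tableau row of basic variable x_k; the basis factorization is
   assumed to already exist */
void print_tab_row(glp_prob *lp, int k)
{     int n = glp_get_num_cols(lp);
      /* GLPK arrays are 1-based, so n+1 entries are allocated */
      int *ind = calloc(1+n, sizeof(int));
      double *val = calloc(1+n, sizeof(double));
      int len, t;
      len = glp_eval_tab_row(lp, k, ind, val);
      for (t = 1; t <= len; t++)
         printf("xi[%d] = %g\n", ind[t], val[t]);
      free(ind), free(val);
}
\end{verbatim}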
\newpage

\subsection{glp\_transform\_row---transform explicitly specified\\
row}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_transform_row(glp_prob *P, int len, int ind[],
      double val[]);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_transform_row| performs the same operation as the
routine \verb|glp_eval_tab_row| with the exception that the row to be
transformed is specified explicitly as a sparse vector.

The explicitly specified row may be thought of as a linear form:
$$x=a_1x_{m+1}+a_2x_{m+2}+\dots+a_nx_{m+n},$$
where $x$ is an auxiliary variable for this row, $a_j$ are coefficients
of the linear form, and $x_{m+j}$ are structural variables.

On entry column indices and numerical values of non-zero coefficients
$a_j$ of the specified row should be placed in locations \verb|ind[1]|,
\dots, \verb|ind[len]| and \verb|val[1]|, \dots, \verb|val[len]|, where
\verb|len| is the number of non-zero coefficients.

This routine uses the system of equality constraints and the current
basis in order to express the auxiliary variable $x$ through the current
non-basic variables (as if the transformed row were added to the problem
object and the auxiliary variable $x$ were basic), i.e. the resultant
row has the form:
$$x=\xi_1(x_N)_1+\xi_2(x_N)_2+\dots+\xi_n(x_N)_n,$$
where $\xi_j$ are influence coefficients, $(x_N)_j$ are non-basic
(auxiliary and structural) variables, and $n$ is the number of columns
in the problem object.

On exit the routine stores indices and numerical values of non-zero
coefficients $\xi_j$ of the resultant row in locations \verb|ind[1]|,
\dots, \verb|ind[len']| and \verb|val[1]|, \dots, \verb|val[len']|,
where $0\leq{\tt len'}\leq n$ is the number of non-zero coefficients in
the resultant row returned by the routine. Note that indices of
non-basic variables stored in the array \verb|ind| correspond to
original ordinal numbers of variables: indices 1 to $m$ mean auxiliary
variables and indices $m+1$ to $m+n$ mean structural ones.

\subsubsection*{Returns}

The routine \verb|glp_transform_row| returns \verb|len'|, the number of
non-zero coefficients in the resultant row stored in the arrays
\verb|ind| and \verb|val|.

\subsection{glp\_transform\_col---transform explicitly specified\\
column}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_transform_col(glp_prob *P, int len, int ind[],
      double val[]);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_transform_col| performs the same operation as the
routine \verb|glp_eval_tab_col| with the exception that the column to
be transformed is specified explicitly as a sparse vector.
The explicitly specified column may be thought of as if it were added
to the original system of equality constraints:
$$
\begin{array}{l@{\ }c@{\ }r@{\ }c@{\ }r@{\ }c@{\ }r}
x_1&=&a_{11}x_{m+1}&+\dots+&a_{1n}x_{m+n}&+&a_1x \\
x_2&=&a_{21}x_{m+1}&+\dots+&a_{2n}x_{m+n}&+&a_2x \\
\multicolumn{7}{c}
{.\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .\ \ .}\\
x_m&=&a_{m1}x_{m+1}&+\dots+&a_{mn}x_{m+n}&+&a_mx \\
\end{array}
$$
where $x_i$ are auxiliary variables, $x_{m+j}$ are structural variables
(presented in the problem object), $x$ is a structural variable for the
explicitly specified column, and $a_i$ are constraint coefficients at
$x$.

On entry row indices and numerical values of non-zero coefficients
$a_i$ of the specified column should be placed in locations
\verb|ind[1]|, \dots, \verb|ind[len]| and \verb|val[1]|, \dots,
\verb|val[len]|, where \verb|len| is the number of non-zero
coefficients.

This routine uses the system of equality constraints and the current
basis in order to express the current basic variables through the
structural variable $x$ (as if the transformed column were added to the
problem object and the variable $x$ were non-basic):
$$
\begin{array}{l@{\ }c@{\ }r}
(x_B)_1&=\dots+&\xi_{1}x\\
(x_B)_2&=\dots+&\xi_{2}x\\
\multicolumn{3}{c}{.\ \ .\ \ .\ \ .\ \ .\ \ .}\\
(x_B)_m&=\dots+&\xi_{m}x\\
\end{array}
$$
where $\xi_i$ are influence coefficients, $x_B$ are basic (auxiliary
and structural) variables, and $m$ is the number of rows in the problem
object.

On exit the routine stores indices and numerical values of non-zero
coefficients $\xi_i$ of the resultant column in locations \verb|ind[1]|,
\dots, \verb|ind[len']| and \verb|val[1]|, \dots, \verb|val[len']|,
where $0\leq{\tt len'}\leq m$ is the number of non-zero coefficients in
the resultant column returned by the routine. Note that indices of basic
variables stored in the array \verb|ind| correspond to original ordinal
numbers of variables, i.e. indices 1 to $m$ mean auxiliary variables,
indices $m+1$ to $m+n$ mean structural ones.

\subsubsection*{Returns}

The routine \verb|glp_transform_col| returns \verb|len'|, the number of
non-zero coefficients in the resultant column stored in the arrays
\verb|ind| and \verb|val|.

\subsection{glp\_prim\_rtest---perform primal ratio test}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_prim_rtest(glp_prob *P, int len, const int ind[],
      const double val[], int dir, double eps);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_prim_rtest| performs the primal ratio test using
an explicitly specified column of the simplex table.

The current basic solution associated with the LP problem object must
be primal feasible.

The explicitly specified column of the simplex table shows how the
basic variables $x_B$ depend on some non-basic variable $x$ (which is
not necessarily presented in the problem object):
$$
\begin{array}{l@{\ }c@{\ }r}
(x_B)_1&=\dots+&\xi_{1}x\\
(x_B)_2&=\dots+&\xi_{2}x\\
\multicolumn{3}{c}{.\ \ .\ \ .\ \ .\ \ .\ \ .}\\
(x_B)_m&=\dots+&\xi_{m}x\\
\end{array}
$$

The column is specified on entry to the routine in sparse format.
Ordinal numbers of basic variables $(x_B)_i$ should be placed in
locations \verb|ind[1]|, \dots, \verb|ind[len]|, where ordinal numbers
1 to $m$ denote auxiliary variables, and ordinal numbers $m+1$ to $m+n$
denote structural variables. The corresponding non-zero coefficients
$\xi_i$ should be placed in locations \verb|val[1]|, \dots,
\verb|val[len]|. The arrays \verb|ind| and \verb|val| are not changed
by the routine.

The parameter \verb|dir| specifies the direction in which the variable
$x$ changes on entering the basis: $+1$ means increasing, $-1$ means
decreasing.

The parameter \verb|eps| is an absolute tolerance (a small positive
number, say, $10^{-9}$) used by the routine to skip $\xi_i$'s whose
magnitude is less than \verb|eps|.

The routine determines which basic variable (among those specified in
\verb|ind[1]|, \dots, \verb|ind[len]|) reaches its (lower or upper)
bound first before any other basic variables do, and which, therefore,
should leave the basis in order to keep primal feasibility.

\subsubsection*{Returns}

The routine \verb|glp_prim_rtest| returns the index, \verb|piv|, in the
arrays \verb|ind| and \verb|val| corresponding to the pivot element
chosen, $1\leq$ \verb|piv| $\leq$ \verb|len|. If the adjacent basic
solution is primal unbounded, and therefore the choice cannot be made,
the routine returns zero.

\subsubsection*{Comments}

If the non-basic variable $x$ is presented in the LP problem object,
the input column can be computed with the routine
\verb|glp_eval_tab_col|; otherwise, it can be computed with the routine
\verb|glp_transform_col|.

\subsection{glp\_dual\_rtest---perform dual ratio test}

\subsubsection*{Synopsis}

\begin{verbatim}
int glp_dual_rtest(glp_prob *P, int len, const int ind[],
      const double val[], int dir, double eps);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_dual_rtest| performs the dual ratio test using
an explicitly specified row of the simplex table.

The current basic solution associated with the LP problem object must
be dual feasible.

The explicitly specified row of the simplex table is a linear form
that shows how some basic variable $x$ (which is not necessarily
presented in the problem object) depends on non-basic variables $x_N$:
$$x=\xi_1(x_N)_1+\xi_2(x_N)_2+\dots+\xi_n(x_N)_n.$$

The row is specified on entry to the routine in sparse format. Ordinal
numbers of non-basic variables $(x_N)_j$ should be placed in locations
\verb|ind[1]|, \dots, \verb|ind[len]|, where ordinal numbers 1 to $m$
denote auxiliary variables, and ordinal numbers $m+1$ to $m+n$ denote
structural variables. The corresponding non-zero coefficients $\xi_j$
should be placed in locations \verb|val[1]|, \dots, \verb|val[len]|.
The arrays \verb|ind| and \verb|val| are not changed by the routine.

The parameter \verb|dir| specifies the direction in which the variable
$x$ changes on leaving the basis: $+1$ means that $x$ goes to its lower
bound, so its reduced cost (dual variable) is increasing (minimization)
or decreasing (maximization); $-1$ means that $x$ goes to its upper
bound, so its reduced cost is decreasing (minimization) or increasing
(maximization).
The parameter \verb|eps| is an absolute tolerance (a small positive
number, say, $10^{-9}$) used by the routine to skip $\xi_j$'s whose
magnitude is less than \verb|eps|.

The routine determines which non-basic variable (among those specified
in \verb|ind[1]|, \dots, \verb|ind[len]|) should enter the basis in
order to keep dual feasibility, because its reduced cost reaches the
(zero) bound first before this occurs for any other non-basic
variables.

\subsubsection*{Returns}

The routine \verb|glp_dual_rtest| returns the index, \verb|piv|, in the
arrays \verb|ind| and \verb|val| corresponding to the pivot element
chosen, $1\leq$ \verb|piv| $\leq$ \verb|len|. If the adjacent basic
solution is dual unbounded, and therefore the choice cannot be made,
the routine returns zero.

\subsubsection*{Comments}

If the basic variable $x$ is presented in the LP problem object, the
input row can be computed with the routine \verb|glp_eval_tab_row|;
otherwise, it can be computed with the routine
\verb|glp_transform_row|.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage

\section{Post-optimal analysis routines}

\subsection{glp\_analyze\_bound---analyze active bound of non-basic
variable}

\subsubsection*{Synopsis}

\begin{verbatim}
void glp_analyze_bound(glp_prob *P, int k, double *limit1,
      int *var1, double *limit2, int *var2);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_analyze_bound| analyzes the effect of varying the
active bound of the specified non-basic variable.

The non-basic variable is specified by the parameter $k$, where
$1\leq k\leq m$ means the auxiliary variable of the corresponding row,
and $m+1\leq k\leq m+n$ means a structural variable (column).

Note that the current basic solution must be optimal, and the basis
factorization must exist.

Results of the analysis have the following meaning (\verb|value1| and
\verb|value2| below correspond to \verb|limit1| and \verb|limit2| in
the synopsis).

\verb|value1| is the minimal value of the active bound, at which the
basis still remains primal feasible and thus optimal. \verb|-DBL_MAX|
means that the active bound has no lower limit.

\verb|var1| is the ordinal number of an auxiliary (1 to $m$) or
structural ($m+1$ to $m+n$) basic variable, which reaches its bound
first and thereby limits further decreasing the active bound being
analyzed. If \verb|value1| = \verb|-DBL_MAX|, \verb|var1| is set to 0.

\verb|value2| is the maximal value of the active bound, at which the
basis still remains primal feasible and thus optimal. \verb|+DBL_MAX|
means that the active bound has no upper limit.

\verb|var2| is the ordinal number of an auxiliary (1 to $m$) or
structural ($m+1$ to $m+n$) basic variable, which reaches its bound
first and thereby limits further increasing the active bound being
analyzed. If \verb|value2| = \verb|+DBL_MAX|, \verb|var2| is set to 0.

The parameters \verb|value1|, \verb|var1|, \verb|value2|, \verb|var2|
can be specified as \verb|NULL|, in which case the corresponding
information is not stored.
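\medskip

\noindent For illustration, a hedged calling sketch follows. The helper
name \verb|report_bound_range| is hypothetical; the sketch presumes, as
required above, that the current basic solution is optimal and the
basis factorization exists:

\begin{verbatim}
#include <stdio.h>
#include <glpk.h>

/* hypothetical helper: report the interval over which the active
   bound of non-basic variable k may vary while the current basis
   stays optimal (a limiting variable of 0 means "no limit") */
void report_bound_range(glp_prob *P, int k)
{     double value1, value2;
      int var1, var2;
      glp_analyze_bound(P, k, &value1, &var1, &value2, &var2);
      printf("bound of variable %d may vary within [%g, %g]\n",
         k, value1, value2);
      printf("limiting basic variables: %d and %d\n", var1, var2);
}
\end{verbatim}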
\newpage

\subsection{glp\_analyze\_coef---analyze objective coefficient at basic
variable}

\subsubsection*{Synopsis}

\begin{verbatim}
void glp_analyze_coef(glp_prob *P, int k, double *coef1,
      int *var1, double *value1, double *coef2, int *var2,
      double *value2);
\end{verbatim}

\subsubsection*{Description}

The routine \verb|glp_analyze_coef| analyzes the effect of varying the
objective coefficient at the specified basic variable.

The basic variable is specified by the parameter $k$, where
$1\leq k\leq m$ means the auxiliary variable of the corresponding row,
and $m+1\leq k\leq m+n$ means a structural variable (column).

Note that the current basic solution must be optimal, and the basis
factorization must exist.

Results of the analysis have the following meaning.

\verb|coef1| is the minimal value of the objective coefficient, at
which the basis still remains dual feasible and thus optimal.
\verb|-DBL_MAX| means that the objective coefficient has no lower
limit.

\verb|var1| is the ordinal number of an auxiliary (1 to $m$) or
structural ($m+1$ to $m+n$) non-basic variable, whose reduced cost
reaches its zero bound first and thereby limits further decreasing the
objective coefficient being analyzed. If \verb|coef1| =
\verb|-DBL_MAX|, \verb|var1| is set to 0.

\verb|value1| is the value of the basic variable being analyzed in an
adjacent basis, which is defined as follows. Suppose the objective
coefficient reaches its minimal value (\verb|coef1|) and continues
decreasing. Then the reduced cost of the limiting non-basic variable
(\verb|var1|) becomes dual infeasible and the current basis becomes
non-optimal, which forces the limiting non-basic variable to enter the
basis, replacing there some basic variable that leaves the basis to
keep primal feasibility. Note that on determining the adjacent basis
the current bounds of the basic variable being analyzed are ignored as
if it were a free (unbounded) variable, so it cannot leave the basis.
It may happen that no dual feasible adjacent basis exists, in which
case \verb|value1| is set to \verb|-DBL_MAX| or \verb|+DBL_MAX|.

\verb|coef2| is the maximal value of the objective coefficient, at
which the basis still remains dual feasible and thus optimal.
\verb|+DBL_MAX| means that the objective coefficient has no upper
limit.

\verb|var2| is the ordinal number of an auxiliary (1 to $m$) or
structural ($m+1$ to $m+n$) non-basic variable, whose reduced cost
reaches its zero bound first and thereby limits further increasing the
objective coefficient being analyzed. If \verb|coef2| =
\verb|+DBL_MAX|, \verb|var2| is set to 0.

\verb|value2| is the value of the basic variable being analyzed in an
adjacent basis, which is defined exactly in the same way as
\verb|value1| above with the exception that now the objective
coefficient is increasing.

The parameters \verb|coef1|, \verb|var1|, \verb|value1|, \verb|coef2|,
\verb|var2|, \verb|value2| can be specified as \verb|NULL|, in which
case the corresponding information is not stored.

%* eof *%
2020-10-22T02:20:39
{ "domain": "elte.hu", "url": "http://lemon.cs.elte.hu/trac/lemon/browser/glpk-cmake/doc/glpk04.tex?rev=1", "openwebmath_score": 0.9079437851905823, "openwebmath_perplexity": 4250.01344872914, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084245, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132516571779 }
https://prioritizr.net/reference/loglinear_interpolation.html
Log-linearly interpolate values between two thresholds.

## Usage

loglinear_interpolation(
  x,
  coordinate_one_x,
  coordinate_one_y,
  coordinate_two_x,
  coordinate_two_y
)

## Arguments

x: numeric x values for which to interpolate y values.

coordinate_one_x: numeric value for lower x-coordinate.

coordinate_one_y: numeric value for lower y-coordinate.

coordinate_two_x: numeric value for upper x-coordinate.

coordinate_two_y: numeric value for upper y-coordinate.

## Value

numeric values.

## Details

Values are log-linearly interpolated at the x-coordinates specified in x using the lower and upper coordinate arguments to define the line. Values less than the lower x-coordinate are assigned the minimum y-coordinate, and values greater than the upper x-coordinate are assigned the maximum y-coordinate.

## Examples

# create series of x-values
x <- seq(0, 1000)

# interpolate y-values for the x-values given the two reference points:
# (200, 100) and (900, 15)
y <- loglinear_interpolation(x, 200, 100, 900, 15)

# plot the interpolated values
plot(y ~ x)

# add the reference points to the plot (shown in red)
points(x = c(200, 900), y = c(100, 15), pch = 18, col = "red", cex = 2)

# this function can also be used to calculate representation targets
# following Rodrigues et al. (2014). For example, let's say that
# we had a set of species we were interested in calculating representation
# targets for and we had information on their range sizes (in km^2).
spp_range_size_km2 <- seq(0.01, 15000000, by = 100)

# we can now use this function to calculate representation targets
# (expressed as a percentage of the species' range sizes) using
# the thresholds and cap sizes reported by Rodrigues et al. 2014
spp_target_percentage_rodrigues <-
  loglinear_interpolation(
    x = spp_range_size_km2,
    coordinate_one_x = 1000,
    coordinate_one_y = 1,
    coordinate_two_x = 250000,
    coordinate_two_y = 0.1) * 100

# it is also common to apply a cap to the representation targets,
# so let's apply the cap to these targets following Butchart et al. (2015)
spp_target_percentage_butchart <- ifelse(
  spp_range_size_km2 >= 10000000,
  (1000000 / spp_range_size_km2) * 100,
  spp_target_percentage_rodrigues)

# plot species range sizes and representation targets
plot(spp_target_percentage_butchart ~ spp_range_size_km2,
  xlab = "Range size km^2", ylab = "Representation target (%)",
  type = "l")

# plot species range sizes and representation targets on a log10 scale
plot(spp_target_percentage_butchart ~ log10(spp_range_size_km2),
  xlab = "Range size km^2", ylab = "Representation target (%)",
  type = "l", xaxt = "n")
axis(1, pretty(log10(spp_range_size_km2)),
  10^pretty(log10(spp_range_size_km2)))
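As a sketch of the formula this kind of interpolation typically implements (the exact internal form used by the package is an assumption here, inferred from the Details above), y is interpolated linearly against the logarithm of x and clamped outside the two reference points:

$$y(x)=\begin{cases}y_1, & x\le x_1\\[4pt] y_1+\dfrac{\log x-\log x_1}{\log x_2-\log x_1}\,(y_2-y_1), & x_1<x<x_2\\[4pt] y_2, & x\ge x_2,\end{cases}$$

where $(x_1,y_1)$ denotes (coordinate_one_x, coordinate_one_y) and $(x_2,y_2)$ denotes (coordinate_two_x, coordinate_two_y); since only a ratio of logarithms appears, the result does not depend on the logarithm base.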
2021-06-19T12:41:44
{ "domain": "prioritizr.net", "url": "https://prioritizr.net/reference/loglinear_interpolation.html", "openwebmath_score": 0.5760525465011597, "openwebmath_perplexity": 7891.760500015894, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084245, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132516571779 }
https://math.stackexchange.com/questions/642226/understanding-the-syntax-for-derivatives-dy-dx
# Understanding the syntax for derivatives - dy/dx

I'm new to calculus, and I'm trying to understand the syntax of derivatives: $$\frac {dy}{dx}$$ At a glance it implies some kind of division, and some variable "d" has entered the picture. Does this reflect some deep relationship between division and derivatives, or is it just misleading? The syntax for higher order derivatives seems even stranger: $$\frac {d^2y}{dx^2} ... \frac {d^3y}{dx^3} ... \frac {d^4y}{dx^4} ...$$ This seems to imply that some exponents are now involved. Again, is this reflective of deeper relationships or just an overlap of syntax choice? Does more advanced calculus ever involve different numbers as follows? $$\frac {d^3y}{dx^2} ... \frac {d^5y}{dx^3}$$

• – Amzoti Jan 18 '14 at 1:44
• The notation $y'(x), y''(x),$ etc. might be more clear. – littleO Jan 18 '14 at 1:49
• $\frac {d^2y}{dx^2}$ stands for $\frac{d}{dx}\left(\frac{dy}{dx}\right)$ – janmarqz Jan 18 '14 at 1:52
• This might be helpful: math.stackexchange.com/questions/21199/is-dy-dx-not-a-ratio – Zircht Jan 18 '14 at 1:59
• @janmarqz, is that implying some kind of multiplication of the derivatives? – Brendan Hill Jan 18 '14 at 2:00

We write $\frac{dy}{dx}$ but this is just notational convention, really. First, it is important to remember that this is not a ratio (see this, which is an excellent discussion of $\frac{dy}{dx}$); it is a limit, and there is a limit definition (see the brief section here for an idea). The idea is we approximate the change of functions using an ever closer secant line (that 'becomes' the tangent line in the limit). Typically the first definition one sees is: the derivative of $f(x)$, written $f'(x)$, at $x=a$ is $$f'(a)=\lim_{h \to 0} \frac{f(a+h)-f(a)}{(a+h)-a}$$ Now rather than write this limit each time we just 'shorthand' this via $f'(a)$. If we knew the derivative in general (for any $x$), we would write $f'(x)$ instead of the general limit, because who wants to do that! (Especially when we can find a formula for $f'(x)$!) Another way of writing $f'(x)$ is $$f'(x)=\frac{df}{dx}$$ or the derivative of $f(x)$ with respect to $x$.

When it comes to taking multiple derivatives, we use the Leibniz notation. That is, $\frac{dy}{dx}$ means the derivative of the function $y(x)$, with respect to $x$. Meaning, we examine how much $y$ (or $y(x)$) changes when we change $x$ by a little bit. What if we want to look at the change of the change? Well, notationally we write $$\frac{d^2y}{dx^2}=\frac{d}{dx}\left(\frac{dy}{dx}\right)$$ In the previous notation, this would be $f''(x)$ and we just 'assume' we took the derivative with respect to $x$ since it is the only input variable for $f$. Notice the parenthesized term on the right should not be thought of as multiplication but rather as a function (strictly speaking this is more of an operator; in any case, we don't multiply these!). We find $\frac{dy}{dx}$, then we take the derivative of $\frac{dy}{dx}$ again. What if we wanted to do this process $100$ times!? Well, we can write that compactly as $$\frac{d^{100}y}{dx^{100}}$$ It makes no sense to write $\frac{d^5y}{dx^4}$. Why? Because that's not how we defined the notation! Now why did we define it that way? Well, because we did! Now why do we write the superscripts the way we do? Well, the superscript of the $d$ in the numerator tells us how many times we have taken the derivative in total. The superscript in the denominator tells us how many times we have taken the derivative with respect to that variable.
So in the above case, we have taken the derivative $100$ times, and each of those $100$ times we looked at how $y$ changed when we changed $x$; that is, we took the derivative with respect to $x$ $100$ times. Now what if I had a function of $2$ variables, $y=f(x,z)$? I could look at how $y$ changes when I change $x$, $$\frac{\partial^1 y}{\partial x^1}$$ (when we have more than one input variable, we use $\partial$ instead of $d$. Why? TRADITION!) I could have looked at how it changed with respect to $z$, $$\frac{\partial^1 y}{\partial z^1}$$ Now what if I wanted to look at how this change changes when I change $x$? We take the derivative with respect to $x$, so we write $$\frac{\partial^2 y}{\partial x^1 \partial z^1}$$ Notice we have taken $2$ total derivatives, first with respect to $z$ then with respect to $x$. The top tells us how many times we took the derivative in total and the bottom superscripts tell us how many times we took the derivative with respect to that variable. Of course, the numbers in the denominator need to add up to the one in the numerator! Of course, what we really mean is $$\frac{\partial}{\partial x}\left(\frac{\partial^1 y}{\partial z^1} \right)$$ as above. But this shorthand is easier, which is why we use it. Notice the order we took the derivative in works from left to right. Now one may ask whether $$\frac{\partial^2 y}{\partial z^1 \partial x^1}=\frac{\partial^2 y}{\partial x^1 \partial z^1}$$ The answer is no. Not all the time! But in 'most' of the 'nice' cases, yes (at least at the introductory undergraduate level!).

EDIT: With an answer this long, there are bound to be typos that I missed. If you are confused or think something is in error, please ask/let me know! Also, as a shorthand for derivatives of functions of more than one variable, often over $$\frac{\partial f}{\partial x}$$ one may write $f_x$. Then $$\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial z}\right)=\frac{\partial^2 f}{\partial x^1 \partial z^1}$$ would be written $f_{zx}$. Notice the order in which we took the derivative still works from right to left.

• In your edit, you have a mistake. $\dfrac{\partial^2f}{\partial x\partial z}=f_{zx}$, not $f_{xz}$. Also, I have never seen $\dfrac{\partial}{\partial x}\left(\dfrac{\partial }{\partial z}\right)$ written as $\dfrac{\partial^2}{\partial z\partial x}$, only $\dfrac{\partial^2}{\partial x\partial z}$ as the partial differentiation operators aren't commutative (in general). See en.wikipedia.org/wiki/Partial_derivative#Notation. – Daryl Jan 18 '14 at 3:40
• @Daryl Thank you for pointing that out! I actually switched it twice I believe. It is fixed now. I believe your final comment was intended to be $\frac{\partial^2 f}{\partial x \partial z}$, as I never wrote the other? But I have seen such notation, though I do admit it is rare as the latter is shorter. I don't know that there is a difference otherwise. – mathematics2x2life Jan 18 '14 at 3:47
• That's a thorough answer to my question, thanks for taking the time to explain it to me! – Brendan Hill Jan 18 '14 at 10:54

$\frac{dy}{dx}$ is just a notation. It is to be interpreted as a limit, and NOT as a fraction. Also, $\frac{d^5y}{dx^3}$ does not make sense. $\frac{d^2y}{dx^2}$ means the derivative of the derivative of $y$, which is a function of $x$, with respect to $x$.

• It seems like there's some redundancy in the syntax then, if the order numbers are always going to be the same? I find it confusing...
:/ – Brendan Hill Jan 18 '14 at 2:00 • @BrendanHill : with partial higher derivatives like $\frac{\partial^5 z}{\partial^3 x \partial^2 y}$, clearly the numbers on the bottom are important. – Stefan Smith Jan 18 '14 at 3:09
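To make the limit definition discussed above concrete, here is one worked instance (added as an illustration): for $f(x)=x^2$ at $x=a$,

$$f'(a)=\lim_{h \to 0}\frac{(a+h)^2-a^2}{h}=\lim_{h \to 0}\frac{2ah+h^2}{h}=\lim_{h \to 0}(2a+h)=2a,$$

so $\frac{dy}{dx}=2x$ for $y=x^2$, and applying $\frac{d}{dx}$ once more gives $\frac{d^2y}{dx^2}=2$.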
2019-06-19T01:15:41
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/642226/understanding-the-syntax-for-derivatives-dy-dx", "openwebmath_score": 0.9299219250679016, "openwebmath_perplexity": 207.26456707666858, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084245, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132516571779 }
http://math.stackexchange.com/questions/209637/derivative-of-a-vector-with-respect-to-a-matrix
# Derivative of a vector with respect to a matrix

I am at an impasse. I don't know if homework is allowed on here or not, so if it isn't, someone delete this. Given: $H_{\gamma} = C_{\beta \beta} v_{\gamma} + C_{\beta \varepsilon} C_{\varepsilon \beta} v_{\gamma} + C_{\beta \varepsilon} v_{\varepsilon} C_{\beta \gamma} + \delta_{\beta \gamma} v_{\beta} + \varepsilon_{\gamma \varepsilon \eta} v_{\varepsilon} v_{\eta} + \varepsilon_{\gamma \varepsilon \eta} C_{\varepsilon \eta}$ Find: $\frac{\partial H_{\gamma}}{\partial C_{\alpha \beta}}$ I believe $\delta$ is the identity matrix, and $\varepsilon$ is the Levi-Civita symbol. I don't know how to work this sort of thing and my notes aren't helping. I have one example that takes the partial of a scalar function with respect to a matrix, but this is confusing me. Does the $\beta$ in the definition of $H_{\gamma}$ refer to the same $\beta$ in $\frac{\partial H_{\gamma}}{\partial C_{\alpha \beta}}$? I can't even tell what kind of tensor this result is. Is my result a matrix, a vector, or a vector of matrices? If someone could maybe walk me through even the first term ($\frac{\partial C_{\beta \beta} v_{\gamma} }{\partial C_{\alpha \beta}}$) that would be very helpful. My textbook doesn't include this sort of notation (only my lecture notes), so that's not very helpful. And Googling "derivative of a vector with respect to a matrix" is useless. Thanks!

- Are you using Einstein notation where a repeated index is summed over? – Jonathan Oct 9 '12 at 4:26
Yes, I am using Einstein notation. – Nick Oct 9 '12 at 4:57
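As a hedged sketch of the first term (assuming Einstein summation, and renaming the dummy index in $H_\gamma$ from $\beta$ to $\mu$ so it does not clash with the free $\beta$ of $\partial/\partial C_{\alpha\beta}$): using $\frac{\partial C_{\mu\nu}}{\partial C_{\alpha\beta}} = \delta_{\mu\alpha}\delta_{\nu\beta}$,

$$\frac{\partial}{\partial C_{\alpha\beta}}\left(C_{\mu\mu}\,v_\gamma\right)=\delta_{\mu\alpha}\delta_{\mu\beta}\,v_\gamma=\delta_{\alpha\beta}\,v_\gamma,$$

which has three free indices $\alpha,\beta,\gamma$, suggesting that $\frac{\partial H_\gamma}{\partial C_{\alpha\beta}}$ as a whole is a third-order tensor.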
2014-07-28T05:01:48
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/209637/derivative-of-a-vector-with-respect-to-a-matrix", "openwebmath_score": 0.9174253344535828, "openwebmath_perplexity": 226.06546312155965, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084245, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132516571779 }
http://math.stackexchange.com/questions/4009/confusion-between-operation-and-relation-clarification-needed
# Confusion between operation and relation: Clarification needed

I'm doing some old exams and found the following question: The set $S=\{1,2,3\}$ is given. Provide an example of a binary operation on set $S$, a binary relation on set $S$, and a function $f:S\rightarrow R$. So, I'm thinking about $+_4$ as the operation, but wouldn't it be a relation too? I can take an ordered pair from $S$ and associate an element from $S$ with every ordered pair. For the third part, I'm thinking about $y=x*1.25$ or something similar, but that part isn't so problematic for me. I did read the Wikipedia articles, but the difference between operations and relations isn't clear to me.

-

A binary operation is a function from $S\times S \to S$, such as addition, multiplication or anything really. A binary relation is just a subset of $S^2$; it is not necessarily a function and it doesn't have to include all the elements of $S$ in one way or another. A function $f\colon S\to R$ is a relation, this time a subset of $S\times R$; however, it satisfies a certain property: if you take some $s \in S$ then there is exactly one ordered pair with $s$ as its first coordinate, so if you have $\langle s,r_1\rangle$ as well as $\langle s,r_2\rangle$ then you can say that $r_1 = r_2$.
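For concreteness, one admissible set of answers (an illustration consistent with the definitions above): take the binary operation $a\ast b=\min(a,b)$, which is a function $S\times S\to S$; the binary relation $R=\{(a,b)\in S^2 : a\le b\}$, which is just a subset of $S^2$; and the function $f\colon S\to\mathbb{R}$, $f(x)=1.25x$, matching the asker's own suggestion. Note that $+_4$ does not actually work as an operation on $S$, since $1+3\equiv 0\pmod 4$ and $0\notin S$.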
2014-12-28T04:30:38
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/4009/confusion-between-operation-and-relation-clarification-needed", "openwebmath_score": 0.8076469302177429, "openwebmath_perplexity": 153.96601447799634, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084245, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132516571779 }
https://mathhelpforum.com/threads/exponents-and-logs.147017/
# Exponents and Logs

#### nuckers

Why is the 5 in this equation? I can't multiply the 2 and 5; some of this math is so confusing. $$\displaystyle 2(5)^x=7^{x+1}$$

#### e^(i*pi) MHF Hall of Honor

Why is the 5 in this equation? I can't multiply the 2 and 5; some of this math is so confusing. $$\displaystyle 2(5)^x=7^{x+1}$$

The 5 is the base of the exponential. You can still take logs and use the relevant log laws. Hint: $$\displaystyle \log(2\cdot 5^x) = \log(2) + \log(5^x)$$

#### Plato MHF Helper

$$\displaystyle 2(5)^x=7^{x+1}$$ That equation is equivalent to $$\displaystyle \left( {\frac{5}{7}} \right)^x = \frac{7}{2}$$ which has solution $$\displaystyle x = \frac{{\log (7) - \log (2)}}{{\log (5) - \log (7)}}$$

#### nuckers

That equation is equivalent to $$\displaystyle \left( {\frac{5}{7}} \right)^x = \frac{7}{2}$$ which has solution $$\displaystyle x = \frac{{\log (7) - \log (2)}}{{\log (5) - \log (7)}}$$

I'm totally confused how you came up with that. I did screw up and put a 7 in the equation; it's supposed to be $$\displaystyle 3^{x+1}$$

#### Plato MHF Helper

I'm totally confused how you came up with that. I did screw up and put a 7 in the equation; it's supposed to be $$\displaystyle 3^{x+1}$$

$$\displaystyle 2 \cdot 5^x = 3^{x + 1}$$ is equivalent to $$\displaystyle \left( {\frac{5}{3}} \right)^x = \frac{3}{2}$$ which has solution $$\displaystyle x = \frac{{\log (3) - \log (2)}}{{\log (5) - \log (3)}}$$ To see how that works, divide both sides by $$\displaystyle 3^x$$ and also by $$\displaystyle 2$$. Note that $$\displaystyle \frac{{5^x }}{{3^x }} = \left( {\frac{5}{3}} \right)^x$$.

#### nuckers

$$\displaystyle 2 \cdot 5^x = 3^{x + 1}$$ is equivalent to $$\displaystyle \left( {\frac{5}{3}} \right)^x = \frac{3}{2}$$ which has solution $$\displaystyle x = \frac{{\log (3) - \log (2)}}{{\log (5) - \log (3)}}$$ To see how that works, divide both sides by $$\displaystyle 3^x$$ and also by $$\displaystyle 2$$. Note that $$\displaystyle \frac{{5^x }}{{3^x }} = \left( {\frac{5}{3}} \right)^x$$.

OK, I understand it now. Thanks so much for the help; I appreciate it.
2019-11-20T17:19:47
{ "domain": "mathhelpforum.com", "url": "https://mathhelpforum.com/threads/exponents-and-logs.147017/", "openwebmath_score": 0.9540709853172302, "openwebmath_perplexity": 296.02236098051196, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639653084245, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132516571779 }
https://physics.stackexchange.com/questions/104826/conservation-of-energy-quick-question/104831
# Conservation of energy quick question

Say we had a particle moving in a frictionless funnel that was projected horizontally. If we had some initial conditions for the energy $E$, then would these conditions be the same always? For instance, in this particular question I got $$E = \tfrac{1}{2}m\dot r^2 - mgz,$$ and we were given that $z = b\left ( \dfrac{b}{r} \right )^n$, and it was projected at the inner surface level $z = b$ horizontally with speed $U$. Using those initial conditions, I got $E = \tfrac{1}{2}m U^2 - mgb.$ However, would I be correct in stating that $$\tfrac{1}{2}m\dot r^2 - mgz = \tfrac{1}{2}m U^2 - mgb?$$ I looked in the solutions and the lecturer wrote that they were equal, but does the energy of the particle not change?

• Not that there's anything wrong with it, but uppercase $U$ is an unusual choice for speed. It usually means potential energy. Mar 23 '14 at 19:43
• @DavidZ it was phrased as such in the question, could you explain why it's not wrong? – John Mar 23 '14 at 19:53
2022-01-24T00:59:57
{ "domain": "stackexchange.com", "url": "https://physics.stackexchange.com/questions/104826/conservation-of-energy-quick-question/104831", "openwebmath_score": 0.7783564329147339, "openwebmath_perplexity": 208.1627005211177, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.971563964485063, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.6532132511036061 }
https://math.stackexchange.com/questions/648616/show-that-3-is-a-primitive-root-modulo-p-2q1
# Show that $-3$ is a primitive root modulo $p=2q+1$

This was a question from an exam: Let $q \ge 5$ be a prime number and assume that $p=2q+1$ is also prime. Prove that $-3$ is a primitive root in $\mathbb{Z}_p$. I guess the solution goes something like this: Let $k$ be the multiplicative order of $-3$ modulo $p$. Using Euler's theorem we see that: $(-3)^{2q} \equiv1\ (mod\ p)$. Hence $k\ |\ 2q$, which means (since $q$ is prime) that $k=1,\ 2,\ q,$ or $2q$. Obviously $k \neq 1$ and $k \neq 2$, since otherwise we'll have $[-3]_p=[1]_p$ and $[(-3)^2]_p = [9]_p = [1]_p$, respectively, which is wrong since $p \ge 11$. What remains is to show that $k \neq q$. Unfortunately, I could not figure out how to show this.

Hint: If $(-3)^q\equiv1\pmod p$, then $(-3)^{\frac{p-1}2}\equiv1\pmod p$, hence $-3$ would be a quadratic residue modulo $p$. You could write this as $$\left(\!\frac{-3}{p}\!\right)=1$$ and play around with the Law of Quadratic Reciprocity and other properties of the Legendre symbol. This will allow you to conclude something very interesting.

Edit: I'll show how I personally would handle the Legendre symbols. We have \begin{align}1=\left(\!\frac{-3}{p}\!\right)&=\left(\!\frac{-1}{p}\!\right)\cdot\left(\!\frac{3}{p}\!\right)\\&=(-1)^{\frac{p-1}2}\cdot\left(\!\frac{3}{p}\!\right)\\ &=(-1)^q\cdot(-1)^{\frac{(p-1)(3-1)}{4}}\cdot\left(\!\frac{p}{3}\!\right)\\ &=(-1)\cdot(-1)\cdot\left(\!\frac{p}{3}\!\right),\end{align} hence $\left(\!\frac{p}{3}\!\right)=1$, which means $p\equiv1\pmod3$. This would imply $3\mid p-1=2q$, a contradiction.

• Wait, there is no need of the reciprocity here, except the properties linked to in the second link, right? :) – awllower Jan 23 '14 at 11:18
• I can use the properties to show that: $$\left(\!\frac{3}{p}\!\right)=-1$$ and then show that it is impossible for a prime $q \ge 5$ to have: $$2q+1\equiv5,7\ (mod\ 12)$$ But this property is very specific, and not something that can be thought up during an exam. – SomeStrangeUser Jan 23 '14 at 11:56
• I edited my answer and added the rest of the solution. Please tell me if anything is unclear, I will explain. The only properties I use are the multiplicativity, Euler's criterion and the law of QR. – punctured dusk Jan 23 '14 at 12:17
• @awllower (and SomeStrangeUser): Yes, in fact it can be solved if you know the values of $\left(\!\frac 3p\!\right)$ by heart. However I suggest not to learn these (I don't know them by heart either), and to use QR. There's less chance of making mistakes if you only use the general properties. (Besides, note that the multiplicative property follows from Euler's criterion.) – punctured dusk Jan 23 '14 at 12:23
• Okay. Now I got it. Thanks a lot! – SomeStrangeUser Jan 23 '14 at 12:36

Hint: If $(-3)^q \equiv 1 \pmod p$ then $-3$ is a quadratic residue modulo $p$ (because if $\xi$ is a primitive root and $(\xi^k)^q \equiv 1$ then $k$ is even and $\xi^k$ is a quadratic residue)
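As a quick numerical sanity check (an illustration for the smallest admissible case): take $q=5$, $p=11$, so $-3\equiv 8\pmod{11}$. The successive powers of $8$ are

$$8,\ 9,\ 6,\ 4,\ 10,\ 3,\ 2,\ 5,\ 7,\ 1 \pmod{11},$$

so the order of $-3$ is exactly $10=p-1$, i.e. $-3$ is indeed a primitive root modulo $11$.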
2019-07-23T09:58:34
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/648616/show-that-3-is-a-primitive-root-modulo-p-2q1", "openwebmath_score": 0.9245212078094482, "openwebmath_perplexity": 166.3319106188758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.971563964485063, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.653213251103606 }
https://www.physicsforums.com/threads/closed-set-metric-spaces.548254/
# Homework Help: Closed set (metric spaces)

1. Nov 7, 2011

### Ted123

Suppose $f:\mathbb{R}\to \mathbb{R}$ is a continuous function (standard metric). Show that its graph $\{ (x,f(x)) : x \in \mathbb{R} \}$ is a closed subset of $\mathbb{R}^2$ (Euclidean metric). How to show this is closed?

2. Nov 7, 2011

### lanedance

What are your definitions of closed? Thinking geometrically, a continuous function will have a graph that is an unbroken curve in the 2D plane; how would you show this is closed in $\mathbb{R}^2$?

3. Nov 7, 2011

### Ted123

Well a set $A$ is closed if $\partial A \subset A$, i.e. $\partial A \cap A^c = \emptyset$

4. Nov 7, 2011

### Ted123

How could I show it is closed by considering the function $g : \mathbb{R}^2 \to \mathbb{R}$ defined by $g(x,y)=f(x)- y$?
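One way the approach in the last post could be completed (a hedged sketch, not a full write-up): $g(x,y)=f(x)-y$ is continuous on $\mathbb{R}^2$ because $f$ is continuous, and the graph is exactly the preimage of a closed set,

$$\{ (x,f(x)) : x \in \mathbb{R} \} = g^{-1}(\{0\}),$$

so it is closed, since preimages of closed sets under continuous maps are closed.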
2018-05-22T01:05:41
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/closed-set-metric-spaces.548254/", "openwebmath_score": 0.7285190224647522, "openwebmath_perplexity": 1026.882703922277, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639644850629, "lm_q2_score": 0.6723316991792861, "lm_q1q2_score": 0.653213251103606 }
http://mathhelpforum.com/pre-calculus/51437-pre-cal-help.html
# Thread: pre cal help

1. ## pre cal help

1. What is the area of a square whose diagonal is 1 unit longer than the length of a side?

2. If a three-digit number is chosen at random from the set of all three-digit numbers, what is the probability all 3 digits will be prime?

3. x+y=b, x+2y=b^2: for what value of "b" will the solution of the system consist of pairs of positive numbers?

4. Find the equation of a line perpendicular to 3x-5y=17 that goes through the point (4,3).

5. Ms Sanders has Epogen 2200 units subcutaneous injection 3 times a week ordered for the anemia caused by chronic renal failure. Epogen 3000 units/ml is available. How many milliliters will the patient receive for each dose?

2. Originally Posted by Peyton Sawyer
1. What is the area of a square whose diagonal is 1 unit longer than the length of a side? [...]

1. If we let $x$ be a side length of the square, its diagonal is $x+1$. Using Pythagoras' Theorem, we can see that $x^2 + x^2 = (x+1)^2$ (since $a^2 + b^2 = c^2$). Solving for $x$ we get...

$2x^2 = x^2 + 2x + 1$

$x^2 - 2x - 1 = 0$

Use the Quadratic Formula to find $x$ (only one answer will be acceptable). Then square $x$ to find the square's area.

3. Originally Posted by Peyton Sawyer
3. x+y=b, x+2y=b^2: for what value of "b" will the solution of the system consist of pairs of positive numbers? [...]

3. Use the elimination method to get $y$ in terms of $b$. Substitute back into equation 1 to get $x$ in terms of $b$. You should get $x=2b - b^2$ and $y=b^2 - b$. You want both $x$ and $y$ to be positive, so $2b - b^2 > 0$ and $b^2 - b > 0$. The first gives $b(2-b)>0$, i.e. $0<b<2$. The second gives $b(b-1)>0$, i.e. $b<0$ or $b>1$. Putting the inequalities together, both conditions hold exactly when $1<b<2$.

4. Originally Posted by Peyton Sawyer
1. What is the area of a square whose diagonal is 1 unit longer than the length of a side?
Another way to solve number 1, knowing that in a 45-45-90 triangle the hypotenuse is $\sqrt{2}$ multiplied by a side:

$x + 1 = \sqrt{2}x$

$(\sqrt{2}-1)x = 1$

$x = \ldots$

EDIT: As for number 4, simply solve for y to get the function in the form of: $y = mx + b$ Recall that $m$ is the slope, and that the slope of a line that is perpendicular to the line in question is given by: $m_{perpendicular} \cdot m_{line} = -1$ Once you have the slope and the given point, simply plug into point-slope form: $y - y_1 = m(x-x_1)$ and solve for $y$.
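Completing problem 1 from either method above (a worked check): the quadratic $x^2-2x-1=0$ gives

$$x=\frac{2+\sqrt{8}}{2}=1+\sqrt{2}$$

(the root $1-\sqrt{2}$ is negative and rejected), which agrees with $x=\frac{1}{\sqrt{2}-1}=\sqrt{2}+1$ from the second method. The area is therefore $x^2=(1+\sqrt{2})^2=3+2\sqrt{2}\approx 5.83$ square units.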
2017-11-23T04:08:13
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/pre-calculus/51437-pre-cal-help.html", "openwebmath_score": 0.6813880205154419, "openwebmath_perplexity": 972.2709499650274, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.971563964485063, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.653213251103606 }
http://math.stackexchange.com/questions/256411/norm-of-a-linear-functional-f
# Norm of a linear functional f

Can someone help me with this one? What is the norm of this functional on $l^2$? $$f(x) = \sum_{n=1}^\infty \frac{3^n\cdot x_n}{\sqrt{(n+1)!}}, \qquad x=(x_1,x_2,\ldots)\in l^2.$$ It is easy to see this functional is bounded ($\|f(x)\|\leq \sum_{n=1}^\infty \frac{3^n}{\sqrt{(n+1)!}} \|x_n\|$ and the series converges). But I can't compute the exact norm. Any help appreciated.

- What are your spaces? $\|f\| = \sup \{ |f(x)|: \|x\|=1\}.$ – user29999 Dec 11 '12 at 17:28
Is your functional $f\colon l^2 \rightarrow l^2$? – user29999 Dec 11 '12 at 17:31
I changed $||x||$ to $\|x\|$. – Michael Hardy Dec 11 '12 at 19:54
@MichaelHardy, could you please take a look at math.stackexchange.com/questions/256079/… Nobody really knows what to do with this guy. From your "activity" you may have been asleep. – Will Jagy Dec 11 '12 at 21:57

## 2 Answers

This functional is of the form $f(x) = \langle u,x \rangle$, where $u \in \ell^2$. What is $u$? What happens if you take $x = u$?

- Thank you. This led me to the solution. – Uroš Dec 11 '12 at 19:07

As the dual of $l^2$ is $l^2$, for this to make sense we have that $\Bigl(\frac{3^n}{\sqrt{(n+1)!}}\Bigr)_{n=1}^{\infty} \in l^2$. Then the norm of this functional is $$\| \Bigl(\frac{3^n}{\sqrt{(n+1)!}}\Bigr)_{n=1}^{\infty} \|_{l^2} = \sum_{n=1}^{\infty} \frac{3^{2n}}{(n+1)!} = \sum_{n=1}^{\infty} \frac{9^{n}}{(n+1)!}.$$

- But this sum is greater than the number that is surely an upper bound for this norm... – Uroš Dec 11 '12 at 18:11
- I think the norm is the square root of your sum. Do you agree? – Uroš Dec 11 '12 at 19:11
- How can you see easily that $\|f(x)\|\leq \sum_{n=1}^\infty \frac{3^n}{\sqrt{(n+1)!}} \|x_n\|$? What are the norms $\| \cdot \|$? If your norm for $x = (x_1, x_2, \cdots ) \in l^2$ is $\|x\|_{l^2}$ your estimate is wrong. If your functional is from $l^2$ then your functional is $f(x) = \langle x,\Bigl(\frac{3^n}{\sqrt{(n+1)!}}\Bigr)_{n=1}^{\infty} \rangle_{l^2}$ and its norm is $\| \Bigl(\frac{3^n}{\sqrt{(n+1)!}}\Bigr)_{n=1}^{\infty} \|_{l^2}$. – user29999 Dec 12 '12 at 16:15
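For the record, the square root suggested in the comments is indeed needed (a short check appended here as an illustration): with $u_n = \frac{3^n}{\sqrt{(n+1)!}}$,

$$\|f\|=\|u\|_{l^2}=\Bigl(\sum_{n=1}^{\infty}\frac{9^{n}}{(n+1)!}\Bigr)^{1/2}=\Bigl(\frac{e^{9}-10}{9}\Bigr)^{1/2}\approx 29.99,$$

using $\sum_{n=1}^{\infty}\frac{9^{n}}{(n+1)!}=\frac{1}{9}\sum_{m=2}^{\infty}\frac{9^{m}}{m!}=\frac{e^{9}-1-9}{9}$.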
2014-03-12T20:13:35
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/256411/norm-of-a-linear-functional-f", "openwebmath_score": 0.9416971206665039, "openwebmath_perplexity": 691.3503483318943, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639644850629, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132511036058 }
http://mathhelpforum.com/advanced-statistics/53182-another-joint-pdf-question.html
# Thread: Another Joint pdf question

1. ## Another Joint pdf question

Hi, I am trying to find the conditional pdf $f(x\mid 1/2)$ and have the joint pdf $$f(x,y) = 3.3 - y, \qquad 0 < y < x^2 < 1.$$ I am not sure how to set up the limits of integration to get $f(x\mid y)$ because of the $x^2$. Any help would be appreciated.

2. Originally Posted by Number Cruncher 20
Hi, I am trying to find the conditional pdf $f(x\mid 1/2)$ and have the joint pdf $$f(x,y) = 3.3 - y, \qquad 0 < y < x^2 < 1.$$ I am not sure how to set up the limits of integration to get $f(x\mid y)$ because of the $x^2$. Any help would be appreciated.

$f(x | y) = \frac{f(x, y)}{f_Y(y)}$. The joint pdf is non-zero for values of X and Y lying in the region of the XY-plane enclosed by x = 0, x = 1, y = 0 and y = x^2. Therefore the marginal density function of Y is $f_Y(y) = \int_{x = +\sqrt{y}}^{x=1} f(x, y) \, dx$.

3. Originally Posted by mr fantastic
$f(x | y) = \frac{f(x, y)}{f_Y(y)}$. The joint pdf is non-zero for values of X and Y lying in the region of the XY-plane enclosed by x = 0, x = 1, y = 0 and y = x^2. Therefore the marginal density function of Y is $f_Y(y) = \int_{x = +\sqrt{y}}^{x=1} f(x, y) \, dx$.

My mistake. It should be $f_Y(y) = \int_{x = +\sqrt{y}}^{x=1} f(x, y) \, dx + \int_{x=-1}^{x = -\sqrt{y}} f(x, y) \, dx = 2 \int_{x = +\sqrt{y}}^{x=1} f(x, y) \, dx$ (since f(x, y) is symmetric in x). And by now you've realised that it's 1.8 and not 3.3 (see my other mistake in the other post)
2017-01-24T16:38:06
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/advanced-statistics/53182-another-joint-pdf-question.html", "openwebmath_score": 0.7847086191177368, "openwebmath_perplexity": 397.5118179081487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639640733822, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132508268199 }
https://www.tutorialspoint.com/program-to-find-out-the-points-achievable-in-a-contest-in-python
# Program to Find Out the Points Achievable in a Contest in Python

Suppose we are in a programming contest where there are multiple problems, but the contest ends when we solve one problem. Suppose also we have two lists of numbers of the same length, called points and chances: for the ith problem, we have a chances[i] percent chance of solving it for points[i] points. We also have another value k which represents the number of problems we can attempt. The same problem cannot be attempted twice. If we devise an optimal strategy, we have to find the expected value of the number of points we can get in the contest, rounded to the nearest integer.

We can take the expected value of attempting the ith problem as points[i] * chances[i] / 100.0, and this represents the number of points we would get on average.

So, if the input is like points = [600, 400, 1000], chances = [10, 90, 5], k = 2, then the output will be 392.

To solve this, we will follow these steps −

• n := size of points
• for i in range 0 to n, do
  • chances[i] := chances[i] / 100.0
• R := indices 0 to n-1, sorted according to points, descending
• return int(dp(0, K))
• Define a function dp(). This will take i, k
  • if i is same as n, then return 0.0
  • j := R[i]
  • p := chances[j]
  • ev := p * points[j]
  • if k is same as 1, then return maximum of ev, dp(i + 1, k)
  • return maximum of dp(i + 1, k - 1) * (1 - p) + ev, dp(i + 1, k)

## Example

Let us see the following implementation to get a better understanding −

```python
class Solution:
    def solve(self, points, chances, K):
        n = len(points)
        for i in range(n):
            chances[i] /= 100.0
        R = sorted(range(n), key=points.__getitem__, reverse=True)

        def dp(i, k):
            if i == n:
                return 0.0
            j = R[i]
            p = chances[j]
            ev = p * points[j]
            if k == 1:
                return max(ev, dp(i + 1, k))
            return max(dp(i + 1, k - 1) * (1 - p) + ev, dp(i + 1, k))

        return int(dp(0, K))

ob = Solution()
print(ob.solve([600, 400, 1000], [10, 90, 5], 2))
```

## Input

[600, 400, 1000], [10, 90, 5], 2

## Output

392
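The recursion above recomputes overlapping $(i, k)$ subproblems; a sketch of the same solver with memoization via the standard-library functools.lru_cache (the name solve_memo is mine, not part of the tutorial):

```python
from functools import lru_cache

def solve_memo(points, chances, K):
    n = len(points)
    probs = [c / 100.0 for c in chances]
    R = sorted(range(n), key=points.__getitem__, reverse=True)

    @lru_cache(maxsize=None)
    def dp(i, k):
        if i == n:
            return 0.0
        j = R[i]
        ev = probs[j] * points[j]
        if k == 1:
            return max(ev, dp(i + 1, k))
        # either attempt problem j (stop on success, keep going on failure) or skip it
        return max(dp(i + 1, k - 1) * (1 - probs[j]) + ev, dp(i + 1, k))

    return int(dp(0, K))

print(solve_memo([600, 400, 1000], [10, 90, 5], 2))  # 392
```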
2021-05-05T22:19:36
{ "domain": "tutorialspoint.com", "url": "https://www.tutorialspoint.com/program-to-find-out-the-points-achievable-in-a-contest-in-python", "openwebmath_score": 0.4988800883293152, "openwebmath_perplexity": 2851.5311817741654, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639636617015, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132505500339 }
https://www.physicsforums.com/threads/two-capacitors-in-series-for-two-circuits.781209/
# Two Capacitors in series for two circuits

1. Nov 10, 2014

### andre220

1. The problem statement, all variables and given/known data

A capacitor of capacitance $C_1 = C$ is charged by a battery of potential difference $V_0$. After fully charged, it is disconnected from the battery and reconnected in series to a second, uncharged capacitor of capacitance $C_2 = C/2$ and another battery of potential difference $10V_0$. The positive side of the first capacitor is connected to the positive terminal of the battery. Calculate the final potential difference across each of the capacitors.

2. Relevant equations

For the second circuit, $\frac{1}{C_{\mathrm{eq}}} = \frac{1}{C_1} + \frac{1}{C_2}$, so $C_{\mathrm{eq}} = \frac{1}{3} C$, and also $Q = C_{\mathrm{eq}} 10V_0 = \frac{10}{3}V_0 C$.

3. The attempt at a solution

So then, normally from here it would be straightforward; however, the first part about $C_1$ being charged to only $V_0$ is throwing me off. I know it would affect the potential difference across $C_1$, but I can't quite see where that would fit in equation-wise.

2. Nov 10, 2014

### Staff: Mentor

Hint: For purposes of analysis you can replace the capacitor that has the initial charge and voltage $V_o$ with an uncharged capacitor of the same value in series with a voltage source with value $V_o$. This becomes the equivalent circuit model for that initially charged capacitor.

Alternatively you could determine the expression for the charge on the first capacitor and then work out what charge movements are required to satisfy KVL around the new loop (whatever change in charge occurs to one capacitor must occur to the other as well since they are in series).

3. Nov 10, 2014

### andre220

Okay so here is what I have: $10V_0 = V_1 + V_2$, and $V_2 = Q/C_2 = \frac{10}{3}V_0 C\frac{2}{C} = \frac{20}{3}V_0$ and thus because of KVL $V_1 = \frac{10}{3}V_0$. Doesn't feel like that's right though.

4. Nov 10, 2014

### Staff: Mentor

Doesn't look right to me either. Start with the initial charge on the first capacitor. Call it q (and you should have an expression for q based upon the first capacitance and initial voltage). Then you're going to add some charge $\Delta q$ to each capacitor such that the total voltage of the two capacitors yields your new total potential difference. If you can find this $\Delta q$ you can work out the new potentials.

5. Nov 10, 2014

### andre220

Okay. I have for the first circuit $Q = CV_0$, then the second circuit (call it the primed circuit) $Q' = 10V_0 C_{\mathrm{eq}} = \frac{10}{3}C V_0$. Then the first capacitor is carrying $Q$ from the first circuit so that $\Delta Q = Q + Q' = \frac{13}{3} C V_0$.

6. Nov 10, 2014

### Staff: Mentor

No, ΔQ is not Q + Q'. ΔQ is the charge added to the first and second capacitor. Forget the equivalent capacitance, work with the individual capacitors. The new charge on the first capacitor is Q + ΔQ. The new charge on the second capacitor is just ΔQ. Write the expression for the total voltage across the series capacitors. This total must equal the new battery's potential difference.

7. Nov 10, 2014

### andre220

Okay, I think I have it.
$$\begin{eqnarray} 10V_0 &=& V_1 + V_2\\ &=&\frac{Q + \Delta Q}{C_1} + \frac{\Delta Q}{C_2} \\ &=& V_0 + \frac{\Delta Q}{C} + \frac{2 \Delta Q}{C} \\ & = & \frac{3\Delta Q}{C} + V_0 \end{eqnarray}$$
Then, $\Delta Q = 3 V_0 C$, then $V_1 = 4V_0$ and $V_2 = 6V_0$. And $V_1 + V_2 = 10V_0$ as a check.

8. Nov 10, 2014

### Staff: Mentor

Huzzah!
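A sketch of post 7's bookkeeping in sympy (my own check, not from the thread): with initial charge $Q = CV_0$ on the first capacitor and the same added charge $\Delta q$ on both series capacitors, the loop equation fixes $\Delta q$ and the two voltages:

```python
import sympy as sp

C, V0, dq = sp.symbols('C V0 dq', positive=True)
Q = C * V0                                         # initial charge on the first capacitor
eq = sp.Eq((Q + dq) / C + dq / (C / 2), 10 * V0)   # V1 + V2 = 10*V0
sol = sp.solve(eq, dq)[0]
print(sol)                                         # 3*C*V0
print(sp.simplify((Q + sol) / C))                  # V1 = 4*V0
print(sp.simplify(sol / (C / 2)))                  # V2 = 6*V0
```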
2017-10-19T05:55:04
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/two-capacitors-in-series-for-two-circuits.781209/", "openwebmath_score": 0.5969593524932861, "openwebmath_perplexity": 556.1186973637234, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639636617015, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132505500339 }
https://mathoverflow.net/questions/278291/what-are-traces
# What are traces?

Let $A$ be a Noetherian commutative ring and let $A\rightarrow B$ be a finite flat homomorphism of rings. We can thus form the so called "trace" $\mathrm{Tr_{B/A}}:B\rightarrow A$, which is a homomorphism of $A$ - modules defined as follows: Every $b\in B$ acts on $B$ (when viewed as an $A$ - module) by multiplication. Since $B$ is finite flat over $A$ and $A$ is Noetherian, $B$ is a locally free $A$ - module and hence multiplication by $b$ is given locally (on a principal open subset $\mathrm{Spec}(A_s)\subseteq \mathrm{Spec}(A),s\in A$ and under some isomorphism $B_s\cong A_s^n$) by multiplication by a matrix. We define $\mathrm{Tr_{B/A}}(b)$ to be the trace of this matrix. Since the trace of a matrix is independent of the choice of basis, this homomorphism of $A$ - modules glues nicely and is well defined.

In the case $A\rightarrow B$ is finite etale one can even show that this morphism is nondegenerate, i.e. induces an isomorphism $B\overset{\sim}\rightarrow \mathrm{Hom}_A(B,A)$ by adjunction (this is a well known claim of Galois theory in the case where $A\rightarrow B$ is a finite separable field extension).

My (rather ill-formulated) questions are the following:

1) Are there any other algebraic/geometric constructions I should think of as similar to this one?

2) Is there a deeper reason for the existence of such trace morphisms (for example some categorical phenomena that this is a special case of)? What's so special about finite flat homomorphisms that makes this happen? It seems to me pretty mysterious that such a homomorphism even exists, and I do not seem to completely grasp its geometric meaning.

3) What is the geometric intuition behind the canonical isomorphism of $A$ - modules $B\overset{\sim}\rightarrow \mathrm{Hom}_A(B,A)$ in case $A\rightarrow B$ is finite etale?

Let me try to be a bit more specific about what bothers me: Given a ring homomorphism $\phi :A\rightarrow B$ there's an obvious adjunction:

$\mathrm{Forget}:\mathsf Mod_B \substack{\longrightarrow\\\perp \\\longleftarrow \\}\mathsf Mod_A:\mathrm{Hom}_A(B,-)$

which, by evaluating the counit at $A$, gives a map $\mathrm{Tr}_\phi:\mathrm{Hom}_A(B,A)\rightarrow A$ in $\mathsf{Mod}_A$.

Also, given a proper map $f:X\rightarrow Y$ between reasonable schemes (for example essentially finite type schemes over a field) we have the adjunction given by Grothendieck duality

$Rf_*:\mathsf D^b_c(X) \substack{\longrightarrow\\\perp \\\longleftarrow \\}\mathsf D^b_c(Y):Rf^!$

which induces (again by evaluating the counit at $\mathcal{O}_Y$) a morphism $\mathrm{Tr}_f:Rf_*Rf^!\mathcal{O}_Y\rightarrow\mathcal{O}_Y$ in $\mathsf{D}_c^b(Y)$.

What bothers me is that my original trace, unlike the two trace maps I just mentioned, does not seem to come as naturally from some adjunction or anything like that. Where is it coming from? What is it?

Regarding 2) in the question. If $\varphi \colon A \to B$ is a finite flat homomorphism of rings, then the corresponding map of schemes $f \colon X \to Y$ is a finite flat map of schemes, therefore proper. Here $X:= Spec(B)$ and $Y:= Spec(A)$. From Grothendieck duality one has an adjunction $f_* \dashv f^!$, where $f_* \colon Qco(X) \to Qco(Y)$ and $f^!$ is its right adjoint. We don't need to take derived categories because we are in relative dimension 0.
It turns out that one can describe this adjoint as:
$$f^!(\mathcal{G}) = \mathcal{H}om_{\mathcal{O}_Y}(f_*\mathcal{O}_X,\mathcal{G})^\sim$$
with $\mathcal{G} \in Qco(Y)$ and where $(-)^\sim$ denotes the equivalence between $\mathcal{O}_X$-modules and $f_*\mathcal{O}_X$-modules over $Y$, $f$ being an affine morphism. Now the counit of the adjunction
$$\int_f \colon f_*f^! \mathcal{G} \longrightarrow \mathcal{G}$$
applied to $\mathcal{O}_Y$ yields the map
$$\mathrm{Tr}_{\varphi} \colon B \longrightarrow A$$
that is mentioned in the question. One of the most fascinating aspects of Grothendieck duality is this interrelation between very abstract concepts (the adjunction) together with very concrete descriptions (the matrix trace). In higher dimensions one needs to add derived functors (and most conveniently, derived categories) in the abstract part and higher dimensional residues in the concrete part.

Regarding 3) If $f$ is étale then it has a "trivial relative dualizing sheaf", in other words
$$f^! \mathcal{O}_Y \cong f^* \mathcal{O}_Y \cong \mathcal{O}_X$$
which illustrates the isomorphism $B\overset{\sim}\rightarrow Hom_A(B,A)$ in the question.

Regarding the last question: your original trace is an explicit computation of both descriptions of duality for a finite flat map. The uniqueness of adjoints forces the trace to agree with the counit of the adjunction. In a philosophical way, the counit is a way of integrating, and the trace of a matrix is another, and in this case, they both agree as they should.

- I think you messed up the order of composition in the counit, it should probably be $f_{*} f^{!} \mathcal{G} \rightarrow \mathcal{G}$ – Anonymous Coward Aug 9 '17 at 17:07
- There is something I don't understand about your answer. The counit gives you a map $\mathrm{Hom}_{A}(B,A)\rightarrow A$ (which is by the way completely tautological, it just takes a homomorphism and inputs "1"). But $\mathrm{Hom}_{A}(B,A)$ is not the same thing as $B$. There is not even an obvious map $B\rightarrow \mathrm{Hom}_{A}(B,A)$, so I do not see how you even get the map $B\rightarrow A$ – Anonymous Coward Aug 9 '17 at 18:10
- @AnonymousCoward Thank you for spotting the typo. As for the étale case, consider the map that carries $1 \in B$ to $\mathrm{Tr}_{\varphi}$. It can be shown that this map is an isomorphism precisely when $f$ is étale. – Leo Alonso Aug 9 '17 at 21:20
- I don't understand the downvote. In the initial question there was no mention of Grothendieck duality. – Leo Alonso Aug 9 '17 at 22:42
- Perhaps it was a bit too harsh and I'm sorry, but (this part of) your answer sort of missed the point of my question, as $f_*f^!\mathcal{O}_Y$ is not the same thing as $B$, and I don't see how the map you constructed is related to my trace. Relating them in a natural way will completely satisfy me and give you back the upvote. – Anonymous Coward Aug 9 '17 at 22:55

It seems worth mentioning the geometric setting: Let $A$ and $B$ be the rings of functions on varieties $X$ and $Y$ over an algebraically closed field $k$, so we have a map $\phi: Y \to X$. Let $d$ be the degree of $\phi$ and let $x \in X$ be a closed point over which $\phi$ is etale, so $\phi^{-1}(x) = \{ y_1, \ldots, y_d \}$. Let $x$ correspond to the maximal ideal $\mathfrak{p} \subset A$ and let $y_i$ correspond to the maximal ideal $\mathfrak{q}_i \subset B$. By the Chinese Remainder Theorem,
$$B \otimes_A A/\mathfrak{p} = \bigoplus B/\mathfrak{q}_i.\quad (\ast)$$
Let $g \in B$.
Then $\mathrm{Tr}(g)$, evaluated at the point $x$, is the same as the trace of the map $\bar{g} :B \otimes_A A/\mathfrak{p} \to B \otimes_A A/\mathfrak{p}$ induced by multiplication by $g$. Under the isomorphism $(\ast)$, $\bar{g}$ acts on $B/\mathfrak{q}_i$ by $g(y_i)$. In particular, this action is diagonal! So, for $x$ as above, we have
$$\mathrm{Tr}(g)(x) = \sum g(y_i).$$
The nifty fact is that this extends to a regular function on the locus where $\phi$ is branched.

- To restate this in even simpler terms and expand slightly: suppose $\phi$ is surjective. In general a function $f$ on $Y$ will not descend to a function on $X$, but we can try to get it to do so. One method would be to take $f$ and produce a function $F$ by summing the value of $f$ across fibers above the points of $X$. This certainly ought to be linear over the functions pulled back from $X$. Defining the trace makes this process rigorous. If we'd prefer playing well with multiplicative structure to linearity, we could multiply across fibers - in this case the rigorous analogue is the norm – Sarah Griffith Jan 26 at 6:24
- All of this is pretty metaphorical, at least at my level of understanding, particularly if you want to leave the context of varieties over an algebraically closed field. For example, if $k \hookrightarrow K$ is a finite field extension, the associated morphism of schemes doesn't have interesting fibers. But if you think of $k$ as morally a function field of some $X$ and $K$ as the function field of a (possibly ramified) finite covering space $Y \to X$ then it all ties together rather nicely – Sarah Griffith Jan 26 at 6:35
- @SarahGriffith Thanks, that is a very helpful rewriting! Regarding your comments on non-algebraically closed fields, this is a situation where I find it much easier to think in terms of the functor of points. Let $k \hookrightarrow K$ be a finite field extension of degree $d$ and let $\bar{k}$ be an algebraic closure of $k$. Then $(\mathrm{Spec}\ K)(\bar{k})$ is $d$ points. The field $K$ can be identified with certain $K$-valued functions on those points, and the trace $K \to k$ is summing up the values on the $d$ points. – David E Speyer Jan 26 at 13:21

I think I can partly answer this question (but only in a restricted setting: when $A$ is a field and $B$ is a central simple $A$-algebra with involution $\sigma$. My suspicion is that if you want $A$ to be an arbitrary noetherian commutative ring, then you will be generalizing the below from algebraic groups to group schemes).

When $A$ is a field and $B$ a CSA, there's a simpler definition of the trace map: over some field extension $F/A$, the base change $B_F$ is a matrix algebra, and the trace map can be defined as taking the composition $B\rightarrow B_F\rightarrow F$ where the last arrow is the usual trace map; the image lands in $A$ because the characteristic polynomial of an element $b\in B$ has coefficients in $A$. From now on I'll write $(B,\sigma)$ for the central simple algebra $B$ with involution. A lot of effort goes into proving the statements of the next few paragraphs (a reference for this is The Book of Involutions by Knus, Merkurjev, Rost, and Tignol).

Let $M$ be a finitely generated $B$-module, and $h$ an hermitian form on $M$. The algebra $E=\text{End}_B(M)$ is Brauer-equivalent to $B$, and there's an involution $\sigma_h$ on $E$ such that $\sigma_h|_A=\sigma|_A$ and is defined as the adjoint
$$h(x,f(y))=h(\sigma_h(f)(x),y)$$
for any $x,y\in M$.
Moreover, this association is a one-to-one correspondence between nonsingular hermitian forms (up to a scaling-factor of $A^\times$) and involutions on $B$.

From here one can define similitudes. If $D$ is a division algebra (finite dimensional over $A$) with involution, and $V$ a vector space over $D$ with hermitian form $h$ (with respect to the involution on $D$), $(V,h)$, then $\text{Sim}(V,h)$ are those linear maps $g:V\rightarrow V$ which preserve $h$ up to a scalar multiple. One can similarly define the similitudes of the corresponding central simple algebra with involution $(\text{End}_D(V),\sigma_h)$ as the set $\text{Sim}(\text{End}_D(V),\sigma_h)$ whose elements are $g\in \text{End}_D(V)$ such that $\sigma_h(g)g\in A^\times$. These happen to be equal.

The usefulness of these constructions comes when you take the functor of points pov for algebraic groups (or group schemes). Then, one can show that there is an algebraic group $G$ over $A$ whose $F$-points (for a field extension $F/A$) are exactly groups defined using these similitudes. That's purposefully vague since it doesn't answer your question but only gives motivation (maybe). An example would be the split case of a matrix algebra $M_{n,n}(A)$ with involution $\tau$ the transposition of matrices. Then there is an algebraic group $G$ over $A$ with $G(F)=\text{Sim}(M_{n,n}(A_F),\tau_F)$ and it is just the orthogonal group. The benefit of going this way however is to consider twisted forms of such groups.

Now to the trace map. From the csa with involution $(B,\sigma)$ and the trace map $Trd:B\rightarrow A$ defined above (the Trd is because this is usually called the reduced trace map), one can define a bilinear form $T_{(B,\sigma)}:B\times B\rightarrow A$ by $T_{(B,\sigma)}(x,y)=Trd(\sigma(x)y)$. If $\sigma$ is particularly nice (which here means it's of the first kind, I didn't really go over what this means but essentially it corresponds to either an orthogonal or symplectic group and not a unitary group), then there is the following theorem:

There is an isomorphism $\sigma_*:B\otimes {^iB}\xrightarrow{\sim} \text{End}_A(B)$ such that $\sigma_*(a\otimes b)(x)=ax\sigma(b)$.

Here $^iB$ is the same set as $B$ but it has a different algebra structure (defined so that there is a corresponding involution on the other side $^i\sigma$) and so that the tensor product $B\otimes {^iB}$ has a canonical involution $\sigma\otimes {^i\sigma}$. Finally, by the correspondence above, $\sigma\otimes {^i\sigma}$ is the involution corresponding to the trace $T_{(B,\sigma)}$.

So:

1) I'm not sure if there are constructions which you should think about when seeing this but, my first thought was the trace maps defined in the theory of algebras with involution (only a very small portion of which was mentioned above; the whole theory uses trace maps everywhere).

2) I don't know how to generalize this categorically. But, it seems a more general approach would be through Azumaya algebras (which are generalizations of central simple algebras to the case where the base is a ring and not a field). Then your assumptions would read: $A$ is a commutative ring, $B$ is an $A$-algebra finitely generated over $A$ and projective as an $A$-module, and the canonical morphism from $B\otimes B^{op}\rightarrow \text{End}_A(B)$ is an isomorphism. Then you can at least see where the trace map (on the right) goes on the left (which is called the Goldman element).
I would hope you could get more detail and more mileage by considering some kind of involution on an Azumaya algebra but I've never seen it considered.

3) An étale algebra is really similar to a central simple algebra. Both are twisted forms of something. Really an étale algebra is a twisted form of products of central simple algebras. This actually fits into the above theory and is discussed throughout the Book of Involutions. (When there is an involution of the second kind on an algebra $B$ -- corresponding to a unitary group -- then the center of $B$ is actually a quadratic étale extension of $A$.)

If you tensor the last isomorphism you found $B\xrightarrow{\sim} \text{Hom}_A(B,A)$ by $B$ then it is canonically isomorphic with the map $B\otimes B\xrightarrow{\sim} \text{Hom}_A(B,B)=\text{End}_A(B)$. Of course, all the standard constructions like norm too will play a role and can be defined in appropriate settings.

Even more importantly, trace makes sense sometimes even when $B$ is not flat. For example, assume $A, B$ are domains; then you do have a trace from $B$ to the fraction field of $A$, using standard field theory arguments. This in fact will be a map to $A$ if $A$ is integrally closed, no flatness is used.
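Going back to the geometric answer: a small sympy sketch of the fiber-sum formula $\mathrm{Tr}(g)(x)=\sum g(y_i)$ for the hypothetical double cover $y\mapsto y^2$ (so $B = k[y]$ over $A = k[x]$ with $x = y^2$); this is my own illustration, not taken from any of the answers:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
# B = k[y] is free over A = k[x] with basis {1, y}.  Multiplication by
# g = a + b*y sends 1 -> a*1 + b*y and y -> b*x*1 + a*y, so its matrix is:
M = sp.Matrix([[a, b * x],
               [b, a]])
print(M.trace())                              # 2*a = Tr_{B/A}(g)
# fiber over x: the two points y = sqrt(x) and y = -sqrt(x); sum g over them
fiber_sum = (a + b * sp.sqrt(x)) + (a - b * sp.sqrt(x))
print(sp.simplify(fiber_sum))                 # 2*a, matching the matrix trace
```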
2020-10-20T01:10:20
{ "domain": "mathoverflow.net", "url": "https://mathoverflow.net/questions/278291/what-are-traces", "openwebmath_score": 0.9618522524833679, "openwebmath_perplexity": 131.5387352205324, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639636617015, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132505500339 }
https://www.physicsforums.com/threads/hermite-polynomials.235638/
# Hermite Polynomials.

Gold Member

## Main Question or Discussion Point

I need to show that:
$$\sum_{n=0}^{\infty}\frac{H_n(x)}{n!}y^n=e^{-y^2+2xy}$$
where $H_n(x)$ is the Hermite polynomial. Now I tried the next expansion:
$$e^{-y^2}e^{2xy}=\sum_{n=0}^{\infty}\frac{(-y)^{2n}}{n!}\cdot \sum_{k=0}^{\infty}\frac{(2xy)^k}{k!}$$
after some simple algebraic rearrangements I got:
$$\sum_{n=0}^{\infty}(2x-y)^n\frac{y^n}{n!}$$
which looks similar to what I need to show. The problem is that the polynomial $(2x-y)^n$ satisfies only the condition $H_n'(x)=2nH_{n-1}(x)$ and not the other two conditions, so I guess something is missing. Can anyone help me on this?

Last edited:

$$e^{-y^2}=\sum_{n=0}^{\infty}\frac{(-y)^{2n}}{n!}$$
Is this correct? I'm inclined to think that:
$$e^{-y^2}=\sum_{n=0}^{\infty}\frac{(-y^2)^{n}}{n!}$$
$$=\sum_{n=0}^{\infty}\frac{(-1)^n y^{2n}}{n!}.$$

Gold Member

What two other conditions do these polynomials have to satisfy?

Gold Member

$H_n''-2xH_n'+2nH_n=0$
$H_{n+1}-2xH_n+2nH_{n-1}=0$
according to this exercise.

Well, I looked at wikipedia, and I guess I only need to use the first definition given at wikipedia.
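A numeric spot-check of the target identity (my own sketch; numpy's numpy.polynomial.hermite module implements the physicists' $H_n$, which is the convention with generating function $e^{2xy-y^2}$):

```python
import math
from numpy.polynomial import hermite as H

x, y = 0.7, 0.3
# sum_n H_n(x) y^n / n!  versus  exp(-y^2 + 2xy)
lhs = sum(H.hermval(x, [0] * n + [1]) * y**n / math.factorial(n) for n in range(30))
rhs = math.exp(-y**2 + 2 * x * y)
print(lhs, rhs)   # the two values agree to machine precision
```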
2020-02-19T04:17:41
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/hermite-polynomials.235638/", "openwebmath_score": 0.8275939226150513, "openwebmath_perplexity": 1370.0590653992488, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639636617014, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132505500338 }
https://en.wikipedia.org/wiki/Arc_diagram
# Arc diagram

An arc diagram of the Goldner–Harary graph. The red dashed line segment shows where this graph was subdivided to make it Hamiltonian.

In graph drawing, an arc diagram is a style of graph drawing, in which the vertices of a graph are placed along a line in the Euclidean plane, with edges being drawn as semicircles in one of the two halfplanes bounded by the line, or as smooth curves formed by sequences of semicircles. In some cases, line segments of the line itself are also allowed as edges, as long as they connect only vertices that are consecutive along the line.

The use of the phrase "arc diagram" for this kind of drawing follows the use of a similar type of diagram by Wattenberg (2002) to visualize the repetition patterns in strings, by using arcs to connect pairs of equal substrings. However, this style of graph drawing is much older than its name, dating back to the work of Saaty (1964) and Nicholson (1968), who used arc diagrams to study crossing numbers of graphs. An older but less frequently used name for arc diagrams is linear embeddings.[1]

Heer, Bostock & Ogievetsky (2010) write that arc diagrams "may not convey the overall structure of the graph as effectively as a two-dimensional layout", but that their layout makes it easy to display multivariate data associated with the vertices of the graph.

## Planar graphs

As Nicholson (1968) observed, every embedding of a graph in the plane may be deformed into an arc diagram, without changing its number of crossings. In particular, every planar graph has a planar arc diagram. However, this embedding may need to use more than one semicircle for some of its edges. If a graph is drawn without crossings using an arc diagram in which each edge is a single semicircle, then the drawing is a two-page book embedding, something that is only possible for the subhamiltonian graphs, a proper subset of the planar graphs.[2] For instance, a maximal planar graph has such an embedding if and only if it contains a Hamiltonian cycle. Therefore, a non-Hamiltonian maximal planar graph such as the Goldner–Harary graph cannot have a planar embedding with one semicircle per edge. Testing whether a given graph has a crossing-free arc diagram of this type (or equivalently, whether it has pagenumber two) is NP-complete.[3]

However, every planar graph has an arc diagram in which each edge is drawn as a biarc with at most two semicircles. More strongly, every st-planar directed graph (a planar directed acyclic graph with a single source and a single sink, both on the outer face) has an arc diagram in which every edge forms a monotonic curve, with these curves all consistently oriented from one end of the vertex line towards the other.[4] For undirected planar graphs, one way to construct an arc diagram with at most two semicircles per edge is to subdivide the graph and add extra edges so that the resulting graph has a Hamiltonian cycle (and so that each edge is subdivided at most once), and to use the ordering of the vertices on the Hamiltonian cycle as the ordering along the line.[5]

## Minimizing crossings

Because it is NP-complete to test whether a given graph has an arc diagram with one semicircle per edge and no crossings, it is also NP-hard to find an arc diagram of this type that minimizes the number of crossings.
This crossing minimization problem remains NP-hard, for non-planar graphs, even if the ordering of the vertices along the line is fixed.[1] However, in the fixed-ordering case, an embedding without crossings (if one exists) may be found in polynomial time by translating the problem into a 2-satisfiability problem, in which the variables represent the placement of each arc and the constraints prevent crossing arcs from being placed on the same side of the vertex line.[6] Additionally, in the fixed-ordering case, a crossing-minimizing embedding may be approximated by solving a maximum cut problem in an auxiliary graph that represents the semicircles and their potential crossings (or equivalently, by approximating the MAX2SAT version of the 2-satisfiability instance).[7]

Cimikowski & Shope (1996), Cimikowski (2002), and He, Sýkora & Vrt'o (2005) discuss heuristics for finding arc diagrams with few crossings.

## Clockwise orientation

For drawings of directed graphs, a common convention is to draw each arc in a clockwise direction, so that arcs that are directed from an earlier to a later vertex in the sequence are drawn above the vertex line, and arcs directed from a later to an earlier vertex are drawn below the line. This clockwise orientation convention was developed as part of a different graph drawing style by Fekete et al. (2003), and applied to arc diagrams by Pretorius & van Wijk (2007).

## Other uses

Arc diagrams were used by Brandes (1999) to visualize the state diagram of a shift register, and by Djidjev & Vrt'o (2002) to show that the crossing number of every graph is at least quadratic in its cutwidth.

## Notes

1. ^ a b
2. ^ The application of semicircles to edge layout in book embeddings was already made by Bernhart & Kainen (1979), but the explicit connection of arc diagrams with two-page book embeddings seems to be due to Masuda et al. (1990).
3. ^
4. ^
5. ^
6. ^
7. ^
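For the fixed-ordering case above, the 2-SAT instance with only "not on the same side" constraints amounts to 2-coloring a conflict graph on the arcs: two single-semicircle arcs placed on the same side cross exactly when their endpoints strictly interleave. A sketch (the function name is mine):

```python
from itertools import combinations
from collections import deque

def crossing_free_sides(edges):
    """Assign each edge (i, j), with the vertex order fixed, to the upper (0)
    or lower (1) halfplane with no crossings; return None if impossible."""
    def interleave(e, f):
        (a, b), (c, d) = sorted(e), sorted(f)
        return a < c < b < d or c < a < d < b

    m = len(edges)
    adj = [[] for _ in range(m)]
    for u, v in combinations(range(m), 2):
        if interleave(edges[u], edges[v]):    # same-side placement would cross
            adj[u].append(v)
            adj[v].append(u)

    side = [None] * m
    for s in range(m):                        # 2-color each component by BFS
        if side[s] is not None:
            continue
        side[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if side[v] is None:
                    side[v] = 1 - side[u]
                    queue.append(v)
                elif side[v] == side[u]:
                    return None               # odd conflict cycle: no 2-page drawing
    return side

print(crossing_free_sides([(0, 2), (1, 3), (0, 3)]))  # e.g. [0, 1, 0]
```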
2021-06-16T09:21:47
{ "domain": "wikipedia.org", "url": "https://en.wikipedia.org/wiki/Arc_diagram", "openwebmath_score": 0.806196928024292, "openwebmath_perplexity": 385.71421362904005, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639636617014, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132505500338 }
http://bdigital.unal.edu.co/32351/
Powers of two in generalized Fibonacci sequences

Bravo, Jhon J. and Luca, Florian (2012) Powers of two in generalized Fibonacci sequences. Revista Colombiana de Matemáticas, Vol. 46, No. 1 (2012), pp. 67-79. ISSN 0034-7426.

Abstract

The $k-$generalized Fibonacci sequence $\big(F_{n}^{(k)}\big)_{n}$ resembles the Fibonacci sequence in that it starts with $0,\ldots,0,1$ ($k$ terms) and each term afterwards is the sum of the $k$ preceding terms. In this paper, we are interested in finding powers of two that appear in $k-$generalized Fibonacci sequences; i.e., we study the Diophantine equation $F_n^{(k)}=2^m$ in positive integers $n,k,m$ with $k\geq 2$.

Document type: Article
Keywords: Fibonacci numbers, Lower bounds for nonzero linear forms in logarithms of algebraic numbers, 11B39, 11J86
ID code: 32351
Submitted by: Dirección Nacional de Bibliotecas STECNICO
Submitted on: 01 July 2014 00:13
Last modified: 06 June 2018 10:58
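A brute-force sketch of the Diophantine question for small parameters (my own illustration; the paper's actual method relies on lower bounds for linear forms in logarithms, not on search):

```python
def k_fibonacci(k, count):
    seq = [0] * (k - 1) + [1]          # k-1 zeros followed by a one
    while len(seq) < count:
        seq.append(sum(seq[-k:]))      # each term is the sum of the k before it
    return seq

for k in range(2, 6):
    terms = k_fibonacci(k, 40)
    powers = sorted({t for t in terms if t > 0 and t & (t - 1) == 0})
    print(k, powers)                   # powers of two among the first 40 terms
```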
2020-01-18T12:37:14
{ "domain": "edu.co", "url": "http://bdigital.unal.edu.co/32351/", "openwebmath_score": 0.8097476959228516, "openwebmath_perplexity": 6125.588299542557, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639718953155, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132497069456 }
https://www.emathhelp.net/calculators/calculus-1/domain-and-range-calculator/
# Domain and Range Calculator

The calculator will find the domain and range of a single-variable function.

Your input: find the domain and range of $f=\frac{1}{x - 1}$

## Domain

$\left(-\infty, 1\right) \cup \left(1, \infty\right)$

## Range

$\left(-\infty, 0\right) \cup \left(0, \infty\right)$
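A sketch of the same computation in sympy, assuming its sympy.calculus.util helpers (continuous_domain and function_range); the exact printed set notation may vary by version:

```python
import sympy as sp
from sympy.calculus.util import continuous_domain, function_range

x = sp.symbols('x')
f = 1 / (x - 1)
print(continuous_domain(f, x, sp.S.Reals))  # expected: (-oo, 1) U (1, oo)
print(function_range(f, x, sp.S.Reals))     # expected: (-oo, 0) U (0, oo)
```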
2022-07-01T04:51:06
{ "domain": "emathhelp.net", "url": "https://www.emathhelp.net/calculators/calculus-1/domain-and-range-calculator/", "openwebmath_score": 0.6248582005500793, "openwebmath_perplexity": 2437.0345173368696, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639611916168, "lm_q2_score": 0.672331699179286, "lm_q1q2_score": 0.6532132488893176 }
https://socratic.org/calculus/derivatives/tangent-line-to-a-curve
# Tangent Line to a Curve

## Key Questions

• The "tangent slope" is the slope of the tangent line. It is also called "the slope of the tangent" and "the slope of the curve at a point".

• You could use infinitesimals...

#### Explanation:

The slope of the tangent line is the instantaneous slope of the curve. So if we increase the value of the argument of a function by an infinitesimal amount, then the resulting change in the value of the function, divided by the infinitesimal, will give the slope (modulo taking the standard part by discarding any remaining infinitesimals).

For example, suppose we want to find the tangent to $f \left(x\right)$ at $x = 2$, where:

$f \left(x\right) = {x}^{3} - 3 {x}^{2} + x + 5$

Let $\epsilon > 0$ be an infinitesimal value. Then:

$\frac{f \left(2 + \epsilon\right) - f \left(2\right)}{\epsilon}$

$= \frac{\left({\left(2 + \epsilon\right)}^{3} - 3 {\left(2 + \epsilon\right)}^{2} + \left(2 + \epsilon\right) + 5\right) - \left({\left(2\right)}^{3} - 3 {\left(2\right)}^{2} + \left(2\right) + 5\right)}{\epsilon}$

$= \frac{\left(\left(8 + 12 \epsilon + 6 {\epsilon}^{2} + {\epsilon}^{3}\right) - 3 \left(4 + 4 \epsilon + {\epsilon}^{2}\right) + \left(2 + \epsilon\right) + 5\right) - \left(8 - 12 + 2 + 5\right)}{\epsilon}$

$= \frac{\left(12 \epsilon + 6 {\epsilon}^{2} + {\epsilon}^{3}\right) - \left(12 \epsilon + 3 {\epsilon}^{2}\right) + \epsilon}{\epsilon}$

$= \frac{\epsilon + 3 {\epsilon}^{2} + {\epsilon}^{3}}{\epsilon}$

$= 1 + 3 \epsilon + {\epsilon}^{2}$

of which the standard (i.e. finite) part is $1$ (discarding the $3 \epsilon + {\epsilon}^{2}$). So the slope of the tangent is $1$ and the tangent point is:

$\left(2 , f \left(2\right)\right) = \left(2 , 3\right)$

So the equation of the tangent may be written:

$\left(y - 3\right) = 1 \left(x - 2\right)$

or more simply:

$y = x + 1$

(The original page plots the cubic curve together with the tangent line $y = x + 1$.)

• The tangent line to a curve at a given point is a straight line that just "touches" the curve at that point. So if the function is f(x) and if the tangent "touches" its curve at x=c, then the tangent will pass through the point (c,f(c)). The slope of this tangent line is f'(c) (the derivative of the function f(x) at x=c).

A secant line is one which intersects a curve at two points. Click this link for a detailed explanation on how calculus uses the properties of these two lines to define the derivative of a function at a point.
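The same answer via the usual derivative instead of infinitesimals, as a short sympy sketch (mine, not from the page):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3 * x**2 + x + 5
slope = sp.diff(f, x).subs(x, 2)            # instantaneous slope at x = 2
tangent = f.subs(x, 2) + slope * (x - 2)    # point-slope form through (2, f(2))
print(slope, sp.expand(tangent))            # 1, x + 1
```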
2020-02-23T02:26:45
{ "domain": "socratic.org", "url": "https://socratic.org/calculus/derivatives/tangent-line-to-a-curve", "openwebmath_score": 0.8842041492462158, "openwebmath_perplexity": 246.44023713514585, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639702485928, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132485998018 }
http://mathhelpforum.com/algebra/184698-masked-algebric-inequality.html
Math Help - Masked algebraic inequality

Prove that $\sqrt{a^2-ab+b^2}+\sqrt{b^2-bc+c^2} >\sqrt{a^2+ac+c^2}$, if a,b,c>0.

Originally Posted by TheodorMunteanu
Prove that $\sqrt{a^2-ab+b^2}+\sqrt{b^2-bc+c^2} >\sqrt{a^2+ac+c^2}$, if a,b,c>0.

I'm not so sure, here it goes... I used the Inequality of arithmetic and geometric means - Wikipedia, the free encyclopedia

$\sqrt{a^2-ab+b^2}+\sqrt{b^2-bc+c^2} >\sqrt{a^2+ac+c^2}$

$\sqrt{a^2-ab+b^2}+\sqrt{b^2-bc+c^2} \geq 2\sqrt{\sqrt{(a^2-ab+b^2)(b^2-bc+c^2)}}\geq 2\sqrt{\sqrt{(a^2-2ab+b^2)(b^2-2bc+c^2)}}=2\sqrt{\sqrt{(a-b)^2(b-c)^2}}=2\sqrt{(a-b)(b-c)}>2\sqrt{\frac{(a+c)^2}{4}}=\sqrt{(a+c)^2}>\sqrt {a^2+ac+c^2}$

You could have also substituted a, b, and c with any number greater than 0, and seen if the statement returned true. Whoops, sorry, remove this - what I meant was: substitute 1 for all three variables and see if the statement is true.

$\sqrt{1^2-1*1+1^2}+\sqrt{1^2-1*1+1^2} >\sqrt{1^2+1*1+1^2}$

Although I could be completely wrong, so sorry if I am. Please correct.

I was thinking that if we take OA=a, OB=b, OC=c with $\angle AOB=60,\angle BOC=60\Rightarrow AB=\sqrt{a^2-ab+b^2},BC=\sqrt{b^2-bc+c^2},AC=\sqrt{a^2+ac+c^2}$ and using the triangle inequality in ABC we get the conclusion.

Originally Posted by Auri
You could have also substituted a, b, and c with any number greater than 0, and seen if the statement returned true. [...]

You proved that it is true for a=b=c=1, but you were asked to show that the inequality is true for all real positive numbers a,b,c.

------------------
Now I see that I used the condition a>b>c>0 (I am assuming that it is allowed... hmmm...)

Also try to prove that $\sqrt{x^2+y^2-\sqrt{3}xy}+\sqrt{y^2+z^2-yz} >\sqrt{x^2+z^2}$
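A numerical spot-check of the inequality (not a proof; my own sketch). The geometric post works because the law of cosines with $\angle AOB=\angle BOC=60^\circ$ gives $AB^2=a^2+b^2-ab$ and $BC^2=b^2+c^2-bc$, while $\angle AOC=120^\circ$ gives $AC^2=a^2+c^2+ac$:

```python
import math
import random

for _ in range(100000):
    a, b, c = (random.uniform(1e-3, 100) for _ in range(3))
    lhs = math.sqrt(a*a - a*b + b*b) + math.sqrt(b*b - b*c + c*c)
    rhs = math.sqrt(a*a + a*c + c*c)
    assert lhs > rhs, (a, b, c)
print("no counterexample found")
```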
2016-06-25T14:54:06
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/184698-masked-algebric-inequality.html", "openwebmath_score": 0.8909898400306702, "openwebmath_perplexity": 716.0312735438018, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639702485929, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132485998018 }
http://www.talkstats.com/threads/probability-of-most-expensive-item-automatically-being-purchased.64298/
# Probability of most expensive item automatically being purchased

#### DoubleoJosh

##### New Member

Hi all, many thanks for any input on this perplexing problem:

A sporting goods store offers a coupon as follows: free soccer ball OR free T-shirt OR free hat. Their POS system automatically credits the highest purchase-priced item to the individual if it is in their shopping basket. So if the

Hat costs \$10
T-shirt costs \$12
Soccer ball costs \$15

then a person who purchases a T-shirt AND a soccer ball will receive the soccer ball free, while someone who purchases a hat and a T-shirt would receive the T-shirt free, and so forth.

Let's say the

Hat is included in 5% of all orders
T-shirt is included in 7% of all orders
Soccer ball is included in 10% of all orders

How do we determine the probability of any one of these items being credited to the user at checkout?

My theory is the answer would be:

P(Hat Discounted) = P(Hat) * (1 - P(T-Shirt)) * (1 - P(SB))

(Since either of the more expensive items being added would supersede the hat and cancel its discount.)

P(T-Shirt Discounted) = P(T-shirt) * (1 - P(SB))

(Since the presence of the hat would not impact the T-shirt discount, given the hat is cheaper.)

Am I missing something? Should I be subtracting the probability of both other items or something? I think I have the answer, but don't have the privilege of being sure until it's too late. Many thanks!
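A Monte Carlo sketch of the poster's theory, under the (unstated) assumption that the three items appear in a basket independently:

```python
import random

p = {"hat": 0.05, "shirt": 0.07, "ball": 0.10}   # in order of increasing price
trials = 200_000
credited = {k: 0 for k in p}
for _ in range(trials):
    basket = {k: random.random() < p[k] for k in p}
    if basket["ball"]:
        credited["ball"] += 1        # most expensive item present wins the credit
    elif basket["shirt"]:
        credited["shirt"] += 1
    elif basket["hat"]:
        credited["hat"] += 1

for k in credited:
    print(k, credited[k] / trials)
# expected under independence:
#   ball  = 0.10
#   shirt = 0.07 * 0.90        = 0.063
#   hat   = 0.05 * 0.93 * 0.90 = 0.04185
```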
2019-08-17T21:00:05
{ "domain": "talkstats.com", "url": "http://www.talkstats.com/threads/probability-of-most-expensive-item-automatically-being-purchased.64298/", "openwebmath_score": 0.17184695601463318, "openwebmath_perplexity": 2394.561992296598, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639694252315, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132480462299 }
https://gmatclub.com/forum/if-x-and-y-are-positive-integers-is-4x-7y-291631.html
# If x and y are positive integers, is $$4^x – 7^y < 0$$?

Manager
Status: Gathering chakra
Joined: 05 Feb 2018
Posts: 189

24 Mar 2019, 13:07

If x and y are positive integers, is $$4^x – 7^y < 0$$?

(1) $$16^x + 1 < 49^y$$

(2) $$x > y$$

Math Expert
Joined: 02 Aug 2009
Posts: 7584

Re: If x and y are positive integers, is $$4^x – 7^y < 0$$?

24 Mar 2019, 19:58

If x and y are positive integers, is $$4^x – 7^y < 0$$?

(1) $$16^x + 1 < 49^y$$

Let us modify the statement: $$16^x + 1 < 49^y \implies 16^x-49^y<-1 \implies 4^{2x}-7^{2y}<-1 \implies (4^x-7^y)(4^x+7^y)<-1$$

Now, x and y are positive, so $$4^x+7^y>0$$. This means $$4^x-7^y<0$$. Sufficient.

(2) $$x > y$$

x is 100 and y is 1: the answer is NO.
x is 100 and y is 99: the answer is YES.
Insufficient.

A
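A brute-force consistency check of statement (1) over a small grid (a sketch; the factoring argument above is what actually proves sufficiency):

```python
# 16^x + 1 < 49^y forces (4^x)^2 < (7^y)^2, hence 4^x - 7^y < 0
for x in range(1, 30):
    for y in range(1, 30):
        if 16**x + 1 < 49**y:
            assert 4**x - 7**y < 0, (x, y)
print("consistent with sufficiency of (1)")
```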
2019-04-24T20:49:00
{ "domain": "gmatclub.com", "url": "https://gmatclub.com/forum/if-x-and-y-are-positive-integers-is-4x-7y-291631.html", "openwebmath_score": 0.6417821645736694, "openwebmath_perplexity": 4111.103305351112, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639694252315, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132480462299 }
https://math.stackexchange.com/questions/2432282/does-improper-integral-of-continuous-non-zero-and-periodic-function-diverge
# Does Improper integral of continuous, non zero and periodic function diverge?

Let $f:\mathbb R\to\mathbb R$ be a continuous, non zero and periodic function. Does $\int_0^\infty f(x)dx$ diverge?

I think that the answer is yes. My thought was that if we assume $f$ has, in each period, a point $x_0$ at which $f(x_0)\neq0$, then by continuity we must have a closed interval around $x_0$ on which the integral can't be zero. And by summing all those non-zero integrals over the periods we get that the integral diverges.

I have two problems with this thought:

1. What about periodic functions such as $\sin(x)$ which satisfy $\int_a^{a+T}f(x)dx=0$? I kind of ignored them.

2. I tried to write a proof and got stuck showing that continuity ensures that the integral over the closed interval around $x_0$ is non-zero.

- $\int_0^a \sin(x)dx = 1- \cos(a)$ and $\lim_{a \to \infty}1- \cos(a)$ doesn't converge. But $1- \cos(a)$ is bounded (and periodic) precisely because the mean value of $\sin(x)$ is $0$. – reuns Sep 16 '17 at 21:47

As you noticed, we can find $x_0$ with $f(x_0)\ne 0$, wlog. $f(x_0)>0$, and then an interval $[x_0-\epsilon,x_0+\epsilon]$ where $f(x)>\frac 12f(x_0)$. It follows that
$$\int_0^{nT+x_0+\epsilon}f(x)\,\mathrm dx - \int_0^{nT+x_0-\epsilon}f(x)\,\mathrm dx \ge 2\epsilon \cdot \frac12 f(x_0)=:c>0$$
whereas convergence of the improper integral would require that
$$\left|\int_0^{x_1}f(x)\,\mathrm dx-\int_0^{x_2}f(x)\,\mathrm dx\right|<c$$
for all sufficiently large $x_1$ and $x_2$.
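A quick numeric illustration of the two failure modes (my sketch): with nonzero mean the partial integrals grow without bound, while with zero mean, as for $\sin$, they stay bounded but oscillate, so the improper integral still fails to converge:

```python
import math

# f(x) = 2 + sin(x): continuous, periodic, never zero
partial = lambda t: 2 * t + 1 - math.cos(t)   # exact value of the integral on [0, t]
for t in (10, 100, 1000):
    print(t, partial(t))                      # grows like 2t: divergence

# f(x) = sin(x): partial integrals 1 - cos(t) stay in [0, 2] but never settle
for t in (10, 100, 1000):
    print(t, 1 - math.cos(t))
```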
2019-09-20T20:24:45
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2432282/does-improper-integral-of-continuous-non-zero-and-periodic-function-diverge", "openwebmath_score": 0.983338475227356, "openwebmath_perplexity": 185.82364219923693, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639694252316, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132480462299 }
https://math.stackexchange.com/questions/2962478/trace-inequality-for-a-product-of-p-s-d-matrices-and-their-pseudo-inverse
# Trace inequality for a product of p.s.d. matrices and their pseudo inverse.

Let $$A, B_i$$ be positive semidefinite real matrices. Let $$\dagger$$ stand for the Moore-Penrose generalized inverse. I managed to prove that if $$\operatorname{Ran}B_1\subseteq\operatorname{Ker}B_2$$ then

$$\operatorname{trace}\left((A + B_1 + B_2)^\dagger B_1 \right) \leq\operatorname{trace}\left(( A + B_1)^\dagger B_1\right)$$

Does it still hold without this assumption?

Yes. In general, if $$X,Y$$ are positive semidefinite and $$P=XX^\dagger$$ denotes the orthogonal projection onto $$\operatorname{ran}(X)$$, then $$P(X+Y)^\dagger P\preceq X^\dagger$$. This can be easily proved by using Schur complements. Now, if you put $$X=A+B_1$$ and $$Y=B_2$$, you get
$$B_1^{1/2}P(A+B_1+B_2)^\dagger PB_1^{1/2}\preceq B_1^{1/2}(A+B_1)^\dagger B_1^{1/2}.$$
Since $$B_1^{1/2}P=PB_1^{1/2}=B_1^{1/2}$$, the result follows.

Edit. If $$X=0$$, the inequality $$P(X+Y)^\dagger P\preceq X^\dagger$$ simply means $$0\preceq0$$. Suppose $$X$$ is PSD but nonzero. Since $$\operatorname{ran}(X)\subseteq\operatorname{ran}(X+Y)$$, by a change of orthonormal basis, we may assume that
$$X=\pmatrix{X_1&0&0\\ 0&0&0\\ 0&0&0},\ P=\pmatrix{I&0&0\\ 0&0&0\\ 0&0&0},\ Y=\pmatrix{H&R&0\\ R^T&S&0\\ 0&0&0},\ X+Y=\pmatrix{X_1+H&R&0\\ R^T&S&0\\ 0&0&0}$$
where $$X_1$$ and $$Z:=\pmatrix{X_1+H&R\\ R^T&S}$$ are the matrix representations of $$X|_{\operatorname{ran}(X)}$$ and $$(X+Y)|_{\operatorname{ran}(X+Y)}$$ respectively and they are positive definite. As $$Z\succ0$$, we must have $$S\succ0$$. Yet $$Y\succeq0$$ by assumption. Therefore the Schur complement $$H-RS^{-1}R^T$$ must be $$\succeq0$$. It follows that $$X_1+H-RS^{-1}R^T\succeq X_1\succ0$$ and in turn, since matrix inversion reverses the order on positive definite matrices, $$0\prec(X_1+H-RS^{-1}R^T)^{-1}\preceq X_1^{-1}$$. But this means $$P(X+Y)^\dagger P\preceq X^\dagger$$, because
$$X^\dagger=\pmatrix{X_1^{-1}&0&0\\ 0&0&0\\ 0&0&0},\ (X+Y)^\dagger=\pmatrix{(X_1+H-RS^{-1}R^T)^{-1}&\ast&0\\ \ast&\ast&0\\ 0&0&0}.$$

- Could you please expand a bit about the Schur complements demonstration? Or provide some reference. Many thanks. – Manuel Oct 19 '18 at 20:54
- @Manuel See my edit. – user1551 Oct 20 '18 at 5:40
- I have started a bounty as an appreciation of this answer. I also believe it should receive more attention. Thanks a lot. I would accept in a few days. – Manuel Oct 22 '18 at 13:17
- @Manuel Thanks for your bounty! – user1551 Oct 23 '18 at 18:05
- Thank you for a very useful well explained answer. – Manuel Oct 23 '18 at 18:25
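A random-sample check of the trace inequality with numpy's pinv (a sanity sketch, not a proof; for generic samples these matrices are in fact positive definite, so pinv coincides with the ordinary inverse):

```python
import numpy as np

rng = np.random.default_rng(0)

def psd(n):
    m = rng.standard_normal((n, n))
    return m @ m.T                     # random positive semidefinite matrix

for _ in range(1000):
    A, B1, B2 = psd(4), psd(4), psd(4)
    lhs = np.trace(np.linalg.pinv(A + B1 + B2) @ B1)
    rhs = np.trace(np.linalg.pinv(A + B1) @ B1)
    assert lhs <= rhs + 1e-8, (lhs, rhs)
print("inequality held on all samples")
```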
2019-05-24T08:48:14
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2962478/trace-inequality-for-a-product-of-p-s-d-matrices-and-their-pseudo-inverse", "openwebmath_score": 0.937405526638031, "openwebmath_perplexity": 139.8045640151906, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639694252315, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132480462299 }
http://mathhelpforum.com/advanced-algebra/134182-inner-product-space-proof-question-print.html
# Inner product space proof question

• March 16th 2010, 07:27 PM
firebio

Inner product space proof question

Prove that {u,v}=0 for all $v \in V$ iff u=0.

{u,v} = $\Sigma$ u*v, where u* is the conjugate of u.

If u=0 then {u,v} is obviously 0.

Now I'm not sure how to prove it the other way: if {u,v}=0 then u=0, i.e. $\Sigma$ u*v=0...

• March 16th 2010, 07:31 PM
tonio

Quote:

Originally Posted by firebio
Prove that {u,v}=0 for all $v \in V$ iff u=0. {u,v} = $\Sigma$ u*v, where u* is the conjugate of u. If u=0 then {u,v} is obviously 0. Now I'm not sure how to prove it the other way: if {u,v}=0 then u=0, i.e. $\Sigma$ u*v=0...

Apparently you're using {u,v} to denote the inner product... Anyway, if it is true that $\langle u,v\rangle =0\,\,\forall v\in V$ then this is true for $v=u$ as well, so apply now positiveness of the inner product to get that it must be $u=0$.

Tonio
2015-07-01T15:42:06
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/advanced-algebra/134182-inner-product-space-proof-question-print.html", "openwebmath_score": 0.9778611063957214, "openwebmath_perplexity": 3784.8079158847945, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639694252316, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132480462299 }
http://mathhelpforum.com/algebra/128816-dot-product-question.html
Math Help - Dot Product Question

1. Dot Product Question

Maybe I'm overlooking the simplicity of the question, but I keep coming up with answers different from the given answer.

The angle between $\vec a$ and $\vec b$ is $79^\circ$. Find $p$ if $\vec a = 6\vec i + 3 \vec j - 2 \vec k$ and $\vec b = -2 \vec i + p \vec j - 4\vec k$

2. Originally Posted by gordonparsons
Maybe I'm overlooking the simplicity of the question, but I keep coming up with answers different from the given answer.

The angle between $\vec a$ and $\vec b$ is $79^\circ$. Find $p$ if $\vec a = 6\vec i + 3 \vec j - 2 \vec k$ and $\vec b = -2 \vec i + p \vec j - 4\vec k$

We know (hopefully) that $\cos 79^\circ = \frac{\vec a\cdot \vec b}{\|\vec a\|\,\|\vec b\|}$ ....well, do the math.
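Doing the math numerically (a sketch with plain bisection; here $\vec a\cdot\vec b = 3p-4$, $\|\vec a\|=7$ and $\|\vec b\|=\sqrt{p^2+20}$):

```python
import math

def g(p):
    dot = 6 * (-2) + 3 * p + (-2) * (-4)       # a·b = 3p - 4
    return dot / (7 * math.sqrt(p * p + 20)) - math.cos(math.radians(79))

lo, hi = 0.0, 100.0                            # g changes sign on this bracket
for _ in range(80):                            # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
print(lo)                                      # ~ 4.0, suggesting the intended p = 4
```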
2016-05-04T06:44:07
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/128816-dot-product-question.html", "openwebmath_score": 0.8357636332511902, "openwebmath_perplexity": 426.65998339436464, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639694252316, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132480462299 }
https://stacks.math.columbia.edu/tag/095J
## 59.72 Auxiliary lemmas on morphisms

Some lemmas that are useful for proving functoriality properties of constructible sheaves.

Lemma 59.72.1. Let $U \to X$ be an étale morphism of quasi-compact and quasi-separated schemes (for example an étale morphism of Noetherian schemes). Then there exists a partition $X = \coprod _ i X_ i$ by constructible locally closed subschemes such that $X_ i \times _ X U \to X_ i$ is finite étale for all $i$.

Proof. If $U \to X$ is separated, then this is More on Morphisms, Lemma 37.44.4. In general, we may assume $X$ is affine. Choose a finite affine open covering $U = \bigcup U_ j$. Apply the previous case to all the morphisms $U_ j \to X$ and $U_ j \cap U_{j'} \to X$ and choose a common refinement $X = \coprod X_ i$ of the resulting partitions. After refining the partition further we may assume $X_ i$ affine as well. Fix $i$ and set $V = U \times _ X X_ i$. The morphisms $V_ j = U_ j \times _ X X_ i \to X_ i$ and $V_{jj'} = (U_ j \cap U_{j'}) \times _ X X_ i \to X_ i$ are finite étale. Hence $V_ j$ and $V_{jj'}$ are affine schemes and $V_{jj'} \subset V_ j$ is closed as well as open (since $V_{jj'} \to X_ i$ is proper, so Morphisms, Lemma 29.41.7 applies). Then $V = \bigcup V_ j$ is separated because $\mathcal{O}(V_ j) \to \mathcal{O}(V_{jj'})$ is surjective, see Schemes, Lemma 26.21.7. Thus the previous case applies to $V \to X_ i$ and we can further refine the partition if needed (it actually isn't but we don't need this). $\square$

In the Noetherian case one can prove the preceding lemma by Noetherian induction and the following amusing lemma.

Lemma 59.72.2. Let $f: X \to Y$ be a morphism of schemes which is quasi-compact, quasi-separated, and locally of finite type. If $\eta$ is a generic point of an irreducible component of $Y$ such that $f^{-1}(\eta )$ is finite, then there exists an open $V \subset Y$ containing $\eta$ such that $f^{-1}(V) \to V$ is finite.

Proof. This is Morphisms, Lemma 29.51.1. $\square$

The statement of the following lemma can be strengthened a bit.

Lemma 59.72.3. Let $f : Y \to X$ be a quasi-finite and finitely presented morphism of affine schemes.

(1) There exists a surjective morphism of affine schemes $X' \to X$ and a closed subscheme $Z' \subset Y' = X' \times _ X Y$ such that

    (a) $Z' \subset Y'$ is a thickening, and

    (b) $Z' \to X'$ is a finite étale morphism.

(2) There exists a finite partition $X = \coprod X_ i$ by locally closed, constructible, affine strata, and surjective finite locally free morphisms $X'_ i \to X_ i$ such that the reduction of $Y'_ i = X'_ i \times _ X Y \to X'_ i$ is isomorphic to $\coprod _{j = 1}^{n_ i} (X'_ i)_{red} \to (X'_ i)_{red}$ for some $n_ i$.

Proof. Setting $X' = \coprod X'_ i$ we see that (2) implies (1). Write $X = \mathop{\mathrm{Spec}}(A)$ and $Y = \mathop{\mathrm{Spec}}(B)$. Write $A$ as a filtered colimit of finite type $\mathbf{Z}$-algebras $A_ i$. Since $B$ is an $A$-algebra of finite presentation, we see that there exists $0 \in I$ and a finite type ring map $A_0 \to B_0$ such that $B = \mathop{\mathrm{colim}}\nolimits B_ i$ with $B_ i = A_ i \otimes _{A_0} B_0$, see Algebra, Lemma 10.127.8. For $i$ sufficiently large we see that $A_ i \to B_ i$ is quasi-finite, see Limits, Lemma 32.17.2. Thus we reduce to the case of finite type algebras over $\mathbf{Z}$, in particular we reduce to the Noetherian case. (Details omitted.)

Assume $X$ and $Y$ Noetherian. In this case any locally closed subset of $X$ is constructible.
By Lemma 59.72.2 and Noetherian induction we see that there is a finite partition $X = \coprod X_ i$ of $X$ by locally closed strata such that $Y \times _ X X_ i \to X_ i$ is finite. We can refine this partition to get affine strata. Thus after replacing $X$ by $X' = \coprod X_ i$ we may assume $Y \to X$ is finite.

Assume $X$ and $Y$ Noetherian and $Y \to X$ finite. Suppose that we can prove (2) after base change by a surjective, flat, quasi-finite morphism $U \to X$. Thus we have a partition $U = \coprod U_ i$ and finite locally free morphisms $U'_ i \to U_ i$ such that $U'_ i \times _ X Y \to U'_ i$ is isomorphic to $\coprod _{j = 1}^{n_ i} (U'_ i)_{red} \to (U'_ i)_{red}$ for some $n_ i$. Then, by the argument in the previous paragraph, we can find a partition $X = \coprod X_ j$ with locally closed affine strata such that $X_ j \times _ X U_ i \to X_ j$ is finite for all $i, j$. By Morphisms, Lemma 29.48.2 each $X_ j \times _ X U_ i \to X_ j$ is finite locally free. Hence $X_ j \times _ X U'_ i \to X_ j$ is finite locally free (Morphisms, Lemma 29.48.3). It follows that $X = \coprod X_ j$ and $X_ j' = \coprod _ i X_ j \times _ X U'_ i$ is a solution for $Y \to X$. Thus it suffices to prove the result (in the Noetherian case) after a surjective flat quasi-finite base change.

Applying Morphisms, Lemma 29.48.6 we see we may assume that $Y$ is a closed subscheme of an affine scheme $Z$ which is (set theoretically) a finite union $Z = \bigcup _{i \in I} Z_ i$ of closed subschemes mapping isomorphically to $X$. In this case we will find a finite partition of $X = \coprod X_ j$ with affine locally closed strata that works (in other words $X'_ j = X_ j$). Set $T_ i = Y \cap Z_ i$. This is a closed subscheme of $X$. As $X$ is Noetherian we can find a finite partition of $X = \coprod X_ j$ by affine locally closed subschemes, such that each $X_ j \times _ X T_ i$ is (set theoretically) a union of strata $X_ j \times _ X Z_ i$. Replacing $X$ by $X_ j$ we see that we may assume $I = I_1 \amalg I_2$ with $Z_ i \subset Y$ for $i \in I_1$ and $Z_ i \cap Y = \emptyset$ for $i \in I_2$. Replacing $Z$ by $\bigcup _{i \in I_1} Z_ i$ we see that we may assume $Y = Z$. Finally, we can replace $X$ again by the members of a partition as above such that for every $i, i' \subset I$ the intersection $Z_ i \cap Z_{i'}$ is either empty or (set theoretically) equal to $Z_ i$ and $Z_{i'}$. This clearly means that $Y$ is (set theoretically) equal to a disjoint union of the $Z_ i$ which is what we wanted to show. $\square$

Comment #5474 by Andrés: In Lemma 03S1, "generic point of on irreducible component" should read "generic point of an irreducible component".
2022-05-25T23:49:19
{ "domain": "columbia.edu", "url": "https://stacks.math.columbia.edu/tag/095J", "openwebmath_score": 0.9757256507873535, "openwebmath_perplexity": 139.37314943913938, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639694252316, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132480462299 }
http://www.physicsforums.com/showthread.php?t=492466
# Find equation of the tangent plane

by andrey21

Tags: equation, plane, tangent

P: 466

Find the equation of the tangent plane to the level surface of the scalar field $\xi(x,y,z) = x^2+y^2+z^2$ at the point (1,1,2).

Looking to work through this question with someone; not too sure where to start.

HW Helper
P: 3,307

I would start with the general equation of a plane... the tangent plane can be thought of as a linear approximation of the function at that point.
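Following the hint: the gradient of $\xi$ is normal to the level surface, so the tangent plane at $(1,1,2)$ falls out directly. A sympy sketch (mine, not from the thread):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
xi = x**2 + y**2 + z**2
grad = [sp.diff(xi, v) for v in (x, y, z)]
P = {x: 1, y: 1, z: 2}
n = [g.subs(P) for g in grad]                  # normal vector (2, 2, 4)
plane = sum(ni * (v - P[v]) for ni, v in zip(n, (x, y, z)))
print(sp.Eq(sp.expand(plane), 0))              # 2x + 2y + 4z - 12 = 0, i.e. x + y + 2z = 6
```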
2014-08-22T19:39:37
{ "domain": "physicsforums.com", "url": "http://www.physicsforums.com/showthread.php?t=492466", "openwebmath_score": 0.5047823190689087, "openwebmath_perplexity": 675.0674536918788, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639694252315, "lm_q2_score": 0.6723316926137811, "lm_q1q2_score": 0.6532132480462298 }
http://projectsforpreschoolers.com/books/vector-methods
# Vector Methods

Format: Paperback
Format: PDF / Kindle / ePub
Size: 7.91 MB

Click on any part of the photo of Bill Gates, hold the left mouse button down, then drag it to "warp" the photo into a topologically equivalent distortion. The present book grew out of notes written for a course by the same name taught by the author during 2005. In the meantime you can try to view this department information on IRIS using your current browser. The precise mathematical definition of curvature can be made into a powerful tool for studying the geometrical structure of manifolds of higher dimensions.

Pages: 0
Publisher: Oliver and Boyd (1959)
ISBN: B00ELLW868

Collected Papers: Volume I 1955-1966
Singularities of Caustics and Wave Fronts (Mathematics and its Applications)
The Future of Identity in the Information Society: Proceedings of the Third IFIP WG 9.2, 9.6/11.6, 11.7/FIDIS International Summer School on the ... 2007 (Collected Works of Claude Chevalley)
Lightlike Submanifolds of Semi-Riemannian Manifolds and Applications (Mathematics and Its Applications)
Lectures on the differential geometry of curves and surfaces
The Geometry of Population Genetics (Lecture Notes in Biomathematics)

First, we must locate the tangent on which it lies. If Q is the point of contact of the tangent to the curve, then the tangent itself is determined by the parameters of the point Q. Next, on the tangent, the position of P is given by its algebraic distance u from Q; thus $\frac{du}{ds} = -1$, which on integration w.r.t. s gives $u = k - s$, where k is a constant.
Although mathematicians from antiquity had described some curves as curving more than others and straight lines as not curving at all, it was the German mathematician Gottfried Leibniz who, in 1686, first defined the curvature of a curve at each point in terms of the circle that best approximates the curve at that point. In particular, how much discussion of smooth manifolds occurs in class will depend on the need for it. Differential Geometry can be defined as a branch of mathematics concerned with the properties of and relationships between points, lines, planes, and figures and with generalizations of these concepts. It is a discipline that uses the methods of differential and integral calculus, as well as linear and multilinear algebra, to study problems in geometry. The following main areas are covered: differential equations on manifolds, global analysis, Lie groups, local and global differential geometry, the calculus of variations on manifolds, topology of manifolds, and mathematical physics. Whether that's true globally is the bane of many mathematicians' and physicists' lives. Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and surfaces in the three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th century and the 19th century.
Initiation to Global Finslerian Geometry, Volume 68 (North-Holland Mathematical Library)
Algorithmen zur Gefäßerkennung für die Koronarangiographie mit Synchrotronstrahlung
Ordinary Differential Equations
Equivalence, Invariants and Symmetry
Symplectic Geometry and Analytical Mechanics (Mathematics and Its Applications) (No 35)
Differential Geometric Structures (Dover Books on Mathematics)
Invariants of Quadratic Differential Forms (Dover Books on Mathematics)
Mary Reed Missionary to the Lepers
Stable Mappings and Their Singularities (Graduate Texts in Mathematics)
Geometry and Differential Geometry: Proceedings of a Conference Held at the University of Haifa, Israel, March 18-23, 1979
Curve and Surface Reconstruction: Algorithms with Mathematical Analysis (Cambridge Monographs on Applied and Computational Mathematics)
Noncommutative Differential Geometry and Its Applications to Physics: Proceedings of the Workshop at Shonan, Japan, June 1999 (Mathematical Physics Studies)
Modern Differential Geometry of Curves and Surfaces with Mathematica, Fourth Edition (Textbooks in Mathematics)
Regulators in Analysis, Geometry and Number Theory (Progress in Mathematics)
General Investigations of Curved Surfaces of 1827 and 1825
Harmonic Morphisms between Riemannian Manifolds (London Mathematical Society Monographs)
Partial Differential Equations: Proceedings of a Symposium held in Tianjin, June 23 - July 5, 1986 (Lecture Notes in Mathematics)
Lectures on Seiberg-Witten Invariants (Springer Tracts in Modern Physics)

The above examples of this non-uniqueness are all rank $1$ symmetric spaces. However, as we show in this paper, bisectors in the usual $L^2$ metric are such for a unique pair of points in the rank $2$ geometry $\mathbb{H}^2 \times \mathbb{H}^2$. Topics include differential forms, homotopy, homology, cohomology, fiber bundles, connection and covariant derivatives, and Morse theory. "Thoroughly recommended." - Physics Bulletin. 1983 edition. The tangent space of an imbedded manifold. By the way, the only thing the reader learns about what an 'open set' is, is that it contains none of its boundary points. All the topology books I have read define open sets to be those in the topology. This is another point of confusion for the reader. In fact, points of confusion abound in that portion of the book. 2) On page 17, trying somewhat haphazardly to explain the concept of a neighborhood, the author defines N as "N := {N(x). This used to be something that bothered me, but now I recognise the importance of having a firm intuitive grasp on classical differential geometry before drowning in the abstraction. Many of the articles in this volume are written by prominent researchers and will serve as introductions to the topics.
In 1750 he wrote a letter to Christian Goldbach which, as well as commenting on a dispute Goldbach was having with a bookseller, gives Euler's famous formula for a polyhedron, $v - e + f = 2$, where v is the number of vertices of the polyhedron, e is the number of edges and f is the number of faces. The RTG is a vertically integrated program to enhance the training of undergraduates, graduate students, and postdocs at the University of Texas and, through this website, well beyond. We exemplify and promote a unified perspective on geometry and topology. The mathematics on this website includes a potent mix of low-dimensional topology, algebraic geometry, differential geometry, global linear and nonlinear analysis, representation theory, geometric group theory, and homotopy theory. A theorem of Zimmer going back to the 1980's asserts that up to local isomorphism, SL(2,R) is the only non-compact simple Lie group that can act by isometries on a Lorentzian manifold of finite volume. This workshop focuses on building bridges by developing a unified point of view and by emphasizing cross-fertilization of ideas and techniques from geometry, topology, and combinatorics. New experimental evidence is crucial to this goal. Differential geometry is a mathematical discipline that uses the methods of differential calculus to study problems in geometry. The theory of plane and space curves and of surfaces in the three-dimensional Euclidean space formed the basis for its initial development in the eighteenth and nineteenth century. This is the tensor calculus, which Albert Einstein found to be the most suitable tool for his general theory of relativity. Formulae - Expression for torsion. Indicatrices (or spherical images). From the above figure, we can find out the tangent, normal and binormal at any given point, thus helping in the correct visualization of the object. Differential geometry is widely applied in the study of various polymers, in the field of chemistry too, where we use the famous Eyring's formula, which is also deduced from the discrete form of the Frenet frame.
2016-10-21T13:02:00
{ "domain": "projectsforpreschoolers.com", "url": "http://projectsforpreschoolers.com/books/vector-methods", "openwebmath_score": 0.4671308994293213, "openwebmath_perplexity": 914.1274789806552, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://proxies-free.com/definite-integrals-on-counting-the-areas-covered-by-holes-in-a-function-in-integration/
# definite integrals – On counting the areas covered by holes in a function in integration

As far as I know, holes in a function at the endpoints of an interval aren't usually given any importance while integrating over that interval. For example, while calculating the area under the fractional part function from 0 to 1, you don't really think of a hole as being an infinitesimally small width that contributes to the calculated area. And I'm fine with that - it makes sense. But there's this question that defines f(x) as 0 where x can be expressed as $$\frac{n}{n+1}$$, where n is a natural number, and as 1 everywhere else. And you're supposed to find the integral of f(x) from 0 to 2. So you've got a line, y=1, with dots on it that grow closer and more numerous as you get closer to 1, nearing an infinite number. Now the solution to the question just integrates the function from 0 to $$\frac{1}{2}$$, $$\frac{1}{2}$$ to $$\frac{2}{3}$$, $$\frac{2}{3}$$ to $$\frac{3}{4}$$ and so on until 1, and then a normal integral from 1 to 2 - essentially just integrating y=1 from 0 to 2. This seems odd. Isn't 'infinitesimally small quantities summed in infinite numbers forming actual numbers' the general idea of integration? Since there're an infinite number of small widths here, shouldn't they be considered as constituting some area and thereby affecting the calculation?
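A sketch of the standard resolution, added as a note (it is not part of the original post): each hole is a single point, i.e. a set of width exactly zero, and even countably many such points together have total length (Lebesgue measure) zero, so they cannot contribute any area:

$$
\lambda\Big(\Big\{\tfrac{n}{n+1} : n \in \mathbb{N}\Big\}\Big) = 0
\quad\Longrightarrow\quad
\int_0^2 f(x)\,dx = \int_0^2 1\,dx = 2.
$$

The "infinitely many small widths" intuition fails here because the holes have width zero, not a small positive width; an infinite sum of zeros is still zero.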
2021-04-22T04:25:26
{ "domain": "proxies-free.com", "url": "https://proxies-free.com/definite-integrals-on-counting-the-areas-covered-by-holes-in-a-function-in-integration/", "openwebmath_score": 0.9454137682914734, "openwebmath_perplexity": 338.06956624025736, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://socratic.org/questions/how-do-you-solve-x-3-10-1
# How do you solve x/3=-10?

Jun 14, 2018

$x = - 30$

#### Explanation:

When dealing with equalities, you can multiply both sides by the same number, and the equality will preserve its truth value. By this, I mean that if you start with a true equality, it will still be true, for example

$3 = 3 \to 3 \cdot 4 = 3 \cdot 4 \to 12 = 12$

So, we started with a true equality ($3 = 3$), and we multiplied both sides by $4$, obtaining another true equality ($12 = 12$).

On the other hand, if the two sides are different, they will still differ after being multiplied by the same number:

$3 \ne 2 \to 3 \cdot 6 \ne 2 \cdot 6 \to 18 \ne 12$

So, we started with a true inequality ($3 \ne 2$), and we multiplied both sides by $6$, obtaining another true inequality ($18 \ne 12$).

In your case, you only need to multiply both sides by $3$: the expression becomes

$3 \cdot \frac{x}{3} = - 10 \cdot 3$

Why did we choose $3$? Because our goal is to isolate the $x$ on the left side, obtaining an expression like $x = \ldots$. And the $3$ we multiplied by simplifies with the $3$ at the denominator, allowing us to reach our goal:

$\cancel{3} \cdot \frac{x}{\cancel{3}} = - 10 \cdot 3$

Of course, this comes with the price of an added calculation on the right hand side, but that's not much of a problem:

$x = - 10 \cdot 3 = - 30$

Equation solved! In fact, we reached a form like $x = k$, for some real number $k$, which means that that particular value is the solution for the equation.

Jun 14, 2018

$x = - 30$

#### Explanation:

Given: $\frac{x}{3} = - 10$

The objective is to end up with just one $x$ and for it to be on its own on one side of the = and everything else on the other side. To end up with just $x$ on the left we change the $\frac{1}{3}$ into 1. This is because $1 \times x$ is just $x$.

$\frac{x}{3} = -10 \quad\to\quad x \times \frac{1}{3} = -10$

Multiply both sides by $3$:

$x \times \frac{1}{3} \times 3 = -10 \times 3$

$x \times \frac{3}{3} = -30$

But $\frac{3}{3}$ is the same value as 1, giving:

$x = -30$

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

$\text{General shortcut rule}$

To move a value to the other side of the equals: move it to the other side and apply the opposite action. Addition is changed to subtraction and subtraction is changed to addition. Division is changed to multiplication and multiplication is changed to division.
2020-08-09T23:29:56
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-solve-x-3-10-1", "openwebmath_score": 0.8520316481590271, "openwebmath_perplexity": 323.94920039606495, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://www.mathscinotes.com/2017/01/measuring-a-chamfer-angle-using-gage-balls/
# Measuring a Chamfer Angle Using Gage Balls

Quote of the Day

The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.

— Alfred North Whitehead, Process and Reality, p. 39 [Free Press, 1979]

## Introduction

Figure 1: Chamfer Angle Measurement Example Using Two Gage Balls.

One metrology operation I have had to perform a number of times is measuring a chamfer angle precisely – Figure 1 shows today's example. Many items are chamfered – even in electronics. For example, edge connectors on printed circuit boards often need to be chamfered to ensure that they do not damage the connectors they are being inserted into. Referring to Figure 1, you might think that chamfer measurements would be easy because you have a vertical edge and a horizontal edge that you should be able to measure. Unfortunately, these edges are rarely straight. They often are rounded or irregular. This makes the chamfer angle measurement non-repeatable. Using gage balls eliminates any dependence on determining precisely the location of an edge.

## Background

Equation 1 is the formula I use for determining a chamfer angle using two gage balls of different diameter.

Eq. 1 $\displaystyle \theta \left( {{{R}_{1}},{{R}_{2}},{{M}_{1}},{{M}_{2}}} \right)=2\cdot \text{arctan}\left( {\frac{{{{R}_{1}}-{{R}_{2}}}}{{{{M}_{1}}-{{M}_{2}}-{{R}_{1}}+{{R}_{2}}}}} \right)$

where

• M1 is the height measurement of ball 1 above the top surface.
• M2 is the height measurement of ball 2 above the top surface.
• R1 is the radius of gage ball 1.
• R2 is the radius of gage ball 2.

I derive Equation 1 in the analysis section.

## Analysis

### Symbol Definitions

Figure 2 shows the variables that I defined for the angle measurement scenario of Figure 1.

### Derivation and Example Calculation

I am lazy this morning. I am sure there is a clever geometric derivation, but the quickest way to get a formula is to use Mathcad's symbolic processor to solve a simple system of equations. Figure 3 shows my derivation with Q = arctan(θ/2). I often initially do my trigonometric derivations sans trig functions because Mathcad will often generate overly complex answers, e.g. applying half-angle formulas to solve for θ. As an example, I use Equation 1 to determine the angle in Example 1. In my function evaluation, I use the fact that the gage ball radius, R, is 1/2 the diameter, D.

Figure 3: Derivation of Equation 1 and Application to Figure 1 Example.

## Conclusion

This is a relatively simple formula that is useful for precision measurement of a chamfer angle using two gage balls or roller gages of different diameter.

### One Response to Measuring a Chamfer Angle Using Gage Balls

1. Howie Moore says:

Hello, I am Howie Moore and I would like to know if you have Chamfer Gages. If yes, then get back to me with the types and prices and the major credit cards you accept. Thank You. Best Regards. Howie M.
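Equation 1 is easy to sanity-check numerically. Below is a minimal Python sketch of the formula; the function name and the sample ball sizes and readings are my own illustrative choices, not values from the post.

```python
import math

def chamfer_angle(r1, r2, m1, m2):
    """Chamfer angle (degrees) from two gage-ball readings, per Equation 1.

    r1, r2 -- radii of gage balls 1 and 2
    m1, m2 -- measured heights of balls 1 and 2 above the top surface
    """
    return 2.0 * math.degrees(math.atan((r1 - r2) / (m1 - m2 - r1 + r2)))

# Hypothetical example: 1/2" and 1/4" diameter balls (radii 0.250 and 0.125)
# with measured heights of 0.40" and 0.15" above the surface.
print(chamfer_angle(0.250, 0.125, 0.40, 0.15))  # chamfer angle in degrees
```

Because the formula only uses differences of radii and of heights, any zero offset common to both height measurements cancels out, which is the practical appeal of the two-ball method.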
2019-10-14T13:24:47
{ "domain": "mathscinotes.com", "url": "https://www.mathscinotes.com/2017/01/measuring-a-chamfer-angle-using-gage-balls/", "openwebmath_score": 0.7956954836845398, "openwebmath_perplexity": 2320.787962364989, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
http://math.stackexchange.com/questions/677574/homeomorphism-between-space-and-product
Homeomorphism between Space and Product

Do there exist examples of non-empty, infinite spaces X not equipped with the discrete topology for which $X \cong X \times X$?

-

I think the answer is no, by dimension reasons, if the dimension falls in the interval $(0, \infty)$; I think Lebesgue covering dimension is multiplicative. The answer is no for manifolds, cell complexes or CW-complexes. Maybe you can use, e.g., Kunneth's theorem if there is some non-trivial homology below the top one. But a weaker result is possible; a space $X$ can be homotopically-equivalent to $X \times X$, e.g., for $X=\mathbb R^n ; n< \infty$ –  user99680 Feb 15 at 18:19

Take for example $X=\cup_{n\ge1}\mathbb R^n$ with the topology defined by: $U\subset X$ is open if and only if $U\cap\mathbb R^n$ is open for all $n$. It is then easy to check that the map $\phi:X\times X\to X$ defined by $\phi((x_i),(y_i))=(x_1,y_1,x_2,y_2,x_3,\dots)$ is a homeomorphism; here an element of $X$ is represented by $(x_i)$, where $(x_i)$ is a sequence of real numbers which is nonzero for only a finite number of $i$'s. (Also $X=\Pi_{n\ge1}\mathbb R$ with the product topology would work with the same map defined as above.)

For another example, we have a theorem in functional analysis which says that every separable infinite-dimensional Hilbert space is isomorphic to $l_2$. Since $l_2\times l_2$ is a separable Hilbert space, by the theorem it would be isomorphic to $l_2$ (in particular it would be homeomorphic).

-

Elementary examples are often zero-dimensional: $\mathbb{Q}$, $\mathbb{P}$ (= the irrationals as a subset of $\mathbb{R}$) and the Cantor set $C \subset [0,1]$ all are homeomorphic to their squares (indeed to every finite power of itself, and even to the countable power for the irrationals and $C$). A trivial example is an infinite space in the indiscrete topology. Many infinite-dimensional spaces also obey this: $R^\omega$ in the product topology, or any higher power as well, or the Hilbert space $\ell_2$ (not really different, as $R^\omega$ is homeomorphic (as topological spaces) to $\ell_2$). A nice one-dimensional space due to Erdős: take all points of $\ell_2$ where all coordinates are rational. Erdős showed this space is one-dimensional and it's quite clearly homeomorphic to its square: the even and odd coordinates form copies of the space itself.

-
2014-11-27T14:47:10
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/677574/homeomorphism-between-space-and-product", "openwebmath_score": 0.949070930480957, "openwebmath_perplexity": 145.31672083076145, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
http://math.stackexchange.com/questions/18721/how-did-they-simplify-this-function/18723
# How did they simplify this function

I'm currently practicing differentiation. The exercise I currently have is the following

Find the derivative of: $(x + 6)^3 (9 x^3 - 2)^5$

Okay, well I can do that now. When I do this I use the chain/product rule to get the following result:

$3(x+6)^2 (9x^3 -2)^5 + 45(x+6)^3 (3x^2) (9x^3 - 2)^4$

However, when I put this in Wolfram Alpha I get the following result (and it matches the answer from my exercise):

$6(x+6)^2 (2-9 x^3)^4 (27 x^3+135 x^2-1)$

I'm staring at this for an hour now, but I don't get how they get rid of the addition, and how they'd get rid of (for example) $(9x^3-2)^5$

-

$3(x+6)^2(9x^3-2)^5+45(x+6)^3(3x^2)(9x^3-2)^4$

$=3(x+6)^2(9x^3-2)^4(9x^3-2)+3(x+6)^2(9x^3-2)^4(45x^2)(x+6)$

$=3(x+6)^2(9x^3-2)^4((9x^3-2)+45x^2(x+6))$

$=3(x+6)^2(9x^3-2)^4(9x^3-2+45x^3+270x^2)$

$=3(x+6)^2(9x^3-2)^4(54x^3+270x^2-2)$

$=6(x+6)^2(9x^3-2)^4(27x^3+135x^2-1)$

@Timo It doesn't come out of thin air. It is the same process as factoring $3x^2 y^5+45x^3 y^4$. Look for factors that are the same in each thing added together and then pull them out front. If you are confused by the powers being different try writing them out like $(x+6)(x+6)...$. –  Brian Jan 24 '11 at 13:32
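If you want to check this kind of factoring mechanically, a computer algebra system will do it for you. A minimal Python/SymPy sketch (my own illustration, not from the thread):

```python
import sympy as sp

x = sp.symbols('x')
f = (x + 6)**3 * (9*x**3 - 2)**5

# Differentiate, then pull out the common factors automatically.
print(sp.factor(sp.diff(f, x)))
# Expected (up to ordering): 6*(x + 6)**2*(9*x**3 - 2)**4*(27*x**3 + 135*x**2 - 1)
```

Note that $(2-9x^3)^4 = (9x^3-2)^4$ because the power is even, so the Wolfram Alpha answer and the factored form above agree.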
2014-08-23T19:29:00
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/18721/how-did-they-simplify-this-function/18723", "openwebmath_score": 0.8468016982078552, "openwebmath_perplexity": 120.91561746511533, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
http://mathematica.stackexchange.com/questions/34157/differentiating-a-cumulative-distribution-function-which-has-a-discontinuity-at
# Differentiating a Cumulative Distribution Function which has a discontinuity at $0$ I want to take the derivative of a distribution function which has a discontinuity at $0$. Here is the function: temp0[x_] :=If[Element[x, Reals], Sqrt[Pi]*(Erfc[(-2 + Log[2])/(2*Sqrt[2])] + (1 + Erf[(1 +Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2])/Sqrt[2]] - Erfc[(-2 + Log[2])/(2*Sqrt[2])])*UnitStep[-Log[2]/2 - Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2]]) + Sqrt[(2*Pi)/E]*(Erf[Log[2]/(2*Sqrt[2])] - Erf[Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2]/Sqrt[2]])*UnitStep[-Log[2]/2 + Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2]] + 2*Sqrt[Pi]*(-Erf[(2 + Log[2])/(2*Sqrt[2])] + Erf[(1 + Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2])/Sqrt[2]])*UnitStep[-Log[2]/2 + Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2]] + Sqrt[(2*Pi)/E]*(Erf[Log[2]/(2*Sqrt[2])] + Erf[Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2]/Sqrt[2]])*UnitStep[Log[2]/2 + Piecewise[{{(x - Log[2])/2, x < 0}}, (x + Log[2])/2]], Integrate[(Sqrt[2]*UnitStep[-y - Log[2]/2] + 2*((Sqrt[2] - E^y)*UnitStep[y - Log[2]/2] + E^y*UnitStep[y + Log[2]/2]))/E^((1 + y)^2/2), {y, -Infinity, Piecewise[{{(x - Log[2])/2, x < 0}, {(x + Log[2])/2, x >= 0}}, 0]}, Assumptions -> NotElement[x, Reals]]]/(Sqrt[Pi]*(2*Sqrt[2/E]*Erf[Log[2]/(2*Sqrt[2])] + Erfc[(-2 + Log[2])/(2*Sqrt[2])] + 2*Erfc[(2 + Log[2])/(2*Sqrt[2])])) If I plot it I get The result of the derivation should be a density function. I used fx0[x_] := D[temp0[x], x] to get the density function but it didnt help me. Do you know how can I ignore the discontinuity at $0$ for the derivation? Thank you very much. - Using the definition of temp0[x], it is possible to plot the derivative by taking the limit: der[x_] := Limit[temp0'[x + eps], eps -> 0]; Plot[der[x], {x, -5, 5}] Note the kink at the origin. This uses a similar technique to J. M.'s answer here. - OP wrote: Do you know how can I ignore the discontinuity at 0 for the derivation? You most definitely do not want to 'ignore' the discontinuity at 0. The fact that the CDF jumps from about 0.64 to about 0.84 at $x = 0$ implies that your density is: • piecewise continuous for $x < 0$, • has a discrete mass at $x = 0$, i.e. $f(0) \approx 0.2$, and then • piecewise continuous for $x>0$. The density should thus appear like so: You can plot the discrete mass at $x = 0$ using something like: BB = ListPlot[{{0, .2}}, PlotStyle -> AbsolutePointSize[8], Filling -> Axis] and then: Show[AA,BB] where AA is the continuous plot ... to get the complete mixed continuous/discrete density. - thanks for the post and for the information. You are exactly right, in fact. I get the distribution function via transformation temp0[x_] := ff0[ll[x]] here ll[x] is the inverse function of Piecewise[{{{0}, -Log[2]/2 <= x <= Log[2]/2}, {Log[E^(2*x)/2], 2*x > Log[2]}}, Log[2*E^(2*x)]] and ff0[x] is the CDF of the density (UnitStep[-x - Log[2]/2]/(E^((1 + x)^2/2)*Sqrt[2*Pi]) + (Sqrt[2/Pi]*UnitStep[x - Log[2]/2])/E^((1 + x)^2/2) + (Sqrt[E^(-(-1 + x)^2/2)]*Sqrt[E^(-(1 + x)^2/2)]*(-UnitStep[x - Log[2]/2] + UnitStep[x + Log[2]/2]))/Sqrt[Pi])/C1 I simply took ll[0]=Log[2]/2 which is originally –  Seyhmus Güngören Oct 17 '13 at 9:22 My density doesnt add upto $1$ when I ignore the point pass. How can I modify my function at $0$? –  Seyhmus Güngören Oct 17 '13 at 11:53 I have no idea whether it is appropriate for you to ignore the discrete point ... 
but if the mass at $x = 0$ is p, and you want to ignore the point at $x = 0$ (??), and the continuous component of your pdf is f, then f/(1-p) should be a well-defined density that integrates to unity (with the discrete point expunged). –  wolfies Oct 17 '13 at 12:02 I will ask a new question. –  Seyhmus Güngören Oct 17 '13 at 12:36
2015-04-28T06:57:18
{ "domain": "stackexchange.com", "url": "http://mathematica.stackexchange.com/questions/34157/differentiating-a-cumulative-distribution-function-which-has-a-discontinuity-at", "openwebmath_score": 0.6945838928222656, "openwebmath_perplexity": 3623.5278912219164, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://math.stackexchange.com/questions/2369861/analyzing-multiple-time-series-describing-the-same-feature
# Analyzing multiple time series describing the same feature

I'm currently facing a problem for which I am given several time series that all describe the same feature. E.g., the height of each of several trees of the same kind was measured over some period of time. However, the time periods are rather random and do not necessarily overlap. For instance, for the height of tree 1 we know that it was 2m on 01.01.2000, 2.1m on 08.05.2000, and 3m on 06.12.2006. For tree 2 we know that its height was 2.5m on 17.03.2004 and 2.9m on 16.06.2006. This is just an example and my data is far more complex, contains a whole lot more data points and time series, but that's basically the nature of my problem.

My aim is to find a function (not yet clear if it should be linear, exponential, etc.) as a prognosis of the height of this particular sort of tree that best fits my data. However, since the dates where the heights were measured do not at all coincide, and the heights at which measurements were started are not alike, I basically have no clue how to approach this challenge. Unfortunately I have never worked with advanced statistics or data analysis before, and googling e.g. "analysis of multiple time series" does not yield anything I could work with. Is there anyone here who's familiar with such analysis of time series? I bet there must be plenty of similar problems and lots of approaches to tackle them. I would be grateful for any suggestions! :) Thanks a lot!!!

This is a perfectly good instance of general curve fitting! You have some data, and you want a curve to fit to it. Let's say that the 'growth' function, over time, is something like $f(t)$, where $t=0$ corresponds to when the tree first began to grow, and $t$ is measured in days. Convert your timestamp data to some normalized "day count": what you choose as your starting point is pretty much up to you. One option would be tracking the data for each tree from when the first data was recorded. In your example then, you would get something like 01.01.2000 -> day 0, 08.05.2000 -> day 200ish, 06.12.2006 -> day 2200ish. Then you can fit your data to the curve $f(t-A)$, where $A$ is the parameter you're fitting to. If you get $A = -500$, for instance, that means the tree began to grow on day $-500$: late 1998. Some candidate forms (a code sketch follows this list):

• Proportional - $f(t) = Bt$
• Logistic - $f(t) = \frac{D}{1+e^{-Bt+C}}$ (makes sense if the trees 'max out' in height)
• Hyperbolic - $f(t) = A(\sqrt{t^2+Bt+1}-1)$ (means they start off growing at one rate, but as an adult grow at a different steady rate)
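As a concrete illustration of the fitting step, here is a minimal Python sketch using SciPy. This is my own addition: the day counts and heights are made-up values in the spirit of the tree example, and the logistic form is just one of the candidates listed above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pooled data: day counts (from an arbitrary origin) and heights (m)
t = np.array([0.0, 128.0, 1520.0, 1900.0, 2530.0])
h = np.array([2.0, 2.1, 2.5, 2.9, 3.0])

def logistic(t, D, B, C):
    """Logistic growth: heights level off at D as t grows."""
    return D / (1.0 + np.exp(-B * t + C))

# p0 gives the optimizer a sensible starting point; fitting can fail without it.
params, cov = curve_fit(logistic, t, h, p0=[4.0, 0.001, 1.0])
print(params)  # fitted D, B, C
```

The same call works for any of the candidate forms; you would compare the residuals (or an information criterion) to decide which shape fits your pooled tree data best.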
2019-05-26T05:41:36
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2369861/analyzing-multiple-time-series-describing-the-same-feature", "openwebmath_score": 0.4686655104160309, "openwebmath_perplexity": 580.6007000179044, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://uglyduckling.nl/blog/financial-engineering/short-rate-models/reproduction-example-1-of-generalized-procedure-for-building-trees/
# Reproduction Example 1 of Generalized Procedure for Building Trees

In a recent (2014) paper John Hull and Alan White demonstrate a generalized method for the construction of short rate trees. Keen to understand the model, we tried to reproduce the results of the first example mentioned in the paper on page 10. The example considers the short rate model given in the paper, which is transformed, via a change of variables x = f(r), into a process for x, with the parameter choices of the paper's first example. To keep things as simple as possible we only consider the first theta here, which allows us to do all the computations on the spreadsheet without resorting to VBA or embedded libraries (currently we are working on a standalone code that will implement the complete tree building algorithm using VBA, which will be posted here once completed).

The spreadsheet is organised in four sections or steps:

• Step one takes the input data
• Step two lists the x and rates that make up the tree
• Step three computes the probabilities in each node. A node can be selected by entering its coordinates in cells D68 and D69.
• Finally, step four gives the discount in the first node. This discount should correspond to the input rates.

Consider the first node. In this spreadsheet we can find the theta that makes the initial discount equal to the one based on the input data using goal seek. We do so by computing the discount based on the input rates and the discount based on the lattice for this specific model choice, and defining the error function as the difference between the two. Driving this error to zero with goal seek in Excel 2011 returns a theta equal to 0.0498165.

reproduceExample1 v1.0

Reference: A Generalized Procedure for Building Trees for the Short Rate and its Application to Determining Market Implied Volatility Functions, by John Hull and Alan White
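For readers who prefer code to Excel's goal seek, the same one-parameter calibration is a scalar root-finding problem. Below is a minimal Python sketch of the idea; it is my own illustration, and `discount_from_tree` is a deliberately simplified toy stand-in (the real function would roll the discount back through the tree using the model details from the paper). The numbers are chosen only so the toy reproduces a theta near the value quoted above.

```python
import math
from scipy.optimize import brentq

# Toy stand-in for the lattice: a one-period discount that depends on the
# first drift parameter theta. Real code would price through the tree.
r0, dt = 0.05, 1.0
def discount_from_tree(theta):
    return math.exp(-(r0 + theta) * dt)

market_discount = math.exp(-0.0998165 * dt)  # hypothetical input-curve discount

def error(theta):
    return discount_from_tree(theta) - market_discount

theta_star = brentq(error, -0.5, 0.5)  # plays the role of Excel's goal seek
print(theta_star)  # ~0.0498165 for this toy setup
```

The full algorithm repeats this root-finding once per time step, solving for each successive theta so that the tree reprices the next input discount factor.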
2019-05-22T09:19:44
{ "domain": "uglyduckling.nl", "url": "https://uglyduckling.nl/blog/financial-engineering/short-rate-models/reproduction-example-1-of-generalized-procedure-for-building-trees/", "openwebmath_score": 0.5243708491325378, "openwebmath_perplexity": 1018.0106714706499, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://www.physicsforums.com/threads/optimizing-cone-calculus.568387/
# Optimizing Cone - Calculus

1. Jan 17, 2012

### girlygirl93

Hello there :) I'm having tons of trouble figuring out how to finish this problem. A cone is to be constructed having a given slant height of l>0. Find the radius and height which give maximal volume. I am unsure of which variables to keep in order for it to be maximized, and how to go about optimizing it. This is how I was going about it: I think that the cross-section of the cone makes a right-angled triangle, for which the equation would be l^2 = b^2 + h^2, and in order to maximize the volume you must relate it to the volume equation V = 1/3(pi)r^2h, but I am having trouble putting it together, to be able to differentiate and then maximize.

2. Jan 17, 2012

### lanedance

ok so volume as a function of r & h is V(r,h) = 1/3(pi)r^2h but you also know (assuming b=r) l^2=h^2+r^2 rearranging the constraint gives r^2 = l^2-h^2 and you can substitute into your volume equation, to get V(h) only. Then you can differentiate w.r.t. h and maximise remembering that l is constant
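Carrying lanedance's hint through to the answer (my own working, added for completeness; it is not part of the original thread):

$$
V(h) = \frac{\pi}{3}\left(l^{2}-h^{2}\right)h, \qquad
V'(h) = \frac{\pi}{3}\left(l^{2}-3h^{2}\right) = 0
\;\Rightarrow\; h = \frac{l}{\sqrt{3}}, \qquad
r = \sqrt{l^{2}-h^{2}} = l\sqrt{\tfrac{2}{3}},
$$

and since $V''(h) = -2\pi h < 0$ at this point, it is indeed a maximum, with $V_{\max} = \dfrac{2\pi l^{3}}{9\sqrt{3}}$.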
2017-09-23T14:35:16
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/optimizing-cone-calculus.568387/", "openwebmath_score": 0.8133781552314758, "openwebmath_perplexity": 772.2146510044865, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://www.physicsforums.com/threads/quick-questions-about-modular-arithmetic.76797/
# Quick questions about modular arithmetic

1. May 24, 2005

### johnnyICON

$$99999^{99} + 1$$ As $$99999 \equiv$$24$$(mod \25)$$ Can I say then: $$99999^{99} + 1 \equiv$$24$$^{99} + 1(mod \25)$$, or is it $$99999^{99} + 1 \equiv$$24$$^{99}(mod \25) + 1$$, or are these two the same things?

2. May 24, 2005

### shmoe

This one is standard. People would probably understand the second, but the usual format is to write the mod at the end.

3. May 24, 2005

### johnnyICON

Oh ok, I was just wondering if the two were the same. I was uncertain if by moving the +1 before the mod it would change the meaning. Awesome, well then that makes this question a lot easier now. Thanks :D

4. May 25, 2005

### johnnyICON

Another quick question. Are the following two congruences the same?

1. $$24^{99} + 1 \equiv 0 \mod \25$$

2. $$24^{99} + 1 \mod \25 \equiv 0 \mod \25$$

I am trying to show that a number is divisible by 25, and I found that 25 can be written in terms expressed in equation #1. And I found that the number that I am trying to divide by 25 can be expressed in terms expressed in equation #2. I thought I finished the proof, but now that I am looking at it, I am unsure about this one thing.

5. May 25, 2005

### Zurtex

When you write 25 don't write it \25 or the 5 just shows up, write it normally, e.g:

$$24^{99} + 1 \equiv 0 \mod 25$$

and:

$$24^{99} + 1 \mod 25 \equiv 0 \mod 25$$

These two statements are the same, I think you miss the point though. Something like:

$$24^{99} + 1 \mod 37 \equiv 0 \mod 25$$

would not really make that much sense, so among other reasons there is no reason to write the mod twice.

6. May 25, 2005

### johnnyICON

Okay. Because I was trying to show that equation #2 and #1 are equivalent. So they are right? LOL sorry, I'm just very uncertain about myself.

7. May 25, 2005

### funkstar

Well, people tend to view congruence defined as a ternary relation: $$x \equiv y \mod n \overset{def}{\Longleftrightarrow} n|x-y.$$ Sometimes one omits the modulo part, but it is still understood that we're dealing with modulo arithmetic by using the equivalence sign $$\equiv$$, instead of an equality sign. Another way of stating a congruence $$x \equiv y \mod n$$ is by saying that $$x$$ and $$y$$ belong to the same residue class (look this up on mathworld.wolfram.com). That is $$x \equiv y \mod n \Leftrightarrow [x]_n = [y]_n$$ So, your equation 2. states (with the missing 2 from 25) that $$[[24^{99}+1]_{25}]_{25} = [0]_{25}$$

8. May 26, 2005

### ramsey2879

Remember Matt Grime's post in your other thread. You are trying to show that 24^99 + 1 = 0 mod 25. But 24^99 + 1 = (-1)^99 + 1 = -1 + 1 = 0 mod 25, since 24 = -1 mod 25 and since -1 raised to an odd power is -1. Q.E.D.

9. May 26, 2005

### johnnyICON

Yea, that's exactly how I did it Ramsey :D Sorry, I should have concluded this thread by mentioning that. Thanks for another helpful response though. I appreciate it.
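The congruence in the thread is easy to confirm numerically with Python's three-argument pow (my own check, not part of the original discussion):

```python
# 24 is congruent to -1 (mod 25), and -1 to an odd power is -1,
# so 24^99 + 1 should be congruent to 0 (mod 25).
print(pow(24, 99, 25))             # 24, i.e. -1 mod 25
print((pow(24, 99, 25) + 1) % 25)  # 0
```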
2016-12-04T02:03:47
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/quick-questions-about-modular-arithmetic.76797/", "openwebmath_score": 0.7832014560699463, "openwebmath_perplexity": 1183.5817335328052, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
http://mathhelpforum.com/pre-calculus/221112-verifying-trig-identity-print.html
# Verifying a trig identity

• August 9th 2013, 11:45 AM curt26

Verifying a trig identity

I have 1/sin^2(t) + 1/cos^2(t) = 1/sin^2(t)-sin^4(t)

I started on the right side of the identity and got to this stage: 1/sin^2(t) + 1/1-sin^2(t)

On the left side of the identity I got to this stage: 1/sin^2(t)(1-sin^2(t))

I know I need to go farther on the right side of the identity but I am stuck at this point. Any help would be great thanks!

• August 9th 2013, 12:19 PM adkinsjr

Re: Verifying a trig identity

There's some ambiguity in the way you typed the right side of the equation, but the left reduces down to this:

$\frac{1}{sin^2(t)}+\frac{1}{1-sin^2(t)}=\frac{1-sin^2(t)+sin^2(t)}{sin^2(t)[1-sin^2(t)]}=\frac{1}{sin^2(t)-sin^4(t)}$

• August 13th 2013, 08:19 PM ibdutt

Re: Verifying a trig identity

alternatively we can also do like this LHS = 1/sin^2(t) + 1/cos^2(t) = [ sin^2(t) + cos^2(t) ]/ [ sin^2(t) * cos^2(t)] = 1 / [ sin^2(t) * ( 1 - sin ^2(t) )] = RHS
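For what it's worth, the identity (with the intended parenthesisation of the right-hand side) can be machine-checked; a small Python/SymPy sketch, added as my own illustration:

```python
import sympy as sp

t = sp.symbols('t')
lhs = 1/sp.sin(t)**2 + 1/sp.cos(t)**2
rhs = 1/(sp.sin(t)**2 - sp.sin(t)**4)

print(sp.simplify(lhs - rhs))  # 0, so the two sides agree
```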
2014-09-17T20:28:50
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/pre-calculus/221112-verifying-trig-identity-print.html", "openwebmath_score": 0.9491896033287048, "openwebmath_perplexity": 2072.2345803514704, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
http://mathhelpforum.com/differential-geometry/140931-compact-space-continuous-function.html
# Math Help - compact space and continuous function

1. ## compact space and continuous function

Hey, I have a topology problem here: Let K be a compact subset in R and let f : K → R be a continuous function. Prove that for every ε > 0 there exists Lε > 0 such that |f (x) − f (y)| ≤ Lε |x − y| + ε , for every x, y ∈ K. I'm thinking of using the Lipschitz condition, but not quite sure how to handle it. Could anyone please give me a hint? Any input is appreciated!

2. Originally Posted by rain07

Hey, I have a topology problem here: Let K be a compact subset in R and let f : K → R be a continuous function. Prove that for every ε > 0 there exists Lε > 0 such that |f (x) − f (y)| ≤ Lε |x − y| + ε , for every x, y ∈ K. I'm thinking of using the Lipschitz condition, but not quite sure how to handle it. Could anyone please give me a hint? Any input is appreciated!

- - - - - - - - - -

You can't use the Lipschitz condition because not every continuous function on a compact space is Lipschitz! So, let $\varepsilon>0$ be given. Since $K$ is compact, $f$ is uniformly continuous, and so there exists some $\delta>0$ such that $|f(x)-f(y)|<\varepsilon\leqslant \varepsilon+M|x-y|$ for $|x-y|<\delta$ and any $M\geqslant 0$. Now, assume that $\delta\leqslant |x-y|$. By the boundedness of $f$ there exists some $M'$ such that $\text{diam }f(K)\leqslant M'$. So, let $M=\frac{M'}{\delta}$. Then, $|f(x)-f(y)|\leqslant M'=\delta M\leqslant |x-y|M\leqslant M|x-y|+\varepsilon$. Thus, if $|x-y|\geqslant \delta$ we have that $|f(x)-f(y)|\leqslant M|x-y|+\varepsilon$, but this also works for $|x-y|<\delta$ since the extra non-epsilon term is superfluous.

3. I got it. I didn't think much about the properties of continuous functions in the first place. Thanks a lot for your help!
2015-01-30T06:31:25
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/differential-geometry/140931-compact-space-continuous-function.html", "openwebmath_score": 0.9185299277305603, "openwebmath_perplexity": 199.10785620877206, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018701, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
http://www.mathworks.com/matlabcentral/newsreader/view_thread/287712
# Thread Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Antony
Date: 25 Jul, 2010 13:30:08
Message: 1 of 9

Hi, all, I wonder how to solve the following problem: Assuming g(x)=||KX-B||^2 where both K and X are matrices and B is a vector, please compute \partial{g}/\partial{x}.

I don't know what the exact answer is. Checking some related stuff, I think the answer might be 2K^{T}(KX-B). Could you please show me how to compute \partial{g}/\partial{x} in this problem? Or is there some website discussing such a problem? Thanks a lot.

In addition, I wonder where to find the possible answer to such similar problems. May I call it a multivariable calculus problem, or a derivatives-of-a-quadratic-2-norm-matrix problem, or some other problem? I tried to search the web, but failed. Thanks a lot again!

I have read some related material, such as matrix calculus, but it seems that this type of problem is not discussed at all. I feel it is not a simple d(X^{T}AX)/dX problem, which is rather simple.

Antony

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Roger Stafford
Date: 25 Jul, 2010 19:22:05
Message: 2 of 9

"Antony " <[email protected]> wrote in message <[email protected]>...
> Hi, all, I wonder how to solve the following problem:
> Assuming g(x)=||KX-B||^2 where both K and X are matrices and B is a vector, please compute \partial{g}/\partial{x}.
> [...]
> Antony

- - - - - - - - - -

If you want the partial derivatives listed as a column vector, then the answer you gave is correct: 2*K'*(K*X-B) where the apostrophe is used here to denote (complex) transpose as is done in matlab. Here is a somewhat intuitive demonstration.

Your L2 norm can be written as:

||K*X-B||^2 = (K*X-B)'*(K*X-B) = X'*K'*K*X - X'*K'*B - B'*K*X + B'*B

Since B'*K*X is a scalar, it is equal to its own transpose

B'*K*X = (B'*K*X)' = X'*K'*B

which gives

||K*X-B||^2 = X'*K'*K*X - 2*X'*K'*B + B'*B

If we have an n-element row vector whose elements are each a function of x1,x2,...,xn, we can produce a matrix in which for each of the functions there is an n-element column of the partial derivatives of that function taken with respect to x1,x2,...,xn. If the row vector is simply X' = [x1,x2,...,xn] then applying this partial derivative operator will clearly give just the identity matrix, I. If we apply this operator to the above L2 norm we get

2*I*K'*K*X - 2*I*K'*B + 0 = 2*K'*(K*X-B)

as asserted above.
In the case of X'*K'*K*X its derivative is to be taken first with respect to the x values in the left factor while holding the right hand set of x's fixed, plus the derivatives with respect to the right hand set while holding the left hand set fixed. But because the expression is a scalar it is equal to its own transpose, so that second term is the same as if we held the right hand set fixed and varied the left hand set in both terms. Hence the above operator applied to X'*K'*K*X gives

I*K'*K*X + I*K'*K*X = 2*K'*K*X.

If you are not convinced by this last argument, you can always show it rigorously using summation notation. It is just a problem in taking the derivatives of a homogeneous quadratic function of many variables.

Roger Stafford

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Matt J
Date: 26 Jul, 2010 02:19:04
Message: 3 of 9

"Antony " <[email protected]> wrote in message <[email protected]>...
> Hi, all, I wonder how to solve the following problem:
> [...]

============

You could also just use the multivariable chain rule, which says that for vector-valued functions g() and h() and the composition f(X)=g(h(X)), then

gradient f(X) = J_h(X)' * gradient g(h(X))

where J_h is the Jacobian of h. Now just apply this with h(X)=K*X-B and g(z)=||z||^2, for which J_h(X) = K and gradient g(z) = 2*z,

and substituting z=K*X-B gives

gradient f(X) = 2*K'*(K*X-B)

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Antony
Date: 26 Jul, 2010 02:26:04
Message: 4 of 9

"Roger Stafford" <[email protected]> wrote in message <[email protected]>...
> [...]
> Roger Stafford

Thank you, Roger. The explanation is extremely clear and helpful, especially on how to rewrite the 2-norm into matrix multiplication and how the identity matrix is generated. I think I fully understand this process now.

I have another problem. Maybe we cannot directly solve it, and I think the result might be more complex than my former problem. The problem is: if g(x) = ||KX-B||^0.6 with all the other settings as in the former problem, what is \partial{g}/\partial{x}? Do you have any advice on how to solve this equation? Thanks a lot again for your help!

Antony

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Antony
Date: 26 Jul, 2010 02:47:04
Message: 5 of 9

"Matt J " <[email protected]> wrote in message <[email protected]>...
> You could also just use the multivariable chain rule, which says that for vector-valued functions g() and h() and the composition f(X)=g(h(X)), then
> gradient f(X) = J_h(X)' * gradient g(h(X))
> Now just apply this with h(X)=K*X-B and g(z)=||z||^2, for which J_h(X) = K and gradient g(z) = 2*z,
> and substituting z=K*X-B gives
> gradient f(X) = 2*K'*(K*X-B)

I see. Thanks a lot, Matt. Pretty simple solution! But, according to the chain rule, I may apply it to f(X)=||KX-B||^0.6 and obtain the result of the derivative as 0.6*K.'*(K*X-B)^{-0.4}? This result seems rather complex for some numerical optimization.

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Matt J
Date: 26 Jul, 2010 02:50:05
Message: 6 of 9

"Antony " <[email protected]> wrote in message <[email protected]>...
> I have another problem. Maybe we cannot directly solve it, and I think the result might be more complex than my former problem. The problem is:
> if g(x) = ||KX-B||^0.6 with all the other settings as in the former problem, what is \partial{g}/\partial{x}?

========

This is equivalent to (||KX-B||^2)^0.3

So you can use your original result, with one more step of the chain rule leading to

Gradient = 0.3*(||KX-B||^2)^(-.7) * 2*K'*(K*X-B)

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Antony
Date: 26 Jul, 2010 03:01:04
Message: 7 of 9

"Matt J " <[email protected]> wrote in message <[email protected]>...
> This is equivalent to (||KX-B||^2)^0.3
>
> So you can use your original result, with one more step of the chain rule leading to
>
> Gradient = 0.3*(||KX-B||^2)^(-.7) * 2*K'*(K*X-B)

Why not write it as Gradient = 0.3 *2*K'*(K*X-B)*(||KX-B||^2)^(-.7) according to the chain rule? Is it because (||KX-B||^2)^(-.7) is a scalar and there is no difference between them? Thank you!

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Matt J
Date: 26 Jul, 2010 03:04:04
Message: 8 of 9

"Antony " <[email protected]> wrote in message <[email protected]>...
> But, according to the chain rule, I may apply it to f(X)=||KX-B||^0.6 and obtain the result of the derivative as 0.6*K.'*(K*X-B)^{-0.4}?

======================

No, this wouldn't be the correct expression. From my last post, I get, after some simplification

Gradient = 0.6*(||K*X-B||^2)^(-.7) * K'*(K*X-B)

>This result seems rather complex for some numerical optimization.
=======================

Well, your objective function f(X)=||KX-B||^0.6 is unusually complex...

For one thing, this function is not differentiable at points where K*X=B, which means that if the minimum lies there, you cannot use gradient-based approaches to find it.

Subject: Question on the derivative/calculus of a 2-norm matrix. Thanks a lot
From: Antony
Date: 26 Jul, 2010 03:33:05
Message: 9 of 9

"Matt J " <[email protected]> wrote in message <[email protected]>...
> Well, your objective function f(X)=||KX-B||^0.6 is unusually complex...
>
> For one thing, this function is not differentiable at points where K*X=B, which means that if the minimum lies there, you cannot use gradient-based approaches to find it.

Dear Matt, thanks a lot for your time on my question. I appreciate your help! I understand the difficulties of this type of optimization problem now. This might be the reason that papers always figure out other efficient solutions to this type of non-convex problem. Thanks again! Also, thanks a lot for all the other guys' kind and patient help, especially Roger Stafford and Brian Borchers.

Antony
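The closed-form gradient from the thread is easy to validate against finite differences; here is a small NumPy sketch (my own check, using random data rather than anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((5, 3))
B = rng.standard_normal(5)
x = rng.standard_normal(3)

def g(x):
    return np.sum((K @ x - B) ** 2)  # ||Kx - B||^2

analytic = 2 * K.T @ (K @ x - B)

# Central differences along each coordinate direction.
eps = 1e-6
numeric = np.array([
    (g(x + eps * e) - g(x - eps * e)) / (2 * eps)
    for e in np.eye(3)
])

print(np.max(np.abs(analytic - numeric)))  # tiny, ~1e-8 or smaller
```

Because g is quadratic, central differences are exact up to floating-point rounding, so the agreement here is essentially machine precision.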
2015-01-29T04:49:10
{ "domain": "mathworks.com", "url": "http://www.mathworks.com/matlabcentral/newsreader/view_thread/287712", "openwebmath_score": 0.7917736768722534, "openwebmath_perplexity": 1324.284585546996, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://ocw.mit.edu/courses/15-071-the-analytics-edge-spring-2017/pages/trees/keeping-an-eye-on-healthcare-costs-the-d2hawkeye-story/quick-question-349/
# 4.3 Keeping an Eye on Healthcare Costs: The D2Hawkeye Story

## Quick Question

Suppose that instead of the baseline method discussed in the previous video, we used the baseline method of predicting the most frequent outcome for all observations. This new baseline method would predict cost bucket 1 for everyone.

Exercise 1 (Numerical Response): What would the accuracy of this baseline method be on the test set?

Exercise 2 (Numerical Response): What would the penalty error of this baseline method be on the test set?

Explanation

To compute the accuracy, you can create a table of the variable ClaimsTest$bucket2009:

table(ClaimsTest$bucket2009)

According to the table output, this baseline method would get 122978 observations correct, and all other observations wrong. So the accuracy of this baseline method is 122978/nrow(ClaimsTest) = 0.67127.

For the penalty error, since this baseline method predicts 1 for all observations, it would have a penalty error of:

(0*122978 + 2*34840 + 4*16390 + 6*7937 + 8*1057)/nrow(ClaimsTest) = 1.044301
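The course works in R, but the arithmetic itself can be replayed in a few lines of Python (my own addition; the bucket counts are the ones quoted in the explanation, which together imply nrow(ClaimsTest) = 183202):

```python
counts = [122978, 34840, 16390, 7937, 1057]  # test-set observations in buckets 1..5
n = sum(counts)                              # 183202 rows in ClaimsTest

accuracy = counts[0] / n
penalty = sum(2 * i * c for i, c in enumerate(counts)) / n  # penalties 0,2,4,6,8

print(round(accuracy, 5), round(penalty, 6))  # 0.67127 1.044301
```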
2022-09-27T04:04:37
{ "domain": "mit.edu", "url": "https://ocw.mit.edu/courses/15-071-the-analytics-edge-spring-2017/pages/trees/keeping-an-eye-on-healthcare-costs-the-d2hawkeye-story/quick-question-349/", "openwebmath_score": 0.6797542572021484, "openwebmath_perplexity": 2713.126347422589, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://questions.examside.com/past-years/jee/question/pusing-binomial-theorem-the-value-of-0999sup3sup-wb-jee-mathematics-mathematical-induction-and-binomial-theorem-4fy0pqldre3ucrwu
1

### WB JEE 2009

Using the binomial theorem, the value of (0.999)^3 correct to 3 decimal places is

A 0.999
B 0.998
C 0.997
D 0.995

## Explanation

(0.999)^3 = (1 $$-$$ 0.001)^3 = 1 $$-$$ 3(0.001) + 3(0.001)^2 $$-$$ (0.001)^3 $$\approx$$ 0.997, since the higher-order terms do not affect the third decimal place.

2

### WB JEE 2009

If the coefficients of x^2 and x^3 in the expansion of (3 + ax)^9 are the same, then the value of a is

A 3/7
B 7/3
C 7/9
D 9/7

## Explanation

$${T_3} = {}^9{C_2}{(3)^{9 - 2}}.\,{(ax)^2} = {}^9{C_2}{3^7}\,.\,{a^2}\,.\,{x^2}$$

$$\therefore$$ Coeff. of $${x^2} = {}^9{C_2} \times {3^7} \times {a^2}$$

$${T_4} = {}^9{C_3}{(3)^{9 - 3}}\,.\,{(ax)^3} = {}^9{C_3}\,.\,{3^6}\,.\,{a^3}\,.\,{x^3}$$

$$\therefore$$ Coeff. of $${x^3} = {}^9{C_3} \times {3^6} \times {a^3}$$

$$\therefore$$ $${}^9{C_2} \times {3^7} \times {a^2} = {}^9{C_3} \times {3^6} \times {a^3}$$

$$\therefore$$ $$a = {{{}^9{C_2} \times {3^7}} \over {{}^9{C_3} \times {3^6}}} = {{{{9!} \over {2! \times 7!}} \times 3} \over {{{9!} \over {3! \times 6!}}}} = {{3! \times 6! \times 3} \over {2! \times 7!}} = {9 \over 7}$$

3

### WB JEE 2009

If C0, C1, C2, ......, Cn denote the coefficients in the expansion of (1 + x)^n, then the value of C1 + 2C2 + 3C3 + ..... + nCn is

A n . 2^(n $$-$$ 1)
B (n + 1)2^(n $$-$$ 1)
C (n + 1)2^n
D (n + 2)2^(n $$-$$ 1)

## Explanation

$${(1 + x)^n} = {C_0} + {C_1}x + {C_2}{x^2} + ..... + {C_n}{x^n}$$ (Binomial theorem)

Differentiating both sides w.r.t. x, we get

$$n{(1 + x)^{n - 1}} = {C_1} + 2{C_2}x + 3{C_3}{x^2} + ..... + n{C_n}{x^{n - 1}}$$

Putting x = 1 we get

$$n\,.\,{2^{n - 1}} = {C_1} + 2{C_2} + 3{C_3} + .... + n{C_n}$$

4

### WB JEE 2009

For each n $$\in$$ N, 2^{3n} $$-$$ 1 is divisible by (here N is the set of natural numbers)

A 7
B 8
C 6
D 16

## Explanation

Let P(n) = 2^{3n} $$-$$ 1.

Putting n = 1: P(1) = 2^3 $$-$$ 1 = 7, which is divisible by 7.

Putting n = 2: P(2) = 2^6 $$-$$ 1 = 63, which is divisible by 7, and so on.

Assume P(k) = 2^{3k} $$-$$ 1 is divisible by 7, i.e. 2^{3k} $$-$$ 1 = 7P $$\Rightarrow$$ 2^{3k} = 7P + 1.

Then P(k + 1) = 2^{3(k + 1)} $$-$$ 1 = 2^{3k} . 2^3 $$-$$ 1 = (7P + 1)8 $$-$$ 1 = 7 . 8P + 8 $$-$$ 1 = 7(8P + 1).

$$\therefore$$ P(k + 1) is divisible by 7 whenever P(k) is. So, by the process of mathematical induction, 2^{3n} $$-$$ 1 is divisible by 7 for every n.
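As a quick sanity check of the four answers above (my own addition, not part of the original solutions), a short Python sketch:

```python
from math import comb

# Q1: (0.999)^3 to three decimal places
assert round(0.999 ** 3, 3) == 0.997

# Q2: coefficients of x^2 and x^3 in (3 + a x)^9 agree for a = 9/7
a = 9 / 7
c2 = comb(9, 2) * 3 ** 7 * a ** 2
c3 = comb(9, 3) * 3 ** 6 * a ** 3
assert abs(c2 - c3) < 1e-6

# Q3: C1 + 2*C2 + ... + n*Cn = n * 2^(n-1)
n = 10
assert sum(k * comb(n, k) for k in range(1, n + 1)) == n * 2 ** (n - 1)

# Q4: 2^(3n) - 1 is divisible by 7 for every n
assert all((2 ** (3 * n) - 1) % 7 == 0 for n in range(1, 20))
```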
2022-10-01T04:13:45
{ "domain": "examside.com", "url": "https://questions.examside.com/past-years/jee/question/pusing-binomial-theorem-the-value-of-0999sup3sup-wb-jee-mathematics-mathematical-induction-and-binomial-theorem-4fy0pqldre3ucrwu", "openwebmath_score": 0.795353353023529, "openwebmath_perplexity": 3958.016146460809, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639686018702, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213247492658 }
https://www.transtutors.com/questions/ratio-analysis-graham-railways-inc-is-evaluating-its-operations-and-provides-the-fol-2567737.htm
# Ratio Analysis

Graham Railways Inc. is evaluating its operations and provides the following information:

|  | 2016 | 2015 | 2014 |
| --- | --- | --- | --- |
| Net income | $62,854 | $45,852 | $35,456 |
| Total assets at year-end | $381,500 | $246,250 | $145,490 |
| Common shares outstanding | 56,000 | 49,000 | 41,000 |
| Weighted average number of common shares outstanding | 52,500 | 47,500 | 41,000 |
| Total liabilities at year-end | $206,100 | $117,800 | $52,690 |
| Dividends per common share | $0.40 | $0.35 | $0.25 |
| Common shareholders' equity at year-end | $175,400 | $128,450 | $92,800 |
| Ending share price | $24.20 | $18.75 | $14.40 |

Required: For each of the years 2014 through 2016, calculate Graham Railways's earnings per share and dividend yield ratio. The company has no preferred stock or other potentially dilutive securities outstanding. If required, round your answers to two decimal places.

|  | 2016 | 2015 | 2014 |
| --- | --- | --- | --- |
| Earnings per share ($) |  |  |  |
| Dividend yield (%) |  |  |  |
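The blanks above are left for the student. As a hedged sketch of the arithmetic (my own computation, using the standard definitions EPS = net income / weighted-average shares outstanding and dividend yield = dividends per share / ending share price, not an official answer key):

```python
years = ["2016", "2015", "2014"]
net_income = [62854, 45852, 35456]
wavg_shares = [52500, 47500, 41000]
dividends_per_share = [0.40, 0.35, 0.25]
ending_price = [24.20, 18.75, 14.40]

for y, ni, sh, dps, px in zip(years, net_income, wavg_shares,
                              dividends_per_share, ending_price):
    eps = ni / sh           # earnings per share
    dy = dps / px * 100     # dividend yield, in percent
    print(y, round(eps, 2), round(dy, 2))
# 2016 1.2 1.65
# 2015 0.97 1.87
# 2014 0.86 1.74
```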
2018-08-19T15:50:00
{ "domain": "transtutors.com", "url": "https://www.transtutors.com/questions/ratio-analysis-graham-railways-inc-is-evaluating-its-operations-and-provides-the-fol-2567737.htm", "openwebmath_score": 0.2451724261045456, "openwebmath_perplexity": 9058.663666797169, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639677785088, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132469390861 }
https://math.stackexchange.com/questions/1113973/show-that-for-any-1-leq-p-infty-the-set-l1-cap-lp-is-a-dense-subset-of/1114013
# Show that for any $1\leq p<\infty$, the set $L^1\cap L^p$ is a dense subset of $L^p$

Show that for any $1\leq p<\infty$, the set $L^1\cap L^p$ is a dense subset of $L^p$.

Let $f\in L^p-L^1$. We need to find a sequence $\{\phi_n\}_n$ in $L^1\cap L^p$ converging to $f$. And I know the simple approximation theorem. I think the following lemma is useful.

Lemma: If a simple function in a measure space $(X,\mathfrak{B},\mu)$ belongs to $L^p(\mu)$, $1\leq p < \infty$, then it also belongs to $L^1(\mu)$.

Attempt: Let $g$ be a simple function in $L^p$. Then we have $g=\Sigma_{i=1}^{m}a_i\chi_{E_i}$ for some $E_1,...,E_m\in\mathfrak{B}$ and some $a_1,...,a_m$ in $\mathbb{R}$. So $|g|^p=\Sigma_{i=1}^{m}|a_i|^p\chi_{E_i}$. Since $g \in L^p$, we have $\infty >\int|g|^pd\mu=\Sigma_{i=1}^{m}|a_i|^p\int \chi_{E_i}d\mu=\Sigma_{i=1}^{m}|a_i|^p\mu(E_i)$. So $\mu(E_i)<\infty$ for all $i=1,...,m$. So $\infty>\Sigma_{i=1}^{m}|a_i|\mu(E_i)=\Sigma_{i=1}^{m}|a_i|\int \chi_{E_i}d\mu=\int|g|d\mu$. So $g \in L^1(\mu)$.

How is my attempt? How can we conclude the proof? Thanks!

It may be easier to work with truncations rather than simple functions. Given $f \in L^p$ define $f_n(x) = f(x)$ if $|f(x)| > \frac 1n$, and $0$ otherwise. Then $f_n(x) \to f(x)$ for all $x$. Since $|f_n| \le |f|$ you have that $f_n \in L^p$, and since $|f_n - f|^p \le 2^p |f|^p$ the Lebesgue dominated convergence theorem implies $$\lim_{n \to \infty} \int |f_n -f|^p \, d\mu = 0.$$ On the other hand, assuming $p > 1$, you have by Holder's inequality and Chebyshev's inequality $$\int |f_n| \, d\mu = \int_{\{|f| > \frac 1n\}} |f| \, d\mu \le \mu(\{|f| > \tfrac 1n\})^{1/p'} \|f\|_p < \infty$$ so that $f_n \in L^1$ too.

Your proof is correct! As for density, one usually argues that $C_c^{\infty}$ is dense AND contained in $L^p$ for all $p\in [1, \infty)$ and hence lies also in your intersection.

• How is $C_c^\infty$ defined on an arbitrary space $X$? – Umberto P. Jan 21 '15 at 19:54
2020-01-24T08:59:05
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1113973/show-that-for-any-1-leq-p-infty-the-set-l1-cap-lp-is-a-dense-subset-of/1114013", "openwebmath_score": 0.9880672693252563, "openwebmath_perplexity": 50.21320001519721, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785088, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132469390861 }
http://www.cram.com/flashcards/econ-102a-midterm-399168
### 50 Cards in this Set

Variance: s^2 = (1/(n-1)) * sum((x_i - xbar)^2)

Standard Deviation: the square root of the variance

Five-Number Summary: Minimum, Q1, Median (Q2), Q3, Maximum

Density function: describes a density curve in functional form (defined over all possible values the variable can take)

Normal Distribution: 1) has a "bell shape" 2) mean = median = mode 3) symmetric 4) inflection points at m+sd and m-sd

Rule for Normal Distribution: the 68-95-99.7 rule

Z-Score: z = (x - m)/sd

Standard Normal Distribution: the Normal distribution N(0,1) with mean 0 and standard deviation 1. If a variable x has any Normal distribution with mean M and standard deviation sd, the standardized variable z has the standard normal distribution

Covariance: s_XY = (1/(n-1)) * sum((x_i - xbar)(y_i - ybar))

Correlation: Cov(X,Y)/(s_X * s_Y)

Least-Squares Regression Line: equations on sheet

Lurking variable: a variable that is not among the explanatory or response variables yet may influence the interpretation of relationships among variables

Simpson's Paradox: an association or comparison that holds for all of several groups can reverse direction when the data are combined to form a single group

Simple Random Sample: consists of n individuals from the population chosen in such a way that every set of n individuals has an equal chance to be the sample actually selected

Sampling Distribution: the distribution of values taken by the statistic in all possible samples of the same size from the same population

Probability Rules: 1. The probability of any event A satisfies 0 <= P(A) <= 1. 2. If S is the sample space in a probability model, P(S) = 1. 3. The complement of any event A is the event that A does not occur, written A^C; P(A does not occur) = 1 - P(A). 4. Two events A & B are disjoint if they have no outcomes in common and therefore cannot occur simultaneously; for disjoint events, P(A or B) = P(A) + P(B)

Rules for Means: 1. If X is a random variable and a and b are fixed numbers, then M(a+bX) = a + b*M(X). 2. If X and Y are random variables, then M(X+Y) = M(X) + M(Y)

Variance for Discrete and Continuous Variables: V(X) = E(X^2) - M(X)^2

Rules for Variances: 1. V(a+bX) = b^2 * V(X). 2. If X & Y are independent, V(X+Y) = V(X) + V(Y) and V(X-Y) = V(X) + V(Y). 3. If X & Y have correlation p, V(X+Y) = V(X) + V(Y) + 2p*sd(X)*sd(Y) and V(X-Y) = V(X) + V(Y) - 2p*sd(X)*sd(Y)

Covariance of Two Random Variables: Cov(X,Y) = E(XY) - M(X)*M(Y)

Correlation Between X & Y: Cov(X,Y)/(sd(X)*sd(Y))

Joint Probability Function: f(x0, y0) = P{X = x0 and Y = y0}

Marginal Probability Function: f_X(x0) = P(X = x0) = sum over y_i of f(x0, y_i)

Joint Probability Function for continuous variables: To find P{x1
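A small Python sketch of the sample-statistics formulas on the cards above (my own illustration; the data lists are made up):

```python
from math import sqrt

def sample_var(xs):
    """(1/(n-1)) * sum((x_i - xbar)^2), as on the Variance card."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def sample_cov(xs, ys):
    """(1/(n-1)) * sum((x_i - xbar)(y_i - ybar)), as on the Covariance card."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def sample_corr(xs, ys):
    """Cov(X,Y) / (s_X * s_Y), as on the Correlation card."""
    return sample_cov(xs, ys) / (sqrt(sample_var(xs)) * sqrt(sample_var(ys)))

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]
print(sample_var(xs), sample_cov(xs, ys), sample_corr(xs, ys))
```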
2018-02-20T23:43:46
{ "domain": "cram.com", "url": "http://www.cram.com/flashcards/econ-102a-midterm-399168", "openwebmath_score": 0.8074411749839783, "openwebmath_perplexity": 3493.718788697782, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785088, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132469390861 }
http://clay6.com/qa/2031/true-or-false-if-each-of-the-three-matrices-of-the-same-order-are-symmetric
# True or False: If each of three matrices of the same order is symmetric, then their sum is a symmetric matrix.

## 1 Answer

TRUE

Given $A=A',\; B=B',\; C=C'$. From the property of the transpose of a matrix we have $(A+B+C)'=A'+B'+C'=A+B+C$.

Hence $A+B+C$ is a symmetric matrix.

answered Mar 12, 2013, edited Mar 17, 2013
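A quick numerical illustration of the statement (my own addition, not a proof; the matrices are random and 4×4 only for concreteness):

```python
import numpy as np

rng = np.random.default_rng(1)
# symmetrize three random matrices via (M + M')/2
A, B, C = [(M + M.T) / 2 for M in rng.standard_normal((3, 4, 4))]

S = A + B + C
assert np.allclose(S, S.T)   # the sum of symmetric matrices is symmetric
```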
2017-10-18T18:40:17
{ "domain": "clay6.com", "url": "http://clay6.com/qa/2031/true-or-false-if-each-of-the-three-matrices-of-the-same-order-are-symmetric", "openwebmath_score": 0.7775851488113403, "openwebmath_perplexity": 2309.511008346857, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213246939086 }
http://openstudy.com/updates/55b1482fe4b04559507a993e
## anonymous one year ago ***fan and medal***

1. anonymous

Ray and Kelsey have summer internships at an engineering firm. As part of their internship, they get to assist in the planning of a brand new roller coaster. For this assignment, you help Ray and Kelsey as they tackle the math behind some simple curves in the coaster's track.

2. anonymous

The first part of Ray and Kelsey's roller coaster is a curved pattern that can be represented by a polynomial function. Ray and Kelsey are working to graph a third-degree polynomial function that represents the first pattern in the coaster plan. Ray says the third-degree polynomial has 4 intercepts. Kelsey argues the function can have as many as 3 zeros only. Is there a way for the both of them to be correct? Explain your answer.

3. anonymous

help!!! please!!!!

4. anonymous

@ash2326 @Michele_Laino @pinkbubbles

5. Michele_Laino

a third-degree polynomial can have at maximum three zeroes

6. Michele_Laino

oops.. three real zeroes

7. anonymous

is that the answer?

8. Michele_Laino

for example, if we have the subsequent polynomial: $\Large p\left( x \right) = {x^3} - 6{x^2} + 11x - 6$

9. Michele_Laino

we can easily check that its factorization is: $\Large p\left( x \right) = \left( {x - 1} \right)\left( {x - 2} \right)\left( {x - 3} \right)$

10. Michele_Laino

which shows us that there are at maximum three real zeroes

11. Michele_Laino

I think that your answer has to contain some examples

12. anonymous

someone give me the answer, you wanna check?

13. Michele_Laino

ok!

14. anonymous

Both Ray and Kelsey can be right. Ray is right because you can have 4 intercepts: three of them as zeros and one as a y-intercept.

15. anonymous

is that right?

16. Michele_Laino

an intercept is not a zero

17. anonymous

A) a third-degree polynomial can have at most three x-intercepts and always has one y-intercept, so it can have 4 intercepts and three zeroes. B) they are both correct because, as nightowl said, it can cross the x-axis three times and the y-axis once; if you count crossings of any and all axes (x and y) then there are 4 crossings, while for a specific axis it depends on the axis. In shorter words, the girl is only looking at one axis and the boy at all of them.

18. Michele_Laino

furthermore, the subsequent polynomial: $\Large q\left( x \right) = {x^3} - x$ has three real zeroes, and its y-intercept coincides with one of them (it is the origin)

19. anonymous

LOOK, is it one of them?

20. anonymous

a, or b?

21. Michele_Laino

I think that A is the correct one

22. Michele_Laino

please wait

23. anonymous

ok

24. Michele_Laino

I think option B is right, since a third-degree polynomial has three x-intercepts at maximum and only one y-intercept, so 4 intercepts in total. The x-intercepts are called the zeroes of the third-degree polynomial

25. Michele_Laino

polynomial*

26. anonymous

so the answer is B, thank you so much!!!!

27. Michele_Laino

:)
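As a quick check of the example discussed in the thread (my own addition), NumPy confirms that $p(x) = x^3 - 6x^2 + 11x - 6$ has exactly the three real zeroes 1, 2, 3:

```python
import numpy as np

roots = np.roots([1, -6, 11, -6])   # coefficients of x^3 - 6x^2 + 11x - 6
print(sorted(roots.real))           # [1.0, 2.0, 3.0] (imaginary parts ~ 0)
```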
2017-01-23T07:04:07
{ "domain": "openstudy.com", "url": "http://openstudy.com/updates/55b1482fe4b04559507a993e", "openwebmath_score": 0.36867207288742065, "openwebmath_perplexity": 2256.7084929422094, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213246939086 }
https://math.stackexchange.com/questions/3006270/edge-colourings-of-an-icosahedron
# Edge colourings of an icosahedron

I'm referring to problem A6 of the 2017 Putnam competition -- the question is "How many ways exist to colour the labelled edges of an icosahedron such that every face has two edges of the same colour and one edge of another colour, where the colours are either red, white or blue?". My solution is as follows: consider the planar representation of the icosahedron:

Note that:

• There are 18 ways to colour the edges of a triangle such that it has two edges of the same colour and one edge of another colour (3 choices for the colour that appears twice, 2 choices for the colour that appears once, and 3 choices for the arrangement).
• Given the colouring on any one edge of a triangle, there are 6 ways to colour the remaining edges in a way that satisfies the condition (WLOG suppose the given edge is white -- then either one other edge is white and the remaining edge is red or blue, which has 4 possibilities, or both the other edges have the same colour, red or blue, which is possible in 2 ways).
• Given the colouring on two edges of a triangle, there are 2 ways to colour the remaining edge in a way that satisfies the condition. WLOG, the given edges are coloured either "R R" or "R B". If it's "R R", the 2 ways to choose the other edge are "W" and "B"; if it's "R B", the 2 ways to choose the other edge are "R" and "B".

So there are 18 ways to choose the colouring on the central triangle (the base case) of the planar representation, and the number of ways to colour each successive "containing triangle" is $$6^32^3$$ times the count for the smaller triangle it contains, so the number of ways to colour the entire icosahedron should be: $$18(6^32^3)^3=2^{19}3^{11}$$

Unfortunately, the official solution (p. 5) presents an answer of $$2^{20}3^{10}$$ -- I'm off by a factor of $$2/3$$! What's going on? What did I do wrong?

• Check the update to my answer! I just spotted a serious flaw in your setup. – Christian Blatter Nov 22 '18 at 7:55
• If I understand you correctly, your picture is supposed to be the icosahedron graph, where vertices & edges in your graph = vertices & edges in an icosahedron, right? If so, this is the wrong graph. Each vertex in an actual icosahedron has edge-degree 5. See en.wikipedia.org/wiki/Regular_icosahedron#Icosahedral_graph Your graph may actually represent a column of 3 octahedra glued together face to face...? – antkam Nov 22 '18 at 16:20

You have not checked whether the last triangle (the infinite outer triangle in the figure) is colored correctly. I don't know how to fix this. You cannot just say that with probability $${2\over3}$$ the last triangle is correctly colored.

Now for the biggest mistake: Your net is not the net of an icosahedron. In the icosahedron graph each vertex is of degree $$5$$, but the vertices in your graph are of degree $$4$$ or $$6$$.
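The first bullet's count of 18 valid colourings per triangle is easy to confirm by brute force; this snippet (my own addition) enumerates all $3^3$ edge colourings and keeps those that use exactly two colours:

```python
from itertools import product

# exactly two distinct colours among three edges = two same + one different
valid = [c for c in product("RWB", repeat=3) if len(set(c)) == 2]
print(len(valid))   # 18 = 27 total - 3 monochromatic - 6 all-distinct
```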
2019-09-17T10:46:52
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3006270/edge-colourings-of-an-icosahedron", "openwebmath_score": 0.6706711649894714, "openwebmath_perplexity": 306.57380226783783, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213246939086 }
http://mathoverflow.net/questions/19999/finding-all-roots-of-a-polynomial/20009
# Finding all roots of a polynomial

Is it possible, for an arbitrary polynomial in one variable with integer coefficients, to determine the roots of the polynomial in the Complex Field to arbitrary accuracy? When I was looking into this, I found some papers on homotopy continuation that seem to solve this problem (for the Real solutions at least), is that correct? Or are there restrictions on whether homotopy continuation will work? Does the solution region need to be bounded?

-

The answer is "yes" and modern computer algebra systems have already done this for you. I confess I don't know how---but you don't make it clear whether you want to know how or you just want to know the answer. If you have a particular polynomial in mind, fire up the free maths package pari, set the precision to 1000 with \p 1000, and then use the polroots command. – Kevin Buzzard Mar 31 '10 at 20:53

The fact that it's implemented doesn't mean it is solved! CASes have all sorts of routines which 'solve' undecidable problems... because all undecidable problems have (often large) sub-classes which are semi-decidable. It turns out that, for this problem, there is a complete algorithm which is guaranteed to terminate and find all roots. As far as I know, none of the CASes actually implement that (it's much too slow); instead they all implement algorithms which might fail (but with extremely low probability). – Jacques Carette Mar 31 '10 at 21:29

Dror, Newton-Raphson is not guaranteed to converge. Even if it does, it finds only ONE solution. The question asked for ALL solutions. – user1855 Apr 1 '10 at 2:17

Well, from a theoretical perspective, this follows from the decidability of the theory of the real numbers as an ordered field, as proved by Tarski. I agree of course that if you want an efficient algorithm, that's a separate question. – Pete L. Clark Apr 1 '10 at 7:16

@Dror: dividing out by the root you found is known as 'deflation' and is amazingly badly behaved numerically. After you've deflated out about 10 roots, what you're left with is usually a total mess (from an error analysis point of view) and in practice the 'roots' you get after deflation are useless. – Jacques Carette Apr 29 '10 at 1:11

This argument is problematic; see Andrej Bauer's comment below. Sure. I have no idea what an efficient algorithm looks like, but since you only asked whether it's possible I'll offer a terrible one.

Lemma: Let $f(z) = z^n + a_{n-1} z^{n-1} + ... + a_0$ be a complex polynomial and let $R = \text{max}(1, |a_{n-1}| + ... + |a_0|)$. Then all the roots of $f$ lie in the circle of radius $R$ centered at the origin.

Proof. If $|z| > R$, then $|z|^n > R |z|^{n-1} \ge |a_{n-1} z^{n-1}| + ... + |a_0|$, so by the triangle inequality no such $z$ is a root.

Now subdivide the disk of radius $R$ into, say, a mesh of squares of side length $\epsilon > 0$ and evaluate the polynomial at all the lattice points of the mesh. As the mesh size tends to zero you'll find points that approximate the zeroes to arbitrary accuracy. There are also lots of specialized algorithms for finding roots of polynomials at the Wikipedia article.

-

To speed up your "Now subdivide..." argument you'd compute some Lipschitz bounds for the polynomial's derivative on your subdivisions and apply Kantorovich's theorem (using Newton's method to find the roots). In practice this is very fast. – Ryan Budney Mar 31 '10 at 23:04

:) It really is terrible... – Dror Speiser Mar 31 '10 at 23:27

A somewhat similar idea, due to Mike Meylan, follows: bound the circle in a square. Then, recursively, subdivide the square into 4 squares. Now approximately compute the argument principle integral around each square, and zoom in on any square that had a value larger than 0. This reduces the complexity dependence on $R$ from quadratic to linear. – Dror Speiser Apr 1 '10 at 14:00

The proposed answer does not work. If the task is to list all the zeroes then the algorithm must decide which squares of side length $\epsilon$ contain zeroes and which do not. How is it supposed to do that? Just because an approximate value at a point in the mesh is close to zero does not mean there is an actual zero there. What you are proposing is to compute a sequence of nested compact sets (finite unions of squares) whose intersection is the set of zeroes. But the trouble is that some of the squares may "disappear" after a while, so it's hard to tell where the zeroes actually are. – Andrej Bauer Apr 29 '10 at 11:59

Thanks, Andrej. Do you know if Dror's improvement avoids this problem? – Qiaochu Yuan Apr 29 '10 at 15:31

Homotopy continuation method is good for finding all COMPLEX solutions to arbitrary accuracy, and it is implemented in the Numerical Algebraic Geometry package in Macaulay 2, for example. The method is more general. It can solve a system of polynomial equations in many variables. In fact, it is a more difficult problem to find all REAL solutions WITHOUT finding all complex solutions. From what I understand, the solution region does not need to be bounded for homotopy continuation to work. You can also "projectify" your problem if necessary, so that you don't have to worry about homotopy paths going off to infinity. Some methods assume that the solutions are all simple, but there are ways to work around that. One is the method of "deflation".

-

For univariate polynomials you should look at "An Efficient Algorithm for the Complex Roots Problem" by Andy Neff and John Reif http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=5E9156BAF80D8D6AEDCA2F42C11AB4B2?doi=10.1.1.33.3353&rep=rep1&type=pdf

-

The wikipedia article http://en.wikipedia.org/wiki/Root-finding_algorithm gives links to many different methods for finding roots of polynomials. (Start at the section entitled "Finding roots of polynomials".) Many of the methods are incomparable, in the sense that they work faster or slower than others depending on the specific polynomial.

-

One of the semi-recommended ones for finding roots in the complex plane is Laguerre's method, which for some reason is not included in the Wikipedia article on root-finding. The reason I know of this is a colloquium lecture long ago by Steven Smale on the complexity of Newton's method, during which William Kahan stood up and held forth on why Newton's method was worthless and Laguerre's was much better. I cannot tell whether you insist on finding all roots to high accuracy. One could perhaps divide out by $(x - r_k)^{n_k}$ each time a root $r_k$ with multiplicity $n_k$ is found, and search for roots of the new polynomial, using those results as seed values for finding accurate roots using the original polynomial.

-

Never bet against Kahan. He has been shown right an astonishing number of times. [Same with David Parnas for software engineering issues]. – Jacques Carette Apr 29 '10 at 1:14
– Dror Speiser Mar 31 '10 at 23:27 A somewhat similar idea, due to Mike Meylan, follows: bound the circle in a square. Then, recursively, subdivide the square into 4 squares. Now approximately compute the argument principle integral around each square, and zoom in on any square that had a value larger than 0. This reduces the complexity dependence on $R$ from quadratic to linear. – Dror Speiser Apr 1 '10 at 14:00 The proposed answer does not work. If the taks is to list all the zeroes then the algorithm must decide which squares of side length $\epsilon$ contain zeroes and which do not. How is it supposed to do that? Just because an approximate value at a point in the mesh is close to zero does not mean there is an actual zero there. What you are proposing is to compute a sequence of nested compact sets (finite unions of squares) whose intersection is the set of zeroes. But the trouble is that some of the squares may "disappear" after a while, so it's hard to tell where the zeroes actuall are. – Andrej Bauer Apr 29 '10 at 11:59 Thanks, Andrej. Do you know if Dror's improvement avoids this problem? – Qiaochu Yuan Apr 29 '10 at 15:31 Homotopy continuation method is good for finding all COMPLEX solutions to arbitrary accuracy, and it is implemented in the Numerical Algebraic Geometry package in Macaulay 2, for example. The method is more general. It can solve a system of polynomial equations in many variables. In fact, it is a more difficult problem to find all REAL solutions WITHOUT finding all complex solutions. From what I understand, the solution region does not need to be bounded for homotopy continuation to work. You can also "projectify" your problem if necessary, so that you don't have to worry about homotopy paths going off to infinity. Some methods assume that the solutions are all simple, but there're ways to work around it. One is the method of "deflation". - For univariate polynomials you should look at "An Efficient Algorithm for the Complex Roots Problem" by Andy Neff and John Reif http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=5E9156BAF80D8D6AEDCA2F42C11AB4B2?doi=10.1.1.33.3353&rep=rep1&type=pdf - The wikipedia article http://en.wikipedia.org/wiki/Root-finding_algorithm gives links to many different methods for finding roots of polynomials. (Start at the section entitled "Finding roots of polynomials".) Many of the methods are incomparable, in the sense that they work faster or slower than others depending on the specific polynomial. - One of the semi-recommended ones for finding roots in the complex plane is Laguerre's method, which for some reason is not included in the Wikipedia article on root-finding. The reason I know of this is a colloquium lecture long ago by Steven Smale on the complexity of Newton's method, during which William Kahan stood up and held forth on why Newton's method was worthless and Laguerre's was much better. I cannot tell whether you insist on finding all roots to high accuracy. One could perhaps divide out by $(x - r_k)^{n_k}$ each time a root $r_k$ with multiplicity $n_k$ is found, and search for roots for the new polynomial, using those results as seed values for finding accurate roots using the original polynomial. - Never bet against Kahan. He has been shown right an astonishing number of times. [Same with David Parnas for software engineering issues]. – Jacques Carette Apr 29 '10 at 1:14 This can be done. 
Check this article by Hubbard, Schleicher, and Sutherland, entitled "How to find all roots of complex polynomials by Newton's method". - Although it's not specific to polynomials with integer coefficients, have a lot at "Computing the Zeros of Analytic Functions". - At least for real roots it can be completely solved by bracketing zeroes with Sturm sequences. - You have already seen McNamee's excellent bibliography on polynomial root-finding methods? Personally I have a preference for the "simultaneous iteration" methods (of which Durand-Kerner and Ehrich-Aberth are two of the simplest and most well-known); all you need to start from is a set of points equispaced around a circle in the complex plane (as to the radius of this circle, there are a number of suggestions in the literature; alternatively, formulas in Marden's "Geometry of Polynomials" might be of use here). - A completely ineffective theoretical method goes as follows: Write $f(z)$ as $f_r(x,y)+if_i(x,y)$ where $f_r,f_i\in \mathbb R[x,y]$ are real polynomials in the real and complex part $x,y$ of $z=x+i y$. Compute a Groebner basis of the ideal $(f_r,f_i)$ with respect to an order which eliminates one of the variables in the first element of the basis and use real techniques (based on Sturm sequences) to compute, say, the real parts of all solutions. Use another element of the Groebner basis (or again real techniques) to compute the corresponding imaginary part and test for multiplicities (which can be avoided by computing first gcd$(f,f')$). Completely useless (and equivalent) variation: Study the intersection giving the zeroes of $f$ of the two real curves determined by $f_r$ and by $f_i$. -
2016-02-13T21:55:02
{ "domain": "mathoverflow.net", "url": "http://mathoverflow.net/questions/19999/finding-all-roots-of-a-polynomial/20009", "openwebmath_score": 0.8142266869544983, "openwebmath_perplexity": 386.1924630538555, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213246939086 }
https://web2.0calc.com/questions/help_66997
# Help!

A quadrilateral is called a "parallelogram" if both pairs of opposite sides are parallel. Show that if \(WXYZ\) is a parallelogram, then \(\angle W = \angle Y\) and \(\angle X = \angle Z\).

Jun 29, 2019

#1

It's SuerBoranJacobs. I'll refer to the diagram below:

We know that WZ || XY and WX || ZY because opposite sides are parallel (Def. of a parallelogram). Draw ZX and WY as shown (Ruler Postulate). Label Angles 1, 2, 3, 4, 5, 6, 7, 8 as shown (Ruler Postulate). Angle W = Angle 3 + Angle 4 (By Construction). Angle Z = Angle 1 + Angle 2 (By Construction). Angle Y = Angle 7 + Angle 8 (By Construction). Angle X = Angle 5 + Angle 6 (By Construction). Angle 1 = Angle 6 (Alternate Interior Angles Are Congruent). Angle 2 = Angle 5 (Alternate Interior Angles Are Congruent). Therefore, Angle Z = Angle X (Parts Make Up A Whole). Using the same reasoning, Angle Y = Angle W. Q.E.D.

Jun 29, 2019

#2

Here's another way... Let's extend WZ to point A, XY to point B, ZY to point C, WX to point D, and YX to point E. Like this:

m∠XWZ = m∠YZA because corresponding angles are congruent.
m∠YZA = m∠CYB because corresponding angles are congruent.
m∠CYB = m∠XYZ because vertical angles are congruent.
Therefore m∠XWZ = m∠XYZ by the transitive property of congruence.

Likewise...

m∠WZY = m∠XYC because corresponding angles are congruent.
m∠XYC = m∠EXD because corresponding angles are congruent.
m∠EXD = m∠WXY because vertical angles are congruent.
Therefore m∠WZY = m∠WXY by the transitive property of congruence.

Jun 29, 2019, edited by hectictar Jun 29, 2019
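A numeric sanity check of the theorem for one concrete parallelogram (my own sketch, not part of either proof; the coordinates are arbitrary):

```python
import numpy as np

def angle(at, p, q):
    """Angle at vertex 'at' formed by the rays toward p and q, in degrees."""
    u, v = p - at, q - at
    cosang = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(cosang))

# W, X, Z chosen freely; Y = X + Z - W makes WXYZ a parallelogram
W, X, Z = np.array([0., 0.]), np.array([4., 1.]), np.array([1., 3.])
Y = X + Z - W

assert np.isclose(angle(W, X, Z), angle(Y, X, Z))   # angle W == angle Y
assert np.isclose(angle(X, W, Y), angle(Z, W, Y))   # angle X == angle Z
```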
2020-07-10T22:23:47
{ "domain": "0calc.com", "url": "https://web2.0calc.com/questions/help_66997", "openwebmath_score": 0.9077751040458679, "openwebmath_perplexity": 10691.358039221293, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213246939086 }
http://yetanothermathprogrammingconsultant.blogspot.com/2018/11/quadratic-programming-with-binary.html
## Saturday, November 10, 2018

### Quadratic Programming with Binary Variables

Quadratic Programming models where the quadratic terms involve only binary variables are interesting from a modeling point of view: we can apply different reformulations. Let's have a look at the basic model:

\begin{align}\min\>& \color{DarkRed}x^{T} \color{DarkBlue}Q \color{DarkRed}x + \color{DarkBlue} c^{T}\color{DarkRed}x\\ & \color{DarkRed}x_i \in \{0,1\}\end{align}

Only if the matrix $$Q$$ is positive semidefinite do we have a convex problem. So, in general, the above problem is non-convex. To keep things simple, I have no constraints and no additional continuous variables (adding those does not really change the story).

#### Test data

To play a bit with this model, I generated random data:

• Q is about 25% dense (i.e. about 75% of the entries $$q_{i,j}$$ are zero). The nonzero entries are drawn from a uniform distribution between -100 and 100.
• The linear coefficients are uniformly distributed $$c_i \sim U(-100,100)$$.
• The size of the model is: $$n=75$$ (i.e. 75 binary variables). This is relatively small, so the hope is we can solve this problem quickly. As we shall see, the results will be very mixed.

#### Local MINLP solvers

Many local MINLP solvers tolerate non-convex problems, but they will not produce a global optimum. So we see:

| Solver | Objective | Time | Notes |
| --- | --- | --- | --- |
| SBB | -7558.6235 | 0.5 | Local optimum |
| Knitro | -7714.5721 | 0.4 | Id. |
| Bonmin | -7626.7975 | 1.3 | Id. |

All solvers used default settings and timings are in seconds. It is not surprising that these local solvers find different local optima. For all solvers, the relaxed solution was almost integer and just a few nodes were needed to produce an integer solution. This looks promising. Unfortunately, we need to contain our optimism.

#### Global MINLP Solvers

Global MINLP solvers are in theory well-equipped to solve this model. Unfortunately, they are usually quite slow. For this example, we see a very wide performance range:

| Solver | Objective | Time | Notes |
| --- | --- | --- | --- |
| Baron | -7760.1771 | 82 | |
| Couenne | -7646.5987 | >3600 | Time limit, gap 25% |
| Antigone | -7760.1771 | 252 | |

Couenne is struggling with this model. Baron and Antigone are doing quite well on this model. We can further observe that the local solvers did not find the global optimal solution.

#### MIQP Solvers

If we just use an MIQP solver, we may get different results, depending on the solver. If the solver expects a convex model, it will refuse to solve the model. Other solvers may use some automatic reformulation. Let's try a few:

| Solver | Objective | Time | Notes |
| --- | --- | --- | --- |
| Mosek | | | Q not positive definite |
| Cplex | -7760.1771 | 27 | Automatically reformulated to a MIP |
| Gurobi | -7760.1760 | >9999 | Time limit, gap 37% (Gurobi 8.0) |

Most solvers have options to influence what reformulations are applied. Here we ran with default settings. MIQP solvers tend to have many options, including those that influence automatic reformulations. I just used defaults, assuming "the solver knows best what to do". The global MINLP solvers Baron and Antigone did not do badly at all. It is noted that Gurobi 8.1 has better MIQP performance [2] (hopefully it does much better than what we see here). It is noted that we can force Gurobi to linearize the MIQP model using the solver option preqlinearize 1, and in that case it solves fast.

#### Perturb Diagonal

For borderline non-convex models, it is not unusual to see messages from a quadratic solver that the diagonal of $$Q$$ has been perturbed to make the problem convex. Here we do the same thing in the extreme [1].
Background: a matrix $$Q$$ is positive definite (positive semi-definite) if all eigenvalues $$\lambda_i \gt 0$$ ($$\lambda_i\ge 0$$). If there are negative eigenvalues, we can conclude $$\min x^TQx$$ is a non-convex problem. From this we see that the sign of the smallest eigenvalue $$\lambda_{min}$$ plays an important role. To calculate the smallest eigenvalue we first have to make $$Q$$ symmetric (otherwise we would get complex eigenvalues). This can easily be done by replacing $$Q$$ by $$0.5(Q^T+Q)$$. This operation will not change the values of the quadratic form $$x^TQx$$.

If after calculating the smallest eigenvalue $$\lambda_{min}$$, we observe $$\lambda_{min} \lt 0$$, we can form $\widetilde{Q} = Q - \lambda_{min} I$ Note that we actually add a positive number to the diagonal as $$\lambda_{min}\lt 0$$. To compensate we need to add to the objective a linear term of the form $\sum_i \lambda_{min} x_i^2 = \sum_i \lambda_{min} x_i$ (for binary variables we have $$x_i^2=x_i$$). With this trick, we made the problem convex.

For our data set we have $$\lambda_{min} = -353.710$$. To make sure we are becoming convex, I added a very generous tolerance: $$\lambda_{min}-1$$. So I used: $$\widetilde{Q} = Q - (\lambda_{min}-1) I$$.

Convexified Model

\begin{align}\min\>& \color{DarkRed} x^T \left( \color{DarkBlue} Q - (\lambda_{min}-1) I \right) \color{DarkRed} x + \left(\color{DarkBlue} c + (\lambda_{min}-1) \right)^T \color{DarkRed} x \\ & \color{DarkRed}x_i \in \{0,1\}\end{align}

With this reformulation we obtained a convex MIQP. This means for instance that a solver like Mosek is back in play, and that local solvers will produce global optimal solutions. Let's try:

| Solver | Objective | Time | Notes |
| --- | --- | --- | --- |
| Mosek | -7760.1771 | 725 | |
| Knitro | -7760.1771 | 2724 | Node limit, gap: 3% |
| Bonmin | -7760.1771 | >3600 | Time limit, gap: 6% |

These results are a little bit slower than I expected, especially when comparing to the performance of the global solvers Baron and Antigone. These results are also much slower than the first experiment with local solvers where we found integer feasible local solutions very fast.

Note. We could have started by removing all diagonal elements from $$Q$$ and moving them into $$c$$. This is again based on the fact that $$x_i^2 = x_i$$. I did not do this step in this experiment.

#### Linearization

We already saw that some solvers (such as Cplex) apply a linearization automatically. Of course we can do this ourselves. The first thing we can do to help things along is to make $$Q$$ a triangular matrix. We can do this by: $\tilde{q}_{i,j} = \begin{cases} q_{i,j}+q_{j,i} & \text{if $i \lt j$} \\ q_{i,j} & \text{if $i=j$}\\ 0 & \text{if $i \gt j$}\end{cases}$ The next thing to do is to introduce variables $$y_{i,j} = x_i x_j$$. This binary multiplication can be linearized easily: \begin{align} & y_{i,j} \le x_i \\ & y_{i,j} \le x_j \\ & y_{i,j} \ge x_i + x_j -1 \\ & 0 \le y_{i,j} \le 1 \end{align} In the actual model, we can skip a few of these inequalities by observing in which directions the objective pushes variables $$y_{i,j}$$ (see [1]); a sketch of this construction follows below.
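Here is the promised sketch of that construction in NumPy (my own illustration: it only builds the triangular matrix and lists the sign-dependent constraints as strings, with no solver attached; the size, density, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10
# ~25% dense Q with entries uniform on [-100, 100], as in the test data
Q = np.where(rng.random((n, n)) < 0.25, rng.uniform(-100, 100, (n, n)), 0.0)

# fold Q into the strict upper triangle; keep the original diagonal
Qt = np.triu(Q + Q.T, k=1) + np.diag(np.diag(Q))

constraints = []
for i in range(n):
    for j in range(i + 1, n):
        if Qt[i, j] < 0:                   # objective pushes y[i,j] up
            constraints.append(f"y[{i},{j}] <= x[{i}]")
            constraints.append(f"y[{i},{j}] <= x[{j}]")
        elif Qt[i, j] > 0:                 # objective pushes y[i,j] down
            constraints.append(f"y[{i},{j}] >= x[{i}] + x[{j}] - 1")

print(len(constraints), "linear constraints for", n, "binaries")
```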
Linearized Model

\begin{align} \min\>& \sum_{i,j|i\lt j} \color{DarkBlue}{\tilde{q}}_{i,j} \color{DarkRed} y_{i,j} + \sum_i \left( \color{DarkBlue} {\tilde{q}}_{i,i} + \color{DarkBlue} c_i \right) \color{DarkRed} x_i \\ & \color{DarkRed}y_{i,j} \le \color{DarkRed}x_i && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \lt 0 \\ & \color{DarkRed}y_{i,j} \le \color{DarkRed}x_j && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \lt 0 \\ & \color{DarkRed}y_{i,j} \ge \color{DarkRed}x_i +\color{DarkRed}x_j -1 && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \gt 0 \\ & 0 \le \color{DarkRed}y_{i,j} \le 1 && \forall i\lt j, \color{DarkBlue} {\tilde{q}}_{i,j} \ne 0 \\ & \color{DarkRed}x_i \in \{0,1\} \\ \end{align}

This model does not care whether the original problem is convex or not. Let's see how this works:

| Solver | Objective | Time | Notes |
| --- | --- | --- | --- |
| Cplex | -7760.1771 | 41 | |
| CBC | -7760.1771 | 6488 | |

It is known this MIP is not so easy to solve. A commercial MIP solver may be required to get good solution times. Here we see that Cplex (commercial) is doing much better than CBC (open source).

#### Conclusion

The problem under consideration, an unconstrained MIQP with just $$n=75$$ binary variables, is not that easy to solve. The overall winning strategy is to use a commercial MIP solver against a manually or automatically reformulated MIP model. Solving the MIQP directly is just very difficult for many solvers. The global solver Baron does a surprisingly good job. It is noted that if the data or the problem size changes, these performance figures may shift (a lot).

#### Update

An earlier version of this post had a much slower performance for Cplex MIQP. When rerunning this, I could not reproduce this, so this must have been a note-taking error on my side (I suspect I was comparing with a result with $$n=100$$). Now, Cplex MIQP and Cplex MIP on the manually reformulated model perform comparably. My faith in Cplex automatic reformulation is fully restored (and my faith in my note-taking skills further reduced). Apologies for this.

#### References

1. Billionnet, A. and Elloumi, S., Using a mixed integer quadratic programming solver for the unconstrained quadratic 0-1 problem. Math. Program. 109 (2007) pp. 55–68
2. http://yetanothermathprogrammingconsultant.blogspot.com/2018/10/gurobi-81.html
2019-02-16T01:08:27
{ "domain": "blogspot.com", "url": "http://yetanothermathprogrammingconsultant.blogspot.com/2018/11/quadratic-programming-with-binary.html", "openwebmath_score": 0.9905318021774292, "openwebmath_perplexity": 1173.1828636734217, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.6723316926137811, "lm_q1q2_score": 0.6532132469390859 }
https://math.stackexchange.com/questions/1329615/parabolicity-of-high-order-pdes
# Parabolicity of high order PDEs

I know that the traditional classification of PDEs into parabolic, elliptic, and hyperbolic is applicable for second order equations. However, I often see remarks about parabolicity of higher order PDEs in various articles. In particular, the equations of the thin film family are said to have "degenerating parabolicity". For example, this paper introduces the following degenerate parabolic nonlinear fourth order equation (in $1\mathrm D$): $$u_t + \nabla\cdot \left(\left\lvert u\right\rvert^p\,\nabla \Delta u\right) = 0.$$ How do we formally define the parabolic, elliptic, and hyperbolic classification of high order nonlinear PDEs?
2019-06-24T13:22:02
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1329615/parabolicity-of-high-order-pdes", "openwebmath_score": 0.9164597392082214, "openwebmath_perplexity": 534.4868602114507, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639677785087, "lm_q2_score": 0.6723316926137811, "lm_q1q2_score": 0.6532132469390859 }
https://calculus7.org/2021/01/07/riesz-projection-on-polynomials/
# Riesz projection on polynomials Consider a trigonometric polynomial of degree ${n}$ with complex coefficients, represented as a Laurent polynomial ${L(z) = \sum\limits_{k=-n}^n c_k z^k}$ where ${|z|=1}$. The Riesz projection of ${L}$ is just the regular part of ${L}$, the one without negative powers: ${R(z) = \sum\limits_{k=0}^n c_k z^k}$. Let’s compare the supremum norm of ${L}$, ${ \|L\|=\max\limits_{|z|=1} |L(z)|}$, with the norm of ${R}$. The ratio ${\|R\|/\|L\|}$ may exceed ${1}$. By how much? The extreme example for ${n=1}$ appears to be ${L(z) = 2z + 4 - z^{-1}}$, pictured below together with ${R(z)=2z+4}$. The polynomial ${L}$ is in blue, ${R}$ is in red, and the point ${0}$ is marked for reference. Since ${R(z)=2z+4}$ has positive coefficients, its norm is just ${R(1)=6}$. To compute the norm of ${L}$, let’s rewrite ${|L(z)|^2 = L(z)L(z^{-1})}$ as a polynomial of ${x=\mathrm{Re}\,(z)}$. Namely, ${|L(z)|^2 = -2z^2 + 4z + 21 + 4z^{-1} - 2z^{-2}}$ which simplifies to ${27 - 2(2x-1)^2}$ in terms of ${x}$. Hence ${\|L\| = \sqrt{27}}$ and ${\|R\|/\|L\| = 6/\sqrt{27} = 2/\sqrt{3}\approx 1.1547}$. The best example for ${n=2}$ appears to be vaguely binomial: ${L(z) = 2z^2 + 4z + 6 - 4z^{-1} + z^{-2}}$. Note that the range of ${R}$ is a cardioid. Once again, ${R(z) = 2z^2 + 4z + 6}$ has positive coefficients, hence ${\|R\| = R(1) = 12}$. And once again, ${|L(z)|^2}$ is a polynomial of ${x=\mathrm{Re}\,(z)}$, specifically ${|L(z)|^2 = 81 - 8(1-x^2)(2x-1)^2}$. Hence ${\|L\| = 9}$ and ${\|R\|/\|L\| = 12/9 = 4/3\approx 1.3333}$. I do not have a symbolic candidate for the extremal polynomial of degree ${n= 3}$. Numerically, it should look like this: Is the maximum of ${\|R\|/\|L\|}$ attained by polynomials with real, rational coefficients (which can be made integer)? Do they have some hypergeometric structure? Compare with the Extremal Taylor polynomials which is another family of polynomials which maximize the supremum norm after elimination of some coefficients. ## Riesz projection as a contraction To have some proof content here, I add a 2010 theorem by Marzo and Seip: ${\|R\|_4 \le \|L\|}$ where ${\|R\|_p^p = \int_0^1 |R(e^{2\pi i t})|^p\,dt}$. The theorem is not just about polynomials: it says the Riesz projection is a contraction (has norm ${1}$) as an operator ${L^\infty\to L^4}$. Proof. Let ${S=L-R}$, the singular part of ${L}$. The polynomial ${R-S}$ differs from ${L}$ only by the sign of the singular part, hence ${\|R-S\|_2 = \|L\|_2}$ by Parseval’s theorem. Since ${S^2}$ consists of negative powers of ${z}$, while ${R^2}$ does not contain any negative powers, these polynomials are orthogonal on the unit circle. By the Pythagorean theorem, ${\|R^2-S^2\|_2 \ge \|R^2\|_2 = \|R\|_4^2}$. On the other hand, ${R^2-S^2 = (R+S)(R-S)=L(R-S)}$. Therefore, ${\|R^2-S^2\|_2 \le \|L\| \|R-S\|_2 = \|L\| \|L\|_2 \le \|L\|^2}$, completing the proof. This is so neat. And the exponent ${4}$ is best possible: the Riesz projection is not a contraction from ${L^\infty}$ to ${L^p}$ when ${p>4}$ (the Marzo-Seip paper has a counterexample).
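As a quick numeric confirmation of the $n=1$ example (my own sketch; the sampling density of the unit circle is arbitrary):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 200001)
z = np.exp(1j * theta)

L = 2 * z + 4 - 1 / z       # extreme example for n = 1
R = 2 * z + 4               # its Riesz projection

ratio = np.abs(R).max() / np.abs(L).max()
print(ratio, 2 / np.sqrt(3))   # both ~ 1.1547
```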
2021-01-22T06:09:35
{ "domain": "calculus7.org", "url": "https://calculus7.org/2021/01/07/riesz-projection-on-polynomials/", "openwebmath_score": 0.9668256640434265, "openwebmath_perplexity": 196.5329899062604, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551474, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213246385514 }
https://itl.nist.gov/div898/handbook/apr/section4/apr413.htm
8. Assessing Product Reliability
8.4. Reliability Data Analysis
8.4.1. How do you estimate life distribution parameters from censored data?

## A Weibull maximum likelihood estimation example

Reliability analysis using Weibull data: We will plot Weibull censored data and estimate parameters using data from a previous example (8.2.2.1). The recorded failure times were 54, 187, 216, 240, 244, 335, 361, 373, 375, and 386 hours, and 10 units that did not fail were removed from the test at 500 hours. The data are summarized in the following table.

| Time | Censored | Frequency |
| --- | --- | --- |
| 54 | 0 | 1 |
| 187 | 0 | 1 |
| 216 | 0 | 1 |
| 240 | 0 | 1 |
| 244 | 0 | 1 |
| 335 | 0 | 1 |
| 361 | 0 | 1 |
| 373 | 0 | 1 |
| 375 | 0 | 1 |
| 386 | 0 | 1 |
| 500 | 1 | 10 |

The column labeled "Time" contains failure and censoring times, the "Censored" column contains a variable to indicate whether the time in column one is a failure time or a censoring time, and the "Frequency" column shows how many units failed or were censored at that time.

First, we generate a survival curve using the Kaplan-Meier method and a Weibull probability plot. Note: Some software packages might use the name "Product Limit Method" or "Product Limit Survival Estimates" instead of the equivalent name "Kaplan-Meier".

Next, we perform a regression analysis for a survival model assuming that failure times have a Weibull distribution. The Weibull characteristic life parameter ($$\eta$$) estimate is 606.5280 and the shape parameter ($$\beta$$) estimate is 1.7208. The log-likelihood and Akaike's Information Criterion (AIC) from the model fit are -75.135 and 154.27. For comparison, we computed the AIC for the lognormal distribution and found that it was only slightly larger than the Weibull AIC.

| Lognormal AIC | Weibull AIC |
| --- | --- |
| 154.39 | 154.27 |

When comparing values of AIC, smaller is better. The probability density of the fitted Weibull distribution is shown below. Based on the estimates of $$\eta$$ and $$\beta$$, the lifetime expected value and standard deviation are the following.

$$\begin{eqnarray} \hat{\eta} &=& 606.5280 \\ \\ \hat{\beta} &=& 1.7208 \\ \\ \hat{\mu} &=& \hat{\eta} \cdot \Gamma \left( 1 + 1/\hat{\beta} \right) = 540.737 \,\, \mbox{hours}\\ \\ \hat{\sigma} &=& \hat{\eta} \, \sqrt{\Gamma \left( 1+2/\hat{\beta} \right) - \left( \Gamma \left(1+1/\hat{\beta} \right) \right)^{2}} = 323.806 \,\, \mbox{hours} \end{eqnarray}$$

The Greek letter, $$\Gamma$$, represents the gamma function.

Discussion: Maximum likelihood estimation (MLE) is an accurate and easy way to estimate life distribution parameters, provided that a good software analysis package is available. The package should also calculate confidence bounds and log-likelihood values. The analyses in this section can be implemented using R code.
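The page points to R for the implementation; as a language-agnostic illustration (my own, with arbitrary optimizer settings), here is a minimal Python sketch that writes the censored Weibull log-likelihood directly, with failures contributing log f(t) and right-censored units contributing log S(t). It should land near the reported estimates.

```python
import numpy as np
from scipy.optimize import minimize

fails = np.array([54, 187, 216, 240, 244, 335, 361, 373, 375, 386], float)
cens = np.full(10, 500.0)        # 10 units removed (censored) at 500 hours

def negloglik(params):
    log_beta, log_eta = params   # optimize on log scale to keep both positive
    beta, eta = np.exp(log_beta), np.exp(log_eta)
    # failures: log f(t) = log(beta/eta) + (beta-1) log(t/eta) - (t/eta)^beta
    ll = np.sum(np.log(beta / eta) + (beta - 1) * np.log(fails / eta)
                - (fails / eta) ** beta)
    # censored: log S(t) = -(t/eta)^beta
    ll += np.sum(-(cens / eta) ** beta)
    return -ll

res = minimize(negloglik, x0=[0.0, np.log(400.0)], method="Nelder-Mead")
beta_hat, eta_hat = np.exp(res.x)
print(beta_hat, eta_hat)         # ~ 1.72 and ~ 606.5
```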
2021-10-16T18:05:40
{ "domain": "nist.gov", "url": "https://itl.nist.gov/div898/handbook/apr/section4/apr413.htm", "openwebmath_score": 0.7947454452514648, "openwebmath_perplexity": 818.4510926265164, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639669551474, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213246385514 }
http://leeee.top/2020/MSBD5004-Mathematical-Methods-for-Data-Analysis-Homework-2/
# All our lives, we are merely searching for ourselves

MSBD5004 Mathematical Methods for Data Analysis Homework 2

# Q1

Let $(V,\|\cdot\|)$ be a normed vector space.

(a) Prove that, for all $x, y \in V$, $\left|\,\|x\|-\|y\|\,\right| \le \|x-y\|$.

(b) Let $\left\{x_{k}\right\}_{k \in \mathbb{N}}$ be a convergent sequence in $V$ with limit $x \in V.$ Prove that $\lim_{k\to\infty}\|x_k\| = \|x\|$. (Hint: Use part (a).)

(c) Let $\left\{x^{(k)}\right\}_{k \in \mathbb{N}}$ be a sequence in $V$ and $x, y \in V.$ Prove that, if $x^{(k)}\to x$ and $x^{(k)}\to y$, then $x=y.$ (In other words, the limit of the same sequence in a normed vector space is unique.)

## (a)

By the definition of a norm (triangle inequality), $\|\boldsymbol{a}+\boldsymbol{b}\| \leq \|\boldsymbol{a}\|+\|\boldsymbol{b}\|$ for all $\boldsymbol{a},\boldsymbol{b}\in V$.

Taking $\boldsymbol{a}=\boldsymbol{x}-\boldsymbol{y}$ and $\boldsymbol{b}=\boldsymbol{y}$ gives $\|\boldsymbol{x}\| \leq \|\boldsymbol{x}-\boldsymbol{y}\| + \|\boldsymbol{y}\|$.

Taking $\boldsymbol{a}=\boldsymbol{y}-\boldsymbol{x}$ and $\boldsymbol{b}=\boldsymbol{x}$ gives $\|\boldsymbol{y}\| \leq \|\boldsymbol{y}-\boldsymbol{x}\| + \|\boldsymbol{x}\|$.

From the above we get $\|\boldsymbol{x}\|-\|\boldsymbol{y}\| \leq \|\boldsymbol{x}-\boldsymbol{y}\|$ and $\|\boldsymbol{y}\|-\|\boldsymbol{x}\| \leq \|\boldsymbol{x}-\boldsymbol{y}\|$.

So $\left|\,\|\boldsymbol{x}\|-\|\boldsymbol{y}\|\,\right| \leq \|\boldsymbol{x}-\boldsymbol{y}\|$. ◾

## (b)

By the definition of convergence, $\lim_{k\to\infty}\|\boldsymbol{x}_k-\boldsymbol{x}\| = 0$.

Using the conclusion of (a), $\left|\,\|\boldsymbol{x}_k\|-\|\boldsymbol{x}\|\,\right| \le \|\boldsymbol{x}_k-\boldsymbol{x}\| \to 0$.

Then we get $\lim _{k \rightarrow \infty}\left\|\boldsymbol{x}_{k}\right\|=\|\boldsymbol{x}\|$. ◾

## (c)

Proof: Assume that as $k \rightarrow \infty$, $x^{(k)} \rightarrow x$ and also $x^{(k)} \rightarrow y.$

• Let $\epsilon>0$ be given and choose $N_{1} \in \mathbb{N}$ such that $\left\|x^{(k)}-x\right\|<\frac{\epsilon}{2}$ for all $k\ge N_1$. Also choose $N_{2} \in \mathbb{N}$ such that $\left\|x^{(k)}-y\right\|<\frac{\epsilon}{2}$ for all $k\ge N_2$.
• Now choose $N=\max \left(N_{1}, N_{2}\right)$. Note that by choosing the maximum of $N_{1}$ and $N_{2}$, both $\left\|x^{(k)}-x\right\|<\frac{\epsilon}{2}$ and $\left\|x^{(k)}-y\right\|<\frac{\epsilon}{2}$ hold for all $k\ge N$. Now consider the following inequality:
• By the triangle inequality, we get that $\|x-y\| \le \|x-x^{(k)}\| + \|x^{(k)}-y\| < \frac{\epsilon}{2}+\frac{\epsilon}{2} = \epsilon$. Therefore $\|x-y\|<\epsilon.$ Since $\|x-y\|$ is a fixed number and $\|x-y\|<\epsilon$ for every $\epsilon>0$, we get $\|x-y\|=0$, so that $x=y$. ◾

# Q2

Let $V$ be a vector space, and $\langle\cdot, \cdot\rangle$ be an inner product on $V$. Use the definition of inner product to prove the following.

(a) Prove that $\langle \boldsymbol{0}, x\rangle=\langle x, \boldsymbol{0}\rangle= 0$ for any $x \in V.$ Here $\boldsymbol{0}$ is the zero vector in $V$.

(b) Prove that the second condition ① $\left\langle\alpha x_{1}+\beta x_{2}, y\right\rangle=\alpha\left\langle x_{1}, y\right\rangle+\beta\left\langle x_{2}, y\right\rangle \quad \forall x_{1}, x_{2}, y \in V, \alpha, \beta \in \mathbb{R}$ is equivalent to ② $\left\langle x_{1}+x_{2}, y\right\rangle=\left\langle x_{1}, y\right\rangle+\left\langle x_{2}, y\right\rangle \text { and }\langle\alpha x, y\rangle=\alpha\langle x, y\rangle, \quad \forall x_{1}, x_{2}, x, y \in V, \alpha \in \mathbb{R}$

## (a)

The definition of an inner product has three conditions:

(1) $\langle x, x\rangle \ge 0$, with equality if and only if $x=\boldsymbol{0}$;
(2) $\left\langle\alpha x_{1}+\beta x_{2}, y\right\rangle=\alpha\left\langle x_{1}, y\right\rangle+\beta\left\langle x_{2}, y\right\rangle$;
(3) $\langle x, y\rangle=\langle y, x\rangle$.

Using (3) with the second argument $\boldsymbol{0}$, we get $\langle \boldsymbol{0}, x\rangle=\langle x, \boldsymbol{0}\rangle$.

Using (2) with $y=x$, $x_1=x_2=\boldsymbol{0}$, $\alpha=\beta=1$, we get $\langle \boldsymbol{0}, x\rangle=2\langle \boldsymbol{0}, x\rangle$, so $\langle \boldsymbol{0}, x\rangle=0$.

From the above, we get $\langle \boldsymbol{0}, x\rangle=\langle x, \boldsymbol{0}\rangle= 0$.
◾

## (b)

① $\left\langle\alpha x_{1}+\beta x_{2}, y\right\rangle=\alpha\left\langle x_{1}, y\right\rangle+\beta\left\langle x_{2}, y\right\rangle \quad \forall x_{1}, x_{2}, y \in V, \alpha, \beta \in \mathbb{R}$

② $\left\langle x_{1}+x_{2}, y\right\rangle=\left\langle x_{1}, y\right\rangle+\left\langle x_{2}, y\right\rangle \text { and }\langle\alpha x, y\rangle=\alpha\langle x, y\rangle, \quad \forall x_{1}, x_{2}, x, y \in V, \alpha \in \mathbb{R}.$

Proof:

• $①\rightarrow②$: Setting $\alpha=\beta=1$ in ① gives the first equation of ②. Setting $x_1=x$ and $\beta=0$ gives the second equation of ②.
• $②\rightarrow①$: Using the two equations of ② in turn, $\left\langle\alpha x_{1}+\beta x_{2}, y\right\rangle=\left\langle\alpha x_{1}, y\right\rangle+\left\langle\beta x_{2}, y\right\rangle=\alpha\left\langle x_{1}, y\right\rangle+\beta\left\langle x_{2}, y\right\rangle \quad \forall x_{1}, x_{2}, y \in V, \alpha, \beta \in \mathbb{R}.$ ◾

# Q3

$\mathbb{R}^{m \times n}$ is a vector space over $\mathbb{R}$. Show that $\langle\boldsymbol{A}, \boldsymbol{B}\rangle=\operatorname{trace}\left(\boldsymbol{A}^{T} \boldsymbol{B}\right) \text { for } \boldsymbol{A}, \boldsymbol{B} \in \mathbb{R}^{m \times n}$ is an inner product on $\mathbb{R}^{m \times n}.$ Here trace(.) is the trace of a matrix, i.e., the sum of all diagonal entries.

Proof: For every $A=(A_{ij}) \in \mathbb{R}^{m\times n}$ we have $\langle A,A\rangle=\text{tr}(A^TA)=\sum_{i=1}^n(A^TA)_{ii}=\sum_{i=1}^n\sum_{j=1}^mA^T_{ij}A_{ji}=\sum_{i=1}^m\sum_{j=1}^nA_{ij}^2 \ge 0,$ and $\langle A,A\rangle=\sum_{i=1}^m\sum_{j=1}^nA_{ij}^2 = 0\iff (A_{ij}=0 \quad \forall i,j) \iff A=0$

Since $\text{tr}(X^T)=\text{tr}(X), \quad \text{tr}(X+Y)=\text{tr}(X)+\text{tr}(Y), \quad \text{tr}(\lambda X)=\lambda\text{tr}(X)$ for every $X,Y \in \mathbb{R}^{n\times n}$, and $\lambda \in \mathbb{R}$, therefore, for every $A,B, C \in \mathbb{R}^{m\times n}$, and $\lambda \in \mathbb{R}$ we have

$\langle \lambda A + B, C\rangle = \text{tr}\left((\lambda A + B)^T C\right) = \lambda\,\text{tr}(A^T C) + \text{tr}(B^T C) = \lambda\langle A, C\rangle + \langle B, C\rangle$

and

$\langle A, B\rangle = \text{tr}(A^T B) = \text{tr}\left((A^T B)^T\right) = \text{tr}(B^T A) = \langle B, A\rangle.$

Hence $\langle\cdot,\cdot\rangle$ is an inner product on $\mathbb{R}^{m\times n}$. ◾

# Q4

Consider the polynomial kernel $K(\boldsymbol{x}, \boldsymbol{y})=\left(\boldsymbol{x}^{T} \boldsymbol{y}+1\right)^{2}$ for $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^{2}.$ Find an explicit feature map $\phi: \mathbb{R}^{2} \rightarrow \mathbb{R}^{6}$ satisfying $\langle\phi(\boldsymbol{x}), \phi(\boldsymbol{y})\rangle= K(\boldsymbol{x}, \boldsymbol{y}),$ where the inner product is the standard inner product in $\mathbb{R}^{6}$.

Solution: Expanding, $K(\boldsymbol{x},\boldsymbol{y}) = (x_1y_1+x_2y_2+1)^2 = x_1^2y_1^2 + x_2^2y_2^2 + 2x_1x_2y_1y_2 + 2x_1y_1 + 2x_2y_2 + 1,$ so we can take $\phi(\boldsymbol{x}) = \left(x_1^2,\; x_2^2,\; \sqrt{2}\,x_1x_2,\; \sqrt{2}\,x_1,\; \sqrt{2}\,x_2,\; 1\right).$

# Q5

(You don't need to do anything for this question.) A good Matlab code and demonstration of kernel K-means can be found at http://www.dcs.gla.ac.uk/~srogers/firstcourseml/matlab/chapter6/kernelkmeans.html Read the code. Run the code in Matlab, if possible, to see how kernel K-means works for nonlinear data.
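Returning to Q4, a quick numeric check (my own addition) that the feature map written above reproduces the kernel:

```python
import numpy as np

def phi(v):
    x1, x2 = v
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

rng = np.random.default_rng(7)
for _ in range(5):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert np.isclose(phi(x) @ phi(y), (x @ y + 1) ** 2)
```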
2022-05-25T05:52:29
{ "domain": "leeee.top", "url": "http://leeee.top/2020/MSBD5004-Mathematical-Methods-for-Data-Analysis-Homework-2/", "openwebmath_score": 0.9799769520759583, "openwebmath_perplexity": 337.17984744259996, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.971563966131786, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.6532132458319421 }
https://f7ed.com/2022/08/16/mit6875-lec15/
# 「Cryptography-MIT6875」: Lecture 15

In this series, I will learn MIT 6.875, Foundations of Cryptography, lectured by Vinod Vaikuntanathan. Any corrections and advice are welcome. ^ - ^

Topics Covered:

• Sequential vs Parallel Repetition: reduce soundness error
• Proof of Knowledge
• PoK of DLOG
• Non-Interactive ZK (NIZK)
• NIZK in The Random Oracle Model
• NIZK for 3COL
• NIZK in The Common Random String Model (Lecture 16)

# Recap

Recap NP Proofs. Give NP Proofs for the NP-complete problem of graph 3-coloring.

• Prover $P$: has a witness, the 3-coloring of $G$.
• $P$ gives the proof, the solution to the 3-coloring of $G$, to $V$.
• Verifier $V$ checks:
  • only 3 colors are used
  • any two vertices connected by an edge are colored differently.

The verifier learned that the graph $G$ is 3-colorable and also learned the 3-coloring solution. So NP proofs reveal too much information. With Zero-knowledge (Interactive) Proofs, the verifier only learns that the graph $G$ is 3-colorable, without learning the solution.

• Prover $P$:
  1. permute the colors
  2. commit to each color
  3. send all the commitments to the verifier.
• Verifier $V$: pick a random edge
• Prover $P$: open the vertices of the edge.
• Verifier $V$: check the openings & that the colorings of the two vertices are different.

Besides, we proved in a previous blog that the 3COL Protocol satisfies completeness, soundness and zero-knowledge.

• Completeness: For every $G\in 3COL$, $V$ accepts $P$'s proof.
• Soundness: For every $G\notin 3COL$ and any cheating $P^*$, $V$ rejects $P^*$'s proof with probability $\ge 1-neg(n)$.
• Zero-knowledge: For every cheating $V^*$, there is a PPT simulator $S$ such that for every $G\in 3COL$, $S$ simulates the view of $V^*$.

# Sequential vs Parallel Repetition

The 3COL protocol has a large soundness error of $1-1/|E|$, the probability that $V$ accepts even though $G\notin 3COL$.

## Reducing Soundness Error

Theorem: Sequential Repetition reduces soundness error for interactive proofs, and preserves the ZK property. But it brings about the problem that it costs a lot of rounds. An alternative is parallel repetition.

Theorem [Goldreich-Krawczyk'90]: Parallel Repetition also reduces soundness error for interactive proofs. It is also honest-verifier ZK, but does not, in general, preserve the ZK property.

Note: Preserving the ZK property in general means that it is ZK against a malicious verifier.

There is an intuitive interpretation of the theorem [Goldreich-Krawczyk'90]. The interaction in parallel repetition is: $P$ sends all first messages in parallel and $V$ responds at once with all second messages...

• If $V$ is an honest verifier, he indeed does not look at the commitments, and just picks the random edges independently, which is the same as in sequential repetition.
• But when $V^*$ is a malicious verifier, there is no reason that $V^*$ picks the edges independently. $V^*$ can apply a giant hash function and do some bizarre thing to pick these dependent edges.

Intuitively, it's harder to simulate such a thing. The simulator's strategy in parallel repetition:

1. $S$ feeds some made-up first messages to $V^*$.
2. $V^*$ picks the edges in a bizarre manner.
3. $S$ can only answer exactly one challenge.

The key reason is that the challenge space is exponentially large and the probability of hitting that made-up challenge is negligible. So this simulation strategy goes down the drain. This theorem tells us that some protocols in parallel repetition are not zero-knowledge against a malicious verifier.
And the following theorem tells us that the parallel repetition of the 3COL protocol is not zero-knowledge if we run it many times in parallel.

Theorem [Holmgren-Lombardi-Rothblum'21]: Parallel Repetition of the (Goldreich-Micali-Wigderson) 3COL protocol is not zero-knowledge.

Fortunately, we have zero-knowledge protocols in constant rounds with exponentially small soundness error, rather than in a million rounds.

Theorem [Goldreich-Kahan'95]: There is a constant-round ZK proof system for 3COL (with exponentially small soundness error), assuming discrete logarithms are hard.

# Proofs of Knowledge

So far, we focused on the decision problem: $y\in \mathcal{L}$ or $y\notin \mathcal{L}$. (e.g. $y$ is a quadratic residue $\mod N$ or it is not.)

Here is a different scenario: Alice has some knowledge, the discrete log of $y$ (assuming $g$ is a generator, the discrete log of $y$ always exists), and Alice wants to convince Bob that she knows it. In this scenario the prover wants to convince the verifier that she knows a solution to a problem, e.g. that she knows the discrete log of $y$. It is difficult to formulate this as a decision problem. This is a Proof of Knowledge.

Likewise, we can define completeness, soundness and zero-knowledge.

• Completeness: When Alice and Bob run the protocol where Alice has input $x$, Bob outputs accept.
• Soundness: How do we define soundness when Alice does not have the knowledge? It is difficult to formalize the lack of knowledge.
• Zero-knowledge: There is a simulator that, given only $y$, outputs a view of Bob that is indistinguishable from his view in an interaction with Alice.

## Extractor

The main idea of Goldreich is that if Alice knows $x$, there must be a way to "extract it from her". It's not about putting diodes on her brain. It's sort of talking to Alice.

Definition of Proof of Knowledge: For any cheating $P^*$, if the prover can convince the verifier with probability $\operatorname{Pr}[\langle P^*,V\rangle(y)=\textrm{accept}]\ge \varepsilon$, then there exists an extractor $E$ such that $\operatorname{Pr}[E^{P^*}(y)=x \text{ s.t. }y=g^x]\ge \varepsilon'\approx \varepsilon$.

The extractor is an expected-PPT algorithm with oracle access to $P^*$. We will not dig into the definition but give an example of PoK.

## ZK Proof of Knowledge of Discrete Log

The protocol is as follows.

ZK Proof of Knowledge of DLOG:

1. Prover: Pick a random $r$ and send $z=g^r$ to Verifier.
2. Verifier: Pick a random challenge $c \in \{0,1\}$.
3. Prover:
   • If $c=0$: send $s=r$
   • If $c=1$: send $s=r+x$
4. Verifier: Accept iff $g^s=z\cdot y^c$.

The above protocol is complete, sound and zero-knowledge. Completeness and zero-knowledge are well proven.

Completeness:

• If the prover has the discrete log of $y$, the verifier accepts with probability 1.
• $g^s=g^{r+cx}=g^r\cdot (g^{x})^c=z\cdot y^c$

Zero-knowledge:

• The real view of $V^*$ is $\texttt{view}_{V^*}=(z,c,s)$
• The simulator works as follows:
  1. Generate $z=g^s/y^c$ for a random $s$ and a random $c$.
  2. Feed $z$ to the verifier and get the challenge $c^*=V^*(z)$.
  3. If $c^*=c$, output $(z,c,s)$ as the simulated transcript.
  4. If $c^*\ne c$, go back to step 1 and repeat.
• The simulated view is identical to the view in a real execution.

Soundness:

• The key is to construct an extractor, arguing by contradiction.
• A cheating prover $P^*$ who guesses the challenge in advance can convince the verifier with probability $1/2$, so the soundness error is $1/2$.
• Assume for contradiction that $P^*$ convinces the verifier with probability $\ge 1/2+1/poly$.
• Then the prover $P^*$ must be prepared for both challenges on the same first message.
• It's then easy to extract the discrete log of $y$ from $P^*$:
  1. Run $P^*$ with $c=0$ and get $s_0$.
  2. Rewind $P^*$ to the first message.
  3. Run $P^*$ with $c=1$ and get $s_1$.
• By the assumption on $P^*$'s success probability, $g^{s_0}=z$ and $g^{s_1}=zy$ both hold with probability $1/poly$.
• That is, $g^{s_1-s_0}=y$ w.p. $1/poly$.
• So $s_1-s_0$ is the discrete log of $y$ w.p. $1/poly$.

This protocol is known as Schnorr's proof; made non-interactive, it becomes the Schnorr signature.

# Non-Interactive ZK

Let's proceed to the next topic: can we make proofs non-interactive again?

The advantages of non-interactive ZK (NIZK):
1. $V$ does not need to be online during the proof process.
2. Proofs are not ephemeral; they can persist into the future.

## NIZK is Impossible

First, we claim that NIZK is impossible in the plain model. Suppose there were an NIZK proof system for 3COL that is complete and zero-knowledge. Then it is NOT sound.

Proof:
1. Completeness: when $G$ is in 3COL, $V$ accepts the proof $\pi$.
2. Zero-knowledge: a PPT simulator $S$, given only a $G$ in 3COL, produces an indistinguishable proof $\tilde{\pi}$. In particular, $V$ accepts $\tilde{\pi}$.
3. Now imagine running the simulator $S$ on a $\underline{G\notin 3COL}$. It produces a proof $\tilde{\pi}$ which the verifier still accepts, because $S$ and $V$ are both PPT and together they cannot tell whether the input graph is in 3COL without a witness.
4. Therefore, $S$ is indeed a cheating prover! It can produce a proof for a $G\notin 3COL$ that the verifier nevertheless accepts.
5. Ergo, the proof system is NOT sound.

But we can achieve NIZK under some models. There are two roads to NIZK:
1. the Random Oracle Model and the Fiat-Shamir transform;
2. the Common Random String Model.

## NIZK: Random Oracle Model

As discussed before, the verifier's randomness is necessary for ZK proofs; without it the challenge is predictable and the protocol is effectively non-interactive. Take the ZK proof of knowledge of discrete log as an example: the protocol is not sound if $P^*$ knows the random challenge $c$ beforehand.

• If $P^*$ knows $c=0$ beforehand, cheating is trivial, since the response $s=r$ never uses the witness.
• If $P^*$ knows $c=1$ beforehand:
  1. send $z=g^s/y$ for a random $s$;
  2. send $s$.

### NIZK for 3COL

Consider an NIZK proof for 3COL. Start from the parallel-repeated 3COL protocol, which is complete, has exponentially small soundness error, and is honest-verifier ZK. Similarly, the verifier's randomness is necessary; otherwise, a cheating prover can make up the messages $a$ and $z$ beforehand. However, the protocol can be made non-interactive in the random oracle model.

Recap of the Random Oracle Model [Lecture 12]: in the random oracle model, the only way to compute $H$ is by calling the oracle. We can think of it as a very complicated public function, e.g. SHA-3, serving as a proxy for a truly random function. In the random-oracle-heuristic world, $H$ is virtually a black box that can only be queried.

Fiat and Shamir (1986): let $c=H(a)$. Now the prover can compute the challenge herself! This is potentially harmful for soundness, but when $H$ is modeled as a random oracle, soundness can be proved.
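To make the DLOG protocol and the Fiat-Shamir transform concrete, here is a minimal Python sketch of my own (not from the lecture). It uses assumed toy parameters, the group $\mathbb{Z}_{101}^*$ with generator $g=2$, and SHA-256 standing in for the random oracle $H$; real deployments use large prime-order groups.

```python
import hashlib
import secrets

# Toy group parameters (assumptions for illustration only):
# Z_p^* with p = 101, generator g = 2, group order q = p - 1 = 100.
p, g = 101, 2
q = p - 1

x = 37                 # Alice's secret: the discrete log of y
y = pow(g, x, p)       # public value y = g^x mod p

# --- One round of the interactive protocol (soundness error 1/2) ---
r = secrets.randbelow(q)
z = pow(g, r, p)                   # prover's first message
c = secrets.randbelow(2)           # verifier's random challenge bit
s = (r + c * x) % q                # prover's response
assert pow(g, s, p) == (z * pow(y, c, p)) % p   # verifier's check

# --- Fiat-Shamir: derive the challenge from the first message via H ---
def fiat_shamir_challenge(z: int) -> int:
    h = hashlib.sha256(f"{p}|{g}|{y}|{z}".encode()).digest()
    return h[0] & 1                # a single challenge bit, as in the lecture

r = secrets.randbelow(q)
z = pow(g, r, p)
c = fiat_shamir_challenge(z)       # the prover computes c herself
s = (r + c * x) % q
proof = (z, s)                     # non-interactive proof

# Anyone can verify the proof offline by recomputing the challenge.
z, s = proof
c = fiat_shamir_challenge(z)
assert pow(g, s, p) == (z * pow(y, c, p)) % p
print("interactive round and Fiat-Shamir proof both verify")
```

With a one-bit challenge the soundness error is $1/2$, so a real Fiat-Shamir proof would repeat this many times in parallel or, as in Schnorr signatures, use a full-size challenge $c = H(a)$ together with $s = r + c\,x$.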
2022-10-01T17:12:53
{ "domain": "f7ed.com", "url": "https://f7ed.com/2022/08/16/mit6875-lec15/", "openwebmath_score": 0.9193819761276245, "openwebmath_perplexity": 2608.6923615531127, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213245831942 }
http://openstudy.com/updates/55fb5471e4b0c2f0ec8ea323
Question (anonymous, one year ago): Give an example of a countable collection of disjoint open intervals.

1. zzr0ck3r: $$\{(n-\frac{1}{2}, n+\frac{1}{2})\subset \mathbb{R} \mid n\in \mathbb{N}\}$$
2. zzr0ck3r: Does that make sense, @carr099?
3. anonymous: Can you give some explanation?
4. zzr0ck3r: Sure, it will look like this: $$\{(0.5,1.5),(1.5,2.5),(2.5,3.5),(3.5,4.5),\ldots\}$$ These are of course disjoint, and they are countable because they are indexed by $\mathbb{N}$. Note that just two intervals would also have worked, $\{(-\infty, 0), (0, \infty)\}$; the question did not say countably infinite.
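As a quick sanity check of the construction (my own addition, not from the thread), the sketch below enumerates the first few intervals $(n-\frac{1}{2}, n+\frac{1}{2})$ and verifies pairwise disjointness, using the fact that open intervals $(a,b)$ and $(c,d)$ are disjoint whenever $b \le c$ or $d \le a$.

```python
from fractions import Fraction as F

def interval(n):
    """The open interval (n - 1/2, n + 1/2) indexed by a natural number n."""
    return (F(n) - F(1, 2), F(n) + F(1, 2))

def disjoint(i, j):
    """Open intervals (a, b) and (c, d) are disjoint iff b <= c or d <= a."""
    (a, b), (c, d) = i, j
    return b <= c or d <= a

intervals = [interval(n) for n in range(1, 6)]
print(intervals)   # (1/2, 3/2), (3/2, 5/2), ..., printed as Fractions

assert all(disjoint(intervals[i], intervals[j])
           for i in range(len(intervals))
           for j in range(i + 1, len(intervals)))
print("pairwise disjoint")
```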
2016-10-24T18:42:50
{ "domain": "openstudy.com", "url": "http://openstudy.com/updates/55fb5471e4b0c2f0ec8ea323", "openwebmath_score": 0.9133765697479248, "openwebmath_perplexity": 703.2893814859823, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213245831942 }
https://www.physicsforums.com/threads/complex-replacement-justification.871323/
Complex Replacement: Justification?

Hi, is there a proof that complex replacement is a valid way to solve a differential equation? I'm lacking some intuition on the idea that under any algebraic manipulations the real and imaginary parts of an expression don't influence each other. For example, if I'm given $$p(D)\, x = \cos(t)$$ where $p(D)$ is some differential operator, how can I be sure that the real part of the solution to $$p(D)\, z = e^{it}$$ will be a solution to my original differential equation? How can I be sure that I can manipulate any of these expressions with complex numbers and will still be able to take the real part?
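One way to build intuition (my addition, not part of the original thread): when $p(D)$ has real coefficients, it is linear and maps real functions to real functions, so $\operatorname{Re}(p(D)z) = p(D)\operatorname{Re}(z)$; taking real parts of both sides of $p(D)z = e^{it}$ then gives $p(D)\operatorname{Re}(z) = \cos t$. The SymPy sketch below checks this on an assumed concrete operator $p(D) = D^2 + 2D + 2$, using the exponential response formula $z_p = e^{it}/p(i)$.

```python
import sympy as sp

t = sp.symbols('t', real=True)

# Assumed example operator: p(D) = D^2 + 2D + 2, so p(i) = i^2 + 2i + 2.
p_of_i = sp.I**2 + 2*sp.I + 2

# Exponential response: z_p = e^{it} / p(i) solves p(D) z = e^{it} when p(i) != 0.
z_p = sp.exp(sp.I*t) / p_of_i

# Take the real part of the complex solution (t is declared real).
x_p = z_p.as_real_imag()[0]

# Verify that Re(z_p) solves the original real equation p(D) x = cos(t).
residual = sp.simplify(x_p.diff(t, 2) + 2*x_p.diff(t) + 2*x_p - sp.cos(t))
print(sp.simplify(x_p))   # (2*sin(t) + cos(t))/5, up to simplification
print(residual)           # 0
```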
2021-04-20T04:27:28
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/complex-replacement-justification.871323/", "openwebmath_score": 0.655491828918457, "openwebmath_perplexity": 105.04950710683026, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213245831942 }
https://omega-nerez.cz/5ibvo/ols-estimator-derivation-e2dbea
In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS estimation was originally derived in 1795 by Gauss. Seventeen at the time, the mathematician was attempting to describe the dynamics of planetary orbits and comets, and in the process derived much of modern-day statistics. The method he used (a redacted maximum likelihood argument) is more involved than the derivation below, but the two can be shown to be equivalent.

The goal of OLS regression is to capture the linear relationship between the explanatory variables $X$ and the outcome $y$. The population equation of interest is

$$y = X\beta + e,$$

where we can observe $y$ and $X$ but not $\beta$. The coefficient vector $\beta$ essentially answers the question: if $X$ goes up, how much can we expect $y$ to go up by? (For example, how much does the weight of a person go up by if they grow taller in height?)

It is now time to derive the OLS estimator in matrix form. The objective of the OLS estimator is to minimize the sum of squared errors. Let $b$ be an estimator of the unknown parameter vector; the sum of squared residuals is the scalar

$$S(b) = e'e = (y - Xb)'(y - Xb).$$

Because the objective is a squared (and thereby convex) function of $b$, it is minimized where its gradient is zero, and it is easy to differentiate. Differentiating with respect to $b$ and setting the result to zero yields the first-order conditions

$$\sum_{i=1}^{n} x_{ij}\,\hat{u}_i = 0, \qquad j = 0, 1, \ldots, k,$$

where $\hat{u}$ is the residual; these are also the moment conditions that the method-of-moments derivation starts from. In matrix form they are the normal equations $X'X\,b = X'y$, and rearranging gives the closed-form solution

$$\hat{\beta} = (X'X)^{-1}X'y.$$

Since this estimate is unique, i.e. there is a unique parameter vector satisfying the first-order conditions (when $X'X$ is invertible), we know the selected parameter vector minimizes the objective function in the interior of the parameter space. A key point: although the derivation is often seen as using new ideas, it uses only simple algebra and the idea of minimizing a quadratic function. Had the residuals been raised to a power other than 2, there would be no closed form and we would have to use alternative methods (like optimisers) to solve for $\beta$; changing the power also alters how much each data point is weighted, and therefore the robustness of the regression.

Substituting $y = X\beta + e$ into the estimator and remembering that $E[e]=0$ (under the assumptions below), we find $E[\hat{\beta}] = \beta$: the OLS estimator is unbiased. It can also be shown to be consistent.
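To see the closed form in action, here is a minimal NumPy sketch (my own illustration with made-up data; all names are assumptions, not from the text): it builds a design matrix with an intercept column, solves the normal equations, checks the first-order conditions, and cross-checks against NumPy's least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: y = 2 + 3*x1 - 1*x2 + noise
n = 200
x = rng.normal(size=(n, 2))
beta_true = np.array([2.0, 3.0, -1.0])       # intercept, slope1, slope2
X = np.column_stack([np.ones(n), x])          # add the intercept column
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Closed-form OLS: solve the normal equations X'X b = X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# First-order conditions: residuals are orthogonal to every column of X.
residuals = y - X @ beta_hat
print(beta_hat)                # close to [2, 3, -1]
print(X.T @ residuals)         # numerically ~ 0

# Cross-check against NumPy's least-squares solver.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_hat, beta_lstsq)
```

Solving the linear system directly, as above, is numerically preferable to forming $(X'X)^{-1}$ explicitly, even though the formula is usually written with the inverse.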
This derivation hinges on the problem being a sum-of-squares problem together with the OLS assumptions. For the validity of the OLS estimates, the following assumptions are made while running a linear regression:

A1. The regression model is linear in parameters.
A2. The observations are obtained by random sampling.
A3. There is no endogeneity in the model: the independent variables $X$ and the error $e$ are not correlated.
A4. The errors are normally distributed with constant variance.

Assumptions A1-A3 guarantee unbiasedness of the OLS estimator. The Gauss-Markov theorem famously states that OLS is BLUE, an acronym for Best Linear Unbiased Estimator; in this context, "best" refers to minimum variance, i.e. the tightest possible sampling distribution among linear unbiased estimators. Sometimes we add the assumption $e \mid X \sim N(0, \sigma^2)$, which makes the OLS estimator BUE (best unbiased). Finite-sample reasoning of this kind gives us $E(\hat{\beta})$ and $Var(\hat{\beta})$, but to conduct statistical tests such as t-tests or F-tests we need the shape of the full sampling distribution of $\hat{\beta}$, which is where the normality assumption or asymptotic arguments come in.

These assumptions are not limiting reasons to avoid OLS, but it is important to recognise when features of the data point to different underlying distributions. If the underlying data have a lot of anomalies, it may be worthwhile to use a more robust estimator (like least absolute deviations) than OLS. In the presence of heteroscedasticity, the usual OLS estimators no longer have minimum variance among all linear unbiased estimators, so OLS is not efficient relative to GLS in such situations. White showed that $X'ee'X$ is a good estimator of the corresponding expectation term, and suggested a test, regressing the squared residuals on the terms in $X'X$, to see how far this estimator diverges from what you would get if you just used the usual OLS standard errors.

In practice the estimates are one command away: in Stata, use the regress command for OLS regression (you can abbreviate it as reg), specifying the dependent variable first followed by the regressors, e.g. regress income educ jobexp race. By default, Stata will report the unstandardized (metric) coefficients.
2021-03-03T02:04:05
{ "domain": "omega-nerez.cz", "url": "https://omega-nerez.cz/5ibvo/ols-estimator-derivation-e2dbea", "openwebmath_score": 0.7829805016517639, "openwebmath_perplexity": 909.1998690483122, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213245831942 }
https://gateoverflow.in/95817/tifr2017-b-10
A vertex colouring of a graph $G=(V, E)$ with $k$ colours is a mapping $c: V \rightarrow \{1, \dots , k\}$ such that $c(u) \neq c(v)$ for every $(u, v) \in E$. Consider the following statements:

1. If every vertex in $G$ has degree at most $d$, then $G$ admits a vertex colouring using $d+1$ colours.
2. Every cycle admits a vertex colouring using $2$ colours.
3. Every tree admits a vertex colouring using $2$ colours.

Which of the above statements is/are TRUE? Choose from the following options:

A. only i
B. only i and ii
C. only i and iii
D. only ii and iii
E. i, ii, and iii

Answer (by Active, 1.5k points):

(i) is true: colour the vertices greedily in any order. When a vertex is coloured, its at most $d$ neighbours block at most $d$ colours, so one of the $d+1$ colours is always free. The bound is tight, since in the worst case the graph can be complete; for example, the complete graph $K_{4}$ has maximum degree $3$ and needs all $3+1=4$ colours.

(ii) is false, since cycles with an odd number of vertices require $3$ colours.

(iii) is true: colour each level of the tree in an alternating fashion, which needs only two colours. It's a theorem that every tree is $2$-colourable.

Therefore, option C is correct.

Answer (by Loyal, 6.7k points), referring to an example graph that is not shown: in (i), the maximum degree is $3$ and colours A, B, C, D suffice, so $3+1$ colours are used. In (ii), two colours work for an even cycle, but the odd cycle A-D-C in the example requires $3$ colours, so it is not true for every cycle. In (iii), the two colours A and B are sufficient for the tree. So option C, i and iii, is the right option.
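To make statements (i) and (iii) concrete, here is a small Python sketch (my own, not from the thread): a greedy colouring that never needs more than $d+1$ colours, and a BFS 2-colouring of a tree.

```python
from collections import deque

def greedy_colouring(adj):
    """Greedy colouring: each vertex gets the smallest colour not used by
    its already-coloured neighbours; with max degree d, at most d+1 colours."""
    colour = {}
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in range(len(adj)) if c not in used)
    return colour

def two_colour_tree(adj, root):
    """2-colour a tree by alternating colours across BFS levels."""
    colour = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in colour:
                colour[u] = 1 - colour[v]
                queue.append(u)
    return colour

# K4: max degree d = 3, and greedy uses exactly d + 1 = 4 colours.
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
print(greedy_colouring(k4))        # {0: 0, 1: 1, 2: 2, 3: 3}

# A small tree: two colours suffice.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
print(two_colour_tree(tree, 0))    # colours alternate by level: 0/1
```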
2018-09-22T03:31:31
{ "domain": "gateoverflow.in", "url": "https://gateoverflow.in/95817/tifr2017-b-10", "openwebmath_score": 0.6606214046478271, "openwebmath_perplexity": 842.8574897910493, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.971563966131786, "lm_q2_score": 0.6723316926137811, "lm_q1q2_score": 0.653213245831942 }
https://stacks.math.columbia.edu/tag/01ZS
## 32.11 Characterizing affine schemes If $f : X \to S$ is a surjective integral morphism of schemes such that $X$ is an affine scheme then $S$ is affine too. See [A.2, Conrad-Nagata]. Our proof relies on the Noetherian case which we stated and proved in Cohomology of Schemes, Lemma 30.13.3. See also [II 6.7.1, EGA]. Lemma 32.11.1. Let $f : X \to S$ be a morphism of schemes. Assume that $f$ is surjective and finite, and assume that $X$ is affine. Then $S$ is affine. Proof. Since $f$ is surjective and $X$ is quasi-compact we see that $S$ is quasi-compact. Since $X$ is separated and $f$ is surjective and universally closed (Morphisms, Lemma 29.42.7), we see that $S$ is separated (Morphisms, Lemma 29.39.11). By Lemma 32.9.8 we can write $X = \mathop{\mathrm{lim}}\nolimits _ a X_ a$ with $X_ a \to S$ finite and of finite presentation. By Lemma 32.4.13 we see that $X_ a$ is affine for some $a \in A$. Replacing $X$ by $X_ a$ we may assume that $X \to S$ is surjective, finite, of finite presentation and that $X$ is affine. By Proposition 32.5.4 we may write $S = \mathop{\mathrm{lim}}\nolimits _{i \in I} S_ i$ as a directed limits of schemes of finite type over $\mathbf{Z}$. By Lemma 32.10.1 we can after shrinking $I$ assume there exist schemes $X_ i \to S_ i$ of finite presentation such that $X_{i'} = X_ i \times _ S S_{i'}$ for $i' \geq i$ and such that $X = \mathop{\mathrm{lim}}\nolimits _ i X_ i$. By Lemma 32.8.3 we may assume that $X_ i \to S_ i$ is finite for all $i \in I$ as well. By Lemma 32.4.13 once again we may assume that $X_ i$ is affine for all $i \in I$. Hence the result follows from the Noetherian case, see Cohomology of Schemes, Lemma 30.13.3. $\square$ Proposition 32.11.2. Let $f : X \to S$ be a morphism of schemes. Assume that $f$ is surjective and integral, and assume that $X$ is affine. Then $S$ is affine. Proof. Since $f$ is surjective and $X$ is quasi-compact we see that $S$ is quasi-compact. Since $X$ is separated and $f$ is surjective and universally closed (Morphisms, Lemma 29.42.7), we see that $S$ is separated (Morphisms, Lemma 29.39.11). By Lemma 32.7.2 we can write $X = \mathop{\mathrm{lim}}\nolimits _ i X_ i$ with $X_ i \to S$ finite. By Lemma 32.4.13 we see that for $i$ sufficiently large the scheme $X_ i$ is affine. Moreover, since $X \to S$ factors through each $X_ i$ we see that $X_ i \to S$ is surjective. Hence we conclude that $S$ is affine by Lemma 32.11.1. $\square$ Lemma 32.11.3. Let $X$ be a scheme which is set theoretically the union of finitely many affine closed subschemes. Then $X$ is affine. Proof. Let $Z_ i \subset X$, $i = 1, \ldots , n$ be affine closed subschemes such that $X = \bigcup Z_ i$ set theoretically. Then $\coprod Z_ i \to X$ is surjective and integral with affine source. Hence $X$ is affine by Proposition 32.11.2. $\square$ Lemma 32.11.4. Let $i : Z \to X$ be a closed immersion of schemes inducing a homeomorphism of underlying topological spaces. Let $\mathcal{L}$ be an invertible sheaf on $X$. Then $i^*\mathcal{L}$ is ample on $Z$, if and only if $\mathcal{L}$ is ample on $X$. Proof. If $\mathcal{L}$ is ample, then $i^*\mathcal{L}$ is ample for example by Morphisms, Lemma 29.35.7. Assume $i^*\mathcal{L}$ is ample. Then $Z$ is quasi-compact (Properties, Definition 28.26.1) and separated (Properties, Lemma 28.26.8). Since $i$ is surjective, we see that $X$ is quasi-compact. Since $i$ is universally closed and surjective, we see that $X$ is separated (Morphisms, Lemma 29.39.11). 
By Proposition 32.5.4 we can write $X = \mathop{\mathrm{lim}}\nolimits X_ i$ as a directed limit of finite type schemes over $\mathbf{Z}$ with affine transition morphisms. We can find an $i$ and an invertible sheaf $\mathcal{L}_ i$ on $X_ i$ whose pullback to $X$ is isomorphic to $\mathcal{L}$, see Lemma 32.10.2. For each $i$ let $Z_ i \subset X_ i$ be the scheme theoretic image of the morphism $Z \to X$. If $\mathop{\mathrm{Spec}}(A_ i) \subset X_ i$ is an affine open subscheme with inverse image of $\mathop{\mathrm{Spec}}(A)$ in $X$ and if $Z \cap \mathop{\mathrm{Spec}}(A)$ is defined by the ideal $I \subset A$, then $Z_ i \cap \mathop{\mathrm{Spec}}(A_ i)$ is defined by the ideal $I_ i \subset A_ i$ which is the inverse image of $I$ in $A_ i$ under the ring map $A_ i \to A$, see Morphisms, Example 29.6.4. Since $\mathop{\mathrm{colim}}\nolimits A_ i/I_ i = A/I$ it follows that $\mathop{\mathrm{lim}}\nolimits Z_ i = Z$. By Lemma 32.4.15 we see that $\mathcal{L}_ i|_{Z_ i}$ is ample for some $i$. Since $Z$ and hence $X$ maps into $Z_ i$ set theoretically, we see that $X_{i'} \to X_ i$ maps into $Z_ i$ set theoretically for some $i' \geq i$, see Lemma 32.4.10. (Observe that since $X_ i$ is Noetherian, every closed subset of $X_ i$ is constructible.) Let $T \subset X_{i'}$ be the scheme theoretic inverse image of $Z_ i$ in $X_{i'}$. Observe that $\mathcal{L}_{i'}|_ T$ is the pullback of $\mathcal{L}_ i|_{Z_ i}$ and hence ample by Morphisms, Lemma 29.35.7 and the fact that $T \to Z_ i$ is an affine morphism. Thus we see that $\mathcal{L}_{i'}$ is ample on $X_{i'}$ by Cohomology of Schemes, Lemma 30.17.5. Pulling back to $X$ (using the same lemma as above) we find that $\mathcal{L}$ is ample. $\square$ Lemma 32.11.5. Let $i : Z \to X$ be a closed immersion of schemes inducing a homeomorphism of underlying topological spaces. Then $X$ is quasi-affine if and only if $Z$ is quasi-affine. Proof. Recall that a scheme is quasi-affine if and only if the structure sheaf is ample, see Properties, Lemma 28.28.1. Hence if $Z$ is quasi-affine, then $\mathcal{O}_ Z$ is ample, hence $\mathcal{O}_ X$ is ample by Lemma 32.11.4, hence $X$ is quasi-affine. A proof of the converse, which can also be seen in an elementary way, is gotten by reading the argument just given backwards. $\square$ The following lemma does not really belong in this section. Lemma 32.11.6. Let $X$ be a scheme. Let $\mathcal{L}$ be an ample invertible sheaf on $X$. Assume we have morphisms of schemes $\mathop{\mathrm{Spec}}(k) \leftarrow \mathop{\mathrm{Spec}}(A) \to W \subset X$ where $k$ is a field, $A$ is an integral $k$-algebra, $W$ is open in $X$. Then there exists an $n > 0$ and a section $s \in \Gamma (X, \mathcal{L}^{\otimes n})$ such that $X_ s$ is affine, $X_ s \subset W$, and $\mathop{\mathrm{Spec}}(A) \to W$ factors through $X_ s$ Proof. Since $\mathop{\mathrm{Spec}}(A)$ is quasi-compact, we may replace $W$ by a quasi-compact open still containing the image of $\mathop{\mathrm{Spec}}(A) \to X$. Recall that $X$ is quasi-separated and quasi-compact by dint of having an ample invertible sheaf, see Properties, Definition 28.26.1 and Lemma 28.26.7. By Proposition 32.5.4 we can write $X = \mathop{\mathrm{lim}}\nolimits X_ i$ as a limit of a directed system of schemes of finite type over $\mathbf{Z}$ with affine transition morphisms. 
For some $i$ the ample invertible sheaf $\mathcal{L}$ on $X$ descends to an ample invertible sheaf $\mathcal{L}_ i$ on $X_ i$ and the open $W$ is the inverse image of a quasi-compact open $W_ i \subset X_ i$, see Lemmas 32.4.15, 32.10.3, and 32.4.11. We may replace $X, W, \mathcal{L}$ by $X_ i, W_ i, \mathcal{L}_ i$ and assume $X$ is of finite presentation over $\mathbf{Z}$. Write $A = \mathop{\mathrm{colim}}\nolimits A_ j$ as the colimit of its finite $k$-subalgebras. Then for some $j$ the morphism $\mathop{\mathrm{Spec}}(A) \to X$ factors through a morphism $\mathop{\mathrm{Spec}}(A_ j) \to X$, see Proposition 32.6.1. Since $\mathop{\mathrm{Spec}}(A_ j)$ is finite this reduces the lemma to Properties, Lemma 28.30.6. $\square$
2020-05-31T22:49:42
{ "domain": "columbia.edu", "url": "https://stacks.math.columbia.edu/tag/01ZS", "openwebmath_score": 0.9950900077819824, "openwebmath_perplexity": 116.161200558226, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9715639661317859, "lm_q2_score": 0.6723316926137812, "lm_q1q2_score": 0.653213245831942 }