
1) Let G be a group with identity element e

Math

1. Let G be a group with identity element e.
(a) If a, b ∈ G and ab = e, show that ba = e.
(b) Show that the map f : G → G defined by f(a) = a^{-1} for all a ∈ G is a bijection.

2. (a) Suppose G is a finite group of order n. Show that any a ∈ G has order ≤ n. (Hint: Consider the n + 1 elements e = a^0, a, a^2, a^3, ..., a^n. Are they all distinct?)
(b) Show by example that a group of order n does not necessarily contain an element of order n. (Hint: Consider G = Z/2Z × Z/2Z.)

3. Determine the order of the given group element. Please show your work.
(a) 5 in (Z/8Z)^×
(b) $\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ 2 & 3 & 7 & 5 & 1 & 4 & 6 \end{pmatrix}$ in S_7. (Note: It may be more convenient to work with the cycle notation σ = (1237645) when computing powers of σ.)
(c) (1, 2) in Z × Z/4Z

4. (a) If a ∈ G and a^{12} = e, what are the possibilities for the order of a?
(b) If b ∈ G is a nonidentity element and b^p = e for a prime p, show that b must have order p.

5. (a) Suppose G is a group and a, b ∈ G are elements of finite orders |a| and |b|, respectively. If ab = ba, show that (ab)^{|a||b|} = e.
(b) Show that $a = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$ has order 3 in GL_2(R) and $b = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ has order 4, but that ab has infinite order. Thus, the conclusion of (a) may fail if a and b do not commute.

6. Let G be a group and a ∈ G an element of finite order n.
(a) Show that for any b ∈ G, the element bab^{-1} has order n. (Hint: Begin by using induction to show that (bab^{-1})^k = ba^k b^{-1} for any positive integer k.)
(b) Show that if d ∈ Z with gcd(d, n) = 1, then a^d has order n. (Hint: Since gcd(d, n) = 1, there exist integers u and v such that un + vd = 1, and therefore a = a^{un} a^{vd} = (a^d)^v.)

7. Let H and K be subgroups of a group G.
(a) Show by example that H ∪ K need not be a subgroup of G. (Hint: What happens for H = {e, (12)} and K = {e, (23)} in G = S_3?)
(b) Prove that H ∩ K is a subgroup of G.
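For problems 3 and 5(b), the claimed orders are easy to sanity-check by brute force before writing up a proof. Below is a minimal sketch in plain Python (the helper functions are our own, not part of the assignment): it confirms that 5 has order 2 in (Z/8Z)^×, that σ is a 7-cycle of order 7, and that a and b have orders 3 and 4 while no small power of ab is the identity. (Problem 3(c) has infinite order, since the first coordinate lives in Z, so there is nothing to brute-force there.)

```python
def order_mod(a, m):
    """Order of a in (Z/mZ)^x: smallest k >= 1 with a^k = 1 (mod m)."""
    x, k = a % m, 1
    while x != 1:
        x, k = (x * a) % m, k + 1
    return k

def perm_order(sigma):
    """Order of a permutation given as a dict i -> sigma(i)."""
    ident = {i: i for i in sigma}
    power, k = dict(sigma), 1
    while power != ident:
        power = {i: sigma[power[i]] for i in sigma}
        k += 1
    return k

def mat_mul(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_order(A, max_k=100):
    """Order of A in GL_2, or None if no power up to max_k is the identity."""
    I = [[1, 0], [0, 1]]
    P = A
    for k in range(1, max_k + 1):
        if P == I:
            return k
        P = mat_mul(P, A)
    return None

print(order_mod(5, 8))                     # 2, since 5^2 = 25 = 1 (mod 8)
sigma = {1: 2, 2: 3, 3: 7, 4: 5, 5: 1, 6: 4, 7: 6}
print(perm_order(sigma))                   # 7: sigma is the 7-cycle (1237645)
a = [[0, 1], [-1, -1]]
b = [[0, -1], [1, 0]]
print(mat_order(a), mat_order(b))          # 3 4
print(mat_order(mat_mul(a, b)))            # None: ab = [[1,0],[-1,1]] is a shear
```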
Fourier analysis

1. Prove, using Euler's formula and standard trigonometric identities, that for any n ∈ N and θ ∈ R:
$$e^{in\theta} = (e^{i\theta})^n.$$

2. (a) Fix N ∈ N and define
$$P_N(\theta) = \sum_{k=-N}^{N} \alpha_k e^{ik\theta}$$
for some complex numbers {α_{-N}, ..., α_N}. Prove that if f ∈ L^1(T), then
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)\,\overline{P_N(\theta)}\,d\theta = \sum_{k=-N}^{N} \overline{\alpha_k}\,\hat f(k).$$
(b) Use part (a) to prove that
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)\,\overline{S_N f(\theta)}\,d\theta = \sum_{k=-N}^{N} \bigl|\hat f(k)\bigr|^2.$$

3. Let f(θ) = θ for θ ∈ T.
(a) Prove that {f̂(n)}_{n∈Z} belongs to ℓ^2(Z) but not to ℓ^1(Z).
(b) Prove that
$$\sum_{n=1}^{\infty} \frac{1}{n^2} \le \frac{\pi^2}{6}.$$
(Hint: Use part (a) and Bessel's inequality.)

4. Let f(θ) = |θ| for θ ∈ T.
(a) Show that the Fourier coefficients of f are given by
$$\hat f(n) = \begin{cases} \dfrac{\pi}{2} & \text{if } n = 0,\\[1ex] \dfrac{(-1)^n - 1}{\pi n^2} & \text{if } n \neq 0.\end{cases}$$
(b) Prove that the Fourier series of f converges to f uniformly on T.
(c) Use parts (a) and (b) to prove that
$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}.$$
(Hint: First prove that $\sum_{n \in \mathbb{N},\, n \text{ even}} \frac{1}{n^2} = \frac{1}{4}\sum_{n=1}^{\infty} \frac{1}{n^2}$.)

5. (a) If the following statement is true, prove it; if it is false, provide a counterexample: "If f ∈ L^p(T) for some 1 ≤ p < ∞, then {f̂(n)} ∈ ℓ^p(Z)."
(b) Fill in the blank in the following statement, and justify your answer: "If $\int_{-\pi}^{\pi} |f(x)|^2\,dx \le 2\pi$, then $\sum_{n\in\mathbb{Z}} |\hat f(n)|^2 \le$ ____."
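The coefficient formula in problem 4(a) and the limit in 4(c) can be checked numerically before attempting a proof. A rough sketch, assuming numpy (our choice of tool; the problems do not prescribe any software), compares quadrature values of f̂(n) for f(θ) = |θ| with the closed form and sums 1/n²:

```python
import numpy as np

# Numerical check of problem 4(a): f(theta) = |theta| on [-pi, pi],
# fhat(n) = (1/2pi) * integral of |theta| * e^{-i n theta} d theta.
theta = np.linspace(-np.pi, np.pi, 20001)
f = np.abs(theta)

def fhat(n):
    return np.trapz(f * np.exp(-1j * n * theta), theta) / (2 * np.pi)

def closed_form(n):
    return np.pi / 2 if n == 0 else ((-1) ** n - 1) / (np.pi * n ** 2)

for n in range(6):
    print(n, fhat(n).real, closed_form(n))   # the two columns should agree

# Problem 4(c): partial sums of 1/n^2 approach pi^2/6.
n = np.arange(1, 100001)
print((1 / n**2).sum(), np.pi**2 / 6)        # 1.64492... vs 1.64493...
```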
Linear Algebra

Contents

Chapter 1. Basic Notions
§1. Vector spaces
§2. Linear combinations, bases
§3. Linear transformations. Matrix–vector multiplication
§4. Linear transformations as a vector space
§5. Composition of linear transformations and matrix multiplication
§6. Invertible transformations and matrices. Isomorphisms
§7. Subspaces
§8. Application to computer graphics

Chapter 2. Systems of linear equations
§1. Different faces of linear systems
§2. Solution of a linear system. Echelon and reduced echelon forms
§3. Analyzing the pivots
§4. Finding A^{-1} by row reduction
§5. Dimension. Finite-dimensional spaces
§6. General solution of a linear system
§7. Fundamental subspaces of a matrix. Rank
§8. Representation of a linear transformation in arbitrary bases. Change of coordinates formula

Chapter 3. Determinants
§1. Introduction
§2. What properties the determinant should have
§3. Constructing the determinant
§4. Formal definition. Existence and uniqueness of the determinant
§5. Cofactor expansion
§6. Minors and rank
§7. Review exercises for Chapter 3

Chapter 4. Introduction to spectral theory (eigenvalues and eigenvectors)
§1. Main definitions
§2. Diagonalization

Chapter 5. Inner product spaces
§1. Inner product in R^n and C^n. Inner product spaces
§2. Orthogonality. Orthogonal and orthonormal bases
§3. Orthogonal projection and Gram–Schmidt orthogonalization
§4. Least square solution. Formula for the orthogonal projection
§5. Adjoint of a linear transformation. Fundamental subspaces revisited
§6. Isometries and unitary operators. Unitary and orthogonal matrices
§7. Rigid motions in R^n
§8. Complexification and decomplexification

Chapter 6. Structure of operators in inner product spaces
§1. Upper triangular (Schur) representation of an operator
§2. Spectral theorem for self-adjoint and normal operators
§3. Polar and singular value decompositions
§4. Applications of the singular value decomposition
§5. Structure of orthogonal matrices
§6. Orientation

Chapter 7. Bilinear and quadratic forms
§1. Main definition
§2. Diagonalization of quadratic forms
§3. Sylvester's Law of Inertia
§4. Positive definite forms. Minimax characterization of eigenvalues and Sylvester's criterion of positivity
§5. Positive definite forms and inner products

Chapter 8. Dual spaces and tensors
§1. Dual spaces
§2. Dual of an inner product space
§3. Adjoint (dual) transformations and transpose. Fundamental subspaces revisited (once more)
§4. What is the difference between a space and its dual?
§5. Multilinear functions. Tensors
§6. Change of coordinates formula for tensors

Chapter 9. Advanced spectral theory
§1. Cayley–Hamilton Theorem
§2. Spectral Mapping Theorem
§3. Generalized eigenspaces. Geometric meaning of algebraic multiplicity
§4. Structure of nilpotent operators
§5. Jordan decomposition theorem

Chapter 1. Basic Notions

1. Vector spaces

A vector space V is a collection of objects, called vectors (denoted in this book by lowercase bold letters, like v), along with two operations, addition of vectors and multiplication by a number (scalar)¹, such that the following 8 properties (the so-called axioms of a vector space) hold.

The first 4 properties deal with addition:

1. Commutativity: v + w = w + v for all v, w ∈ V;
2. Associativity: (u + v) + w = u + (v + w) for all u, v, w ∈ V;
3. Zero vector: there exists a special vector, denoted by 0, such that v + 0 = v for all v ∈ V;
4. Additive inverse: for every vector v ∈ V there exists a vector w ∈ V such that v + w = 0. Such an additive inverse is usually denoted −v.

The next two properties concern multiplication:

5. Multiplicative identity: 1v = v for all v ∈ V;
6. Multiplicative associativity: (αβ)v = α(βv) for all v ∈ V and all scalars α, β.

And finally, two distributive properties, which connect multiplication and addition:

7. α(u + v) = αu + αv for all u, v ∈ V and all scalars α;
8. (α + β)v = αv + βv for all v ∈ V and all scalars α, β.

¹ We need some visual distinction between vectors and other objects, so in this book we use bold lowercase letters for vectors and regular lowercase letters for numbers (scalars). In some (more advanced) books Latin letters are reserved for vectors while Greek letters are used for scalars; in even more advanced texts any letter can be used for anything, and the reader must understand from context what each symbol means. I think it is helpful, especially for a beginner, to have some visual distinction between different objects, so a bold lowercase letter will always denote a vector. On a blackboard, an arrow (as in ~v) is used to identify a vector.

A question arises: how can one memorize the above properties? The answer is that one does not need to; see below.

Remark. The above properties seem hard to memorize, but it is not necessary. They are simply the familiar rules of algebraic manipulation with numbers that you know from high school. The only new twist is that you have to understand what operations you can apply to what objects: you can add vectors, and you can multiply a vector by a number (scalar). Of course, you can do with numbers all the manipulations that you have learned before, but you cannot multiply two vectors, or add a number to a vector.

Remark. It is easy to prove that the zero vector 0 is unique, and that given v ∈ V its additive inverse −v is also unique. It is also not hard to show, using properties 5, 6 and 8, that 0 = 0v for any v ∈ V, and that −v = (−1)v. Note that these proofs still require other properties of a vector space, in particular properties 3 and 4.

If the scalars are the usual real numbers, we call the space V a real vector space. If the scalars are the complex numbers, i.e. if we can multiply vectors by complex numbers, we call the space V a complex vector space. Note that any complex vector space is a real vector space as well (if we can multiply by complex numbers, we can certainly multiply by real numbers), but not the other way around.

It is also possible to consider a situation where the scalars are elements of an arbitrary field F; in this case we say that V is a vector space over the field F. If you do not know what a field is, do not worry: in this book we consider only the case of real and complex spaces. Although many of the constructions in the book (in particular, everything in Chapters 1–3) work for general fields, in this text we consider only real and complex vector spaces. If we do not specify the set of scalars, or use the letter F for it, then the results are true for both real and complex spaces; if we need to distinguish the two cases, we will say explicitly which one we are considering. Note that in the definition of a vector space over an arbitrary field we require the set of scalars to be a field, so that we can always divide (without a remainder) by a non-zero scalar. Thus, it is possible to consider a vector space over the rationals, but not over the integers.
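The axioms, and the remark's consequences 0v = 0 and −v = (−1)v, can be illustrated on concrete vectors. The following is only a spot check on sample vectors in R^3, not a proof; numpy and the specific vectors and scalars are our own choices, not the book's:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))   # three random vectors in R^3
alpha, beta = 2.0, -1.5                 # two scalars

# Axioms 1 and 2: commutativity and associativity of addition.
assert np.allclose(v + w, w + v)
assert np.allclose((u + v) + w, u + (v + w))

# Axioms 7 and 8: the two distributive laws.
assert np.allclose(alpha * (u + v), alpha * u + alpha * v)
assert np.allclose((alpha + beta) * v, alpha * v + beta * v)

# Consequences from the remark: 0*v is the zero vector,
# and (-1)*v is the additive inverse of v.
assert np.allclose(0 * v, np.zeros(3))
assert np.allclose(v + (-1) * v, np.zeros(3))
print("all sample checks passed")
```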
1.1. Examples.

Example. The space R^n consists of all columns of size n,
$$v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},$$
whose entries are real numbers. Addition and multiplication by a scalar are defined entrywise, i.e.
$$\alpha \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = \begin{pmatrix} \alpha v_1 \\ \alpha v_2 \\ \vdots \\ \alpha v_n \end{pmatrix}, \qquad \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} + \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} = \begin{pmatrix} v_1 + w_1 \\ v_2 + w_2 \\ \vdots \\ v_n + w_n \end{pmatrix}.$$

Example. The space C^n also consists of columns of size n, only the entries now are complex numbers. Addition and multiplication are defined exactly as in the case of R^n; the only difference is that we can now multiply vectors by complex numbers, i.e. C^n is a complex vector space. Many results in this text are true for both R^n and C^n; in such cases we will use the notation F^n.

Example. The space M_{m×n} (also denoted M_{m,n}) of m × n matrices: the addition and multiplication by scalars are defined entrywise. If we allow only real entries (and so multiplication only by reals), we have a real vector space; if we allow complex entries and multiplication by complex numbers, we have a complex vector space. Formally, we would have to distinguish between the real and complex cases, i.e. write something like M^R_{m,n} or M^C_{m,n}. However, in most situations there is no difference between the real and complex cases, and there is no need to specify which one we are considering; if there is a difference, we say so explicitly.

Remark. As we mentioned above, the axioms of a vector space are just the familiar rules of algebraic manipulation with (real or complex) numbers, so if we put scalars (numbers) in place of the vectors, all the axioms are satisfied. Thus, the set R of real numbers is a real vector space, and the set C of complex numbers is a complex vector space.

More importantly, since in the above examples all vector operations (addition and multiplication by a scalar) are performed entrywise, the axioms of a vector space are automatically satisfied for these examples because they are satisfied for scalars (can you see why?). So we do not have to check the axioms: we get the fact that the above examples are indeed vector spaces for free! The same applies to the next example, where the coefficients of the polynomials play the role of entries.

Example. The space P_n of polynomials of degree at most n consists of all polynomials p of the form
p(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n,
where t is the independent variable. Note that some, or even all, of the coefficients a_k can be 0. In the case of real coefficients a_k we have a real vector space; complex coefficients give us a complex vector space. Again, we will specify whether we are treating the real or complex case only when it is essential; otherwise everything applies to both cases.

Question: What are the zero vectors in each of the above examples?
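The remark that "the coefficients of the polynomials play the role of entries" can be made concrete: once p(t) = a_0 + a_1 t + a_2 t^2 is stored as the array (a_0, a_1, a_2), the vector operations of P_2 really are the entrywise operations of R^3. A short illustrative sketch (numpy again; the sample polynomials are our own):

```python
import numpy as np

# Represent p(t) = a0 + a1*t + a2*t^2 in P_2 by the array [a0, a1, a2].
p = np.array([1.0, 0.0, 2.0])    # p(t) = 1 + 2t^2
q = np.array([0.0, 3.0, -1.0])   # q(t) = 3t - t^2

s = p + q                        # (p + q)(t) = 1 + 3t + t^2: entrywise sum
m = 2.0 * p                      # (2p)(t) = 2 + 4t^2: entrywise scaling

# Sanity check: operating on coefficient arrays agrees with adding and
# scaling the polynomial values pointwise.
t = np.linspace(-1, 1, 5)
powers = np.vander(t, 3, increasing=True)   # columns 1, t, t^2
assert np.allclose(powers @ s, powers @ p + powers @ q)
assert np.allclose(powers @ m, 2.0 * (powers @ p))
print("P_2 behaves like R^3 on coefficients")
```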
1.2. Matrix notation. An m × n matrix is a rectangular array with m rows and n columns. Elements of the array are called entries of the matrix.

It is often convenient to denote matrix entries by indexed letters: the first index denotes the number of the row where the entry is, and the second one the number of the column. For example,
$$A = (a_{j,k})_{j=1,\,k=1}^{m,\,n} = \begin{pmatrix} a_{1,1} & a_{1,2} & \dots & a_{1,n} \\ a_{2,1} & a_{2,2} & \dots & a_{2,n} \\ \vdots & \vdots & & \vdots \\ a_{m,1} & a_{m,2} & \dots & a_{m,n} \end{pmatrix} \tag{1.1}$$
is a general way to write an m × n matrix. Very often, for a matrix A, the entry in row number j and column number k is denoted by A_{j,k} or (A)_{j,k}, and sometimes, as in example (1.1) above, the same letter in lowercase is used for the matrix entries.

Given a matrix A, its transpose (or transposed matrix) A^T is defined by transforming the rows of A into columns. For example,
$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}^{T} = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}.$$
So, the columns of A^T are the rows of A, and vice versa, the rows of A^T are the columns of A. The formal definition is as follows: (A^T)_{j,k} = (A)_{k,j}, meaning that the entry of A^T in row number j and column number k equals the entry of A in row number k and column number j.

The transpose of a matrix has a very nice interpretation in terms of linear transformations: it gives the so-called adjoint transformation. We will study this in detail later; for now, transposition is just a useful formal operation.

One of the first uses of the transpose is that we can write a column vector x ∈ F^n (recall that F is R or C) as x = (x_1, x_2, ..., x_n)^T. If we put the column vertically, it will use significantly more space.

Exercises.

1.1. Let x = (1, 2, 3)^T, y = (y_1, y_2, y_3)^T, z = (4, 2, 1)^T. Compute 2x, 3y, x + 2y − 3z.

1.2. Which of the following sets (with natural addition and multiplication by a scalar) are vector spaces? Justify your answer.
a) The set of all continuous functions on the interval [0, 1];
b) The set of all non-negative functions on the interval [0, 1];
c) The set of all polynomials of degree exactly n;
d) The set of all symmetric n × n matrices, i.e. the set of matrices A = (a_{j,k})_{j,k=1}^{n} such that A^T = A.

1.3. True or false:
a) Every vector space contains a zero vector;
b) A vector space can have more than one zero vector;
c) An m × n matrix has m rows and n columns;
d) If f and g are polynomials of degree n, then f + g is also a polynomial of degree n;
e) If f and g are polynomials of degree at most n, then f + g is also a polynomial of degree at most n.

1.4. Prove that the zero vector 0 of a vector space V is unique.

1.5. What matrix is the zero vector of the space M_{2×3}?

1.6. Prove that the additive inverse, defined in Axiom 4 of a vector space, is unique.

1.7. Prove that 0v = 0 for any vector v ∈ V.

1.8. Prove that for any vector v its additive inverse −v is given by (−1)v.
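The formal definition (A^T)_{j,k} = (A)_{k,j} can be checked directly on the 2 × 3 example above, and Exercise 1.1 can be verified for sample values of y. A short sketch (numpy is our choice of tool; the values chosen for y are hypothetical, since the exercise leaves y symbolic):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # the 2x3 example from the text
At = A.T                           # its 3x2 transpose

# (A^T)_{j,k} = (A)_{k,j} for every entry (0-based indices here).
for j in range(At.shape[0]):
    for k in range(At.shape[1]):
        assert At[j, k] == A[k, j]

# Exercise 1.1 with concrete sample values for y:
x = np.array([1, 2, 3])
z = np.array([4, 2, 1])
y = np.array([0, 1, 0])            # hypothetical values for y1, y2, y3
print(2 * x, 3 * y, x + 2 * y - 3 * z)   # [2 4 6] [0 3 0] [-11 -2 0]
```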
2. Linear combinations, bases.

Let V be a vector space, and let v_1, v_2, ..., v_p ∈ V be a collection of vectors. A linear combination of the vectors v_1, v_2, ..., v_p is a sum of the form
$$\alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_p v_p = \sum_{k=1}^{p} \alpha_k v_k.$$

Definition 2.1. A system of vectors v_1, v_2, ..., v_n ∈ V is called a basis (for the vector space V) if any vector v ∈ V admits a unique representation as a linear combination
$$v = \alpha_1 v_1 + \alpha_2 v_2 + \dots + \alpha_n v_n = \sum_{k=1}^{n} \alpha_k v_k.$$
The coefficients α_1, α_2, ..., α_n are called the coordinates of the vector v (in the basis, or with respect to the basis, v_1, v_2, ..., v_n).

Another way to say that v_1, v_2, ..., v_n is a basis is to say that for any possible choice of the right-hand side v, the equation x_1 v_1 + x_2 v_2 + ... + x_n v_n = v (with unknowns x_k) has a unique solution.

Before discussing any properties of bases², let us give a few examples, showing that such objects exist and that it makes sense to study them.

² The plural of "basis" is "bases", the same as the plural of "base".

Example 2.2. In the first example the space V is F^n, where F is either R or C. Consider the vectors
$$e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad e_n = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}$$
(the vector e_k has all entries 0 except entry number k, which is 1). The system of vectors e_1, e_2, ..., e_n is a basis in F^n. Indeed, any vector
$$v = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \in F^n$$
can be represented as the linear combination
$$v = x_1 e_1 + x_2 e_2 + \dots + x_n e_n = \sum_{k=1}^{n} x_k e_k,$$
and this representation is unique. The system e_1, e_2, ..., e_n ∈ F^n is called the standard basis in F^n.

Example 2.3. In this example the space is the space P_n of polynomials of degree at most n. Consider the vectors (polynomials) e_0, e_1, e_2, ..., e_n ∈ P_n defined by
e_0 := 1, e_1 := t, e_2 := t^2, e_3 := t^3, ..., e_n := t^n.
Clearly, any polynomial p, p(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n, admits a unique representation
p = a_0 e_0 + a_1 e_1 + ... + a_n e_n.
So the system e_0, e_1, e_2, ..., e_n ∈ P_n is a basis in P_n. We will call it the standard basis in P_n.

Remark 2.4. If a vector space V has a basis v_1, v_2, ..., v_n, then any vector v is uniquely defined by its coefficients in the decomposition v = Σ_{k=1}^{n} α_k v_k. So, if we stack the coefficients α_k in a column, we can operate with them as if they were column vectors, i.e. as elements of F^n (again, here F is either R or C, but everything also works for an abstract field F).

Namely, if v = Σ_{k=1}^{n} α_k v_k and w = Σ_{k=1}^{n} β_k v_k, then
$$v + w = \sum_{k=1}^{n} \alpha_k v_k + \sum_{k=1}^{n} \beta_k v_k = \sum_{k=1}^{n} (\alpha_k + \beta_k) v_k,$$
i.e. to get the column of coordinates of the sum one just needs to add the columns of coordinates of the summands. Similarly, to get the coordinates of αv we simply multiply the column of coordinates of v by α.

2.1. Generating and linearly independent systems. The definit...
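Definition 2.1 says a basis makes the equation x_1 v_1 + ... + x_n v_n = v uniquely solvable for every v, which in F^n means solving the linear system whose matrix has the basis vectors as columns. A minimal sketch (the particular basis and vectors are our own examples; numpy assumed) of computing coordinates this way, including the coordinate arithmetic of Remark 2.4:

```python
import numpy as np

# The columns of B are the basis vectors v1 = (1,0,0)^T, v2 = (1,1,0)^T,
# v3 = (1,1,1)^T, a non-standard basis of R^3 chosen for illustration.
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 4.0])

# Coordinates of v in this basis = the unique solution of B @ alpha = v.
alpha = np.linalg.solve(B, v)
print(alpha)                                  # [-1. -1.  4.]
assert np.allclose(B @ alpha, v)              # v = -v1 - v2 + 4*v3

# Remark 2.4 in action: coordinates of a sum are sums of coordinates.
w = np.array([1.0, 0.0, 2.0])
beta = np.linalg.solve(B, w)
assert np.allclose(np.linalg.solve(B, v + w), alpha + beta)
```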
 
