Coordinate vector

In linear algebra, a coordinate vector is a representation of a vector as an ordered list of numbers (a tuple) that describes the vector in terms of a particular ordered basis.[1] An easy example is a position such as (5, 2, 1) in a 3-dimensional Cartesian coordinate system, with the axes of the system as the basis. Coordinates are always specified relative to an ordered basis. Bases and their associated coordinate representations let one realize vector spaces and linear transformations concretely as column vectors, row vectors, and matrices; hence, they are useful in calculations.
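
For instance, with the three coordinate axes as the ordered basis, the position (5, 2, 1) is just the list of coefficients in the combination 5·e1 + 2·e2 + 1·e3. A minimal NumPy sketch of this idea (the variable names are ours, not part of the article):

    import numpy as np

    # The standard basis of R^3: the coordinate axes of the Cartesian system.
    e1, e2, e3 = np.eye(3)

    # The coordinates (5, 2, 1) are the coefficients of the basis vectors.
    v = 5 * e1 + 2 * e2 + 1 * e3
    print(v)  # [5. 2. 1.]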

The idea of a coordinate vector can also be used for infinite-dimensional vector spaces, as addressed below.

Definition

Let V be a vector space of dimension n over a field F and let

    B = \{ b_1, b_2, \ldots, b_n \}

be an ordered basis for V. Then for every v ∈ V there is a unique linear combination of the basis vectors that equals v:

    v = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n .

The coordinate vector of v relative to B is the sequence of coordinates

    [v]_B = (\alpha_1, \alpha_2, \ldots, \alpha_n) .

This is also called the representation of v with respect to B, or the B representation of v. The coefficients α_1, …, α_n are called the coordinates of v. The order of the basis becomes important here, since it determines the order in which the coefficients are listed in the coordinate vector.

Coordinate vectors of finite-dimensional vector spaces can be represented by matrices as column or row vectors. In the above notation, one can write

    [v]_B = \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix}

and

    [v]_B^T = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_n \end{bmatrix} ,

where [v]_B^T is the transpose of the matrix [v]_B.
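
As a concrete numerical illustration (a minimal NumPy sketch; the basis of R^3 and the vector v below are arbitrary choices for the example, not taken from the article), the coordinate column vector [v]_B can be computed by solving the linear system whose coefficient matrix has the basis vectors as columns:

    import numpy as np

    # Ordered basis B = (b1, b2, b3) of R^3, stacked as columns (illustrative choice).
    b1 = np.array([1.0, 0.0, 0.0])
    b2 = np.array([1.0, 1.0, 0.0])
    b3 = np.array([1.0, 1.0, 1.0])
    B_mat = np.column_stack([b1, b2, b3])

    v = np.array([5.0, 2.0, 1.0])

    # Solve B_mat @ alpha = v; the solution alpha is the coordinate vector [v]_B.
    alpha = np.linalg.solve(B_mat, v)
    print(alpha)          # [3. 1. 1.]
    print(B_mat @ alpha)  # [5. 2. 1.]  -- reconstructs v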

The standard representation

We can mechanize the above transformation by defining a function φ_B, called the standard representation of V with respect to B, that takes every vector to its coordinate representation: φ_B(v) = [v]_B. Then φ_B is a linear transformation from V to F^n. In fact, it is an isomorphism, and its inverse φ_B^{-1} : F^n → V is simply

    \varphi_B^{-1}(\alpha_1, \ldots, \alpha_n) = \alpha_1 b_1 + \cdots + \alpha_n b_n .

Alternatively, we could have defined φ_B^{-1} to be the above function from the beginning, realized that φ_B^{-1} is an isomorphism, and defined φ_B to be its inverse.
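
A minimal sketch of these two maps in code (the same illustrative basis as above; the names phi_B and phi_B_inv are ours, not standard library functions):

    import numpy as np

    # Columns of B_mat are the ordered basis vectors b1, b2, b3 of R^3.
    B_mat = np.column_stack([[1.0, 0.0, 0.0],
                             [1.0, 1.0, 0.0],
                             [1.0, 1.0, 1.0]])

    def phi_B(v):
        """Standard representation: v -> [v]_B."""
        return np.linalg.solve(B_mat, v)

    def phi_B_inv(alpha):
        """Inverse map: (a1, ..., an) -> a1*b1 + ... + an*bn."""
        return B_mat @ alpha

    v = np.array([5.0, 2.0, 1.0])
    assert np.allclose(phi_B_inv(phi_B(v)), v)  # the round trip recovers v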

Examples

Example 1

Let P_3 be the space of all the algebraic polynomials of degree at most 3 (i.e. the highest exponent of x can be 3). This space is linear and spanned by the following polynomials:

    B_P = \{ 1, x, x^2, x^3 \} ,

matching

    b_1 := 1, \quad b_2 := x, \quad b_3 := x^2, \quad b_4 := x^3 ;

then the coordinate vector corresponding to the polynomial

    p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3

is

    [p]_{B_P} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} .

According to that representation, the differentiation operator d/dx, which we shall mark D, will be represented by the following matrix:

    D p(x) = p'(x) , \qquad [D] = \begin{bmatrix}
        0 & 1 & 0 & 0 \\
        0 & 0 & 2 & 0 \\
        0 & 0 & 0 & 3 \\
        0 & 0 & 0 & 0
    \end{bmatrix} .

Using that representation, it is easy to explore properties of the operator such as invertibility; whether it is Hermitian, anti-Hermitian, or neither; its spectrum and eigenvalues; and more.
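
A short NumPy sketch of this example (the polynomial p(x) = 1 + 2x + 3x² + 4x³ is an arbitrary illustrative choice):

    import numpy as np

    # Matrix of d/dx on P_3 relative to the ordered basis (1, x, x^2, x^3).
    D = np.array([[0, 1, 0, 0],
                  [0, 0, 2, 0],
                  [0, 0, 0, 3],
                  [0, 0, 0, 0]], dtype=float)

    # Coordinate vector of p(x) = 1 + 2x + 3x^2 + 4x^3.
    p = np.array([1, 2, 3, 4], dtype=float)

    print(D @ p)                 # [ 2.  6. 12.  0.]  -> p'(x) = 2 + 6x + 12x^2
    print(np.linalg.eigvals(D))  # all zero: D is nilpotent, hence not invertible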

Example 2

The Pauli matrices represent the spin operator when the spin eigenstates are transformed into vector coordinates.
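
For instance, the matrix of the spin-z operator relative to the ordered basis of its own eigenstates (spin-up, spin-down) is the Pauli matrix σ_z, in units where ħ/2 = 1 (a minimal NumPy sketch under that assumption):

    import numpy as np

    # Pauli matrix sigma_z: the spin-z operator expressed in the basis of its
    # eigenstates, in units where hbar/2 = 1 (an assumption made for this sketch).
    sigma_z = np.array([[1, 0],
                        [0, -1]], dtype=complex)

    # Coordinate vectors of the eigenstates relative to that basis.
    up = np.array([1, 0], dtype=complex)     # spin-up
    down = np.array([0, 1], dtype=complex)   # spin-down

    print(sigma_z @ up)    # equals +1 * up   (eigenvalue +1)
    print(sigma_z @ down)  # equals -1 * down (eigenvalue -1)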

Basis transformation matrix

Let B and C be two different bases of a vector space V, and let us mark with [M]_C^B the matrix which has columns consisting of the C representation of the basis vectors b_1, b_2, …, b_n:

    [M]_C^B = \begin{bmatrix} [b_1]_C & \cdots & [b_n]_C \end{bmatrix} .

This matrix is referred to as the basis transformation matrix from B to C. It can be regarded as an automorphism over F^n. Any vector v represented in B can be transformed to a representation in C as follows:

    [v]_C = [M]_C^B \, [v]_B .

Under the transformation of basis, notice that the superscript on the transformation matrix, M, and the subscript on the coordinate vector, v, are the same, and seemingly cancel, leaving the remaining subscript. While this may serve as a memory aid, it is important to note that no such cancellation, or similar mathematical operation, is taking place.
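
A minimal NumPy sketch of this change of coordinates (both bases of R² below are arbitrary illustrative choices; B_mat and C_mat hold the basis vectors as columns in standard coordinates):

    import numpy as np

    # Basis vectors of B and C, written as columns with respect to the standard basis.
    B_mat = np.column_stack([[1.0, 1.0], [1.0, -1.0]])   # b1, b2
    C_mat = np.column_stack([[2.0, 0.0], [0.0, 3.0]])    # c1, c2

    # Column j of M is [b_j]_C, obtained by solving C_mat @ x = b_j for each b_j.
    M = np.linalg.solve(C_mat, B_mat)   # basis transformation matrix from B to C

    v_B = np.array([4.0, 2.0])          # [v]_B, an arbitrary coordinate vector
    v_C = M @ v_B                       # [v]_C

    # Both coordinate vectors describe the same underlying vector v.
    assert np.allclose(B_mat @ v_B, C_mat @ v_C)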

Corollary

The matrix M is an invertible matrix and M^{-1} is the basis transformation matrix from C to B. In other words,

    \mathrm{Id} = [M]_C^B [M]_B^C = [M]_C^C ,
    \mathrm{Id} = [M]_B^C [M]_C^B = [M]_B^B .
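
Continuing the sketch above (same illustrative bases), this corollary can be checked numerically:

    import numpy as np

    B_mat = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
    C_mat = np.column_stack([[2.0, 0.0], [0.0, 3.0]])

    M_C_B = np.linalg.solve(C_mat, B_mat)   # transformation from B to C
    M_B_C = np.linalg.solve(B_mat, C_mat)   # transformation from C to B

    assert np.allclose(M_C_B @ M_B_C, np.eye(2))      # composition is the identity
    assert np.allclose(np.linalg.inv(M_C_B), M_B_C)   # the inverse goes from C back to B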

Infinite-dimensional vector spaces

Suppose V is an infinite-dimensional vector space over a field F. If the dimension is κ, then there is some basis of κ elements for V. After an order is chosen, the basis can be considered an ordered basis. The elements of V are finite linear combinations of elements in the basis, which give rise to unique coordinate representations exactly as described before. The only change is that the indexing set for the coordinates is not finite. Since a given vector v is a finite linear combination of basis elements, the only nonzero entries of the coordinate vector for v will be the nonzero coefficients of the linear combination representing v. Thus the coordinate vector for v is zero except in finitely many entries.
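
One simple way to model such finitely supported coordinate vectors in code is a sparse mapping from basis index to coefficient (a minimal Python sketch; the space of all polynomials with the ordered basis 1, x, x², … is our illustrative choice):

    # Coordinate vector of a polynomial relative to the infinite ordered basis
    # (1, x, x^2, ...): a dict storing only the finitely many nonzero coefficients.
    p = {0: 5.0, 2: -1.0, 7: 3.0}     # represents 5 - x^2 + 3x^7

    def add(p, q):
        """Coordinatewise addition; the result is again zero except in finitely many entries."""
        out = dict(p)
        for k, c in q.items():
            out[k] = out.get(k, 0.0) + c
        return {k: c for k, c in out.items() if c != 0.0}

    q = {2: 1.0, 3: 2.0}              # represents x^2 + 2x^3
    print(add(p, q))                  # {0: 5.0, 7: 3.0, 3: 2.0}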

The linear transformations between (possibly) infinite-dimensional vector spaces can be modeled, analogously to the finite-dimensional case, with infinite matrices. The special case of the transformations from V into V is described in the full linear ring article.

References

  1. Howard Anton; Chris Rorres (12 April 2010). Elementary Linear Algebra: Applications Version. John Wiley & Sons. ISBN 978-0-470-43205-1.