
Review of Vector and Matrix Operations

In engineering, we represent physical quantities using three groups of mathematical objects: scalars, vectors, and tensors. A scalar quantity is represented by a real number with appropriate units (mass, temperature, energy, time, etc.). A vector is an object that has a scalar magnitude and a direction. A vector in three-dimensional space can be written as a linear combination of three base vectors of unit length that point in the positive directions of the three axes; these form the so-called standard orthonormal basis. They are denoted $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ and point parallel to the $x$-, $y$-, and $z$-axes, respectively. Using them, we can write a three-space vector $\mathbf{a}$ in the following form

\begin{displaymath} \mathbf{a}=a_1\mathbf{i}+a_2\mathbf{j}+a_3\mathbf{k} \end{displaymath} (A.1)

where $a_1$, $a_2$, $a_3$ are the scalar components of the vector $\mathbf{a}$ with respect to the standard orthonormal basis.

Note that in printed text, lower-case roman boldface letters are generally used to represent vectors, and subscripted lower-case italic letters represent their components. In handwriting, the vector $\mathbf{a}$ is often written as $\bar{a}$, $\vec{a}$ or $\underset{\sim}{a}$.

A given vector $\mathbf{a}$ can be expressed in matrix form as a $3\times 1$ column matrix whose entries are the components of the vector:

\begin{displaymath} \left[\mathbf{a}\right]=\left[\begin{array}{c} a_1\\ a_2\\ a_3\\ \end{array}\right] \end{displaymath} (A.2)

We normally think of a vector as a column matrix, but a vector may also be written in matrix notation as a $1\times 3$ row matrix:
\begin{displaymath} \left[\mathbf{a}\right]=\left[\begin{array}{ccc} a_1 & a_2 & a_3\\ \end{array}\right] \end{displaymath} (A.3)

Addition of vectors is defined component-wise by
\begin{displaymath} (\mathbf{a}+\mathbf{b})_i=a_i+b_i\quad\text{for all $i$.} \end{displaymath} (A.4)

Multiplication of a vector by a scalar is defined component-wise by
\begin{displaymath} (c\mathbf{a})_i=c\cdot a_i\quad\text{for all $i$.} \end{displaymath} (A.5)

The difference $\mathbf{a}-\mathbf{b}$ is simply $\mathbf{a}+(-1)\mathbf{b}$. Analogous definitions hold for general matrices.
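The component-wise definitions (A.4) and (A.5) can be sketched in a few lines of Python; the helper names below are illustrative, not from the text:

```python
# Minimal sketch of component-wise vector operations (A.4)-(A.5)
# using plain Python lists; works for vectors of any dimension.

def vec_add(a, b):
    """Component-wise sum: (a + b)_i = a_i + b_i."""
    return [ai + bi for ai, bi in zip(a, b)]

def vec_scale(c, a):
    """Scalar multiple: (c a)_i = c * a_i."""
    return [c * ai for ai in a]

def vec_sub(a, b):
    """Difference a - b defined as a + (-1) b."""
    return vec_add(a, vec_scale(-1, b))

a = [1, 2, 3]
b = [4, 5, 6]
print(vec_add(a, b))    # [5, 7, 9]
print(vec_scale(2, a))  # [2, 4, 6]
print(vec_sub(b, a))    # [3, 3, 3]
```

The same component-wise logic applies entry-by-entry to general matrices, as the text notes.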

The above definitions arise from their geometrical usefulness and from obvious analogy to operations on the real numbers. How to define a useful form of multiplication of one vector by another is not so obvious. We define three products of vectors: the dot product (or scalar product), the cross product (or vector product) and the dyadic product (or tensor product). All are products of two vectors, but the products are scalar-, vector-, and tensor-valued.

The dot product $\mathbf{a}\cdot\mathbf{b}$ is given by

\begin{displaymath} \mathbf{a}\cdot\mathbf{b}=[\mathbf{a}]^T[\mathbf{b}]=\left[\begin{array}{ccc} a_1 & a_2 & a_3\\ \end{array}\right]\left[\begin{array}{c} b_1\\ b_2\\ b_3\\ \end{array}\right]=a_1b_1+a_2b_2+a_3b_3, \end{displaymath} (A.6)

where $\left[ \cdot \right]^T$ denotes matrix transpose. The computation shown is actually only a mnemonic; the right-hand side is more properly a $1\times 1$ matrix, but we always interpret the result as a scalar, and the ambiguity rarely causes us trouble. The definition extends to vectors of other dimensions.
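A minimal Python sketch of (A.6) follows; the function name is illustrative, not from the text. As noted above, the definition works in any dimension:

```python
# Dot product (A.6): a . b = a1*b1 + a2*b2 + a3*b3,
# extended naturally to vectors of any matching dimension.
def dot(a, b):
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    return sum(ai * bi for ai, bi in zip(a, b))

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
print(dot([1, 1], [2, 3]))        # 2 + 3 = 5 (two-space vectors)
```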

The cross product $\mathbf{a}\times\mathbf{b}$ is the vector given by the following determinant:

\begin{displaymath} \mathbf{a}\times\mathbf{b}=\left\vert\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ \end{array}\right\vert =\mathbf{i}(a_2b_3-a_3b_2)-\mathbf{j}(a_1b_3-a_3b_1)+\mathbf{k}(a_1b_2-a_2b_1) \end{displaymath} (A.7)

The result is a three-component vector. Note that the $\mathbf{j}$-term is negated. Note also that $\mathbf{a}\times\mathbf{b}=-\mathbf{b}\times\mathbf{a}$. The definition applies only to vectors in three-space. Finally, we define the dyadic product $\mathbf{a}\otimes\mathbf{b}$ by
\begin{displaymath} \left[\mathbf{a}\otimes\mathbf{b}\right]\equiv\left[\mathbf{a}\right]\left[\mathbf{b}\right]^T =\left[\begin{array}{c} a_1\\ a_2\\ a_3\\ \end{array}\right]\left[\begin{array}{ccc} b_1 & b_2 & b_3\\ \end{array}\right] =\left[\begin{array}{ccc} a_1b_1 & a_1b_2 & a_1b_3\\ a_2b_1 & a_2b_2 & a_2b_3\\ a_3b_1 & a_3b_2 & a_3b_3\\ \end{array}\right] \end{displaymath} (A.8)

The result is a square matrix, also called a second-order tensor (often simply ``tensor'') in mechanics contexts. Clearly, $\mathbf{b}\otimes\mathbf{a}=(\mathbf{a}\otimes\mathbf{b})^T$. Note that $\left(\mathbf{a}\otimes\mathbf{b}\right)\mathbf{c}=\mathbf{a}\left(\mathbf{b}\cdot\mathbf{c}\right)$ for all vectors $\mathbf{c}$. The above observation allows us to define a (second-order) tensor to be a linear transformation that maps vectors to vectors: if $\mathbf{A}$ denotes a tensor, then when $\mathbf{A}$ operates on a vector $\mathbf{b}$ it maps it to another vector given by
\begin{displaymath} \mathbf{A}\mathbf{b}=\left[\mathbf{A}\right]\left[\mathbf{b}\right] =\left[\begin{array}{ccc} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\\ \end{array}\right]\left[\begin{array}{c} b_1\\ b_2\\ b_3\\ \end{array}\right] =\left[\begin{array}{c} A_{11}b_1+A_{12}b_2+A_{13}b_3\\ A_{21}b_1+A_{22}b_2+A_{23}b_3\\ A_{31}b_1+A_{32}b_2+A_{33}b_3\\ \end{array}\right] \end{displaymath} (A.9)

Similarly, a vector may multiply a tensor from the left, yielding a row matrix:
\begin{displaymath} \mathbf{b}\mathbf{A}=\left[\mathbf{b}\right]^T\left[\mathbf{A}\right] =\left[\begin{array}{ccc} b_1 & b_2 & b_3\\ \end{array}\right]\left[\begin{array}{ccc} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\\ \end{array}\right] =\left[\begin{array}{ccc} A_{11}b_1+A_{21}b_2+A_{31}b_3 & A_{12}b_1+A_{22}b_2+A_{32}b_3 & A_{13}b_1+A_{23}b_2+A_{33}b_3\\ \end{array}\right] \end{displaymath} (A.10)

Higher order tensors can be similarly defined; for example, a third-order tensor maps a vector to a second-order tensor, etc.
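The products above can be checked numerically. The Python sketch below (helper names are illustrative, not from the text) verifies the antisymmetry $\mathbf{a}\times\mathbf{b}=-\mathbf{b}\times\mathbf{a}$ and the identity $(\mathbf{a}\otimes\mathbf{b})\mathbf{c}=\mathbf{a}(\mathbf{b}\cdot\mathbf{c})$:

```python
def cross(a, b):
    """Cross product (A.7); note the negated j-component. 3-space only."""
    return [a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0]]

def dyad(a, b):
    """Dyadic product (A.8): 3x3 matrix with entries a_i * b_j."""
    return [[ai * bj for bj in b] for ai in a]

def mat_vec(A, b):
    """Tensor acting on a vector, row-by-row as in (A.9)."""
    return [sum(Aij * bj for Aij, bj in zip(row, b)) for row in A]

def dot(a, b):
    """Dot product (A.6)."""
    return sum(ai * bi for ai, bi in zip(a, b))

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 9]
# Antisymmetry: a x b = -(b x a)
assert cross(a, b) == [-x for x in cross(b, a)]
# (a (x) b) c = a (b . c)
assert mat_vec(dyad(a, b), c) == [ai * dot(b, c) for ai in a]
```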

After introducing the vector operations, one can easily introduce vector calculus by defining the del operator, denoted by $\nabla$. $\nabla$ is a vector operator that ``obeys'' (in a mnemonic sense) the multiplication rules for vectors and operates on the object that follows it. In Cartesian rectangular coordinates $\nabla$ is given by

\begin{displaymath} \nabla\equiv\mathbf{i}\frac{\partial}{\partial x}+\mathbf{j}\frac{\partial}{\partial y}+\mathbf{k}\frac{\partial}{\partial z}. \end{displaymath} (A.11)

This is not a proper three-space vector (its components are differential operators, not real numbers), but the vector notation lets us write compact formulas for the derivatives we wish to define. The divergence of the vector-valued function $\mathbf{v}$ is denoted $\nabla\cdot\mathbf{v}$, the curl of $\mathbf{v}$ is denoted $\nabla\times\mathbf{v}$, and the gradient of the scalar function $f$ is denoted $\nabla f$. They are defined as follows:

\begin{displaymath} \nabla\cdot\mathbf{v}=\frac{\partial v_1}{\partial x}+\frac{\partial v_2}{\partial y}+\frac{\partial v_3}{\partial z},\qquad \nabla\times\mathbf{v}=\left\vert\begin{array}{ccc} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z}\\ v_1 & v_2 & v_3\\ \end{array}\right\vert,\qquad \nabla f=\left[\begin{array}{c} \frac{\partial f}{\partial x}\\ \frac{\partial f}{\partial y}\\ \frac{\partial f}{\partial z}\\ \end{array}\right]. \end{displaymath} (A.12)

In each case, we define the operation by treating $\nabla$ as a vector and computing the ``product'' indicated by the notation: dot, cross, and scalar, respectively. Thus the results of these operations are scalar, vector, and vector, respectively.
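These operations can be checked numerically. The sketch below is an illustrative assumption, not part of the text: it picks the test fields $\mathbf{v}=(xy,\,yz,\,zx)$ and $f=xyz$ and approximates the partial derivatives with central differences:

```python
# Numerical check of divergence, curl, and gradient (A.12) using
# central differences; the test fields v = (xy, yz, zx) and f = xyz
# are arbitrary choices for illustration.
h = 1e-5

def d(g, p, i):
    """Central-difference partial of g with respect to coordinate i at p."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (g(*q1) - g(*q2)) / (2 * h)

def v1(x, y, z): return x * y
def v2(x, y, z): return y * z
def v3(x, y, z): return z * x
def f(x, y, z):  return x * y * z

p = (1.0, 2.0, 3.0)
div_v = d(v1, p, 0) + d(v2, p, 1) + d(v3, p, 2)  # analytically y + z + x = 6
curl_v = [d(v3, p, 1) - d(v2, p, 2),             # -y = -2
          d(v1, p, 2) - d(v3, p, 0),             # -z = -3
          d(v2, p, 0) - d(v1, p, 1)]             # -x = -1
grad_f = [d(f, p, 0), d(f, p, 1), d(f, p, 2)]    # (yz, xz, xy) = (6, 3, 2)
```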

These definitions apply to vectors represented with rectangular coordinates. The mnemonic formulas remain the same when we change coordinate systems, but the del operator changes. For example in cylindrical coordinates $(\,r,\theta,z\,)$ the del operator is given by

\begin{displaymath} \nabla\equiv\mathbf{e}_r\frac{\partial}{\partial r}+\mathbf{e}_{\theta}\frac{1}{r}\frac{\partial}{\partial\theta}+\mathbf{e}_z\frac{\partial}{\partial z} \end{displaymath} (A.13)

where $\mathbf{e}_r$ is the unit vector in the direction of increasing $r$, $\mathbf{e}_{\theta}$ is the unit vector in the direction of increasing $\theta$, and $\mathbf{e}_z$ is the unit vector in the direction of increasing $z$ (this is the same as the unit vector $\mathbf{k}$). These three vectors form an orthonormal basis for the cylindrical coordinate system. The vectors $\mathbf{e}_r$ and $\mathbf{e}_{\theta}$ are variable with respect to $\theta$. Because of this, their derivatives must be accounted for when the above differential operations are carried out. Try some examples with Scientific Workplace to determine the difference in vector operations between Cartesian rectangular and cylindrical or spherical coordinate systems.
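As an illustration of how these basis-vector derivatives enter the formulas, the standard result for the divergence in cylindrical coordinates (stated here without derivation) is

\begin{displaymath} \nabla\cdot\mathbf{v}=\frac{1}{r}\frac{\partial\left(r\,v_r\right)}{\partial r}+\frac{1}{r}\frac{\partial v_{\theta}}{\partial\theta}+\frac{\partial v_z}{\partial z}, \end{displaymath}

where the extra $v_r/r$ term hidden in the first derivative arises from $\partial\mathbf{e}_r/\partial\theta=\mathbf{e}_{\theta}$ and $\partial\mathbf{e}_{\theta}/\partial\theta=-\mathbf{e}_r$.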

