Vectors and Matrices

Addition of Matrices

Addition of matrices is a basic operation in linear algebra that involves adding corresponding elements of two matrices to obtain a new matrix. The resulting matrix has the same dimensions as the original matrices being added.

Let’s consider an example to understand the addition of matrices. Suppose we have two matrices A and B given by:

A = \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} , 

B = \begin{bmatrix} 9 & 8 & 7\\ 6 & 5 & 4 \\ 3 & 2 & 1 \end{bmatrix}

To add these matrices, we add corresponding elements of the matrices to obtain a new matrix C.

A + B = \begin{bmatrix} 1+9 & 2+8 & 3+7\\ 4+6 & 5+5 & 6+4 \\ 7+3 & 8+2 & 9+1 \end{bmatrix}

C = \begin{bmatrix} 10 & 10 & 10\\ 10 & 10 & 10 \\ 10 & 10 & 10 \end{bmatrix}

If the matrices have different dimensions, addition is not defined.
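As a minimal sketch (assuming NumPy is available), the addition above can be computed directly:

```python
import numpy as np

# The two matrices from the example above
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
B = np.array([[9, 8, 7],
              [6, 5, 4],
              [3, 2, 1]])

# Element-wise addition; A and B must have the same shape
C = A + B
print(C)  # every entry is 10
```

NumPy raises an error if the shapes are incompatible, mirroring the rule that addition is undefined for matrices of different dimensions.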

Scalar Multiplication

Scalar matrix multiplication is the process of multiplying a scalar (a single number) with every element of a matrix. The resulting matrix has the same dimensions as the original matrix, and each element is the product of the scalar and the corresponding element of the original matrix.

In general, for a scalar k and an m \times n matrix \mathbf{A}:
k \cdot \mathbf{A} = \begin{bmatrix}
k \cdot a_{11} & k \cdot a_{12} & \cdots & k \cdot a_{1n} \\
k \cdot a_{21} & k \cdot a_{22} & \cdots & k \cdot a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
k \cdot a_{m1} & k \cdot a_{m2} & \cdots & k \cdot a_{mn}
\end{bmatrix}

Suppose we have the matrix:

 \mathbf{A} = \begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{bmatrix}

And we want to multiply it by the scalar 2.

2 \cdot \mathbf{A} = \begin{bmatrix}
2 & 4 & 6 \\
8 & 10 & 12 \\
14 & 16 & 18
\end{bmatrix}
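The same computation can be sketched in NumPy, where multiplying an array by a scalar scales every element:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
k = 2

# Scalar multiplication: each element of A is multiplied by k
result = k * A
print(result)  # [[2, 4, 6], [8, 10, 12], [14, 16, 18]]
```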

System of Linear Equations

A system of linear equations is a collection of equations involving linear expressions. It can be represented in matrix form as follows:

Ax = b

where A is a matrix of coefficients of the variables, x is a column vector of the variables, and b is a column vector of constants. The system can be written in expanded form as:

\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{bmatrix} =
\begin{bmatrix}
b_1 \\
b_2 \\
\vdots \\
b_m
\end{bmatrix}

Here, the matrix on the left represents the coefficients of the variables, the column vector in the middle represents the variables, and the column vector on the right represents the constants.

\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}

The matrix form of a system of linear equations is useful for solving the system using matrix operations.

x = A^{-1}b

where x is the solution vector of the system, provided A is square and invertible.

Example – Solution of System of Linear Equations:

Consider the system of equations:

\begin{aligned}
2x + y &= 4 \\
x - y &= 1
\end{aligned}

We can represent this system in matrix form as:

\begin{bmatrix}
2 & 1 \\
1 & -1 \\
\end{bmatrix}
\begin{bmatrix}
x \\
y \\
\end{bmatrix}=
\begin{bmatrix}
4 \\
1 \\
\end{bmatrix}

To solve this system by finding the inverse of matrix A, we first check that

det(A) ≠ 0:

det\begin{bmatrix}
2 & 1 \\
1 & -1 \\
\end{bmatrix} = 2(-1) - 1(1) = -3 \neq 0

The inverse is then

A^{-1} = \frac{1}{det(A)}
\begin{bmatrix}
-1 & -1 \\
-1 & 2 \\
\end{bmatrix}
= -\frac{1}{3}
\begin{bmatrix}
-1 & -1 \\
-1 & 2 \\
\end{bmatrix}

Multiplying both sides of the equation Ax = b by A^{-1}, we get:

\begin{bmatrix}
x \\
y \\
\end{bmatrix}=
-\frac{1}{3}
\begin{bmatrix}
-1 & -1 \\
-1 & 2 \\
\end{bmatrix}
\begin{bmatrix}
4 \\
1 \\
\end{bmatrix}=
-\frac{1}{3}
\begin{bmatrix}
-5 \\
-2 \\
\end{bmatrix}=
\begin{bmatrix}
5/3 \\
2/3 \\
\end{bmatrix}

Checking: 2(5/3) + 2/3 = 4 and 5/3 - 2/3 = 1, so the solution satisfies both equations.
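A quick sketch in NumPy confirms this result. In practice `np.linalg.solve` is preferred over forming the inverse explicitly, as it is faster and more numerically stable:

```python
import numpy as np

# Coefficient matrix and constant vector for 2x + y = 4, x - y = 1
A = np.array([[2.0, 1.0],
              [1.0, -1.0]])
b = np.array([4.0, 1.0])

# Solve Ax = b without explicitly computing A^{-1}
x = np.linalg.solve(A, b)
print(x)  # [5/3, 2/3], i.e. approximately [1.6667, 0.6667]
```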

Linear Independence. Rank of a Matrix

The rank of a matrix is the maximum number of linearly independent rows or columns in the matrix.

A =
\begin{bmatrix}
1 & 2 & 3 \\
2 & 4 & 6 \\
1 & 1 & 2 \\
\end{bmatrix}

To find the rank of this matrix, we can use row reduction to transform the matrix into row echelon form. Subtracting 2 times row 1 from row 2 gives a zero row, and subtracting row 1 from row 3 gives [0, -1, -1]:

\begin{bmatrix}
1 & 2 & 3 \\
0 & -1 & -1 \\
0 & 0 & 0 \\
\end{bmatrix}

There are two nonzero rows, so rank(A) = 2.

Alternatively, we can use column reduction to transform the matrix into column echelon form (subtract 2 times column 1 from column 2, subtract 3 times column 1 from column 3, then subtract column 2 from column 3):

\begin{bmatrix}
1 & 0 & 0 \\
2 & 0 & 0 \\
1 & -1 & 0 \\
\end{bmatrix}

Again there are two nonzero columns, confirming that rank(A) = 2.
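As a sketch, NumPy's `np.linalg.matrix_rank` (which computes the rank via the singular value decomposition) gives the same answer:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 1, 2]])

# Rank = number of linearly independent rows (or columns)
r = np.linalg.matrix_rank(A)
print(r)  # 2: row 2 is twice row 1, but row 3 is independent
```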

Linear Transformations

  1. Scaling: Consider a transformation that scales a vector by a factor of 2. For example, if we apply this transformation to the vector [1, 2], we get the new vector [2, 4].
  2. Rotation: Consider a transformation that rotates a vector by 45 degrees counterclockwise. A rotation by an angle θ maps the point (x, y) to (x cos θ − y sin θ, x sin θ + y cos θ). For example, if we apply this transformation to the vector [1, 0], we get the new vector [sqrt(2)/2, sqrt(2)/2]. Geometrically, this transformation rotates the vector counterclockwise by 45 degrees while preserving its length.
  3. Shearing: Consider a transformation that shears a vector in the x-direction by a factor of 2. For example, if we apply this transformation to the vector [1, 1], we get the new vector [3, 1]. Geometrically, this transformation distorts the shape of the vector by stretching it in the x-direction.
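Each of these three transformations can be written as a 2×2 matrix acting on column vectors. A minimal sketch with NumPy:

```python
import numpy as np

theta = np.pi / 4  # 45 degrees

# Scaling by a factor of 2
scale = np.array([[2.0, 0.0],
                  [0.0, 2.0]])
# Counterclockwise rotation by theta
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
# Shear in the x-direction by a factor of 2
shear = np.array([[1.0, 2.0],
                  [0.0, 1.0]])

print(scale @ np.array([1.0, 2.0]))   # [2, 4]
print(rotate @ np.array([1.0, 0.0]))  # [sqrt(2)/2, sqrt(2)/2]
print(shear @ np.array([1.0, 1.0]))   # [3, 1]
```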

Eigenvalues and Eigenvectors

A nonzero vector v is an eigenvector of a square matrix A, with corresponding eigenvalue \lambda, if

Av = \lambda v

that is, A stretches v by the factor \lambda without changing its direction.
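As a sketch, `np.linalg.eig` returns the eigenvalues of a matrix together with a matrix whose columns are the corresponding eigenvectors:

```python
import numpy as np

# A simple symmetric matrix with easy-to-check eigenpairs
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# vals holds the eigenvalues; the columns of vecs are eigenvectors
vals, vecs = np.linalg.eig(A)
print(vals)  # eigenvalues 3 and 1 (order may vary)

# Verify A v = lambda v for each eigenpair
for i in range(len(vals)):
    assert np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i])
```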
