Linear Algebra for Machine Learning
Vikash Srivastava, Nov 18, 2020

Table of Contents
- Introduction to Vectors
  - Introduction to Linear Algebra
  - What is a vector?
  - Introduction to Vectors
  - Scaling Vectors
  - Vector Addition
  - Adding Vectors Geometrically
  - Vector Subtraction
  - Dot Product Insight
  - Vector Projections
  - Orthogonality Illustrated
  - Cross Product Insight
  - Vector Norms
- Introduction to Matrices
  - Theory of Matrices
  - Determinant of a matrix
  - Inverse of a matrix
  - Eigenvalues & Eigenvectors
Introduction to Vectors
This chapter discusses the central concepts of linear algebra, with a heavy focus on vectors.
1. Introduction to Linear Algebra
2. What is a vector?
3. Introduction to Vectors
4. Scaling Vectors
5. Vector Addition
6. Adding Vectors Geometrically
7. Vector Subtraction
8. Dot Product Insight
9. Vector Projections
10. Orthogonality Illustrated
11. Cross Product Insight
12. Vector Norms
Introduction to Linear Algebra
What is Linear Algebra?
Linear algebra is the branch of mathematics concerned with mathematical structures that are closed under the operations of addition and scalar multiplication. It includes the theory of systems of linear equations, matrices, determinants, vector spaces, and linear transformations. In short, it is the mathematical machinery behind many data science algorithms and applications, and its fundamentals are an essential prerequisite for fully understanding machine learning.
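To make one of the topics named above concrete, here is a minimal sketch (plain Python, no libraries) of solving a 2×2 system of linear equations via Cramer's rule, which also uses a determinant; the function name `solve_2x2` is just an illustrative choice, not something from the original text.

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve the linear system  a*x + b*y = e,  c*x + d*y = f."""
    det = a * d - b * c          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution: determinant is zero")
    x = (e * d - b * f) / det    # Cramer's rule: replace first column by (e, f)
    y = (a * f - e * c) / det    # Cramer's rule: replace second column by (e, f)
    return x, y

# Example: 2x + y = 5 and x - y = 1 has the unique solution x = 2, y = 1.
print(solve_2x2(2, 1, 1, -1, 5, 1))  # (2.0, 1.0)
```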
Why do you need to learn Linear Algebra?
Linear algebra is a foundation of machine learning, so it pays to build a solid understanding of it before you start studying machine learning itself. If you are a practitioner of machine learning, this post will help you see where linear algebra is applied and how you can benefit from those insights.
In machine learning, data is most often represented as vectors, matrices, or tensors; machine learning therefore relies heavily on linear algebra.
- A vector is a one-dimensional array. For instance, a point in space can be defined as a vector of three coordinates (x, y, z). A vector is usually understood to have both a magnitude and a direction.
- A matrix is a two-dimensional array of numbers with a fixed number of rows and columns; it contains one number at the intersection of each row and column. A matrix is usually denoted by square brackets [].
- A tensor is a generalization of vectors and matrices: a one-dimensional tensor is a vector, and a two-dimensional tensor is a matrix. A three-dimensional tensor can represent, for example, an RGB image (height × width × color channels), and the idea extends to four dimensions and beyond.
Typical applications of linear algebra in machine learning include:
- Coordinate Transformations
- Linear Regression
- Dimensionality Reduction
- Natural Language Processing
- Computer Vision
- Network Graphs
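The three data structures described above can be sketched in plain Python (nested lists stand in for the arrays; in practice a library such as NumPy would be used):

```python
# A vector: a 1D array, e.g. a point in space given by coordinates (x, y, z).
v = [1.0, 2.0, 3.0]

# A matrix: a 2D array with a fixed number of rows and columns.
m = [[1, 2, 3],
     [4, 5, 6]]   # 2 rows, 3 columns

# A tensor generalizes both: here a 3D tensor shaped like a tiny
# 2x2 RGB image (height x width x color channels).
t = [[[255, 0, 0], [0, 255, 0]],
     [[0, 0, 255], [255, 255, 255]]]

print(len(v))                            # 3      (vector length)
print(len(m), len(m[0]))                 # 2 3    (rows, columns)
print(len(t), len(t[0]), len(t[0][0]))   # 2 2 3  (height, width, channels)
```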
Theory of Matrices

