Merge pull request #30460 from toknow-gh/ai-2

Translated
This commit is contained in:
Xingyu.Wang 2023-11-14 09:36:34 +08:00 committed by GitHub
commit bb97316b84
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
2 changed files with 105 additions and 105 deletions


@ -1,105 +0,0 @@
[#]: subject: "AI: Some History and a Lot More about Matrices"
[#]: via: "https://www.opensourceforu.com/2022/06/ai-some-history-and-a-lot-more-about-matrices/"
[#]: author: "Deepu Benson https://www.opensourceforu.com/author/deepu-benson/"
[#]: collector: "lkxed"
[#]: translator: "toknow-gh"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
AI: Some History and a Lot More about Matrices
======
![AI-Some-History-and-a-Lot-More-about-Matrices][1]
*In the first article in this series on AI (published in OSFY, June 2022), we discussed how fields like AI, machine learning, deep learning, data science, etc., are related to yet distinct from each other. We also made a few difficult choices regarding the programming language, tools, etc., which will be used throughout this series. Finally, we started discussing matrices. In this article, we will discuss matrix theory, the heart of AI, in more detail. But before we proceed any further, let us discuss the history of AI.*
First of all, why is a discussion on the history of AI essential? Well, there have been a few times in history when there was great enthusiasm around AI research, but on many of those occasions the huge expectations about its potential were not met. So, we need to learn some history of AI to understand whether this time we really are going to work wonders with AI, or whether it is yet another bubble that is about to burst.
One important question that needs to be answered is: when did our fascination with artificial intelligence start? Was it after the invention of digital computers, or did it start even before? I believe the quest for an omniscient being capable of answering all questions dates back to the beginning of civilisation. The Oracle of Delphi in ancient times was one such supposedly divine being, capable of answering all the queries people asked. The quest for creative machines that could outwit human intelligence has also fascinated us since ancient times. There were several failed attempts to make machines capable of playing chess. For example, the Mechanical Turk was an infamous fake chess-playing machine constructed in the late 18th century. The invention of logarithms by John Napier, Pascal's calculator by Blaise Pascal, the difference engine by Charles Babbage, etc., were all predecessors to the development of artificial intelligence. Further, if you go through the history of mankind, you will find countless other moments, real and imaginary, when people hoped to achieve intelligence beyond the reach of the human brain. However, notwithstanding all these achievements, the search for true AI began with the invention of digital computers.
If that is the case, the important question is: what have been some of the milestones in the history of AI up until today? As mentioned earlier, the invention of digital computers is the single most significant event in the search for artificial intelligence. Unlike electro-mechanical devices, whose scalability depended on power requirements, digital devices benefited from technological advancements like the transition from vacuum tubes to transistors to integrated circuits to present-day very large-scale integration (VLSI) technology.
Another important phase in the development of AI can be traced back to the first theoretical analysis of AI by Alan Turing. The Turing test he proposed suggests one of the earliest methods to test artificial intelligence. The test may not be relevant today, but it was one of the first attempts to define artificial intelligence. It can be summarised as follows. Behind a curtain, there is either a person (A) or a machine (B). You can converse with him/it for, say, 30 minutes, after which you should be able to identify whether you were conversing with a person or a machine. With the powerful chatbots of today, it is easy to see that such a test will not identify true AI. But in the early 1950s this did give a theoretical framework for understanding AI.
In the late 1950s, a programming language called Lisp was developed by John McCarthy. It was one of the earliest high-level programming languages. Until then, machine languages and assembly languages (notoriously difficult to use) were used for computer programming. With a very powerful machine, a very powerful language to communicate with it, and computer scientists who were optimists and dreamers, the hope for artificially intelligent machines seemed reasonable for the first time in the history of humanity. In the early 1960s, the hopes for artificially intelligent machines were at their peak. Of course, there were a lot of developments in the computing world, but were there any wonders? Sadly, no. So the 1960s saw both hope and despair about AI for the first time. Yet computer science kept developing at a pace unparalleled in any other field of human endeavour.
Then came the 70s and the 80s, when algorithms played a major role. A lot of new and efficient algorithms were developed during this period. The publication in the late 1960s of the first volume of The Art of Computer Programming by Donald Knuth (I request you to read about this person; in the world of computer science he is the Gauss or the Euler) marked the beginning of the algorithmic era. A lot of general-purpose and graph algorithms were developed during these years. Further, programming based on artificial neural networks was also introduced during this time. Though McCulloch and Pitts were the pioneers, proposing artificial neural networks as far back as the 1940s, it took decades for them to become a mainstream technique. Today, deep learning is based almost exclusively on artificial neural networks. Such developments in the field of algorithms led to a resurgence of research in artificial intelligence in the 1980s. However, this time, limitations in communication and computing power derailed progress in the area. For a second time, AI couldn't live up to the huge expectations it had generated. Then came the 90s, Y2K, and the present day. Once again there is a lot of enthusiasm and hope about the positive impact of AI-based applications.
Based on our discussion, it is easy to see that there have been at least two other occasions in the digital era when AI looked promising, and on both of those occasions AI didn't live up to its expectations. Is the current enthusiasm around AI a similar one? Of course, this is a very difficult question to answer. But in my personal opinion, this time AI is going to make a huge impact. What are the factors that lead me to make such a prediction? First, very powerful computing devices are cheaply available nowadays. Unlike in the 1960s or 1980s, when only a handful of such powerful machines existed, we now have them in the millions and billions. Second, large data sets are available today for training AI and machine learning based applications. Imagine an AI engineer working in the field of digital image processing in the 90s. How many digital images were available for training his/her algorithms? Maybe a few thousand or tens of thousands. Today, the data science platform Kaggle (a subsidiary of Google) alone has more than ten thousand data sets. The large amount of data produced on the Internet every day makes training algorithms a lot easier. Third, high-speed connectivity has made it easy to collaborate in large groups. Imagine the difficulty computer scientists had collaborating with each other until the first decade of the 21st century. The present-day speed of the Internet has made collaborative AI projects like Google Colab, Kaggle, Project Jupyter, etc., a reality. Owing to these three factors, I believe this time AI is here for good and will have some excellent applications.
![Matrices A, B, C and D][2]
### More about matrices
Now that we are somewhat familiar with the history of AI, it is time to revisit one of the main themes of this tutorial: matrices and vectors. In the last article, we began discussing these. This time we are going to dive a little deeper into the world of matrix theory. I will begin by pointing your attention to Figures 1 and 2, which show eight matrices: A, B, C and D in Figure 1, and E, F, G and H in Figure 2. Why so many matrices in a tutorial on AI and machine learning? First, as mentioned in the previous article in this series, matrices are the core of linear algebra, and linear algebra is the heart of machine learning, if not the brain. Second, each of these matrices serves a specific purpose in the discussion that follows.
![Matrices E, F, G and H][3]
Let us see how matrices can be represented and how some details about them can be obtained. Figure 3 shows how matrix A is represented with NumPy. Though they are not equivalent, we use the terms matrix and array as if they are synonyms.
![Matrix A with NumPy][4]
I urge you to carefully learn how a matrix is created using the function array of NumPy. There is also a function matrix offered by NumPy to create two-dimensional arrays and matrices. However, this function will be deprecated in the future and its use is no longer encouraged. We have created matrix A, and Figure 3 also shows some details about it. The line of code A.size tells us the number of elements in the array; in this case, it is 9. The line of code A.ndim tells us the dimension of the array; in this case, it is easy to see that matrix A is two-dimensional. The line of code A.shape gives us the order of matrix A. The order of a matrix is the number of rows and columns of that matrix. Notice that A is a 3 x 3 matrix (3 rows and 3 columns). Though I am not explaining it any further, knowing the size, dimension, and shape of an array can be a little tricky with NumPy. Figure 4 shows why the size, dimension and shape of a matrix should be identified carefully: minor changes while defining an array can lead to changes in its size, shape, and dimension. Hence, it is clear from Figure 4 that a programmer should be extra careful while defining matrices.
![Size, dimension and shape of an array][5]
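To make this concrete, here is a minimal sketch of what Figures 3 and 4 illustrate; the element values of A are my own illustrative choices, since the figures are not reproduced in text form:
```
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(A.size)   # 9 -> the number of elements
print(A.ndim)   # 2 -> the number of dimensions
print(A.shape)  # (3, 3) -> the order: 3 rows, 3 columns

# Minor changes in the definition change size, ndim and shape:
u = np.array([1, 2, 3])    # shape (3,),  ndim 1: a one-dimensional array
v = np.array([[1, 2, 3]])  # shape (1, 3), ndim 2: a 1 x 3 matrix
```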
Now let us perform some basic matrix operations. Figure 5 shows how two matrices A and B are added. Indeed, there are two distinct Pythonic ways to add them: the function add of NumPy or the plus operator (+). A very important point to remember is that two matrices can be added only if they are of the same order. For example, two 4 x 3 matrices can be added, whereas a 3 x 4 matrix and a 2 x 3 matrix cannot. However, since programming is different from mathematics, NumPy does not follow this rule in practice. Figure 5 also shows how matrices A and D are added. Remember that, in the mathematical world, this sort of matrix addition is invalid. A rule called broadcasting decides how matrices of different orders should be added. We won't be discussing broadcasting at this point, but if you are familiar with C or C++, then typecasting of variables might give you a hint as to what happens when broadcasting rules are applied in Python. So, if you want to be sure that you are performing matrix addition in the true mathematical sense, it is safest to perform the following test and confirm that the answer is True:
![Matrix addition][6]
```
A.shape == B.shape
```
Broadcasting is not all-powerful, though: you can verify that trying to add matrices D and H gives you an error.
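The following sketch pulls these rules together; the values of A, B and D are assumptions standing in for the matrices in the figures:
```
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = np.array([[9, 8, 7], [6, 5, 4], [3, 2, 1]])
D = np.array([10, 20, 30])  # shape (3,)

print(A.shape == B.shape)   # True: addition is valid in the mathematical sense
print(np.add(A, B))         # identical to A + B

# Broadcasting: D is stretched across the rows of A, so NumPy accepts
# this even though it is not valid matrix addition in mathematics.
print(A + D)
```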
Please don't think that matrix addition is the only operation we are going to spend time on; there are a lot more matrix operations. Matrix subtraction and multiplication are illustrated in Figure 6. Notice that subtraction and multiplication also have two distinct Pythonic ways to perform the required operations: the function subtract or the matrix subtraction operator (-), and the function matmul or the matrix multiplication operator (@), respectively. Execution of the element-wise multiplication operator * is also shown in Figure 6. Notice that only the function matmul of NumPy and the operator @ perform matrix multiplication in the mathematical sense. So, be careful with the operator * while dealing with matrices.
![More matrix operations][7]
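Here is a short illustration of the three operations, again with assumed values:
```
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(np.subtract(A, B))  # identical to A - B
print(np.matmul(A, B))    # identical to A @ B: true matrix multiplication
print(A * B)              # the element-wise product, NOT matrix multiplication
```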
Two matrices of order (m x n) and (p x q) can be multiplied if and only if n is equal to p. Further, the product matrix will be of order (m x q). Figure 7 shows more examples of matrices being multiplied. Notice that E@A is possible whereas A@E leads to an error. Carefully go through the examples of D@G and G@D. Using the shape attribute, identify which among the eight matrices can be multiplied with each other. Though, by the strict mathematical definition, a matrix is two-dimensional, we will be dealing with arrays of higher dimensions. So, just to give an example, let us see how a three-dimensional array is represented using NumPy. The following line of code creates a three-dimensional array named T.
![More examples of matrix multiplication][8]
```
T = np.array([[[11,22],[33,44]],[[55,66],[77,88]]])
```
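A quick sketch of the multiplication rule and of T; the shapes of A and E below are assumptions consistent with the text (E@A works, A@E fails):
```
import numpy as np

A = np.ones((3, 3))   # 3 x 3
E = np.ones((2, 3))   # 2 x 3

print((E @ A).shape)  # (2, 3): the inner dimensions match (3 == 3)
try:
    A @ E             # the inner dimensions do not match (3 != 2)
except ValueError as err:
    print(err)

T = np.array([[[11, 22], [33, 44]], [[55, 66], [77, 88]]])
print(T.ndim, T.shape)  # 3 (2, 2, 2): a three-dimensional array
```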
### An introduction to Pandas
So far, we have been entering matrices from the keyboard. What if we want to read a large matrix from a text file or a data set (a very large one, for that matter) and process it? Here another powerful Python library comes to our rescue: Pandas. Let us begin by reading a small CSV (comma-separated values) file, which is a type of text file (don't worry, soon we will read Microsoft Excel and LibreOffice/OpenOffice Calc files). Figure 8 shows how a CSV file called cricket.csv is read and the first three lines in it are printed on the terminal. More features of Pandas will be discussed in the coming articles in this series.
![Reading a CSV file with Pandas][9]
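A minimal sketch of what Figure 8 shows; cricket.csv is the file named above, and its column layout is not reproduced here:
```
import pandas as pd

df = pd.read_csv('cricket.csv')  # read the whole CSV file into a DataFrame
print(df.head(3))                # print the first three rows
```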
### Rank of a matrix
The rank of a matrix is the dimension of the vector space spanned by its rows (columns). The three italicised words (dimension, vector space and spanned) may not be familiar to you unless you remember your college-level linear algebra. If you fully understand what is meant by the rank of a matrix, I am happy and no further explanation is required. However, if you are not familiar with the terms, think of the rank as the amount of information contained in the matrix. Again, this is a gross oversimplification and I am not proud of it. Figure 9 shows how the ranks of matrices are found. Matrix A has rank 3 because none of its rows can be obtained from the other rows. Matrix B's rank is just 1, because the second and third rows can be obtained by multiplying the first row ([1, 2, 3]) by 2 and 3, respectively. Matrix C has only one non-zero row and hence has rank 1. Similarly, the ranks of the other matrices can also be understood. Rank is very relevant to our discussion and we will revisit this topic in later articles in this series.
![Finding the rank of a matrix][10]
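A sketch of Figure 9 using NumPy's matrix_rank; the entries of A are assumed, while B and C follow the descriptions above:
```
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]])  # no row depends on the others
B = np.array([[1, 2, 3], [2, 4, 6], [3, 6, 9]])   # rows 2, 3 are multiples of row 1
C = np.array([[1, 2, 3], [0, 0, 0], [0, 0, 0]])   # a single non-zero row

print(np.linalg.matrix_rank(A))  # 3
print(np.linalg.matrix_rank(B))  # 1
print(np.linalg.matrix_rank(C))  # 1
```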
Now it is time for us to wind up our discussion for this month. In the next article in this series, we will add more tools to our arsenal so that they can be used for developing AI and machine learning based applications. We will also discuss terms like neural networks, supervised learning, unsupervised learning, etc, in more detail. Further, from the next article onwards, we will use JupyterLab in place of the Linux terminal.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/ai-some-history-and-a-lot-more-about-matrices/
Author: [Deepu Benson][a]
Collector: [lkxed][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.opensourceforu.com/author/deepu-benson/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/AI-Some-History-and-a-Lot-More-about-Matrices-1.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-1-Matrices-A-B-C-and-D.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-2-Matrices-E-F-G-and-H.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-3-Matrix-A-with-NumPy-350x167.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-4-Size-dimension-and-shape-of-an-array-350x292.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-5-Matrix-addition-350x314.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-6-More-matrix-operations-1-350x375.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-7-More-examples-of-matrix-multiplication-590x293.jpg
[9]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-8-Reading-a-CSV-file-with-Pandas-350x123.jpg
[10]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-9-Finding-the-rank-of-a-matrix-350x366.jpg


@ -0,0 +1,105 @@
[#]: subject: "AI: Some History and a Lot More about Matrices"
[#]: via: "https://www.opensourceforu.com/2022/06/ai-some-history-and-a-lot-more-about-matrices/"
[#]: author: "Deepu Benson https://www.opensourceforu.com/author/deepu-benson/"
[#]: collector: "lkxed"
[#]: translator: "toknow-gh"
[#]: reviewer: " "
[#]: publisher: " "
[#]: url: " "
AI Tutorial (Part 2): The History of AI and Matrices Revisited
======
![AI-Some-History-and-a-Lot-More-about-Matrices][1]
*In the [first article](https://linux.cn/article-16326-1.html) of this series, we discussed how fields such as artificial intelligence, machine learning, deep learning and data science are related to and distinct from one another. We also made some choices about the programming language, tools, etc., that will be used throughout this series of tutorials. Finally, we introduced a little knowledge of matrices. In this article, we will discuss matrices, the heart of AI, in more depth. But before that, let us look at the history of AI.*
Why do we need to understand the history of AI? There have been several waves of AI enthusiasm in history, but in many cases the huge expectations about its potential were never met. Understanding the history of AI helps us see whether this wave of AI will really work wonders, or is just another bubble about to burst.
When did our quest for artificial intelligence begin? Was it after the invention of the digital computer, or even earlier? I believe the pursuit of an omniscient being capable of answering all questions can be traced back to the beginning of civilisation. The Oracle of Delphi in ancient Greek legend, for example, was one such prophet, able to answer any question. Since ancient times, the search for creative machines that surpass human intelligence has fascinated us just as much. There have been several failed attempts to build chess-playing machines, among them the infamous Mechanical Turk, which was not a real automaton but was controlled by a chess player hidden inside. The invention of logarithms by John Napier, Blaise Pascal's calculator, Charles Babbage's difference engine and so on were all forerunners of artificial intelligence research. Looking back over human history, you will find many more moments, real or fictional, when people sought intelligence beyond the human brain. Setting these historical achievements aside, the search for true artificial intelligence began with the invention of the digital computer.
So, what have been the milestones in the development of AI up to today? As mentioned earlier, the invention of the digital computer is the most important event in the history of AI research. Unlike electro-mechanical devices, whose scalability depends on power requirements, digital devices benefited from technological advances such as the transition from vacuum tubes to transistors, to integrated circuits, and on to today's very large-scale integration (VLSI) technology.
Another milestone in the development of AI was Alan Turing's first theoretical analysis of it. The Turing test he proposed is one of the earliest methods for testing artificial intelligence. The test may no longer be applicable today, but it was one of the first attempts to define artificial intelligence. It can be described simply as follows: suppose there is a machine that can converse with a human; if, during the conversation, the human cannot tell whether it is a person or a machine, then the machine can be considered intelligent. Today's chatbots are so powerful that it is easy to see that the Turing test cannot identify true AI. But in the early 1950s, it did provide a theoretical framework for understanding AI.
In the late 1950s, John McCarthy invented the Lisp programming language, one of the earliest high-level programming languages. Before that, computer programming was done in machine languages and assembly languages (notoriously hard to use). With powerful machines and programming languages, the optimists and dreamers among computer scientists naturally set out to use them to create artificial intelligence. In the early 1960s, expectations for artificially intelligent machines reached their peak. Of course, computer science made great progress, but did the AI miracle happen? Unfortunately, no. The 1960s witnessed the rise and fall of the first AI boom. Yet computer science continued to develop at an unparalleled pace.
Then came the 70s and 80s, when algorithms played the major role. Many new and efficient algorithms were proposed during this period. The publication of the first volume of Donald Knuth's famous The Art of Computer Programming in the late 1960s (I strongly suggest you read about him; in the world of computer science he is the Gauss or the Euler) marked the beginning of the era of algorithms. Many general-purpose and graph algorithms were developed in those years. In addition, programming based on artificial neural networks also arose at this time. Although Warren S. McCulloch and Walter Pitts had proposed artificial neural networks as early as the 1940s, it took decades for them to become a mainstream technique. Today, deep learning is based almost entirely on artificial neural networks. These developments in algorithms led to a revival of AI research in the 1980s. However, limitations in communication and computing power held AI back, and it again failed to meet the ambitious expectations placed on it. Then came the 90s, the millennium, and the present day. Once again, we are full of enthusiasm and hope about the positive impact of AI.
As you can see, there have been at least two promising periods for AI in the digital era, and on both occasions AI failed to live up to its expectations. Is the current AI wave similar? This is of course a difficult question to answer. But I personally believe that this time AI will make a huge impact. What makes me predict this? First, high-performance computing devices are now cheap and readily available. In the 1960s or 1980s there were only a handful of such powerful computing devices, whereas now we have millions, even billions, of such machines. Second, there is now a vast amount of data available for training AI and machine learning programs. Imagine an AI engineer working on digital image processing in the 90s: how many digital images were available for training algorithms? Perhaps a few thousand, or a few tens of thousands. Today, the data science platform Kaggle (a subsidiary of Google) alone hosts more than 10,000 data sets. The huge amount of data generated on the Internet every day makes training algorithms much easier. Third, high-speed Internet connections make it much easier to collaborate with large institutions. In the first decade of the 21st century, collaboration between computer scientists was still difficult. Today the speed of the Internet has made collaborative AI projects such as Google Colab, Kaggle and Project Jupyter a reality. Thanks to these three factors, I believe that this time AI is here to stay, and many excellent applications will emerge.
![图 1矩阵 A、B、C、D][2]
### 更多矩阵的知识
在大致了解了人工智能的历史后,现在是时候回到矩阵与向量这一主题上了。在上一篇文章中,我已经对它们做了简要介绍。这一次,我们将更深入矩阵的世界。首先看图 1 和 图 2其中显示了从 A 到 H 共 8 个矩阵。为什么教程中需要这么多矩阵呢?首先,正如前一篇文章中提到的,线性代数是机器学习的核心,而矩阵是线性代数的核心。。其次,在接下来的讨论中,它们每一个都有特定的用途。
![图 2矩阵 E、F、G、H][3]
让我们看看矩阵是如何表示的,以及如何获取它们的详细信息。图 3 展示了怎么用 NumPy 表示矩阵 A。虽然矩阵和数组并不完全等价但实践中我们经常将它们作为同义词来使用。
![图 3用 NumPy 表示矩阵 A][4]
我强烈建议你仔细学习如何使用 NumPy 的 `array` 函数创建矩阵。虽然 NumPy 也提供了 `matrix` 函数来创建二维数组和矩阵。但是它将在未来被废弃,所以不再建议使用了。在图 3 还显示了矩阵 A 的一些详细信息。`A.size` 告诉我们数组中元素的个数。在我们的例子中,它是 9。代码 `A.nidm` 表示数组的<ruby>维数<rt>dimension</rt></ruby>。很容易看出矩阵 A 是二维的。`A.shape` 表示矩阵 A 的<ruby>阶数<rt>order</rt></ruby>虽然我没有进一步解释,但使用 NumPy 库时需要注意矩阵的大小、维度和阶数。图 4 显示了为什么应该仔细识别矩阵的大小、维数和阶数。定义数组时的微小差异可能导致其大小、维数和阶数的不同。因此,程序员在定义矩阵时应该格外注意这些细节。
![图 4数组的大小、维数和阶数][5]
Now let us perform some basic matrix operations. Figure 5 shows how matrices A and B are added. NumPy provides two ways to add matrices: the `add` function and the `+` operator. Note that only matrices of the same order can be added; for example, two 4 × 3 matrices can be added, whereas a 3 × 4 matrix and a 2 × 3 matrix cannot. However, programming is different from mathematics, and NumPy does not actually follow this rule. Figure 5 also shows matrices A and D being added. Remember that this kind of matrix addition is invalid in mathematics. A mechanism called broadcasting determines how matrices of different orders are added. We will not discuss the details of broadcasting now, but if you are familiar with C or C++, you can think of it for the moment as something like type conversion of variables. So, if you want to be sure that you are performing matrix addition in the true mathematical sense, you need to make sure the following test is True:
![Figure 5: Matrix addition][6]
```
A.shape == B.shape
```
Broadcasting is not all-powerful, though: if you try to add matrices D and H, an error is raised.
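The sketch below puts these rules together; the values of A, B and D are assumptions standing in for the matrices in the figures:
```
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = np.array([[9, 8, 7], [6, 5, 4], [3, 2, 1]])
D = np.array([10, 20, 30])  # shape (3,)

print(A.shape == B.shape)   # True: addition is valid in the mathematical sense
print(np.add(A, B))         # identical to A + B

# Broadcasting: D is stretched across the rows of A, so NumPy accepts
# this even though it is not valid matrix addition in mathematics.
print(A + D)
```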
Of course, matrix addition is not the only matrix operation. Figure 6 shows matrix subtraction and matrix multiplication. Each again has two forms: matrix subtraction can be performed with the `subtract` function or the subtraction operator `-`, and matrix multiplication with the `matmul` function or the matrix multiplication operator `@`. Figure 6 also shows the use of the element-wise multiplication operator `*`. Note that only NumPy's `matmul` function and the `@` operator perform matrix multiplication in the mathematical sense, so be careful with the `*` operator when working with matrices.
![Figure 6: More matrix operations][7]
An m x n matrix and a p x q matrix can be multiplied if and only if n equals p, and the product is an m x q matrix. Figure 7 shows more examples of matrix multiplication. Note that `E@A` works while `A@E` raises an error. Study and compare the `D@G` and `G@D` examples carefully. Using the `shape` attribute, work out which of the eight matrices can be multiplied with one another. Although a matrix is, by the strict mathematical definition, two-dimensional, we will also be dealing with higher-dimensional arrays. As an example, the following code creates a three-dimensional array named T.
![Figure 7: More examples of matrix multiplication][8]
```
T = np.array([[[11,22], [33,44]], [[55,66], [77,88]]])
```
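A quick sketch of the multiplication rule and of T; the shapes of A and E below are assumptions consistent with the text (`E@A` works, `A@E` fails):
```
import numpy as np

A = np.ones((3, 3))   # 3 x 3
E = np.ones((2, 3))   # 2 x 3

print((E @ A).shape)  # (2, 3): the inner dimensions match (3 == 3)
try:
    A @ E             # the inner dimensions do not match (3 != 2)
except ValueError as err:
    print(err)

T = np.array([[[11, 22], [33, 44]], [[55, 66], [77, 88]]])
print(T.ndim, T.shape)  # 3 (2, 2, 2): a three-dimensional array
```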
### An introduction to Pandas
So far, we have been entering matrices from the keyboard. What if we need to read a large matrix from a file or a data set and process it? That is where another powerful Python library, Pandas, comes to our rescue. Let us begin by reading a small CSV (comma-separated values) file. Figure 8 shows how the file cricket.csv is read and its first three lines printed on the terminal. More features of Pandas will be introduced in later articles in this series.
![Figure 8: Reading a CSV file with Pandas][9]
### Rank of a matrix
The rank of a matrix is the dimension of the vector space spanned by its rows (or columns). If you remember your college linear algebra, the terms dimension, vector space and span will ring a bell, and you will understand what the rank of a matrix means. If you are not familiar with these terms, you can simply think of the rank as the amount of information contained in the matrix. That is, of course, an oversimplification made for convenience. Figure 9 shows how the rank of a matrix is found with NumPy. Matrix A has rank 3, because none of its rows can be derived from the other rows. Matrix B has rank 1, because its second and third rows can be obtained by multiplying the first row by 2 and 3, respectively. Matrix C has only one non-zero row, so its rank is 1. The ranks of the other matrices can be understood in the same way. Rank is closely related to our topic, and we will come back to it in later articles in this series.
![Figure 9: Finding the rank of a matrix][10]
That is all for this instalment. In the next article, we will add more tools to our arsenal so that they can be used to develop AI and machine learning applications. We will also discuss terms such as neural networks, supervised learning and unsupervised learning in more detail. Moreover, from the next article onwards, we will use JupyterLab instead of the Linux terminal.
--------------------------------------------------------------------------------
via: https://www.opensourceforu.com/2022/06/ai-some-history-and-a-lot-more-about-matrices/
Author: [Deepu Benson][a]
Collector: [lkxed][b]
Translator: [toknow-gh](https://github.com/toknow-gh)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.opensourceforu.com/author/deepu-benson/
[b]: https://github.com/lkxed
[1]: https://www.opensourceforu.com/wp-content/uploads/2022/06/AI-Some-History-and-a-Lot-More-about-Matrices-1.jpg
[2]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-1-Matrices-A-B-C-and-D.jpg
[3]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-2-Matrices-E-F-G-and-H.jpg
[4]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-3-Matrix-A-with-NumPy-350x167.jpg
[5]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-4-Size-dimension-and-shape-of-an-array-350x292.jpg
[6]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-5-Matrix-addition-350x314.jpg
[7]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-6-More-matrix-operations-1-350x375.jpg
[8]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-7-More-examples-of-matrix-multiplication-590x293.jpg
[9]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-8-Reading-a-CSV-file-with-Pandas-350x123.jpg
[10]: https://www.opensourceforu.com/wp-content/uploads/2022/06/Figure-9-Finding-the-rank-of-a-matrix-350x366.jpg