Merge pull request #10888 from MjSeven/master

20180305 Getting started with Python for data science.md: translation complete
Xingyu.Wang 2018-10-25 23:23:30 +08:00 committed by GitHub
commit 911dbc1ff3
2 changed files with 134 additions and 138 deletions


@@ -1,138 +0,0 @@
Translating by MjSeven
Getting started with Python for data science
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_open_data_520x292.jpg?itok=R8rBrlk7)
Whether you're a budding data science enthusiast with a math or computer science background or an expert in an unrelated field, the possibilities data science offers are within your reach. And you don't need expensive, highly specialized enterprise software—the open source tools discussed in this article are all you need to get started.
[Python][1], its machine-learning and data science libraries ([pandas][2], [Keras][3], [TensorFlow][4], [scikit-learn][5], [SciPy][6], [NumPy][7], etc.), and its extensive list of visualization libraries ([Matplotlib][8], [pyplot][9], [Plotly][10], etc.) are excellent FOSS tools for beginners and experts alike. Easy to learn, popular enough to offer community support, and armed with the latest emerging techniques and algorithms developed for data science, these comprise one of the best toolsets you can acquire when starting out.
Many of these Python libraries are built on top of each other (known as dependencies), and the basis is the [NumPy][7] library. Designed for data science, NumPy is often used to store relevant portions of datasets in its ndarray datatype, which is convenient for loading records from relational tables, `csv` files, or any other format, and vice versa. It is particularly convenient when scikit functions are applied to multidimensional arrays. SQL is great for querying databases, but for complex and resource-intensive data science operations, storing data in an ndarray boosts efficiency and speed (make sure you have ample RAM when dealing with large datasets). When you get to using pandas for knowledge extraction and analysis, the almost seamless conversion between the DataFrame datatype in pandas and the ndarray in NumPy creates a powerful combination for extraction and compute-intensive operations, respectively.
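As a minimal sketch of that round trip (the toy column names below are hypothetical, chosen only for illustration), converting between the two datatypes takes one call in each direction:
```
>>> import pandas as pd
>>>
>>> # A toy frame; 'hour' and 'count' are made-up columns
>>> frame = pd.DataFrame({'hour': [1, 2, 3], 'count': [10, 20, 30]})
>>>
>>> # DataFrame -> ndarray for compute-intensive work
>>> arr = frame.values
>>> type(arr)
<class 'numpy.ndarray'>
>>>
>>> # ndarray -> DataFrame for labeled extraction and analysis
>>> pd.DataFrame(arr, columns=['hour', 'count'])
```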
For a quick demonstration, let's fire up the Python shell, load an open dataset on crime statistics from the city of Baltimore into a pandas DataFrame variable, and view a portion of the loaded frame:
```
>>> import pandas as pd
>>> crime_stats = pd.read_csv('BPD_Arrests.csv')
>>> crime_stats.head()
```
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/crime_stats_chart.jpg?itok=_rPXJYHz)
We can now perform most of the queries on this pandas DataFrame that we could run with SQL against a database. For instance, to get all the unique values of the "Description" attribute, the SQL query is:
```
SELECT DISTINCT "Description" FROM crime_stats;
```
This same query written for a pandas DataFrame looks like this:
```
>>> crime_stats['Description'].unique()
['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
 'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
 'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
 'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']
```
which returns a NumPy array (ndarray):
```
>>> type(crime_stats['Description'].unique())
<class 'numpy.ndarray'>
```
Next, let's feed this data into a neural network to see how accurately it can predict the type of weapon used, given data such as the time the crime was committed, the type of crime, and the neighborhood in which it happened:
```
>>> from sklearn.neural_network import MLPClassifier
>>> import numpy as np
>>>
>>> # Target: the weapon used; features: time, crime code, and neighborhood
>>> prediction = crime_stats['Weapon']
>>> predictors = crime_stats[['CrimeTime', 'CrimeCode', 'Neighborhood']]
>>>
>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5,
...                          hidden_layer_sizes=(5, 2), random_state=1)
>>>
>>> # fit() takes the features first, then the target; categorical columns
>>> # must be numerically encoded (e.g. with LabelEncoder) before fitting
>>> predict_weapon = nn_model.fit(predictors, prediction)
```
Now that the learning model is ready, we can perform several tests to determine its quality and reliability. For starters, let's feed it some test data (a portion of the original dataset that was held out and not used to train the model):
```
>>> predict_weapon.predict(training_set_weapons)
array([4, 4, 4, ..., 0, 4, 4])
```
As you can see, it returns a list, with each number predicting the weapon for each of the records in the training set. We see numbers rather than weapon names because most classification algorithms work with numerical data. For categorical data, there are techniques that can reliably convert attributes into numerical representations. In this case, the technique used is label encoding, via the LabelEncoder class in the sklearn preprocessing module: `preprocessing.LabelEncoder()`. It provides methods to transform data to its numerical representation and back. In this example, we can use the `inverse_transform` method of LabelEncoder() to see what Weapons 0 and 4 are:
```
>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM'])
```
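Note that `inverse_transform` can only be called on an encoder that has already been fitted to the labels. A minimal sketch of the full round trip, with illustrative sample values rather than the real dataset, looks like this:
```
>>> from sklearn import preprocessing
>>>
>>> le = preprocessing.LabelEncoder()
>>> # fit_transform learns the label <-> integer mapping, then applies it
>>> encoded = le.fit_transform(['HANDS', 'FIREARM', 'KNIFE', 'FIREARM'])
>>> encoded
array([1, 0, 2, 0])
>>> # inverse_transform maps the integers back to the original labels
>>> le.inverse_transform(encoded)
array(['HANDS', 'FIREARM', 'KNIFE', 'FIREARM'], dtype='<U7')
```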
This is fun to see, but to get an idea of how accurate this model is, let's compute its accuracy score:
```
>>> nn_model.score(X, y)
0.81999999999999995
```
This shows that our neural network model is ~82% accurate. That result seems impressive, but it is important to check its effectiveness when used on a different crime dataset. There are other tests, such as correlation analysis and confusion matrices, to do this. Although our model has high accuracy, it is not very useful for general crime datasets, as this particular dataset has a disproportionate number of rows that list FIREARM as the weapon used. Unless it is re-trained, our classifier is most likely to predict FIREARM, even if the input dataset has a different distribution.
It is important to clean the data and remove outliers and aberrations before we classify it. The better the preprocessing, the better the accuracy of our insights. Also, chasing ever-higher accuracy (generally over ~90%) by feeding the model or classifier more and more of the data is a bad idea: the model looks accurate but is not useful due to [overfitting][11].
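One common guard against both problems is to hold out part of the data and score the model only on rows it never saw during training. Here is a minimal sketch using scikit-learn's `train_test_split`, reusing the `predictors` and `prediction` variables from the example above and assuming the categorical columns have already been numerically encoded:
```
>>> from sklearn.model_selection import train_test_split
>>>
>>> # Hold out 25% of the rows for an honest accuracy estimate
>>> X_train, X_test, y_train, y_test = train_test_split(
...     predictors, prediction, test_size=0.25, random_state=1)
>>> nn_model.fit(X_train, y_train)
>>> # Accuracy on unseen data is the number that matters
>>> nn_model.score(X_test, y_test)
```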
[Jupyter notebooks][12] are a great interactive alternative to the command line. While the CLI is fine for most things, Jupyter shines when you want to run snippets on the go to generate visualizations. It also formats data better than the terminal.
[This article][13] has a list of some of the best free resources for machine learning, but plenty of additional guidance and tutorials are available. You will also find many open datasets available to use, based on your interests and inclinations. As a starting point, the datasets maintained by [Kaggle][14] and those available on state government websites are excellent resources.
Payal Singh will be presenting at SCaLE16x this year, March 8-11 in Pasadena, California. To attend and get 50% off your ticket, [register][15] using promo code **OSDC**.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/getting-started-data-science
Author: [Payal Singh][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/payalsingh
[1]:https://www.python.org/
[2]:https://pandas.pydata.org/
[3]:https://keras.io/
[4]:https://www.tensorflow.org/
[5]:http://scikit-learn.org/stable/
[6]:https://www.scipy.org/
[7]:http://www.numpy.org/
[8]:https://matplotlib.org/
[9]:https://matplotlib.org/api/pyplot_api.html
[10]:https://plot.ly/
[11]:https://www.kdnuggets.com/2014/06/cardinal-sin-data-mining-data-science.html
[12]:http://jupyter.org/
[13]:https://machinelearningmastery.com/best-machine-learning-resources-for-getting-started/
[14]:https://www.kaggle.com/
[15]:https://register.socallinuxexpo.org/reg6/


@@ -0,0 +1,134 @@
Getting started with Python for data science
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_open_data_520x292.jpg?itok=R8rBrlk7)
Whether you are a data science enthusiast with a math or computer science background, or an expert in another field, the possibilities data science offers are within your reach, and you don't need expensive, highly specialized enterprise software. The open source tools discussed in this article are all you need to get started.
[Python][1], its machine learning and data science libraries ([pandas][2], [Keras][3], [TensorFlow][4], [scikit-learn][5], [SciPy][6], [NumPy][7], etc.), and its large collection of visualization libraries ([Matplotlib][8], [pyplot][9], [Plotly][10], etc.) are excellent FOSS (free and open source software) tools for beginners and experts alike. They are easy to learn, popular enough to enjoy community support, and equipped with the latest techniques and algorithms developed for data science. They are one of the best toolsets you can acquire when starting out.
Many of these Python libraries are built on top of one another (known as dependencies), and the basis is the [NumPy][7] library. Designed for data science, NumPy is often used to store the relevant portions of datasets in its ndarray datatype, which is convenient for loading records from relational tables, `csv` files, or any other format, and vice versa. It is particularly convenient when scikit functions are applied to multidimensional arrays. SQL is great for querying databases, but for complex and resource-intensive data science operations, storing the data in an ndarray boosts efficiency and speed (make sure you have ample RAM when working with large datasets). When you move on to pandas for knowledge extraction and analysis, the almost seamless conversion between the DataFrame datatype in pandas and the ndarray in NumPy creates a powerful combination for extraction and compute-intensive operations, respectively.
For a quick demonstration, let's fire up the Python shell, load an open dataset on crime statistics from the city of Baltimore into a pandas DataFrame variable, and view a portion of the loaded frame:
```
>>> import pandas as pd
>>> crime_stats = pd.read_csv('BPD_Arrests.csv')
>>> crime_stats.head()
```
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/crime_stats_chart.jpg?itok=_rPXJYHz)
We can now run most of the queries on this pandas DataFrame that we could run with SQL against a database. For instance, to get all the unique values of the "Description" attribute, the SQL query is:
```
SELECT DISTINCT "Description" FROM crime_stats;
```
The same query written for a pandas DataFrame looks like this:
```
>>> crime_stats['Description'].unique()
['COMMON ASSAULT' 'LARCENY' 'ROBBERY - STREET' 'AGG. ASSAULT'
 'LARCENY FROM AUTO' 'HOMICIDE' 'BURGLARY' 'AUTO THEFT'
 'ROBBERY - RESIDENCE' 'ROBBERY - COMMERCIAL' 'ROBBERY - CARJACKING'
 'ASSAULT BY THREAT' 'SHOOTING' 'RAPE' 'ARSON']
```
It returns a NumPy array (of type ndarray):
```
>>> type(crime_stats['Description'].unique())
<class 'numpy.ndarray'>
```
Next, let's feed this data into a neural network to see how accurately it can predict the type of weapon used, given data such as the time the crime was committed, the type of crime, and the neighborhood in which it happened:
```
>>> from sklearn.neural_network import MLPClassifier
>>> import numpy as np
>>>
>>> # Target: the weapon used; features: time, crime code, and neighborhood
>>> prediction = crime_stats['Weapon']
>>> predictors = crime_stats[['CrimeTime', 'CrimeCode', 'Neighborhood']]
>>>
>>> nn_model = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1)
>>>
>>> # fit() takes the features first, then the target; categorical columns
>>> # must be numerically encoded (e.g. with LabelEncoder) before fitting
>>> predict_weapon = nn_model.fit(predictors, prediction)
```
Now that the learning model is ready, we can perform several tests to determine its quality and reliability. For starters, let's feed it some test data (a portion of the original dataset that was held out and not used to train the model):
```
>>> predict_weapon.predict(training_set_weapons)
array([4, 4, 4, ..., 0, 4, 4])
```
As you can see, it returns a list, with each number predicting the weapon for each record in the training set. We see numbers rather than weapon names because most classification algorithms work with numerical data. For categorical data, there are techniques that can reliably convert attributes into numerical representations. In this case, the technique used is label encoding, via the LabelEncoder class in the sklearn preprocessing module: `preprocessing.LabelEncoder()`. It provides methods to transform data to its numerical representation and back. In this example, we can use the `inverse_transform` method of LabelEncoder() to see what weapons 0 and 4 are:
```
>>> preprocessing.LabelEncoder().inverse_transform(encoded_weapons)
array(['HANDS', 'FIREARM', 'HANDS', ..., 'FIREARM', 'FIREARM', 'FIREARM'])
```
This is fun to see, but to get an idea of how accurate this model is, let's compute its accuracy score:
```
>>> nn_model.score(X, y)
0.81999999999999995
```
This shows that our neural network model is ~82% accurate. That result may seem impressive, but it is important to check its effectiveness on a different crime dataset. There are other tests, such as correlation analysis and confusion matrices, for doing this. Although our model has high accuracy, it is not very useful for general crime datasets, because this particular dataset has a disproportionate number of rows listing FIREARM as the weapon used. Unless it is retrained, our classifier is most likely to predict FIREARM, even if the input dataset has a different distribution.
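A quick way to spot that kind of imbalance, and to look beyond a single accuracy number, is to inspect the class distribution and a confusion matrix. Here is a minimal sketch, assuming the `crime_stats` DataFrame and the `X`, `y`, and `nn_model` variables from the scoring step above:
```
>>> from sklearn.metrics import confusion_matrix
>>>
>>> # How skewed are the target classes? One dominant class inflates accuracy
>>> crime_stats['Weapon'].value_counts()
>>>
>>> # Rows are true classes, columns are predicted classes;
>>> # off-diagonal entries are misclassifications
>>> confusion_matrix(y, nn_model.predict(X))
```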
It is important to clean the data and remove outliers and aberrations before classifying it. The better the preprocessing, the better the accuracy of our insights. Also, feeding the model or classifier too much data to chase higher accuracy (generally over ~90%) is a bad idea, because it looks accurate but is not useful due to [overfitting][11].
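Cross-validation is one standard way to catch overfitting early: instead of trusting a single score, it averages performance over several independent train/test splits. A minimal sketch with scikit-learn's `cross_val_score`, reusing the `nn_model`, `X`, and `y` names from above:
```
>>> from sklearn.model_selection import cross_val_score
>>>
>>> # Five-fold cross-validation: five scores, one per held-out fold
>>> scores = cross_val_score(nn_model, X, y, cv=5)
>>> # A large gap between this and the training score suggests overfitting
>>> scores.mean()
```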
[Jupyter notebooks][12] are a great interactive alternative to the command line. While the CLI is fine for most things, Jupyter shines when you want to run snippets on the go to generate visualizations. It also formats data better than the terminal.
[This article][13] lists some of the best free resources for machine learning, but plenty of additional guidance and tutorials are available. You will also find many open datasets to use, based on your interests and inclinations. As a starting point, the datasets maintained by [Kaggle][14] and those available on state government websites are excellent resources.
(To the proofreader: I'm not sure how to interpret this last sentence.)
Payal Singh will be presenting at SCaLE16x this year, March 8-11 in Pasadena, California. To attend and get 50% off your ticket, [register][15] using promo code **OSDC**.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/getting-started-data-science
Author: [Payal Singh][a]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/payalsingh
[1]:https://www.python.org/
[2]:https://pandas.pydata.org/
[3]:https://keras.io/
[4]:https://www.tensorflow.org/
[5]:http://scikit-learn.org/stable/
[6]:https://www.scipy.org/
[7]:http://www.numpy.org/
[8]:https://matplotlib.org/
[9]:https://matplotlib.org/api/pyplot_api.html
[10]:https://plot.ly/
[11]:https://www.kdnuggets.com/2014/06/cardinal-sin-data-mining-data-science.html
[12]:http://jupyter.org/
[13]:https://machinelearningmastery.com/best-machine-learning-resources-for-getting-started/
[14]:https://www.kaggle.com/
[15]:https://register.socallinuxexpo.org/reg6/