mirror of https://github.com/LCTT/TranslateProject.git
synced 2024-12-26 21:30:55 +08:00
Commit 36dfee1661: Merge remote-tracking branch 'LCTT/master'
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A Quick Look at Some of the Best Cloud Platforms for High Performance Computing Applications)
[#]: via: (https://opensourceforu.com/2019/11/a-quick-look-at-some-of-the-best-cloud-platforms-for-high-performance-computing-applications/)
[#]: author: (Dr Kumar Gaurav https://opensourceforu.com/author/dr-gaurav-kumar/)
A Quick Look at Some of the Best Cloud Platforms for High Performance Computing Applications
======

[![][1]][2]
_Cloud platforms enable high performance computing without the need to purchase the required infrastructure. Cloud services are available on a ‘pay per use’ basis, which makes them very economical. This article takes a look at cloud platforms like Neptune, BigML, Deep Cognition and Google Colaboratory, all of which can be used for high performance applications._

Software applications, smart devices and gadgets face many performance issues, including load balancing, turnaround time, delay, congestion, Big Data and parallel computations. These issues traditionally consume enormous computational resources, and low-configuration computers are not able to handle high performance tasks. The laptops and desktops available in the market are designed for personal use, so these systems run into numerous performance problems when they are given high performance jobs.

For example, a desktop computer or laptop with a 3GHz processor is able to perform approximately 3 billion computations per second. However, high performance computing (HPC) is focused on solving complex problems by working on trillions or even quadrillions of computations with high speed and maximum accuracy.
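To see the gap in scale, here is a quick back-of-envelope comparison (a sketch only; it assumes the 3GHz machine completes one operation per clock cycle, which real processors do not strictly do):

```python
# Rough scale comparison: a 3 GHz desktop versus an HPC-sized workload.
# Assumption: one operation per clock cycle (a simplification).
desktop_rate = 3e9   # ~3 billion operations per second
workload = 1e15      # one quadrillion operations, the scale HPC targets

seconds = workload / desktop_rate
print(f"{seconds:,.0f} seconds, i.e. about {seconds / 86400:.1f} days")
```

A system sustaining the same quadrillion operations at HPC rates would finish in seconds, which is why such jobs are pushed to clusters or to the cloud platforms discussed below.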

![Figure 1: The Neptune portal][3]

![Figure 2: Creating a new project on the Neptune platform][4]

**Application domains and use cases**

High performance computing is applied in domains where the required levels of speed and accuracy are far higher than in traditional scenarios, and where the cost of the computation is also very high.

The following are use cases where high performance implementations are required:

* Nuclear power plants
* Space research organisations
* Oil and gas exploration
* Artificial intelligence and knowledge discovery
* Machine learning and deep learning
* Financial services and digital forensics
* Geographical and satellite data analytics
* Bio-informatics and molecular sciences

**Working with cloud platforms for high performance applications**

There are a number of cloud platforms on which high performance computing applications can be launched without the user ever having direct access to a supercomputer. These cloud services are billed on a usage basis, which costs far less than purchasing the infrastructure required for high performance computing.

The following are a few of the prominent cloud based platforms that can be used for advanced implementations, including data science, data exploration, machine learning, deep learning and artificial intelligence.

**Neptune**

URL: _<https://neptune.ml/>_

Neptune is a lightweight cloud based service for high performance applications, including data science, machine learning, predictive knowledge discovery, deep learning, modelling training curves and many others. Neptune can be integrated with Jupyter notebooks so that Python programs can be executed easily for multiple applications.

The Neptune dashboard is available at <https://ui.neptune.ml/>, on which multiple experiments can be performed. Neptune works as a machine learning lab in which assorted algorithms can be programmed and their outcomes visualised. The platform is available as Software as a Service (SaaS), so deployment can be done on the cloud. Deployments can also be done on the user’s own hardware and mapped to the Neptune cloud.

In addition to offering a pre-built cloud based platform, Neptune can be integrated with Python and R, so that high performance applications can be programmed in either language. Python and R are prominent programming environments for data science, machine learning, deep learning, Big Data and many other applications.

For Python programming, Neptune provides neptune-client, which handles communication with the Neptune server and allows advanced data analytics to be implemented on its cloud.

For integration of Neptune with R, the effective ‘reticulate’ library can be used, which wraps the use of neptune-client.

The detailed documentation for the integration of Python and R with Neptune is available at _<https://docs.neptune.ml/python-api.html>_ and _<https://docs.neptune.ml/r-support.html>_.

![Figure 3: Integration of Neptune with Jupyter Notebook][5]

![Figure 4: Dashboard of BigML][6]

In addition, integration with MLflow and TensorBoard is available. MLflow is an open source platform for managing the machine learning life cycle with reproducibility, advanced experiments and deployments. It has three key components: tracking, projects and models. These can be programmed and controlled using the Neptune-MLflow integration.

TensorFlow can be associated with Neptune using Neptune-TensorBoard. TensorFlow is one of the most powerful frameworks for deep learning and advanced knowledge discovery.

With these assorted features and dimensions, the Neptune cloud can be used for high performance, research based implementations.

**BigML**

URL: _<https://bigml.com/>_

BigML is a cloud based platform for the implementation of advanced algorithms on assorted data sets. It provides a panel for implementing multiple machine learning algorithms with ease.

The BigML dashboard gives access to different data sets and algorithms under a supervised and unsupervised taxonomy, as shown in Figure 4. The researcher can pick an algorithm from the menu according to the requirements of the research domain.

![Figure 5: Algorithms and techniques integrated with BigML][7]

A number of tools, libraries and repositories are integrated with BigML so that programming, collaboration and reporting can be done with a higher degree of performance and minimal error levels.

Algorithms and techniques can be attached to specific data sets for evaluation and deep analytics, as shown in Figure 5. Using this methodology, the researcher can work with the code as well as the data set on easier platforms.

The following are the tools and libraries associated with BigML for multiple applications of high performance computing:

* Node-RED for flow diagrams
* GitHub repos
* BigMLer as the command line tool
* Alexa Voice Service
* Zapier for machine learning workflows
* Google Sheets
* Amazon EC2 Image PredictServer
* BigMLX app for macOS

![Figure 6: Enabling Google Colaboratory from Google Drive][8]

![Figure 7: Activation of the hardware accelerator with Google Colaboratory notebook][9]

**Google Colaboratory**

URL: _<https://colab.research.google.com>_

Google Colaboratory is one of the cloud platforms for the implementation of high performance computing tasks, including artificial intelligence, machine learning and deep learning. It is a cloud based service that integrates Jupyter Notebook so that Python code can be executed as per the application domain.

Google Colaboratory is available as a Google app in Google Cloud Services. It can be invoked from Google Drive, as depicted in Figure 6, or directly at _<https://colab.research.google.com>_.

The Jupyter notebook in Google Colaboratory is associated with a CPU by default. If a hardware accelerator is required, like a tensor processing unit (TPU) or a graphics processing unit (GPU), it can be activated from _Notebook Settings_, as shown in Figure 7.
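Inside a notebook it is sometimes useful to confirm which runtime the accelerator setting actually gave you. One portable heuristic (a sketch, not an official Colaboratory API; it simply looks for the `nvidia-smi` utility that ships with NVIDIA GPU runtimes, and it does not detect TPUs) is:

```python
# Detect a GPU runtime by checking whether nvidia-smi is on the PATH.
# Heuristic sketch only: absence of nvidia-smi implies a CPU (or TPU) runtime.
import shutil

def gpu_runtime() -> bool:
    """Return True if nvidia-smi is available, suggesting a GPU runtime."""
    return shutil.which("nvidia-smi") is not None

print("GPU runtime detected" if gpu_runtime() else "CPU-only runtime")
```

The same check works on a local workstation, which makes notebooks portable between Colaboratory and the user's own hardware.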

Figure 8 presents a view of Python code imported into the Jupyter Notebook. The data set can be placed in Google Drive; it is mapped to the code so that the script can directly perform the operations programmed in it. The outputs and logs are presented in the Jupyter Notebook on the Google Colaboratory platform.

![Figure 8: Implementation of the Python code on the Google Colaboratory Jupyter Notebook][10]

**Deep Cognition**

URL: _<https://deepcognition.ai/>_

Deep Cognition provides a platform for implementing advanced neural networks and deep learning models. AutoML with Deep Cognition provides an autonomous integrated development environment (IDE) in which the coding, testing and debugging of advanced models can be done.

It has a visual editor in which multiple layers of different types can be programmed. The layers that can be imported include core layers, hidden layers, convolutional layers, recurrent layers, pooling layers and many others.

The platform provides the features to work with advanced frameworks and libraries such as MXNet and TensorFlow for scientific computations and deep neural networks.

![Figure 9: Importing layers in neural network models on Deep Cognition][11]

**Scope for research and development**

Research scholars, academicians and practitioners can work on advanced algorithms and their implementations using cloud based platforms dedicated to high performance computing. With this type of implementation, there is no need to purchase specific infrastructure or devices; rather, the supercomputing environment can be hired on the cloud.

![Avatar][12]

[Dr Kumar Gaurav][13]

The author is the managing director of Magma Research and Consultancy Pvt Ltd, Ambala Cantonment, Haryana. He has 16 years of experience in teaching, industry and research. He is a projects contributor for the Web-based source code repository SourceForge.net. He is associated with various central, state and deemed universities in India as a research guide and consultant. He is also an author and consultant reviewer/member of advisory panels for various journals, magazines and periodicals. The author can be reached at [kumargaurav.in@gmail.com][14].

[![][15]][16]

--------------------------------------------------------------------------------

via: https://opensourceforu.com/2019/11/a-quick-look-at-some-of-the-best-cloud-platforms-for-high-performance-computing-applications/

Author: [Dr Kumar Gaurav][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensourceforu.com/author/dr-gaurav-kumar/
[b]: https://github.com/lujun9972
[1]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-ML-Colab-and-Deep-cognition.jpg?resize=696%2C384&ssl=1 (Big ML Colab and Deep cognition)
[2]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Big-ML-Colab-and-Deep-cognition.jpg?fit=900%2C497&ssl=1
[3]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-1-The-Neptune-portal.jpg?resize=350%2C122&ssl=1
[4]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-2-Creating-a-new-project-on-the-Neptune-platform.jpg?resize=350%2C161&ssl=1
[5]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-3-Integration-of-Neptune-with-Jupyter-Notebook.jpg?resize=350%2C200&ssl=1
[6]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-4-Dashboard-of-BigML.jpg?resize=350%2C193&ssl=1
[7]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-5-Algorithms-and-techniques-integrated-with-BigML.jpg?resize=350%2C200&ssl=1
[8]: https://i2.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-6-Enabling-Google-Colaboratory-from-Google-Drive.jpg?resize=350%2C253&ssl=1
[9]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-7-Activation-of-the-hardware-accelerator-with-Google-Colaboratory-notebook.jpg?resize=350%2C264&ssl=1
[10]: https://i0.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-8-Implementation-of-the-Python-code-on-the-Google-Colaboratory-Jupyter-Notebook.jpg?resize=350%2C253&ssl=1
[11]: https://i1.wp.com/opensourceforu.com/wp-content/uploads/2019/11/Figure-9-Importing-layers-in-neural-network-models-on-Deep-Cognition.jpg?resize=350%2C254&ssl=1
[12]: https://secure.gravatar.com/avatar/4a506881730a18516f8f839f49527105?s=100&r=g
[13]: https://opensourceforu.com/author/dr-gaurav-kumar/
[14]: mailto:kumargaurav.in@gmail.com
[15]: http://opensourceforu.com/wp-content/uploads/2013/10/assoc.png
[16]: https://feedburner.google.com/fb/a/mailverify?uri=LinuxForYou&loc=en_US

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Viewing network bandwidth usage with bmon)
[#]: via: (https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usage-with-bmon.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Viewing network bandwidth usage with bmon
======

Introducing bmon, a monitoring and debugging tool that captures network statistics and makes them easily digestible.

Sandra Henry-Stocker

![](https://img.linux.net.cn/data/attachment/album/201911/07/010237a8gb5oqddvl3bnd0.jpg)

Bmon is a monitoring and debugging tool that runs in a terminal window and captures network statistics, offering options for how, and how much, data is displayed, in a form that is easy to understand.

To check if **bmon** is installed on your system, use the **which** command:

```
$ which bmon
/usr/bin/bmon
```
### Getting bmon

On Debian systems, use **sudo apt-get install bmon** to install the tool.
For Red Hat and related distributions, you might be able to install with **yum install bmon** or **sudo dnf install bmon**. Alternately, you may have to resort to a more complex install, with commands like these that first set up the required **libconfuse** using the root account or sudo:

```
# wget https://github.com/martinh/libconfuse/releases/download/v3.2.2/confuse-3.2.2.zip
…
# sudo make install
```

The first five lines will install **libconfuse** and the second five will grab and install **bmon** itself.

### Using bmon

The simplest way to start **bmon** is simply to type **bmon** on the command line. Depending on the size of the window you are using, you will be able to see and bring up a variety of data.

The top portion of your display will show stats on your network interfaces – the loopback (lo) and those that are network-accessible (e.g., eth0). If your terminal window has only a few lines, this is all you may see, and it will look something like this:

```
lo bmon 4.0
…
q Press i to enable additional information qq
Wed Oct 23 14:36:27 2019 Press ? for help
```

In this example, the network interface is enp0s25. Notice the helpful "Increase screen height" hint below the listed interfaces. Stretch your screen to add sufficient lines (no need to restart bmon) and you will see some graphs:

```
Interfaces x RX bps pps %x TX bps pps %
…
qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqq
1 5 10 15 20 25 30 35 40 45 50 55 60
```

Notice, however, that the graphs are not showing values. This is because it is displaying the loopback (**lo**) interface. Arrow your way down to the public network interface and you will see some traffic.

```
Interfaces x RX bps pps %x TX bps pps %
…
q Press i to enable additional information qq
Wed Oct 23 16:42:06 2019 Press ? for help
```

The change allows you to view a graph displaying network traffic. Note, however, that the default is to display bytes per second. To display bits per second instead, start the tool using **bmon -b**.

Detailed statistics on network traffic can be displayed if your window is large enough and you press **d**. An example of the stats you will see is shown below. The display is split into left and right portions because of its width.

##### left side:

```
RX TX │ RX TX │
…
Window Error - 0 │ │
```

##### right side:

```
│ RX TX │ RX TX
…
│ No Handler 0 - │ Over Error 0 -
```

Additional information on the network interface will be displayed if you press **i**.

##### left side:
```
MTU 1500 | Flags broadcast,multicast,up |
Address 00:1d:09:77:9d:08 | Broadcast ff:ff:ff:ff:ff:ff |
Family unspec | Alias |
```

##### right side:

```
| Operstate up | IfIndex 2 |
…
| Qdisc fq_codel |
```

A help menu will appear if you press **?**, with brief descriptions of how to move around the screen, select data to be displayed and control the graphs.

To quit **bmon**, type **q** and then **y** in response to the prompt to confirm your choice to exit.

Some of the important things to note are that:

* **bmon** adjusts its display to the size of the terminal window
* some of the choices shown at the bottom of the display will only function if the window is large enough to accommodate the data
* the display is updated every second unless you slow this down using the **-R** option (e.g., **bmon -R 5**)

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3447936/viewing-network-bandwidth-usage-with-bmon.html

Author: [Sandra Henry-Stocker][a]
Selected by: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)