Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu Wang 2020-09-10 08:25:21 +08:00
commit 579ed5730f
16 changed files with 1666 additions and 232 deletions

View File

@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: (silentdawn-zz)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12594-1.html)
[#]: subject: (Why Sorting is O\(N log N\))
[#]: via: (https://theartofmachinery.com/2019/01/05/sorting_is_nlogn.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
Why Sorting is O(N log N)
======
Pretty much every serious algorithms textbook explains how fast sorting algorithms like quicksort and heapsort are, but it doesn't take complex math to prove the asymptotic speed limit that sorting can approach.

> A serious note about notation:
>
> Most computer scientists use big-O notation to mean "asymptotically equal, up to a constant scaling factor", which isn't quite what it means to other mathematicians. Here I'll use big O the way computer science textbooks do, but at least I won't mix it with other mathematical notation.

### Comparison-based sorting

Let's start with a special case: algorithms that compare two values at a time (quicksort, heapsort, and most other general-purpose sorting algorithms). The idea can be extended to all sorting algorithms later.

#### A simple counting argument for the worst case

Suppose you have four elements, all different, in random order. Can you sort them by comparing just one pair? Obviously not, and here's the proof: by definition, to sort the array you need to rearrange the elements into some order. In other words, you need to know which rearrangement to use. How many possible rearrangements are there? The first element could be moved to any of the four positions, the second to any of the remaining three, the third to either of the remaining two, and the last element takes the one position left. That gives $4×3×2×1 = 4! = 24$ possible rearrangements to choose from. A single comparison can only produce two possible outcomes. If you list all the rearrangements, perhaps "ascending" turns out to be rearrangement #8 and "descending" is #24, but there's no way to know when one of the other 22 rearrangements is needed.

With two comparisons you get $2×2=4$ possible outcomes, which still isn't enough. You can't sort four randomly ordered elements with fewer than five comparisons ($2^{5} = 32$ outcomes). If $W(N)$ is the number of comparisons needed to sort $N$ different elements in the worst case, then
$$
2^{W(N)} \geq N!
$$
Taking the base-2 logarithm of both sides gives:
$$
W(N) \geq \log_{2}{N!}
$$
$N!$ grows roughly like $N^{N}$ (see [Stirling's approximation][1]), so
$$
W(N) \succeq \log N^{N} = N\log N
$$
That's the $O(N\log N)$ bound on the worst case, obtained just by counting outputs.
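To get a feel for how close $\log_{2}{N!}$ is to $N\log N$, here is a quick check in Python (my own illustration, not part of the original argument):

```python
# Compare the exact bound log2(N!) with the textbook form N*log2(N).
# The ratio approaches 1 as N grows, up to lower-order terms.
import math

for n in [10, 100, 1000, 10000]:
    exact = math.log2(math.factorial(n))   # minimum worst-case comparisons
    approx = n * math.log2(n)
    print(n, round(exact), round(approx), round(exact / approx, 3))
```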
#### The average case, from information theory

With a little information theory, we can get a stronger result out of the discussion above. Here's how to use a sorting algorithm as a code for transmitting information:

1. I think of a number, say, 15
2. I look up the 15th permutation in a list of all permutations of four elements
3. I run the sorting algorithm on that permutation and record all the "greater"/"less" comparison results
4. I transmit the comparison results to you in binary code
5. You re-enact my run of the sorting algorithm, step by step, referring to my list of comparison results as needed
6. Now that you know how I rearranged my numbers to sort them, you can reverse the rearrangement to reconstruct the original order of the four elements
7. You look up my original permutation in the list of permutations and figure out that I transmitted the number 15

Sure, it's a bit strange, but it works. That means sorting algorithms are bound by the same laws as encoding schemes, including the theorem proving that no universal data-compression algorithm exists. The algorithm transmits one bit of comparison-result data per comparison, so, by information theory, the number of comparisons must be at least the number of binary digits needed to represent the data. More technically, [the minimum average number of comparisons is the Shannon entropy of the input data, measured in bits][2]. Entropy is a mathematical measure of the information content, or unpredictability, of something.
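The encoding trick is easier to believe with a concrete sketch. The following Python snippet (my illustration; the function names are made up) logs one bit per comparison while insertion-sorting on the sender's side, then replays the same deterministic algorithm on the receiver's side, answering each comparison from the bit stream:

```python
# Sender logs one bit per comparison; receiver replays the same deterministic
# algorithm, answering comparisons from the bit stream, and recovers the order.

def insertion_sort(items, less):
    items = list(items)
    for i in range(1, len(items)):
        j = i
        while j > 0 and less(items[j], items[j - 1]):
            items[j], items[j - 1] = items[j - 1], items[j]
            j -= 1
    return items

data = [30, 10, 40, 20]          # the sender's secret arrangement
bits = []

def sender_less(a, b):
    result = data[a] < data[b]   # compare the real values...
    bits.append(result)          # ...and transmit one bit per comparison
    return result

sent = insertion_sort(range(len(data)), sender_less)

stream = iter(bits)              # the receiver sees only the bits
received = insertion_sort(range(len(data)), lambda a, b: next(stream))

assert received == sent          # receiver learns which position held each rank
print(len(bits), "bits sent for", len(data), "elements")
```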
An array of $N$ elements, in random order with no bias, has maximum entropy: $\log_{2}{N!}$ bits. That proves $O(N\log N)$ is the optimal average for a comparison-based sort on arbitrary input.

That's the theory, but how do real sorting algorithms do? Below is a plot of the average number of comparisons needed to sort an array. I've compared the theoretical optimum against naive quicksort and the [Ford-Johnson merge-insertion sort][3], which was designed to minimize comparisons (it's not much faster than quicksort overall, because there's more to life than minimizing comparisons). Although merge-insertion sort dates back to 1959, it has kept being tweaked to shave off a few more comparisons, and the plot shows it's essentially optimal already.

![Plot of the average number of comparisons needed to sort randomly shuffled arrays of 100 elements. The bottom line is the theoretical optimum, merge-insertion sort sits about 1% above it, and naive quicksort about 25% above.][4]

It feels great when a little theory yields such a practical result!

#### Summary

We've proved:

1. If an array could start in any order, at least $O(N\log N)$ comparisons are needed in the worst case.
2. The average number of comparisons is at least the entropy of the array, which is $O(N\log N)$ for random input.

Note that result 2 lets comparison-based algorithms beat $O(N\log N)$ when the input is low-entropy (in other words, partially predictable). Merge sort comes close to $O(N)$ if the input contains many sorted subsequences, and insertion sort comes close to $O(N)$ if the input was sorted and then perturbed a little. None of them beats $O(N\log N)$ in the worst case, though.
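A small experiment (again my own, not the author's) shows how much entropy matters in practice, by counting the comparisons insertion sort makes on sorted versus shuffled input:

```python
# Count comparisons made by insertion sort on low- vs. high-entropy input.
import random

def insertion_sort_comparisons(items):
    items, count = list(items), 0
    for i in range(1, len(items)):
        j = i
        while j > 0:
            count += 1                       # one comparison
            if items[j] < items[j - 1]:
                items[j], items[j - 1] = items[j - 1], items[j]
                j -= 1
            else:
                break
    return count

n = 1000
print(insertion_sort_comparisons(range(n)))                    # sorted: N-1 comparisons
print(insertion_sort_comparisons(random.sample(range(n), n)))  # random: about N*N/4
```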
### General sorting algorithms

Comparison-based sorting is an interesting special case in practice, but in theory there's nothing special about a computer's [CMP][5] instruction compared with its other instructions. Both arguments above can be extended to any sorting algorithm, based on two observations:

1. Most computer instructions have more than two possible outputs, but the number is still finite.
2. An instruction's finite number of outputs means it can only process a finite amount of entropy.

That gives the same $O(N\log N)$ lower bound on the number of instructions, and since any physically realizable computer can only execute a finite number of instructions in a given time, an algorithm's running time has the same $O(N\log N)$ lower bound.

#### What about "faster" algorithms?

Put into practice, the general $O(N\log N)$ lower bound means that if you ever hear about a faster algorithm, you know it must be "cheating" somehow. There has to be a catch: it isn't a general-purpose sorting algorithm that works on arbitrarily large arrays. It may still be a useful algorithm, but it's a good idea to read the fine print.

A well-known example is radix sort, which is often called an $O(N)$ sorting algorithm, but it only works when every number fits into $k$ bits, so its real performance is $O({kN})$.

What does that mean? Suppose you're on an 8-bit machine: 8 binary digits can represent $2^{8} = 256$ different numbers, so if an array holds thousands of numbers, some of them must be duplicates. That's fine for some applications, while others need at least 16 bits, which can represent $2^{16} = 65,536$ different numbers, and 32 bits can represent $2^{32} = 4,294,967,296$ different numbers. As the array length grows, the number of bits needed grows with it: representing $N$ different numbers requires $k \geq \log_{2}N$ binary digits. So $O({kN})$ only beats $O(N\log N)$ if the array is allowed to contain duplicate numbers.
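The bit-counting arithmetic is easy to verify (an illustrative check, not from the article):

```python
# k >= log2(N): the minimum number of bits needed for N distinct values.
import math

for n in [256, 65_536, 4_294_967_296]:
    print(n, "distinct values need at least", math.ceil(math.log2(n)), "bits")
```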
$O(N\log N)$ performance on general input really is the whole story. The debate isn't all that interesting anyway, since it's rare to need to sort billions of integers on a 32-bit machine, and [if anyone has hit the limits of 64-bit machines, they haven't told the rest of us][6].
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/01/05/sorting_is_nlogn.html
Author: [Simon Arneaud][a]
Topic selection: [lujun9972][b]
Translator: [silentdawn-zz](https://github.com/silentdawn-zz)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: http://hyperphysics.phy-astr.gsu.edu/hbase/Math/stirling.html
[2]: https://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
[3]: https://en.wikipedia.org/wiki/Merge-insertion_sort
[4]: https://theartofmachinery.com/images/sorting_is_nlogn/sorting_algorithms_num_comparisons.svg
[5]: https://c9x.me/x86/html/file_module_x86_id_35.html
[6]: https://sortbenchmark.org/

View File

@ -1,8 +1,8 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12596-1.html)
[#]: subject: (Tweaking history on Linux)
[#]: via: (https://www.networkworld.com/article/3537214/tweaking-history-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
@ -12,7 +12,7 @@
> On Linux systems, the bash shell's history command makes it convenient to review and reuse commands, but there's a lot you can do to control how much it remembers and how much it forgets.
![](https://images.idgesg.net/images/article/2019/08/uk_united_kingdom_england_london_natural_history_museum_by_claudio_testa_cc0_via_unsplash_2400x1600-100808449-large.jpg)
![](https://img.linux.net.cn/data/attachment/album/202009/08/232418c2121m2euw3aaw58.jpg)
The bash `history` command on Linux systems helps you remember commands you've run before and repeat them without having to retype them.

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -140,7 +140,7 @@ via: https://opensource.com/article/19/3/reducing-sysadmin-toil-kubernetes-contr
Author: [Paul Czarkowski][a]
Topic selection: [lujun9972][b]
Translator: [chen-ni](https://github.com/chen-ni)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

View File

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

View File

@ -1,111 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Build a remote management console using Python and Jupyter Notebooks)
[#]: via: (https://opensource.com/article/20/9/remote-management-jupyter)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Build a remote management console using Python and Jupyter Notebooks
======
Turn Jupyter into a remote administration console.
![Computer laptop in space][1]
Secure shell (SSH) is a powerful tool for remote administration, but it lacks some niceties. Writing a full-fledged remote administration console sounds like it would be a lot of work. Surely, someone in the open source community has already written something?
They have, and its name is [Jupyter][2]. You might think Jupyter is one of those tools data scientists use to analyze trends in ad clicks over a week or something. This is not wrong—they do, and it is a great tool for that. But that is just scratching its surface.
### About SSH port forwarding
Sometimes, there is a server that you can SSH into over port 22. There is no reason to assume you can connect to any other port. Maybe you are SSHing through another "jumpbox" server that has more access or there are host or network firewalls that restrict ports. There are good reasons to restrict IP ranges for access, of course. SSH is a secure protocol for remote management, but allowing anyone to connect to any port is quite unnecessary.
Here is an alternative: Run a simple SSH command with port forwarding to forward a local port to a _remote_ _local_ connection. When you run an SSH port-forwarding command like `-L 8111:127.0.0.1:8888`, you are telling SSH to forward your _local_ port `8111` to what the _remote_ host thinks `127.0.0.1:8888` is. The remote host thinks `127.0.0.1` is itself.
Just like on _Sesame Street_, "here" is a subtle word.
The address `127.0.0.1` is how you spell "here" to the network.
### Learn by doing
This might sound confusing, but running this is less complicated than explaining it:
```
$ ssh -L 8111:127.0.0.1:8888 moshez@172.17.0.3
Linux 6ad096502e48 5.4.0-40-generic #44-Ubuntu SMP Tue Jun 23 00:01:04 UTC 2020 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Aug  5 22:03:25 2020 from 172.17.0.1
$ jupyter/bin/jupyter lab --ip=127.0.0.1
[I 22:04:29.771 LabApp] JupyterLab application directory is /home/moshez/jupyter/share/jupyter/lab
[I 22:04:29.773 LabApp] Serving notebooks from local directory: /home/moshez
[I 22:04:29.773 LabApp] Jupyter Notebook 6.1.1 is running at:
[I 22:04:29.773 LabApp] <http://127.0.0.1:8888/?token=df91012a36dd26a10b4724d618b2e78cb99013b36bb6a0d1>
<MORE STUFF SNIPPED>
```
Port-forward `8111` to `127.0.0.1` and start Jupyter on the remote host that's listening on `127.0.0.1:8888`.
Now you need to understand that Jupyter is lying. It thinks you need to connect to port `8888`, but you forwarded that to port `8111`. So, after you copy the URL to your browser, but before clicking Enter, modify the port from `8888` to `8111`:
![Jupyter remote management console][3]
(Moshe Zadka, [CC BY-SA 4.0][4])
There it is: your remote management console. As you can see, there is a "Terminal" icon at the bottom. Click it to get a terminal:
![Terminal in Jupyter remote console][5]
(Moshe Zadka, [CC BY-SA 4.0][4])
You can run a command. Creating a file will show it in the file browser on the side. You can click on that file to open it in an editor that is running locally:
![Opening a file][6]
(Moshe Zadka, [CC BY-SA 4.0][4])
You can also download, rename, or delete files:
![File options in Jupyter remote console][7]
(Moshe Zadka, [CC BY-SA 4.0][4])
Clicking on the little **Up arrow** will let you upload files. Why not upload the screenshot above?
![Uploading a screenshot][8]
(Moshe Zadka, [CC BY-SA 4.0][4])
As a nice final tidbit, Jupyter lets you view the remote images directly by double-clicking on them.
Oh, right, and if you want to do systems automation using Python, you can also use Jupyter to open a notebook.
So the next time you need to remotely manage a firewalled environment, why not use Jupyter?
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/remote-management-jupyter
Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://jupyter.org/
[3]: https://opensource.com/sites/default/files/uploads/output_1_0.png (Jupyter remote management console)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/output_3_0.png (Terminal in Jupyter remote console)
[6]: https://opensource.com/sites/default/files/uploads/output_5_0.png (Opening a file)
[7]: https://opensource.com/sites/default/files/uploads/output_7_0.png (File options in Jupyter remote console)
[8]: https://opensource.com/sites/default/files/uploads/output_9_0.png (Uploading a screenshot)

View File

@ -0,0 +1,265 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Deploy a deep learning model on Kubernetes)
[#]: via: (https://opensource.com/article/20/9/deep-learning-model-kubernetes)
[#]: author: (Chaimaa Zyani https://opensource.com/users/chaimaa)
Deploy a deep learning model on Kubernetes
======
Learn how to deploy, scale, and manage a deep learning model that serves
up image recognition predictions with Kubermatic Kubernetes Platform.
![Brain on a computer screen][1]
As enterprises increase their use of artificial intelligence (AI), machine learning (ML), and deep learning (DL), a critical question arises: How can they scale and industrialize ML development? These conversations often focus on the ML model; however, this is only one step along the way to a complete solution. To achieve in-production application and scale, model development must include a repeatable process that accounts for the critical activities that precede and follow development, including getting the model into a public-facing deployment.
This article demonstrates how to deploy, scale, and manage a deep learning model that serves up image recognition predictions using [Kubermatic Kubernetes Platform][2].
Kubermatic Kubernetes Platform is a production-grade, open source Kubernetes cluster-management tool that offers flexibility and automation to integrate with ML/DL workflows with full cluster lifecycle management.
### Get started
This example deploys a deep learning model for image recognition. It uses the [CIFAR-10][3] dataset that consists of 60,000 32x32 color images in 10 classes with the [Gluon][4] library in [Apache MXNet][5] and NVIDIA GPUs to accelerate the workload. If you want to use a pre-trained model on the CIFAR-10 dataset, check out the [getting started guide][6].
The model was trained over a span of 200 epochs, as long as the validation error kept decreasing slowly without causing the model to overfit. This plot shows the training process:
![Deep learning model training plot][7]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
After training, it's essential to save the model's parameters so they can be loaded later:
```
file_name = "net.params"
net.save_parameters(file_name)
```
Once the model is ready, wrap your prediction code in a Flask server. This allows the server to accept an image as an argument to its request and return the model's prediction in the response:
```
from gluoncv.model_zoo import get_model
import matplotlib.pyplot as plt
from mxnet import gluon, nd, image
from mxnet.gluon.data.vision import transforms
from gluoncv import utils
from PIL import Image
import io
import flask
app = flask.Flask(__name__)
@app.route("/predict",methods=["POST"])
def predict():
    if flask.request.method == "POST":
        if flask.request.files.get("img"):
            img = Image.open(io.BytesIO(flask.request.files["img"].read()))
            transform_fn = transforms.Compose([
            transforms.Resize(32),
            transforms.CenterCrop(32),
            transforms.ToTensor(),
            transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
            img = transform_fn(nd.array(img))
            net = get_model('cifar_resnet20_v1', classes=10)
            net.load_parameters('net.params')
            pred = net(img.expand_dims(axis=0))
            class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                       'dog', 'frog', 'horse', 'ship', 'truck']
            ind = nd.argmax(pred, axis=1).astype('int')
            prediction = ('The input picture is classified as [%s], with probability %.3f.'
                          % (class_names[ind.asscalar()], nd.softmax(pred)[0][ind].asscalar()))
    return prediction
if __name__ == '__main__':
   app.run(host='0.0.0.0')
```
### Containerize the model
Before you can deploy your model to Kubernetes, you need to install Docker and create a container image with your model.
1. Download, install, and start Docker:

```
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce
sudo systemctl start docker
```
2. Create a directory where you can organize your code and dependencies:

```
mkdir kubermatic-dl
cd kubermatic-dl
```
3. Create a `requirements.txt` file to contain the packages the code needs to run:

```
flask
gluoncv
matplotlib
mxnet
requests
Pillow
```
4. Create the Dockerfile that Docker will read to build and run the model:

```
FROM python:3.6
WORKDIR /app
COPY requirements.txt /app
RUN pip install -r ./requirements.txt
COPY app.py /app
CMD ["python", "app.py"]
```

This Dockerfile can be broken down into three steps. First, it tells Docker to download a base image of Python 3. Next, it asks Docker to use the Python package manager `pip` to install the packages in `requirements.txt`. Finally, it tells Docker to run your script via `python app.py`.
5. Build the Docker container: `sudo docker build -t kubermatic-dl:latest .` This instructs Docker to build a container for the code in your current working directory, `kubermatic-dl`.
6. Check that your container is working by running it on your local machine: `sudo docker run -d -p 5000:5000 kubermatic-dl`
7. Check the status of your container by running `sudo docker ps -a`:
![Checking the container's status][9]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
### Upload the model to Docker Hub
Before you can deploy the model on Kubernetes, it must be publicly available. Do that by adding it to [Docker Hub][10]. (You will need to create a Docker Hub account if you don't have one.)
1. Log into your Docker Hub account: `sudo docker login`
2. Tag the image so you can refer to it for versioning when you upload it to Docker Hub:

```
sudo docker tag <your-image-id> <your-docker-hub-name>/<your-app-name>
sudo docker push <your-docker-hub-name>/<your-app-name>
```
![Tagging the image][11]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
3. Check your image ID by running `sudo docker images`.
### Deploy the model to a Kubernetes cluster
1. Create a project on the Kubermatic Kubernetes Platform, then create a Kubernetes cluster using the [quick start tutorial][12].
![Create a Kubernetes cluster][13]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
2. Download the `kubeconfig` used to configure access to your cluster, change into the download directory, and export it into your environment:
![Kubernetes cluster example][14]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
3. Using `kubectl`, check the cluster information, such as the services that `kube-system` starts on your cluster: `kubectl cluster-info`
![Checking the cluster info][15]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
4. To run the container in the cluster, you need to create a deployment (`deployment.yaml`) and apply it to the cluster:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubermatic-dl-deployment
spec:
  selector:
    matchLabels:
      app: kubermatic-dl
  replicas: 3
  template:
    metadata:
      labels:
        app: kubermatic-dl
    spec:
      containers:
      - name: kubermatic-dl
        image: kubermatic00/kubermatic-dl:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
```

Apply the deployment with: `kubectl apply -f deployment.yaml`
5. To expose your deployment to the outside world, you need a service object that will create an externally reachable IP for your container: `kubectl expose deployment kubermatic-dl-deployment --type=LoadBalancer --port 80 --target-port 5000`
6. You're almost there! Check your services to determine the status of your deployment and get the IP address to call your image recognition API: `kubectl get service`
![Get the IP address to call your image recognition API][16]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
7. Test your API with these two images using the external IP:
![Horse][17]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
![Dog][18]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
![Testing the API][19]
(Chaimaa Zyami, [CC BY-SA 4.0][8])
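If you'd rather script the test than use a browser, a client call might look like this sketch (the external IP is a placeholder; `requests` is already in `requirements.txt`, and the file field name `img` matches the Flask handler above):

```python
# Hypothetical client for the deployed API; replace EXTERNAL_IP with the
# address reported by `kubectl get service`.
import requests

with open("horse.jpg", "rb") as f:
    response = requests.post("http://EXTERNAL_IP/predict", files={"img": f})
print(response.text)
```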
### Summary
In this tutorial, you created a deep learning model to be served as a [REST API][20] using Flask. You put the application inside a Docker container, uploaded the container to Docker Hub, and deployed it with Kubernetes. Then, with just a few commands, Kubermatic Kubernetes Platform deployed the app and exposed it to the world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/deep-learning-model-kubernetes
Author: [Chaimaa Zyani][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/chaimaa
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_computer_solve_fix_tool.png?itok=okq8joti (Brain on a computer screen)
[2]: https://www.loodse.com/products/kubermatic/
[3]: https://www.cs.toronto.edu/~kriz/cifar.html
[4]: https://gluon.mxnet.io/
[5]: https://mxnet.apache.org/
[6]: https://gluon-cv.mxnet.io/build/examples_classification/demo_cifar10.html
[7]: https://opensource.com/sites/default/files/uploads/trainingplot.png (Deep learning model training plot)
[8]: https://creativecommons.org/licenses/by-sa/4.0/
[9]: https://opensource.com/sites/default/files/uploads/containerstatus.png (Checking the container's status)
[10]: https://hub.docker.com/
[11]: https://opensource.com/sites/default/files/uploads/tagimage.png (Tagging the image)
[12]: https://docs.kubermatic.com/kubermatic/v2.13/installation/install_kubermatic/_installer/
[13]: https://opensource.com/sites/default/files/uploads/kubernetesclusterempty.png (Create a Kubernetes cluster)
[14]: https://opensource.com/sites/default/files/uploads/kubernetesexamplecluster.png (Kubernetes cluster example)
[15]: https://opensource.com/sites/default/files/uploads/clusterinfo.png (Checking the cluster info)
[16]: https://opensource.com/sites/default/files/uploads/getservice.png (Get the IP address to call your image recognition API)
[17]: https://opensource.com/sites/default/files/uploads/horse.jpg (Horse)
[18]: https://opensource.com/sites/default/files/uploads/dog.jpg (Dog)
[19]: https://opensource.com/sites/default/files/uploads/testapi.png (Testing the API)
[20]: https://www.redhat.com/en/topics/api/what-is-a-rest-api

View File

@ -0,0 +1,229 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to install software with Ansible)
[#]: via: (https://opensource.com/article/20/9/install-packages-ansible)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
How to install software with Ansible
======
Automate software installations and updates across your devices with
Ansible playbooks.
![Puzzle pieces coming together to form a computer screen][1]
Ansible is a popular automation tool used by sysadmins and developers to keep their computer systems in prime condition. As is often the case with extensible frameworks, [Ansible][2] has limited use on its own, with its real power dwelling in its many modules. Ansible modules are, in a way, what commands are to a [Linux][3] computer. They provide solutions to specific problems, and one common task when maintaining computers is keeping all the ones you use updated and consistent.
I used to use a text list of packages to keep my systems more or less synchronized: I'd list the packages installed on my laptop and then cross-reference that with my desktop, or between one server and another server, making up for any difference manually. Of course, installing and maintaining applications on a Linux machine is a basic task for Ansible, and it means you can list what you want across all computers under your care.
### Finding the right Ansible module
The number of Ansible modules can be overwhelming. How do you find the one you need for a given task? In Linux, you might look in your Applications menu or in `/usr/bin` to discover new applications to run. When you're using Ansible, you refer to the [Ansible module index][4].
The index is listed primarily by category. With a little searching, you're very likely to find a module for whatever you need. For package management, the [Packaging modules][5] section contains a module for nearly any system with a package manager.
### Writing an Ansible playbook
To begin, choose the package manager on your local computer. For instance, if you're going to write your Ansible instructions (a "playbook," as it's called in Ansible) on a laptop running Fedora, start with the `dnf` module. If you're writing on Elementary OS, use the `apt` module, and so on. This gets you started with something you can test and verify as you go, and you can expand your work for your other computers later.
The first step is to create a directory representing your playbook. This isn't strictly necessary, but it's a good idea to establish the habit. Ansible can run with just a configuration file written in YAML, but if you want to expand your playbook later, you can control Ansible by how you lay out your directories and files. For now, just create a directory called `install_packages` or similar:
```
$ mkdir ~/install_packages
```
The file that serves as the Ansible playbook can be named anything you like, but it's traditional to name it `site.yml`:
```
$ touch ~/install_packages/site.yml
```
Open `site.yml` in your favorite text editor, and add this:
```
---
- hosts: localhost
  tasks:
    - name: install packages
      become: true
      become_user: root
      dnf:
        state: present
        name:
         - tcsh
         - htop
```
You must adjust the module name you use to match the distribution you're using. In this example, I used `dnf` because I wrote the playbook on Fedora Linux.
Like with a command in a Linux terminal, knowing _how_ to invoke an Ansible module is half the battle. This playbook example follows the standard playbook format:
* `hosts` targets a computer or computers. In this case, the computer being targeted is `localhost`, which is the computer you're using right now (as opposed to a remote system you want Ansible to connect with).
* `tasks` opens a list of tasks you want to be performed on the hosts.
* `name` is a human-friendly title for a task. In this case, I'm using `install packages` because that's what this task is doing.
* `become` permits Ansible to change which user is running this task.
* `become_user` permits Ansible to become the `root` user to run this task. This is necessary because only the root user can install new applications using `dnf`.
* `dnf` is the name of the module, which you discovered from the module index on the Ansible website.
The items under the `dnf` item are specific to the `dnf` module. This is where the module documentation is essential. Like a man page for a Linux command, the module documentation tells you what options are available and what kinds of arguments are required.
![Ansible documentation][6]
Ansible module documentation (Seth Kenlon, [CC BY-SA 4.0][7])
Package installation is a relatively simple task and only requires two elements. The `state` option instructs Ansible to check whether or not _some package_ is present on the system, and the `name` option lists which packages to look for. Ansible deals in machine _state_, so module instructions always imply change. Should Ansible scan a system and find a conflict between how a playbook describes a system (in this case, that the commands `tcsh` and `htop` are present) and what the system state actually is (in this example, `tcsh` and `htop` are not present), then Ansible's task is to make whatever changes are necessary for the system to match the playbook. Ansible can make those changes because of the `dnf` (or `apt` or whatever your package manager is) module.
Each module is likely to have a different set of options, so when you're writing playbooks, anticipate referring to the module documentation often. Until you're very familiar with a module, it's the only reasonable way to expect a module to do what you need it to do.
### Verifying YAML
Playbooks are written in YAML. Because YAML adheres to a strict syntax, it's helpful to install the `yamllint` command to check (or "lint," in computer terminology) your work. Better still, there's a linter specific to Ansible called `ansible-lint` created specifically for playbooks. Install these before continuing.
On Fedora or CentOS:
```
$ sudo dnf install yamllint python3-ansible-lint
```
On Debian, Elementary, Ubuntu, or similar:
```
$ sudo apt install yamllint ansible-lint
```
Verify your playbook with `ansible-lint`. If you don't have access to `ansible-lint`, you can use `yamllint`.
```
$ ansible-lint ~/install_packages/site.yml
```
Success returns nothing, but if there are errors in your file, you must fix them before continuing. Common errors from copying and pasting include omitting a newline character at the end of the final line and using tabs instead of spaces for indentation. Fix them in a text editor, rerun the linter, and repeat this process until you get no feedback from `ansible-lint` or `yamllint`.
### Installing an application with Ansible
Now that you have a verifiably valid playbook, you can finally run it on your local machine. Because you happen to know that the task defined by the playbook requires root permissions, you must use the `--ask-become-pass` option when invoking Ansible, so you will be prompted for your administrative password.
Start the installation:
```
$ ansible-playbook --ask-become-pass ~/install_packages/site.yml
BECOME password:
PLAY [localhost] ******************************
TASK [Gathering Facts] ******************************
ok: [localhost]
TASK [install packages] ******************************
ok: [localhost]
PLAY RECAP ******************************
localhost: ok=0 changed=2 unreachable=0 failed=0 [...]
```
The commands are installed, leaving the target system in an identical state to the one described by the playbook.
### Installing an application on remote systems
Going through all of that to replace one simple command would be counterproductive, but Ansible's advantage is that it can be automated across all of your systems. You can use conditional statements to cause Ansible to use a specific module on different systems, but for now, assume all your computers use the same package manager.
To connect to a remote system, you must define the remote system in the `/etc/ansible/hosts` file. This file was installed along with Ansible, so it already exists, but it's probably empty, aside from explanatory comments. Use `sudo` to open the file in your favorite text editor.
You can define a host by its IP address or hostname, as long as the hostname can be resolved. For instance, if you've already defined `liavara` in `/etc/hosts` and can successfully ping it, then you can set `liavara` as a host in `/etc/ansible/hosts`. Alternately, if you're running a domain name server or Avahi server and can ping `liavara`, then you can set it as a host in `/etc/ansible/hosts`. Otherwise, you must use its internet protocol address.
You also must have set up a successful secure shell (SSH) connection to your target hosts. The easiest way to do that is with the `ssh-copy-id` command, but if you've never set up an SSH connection with a host before, [read my article on how to create an automated SSH connection][8].
Once you've entered the hostname or IP address in the `/etc/ansible/hosts` file, change the `hosts` definition in your playbook:
```
---
- hosts: all
  tasks:
    - name: install packages
      become: true
      become_user: root
      dnf:
        state: present
        name:
         - tcsh
         - htop
```
Run `ansible-playbook` again:
```
$ ansible-playbook --ask-become-pass ~/install_packages/site.yml
```
This time, the playbook runs on your remote system.
Should you add more hosts, there are many ways to filter which host performs which task. For instance, you can create groups of hosts (`webservers` for servers, `workstations` for desktop machines, and so on).
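As a sketch (the group names and hosts are placeholders, except `liavara` from the earlier example), such groups are plain INI sections in `/etc/ansible/hosts`, and a playbook can then target `hosts: webservers` instead of `hosts: all`:

```
[webservers]
example-web-1
example-web-2

[workstations]
liavara
```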
### Ansible for mixed environments
The logic used in the solution so far assumes that all hosts being configured by Ansible run the same OS (specifically, one that uses the **dnf** command for package management). So what do you do if you're managing hosts running a different distribution, such as Ubuntu (which uses **apt**) or Arch (using **pacman**), or even different operating systems?
As long as the targeted OS has a package manager (and these days even [MacOS has Homebrew][9] and [Windows has Chocolatey][10]), Ansible can help.
This is where Ansible's advantage becomes most apparent. In a shell script, you'd have to check for what package manager is available on the target host, and even with pure Python you'd have to check for the OS. Ansible not only has those checks built in, but it also has mechanisms to use the results in your playbook. Instead of using the **dnf** module, you can use the **action** keyword to perform tasks defined by variables provided by Ansible's fact gathering subsystem.
```
---
- hosts: all
  tasks:
    - name: install packages
      become: true
      become_user: root
      action: >
        {{ ansible_pkg_mgr }} name=htop,transmission state=present update_cache=yes
```
The **action** keyword loads action plugins. In this example, it's using the **ansible_pkg_mgr** variable, which is populated by Ansible during the initial **Gathering Facts** task. You don't have to tell Ansible to gather facts about the OS it's running on, so it's easy to overlook it, but when you run a playbook, you see it listed in the default output:
```
TASK [Gathering Facts] *****************************************
ok: [localhost]
```
The **action** plugin uses information from this probe to populate **ansible_pkg_mgr** with the relevant package manager command to install the packages listed after the **name** argument. With 8 lines of code, you can overcome a complex cross-platform quandary that few other scripting options allow.
### Use Ansible
It's the 21st century, and we all expect our computing devices to be connected and relatively consistent. Whether you maintain two or 200 computers, you shouldn't have to perform the same maintenance tasks over and over again. Use Ansible to synchronize the computers in your life, then see what else Ansible can do for you.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/install-packages-ansible
Author: [Seth Kenlon][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen)
[2]: https://opensource.com/resources/what-ansible
[3]: https://opensource.com/resources/linux
[4]: https://docs.ansible.com/ansible/latest/modules/modules_by_category.html
[5]: https://docs.ansible.com/ansible/latest/modules/list_of_packaging_modules.html
[6]: https://opensource.com/sites/default/files/uploads/ansible-module.png (Ansible documentation)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/article/20/8/how-ssh
[9]: https://opensource.com/article/20/6/homebrew-mac
[10]: https://opensource.com/article/20/3/chocolatey

View File

@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open source data control for cloud services with Apache Ranger)
[#]: via: (https://opensource.com/article/20/9/apache-ranger)
[#]: author: (Balaji Ganesan https://opensource.com/users/balajiganesan)
Open source data control for cloud services with Apache Ranger
======
Comparing different approaches to make more data available to more users
while maintaining security and compliance with data privacy regulations.
![Tools in a cloud][1]
As the movement to migrate enterprise data to the cloud gathers steam, there is an active debate on the best approach to securing and protecting it. But before we talk about the details of the various access control frameworks, let us first understand the breadth of challenges a company faces when it begins migrating its data to the cloud. First and foremost is the wide array of storage and analysis or compute services offered by cloud and third-party providers. In other words, when a company decides to move its data to the cloud, it needs to decide the type of repository in which it is going to store its data.
Each cloud company offers many different data stores, and there are a dozen different services to analyze data once it has been migrated to the cloud. Then there are cloud-native third-party services to allow data science platforms and data warehouses to operate as part of the leading public cloud infrastructure. Each of these services offers a unique mechanism by which to administer access to data consumers such as data analysts and scientists in the organization.
If you think this is beginning to sound a lot like Hadoop-based data lakes, you're right. Needless to say, this places a very heavy burden on the administrators that have to make data widely available in the organization and comply with privacy and industry regulations such as California Consumer Privacy Act (CCPA), General Data Protection Regulation (GDPR), and Health Insurance Portability and Accountability Act (HIPAA) at the same time.
### The fundamentals of two popular approaches: RBAC vs. ABAC
Access control mechanisms have been part of the enterprise IT landscape since the advent of computer systems, and there are two key aspects to controlling access to data. The first relates to authenticating the identity of the user and establishing whether the individual or system is actually who they claim to be. The second has to do with ensuring that the user has the appropriate permission to access a data system, a process known as authorization. These principles also apply to the data stored in the cloud. Today, role-based access control (RBAC) and attribute-based access control (ABAC) are the two most prevalent approaches to managing access to data in the enterprise. The goal of these approaches is to help define and enforce the policies and privileges that grant authorized users access to the required data.
RBAC is based on the concepts of users, roles, groups, and privileges in an organization. Administrators grant privileges or permissions to pre-defined organizational roles—roles that are assigned to subjects or users based on their responsibility or area of expertise. For example, a user who is assigned the role of a manager might have access to a different set of objects and/or is given permission to perform a broader set of actions on them as compared to a user with the assigned role of an analyst. When the user generates a request to access a data object, the access control mechanism evaluates the role assigned to the user and the set of operations this role is authorized to perform on the object before deciding whether to grant or deny the request.
RBAC simplifies the administration of data access controls because concepts such as users and roles are well-understood constructs in a majority of organizations. In addition to being based on familiar database concepts, RBAC also offers administrators the flexibility to assign users to various roles, reassign users from one role to another, and grant or revoke permissions as required. Once an RBAC framework is established, the administrator's role is primarily to assign or revoke users to specific roles. In RBAC, a user can be assigned many roles, a role can have many users, and a role/user can perform many operations.
The concept of attribute-based access control appeared on the scene in the early 2000s. Prior to ABAC, managing access to enterprise data involved granting a user or subject permission to perform a specific action on an entity—in this case, a database, table, or column. In ABAC, the decision to grant access or request to perform an operation on an object is based on assigned attributes of the subject, object, environment conditions, and a set of policies that are specific to those attributes and conditions. Environment conditions are dynamic factors that are independent of user or object and can include things such as the time and location of the subject. Just like subjects or users have attributes, so do objects such as databases, files, or tables. Object attributes may include author, creation date, version, effective date, last update, etc.
ABAC operates by assigning attributes to subjects and objects and developing policies that govern rules of data access. Each component in the information system is assigned attributes that are specific to the object. For example, a file can be classified as intellectual property (IP). Similarly, each user or subject in the system can be assigned attributes that may include the user's location and time zone. Based on these attributes, an administrator can build an access policy that specifies that any document classified as IP cannot be accessed by a user who is located outside the US, or that it can only be accessed by users affiliated with the company's legal department between the hours of 8:00am and 5:00pm PST. You can now see how ABAC extends the concepts of role, users, and privileges to include attributes.
ABAC also offers several advantages to infrastructure administrators. For instance, they do not require knowledge of specific users or subjects that need access to data. The combination of user and object attributes governed by a set of policies can accommodate an unlimited number of users. As new users are added to the platform, they, too, can be governed by the same set of rules. Because ABAC does not require administrators to have prior knowledge of the users, this approach is better suited to environments where individuals are routinely added and removed from the data platform.
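To make the ABAC flow concrete, here is a toy policy check in Python. It mirrors the IP-document example above but is purely illustrative; it is not Apache Ranger's API or any real product's:

```python
# Toy ABAC decision: subject, object, and environment attributes plus a policy.

def ip_policy(subject, obj, env):
    """Documents classified as IP: only US-based legal staff, 08:00-17:00."""
    if obj.get("classification") != "IP":
        return True                          # this policy doesn't apply
    return (subject.get("country") == "US"
            and subject.get("department") == "legal"
            and 8 <= env.get("hour", 0) < 17)

subject = {"country": "US", "department": "legal"}
obj = {"classification": "IP", "author": "jdoe"}
env = {"hour": 14}

print(ip_policy(subject, obj, env))          # True: all attribute conditions hold
```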
### Making the right choice
It is important to point out that the distinction between RBAC and ABAC approaches is increasingly blurred by access control platforms such as Apache Ranger, a data governance framework originally developed to manage Big Data in Hadoop data lakes.
Today, Apache Ranger is the leading open source project for data access governance for Big Data environments, including Apache Spark. It's in use at hundreds of enterprises around the world, utilized to define and enforce data access control policies to govern sensitive data as mandated by regulations like GDPR and CCPA.
Apache Ranger was built to centrally manage access to data used by different engines that are part of the Hadoop platforms. It is inherently architected to handle the diversity of data storage and compute environments presented by multiple cloud services in use at enterprises today.
Apache Ranger's approach to data authorization is based on ABAC, which is a combination of the subject, action, resource, and environment. At the same time, Ranger can provide fine-grained access control to users based on the concepts of role, user, and permission.
The best strategy for organizations migrating to the cloud is to select a data access control platform that strikes a balance between empowering administrators to make more data available to more data consumers and complying with industry and privacy regulations. More importantly, it must do this without adversely affecting the performance of the data platform or user behavior. 
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/apache-ranger
Author: [Balaji Ganesan][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/balajiganesan
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT (Tools in a cloud)

View File

@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tux the Linux Penguin in its first video game, better DNS and firewall on Android, Gitops IDE goes open source, and more open source news)
[#]: via: (https://opensource.com/article/20/9/news-sept-8)
[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo)
Tux the Linux Penguin in its first video game, better DNS and firewall on Android, Gitops IDE goes open source, and more open source news
======
Catch up on the biggest open source headlines from the past two weeks.
![][1]
In this week's edition of our open source news roundup, Gitpod open sources its IDE platform, BraveDNS launches an all-in-one platform, and more open source news.
### Engineers debut an open source-powered robot
Matthias Müller and Vladlen Koltun, two engineers at Intel, have shared their new robot to tackle computer vision tasks. [The robot][2], called "OpenBot", is powered by a smartphone, which acts as a camera and computing unit. 
The OpenBot prototype components cost $50. It's intended to be a low-cost alternative to commercially available radio-controlled models, with more computing power than educational models.
To use OpenBot, users can connect their smartphones to an electromechanical body. They can also use Bluetooth to connect their smartphone to a video game controller like an Xbox or PlayStation. 
Müller and Koltun say they want OpenBot to address two key issues in robotics: Scalability and accessibility. Its source code is still pending [on GitHub][3], although models for 3D-printing the case are up.
### Tux the Linux Penguin gets his video game dues
A new update to [a free and open source 3D kart racer][4] features an unlikely hero: Tux, the Linux penguin.
Born in the early aughts as a project called _TuxKart_, it was renamed "Super Tux Kart" by Joerg Henrichs in 2006. Tux is the latest open source mascot to feature in the project: Blender's and GIMP's mascots are represented as well.
Along with adding Tux to the mix, Super Tux Kart Version 1.2 includes lots of updates. iOS users can create racing servers in-game, while all official tracks are now included in the release built on Android. And since the game is open source [on four platforms][5], all players can make their own changes to submit for review.
### BraveDNS offers three services in one for Android users
It's notoriously tough for Android users to find a firewall, adblocker, and DNS-over-HTTPS client in one product. But if BraveDNS lives up to the hype, this free and open source tool offers all three in one. 
Self-described as “an [OpenSnitch][6]-inspired firewall and network monitor + a [pi-hole][7]-inspired DNS over HTTPS client with blocklists”, BraveDNS uses its own ads, trackers, and spyware-blocking DNS endpoint. Users who need features like custom blocklists and ability to store DNS logs can use the tool's DNS resolver service as a paid option.
Along with a robust [list of firewall features][8], BraveDNS offers to backport support for dual-mode DNS and firewall execution to legacy Android versions. You'll need at least Android 8 Oreo to use the latest version of BraveDNS from their website and Google Play, but the developers pledge to make it compatible down to Android Marshmallow in the near future.
### Gitpod open sources its IDE platform
With projects like Theia, Xtext, and Open VSX under its belt, Gitpod has been a strong open source presence for 10 years. Now, Gitpod -- an IDE platform for GitHub projects -- is [officially open source][9] as well.
The move marks a big change for Gitpod, which was previously closed to community development. Founders Sven Efftinge and Johannes Landgraf shared that Gitpod now meets GitHub's open source criteria under the AGPL license. This allows Gitpod developers to collaborate on Kubernetes applications.
Along with Gitpod's open source status, they've expanded into software as well. Self-Hosted, a private cloud platform, is now available for free to unlimited users. Designed for DevOps teams to work on enterprise projects, Self-Hosted's features include collaboration tools, analytics, dashboards, and more.
In other news:
* [5 open source software applications for virtualization][10]
* [Building a heavy duty open source ventilator][11]
* [China looks at Gitee as an open source alternative to Microsoft's GitHub][12]
* [The future of American industry depends on open source tech][13]
Thanks, as always, to Opensource.com staff members and [Correspondents][14] for their help this week.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/news-sept-8
Author: [Lauren Maffeo][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/lmaffeo
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
[2]: https://www.inceptivemind.com/openbot-open-source-low-cost-smartphone-powered-robot/15023/
[3]: https://github.com/intel-isl/OpenBot
[4]: https://hothardware.com/news/super-tux-kart-update
[5]: https://supertuxkart.net/Download
[6]: https://github.com/evilsocket/opensnitch
[7]: https://github.com/pi-hole/pi-hole
[8]: https://www.xda-developers.com/bravedns-open-source-dns-over-https-client-firewall-adblocker-android/
[9]: https://aithority.com/it-and-devops/gitpod-goes-open-source-with-its-ide-platform-launches-self-hosted-cloud-package/
[10]: https://searchservervirtualization.techtarget.com/tip/5-open-source-software-applications-for-virtualization
[11]: https://hackaday.com/2020/08/28/building-a-heavy-duty-open-source-ventilator/
[12]: https://www.scmp.com/abacus/tech/article/3099107/china-pins-its-hopes-gitee-open-source-alternative-microsofts-github
[13]: https://www.wired.com/story/opinon-the-future-of-american-industry-depends-on-open-source-tech/
[14]: https://opensource.com/correspondent-program

View File

@ -0,0 +1,108 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create a slide deck using Jupyter Notebooks)
[#]: via: (https://opensource.com/article/20/9/presentation-jupyter-notebooks)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
Create a slide deck using Jupyter Notebooks
======
Jupyter may not be the most straightforward way to create presentation
slides and handouts, but it affords more control than simpler options.
![Person reading a book and digital copy][1]
There are many options when it comes to creating slides for a presentation. There are straightforward ways, and generating slides directly from [Jupyter][2] is not one of them. But I was never one to do things the easy way. I also have high expectations that no other slide-generation software quite meets.
### Why transition from slides to Jupyter?
I want four features in my presentation software:
1. An environment where I can run the source code to check for errors
2. A way to include speaker notes but hide them during the presentation
3. To give attendees a useful handout for reading
4. To give attendees a useful handout for exploratory learning
There is nothing more uncomfortable about giving a talk than having someone in the audience point out that there is a coding mistake on one of my slides. Often, it's misspelling a word, forgetting a return statement, or doing something else that becomes invisible as soon as I leave my development environment, where I have [a linter][3] running to catch these mistakes.
After having one too many of these moments, I decided to find a way to run the code directly from my slide editor to make sure it is correct. There are three "gotchas" I needed to consider in my solution:
* A lot of code is boring. Nobody cares about three slides worth of `import` statements, and my hacks to mock out the `socket` module distract from my point. But it's essential that I can test the code without creating a network outage.
* Including boilerplate code is _almost_ as boring as hearing me read words directly off of the slide. We have all heard (or even given) talks where there are three bullet points, and the presenter reads them verbatim. I try to avoid this behavior by using speaker notes.
There is nothing more annoying to the audience than reference material that doesn't include any of the speaker notes. So I want to generate a beautiful handout containing all of my notes and the slides from the same source. Even better, I don't want to have slides on one handout and a separate GitHub repository for the source code.
As is often the case, to solve this issue, I found myself reaching for [JupyterLab][4] and its notebook management capabilities.
### Using Jupyter Notebooks for presentations
I begin my presentations by using Markdown and code blocks in a Jupyter Notebook, just like I would for anything else in JupyterLab. I write out my presentation using separate Markdown sections for the text I want to show on the slides and for the speaker notes. Code snippets go into their own blocks, as you would expect.
Because you can add a "tag" to cells, I tag any cell that has "boring" code as `no_markdown`.
![Using tags in Jupyter Notebook][5]
(Moshe Zadka, [CC BY-SA 4.0][6])
Then I convert my Notebook to Markdown with:
```
$ jupyter nbconvert presentation.ipynb --to markdown --TagRemovePreprocessor.remove_cell_tags='{"no_markdown"}' --output build/presentation.md
```
There are ways to [convert Markdown to slides][7]—but I have no idea how to use any of them and even less desire to learn. Plus, I already have my favorite presentation-creation tool: [Beamer][8].
But Beamer requires custom LaTeX, and that is not usually generated when you convert Markdown to LaTeX. Thankfully, one Markdown implementation, [Pandoc Markdown][9], has a feature that lets me do what I want. Its [raw_attribute][10] extension allows including "raw" bits of the target format in the Markdown.
This means if I run `pandoc` on the Markdown export from a notebook that includes `raw_attribute` LaTeX bits, I can have as much control over the LaTeX as I want:
```
$ pandoc --listings -o build/presentation.tex build/presentation.md
```
The `--listings` flag makes `pandoc` use LaTeX's `listings` package, which makes code look much prettier. Putting those two pieces together, I can generate LaTeX from the notebook.
Through a series of conversion steps, I was able to hide the parts I wanted to hide by using:
* LaTeX `raw_attribute` bits inside Jupyter Notebook's Markdown cells
* Tagging boring cells as `no_markdown`
* Jupyter's "nbconvert" to convert the notebook to Markdown
* Pandoc to convert the Markdown to LaTeX while interpolating the `raw_attribute` bits
* Beamer to convert the Pandoc output to a PDF slide-deck
* Beamer's beamerarticle mode
All of this is combined, with a little bit of duct tape in the form of a UNIX shell script, into slide-deck creation software. Ultimately, this pipeline works for me. With these tools, or similar ones, and some light UNIX scripting, you can make your own customized slide-creation pipeline, optimized to your needs and preferences.
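For what it's worth, the duct tape might look something like this minimal sketch. The file names come from the commands above; the `-t beamer -s` flags are my assumption, since the article's abbreviated `pandoc` command leaves the Beamer wrapping implicit:

```
#!/bin/sh
# Minimal sketch of the notebook-to-slides pipeline described above.
set -e
mkdir -p build
jupyter nbconvert presentation.ipynb --to markdown \
    --TagRemovePreprocessor.remove_cell_tags='{"no_markdown"}' \
    --output build/presentation.md
pandoc -t beamer -s --listings -o build/presentation.tex build/presentation.md
(cd build && pdflatex presentation.tex)   # produces build/presentation.pdf
```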
What is the most complicated pipeline you have ever used to build a slide deck? Let me know about it—and whether you would use it again—in the comments.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/presentation-jupyter-notebooks
Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/read_book_guide_tutorial_teacher_student_apaper.png?itok=_GOufk6N (Person reading a book and digital copy)
[2]: https://jupyter.org/
[3]: https://opensource.com/article/19/5/python-flake8
[4]: https://jupyterlab.readthedocs.io/en/stable/index.html
[5]: https://opensource.com/sites/default/files/uploads/jupyter_presentations_tags.png (Using tags in Jupyter Notebook)
[6]: https://creativecommons.org/licenses/by-sa/4.0/
[7]: https://opensource.com/article/18/5/markdown-slide-generators
[8]: https://opensource.com/article/19/1/create-presentations-beamer
[9]: https://pandoc.org/MANUAL.html#pandocs-markdown
[10]: https://pandoc.org/MANUAL.html#extension-raw_attribute


@ -0,0 +1,201 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Connect to WiFi from the Terminal in Ubuntu Linux)
[#]: via: (https://itsfoss.com/connect-wifi-terminal-ubuntu/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
How to Connect to WiFi from the Terminal in Ubuntu Linux
======
_**In this tutorial, you'll learn how to connect to a wireless network from the terminal in Ubuntu. This is particularly helpful if you are using Ubuntu server, where you don't have access to the regular [desktop environment][1].**_
I primarily use desktop Linux on my home computers. I also have multiple Linux servers for hosting It's FOSS and related websites, as well as open source software like [Nextcloud][2], [Discourse][3], Ghost, Rocket Chat, etc.
I use [Linode][4] for quickly deploying Linux servers in the cloud in minutes. But recently, I installed [Ubuntu server on my Raspberry Pi][5]. This is the first time I installed a server on a physical device, and I had to do some extra work to connect the Ubuntu server to WiFi via the command line.
In this tutorial, I'll show the steps to connect to WiFi using the terminal in Ubuntu Linux. You should:
* not be afraid of using terminal to edit files
* know the wifi access point name (SSID) and the password
### Connect to WiFi from terminal in Ubuntu
![][6]
It is easy when you are using Ubuntu desktop because you have a GUI to do it easily. It's not the same when you are using Ubuntu server and are restricted to the command line.
Ubuntu uses the [Netplan][7] utility for easily configuring networking. In Netplan, you create a YAML file with the description of the network interface, and with the help of the netplan command line tool, you generate all the required configuration.
Let's see how to connect to a wireless network from the terminal using Netplan.
#### Step 1: Identify your wireless network interface name
There are several ways to identify your network interface name. You can use the ip command, the (deprecated) ifconfig command, or check this file:
```
ls /sys/class/net
```
This should give you all the available networking interfaces (Ethernet, WiFi and loopback). The wireless network interface name starts with 'w' and is usually named something like wlanX or wlpxyz.
```
$ ls /sys/class/net
eth0 lo wlan0
```
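If you prefer the ip command mentioned above, it lists the same interfaces:

```
ip link show
```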
Take a note of this interface name. Youll use it in the next step.
#### Step 2: Edit the Netplan configuration file with the wifi interface details
The Netplan configuration file resides in the /etc/netplan directory. If you check the contents of this directory, you should see files like 01-network-manager-all.yml or 50-cloud-init.yaml.
If it is Ubuntu server, you should have the cloud-init file. For desktops, it should be the network-manager file.
The Network Manager on the Linux desktop allows you to choose a wireless network. You may hard-code the WiFi access point in its configuration. This could help you in some cases (like suspend) where the connection drops automatically.
Whichever file it is, open it for editing. I hope you are a tad bit [familiar with the Nano editor][8], because it comes pre-installed with Ubuntu.
```
sudo nano /etc/netplan/50-cloud-init.yaml
```
YAML files are very sensitive about spaces, indentation and alignment. Don't use tabs; use 4 (or 2, whichever is already used in the YAML file) spaces instead wherever you see an indentation.
Basically, you'll have to add the following lines, with the access point name (SSID) and its password in quotes (usually):
```
wifis:
wlan0:
dhcp4: true
optional: true
access-points:
"SSID_name":
password: "WiFi_password"
```
Again, keep the alignment as I have shown, or else the YAML file won't be parsed and it will throw an error.
Your complete configuration file may look like this:
```
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
ethernets:
eth0:
dhcp4: true
optional: true
version: 2
wifis:
wlan0:
dhcp4: true
optional: true
access-points:
"SSID_name":
password: "WiFi_password"
```
I find it strange that despite the message that changes will not persist across an instance reboot, it still works.
Anyway, generate the configuration using this command:
```
sudo netplan generate
```
And now apply this:
```
sudo netplan apply
```
If you are lucky, you should have the network connected. Try to ping a website or run the apt update command.
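For example, using the wlan0 interface name from earlier:

```
ping -c 4 ubuntu.com
ip addr show wlan0
```

The second command should show the IP address that DHCP assigned to the interface.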
However, things may not go that smoothly, and you may see some errors. Try some extra steps if that's the case.
#### Possible troubleshooting
It is possible that when you use the netplan apply command, you see an error in the output that reads something like this:
```
Failed to start netplan-wpa-wlan0.service: Unit netplan-wpa-wlan0.service not found.
Traceback (most recent call last):
File "/usr/sbin/netplan", line 23, in <module>
netplan.main()
File "/usr/share/netplan/netplan/cli/core.py", line 50, in main
self.run_command()
File "/usr/share/netplan/netplan/cli/utils.py", line 179, in run_command
self.func()
File "/usr/share/netplan/netplan/cli/commands/apply.py", line 46, in run
self.run_command()
File "/usr/share/netplan/netplan/cli/utils.py", line 179, in run_command
self.func()
File "/usr/share/netplan/netplan/cli/commands/apply.py", line 173, in command_apply
utils.systemctl_networkd('start', sync=sync, extra_services=netplan_wpa)
File "/usr/share/netplan/netplan/cli/utils.py", line 86, in systemctl_networkd
subprocess.check_call(command)
File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['systemctl', 'start', '--no-block', 'systemd-networkd.service', 'netplan-wpa-wlan0.service']' returned non-zero exit status 5.
```
It is possible that the wpa_supplicant service is not running. Run this command:
```
sudo systemctl start wpa_supplicant
```
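Before retrying, you can confirm that the service actually started with a standard systemd status check:

```
systemctl status wpa_supplicant
```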
Run netplan apply once again. If it fixes the issue, well and good. Otherwise, [shut down your Ubuntu system][9] using:
```
shutdown now
```
Start your Ubuntu system again, log in and generate and apply netplan once again:
```
sudo netplan generate
sudo netplan apply
```
It may show a warning (instead of an error) now. It is a warning, not an error. I checked the [running systemd services][10] and found that netplan-wpa-wlan0.service was already running. It probably showed the warning because it was already running and netplan apply updated the config file (even without any changes).
```
Warning: The unit file, source configuration file or drop-ins of netplan-wpa-wlan0.service changed on disk. Run 'systemctl daemon-reload' to reload units.
```
It is not critical, and you can check that the internet connection is already working by running apt update.
I hope you were able to connect to wifi using the command line in Ubuntu with the help of this tutorial. If you are still facing trouble with it, do let me know in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/connect-wifi-terminal-ubuntu/
作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/what-is-desktop-environment/
[2]: https://itsfoss.com/nextcloud/
[3]: https://www.discourse.org/
[4]: https://itsfoss.com/recommends/linode/
[5]: https://itsfoss.com/install-ubuntu-server-raspberry-pi/
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/connect-to-wifi-from-terminal-ubuntu.png?resize=800%2C450&ssl=1
[7]: https://netplan.io/
[8]: https://itsfoss.com/nano-editor-guide/
[9]: https://itsfoss.com/schedule-shutdown-ubuntu/
[10]: https://linuxhandbook.com/systemd-list-services/


@ -0,0 +1,210 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Install Ubuntu Server on a Raspberry Pi)
[#]: via: (https://itsfoss.com/install-ubuntu-server-raspberry-pi/)
[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/)
How to Install Ubuntu Server on a Raspberry Pi
======
The [Raspberry Pi][1] is the best-known [single-board computer][2]. Initially, the Raspberry Pi project was aimed at promoting the teaching of basic computer science in schools and in developing countries.
Its low cost, portability and very low power consumption made the models far more popular than anticipated. From weather stations to home automation, tinkerers have built so many [cool projects using Raspberry Pi][3].
The [4th generation of the Raspberry Pi][4] is equipped with features and processing power of a regular desktop computer. But this article is not about using the RPi as a desktop. Instead, I'll show you how to install Ubuntu server on a Raspberry Pi.
In this tutorial I will use a Raspberry Pi 4 and I will cover the following:
* Installing Ubuntu Server on a microSD card
* Setting up a wireless network connection on the Raspberry Pi
* Accessing your Raspberry Pi via SSH
![][5]
**Youll need the following things for this tutorial**:
* A micro SD card (8 GB or greater recommended)
* A computer (running Linux, Windows or macOS) with a micro SD card reader
* A Raspberry Pi 2, 3 or 4
* Good internet connection
* An HDMI cable for the Pi 2 & 3 and a micro HDMI cable for the Pi 4 (optional)
* A USB keyboard set (optional)
### Installing Ubuntu Server on a Raspberry Pi
![][6]
I have used Ubuntu for creating the Raspberry Pi SD card in this tutorial, but you may follow it on other Linux distributions, macOS and Windows as well. This is because the steps for preparing the SD card are the same with the Raspberry Pi Imager tool.
The Raspberry Pi Imager tool downloads the image of your [choice of Raspberry Pi OS][7] automatically. This means that you need a good internet connection for downloading around 1 GB of data.
#### Step 1: Prepare the SD Card with Raspberry Pi Imager
Make sure you have inserted the microSD card into your computer, and install the Raspberry Pi Imager on your computer.
You can download the Imager tool for your operating system from these links:
* [Raspberry Pi Imager for Ubuntu/Debian][8]
* [Raspberry Pi Imager for Windows][9]
* [Raspberry Pi Imager for MacOS][10]
Although I use Ubuntu, I won't use the Debian package listed above. Instead, I will install the snap package from the command line, a method that applies to a wider range of Linux distributions.
```
sudo snap install rpi-imager
```
Once you have installed Raspberry Pi Imager tool, find and open it and click on the “CHOOSE OS” menu.
![][11]
Scroll across the menu and click on “Ubuntu” (Core and Server Images).
![][12]
From the available images, I chose the Ubuntu 20.04 LTS 64-bit image. If you have a Raspberry Pi 2, you are limited to the 32-bit image.
**Important Note: If you use the latest Raspberry Pi 4 8 GB RAM model, you should choose the 64-bit OS; otherwise, you will only be able to use 4 GB of RAM.**
![][13]
Select your microSD card from the “SD Card” menu, and then click on “WRITE”.
![][14]
If it shows some error, try writing it again. It will now download the Ubuntu server image and write it to the micro SD card.
It will notify you when the process is completed.
![][15]
#### Step 2: Add WiFi support to Ubuntu server
Once the micro SD card flashing is done, you are almost ready to use it. There is one thing that you may want to do before using it, and that is to add Wi-Fi support.
With the SD card still inserted in the card reader, open the file manager and locate the “system-boot” partition on the card.
The file that you are looking for and need to edit is named `network-config`.
![][16]
This process can be done on Windows and macOS too. Edit the **`network-config`** file as already mentioned to add your Wi-Fi credentials.
Firstly, uncomment (remove the hash “#” at the beginning of) the lines shown in the rectangular box.
After that, replace myhomewifi with your Wi-Fi network name enclosed in quotation marks, such as “itsfoss”, and replace “S3kr1t” with the Wi-Fi password enclosed in quotation marks, such as “12345679”.
![][17]
It may look like this:
```
wifis:
wlan0:
dhcp4: true
optional: true
access-points:
"your wifi name":
password: "your_wifi_password"
```
Save the file and insert the micro SD card into your Raspberry Pi. During the first boot, if your Raspberry Pi fails to connect to the Wi-Fi network, simply reboot your device.
#### Step 3: Use Ubuntu server on Raspberry Pi (if you have dedicated monitor, keyboard and mouse for Raspberry Pi)
If you have got an additional set of mouse, keyboard and a monitor for the Raspberry Pi, you can easily use it like any other computer (but without a GUI).
Simply insert the micro SD card into the Raspberry Pi, plug in the monitor, keyboard and mouse. Now [turn on your Raspberry Pi][18]. It will present a TTY login screen (a black terminal screen) and ask for the username and password.
* Default username: ubuntu
* Default password: ubuntu
When prompted, use “**ubuntu**” for the password. Right after a successful login, [Ubuntu will ask you to change the default password][19].
Enjoy your Ubuntu Server!
#### Step 3: Connect remotely to your Raspberry Pi via SSH (if you dont have monitor, keyboard and mouse for Raspberry Pi)
It is okay if you dont have a dedicated monitor to be used with Raspberry Pi. Who needs a monitor with a server when you can just SSH into it and use it the way you want?
**On Ubuntu and Mac OS**, an SSH client is usually already installed. To connect remotely to your Raspberry Pi, you need to discover its IP address. Check the [devices connected to your network][20] and see which one is the Raspberry Pi.
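If your Linux machine has nmap installed, a ping scan of your subnet is one quick way to spot the Pi. The 192.168.1.0/24 range below is only an example; adjust it to your own network:

```
sudo nmap -sn 192.168.1.0/24
```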
Since I don't have access to a Windows machine, Windows users can refer to a comprehensive guide provided by [Microsoft][21].
Open a terminal and run the following command:
```
ssh ubuntu@raspberry_pi_ip_address
```
You will be asked to confirm the connection with the message:
```
Are you sure you want to continue connecting (yes/no/[fingerprint])?
```
Type “yes” and press the Enter key.
![][22]
When prompted, use “ubuntu” for the password, as mentioned earlier. You'll be asked to change the password, of course.
Once done, you will be automatically logged out and you have to reconnect, using your new password.
Your Ubuntu server is up and running on a Raspberry Pi!
**Conclusion**
Installing Ubuntu Server on a Raspberry Pi is an easy process, and it comes largely pre-configured, which makes using it a pleasant experience.
I have to say that among all the [operating systems that I tried on my Raspberry Pi][7], Ubuntu Server was the easiest to install. I am not exaggerating. Check my guide on [installing Arch Linux on Raspberry Pi][23] for reference.
I hope this guide helped you in installing Ubuntu server on your Raspberry Pi as well. If you have questions or suggestions, please let me know in the comment section.
--------------------------------------------------------------------------------
via: https://itsfoss.com/install-ubuntu-server-raspberry-pi/
作者:[Dimitrios Savvopoulos][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/dimitrios/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/
[2]: https://itsfoss.com/raspberry-pi-alternatives/
[3]: https://itsfoss.com/raspberry-pi-projects/
[4]: https://itsfoss.com/raspberry-pi-4/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/Ubuntu-Server-20.04.1-LTS-aarch64.png?resize=800%2C600&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/ubuntu-server-raspberry-pi.png?resize=800%2C450&ssl=1
[7]: https://itsfoss.com/raspberry-pi-os/
[8]: https://downloads.raspberrypi.org/imager/imager_amd64.deb
[9]: https://downloads.raspberrypi.org/imager/imager.exe
[10]: https://downloads.raspberrypi.org/imager/imager.dmg
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/raspberry-pi-imager.png?resize=800%2C600&ssl=1
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/raspberry-pi-imager-choose-ubuntu.png?resize=800%2C600&ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/raspberry-pi-imager-ubuntu-server.png?resize=800%2C600&ssl=1
[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/raspberry-pi-imager-sd-card.png?resize=800%2C600&ssl=1
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/ubuntu-server-installed-raspberry-pi.png?resize=799%2C506&ssl=1
[16]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/ubuntu-server-pi-network-config.png?resize=800%2C565&ssl=1
[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Ubuntu-server-wifi.png?resize=800%2C600&ssl=1
[18]: https://itsfoss.com/turn-on-raspberry-pi/
[19]: https://itsfoss.com/change-password-ubuntu/
[20]: https://itsfoss.com/how-to-find-what-devices-are-connected-to-network-in-ubuntu/
[21]: https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse
[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/ubuntu-server-change-password.png?resize=800%2C600&ssl=1
[23]: https://itsfoss.com/install-arch-raspberry-pi/


@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your SSH connections with this open source tool)
[#]: via: (https://opensource.com/article/20/9/ssh-connection-manager)
[#]: author: (Kenneth Aaron https://opensource.com/users/flyingrhino)
Manage your SSH connections with this open source tool
======
This open source project makes connecting to any SSH session quick and
seamless, and downright relaxing.
![Penguins][1]
OpenSSH is widely used, but there isn't a well-known connection manager, so I developed the ncurses SSH connection manager (`nccm`) to fill that significant gap in the process. `nccm` is a simple SSH connection manager with an ultra-portable terminal interface (written in ncurses, as the project name suggests). And best of all, it's straightforward to use. With `nccm`, you can connect to an SSH session of your choice with minimum distraction and minimal keystrokes.
### Install nccm
The quickest way to get going is to clone the project from its [Git repository][2]:
```
$ git clone https://github.com/flyingrhinonz/nccm nccm.git
```
In the `nccm.git/nccm` directory, there are two files—`nccm` itself and an `nccm.yml` configuration file.
First, copy the nccm script to `/usr/local/bin/` and grant it executable permissions. You can do this in one step with the `install` command:
```
$ sudo install -m755 nccm --target-directory /usr/local/bin
```
The `nccm.yml` file can be copied to any one of these locations, and is loaded from the first location found:
* `~/.config/nccm/nccm.yml`
* `~/.nccm.yml`
* `~/nccm.yml`
* `/etc/nccm.yml`
The `nccm` command requires Python 3 to be installed on your machine, which shouldn't be a problem on most Linux boxes. Most Python library dependencies are already present as part of Python 3; however, there are some YAML dependencies and utilities you must install.
If you don't have `pip` installed, you can install it with your package manager. And while you're at it, install the `yamllint` application to help you validate the `nccm.yml` file.
On Debian or similar, use `apt`:
```
$ sudo apt install python3-pip yamllint
```
On Fedora or similar, use `dnf`:
```
$ sudo dnf install python3-pip yamllint
```
You also need PyYAML, which you can install with the `pip` command:
```
$ pip3 install --user PyYAML
```
### Using nccm
Before starting, edit the `nccm.yml` file and add your SSH configuration. Formatting YAML is easy, and there are examples provided in the file. Just follow the structure—provide the connection name at the beginning of the line, with config items indented two spaces. Don't forget the colons—these are part of the YAML language.
Don't worry about ordering your SSH session blocks in any specific way, because `nccm` gives you "sort by" options within the program.
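For illustration, an entry might look something like the sketch below. The key names here (user, address, comment) are placeholders based on that description, so defer to the examples shipped in `nccm.yml`:

```
# Hypothetical entry - check the bundled nccm.yml for the real key names
home server:
  user: admin
  address: 192.168.1.10
  comment: main home lab box
```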
Once you've finished editing, check your work with `yamllint`:
```
$ yamllint ~/.config/nccm/nccm.yml
```
If no errors are returned, then you've formatted your file correctly, and it's safe to continue.
If `nccm` is accessible [from your path][3] and is executable, then typing `nccm` is all that's required to launch the TUI (terminal user interface). If you see Python 3 exceptions, check whether you have satisfied the dependencies. Any exceptions should mention any package that's missing.
As long as you're using the YAML config file without changing `nccm_config_control mode`, then you can use these keyboard controls:
* Up/Down arrows - Move the marker the traditional way
* Home/End - Jump marker to list first/last entry
* PgUp/PgDn - Page up/down in the list
* Left/Right arrows - Scroll the list horizontally
* TAB - Moves the cursor between text boxes
* Enter - Connect to the selected entry
* Ctrl-h - Display this help menu
* Ctrl-q or Ctrl-c - Quit the program
* F1-F5 or !@#$% - Sort by respective column (1-5)
Use keys F1 through F5 to sort by columns 1 through 5. If your desktop captures F-key input, you can instead sort by pressing **!@#$%** in the "Conn" text box. The display shows four visible columns, but the username and server address are treated as separate columns for sorting purposes, giving five sort controls. You can reverse the order by pressing the same "sort" key a second time. A connection can be made by pressing **Enter** on the highlighted line.
![nccm screenshot terminal view][4]
Typing text into the "Filter" text box filters the output with an "and" function between everything entered. This is case-insensitive, and a blank space delimits entries. The same is true for the "Conn" text box, but pressing **Enter** here connects to that specific entry number.
There are a few more interesting features to discover, such as focus mode, but I'll leave it up to you to explore the details. See the project page or built-in help for more details.
The config YAML file is well-documented, so you'll know how to edit the settings to make `nccm` work best for you. The `nccm` program is highly commented, too, so you may wish to fork or mod it to add more features. Pull requests are welcome!
### Relax into SSH with nccm
I hope this program serves you well and is as useful to you as it is to me. Thanks for being part of the open source community, and please accept `nccm` as my contribution to the ongoing efforts toward seamless, painless, and efficient computing experiences.
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/ssh-connection-manager
作者:[Kenneth Aaron][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/flyingrhino
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux-penguins.png?itok=yKOpaJM_ (Penguins)
[2]: https://github.com/flyingrhinonz/nccm
[3]: https://opensource.com/article/17/6/set-path-linux
[4]: https://opensource.com/sites/default/files/uploads/nccm_screenshot.png (nccm screenshot terminal view)


@ -0,0 +1,138 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using the Linux stat command to create flexible file listings)
[#]: via: (https://www.networkworld.com/article/3573802/using-the-linux-stat-command-to-create-flexible-file-listings.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
Using the Linux stat command to create flexible file listings
======
The **stat** command supplies a lot of detailed information on files.
It provides not just the date/time of the most recent file changes, but also shows when files were most recently accessed and permissions changed. It tells you the file size in both bytes and blocks. It displays the inode being used by the file along with the file type. It includes the file owner and the associated user group both by name and UID/GID. It displays file permissions in both the “rwx” (referred to as the “human-readable” format) and numerically. On some systems, it might even include the date and time that a file was created (called its “birth”).
[[Get regularly scheduled insights by signing up for Network World newsletters.]][1]
In addition to providing all this information, the **stat** command can also be used to create file listings. These listings are extremely flexible in that you can choose to include any or all of the information described above.
To generate a custom listing, you just need to use the **stat** command's **-c** (or **--format**) option and specify the fields you want included. For example, to create a listing that shows file permissions in both of the available formats, use this command:
```
$ stat -c '%n %a %A' my*
my.banner 664 -rw-rw-r--
mydir 775 drwxrwxr-x
myfile 664 -rw-rw-r--
myjunk 777 lrwxrwxrwx
mykey 664 -rw-rw-r--
mylog 664 -rw-rw-r--
myscript 755 -rwxr-xr-x
mytext 664 -rw-rw-r--
mytext.bak 664 -rw-rw-r--
mytwin 50 -rw-r-----
mywords 664 -rw-rw-r--
```
As you can see in the example above, **%n** represents the file name, **%a** the permissions in octal and **%A** the permissions in the **rwx** form. A complete list is shown below.
To create an alias for this command, type this or add this definition to your **.bashrc** file:
```
$ alias ls_perms="stat -c '%n %a %A'"
```
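Open a new shell (or source your **.bashrc**) and the alias works like any other command:

```
$ ls_perms my.banner mydir
my.banner 664 -rw-rw-r--
mydir 775 drwxrwxr-x
```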
To create a listing that is very close to the long listing provided by **ls -l**, do this:
```
$ stat -c '%A %h %U %G %s %y %n' my*
-rw-rw-r-- 1 shs shs 255 2020-04-01 16:20:00.899374215 -0400 my.banner
drwxrwxr-x 2 shs shs 4096 2020-09-07 12:50:20.224470760 -0400 mydir
-rw-rw-r-- 1 shs shs 6 2020-05-16 11:12:00.460355387 -0400 myfile
lrwxrwxrwx 1 shs shs 11 2020-05-28 18:49:21.666792608 -0400 myjunk
-rw-rw-r-- 1 shs shs 655 2020-01-14 15:56:08.540540488 -0500 mykey
-rw-rw-r-- 1 shs shs 8 2020-03-04 17:13:21.406874246 -0500 mylog
-rwxr-xr-x 1 shs shs 201 2020-09-07 12:50:41.316745867 -0400 myscript
-rw-rw-r-- 1 shs shs 40 2019-06-06 08:54:09.538663323 -0400 mytext
-rw-rw-r-- 1 shs shs 24 2019-06-06 08:48:59.652712578 -0400 mytext.bak
-rw-r----- 2 shs shs 228 2019-04-12 19:37:12.790284604 -0400 mytwin
-rw-rw-r-- 1 shs shs 1983 2020-08-10 14:39:57.164842370 -0400 mywords
```
The differences include: 1) no attempt to line up the fields in discernible columns, 2) the date in a _**yyyy-mm-dd**_ format, 3) considerably more precision in the time field and 4) the addition of the time zone (-0400 is EDT).
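As with the permissions listing, this format is a natural candidate for an alias; the name **ls_long** is just a suggestion:

```
$ alias ls_long="stat -c '%A %h %U %G %s %y %n'"
```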
If you want to see files listed according to the date they were most recently accessed (e.g., displayed with the **cat** command), use a command like this:
```
$ stat -c '%n %x' my* | sort -k2
mytwin 2019-04-22 11:25:20.656828964 -0400
mykey 2020-08-20 16:10:34.479324431 -0400
mylog 2020-08-20 16:10:34.527325066 -0400
myfile 2020-08-20 16:10:57.815632794 -0400
mytext.bak 2020-08-20 16:10:57.935634379 -0400
mytext 2020-08-20 16:15:42.323391985 -0400
mywords 2020-08-20 16:15:43.479407259 -0400
myjunk 2020-09-07 10:04:26.543980300 -0400
myscript 2020-09-07 12:50:41.312745815 -0400
my.banner 2020-09-07 13:22:38.105826116 -0400
mydir 2020-09-07 14:53:10.171867194 -0400
```
The field options available for listing file details with **stat** include:
* %a  access rights in octal (note '#' and '0' printf flags)
* %A  access rights in human readable form
* %b  number of blocks allocated (see %B)
* %B  the size in bytes of each block reported by %b
* %C  SELinux security context string
* %d  device number in decimal
* %D  device number in hex
* %f  raw mode in hex
* %F  file type
* %g  group ID of owner
* %G  group name of owner
* %h  number of hard links
* %i  inode number
* %m  mount point
* %n  file name
* %N  quoted file name with dereference if symbolic link
* %o  optimal I/O transfer size hint
* %s  total size, in bytes
* %t  major device type in hex, for character/block device special files
* %T  minor device type in hex, for character/block device special files
* %u  user ID of owner
* %U  user name of owner
* %w  time of file birth, human-readable; - if unknown
* %W  time of file birth, seconds since Epoch; 0 if unknown
* %x  time of last access, human-readable
* %X  time of last access, seconds since Epoch
* %y  time of last data modification, human-readable
* %Y  time of last data modification, seconds since Epoch
* %z  time of last status change, human-readable
* %Z  time of last status change, seconds since Epoch
These field choices are all listed in the man page and you can choose any, though creating a few aliases with your preferred details should save you a lot of trouble. Some options, like the SELinux security context string, will not be available unless that option is in use on the system. File birth is only available if your system retains that information.
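For instance, combining **%s** (size in bytes) and **%n** (name) with a numeric sort lists the example files from smallest to largest:

```
$ stat -c '%s %n' my* | sort -n
6 myfile
8 mylog
11 myjunk
24 mytext.bak
40 mytext
201 myscript
228 mytwin
255 my.banner
655 mykey
1983 mywords
4096 mydir
```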
Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.
--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3573802/using-the-linux-stat-command-to-create-flexible-file-listings.html
作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world


@ -1,115 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (silentdawn-zz)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why Sorting is O(N log N))
[#]: via: (https://theartofmachinery.com/2019/01/05/sorting_is_nlogn.html)
[#]: author: (Simon Arneaud https://theartofmachinery.com)
为什么排序的复杂度为 O(N log N)
======
基本上所有正经的算法教材都会讲解像快速排序(quicksort)、堆排序(heapsort)这样的排序算法有多快,而且不需要多复杂的数学就能证明:它们已经渐近地达到了你所能达到的最快速度。
### 关于标记的说明
大多数计算机科学家使用大写字母 O 标记来表示“渐近地趋近,至多相差一个常数比例因子”,这与数学专业所指代的意义是有所区别的。这里我使用的大 O 标记的含义与计算机教材相同,但不会与其它数学符号混用。
## 基于比较的排序
先来看个特例即每次比较两个值大小的算法quicksort、堆排序及其它通用排序算法基本上都是这样的。这种思想后续可以扩展至所有排序算法。
### 一个简单的最坏情况计数论证
假设有 4 个互不相等的数,且顺序随机,那么,可以通过只比较一对数字完成排序吗?显然不能,证明如下:根据定义,要对该数组排序,需要按照某种顺序重新排列数字。那么究竟有多少种可能的排列呢?第一个数字可以放在四个位置中的任意一个,第二个数字可以放在剩下三个位置中的任意一个,第三个数字可以放在剩下两个位置中的任意一个,最后一个数字只有剩下的一个位置可选。这样,共有 $4×3×2×1 = 4! = 24$ 种排列可供选择。通过一次比较大小,只能产生两种可能的结果。如果列出所有的排列,那么“从小到大”排序对应的可能是第 8 种排列,按“从大到小”排序对应的可能是第 22 种排列,但无法知道什么时候需要的是其它 22 种排列。
通过 2 次比较,可以得到 2×2=4 种可能的结果,这仍然不够。只要比较的次数少于 5(对应 $2^{5} = 32$ 种输出),就无法完成对 4 个随机次序的数字的排序。如果 $W(N)$ 是最差情况下对 $N$ 个不同元素进行排序所需要的比较次数,那么:
$$
2^{W(N)} \geq N!
$$
两边取以 2 为底的对数,得:
$$
W(N) \geq \log_{2}{N!}
$$
$N!$ 的增长近似于 $N^{N}$(参阅 [Stirling 公式][1]),那么:
$$
W(N) \succeq \log N^{N} = N\log N
$$
这就是最差情况下从输出计数的角度得出的 $O(N\log N)$ 下限。
### 从信息论角度看平均情况
使用一些信息论知识,就可以从上面的讨论中得到一个更有力的结论。下面,使用排序算法作为信息传输的编码器:
1. 任取一个数,比如 15
2. 从 4 个数字的排列列表中查找第 15 种排列
3. 对这种排列运行排序算法,记录所有的“大”、“小”比较结果
4. 用二进制编码发送比较结果
5. 接收端重新逐步执行发送端的排序算法,需要的话可以引用发送端的比较结果
6. 现在接收端就可以知道发送端如何重新排列数字以按照需要排序,接收端可以对排列进行逆算,得到 4 个数字的初始顺序
7. 接收端在排列表中检索发送端的原始排列,指出发送端发送的是 15
确实,这有点奇怪,但确实可行。这意味着排序算法遵循与编码方案相同的定律,包括“不存在通用的数据压缩算法”这一理论结论。算法每次比较都发送 1 比特的比较结果编码数据,根据信息论,比较的次数至少是能表示所有数据所需的二进制位数。用更专业的术语来说,[平均所需的最小比较次数是输入数据的香农熵,以比特为单位][2]。熵是对信息等不可预测量的数学度量。
包含 $N$ 个元素的数组,在元素次序随机且无偏时熵最大,其值为 $\log_{2}{N!}$ 比特。这证明了 $O(N\log N)$ 是基于比较的排序处理任意输入时所需比较次数的最优平均值。
以上都是理论,那么实际的排序算法比较次数如何呢?下面是一个数组排序所需比较次数均值的图。我比较的是理论值与快速排序及 [Ford-Johnson 合并插入排序][3] 的表现。后者的设计目标就是最小化比较次数(整体上没比快速排序快多少,因为生活中除了最小化比较次数,还有更重要的事情)。合并插入排序是 1959 年提出的,后来又经过调整,进一步减少了比较次数,但从图中可以看出,它基本上已经达到了最优。
![随机排列 100 个元素所需的平均排序次数图。最下面的线是理论值,约 1% 处的是合并插入算法,原始 quicksort 大约在 25% 处。][4]
一点点理论导出这么实用的结论,这感觉真棒!
### 小结
证明了:
1. 如果数组可以是任意顺序,在最坏情况下至少需要 $O(N\log N)$ 次比较。
2. 数组的平均比较次数最少是数组的熵,对随机输入而言,其值是 $O(N\log N)$ 。
注意,第 2 个结论允许基于比较的算法优于 $O(N\log N)$,前提是输入是低熵的(换言之,是部分可预测的)。如果输入包含很多有序的子序列,那么合并排序的性能接近 $O(N)$。如果输入除了个别元素错位之外基本有序,插入排序的性能接近 $O(N)$。在最差情况下,以上算法的性能表现都不超出 $O(N\log N)$。
## 一般排序算法
基于比较的排序在实践中是个有趣的特例,但计算机的 [`CMP`][5] 指令与其它指令相比,并没有任何理论上的区别。在下面两条的基础上,前面两种情形都可以扩展至任意排序算法:
1. 大多数计算机指令有多于两个的输出,但输出的数量仍然是有限的。
2. 一条指令有限的输出意味着一条指令只能处理有限的熵。
这给出了 $O(N\log N)$ 对应的指令下限。任何物理可实现的计算机都只能在给定时间内执行有限数量的指令,所以算法的执行时间也有对应 $O(N\log N)$ 的下限。
### 什么是更快的算法?
一般意义上的 $O(N\log N)$ 下限,放在实践中来看,如果听人说到任何更快的算法,你要知道,它肯定以某种方式“作弊”了,其中肯定有圈套,即它不是一个可以处理任意大数组的通用排序算法。可能它是一个有用的算法,但最好看明白它字里行间隐含的东西。
一个广为人知的例子是基数排序(radix sort)算法,它经常被称为 $O(N)$ 排序算法,但它只能处理所有数字都能放入 $k$ 个二进制位的情况,所以实际上它的性能是 $O({kN})$。
什么意思呢?假如你用的 8 位计算机,那么 8 个二进制位可以表示 $2^{8} = 256$ 个不同的数字,如果数组有上千个数字,那么其中必有重复。对有些应用而言这是可以的,但对有些应用就必须用 16 个二进制位来表示,16 个二进制位可以表示 $2^{16} = 65,536$ 个不同的数字。32 个二进制位可以表示 $2^{32} = 4,294,967,296$ 个不同的数字。随着数组长度的增长,所需要的二进制位数也在增长。要表示 $N$ 个不同的数字,需要 $k \geq \log_{2}N$ 个二进制位。所以,只有允许数组中存在重复的数字时,$O({kN})$ 才优于 $O(N\log N)$。
一般意义上输入数据的 $O(N\log N)$ 的性能已经说明了全部问题。这个讨论不那么有趣,因为很少需要在 32 位计算机上对几十亿整数进行排序,[如果有谁的需求超出了 64 位计算机的极限,他一定没有说出他的全部][6]。
--------------------------------------------------------------------------------
via: https://theartofmachinery.com/2019/01/05/sorting_is_nlogn.html
作者:[Simon Arneaud][a]
选题:[lujun9972][b]
译者:[silentdawn-zz](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://theartofmachinery.com
[b]: https://github.com/lujun9972
[1]: http://hyperphysics.phy-astr.gsu.edu/hbase/Math/stirling.html
[2]: https://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
[3]: https://en.wikipedia.org/wiki/Merge-insertion_sort
[4]: /images/sorting_is_nlogn/sorting_algorithms_num_comparisons.svg
[5]: https://c9x.me/x86/html/file_module_x86_id_35.html
[6]: https://sortbenchmark.org/


@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Build a remote management console using Python and Jupyter Notebooks)
[#]: via: (https://opensource.com/article/20/9/remote-management-jupyter)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
使用 Python 和 Jupyter Notebooks 构建一个远程管理控制台
======
把 Jupyter 变成一个远程管理控制台。
![Computer laptop in space][1]
SSH 是一个强大的远程管理工具,但它缺少一些锦上添花的功能。编写一个成熟的远程管理控制台听起来好像是一件很费劲的事情。当然,开源社区中肯定已经有人写出了这样的东西。
他们已经写了,它的名字是 [Jupyter][2]。你可能会认为 Jupyter 是那些数据科学家用来分析一周内的广告点击趋势之类的工具。这并没有错,他们确实是这样做的,而且它是一个很好的工具。但这只是它的表面。
### 关于 SSH 端口转发
有时,你可以通过 22 端口进入一台服务器,但没有理由认为你还能连接到其它任何端口。也许你是通过另一台有更多访问权限的“堡垒机”来访问 SSH 的,或者有限制主机或端口的网络防火墙。当然,限制可访问的 IP 范围是有充分理由的。SSH 是安全的远程管理协议,但允许任何人连接到任何端口是完全没有必要的。
这里有一个替代方案:运行一个简单的 SSH 端口转发命令将本地端口转发到一个_远程本地_连接上。当你运行像 `-L 8111:127.0.0.1:8888` 这样的 SSH 端口转发命令时,你是在告诉 SSH 将你的_本地_端口 `8111` 转发到_远程_主机 `127.0.0.1:8888`。远程主机认为 `127.0.0.1` 就是它本身。
就像在_芝麻街_一样“这里”here是一个微妙的词。
地址 `127.0.0.1` 就是你告诉网络的“这里”。
### 实际动手学习
这可能听起来很混乱,但运行比解释它更简单。
```
$ ssh -L 8111:127.0.0.1:8888 moshez@172.17.0.3
Linux 6ad096502e48 5.4.0-40-generic #44-Ubuntu SMP Tue Jun 23 00:01:04 UTC 2020 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Aug  5 22:03:25 2020 from 172.17.0.1
$ jupyter/bin/jupyter lab --ip=127.0.0.1
[I 22:04:29.771 LabApp] JupyterLab application directory is /home/moshez/jupyter/share/jupyter/lab
[I 22:04:29.773 LabApp] Serving notebooks from local directory: /home/moshez
[I 22:04:29.773 LabApp] Jupyter Notebook 6.1.1 is running at:
[I 22:04:29.773 LabApp] <http://127.0.0.1:8888/?token=df91012a36dd26a10b4724d618b2e78cb99013b36bb6a0d1>
<MORE STUFF SNIPPED>
```
端口转发 `8111``127.0.0.1`,并在远程主机上启动 Jupyter它在 `127.0.0.1:8888` 上监听。
现在你要明白Jupyter 在撒谎。它认为你需要连接到 `8888` 端口,但你把它转发到 `8111` 端口。所以,当你把 URL 复制到浏览器后,但在点击回车之前,把端口从 `8888` 修改为 `8111`
![Jupyter remote management console][3]
(Moshe Zadka, [CC BY-SA 4.0][4])
这就是你的远程管理控制台。如你所见,底部有一个“终端”图标。点击它可以启动一个终端。
![Terminal in Jupyter remote console][5]
(Moshe Zadka, [CC BY-SA 4.0][4])
你可以运行一条命令。创建一个文件会在旁边的文件浏览器中显示出来。你可以点击该文件,在本地的编辑器中打开它。
![Opening a file][6]
(Moshe Zadka, [CC BY-SA 4.0][4])
你还可以下载、重命名或删除文件:
![File options in Jupyter remote console][7]
(Moshe Zadka, [CC BY-SA 4.0][4])
点击**上箭头**就可以上传文件了。为什么不上传上面的截图呢?
![Uploading a screenshot][8]
(Moshe Zadka, [CC BY-SA 4.0][4])
最后说个小功能Jupyter 可以让你直接通过双击远程图像查看。
哦,对了,如果你想用 Python 做系统自动化,还可以用 Jupyter 打开笔记本。
所以,下次你需要远程管理防火墙环境的时候,为什么不使用 Jupyter 呢?
--------------------------------------------------------------------------------
via: https://opensource.com/article/20/9/remote-management-jupyter
作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_space_graphic_cosmic.png?itok=wu493YbB (Computer laptop in space)
[2]: https://jupyter.org/
[3]: https://opensource.com/sites/default/files/uploads/output_1_0.png (Jupyter remote management console)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://opensource.com/sites/default/files/uploads/output_3_0.png (Terminal in Jupyter remote console)
[6]: https://opensource.com/sites/default/files/uploads/output_5_0.png (Opening a file)
[7]: https://opensource.com/sites/default/files/uploads/output_7_0.png (File options in Jupyter remote console)
[8]: https://opensource.com/sites/default/files/uploads/output_9_0.png (Uploading a screenshot)