mirror of
https://github.com/LCTT/TranslateProject.git
synced 2025-01-25 23:11:02 +08:00
commit
c025d7e37d
182
published/20200722 SCP user-s migration guide to rsync.md
Normal file
@ -0,0 +1,182 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12575-1.html)
[#]: subject: (SCP user’s migration guide to rsync)
[#]: via: (https://fedoramagazine.org/scp-users-migration-guide-to-rsync/)
[#]: author: (chasinglogic https://fedoramagazine.org/author/chasinglogic/)

SCP user's migration guide to rsync
======

![](https://img.linux.net.cn/data/attachment/album/202009/03/102942u7rxf79a7rsr9txz.jpg)

In the [SSH 8.0 pre-release announcement][2], the OpenSSH project stated that it considers the scp protocol outdated, inflexible, and not readily fixable, and went on to recommend `sftp` or `rsync` for file transfers instead.

Many users grew up on the `scp` command, however, and so are not familiar with `rsync`. Additionally, `rsync` can do much more than just copy files, which may give novices the impression that it is complicated and hard to master. In particular, the flags of the `scp` command map, by and large, directly onto the flags of the `cp` command, while the flags of `rsync` diverge from them considerably.

This article provides an introduction and transition guide for anyone familiar with `scp`. Let's jump into the most common scenarios: copying files and copying directories.

### Copying files

For copying a single file, the `scp` and `rsync` commands are effectively equivalent. Let's say you need to ship `foo.txt` to your home directory on a server named `server`:

```
$ scp foo.txt me@server:/home/me/
```

The equivalent `rsync` command requires only that you type `rsync` in place of `scp`:

```
$ rsync foo.txt me@server:/home/me/
```

### Copying directories

For copying directories, things diverge quite a bit, which probably explains why `rsync` is seen as more complex than `scp`. If you want to copy the directory `bar` to `server`, aside from specifying the `ssh` information, the corresponding `scp` command looks exactly like the `cp` command:

```
$ scp -r bar/ me@server:/home/me/
```

With `rsync`, there are more considerations, because it is a more powerful tool. First, let's look at the simplest form:

```
$ rsync -r bar/ me@server:/home/me/
```

Looks simple, right? For the simple case of a directory that contains only directories and regular files, this works fine. However, `rsync` cares a lot about sending files exactly as they are on the host system. Let's create a slightly more complex, but not uncommon, example:

```
# Create a multi-level directory structure
$ mkdir -p bar/baz
# Create a file at the root of it
$ touch bar/foo.txt
# Now create a symlink that points back to that file
$ cd bar/baz
$ ln -s ../foo.txt link.txt
# Return to our original location
$ cd -
```

We now have a directory tree that looks like this:

```
bar
├── baz
│   └── link.txt -> ../foo.txt
└── foo.txt

1 directory, 2 files
```

If we try the commands from above to copy `bar`, we'll notice very different (and surprising) results. First, let's try `scp`:

```
$ scp -r bar/ me@server:/home/me/
```

If you `ssh` into your server and look at the directory tree of `bar`, you'll notice an important and subtle difference from your host system:

```
bar
├── baz
│   └── link.txt
└── foo.txt

1 directory, 2 files
```

Note that `link.txt` is no longer a symlink; it is now a full-blown copy of `foo.txt`. This might be surprising behavior if you're used to `cp`: if you copied the `bar` directory with `cp -r`, you would get a new directory with the exact same symlinks that `bar` has. Now if we try the same `rsync` command from before, we get a warning:

```
$ rsync -r bar/ me@server:/home/me/
skipping non-regular file "bar/baz/link.txt"
```

`rsync` warns us that it found a non-regular file and is skipping it. Because you didn't tell it to copy symlinks, it is ignoring them. `rsync` has an extensive section in its manual titled "SYMBOLIC LINKS" that explains all of the possible behavior options available to you. For our example, we need to add the `--links` flag:

```
$ rsync -r --links bar/ me@server:/home/me/
```

On the remote server, we see that the symlink was copied over as a symlink. Note that this is different from how `scp` copied the symlink:

```
bar/
├── baz
│   └── link.txt -> ../foo.txt
└── foo.txt

1 directory, 2 files
```

To save some typing and take advantage of more file-preserving options, use the `--archive` (`-a` for short) flag whenever copying a directory. The archive flag does what most people expect, as it enables recursive copying, symlink copying, and many other options.

```
$ rsync -a bar/ me@server:/home/me/
```

The `rsync` man page has an in-depth explanation of what the archive flag enables, if you're curious.

### Caveats

Using `rsync` does come with one caveat: it's easier to specify a non-standard SSH port with `scp` than with `rsync`. If `server` used port 8022 for SSH connections, for instance, the commands would look like this:

```
$ scp -P 8022 foo.txt me@server:/home/me/
```

With `rsync`, you have to specify the "remote shell" command to use; by default it is `ssh`. You do so with the `-e` flag:

```
$ rsync -e 'ssh -p 8022' foo.txt me@server:/home/me/
```

`rsync` does use your `ssh` configuration, however, so if you connect to this server frequently, you can add the following snippet to your `~/.ssh/config` file. Then you no longer need to specify the port for either the `rsync` or `ssh` commands!

```
Host server
    Port 8022
```

Alternatively, if every server you connect to runs on the same non-standard port, you can configure the `RSYNC_RSH` environment variable instead.

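As a minimal sketch (assuming the same non-standard port 8022 as above), exporting the variable makes every subsequent `rsync` invocation in that shell use it:

```
$ export RSYNC_RSH='ssh -p 8022'
$ rsync foo.txt me@server:/home/me/    # now connects over port 8022
```
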
### Why you should still switch to rsync

Now that we've covered the everyday use cases and the caveats of switching from `scp` to `rsync`, let's take some time to explore why you would probably want to switch anyway. Many people made the switch long ago precisely because of these advantages.

#### On-the-fly compression

If you have a slow or limited network connection between you and your server, `rsync` can spend extra CPU cycles to save network bandwidth. It does this by compressing data on the fly before sending it. Compression is enabled with the `-z` flag.

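In practice this is usually combined with the archive flag from earlier. A minimal sketch, reusing the `bar` directory from the examples above:

```
$ rsync -az bar/ me@server:/home/me/
```
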
#### Delta transfers

`rsync` also only copies a file if the target file is different from the source file, and this works recursively through directories. For instance, if you took our last `bar` example and re-ran that `rsync` command multiple times, no data would be transferred after the initial transfer. Using `rsync` is worthwhile even for local copies if you know you will repeat them, such as backing up to a USB drive, since this feature can save huge amounts of time with large data sets.

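As a sketch of such a repeatable local copy (the mount point `/run/media/me/usb/` is a made-up example), note that only the first run actually transfers data:

```
$ rsync -a bar/ /run/media/me/usb/
$ rsync -a bar/ /run/media/me/usb/    # re-run: nothing new to transfer
```
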
#### Syncing

As the name implies, `rsync` can do more than just copy data. So far, we've only demonstrated how to copy files with `rsync`. If you instead want `rsync` to make a target directory look like your source directory, you can add the `--delete` flag. The delete flag makes `rsync` copy files from the source directory that don't exist in the target directory, and then delete files in the target directory that don't exist in the source directory. The result is a target directory that is identical to the source directory. By contrast, `scp` will only ever add files to the target directory.

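Because `--delete` removes files on the receiving side, it is worth previewing the result first. A minimal sketch using the same `bar` example (`-n`/`--dry-run` plus `-v` make rsync report what it would do without doing it):

```
$ rsync -anv --delete bar/ me@server:/home/me/bar/   # preview only
$ rsync -a --delete bar/ me@server:/home/me/bar/     # then run for real
```
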
### Conclusion

For simple use cases, `rsync` isn't much more complicated than the venerable `scp` tool. The only significant difference is the use of `-a` instead of `-r` for recursive copying of directories. However, as we saw, `rsync`'s `-a` flag behaves more like `cp`'s `-r` flag than `scp`'s `-r` flag does.

Hopefully, with these new commands, you can speed up your file-transfer workflow!

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/scp-users-migration-guide-to-rsync/

Author: [chasinglogic][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://fedoramagazine.org/author/chasinglogic/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/07/scp-rsync-816x345.png
[2]: https://lists.mindrot.org/pipermail/openssh-unix-dev/2019-March/037672.html

@ -0,0 +1,259 @@
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12572-1.html)
[#]: subject: (How to create a documentation site with Docsify and GitHub Pages)
[#]: via: (https://opensource.com/article/20/7/docsify-github-pages)
[#]: author: (Bryant Son https://opensource.com/users/brson)

How to create a documentation site with Docsify and GitHub Pages
======

> Use Docsify to create documentation web pages and publish them on GitHub Pages.

![](https://img.linux.net.cn/data/attachment/album/202009/01/211718hws6rvvziks2zrkc.jpg)

Documentation is an essential part of helping people use an open source project, but it's not always a developer's top priority, since they may be more focused on making their application better than on helping people use it. That's why making it easier to publish documentation is so valuable to developers. In this tutorial, I'll show you one option for doing so: combining the [Docsify][2] documentation generator with [GitHub Pages][3].

If you prefer to learn by video, you can access the YouTube version of this tutorial:

- [video](https://youtu.be/ccA2ecqKyHo)

By default, GitHub Pages prompts users to use [Jekyll][4], a static site generator that supports HTML, CSS, and other web technologies. Jekyll generates a static website from documentation files encoded in Markdown format, which GitHub automatically recognizes by their `.md` or `.markdown` extension. While that setup is nice, I wanted to try something else.

Fortunately, GitHub Pages accepts HTML files, which means you can use other site-generation tools, such as Docsify, to create a website on the platform. Docsify is an MIT-licensed open source project with [features][5] that make it easy to create an attractive, state-of-the-art documentation site on GitHub Pages.

![Docsify][6]

### Getting started with Docsify

There are two ways to install Docsify:

1. Install Docsify's command-line interface (CLI) through NPM.
2. Write your own `index.html` by hand.

Docsify recommends the NPM approach, but I will use the second option. If you want to use NPM, follow the instructions in the [quick-start guide][8]; a sketch of those commands follows below.

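As a minimal sketch of that NPM route (these commands follow Docsify's quick-start guide; consult it for the current syntax):

```
# Install the Docsify CLI globally
$ npm i docsify-cli -g

# Scaffold a site under ./docs and preview it locally
$ docsify init ./docs
$ docsify serve docs
```
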
### Download the sample content from GitHub

I've published this example's source code on the [project's GitHub page][9]. You can download the files individually or [clone the repository][10] with:

```
git clone https://github.com/bryantson/OpensourceDotComDemos
```

Then `cd` into the `DocsifyDemo` directory.

I'll walk you through the code below, which is cloned from my sample repository, so you can understand how to modify Docsify. If you prefer, you can create a new `index.html` file from scratch, like the [example][11] in the Docsify docs:

```
<!-- index.html -->

<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <meta charset="UTF-8">
  <link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
</head>
<body>
  <div id="app"></div>
  <script>
    window.$docsify = {
      //...
    }
  </script>
  <script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
</body>
</html>
```

### Exploring how Docsify works

If you cloned my [GitHub repo][10] and changed into the `DocsifyDemo` directory, you should see a file structure like this:

![File contents in the cloned GitHub][19]

File/folder name | What it contains
---|---
`index.html` | The main Docsify initiation file (and the most important file)
`_sidebar.md` | Generates the navigation
`README.md` | The default Markdown file at the root of your documentation
`images` | Contains a sample .jpg image used in `README.md`
Other directories and files | Contain navigable Markdown files

`index.html` is the only thing Docsify requires in order to work. Open the file to view its contents:

```
<!-- index.html -->

<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <meta charset="UTF-8">
  <link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
  <title>Docsify Demo</title>
</head>
<body>
  <div id="app"></div>
  <script>
    window.$docsify = {
      el: "#app",
      repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
      loadSidebar: true,
    }
  </script>
  <script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
</body>
</html>
```

This is essentially just a plain HTML file, but take a look at these two lines:

```
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
... some other content ...
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
```

These lines use content delivery network (CDN) URLs to serve the CSS and JavaScript that transform the site into a Docsify site. As long as you include these lines, you can turn an ordinary GitHub page into a Docsify page.

The first line after the `<body>` tag specifies what to render:

```
<div id="app"></div>
```

Docsify uses the [single-page application][21] (SPA) approach to render a requested page instead of refreshing an entirely new page.

Last, look at the lines inside the `<script>` block:

```
<script>
  window.$docsify = {
    el: "#app",
    repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
    loadSidebar: true,
  }
</script>
```

In this block:

* The `el` property essentially says, "Hey, this is the `id` I am looking for, so locate it and render there."
* Changing the `repo` value determines which page users will be redirected to when they click the GitHub icon in the top-right corner.
  ![GitHub icon][22]
* Setting `loadSideBar` to `true` makes Docsify look for the `_sidebar.md` file that contains your navigation links.

You can find all the options in the [Configuration][23] section of the Docsify docs.

Next, look at the `_sidebar.md` file. Because you set the `loadSidebar` property to `true` in `index.html`, Docsify looks for the `_sidebar.md` file and generates the navigation from its contents. The `_sidebar.md` file in the sample repo contains:

```
<!-- docs/_sidebar.md -->

* [HOME](./)

* [Tutorials](./tutorials/index)
  * [Tomcat](./tutorials/tomcat/index)
  * [Cloud](./tutorials/cloud/index)
  * [Java](./tutorials/java/index)

* [About](./about/index)

* [Contact](./contact/index)
```

This uses Markdown's link format to create the navigation. Note that the Tomcat, Cloud, and Java links are indented; this causes them to be rendered as sublinks under their parent link.

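For instance, adding a new page to the navigation takes two small steps (the `faq` page here is a hypothetical example, not part of the sample repo):

```
# Create the new Markdown page
$ mkdir -p faq
$ echo "# FAQ" > faq/index.md

# Then add a matching line to _sidebar.md:
#   * [FAQ](./faq/index)
```
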
Files like `README.md` and `images` pertain to the structure of the repository, but all the other Markdown files are related to your Docsify web page.

Modify the files you downloaded however you want, based on your needs. In the next step, you will add these files to your GitHub repository, enable GitHub Pages, and finish the project.

### Enable GitHub Pages

Create a sample GitHub repository, then use the following GitHub commands to check in, commit, and push your code:

```
$ git clone <the location of your GitHub repository>
$ cd <the location of your GitHub repository>
$ git add .
$ git commit -m "My first Docsify!"
$ git push
```

Set up your GitHub Pages page. From inside your new GitHub repository, click "Settings":

![Settings link in GitHub][24]

Scroll down until you see "GitHub Pages":

![GitHub Pages settings][25]

Look for the "Source" section:

![GitHub Pages settings][26]

Click the drop-down menu under "Source". Usually, you will set this to "master branch", but you can use another branch if you'd like:

![Setting Source to master branch][27]

That's it! You should now have a link to your GitHub Pages page. Clicking the link will take you there, and it should render with Docsify:

![Link to GitHub Pages docs site][28]

It should look like this:

![Example Docsify site on GitHub Pages][29]

### Conclusion

By editing a single HTML file and some Markdown text, you can create an awesome-looking documentation site with Docsify. What do you think? Leave a comment, and share other open source tools that can be used together with GitHub Pages.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/docsify-github-pages

Author: [Bryant Son][a]
Topic selection: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://docsify.js.org
[3]: https://pages.github.com/
[4]: https://docs.github.com/en/github/working-with-github-pages/about-github-pages-and-jekyll
[5]: https://docsify.js.org/#/?id=features
[6]: https://opensource.com/sites/default/files/uploads/docsify1_ui.jpg (Docsify)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://docsify.js.org/#/quickstart?id=quick-start
[9]: https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo
[10]: https://github.com/bryantson/OpensourceDotComDemos
[11]: https://docsify.js.org/#/quickstart?id=manual-initialization
[12]: http://december.com/html/4/element/html.html
[13]: http://december.com/html/4/element/head.html
[14]: http://december.com/html/4/element/meta.html
[15]: http://december.com/html/4/element/link.html
[16]: http://december.com/html/4/element/body.html
[17]: http://december.com/html/4/element/div.html
[18]: http://december.com/html/4/element/script.html
[19]: https://opensource.com/sites/default/files/uploads/docsify3_files.jpg (File contents in the cloned GitHub)
[20]: http://december.com/html/4/element/title.html
[21]: https://en.wikipedia.org/wiki/Single-page_application
[22]: https://opensource.com/sites/default/files/uploads/docsify4_github-icon_rev_0.jpg (GitHub icon)
[23]: https://docsify.js.org/#/configuration?id=configuration
[24]: https://opensource.com/sites/default/files/uploads/docsify5_githubsettings_0.jpg (Settings link in GitHub)
[25]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev.jpg (GitHub Pages settings)
[26]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev2.jpg (GitHub Pages settings)
[27]: https://opensource.com/sites/default/files/uploads/docsify8_setsource_rev.jpg (Setting Source to master branch)
[28]: https://opensource.com/sites/default/files/uploads/docsify9_link_rev.jpg (Link to GitHub Pages docs site)
[29]: https://opensource.com/sites/default/files/uploads/docsify2_examplesite.jpg (Example Docsify site on GitHub Pages)

@ -0,0 +1,137 @@
[#]: collector: (lujun9972)
[#]: translator: (chenmu-kk)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12566-1.html)
[#]: subject: (9 open source tools for building a fault-tolerant system)
[#]: via: (https://opensource.com/article/19/3/tools-fault-tolerant-system)
[#]: author: (Bryant Son https://opensource.com/users/brson)

9 open source tools for building a fault-tolerant system
======

> Maximize uptime and minimize problems with these open source tools.

![](https://img.linux.net.cn/data/attachment/album/202008/30/205036eqh1j8hhss9skf57.jpg)

I've always been interested in web development and software architecture because I like to see the broader picture of a working system. Whether you are building a mobile app or a web application, it has to be connected to the internet to exchange data among different modules, which means you need web services.

If you use a cloud system as your application's backend, you can take advantage of greater computing power, as the backend service will scale both horizontally and vertically and orchestrate different services. But whether or not you use a cloud backend, it's essential to build a fault-tolerant system—one that is resilient, stable, fast, and safe.

To understand fault-tolerant systems, let's use Facebook, Amazon, Google, and Netflix as examples. Millions and billions of users access these platforms simultaneously while transmitting enormous amounts of data via peer-to-peer and user-to-server networks, and you can be sure there are also plenty of malicious users with bad intentions, such as hacking and denial-of-service (DoS) attacks. Even so, these platforms can operate 24 hours a day, 365 days a year, without downtime.

Although machine learning and smart algorithms are the backbones of these systems, the fact that they achieve consistent service without a single minute of downtime is praiseworthy. Their expensive hardware and gigantic data centers certainly matter, but the elegant software designs supporting the services are equally important. And a fault-tolerant system is one of the laws for building such elegant systems.

### Two behaviors that cause problems in production

Here's another way to think of a fault-tolerant system. When you run your application service locally, everything seems to be perfect. Great! But when you promote your service to the production environment, all hell breaks loose. In a situation like this, a fault-tolerant system helps by addressing two problems: fail-stop behavior and Byzantine behavior.

#### Fail-stop behavior

Fail-stop behavior is when a running system suddenly halts or a few parts of the system fail. Server downtime and inaccessible databases both fall under this type. For example, in the diagram below, Service 1 can't communicate with Service 2 because Service 2 is inaccessible:

![Fail-stop behavior due to Service 2 downtime][2]

But the problem can also arise if there is a network problem between the services, like this:

![Fail-stop behavior due to network failure][3]

#### Byzantine behavior

Byzantine behavior is when the system continues running but doesn't produce the expected behavior (e.g., wrong data or invalid values).

A Byzantine failure can happen if Service 2 has corrupted data (or values), even though the service looks fine, like in this example:

![Byzantine failure due to corrupted service][4]

Or, there can be a malicious middleman intercepting between the services and injecting unwanted data:

![Byzantine failure due to malicious middleman][5]

Neither fail-stop nor Byzantine behavior is a desired situation, so we need means to prevent or fix them. That's where fault-tolerant systems come into play. Following are nine open source tools that can help you address these problems.

### Tools for building a fault-tolerant system

Although building a truly practical fault-tolerant system touches upon in-depth distributed computing theory and complex computer science principles, there are many software tools—many of them open source—to alleviate undesirable results by building a fault-tolerant system.

#### Circuit-breaker pattern: Hystrix and Resilience4j

The [circuit-breaker pattern][6] is a technique that helps to return a prepared dummy response or a simple response when a service fails:

![Circuit breaker pattern][7]

Netflix's open source [Hystrix][8] is the most popular implementation of the circuit-breaker pattern.

Many companies where I've worked previously have leveraged this wonderful tool. Surprisingly, Netflix announced that it will no longer update Hystrix (yeah, I know). Instead, Netflix recommends using an alternative solution like [Resilience4j][9], which supports Java 8 and functional programming, or an alternative practice like [Adaptive Concurrency Limits][10].

#### Load balancing: Nginx and HAProxy

Load balancing is one of the most fundamental concepts in a distributed system and must be present to have a production-quality environment. To understand load balancers, you first need to understand the concept of redundancy. Every production-quality web service has multiple servers that provide redundancy to take over and maintain services when a server goes down.

![Load balancer][11]

Think about modern airplanes: their dual engines provide redundancy that allows them to land safely even if one engine catches fire. (It also helps that most commercial airplanes have state-of-the-art, automated systems.) But having multiple engines (or servers) means that there must be some kind of scheduling mechanism to effectively route the system when something fails.

A load balancer is a device or software that optimizes heavy traffic transactions by balancing multiple server nodes. For instance, when thousands of requests come in, the load balancer acts as a middle layer to route and evenly distribute traffic across different servers. If a server goes down, the load balancer forwards requests to the other servers that are running well.

There are many load balancers available, but the two best-known ones are Nginx and HAProxy.

[Nginx][12] is more than a load balancer. It is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. Companies like Groupon, Capital One, Adobe, and NASA use it.

[HAProxy][13] is also very popular, as it is a free, very fast, and reliable solution offering high availability, load balancing, and proxying for TCP- and HTTP-based applications. Many large internet companies, including GitHub, Reddit, Twitter, and Stack Overflow, use HAProxy. And yes, Red Hat Enterprise Linux also supports HAProxy configuration.

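As an illustrative sketch only (the backend names and addresses are made up, and this is far from a production configuration), a round-robin HAProxy setup over two redundant web servers looks roughly like this:

```
# haproxy.cfg — minimal sketch with assumed hosts/ports
frontend www
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin                  # distribute requests evenly
    server web1 10.0.0.11:8080 check    # 'check' enables health checks;
    server web2 10.0.0.12:8080 check    # failed servers leave the rotation
```
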
#### Actor model: Akka

The [actor model][14] is a concurrency design pattern that delegates responsibility when an actor, which is a primitive unit of computation, receives a message. An actor can create even more actors and delegate messages to them.

[Akka][15] is one of the most well-known implementations of the actor model. The framework supports both Java and Scala, which are based on the JVM.

#### Asynchronous, non-blocking I/O using messaging queues: Kafka and RabbitMQ

Multi-threaded development was popular in the past, but this practice has since been discouraged and replaced with asynchronous, non-blocking I/O patterns. For Java, this is explicitly stated in its [Enterprise JavaBeans (EJB) specification][16]:

> "An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances."
>
> "The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt to start, stop, suspend, or resume a thread, or to change a thread's priority or name. The enterprise bean must not attempt to manage thread groups."

Today, although there are other practices, such as streaming APIs and the actor model, messaging queues like [Kafka][17] and [RabbitMQ][18] offer out-of-the-box support for asynchronous and non-blocking I/O, and they are powerful open source tools that can replace threads by handling concurrent processes.

#### Other options: Eureka and Chaos Monkey

Other useful tools for fault-tolerant systems include monitoring tools, such as Netflix's [Eureka][19], and stress-testing tools, like [Chaos Monkey][20]. They aim to discover potential issues earlier by testing in lower environments, like integration (INT), quality assurance (QA), and user acceptance testing (UAT), to prevent potential problems before moving into the production environment.

What open source tools are you using for building a fault-tolerant system? Please share your favorites in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/tools-fault-tolerant-system

Author: [Bryant Son][a]
Topic selection: [lujun9972][b]
Translator: [chenmu-kk](https://github.com/chenmu-kk)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://opensource.com/sites/default/files/uploads/1_errordowntimeservice.jpg (Fail-stop behavior due to Service 2 downtime)
[3]: https://opensource.com/sites/default/files/uploads/2_errordowntimenetwork.jpg (Fail-stop behavior due to network failure)
[4]: https://opensource.com/sites/default/files/uploads/3_byzantinefailuremalicious.jpg (Byzantine failure due to corrupted service)
[5]: https://opensource.com/sites/default/files/uploads/4_byzantinefailuremiddleman.jpg (Byzantine failure due to malicious middleman)
[6]: https://martinfowler.com/bliki/CircuitBreaker.html
[7]: https://opensource.com/sites/default/files/uploads/5_circuitbreakerpattern.jpg (Circuit breaker pattern)
[8]: https://github.com/Netflix/Hystrix/wiki
[9]: https://github.com/resilience4j/resilience4j
[10]: https://medium.com/@NetflixTechBlog/performance-under-load-3e6fa9a60581
[11]: https://opensource.com/sites/default/files/uploads/7_loadbalancer.jpg (Load balancer)
[12]: https://www.nginx.com
[13]: https://www.haproxy.org
[14]: https://en.wikipedia.org/wiki/Actor_model
[15]: https://akka.io
[16]: https://jcp.org/aboutJava/communityprocess/final/jsr220/index.html
[17]: https://kafka.apache.org
[18]: https://www.rabbitmq.com
[19]: https://github.com/Netflix/eureka
[20]: https://github.com/Netflix/chaosmonkey

@ -1,38 +1,37 @@
 [#]: collector: (lujun9972)
 [#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12543-1.html)
 [#]: subject: (FreeFileSync: Open Source File Synchronization Tool)
 [#]: via: (https://itsfoss.com/freefilesync/)
 [#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
 
-FreeFileSync:开源文件同步工具
+FreeFileSync:开源的文件同步工具
 ======
 
-_**简介:FreeFileSync 是一个开源文件夹比较和同步工具,你可以使用它将数据备份到外部磁盘、云服务(如 Google Drive)或任何其他存储路径。**_
+> FreeFileSync 是一个开源的文件夹比较和同步工具,你可以使用它将数据备份到外部磁盘、云服务(如 Google Drive)或任何其他存储路径。
 
-### FreeFileSync:一个免费和开源的同步工具
+### FreeFileSync:一个免费且开源的同步工具
 
-![][1]
+![](https://img.linux.net.cn/data/attachment/album/202008/23/060523ubx28vyi8qf8sv9d.jpg)
 
 [FreeFileSync][2] 是一个令人印象深刻的开源工具,可以帮助你将数据备份到其他位置。
 
 它们可以是外部 USB 磁盘、Google Drive 或使用 **SFTP 或 FTP** 连接到任何云存储。
 
-你可能之前读过我们的[如何在 Linux 上使用 Google Drive][3] 的教程。不幸的是, 没有合适的在 Linux 上原生使用 Google Drive 的 FOSS 方案。还有 [Insync][4],但它是收费软件而非开源软件。
+你可能之前读过我们的[如何在 Linux 上使用 Google Drive][3] 的教程。不幸的是,没有合适的在 Linux 上原生使用 Google Drive 的 FOSS 方案。有个 [Insync][4],但它是收费软件而非开源软件。
 
 FreeFileSync 可使用 Google Drive 帐户同步文件。事实上,我用它把我的文件同步到 Google Drive 和一个单独的硬盘上。
 
 ### FreeFileSync 的功能
 
-![][5]
+![][1]
 
 尽管 FreeFileSync 的 UI 看起来可能很老,但它为普通用户和高级用户提供了许多有用的功能。
 
-我将在此处重点介绍所有功能:
+我将在此处把所有能重点介绍的功能都介绍出来:
 
-  * Parallel file copy (paid)
   * 跨平台支持(Windows、macOS 和 Linux)
   * 同步前比较文件夹
   * 支持 Google Drive、[SFTP][6] 和 FTP 连接
@ -46,8 +45,6 @@ FreeFileSync 可使用 Google Drive 帐户同步文件。事实上,我用它
   * 便携式版(付费)
   * 并行文件复制(付费)
 
 如果你看一下它提供的功能,它不仅是普通的同步工具,而且还免费提供了更多功能。
 
 此外,为了让你了解,你还可以在同步文件之前先比较它们。例如,你可以比较文件内容/文件时间,或者简单地比较源文件夹和目标文件夹的文件大小。
@ -58,7 +55,7 @@ FreeFileSync 可使用 Google Drive 帐户同步文件。事实上,我用它
 
 ![][8]
 
-但是,它也为你提供了捐赠密钥的选项,它可解锁一些特殊功能,如在同步完成时通过电子邮件通知你等。
+但是,它也为你提供了捐赠密钥的可选选项,它可解锁一些特殊功能,如在同步完成时通过电子邮件通知你等。
 
 以下是免费版本和付费版本的不同:
 
@ -66,21 +63,21 @@ FreeFileSync 可使用 Google Drive 帐户同步文件。事实上,我用它
 
 因此,大多数基本功能是免费的。高级功能主要是针对高级用户,当然,如果你想支持他们也可以。(如果你觉得它有用,请这么做)。
 
-此外,请注意,捐赠版单用户最多可在 3 台设备上使用。所以,这绝对不坏!
+此外,请注意,捐赠版单用户最多可在 3 台设备上使用。所以,这绝对不差!
 
 ### 在 Linux 上安装 FreeFileSync
 
-你可以前往它的[官方下载页面][10],并下载 Linux 的 **tar.gz**文件。如果你喜欢,你还可以下载源码。
+你可以前往它的[官方下载页面][10],并下载 Linux 的 tar.gz 文件。如果你喜欢,你还可以下载源码。
 
 ![][11]
 
 接下来,你只需解压并运行可执行文件就可以了(如上图所示)
 
-[Download FreeFileSync][2]
+- [下载 FreeFileSync][2]
 
 ### 如何开始使用 FreeFileSync?
 
-虽然我还没有尝试成功创建自动同步作业,但它很容易使用。
+虽然我还没有成功地尝试过创建自动同步作业,但它很容易使用。
 
 [官方文档][12]应该足以让你获得想要的。
 
@ -94,11 +91,11 @@ FreeFileSync 可使用 Google Drive 帐户同步文件。事实上,我用它
 
 #### FreeFileSync 的同步类型
 
-当你选择**“更新”方式进行同步**时,它只需将新数据从源文件夹复制到目标文件夹。因此,即使你从源文件夹中删除了某些东西,它也不会在目标文件夹中被删除。
+当你选择 “更新” 的方式进行同步时,它只需将新数据从源文件夹复制到目标文件夹。因此,即使你从源文件夹中删除了某些东西,它也不会在目标文件夹中被删除。
 
-如果你希望目标文件夹有相同的文件副本,可以选择**“镜像”同步方式**。这样,如果你从源文件夹中删除内容,它就会从目标文件夹中删除。
+如果你希望目标文件夹有相同的文件副本,可以选择 “镜像”同步方式。这样,如果你从源文件夹中删除内容,它就会从目标文件夹中删除。
 
-还有一个**“双向”同步方式**,它检测源文件夹和目标文件夹的更改(而不是只监视源文件夹)。因此,如果对源/目标文件夹进行了任何更改,都将同步修改。
+还有一个 “双向” 同步方式,它检测源文件夹和目标文件夹的更改(而不是只监视源文件夹)。因此,如果对源/目标文件夹进行了任何更改,都将同步修改。
 
 有关更高级的用法,我建议你参考[文档][12]。
 
@ -119,7 +116,7 @@ via: https://itsfoss.com/freefilesync/
 作者:[Ankush Das][a]
 选题:[lujun9972][b]
 译者:[geekpi](https://github.com/geekpi)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
 
@ -1,8 +1,8 @@
 [#]: collector: (lujun9972)
 [#]: translator: (wxy)
 [#]: reviewer: (wxy)
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12536-1.html)
 [#]: subject: (4 Mac terminal customizations even a curmudgeon can love)
 [#]: via: (https://opensource.com/article/20/7/mac-terminal)
 [#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)
@ -12,7 +12,7 @@
 
 > 开源意味着我可以在任何终端上找到熟悉的 Linux。
 
-!["咖啡和笔记本"][1]
+![](https://img.linux.net.cn/data/attachment/album/202008/21/002323xqslvqnnmdz487dq.jpg)
 
 十年前,我开始了我的第一份工作,它要求我使用 Linux 作为我的笔记本电脑的操作系统。如果我愿意的话,我可以使用各种 Linux 发行版,包括 Gentoo,但由于我过去曾短暂地使用过 Ubuntu,所以我选择了 Ubuntu Lucid Lynx 10.04。
@ -0,0 +1,682 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12555-1.html)
[#]: subject: (An example of very lightweight RESTful web services in Java)
[#]: via: (https://opensource.com/article/20/7/restful-services-java)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

An example of very lightweight RESTful web services in Java
======

> Explore lightweight RESTful services in Java through a full code example that manages a collection of novels.

![](https://img.linux.net.cn/data/attachment/album/202008/27/071808tt9zlno3b6lmbgl8.jpg)

Web services, in one form or another, have been around for nearly two decades. [XML-RPC services][2], for example, appeared in the late 1990s, followed shortly by services written in the SOAP offshoot. Services in the REST architectural style also made the scene roughly twenty years ago, soon after the XML-RPC and SOAP trailblazers. [REST][4]-style (hereafter, Restful) services now dominate popular sites such as eBay, Facebook, and Twitter. Despite the alternatives to web services for distributed computing (e.g., web sockets, microservices, and new frameworks for remote procedure calls), Restful web services remain attractive for several reasons:

* Restful services build upon existing infrastructure and protocols, in particular, web servers and the HTTP/HTTPS protocols. An organization that has HTML-based websites can readily add web services for clients interested more in the data and the underlying functionality than in the HTML presentation. Amazon, for example, pioneered making the same information and functionality available through both websites and web services, either SOAP-based or Restful.
* Restful services treat HTTP as an API, thereby avoiding the complicated software layering that has come to characterize SOAP-based web services. For example, the Restful API supports the standard CRUD (Create-Read-Update-Delete) operations through the HTTP verbs POST-GET-PUT-DELETE, respectively; HTTP status codes inform a requester whether a request succeeded or why it failed.
* Restful web services can be as simple or as complicated as needed. Restful is a style—indeed, a very flexible one—rather than a set of prescriptions about how services should be designed and structured. (An attendant downside is that it may be hard to determine what does not count as a Restful service.)
* As a consumer or client, Restful web services are language- and platform-neutral. The client makes requests in HTTP(S) and receives text responses in a format suitable for modern data interchange (e.g., JSON).
* Almost every general-purpose programming language has at least adequate (and often strong) support for HTTP/HTTPS, which means that web-service clients can be written in those languages.

This article explores lightweight Restful services in Java through a full code example.

### The Restful novels web service

The Restful novels web service consists of three programmer-defined classes:

* The `Novel` class represents a novel with just three properties: a machine-generated ID, an author, and a title. The properties could be expanded for more realism, but I want to keep this example simple.
* The `Novels` class consists of utilities for various tasks: converting a plain-text encoding of a `Novel`, or a list of them, into XML or JSON; supporting the CRUD operations on the novels collection; and initializing the collection from data stored in a file. The `Novels` class mediates between `Novel` instances and the servlet.
* The `NovelsServlet` class derives from `HttpServlet`, a sturdy and flexible piece of software that has been around since the very early enterprise Java of the late 1990s. The servlet acts as an HTTP endpoint for client CRUD requests. The servlet code focuses on processing client requests and generating appropriate responses, leaving the devilish details to utilities in the `Novels` class.

Some Java frameworks, such as Jersey (JAX-RS) and Restlet, are designed for Restful services. Nonetheless, the `HttpServlet` on its own provides a lightweight, flexible, powerful, and well-tested API for delivering such services. I'll demonstrate this with the novels example below.

### Deploying the novels web service

Deploying the novels web service requires a web server, of course. My choice is [Tomcat][5], but the service should work (famous last words!) if it's hosted on, for example, Jetty or even a Java application server. [My website][6] has a README file that summarizes how to install Tomcat, along with the code. There is also a documented Apache Ant script that builds the novels service (or any other service or website) and deploys it under Tomcat or the equivalent.

Tomcat is available for download from its [website][7]. Once you install it locally, let `TOMCAT_HOME` be the install directory. There are two subdirectories of immediate interest:

* The `TOMCAT_HOME/bin` directory contains startup and stop scripts for Unix-like systems (`startup.sh` and `shutdown.sh`) and Windows (`startup.bat` and `shutdown.bat`). Tomcat runs as a Java application. The web server's servlet container is named Catalina. (In Jetty, the web server and the container have the same name.) Once Tomcat starts, enter `http://localhost:8080/` in a browser to see extensive documentation, including examples.
* The `TOMCAT_HOME/webapps` directory is the default one for deployed websites and web services. The straightforward way to deploy a website or web service is to copy a JAR file with a `.war` extension (hence, a WAR file) to `TOMCAT_HOME/webapps` or a subdirectory thereof. Tomcat then unpacks the WAR file into its own directory. For example, Tomcat would unpack `novels.war` into a subdirectory named `novels`, leaving `novels.war` in place. A website or service can be removed by deleting the WAR file and updated by overwriting the WAR file with a new version. By the way, the first step in debugging a website or service is checking that Tomcat has unpacked the WAR file; if not, the site or service was not published because of a fatal error in the code or configuration.
* Because Tomcat listens by default on port 8080 for HTTP requests, a request URL for the local machine begins with `http://localhost:8080/`.

Access a programmer-deployed WAR file by adding the file's name without the `.war` extension:

```
http://localhost:8080/novels/
```

If the service was deployed to a subdirectory (e.g., `myapps`) of `TOMCAT_HOME`, this would be reflected in the URL:

```
http://localhost:8080/myapps/novels/
```

I'll provide more details about this in the testing section near the end of the article.

As noted, the ZIP file on my homepage contains an Ant script that compiles and deploys a website or service. (A copy of `novels.war` is also included in the ZIP file.) For the novels example, a sample command (with `%` as the command-line prompt) is:

```
% ant -Dwar.name=novels deploy
```

This command first compiles the Java source code and then builds a deployable `novels.war` file, which it saves in the current directory and copies to `TOMCAT_HOME/webapps`. If all goes well, a `GET` request (using a browser or a command-line utility such as `curl`) serves as a first test:

```
% curl http://localhost:8080/novels/
```

Tomcat is configured, by default, for hot deploys: the web server does not need to be shut down to deploy, update, or remove a web application.

### The novels service at the code level

Let's get back to the novels example, but at the code level. Consider the `Novel` class below:

#### Example 1. The Novel class

```java
package novels;

import java.io.Serializable;

public class Novel implements Serializable, Comparable<Novel> {
    static final long serialVersionUID = 1L;
    private String author;
    private String title;
    private int id;

    public Novel() { }

    public void setAuthor(final String author) { this.author = author; }
    public String getAuthor() { return this.author; }
    public void setTitle(final String title) { this.title = title; }
    public String getTitle() { return this.title; }
    public void setId(final int id) { this.id = id; }
    public int getId() { return this.id; }

    public int compareTo(final Novel other) { return this.id - other.id; }
}
```

This class implements the `compareTo` method from the `Comparable` interface because `Novel` instances are stored in a thread-safe, unordered `ConcurrentHashMap`. In responding to requests to view the collection, the novels service sorts a collection (an `ArrayList`) extracted from the map; the implementation of `compareTo` makes this an ascending sort by `Novel` ID.

The `Novels` class contains various utility functions:

#### Example 2. The Novels utility class

```java
package novels;

import java.io.IOException;
import java.io.File;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.BufferedReader;
import java.nio.file.Files;
import java.util.stream.Stream;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.Collections;
import java.beans.XMLEncoder;
import javax.servlet.ServletContext; // not in JavaSE
import org.json.JSONObject;
import org.json.XML;

public class Novels {
    private final String fileName = "/WEB-INF/data/novels.db";
    private ConcurrentMap<Integer, Novel> novels;
    private ServletContext sctx;
    private AtomicInteger mapKey;

    public Novels() {
        novels = new ConcurrentHashMap<Integer, Novel>();
        mapKey = new AtomicInteger();
    }

    public void setServletContext(ServletContext sctx) { this.sctx = sctx; }
    public ServletContext getServletContext() { return this.sctx; }

    public ConcurrentMap<Integer, Novel> getConcurrentMap() {
        if (getServletContext() == null) return null; // not initialized
        if (novels.size() < 1) populate();
        return this.novels;
    }

    public String toXml(Object obj) { // default encoding
        String xml = null;
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            XMLEncoder encoder = new XMLEncoder(out);
            encoder.writeObject(obj);
            encoder.close();
            xml = out.toString();
        }
        catch(Exception e) { }
        return xml;
    }

    public String toJson(String xml) { // option for requester
        try {
            JSONObject jobt = XML.toJSONObject(xml);
            return jobt.toString(3); // 3 is indentation level
        }
        catch(Exception e) { }
        return null;
    }

    public int addNovel(Novel novel) {
        int id = mapKey.incrementAndGet();
        novel.setId(id);
        novels.put(id, novel);
        return id;
    }

    private void populate() {
        InputStream in = sctx.getResourceAsStream(this.fileName);
        // Convert novel.db string data into novels.
        if (in != null) {
            try {
                InputStreamReader isr = new InputStreamReader(in);
                BufferedReader reader = new BufferedReader(isr);

                String record = null;
                while ((record = reader.readLine()) != null) {
                    String[] parts = record.split("!");
                    if (parts.length == 2) {
                        Novel novel = new Novel();
                        novel.setAuthor(parts[0]);
                        novel.setTitle(parts[1]);
                        addNovel(novel); // sets the Id, adds to map
                    }
                }
                in.close();
            }
            catch (IOException e) { }
        }
    }
}
```

The most complicated method is `populate`, which reads from a text file contained in the deployed WAR file. The text file holds the initial collection of novels. To open the file, the `populate` method needs the `ServletContext`, a Java map that contains all of the critical information about the servlet embedded in the servlet container. The text file contains records like this:

```
Jane Austen!Persuasion
```

A line is parsed into two parts (author and title) separated by the bang symbol (`!`). The method then builds a `Novel` instance, sets the author and title properties, and adds the novel to the collection, which is kept in memory.

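For illustration, a few more records in the same `author!title` format (these lines are made-up examples, not necessarily the contents of the shipped `novels.db`):

```
Charles Dickens!Bleak House
Herman Melville!Moby-Dick
Emily Brontë!Wuthering Heights
```
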
The `Novels` class also has utilities to encode the novels collection into XML or JSON, depending upon the format that the requester prefers. XML is the default, but JSON is available upon request. A lightweight XML-to-JSON package provides the JSON. Further details on the encoding are below.

#### Example 3. The NovelsServlet class

```java
package novels;

import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Arrays;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.beans.XMLEncoder;
import org.json.JSONObject;
import org.json.XML;

public class NovelsServlet extends HttpServlet {
    static final long serialVersionUID = 1L;
    private Novels novels; // back-end bean

    // Executed when servlet is first loaded into container.
    @Override
    public void init() {
        this.novels = new Novels();
        novels.setServletContext(this.getServletContext());
    }

    // GET /novels
    // GET /novels?id=1
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));

        // Check user preference for XML or JSON by inspecting
        // the HTTP headers for the Accept key.
        boolean json = false;
        String accept = request.getHeader("accept");
        if (accept != null && accept.contains("json")) json = true;

        // If no query string, assume client wants the full list.
        if (key == null) {
            ConcurrentMap<Integer, Novel> map = novels.getConcurrentMap();
            Object list = map.values().toArray();
            Arrays.sort(list);

            String payload = novels.toXml(list);        // defaults to Xml
            if (json) payload = novels.toJson(payload); // Json preferred?
            sendResponse(response, payload);
        }
        // Otherwise, return the specified Novel.
        else {
            Novel novel = novels.getConcurrentMap().get(key);
            if (novel == null) { // no such Novel
                String msg = key + " does not map to a novel.\n";
                sendResponse(response, novels.toXml(msg));
            }
            else { // requested Novel found
                if (json) sendResponse(response, novels.toJson(novels.toXml(novel)));
                else sendResponse(response, novels.toXml(novel));
            }
        }
    }

    // POST /novels
    @Override
    public void doPost(HttpServletRequest request, HttpServletResponse response) {
        String author = request.getParameter("author");
        String title = request.getParameter("title");

        // Are the data to create a new novel present?
        if (author == null || title == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Create a novel.
        Novel n = new Novel();
        n.setAuthor(author);
        n.setTitle(title);

        // Save the ID of the newly created Novel.
        int id = novels.addNovel(n);

        // Generate the confirmation message.
        String msg = "Novel " + id + " created.\n";
        sendResponse(response, novels.toXml(msg));
    }

    // PUT /novels
    @Override
    public void doPut(HttpServletRequest request, HttpServletResponse response) {
        /* A workaround is necessary for a PUT request because Tomcat does not
           generate a workable parameter map for the PUT verb. */
        String key = null;
        String rest = null;
        boolean author = false;

        /* Let the hack begin. */
        try {
            BufferedReader br =
                new BufferedReader(new InputStreamReader(request.getInputStream()));
            String data = br.readLine();
            /* To simplify the hack, assume that the PUT request has exactly
               two parameters: the id and either author or title. Assume, further,
               that the id comes first. From the client side, a hash character
               # separates the id and the author/title, e.g.,

               id=33#title=War and Peace
            */
            String[] args = data.split("#");      // id in args[0], rest in args[1]
            String[] parts1 = args[0].split("="); // id = parts1[1]
            key = parts1[1];

            String[] parts2 = args[1].split("="); // parts2[0] is key
            if (parts2[0].contains("author")) author = true;
            rest = parts2[1];
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }

        // If no key, then the request is ill formed.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Look up the specified novel.
        Novel p = novels.getConcurrentMap().get(Integer.valueOf((key.trim())));
        if (p == null) { // not found
            String msg = key + " does not map to a novel.\n";
            sendResponse(response, novels.toXml(msg));
        }
        else { // found
            if (rest == null) {
                throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
            }
            // Do the editing.
            else {
                if (author) p.setAuthor(rest);
                else p.setTitle(rest);

                String msg = "Novel " + key + " has been edited.\n";
                sendResponse(response, novels.toXml(msg));
            }
        }
    }

    // DELETE /novels?id=1
    @Override
    public void doDelete(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));
        // Only one Novel can be deleted at a time.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
        try {
            novels.getConcurrentMap().remove(key);
            String msg = "Novel " + key + " removed.\n";
            sendResponse(response, novels.toXml(msg));
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }

    // Methods Not Allowed
    @Override
    public void doTrace(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doHead(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doOptions(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    // Send the response payload (Xml or Json) to the client.
    private void sendResponse(HttpServletResponse response, String payload) {
        try {
            OutputStream out = response.getOutputStream();
            out.write(payload.getBytes());
            out.flush();
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }
}
```

The `NovelsServlet` class above extends the `HttpServlet` class, which in turn extends the `GenericServlet` class, which implements the `Servlet` interface:

```java
NovelsServlet extends HttpServlet extends GenericServlet implements Servlet
```

As the name makes clear, the `HttpServlet` is designed for servlets delivered over HTTP(S). The class provides empty methods named after the standard HTTP request verbs (officially, methods):

* `doPost` (Post = Create)
* `doGet` (Get = Read)
* `doPut` (Put = Update)
* `doDelete` (Delete = Delete)

Some other HTTP verbs are covered as well. A subclass of `HttpServlet`, such as `NovelsServlet`, overrides the `do` methods of interest and leaves the others as no-ops. `NovelsServlet` overrides seven of the `do` methods.

Each of the `HttpServlet` CRUD methods takes the same two arguments. Here is `doPost` as an example:

```java
public void doPost(HttpServletRequest request, HttpServletResponse response) {
```

The `request` argument is a map of the HTTP request information, and the `response` provides an output stream back to the requester. A method such as `doPost` is structured as follows:

* Read the `request` information, taking whatever action is appropriate to generate a response. If information is missing or otherwise deficient, generate an error.
* Use the extracted request information to perform the appropriate CRUD operation (in this case, create a `Novel`) and then encode an appropriate response to the requester using the `response` output stream. In the case of `doPost`, the response is a confirmation that a new novel has been created and added to the collection. Once the response is sent, the output stream is closed, which closes the connection as well.

### More on the do method overrides

An HTTP request has a relatively simple structure. Here is a sketch in the familiar HTTP 1.1 format, with comments introduced by double sharp signs:

```
GET /novels              ## start line
Host: localhost:8080     ## header element
Accept-type: text/plain  ## ditto
...
[body]                   ## POST and PUT only
```

The start line begins with the HTTP verb (in this case, `GET`) and the URI of the targeted resource, named with a noun (in this case, `novels`). The headers consist of key-value pairs, with a colon separating the key on the left from the value on the right. The header with the key `Host` (case insensitive) is required; the hostname `localhost` is the local symbolic address of the current machine, and port 8080 is the default port on which the Tomcat web server awaits HTTP requests. (By default, Tomcat listens on port 8443 for HTTPS requests.) The header elements can occur in arbitrary order. In this example, the value of the `Accept-type` header is the MIME type `text/plain`.

Some requests (in particular, `POST` and `PUT`) have bodies, whereas others (in particular, `GET` and `DELETE`) do not. If there is a body (perhaps an empty one), two newlines separate the headers from the body; the HTTP body consists of key-value pairs. For bodyless requests, header elements, such as the query string, can be used to send information. Here is a request to `GET` the `/novels` resource with the ID of 2:

```
GET /novels?id=2
```

The query string starts with the question mark and, in general, consists of key-value pairs, although a key without a value is possible.

The `HttpServlet`, with methods such as `getParameter` and `getParameterMap`, nicely hides the distinction between HTTP requests with and without a body. In the novels example, the `getParameter` method is used to extract the required information from `GET`, `POST`, and `DELETE` requests. (Handling a `PUT` request requires lower-level code because Tomcat does not provide a workable parameter map for `PUT` requests.) Here, for illustration, is a slice of the `doPost` method that the `NovelsServlet` overrides:

```java
@Override
public void doPost(HttpServletRequest request, HttpServletResponse response) {
    String author = request.getParameter("author");
    String title = request.getParameter("title");
    ...
```

For a bodyless `DELETE` request, the approach is essentially the same:

```java
@Override
public void doDelete(HttpServletRequest request, HttpServletResponse response) {
    String param = request.getParameter("id"); // id of novel to be removed
    ...
```

The `doGet` method needs to distinguish between two flavors of a `GET` request: one flavor means "get all", whereas the other means "get a specified one". If the `GET` request URL contains a query string whose key is an ID, the request is interpreted as "get a specified one":

```
http://localhost:8080/novels?id=2  ## GET specified
```

Without a query string, the `GET` request is interpreted as "get all":

```
http://localhost:8080/novels       ## GET all
```

### Some devilish details

The novels service design reflects how a Java-based web server such as Tomcat works. At startup, Tomcat builds a thread pool from which request handlers are drawn, an approach known as the "one thread per request" model. Modern versions of Tomcat also use non-blocking I/O to boost performance.

The novels service executes as a single instance of the `NovelsServlet` class, which in turn maintains a single collection of novels. Accordingly, a race condition would arise, for example, if these two requests were processed concurrently:

* One request adds a new novel to the collection.
* The other request gets all of the novels in the collection.

The outcome is indeterminate, depending on exactly how the _read_ and _write_ operations overlap. To avoid this problem, the novels service uses the thread-safe `ConcurrentMap`. Keys for this map are generated with a thread-safe `AtomicInteger`. Here is the relevant code segment:

```java
public class Novels {
    private ConcurrentMap<Integer, Novel> novels;
    private AtomicInteger mapKey;
    ...
```

By default, a response to a client request is encoded as XML. The novels program uses the old-time `XMLEncoder` class for simplicity; a far richer option is the JAX-B library. The code is straightforward:

```java
public String toXml(Object obj) { // default encoding
    String xml = null;
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder encoder = new XMLEncoder(out);
        encoder.writeObject(obj);
        encoder.close();
        xml = out.toString();
    }
    catch(Exception e) { }
    return xml;
}
```

The `Object` argument is either a sorted `ArrayList` of novels (in response to a "get all" request), or a single `Novel` instance (in response to a "get one" request), or a `String` (a confirmation message).

If an HTTP request header specifies JSON as the desired type, the XML is converted to JSON. Here is the check in the `doGet` method of the `NovelsServlet`:

```java
String accept = request.getHeader("accept"); // "accept" is case insensitive
if (accept != null && accept.contains("json")) json = true;
```

The `Novels` class contains the `toJson` method, which converts XML to JSON:

```java
public String toJson(String xml) { // option for requester
    try {
        JSONObject jobt = XML.toJSONObject(xml);
        return jobt.toString(3); // 3 is indentation level
    }
    catch(Exception e) { }
    return null;
}
```

The `NovelsServlet` checks for errors of various types. For example, a `POST` request should include an author and a title for the new novel. If either is missing, the `doPost` method throws an exception:

```java
if (author == null || title == null)
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
```

The `SC` in `SC_BAD_REQUEST` stands for status code, and `BAD_REQUEST` has the standard HTTP numeric value of 400. If the HTTP verb in a request is `TRACE`, a different status code is returned:

```java
public void doTrace(HttpServletRequest request, HttpServletResponse response) {
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
}
```

### Testing the novels service

Testing a web service with a browser is awkward. Among the CRUD verbs, modern browsers generate only `POST` (Create) and `GET` (Read) requests. Even a `POST` request is challenging from a browser, as the key-value pairs for the body need to be included; this is typically done through an HTML form. A command-line utility such as [curl][21] is a better way to go, as this section illustrates with some `curl` commands, which are included in the ZIP on my website.

Here are some sample tests without the corresponding output:

```
% curl localhost:8080/novels/
% curl localhost:8080/novels?id=1
% curl --header "Accept: application/json" localhost:8080/novels/
```

The first command requests all the novels, which are encoded by default in XML. The second command requests the novel with an ID of 1, also encoded in XML. The last command adds an `Accept` header element with `application/json` as the desired MIME type. The "get one" command could also use this header element. Such requests have JSON rather than XML responses.

The next two commands create a new novel in the collection and confirm the addition:

```
% curl --request POST --data "author=Tolstoy&title=War and Peace" localhost:8080/novels/
% curl localhost:8080/novels?id=4
```

A `PUT` command in `curl` resembles a `POST` command except that the `PUT` body does not use standard syntax. The documentation for the `doPut` method in the `NovelsServlet` goes into detail, but the short version is that Tomcat does not generate a proper map on `PUT` requests. Here is a sample `PUT` command and a confirmation command:

```
% curl --request PUT --data "id=3#title=This is an UPDATE" localhost:8080/novels/
% curl localhost:8080/novels?id=3
```

The second command confirms the update.

Finally, the `DELETE` command works as expected:

```
% curl --request DELETE localhost:8080/novels?id=2
% curl localhost:8080/novels/
```

The request is to delete the novel with an ID of 2. The second command shows the remaining novels.

### The web.xml configuration file

Although it's officially optional, a `web.xml` configuration file is a mainstay in a production-grade website or service. The configuration file allows routing, security, and other features of a site or service to be specified independently of the implementation code. The configuration for the novels service handles routing by providing a URL pattern for requests dispatched to this service:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
  <servlet>
    <servlet-name>novels</servlet-name>
    <servlet-class>novels.NovelsServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>novels</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```

The `servlet-name` element provides an abbreviation (`novels`) for the servlet's fully qualified class name (`novels.NovelsServlet`), and this name is used in the `servlet-mapping` element below it.

Recall that a URL for a deployed service has the WAR file name right after the port number:

```
http://localhost:8080/novels/
```

The URI after the slash that follows the port number is the "path" to the requested resource, in this case, the novels service; hence, the term `novels` occurs after the first single slash.

In the `web.xml` file, the `url-pattern` is specified as `/*`, which means "any path that starts with `/novels`". Suppose Tomcat encounters a contrived request URL, such as this:

```
http://localhost:8080/novels/foobar/
```

The `web.xml` configuration specifies that this request, too, should be dispatched to the novels servlet because the `/*` pattern also covers `/foobar`. The contrived URL, therefore, has the same result as the legitimate one shown above it.

A production-grade configuration file might include information on security, both wire-level and users-roles. Even in this case, the configuration file would be only two or three times the size of the sample one.

### Wrapping up

The `HttpServlet` is at the center of Java's web technologies. A website or web service, such as the novels service, extends this class, overriding the `do` verbs of interest. A Restful framework such as Jersey (JAX-RS) or Restlet does essentially the same by providing a customized servlet, which then acts as the HTTP(S) endpoint for requests against a web application written in the framework.

A servlet-based application has access, of course, to any Java library required in the web application. If the application follows the separation-of-concerns principle, the servlet code remains attractively simple: the code checks a request, issuing the appropriate error if there are deficiencies; otherwise, the code calls out for whatever functionality may be required (e.g., querying a database, encoding a response in a specified format) and then sends the response to the requester. The `HttpServletRequest` and `HttpServletResponse` types make it easy to do the servlet-specific work of reading the request and writing the response.

Java has APIs that range from the very simple to the highly complicated. If you need to deliver some Restful services using Java, my advice is to give the low-fuss `HttpServlet` a try before anything else.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/restful-services-java

Author: [Marty Kalin][a]
Topic selection: [lujun9972][b]
Translator: [Yufei-Yan](https://github.com/Yufei-Yan)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: http://xmlrpc.com/
[3]: https://en.wikipedia.org/wiki/Representational_state_transfer
[4]: https://www.redhat.com/en/topics/integration/whats-the-difference-between-soap-rest
[5]: http://tomcat.apache.org/
[6]: https://condor.depaul.edu/mkalin
[7]: https://tomcat.apache.org/download-90.cgi
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+serializable
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bytearrayoutputstream
[13]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstream
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstreamreader
[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bufferedreader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtimeexception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[21]: https://curl.haxx.se/

|
@ -1,22 +1,22 @@
 [#]: collector: (lujun9972)
 [#]: translator: (wxy)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
+[#]: reviewer: (wxy)
+[#]: publisher: (wxy)
+[#]: url: (https://linux.cn/article-12533-1.html)
 [#]: subject: (Debug Linux using ProcDump)
 [#]: via: (https://opensource.com/article/20/7/procdump-linux)
 [#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)
 
-使用 ProcDump 调试 Linux
+使用微软的 ProcDump 调试 Linux 进程
 ======
 
 > 用这个微软的开源工具,获取进程信息。
 
-![渣土车在路上转弯][1] 。
+![](https://img.linux.net.cn/data/attachment/album/202008/20/095646k5wz7cd11vyc7lhr.jpg)
 
-微软越来越心仪 Linux 和开放源码,这并不是什么秘密。在过去几年中,该公司稳步增加对开源的贡献,包括将其部分软件和工具移植到 Linux。2018 年底,微软[宣布][2]将其部分 [Sysinternals][3] 工具以开源的方式移植到 Linux,[Linux 版的 ProcDump][4]是第一个这样的版本。
+微软越来越心仪 Linux 和开源,这并不是什么秘密。在过去几年中,该公司稳步地增加了对开源的贡献,包括将其部分软件和工具移植到 Linux。2018 年底,微软[宣布][2]将其 [Sysinternals][3] 的部分工具以开源的方式移植到 Linux,[Linux 版的 ProcDump][4]是其中的第一个。
 
-如果你在 Windows 上从事过调试或故障排除工作,你可能听说过 Sysinternals。它是一个“瑞士军刀”工具集,可以帮助系统管理员、开发人员和 IT 安全专家监控和排除 Windows 环境的故障。
+如果你在 Windows 上从事过调试或故障排除工作,你可能听说过 Sysinternals,它是一个“瑞士军刀”工具集,可以帮助系统管理员、开发人员和 IT 安全专家监控和排除 Windows 环境的故障。
 
 Sysinternals 最受欢迎的工具之一是 [ProcDump][5]。顾名思义,它用于将正在运行的进程的内存转储到磁盘上的一个核心文件中。然后可以用调试器对这个核心文件进行分析,了解转储时进程的状态。因为之前用过 Sysinternals,所以我很想试试 ProcDump 的 Linux 移植版。
 
@ -93,7 +93,7 @@ bin/ProcDumpTestApplication: ELF 64-bit LSB executable, x86-64, version 1 (SYSV)
 $
 ```
 
-在此情况下,每次运行 `procdump` 实用程序时,你都必须移动到 `bin/` 文件夹中。要使它在系统中的任何地方都可以使用,运行 `make install`。这将二进制文件复制到通常的 `bin/` 目录中,它是你的 shell `$PATH` 的一部分:
+在此情况下,每次运行 `procdump` 实用程序时,你都必须移动到 `bin/` 文件夹中。要使它在系统中的任何地方都可以使用,运行 `make install`。这将这个二进制文件复制到通常的 `bin/` 目录中,它是你的 shell `$PATH` 的一部分:
 
 ```
 $ which procdump
@ -153,7 +153,7 @@ root 350508 347350 0 03:29 pts/0 00:00:00 grep --color=auto pro
 $
 ```
 
-当测试进程正在运行时,调用 `procdump` 并提供 PID。输出表明了该进程的名称和 PID,并报告它生成了一个核心转储文件,并显示其文件名:
+当测试进程正在运行时,调用 `procdump` 并提供 PID。下面的输出表明了该进程的名称和 PID,并报告它生成了一个核心转储文件,并显示其文件名:
 
 ```
 $ procdump -p 350498
@ -262,7 +262,7 @@ $
|
||||
|
||||
### 你应该使用 ProcDump 还是 gcore?
|
||||
|
||||
有几种情况下,你可能更喜欢使用 ProcDump 而不是 gcore,ProcDump 有一些内置的功能,在一般情况下可能很有用。
|
||||
有几种情况下,你可能更喜欢使用 ProcDump 而不是 gcore,ProcDump 有一些内置的功能,在一些情况下可能很有用。
|
||||
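作为对比,`gcore`(随 GDB 一同发布)的基本用法大致如下,这里沿用上文示例中的 PID:

```
# 用 gcore 为正在运行的进程生成核心转储
# 生成的文件名为 dumpfile.350498
$ gcore -o dumpfile 350498
```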
|
||||
#### 等待测试二进制文件的执行
|
||||
|
||||
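一个可能的调用方式示意如下。这里假设用 `-w` 参数按进程名等待目标进程启动;实际的参数名请以 `procdump -h` 的输出为准:

```
# 假设 -w 表示等待指定名称的进程启动(具体以 procdump -h 为准)
$ procdump -w progxyz
```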
@ -306,7 +306,7 @@ $ ./progxyz &
|
||||
$
|
||||
```
|
||||
|
||||
ProcDump 立即检测到二进制正在运行,并转储这个二进制的核心文件:
|
||||
ProcDump 立即检测到该二进制正在运行,并转储这个二进制的核心文件:
|
||||
|
||||
```
|
||||
[03:39:23 - INFO]: Waiting for process 'progxyz' to launch...
|
||||
@ -391,7 +391,7 @@ via: https://opensource.com/article/20/7/procdump-linux
|
||||
作者:[Gaurav Kamathe][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,74 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12563-1.html)
|
||||
[#]: subject: (Decentralized Messaging App Riot Rebrands to Element)
|
||||
[#]: via: (https://itsfoss.com/riot-to-element/)
|
||||
[#]: author: (John Paul https://itsfoss.com/author/john/)
|
||||
|
||||
去中心化的消息应用 Riot 改名为 Element
|
||||
======
|
||||
|
||||
Riot 曾经是,现在也是一款基于开源 Matrix 协议的去中心化即时通讯应用。
|
||||
|
||||
6 月底,Riot 即时通讯客户端宣布将改名。他们透露,他们的新名字是 [Element][1]。让我们来看看 Riot 为什么要改名,还有哪些要改。
|
||||
|
||||
### 为什么从 Riot 改名为 Element?
|
||||
|
||||
![][2]
|
||||
|
||||
在说到最新的公告之前,我们先来看看他们当初为什么要改名。
|
||||
|
||||
根据 6 月 23 日的一篇[博客文章][3],该组织改名有三个原因。
|
||||
|
||||
首先,他们表示“某大型游戏公司”曾多次阻止他们注册 Riot 和 Riot.im 产品名称的商标。如果要我猜的话,他们可能指的就是这家[“游戏公司”][4])。
|
||||
|
||||
其次,他们选择 Riot 这个名字的初衷是为了“唤起一些破坏性和活力的东西”。他们担心人们反而认为这个应用是“专注于暴力”。我想,当前的情形下,这个名字并不算好。
|
||||
|
||||
第三,他们希望澄清 Riot 涉及的众多品牌名称所造成的混乱。例如,Riot 是由一家名为 New Vector 的公司创建的,而 Riot 托管在 Modular 上,Modular 也是 New Vector 的产品。他们希望简化他们的命名系统,以避免混淆潜在客户。当人们寻找消息解决方案时,他们希望他们只需要寻找一个名字:Element。
|
||||
|
||||
### 元素即一切
|
||||
|
||||
![][5]
|
||||
|
||||
从 7 月 15 日开始,该应用的名称和公司的名称已经改为 Element(元素)。他们的 Matrix 托管服务现在将被称为 Element Matrix Services。他们的公告很好地总结了这一点。
|
||||
|
||||
> “对于那些第一次发现我们的人来说,Element 是 Matrix 通信网络中的旗舰级安全协作应用。Element 让你拥有自己的端到端加密聊天服务器,同时还能与更广泛的 Matrix 网络中的其他人连接。”
|
||||
|
||||
他们之所以选择 Element 这个名字,是因为它“反映了我们在设计 RiotX 时对简单和清晰的强调;这个名字突出了我们一心一意将 Element 打造成可以想象的最优雅和最实用的主流通讯应用的使命”。他们还说,他们想要一个“能唤起数据所有权和自我主权的概念”的名字。他们还认为这是一个很酷的名字。
|
||||
|
||||
### 除了改个名之外
|
||||
|
||||
![][6]
|
||||
|
||||
最近的公告也表明,此举不仅仅是简单的改名。Element 还发布了“新一代安卓版 Matrix 客户端”。该客户端的前身是 RiotX,现在改名为 Element。(还有呢?)它对以前的客户端进行了彻底的重写,现在支持 VoIP 通话和小部件。Element 还将在 iOS 上推出,支持 iOS 13,并提供“全新的推送通知支持”。
|
||||
|
||||
Element Web 客户端也得到了一些关爱,更新了 UI 和新的更容易阅读的字体。他们还“重写了房间列表控件 —— 增加了房间预览(!!)、按字母顺序排列、可调整列表大小、改进的通知用户界面等”。他们还开始努力改进端到端加密。
|
||||
|
||||
### 最后思考
|
||||
|
||||
Element 公司的人迈出了一大步,做出了这样的重大改名。他们可能会在短期内失去一些客户。(这可能主要是因为出于某种原因没有意识到名称的改变,或者不喜欢这种改变)。然而从长远来看,品牌简化将帮助他们脱颖而出。
|
||||
|
||||
我唯一要提到的负面说明是,这是他们在该应用历史上的第三次改名。在 2016 年发布时,它最初被命名为 Vector。当年晚些时候改名为 Riot。希望 Element 能一直用下去。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/riot-to-element/
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://element.io/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/riot-to-element.png?ssl=1
|
||||
[3]: https://element.io/blog/the-world-is-changing/
|
||||
[4]: https://en.wikipedia.org/wiki/Riot_Games
|
||||
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/element-desktop.jpg?ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/element-apps.jpg?ssl=1
|
||||
[7]: http://reddit.com/r/linuxusersgroup
|
106
published/202008/20200727 5 open source IDE tools for Java.md
Normal file
@ -0,0 +1,106 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12542-1.html)
|
||||
[#]: subject: (5 open source IDE tools for Java)
|
||||
[#]: via: (https://opensource.com/article/20/7/ide-java)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
|
||||
|
||||
5 个开源的 Java IDE 工具
|
||||
======
|
||||
|
||||
> Java IDE 工具提供了大量的方法来根据你的独特需求和偏好创建一个编程环境。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/22/235441wnnorcvo4olasv8o.jpg)
|
||||
|
||||
通过简化程序员的工作,[Java][2] 框架可以使他们的生活更加轻松。这些框架是为了在各种服务器环境上运行各种应用程序而设计开发的;这包括解析注解、扫描描述符、加载配置以及在 Java 虚拟机(JVM)上启动实际的服务等方面的动态行为。控制这么多的任务需要更多的代码,这就很难降低内存占用、加快新应用的启动时间。无论如何,据 [TIOBE 指数][3],在当今使用的编程语言中 Java 一直排名前三,拥有着 700 万到 1000 万开发者的社区。
|
||||
|
||||
有这么多用 Java 编写的代码,这意味着有一些很好的集成开发环境(IDE)可供选择,可以为开发人员提供有效地编写、整理、测试和运行 Java 应用程序所需的所有工具。
|
||||
|
||||
下面,我将按字母顺序介绍五个我最喜欢的用于编写 Java 的开源 IDE 工具,以及如何配置它们的基本功能。
|
||||
|
||||
### BlueJ
|
||||
|
||||
[BlueJ][4] 为 Java 初学者提供了一个集成的教育性 Java 开发环境。它也可以使用 Java 开发工具包(JDK)开发小型软件。各种版本和操作系统的安装方式都可以在[这里][5]找到。
|
||||
|
||||
在笔记本电脑上安装 BlueJ IDE 后,启动一个新项目,点击<ruby>项目<rt>Project</rt></ruby>菜单中的<ruby>新项目<rt>New Project</rt></ruby>,然后从创建一个<ruby>新类<rt>New Class</rt></ruby>开始编写 Java 代码。生成的示例方法和骨架代码如下所示:
|
||||
|
||||
![BlueJ IDE screenshot][6]
|
||||
|
||||
BlueJ 不仅为学校的 Java 编程课的教学提供了一个交互式的图形用户界面(GUI),而且可以让开发人员在不编译源代码的情况下调用函数(即对象、方法、参数)。
|
||||
|
||||
### Eclipse
|
||||
|
||||
[Eclipse][7] 是桌面计算机上最著名的 Java IDE 之一,它支持 C/C++、JavaScript 和 PHP 等多种编程语言。它还允许开发者从 Eclipse 市场中添加无穷无尽的扩展,以获得更多的开发便利。[Eclipse 基金会][8]提供了一个名为 [Eclipse Che][9] 的 Web IDE,供 DevOps 团队在多个云平台上用托管的工作空间创建出一个敏捷软件开发环境。
|
||||
|
||||
[可以在这里下载][10];然后你可以创建一个新的项目或从本地目录导入一个现有的项目。在[本文][11]中找到更多 Java 开发技巧。
|
||||
|
||||
![Eclipse IDE screenshot][12]
|
||||
|
||||
### IntelliJ IDEA
|
||||
|
||||
[IntelliJ IDEA CE(社区版)][13]是 IntelliJ IDEA 的开源版本,为 Java、Groovy、Kotlin、Rust、Scala 等多种编程语言提供了 IDE。IntelliJ IDEA CE 在有经验的开发人员中也非常受欢迎,可以用它来对现有源码进行重构、代码检查、使用 JUnit 或 TestNG 构建测试用例,以及使用 Maven 或 Ant 构建代码。可在[这里][14]下载它。
|
||||
|
||||
IntelliJ IDEA CE 带有一些独特的功能;我特别喜欢它的 API 测试器。例如,如果你用 Java 框架实现了一个 REST API,IntelliJ IDEA CE 允许你通过 Swing GUI 设计器来测试 API 的功能。
|
||||
|
||||
![IntelliJ IDEA screenshot][15]
|
||||
|
||||
IntelliJ IDEA CE 是开源的,但其背后的公司也提供了一个商业的终极版。可以在[这里][16]找到社区版和终极版之间的更多差异。
|
||||
|
||||
### Netbeans IDE
|
||||
|
||||
[NetBeans IDE][17] 是一个 Java 的集成开发环境,它允许开发人员利用 HTML5、JavaScript 和 CSS 等支持的 Web 技术为独立、移动和网络架构制作模块化应用程序。NetBeans IDE 允许开发人员就如何高效管理项目、工具和数据设置多个视图,并帮助他们在新开发人员加入项目时使用 Git 集成进行软件协作开发。
|
||||
|
||||
从[这里][18]下载的二进制文件支持 Windows、macOS、Linux 等多个平台。在本地环境中安装了 IDE 工具后,新建项目向导可以帮助你创建一个新项目。例如,向导会生成骨架代码(有部分需要填写,如 `// TODO 代码应用逻辑在此`),然后你可以添加自己的应用代码。
|
||||
|
||||
### VSCodium
|
||||
|
||||
[VSCodium][19] 是一个轻量级、自由的源代码编辑器,允许开发者在 Windows、macOS、Linux 等各种操作系统平台上安装,是基于 [Visual Studio Code][20] 的开源替代品。其也是为支持包括 Java、C++、C#、PHP、Go、Python、.NET 在内的多种编程语言的丰富生态系统而设计开发的。Visual Studio Code 默认提供了调试、智能代码完成、语法高亮和代码重构功能,以提高开发的代码质量。
|
||||
|
||||
在其[资源库][21]中有很多下载项。当你运行 Visual Studio Code 时,你可以通过点击左侧活动栏中的“扩展”图标或按下 `Ctrl+Shift+X` 键来添加新的功能和主题。例如,当你在搜索框中输入 “quarkus” 时,就会出现 Visual Studio Code 的 Quarkus 工具,该扩展允许你[在 VS Code 中使用 Quarkus 编写 Java][22]:
|
||||
|
||||
![VSCodium IDE screenshot][23]
|
||||
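扩展也可以从命令行安装。下面是一个示意(`codium` 是 VSCodium 常见的可执行文件名,可能因安装方式而异;`redhat.java` 为 Java 语言支持扩展的 ID):

```
$ codium --install-extension redhat.java
```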
|
||||
### 总结
|
||||
|
||||
Java 作为最广泛使用的编程语言和环境之一,这五种只是 Java 开发者可以使用的各种开源 IDE 工具的一小部分。可能很难知道哪一个是正确的选择。和以往一样,这取决于你的具体需求和目标 —— 你想实现什么样的工作负载(Web、移动应用、消息传递、数据交易),以及你将使用 IDE 扩展功能部署什么样的运行时(本地、云、Kubernetes、无服务器)。虽然丰富的选择可能会让人不知所措,但这也意味着你可能可以找到一个适合你的特殊情况和偏好的选择。
|
||||
|
||||
你有喜欢的开源 Java IDE 吗?请在评论中分享吧。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/ide-java
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
|
||||
[2]: https://opensource.com/resources/java
|
||||
[3]: https://www.tiobe.com/tiobe-index/
|
||||
[4]: https://www.bluej.org/about.html
|
||||
[5]: https://www.bluej.org/versions.html
|
||||
[6]: https://opensource.com/sites/default/files/uploads/5_open_source_ide_tools_to_write_java_and_how_you_begin_it.png (BlueJ IDE screenshot)
|
||||
[7]: https://www.eclipse.org/ide/
|
||||
[8]: https://www.eclipse.org/
|
||||
[9]: https://opensource.com/article/19/10/cloud-ide-che
|
||||
[10]: https://www.eclipse.org/downloads/
|
||||
[11]: https://opensource.com/article/19/10/java-basics
|
||||
[12]: https://opensource.com/sites/default/files/uploads/os_ide_2.png (Eclipse IDE screenshot)
|
||||
[13]: https://www.jetbrains.com/idea/
|
||||
[14]: https://www.jetbrains.org/display/IJOS/Download
|
||||
[15]: https://opensource.com/sites/default/files/uploads/os_ide_3.png (IntelliJ IDEA screenshot)
|
||||
[16]: https://www.jetbrains.com/idea/features/editions_comparison_matrix.html
|
||||
[17]: https://netbeans.org/
|
||||
[18]: https://netbeans.org/downloads/8.2/rc/
|
||||
[19]: https://vscodium.com/
|
||||
[20]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
|
||||
[21]: https://github.com/VSCodium/vscodium#downloadinstall
|
||||
[22]: https://opensource.com/article/20/4/java-quarkus-vs-code
|
||||
[23]: https://opensource.com/sites/default/files/uploads/os_ide_5.png (VSCodium IDE screenshot)
|
@ -0,0 +1,244 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12554-1.html)
|
||||
[#]: subject: (Creating and debugging Linux dump files)
|
||||
[#]: via: (https://opensource.com/article/20/8/linux-dump)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
|
||||
在 Linux 上创建并调试转储文件
|
||||
======
|
||||
|
||||
> 了解如何处理转储文件将帮你找到应用中难以重现的 bug。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/26/234535rhnwdc783swgsbqw.jpg)
|
||||
|
||||
崩溃转储、内存转储、核心转储、系统转储……这些全都会产生同样的产物:一个包含了当应用崩溃时,在那个特定时刻应用的内存状态的文件。
|
||||
|
||||
这是一篇指导文章,你可以通过克隆示例的应用仓库来跟随学习:
|
||||
|
||||
```
|
||||
git clone https://github.com/hANSIc99/core_dump_example.git
|
||||
```
|
||||
|
||||
### 信号如何关联到转储
|
||||
|
||||
信号是操作系统和用户应用之间的进程间通讯。Linux 使用 [POSIX 标准][2]中定义的信号。在你的系统上,你可以在 `/usr/include/bits/signum-generic.h` 找到标准信号的定义。如果你想知道更多关于在你的应用程序中使用信号的信息,这有一个信息丰富的 [signal 手册页][3]。简单地说,Linux 基于预期的或意外的信号来触发进一步的活动。
|
||||
|
||||
当你退出一个正在运行的应用程序时,应用程序通常会收到 `SIGTERM` 信号。因为这种类型的退出信号是预期的,所以这个操作不会创建一个内存转储。
|
||||
|
||||
以下信号将导致创建一个转储文件(来源:[GNU C库][4]):
|
||||
|
||||
* `SIGFPE`:错误的算术操作
|
||||
* `SIGILL`:非法指令
|
||||
* `SIGSEGV`:对存储的无效访问
|
||||
* `SIGBUS`:总线错误
|
||||
* `SIGABRT`:程序检测到的错误,并通过调用 `abort()` 来报告
|
||||
* `SIGIOT`:这个信号在 Fedora 上已经过时,过去在 [PDP-11][5] 上用 `abort()` 时触发,现在映射到 `SIGABRT`
|
||||
|
||||
### 创建转储文件
|
||||
|
||||
导航到 `core_dump_example` 目录,运行 `make`,并使用 `-c1` 开关执行该示例二进制:
|
||||
|
||||
```
|
||||
./coredump -c1
|
||||
```
|
||||
|
||||
该应用将以状态 4 退出,带有如下错误:
|
||||
|
||||
![Dump written][6]
|
||||
|
||||
“Abgebrochen (Speicherabzug geschrieben)”(LCTT 译注:这是德语,应该是因为本文作者系统是德语环境)大致翻译为“已中止(核心转储已写入)”。
|
||||
|
||||
是否创建核心转储是由运行该进程的用户的资源限制决定的。你可以用 `ulimit` 命令修改资源限制。
|
||||
|
||||
检查当前创建核心转储的设置:
|
||||
|
||||
```
|
||||
ulimit -c
|
||||
```
|
||||
|
||||
如果它输出 `unlimited`,那么它使用的是(建议的)默认值。否则,用以下方法纠正限制:
|
||||
|
||||
```
|
||||
ulimit -c unlimited
|
||||
```
|
||||
|
||||
要禁用创建核心转储,可以设置其大小为 0:
|
||||
|
||||
```
|
||||
ulimit -c 0
|
||||
```
|
||||
|
||||
这个数字指定了核心转储文件的大小,单位是块。
|
||||
|
||||
### 什么是核心转储?
|
||||
|
||||
内核处理核心转储的方式定义在:
|
||||
|
||||
```
|
||||
/proc/sys/kernel/core_pattern
|
||||
```
|
||||
|
||||
我运行的是 Fedora 31,在我的系统上,该文件包含的内容是:
|
||||
|
||||
```
|
||||
/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
|
||||
```
|
||||
|
||||
这表明核心转储被转发到 `systemd-coredump` 工具。在不同的 Linux 发行版中,`core_pattern` 的内容会有很大的不同。当使用 `systemd-coredump` 时,转储文件被压缩保存在 `/var/lib/systemd/coredump` 下。你不需要直接接触这些文件,你可以使用 `coredumpctl`。比如说:
|
||||
|
||||
```
|
||||
coredumpctl list
|
||||
```
|
||||
|
||||
会显示系统中保存的所有可用的转储文件。
|
||||
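`coredumpctl` 还可以按程序名过滤,或查看单条记录的摘要信息,例如:

```
# 只列出 coredump 这个程序产生的转储
$ coredumpctl list coredump
# 查看最近一次转储的详细信息(信号、可执行文件路径等)
$ coredumpctl info
```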
|
||||
使用 `coredumpctl dump`,你可以从最后保存的转储文件中检索信息:
|
||||
|
||||
```
|
||||
[stephan@localhost core_dump_example]$ ./coredump
|
||||
Application started…
|
||||
|
||||
(…….)
|
||||
|
||||
Message: Process 4598 (coredump) of user 1000 dumped core.
|
||||
|
||||
Stack trace of thread 4598:
|
||||
#0 0x00007f4bbaf22625 __GI_raise (libc.so.6)
|
||||
#1 0x00007f4bbaf0b8d9 __GI_abort (libc.so.6)
|
||||
#2 0x00007f4bbaf664af __libc_message (libc.so.6)
|
||||
#3 0x00007f4bbaf6da9c malloc_printerr (libc.so.6)
|
||||
#4 0x00007f4bbaf6f49c _int_free (libc.so.6)
|
||||
#5 0x000000000040120e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
|
||||
#6 0x00000000004013b1 n/a (/home/stephan/Dokumente/core_dump_example/coredump)
|
||||
#7 0x00007f4bbaf0d1a3 __libc_start_main (libc.so.6)
|
||||
#8 0x000000000040113e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
|
||||
Refusing to dump core to tty (use shell redirection or specify --output).
|
||||
```
|
||||
|
||||
这表明该进程被 `SIGABRT` 停止。这个视图中的堆栈跟踪不是很详细,因为它不包括函数名。然而,使用 `coredumpctl debug`,你可以简单地用调试器(默认为 [GDB][8])打开转储文件。输入 `bt`(<ruby>回溯<rt>backtrace</rt></ruby>的缩写)可以得到更详细的视图:
|
||||
|
||||
```
|
||||
Core was generated by `./coredump -c1'.
|
||||
Program terminated with signal SIGABRT, Aborted.
|
||||
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
|
||||
50 return ret;
|
||||
(gdb) bt
|
||||
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
|
||||
#1 0x00007fc37a9aa8d9 in __GI_abort () at abort.c:79
|
||||
#2 0x00007fc37aa054af in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7fc37ab14f4b "%s\n") at ../sysdeps/posix/libc_fatal.c:181
|
||||
#3 0x00007fc37aa0ca9c in malloc_printerr (str=str@entry=0x7fc37ab130e0 "free(): invalid pointer") at malloc.c:5339
|
||||
#4 0x00007fc37aa0e49c in _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:4173
|
||||
#5 0x000000000040120e in freeSomething(void*) ()
|
||||
#6 0x0000000000401401 in main ()
|
||||
```
|
||||
|
||||
与后续帧相比,`main()` 和 `freeSomething()` 的内存地址相当低。由于共享对象被映射到虚拟地址空间末尾的区域,可以认为 `SIGABRT` 是由共享库中的调用引起的。共享对象的内存地址在多次调用之间并不是恒定不变的,所以当你看到多次调用之间的地址不同时,完全可以认为是共享对象。
|
||||
|
||||
堆栈跟踪显示,后续的调用源于 `malloc.c`,这说明内存的(取消)分配可能出了问题。
|
||||
|
||||
在源代码中,(即使没有任何 C++ 知识)你也可以看到,它试图释放一个指针,而这个指针并没有被内存管理函数返回。这导致了未定义的行为,并导致了 `SIGABRT`。
|
||||
|
||||
```
|
||||
void freeSomething(void *ptr){
|
||||
free(ptr);
|
||||
}
|
||||
int nTmp = 5;
|
||||
int *ptrNull = &nTmp;
|
||||
freeSomething(ptrNull);
|
||||
```
|
||||
|
||||
systemd 的这个 `coredump` 工具可以在 `/etc/systemd/coredump.conf` 中配置。可以在 `/etc/systemd/systemd-tmpfiles-clean.timer` 中配置轮换清理转储文件。
|
||||
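下面是一个 `/etc/systemd/coredump.conf` 的配置示意(字段含义请以 `coredump.conf(5)` 手册页为准,数值仅为示例):

```
[Coredump]
# external 表示把转储存放在 /var/lib/systemd/coredump 下
Storage=external
# 压缩保存转储文件
Compress=yes
# 单个进程允许的最大转储尺寸(示例值)
ProcessSizeMax=2G
# 全部转储文件占用磁盘空间的上限(示例值)
MaxUse=5G
```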
|
||||
你可以在其[手册页][10]中找到更多关于 `coredumpctl` 的信息。
|
||||
|
||||
### 用调试符号编译
|
||||
|
||||
打开 `Makefile` 并注释掉第 9 行的最后一部分。现在应该是这样的:
|
||||
|
||||
```
|
||||
CFLAGS =-Wall -Werror -std=c++11 -g
|
||||
```
|
||||
|
||||
`-g` 开关使编译器能够创建调试信息。启动应用程序,这次使用 `-c2` 开关。
|
||||
|
||||
```
|
||||
./coredump -c2
|
||||
```
|
||||
|
||||
你会得到一个浮点异常。在 GDB 中打开该转储文件:
|
||||
|
||||
```
|
||||
coredumpctl debug
|
||||
```
|
||||
|
||||
这一次,你会直接被指向源代码中导致错误的那一行:
|
||||
|
||||
```
|
||||
Reading symbols from /home/stephan/Dokumente/core_dump_example/coredump…
|
||||
[New LWP 6218]
|
||||
Core was generated by `./coredump -c2'.
|
||||
Program terminated with signal SIGFPE, Arithmetic exception.
|
||||
#0 0x0000000000401233 in zeroDivide () at main.cpp:29
|
||||
29 nRes = 5 / nDivider;
|
||||
(gdb)
|
||||
```
|
||||
|
||||
键入 `list` 以获得更好的源代码概览:
|
||||
|
||||
```
|
||||
(gdb) list
|
||||
24 int zeroDivide(){
|
||||
25 int nDivider = 5;
|
||||
26 int nRes = 0;
|
||||
27 while(nDivider > 0){
|
||||
28 nDivider--;
|
||||
29 nRes = 5 / nDivider;
|
||||
30 }
|
||||
31 return nRes;
|
||||
32 }
|
||||
```
|
||||
|
||||
使用命令 `info locals` 从应用程序失败的时间点检索局部变量的值:
|
||||
|
||||
```
|
||||
(gdb) info locals
|
||||
nDivider = 0
|
||||
nRes = 5
|
||||
```
|
||||
|
||||
结合源码,可以看出,你遇到的是零除错误:
|
||||
|
||||
```
|
||||
nRes = 5 / 0
|
||||
```
|
||||
|
||||
### 结论
|
||||
|
||||
了解如何处理转储文件将帮助你找到并修复应用程序中难以重现的随机错误。而如果不是你的应用程序,将核心转储转发给开发人员将帮助她或他找到并修复问题。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/linux-dump
|
||||
|
||||
作者:[Stephan Avenwedde][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/hansic99
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0 (Magnifying glass on code)
|
||||
[2]: https://en.wikipedia.org/wiki/POSIX
|
||||
[3]: https://man7.org/linux/man-pages/man7/signal.7.html
|
||||
[4]: https://www.gnu.org/software/libc/manual/html_node/Program-Error-Signals.html#Program-Error-Signals
|
||||
[5]: https://en.wikipedia.org/wiki/PDP-11
|
||||
[6]: https://opensource.com/sites/default/files/uploads/dump_written.png (Dump written)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://www.gnu.org/software/gdb/
|
||||
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
|
||||
[10]: https://man7.org/linux/man-pages/man1/coredumpctl.1.html
|
@ -0,0 +1,56 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12552-1.html)
|
||||
[#]: subject: (Microsoft uses AI to boost its reuse, recycling of server parts)
|
||||
[#]: via: (https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
微软利用 AI 提升服务器部件的重复使用和回收率
|
||||
======
|
||||
|
||||
> 准备好在提到数据中心设备时,听到更多的“循环”一词。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/25/234108f8yz3c3la8xw18mn.jpg)
|
||||
|
||||
微软正在将人工智能引入到对数百万台服务器进行分类的任务中,以确定哪些部件可以回收,在哪里回收。
|
||||
|
||||
新计划要求在微软全球各地的数据中心建立所谓的“<ruby>循环中心<rt>Circular Center</rt></ruby>”,在那里,人工智能算法将用于从退役的服务器或其他硬件中分拣零件,并找出哪些零件可以在园区内重新使用。
|
||||
|
||||
微软表示,它的数据中心有超过 300 万台服务器和相关硬件,一台服务器的平均寿命约为 5 年。另外,微软正在全球范围内扩张,所以其服务器数量应该会增加。
|
||||
|
||||
循环中心就是要快速整理库存,而不是让过度劳累的员工疲于奔命。微软计划到 2025 年将服务器部件的重复使用率提高 90%。微软总裁 Brad Smith 在宣布这一举措的一篇[博客][2]中写道:“利用机器学习,我们将对退役的服务器和硬件进行现场处理。我们会将那些可以被我们以及客户重复使用和再利用的部件进行分类,或者出售。”
|
||||
|
||||
Smith 指出,如今,关于废物的数量、质量和类型,以及废物的产生地和去向,都没有一致的数据。例如,关于建筑和拆除废物的数据并不一致,我们需要一个标准化的方法,以获得更好的透明度和更高的质量。
|
||||
|
||||
他写道:“如果没有更准确的数据,几乎不可能了解运营决策的影响,设定什么目标,如何评估进展,以及废物去向方法的行业标准。”
|
||||
|
||||
根据微软的说法,阿姆斯特丹数据中心的一个循环中心试点减少了停机时间,并增加了服务器和网络部件的可用性,供其自身再利用和供应商回购。它还降低了将服务器和硬件运输到处理设施的成本,从而降低了碳排放。
|
||||
|
||||
“<ruby>循环经济<rt>circular economy</rt></ruby>”一词正在科技界流行。它是基于服务器硬件的循环利用,将那些已经使用了几年但仍可用的设备重新投入到其他地方服务。ITRenew 是[我在几个月前介绍过][3]的一家二手超大规模服务器的转售商,它对这个词很感兴趣。
|
||||
|
||||
该公司表示,首批微软循环中心将建在新的主要数据中心园区或地区。它计划最终将这些中心添加到已经存在的园区中。
|
||||
|
||||
微软曾明确表示要在 2030 年之前实现“碳负排放”,而这只是其中几个项目之一。近日,微软宣布在其位于盐湖城的系统开发者实验室进行了一项测试,用一套 250kW 的氢燃料电池系统为一排服务器机架连续供电 48 小时,微软表示这是以前从未做过的事情。
|
||||
|
||||
微软首席基础设施工程师 Mark Monroe 在一篇[博客][4]中写道:“这是我们所知道的最大的以氢气运行的计算机备用电源系统,而且它的连续测试时间最长。”他说,近年来氢燃料电池的价格大幅下降,现在已经成为柴油发电机的可行替代品,但燃烧更清洁。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://blogs.microsoft.com/blog/2020/08/04/microsoft-direct-operations-products-and-packaging-to-be-zero-waste-by-2030/
|
||||
[3]: https://www.networkworld.com/article/3543810/for-sale-used-low-mileage-hyperscaler-servers.html
|
||||
[4]: https://news.microsoft.com/innovation-stories/hydrogen-datacenters/
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,121 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12545-1.html)
|
||||
[#]: subject: (An attempt to make a font look more handwritten)
|
||||
[#]: via: (https://jvns.ca/blog/2020/08/08/handwritten-font/)
|
||||
[#]: author: (Julia Evans https://jvns.ca/)
|
||||
|
||||
一次让字体看起来更像手写体的尝试
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/24/111019lzpc280kkvlfpv1p.jpg)
|
||||
|
||||
其实我对这个实验的结果并不是特别满意,但我还是想分享一下,因为摆弄字体是件非常简单和有趣的事情。而且有人问我怎么做,我告诉她我会写一篇博文来介绍一下 :)
|
||||
|
||||
### 背景:原本的手写体
|
||||
|
||||
先交代一些背景信息:我有一个我自己的手写字体,我已经在我的电子杂志中使用了好几年了。我用一个叫 [iFontMaker][1] 的令人愉快的应用程序制作了它。他们在网站上自诩为“你可以在 5 分钟内只用手指就能制作出你的手工字体”。根据我的经验,“5 分钟”的部分比较准确 —— 我可能花了更多的时间,比如 15 分钟。我对“只用手指”的说法持怀疑态度 —— 我用的是 Apple Pencil,它的精确度要好得多。但是,使用该应用程序制作你的笔迹的 TTF 字体是非常容易的,如果你碰巧已经有了 Apple Pencil 和 iPad,我认为这是一个有趣的方式,我只花了 7.99 美元。
|
||||
|
||||
下面是我的字体的样子。左边的“CONNECT”文字是我的实际笔迹,右边的段落是字体。其实有 2 种字体 —— 有一种是普通字体,一种是手写的“等宽”字体。(其实实际并不是等宽,我还没有想好如何在 iFontMaker 中制作一个实际的等宽字体)
|
||||
|
||||
![][2]
|
||||
|
||||
### 目标:在字体上做更多的字符变化
|
||||
|
||||
在上面的截图中,很明显可以看出这是一种字体,而不是实际的笔迹。当你有两个相同的字母相邻时,就最容易看出来,比如“HTTP”。
|
||||
|
||||
所以我想,使用一些 OpenType 的功能,以某种方式为这个字体引入更多的变化,比如也许两个 “T” 可以是不同的。不过我不知道该怎么做!
|
||||
|
||||
### 来自 Tristan Hume 的主意:使用 OpenType!
|
||||
|
||||
然后我在 5 月份的 !!Con 2020 上(所有的[演讲录音都在这里!][3])看到了 Tristan Hume 的这个演讲:关于使用 OpenType 通过特殊的字体将逗号插入到大的数字中。他的演讲和博文都很棒,所以这里有一堆链接 —— 下面现场演示也许是最快看到他的成果的方式。
|
||||
|
||||
* 一个现场演示: [Numderline 测试][4]
|
||||
* 博客文章:[将逗号插入到大的数字的各个位置:OpenType 冒险][5]
|
||||
* 谈话:[!!Con 2020 - 使用字体塑型,把逗号插入到大的数字的各个位置!][6]
|
||||
* GitHub 存储库: https://github.com/trishume/numderline/blob/master/patcher.py
|
||||
|
||||
### 基本思路:OpenType 允许你根据上下文替换字符
|
||||
|
||||
我一开始对 OpenType 到底是什么非常困惑。目前我仍然不甚了然,但我知道你可以编写极其简单的 OpenType 规则来改变字体的外观,而且你甚至不需要真正了解字体。
|
||||
|
||||
下面是一个规则示例:
|
||||
|
||||
```
|
||||
sub a' b by other_a;
|
||||
```
|
||||
|
||||
这里 `sub a' b by other_a;` 的意思是:如果一个 `a` 字形是在一个 `b` 之前,那么替换 `a` 为字形 `other_a`。
|
||||
|
||||
所以这意味着我可以让 `ab` 和 `ac` 在字体中出现不同的字形。这并不像手写体那样随机,但它确实引入了一点变化。
|
||||
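把这类规则包进一个特性(例如上下文替换特性 `calt`)里就可以直接应用了。下面是一个小示例,假设字体中已经准备好了 `other_a`、`other_t` 这两个备用字形:

```
feature calt {
    # 当 a 后面跟着 b 时,把 a 替换为备用字形 other_a
    sub a' b by other_a;
    # 当两个 t 相邻时,把第一个 t 替换为 other_t
    sub t' t by other_t;
} calt;
```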
|
||||
### OpenType 参考文档:真棒
|
||||
|
||||
我找到的最好的 OpenType 文档是这个 [OpenType™ 特性文件规范][7] 资料。里面有很多你可以做的很酷的事情的例子,比如用一个连字替换 “ffi”。
|
||||
|
||||
### 如何应用这些规则:fonttools
|
||||
|
||||
为字体添加新的 OpenType 规则是超容易的。有一个 Python 库叫 `fonttools`,这 5 行代码会把放在 `rules.fea` 中的 OpenType 规则列表应用到字体文件 `input.ttf` 中。
|
||||
|
||||
```
|
||||
from fontTools.ttLib import TTFont
|
||||
from fontTools.feaLib.builder import addOpenTypeFeatures
|
||||
|
||||
ft_font = TTFont('input.ttf')
|
||||
addOpenTypeFeatures(ft_font, 'rules.fea', tables=['GSUB'])
|
||||
ft_font.save('output.ttf')
|
||||
```
|
||||
|
||||
`fontTools` 还提供了几个名为 `ttx` 和 `fonttools` 的命令行工具。`ttx` 可以将 TTF 字体转换为 XML 文件,这对我很有用,因为我想重新命名我的字体中的一些字形,但我对字体一无所知。所以我只是将我的字体转换为 XML 文件,使用 `sed` 重命名字形,然后再次使用 `ttx` 将 XML 文件转换回 `ttf`。
|
||||
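这个“TTF → XML → 改名 → TTF”的流程大致如下(其中的字形名 `oldGlyph`、`newGlyph` 只是示意):

```
# 把字体转换为 XML(生成 input.ttx)
$ ttx input.ttf
# 用 sed 批量重命名字形
$ sed -i 's/oldGlyph/newGlyph/g' input.ttx
# 再把 XML 转换回 TTF
$ ttx -o output.ttf input.ttx
```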
|
||||
`fonttools merge` 可以让我把我的 3 个手写字体合并成 1 个,这样我就在 1 个文件中得到了我需要的所有字形。
|
||||
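合并命令本身很简单(默认的输出文件名请以 `fonttools merge --help` 为准):

```
$ fonttools merge font1.ttf font2.ttf font3.ttf
```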
|
||||
### 代码
|
||||
|
||||
我把我的极潦草的代码放在一个叫 [font-mixer][8] 的存储库里。它大概有 33 行代码,我认为它不言自明。(都在 `run.sh` 和 `combine.py` 中)
|
||||
|
||||
### 结果
|
||||
|
||||
下面是旧字体和新字体的小样。我不认为新字体的“感觉”更像手写体 —— 有更多的变化,但还是比不上实际的手写体文字(在下面)。
|
||||
|
||||
我觉得稍微有点不可思议,它明明还是一种字体,但它却要假装成不是字体:
|
||||
|
||||
![][9]
|
||||
|
||||
而这是实际手写的同样的文字的样本:
|
||||
|
||||
![][10]
|
||||
|
||||
如果我在制作另外 2 种手写字体的时候,把原来的字体混合在一起,再仔细一点,可能效果会更好。
|
||||
|
||||
### 添加 OpenType 规则这么容易,真酷!
|
||||
|
||||
这里最让我欣喜的是,添加 OpenType 规则来改变字体的工作方式是如此的容易,比如你可以很容易地做出一个“the”单词总是被“teh”代替的字体(让错别字一直留着!)。
|
||||
|
||||
不过我还是不知道如何做出更逼真的手写字体:)。我现在还在用旧的那个字体(没有额外的变化),我对它很满意。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2020/08/08/handwritten-font/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://jvns.ca/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://2ttf.com/
|
||||
[2]: https://jvns.ca/images/font-sample-connect.png
|
||||
[3]: http://bangbangcon.com/recordings.html
|
||||
[4]: https://thume.ca/numderline/
|
||||
[5]: https://blog.janestreet.com/commas-in-big-numbers-everywhere/
|
||||
[6]: https://www.youtube.com/watch?v=Biqm9ndNyC8
|
||||
[7]: https://adobe-type-tools.github.io/afdko/OpenTypeFeatureFileSpecification.html
|
||||
[8]: https://github.com/jvns/font-mixer/
|
||||
[9]: https://jvns.ca/images/font-mixer-comparison.png
|
||||
[10]: https://jvns.ca/images/handwriting-sample.jpeg
|
155
published/202008/20200810 Merging and sorting files on Linux.md
Normal file
@ -0,0 +1,155 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12539-1.html)
|
||||
[#]: subject: (Merging and sorting files on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3570508/merging-and-sorting-files-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
合并和排序 Linux 上的文件
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/22/102250i3943is48r34w4nz.jpg)
|
||||
|
||||
在 Linux 上合并和排序文本的方法有很多种,但如何去处理它取决于你试图做什么:你是只想将多个文件的内容放入一个文件中,还是以某种方式组织它,让它更易于使用。在本文中,我们将查看一些用于排序和合并文件内容的命令,并重点介绍结果有何不同。
|
||||
|
||||
### 使用 cat
|
||||
|
||||
如果你只想将一组文件放到单个文件中,那么 `cat` 命令是一个容易的选择。你所要做的就是输入 `cat`,然后按你希望它们在合并文件中的顺序在命令行中列出这些文件。将命令的输出重定向到要创建的文件。如果指定名称的文件已经存在,那么文件将被覆盖。例如:
|
||||
|
||||
```
|
||||
$ cat firstfile secondfile thirdfile > newfile
|
||||
```
|
||||
|
||||
如果要将一系列文件的内容添加到现有文件中,而不是覆盖它,只需将 `>` 变成 `>>`。
|
||||
|
||||
```
|
||||
$ cat firstfile secondfile thirdfile >> updated_file
|
||||
```
|
||||
|
||||
如果你要合并的文件遵循一些方便的命名约定,那么任务可能更简单。如果可以使用正则表达式指定所有文件名,那就不必列出所有文件。例如,如果文件全部以 `file` 结束,如上所示,你可以进行如下操作:
|
||||
|
||||
```
|
||||
$ cat *file > allfiles
|
||||
```
|
||||
|
||||
请注意,上面的命令将按字母数字顺序添加文件内容。在 Linux 上,一个名为 `filea` 的文件将排在名为 `fileA` 的文件的前面,但会在 `file7` 的后面。毕竟,当我们处理字母数字序列时,我们不仅需要考虑 `ABCDE`,还需要考虑 `0123456789aAbBcCdDeE`。你可以使用 `ls *file` 这样的命令来查看合并文件之前文件的顺序。
|
||||
|
||||
注意:首先确保你的命令包含合并文件中所需的所有文件,而不是其他文件,尤其是你使用 `*` 等通配符时。不要忘记,用于合并的文件仍将单独存在,在确认合并后,你可能想要删除这些文件。
|
||||
|
||||
### 按时间期限合并文件
|
||||
|
||||
如果要基于每个文件的时间期限而不是文件名来合并文件,请使用以下命令:
|
||||
|
||||
```
|
||||
$ for file in `ls -tr myfile.*`; do cat $file >> BigFile.$$; done
|
||||
```
|
||||
|
||||
使用 `-tr` 选项(`t` = 时间,`r` = 反向)将产生按照最早的在最前排列的文件列表。例如,如果你要保留某些活动的日志,并且希望按活动执行的顺序添加内容,则这非常有用。
|
||||
|
||||
上面命令中的 `$$` 表示运行命令时的进程 ID。并非一定要这样用,但它几乎可以杜绝无意中把内容追加到某个已有文件、而不是创建新文件的可能性。如果使用 `$$`,那么生成的文件可能如下所示:
|
||||
|
||||
```
|
||||
$ ls -l BigFile.*
|
||||
-rw-rw-r-- 1 justme justme 931725 Aug 6 12:36 BigFile.582914
|
||||
```
|
||||
|
||||
### 合并和排序文件
|
||||
|
||||
Linux 提供了一些有趣的方式来对合并之前或之后的文件内容进行排序。
|
||||
|
||||
#### 按字母对内容进行排序
|
||||
|
||||
如果要对合并的文件内容进行排序,那么可以使用以下命令对整体内容进行排序:
|
||||
|
||||
```
|
||||
$ cat myfile.1 myfile.2 myfile.3 | sort > newfile
|
||||
```
|
||||
|
||||
如果要按文件对内容进行分组,请使用以下命令对每个文件进行排序,然后再将它添加到新文件中:
|
||||
|
||||
```
|
||||
$ for file in `ls myfile.?`; do sort $file >> newfile; done
|
||||
```
|
||||
|
||||
#### 对文件进行数字排序
|
||||
|
||||
要对文件内容进行数字排序,请在 `sort` 中使用 `-n` 选项。仅当文件中的行以数字开头时,此选项才有用。请记住,按照默认顺序,`02` 将小于 `1`。当你要确保行以数字排序时,请使用 `-n` 选项。
|
||||
|
||||
```
|
||||
$ cat myfile.1 myfile.2 myfile.3 | sort -n > xyz
|
||||
```
|
||||
|
||||
如果文件中的行以 `2020-11-03` 或 `2020/11/03`(年月日格式)这样的日期格式开头,`-n` 选项还能让你按日期对内容进行排序。其他格式的日期排序将非常棘手,并且将需要更复杂的命令。
|
||||
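举个例子,如果每行以“月/日/年”(如 `11/03/2020`)开头,可以把 `/` 当作字段分隔符,按年、月、日三个键依次排序:

```
# -t 指定字段分隔符;-k 3,3n 表示按第 3 个字段(年)做数字排序,依此类推
$ sort -t '/' -k 3,3n -k 1,1n -k 2,2n myfile.1 myfile.2 > sorted_by_date
```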
|
||||
### 使用 paste
|
||||
|
||||
`paste` 命令允许你逐行连接文件内容。使用此命令时,合并文件的第一行将包含要合并的每个文件的第一行。以下是示例,其中我使用了大写字母以便于查看行的来源:
|
||||
|
||||
```
|
||||
$ cat file.a
|
||||
A one
|
||||
A two
|
||||
A three
|
||||
|
||||
$ paste file.a file.b file.c
|
||||
A one B one C one
|
||||
A two B two C two
|
||||
A three B three C thee
|
||||
B four C four
|
||||
C five
|
||||
```
|
||||
|
||||
将输出重定向到另一个文件来保存它:
|
||||
|
||||
```
|
||||
$ paste file.a file.b file.c > merged_content
|
||||
```
|
||||
|
||||
或者,你可以将每个文件的内容在同一行中合并,然后将文件粘贴在一起。这需要使用 `-s`(序列)选项。注意这次的输出如何显示每个文件的内容:
|
||||
|
||||
```
|
||||
$ paste -s file.a file.b file.c
|
||||
A one A two A three
|
||||
B one B two B three B four
|
||||
C one C two C thee C four C five
|
||||
```
|
||||
|
||||
### 使用 join
|
||||
|
||||
合并文件的另一个命令是 `join`。`join` 命令让你能基于一个共同字段合并多个文件的内容。例如,你可能有一个文件包含了一组同事的电话号码,而另一个文件包含了他们的电子邮件地址,并且两者均按个人姓名列出。你可以使用 `join` 创建一个包含电话和电子邮件地址的文件。
|
||||
|
||||
一个重要的限制是,这些文件中的行必须按相同的顺序排列,并且每个文件中都要包括用于连接的字段。
|
||||
|
||||
这是一个示例命令:
|
||||
|
||||
```
|
||||
$ join phone_numbers email_addresses
|
||||
Sandra 555-456-1234 bugfarm@gmail.com
|
||||
Pedro 555-540-5405
|
||||
John 555-333-1234 john_doe@gmail.com
|
||||
Nemo 555-123-4567 cutie@fish.com
|
||||
```
|
||||
|
||||
在本例中,即使缺少附加信息,第一个字段(名字)也必须存在于每个文件中,否则命令会因错误而失败。对内容进行排序有帮助,而且可能更容易管理,但只要顺序一致,就不需要这么做。
|
||||
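如果文件还没有按连接字段排好序,可以借助 bash 的进程替换先排序再连接:

```
# <(...) 是 bash 的进程替换,把排序结果当作临时文件交给 join
$ join <(sort phone_numbers) <(sort email_addresses)
```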
|
||||
### 总结
|
||||
|
||||
在 Linux 上,你有很多可以合并和排序存储在单独文件中的数据的方式。这些方法可以使原本繁琐的任务变得异常简单。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3570508/merging-and-sorting-files-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://www.facebook.com/NetworkWorld/
|
||||
[3]: https://www.linkedin.com/company/network-world
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12538-1.html)
|
||||
[#]: subject: (Photoflare: An Open Source Image Editor for Simple Editing Needs)
|
||||
[#]: via: (https://itsfoss.com/photoflare/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
@ -10,7 +10,7 @@
|
||||
Photoflare:满足简单编辑需求的开源图像编辑器
|
||||
======
|
||||
|
||||
_**简介:Photoflare 是 Linux 和 Windows 上的图像编辑器。它有一个免费且开源的社区版本。**_
|
||||
> Photoflare 是一款可用于 Linux 和 Windows 上的图像编辑器。它有一个免费而开源的社区版本。
|
||||
|
||||
在 Linux 上编辑图像时,GIMP 显然是首选。但是,如果你不需要高级编辑功能,GIMP 可能会让人不知所措。这是像 Photoflare 这样的应用立足的地方。
|
||||
|
||||
@ -18,9 +18,9 @@ _**简介:Photoflare 是 Linux 和 Windows 上的图像编辑器。它有一
|
||||
|
||||
![][1]
|
||||
|
||||
Photoflare 是一个在简单易用的界面提供基本的图像编辑功能的编辑器。
|
||||
Photoflare 是一个在简单易用的界面里提供了基本的图像编辑功能的编辑器。
|
||||
|
||||
它受流行的 Windows 应用 [PhotoFiltre][2] 的启发。这个程序不是仅仅克隆,它是从头开始用 C++ 编写的,并使用 Qt 框架作为界面。
|
||||
它受流行的 Windows 应用 [PhotoFiltre][2] 的启发。这个程序不是一个克隆品,它是用 C++ 从头开始编写的,并使用 Qt 框架作为界面。
|
||||
|
||||
它的功能包括裁剪、翻转/旋转、调整图像大小。你还可以使用诸如油漆刷、油漆桶、喷雾罐、模糊工具和橡皮擦之类的工具。魔术棒工具可让你选择图像的特定区域。
|
||||
|
||||
@ -43,18 +43,16 @@ Photoflare 是一个在简单易用的界面提供基本的图像编辑功能的
|
||||
* 使用画笔、油漆桶、喷涂、模糊工具和图像等工具编辑图像
|
||||
* 在图像上添加线条和文字
|
||||
* 更改图像的色调
|
||||
* 添加老照滤镜
|
||||
* 添加老照片滤镜
|
||||
* 批量调整大小、滤镜等
|
||||
|
||||
|
||||
|
||||
### 在 Linux 上安装 Photflare
|
||||
|
||||
![][5]
|
||||
|
||||
在 Photoflare 的网站上,你可以找到定价以及每月订阅的选项。但是,应用是开源的,它的[源码可在 GitHub 上找到][6]。
|
||||
在 Photoflare 的网站上,你可以找到定价以及每月订阅的选项。但是,该应用是开源的,它的[源码可在 GitHub 上找到][6]。
|
||||
|
||||
应用也是“免费”使用的。[定价/订购部分][7]用于项目的财务支持。你可以免费下载它,如果你喜欢该应用并且会继续使用,请考虑给它捐赠。
|
||||
应用也是“免费”使用的。[定价/订购部分][7]用于该项目的财务支持。你可以免费下载它,如果你喜欢该应用并且会继续使用,请考虑给它捐赠。
|
||||
|
||||
Photoflare 有[官方 PPA][8],适用于 Ubuntu 和基于 Ubuntu 的发行版。此 PPA 可用于 Ubuntu 18.04 和 20.04 版本。
|
||||
|
||||
@ -78,17 +76,17 @@ sudo apt remove photoflare
|
||||
sudo add-apt-repository -r ppa:photoflare/photoflare-stable
|
||||
```
|
||||
|
||||
**Arch Linux** 和 Manjaro 用户可以[从 AUR 获取][9]。
|
||||
Arch Linux 和 Manjaro 用户可以[从 AUR 获取][9]。
|
||||
|
||||
Fedora 没有现成的软件包,因此你需要获取源码:
|
||||
|
||||
[Photoflare source code][6]
|
||||
- [Photoflare 源代码][6]
|
||||
|
||||
### Photoflare 的经验
|
||||
|
||||
我发现它与 [Pinta][10] 有点相似,但功能更多。它是用于基本图像编辑的简单工具。批处理功能是加分。
|
||||
我发现它与 [Pinta][10] 有点相似,但功能更多。它是用于基本图像编辑的简单工具。批处理功能是加分项。
|
||||
|
||||
我注意到图像在打开编辑时看起来不清晰。我打开一张截图进行编辑,字体看起来很模糊。但是,保存图像并在[图像查看器][11]中打开后,没有显示此问题。
|
||||
我注意到图像在打开编辑时看起来不够清晰。我打开一张截图进行编辑,字体看起来很模糊。但是,保存图像并在[图像查看器][11]中打开后,没有显示此问题。
|
||||
|
||||
总之,如果你不需要专业级的图像编辑,它是一个不错的工具。
|
||||
|
||||
@ -101,7 +99,7 @@ via: https://itsfoss.com/photoflare/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,56 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12569-1.html)
|
||||
[#]: subject: (AI system analyzes code similarities, makes progress toward automated coding)
|
||||
[#]: via: (https://www.networkworld.com/article/3570389/ai-system-analyzes-code-similarities-makes-progress-toward-automated-coding.html)
|
||||
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
|
||||
|
||||
AI 系统向自动化编码迈进
|
||||
======
|
||||
|
||||
> 来自 Intel、MIT 和佐治亚理工学院的研究人员正在研究一个 AI 引擎,它可以分析代码的相似性,以确定代码的实际作用,为自动化软件编写奠定了基础。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/31/231333fklk447gw4w4b4vk.jpg)
|
||||
|
||||
随着人工智能(AI)的快速发展,我们是否会进入计算机智能到足以编写自己的代码并和人类一起完成工作?新的研究表明,我们可能正在接近这个里程碑。
|
||||
|
||||
来自 MIT 和佐治亚理工学院的研究人员与 Intel 合作开发了一个人工智能引擎,被称为机器推断代码相似性(MISIM),它旨在分析软件代码并确定它与其他代码的相似性。最有趣的是,该系统有学习代码的潜力,然后利用这种智能来改变软件的编写方式。最终,人们可以解释希望程序做什么,然后机器编程(MP)系统可以拿出一个已经编写完的应用。
|
||||
|
||||
Intel 首席科学家兼机器编程研究总监/创始人 Justin Gottschlich 在该公司的[新闻稿][2]中说:“当完全实现时,MP 能让每个人都能以任何最适合自己的方式 —— 无论是代码、自然语言还是其他东西 —— 来表达自己的意图以创建软件。这是一个大胆的目标,虽然还有很多工作要做,但 MISIM 是朝着这个目标迈出的坚实一步。”
|
||||
|
||||
### 它是如何工作的
|
||||
|
||||
Intel 解释说,神经网络“根据它们被设计执行的作业”给代码片段打出相似度分数。例如,两个代码样本可能看起来完全不同,但由于它们执行相同的功能,因此被评为相同。然后,该算法可以确定哪个代码片段更有效率。
|
||||
|
||||
例如,代码相似性系统的原始版本被用于抄袭检测。然而,有了 MISIM,该算法会查看代码块,并试图根据上下文确定这些代码段是否具有相似的特征,或者是否有相似的目标。然后,它可以提供性能方面的改进,例如说,总体效率的改进。
|
||||
|
||||
MISIM 的关键是创造者的意图,它标志着向基于意图的编程的进步,它可以使软件的设计基于非程序员创造者想要实现的目标。通过基于意图的编程,算法会借助于一个开源代码池,而不是依靠传统的、手工的方法,编译一系列类似于步骤的编程指令,逐行告诉计算机如何做某件事。
|
||||
|
||||
Intel 解释说:“MISIM 与现有代码相似性系统的核心区别在于其新颖的上下文感知语义结构 (CASS),其目的是将代码的实际作用提炼出来。与其他现有的方法不同,CASS 可以根据特定的上下文进行配置,使其能够捕捉到更高层次的代码描述信息。CASS 可以更具体地洞察代码的作用,而不是它是如何做的。”
|
||||
|
||||
这是在没有编译器(编程中的一个阶段,将人类可读代码转换为计算机程序)的情况下完成的。方便的是,可以执行部分片段,只是为了看看那段代码中会发生什么。另外,该系统摆脱了软件开发中一些比较繁琐的部分,比如逐行查找错误。更多细节可以在该小组的论文([PDF][3])中找到。
|
||||
|
||||
Intel 表示,该团队的 MISIM 系统比之前的代码相似性系统识别相似代码的准确率高 40 倍。
|
||||
|
||||
一位名为 Heres_your_sign 的 Redditor 在[对 MISIM 报道][4]的评论中有趣地指出,幸好计算机不写需求。这位 Redditor 认为,那是自找麻烦。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3570389/ai-system-analyzes-code-similarities-makes-progress-toward-automated-coding.html
|
||||
|
||||
作者:[Patrick Nelson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Patrick-Nelson/
|
||||
[b]: https://github.com/lujun9972
|
||||
[2]: https://newsroom.intel.com/news/intel-mit-georgia-tech-machine-programming-code-similarity-system/#gs.d8qd40
|
||||
[3]: https://arxiv.org/pdf/2006.05265.pdf
|
||||
[4]: https://www.reddit.com/r/technology/comments/i2dxed/this_ai_could_bring_us_computers_that_can_write/
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12540-1.html)
|
||||
[#]: subject: (Come test a new release of pipenv, the Python development tool)
|
||||
[#]: via: (https://fedoramagazine.org/come-test-a-new-release-of-pipenv-the-python-development-tool/)
|
||||
[#]: author: (torsava https://fedoramagazine.org/author/torsava/)
|
||||
@ -12,14 +12,13 @@
|
||||
|
||||
![][1]
|
||||
|
||||
**[Pipenv][2]** 是一个可帮助 Python 开发人员维护具有特定一组依赖关系的隔离虚拟环境,以实现可重复的开发和部署环境的工具。它类似于其他编程语言中的工具如 bundler、composer、npm、cargo、yarn 等。
|
||||
[pipenv][2] 是一个可帮助 Python 开发人员维护具有特定一组依赖关系的隔离虚拟环境,以实现可重新复制的开发和部署环境的工具。它类似于其他编程语言中的工具如 bundler、composer、npm、cargo、yarn 等。
|
||||
|
||||
|
||||
最近发布了新版本的 pipenv 2020.6.2。现在可以在 Fedora 33 和 Rawhide 中使用它。对于较旧的 Fedora,维护人员决定在 [COPR][3] 中打包,然后进行测试。因此,在稳定版 Fedora 中安装之前请先尝试一下。新版本没有带来任何新颖的功能,但是经过两年的开发,它解决了许多问题,并且在底层做了很多不同的事情。之前可以正常工作的应该可以继续工作,但是可能会略有不同。
|
||||
最近发布了新版本的 pipenv 2020.6.2。现在可以在 Fedora 33 和 Rawhide 中使用它。对于较旧的 Fedora,维护人员决定先打包到 [COPR][3] 中进行测试。所以在他们把它推送到稳定的 Fedora 版本之前,来试试吧。新版本没有带来任何新颖的功能,但是经过两年的开发,它解决了许多问题,并且在底层做了很多不同的事情。之前可以正常工作的应该可以继续工作,但是可能会略有不同。
|
||||
|
||||
### 如何获取
|
||||
|
||||
如果你已经在运行 Fedora 33 或 Rawhide,请运行 _$ sudo dnf upgrade pipenv_ 或者 _$ sudo dnf install pipenv_,你将获得新版本。
|
||||
如果你已经在运行 Fedora 33 或 Rawhide,请运行 `$ sudo dnf upgrade pipenv` 或者 `$ sudo dnf install pipenv`,你将获得新版本。
|
||||
|
||||
在 Fedora 31 或 Fedora 32 上,你需要使用 [copr 仓库][3],直到经过测试的包出现在官方仓库中为止。要启用仓库,请运行:
|
||||
|
||||
@ -27,7 +26,7 @@
|
||||
$ sudo dnf copr enable @python/pipenv
|
||||
```
|
||||
|
||||
然后将 pipenv 升级到新版本,运行:
|
||||
然后将 `pipenv` 升级到新版本,运行:
|
||||
|
||||
```
|
||||
$ sudo dnf upgrade pipenv
|
||||
@ -46,13 +45,13 @@ $ sudo dnf copr disable @python/pipenv
|
||||
$ sudo dnf distro-sync pipenv
|
||||
```
|
||||
|
||||
_COPR 不受 Fedora 基础架构的官方支持。使用软件包需要你自担风险。_
|
||||
*COPR 不受 Fedora 基础架构的官方支持。使用软件包需要你自担风险。*
|
||||
|
||||
### 如何使用
|
||||
|
||||
如果你有用旧版本 pipenv 管理的项目,你应该可以毫无问题地使用新版本。让我们知道是否有问题。
|
||||
如果你有用旧版本 `pipenv` 管理的项目,你应该可以毫无问题地使用新版本。如果有问题请让我们知道。
|
||||
|
||||
如果你还不熟悉 pipenv 或想开始一个新项目,请参考以下快速指南:
|
||||
如果你还不熟悉 `pipenv` 或想开始一个新项目,请参考以下快速指南:
|
||||
|
||||
创建一个工作目录:
|
||||
|
||||
@ -60,7 +59,7 @@ _COPR 不受 Fedora 基础架构的官方支持。使用软件包需要你自担
|
||||
$ mkdir new-project && cd new-project
|
||||
```
|
||||
|
||||
使用 Python 3 初始化 pipenv:
|
||||
使用 Python 3 初始化 `pipenv`:
|
||||
|
||||
```
|
||||
$ pipenv --three
|
||||
@ -72,13 +71,13 @@ $ pipenv --three
|
||||
$ pipenv install six
|
||||
```
|
||||
|
||||
生成 Pipfile.lock 文件:
|
||||
生成 `Pipfile.lock` 文件:
|
||||
|
||||
```
|
||||
$ pipenv lock
|
||||
```
|
||||
|
||||
现在,你可以将创建的 Pipfile 和 Pipfile.lock 文件提交到版本控制系统(例如 git)中,其他人可以在克隆的仓库中使用此命令来获得相同的环境:
|
||||
现在,你可以将创建的 `Pipfile` 和 `Pipfile.lock` 文件提交到版本控制系统(例如 git)中,其他人可以在克隆的仓库中使用此命令来获得相同的环境:
|
||||
|
||||
```
|
||||
$ pipenv install
|
||||
@ -88,7 +87,7 @@ $ pipenv install
|
||||
|
||||
### 如何报告问题
|
||||
|
||||
如果你使用新版本的 pipenv 遇到任何问题,请[在 Fedora 的 Bugzilla中 报告问题][5]。Fedora 官方仓库和 copr 仓库中 pipenv 软件包的维护者是相同的。请在报告中指出是新版本。
|
||||
如果你使用新版本的 `pipenv` 遇到任何问题,请[在 Fedora 的 Bugzilla中 报告问题][5]。Fedora 官方仓库和 copr 仓库中 `pipenv` 软件包的维护者是相同的人。请在报告中指出是新版本。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -97,7 +96,7 @@ via: https://fedoramagazine.org/come-test-a-new-release-of-pipenv-the-python-dev
|
||||
作者:[torsava][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
141
published/202008/20200817 Use GNU on Windows with MinGW.md
Normal file
@ -0,0 +1,141 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12549-1.html)
|
||||
[#]: subject: (Use GNU on Windows with MinGW)
|
||||
[#]: via: (https://opensource.com/article/20/8/gnu-windows-mingw)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
使用 Mingw 在 Windows 上使用 GNU
|
||||
======
|
||||
|
||||
> 在 Windows 上安装 GNU 编译器集合(gcc)和其他 GNU 组件来启用 GNU Autotools。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/25/085619rr331p13shpt6htp.jpg)
|
||||
|
||||
如果你是一名使用 Windows 的黑客,你不需要专有应用来编译代码。借助 [Minimalist GNU for Windows][2](MinGW)项目,你可以下载并安装 [GNU 编译器集合(GCC)][3]以及其它几个基本的 GNU 组件,以在 Windows 计算机上启用 [GNU Autotools][4]。
|
||||
|
||||
### 安装 MinGW
|
||||
|
||||
安装 MinGW 的最简单方法是通过 mingw-get,它是一个图形用户界面 (GUI) 应用,可帮助你选择要安装哪些组件,并让它们保持最新。要运行它,请从项目主页[下载 mingw-get-setup.exe][5]。像你安装其他 EXE 一样,在向导中单击完成安装。
|
||||
|
||||
![Installing mingw-get][6]
|
||||
|
||||
### 在 Windows 上安装 GCC
|
||||
|
||||
目前为止,你只安装了一个程序,或者更准确地说,一个称为 mingw-get 的专用的*包管理器*。启动 mingw-get 选择要在计算机上安装的 MinGW 项目应用。
|
||||
|
||||
首先,从应用菜单中选择 mingw-get 启动它。
|
||||
|
||||
![Installing GCC with MinGW][8]
|
||||
|
||||
要安装 GCC,请单击 GCC 和 G++ 包来标记要安装 GNU C、C++ 编译器。要完成此过程,请从 mingw-get 窗口左上角的**安装**菜单中选择**应用更改**。
|
||||
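如果你更喜欢命令行,也可以直接用 mingw-get 的命令行方式完成同样的安装(确切的包名请以 `mingw-get list` 的输出为准):

```
PS> mingw-get install gcc g++
```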
|
||||
安装 GCC 后,你可以使用完整路径在 [PowerShell][9] 中运行它:
|
||||
|
||||
```
|
||||
PS> C:\MinGW\bin\gcc.exe --version
|
||||
gcc.exe (MinGW.org GCC Build-x) x.y.z
|
||||
Copyright (C) 2019 Free Software Foundation, Inc.
|
||||
```
|
||||
|
||||
### 在 Windows 上运行 Bash
|
||||
|
||||
虽然它自称 “minimalist”(最小化),但 MinGW 还提供一个可选的 [Bourne shell][10] 命令行解释器,称为 MSYS(它代表<ruby>最小系统<rt>Minimal System</rt></ruby>)。它是微软的 `cmd.exe` 和 PowerShell 的替代方案,它默认是 Bash。除了是(自然而然的)最流行的 shell 之一外,Bash 在将开源应用移植到 Windows 平台时很有用,因为许多开源项目都假定了 [POSIX][11] 环境。
|
||||
|
||||
你可以在 mingw-get GUI 或 PowerShell 内安装 MSYS:
|
||||
|
||||
```
|
||||
PS> mingw-get install msys
|
||||
```
|
||||
|
||||
要尝试 Bash,请使用完整路径启动它:
|
||||
|
||||
```
|
||||
PS> C:\MinGW\msys/1.0/bin/bash.exe
|
||||
bash.exe-$ echo $0
|
||||
"C:\MinGW\msys/1.0/bin/bash.exe"
|
||||
```
|
||||
|
||||
### 在 Windows 上设置路径
|
||||
|
||||
你可能不希望为要使用的每个命令输入完整路径。将包含新 GNU 可执行文件的目录添加到 Windows 中的路径中。需要添加两个可执行文件的根目录:一个用于 MinGW(包括 GCC 及其相关工具链),另一个用于 MSYS(包括 Bash、GNU 和 [BSD][12] 项目中的许多常用工具)。
|
||||
|
||||
若要在 Windows 中修改环境,请单击应用菜单并输入 `env`。
|
||||
|
||||
![Edit your env][13]
|
||||
|
||||
这将打开“首选项”窗口。点击窗口底部附近的“环境变量”按钮。
|
||||
|
||||
在“环境变量”窗口中,双击底部面板中的“路径”选区。
|
||||
|
||||
在“编辑环境变量”窗口中,单击右侧的“新增”按钮。创建一个新条目 `C:\MinGW\msys\1.0\bin`,然后单击“确定”。以相同的方式创建第二个条目 `C:\MinGW\bin`,然后单击“确定”。
|
||||
|
||||
![Set your env][14]
|
||||
|
||||
在每个首选项窗口中接受这些更改。你可以重启计算机以确保所有应用都检测到新变量,或者只需重启 PowerShell 窗口。
|
||||
|
||||
从现在开始,你可以调用任何 MinGW 命令而不指定完整路径,因为完整路径位于 PowerShell 继承的 Windows 系统的 `%PATH%` 环境变量中。
|
||||
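可以快速验证一下,不带完整路径直接调用:

```
PS> gcc --version
PS> bash --version
```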
|
||||
### Hello world
|
||||
|
||||
你已经完成设置,因此可以对新的 MinGW 系统进行小测试。如果你是 [Vim][15] 用户,请启动它,然后输入下面的 “hello world” 代码:
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <iostream>
|
||||
|
||||
using namespace std;
|
||||
|
||||
int main() {
|
||||
cout << "Hello open source." << endl;
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
将文件保存为 `hello.cpp`,然后使用 GCC 的 C++ 组件编译文件:
|
||||
|
||||
```
|
||||
PS> gcc hello.cpp --output hello
|
||||
```
|
||||
|
||||
最后,运行它:
|
||||
|
||||
```
|
||||
PS> .\hello.exe
|
||||
Hello open source.
|
||||
PS>
|
||||
```
|
||||
|
||||
MinGW 的内容远不止我在这里所能介绍的。毕竟,MinGW 打开了一个完整的开源世界和定制代码的潜力,因此请充分利用它。对于更广阔的开源世界,你还可以[试试 Linux][16]。当所有的限制都被消除后,你会惊讶于可能的事情。但与此同时,请试试 MinGW,并享受 GNU 的自由。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/gnu-windows-mingw
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/more_windows.jpg?itok=hKk64RcZ (Windows)
|
||||
[2]: http://mingw.org
|
||||
[3]: https://gcc.gnu.org/
|
||||
[4]: https://opensource.com/article/19/7/introduction-gnu-autotools
|
||||
[5]: https://osdn.net/projects/mingw/releases/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/mingw-install.jpg (Installing mingw-get)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/mingw-packages.jpg (Installing GCC with MinGW)
|
||||
[9]: https://opensource.com/article/19/8/variables-powershell
|
||||
[10]: https://en.wikipedia.org/wiki/Bourne_shell
|
||||
[11]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[12]: https://opensource.com/article/19/3/netbsd-raspberry-pi
|
||||
[13]: https://opensource.com/sites/default/files/uploads/mingw-env.jpg (Edit your env)
|
||||
[14]: https://opensource.com/sites/default/files/uploads/mingw-env-set.jpg (Set your env)
|
||||
[15]: https://opensource.com/resources/what-vim
|
||||
[16]: https://opensource.com/article/19/7/ways-get-started-linux
|
@ -0,0 +1,53 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12546-1.html)
|
||||
[#]: subject: (IBM details next-gen POWER10 processor)
|
||||
[#]: via: (https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
IBM 披露了下一代 POWER10 处理器细节
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2020/08/power-10_06-100854570-large.jpg)
|
||||
|
||||
> 新的 CPU 针对企业混合云和 AI 推断进行了优化,它采用了为 PB 级内存集群开发的新技术。
|
||||
|
||||
IBM 上周一公布了最新的 POWER RISC CPU 系列,该系列针对企业混合云计算和人工智能 (AI)推理进行了优化,同时还进行了其他一些改进。
|
||||
|
||||
Power 是上世纪 90 年代最后一款 Unix 处理器,当时 Sun Microsystems、HP、SGI 和 IBM 都有竞争性的 Unix 系统,以及与之配合的 RISC 处理器。后来,Unix 让位给了 Linux,RISC 让位给了 x86,但 IBM 坚持了下来。
|
||||
|
||||
这是 IBM 的第一款 7 纳米处理器,IBM 宣称它将在与前代 POWER9 相同的功率范围内,将容量和处理器能效提升多达三倍。该处理器采用 15 核设计(实际上是 16 核,但其中一个没有使用),并允许采用单芯片或双芯片型号,因此 IBM 可以在同一外形尺寸中放入两个处理器。每个核心最多可以有 8 个线程,每块 CPU 最多支持 4TB 的内存。
|
||||
|
||||
更有趣的是一种名为 Memory Inception 的新内存集群技术。这种形式的集群允许系统将另一台物理服务器中的内存当作自己的内存来看待。因此,服务器不需要在每个机箱中放很多内存,而是可以在内存需求激增的时候,从邻居那里借到内存。或者,管理员可以在集群的中间设置一台拥有大量内存的服务器,并在其周围设置一些低内存服务器,这些服务器可以根据需要从大内存服务器上借用内存。
|
||||
|
||||
所有这些都是在 50 到 100 纳秒的延迟下完成的。IBM 的杰出工程师 William Starke 在宣布前的视频会议上说:“这已经成为行业的圣杯了。与其在每个机器里放很多内存,不如当我们对内存的需求激增时,我可以向邻居借。”
|
||||
|
||||
POWER10 使用的是一种叫做开放内存接口(OMI)的东西,因此服务器现在可以使用 DDR4,上市后可以升级到 DDR5,它还可以使用 GPU 中使用的 GDDR6 内存。理论上,POWER10 将具备 1TB/秒的内存带宽和 1TB/秒的 SMP 带宽。
|
||||
|
||||
与 POWER9 相比,POWER10 处理器每个核心的 AES 加密引擎数量增加了四倍。这实现了多项安全增强功能。首先,这意味着在不降低性能的情况下进行全内存加密,因此入侵者无法扫描内存内容。
|
||||
|
||||
其次,它可以为容器提供隔离的硬件和软件安全。这是为了解决更高密度的容器相关的行安全考虑。如果一个容器被入侵,POWER10 处理器的设计能够防止同一虚拟机中的其他容器受到同样的入侵影响。
|
||||
|
||||
最后,POWER10 提供了核心内的 AI 业务推断。它通过片上支持用于训练的 bfloat16 以及 AI 推断中常用的 INT8 和 INT4 实现。这将允许事务性负载在应用中添加 AI 推断。IBM 表示,POWER10 中的 AI 推断是 POWER9 的 20 倍。
|
||||
|
||||
公告中没有提到的是对操作系统的支持。POWER 运行 IBM 的 Unix 分支 AIX,以及 Linux。这并不太令人惊讶,因为这个消息是在 Hot Chips 上发布的,Hot Chips 是每年在斯坦福大学举行的年度半导体会议。Hot Chips 关注的是最新的芯片进展,所以软件通常被排除在外。
|
||||
|
||||
IBM 一般会在发布前一年左右公布新的 POWER 处理器,所以有足够的时间进行 AIX 的更新。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.facebook.com/NetworkWorld/
|
||||
[2]: https://www.linkedin.com/company/network-world
|
@ -1,34 +1,34 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Convert Text Files between Unix and DOS (Windows) Formats)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12558-1.html)
|
||||
[#]: subject: (How to Convert Text Files between Unix and DOS Windows Formats)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-convert-text-files-between-unix-and-dos-windows-formats/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How to Convert Text Files between Unix and DOS (Windows) Formats
|
||||
如何将文本文件在 Unix 和 DOS(Windows)格式之间转换
|
||||
======
|
||||
|
||||
As a Linux administrator, you may have noticed some requests from developers to convert files from DOS format to Unix format, and vice versa.
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/27/235550klfnz34lzpnchf7g.jpg)
|
||||
|
||||
This is because these files were created on a Windows system and copied to a Linux system for some reason.
|
||||
作为一名 Linux 管理员,你可能已经注意到了一些开发者请求将文件从 DOS 格式转换为 Unix 格式,反之亦然。
|
||||
|
||||
It’s harmless, but some applications on the Linux system may not understand these new line of characters, so you need to convert them before using it.
|
||||
这是因为这些文件是在 Windows 系统上创建的,并由于某种原因被复制到 Linux 系统上。
|
||||
|
||||
DOS text files comes with carriage return (CR or \r) and line feed (LF or \n) pairs as their newline characters, whereas Unix text files only have line feed as their newline character.
|
||||
这本身没什么问题,但 Linux 系统上的一些应用可能不能理解这些新的换行符,所以在使用之前,你需要转换它们。
|
||||
|
||||
There are many ways you can convert a DOS text file to a Unix format.
|
||||
DOS 文本文件带有回车(`CR` 或 `\r`)和换行(`LF` 或 `\n`)一对字符作为它们的换行符,而 Unix 文本只有换行(`LF`)符。
|
||||
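除了下文的 `od`,`file` 命令通常也能直接报告出换行符的类型:

```
# 带有 CRLF 换行符的文件会被明确标注出来
$ file windows.txt
windows.txt: ASCII text, with CRLF line terminators
```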
|
||||
But I recommend using a special utility called **dos2unix** / **unix2dos** to convert text files between DOS and Unix formats.
|
||||
有很多方法可以将 DOS 文本文件转换为 Unix 格式。
|
||||
|
||||
* **dos2unix:** To convert a text files from the DOS format to the Unix format.
|
||||
* **unix2dos:** To convert a text files from the Unix format to the DOS format.
|
||||
* **tr, awk and [sed Command][1]:** These can be used for the same purpose
|
||||
但我推荐使用一个名为 `dos2unix` / `unix2dos` 的特殊工具将文本在 DOS 和 Unix 格式之间转换。
|
||||
|
||||
* `dos2unix`:将文本文件从 DOS 格式转换为 Unix 格式。
|
||||
* `unix2dos`:将文本文件从 Unix 格式转换为 DOS 格式。
|
||||
* `tr`、`awk` 和 [sed 命令][1]:这些可以用于相同的目的。
|
||||
|
||||
|
||||
You can easily identify whether the file is DOS format or Unix format using the od (octal dump) command as shown below.
|
||||
使用 `od`(<ruby>八进制转储<rt>octal dump</rt></ruby>)命令可以很容易地识别文件是 DOS 格式还是 Unix 格式,如下图所示:
|
||||
|
||||
```
|
||||
# od -bc windows.txt
|
||||
@ -55,9 +55,9 @@ n L i n u x \r \n
|
||||
0000231
|
||||
```
|
||||
|
||||
The above output clearly shows that this is a DOS format file because it contains the escape sequence **`\r\n`**.
|
||||
上面的输出清楚地表明这是一个 DOS 格式的文件,因为它包含了转义序列 `\r\n`。
|
||||
|
||||
At the same time, when you print the file output on your terminal you will get the output below.
|
||||
同时,当你在终端上打印文件输出时,你会得到下面的输出:
|
||||
|
||||
```
|
||||
# cat windows.txt
|
||||
@ -67,40 +67,40 @@ Super computers are running on UNIX
|
||||
Anything can be done on Linux
|
||||
```
|
||||
|
||||
### How to Install dos2unix on Linux
|
||||
### 如何在 Linux 上安装 dos2unix?
|
||||
|
||||
dos2unix can be easily installed from the distribution official repository.
|
||||
`dos2unix` 可以很容易地从发行版的官方仓库中安装。
|
||||
|
||||
For RHEL/CentOS 6/7 systems, use the **[yum command][2]** to install dos2unix.
|
||||
对于 RHEL/CentOS 6/7 系统,使用 [yum 命令][2] 安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo yum install -y dos2unix
|
||||
```
|
||||
|
||||
For RHEL/CentOS 8 and Fedora systems, use the **[dnf command][3]** to install dos2unix.
|
||||
对于 RHEL/CentOS 8 和 Fedora 系统,使用 [dnf 命令][3] 安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo dnf install -y dos2unix
|
||||
```
|
||||
|
||||
For Debian based systems, use the **[apt command][4]** or **[apt-get command][5]** to install dos2unix.
|
||||
对于基于 Debian 的系统,使用 [apt 命令][4] 或 [apt-get 命令][5] 来安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install dos2unix
|
||||
```
|
||||
|
||||
For openSUSE systems, use the **[zypper command][6]** to install dos2unix.
|
||||
对于 openSUSE 系统,使用 [zypper命令][6] 安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo zypper install -y dos2unix
|
||||
```
|
||||
|
||||
### 1) How to Convert DOS file to UNIX format
|
||||
### 1)如何将 DOS 文件转换为 UNIX 格式?
|
||||
|
||||
The following command converts the “windows.txt” file from DOS to Unix format.
|
||||
以下命令将 `windows.txt` 文件从 DOS 转换为 Unix 格式。
|
||||
|
||||
The modification of this file is to remove the “\r” from each line of the file.
|
||||
对该文件的修改是删除文件每行的 `\r`。
|
||||
|
||||
```
|
||||
# dos2unix windows.txt
|
||||
@ -132,70 +132,70 @@ i n u x \n
|
||||
0000225
|
||||
```
|
||||
|
||||
The above command will overwrite the original file.
|
||||
上面的命令将覆盖原始文件。
|
||||
|
||||
Use the following command if you want to keep the original file. This will save the converted output as a new file.
|
||||
如果你想保留原始文件,请使用以下命令。这将把转换后的输出保存为一个新文件。
|
||||
|
||||
```
|
||||
# dos2unix -n windows.txt unix.txt
|
||||
dos2unix: converting file windows.txt to file unix.txt in Unix format …
|
||||
```
|
||||
|
||||
### 1a) How to Convert DOS file to UNIX format Using tr Command
|
||||
#### 1a)如何使用 tr 命令将 DOS 文件转换为 UNIX 格式。
|
||||
|
||||
As discussed at the beginning of the article, you can use the tr command to convert the DOS file to Unix format as shown below.
|
||||
正如文章开头所讨论的,你可以如下所示使用 `tr` 命令将 DOS 文件转换为 Unix 格式。
|
||||
|
||||
```
|
||||
Syntax: tr -d '\r' < source_file > output_file
|
||||
```
|
||||
|
||||
The below tr command converts the “windows.txt” DOS file to Unix format file “unix.txt”.
|
||||
下面的 `tr` 命令将 DOS 格式的文件 `windows.txt` 转换为 Unix 格式文件 `unix.txt`。
|
||||
|
||||
```
|
||||
# tr -d '\r' < windows.txt >unix.txt
|
||||
```
|
||||
|
||||
**Make a note:** You can’t use the tr command to convert a file from Unix format to Windows (DOS).
|
||||
注意:不能使用 `tr` 命令将文件从 Unix 格式转换为 Windows(DOS)。
|
||||
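文章开头提到的 `sed` 也能完成这两个方向的转换,常见写法如下(GNU sed 支持 `\r` 转义):

```
# DOS → Unix:删除每行行尾的回车符
$ sed 's/\r$//' windows.txt > unix.txt
# Unix → DOS:在每行行尾补上回车符
$ sed 's/$/\r/' unix.txt > windows.txt
```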
|
||||
### 1b) How to Convert DOS file to UNIX format Using awk Command
|
||||
#### 1b)如何使用 awk 命令将 DOS 文件转换为 UNIX 格式。
|
||||
|
||||
Use the following awk command format to convert a DOS file to a Unix format.
|
||||
使用以下 `awk` 命令格式将 DOS 文件转换为 Unix 格式。
|
||||
|
||||
```
|
||||
Syntax: awk '{ sub("\r$", ""); print }' source_file.txt > output_file.txt
|
||||
```
|
||||
|
||||
The below awk command converts the “windows.txt” DOS file to Unix format file “unix.txt”.
|
||||
以下 `awk` 命令将 DOS 文件 `windows.txt` 转换为 Unix 格式文件 `unix.txt`。
|
||||
|
||||
```
|
||||
# awk '{ sub("\r$", ""); print }' windows.txt > unix.txt
|
||||
```
|
||||
|
||||
### 2) How to Convert UNIX file to DOS format
|
||||
### 2)如何将 UNIX 文件转换为 DOS 格式?
|
||||
|
||||
When you convert a file from UNIX to DOS format, it will add a carriage return (CR or \r) in each of the line.
|
||||
当你把一个文件从 UNIX 转换为 DOS 格式时,它会在每一行中添加一个回车(`CR` 或 `\r`)。
|
||||
|
||||
```
|
||||
# unix2dos unix.txt
|
||||
unix2dos: converting file unix.txt to DOS format …
|
||||
```
|
||||
|
||||
This command will keep the original file.
|
||||
该命令将保留原始文件。
|
||||
|
||||
```
|
||||
# unix2dos -n unix.txt windows.txt
|
||||
unix2dos: converting file unix.txt to file windows.txt in DOS format …
|
||||
```
|
||||
|
||||
### 2a) How to Convert UNIX file to DOS format Using awk Command
|
||||
#### 2a)如何使用 awk 命令将 UNIX 文件转换为 DOS 格式?
|
||||
|
||||
Use the following awk command format to convert UNIX file to DOS format.
|
||||
使用以下 `awk` 命令格式将 UNIX 文件转换为 DOS 格式。
|
||||
|
||||
```
|
||||
Syntax: awk 'sub("$", "\r")' source_file.txt > output_file.txt
|
||||
```
|
||||
|
||||
The below awk command converts the “unix.txt” file to the DOS format file “windows.txt”.
|
||||
下面的 `awk` 命令将 `unix.txt` 文件转换为 DOS 格式文件 `windows.txt`。
|
||||
|
||||
```
|
||||
# awk 'sub("$", "\r")' unix.txt > windows.txt
```

@ -207,8 +207,8 @@
via: https://www.2daygeek.com/how-to-convert-text-files-between-unix-and-dos-win
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
58
published/202008/20200821 Being open to open values.md
Normal file
58
published/202008/20200821 Being open to open values.md
Normal file
@ -0,0 +1,58 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12561-1.html)
|
||||
[#]: subject: (Being open to open values)
|
||||
[#]: via: (https://opensource.com/open-organization/20/8/being-open-to-open-values)
|
||||
[#]: author: (Heidi Hess von Ludewig https://opensource.com/users/heidi-hess-von-ludewig)
|
||||
|
||||
对开放的价值观持开放态度
|
||||
======
|
||||
|
||||
> 开放管理可能会让人感到恐惧。一位经理人解释了为什么值得冒这个风险。
|
||||
|
||||
![Open Lego CAD][1]
|
||||
|
||||
在本期的“[用开放的价值观管理][2]”系列中,我和美国一家全国性保险公司的定价总监、人事经理 Braxton 聊了聊。
|
||||
|
||||
2018 年 6 月,Braxton 联系到了开放组织社区的红帽人员。他想了解更多关于他*和*他的团队如何使用开放的价值观,以不同的方式工作。我们很乐意提供帮助。于是我帮助 Braxton 和他的团队组织了一个关于[开放组织原则][3]的研讨会,并在之后还保持着联系,这样我就可以了解他在变得更加开放的过程中的风险。
|
||||
|
||||
最近我们采访了 Braxton,并和他一起坐下来听了事情的进展。[产业/组织心理学家和员工参与度专家][4] Tracy Guiliani 和 [Bryan Behrenshausen][5] 一起加入了采访。我们的谈话范围很广,探讨了了解开源价值观后的感受,如何利用它们来改变组织,以及它们如何帮助 Braxton 和他的团队更好地工作和提高参与度。
|
||||
|
||||
与 Braxton 合作是一次异常有意义的经历。它让我们直接见证了一个人如何将开放组织社区驱动的研讨会材料融入动态变化,并使他、他的团队和他的组织受益。开放组织大使*一直*在寻求帮助人们获得关于开放价值的见解和知识,使他们能够理解文化变革和[自己组织内的转型][6]。
|
||||
|
||||
他和他的团队正在以对他们有效的方式执行他们独特的开放价值观,并且让团队实现的利益超过了提议变革在时间和精力上的投入。
|
||||
|
||||
Braxton 对开放组织原则的*解释*和使组织更加开放的策略,让我们深受启发。
|
||||
|
||||
Braxton 承认,他的更开放的目标并不包括“制造另一个红帽”。相反,他和他的团队是在以对他们有效的方式,以及让团队实现的利益超过提议的变革所带来的时间和精力投入,来执行他们独特的开放价值观。
|
||||
|
||||
在我们采访的第一部分,你还会听到 Braxton 描述:
|
||||
|
||||
1. 在了解了透明性、协作性、适应性、社区性和包容性这五种开放式组织价值观之后,“开放式管理”对他意味着什么?
|
||||
2. 他的一些开放式管理做法。
|
||||
3. 他如何在他的团队中倡导开放文化,如何在后来者中鼓励开源价值观,以及他所体验到的好处。
|
||||
4. 当人们试图改造自己的组织时,对开源价值观最大的误解是什么?
|
||||
|
||||
- [收听对 Braxton 的采访](https://opensource.com/sites/default/files/images/open-org/braxton_1.ogg)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/20/8/being-open-to-open-values
|
||||
|
||||
作者:[Heidi Hess von Ludewig][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/heidi-hess-von-ludewig
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open-lego.tiff_.png?itok=mQglOhW_ (Open Lego CAD)
|
||||
[2]: https://opensource.com/open-organization/managing-with-open-values
|
||||
[3]: https://github.com/open-organization/open-org-definition
|
||||
[4]: https://opensource.com/open-organization/20/5/commitment-engagement-org-psychology
|
||||
[5]: https://opensource.com/users/bbehrens
|
||||
[6]: https://opensource.com/open-organization/18/4/rethinking-ownership-across-organization
|
@ -0,0 +1,212 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12560-1.html)
|
||||
[#]: subject: (11 ways to list and sort files on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3572590/11-ways-to-list-and-sort-files-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
把 Linux 上的文件列表和排序玩出花来
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/28/213742y8cxnbnjpopzd5j0.jpg)
|
||||
|
||||
> Linux 命令可以提供文件的详细信息,也可以自定义显示的文件列表,甚至可以深入到文件系统的目录中,只要你愿意看。
|
||||
|
||||
在 Linux 系统上,有许多方法可以列出文件并显示它们的信息。这篇文章回顾了一些提供文件细节的命令,并提供了自定义文件列表的选项,以满足你的需求。
|
||||
|
||||
大多数命令都会列出单个目录中的文件,而其他命令则可以深入到文件系统的目录中,只要你愿意看。
|
||||
|
||||
当然,最主要的文件列表命令是 `ls`。然而,这个命令有大量的选项,可以只查找和列出你想看的文件。另外,还有 `find` 可以帮助你进行非常具体的文件搜索。
|
||||
|
||||
### 按名称列出文件
|
||||
|
||||
最简单的方法是使用 `ls` 命令按名称列出文件。毕竟,按名称(字母数字顺序)列出文件是默认的。你可以选择 `ls`(无细节)或 `ls -l`(大量细节)来决定你看到什么。
|
||||
|
||||
```
|
||||
$ ls | head -6
|
||||
8pgs.pdf
|
||||
Aesthetics_Thank_You.pdf
|
||||
alien.pdf
|
||||
Annual_Meeting_Agenda-20190602.pdf
|
||||
bigfile.bz2
|
||||
bin
|
||||
$ ls -l | head -6
|
||||
-rw-rw-r-- 1 shs shs 10886 Mar 22 2019 8pgs.pdf
|
||||
-rw-rw-r-- 1 shs shs 284003 May 11 2019 Aesthetics_Thank_You.pdf
|
||||
-rw-rw-r-- 1 shs shs 38282 Jan 24 2019 alien.pdf
|
||||
-rw-rw-r-- 1 shs shs 97358 May 19 2019 Annual_Meeting_20190602.pdf
|
||||
-rw-rw-r-- 1 shs shs 18115234 Apr 16 17:36 bigfile.bz2
|
||||
drwxrwxr-x 4 shs shs 8052736 Jul 10 13:17 bin
|
||||
```
|
||||
|
||||
如果你想一次查看一屏的列表,可以将 `ls` 的输出用管道送到 `more` 命令中。
|
||||
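例如:

```
$ ls -l | more
```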
|
||||
### 按相反的名字顺序排列文件
|
||||
|
||||
要按名称反转文件列表,请添加 `-r`(<ruby>反转<rt>Reverse</rt></ruby>)选项。这就像把正常的列表倒过来一样。
|
||||
|
||||
```
|
||||
$ ls -r
|
||||
$ ls -lr
|
||||
```
|
||||
|
||||
### 按文件扩展名列出文件
|
||||
|
||||
`ls` 命令不会按内容分析文件类型,它只会处理文件名。不过,有一个命令选项可以按扩展名列出文件。如果你添加了 `-X` (<ruby>扩展名<rt>eXtension</rt></ruby>)选项,`ls` 将在每个扩展名类别中按名称对文件进行排序。例如,它将首先列出没有扩展名的文件(按字母数字顺序),然后是扩展名为 `.1`、`.bz2`、`.c` 等的文件。
|
||||
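下面是一个演示 `-X` 排序效果的简单示意(文件名仅为假设的示例):

```
$ ls -X
notes  README  bigfile.bz2  hello.c  alien.pdf  zoomap.pdf
```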
|
||||
### 只列出目录
|
||||
|
||||
默认情况下,`ls` 命令将同时显示文件和目录。如果你想只列出目录,可以使用 `-d`(<ruby>目录<rt>Directory</rt></ruby>)选项配合 `*/` 通配符:`*/` 只匹配目录,而 `-d` 让 `ls` 列出目录本身而不是其内容。你会得到一个像这样的列表:
|
||||
|
||||
```
|
||||
$ ls -d */
|
||||
1/ backups/ modules/ projects/ templates/
|
||||
2/ html/ patches/ public/ videos/
|
||||
bin/ new/ private/ save/
|
||||
```
|
||||
|
||||
### 按大小排列文件
|
||||
|
||||
如果你想按大小顺序列出文件,请添加 `-S`(<ruby>大小<rt>Size</rt></ruby>)选项。但请注意,这实际上不会显示文件的大小(以及其他文件的细节),除非你还添加 `-l`(<ruby>长列表<rt>Long listing</rt></ruby>)选项。当按大小列出文件时,一般来说,看到命令在按你的要求做事情是很有帮助的。注意,默认情况下是先显示最大的文件。添加 `-r` 选项可以反过来(即 `ls -lSr`)。
|
||||
|
||||
```
|
||||
$ ls -lS
|
||||
total 959492
|
||||
-rw-rw-r-- 1 shs shs 357679381 Sep 19 2019 sav-linux-free-9.tgz
|
||||
-rw-rw-r-- 1 shs shs 103270400 Apr 16 17:38 bigfile
|
||||
-rw-rw-r-- 1 shs shs 79117862 Oct 5 2019 Nessus-8.7.1-ubuntu1110_amd64.deb
|
||||
```
|
||||
|
||||
### 按属主列出文件
|
||||
|
||||
如果你想按属主列出文件(例如,在一个共享目录中),你可以把 `ls` 命令的输出传给 `sort`,并通过添加 `-k3` 来按第三个字段排序,从而挑出属主一栏。
|
||||
|
||||
```
|
||||
$ ls -l | sort -k3 | more
|
||||
total 56
|
||||
-rw-rw-r-- 1 dory shs 0 Aug 23 12:27 tasklist
|
||||
drwx------ 2 gdm gdm 4096 Aug 21 17:12 tracker-extract-files.121
|
||||
srwxr-xr-x 1 root root 0 Aug 21 17:12 ntf_listenerc0c6b8b4567
|
||||
drwxr-xr-x 2 root root 4096 Aug 21 17:12 hsperfdata_root
|
||||
^
```
|
||||
|
||||
事实上,你可以用这种方式对任何字段进行排序(例如,年份)。只是要注意,如果你要对一个数字字段进行排序,则要加上一个 `n`,如 `-k5n`,否则你将按字母数字顺序进行排序。这种排序技术对于文件内容的排序也很有用,而不仅仅是用于列出文件。
|
||||
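例如,要按第五个字段(文件大小)进行数字排序,可以这样做:

```
$ ls -l | sort -k5n
```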
|
||||
### 按年份排列文件
|
||||
|
||||
使用 `-t`(<ruby>修改时间<rt>Time modified</rt></ruby>)选项按年份顺序列出文件 —— 它们的新旧程度。添加 `-r` 选项,让最近更新的文件在列表中最后显示。我使用这个别名来显示我最近更新的文件列表。
|
||||
|
||||
```
|
||||
$ alias recent='ls -ltr | tail -8'
|
||||
```
|
||||
|
||||
请注意,文件的更改时间和修改时间是不同的。`-c`(<ruby>更改时间<rt>time Changed</rt></ruby>)和 `-t`(修改时间)选项的结果并不总是相同。如果你改变了一个文件的权限,而没有改变其他内容,`-c` 会把这个文件放在 `ls` 输出的顶部,而 `-t` 则不会。如果你想知道其中的区别,可以看看 `stat` 命令的输出。
|
||||
|
||||
```
|
||||
$ stat ckacct
|
||||
File: ckacct
|
||||
Size: 200 Blocks: 8 IO Block: 4096 regular file
|
||||
Device: 801h/2049d Inode: 829041 Links: 1
|
||||
Access: (0750/-rwxr-x---) Uid: ( 1000/ shs) Gid: ( 1000/ shs)
|
||||
Access: 2020-08-20 16:10:11.063015008 -0400
|
||||
Modify: 2020-08-17 07:26:34.579922297 -0400 <== content changes
|
||||
Change: 2020-08-24 09:36:51.699775940 -0400 <== content or permissions changes
|
||||
Birth: -
|
||||
```
|
||||
|
||||
### 按组别列出文件
|
||||
|
||||
要按关联的组别对文件进行排序,你可以将一个长列表的输出传给 `sort` 命令,并告诉它在第 4 列进行排序。
|
||||
|
||||
```
|
||||
$ ls -l | sort -k4
|
||||
```
|
||||
|
||||
### 按访问日期列出文件
|
||||
|
||||
要按访问日期(最近访问的日期在前)列出文件,使用 `-ltu` 选项。`u` 强制“按访问日期”排列顺序。
|
||||
|
||||
```
|
||||
$ ls -ltu
|
||||
total 959500
|
||||
-rwxr-x--- 1 shs shs 200 Aug 24 09:42 ckacct <== most recently used
|
||||
-rw-rw-r-- 1 shs shs 1335 Aug 23 17:45 lte
|
||||
```
|
||||
|
||||
### 单行列出多个文件
|
||||
|
||||
有时,精简的文件列表更适合手头的任务。`ls` 命令甚至有这方面的选项。为了在尽可能少的行上列出文件,你可以使用 `--format=comma` 来用逗号分隔文件名,就像这个命令一样:
|
||||
|
||||
```
|
||||
$ ls --format=comma
|
||||
1, 10, 11, 12, 124, 13, 14, 15, 16pgs-landscape.pdf, 16pgs.pdf, 17, 18, 19,
|
||||
192.168.0.4, 2, 20, 2018-12-23_OoS_2.pdf, 2018-12-23_OoS.pdf, 20190512_OoS.pdf,
|
||||
'2019_HOHO_application working.pdf' …
|
||||
```
|
||||
|
||||
喜欢用空格?使用 `--format=across` 代替。
|
||||
|
||||
```
|
||||
$ ls --format=across z*
|
||||
z zip zipfiles zipfiles1.bat zipfiles2.bat
|
||||
zipfiles3.bat zipfiles4.bat zipfiles.bat zoom_amd64.deb zoomap.pdf
|
||||
zoom-mtg
|
||||
```
|
||||
|
||||
### 增加搜索的深度
|
||||
|
||||
虽然 `ls` 一般只列出单个目录中的文件,但你可以选择使用 `-R` 选项(<ruby>递归<rt>Recursively</rt></ruby>)地列出文件,深入到整个目录的深处。
|
||||
|
||||
```
|
||||
$ ls -R zzzzz | grep -v "^$"
|
||||
zzzzz:
|
||||
zzzz
|
||||
zzzzz/zzzz:
|
||||
zzz
|
||||
zzzzz/zzzz/zzz:
|
||||
zz
|
||||
zzzzz/zzzz/zzz/zz:
|
||||
z
|
||||
zzzzz/zzzz/zzz/zz/z:
|
||||
sleeping
|
||||
```
|
||||
|
||||
另外,你也可以使用 `find` 命令,对深度进行限制或不限制。在这个命令中,我们指示 `find` 命令只在三个层次的目录中查找:
|
||||
|
||||
```
|
||||
$ find zzzzz -maxdepth 3
|
||||
zzzzz
|
||||
zzzzz/zzzz
|
||||
zzzzz/zzzz/zzz
|
||||
zzzzz/zzzz/zzz/zz
|
||||
```
|
||||
|
||||
### 选择 ls 还是 find
|
||||
|
||||
当你需要列出符合具体要求的文件时,`find` 命令可能是比 `ls` 更好的工具。
|
||||
|
||||
与 `ls` 不同的是,`find` 命令会尽可能地深入查找,除非你限制它。它还有许多其他选项和一个 `-exec` 子命令,允许在找到你要找的文件后采取一些特定的行动。
|
||||
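例如,用 `-exec` 对找到的每个文件执行一条命令(这里的文件名模式仅为示意):

```
$ find . -maxdepth 2 -name "*.bz2" -exec ls -l {} \;
```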
|
||||
### 总结
|
||||
|
||||
`ls` 命令有很多用于列出文件的选项。了解一下它们。你可能会发现一些你会喜欢的选项。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3572590/11-ways-to-list-and-sort-files-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.facebook.com/NetworkWorld/
|
||||
[2]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,161 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12564-1.html)
|
||||
[#]: subject: (OnionShare: An Open-Source Tool to Share Files Securely Over Tor Network)
|
||||
[#]: via: (https://itsfoss.com/onionshare/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
OnionShare:一个安全共享文件的开源工具
|
||||
======
|
||||
|
||||
> OnionShare 是一个自由开源工具,它利用 Tor 网络安全和匿名地共享文件。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/30/103623ty2r6sz03y32o99o.jpg)
|
||||
|
||||
已经有很多在线服务可以安全地共享文件,但它们可能并不是完全匿名的。
|
||||
|
||||
此外,你必须依靠一个集中式服务来共享文件,如果服务决定像 [Firefox Send][1] 那样关闭,那你不能真正依靠它来一直安全地共享文件。
|
||||
|
||||
考虑到这些,OnionShare 是一个让人惊叹的开源工具,它让你使用 [Tor Onion 服务][2]来共享文件。它应该是所有云端文件共享服务的一个很好的替代品。
|
||||
|
||||
让我们来看看它提供了什么以及它是如何工作的。
|
||||
|
||||
![][3]
|
||||
|
||||
### OnionShare: 通过 Tor 匿名分享文件
|
||||
|
||||
[OnionShare][4] 是一款有趣的开源工具,可用于 Linux、Windows 和 macOS。
|
||||
|
||||
它可以让你安全地将文件直接从你的电脑分享给接收者,而不会在这个过程中暴露你的身份。你不必注册任何帐户,它也不依赖于任何集中式存储服务。
|
||||
|
||||
它基本上是在 Tor 网络上的点对点服务。接收者只需要有一个 [Tor 浏览器][5]就可以下载/上传文件到你的电脑上。如果你好奇的话,我也建议你去看看我们的 [Tor 指南][6]来探索更多关于它的内容。
|
||||
|
||||
让我们来看看它的功能。
|
||||
|
||||
### OnionShare 的功能
|
||||
|
||||
对于一个只想要安全和匿名的普通用户来说,它不需要调整。不过,如果你有需要,它也有一些高级选项。
|
||||
|
||||
* 跨平台支持(Windows、macOS 和 Linux)。
|
||||
* 发送文件
|
||||
* 接收文件
|
||||
* 命令行选项
|
||||
* 发布洋葱站点
|
||||
* 能够使用桥接(如果你的 Tor 连接不起作用)
|
||||
* 能够使用持久 URL 进行共享(高级用户)。
|
||||
* 隐身模式(更安全)
|
||||
|
||||
你可以通过 GitHub 上的[官方用户指南][7]来了解更多关于它们的信息。
|
||||
|
||||
### 在 Linux 上安装 OnionShare
|
||||
|
||||
你应该可以在你的软件中心找到 OnionShare 并安装它。如果没有,你可以在 Ubuntu 发行版上使用下面的命令添加 PPA:
|
||||
|
||||
```
|
||||
sudo add-apt-repository ppa:micahflee/ppa
|
||||
sudo apt update
|
||||
sudo apt install -y onionshare
|
||||
```
|
||||
|
||||
如果你想把它安装在其他 Linux 发行版上,你可以访问[官方网站][4]获取 Fedora 上的安装说明以及构建说明。
|
||||
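以 Fedora 为例,通常可以直接用 `dnf` 从官方仓库安装(具体请以官网说明为准):

```
sudo dnf install -y onionshare
```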
|
||||
- [下载 OnionShare][4]
|
||||
|
||||
### OnionShare 如何工作?
|
||||
|
||||
当你安装好后,一切都很明了且易于使用。但是,如果你想开始,让我告诉你它是如何工作的。
|
||||
|
||||
启动后,它会加载并连接到 Tor 网络。
|
||||
|
||||
#### 共享文件
|
||||
|
||||
![][8]
|
||||
|
||||
你只需要在电脑上添加你要分享的文件,然后点击 “**Start sharing**”。
|
||||
|
||||
完成后,右下角的状态应该是 “**Sharing**”,然后会生成一个 **OnionShare 地址**(自动复制到剪贴板),如下图所示。
|
||||
|
||||
![][9]
|
||||
|
||||
现在接收方需要的是 OnionShare 的地址,它看上去是这样的。
|
||||
|
||||
```
|
||||
http://onionshare:xyz@jumbo2127k6nekzqpff2p2zcxcsrygbnxbitsgnjcfa6v47wvyd.onion
|
||||
```
|
||||
|
||||
接着 Tor 浏览器开始下载文件。
|
||||
|
||||
值得注意的是,下载完成后(文件传输完成),文件共享就会停止。到时候也会通知你。
|
||||
|
||||
所以,如果你要再次分享或与他人分享,你必须重新分享,并将新的 OnionShare 地址发送给接收者。
|
||||
|
||||
#### 允许接收文件
|
||||
|
||||
如果你想生成一个 URL,让别人直接上传文件到你的电脑上(要注意你与谁分享),你可以在启动 OnionShare 后点击 **Receive Files** 标签即可。
|
||||
|
||||
![][10]
|
||||
|
||||
你只需要点击 “**Start Receive Mode**” 按钮就可以开始了。接下来,你会得到一个 OnionShare 地址(就像共享文件时一样)。
|
||||
|
||||
接收者必须使用 Tor 浏览器访问它并开始上传文件。它应该像下面这样:
|
||||
|
||||
![][11]
|
||||
|
||||
虽然当有人上传文件到你的电脑上时,你会收到文件传输的通知,但完成后,你需要手动停止接收模式。
|
||||
|
||||
#### 下载/上传文件
|
||||
|
||||
考虑到你已经安装了 Tor 浏览器,你只需要在 URL 地址中输入 OnionShare 的地址,确认登录(按 OK 键),它看上去像这样。
|
||||
|
||||
![][12]
|
||||
|
||||
同样,当你得到一个上传文件的地址时,它看上去是这样的。
|
||||
|
||||
![][13]
|
||||
|
||||
#### 发布洋葱站点
|
||||
|
||||
如果你想的话,你可以直接添加文件来托管一个静态的洋葱网站。当然,正因为是点对点的连接,所以在它从你的电脑上传输每个文件时,加载速度会非常慢。
|
||||
|
||||
![][14]
|
||||
|
||||
我试着用[免费模板][15]测试了一下,效果很好(但很慢)。所以,这可能取决于你的网络连接。
|
||||
|
||||
### 总结
|
||||
|
||||
除了上面提到的功能,如果需要的话,你还可以使用命令行进行一些高级的调整。
|
||||
|
||||
OnionShare 的确是一款令人印象深刻的开源工具,它可以让你轻松地匿名分享文件,而不需要任何特殊的调整。
|
||||
|
||||
你尝试过 OnionShare 吗?你知道有类似的软件么?请在下面的评论中告诉我!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/onionshare/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/firefox-send/
|
||||
[2]: https://community.torproject.org/onion-services/
|
||||
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-screenshot.jpg?resize=800%2C629&ssl=1
|
||||
[4]: https://onionshare.org/
|
||||
[5]: https://itsfoss.com/install-tar-browser-linux/
|
||||
[6]: https://itsfoss.com/tor-guide/
|
||||
[7]: https://github.com/micahflee/onionshare/wiki
|
||||
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-share.png?resize=800%2C604&ssl=1
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-file-shared.jpg?resize=800%2C532&ssl=1
|
||||
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-receive-files.jpg?resize=800%2C655&ssl=1
|
||||
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-receive-mode.jpg?resize=800%2C529&ssl=1
|
||||
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-download.jpg?resize=800%2C499&ssl=1
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-upload.jpg?resize=800%2C542&ssl=1
|
||||
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-onion-site.jpg?resize=800%2C366&ssl=1
|
||||
[15]: https://www.styleshout.com/free-templates/kards/
|
@ -0,0 +1,228 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12567-1.html)
|
||||
[#]: subject: (Glances – A Versatile System Monitoring Tool for Linux Systems)
|
||||
[#]: via: (https://itsfoss.com/glances/)
|
||||
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)
|
||||
|
||||
Glances:多功能 Linux 系统监控工具
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/30/212820tgdi7iou6dg6qqq2.jpg)
|
||||
|
||||
Linux 上最常用的[命令行进程监控工具][1]是 `top` 和它那色彩斑斓、功能丰富的表弟 [htop][2]。
|
||||
|
||||
要[监控 Linux 上的温度][3],可以使用 [lm-sensors][4]。同样,还有很多实用工具可以监控其他实时指标,如磁盘 I/O、网络统计等。
|
||||
|
||||
[Glances][5] 是一个系统监控工具,它把这些都联系在一起,并提供了更多的功能。我最喜欢的是,你可以在远程 Linux 服务器上运行 Glances,在本地系统上监控该服务器的系统资源,也可以通过 Web 浏览器监控。
|
||||
|
||||
下面是它的外观。下面截图中的终端已经[用 Pywal 工具美化过,可以根据壁纸自动改变颜色][6]。
|
||||
|
||||
![][7]
|
||||
|
||||
你也可以将它集成到像 [Grafana][8] 这样的工具中,在一个直观的仪表盘中监控统计数据。
|
||||
|
||||
它是用 Python 编写的,这意味着它的绝大多数功能都可以在大多数平台上使用。
|
||||
|
||||
### Glances 的功能
|
||||
|
||||
![Glances Data In Grafana Dashboard][9]
|
||||
|
||||
让我们快速浏览一下 Glances 提供的主要功能:
|
||||
|
||||
* 可以监控系统上 15 种以上的指标(包括 Docker 容器)。
|
||||
* 灵活的使用模式:单机模式、客户端-服务器模式、通过 SSH 和 Web 模式。
|
||||
* 可用于集成的各种 REST API 和 XML-RPC API。
|
||||
* 支持将数据轻松导出到不同的服务和数据库。
|
||||
* 高度的可配置性和适应不同的需求。
|
||||
* 非常全面的文档。
|
||||
|
||||
### 在 Ubuntu 和其他 Linux 发行版上安装 Glances
|
||||
|
||||
Glances 在许多 Linux 发行版的官方软件库中都有。这意味着你可以使用你的发行版的软件包管理器来轻松安装它。
|
||||
|
||||
在基于 Debian/Ubuntu 的发行版上,你可以使用以下命令:
|
||||
|
||||
```
|
||||
sudo apt install glances
|
||||
```
|
||||
|
||||
你也可以使用 snap 包安装最新的 Glances:
|
||||
|
||||
```
|
||||
sudo snap install glances
|
||||
```
|
||||
|
||||
由于 Glances 是基于 Python 的,你也可以使用 PIP 在大多数 Linux 发行版上安装它。先[安装 PIP][10],然后用它来安装 Glances:
|
||||
|
||||
```
|
||||
sudo pip3 install glances
|
||||
```
|
||||
|
||||
如果没有别的办法,你还可以使用 Glances 开发者提供的自动安装脚本。虽然我们不建议直接在你的系统上随便运行脚本,但这完全取决于你自己:
|
||||
|
||||
```
|
||||
curl -L https://bit.ly/glances | /bin/bash
|
||||
```
|
||||
|
||||
你可以从他们的[文档][11]中查看其他安装 Glances 的方法,甚至你还可以把它作为一个 Docker 容器来安装。
|
||||
|
||||
### 使用 Glances 监控本地系统上的 Linux 系统资源(独立模式)
|
||||
|
||||
你可以通过在终端上运行这个命令,轻松启动 Glances 来监控你的本地机器:
|
||||
|
||||
```
|
||||
glances
|
||||
```
|
||||
|
||||
你可以立即观察到,它将很多不同的信息整合在一个屏幕上。我喜欢它在顶部显示电脑的公共和私人 IP:
|
||||
|
||||
![][12]
|
||||
|
||||
Glances 也是交互式的,这意味着你可以在它运行时使用命令与它互动。你可以按 `s` 将传感器显示在屏幕上;按 `k` 将 TCP 连接列表显示在屏幕上;按 `1` 将 CPU 统计扩展到显示单个线程。
|
||||
|
||||
你也可以使用方向键在进程列表中移动,并按不同的指标对表格进行排序。
|
||||
|
||||
你可以通过各种命令行选项来启动 Glances。此外,它还有很多交互式命令。你可以在他们的[丰富的文档][13]中找到完整的列表。
|
||||
|
||||
按 `Ctrl+C` 键退出 Glances。
|
||||
|
||||
### 使用 Glances 监控远程 Linux 系统(客户端-服务器模式)
|
||||
|
||||
要监控远程计算机,你可以在客户端-服务器模式下使用 Glances。你需要在两个系统上都安装 Glances。
|
||||
|
||||
在远程 Linux 系统上,使用 `-s` 选项在服务器模式下启动 Glances:
|
||||
|
||||
```
|
||||
glances -s
|
||||
```
|
||||
|
||||
在客户端系统中,使用下面的命令在客户端模式下启动 Glances 并连接到服务器:
|
||||
|
||||
```
|
||||
glances -c server_ip_address
|
||||
```
|
||||
|
||||
你也可以通过 SSH 进入任何一台电脑,然后启动 Glances,它可以完美地工作。更多关于客户端-服务器模式的信息请看[这里][14]。
|
||||
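例如,直接通过 SSH 在远程主机上运行 Glances(`-t` 用于分配终端,以便显示交互界面):

```
ssh -t me@server glances
```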
|
||||
### 使用 Glances 在 Web 浏览器中监控 Linux 系统资源(Web 模式)
|
||||
|
||||
Glances 也可以在 Web 模式下运行。这意味着你可以使用 Web 浏览器来访问 Glances。与之前的客户端-服务器模式不同,你不需要在客户端系统上安装 Glances。
|
||||
|
||||
要在 Web 模式下启动 Glances,请使用 `-w` 选项:
|
||||
|
||||
```
|
||||
glances -w
|
||||
```
|
||||
|
||||
请注意,即使在 Linux 服务器上,它也可能显示 “Glances Web User Interface started on http://0.0.0.0:61208”,而实际上它使用的是服务器的 IP 地址。
|
||||
|
||||
最主要的是它使用的是 61208 端口号,你可以用它来通过网络浏览器访问 Glances。只要在服务器的 IP 地址后面输入端口号,比如 <http://123.123.123.123:61208>。
|
||||
|
||||
你也可以在本地系统中使用 <http://0.0.0.0:61208/> 或 <https://localhost:61208/> 访问。
|
||||
|
||||
![][15]
|
||||
|
||||
Web 模式也模仿终端的样子。网页版是根据响应式设计原则打造的,即使在手机上也很好看。
|
||||
|
||||
你可能想用密码来保护 Web 模式,这样只有授权的人才能使用它。默认的用户名是 `glances`。
|
||||
|
||||
```
|
||||
root@localhost:~# glances -w --password
|
||||
Define the Glances webserver password (glances username):
|
||||
Password (confirm):
|
||||
Do you want to save the password? [Yes/No]: n
|
||||
Glances Web User Interface started on http://0.0.0.0:61208/
|
||||
```
|
||||
|
||||
你可以在[快速入门指南][16]中找到关于配置密码的更多信息。
|
||||
|
||||
### 导出 Glances 数据到不同的服务
|
||||
|
||||
使用 Glances 最大的优势之一就是开箱即用,它支持将数据导出到各种数据库、服务,并无缝集成到各种数据管道中。
|
||||
|
||||
你可以在监控的同时用这个命令导出到 CSV:
|
||||
|
||||
```
|
||||
glances --export csv --export-csv-file /tmp/glances.csv
|
||||
```
|
||||
|
||||
`/tmp/glances.csv` 是文件的位置。数据以时间序列的形式整齐地填入。
|
||||
|
||||
![][17]
|
||||
|
||||
你也可以导出到其它大型应用程序,如 [Prometheus][18],以启用条件触发器和通知。
|
||||
|
||||
它可以直接插入到消息服务(如 RabbitMQ、MQTT)、流媒体平台(如 Kafka),并将时间序列数据导出到数据库(如 InfluxDB),并使用 Grafana 进行可视化。
|
||||
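例如,假设本地已经运行着 InfluxDB(并且安装了相应的 Python 依赖),导出命令大致如下:

```
glances --export influxdb
```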
|
||||
你可以在[这里][19]查看服务和导出选项的整个列表。
|
||||
|
||||
### 使用 REST API 将 Glances 与其他服务进行整合
|
||||
|
||||
这是整个栈中我最喜欢的功能。Glances 不仅可以将各种指标汇集在一起,还可以通过 API 将它们暴露出来。
|
||||
|
||||
这个简单而强大的功能使得为任何特定的用例构建自定义应用程序、服务和中间件应用程序变得非常容易。
|
||||
|
||||
当你在 Web 模式下启动 Glances 时,REST API 服务器会自动启动。要在 API 服务器模式下启动它,你可以使用以下命令:
|
||||
|
||||
```
|
||||
glances -w --disable-webui
|
||||
```
|
||||
|
||||
[REST API][20] 的文档很全面,其响应也很容易与 Web 应用集成。这使得使用类似 [Node-RED][21] 这样的工具可以很容易地构建一个统一的仪表盘来监控多个服务器。
|
||||
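例如,可以用 `curl` 直接查询 CPU 指标(这里假设使用的是 Glances 3.x 的 `/api/3/` 路径前缀,旧版本可能不同):

```
curl http://localhost:61208/api/3/cpu
```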
|
||||
![][22]
|
||||
|
||||
Glances 也提供了一个 XML-RPC 服务器,你可以在[这里][23]查看文档。
|
||||
|
||||
### 关于 Glances 的结束语
|
||||
|
||||
Glances 使用 [psutil][24] Python 库来访问不同的系统统计数据。早在 2017 年,我就曾使用相同的库构建了一个简单的 API 服务器来检索 CPU 的使用情况。我能够使用 Node-RED 构建的仪表盘监控一个集群中的所有树莓派。
|
||||
|
||||
Glances 可以为我节省一些时间,同时提供更多的功能,可惜我当时并不知道它。
|
||||
|
||||
在写这篇文章的时候,我确实尝试着在我的树莓派上安装 Glances,可惜所有的安装方法都出现了一些错误,失败了。当我成功后,我会更新文章,或者可能再写一篇文章,介绍在树莓派上安装的步骤。
|
||||
|
||||
我希望 Glances 能提供一种可以直接替代 `top` 或 `htop` 的方式,让我们期待在即将到来的版本中能够实现。
|
||||
|
||||
我希望这能给你提供大量关于 Glances 的信息。你们使用什么系统监控工具呢,请在评论中告诉我。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/glances/
|
||||
|
||||
作者:[Chinmay][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/chinmay/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/linux-system-monitoring-tools/
|
||||
[2]: https://hisham.hm/htop/
|
||||
[3]: https://itsfoss.com/monitor-cpu-gpu-temp-linux/
|
||||
[4]: https://github.com/lm-sensors/lm-sensors
|
||||
[5]: https://nicolargo.github.io/glances/
|
||||
[6]: https://itsfoss.com/pywal/
|
||||
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/glances-linux.png?resize=800%2C510&ssl=1
|
||||
[8]: https://grafana.com/
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/glances-data-in-grafana-dashboard.jpg?resize=800%2C472&ssl=1
|
||||
[10]: https://itsfoss.com/install-pip-ubuntu/
|
||||
[11]: https://github.com/nicolargo/glances/blob/master/README.rst#installation
|
||||
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-11-54-18.png?resize=800%2C503&ssl=1
|
||||
[13]: https://glances.readthedocs.io/en/latest/cmds.html
|
||||
[14]: https://glances.readthedocs.io/en/latest/quickstart.html#central-client
|
||||
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-16-49-11.png?resize=800%2C471&ssl=1
|
||||
[16]: https://glances.readthedocs.io/en/stable/quickstart.html
|
||||
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-12-25-40.png?resize=800%2C448&ssl=1
|
||||
[18]: https://prometheus.io/
|
||||
[19]: https://glances.readthedocs.io/en/latest/gw/index.html
|
||||
[20]: https://github.com/nicolargo/glances/wiki/The-Glances-RESTFULL-JSON-API
|
||||
[21]: https://nodered.org/
|
||||
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-17-49-41.png?resize=800%2C468&ssl=1
|
||||
[23]: https://github.com/nicolargo/glances/wiki
|
||||
[24]: https://pypi.org/project/psutil/
|
207
published/20200813 How to use printf to format output.md
Normal file
207
published/20200813 How to use printf to format output.md
Normal file
@ -0,0 +1,207 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12573-1.html)
|
||||
[#]: subject: (How to use printf to format output)
|
||||
[#]: via: (https://opensource.com/article/20/8/printf)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
如何使用 printf 来格式化输出
|
||||
======
|
||||
|
||||
> 来了解一下 printf ,一个神秘的、灵活的和功能丰富的函数,可以替换 echo、print 和 cout。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202009/02/001109wp3xdtr27xop25e7.jpg)
|
||||
|
||||
当我开始学习 Unix 时,我很早就接触到了 `echo` 命令。同样,我最初的 [Python][2] 课程也涉及到了 `print` 函数。还记得学习 C++ 和 [Java][2] 时学到的 `cout` 和 `System.out`。似乎每种语言都骄傲地宣称拥有一种方便的单行输出方法,并像生怕这种方式过时一样宣传它。
|
||||
|
||||
但是当我翻开中级教程的第一页后,我遇到了 `printf`,一个晦涩难懂的、神秘莫测的,又出奇灵活的函数。本文一反向初学者隐藏 `printf` 这个令人费解的传统,旨在介绍这个不起眼的 `printf` 函数,并解释如何在几乎所有语言中使用它。
|
||||
|
||||
### printf 简史
|
||||
|
||||
术语 `printf` 代表“<ruby>格式化打印<rt>print formatted</rt></ruby>”,它可能最早出现于 [Algol 68][3] 编程语言中。自从它被纳入到 C 语言后,`printf` 已经在 C++、Java、Bash、PHP 中一次次重新实现,并且很可能在你最喜欢的 “后 C” 语言中再次出现。
|
||||
|
||||
显然,它很受欢迎,但很多人认为它的语法很复杂,尤其是与 `echo` 或 `print` 或 `cout` 等替代的函数相比尤为明显。例如,这是在 Bash 中的一个简单的 `echo` 语句:
|
||||
|
||||
```
|
||||
$ echo hello
|
||||
hello
|
||||
$
|
||||
```
|
||||
|
||||
这是在 Bash 中使用 `printf` 得到同样结果:
|
||||
|
||||
```
|
||||
$ printf "%s\n" hello
|
||||
hello
|
||||
$
|
||||
```
|
||||
|
||||
但是所增加的复杂性反而让你拥有很多功能,这是为什么 `printf` 值得学习的确切原因。
|
||||
|
||||
### printf 输出
|
||||
|
||||
在 `printf` 背后的基本思想是:它能够基于与内容*分离的*样式信息来格式化输出。例如,这里是 `printf` 认可的视作特殊字符的特定序列集合。你喜欢的语言可能会有或多或少的序列,但是通常包含:
|
||||
|
||||
* `\n`: 新行
|
||||
* `\r`: 回车
|
||||
* `\t`: 水平制表符
|
||||
* `\NNN`: 一个由一到三位八进制数字表示的特殊字节
|
||||
|
||||
例如:
|
||||
|
||||
```
|
||||
$ printf "\t\123\105\124\110\n"
|
||||
SETH
|
||||
$
|
||||
```
|
||||
|
||||
在这个 Bash 示例中,`printf` 先渲染了一个制表符,然后是四个八进制值所对应的 ASCII 字符,最后以一个生成新行(`\n`)的控制序列结束。
|
||||
|
||||
如果同样使用 `echo` 来输出会产生更多的字符:
|
||||
|
||||
```
|
||||
$ echo "\t\123\105\124\110\n"
|
||||
\t\123\105\124\110\n
|
||||
$
|
||||
```
|
||||
|
||||
使用 Python 的 `print` 函数来完成同样的任务,你会发现 Python 的 `print` 命令比你想象的要强大:
|
||||
|
||||
```
|
||||
>>> print("\t\123\n")
|
||||
S
|
||||
|
||||
>>>
|
||||
```
|
||||
|
||||
显然,Python 的 `print` 包含传统的 `printf` 特性以及简单的 `echo` 或 `cout` 的特性。
|
||||
|
||||
不过,这些示例包括的只是文字字符,尽管在某些情况下它们也很有用,但它们可能是 `printf` 最不重要的部分。`printf` 的真正的威力在于格式化说明。
|
||||
|
||||
### 使用 printf 格式化输出
|
||||
|
||||
格式化说明符是以一个百分号(`%`)开头的字符。
|
||||
|
||||
常见的格式化说明符包括:
|
||||
|
||||
* `%s`: 字符串
|
||||
* `%d`: 数字
|
||||
* `%f`: 浮点数字
|
||||
* `%o`: 一个八进制的数字
|
||||
|
||||
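在 Bash 中可以快速体验这几种说明符(一个简练的示意):

```
$ printf "%s %d %f %o\n" hello 10 3.5 8
hello 10 3.500000 10
```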
这些格式化说明符是 `printf` 语句的占位符,你可以使用一个在其它地方提供的值来替换你的 `printf` 语句中的占位符。这些值在哪里提供取决于你使用的语言和它的语法,这里有一个简单的 Java 例子:
|
||||
|
||||
```
|
||||
String var = "hello\n";
System.out.printf("%s", var);
|
||||
```
|
||||
|
||||
把这个代码包裹在适当的样板文件中,在执行后,将呈现:
|
||||
|
||||
```
|
||||
$ ./example
|
||||
hello
|
||||
$
|
||||
```
|
||||
|
||||
但是,当一个变量的内容更改时,有意思的地方就来了。假设你想基于不断增加的数字来更新输出:
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
    int var = 0;
    while (var < 100) {
        var++;
        printf("Processing is %d%% finished.\n", var);
    }
    return 0;
}
|
||||
```
|
||||
|
||||
编译并运行:
|
||||
|
||||
```
|
||||
Processing is 1% finished.
|
||||
[...]
|
||||
Processing is 100% finished.
|
||||
```
|
||||
|
||||
注意,在代码中的两个 `%` 将被解析为一个打印出来的 `%` 符号。
|
||||
|
||||
### 使用 printf 限制小数位数
|
||||
|
||||
数字也可以很复杂,`printf` 提供了很多格式化选项。你可以对浮点数使用 `%f`,并限制打印出多少个小数位。通过把一个点(`.`)和一个限制位数的数字放置在百分号和 `f` 之间,你可以告诉 `printf` 打印多少位小数。这是一个用 Bash 写的简练示例:
|
||||
|
||||
```
|
||||
$ printf "%.2f\n" 3.141519
|
||||
3.14
|
||||
$
|
||||
```
|
||||
|
||||
类似的语法也适用于其它的语言。这里是一个 C 语言的示例:
|
||||
|
||||
```
|
||||
#include <math.h>
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
|
||||
fprintf(stdout, "%.2f\n", 4 * atan(1.0));
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
对于三位小数,使用 `.3f` ,依次类推。
|
||||
|
||||
### 使用 printf 来在数字上添加逗号
|
||||
|
||||
因为位数大的数字很难解读,所以通常使用一个逗号来断开大的数字。你可以在百分号和 `d` 之间放置一个撇号(`'`),让 `printf` 根据需要添加逗号:
|
||||
|
||||
```
|
||||
$ printf "%'d\n" 1024
|
||||
1,024
|
||||
$ printf "%'d\n" 1024601
|
||||
1,024,601
|
||||
$
|
||||
```
|
||||
|
||||
### 使用 printf 来添加前缀零
|
||||
|
||||
`printf` 的另一个常用的用法是对文件名称中的数字强制实行一种特定的格式。例如,如果你在一台计算机上有 10 个按顺序排列的文件,计算机可能会把 `10.jpg` 排在 `1.jpg` 之前,这可能不是你的本意。当你以编程的方式写入文件时,你可以使用 `printf` 来生成带前缀 0 的文件名称。这是一个用 Bash 写的简练示例:
|
||||
|
||||
```
|
||||
$ printf "%03d.jpg\n" {1..10}
|
||||
001.jpg
|
||||
002.jpg
|
||||
[...]
|
||||
010.jpg
|
||||
```
|
||||
|
||||
注意:每个数字最多使用 3 位数字。
|
||||
|
||||
### 使用 printf
|
||||
|
||||
正如这些 `printf` 示例所显示,包括控制字符,尤其是 `\n` ,可能会冗长,并且语法相对复杂。这就是为什么开发像 `echo` 和 `cout` 之类的快捷方式的原因。不过,如果你时不时地使用 `printf` ,你就会习惯于这种语法,并且它也会变成你的习惯。我不认为 `printf` 有任何理由成为你在日常活动中打印时的*首选*,但是它是一个很好的工具,当你需要它时,它不会拖累你。
|
||||
|
||||
花一些时间学习你所选择语言中的 `printf`,并且当你需要时就使用它。它是一个强有力的工具,你不会后悔随时可用的工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/printf
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[robsean](https://github.com/robsean)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer)
|
||||
[2]: https://opensource.com/resources/python
|
||||
[3]: https://opensource.com/article/20/6/algol68
|
||||
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
|
||||
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/atan.html
|
@ -0,0 +1,87 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12579-1.html)
|
||||
[#]: subject: (Linux Jargon Buster: What is Desktop Environment in Linux?)
|
||||
[#]: via: (https://itsfoss.com/what-is-desktop-environment/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Linux 黑话解释:什么是桌面环境?
|
||||
======
|
||||
|
||||
![][6]
|
||||
|
||||
在桌面 Linux 世界中,最常用的术语之一就是<ruby>桌面环境<rt>Desktop Environment</rt></ruby>(DE)。如果你是 Linux 的新手,你应该了解一下这个经常使用的术语。
|
||||
|
||||
### 什么是 Linux 中的桌面环境?
|
||||
|
||||
桌面环境是一个组件的组合体,为你提供常见的<ruby>图形用户界面<rt>graphical user interface</rt></ruby>(GUI)元素组件,如图标、工具栏、壁纸和桌面小部件。借助桌面环境,你可以像在 Windows 中一样使用鼠标和键盘使用 Linux。
|
||||
|
||||
有几种不同的桌面环境,这些桌面环境决定了你的 Linux 系统的样子以及你与它的交互方式。
|
||||
|
||||
大多数桌面环境都有自己的一套集成的应用程序和实用程序,这样用户在使用操作系统时就能得到统一的感受。所以,你会得到一个文件资源管理器、桌面搜索、应用程序菜单、壁纸和屏保实用程序、文本编辑器等。
|
||||
|
||||
如果没有桌面环境,你的 Linux 系统就只有一个类似于终端的实用程序,你只能用命令与之交互。
|
||||
|
||||
![Screenshot of GNOME Desktop Environment][1]
|
||||
|
||||
### Linux 中各种桌面环境
|
||||
|
||||
桌面环境有时也被简称为 DE。
|
||||
|
||||
如前所述,Linux 有[各种桌面环境可供选择][2]。为什么这么说呢?
|
||||
|
||||
可以把桌面环境看成是衣服。衣服决定了你的样子。如果你穿紧身牛仔裤和平底鞋,你会很好看,但穿着这些衣服跑步或登山就不舒服了。
|
||||
|
||||
像 [GNOME][3] 这样的桌面环境注重现代的外观和用户体验,而像 [Xfce][4] 这样的桌面环境则更注重使用更少的计算资源,而不是花哨的图形。
|
||||
|
||||
![Screenshot of Xfce Desktop Environment][5]
|
||||
|
||||
你的衣服取决于你的需要,决定了你的外观,桌面环境也是如此。你必须决定你是想要一些好看的东西,还是让你的系统运行得更快。
|
||||
|
||||
一些[流行的桌面环境][2]有:
|
||||
|
||||
* GNOME - 使用大量的系统资源,但给你一个现代的、精致的系统
|
||||
* Xfce - 外观复古但占用资源很少
|
||||
* KDE - 可高度定制的桌面,适度占用系统资源
|
||||
* LXDE - 唯一的重点是尽可能少地使用资源
|
||||
* Budgie - 现代的外观和适度占用系统资源
|
||||
|
||||
### Linux 发行版及其桌面环境变体
|
||||
|
||||
同样的桌面环境可以在多个 Linux 发行版上使用,一个 Linux 发行版也可能提供多个桌面环境。
|
||||
|
||||
例如,Fedora 和 Ubuntu 都默认使用 GNOME 桌面,但 Fedora 和 Ubuntu 都提供了其他桌面环境。
|
||||
|
||||
Linux 的优点和灵活性在于,你可以自己在任何 Linux 发行版上安装桌面环境。但大多数 Linux 发行版都为你省去了这个麻烦,并为不同的桌面环境提供了随时安装的 ISO 镜像。
|
||||
|
||||
例如 [Manjaro Linux][7] 默认使用 Xfce,但如果你喜欢在 Manjaro 上使用 GNOME,也可以下载 GNOME 版本的 ISO。
|
||||
|
||||
### 最后...
|
||||
|
||||
桌面环境是 Linux 桌面计算机的重要组成部分,而 Linux 服务器通常依靠命令行界面。并不是说不能在 Linux 服务器上安装桌面环境,但这是画蛇添足,浪费了重要的系统资源,而这些资源可以被服务器上运行的应用程序所利用。
|
||||
|
||||
我希望你现在对 Linux 中的桌面环境有了一些了解。我强烈推荐你阅读我的[关于什么是 Linux 以及为什么有这么多 Linux 发行版][8]的解释文章。我很有预感,你会喜欢我用它做的比喻。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/what-is-desktop-environment/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/gnome-3-36-screenshot.jpg?resize=800%2C450&ssl=1
|
||||
[2]: https://itsfoss.com/best-linux-desktop-environments/
|
||||
[3]: https://www.gnome.org/
|
||||
[4]: https://www.xfce.org/
|
||||
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/12/Ubuntu-XFCE-Chromebook-e1451426418482-1.jpg?resize=701%2C394&ssl=1
|
||||
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/what-is-desktop-environment-linux.png?resize=800%2C450&ssl=1
|
||||
[7]: https://manjaro.org/
|
||||
[8]: https://itsfoss.com/what-is-linux/
|
@ -0,0 +1,129 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12576-1.html)
|
||||
[#]: subject: (Use this command-line tool to find security flaws in your code)
|
||||
[#]: via: (https://opensource.com/article/20/8/static-code-security-analysis)
|
||||
[#]: author: (Ari Noman https://opensource.com/users/arinoman)
|
||||
|
||||
使用命令行工具 Graudit 来查找你代码中的安全漏洞
|
||||
======
|
||||
|
||||
> 凭借广泛的语言支持,Graudit 可以让你在开发过程中审计代码的安全性。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202009/03/114037qhi2h282wghbp74n.jpg)
|
||||
|
||||
测试是软件开发生命周期(SDLC)的重要组成部分,它有几个阶段。今天,我想谈谈如何在代码中发现安全问题。
|
||||
|
||||
在开发软件的时候,你不能忽视安全问题。这就是为什么有一个术语叫 DevSecOps,它的基本职责是识别和解决应用中的安全漏洞。有一些用于检查 [OWASP 漏洞][2]的开源解决方案,它将通过创建源代码的威胁模型来得出结果。
|
||||
|
||||
处理安全问题有不同的方法,如静态应用安全测试(SAST)、动态应用安全测试(DAST)、交互式应用安全测试(IAST)、软件组成分析等。
|
||||
|
||||
静态应用安全测试在代码层面运行,通过发现编写好的代码中的错误来分析应用。这种方法不需要运行代码,所以叫静态分析。
|
||||
|
||||
我将重点介绍静态代码分析,并使用一个开源工具进行实际体验。
|
||||
|
||||
### 为什么要使用开源工具检查代码安全?
|
||||
|
||||
选择开源软件、工具和项目作为开发的一部分有很多理由。它不会花费任何金钱,因为你使用的是一个由志趣相投的开发者社区开发的工具,而他们希望帮助其他开发者。如果你有一个小团队或一个初创公司,找到开源软件来检查你的代码安全是很好的。这样可以让你不必单独雇佣一个 DevSecOps 团队,让你的成本降低。
|
||||
|
||||
好的开源工具总是考虑到灵活性,它们应该能够在任何环境中使用,覆盖尽可能多的情况。这让开发人员更容易将该软件与他们现有的系统连接起来。
|
||||
|
||||
但是有的时候,你可能需要一个功能,而这个功能在你选择的工具中是不可用的。那么你就可以选择复刻其代码,在其上开发自己的功能,并在你的系统中使用。
|
||||
|
||||
因为,大多数时候,开源软件是由社区驱动的,开发的速度往往是该工具的用户的加分项,因为他们会根据用户的反馈、问题或 bug 报告来迭代项目。
|
||||
|
||||
### 使用 Graudit 来确保你的代码安全
|
||||
|
||||
有各种开源的静态代码分析工具可供选择,但正如你所知道的,工具分析的是代码本身,这就是为什么没有通用的工具适用于所有的编程语言。但其中一些遵循 OWASP 指南,尽量覆盖更多的语言。
|
||||
|
||||
在这里,我们将使用 [Graudit][3],它是一个简单的命令行工具,可以让我们找到代码库中的安全缺陷。它支持不同的语言,但有一个固定的签名集。
|
||||
|
||||
Graudit 使用的 `grep` 是 GNU 许可证下的工具,类似的静态代码分析工具还有 Rough Auditing Tool for Security(RATS)、Securitycompass Web Application Analysis Tool(SWAAT)、flawfinder 等。但 Graudit 的技术要求是最低的,并且非常灵活。不过,你可能还是有 Graudit 无法满足的要求。如果是这样,你可以看看这个[列表][4]的其他的选择。
|
||||
|
||||
我们可以将这个工具安装在特定的项目下,或者全局命名空间中,或者在特定的用户下,或者任何我们喜欢地方,它很灵活。我们先来克隆一下仓库。
|
||||
|
||||
```
|
||||
$ git clone https://github.com/wireghoul/graudit
|
||||
```
|
||||
|
||||
现在,我们需要创建一个 Graudit 的符号链接,以便我们可以将其作为一个命令使用。
|
||||
|
||||
```
|
||||
$ mkdir -p ~/bin
$ ln --symbolic ~/graudit/graudit ~/bin/graudit
|
||||
```
|
||||
|
||||
在 `.bashrc` (或者你使用的任何 shell 的配置文件)中添加一个别名。
|
||||
|
||||
```
|
||||
#------ .bashrc ------
|
||||
|
||||
alias graudit="~/bin/graudit"
|
||||
```
|
||||
|
||||
重新加载 shell:
|
||||
|
||||
```
|
||||
$ source ~/.bashrc # 或
|
||||
$ exec $SHELL
|
||||
```
|
||||
|
||||
让我们通过运行这个来检查是否成功安装了这个工具。
|
||||
|
||||
```
|
||||
$ graudit -h
|
||||
```
|
||||
|
||||
如果你得到类似于这样的结果,那么就可以了。
|
||||
|
||||
![Graudit terminal screen showing help page][5]
|
||||
|
||||
*图 1 Graudit 帮助页面*
|
||||
|
||||
我正在使用我现有的一个项目来测试这个工具。要运行该工具,我们需要传递相应语言的数据库。你会在 signatures 文件夹下找到这些数据库。
|
||||
|
||||
```
|
||||
$ graudit -d ~/graudit/signatures/js.db
|
||||
```
|
||||
|
||||
我在现有项目中的两个 JavaScript 文件上运行了它,你可以看到它在控制台中抛出了易受攻击的代码。
|
||||
|
||||
![JavaScript file showing Graudit display of vulnerable code][6]
|
||||
|
||||
![JavaScript file showing Graudit display of vulnerable code][7]
|
||||
|
||||
你可以尝试在你的一个项目上运行这个,项目本身有一个长长的[数据库][8]列表,用于支持不同的语言。
|
||||
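例如,要扫描一个 PHP 项目,可以指定相应的签名数据库和项目路径(这里的路径仅为示意):

```
$ graudit -d ~/graudit/signatures/php.db ~/projects/my-php-app
```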
|
||||
### Graudit 的优点和缺点
|
||||
|
||||
Graudit 支持很多语言,这使其成为许多不同系统上的用户的理想选择。由于它的使用简单和语言支持广泛,它可以与其他免费或付费工具相媲美。最重要的是,它们正在开发中,社区也支持其他用户。
|
||||
|
||||
虽然这是一个方便的工具,但你可能会发现很难将某个特定的代码识别为“易受攻击”。也许开发者会在未来版本的工具中加入这个功能。但是,通过使用这样的工具来关注代码中的安全问题总是好的。
|
||||
|
||||
### 总结
|
||||
|
||||
在本文中,我只介绍了众多安全测试类型中的一种:静态应用安全测试。从静态代码分析开始很容易,但这只是一个开始。你可以在你的应用开发流水线中添加其他类型的应用安全测试,以丰富你的整体安全意识。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/static-code-security-analysis
|
||||
|
||||
作者:[Ari Noman][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/arinoman
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_screen_display.jpg?itok=2HMTzqz0 (Code on a screen)
|
||||
[2]: https://owasp.org/www-community/vulnerabilities/
|
||||
[3]: https://github.com/wireghoul/graudit
|
||||
[4]: https://project-awesome.org/mre/awesome-static-analysis
|
||||
[5]: https://opensource.com/sites/default/files/uploads/graudit_1.png (Graudit terminal screen showing help page)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/graudit_2.png (JavaScript file showing Graudit display of vulnerable code)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/graudit_3.png (JavaScript file showing Graudit display of vulnerable code)
|
||||
[8]: https://github.com/wireghoul/graudit#databases
|
@ -0,0 +1,97 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (koolape)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12581-1.html)
|
||||
[#]: subject: (Soon You’ll be Able to Convert Any Website into Desktop Application in Linux Mint)
|
||||
[#]: via: (https://itsfoss.com/web-app-manager-linux-mint/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
很快你就能在 Linux Mint 上将任何网站转化为桌面应用程序了
|
||||
======
|
||||
|
||||
设想一下,你正忙于一项任务且需要在浏览器中打开超过 20 个页面,大多数页面都和工作有关。
|
||||
|
||||
还有一些是 YouTube 或其他音乐流媒体网站。
|
||||
|
||||
完成任务后需要关闭浏览器,但这会将包括工作相关和听音乐的网页等所有网页一起关掉。
|
||||
|
||||
然后你需要再次打开这些网页并登录账号以回到原来的进度。
|
||||
|
||||
这看起来令人沮丧,对吧?Linux Mint 理解你的烦恼,因此有了下面这个新项目帮助你应对这些问题。
|
||||
|
||||
![][1]
|
||||
|
||||
在[最近的一篇文章][2]中,Linux Mint 团队披露了正在开发一个名叫“<ruby>网页应用管理器<rt>Web App Manager</rt></ruby>”的新工具。
|
||||
|
||||
该工具让你能够像使用桌面程序那样以独立窗口运行你最喜爱的网页。
|
||||
|
||||
在将网页添加为网页应用程序的时候,你可以给这个程序取名字并添加图标。也可以将它添加到不同的分类,以便在菜单中搜索这个应用。还可以指定用什么浏览器打开应用。启用/禁用导航栏的选项也有。
|
||||
|
||||
![在 Linux Mint 中添加网页应用程序][3]
|
||||
|
||||
例如,将 YouTube 添加为网页应用程序:
|
||||
|
||||
![Linux Mint 中的网页应用程序][4]
|
||||
|
||||
运行 YouTube 程序将通过你所使用的浏览器打开一个独立的页面。
|
||||
|
||||
![YouTube 网页应用程序][5]
|
||||
|
||||
网页应用程序拥有常规桌面应用程序有的大多数功能特点,如使用 `Alt+Tab` 切换。
|
||||
|
||||
![使用 Alt+Tab 切换网页应用][6]
|
||||
|
||||
甚至还能将应用固定到面板/任务栏方便打开。
|
||||
|
||||
![添加到面板的 YouTube 网页应用][7]
|
||||
|
||||
该管理器目前处于 beta 开发阶段,但使用起来已经相对比较稳定了。不过目前还没有面向大众发布,因为翻译工作还未完成。
|
||||
|
||||
如果你在使用 Linux Mint 并想尝试这个工具,可在下方下载 beta 版本的 deb 文件:
|
||||
|
||||
- [下载 beta 版][8]
|
||||
|
||||
### 网页应用的好处
|
||||
|
||||
有读者问到这个网页应用管理器与 Chrome 和其他一些网页浏览器中已有的其他类似功能相比的好处。让我来展开一下这个话题。
|
||||
|
||||
- 你可以使用 URL 的特定部分(example.com/tool 而不是 example.com)作为应用程序。
|
||||
- 添加自定义图标的可能性对于没有清晰的 favicon 的网站来说非常方便。
|
||||
- 你可以使用一个没有任何扩展的轻量级浏览器来打开网页应用,而不是像 Chrome/Chromium 这样的常规网页浏览器。它的速度应该更快。
|
||||
- 你的网页应用可以被整合到应用菜单中。你可以像其他应用程序一样搜索它。
|
||||
|
||||
### 网页应用程序在桌面环境的 Linux 中不是什么新事物
|
||||
|
||||
网页应用程序不是由 Linux Mint 独创的,而是早在大约 10 年前就有了。
|
||||
|
||||
你可能还记得 Ubuntu 在 2013-2014 年向 Unity 桌面中加入了网页应用程序这项特性。
|
||||
|
||||
轻量级 Linux 发行版 PeppermintOS 自 2010 年起就将 ICE(网页应用程序工具)列为其主要特色之一。实际上,Linux Mint 的网页应用程序管理器也是基于 [ICE][9] 的。
|
||||
|
||||
我个人喜欢网页应用程序,因为我觉得它们很有用。
|
||||
|
||||
你怎么看 Linux Mint 中的网页应用程序呢,这是你期待使用的吗?欢迎在下方评论。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/web-app-manager-linux-mint/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[koolape](https://github.com/koolape)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-App-Manager-linux-mint.jpg?resize=800%2C450&ssl=1
|
||||
[2]: https://blog.linuxmint.com/?p=3960
|
||||
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Add-web-app-in-Linux-Mint.png?resize=600%2C489&ssl=1
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/Web-Apps-in-Linux-Mint.png?resize=600%2C489&ssl=1
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/youtube-web-app-linux-mint.jpg?resize=800%2C611&ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/09/web-app-alt-tab-switcher.jpg?resize=721%2C576&ssl=1
|
||||
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/09/panel.jpg?resize=470%2C246&ssl=1
|
||||
[8]: http://www.linuxmint.com/tmp/blog/3960/webapp-manager_1.0.3_all.deb
|
||||
[9]: https://github.com/peppermintos/ice
|
@ -0,0 +1,91 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Rejoice KDE Lovers! MX Linux Joins the KDE Bandwagon and Now You Can Download MX Linux KDE Edition)
|
||||
[#]: via: (https://itsfoss.com/mx-linux-kde-edition/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Rejoice KDE Lovers! MX Linux Joins the KDE Bandwagon and Now You Can Download MX Linux KDE Edition
|
||||
======
|
||||
|
||||
Debian-based [MX Linux][1] is already an impressive Linux distribution with [Xfce desktop environment][2] as the default. Even though it works good and is suitable to run with minimal hardware configuration, it still isn’t the best Linux distribution in terms of eye candy.
|
||||
|
||||
That’s where KDE comes to the rescue. Of late, KDE Plasma has reduced a lot of weight and it uses fewer system resources without compromising on the modern looks. No wonder KDE Plasma is one of [the best desktop environments][3] out there.
|
||||
|
||||
![][4]
|
||||
|
||||
With [MX Linux 19.2][5], they began testing a KDE edition and have finally released their first KDE version.
|
||||
|
||||
Also, the KDE edition comes with Advanced Hardware Support (AHS) enabled. Here’s what they have mentioned in their release notes:
|
||||
|
||||
> MX-19.2 KDE is an **Advanced Hardware Support (AHS)** enabled, **64-bit only** version of MX featuring the KDE/plasma desktop. Applications utilizing Qt library frameworks are given a preference for inclusion on the iso.
|
||||
|
||||
As I mentioned earlier, this is MX Linux’s first KDE edition ever, and they’ve also shed some light on it in the announcement:
|
||||
|
||||
> This will be first officially supported MX/antiX family iso utilizing the KDE/plasma desktop since the halting of the predecessor MEPIS project in 2013.
|
||||
|
||||
Personally, I enjoyed the experience of using MX Linux until I started using [Pop OS 20.04][6]. So, I’ll give you some key highlights of MX Linux 19.2 KDE edition along with my impressions of testing it.
|
||||
|
||||
### MX Linux 19.2 KDE: Overview
|
||||
|
||||
![][7]
|
||||
|
||||
Out of the box, MX Linux looks cleaner and more attractive with KDE desktop on board. Unlike KDE Neon, it doesn’t feature the latest and greatest KDE stuff, but it looks to be doing the job intended.
|
||||
|
||||
Of course, you will get the same options that you expect from a KDE-powered distro to customize the look and feel of your desktop. In addition to the obvious KDE perks, you will also get the usual MX tools, the antiX-live-usb-system, and the snapshot feature that comes baked into the Xfce edition.
|
||||
|
||||
It’s a great thing to have the best of both worlds here, as stated in their announcement:
|
||||
|
||||
> MX-19.2 KDE includes the usual MX tools, antiX-live-usb-system, and snapshot technology that our users have come to expect from our standard flagship Xfce releases. Adding KDE/plasma to the existing Xfce/MX-fluxbox desktops will provide for a wider range user needs and wants.
|
||||
|
||||
I haven’t performed a great deal of tests but I did have some issues with extracting archives (it didn’t work the first try) and copy-pasting a file to a new location. Not sure if those are some known bugs — but I thought I should let you know here.
|
||||
|
||||
![][8]
|
||||
|
||||
Other than that, it features every useful tool you’d want to have and works great. With KDE on board, it actually feels more polished and smooth in my case.
|
||||
|
||||
Along with KDE Plasma 5.14.5 on top of Debian 10 “buster”, it also comes with GIMP 2.10.12, MESA, Debian (AHS) 5.6 Kernel, Firefox browser, and few other goodies like VLC, Thunderbird, LibreOffice, and Clementine music player.
|
||||
|
||||
You can also look for more stuff in the MX repositories.
|
||||
|
||||
![][9]
|
||||
|
||||
There are some known issues with the release, like the system clock settings not being adjustable via KDE settings. You can check their [announcement post][10] for more information or their [bug list][11] to make sure everything’s fine before trying it out on your production system.
|
||||
|
||||
### Wrapping Up
|
||||
|
||||
MX Linux 19.2 KDE edition is definitely more impressive than its Xfce offering in my opinion. It would take a while to iron out the bugs for this first KDE release — but it’s not a bad start.
|
||||
|
||||
Speaking of KDE, I recently tested out KDE Neon, the official KDE distribution. I shared my experience in this video. I’ll try to do a video on MX Linux KDE flavor as well.
|
||||
|
||||
[Subscribe to our YouTube channel for more Linux videos][12]
|
||||
|
||||
Have you tried it yet? Let me know your thoughts in the comments below!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/mx-linux-kde-edition/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://mxlinux.org/
|
||||
[2]: https://www.xfce.org/
|
||||
[3]: https://itsfoss.com/best-linux-desktop-environments/
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-kde-edition.jpg?resize=800%2C450&ssl=1
|
||||
[5]: https://mxlinux.org/blog/mx-19-2-now-available/
|
||||
[6]: https://itsfoss.com/pop-os-20-04-review/
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde.jpg?resize=800%2C452&ssl=1
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde-filemanager.jpg?resize=800%2C452&ssl=1
|
||||
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde-info.jpg?resize=800%2C452&ssl=1
|
||||
[10]: https://mxlinux.org/blog/mx-19-2-kde-now-available/
|
||||
[11]: https://bugs.mxlinux.org/
|
||||
[12]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
|
@ -0,0 +1,66 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Cisco open-source code boosts performance of Kubernetes apps over SD-WAN)
|
||||
[#]: via: (https://www.networkworld.com/article/3572310/cisco-open-source-code-boosts-performance-of-kubernetes-apps-over-sd-wan.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Cisco open-source code boosts performance of Kubernetes apps over SD-WAN
|
||||
======
|
||||
Cisco's Cloud-Native SD-WAN project marries SD-WANs to Kubernetes applications to cut down on the manual work needed to optimize latency and packet loss.
|
||||
Thinkstock
|
||||
|
||||
Cisco has introduced an open-source project that it says could go a long way toward reducing the manual work involved in optimizing performance of Kubernetes-applications across [SD-WANs][1].
|
||||
|
||||
Cisco said it launched the Cloud-Native SD-WAN (CN-WAN) project to show how Kubernetes applications can be automatically mapped to SD-WAN with the result that the applications perform better over the WAN.
|
||||
|
||||
**More about SD-WAN**: [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][2] • [How to pick an off-site data-backup method][3] • [SD-Branch: What it is and why you’ll need it][4] • [What are the options for security SD-WAN?][5]
|
||||
|
||||
“In many cases, enterprises deploy an SD-WAN to connect a Kubernetes cluster with users or workloads that consume cloud-native applications. In a typical enterprise, NetOps teams leverage their network expertise to program SD-WAN policies to optimize general connectivity to the Kubernetes hosted applications, with the goal to reduce latency, reduce packet loss, etc.” wrote John Apostolopoulos, vice president and CTO of Cisco’s intent-based networking group in a group [blog][6].
|
||||
|
||||
“The enterprise usually also has DevOps teams that maintain and optimize the Kubernetes infrastructure. However, despite the efforts of NetOps and DevOps teams, today Kubernetes and SD-WAN operate mostly like ships in the night, often unaware of each other. Integration between SD-WAN and Kubernetes typically involves time-consuming manual coordination between the two teams.”
|
||||
|
||||
Current SD-WAN offering often have APIs that let customers programmatically influence how their traffic is handled over the WAN. This enables interesting and valuable opportunities for automation and application optimization, Apostolopoulos stated. “We believe there is an opportunity to pair the declarative nature of Kubernetes with the programmable nature of modern SD-WAN solutions,” he stated.
|
||||
|
||||
Enter CN-WAN, which defines a set of components that can be used to integrate an SD-WAN package, such as Cisco Viptela SD-WAN, with Kubernetes to enable DevOps teams to express the WAN needs of the microservices they deploy in a Kubernetes cluster, while simultaneously letting NetOps automatically render the microservices needs to optimize the application performance over the WAN, Apostolopoulos stated.
|
||||
|
||||
Apostolopoulos wrote that CN-WAN is composed of a Kubernetes Operator, a Reader, and an Adaptor. It works like this: The CN-WAN Operator runs in the Kubernetes cluster, actively monitoring the deployed services. DevOps teams can use standard Kubernetes annotations on the services to define WAN-specific metadata, such as the traffic profile of the application. The CN-WAN Operator then automatically registers the service along with the metadata in a service registry. In a demo at KubeCon EU this week Cisco used Google Service Directory as the service registry.
|
||||
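As a minimal sketch of what that workflow could look like from the DevOps side, a service could be tagged with WAN metadata via kubectl (the annotation key and values here are hypothetical; the actual keys are defined by the CN-WAN project):

```
$ kubectl annotate service my-service example.com/traffic-profile=video
```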
|
||||
Earlier this year [Cisco and Google][7] deepened their relationship with a turnkey package that lets customers mesh SD-WAN connectivity with applications running in a private [data center][8], Google Cloud or another cloud or SaaS application. That jointly developed platform, called Cisco SD-WAN Cloud Hub with Google Cloud, combines Cisco’s SD-WAN policy-, telemetry- and security-setting capabilities with Google's software-defined backbone to ensure that application service-level agreement, security and compliance policies are extended across the network.
|
||||
|
||||
Meanwhile, on the SD-WAN side, the CN-WAN Reader connects to the service registry to learn about how Kubernetes is exposing the services and the associated WAN metadata extracted by the CN-WAN operator, Cisco stated. When new or updated services or metadata are detected, the CN-WAN Reader sends a message towards the CN-WAN Adaptor so SD-WAN policies can be updated.
|
||||
|
||||
Finally, the CN-WAN Adaptor maps the service-associated metadata into the detailed SD-WAN policies programmed by NetOps in the SD-WAN controller. The SD-WAN controller automatically renders the SD-WAN policies, specified by the NetOps for each metadata type, into specific SD-WAN data-plane optimizations for the service, Cisco stated.
|
||||
|
||||
“The SD-WAN may support multiple types of access at both sender and receiver (e.g., wired Internet, MPLS, wireless 4G or 5G), as well as multiple service options and prioritizations per access network, and of course multiple paths between source and destination,” Apostolopoulos stated.
|
||||
|
||||
The code for the CN-WAN project is available as open-source in [GitHub][9].
|
||||
|
||||
Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3572310/cisco-open-source-code-boosts-performance-of-kubernetes-apps-over-sd-wan.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
|
||||
[2]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
|
||||
[3]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
|
||||
[4]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
|
||||
[5]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html
|
||||
[6]: https://blogs.cisco.com/networking/introducing-the-cloud-native-sd-wan-project
|
||||
[7]: https://www.networkworld.com/article/3539252/cisco-integrates-sd-wan-connectivity-with-google-cloud.html
|
||||
[8]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
|
||||
[9]: https://github.com/CloudNativeSDWAN/cnwan-docs
|
||||
[10]: https://www.facebook.com/NetworkWorld/
|
||||
[11]: https://www.linkedin.com/company/network-world
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (leommxj)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|

@ -1,145 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( chenmu-kk )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (9 open source tools for building a fault-tolerant system)
[#]: via: (https://opensource.com/article/19/3/tools-fault-tolerant-system)
[#]: author: (Bryant Son (Red Hat, Community Moderator) https://opensource.com/users/brson)

9 open source tools for building a fault-tolerant system
======
Maximize uptime and minimize problems with these open source tools.

![magnifying glass on computer screen, finding a bug in the code][1]

I've always been interested in web development and software architecture because I like to see the broader picture of a working system. Whether you are building a mobile app or a web application, it has to be connected to the internet to exchange data among different modules, which means you need a web service.

If you use a cloud system as your application's backend, you can take advantage of greater computing power, as the backend service will scale horizontally and vertically and orchestrate different services. But whether or not you use a cloud backend, it's important to build a _fault-tolerant system_—one that is resilient, stable, fast, and safe.

To understand fault-tolerant systems, let's use Facebook, Amazon, Google, and Netflix as examples. Millions and billions of users access these platforms simultaneously while transmitting enormous amounts of data via peer-to-peer and user-to-server networks, and you can be sure there are also malicious users with bad intentions, like hacking or denial-of-service (DoS) attacks. Even so, these platforms can operate 24 hours a day and 365 days a year without downtime.

Although machine learning and smart algorithms are the backbones of these systems, the fact that they achieve consistent service without a single minute of downtime is praiseworthy. Their expensive hardware and gigantic datacenters certainly matter, but the elegant software designs supporting the services are equally important. And fault tolerance is one of the principles for building such an elegant system.

### Two behaviors that cause problems in production

Here's another way to think of a fault-tolerant system. When you run your application service locally, everything seems to be fine. Great! But when you promote your service to the production environment, all hell breaks loose. In a situation like this, a fault-tolerant system helps by addressing two problems: fail-stop behavior and Byzantine behavior.

#### Fail-stop behavior

Fail-stop behavior is when a running system suddenly halts or a few parts of the system fail. Server downtime and database inaccessibility fall under this category. For example, in the diagram below, Service 1 can't communicate with Service 2 because Service 2 is inaccessible:

![Fail-stop behavior due to Service 2 downtime][2]

But the problem can also occur if there is a network problem between the services, like this:

![Fail-stop behavior due to network failure][3]

#### Byzantine behavior

Byzantine behavior is when the system continuously runs but doesn't produce the expected behavior (e.g., wrong data or an invalid value).

Byzantine failure can happen if Service 2 has corrupted data or values, even though the service looks to be operating just fine, like in this example:

![Byzantine failure due to corrupted service][4]

Or, there can be a malicious middleman intercepting between the services and injecting unwanted data:

![Byzantine failure due to malicious middleman][5]

Neither fail-stop nor Byzantine behavior is a desired situation, so we need ways to prevent or fix them. That's where fault-tolerant systems come into play. Following are nine open source tools that can help you address these problems.

### Tools for building a fault-tolerant system

Although building a truly practical fault-tolerant system touches upon in-depth _distributed computing theory_ and complex computer science principles, there are many software tools—many of them, like the following, open source—that can help you build a fault-tolerant system and alleviate undesirable results.

#### Circuit-breaker pattern: Hystrix and Resilience4j

The [circuit-breaker pattern][6] is a technique that helps to return a prepared dummy response or a simple response when a service fails:

![Circuit breaker pattern][7]

Netflix's open source **[Hystrix][8]** is the most popular implementation of the circuit-breaker pattern.

Many companies where I've worked previously are leveraging this wonderful tool. Surprisingly, Netflix announced that it will no longer update Hystrix. (Yeah, I know.) Instead, Netflix recommends using an alternative solution like [**Resilience4j**][9], which supports Java 8 and functional programming, or an alternative practice like [Adaptive Concurrency Limit][10].
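
To make the pattern concrete, here is a minimal sketch of a circuit breaker guarding a flaky call with Resilience4j. The service name, the thresholds, and the failing `callService2()` stub are illustrative assumptions rather than anything from this article, and builder method names may vary slightly across Resilience4j versions:

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

import java.time.Duration;
import java.util.function.Supplier;

public class CircuitBreakerSketch {
    // Stand-in for a remote call to an unhealthy Service 2.
    static String callService2() {
        throw new RuntimeException("Service 2 is down");
    }

    public static void main(String[] args) {
        // Open the circuit when half of the last 10 calls fail, then
        // wait 5 seconds before letting a trial request through.
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
                .failureRateThreshold(50)
                .slidingWindowSize(10)
                .waitDurationInOpenState(Duration.ofSeconds(5))
                .build();
        CircuitBreaker breaker = CircuitBreaker.of("service2", config);

        // Decorate the call so that an open circuit fails fast instead
        // of waiting on the broken service.
        Supplier<String> guarded = CircuitBreaker
                .decorateSupplier(breaker, CircuitBreakerSketch::callService2);

        for (int i = 0; i < 12; i++) {
            try {
                System.out.println(guarded.get());
            } catch (Exception e) {
                // Return the prepared dummy response the pattern calls for.
                System.out.println("fallback response (" + e.getClass().getSimpleName() + ")");
            }
        }
    }
}
```

Once the failure threshold is met, further calls should be rejected immediately with a `CallNotPermittedException` rather than waiting on the dead service, which is exactly the fail-fast behavior the diagram above illustrates.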

#### Load balancing: Nginx and HaProxy

Load balancing is one of the most fundamental concepts in a distributed system and must be present to have a production-quality environment. To understand load balancers, we first need to understand the concept of _redundancy_. Every production-quality web service has multiple servers that provide redundancy to take over and maintain services when servers go down.

![Load balancer][11]

Think about modern airplanes: their dual engines provide redundancy that allows them to land safely even if an engine catches fire. (It also helps that most commercial airplanes have state-of-the-art, automated systems.) But having multiple engines (or servers) means that there must be some kind of scheduling mechanism to effectively route the system when something fails.

A load balancer is a device or software that optimizes heavy traffic transactions by balancing multiple server nodes. For instance, when thousands of requests come in, the load balancer acts as the middle layer to route and evenly distribute traffic across different servers. If a server goes down, the load balancer forwards requests to the other servers that are running well.

There are many load balancers available, but the two best-known ones are Nginx and HaProxy.

[**Nginx**][12] is more than a load balancer. It is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. Companies like Groupon, Capital One, Adobe, and NASA use it.

[**HaProxy**][13] is also popular, as it is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. Many large internet companies, including GitHub, Reddit, Twitter, and Stack Overflow, use HaProxy. Oh, and yes, Red Hat Enterprise Linux also supports HaProxy configuration.
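
The scheduling idea at the heart of both tools is easy to see in miniature. Here is a toy round-robin scheduler (the server addresses are invented; real load balancers add health checks, weighting, and connection handling on top of this core loop):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal round-robin scheduler: each incoming request is routed to
// the next server in the pool, spreading load evenly across nodes.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    public String pickServer() {
        // floorMod keeps the index non-negative even if the counter wraps.
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"));
        for (int i = 1; i <= 6; i++) {
            System.out.println("request " + i + " -> " + lb.pickServer());
        }
    }
}
```

A real deployment would also drop a node from the pool when it stops answering health checks, which is how the "forward requests to the other servers that are running well" behavior described above is achieved.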

#### Actor model: Akka

The [actor model][14] is a concurrency design pattern that delegates responsibility when an _actor_, which is a primitive unit of computation, receives a message. An actor can create even more actors and delegate the message to them.

[**Akka**][15] is one of the most well-known tools for the actor model implementation. The framework supports Java and Scala, which are both based on the JVM.
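
As a minimal sketch of the model, here is a single actor written against Akka's classic Java API (the actor name and message are invented for illustration, and the newer Akka Typed API looks somewhat different):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

// An actor keeps its state private and reacts only to messages, so
// there are no shared-memory locks to manage.
public class Greeter extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(String.class, msg ->
                        System.out.println(getSelf().path().name() + " received: " + msg))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
        // tell() is fire-and-forget: the message lands in the actor's
        // mailbox, and messages are processed one at a time.
        greeter.tell("hello, actor model", ActorRef.noSender());
        system.terminate();
    }
}
```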

#### Asynchronous, non-blocking I/O using messaging queues: Kafka and RabbitMQ

Multi-threaded development was popular in the past, but this practice has been discouraged and replaced with asynchronous, non-blocking I/O patterns. For Java, this is explicitly stated in its [Enterprise Java Bean (EJB) specifications][16]:

> "An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances.
>
> "The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt to start, stop, suspend, or resume a thread, or to change a thread's priority or name. The enterprise bean must not attempt to manage thread groups."

Now, there are other practices, like stream APIs and actor models. But messaging queues like [**Kafka**][17] and [**RabbitMQ**][18] offer out-of-the-box support for asynchronous, non-blocking I/O, and they are powerful open source tools that can replace threads for handling concurrent processes.
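
To show what that asynchronous, non-blocking style looks like in practice, here is a minimal Kafka producer sketch. The broker address and topic name are placeholders; the point is that `send()` buffers the record and returns immediately, with a callback handling the broker's acknowledgment instead of a thread blocking on I/O:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AsyncProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous: it returns at once, and the callback
            // fires later when the broker acknowledges (or rejects) the record.
            producer.send(new ProducerRecord<>("events", "order-123", "created"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.println("acknowledged at offset " + metadata.offset());
                        }
                    });
        } // close() flushes any records still in flight
    }
}
```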

#### Other options: Eureka and Chaos Monkey

Other useful tools for fault-tolerant systems include monitoring tools, such as Netflix's **[Eureka][19]**, and stress-testing tools, like **[Chaos Monkey][20]**. They aim to discover potential issues earlier by testing in lower environments, like integration (INT), quality assurance (QA), and user acceptance testing (UAT), to prevent potential problems before moving to the production environment.

* * *

What open source tools are you using for building a fault-tolerant system? Please share your favorites in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/3/tools-fault-tolerant-system

作者:[Bryant Son (Red Hat, Community Moderator)][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga (magnifying glass on computer screen, finding a bug in the code)
[2]: https://opensource.com/sites/default/files/uploads/1_errordowntimeservice.jpg (Fail-stop behavior due to Service 2 downtime)
[3]: https://opensource.com/sites/default/files/uploads/2_errordowntimenetwork.jpg (Fail-stop behavior due to network failure)
[4]: https://opensource.com/sites/default/files/uploads/3_byzantinefailuremalicious.jpg (Byzantine failure due to corrupted service)
[5]: https://opensource.com/sites/default/files/uploads/4_byzantinefailuremiddleman.jpg (Byzantine failure due to malicious middleman)
[6]: https://martinfowler.com/bliki/CircuitBreaker.html
[7]: https://opensource.com/sites/default/files/uploads/5_circuitbreakerpattern.jpg (Circuit breaker pattern)
[8]: https://github.com/Netflix/Hystrix/wiki
[9]: https://github.com/resilience4j/resilience4j
[10]: https://medium.com/@NetflixTechBlog/performance-under-load-3e6fa9a60581
[11]: https://opensource.com/sites/default/files/uploads/7_loadbalancer.jpg (Load balancer)
[12]: https://www.nginx.com
[13]: https://www.haproxy.org
[14]: https://en.wikipedia.org/wiki/Actor_model
[15]: https://akka.io
[16]: https://jcp.org/aboutJava/communityprocess/final/jsr220/index.html
[17]: https://kafka.apache.org
[18]: https://www.rabbitmq.com
[19]: https://github.com/Netflix/eureka
[20]: https://github.com/Netflix/chaosmonkey

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: (JunJie957)
+[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
-[#]: translator: ( )
+[#]: translator: ( chenmu-kk )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,59 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft uses AI to boost its reuse, recycling of server parts)
[#]: via: (https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Microsoft uses AI to boost its reuse, recycling of server parts
======
Get ready to hear the term 'circular' a lot more in reference to data center gear.
Monsitj / Getty Images

Microsoft is bringing artificial intelligence to the task of sorting through millions of servers to determine what can be recycled and where.

The new initiative calls for the building of so-called Circular Centers at Microsoft data centers around the world, where AI algorithms will be used to sort through parts from decommissioned servers or other hardware and figure out which parts can be reused on the campus.

**READ MORE:** [How to decommission a data center][1]

Microsoft says it has more than three million servers and related hardware in its data centers, and that a server's average lifespan is about five years. Plus, Microsoft is expanding globally, so its server count should increase.

Circular Centers are all about quickly sorting through the inventory rather than tying up overworked staff. Microsoft plans to increase its reuse of server parts by 90% by 2025. "Using machine learning, we will process servers and hardware that are being decommissioned onsite," wrote Brad Smith, president of Microsoft, in a [blog post][2] announcing the initiative. "We'll sort the pieces that can be reused and repurposed by us, our customers, or sold."

Smith notes that today there is no consistent data about the quantity, quality, and type of waste, where it is generated, and where it goes. Data about construction and demolition waste, for example, is inconsistent and needs a standardized methodology, better transparency, and higher quality.

"Without more accurate data, it's nearly impossible to understand the impact of operational decisions, what goals to set, and how to assess progress, as well as an industry standard for waste footprint methodology," he wrote.

A Circular Center pilot in an Amsterdam data center reduced downtime and increased the availability of server and network parts for its own reuse and buy-back by suppliers, according to Microsoft. It also reduced the cost of transporting and shipping servers and hardware to processing facilities, which lowered carbon emissions.

The term "circular economy" is catching on in tech. It's based on recycling server hardware, putting equipment that is a few years old but still quite usable back in service somewhere else. ITRenew, a reseller of used hyperscaler servers [that I profiled][3] a few months back, is big on the term.

The first Microsoft Circular Centers will be built at new, major data-center campuses or regions, the company says. It plans to eventually add these centers to campuses that already exist.

Microsoft has an expressed goal of being "carbon negative" by 2030, and this is just one of several projects. Recently Microsoft announced it had conducted a test at its system developer's lab in Salt Lake City where a 250kW hydrogen fuel cell system powered a row of server racks for 48 hours straight, something the company says has never been done before.

"It is the largest computer backup power system that we know that is running on hydrogen, and it has run the longest continuous test," Mark Monroe, a principal infrastructure engineer, wrote in a Microsoft [blog post][4]. He says the cost of hydrogen fuel cells has plummeted so much in recent years that they are now a viable alternative to diesel-powered backup generators, but much cleaner burning.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3439917/how-to-decommission-a-data-center.html
[2]: https://blogs.microsoft.com/blog/2020/08/04/microsoft-direct-operations-products-and-packaging-to-be-zero-waste-by-2030/
[3]: https://www.networkworld.com/article/3543810/for-sale-used-low-mileage-hyperscaler-servers.html
[4]: https://news.microsoft.com/innovation-stories/hydrogen-datacenters/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -1,59 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (AI system analyzes code similarities, makes progress toward automated coding)
[#]: via: (https://www.networkworld.com/article/3570389/ai-system-analyzes-code-similarities-makes-progress-toward-automated-coding.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

AI system analyzes code similarities, makes progress toward automated coding
======
Researchers from Intel, MIT, and Georgia Tech are working on an AI engine that can analyze code similarities to determine what code actually does, setting the stage for automated software writing.
Monsitj / Getty Images

With the rapid advances in artificial intelligence (AI), are we getting to the point where computers will be smart enough to write their own code and be done with human coders? New research suggests we might be getting closer to that milestone.

Researchers from MIT and Georgia Tech teamed with Intel to develop an AI engine, dubbed Machine Inferred Code Similarity (MISIM), that's designed to analyze software code and determine how it's similar to other code. What's most interesting is the potential for the system to learn what bits of code do, and then use that intelligence to change how software is written. Ultimately, a human could explain what they want a software program to do, and a machine programming (MP) system could come up with a coded app to accomplish it.

**READ MORE:** [How AI can create self-driving data centers][1]

"When fully realized, MP will enable everyone to create software by expressing their intention in whatever fashion that's best for them, whether that's code, natural language or something else," said Justin Gottschlich, principal scientist and director/founder of machine programming research at Intel, in the company's [press release][2]. "That's an audacious goal, and while there's much more work to be done, MISIM is a solid step toward it."

### How it works

Neural networks give similarity scores to snippets of code "based on the jobs they are designed to carry out," Intel explains. Two code samples may look completely different but be rated the same because they perform the same function, for example. The algorithm can then determine which code snippet is more efficient.

Primitive versions of code-similarity systems are used in plagiarism detection, for example. With MISIM, however, the algorithm looks at chunks of code and attempts to ascertain contextually whether the snippets have similar characteristics or are aiming for similar objectives. It can then offer improvements in performance or general efficiency.
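
As a hypothetical illustration (mine, not from the MISIM paper) of what "look different, do the same job" means, consider two Java methods that a similarity system should score as near-identical even though their syntax barely overlaps:

```java
import java.util.stream.IntStream;

public class SimilarSnippets {
    // Snippet A: an explicit accumulator loop.
    static int sumLoop(int[] xs) {
        int total = 0;
        for (int x : xs) {
            total += x;
        }
        return total;
    }

    // Snippet B: a stream pipeline. Token-for-token it shares almost
    // nothing with snippet A, yet it computes exactly the same value,
    // so a semantics-based system should rate the pair highly similar.
    static int sumStream(int[] xs) {
        return IntStream.of(xs).sum();
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumLoop(data));   // prints 10
        System.out.println(sumStream(data)); // prints 10
    }
}
```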

What's critical with MISIM is the intent of the creator, and it marks an advance toward intent-based programming, which could enable software to be designed based on what a non-programmer creator wants to achieve. With intent-based programming, an algorithm draws on a pool of open source code rather than relying on the traditional, manual method of compiling a series of step-like programming instructions, line by line, telling a computer how to do something.

"A core differentiation between MISIM and existing code-similarity systems lies in its novel context-aware semantic structure (CASS), which aims to lift out what the code actually does. Unlike other existing approaches, CASS can be configured to a specific context, allowing it to capture information that describes the code at a higher level. CASS can provide more specific insight into what the code does rather than how it does it," Intel explains.

This is accomplished without a compiler (the stage in programming that converts human-readable code into a machine-executable program). Conveniently, partial snippets can be executed just to see what happens in that piece of code. Plus, the system eliminates some of the more tedious parts of software development, like line-by-line bug finding. More details are available in the group's paper ([PDF][3]).

Intel says the team's MISIM system is 40 times more accurate at identifying similar code than previous code-similarity systems.

Heres_your_sign, a Redditor [commenting on blog coverage of MISIM][4], amusingly points out that thankfully the computers aren't writing the requirements, too. That would be asking for trouble, the Redditor believes.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570389/ai-system-analyzes-code-similarities-makes-progress-toward-automated-coding.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3568354/how-ai-can-create-self-driving-data-centers.html
[2]: https://newsroom.intel.com/news/intel-mit-georgia-tech-machine-programming-code-similarity-system/#gs.d8qd40
[3]: https://arxiv.org/pdf/2006.05265.pdf
[4]: https://www.reddit.com/r/technology/comments/i2dxed/this_ai_could_bring_us_computers_that_can_write/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -1,53 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IBM details next-gen POWER10 processor)
[#]: via: (https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

IBM details next-gen POWER10 processor
======
New CPU is optimized for enterprise hybrid cloud and AI inferencing, and it features a new technology for creating petabyte-scale memory clusters.
IBM

IBM on Monday took the wraps off its latest POWER RISC CPU family, optimized for enterprise hybrid-cloud computing and artificial intelligence (AI) inferencing, along with a number of other improvements.

Power is the last of the Unix processors from the 1990s, when Sun Microsystems, HP, SGI, and IBM all had competing Unixes and RISC processors to go with them. Unix gave way to Linux and RISC gave way to x86, but IBM holds on.

This is IBM's first 7-nanometer processor, and IBM claims it will deliver an up-to-three-times improvement in capacity and processor energy efficiency within the same power envelope as its POWER9 predecessor. The processor comes in a 15-core design (actually 16 cores, but one is not used) and allows for single- or dual-chip models, so IBM can put two processors in the same form factor. Each core can have up to eight threads, and each socket supports up to 4TB of memory.

More interesting is a new memory-clustering technology called Memory Inception. This form of clustering allows a system to view memory in another physical server as though it were its own. So instead of putting a lot of memory in each box, servers can literally borrow from their neighbors when there is a spike in demand for memory. Or admins can set up one big server with lots of memory in the middle of a cluster and surround it with low-memory servers that can borrow memory as needed from the high-capacity server.

All of this is done with a latency of 50 to 100 nanoseconds. "This has been a holy grail of the industry for a while now," said William Starke, a distinguished engineer with IBM, on a video conference in advance of the announcement. "Instead of putting a lot of memory in each box, when we have a spike in demand for memory, I can borrow from my neighbors."

POWER10 uses something called the Open Memory Interface (OMI), so the server can use DDR4 now and be upgraded to DDR5 when it hits the market, and it can also use the GDDR6 memory found in GPUs. In theory, POWER10 will come with 1TB/sec of memory bandwidth and 1TB/sec of SMP bandwidth.

The POWER10 processor has quadruple the number of AES encryption engines per core compared to the POWER9. This enables several security enhancements. First, it means full memory encryption without degradation of performance, so no intruder can scan the contents of memory.

Second, it enables hardware and software security for containers to provide isolation. This is designed to address new security considerations associated with the higher density of containers. If a container were to be compromised, the POWER10 processor is designed to be able to prevent other containers in the same virtual machine from being affected by the same intrusion.

Finally, the POWER10 offers in-core AI business inferencing. It achieves this through on-chip support for bfloat16 for training, as well as INT8 and INT4, which are commonly used in AI inferencing. This will allow transactional workloads to add AI inferencing to their apps. IBM says the AI inferencing in POWER10 is 20 times that of POWER9.

Not mentioned in the announcement is operating-system support. POWER runs AIX, IBM's flavor of Unix, as well as Linux. That's not too surprising, since the news is coming out of Hot Chips, the annual semiconductor conference held at Stanford University. Hot Chips focuses on the latest chip advances, so software is usually left out.

IBM generally announces new POWER processors about a year in advance of release, so there is plenty of time for an AIX update.

Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world

@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 ways a legal team can enable open source)
[#]: via: (https://opensource.com/article/20/8/open-source-legal-organization)
[#]: author: (Jeffrey Robert Kaufman https://opensource.com/users/jkaufman)

3 ways a legal team can enable open source
======
Open source law is unique because of its unusual requirements for success. Learn how lawyers can get their organizations to "yes."
![Active listening in a meeting is a skill][1]

I am an open source lawyer for Red Hat. One important part of my job is providing information to other companies, including their in-house counsel, about how Red Hat builds enterprise-class products with a completely open source development model, and answering their questions about open source licensing in general. After hearing about Red Hat's success, these conversations often turn to discussions about how their organization can evolve to be more open source-aware and -capable, and lawyers at these meetings regularly ask how they can modify their practices to be more skilled in providing open source counsel to their employees.

In this article and the next, I'll convey what I normally tell in-house counsel about these topics. If you are not in-house counsel and instead work for a law firm supporting clients in the software space, you may also find this information useful. (If you are considering going to law school and becoming an open source lawyer, you should read Luis Villa's excellent article [_What to know before jumping into a career as an open source lawyer_][2].)

My perspective is based on my personal and possibly unique experience working in various engineering, product management, and lawyer roles. My atypical background means I see the world through a different lens than most lawyers. So, the ideas presented below may not be traditional, but they have served me well in my practice, and I hope they will benefit you.

### Connect with open source organizations

There is a multitude of open source organizations that are especially useful to open source lawyers. Many of these organizations have measurable influence over the views and interpretations of open source licenses. Consider getting involved with some of the more prominent ones, such as the [Open Source Initiative][3] (OSI), the [Software Freedom Conservancy][4], the [Software Freedom Law Center][5], the [Free Software Foundation][6], [Free Software Foundation Europe][7], and the [Linux Foundation][8]. There are also a number of useful mailing lists, such as OSI's [license-discuss][9] and [license-review][10], that are worth monitoring and even participating in.

Participating in these groups and lists will help you understand the myriad unique issues you may encounter when practicing open source law, including how various terms of the open source license texts are interpreted by the community. There is little case law to guide you, but there are plenty of people happy to help answer questions, and these resources are the best source of guidance. This is perhaps one of the most distinctive and amazing aspects of practicing open source law—the openness of the development community is equally matched by the openness of the legal community to provide perspective and advice. All you have to do is ask.

### Adopt the mindset of a business manager and find a path to yes

Product managers are ultimately held responsible for a product or service from cradle to grave, including enabling that product or service to get to market. Since the bulk of my career has been spent leading product-management organizations, my mind is programmed to find a path, however winding, to get a viable product or service to market. I encourage any lawyer to adopt this mindset, since products and services are the lifeblood of any business.

As such, the approach I have always taken in my legal practice involves spotting issues and advising clients of risk, _but always with the objective of finding a path to "YES,"_ especially when my analysis impacts product or service development and release. When evaluating legal issues for internal clients, my executive management or I may, at times, view the risk as too high. In such cases, keep encouraging everyone to work on the problem because, in my experience, solutions do eventually present themselves, often in unexpected ways.

Be sure to tap all your resources, including your software development clients (see below), as they can be an excellent source of creative approaches to solving problems, often using technology to resolve issues. I have found much joy in this method, and my clients seem pleased with this passion and sentiment. I encourage all lawyers to consider this approach.

Sadly, it is always easy to say "no" for self-preservation and to eliminate what may appear to be _any_ risk to the company. I have always found this response untenable. All business transactions have risk. As a counselor, it is your job to understand these risks and present them to your clients so that they may make educated business decisions. Simply saying "no" when any risk is present, without providing additional context or other paths forward to mitigate risks, does no good for the long-term success of the organization. Companies need to provide products and services to survive, and you should be helping find that path, whenever possible, to YES. You have an ethical responsibility to say "no" in certain situations, of course, but explore and exhaust all reasonable options first.

### Build relationships with developers

Building relationships with your software development clients is _absolutely critical_. Building rapport and trust with developers are two important ways to strengthen these relationships.

#### Build rapport

Your success as an open source lawyer is most often a direct result of your positive relationships with your software developers. In many cases, your software developers are the direct or indirect recipients of your legal advice, and they will be looking to you for guidance and counsel. Unfortunately, many software developers are suspicious of lawyers and often view them as obstacles to their ability to develop and release software. The best way to overcome this ill will is to build rapport with your clients. How you do that is different for most people, but here are some ideas:

1. **Show an interest in your clients' work:** Be inquisitive about the details of their project, how it works, the underlying programming language, how it connects to other systems, and how it will benefit the company. Some of these answers will be useful in your legal analysis when ascertaining legal risk and ways to mitigate such risk, but more importantly, this builds a solid foundation for an ongoing positive relationship with your client.
2. **Be clear to your client that you are working to find a path to YES:** It is perfectly acceptable to let your client know you are concerned about certain aspects of their project, but follow that up with ideas for mitigating those concerns. Reassure them that it is your job to work with them to find a solution, not to be a roadblock. The effect of this cannot be overstated.
3. **Participate in an open source project:** This is especially true if you have software development experience. Even if you do not, there are many ways to participate, such as helping with documentation or community outreach. Or request to join their status meetings just to learn more about their work. This will also allow you to provide counsel on demand and in real time so that the team may course-correct early in the process.

#### Have trust

Your software developers are very active in their open source communities and are some of the best resources for understanding current issues affecting open source software and development. Just as you connect with legal organizations, like your local bar or national legal organizations, to keep current on the latest legal developments, you should also engage with your software development resources for periodic briefings and to gain their counsel on various matters (e.g., how would the community view the use of this license for a project?).

### Relationships breed success

Open source law is unique because of its unusual requirements for success, namely, connections to other open source attorneys, open source organizations, and a deep and respectful relationship with clients. Success is a direct function of these relationships.

In part two of this series, I will explore how and why it is important to find a path to "open" and to develop scalable solutions for your organization.

Find out what it takes to become an open source lawyer.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/open-source-legal-organization

作者:[Jeffrey Robert Kaufman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jkaufman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team-discussion-mac-laptop-stickers.png?itok=AThobsFH (Active listening in a meeting is a skill)
[2]: https://opensource.com/article/16/12/open-source-lawyer
[3]: https://opensource.org/
[4]: https://sfconservancy.org/
[5]: https://www.softwarefreedom.org/
[6]: https://www.fsf.org/
[7]: https://fsfe.org/index.en.html
[8]: https://www.linuxfoundation.org/
[9]: https://lists.opensource.org/mailman/listinfo/license-discuss_lists.opensource.org
[10]: https://lists.opensource.org/mailman/listinfo/license-review_lists.opensource.org

@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 reasons small businesses choose open source tools for remote employees)
[#]: via: (https://opensource.com/article/20/8/business-tools)
[#]: author: (Adrian Johansen https://opensource.com/users/ajohansen)

3 reasons small businesses choose open source tools for remote employees
======
There are plenty of open source operations tools available if you lack the budget for premium software; here's how to evaluate your options.
![A chair in a field.][1]

The last decade or so has seen some significant changes in how businesses operate. The expansion of accessible, affordable, connected technology has removed barriers to many resources, enabling collaboration and execution of work by nearly anyone, from nearly anywhere. Though COVID-19 has made remote operations a necessity for many industries, plenty of businesses had already begun to embrace remote work as a more cost-effective, agile way of operating.

That said, not every business has the budget to subscribe to premium software as a service (SaaS) to keep its remote employees productive. The good news is that open source software can be every bit as robust and intuitive as the premium options available only to those with plenty of capital. The key is clearly identifying what you need from those tools in order to focus your search.

The open source community offers some smart solutions to the challenges of remote working, so let's look at a few key areas of need for businesses exploring how to operate more effectively.

### Flexible collaboration

One of the primary challenges for businesses operating remotely is managing the productivity of disparate teams. Employee management can be difficult enough when everybody is in the same room, but keeping teams in close collaboration when they may be working in different cities or even different time zones requires water-tight organization. This is why open source tools that make flexible yet robust collaboration possible are at the top of the list for remote teams today.

Among the best [open source tools][2] on the market at the moment is [Taiga][3], a project management platform. It uses the card-style task organization approach, providing a board that is visible to all employees on the network and keeps leadership and team members informed about the status of individual tasks and overall project progress. Open source project management software that mimics the easy collaboration premium services like Trello and Asana offer is increasingly popular. Many—[Odoo][4] and [OpenProject][5] among them—go further than their premium counterparts, offering integrated apps for forecasting and making it easier to share and transfer files or documents.

When it comes to remote collaboration, effective communication tools are also a must. Team chat platforms help ensure that remote employees have access to leadership and other team members whenever they need assistance or clarification on tasks. Their chat-room nature also helps build team camaraderie. [Mattermost][6] and [Rocket.Chat][7] are among the popular open source platforms that act as [effective alternatives to SaaS like Slack][8]; both have free options, public and private chat rooms, and the ability to upload and share media files.

### User-friendliness

There is a focus on effective user interfaces (UI) across the software industry at the moment, and this is arguably even more essential for remote teams. This is important not just for the day-to-day functionality of tools but also for ease of training. New-employee onboarding can be improved by implementing a clear, fluid process that introduces new hires to the business's core practices and tools. This means that any open source software deployed must be user-friendly enough to require minimal guidance and cause few disruptions during ramp-up.

It certainly helps when the software itself is designed intuitively. [Drawpile][9] is an excellent example of this. [This collaborative drawing platform][10], used for team meetings and creative projects alike, uses clear icons and interfaces similar to popular drawing platforms like Photoshop or MSPaint. It also takes a minimalistic, functional approach to avoid overwhelming the uninitiated. When reviewing open source software, business leaders need to consider the perspective of a new user and evaluate its ease of use.

It's also important to take into account what instructional assets the developers have provided. Many have online manuals, though the nature of open source can mean that these frequently change and evolve over time. Some, like the storage and sharing platform [Nextcloud][11], include separate training materials for users, developers, and administrators. Review the accessibility of concise documentation like this and ensure that it can be easily integrated, delivered, and understood during your remote-employee onboarding process.

### Security and support

A concern for any business owner operating in digital spaces is ensuring that operations are not just efficient but secure. One of the aspects that can make premium software attractive is robust cybersecurity protocols and integrated support services. In searching for open source software, it's important to understand the extent to which developers have put security protocols in place, and how this affects company, employee, and customer safety.

This can be especially important when utilizing platforms that facilitate the sharing of documents and discussion of potentially sensitive company information. Many options on the market, including [Jitsi][12] and [BigBlueButton][13], are upfront about the security measures and encryption in their software, which often go beyond those of premium platforms. However, it's equally important to make certain that employees themselves understand that their actions are as vital to security as the encryption. Be clear about what behavior can lead to phishing attacks that make the business vulnerable to two-factor-authentication bypassing, and how to safely share information through activities such as dynamic linking.

One of the most significant advantages that open source software holds over most premium products is access to a vibrant and supportive community. While there are core teams behind the software, there's a spirit of collaboration and collective ownership in its development and continued growth. [LibreOffice][14] actively encourages its users to [help improve the product][15] through feedback and forums. This means users can often communicate easily with experts whenever issues arise, work together to solve problems, and ultimately make the product better in the future.

### Conclusion

Review how open source software can improve collaboration and fit into your onboarding procedures, and examine the potential for security and community support. By using open source, you also retain control over your data, assets, and workflow. In a world that is swiftly embracing remote practices, discovering the right tools now can give you a competitive advantage.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/business-tools

作者:[Adrian Johansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ajohansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_WorkInPublic_4618517_1110_CS_A.png?itok=RwVrWArk (A chair in a field.)
[2]: https://opensource.com/article/20/3/open-source-working-home
[3]: https://taiga.io/
[4]: https://www.odoo.com/
[5]: https://www.openproject.org/
[6]: https://mattermost.com/
[7]: https://rocket.chat/
[8]: https://opensource.com/alternatives/slack
[9]: https://drawpile.net/
[10]: https://opensource.com/article/20/3/drawpile
[11]: https://nextcloud.com/
[12]: https://jitsi.org/
[13]: https://bigbluebutton.org/
[14]: https://www.libreoffice.org/
[15]: https://www.libreoffice.org/community/get-involved/

69
sources/talk/20200823 Open organizations through the ages.md
Normal file
@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open organizations through the ages)
[#]: via: (https://opensource.com/open-organization/20/8/global-history-collaboration)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)

Open organizations through the ages
======
On a global timeline, extensive collaboration is still a relatively new phenomenon. That could explain why we're still getting the hang of it.
![Alarm clocks with different time][1]

Consider the evolution of humankind. When we do, we will recognize that _having global discussions_ and _acting on global decisions_ is a relatively new phenomenon—only 100 years old, give or take a few years. We're still learning _how_ to make global decisions and execute on them successfully.

Yet our ability to improve those globally focused practices and skills is critical to our continued survival. And open principles will be the keys to helping us learn them—as they have been throughout history.

In the [first part of this series][2], I reviewed four factors one might use to assess globalization, and I explained how those factors relate specifically to [open organization principles][3]. Here, I'd like to present a chronology of how those principles have influenced certain developments that have made the world feel more connected and have turned _personal_ or _regional_ issues into _global_ issues.

This work draws on research by [Jeffrey D. Sachs][4], author of the book [_The Ages of Globalization_][5]. Sachs examines globalization from the genesis of humankind and argues that globalization has improved life and prosperity through the ages. He organizes human history into seven "ages" and examines the governance structure predominant in each. That structure determines how populations interact with each other (another way of assessing how socially inclusive they are). For Sachs, that inclusiveness is directly related to per capita GDP, or productivity per person. That productivity is where prosperity (or survival) is determined.

So let's look at the growth of globalization through the ages (I'll use Sachs' categorizations) and see where open organization principles began to take hold in early civilizations. In this piece, I'll discuss historical periods up to the beginning of the Industrial Revolution (the early 1800s).

### The Paleolithic Age (70,000‒10,000 BCE): The hunter/gatherer setting

According to Sachs, the history of globalization really begins at the dawn of humankind. And open organization principles are evident even then—though only in tight-knit groups of 25 to 30 members, called "bands," each very similar to "bottom-up" business teams today. Such bands resisted hierarchical organization, eventually connecting with other bands to form "clans" of around 150 people, then "mega-bands" of around 500 people, and finally "tribes" of around 1,500 people. But these groups never let go of that "band" concept (we can observe this in ancient ruins around the globe). Bands cooperated, but interactions were relatively weak (sometimes even warlike, as bands fought to protect territory). As bands' means of survival was primarily hunting and gathering, they lived a largely nomadic lifestyle.

### The Neolithic Age (10,000‒3,000 BCE): The ranching/farming setting

The advent of farming and ranching, Sachs says, marked this period of globalization. During this period, major segments of the human population started establishing permanent settlements, leading to a decline in the nomadic hunter/gatherer lifestyle, as agricultural developments allowed for more productivity per unit area. People could establish larger villages. With new agricultural techniques, ten individuals could survive on one square kilometer of land (compared to only one hunter/gatherer per square kilometer). Therefore, people were not forced to migrate to new areas to survive. Communities grew larger, and these larger communities set in motion new technical discoveries in metallurgy, the arts, numeric record keeping, ceramics, and even a writing system to record technical breakthroughs. In short, _sharing_ and _collaboration_ became keys to expanding know-how, evidence of open organization principles even thousands of years ago.

Having global discussions and acting on global decisions is a relatively new phenomenon—only 100 years old, give or take a few years.

### The Equestrian Age (3,000‒1,000 BCE): The land travel by horse setting

During the Neolithic Age, communities began connecting with each other using horses for transportation, giving rise to another era of globalization—what Sachs calls the Equestrian Age. Domestication of animals took place almost exclusively in Eurasia and North Africa, including the use of donkeys, cattle, camels, and other animals (not just horses). That domestication was by far the most important factor in the economic development and globalization of this age. Animal husbandry was a major influence on farming, mining, manufacturing, transportation, communications, warfare tactics, and governance. As greater long-distance movement was now possible (routes were formed to and from the east and west), whole civilizations began to form. The Egyptians introduced a system of writing and documentation, as well as public administration, which unified dynasties within the region. This led to advances in scientific fields, including mathematics, astronomy, engineering, metallurgy, and medicine.

### The Classical Age (1,000 BCE‒1500 CE): An information, documentation, and learning setting

According to Sachs, this era of globalization involves the globalization of politics—namely, conquering wide regions and creating empires. This includes the empires of Assyria, Persia, Greece, Rome, India, and China, and later the Ottoman and Mongol empires. This age saw the spread of ideas, technology, institutional concepts, and infrastructural development on a continental scale. As a result, larger communities developed, and there was a greater and broader level of collaboration, interaction, transparency, adaptability, and inclusivity than in the past. Through interaction between empires, better methods of growing food, raising farm animals, transporting goods, and fighting wars spread around the globe. Much of this knowledge spread through the thousands of books published, distributed, and taught (formal, community schooling began in this era, as did several documentation practices). Global trade improved with the establishment of navies to police and protect travel routes. Simply put, this was a setting of multinational governance with ever-expanding collaboration, inclusivity, and larger community development.

### The Ocean Age (1500‒1800 CE): The long-distance sea travel and exploration setting

During this period of globalization, Sachs says, the Old World and the New World, isolated from each other since the Paleolithic Age, finally united through ever faster ocean-going vessels, leading to greater two-way exchange. Plants, animals—and, unfortunately, diseases and pathogens—trafficked between them. Travel from Europe to Asia (via the Cape of Good Hope) increased during this era. This age saw the rise of global capitalism, with the establishment of global-scale economic organizations like the British East India Company (chartered in 1600) and the Dutch East India Company (chartered in 1602). Each supplied global markets with goods from remote areas. But while global trade produced great prosperity during this era, in many regions it also led to great cruelty (the exploitation of indigenous peoples, for example).

By the 1800s, communities were interacting on a scale more global than ever before, and open organization principles were more influential than ever. And yet this era still saw a significant deficit of overall inclusivity and collaboration. We should note that this era of globalization is relatively _recent_ in history when viewed on the timescale Sachs outlines in _The Ages of Globalization_.

How did we get from there to the current age? In the next article, I'll explore those developments.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/20/8/global-history-collaboration

作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/open-organization/20/7/globalization-history-open
[3]: https://opensource.com/open-organization/resources/open-org-definition
[4]: https://en.wikipedia.org/wiki/Jeffrey_Sachs
[5]: https://cup.columbia.edu/book/the-ages-of-globalization/9780231193740
75
sources/talk/20200824 Origin stories about Unix.md
Normal file
75
sources/talk/20200824 Origin stories about Unix.md
Normal file
@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Origin stories about Unix)
[#]: via: (https://opensource.com/article/20/8/unix-history)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)

Origin stories about Unix
======

Brian Kernighan, one of the original Unix gurus, shares his insights into the origins of Unix and its associated technology.

![Old UNIX computer][1]

Brian W. Kernighan opens his book _Unix: A History and a Memoir_ with the line, "To understand how Unix happened, we have to understand Bell Labs, especially how it worked and the creative environment that it provided." And so begins a wonderful trip back in time, following the creation and development of early Unix with someone who was there.

You may recognize Brian Kernighan's name. He is the "K" in [AWK][2], the "K" in "K&R C" (he co-wrote the original "Kernighan and Ritchie" book about the C programming language), and he has authored and co-authored many books about Unix and technology. On my own bookshelf, I can find several of Kernighan's books, including _The Unix Programming Environment_ (with Rob Pike), _The AWK Programming Language_ (with Alfred Aho and Peter J. Weinberger), and _The C Programming Language_ (with Dennis M. Ritchie). And of course, his latest entry, _Unix: A History and a Memoir_.

I interviewed Brian about this most recent book. I think we spent as much time discussing the book as we did reminiscing about Unix and groff. Below are a few highlights of our conversation:

### JH: What prompted you to write this book?

BWK: I thought it would be nice to have a history of what happened at Bell Labs. Jon Gertner wrote a book, _The Idea Factory: Bell Labs and the Great Age of American Innovation_, that described the physical science work at Bell Labs. This was an authoritative work, very technical, and not something that I could do, but it was kind of the inspiration for this book.

There's also a book by James Gleick, _The Information: A History, a Theory, a Flood_, that isn't specific to Bell Labs, but it's very interesting. That was kind of an inspiration for this, too.

I originally wanted to write an academic history of the Labs, but I realized it was better to write something based on my own memories and the memories of those who were there at the time. So that's where the book came from.

### JH: What are a few stories from the book you'd like people to read about?

BWK: I think there are really two stories I'd like people to know about, and both of them are origin myths. I heard them afresh when Ken Thompson and I were at the [Vintage Computer Festival about a year ago][3].

One is the origin of Unix itself—how Bonnie, Ken's wife, went off on vacation for three weeks, just at the time that Ken thought he was about three weeks away from having a complete operating system. This was, of course, due to Ken's very competent programming abilities, and it was incredible he was able to pull it off. It was written entirely in assembly language and was really amazing work.

[Note: This story starts on page 33 in the book. I'll briefly relate it here. Thompson was working on "a disk scheduling algorithm that would try to maximize throughput on any disk," but particularly the PDP-7's very tall single-platter disk drive. In testing the algorithm, Thompson realized, "I was three weeks from an operating system." He broke down his work into three units—an editor, an assembler, and a kernel—and wrote one per week. And about that time, Bonnie took their son to visit Ken's parents in California, so Thompson had those three weeks to work undisturbed.]

And then there's the origin story for `grep`. Over the years, I'd gotten the story slightly wrong—I thought Ken had written `grep` entirely on demand. It was classic Ken that he had a great idea, a neat idea, a clean idea, and he was able to write it very quickly. Regular expressions (regex) were already present in the text editor, so really, he just pulled regex from the editor and turned it into a program.

[Note: This story starts on page 70 in the book. Doug McIlroy said, "Wouldn't it be great if we could look for things in files?" Thompson replied, "Let me think about it overnight," and the next morning presented McIlroy with the `grep` program he'd written.]
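
For readers who have never used it, the whole idea of `grep` fits in one command: search files for lines that match a regular expression. The file name and matching line below are hypothetical, but the syntax is the standard one:

```
$ grep -n 'operating system' notes.txt
33:I was three weeks from an operating system.
```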
### JH: What other stories did you not get to tell in the book?

BWK: I immediately think of the "Peter Weinberger's face" story! There were a lot of pranks based on having a picture of Peter's face pop up in random places. Someone attached a picture of Peter with magnets to the metal wall of a stairway. And there was a meeting once where Peter was up in front, not in the audience. And while he was talking, everyone in the audience held up a mask that had Peter's face printed on it.

[Note: The "Peter Weinberger's face" story starts on page 47 in the book. Spinroot also has an [archive of the prank][4] with examples.]

I talked to a lot of people from the Labs about the book. I would email people, and I would receive long replies with more stories than I could fit into the length or the narrative. Honestly, there's probably a whole other book that someone else could write just based on those stories. It's amazing how many people come forward with stories about Unix and running Unix on systems I haven't even heard of.

## A fantastic read

_Unix: A History and a Memoir_ is well-titled. Throughout the book, Kernighan shares details on the rich history of Unix, including background on Bell Labs, the spark of Unix with CTSS and Multics in 1969, and the first edition in 1971. Kernighan also provides his own reflection on how Unix came to be such a dominant platform, including notes on portability, Unix tools, the Unix Wars, and Unix descendants such as Minix, Linux, BSD, and Plan 9. You will also find nuggets of information and great stories that fill in details around some of the everyday features of Unix.

At just over 180 pages, _Unix: A History and a Memoir_ is a fantastic read. If you are a fan of Linux, or any open source Unix, including the BSD versions, you will want to read this book.

_Unix: A History and a Memoir_ is available on [Amazon][5] in paperback and e-book formats. Published by Kindle Direct Publishing, October 2019.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/unix-history

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/retro_old_unix_computer.png?itok=SYAb2xoW (Old UNIX computer)
[2]: https://opensource.com/resources/what-awk
[3]: https://www.youtube.com/watch?v=EY6q5dv_B-o
[4]: https://spinroot.com/pico/pjw.html
[5]: https://www.amazon.com/dp/1695978552
@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: (Starryi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why we open sourced our security project)
[#]: via: (https://opensource.com/article/20/8/why-open-source)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)

Why we open sourced our security project
======

It’s not just coding that we do in the open.

![A lock on the side of a building][1]

When Nathaniel McCallum and I embarked on the project that is now called [Enarx][2], we made one decision right at the beginning: the code for Enarx would be open source, a stance fully supported by our employer, Red Hat (see the [standard disclaimer][3] on my blog). All of it, and forever.

That's a decision we've not regretted at any point, and it's something we stand behind. As soon as we had enough code for a demo and were ready to show it, we created a [repository on GitHub][4] and made it public. There's one very small exception: some details of upcoming chip features are shared with us under an NDA[1][5], and publishing any code we might write for them would breach that NDA. But where this applies (which is rarely), we are absolutely clear with the vendors that we intend to make the code open as soon as possible, and we lobby them to release details as early as they can (which may be earlier than they might prefer) so that more experts can look over both their designs and our code.

### Auditability and trust

This brings us to possibly the most important reasons for making Enarx open source: auditability and trust. Enarx is a security-related project, and I believe passionately that [security should be done in the open][6]. Beyond that, if anybody is actually going to trust their sensitive data, algorithms, and workloads to a piece of software, then they want to be in a position where as many experts as possible have looked at it, scrutinised it, criticised it, and improved it, whether that is the people running the software, their employees, contractors, or (even better) the wider security community. The more people who check the code, the happier you should be to [trust it][7]. This is important for any piece of security software, but it is _vital_ for software such as Enarx, which is designed to protect your most sensitive workloads.

### Bug catching

There are bugs in Enarx. I know: I'm writing some of the code,[2][8] and I found one yesterday (which I'd put in), just as I was about to give a demo.[3][9] It is very, very difficult to write perfect code, and we know that if we make our source open, then more people can help us fix issues.

### Commonwealth

For Nathaniel and me, open source is an ethical issue, and we make no apologies for that. I think it's the same for most, if not all, of the team working on Enarx. This includes a number of Red Hat employees (see standard disclaimer), so it shouldn't come as a surprise, but we also have non-Red Hat contributors from a number of backgrounds. We feel that Enarx should be a Common Good and [contribute to the commonwealth][10] of intellectual property out there.

### More brainpower

Making something open source doesn't just make it easier to fix bugs: it can improve the quality of what you produce in general. The more brainpower you have to apply to the problem, the better your chances of making something great, assuming that the brainpower is applied efficiently (not always an easy task!). In a recent design meeting, one of the participants said towards the end, "I'm sure I could implement some of this, but I don't know a huge amount about this topic, and I'm worried that I'm not contributing to this discussion." In fact, they had contributed by asking questions and clarifying some points, and we assured them that we wanted to include experienced, senior developers for their expertise and knowledge and to pull out assumptions and validate the design, and not because we expected everybody to be experts in all parts of the project.

Having bright people involved in design and coding spreads expertise and knowledge and helps keep the work from becoming an insulated, isolated "ivory tower" construction, understood by few, and almost impossible to validate.

### Not just code

It's not just coding that we do in the open. We manage our architecture in the open, our design meetings, our protocol design, our design methodology,[4][11] our documentation, our bug tracking, our chat, our CI/CD processes: all of it is open. The one exception is our [vulnerability management][12] process, which needs the opportunity for confidential exposure for a limited time. Here is where you can find our resources:

  * [Code][4]
  * [Wiki][13]
  * Design is on the wiki and [request for comments][14] repo
  * [Issues][15] and [pull requests][16]
  * [Chat][17] (thanks to [Rocket.chat][18]!)
  * CI/CD resources thanks to [Packet][19]!
  * [Stand-ups][20]

We also take diversity seriously, and the project contributors are subject to the [Contributor Covenant Code of Conduct][21].

In short, Enarx is an open project. I'm sure we could do better, and we'll strive for that, but our underlying principles are that open is good in general and vital for security. If you agree, please come and visit!

* * *

  1. Non-disclosure agreement
  2. To the surprise of many of the team, including myself. At least it's not in Perl.
  3. I fixed it. Admittedly, after the demo.
  4. We've just moved to a sprint pattern, the details of which we designed and agreed to in the open.

* * *

_This article was originally published on [Alice, Eve, and Bob][22] and is adapted and reprinted with the author's permission._

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/why-open-source

作者:[Mike Bursell][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mikecamel
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
[2]: https://enarx.dev/
[3]: https://aliceevebob.com/
[4]: https://github.com/enarx
[5]: tmp.PM1nWCfATC#1
[6]: https://opensource.com/article/17/10/many-eyes
[7]: https://aliceevebob.com/2019/06/18/trust-choosing-open-source/
[8]: tmp.PM1nWCfATC#2
[9]: tmp.PM1nWCfATC#3
[10]: https://opensource.com/article/17/11/commonwealth-open-source
[11]: tmp.PM1nWCfATC#4
[12]: https://aliceevebob.com/2020/05/26/security-disclosure-or-vulnerability-management/
[13]: https://github.com/enarx/enarx/wiki
[14]: https://github.com/enarx/rfcs
[15]: https://github.com/enarx/enarx/issues
[16]: https://github.com/enarx/enarx/pulls
[17]: https://chat.enarx.dev/
[18]: https://rocket.chat/
[19]: https://packet.com/
[20]: https://github.com/enarx/enarx/wiki/How-to-contribute
[21]: https://github.com/enarx/.github/blob/master/CODE_OF_CONDUCT.md
[22]: https://aliceevebob.com/2020/07/28/why-enarx-is-open/
140
sources/talk/20200826 What is DNS and how does it work.md
Normal file
140
sources/talk/20200826 What is DNS and how does it work.md
Normal file
@ -0,0 +1,140 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is DNS and how does it work?)
[#]: via: (https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html)
[#]: author: (Keith Shaw and Josh Fruhlinger )

What is DNS and how does it work?
======

The Domain Name System resolves the names of internet sites to their underlying IP addresses, adding efficiency and even security in the process.

Thinkstock

The Domain Name System (DNS) is one of the foundations of the internet, yet most people outside of networking probably don’t realize they use it every day to do their jobs, check their email or waste time on their smartphones.

At its most basic, DNS is a directory of names that match with numbers. The numbers, in this case, are IP addresses, which computers use to communicate with each other. Most descriptions of DNS use the analogy of a phone book, which is fine for people over the age of 30 who know what a phone book is.

If you’re under 30, think of DNS like your smartphone’s contact list, which matches people’s names with their phone numbers and email addresses. Then multiply that contact list by everyone else on the planet.

### A brief history of DNS

When the internet was very, very small, it was easier for people to match specific IP addresses with specific computers, but that didn’t last for long as more devices and people joined the growing network. It's still possible to type a specific IP address into a browser to reach a website, but then, as now, people wanted an address made up of easy-to-remember words, of the sort that we would recognize as a domain name (like networkworld.com) today. In the 1970s and early '80s, those names and addresses were assigned by one person — [Elizabeth Feinler at Stanford][2] — who maintained a master list of every Internet-connected computer in a text file called [HOSTS.TXT][3].

This was obviously an untenable situation as the internet grew, not least because Feinler only handled requests before 6 p.m. California time, and took time off for Christmas. In 1983, Paul Mockapetris, a researcher at USC, was tasked with coming up with a compromise among multiple suggestions for dealing with the problem. He basically ignored them all and developed his own system, which he dubbed DNS. While it's obviously changed quite a bit since then, at a fundamental level it still works the same way it did nearly 40 years ago.

### How DNS servers work

The DNS directory that matches names to numbers isn’t located all in one place in some dark corner of the internet. With [more than 332 million domain names listed at the end of 2017][4], a single directory would be very large indeed. Like the internet itself, the directory is distributed around the world, stored on domain name servers (generally referred to as DNS servers for short) that all communicate with each other on a very regular basis to provide updates and redundancies.

### Authoritative DNS servers vs. recursive DNS servers

When your computer wants to find the IP address associated with a domain name, it first makes its request to a recursive DNS server, also known as a recursive resolver. A recursive resolver is a server that is usually operated by an ISP or other third-party provider, and it knows which other DNS servers it needs to ask to resolve the name of a site to its IP address. The servers that actually have the needed information are called authoritative DNS servers.
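
You can watch a recursive resolver do this work with a standard lookup tool such as `dig`, available on most Linux and macOS systems. The command asks your default resolver for a domain's address; the address shown below is illustrative, not necessarily the site's current one:

```
$ dig +short networkworld.com
151.101.2.165
```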
### DNS servers and IP addresses

Each domain can correspond to more than one IP address. In fact, some sites have hundreds or more IP addresses that correspond with a single domain name. For example, the server your computer reaches for [www.google.com][5] is likely completely different from the server that someone in another country would reach by typing the same site name into their browser.

Another reason for the distributed nature of the directory is the amount of time it would take for you to get a response if there were only one location for the directory, shared among the millions, probably billions, of people also looking for information at the same time. That’s one long line to use the phone book.

### What is DNS caching?

To get around this problem, DNS information is shared among many servers. But information for sites visited recently is also cached locally on client computers. Chances are that you use google.com several times a day. Instead of your computer querying the DNS name server for the IP address of google.com every time, that information is saved on your computer so it doesn’t have to access a DNS server to resolve the name to its IP address. Additional caching can occur on the routers used to connect clients to the internet, as well as on the servers of the user’s Internet Service Provider (ISP). With so much caching going on, the number of queries that actually make it to DNS name servers is a lot lower than it would seem.
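
A simple way to see resolver caching in action is to run the same `dig` query twice and compare the reported query time. The second answer typically comes straight from the resolver's cache; the timings below are illustrative:

```
$ dig networkworld.com | grep "Query time"
;; Query time: 48 msec
$ dig networkworld.com | grep "Query time"
;; Query time: 0 msec
```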
### How do I find my DNS server?

Generally speaking, the DNS server you use will be established automatically by your network provider when you connect to the internet. If you want to see which servers are your primary nameservers — generally the recursive resolver, as described above — there are web utilities that can provide a host of information about your current network connection. [Browserleaks.com][6] is a good one, and it provides a lot of information, including your current DNS servers.
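
You can also check from the command line. `nslookup` reports which server answered the query, and on many Linux systems the configured resolver is listed in `/etc/resolv.conf`. The addresses below are illustrative:

```
$ nslookup networkworld.com
Server:         192.168.0.1
Address:        192.168.0.1#53

Non-authoritative answer:
Name:   networkworld.com
Address: 151.101.2.165

$ cat /etc/resolv.conf
nameserver 192.168.0.1
```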
### Can I use 8.8.8.8 DNS?

It's important to keep in mind, though, that while your ISP will set a default DNS server, you're under no obligation to use it. Some users may have reason to avoid their ISP's DNS — for instance, some ISPs use their DNS servers to redirect requests for nonexistent addresses to [pages with advertising][7].

If you want an alternative, you can instead point your computer to a public DNS server that will act as a recursive resolver. One of the most prominent public DNS servers is Google's; its IP address is 8.8.8.8. Google's DNS services tend to be [fast][8], and while there are certain questions about the [ulterior motives Google has for offering the free service][9], they can't really get any more information from you than they already get from Chrome. Google has a page with detailed instructions on how to [configure your computer or router][10] to connect to Google's DNS.
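
Before changing anything system-wide, you can send a one-off query to a specific server with `dig`'s `@` syntax to confirm that it is reachable and answering (again, the returned address is illustrative):

```
$ dig @8.8.8.8 +short networkworld.com
151.101.2.165
```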
### How DNS adds efficiency

DNS is organized in a hierarchy that helps keep things running quickly and smoothly. To illustrate, let’s pretend that you wanted to visit networkworld.com.

The initial request for the IP address is made to a recursive resolver, as discussed above. The recursive resolver knows which other DNS servers it needs to ask to resolve the name of a site (networkworld.com) to its IP address. This search leads to a root server, which knows all the information about top-level domains, such as .com, .net, .org and all of those country domains like .cn (China) and .uk (United Kingdom). Root servers are located all around the world, so the system usually directs you to the closest one geographically.

Once the request reaches the correct root server, it goes to a top-level domain (TLD) name server, which stores the information for the second-level domain, the words used before you get to the .com, .org, .net (for example, that information for networkworld.com is “networkworld”). The request then goes to the domain’s name server, which holds the information about the site and its IP address. Once the IP address is discovered, it is sent back to the client, which can now use it to visit the website. All of this takes mere milliseconds.
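
You can watch `dig` walk this hierarchy itself and print each delegation step with the `+trace` option. The output below is heavily abbreviated, and the nameserver names and final address are illustrative:

```
$ dig +trace networkworld.com
.                     518400  IN  NS  a.root-servers.net.
com.                  172800  IN  NS  a.gtld-servers.net.
networkworld.com.       3600  IN  NS  ns1.example-dns.net.
networkworld.com.        300  IN  A   151.101.2.165
```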
Because DNS has been working for the past 30-plus years, most people take it for granted. Security also wasn’t considered when building the system, so [hackers have taken full advantage of this][11], creating a variety of attacks.

### DNS reflection attacks

DNS reflection attacks can swamp victims with high-volume messages from DNS resolver servers. Attackers request large DNS files from all the open DNS resolvers they can find and do so using the spoofed IP address of the victim. When the resolvers respond, the victim receives a flood of unrequested DNS data that overwhelms their machines.

### DNS cache poisoning

[DNS cache poisoning][12] can divert users to malicious websites. Attackers manage to insert false address records into the DNS so that when a potential victim requests an address resolution for one of the poisoned sites, the DNS responds with the IP address for a different site, one controlled by the attacker. Once on these phony sites, victims may be tricked into giving up passwords or suffer malware downloads.

### DNS resource exhaustion

[DNS resource exhaustion][13] attacks can clog the DNS infrastructure of ISPs, blocking the ISP’s customers from reaching sites on the internet. This can be done by attackers registering a domain name and using the victim’s name server as the domain’s authoritative server. So if a recursive resolver can’t supply the IP address associated with the site name, it will ask the name server of the victim. Attackers generate large numbers of requests for their domain and toss in nonexistent subdomains to boot, which leads to a torrent of resolution requests being fired at the victim’s name server, overwhelming it.
### What is DNSSEC?

DNS Security Extensions (DNSSEC) is an effort to make communication among the various levels of servers involved in DNS lookups more secure. It was designed within the Internet Engineering Task Force (IETF) and is championed by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization in charge of the DNS system.

ICANN became aware of weaknesses in the communication between the DNS top-level, second-level and third-level directory servers that could allow attackers to hijack lookups. That would allow the attackers to respond to requests for lookups to legitimate sites with the IP address for malicious sites. These sites could upload malware to users or carry out phishing and pharming attacks.

DNSSEC addresses this by having each level of DNS server digitally sign its records, which ensures that the answers sent back to end users haven’t been commandeered by attackers. This creates a chain of trust so that at each step in the lookup, the integrity of the request is validated.

In addition, DNSSEC can determine if domain names exist, and if one doesn’t, it won’t let that fraudulent domain be delivered to innocent requesters seeking to have a domain name resolved.
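
You can see these signatures for yourself by asking `dig` to include DNSSEC records in its answer; the `RRSIG` record in the output is the digital signature over the returned data. The output below is abbreviated, with the timestamps, key tag, and signature shown as illustrative placeholders:

```
$ dig +dnssec +short example.com
93.184.216.34
A 13 2 86400 20250101000000 20241201000000 12345 example.com. UNs4VT...
```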
As more domain names are created, more devices join the network via the internet of things and other “smart” systems, and [more sites migrate to IPv6][14], maintaining a healthy DNS ecosystem will be required. The growth of big data and analytics also [brings a greater need for DNS management][15].

### SIGRed: A wormable DNS flaw rears its head

The world got a good look recently at the sort of chaos weaknesses in DNS could cause with the discovery of a flaw in Windows DNS servers. The potential security hole, dubbed SIGRed, [requires a complex attack chain][16], but can exploit unpatched Windows DNS servers to potentially install and execute arbitrary malicious code on clients. And the exploit is "wormable," meaning that it can spread from computer to computer without human intervention. The vulnerability was considered alarming enough that U.S. federal agencies were [given only a few days to install patches][17].

### DNS over HTTPS: A new privacy landscape

As of this writing, DNS is on the verge of one of the biggest shifts in its history. Google and Mozilla, who together control the lion's share of the browser market, are encouraging a move towards [DNS over HTTPS][18], or DoH, in which DNS requests are encrypted by the same HTTPS protocol that already protects most web traffic. In Chrome's implementation, the browser checks to see if the DNS servers support DoH, and if they don't, it reroutes DNS requests to Google's 8.8.8.8.

It's a move not without controversy. Paul Vixie, who did much of the early work on the DNS protocol back in the 1980s, calls the move a "[disaster][19]" for security: corporate IT will have a much harder time monitoring or directing DoH traffic that traverses their network, for instance. Still, Chrome is omnipresent and DoH will soon be turned on by default, so we'll see what the future holds.

_(Keith Shaw is a former senior editor for Network World and an award-winning writer, editor and product reviewer who has written for many publications and websites around the world.)_

_(Josh Fruhlinger is a writer and editor who lives in Los Angeles.)_

Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html

作者:[Keith Shaw and Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.internethalloffame.org/blog/2012/07/23/why-does-net-still-work-christmas-paul-mockapetris
[3]: https://tools.ietf.org/html/rfc608
[4]: http://www.verisign.com/en_US/domain-names/dnib/index.xhtml?section=cc-tlds
[5]: http://www.google.com
[6]: https://browserleaks.com/ip
[7]: https://www.networkworld.com/article/2246426/comcast-redirects-bad-urls-to-pages-with-advertising.html
[8]: https://www.networkworld.com/article/3194890/comparing-the-performance-of-popular-public-dns-providers.html
[9]: https://blog.dnsimple.com/2015/03/why-and-how-to-use-googles-public-dns/
[10]: https://developers.google.com/speed/public-dns/docs/using
[11]: https://www.networkworld.com/article/2838356/network-security/dns-is-ubiquitous-and-its-easily-abused-to-halt-service-or-steal-data.html
[12]: https://www.networkworld.com/article/2277316/tech-primers/tech-primers-how-dns-cache-poisoning-works.html
[13]: https://www.cloudmark.com/releases/docs/whitepapers/dns-resource-exhaustion-v01.pdf
[14]: https://www.networkworld.com/article/3254575/lan-wan/what-is-ipv6-and-why-aren-t-we-there-yet.html
[15]: http://social.dnsmadeeasy.com/blog/opinion/future-big-data-dns-analytics/
[16]: https://www.csoonline.com/article/3567188/wormable-dns-flaw-endangers-all-windows-servers.html
[17]: https://federalnewsnetwork.com/cybersecurity/2020/07/cisa-gives-agencies-a-day-to-remedy-windows-dns-server-vulnerability/
[18]: https://www.networkworld.com/article/3322023/dns-over-https-seeks-to-make-internet-use-more-private.html
[19]: https://www.theregister.com/2018/10/23/paul_vixie_slaps_doh_as_dns_privacy_feature_becomes_a_standard/
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world
@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is IPv6, and why aren’t we there yet?)
[#]: via: (https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html)
[#]: author: (Keith Shaw and Josh Fruhlinger )

What is IPv6, and why aren’t we there yet?
======

IPv6 has been in the works since 1998 to address the shortfall of IP addresses available under IPv4, yet despite its efficiency and security advantages, adoption is still slow.

Thinkstock

For the most part, the dire warnings about running out of internet addresses have ceased because, slowly but surely, migration from the world of Internet Protocol Version 4 (IPv4) to IPv6 has begun, and software is in place to prevent the address apocalypse that many were predicting.

But before we see where we are and where we’re going with IPv6, let’s go back to the early days of internet addressing.

**[ Related: [IPv6 deployment guide][1] + [How to plan your migration to IPv6][2] ]**

### What is IPv6 and why is it important?

IPv6 is the latest version of the Internet Protocol, which identifies devices across the internet so they can be located. Every device that uses the internet is identified through its own IP address in order for internet communication to work. In that respect, it’s just like the street addresses and zip codes you need to know in order to mail a letter.

The previous version, IPv4, uses a 32-bit addressing scheme to support 4.3 billion devices, which was thought to be enough. However, the growth of the internet, personal computers, smartphones and now Internet of Things devices proves that the world needed more addresses.

Fortunately, the Internet Engineering Task Force (IETF) recognized this 20 years ago. In 1998 it created IPv6, which instead uses 128-bit addressing to support approximately 340 undecillion addresses (2 to the 128th power, or roughly 340 trillion trillion trillion). Instead of the IPv4 address method of four sets of one- to three-digit numbers, IPv6 uses eight groups of four hexadecimal digits, separated by colons.
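
For comparison, here is what the two formats look like side by side. Both addresses come from ranges reserved for documentation, so they are safe illustrations rather than real hosts:

```
192.0.2.146                                (IPv4: four decimal fields, 32 bits)
2001:0db8:85a3:0000:0000:8a2e:0370:7334    (IPv6: eight hexadecimal fields, 128 bits)
```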
### What are the benefits of IPv6?

In its work, the IETF included enhancements to IPv6 compared with IPv4. The IPv6 protocol can handle packets more efficiently, improve performance and increase security. It enables internet service providers to reduce the size of their routing tables by making them more hierarchical.

### Network address translation (NAT) and IPv6

Adoption of IPv6 has been delayed in part due to network address translation (NAT), which takes private IP addresses and turns them into public IP addresses. That way a corporate machine with a private IP address can send to and receive packets from machines located outside the private network that have public IP addresses.

Without NAT, large corporations with thousands or tens of thousands of computers would devour enormous quantities of public IPv4 addresses if they wanted to communicate with the outside world. But those IPv4 addresses are limited and nearing exhaustion to the point of having to be rationed.

NAT helps alleviate the problem. With NAT, thousands of privately addressed computers can be presented to the public internet by a NAT machine such as a firewall or router.

The way NAT works is when a corporate computer with a private IP address sends a packet to a public IP address outside the corporate network, it first goes to the NAT device. The NAT notes the packet’s source and destination addresses in a translation table.

The NAT changes the source address of the packet to the public-facing address of the NAT device and sends it along to the external destination. When a packet replies, the NAT translates the destination address to the private IP address of the computer that initiated the communication. This can be done so that a single public IP address can represent multiple privately addressed computers.
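
A simplified, hypothetical view of a NAT device's translation table illustrates the flow. The addresses come from documentation and example ranges, and the layout is not any particular product's format:

```
inside (private)       outside (NAT public)      remote destination
10.0.0.5:51000    <->  203.0.113.7:40001    <->  93.184.216.34:443
10.0.0.9:51002    <->  203.0.113.7:40002    <->  93.184.216.34:443
```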
### Who is deploying IPv6?

Carrier networks and ISPs have been the first group to start deploying IPv6 on their networks, with mobile networks leading the charge. For example, T-Mobile USA has more than 90% of its traffic going over IPv6, with Verizon Wireless close behind at 82.25%. Comcast and AT&T have their networks at 63% and 65%, respectively, according to the industry group [World IPv6 Launch][3].

Major websites are following suit: just under 30% of the Alexa Top 1000 websites are currently reachable over IPv6, World IPv6 Launch says.

Enterprises are trailing in deployment, with slightly under one-fourth of enterprises advertising IPv6 prefixes, according to the Internet Society’s [“State of IPv6 Deployment 2017” report][4]. Complexity, costs and time needed to complete are all reasons given. In addition, some projects have been delayed due to software compatibility. For example, a [January 2017 report][5] said a bug in Windows 10 was “undermining Microsoft’s efforts to roll out an IPv6-only network at its Seattle headquarters.”

### When will more deployments occur?

The Internet Society said the price of IPv4 addresses will peak in 2018, and then prices will drop after IPv6 deployment passes the 50% mark. Currently, [according to Google][6], the world has 20% to 22% IPv6 adoption, but in the U.S. it’s about 32%.

As the price of IPv4 addresses begins to drop, the Internet Society suggests that enterprises sell off their existing IPv4 addresses to help fund IPv6 deployment. The Massachusetts Institute of Technology has done this, according to [a note posted on GitHub][7]. The university concluded that 8 million of its IPv4 addresses were “excess” and could be sold without impacting current or future needs since it also holds 20 nonillion IPv6 addresses. (A nonillion is the numeral one followed by 30 zeroes.)

In addition, as more deployments occur, more companies will start charging for the use of IPv4 addresses, while providing IPv6 services for free. [UK-based ISP Mythic Beasts][8] says “IPv6 connectivity comes as standard,” while “IPv4 connectivity is an optional extra.”

### When will IPv4 be “shut off”?

Most of the world [“ran out” of new IPv4 addresses][9] between 2011 and 2018 – but we won’t completely be out of them as IPv4 addresses get sold and re-used (as mentioned earlier), and any leftover addresses will be used for IPv6 transitions.

There’s no official switch-off date, so people shouldn’t be worried that their internet access will suddenly go away one day. As more networks transition, more content sites support IPv6 and more end users upgrade their equipment for IPv6 capabilities, the world will slowly move away from IPv4.

### Why is there no IPv5?

There was an IPv5 that was also known as Internet Stream Protocol, abbreviated simply as ST. It was designed for connection-oriented communications across IP networks with the intent of supporting voice and video.

It was successful at that task, and was used experimentally. One shortcoming that undermined its popular use was its 32-bit address scheme – the same scheme used by IPv4. As a result, it had the same problem that IPv4 had – a limited number of possible IP addresses. That led to the development and eventual adoption of IPv6. Even though IPv5 was never adopted publicly, it had used up the name IPv5.

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html

作者:[Keith Shaw and Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3235805/lan-wan/ipv6-deployment-guide.html#tk.nww-fsb
[2]: https://www.networkworld.com/article/3214388/lan-wan/how-to-plan-your-migration-to-ipv6.html#tk.nww-fsb
[3]: http://www.worldipv6launch.org/measurements/
[4]: https://www.internetsociety.org/resources/doc/2017/state-of-ipv6-deployment-2017/
[5]: https://www.theregister.co.uk/2017/01/19/windows_10_bug_undercuts_ipv6_rollout/
[6]: https://www.google.com/intl/en/ipv6/statistics.html
[7]: https://gist.github.com/simonster/e22e50cd52b7dffcf5a4db2b8ea4cce0
[8]: https://www.mythic-beasts.com/sales/ipv6
[9]: https://ipv4.potaroo.net/
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
@ -0,0 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Information could be half the world's mass by 2245, says researcher)
[#]: via: (https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Information could be half the world's mass by 2245, says researcher
======

Because of the amount of energy and resources used to create and store digital information, the data should be considered physical, and not just invisible ones and zeroes, according to one theoretical physicist.

Luza Studios / Getty Images

Digital content should be considered a fifth state of matter, along with gas, liquid, plasma and solid, suggests one university scholar.

Because of the energy and resources used to create, store and distribute data physically and digitally, data has evolved and should now be considered to have mass, according to Melvin Vopson, a senior lecturer at the U.K.'s University of Portsmouth and author of an article, "[The information catastrophe][1]," published in the journal AIP Advances.

Vopson also claims digital bits are on a course to overwhelm the planet and will eventually outnumber atoms.

The idea of assigning mass to digital information builds on some existing data points. Vopson cites an IBM estimate that data is created at a rate of 2.5 quintillion bytes every day. He also factors in data storage densities of more than 1 terabit per inch to compare the size of a bit to the size of an atom.

Presuming 50% annual growth in data generation, "the number of bits would equal the number of atoms on Earth in approximately 150 years," according to a [media release][2] announcing Vopson's research.

"It would be approximately 130 years until the power needed to sustain digital information creation would equal all the power currently produced on planet Earth, and by 2245, half of Earth's mass would be converted to digital information mass," the release reads.
The COVID-19 pandemic is increasing the rate of digital data creation and accelerating this process, Vopson adds.

He warns of an impending saturation point: "Even assuming that future technological progress brings the bit size down to sizes closer to the atom itself, this volume of digital information will take up more than the size of the planet, leading to what we define as the information catastrophe," Vopson writes in the [paper][3].

"We are literally changing the planet bit by bit, and it is an invisible crisis," says Vopson, a former R&D scientist at Seagate Technology.

Vopson isn't alone in exploring the idea that information isn't simply imperceptible ones and zeroes. According to the release, Vopson draws on the mass-energy comparisons in Einstein's theory of relativity; the work of Rolf Landauer, who applied the laws of thermodynamics to information; and the work of Claude Shannon, the inventor of the digital bit.

"When one brings information content into existing physical theories, it is almost like an extra dimension to everything in physics," Vopson says.

With a growth rate that seems unstoppable, digital information production "will consume most of the planetary power capacity, leading to ethical and environmental concerns," his paper concludes.

Interestingly – and a bit more out there – Vopson also suggests that if, as he projects, the future mass of the planet is made up predominantly of bits of information, and if there is enough power to do it (not certain), then "one could envisage a future world mostly computer simulated and dominated by digital bits and computer code," he writes.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://aip.scitation.org/doi/10.1063/5.0019941
[2]: https://publishing.aip.org/publications/latest-content/digital-content-on-track-to-equal-half-earths-mass-by-2245/
[3]: https://aip.scitation.org/doi/full/10.1063/5.0019941
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why Comcast open sourced its DNS management tool)
[#]: via: (https://opensource.com/article/20/9/open-source-dns)
[#]: author: (Paul Cleary https://opensource.com/users/pauljamescleary)

Why Comcast open sourced its DNS management tool
======

This open source DNS management tool was built by and for the telecom giant, but is establishing itself in its own right and welcoming more contributors.

![An intersection of pipes.][1]

Adoption of [DevOps][2] practices at Comcast led to increased automation and configuration of infrastructure that supports applications, back-office, data centers, and our network. These practices require teams to move fast and be self-reliant. Infrastructure is constantly turned upside down, with network traffic continually moved around it. Good DNS record management is critical to support this level of autonomy and automation, but how can a large, diverse enterprise move quickly while safely governing its DNS assets?

### Challenge

Prior to 2016, DNS record management was mostly done through an online ticketing system—users would submit tickets for DNS changes that were manually reviewed and implemented by a separate team of DNS technicians. This system frequently required manual intervention for many of the DNS requests, which was time-consuming.

Turnaround times for DNS changes were measured in hours, which is not suitable for infrastructure automation. Large internet companies can manage millions of DNS records, making it practically impossible for DNS technicians to certify the correctness of the thousands of DNS updates being requested daily. This increased the possibility of an errant update to a critical DNS record that ultimately would lead to a downtime event.

In addition, engineering teams are intimately familiar with their DNS needs—much more so than a single group of DNS technicians serving an entire enterprise. So, we needed to enable engineering teams to self-service their own DNS records, implement changes quickly (in seconds), and at the same time, make sure all changes are done safely.

### Solution

VinylDNS was built at Comcast and subsequently open sourced to empower engineering teams to automate as they please while providing the safety and administrative controls demanded by DNS operators and the Comcast Security team.

### Security as a way of life

VinylDNS is all about automation and enhanced security. At Comcast, the VinylDNS team worked in close coordination with both the DNS and engineering teams at Comcast, as well as the security team, to meet stringent engineering and security requirements. An incredible array of access controls was implemented that gives extreme flexibility to both DNS operators and engineering teams to control their DNS assets.

Access controls implemented at the DNS zone level allow any team to control who can make updates to their DNS zones. When a DNS zone is registered and authorized to a VinylDNS group, only members of that group can make changes to DNS records in that DNS zone. In addition, access-list (ACL) rules provide extreme flexibility to allow other VinylDNS users to manage records in that zone. These ACL rules can be defined using regular expression masks or classless inter-domain routing (CIDR) rules and DNS record types that lock down access to specific users and groups to certain records in specific DNS zones.
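
To make that concrete, here is a hypothetical sketch of a single ACL rule written as JSON. The field names and values are illustrative assumptions, not VinylDNS's documented schema; consult the project's documentation for the real API format:

```
{
  "userName": "jdoe",
  "accessLevel": "Write",
  "recordTypes": ["A", "AAAA", "CNAME"],
  "recordMask": "api-.*"
}
```

Read as: the user `jdoe` may create and edit A, AAAA, and CNAME records whose names match the regular expression `api-.*` within the zone the rule is attached to.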
### Meeting the demands of automation

A [representational state transfer (REST) API][3] was built along with the system. This uses request signing to help eliminate man-in-the-middle attacks. Once the engineering teams at Comcast caught wind of the kind of automation afforded by VinylDNS, many began building out tooling to integrate directly with VinylDNS via its API. It wasn't long before most of them were using organically developed tooling integrated with the VinylDNS API to support their DNS needs.

### Performing at large enterprise scale

Very quickly, VinylDNS was managing a million DNS records and thousands of DNS zones, and supporting hundreds of engineers. As we sought to expand VinylDNS to support the rest of Comcast, we recognized some challenges.

  1. Certain DNS records were off-limits, deemed too critical to manage in any way other than by hand.
  2. The ACL rule model, while flexible, would be impossible to set up and maintain across the entirety of Comcast's DNS footprint (which has millions of DNS zones, and hundreds of millions of DNS records).
  3. Many DNS domains are considered "universal" and not locked down to a single group. This holds true for reverse zones, as IP space can often be freely assigned to anyone.
  4. Certain DNS change requests still require a manual review and approval, i.e., you cannot truly automate everything.
  5. Some teams that provision a DNS record are not the same engineers responsible for its lifecycle. The engineers that ultimately decommission a DNS record might be unknown at the time of creation.
  6. Certain teams require DNS changes to be scheduled at some point in the future. For example, maintenance may be done off-hours, and the employee doing the maintenance may not have access to VinylDNS.

To address these issues, VinylDNS added more access controls and features. Shared zones allow universal access while maintaining security via record ownership. Record ownership ensures that the party who creates a DNS record is the only one that can manage that record. This feature alone allowed us to move much of the DNS reverse space into VinylDNS.

Manual review was added to support tighter governance on certain DNS zones and records. For example, a sensitive DNS zone might demand review before implementing changes, as opposed to having all changes immediately applied.

High-value domains support was added to block VinylDNS from ever being able to update certain DNS records. High-value DNS records like [www.comcast.com][4], for example, are impossible to manage via VinylDNS and require extreme governance that can't be accomplished via an automation platform.

Global ACLs were added to support situations where teams that created DNS records were not responsible for the maintenance and decommissioning of those DNS records. This allowed overrides for certain groups by fully qualified domain name (FQDN) and IP address for certain DNS domains.

Finally, scheduled changes allow users to schedule a DNS change for a future time.

### Results

VinylDNS now governs most of Comcast's internal DNS space, managing millions of DNS records across thousands of DNS zones, and supporting thousands of engineers. In addition, we leverage integration with a wide array of tools and programming languages, including Java, Python, Go, and Ruby (most of which are open source).

### Toward the future

There are several opportunities for additional feature development, which Comcast has planned as part of its ongoing evolution of the platform. The same level of access controls and governance is needed for DNS assets managed in public cloud settings. In addition, we are looking into the ability to manage DNS zones (create and delete), which is required for IPv6 reverse zones. Finally, we are looking to create a powerful admin experience for our DNS operators who are looking to take advantage of the data that lives in the VinylDNS database.

### Opening up

[VinylDNS][5] is an open source project released and managed by [Comcast Open Source][6]. VinylDNS and its accompanying ecosystem were built by engineers in several organizations across Comcast, leveraging our inner source program. It is free for use, licensed under Apache License 2.0. We welcome all contributors, from code to bugs to feature requests, from new projects to project ideas. You can [contact our team on Gitter][7].

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/9/open-source-dns

作者:[Paul Cleary][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/pauljamescleary
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://opensource.com/resources/devops
[3]: https://www.redhat.com/en/topics/api/what-is-a-rest-api
[4]: http://www.comcast.com
[5]: https://www.vinyldns.io
[6]: https://comcast.github.io/
[7]: https://gitter.im/vinyldns/vinyldns
@ -0,0 +1,71 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The power of open source during a pandemic)
[#]: via: (https://opensource.com/open-organization/20/8/power-open-source-during-pandemic)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)

The power of open source during a pandemic
======

Successfully addressing shared global problems requires open source—both principles and code—to foster innovation quickly and bridge gaps in effective remote collaboration.

![Open source prescription.][1]

When a novel coronavirus made headlines earlier this year, the world wasn't ready. In a short period of time, we all witnessed the consequences of a hyper-connected global economy unprepared for effective global collaboration. We hadn't paid attention to the fact that a health issue in China could affect both the real estate market in North Carolina and a shoe factory in Italy. Facing a pandemic, especially one that forced such extreme social distancing, required drastic shifts—both technological and social.

Many organizations, governments, and companies realized that facing a global pandemic like COVID-19 required new tools for tracking the spread of the virus. To keep up with this growing threat, decision makers around the world needed effective technological solutions for both tracking and contact tracing; we needed correct, up-to-the-minute data for making the best decisions. Facing this pandemic also required non-technical organizations to become technically capable quite quickly. Organizations were forced to rapidly build new solutions and uncover new ways of working. Facing this new global problem, the world realized that the digital transformation everyone had been talking about—but delaying until they had a budget—was now more important than ever.

Moreover, as we started to advise and work with different organizations around the world, we discovered that many of them were facing similar issues. As a last-minute solution, many started building not only their own custom technologies but also their own custom _organizational strategies_ for tackling this crisis. Quick but reactive approaches left many governments and organizations across the world building exactly the same solutions, often in isolation. Had they collaborated and shared what they were learning, they could have been building solutions _with_ one another.

In other words: Had they been operating more like open source communities and truly open organizations, they could have found better answers sooner.

### Building global solutions the open source way

For years, members of the open source community have been collaborating with one another on projects like these. We've been working together in remote environments since long before the pandemic hit. Our processes and policies, documented or not, have allowed us to push the technology industry forward at warp speed. In a mere 30 years, we've managed to make open source the standard for new software, and we've proven that global, remote, community-based collaboration can solve big problems.

Now the rest of the world seems to be following suit, adopting open principles and practices to create global change. By relying on open source principles and code, we can more effectively build the shared solutions and strategies that could help us all stop a global pandemic.

We see many examples of openness bridging collaboration gaps. The [UK Government Digital Service][2] is uniting citizens and government, helping the government operate more effectively and efficiently. The [UNESCO Global Network of Learning Cities][3] is pushing city education policy forward, facilitating partnerships between member cities. [Greenpeace is championing open source][4] and using it to help change mindsets and behaviors surrounding the climate emergency. Other coalitions have pledged to accelerate [cooperation on a coronavirus vaccine and to share research][5], treatment, and medicines across the globe.

The world needs to shift the way it approaches problems and continue locating solutions the open source way. Individually, this might mean becoming connection-oriented problem-solvers. We need people able to think communally, communicate asynchronously, and consider innovation iteratively. Organizations need to consider technologists less as tradespeople who build systems and more as experts in global collaboration, people who can make future-proof decisions on everything from data structures to personnel processes.

### A new strategy: Cross-sector social innovation

Now is the time to start building new paradigms for global collaboration and to find unifying solutions to our shared problems, and one key element of doing this successfully is our ability to work together across sectors. A global pandemic needs the public sector, the private sector, and the nonprofit world to collaborate, each bringing its own expertise to a common, shared platform.

The private sector plays a key role in this method of collaboration by developing a robust social innovation strategy that aligns with the collective problems affecting us all. This pandemic is a great example of a collective global issue affecting every business around the world, and it is the reason why shared platforms and effective collaboration will be key moving forward.

Social innovation is an approach to social responsibility that requires a unique strategy for every organization, one directly linking its business goals and company values to the common good. The most effective way of implementing a strategy that requires fast-paced social innovation is by embracing open source principles to enable collaboration across regions and sectors.

For example, the [PathCheck][6] Foundation is doing this to quickly and effectively build a platform for shared needs. The PathCheck suite of open source software gives public and private sector organizations solutions for digital contact tracing and exposure notification, with the goal of containing COVID-19 and restarting the economy without sacrificing privacy. Started by Prof. Ramesh Raskar and passionate students at MIT—and now consisting of 1,800 volunteers in partnership with many companies, and supported by a newly created nonprofit—PathCheck is a great example of how collaboration on an open source platform is a multiplier: it is being developed by a growing, passionate, global community of engineers, scientists, health authorities, designers, and contributors, and is being used by more governments and organizations than any single service provider could reach.

There is a way to tackle the pandemic and other global issues more effectively through shared platforms and open collaboration. Open source advocates have been doing it for a while: bringing open source to non-profits, governments, and corporations; opening up education; connecting open design to global knowledge networks; building open hardware initiatives; and structuring open government work. Members of the global open source community have already shown how the post-pandemic world could look—humane, transparent, collaborative, diverse.

Last but not least, the power of the individual aligned with a community—and of collaboration for solving issues that are important to all of us—is greater than we think. When companies, the public sector, and nonprofits work together, everyone can be part of a _shared_ solution. An open source approach allows every individual to make an impact and contribute to projects using different, complementary skill sets.

Everyone can make a difference every day. And we can use the power of open source to ensure that difference changes the world for good.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/20/8/power-open-source-during-pandemic

作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourceprescription.png?itok=gFrc_GTH (Open source prescription.)
[2]: https://www.gov.uk/government/organisations/government-digital-service/about
[3]: https://uil.unesco.org/lifelong-learning/learning-cities
[4]: https://opensource.com/tags/open-organization-greenpeace
[5]: https://opensource.com/open-organization/20/6/covid-alliance
[6]: https://pathcheck.org/
sources/talk/20200831 Making Zephyr More Secure.md
Normal file
59
sources/talk/20200831 Making Zephyr More Secure.md
Normal file
@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Making Zephyr More Secure)
[#]: via: (https://www.linux.com/audience/developers/making-zephyr-more-secure/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)

Making Zephyr More Secure
======

Zephyr is gaining momentum as more and more companies embrace this open source project for their embedded devices. However, security is becoming a huge concern for these connected devices. The NCC Group recently conducted an evaluation and security assessment of the project to help harden it against attacks. In this interview, Kate Stewart, Senior Director of Strategic Programs at the Linux Foundation, talks about the assessment and the evolution of the project.

Here is a quick transcript of the interview:

**Swapnil Bhartiya: The NCC Group recently evaluated Zephyr for security. Can you talk about the outcome of that evaluation?**
Kate Stewart: We're very thankful to the NCC Group for the work that they did in helping us get Zephyr hardened further. In some senses, when it first hit us, it was like, "Okay, they're taking us seriously now. Awesome." And the reason they're doing this is that their customers are asking for it. They've got people who are very interested in Zephyr, so they decided to invest the time doing the research to see what they could find. The fact that we're good enough to critique now is a nice positive for the project, no question.

Up till this point, we had been getting reports of vulnerabilities that researchers had noticed in certain areas. We'd issued CVEs, so we had a process down, but suddenly being hit with a whole bulk of them like that was like, "Okay, time to up our game, guys." What we found was that we didn't have a good way of letting people who have Zephyr-based products know about our vulnerabilities. We wanted to make it clear that if people have products out in the market, they can find out when there's a vulnerability. We just added a new webpage so they know how to register, and they can ask us to contact them.

The challenge of embedded is that you don't quite know where the software is. We've got a lot of people downloading Zephyr, and a lot of people using Zephyr. We're seeing people upstreaming things all the time, but we don't know where the products are; it's all word of mouth to a large extent. There are no tracers or anything else; you don't want that in an embedded IoT space, where battery life is important. So it's pretty key to figure out how we let the people who want to be notified know.

We registered as a CNA with MITRE several years ago now, and we can assign CVE numbers in the project. But what we didn't have was a good way of reaching out to people beyond our membership under embargo so that we could give them time to remediate any issues we're fixing. By changing our policies, we've gone from a 60-day embargo window to a 90-day embargo window. In the first 30 days, we're working internally to get the team to fix the issues, and then we've got a 60-day window for the people who make products to remediate in the field if necessary. So, making ourselves useful for product makers was one of the big focuses this year.

**Swapnil Bhartiya: Since Zephyr's LTS release was made last year, can you talk about the new releases, especially from the security perspective? I believe the latest version is 2.3.0.**
Kate Stewart: Yeah, 2.3.0, and then we also have 1.14.2; 1.14 is our LTS-1, as we say. We've put an update out to it with the security fixes. A long-term stable release, like the Linux kernel's, has security and bug fixes backported to it so that people can build products on it and keep them active over time, without as much change in the interfaces and everything else that happens in the mainline development tree and in what we've just done with 2.3.

2.3 has a lot of new features in it, and we've got all these vulnerabilities remediated. There's a lot more coming down the road, so the community is working hard right now. We've adopted a new set of coding guidelines for the project, and we'll be working with them so we can get ourselves ready to go after safety certifications next year. There's a lot of code in motion right now, and a lot of new features being added every day. It's great.

**Swapnil Bhartiya: I also want to talk a bit about the community side of it. Can you talk about how the community is growing and about new use cases?**
Kate Stewart: We've just added two new members to Zephyr: Teenage Engineering and Laird Connectivity. It's really cool to start seeing these products coming out. There are some rather interesting technologies and products showing up, and I'm really looking forward to being able to have blog posts about them.

Laird Connectivity, for example, makes a small device running Zephyr that you can use for monitoring distance without recording other information. In these days of COVID, we need to start figuring out technology assists to help us keep the risk down. Laird Connectivity has devices for that.

So we're seeing a lot of innovation happening very quickly in Zephyr, and that's really Zephyr's strength: it's got a very solid code base that lets people add their innovation on top.

**Swapnil Bhartiya: What role do you think Zephyr is going to play in the post-COVID-19 world?**
Kate Stewart: Well, I think it offers us interesting opportunities. Consider some of the technologies that are being looked at for monitoring: we have distance monitoring, contact tracing, and things like that. We can either do it very manually, or we can start to take advantage of technology infrastructures to do so. But people may not want a device effectively monitoring them all the time. They may just want to know exactly, position-wise, where they are. So that gives potentially some degree of control over what's being sent into the tracing and tracking.

These sorts of technologies, I think, will help us improve things over time. There's a lot of knowledge we're getting out of them, and ways we can optimize the information; the RTOS and the sensors provide discrete functionality and are improving how we look at things.

**Swapnil Bhartiya: There are so many people using Zephyr that, because it is open source, we are not even aware of them all. How do you ensure that, whether or not someone is an official member of the project, their Zephyr-based devices are secure?**
Kate Stewart: We do a lot of testing with Zephyr; there's a tremendous amount of test infrastructure, including a whole regression infrastructure. We work to various thresholds of quality levels, we've got a lot of expertise, and we have publicly documented all of our best practices. The security team is a top-notch group of people; I'm really proud to be able to work with them. They do a really good job of caring about the issues as well as finding them, debugging them, and making sure anything that comes up gets solved. So in that sense, there are a lot of really great people working on Zephyr, and it makes it a really fun community to work with, no question. In fact, it's growing fast.

**Swapnil Bhartiya: Kate, thank you so much for taking time out and talking to me today about these projects.**

--------------------------------------------------------------------------------

via: https://www.linux.com/audience/developers/making-zephyr-more-secure/

作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
@ -0,0 +1,63 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Military looks to ultraviolet networks for secure battlefield communication)
[#]: via: (https://www.networkworld.com/article/3572372/military-looks-to-ultraviolet-networks-for-secure-battlefield-communication.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Military looks to ultraviolet networks for secure battlefield communication
======

The U.S. Army wants to develop new, more secure communications networks for soldiers in the field using free-space ultraviolet optical transmissions.

Thinkstock

U.S. Army researchers are exploring the use of ultraviolet optical communications in battlefield situations because, under the right circumstances, the technology might support links that are undetectable to the enemy.

One thing the researchers looked at was the effects of attenuation, the natural phenomenon of signals getting weaker over distance. They wanted to know whether there was a distance range in which the signals were weak enough that adversaries likely couldn't detect them but still strong enough that friendly receivers could. They say they observed that to be the case, but the [research paper about their work][1] doesn't say what those distances are.
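The paper doesn't publish those distances, but the shape of the argument is easy to sketch. The following Python toy model is purely illustrative: the transmit power, extinction coefficient, and both detection thresholds are invented numbers, not values from the study, and it simplifies by placing the friendly receiver and the adversary at the same distance while assuming the cooperative receiver is more sensitive. It combines inverse-square spreading with exponential atmospheric extinction (which is especially strong at solar-blind UV wavelengths) and scans for a range window where only the friendly receiver can detect the link:

```
import math

# Toy free-space UV link model. All constants are assumptions for
# illustration only; none come from the Army Research Laboratory paper.
P0 = 1.0     # transmit power, arbitrary units (assumed)
ALPHA = 0.9  # atmospheric extinction coefficient per km (assumed)

def received_power(d_km: float) -> float:
    """Inverse-square spreading plus Beer-Lambert extinction."""
    return P0 * math.exp(-ALPHA * d_km) / (d_km ** 2)

FRIENDLY_THRESHOLD = 1e-3   # sensitive cooperative receiver (assumed)
ADVERSARY_THRESHOLD = 1e-2  # less sensitive eavesdropping detector (assumed)

# Scan distances for the window where the link is decodable by the
# friendly receiver but falls below the adversary's detection floor.
for d in (x / 10 for x in range(1, 101)):
    p = received_power(d)
    if FRIENDLY_THRESHOLD <= p < ADVERSARY_THRESHOLD:
        print(f"{d:.1f} km: friendly-only detection window (P = {p:.2e})")
```

With these made-up constants, the model produces a window of a few kilometers; the real window would depend on the actual hardware and atmospheric conditions the researchers measured.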
According to an Army press release, "ultraviolet communication has unique propagation characteristics that not only allow for a novel non-line-of-sight optical link, but also imply that the transmissions may be harder for an adversary to detect."

The main thrust of the study by the U.S. Army Combat Capabilities Development Command's [Army Research Laboratory][2] was to develop a framework for future research that could quantify the circumstances under which ultraviolet communications could be both useful to friendly forces and undetectable to hostiles. In the course of that research, the team gleaned two other important insights:

  * The worst-case scenario – when the enemy detector is in direct line-of-sight with the transmitter and the friendly receiver is not – isn't as big a concern as might be feared.
  * Steering the signal of the UV transmitter doesn't seem to be an effective way to mitigate detection of the signal by an adversary.

The researchers plan to analyze four scenarios involving the placement of the UV transmitter, the intended receiver, and the enemy detector:

  * The friendly receiver and the adversary detector are both in line-of-sight with the transmitter.
  * The friendly receiver is in line-of-sight but the adversary detector is not. (Best case)
  * The adversary's detector is in line-of-sight but the friendly receiver is not. (Worst case)
  * Neither the friendly receiver nor the adversary detector is in line-of-sight.

The assumption is that an opponent would try to count photons over time to detect a coherent transmission signal that would indicate communication was underway.

The scientists accept that close to the transmitter, the signal is easy to detect, so effective use of UV transmissions would rely on having a good sense of where the opposing detectors are located.

"Our work provides a framework enabling the study of the fundamental limits of detectability for an ultraviolet communication system meeting desired communication performance requirements," said Dr. Robert Drost, one of the researchers.

"Our research is ensuring that the community has the fundamental understanding of the potential for and limitations of using ultraviolet wavelengths for communications, and I am confident that this understanding will inform the development of future Army networking capabilities."

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3572372/military-looks-to-ultraviolet-networks-for-secure-battlefield-communication.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://www.osapublishing.org/DirectPDFAccess/4516B0FD-2152-4663-9A9899BF00560B7C_433781/oe-28-16-23640.pdf?da=1&id=433781&seq=0&mobile=no
[2]: https://www.arl.army.mil
[3]: https://www.facebook.com/NetworkWorld/
[4]: https://www.linkedin.com/company/network-world
@ -0,0 +1,94 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open Source Project For Earthquake Warning Systems)
[#]: via: (https://www.linux.com/featured/open-source-project-for-earthquake-warning-systems/)
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)

Open Source Project For Earthquake Warning Systems
======

Earthquakes, or rather the shaking they cause, don't kill people; buildings do. If we can get people out of buildings in time, we can save lives. Grillo has founded OpenEEW in partnership with IBM and the Linux Foundation to allow anyone to build their own earthquake early-warning system. Swapnil Bhartiya, the founder of TFiR, talked to the founder of Grillo on behalf of The Linux Foundation to learn more about the project.

Here is the transcript of the interview:

**Swapnil Bhartiya: If you look at natural phenomena like earthquakes, there's no way to fight nature. We have to learn to coexist with them, and early warnings are the best thing to do. We have all these technologies – IoT and AI/ML. All those things are there, but we still don't know much about these phenomena. What I want to understand is that if you look at earthquakes, in some countries the damage is much greater than in other places. What is the reason for that?**

Andres Meira: Earthquakes disproportionately affect countries that don't have great construction. If you look at places like Mexico, the Caribbean, much of Latin America, Nepal, even some parts of India in the north and the Himalayas, you find that earthquakes can cause more damage there than, say, in California or in Tokyo. The reason is that it is buildings that ultimately kill people, not the shaking itself. So if you can find a way to get people out of buildings before the shaking starts, that's really the solution here. There are many things that we don't know about earthquakes. It's obviously a whole field of study, but we can't tell you, for example, that an earthquake will happen in five or ten years. We can give you some probabilities, but not enough for you to act on.

What we can say is that an earthquake is happening right now. These technologies are all about reducing latency so that when we know an earthquake is happening, within milliseconds we can be telling the people who will be affected by that event.

**Swapnil Bhartiya: What kind of work is going on to better understand earthquakes themselves?**

Andres Meira: I have a very narrow focus. I'm not a seismologist; my focus is related to detecting earthquakes and alerting people. In the world of seismology, there are a lot of efforts to understand tectonic movement, but I would say there are a few interesting things happening that I know of. For example, undersea cables. People in Chile and other places are looking at undersea telecommunications cables and the effects that any sort of seismic movement has on the signals. They can actually use that as a detection system. But when you talk about some of the really deep earthquakes, 60-100 miles beneath the surface, man has not yet created holes deep enough for us to place sensors. So we're very limited as to actually detecting earthquakes at great depth. We have to wait for them to affect us near the surface.

**Swapnil Bhartiya: So how do these earthquake early warning systems work? I want to understand a couple of points: What does the device itself look like? What do the sensors look like? What does the software look like? And how do you share data and interact with each other?**

Andres Meira: We've developed several iterations of the sensors we use over the last couple of years. Effectively, they are a small microcontroller and an accelerometer (this is the core component), plus some other components. What the device does is record accelerations: it looks at the X, Y, and Z axes and just records accelerations from the ground, which is why we are very fussy about how we install our sensors. Anybody can install one in their home through this OpenEEW initiative that we're doing.

The sensors record shaking accelerations, and we send all of those accelerations in quite large messages using MQTT. We send them every second from every sensor, all of this data is collected in the cloud, and in real time we run algorithms. We want to know that the shaking the accelerometer is picking up is not a passing truck but actually an earthquake.

So we've developed algorithms that can tell those things apart. And of course, we wait for one or two sensors to confirm the same event so that we don't get any false positives, because you can still get some errors. Once we have that confirmation in the cloud, we can send a message to all of the client devices. If you have an app, you will receive a message saying there's an earthquake at this location, and your device will then calculate how long the shaking will take to reach you, how much energy will be lost along the way, and therefore what shaking you should expect very soon.
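To make that pipeline concrete, here is a minimal Python sketch of the kind of once-per-second publish loop a sensor could run. This is a hypothetical illustration, not OpenEEW's actual firmware or message schema: the broker address, topic name, and JSON fields are all assumptions, the real sensors run on microcontrollers rather than Python, and it requires the paho-mqtt package (`pip install paho-mqtt`).

```
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # placeholder broker address

while True:
    # One message per second carrying a batch of raw X/Y/Z accelerations,
    # mirroring the "quite large messages ... every second" described above.
    payload = {
        "device_id": "sensor-001",    # hypothetical identifier
        "timestamp": time.time(),
        "x": [0.001, -0.002, 0.000],  # illustrative samples, in g
        "y": [0.000, 0.001, -0.001],
        "z": [0.998, 1.001, 0.999],
    }
    client.publish("openeew/sensor-001/accel", json.dumps(payload))
    time.sleep(1)
```

The cloud side would subscribe to the same topics, discriminate trucks from earthquakes, and wait for a second sensor to confirm before alerting, as described above.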
**Swapnil Bhartiya: Where are these devices installed?**

Andres Meira: They are installed at the moment in several countries – Mexico, Chile, Costa Rica, and Puerto Rico. We are very fussy about how people install them, and in fact, on the OpenEEW website, we have a guide for this. We really require that they're installed on the ground floor because the higher up you go, the more the frequencies of the building's movement change, which affects the recordings. The device needs to be fixed to a solid structural element: a column or a reinforced wall, something rigid. It also needs to be away from noise, so it wouldn't be great if it were near a door that was constantly opening and closing, although we can handle that to some extent. As long as you are within the parameters (and ideally have a good internet connection, although we have cellular versions as well), that's all we need.

The real name of the game here is quantity more than quality. If you have a lot of sensors, it doesn't matter if one is out. It doesn't matter if the quality is down, because we're waiting for confirmation from other ones, and redundancy is how you achieve a stable network.

**Swapnil Bhartiya: What is the latency between the time the sensors detect an earthquake and the warning being sent out? Does it also mean that the further you are from the epicenter, the more time you will have to leave a building?**

Andres Meira: The time that a user gets – what we call the window of opportunity for them to actually act on the information – is variable, and it depends on where the earthquake is relative to the user. I'll give you an example. Right now, I'm in Mexico City. If we are detecting an earthquake in Acapulco, then you might get 60 seconds of advance warning, because an earthquake travels at more or less a fixed velocity, which is known, and so the distance and the velocity give you the time you're going to get.

If that earthquake were in the south of Mexico, in Oaxaca, we might get two minutes. Now, this is variable. If you are in Istanbul or Kathmandu, you might be very near the fault line, and if the distance is less than what I just described, the time goes down. But even if you only have five or ten seconds, which might happen in the Bay Area, for example, that's still okay. You can still ask children in a school to get underneath the furniture. You can still ask surgeons in a hospital to stop doing surgery. There are many things you can do, and there are also automated things: you can shut off elevators or turn off gas pipes. So any amount of time is good, but the actual time itself is variable.
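The arithmetic behind those windows is simple enough to sketch. In the minimal Python example below, the wave speed is a rough textbook figure for damaging S-waves that I'm assuming for illustration (the real value varies with geology), the distances are round numbers, and detection plus alert-delivery latency would eat a few seconds out of each window:

```
S_WAVE_SPEED_KM_S = 3.5  # approximate S-wave speed; assumed, varies by geology

def warning_seconds(distance_km: float) -> float:
    """Rough upper bound on warning time: alerts travel at network speed,
    so the window is about epicenter distance divided by wave speed."""
    return distance_km / S_WAVE_SPEED_KM_S

# Approximate epicentral distances from Mexico City (assumed round numbers).
for place, distance_km in [("Acapulco", 300), ("Oaxaca", 450)]:
    print(f"{place}: up to ~{warning_seconds(distance_km):.0f} s of warning")
```

These assumed figures land in the same ballpark as the 60 seconds and two minutes mentioned in the interview, once system latency is subtracted.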
**Swapnil Bhartiya: The most interesting thing that you are doing is that you are also open sourcing some of these technologies. Talk about what components you have open sourced and why.**

Andres Meira: Open sourcing was a tough decision for us. It wasn't something we felt comfortable with initially, because we spent several years developing these tools and we're obviously very proud. I think there came a point where we realized: why are we doing this? Are we doing this to develop cool technologies, to make some money, or to save lives? All of us live in Mexico; all of us have seen the devastation of these things. We realized that open source was the only way to really accelerate what we're doing.

If we want to reach people in the countries I've mentioned, if we really want people to work on our technology and make it better (which means better alert times and fewer false positives), if we want to really take this to the next level, then we can't do it on our own. It would take a long time, and we might never get there.

So that was the idea behind the open source. Then we thought about what we could do with open source. We identified three of our core technologies, and by that I mean the sensors; the detection system, which lives in the cloud but could also live on a Raspberry Pi; and the way we alert people. The last part is really quite open and depends on the context. It could be a radio station. It could be a mobile app, which we've got on the website, on GitHub. It could be many things – loudspeakers. Those three core components we have now published in our repo, which is OpenEEW on GitHub, and from there people can pick and choose.

It might be that some people are data scientists, so they might go just for the data, because we also publish over a terabyte of accelerometer data from our networks. People might be developing new detection systems using machine learning; we've got instructions for that, and we would very much welcome it. Then we have something for the people who do front-end development, so they might be helping us with the applications. And then we also have something for the makers and the hardware guys, who might be interested in working on the sensors and the firmware. There's really a whole suite of technologies that we've published.

**Swapnil Bhartiya: There are other earthquake warning systems. How is OpenEEW different?**

Andres Meira: I would divide the other systems into two categories. First, the national systems – say, the Japanese system, or the California and West Coast system called ShakeAlert. Those are systems with significant public funding that have taken decades to develop. Into another category I would put the applications that people have developed: MyShake, SkyAlert, and many others.

If you look at the first category, I would say that the main difference is that we understand the limitations of those systems, because an earthquake in northern Mexico is going to affect California and vice versa. An earthquake in Guatemala is going to affect Mexico and vice versa. An earthquake in the Dominican Republic is going to affect Puerto Rico. The point is that earthquakes don't respect geography or political boundaries, and so we think national systems are limited; so far, they are limited by their borders. That was the first thing.

In terms of the technology, in many ways the MEMS accelerometers that we use now are streets ahead of where we were a couple of years ago. They really allow us to detect earthquakes hundreds of kilometers away, and we can actually perform as well as these national systems. We've studied our system versus the Mexican national system, called SASMEX, and more often than not, we are faster and more accurate. It's on our website. So there's no reason to say that our technology is worse. In fact, having cheaper sensors means you can have huge networks, and these arrays are what make all the difference.

In terms of the private ones, the problem with those is that sometimes they don't have the investment to really do wide coverage. So open source is our strength there, because we can rely on many people to add to the project.

**Swapnil Bhartiya: What kind of roadmap do you have for the project? How do you see the evolution of the project itself?**

Andres Meira: This has been a new area for me; I've had to learn. The governance of OpenEEW as of today, as you mentioned, is under the umbrella of the Linux Foundation. This is now a Linux Foundation project, and they have certain prerequisites, so we had to form a technical committee. This committee makes the steering decisions and creates the roadmap you mentioned. The roadmap is now published on GitHub, and it's a work in progress, but effectively we're looking 12 months ahead, and we've identified some areas that really need priority. Machine learning, as you mentioned, is definitely something that will be a huge change in this world, because if we can detect earthquakes with a much higher degree of certainty, potentially with just a single station, then we can create networks that are less dense. You could have something in northern India, in Nepal, or in Ecuador with just a handful of sensors. That's a real holy grail for us.

We are also asking on the roadmap for people to work with us in lots of other areas. In terms of the sensors themselves, we want to do more detection on the edge. We feel that edge computing with the sensors is obviously a much better solution than what we do now, which involves a lot of cloud detection. If we can move a lot of that work to the actual devices, then I think we're going to have much smarter networks and less telemetry, which opens up new connectivity options. So the sensors are another priority area on the roadmap.

**Swapnil Bhartiya: What kind of people would you like to get involved, and how can they get involved?**

Andres Meira: As of today, we're formally announcing the initiative, and I would really invite people to go to OpenEEW.com, where we've got a site that outlines some areas people can get involved with. We've tried to consider what types of people would join the project. You're going to get seismologists – we have seismologists from Harvard University and from other areas. They're most interested in the data, from what we've seen so far. They're going to be looking at the data sets we've offered, and some of them are already looking at machine learning. Of course, anyone involved with Python and machine learning – data scientists in general – might do similar things. Ultimately, you can be agnostic about seismology. It shouldn't put you off, because we've tried to abstract it away. We've got it down to the point where this is really just data.

Then we've also identified the engineers and the makers, and we've tried to guide them towards the repos, like the sensor repos. We are asking them to help us with the firmware and the hardware. And then, for your more typical full-stack or front-end developer, we've got some other repos that deal with the actual applications: how the user gets the data, how the user gets the alerts. There's a lot of work we can be doing there as well.

So different people might have different interests. Someone might just want to take it all. Maybe someone wants to start a network in their community but isn't technical, and that's fine. We have a Slack channel where people can join and say, "Hey, I'm in this part of the world and I'm looking for people to help me with the sensors. I can do this part." Maybe an entrepreneur might want to join and look for the technical people.

So we're just open to anybody who is keen on the mission, and they're welcome to join.

--------------------------------------------------------------------------------

via: https://www.linux.com/featured/open-source-project-for-earthquake-warning-systems/

作者:[Swapnil Bhartiya][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/author/swapnil/
[b]: https://github.com/lujun9972
@ -0,0 +1,115 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (4 reasons Jamstack is changing web development)
[#]: via: (https://opensource.com/article/20/9/jamstack)
[#]: author: (Phil Hawksworth https://opensource.com/users/phil-hawksworth)

4 reasons Jamstack is changing web development
======

Jamstack allows web developers to move far beyond static sites without the need for complex, expensive active-hosting infrastructure.

![spiderweb diagram][1]

The way we use and the way we build the web have evolved dramatically since its inception. Developers have seen the rise and fall of many architectural and development paradigms intended to satisfy more complex user experiences, support evolving device capabilities, and enable more effective development workflows.

In 2015, [Netlify][2] founders Matt Biilmann and Chris Bach coined the term "[Jamstack][3]" to describe the architectural model they were championing and that was gaining popularity. In reality, the foundations of this model have existed since the beginning of the web. But multiple factors led them to coin this new term to encapsulate the approach and to give developers and technical architects a better means to discuss it.

In this article, I'll look at those factors, Jamstack's attributes, why the term came into existence, and how it is changing how we approach web development.

### What is Jamstack?

All Jamstack sites share a core principle: They are a collection of static assets generated during a build or compilation process so that they can be served from a simplified web server or directly from a content delivery network (CDN).

Before the term "Jamstack" emerged, many people described these sites as "static." This describes how the first sites on the web were created (although CDNs would come later). But the term "static sites" does a poor job of capturing what is possible with the Jamstack due to the way tools, services, and browsers have evolved.

The simplest Jamstack site is a single HTML file served as a static file. For a long time, open source web servers have efficiently hosted static assets this way. This has become a commodity, with companies including Amazon, Microsoft, and Google offering hosting services based on file serving rather than spending compute cycles generating a response for each request on demand.

But that's just a static site, right?

Well, yes. But it's the thin end of the wedge. Jamstack builds upon this to deliver sites that confound the term "static" as a useful descriptor.

If you take things a stage further and introduce JavaScript into the equation, you can begin enhancing the user's experience. Modern browsers have increasingly capable JavaScript engines (like the open source [V8][4] from Google) and powerful browser [APIs][5] to enable services such as local caching, location services, identity services, media access, and much more.

In many ways, the browser's JavaScript engine has replaced the runtime environment needed to perform dynamic operations in web experiences. Whereas a traditional technology stack such as [LAMP][6] requires configuration and maintenance of an operating system (Linux) and an active web server (Apache), these are not considerations with Jamstack. Files are served statically to clients (browsers), and if any computation is required, it can happen there rather than on the hosting infrastructure.

As Matt Biilmann describes it, "the runtime has moved up a level, to the browser."

Web experiences don't always require content to be dynamic or personalized, but they often do. Jamstack sites can provide this, thanks to the efficient use of JavaScript as well as a booming API economy. Many companies now provide content, tools, and services via APIs. These APIs enable even small project teams to inject previously unattainable, prohibitively expensive, and complex abilities into their Jamstack projects. For example, rather than needing to build identity, payment, and fulfillment services or to host, maintain, secure, and scale database capabilities, teams can source these functionalities from experts in those areas through APIs.

Businesses have emerged to provide these and many other services with all of the economies of scale and domain specializations needed to make them robust, efficient, and sustainable. The ability to consume these services via APIs decouples them from web applications' code, which is a very desirable thing.

Because these services took us beyond the old concept of static sites, a more descriptive term was needed.

### What's in a name?

The _JavaScript_ available in modern browsers, calling on _APIs_, and enriching the underlying site content delivered with _markup_ (HTML) are the J, A, and M in the Jamstack name. They identify properties that allow sites to move far beyond static sites without the need for complex and expensive active-hosting infrastructure.

But whether you decide to use all or just some of these attributes, the Jamstack's key principle is that the assets are created in advance to vastly improve hosting and development workflows. It's a shift from the higher-risk, just-in-time request-servicing method to a simplified, more predictable, prepared-in-advance approach.

As [Aaron Swartz][7] succinctly put it way back in 2002, "Bake, don't fry" your pages.
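To make the "bake, don't fry" idea concrete, here is a deliberately tiny Python sketch of a build step. The page data, template, and output directory are invented for illustration (a real project would use a static site generator); the point is that rendering happens once, at build time, and everything in `public/` can then be pushed straight to a CDN:

```
from pathlib import Path

# Hypothetical page data; a real site would pull this from markdown files
# or a headless CMS via an API.
PAGES = {
    "index.html": ("Home", "Welcome to a pre-rendered site."),
    "about.html": ("About", "Every page here was baked at build time."),
}

TEMPLATE = """<!doctype html>
<html><head><title>{title}</title></head>
<body><h1>{title}</h1><p>{body}</p></body></html>"""

def build(output_dir="public"):
    """Render every page to a static HTML file, once, at build time."""
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    for name, (title, body) in PAGES.items():
        (out / name).write_text(TEMPLATE.format(title=title, body=body))

if __name__ == "__main__":
    build()
```

Running this script at deploy time produces files that any static host can serve; no per-request computation remains, which is exactly the property the next section's benefits flow from.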
### 4 benefits of using Jamstack

Embracing this model of decoupling the generation (or rendering) of assets from the work of serving assets creates significant opportunities.

#### Lowering the cost of scaling

In a traditional stack, where views are generated for each incoming request, there is a correlation between the volume of traffic and the computation work done on the servers. This might reach all levels of the hosting stack, from load balancers and web servers to application servers and database servers. When these additional layers of infrastructure are provisioned to help manage the load, it adds cost and complexity to the environment and to the work of operating the infrastructure and the site itself.

Jamstack sites flatten that stack. The work of generating the views happens only when the code or content changes, rather than for every request. Therefore, the site can be prepared in advance and hosted directly from a CDN without needing to perform expensive computations on the fly. Large pieces of infrastructure (and the work associated with them) disappear.

In short: Jamstack sites are optimized for scale by default.

#### Improving speed

Traditionally, to improve the hosting infrastructure's response time, those with the budget and the skills would add a CDN. Identifying assets that might be considered "static" and offloading serving those resources to a CDN could reduce the load on the web-serving infrastructure. Therefore, some requests could be served more rapidly from a CDN that is optimized for that task.

With a Jamstack site, the site is served entirely from the CDN. This avoids the complex logic of determining what must be served dynamically and what can be served from a CDN.

Every deployment becomes an operation to update the entire site on the CDN rather than across many pieces of infrastructure. This allows you to automate the process, which can increase your confidence in and decrease the cost and friction of deploying site updates.

#### Reducing security risks

Removing hosting infrastructure, especially servers that do computation based on the requests they receive, has a profound impact on a site's security profile. A Jamstack site has far fewer attack vectors than a traditional site since many servers are no longer needed. There is no server more secure than the one that does not exist.

The CDN infrastructure remains but is optimized to serve pre-generated assets. Because these are read-only operations, they have fewer opportunities for attack.

#### Supercharging the workflow

By removing so many moving parts from site hosting, you can vastly improve the workflows involved in developing and deploying sites.

Jamstack sites support [version control from end to end][8] and commonly use Git and Git conventions to do everything from defining and provisioning new environments to executing a deployment. Deployments no longer need to change the state, resources, or configuration of multiple pieces of hosting infrastructure. And they can be tested locally, on staging environments, and in production environments with ease.

The approach also allows more comprehensive project encapsulation. A site's code repository can include everything needed to bootstrap a project, including defining the dependencies and operations involved in building the site. This simplifies onboarding developers and walking the path to production. (Here's [an example][9].)

### Jamstack for all

Web developers familiar with a wide variety of tools and frameworks are embracing the Jamstack. They are achieving new levels of productivity, ignited by the recognition that they can use many tools and languages that put more power and ability in their hands.

Open source libraries and frameworks with high levels of adoption and love from the development community are being used in combination with third- and first-party APIs to produce incredibly capable solutions. And the low barriers to entry mean they are easier to explore, leading to higher levels of developer empowerment, effectiveness, and enthusiasm.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/9/jamstack

作者:[Phil Hawksworth][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/phil-hawksworth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-cms-build-howto-tutorial.png?itok=bRbCJt1U (spiderweb diagram)
[2]: https://www.netlify.com/jamstack?utm_source=opensource.com&utm_medium=jamstack-benefits-pnh&utm_campaign=devex
[3]: https://jamstack.org/?utm_source=opensource.com&utm_medium=jamstack-benefits-pnh&utm_campaign=devex
[4]: https://v8.dev/
[5]: https://www.redhat.com/en/topics/api/what-are-application-programming-interfaces
[6]: https://en.wikipedia.org/wiki/LAMP_(software_bundle)
[7]: http://www.aaronsw.com/weblog/000404
[8]: https://www.netlify.com/products/build/?utm_source=opensource.com&utm_medium=jamstack-benefits-pnh&utm_campaign=devex
[9]: https://github.com/philhawksworth/hello-trello
@ -0,0 +1,137 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (My dramatic journey to becoming an open source engineer)
[#]: via: (https://opensource.com/article/20/9/open-source-story)
[#]: author: (Anisha Swain https://opensource.com/users/anishaswain)

My dramatic journey to becoming an open source engineer
======

This is the story of how I grew from hating to loving open source technology.

![Love and hate][1]

It's been five years and a heck of a journey from being a non-programmer to becoming an associate software engineer at Red Hat. It's a story worth telling—not because I have achieved a lot, but because of so much drama and so many pitfalls. So grab a cup of coffee, and I will share the unturned pages of my love story with technology.

People say love is as powerful as hate. And love stories that start with hate are often the most passionate ones. My love story with technology was just like that. I got into the world of programming in my freshman year of college. It was my most painful subject. Even though I have always been passionate about futuristic technologies, I didn't know how to move forward towards my passion.

Coming from an interest in instrumentation and electronics engineering, my distaste for programming increased even more with my first failed attempt at passing an interview for my college's technical society, as 80% of the questions were about programming. In my second attempt, I went in prepared, all set for my war with programming. And, luckily, I cleared it.

### Getting started

As the phrase goes, sometimes it takes a wrong turn to get you to the right place. At first, it did feel like a wrong turn. Then, bit by bit, I learned something precious, not through studying, but by _listening_. I started my journey by Googling the technical terms I'd heard people utter, which is how I started growing. I came to learn about web design, and the first website I designed was the front page for my college hostel. I had never been so excited about a project before. That simple HTML/CSS website was nothing less than magic to me.

Have you heard of love at first sight? It was like that. At that moment, I fell in love with the work I was doing. I continued learning different web design frameworks (like [Bootstrap][2] and [Material Design][3]), libraries like [jQuery][4], and web development techniques like [Ajax][5].

### Gaining experience

In 2016, I received my first internship, working as a frontend developer. Even though it lasted only a short time, it made me realize that I could contribute code, and it filled me with even more confidence.

![Anisha Swain sitting by water][6]

(Anisha Swain, [CC BY-SA 4.0][7])

Gradually, I started learning server-side technologies like [Node.js][8], the [Express][9] framework, [MongoDB][10], [MySQL][11], and more. My seniors helped make my road to understanding client-server interaction fairly easy.

We heard about a hackathon organized by Tata Consultancy Services. We applied for it with an idea to add [gamification to cleaning][12] public places. (This is probably still the most engrossing project I have worked on to date.) And we were selected! After about a month of preparation, I was going to Mumbai, which was my first trip outside my state, Odisha. Today, when people ask me about young girls being barred from such opportunities, I answer, "tell me about it." I was the only girl on this trip and, yes, it was a little scary. But when I look back, I realize that if I hadn't taken that calculated risk at that time, I probably wouldn't have received opportunities to travel in the future.

In time, I started learning Python, and the area that interested me the most was image processing. I started doing small projects with the [OpenCV][13] computer vision and machine learning library and later collaborated on a prototype of a self-driving car—just a small one with basic features.

### My open source debut

Another turning point came when I learned about open source programs while participating in Rails Girls Summer of Code (RGSoC) in 2017 with [Manaswini Das][14]. RGSoC is a global fellowship program for women and non-binary coders. Students receive three-month scholarships to work on existing open source projects and expand their skill sets.

I participated in a project named [HospitalRun][15]. It was an exciting—and honestly scary—experience for me. It was exciting because, for the first time, it felt like I was part of something meaningful, broader, and significant. A simple change I made would be visible to people all over the world. My name would be on the contributor list of a large community. It might sound like nothing, but at that time, it was like a wave of motivation. It was scary because the application was in [Ember.js][16], and learning Ember.js so quickly is an experience that can't be described. I will be ever grateful to my mentors, [Joel Worrall][17] and [Joel Glovier][18], for all the support they provided our group. Even though we didn't get selected for the program, this experience will always be a shining part of my story.

The best thing about working with technology is that it never felt like work. Neither then nor now. It was always like I was self-reflecting on a computer screen.

### Disappointment, then winning

The Summer Research Fellowships Program (SRFP), offered by the Indian Academy of Sciences (IAS), is a summer immersion experience that supplements research activities during the academic year. I wanted to work under this renowned research fellowship, and in 2017, I anxiously looked for my name among the awardees. Alas! My name wasn't there, and I was upset. Despite having the slimmest of chances of receiving a fellowship, I had reviewed the profiles of all the professors working in the field of signal and image processing, shortlisted around 30 of them, and gone through their research papers. As I had some experience with OpenCV and image processing, I was somehow expecting to be selected.

However, I moved on and applied for RGSoC. I devoted two months to the application, but, again, my team wasn't selected. The semester was edging to an end, and I had no internship in hand. I was disheartened and clueless about what would happen next. I started applying for local internships, and I was completely upset.

But I was not aware that the IAS fellowship's second selection list had not yet been announced. On May 2, I got a message from one of my seniors with a link to the IAS second selection list. And voila! I found my name. Anisha Swain. I couldn't believe my eyes. The credentials matched mine! I was selected to work on image processing.

The confirmation email said I was going to work in Delhi. But there was a problem with where I could live: the accommodation list was only for the people who were selected for the fellowship on the first list. I had a dilemma. My parents strictly forbade me from going without proper accommodation. But where there is a will, there is a way, and I found I could stay on the Delhi University campus. In two months at Delhi, while doing research, having fun, and traveling, I experienced everything I could. I traveled to all the metro cities, and Delhi is the most beautiful city I have ever been to.

### When one door closes, another opens

In 2018, Google Summer of Code (GSoC) was just around the corner. GSoC is an annual international program in which students are awarded stipends for completing a free and open source software coding project during the summer. Nothing was more prestigious to me at the time than getting into this program. Now I wonder why. I still see students going crazy over it, as if nothing else is left in life if they don't crack it. I was also upset at not being selected. Not once but twice. But as they say, "The journey is as important as the destination."

What matters more than getting selected is the learning you do during the process, as it will always stay with you. While applying to GSoC, I learned concept visualization with [D3.js][19] and [Three.js][20].

Even though I couldn't crack GSoC, my learning helped me land another internship in 2019, at Mytrah Energy in Hyderabad. It was my first industrial experience, and I learned to do data visualization on a large scale. I dealt with data in JSON and CSV formats and created interactive charts with SVG and Canvas. The experience also helped me deal with my fear of getting into corporate life. It was a brief look into the life I was aspiring to.

### Walking into open source

Some of my friends selected for GSoC shared with me the LinkedIn contact information of members of Red Hat's talent acquisition team. In 2018, I messaged them and sent my resume and personal description but didn't receive a reply for more than a year.

But then I met them at the 2019 Grace Hopper Celebration India (GHCI) conference, Asia's largest gathering of women technologists. During the career fair, they asked for my resume, and, to my utter surprise, they remembered me from my LinkedIn messages a year before. Soon, I got an interview call. During my first interview, I lost my connection and the interview couldn't be completed, but they were kind enough to understand the situation and reschedule it. The next interview round took around three hours, and just a few hours later, I got a job offer by email. It is the best thing that has ever happened to me!

![An office desk][21]

(Anisha Swain, [CC BY-SA 4.0][7])

Today, I am an associate software engineer on Red Hat's Performance and Scale Engineering team. I work with [React][22] and design frameworks like [Ant Design][23] and [PatternFly][24]. I also deal with web technologies like [Elasticsearch][25], [GraphQL][26], and [Postgres][27]. I try to share my knowledge with others through conferences, meetups, and articles. None of this could have been possible without my "second family" from the [Zairza Cetb][28] club. This makes me realize the power of a community and the important role our surroundings play in our development.

### Rules for success

![Anisha Swain in snow][29]

(Anisha Swain, [CC BY-SA 4.0][7])

Through my journey, I have learned many things about getting where you want to be in life, including:

  1. Hard work is as important as smart work.
  2. Things will eventually be connected and fall into place if you have the desire to grow.
  3. Work for 100% if you want to achieve 80%.
  4. Stay hungry, stay foolish, and stay humble.
  5. Always give back knowledge to the community.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/9/open-source-story

作者:[Anisha Swain][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/anishaswain
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/business_lovehate_0310.png?itok=2IBhQqIn (Love and hate)
[2]: https://getbootstrap.com/
[3]: https://material.io/design
[4]: https://jquery.com/
[5]: https://en.wikipedia.org/wiki/Ajax_%28programming%29
[6]: https://opensource.com/sites/default/files/uploads/anishaswain_seaside.jpg (Anisha Swain sitting by water)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://nodejs.org
[9]: https://expressjs.com/
[10]: https://www.mongodb.com/
[11]: https://www.mysql.com/
[12]: https://github.com/Anisha1234/Clean_Clan
[13]: https://opencv.org/
[14]: https://www.linkedin.com/in/manaswini-das/
[15]: https://hospitalrun.io/
[16]: https://emberjs.com/
[17]: https://www.linkedin.com/in/jworrall/
[18]: https://www.linkedin.com/in/jglovier/
[19]: https://d3js.org/
[20]: https://threejs.org/
[21]: https://opensource.com/sites/default/files/uploads/anishaswain_desk.jpg (An office desk)
[22]: https://reactjs.org/
[23]: https://ant.design/
[24]: https://www.patternfly.org/
[25]: https://www.elastic.co/elasticsearch/
[26]: https://graphql.org/
[27]: https://www.postgresql.org/
[28]: https://medium.com/u/a9c306e657d0?source=post_page-----aa2c4cb5d924----------------------
[29]: https://opensource.com/sites/default/files/uploads/anishaswain_snow.jpg (Anisha Swain in snow)

@ -0,0 +1,68 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why your open source project needs more than just coders)
[#]: via: (https://opensource.com/article/20/9/open-source-role-diversity)
[#]: author: (Silona https://opensource.com/users/silona)

Why your open source project needs more than just coders
======
Developers alone don't create open source projects that meet a variety of needs for a long shelf life; it's time to welcome more roles and talent.

![Teamwork starts with communication across silos][1]

Why do open source projects fail?

Lack of funding is a major factor, of course, but it's far from the only reason that open source projects fail to achieve sustainability. Sometimes there's a lack of understanding of how to create a product for a broad market, or some fundamental misstep with intellectual property rights (IPR)—such as failing to properly license your code.

It's hard for any open source project to sustain itself if it doesn't get these basics right. Collaboration across boundaries and the ability to iterate and expand are hindered, and innovation is stifled. I see these fatal flaws especially in a lot of humanitarian projects—_passion projects_—and it is heartbreaking.

### Building role diversity

Open source has tended to thrive in instances where it has been developers writing code for developers. That’s why so many underlying technologies developed via open source (Apache and Linux, for example) have succeeded.

However, when it comes to creating quality user experiences and products for people other than developers, open source's record is spottier. Why is that?

One of the big reasons is that the majority of open source communities don’t encourage or even welcome people of diverse expertise. Sometimes, those contributors aren't even acknowledged. The coders get all of the love, and roles and contributions beyond coding are not even thought of.

Sustainable open source demands that communities embrace and reward a lot of different talents. There are the developers, most definitely, and they must be core to any successful open source project. But without contributions from marketing expertise, for example, you might not thoroughly understand what the users want. Without the input of product management, you run the risk of failing to develop a product for users other than other developers. Businesses normally invest in these and other roles because their varied contributions are critical to delivering successful results and to creating production-ready products with the community support needed for long-term sustainability.

One of the conflicts that I often find amongst open source development communities is an animosity towards product or project management. It’s true that product management, especially in corporate settings, has control issues—practitioners may try to do things like dominate a market, or come in from a perspective of scarcity rather than abundance. It shows in behavior, and it’s antithetical to the spirit of open source.

But, to be fair, I think it is also true that we developers have never been taught how to work well with product management. We are told, "More people would use your product if you just did X," and we respond, "No, you can't tell me anything about my baby." We don't want to hear, "Yeah, but if you change the diaper, more people would like your baby," even if it's true.

Open source hasn't always embraced talent other than developers, and this is what must change in order to foster long-term stability.

### Birthing IEEE SA Open

Putting in place the tools and processes needed to encourage project sustainability is our current focus in architecting and designing [IEEE SA Open][2]. To that end, bringing in role diversity and building a platform and a tool that invites and rewards those diverse contributions is crucial in creating IEEE SA Open.

We are creating our community, marketing, and technology onboarding guides to ensure that incoming projects automatically get a level of support that they wouldn't normally get from a technology platform. We're also looking at moving projects up the maturity model into advanced processes and practices. For example, progressing to levels 4 (quantitatively managed) and 5 (optimizing) of [Capability Maturity Model Integration][3] (CMMI) requires measurement. Getting our processes right from the outset and assigning the right metrics to inform better, more consistent evaluations will support our sustainability.

This is one of the places where our linkage with IEEE is so important. One of the things that the standards world does especially well is process, and IEEE in particular has a history of making sure that its processes are fair and predicated on advancing technology for the benefit of humanity. With more than 419,000 members in over 160 countries, IEEE is the world’s largest technical professional organization dedicated to advancing technology for humanity. Its roots go back to 1884, when electricity and telecommunications began to become a major influence in society, and today IEEE has over 1,200 active standards and more than 650 standards under development.

IEEE SA Open can draw on the best practices and lessons in sustainability that IEEE has acquired through experience. We aim to bridge the gap between global standards development and open source developer communities. It is definitely a balancing act, and we respect that!

We’re reaching out to people all over the global open source and standards communities, in a diverse set of roles, to be engaged in creating IEEE SA Open. You can participate in that birthing project, and now is the time. If there are things that are super important to you and that you’ve seen neglected in open source, this is the time to engage, share your experiences, and influence the creation of IEEE SA Open. You can help make sure we don’t repeat those mistakes. [We need your unique insights and input.][2]

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/9/open-source-role-diversity

作者:[Silona][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/silona
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy-team-sticker-laptop.png?itok=91K77IgE (Teamwork starts with communication across silos)
[2]: https://standards.ieee.org/initiatives/opensource?utm_source=External&utm_medium=PR&utm_campaign=IEEE-SA-Open
[3]: https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (gxlct008)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (Sky0Master)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,741 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An example of very lightweight RESTful web services in Java)
[#]: via: (https://opensource.com/article/20/7/restful-services-java)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

An example of very lightweight RESTful web services in Java
======
Explore lightweight RESTful services in Java through a full code example to manage a book collection.

![Coding on a computer][1]

Web services, in one form or another, have been around for more than two decades. For example, [XML-RPC services][2] appeared in the late 1990s, followed shortly by ones written in the SOAP offshoot. Services in the [REST architectural style][3] also made the scene about two decades ago, soon after the XML-RPC and SOAP trailblazers. [REST][4]-style (hereafter, Restful) services now dominate in popular sites such as eBay, Facebook, and Twitter. Despite the alternatives to web services for distributed computing (e.g., web sockets, microservices, and new frameworks for remote-procedure calls), Restful web services remain attractive for several reasons:

  * Restful services build upon existing infrastructure and protocols, in particular, web servers and the HTTP/HTTPS protocols. An organization that has HTML-based websites can readily add web services for clients interested more in the data and underlying functionality than in the HTML presentation. Amazon, for example, has pioneered making the same information and functionality available through both websites and web services, either SOAP-based or Restful.

  * Restful services treat HTTP as an API, thereby avoiding the complicated software layering that has come to characterize the SOAP-based approach to web services. For example, the Restful API supports the standard CRUD (Create-Read-Update-Delete) operations through the HTTP verbs POST-GET-PUT-DELETE, respectively; HTTP status codes inform a requester whether a request succeeded or why it failed.

  * Restful web services can be as simple or complicated as needed. Restful is a style—indeed, a very flexible one—rather than a set of prescriptions about how services should be designed and structured. (The attendant downside is that it may be hard to determine what does _not_ count as a Restful service.)

  * For a consumer or client, Restful web services are language- and platform-neutral. The client makes requests in HTTP(S) and receives text responses in a format suitable for modern data interchange (e.g., JSON).

  * Almost every general-purpose programming language has at least adequate (and often strong) support for HTTP/HTTPS, which means that web-service clients can be written in those languages.

This article explores lightweight Restful services in Java through a full code example.

### The Restful novels web service

The Restful novels web service consists of three programmer-defined classes:

  * The `Novel` class represents a novel with just three properties: a machine-generated ID, an author, and a title. The properties could be expanded for more realism, but I want to keep this example simple.

  * The `Novels` class consists of utilities for various tasks: converting a plain-text encoding of a `Novel` or a list of them into XML or JSON; supporting the CRUD operations on the novels collection; and initializing the collection from data stored in a file. The `Novels` class mediates between `Novel` instances and the servlet.

  * The `NovelsServlet` class derives from `HttpServlet`, a sturdy and flexible piece of software that has been around since the very early enterprise Java of the late 1990s. The servlet acts as an HTTP endpoint for client CRUD requests. The servlet code focuses on processing client requests and generating the appropriate responses, leaving the devilish details to utilities in the `Novels` class.

Some Java frameworks, such as Jersey (JAX-RS) and Restlet, are designed for Restful services. Nonetheless, the `HttpServlet` on its own provides a lightweight, flexible, powerful, and well-tested API for delivering such services. I'll demonstrate this with the novels example.
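
For contrast, here is a rough sketch of how the same "get all" endpoint might look in JAX-RS; this snippet is not part of the novels code, and the class and method names are illustrative only:

```
import java.util.Collections;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/novels")  // the framework routes matching requests to this class
public class NovelsResource {
    @GET          // handles HTTP GET, like the servlet's doGet
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public List<Novel> getAll() {
        return Collections.emptyList(); // placeholder body for the sketch
    }
}
```

Under the hood, a framework such as Jersey registers its own servlet to dispatch these annotated methods, which is essentially the machinery this article works with directly.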

### Deploy the novels web service

Deploying the novels web service requires a web server, of course. My choice is [Tomcat][5], but the service should work (famous last words!) if it's hosted on, for example, Jetty or even a Java Application Server. The code and a README that summarizes how to install Tomcat are [available on my website][6]. There is also a documented Apache Ant script that builds the novels service (or any other service or website) and deploys it under Tomcat or the equivalent.

Tomcat is available for download from its [website][7]. Once you install it locally, let `TOMCAT_HOME` be the install directory. There are two subdirectories of immediate interest:

  * The `TOMCAT_HOME/bin` directory contains startup and stop scripts for Unix-like systems (`startup.sh` and `shutdown.sh`) and Windows (`startup.bat` and `shutdown.bat`). Tomcat runs as a Java application. The web server's servlet container is named Catalina. (In Jetty, the web server and container have the same name.) Once Tomcat starts, enter `http://localhost:8080/` in a browser to see extensive documentation, including examples.

  * The `TOMCAT_HOME/webapps` directory is the default for deployed websites and web services. The straightforward way to deploy a website or web service is to copy a JAR file with a `.war` extension (hence, a WAR file) to `TOMCAT_HOME/webapps` or a subdirectory thereof. Tomcat then unpacks the WAR file into its own directory. For example, Tomcat would unpack `novels.war` into a subdirectory named `novels`, leaving `novels.war` as-is. A website or service can be removed by deleting the WAR file and updated by overwriting the WAR file with a new version. By the way, the first step in debugging a website or service is to check that Tomcat has unpacked the WAR file; if not, the site or service was not published because of a fatal error in the code or configuration.

  * Because Tomcat listens by default on port 8080 for HTTP requests, a request URL for Tomcat on the local machine begins:

```
http://localhost:8080/
```

Access a programmer-deployed WAR file by adding the WAR file's name but without the `.war` extension:

```
http://localhost:8080/novels/
```

If the service was deployed in a subdirectory (e.g., `myapps`) of `TOMCAT_HOME`, this would be reflected in the URL:

```
http://localhost:8080/myapps/novels/
```

I'll offer more details about this in the testing section near the end of the article.

As noted, the ZIP file on my homepage contains an Ant script that compiles and deploys a website or service. (A copy of `novels.war` is also included in the ZIP file.) For the novels example, a sample command (with `%` as the command-line prompt) is:

```
% ant -Dwar.name=novels deploy
```

This command compiles Java source files and then builds a deployable file named `novels.war`, leaves this file in the current directory, and copies it to `TOMCAT_HOME/webapps`. If all goes well, a `GET` request (using a browser or a command-line utility, such as `curl`) serves as a first test:

```
% curl http://localhost:8080/novels/
```

Tomcat is configured, by default, for _hot deploys_: the web server does not need to be shut down to deploy, update, or remove a web application.
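
To see hot deployment in action, you might, for example, rebuild and recopy the WAR file while Tomcat keeps running. The following sketch assumes the Ant script described above and a `TOMCAT_HOME` environment variable:

```
% ant -Dwar.name=novels deploy        ## rebuild novels.war and copy it to TOMCAT_HOME/webapps
% curl http://localhost:8080/novels/  ## the updated service responds without a server restart
% rm $TOMCAT_HOME/webapps/novels.war  ## deleting the WAR makes Tomcat undeploy the service
```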

### The novels service at the code level

Let's get back to the novels example but at the code level. Consider the `Novel` class below:

#### Example 1. The Novel class

```
package novels;

import java.io.Serializable;

public class Novel implements [Serializable][8], Comparable<Novel> {
    static final long serialVersionUID = 1L;
    private [String][9] author;
    private [String][9] title;
    private int id;

    public Novel() { }

    public void setAuthor(final [String][9] author) { this.author = author; }
    public [String][9] getAuthor() { return this.author; }
    public void setTitle(final [String][9] title) { this.title = title; }
    public [String][9] getTitle() { return this.title; }
    public void setId(final int id) { this.id = id; }
    public int getId() { return this.id; }

    public int compareTo(final Novel other) { return this.id - other.id; }
}
```

This class implements the `compareTo` method from the `Comparable` interface because `Novel` instances are stored in a thread-safe `ConcurrentHashMap`, which does not enforce a sorted order. In responding to requests to view the collection, the novels service sorts a collection (an `ArrayList`) extracted from the map; the implementation of `compareTo` enforces an ascending sorted order by `Novel` ID.

The class `Novels` contains various utility functions:

#### Example 2. The Novels utility class

```
package novels;

import java.io.IOException;
import java.io.File;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.BufferedReader;
import java.nio.file.Files;
import java.util.stream.Stream;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.Collections;
import java.beans.XMLEncoder;
import javax.servlet.ServletContext; // not in JavaSE
import org.json.JSONObject;
import org.json.XML;

public class Novels {
    private final [String][9] fileName = "/WEB-INF/data/novels.db";
    private ConcurrentMap<[Integer][10], Novel> novels;
    private ServletContext sctx;
    private AtomicInteger mapKey;

    public Novels() {
        novels = new ConcurrentHashMap<[Integer][10], Novel>();
        mapKey = new AtomicInteger();
    }

    public void setServletContext(ServletContext sctx) { this.sctx = sctx; }
    public ServletContext getServletContext() { return this.sctx; }

    public ConcurrentMap<[Integer][10], Novel> getConcurrentMap() {
        if (getServletContext() == null) return null; // not initialized
        if (novels.size() < 1) populate();
        return this.novels;
    }

    public [String][9] toXml([Object][11] obj) { // default encoding
        [String][9] xml = null;
        try {
            [ByteArrayOutputStream][12] out = new [ByteArrayOutputStream][12]();
            XMLEncoder encoder = new XMLEncoder(out);
            encoder.writeObject(obj);
            encoder.close();
            xml = out.toString();
        }
        catch([Exception][13] e) { }
        return xml;
    }

    public [String][9] toJson([String][9] xml) { // option for requester
        try {
            JSONObject jobt = XML.toJSONObject(xml);
            return jobt.toString(3); // 3 is indentation level
        }
        catch([Exception][13] e) { }
        return null;
    }

    public int addNovel(Novel novel) {
        int id = mapKey.incrementAndGet();
        novel.setId(id);
        novels.put(id, novel);
        return id;
    }

    private void populate() {
        [InputStream][14] in = sctx.getResourceAsStream(this.fileName);
        // Convert novel.db string data into novels.
        if (in != null) {
            try {
                [InputStreamReader][15] isr = new [InputStreamReader][15](in);
                [BufferedReader][16] reader = new [BufferedReader][16](isr);

                [String][9] record = null;
                while ((record = reader.readLine()) != null) {
                    [String][9][] parts = record.split("!");
                    if (parts.length == 2) {
                        Novel novel = new Novel();
                        novel.setAuthor(parts[0]);
                        novel.setTitle(parts[1]);
                        addNovel(novel); // sets the Id, adds to map
                    }
                }
                in.close();
            }
            catch ([IOException][17] e) { }
        }
    }
}
```

The most complicated method is `populate`, which reads from a text file contained in the deployed WAR file. The text file contains the initial collection of novels. To open the text file, the `populate` method needs the `ServletContext`, a Java map that contains all of the critical information about the servlet embedded in the servlet container. The text file, in turn, contains records such as this:

```
Jane Austen!Persuasion
```

The line is parsed into two parts (author and title) separated by the bang symbol (`!`). The method then builds a `Novel` instance, sets the author and title properties, and adds the novel to the collection, which acts as an in-memory data store.

The `Novels` class also has utilities to encode the novels collection into XML or JSON, depending upon the format that the requester prefers. XML is the default, but JSON is available upon request. A lightweight XML-to-JSON package provides the JSON. Further details on encoding are below.

#### Example 3. The NovelsServlet class

```
package novels;

import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Arrays;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.beans.XMLEncoder;
import org.json.JSONObject;
import org.json.XML;

public class NovelsServlet extends HttpServlet {
    static final long serialVersionUID = 1L;
    private Novels novels; // back-end bean

    // Executed when servlet is first loaded into container.
    @Override
    public void init() {
        this.novels = new Novels();
        novels.setServletContext(this.getServletContext());
    }

    // GET /novels
    // GET /novels?id=1
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        [String][9] param = request.getParameter("id");
        [Integer][10] key = (param == null) ? null : [Integer][10].valueOf((param.trim()));

        // Check user preference for XML or JSON by inspecting
        // the HTTP headers for the Accept key.
        boolean json = false;
        [String][9] accept = request.getHeader("accept");
        if (accept != null && accept.contains("json")) json = true;

        // If no query string, assume client wants the full list.
        if (key == null) {
            ConcurrentMap<[Integer][10], Novel> map = novels.getConcurrentMap();
            [Object][11][] list = map.values().toArray();
            [Arrays][18].sort(list);

            [String][9] payload = novels.toXml(list);        // defaults to Xml
            if (json) payload = novels.toJson(payload);  // Json preferred?
            sendResponse(response, payload);
        }
        // Otherwise, return the specified Novel.
        else {
            Novel novel = novels.getConcurrentMap().get(key);
            if (novel == null) { // no such Novel
                [String][9] msg = key + " does not map to a novel.\n";
                sendResponse(response, novels.toXml(msg));
            }
            else { // requested Novel found
                if (json) sendResponse(response, novels.toJson(novels.toXml(novel)));
                else sendResponse(response, novels.toXml(novel));
            }
        }
    }

    // POST /novels
    @Override
    public void doPost(HttpServletRequest request, HttpServletResponse response) {
        [String][9] author = request.getParameter("author");
        [String][9] title = request.getParameter("title");

        // Are the data to create a new novel present?
        if (author == null || title == null)
            throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_BAD_REQUEST));

        // Create a novel.
        Novel n = new Novel();
        n.setAuthor(author);
        n.setTitle(title);

        // Save the ID of the newly created Novel.
        int id = novels.addNovel(n);

        // Generate the confirmation message.
        [String][9] msg = "Novel " + id + " created.\n";
        sendResponse(response, novels.toXml(msg));
    }

    // PUT /novels
    @Override
    public void doPut(HttpServletRequest request, HttpServletResponse response) {
        /* A workaround is necessary for a PUT request because Tomcat does not
           generate a workable parameter map for the PUT verb. */
        [String][9] key = null;
        [String][9] rest = null;
        boolean author = false;

        /* Let the hack begin. */
        try {
            [BufferedReader][16] br =
                new [BufferedReader][16](new [InputStreamReader][15](request.getInputStream()));
            [String][9] data = br.readLine();
            /* To simplify the hack, assume that the PUT request has exactly
               two parameters: the id and either author or title. Assume, further,
               that the id comes first. From the client side, a hash character
               # separates the id and the author/title, e.g.,

               id=33#title=War and Peace
            */
            [String][9][] args = data.split("#");      // id in args[0], rest in args[1]
            [String][9][] parts1 = args[0].split("="); // id = parts1[1]
            key = parts1[1];

            [String][9][] parts2 = args[1].split("="); // parts2[0] is key
            if (parts2[0].contains("author")) author = true;
            rest = parts2[1];
        }
        catch([Exception][13] e) {
            throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }

        // If no key, then the request is ill formed.
        if (key == null)
            throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_BAD_REQUEST));

        // Look up the specified novel.
        Novel p = novels.getConcurrentMap().get([Integer][10].valueOf((key.trim())));
        if (p == null) { // not found
            [String][9] msg = key + " does not map to a novel.\n";
            sendResponse(response, novels.toXml(msg));
        }
        else { // found
            if (rest == null) {
                throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_BAD_REQUEST));
            }
            // Do the editing.
            else {
                if (author) p.setAuthor(rest);
                else p.setTitle(rest);

                [String][9] msg = "Novel " + key + " has been edited.\n";
                sendResponse(response, novels.toXml(msg));
            }
        }
    }

    // DELETE /novels?id=1
    @Override
    public void doDelete(HttpServletRequest request, HttpServletResponse response) {
        [String][9] param = request.getParameter("id");
        [Integer][10] key = (param == null) ? null : [Integer][10].valueOf((param.trim()));
        // Only one Novel can be deleted at a time.
        if (key == null)
            throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_BAD_REQUEST));
        try {
            novels.getConcurrentMap().remove(key);
            [String][9] msg = "Novel " + key + " removed.\n";
            sendResponse(response, novels.toXml(msg));
        }
        catch([Exception][13] e) {
            throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }

    // Methods Not Allowed
    @Override
    public void doTrace(HttpServletRequest request, HttpServletResponse response) {
        throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doHead(HttpServletRequest request, HttpServletResponse response) {
        throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doOptions(HttpServletRequest request, HttpServletResponse response) {
        throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    // Send the response payload (Xml or Json) to the client.
    private void sendResponse(HttpServletResponse response, [String][9] payload) {
        try {
            [OutputStream][20] out = response.getOutputStream();
            out.write(payload.getBytes());
            out.flush();
        }
        catch([Exception][13] e) {
            throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }
}
```

Recall that the `NovelsServlet` class above extends the `HttpServlet` class, which in turn extends the `GenericServlet` class, which implements the `Servlet` interface:

```
NovelsServlet extends HttpServlet extends GenericServlet implements Servlet
```

As the name makes clear, the `HttpServlet` is designed for servlets delivered over HTTP(S). The class provides empty methods named after the standard HTTP request verbs (officially, _methods_):

  * `doPost` (Post = Create)
  * `doGet` (Get = Read)
  * `doPut` (Put = Update)
  * `doDelete` (Delete = Delete)

Some additional HTTP verbs are covered as well. An extension of the `HttpServlet`, such as the `NovelsServlet`, overrides any `do` method of interest, leaving the others as no-ops. The `NovelsServlet` overrides seven of the `do` methods.

Each of the `HttpServlet` CRUD methods takes the same two arguments. Here is `doPost` as an example:

```
public void doPost(HttpServletRequest request, HttpServletResponse response) {
```

The `request` argument is a map of the HTTP request information, and the `response` provides an output stream back to the requester. A method such as `doPost` is structured as follows:

  * Read the `request` information, taking whatever action is appropriate to generate a response. If information is missing or otherwise deficient, generate an error.

  * Use the extracted request information to perform the appropriate CRUD operation (in this case, create a `Novel`) and then encode an appropriate response to the requester using the `response` output stream to do so. In the case of `doPost`, the response is a confirmation that a new novel has been created and added to the collection. Once the response is sent, the output stream is closed, which closes the connection as well.

### More on the do method overrides

An HTTP request has a relatively simple structure. Here is a sketch in the familiar HTTP 1.1 format, with comments introduced by double sharp signs:

```
GET /novels              ## start line
Host: localhost:8080     ## header element
Accept-type: text/plain  ## ditto
...
[body]                   ## POST and PUT only
```

The start line begins with the HTTP verb (in this case, `GET`) and the URI (Uniform Resource Identifier), which is the noun (in this case, `novels`) that names the targeted resource. The headers consist of key-value pairs, with a colon separating the key on the left from the value(s) on the right. The header with key `Host` (case insensitive) is required; the hostname `localhost` is the symbolic address of the local machine, and the port number `8080` is the default for the Tomcat web server awaiting HTTP requests. (By default, Tomcat listens on port 8443 for HTTPS requests.) The header elements can occur in arbitrary order. In this example, the `Accept-type` header's value is the MIME type `text/plain`.

Some requests (in particular, `POST` and `PUT`) have bodies, whereas others (in particular, `GET` and `DELETE`) do not. If there is a body (perhaps empty), two newlines separate the headers from the body; the HTTP body consists of key-value pairs. For bodyless requests, header elements, such as the query string, can be used to send information. Here is a request to `GET` the `/novels` resource with the ID of 2:

```
GET /novels?id=2
```

The query string starts with the question mark and, in general, consists of key-value pairs, although a key without a value is possible.
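
To tie the two request shapes together, here is a sketch of what a `POST` request with a body might look like on the wire, annotated in the same informal style as the sketch above; the exact header values are illustrative:

```
POST /novels HTTP/1.1                           ## start line
Host: localhost:8080                            ## header element
Content-Type: application/x-www-form-urlencoded ## how the body is encoded
Content-Length: 35                              ## size of the body in bytes

author=Jane+Austen&title=Persuasion             ## body: key-value pairs
```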

The `HttpServlet`, with methods such as `getParameter` and `getParameterMap`, nicely hides the distinction between HTTP requests with and without a body. In the novels example, the `getParameter` method is used to extract the required information from the `GET`, `POST`, and `DELETE` requests. (Handling a `PUT` request requires lower-level code because Tomcat does not provide a workable parameter map for `PUT` requests.) Here, for illustration, is a slice of the `doPost` method in the `NovelsServlet` override:

```
@Override
public void doPost(HttpServletRequest request, HttpServletResponse response) {
    [String][9] author = request.getParameter("author");
    [String][9] title = request.getParameter("title");
    ...
```

For a bodyless `DELETE` request, the approach is essentially the same:

```
@Override
public void doDelete(HttpServletRequest request, HttpServletResponse response) {
    [String][9] param = request.getParameter("id"); // id of novel to be removed
    ...
```

The `doGet` method needs to distinguish between two flavors of a `GET` request: one flavor means "get all", whereas the other means "get a specified one". If the `GET` request URL contains a query string whose key is an ID, then the request is interpreted as "get a specified one":

```
http://localhost:8080/novels?id=2  ## GET specified
```

If there is no query string, the `GET` request is interpreted as "get all":

```
http://localhost:8080/novels  ## GET all
```

### Some devilish details

The novels service design reflects how a Java-based web server such as Tomcat works. At startup, Tomcat builds a thread pool from which request handlers are drawn, an approach known as the _one thread per request model_. Modern versions of Tomcat also use non-blocking I/O to boost performance.

The novels service executes as a _single_ instance of the `NovelsServlet` class, which in turn maintains a _single_ collection of novels. Accordingly, a race condition would arise, for example, if these two requests were processed concurrently:

  * One request changes the collection by adding a new novel.
  * The other request gets all the novels in the collection.

The outcome is indeterminate, depending on exactly how the _read_ and _write_ operations overlap. To avoid this problem, the novels service uses a thread-safe `ConcurrentMap`. Keys for this map are generated with a thread-safe `AtomicInteger`. Here is the relevant code segment:

```
public class Novels {
    private ConcurrentMap<[Integer][10], Novel> novels;
    private AtomicInteger mapKey;
    ...
```

By default, a response to a client request is encoded as XML. The novels program uses the old-time `XMLEncoder` class for simplicity; a far richer option is the JAX-B library. The code is straightforward:

```
public [String][9] toXml([Object][11] obj) { // default encoding
    [String][9] xml = null;
    try {
        [ByteArrayOutputStream][12] out = new [ByteArrayOutputStream][12]();
        XMLEncoder encoder = new XMLEncoder(out);
        encoder.writeObject(obj);
        encoder.close();
        xml = out.toString();
    }
    catch([Exception][13] e) { }
    return xml;
}
```

The `Object` parameter is either a sorted `ArrayList` of novels (in response to a "get all" request); or a single `Novel` instance (in response to a "get one" request); or a `String` (a confirmation message).

If an HTTP request header refers to JSON as a desired type, then the XML is converted to JSON. Here is the check in the `doGet` method of the `NovelsServlet`:

```
[String][9] accept = request.getHeader("accept"); // "accept" is case insensitive
if (accept != null && accept.contains("json")) json = true;
```

The `Novels` class houses the `toJson` method, which converts XML to JSON:

```
public [String][9] toJson([String][9] xml) { // option for requester
    try {
        JSONObject jobt = XML.toJSONObject(xml);
        return jobt.toString(3); // 3 is indentation level
    }
    catch([Exception][13] e) { }
    return null;
}
```

The `NovelsServlet` checks for errors of various types. For example, a `POST` request should include an author and a title for the new novel. If either is missing, the `doPost` method throws an exception:

```
if (author == null || title == null)
    throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_BAD_REQUEST));
```

The `SC` in `SC_BAD_REQUEST` stands for _status code_, and `BAD_REQUEST` has the standard HTTP numeric value of 400. If the HTTP verb in a request is `TRACE`, a different status code is returned:

```
public void doTrace(HttpServletRequest request, HttpServletResponse response) {
    throw new [RuntimeException][19]([Integer][10].toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
}
```

### Testing the novels service

Testing a web service with a browser is tricky. Among the CRUD verbs, modern browsers generate only `POST` (Create) and `GET` (Read) requests. Even a `POST` request is challenging from a browser, as the key-values for the body need to be included; this is typically done through an HTML form. A command-line utility such as [curl][21] is a better way to go, as this section illustrates with some `curl` commands, which are included in the ZIP on my website.
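
For completeness, here is a sketch of the kind of HTML form a browser could use to issue the `POST` request; the form is illustrative and not part of the deployed service:

```
<form action="http://localhost:8080/novels/" method="post">
  <input type="text" name="author">  <!-- becomes the author key-value in the body -->
  <input type="text" name="title">   <!-- becomes the title key-value in the body -->
  <input type="submit" value="Create novel">
</form>
```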

Here are some sample tests without the corresponding output:

```
% curl localhost:8080/novels/
% curl localhost:8080/novels?id=1
% curl --header "Accept: application/json" localhost:8080/novels/
```

The first command requests all the novels, which are encoded by default in XML. The second command requests the novel with an ID of 1, which is encoded in XML. The last command adds an `Accept` header element with `application/json` as the MIME type desired. The "get one" command could also use this header element. Such requests receive JSON rather than XML responses.

The next two commands create a new novel in the collection and confirm the addition:

```
% curl --request POST --data "author=Tolstoy&title=War and Peace" localhost:8080/novels/
% curl localhost:8080/novels?id=4
```

A `PUT` command in `curl` resembles a `POST` command except that the `PUT` body does not use standard syntax. The documentation for the `doPut` method in the `NovelsServlet` goes into detail, but the short version is that Tomcat does not generate a proper map on `PUT` requests. Here is the sample `PUT` command and a confirmation command:

```
% curl --request PUT --data "id=3#title=This is an UPDATE" localhost:8080/novels/
% curl localhost:8080/novels?id=3
```

The second command confirms the update.

Finally, the `DELETE` command works as expected:

```
% curl --request DELETE localhost:8080/novels?id=2
% curl localhost:8080/novels/
```

The request is for the novel with the ID of 2 to be deleted. The second command shows the remaining novels.

### The web.xml configuration file

Although it's officially optional, a `web.xml` configuration file is a mainstay in a production-grade website or service. The configuration file allows routing, security, and other features of a site or service to be specified independently of the implementation code. The configuration for the novels service handles routing by providing a URL pattern for requests dispatched to this service:

```
<?xml version = "1.0" encoding = "UTF-8"?>
<web-app>
  <servlet>
    <servlet-name>novels</servlet-name>
    <servlet-class>novels.NovelsServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>novels</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```

The `servlet-name` element provides an abbreviation (`novels`) for the servlet's fully qualified class name (`novels.NovelsServlet`), and this name is used in the `servlet-mapping` element below.

Recall that a URL for a deployed service has the WAR file name right after the port number:

```
http://localhost:8080/novels/
```

The slash immediately after the port number begins the URI known as the _path_ to the requested resource, in this case, the novels service; hence, the term `novels` occurs after the first single slash.

In the `web.xml` file, the `url-pattern` is specified as `/*`, which means _any path that starts with /novels_. Suppose Tomcat encounters a contrived request URL, such as this:

```
http://localhost:8080/novels/foobar/
```

The `web.xml` configuration specifies that this request, too, should be dispatched to the novels servlet because the `/*` pattern covers `/foobar`. The contrived URL thus has the same result as the legitimate one shown above it.

A production-grade configuration file might include information on security, both wire-level and users-roles. Even in this case, the configuration file would be only two or three times the size of the sample one.
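
As a hint of what that might look like, here is a sketch of a standard `security-constraint` element that would force HTTPS for the whole service; the element names come from the servlet specification, but the values are illustrative. Such an element would sit inside the `web-app` element alongside the servlet entries:

```
<security-constraint>
  <web-resource-collection>
    <web-resource-name>novels</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <!-- CONFIDENTIAL requires a secure transport, in effect HTTPS -->
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```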

### Wrapping up

The `HttpServlet` is at the center of Java's web technologies. A website or web service, such as the novels service, extends this class, overriding the `do` verbs of interest. A Restful framework such as Jersey (JAX-RS) or Restlet does essentially the same by providing a customized servlet, which then acts as the HTTP(S) endpoint for requests against a web application written in the framework.

A servlet-based application has access, of course, to any Java library required in the web application. If the application follows the separation-of-concerns principle, then the servlet code remains attractively simple: the code checks a request, issuing the appropriate error if there are deficiencies; otherwise, the code calls out for whatever functionality may be required (e.g., querying a database, encoding a response in a specified format), and then sends the response to the requester. The `HttpServletRequest` and `HttpServletResponse` types make it easy to perform the servlet-specific work of reading the request and writing the response.

Java has APIs that range from the very simple to the highly complicated. If you need to deliver some Restful services using Java, my advice is to give the low-fuss `HttpServlet` a try before anything else.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/restful-services-java

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: http://xmlrpc.com/
[3]: https://en.wikipedia.org/wiki/Representational_state_transfer
[4]: https://www.redhat.com/en/topics/integration/whats-the-difference-between-soap-rest
[5]: http://tomcat.apache.org/
[6]: https://condor.depaul.edu/mkalin
[7]: https://tomcat.apache.org/download-90.cgi
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+serializable
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bytearrayoutputstream
[13]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstream
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstreamreader
[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bufferedreader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtimeexception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[21]: https://curl.haxx.se/

@ -1,76 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Decentralized Messaging App Riot Rebrands to Element)
[#]: via: (https://itsfoss.com/riot-to-element/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Decentralized Messaging App Riot Rebrands to Element
======

Riot is/was a decentralized instant messaging app based on the open source Matrix protocol.

In late June, Riot (the instant messaging client) announced that they would be changing their name. Yesterday, they revealed that their new name is [Element][1]. Let’s see more details on why Riot changed its name and what else is being changed.

### Why change the name from Riot to Element?

![][2]

Before we get to the most recent announcement, let us take a look at why they changed their name in the first place.

According to a [blog post][3] dated June 23rd, the group had three reasons for the name change.

First, they stated that “a certain large games company” had repeatedly blocked them from trademarking the Riot and Riot.im product names. (If I had to guess, they are probably referring to this [“games company”][4].)

Second, they originally chose the name Riot to “evoke something disruptive and vibrant”. They are worried that people are instead thinking that the app is “focused on violence”. I imagine that current world events have not helped that situation.

Third, they want to clear up any confusion created by the many brand names involved with Riot. For example, Riot is created by a company named New Vector, while Riot is hosted on Modular, which is also a product of New Vector. They want to simplify their naming system to avoid confusing potential customers. When people look for a messaging solution, they want them to only have to look for one name: Element.

### Element is everywhere

![][5]

As of July 15th, the name of the app and the name of the company have been changed to Element. Their Matrix hosting service will now be called Element Matrix Services. Their announcement sums it up nicely:

> “For those discovering us for the first time: Element is the flagship secure collaboration app for the decentralised Matrix communication network. Element lets you own your own end-to-end encrypted chat server, while still connecting to everyone else in the wider Matrix network.”

They chose the name Element because it “reflects the emphasis on simplicity and clarity that we aimed for when designing RiotX; a name that highlights our single-minded mission to make Element the most elegant and usable mainstream comms app imaginable”. They also said they wanted a name that “evokes the idea of data ownership and self-sovereignty”. They also thought it was a cool name.

### More than just a name change

![][6]

The recent announcement also makes it clear that this move is more than just a simple name change. Element has also released its “next generation Matrix client for Android”. The client was formerly known as RiotX and is now renamed Element. (What else?) It is a complete rewrite of the former client and now supports VoIP calls and widgets. Element will also be available on iOS with support for iOS 13 and “entirely new push notification support”.

The Element Web client has also received some love, with a UI update and a new, easier-to-read font. They have also “rewritten the Room List control – adding in room previews(!!), alphabetic ordering, resizable lists, improved notification UI and more”. They have also started working to improve end-to-end encryption.

### Final thought

The people over at Element are taking a big step by making a major name change like this. They may lose some customers in the short term. (This could mainly be due to people not being aware of the name change or simply not liking change.) However, in the long run, the brand simplification will help them stand out from the crowd.

The only negative note I’ll mention is that this is the third name change in the app’s history. It was originally named Vector when it was released in 2016. The name was changed to Riot later that year. Hopefully, Element is here to stay.

If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit][7].
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/riot-to-element/
|
||||
|
||||
作者:[John Paul][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/john/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://element.io/
|
||||
[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/07/riot-to-element.png?ssl=1
|
||||
[3]: https://element.io/blog/the-world-is-changing/
|
||||
[4]: https://en.wikipedia.org/wiki/Riot_Games
|
||||
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/07/element-desktop.jpg?ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/07/element-apps.jpg?ssl=1
|
||||
[7]: http://reddit.com/r/linuxusersgroup
|
@@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (FSSlc)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
@@ -299,7 +299,7 @@ via: https://opensource.com/article/20/7/nmcli
|
||||
|
||||
作者:[Dave McKay][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[FSSlc](https://github.com/FSSlc)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@@ -1,182 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (SCP user’s migration guide to rsync)
|
||||
[#]: via: (https://fedoramagazine.org/scp-users-migration-guide-to-rsync/)
|
||||
[#]: author: (chasinglogic https://fedoramagazine.org/author/chasinglogic/)
|
||||
|
||||
SCP user’s migration guide to rsync
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
As part of the [8.0 pre-release announcement,][2] the OpenSSH project stated that they consider the scp protocol outdated, inflexible, and not readily fixed. They then go on to recommend the use of sftp or rsync for file transfer instead.
|
||||
|
||||
Many users grew up on the _scp_ command, however, and so are not familiar with rsync. Additionally, rsync can do much more than just copy files, which can give a beginner the impression that it’s complicated and opaque. This is especially true because, broadly, the scp flags map directly to the cp flags, while the rsync flags do not.
|
||||
|
||||
This article will provide an introduction and transition guide for anyone familiar with scp. Let’s jump into the most common scenarios: Copying Files and Copying Directories.
|
||||
|
||||
### Copying files
|
||||
|
||||
For copying a single file, the scp and rsync commands are effectively equivalent. Let’s say you need to ship _foo.txt_ to your home directory on a server named _server_.
|
||||
|
||||
```
|
||||
$ scp foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
The equivalent rsync command requires only that you type rsync instead of scp:
|
||||
|
||||
```
|
||||
$ rsync foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
### Copying directories
|
||||
|
||||
For copying directories, things diverge quite a bit, which probably explains why rsync is seen as more complex than scp. If you want to copy the directory _bar_ to _server_, the corresponding scp command looks exactly like the cp command except for specifying ssh information:
|
||||
|
||||
```
|
||||
$ scp -r bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
With rsync, there are more considerations, as it’s a more powerful tool. First, let’s look at the simplest form:
|
||||
|
||||
```
|
||||
$ rsync -r bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
Looks simple, right? For the simple case of a directory that contains only directories and regular files, this will work. However, rsync cares a lot about sending files exactly as they are on the host system. Let’s create a slightly more complex, but not uncommon, example.
|
||||
|
||||
```
|
||||
# Create a multi-level directory structure
|
||||
$ mkdir -p bar/baz
|
||||
# Create a file at the root directory
|
||||
$ touch bar/foo.txt
|
||||
# Now create a symlink which points back up to this file
|
||||
$ cd bar/baz
|
||||
$ ln -s ../foo.txt link.txt
|
||||
# Return to our original location
|
||||
$ cd -
|
||||
```
|
||||
|
||||
We now have a directory tree that looks like the following:
|
||||
|
||||
```
|
||||
bar
|
||||
├── baz
|
||||
│ └── link.txt -> ../foo.txt
|
||||
└── foo.txt
|
||||
|
||||
1 directory, 2 files
|
||||
```
|
||||
|
||||
If we try the commands from above to copy bar, we’ll notice very different (and surprising) results. First, let’s give scp a go:
|
||||
|
||||
```
|
||||
$ scp -r bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
If you ssh into your server and look at the directory tree of bar you’ll notice an important and subtle difference from your host system:
|
||||
|
||||
```
|
||||
bar
|
||||
├── baz
|
||||
│ └── link.txt
|
||||
└── foo.txt
|
||||
|
||||
1 directory, 2 files
|
||||
```
|
||||
|
||||
Note that _link.txt_ is no longer a symlink. It is now a full-blown copy of _foo.txt_. This might be surprising behavior if you’re used to _cp_. If you did try to copy the _bar_ directory using _cp -r_, you would get a new directory with the exact symlinks that _bar_ had. Now if we try the same rsync command from before we’ll get a warning:
|
||||
|
||||
```
|
||||
$ rsync -r bar/ me@server:/home/me/
|
||||
skipping non-regular file "bar/baz/link.txt"
|
||||
```
|
||||
|
||||
Rsync has warned us that it found a non-regular file and is skipping it. Because you didn’t tell it to copy symlinks, it’s ignoring them. Rsync has an extensive manual section titled “SYMBOLIC LINKS” that explains all of the possible behavior options available to you. For our example, we need to add the `--links` flag.
|
||||
|
||||
```
|
||||
$ rsync -r --links bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
On the remote server we see that the symlink was copied over as a symlink. Note that this is different from how scp copied the symlink.
|
||||
|
||||
```
|
||||
bar/
|
||||
├── baz
|
||||
│ └── link.txt -> ../foo.txt
|
||||
└── foo.txt
|
||||
|
||||
1 directory, 2 files
|
||||
```
|
||||
|
||||
To save some typing and take advantage of more file-preserving options, use the `--archive` (`-a` for short) flag whenever copying a directory. The archive flag will do what most people expect, as it enables recursive copy, symlink copy, and many other options.
|
||||
|
||||
```
|
||||
$ rsync -a bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
The rsync man page has in-depth explanations of what the archive flag enables if you’re curious.
|
||||
|
||||
### Caveats
|
||||
|
||||
There is one caveat, however, to using rsync: it’s much easier to specify a non-standard ssh port with scp than with rsync. If _server_ were accepting SSH connections on port 8022, for instance, then the commands would look like this:
|
||||
|
||||
```
|
||||
$ scp -P 8022 foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
With rsync, you have to specify the “remote shell” command to use, which defaults to _ssh_. You do so using the `-e` flag.
|
||||
|
||||
```
|
||||
$ rsync -e 'ssh -p 8022' foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
Rsync does use your ssh config, however, so if you are connecting to this server frequently, you can add the following snippet to your _~/.ssh/config_ file. Then you no longer need to specify the port for either the rsync or ssh commands!
|
||||
|
||||
```
|
||||
Host server
|
||||
Port 8022
|
||||
```
|
||||
|
||||
Alternatively, if every server you connect to runs on the same non-standard port, you can configure the _RSYNC_RSH_ environment variable.
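For instance, a minimal sketch, assuming every host listens on port 8022 and you use a Bash-style shell:

```
$ export RSYNC_RSH='ssh -p 8022'
$ rsync foo.txt me@server:/home/me/
```

Every rsync invocation in that shell session now uses `ssh -p 8022` as its remote shell.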
|
||||
|
||||
### Why else should you switch to rsync?
|
||||
|
||||
Now that we’ve covered the everyday use cases and caveats for switching from scp to rsync, let’s take some time to explore why you probably want to use rsync on its own merits. Many people made the switch to rsync long ago on these merits alone.
|
||||
|
||||
#### In-flight compression
|
||||
|
||||
If you have a slow or otherwise limited network connection between you and your server, rsync can spend more CPU cycles to save network bandwidth. It does this by compressing data before sending it. Compression is enabled with the `-z` flag.
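For example, a quick sketch reusing the earlier hosts, with compression layered on top of the archive flag:

```
$ rsync -az bar/ me@server:/home/me/
```

The `-z` flag compresses data in flight, and the receiving rsync decompresses it before writing to disk.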
|
||||
|
||||
#### Delta transfers
|
||||
|
||||
Rsync also only copies a file if the target file is different from the source file. This works recursively through directories. For instance, if you took our final bar example above and re-ran that rsync command multiple times, it would do no work after the initial transfer. For this feature alone, rsync is worth using even for local copies that you know you will repeat (such as backing up to a USB drive), since it can save a lot of time with large data sets.
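Here is a sketch of that USB-drive case (the mount point is just a placeholder path):

```
$ rsync -a ~/Documents/ /mnt/usb/backup/  # first run copies everything
$ rsync -a ~/Documents/ /mnt/usb/backup/  # later runs transfer only what changed
```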
|
||||
|
||||
#### Syncing
|
||||
|
||||
As the name implies, rsync can do more than just copy data. So far, we’ve only demonstrated how to copy files with rsync. If you instead want rsync to make the target directory look like your source directory, you can add the `--delete` flag. The delete flag makes rsync copy files from the source directory that don’t exist in the target directory, and then remove files in the target directory that do not exist in the source directory. The result is a target directory identical to the source directory. By contrast, scp will only ever add files to the target directory.
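Continuing the earlier example, a sketch of a true sync:

```
$ rsync -a --delete bar/ me@server:/home/me/
```

Afterwards, the remote _bar_ mirrors the local one exactly, including deletions, so treat `--delete` with care: a mistyped source path can remove files on the target that you meant to keep.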
|
||||
|
||||
### Conclusion
|
||||
|
||||
For simple use cases, rsync is not significantly more complicated than the venerable scp tool. The only significant difference is the use of `-a` instead of `-r` for recursive copying of directories. However, as we saw, rsync’s `-a` flag behaves more like cp’s `-r` flag than scp’s `-r` flag does.
|
||||
|
||||
Hopefully, with these new commands, you can speed up your file transfer workflow!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/scp-users-migration-guide-to-rsync/
|
||||
|
||||
作者:[chasinglogic][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/chasinglogic/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2020/07/scp-rsync-816x345.png
|
||||
[2]: https://lists.mindrot.org/pipermail/openssh-unix-dev/2019-March/037672.html
|
@@ -1,105 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (5 open source IDE tools for Java)
|
||||
[#]: via: (https://opensource.com/article/20/7/ide-java)
|
||||
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)
|
||||
|
||||
5 open source IDE tools for Java
|
||||
======
|
||||
Java IDE tools offer plenty of ways to create a programming environment
|
||||
based on your unique needs and preferences.
|
||||
![woman on laptop sitting at the window][1]
|
||||
|
||||
[Java][2] frameworks make life easier for programmers by streamlining their work. These frameworks were designed and developed to run any application on any server environment; that includes dynamic behaviors in terms of parsing annotations, scanning descriptors, loading configurations, and launching the actual services on a Java virtual machine (JVM). Controlling this much scope requires more code, making it difficult to minimize memory footprint or speed up startup times for new applications. Regardless, Java consistently ranks in the top three of programming languages in use today with a community of seven to ten million developers in the [TIOBE Index][3].
|
||||
|
||||
With all that code written in Java, that means there are some great options for integrated development environments (IDE) to give developers all the tools needed to effectively write, lint, test, and run Java applications.
|
||||
|
||||
Below, I introduce—in alphabetical order—my five favorite open source IDE tools to write Java and how to configure their basics.
|
||||
|
||||
### BlueJ
|
||||
|
||||
[BlueJ][4] provides an integrated educational Java development environment for Java beginners. It also aids in developing small-scale software using the Java Development Kit (JDK). The installation options for a variety of versions and operating systems are available [here][5].
|
||||
|
||||
Once you install the BlueJ IDE on your laptop, start a new project: click on New Project in the Project menu, then begin writing Java code via New Class. Sample methods and skeleton code will be generated as below:
|
||||
|
||||
![BlueJ IDE screenshot][6]
|
||||
|
||||
BlueJ not only provides an interactive graphical user interface (GUI) for teaching Java programming courses in schools but also allows developers to invoke functions (i.e., objects, methods, parameters) without source code compilation.
|
||||
|
||||
### Eclipse
|
||||
|
||||
[Eclipse][7] is one of the most famous Java IDEs based on the desktop, and it supports a variety of programming languages such as C/C++, JavaScript, and PHP. It also allows developers to add unlimited extensions from the Eclipse Marketplace for more development conveniences. [Eclipse Foundation][8] provides a Web IDE called [Eclipse Che][9] for DevOps teams to spin up an agile software development environment with hosted workspaces on multiple cloud platforms.
|
||||
|
||||
The download is available [here][10]; then you can create a new project or import an existing project from a local directory. Find more Java development tips in [this article][11].
|
||||
|
||||
![Eclipse IDE screenshot][12]
|
||||
|
||||
### IntelliJ IDEA
|
||||
|
||||
[IntelliJ IDEA CE (Community Edition)][13] is the open source version of IntelliJ IDEA, providing an IDE for multiple programming languages (i.e., Java, Groovy, Kotlin, Rust, Scala). IntelliJ IDEA CE is also very popular for experienced developers to use for existing source refactoring, code inspections, building testing cases with JUnit or TestNG, and building codes using Maven or Ant. Downloadable binaries are available [here][14].
|
||||
|
||||
IntelliJ IDEA CE comes with some unique features; I particularly like the API tester. For example, if you implement a REST API with a Java framework, IntelliJ IDEA CE allows you to test the API's functionality via Swing GUI designer:
|
||||
|
||||
![IntelliJ IDEA screenshot][15]
|
||||
|
||||
IntelliJ IDEA CE is open source, but the company behind it has a commercial option. Find more differences between the Community Edition and the Ultimate [here][16].
|
||||
|
||||
### NetBeans IDE
|
||||
|
||||
[NetBeans IDE][17] is an integrated Java development environment that allows developers to craft modular applications for standalone, mobile, and web architecture with supported web technologies (i.e., HTML5, JavaScript, and CSS). NetBeans IDE allows developers to set up multiple views on how to manage projects, tools, and data efficiently and helps them collaborate on software development—using Git integration—when a new developer joins the project.
|
||||
|
||||
Downloadable binaries are available [here][18] for multiple platforms (i.e., Windows, macOS, Linux). Once you install the IDE tool in your local environment, the New Project wizard helps you create a new project. For example, the wizard generates skeleton code (with sections to fill in, like `// TODO code application logic here`) to which you can then add your own application code.
|
||||
|
||||
### VSCodium
|
||||
|
||||
[VSCodium][19] is a lightweight, free source code editor that developers can install on a variety of OS platforms (i.e., Windows, macOS, Linux); it is an open source alternative based on [Visual Studio Code][20]. It was also designed and developed to support a rich ecosystem for multiple programming languages (i.e., Java, C++, C#, PHP, Go, Python, .NET). For high code quality, Visual Studio Code provides debugging, intelligent code completion, syntax highlighting, and code refactoring by default.
|
||||
|
||||
There are many download options available in the [repository][21]. When you run Visual Studio Code, you can add new features and themes by clicking on the Extensions icon in the activity bar on the left side or by pressing Ctrl+Shift+X on the keyboard. For example, Quarkus Tools for Visual Studio Code comes up when you type "quarkus" in the search box. The extension allows you to use helpful tools for [writing Java with Quarkus in VS Code][22]:
|
||||
|
||||
![VSCodium IDE screenshot][23]
|
||||
|
||||
### Wrapping up
|
||||
|
||||
Java being one of the most widely used programming languages and environments, these five are just a fraction of the different open source IDE tools available for Java developers. It can be hard to know which is the right one to choose. As always, it depends on your specific needs and goals—what kinds of workloads (web, mobile, messaging, data transaction) you want to implement and what runtimes (local, cloud, Kubernetes, serverless) you will deploy using IDE extended features. While the wealth of options out there can be overwhelming, it does also mean that you can probably find one that suits your particular circumstances and preferences.
|
||||
|
||||
Do you have a favorite open source Java IDE? Share it in the comments!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/ide-java
|
||||
|
||||
作者:[Daniel Oh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/daniel-oh
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
|
||||
[2]: https://opensource.com/resources/java
|
||||
[3]: https://www.tiobe.com/tiobe-index/
|
||||
[4]: https://www.bluej.org/about.html
|
||||
[5]: https://www.bluej.org/versions.html
|
||||
[6]: https://opensource.com/sites/default/files/uploads/5_open_source_ide_tools_to_write_java_and_how_you_begin_it.png (BlueJ IDE screenshot)
|
||||
[7]: https://www.eclipse.org/ide/
|
||||
[8]: https://www.eclipse.org/
|
||||
[9]: https://opensource.com/article/19/10/cloud-ide-che
|
||||
[10]: https://www.eclipse.org/downloads/
|
||||
[11]: https://opensource.com/article/19/10/java-basics
|
||||
[12]: https://opensource.com/sites/default/files/uploads/os_ide_2.png (Eclipse IDE screenshot)
|
||||
[13]: https://www.jetbrains.com/idea/
|
||||
[14]: https://www.jetbrains.org/display/IJOS/Download
|
||||
[15]: https://opensource.com/sites/default/files/uploads/os_ide_3.png (IntelliJ IDEA screenshot)
|
||||
[16]: https://www.jetbrains.com/idea/features/editions_comparison_matrix.html
|
||||
[17]: https://netbeans.org/
|
||||
[18]: https://netbeans.org/downloads/8.2/rc/
|
||||
[19]: https://vscodium.com/
|
||||
[20]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
|
||||
[21]: https://github.com/VSCodium/vscodium#downloadinstall
|
||||
[22]: https://opensource.com/article/20/4/java-quarkus-vs-code
|
||||
[23]: https://opensource.com/sites/default/files/uploads/os_ide_5.png (VSCodium IDE screenshot)
|
@@ -1,291 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to create a documentation site with Docsify and GitHub Pages)
|
||||
[#]: via: (https://opensource.com/article/20/7/docsify-github-pages)
|
||||
[#]: author: (Bryant Son https://opensource.com/users/brson)
|
||||
|
||||
How to create a documentation site with Docsify and GitHub Pages
|
||||
======
|
||||
Use Docsify to create documentation web pages to publish on GitHub
|
||||
Pages.
|
||||
![Digital creative of a browser on the internet][1]
|
||||
|
||||
Documentation is an essential part of making any open source project useful to users. But it's not always developers' top priority, as they may be more focused on making their application better than on helping people use it. This is why making it easier to publish documentation is so valuable to developers. In this tutorial, I'll show you one option for doing so: combining the [Docsify][2] documentation generator with [GitHub Pages][3].
|
||||
|
||||
If you prefer to learn by video, you can access the YouTube version of this how-to:
|
||||
|
||||
By default, GitHub Pages prompts users to use [Jekyll][4], a static site generator that supports HTML, CSS, and other web technologies. Jekyll generates a static website from documentation files encoded in Markdown format, which GitHub automatically recognizes due to their .md or .markdown extension. While this setup is nice, I wanted to try something else.
|
||||
|
||||
Fortunately, GitHub Pages' HTML file support means you can use other site-generation tools, including Docsify, to create a website on the platform. Docsify is an MIT-Licensed open source project with [features][5] that make it easy to create an attractive advanced documentation site on GitHub Pages.
|
||||
|
||||
![Docsify][6]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
### Get started with Docsify
|
||||
|
||||
There are two ways to install Docsify:
|
||||
|
||||
1. Docsify's command-line interface (CLI) through NPM
|
||||
2. Manually by writing your own `index.html`
|
||||
|
||||
|
||||
|
||||
Docsify recommends the NPM approach, but I will use the second option. If you want to use NPM, follow the instructions in the [quick-start guide][8].
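For reference, the NPM route boils down to a few commands; this sketch follows that quick-start guide:

```
$ npm i docsify-cli -g   # install the Docsify CLI globally
$ docsify init ./docs    # scaffold index.html, README.md, and .nojekyll
$ docsify serve docs     # preview locally at http://localhost:3000
```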
|
||||
|
||||
### Download the sample content from GitHub
|
||||
|
||||
I've published this example's source code on the [project's GitHub page][9]. You can download the files individually or [clone the repo][10] with:
|
||||
|
||||
|
||||
```
|
||||
git clone https://github.com/bryantson/OpensourceDotComDemos
|
||||
```
|
||||
|
||||
Then `cd` into the DocsifyDemo directory.
|
||||
|
||||
I will walk you through the cloned code from my sample repo below, so you can understand how to modify Docsify. If you prefer, you can start from scratch by creating a new `index.html` file, like in the [example][11] in Docsify's docs:
|
||||
|
||||
|
||||
```
|
||||
<!-- index.html -->

<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <meta charset="UTF-8">
  <link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
</head>
<body>
  <div id="app"></div>
  <script>
    window.$docsify = {
      //...
    }
  </script>
  <script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
</body>
</html>
|
||||
```
|
||||
|
||||
### Explore how Docsify works
|
||||
|
||||
If you cloned my [GitHub repo][10] and changed into the DocsifyDemo directory, you should see a file structure like this:
|
||||
|
||||
![File contents in the cloned GitHub][19]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
File/Folder Name | What It Is
---|---
index.html | The main Docsify initiation file (and the most important file)
_sidebar.md | Renders the navigation
README.md | The default Markdown file at the root of your documentation
images | Contains a sample .jpg image referenced by README.md
Other directories and files | Contain navigable Markdown files
|
||||
|
||||
The `index.html` file is the only thing required for Docsify to work. Open the file so you can explore the contents:
|
||||
|
||||
|
||||
```
|
||||
<!-- index.html -->

<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <meta charset="UTF-8">
  <link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
  <title>Docsify Demo</title>
</head>
<body>
  <div id="app"></div>
  <script>
    window.$docsify = {
      el: "#app",
      repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
      loadSidebar: true,
    }
  </script>
  <script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
</body>
</html>
|
||||
```
|
||||
|
||||
This is essentially just a plain HTML file, but take a look at these two lines:
|
||||
|
||||
|
||||
```
|
||||
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
... SOME OTHER STUFF ...
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
|
||||
```
|
||||
|
||||
These lines use content delivery network (CDN) URLs to serve the CSS and JavaScript that transform the site into a Docsify site. As long as you include these lines, you can turn your regular GitHub page into a Docsify page.
|
||||
|
||||
The first line after the `body` tag specifies what to render:
|
||||
|
||||
|
||||
```
|
||||
<div id="app"></div>
|
||||
```
|
||||
|
||||
Docsify is using the [single page application][21] (SPA) approach to render a requested page instead of refreshing an entirely new page.
|
||||
|
||||
Last, look at the lines inside the `script` block:
|
||||
|
||||
|
||||
```
|
||||
<script>
  window.$docsify = {
    el: "#app",
    repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
    loadSidebar: true,
  }
</script>
|
||||
```
|
||||
|
||||
In this block:
|
||||
|
||||
* The `el` property basically says, "Hey, this is the `id` I am looking for, so locate the `id` and render it there."
|
||||
|
||||
  * The `repo` value identifies which page users will be redirected to when they click the GitHub icon in the top-right corner.
|
||||
|
||||
![GitHub icon][22]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
  * Setting `loadSidebar` to `true` will make Docsify look for the `_sidebar.md` file that contains your navigation links.
|
||||
|
||||
|
||||
|
||||
|
||||
You can find all the options in the [Configuration][23] section of Docsify's docs.
|
||||
|
||||
Next, look at the `_sidebar.md` file. Because you set the `loadSidebar` property value to `true` in `index.html`, Docsify will look for the `_sidebar.md` file and generate the navigation file from its contents. The `_sidebar.md` contents in the sample repo are:
|
||||
|
||||
|
||||
```
|
||||
<!-- docs/_sidebar.md -->
|
||||
|
||||
* [HOME](./)
|
||||
|
||||
* [Tutorials](./tutorials/index)
|
||||
* [Tomcat](./tutorials/tomcat/index)
|
||||
* [Cloud](./tutorials/cloud/index)
|
||||
* [Java](./tutorials/java/index)
|
||||
|
||||
* [About](./about/index)
|
||||
|
||||
* [Contact](./contact/index)
|
||||
```
|
||||
|
||||
This uses Markdown's link format to create the navigation. Note that the Tomcat, Cloud, and Java links are indented; this causes them to be rendered as sublinks under the parent link.
|
||||
|
||||
Files like `README.md` and `images` pertain to the repository's structure, but all the other Markdown files are related to your Docsify webpage.
|
||||
|
||||
Modify the files you downloaded however you want, based on your needs. In the next step, you will add these files to your GitHub repo, enable GitHub Pages, and finish the project.
|
||||
|
||||
### Enable GitHub Pages
|
||||
|
||||
Create a sample GitHub repo, then use the following Git commands to stage, commit, and push your code:
|
||||
|
||||
|
||||
```
|
||||
$ git clone LOCATION_TO_YOUR_GITHUB_REPO
|
||||
$ cd LOCATION_TO_YOUR_GITHUB_REPO
|
||||
$ git add .
|
||||
$ git commit -m "My first Docsify!"
|
||||
$ git push
|
||||
```
|
||||
|
||||
Set up your GitHub Pages page. From inside your new GitHub repo, click **Settings**:
|
||||
|
||||
![Settings link in GitHub][24]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
Scroll down until you see **GitHub Pages**:
|
||||
|
||||
![GitHub Pages settings][25]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
Look for the **Source** section:
|
||||
|
||||
![GitHub Pages settings][26]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
Click the drop-down menu under **Source**. Usually, you will set this to the **master branch**, but you can use another branch, if you'd like:
|
||||
|
||||
![Setting Source to master branch][27]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
That's it! You should now have a link to your GitHub Pages page. Clicking the link will take you there, and it should render with Docsify:
|
||||
|
||||
![Link to GitHub Pages docs site][28]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
And it should look something like this:
|
||||
|
||||
![Example Docsify site on GitHub Pages][29]
|
||||
|
||||
(Bryant Son, [CC BY-SA 4.0][7])
|
||||
|
||||
### Conclusion
|
||||
|
||||
By editing a single HTML file and some Markdown text, you can create an awesome-looking documentation site with Docsify. What do you think? Please leave a comment and also share any other open source tools that can be used with GitHub Pages.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/docsify-github-pages
|
||||
|
||||
作者:[Bryant Son][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brson
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
|
||||
[2]: https://docsify.js.org
|
||||
[3]: https://pages.github.com/
|
||||
[4]: https://docs.github.com/en/github/working-with-github-pages/about-github-pages-and-jekyll
|
||||
[5]: https://docsify.js.org/#/?id=features
|
||||
[6]: https://opensource.com/sites/default/files/uploads/docsify1_ui.jpg (Docsify)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://docsify.js.org/#/quickstart?id=quick-start
|
||||
[9]: https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo
|
||||
[10]: https://github.com/bryantson/OpensourceDotComDemos
|
||||
[11]: https://docsify.js.org/#/quickstart?id=manual-initialization
|
||||
[12]: http://december.com/html/4/element/html.html
|
||||
[13]: http://december.com/html/4/element/head.html
|
||||
[14]: http://december.com/html/4/element/meta.html
|
||||
[15]: http://december.com/html/4/element/link.html
|
||||
[16]: http://december.com/html/4/element/body.html
|
||||
[17]: http://december.com/html/4/element/div.html
|
||||
[18]: http://december.com/html/4/element/script.html
|
||||
[19]: https://opensource.com/sites/default/files/uploads/docsify3_files.jpg (File contents in the cloned GitHub)
|
||||
[20]: http://december.com/html/4/element/title.html
|
||||
[21]: https://en.wikipedia.org/wiki/Single-page_application
|
||||
[22]: https://opensource.com/sites/default/files/uploads/docsify4_github-icon_rev_0.jpg (GitHub icon)
|
||||
[23]: https://docsify.js.org/#/configuration?id=configuration
|
||||
[24]: https://opensource.com/sites/default/files/uploads/docsify5_githubsettings_0.jpg (Settings link in GitHub)
|
||||
[25]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev.jpg (GitHub Pages settings)
|
||||
[26]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev2.jpg (GitHub Pages settings)
|
||||
[27]: https://opensource.com/sites/default/files/uploads/docsify8_setsource_rev.jpg (Setting Source to master branch)
|
||||
[28]: https://opensource.com/sites/default/files/uploads/docsify9_link_rev.jpg (Link to GitHub Pages docs site)
|
||||
[29]: https://opensource.com/sites/default/files/uploads/docsify2_examplesite.jpg (Example Docsify site on GitHub Pages)
|
@@ -1,267 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Creating and debugging Linux dump files)
|
||||
[#]: via: (https://opensource.com/article/20/8/linux-dump)
|
||||
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)
|
||||
|
||||
Creating and debugging Linux dump files
|
||||
======
|
||||
Knowing how to deal with dump files will help you find and fix
|
||||
hard-to-reproduce bugs in an application.
|
||||
![Magnifying glass on code][1]
|
||||
|
||||
Crash dump, memory dump, core dump, system dump … all produce the same outcome: a file containing the state of an application's memory at a specific time—usually when the application crashes.
|
||||
|
||||
Knowing how to deal with these files can help you find the root cause(s) of a failure. Even if you are not a developer, dump files created on your system can be very helpful (as well as approachable) in understanding software.
|
||||
|
||||
This is a hands-on article, and you can follow along with the example by cloning the sample application repository with:
|
||||
|
||||
|
||||
```
|
||||
git clone https://github.com/hANSIc99/core_dump_example.git
|
||||
```
|
||||
|
||||
### How signals relate to dumps
|
||||
|
||||
Signals are a kind of interprocess communication between the operating system and the user applications. Linux uses the signals defined in the [POSIX standard][2]. On your system, you can find the standard signals defined in `/usr/include/bits/signum-generic.h`. There is also an informative [man signal][3] page if you want more on using signals in your application. Put simply, Linux uses signals to trigger further activities based on whether they were expected or unexpected.
|
||||
|
||||
When you quit a running application, the application will usually receive the `SIGTERM` signal. Because this type of exit signal is expected, this action will not create a memory dump.
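You can observe the difference yourself with a throwaway process; a minimal sketch (`$!` expands to the PID of the last background job):

```
$ sleep 300 &
$ kill -TERM $!   # expected termination: no core dump
$ sleep 300 &
$ kill -SEGV $!   # unexpected fault: a dump is written if your limits allow it
```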
|
||||
|
||||
The following signals will cause a dump file to be created (source: [GNU C Library][4]):
|
||||
|
||||
* SIGFPE: Erroneous arithmetic operation
|
||||
* SIGILL: Illegal instruction
|
||||
* SIGSEGV: Invalid access to storage
|
||||
* SIGBUS: Bus error
|
||||
* SIGABRT: An error detected by the program and reported by calling abort
|
||||
* SIGIOT: Labeled archaic on Fedora, this signal used to trigger on `abort()` on a [PDP-11][5] and now maps to SIGABRT
|
||||
|
||||
|
||||
|
||||
### Creating dump files
|
||||
|
||||
Navigate to the `core_dump_example` directory, run `make`, and execute the sample with the `-c1` switch:
|
||||
|
||||
|
||||
```
|
||||
./coredump -c1
|
||||
```
|
||||
|
||||
The application should exit in state 4 with an error:
|
||||
|
||||
![Dump written][6]
|
||||
|
||||
(Stephan Avenwedde, [CC BY-SA 4.0][7])
|
||||
|
||||
"Abgebrochen (Speicherabzug geschrieben)" roughly translates to "Segmentation fault (core dumped)."
|
||||
|
||||
Whether it creates a core dump or not is determined by the resource limit of the user running the process. You can modify the resource limits with the `ulimit` command.
|
||||
|
||||
Check the current setting for core dump creation:
|
||||
|
||||
|
||||
```
|
||||
ulimit -c
|
||||
```
|
||||
|
||||
If it outputs `unlimited`, then it is using the (recommended) default. Otherwise, correct the limit with:
|
||||
|
||||
|
||||
```
|
||||
ulimit -c unlimited
|
||||
```
|
||||
|
||||
To disable the creation of core dumps, type:
|
||||
|
||||
|
||||
```
|
||||
ulimit -c 0
|
||||
```
|
||||
|
||||
The number specifies the resource in kilobytes.
|
||||
|
||||
### What are core dumps?
|
||||
|
||||
The way the kernel handles core dumps is defined in:
|
||||
|
||||
|
||||
```
|
||||
/proc/sys/kernel/core_pattern
|
||||
```
|
||||
|
||||
I'm running Fedora 31, and on my system, the file contains:
|
||||
|
||||
|
||||
```
|
||||
/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
|
||||
```
|
||||
|
||||
This shows core dumps are forwarded to the `systemd-coredump` utility. The contents of `core_pattern` can vary widely between the different flavors of Linux distributions. When `systemd-coredump` is in use, the dump files are saved compressed under `/var/lib/systemd/coredump`. You don't need to touch the files directly; instead, you can use `coredumpctl`. For example:
|
||||
|
||||
|
||||
```
|
||||
coredumpctl list
|
||||
```
|
||||
|
||||
shows all available dump files saved on your system.
|
||||
|
||||
With `coredumpctl dump`, you can retrieve information from the last dump file saved:
|
||||
|
||||
|
||||
```
|
||||
[stephan@localhost core_dump_example]$ ./coredump
|
||||
Application started…
|
||||
|
||||
(…….)
|
||||
|
||||
Message: Process 4598 (coredump) of user 1000 dumped core.
|
||||
|
||||
Stack trace of thread 4598:
|
||||
#0 0x00007f4bbaf22625 __GI_raise (libc.so.6)
|
||||
#1 0x00007f4bbaf0b8d9 __GI_abort (libc.so.6)
|
||||
#2 0x00007f4bbaf664af __libc_message (libc.so.6)
|
||||
#3 0x00007f4bbaf6da9c malloc_printerr (libc.so.6)
|
||||
#4 0x00007f4bbaf6f49c _int_free (libc.so.6)
|
||||
#5 0x000000000040120e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
|
||||
#6 0x00000000004013b1 n/a (/home/stephan/Dokumente/core_dump_example/coredump)
|
||||
#7 0x00007f4bbaf0d1a3 __libc_start_main (libc.so.6)
|
||||
#8 0x000000000040113e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
|
||||
Refusing to dump core to tty (use shell redirection or specify --output).
|
||||
```
|
||||
|
||||
This shows that the process was stopped by `SIGABRT`. The stack trace in this view is not very detailed because it does not include function names. However, with `coredumpctl debug`, you can simply open the dump file with a debugger ([GDB][8] by default). Type `bt` (short for backtrace) to get a more detailed view:
|
||||
|
||||
|
||||
```
|
||||
Core was generated by `./coredump -c1'.
|
||||
Program terminated with signal SIGABRT, Aborted.
|
||||
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
|
||||
50 return ret;
|
||||
(gdb) bt
|
||||
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
|
||||
#1 0x00007fc37a9aa8d9 in __GI_abort () at abort.c:79
|
||||
#2 0x00007fc37aa054af in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7fc37ab14f4b "%s\n") at ../sysdeps/posix/libc_fatal.c:181
|
||||
#3 0x00007fc37aa0ca9c in malloc_printerr (str=str@entry=0x7fc37ab130e0 "free(): invalid pointer") at malloc.c:5339
|
||||
#4 0x00007fc37aa0e49c in _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:4173
|
||||
#5 0x000000000040120e in freeSomething(void*) ()
|
||||
#6 0x0000000000401401 in main ()
|
||||
```
|
||||
|
||||
The memory addresses of `main()` and `freeSomething()` are quite low compared to those of the subsequent frames. Because shared objects are mapped to an area at the end of the virtual address space, you can assume that the `SIGABRT` was caused by a call into a shared library. Memory addresses of shared objects are not constant between invocations, so it is totally fine if you see varying addresses between runs.
|
||||
|
||||
The stack trace shows that subsequent calls originate from `malloc.c`, which indicates that something with memory (de-)allocation could have gone wrong.
|
||||
|
||||
In the source code, you can see (even without any knowledge of C++) that it tried to free a pointer, which was not returned by a memory management function. This results in undefined behavior and causes the `SIGABRT`:
|
||||
|
||||
|
||||
```
|
||||
void freeSomething(void *ptr){
    free(ptr);
}

int nTmp = 5;
int *ptrNull = &nTmp;
freeSomething(ptrNull);
|
||||
```
|
||||
|
||||
The systemd coredump utility can be configured under `/etc/systemd/coredump.conf`. Rotation and cleanup of dump files can be configured in `/etc/systemd/system/systemd-tmpfiles-clean.timer`.
|
||||
|
||||
You can find more information about `coredumpctl` on its [man page][10].
|
||||
|
||||
### Compiling with debug symbols
|
||||
|
||||
Open the `Makefile` and comment out the last part of line 9. It should now look like:
|
||||
|
||||
|
||||
```
|
||||
CFLAGS =-Wall -Werror -std=c++11 -g
|
||||
```
|
||||
|
||||
The `-g` switch enables the compiler to create debug information. Start the application, this time with the `-c2` switch:
|
||||
|
||||
|
||||
```
|
||||
./coredump -c2
|
||||
```
|
||||
|
||||
You will get a floating-point exception. Open the dump in GDB with:
|
||||
|
||||
|
||||
```
|
||||
coredumpctl debug
|
||||
```
|
||||
|
||||
This time, you are pointed directly to the line in the source code that caused the error:
|
||||
|
||||
|
||||
```
|
||||
Reading symbols from /home/stephan/Dokumente/core_dump_example/coredump…
|
||||
[New LWP 6218]
|
||||
Core was generated by `./coredump -c2'.
|
||||
Program terminated with signal SIGFPE, Arithmetic exception.
|
||||
#0 0x0000000000401233 in zeroDivide () at main.cpp:29
|
||||
29 nRes = 5 / nDivider;
|
||||
(gdb)
|
||||
```
|
||||
|
||||
Type `list` to get a better overview of the source code:
|
||||
|
||||
|
||||
```
|
||||
(gdb) list
|
||||
24 int zeroDivide(){
|
||||
25 int nDivider = 5;
|
||||
26 int nRes = 0;
|
||||
27 while(nDivider > 0){
|
||||
28 nDivider--;
|
||||
29 nRes = 5 / nDivider;
|
||||
30 }
|
||||
31 return nRes;
|
||||
32 }
|
||||
```
|
||||
|
||||
Use the command `info locals` to retrieve the values of the local variables from the point in time when the application failed:
|
||||
|
||||
|
||||
```
|
||||
(gdb) info locals
|
||||
nDivider = 0
|
||||
nRes = 5
|
||||
```
|
||||
|
||||
In combination with the source code, you can see that you ran into a division by zero:
|
||||
|
||||
|
||||
```
|
||||
nRes = 5 / 0
|
||||
```
|
||||
|
||||
### Conclusion
|
||||
|
||||
Knowing how to deal with dump files will help you find and fix hard-to-reproduce random bugs in an application. And if it is not your application, forwarding a core dump to the developer will help her or him find and fix the problem.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/linux-dump
|
||||
|
||||
作者:[Stephan Avenwedde][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/hansic99
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0 (Magnifying glass on code)
|
||||
[2]: https://en.wikipedia.org/wiki/POSIX
|
||||
[3]: https://man7.org/linux/man-pages/man7/signal.7.html
|
||||
[4]: https://www.gnu.org/software/libc/manual/html_node/Program-Error-Signals.html#Program-Error-Signals
|
||||
[5]: https://en.wikipedia.org/wiki/PDP-11
|
||||
[6]: https://opensource.com/sites/default/files/uploads/dump_written.png (Dump written)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://www.gnu.org/software/gdb/
|
||||
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
|
||||
[10]: https://man7.org/linux/man-pages/man1/coredumpctl.1.html
|
@@ -1,121 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (An attempt to make a font look more handwritten)
|
||||
[#]: via: (https://jvns.ca/blog/2020/08/08/handwritten-font/)
|
||||
[#]: author: (Julia Evans https://jvns.ca/)
|
||||
|
||||
An attempt to make a font look more handwritten
|
||||
======
|
||||
|
||||
I’m actually not super happy with the results of this experiment, but I wanted to share it anyway because it was very easy and fun to play with fonts. And somebody asked me how to do it and I told her I’d write a blog post about it :)
|
||||
|
||||
### background: the original handwritten font
|
||||
|
||||
Some background: I have a font of my handwriting that I’ve been using in my zines for a couple of years. I made it using a delightful app called [iFontMaker][1]. They pitch themselves on their website as “You can create your handmade typeface in less than 5 minutes just with your fingers”. In my experience the “5 minutes” part is pretty accurate – I might have spent more like 15 minutes. I’m skeptical of the “just your fingers” claim – I used an Apple Pencil, which has much better accuracy. But it is extremely easy to make a TTF font of your handwriting with the app, and if you happen to already have an Apple Pencil and iPad I think it’s a fun way to spend $7.99.
|
||||
|
||||
Here’s what my font looks like. The “CONNECT” text on the left is my actual handwriting, and the paragraph on the right is the font. There are actually 2 fonts – there’s a regular font and a handwritten “monospace” font (which actually isn’t monospace in practice; I haven’t figured out how to make an actual monospace font in iFontMaker).
|
||||
|
||||
![][2]
|
||||
|
||||
### the goal: have more character variation in the font
|
||||
|
||||
In the screenshot above, it’s pretty obvious that it’s a font and not actual handwriting. It’s easiest to see this when you have two of the same letter next to each other, like in “HTTP”.
|
||||
|
||||
So I thought it might be fun to use some OpenType features to somehow introduce a little more variation into this font, like maybe the two Ts could be different. I didn’t know how to do this though!
|
||||
|
||||
### idea from Tristan Hume: use OpenType!
|
||||
|
||||
Then I was at !!Con 2020 in May (all the [talk recordings are here!][3]) and saw this talk by Tristan Hume about using OpenType to place commas in big numbers by using a special font. His talk and blog post are both great so here are a bunch of links – the live demo is maybe the fastest way to see his results.
|
||||
|
||||
* a live demo: [Numderline Test][4]
|
||||
* the blog post: [Commas in big numbers everywhere: An OpenType adventure][5]
|
||||
* the talk: [!!Con 2020 - Using font shaping to put commas in big numbers EVERYWHERE!! by Tristan Hume][6]
|
||||
* the github repo: <https://github.com/trishume/numderline/blob/master/patcher.py>
|
||||
|
||||
|
||||
|
||||
### the main idea: OpenType lets you replace characters based on context
|
||||
|
||||
I started out being extremely confused about what OpenType even is. I still don’t know much, but I learned that you can write extremely simple OpenType rules to change how a font looks, and you don’t even have to really understand anything about fonts.
|
||||
|
||||
Here’s an example rule:
|
||||
|
||||
```
|
||||
sub a' b by other_a;
|
||||
```
|
||||
|
||||
What `sub a' b by other_a;` means is: If an `a` glyph is before a `b`, then replace the `a` with the glyph `other_a`.
|
||||
|
||||
So this means I can make `ab` appear different from `ac` in the font. It’s not random the way handwriting is, but it does introduce a little bit of variation.
|
||||
|
||||
### OpenType reference documentation: awesome
|
||||
|
||||
The best documentation I found for OpenType was this [OpenType™ Feature File Specification][7] reference. There are a lot of examples of cool things you can do in there, like replace “ffi” with a ligature.
|
||||
|
||||
### how to apply these rules: `fonttools`
|
||||
|
||||
Adding new OpenType rules to a font is extremely easy. There’s a Python library called `fonttools`, and these 5 lines of code will apply a list of OpenType rules (in `rules.fea`) to the font file `input.ttf`.
|
||||
|
||||
```
|
||||
from fontTools.ttLib import TTFont
|
||||
from fontTools.feaLib.builder import addOpenTypeFeatures
|
||||
|
||||
ft_font = TTFont('input.ttf')
|
||||
addOpenTypeFeatures(ft_font, 'rules.fea', tables=['GSUB'])
|
||||
ft_font.save('output.ttf')
|
||||
```
|
||||
|
||||
`fontTools` also provides a couple of command line tools called `ttx` and `fonttools`. `ttx` converts a TTF font into an XML file, which was useful to me because I wanted to rename some glyphs in my font but did not understand anything about fonts. So I just converted my font into an XML file, used `sed` to rename the glyphs, and then used `ttx` again to convert the XML file back into a `ttf`.
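A sketch of that round trip (the glyph names are made up for illustration):

```
$ ttx myfont.ttf                           # decompile to myfont.ttx (XML)
$ sed -i 's/glyph_a/other_a/g' myfont.ttx  # rename glyphs with ordinary text tools
$ ttx myfont.ttx                           # compile back to a TTF (ttx picks a non-clobbering name)
```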
|
||||
|
||||
`fonttools merge` let me merge my 3 handwriting fonts into 1 so that I had all the glyphs I needed in 1 file.
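That merge is a one-liner (the input names are placeholders; `merged.ttf` is fontTools’ default output name):

```
$ fonttools merge handwriting.ttf handwriting2.ttf handwriting3.ttf   # writes merged.ttf
```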
|
||||
|
||||
### the code
|
||||
|
||||
I put my extremely hacky code for doing this in a repository called [font-mixer][8]. It’s like 33 lines of code and I think it’s pretty straightforward. (it’s all in `run.sh` and `combine.py`)
|
||||
|
||||
### the results
|
||||
|
||||
Here’s a small sample of the old font and the new font. I don’t think the new font “feels” that much more like handwriting – there’s a little more variation, but it still doesn’t compare to actual handwritten text (at the bottom).
|
||||
|
||||
It feels a little uncanny valley to me, like it’s obviously still a font but it’s pretending to be something else.
|
||||
|
||||
![][9]
|
||||
|
||||
And here’s a sample of the same text actually written by hand:
|
||||
|
||||
![][10]
|
||||
|
||||
It’s possible that the results would be better if I was more careful about how I made the 2 other handwriting fonts I mixed the original font with.
|
||||
|
||||
### it’s cool that it’s so easy to add opentype rules!
|
||||
|
||||
Mostly what was delightful to me here is that it’s so easy to add OpenType rules to change how fonts work, like you can pretty easily make a font where the word “the” is always replaced with “teh” (typos all the time!).
|
||||
|
||||
I still don’t know how to make a more realistic handwriting font though :). I’m still using the old one (without the extra variations) and I’m pretty happy with it.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2020/08/08/handwritten-font/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://jvns.ca/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://2ttf.com/
|
||||
[2]: https://jvns.ca/images/font-sample-connect.png
|
||||
[3]: http://bangbangcon.com/recordings.html
|
||||
[4]: https://thume.ca/numderline/
|
||||
[5]: https://blog.janestreet.com/commas-in-big-numbers-everywhere/
|
||||
[6]: https://www.youtube.com/watch?v=Biqm9ndNyC8
|
||||
[7]: https://adobe-type-tools.github.io/afdko/OpenTypeFeatureFileSpecification.html
|
||||
[8]: https://github.com/jvns/font-mixer/
|
||||
[9]: https://jvns.ca/images/font-mixer-comparison.png
|
||||
[10]: https://jvns.ca/images/handwriting-sample.jpeg
|
@@ -1,222 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (robsean)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to use printf to format output)
|
||||
[#]: via: (https://opensource.com/article/20/8/printf)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
How to use printf to format output
|
||||
======
|
||||
Get to know printf, a mysterious, flexible, and feature-rich alternative
|
||||
to echo, print, and cout.
|
||||
![Person drinking a hot drink at the computer][1]
|
||||
|
||||
When I started learning Unix, I was introduced to the `echo` command pretty early in the process. Likewise, my initial [Python][2] lesson involved the `print` function. Picking up C++ and [Java][2] introduced me to `cout` and `System.out`. It seemed every language proudly had a convenient one-line method of producing output and advertised it like it was going out of style.
|
||||
|
||||
But once I turned the first page of intermediate lessons, I met `printf`, a cryptic, mysterious, and surprisingly flexible function. In going against the puzzling tradition of hiding `printf` from beginners, this article aims to introduce to the world the humble `printf` function and explain how it can be used in nearly any language.
|
||||
|
||||
### A brief history of printf
|
||||
|
||||
The term `printf` stands for "print formatted" and may have first appeared in the [Algol 68][3] programming language. Since its inclusion in C, `printf` has been reimplemented in C++, Java, Bash, PHP, and quite probably in whatever your favorite (post-C) language happens to be.
|
||||
|
||||
It's clearly popular, and yet many people seem to regard its syntax as complex, especially compared to alternatives such as `echo` or `print` or `cout`. For example, here's a simple echo statement in Bash:
|
||||
|
||||
|
||||
```
|
||||
$ echo hello
|
||||
hello
|
||||
$
|
||||
```
|
||||
|
||||
Here's the same result using `printf` in Bash:
|
||||
|
||||
|
||||
```
|
||||
$ printf "%s\n" hello
|
||||
hello
|
||||
$
|
||||
```
|
||||
|
||||
But you get a lot of features for that added complexity, and that's exactly why `printf` is well worth learning.
|
||||
|
||||
### printf output
|
||||
|
||||
The main concept behind `printf` is its ability to format its output based on style information _separate_ from the content. For instance, there is a collection of special sequences that `printf` recognizes as special characters. Your favorite language may have greater or fewer sequences, but common ones include:
|
||||
|
||||
* `\n`: New line
|
||||
* `\r`: Carriage return
|
||||
* `\t`: Horizontal tab
|
||||
* `\NNN`: A specific byte with an octal value containing one to three digits
|
||||
|
||||
|
||||
|
||||
For example:
|
||||
|
||||
|
||||
```
|
||||
$ printf "\t\123\105\124\110\n"
|
||||
SETH
|
||||
$
|
||||
```
|
||||
|
||||
In this Bash example, `printf` renders a tab character followed by the ASCII characters assigned to a string of four octal values. This is terminated with the control sequence to produce a new line (`\n`).
|
||||
|
||||
Attempting the same thing with `echo` produces something a little more literal:
|
||||
|
||||
|
||||
```
|
||||
$ printf "\t\123\105\124\110\n"
|
||||
\t\123\105\124\110\n
|
||||
$
|
||||
```
|
||||
|
||||
Using Python's `print` function for the same task reveals there's more to Python's `print` command than you might expect:
|
||||
|
||||
|
||||
```
|
||||
>>> print("\t\123\n")
|
||||
S
|
||||
|
||||
>>>
|
||||
```
|
||||
|
||||
Obviously, Python's `print` incorporates traditional `printf` features as well as the features of a simple `echo` or `cout`.
|
||||
|
||||
These examples contain nothing more than literal characters, though, and while they're useful in some situations, they're probably the least significant thing about `printf`. The true power of `printf` lies in format specification.
|
||||
|
||||
### Format output with printf
|
||||
|
||||
Format specifiers are characters preceded by a percent sign (`%`).
|
||||
Common ones include:
|
||||
|
||||
* `%s`: String
|
||||
* `%d`: Digit
|
||||
* `%f`: Floating-point number
|
||||
* `%o`: A number in octal
|
||||
|
||||
|
||||
|
||||
These are placeholders in a `printf` statement, which you can replace with a value you provide somewhere else in your `printf` statement. Where these values are provided depends on the language you're using and its syntax, but here's a simple example in Java:
|
||||
|
||||
|
||||
```
|
||||
string var="hello\n";
|
||||
system.out.printf("%s", var);
|
||||
```
|
||||
|
||||
This, wrapped in appropriate boilerplate code and executed, renders:
|
||||
|
||||
|
||||
```
|
||||
$ ./example
|
||||
hello
|
||||
$
|
||||
```
|
||||
|
||||
It gets even more interesting, though, when the content of a variable changes. Suppose you want to update your output based on an ever-increasing number:
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
|
||||
int main() {
|
||||
int var=0;
|
||||
while ( var < 100) {
|
||||
var++;
|
||||
printf("Processing is %d% finished.\n", var);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
Compiled and run:
|
||||
|
||||
|
||||
```
|
||||
Processing is 1% finished.
|
||||
[...]
|
||||
Processing is 100% finished.
|
||||
```
|
||||
|
||||
Notice that the double `%` in the code resolves to a single printed `%` symbol.

### Limiting decimal places with printf

Numbers can get complex, and `printf` offers many formatting options. You can limit how many decimal places are printed when using `%f` for floating-point numbers. By placing a dot (`.`) and a limiter number between the percent sign and the `f`, you tell `printf` how many decimals to render. Here's a simple example written in Bash for brevity:

```
$ printf "%.2f\n" 3.141519
3.14
$
```

Similar syntax applies to other languages. Here's an example in C (depending on your compiler, you may need to link the math library with `-lm`):

```
#include <math.h>
#include <stdio.h>

int main() {
    fprintf(stdout, "%.2f\n", 4 * atan(1.0));
    return 0;
}
```

For three decimal places, use `.3f`, and so on.
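
You can also combine a precision with a minimum field width, which is handy for lining up columns of numbers. Here's a small illustrative sketch in C (an addition for illustration, not from the original article):

```
#include <stdio.h>

int main() {
    double pi = 3.14159265;
    /* %10.3f: pad to at least 10 characters wide, 3 digits after the point */
    printf("[%10.3f]\n", pi);   /* prints [     3.142] */
    /* %-10.3f: the same, but left-justified within the field */
    printf("[%-10.3f]\n", pi);  /* prints [3.142     ] */
    return 0;
}
```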

### Adding commas to a number with printf

Since big numbers can be difficult to parse, it's common to break them up with a comma. You can have `printf` add commas as needed by placing an apostrophe (`'`) between the percent sign and the `d`:

```
$ printf "%'d\n" 1024
1,024
$ printf "%'d\n" 1024601
1,024,601
$
```
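
Note that the apostrophe flag is a POSIX extension, and the separator it prints depends on your locale. In C, that means a locale with digit grouping has to be active first. Here's a minimal sketch; the locale name is an assumption and must be installed on your system:

```
#include <locale.h>
#include <stdio.h>

int main() {
    /* Assumed locale; grouping only happens if the locale defines it */
    setlocale(LC_NUMERIC, "en_US.UTF-8");
    printf("%'d\n", 1024601);  /* prints 1,024,601 under an en_US locale */
    return 0;
}
```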

### Add leading zeros with printf

Another common use for `printf` is to impose a specific format upon numbers in file names. For instance, if you have 10 sequential files on a computer, the computer may sort `10.jpg` before `1.jpg`, which is probably not your intent. When writing to a file programmatically, you can use `printf` to form the file name with leading zero characters. Here's an example in Bash for brevity:

```
$ printf "%03d.jpg\n" {1..10}
001.jpg
002.jpg
[...]
010.jpg
```

Notice that each number is padded with leading zeros to a minimum width of three digits.
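
The same format works when generating file names from inside a program. Here's a short sketch in C using `snprintf`, the standard C function that formats into a buffer instead of printing directly:

```
#include <stdio.h>

int main() {
    char name[16];
    for (int i = 1; i <= 10; i++) {
        /* %03d pads each number with leading zeros to a width of three */
        snprintf(name, sizeof(name), "%03d.jpg", i);
        puts(name);
    }
    return 0;
}
```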

### Using printf

As you can tell from these `printf` examples, including control characters, especially `\n`, can be tedious, and the syntax is relatively complex. This is the reason shortcuts like `echo` and `cout` were developed. However, if you use `printf` every now and again, you'll get used to the syntax, and it will become second nature. I don't see any reason `printf` should be your _first_ choice for printing statements during everyday activities, but it's a great tool to be comfortable enough with that it won't slow you down when you need it.

Take some time to learn `printf` in your language of choice, and use it when you need it. It's a powerful tool you won't regret having at your fingertips.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/printf

Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hot drink at the computer)
[2]: https://opensource.com/resources/python
[3]: https://opensource.com/article/20/6/algol68
[4]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html
[5]: http://www.opengroup.org/onlinepubs/009695399/functions/atan.html

@ -1,154 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GNU on Windows with MinGW)
[#]: via: (https://opensource.com/article/20/8/gnu-windows-mingw)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Use GNU on Windows with MinGW
======

Install the GNU Compiler Collection and other GNU components to enable GNU Autotools on Windows.

![Windows][1]

If you're a hacker running Windows, you don't need a proprietary application to compile code. With the [Minimalist GNU for Windows][2] (MinGW) project, you can download and install the [GNU Compiler Collection][3] (GCC) along with several other essential GNU components to enable [GNU Autotools][4] on your Windows computer.

### Install MinGW

The easiest way to install MinGW is through mingw-get, a graphical user interface (GUI) application that helps you select which components to install and keep them up to date. To run it, [download `mingw-get-setup.exe`][5] from the project's host. Install it as you would any other EXE file by clicking through the installation wizard to completion.

![Installing mingw-get][6]

(Seth Kenlon, [CC BY-SA 4.0][7])

### Install GCC on Windows

So far, you've only installed an installer—or more accurately, a dedicated _package manager_ called mingw-get. Launch mingw-get to select which MinGW project applications you want to install on your computer.

First, select **mingw-get** from your application menu to launch it.

![Installing GCC with MinGW][8]

(Seth Kenlon, [CC BY-SA 4.0][7])

To install GCC, click the GCC and G++ package to mark the GNU C and C++ compilers for installation. To complete the process, select **Apply Changes** from the **Installation** menu in the top-left corner of the mingw-get window.

Once GCC is installed, you can run it from [PowerShell][9] using its full path:

```
PS> C:\MinGW\bin\gcc.exe --version
gcc.exe (MinGW.org GCC Build-x) x.y.z
Copyright (C) 2019 Free Software Foundation, Inc.
```

### Run Bash on Windows

While it calls itself "minimalist," MinGW also provides an optional [Bourne shell][10] command-line interpreter called MSYS (which stands for Minimal System). It's an alternative to Microsoft's `cmd.exe` and PowerShell, and it defaults to Bash. Aside from being one of the (justifiably) most popular shells, Bash is useful when porting open source applications to the Windows platform because many open source projects assume a [POSIX][11] environment.

You can install MSYS from the mingw-get GUI or from within PowerShell:

```
PS> mingw-get install msys
```

To try out Bash, launch it using its full path:

```
PS> C:\MinGW\msys/1.0/bin/bash.exe
bash.exe-$ echo $0
"C:\MinGW\msys/1.0/bin/bash.exe"
```

### Set the path on Windows

You probably don't want to have to type the full path for every command you want to use. Add the directory containing your new GNU executables to your path in Windows. There are two root directories of executables to add: one for MinGW (including GCC and its related toolchain) and another for MSYS (including Bash and many common tools from the GNU and [BSD][12] projects).

To modify your environment in Windows, click on the application menu and type `env`.

![Edit your env][13]

(Seth Kenlon, [CC BY-SA 4.0][7])

A Preferences window will open; click the **Environment variables** button located near the bottom of the window.

In the **Environment variables** window, double-click the **Path** selection from the bottom panel.

In the **Edit Environment variables** window that appears, click the **New** button on the right. Create a new entry reading **C:\MinGW\msys\1.0\bin** and click **OK**. Create a second new entry the same way, this one reading **C:\MinGW\bin**, and click **OK**.

![Set your env][14]

(Seth Kenlon, [CC BY-SA 4.0][7])

Accept these changes in each Preferences window. You can reboot your computer to ensure the new variables are detected by all applications, or just relaunch your PowerShell window.

From now on, you can call any MinGW command without specifying the full path, because the full path is in the `%PATH%` environment variable of your Windows system, which PowerShell inherits.

### Hello world

You're all set up now, so put your new MinGW system to a small test. If you're a [Vim][15] user, launch it, and enter this obligatory "hello world" code:

```
#include <stdio.h>
#include <iostream>

using namespace std;

int main() {
    cout << "Hello open source." << endl;
    return 0;
}
```

Save the file as `hello.cpp`, then compile it with the C++ component of GCC:

```
PS> g++ hello.cpp --output hello
```

And, finally, run it:

```
PS> .\hello.exe
Hello open source.
PS>
```

There's much more to MinGW than what I can cover here. After all, MinGW opens a whole world of open source and potential for custom code, so take advantage of it. For a wider world of open source, you can also [give Linux a try][16]. You'll be amazed at what's possible when all limits are removed. But in the meantime, give MinGW a try and enjoy the freedom of the GNU.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/gnu-windows-mingw

Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/more_windows.jpg?itok=hKk64RcZ (Windows)
[2]: http://mingw.org
[3]: https://gcc.gnu.org/
[4]: https://opensource.com/article/19/7/introduction-gnu-autotools
[5]: https://osdn.net/projects/mingw/releases/
[6]: https://opensource.com/sites/default/files/uploads/mingw-install.jpg (Installing mingw-get)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/mingw-packages.jpg (Installing GCC with MinGW)
[9]: https://opensource.com/article/19/8/variables-powershell
[10]: https://en.wikipedia.org/wiki/Bourne_shell
[11]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[12]: https://opensource.com/article/19/3/netbsd-raspberry-pi
[13]: https://opensource.com/sites/default/files/uploads/mingw-env.jpg (Edit your env)
[14]: https://opensource.com/sites/default/files/uploads/mingw-env-set.jpg (Set your env)
[15]: https://opensource.com/resources/what-vim
[16]: https://opensource.com/article/19/7/ways-get-started-linux

284
sources/tech/20200820 Learn the basics of programming with C.md
Normal file
@ -0,0 +1,284 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learn the basics of programming with C)
[#]: via: (https://opensource.com/article/20/8/c-programming-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Learn the basics of programming with C
======

Our new cheat sheet puts all the essentials of C syntax on an easy-to-read handout.

![Cheat Sheet cover image][1]

In 1972, Dennis Ritchie was at Bell Labs, where a few years earlier, he and his fellow team members had invented Unix. After creating an enduring OS (still in use today), he needed a good way to program those Unix computers so that they could perform new tasks. It seems strange now, but at the time, there were relatively few programming languages; Fortran, Lisp, [Algol][2], and B were popular but insufficient for what the Bell Labs researchers wanted to do. Demonstrating a trait that would become known as a primary characteristic of programmers, Dennis Ritchie created his own solution. He called it C, and nearly 50 years later, it's still in widespread use.

### Why you should learn C

Today, there are many languages that provide programmers more features than C. The most obvious one is C++, a rather blatantly named language that built upon C to create a nice object-oriented language. There are many others, though, and there's a good reason they exist. Computers are good at consistent repetition, so anything predictable enough to be built into a language means less work for programmers. Why spend two lines recasting an `int` to a `long` in C when one line of C++ (`long x = long(n);`) can do the same?

And yet C is still useful today.

First of all, C is a fairly minimal and straightforward language. There aren't many advanced concepts beyond the basics of programming, largely because C is literally one of the foundations of modern programming languages. For instance, C features arrays, but it doesn't offer a dictionary (unless you write it yourself). When you learn C, you learn the building blocks of programming that can help you recognize the improved and elaborate designs of recent languages.

Because C is a minimal language, your applications are likely to get a boost in performance that they wouldn't see with many other languages. It's easy to get caught up in the race to the bottom when you're thinking about how fast your code executes, so it's important to ask whether you _need_ more speed for a specific task. And with C, you have less to obsess over in each line of code, compared to, say, Python or Java. C is fast. There's a good reason the Linux kernel is written in C.

Finally, C is easy to get started with, especially if you're running Linux. You can already run C code because Linux systems include the GNU C library (`glibc`). To write and build it, all you need to do is install a compiler, open a text editor, and start coding.

### Getting started with C

If you're running Linux, you can install a C compiler using your package manager. On Fedora or RHEL:

```
$ sudo dnf install gcc
```

On Debian and similar:

```
$ sudo apt install build-essential
```

On macOS, you can [install Homebrew][3] and use it to install [GCC][4]:

```
$ brew install gcc
```

On Windows, you can install a minimal set of GNU utilities, GCC included, with [MinGW][5].

Verify you've installed GCC on Linux or macOS:

```
$ gcc --version
gcc (GCC) x.y.z
Copyright (C) 20XX Free Software Foundation, Inc.
```

On Windows, provide the full path to the EXE file:

```
PS> C:\MinGW\bin\gcc.exe --version
gcc.exe (MinGW.org GCC Build-2) x.y.z
Copyright (C) 20XX Free Software Foundation, Inc.
```

### C syntax

C isn't a scripting language. It's compiled, meaning that it gets processed by a C compiler to produce a binary executable file. This is different from a scripting language like [Bash][6] or a hybrid language like [Python][7].

In C, you create _functions_ to carry out your desired task. A function named `main` is executed by default.

Here's a simple "hello world" program written in C:

```
#include <stdio.h>

int main() {
    printf("Hello world");
    return 0;
}
```

The first line includes a _header file_, essentially free and very low-level C code that you can reuse in your own programs, called `stdio.h` (standard input and output). A function called `main` is created and populated with a rudimentary print statement. Save this text to a file called `hello.c`, then compile it with GCC:

```
$ gcc hello.c --output hello
```

Try running your C program:

```
$ ./hello
Hello world$
```

#### Return values

It's part of the Unix philosophy that a function "returns" something to you after it executes: nothing upon success and something else (an error message, for example) upon failure. These return codes are often represented with numbers (integers, to be precise): 0 represents nothing, and any number higher than 0 represents some non-successful state.

There's a good reason Unix and Linux are designed to expect silence upon success. It's so that you can always plan for success by assuming no errors nor warnings will get in your way when executing a series of commands. Similarly, functions in C expect no errors by design.

You can see this for yourself with one small modification to make your program appear to fail:

```
#include <stdio.h>

int main() {
    printf("Hello world");
    return 1;
}
```

Compile it:

```
$ gcc hello.c --output failer
```

Now run it using a built-in Linux test for success. The `&&` operator executes the second half of a command only upon success. For example:

```
$ echo "success" && echo "it worked"
success
it worked
```

The `||` test executes the second half of a command upon _failure_.

```
$ ls blah || echo "it did not work"
ls: cannot access 'blah': No such file or directory
it did not work
```

Now try your program, which does _not_ return 0 upon success; it returns 1 instead:

```
$ ./failer && echo "it worked"
Hello world$
```

The program executed successfully, yet did not trigger the second command.

#### Variables and types

In some languages, you can create variables without specifying what _type_ of data they contain. Those languages have been designed such that the interpreter runs some tests against a variable in an attempt to discover what kind of data it contains. For instance, Python knows that `var=1` defines an integer when you create an expression that adds `var` to something that is obviously an integer. It similarly knows that the word `world` is a string when you concatenate `hello` and `world`.

C doesn't do any of these investigations for you; you must define your variable type. There are several types of variables, including integers (int), characters (char), float, and Boolean.

You may also notice there's no string type. Unlike Python and Java and Lua and many others, C doesn't have a string type and instead sees strings as an array of characters.

Here's some simple code that establishes a `char` array variable, and then prints it to your screen using [printf][9] along with a short message:

```
#include <stdio.h>

int main() {
    char var[6] = "hello";
    printf("Your string is: %s\r\n", var);
    return 0;
}
```

You may notice that this code sample allows six characters for a five-letter word. This is because there's a hidden terminator at the end of the string, which takes up one byte in the array. You can run the code by compiling and executing it:

```
$ gcc hello.c --output hello
$ ./hello
Your string is: hello
```
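
If you want to see that hidden terminator for yourself, one quick check (a small sketch, not part of the original article) is to compare `strlen`, which stops at the terminator, with `sizeof`, which reports the whole array:

```
#include <stdio.h>
#include <string.h>

int main() {
    char var[6] = "hello";
    /* strlen counts characters up to, but not including, the terminator */
    printf("strlen: %zu\n", strlen(var));  /* prints 5 */
    /* sizeof reports the full array, terminator included */
    printf("sizeof: %zu\n", sizeof(var));  /* prints 6 */
    return 0;
}
```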

### Functions

As with other languages, C functions take optional parameters. You can pass parameters from one function to another by defining the type of data you want a function to accept:

```
#include <stdio.h>

void printmsg(char a[]) {
    printf("String is: %s\r\n", a);
}

int main() {
    char a[6] = "hello";
    printmsg(a);
    return 0;
}
```

The way this code sample breaks one function into two isn't very useful, but it demonstrates that `main` runs by default and how to pass data between functions.

### Conditionals

In real-world programming, you usually want your code to make decisions based on data. This is done with _conditional_ statements, and the `if` statement is one of the most basic of them.

To make this example program more dynamic, you can include the `string.h` header file, which contains code to examine (as the name implies) strings. Try testing whether the string passed to the `printmsg` function has a length greater than 0 by using the `strlen` function from the `string.h` file:

```
#include <stdio.h>
#include <string.h>

void printmsg(char a[]) {
    size_t len = strlen(a);
    if (len > 0) {
        printf("String is: %s\r\n", a);
    }
}

int main() {
    char a[6] = "hello";
    printmsg(a);
    return 0;
}
```

As implemented in this example, the sample condition will never be untrue because the string provided is always "hello," the length of which is always greater than 0. The final touch to this humble re-implementation of the `echo` command is to accept input from the user.

### Command arguments

Every C program's `main` function can receive two arguments when the program is launched: a count of how many items are contained in the command (`argc`) and an array containing each item (`argv`). For example, suppose you issue this imaginary command:

```
$ foo -i bar
```

The `argc` is three, and the contents of `argv` are:

* `argv[0] = foo`
* `argv[1] = -i`
* `argv[2] = bar`

Can you modify the example C program to accept `argv[2]` as the string instead of defaulting to `hello`?
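
Here's one possible solution, as a sketch rather than the article's official answer: check `argc` before touching `argv[2]`, and fall back to "hello" otherwise:

```
#include <stdio.h>
#include <string.h>

void printmsg(char a[]) {
    size_t len = strlen(a);
    if (len > 0) {
        printf("String is: %s\r\n", a);
    }
}

int main(int argc, char *argv[]) {
    /* argv[2] exists only if at least three items were on the command line */
    if (argc > 2) {
        printmsg(argv[2]);
    } else {
        printmsg("hello");
    }
    return 0;
}
```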

### Imperative programming

C is an imperative programming language. It isn't object-oriented, and it has no class structure. Using C can teach you a lot about how data is processed and how to better manage the data you generate as your code runs. Use C enough, and you'll eventually be able to write libraries that other languages, such as Python and Lua, can use.

To learn more about C, you need to use it. Look in `/usr/include/` for useful C header files, and see what small tasks you can do to make C useful to you. As you learn, use our [C cheat sheet][11] by [Jim Hall][12] of FreeDOS. It's got all the basics on one double-sided sheet, so you can immediately access all the essentials of C syntax while you practice.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/c-programming-cheat-sheet

Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP (Cheat Sheet cover image)
[2]: https://opensource.com/article/20/6/algol68
[3]: https://opensource.com/article/20/6/homebrew-mac
[4]: https://gcc.gnu.org/
[5]: https://opensource.com/article/20/8/gnu-windows-mingw
[6]: https://opensource.com/resources/what-bash
[7]: https://opensource.com/resources/python
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[9]: https://opensource.com/article/20/8/printf
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[11]: https://opensource.com/downloads/c-programming-cheat-sheet
[12]: https://opensource.com/users/jim-hall

@ -0,0 +1,177 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Recognize more devices on Linux with this USB ID Repository)
[#]: via: (https://opensource.com/article/20/8/usb-id-repository)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

Recognize more devices on Linux with this USB ID Repository
======

An open source project contains a public repository of all known IDs used in USB devices.

![Multiple USB plugs in different colors][1]

There are thousands of USB devices on the market—keyboards, scanners, printers, mice, and countless others that all work on Linux. Their vendor details are stored in the USB ID Repository.

### lsusb

The Linux `lsusb` command lists information about the USB devices connected to a system, but sometimes the information is incomplete. For example, I recently noticed that the brand of one of my USB devices was not recognized. The device was functional, but listing the details of my connected USB devices provided no identification information. Here is the output from my `lsusb` command:

```
$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 046d:082c Logitech, Inc.
Bus 001 Device 003: ID 0951:16d2 Kingston Technology
Bus 001 Device 002: ID 18f8:1486
Bus 001 Device 005: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```

As you can see in the last column, there is one device with no manufacturer description. To determine what the device is, I would have to do a deeper inspection of my USB device tree. Fortunately, the `lsusb` command has more options. One is `-D device`, to elicit per-device details, as the man page explains:

> "Do not scan the /dev/bus/usb directory, instead display only information about the device whose device file is given. The device file should be something like /dev/bus/usb/001/001. This option displays detailed information like the **v** option; you must be root to do this."

I didn't think it was easily apparent how to pass the device path to the lsusb command, but after carefully reading the man page and the initial output, I was able to determine how to construct it. USB devices reside in the UDEV filesystem. Their device path begins in the USB device directory `/dev/bus/usb/`. The rest of the path is made up of the device's Bus ID and Device ID. My nondescript device is Bus 001, Device 002, which translates to 001/002, and completes the path `/dev/bus/usb/001/002`. Now I can pass this path to `lsusb`. I'll also pipe to `more` since there is often quite a lot of information there:

```
$ lsusb -D /dev/bus/usb/001/002 |more
Device: ID 18f8:1486
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               1.10
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0         8
  idVendor           0x18f8
  idProduct          0x1486
  bcdDevice            1.00
  iManufacturer           0
  iProduct                1
  iSerial                 0
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           59
    bNumInterfaces          2
    bConfigurationValue     1
    iConfiguration          0
    bmAttributes         0xa0
      (Bus Powered)
      Remote Wakeup
    MaxPower              100mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           1
      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      1 Boot Interface Subclass
      bInterfaceProtocol      2 Mouse
      iInterface              0
        HID Device Descriptor:
```

Unfortunately, this didn't provide the detail I was hoping to find. The `idVendor` and `idProduct` fields show the same raw numbers as the initial output, with no human-readable names attached. There is some help, as scanning down a bit reveals the word **Mouse**. A-HA! So, this device is my mouse.

### The USB ID Repository

This made me wonder how I could populate these fields, not only for myself but also for other Linux users. It turns out there is already an open source project for this: the [USB ID Repository][2]. It is a public repository of all known IDs used in USB devices. It is also used in various programs, including the [USB Utilities][3], to display human-readable device names.

![The USB ID Repository Site][4]

(Alan Formy-Duval, [CC BY-SA 4.0][5])

You can browse the repository for particular devices either from the website or by downloading the database. Users are also welcome to submit new data. This is what I did for my mouse, which was absent.

### Update your USB IDs

The USB ID database is stored in a file called `usb.ids`. This location may vary depending on the Linux distribution.

On Ubuntu 18.04, this file is located in `/var/lib/usbutils`. To update the database, use the command `update-usbids`, which you need to run with root privileges or with `sudo`:

```
$ sudo update-usbids
```

If a new file is available, it will be downloaded. The current file will be backed up and replaced by the new one:

```
$ ls -la
total 1148
drwxr-xr-x  2 root root   4096 Jan 15 00:34 .
drwxr-xr-x 85 root root   4096 Nov  7 08:05 ..
-rw-r--r--  1 root root 614379 Jan  9 15:34 usb.ids
-rw-r--r--  1 root root 551472 Jan 15 00:34 usb.ids.old
```

Recent versions of Fedora Linux store the database file in `/usr/share/hwdata`. Also, there is no update script. Instead, the database is maintained in a package named `hwdata`:

```
# dnf info hwdata

Installed Packages
Name         : hwdata
Version      : 0.332
Release      : 1.fc31
Architecture : noarch
Size         : 7.5 M
Source       : hwdata-0.332-1.fc31.src.rpm
Repository   : @System
From repo    : updates
Summary      : Hardware identification and configuration data
URL          : https://github.com/vcrhonek/hwdata
License      : GPLv2+
Description  : hwdata contains various hardware identification and configuration data,
             : such as the pci.ids and usb.ids databases.
```
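
Since `usb.ids` is plain text, your own programs can read it, too. Vendor entries begin at the start of a line with a four-digit hexadecimal ID, and each device belonging to that vendor follows on a tab-indented line. Here's a minimal lookup sketch in C; the database path and the hard-coded ID are assumptions you'd adjust for your system:

```
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *vendor = "18f8";  /* assumed example ID */
    /* Fedora path; on Ubuntu 18.04 try /var/lib/usbutils/usb.ids */
    FILE *db = fopen("/usr/share/hwdata/usb.ids", "r");
    if (!db) {
        perror("usb.ids");
        return 1;
    }

    char line[256];
    int in_vendor = 0;
    while (fgets(line, sizeof(line), db)) {
        if (strncmp(line, vendor, 4) == 0) {
            printf("vendor: %s", line + 6);  /* name follows the ID and two spaces */
            in_vendor = 1;
        } else if (in_vendor && line[0] == '\t') {
            printf("device: %s", line + 1);  /* tab-indented device entry */
        } else {
            in_vendor = 0;  /* any other line ends this vendor's block */
        }
    }
    fclose(db);
    return 0;
}
```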

Now my USB device list shows a name next to this previously unnamed device. Compare this to the output above:

```
$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 046d:082c Logitech, Inc. HD Webcam C615
Bus 001 Device 003: ID 0951:16d2 Kingston Technology
Bus 001 Device 014: ID 18f8:1486 [Maxxter]
Bus 001 Device 005: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```

You may notice that other device descriptions change as the repository is regularly updated with new devices and details about existing ones.

### Submit new data

There are two ways to submit new data: by using the web interface or by emailing a specially formatted patch file. Before I began, I read through the submission guidelines. First, I had to register an account, and then I needed to use the project's submission system to provide my mouse's ID and name. The process is the same for adding any USB device.

Have you used the USB ID Repository? If so, please share your reaction in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/usb-id-repository

Author: [Alan Formy-Duval][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/usb-hardware.png?itok=ROPtNZ5V (Multiple USB plugs in different colors)
[2]: http://www.linux-usb.org/usb-ids.html
[3]: https://sourceforge.net/projects/linux-usb/files/
[4]: https://opensource.com/sites/default/files/uploads/theusbidrepositorysite.png (The USB ID Repository Site)
[5]: https://creativecommons.org/licenses/by-sa/4.0/