[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12543-1.html)
[#]: subject: (FreeFileSync: Open Source File Synchronization Tool)
[#]: via: (https://itsfoss.com/freefilesync/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

FreeFileSync: Open Source File Synchronization Tool
======

> FreeFileSync is an open source folder comparison and synchronization tool that you can use to back up data to an external disk, a cloud service such as Google Drive, or any other storage path.

### FreeFileSync: A Free and Open Source Synchronization Tool

![](https://img.linux.net.cn/data/attachment/album/202008/23/060523ubx28vyi8qf8sv9d.jpg)

[FreeFileSync][2] is an impressive open source tool that helps you back up your data to other locations.

Those locations can be an external USB disk, Google Drive, or any cloud storage reachable over an **SFTP or FTP** connection.

You may have read our tutorial on [how to use Google Drive on Linux][3] before. Unfortunately, there is no proper FOSS solution for using Google Drive natively on Linux. There is [Insync][4], but it is paid, proprietary software.

FreeFileSync can sync files using a Google Drive account. In fact, I use it to sync my files to Google Drive and to a separate hard disk.

### Features of FreeFileSync

![][1]

Even though FreeFileSync's UI may look dated, it offers many useful features for average users and advanced users alike.

Here are all the features I can highlight:

  * Cross-platform support (Windows, macOS, and Linux)
  * Compare folders before synchronizing
  * Supports Google Drive, [SFTP][6], and FTP connections
  * Ability to sync files on a different storage path (or an external storage device)
  * Multiple synchronization options available (update files to the target from the source, or mirror files between the target and the source)
  * Two-way synchronization supported (changes are synced if either the target folder or the source folder is modified)
  * Versioning available for advanced users
  * Real-time synchronization available
  * Ability to schedule batch jobs
  * Get notified via email when sync completes (paid)
  * Portable edition (paid)
  * Parallel file copying (paid)

If you take a look at the features it offers, it is not just an ordinary sync tool, but one that offers a lot more for free.

Additionally, to give you an idea, you can compare your files before syncing them. For example, you can compare file content/file time, or simply compare the file size of the source and target folders.

![][7]

You also have many synchronization options to mirror or update your data, as shown below:

![][8]

However, it also gives you the option of a donation key that unlocks some special features, such as notifying you via email when a sync completes, among other things.

Here is how the free edition differs from the paid edition:

![][9]

So, most of the essential features are available for free. The premium features are mostly aimed at advanced users and, of course, at anyone who wants to support the developers. (Please do, if you find it useful.)

Also, note that the donation edition can be used by a single user on up to 3 devices. So, that is definitely not bad!

### Installing FreeFileSync on Linux

You can head to its [official download page][10] and grab the tar.gz file for Linux. If you like, you can also download the source code.

![][11]

Next, you just need to extract the archive and run the executable to get started (as shown in the image above).

- [Download FreeFileSync][2]

### How to Get Started With FreeFileSync?

While I have not yet successfully tried creating an automatic sync job, it is quite easy to use.

The [official documentation][12] should be more than enough to get what you want out of it.

But, just to give you a head start, here are a few things you should keep in mind.

![][13]

As shown in the screenshot above, you simply select the source folder and the target folder to sync. You can choose a local folder or a cloud storage location.

Once done, you need to choose the type of folder comparison for the sync (usually by file time and size), and, on the right side, you can tweak the type of synchronization to perform.

#### Types of synchronization in FreeFileSync

When you choose the "Update" method of synchronization, it simply copies new data from the source folder to the target folder. So, even if you delete something from the source folder, it will not get deleted in the target folder.

If you want the target folder to have an identical copy of your files, you can choose the "Mirror" method of synchronization. This way, if you delete content from the source folder, it gets deleted from the target folder as well.

There is also a "Two way" method of synchronization, which detects changes in both the source and the target folder (instead of just watching the source folder). So, if any changes are made to the source/target folder, the modification will be synchronized.

For more advanced usage, I suggest you refer to the [documentation][12].

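The three synchronization methods described above can be summed up in a small, purely illustrative Java sketch. The class and method names are hypothetical (this is not FreeFileSync code), and only file names are tracked, ignoring timestamps and content:

```java
import java.util.HashSet;
import java.util.Set;

public class SyncModes {
    // "Update": copy everything from source into target; files deleted
    // from the source are NOT removed from the target.
    static Set<String> update(Set<String> source, Set<String> target) {
        Set<String> result = new HashSet<>(target);
        result.addAll(source);
        return result;
    }

    // "Mirror": the target becomes an exact copy of the source, so files
    // deleted from the source disappear from the target as well.
    static Set<String> mirror(Set<String> source, Set<String> target) {
        return new HashSet<>(source);
    }

    public static void main(String[] args) {
        Set<String> source = new HashSet<>(Set.of("a.txt", "b.txt"));
        Set<String> target = new HashSet<>(Set.of("b.txt", "old.txt"));
        System.out.println(update(source, target)); // still contains old.txt
        System.out.println(mirror(source, target)); // old.txt is gone
    }
}
```

A real two-way sync additionally has to remember the state of the previous run to tell a deletion on one side apart from a creation on the other, which is why it is listed as a separate mode.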
### Wrapping Up

There is another [open source file synchronization tool, Syncthing][14], that you might want to take a look at.

FreeFileSync is a quite underrated folder comparison and synchronization tool for Linux users who use Google Drive, SFTP, or FTP connections, along with separate storage locations, for backups.

And all of that comes with cross-platform support for Windows, macOS, and Linux, free of cost.

Isn't that exciting? Let me know your thoughts on FreeFileSync in the comments below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/freefilesync/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/free-file-sync.jpg?ssl=1
[2]: https://freefilesync.org/
[3]: https://itsfoss.com/use-google-drive-linux/
[4]: https://itsfoss.com/recommends/insync/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/FreeFileSync.jpg?ssl=1
[6]: https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-comparison.png?ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-synchronization.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/free-file-sync-donation-edition.jpg?ssl=1
[10]: https://freefilesync.org/download.php
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-run.jpg?ssl=1
[12]: https://freefilesync.org/manual.php
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-tips.jpg?ssl=1
[14]: https://itsfoss.com/syncthing/

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12536-1.html)
[#]: subject: (4 Mac terminal customizations even a curmudgeon can love)
[#]: via: (https://opensource.com/article/20/7/mac-terminal)
[#]: author: (Katie McLaughlin https://opensource.com/users/glasnt)

> Open source means I can find a familiar Linux in any terminal.

![](https://img.linux.net.cn/data/attachment/album/202008/21/002323xqslvqnnmdz487dq.jpg)

Ten years ago, I started my first job that required me to use Linux as the operating system on my laptop. I could have used a variety of Linux distributions, including Gentoo, but since I had briefly used Ubuntu in the past, I chose Ubuntu Lucid Lynx 10.04.

[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12555-1.html)
[#]: subject: (An example of very lightweight RESTful web services in Java)
[#]: via: (https://opensource.com/article/20/7/restful-services-java)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

An example of very lightweight RESTful web services in Java
======

> Explore lightweight RESTful services through a complete code example that manages a collection of books.

![](https://img.linux.net.cn/data/attachment/album/202008/27/071808tt9zlno3b6lmbgl8.jpg)

Web services, in one form or another, have been around for nearly two decades. For example, [XML-RPC services][2] appeared in the late 1990s, followed shortly by services written in the SOAP offshoot. Services in the REST architectural style also arrived about two decades ago, soon after the XML-RPC and SOAP trailblazers. [REST][4]-style (hereafter, Restful) services now dominate popular sites such as eBay, Facebook, and Twitter. Despite the alternatives to web services for distributed computing (e.g., web sockets, microservices, and new frameworks for remote procedure calls), Restful web services remain attractive for several reasons:

  * Restful services build upon existing infrastructure and protocols, in particular, web servers and the HTTP/HTTPS protocols. An organization that has HTML-based websites can readily add web services for clients interested more in the data and underlying functionality than in the HTML presentation. Amazon, for example, pioneered making the same information and functionality available through both websites and web services, either SOAP-based or Restful.
  * Restful services treat HTTP as an API, thereby avoiding the complicated software layering that has come to characterize SOAP-based web services. For example, the Restful API supports the standard CRUD (Create-Read-Update-Delete) operations through the HTTP verbs POST-GET-PUT-DELETE, respectively; HTTP status codes inform a requester whether a request succeeded or why it failed.
  * Restful web services can be as simple or as complicated as needed. Restful is a style, indeed a very flexible one, rather than a set of prescriptions about how services should be designed and structured. (An attendant downside is that it may be hard to determine what does not count as a Restful service.)
  * For a consumer or client, Restful web services are language- and platform-neutral. The client sends requests in HTTP(S) and receives text responses in a format suitable for modern data interchange (e.g., JSON).
  * Almost every general-purpose programming language has at least adequate (and often strong) support for HTTP/HTTPS, which means that web-service clients can be written in those languages.

This article explores lightweight Restful services in Java through a complete code example.

### The Restful novels web service

The Restful novels web service consists of three programmer-defined classes:

  * The `Novel` class represents a novel with just three properties: a machine-generated ID, an author, and a title. The properties could be expanded for greater realism, but I want to keep this example simple.
  * The `Novels` class consists of utilities for various tasks: converting a plain-text encoding of a `Novel`, or a list of them, into XML or JSON; supporting the CRUD operations on the novels collection; and initializing the collection from data stored in a file. The `Novels` class mediates between `Novel` instances and the servlet.
  * The `NovelsServlet` class derives from `HttpServlet`, a sturdy and flexible piece of software that has been around since the very early enterprise Java of the late 1990s. The servlet acts as an HTTP endpoint for client CRUD requests. The servlet code focuses on processing client requests and generating appropriate responses, leaving the devilish details to the utilities in the `Novels` class.

Some Java frameworks, such as Jersey (JAX-RS) and Restlet, are designed specifically for Restful services. Nonetheless, the `HttpServlet` on its own provides a lightweight, flexible, powerful, and well-tested API for delivering such services. I'll demonstrate this with the novels example below.

### Deploying the novels web service

Deploying the novels web service requires a web server, of course. My choice is [Tomcat][5], but the service should work (famous last words!) if it's hosted on, for example, Jetty or even a Java application server. [My website][6] has a README file that summarizes how to install Tomcat, together with the code. There is also a documented Apache Ant script that builds the novels service (or any other service or site) and deploys it under Tomcat or the equivalent.

Tomcat is available for download from its [website][7]. Once you install it locally, let `TOMCAT_HOME` be the install directory. There are two subdirectories of immediate interest:

  * The `TOMCAT_HOME/bin` directory contains startup and stop scripts for Unix-like systems (`startup.sh` and `shutdown.sh`) and Windows (`startup.bat` and `shutdown.bat`). Tomcat runs as a Java application. The web server's servlet container is named Catalina. (In Jetty, the web server and the container have the same name.) Once Tomcat starts, enter `http://localhost:8080/` in a browser to see extensive documentation, including examples.
  * The `TOMCAT_HOME/webapps` directory is the default directory for deployed websites and web services. The straightforward way to deploy a website or service is to copy a JAR file with a `.war` extension (that is, a WAR file) to `TOMCAT_HOME/webapps` or a subdirectory thereof. Tomcat then unpacks the WAR file into its own directory. For example, Tomcat would unpack `novels.war` into a subdirectory named `novels`, leaving `novels.war` in place. A site or service can be removed by deleting the WAR file and can be updated by overwriting the WAR file with a new version. By the way, the first step in debugging a website or service is to check that Tomcat has unpacked the WAR file; if not, the site or service was not published because of a fatal error in the code or configuration.
  * Because Tomcat listens by default on port 8080 for HTTP requests, a request URL for the local machine begins with `http://localhost:8080/`.

Access a programmer-deployed WAR file by adding the file's name without the `.war` extension:

```
http://localhost:8080/novels/
```

If the service was deployed to a subdirectory (e.g., `myapps`) under `TOMCAT_HOME`, this would be reflected in the URL:

```
http://localhost:8080/myapps/novels/
```

I'll offer more details about this in the testing section near the end of the article.

As mentioned, my homepage has a ZIP file with an Ant script that compiles and deploys a website or service. (A copy of `novels.war` is also included in the ZIP file.) For the novels example, a sample command (with `%` as the command-line prompt) is:

```
% ant -Dwar.name=novels deploy
```

This command first compiles the Java source code and then builds a deployable `novels.war` file, leaves this file in the current directory, and copies it to `TOMCAT_HOME/webapps`. If all goes well, a `GET` request (using a browser or a command-line utility such as `curl`) serves as a first test:

```
% curl http://localhost:8080/novels/
```

Tomcat is configured, by default, for hot deploys: the web server does not need to be shut down to deploy, update, or remove a web application.

### The novels service at the code level

Let's return to the novels example, but at the code level. Consider the `Novel` class below:

#### Example 1: The Novel class

```java
package novels;

import java.io.Serializable;

public class Novel implements Serializable, Comparable<Novel> {
    static final long serialVersionUID = 1L;
    private String author;
    private String title;
    private int id;

    public Novel() { }

    public void setAuthor(final String author) { this.author = author; }
    public String getAuthor() { return this.author; }
    public void setTitle(final String title) { this.title = title; }
    public String getTitle() { return this.title; }
    public void setId(final int id) { this.id = id; }
    public int getId() { return this.id; }

    public int compareTo(final Novel other) { return this.id - other.id; }
}
```

This class implements the `compareTo` method from the `Comparable` interface because `Novel` instances are stored in a thread-safe, unordered `ConcurrentHashMap`. In responding to requests to view the collection, the novels service sorts a collection (an `ArrayList`) extracted from the map; the implementation of `compareTo` imposes an ascending sort order by `Novel` ID.

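To see that `compareTo` does yield an ascending sort by ID, here is a small stand-alone demo. It uses a trimmed-down copy of the `Novel` class (just the fields and methods needed for sorting), so the demo class itself is hypothetical rather than part of the service:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Trimmed-down copy of the article's Novel class, just enough for sorting.
class Novel implements Comparable<Novel> {
    private final int id;
    Novel(int id) { this.id = id; }
    public int getId() { return id; }
    public int compareTo(Novel other) { return this.id - other.id; }
}

public class SortDemo {
    // Build Novels with the given IDs, sort them, and return the sorted IDs.
    static List<Integer> sortedIds(int... ids) {
        List<Novel> list = new ArrayList<>();
        for (int id : ids) list.add(new Novel(id));
        Collections.sort(list); // uses compareTo: ascending by ID
        List<Integer> result = new ArrayList<>();
        for (Novel n : list) result.add(n.getId());
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sortedIds(3, 1, 2)); // [1, 2, 3]
    }
}
```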
The `Novels` class contains various utility functions:

#### Example 2: The Novels utility class

```java
package novels;

import java.io.IOException;
import java.io.File;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.BufferedReader;
import java.nio.file.Files;
import java.util.stream.Stream;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.Collections;
import java.beans.XMLEncoder;
import javax.servlet.ServletContext; // not in JavaSE
import org.json.JSONObject;
import org.json.XML;

public class Novels {
    private final String fileName = "/WEB-INF/data/novels.db";
    private ConcurrentMap<Integer, Novel> novels;
    private ServletContext sctx;
    private AtomicInteger mapKey;

    public Novels() {
        novels = new ConcurrentHashMap<Integer, Novel>();
        mapKey = new AtomicInteger();
    }

    public void setServletContext(ServletContext sctx) { this.sctx = sctx; }
    public ServletContext getServletContext() { return this.sctx; }

    public ConcurrentMap<Integer, Novel> getConcurrentMap() {
        if (getServletContext() == null) return null; // not initialized
        if (novels.size() < 1) populate();
        return this.novels;
    }

    public String toXml(Object obj) { // default encoding
        String xml = null;
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            XMLEncoder encoder = new XMLEncoder(out);
            encoder.writeObject(obj);
            encoder.close();
            xml = out.toString();
        }
        catch(Exception e) { }
        return xml;
    }

    public String toJson(String xml) { // option for requester
        try {
            JSONObject jobt = XML.toJSONObject(xml);
            return jobt.toString(3); // 3 is indentation level
        }
        catch(Exception e) { }
        return null;
    }

    public int addNovel(Novel novel) {
        int id = mapKey.incrementAndGet();
        novel.setId(id);
        novels.put(id, novel);
        return id;
    }

    private void populate() {
        InputStream in = sctx.getResourceAsStream(this.fileName);
        // Convert novel.db string data into novels.
        if (in != null) {
            try {
                InputStreamReader isr = new InputStreamReader(in);
                BufferedReader reader = new BufferedReader(isr);

                String record = null;
                while ((record = reader.readLine()) != null) {
                    String[] parts = record.split("!");
                    if (parts.length == 2) {
                        Novel novel = new Novel();
                        novel.setAuthor(parts[0]);
                        novel.setTitle(parts[1]);
                        addNovel(novel); // sets the Id, adds to map
                    }
                }
                in.close();
            }
            catch (IOException e) { }
        }
    }
}
```

The most complicated method is `populate`, which reads from a text file contained in the deployed WAR file. The text file holds the initial collection of novels. To open the file, `populate` needs the `ServletContext`, a Java map that contains all of the critical information about the servlet embedded in the servlet container. The text file, in turn, contains records such as this:

```
Jane Austen!Persuasion
```

The line is parsed into two parts (author and title) separated by the bang symbol (`!`). The method then builds a `Novel` instance, sets the author and title properties, and adds the novel to the collection, which acts as an in-memory data store.

The `Novels` class also has utilities to encode the novels collection into XML or JSON, depending on the format that the requester prefers. XML is the default, but JSON is available on request. A lightweight XML-to-JSON package provides the JSON. Further details on the encoding are given below.

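The record parsing done inside `populate` can be checked on its own. Here is a stand-alone sketch (the `ParseDemo` class is hypothetical, but the `split("!")` call is exactly what the service uses):

```java
public class ParseDemo {
    // Split a "author!title" record the same way populate() does.
    static String[] parse(String record) {
        return record.split("!");
    }

    public static void main(String[] args) {
        String[] parts = parse("Jane Austen!Persuasion");
        System.out.println(parts[0]); // Jane Austen
        System.out.println(parts[1]); // Persuasion
    }
}
```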
#### Example 3: The NovelsServlet class

```java
package novels;

import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Arrays;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.beans.XMLEncoder;
import org.json.JSONObject;
import org.json.XML;

public class NovelsServlet extends HttpServlet {
    static final long serialVersionUID = 1L;
    private Novels novels; // back-end bean

    // Executed when servlet is first loaded into container.
    @Override
    public void init() {
        this.novels = new Novels();
        novels.setServletContext(this.getServletContext());
    }

    // GET /novels
    // GET /novels?id=1
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));

        // Check user preference for XML or JSON by inspecting
        // the HTTP headers for the Accept key.
        boolean json = false;
        String accept = request.getHeader("accept");
        if (accept != null && accept.contains("json")) json = true;

        // If no query string, assume client wants the full list.
        if (key == null) {
            ConcurrentMap<Integer, Novel> map = novels.getConcurrentMap();
            Object[] list = map.values().toArray();
            Arrays.sort(list);

            String payload = novels.toXml(list);        // defaults to Xml
            if (json) payload = novels.toJson(payload); // Json preferred?
            sendResponse(response, payload);
        }
        // Otherwise, return the specified Novel.
        else {
            Novel novel = novels.getConcurrentMap().get(key);
            if (novel == null) { // no such Novel
                String msg = key + " does not map to a novel.\n";
                sendResponse(response, novels.toXml(msg));
            }
            else { // requested Novel found
                if (json) sendResponse(response, novels.toJson(novels.toXml(novel)));
                else sendResponse(response, novels.toXml(novel));
            }
        }
    }

    // POST /novels
    @Override
    public void doPost(HttpServletRequest request, HttpServletResponse response) {
        String author = request.getParameter("author");
        String title = request.getParameter("title");

        // Are the data to create a new novel present?
        if (author == null || title == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Create a novel.
        Novel n = new Novel();
        n.setAuthor(author);
        n.setTitle(title);

        // Save the ID of the newly created Novel.
        int id = novels.addNovel(n);

        // Generate the confirmation message.
        String msg = "Novel " + id + " created.\n";
        sendResponse(response, novels.toXml(msg));
    }

    // PUT /novels
    @Override
    public void doPut(HttpServletRequest request, HttpServletResponse response) {
        /* A workaround is necessary for a PUT request because Tomcat does not
           generate a workable parameter map for the PUT verb. */
        String key = null;
        String rest = null;
        boolean author = false;

        /* Let the hack begin. */
        try {
            BufferedReader br =
                new BufferedReader(new InputStreamReader(request.getInputStream()));
            String data = br.readLine();
            /* To simplify the hack, assume that the PUT request has exactly
               two parameters: the id and either author or title. Assume, further,
               that the id comes first. From the client side, a hash character
               # separates the id and the author/title, e.g.,

               id=33#title=War and Peace
            */
            String[] args = data.split("#");      // id in args[0], rest in args[1]
            String[] parts1 = args[0].split("="); // id = parts1[1]
            key = parts1[1];

            String[] parts2 = args[1].split("="); // parts2[0] is key
            if (parts2[0].contains("author")) author = true;
            rest = parts2[1];
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }

        // If no key, then the request is ill formed.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Look up the specified novel.
        Novel p = novels.getConcurrentMap().get(Integer.valueOf((key.trim())));
        if (p == null) { // not found
            String msg = key + " does not map to a novel.\n";
            sendResponse(response, novels.toXml(msg));
        }
        else { // found
            if (rest == null) {
                throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
            }
            // Do the editing.
            else {
                if (author) p.setAuthor(rest);
                else p.setTitle(rest);

                String msg = "Novel " + key + " has been edited.\n";
                sendResponse(response, novels.toXml(msg));
            }
        }
    }

    // DELETE /novels?id=1
    @Override
    public void doDelete(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf((param.trim()));
        // Only one Novel can be deleted at a time.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
        try {
            novels.getConcurrentMap().remove(key);
            String msg = "Novel " + key + " removed.\n";
            sendResponse(response, novels.toXml(msg));
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }

    // Methods Not Allowed
    @Override
    public void doTrace(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doHead(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doOptions(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    // Send the response payload (Xml or Json) to the client.
    private void sendResponse(HttpServletResponse response, String payload) {
        try {
            OutputStream out = response.getOutputStream();
            out.write(payload.getBytes());
            out.flush();
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }
}
```

The `NovelsServlet` class above extends `HttpServlet`, which in turn extends `GenericServlet`, which implements the `Servlet` interface:

```java
NovelsServlet extends HttpServlet extends GenericServlet implements Servlet
```

As the name makes clear, `HttpServlet` is designed for servlets delivered over HTTP(S). The class provides empty methods named after the standard HTTP request verbs (officially, _methods_):

  * `doPost` (Post = Create)
  * `doGet` (Get = Read)
  * `doPut` (Put = Update)
  * `doDelete` (Delete = Delete)

Some other HTTP verbs are covered as well. An `HttpServlet` subclass, such as `NovelsServlet`, overrides the `do` methods of interest and leaves the others as no-ops. `NovelsServlet` overrides seven of the `do` methods.

Each of the `HttpServlet` CRUD methods takes the same two arguments. Here is `doPost` as an example:

```java
public void doPost(HttpServletRequest request, HttpServletResponse response) {
```

The `request` argument is a map of the HTTP request information, and the `response` provides an output stream back to the requester. A method such as `doPost` is structured as follows:

  * Read the `request` information, taking whatever action is appropriate to generate a response. If information is missing or otherwise deficient, generate an error.
  * Use the extracted request information to perform the appropriate CRUD operation (in this case, create a `Novel`) and then encode an appropriate response to the requester using the `response` output stream. In the case of `doPost`, the response is a confirmation that a new novel has been created and added to the collection. Once the response is sent, the output stream is closed, which closes the connection as well.

### A closer look at the method overrides

An HTTP request has a relatively simple structure. Here is a sketch in the familiar HTTP 1.1 format, with comments introduced by double hash marks:

```
GET /novels              ## start line
Host: localhost:8080     ## header element
Accept-type: text/plain  ## ditto
...
[body]                   ## POST and PUT only
```

The start line begins with the HTTP verb (in this case, `GET`) and the URI, which names the targeted resource (in this case, `novels`). The headers consist of key-value pairs, with a colon separating the key on the left from the value on the right. The header with key `Host` (case insensitive) is required; the hostname `localhost` is the symbolic address of the local machine, and the port number `8080` is the default port on which the Tomcat web server awaits HTTP requests. (By default, Tomcat listens on port 8443 for HTTPS requests.) The header elements can occur in arbitrary order. In this example, the `Accept-type` header's value is the MIME type `text/plain`.

Some requests (in particular, `POST` and `PUT`) have bodies, whereas others (in particular, `GET` and `DELETE`) do not. If there is a body (perhaps an empty one), two newlines separate the headers from the body; the HTTP body consists of key-value pairs. For bodyless requests, header elements, such as the query string, can be used to send information. Here is a request to `GET` the `/novels` resource with an ID of 2:

```
GET /novels?id=2
```

The query string starts with a question mark and, in general, consists of key-value pairs, although a key without a value is possible.

The `HttpServlet`, with methods such as `getParameter` and `getParameterMap`, nicely hides the distinction between HTTP requests with and without a body. In the novels example, the `getParameter` method is used to extract the required information from the `GET`, `POST`, and `DELETE` requests. (Handling a `PUT` request requires lower-level code because Tomcat does not provide a workable parameter map for `PUT` requests.) Here, for illustration, is a slice of the `doPost` method overridden in `NovelsServlet`:

```java
@Override
public void doPost(HttpServletRequest request, HttpServletResponse response) {
    String author = request.getParameter("author");
    String title = request.getParameter("title");
    ...
```

For a bodyless `DELETE` request, the approach is essentially the same:

```java
@Override
public void doDelete(HttpServletRequest request, HttpServletResponse response) {
    String param = request.getParameter("id"); // id of novel to be removed
    ...
```

The `doGet` method needs to distinguish between two flavors of a `GET` request: one flavor means "get all", whereas the other means "get one". If the `GET` request URL contains a query string whose key is an ID, then the request is interpreted as "get one":

```
http://localhost:8080/novels?id=2  ## GET specified
```

If there is no query string, the `GET` request is interpreted as "get all":

```
http://localhost:8080/novels       ## GET all
```

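The key-value lookup that `getParameter` performs on a query string can be illustrated with a tiny stand-alone sketch. The `QueryDemo` class and its `getParam` helper are hypothetical, not servlet code; real containers additionally handle URL decoding:

```java
public class QueryDemo {
    // Resolve a key in a query string such as "id=2" or "author=X&title=Y",
    // mimicking what getParameter returns: the value, or null if absent.
    static String getParam(String query, String key) {
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv[0].equals(key)) return kv.length == 2 ? kv[1] : "";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(getParam("id=2", "id"));     // 2
        System.out.println(getParam("id=2", "author")); // null
    }
}
```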
### Some devilish details

The design of the novels service reflects how a Java-based web server such as Tomcat works. At startup, Tomcat builds a thread pool from which request handlers are drawn, an approach known as the _one thread per request_ model. Modern versions of Tomcat also use non-blocking I/O to boost performance.

The novels service executes as a single instance of the `NovelsServlet` class, which in turn maintains a single collection of novels. Accordingly, a race condition would arise if, for example, these two requests were processed concurrently:

  * One request adds a new novel to the collection.
  * The other request gets all the novels in the collection.

The outcome is indeterminate, depending on exactly how the _read_ and _write_ operations overlap. To avoid this problem, the novels service uses the thread-safe `ConcurrentMap`. Keys for this map are generated with a thread-safe `AtomicInteger`. Here is the relevant code segment:

```java
public class Novels {
    private ConcurrentMap<Integer, Novel> novels;
    private AtomicInteger mapKey;
    ...
```

By default, a response to a client request is encoded as XML. The novels program uses the old-time `XMLEncoder` class for simplicity; a far richer option is the JAX-B library. The code is straightforward:

```java
public String toXml(Object obj) { // default encoding
    String xml = null;
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder encoder = new XMLEncoder(out);
        encoder.writeObject(obj);
        encoder.close();
        xml = out.toString();
    }
    catch(Exception e) { }
    return xml;
}
```

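As a quick stand-alone check of what `XMLEncoder` emits, here is a runnable sketch using the same encoding steps, outside the servlet, so no container is needed. The `EncodeDemo` class is hypothetical; encoding a `String` is the simplest case, and `Novel` instances are handled the same way:

```java
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

public class EncodeDemo {
    // Same encoding approach as the article's toXml utility.
    static String toXml(Object obj) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder encoder = new XMLEncoder(out);
        encoder.writeObject(obj);
        encoder.close();
        return out.toString();
    }

    public static void main(String[] args) {
        // Prints an XML document whose payload is <string>hello</string>.
        System.out.println(toXml("hello"));
    }
}
```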
The `Object` parameter is either a sorted `ArrayList` of novels (in response to a "get all" request), a single `Novel` instance (in response to a "get one" request), or a `String` (a confirmation message).

If an HTTP request header specifies JSON as the desired type, then the XML is converted to JSON. Here is the check in the `doGet` method of the `NovelsServlet`:

```java
String accept = request.getHeader("accept"); // "accept" is case insensitive
if (accept != null && accept.contains("json")) json = true;
```

The `Novels` class houses the `toJson` method, which converts XML to JSON:

```java
public String toJson(String xml) { // option for requester
    try {
        JSONObject jobt = XML.toJSONObject(xml);
        return jobt.toString(3); // 3 is indentation level
    }
    catch(Exception e) { }
    return null;
}
```

The `NovelsServlet` checks for errors of various types. For example, a `POST` request should include an author and a title for the new novel. If either is missing, the `doPost` method throws an exception:

```java
if (author == null || title == null)
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
```

The `SC` in `SC_BAD_REQUEST` stands for status code, and `BAD_REQUEST` has the standard HTTP numeric value of 400. If the HTTP verb in a request is `TRACE`, a different status code is returned:

```java
public void doTrace(HttpServletRequest request, HttpServletResponse response) {
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
}
```

### Testing the novels service

Testing a web service with a browser is awkward. Among the CRUD verbs, modern browsers generate only `POST` (Create) and `GET` (Read) requests. Even a `POST` request is tricky from a browser, as the key-value pairs for the body need to be included; this is typically done through an HTML form. A command-line utility such as [curl][21] is a better way to go, and some of the `curl` commands shown in this section are included in the ZIP file on my website.

Here are some sample tests, without the corresponding output:

```
% curl localhost:8080/novels/
% curl localhost:8080/novels?id=1
% curl --header "Accept: application/json" localhost:8080/novels/
```

The first command requests all the novels, which are encoded by default in XML. The second command requests the novel with an ID of 1, also encoded in XML. The last command adds an `Accept` header element with `application/json` as the desired MIME type. The "get one" command could also use this header element. These requests have JSON rather than XML responses.

The next two commands create a new novel in the collection and confirm the addition:

```
% curl --request POST --data "author=Tolstoy&title=War and Peace" localhost:8080/novels/
% curl localhost:8080/novels?id=4
```

A `PUT` command in `curl` resembles a `POST` command except that the `PUT` body does not use standard syntax. The documentation for the `doPut` method in the `NovelsServlet` goes into detail, but the short version is that Tomcat does not generate a proper parameter map on `PUT` requests. Here is a sample `PUT` command and a confirmation command:

```
% curl --request PUT --data "id=3#title=This is an UPDATE" localhost:8080/novels/
% curl localhost:8080/novels?id=3
```

The second command confirms that the collection has been updated.

Finally, the `DELETE` command works as expected:

```
% curl --request DELETE localhost:8080/novels?id=2
% curl localhost:8080/novels/
```

This request is for the novel with the ID of 2 to be deleted. The second command shows the remaining novels.

### The web.xml configuration file

Although it is officially optional, a `web.xml` configuration file is a mainstay in a production-grade website or service. The configuration file allows routing, security, and other features of a site or service to be specified independently of the implementation code. The configuration for the novels service handles routing by providing a URL pattern for requests dispatched to this service:

```xml
<?xml version = "1.0" encoding = "UTF-8"?>
<web-app>
  <servlet>
    <servlet-name>novels</servlet-name>
    <servlet-class>novels.NovelsServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>novels</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```

The `servlet-name` element provides an abbreviation (`novels`) for the servlet's fully qualified class name (`novels.NovelsServlet`), and this name is used in the `servlet-mapping` element below it.

Recall that a URL for a deployed service has the WAR file name right after the port number:

```
http://localhost:8080/novels/
```

The URI after the slash that follows the port number is the "path" to the requested resource, in this case, the novels service; hence, the term `novels` occurs after the first single slash.

In the `web.xml` file, the `url-pattern` is specified as `/*`, which means any path that starts with `/novels`. Suppose Tomcat encounters a contrived request URL, such as this:

```
http://localhost:8080/novels/foobar/
```

The `web.xml` configuration specifies that this request, too, should be dispatched to the novels servlet because the `/*` pattern also covers `/foobar`. The contrived URL, therefore, has the same result as the legitimate one shown above it.

A production-grade configuration file might include information on security, both wire-level and user-roles. Even in this case, the configuration file would be only about two or three times the size of the sample one.

### Wrapping up

The `HttpServlet` is at the center of Java's web technologies. A website or web service, such as the novels service, extends this class, overriding the `do` verbs of interest. A Restful framework such as Jersey (JAX-RS) or Restlet does essentially the same thing by providing a customized servlet, which then acts as the HTTP(S) endpoint for requests against a web application written in the framework.

A servlet-based application has access, of course, to any Java library required in the web application. If the application follows the separation-of-concerns principle, then the servlet code remains attractively simple: the code checks a request, issuing the appropriate error if there are deficiencies; otherwise, the code calls out for whatever functionality is required (e.g., querying a database, encoding a response in a specified format), and then sends the response to the requester. The `HttpServletRequest` and `HttpServletResponse` types make it easy to read the request and write the response.

Java's APIs range from the very simple to the highly complicated. If you need to deliver some Restful services using Java, my advice is to give the low-fuss `HttpServlet` a try before anything else.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/restful-services-java

Author: [Marty Kalin][a]
Topic selection: [lujun9972][b]
Translator: [Yufei-Yan](https://github.com/Yufei-Yan)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: http://xmlrpc.com/
[3]: https://en.wikipedia.org/wiki/Representational_state_transfer
[4]: https://www.redhat.com/en/topics/integration/whats-the-difference-between-soap-rest
[5]: http://tomcat.apache.org/
[6]: https://condor.depaul.edu/mkalin
[7]: https://tomcat.apache.org/download-90.cgi
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+serializable
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bytearrayoutputstream
[13]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstream
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstreamreader
[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bufferedreader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtimeexception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[21]: https://curl.haxx.se/

[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12533-1.html)
[#]: subject: (Debug Linux using ProcDump)
[#]: via: (https://opensource.com/article/20/7/procdump-linux)
[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe)

使用微软的 ProcDump 调试 Linux 进程
======

> 用这个微软的开源工具,获取进程信息。

![](https://img.linux.net.cn/data/attachment/album/202008/20/095646k5wz7cd11vyc7lhr.jpg)

微软越来越心仪 Linux 和开源,这并不是什么秘密。在过去几年中,该公司稳步地增加了对开源的贡献,包括将其部分软件和工具移植到 Linux。2018 年底,微软[宣布][2]将其 [Sysinternals][3] 的部分工具以开源的方式移植到 Linux,[Linux 版的 ProcDump][4]是其中的第一个。

如果你在 Windows 上从事过调试或故障排除工作,你可能听说过 Sysinternals,它是一个“瑞士军刀”工具集,可以帮助系统管理员、开发人员和 IT 安全专家监控和排除 Windows 环境的故障。

Sysinternals 最受欢迎的工具之一是 [ProcDump][5]。顾名思义,它用于将正在运行的进程的内存转储到磁盘上的一个核心文件中。然后可以用调试器对这个核心文件进行分析,了解转储时进程的状态。因为之前用过 Sysinternals,所以我很想试试 ProcDump 的 Linux 移植版。

在此情况下,每次运行 `procdump` 实用程序时,你都必须移动到 `bin/` 文件夹中。要使它在系统中的任何地方都可以使用,运行 `make install`。这将这个二进制文件复制到通常的 `bin/` 目录中,它是你的 shell `$PATH` 的一部分:

```
$ which procdump
```

当测试进程正在运行时,调用 `procdump` 并提供 PID。下面的输出表明了该进程的名称和 PID,并报告它生成了一个核心转储文件,并显示其文件名:

```
$ procdump -p 350498
```

### 你应该使用 ProcDump 还是 gcore?

有几种情况下,你可能更喜欢使用 ProcDump 而不是 gcore,ProcDump 有一些内置的功能,在一些情况下可能很有用。

#### 等待测试二进制文件的执行

ProcDump 立即检测到该二进制正在运行,并转储这个二进制的核心文件:

```
[03:39:23 - INFO]: Waiting for process 'progxyz' to launch...
```

作者:[Gaurav Kamathe][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12542-1.html)
[#]: subject: (5 open source IDE tools for Java)
[#]: via: (https://opensource.com/article/20/7/ide-java)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)

5 个开源的 Java IDE 工具
======

> Java IDE 工具提供了大量的方法来根据你的独特需求和偏好创建一个编程环境。

![](https://img.linux.net.cn/data/attachment/album/202008/22/235441wnnorcvo4olasv8o.jpg)

通过简化程序员的工作,[Java][2] 框架可以使他们的生活更加轻松。这些框架是为了在各种服务器环境上运行各种应用程序而设计开发的;这包括解析注解、扫描描述符、加载配置以及在 Java 虚拟机(JVM)上启动实际的服务等方面的动态行为。控制这么多的任务需要更多的代码,这就很难降低内存占用、加快新应用的启动时间。无论如何,根据 [TIOBE 指数][3],在当今使用的编程语言中 Java 一直排名前三,拥有着 700 万到 1000 万开发者的社区。

有这么多用 Java 编写的代码,这意味着有一些很好的集成开发环境(IDE)可供选择,可以为开发人员提供有效地编写、整理、测试和运行 Java 应用程序所需的所有工具。

下面,我将按字母顺序介绍五个我最喜欢的用于编写 Java 的开源 IDE 工具,以及如何配置它们的基本功能。

### BlueJ

[BlueJ][4] 为 Java 初学者提供了一个集成的教育性 Java 开发环境。它也可以使用 Java 开发工具包(JDK)开发小型软件。各种版本和操作系统的安装方式都可以在[这里][5]找到。

在笔记本电脑上安装 BlueJ IDE 后,启动一个新项目,点击<ruby>项目<rt>Project</rt></ruby>菜单中的<ruby>新项目<rt>New Project</rt></ruby>,然后从创建一个<ruby>新类<rt>New Class</rt></ruby>开始编写 Java 代码。生成的示例方法和骨架代码如下所示:

![BlueJ IDE screenshot][6]

BlueJ 不仅为学校的 Java 编程课的教学提供了一个交互式的图形用户界面(GUI),而且可以让开发人员在不编译源代码的情况下调用函数(即对象、方法、参数)。

### Eclipse

[Eclipse][7] 是桌面计算机上最著名的 Java IDE 之一,它支持 C/C++、JavaScript 和 PHP 等多种编程语言。它还允许开发者从 Eclipse 市场中添加无穷无尽的扩展,以获得更多的开发便利。[Eclipse 基金会][8]提供了一个名为 [Eclipse Che][9] 的 Web IDE,供 DevOps 团队在多个云平台上用托管的工作空间创建出一个敏捷软件开发环境。

[可以在这里下载][10];然后你可以创建一个新的项目或从本地目录导入一个现有的项目。在[本文][11]中找到更多 Java 开发技巧。

![Eclipse IDE screenshot][12]

### IntelliJ IDEA

[IntelliJ IDEA CE(社区版)][13]是 IntelliJ IDEA 的开源版本,为 Java、Groovy、Kotlin、Rust、Scala 等多种编程语言提供了 IDE。IntelliJ IDEA CE 在有经验的开发人员中也非常受欢迎,可以用它来对现有源码进行重构、代码检查、使用 JUnit 或 TestNG 构建测试用例,以及使用 Maven 或 Ant 构建代码。可在[这里][14]下载它。

IntelliJ IDEA CE 带有一些独特的功能;我特别喜欢它的 API 测试器。例如,如果你用 Java 框架实现了一个 REST API,IntelliJ IDEA CE 允许你通过 Swing GUI 设计器来测试 API 的功能。

![IntelliJ IDEA screenshot][15]

IntelliJ IDEA CE 是开源的,但其背后的公司也提供了一个商业的终极版。可以在[这里][16]找到社区版和终极版之间的更多差异。

### Netbeans IDE

[NetBeans IDE][17] 是一个 Java 的集成开发环境,它允许开发人员利用 HTML5、JavaScript 和 CSS 等支持的 Web 技术为独立、移动和网络架构制作模块化应用程序。NetBeans IDE 允许开发人员就如何高效管理项目、工具和数据设置多个视图,并帮助他们在新开发人员加入项目时使用 Git 集成进行软件协作开发。

从[这里][18]下载的二进制文件支持 Windows、macOS、Linux 等多个平台。在本地环境中安装了 IDE 工具后,新建项目向导可以帮助你创建一个新项目。例如,向导会生成骨架代码(有部分需要填写,如 `// TODO 代码应用逻辑在此`),然后你可以添加自己的应用代码。

### VSCodium

[VSCodium][19] 是一个轻量级、自由的源代码编辑器,允许开发者在 Windows、macOS、Linux 等各种操作系统平台上安装,是基于 [Visual Studio Code][20] 的开源替代品。其也是为支持包括 Java、C++、C#、PHP、Go、Python、.NET 在内的多种编程语言的丰富生态系统而设计开发的。Visual Studio Code 默认提供了调试、智能代码完成、语法高亮和代码重构功能,以提高开发的代码质量。

在其[资源库][21]中有很多下载项。当你运行 Visual Studio Code 时,你可以通过点击左侧活动栏中的“扩展”图标或按下 `Ctrl+Shift+X` 键来添加新的功能和主题。例如,当你在搜索框中输入 “quarkus” 时,就会出现 Visual Studio Code 的 Quarkus 工具,该扩展允许你[在 VS Code 中使用 Quarkus 编写 Java][22]:

![VSCodium IDE screenshot][23]

### 总结

Java 作为最广泛使用的编程语言和环境之一,这五种只是 Java 开发者可以使用的各种开源 IDE 工具的一小部分。可能很难知道哪一个是正确的选择。和以往一样,这取决于你的具体需求和目标 —— 你想实现什么样的工作负载(Web、移动应用、消息传递、数据交易),以及你将使用 IDE 扩展功能部署什么样的运行时(本地、云、Kubernetes、无服务器)。虽然丰富的选择可能会让人不知所措,但这也意味着你可能可以找到一个适合你的特殊情况和偏好的选择。

你有喜欢的开源 Java IDE 吗?请在评论中分享吧。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/ide-java

作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/resources/java
[3]: https://www.tiobe.com/tiobe-index/
[4]: https://www.bluej.org/about.html
[5]: https://www.bluej.org/versions.html
[6]: https://opensource.com/sites/default/files/uploads/5_open_source_ide_tools_to_write_java_and_how_you_begin_it.png (BlueJ IDE screenshot)
[7]: https://www.eclipse.org/ide/
[8]: https://www.eclipse.org/
[9]: https://opensource.com/article/19/10/cloud-ide-che
[10]: https://www.eclipse.org/downloads/
[11]: https://opensource.com/article/19/10/java-basics
[12]: https://opensource.com/sites/default/files/uploads/os_ide_2.png (Eclipse IDE screenshot)
[13]: https://www.jetbrains.com/idea/
[14]: https://www.jetbrains.org/display/IJOS/Download
[15]: https://opensource.com/sites/default/files/uploads/os_ide_3.png (IntelliJ IDEA screenshot)
[16]: https://www.jetbrains.com/idea/features/editions_comparison_matrix.html
[17]: https://netbeans.org/
[18]: https://netbeans.org/downloads/8.2/rc/
[19]: https://vscodium.com/
[20]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[21]: https://github.com/VSCodium/vscodium#downloadinstall
[22]: https://opensource.com/article/20/4/java-quarkus-vs-code
[23]: https://opensource.com/sites/default/files/uploads/os_ide_5.png (VSCodium IDE screenshot)
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12554-1.html)
[#]: subject: (Creating and debugging Linux dump files)
[#]: via: (https://opensource.com/article/20/8/linux-dump)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)

在 Linux 上创建并调试转储文件
======

> 了解如何处理转储文件将帮你找到应用中难以重现的 bug。

![](https://img.linux.net.cn/data/attachment/album/202008/26/234535rhnwdc783swgsbqw.jpg)

崩溃转储、内存转储、核心转储、系统转储……这些全都会产生同样的产物:一个文件,其中包含了应用崩溃时那个特定时刻应用的内存状态。

这是一篇指导文章,你可以通过克隆示例的应用仓库来跟随学习:

```
git clone https://github.com/hANSIc99/core_dump_example.git
```

### 信号如何关联到转储

信号是操作系统和用户应用之间的进程间通讯。Linux 使用 [POSIX 标准][2]中定义的信号。在你的系统上,你可以在 `/usr/include/bits/signum-generic.h` 找到标准信号的定义。如果你想了解更多关于在应用程序中使用信号的信息,可以参考内容丰富的 [signal 手册页][3]。简单地说,Linux 基于预期的或意外的信号来触发进一步的活动。

当你退出一个正在运行的应用程序时,应用程序通常会收到 `SIGTERM` 信号。因为这种类型的退出信号是预期的,所以这个操作不会创建一个内存转储。

以下信号将导致创建一个转储文件(来源:[GNU C 库][4]):

* `SIGFPE`:错误的算术操作
* `SIGILL`:非法指令
* `SIGSEGV`:对存储的无效访问
* `SIGBUS`:总线错误
* `SIGABRT`:程序检测到的错误,并通过调用 `abort()` 来报告
* `SIGIOT`:这个信号在 Fedora 上已经过时,过去在 [PDP-11][5] 上用 `abort()` 时触发,现在映射到 SIGABRT
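可以用一个小实验直观感受信号与进程终止的关系(示意性补充,非原文内容):当进程被信号终止时,POSIX shell 会以 “128 + 信号编号” 的退出状态来报告,例如 `SIGSEGV`(编号 11)对应 139。至于是否真的写出核心转储文件,则取决于下文介绍的 `ulimit` 设置。

```shell
# 让子 shell 给自己发送 SIGSEGV(信号编号 11)
sh -c 'kill -s SEGV $$'
# 父 shell 以 128+11=139 的退出状态报告子进程被信号终止
echo "exit status: $?"   # 预期输出:exit status: 139
```

对文中列出的其他信号(如 `SIGFPE`、`SIGABRT`)也可以做同样的实验,只是对应的编号和退出状态不同。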
### 创建转储文件

导航到 `core_dump_example` 目录,运行 `make`,并使用 `-c1` 开关执行该示例二进制:

```
./coredump -c1
```

该应用将以状态 4 退出,并报告如下错误:

![Dump written][6]

“Abgebrochen (Speicherabzug geschrieben)”(LCTT 译注:这是德语,因为本文作者的系统是德语环境)大致可以翻译为“已中止(核心已转储)”。

是否创建核心转储是由运行该进程的用户的资源限制决定的。你可以用 `ulimit` 命令修改资源限制。

检查当前创建核心转储的设置:

```
ulimit -c
```

如果它输出 `unlimited`,那么它使用的是(建议的)默认值。否则,用以下方法纠正限制:

```
ulimit -c unlimited
```

要禁用创建核心转储,可以设置其大小为 0:

```
ulimit -c 0
```

这个数字指定了核心转储文件的最大大小,单位是块。
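把上面几条命令串起来,一个典型的检查并调整限制的流程大致如下(示意:`ulimit` 修改的只是当前 shell 及其子进程的软限制,且不能超过 `ulimit -Hc` 显示的硬限制):

```shell
ulimit -c             # 查看当前软限制(0、具体块数或 unlimited)
ulimit -c 0           # 禁止生成核心转储
ulimit -c             # 此时输出 0
ulimit -c unlimited   # 重新放开(前提是硬限制允许)
```

这种调整只对当前会话生效;要持久化,通常需要修改 `/etc/security/limits.conf` 之类的系统配置。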
### 什么是核心转储?

内核处理核心转储的方式定义在:

```
/proc/sys/kernel/core_pattern
```

我运行的是 Fedora 31,在我的系统上,该文件包含的内容是:

```
/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
```

这表明核心转储被转发到 `systemd-coredump` 工具。在不同的 Linux 发行版中,`core_pattern` 的内容会有很大的不同。当使用 `systemd-coredump` 时,转储文件被压缩保存在 `/var/lib/systemd/coredump` 下。你不需要直接接触这些文件,你可以使用 `coredumpctl`。比如说:

```
coredumpctl list
```

会显示系统中保存的所有可用的转储文件。

使用 `coredumpctl dump`,你可以从最后保存的转储文件中检索信息:

```
[stephan@localhost core_dump_example]$ ./coredump
Application started…

(…….)

Message: Process 4598 (coredump) of user 1000 dumped core.

Stack trace of thread 4598:
#0 0x00007f4bbaf22625 __GI_raise (libc.so.6)
#1 0x00007f4bbaf0b8d9 __GI_abort (libc.so.6)
#2 0x00007f4bbaf664af __libc_message (libc.so.6)
#3 0x00007f4bbaf6da9c malloc_printerr (libc.so.6)
#4 0x00007f4bbaf6f49c _int_free (libc.so.6)
#5 0x000000000040120e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
#6 0x00000000004013b1 n/a (/home/stephan/Dokumente/core_dump_example/coredump)
#7 0x00007f4bbaf0d1a3 __libc_start_main (libc.so.6)
#8 0x000000000040113e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
Refusing to dump core to tty (use shell redirection or specify --output).
```

这表明该进程被 `SIGABRT` 停止。这个视图中的堆栈跟踪不是很详细,因为它不包括函数名。然而,使用 `coredumpctl debug`,你可以简单地用调试器(默认为 [GDB][8])打开转储文件。输入 `bt`(<ruby>回溯<rt>backtrace</rt></ruby>的缩写)可以得到更详细的视图:

```
Core was generated by `./coredump -c1'.
Program terminated with signal SIGABRT, Aborted.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 return ret;
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007fc37a9aa8d9 in __GI_abort () at abort.c:79
#2 0x00007fc37aa054af in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7fc37ab14f4b "%s\n") at ../sysdeps/posix/libc_fatal.c:181
#3 0x00007fc37aa0ca9c in malloc_printerr (str=str@entry=0x7fc37ab130e0 "free(): invalid pointer") at malloc.c:5339
#4 0x00007fc37aa0e49c in _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:4173
#5 0x000000000040120e in freeSomething(void*) ()
#6 0x0000000000401401 in main ()
```

与后续帧相比,`main()` 和 `freeSomething()` 的内存地址相当低。由于共享对象被映射到虚拟地址空间末尾的区域,可以认为 `SIGABRT` 是由共享库中的调用引起的。共享对象的内存地址在多次调用之间并不是恒定不变的,所以当你看到多次调用之间的地址不同时,完全可以认为是共享对象。

堆栈跟踪显示,后续的调用源于 `malloc.c`,这说明内存的(取消)分配可能出了问题。

在源代码中,(即使没有任何 C++ 知识)你也可以看到,它试图释放一个指针,而这个指针并没有被内存管理函数返回。这导致了未定义的行为,并导致了 `SIGABRT`。

```
void freeSomething(void *ptr){
    free(ptr);
}

int nTmp = 5;
int *ptrNull = &nTmp;
freeSomething(ptrNull);
```

systemd 的这个 `coredump` 工具可以在 `/etc/systemd/coredump.conf` 中配置。可以在 `/etc/systemd/systemd-tmpfiles-clean.timer` 中配置轮换清理转储文件。

你可以在其[手册页][10]中找到更多关于 `coredumpctl` 的信息。

### 用调试符号编译

打开 `Makefile` 并注释掉第 9 行的最后一部分。现在应该是这样的:

```
CFLAGS =-Wall -Werror -std=c++11 -g
```

`-g` 开关使编译器能够创建调试信息。启动应用程序,这次使用 `-c2` 开关。

```
./coredump -c2
```

你会得到一个浮点异常。在 GDB 中打开该转储文件:

```
coredumpctl debug
```

这一次,你会直接被指向源代码中导致错误的那一行:

```
Reading symbols from /home/stephan/Dokumente/core_dump_example/coredump…
[New LWP 6218]
Core was generated by `./coredump -c2'.
Program terminated with signal SIGFPE, Arithmetic exception.
#0 0x0000000000401233 in zeroDivide () at main.cpp:29
29 nRes = 5 / nDivider;
(gdb)
```

键入 `list` 以获得更好的源代码概览:

```
(gdb) list
24 int zeroDivide(){
25 int nDivider = 5;
26 int nRes = 0;
27 while(nDivider > 0){
28 nDivider--;
29 nRes = 5 / nDivider;
30 }
31 return nRes;
32 }
```

使用命令 `info locals` 从应用程序失败的时间点检索局部变量的值:

```
(gdb) info locals
nDivider = 0
nRes = 5
```

结合源码,可以看出,你遇到的是零除错误:

```
nRes = 5 / 0
```

### 结论

了解如何处理转储文件将帮助你找到并修复应用程序中难以重现的随机错误。而如果不是你的应用程序,将核心转储转发给开发人员将帮助她或他找到并修复问题。

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/linux-dump

作者:[Stephan Avenwedde][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0 (Magnifying glass on code)
[2]: https://en.wikipedia.org/wiki/POSIX
[3]: https://man7.org/linux/man-pages/man7/signal.7.html
[4]: https://www.gnu.org/software/libc/manual/html_node/Program-Error-Signals.html#Program-Error-Signals
[5]: https://en.wikipedia.org/wiki/PDP-11
[6]: https://opensource.com/sites/default/files/uploads/dump_written.png (Dump written)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://www.gnu.org/software/gdb/
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
[10]: https://man7.org/linux/man-pages/man1/coredumpctl.1.html
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12552-1.html)
[#]: subject: (Microsoft uses AI to boost its reuse, recycling of server parts)
[#]: via: (https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

微软利用 AI 提升服务器部件的重复使用和回收率
======

> 准备好在提到数据中心设备时,听到更多的“循环”一词。

![](https://img.linux.net.cn/data/attachment/album/202008/25/234108f8yz3c3la8xw18mn.jpg)

微软正在将人工智能引入到对数百万台服务器进行分类的任务中,以确定哪些部件可以回收,在哪里回收。

新计划要求在微软全球各地的数据中心建立所谓的“<ruby>循环中心<rt>Circular Center</rt></ruby>”,在那里,人工智能算法将用于从退役的服务器或其他硬件中分拣零件,并找出哪些零件可以在园区内重新使用。

微软表示,它的数据中心有超过 300 万台服务器和相关硬件,一台服务器的平均寿命约为 5 年。另外,微软正在全球范围内扩张,所以其服务器数量应该会增加。

循环中心就是要快速整理库存,而不是让过度劳累的员工疲于奔命。微软计划到 2025 年将服务器部件的重复使用率提高 90%。微软总裁 Brad Smith 在宣布这一举措的一篇[博客][2]中写道:“利用机器学习,我们将对退役的服务器和硬件进行现场处理。我们会将那些可以被我们以及客户重复使用和再利用的部件进行分类,或者出售。”

Smith 指出,如今,关于废物的数量、质量和类型,以及废物的产生地和去向,都没有一致的数据。例如,关于建筑和拆除废物的数据并不一致,我们需要一个标准化的方法、更好的透明度和更高的质量。

他写道:“如果没有更准确的数据,几乎不可能了解运营决策的影响、该设定什么目标、如何评估进展,也无从建立废物去向方法的行业标准。”

根据微软的说法,阿姆斯特丹数据中心的一个循环中心试点减少了停机时间,并增加了服务器和网络部件的可用性,供其自身再利用和供应商回购。它还降低了将服务器和硬件运输到处理设施的成本,从而降低了碳排放。

“<ruby>循环经济<rt>circular economy</rt></ruby>”一词正在科技界流行。它是基于服务器硬件的循环利用,将那些已经使用了几年但仍可用的设备重新投入到其他地方服务。ITRenew 是[我在几个月前介绍过][3]的一家二手超大规模服务器的转售商,它对这个词很感兴趣。

该公司表示,首批微软循环中心将建在新的主要数据中心园区或地区。它计划最终将这些中心添加到已经存在的园区中。

微软曾明确表示要在 2030 年之前实现“碳负排放”,而这只是其中几个项目之一。近日,微软宣布在其位于盐湖城的系统开发者实验室进行了一项测试,用一套 250kW 的氢燃料电池系统为一排服务器机架连续供电 48 小时,微软表示这是以前从未做过的事情。

微软首席基础设施工程师 Mark Monroe 在一篇[博客][4]中写道:“这是我们所知道的最大的以氢气运行的计算机备用电源系统,而且它的连续测试时间最长。”他说,近年来氢燃料电池的价格大幅下降,现在已经成为柴油发电机的可行替代品,但燃烧更清洁。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[2]: https://blogs.microsoft.com/blog/2020/08/04/microsoft-direct-operations-products-and-packaging-to-be-zero-waste-by-2030/
[3]: https://www.networkworld.com/article/3543810/for-sale-used-low-mileage-hyperscaler-servers.html
[4]: https://news.microsoft.com/innovation-stories/hydrogen-datacenters/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12545-1.html)
[#]: subject: (An attempt to make a font look more handwritten)
[#]: via: (https://jvns.ca/blog/2020/08/08/handwritten-font/)
[#]: author: (Julia Evans https://jvns.ca/)

一次让字体看起来更像手写体的尝试
======

![](https://img.linux.net.cn/data/attachment/album/202008/24/111019lzpc280kkvlfpv1p.jpg)

其实我对这个实验的结果并不是特别满意,但我还是想分享一下,因为摆弄字体是件非常简单和有趣的事情。而且有人问我怎么做,我告诉她我会写一篇博文来介绍一下 :)

### 背景:原本的手写体

先交代一些背景信息:我有一个我自己的手写字体,我已经在我的电子杂志中使用了好几年了。我用一个叫 [iFontMaker][1] 的令人愉快的应用程序制作了它。他们在网站上自诩为“你可以在 5 分钟内只用手指就能制作出你的手工字体”。根据我的经验,“5 分钟”的部分比较准确 —— 我可能花了更多的时间,比如 15 分钟。我对“只用手指”的说法持怀疑态度 —— 我用的是 Apple Pencil,它的精确度要好得多。但是,使用该应用程序制作你的笔迹的 TTF 字体是非常容易的,如果你碰巧已经有了 Apple Pencil 和 iPad,我认为这是一个有趣的方式,我只花了 7.99 美元。

下面是我的字体的样子。左边的“CONNECT”文字是我的实际笔迹,右边的段落是字体。其实有 2 种字体 —— 一种是普通字体,一种是手写的“等宽”字体。(其实它并不是真正的等宽,我还没有想好如何在 iFontMaker 中制作一个真正的等宽字体)

![][2]

### 目标:在字体上做更多的字符变化

在上面的截图中,很明显可以看出这是一种字体,而不是实际的笔迹。当你有两个相同的字母相邻时,就最容易看出来,比如“HTTP”。

所以我想,使用一些 OpenType 的功能,以某种方式为这个字体引入更多的变化,比如也许两个 “T” 可以是不同的。不过我不知道该怎么做!

### 来自 Tristan Hume 的主意:使用 OpenType!

然后我在 5 月份的 !!Con 2020 上(所有的[演讲录音都在这里!][3])看到了 Tristan Hume 的这个演讲:关于使用 OpenType 通过特殊的字体将逗号插入到大的数字中。他的演讲和博文都很棒,所以这里有一堆链接 —— 下面的现场演示也许是最快看到他的成果的方式。

* 一个现场演示: [Numderline 测试][4]
* 博客文章:[将逗号插入到大的数字的各个位置:OpenType 冒险][5]
* 谈话:[!!Con 2020 - 使用字体塑型,把逗号插入到大的数字的各个位置!][6]
* GitHub 存储库: https://github.com/trishume/numderline/blob/master/patcher.py

### 基本思路:OpenType 允许你根据上下文替换字符

我一开始对 OpenType 到底是什么非常困惑。目前我仍然不甚了然,但我知道你可以编写极其简单的 OpenType 规则来改变字体的外观,而且你甚至不需要真正了解字体。

下面是一个规则示例:

```
sub a' b by other_a;
```

这里 `sub a' b by other_a;` 的意思是:如果一个 `a` 字形是在一个 `b` 之前,那么替换 `a` 为字形 `other_a`。

所以这意味着我可以让 `ab` 和 `ac` 在字体中出现不同的字形。这并不像手写体那样随机,但它确实引入了一点变化。
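按照 OpenType 特性文件的语法,这类规则通常要包在一个特性块里才能生效。下面是一个示意性的 `rules.fea` 片段(把规则放进 `calt`(上下文替换)特性中;`other_a` 这个字形名是假设的,需要和字体里实际存在的字形名对应):

```
feature calt {
    sub a' b by other_a;
} calt;
```

`calt` 特性在大多数文本渲染器中默认开启,所以这样写的替换规则不需要用户手动启用。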
### OpenType 参考文档:真棒

我找到的最好的 OpenType 文档是这个 [OpenType™ 特性文件规范][7] 资料。里面有很多你可以做的很酷的事情的例子,比如用一个连字替换 “ffi”。

### 如何应用这些规则:fonttools

为字体添加新的 OpenType 规则是超容易的。有一个 Python 库叫 `fonttools`,这 5 行代码会把放在 `rules.fea` 中的 OpenType 规则列表应用到字体文件 `input.ttf` 中。

```
from fontTools.ttLib import TTFont
from fontTools.feaLib.builder import addOpenTypeFeatures

ft_font = TTFont('input.ttf')
addOpenTypeFeatures(ft_font, 'rules.fea', tables=['GSUB'])
ft_font.save('output.ttf')
```

`fontTools` 还提供了几个名为 `ttx` 和 `fonttools` 的命令行工具。`ttx` 可以将 TTF 字体转换为 XML 文件,这对我很有用,因为我想重新命名我的字体中的一些字形,但我对字体一无所知。所以我只是将我的字体转换为 XML 文件,使用 `sed` 重命名字形,然后再次使用 `ttx` 将 XML 文件转换回 `ttf`。
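作为示意(其中的 XML 片段和新字形名都是为演示而假设的,并非真实的 `ttx` 输出),“导出 XML → 用 `sed` 重命名字形 → 转回字体”中间的那一步大致是这样:

```shell
# 构造一个类似 ttx 导出内容的小 XML 片段
printf '<GlyphID id="1" name="a"/>\n<GlyphID id="2" name="b"/>\n' > font.xml
# 把字形 a 重命名为 a.alt(名字是假设的),其余内容原样保留
sed 's/name="a"/name="a.alt"/' font.xml > font-renamed.xml
cat font-renamed.xml
```

真实的 ttx XML 里,同一个字形名会出现在多个表中,所以用 `sed` 做全局替换恰好是省事的做法。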
`fonttools merge` 可以让我把我的 3 个手写字体合并成 1 个,这样我就在 1 个文件中得到了我需要的所有字形。

### 代码

我把我的极潦草的代码放在一个叫 [font-mixer][8] 的存储库里。它大概有 33 行代码,我认为它不言自明。(都在 `run.sh` 和 `combine.py` 中)

### 结果

下面是旧字体和新字体的小样。我不认为新字体的“感觉”更像手写体 —— 有更多的变化,但还是比不上实际的手写体文字(在下面)。

我觉得稍微有点不可思议,它明明还是一种字体,但它却要假装成不是字体:

![][9]

而这是实际手写的同样的文字的样本:

![][10]

如果我在制作另外 2 种手写字体的时候,把原来的字体混合在一起,再仔细一点,可能效果会更好。

### 添加 OpenType 规则这么容易,真酷!

这里最让我欣喜的是,添加 OpenType 规则来改变字体的工作方式是如此的容易,比如你可以很容易地做出一个“the”单词总是被“teh”代替的字体(让错别字一直留着!)。

不过我还是不知道如何做出更逼真的手写字体:)。我现在还在用旧的那个字体(没有额外的变化),我对它很满意。

--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2020/08/08/handwritten-font/

作者:[Julia Evans][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://2ttf.com/
[2]: https://jvns.ca/images/font-sample-connect.png
[3]: http://bangbangcon.com/recordings.html
[4]: https://thume.ca/numderline/
[5]: https://blog.janestreet.com/commas-in-big-numbers-everywhere/
[6]: https://www.youtube.com/watch?v=Biqm9ndNyC8
[7]: https://adobe-type-tools.github.io/afdko/OpenTypeFeatureFileSpecification.html
[8]: https://github.com/jvns/font-mixer/
[9]: https://jvns.ca/images/font-mixer-comparison.png
[10]: https://jvns.ca/images/handwriting-sample.jpeg
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-12539-1.html)
[#]: subject: (Merging and sorting files on Linux)
[#]: via: (https://www.networkworld.com/article/3570508/merging-and-sorting-files-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

合并和排序 Linux 上的文件
======

![](https://img.linux.net.cn/data/attachment/album/202008/22/102250i3943is48r34w4nz.jpg)

在 Linux 上合并和排序文本的方法有很多种,但如何去处理它取决于你试图做什么:你是只想将多个文件的内容放入一个文件中,还是以某种方式组织它,让它更易于使用。在本文中,我们将查看一些用于排序和合并文件内容的命令,并重点介绍结果有何不同。

### 使用 cat

如果你只想将一组文件放到单个文件中,那么 `cat` 命令是一个容易的选择。你所要做的就是输入 `cat`,然后按你希望它们在合并文件中的顺序在命令行中列出这些文件。将命令的输出重定向到要创建的文件。如果指定名称的文件已经存在,那么文件将被覆盖。例如:

```
$ cat firstfile secondfile thirdfile > newfile
```

如果要将一系列文件的内容添加到现有文件中,而不是覆盖它,只需将 `>` 变成 `>>`。

```
$ cat firstfile secondfile thirdfile >> updated_file
```

如果你要合并的文件遵循一些方便的命名约定,那么任务可能更简单。如果可以使用 shell 通配符指定所有文件名,那就不必列出所有文件。例如,如果文件全部以 `file` 结束,如上所示,你可以进行如下操作:

```
$ cat *file > allfiles
```

请注意,上面的命令将按字母数字顺序添加文件内容。在 Linux 上,一个名为 `filea` 的文件将排在名为 `fileA` 的文件的前面,但会在 `file7` 的后面。毕竟,当我们处理字母数字序列时,我们不仅需要考虑 `ABCDE`,还需要考虑 `0123456789aAbBcCdDeE`。你可以使用 `ls *file` 这样的命令来查看合并文件之前文件的顺序。

注意:首先确保你的命令包含合并文件中所需的所有文件,而不是其他文件,尤其是你使用 `*` 等通配符时。不要忘记,用于合并的文件仍将单独存在,在确认合并后,你可能想要删除这些文件。

### 按文件时间先后合并文件

如果要基于每个文件的修改时间而不是文件名来合并文件,请使用以下命令:

```
$ for file in `ls -tr myfile.*`; do cat $file >> BigFile.$$; done
```

使用 `-tr` 选项(`t` 表示按时间排序,`r` 表示逆序)会得到最旧的文件排在最前的文件列表。例如,如果你保留了某些活动的日志,并希望按活动发生的顺序添加内容,这就非常有用。

上面命令中的 `$$` 表示运行命令的 shell 的进程 ID。并非一定要使用它,但它能确保几乎不会无意间把内容追加到某个已有文件,而不是创建新文件。如果使用 `$$`,那么生成的文件可能如下所示:

```
$ ls -l BigFile.*
-rw-rw-r-- 1 justme justme 931725 Aug 6 12:36 BigFile.582914
```

### 合并和排序文件

Linux 提供了一些有趣的方式来对合并之前或之后的文件内容进行排序。

#### 按字母对内容进行排序

如果要对合并的文件内容进行排序,那么可以使用以下命令对整体内容进行排序:

```
$ cat myfile.1 myfile.2 myfile.3 | sort > newfile
```

如果要按文件对内容进行分组,请使用以下命令对每个文件进行排序,然后再将它添加到新文件中:

```
$ for file in `ls myfile.?`; do sort $file >> newfile; done
```

#### 对文件进行数字排序

要对文件内容进行数字排序,请在 `sort` 中使用 `-n` 选项。仅当文件中的行以数字开头时,此选项才有用。请记住,按照默认顺序,`02` 将小于 `1`。当你要确保行以数字排序时,请使用 `-n` 选项。

```
$ cat myfile.1 myfile.2 myfile.3 | sort -n > xyz
```

如果文件中的行以 `2020-11-03` 或 `2020/11/03`(年月日格式)这样的日期格式开头,`-n` 选项还能让你按日期对内容进行排序。其他格式的日期排序将非常棘手,并且将需要更复杂的命令。
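用几行命令就能直观看出 `-n` 的差别(示意,文件名是随意取的):字典序比较的是字符而不是数值,所以 “10” 会排在 “2” 之前。

```shell
printf '2\n10\n1\n' > nums.txt
sort nums.txt      # 字典序输出:1、10、2
sort -n nums.txt   # 数值序输出:1、2、10
```

GNU `sort` 还有 `-V`(版本号排序)等变体,处理 `1.2.10` 这类混合字符串时比 `-n` 更合适。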
### 使用 paste

`paste` 命令允许你逐行连接文件内容。使用此命令时,合并文件的第一行将包含要合并的每个文件的第一行。以下是示例,其中我使用了大写字母以便于查看行的来源:

```
$ cat file.a
A one
A two
A three

$ paste file.a file.b file.c
A one B one C one
A two B two C two
A three B three C thee
B four C four
C five
```

将输出重定向到另一个文件来保存它:

```
$ paste file.a file.b file.c > merged_content
```

或者,你可以将每个文件的内容在同一行中合并,然后将文件粘贴在一起。这需要使用 `-s`(序列)选项。注意这次的输出如何显示每个文件的内容:

```
$ paste -s file.a file.b file.c
A one A two A three
B one B two B three B four
C one C two C thee C four C five
```

### 使用 join

合并文件的另一个命令是 `join`。`join` 命令让你能基于一个共同字段合并多个文件的内容。例如,你可能有一个文件按姓名列出了一组同事的电话号码,另一个文件按姓名列出了他们的电子邮件地址。你可以使用 `join` 创建一个同时包含电话和电子邮件地址的文件。

一个重要的限制是,两个文件中的行必须顺序一致,并且每个文件中都要包含用于连接的字段。

这是一个示例命令:

```
$ join phone_numbers email_addresses
Sandra 555-456-1234 bugfarm@gmail.com
Pedro 555-540-5405
John 555-333-1234 john_doe@gmail.com
Nemo 555-123-4567 cutie@fish.com
```

在本例中,即使缺少附加信息,第一个字段(名字)也必须存在于每个文件中,否则命令会因错误而失败。对内容进行排序有帮助,而且可能更容易管理,但只要顺序一致,就不需要这么做。
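顺带一提,如果你无法保证每个名字都出现在两个文件里,`join` 的 `-a` 选项可以保留没有配对的行(下面的文件名和数据都是演示用的假设):

```shell
printf 'John 555-1234\nPedro 555-5678\n' > phones.txt
printf 'John john@example.com\n' > emails.txt
# -a 1:即使在 emails.txt 中找不到匹配项,也输出 phones.txt 里的行
join -a 1 phones.txt emails.txt
```

这样 Pedro 的行也会出现在输出中,只是没有电子邮件那一列;注意两个输入文件仍需按连接字段排好序。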
### 总结

在 Linux 上,你有很多可以合并和排序存储在单独文件中的数据的方式。这些方法可以使原本繁琐的任务变得异常简单。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570508/merging-and-sorting-files-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12538-1.html)
|
||||
[#]: subject: (Photoflare: An Open Source Image Editor for Simple Editing Needs)
|
||||
[#]: via: (https://itsfoss.com/photoflare/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
@ -10,7 +10,7 @@
|
||||
Photoflare:满足简单编辑需求的开源图像编辑器
|
||||
======
|
||||
|
||||
_**简介:Photoflare 是 Linux 和 Windows 上的图像编辑器。它有一个免费且开源的社区版本。**_
|
||||
> Photoflare 是一款可用于 Linux 和 Windows 上的图像编辑器。它有一个免费而开源的社区版本。
|
||||
|
||||
在 Linux 上编辑图像时,GIMP 显然是首选。但是,如果你不需要高级编辑功能,GIMP 可能会让人不知所措。这是像 Photoflare 这样的应用立足的地方。
|
||||
|
||||
@ -18,9 +18,9 @@ _**简介:Photoflare 是 Linux 和 Windows 上的图像编辑器。它有一
|
||||
|
||||
![][1]
|
||||
|
||||
Photoflare 是一个在简单易用的界面提供基本的图像编辑功能的编辑器。
|
||||
Photoflare 是一个在简单易用的界面里提供了基本的图像编辑功能的编辑器。
|
||||
|
||||
它受流行的 Windows 应用 [PhotoFiltre][2] 的启发。这个程序不是仅仅克隆,它是从头开始用 C++ 编写的,并使用 Qt 框架作为界面。
|
||||
它受流行的 Windows 应用 [PhotoFiltre][2] 的启发。这个程序不是一个克隆品,它是用 C++ 从头开始编写的,并使用 Qt 框架作为界面。
|
||||
|
||||
它的功能包括裁剪、翻转/旋转、调整图像大小。你还可以使用诸如油漆刷、油漆桶、喷雾罐、模糊工具和橡皮擦之类的工具。魔术棒工具可让你选择图像的特定区域。
|
||||
|
||||
@ -43,18 +43,16 @@ Photoflare 是一个在简单易用的界面提供基本的图像编辑功能的
|
||||
* 使用画笔、油漆桶、喷涂、模糊工具和橡皮擦等工具编辑图像
|
||||
* 在图像上添加线条和文字
|
||||
* 更改图像的色调
|
||||
* 添加老照滤镜
|
||||
* 添加老照片滤镜
|
||||
* 批量调整大小、滤镜等
|
||||
|
||||
|
||||
|
||||
### 在 Linux 上安装 Photoflare
|
||||
|
||||
![][5]
|
||||
|
||||
在 Photoflare 的网站上,你可以找到定价以及每月订阅的选项。但是,应用是开源的,它的[源码可在 GitHub 上找到][6]。
|
||||
在 Photoflare 的网站上,你可以找到定价以及每月订阅的选项。但是,该应用是开源的,它的[源码可在 GitHub 上找到][6]。
|
||||
|
||||
应用也是“免费”使用的。[定价/订购部分][7]用于项目的财务支持。你可以免费下载它,如果你喜欢该应用并且会继续使用,请考虑给它捐赠。
|
||||
应用也是“免费”使用的。[定价/订购部分][7]用于该项目的财务支持。你可以免费下载它,如果你喜欢该应用并且会继续使用,请考虑给它捐赠。
|
||||
|
||||
Photoflare 有[官方 PPA][8],适用于 Ubuntu 和基于 Ubuntu 的发行版。此 PPA 可用于 Ubuntu 18.04 和 20.04 版本。
|
||||
|
||||
@ -78,17 +76,17 @@ sudo apt remove photoflare
|
||||
sudo add-apt-repository -r ppa:photoflare/photoflare-stable
|
||||
```
|
||||
|
||||
**Arch Linux** 和 Manjaro 用户可以[从 AUR 获取][9]。
|
||||
Arch Linux 和 Manjaro 用户可以[从 AUR 获取][9]。
|
||||
|
||||
Fedora 没有现成的软件包,因此你需要获取源码:
|
||||
|
||||
[Photoflare source code][6]
|
||||
- [Photoflare 源代码][6]
|
||||
|
||||
### Photoflare 的经验
|
||||
|
||||
我发现它与 [Pinta][10] 有点相似,但功能更多。它是用于基本图像编辑的简单工具。批处理功能是加分。
|
||||
我发现它与 [Pinta][10] 有点相似,但功能更多。它是用于基本图像编辑的简单工具。批处理功能是加分项。
|
||||
|
||||
我注意到图像在打开编辑时看起来不清晰。我打开一张截图进行编辑,字体看起来很模糊。但是,保存图像并在[图像查看器][11]中打开后,没有显示此问题。
|
||||
我注意到图像在打开编辑时看起来不够清晰。我打开一张截图进行编辑,字体看起来很模糊。但是,保存图像并在[图像查看器][11]中打开后,没有显示此问题。
|
||||
|
||||
总之,如果你不需要专业级的图像编辑,它是一个不错的工具。
|
||||
|
||||
@ -101,7 +99,7 @@ via: https://itsfoss.com/photoflare/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -1,8 +1,8 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12540-1.html)
|
||||
[#]: subject: (Come test a new release of pipenv, the Python development tool)
|
||||
[#]: via: (https://fedoramagazine.org/come-test-a-new-release-of-pipenv-the-python-development-tool/)
|
||||
[#]: author: (torsava https://fedoramagazine.org/author/torsava/)
|
||||
@ -12,14 +12,13 @@
|
||||
|
||||
![][1]
|
||||
|
||||
**[Pipenv][2]** 是一个可帮助 Python 开发人员维护具有特定一组依赖关系的隔离虚拟环境,以实现可重复的开发和部署环境的工具。它类似于其他编程语言中的工具如 bundler、composer、npm、cargo、yarn 等。
|
||||
[pipenv][2] 是一个帮助 Python 开发人员维护具有一组特定依赖关系的隔离虚拟环境的工具,以实现可重现的开发和部署环境。它类似于其他编程语言中的 bundler、composer、npm、cargo、yarn 等工具。
|
||||
|
||||
|
||||
最近发布了新版本的 pipenv 2020.6.2。现在可以在 Fedora 33 和 Rawhide 中使用它。对于较旧的 Fedora,维护人员决定在 [COPR][3] 中打包,然后进行测试。因此,在稳定版 Fedora 中安装之前请先尝试一下。新版本没有带来任何新颖的功能,但是经过两年的开发,它解决了许多问题,并且在底层做了很多不同的事情。之前可以正常工作的应该可以继续工作,但是可能会略有不同。
|
||||
最近发布了新版本的 pipenv 2020.6.2,现在可以在 Fedora 33 和 Rawhide 中使用它。对于较旧的 Fedora,维护人员决定先将它打包到 [COPR][3] 中进行测试。所以,在它被推送到稳定的 Fedora 版本之前,来试试吧。新版本没有带来任何新颖的功能,但是经过两年的开发,它解决了许多问题,并且在底层做了很多不同的事情。之前可以正常工作的应该可以继续工作,但是可能会略有不同。
|
||||
|
||||
### 如何获取
|
||||
|
||||
如果你已经在运行 Fedora 33 或 Rawhide,请运行 _$ sudo dnf upgrade pipenv_ 或者 _$ sudo dnf install pipenv_,你将获得新版本。
|
||||
如果你已经在运行 Fedora 33 或 Rawhide,请运行 `$ sudo dnf upgrade pipenv` 或者 `$ sudo dnf install pipenv`,你将获得新版本。
|
||||
|
||||
在 Fedora 31 或 Fedora 32 上,你需要使用 [copr 仓库][3],直到经过测试的包出现在官方仓库中为止。要启用仓库,请运行:
|
||||
|
||||
@ -27,7 +26,7 @@
|
||||
$ sudo dnf copr enable @python/pipenv
|
||||
```
|
||||
|
||||
然后将 pipenv 升级到新版本,运行:
|
||||
然后将 `pipenv` 升级到新版本,运行:
|
||||
|
||||
```
|
||||
$ sudo dnf upgrade pipenv
|
||||
@ -46,13 +45,13 @@ $ sudo dnf copr disable @python/pipenv
|
||||
$ sudo dnf distro-sync pipenv
|
||||
```
|
||||
|
||||
_COPR 不受 Fedora 基础架构的官方支持。使用软件包需要你自担风险。_
|
||||
*COPR 不受 Fedora 基础架构的官方支持。使用软件包需要你自担风险。*
|
||||
|
||||
### 如何使用
|
||||
|
||||
如果你有用旧版本 pipenv 管理的项目,你应该可以毫无问题地使用新版本。让我们知道是否有问题。
|
||||
如果你有用旧版本 `pipenv` 管理的项目,你应该可以毫无问题地使用新版本。如果有问题请让我们知道。
|
||||
|
||||
如果你还不熟悉 pipenv 或想开始一个新项目,请参考以下快速指南:
|
||||
如果你还不熟悉 `pipenv` 或想开始一个新项目,请参考以下快速指南:
|
||||
|
||||
创建一个工作目录:
|
||||
|
||||
@ -60,7 +59,7 @@ _COPR 不受 Fedora 基础架构的官方支持。使用软件包需要你自担
|
||||
$ mkdir new-project && cd new-project
|
||||
```
|
||||
|
||||
使用 Python 3 初始化 pipenv:
|
||||
使用 Python 3 初始化 `pipenv`:
|
||||
|
||||
```
|
||||
$ pipenv --three
|
||||
@ -72,13 +71,13 @@ $ pipenv --three
|
||||
$ pipenv install six
|
||||
```
|
||||
|
||||
生成 Pipfile.lock 文件:
|
||||
生成 `Pipfile.lock` 文件:
|
||||
|
||||
```
|
||||
$ pipenv lock
|
||||
```
|
||||
|
||||
现在,你可以将创建的 Pipfile 和 Pipfile.lock 文件提交到版本控制系统(例如 git)中,其他人可以在克隆的仓库中使用此命令来获得相同的环境:
|
||||
现在,你可以将创建的 `Pipfile` 和 `Pipfile.lock` 文件提交到版本控制系统(例如 git)中,其他人可以在克隆的仓库中使用此命令来获得相同的环境:
|
||||
|
||||
```
|
||||
$ pipenv install
|
||||
@ -88,7 +87,7 @@ $ pipenv install
|
||||
|
||||
### 如何报告问题
|
||||
|
||||
如果你使用新版本的 pipenv 遇到任何问题,请[在 Fedora 的 Bugzilla中 报告问题][5]。Fedora 官方仓库和 copr 仓库中 pipenv 软件包的维护者是相同的。请在报告中指出是新版本。
|
||||
如果你使用新版本的 `pipenv` 遇到任何问题,请[在 Fedora 的 Bugzilla 中报告问题][5]。Fedora 官方仓库和 COPR 仓库中 `pipenv` 软件包的维护者是相同的人。请在报告中指出这是新版本的问题。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -97,7 +96,7 @@ via: https://fedoramagazine.org/come-test-a-new-release-of-pipenv-the-python-dev
|
||||
作者:[torsava][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
141
published/20200817 Use GNU on Windows with MinGW.md
Normal file
@ -0,0 +1,141 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12549-1.html)
|
||||
[#]: subject: (Use GNU on Windows with MinGW)
|
||||
[#]: via: (https://opensource.com/article/20/8/gnu-windows-mingw)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
使用 MinGW 在 Windows 上使用 GNU
|
||||
======
|
||||
|
||||
> 在 Windows 上安装 GNU 编译器集合(gcc)和其他 GNU 组件来启用 GNU Autotools。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/25/085619rr331p13shpt6htp.jpg)
|
||||
|
||||
如果你是一名使用 Windows 的黑客,你不需要专有应用来编译代码。借助 [Minimalist GNU for Windows][2](MinGW)项目,你可以下载并安装 [GNU 编译器集合(GCC)][3]以及其它几个基本的 GNU 组件,以在 Windows 计算机上启用 [GNU Autotools][4]。
|
||||
|
||||
### 安装 MinGW
|
||||
|
||||
安装 MinGW 的最简单方法是通过 mingw-get,它是一个图形用户界面 (GUI) 应用,可帮助你选择要安装哪些组件,并让它们保持最新。要运行它,请从项目主页[下载 mingw-get-setup.exe][5]。像你安装其他 EXE 一样,在向导中单击完成安装。
|
||||
|
||||
![Installing mingw-get][6]
|
||||
|
||||
### 在 Windows 上安装 GCC
|
||||
|
||||
目前为止,你只安装了一个程序,或者更准确地说,一个称为 mingw-get 的专用的*包管理器*。启动 mingw-get 选择要在计算机上安装的 MinGW 项目应用。
|
||||
|
||||
首先,从应用菜单中选择 mingw-get 启动它。
|
||||
|
||||
![Installing GCC with MinGW][8]
|
||||
|
||||
要安装 GCC,请单击 GCC 和 G++ 包来标记要安装 GNU C、C++ 编译器。要完成此过程,请从 mingw-get 窗口左上角的**安装**菜单中选择**应用更改**。
|
||||
|
||||
安装 GCC 后,你可以使用完整路径在 [PowerShell][9] 中运行它:
|
||||
|
||||
```
|
||||
PS> C:\MinGW\bin\gcc.exe --version
|
||||
gcc.exe (MinGW.org GCC Build-x) x.y.z
|
||||
Copyright (C) 2019 Free Software Foundation, Inc.
|
||||
```
|
||||
|
||||
### 在 Windows 上运行 Bash
|
||||
|
||||
虽然它自称 “minimalist”(最小化),但 MinGW 还提供了一个可选的 [Bourne shell][10] 命令行解释器,称为 MSYS(代表<ruby>最小系统<rt>Minimal System</rt></ruby>),其默认 shell 是 Bash,可以替代微软的 `cmd.exe` 和 PowerShell。除了是(自然而然的)最流行的 shell 之一外,Bash 在将开源应用移植到 Windows 平台时很有用,因为许多开源项目都假定了 [POSIX][11] 环境。
|
||||
|
||||
你可以在 mingw-get GUI 或 PowerShell 内安装 MSYS:
|
||||
|
||||
```
|
||||
PS> mingw-get install msys
|
||||
```
|
||||
|
||||
要尝试 Bash,请使用完整路径启动它:
|
||||
|
||||
```
|
||||
PS> C:\MinGW\msys/1.0/bin/bash.exe
|
||||
bash.exe-$ echo $0
|
||||
"C:\MinGW\msys/1.0/bin/bash.exe"
|
||||
```
|
||||
|
||||
### 在 Windows 上设置路径
|
||||
|
||||
你可能不希望为每个要使用的命令都输入完整路径,那就把包含新的 GNU 可执行文件的目录添加到 Windows 的路径中。需要添加两个可执行文件目录:一个用于 MinGW(包括 GCC 及其相关工具链),另一个用于 MSYS(包括 Bash 以及来自 GNU 和 [BSD][12] 项目的许多常用工具)。
|
||||
|
||||
若要在 Windows 中修改环境,请单击应用菜单并输入 `env`。
|
||||
|
||||
![Edit your env][13]
|
||||
|
||||
这将打开“首选项”窗口。点击窗口底部附近的“环境变量”按钮。
|
||||
|
||||
在“环境变量”窗口中,双击底部面板中的“路径”选区。
|
||||
|
||||
在“编辑环境变量”窗口中,单击右侧的“新增”按钮。创建一个新条目 `C:\MinGW\msys\1.0\bin`,然后单击“确定”。以相同的方式创建第二个条目 `C:\MinGW\bin`,然后单击“确定”。
|
||||
|
||||
![Set your env][14]
|
||||
|
||||
在每个首选项窗口中接受这些更改。你可以重启计算机以确保所有应用都检测到新变量,或者只需重启 PowerShell 窗口。
|
||||
|
||||
从现在开始,你可以调用任何 MinGW 命令而不指定完整路径,因为完整路径位于 PowerShell 继承的 Windows 系统的 `%PATH%` 环境变量中。
|
||||
|
||||
### Hello world
|
||||
|
||||
你已经完成设置,因此可以对新的 MinGW 系统进行小测试。如果你是 [Vim][15] 用户,请启动它,然后输入下面的 “hello world” 代码:
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <iostream>
|
||||
|
||||
using namespace std;
|
||||
|
||||
int main() {
|
||||
cout << "Hello open source." << endl;
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
将文件保存为 `hello.cpp`,然后使用 GCC 的 C++ 组件编译文件:
|
||||
|
||||
```
|
||||
PS> g++ hello.cpp --output hello
|
||||
```
|
||||
|
||||
最后,运行它:
|
||||
|
||||
```
|
||||
PS> .\hello.exe
|
||||
Hello open source.
|
||||
PS>
|
||||
```
|
||||
|
||||
MinGW 的内容远不止我在这里所能介绍的。毕竟,MinGW 打开了一个完整的开源世界和定制代码的潜力,因此请充分利用它。对于更广阔的开源世界,你还可以[试试 Linux][16]。当所有的限制都被消除后,你会惊讶于可能的事情。但与此同时,请试试 MinGW,并享受 GNU 的自由。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/gnu-windows-mingw
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/more_windows.jpg?itok=hKk64RcZ (Windows)
|
||||
[2]: http://mingw.org
|
||||
[3]: https://gcc.gnu.org/
|
||||
[4]: https://opensource.com/article/19/7/introduction-gnu-autotools
|
||||
[5]: https://osdn.net/projects/mingw/releases/
|
||||
[6]: https://opensource.com/sites/default/files/uploads/mingw-install.jpg (Installing mingw-get)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/mingw-packages.jpg (Installing GCC with MinGW)
|
||||
[9]: https://opensource.com/article/19/8/variables-powershell
|
||||
[10]: https://en.wikipedia.org/wiki/Bourne_shell
|
||||
[11]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
|
||||
[12]: https://opensource.com/article/19/3/netbsd-raspberry-pi
|
||||
[13]: https://opensource.com/sites/default/files/uploads/mingw-env.jpg (Edit your env)
|
||||
[14]: https://opensource.com/sites/default/files/uploads/mingw-env-set.jpg (Set your env)
|
||||
[15]: https://opensource.com/resources/what-vim
|
||||
[16]: https://opensource.com/article/19/7/ways-get-started-linux
|
53
published/20200818 IBM details next-gen POWER10 processor.md
Normal file
@ -0,0 +1,53 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12546-1.html)
|
||||
[#]: subject: (IBM details next-gen POWER10 processor)
|
||||
[#]: via: (https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
IBM 披露了下一代 POWER10 处理器细节
|
||||
======
|
||||
|
||||
![](https://images.idgesg.net/images/article/2020/08/power-10_06-100854570-large.jpg)
|
||||
|
||||
> 新的 CPU 针对企业混合云和 AI 推断进行了优化,它采用了为 PB 级内存集群开发的新技术。
|
||||
|
||||
IBM 上周一公布了最新的 POWER RISC CPU 系列,该系列针对企业混合云计算和人工智能 (AI)推理进行了优化,同时还进行了其他一些改进。
|
||||
|
||||
Power 是上世纪 90 年代最后一款 Unix 处理器,当时 Sun Microsystems、HP、SGI 和 IBM 都有竞争性的 Unix 系统,以及与之配合的 RISC 处理器。后来,Unix 让位给了 Linux,RISC 让位给了 x86,但 IBM 坚持了下来。
|
||||
|
||||
这是 IBM 的第一款 7 纳米处理器,IBM 宣称它将在与前代 POWER9 相同的功率范围内,将容量和处理器能效提升多达三倍。该处理器采用 15 核设计(实际上是 16 核,但其中一个没有使用),并允许采用单芯片或双芯片型号,因此 IBM 可以在同一外形尺寸中放入两个处理器。每个核心最多可以有 8 个线程,每块 CPU 最多支持 4TB 的内存。
|
||||
|
||||
更有趣的是一种名为 Memory Inception 的新内存集群技术。这种形式的集群允许系统将另一台物理服务器中的内存当作自己的内存来看待。因此,服务器不需要在每个机箱中放很多内存,而是可以在内存需求激增的时候,从邻居那里借到内存。或者,管理员可以在集群的中间设置一台拥有大量内存的服务器,并在其周围设置一些低内存服务器,这些服务器可以根据需要从大内存服务器上借用内存。
|
||||
|
||||
所有这些都是在 50 到 100 纳秒的延迟下完成的。IBM 的杰出工程师 William Starke 在宣布前的视频会议上说:“这已经成为行业的圣杯了。与其在每个机器里放很多内存,不如当我们对内存的需求激增时,我可以向邻居借。”
|
||||
|
||||
POWER10 使用的是一种叫做开放内存接口(OMI)的东西,因此服务器现在可以使用 DDR4,上市后可以升级到 DDR5,它还可以使用 GPU 中使用的 GDDR6 内存。理论上,POWER10 将具备 1TB/秒的内存带宽和 1TB/秒的 SMP 带宽。
|
||||
|
||||
与 POWER9 相比,POWER10 处理器每个核心的 AES 加密引擎数量增加了四倍。这实现了多项安全增强功能。首先,这意味着在不降低性能的情况下进行全内存加密,因此入侵者无法扫描内存内容。
|
||||
|
||||
其次,它可以为容器提供隔离的硬件和软件安全。这是为了解决与更高密度容器相关的安全问题。如果一个容器被入侵,POWER10 处理器的设计能够防止同一虚拟机中的其他容器受到同样的入侵影响。
|
||||
|
||||
最后,POWER10 提供了核心内的 AI 业务推断。它通过片上支持用于训练的 bfloat16 以及 AI 推断中常用的 INT8 和 INT4 实现。这将允许事务性负载在应用中添加 AI 推断。IBM 表示,POWER10 中的 AI 推断是 POWER9 的 20 倍。
|
||||
|
||||
公告中没有提到的是对操作系统的支持。POWER 运行 IBM 的 Unix 分支 AIX,以及 Linux。这并不太令人惊讶,因为这个消息是在 Hot Chips 上发布的,Hot Chips 是每年在斯坦福大学举行的年度半导体会议。Hot Chips 关注的是最新的芯片进展,所以软件通常被排除在外。
|
||||
|
||||
IBM 一般会在发布前一年左右公布新的 POWER 处理器,所以有足够的时间进行 AIX 的更新。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.facebook.com/NetworkWorld/
|
||||
[2]: https://www.linkedin.com/company/network-world
|
@ -1,34 +1,34 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Convert Text Files between Unix and DOS (Windows) Formats)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12558-1.html)
|
||||
[#]: subject: (How to Convert Text Files between Unix and DOS Windows Formats)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-convert-text-files-between-unix-and-dos-windows-formats/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How to Convert Text Files between Unix and DOS (Windows) Formats
|
||||
如何将文本文件在 Unix 和 DOS(Windows)格式之间转换
|
||||
======
|
||||
|
||||
As a Linux administrator, you may have noticed some requests from developers to convert files from DOS format to Unix format, and vice versa.
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/27/235550klfnz34lzpnchf7g.jpg)
|
||||
|
||||
This is because these files were created on a Windows system and copied to a Linux system for some reason.
|
||||
作为一名 Linux 管理员,你可能已经注意到了一些开发者请求将文件从 DOS 格式转换为 Unix 格式,反之亦然。
|
||||
|
||||
It’s harmless, but some applications on the Linux system may not understand these new line of characters, so you need to convert them before using it.
|
||||
这是因为这些文件是在 Windows 系统上创建的,并由于某种原因被复制到 Linux 系统上。
|
||||
|
||||
DOS text files comes with carriage return (CR or \r) and line feed (LF or \n) pairs as their newline characters, whereas Unix text files only have line feed as their newline character.
|
||||
这本身没什么问题,但 Linux 系统上的一些应用可能不能理解这些新的换行符,所以在使用之前,你需要转换它们。
|
||||
|
||||
There are many ways you can convert a DOS text file to a Unix format.
|
||||
DOS 文本文件以回车(`CR` 或 `\r`)加换行(`LF` 或 `\n`)这一对字符作为换行符,而 Unix 文本文件只以换行(`LF`)符作为换行符。
|
||||
|
||||
But I recommend using a special utility called **dos2unix** / **unix2dos** to convert text files between DOS and Unix formats.
|
||||
有很多方法可以将 DOS 文本文件转换为 Unix 格式。
|
||||
|
||||
* **dos2unix:** To convert a text files from the DOS format to the Unix format.
|
||||
* **unix2dos:** To convert a text files from the Unix format to the DOS format.
|
||||
* **tr, awk and [sed Command][1]:** These can be used for the same purpose
|
||||
但我推荐使用一个名为 `dos2unix` / `unix2dos` 的特殊工具将文本在 DOS 和 Unix 格式之间转换。
|
||||
|
||||
* `dos2unix`:将文本文件从 DOS 格式转换为 Unix 格式。
|
||||
* `unix2dos`:将文本文件从 Unix 格式转换为 DOS 格式。
|
||||
* `tr`、`awk` 和 [sed 命令][1]:这些可以用于相同的目的。
|
||||
|
||||
|
||||
You can easily identify whether the file is DOS format or Unix format using the od (octal dump) command as shown below.
|
||||
使用 `od`(<ruby>八进制转储<rt>octal dump</rt></ruby>)命令可以很容易地识别文件是 DOS 格式还是 Unix 格式,如下所示:
|
||||
|
||||
```
|
||||
# od -bc windows.txt
|
||||
@ -55,9 +55,9 @@ n L i n u x \r \n
|
||||
0000231
|
||||
```
|
||||
|
||||
The above output clearly shows that this is a DOS format file because it contains the escape sequence **`\r\n`**.
|
||||
上面的输出清楚地表明这是一个 DOS 格式的文件,因为它包含了转义序列 `\r\n`。
|
||||
|
||||
At the same time, when you print the file output on your terminal you will get the output below.
|
||||
同时,当你在终端上打印文件输出时,你会得到下面的输出:
|
||||
|
||||
```
|
||||
# cat windows.txt
|
||||
@ -67,40 +67,40 @@ Super computers are running on UNIX
|
||||
Anything can be done on Linux
|
||||
```
|
||||
|
||||
### How to Install dos2unix on Linux
|
||||
### 如何在 Linux 上安装 dos2unix?
|
||||
|
||||
dos2unix can be easily installed from the distribution official repository.
|
||||
`dos2unix` 可以很容易地从发行版的官方仓库中安装。
|
||||
|
||||
For RHEL/CentOS 6/7 systems, use the **[yum command][2]** to install dos2unix.
|
||||
对于 RHEL/CentOS 6/7 系统,使用 [yum 命令][2] 安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo yum install -y dos2unix
|
||||
```
|
||||
|
||||
For RHEL/CentOS 8 and Fedora systems, use the **[dnf command][3]** to install dos2unix.
|
||||
对于 RHEL/CentOS 8 和 Fedora 系统,使用 [dnf 命令][3] 安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo dnf install -y dos2unix
|
||||
```
|
||||
|
||||
For Debian based systems, use the **[apt command][4]** or **[apt-get command][5]** to install dos2unix.
|
||||
对于基于 Debian 的系统,使用 [apt 命令][4] 或 [apt-get 命令][5] 来安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo apt-get update
|
||||
$ sudo apt-get install dos2unix
|
||||
```
|
||||
|
||||
For openSUSE systems, use the **[zypper command][6]** to install dos2unix.
|
||||
对于 openSUSE 系统,使用 [zypper 命令][6] 安装 `dos2unix`。
|
||||
|
||||
```
|
||||
$ sudo zypper install -y dos2unix
|
||||
```
|
||||
|
||||
### 1) How to Convert DOS file to UNIX format
|
||||
### 1)如何将 DOS 文件转换为 UNIX 格式?
|
||||
|
||||
The following command converts the “windows.txt” file from DOS to Unix format.
|
||||
以下命令将 `windows.txt` 文件从 DOS 转换为 Unix 格式。
|
||||
|
||||
The modification of this file is to remove the “\r” from each line of the file.
|
||||
对该文件的修改是删除文件每行的 `\r`。
|
||||
|
||||
```
|
||||
# dos2unix windows.txt
|
||||
@ -132,70 +132,70 @@ i n u x \n
|
||||
0000225
|
||||
```
|
||||
|
||||
The above command will overwrite the original file.
|
||||
上面的命令将覆盖原始文件。
|
||||
|
||||
Use the following command if you want to keep the original file. This will save the converted output as a new file.
|
||||
如果你想保留原始文件,请使用以下命令。这将把转换后的输出保存为一个新文件。
|
||||
|
||||
```
|
||||
# dos2unix -n windows.txt unix.txt
|
||||
dos2unix: converting file windows.txt to file unix.txt in Unix format …
|
||||
```
|
||||
|
||||
### 1a) How to Convert DOS file to UNIX format Using tr Command
|
||||
#### 1a)如何使用 tr 命令将 DOS 文件转换为 UNIX 格式。
|
||||
|
||||
As discussed at the beginning of the article, you can use the tr command to convert the DOS file to Unix format as shown below.
|
||||
正如文章开头所讨论的,你可以如下所示使用 `tr` 命令将 DOS 文件转换为 Unix 格式。
|
||||
|
||||
```
|
||||
Syntax: tr -d '\r' < source_file > output_file
|
||||
```
|
||||
|
||||
The below tr command converts the “windows.txt” DOS file to Unix format file “unix.txt”.
|
||||
下面的 `tr` 命令将 DOS 格式的文件 `windows.txt` 转换为 Unix 格式文件 `unix.txt`。
|
||||
|
||||
```
|
||||
# tr -d '\r' < windows.txt >unix.txt
|
||||
```
|
||||
|
||||
**Make a note:** You can’t use the tr command to convert a file from Unix format to Windows (DOS).
|
||||
注意:不能使用 `tr` 命令将文件从 Unix 格式转换为 Windows(DOS)。
|
||||
|
||||
### 1b) How to Convert DOS file to UNIX format Using awk Command
|
||||
#### 1b)如何使用 awk 命令将 DOS 文件转换为 UNIX 格式?
|
||||
|
||||
Use the following awk command format to convert a DOS file to a Unix format.
|
||||
使用以下 `awk` 命令格式将 DOS 文件转换为 Unix 格式。
|
||||
|
||||
```
|
||||
Syntax: awk '{ sub("\r$", ""); print }' source_file.txt > output_file.txt
|
||||
```
|
||||
|
||||
The below awk command converts the “windows.txt” DOS file to Unix format file “unix.txt”.
|
||||
以下 `awk` 命令将 DOS 文件 `windows.txt` 转换为 Unix 格式文件 `unix.txt`。
|
||||
|
||||
```
|
||||
# awk '{ sub("\r$", ""); print }' windows.txt > unix.txt
|
||||
```
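正如文章开头提到的,`sed` 命令也可以完成同样的转换(下面的文件名为假设的示例;`\r` 的写法适用于 GNU sed):

```shell
# DOS -> Unix:删除每行末尾的回车符 \r
sed 's/\r$//' windows.txt > unix.txt

# Unix -> DOS:在每行末尾添加回车符 \r
sed 's/$/\r/' unix.txt > windows2.txt
```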
|
||||
|
||||
### 2) How to Convert UNIX file to DOS format
|
||||
### 2)如何将 UNIX 文件转换为 DOS 格式?
|
||||
|
||||
When you convert a file from UNIX to DOS format, it will add a carriage return (CR or \r) in each of the line.
|
||||
当你把一个文件从 UNIX 转换为 DOS 格式时,它会在每一行中添加一个回车(`CR` 或 `\r`)。
|
||||
|
||||
```
|
||||
# unix2dos unix.txt
|
||||
unix2dos: converting file unix.txt to DOS format …
|
||||
```
|
||||
|
||||
This command will keep the original file.
|
||||
该命令将保留原始文件。
|
||||
|
||||
```
|
||||
# unix2dos -n unix.txt windows.txt
|
||||
unix2dos: converting file unix.txt to file windows.txt in DOS format …
|
||||
```
|
||||
|
||||
### 2a) How to Convert UNIX file to DOS format Using awk Command
|
||||
#### 2a)如何使用 awk 命令将 UNIX 文件转换为 DOS 格式?
|
||||
|
||||
Use the following awk command format to convert UNIX file to DOS format.
|
||||
使用以下 `awk` 命令格式将 UNIX 文件转换为 DOS 格式。
|
||||
|
||||
```
|
||||
Syntax: awk 'sub("$", "\r")' source_file.txt > output_file.txt
|
||||
```
|
||||
|
||||
The below awk command converts the “unix.txt” file to the DOS format file “windows.txt”.
|
||||
下面的 `awk` 命令将 `unix.txt` 文件转换为 DOS 格式文件 `windows.txt`。
|
||||
|
||||
```
|
||||
# awk 'sub("$", "\r")' unix.txt > windows.txt
|
||||
@ -207,8 +207,8 @@ via: https://www.2daygeek.com/how-to-convert-text-files-between-unix-and-dos-win
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
58
published/20200821 Being open to open values.md
Normal file
@ -0,0 +1,58 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12561-1.html)
|
||||
[#]: subject: (Being open to open values)
|
||||
[#]: via: (https://opensource.com/open-organization/20/8/being-open-to-open-values)
|
||||
[#]: author: (Heidi Hess von Ludewig https://opensource.com/users/heidi-hess-von-ludewig)
|
||||
|
||||
对开放的价值观持开放态度
|
||||
======
|
||||
|
||||
> 开放管理可能会让人感到恐惧。一位经理人解释了为什么值得冒这个风险。
|
||||
|
||||
![Open Lego CAD][1]
|
||||
|
||||
在本期的“[用开放的价值观管理][2]”系列中,我和美国一家全国性保险公司的定价总监、人事经理 Braxton 聊了聊。
|
||||
|
||||
2018 年 6 月,Braxton 联系到了开放组织社区的红帽人员。他想了解更多关于他*和*他的团队如何使用开放的价值观,以不同的方式工作。我们很乐意提供帮助。于是我帮助 Braxton 和他的团队组织了一个关于[开放组织原则][3]的研讨会,并在之后还保持着联系,这样我就可以了解他在变得更加开放的过程中的风险。
|
||||
|
||||
最近我们采访了 Braxton,并和他一起坐下来听了事情的进展。[产业/组织心理学家和员工参与度专家][4] Tracy Guiliani 和 [Bryan Behrenshausen][5] 一起加入了采访。我们的谈话范围很广,探讨了了解开源价值观后的感受,如何利用它们来改变组织,以及它们如何帮助 Braxton 和他的团队更好地工作和提高参与度。
|
||||
|
||||
与 Braxton 合作是一次异常有意义的经历。它让我们直接见证了一个人如何将开放组织社区驱动的研讨会材料融入动态变化,并使他、他的团队和他的组织受益。开放组织大使*一直*在寻求帮助人们获得关于开放价值的见解和知识,使他们能够理解文化变革和[自己组织内的转型][6]。
|
||||
|
||||
他和他的团队正在以对他们有效的方式执行他们独特的开放价值观,并且让团队实现的利益超过了提议变革在时间和精力上的投入。
|
||||
|
||||
Braxton 对开放组织原则的*解释*和使组织更加开放的策略,让我们深受启发。
|
||||
|
||||
Braxton 承认,他的更开放的目标并不包括“制造另一个红帽”。相反,他和他的团队是在以对他们有效的方式,以及让团队实现的利益超过提议的变革所带来的时间和精力投入,来执行他们独特的开放价值观。
|
||||
|
||||
在我们采访的第一部分,你还会听到 Braxton 描述:
|
||||
|
||||
1. 在了解了透明性、协作性、适应性、社区性和包容性这五种开放式组织价值观之后,“开放式管理”对他意味着什么?
|
||||
2. 他的一些开放式管理做法。
|
||||
3. 他如何在他的团队中倡导开放文化,如何在后来者中鼓励开源价值观,以及他所体验到的好处。
|
||||
4. 当人们试图改造自己的组织时,对开源价值观最大的误解是什么?
|
||||
|
||||
- [收听对 Braxton 的采访](https://opensource.com/sites/default/files/images/open-org/braxton_1.ogg)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/open-organization/20/8/being-open-to-open-values
|
||||
|
||||
作者:[Heidi Hess von Ludewig][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/heidi-hess-von-ludewig
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open-lego.tiff_.png?itok=mQglOhW_ (Open Lego CAD)
|
||||
[2]: https://opensource.com/open-organization/managing-with-open-values
|
||||
[3]: https://github.com/open-organization/open-org-definition
|
||||
[4]: https://opensource.com/open-organization/20/5/commitment-engagement-org-psychology
|
||||
[5]: https://opensource.com/users/bbehrens
|
||||
[6]: https://opensource.com/open-organization/18/4/rethinking-ownership-across-organization
|
212
published/20200825 11 ways to list and sort files on Linux.md
Normal file
@ -0,0 +1,212 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12560-1.html)
|
||||
[#]: subject: (11 ways to list and sort files on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3572590/11-ways-to-list-and-sort-files-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
把 Linux 上的文件列表和排序玩出花来
|
||||
======
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202008/28/213742y8cxnbnjpopzd5j0.jpg)
|
||||
|
||||
> Linux 命令可以提供文件的详细信息,也可以自定义显示的文件列表,甚至可以随你所愿地深入文件系统的目录之中。
|
||||
|
||||
在 Linux 系统上,有许多方法可以列出文件并显示它们的信息。这篇文章回顾了一些提供文件细节的命令,并提供了自定义文件列表的选项,以满足你的需求。
|
||||
|
||||
大多数命令只会列出单个目录中的文件,而另一些命令则可以随你所愿地深入文件系统的目录之中。
|
||||
|
||||
当然,最主要的文件列表命令是 `ls`。然而,这个命令有大量的选项,可以只查找和列出你想看的文件。另外,还有 `find` 可以帮助你进行非常具体的文件搜索。
|
||||
|
||||
### 按名称列出文件
|
||||
|
||||
最简单的方法是使用 `ls` 命令按名称列出文件。毕竟,按名称(字母数字顺序)列出文件是默认的。你可以选择 `ls`(无细节)或 `ls -l`(大量细节)来决定你看到什么。
|
||||
|
||||
```
|
||||
$ ls | head -6
|
||||
8pgs.pdf
|
||||
Aesthetics_Thank_You.pdf
|
||||
alien.pdf
|
||||
Annual_Meeting_Agenda-20190602.pdf
|
||||
bigfile.bz2
|
||||
bin
|
||||
$ ls -l | head -6
|
||||
-rw-rw-r-- 1 shs shs 10886 Mar 22 2019 8pgs.pdf
|
||||
-rw-rw-r-- 1 shs shs 284003 May 11 2019 Aesthetics_Thank_You.pdf
|
||||
-rw-rw-r-- 1 shs shs 38282 Jan 24 2019 alien.pdf
|
||||
-rw-rw-r-- 1 shs shs 97358 May 19 2019 Annual_Meeting_20190602.pdf
|
||||
-rw-rw-r-- 1 shs shs 18115234 Apr 16 17:36 bigfile.bz2
|
||||
drwxrwxr-x 4 shs shs 8052736 Jul 10 13:17 bin
|
||||
```
|
||||
|
||||
如果你想一次查看一屏的列表,可以将 `ls` 的输出用管道送到 `more` 命令中。
|
||||
|
||||
### 按相反的名字顺序排列文件
|
||||
|
||||
要按名称反转文件列表,请添加 `-r`(<ruby>反转<rt>Reverse</rt></ruby>)选项。这就像把正常的列表倒过来一样。
|
||||
|
||||
```
|
||||
$ ls -r
|
||||
$ ls -lr
|
||||
```
|
||||
|
||||
### 按文件扩展名列出文件
|
||||
|
||||
`ls` 命令不会按内容分析文件类型,它只会处理文件名。不过,有一个命令选项可以按扩展名列出文件。如果你添加了 `-X` (<ruby>扩展名<rt>eXtension</rt></ruby>)选项,`ls` 将在每个扩展名类别中按名称对文件进行排序。例如,它将首先列出没有扩展名的文件(按字母数字顺序),然后是扩展名为 `.1`、`.bz2`、`.c` 等的文件。
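例如(文件名均为假设的示例):

```shell
# 创建一些带不同扩展名的示例文件
touch notes alpha.c beta.c archive.bz2 readme.1

# -X 按扩展名分组排序:先列出没有扩展名的文件,
# 然后依次是 .1、.bz2、.c 等扩展名的文件
ls -X
```

注意 `-X` 是 GNU coreutils 版 `ls` 的选项。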
|
||||
|
||||
### 只列出目录
|
||||
|
||||
默认情况下,`ls` 命令将同时显示文件和目录。如果你想只列出目录,你可以使用 `-d`(<ruby>目录<rt>Directory</rt></ruby>)选项。你会得到一个像这样的列表:
|
||||
|
||||
```
|
||||
$ ls -d */
|
||||
1/ backups/ modules/ projects/ templates/
|
||||
2/ html/ patches/ public/ videos/
|
||||
bin/ new/ private/ save/
|
||||
```
|
||||
|
||||
### 按大小排列文件
|
||||
|
||||
如果你想按大小顺序列出文件,请添加 `-S`(<ruby>大小<rt>Size</rt></ruby>)选项。但请注意,这实际上不会显示文件的大小(以及其他文件的细节),除非你还添加 `-l`(<ruby>长列表<rt>Long listing</rt></ruby>)选项。当按大小列出文件时,一般来说,看到命令在按你的要求做事情是很有帮助的。注意,默认情况下是先显示最大的文件。添加 `-r` 选项可以反过来(即 `ls -lSr`)。
|
||||
|
||||
```
|
||||
$ ls -lS
|
||||
total 959492
|
||||
-rw-rw-r-- 1 shs shs 357679381 Sep 19 2019 sav-linux-free-9.tgz
|
||||
-rw-rw-r-- 1 shs shs 103270400 Apr 16 17:38 bigfile
|
||||
-rw-rw-r-- 1 shs shs 79117862 Oct 5 2019 Nessus-8.7.1-ubuntu1110_amd64.deb
|
||||
```
|
||||
|
||||
### 按属主列出文件
|
||||
|
||||
如果你想按属主列出文件(例如,在一个共享目录中),你可以把 `ls` 命令的输出传给 `sort`,并通过添加 `-k3` 来按第三个字段排序,从而挑出属主一栏。
|
||||
|
||||
```
|
||||
$ ls -l | sort -k3 | more
|
||||
total 56
|
||||
-rw-rw-r-- 1 dory shs 0 Aug 23 12:27 tasklist
|
||||
drwx------ 2 gdm gdm 4096 Aug 21 17:12 tracker-extract-files.121
|
||||
srwxr-xr-x 1 root root 0 Aug 21 17:12 ntf_listenerc0c6b8b4567
|
||||
drwxr-xr-x 2 root root 4096 Aug 21 17:12 hsperfdata_root
|
||||
             ^
|
||||
             |
|
||||
```
|
||||
|
||||
事实上,你可以用这种方式对任何字段进行排序(例如,年份)。只是要注意,如果你要对一个数字字段进行排序,则要加上一个 `n`,如 `-k5n`,否则你将按字母数字顺序进行排序。这种排序技术对于文件内容的排序也很有用,而不仅仅是用于列出文件。
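例如,按长列表的第 5 列(文件大小)做数值排序(文件名为假设的示例):

```shell
# 创建三个大小不同的示例文件
printf '%0200d' 0 > big
printf '%010d' 0 > medium
printf '%03d' 0 > small

# -k5n 按第 5 个字段(大小)做数值排序,从小到大
ls -l | sort -k5n
```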
|
||||
|
||||
### 按年份排列文件
|
||||
|
||||
使用 `-t`(<ruby>修改时间<rt>Time modified</rt></ruby>)选项按年份顺序列出文件 —— 它们的新旧程度。添加 `-r` 选项,让最近更新的文件在列表中最后显示。我使用这个别名来显示我最近更新的文件列表。
|
||||
|
||||
```
|
||||
$ alias recent='ls -ltr | tail -8'
|
||||
```
|
||||
|
||||
请注意,文件的更改时间和修改时间是不同的。`-c`(<ruby>更改时间<rt>time Changed</rt></ruby>)和 `-t`(修改时间)选项的结果并不总是相同。如果你改变了一个文件的权限,而没有改变其他内容,`-c` 会把这个文件放在 `ls` 输出的顶部,而 `-t` 则不会。如果你想知道其中的区别,可以看看 `stat` 命令的输出。
|
||||
|
||||
```
|
||||
$ stat ckacct
|
||||
File: ckacct
|
||||
Size: 200 Blocks: 8 IO Block: 4096 regular file
|
||||
Device: 801h/2049d Inode: 829041 Links: 1
|
||||
Access: (0750/-rwxr-x---) Uid: ( 1000/ shs) Gid: ( 1000/ shs)
|
||||
Access: 2020-08-20 16:10:11.063015008 -0400
|
||||
Modify: 2020-08-17 07:26:34.579922297 -0400 <== content changes
|
||||
Change: 2020-08-24 09:36:51.699775940 -0400 <== content or permissions changes
|
||||
Birth: -
|
||||
```
|
||||
|
||||
### 按组别列出文件
|
||||
|
||||
要按关联的组别对文件进行排序,你可以将一个长列表的输出传给 `sort` 命令,并告诉它在第 4 列进行排序。
|
||||
|
||||
```
|
||||
$ ls -l | sort -k4
|
||||
```
|
||||
|
||||
### 按访问日期列出文件
|
||||
|
||||
要按访问日期(最近访问的日期在前)列出文件,使用 `-ltu` 选项。`u` 强制“按访问日期”排列顺序。
|
||||
|
||||
```
|
||||
$ ls -ltu
|
||||
total 959500
|
||||
-rwxr-x--- 1 shs shs 200 Aug 24 09:42 ckacct <== most recently used
|
||||
-rw-rw-r-- 1 shs shs 1335 Aug 23 17:45 lte
|
||||
```
|
||||
|
||||
### 单行列出多个文件

有时,精简的文件列表更适合手头的任务。`ls` 命令甚至有这方面的选项。为了在尽可能少的行上列出文件,你可以使用 `--format=comma` 来用逗号分隔文件名,就像这个命令一样:

```
$ ls --format=comma
1, 10, 11, 12, 124, 13, 14, 15, 16pgs-landscape.pdf, 16pgs.pdf, 17, 18, 19,
192.168.0.4, 2, 20, 2018-12-23_OoS_2.pdf, 2018-12-23_OoS.pdf, 20190512_OoS.pdf,
'2019_HOHO_application working.pdf' …
```

喜欢用空格?使用 `--format=across` 代替。

```
$ ls --format=across z*
z zip zipfiles zipfiles1.bat zipfiles2.bat
zipfiles3.bat zipfiles4.bat zipfiles.bat zoom_amd64.deb zoomap.pdf
zoom-mtg
```

### 增加搜索的深度

虽然 `ls` 一般只列出单个目录中的文件,但你可以使用 `-R`(<ruby>递归<rt>Recursively</rt></ruby>)选项递归地列出文件,深入到整个目录树的深处。

```
$ ls -R zzzzz | grep -v "^$"
zzzzz:
zzzz
zzzzz/zzzz:
zzz
zzzzz/zzzz/zzz:
zz
zzzzz/zzzz/zzz/zz:
z
zzzzz/zzzz/zzz/zz/z:
sleeping
```

另外,你也可以使用 `find` 命令,对深度进行限制或不限制。在这个命令中,我们指示 `find` 命令只在三个层次的目录中查找:

```
$ find zzzzz -maxdepth 3
zzzzz
zzzzz/zzzz
zzzzz/zzzz/zzz
zzzzz/zzzz/zzz/zz
```

### 选择 ls 还是 find

当你需要列出符合具体要求的文件时,`find` 命令可能是比 `ls` 更好的工具。

与 `ls` 不同的是,`find` 命令会尽可能地深入查找,除非你限制它。它还有许多其他选项和一个 `-exec` 子命令,允许在找到你要找的文件后采取一些特定的行动。
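例如,下面的演示(临时目录与文件名仅为演示假设)用 `-exec` 对找到的每个 `.log` 文件运行 `wc -c`,统计其字节数:

```shell
# 创建一些示例文件
demo=$(mktemp -d)
touch "$demo/a.log" "$demo/b.log" "$demo/c.txt"

# 找出所有 .log 文件,并对每一个执行 wc -c;{} 会被替换为文件名
find "$demo" -name '*.log' -exec wc -c {} \;
```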

### 总结

`ls` 命令有很多用于列出文件的选项。了解一下它们。你可能会发现一些你会喜欢的选项。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3572590/11-ways-to-list-and-sort-files-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[wxy](https://github.com/wxy)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world
@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Rejoice KDE Lovers! MX Linux Joins the KDE Bandwagon and Now You Can Download MX Linux KDE Edition)
[#]: via: (https://itsfoss.com/mx-linux-kde-edition/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

Rejoice KDE Lovers! MX Linux Joins the KDE Bandwagon and Now You Can Download MX Linux KDE Edition
======

Debian-based [MX Linux][1] is already an impressive Linux distribution with the [Xfce desktop environment][2] as the default. Even though it works well and is suitable for running on minimal hardware configurations, it still isn’t the best Linux distribution in terms of eye candy.

That’s where KDE comes to the rescue. Of late, KDE Plasma has shed a lot of weight and uses fewer system resources without compromising on modern looks. No wonder KDE Plasma is one of [the best desktop environments][3] out there.

![][4]

With [MX Linux 19.2][5], they began testing a KDE edition and have finally released their first KDE version.

Also, the KDE edition comes with Advanced Hardware Support (AHS) enabled. Here’s what they have mentioned in their release notes:

> MX-19.2 KDE is an **Advanced Hardware Support (AHS)** enabled **64-bit only** version of MX featuring the KDE/plasma desktop. Applications utilizing Qt library frameworks are given a preference for inclusion on the iso.

As I mentioned earlier, this is MX Linux’s first KDE edition ever, and they’ve shed some light on it in the announcement as well:

> This will be first officially supported MX/antiX family iso utilizing the KDE/plasma desktop since the halting of the predecessor MEPIS project in 2013.

Personally, I enjoyed the experience of using MX Linux until I started using [Pop OS 20.04][6]. So, I’ll give you some key highlights of the MX Linux 19.2 KDE edition along with my impressions of testing it.

### MX Linux 19.2 KDE: Overview

![][7]

Out of the box, MX Linux looks cleaner and more attractive with the KDE desktop on board. Unlike KDE Neon, it doesn’t feature the latest and greatest KDE stuff, but it looks to be doing the job intended.

Of course, you will get the same options that you expect from a KDE-powered distro to customize the look and feel of your desktop. In addition to the obvious KDE perks, you will also get the usual MX tools, antiX-live-usb-system, and the snapshot feature that comes baked into the Xfce edition.

It’s a great thing to have the best of both worlds here, as stated in their announcement:

> MX-19.2 KDE includes the usual MX tools, antiX-live-usb-system, and snapshot technology that our users have come to expect from our standard flagship Xfce releases. Adding KDE/plasma to the existing Xfce/MX-fluxbox desktops will provide for a wider range user needs and wants.

I haven’t performed a great deal of tests, but I did have some issues with extracting archives (it didn’t work on the first try) and copy-pasting a file to a new location. Not sure if those are known bugs — but I thought I should let you know here.

![][8]

Other than that, it features every useful tool you’d want to have and works great. With KDE on board, it actually feels more polished and smooth in my case.

Along with KDE Plasma 5.14.5 on top of Debian 10 “buster”, it also comes with GIMP 2.10.12, MESA, the Debian (AHS) 5.6 kernel, the Firefox browser, and a few other goodies like VLC, Thunderbird, LibreOffice, and the Clementine music player.

You can also look for more stuff in the MX repositories.

![][9]

There are some known issues with the release, like the system clock settings not being adjustable via KDE settings. You can check their [announcement post][10] for more information or their [bug list][11] to make sure everything’s fine before trying it out on your production system.

### Wrapping Up

MX Linux 19.2 KDE edition is definitely more impressive than its Xfce offering, in my opinion. It would take a while to iron out the bugs of this first KDE release — but it’s not a bad start.

Speaking of KDE, I recently tested out KDE Neon, the official KDE distribution. I shared my experience in this video. I’ll try to do a video on the MX Linux KDE flavor as well.

[Subscribe to our YouTube channel for more Linux videos][12]

Have you tried it yet? Let me know your thoughts in the comments below!

--------------------------------------------------------------------------------

via: https://itsfoss.com/mx-linux-kde-edition/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://mxlinux.org/
[2]: https://www.xfce.org/
[3]: https://itsfoss.com/best-linux-desktop-environments/
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-kde-edition.jpg?resize=800%2C450&ssl=1
[5]: https://mxlinux.org/blog/mx-19-2-now-available/
[6]: https://itsfoss.com/pop-os-20-04-review/
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde.jpg?resize=800%2C452&ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde-filemanager.jpg?resize=800%2C452&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/mx-linux-19-2-kde-info.jpg?resize=800%2C452&ssl=1
[10]: https://mxlinux.org/blog/mx-19-2-kde-now-available/
[11]: https://bugs.mxlinux.org/
[12]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
@ -0,0 +1,66 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco open-source code boosts performance of Kubernetes apps over SD-WAN)
[#]: via: (https://www.networkworld.com/article/3572310/cisco-open-source-code-boosts-performance-of-kubernetes-apps-over-sd-wan.html)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco open-source code boosts performance of Kubernetes apps over SD-WAN
======
Cisco's Cloud-Native SD-WAN project marries SD-WANs to Kubernetes applications to cut down on the manual work needed to optimize latency and packet loss.
Thinkstock

Cisco has introduced an open-source project that it says could go a long way toward reducing the manual work involved in optimizing the performance of Kubernetes applications across [SD-WANs][1].

Cisco said it launched the Cloud-Native SD-WAN (CN-WAN) project to show how Kubernetes applications can be automatically mapped to SD-WAN, with the result that the applications perform better over the WAN.

**More about SD-WAN**: [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][2] • [How to pick an off-site data-backup method][3] • [SD-Branch: What it is and why you’ll need it][4] • [What are the options for securing SD-WAN?][5]

“In many cases, enterprises deploy an SD-WAN to connect a Kubernetes cluster with users or workloads that consume cloud-native applications. In a typical enterprise, NetOps teams leverage their network expertise to program SD-WAN policies to optimize general connectivity to the Kubernetes hosted applications, with the goal to reduce latency, reduce packet loss, etc.” wrote John Apostolopoulos, vice president and CTO of Cisco’s intent-based networking group, in a [blog][6] post.

“The enterprise usually also has DevOps teams that maintain and optimize the Kubernetes infrastructure. However, despite the efforts of NetOps and DevOps teams, today Kubernetes and SD-WAN operate mostly like ships in the night, often unaware of each other. Integration between SD-WAN and Kubernetes typically involves time-consuming manual coordination between the two teams.”

Current SD-WAN offerings often have APIs that let customers programmatically influence how their traffic is handled over the WAN. This enables interesting and valuable opportunities for automation and application optimization, Apostolopoulos stated. “We believe there is an opportunity to pair the declarative nature of Kubernetes with the programmable nature of modern SD-WAN solutions,” he stated.

Enter CN-WAN, which defines a set of components that can be used to integrate an SD-WAN package, such as Cisco Viptela SD-WAN, with Kubernetes to enable DevOps teams to express the WAN needs of the microservices they deploy in a Kubernetes cluster, while simultaneously letting NetOps automatically render the microservices' needs to optimize the application performance over the WAN, Apostolopoulos stated.

Apostolopoulos wrote that CN-WAN is composed of a Kubernetes Operator, a Reader, and an Adaptor. It works like this: The CN-WAN Operator runs in the Kubernetes cluster, actively monitoring the deployed services. DevOps teams can use standard Kubernetes annotations on the services to define WAN-specific metadata, such as the traffic profile of the application. The CN-WAN Operator then automatically registers the service along with the metadata in a service registry. In a demo at KubeCon EU this week, Cisco used Google Service Directory as the service registry.

Earlier this year, [Cisco and Google][7] deepened their relationship with a turnkey package that lets customers mesh SD-WAN connectivity with applications running in a private [data center][8], Google Cloud, or another cloud or SaaS application. That jointly developed platform, called Cisco SD-WAN Cloud Hub with Google Cloud, combines Cisco’s SD-WAN policy-, telemetry- and security-setting capabilities with Google's software-defined backbone to ensure that application service-level agreement, security, and compliance policies are extended across the network.

Meanwhile, on the SD-WAN side, the CN-WAN Reader connects to the service registry to learn how Kubernetes is exposing the services and the associated WAN metadata extracted by the CN-WAN Operator, Cisco stated. When new or updated services or metadata are detected, the CN-WAN Reader sends a message to the CN-WAN Adaptor so SD-WAN policies can be updated.

Finally, the CN-WAN Adaptor maps the service-associated metadata into the detailed SD-WAN policies programmed by NetOps in the SD-WAN controller. The SD-WAN controller automatically renders the SD-WAN policies, specified by the NetOps for each metadata type, into specific SD-WAN data-plane optimizations for the service, Cisco stated.

“The SD-WAN may support multiple types of access at both sender and receiver (e.g., wired Internet, MPLS, wireless 4G or 5G), as well as multiple service options and prioritizations per access network, and of course multiple paths between source and destination,” Apostolopoulos stated.

The code for the CN-WAN project is available as open source on [GitHub][9].

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3572310/cisco-open-source-code-boosts-performance-of-kubernetes-apps-over-sd-wan.html

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html
[2]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html
[3]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html
[4]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html
[5]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html
[6]: https://blogs.cisco.com/networking/introducing-the-cloud-native-sd-wan-project
[7]: https://www.networkworld.com/article/3539252/cisco-integrates-sd-wan-connectivity-with-google-cloud.html
[8]: https://www.networkworld.com/article/3223692/what-is-a-data-centerhow-its-changed-and-what-you-need-to-know.html
[9]: https://github.com/CloudNativeSDWAN/cnwan-docs
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (JunJie957)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,59 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Microsoft uses AI to boost its reuse, recycling of server parts)
[#]: via: (https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Microsoft uses AI to boost its reuse, recycling of server parts
======
Get ready to hear the term 'circular' a lot more in reference to data center gear.
Monsitj / Getty Images

Microsoft is bringing artificial intelligence to the task of sorting through millions of servers to determine what can be recycled and where.

The new initiative calls for the building of so-called Circular Centers at Microsoft data centers around the world, where AI algorithms will be used to sort through parts from decommissioned servers or other hardware and figure out which parts can be reused on the campus.

**READ MORE:** [How to decommission a data center][1]

Microsoft says it has more than three million servers and related hardware in its data centers, and that a server's average lifespan is about five years. Plus, Microsoft is expanding globally, so its server numbers should increase.

Circular Centers are all about quickly sorting through the inventory rather than tying up overworked staff. Microsoft plans to increase its reuse of server parts by 90% by 2025. "Using machine learning, we will process servers and hardware that are being decommissioned onsite," wrote Brad Smith, president of Microsoft, in a [blog post][2] announcing the initiative. "We'll sort the pieces that can be reused and repurposed by us, our customers, or sold."

Smith notes that today there is no consistent data about the quantity, quality, and type of waste, where it is generated, and where it goes. Data about construction and demolition waste, for example, is inconsistent and needs a standardized methodology, better transparency, and higher quality.

"Without more accurate data, it's nearly impossible to understand the impact of operational decisions, what goals to set, and how to assess progress, as well as an industry standard for waste footprint methodology," he wrote.

A Circular Center pilot in an Amsterdam data center reduced downtime and increased the availability of server and network parts for its own reuse and buy-back by suppliers, according to Microsoft. It also reduced the cost of transporting and shipping servers and hardware to processing facilities, which lowered carbon emissions.

The term "circular economy" is catching on in tech. It's based on recycling of server hardware, putting equipment that is a few years old but still quite usable back in service somewhere else. ITRenew, a reseller of used hyperscaler servers [that I profiled][3] a few months back, is big on the term.

The first Microsoft Circular Centers will be built at new, major data-center campuses or regions, the company says. It plans to eventually add these centers to campuses that already exist.

Microsoft has an expressed goal of being "carbon negative" by 2030, and this is just one of several projects toward that end. Recently Microsoft announced it had conducted a test at its system developer's lab in Salt Lake City where a 250kW hydrogen fuel cell system powered a row of server racks for 48 hours straight, something the company says has never been done before.

"It is the largest computer backup power system that we know that is running on hydrogen and it has run the longest continuous test," Mark Monroe, a principal infrastructure engineer, wrote in a Microsoft [blog post][4]. He says hydrogen fuel cell prices have plummeted so much in recent years that the cells are now a viable, and much cleaner-burning, alternative to diesel-powered backup generators.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570451/microsoft-uses-ai-to-boost-its-reuse-recycling-of-server-parts.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3439917/how-to-decommission-a-data-center.html
[2]: https://blogs.microsoft.com/blog/2020/08/04/microsoft-direct-operations-products-and-packaging-to-be-zero-waste-by-2030/
[3]: https://www.networkworld.com/article/3543810/for-sale-used-low-mileage-hyperscaler-servers.html
[4]: https://news.microsoft.com/innovation-stories/hydrogen-datacenters/
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,53 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IBM details next-gen POWER10 processor)
[#]: via: (https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

IBM details next-gen POWER10 processor
======
New CPU is optimized for enterprise hybrid cloud and AI inferencing, and it features a new technology for creating petabyte-scale memory clusters.
IBM

IBM on Monday took the wraps off its latest POWER RISC CPU family, optimized for enterprise hybrid-cloud computing and artificial intelligence (AI) inferencing, along with a number of other improvements.

Power is the last of the Unix processors from the 1990s, when Sun Microsystems, HP, SGI, and IBM all had competing Unixes and RISC processors to go with them. Unix gave way to Linux and RISC gave way to x86, but IBM holds on.

This is IBM's first 7-nanometer processor, and IBM claims it will deliver an up-to-three-times improvement in capacity and processor energy efficiency within the same power envelope as its POWER9 predecessor. The processor comes in a 15-core design (actually 16 cores, but one is not used) and allows for single- or dual-chip models, so IBM can put two processors in the same form factor. Each core can have up to eight threads, and each socket supports up to 4TB of memory.

More interesting is a new memory-clustering technology called Memory Inception. This form of clustering allows the system to view memory in another physical server as though it were its own. So instead of putting a lot of memory in each box, servers can literally borrow from their neighbors when there is a spike in demand for memory. Or admins can set up one big server with lots of memory in the middle of a cluster and surround it with low-memory servers that can borrow memory as needed from the high-capacity server.

All of this is done with a latency of 50 to 100 nanoseconds. "This has been a holy grail of the industry for a while now," said William Starke, a distinguished engineer with IBM, on a video conference in advance of the announcement. "Instead of putting a lot of memory in each box, when we have a spike demand for memory I can borrow from my neighbors."

POWER10 uses something called Open Memory Interfaces (OMI), so the server can use DDR4 now and be upgraded to DDR5 when it hits the market, and it can also use the GDDR6 memory used in GPUs. In theory, POWER10 will come with 1TB/sec of memory bandwidth and 1TB/sec of SMP bandwidth.

The POWER10 processor has quadruple the number of AES encryption engines per core compared to the POWER9. This enables several security enhancements. First, it means full memory encryption without degradation of performance, so no intruder can scan the contents of memory.

Second, it enables hardware and software security for containers to provide isolation. This is designed to address new security considerations associated with the higher density of containers. If a container were to be compromised, the POWER10 processor is designed to be able to prevent other containers in the same virtual machine from being affected by the same intrusion.

Finally, the POWER10 offers in-core AI business inferencing. It achieves this through on-chip support for bfloat16 for training as well as INT8 and INT4, which are commonly used in AI inferencing. This will allow transactional workloads to add AI inferencing in their apps. IBM says the AI inferencing in POWER10 is 20 times that of POWER9.

Not mentioned in the announcement is operating system support. POWER runs AIX, IBM's flavor of Unix, as well as Linux. That's not too surprising since the news is coming out of Hot Chips, the annual semiconductor conference held every year at Stanford University. Hot Chips is focused on the latest chip advances, so software is usually left out.

IBM generally announces new POWER processors about a year in advance of release, so there is plenty of time for an AIX update.

Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3571415/ibm-details-next-gen-power10-processor.html

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world
@ -0,0 +1,86 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (3 ways a legal team can enable open source)
|
||||
[#]: via: (https://opensource.com/article/20/8/open-source-legal-organization)
|
||||
[#]: author: (Jeffrey Robert Kaufman https://opensource.com/users/jkaufman)
|
||||
|
||||
3 ways a legal team can enable open source
|
||||
======
|
||||
Open source law is unique because of its unusual requirements for
|
||||
success. Learn ways lawyers can get their organizations to "yes".
|
||||
![Active listening in a meeting is a skill][1]
|
||||
|
||||
I am an open source lawyer for Red Hat. One important part of my job is to provide information to other companies, including their in-house counsel, about how Red Hat builds enterprise-class products with a completely open source development model and answering their questions about open source licensing in general. After hearing about Red Hat's success, these conversations often turn to discussions about how their organization can evolve to be more open source-aware and -capable, and lawyers at these meetings regularly ask how they can modify their practices to be more skilled in providing open source counsel to their employees.
|
||||
|
||||
In this article and the next, I'll convey what I normally tell in-house counsel about these topics. If you are not in-house counsel and instead work for a law firm supporting clients in the software space, you may also find this information useful. (If you are considering going to law school and becoming an open source lawyer, you should read Luis Villa's excellent article [_What to know before jumping into a career as an open source lawyer_][2].)
|
||||
|
||||
My perspective is based on my personal and possibly unique experience working in various engineering, product management, and lawyer roles. My atypical background means I see the world through a different lens from most lawyers. So, the ideas presented below may not be traditional, but they have served me well in my practice, and I hope they will benefit you.
|
||||
|
||||
### Connect with open source organizations
|
||||
|
||||
There are a multitude of open source organizations that are especially useful to open source lawyers. Many of these organizations have measurable influence over the views and interpretations of open source licenses. Consider getting involved with some of the more prominent organizations, such as the [Open Source Initiative][3] (OSI), the [Software Freedom Conservancy][4], the [Software Freedom Law Center][5], the [Free Software Foundation][6], [Free Software Foundation Europe][7], and the [Linux Foundation][8]. There are also a number of useful mailing lists, such as OSI's [license-discuss][9] and [license-review][10], that are worth monitoring and even participating in.
|
||||
|
||||
Participating in these groups and lists will help you understand the myriad and unique issues you may encounter when practicing open source law, including how various terms of the open source license texts are interpreted by the community. There is little case law to guide you, but there are plenty of people happy to help answer questions, and these resources are the best source of guidance. This is perhaps one of the very unique and amazing aspects of practicing open source law—the openness of the development community is equally matched by the openness of the legal community to provide perspective and advice. All you have to do is ask.
|
||||
|
||||
### Adopt the mindset of a business manager and find a path to yes
|
||||
|
||||
Product managers are ultimately held responsible for a product or service from cradle to grave, including enabling that product or service to get to market. Since the bulk of my career has been spent leading product-management organizations, my mind is programmed to find a path, no matter how, to get a viable product or service to market. I encourage any lawyer to adopt this mindset since products and services are the lifeblood of any business.
|
||||
|
||||
As such, the approach I have always taken in my legal practice involves issue spotting and advising clients of risk, _but always with the objective of finding a path to "YES,"_ especially when my analysis impacts product/service development and release. When evaluating legal issues for internal clients, my executive management or I may, at times, view the risk to be too high. In such cases, I continue encouraging everyone to work on the problem because, in my experience, solutions do eventually present themselves, often in unexpected ways.

Be sure to tap all your resources, including your software development clients (see below), as they can be an excellent source of creative approaches to solving problems, often using technology to resolve issues. I have found much joy in this method, and my clients seem pleased with this passion and sentiment. I encourage all lawyers to consider this approach.

Sadly, it is always easy to say "no" for self-preservation and to eliminate what may appear to be _any_ risk to the company. I have always found this response untenable. All business transactions have risk. As a counselor, it is your job to understand these risks and present them to your clients so that they may make educated business decisions. Simply saying "no" when any risk is present, without providing any additional context or other paths forward to mitigate risks, does no good for the long-term success of the organization. Companies need to provide products and services to survive, and you should be helping find that path to YES whenever possible. You have an ethical responsibility to say "no" in certain situations, of course, but explore and exhaust all reasonable options first.

### Build relationships with developers

Building relationships with your software development clients is _absolutely critical_. Building rapport and trust with developers are two important ways to strengthen these relationships.

#### Build rapport

Your success as an open source lawyer is most often a direct result of your positive relationships with your software developers. In many cases, your software developers are the direct or indirect recipients of your legal advice, and they will be looking to you for guidance and counsel. Unfortunately, many software developers are suspicious of lawyers and often view lawyers as obstacles to their ability to develop and release software. The best way to overcome this ill will is to build rapport with your clients. How you do that is different for most people, but here are some ideas.

1. **Show an interest in your clients' work:** Be inquisitive about the details of their project, how it works, the underlying programming language, how it connects to other systems, and how it will benefit the company. Some of these answers will be useful in your legal analysis when ascertaining legal risk and ways to mitigate such risk, but more importantly, this builds a solid foundation for an ongoing positive relationship with your client.
2. **Be clear to your client that you are working to find a path to YES:** It is perfectly acceptable to let your client know you are concerned about certain aspects of their project, but follow that up with ideas for mitigating those concerns. Reassure them that it is your job to work with them to find a solution and not to be a roadblock. The effect of this cannot be overstated.
3. **Participate in an open source project:** This is especially true if you have software development experience. Even if you do not have such experience, there are many ways to participate, such as helping with documentation or community outreach. Or request to join their status meetings just to learn more about their work. This will also allow you to provide counsel on-demand and in real time so that the team may course-correct early in the process.

#### Have trust

Your software developers are very active in their open source communities and are some of the best resources for understanding current issues affecting open source software and development. Just as you connect with legal organizations, like your local bar or national legal organizations, to keep current on the latest legal developments, you should also engage with your software development resources for periodic briefings and to gain their counsel on various matters (e.g., how would the community view the use of this license for a project?).

### Relationships breed success

Open source law is unique because of its unusual requirements for success, namely, connections to other open source attorneys, open source organizations, and a deep and respectful relationship with clients. Success is a direct function of these relationships.

In part two of this series, I will explore how and why it is important to find a path to "open" and to develop scalable solutions for your organization.

Find out what it takes to become an open source lawyer.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/open-source-legal-organization

作者:[Jeffrey Robert Kaufman][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jkaufman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team-discussion-mac-laptop-stickers.png?itok=AThobsFH (Active listening in a meeting is a skill)
[2]: https://opensource.com/article/16/12/open-source-lawyer
[3]: https://opensource.org/
[4]: https://sfconservancy.org/
[5]: https://www.softwarefreedom.org/
[6]: https://www.fsf.org/
[7]: https://fsfe.org/index.en.html
[8]: https://www.linuxfoundation.org/
[9]: https://lists.opensource.org/mailman/listinfo/license-discuss_lists.opensource.org
[10]: https://lists.opensource.org/mailman/listinfo/license-review_lists.opensource.org
@ -0,0 +1,77 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 reasons small businesses choose open source tools for remote employees)
[#]: via: (https://opensource.com/article/20/8/business-tools)
[#]: author: (Adrian Johansen https://opensource.com/users/ajohansen)

3 reasons small businesses choose open source tools for remote employees
======
There are plenty of open source operations tools available if you lack
the budget for premium software; here's how to evaluate your options.
![A chair in a field.][1]

The last decade or so has seen some significant changes in how businesses operate. The expansion of accessible, affordable, connected technology has removed barriers to many resources, enabling collaboration and execution of work by nearly anyone, from nearly anywhere. Though COVID-19 has made remote operations a necessity for a lot of industries, many businesses had already begun to embrace it as a more cost-effective, agile way of working.

That said, not every business has the budget to subscribe to premium software as a service (SaaS) to keep their remote employees productive. The good news is that open source software can be every bit as robust and intuitive as the premium options that are only available to those with plenty of capital. The key is clearly identifying what you need from those tools in order to focus your search.

The open source community can offer some smart solutions to the challenges of remote working, and we’re going to look at a few key areas of need for businesses exploring how they can operate more effectively.

### Flexible collaboration

One of the primary challenges for businesses operating remotely is managing the productivity of disparate teams. Employee management can be difficult enough when everybody is in the same room, but keeping teams in close collaboration when they may be working in different cities or even different time zones requires water-tight organization. This is why open source tools that make flexible yet robust collaboration possible are at the top of the list for remote teams today.

Among the best [open source tools][2] on the market at the moment is [Taiga][3], a project management platform. It uses the card-style task organization approach, providing a board that is visible to all employees on the network and keeps leadership and team members informed about the status of individual tasks and overall project progress. Open source project management software that mimics the easy collaboration that premium services like Trello and Asana offer is increasingly popular. Many—[Odoo][4] and [OpenProject][5] among them—go further than their premium counterparts, offering integrated apps for forecasting and making it easier to share and transfer files or documents.

When it comes to remote collaboration, effective communication tools are also a must. Team chat platforms can help to make certain that remote employees have access to leadership and other team members whenever they need assistance or clarification on tasks. Their chat room nature also helps to build team camaraderie. [Mattermost][6] and [Rocket.Chat][7] are among the popular open source platforms that act as [effective alternatives to SaaS like Slack][8]; both have free options, public and private chat rooms, and the ability to upload and share media files.

### User-friendliness

There is a focus on effective user interfaces (UI) across the software industry at the moment, and this is arguably even more essential for remote teams. This is not just important for the day-to-day functionality of tools but also for ease of training. New employee onboarding can be improved by implementing a clear, fluid process that introduces new hires to the business’ core practices and tools. This means that any open source software deployed must be user-friendly enough to require minimal guidance and cause few disruptions during ramp-up.

It certainly helps when the software itself is designed intuitively. [Drawpile][9] is an excellent example of this. [This collaborative drawing platform][10], used for team meetings and creative projects alike, uses clear icons and interfaces that are similar to popular drawing platforms like Photoshop or MSPaint. It also presents a minimalistic, functional approach to avoid overwhelming the uninitiated. When reviewing open source software, business leaders need to consider the perspective of a new user and evaluate its ease of use.

It’s also important to take into account what instructional assets the developers have provided. Many have online manuals, though the nature of open source can mean that these frequently change and evolve over time. Some, like storage and sharing platform [Nextcloud][11], include separate training materials for users, developers, and administrators. Review accessibility to concise documentation like this and ensure that it can be easily integrated, delivered, and understood during your remote employee onboarding process.

### Security and support

A concern for any business owner operating in digital spaces is ensuring that operations are not just efficient but secure. One of the aspects that can make premium software attractive is robust cybersecurity protocols and integrated support services. In searching for open source software, it’s important to understand the extent to which developers have put security protocols in place, and how this affects company, employee, and customer safety.

This can be especially important when utilizing platforms that facilitate the sharing of documents and discussion of potentially sensitive company information. Many options on the market, including [Jitsi][12] and [BigBlueButton][13], are upfront about the security measures and encryptions on their software that often go beyond those on premium platforms. However, it’s equally important to make certain that employees themselves understand that their actions are as vital to security as the encryption. Be clear about what behavior can lead to phishing attacks that leave the business vulnerable to two-factor authentication bypasses, and how to safely share information through activities such as dynamic linking.

One of the most significant advantages that open source software holds over most premium products is access to a vibrant and supportive community. While there are core teams behind the software, there’s a spirit of collaboration and collective ownership to its development and continued growth. [LibreOffice][14] actively encourages its users to [help improve the product][15] through feedback and forums. This means users can often easily communicate with experts whenever issues arise and work together to solve problems, and ultimately make the product better in the future.

### Conclusion

Review how open source software can improve collaboration and fit into your onboarding procedures, and examine the potential for security and community support. And by using open source, you retain control over your data, assets, and workflow. In a world that is swiftly embracing remote practices, discovering the right tools now can give you a competitive advantage.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/business-tools

作者:[Adrian Johansen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ajohansen
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_WorkInPublic_4618517_1110_CS_A.png?itok=RwVrWArk (A chair in a field.)
[2]: https://opensource.com/article/20/3/open-source-working-home
[3]: https://taiga.io/
[4]: https://www.odoo.com/
[5]: https://www.openproject.org/
[6]: https://mattermost.com/
[7]: https://rocket.chat/
[8]: https://opensource.com/alternatives/slack
[9]: https://drawpile.net/
[10]: https://opensource.com/article/20/3/drawpile
[11]: https://nextcloud.com/
[12]: https://jitsi.org/
[13]: https://bigbluebutton.org/
[14]: https://www.libreoffice.org/
[15]: https://www.libreoffice.org/community/get-involved/

sources/talk/20200823 Open organizations through the ages.md
@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open organizations through the ages)
[#]: via: (https://opensource.com/open-organization/20/8/global-history-collaboration)
[#]: author: (Ron McFarland https://opensource.com/users/ron-mcfarland)

Open organizations through the ages
======
On a global timeline, extensive collaboration is still a relatively new
phenomenon. That could explain why we're still getting the hang of it.
![Alarm clocks with different time][1]

Consider the evolution of humankind. When we do, we recognize that _having global discussions_ and _acting on global decisions_ is a relatively new phenomenon—only 100 years old, give or take a few years. We're still learning _how_ to make global decisions and execute on them successfully.

Yet our ability to improve those globally focused practices and skills is critical to our continued survival. And open principles will be the keys to helping us learn them—as they have been throughout history.

In the [first part of this series][2], I reviewed four factors one might use to assess globalization, and I explained how those factors relate specifically to [open organization principles][3]. Here, I'd like to present a chronology of how those principles have influenced certain developments that have made the world feel more connected and have made _personal_ or _regional_ issues into _global_ issues.

This work draws on research by [Jeffrey D. Sachs][4], author of the book [_The Ages of Globalization_][5]. Sachs examines globalization from the genesis of humankind and argues that globalization has improved life and prosperity through the ages. He organizes human history into seven "ages," and examines the governance structure predominant in each. That structure determines how populations interact with each other (another way of assessing how socially inclusive they are). For Sachs, that inclusiveness is directly related to per capita GDP, or productivity per person. That productivity is where prosperity (or survival) is determined.

So let's look at the growth of globalization through the ages (I'll use Sachs' categorizations) and see where open organization principles began to take hold in early civilizations. In this piece, I'll discuss historical periods up to the beginning of the Industrial Revolution (the early 1800s).

### The Paleolithic Age (70,000‒10,000 BCE): The hunter/gatherer setting

According to Sachs, the history of globalization really begins at the dawn of humankind. And open organization principles are evident even then—though only in tight-knit groups of 25 to 30 members, called "bands," each very similar to "bottom-up" business teams today. Such bands resisted hierarchical organization, eventually connecting with other bands to form "clans" of around 150 people, then "mega-bands" of around 500 people, and finally "tribes" of around 1,500 people. But these groups never let go of that "band" concept (we can observe this in ancient ruins around the globe). Bands cooperated, but interactions were relatively weak (sometimes even warlike, as bands fought to protect territory). As bands' means of survival was primarily hunting and gathering, they lived a largely nomadic lifestyle.

### The Neolithic Age (10,000‒3,000 BCE): The ranching/farming setting

The advent of farming and ranching, Sachs says, marked this period of globalization. During that period, major segments of the human population started establishing permanent settlements, leading to a decline in the nomadic hunter/gatherer lifestyle, as agricultural developments allowed for more productivity per unit area. People could establish larger villages. With new agricultural techniques, ten individuals could survive on one square kilometer of land (compared to only one hunter/gatherer per square kilometer). Therefore, people were not forced to migrate to new areas to survive. Communities grew larger, and these larger communities set in motion new technical discoveries in metallurgy, the arts, numeric record keeping, ceramics, and even a writing system to record technical breakthroughs. In short, _sharing_ and _collaboration_ became keys to expanding know-how, evidence of open organization principles even many thousands of years ago.

Having global discussions and acting on global decisions is a relatively new phenomenon—only 100 years old, give or take a few years.

### The Equestrian Age (3,000‒1,000 BCE): The land travel by horse setting

During the Neolithic Age, communities began connecting with each other using horses for transportation, giving rise to another era of globalization—what Sachs calls The Equestrian Age. Domestication of animals took place almost exclusively in Eurasia and North Africa, including the use of donkeys, cattle, camels, and other animals (not just horses). That domestication was by far the most important factor in the economic development and globalization of this age. Animal husbandry was a major influence on farming, mining, manufacturing, transportation, communications, warfare tactics, and governance. As greater long-distance movement was now possible (routes were formed to and from the east and west), whole civilizations began to form. The Egyptians introduced a system of writing and documentation, as well as public administration, which unified dynasties within the region. This led to advances in scientific fields, including mathematics, astronomy, engineering, metallurgy, and medicine.

### The Classical Age (1,000 BCE‒1500 CE): An information, documentation, learning setting

According to Sachs, this era of globalization involves the globalization of politics—namely conquering wide regions and creating empires. This includes the empires of Assyria, Persia, Greece, Rome, India, and China, and later the Ottoman and Mongol empires. This age saw the spread of ideas, technology, institutional concepts, and infrastructural development on a continental scale. As a result, larger communities were developed, and there was a greater and broader level of collaboration, interaction, transparency, adaptability, and inclusivity than in the past. Through interaction between empires, better methods of growing food, raising farm animals, transporting goods, and fighting wars spread around the globe. Much of this knowledge spread through thousands of books published, distributed, and taught (formal, community schooling began in this era, as did several documentation practices). Global trade improved with the establishment of a navy to police and protect travel routes. Simply put, this was a setting of multinational governance with ever expanding collaboration, inclusivity, and larger community development.

### The Ocean Age (1500‒1800 CE): The long-distance sea travel and exploration setting

During this period of globalization, Sachs says, the Old World and the New World, isolated from each other since the Paleolithic Age, finally united through ever faster ocean-going vessels, leading to greater two-way exchange. Plants, animals—and, unfortunately, diseases and pathogens—trafficked between them. Travel from Europe to Asia (via the Cape of Good Hope) increased during this era. This age saw the rise of global capitalism, with the establishment of global-scale economic organizations, like the British East India Company (chartered in 1600) and the Dutch East India Company (chartered in 1602). Each supplied global markets with goods from remote areas. But while global trade produced great prosperity during this era, in many regions it also led to great cruelty (the exploitation of indigenous peoples, for example).

By the 1800s, communities were interacting on a scale more global than ever, and open organization principles were more influential than ever before. And yet this era still saw a significant deficit of overall inclusivity and collaboration. We should note that this era of globalization is relatively _recent_ in history when viewed on the timescale Sachs outlines in _The Ages of Globalization_.

How did we get from there to the current age? In the next article, I'll explore those developments.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/20/8/global-history-collaboration

作者:[Ron McFarland][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ron-mcfarland
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk (Alarm clocks with different time)
[2]: https://opensource.com/open-organization/20/7/globalization-history-open
[3]: https://opensource.com/open-organization/resources/open-org-definition
[4]: https://en.wikipedia.org/wiki/Jeffrey_Sachs
[5]: https://cup.columbia.edu/book/the-ages-of-globalization/9780231193740

sources/talk/20200824 Origin stories about Unix.md
@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Origin stories about Unix)
[#]: via: (https://opensource.com/article/20/8/unix-history)
[#]: author: (Jim Hall https://opensource.com/users/jim-hall)

Origin stories about Unix
======
Brian Kernighan, one of the original Unix gurus, shares his insights
into the origins of Unix and its associated technology.
![Old UNIX computer][1]

Brian W. Kernighan opens his book _Unix: A History and a Memoir_ with the line, "To understand how Unix happened, we have to understand Bell Labs, especially how it worked and the creative environment that it provided." And so begins a wonderful trip back in time, following the creation and development of early Unix with someone who was there.

You may recognize Brian Kernighan's name. He is the "K" in [AWK][2], the "K" in "K&R C" (he co-wrote the original "Kernighan and Ritchie" book about the C programming language), and he has authored and co-authored many books about Unix and technology. On my own bookshelf, I can find several of Kernighan's books, including _The Unix Programming Environment_ (with Rob Pike), _The AWK Programming Language_ (with Alfred Aho and Peter J. Weinberger), and _The C Programming Language_ (with Dennis M. Ritchie). And of course, his latest entry, _Unix: A History and a Memoir_.

I interviewed Brian about this most recent book. I think we spent as much time discussing the book as we did reminiscing about Unix and groff. Below are a few highlights of our conversation:

### JH: What prompted you to write this book?

BWK: I thought it would be nice to have a history of what happened at Bell Labs. Jon Gertner wrote a book, _The Idea Factory: Bell Labs and the Great Age of American Innovation_, that described the physical science work at Bell Labs. This was an authoritative work, very technical, and not something that I could do, but it was kind of the inspiration for this book.

There's also a book by James Gleick, _The Information: A History, a Theory, a Flood_, that isn't specific to Bell Labs, but it's very interesting. That was kind of an inspiration for this, too.

I originally wanted to write an academic history of the Labs, but I realized it was better to write something based on my own memories and the memories of those who were there at the time. So that's where the book came from.

### JH: What are a few stories from the book you'd like people to read about?

BWK: I think there are really two stories I'd like people to know about, and both of them are origin myths. I heard them afresh when Ken Thompson and I were at the [Vintage Computer Festival about a year ago][3].

One is the origin of Unix itself—how Bonnie, Ken's wife, went off on vacation for three weeks, just at the time that Ken thought he was about three weeks away from having a complete operating system. This was, of course, due to Ken's very competent programming abilities, and it was incredible he was able to pull it off. It was written entirely in assembly language and was really amazing work.

[Note: This story starts on page 33 in the book. I'll briefly relate it here. Thompson was working on "a disk scheduling algorithm that would try to maximize throughput on any disk," but particularly the PDP-7's very tall single-platter disk drive. In testing the algorithm, Thompson realized, "I was three weeks from an operating system." He broke down his work into three units—an editor, an assembler, and a kernel—and wrote one per week. And about that time, Bonnie took their son to visit Ken's parents in California, so Thompson had those three weeks to work undisturbed.]

And then there's the origin story for `grep`. Over the years, I'd gotten the story slightly wrong—I thought Ken had written `grep` entirely on demand. It was classic Ken that he had a great idea, a neat idea, a clean idea, and he was able to write it very quickly. Regular expressions (regex) were already present in the text editor, so really, he just pulled regex from the editor and turned it into a program.

[Note: This story starts on page 70 in the book. Doug McIlroy said, "Wouldn't it be great if we could look for things in files?" Thompson replied, "Let me think about it overnight," and the next morning presented McIlroy with the `grep` program he'd written.]
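
[The editor lineage also explains the name: in the `ed` editor, the command `g/re/p` means "globally match the regular expression `re` and print the matching lines," which is exactly what the standalone tool does. A quick illustration (the sample file and its path here are invented for the example):]

```shell
# Build a small sample file to search (hypothetical contents).
printf 'kernel\neditor\nassembler\nkernel panic\n' > /tmp/unix_words.txt

# grep prints every line matching the regular expression...
grep 'kernel' /tmp/unix_words.txt
# prints:
#   kernel
#   kernel panic

# ...just like the ed command it takes its name from:
#   $ ed /tmp/unix_words.txt
#   g/kernel/p
```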

### JH: What other stories did you not get to tell in the book?

BWK: I immediately think of the "Peter Weinberger's face" story! There were a lot of pranks based on having a picture of Peter's face pop up in random places. Someone attached a picture of Peter with magnets to the metal wall of a stairway. And there was a meeting once where Peter was up in front, not in the audience. And while he was talking, everyone in the audience held up a mask that had Peter's face printed on it.

[Note: The "Peter Weinberger's face" story starts on page 47 in the book. Spinroot also has an [archive of the prank][4] with examples.]

I talked to a lot of people from the Labs about the book. I would email people, and I would receive long replies with more stories than I could fit into the length or the narrative. Honestly, there's probably a whole other book that someone else could write just based on those stories. It's amazing how many people come forward with stories about Unix and running Unix on systems I haven't even heard of.

### A fantastic read

_Unix: A History and a Memoir_ is well-titled. Throughout the book, Kernighan shares details on the rich history of Unix, including background on Bell Labs, the spark of Unix with CTSS and Multics in 1969, and the first edition in 1971. Kernighan also provides his own reflection on how Unix came to be such a dominant platform, including notes on portability, Unix tools, the Unix Wars, and Unix descendants such as Minix, Linux, BSD, and Plan9. You will also find nuggets of information and great stories that fill in details around some of the everyday features of Unix.

At just over 180 pages, _Unix: A History and a Memoir_ is a fantastic read. If you are a fan of Linux, or any open source Unix, including the BSD versions, you will want to read this book.

_Unix: A History and a Memoir_ is available on [Amazon][5] in paperback and e-book formats. Published by Kindle Direct Publishing, October 2019.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/unix-history

作者:[Jim Hall][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jim-hall
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/retro_old_unix_computer.png?itok=SYAb2xoW (Old UNIX computer)
[2]: https://opensource.com/resources/what-awk
[3]: https://www.youtube.com/watch?v=EY6q5dv_B-o
[4]: https://spinroot.com/pico/pjw.html
[5]: https://www.amazon.com/dp/1695978552
@ -0,0 +1,102 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why we open sourced our security project)
[#]: via: (https://opensource.com/article/20/8/why-open-source)
[#]: author: (Mike Bursell https://opensource.com/users/mikecamel)

Why we open sourced our security project
======
It’s not just coding that we do in the open.
![A lock on the side of a building][1]

When Nathaniel McCallum and I embarked on the project that is now called [Enarx][2], we made one decision right at the beginning: the code for Enarx would be open source, a stance fully supported by our employer, Red Hat (see the [standard disclaimer][3] on my blog). All of it, and forever.

That's a decision that we've not regretted at any point, and it's something we stand behind. As soon as we had enough code for a demo and were ready to show it, we created a [repository on GitHub][4] and made it public. There's a very small exception, which is that there are some details of upcoming chip features that are shared with us under an NDA[1][5] where publishing any code we might write for them would be a breach of the NDA. But where this applies (which is rare), we are absolutely clear with the vendors that we intend to make the code open as soon as possible, and we lobby them to release details as early as they can (which may be earlier than they might prefer) so that more experts can look over both their designs and our code.

### Auditability and trust

This brings us to possibly the most important reasons for making Enarx open source: auditability and trust. Enarx is a security-related project, and I believe passionately that [security should be done in the open][6]. If anybody is actually going to trust their sensitive data, algorithms, and workloads to a piece of software, they want to be in a position where as many experts as possible have looked at it, scrutinised it, criticised it, and improved it, whether those experts are the people running the software, their employees, contractors, or (even better) the wider security community. The more people who check the code, the happier you should be to [trust it][7]. This is important for any piece of security software, but it is _vital_ for software such as Enarx, which is designed to protect your most sensitive workloads.

### Bug catching

There are bugs in Enarx. I know: I'm writing some of the code,[2][8] and I found one yesterday (which I'd put in), just as I was about to give a demo.[3][9] It is very, very difficult to write perfect code, and we know that if we make our source open, then more people can help us fix issues.

### Commonwealth

For Nathaniel and me, open source is an ethical issue, and we make no apologies for that. I think it's the same for most, if not all, of the team working on Enarx. This includes a number of Red Hat employees (see standard disclaimer), so it shouldn't come as a surprise, but we also have non-Red Hat contributors from a number of backgrounds. We feel that Enarx should be a Common Good and [contribute to the commonwealth][10] of intellectual property out there.

### More brainpower

Making something open source doesn't just make it easier to fix bugs: it can improve the quality of what you produce in general. The more brainpower you have to apply to the problem, the better your chances of making something great––assuming that the brainpower is applied efficiently (not always an easy task!). In a recent design meeting, one of the participants said towards the end, "I'm sure I could implement some of this, but I don't know a huge amount about this topic, and I'm worried that I'm not contributing to this discussion." In fact, they had contributed by asking questions and clarifying some points, and we assured them that we wanted to include experienced, senior developers for their expertise and knowledge and to pull out assumptions and validate the design, and not because we expected everybody to be experts in all parts of the project.
|
||||
|
||||
Having bright people involved in design and coding spreads expertise and knowledge and helps keep the work from becoming an insulated, isolated "ivory tower" construction, understood by few, and almost impossible to validate.
|
||||
|
||||
### Not just code
|
||||
|
||||
It's not just coding that we do in the open. We manage our architecture in the open, our design meetings, our protocol design, our design methodology,[4][11] our documentation, our bug tracking, our chat, our CI/CD processes: all of it is open. The one exception is our [vulnerability management][12] process, which needs the opportunity for confidential exposure for a limited time. Here is where you can find our resources:
|
||||
|
||||
* [Code][4]
|
||||
* [Wiki][13]
|
||||
* Design is on the wiki and [request for comments][14] repo
|
||||
* [Issues][15] and [pull requests][16]
|
||||
* [Chat][17] (thanks to [Rocket.chat][18]!)
|
||||
* CI/CD resources thanks to [Packet][19]!
|
||||
* [Stand-ups][20]
|
||||
|
||||
|
||||
|
||||
We also take diversity seriously, and the project contributors are subject to the [Contributor Covenant Code of Conduct][21].
|
||||
|
||||
In short, Enarx is an open project. I'm sure we could do better, and we'll strive for that, but our underlying principles are that open is good in general and vital for security. If you agree, please come and visit!
|
||||
|
||||
* * *
|
||||
|
||||
1. Non-disclosure agreement
|
||||
2. To the surprise of many of the team, including myself. At least it's not in Perl.
|
||||
3. I fixed it. Admittedly, after the demo.
|
||||
4. We've just moved to a sprint pattern, the details of which we designed and agreed to in the open.
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
_This article was originally published on [Alice, Eve, and Bob][22] and is adapted and reprinted with the author's permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/why-open-source
|
||||
|
||||
作者:[Mike Bursell][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mikecamel
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building)
|
||||
[2]: https://enarx.dev/
|
||||
[3]: https://aliceevebob.com/
|
||||
[4]: https://github.com/enarx
|
||||
[5]: tmp.PM1nWCfATC#1
|
||||
[6]: https://opensource.com/article/17/10/many-eyes
|
||||
[7]: https://aliceevebob.com/2019/06/18/trust-choosing-open-source/
|
||||
[8]: tmp.PM1nWCfATC#2
|
||||
[9]: tmp.PM1nWCfATC#3
|
||||
[10]: https://opensource.com/article/17/11/commonwealth-open-source
|
||||
[11]: tmp.PM1nWCfATC#4
|
||||
[12]: https://aliceevebob.com/2020/05/26/security-disclosure-or-vulnerability-management/
|
||||
[13]: https://github.com/enarx/enarx/wiki
|
||||
[14]: https://github.com/enarx/rfcs
|
||||
[15]: https://github.com/enarx/enarx/issues
|
||||
[16]: https://github.com/enarx/enarx/pulls
|
||||
[17]: https://chat.enarx.dev/
|
||||
[18]: https://rocket.chat/
|
||||
[19]: https://packet.com/
|
||||
[20]: https://github.com/enarx/enarx/wiki/How-to-contribute
|
||||
[21]: https://github.com/enarx/.github/blob/master/CODE_OF_CONDUCT.md
|
||||
[22]: https://aliceevebob.com/2020/07/28/why-enarx-is-open/
|
140
sources/talk/20200826 What is DNS and how does it work.md
Normal file
@ -0,0 +1,140 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is DNS and how does it work?)
[#]: via: (https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html)
[#]: author: (Keith Shaw and Josh Fruhlinger )

What is DNS and how does it work?
======

The Domain Name System resolves the names of internet sites to their underlying IP addresses, adding efficiency and even security in the process.

Thinkstock
The Domain Name System (DNS) is one of the foundations of the internet, yet most people outside of networking probably don’t realize they use it every day to do their jobs, check their email or waste time on their smartphones.

At its most basic, DNS is a directory of names that match with numbers. The numbers, in this case, are IP addresses, which computers use to communicate with each other. Most descriptions of DNS use the analogy of a phone book, which is fine for people over the age of 30 who know what a phone book is.

If you’re under 30, think of DNS like your smartphone’s contact list, which matches people’s names with their phone numbers and email addresses. Then multiply that contact list by everyone else on the planet.

### A brief history of DNS

When the internet was very, very small, it was easier for people to match specific IP addresses with specific computers, but that didn’t last for long as more devices and people joined the growing network. It's still possible to type a specific IP address into a browser to reach a website, but then, as now, people wanted an address made up of easy-to-remember words, of the sort that we would recognize as a domain name (like networkworld.com) today. In the 1970s and early '80s, those names and addresses were assigned by one person, [Elizabeth Feinler at the Stanford Research Institute][2], who maintained a master list of every internet-connected computer in a text file called [HOSTS.TXT][3].

This was obviously an untenable situation as the internet grew, not least because Feinler only handled requests before 6 p.m. California time and took time off for Christmas. In 1983, Paul Mockapetris, a researcher at USC, was tasked with coming up with a compromise among multiple suggestions for dealing with the problem. He basically ignored them all and developed his own system, which he dubbed DNS. While it has obviously changed quite a bit since then, at a fundamental level it still works the same way it did nearly 40 years ago.
### How DNS servers work

The DNS directory that matches names to numbers isn’t located all in one place in some dark corner of the internet. With [more than 332 million domain names listed at the end of 2017][4], a single directory would be very large indeed. Like the internet itself, the directory is distributed around the world, stored on domain name servers (generally referred to as DNS servers for short) that all communicate with each other on a very regular basis to provide updates and redundancy.

### Authoritative DNS servers vs. recursive DNS servers

When your computer wants to find the IP address associated with a domain name, it first makes its request to a recursive DNS server, also known as a recursive resolver. A recursive resolver is a server that is usually operated by an ISP or other third-party provider, and it knows which other DNS servers it needs to ask to resolve a site's name to its IP address. The servers that actually hold the needed information are called authoritative DNS servers.

### DNS servers and IP addresses

Each domain can correspond to more than one IP address. In fact, some sites have hundreds or more IP addresses that correspond with a single domain name. For example, the server your computer reaches for [www.google.com][5] is likely completely different from the server that someone in another country would reach by typing the same site name into their browser.

Another reason for the distributed nature of the directory is the amount of time it would take to get a response when you were looking for a site if there were only one location for the directory, shared among the millions, probably billions, of people also looking for information at the same time. That’s one long line to use the phone book.

### What is DNS caching?

To get around this problem, DNS information is shared among many servers. But information for sites visited recently is also cached locally on client computers. Chances are that you use google.com several times a day. Instead of your computer querying the DNS name server for the IP address of google.com every time, that information is saved on your computer so it doesn’t have to access a DNS server to resolve the name to its IP address. Additional caching can occur on the routers used to connect clients to the internet, as well as on the servers of the user’s internet service provider (ISP). With so much caching going on, the number of queries that actually make it to DNS name servers is a lot lower than it would seem.
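The caching behavior described above can be sketched as a small TTL-based cache. This is an illustrative toy (the class name, the lookup methods and the sample records are invented for the example), not how any real resolver is implemented:

```python
import time

class DnsCache:
    """Toy stub-resolver cache: stores name -> (address, expiry time)."""

    def __init__(self):
        self._entries = {}

    def put(self, name, address, ttl_seconds):
        # Each record is cached only until its TTL runs out.
        self._entries[name] = (address, time.monotonic() + ttl_seconds)

    def get(self, name):
        # Return the cached address, or None if absent or expired.
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._entries[name]  # stale: force a fresh upstream query
            return None
        return address

cache = DnsCache()
cache.put("google.com", "142.250.80.46", ttl_seconds=300)
print(cache.get("google.com"))   # cache hit: no DNS query needed
print(cache.get("example.org"))  # cache miss: would go out to a resolver
```

Every layer mentioned above (client, router, ISP resolver) keeps some variant of this table, which is why so few queries ever reach an authoritative server.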
### How do I find my DNS server?

Generally speaking, the DNS server you use will be established automatically by your network provider when you connect to the internet. If you want to see which servers are your primary nameservers (generally the recursive resolver, as described above), there are web utilities that can provide a host of information about your current network connection. [Browserleaks.com][6] is a good one, and it provides a lot of information, including your current DNS servers.

### Can I use 8.8.8.8 DNS?

It's important to keep in mind, though, that while your ISP will set a default DNS server, you're under no obligation to use it. Some users may have reason to avoid their ISP's DNS; for instance, some ISPs use their DNS servers to redirect requests for nonexistent addresses to [pages with advertising][7].

If you want an alternative, you can instead point your computer to a public DNS server that will act as a recursive resolver. One of the most prominent public DNS servers is Google's; its IP address is 8.8.8.8. Google's DNS services tend to be [fast][8], and while there are certain questions about the [ulterior motives Google has for offering the free service][9], it can't really get any more information from you than it already gets from Chrome. Google has a page with detailed instructions on how to [configure your computer or router][10] to connect to Google's DNS.

### How DNS adds efficiency

DNS is organized in a hierarchy that helps keep things running quickly and smoothly. To illustrate, let’s pretend that you wanted to visit networkworld.com.

The initial request for the IP address is made to a recursive resolver, as discussed above. The recursive resolver knows which other DNS servers it needs to ask to resolve the name of a site (networkworld.com) to its IP address. This search leads to a root server, which knows all the information about top-level domains, such as .com, .net, .org and all of the country domains like .cn (China) and .uk (United Kingdom). Root servers are located all around the world, so the system usually directs you to the closest one geographically.

Once the request reaches the correct root server, it goes to a top-level domain (TLD) name server, which stores the information for the second-level domain, the words used before you get to the .com, .org or .net (for example, that information for networkworld.com is “networkworld”). The request then goes to the domain's name server, which holds the information about the site and its IP address. Once the IP address is discovered, it is sent back to the client, which can now use it to visit the website. All of this takes mere milliseconds.
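The root -> TLD -> authoritative walk just described can be modeled roughly with nested dictionaries. All of the zone data and server names below are made up for illustration; a real resolver speaks the DNS wire protocol to real servers rather than reading from in-memory tables:

```python
# Toy DNS hierarchy: each level only knows whom to ask next.
ROOT = {"com": "tld-server-for-com"}
TLD = {"tld-server-for-com": {"networkworld.com": "ns.networkworld.com"}}
AUTHORITATIVE = {"ns.networkworld.com": {"networkworld.com": "151.101.2.165"}}

def resolve(domain):
    """Follow the delegation chain: root -> TLD -> authoritative server."""
    tld_label = domain.rsplit(".", 1)[-1]      # "com"
    tld_server = ROOT[tld_label]               # root refers us to the .com server
    auth_server = TLD[tld_server][domain]      # .com refers us to the domain's server
    return AUTHORITATIVE[auth_server][domain]  # authoritative server has the address

print(resolve("networkworld.com"))  # -> 151.101.2.165
```

Note how no single table holds the whole answer: each level hands back a referral, which is exactly what makes the real directory distributable.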
Because DNS has been working for the past 30-plus years, most people take it for granted. Security also wasn’t considered when building the system, so [hackers have taken full advantage of this][11], creating a variety of attacks.

### DNS reflection attacks

DNS reflection attacks can swamp victims with high-volume messages from DNS resolver servers. Attackers request large DNS files from all the open DNS resolvers they can find and do so using the spoofed IP address of the victim. When the resolvers respond, the victim receives a flood of unrequested DNS data that overwhelms their machines.

### DNS cache poisoning

[DNS cache poisoning][12] can divert users to malicious websites. Attackers manage to insert false address records into the DNS so that when a potential victim requests an address resolution for one of the poisoned sites, the DNS responds with the IP address for a different site, one controlled by the attacker. Once on these phony sites, victims may be tricked into giving up passwords or suffer malware downloads.

### DNS resource exhaustion

[DNS resource exhaustion][13] attacks can clog the DNS infrastructure of ISPs, blocking the ISP’s customers from reaching sites on the internet. Attackers do this by registering a domain name and using the victim’s name server as the domain’s authoritative server. So if a recursive resolver can’t supply the IP address associated with the site name, it will ask the victim's name server. Attackers generate large numbers of requests for their domain and toss in nonexistent subdomains to boot, which leads to a torrent of resolution requests being fired at the victim’s name server, overwhelming it.

### What is DNSSEC?

DNS Security Extensions (DNSSEC) is an effort to make communication among the various levels of servers involved in DNS lookups more secure. It was devised by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization in charge of the DNS system.

ICANN became aware of weaknesses in the communication between the DNS top-level, second-level and third-level directory servers that could allow attackers to hijack lookups. That would allow the attackers to respond to requests for lookups to legitimate sites with the IP address for malicious sites. These sites could upload malware to users or carry out phishing and pharming attacks.

DNSSEC addresses this by having each level of DNS server digitally sign its records, which ensures that the responses sent back to end users haven't been commandeered by attackers. This creates a chain of trust so that at each step in the lookup, the integrity of the request is validated.
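The chain-of-trust idea can be illustrated very loosely with hashes standing in for DS records: each parent zone publishes a digest of its child's key, so a forged key breaks the chain. This is a toy sketch with invented key material; real DNSSEC uses asymmetric signatures over record sets (DNSKEY/RRSIG/DS records), which this deliberately simplifies:

```python
import hashlib

def key_digest(public_key: bytes) -> str:
    # Stands in for a DS record: the parent publishes a hash of the child's key.
    return hashlib.sha256(public_key).hexdigest()

# Invented keys for the sketch; real zones use real public-key cryptography.
COM_KEY = b"com-zone-signing-key"
EXAMPLE_KEY = b"example.com-zone-signing-key"

# The root vouches for .com; .com vouches for example.com.
ROOT_DS = {"com": key_digest(COM_KEY)}
COM_DS = {"example.com": key_digest(EXAMPLE_KEY)}

def chain_is_trusted(presented_com_key, presented_example_key):
    """Each link is trusted only if the parent's published digest matches."""
    if key_digest(presented_com_key) != ROOT_DS["com"]:
        return False
    if key_digest(presented_example_key) != COM_DS["example.com"]:
        return False
    return True

print(chain_is_trusted(COM_KEY, EXAMPLE_KEY))      # genuine chain -> True
print(chain_is_trusted(COM_KEY, b"attacker-key"))  # broken link -> False
```

An attacker who substitutes a key anywhere along the path fails validation at that link, which is the property that defeats the hijacked-lookup scenario described above.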
In addition, DNSSEC can determine whether domain names exist, and if one doesn’t, it won’t let that fraudulent domain be delivered to innocent requesters seeking to have a domain name resolved.

As more domain names are created, more devices continue to join the network via internet of things devices and other “smart” systems, and [more sites migrate to IPv6][14], maintaining a healthy DNS ecosystem will be required. The growth of big data and analytics also [brings a greater need for DNS management][15].

### SIGRed: A wormable DNS flaw rears its head

The world got a good look recently at the sort of chaos weaknesses in DNS could cause with the discovery of a flaw in Windows DNS servers. The potential security hole, dubbed SIGRed, [requires a complex attack chain][16], but can exploit unpatched Windows DNS servers to potentially install and execute arbitrary malicious code on clients. And the exploit is "wormable," meaning that it can spread from computer to computer without human intervention. The vulnerability was considered alarming enough that U.S. federal agencies were [given only a few days to install patches][17].

### DNS over HTTPS: A new privacy landscape

As of this writing, DNS is on the verge of one of the biggest shifts in its history. Google and Mozilla, who together control the lion's share of the browser market, are encouraging a move toward [DNS over HTTPS][18], or DoH, in which DNS requests are encrypted by the same HTTPS protocol that already protects most web traffic. In Chrome's implementation, the browser checks to see if the DNS servers support DoH, and if they don't, it reroutes DNS requests to Google's 8.8.8.8.

It's a move not without controversy. Paul Vixie, who did much of the early work on the DNS protocol back in the 1980s, calls the move a "[disaster][19]" for security: corporate IT will have a much harder time monitoring or directing DoH traffic that traverses their networks, for instance. Still, Chrome is omnipresent and DoH will soon be turned on by default, so we'll see what the future holds.

_(Keith Shaw is a former senior editor for Network World and an award-winning writer, editor and product reviewer who has written for many publications and websites around the world.)_

_(Josh Fruhlinger is a writer and editor who lives in Los Angeles.)_

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3268449/what-is-dns-and-how-does-it-work.html

作者:[Keith Shaw and Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/newsletters/signup.html
[2]: https://www.internethalloffame.org/blog/2012/07/23/why-does-net-still-work-christmas-paul-mockapetris
[3]: https://tools.ietf.org/html/rfc608
[4]: http://www.verisign.com/en_US/domain-names/dnib/index.xhtml?section=cc-tlds
[5]: http://www.google.com
[6]: https://browserleaks.com/ip
[7]: https://www.networkworld.com/article/2246426/comcast-redirects-bad-urls-to-pages-with-advertising.html
[8]: https://www.networkworld.com/article/3194890/comparing-the-performance-of-popular-public-dns-providers.html
[9]: https://blog.dnsimple.com/2015/03/why-and-how-to-use-googles-public-dns/
[10]: https://developers.google.com/speed/public-dns/docs/using
[11]: https://www.networkworld.com/article/2838356/network-security/dns-is-ubiquitous-and-its-easily-abused-to-halt-service-or-steal-data.html
[12]: https://www.networkworld.com/article/2277316/tech-primers/tech-primers-how-dns-cache-poisoning-works.html
[13]: https://www.cloudmark.com/releases/docs/whitepapers/dns-resource-exhaustion-v01.pdf
[14]: https://www.networkworld.com/article/3254575/lan-wan/what-is-ipv6-and-why-aren-t-we-there-yet.html
[15]: http://social.dnsmadeeasy.com/blog/opinion/future-big-data-dns-analytics/
[16]: https://www.csoonline.com/article/3567188/wormable-dns-flaw-endangers-all-windows-servers.html
[17]: https://federalnewsnetwork.com/cybersecurity/2020/07/cisa-gives-agencies-a-day-to-remedy-windows-dns-server-vulnerability/
[18]: https://www.networkworld.com/article/3322023/dns-over-https-seeks-to-make-internet-use-more-private.html
[19]: https://www.theregister.com/2018/10/23/paul_vixie_slaps_doh_as_dns_privacy_feature_becomes_a_standard/
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world
@ -0,0 +1,98 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (What is IPv6, and why aren’t we there yet?)
[#]: via: (https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html)
[#]: author: (Keith Shaw and Josh Fruhlinger )

What is IPv6, and why aren’t we there yet?
======

IPv6 has been in the works since 1998 to address the shortfall of IP addresses available under IPv4, yet despite its efficiency and security advantages, adoption is still slow.

Thinkstock

For the most part, the dire warnings about running out of internet addresses have ceased because, slowly but surely, migration from the world of Internet Protocol Version 4 (IPv4) to IPv6 has begun, and software is in place to prevent the address apocalypse that many were predicting.

But before we see where we are and where we’re going with IPv6, let’s go back to the early days of internet addressing.
**[ Related: [IPv6 deployment guide][1] + [How to plan your migration to IPv6][2] ]**

### What is IPv6 and why is it important?

IPv6 is the latest version of the Internet Protocol, which identifies devices across the internet so they can be located. Every device that uses the internet is identified through its own IP address in order for internet communication to work. In that respect, it’s just like the street addresses and zip codes you need to know in order to mail a letter.

The previous version, IPv4, uses a 32-bit addressing scheme to support 4.3 billion devices, which was thought to be enough. However, the growth of the internet, personal computers, smartphones and now internet of things devices proves that the world needed more addresses.

Fortunately, the Internet Engineering Task Force (IETF) recognized this 20 years ago. In 1998 it created IPv6, which instead uses 128-bit addressing to support approximately 340 undecillion addresses (2 to the 128th power, if you like). Instead of the IPv4 address method of four sets of one- to three-digit numbers, IPv6 uses eight groups of four hexadecimal digits, separated by colons.
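Python's standard `ipaddress` module makes these format differences easy to see. The sample address below is drawn from the 2001:db8::/32 range reserved for documentation, chosen purely for illustration:

```python
import ipaddress

v4 = ipaddress.IPv4Address("192.0.2.1")    # documentation-range IPv4 address
v6 = ipaddress.IPv6Address("2001:db8::1")  # documentation-range IPv6 address

# IPv4 is a 32-bit space; IPv6 is a 128-bit space.
print(v4.max_prefixlen)  # 32
print(v6.max_prefixlen)  # 128

# "::" compresses runs of zero groups; the exploded form shows all
# eight colon-separated groups of four hexadecimal digits.
print(v6.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001

# Total number of IPv6 addresses: 2**128, roughly 3.4 x 10**38.
print(2 ** v6.max_prefixlen)
```

That last number (about 340 undecillion) is where the "we will not run out" claim comes from.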
### What are the benefits of IPv6?

In its work, the IETF included enhancements to IPv6 compared with IPv4. The IPv6 protocol can handle packets more efficiently, improve performance and increase security. It enables internet service providers to reduce the size of their routing tables by making them more hierarchical.

### Network address translation (NAT) and IPv6

Adoption of IPv6 has been delayed in part due to network address translation (NAT), which takes private IP addresses and turns them into public IP addresses. That way a corporate machine with a private IP address can send packets to and receive packets from machines located outside the private network that have public IP addresses.

Without NAT, large corporations with thousands or tens of thousands of computers would devour enormous quantities of public IPv4 addresses if they wanted to communicate with the outside world. But those IPv4 addresses are limited and nearing exhaustion to the point of having to be rationed.

NAT helps alleviate the problem. With NAT, thousands of privately addressed computers can be presented to the public internet by a NAT machine such as a firewall or router.

The way NAT works is that when a corporate computer with a private IP address sends a packet to a public IP address outside the corporate network, it first goes to the NAT device. The NAT notes the packet’s source and destination addresses in a translation table.

The NAT changes the source address of the packet to the public-facing address of the NAT device and sends it along to the external destination. When a reply packet arrives, the NAT translates the destination address back to the private IP address of the computer that initiated the communication. This way, a single public IP address can represent multiple privately addressed computers.
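The translation-table bookkeeping just described can be sketched in a few lines. This is a simplified model of port-based NAT with invented addresses from the documentation ranges, not an implementation of any real device:

```python
class ToyNat:
    """Minimal port-based NAT: maps (private_ip, private_port) to a public port."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000  # arbitrary start of the public port pool
        self.out_table = {}     # (private_ip, private_port) -> public_port
        self.in_table = {}      # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.out_table:  # allocate a public port on first use
            self.out_table[key] = self.next_port
            self.in_table[self.next_port] = key
            self.next_port += 1
        # The packet leaves with the NAT's public source address and port.
        return self.public_ip, self.out_table[key]

    def inbound(self, public_port):
        # A reply to the public port is forwarded to the original private host.
        return self.in_table[public_port]

nat = ToyNat("203.0.113.5")
print(nat.outbound("10.0.0.7", 51000))  # ('203.0.113.5', 40000)
print(nat.outbound("10.0.0.8", 51000))  # ('203.0.113.5', 40001)
print(nat.inbound(40000))               # ('10.0.0.7', 51000)
```

Note how the two inside hosts share one public address, distinguished only by the allocated port, which is exactly why NAT stretched the IPv4 space for so long.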
### Who is deploying IPv6?

Carrier networks and ISPs have been the first group to start deploying IPv6 on their networks, with mobile networks leading the charge. For example, T-Mobile USA has more than 90% of its traffic going over IPv6, with Verizon Wireless close behind at 82.25%. Comcast and AT&T have their networks at 63% and 65%, respectively, according to the industry group [World IPv6 Launch][3].

Major websites are following suit: just under 30% of the Alexa Top 1000 websites are currently reachable over IPv6, World IPv6 Launch says.

Enterprises are trailing in deployment, with slightly under one-fourth of enterprises advertising IPv6 prefixes, according to the Internet Society’s [“State of IPv6 Deployment 2017” report][4]. Complexity, costs and the time needed to complete the transition are all reasons given. In addition, some projects have been delayed due to software compatibility. For example, a [January 2017 report][5] said a bug in Windows 10 was “undermining Microsoft’s efforts to roll out an IPv6-only network at its Seattle headquarters.”

### When will more deployments occur?

The Internet Society said the price of IPv4 addresses will peak in 2018, and then prices will drop after IPv6 deployment passes the 50% mark. Currently, [according to Google][6], the world has 20% to 22% IPv6 adoption, but in the U.S. it’s about 32%.

As the price of IPv4 addresses begins to drop, the Internet Society suggests that enterprises sell off their existing IPv4 addresses to help fund IPv6 deployment. The Massachusetts Institute of Technology has done this, according to [a note posted on GitHub][7]. The university concluded that 8 million of its IPv4 addresses were “excess” and could be sold without impacting current or future needs, since it also holds 20 nonillion IPv6 addresses. (A nonillion is the numeral one followed by 30 zeroes.)

In addition, as more deployments occur, more companies will start charging for the use of IPv4 addresses while providing IPv6 services for free. [UK-based ISP Mythic Beasts][8] says “IPv6 connectivity comes as standard,” while “IPv4 connectivity is an optional extra.”

### When will IPv4 be “shut off”?

Most of the world [“ran out” of new IPv4 addresses][9] between 2011 and 2018, but we won’t completely be out of them, as IPv4 addresses get sold and reused (as mentioned earlier) and any leftover addresses will be used for IPv6 transitions.

There’s no official switch-off date, so people shouldn’t be worried that their internet access will suddenly go away one day. As more networks transition, more content sites support IPv6 and more end users upgrade their equipment for IPv6 capabilities, the world will slowly move away from IPv4.

### Why is there no IPv5?

There was an IPv5, also known as Internet Stream Protocol and abbreviated simply as ST. It was designed for connection-oriented communications across IP networks with the intent of supporting voice and video.

It was successful at that task and was used experimentally. One shortcoming that undermined its popular use was its 32-bit address scheme, the same scheme used by IPv4. As a result, it had the same problem that IPv4 had: a limited number of possible IP addresses. That led to the development and eventual adoption of IPv6. Even though IPv5 was never adopted publicly, it had used up the name IPv5.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html

作者:[Keith Shaw and Josh Fruhlinger][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3235805/lan-wan/ipv6-deployment-guide.html#tk.nww-fsb
[2]: https://www.networkworld.com/article/3214388/lan-wan/how-to-plan-your-migration-to-ipv6.html#tk.nww-fsb
[3]: http://www.worldipv6launch.org/measurements/
[4]: https://www.internetsociety.org/resources/doc/2017/state-of-ipv6-deployment-2017/
[5]: https://www.theregister.co.uk/2017/01/19/windows_10_bug_undercuts_ipv6_rollout/
[6]: https://www.google.com/intl/en/ipv6/statistics.html
[7]: https://gist.github.com/simonster/e22e50cd52b7dffcf5a4db2b8ea4cce0
[8]: https://www.mythic-beasts.com/sales/ipv6
[9]: https://ipv4.potaroo.net/
[10]: https://www.facebook.com/NetworkWorld/
[11]: https://www.linkedin.com/company/network-world
@ -0,0 +1,60 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Information could be half the world's mass by 2245, says researcher)
[#]: via: (https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Information could be half the world's mass by 2245, says researcher
======
Because of the amount of energy and resources used to create and store digital information, the data should be considered physical, and not just invisible ones and zeroes, according to one theoretical physicist.
Luza Studios / Getty Images

Digital content should be considered a fifth state of matter, along with gas, liquid, plasma and solid, suggests one university scholar.

Because of the energy and resources used to create, store and distribute data physically and digitally, data has evolved and should now be considered as mass, according to Melvin Vopson, a senior lecturer at the U.K.'s University of Portsmouth and author of an article, "[The information catastrophe][1]," published in the journal AIP Advances.

Vopson also claims digital bits are on a course to overwhelm the planet and will eventually outnumber atoms.

The idea of assigning mass to digital information builds off some existing data points. Vopson cites an IBM estimate that finds data is created at a rate of 2.5 quintillion bytes every day. He also factors in data storage densities of more than 1 terabit per inch to compare the size of a bit to the size of an atom.

Presuming 50% annual growth in data generation, "the number of bits would equal the number of atoms on Earth in approximately 150 years," according to a [media release][2] announcing Vopson's research.
"It would be approximately 130 years until the power needed to sustain digital information creation would equal all the power currently produced on planet Earth, and by 2245, half of Earth's mass would be converted to digital information mass," the release reads.
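Those projections follow from straightforward compounding. A rough sketch reproduces the crossover point; the daily-creation figure comes from the article, while the atom count is a commonly cited estimate, not from the source:

```python
BITS_PER_DAY = 2.5e18 * 8        # IBM estimate cited in the article: 2.5 quintillion bytes/day
ATOMS_ON_EARTH = 1.33e50         # commonly cited estimate (assumption, not from the article)
GROWTH = 1.5                     # Vopson's assumed 50% annual growth in data creation

years, total, rate = 0, 0.0, BITS_PER_DAY * 365
while total < ATOMS_ON_EARTH:    # accumulate bits year by year until they outnumber atoms
    total += rate
    rate *= GROWTH
    years += 1

print(years)  # on the order of 160 years, in line with the article's "approximately 150"
```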

The COVID-19 pandemic is increasing the rate of digital data creation and accelerating this process, Vopson adds.

He warns of an impending saturation point: "Even assuming that future technological progress brings the bit size down to sizes closer to the atom itself, this volume of digital information will take up more than the size of the planet, leading to what we define as the information catastrophe," Vopson writes in the [paper][3].

"We are literally changing the planet bit by bit, and it is an invisible crisis," says Vopson, a former R&D scientist at Seagate Technology.

Vopson isn't alone in exploring the idea that information isn't simply imperceptible ones and zeroes. According to the release, Vopson draws on the mass-energy comparisons in Einstein's theory of general relativity; the work of Rolf Landauer, who applied the laws of thermodynamics to information; and the work of Claude Shannon, the inventor of the digital bit.
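Landauer's principle is also what lets a single bit be assigned a concrete mass. A back-of-the-envelope sketch, assuming room temperature; the resulting per-bit figure matches the one used in Vopson's earlier mass-energy-information work, not a number stated in this article:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K
c = 2.99792458e8     # speed of light, m/s

energy_per_bit = k_B * T * math.log(2)   # Landauer limit: minimum energy to erase one bit
mass_per_bit = energy_per_bit / c**2     # E = mc^2 turns that energy into mass

print(f"{mass_per_bit:.2e} kg")  # ~3.19e-38 kg per bit
```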

"When one brings information content into existing physical theories, it is almost like an extra dimension to everything in physics," Vopson says.

With a growth rate that seems unstoppable, digital information production "will consume most of the planetary power capacity, leading to ethical and environmental concerns," his paper concludes.

Interestingly – and a bit more out there – Vopson also suggests that if, as he projects, the future mass of the planet is made up predominantly of bits of information, and there exists enough power created to do it (not certain), then "one could envisage a future world mostly computer simulated and dominated by digital bits and computer code," he writes.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570438/information-could-be-half-the-worlds-mass-by-2245-says-researcher.html

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://aip.scitation.org/doi/10.1063/5.0019941
[2]: https://publishing.aip.org/publications/latest-content/digital-content-on-track-to-equal-half-earths-mass-by-2245/
[3]: https://aip.scitation.org/doi/full/10.1063/5.0019941
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
@ -0,0 +1,95 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Why Comcast open sourced its DNS management tool)
[#]: via: (https://opensource.com/article/20/9/open-source-dns)
[#]: author: (Paul Cleary https://opensource.com/users/pauljamescleary)

Why Comcast open sourced its DNS management tool
======
This open source DNS management tool was built by and for the telecom giant, but is establishing itself in its own right and welcoming more contributors.
![An intersection of pipes.][1]

Adoption of [DevOps][2] practices at Comcast led to increased automation and configuration of infrastructure that supports applications, back-office, data centers, and our network. These practices require teams to move fast and be self-reliant. Infrastructure is constantly turned upside down, with network traffic continually moved around it. Good DNS record management is critical to support this level of autonomy and automation, but how can a large, diverse enterprise move quickly while safely governing its DNS assets?

### Challenge

Prior to 2016, DNS record management was mostly done through an online ticketing system—users would submit tickets for DNS changes that were manually reviewed and implemented by a separate team of DNS technicians. This system frequently required manual intervention for many of the DNS requests, which was time-consuming.

Turnaround times for DNS changes were in hours, which is not suitable for infrastructure automation. Large Internet companies can manage millions of DNS records, making it practically impossible for DNS technicians to certify the correctness of the thousands of DNS updates being requested daily. This increased the possibility of an inadvertent errant update to a critical DNS record that ultimately would lead to a downtime event.

In addition, engineering teams are intimately familiar with their DNS needs—much more so than a single group of DNS technicians serving an entire enterprise. So, we needed to enable engineering teams to self-service their own DNS records, implement changes quickly (in seconds), and at the same time, make sure all changes are done safely.

### Solution

VinylDNS was built at Comcast and subsequently open sourced to empower engineering teams to automate as they please while providing the safety and administrative controls demanded by DNS operators and the Comcast Security team.

### Security as a way of life

VinylDNS is all about automation and enhanced security. At Comcast, the VinylDNS team worked in close coordination with both the DNS and engineering teams at Comcast, as well as the security team, to meet stringent engineering and security requirements. An incredible array of access controls was implemented that gives extreme flexibility to both DNS operators and engineering teams to control their DNS assets.

Access controls implemented at the DNS zone level allow any team to control who can make updates to their DNS zones. When a DNS zone is registered and authorized to a VinylDNS group, only members of that group can make changes to DNS records in that DNS zone. In addition, access-list (ACL) rules provide extreme flexibility to allow other VinylDNS users to manage records in that zone. These ACL rules can be defined using regular expression masks or classless inter-domain routing (CIDR) rules and DNS record types that lock down access to specific users and groups to certain records in specific DNS zones.
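To make the ACL model concrete, here is a minimal sketch of how such a rule might be evaluated. The field names and rule shape are illustrative assumptions for this article, not VinylDNS's actual schema:

```python
import ipaddress
import re

def acl_allows(rule, user, record_name, record_type, record_ip=None):
    """Evaluate one hypothetical ACL rule against a requested record change."""
    if user not in rule["users"]:
        return False                     # rule only grants access to listed users
    if record_type not in rule["record_types"]:
        return False                     # rule is limited to specific DNS record types
    if rule.get("fqdn_mask") and not re.fullmatch(rule["fqdn_mask"], record_name):
        return False                     # regular-expression mask on the record name
    if rule.get("cidr") and record_ip is not None:
        if ipaddress.ip_address(record_ip) not in ipaddress.ip_network(rule["cidr"]):
            return False                 # CIDR rule restricting which addresses may be set
    return True

# Hypothetical rule: alice may manage A/AAAA records matching dev-*.example.com
# whose addresses fall inside 10.0.0.0/8.
rule = {
    "users": {"alice"},
    "record_types": {"A", "AAAA"},
    "fqdn_mask": r"dev-.*\.example\.com",
    "cidr": "10.0.0.0/8",
}
print(acl_allows(rule, "alice", "dev-web1.example.com", "A", "10.1.2.3"))  # True
print(acl_allows(rule, "alice", "prod.example.com", "A", "10.1.2.3"))      # False
```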

### Meeting the demands of automation

A [representational state transfer (REST) API][3] was built along with the system. This uses request signing to help eliminate man-in-the-middle attacks. Once the engineering teams at Comcast caught wind of the kind of automation afforded by VinylDNS, many began building out tooling to integrate directly with VinylDNS via its API. It wasn't long before most of them were using organically developed tooling integrated with the VinylDNS API to support their DNS needs.
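Request signing of this kind generally works by computing an HMAC over a canonical form of the request with a shared secret, so a request altered in transit no longer verifies. A generic sketch under that assumption; VinylDNS's actual signing scheme may differ in its canonicalization details:

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> str:
    # Canonicalize the parts of the request that must not be tampered with,
    # then HMAC them with the secret shared between client and server.
    canonical = "\n".join([method, path, hashlib.sha256(body).hexdigest()])
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

secret = b"shared-secret"  # hypothetical credential, for illustration only
sig = sign_request(secret, "POST", "/zones/42/recordsets", b'{"name":"www"}')

# The server recomputes the signature and rejects the request on mismatch;
# an attacker who alters the body cannot produce a valid signature.
tampered = sign_request(secret, "POST", "/zones/42/recordsets", b'{"name":"evil"}')
print(sig != tampered)  # True
```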

### Performing at large enterprise scale

Very quickly, VinylDNS was managing a million DNS records and thousands of DNS zones, and supporting hundreds of engineers. As we sought to expand VinylDNS to support the rest of Comcast, we recognized some challenges.

  1. Certain DNS records were off-limits, deemed too critical to manage in any way other than by hand.
  2. The ACL rule model, while flexible, would be impossible to set up and maintain across the entirety of Comcast's DNS footprint (which has millions of DNS zones, and hundreds of millions of DNS records).
  3. Many DNS domains are considered "universal" and not locked down to a single group. This holds true for reverse zones, as IP space can often be freely assigned to anyone.
  4. Certain DNS change requests still require a manual review and approval, i.e., you cannot truly automate everything.
  5. Some teams that provision a DNS record are not the same engineers responsible for its lifecycle. The engineers that ultimately decommission a DNS record might be unknown at the time of creation.
  6. Certain teams require DNS changes to be scheduled at some point in the future. For example, maintenance may be done off-hours, and the employee doing the maintenance may not have access to VinylDNS.

To address these issues, VinylDNS added more access controls and features. Shared zones allow universal access while maintaining security via record ownership. Record ownership ensures that the party who creates a DNS record is the only one that can manage that record. This feature alone allowed us to move much of the DNS reverse space into VinylDNS.

Manual review was added to support tighter governance on certain DNS zones and records. For example, a sensitive DNS zone might demand review before implementing changes, as opposed to having all changes immediately applied.

High-value domains support was added to block VinylDNS from ever being able to update certain DNS records. High-value DNS records like [www.comcast.com][4], for example, are impossible to manage via VinylDNS and require extreme governance that can't be accomplished via an automation platform.

Global ACLs were added to support situations where teams that created DNS records were not responsible for the maintenance and decommissioning of those DNS records. This allowed overrides for certain groups by fully qualified domain name (FQDN) and IP address for certain DNS domains.

Finally, scheduled changes allow users to schedule a DNS change for a future time.

### Results

VinylDNS now governs most of Comcast's internal DNS space, managing millions of DNS records across thousands of DNS zones, and supporting thousands of engineers. In addition, we leverage integration with a wide array of tools and programming languages, including Java, Python, Go, and Ruby (most of which are open source).

### Toward the future

There are several opportunities for additional feature development, which Comcast has planned as part of its ongoing evolution of the platform. The same level of access controls and governance is needed for DNS assets managed in public cloud settings. In addition, we are looking into the ability to manage DNS zones (create and delete), which is required for IPv6 reverse zones. Finally, we are looking to create a powerful admin experience for our DNS operators who are looking to take advantage of the data that lives in the VinylDNS database.

### Opening up

[VinylDNS][5] is an open source project released and managed by [Comcast Open Source][6]. VinylDNS and its accompanying ecosystem were built by engineers in several organizations across Comcast, leveraging our inner source program. It is free for use, licensed under Apache License 2.0. We welcome all contributors, from code to bugs to feature requests, from new projects to project ideas. You can [contact our team on Gitter][7].

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/9/open-source-dns

作者:[Paul Cleary][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/pauljamescleary
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe (An intersection of pipes.)
[2]: https://opensource.com/resources/devops
[3]: https://www.redhat.com/en/topics/api/what-is-a-rest-api
[4]: http://www.comcast.com
[5]: https://www.vinyldns.io
[6]: https://comcast.github.io/
[7]: https://gitter.im/vinyldns/vinyldns
@ -1,140 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (FreeFileSync: Open Source File Synchronization Tool)
[#]: via: (https://itsfoss.com/freefilesync/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

FreeFileSync: Open Source File Synchronization Tool
======

_**Brief: FreeFileSync is an open-source folder comparison and sync tool with which you can back up your data to an external disk, a cloud service like Google Drive or any other storage path.**_

### FreeFileSync: A Free & Open-Source Tool To Sync Files

![][1]

[FreeFileSync][2] is an impressive open-source tool that can help you back up your data to a different location.

This different location can be an external USB disk, Google Drive or any of your cloud storage locations using **SFTP or FTP** connections.

You might have read our tutorial on [how to use Google Drive on Linux][3] before. Unfortunately, there's no proper FOSS solution to use Google Drive natively on Linux. There is [Insync][4], but it is premium, non-open-source software.

FreeFileSync can be used to sync files with your Google Drive account. In fact, I'm using it to sync my files to Google Drive and to a separate hard drive.

### Features of FreeFileSync

![][5]

Even though the UI of FreeFileSync might look old school, it offers a ton of useful features for average users and advanced users as well.

I'll highlight all the features I can here:

  * Cross-platform support (Windows, macOS & Linux)
  * Compare folders before synchronizing
  * Supports Google Drive, [SFTP][6], and FTP connections
  * Offers the ability to sync your files on a different storage path (or an external storage device)
  * Multiple synchronization options available (Update files to the target from source or Mirror the files between target and source)
  * Two-way synchronization supported (changes will be synced if there's any modification on the target folder or the source folder)
  * Version control available for advanced users
  * Real-Time Sync option available
  * Ability to schedule batch jobs
  * Get notified via email when sync completes (paid)
  * Portable edition (paid)
  * Parallel file copy (paid)

So, if you take a look at the features it offers, it's not just any ordinary sync tool but offers so much more for free.

Also, to give you an idea, you can also tweak how to compare the files before syncing them. For instance, you can compare the file content / file time or simply compare the file size of both source and target folder.

![][7]

You also get numerous synchronization options to mirror or update your data. Here's how it looks:

![][8]

However, it does give you the option of a donation key, which unlocks some special features like the ability to notify you via email when the sync completes and so on.

Here's what's different between the free and paid versions:

![][9]

So, most of the essential features are available for free. The premium features are mostly for advanced users and, of course, for those who want to support the project (please do if you find it useful).

Also, do note that the donation edition can be used by a single user on up to 3 devices. So, that is definitely not bad!

### Installing FreeFileSync on Linux

You can simply head over to its [official download page][10] and grab the **tar.gz** file for Linux. If you like, you can download the source as well.

![][11]

Next, you just need to extract the archive and run the executable file to get started (as shown in the image above).

[Download FreeFileSync][2]

### How To Get Started With FreeFileSync?

While I haven't yet succeeded in creating an automatic sync job, it is pretty easy to use.

The [official documentation][12] should be more than enough to get what you want using the software.

But, just to give you a head start, here are a few things that you should keep in mind.

![][13]

As you can see in the screenshot above, you just have to select a source folder and the target folder to sync. You can choose a local folder or a cloud storage location.

Once you do that, you need to tweak the type of folder comparison you want (usually the file time & size) for the synchronization process. On the right side, you can tweak the type of sync you want to perform.

#### Types of synchronization in FreeFileSync

When you select the **"Update" method for sync**, it simply copies your new data from the source folder to the target folder. So, even if you delete something from your source folder, it won't get deleted on your target folder.

In case you want the target folder to hold the same copies of the files as your source folder, you can choose the **"Mirror" synchronization method**. Here, if you delete something from your source, it gets deleted from your target folder as well.

There's also a **"Two-way" sync method** which detects changes on both source and target folder (instead of monitoring just the source folder). So, if you make any changes on the source/target folder, the modification will be synchronized.

For more advanced usage, I suggest you refer to the [documentation][12] available.

### Wrapping Up

Another open source file synchronization tool you might want to look at is [Syncthing][14].

FreeFileSync is a pretty underrated folder comparison and sync tool available for Linux users who utilize Google Drive, SFTP, or FTP connections along with separate storage locations for backup.

And all of that with cross-platform support for Windows, macOS, and Linux, available for free.

Isn't that exciting? Let me know your thoughts on FreeFileSync in the comments down below.

--------------------------------------------------------------------------------

via: https://itsfoss.com/freefilesync/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/free-file-sync.jpg?ssl=1
[2]: https://freefilesync.org/
[3]: https://itsfoss.com/use-google-drive-linux/
[4]: https://itsfoss.com/recommends/insync/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/FreeFileSync.jpg?ssl=1
[6]: https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-comparison.png?ssl=1
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-synchronization.png?ssl=1
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/free-file-sync-donation-edition.jpg?ssl=1
[10]: https://freefilesync.org/download.php
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-run.jpg?ssl=1
[12]: https://freefilesync.org/manual.php
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/freefilesync-tips.jpg?ssl=1
[14]: https://itsfoss.com/syncthing/
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: (Sky0Master)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,741 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (Yufei-Yan)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An example of very lightweight RESTful web services in Java)
[#]: via: (https://opensource.com/article/20/7/restful-services-java)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

An example of very lightweight RESTful web services in Java
======
Explore lightweight RESTful services in Java through a full code example to manage a book collection.
![Coding on a computer][1]

Web services, in one form or another, have been around for more than two decades. For example, [XML-RPC services][2] appeared in the late 1990s, followed shortly by ones written in the SOAP offshoot. Services in the [REST architectural style][3] also made the scene about two decades ago, soon after the XML-RPC and SOAP trailblazers. [REST][4]-style (hereafter, Restful) services now dominate in popular sites such as eBay, Facebook, and Twitter. Despite the alternatives to web services for distributed computing (e.g., web sockets, microservices, and new frameworks for remote-procedure calls), Restful web services remain attractive for several reasons:

  * Restful services build upon existing infrastructure and protocols, in particular, web servers and the HTTP/HTTPS protocols. An organization that has HTML-based websites can readily add web services for clients interested more in the data and underlying functionality than in the HTML presentation. Amazon, for example, has pioneered making the same information and functionality available through both websites and web services, either SOAP-based or Restful.

  * Restful services treat HTTP as an API, thereby avoiding the complicated software layering that has come to characterize the SOAP-based approach to web services. For example, the Restful API supports the standard CRUD (Create-Read-Update-Delete) operations through the HTTP verbs POST-GET-PUT-DELETE, respectively; HTTP status codes inform a requester whether a request succeeded or why it failed.

  * Restful web services can be as simple or complicated as needed. Restful is a style—indeed, a very flexible one—rather than a set of prescriptions about how services should be designed and structured. (The attendant downside is that it may be hard to determine what does _not_ count as a Restful service.)

  * For a consumer or client, Restful web services are language- and platform-neutral. The client makes requests in HTTP(S) and receives text responses in a format suitable for modern data interchange (e.g., JSON).

  * Almost every general-purpose programming language has at least adequate (and often strong) support for HTTP/HTTPS, which means that web-service clients can be written in those languages.

This article explores lightweight Restful services in Java through a full code example.

### The Restful novels web service

The Restful novels web service consists of three programmer-defined classes:

  * The `Novel` class represents a novel with just three properties: a machine-generated ID, an author, and a title. The properties could be expanded for more realism, but I want to keep this example simple.

  * The `Novels` class consists of utilities for various tasks: converting a plain-text encoding of a `Novel` or a list of them into XML or JSON; supporting the CRUD operations on the novels collection; and initializing the collection from data stored in a file. The `Novels` class mediates between `Novel` instances and the servlet.

  * The `NovelsServlet` class derives from `HttpServlet`, a sturdy and flexible piece of software that has been around since the very early enterprise Java of the late 1990s. The servlet acts as an HTTP endpoint for client CRUD requests. The servlet code focuses on processing client requests and generating the appropriate responses, leaving the devilish details to utilities in the `Novels` class.

Some Java frameworks, such as Jersey (JAX-RS) and Restlet, are designed for Restful services. Nonetheless, the `HttpServlet` on its own provides a lightweight, flexible, powerful, and well-tested API for delivering such services. I'll demonstrate this with the novels example.

### Deploy the novels web service

Deploying the novels web service requires a web server, of course. My choice is [Tomcat][5], but the service should work (famous last words!) if it's hosted on, for example, Jetty or even a Java Application Server. The code and a README that summarizes how to install Tomcat are [available on my website][6]. There is also a documented Apache Ant script that builds the novels service (or any other service or website) and deploys it under Tomcat or the equivalent.

Tomcat is available for download from its [website][7]. Once you install it locally, let `TOMCAT_HOME` be the install directory. There are two subdirectories of immediate interest:

  * The `TOMCAT_HOME/bin` directory contains startup and stop scripts for Unix-like systems (`startup.sh` and `shutdown.sh`) and Windows (`startup.bat` and `shutdown.bat`). Tomcat runs as a Java application. The web server's servlet container is named Catalina. (In Jetty, the web server and container have the same name.) Once Tomcat starts, enter `http://localhost:8080/` in a browser to see extensive documentation, including examples.

  * The `TOMCAT_HOME/webapps` directory is the default for deployed websites and web services. The straightforward way to deploy a website or web service is to copy a JAR file with a `.war` extension (hence, a WAR file) to `TOMCAT_HOME/webapps` or a subdirectory thereof. Tomcat then unpacks the WAR file into its own directory. For example, Tomcat would unpack `novels.war` into a subdirectory named `novels`, leaving `novels.war` as-is. A website or service can be removed by deleting the WAR file and updated by overwriting the WAR file with a new version. By the way, the first step in debugging a website or service is to check that Tomcat has unpacked the WAR file; if not, the site or service was not published because of a fatal error in the code or configuration.

  * Because Tomcat listens by default on port 8080 for HTTP requests, a request URL for Tomcat on the local machine begins:

```
`http://localhost:8080/`
```

Access a programmer-deployed WAR file by adding the WAR file's name but without the `.war` extension:

```
`http://localhost:8080/novels/`
```

If the service was deployed in a subdirectory (e.g., `myapps`) of `TOMCAT_HOME`, this would be reflected in the URL:

```
`http://localhost:8080/myapps/novels/`
```

I'll offer more details about this in the testing section near the end of the article.

As noted, the ZIP file on my homepage contains an Ant script that compiles and deploys a website or service. (A copy of `novels.war` is also included in the ZIP file.) For the novels example, a sample command (with `%` as the command-line prompt) is:

```
`% ant -Dwar.name=novels deploy`
```

This command compiles Java source files and then builds a deployable file named `novels.war`, leaves this file in the current directory, and copies it to `TOMCAT_HOME/webapps`. If all goes well, a `GET` request (using a browser or a command-line utility, such as `curl`) serves as a first test:

```
`% curl http://localhost:8080/novels/`
```

Tomcat is configured, by default, for _hot deploys_: the web server does not need to be shut down to deploy, update, or remove a web application.

### The novels service at the code level
|
||||
|
||||
Let's get back to the novels example but at the code level. Consider the `Novel` class below:
|
||||
|
||||
#### Example 1. The Novel class
|
||||
|
||||
|
||||
```
|
||||
package novels;
|
||||
|
||||
import java.io.Serializable;
|
||||
|
||||
public class Novel implements [Serializable][8], Comparable<Novel> {
|
||||
static final long serialVersionUID = 1L;
|
||||
private [String][9] author;
|
||||
private [String][9] title;
|
||||
private int id;
|
||||
|
||||
public Novel() { }
|
||||
|
||||
public void setAuthor(final [String][9] author) { this.author = author; }
|
||||
public [String][9] getAuthor() { return this.author; }
|
||||
public void setTitle(final [String][9] title) { this.title = title; }
|
||||
public [String][9] getTitle() { return this.title; }
|
||||
public void setId(final int id) { this.id = id; }
|
||||
public int getId() { return this.id; }
|
||||
|
||||
public int compareTo(final Novel other) { return this.id - other.id; }
|
||||
}
|
||||
```
|
||||
|
||||
This class implements the `compareTo` method from the `Comparable` interface because `Novel` instances are stored in a thread-safe `ConcurrentHashMap`, which does not enforce a sorted order. In responding to requests to view the collection, the novels service sorts a collection (an `ArrayList`) extracted from the map; the implementation of `compareTo` enforces an ascending sorted order by `Novel` ID.
|
||||
|
||||
The class `Novels` contains various utility functions:

#### Example 2. The Novels utility class

```
package novels;

import java.io.IOException;
import java.io.File;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.BufferedReader;
import java.nio.file.Files;
import java.util.stream.Stream;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.Collections;
import java.beans.XMLEncoder;
import javax.servlet.ServletContext; // not in JavaSE
import org.json.JSONObject;
import org.json.XML;

public class Novels {
    private final String fileName = "/WEB-INF/data/novels.db";
    private ConcurrentMap<Integer, Novel> novels;
    private ServletContext sctx;
    private AtomicInteger mapKey;

    public Novels() {
        novels = new ConcurrentHashMap<Integer, Novel>();
        mapKey = new AtomicInteger();
    }

    public void setServletContext(ServletContext sctx) { this.sctx = sctx; }
    public ServletContext getServletContext() { return this.sctx; }

    public ConcurrentMap<Integer, Novel> getConcurrentMap() {
        if (getServletContext() == null) return null; // not initialized
        if (novels.size() < 1) populate();
        return this.novels;
    }

    public String toXml(Object obj) { // default encoding
        String xml = null;
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            XMLEncoder encoder = new XMLEncoder(out);
            encoder.writeObject(obj);
            encoder.close();
            xml = out.toString();
        }
        catch(Exception e) { }
        return xml;
    }

    public String toJson(String xml) { // option for requester
        try {
            JSONObject jobt = XML.toJSONObject(xml);
            return jobt.toString(3); // 3 is indentation level
        }
        catch(Exception e) { }
        return null;
    }

    public int addNovel(Novel novel) {
        int id = mapKey.incrementAndGet();
        novel.setId(id);
        novels.put(id, novel);
        return id;
    }

    private void populate() {
        InputStream in = sctx.getResourceAsStream(this.fileName);
        // Convert novel.db string data into novels.
        if (in != null) {
            try {
                InputStreamReader isr = new InputStreamReader(in);
                BufferedReader reader = new BufferedReader(isr);

                String record = null;
                while ((record = reader.readLine()) != null) {
                    String[] parts = record.split("!");
                    if (parts.length == 2) {
                        Novel novel = new Novel();
                        novel.setAuthor(parts[0]);
                        novel.setTitle(parts[1]);
                        addNovel(novel); // sets the Id, adds to map
                    }
                }
                in.close();
            }
            catch (IOException e) { }
        }
    }
}
```

The most complicated method is `populate`, which reads from a text file contained in the deployed WAR file. The text file contains the initial collection of novels. To open the text file, the `populate` method needs the `ServletContext`, a Java object that holds all of the critical information about the servlet embedded in the servlet container. The text file, in turn, contains records such as this:

```
Jane Austen!Persuasion
```

The line is parsed into two parts (author and title) separated by the bang symbol (`!`). The method then builds a `Novel` instance, sets the author and title properties, and adds the novel to the collection, which acts as an in-memory data store.

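The record format can be checked in isolation. This hypothetical `RecordParse` helper (not part of the article's code) applies the same `split("!")` call that `populate` uses:

```java
public class RecordParse {
    // Splits an "author!title" record the same way populate() does.
    // A well-formed record yields exactly two parts.
    static String[] parse(String record) {
        return record.split("!");
    }

    public static void main(String[] args) {
        String[] parts = parse("Jane Austen!Persuasion");
        System.out.println(parts[0] + " / " + parts[1]); // Jane Austen / Persuasion
    }
}
```

A record with a missing or extra `!` produces a different number of parts, which is why `populate` checks `parts.length == 2` before building a `Novel`.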
The `Novels` class also has utilities to encode the novels collection into XML or JSON, depending upon the format that the requester prefers. XML is the default, but JSON is available upon request. A lightweight XML-to-JSON package provides the JSON. Further details on encoding are below.

#### Example 3. The NovelsServlet class

```
package novels;

import java.util.concurrent.ConcurrentMap;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Arrays;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.beans.XMLEncoder;
import org.json.JSONObject;
import org.json.XML;

public class NovelsServlet extends HttpServlet {
    static final long serialVersionUID = 1L;
    private Novels novels; // back-end bean

    // Executed when servlet is first loaded into container.
    @Override
    public void init() {
        this.novels = new Novels();
        novels.setServletContext(this.getServletContext());
    }

    // GET /novels
    // GET /novels?id=1
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf(param.trim());

        // Check user preference for XML or JSON by inspecting
        // the HTTP headers for the Accept key.
        boolean json = false;
        String accept = request.getHeader("accept");
        if (accept != null && accept.contains("json")) json = true;

        // If no query string, assume client wants the full list.
        if (key == null) {
            ConcurrentMap<Integer, Novel> map = novels.getConcurrentMap();
            Object[] list = map.values().toArray();
            Arrays.sort(list);

            String payload = novels.toXml(list);        // defaults to XML
            if (json) payload = novels.toJson(payload); // JSON preferred?
            sendResponse(response, payload);
        }
        // Otherwise, return the specified Novel.
        else {
            Novel novel = novels.getConcurrentMap().get(key);
            if (novel == null) { // no such Novel
                String msg = key + " does not map to a novel.\n";
                sendResponse(response, novels.toXml(msg));
            }
            else { // requested Novel found
                if (json) sendResponse(response, novels.toJson(novels.toXml(novel)));
                else sendResponse(response, novels.toXml(novel));
            }
        }
    }

    // POST /novels
    @Override
    public void doPost(HttpServletRequest request, HttpServletResponse response) {
        String author = request.getParameter("author");
        String title = request.getParameter("title");

        // Are the data to create a new novel present?
        if (author == null || title == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Create a novel.
        Novel n = new Novel();
        n.setAuthor(author);
        n.setTitle(title);

        // Save the ID of the newly created Novel.
        int id = novels.addNovel(n);

        // Generate the confirmation message.
        String msg = "Novel " + id + " created.\n";
        sendResponse(response, novels.toXml(msg));
    }

    // PUT /novels
    @Override
    public void doPut(HttpServletRequest request, HttpServletResponse response) {
        /* A workaround is necessary for a PUT request because Tomcat does not
           generate a workable parameter map for the PUT verb. */
        String key = null;
        String rest = null;
        boolean author = false;

        /* Let the hack begin. */
        try {
            BufferedReader br =
                new BufferedReader(new InputStreamReader(request.getInputStream()));
            String data = br.readLine();
            /* To simplify the hack, assume that the PUT request has exactly
               two parameters: the id and either author or title. Assume, further,
               that the id comes first. From the client side, a hash character
               # separates the id and the author/title, e.g.,

               id=33#title=War and Peace
            */
            String[] args = data.split("#");      // id in args[0], rest in args[1]
            String[] parts1 = args[0].split("="); // id = parts1[1]
            key = parts1[1];

            String[] parts2 = args[1].split("="); // parts2[0] is key
            if (parts2[0].contains("author")) author = true;
            rest = parts2[1];
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }

        // If no key, then the request is ill formed.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));

        // Look up the specified novel.
        Novel p = novels.getConcurrentMap().get(Integer.valueOf(key.trim()));
        if (p == null) { // not found
            String msg = key + " does not map to a novel.\n";
            sendResponse(response, novels.toXml(msg));
        }
        else { // found
            if (rest == null) {
                throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
            }
            // Do the editing.
            else {
                if (author) p.setAuthor(rest);
                else p.setTitle(rest);

                String msg = "Novel " + key + " has been edited.\n";
                sendResponse(response, novels.toXml(msg));
            }
        }
    }

    // DELETE /novels?id=1
    @Override
    public void doDelete(HttpServletRequest request, HttpServletResponse response) {
        String param = request.getParameter("id");
        Integer key = (param == null) ? null : Integer.valueOf(param.trim());
        // Only one Novel can be deleted at a time.
        if (key == null)
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
        try {
            novels.getConcurrentMap().remove(key);
            String msg = "Novel " + key + " removed.\n";
            sendResponse(response, novels.toXml(msg));
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }

    // Methods Not Allowed
    @Override
    public void doTrace(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doHead(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    @Override
    public void doOptions(HttpServletRequest request, HttpServletResponse response) {
        throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
    }

    // Send the response payload (XML or JSON) to the client.
    private void sendResponse(HttpServletResponse response, String payload) {
        try {
            OutputStream out = response.getOutputStream();
            out.write(payload.getBytes());
            out.flush();
        }
        catch(Exception e) {
            throw new RuntimeException(Integer.toString(HttpServletResponse.SC_INTERNAL_SERVER_ERROR));
        }
    }
}
```

Recall that the `NovelsServlet` class above extends the `HttpServlet` class, which in turn extends the `GenericServlet` class, which implements the `Servlet` interface:

```
NovelsServlet extends HttpServlet extends GenericServlet implements Servlet
```

As the name makes clear, the `HttpServlet` is designed for servlets delivered over HTTP(S). The class provides empty methods named after the standard HTTP request verbs (officially, _methods_):

* `doPost` (Post = Create)
* `doGet` (Get = Read)
* `doPut` (Put = Update)
* `doDelete` (Delete = Delete)

Some additional HTTP verbs are covered as well. An extension of the `HttpServlet`, such as the `NovelsServlet`, overrides any `do` method of interest, leaving the others as no-ops. The `NovelsServlet` overrides seven of the `do` methods.

Each of the `HttpServlet` CRUD methods takes the same two arguments. Here is `doPost` as an example:

```
public void doPost(HttpServletRequest request, HttpServletResponse response) {
```

The `request` argument is a map of the HTTP request information, and the `response` provides an output stream back to the requester. A method such as `doPost` is structured as follows:

* Read the `request` information, taking whatever action is appropriate to generate a response. If information is missing or otherwise deficient, generate an error.
* Use the extracted request information to perform the appropriate CRUD operation (in this case, create a `Novel`) and then encode an appropriate response to the requester, using the `response` output stream to do so. In the case of `doPost`, the response is a confirmation that a new novel has been created and added to the collection. Once the response is sent, the output stream is closed, which closes the connection as well.

### More on the do method overrides

An HTTP request has a relatively simple structure. Here is a sketch in the familiar HTTP 1.1 format, with comments introduced by double hash signs:

```
GET /novels              ## start line
Host: localhost:8080     ## header element
Accept-type: text/plain  ## ditto
...
[body]                   ## POST and PUT only
```

The start line begins with the HTTP verb (in this case, `GET`) and the URI (Uniform Resource Identifier), which is the noun (in this case, `novels`) that names the targeted resource. The headers consist of key-value pairs, with a colon separating the key on the left from the value(s) on the right. The header with key `Host` (case insensitive) is required; the hostname `localhost` is the symbolic address of the local machine on the local machine, and the port number `8080` is the default for the Tomcat web server awaiting HTTP requests. (By default, Tomcat listens on port 8443 for HTTPS requests.) The header elements can occur in arbitrary order. In this example, the `Accept-type` header's value is the MIME type `text/plain`.

Some requests (in particular, `POST` and `PUT`) have bodies, whereas others (in particular, `GET` and `DELETE`) do not. If there is a body (perhaps empty), a blank line separates the headers from the body; the body here consists of key-value pairs. For bodyless requests, header elements, such as the query string, can be used to send information. Here is a request to `GET` the `/novels` resource with the ID of 2:

```
GET /novels?id=2
```

The query string starts with the question mark and, in general, consists of key-value pairs, although a key without a value is possible.

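The container turns the query string into the map that `getParameter` consults. As a rough, hand-rolled illustration of that step, here is a hypothetical `QueryString` helper (for sketch purposes only; real containers also handle URL decoding and repeated keys):

```java
import java.util.HashMap;
import java.util.Map;

public class QueryString {
    // Parses "id=2&format=json" into a map.
    // A key without a value maps to the empty string.
    static Map<String, String> parse(String qs) {
        Map<String, String> params = new HashMap<>();
        for (String pair : qs.split("&")) {
            String[] kv = pair.split("=", 2); // split on the first '=' only
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    public static void main(String[] args) {
        System.out.println(parse("id=2")); // {id=2}
    }
}
```

With this picture in mind, `request.getParameter("id")` is simply a lookup in such a map.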
The `HttpServlet`, with methods such as `getParameter` and `getParameterMap`, nicely hides the distinction between HTTP requests with and without a body. In the novels example, the `getParameter` method is used to extract the required information from the `GET`, `POST`, and `DELETE` requests. (Handling a `PUT` request requires lower-level code because Tomcat does not provide a workable parameter map for `PUT` requests.) Here, for illustration, is a slice of the `doPost` method in the `NovelsServlet` override:

```
@Override
public void doPost(HttpServletRequest request, HttpServletResponse response) {
    String author = request.getParameter("author");
    String title = request.getParameter("title");
    ...
```

For a bodyless `DELETE` request, the approach is essentially the same:

```
@Override
public void doDelete(HttpServletRequest request, HttpServletResponse response) {
    String param = request.getParameter("id"); // id of novel to be removed
    ...
```

The `doGet` method needs to distinguish between two flavors of a `GET` request: one flavor means _get all_, whereas the other means _get a specified one_. If the `GET` request URL contains a query string whose key is an ID, then the request is interpreted as _get a specified one_:

```
http://localhost:8080/novels?id=2  ## GET specified
```

If there is no query string, the `GET` request is interpreted as _get all_:

```
http://localhost:8080/novels  ## GET all
```

### Some devilish details

The novels service design reflects how a Java-based web server such as Tomcat works. At startup, Tomcat builds a thread pool from which request handlers are drawn, an approach known as the _one thread per request_ model. Modern versions of Tomcat also use non-blocking I/O to boost performance.

The novels service executes as a _single_ instance of the `NovelsServlet` class, which in turn maintains a _single_ collection of novels. Accordingly, a race condition would arise, for example, if these two requests were processed concurrently:

* One request changes the collection by adding a new novel.
* The other request gets all the novels in the collection.

The outcome is indeterminate, depending on exactly how the _read_ and _write_ operations overlap. To avoid this problem, the novels service uses a thread-safe `ConcurrentMap`. Keys for this map are generated with a thread-safe `AtomicInteger`. Here is the relevant code segment:

```
public class Novels {
    private ConcurrentMap<Integer, Novel> novels;
    private AtomicInteger mapKey;
    ...
```

By default, a response to a client request is encoded as XML. The novels program uses the old-time `XMLEncoder` class for simplicity; a far richer option is the JAXB library. The code is straightforward:

```
public String toXml(Object obj) { // default encoding
    String xml = null;
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder encoder = new XMLEncoder(out);
        encoder.writeObject(obj);
        encoder.close();
        xml = out.toString();
    }
    catch(Exception e) { }
    return xml;
}
```

The `Object` parameter is either a sorted `ArrayList` of novels (in response to a _get all_ request), a single `Novel` instance (in response to a _get one_ request), or a `String` (a confirmation message).

If an HTTP request header refers to JSON as a desired type, then the XML is converted to JSON. Here is the check in the `doGet` method of the `NovelsServlet`:

```
String accept = request.getHeader("accept"); // "accept" is case insensitive
if (accept != null && accept.contains("json")) json = true;
```

The `Novels` class houses the `toJson` method, which converts XML to JSON:

```
public String toJson(String xml) { // option for requester
    try {
        JSONObject jobt = XML.toJSONObject(xml);
        return jobt.toString(3); // 3 is indentation level
    }
    catch(Exception e) { }
    return null;
}
```

The `NovelsServlet` checks for errors of various types. For example, a `POST` request should include an author and a title for the new novel. If either is missing, the `doPost` method throws an exception:

```
if (author == null || title == null)
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_BAD_REQUEST));
```

The `SC` in `SC_BAD_REQUEST` stands for _status code_, and `BAD_REQUEST` has the standard HTTP numeric value of 400. If the HTTP verb in a request is `TRACE`, a different status code is returned:

```
public void doTrace(HttpServletRequest request, HttpServletResponse response) {
    throw new RuntimeException(Integer.toString(HttpServletResponse.SC_METHOD_NOT_ALLOWED));
}
```

### Testing the novels service

Testing a web service with a browser is tricky. Among the CRUD verbs, modern browsers generate only `POST` (Create) and `GET` (Read) requests. Even a `POST` request is challenging from a browser, as the key-values for the body need to be included; this is typically done through an HTML form. A command-line utility such as [curl][21] is a better way to go, as this section illustrates with some `curl` commands, which are included in the ZIP on my website.

Here are some sample tests without the corresponding output:

```
% curl localhost:8080/novels/
% curl localhost:8080/novels?id=1
% curl --header "Accept: application/json" localhost:8080/novels/
```

The first command requests all the novels, which are encoded by default in XML. The second command requests the novel with an ID of 1, which is encoded in XML. The last command adds an `Accept` header element with `application/json` as the desired MIME type. The _get one_ command could also use this header element. Such requests receive JSON rather than XML responses.

The next two commands create a new novel in the collection and confirm the addition:

```
% curl --request POST --data "author=Tolstoy&title=War and Peace" localhost:8080/novels/
% curl localhost:8080/novels?id=4
```

A `PUT` command in `curl` resembles a `POST` command except that the `PUT` body does not use standard syntax. The documentation for the `doPut` method in the `NovelsServlet` goes into detail, but the short version is that Tomcat does not generate a proper parameter map for `PUT` requests. Here is a sample `PUT` command and a confirmation command:

```
% curl --request PUT --data "id=3#title=This is an UPDATE" localhost:8080/novels/
% curl localhost:8080/novels?id=3
```

The second command confirms the update.

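The ad hoc `id=…#…=…` body format can be exercised outside the servlet. This hypothetical `PutBody` helper mirrors the splitting logic inside `doPut`:

```java
public class PutBody {
    // Mirrors doPut's assumption: body is "id=<n>#<author|title>=<value>",
    // with the id first and a '#' separating the two parameters.
    // Returns { id, field name, field value }.
    static String[] parse(String body) {
        String[] args = body.split("#");          // id in args[0], rest in args[1]
        String id = args[0].split("=")[1];        // "id=3" -> "3"
        String[] kv = args[1].split("=", 2);      // limit 2 keeps '=' in the value safe
        return new String[] { id, kv[0], kv[1] };
    }

    public static void main(String[] args) {
        String[] r = parse("id=3#title=This is an UPDATE");
        System.out.println(r[0] + " | " + r[1] + " | " + r[2]); // 3 | title | This is an UPDATE
    }
}
```

As with the servlet's own hack, any body that deviates from this shape (missing `#`, missing `=`) would fail, which is why `doPut` wraps the parsing in a `try`/`catch` that maps failures to a 500 status.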
Finally, the `DELETE` command works as expected:

```
% curl --request DELETE localhost:8080/novels?id=2
% curl localhost:8080/novels/
```

The request is for the novel with the ID of 2 to be deleted. The second command shows the remaining novels.

### The web.xml configuration file

Although it's officially optional, a `web.xml` configuration file is a mainstay in a production-grade website or service. The configuration file allows routing, security, and other features of a site or service to be specified independently of the implementation code. The configuration for the novels service handles routing by providing a URL pattern for requests dispatched to this service:

```
<?xml version = "1.0" encoding = "UTF-8"?>
<web-app>
  <servlet>
    <servlet-name>novels</servlet-name>
    <servlet-class>novels.NovelsServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>novels</servlet-name>
    <url-pattern>/*</url-pattern>
  </servlet-mapping>
</web-app>
```

The `servlet-name` element provides an abbreviation (`novels`) for the servlet's fully qualified class name (`novels.NovelsServlet`), and this name is used in the `servlet-mapping` element below.

Recall that a URL for a deployed service has the WAR file name right after the port number:

```
http://localhost:8080/novels/
```

The slash immediately after the port number begins the URI known as the _path_ to the requested resource, in this case, the novels service; hence, the term `novels` occurs after the first single slash.

In the `web.xml` file, the `url-pattern` is specified as `/*`, which means _any path that starts with /novels_. Suppose Tomcat encounters a contrived request URL, such as this:

```
http://localhost:8080/novels/foobar/
```

The `web.xml` configuration specifies that this request, too, should be dispatched to the novels servlet because the `/*` pattern covers `/foobar`. The contrived URL thus has the same result as the legitimate one shown above it.

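The effect of the `/*` pattern can be approximated with simple prefix matching. The `Dispatch` sketch below is a hypothetical simplification for illustration, not Tomcat's actual matching algorithm (which also handles extension mappings, exact mappings, and precedence rules):

```java
public class Dispatch {
    // Simplified picture: a request is dispatched to the novels servlet
    // if its path is the context path itself or anything beneath it.
    static boolean matches(String contextPath, String requestPath) {
        return requestPath.equals(contextPath)
            || requestPath.startsWith(contextPath + "/");
    }

    public static void main(String[] args) {
        System.out.println(matches("/novels", "/novels/foobar/")); // true
        System.out.println(matches("/novels", "/other"));          // false
    }
}
```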
A production-grade configuration file might include information on security, both wire-level and users-roles. Even in this case, the configuration file would be only two or three times the size of the sample one.

### Wrapping up

The `HttpServlet` is at the center of Java's web technologies. A website or web service, such as the novels service, extends this class, overriding the `do` verbs of interest. A RESTful framework such as Jersey (JAX-RS) or Restlet does essentially the same by providing a customized servlet, which then acts as the HTTP(S) endpoint for requests against a web application written in the framework.

A servlet-based application has access, of course, to any Java library required in the web application. If the application follows the separation-of-concerns principle, then the servlet code remains attractively simple: the code checks a request, issuing the appropriate error if there are deficiencies; otherwise, the code calls out for whatever functionality may be required (e.g., querying a database, encoding a response in a specified format), and then sends the response to the requester. The `HttpServletRequest` and `HttpServletResponse` types make it easy to perform the servlet-specific work of reading the request and writing the response.

Java has APIs that range from the very simple to the highly complicated. If you need to deliver some RESTful services using Java, my advice is to give the low-fuss `HttpServlet` a try before anything else.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/restful-services-java

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972

[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_laptop_hack_work.png?itok=aSpcWkcl (Coding on a computer)
[2]: http://xmlrpc.com/
[3]: https://en.wikipedia.org/wiki/Representational_state_transfer
[4]: https://www.redhat.com/en/topics/integration/whats-the-difference-between-soap-rest
[5]: http://tomcat.apache.org/
[6]: https://condor.depaul.edu/mkalin
[7]: https://tomcat.apache.org/download-90.cgi
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+serializable
[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
[10]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+object
[12]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bytearrayoutputstream
[13]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+exception
[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstream
[15]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+inputstreamreader
[16]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+bufferedreader
[17]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+ioexception
[18]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+arrays
[19]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+runtimeexception
[20]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+outputstream
[21]: https://curl.haxx.se/

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SCP user’s migration guide to rsync)
[#]: via: (https://fedoramagazine.org/scp-users-migration-guide-to-rsync/)
[#]: author: (chasinglogic https://fedoramagazine.org/author/chasinglogic/)

SCP user’s migration guide to rsync
======

![][1]

As part of the [8.0 pre-release announcement][2], the OpenSSH project stated that they consider the scp protocol outdated, inflexible, and not readily fixed. They then go on to recommend the use of sftp or rsync for file transfer instead.

Many users grew up on the _scp_ command, however, and so are not familiar with rsync. Additionally, rsync can do much more than just copy files, which can give a beginner the impression that it's complicated and opaque, especially since the scp flags map broadly onto the cp flags while the rsync flags do not.

This article provides an introduction and transition guide for anyone familiar with scp. Let's jump into the most common scenarios: copying files and copying directories.

### Copying files

For copying a single file, the scp and rsync commands are effectively equivalent. Let's say you need to ship _foo.txt_ to your home directory on a server named _server_.

```
$ scp foo.txt me@server:/home/me/
```

The equivalent rsync command requires only that you type rsync instead of scp:

```
$ rsync foo.txt me@server:/home/me/
```

### Copying directories

For copying directories, things diverge quite a bit, which probably explains why rsync is seen as more complex than scp. If you want to copy the directory _bar_ to _server_, the corresponding scp command looks exactly like the cp command except for specifying ssh information:

```
$ scp -r bar/ me@server:/home/me/
```

With rsync, there are more considerations, as it's a more powerful tool. First, let's look at the simplest form:

```
$ rsync -r bar/ me@server:/home/me/
```

Looks simple, right? For the simple case of a directory that contains only directories and regular files, this will work. However, rsync cares a lot about sending files exactly as they are on the host system. Let's create a slightly more complex, but not uncommon, example.

```
|
||||
# Create a multi-level directory structure
|
||||
$ mkdir -p bar/baz
|
||||
# Create a file at the root directory
|
||||
$ touch bar/foo.txt
|
||||
# Now create a symlink which points back up to this file
|
||||
$ cd bar/baz
|
||||
$ ln -s ../foo.txt link.txt
|
||||
# Return to our original location
|
||||
$ cd -
|
||||
```
|
||||
|
||||
We now have a directory tree that looks like the following:
|
||||
|
||||
```
|
||||
bar
|
||||
├── baz
|
||||
│ └── link.txt -> ../foo.txt
|
||||
└── foo.txt
|
||||
|
||||
1 directory, 2 files
|
||||
```
|
||||
|
||||
If we try the commands from above to copy bar, we’ll notice very different (and surprising) results. First, let’s give scp a go:

```
$ scp -r bar/ me@server:/home/me/
```

If you ssh into your server and look at the directory tree of bar, you’ll notice an important and subtle difference from your host system:

```
bar
├── baz
│   └── link.txt
└── foo.txt

1 directory, 2 files
```

Note that _link.txt_ is no longer a symlink; it is now a full-blown copy of _foo.txt_. This might be surprising behavior if you’re used to _cp_. If you did try to copy the _bar_ directory using _cp -r_, you would get a new directory with the exact symlinks that _bar_ had. Now if we try the same rsync command from before, we’ll get a warning:

```
$ rsync -r bar/ me@server:/home/me/
skipping non-regular file "bar/baz/link.txt"
```

Rsync has warned us that it found a non-regular file and is skipping it. Because you didn’t tell it to copy symlinks, it’s ignoring them. Rsync has an extensive manual section titled “SYMBOLIC LINKS” that explains all of the possible behavior options available to you. For our example, we need to add the `--links` flag:

```
$ rsync -r --links bar/ me@server:/home/me/
```
On the remote server, we see that the symlink was copied over as a symlink. Note that this is different from how scp copied the symlink:

```
bar/
├── baz
│   └── link.txt -> ../foo.txt
└── foo.txt

1 directory, 2 files
```

To save some typing and take advantage of more file-preserving options, use the `--archive` (`-a` for short) flag whenever copying a directory. The archive flag does what most people expect, as it enables recursive copy, symlink copy, and many other options:

```
$ rsync -a bar/ me@server:/home/me/
```

The rsync man page has in-depth explanations of what the archive flag enables, if you’re curious.

### Caveats

There is one caveat, however, to using rsync: it’s much easier to specify a non-standard ssh port with scp than with rsync. If _server_ were using port 8022 for SSH connections, for instance, the command would look like this:

```
$ scp -P 8022 foo.txt me@server:/home/me/
```

With rsync, you have to specify the “remote shell” command to use; it defaults to _ssh_. You do so using the `-e` flag:

```
$ rsync -e 'ssh -p 8022' foo.txt me@server:/home/me/
```
Rsync does use your ssh config, however, so if you are connecting to this server frequently, you can add the following snippet to your _~/.ssh/config_ file. Then you no longer need to specify the port for the rsync or ssh commands!

```
Host server
    Port 8022
```

Alternatively, if every server you connect to runs on the same non-standard port, you can configure the _RSYNC_RSH_ environment variable.
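As a sketch of that approach (reusing the hypothetical port 8022 from above), export the variable once and every later rsync invocation in that shell picks it up:

```shell
# Assumed setup: every server you use listens for SSH on port 8022.
# Export once; rsync then uses this command as its remote shell by default.
export RSYNC_RSH='ssh -p 8022'
# rsync foo.txt me@server:/home/me/   # would now connect via 'ssh -p 8022'
echo "$RSYNC_RSH"
```

This is equivalent to passing `-e 'ssh -p 8022'` on every invocation.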
### Why else should you switch to rsync?

Now that we’ve covered the everyday use cases and caveats for switching from scp to rsync, let’s take some time to explore why you probably want to use rsync on its own merits. Many people made the switch to rsync long before now on these merits alone.

#### In-flight compression

If you have a slow or otherwise limited network connection between you and your server, rsync can spend more CPU cycles to save network bandwidth. It does this by compressing data before sending it. Compression can be enabled with the `-z` flag.
#### Delta transfers

Rsync also only copies a file if the target file is different from the source file. This works recursively through directories. For instance, if you took our final bar example above and re-ran that rsync command multiple times, it would do no work after the initial transfer. For this feature alone, using rsync even for local copies is worth it if you know you will repeat them (such as backing up to a USB drive), as it can save a lot of time with large data sets.
#### Syncing

As the name implies, rsync can do more than just copy data. So far, we’ve only demonstrated how to copy files with rsync. If you instead want rsync to make the target directory look like your source directory, you can add the `--delete` flag. The `--delete` flag makes rsync copy files from the source directory which don’t exist on the target directory, and then remove files on the target directory which do not exist in the source directory. The result is a target directory that is identical to the source directory. By contrast, scp will only ever add files to the target directory.
### Conclusion

For simple use cases, rsync is not significantly more complicated than the venerable scp tool. The only significant difference is the use of `-a` instead of `-r` for recursive copying of directories. However, as we saw, rsync’s `-a` flag behaves more like cp’s `-r` flag than scp’s `-r` flag does.

Hopefully, with these new commands, you can speed up your file transfer workflow!
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/scp-users-migration-guide-to-rsync/

作者:[chasinglogic][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/chasinglogic/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2020/07/scp-rsync-816x345.png
[2]: https://lists.mindrot.org/pipermail/openssh-unix-dev/2019-March/037672.html
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 open source IDE tools for Java)
[#]: via: (https://opensource.com/article/20/7/ide-java)
[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh)

5 open source IDE tools for Java
======

Java IDE tools offer plenty of ways to create a programming environment based on your unique needs and preferences.

![woman on laptop sitting at the window][1]

[Java][2] frameworks make life easier for programmers by streamlining their work. These frameworks were designed and developed to run any application on any server environment; that includes dynamic behaviors such as parsing annotations, scanning descriptors, loading configurations, and launching the actual services on a Java virtual machine (JVM). Controlling this much scope requires more code, making it difficult to minimize memory footprint or speed up startup times for new applications. Regardless, Java consistently ranks in the top three programming languages in use today, with a community of seven to ten million developers according to the [TIOBE Index][3].

With all that code written in Java, there are some great options for integrated development environments (IDEs) that give developers all the tools needed to effectively write, lint, test, and run Java applications.

Below, I introduce my five favorite open source IDE tools for writing Java, in alphabetical order, and how to configure their basics.
### BlueJ

[BlueJ][4] provides an integrated educational Java development environment for Java beginners. It also aids in developing small-scale software using the Java Development Kit (JDK). Installation options for a variety of versions and operating systems are available [here][5].

Once you install the BlueJ IDE on your laptop, start a new project: click New Project in the Project menu, then begin writing Java code from New Class. Sample methods and skeleton code will be generated as below:

![BlueJ IDE screenshot][6]

BlueJ not only provides an interactive graphical user interface (GUI) for teaching Java programming courses in schools but also allows developers to invoke functions (i.e., objects, methods, parameters) without compiling source code.
### Eclipse

[Eclipse][7] is one of the most famous desktop Java IDEs, and it supports a variety of programming languages such as C/C++, JavaScript, and PHP. It also allows developers to add unlimited extensions from the Eclipse Marketplace for more development conveniences. The [Eclipse Foundation][8] provides a web IDE called [Eclipse Che][9] for DevOps teams to spin up an agile software development environment with hosted workspaces on multiple cloud platforms.

The download is available [here][10]; then you can create a new project or import an existing project from a local directory. Find more Java development tips in [this article][11].

![Eclipse IDE screenshot][12]
### IntelliJ IDEA

[IntelliJ IDEA CE (Community Edition)][13] is the open source version of IntelliJ IDEA, providing an IDE for multiple programming languages (i.e., Java, Groovy, Kotlin, Rust, Scala). IntelliJ IDEA CE is also very popular with experienced developers for refactoring existing source code, running code inspections, building test cases with JUnit or TestNG, and building code with Maven or Ant. Downloadable binaries are available [here][14].

IntelliJ IDEA CE comes with some unique features; I particularly like the API tester. For example, if you implement a REST API with a Java framework, IntelliJ IDEA CE allows you to test the API's functionality via the Swing GUI designer:

![IntelliJ IDEA screenshot][15]

IntelliJ IDEA CE is open source, but the company behind it offers a commercial edition. Find more differences between the Community Edition and the Ultimate edition [here][16].
### NetBeans IDE

[NetBeans IDE][17] is an integrated Java development environment that allows developers to craft modular applications for standalone, mobile, and web architectures with supported web technologies (i.e., HTML5, JavaScript, and CSS). NetBeans IDE allows developers to set up multiple views to manage projects, tools, and data efficiently, and its Git integration helps them collaborate on software development when a new developer joins the project.

Downloadable binaries are available [here][18] for multiple platforms (i.e., Windows, macOS, Linux). Once you install the IDE tool in your local environment, the New Project wizard helps you create a new project. For example, the wizard generates skeleton code (with sections to fill in, like `// TODO code application logic here`), then you can add your own application code.
### VSCodium

[VSCodium][19] is a lightweight, free source code editor that runs on a variety of OS platforms (i.e., Windows, macOS, Linux); it is an open source alternative based on [Visual Studio Code][20]. It was also designed and developed to support a rich ecosystem of multiple programming languages (i.e., Java, C++, C#, PHP, Go, Python, .NET). For high code quality, it provides debugging, intelligent code completion, syntax highlighting, and code refactoring by default.

There are many download options available in the [repository][21]. When you run VSCodium, you can add new features and themes by clicking the Extensions icon in the activity bar on the left side or by pressing Ctrl+Shift+X on the keyboard. For example, Quarkus Tools for Visual Studio Code comes up when you type "quarkus" in the search box. The extension provides helpful tools for [writing Java with Quarkus in VS Code][22]:

![VSCodium IDE screenshot][23]

### Wrapping up

Java being one of the most widely used programming languages and environments, these five are just a fraction of the open source IDE tools available for Java developers. It can be hard to know which one to choose. As always, it depends on your specific needs and goals: what kinds of workloads (web, mobile, messaging, data transaction) you want to implement, and what runtimes (local, cloud, Kubernetes, serverless) you will deploy to using the IDE's extended features. While the wealth of options out there can be overwhelming, it does also mean that you can probably find one that suits your particular circumstances and preferences.

Do you have a favorite open source Java IDE? Share it in the comments!
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/ide-java

作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop)
[2]: https://opensource.com/resources/java
[3]: https://www.tiobe.com/tiobe-index/
[4]: https://www.bluej.org/about.html
[5]: https://www.bluej.org/versions.html
[6]: https://opensource.com/sites/default/files/uploads/5_open_source_ide_tools_to_write_java_and_how_you_begin_it.png (BlueJ IDE screenshot)
[7]: https://www.eclipse.org/ide/
[8]: https://www.eclipse.org/
[9]: https://opensource.com/article/19/10/cloud-ide-che
[10]: https://www.eclipse.org/downloads/
[11]: https://opensource.com/article/19/10/java-basics
[12]: https://opensource.com/sites/default/files/uploads/os_ide_2.png (Eclipse IDE screenshot)
[13]: https://www.jetbrains.com/idea/
[14]: https://www.jetbrains.org/display/IJOS/Download
[15]: https://opensource.com/sites/default/files/uploads/os_ide_3.png (IntelliJ IDEA screenshot)
[16]: https://www.jetbrains.com/idea/features/editions_comparison_matrix.html
[17]: https://netbeans.org/
[18]: https://netbeans.org/downloads/8.2/rc/
[19]: https://vscodium.com/
[20]: https://opensource.com/article/20/6/open-source-alternatives-vs-code
[21]: https://github.com/VSCodium/vscodium#downloadinstall
[22]: https://opensource.com/article/20/4/java-quarkus-vs-code
[23]: https://opensource.com/sites/default/files/uploads/os_ide_5.png (VSCodium IDE screenshot)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to create a documentation site with Docsify and GitHub Pages)
[#]: via: (https://opensource.com/article/20/7/docsify-github-pages)
[#]: author: (Bryant Son https://opensource.com/users/brson)

How to create a documentation site with Docsify and GitHub Pages
======

Use Docsify to create documentation web pages to publish on GitHub Pages.

![Digital creative of a browser on the internet][1]

Documentation is an essential part of making any open source project useful to users. But it's not always developers' top priority, as they may be more focused on making their application better than on helping people use it. This is why making it easier to publish documentation is so valuable to developers. In this tutorial, I'll show you one option for doing so: combining the [Docsify][2] documentation generator with [GitHub Pages][3].

If you prefer to learn by video, you can access the YouTube version of this how-to.

By default, GitHub Pages prompts users to use [Jekyll][4], a static site generator that supports HTML, CSS, and other web technologies. Jekyll generates a static website from documentation files encoded in Markdown format, which GitHub automatically recognizes due to their .md or .markdown extension. While this setup is nice, I wanted to try something else.

Fortunately, GitHub Pages' HTML file support means you can use other site-generation tools, including Docsify, to create a website on the platform. Docsify is an MIT-licensed open source project with [features][5] that make it easy to create an attractive advanced documentation site on GitHub Pages.

![Docsify][6]

(Bryant Son, [CC BY-SA 4.0][7])
### Get started with Docsify

There are two ways to install Docsify:

  1. Docsify's command-line interface (CLI) through NPM
  2. Manually by writing your own `index.html`

Docsify recommends the NPM approach, but I will use the second option. If you want to use NPM, follow the instructions in the [quick-start guide][8].
### Download the sample content from GitHub

I've published this example's source code on the [project's GitHub page][9]. You can download the files individually or [clone the repo][10] with:

```
git clone https://github.com/bryantson/OpensourceDotComDemos
```

Then `cd` into the DocsifyDemo directory.

I will walk you through the cloned code from my sample repo below, so you can understand how to modify Docsify. If you prefer, you can start from scratch by creating a new `index.html` file, like the [example][11] in Docsify's docs:
```
<!-- index.html -->

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<meta name="viewport" content="width=device-width,initial-scale=1">
<meta charset="UTF-8">
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
</head>
<body>
<div id="app"></div>
<script>
window.$docsify = {
  //...
}
</script>
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
</body>
</html>
```
### Explore how Docsify works

If you cloned my [GitHub repo][10] and changed into the DocsifyDemo directory, you should see a file structure like this:

![File contents in the cloned GitHub][19]

(Bryant Son, [CC BY-SA 4.0][7])

File/Folder Name | What It Is
---|---
index.html | The main Docsify initiation file (and the most important file)
_sidebar.md | Renders the navigation
README.md | The default Markdown file at the root of your documentation
images | Contains a sample .jpg image used by the README.md
Other directories and files | Contain navigable Markdown files

`index.html` is the only thing required for Docsify to work. Open the file, so you can explore the contents:
```
<!-- index.html -->

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<meta name="viewport" content="width=device-width,initial-scale=1">
<meta charset="UTF-8">
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
<title>Docsify Demo</title>
</head>
<body>
<div id="app"></div>
<script>
window.$docsify = {
  el: "#app",
  repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
  loadSidebar: true,
}
</script>
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
</body>
</html>
```
This is essentially just a plain HTML file, but take a look at these two lines:

```
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
... SOME OTHER STUFF ...
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
```

These lines use content delivery network (CDN) URLs to serve the CSS and JavaScript that transform the site into a Docsify site. As long as you include these lines, you can turn your regular GitHub page into a Docsify page.

The first line after the `body` tag specifies what to render:
```
<div id="app"></div>
```

Docsify uses the [single-page application][21] (SPA) approach to render a requested page instead of refreshing an entirely new page.

Last, look at the lines inside the `script` block:

```
<script>
window.$docsify = {
  el: "#app",
  repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
  loadSidebar: true,
}
</script>
```

In this block:
  * The `el` property basically says, "Hey, this is the `id` I am looking for, so locate the `id` and render it there."
  * Changing the `repo` value identifies which page users will be redirected to when they click the GitHub icon in the top-right corner.

![GitHub icon][22]

(Bryant Son, [CC BY-SA 4.0][7])

  * Setting `loadSidebar` to `true` makes Docsify look for the `_sidebar.md` file that contains your navigation links.

You can find all the options in the [Configuration][23] section of Docsify's docs.
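For instance, a slightly expanded configuration might look like the following sketch. The `name` and `subMaxLevel` options come from the Configuration docs and are not used in the sample repo; they are shown here only as an illustration:

```javascript
window.$docsify = {
  el: "#app",
  repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
  loadSidebar: true,
  name: 'Docsify Demo', // illustrative: title shown above the sidebar navigation
  subMaxLevel: 2        // illustrative: also list each page's headings (to level 2)
}
```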
Next, look at the `_sidebar.md` file. Because you set the `loadSidebar` property value to `true` in `index.html`, Docsify will look for the `_sidebar.md` file and generate the navigation from its contents. The `_sidebar.md` contents in the sample repo are:

```
<!-- docs/_sidebar.md -->

* [HOME](./)

* [Tutorials](./tutorials/index)
  * [Tomcat](./tutorials/tomcat/index)
  * [Cloud](./tutorials/cloud/index)
  * [Java](./tutorials/java/index)

* [About](./about/index)

* [Contact](./contact/index)
```

This uses Markdown's link format to create the navigation. Note that the Tomcat, Cloud, and Java links are indented; this causes them to be rendered as sublinks under their parent link.
Files like `README.md` and `images` pertain to the repository's structure, but all the other Markdown files are related to your Docsify webpage.

Modify the files you downloaded however you want, based on your needs. In the next step, you will add these files to your GitHub repo, enable GitHub Pages, and finish the project.

### Enable GitHub Pages

Create a sample GitHub repo, then use the following Git commands to stage, commit, and push your code:

```
$ git clone LOCATION_TO_YOUR_GITHUB_REPO
$ cd LOCATION_TO_YOUR_GITHUB_REPO
$ git add .
$ git commit -m "My first Docsify!"
$ git push
```
Set up your GitHub Pages page. From inside your new GitHub repo, click **Settings**:

![Settings link in GitHub][24]

(Bryant Son, [CC BY-SA 4.0][7])

Scroll down until you see **GitHub Pages**:

![GitHub Pages settings][25]

(Bryant Son, [CC BY-SA 4.0][7])

Look for the **Source** section:

![GitHub Pages settings][26]

(Bryant Son, [CC BY-SA 4.0][7])

Click the drop-down menu under **Source**. Usually, you will set this to the **master branch**, but you can use another branch if you'd like:

![Setting Source to master branch][27]

(Bryant Son, [CC BY-SA 4.0][7])

That's it! You should now have a link to your GitHub Pages page. Clicking the link will take you there, and it should render with Docsify:

![Link to GitHub Pages docs site][28]

(Bryant Son, [CC BY-SA 4.0][7])

And it should look something like this:

![Example Docsify site on GitHub Pages][29]

(Bryant Son, [CC BY-SA 4.0][7])
### Conclusion

By editing a single HTML file and some Markdown text, you can create an awesome-looking documentation site with Docsify. What do you think? Please leave a comment, and also share any other open source tools that can be used with GitHub Pages.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/7/docsify-github-pages

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
[2]: https://docsify.js.org
[3]: https://pages.github.com/
[4]: https://docs.github.com/en/github/working-with-github-pages/about-github-pages-and-jekyll
[5]: https://docsify.js.org/#/?id=features
[6]: https://opensource.com/sites/default/files/uploads/docsify1_ui.jpg (Docsify)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://docsify.js.org/#/quickstart?id=quick-start
[9]: https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo
[10]: https://github.com/bryantson/OpensourceDotComDemos
[11]: https://docsify.js.org/#/quickstart?id=manual-initialization
[12]: http://december.com/html/4/element/html.html
[13]: http://december.com/html/4/element/head.html
[14]: http://december.com/html/4/element/meta.html
[15]: http://december.com/html/4/element/link.html
[16]: http://december.com/html/4/element/body.html
[17]: http://december.com/html/4/element/div.html
[18]: http://december.com/html/4/element/script.html
[19]: https://opensource.com/sites/default/files/uploads/docsify3_files.jpg (File contents in the cloned GitHub)
[20]: http://december.com/html/4/element/title.html
[21]: https://en.wikipedia.org/wiki/Single-page_application
[22]: https://opensource.com/sites/default/files/uploads/docsify4_github-icon_rev_0.jpg (GitHub icon)
[23]: https://docsify.js.org/#/configuration?id=configuration
[24]: https://opensource.com/sites/default/files/uploads/docsify5_githubsettings_0.jpg (Settings link in GitHub)
[25]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev.jpg (GitHub Pages settings)
[26]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev2.jpg (GitHub Pages settings)
[27]: https://opensource.com/sites/default/files/uploads/docsify8_setsource_rev.jpg (Setting Source to master branch)
[28]: https://opensource.com/sites/default/files/uploads/docsify9_link_rev.jpg (Link to GitHub Pages docs site)
[29]: https://opensource.com/sites/default/files/uploads/docsify2_examplesite.jpg (Example Docsify site on GitHub Pages)
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Creating and debugging Linux dump files)
[#]: via: (https://opensource.com/article/20/8/linux-dump)
[#]: author: (Stephan Avenwedde https://opensource.com/users/hansic99)

Creating and debugging Linux dump files
======

Knowing how to deal with dump files will help you find and fix hard-to-reproduce bugs in an application.

![Magnifying glass on code][1]
Crash dump, memory dump, core dump, system dump … all produce the same outcome: a file containing the state of an application's memory at a specific time, usually when the application crashes.

Knowing how to deal with these files can help you find the root cause(s) of a failure. Even if you are not a developer, dump files created on your system can be very helpful (as well as approachable) in understanding software.

This is a hands-on article, and you can follow along with the example by cloning the sample application repository with:

```
git clone https://github.com/hANSIc99/core_dump_example.git
```
### How signals relate to dumps
|
||||
|
||||
Signals are a kind of interprocess communication between the operating system and the user applications. Linux uses the signals defined in the [POSIX standard][2]. On your system, you can find the standard signals defined in `/usr/include/bits/signum-generic.h`. There is also an informative [man signal][3] page if you want more on using signals in your application. Put simply, Linux uses signals to trigger further activities based on whether they were expected or unexpected.
|
||||
|
||||
When you quit a running application, the application will usually receive the `SIGTERM` signal. Because this type of exit signal is expected, this action will not create a memory dump.
The following signals will cause a dump file to be created (source: [GNU C Library][4]):

* SIGFPE: Erroneous arithmetic operation
* SIGILL: Illegal instruction
* SIGSEGV: Invalid access to storage
* SIGBUS: Bus error
* SIGABRT: An error detected by the program and reported by calling abort
* SIGIOT: Labeled archaic on Fedora, this signal used to trigger on `abort()` on a [PDP-11][5] and now maps to SIGABRT
### Creating dump files

Navigate to the `core_dump_example` directory, run `make`, and execute the sample with the `-c1` switch:

```
./coredump -c1
```

The application should exit in state 4 with an error:
![Dump written][6]

(Stephan Avenwedde, [CC BY-SA 4.0][7])

"Abgebrochen (Speicherabzug geschrieben)" translates to "Aborted (core dumped)."

Whether a core dump is created or not is determined by the resource limit of the user running the process. You can modify the resource limits with the `ulimit` command.

Check the current setting for core dump creation:
```
ulimit -c
```

If it outputs `unlimited`, then it is using the (recommended) default. Otherwise, correct the limit with:

```
ulimit -c unlimited
```

To disable creating core dumps, type:

```
ulimit -c 0
```

The number specifies the maximum size of core files in kilobytes.
### What are core dumps?

The way the kernel handles core dumps is defined in:

```
/proc/sys/kernel/core_pattern
```

I'm running Fedora 31, and on my system, the file contains:

```
/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h
```

This shows that core dumps are forwarded to the `systemd-coredump` utility. The contents of `core_pattern` can vary widely between Linux distributions. When `systemd-coredump` is in use, the dump files are saved compressed under `/var/lib/systemd/coredump`. You don't need to touch the files directly; instead, you can use `coredumpctl`. For example:
```
coredumpctl list
```

shows all available dump files saved on your system.

With `coredumpctl dump`, you can retrieve information from the last dump file saved:
```
[stephan@localhost core_dump_example]$ ./coredump
Application started…

(…….)

Message: Process 4598 (coredump) of user 1000 dumped core.

Stack trace of thread 4598:
#0  0x00007f4bbaf22625 __GI_raise (libc.so.6)
#1  0x00007f4bbaf0b8d9 __GI_abort (libc.so.6)
#2  0x00007f4bbaf664af __libc_message (libc.so.6)
#3  0x00007f4bbaf6da9c malloc_printerr (libc.so.6)
#4  0x00007f4bbaf6f49c _int_free (libc.so.6)
#5  0x000000000040120e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
#6  0x00000000004013b1 n/a (/home/stephan/Dokumente/core_dump_example/coredump)
#7  0x00007f4bbaf0d1a3 __libc_start_main (libc.so.6)
#8  0x000000000040113e n/a (/home/stephan/Dokumente/core_dump_example/coredump)
Refusing to dump core to tty (use shell redirection or specify --output).
```

This shows that the process was stopped by `SIGABRT`. The stack trace in this view is not very detailed because it does not include function names. However, with `coredumpctl debug`, you can simply open the dump file with a debugger ([GDB][8] by default). Type `bt` (short for backtrace) to get a more detailed view:
```
Core was generated by `./coredump -c1'.
Program terminated with signal SIGABRT, Aborted.
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50        return ret;
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007fc37a9aa8d9 in __GI_abort () at abort.c:79
#2  0x00007fc37aa054af in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7fc37ab14f4b "%s\n") at ../sysdeps/posix/libc_fatal.c:181
#3  0x00007fc37aa0ca9c in malloc_printerr (str=str@entry=0x7fc37ab130e0 "free(): invalid pointer") at malloc.c:5339
#4  0x00007fc37aa0e49c in _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:4173
#5  0x000000000040120e in freeSomething(void*) ()
#6  0x0000000000401401 in main ()
```

The memory addresses of `main()` and `freeSomething()` are quite low compared to the subsequent frames. Because shared objects are mapped to an area at the end of the virtual address space, you can assume that the `SIGABRT` was caused by a call into a shared library. Memory addresses of shared objects are not constant between invocations, so it is perfectly normal to see varying addresses between runs.

The stack trace shows that the subsequent calls originate from `malloc.c`, which indicates that something with memory (de-)allocation could have gone wrong.

In the source code, you can see (even without any knowledge of C++) that it tried to free a pointer that was not returned by a memory management function. This results in undefined behavior and causes the `SIGABRT`:
```
void freeSomething(void *ptr){
    free(ptr);
}

int nTmp = 5;
int *ptrNull = &nTmp;
freeSomething(ptrNull);
```
The systemd coredump utility can be configured under `/etc/systemd/coredump.conf`. Rotation and cleanup of dump files can be configured in `/etc/systemd/system/systemd-tmpfiles-clean.timer`.

You can find more information about `coredumpctl` on its [man page][10].
### Compiling with debug symbols

Open the `Makefile` and comment out the last part of line 9. It should now look like:

```
CFLAGS =-Wall -Werror -std=c++11 -g
```

The `-g` switch tells the compiler to include debug information. Start the application, this time with the `-c2` switch:
```
./coredump -c2
```

You will get a floating-point exception. Open the dump in GDB with:

```
coredumpctl debug
```

This time, you are pointed directly to the line in the source code that caused the error:
```
Reading symbols from /home/stephan/Dokumente/core_dump_example/coredump…
[New LWP 6218]
Core was generated by `./coredump -c2'.
Program terminated with signal SIGFPE, Arithmetic exception.
#0  0x0000000000401233 in zeroDivide () at main.cpp:29
29          nRes = 5 / nDivider;
(gdb)
```

Type `list` to get a better overview of the source code:
```
(gdb) list
24      int zeroDivide(){
25          int nDivider = 5;
26          int nRes = 0;
27          while(nDivider > 0){
28              nDivider--;
29              nRes = 5 / nDivider;
30          }
31          return nRes;
32      }
```

Use the command `info locals` to retrieve the values of the local variables at the point in time when the application failed:
```
(gdb) info locals
nDivider = 0
nRes = 5
```

In combination with the source code, you can see that you ran into a division by zero:

```
nRes = 5 / 0
```
### Conclusion

Knowing how to deal with dump files will help you find and fix hard-to-reproduce random bugs in an application. And if it is not your application, forwarding a core dump to the developer will help them find and fix the problem.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/linux-dump

Author: [Stephan Avenwedde][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/hansic99
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0 (Magnifying glass on code)
[2]: https://en.wikipedia.org/wiki/POSIX
[3]: https://man7.org/linux/man-pages/man7/signal.7.html
[4]: https://www.gnu.org/software/libc/manual/html_node/Program-Error-Signals.html#Program-Error-Signals
[5]: https://en.wikipedia.org/wiki/PDP-11
[6]: https://opensource.com/sites/default/files/uploads/dump_written.png (Dump written)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://www.gnu.org/software/gdb/
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/free.html
[10]: https://man7.org/linux/man-pages/man1/coredumpctl.1.html
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (An attempt to make a font look more handwritten)
[#]: via: (https://jvns.ca/blog/2020/08/08/handwritten-font/)
[#]: author: (Julia Evans https://jvns.ca/)

An attempt to make a font look more handwritten
======
I’m actually not super happy with the results of this experiment, but I wanted to share it anyway because it was very easy and fun to play with fonts. And somebody asked me how to do it and I told her I’d write a blog post about it :)

### background: the original handwritten font

Some background: I have a font of my handwriting that I’ve been using in my zines for a couple of years. I made it using a delightful app called [iFontMaker][1]. They pitch themselves on their website as “You can create your handmade typeface in less than 5 minutes just with your fingers”. In my experience the “5 minutes” part is pretty accurate – I might have spent more like 15 minutes. I’m skeptical of the “just your fingers” claim – I used an Apple Pencil, which has much better accuracy. But it is extremely easy to make a TTF font of your handwriting with the app, and if you happen to already have an Apple Pencil and iPad I think it’s a fun way to spend $7.99.

Here’s what my font looks like. The “CONNECT” text on the left is my actual handwriting, and the paragraph on the right is the font. There are actually 2 fonts – there’s a regular font and a handwritten “monospace” font. (which actually isn’t monospace in practice, I haven’t figured out how to make an actual monospace font in iFontMaker)
![][2]

### the goal: have more character variation in the font

In the screenshot above, it’s pretty obvious that it’s a font and not actual handwriting. It’s easiest to see this when you have two of the same letter next to each other, like in “HTTP”.

So I thought it might be fun to use some OpenType features to somehow introduce a little more variation into this font, like maybe the two Ts could be different. I didn’t know how to do this though!
### idea from Tristan Hume: use OpenType!

Then I was at !!Con 2020 in May (all the [talk recordings are here!][3]) and saw this talk by Tristan Hume about using OpenType to place commas in big numbers by using a special font. His talk and blog post are both great, so here are a bunch of links – the live demo is maybe the fastest way to see his results.

* a live demo: [Numderline Test][4]
* the blog post: [Commas in big numbers everywhere: An OpenType adventure][5]
* the talk: [!!Con 2020 - Using font shaping to put commas in big numbers EVERYWHERE!! by Tristan Hume][6]
* the github repo: <https://github.com/trishume/numderline/blob/master/patcher.py>
### the main idea: OpenType lets you replace characters based on context

I started out being extremely confused about what OpenType even is. I still don’t know much, but I learned that you can write extremely simple OpenType rules to change how a font looks, and you don’t even have to really understand anything about fonts.

Here’s an example rule:

```
sub a' b by other_a;
```

What `sub a' b by other_a;` means is: if an `a` glyph is before a `b`, then replace the `a` with the glyph `other_a`.

So this means I can make `ab` appear different from `ac` in the font. It’s not random the way handwriting is, but it does introduce a little bit of variation.
### OpenType reference documentation: awesome

The best documentation I found for OpenType was this [OpenType™ Feature File Specification][7] reference. There are a lot of examples of cool things you can do in there, like replace “ffi” with a ligature.

### how to apply these rules: `fonttools`

Adding new OpenType rules to a font is extremely easy. There’s a Python library called `fonttools`, and these 5 lines of code will apply a list of OpenType rules (in `rules.fea`) to the font file `input.ttf`.

```
from fontTools.ttLib import TTFont
from fontTools.feaLib.builder import addOpenTypeFeatures

ft_font = TTFont('input.ttf')
addOpenTypeFeatures(ft_font, 'rules.fea', tables=['GSUB'])
ft_font.save('output.ttf')
```

`fontTools` also provides a couple of command-line tools called `ttx` and `fonttools`. `ttx` converts a TTF font into an XML file, which was useful to me because I wanted to rename some glyphs in my font but did not understand anything about fonts. So I just converted my font into an XML file, used `sed` to rename the glyphs, and then used `ttx` again to convert the XML file back into a `ttf`.

`fonttools merge` let me merge my 3 handwriting fonts into 1 so that I had all the glyphs I needed in 1 file.
### the code

I put my extremely hacky code for doing this in a repository called [font-mixer][8]. It’s like 33 lines of code and I think it’s pretty straightforward. (it’s all in `run.sh` and `combine.py`)

### the results

Here’s a small sample of the old font and the new font. I don’t think the new font “feels” that much more like handwriting – there’s a little more variation, but it still doesn’t compare to actual handwritten text (at the bottom).

It feels a little uncanny valley to me, like it’s obviously still a font but it’s pretending to be something else.

![][9]

And here’s a sample of the same text actually written by hand:

![][10]

It’s possible that the results would be better if I was more careful about how I made the 2 other handwriting fonts I mixed the original font with.

### it’s cool that it’s so easy to add opentype rules!

Mostly what was delightful to me here is that it’s so easy to add OpenType rules to change how fonts work, like you can pretty easily make a font where the word “the” is always replaced with “teh” (typos all the time!).

I still don’t know how to make a more realistic handwriting font though :). I’m still using the old one (without the extra variations) and I’m pretty happy with it.
--------------------------------------------------------------------------------

via: https://jvns.ca/blog/2020/08/08/handwritten-font/

Author: [Julia Evans][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://jvns.ca/
[b]: https://github.com/lujun9972
[1]: https://2ttf.com/
[2]: https://jvns.ca/images/font-sample-connect.png
[3]: http://bangbangcon.com/recordings.html
[4]: https://thume.ca/numderline/
[5]: https://blog.janestreet.com/commas-in-big-numbers-everywhere/
[6]: https://www.youtube.com/watch?v=Biqm9ndNyC8
[7]: https://adobe-type-tools.github.io/afdko/OpenTypeFeatureFileSpecification.html
[8]: https://github.com/jvns/font-mixer/
[9]: https://jvns.ca/images/font-mixer-comparison.png
[10]: https://jvns.ca/images/handwriting-sample.jpeg
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Use GNU on Windows with MinGW)
[#]: via: (https://opensource.com/article/20/8/gnu-windows-mingw)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Use GNU on Windows with MinGW
======

Install the GNU Compiler Collection and other GNU components to enable GNU Autotools on Windows.

![Windows][1]
If you're a hacker running Windows, you don't need a proprietary application to compile code. With the [Minimalist GNU for Windows][2] (MinGW) project, you can download and install the [GNU Compiler Collection][3] (GCC) along with several other essential GNU components to enable [GNU Autotools][4] on your Windows computer.

### Install MinGW

The easiest way to install MinGW is through mingw-get, a graphical user interface (GUI) application that helps you select which components to install and keep them up to date. To run it, [download `mingw-get-setup.exe`][5] from the project's host. Install it as you would any other EXE file by clicking through the installation wizard to completion.

![Installing mingw-get][6]

(Seth Kenlon, [CC BY-SA 4.0][7])

### Install GCC on Windows

So far, you've only installed an installer—or more accurately, a dedicated _package manager_ called mingw-get. Launch mingw-get to select which MinGW project applications you want to install on your computer.

First, select **mingw-get** from your application menu to launch it.

![Installing GCC with MinGW][8]

(Seth Kenlon, [CC BY-SA 4.0][7])

To install GCC, click the GCC and G++ package to mark the GNU C and C++ compilers for installation. To complete the process, select **Apply Changes** from the **Installation** menu in the top-left corner of the mingw-get window.

Once GCC is installed, you can run it from [PowerShell][9] using its full path:

```
PS> C:\MinGW\bin\gcc.exe --version
gcc.exe (MinGW.org GCC Build-x) x.y.z
Copyright (C) 2019 Free Software Foundation, Inc.
```
### Run Bash on Windows

While it calls itself "minimalist," MinGW also provides an optional [Bourne shell][10] command-line interpreter called MSYS (which stands for Minimal System). It's an alternative to Microsoft's `cmd.exe` and PowerShell, and it defaults to Bash. Aside from being one of the (justifiably) most popular shells, Bash is useful when porting open source applications to the Windows platform because many open source projects assume a [POSIX][11] environment.

You can install MSYS from the mingw-get GUI or from within PowerShell:

```
PS> mingw-get install msys
```

To try out Bash, launch it using its full path:

```
PS> C:\MinGW\msys/1.0/bin/bash.exe
bash.exe-$ echo $0
"C:\MinGW\msys/1.0/bin/bash.exe"
```
### Set the path on Windows

You probably don't want to have to type the full path for every command you want to use. Add the directories containing your new GNU executables to your path in Windows. There are two root directories of executables to add: one for MinGW (including GCC and its related toolchain) and another for MSYS (including Bash and many common tools from the GNU and [BSD][12] projects).

To modify your environment in Windows, click on the application menu and type `env`.

![Edit your env][13]

(Seth Kenlon, [CC BY-SA 4.0][7])

A Preferences window will open; click the **Environment variables** button located near the bottom of the window.

In the **Environment variables** window, double-click the **Path** selection from the bottom panel.

In the **Edit Environment variables** window that appears, click the **New** button on the right. Create a new entry reading **C:\MinGW\msys\1.0\bin** and click **OK**. Create a second new entry the same way, this one reading **C:\MinGW\bin**, and click **OK**.

![Set your env][14]

(Seth Kenlon, [CC BY-SA 4.0][7])

Accept these changes in each Preferences window. You can reboot your computer to ensure the new variables are detected by all applications, or just relaunch your PowerShell window.

From now on, you can call any MinGW command without specifying the full path, because the full path is in the `%PATH%` environment variable of your Windows system, which PowerShell inherits.
### Hello world

You're all set up now, so put your new MinGW system to a small test. If you're a [Vim][15] user, launch it, and enter this obligatory "hello world" code:

```
#include <stdio.h>
#include <iostream>

using namespace std;

int main() {
    cout << "Hello open source." << endl;
    return 0;
}
```

Save the file as `hello.cpp`, then compile it with the C++ component of GCC:

```
PS> g++ hello.cpp --output hello
```

And, finally, run it:

```
PS> .\hello.exe
Hello open source.
PS>
```
There's much more to MinGW than what I can cover here. After all, MinGW opens a whole world of open source and potential for custom code, so take advantage of it. For a wider world of open source, you can also [give Linux a try][16]. You'll be amazed at what's possible when all limits are removed. But in the meantime, give MinGW a try and enjoy the freedom of the GNU.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/gnu-windows-mingw

Author: [Seth Kenlon][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)
[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/more_windows.jpg?itok=hKk64RcZ (Windows)
[2]: http://mingw.org
[3]: https://gcc.gnu.org/
[4]: https://opensource.com/article/19/7/introduction-gnu-autotools
[5]: https://osdn.net/projects/mingw/releases/
[6]: https://opensource.com/sites/default/files/uploads/mingw-install.jpg (Installing mingw-get)
[7]: https://creativecommons.org/licenses/by-sa/4.0/
[8]: https://opensource.com/sites/default/files/uploads/mingw-packages.jpg (Installing GCC with MinGW)
[9]: https://opensource.com/article/19/8/variables-powershell
[10]: https://en.wikipedia.org/wiki/Bourne_shell
[11]: https://opensource.com/article/19/7/what-posix-richard-stallman-explains
[12]: https://opensource.com/article/19/3/netbsd-raspberry-pi
[13]: https://opensource.com/sites/default/files/uploads/mingw-env.jpg (Edit your env)
[14]: https://opensource.com/sites/default/files/uploads/mingw-env-set.jpg (Set your env)
[15]: https://opensource.com/resources/what-vim
[16]: https://opensource.com/article/19/7/ways-get-started-linux
sources/tech/20200820 Learn the basics of programming with C.md
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learn the basics of programming with C)
[#]: via: (https://opensource.com/article/20/8/c-programming-cheat-sheet)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Learn the basics of programming with C
======

Our new cheat sheet puts all the essentials of C syntax on an easy-to-read handout.

![Cheat Sheet cover image][1]
In 1972, Dennis Ritchie was at Bell Labs, where a few years earlier, he and his fellow team members invented Unix. After creating an enduring OS (still in use today), he needed a good way to program those Unix computers so that they could perform new tasks. It seems strange now, but at the time, there were relatively few programming languages; Fortran, Lisp, [Algol][2], and B were popular but insufficient for what the Bell Labs researchers wanted to do. Demonstrating a trait that would become known as a primary characteristic of programmers, Dennis Ritchie created his own solution. He called it C, and nearly 50 years later, it's still in widespread use.

### Why you should learn C

Today, there are many languages that provide programmers more features than C. The most obvious one is C++, a rather blatantly named language that built upon C to create a nice object-oriented language. There are many others, though, and there's a good reason they exist. Computers are good at consistent repetition, so anything predictable enough to be built into a language means less work for programmers. Why spend two lines recasting an `int` to a `long` in C when one line of C++ (`long x = long(n);`) can do the same?

And yet C is still useful today.

First of all, C is a fairly minimal and straightforward language. There aren't very advanced concepts beyond the basics of programming, largely because C is literally one of the foundations of modern programming languages. For instance, C features arrays, but it doesn't offer a dictionary (unless you write it yourself). When you learn C, you learn the building blocks of programming that can help you recognize the improved and elaborate designs of recent languages.

Because C is a minimal language, your applications are likely to get a boost in performance that they wouldn't see with many other languages. It's easy to get caught up in the race to the bottom when you're thinking about how fast your code executes, so it's important to ask whether you _need_ more speed for a specific task. And with C, you have less to obsess over in each line of code, compared to, say, Python or Java. C is fast. There's a good reason the Linux kernel is written in C.

Finally, C is easy to get started with, especially if you're running Linux. You can already run C code because Linux systems include the GNU C library (`glibc`). To write and build it, all you need to do is install a compiler, open a text editor, and start coding.
### Getting started with C

If you're running Linux, you can install a C compiler using your package manager. On Fedora or RHEL:

```
$ sudo dnf install gcc
```

On Debian and similar:

```
$ sudo apt install build-essential
```

On macOS, you can [install Homebrew][3] and use it to install [GCC][4]:

```
$ brew install gcc
```

On Windows, you can install a minimal set of GNU utilities, GCC included, with [MinGW][5].

Verify you've installed GCC on Linux or macOS:
```
$ gcc --version
gcc (GCC) x.y.z
Copyright (C) 20XX Free Software Foundation, Inc.
```

On Windows, provide the full path to the EXE file:

```
PS> C:\MinGW\bin\gcc.exe --version
gcc.exe (MinGW.org GCC Build-2) x.y.z
Copyright (C) 20XX Free Software Foundation, Inc.
```

### C syntax

C isn't a scripting language. It's compiled, meaning that it gets processed by a C compiler to produce a binary executable file. This is different from a scripting language like [Bash][6] or a hybrid language like [Python][7].

In C, you create _functions_ to carry out your desired task. A function named `main` is executed by default.

Here's a simple "hello world" program written in C:
```
#include <stdio.h>

int main() {
    printf("Hello world");
    return 0;
}
```

The first line includes a _header file_, essentially free and very low-level C code that you can reuse in your own programs, called `stdio.h` (standard input and output). A function called `main` is created and populated with a rudimentary print statement. Save this text to a file called `hello.c`, then compile it with GCC:

```
$ gcc hello.c --output hello
```

Try running your C program:

```
$ ./hello
Hello world$
```
#### Return values

It's part of the Unix philosophy that a function "returns" something to you after it executes: nothing upon success and something else (an error message, for example) upon failure. These return codes are often represented with numbers (integers, to be precise): 0 represents nothing, and any number higher than 0 represents some non-successful state.

There's a good reason Unix and Linux are designed to expect silence upon success. It's so that you can always plan for success by assuming no errors nor warnings will get in your way when executing a series of commands. Similarly, functions in C expect no errors by design.
You can see this for yourself with one small modification to make your program appear to fail:

```
#include <stdio.h>

int main() {
    printf("Hello world");
    return 1;
}
```

Compile it:

```
$ gcc hello.c --output failer
```
Now run it using a built-in Linux test for success. The `&&` operator executes the second half of a command only upon success. For example:

```
$ echo "success" && echo "it worked"
success
it worked
```

The `||` test executes the second half of a command upon _failure_:

```
$ ls blah || echo "it did not work"
ls: cannot access 'blah': No such file or directory
it did not work
```

Now try your program, which does _not_ return 0 upon success; it returns 1 instead:

```
$ ./failer && echo "it worked"
Hello world
```

The program executed successfully, yet did not trigger the second command.
|
||||
|
||||
#### Variables and types

In some languages, you can create variables without specifying what _type_ of data they contain. Those languages are designed so that the interpreter runs tests against a variable in an attempt to discover what kind of data it contains. For instance, Python knows that `var=1` defines an integer when you create an expression that adds `var` to something that is obviously an integer. It similarly knows that the word `world` is a string when you concatenate `hello` and `world`.

C doesn't do any of these investigations for you; you must define your variable's type. There are several types of variables, including integers (`int`), characters (`char`), floating-point numbers (`float`), and Booleans (`bool`).
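As a minimal sketch (the variable names here are invented for illustration, not taken from this article), each variable is declared with its type keyword:

```c
#include <stdbool.h>  /* required for the bool type in C99 and later */

int count = 42;        /* integer */
char letter = 'c';     /* single character */
float price = 9.99f;   /* floating-point number */
bool ready = true;     /* Boolean */
```

Once declared, a variable's type never changes; the compiler uses it to decide how much memory to reserve and which operations are legal.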
You may also notice there's no string type. Unlike Python, Java, Lua, and many others, C doesn't have a string type; instead, it treats strings as arrays of characters.

Here's some simple code that establishes a `char` array variable and then prints it to your screen using [printf][9] along with a short message:
```
#include <stdio.h>

int main() {
    char var[6] = "hello";
    printf("Your string is: %s\r\n", var);
    return 0;
}
```
You may notice that this code sample allows six characters for a five-letter word. This is because there's a hidden null terminator at the end of the string, which takes up one byte in the array. You can run the code by compiling and executing it:

```
$ gcc hello.c --output hello
$ ./hello
Your string is: hello
```
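The difference between the array's storage and the visible characters can be checked directly; here is a small sketch (the helper function names are invented here, not part of the article's program):

```c
#include <string.h>

/* strlen counts only the visible characters, while sizeof on the array
 * also includes the hidden null terminator ('\0') at the end. */
size_t visible_length(void) {
    char word[6] = "hello";
    return strlen(word);   /* 5 visible characters */
}

size_t storage_size(void) {
    char word[6] = "hello";
    return sizeof(word);   /* 6 bytes, including the terminator */
}
```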
### Functions

As in other languages, C functions can take parameters. You pass parameters from one function to another by defining the type of data you want a function to accept:
```
#include <stdio.h>

int printmsg(char a[]) {
    printf("String is: %s\r\n", a);
    return 0;
}

int main() {
    char a[6] = "hello";
    printmsg(a);
    return 0;
}
```
The way this code sample breaks one function into two isn't very useful, but it demonstrates that `main` runs by default and shows how to pass data between functions.

### Conditionals

In real-world programming, you usually want your code to make decisions based on data. This is done with _conditional_ statements, and the `if` statement is one of the most basic of them.

To make this example program more dynamic, you can include the `string.h` header file, which contains code to examine (as the name implies) strings. Try testing whether the length of the string passed to the `printmsg` function is greater than 0 by using the `strlen` function from `string.h`:
```
#include <stdio.h>
#include <string.h>

int printmsg(char a[]) {
    size_t len = strlen(a);
    if (len > 0) {
        printf("String is: %s\r\n", a);
    }
    return 0;
}

int main() {
    char a[6] = "hello";
    printmsg(a);
    return 0;
}
```
As implemented in this example, the condition will never be false because the string provided is always "hello," whose length is always greater than 0. The final touch to this humble re-implementation of the `echo` command is to accept input from the user.
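If the string could ever be empty, an `else` branch would let the program report that case instead of staying silent. Here's a sketch of such a variant (an illustration, not part of the article's program):

```c
#include <stdio.h>
#include <string.h>

/* A variant of printmsg with an else branch: returns 0 when a message
 * was printed and 1 when the string was empty. */
int printmsg(const char a[]) {
    if (strlen(a) > 0) {
        printf("String is: %s\r\n", a);
        return 0;
    } else {
        printf("String is empty\r\n");
        return 1;
    }
}
```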
### Command arguments

Every C program's `main` function can receive two arguments when the program is launched: a count of how many items are on the command line (`argc`) and an array containing each item (`argv`). For example, suppose you issue this imaginary command:
```
$ foo -i bar
```

Here, `argc` is three, and the contents of `argv` are:

  * `argv[0] = foo`
  * `argv[1] = -i`
  * `argv[2] = bar`

Can you modify the example C program to accept `argv[2]` as the string instead of defaulting to `hello`?
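One possible answer to that exercise (a sketch, not the article's official solution) is to read `argv` and fall back to "hello" when no third item is supplied:

```c
#include <stdio.h>
#include <string.h>

/* Picks argv[2] when the command line has at least three items,
 * otherwise falls back to the default string. */
const char *pick_string(int argc, char *argv[]) {
    return (argc > 2) ? argv[2] : "hello";
}

int printmsg(const char a[]) {
    if (strlen(a) > 0) {
        printf("String is: %s\r\n", a);
    }
    return 0;
}
```

With `main` declared as `int main(int argc, char *argv[])` and its body calling `printmsg(pick_string(argc, argv));`, running `./hello -i bar` would print `String is: bar`.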
### Imperative programming

C is an imperative programming language. It isn't object-oriented, and it has no class structure. Using C can teach you a lot about how data is processed and how to better manage the data you generate as your code runs. Use C enough, and you'll eventually be able to write libraries that other languages, such as Python and Lua, can use.

To learn more about C, you need to use it. Look in `/usr/include/` for useful C header files, and see what small tasks you can do to make C useful to you. As you learn, use our [C cheat sheet][11] by [Jim Hall][12] of FreeDOS. It's got all the basics on one double-sided sheet, so you can immediately access all the essentials of C syntax while you practice.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/c-programming-cheat-sheet

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coverimage_cheat_sheet.png?itok=lYkNKieP (Cheat Sheet cover image)
[2]: https://opensource.com/article/20/6/algol68
[3]: https://opensource.com/article/20/6/homebrew-mac
[4]: https://gcc.gnu.org/
[5]: https://opensource.com/article/20/8/gnu-windows-mingw
[6]: https://opensource.com/resources/what-bash
[7]: https://opensource.com/resources/python
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[9]: https://opensource.com/article/20/8/printf
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[11]: https://opensource.com/downloads/c-programming-cheat-sheet
[12]: https://opensource.com/users/jim-hall
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Jargon Buster: What is Desktop Environment in Linux?)
[#]: via: (https://itsfoss.com/what-is-desktop-environment/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Linux Jargon Buster: What is Desktop Environment in Linux?
======
One of the most commonly used terms in the desktop Linux world is Desktop Environment (DE). If you are new to Linux, you should understand this frequently used term.

### What is Desktop Environment in Linux?

A desktop environment is the bundle of components that provide common graphical user interface (GUI) elements such as icons, toolbars, wallpapers, and desktop widgets. Thanks to the desktop environment, you can use Linux graphically with your mouse and keyboard, as you do in Windows.

There are several desktop environments, and they determine what your Linux system looks like and how you interact with it.

Most desktop environments have their own set of integrated applications and utilities so that users get a uniform feel while using the OS. So you get a file explorer, desktop search, a menu of applications, wallpaper and screensaver utilities, text editors, and more.

Without a desktop environment, your Linux system will just have a terminal-like utility, and you'll have to interact with it using commands only.

![Screenshot of GNOME Desktop Environment][1]
### Different desktop environments in Linux

A desktop environment is also sometimes referred to as a DE.

As I mentioned earlier, there are [various desktop environments available for Linux][2]. Why so?

Think of desktop environments as clothes. Clothes determine what you look like. If you wear skinny jeans and flat shoes, you would look good, but running or hiking in those clothes won't be comfortable.

Some desktop environments, such as [GNOME][3], focus on a modern look and user experience, while desktops like [Xfce][4] focus more on using fewer computing resources than on fancy graphics.

![Screenshot of Xfce Desktop Environment][5]

Your clothes depend on your needs and determine your looks; the same is the case with desktop environments. You have to decide whether you want something that looks good or something that lets your system run faster.

Some of the [popular desktop environments][2] are:

  * GNOME – Uses plenty of system resources but gives you a modern, polished system
  * Xfce – Vintage look but light on resources
  * KDE – Highly customizable desktop with moderate usage of system resources
  * LXDE – The entire focus is on using as few resources as possible
  * Budgie – Modern looks and moderate use of system resources
### Linux distributions and their DE variants

![][6]

The same desktop environment can be available on several Linux distributions, and a Linux distribution may offer several desktop environments.

For example, Fedora and Ubuntu both use the GNOME desktop by default, but both also offer other desktop environments.

The beauty and flexibility of Linux is that you can install a desktop environment on any Linux distribution yourself. But most Linux distributions save you this trouble and offer ready-to-install ISO images for different desktop environments.

For example, [Manjaro Linux][7] uses Xfce by default, but you can also download the ISO of the GNOME version if you prefer using GNOME with Manjaro.

### In the end…

Desktop environments are a crucial part of desktop Linux, while Linux servers usually rely on the command-line interface. It's not that you cannot install a desktop environment on a Linux server, but it's overkill and a waste of important system resources that could be utilized by the applications running on the server.

I hope you have a slightly better understanding of desktop environments in Linux now. I highly recommend reading my [explainer article on what is Linux and why there are so many Linux distributions][8]. I have a good feeling that you'll love the analogy I have used in it.
--------------------------------------------------------------------------------

via: https://itsfoss.com/what-is-desktop-environment/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/gnome-3-36-screenshot.jpg?resize=800%2C450&ssl=1
[2]: https://itsfoss.com/best-linux-desktop-environments/
[3]: https://www.gnome.org/
[4]: https://www.xfce.org/
[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/12/Ubuntu-XFCE-Chromebook-e1451426418482-1.jpg?resize=701%2C394&ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/what-is-desktop-environment-linux.png?resize=800%2C450&ssl=1
[7]: https://manjaro.org/
[8]: https://itsfoss.com/what-is-linux/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Recognize more devices on Linux with this USB ID Repository)
[#]: via: (https://opensource.com/article/20/8/usb-id-repository)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

Recognize more devices on Linux with this USB ID Repository
======

An open source project contains a public repository of all known IDs used in USB devices.

![Multiple USB plugs in different colors][1]
There are thousands of USB devices on the market—keyboards, scanners, printers, mice, and countless others that all work on Linux. Their vendor details are stored in the USB ID Repository.

### lsusb

The Linux `lsusb` command lists information about the USB devices connected to a system, but sometimes the information is incomplete. For example, I recently noticed that the brand of one of my USB devices was not recognized. The device was functional, but listing the details of my connected USB devices provided no identification information. Here is the output from my `lsusb` command:
```
$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 046d:082c Logitech, Inc.
Bus 001 Device 003: ID 0951:16d2 Kingston Technology
Bus 001 Device 002: ID 18f8:1486
Bus 001 Device 005: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```
As you can see in the last column, there is one device with no manufacturer's description. To determine what the device is, I would have to do a deeper inspection of my USB device tree. Fortunately, the `lsusb` command has more options. One is `-D device`, to elicit per-device details, as the man page explains:

> "Do not scan the /dev/bus/usb directory, instead display only information about the device whose device file is given. The device file should be something like /dev/bus/usb/001/001. This option displays detailed information like the **-v** option; you must be root to do this."

I didn't think it was easily apparent how to pass the device path to the `lsusb` command, but after carefully reading the man page and the initial output, I was able to determine how to construct it. USB devices appear in the device tree under `/dev/bus/usb/`, and the rest of the path is made up of the device's bus ID and device ID. My non-descript device is Bus 001, Device 002, which translates to 001/002 and completes the path `/dev/bus/usb/001/002`. Now I can pass this path to `lsusb`. I'll also pipe it to `more` since there is often quite a lot of information there:
```
$ lsusb -D /dev/bus/usb/001/002 | more
Device: ID 18f8:1486
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               1.10
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0         8
  idVendor           0x18f8
  idProduct          0x1486
  bcdDevice            1.00
  iManufacturer           0
  iProduct                1
  iSerial                 0
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           59
    bNumInterfaces          2
    bConfigurationValue     1
    iConfiguration          0
    bmAttributes         0xa0
      (Bus Powered)
      Remote Wakeup
    MaxPower              100mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           1
      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      1 Boot Interface Subclass
      bInterfaceProtocol      2 Mouse
      iInterface              0
      HID Device Descriptor:
```
Unfortunately, this didn't provide the detail I was hoping to find. The two fields that appear in the initial output, `idVendor` and `idProduct`, carry no descriptive names. There is some help, as scanning down a bit reveals the word **Mouse**. A-ha! So, this device is my mouse.

### The USB ID Repository

This made me wonder how I could populate these fields, not only for myself but also for other Linux users. It turns out there is already an open source project for this: the [USB ID Repository][2]. It is a public repository of all known IDs used in USB devices. It is also used in various programs, including the [USB Utilities][3], to display human-readable device names.

![The USB ID Repository Site][4]

(Alan Formy-Duval, [CC BY-SA 4.0][5])

You can browse the repository for particular devices either from the website or by downloading the database. Users are also welcome to submit new data. This is what I did for my mouse, which was absent.
### Update your USB IDs

The USB ID database is stored in a file called `usb.ids`. Its location may vary depending on the Linux distribution.

On Ubuntu 18.04, this file is located in `/var/lib/usbutils`. To update the database, use the command `update-usbids`, which you need to run with root privileges or with `sudo`:

```
$ sudo update-usbids
```
If a new file is available, it will be downloaded. The current file will be backed up and replaced by the new one:

```
$ ls -la
total 1148
drwxr-xr-x  2 root root   4096 Jan 15 00:34 .
drwxr-xr-x 85 root root   4096 Nov  7 08:05 ..
-rw-r--r--  1 root root 614379 Jan  9 15:34 usb.ids
-rw-r--r--  1 root root 551472 Jan 15 00:34 usb.ids.old
```
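The `usb.ids` file itself is plain text: vendor entries start at column 0 with a four-digit hex ID, two spaces, and the vendor name, while the device lines beneath them are tab-indented. A minimal C sketch of how a program might look up a vendor name (the function and file-layout assumptions are mine, not taken from `usbutils`):

```c
#include <stdio.h>
#include <string.h>

/* Searches a usb.ids-style file for a 4-digit hex vendor ID and copies
 * the vendor name into `name`. Returns 0 on success, -1 otherwise. */
int lookup_vendor(const char *path, const char *id, char *name, size_t len) {
    FILE *fp = fopen(path, "r");
    if (!fp)
        return -1;

    char line[256];
    int found = -1;
    while (fgets(line, sizeof(line), fp)) {
        /* Vendor lines look like "1d6b  Linux Foundation" */
        if (strncmp(line, id, 4) == 0 && line[4] == ' ') {
            const char *start = line + 4;
            while (*start == ' ')
                start++;
            strncpy(name, start, len - 1);
            name[len - 1] = '\0';
            name[strcspn(name, "\n")] = '\0'; /* strip trailing newline */
            found = 0;
            break;
        }
    }
    fclose(fp);
    return found;
}
```

Called as `lookup_vendor("/var/lib/usbutils/usb.ids", "1d6b", name, sizeof(name))` on Ubuntu; the path varies by distribution.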
Recent versions of Fedora Linux store the database file in `/usr/share/hwdata`. Also, there is no update script. Instead, the database is maintained in a package named `hwdata`:

```
# dnf info hwdata

Installed Packages
Name         : hwdata
Version      : 0.332
Release      : 1.fc31
Architecture : noarch
Size         : 7.5 M
Source       : hwdata-0.332-1.fc31.src.rpm
Repository   : @System
From repo    : updates
Summary      : Hardware identification and configuration data
URL          : https://github.com/vcrhonek/hwdata
License      : GPLv2+
Description  : hwdata contains various hardware identification and configuration data,
             : such as the pci.ids and usb.ids databases.
```
Now my USB device list shows a name next to this previously unnamed device. Compare this to the output above:

```
$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 046d:082c Logitech, Inc. HD Webcam C615
Bus 001 Device 003: ID 0951:16d2 Kingston Technology
Bus 001 Device 014: ID 18f8:1486 [Maxxter]
Bus 001 Device 005: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```
You may notice that other device descriptions change as the repository is regularly updated with new devices and details about existing ones.

### Submit new data

There are two ways to submit new data: by using the web interface or by emailing a specially formatted patch file. Before I began, I read through the submission guidelines. First, I had to register an account, and then I needed to use the project's submission system to provide my mouse's ID and name. The process is the same for adding any USB device.

Have you used the USB ID Repository? If so, please share your reaction in the comments.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/usb-id-repository

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/usb-hardware.png?itok=ROPtNZ5V (Multiple USB plugs in different colors)
[2]: http://www.linux-usb.org/usb-ids.html
[3]: https://sourceforge.net/projects/linux-usb/files/
[4]: https://opensource.com/sites/default/files/uploads/theusbidrepositorysite.png (The USB ID Repository Site)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (6 open source virtualization technologies to know in 2020)
[#]: via: (https://opensource.com/article/20/8/virt-tools)
[#]: author: (Bryant Son https://opensource.com/users/brson)

6 open source virtualization technologies to know in 2020
======

Run, customize, and manage your VMs with open source Virt Tools. Plus get a glossary of key virtualization terms.

![What is virtualization][1]
Virtualization Tools, better known as [Virt Tools][2], is a collection of six open source virtualization tools created by various contributors to make the virtualization world a better place.

![Virt Tools website][3]

(Bryant Son, [CC BY-SA 4.0][4])

Some of the tools, like KVM and QEMU, might be familiar to Linux enthusiasts, but tools like libvirt and libguestfs are probably less so.

In case you prefer to learn by watching videos rather than reading, I created a [video version][5] of this article, which you can access on YouTube.

Before walking through the tools, it's a good idea to know some essential [virtualization][6] terminology. I derived many of these definitions from Wikipedia, with pages linked in the table.
Term | Definition
---|---
[Virtualization][7] | In computing, virtualization refers to the act of creating a virtual (rather than physical) version of something, including virtual computer hardware platforms, storage devices, and computer network resources.
[Emulator][8] | An emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest).
[Virtual machine (VM)][9] | Also known as a "guest machine," a VM is an emulation of a real, physical hardware computer.
Host | In [hardware virtualization][10], a computer on which a hypervisor runs one or more VMs.
[Hypervisor][11] | Computer software, firmware, or hardware that creates and runs VMs.
[Kernel][12] | A computer program at the core of a computer's operating system, with complete control over everything in the system.
[Daemon][13] | A computer program that runs as a background process rather than under the direct control of an interactive user.
This table summarizes each Virt Tool, including license information and links to each tool's website and source code. Much of this information comes from the Virt Tools website and each tool's site.

Name | What It Is | License | Source Code
---|---|---|---
[Kernel-based Virtual Machine (KVM)][14] | A virtualization module in the Linux kernel that allows the kernel to function as a hypervisor | GNU GPL or LGPL | [Source code][15]
[Quick Emulator (QEMU)][16] | A generic and open source machine emulator and virtualizer | GPLv2 | [Source code][17]
[Libvirt][18] | A library and daemon providing a stable, open source API for managing virtualization hosts | GNU | [Source code][19]
[Libguestfs][20] | A set of tools for accessing and modifying VM disk images | LGPL, GPL | [Source code][21]
[Virt-manager][22] | A desktop user interface for managing VMs through libvirt | GPLv2+ | [Source code][23]
[Libosinfo][24] | Provides a database of information about operating system releases to assist in optimally configuring hardware when deploying VMs | LGPLv2+ | [Source code][25]
### Kernel-based Virtual Machine (KVM)

![KVM website][26]

(Bryant Son, [CC BY-SA 4.0][4])

KVM is a full virtualization solution for Linux on hardware containing virtualization extensions. KVM provides hardware virtualization for a wide variety of guest operating systems, including Linux, Windows, macOS, ReactOS, and Haiku. Using KVM, you can run multiple VMs on unmodified Linux or Windows images. Each VM has private virtualized hardware: a network card, disk, graphics adapter, etc.

Most of the time, you won't interact with KVM directly. Instead, you use QEMU, virt-manager, or another virtualization management tool to leverage KVM.

You can find full [documentation][27] on the KVM website, as well as access its [source code][15].
### Quick Emulator (QEMU)

![QEMU website][28]

(Bryant Son, [CC BY-SA 4.0][4])

QEMU is a generic, open source machine emulator and virtualizer. When used as an emulator, QEMU can run operating systems and programs made for one machine (e.g., an ARM board) on a different machine (e.g., your own x86_64 PC). When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU using KVM.

QEMU is supported on multiple operating systems, and its installation process is as easy as running a few simple commands; here, you can see how to install QEMU on macOS with [Homebrew][29].

![QEMU macOS installation info][30]

(Bryant Son, [CC BY-SA 4.0][4])

After installing it, learn how to use it by reading through its [documentation][31]; you can also access its [source code][17].
### Libvirt

![Libvirt website][32]

(Bryant Son, [CC BY-SA 4.0][4])

Libvirt is a library and daemon that provides a stable, open source API for managing virtualization hosts. It targets multiple hypervisors, including QEMU, KVM, LXC, Xen, OpenVZ, VMware ESX, VirtualBox, and more.

Another interesting thing about libvirt is that [KubeVirt][33], an open source project for creating and managing VMs inside the Kubernetes platform, largely utilizes libvirt. (I'll cover KubeVirt in a future article.) Libvirt is an interesting project to explore, and you can find a plethora of information on its official [website][18] as well as download its [source code][19].
### Libguestfs

![Libguestfs website][34]

(Bryant Son, [CC BY-SA 4.0][4])

Libguestfs is a set of tools for accessing and modifying VM disk images. You can use it for viewing and editing files inside guests; scripting changes to VMs; monitoring disk used/free statistics; creating guests via physical-to-virtual (P2V) or virtual-to-virtual (V2V) conversion; performing backups; cloning VMs; building VMs; formatting disks; resizing disks; and much more. I have been using it recently while working on a KubeVirt-based project called [OpenShift Virtualization][35], which you can learn more about in my [video tutorial][36].

Libguestfs' official [website][20] contains extensive documentation on how to use each command, and you can also download its [source code][21] on GitHub.
### Virt-manager

![Virt-manager website][37]

(Bryant Son, [CC BY-SA 4.0][4])

Virt-manager is a desktop user interface for managing VMs through libvirt. It primarily targets KVM VMs but also manages Xen and LXC. It also includes the command-line provisioning tool virt-install. Think of virt-manager as an easy-to-use management tool for your VMs. For example, you can use virt-manager to run a Microsoft Windows environment on a Linux workstation or vice versa.

Virt-manager's [source code][23] is available on GitHub, and [documentation][38] is on its website. At this time, virt-manager is only available for Linux platforms.
### Libosinfo

![Libosinfo website][39]

(Bryant Son, [CC BY-SA 4.0][4])

Libosinfo provides a database of information about operating system releases to assist in configuring hardware when deploying VMs. It includes a C library for querying information in the database, which is also accessible from any language supported by GObject introspection. As you may guess, libosinfo is more of a building block that other tools use—but quite an important one.

Libosinfo's [source code][25] is available on GitLab, and its [documentation][40] can be found on its website.
### Conclusion

Virt Tools is a set of six powerful tools that make virtualization easier and enable important virtualization functions. All of them are open source projects, so I encourage you to explore further and maybe even contribute to them.

What do you think? Feel free to leave a comment to share your thoughts or ask questions.
--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/virt-tools

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/what_is_virtualization.png?itok=e4WCa7N_ (What is virtualization)
[2]: https://www.virt-tools.org/
[3]: https://opensource.com/sites/default/files/uploads/1_virttools.jpg (Virt Tools website)
[4]: https://creativecommons.org/licenses/by-sa/4.0/
[5]: https://youtu.be/E6TBDh2RLY8
[6]: https://www.redhat.com/en/topics/virtualization/what-is-virtualization
[7]: https://en.wikipedia.org/wiki/Virtualization
[8]: https://en.wikipedia.org/wiki/Emulator
[9]: https://en.wikipedia.org/wiki/Virtual_machine
[10]: https://en.wikipedia.org/wiki/Hardware_virtualization
[11]: https://en.wikipedia.org/wiki/Hypervisor
[12]: https://en.wikipedia.org/wiki/Kernel_%28operating_system%29
[13]: https://en.wikipedia.org/wiki/Daemon_%28computing%29
[14]: https://www.linux-kvm.org/page/Main_Page
[15]: https://git.kernel.org/pub/scm/virt/kvm/kvm.git
[16]: https://www.qemu.org
[17]: https://git.qemu.org/git/qemu.git
[18]: https://libvirt.org
[19]: https://libvirt.org/git/?p=libvirt.git
[20]: http://libguestfs.org/
[21]: https://github.com/libguestfs/libguestfs
[22]: https://virt-manager.org
[23]: https://github.com/virt-manager/virt-manager
[24]: https://libosinfo.org/download/
[25]: https://gitlab.com/libosinfo/libosinfo
[26]: https://opensource.com/sites/default/files/uploads/2_kvm.jpg (KVM website)
[27]: https://www.linux-kvm.org/page/Documents
[28]: https://opensource.com/sites/default/files/uploads/3_qemu.jpg (QEMU website)
[29]: https://opensource.com/article/20/6/homebrew-mac
[30]: https://opensource.com/sites/default/files/uploads/3_1_qemuinstall.jpg (QEMU macOS installation info)
[31]: https://www.qemu.org/documentation/
[32]: https://opensource.com/sites/default/files/uploads/4_libvirt.jpg (Libvirt website)
[33]: https://kubevirt.io/
[34]: https://opensource.com/sites/default/files/uploads/5_libguestfs.jpg (Libguestfs website)
[35]: https://www.openshift.com/blog/blog-openshift-virtualization-whats-new-with-virtualization-from-red-hat
[36]: https://www.youtube.com/watch?v=tWPC-YER1I0
[37]: https://opensource.com/sites/default/files/uploads/6_virtualmanager.jpg (Virt-manager website)
[38]: https://virt-manager.org/documentation/
[39]: https://opensource.com/sites/default/files/uploads/7_libosinfo.jpg (Libosinfo website)
[40]: https://libosinfo.org/documentation/
@ -0,0 +1,75 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to choose an affordable Linux laptop for video conferencing)
[#]: via: (https://opensource.com/article/20/8/linux-laptop-video-conferencing)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

How to choose an affordable Linux laptop for video conferencing
======

Open source can give pre-owned computers the necessary boost to make them useful with today's popular video conferencing tools.

![Two people chatting via a video conference app][1]

As more and more activities move online during the global pandemic, an increasing number of folks are looking for affordable and stable ways to connect to their doctor, therapist, bank, college, and more. Many of the folks I've been working with are on limited incomes, and they're eager for any technical help they can get.

Whether they're on a proprietary video conferencing solution or using an [open source one like Jitsi Meet][2], everyone needs a platform that's robust enough to support their needs without breaking the budget. One of the leading cloud video conferencing providers [recommends][3] at least an i3 processor or equivalent with a minimum of 4GB of RAM. My experience has taught me that an i5 or equivalent with 4-8GB of RAM is even better.
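A quick way to check whether a candidate machine clears that bar is to query the CPU and memory from a terminal. This is a generic sketch using standard Linux tools; the exact output format varies by distribution.

```shell
# Show the processor model and logical core count
lscpu | grep -E 'Model name|^CPU\(s\)'

# Show total installed RAM (reported in kB)
grep MemTotal /proc/meminfo
```

If `MemTotal` reports much less than 4000000 kB, the machine falls short of the 4GB guideline.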

I also recommend Linux for running meeting solutions. You could go out and purchase a new Linux computer, but if you're on a limited income, plunking down $1,000 or more for a new system might not be what you had in mind.

A more budget-friendly solution I recently put together for a friend was a 2015 MacBook Air running Elementary OS. The computer had 4GB of RAM, an i5 processor, and a 240GB NVMe solid-state drive. Elementary OS was a great choice for this computer, as it detected the built-in Broadcom 4360 wireless card, which didn't play nice with other Linux distributions. The FaceTime camera didn't work with any Linux distribution I tried, Elementary included, and no one seems to have a good solution, so I purchased a USB camera and connected it to the laptop. This fellow needed to use Zoom to connect to his church, so I downloaded the [Zoom client][4] for Linux and installed it. The download page offers .rpm, .deb, and [Flatpak][5] packages.

In another case, I purchased a refurbished PC laptop from a prominent vendor. It came with 8GB of RAM, an i5 processor, a webcam, and a 256GB SSD. I'm going to install either Fedora or Pop!_OS on it, along with the Zoom client and the usual complement of free software, including LibreOffice, Calibre, ClamAV, GNU Chess, and other games for my friend to explore.

### Used laptops for Linux

When looking for a used laptop, I usually consider the reputation of the brand. I check the same or similar models for their compatibility with Linux; both [Fedora][6] and [Ubuntu][7] maintain lists of supported hardware platforms. If possible, I try to get a list of the included hardware: What is the CPU model and speed? Does the unit have built-in Bluetooth? How many USB ports does it have? Does it have audio ports? Does it support Thunderbolt? Does it have built-in WiFi, and what is the chipset of the WiFi adapter? I have had good luck with Intel, Broadcom, and Realtek, though the list varies depending on your particular needs.
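On a machine you can boot (from a live USB, for example), the WiFi chipset and other built-in hardware can be identified from a terminal. This sketch assumes the `pciutils` and `usbutils` packages, which provide `lspci` and `lsusb`; it prints a note instead of failing if they're missing.

```shell
# Identify the WiFi chipset and other PCI devices
if command -v lspci >/dev/null; then
    lspci | grep -iE 'network|wireless' || echo "no PCI network adapter found"
else
    echo "install pciutils to list PCI devices"
fi

# List USB devices (built-in webcams and Bluetooth often show up here)
if command -v lsusb >/dev/null; then
    lsusb
else
    echo "install usbutils to list USB devices"
fi
```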

There are many sources of good used laptops and desktops, but my favorites are eBay, [Dell Refurbished][8], and [PC Liquidations][9]. I look for units that are three to five years old, in good condition, and that have at least an Intel i3 or AMD FX-6300 processor, or better. CPU speed and at least 4GB of RAM are important if you are going to use your Linux laptop or desktop for video conferencing. Check to make sure the unit you purchase includes a power supply. A built-in webcam is handy but not a dealbreaker, because you can use a USB camera; I have had good experiences with Logitech web cameras.

### Refurbishing computers

When refurbishing an older laptop or desktop, you'll often find dated components such as a mechanical hard drive or a WiFi card that doesn't support the latest wireless standards. These can usually be remedied with a small amount of effort and a minimal budget. For instance, replacing an old hard drive with a solid-state drive (SSD) usually provides an immediate performance boost. You can also purchase a newer WiFi + Bluetooth card; on a laptop, the form factor will most likely be Mini PCIe, but do your research to be sure. This lets you choose a brand and chipset that are better supported by Linux. RAM can also be increased: 4GB is definitely my minimum, but I'd much rather have 8GB. I also like to install the latest BIOS, if the vendor provides one, in order to have the latest fixes and features.
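To tell at a glance whether a machine still has a spinning disk, the kernel exposes a per-device "rotational" flag. This is a generic check with `lsblk` from util-linux, not something specific to any one distribution.

```shell
# ROTA=1 marks a rotational (spinning) disk and a good candidate for an
# SSD upgrade; ROTA=0 marks a solid-state drive.
lsblk -d -o NAME,ROTA,SIZE,MODEL
```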

A lot of this advice also applies to desktop systems, where you'll generally have more flexibility because there are more card slots and connectors for peripherals. For instance, an older system may not support newer technology such as Bluetooth, WiFi, or USB 3; add-in PCI or PCIe cards can be installed to provide this support.

### Choosing a Linux distribution for an old computer

The last piece of the puzzle is choosing a Linux distribution for your rescued computer. There are many distributions out there, and though they like to highlight their own unique spin on the [Linux desktop][10], at the end of the day, they're all Linux. When it comes to installing Linux on an old computer, the best distribution is the one that works on the computer you have.

The key is to step through the install process (it's not much more complex than installing an office application or a new web browser) and then see how your computer responds when you reboot it. If a desktop comes up and you can open and run basic applications, you're on the right track.

Once you're satisfied that you have a working computer with a desktop and access to the Internet, give your usual video conferencing application a try. If it fails, try a lightweight Linux distribution in hopes that using fewer resources for your OS will solve any video issues.

Also, if your camera is HD (high definition), try setting it to broadcast at a lower resolution. Sometimes this improves your system's performance because you're sending less data over what's probably an old network card with limited capacity.
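Before lowering the resolution in your conferencing app, it helps to know what the camera can actually do. On Linux, the `v4l2-ctl` tool from the `v4l-utils` package lists each supported format and resolution; the sketch below guards against the tool being absent.

```shell
# List the pixel formats and resolutions the default camera supports
if command -v v4l2-ctl >/dev/null; then
    v4l2-ctl --list-formats-ext
else
    echo "v4l2-ctl not found; install the v4l-utils package first"
fi
```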

I recommend trying [Elementary OS][11] or [Fedora Linux][12] for recent computers. For very old computers, try [Peppermint OS][13], which is specifically designed for computers without many resources. The great news is there are multiple ways to use open source solutions to turn your old machine into a modern communication platform.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/linux-laptop-video-conferencing

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/chat_video_conference_talk_team.png?itok=t2_7fEH0 (Two people chatting via a video conference app)
[2]: https://opensource.com/article/20/5/open-source-video-conferencing
[3]: https://support.zoom.us/hc/en-us/articles/201362023-System-requirements-for-Windows-macOS-and-Linux
[4]: https://zoom.us/download
[5]: https://flathub.org/apps/details/us.zoom.Zoom
[6]: https://docs.fedoraproject.org/en-US/fedora/rawhide/release-notes/welcome/Hardware_Overview/
[7]: https://certification.ubuntu.com/
[8]: https://www.dellrefurbished.com/laptops
[9]: https://www.pcliquidations.com/
[10]: https://opensource.com/article/20/5/linux-desktops
[11]: https://elementary.io/
[12]: http://getfedora.org/
[13]: https://peppermintos.com/
@ -0,0 +1,179 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Create and run Python apps on your Android phone)
[#]: via: (https://opensource.com/article/20/8/python-android-mobile)
[#]: author: (Phani Adabala https://opensource.com/users/adabala)

Create and run Python apps on your Android phone
======

Use Termux and Flask to create, develop, and run a web app on your mobile device.

![Tux and Android stuffed animals on shelf][1]

Learning and using Python is fun. Thanks to its growing popularity, there is a plethora of ways it can be used to make the world of computing better than it is today.

Imagine building and running Python applications with just an Android mobile device and open source tools, whether it's a command-line tool that fetches your favorite curated articles from the Internet or a web server that runs right in the palm of your hand. This changes how you view your mobile device entirely, from a device that merely lets you consume content to a device that helps you be creative.

In this article, I'll demonstrate all of the tools, software packages, and steps required to build, run, and test a simple Python application on any Android mobile device. I use the [Flask framework][2] to create a simple "Hello, World!" app running on a simple but powerful web server. And best of all, it all happens on the phone; no laptop or desktop required.

### Install Termux on Android

First, [install the Termux application][3]. Termux is a powerful terminal emulator that offers all the most popular Linux commands, plus hundreds of additional packages for easy installation, and it doesn't require any special permissions. You can use either the default [Google Play][4] store or the open source app repository [F-Droid][5] to install it.

![Welcome to Termux][6]

Once you have installed Termux, launch it and perform a few requisite software installations using Termux's **pkg** command.

Subscribe to the additional repository "root-repo":

```
$ pkg install root-repo
```

Perform an update to bring all the installed software up to date:

```
$ pkg update
```

Finally, install Python:

```
$ pkg install python
```

![Install Python][7]

Once the installation and automatic configuration are complete, it's time to build your application.

### Build an app for Android on Android

Now that you have a terminal installed, you can work on your Android phone largely as if it were just another Linux computer. This is a great demonstration of just how powerful a terminal really is.

Start by creating a project directory:

```
$ mkdir Source
$ cd Source
```

Next, create a Python virtual environment. This is a common practice among Python developers, and it helps keep your Python project independent of your development system (in this case, your phone). Within your virtual environment, you'll be able to install Python modules specific to your app:

```
$ python -m venv venv
```

Activate your new virtual environment (note that the two dots at the start are separated by a space):

```
$ . ./venv/bin/activate
(env)$
```

Notice that your shell prompt is now preceded by **(env)** to indicate that you're in a virtual environment.

Now install the Flask Python module using **pip**:

```
(env) $ pip install flask
```

### Write Python code on Android

You're all set up. All you need now is to write the code for your app.

To do this, you should have experience with a classic text editor. I use **vi**. If you're unfamiliar with **vi**, install and try the **vimtutor** application, which (as its name suggests) teaches you how to use this editor. If you prefer a different editor, such as **jove**, **jed**, **joe**, or **emacs**, you can install and use one of those instead.

For now, because this demonstration app is so simple, you can also just use the shell's **heredoc** function, which allows you to enter text directly at your prompt:

```
(env)$ cat << EOF >> hello_world.py
> from flask import Flask
> app = Flask(__name__)
>
> @app.route('/')
> def hello_world():
>     return 'Hello, World!'
> EOF
(env)$
```

That's just six lines of code, but with them you import Flask, create an app, and route incoming traffic to the function called **hello_world**.

![Vim on Android][8]

Now you have the web-server code ready. It's time to set up some [environment variables][9] and start the web server on your phone:

```
(env) $ export FLASK_APP=hello_world.py
(env) $ export FLASK_ENV=development
(env) $ flask run
```

![Running a Flask app on your phone][10]

After starting your app, you see this message:

```
serving Flask app... running on http://127.0.0.1:5000/
```

This indicates that you now have a tiny web server running on **localhost** (that is, your device), listening for requests on port 5000.

Open your mobile browser and navigate to **<http://localhost:5000>** to see your web app.
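You can also check the server from a second Termux session without a browser. This assumes `curl` is installed (`pkg install curl` in Termux); the `||` fallback just reminds you to start the server first.

```shell
# Fetch the page served by the Flask app running in the other session
curl http://localhost:5000 || echo "is the Flask server running?"
```

With the app from above running, the route responds with `Hello, World!`.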

![Your web app][11]

You haven't compromised your phone's security. You're only running a local server, meaning that your phone isn't accepting requests from the outside world; only you can access your Flask server.

To make your server visible to others, disable Flask's development mode and add **\--host=0.0.0.0** to the **run** command. This does open a port on your phone, so use it wisely:

```
(env) $ export FLASK_ENV=""
(env) $ flask run --host=0.0.0.0
```

Stop the server by pressing **Ctrl+C** (use the special Termux key for Control).

### Decide what comes next

Your phone is probably not the ideal server platform for a serious web app, but this demonstrates that the possibilities are endless. You might program on your Android phone because it's a convenient way to stay in practice, because you have an exciting new idea for localized web apps, or because you happen to use a Flask app for your own daily tasks. As Einstein once said, "Imagination is more important than knowledge." This is a fun little project for any new coder or seasoned Linux or Android enthusiast, and it can be expanded endlessly, so let your curiosity take over and make something exciting!

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/python-android-mobile

作者:[Phani Adabala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/adabala
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tux_penguin_linux_android.jpg?itok=ctgANLI7 (Tux and Android stuffed animals on shelf)
[2]: https://opensource.com/article/18/4/flask
[3]: https://opensource.com/article/20/8/termux
[4]: https://play.google.com/store/apps/details?id=com.termux
[5]: https://f-droid.org/repository/browse/?fdid=com.termux
[6]: https://opensource.com/sites/default/files/termux-flask-1_0.webp (Welcome to Termux)
[7]: https://opensource.com/sites/default/files/termux-install-python.webp (Install Python)
[8]: https://opensource.com/sites/default/files/termux-python-vim.webp (Vim on Android)
[9]: https://opensource.com/article/19/8/what-are-environment-variables
[10]: https://opensource.com/sites/default/files/termux-flask-run.webp (Running a Flask app on your phone)
[11]: https://opensource.com/sites/default/files/flask-app-android.webp (Your web app)
106
sources/tech/20200826 Customize your GNOME desktop theme.md
Normal file
@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Customize your GNOME desktop theme)
[#]: via: (https://opensource.com/article/20/8/gnome-themes)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

Customize your GNOME desktop theme
======

Use Tweaks and its user themes extension to change the look of your Linux UI.

![Gnomes in a window.][1]

GNOME is a fairly simple and streamlined Linux graphical user interface (GUI), and many users appreciate its minimalist look. Although it's pretty basic out of the box, you can customize [GNOME][2] to match your preferences. Thanks to GNOME Tweaks and the user themes extension, you can change the look and feel of the top bar, window title bars, icons, cursors, and many other UI options.

### Get started

Before you can change your GNOME theme, you have to install [Tweaks][3] and enable the user themes extension.

#### Install GNOME Tweaks

You can find Tweaks in the GNOME Software Center, where you can install it quickly with just the click of a button.

![Install Tweaks in Software Center][4]

(Alan Formy-Duval, [CC BY-SA 4.0][5])

If you prefer the command line, use your package manager. For instance, on Fedora or CentOS:

```
$ sudo dnf install gnome-tweaks
```

On Debian or similar:

```
$ sudo apt install gnome-tweaks
```

#### Enable user themes

To enable the user themes extension, launch Tweaks and select **Extensions**. Find **User themes** and click the slider to enable it.

![Enable User Themes Extension][6]

(Alan Formy-Duval, [CC BY-SA 4.0][5])

### Get a theme

Now that you've completed those prerequisites, you're ready to find and download some themes. A great site for finding new themes is [GNOME-Look.org][7].

There's a list of theme categories on the left-hand side of the page. Once you find a theme you want, download it. I downloaded the `.tar` file directly to the `.themes` directory under my home directory (you may need to create the directory first):

```
$ mkdir ~/.themes
```

If you want all the machine's users to be able to use the theme, place it in `/usr/share/themes` instead.

Once the file has downloaded, extract the archive:

```
$ tar xvf theme_archive.tar.xz
```

You can then delete the `.tar.xz` file to save some disk space.
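At this point you can verify the theme landed in the right place. Each theme should be its own folder under `~/.themes`, and the folder names are what Tweaks lists in its **Appearance** drop-downs:

```shell
# Create the directory if needed, then list installed user themes
mkdir -p ~/.themes
ls ~/.themes
```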

### Apply a theme

To apply your new theme, go to the **Appearance** section in Tweaks. Here, you can select different options for each aspect of your desktop.

![Apply a theme][8]

(Alan Formy-Duval, [CC BY-SA 4.0][5])

### Variety is the spice of life

Being able to personalize a computer desktop with different wallpaper, colors, fonts, and more has been a popular feature since the first graphical interfaces hit the market. GNOME Tweaks and the user themes extension enable this customization on the GNOME desktop environment on all the GNU/Linux operating systems where it is available. And the open source community continues to provide a wide range of themes, icons, fonts, and wallpapers that anyone can download, play with, and customize.

What are your favorite GNOME themes, and why do you like them? Please share in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/gnome-themes

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/custom_gnomes.png?itok=iG98iL8d (Gnomes in a window.)
[2]: https://www.gnome.org/
[3]: https://wiki.gnome.org/Apps/Tweaks
[4]: https://opensource.com/sites/default/files/uploads/gnome-install_tweaks_gui.png (Install Tweaks in Software Center)
[5]: https://creativecommons.org/licenses/by-sa/4.0/
[6]: https://opensource.com/sites/default/files/uploads/gnome-enable_user_theme_extension.png (Enable User Themes Extension)
[7]: https://www.gnome-look.org
[8]: https://opensource.com/sites/default/files/uploads/gnome-apply_theme.png (Apply a theme)
@ -0,0 +1,84 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Manage your software repositories with this open source tool)
[#]: via: (https://opensource.com/article/20/8/manage-repositories-pulp)
[#]: author: (Melanie Corr https://opensource.com/users/melanie-corr)

Manage your software repositories with this open source tool
======

An introduction to Pulp, the open source repository management solution that is growing in scope and functionality.

![Cut pieces of citrus fruit and pomegranates][1]

[Foreman][2] is a robust management and automation product that provides administrators of Linux environments with enterprise-level solutions for four key scenarios: provisioning management, configuration management, patch management, and content management. A major component of the content management functionality in Foreman is provided by the Pulp project. While Pulp is an integral part of that product, it is also a standalone, free, and open source project that is making huge progress on its own.

Let's take a look at the Pulp project, especially the features of the latest release, Pulp 3.

### What is Pulp?

Pulp is a platform for managing repositories of software packages and making them available to a large number of consumers. You can use Pulp to mirror, synchronize, upload, and promote content like RPMs, Python packages, Ansible collections, container images, and more across different environments. If you have dozens, hundreds, or even thousands of software packages and need a better way to manage them, Pulp can help.

The latest major version is [Pulp 3][3], released in December 2019. Pulp 3 is the culmination of years of gathering user requirements and a complete technical overhaul of the existing Pulp architecture to increase reliability and flexibility. Plus, it includes a vast range of new features.

### Who uses Pulp?

For the most part, Pulp users administer enterprise software environments where the stability and reliability of content are paramount. Pulp users want a platform to develop content without worrying that repositories might disappear. They want to promote content across the different stages of their lifecycle environment in a secure manner that optimizes disk space and scales their environment to meet new demands. They also need the flexibility to work with a wide variety of content types. Pulp 3 provides that and more.

### Manage a wide variety of content in one place

After you install Pulp, you can add [content plugins][4] for the content types you plan to manage, mirror content locally, add privately hosted content, and blend content to suit your requirements. If you're an Ansible user, for example, and you don't want to host your private content on Ansible Galaxy, you can add the Pulp Ansible plugin, mirror the public Ansible content you require, and use Pulp as an on-premise platform to manage and distribute a scalable blend of public and private Ansible roles and collections across your organization. You can do this with any content type. There is a wide variety of content plugins available, including RPM, Debian, Python, Container, and Ansible, to name but a few. There is also a File plugin, which you can use to manage files like ISO images.

If you don't find a plugin for the content type you require, Pulp 3 introduces a plugin API and plugin template that make it easy to create a Pulp plugin of your own. You can use the [plugin writing guide][5] to autogenerate a minimal viable plugin and then start building from there.

### High availability

With Pulp 3, the change from MongoDB to PostgreSQL brought major improvements in performance and data integrity. Pulp users now have a fully open source tech stack that provides high availability (HA) and better scalability.

### Repository versioning

Using Pulp 3, you can experiment without risk. Every time you add or remove content, Pulp creates an immutable repository version, so you can roll back to earlier versions and thereby guarantee the safety and stability of your operation. Using publications and distributions, you can expose multiple versions of a repository, which gives you another way to roll back: simply point your distribution at an older publication.

### Disk optimization

One of the major challenges for any software development environment is disk optimization. If you're downloading a constant stream of packages (for example, nightly builds of repositories that you require today but will no longer require tomorrow), disk space will quickly become an issue. Pulp 3 was designed with disk optimization in mind. While the default option downloads and saves all software packages, you can also enable either the "on demand" or "streamed" option. The "on demand" option saves disk space by downloading and saving only the content that clients request. With the "streamed" option, content is also downloaded upon client request, but it is not saved in Pulp. This is ideal for synchronizing content from, say, a nightly repository, and it saves you from having to perform a disk cleanup at a later stage.

### Multiple storage options

Even with the best disk optimization, as your project grows, you might need a way to scale your deployment to match your requirements. As well as local file storage, Pulp supports a range of cloud storage options, such as Amazon S3 and Azure, to ensure that you can scale to meet the demands of your deployment.

### Protect your content

Pulp 3 has the option of adding the [Certguard][6] plugin, which provides an X.509-capable ContentGuard that requires clients to submit a certificate proving their entitlement before receiving content from Pulp.

Any client presenting an X.509 or Red Hat Subscription Management-based certificate at request time is authorized, as long as the client certificate is unexpired, is signed by the Certificate Authority, and was stored on the Certguard when it was created. The client presents the certificate over transport layer security (TLS), which proves that the client has not only the certificate but also its key. You can develop with confidence, knowing that your content is protected.

The Pulp team is also actively working on a role-based access control system for the entire Pulp deployment so that administrators can ensure that the right users have access to the right environments.

### Try Pulp in a container

If you're interested in evaluating Pulp 3 for yourself, you can easily install [Pulp 3 in a container][7] using Docker or Podman. The Pulp team is constantly working to simplify the installation process. You can also use an [Ansible playbook][8] to automate the full installation and configuration of Pulp 3.

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/8/manage-repositories-pulp

作者:[Melanie Corr][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/melanie-corr
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fruit-orange-pomegranate-pulp-unsplash.jpg?itok=4cvODZDJ (Oranges and pomegranates)
[2]: https://opensource.com/article/17/8/system-management-foreman
[3]: https://pulpproject.org/about-pulp-3/
[4]: https://pulpproject.org/content-plugins/
[5]: https://docs.pulpproject.org/plugins/plugin-writer/index.html
[6]: https://pulp-certguard.readthedocs.io/en/latest/
[7]: https://pulpproject.org/pulp-in-one-container/
[8]: https://pulp-installer.readthedocs.io/en/latest/
@ -0,0 +1,189 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to Check Dependencies of a Package in Ubuntu/Debian-based Linux Distributions)
|
||||
[#]: via: (https://itsfoss.com/check-dependencies-package-ubuntu/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
How to Check Dependencies of a Package in Ubuntu/Debian-based Linux Distributions
|
||||
======
|
||||
|
||||
Installing applications via command line is quite easy in Ubuntu/Debian. All you need to do is to use apt install package_name.
|
||||
|
||||
But what if you want to know the dependencies of a package before or after installing it?
|
||||
|
||||
In this tutorial, I’ll show you various ways to see the dependencies of a package in Ubuntu and other Debian-based Linux distributions that use [APT package management system][1].
|
||||
|
||||
### What is package dependency in Ubuntu?
|
||||
|
||||
If you didn’t know already, when you install a software package in Linux, it sometimes needs other packages to function properly. These additional packages are called dependencies. If these dependency packages are not already installed on the system, they are usually installed automatically along with the package.
|
||||
|
||||
For example, the [GUI tool HandBrake for converting video formats][2] needs [FFmpeg][3] and [GStreamer][4] to work. So, FFmpeg and GStreamer are dependencies of HandBrake.
|
||||
|
||||
If you don’t have these packages installed on your system, they will be automatically installed when you [install HandBrake on Ubuntu][5].
|
||||
|
||||
### Check dependencies of a package in Ubuntu and Debian based distributions
|
||||
|
||||
As often happens in Linux, there is more than one way to achieve the same result. Let’s look at the various ways to check the dependencies of a package.
|
||||
|
||||
#### Checking dependencies with apt show
|
||||
|
||||
You can use the [apt show command][6] to display the details of a package. Part of this information is the list of dependencies, which you can see in the line starting with `Depends`.
|
||||
|
||||
For example, here’s what it shows for the [ubuntu-restricted-extras][7] package.
|
||||
|
||||
```
|
||||
[email protected]:~$ apt show ubuntu-restricted-extras
|
||||
Package: ubuntu-restricted-extras
|
||||
Version: 67
|
||||
Priority: optional
|
||||
Section: multiverse/metapackages
|
||||
Origin: Ubuntu
|
||||
Maintainer: Ubuntu Developers <[email protected]>
|
||||
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
|
||||
Installed-Size: 14.3 kB
|
||||
Depends: ubuntu-restricted-addons
|
||||
Recommends: libavcodec-extra, ttf-mscorefonts-installer, unrar
|
||||
Download-Size: 3,200 B
|
||||
APT-Manual-Installed: yes
|
||||
APT-Sources: http://us.archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages
|
||||
Description: Commonly used media codecs and fonts for Ubuntu
|
||||
This collection of packages includes:
|
||||
- MP3 and other audio codec software to play various audio formats
|
||||
(GStreamer plugins)
|
||||
- software to install the Microsoft Web fonts
|
||||
- the Adobe Flash plugin
|
||||
- LAME, software to create compressed audio files.
|
||||
.
|
||||
This software does not include libdvdcss2, and will not let you play
|
||||
encrypted DVDs. For more information, see
|
||||
https://help.ubuntu.com/community/RestrictedFormats/PlayingDVDs
|
||||
.
|
||||
These software packages are from the Multiverse channel, restricted by
|
||||
copyright or legal issues in some countries. For more information, see
|
||||
http://www.ubuntu.com/ubuntu/licensing
|
||||
```
|
||||
|
||||
As you can see, the ubuntu-restricted-extras package depends on the ubuntu-restricted-addons package.
|
||||
|
||||
Here’s a catch! A dependency package may itself depend on other packages, and the chain can go on. Thankfully, the APT package manager handles this for you by automatically installing all the dependencies (most of the time).
|
||||
|
||||
What is a recommended package?
|
||||
|
||||
Did you notice the line starting with Recommends in the above output?
|
||||
|
||||
Recommended packages are not direct dependencies for the package but they enable additional features.
|
||||
|
||||
As you can see, ubuntu-restricted-extras has ttf-mscorefonts-installer as a recommended package for installing Microsoft fonts on Ubuntu.
|
||||
|
||||
Recommended packages are also installed by default. If you explicitly want to forbid their installation, use the `--no-install-recommends` flag like this:
|
||||
|
||||
```
sudo apt install --no-install-recommends package_name
```
|
||||
|
||||
#### Use apt-cache for getting just the dependencies information
|
||||
|
||||
The `apt show` command outputs way too much information. If you just want the dependencies, for example in a script, the `apt-cache` command gives you cleaner output:
|
||||
|
||||
```
|
||||
apt-cache depends package_name
|
||||
```
|
||||
|
||||
The output looks much cleaner, doesn’t it?
|
||||
|
||||
![][8]
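To use this output in a script, you can wrap the command in a small helper that prints only the package names. This is just a sketch of mine: the `list_deps` name and the awk filter are my own, assuming apt-cache’s usual `Depends: <name>` output format.

```shell
# Print only the names of a package's direct dependencies,
# assuming apt-cache's usual "  Depends: <name>" output lines.
list_deps() {
    apt-cache depends "$1" | awk '/Depends:/ {print $2}' | sort -u
}

# Example: list_deps ubuntu-restricted-extras
```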
|
||||
|
||||
#### Check the dependencies of a DEB file using dpkg
|
||||
|
||||
Both the apt and apt-cache commands work on packages available from the repositories. But if you download a DEB file directly, these commands won’t work.
|
||||
|
||||
In this case, you can use the dpkg command with the `-I` or `--info` option.
|
||||
|
||||
```
|
||||
dpkg -I path_to_deb_file
|
||||
```
|
||||
|
||||
The dependencies can be seen in the line starting with `Depends`.
|
||||
|
||||
![][9]
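If you only want the dependency line out of all that control information, a small helper can filter it. This is a sketch (the `deb_depends` name is mine; it assumes `dpkg -I` indents its fields with a leading space, as it usually does):

```shell
# Print the Depends (and Pre-Depends) lines from a local .deb file.
deb_depends() {
    dpkg -I "$1" | grep -E '^ (Pre-)?Depends:'
}

# Example: deb_depends ./some-package.deb
```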
|
||||
|
||||
#### Checking dependencies and reverse dependencies with apt-rdepends
|
||||
|
||||
If you want more details on the dependencies, you can use the apt-rdepends tool. This tool creates the complete dependency tree. So, you get the dependency of a package and the dependencies of the dependencies as well.
|
||||
|
||||
It is not a regular apt command and you’ll have to install it from the universe repository:
|
||||
|
||||
```
|
||||
sudo apt install apt-rdepends
|
||||
```
|
||||
|
||||
The output is usually quite large, depending on the dependency tree. Here’s part of the output for the shutter package:
|
||||
|
||||
```
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
shutter
|
||||
Depends: procps
|
||||
Depends: xdg-utils
|
||||
imagemagick
|
||||
Depends: imagemagick-6.q16 (>= 8:6.9.2.10+dfsg-2~)
|
||||
imagemagick-6.q16
|
||||
Depends: hicolor-icon-theme
|
||||
Depends: libc6 (>= 2.4)
|
||||
Depends: libmagickcore-6.q16-6 (>= 8:6.9.10.2)
|
||||
Depends: libmagickwand-6.q16-6 (>= 8:6.9.10.2)
|
||||
hicolor-icon-theme
|
||||
libc6
|
||||
Depends: libcrypt1 (>= 1:4.4.10-10ubuntu4)
|
||||
Depends: libgcc-s1
|
||||
libcrypt1
|
||||
Depends: libc6 (>= 2.25)
|
||||
```
|
||||
|
||||
The apt-rdepends tool is quite versatile. It can also calculate reverse dependencies, which means you can see which other packages depend on a certain package.
|
||||
|
||||
```
|
||||
apt-rdepends -r package_name
|
||||
```
|
||||
|
||||
The output could be pretty big because it will print the reverse dependency tree.
|
||||
|
||||
```
|
||||
[email protected]:~$ apt-rdepends -r ffmpeg
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
ffmpeg
|
||||
Reverse Depends: ardour-video-timeline (>= 1:5.12.0-3ubuntu4)
|
||||
Reverse Depends: deepin-screen-recorder (5.0.0-1build2)
|
||||
Reverse Depends: devede (4.15.0-2)
|
||||
Reverse Depends: dvd-slideshow (0.8.6.1-1)
|
||||
Reverse Depends: green-recorder (>= 3.2.3)
|
||||
```
|
||||
|
||||
I hope this quick tutorial was helpful in improving your command line knowledge a bit. Stay tuned for more such tips.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/check-dependencies-package-ubuntu/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://wiki.debian.org/Apt
|
||||
[2]: https://itsfoss.com/handbrake/
|
||||
[3]: https://ffmpeg.org/
|
||||
[4]: https://gstreamer.freedesktop.org/
|
||||
[5]: https://itsfoss.com/install-handbrake-ubuntu/
|
||||
[6]: https://itsfoss.com/apt-search-command/
|
||||
[7]: https://itsfoss.com/install-media-codecs-ubuntu/
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/apt-check-dependencies-ubuntu.png?resize=800%2C297&ssl=1
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/check-dpendencies-of-deb-package.png?resize=800%2C432&ssl=1
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Use this command-line tool to find security flaws in your code)
|
||||
[#]: via: (https://opensource.com/article/20/8/static-code-security-analysis)
|
||||
[#]: author: (Ari Noman https://opensource.com/users/arinoman)
|
||||
|
||||
Use this command-line tool to find security flaws in your code
|
||||
======
|
||||
Featuring broad language support, Graudit allows you to audit the
|
||||
security of your code during the development process.
|
||||
![Code on a screen][1]
|
||||
|
||||
Testing is an important part of the software development lifecycle (SDLC), and there are several stages to it. Today, I want to talk about finding security issues in the code.
|
||||
|
||||
You can't ignore security when developing a piece of software. That's why there is a term called DevSecOps, which is fundamentally responsible for identifying and resolving security vulnerabilities in an application. There are open source solutions for checking for [OWASP vulnerabilities][2] that derive insights by creating a threat model of the source code.
|
||||
|
||||
There are different approaches to handling security issues, e.g., static application security testing (SAST), dynamic application security testing (DAST), interactive application security testing (IAST), software composition analysis, etc.
|
||||
|
||||
Static application security testing runs at the code level and analyzes applications by uncovering errors in the code that has already been written. This approach doesn't require the code to be running, which is why it's called static analysis.
|
||||
|
||||
I'll focus on static code analysis and use an open source tool to have a hands-on experience.
|
||||
|
||||
### Why use an open source tool to check code security
|
||||
|
||||
There are many reasons to choose open source software, tools, and projects as a part of your development. It won't cost any money, as you're using a tool developed by a like-minded community of developers who want to help other developers. If you have a small team or a startup, it's good to find open source software to check your code security. This keeps you from having to hire a separate DevSecOps team, keeping your costs lower.
|
||||
|
||||
Good open source tools are always made with flexibility in mind, and they should be able to be used in any environment, covering as many cases as possible. It makes life easier for developers to connect that piece of software with their existing system.
|
||||
|
||||
But there can be times where you need a feature that is not available within the tool that you chose. Then you have the option to fork the code and develop your own feature on top of it and use it in your system.
|
||||
|
||||
Since open source software is usually driven by a community, the pace of development tends to be a plus for the users of that tool, because the project iterates based on user feedback, issues, and bug reports.
|
||||
|
||||
### Using Graudit to ensure that your code is secure
|
||||
|
||||
There are various open source static code analysis tools available, but as you know, the tool analyzes the code itself, and that's why there is no generic tool for any and all programming languages. But some of them follow OWASP guidelines and try to cover as many languages as they can.
|
||||
|
||||
Here, we'll use [Graudit][3], which is a simple command-line tool that allows us to find security flaws in our codebase. It has support for different languages but a fixed signature set.
|
||||
|
||||
Graudit is built on grep, a GNU-licensed utility, and there are similar static code analysis tools such as Rough Auditing Tool for Security (RATS), Securitycompass Web Application Analysis Tool (SWAAT), flawfinder, etc. Graudit's technical requirements are minimal, and it is very flexible. Still, you might have requirements that it doesn't serve. If so, you can look at this [list][4] for other options.
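To get a feel for the grep-based approach, here is a toy illustration of the idea. The patterns below are made up for this demo; Graudit's real signature databases are far more extensive.

```shell
# Create a small file with two obviously risky lines
cat > /tmp/demo-scan.js <<'EOF'
const q = "SELECT * FROM users WHERE id=" + userId;
eval(userInput);
EOF

# A grep "signature scan": -n prints line numbers, like an audit report
grep -nE 'eval\(|SELECT \* FROM' /tmp/demo-scan.js
```

Both lines are flagged, with their line numbers, much like the reports Graudit prints.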
|
||||
|
||||
We can install this tool under a specific project, or in the global namespace, or under a specific user—whatever we like, it's flexible. Let's clone the repo first:
|
||||
|
||||
|
||||
```
|
||||
$ git clone https://github.com/wireghoul/graudit
|
||||
```
|
||||
|
||||
Now, we need to create a symbolic link of Graudit so that we can use it as a command:
|
||||
|
||||
|
||||
```
|
||||
$ mkdir -p ~/bin
|
||||
$ ln --symbolic ~/graudit/graudit ~/bin/graudit
|
||||
```
|
||||
|
||||
Add an alias to .bashrc (or the config file for whatever shell you're using):
|
||||
|
||||
|
||||
```
|
||||
#------ .bashrc ------
|
||||
|
||||
alias graudit="~/bin/graudit"
|
||||
```
|
||||
|
||||
and reload the shell:
|
||||
|
||||
|
||||
```
|
||||
$ source ~/.bashrc # OR
|
||||
$ exec $SHELL
|
||||
```
|
||||
|
||||
Let's check whether or not we have successfully installed the tool by running this:
|
||||
|
||||
|
||||
```
|
||||
$ graudit -h
|
||||
```
|
||||
|
||||
If you get something similar to this, then you're good to go.
|
||||
|
||||
![Graudit terminal screen showing help page][5]
|
||||
|
||||
Fig. 1 Graudit help page
|
||||
|
||||
I'm using one of my existing projects to test the tool. To run the tool, we need to pass the database of the respective language. You'll find the databases under the signatures folder:
|
||||
|
||||
|
||||
```
|
||||
$ graudit -d ~/graudit/signatures/js.db
|
||||
```
|
||||
|
||||
I ran this on two JavaScript files from my existing projects, and you can see that it throws the vulnerable code in the console:
|
||||
|
||||
![JavaScript file showing Graudit display of vulnerable code][6]
|
||||
|
||||
![JavaScript file showing Graudit display of vulnerable code][7]
|
||||
|
||||
You can try running this on one of your own projects; Graudit includes a long list of [databases][8] in the project itself for supporting different languages.
|
||||
|
||||
### Graudit pros and cons
|
||||
|
||||
Graudit supports a lot of languages, which makes it a good bet for users on many different systems. It's comparable to other free or paid tools because of its simplicity of use and broad language support. Most importantly, it is under active development, and the community supports other users, too.
|
||||
|
||||
Though this is a handy tool, you may find it difficult to judge whether a specific flagged piece of code is truly "vulnerable." Maybe the developers will improve this in future versions of the tool. But it is always good to keep an eye on security issues in your code by using tools like this.
|
||||
|
||||
### Conclusion
|
||||
|
||||
In this article, I've only covered one of the many types of security testing—static application security testing. It's easy to start with static code analysis, but that's just the beginning. You can add other types of application security testing in your application development pipeline to enrich your overall security awareness.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/8/static-code-security-analysis
|
||||
|
||||
作者:[Ari Noman][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/arinoman
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_screen_display.jpg?itok=2HMTzqz0 (Code on a screen)
|
||||
[2]: https://owasp.org/www-community/vulnerabilities/
|
||||
[3]: https://github.com/wireghoul/graudit
|
||||
[4]: https://project-awesome.org/mre/awesome-static-analysis
|
||||
[5]: https://opensource.com/sites/default/files/uploads/graudit_1.png (Graudit terminal screen showing help page)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/graudit_2.png (JavaScript file showing Graudit display of vulnerable code)
|
||||
[7]: https://opensource.com/sites/default/files/uploads/graudit_3.png (JavaScript file showing Graudit display of vulnerable code)
|
||||
[8]: https://github.com/wireghoul/graudit#databases
|
182
translated/tech/20200722 SCP user-s migration guide to rsync.md
Normal file
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (SCP user’s migration guide to rsync)
|
||||
[#]: via: (https://fedoramagazine.org/scp-users-migration-guide-to-rsync/)
|
||||
[#]: author: (chasinglogic https://fedoramagazine.org/author/chasinglogic/)
|
||||
|
||||
scp 用户的 rsync 迁移指南
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
在 [SSH 8.0 预发布公告][2]中,OpenSSH 项目表示,他们认为 scp 协议已经过时,不灵活,而且不容易修复,然后他们继续推荐使用 `sftp` 或 `rsync` 来进行文件传输。
|
||||
|
||||
然而,很多用户都是从小用着 `scp` 命令长大的,所以对 `rsync` 并不熟悉。此外,`rsync` 可以做的事情也远不止复制文件,这可能会给菜鸟们留下复杂和不透明的印象。尤其是,`scp` 命令的标志大体上可以直接对应到 `cp` 命令的标志,而 `rsync` 命令的标志却和它大相径庭。
|
||||
|
||||
本文将为熟悉 `scp` 的人提供一个介绍和过渡的指南。让我们跳进最常见的场景:复制文件和复制目录。
|
||||
|
||||
### 复制文件
|
||||
|
||||
对于复制单个文件而言,`scp` 和 `rsync` 命令实际上是等价的。比方说,你需要把 `foo.txt` 传到一个名为 `server` 的服务器上你的主目录下:
|
||||
|
||||
```
|
||||
$ scp foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
相应的 `rsync` 命令只需要输入 `rsync` 取代 `scp`:
|
||||
|
||||
```
|
||||
$ rsync foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
### 复制目录
|
||||
|
||||
对于复制目录,就确实有了很大的分歧,这也解释了为什么 `rsync` 会被认为比 `scp` 更复杂。如果你想把 `bar` 目录复制到 `server` 服务器上,除了指定 `ssh` 信息外,相应的 `scp` 命令和 `cp` 命令一模一样。
|
||||
|
||||
```
|
||||
$ scp -r bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
对于 `rsync`,考虑的因素比较多,因为它是一个比较强大的工具。首先,我们来看一下最简单的形式:
|
||||
|
||||
```
|
||||
$ rsync -r bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
看起来很简单吧?对于只包含目录和普通文件的简单情况,这就可以了。然而,`rsync` 更在意发送与主机系统中一模一样的文件。让我们来创建一个稍微复杂一些,但并不罕见的例子:
|
||||
|
||||
```
|
||||
# 创建多级目录结构
|
||||
$ mkdir -p bar/baz
|
||||
# 在其根目录下创建文件
|
||||
$ touch bar/foo.txt
|
||||
# 现在创建一个符号链接指回到该文件
|
||||
$ cd bar/baz
|
||||
$ ln -s ../foo.txt link.txt
|
||||
# 返回原位置
|
||||
$ cd -
|
||||
```
|
||||
|
||||
现在我们有了一个如下的目录树:
|
||||
|
||||
```
|
||||
bar
|
||||
├── baz
|
||||
│ └── link.txt -> ../foo.txt
|
||||
└── foo.txt
|
||||
|
||||
1 directory, 2 files
|
||||
```
|
||||
|
||||
如果我们尝试上面的命令来复制 `bar`,我们会注意到非常不同的(并令人惊讶的)结果。首先,我们来试试 `scp`:
|
||||
|
||||
```
|
||||
$ scp -r bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
如果你 `ssh` 进入你的服务器,看看 `bar` 的目录树,你会发现它和你的主机系统有一个重要而微妙的区别:
|
||||
|
||||
```
|
||||
bar
|
||||
├── baz
|
||||
│ └── link.txt
|
||||
└── foo.txt
|
||||
|
||||
1 directory, 2 files
|
||||
```
|
||||
|
||||
请注意,`link.txt` 不再是一个符号链接,它现在是一个完整的 `foo.txt` 副本。如果你习惯于使用 `cp`,这可能会是令人惊讶的行为。如果你尝试使用 `cp -r` 复制 `bar` 目录,你会得到一个新的目录,里面的符号链接和 `bar` 的一样。现在如果我们尝试使用之前的 `rsync` 命令,我们会得到一个警告:
|
||||
|
||||
```
|
||||
$ rsync -r bar/ me@server:/home/me/
|
||||
skipping non-regular file "bar/baz/link.txt"
|
||||
```
|
||||
|
||||
`rsync` 警告我们它发现了一个非常规的文件,并正在跳过它。因为你没有告诉它可以复制符号链接,所以它忽略了它们。`rsync` 在手册中有一节“符号链接”,解释了所有可能的行为选项。在我们的例子中,我们需要添加 `--links` 标志:
|
||||
|
||||
```
|
||||
$ rsync -r --links bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
在远程服务器上,我们看到这个符号链接是作为一个符号链接复制过来的。请注意,这与 `scp` 复制符号链接的方式不同。
|
||||
|
||||
```
|
||||
bar/
|
||||
├── baz
|
||||
│ └── link.txt -> ../foo.txt
|
||||
└── foo.txt
|
||||
|
||||
1 directory, 2 files
|
||||
```
|
||||
|
||||
为了省去一些打字工作,并获得更多保留文件属性的选项,在复制目录时可以使用 `--archive`(简称 `-a`)标志。这个归档标志会做大多数人所期望的事情,因为它启用了递归复制、符号链接复制和许多其他选项。
|
||||
|
||||
```
|
||||
$ rsync -a bar/ me@server:/home/me/
|
||||
```
|
||||
|
||||
如果你感兴趣的话,`rsync` 手册页有关于存档标志的深入解释。
|
||||
|
||||
### 注意事项
|
||||
|
||||
不过,使用 `rsync` 有一个注意事项。使用 `scp` 比使用 `rsync` 更容易指定一个非标准的 ssh 端口。例如,如果 `server` 使用 8022 端口的 SSH 连接,那么这些命令就会像这样:
|
||||
|
||||
```
|
||||
$ scp -P 8022 foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
而在使用 `rsync` 时,你必须指定要使用的“远程 shell”命令,默认是 `ssh`。你可以使用 `-e` 标志来指定。
|
||||
|
||||
```
|
||||
$ rsync -e 'ssh -p 8022' foo.txt me@server:/home/me/
|
||||
```
|
||||
|
||||
`rsync` 会使用你的 `ssh` 配置;但是,如果你经常连接到这个服务器,你可以在你的 `~/.ssh/config` 文件中添加以下代码。这样你就不需要再为 `rsync` 或 `ssh` 命令指定端口了!
|
||||
|
||||
```
|
||||
Host server
|
||||
Port 8022
|
||||
```
|
||||
|
||||
另外,如果你连接的每一台服务器都在同一个非标准端口上运行,你还可以配置 `RSYNC_RSH` 环境变量。
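例如(沿用上文假设的 8022 端口),可以这样设置:

```shell
# 设置后,rsync 会默认使用这个“远程 shell”命令,不必每次传 -e 标志
export RSYNC_RSH='ssh -p 8022'
```

把这一行加入你的 shell 配置文件(如 `~/.bashrc`)即可持久生效。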
|
||||
|
||||
### 为什么你还是应该切换到 rsync?
|
||||
|
||||
现在我们已经介绍了从 `scp` 切换到 `rsync` 的日常使用案例和注意事项,让我们花一些时间来探讨一下为什么你可能想要使用 `rsync` 的优点。很多人在很久以前就已经开始使用 `rsync` 了,就是因为这些优点。
|
||||
|
||||
#### 即时压缩
|
||||
|
||||
如果你和服务器之间的网络连接速度较慢或有限,`rsync` 可以花费更多的 CPU 处理能力来节省网络带宽。它通过在发送数据之前对数据进行即时压缩来实现。压缩可以用 `-z` 标志来启用。
|
||||
|
||||
#### 差量传输
|
||||
|
||||
`rsync` 也只在目标文件与源文件不同的情况下复制文件。这可以在目录中递归地工作。例如,如果你拿我们上面的最后一个 `bar` 的例子,并多次重新运行那个 `rsync` 命令,那么在最初的传输之后就不会有任何传输。如果你知道你会重复使用这些命令,例如备份到 U 盘,那么使用 `rsync` 即使是进行本地复制也是值得的,因为这个功能可以节省处理大型数据集的大量的时间。
|
||||
|
||||
#### 同步
|
||||
|
||||
顾名思义,`rsync` 可以做的不仅仅是复制数据。到目前为止,我们只演示了如何使用 `rsync` 复制文件。如果你想让 `rsync` 把目标目录变成和源目录一模一样,可以给它加上 `--delete` 标志。加上该标志后,`rsync` 会把源目录中有而目标目录中没有的文件复制过去,然后删除目标目录中那些源目录里不存在的文件。结果就是目标目录和源目录完全一样。相比之下,`scp` 只会往目标目录里添加文件。
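一个小技巧(下面的目录名仅为演示):在真正使用 `--delete` 之前,可以加上 `--dry-run`(`-n`)先预览哪些文件会被删除:

```shell
# 构造一个演示环境:dest 里有一个源目录中不存在的文件
mkdir -p src dest
touch src/keep.txt dest/stale.txt

# -n(--dry-run)配合 -v 只预览、不实际修改,
# 输出中应会列出 deleting stale.txt
rsync -avn --delete src/ dest/
```

确认预览结果无误后,去掉 `-n` 再真正执行。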
|
||||
|
||||
### 结论
|
||||
|
||||
对于简单的使用情况,`rsync` 并不比老牌的 `scp` 工具复杂多少。唯一显著的区别是在递归复制目录时使用 `-a` 而不是 `-r`。然而,正如我们看到的,`rsync` 的 `-a` 标志比 `scp` 的 `-r` 标志更像 `cp` 的 `-r` 标志。
|
||||
|
||||
希望通过这些新命令,你可以加快你的文件传输工作流程。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/scp-users-migration-guide-to-rsync/
|
||||
|
||||
作者:[chasinglogic][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/chasinglogic/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2020/07/scp-rsync-816x345.png
|
||||
[2]: https://lists.mindrot.org/pipermail/openssh-unix-dev/2019-March/037672.html
|
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (wxy)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to create a documentation site with Docsify and GitHub Pages)
|
||||
[#]: via: (https://opensource.com/article/20/7/docsify-github-pages)
|
||||
[#]: author: (Bryant Son https://opensource.com/users/brson)
|
||||
|
||||
如何使用 Docsify 和 GitHub Pages 创建一个文档网站
|
||||
======
|
||||
|
||||
> 使用 Docsify 创建文档网页并发布到 GitHub Pages 上。
|
||||
|
||||
![Digital creative of a browser on the internet][1]
|
||||
|
||||
文档是帮助用户使用开源项目的一个重要部分,但它并不总是开发人员的首要任务,因为他们可能更关注的是改进应用程序本身,而不是帮助人们使用它。这就是为什么让文档的发布变得更容易,对开发者来说如此有价值。在本教程中,我将向你展示一种这样做的方式:将 [Docsify][2] 文档生成器与 [GitHub Pages][3] 结合起来。
|
||||
|
||||
如果你喜欢通过视频学习,可以访问 YouTube 版本的教程:
|
||||
|
||||
- [video](https://youtu.be/ccA2ecqKyHo)
|
||||
|
||||
默认情况下,GitHub Pages 会提示用户使用 [Jekyll][4],这是一个支持 HTML、CSS 和其它网页技术的静态网站生成器。Jekyll 可以从以 Markdown 格式编码的文档文件中生成一个静态网站,GitHub 会自动识别它们的 `.md` 或 `.markdown` 扩展名。虽然这种设置很好,但我想尝试一下其他的东西。
|
||||
|
||||
幸运的是,GitHub Pages 支持 HTML 文件,这意味着你可以使用其他网站生成工具(比如 Docsify)在这个平台上创建一个网站。Docsify 是一个采用 MIT 许可证的开源项目,其具有可以让你在 GitHub Pages 上轻松创建一个有吸引力的先进的文档网站的[功能][5]。
|
||||
|
||||
![Docsify][6]
|
||||
|
||||
### 开始使用 Docsify
|
||||
|
||||
安装 Docsify 有两种方法:
|
||||
|
||||
1. 通过 NPM 安装 Docsify 的命令行界面(CLI)。
|
||||
2. 手动编写自己的 `index.html`。
|
||||
|
||||
Docsify 推荐使用 NPM 方式,但我将使用第二种方案。如果你想使用 NPM,请按照[快速入门指南][8]中的说明进行操作。
|
||||
|
||||
### 从 GitHub 下载示例内容
|
||||
|
||||
我已经在[该项目的 GitHub 页面][9]上发布了这个例子的源代码。你可以单独下载这些文件,也可以通过以下方式[克隆这个存储库][10]。
|
||||
|
||||
```
|
||||
git clone https://github.com/bryantson/OpensourceDotComDemos
|
||||
```
|
||||
|
||||
然后 `cd` 进入 `DocsifyDemo` 目录。
|
||||
|
||||
我将在下面为你介绍克隆自我的示例存储库中的代码,这样你就可以理解如何修改 Docsify。如果你愿意,也可以从头开始创建一个新的 `index.html` 文件,就像 Docsify 文档中的[示例][11]一样:
|
||||
|
||||
```
|
||||
<!-- index.html -->
|
||||
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
|
||||
<meta name="viewport" content="width=device-width,initial-scale=1">
|
||||
<meta charset="UTF-8">
|
||||
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
|
||||
</head>
|
||||
<body>
|
||||
<div id="app"></div>
|
||||
<script>
|
||||
window.$docsify = {
|
||||
//...
|
||||
}
|
||||
</script>
|
||||
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
### 探索 Docsify 如何工作
|
||||
|
||||
如果你克隆了我的 [GitHub 存储库][10],并切换到 `DocsifyDemo` 目录下,你应该看到这样的文件结构:
|
||||
|
||||
![File contents in the cloned GitHub][19]
|
||||
|
||||
文件/文件夹名称 | 内容
---|---
`index.html` | 主要的 Docsify 初始化文件,也是最重要的文件
`_sidebar.md` | 生成导航
`README.md` | 你的文档根目录下的默认 Markdown 文件
`images` | 包含 `README.md` 中的示例 .jpg 图片
其它目录和文件 | 包含可导航的 Markdown 文件
|
||||
|
||||
`index.html` 是 Docsify 可以工作的唯一要求。打开该文件,你可以查看其内容:
|
||||
|
||||
```
|
||||
<!-- index.html -->
|
||||
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
|
||||
<meta name="viewport" content="width=device-width,initial-scale=1">
|
||||
<meta charset="UTF-8">
|
||||
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
|
||||
<title>Docsify Demo</title>
|
||||
</head>
|
||||
<body>
|
||||
<div id="app"></div>
|
||||
<script>
|
||||
window.$docsify = {
|
||||
el: "#app",
|
||||
repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
|
||||
loadSidebar: true,
|
||||
}
|
||||
</script>
|
||||
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
这本质上只是一个普通的 HTML 文件,但看看这两行:
|
||||
|
||||
```
|
||||
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/vue.css">
|
||||
... 一些其它内容 ...
|
||||
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
|
||||
```
|
||||
|
||||
这些行使用内容交付网络(CDN)URL 来提供 CSS 和 JavaScript 脚本,以将网站转化为 Docsify 网站。只要你包含这些行,你就可以把你的普通 GitHub 页面变成 Docsify 页面。
|
||||
|
||||
`<body>` 标签后的第一行指定了要渲染的内容:
|
||||
|
||||
```
|
||||
<div id="app"></div>
|
||||
```
|
||||
|
||||
Docsify 使用[单页应用][21](SPA)的方式来渲染请求的页面,而不是刷新一个全新的页面。
|
||||
|
||||
最后,看看 `<script>` 块里面的行:
|
||||
|
||||
```
|
||||
<script>
|
||||
window.$docsify = {
|
||||
el: "#app",
|
||||
repo: 'https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo',
|
||||
loadSidebar: true,
|
||||
}
|
||||
</script>
|
||||
```
|
||||
|
||||
在这个块中:
|
||||
|
||||
* `el` 属性基本上是说:“嘿,这就是我要找的 `id`,所以找到它并在那里呈现。”
|
||||
* 改变 `repo` 值,以确定当用户点击右上角的 GitHub 图标时,会被重定向到哪个页面。
|
||||
![GitHub icon][22]
|
||||
* 将 `loadSideBar` 设置为 `true` 将使 Docsify 查找包含导航链接的 `_sidebar.md` 文件。
|
||||
|
||||
你可以在 Docsify 文档的[配置][23]部分找到所有选项。
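例如,若想启用官方的全文搜索插件,可以在 `index.html` 中这样写(以下写法参考 Docsify 文档,CDN 路径与 `search` 选项的细节以其官方文档为准):

```html
<script>
  window.$docsify = {
    el: "#app",
    loadSidebar: true,
    search: 'auto'  // 让搜索插件自动索引各个 Markdown 页面
  }
</script>
<script src="//cdn.jsdelivr.net/npm/docsify/lib/docsify.min.js"></script>
<script src="//cdn.jsdelivr.net/npm/docsify/lib/plugins/search.min.js"></script>
```

和主题 CSS 一样,插件也是通过一行 CDN 脚本引入的,配置项则统一写在 `window.$docsify` 里。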
|
||||
|
||||
接下来,看看 `_sidebar.md` 文件。因为你在 `index.html` 中设置了 `loadSidebar` 属性值为 `true`,所以 Docsify 会查找 `_sidebar.md` 文件,并根据其内容生成导航文件。示例存储库中的 `_sidebar.md` 内容是:
|
||||
|
||||
```
|
||||
<!-- docs/_sidebar.md -->
|
||||
|
||||
|
||||
* [HOME](./)
|
||||
|
||||
* [Tutorials](./tutorials/index)
|
||||
* [Tomcat](./tutorials/tomcat/index)
|
||||
* [Cloud](./tutorials/cloud/index)
|
||||
* [Java](./tutorials/java/index)
|
||||
|
||||
* [About](./about/index)
|
||||
|
||||
* [Contact](./contact/index)
|
||||
```
|
||||
|
||||
这会使用 Markdown 的链接格式来创建导航。请注意 Tomcat、Cloud 和 Java 链接是缩进的;这导致它们被渲染为父链接下的子链接。
|
||||
|
||||
像 `README.md` 和 `images` 这样的文件与存储库的结构有关,但所有其它 Markdown 文件都与你的 Docsify 网页有关。
|
||||
|
||||
根据你的需求,随意修改你下载的文件。下一步,你将把这些文件添加到你的 GitHub 存储库中,启用 GitHub Pages,并完成项目。
|
||||
|
||||
### 启用 GitHub 页面
|
||||
|
||||
创建一个示例的 GitHub 存储库,然后使用以下 GitHub 命令检出、提交和推送你的代码:
|
||||
|
||||
```
|
||||
$ git clone 你的 GitHub 存储库位置
|
||||
$ cd 你的 GitHub 存储库位置
|
||||
$ git add .
|
||||
$ git commit -m "My first Docsify!"
|
||||
$ git push
|
||||
```
|
||||
|
||||
设置你的 GitHub Pages 页面。在你的新 GitHub 存储库中,点击 “Settings”:
|
||||
|
||||
![Settings link in GitHub][24]
|
||||
|
||||
向下滚动直到看到 “GitHub Pages”:
|
||||
|
||||
![GitHub Pages settings][25]
|
||||
|
||||
查找 “Source” 部分:
|
||||
|
||||
![GitHub Pages settings][26]
|
||||
|
||||
点击 “Source” 下的下拉菜单。通常,你会将其设置为 “master branch”,但如果你愿意,也可以使用其他分支:
|
||||
|
||||
![Setting Source to master branch][27]
|
||||
|
||||
就是这样!你现在应该有一个链接到你的 GitHub Pages 的页面了。点击该链接将带你到那里,然后用 Docsify 渲染:
|
||||
|
||||
![Link to GitHub Pages docs site][28]
|
||||
|
||||
它应该像这样:
|
||||
|
||||
![Example Docsify site on GitHub Pages][29]
|
||||
|
||||
### 结论
|
||||
|
||||
通过编辑一个 HTML 文件和一些 Markdown 文本,你可以用 Docsify 创建一个外观精美的文档网站。你觉得怎么样?请留言,也可以分享其他可以和 GitHub Pages 一起使用的开源工具。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/7/docsify-github-pages
|
||||
|
||||
作者:[Bryant Son][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/brson
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet)
|
||||
[2]: https://docsify.js.org
|
||||
[3]: https://pages.github.com/
|
||||
[4]: https://docs.github.com/en/github/working-with-github-pages/about-github-pages-and-jekyll
|
||||
[5]: https://docsify.js.org/#/?id=features
|
||||
[6]: https://opensource.com/sites/default/files/uploads/docsify1_ui.jpg (Docsify)
|
||||
[7]: https://creativecommons.org/licenses/by-sa/4.0/
|
||||
[8]: https://docsify.js.org/#/quickstart?id=quick-start
|
||||
[9]: https://github.com/bryantson/OpensourceDotComDemos/tree/master/DocsifyDemo
|
||||
[10]: https://github.com/bryantson/OpensourceDotComDemos
|
||||
[11]: https://docsify.js.org/#/quickstart?id=manual-initialization
|
||||
[12]: http://december.com/html/4/element/html.html
|
||||
[13]: http://december.com/html/4/element/head.html
|
||||
[14]: http://december.com/html/4/element/meta.html
|
||||
[15]: http://december.com/html/4/element/link.html
|
||||
[16]: http://december.com/html/4/element/body.html
|
||||
[17]: http://december.com/html/4/element/div.html
|
||||
[18]: http://december.com/html/4/element/script.html
|
||||
[19]: https://opensource.com/sites/default/files/uploads/docsify3_files.jpg (File contents in the cloned GitHub)
|
||||
[20]: http://december.com/html/4/element/title.html
|
||||
[21]: https://en.wikipedia.org/wiki/Single-page_application
|
||||
[22]: https://opensource.com/sites/default/files/uploads/docsify4_github-icon_rev_0.jpg (GitHub icon)
|
||||
[23]: https://docsify.js.org/#/configuration?id=configuration
|
||||
[24]: https://opensource.com/sites/default/files/uploads/docsify5_githubsettings_0.jpg (Settings link in GitHub)
|
||||
[25]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev.jpg (GitHub Pages settings)
|
||||
[26]: https://opensource.com/sites/default/files/uploads/docsify6_githubpageconfig_rev2.jpg (GitHub Pages settings)
|
||||
[27]: https://opensource.com/sites/default/files/uploads/docsify8_setsource_rev.jpg (Setting Source to master branch)
|
||||
[28]: https://opensource.com/sites/default/files/uploads/docsify9_link_rev.jpg (Link to GitHub Pages docs site)
|
||||
[29]: https://opensource.com/sites/default/files/uploads/docsify2_examplesite.jpg (Example Docsify site on GitHub Pages)
|
|
||||
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Merging and sorting files on Linux)
[#]: via: (https://www.networkworld.com/article/3570508/merging-and-sorting-files-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Merging and sorting files on Linux
======

There are many ways to merge and sort text on Linux, but how to go about it depends on what you are trying to do: whether you simply want to put the contents of multiple files into one big file, or organize them in some way that makes them easier to use. In this post, we look at some commands for sorting and merging file contents, and point out how the results differ.

### Using cat

If all you want to do is pull a group of files together into a single file, the **cat** command is an easy choice. All you have to do is type `cat` and then list the files on the command line in the order in which you want them included in the merged file. Redirect the output of the command to the file you want to create. If a file with the specified name already exists, it will be overwritten. For example:

```
$ cat firstfile secondfile thirdfile > newfile
```

If you want to add the contents of a series of files to an existing file rather than overwrite it, simply turn the **>** into **>>**:

```
$ cat firstfile secondfile thirdfile >> updated_file
```

If the files you want to merge follow some convenient naming convention, the task can be even simpler. You won't have to list all of the files if their names can be matched with a wildcard. For example, if the files all end with the word "file", as shown above, you can do something like this:

```
$ cat *file > allfiles
```

Note that the command above adds file contents in alphanumeric order. On Linux, a file named "filea" would be added ahead of one named "fileA", but after one called "file7". After all, when we're dealing with alphanumeric sequences, we need to consider not just "ABCDE", but "0123456789aAbBcCdDeE". You can use a command like `ls *file` to see the order in which the files will be merged before you merge them.

**NOTE:** First make sure your command includes all of the files you want in the merged file and no others, especially when you're using a wildcard like `*`. And don't forget that the files used in the merge will still exist separately; you may want to delete them once you've verified the merge.
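This ordering is easy to check before merging. A minimal sketch (the `/tmp` path and file names are illustrative; collation differs by locale, so the C locale is forced here for a predictable byte-order result):

```shell
# Create sample files whose names differ only in a digit or letter case.
mkdir -p /tmp/order_demo && cd /tmp/order_demo
rm -f file7 fileA filea
echo seven > file7; echo upper > fileA; echo lower > filea

# In the C locale, names sort by byte value: digits, then uppercase,
# then lowercase. Under locale-aware collation (e.g. en_US.UTF-8),
# "filea" can sort ahead of "fileA" instead.
LC_ALL=C ls -1 file*
```

Running `ls` with the same glob you plan to hand to `cat` is a cheap way to confirm the merge order before committing to it.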

### Merging files by age

If you want to merge files based on the age of each file rather than by file name, use a command like this:

```
$ for file in `ls -tr myfile.*`; do cat $file >> BigFile.$$; done
```

Using the **-tr** options (**t** = time, **r** = reverse) produces a list of files in oldest-first order. This can be useful, for example, if you are keeping a log of certain activities and want the contents added in the order in which the activities were performed.

The **$$** in the command above represents the process ID of the shell running the command. Using it isn't at all necessary, but it makes it nearly impossible to accidentally append to an existing file instead of creating a new one. If you use `$$`, the resulting file might look like this:

```
$ ls -l BigFile.*
-rw-rw-r-- 1 justme justme 931725 Aug 6 12:36 BigFile.582914
```
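The age-based loop can be sketched end to end with reproducible timestamps (the `/tmp` path, file names, and contents are illustrative; `touch -t` forces the modification times so the oldest-first order is deterministic):

```shell
# Build three log fragments with known modification times, oldest first.
mkdir -p /tmp/age_demo && cd /tmp/age_demo
rm -f myfile.* BigFile.txt
echo "first entry"  > myfile.1; touch -t 202001010000 myfile.1
echo "second entry" > myfile.2; touch -t 202002010000 myfile.2
echo "third entry"  > myfile.3; touch -t 202003010000 myfile.3

# ls -tr lists oldest first, so entries land in chronological order.
for file in $(ls -tr myfile.*); do cat "$file" >> BigFile.txt; done
cat BigFile.txt
```

The merged file holds the fragments in modification-time order regardless of how the names would sort alphabetically.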

### Merging and sorting files

Linux provides some interesting ways to sort file contents before or after the merge.

#### Sorting contents alphabetically

If you want the merged file contents to be sorted, you can sort the overall contents with a command like this:

```
$ cat myfile.1 myfile.2 myfile.3 | sort > newfile
```

If you want the contents grouped by file, sort each file before adding it to the new file with a command like this:

```
$ for file in `ls myfile.?`; do sort $file >> newfile; done
```

#### Sorting files numerically

To sort file contents numerically, use the **-n** option with `sort`. This option is useful only if the lines in the files start with numbers. Keep in mind that, in the default order, "02" would sort ahead of "1". Use the **-n** option when you want to ensure that lines sort numerically:

```
$ cat myfile.1 myfile.2 myfile.3 | sort -n > xyz
```

The **-n** option also allows you to sort contents by date if the lines in the files start with dates in a format like "2020-11-03" or "2020/11/03" (year, month, day). Sorting by dates in other formats will be trickier and will require considerably more complex commands.
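A quick sketch of the date case (the sample data and `/tmp` path are illustrative):

```shell
# Log lines that begin with year/month/day dates, deliberately out of order.
printf '2020/11/03 restart\n2020/01/15 install\n2020/06/20 upgrade\n' > /tmp/dates.txt

# sort -n compares the leading number; when that ties (same year here),
# sort falls back to comparing the full lines, which for this fixed-width
# date format yields chronological order.
sort -n /tmp/dates.txt
```

The same approach breaks down for formats like "Nov 3, 2020", which is why other date layouts need more elaborate sort keys.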

### Using paste

The **paste** command allows you to join the contents of files on a line-by-line basis. When you use this command, the first line of the merged file will contain the first line of each of the files being merged. Here's an example in which I used capital letters to make it easy to see where the lines came from:

```
$ cat file.a
A one
A two
A three

$ paste file.a file.b file.c
A one   B one   C one
A two   B two   C two
A three B three C thee
        B four  C four
                C five
```

Redirect the output to another file to save it:

```
$ paste file.a file.b file.c > merged_content
```

Alternately, you can paste the files together such that the contents of each file are merged on a single line. This requires the **-s** (sequential) option. Notice how the output this time shows the content of each file on its own line:

```
$ paste -s file.a file.b file.c
A one   A two   A three
B one   B two   B three B four
C one   C two   C thee  C four  C five
```

### Using join

Another command for merging files is **join**. The `join` command allows you to merge the contents of multiple files based on a common field. For example, you might have one file that contains the phone numbers of a group of coworkers and another that contains their email addresses, with both listed by the individuals' names. You can use `join` to create a file with both the phone numbers and the email addresses.

One important restriction is that the files' lines must be in the same order, and each file must include the join field.

Here's an example command:

```
$ join phone_numbers email_addresses
Sandra 555-456-1234 bugfarm@gmail.com
Pedro 555-540-5405
John 555-333-1234 john_doe@gmail.com
Nemo 555-123-4567 cutie@fish.com
```

In this case, the first field (the name) needs to be present in each file even when the additional information is missing; otherwise the command will fail with an error. Sorting the contents helps and is probably easier to manage, but it isn't strictly required as long as the order is consistent.
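A minimal, self-contained sketch of the same idea (data abbreviated from the example above, with sorted input to avoid order warnings; note that plain `join` silently drops lines whose key appears in only one file, so `-a 1` is shown to keep unmatched lines from the first file, as in the output above):

```shell
# Both files keep the join field (the name) first and in the same order.
mkdir -p /tmp/join_demo && cd /tmp/join_demo
printf 'John 555-333-1234\nNemo 555-123-4567\nPedro 555-540-5405\n' > phone_numbers
printf 'John john_doe@gmail.com\nNemo cutie@fish.com\n' > email_addresses

# Default join: only names present in both files appear in the output.
join phone_numbers email_addresses

# -a 1 also keeps first-file lines with no match (Pedro has no email).
join -a 1 phone_numbers email_addresses
```

Choosing between the default behavior and `-a` is the main design decision: drop incomplete records, or carry them through with missing fields.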

### Wrap-up

You have quite a few options for merging and sorting data stored in separate files on Linux, and these choices can make otherwise tedious tasks surprisingly easy.

Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3570508/merging-and-sorting-files-on-linux.html

Author: [Sandra Henry-Stocker][a]
Topic selector: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[2]: https://www.facebook.com/NetworkWorld/
[3]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (OnionShare: An Open-Source Tool to Share Files Securely Over Tor Network)
[#]: via: (https://itsfoss.com/onionshare/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

OnionShare: An Open-Source Tool to Share Files Securely Over the Tor Network
======

_**Brief: OnionShare is a free and open-source tool that utilizes the Tor network to share files securely and anonymously.**_

You probably already have plenty of online services to share files securely, but they may not be completely anonymous.

Also, you have to rely on a centralized service to share files, and if the service decides to shut down like [Firefox Send][1] did, you can't really count on it to keep sharing your files securely.

With all this in mind, OnionShare is an amazing open-source tool that lets you share files using the [Tor onion service][2]. It should be a great alternative to all the cloud file-sharing services.

Let's take a look at what it offers and how it works.

![][3]

### OnionShare: Share files anonymously over Tor

[OnionShare][4] is an interesting open-source tool that's available for Linux, Windows, and macOS.

It lets you securely share files directly from your computer to the receiver without revealing your identity in the process. You don't have to sign up for any account, and it doesn't rely on any centralized storage service.

It is basically a peer-to-peer service over the Tor network. The receiver only needs a [Tor browser][5] to download/upload files to your computer. If you're curious, I'd also recommend going through our [Tor guide][6] to explore more about it.

Let's look at its features.

### Features of OnionShare

For an average user who just wants security and anonymity, it requires no tweaking. However, there are some advanced options as well, if you need them.

  * Cross-platform support (Windows, macOS, and Linux)
  * Send files
  * Receive files
  * Command-line options
  * Publish onion sites
  * Ability to use a bridge (if your Tor connection doesn't work)
  * Ability to use a persistent URL for sharing (advanced users)
  * Stealth mode (more secure)

You can go through the [official user guide][7] on GitHub to learn more about these.

### Installing OnionShare on Linux

You should find OnionShare in your software center and be able to install it from there. If not, you can add the PPA on Ubuntu-based distributions using the commands below:

```
sudo add-apt-repository ppa:micahflee/ppa
sudo apt update
sudo apt install -y onionshare
```

If you want to install it on other Linux distributions, you can visit the [official website][4] for installation instructions for Fedora as well as build instructions.

[Download OnionShare][4]

### How does OnionShare work?

Once you have it installed, everything is pretty self-explanatory and easy to use. But if you want a head start, let me show you how it works.

Once launched, it loads up and connects to the Tor network.

#### Sharing files

![][8]

You just need to add the files you want to share from your computer and hit "**Start sharing**".

Once done, the status in the bottom-right corner should read "**Sharing**", and an **OnionShare address** will be generated (and automatically copied to the clipboard), as shown in the screenshot below.

![][9]

Now, all the receiver needs is the OnionShare address, which looks like this:

```
http://onionshare:[email protected]
```

The Tor browser then starts downloading the files.

It's worth noting that once the download is complete (the files have been transferred), the file sharing stops. You'll be notified when that happens.

So, if you want to share the files again, or with someone else, you have to re-share them and send the new OnionShare address to the receiver.

#### Allowing files to be received

If you want to generate a URL that lets others upload files directly to your computer (be careful who you share it with), you can just click on the **Receive Files** tab after launching OnionShare.

![][10]

You just need to hit the "**Start Receive Mode**" button to get started. Next, you get an OnionShare address (just as when sharing files).

The receiver has to access it using the Tor browser and can then start uploading files. It should look like this:

![][11]

While you'll be notified of the file transfers when someone uploads files to your computer, you need to stop the receive mode manually once you're done.

#### Downloading/uploading files

Assuming you have the Tor browser installed, you just have to enter the OnionShare address in the URL bar and confirm the login (press OK); it will look like this:

![][12]

Similarly, when you get an address to upload files, it will look like this:

![][13]

#### Publishing onion sites

If you want, you can add files to host a static onion website directly. Of course, because it is a peer-to-peer connection, the loading speed will be quite slow while every file is transferred from your computer.

![][14]

I tried it out with a [free template][15], and it worked great (but slowly). So, it may also depend on your network connection.

### Wrapping up

Beyond the features mentioned above, you can also use the command line for some advanced tweaks if needed.

OnionShare is indeed an impressive open-source tool that lets you share files anonymously with ease, without any special tweaks.

Have you tried OnionShare? Do you know of something similar? Let me know in the comments below!

--------------------------------------------------------------------------------

via: https://itsfoss.com/onionshare/

Author: [Ankush Das][a]
Topic selector: [lujun9972][b]
Translator: [geekpi](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/firefox-send/
[2]: https://community.torproject.org/onion-services/
[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-screenshot.jpg?resize=800%2C629&ssl=1
[4]: https://onionshare.org/
[5]: https://itsfoss.com/install-tar-browser-linux/
[6]: https://itsfoss.com/tor-guide/
[7]: https://github.com/micahflee/onionshare/wiki
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-share.png?resize=800%2C604&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-file-shared.jpg?resize=800%2C532&ssl=1
[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-receive-files.jpg?resize=800%2C655&ssl=1
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-receive-mode.jpg?resize=800%2C529&ssl=1
[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-download.jpg?resize=800%2C499&ssl=1
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-upload.jpg?resize=800%2C542&ssl=1
[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/onionshare-onion-site.jpg?resize=800%2C366&ssl=1
[15]: https://www.styleshout.com/free-templates/kards/
[#]: collector: (lujun9972)
[#]: translator: (wxy)
[#]: reviewer: (wxy)
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Glances – A Versatile System Monitoring Tool for Linux Systems)
[#]: via: (https://itsfoss.com/glances/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)

Glances – A Versatile System Monitoring Tool for Linux Systems
======

The most commonly used [command-line process monitoring tools][1] on Linux are `top` and its colorful, feature-rich cousin [htop][2].

To [monitor temperatures on Linux][3], you can use [lm-sensors][4]. Similarly, there are many utilities to monitor other real-time metrics such as disk I/O, network stats, and more.

[Glances][5] is a system monitoring tool that ties them all together and offers a lot more. What I like most is that you can run Glances on a remote Linux server to monitor that system's resources from your local machine, or through a web browser.

Here's what it looks like. The terminal in the screenshot below has been [beautified with the Pywal tool, which automatically changes colors based on the wallpaper][6]:

![][7]

You can also integrate it with tools like [Grafana][8] to monitor the stats in an intuitive dashboard.

It is written in Python, which means the vast majority of its features work on most platforms.

### Features of Glances

![Glances Data In Grafana Dashboard][9]

Here's a quick run-through of the main features Glances offers:

  * Can monitor as many as 15 metrics on your system (including Docker containers)
  * Flexible usage modes: standalone, client-server, over SSH, and web mode
  * Various REST and XML-RPC APIs available for integration
  * Easy export of data to different services and databases
  * Highly configurable and adaptable to different needs
  * Very comprehensive documentation

### Installing Glances on Ubuntu and other Linux distributions

Glances is available in the official repositories of many Linux distributions. This means you can easily install it using your distribution's package manager.

On Debian/Ubuntu-based distributions, you can use the following command:

```
sudo apt install glances
```

You can also install the latest Glances as a snap package:

```
sudo snap install glances
```

Since Glances is based on Python, you can also use PIP to install it on most Linux distributions. First [install PIP][10], then use it to install Glances:

```
sudo pip3 install glances
```

If nothing else works, you can use the automated install script provided by the Glances developer. Although we don't recommend running random scripts directly on your system, that's entirely up to you:

```
curl -L https://bit.ly/glances | /bin/bash
```

You can check out other ways of installing Glances in the [documentation][11]; you can even run it as a Docker container.

### Monitoring local Linux system resources with Glances (standalone mode)

You can easily launch Glances to monitor your local machine by running this command in a terminal:

```
glances
```

You can immediately observe that it pulls a lot of different information together on a single screen. I like how it shows the computer's public and private IPs at the top:

![][12]

Glances is also interactive, which means you can use commands to interact with it while it's running. You can press `s` to bring sensors onto the screen, `k` to bring up the list of TCP connections, and `1` to expand the CPU stats to show individual threads.

You can also use the arrow keys to move through the list of processes and sort the table by different metrics.

Glances can be launched with various command-line options, and it offers plenty of interactive commands as well. You can find the full list in its [extensive documentation][13].

Press `Ctrl+C` to exit Glances.

### Monitoring remote Linux systems with Glances (client-server mode)

To monitor a remote computer, you can use Glances in client-server mode. You need Glances installed on both systems.

On the remote Linux system, launch Glances in server mode with the `-s` option:

```
glances -s
```

On the client system, launch Glances in client mode with the command below and connect to the server:

```
glances -c server_ip_address
```

You can also SSH into any computer and launch Glances there; it works flawlessly. More information on client-server mode is available [here][14].

### Monitoring Linux system resources in a web browser with Glances (web mode)

Glances can also run in web mode, which means you can use a web browser to access it. Unlike the client-server mode above, you don't need Glances installed on the client system.

To launch Glances in web mode, use the `-w` option:

```
glances -w
```

Note that it may report "Glances Web User Interface started on http://0.0.0.0:61208" even on a Linux server, while it actually listens on the server's IP address.

The key part is the port number, 61208, which you use to access Glances through a web browser. Just enter the port number after the server's IP address, e.g. <http://123.123.123.123:61208>.

On your local system, you can access it at <http://0.0.0.0:61208/> or <https://localhost:61208/>.

![][15]

The web mode mimics the look of the terminal. The web version is built on responsive design principles and looks great even on phones.

You may want to protect the web mode with a password so that only authorized people can use it. The default username is `glances`:

```
root@localhost:~# glances -w --password
Define the Glances webserver password (glances username):
Password (confirm):
Do you want to save the password? [Yes/No]: n
Glances Web User Interface started on http://0.0.0.0:61208/
```

You can find more information about configuring a password in the [quickstart guide][16].

### Exporting Glances data to different services

One of the biggest advantages of Glances is its out-of-the-box support for exporting data to various databases and services, and for seamless integration into all kinds of data pipelines.

You can export to CSV while monitoring with this command:

```
glances --export csv --export-csv-file /tmp/glances.csv
```

Here, `/tmp/glances.csv` is the location of the file. The data is neatly filled in as a time series:

![][17]

You can also export to larger applications like [Prometheus][18] to enable conditional triggers and notifications.

It plugs straight into messaging services (such as RabbitMQ and MQTT) and streaming platforms (such as Kafka), and can export time-series data to databases (such as InfluxDB) for visualization with Grafana.

You can check out the whole list of services and export options [here][19].

### Integrating Glances with other services using the REST API

This is my favorite feature in the whole stack. Glances not only pulls the various metrics together but also exposes them through APIs.

This simple yet powerful feature makes it very easy to build custom applications, services, and middleware for any particular use case.

The REST API server starts automatically when you launch Glances in web mode. To launch it in API-server-only mode, you can use this command:

```
glances -w --disable-webui
```

The [REST API][20] documentation is comprehensive, and its responses are easy to integrate with web applications. This makes it straightforward to build a unified dashboard that monitors multiple servers with a tool like [Node-RED][21]:

![][22]

Glances also provides an XML-RPC server; you can find its documentation [here][23].

### Closing thoughts on Glances

Glances uses the [psutil][24] Python library to access the different system stats. Back in 2017, I built a simple API server using the same library to retrieve CPU usage, and monitored all the Raspberry Pis in a cluster with a dashboard built in Node-RED.

Glances would have saved me some time while offering far more features; I just didn't know about it at the time.

While writing this article, I did try to install Glances on my Raspberry Pi; unfortunately, all the installation methods failed with errors. When I succeed, I'll update this article, or perhaps write a separate one covering the installation steps on a Raspberry Pi.

I wish Glances offered a way to take the place of tools like `top` or `htop`. Let's hope we get that in an upcoming release.

I hope this gives you plenty of information about Glances. Which system monitoring tools do you use? Let me know in the comments.

--------------------------------------------------------------------------------

via: https://itsfoss.com/glances/

Author: [Chinmay][a]
Topic selector: [lujun9972][b]
Translator: [wxy](https://github.com/wxy)
Proofreader: [wxy](https://github.com/wxy)

This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]: https://itsfoss.com/author/chinmay/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/linux-system-monitoring-tools/
[2]: https://hisham.hm/htop/
[3]: https://itsfoss.com/monitor-cpu-gpu-temp-linux/
[4]: https://github.com/lm-sensors/lm-sensors
[5]: https://nicolargo.github.io/glances/
[6]: https://itsfoss.com/pywal/
[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/glances-linux.png?resize=800%2C510&ssl=1
[8]: https://grafana.com/
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/glances-data-in-grafana-dashboard.jpg?resize=800%2C472&ssl=1
[10]: https://itsfoss.com/install-pip-ubuntu/
[11]: https://github.com/nicolargo/glances/blob/master/README.rst#installation
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-11-54-18.png?resize=800%2C503&ssl=1
[13]: https://glances.readthedocs.io/en/latest/cmds.html
[14]: https://glances.readthedocs.io/en/latest/quickstart.html#central-client
[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-16-49-11.png?resize=800%2C471&ssl=1
[16]: https://glances.readthedocs.io/en/stable/quickstart.html
[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-12-25-40.png?resize=800%2C448&ssl=1
[18]: https://prometheus.io/
[19]: https://glances.readthedocs.io/en/latest/gw/index.html
[20]: https://github.com/nicolargo/glances/wiki/The-Glances-RESTFULL-JSON-API
[21]: https://nodered.org/
[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/08/Screenshot-from-2020-08-13-17-49-41.png?resize=800%2C468&ssl=1
[23]: https://github.com/nicolargo/glances/wiki
[24]: https://pypi.org/project/psutil/