Merge remote-tracking branch 'LCTT/master'
This commit is contained in:
commit
98040e95a3
142
published/20200123 6 things you should be doing with Emacs.md
Normal file
@ -0,0 +1,142 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-12004-1.html)
|
||||
[#]: subject: (6 things you should be doing with Emacs)
|
||||
[#]: via: (https://opensource.com/article/20/1/emacs-cheat-sheet)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
6 件你应该用 Emacs 做的事
|
||||
======
|
||||
|
||||
> 下面六件事情你可能都没有意识到可以在 Emacs 下完成。此外还有我们的新备忘单,拿去,充分利用 Emacs 的功能吧。
|
||||
|
||||
![](https://img.linux.net.cn/data/attachment/album/202003/17/133738wjj66p2safcpc50z.jpg)
|
||||
|
||||
想象一下使用 Python 的 IDLE 界面来编辑文本。你可以将文件加载到内存中,编辑它们,并保存更改。但是你执行的每个操作都由 Python 函数定义。例如,调用 `upper()` 来让一个单词全部大写,调用 `open` 打开文件,等等。文本文档中的所有内容都是 Python 对象,可以进行相应的操作。从用户的角度来看,这与其他文本编辑器的体验一致。对于 Python 开发人员来说,这是一个丰富的 Python 环境,只需在配置文件中添加几个自定义函数就可以对其进行更改和开发。
|
||||
|
||||
这就是 [Emacs][2] 对 1958 年的编程语言 [Lisp][3] 所做的事情。在 Emacs 中,运行应用程序的 Lisp 引擎与输入文本之间无缝结合。对 Emacs 来说,一切都是 Lisp 数据,因此一切都可以通过编程进行分析和操作。
|
||||
|
||||
这造就了一个强大的用户界面(UI)。但是,如果你是 Emacs 的普通用户,你可能对它的能力知之甚少。下面是你可能没有意识到 Emacs 可以做的六件事。
|
||||
|
||||
### 使用 Tramp 模式进行云端编辑
|
||||
|
||||
Emacs 早在网络流行化之前就实现了透明的网络编辑能力了,而且时至今日,它仍然提供了最流畅的远程编辑体验。Emacs 中的 [Tramp 模式][4](以前称为 RPC 模式)代表着 “<ruby>透明的远程(文件)访问,多协议<rt>Transparent Remote (file) Access,Multiple Protocol</rt></ruby>”,这准确说明了它提供的功能:通过最流行的网络协议轻松访问你希望编辑的远程文件。目前最流行、最安全的能用于远程编辑的协议是 [OpenSSH][5],因此 Tramp 使用它作为默认的协议。
|
||||
|
||||
在 Emacs 22.1 或更高版本中已经包含了 Tramp,因此要使用 Tramp,只需使用 Tramp 语法打开一个文件。在 Emacs 的 “File” 菜单中,选择 “Open File”。当在 Emacs 窗口底部的小缓冲区中出现提示时,使用以下语法输入文件名:
|
||||
|
||||
```
|
||||
/ssh:user@example.com:/path/to/file
|
||||
```
|
||||
|
||||
如果需要交互式登录,Tramp 会提示你输入密码。但是,Tramp 直接使用 OpenSSH,所以为了避免交互提示,你可以将主机名、用户名和 SSH 密钥路径添加到你的 `~/.ssh/config` 文件。与 Git 一样,Emacs 首先使用你的 SSH 配置,只有在出现错误时才会停下来询问更多信息。
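下面是一个示意性的 `~/.ssh/config` 片段(其中的主机别名、用户名和密钥路径都只是假设的占位值),大致展示了这样的配置长什么样:

```
Host example.com
    HostName example.com
    User user
    IdentityFile ~/.ssh/id_rsa
```

配置好之后,再按上面的 `/ssh:user@example.com:/path/to/file` 语法打开文件时,Tramp 就能直接使用密钥完成登录,而不需要交互式地输入密码。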
|
||||
|
||||
Tramp 非常适合编辑并没有放在你的计算机上的文件,它的用户体验与编辑本地文件没有明显的区别。下次,当你 SSH 到服务器启动 Vim 或 Emacs 会话时,请尝试使用 Tramp。
|
||||
|
||||
### 日历
|
||||
|
||||
如果你喜欢文本多过图形界面,那么你一定会很高兴地知道,可以使用 Emacs 以纯文本的方式安排你的日程(或生活),而且你依然可以在移动设备上使用开源的 [Org 模式][6]查看器来获得华丽的通知。
|
||||
|
||||
这个过程需要一些配置,以创建一个方便的方式来与移动设备同步你的日程(我使用 Git,但你可以调用蓝牙、KDE Connect、Nextcloud,或其他文件同步工具),此外你必须在移动设备上安装一个 Org 模式查看器(如 [Orgzly][7])以及 Git 客户程序。但是,一旦你搭建好了这些基础,该流程就会与你常用的(或正在完善的,如果你是新用户)Emacs 工作流完美地集成在一起。你可以在 Emacs 中方便地查阅日程、更新日程,并专注于任务上。议程上的变化将会反映在移动设备上,因此即使在 Emacs 不可用的时候,你也可以保持井然有序。
|
||||
|
||||
![][8]
|
||||
|
||||
感兴趣了?阅读我的关于[使用 Org mode 和 Git 进行日程安排][9]的逐步指南。
|
||||
|
||||
### 访问终端
|
||||
|
||||
有[许多终端模拟器][10]可用。尽管 Emacs 中的 Elisp 终端仿真器不是最强大的通用仿真器,但是它有两个显著的优点:
|
||||
|
||||
1. **在 Emacs 缓冲区中打开**:我使用 Emacs 的 Elisp shell,因为它可以很方便地在 Emacs 窗口中打开,而我经常全屏运行该窗口。这是一个小而重要的优势,只需要输入 `Ctrl+x+o`(或用 Emacs 符号来表示就是 `C-x o`)就能切换到终端;它还有一个特别好的地方,就是在运行漫长的作业时可以随时瞥一眼状态报告。
|
||||
2. **在没有系统剪贴板的情况下复制和粘贴特别方便**:无论是因为懒惰不愿将手从键盘移动到鼠标,还是因为在远程控制台运行 Emacs 而无法使用鼠标,在 Emacs 中运行终端有时意味着可以从 Emacs 缓冲区中很快地传输数据到 Bash。
|
||||
|
||||
要尝试 Emacs 终端,输入 `Alt+x`(用 Emacs 符号表示就是 `M-x`),然后输入 `shell`,然后按回车。
|
||||
|
||||
### 使用 Racket 模式
|
||||
|
||||
[Racket][11] 是一种激动人心的新兴 Lisp 方言,拥有动态编程环境、GUI 工具包和充满激情的社区。学习 Racket 的默认编辑器是 DrRacket,它的顶部是定义面板,底部是交互面板。使用该设置,用户可以编写影响 Racket 运行时环境的定义。这有点像旧时的 [Logo Turtle][12] 程序,只不过多了一个终端,而不只是一只海龟。
|
||||
|
||||
![Racket-mode][13]
|
||||
|
||||
*由 PLT 提供的 LGPL 示例代码*
|
||||
|
||||
基于 Lisp 的 Emacs 为资深 Racket 编程人员提供了一个很好的集成开发环境(IDE)。它尚未附带 [Racket 模式][14],但你可以使用 Emacs 包安装程序安装 Racket 模式和辅助扩展。要安装它,按下 `Alt+x`(用 Emacs 符号表示就是 `M-x`),键入 `package-install`,然后按回车。接着输入要安装的包 `racket-mode`,按回车。
|
||||
|
||||
使用 `M-x racket-mode` 进入 Racket 模式。如果你是 Racket 新手,而对 Lisp 或 Emacs 比较熟悉,可以从这份优秀的[图解 Racket][15] 入手。
|
||||
|
||||
## 脚本
|
||||
|
||||
你可能知道,Bash 脚本在自动化和增强 Linux 或 Unix 体验方面很流行。你可能听说过 Python 在这方面也做得很好。但是你知道 Lisp 脚本可以用同样的方式运行吗?有时人们会对 Lisp 到底有多有用感到困惑,因为许多人是通过 Emacs 来了解 Lisp 的,因此有一种潜在的印象,即在 21 世纪运行 Lisp 的惟一方法是在 Emacs 中运行。幸运的是,事实并非如此,Emacs 是一个很好的 IDE,它支持将 Lisp 脚本作为一般的系统可执行文件来运行。
|
||||
|
||||
除了 Elisp 之外,还有两种流行的现代 Lisp 可以很容易地用来作为独立脚本运行。
|
||||
|
||||
1. **Racket**:你可以通过在系统上运行 Racket 来提供运行 Racket 脚本所需的运行时支持,或者你可以使用 `raco exe` 产生一个可执行文件。`raco exe` 命令将代码和运行时支持文件一起打包,以创建可执行文件。然后,`raco distribution` 命令将可执行文件打包成可以在其他机器上工作的发行版。Emacs 有许多 Racket 工具,因此在 Emacs 中创建 Racket 文件既简单又有效。
|
||||
2. **GNU Guile**:[GNU Guile][16](<ruby>GNU 通用智能语言扩展<rt>GNU Ubiquitous Intelligent Language for Extensions</rt></ruby> 的缩写)是 [Scheme][17] 编程语言的一个实现,它可以用于为桌面、互联网、终端等创建应用程序和游戏。Emacs 中的 Scheme 扩展众多,使用任何一个扩展来编写 Scheme 都很容易。例如,这里有一个用 Guile 编写的 “Hello world” 脚本:
|
||||
|
||||
```
|
||||
#!/usr/bin/guile -s
|
||||
!#
|
||||
|
||||
(display "hello world")
|
||||
(newline)
|
||||
```
|
||||
|
||||
用 `guile` 编译并运行它:
|
||||
|
||||
```
|
||||
$ guile ./hello.scheme
|
||||
;;; compiling /home/seth/./hello.scheme
|
||||
;;; compiled [...]/hello.scheme.go
|
||||
hello world
|
||||
$ guile ./hello.scheme
|
||||
hello world
|
||||
```
|
||||
|
||||
### 无需 Emacs 运行 Elisp
|
||||
|
||||
Emacs 可以作为 Elisp 的运行环境,但是你无需按照传统印象中的必须打开 Emacs 来运行 Elisp。`--script` 选项可以让你使用 Emacs 作为引擎来执行 Elisp 脚本,而无需运行 Emacs 图形界面(甚至也无需使用终端)。下面这个例子中,`-Q` 选项让 Emacs 忽略 `.emacs` 文件,从而避免由于执行 Elisp 脚本时产生延迟(若你的脚本依赖于 Emacs 配置中的内容,那么请忽略该选项)。
|
||||
|
||||
```
|
||||
emacs -Q --script ~/path/to/script.el
|
||||
```
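作为一个最小的示意性例子(文件名和输出内容都是虚构的),一个可以用上述方式运行的 Elisp 脚本大致如下:

```
;; script.el:一个极简的 Elisp 脚本(仅作示意)
(princ "hello from Elisp\n")              ; princ 将文本写到标准输出
(princ (format "2 + 2 = %d\n" (+ 2 2)))   ; 普通的 Elisp 求值照常可用
```

把它保存为 `script.el` 之后,用上面的 `emacs -Q --script` 命令就可以像普通脚本一样在命令行里执行它。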
|
||||
|
||||
### 下载 Emacs 备忘录
|
||||
|
||||
Emacs 许多重要功能都不是只能通过 Emacs 来实现的;Org 模式既是 Emacs 扩展,也是一种格式标准;流行的 Lisp 方言大多不依赖于具体的应用;我们甚至可以在没有可见或可交互的 Emacs 实例的情况下编写和运行 Elisp。因此,如果你对模糊代码和数据之间的界限为什么能够激发创新、提升效率感到好奇,那么 Emacs 是一个很棒的探索工具。
|
||||
|
||||
幸运的是,现在是 21 世纪,Emacs 有了带有传统菜单的图形界面以及大量的文档,因此学习曲线不再像以前那样陡峭。然而,要让 Emacs 发挥最大的价值,你需要学习它的快捷键。由于 Emacs 支持的每个任务都是一个 Elisp 函数,Emacs 中的任何功能都可以对应一个快捷键,因此要罗列所有这些快捷键是不可能完成的任务。好在你只需要学会自己最常用的那一小部分快捷键即可。
|
||||
|
||||
我们把最常用的 Emacs 快捷键汇总成了一份备忘录,以便你随时查阅。把它挂在屏幕旁边或办公室墙上,甚至当成鼠标垫也行。让它触手可及,经常翻阅。每天看上几眼,学习效果会成倍提升。而且一旦你开始编写自己的函数,就一定不会后悔下载了这份免费的备忘录!
|
||||
|
||||
- [这里下载 Emacs 备忘录(需注册)](https://opensource.com/downloads/emacs-cheat-sheet)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
via: https://opensource.com/article/20/1/emacs-cheat-sheet
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png
|
||||
[2]: https://www.gnu.org/software/emacs/
|
||||
[3]: https://en.wikipedia.org/wiki/Lisp_(programming_language)
|
||||
[4]: https://www.gnu.org/software/tramp/
|
||||
[5]: https://www.openssh.com/
|
||||
[6]: https://orgmode.org/
|
||||
[7]: https://f-droid.org/en/packages/com.orgzly/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/orgzly-agenda.jpg
|
||||
[9]: https://linux.cn/article-11320-1.html
|
||||
[10]: https://linux.cn/article-11814-1.html
|
||||
[11]: http://racket-lang.org/
|
||||
[12]: https://en.wikipedia.org/wiki/Logo_(programming_language)#Turtle_and_graphics
|
||||
[13]: https://opensource.com/sites/default/files/racket-mode.jpg
|
||||
[14]: https://www.racket-mode.com/
|
||||
[15]: https://docs.racket-lang.org/quick/index.html
|
||||
[16]: https://www.gnu.org/software/guile/
|
||||
[17]: https://en.wikipedia.org/wiki/Scheme_(programming_language)
|
@ -0,0 +1,82 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Foundation prepares for disaster, new anti-tracking data set, Mozilla goes back to mobile OSes, and more open source news)
|
||||
[#]: via: (https://opensource.com/article/20/3/news-march-14)
|
||||
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
|
||||
|
||||
Linux Foundation prepares for disaster, new anti-tracking data set, Mozilla goes back to mobile OSes, and more open source news
|
||||
======
|
||||
Catch up on the biggest open source headlines from the past two weeks.
|
||||
![][1]
|
||||
|
||||
In this edition of our open source news roundup, we take a look at the Linux Foundation's disaster relief project, DuckDuckGo's anti-tracking tool, open textbooks, and more!
|
||||
|
||||
### Linux Foundation unveils Project OWL
|
||||
|
||||
When a disaster happens, it's vital to keep communications links up and running. One way to do that is with [mesh networks][2]. The Linux Foundation [has unveiled][3] Project OWL to "help build mesh network nodes for global emergency communications networks."
|
||||
|
||||
Short for _Organisation, Whereabouts, and Logistics_, OWL is firmware for Internet of Things (IoT) devices that "can quickly turn a cheap wireless device into a ‘DuckLink’, a mesh network node". Those devices can connect to other, similar devices around them. OWL also provides an analytics tool that responders can use for "coordinating resources, learning about weather patterns, and communicating with civilians who would otherwise be cut off."
|
||||
|
||||
### New open source tool to block web trackers
|
||||
|
||||
It's no secret that sites all over the web track their visitors. Often, it's shocking how much of that goes on and what a threat to your privacy that is. To help web browser developers better protect their users, the team behind search engine DuckDuckGo is "[sharing data it's collected about online trackers with other companies so they can also protect your privacy][4]."
|
||||
|
||||
That dataset is called Tracker Radar and it "details 5,326 internet domains used by 1,727 companies and organizations that track you online". Tracker Radar is different from other tracker databases in that it "annotates data with other information, like whether blocking a tracker is likely to break a website, so anyone using it can pick the best balance of privacy and convenience."
|
||||
|
||||
Tracker Radar's dataset is [available on GitHub][5]. The repository also links to the code for the [crawler][6] and [detector][7] that work with the data.
|
||||
|
||||
### Oregon Tech embracing open textbooks
|
||||
|
||||
With the cost of textbooks taking an increasingly large bite out of the budgets of university students, more and more schools are turning to open textbooks to cut those costs. By embracing open textbooks, the Oregon Institute of Technology has [saved students $400,000][8] over the last two years.
|
||||
|
||||
The school offers open textbooks for 26 courses, ranging "from chemistry and biology, to respiratory care, sociology and engineering." Although the textbooks are free, university librarian John Schoppert points out that the materials are of a high quality and that faculty members have been "developing lab manuals and open-licensed textbooks where they hadn’t existed before and improved on others’ materials."
|
||||
|
||||
### Mozilla to help update feature phone OS
|
||||
|
||||
A few years ago, Mozilla tried to break into the world of mobile operating systems with Firefox OS. While that effort didn't pan out, Firefox OS found new life powering low-cost feature phones under the name KaiOS. Mozilla's [jumping back into the game][9] by helping "modernize the browser engine that's core to the software."
|
||||
|
||||
KaiOS is built upon a four-year-old version of Mozilla's Gecko browser engine. Updating Gecko will "improve security, make apps run faster and more smoothly, and open [KaiOS to] more-sophisticated apps and WebGL 2.0 for better games graphics." Mozilla said its collaboration will include "Mozilla's help with test engineering and adding new low-level Gecko abilities."
|
||||
|
||||
#### In other news
|
||||
|
||||
* [CERN adopts Mattermost, an open source messaging app][10]
|
||||
* [Open-source software analyzes economics of biofuels, bioproducts][11]
|
||||
* [Netflix releases Dispatch for crisis management orchestration][12]
|
||||
* [FreeNAS and TrueNAS are merging][13]
|
||||
* [Smithsonian 3D Scans NASA Space Shuttle Discovery And Makes It Open Source][14]
|
||||
|
||||
|
||||
|
||||
Thanks, as always, to Opensource.com staff members and [Correspondents][15] for their help this week.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/news-march-14
|
||||
|
||||
作者:[Scott Nesbitt][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/weekly_news_roundup_tv.png?itok=tibLvjBd
|
||||
[2]: https://en.wikipedia.org/wiki/Mesh_networking
|
||||
[3]: https://www.smartcitiesworld.net/news/news/linux-announces-open-source-project-to-aid-disaster-relief-5102
|
||||
[4]: https://www.cnet.com/news/privacy-focused-duckduckgo-launches-new-effort-to-block-online-tracking/
|
||||
[5]: https://github.com/duckduckgo/tracker-radar
|
||||
[6]: https://github.com/duckduckgo/tracker-radar-collector
|
||||
[7]: https://github.com/duckduckgo/tracker-radar-detector
|
||||
[8]: https://www.heraldandnews.com/news/local_news/oregon-tech-turns-to-open-source-materials-to-save-students/article_ba641e79-3034-5b9a-a8f7-b5872ddc998e.html
|
||||
[9]: https://www.cnet.com/news/mozilla-helps-modernize-feature-phones-powered-by-firefox-tech/
|
||||
[10]: https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/cern-uses-mattermost
|
||||
[11]: http://www.biomassmagazine.com/articles/16848/open-source-software-analyzes-economics-of-biofuels-bioproducts
|
||||
[12]: https://jaxenter.com/netflix-dispatch-crisis-management-orchestration-169381.html
|
||||
[13]: https://liliputing.com/2020/03/freenas-and-turenas-are-merging-open-source-operating-systems-for-network-attached-storage.html
|
||||
[14]: https://www.forbes.com/sites/tjmccue/2020/03/04/smithsonian-3d-scans-the-nasa-space-shuttle-discovery-and-makes-it-open-source/#39aa0f243ecd
|
||||
[15]: https://opensource.com/correspondent-program
|
@ -0,0 +1,81 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (As the networks evolve enterprises need to rethink network security)
|
||||
[#]: via: (https://www.networkworld.com/article/3531929/as-the-network-evolves-enterprises-need-to-rethink-security.html)
|
||||
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)
|
||||
|
||||
As the networks evolve, enterprises need to rethink network security
|
||||
======
|
||||
Q&A: John Maddison, executive vice president of products for network security vendor Fortinet, discusses how to deal with network security in the digital era.
|
||||
D3Damon / Getty Images
|
||||
|
||||
_Digital innovation is disrupting businesses. Data and applications are at the hub of new business models, and data needs to travel across the extended network at increasingly high speeds without interruption. To make this possible, organizations are radically redesigning their networks by adopting multi-cloud environments, building hyperscale data centers, retooling their campuses, and designing new connectivity systems for their next-gen branch offices. Networks are faster than ever before, more agile and software-driven. They're also increasingly difficult to secure. To understand the challenges and how security needs to change, I recently talked with John Maddison, executive vice president of products for network security vendor Fortinet._
|
||||
|
||||
**ZK: As the speed and scale of data escalate, how do the challenges to secure it change?**
|
||||
|
||||
JM: Security platforms were designed to provide things like enhanced visibility, control, and performance by monitoring and managing the perimeter. But the traditional perimeter has shifted from being a very closely monitored, single access point to a highly dynamic and flexible environment that has not only expanded outward but inward, into the core of the network as well.
|
||||
|
||||
**[ Also see [What to consider when deploying a next generation firewall][1]. | Get regularly scheduled insights by [signing up for Network World newsletters][2]. ]**
|
||||
|
||||
**READ MORE:** [The VPN is dying, long live zero trust][3]
|
||||
|
||||
Today's perimeter not only includes multiple access points, the campus, the WAN, and the cloud, but also IoT, mobile, and virtual devices that are generating data, communicating with data centers and manufacturing floors, and literally creating thousands of new edges inside an organization. And with this expanded perimeter, there are a lot more places for attacks to get in. To address this new attack surface, security has to move from being a standalone perimeter solution to being fully integrated into the network.
|
||||
|
||||
This convergence of security and networking needs to cover SD-WAN, VPN, Wi-Fi controllers, switching infrastructures, and data center environments – something we call security-driven networking. As we see it, security-driven networking is an essential approach for ensuring that security and networking are integrated together into a single system so that whenever the networking infrastructure evolves or expands, security automatically adapts as an integrated part of that environment. And it needs to do this by providing organizations with a new suite of security solutions, including network segmentation, dynamic multi-cloud controls, and [zero-trust network access][3]. And because of the speed of digital operations and the sophistication of today's attacks, this new network-centric security strategy also needs to be augmented with AI-driven security operations.
|
||||
|
||||
The perimeter security devices that have been on the market weren't really built to run as part of the internal network, and when you put them there, they become bottlenecks. Customers don't put these traditional security devices in the middle of their networks because they just can't run fast enough. But the result is an open network environment that can become a playground for criminals that manage to breach perimeter defenses. It's why the dwell time for network malware is over six months.
|
||||
|
||||
**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][4] ]**
|
||||
|
||||
As you combine networking applications, networking functionality, and security applications together to address this challenge, you absolutely need a different performance architecture. This can't be achieved using the traditional hardware most security platforms rely on.
|
||||
|
||||
**ZK: Why can't traditional security devices secure the internal network?**
|
||||
|
||||
JM: They simply aren't fast enough. And the ones that come close are prohibitively expensive… For example, internal segmentation not only enables organizations to see and separate all of the devices on their network but also dynamically create horizontal segments that support and secure applications and automated workflows that need to travel across the extended network. Inside the network, you're running at 100 gigs, 400 gigs, that sort of thing. But the interface for a lot of security systems today is just 10 gigs. Even with multiple ports, the device can't handle much more than that without having to spend a fortune… In order to handle today's capacity and performance demands, security needs to be done at network speeds that most security solutions cannot support without specialized content processors.
|
||||
|
||||
**ZK: Hyperscale data centers have been growing steadily. What sort of additional security challenges do these environments face?**
|
||||
|
||||
JM: Hyperscale architectures are being used to move and process massive amounts of data. A lot of the times, research centers will need to send a payload of over 10 gigabytes – one packet that's 10 gigabytes – to support advanced rendering and modeling projects. Most firewalls today cannot process these large payloads, also known as elephant flows. Instead, they often compromise on their security to let them flow through. Other hyperscale environment examples include financial organizations that need to process transactions with sub-second latency or online gaming providers that need to support massive numbers of connections per second while maintaining high user experience. … [Traditional security platforms] will never be able to secure hyperscale environments, or even worse, the next generation of ultra-fast converged networks that rely on hyperscale and hyperconnectivity to run things like smart cities or smart infrastructures, until they fundamentally change their hardware.
|
||||
|
||||
**ZK: Do these approaches introduce new risks or increase the existing risk for these organizations?**
|
||||
|
||||
JM: They do both. As the attack surface expands, existing risks often get multiplied across the network. We actually see more exploits in the wild targeting older vulnerabilities than new ones. But cybercriminals are also building new tools designed to exploit cloud environments and modern data centers. They are targeting mobile devices and exploiting IoT vulnerabilities. Some of these attacks are simply revisions of older, tried and true exploits. But many are new and highly sophisticated. We are also seeing new attacks that use machine learning and rely on AI enhancements to better bypass security and evade detection.
|
||||
|
||||
To address this challenge, security platforms need to be broad, integrated, and automated.
|
||||
|
||||
Broad security platforms come in a variety of form factors so they can be deployed everywhere across the expanding network. Physical hardware enhancements, such as our [security processing units], enable security platforms to be effectively deployed inside high-performance networks, including hyperscale data centers and SD-WAN environments. And virtualized versions need to support private cloud environments as well as all major cloud providers through thorough cloud-native integration.
|
||||
|
||||
Next, these security platforms need to be integrated. The security components built into a security platform need to work together as a single solution – not the sort of loose affiliation most platforms provide – to enable extremely fast threat intelligence collection, correlation, and response. That security platform also needs to support common standards and APIs so third-party tools can be added and supported. And finally, these platforms need to be able to work together, regardless of their location or form factor, to create a single, unified security fabric. It's important to note that many cloud providers have developed their own custom hardware, such as Google's TPU, Amazon's Inferentia, and Microsoft's Corsica, to accelerate cloud functions. As a result, hardware acceleration on physical security platforms is essential to ensure consistent performance for data moving between physical and cloud environments.
|
||||
|
||||
And finally, security platforms need to be automated. Support for automated workflows and AI-enhanced security operations can significantly accelerate the speed of threat detection, analysis, and response. But like other processing-intensive functions, such as decrypting traffic for deep inspection, these functions also need specialized and purpose-built processors or they will become innovation-killing bottlenecks.
|
||||
|
||||
**ZK: What's next for network security?**
|
||||
|
||||
JM: This is just the start. As networking functions begin to converge even further, creating the next generation of smart environments – smart buildings, smart cities, and smart critical infrastructures – the lack of viable security tools capable of inspecting and protecting these hyperfast, hyperconnected, and hyper-scalable environments will seriously impact our digital economy and way of life.
|
||||
|
||||
Security vendors need to understand this challenge and begin investing now in developing advanced hardware and security-driven networking technologies. Organizations aren't waiting for vendors to catch up so they can secure their networks of tomorrow. Their networks are being left exposed right now because the software-based security solutions they have in place are just not adequate. And it's up to the security industry to step up and solve this challenge.
|
||||
|
||||
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3531929/as-the-network-evolves-enterprises-need-to-rethink-security.html
|
||||
|
||||
作者:[Zeus Kerravala][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Zeus-Kerravala/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
|
||||
[2]: https://www.networkworld.com/newsletters/signup.html
|
||||
[3]: https://www.networkworld.com/article/3487720/the-vpn-is-dying-long-live-zero-trust.html
|
||||
[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,243 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with shaders: signed distance functions!)
|
||||
[#]: via: (https://jvns.ca/blog/2020/03/15/writing-shaders-with-signed-distance-functions/)
|
||||
[#]: author: (Julia Evans https://jvns.ca/)
|
||||
|
||||
Getting started with shaders: signed distance functions!
|
||||
======
|
||||
|
||||
Hello! A while back I learned how to make fun shiny spinny things like this using shaders:
|
||||
|
||||
![][1]
|
||||
|
||||
My shader skills are still extremely basic, but this fun spinning thing turned out to be a lot easier to make than I thought it would be (with a lot of copying of code snippets from other people!).
|
||||
|
||||
The big idea I learned when doing this was something called “signed distance functions”, which I learned about from a very fun tutorial called [Signed Distance Function tutorial: box & balloon][2].
|
||||
|
||||
In this post I’ll go through the steps I used to learn to write a simple shader and try to convince you that shaders are not that hard to get started with!
|
||||
|
||||
### examples of more advanced shaders
|
||||
|
||||
If you haven’t seen people do really fancy things with shaders, here are a couple:
|
||||
|
||||
1. this very complicated shader that is like a realistic video of a river: <https://www.shadertoy.com/view/Xl2XRW>
|
||||
2. a more abstract (and shorter!) fun shader with a lot of glowing circles: <https://www.shadertoy.com/view/lstSzj>
|
||||
|
||||
|
||||
|
||||
### step 1: my first shader
|
||||
|
||||
I knew that you could make shaders on shadertoy, and so I went to <https://www.shadertoy.com/new>. They give you a default shader to start with that looks like this:
|
||||
|
||||
![][3]
|
||||
|
||||
Here’s the code:
|
||||
|
||||
```
|
||||
void mainImage( out vec4 fragColor, in vec2 fragCoord )
|
||||
{
|
||||
// Normalized pixel coordinates (from 0 to 1)
|
||||
vec2 uv = fragCoord/iResolution.xy;
|
||||
|
||||
// Time varying pixel color
|
||||
vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
|
||||
|
||||
// Output to screen
|
||||
fragColor = vec4(col,1.0);
|
||||
}
|
||||
```
|
||||
|
||||
This doesn’t do anything that exciting, but it already taught me the basic structure of a shader program!
|
||||
|
||||
### the idea: map a pair of coordinates (and time) to a colour
|
||||
|
||||
The idea here is that you get a pair of coordinates as an input (`fragCoord`) and you need to output a RGBA vector with the colour of that. The function can also use the current time (`iTime`), which is how the picture changes over time.
|
||||
|
||||
The neat thing about this programming model (where you map a pair of coordinates and the time to a colour) is that it’s extremely trivially parallelizable. I don’t understand a lot about GPUs but my understanding is that this kind of task (where you have 10000 trivially parallelizable calculations to do at once) is exactly the kind of thing GPUs are good at.
|
||||
|
||||
### step 2: iterate faster with `shadertoy-render`
|
||||
|
||||
After a while of playing with shadertoy, I got tired of having to click “recompile” on the Shadertoy website every time I saved my shader.
|
||||
|
||||
I found a command line tool that will watch a file and update the animation in real time every time I save called [shadertoy-render][4]. So now I can just run:
|
||||
|
||||
```
|
||||
shadertoy-render.py circle.glsl
|
||||
```
|
||||
|
||||
and iterate way faster!
|
||||
|
||||
### step 3: draw a circle
|
||||
|
||||
Next I thought – I’m good at math! I can use some basic trigonometry to draw a bouncing rainbow circle!
|
||||
|
||||
I know the equation for a circle (`x**2 + y**2 = whatever`!), so I wrote some code to do that:
|
||||
|
||||
![][5]
|
||||
|
||||
Here’s the code: (which you can also [see on shadertoy][6])
|
||||
|
||||
```
|
||||
void mainImage( out vec4 fragColor, in vec2 fragCoord )
|
||||
{
|
||||
// Normalized pixel coordinates (from 0 to 1)
|
||||
vec2 uv = fragCoord/iResolution.xy;
|
||||
// Draw a circle whose center depends on what time it is
|
||||
vec2 shifted = uv - vec2((sin(iGlobalTime) + 1)/2, (1 + cos(iGlobalTime)) / 2);
|
||||
if (dot(shifted, shifted) < 0.03) {
|
||||
// Varying pixel colour
|
||||
vec3 col = 0.5 + 0.5*cos(iGlobalTime+uv.xyx+vec3(0,2,4));
|
||||
fragColor = vec4(col,1.0);
|
||||
} else {
|
||||
// make everything outside the circle black
|
||||
fragColor = vec4(0,0,0,1.0);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This takes the dot product of the shifted coordinate vector `shifted` with itself, which is the same as calculating `x^2 + y^2`. I played with the center of the circle a little bit in this one too – I made the center `vec2((sin(iGlobalTime) + 1)/2, (1 + cos(iGlobalTime)) / 2)`, which means that the center of the circle also goes in a circle depending on what time it is.
|
||||
|
||||
### shaders are a fun way to play with math!
|
||||
|
||||
One thing I think is fun about this already (even though we haven’t done anything super advanced!) is that these shaders give us a fun visual way to play with math – I used `sin` and `cos` to make something go in a circle, and if you want to get some better intuition about how trigonometric functions work, maybe writing shaders would be a fun way to do that!
|
||||
|
||||
I love that you get instant visual feedback about your math code – if you multiply something by 2, things get bigger! or smaller! or faster! or slower! or more red!
|
||||
|
||||
### but how do we do something really fancy?
|
||||
|
||||
This bouncing circle is nice but it’s really far from the super fancy things I’ve seen other people do with shaders. So what’s the next step?
|
||||
|
||||
### idea: instead of using if statements, use signed distance functions!
|
||||
|
||||
In my circle code above, I basically wrote:
|
||||
|
||||
```
|
||||
if (dot(uv, uv) < 0.03) {
|
||||
// code for inside the circle
|
||||
} else {
|
||||
// code for outside the circle
|
||||
}
|
||||
```
|
||||
|
||||
But the problem with this (and the reason I was feeling stuck) is that it’s not clear how it generalizes to more complicated shapes! Writing a bajillion if statements doesn’t seem like it would work well. And how do people render those 3d shapes anyway?
|
||||
|
||||
So! **Signed distance functions** are a different way to define a shape. Instead of using a hardcoded if statement, instead you define a **function** that tells you, for any point in the world, how far away that point is from your shape. For example, here’s a signed distance function for a sphere.
|
||||
|
||||
```
|
||||
float sdSphere( vec3 p, float center )
|
||||
{
|
||||
return length(p)-center;
|
||||
}
|
||||
```
|
||||
|
||||
Signed distance functions are awesome because they’re:
|
||||
|
||||
* simple to define!
|
||||
* easy to compose! You can take a union / intersection / difference with some simple math if you want a sphere with a chunk taken out of it (see the sketch after this list).
|
||||
* easy to rotate / stretch / bend!
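For example, here's a minimal sketch of that kind of composition (the function names and offsets here are made up, following the conventions of the distance-functions page linked later in this post): taking the `min` of two distances gives you the union of two shapes, `max` gives the intersection, and `max(a, -b)` carves one shape out of the other.

```
// hypothetical sketch: combining two sphere SDFs
float sdSphere( vec3 p, float radius )
{
    return length(p) - radius;  // distance from p to the sphere's surface
}

float sdTwoSpheres( vec3 p )
{
    float a = sdSphere(p - vec3(-0.3, 0.0, 0.0), 0.4);  // sphere shifted left
    float b = sdSphere(p - vec3( 0.3, 0.0, 0.0), 0.4);  // sphere shifted right
    return min(a, b);  // min = union; max(a, b) = intersection; max(a, -b) = difference
}
```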
|
||||
|
||||
|
||||
|
||||
### the steps to making a spinning top
|
||||
|
||||
When I started out I didn’t understand what code I needed to write to make a shiny spinning thing. It turns out that these are the basic steps:
|
||||
|
||||
1. Make a signed distance function for the shape I want (in my case an octahedron)
|
||||
2. Raytrace the signed distance function so you can display it in a 2D picture (or raymarch? The tutorial I used called it raytracing and I don’t understand the difference between raytracing and raymarching yet)
|
||||
3. Write some code to texture the surface of your shape and make it shiny
|
||||
|
||||
|
||||
|
||||
I’m not going to explain signed distance functions or raytracing in detail in this post because I found this [AMAZING tutorial on signed distance functions][2] that is very friendly and honestly it does a way better job than I could do. It explains how to do the 3 steps above and the code has a ton of comments and it’s great.
|
||||
|
||||
* The tutorial is called “SDF Tutorial: box & balloon” and it’s here: <https://www.shadertoy.com/view/Xl2XWt>
|
||||
* Here are tons of signed distance functions that you can copy and paste into your code <http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm> (and ways to compose them to make other shapes)
|
||||
|
||||
|
||||
|
||||
### step 4: copy the tutorial code and start changing things
|
||||
|
||||
Here I used the time-honoured programming practice of “copy the code and change things in a chaotic way until I get the result I want”.
|
||||
|
||||
My final shader of a bunch of shiny spinny things is here: <https://www.shadertoy.com/view/wdlcR4>
|
||||
|
||||
The animation comes out looking like this:
|
||||
|
||||
![][7]
|
||||
|
||||
Basically to make this I just copied the tutorial on signed distance functions that renders the shape based on the signed distance function and:
|
||||
|
||||
* changed `sdfBalloon` to `sdfOctahedron` and made the octahedron spin instead of staying still in my signed distance function
|
||||
* changed the `doBalloonColor` colouring function to make it shiny
|
||||
* made there be lots of octahedrons instead of just one
|
||||
|
||||
|
||||
|
||||
### making the octahedron spin!
|
||||
|
||||
Here’s the code I used to make the octahedron spin! This turned out to be really simple: I first copied an octahedron signed distance function from [this page][8], then added a `rotate` to make it rotate based on time, and suddenly it’s spinning!
|
||||
|
||||
```
|
||||
vec2 sdfOctahedron( vec3 currentRayPosition, vec3 offset ){
|
||||
vec3 p = rotate((currentRayPosition), offset.xy, iTime * 3.0) - offset;
|
||||
float s = 0.1; // what is s?
|
||||
p = abs(p);
|
||||
float distance = (p.x+p.y+p.z-s)*0.57735027;
|
||||
float id = 1.0;
|
||||
return vec2( distance, id );
|
||||
}
|
||||
```
|
||||
|
||||
### making it shiny with some noise
|
||||
|
||||
The other thing I wanted to do was to make my shape look sparkly/shiny. I used a noise function that I found in [this github gist][9] to make the surface look textured.
|
||||
|
||||
Here’s how I used the noise function. Basically I just changed parameters to the noise function mostly at random (multiply by 2? 3? 1800? who knows!) until I got an effect I liked.
|
||||
|
||||
```
|
||||
float x = noise(rotate(positionOfHit, vec2(0, 0), iGlobalTime * 3.0).xy * 1800.0);
|
||||
float x2 = noise(lightDirection.xy * 400.0);
|
||||
float y = min(max(x, 0.0), 1.0);
|
||||
float y2 = min(max(x2, 0.0), 1.0) ;
|
||||
vec3 balloonColor = vec3(y , y + y2, y + y2);
|
||||
```
|
||||
|
||||
### writing shaders is fun!
|
||||
|
||||
That’s all! I had a lot of fun making this thing spin and be shiny. If you also want to make fun animations with shaders, I hope this helps you make your cool thing!
|
||||
|
||||
As usual with subjects I don’t know that well, I’ve probably said at least one wrong thing about shaders in this post – let me know what it is!
|
||||
|
||||
Again, here are the 2 resources I used:
|
||||
|
||||
1. “SDF Tutorial: box & balloon”: <https://www.shadertoy.com/view/Xl2XWt> (which is really fun to modify and play around with)
|
||||
2. Tons of signed distance functions that you can copy and paste into your code <http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm>
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://jvns.ca/blog/2020/03/15/writing-shaders-with-signed-distance-functions/
|
||||
|
||||
作者:[Julia Evans][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://jvns.ca/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://jvns.ca/images/spinny.gif
|
||||
[2]: https://www.shadertoy.com/view/Xl2XWt
|
||||
[3]: https://jvns.ca/images/colour.gif
|
||||
[4]: https://github.com/alexjc/shadertoy-render
|
||||
[5]: https://jvns.ca/images/circle.gif
|
||||
[6]: https://www.shadertoy.com/view/tsscR4
|
||||
[7]: https://jvns.ca/images/octahedron2.gif
|
||||
[8]: http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm
|
||||
[9]: https://gist.github.com/patriciogonzalezvivo/670c22f3966e662d2f83
|
@ -0,0 +1,288 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to test failed authentication attempts with test-driven development)
|
||||
[#]: via: (https://opensource.com/article/20/3/failed-authentication-attempts-tdd)
|
||||
[#]: author: (Alex Bunardzic https://opensource.com/users/alex-bunardzic)
|
||||
|
||||
How to test failed authentication attempts with test-driven development
|
||||
======
|
||||
Mountebank makes it easier to test the "less happy path" in your code.
|
||||
![Programming keyboard.][1]
|
||||
|
||||
Testing often begins with what we hope happens. In my [previous article][2], I demonstrated how to virtualize a service you depend on when processing the "happy path" scenario (that is, testing the outcome of a successful login attempt). But we all know that software fails in spectacular and unexpected ways. Now's the time to take a closer look into how to process the "less happy paths": what happens when someone tries to log in with the wrong credentials?
|
||||
|
||||
In the first article linked above, I walked through building a user authentication module. (Now is a good time to review that code and get it up and running.) This module does not do all the heavy lifting; it mostly relies on another service to do those tougher tasks—enable user registration, store the user accounts, and authenticate the users. The module will only be sending HTTP POST requests to this additional service's endpoint; in this case, **/api/v1/users/login**.
|
||||
|
||||
What do you do if the service you're dependent on hasn't been built yet? This scenario creates a blockage. In the previous post, I explored how to remove that blockage by using service virtualization enabled by [mountebank][3], a powerful test environment.
|
||||
|
||||
This article walks through the steps required to enable the processing of user authentication in cases when a user repeatedly attempts to log in. The third-party authentication service allows only three attempts to log in, after which it ceases to service the HTTP request arriving from the offending domain.
|
||||
|
||||
### How to simulate repeat requests
|
||||
|
||||
Mountebank makes it very easy to simulate a service that listens on a network port, matches the method and the path defined in the request, then handles it by sending back an HTTP response. To follow along, be sure to get mountebank running as we [did in the previous article][2]. As I explained there, these values are declared as JSONs that are posted to **<http://localhost:2525/imposters>**, mountebank's endpoint for processing authentication requests.
|
||||
|
||||
But the challenge now is how to simulate the scenario when the HTTP request keeps hitting the same endpoint from the same domain. This is necessary to simulate a user who submits invalid credentials (username and password), is informed they are invalid, tries different credentials, and is repeatedly rejected (or foolishly attempts to log in with the same credentials that failed on previous attempts). Eventually (in this case, after a third failed attempt), the user is barred from additional tries.
|
||||
|
||||
Writing executable code to simulate such a scenario would have to model very elaborate processing. However, when using mountebank, this type of simulated processing is extremely simple to accomplish. It is done by creating a rolling buffer of responses, and mountebank responds in the order the buffer was created. Here is an example of one way to simulate repeat requests in mountebank:
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
"port": 3001,
|
||||
"protocol": "http",
|
||||
"name": "authentication imposter",
|
||||
"stubs": [
|
||||
{
|
||||
"predicates": [
|
||||
{
|
||||
"equals": {
|
||||
"method": "post",
|
||||
"path": "/api/v1/users/login"
|
||||
}
|
||||
}
|
||||
],
|
||||
"responses": [
|
||||
{
|
||||
"is": {
|
||||
"statusCode": 200,
|
||||
"body": "Successfully logged in."
|
||||
}
|
||||
},
|
||||
{
|
||||
"is": {
|
||||
"statusCode": 400,
|
||||
"body": "Incorrect login. You have 2 more attempts left."
|
||||
}
|
||||
},
|
||||
{
|
||||
"is": {
|
||||
"statusCode": 400,
|
||||
"body": "Incorrect login. You have 1 more attempt left."
|
||||
}
|
||||
},
|
||||
{
|
||||
"is": {
|
||||
"statusCode": 400,
|
||||
"body": "Incorrect login. You have no more attempts left."
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The rolling buffer is simply an unlimited collection of JSON responses where each response is represented with two key-value pairs: **statusCode** and **body**. In this case, four responses are defined. The first response is the happy path (i.e., user successfully logged in), and the remaining three responses represent failed use cases (i.e., wrong credentials result in status code 400 and corresponding error messages).
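If you save the JSON above in a file (say, **imposter.json**; the name is just an example), you can load it into mountebank the same way as in the previous article, by POSTing it to the **/imposters** endpoint:

```
$ curl -X POST -H "Content-Type: application/json" -d @imposter.json http://localhost:2525/imposters
```

Once the imposter is loaded, each request hitting **/api/v1/users/login** on port 3001 is answered with the next response in the buffer.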
|
||||
|
||||
### How to test repeat requests
|
||||
|
||||
Modify the tests as follows:
|
||||
|
||||
|
||||
```
|
||||
using System;
|
||||
using Xunit;
|
||||
using app;
|
||||
namespace tests
|
||||
{
|
||||
public class UnitTest1
|
||||
{
|
||||
Authenticate auth = new Authenticate();
|
||||
[Fact]
|
||||
public void SuccessfulLogin()
|
||||
{
|
||||
var given = "valid credentials";
|
||||
var expected = " Successfully logged in.";
|
||||
var actual= auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
[Fact]
|
||||
public void FirstFailedLogin()
|
||||
{
|
||||
var given = "invalid credentials";
|
||||
var expected = "Incorrect login. You have 2 more attempts left.";
|
||||
var actual = auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
[Fact]
|
||||
public void SecondFailedLogin()
|
||||
{
|
||||
var given = "invalid credentials";
|
||||
var expected = "Incorrect login. You have 1 more attempt left.";
|
||||
var actual = auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
[Fact]
|
||||
public void ThirdFailedLogin()
|
||||
{
|
||||
var given = " invalid credentials";
|
||||
var expected = "Incorrect login. You have no more attempts left.";
|
||||
var actual = auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Now, run the tests to confirm that your code still works:
|
||||
|
||||
![Failed test][5]
|
||||
|
||||
Whoa! The tests now all fail. Why?
|
||||
|
||||
If you take a closer look, you'll see a revealing pattern:
|
||||
|
||||
![Reason for failed test][6]
|
||||
|
||||
Notice that ThirdFailedLogin is executed first, followed by the SuccessfulLogin, followed by FirstFailedLogin, followed by SecondFailedLogin. What's going on here? Why is the third test running before the first test?
|
||||
|
||||
The testing framework ([xUnit][7]) is executing all tests in parallel, and the sequence of execution is unpredictable. You need tests to run in order, which means you cannot test these scenarios using the vanilla xUnit toolkit.
|
||||
|
||||
### How to run tests in the right sequence
|
||||
|
||||
To force your tests to run in a certain sequence that you define (instead of running in an unpredictable order), you need to extend the vanilla xUnit toolkit with the NuGet [Xunit.Extensions.Ordering][8] package. Install the package on the command line with:
|
||||
|
||||
|
||||
```
|
||||
$ dotnet add package Xunit.Extensions.Ordering --version 1.4.5
|
||||
```
|
||||
|
||||
or add it to your **tests.csproj** config file:
|
||||
|
||||
|
||||
```
|
||||
<PackageReference Include="Xunit.Extensions.Ordering" Version="1.4.5" />
|
||||
```
|
||||
|
||||
Once that's taken care of, make some modifications to your **./tests/UnitTests1.cs** file. Add these four lines at the beginning of your **UnitTests1.cs** file:
|
||||
|
||||
|
||||
```
|
||||
using Xunit.Extensions.Ordering;
|
||||
[assembly: CollectionBehavior(DisableTestParallelization = true)]
|
||||
[assembly: TestCaseOrderer("Xunit.Extensions.Ordering.TestCaseOrderer", "Xunit.Extensions.Ordering")]
|
||||
[assembly: TestCollectionOrderer("Xunit.Extensions.Ordering.CollectionOrderer", "Xunit.Extensions.Ordering")]
|
||||
```
|
||||
|
||||
Now you can specify the order you want your tests to run. Initially, simulate the happy path (i.e., the **SuccessfulLogin()**) by annotating the test with:
|
||||
|
||||
|
||||
```
|
||||
[Fact, Order(1)]
|
||||
public void SuccessfulLogin() {
|
||||
```
|
||||
|
||||
After you test a successful login, test the first failed login:
|
||||
|
||||
|
||||
```
|
||||
[Fact, Order(2)]
|
||||
public void FirstFailedLogin()
|
||||
```
|
||||
|
||||
And so on. You can add the order of the test runs by simply adding the **Order(x)** (where **x** denotes the order you want the test to run) annotation to your Fact.
|
||||
|
||||
This annotation guarantees that your tests will run in the exact order you want them to run, and now you can (finally!) completely test your integration scenario.
|
||||
|
||||
The final version of your test is:
|
||||
|
||||
|
||||
```
|
||||
using System;
|
||||
using Xunit;
|
||||
using app;
|
||||
using Xunit.Extensions.Ordering;
|
||||
[assembly: CollectionBehavior(DisableTestParallelization = true)]
|
||||
[assembly: TestCaseOrderer("Xunit.Extensions.Ordering.TestCaseOrderer", "Xunit.Extensions.Ordering")]
|
||||
[assembly: TestCollectionOrderer("Xunit.Extensions.Ordering.CollectionOrderer", "Xunit.Extensions.Ordering")]
|
||||
namespace tests
|
||||
{
|
||||
public class UnitTest1
|
||||
{
|
||||
Authenticate auth = new Authenticate();
|
||||
[Fact, Order(1)]
|
||||
public void SuccessfulLogin()
|
||||
{
|
||||
var given = "[elon_musk@tesla.com][9]";
|
||||
var expected = "Successfully logged in.";
|
||||
var actual= auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
[Fact, Order(2)]
|
||||
public void FirstFailedLogin()
|
||||
{
|
||||
var given = "[mickey@tesla.com][10]";
|
||||
var expected = "Incorrect login. You have 2 more attempts left.";
|
||||
var actual = auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
[Fact, Order(3)]
|
||||
public void SecondFailedLogin()
|
||||
{
|
||||
var given = "[mickey@tesla.com][10]";
|
||||
var expected = "Incorrect login. You have 1 more attempt left.";
|
||||
var actual = auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
[Fact, Order(4)]
|
||||
public void ThirdFailedLogin()
|
||||
{
|
||||
var given = "[mickey@tesla.com][10]";
|
||||
var expected = "Incorrect login. You have no more attempts left.";
|
||||
var actual = auth.Login(given);
|
||||
Assert.Equal(expected, actual);
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Run the test again—everything passes!
|
||||
|
||||
![Passing test][11]
|
||||
|
||||
### What are you testing exactly?
|
||||
|
||||
This article has focused on test-driven development (TDD), but let's review it from another methodology, Extreme Programming (XP). XP defines two types of tests:
|
||||
|
||||
1. Programmer tests
|
||||
2. Customer tests
|
||||
|
||||
|
||||
|
||||
So far, in this series of articles on TDD, I have focused on the first type of tests (i.e., programmer tests). In this and the previous article, I switched my lenses to examine the most efficient ways of doing customer tests.
|
||||
|
||||
The important point is that programmer (or producer) tests are focused on precision work. We often refer to these precision tests as "micro tests," while others may call them "unit tests." Customer tests, on the other hand, are more focused on a bigger picture; we sometimes refer to them as "approximation tests" or "end-to-end tests."
|
||||
|
||||
### Conclusion
|
||||
|
||||
This article demonstrated how to write a suite of approximation tests that integrate several discrete steps and ensure that the code can handle all edge cases, including simulating the customer experience when repeatedly attempting to log in and failing to obtain the necessary clearance. This combination of TDD and tools like xUnit and mountebank can lead to well-tested and thus more reliable application development.
|
||||
|
||||
In future articles, I'll look into other usages of mountebank for writing customer (or approximation) tests.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/failed-authentication-attempts-tdd
|
||||
|
||||
作者:[Alex Bunardzic][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/alex-bunardzic
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A (Programming keyboard.)
|
||||
[2]: https://opensource.com/article/20/3/service-virtualization-test-driven-development
|
||||
[3]: http://www.mbtest.org/
|
||||
[4]: http://www.google.com/search?q=new+msdn.microsoft.com
|
||||
[5]: https://opensource.com/sites/default/files/uploads/testfails_0.png (Failed test)
|
||||
[6]: https://opensource.com/sites/default/files/uploads/failurepattern.png (Reason for failed test)
|
||||
[7]: https://xunit.net/
|
||||
[8]: https://www.nuget.org/packages/Xunit.Extensions.Ordering/#
|
||||
[9]: mailto:elon_musk@tesla.com
|
||||
[10]: mailto:mickey@tesla.com
|
||||
[11]: https://opensource.com/sites/default/files/uploads/testpasses.png (Passing test)
|
@ -0,0 +1,848 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to upload an OpenStack disk image to Glance)
|
||||
[#]: via: (https://opensource.com/article/20/3/glance)
|
||||
[#]: author: (Jair Patete https://opensource.com/users/jpatete)
|
||||
|
||||
How to upload an OpenStack disk image to Glance
|
||||
======
|
||||
Make images available to your private cloud, and more.
|
||||
![blank background that says your image here][1]
|
||||
|
||||
[Glance][2] is an image service that allows you to discover, provide, register, or even delete disk and/or server images. It is a fundamental part of managing images on [OpenStack][3] and [TripleO][4] (which stands for "OpenStack-On-OpenStack").
|
||||
|
||||
If you have used a recent version of the OpenStack platform, you may already have launched your first Overcloud using TripleO, as you interact with Glance when uploading the Overcloud disk images inside the Undercloud's OpenStack (i.e., the node inside your cloud that is used to install the Overcloud, add/delete nodes, and do some other handy things).
|
||||
|
||||
In this article, I'll explain how to upload an image to Glance. Uploading an image to the service makes it available for the instances in your private cloud. Also, when you're deploying an Overcloud, it makes the image(s) available so the bare-metal nodes can be deployed using them.
|
||||
|
||||
In an Undercloud, execute the following command:
|
||||
|
||||
|
||||
```
|
||||
$ openstack overcloud image upload --image-path /home/stack/images/
|
||||
```
|
||||
|
||||
This uploads the following Overcloud images to Glance:
|
||||
|
||||
1. overcloud-full
|
||||
2. overcloud-full-initrd
|
||||
3. overcloud-full-vmlinuz
|
||||
|
||||
|
||||
|
||||
After some seconds, the images will upload successfully. Check the result by running:
|
||||
|
||||
|
||||
```
|
||||
(undercloud) [stack@undercloud ~]$ openstack image list
|
||||
+--------------------------------------+------------------------+--------+
|
||||
| ID | Name | Status |
|
||||
+--------------------------------------+------------------------+--------+
|
||||
| 09ca88ea-2771-459d-94a2-9f87c9c393f0 | overcloud-full | active |
|
||||
| 806b6c35-2dd5-478d-a384-217173a6e032 | overcloud-full-initrd | active |
|
||||
| b2c96922-161a-4171-829f-be73482549d5 | overcloud-full-vmlinuz | active |
|
||||
+--------------------------------------+------------------------+--------+
|
||||
```
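If you want a closer look at any one of these images (the name below is one of the three just uploaded), `openstack image show` prints the properties of a single image, including its current status:

```
$ openstack image show overcloud-full
```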
|
||||
|
||||
This is a mandatory and easy step in the process of deploying an Overcloud, and it happens within seconds, which makes it hard to see what's under the hood. But what if you want to know what is going on?
|
||||
|
||||
One thing to keep in mind: Glance works using client-server communication carried through REST APIs. Therefore, you can see what is going on by using [tcpdump][5] to take some TCP packets.
|
||||
|
||||
Another thing that is important: There is a database (there's always a database, right?) that is shared among all the OpenStack platform components, and it contains all the information that Glance (and other components) needs to operate. (In my case, MariaDB is the backend.) I won't get into how to access the SQL database, as I don't recommend playing around with it, but I will show what the database looks like during the upload process. (This is an entirely-for-test OpenStack installation, so there's no need to play with the database in this example.)
|
||||
|
||||
### The database
|
||||
|
||||
The basic flow of this example exercise is:
|
||||
|
||||
_Image Created -> Image Queued -> Image Saved -> Image Active_
|
||||
|
||||
You need permission to go through this flow, so first, you must ask OpenStack's identity service, [Keystone][6], for authorization. My Keystone catalog entry looks like this; as I'm in the Undercloud, I'll hit the public endpoint:
|
||||
|
||||
|
||||
```
|
||||
| keystone | identity | regionOne |
|
||||
| | | public: <https://172.16.0.20:13000> |
|
||||
| | | regionOne |
|
||||
| | | internal: <http://172.16.0.19:5000> |
|
||||
| | | regionOne |
|
||||
| | | admin: <http://172.16.0.19:35357> |
|
||||
```
|
||||
|
||||
And for Glance:
|
||||
|
||||
|
||||
```
|
||||
| glance | image | regionOne |
|
||||
| | | public: <https://172.16.0.20:13292> |
|
||||
| | | regionOne |
|
||||
| | | internal: <http://172.16.0.19:9292> |
|
||||
| | | regionOne |
|
||||
| | | admin: <http://172.16.0.19:9292> |
|
||||
```
|
||||
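The catalog entries above can be listed in your own environment with the identity CLI (the endpoints will of course differ per deployment):

```
`$ openstack catalog list`
```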
|
||||
I'll capture traffic to those ports plus TCP port 3306; the latter is so I can see what's going on with the SQL database. To capture the packets, use the tcpdump command:
|
||||
|
||||
|
||||
```
|
||||
`$ tcpdump -nvs0 -i ens3 host 172.16.0.20 and port 13000 or port 3306 or port 13292`
|
||||
```
|
||||
|
||||
Under the hood, this looks like:
|
||||
|
||||
Authentication:
|
||||
|
||||
**Initial request (discovery of API Version Information):**
|
||||
|
||||
|
||||
```
|
||||
`https://172.16.0.20:13000 "GET / HTTP/1.1"`
|
||||
```
|
||||
|
||||
**Response:**
|
||||
|
||||
|
||||
```
|
||||
Content-Length: 268
Content-Type: application/json
Date: Tue, 18 Feb 2020 04:49:55 GMT
Location: https://172.16.0.20:13000/v3/
Server: Apache
Vary: X-Auth-Token
x-openstack-request-id: req-6edc6642-3945-4fd0-a0f7-125744fb23ec
|
||||
|
||||
{
|
||||
"versions":{
|
||||
"values":[
|
||||
{
|
||||
"id":"v3.13",
|
||||
"status":"stable",
|
||||
"updated":"2019-07-19T00:00:00Z",
|
||||
"links":[
|
||||
{
|
||||
"rel":"self",
|
||||
"href":"<https://172.16.0.20:13000/v3/>"
|
||||
}
|
||||
],
|
||||
"media-types":[
|
||||
{
|
||||
"base":"application/json",
|
||||
"type":"application/vnd.openstack.identity-v3+json"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Authentication request**
|
||||
|
||||
|
||||
```
|
||||
`https://172.16.0.20:13000 "POST /v3/auth/tokens HTTP/1.1"`
|
||||
```
|
||||
|
||||
After this step, a token is issued to the admin user for use with the services. (The token isn't shown here for security reasons.) The token tells the other services something like: "I've already logged in with the proper credentials against Keystone; please let me go straight to the service and ask no more questions about who I am."
|
||||
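If you are curious about what such a token looks like, you can ask Keystone for one explicitly from the command line; just be careful not to paste a real token anywhere public:

```
`$ openstack token issue -c id -f value`
```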
|
||||
At this point, the command:
|
||||
|
||||
|
||||
```
|
||||
`$ openstack overcloud image upload --image-path /home/stack/images/`
|
||||
```
|
||||
|
||||
executes, and it is authorized to upload the image to the Glance service.
|
||||
|
||||
The current status is:
|
||||
|
||||
_**Image Created**_ _-> Image Queued -> Image Saved -> Image Active_
|
||||
|
||||
The service checks whether this image already exists:
|
||||
|
||||
|
||||
```
|
||||
`https://172.16.0.20:13292 "GET /v2/images/overcloud-full-vmlinuz HTTP/1.1"`
|
||||
```
|
||||
|
||||
From the client's point of view, the request looks like:
|
||||
|
||||
|
||||
```
|
||||
`curl -g -i -X GET -H 'b'Content-Type': b'application/octet-stream'' -H 'b'X-Auth-Token': b'gAAAAABeS2zzWzAZBqF-whE7SmJt_Atx7tiLZhcL8mf6wJPrO3RBdv4SdnWImxbeSQSqEQdZJnwBT79SWhrtt7QDn-2o6dsAtpUb1Rb7w6xe7Qg_AHQfD5P1rU7tXXtKu2DyYFhtPg2TRQS5viV128FyItyt49Yn_ho3lWfIXaR3TuZzyIz38NU'' -H 'User-Agent: python-glanceclient' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'Connection: keep-alive' --cacert /etc/pki/ca-trust/source/anchors/cm-local-ca.pem --cert None --key None https://172.16.0.20:13292/v2/images/overcloud-full-vmlinuz`
|
||||
```
|
||||
|
||||
Here, you can see the Fernet token, the user agent indicating that the Glance client is speaking, and the TLS CA certificate; the TLS encryption is why you don't see any of this payload in your tcpdump.
|
||||
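As a side note, curl-style lines like the one above are what the OpenStack clients print when debug logging is enabled. Since TLS hides the payload from tcpdump, an easy way to get this same client-side view in your own environment is to run a command with the global `--debug` flag, for example:

```
`$ openstack --debug image list`
```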
|
||||
Since the image does not exist yet, getting a 404 error for this request is expected.
|
||||
|
||||
Next, the list of existing images is queried:
|
||||
|
||||
|
||||
```
|
||||
`https://172.16.0.20:13292 "GET /v2/images?limit=20 HTTP/1.1" 200 78`
|
||||
```
|
||||
|
||||
and retrieved from the service:
|
||||
|
||||
|
||||
```
|
||||
HTTP/1.1 200 OK
|
||||
Content-Length: 78
|
||||
Content-Type: application/json
|
||||
X-Openstack-Request-Id: req-0f117984-f427-4d35-bec3-956432865dd1
|
||||
Date: Tue, 18 Feb 2020 04:49:55 GMT
|
||||
|
||||
{
|
||||
"images":[
|
||||
|
||||
],
|
||||
"first":"/v2/images?limit=20",
|
||||
"schema":"/v2/schemas/images"
|
||||
}
|
||||
```
|
||||
|
||||
Yes, it is still empty.
|
||||
|
||||
Meanwhile, the same check has been done on the database, where a huge query was triggered with the same result. (To line up the timestamps, I looked at the tcpdump after the connection and queries had finished and then compared them with the API calls' timestamps.)
|
||||
|
||||
To identify where the Glance database calls started, I did a full-packet search for the word "glance" inside the tcpdump file. This saves a lot of time compared to walking through all the other database calls, so it is my starting point for checking each Glance database call.
|
||||
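If you prefer the command line to the Wireshark GUI, the same full-packet string search can be done with tshark; here I assume the capture was saved as `image-upload.pcap`:

```
`$ tshark -r image-upload.pcap -Y 'frame contains "glance"'`
```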
|
||||
![Searching "glance" inside tcpdump][7]
|
||||
|
||||
The first query returns nothing in the fields, as the image still does not exist:
|
||||
|
||||
|
||||
```
|
||||
SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.visibility AS images_visibility, images.checksum AS images_checksum, images.os_hash_algo AS images_os_hash_algo, images.os_hash_value AS images_os_hash_value, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected, images.os_hidden AS images_os_hidden, image_properties_1.created_at AS image_properties_1_created_at, image_properties_1.updated_at AS image_properties_1_updated_at, image_properties_1.deleted_at AS image_properties_1_deleted_at, image_properties_1.deleted AS image_properties_1_deleted, image_properties_1.id AS image_properties_1_id, image_properties_1.image_id AS image_properties_1_image_id, image_properties_1.name AS image_properties_1_name, image_properties_1.value AS image_properties_1_value, image_locations_1.created_at AS image_locations_1_created_at, image_locations_1.updated_at AS image_locations_1_updated_at, image_locations_1.deleted_at AS image_locations_1_deleted_at, image_locations_1.deleted AS image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, image_locations_1.image_id AS image_locations_1_image_id, image_locations_1.value AS image_locations_1_value, image_locations_1.meta_data AS image_locations_1_meta_data, image_locations_1.status AS image_locations_1_status
|
||||
FROM images LEFT OUTER JOIN image_properties AS image_properties_1 ON images.id = image_properties_1.image_id LEFT OUTER JOIN image_locations AS image_locations_1 ON images.id = image_locations_1.image_id
|
||||
WHERE images.id = 'overcloud-full-vmlinuz'
|
||||
```
|
||||
|
||||
Next, the image will start uploading, so an API call and a write to the database are expected.
|
||||
|
||||
On the API side, the image schema is retrieved by querying the service at:
|
||||
|
||||
|
||||
```
|
||||
`https://172.16.0.20:13292 "GET /v2/schemas/image HTTP/1.1"`
|
||||
```
|
||||
|
||||
Then, some of the fields are populated with image information. This is what the schema looks like:
|
||||
|
||||
|
||||
```
|
||||
{
|
||||
"name":"image",
|
||||
"properties":{
|
||||
"id":{
|
||||
"type":"string",
|
||||
"description":"An identifier for the image",
|
||||
"pattern":"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$"
|
||||
},
|
||||
"name":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"description":"Descriptive name for the image",
|
||||
"maxLength":255
|
||||
},
|
||||
"status":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"Status of the image",
|
||||
"enum":[
|
||||
"queued",
|
||||
"saving",
|
||||
"active",
|
||||
"killed",
|
||||
"deleted",
|
||||
"uploading",
|
||||
"importing",
|
||||
"pending_delete",
|
||||
"deactivated"
|
||||
]
|
||||
},
|
||||
"visibility":{
|
||||
"type":"string",
|
||||
"description":"Scope of image accessibility",
|
||||
"enum":[
|
||||
"community",
|
||||
"public",
|
||||
"private",
|
||||
"shared"
|
||||
]
|
||||
},
|
||||
"protected":{
|
||||
"type":"boolean",
|
||||
"description":"If true, image will not be deletable."
|
||||
},
|
||||
"os_hidden":{
|
||||
"type":"boolean",
|
||||
"description":"If true, image will not appear in default image list response."
|
||||
},
|
||||
"checksum":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"readOnly":true,
|
||||
"description":"md5 hash of image contents.",
|
||||
"maxLength":32
|
||||
},
|
||||
"os_hash_algo":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"readOnly":true,
|
||||
"description":"Algorithm to calculate the os_hash_value",
|
||||
"maxLength":64
|
||||
},
|
||||
"os_hash_value":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"readOnly":true,
|
||||
"description":"Hexdigest of the image contents using the algorithm specified by the os_hash_algo",
|
||||
"maxLength":128
|
||||
},
|
||||
"owner":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"description":"Owner of the image",
|
||||
"maxLength":255
|
||||
},
|
||||
"size":{
|
||||
"type":[
|
||||
"null",
|
||||
"integer"
|
||||
],
|
||||
"readOnly":true,
|
||||
"description":"Size of image file in bytes"
|
||||
},
|
||||
"virtual_size":{
|
||||
"type":[
|
||||
"null",
|
||||
"integer"
|
||||
],
|
||||
"readOnly":true,
|
||||
"description":"Virtual size of image in bytes"
|
||||
},
|
||||
"container_format":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"description":"Format of the container",
|
||||
"enum":[
|
||||
null,
|
||||
"ami",
|
||||
"ari",
|
||||
"aki",
|
||||
"bare",
|
||||
"ovf",
|
||||
"ova",
|
||||
"docker",
|
||||
"compressed"
|
||||
]
|
||||
},
|
||||
"disk_format":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"description":"Format of the disk",
|
||||
"enum":[
|
||||
null,
|
||||
"ami",
|
||||
"ari",
|
||||
"aki",
|
||||
"vhd",
|
||||
"vhdx",
|
||||
"vmdk",
|
||||
"raw",
|
||||
"qcow2",
|
||||
"vdi",
|
||||
"iso",
|
||||
"ploop"
|
||||
]
|
||||
},
|
||||
"created_at":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"Date and time of image registration"
|
||||
},
|
||||
"updated_at":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"Date and time of the last image modification"
|
||||
},
|
||||
"tags":{
|
||||
"type":"array",
|
||||
"description":"List of strings related to the image",
|
||||
"items":{
|
||||
"type":"string",
|
||||
"maxLength":255
|
||||
}
|
||||
},
|
||||
"direct_url":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"URL to access the image file kept in external store"
|
||||
},
|
||||
"min_ram":{
|
||||
"type":"integer",
|
||||
"description":"Amount of ram (in MB) required to boot image."
|
||||
},
|
||||
"min_disk":{
|
||||
"type":"integer",
|
||||
"description":"Amount of disk space (in GB) required to boot image."
|
||||
},
|
||||
"self":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"An image self url"
|
||||
},
|
||||
"file":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"An image file url"
|
||||
},
|
||||
"stores":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"Store in which image data resides. Only present when the operator has enabled multiple stores. May be a comma-separated list of store identifiers."
|
||||
},
|
||||
"schema":{
|
||||
"type":"string",
|
||||
"readOnly":true,
|
||||
"description":"An image schema url"
|
||||
},
|
||||
"locations":{
|
||||
"type":"array",
|
||||
"items":{
|
||||
"type":"object",
|
||||
"properties":{
|
||||
"url":{
|
||||
"type":"string",
|
||||
"maxLength":255
|
||||
},
|
||||
"metadata":{
|
||||
"type":"object"
|
||||
},
|
||||
"validation_data":{
|
||||
"description":"Values to be used to populate the corresponding image properties. If the image status is not 'queued', values must exactly match those already contained in the image properties.",
|
||||
"type":"object",
|
||||
"writeOnly":true,
|
||||
"additionalProperties":false,
|
||||
"properties":{
|
||||
"checksum":{
|
||||
"type":"string",
|
||||
"minLength":32,
|
||||
"maxLength":32
|
||||
},
|
||||
"os_hash_algo":{
|
||||
"type":"string",
|
||||
"maxLength":64
|
||||
},
|
||||
"os_hash_value":{
|
||||
"type":"string",
|
||||
"maxLength":128
|
||||
}
|
||||
},
|
||||
"required":[
|
||||
"os_hash_algo",
|
||||
"os_hash_value"
|
||||
]
|
||||
}
|
||||
},
|
||||
"required":[
|
||||
"url",
|
||||
"metadata"
|
||||
]
|
||||
},
|
||||
"description":"A set of URLs to access the image file kept in external store"
|
||||
},
|
||||
"kernel_id":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"pattern":"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
|
||||
"description":"ID of image stored in Glance that should be used as the kernel when booting an AMI-style image.",
|
||||
"is_base":false
|
||||
},
|
||||
"ramdisk_id":{
|
||||
"type":[
|
||||
"null",
|
||||
"string"
|
||||
],
|
||||
"pattern":"^([0-9a-fA-F]){8}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){4}-([0-9a-fA-F]){12}$",
|
||||
"description":"ID of image stored in Glance that should be used as the ramdisk when booting an AMI-style image.",
|
||||
"is_base":false
|
||||
},
|
||||
"instance_uuid":{
|
||||
"type":"string",
|
||||
"description":"Metadata which can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.)",
|
||||
"is_base":false
|
||||
},
|
||||
"architecture":{
|
||||
"description":"Operating system architecture as specified in <https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html>",
|
||||
"type":"string",
|
||||
"is_base":false
|
||||
},
|
||||
"os_distro":{
|
||||
"description":"Common name of operating system distribution as specified in <https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html>",
|
||||
"type":"string",
|
||||
"is_base":false
|
||||
},
|
||||
"os_version":{
|
||||
"description":"Operating system version as specified by the distributor.",
|
||||
"type":"string",
|
||||
"is_base":false
|
||||
},
|
||||
"description":{
|
||||
"description":"A human-readable string describing this image.",
|
||||
"type":"string",
|
||||
"is_base":false
|
||||
},
|
||||
"cinder_encryption_key_id":{
|
||||
"description":"Identifier in the OpenStack Key Management Service for the encryption key for the Block Storage Service to use when mounting a volume created from this image",
|
||||
"type":"string",
|
||||
"is_base":false
|
||||
},
|
||||
"cinder_encryption_key_deletion_policy":{
|
||||
"description":"States the condition under which the Image Service will delete the object associated with the 'cinder_encryption_key_id' image property. If this property is missing, the Image Service will take no action",
|
||||
"type":"string",
|
||||
"enum":[
|
||||
"on_image_deletion",
|
||||
"do_not_delete"
|
||||
],
|
||||
"is_base":false
|
||||
}
|
||||
},
|
||||
"additionalProperties":{
|
||||
"type":"string"
|
||||
},
|
||||
"links":[
|
||||
{
|
||||
"rel":"self",
|
||||
"href":"{self}"
|
||||
},
|
||||
{
|
||||
"rel":"enclosure",
|
||||
"href":"{file}"
|
||||
},
|
||||
{
|
||||
"rel":"describedby",
|
||||
"href":"{schema}"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
That's a long schema!
|
||||
|
||||
Here is the API call that starts uploading the image information; the image will now move to the "queued" state:
|
||||
|
||||
|
||||
```
|
||||
`curl -g -i -X POST -H 'b'Content-Type': b'application/json'' -H 'b'X-Auth-Token': b'gAAAAABeS2zzWzAZBqF-whE7SmJt_Atx7tiLZhcL8mf6wJPrO3RBdv4SdnWImxbeSQSqEQdZJnwBT79SWhrtt7QDn-2o6dsAtpUb1Rb7w6xe7Qg_AHQfD5P1rU7tXXtKu2DyYFhtPg2TRQS5viV128FyItyt49Yn_ho3lWfIXaR3TuZzyIz38NU'' -H 'User-Agent: python-glanceclient' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'Connection: keep-alive' --cacert /etc/pki/ca-trust/source/anchors/cm-local-ca.pem --cert None --key None -d '{"name": "overcloud-full-vmlinuz", "disk_format": "aki", "visibility": "public", "container_format": "bare"}' https://172.16.0.20:13292/v2/images`
|
||||
```
|
||||
|
||||
Here is the API response:
|
||||
|
||||
|
||||
```
|
||||
HTTP/1.1 201 Created
|
||||
Content-Length: 629
|
||||
Content-Type: application/json
|
||||
Location: <https://172.16.0.20:13292/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c>
|
||||
Openstack-Image-Import-Methods: web-download
|
||||
X-Openstack-Request-Id: req-bd5194f0-b1c2-40d3-a646-8a24ed0a1b1b
|
||||
Date: Tue, 18 Feb 2020 04:49:56 GMT
|
||||
|
||||
{
|
||||
"name":"overcloud-full-vmlinuz",
|
||||
"disk_format":"aki",
|
||||
"container_format":"bare",
|
||||
"visibility":"public",
|
||||
"size":null,
|
||||
"virtual_size":null,
|
||||
"status":"queued",
|
||||
"checksum":null,
|
||||
"protected":false,
|
||||
"min_ram":0,
|
||||
"min_disk":0,
|
||||
"owner":"c0a46a106d3341649a25b10f2770aff8",
|
||||
"os_hidden":false,
|
||||
"os_hash_algo":null,
|
||||
"os_hash_value":null,
|
||||
"id":"13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"created_at":"2020-02-18T04:49:55Z",
|
||||
"updated_at":"2020-02-18T04:49:55Z",
|
||||
"tags":[
|
||||
|
||||
],
|
||||
"self":"/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"file":"/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c/file",
|
||||
"schema":"/v2/schemas/image"
|
||||
}
|
||||
```
|
||||
|
||||
and the SQL call to store the information in the Glance-DB:
|
||||
|
||||
|
||||
```
|
||||
`INSERT INTO images (created_at, updated_at, deleted_at, deleted, id, name, disk_format, container_format, SIZE, virtual_size, STATUS, visibility, checksum, os_hash_algo, os_hash_value, min_disk, min_ram, owner, protected, os_hidden) VALUES ('2020-02-18 04:49:55.993652', '2020-02-18 04:49:55.993652', NULL, 0, '13892850-6add-4c28-87cd-6da62e6f8a3c', 'overcloud-full-vmlinuz', 'aki', 'bare', NULL, NULL, 'queued', 'public', NULL, NULL, NULL, 0, 0, 'c0a46a106d3341649a25b10f2770aff8', 0, 0)`
|
||||
```
|
||||
|
||||
Current status:
|
||||
|
||||
_Image Created ->_ _**Image Queued**_ _-> Image Saved -> Image Active_
|
||||
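For reference, the create-and-upload sequence that `openstack overcloud image upload` drives for this file can be reproduced manually for a single image with the unified CLI. The flags mirror the POST body shown above, and the path is the one used earlier (the exact file name inside that directory may differ in your environment):

```
`$ openstack image create --disk-format aki --container-format bare --public --file /home/stack/images/overcloud-full.vmlinuz overcloud-full-vmlinuz`
```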
|
||||
In the Glance architecture, the images are "physically" stored in the specified backend (Swift in this case), so traffic will also hit the Swift endpoint at port 8080. Capturing this traffic will make the .pcap file as large as the images being uploaded (2GB in my case).[*][8]
|
||||
|
||||
![Glance architecture][9]
|
||||
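A side note on capture size: if you want to include the Swift traffic on port 8080 without ending up with a multi-gigabyte .pcap file, one option is to truncate each captured packet to roughly its headers, for example:

```
`$ tcpdump -nv -s 128 -i ens3 port 8080`
```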
|
||||
|
||||
```
|
||||
SELECT image_properties.created_at AS image_properties_created_at, image_properties.updated_at AS image_properties_updated_at, image_properties.deleted_at AS image_properties_deleted_at, image_properties.deleted AS image_properties_deleted, image_properties.id AS image_properties_id, image_properties.image_id AS image_properties_image_id, image_properties.name AS image_properties_name, image_properties.value AS image_properties_value
|
||||
FROM image_properties
|
||||
WHERE '13892850-6add-4c28-87cd-6da62e6f8a3c' = image_properties.image_id
|
||||
```
|
||||
|
||||
You can see some validations happening within the database. At this point, the flow status is "queued" (as shown above), and you can check it here:
|
||||
|
||||
![Checking the Glance image status][10]
|
||||
|
||||
You can also check it with the following queries, where the **updated_at** field and the flow status are modified accordingly (i.e., queued to saving):
|
||||
|
||||
Current status:
|
||||
|
||||
_Image Created -> Image Queued ->_ _**Image Saved**_ _-> Image Active_
|
||||
|
||||
|
||||
```
|
||||
SELECT images.id AS images_id
|
||||
FROM images
|
||||
WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c' AND images.status = 'queued'
|
||||
UPDATE images SET updated_at='2020-02-18 04:49:56.046542', id='13892850-6add-4c28-87cd-6da62e6f8a3c', name='overcloud-full-vmlinuz', disk_format='aki', container_format='bare', SIZE=NULL, virtual_size=NULL, STATUS='saving', visibility='public', checksum=NULL, os_hash_algo=NULL, os_hash_value=NULL, min_disk=0, min_ram=0, owner='c0a46a106d3341649a25b10f2770aff8', protected=0, os_hidden=0 WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c' AND images.status = 'queued'
|
||||
```
|
||||
|
||||
This is validated during the process with the following query:
|
||||
|
||||
|
||||
```
|
||||
SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.visibility AS images_visibility, images.checksum AS images_checksum, images.os_hash_algo AS images_os_hash_algo, images.os_hash_value AS images_os_hash_value, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected, images.os_hidden AS images_os_hidden, image_properties_1.created_at AS image_properties_1_created_at, image_properties_1.updated_at AS image_properties_1_updated_at, image_properties_1.deleted_at AS image_properties_1_deleted_at, image_properties_1.deleted AS image_properties_1_deleted, image_properties_1.id AS image_properties_1_id, image_properties_1.image_id AS image_properties_1_image_id, image_properties_1.name AS image_properties_1_name, image_properties_1.value AS image_properties_1_value, image_locations_1.created_at AS image_locations_1_created_at, image_locations_1.updated_at AS image_locations_1_updated_at, image_locations_1.deleted_at AS image_locations_1_deleted_at, image_locations_1.deleted AS image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, image_locations_1.image_id AS image_locations_1_image_id, image_locations_1.value AS image_locations_1_value, image_locations_1.meta_data AS image_locations_1_meta_data, image_locations_1.status AS image_locations_1_status
|
||||
FROM images LEFT OUTER JOIN image_properties AS image_properties_1 ON images.id = image_properties_1.image_id LEFT OUTER JOIN image_locations AS image_locations_1 ON images.id = image_locations_1.image_id
|
||||
WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c'
|
||||
```
|
||||
|
||||
And you can see its response in the Wireshark capture:
|
||||
|
||||
![Wireshark capture][11]
|
||||
|
||||
After the image is completely uploaded, its status will change to "active," which means the image is available in the service and ready to use.
|
||||
|
||||
|
||||
```
|
||||
<https://172.16.0.20:13292> "GET /v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c HTTP/1.1" 200
|
||||
|
||||
{
|
||||
"name":"overcloud-full-vmlinuz",
|
||||
"disk_format":"aki",
|
||||
"container_format":"bare",
|
||||
"visibility":"public",
|
||||
"size":8106848,
|
||||
"virtual_size":null,
|
||||
"status":"active",
|
||||
"checksum":"5d31ee013d06b83d02c106ea07f20265",
|
||||
"protected":false,
|
||||
"min_ram":0,
|
||||
"min_disk":0,
|
||||
"owner":"c0a46a106d3341649a25b10f2770aff8",
|
||||
"os_hidden":false,
|
||||
"os_hash_algo":"sha512",
|
||||
"os_hash_value":"9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499",
|
||||
"id":"13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"created_at":"2020-02-18T04:49:55Z",
|
||||
"updated_at":"2020-02-18T04:49:56Z",
|
||||
"direct_url":"swift+config://ref1/glance/13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"tags":[
|
||||
|
||||
],
|
||||
"self":"/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"file":"/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c/file",
|
||||
"schema":"/v2/schemas/image"
|
||||
}
|
||||
```
|
||||
|
||||
You can also see the database call that updates the current status:
|
||||
|
||||
|
||||
```
|
||||
`UPDATE images SET updated_at='2020-02-18 04:49:56.571879', id='13892850-6add-4c28-87cd-6da62e6f8a3c', name='overcloud-full-vmlinuz', disk_format='aki', container_format='bare', SIZE=8106848, virtual_size=NULL, STATUS='active', visibility='public', checksum='5d31ee013d06b83d02c106ea07f20265', os_hash_algo='sha512', os_hash_value='9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499', min_disk=0, min_ram=0, owner='c0a46a106d3341649a25b10f2770aff8', protected=0, os_hidden=0 WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c' AND images.status = 'saving'`
|
||||
```
|
||||
|
||||
Current status:
|
||||
|
||||
_Image Created -> Image Queued -> Image Saved ->_ _**Image Active**_
|
||||
|
||||
One interesting thing is that a property is added to the image after it is uploaded, using a PATCH request. This property is **hw_architecture**, and it is set to **x86_64**:
|
||||
|
||||
|
||||
```
|
||||
<https://172.16.0.20:13292> "PATCH /v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c HTTP/1.1"
|
||||
|
||||
curl -g -i -X PATCH -H 'b'Content-Type': b'application/openstack-images-v2.1-json-patch'' -H 'b'X-Auth-Token': b'gAAAAABeS2zzWzAZBqF-whE7SmJt_Atx7tiLZhcL8mf6wJPrO3RBdv4SdnWImxbeSQSqEQdZJnwBT79SWhrtt7QDn-2o6dsAtpUb1Rb7w6xe7Qg_AHQfD5P1rU7tXXtKu2DyYFhtPg2TRQS5viV128FyItyt49Yn_ho3lWfIXaR3TuZzyIz38NU'' -H 'User-Agent: python-glanceclient' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'Connection: keep-alive' --cacert /etc/pki/ca-trust/source/anchors/cm-local-ca.pem --cert None --key None -d '[{"op": "add", "path": "/hw_architecture", "value": "x86_64"}]' <https://172.16.0.20:13292/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c>
|
||||
|
||||
Response:
|
||||
|
||||
{
|
||||
"hw_architecture":"x86_64",
|
||||
"name":"overcloud-full-vmlinuz",
|
||||
"disk_format":"aki",
|
||||
"container_format":"bare",
|
||||
"visibility":"public",
|
||||
"size":8106848,
|
||||
"virtual_size":null,
|
||||
"status":"active",
|
||||
"checksum":"5d31ee013d06b83d02c106ea07f20265",
|
||||
"protected":false,
|
||||
"min_ram":0,
|
||||
"min_disk":0,
|
||||
"owner":"c0a46a106d3341649a25b10f2770aff8",
|
||||
"os_hidden":false,
|
||||
"os_hash_algo":"sha512",
|
||||
"os_hash_value":"9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499",
|
||||
"id":"13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"created_at":"2020-02-18T04:49:55Z",
|
||||
"updated_at":"2020-02-18T04:49:56Z",
|
||||
"direct_url":"swift+config://ref1/glance/13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"tags":[
|
||||
|
||||
],
|
||||
"self":"/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c",
|
||||
"file":"/v2/images/13892850-6add-4c28-87cd-6da62e6f8a3c/file",
|
||||
"schema":"/v2/schemas/image"
|
||||
}
|
||||
```
|
||||
|
||||
This is also written to the MariaDB database:
|
||||
|
||||
|
||||
```
|
||||
`INSERT INTO image_properties (created_at, updated_at, deleted_at, deleted, image_id, name, VALUE) VALUES ('2020-02-18 04:49:56.655780', '2020-02-18 04:49:56.655783', NULL, 0, '13892850-6add-4c28-87cd-6da62e6f8a3c', 'hw_architecture', 'x86_64')`
|
||||
```
|
||||
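If you ever need to add or fix such a property yourself, the same PATCH can be issued through the CLI; the image ID below is the one from this example:

```
`$ openstack image set --property hw_architecture=x86_64 13892850-6add-4c28-87cd-6da62e6f8a3c`
```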
|
||||
This is pretty much what happens when you upload an image to Glance. Here's what it looks like if you check on the database:
|
||||
|
||||
|
||||
```
|
||||
MariaDB [glance]> SELECT images.created_at AS images_created_at, images.updated_at AS images_updated_at, images.deleted_at AS images_deleted_at, images.deleted AS images_deleted, images.id AS images_id, images.name AS images_name, images.disk_format AS images_disk_format, images.container_format AS images_container_format, images.size AS images_size, images.virtual_size AS images_virtual_size, images.status AS images_status, images.visibility AS images_visibility, images.checksum AS images_checksum, images.os_hash_algo AS images_os_hash_algo, images.os_hash_value AS images_os_hash_value, images.min_disk AS images_min_disk, images.min_ram AS images_min_ram, images.owner AS images_owner, images.protected AS images_protected, images.os_hidden AS images_os_hidden, image_properties_1.created_at AS image_properties_1_created_at, image_properties_1.updated_at AS image_properties_1_updated_at, image_properties_1.deleted_at AS image_properties_1_deleted_at, image_properties_1.deleted AS image_properties_1_deleted, image_properties_1.id AS image_properties_1_id, image_properties_1.image_id AS image_properties_1_image_id, image_properties_1.name AS image_properties_1_name, image_properties_1.value AS image_properties_1_value, image_locations_1.created_at AS image_locations_1_created_at, image_locations_1.updated_at AS image_locations_1_updated_at, image_locations_1.deleted_at AS image_locations_1_deleted_at, image_locations_1.deleted AS image_locations_1_deleted, image_locations_1.id AS image_locations_1_id, image_locations_1.image_id AS image_locations_1_image_id, image_locations_1.value AS image_locations_1_value, image_locations_1.meta_data AS image_locations_1_meta_data, image_locations_1.status AS image_locations_1_status FROM images LEFT OUTER JOIN image_properties AS image_properties_1 ON images.id = image_properties_1.image_id LEFT OUTER JOIN image_locations AS image_locations_1 ON images.id = image_locations_1.image_id WHERE images.id = '13892850-6add-4c28-87cd-6da62e6f8a3c'\G;
|
||||
*************************** 1. row ***************************
|
||||
images_created_at: 2020-02-18 04:49:55
|
||||
images_updated_at: 2020-02-18 04:49:56
|
||||
images_deleted_at: NULL
|
||||
images_deleted: 0
|
||||
images_id: 13892850-6add-4c28-87cd-6da62e6f8a3c
|
||||
images_name: overcloud-full-vmlinuz
|
||||
images_disk_format: aki
|
||||
images_container_format: bare
|
||||
images_size: 8106848
|
||||
images_virtual_size: NULL
|
||||
images_status: active
|
||||
images_visibility: public
|
||||
images_checksum: 5d31ee013d06b83d02c106ea07f20265
|
||||
images_os_hash_algo: sha512
|
||||
images_os_hash_value: 9f59d36dec7b30f69b696003e7e3726bbbb27a36211a0b31278318c2af0b969ffb279b0991474c18c9faef8b9e96cf372ce4087ca13f5f05338a36f57c281499
|
||||
images_min_disk: 0
|
||||
images_min_ram: 0
|
||||
images_owner: c0a46a106d3341649a25b10f2770aff8
|
||||
images_protected: 0
|
||||
images_os_hidden: 0
|
||||
image_properties_1_created_at: 2020-02-18 04:49:56
|
||||
image_properties_1_updated_at: 2020-02-18 04:49:56
|
||||
image_properties_1_deleted_at: NULL
|
||||
image_properties_1_deleted: 0
|
||||
image_properties_1_id: 11
|
||||
image_properties_1_image_id: 13892850-6add-4c28-87cd-6da62e6f8a3c
|
||||
image_properties_1_name: hw_architecture
|
||||
image_properties_1_value: x86_64
|
||||
image_locations_1_created_at: 2020-02-18 04:49:56
|
||||
image_locations_1_updated_at: 2020-02-18 04:49:56
|
||||
image_locations_1_deleted_at: NULL
|
||||
image_locations_1_deleted: 0
|
||||
image_locations_1_id: 7
|
||||
image_locations_1_image_id: 13892850-6add-4c28-87cd-6da62e6f8a3c
|
||||
image_locations_1_value: swift+config://ref1/glance/13892850-6add-4c28-87cd-6da62e6f8a3c
|
||||
image_locations_1_meta_data: {}
|
||||
image_locations_1_status: active
|
||||
1 row in set (0.00 sec)
|
||||
```
|
||||
|
||||
The final result is:
|
||||
|
||||
|
||||
```
|
||||
(undercloud) [stack@undercloud ~]$ openstack image list
|
||||
+--------------------------------------+------------------------+--------+
|
||||
| ID | Name | Status |
|
||||
+--------------------------------------+------------------------+--------+
|
||||
| 9a26b9da-3783-4223-bdd7-c553aa194e30 | overcloud-full | active |
|
||||
| a2914297-c70f-4021-bc3e-8ec2123f6ea6 | overcloud-full-initrd | active |
|
||||
| 13892850-6add-4c28-87cd-6da62e6f8a3c | overcloud-full-vmlinuz | active |
|
||||
+--------------------------------------+------------------------+--------+
|
||||
(undercloud) [stack@undercloud ~]$
|
||||
```
|
||||
|
||||
Some other minor things happen during this process, but overall, this is how it looks.
|
||||
|
||||
### Conclusion
|
||||
|
||||
Understanding the flow of the most common actions in the OpenStack platform will improve your troubleshooting skills when you face issues at work. You can check the status of an image in Glance; know whether an image is in the "queued," "saving," or "active" state; and take some captures in your environment to see what is going on at the endpoints you need to check.
|
||||
|
||||
I enjoy debugging, and I consider it an important skill for any role, whether you work in support, consulting, development (of course!), or architecture. I hope this article gave you some basic guidelines to start debugging things.
|
||||
|
||||
* * *
|
||||
|
||||
* In case you're wondering how to open a 2GB .pcap file without problems, here is one way to do it:
|
||||
|
||||
|
||||
```
|
||||
`$ editcap -c 5000 image-upload.pcap upload-overcloud-image.pcap`
|
||||
```
|
||||
|
||||
This splits your huge capture into smaller captures of 5,000 packets each.
|
||||
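To get a quick summary of the resulting chunks, you can use capinfos, which ships with the same Wireshark tool set as editcap:

```
`$ capinfos upload-overcloud-image*.pcap`
```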
|
||||
* * *
|
||||
|
||||
_This article was [originally posted][12] on LinkedIn and is reprinted with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/3/glance
|
||||
|
||||
作者:[Jair Patete][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jpatete
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yourimagehere_520x292.png?itok=V-xhX7KL (blank background that says your image here)
|
||||
[2]: https://www.openstack.org/software/releases/ocata/components/glance
|
||||
[3]: https://www.openstack.org/
|
||||
[4]: https://wiki.openstack.org/wiki/TripleO
|
||||
[5]: https://www.tcpdump.org/
|
||||
[6]: https://docs.openstack.org/keystone/latest/
|
||||
[7]: https://opensource.com/sites/default/files/uploads/glance-db-calls.png (Searching "glance" inside tcpdump)
|
||||
[8]: tmp.qBKg0ttLIJ#*
|
||||
[9]: https://opensource.com/sites/default/files/uploads/glance-architecture.png (Glance architecture)
|
||||
[10]: https://opensource.com/sites/default/files/uploads/check-flow-status.png (Checking the Glance image status)
|
||||
[11]: https://opensource.com/sites/default/files/uploads/wireshark-capture.png (Wireshark capture)
|
||||
[12]: https://www.linkedin.com/pulse/what-happens-behind-doors-when-we-upload-image-glance-patete-garc%25C3%25ADa/?trackingId=czWiFC4dRfOsSZJ%2BXdzQfg%3D%3D
|
@ -1,132 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 things you should be doing with Emacs)
|
||||
[#]: via: (https://opensource.com/article/20/1/emacs-cheat-sheet)
|
||||
[#]: author: (Seth Kenlon https://opensource.com/users/seth)
|
||||
|
||||
6 件你应该用 Emacs 做的事
|
||||
======
|
||||
下面六件事情你可能都没有意识到可以在 Emacs 下完成。此外,使用我们的新备忘单来充分利用 Emacs 的功能吧。
|
||||
![浏览器上给蓝色编辑器 ][1]
|
||||
|
||||
想象一下使用 Python 的 IDLE 界面来编辑文本。你可以将文件加载到内存中,编辑它们,并保存更改。但是你执行的每个操作都由 Python 函数定义。例如,调用 **upper()** 来让一个单词全部大写,调用 **open** 打开文件,等等。文本文档中的所有内容都是 Python 对象,可以进行相应的操作。从用户的角度来看,这与其他文本编辑器的体验一致。对于 Python 开发人员来说,这是一个丰富的 Python 环境,只需在配置文件中添加几个自定义函数就可以对其进行更改和开发。
|
||||
|
||||
这就是 [Emacs][2] 使用 1958 年的编程语言 [Lisp][3] 所做的事情。在 Emacs 中,运行应用程序的 Lisp 引擎与输入文本之间无缝结合。对 Emacs 来说,一切都是 Lisp 数据,因此一切都可以通过编程进行分析和操作。
|
||||
|
||||
这就形成了一个强大的用户界面 (UI)。但是,如果您是 Emacs 的普通用户,您可能对它的能力知之甚少。下面是你可能没有意识到 Emacs 可以做的六件事。
|
||||
|
||||
## 使用 Tramp mode 进行云端编辑
|
||||
|
||||
Emacs 早在网络流行话之前就实现了透明的网络编辑能力了,而且时至今日,它仍然提供了最流畅的远程编辑体验。Emacs 中的 [Tramp mode][4]( 以前称为 RPC mode) 代表着 “Transparent Remote (file) Access,Multiple Protocol( 透明的远程(文件)访问,多协议)”,这详细描述了它提供的功能:通过最流行的网络协议轻松访问您希望编辑的远程文件。目前最流行、最安全的远程编辑协议是 [OpenSSH][5],因此 Tramp 使用它作为默认的协议。
|
||||
|
||||
在 Emacs 22.1 或更高版本中已经包含了 Tramp,因此要使用 Tramp,只需使用 Tramp 语法打开一个文件。在 Emacs 的 **File** 菜单中,选择 **Open File**。当在 Emacs 窗口底部的小缓冲区中出现提示时,使用以下语法输入文件名:
|
||||
|
||||
```
|
||||
`/ssh:user@example.com:/path/to/file`
|
||||
```
|
||||
|
||||
如果需要交互式登录,Tramp 会提示输入密码。但是,Tramp 直接使用 OpenSSH,所以为了避免交互提示,你可以将主机名、用户名和 SSH 密钥路径添加到您的 `~/.ssh/config` 文件。与 Git 一样,Emacs 首先使用 SSH 配置,只有在出现错误时才会停下来询问更多信息。
|
||||
|
||||
Tramp 非常适合编辑计算机上不存在的文件,它的用户体验与编辑本地文件没有明显的区别。下次,当你 SSH 到服务器启动 Vim 或 Emacs 会话时,请尝试使用 Tramp。
|
||||
|
||||
## 日历
|
||||
|
||||
如果你喜欢文本多过图形界面,那么你一定会很高兴地知道,可以使用 Emacs 以纯文本的方式安排你的日程(或生活)。而且你依然可以在移动设备上使用开放源码的 [Org mode][6] 查看器来获得华丽的通知。
|
||||
|
||||
这个过程需要一些配置来创建一个方便的方式来与移动设备同步你的日程(我使用 Git,但你可以调用蓝牙,KDE Connect,Nextcloud,或其他文件同步工具),此外你必须安装一个 Org mode 查看器(如 [Orgzly][7]) 以及移动设备上的 Git 客户程序。但是,一旦你搭建好了这些基础,该流程就会与您常用的(或正在完善的,如果您是新用户 )Emacs 工作流完美地集成在一起。你可以在 Emacs 中方便地查阅日程,更新日程,并专注于任务上。议程上的变化将会反映在移动设备上,因此即使在 Emacs 不可用的时候,你也可以保持条理性。
|
||||
|
||||
![][8]
|
||||
|
||||
感兴趣了?阅读我的关于[使用 Org mode 和 Git 进行日程安排 ][9] 的逐步指南。
|
||||
|
||||
## 访问终端
|
||||
|
||||
有[许多终端模拟器 ][10] 可用。尽管 Emacs 中的 Elisp 终端仿真器不是最强大的通用仿真器,但是它有两个显著的优点。
|
||||
|
||||
1。**在 Emacs 缓冲区中打开:**我使用 Emacs 的 Elisp shell,因为它在 Emacs 窗口中打开很方便,我经常全屏运行该窗口。这是一个小而重要的优势,只需要输入 `Ctrl+x+o`( 或用 Emacs 符号来表示就是 C-x) 就能使用终端了,而且它还有一个特别好的地方在于当运行漫长的作业时能够一瞥它的状态报告。
|
||||
2。**在没有系统剪贴板的情况下复制和粘贴特别方便:** 无论是因为懒惰不愿将手从键盘移动到鼠标,还是因为在远程控制台运行 Emacs 而无法使用鼠标,在 Emacs 中运行终端有时意味着可以快从 Emacs 缓冲区中传输数据到 Bash。
|
||||
|
||||
|
||||
|
||||
要尝试 Emacs 终端,输入 `Alt+x (用 Emacs 符号表示就是 M-x)`,然后输入 **shell**,然后按 **Return**。
|
||||
|
||||
## 使用 Racket mode
|
||||
|
||||
[Racket][11] 是一种激动人心的新兴 Lisp 方言,拥有动态编程环境 、GUI 工具包和热情的社区。学习 Racket 的默认编辑器是 DrRacket,它的顶部是定义面板,底部是交互面板。使用该设置,用户可以编写影响 Racket 运行时的定义。就像旧的 [Logo Turtle][12] 程序,但是有一个终端而不是仅仅一个海龟。
|
||||
|
||||
![Racket-mode][13]
|
||||
|
||||
由 PLT 提供的 LGPL 示例代码
|
||||
|
||||
基于 Lisp 的 Emacs 为资深 Racket 编程人员提供了一个很好的集成开发环境 (IDE)。它还没有自带 [Racket mode][14],但你可以使用 Emacs 包安装程序安装 Racket 模式和辅助扩展。
|
||||
要安装它,按下 `Alt+X` (用 Emacs 符号表示就是 **M-x**),键入 **package-install**,然后按 **Return**。然后输入要安装的包 (**racet-mode**),按 **Return**。
|
||||
|
||||
使用 **M-x racket-mode** 进入 Racket mode。如果你是 Racket 新手,但不是对 Lisp 或 Emacs 比较熟悉,可以从优秀[图解 Racket][15] 入手。
|
||||
|
||||
## 脚本
|
||||
|
||||
您可能知道,Bash 脚本在自动化和增强 Linux 或 Unix 体验方面很流行。你可能听说过 Python 在这方面也做得很好。但是你知道 Lisp 脚本可以用同样的方式运行吗?有时人们会对 Lisp 到底有多有用感到困惑,因为许多人是通过 Emacs 来了解 Lisp 的,因此有一种潜在的印象,即在 21 世纪运行 Lisp 的惟一方法是在 Emacs 中运行。幸运的是,事实并非如此,Emacs 是一个很好的 IDE,它支持将 Lisp 脚本作为一般的系统可执行文件来运行。
|
||||
|
||||
除了 Elisp 之外,还有两种流行的现代 lisp 可以很容易地用来作为独立脚本运行。
|
||||
|
||||
1。**Racket:** 你可以通过在系统上运行 Racket 来提供运行 Racket 脚本所需的运行时支持,或者你可以使用 **raco exe** 产生一个可执行文件。**raco exe** 命令将代码和运行时支持文件一起打包,以创建可执行文件。然后,**raco distribution** 命令将可执行文件打包成可以在其他机器上工作的发行版。Emacs 有许多 Racket 工具,因此在 Emacs 中创建 Racket 文件既简单又有效。
|
||||
|
||||
2。**GNU Guile:** [GNU Guile][16](“GNU Ubiquitous Intelligent Language for Extensions”--GNU 通用智能语言扩展的缩写)是 [Scheme][17] 编程语言的一个实现,它用于为桌面 、internet、 终端等创建应用程序和游戏。使用 Emacs 中的 Scheme 扩展众多,使用任何一个扩展来编写 Scheme 都很容易。例如,这里有一个用 Guile 编写的 “Hello world” 脚本:
|
||||
```
|
||||
#!/usr/bin/guile - s
|
||||
|
||||
(display "hello world")
|
||||
(newline) [/code] Compile and run it with the **guile** command: [code] $ guile ./hello.scheme
|
||||
;;; compiling /home/seth/./hello.scheme
|
||||
;;; compiled [...]/hello.scheme.go
|
||||
hello world
|
||||
$ guile ./hello.scheme
|
||||
hello world
|
||||
```
|
||||
## Run Elisp without Emacs
|
||||
Emacs 可以作为 Elisp 的运行环境,但是你无需按照传统印象中的必须打开 Emacs 来运行 Elisp。`--script` 选项可以让你使用 Emacs 作为引擎来执行 Elisp 脚本而无需运行 Emacs 图形界面(甚至也无需使用终端界面)。下面这个例子中,`-Q` 选项让 Emacs 忽略 `.emacs` 文件从而避免由于执行 Elisp 脚本时产生延迟(若你的脚本依赖于 Emacs 配置中的内容那么请忽略该选项)。
|
||||
|
||||
```
|
||||
emacs -Q --script ~/path/to/script.el
|
||||
```
|
||||
## 下载 Emacs 备忘录
|
||||
Emacs 许多重要功能都不是只能通过 Emacs 来实现的; Org mode 是 Emacs 扩展也是一种格式标准,流行的 Lisp 方言大多不依赖于具体的实现,我们甚至可以在没有可见或可交互式 Emacs 实例的情况下编写和运行 Elisp。然后若你对为什么模糊代码和数据之间的界限能够引发创新和效率感到好奇的话,那么 Emacs 是一个很棒的工具。
|
||||
|
||||
幸运的是,现在是 21 世纪,Emacs 有了带有传统菜单的图形界面以及大量的文档,因此学习曲线不再像以前那样。然而,要最大化 Emacs 对你的好处,你需要学习它的快捷键。由于 Emacs 支持的每个任务都是一个 Elisp 函数,Emacs 中的任何功能都可以对应一个快捷键,因此要描述所有这些快捷键是不可能完成的任务。你只要学习使用频率 10 倍于不常用功能的那些快捷键即可。
|
||||
|
||||
我们汇聚了最常用的 Emacs 快捷键成为一份 Emacs 备忘录以便你查询。将它挂在屏幕附近或办公室墙上,把它作为鼠标垫也行。让它触手可及经常翻阅一下。每次翻两下可以让你获得十倍的学习效率。而且一旦开始编写自己的函数,你一定不会后悔获取了这个免费的备忘录副本的!
|
||||
|
||||
[这里下载 Emacs 备忘录 ](https://opensource.com/downloads/emacs-cheat-sheet)
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
via: https://opensource.com/article/20/1/emacs-cheat-sheet
|
||||
|
||||
作者:[Seth Kenlon][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/seth
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_blue_text_editor_web.png
|
||||
[2]: https://www.gnu.org/software/emacs/
|
||||
[3]: https://en.wikipedia.org/wiki/Lisp_(programming_language)
|
||||
[4]: https://www.gnu.org/software/tramp/
|
||||
[5]: https://www.openssh.com/
|
||||
[6]: https://orgmode.org/
|
||||
[7]: https://f-droid.org/en/packages/com.orgzly/
|
||||
[8]: https://opensource.com/sites/default/files/uploads/orgzly-agenda.jpg
|
||||
[9]: https://opensource.com/article/19/4/calendar-git
|
||||
[10]: https://opensource.com/article/19/12/favorite-terminal-emulator
|
||||
[11]: http://racket-lang.org/
|
||||
[12]: https://en.wikipedia.org/wiki/Logo_(programming_language)#Turtle_and_graphics
|
||||
[13]: https://opensource.com/sites/default/files/racket-mode.jpg
|
||||
[14]: https://www.racket-mode.com/
|
||||
[15]: https://docs.racket-lang.org/quick/index.html
|
||||
[16]: https://www.gnu.org/software/guile/
|
||||
[17]: https://en.wikipedia.org/wiki/Scheme_(programming_language)
|