Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-03-21 02:10:11 +08:00)

Commit bb9a7f9808: Merge remote-tracking branch 'LCTT/master'
@ -1,13 +1,13 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11916-1.html)
|
||||
[#]: subject: (What is WireGuard? Why Linux Users Going Crazy Over it?)
|
||||
[#]: via: (https://itsfoss.com/wireguard/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
什么是 WireGuard?为什么 Linux 用户对它疯狂?
|
||||
什么是 WireGuard?为什么 Linux 用户为它疯狂?
|
||||
======
|
||||
|
||||
从普通的 Linux 用户到 Linux 创建者 [Linus Torvalds][1],每个人都对 WireGuard 很感兴趣。什么是 WireGuard,它为何如此特别?
|
||||
@ -18,7 +18,6 @@
|
||||
|
||||
[WireGuard][3] 是一个易于配置、快速且安全的开源 [VPN][4],它利用了最新的加密技术。目的是提供一种更快、更简单、更精简的通用 VPN,它可以轻松地在树莓派这类低端设备到高端服务器上部署。
|
||||
|
||||
|
||||
[IPsec][5] 和 OpenVPN 等大多数其他解决方案是几十年前开发的。安全研究人员和内核开发人员 Jason Donenfeld 意识到它们速度慢且难以正确配置和管理。
|
||||
|
||||
这让他创建了一个新的开源 VPN 协议和解决方案,它更加快速、安全、易于部署和管理。
|
||||
@ -31,31 +30,31 @@ WireGuard 最初是为 Linux 开发的,但现在可用于 Windows、macOS、BS
|
||||
|
||||
除了可以跨平台之外,WireGuard 的最大优点之一就是易于部署。配置和部署 WireGuard 就像配置和使用 SSH 一样容易。
|
||||
|
||||
看看 [WireGuard 设置指南][7]。安装 WireGuard、生成公钥和私钥(像 SSH 一样),设置防火墙规则并启动服务。现在将它和 [OpenVPN 设置指南][8]进行比较。它有太多要做的了。
|
||||
看看 [WireGuard 设置指南][7]。安装 WireGuard、生成公钥和私钥(像 SSH 一样),设置防火墙规则并启动服务。现在将它和 [OpenVPN 设置指南][8]进行比较——有太多要做的了。
|
||||
|
||||
WireGuard 的另一个好处是它有一个仅 4000 行代码的精简代码库。将它与 [OpenVPN][9](另一个流行的开源 VPN)的 100,000 行代码相比。显然,调试W ireGuard 更加容易。
|
||||
WireGuard 的另一个好处是它有一个仅 4000 行代码的精简代码库。将它与 [OpenVPN][9](另一个流行的开源 VPN)的 100,000 行代码相比。显然,调试 WireGuard 更加容易。
|
||||
|
||||
不要小看它的简单。WireGuard 支持所有最新的加密技术,例如 [Noise协议框架][10]、[Curve25519][11]、[ChaCha20][12]、[Poly1305][13]、[BLAKE2][14]、[SipHash24][15]、[HKDF][16] 和安全受信任结构。
|
||||
不要因其简单而小看它。WireGuard 支持所有最新的加密技术,例如 [Noise 协议框架][10]、[Curve25519][11]、[ChaCha20][12]、[Poly1305][13]、[BLAKE2][14]、[SipHash24][15]、[HKDF][16] 和安全受信任结构。
|
||||
|
||||
由于 WireGuard 运行在[内核空间][17],因此可以高速提供安全的网络。
|
||||
|
||||
这些是 WireGuard 越来越受欢迎的一些原因。Linux 创造者 Linus Torvalds 非常喜欢 WireGuard,以至于将其合并到 [Linux Kernel 5.6][18] 中:
|
||||
|
||||
> 我能否再次声明对它的爱,并希望它能很快合并?也许代码不是完美的,但我已经忽略,与 OpenVPN 和 IPSec 的恐怖相比,这是一件艺术品。
|
||||
> 我能否再次声明对它的爱,并希望它能很快合并?也许代码不是完美的,但我不在乎,与 OpenVPN 和 IPSec 的恐怖相比,这是一件艺术品。
|
||||
>
|
||||
> Linus Torvalds
|
||||
|
||||
### 如果 WireGuard 已经可用,那么将其包含在 Linux 内核中有什么大惊小怪的?
|
||||
|
||||
这可能会让新的 Linux 用户感到困惑。你知道可以在 Linux 上安装和配置 WireGuard VPN 服务器,但同时会看到 Linux Kernel 5.6 将包含 WireGuard 的消息。让我向您解释。
|
||||
这可能会让新的 Linux 用户感到困惑。你知道可以在 Linux 上安装和配置 WireGuard VPN 服务器,但同时也会看到 Linux Kernel 5.6 将包含 WireGuard 的消息。让我向您解释。
|
||||
|
||||
目前,你可以将 WireGuard 作为[内核模块][19]安装在 Linux 中。诸如 VLC、GIMP 等常规应用安装在 Linux 内核之上(在 [用户空间][20]中),而不是内部。
|
||||
目前,你可以将 WireGuard 作为[内核模块][19]安装在 Linux 中。而诸如 VLC、GIMP 等常规应用安装在 Linux 内核之上(在 [用户空间][20]中),而不是内部。
|
||||
|
||||
当将 WireGuard 安装为内核模块时,基本上是自行修改 Linux 内核并向其添加代码。从 5.6 内核开始,你无需手动添加内核模块。默认情况下它将包含在内核中。
|
||||
当将 WireGuard 安装为内核模块时,基本上需要你自行修改 Linux 内核并向其添加代码。从 5.6 内核开始,你无需手动添加内核模块。默认情况下它将包含在内核中。
|
||||
|
||||
在 5.6 内核中包含 WireGuard 很有可能[扩展 WireGuard 的采用,从而改变当前的 VPN 场景][21]。
|
||||
|
||||
**总结**
|
||||
### 总结
|
||||
|
||||
WireGuard 之所以受欢迎是有充分理由的。诸如 [Mullvad VPN][23] 之类的一些流行的[关注隐私的 VPN][22] 已经在使用 WireGuard,并且在不久的将来,采用率可能还会增长。
|
||||
|
||||
@ -68,7 +67,7 @@ via: https://itsfoss.com/wireguard/
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
@ -96,4 +95,4 @@ via: https://itsfoss.com/wireguard/
|
||||
[20]: http://www.linfo.org/user_space.html
|
||||
[21]: https://www.zdnet.com/article/vpns-will-change-forever-with-the-arrival-of-wireguard-into-linux/
|
||||
[22]: https://itsfoss.com/best-vpn-linux/
|
||||
[23]: https://mullvad.net/en/
|
||||
[23]: https://mullvad.net/en/
|
@ -1,32 +1,34 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: (wxy)
|
||||
[#]: publisher: (wxy)
|
||||
[#]: url: (https://linux.cn/article-11917-1.html)
|
||||
[#]: subject: (Dino is a Modern Looking Open Source XMPP Client)
|
||||
[#]: via: (https://itsfoss.com/dino-xmpp-client/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Dino 是一个有着现代外观的开源 XMPP 客户端
|
||||
Dino:一个有着现代外观的开源 XMPP 客户端
|
||||
======
|
||||
|
||||
_**简介:Dino 是一个相对较新的开源 XMPP 客户端,它尝试提供良好的用户体验,同时鼓励注重隐私的用户使用 XMPP 发送消息。**_
|
||||
> Dino 是一个相对较新的开源 XMPP 客户端,它试图提供良好的用户体验,鼓励注重隐私的用户使用 XMPP 发送消息。
|
||||
|
||||

|
||||
|
||||
### Dino:一个开源 XMPP 客户端
|
||||
|
||||
![][1]
|
||||
|
||||
[XMPP][2] (可扩展通讯和表示协议) 是一个去中心化的网络模型,可促进即时消息传递和协作。去中心化意味着没有中央服务器可以访问你的数据。通信直接点对点。
|
||||
[XMPP][2](<ruby>可扩展通讯和表示协议<rt>eXtensible Messaging and Presence Protocol</rt></ruby>) 是一个去中心化的网络模型,可促进即时消息传递和协作。去中心化意味着没有中央服务器可以访问你的数据。通信直接点对点。
|
||||
|
||||
我们中的一些人可能会称它为"老派"技术,可能是因为 XMPP 客户端通常有着非常糟糕的用户体验,或者仅仅是因为它需要时间来适应(或设置它)。
|
||||
我们中的一些人可能会称它为“老派”技术,可能是因为 XMPP 客户端通常用户体验非常糟糕,或者仅仅是因为它需要时间来适应(或设置它)。
|
||||
|
||||
这时候 [Dino[3] 作为现代 XMPP 客户端出现了,在不损害你的隐私的情况下提供干净清爽的用户体验。
|
||||
这时候 [Dino][3] 作为现代 XMPP 客户端出现了,在不损害你的隐私的情况下提供干净清爽的用户体验。
|
||||
|
||||
### 用户体验
|
||||
|
||||
![][4]
|
||||
|
||||
Dino 有试图改善 XMPP 客户端的用户体验,但值得注意的是,它的外观和感受将在一定程度上取决于你的 Linux 发行版。你的图标主题或 Gnome 主题会让你的个人体验更好或更糟。
|
||||
Dino 试图改善 XMPP 客户端的用户体验,但值得注意的是,它的外观和感受将在一定程度上取决于你的 Linux 发行版。你的图标主题或 Gnome 主题会让你的个人体验更好或更糟。
|
||||
|
||||
从技术上讲,它的用户界面非常简单,易于使用。所以,我建议你看下 Ubuntu 中的[最佳图标主题][5]和 [GNOME 主题][6]来调整 Dino 的外观。
|
||||
|
||||
@ -34,7 +36,7 @@ Dino 有试图改善 XMPP 客户端的用户体验,但值得注意的是,它
|
||||
|
||||
![Dino Screenshot][7]
|
||||
|
||||
你可以期望将 Dino 用作 Slack、[Signal][8] 或 [Wire][9] 的替代产品,来用于你的业务或个人用途。
|
||||
你可以将 Dino 用作 Slack、[Signal][8] 或 [Wire][9] 的替代产品,来用于你的业务或个人用途。
|
||||
|
||||
它提供了消息应用所需的所有基本特性,让我们看下你可以从中得到的:
|
||||
|
||||
@ -47,14 +49,10 @@ Dino 有试图改善 XMPP 客户端的用户体验,但值得注意的是,它
|
||||
* 支持 [OpenPGP][10] 和 [OMEMO][11] 加密
|
||||
* 轻量级原生桌面应用
|
||||
|
||||
|
||||
|
||||
### 在 Linux 上安装 Dino
|
||||
|
||||
你可能会发现它列在你的软件中心中,也可能未找到。Dino 为基于 Debian(deb)和 Fedora(rpm)的发行版提供了可用的二进制文件。
|
||||
|
||||
**对于 Ubuntu:**
|
||||
|
||||
Dino 在 Ubuntu 的 universe 仓库中,你可以使用以下命令安装它:
|
||||
|
||||
```
|
||||
@ -63,15 +61,15 @@ sudo apt install dino-im
|
||||
|
||||
类似地,你可以在 [GitHub 分发包页面][12]上找到其他 Linux 发行版的包。
|
||||
|
||||
如果你想要获取最新的,你可以在 [OpenSUSE 的软件页面][13]找到 Dino 的 **.deb** 和 .**rpm** (每日构建版)安装在 Linux 中。
|
||||
如果你想要获取最新的,你可以在 [OpenSUSE 的软件页面][13]找到 Dino 的 **.deb** 和 .**rpm** (每日构建版)安装在 Linux 中。
|
||||
|
||||
在任何一种情况下,前往它的 [Github 页面][14]或点击下面的链接访问官方网站。
|
||||
|
||||
[下载 Dino][3]
|
||||
- [下载 Dino][3]
|
||||
|
||||
**总结**
|
||||
### 总结
|
||||
|
||||
它工作良好没有出过任何问题(在我编写这篇文章时快速测试过它)。我将尝试探索更多,并希望能涵盖更多有关 XMPP 的文章来鼓励用户使用 XMPP 的客户端和服务器用于通信。
|
||||
在我编写这篇文章时快速测试过它,它工作良好,没有出过问题。我将尝试探索更多,并希望能涵盖更多有关 XMPP 的文章来鼓励用户使用 XMPP 的客户端和服务器用于通信。
|
||||
|
||||
你觉得 Dino 怎么样?你会推荐另一个可能好于 Dino 的开源 XMPP 客户端吗?在下面的评论中让我知道你的想法。
|
||||
|
||||
@ -82,7 +80,7 @@ via: https://itsfoss.com/dino-xmpp-client/
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
@ -0,0 +1,64 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Google Cloud moves to aid mainframe migration)
|
||||
[#]: via: (https://www.networkworld.com/article/3528451/google-cloud-moves-to-aid-mainframe-migration.html)
|
||||
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)
|
||||
|
||||
Google Cloud moves to aid mainframe migration
|
||||
======
|
||||
Google bought Cornerstone Technology, whose technology facilitates moving mainframe applications to the cloud.
|
||||
Thinkstock
|
||||
|
||||
Google Cloud this week bought Cornerstone Technology, a mainframe cloud-migration services firm, with an eye toward helping Big Iron customers move workloads to the private and public cloud.
|
||||
|
||||
Google said the Cornerstone technology – found in its [G4 platform][1] – will shape the foundation of its future mainframe-to-Google Cloud offerings and help mainframe customers modernize applications and infrastructure.
|
||||
|
||||
[[Get regularly scheduled insights by signing up for Network World newsletters.]][2]
|
||||
|
||||
“Through the use of automated processes, Cornerstone’s tools can break down your Cobol, PL/1, or Assembler programs into services and then make them cloud native, such as within a managed, containerized environment,” wrote Howard Weale, Google’s director, Transformation Practice, in a [blog][3] about the buy.
|
||||
|
||||
“As the industry increasingly builds applications as a set of services, many customers want to break their mainframe monolith programs into either Java monoliths or Java microservices,” Weale stated.
|
||||
|
||||
Google Cloud’s Cornerstone service will:
|
||||
|
||||
* Develop a migration roadmap where Google will assess a customer’s mainframe environment and create a roadmap to a modern services architecture.
|
||||
* Convert any language to any other language and any database to any other database to prepare applications for modern environments.
|
||||
* Automate the migration of workloads to the Google Cloud.
|
||||
|
||||
|
||||
|
||||
“Easy mainframe migration will go a long way as Google attracts large enterprises to its cloud,” Matt Eastwood, senior vice president for Enterprise Infrastructure, Cloud, Developers and Alliances at IDC, said in a statement.
|
||||
|
||||
The Cornerstone move is also part of Google’s effort to stay competitive in the face of mainframe-migration offerings from [Amazon Web Services][4], [IBM/RedHat][5] and [Microsoft][6].
|
||||
|
||||
While the idea of moving legacy applications off the mainframe might indeed be beneficial to a business, Gartner last year warned that such decisions should be taken very deliberately.
|
||||
|
||||
“The value gained by moving applications from the traditional enterprise platform onto the next ‘bright, shiny thing’ rarely provides an improvement in the business process or the company’s bottom line. A great deal of analysis must be performed and each cost accounted for,” Gartner stated in a report entitled [_Considering Leaving Legacy IBM Platforms? Beware, as Cost Savings May Disappoint, While Risking Quality_][7]. “Legacy platforms may seem old, outdated and due for replacement. Yet IBM and other vendors are continually integrating open-source tools to appeal to more developers while updating the hardware. Application leaders should reassess the capabilities and quality of these platforms before leaving them.”
|
||||
|
||||
Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3528451/google-cloud-moves-to-aid-mainframe-migration.html
|
||||
|
||||
作者:[Michael Cooney][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Michael-Cooney/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.cornerstone.nl/solutions/modernization
|
||||
[2]: https://www.networkworld.com/newsletters/signup.html
|
||||
[3]: https://cloud.google.com/blog/topics/inside-google-cloud/helping-customers-migrate-their-mainframe-workloads-to-google-cloud
|
||||
[4]: https://aws.amazon.com/blogs/enterprise-strategy/yes-you-should-modernize-your-mainframe-with-the-cloud/
|
||||
[5]: https://www.networkworld.com/article/3438542/ibm-z15-mainframe-amps-up-cloud-security-features.html
|
||||
[6]: https://azure.microsoft.com/en-us/migration/mainframe/
|
||||
[7]: https://www.gartner.com/doc/reprints?id=1-6L80XQJ&ct=190429&st=sb
|
||||
[8]: https://www.facebook.com/NetworkWorld/
|
||||
[9]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,57 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Japanese firm announces potential 80TB hard drives)
|
||||
[#]: via: (https://www.networkworld.com/article/3528211/japanese-firm-announces-potential-80tb-hard-drives.html)
|
||||
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)
|
||||
|
||||
Japanese firm announces potential 80TB hard drives
|
||||
======
|
||||
Using some very fancy physics for stacking electrons, Showa Denko K.K. plans to quadruple the top end of proposed capacity.
|
||||
[geralt][1] [(CC0)][2]
|
||||
|
||||
Hard drive makers are staving off obsolescence in the face of solid-state drives (SSDs) by offering capacities that are simply not feasible in an SSD. Seagate and Western Digital are both pushing to release 20TB hard disks in the next few years. A 20TB SSD might be doable but also cost more than a new car.
|
||||
|
||||
But Showa Denko K.K. of Japan has gone a step further with the announcement of its next generation of heat-assisted magnetic recording (HAMR) media for hard drives. The platters use all-new magnetic thin films to maximize their data density, with the goal of eventually enabling 70TB to 80TB hard drives in a 3.5-inch form factor.
|
||||
|
||||
[[Get regularly scheduled insights by signing up for Network World newsletters.]][3]
|
||||
|
||||
Showa Denko is the world’s largest independent maker of platters for hard drives, selling them to basically anyone left making hard drives not named Seagate and Western Digital. Those two make their own platters and are working on their own next-generation drives for release in the coming years.
|
||||
|
||||
While similar in concept, Seagate and Western Digital have chosen different solutions to the same problem. HAMR, championed by Seagate and Showa, works by temporarily heating the disk material during the write process so data can be written to a much smaller space, thus increasing capacity.
|
||||
|
||||
Western Digital supports a different technology called microwave-assisted magnetic recording (MAMR). It operates under a similar concept as HAMR but uses microwaves instead of heat to alter the drive platter. Seagate hopes to get to 48TB by 2023, while Western Digital is planning on releasing 18TB and 20TB drives this year.
|
||||
|
||||
Heat is never good for a piece of electrical equipment, and Showa Denko’s platters for HAMR HDDs are made of a special composite alloy to tolerate temperature and reduce wear, not to mention increase density. A standard hard disk has a density of about 1.1TB per square inch. Showa’s drive platters have a density of 5-6TB per square inch.
|
||||
|
||||
The question is when they will be for sale, and who will use them. Fellow Japanese electronics giant Toshiba is expected to ship drives with Showa platters later this year. Seagate will be the first American company to adopt HAMR, with 20TB drives scheduled to ship in late 2020.
|
||||
|
||||
|
||||
|
||||
Know what’s scary? That still may not be enough. IDC predicts that our global datasphere – the total of all of the digital data we create, consume, or capture – will grow from a total of approximately 40 zettabytes of data in 2019 to 175 zettabytes total by 2025.
|
||||
|
||||
So even with the growth in hard-drive density, the growth in the global data pool – everything from Oracle databases to Instagram photos – may still mean deploying thousands upon thousands of hard drives across data centers.
|
||||
|
||||
Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3528211/japanese-firm-announces-potential-80tb-hard-drives.html
|
||||
|
||||
作者:[Andy Patrizio][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Andy-Patrizio/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://pixabay.com/en/data-data-loss-missing-data-process-2764823/
|
||||
[2]: https://creativecommons.org/publicdomain/zero/1.0/
|
||||
[3]: https://www.networkworld.com/newsletters/signup.html
|
||||
[4]: https://www.networkworld.com/article/3440100/take-the-intelligent-route-with-consumption-based-storage.html?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE21620&utm_content=sidebar ( Take the Intelligent Route with Consumption-Based Storage)
|
||||
[5]: https://www.facebook.com/NetworkWorld/
|
||||
[6]: https://www.linkedin.com/company/network-world
|
@ -1,234 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Fun and Games in Emacs)
|
||||
[#]: via: (https://www.masteringemacs.org/article/fun-games-in-emacs)
|
||||
[#]: author: (Mickey Petersen https://www.masteringemacs.org/about)
|
||||
|
||||
Fun and Games in Emacs
|
||||
======
|
||||
|
||||
It’s yet another Monday and you’re hard at work on those [TPS reports][1] for your boss, Lumbergh. Why not play Emacs’s Zork-like text adventure game to take your mind off the tedium of work?
|
||||
|
||||
But seriously, yes, there are both games and quirky playthings in Emacs. Some you have probably heard of or played before. The only thing they have in common is that most of them were added a long time ago: some are rather odd inclusions (as you’ll see below) and others were clearly written by bored employees or graduate students. What they all have in common is a whimsy and a casualness that I rarely see in Emacs today. Emacs is Serious Business now in a way that it probably wasn’t back in the 1980s when some of these games were written.
|
||||
|
||||
### Tower of Hanoi
|
||||
|
||||
The [Tower of Hanoi][2] is an ancient mathematical puzzle game and one that is probably familiar to some of us as it is often used in Computer Science as a teaching aid because of its recursive and iterative solutions.
|
||||
|
||||

|
||||
|
||||
In Emacs there are three commands you can run to trigger the Tower of Hanoi puzzle: M-x hanoi with a default of 3 discs; M-x hanoi-unix and M-x hanoi-unix-64, which use the Unix timestamp, making a move each second in line with the clock, with the latter pretending it uses a 64-bit clock.
|
||||
|
||||
The Tower of Hanoi implementation in Emacs dates from the mid 1980s — an awfully long time ago indeed. There are a few Customize options (M-x customize-group RET hanoi RET) such as enabling colorized discs. And when you exit the Hanoi buffer or type a character you are treated to a sarcastic goodbye message (see above).
|
||||
|
||||
### 5x5
|
||||
|
||||

|
||||
The 5x5 game is a logic puzzle: you are given a 5x5 grid with a central cross already filled-in; your goal is to fill all the cells by toggling them on and off in the right order to win. It’s not as easy as it sounds!
|
||||
|
||||
To play, type M-x 5x5, and with an optional digit argument you can change the size of the grid. What makes this game interesting is its rather complex ability to suggest the next move and attempt to solve the game grid. It uses Emacs’s very own, and very cool, symbolic RPN calculator M-x calc (and in [Fun with Emacs Calc][3] I use it to solve a simple problem.)
|
||||
|
||||
So what I like about this game is that it comes with a very complex solver – really, you should read the source code with M-x find-library RET 5x5 – and a “cracker” that attempts to brute force solutions to the game.
|
||||
|
||||
Try creating a bigger game grid, such as M-10 M-x 5x5, and then run one of the crack commands below. The crackers will attempt to iterate their way to the best solution. This runs in real time and is fun to watch:
|
||||
|
||||
|
||||
|
||||
`M-x 5x5-crack-mutating-best`
|
||||
Attempt to crack 5x5 by mutating the best solution.
|
||||
|
||||
`M-x 5x5-crack-mutating-current`
|
||||
Attempt to crack 5x5 by mutating the current solution.
|
||||
|
||||
`M-x 5x5-crack-randomly`
|
||||
Attempt to crack 5x5 using random solutions.
|
||||
|
||||
`M-x 5x5-crack-xor-mutate`
|
||||
Attempt to crack 5x5 by xoring the current and best solution.
|
||||
|
||||
### Text Animation
|
||||
|
||||
You can display a fancy birthday present animation by running M-x animate-birthday-present and giving it your name. It looks rather cool!
|
||||
|
||||

|
||||
|
||||
The animate package is also used by the M-x butterfly command, added to Emacs as an homage to the [XKCD][4] strip above. Of course the Emacs command in the strip is teeechnically not valid but the humor more than makes up for it.
|
||||
|
||||
### Blackbox
|
||||
|
||||
The objective of this game I am going to quote literally:
|
||||
|
||||
> The object of the game is to find four hidden balls by shooting rays into the black box. There are four possibilities: 1) the ray will pass thru the box undisturbed, 2) it will hit a ball and be absorbed, 3) it will be deflected and exit the box, or 4) be deflected immediately, not even being allowed entry into the box.
|
||||
|
||||
So, it’s a bit like the [Battleship][5] most of us played as kids but… for people with advanced degrees in physics?
|
||||
|
||||
It’s another game that was added back in the 1980s. I suggest you read the extensive documentation on how to play by typing C-h f blackbox.
|
||||
|
||||
### Bubbles
|
||||
|
||||

|
||||
|
||||
The M-x bubbles game is rather simple: you must clear out as many “bubbles” as you can in as few moves as possible. When you remove bubbles the other bubbles drop and stick together. It’s a fun game that, as an added bonus, comes with graphics if you use Emacs’s GUI. It also works with your mouse.
|
||||
|
||||
You can configure the difficulty of the game by calling M-x bubbles-set-game-<difficulty>, where <difficulty> is one of: easy, medium, difficult, hard, or userdefined. Furthermore, you can alter the graphics, grid size and colors using Customize: M-x customize-group bubbles.
|
||||
|
||||
For its simplicity and fun factor, this ranks as one of my favorite games in Emacs.
|
||||
|
||||
### Fortune & Cookie
|
||||
|
||||
I like the fortune command. Snarky, unhelpful and often sarcastic “advice” mixed in with literature and riddles brightens up my day whenever I launch a new shell.
|
||||
|
||||
Rather confusingly there are two packages in Emacs that do more-or-less the same thing: fortune and cookie1. The former is geared towards putting fortune cookie messages in email signatures and the latter is just a simple reader for the fortune format.
|
||||
|
||||
Anyway, to use Emacs’s cookie1 package you must first tell it where to find the file by customizing the variable cookie-file with M-x customize-option RET cookie-file RET.
|
||||
|
||||
If you’re on Ubuntu you will have to install the fortune package first. The files are found in the /usr/share/games/fortunes/ directory.
|
||||
|
||||
You can then call M-x cookie or, should you want to do this, find all matching cookies with M-x cookie-apropos.
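If you would rather set this up from your init file than through Customize, a minimal sketch looks like the following (the fortune file path is an assumption based on the Ubuntu package mentioned above, and the progress messages are just examples):

```
;; Point cookie1 at a fortune-format file and pull a random entry from it.
(setq cookie-file "/usr/share/games/fortunes/fortunes")
(cookie cookie-file "Rummaging through the cookie jar..." "Got one!")
```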
|
||||
|
||||
### Decipher
|
||||
|
||||
This package perfectly captures the utilitarian nature of Emacs: it’s a package to help you break simple substitution ciphers (like cryptogram puzzles) using a helpful user interface. You just know that – more than twenty years ago – someone really had a dire need to break a lot of basic ciphers. It’s little things like this module that make me overjoyed to use Emacs: a module of scant importance to all but a few people and, yet, should you need it – there it is.
|
||||
|
||||
So how do you use it then? Well, let’s consider the “rot13” cipher: rotating characters by 13 places in a 26-character alphabet. It’s an easy thing to try out in Emacs with M-x ielm, Emacs’s REPL for [Evaluating Elisp][6]:
|
||||
|
||||
```
|
||||
*** Welcome to IELM *** Type (describe-mode) for help.
|
||||
ELISP> (rot13 "Hello, World")
|
||||
"Uryyb, Jbeyq"
|
||||
ELISP> (rot13 "Uryyb, Jbeyq")
|
||||
"Hello, World"
|
||||
ELISP>
|
||||
```
|
||||
|
||||
So how can the decipher module help us here? Well, create a new buffer test-cipher and type in your cipher text (in my case Uryyb, Jbeyq)
|
||||
|
||||

|
||||
|
||||
You’re now presented with a rather complex interface. You can now place the point on any of the characters in the ciphertext on the purple line and guess what the character might be: Emacs will update the rest of the plaintext guess with your choices and tell you how the characters in the alphabet have been allocated thus far.
|
||||
|
||||
You can then start winnowing down the options using various helper commands to help infer which cipher characters might correspond to which plaintext character:
|
||||
|
||||
|
||||
|
||||
`D`
|
||||
Shows a list of digrams (two-character combinations from the cipher) and their frequency
|
||||
|
||||
`F`
|
||||
Shows the frequency of each ciphertext letter
|
||||
|
||||
`N`
|
||||
Shows adjacency of characters. I am not entirely sure how this works.
|
||||
|
||||
`M` and `R`
|
||||
Save and restore a checkpoint, allowing you to branch your work and explore different ways of cracking the cipher.
|
||||
|
||||
All in all, for such an esoteric task, this package is rather impressive! If you regularly solve cryptograms maybe this package can help?
|
||||
|
||||
### Doctor
|
||||
|
||||

|
||||
|
||||
Ah, the Emacs doctor. Based on the original [ELIZA][7] the “Doctor” tries to psychoanalyze what you say and attempts to repeat the question back to you. Rather fun, for a few minutes, and one of the more famous Emacs oddities. You can run it with M-x doctor.
|
||||
|
||||
### Dunnet
|
||||
|
||||
Emacs’s very own Zork-like text adventure game. To play it, type M-x dunnet. It’s rather good, if short, but it’s another rather famous Emacs game that too few have actually played through to the end.
|
||||
|
||||
If you find yourself with time to kill between your TPS reports then it’s a great game with a built-in “boss screen” as it’s text-only.
|
||||
|
||||
Oh, and, don’t try to eat the CPU card :)
|
||||
|
||||
### Gomoku
|
||||
|
||||

|
||||
|
||||
Another game written in the 1980s. You have to connect 5 squares, tic-tac-toe style. You can play against Emacs with M-x gomoku. The game also supports the mouse, which is rather handy. You can customize the group gomoku to adjust the size of the grid.
|
||||
|
||||
### Game of Life
|
||||
|
||||
[Conway’s Game of Life][8] is a famous example of cellular automata. The Emacs version comes with a handful of starting patterns that you can (programmatically with elisp) alter by adjusting the life-patterns variable.
|
||||
|
||||
You can trigger a game of life with M-x life. The fact that the whole thing, display code, comments and all, comes in at less than 300 characters is also rather impressive.
|
||||
|
||||
### Pong, Snake and Tetris
|
||||
|
||||

|
||||
|
||||
These classic games are all implemented using the Emacs package gamegrid, a generic framework for building grid-based games like Tetris and Snake. The great thing about the gamegrid package is its compatibility with both graphical and terminal Emacs: if you run Emacs in a GUI you get fancy graphics; if you don’t, you get simple ASCII art.
|
||||
|
||||
You can run the games by typing M-x pong, M-x snake, or M-x tetris.
|
||||
|
||||
The Tetris game in particular is rather faithfully implemented, having both gradual speed increase and the ability to slide blocks into place. And given you have the code to it, you can finally remove that annoying Z-shaped piece no one likes!
|
||||
|
||||
### Solitaire
|
||||
|
||||

|
||||
|
||||
This is not the card game, unfortunately. But a peg-based game where you have to end up with just one stone on the board, by taking a stone (the o) and “jumping” over an adjacent stone into the hole (the .), removing the stone you jumped over in the process. Rinse and repeat until the board is empty.
|
||||
|
||||
There is a handy solver built in called M-x solitaire-solve if you get stuck.
|
||||
|
||||
### Zone
|
||||
|
||||
Another of my favorites. This time it’s a screensaver – or rather, a series of screensavers.
|
||||
|
||||
Type M-x zone and watch what happens to your screen!
|
||||
|
||||
You can configure a screensaver idle time by running M-x zone-when-idle (or calling it from elisp) with an idle time in seconds. You can turn it off with M-x zone-leave-me-alone.
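In an init file that might look something like this (the 300-second value is only an example):

```
;; Start zoning out after five minutes of idle time...
(zone-when-idle 300)
;; ...and run M-x zone-leave-me-alone (or evaluate it) to turn it back off.
```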
|
||||
|
||||
This one’s guaranteed to make your coworkers freak out if it kicks off while they are looking.
|
||||
|
||||
### Multiplication Puzzle
|
||||
|
||||

|
||||
|
||||
This is another brain-twisting puzzle game. When you run M-x mpuz you are given a multiplication puzzle where you have to replace the letters with numbers and ensure the numbers add (multiply?) up.
|
||||
|
||||
You can run M-x mpuz-show-solution to solve the puzzle if you get stuck.
|
||||
|
||||
### Miscellaneous
|
||||
|
||||
There are more, but they’re not the most useful or interesting:
|
||||
|
||||
* You can translate a region into morse code with M-x morse-region and M-x unmorse-region.
|
||||
|
||||
* The Dissociated Press is a very simple command that applies something like a random walk markov-chain generator to a body of text in a buffer and generates nonsensical text from the source body. Try it with M-x dissociated-press.
|
||||
|
||||
* The Gamegrid package is a generic framework for building grid-based games. So far only Tetris, Pong and Snake use it. It’s called gamegrid.
|
||||
|
||||
* The gametree package is a complex way of notating and tracking chess games played via email.
|
||||
|
||||
* The M-x spook command inserts random words (usually into emails) designed to confuse/overload the “NSA trunk trawler” – and keep in mind this module dates from the 1980s and 1990s – with various words the spooks are supposedly listening for. Of course, even ten years ago that would’ve seemed awfully paranoid and quaint but not so much any more…
|
||||
|
||||
### Conclusion
|
||||
|
||||
I love the games and playthings that ship with Emacs. A lot of them date from, well, let’s just call a different era: an era where whimsy was allowed or perhaps even encouraged. Some are known classics (like Tetris and Tower of Hanoi) and some of the others are fun variations on classics (like blackbox) — and yet I love that they ship with Emacs after all these years. I wonder if any of these would make it into Emacs’s codebase today; well, they probably wouldn’t — they’d be relegated to the package manager where, in a clean and sterile world, they no doubt belong.
|
||||
|
||||
There’s a mandate in Emacs to move things not essential to the Emacs experience to ELPA, the package manager. I mean, as a developer myself, that does make sense, but… surely for every package removed and exiled to ELPA we chip away the essence of what defines Emacs?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
via: https://www.masteringemacs.org/article/fun-games-in-emacs
|
||||
|
||||
作者:[Mickey Petersen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.masteringemacs.org/about
|
||||
[b]:https://github.com/lujun9972
|
||||
[1]:https://en.wikipedia.org/wiki/Office_Space
|
||||
[2]:https://en.wikipedia.org/wiki/Tower_of_Hanoi
|
||||
[3]:https://www.masteringemacs.org/article/fun-emacs-calc
|
||||
[4]:http://www.xkcd.com
|
||||
[5]:https://en.wikipedia.org/wiki/Battleship_(game)
|
||||
[6]:https://www.masteringemacs.org/article/evaluating-elisp-emacs
|
||||
[7]:https://en.wikipedia.org/wiki/ELIZA
|
||||
[8]:https://en.wikipedia.org/wiki/Conway's_Game_of_Life
|
@ -0,0 +1,157 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Some Advice for How to Make Emacs Tetris Harder)
|
||||
[#]: via: (https://nickdrozd.github.io/2019/01/14/tetris.html)
|
||||
[#]: author: (nickdrozd https://nickdrozd.github.io)
|
||||
|
||||
Some Advice for How to Make Emacs Tetris Harder
|
||||
======
|
||||
|
||||
Did you know that Emacs comes bundled with an implementation of Tetris? Just hit M-x tetris and there it is:
|
||||
|
||||

|
||||
|
||||
This is often mentioned by Emacs advocates in text editor discussions. “Yeah, but can that other editor run Tetris?” I wonder, is that supposed to convince anyone that Emacs is superior? Like, why would anyone care that they could play games in their text editor? “Yeah, but can that other vacuum play mp3s?”
|
||||
|
||||
That said, Tetris is always fun. Like everything in Emacs, the source code is open for easy inspection and modification, so it’s possible to make it even more fun. And by more fun, I mean harder.
|
||||
|
||||
One of the simplest ways to make the game harder is to get rid of the next-block preview. No more sitting that S/Z block in a precarious position knowing that you can fill in the space with the next piece – you have to chance it and hope for the best. Here’s what it looks like with no preview (as you can see, without the preview I made some choices that turned out to have dire consequences):
|
||||
|
||||

|
||||
|
||||
The preview box is set with a function called tetris-draw-next-shape[1][2]:
|
||||
|
||||
```
|
||||
(defun tetris-draw-next-shape ()
|
||||
(dotimes (x 4)
|
||||
(dotimes (y 4)
|
||||
(gamegrid-set-cell (+ tetris-next-x x)
|
||||
(+ tetris-next-y y)
|
||||
tetris-blank)))
|
||||
(dotimes (i 4)
|
||||
(let ((tetris-shape tetris-next-shape)
|
||||
(tetris-rot 0))
|
||||
(gamegrid-set-cell (+ tetris-next-x
|
||||
(aref (tetris-get-shape-cell i) 0))
|
||||
(+ tetris-next-y
|
||||
(aref (tetris-get-shape-cell i) 1))
|
||||
tetris-shape))))
|
||||
```
|
||||
|
||||
First, we’ll introduce a flag to allow configuring next-preview[2][3]:
|
||||
|
||||
```
|
||||
(defvar tetris-preview-next-shape nil
|
||||
"When non-nil, show the next block the preview box.")
|
||||
```
|
||||
|
||||
Now the question is, how can we make tetris-draw-next-shape obey this flag? The obvious way would be to redefine it:
|
||||
|
||||
```
|
||||
(defun tetris-draw-next-shape ()
|
||||
(when tetris-preview-next-shape
|
||||
;; existing tetris-draw-next-shape logic
|
||||
))
|
||||
```
|
||||
|
||||
This is not an ideal solution. There will be two definitions of the same function floating around, which is confusing, and we’ll have to maintain our modified definition in case the upstream version changes.
|
||||
|
||||
A better approach is to use advice. Emacs advice is like a Python decorator, but even more flexible, since advice can be added to a function from anywhere. This means that we can modify the function without disturbing the original source file at all.
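To see the shape of an :around advice in isolation, here is a throwaway sketch (the function names are invented for this example and are not part of the Tetris code):

```
(defun my-greet (name)
  (message "Hello, %s" name))

;; An :around advice receives the original function as its first argument
;; and decides whether and how to call it.
(defun my-greet-loudly (orig-fun name)
  (funcall orig-fun (upcase name)))

(advice-add 'my-greet :around #'my-greet-loudly)
(my-greet "world")    ; displays "Hello, WORLD"
;; (advice-remove 'my-greet #'my-greet-loudly) puts things back.
```

The same pattern is what gets applied to tetris-draw-next-shape below.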
|
||||
|
||||
There are a lot of different ways to use Emacs advice ([check the manual][4]), but for now we’ll just stick with the advice-add function with the :around flag. The advising function takes the original function as an argument, and it might or might not execute it. In this case, we’ll say that the original should be executed only if the preview flag is non-nil:
|
||||
|
||||
```
|
||||
(defun tetris-maybe-draw-next-shape (tetris-draw-next-shape)
|
||||
(when tetris-preview-next-shape
|
||||
(funcall tetris-draw-next-shape)))
|
||||
|
||||
(advice-add 'tetris-draw-next-shape :around #'tetris-maybe-draw-next-shape)
|
||||
```
|
||||
|
||||
This code will modify the behavior of tetris-draw-next-shape, but it can be stored in your config files, safely away from the actual Tetris code.
|
||||
|
||||
Getting rid of the preview box is a simple change. A more drastic change is to make it so that blocks randomly stop in the air:
|
||||
|
||||

|
||||
|
||||
In that picture, the red I and green T pieces are not falling, they’re set in place. This can make the game almost unplayably hard, but it’s easy to implement.
|
||||
|
||||
As before, we’ll first define a flag:
|
||||
|
||||
```
|
||||
(defvar tetris-stop-midair t
|
||||
"If non-nil, pieces will sometimes stop in the air.")
|
||||
```
|
||||
|
||||
Now, the way Emacs Tetris works is something like this. The active piece has x- and y-coordinates. On each clock tick, the y-coordinate is incremented (the piece moves down one row), and then a check is made for collisions. If a collision is detected, the piece is backed out (its y-coordinate is decremented) and set in place. In order to make a piece stop in the air, all we have to do is hack the detection function, tetris-test-shape.
|
||||
|
||||
It doesn’t matter what this function does internally – what matters is that it’s a function of no arguments that returns a boolean value. We need it to return true whenever it normally would (otherwise we risk weird collisions) but also at other times. I’m sure there are a variety of ways this could be done, but here is what I came up with:
|
||||
|
||||
```
|
||||
(defun tetris-test-shape-random (tetris-test-shape)
|
||||
(or (and
|
||||
tetris-stop-midair
|
||||
;; Don't stop on the first shape.
|
||||
(< 1 tetris-n-shapes )
|
||||
;; Stop every INTERVAL pieces.
|
||||
(let ((interval 7))
|
||||
(zerop (mod tetris-n-shapes interval)))
|
||||
;; Don't stop too early (it makes the game unplayable).
|
||||
(let ((upper-limit 8))
|
||||
(< upper-limit tetris-pos-y))
|
||||
;; Don't stop at the same place every time.
|
||||
(zerop (mod (random 7) 10)))
|
||||
(funcall tetris-test-shape)))
|
||||
|
||||
(advice-add 'tetris-test-shape :around #'tetris-test-shape-random)
|
||||
```
|
||||
|
||||
The hardcoded parameters here were chosen to make the game harder but still playable. I was drunk on an airplane when I decided on them though, so they might need some further tweaking.
|
||||
|
||||
By the way, according to my tetris-scores file, my top score is
|
||||
|
||||
```
|
||||
01389 Wed Dec 5 15:32:19 2018
|
||||
```
|
||||
|
||||
The scores in that file are listed up to five digits by default, so that doesn’t seem very good.
|
||||
|
||||
Exercises for the reader
|
||||
|
||||
1. Using advice, modify Emacs Tetris so that it flashes the message “OH SHIT” under the scoreboard every time the block moves down. Make the size of the message proportional to the height of the block stack (when there are no blocks, the message should be small or nonexistent, and when the highest block is close to the ceiling, the message should be large).
|
||||
|
||||
2. The version of tetris-test-shape-random given here has every seventh piece stop midair. A player could potentially figure out the interval and use it to their advantage. Modify it to make the interval random in some reasonable range (say, every five to ten pieces).
|
||||
|
||||
3. For a different take on advising Tetris, try out [autotetris-mode][1].
|
||||
|
||||
4. Come up with an interesting way to mess with the piece-rotation mechanics and then implement it with advice.
|
||||
|
||||
Footnotes
|
||||
============================================================
|
||||
|
||||
[1][5] Emacs has just one big global namespace, so function and variable names are typically prefixed with their package name in order to avoid collisions.
|
||||
|
||||
[2][6] A lot of people will tell you that you shouldn’t use an existing namespace prefix and that you should reserve a namespace prefix for anything you define yourself, e.g. my/tetris-preview-next-shape. This is ugly and usually pointless, so I don’t do it.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://nickdrozd.github.io/2019/01/14/tetris.html
|
||||
|
||||
作者:[nickdrozd][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://nickdrozd.github.io
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://nullprogram.com/blog/2014/10/19/
|
||||
[2]: https://nickdrozd.github.io/2019/01/14/tetris.html#fn.1
|
||||
[3]: https://nickdrozd.github.io/2019/01/14/tetris.html#fn.2
|
||||
[4]: https://www.gnu.org/software/emacs/manual/html_node/elisp/Advising-Functions.html
|
||||
[5]: https://nickdrozd.github.io/2019/01/14/tetris.html#fnr.1
|
||||
[6]: https://nickdrozd.github.io/2019/01/14/tetris.html#fnr.2
|
@ -1,49 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (heguangzhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How Kubernetes Became the Standard for Compute Resources)
|
||||
[#]: via: (https://www.linux.com/articles/how-kubernetes-became-the-standard-for-compute-resources/)
|
||||
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
|
||||
|
||||
How Kubernetes Became the Standard for Compute Resources
|
||||
======
|
||||
|
||||
<https://www.linux.com/wp-content/uploads/2019/08/elevator-1598431_1920.jpg>
|
||||
|
||||
2019 has been a game-changing year for the cloud-native ecosystem. There were [consolidations][1], acquisitions of powerhouses like Red Hat, Docker and Pivotal, and the emergence of players like Rancher Labs and Mirantis.
|
||||
|
||||
“All these consolidation and M&A in this space is an indicator of how fast the market has matured,” said Sheng Liang, co-founder and CEO of Rancher Labs, a company that offers a complete software stack for teams adopting containers.
|
||||
|
||||
Traditionally, emerging technologies like Kubernetes and Docker appeal to tinkerers and mega-scalers such as Facebook and Google. There was very little interest outside of that group. However, both of these technologies experienced massive adoption at the enterprise level. Suddenly, there was a massive market with huge opportunities. Almost everyone jumped in. There were players who were bringing innovative solutions and then there were players who were trying to catch up with the rest. It became very crowded very quickly.
|
||||
|
||||
It also changed the way innovation was happening. [Early adopters were usually tech-savvy companies.][2] Now, almost everyone is using it, even in areas that were not considered turf for Kubernetes. It changed the market dynamics as companies like Rancher Labs were witnessing unique use cases.
|
||||
|
||||
Liang adds, “I’ve never been in a market or technology evolution that’s happened as quickly and as dynamically as Kubernetes. When we started some five years ago, it was a very crowded space. Over time, most of our peers disappeared for one reason or the other. Either they weren’t able to adjust to the change or they chose not to adjust to some of the changes.”
|
||||
|
||||
In the early days of Kubernetes, the most obvious opportunity was to build Kubernetes distro and Kubernetes operations. It’s new technology. It’s known to be reasonably complex to install, upgrade, and operate.
|
||||
|
||||
It all changed when Google, AWS, and Microsoft entered the market. At that point, there was a stampede of vendors rushing in to provide solutions for the platform. “As soon as cloud providers like Google decided to make Kubernetes as a service and offered it for free as loss-leader to drive infrastructure consumption, we knew that the business of actually operating and supporting Kubernetes, the upside of that would be very limited,” said Liang.
|
||||
|
||||
Not everything was bad for non-Google players. Since cloud vendors removed all the complexity that came with Kubernetes by offering it as a service, it meant wider adoption of the technology, even by those who refrained from using it due to the overhead of operating it. It meant that Kubernetes would become ubiquitous and would become an industry standard.
|
||||
|
||||
“Rancher Labs was one of the very few companies that saw this as an opportunity and looked one step further than everyone else. We realized that Kubernetes was going to become the new computing standard, just the way TCP/IP became the networking standard,” said Liang.
|
||||
|
||||
CNCF plays a critical role in building a vibrant ecosystem around Kubernetes, creating a massive community to build, nurture and commercialize cloud-native open source technologies.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/articles/how-kubernetes-became-the-standard-for-compute-resources/
|
||||
|
||||
作者:[Swapnil Bhartiya][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[heguangzhi](https://github.com/heguangzhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/author/swapnil/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.cloudfoundry.org/blog/2019-is-the-year-of-consolidation-why-ibms-deal-with-red-hat-is-a-harbinger-of-things-to-come/
|
||||
[2]: https://www.packet.com/blog/open-source-season-on-the-kubernetes-highway/
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (heguangzhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
@ -656,7 +656,7 @@ via: https://opensource.com/article/20/2/python-gnu-octave-data-science
|
||||
|
||||
作者:[Cristiano L. Fontana][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[heguangzhi](https://github.com/heguangzhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
427
sources/tech/20200221 Don-t like loops- Try Java Streams.md
Normal file
@ -0,0 +1,427 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Don't like loops? Try Java Streams)
|
||||
[#]: via: (https://opensource.com/article/20/2/java-streams)
|
||||
[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen)
|
||||
|
||||
Don't like loops? Try Java Streams
|
||||
======
|
||||
It's 2020 and time to learn about Java Streams.
|
||||
![Person drinking a hot drink at the computer][1]
|
||||
|
||||
In this article, I will explain how to not write loops anymore.
|
||||
|
||||
What? Whaddaya mean, no more loops?
|
||||
|
||||
Yep, that's my 2020 resolution—no more loops in Java. Understand that it's not that loops have failed me, nor have they led me astray (well, at least, I can argue that point). Really, it is that I, a Java programmer of modest abilities since 1997 or so, must finally learn about all this new [Streams][2] stuff, saying "what" I want to do and not "how" I want to do it, maybe being able to parallelize some of my computations, and all that other good stuff.
|
||||
|
||||
I'm guessing that there are other Java programmers out there who also have been programming in Java for a decent amount of time and are in the same boat. Therefore, I'm offering my experiences as a guide to "how to not write loops in Java anymore."
|
||||
|
||||
### Find a problem worth solving
|
||||
|
||||
If you're like me, then the first show-stopper you run into is "right, cool stuff, but what am I solving for, and how do I apply this?" I realized that I can spot the perfect opportunity camouflaged as _Something I've Done Before_.
|
||||
|
||||
In my case, it's sampling land cover within a specific area and coming up with an estimate and a confidence interval around that estimate for the land cover across the whole area. The specific problem involves deciding whether an area is "forested" or not, given a specific legal definition: if at least 10% of the soil is covered over by tree crowns, then the area is considered to be forested; otherwise, it's something else.
|
||||
|
||||
![Image of land cover in an area][3]
|
||||
|
||||
It's a pretty esoteric example of a recurring problem; I'll grant you. But there it is. For the ecologists and foresters out there who are accustomed to cool temperate or tropical forests, 10% might sound kind of low, but in the case of dry areas with low-growing shrubs and trees, that's a reasonable number.
|
||||
|
||||
So the basic idea is: use images to stratify the area (i.e., areas completely devoid of trees, areas of predominantly small trees spaced quite far apart, areas of predominantly small trees spaced closer together, areas of somewhat larger trees), locate some samples in those strata, send the crew out to measure the samples, analyze the results, and calculate the proportion of soil covered by tree crowns across the area. Simple, right?
|
||||
|
||||
![Survey team assessing land cover][4]
|
||||
|
||||
### What the field data looks like
|
||||
|
||||
In the current project, the samples are rectangular areas 20 meters wide by 25 meters long, so 500 square meters each. On each patch, the field crew measured each tree: its species, its height, the maximum and minimum width of its crown, and the diameter of its trunk at trunk height (nominally 30cm above the ground). This information was collected, entered into a spreadsheet, and exported to a bar separated value (BSV) file for me to analyze. It looks like this:
|
||||
|
||||
Stratum# | Sample# | Tree# | Species | Trunk diameter (cm) | Crown diameter 1 (m) | Crown diameter 2 (m) | Height (m)
---|---|---|---|---|---|---|---
1 | 1 | 1 | Ac | 6 | 3.6 | 4.6 | 2.4
1 | 1 | 2 | Ac | 6 | 2.2 | 2.3 | 2.5
1 | 1 | 3 | Ac | 16 | 2.5 | 1.7 | 2.4
1 | 1 | 4 | Ac | 6 | 1.5 | 2.1 | 1.8
1 | 1 | 5 | Ac | 5 | 0.9 | 1.7 | 1.7
1 | 1 | 6 | Ac | 6 | 1.7 | 1.3 | 1.6
1 | 1 | 7 | Ac | 5 | 1.82 | 1.32 | 1.8
1 | 1 | 1 | Ac | 1 | 0.3 | 0.25 | 0.9
1 | 1 | 2 | Ac | 2 | 1.2 | 1.2 | 1.7
|
||||
|
||||
The first column is the stratum number (where 1 is "predominantly small trees spaced quite far apart," 2 is "predominantly small trees spaced closer together," and 3 is "somewhat larger trees"; we didn't sample the areas "completely devoid of trees"). The second column is the sample number (there are 73 samples altogether, located in the three strata in proportion to the area of each stratum). The third column is the tree number within the sample. The fourth is the two-letter species code, the fifth the trunk diameter (in this case, 10cm above ground or exposed roots), the sixth the smallest distance across the crown, the seventh the largest distance, and the eighth the height of the tree.
|
||||
|
||||
For the purposes of this exercise, I'm only concerned with the total amount of ground covered by the tree crowns—not the species, nor the height, nor the diameter of the trunk.
|
||||
|
||||
In addition to the measurement information above, I also have the areas of the three strata, also in a BSV:
|
||||
|
||||
stratum | hectares
---|---
1 | 114.89
2 | 207.72
3 | 29.77
|
||||
|
||||
### What I want to do (not how I want to do it)
|
||||
|
||||
In keeping with one of the main design goals of Java Streams, here is "what" I want to do:
|
||||
|
||||
1. Read the stratum area BSV and save the data as a lookup table.
|
||||
2. Read the measurements from the measurement BSV file.
|
||||
3. Accumulate each measurement (tree) to calculate the total area of the sample covered by tree crowns.
|
||||
4. Accumulate the sample tree crown area values and count the number of samples to estimate the mean tree crown area coverage and standard error of the mean for each stratum.
|
||||
5. Summarize the stratum figures.
|
||||
6. Weigh the stratum means and standard errors by the stratum areas (looked up from the table created in step 1) and accumulate them to estimate the mean tree crown area coverage and standard error of the mean for the total area.
|
||||
7. Summarize the weighted figures.
|
||||
|
||||
|
||||
|
||||
Generally speaking, the way to define "what" with Java Streams is by creating a stream processing pipeline of function calls that pass over the data. So, yes, there is actually a bit of "how" that ends up creeping in… in fact, quite a bit of "how." But, it needs a very different knowledge base than the good, old fashioned loop.
|
||||
|
||||
I'll go through each of these steps in detail.
|
||||
|
||||
#### Build the stratum area table
|
||||
|
||||
The first job is to convert the stratum areas BSV file to a lookup table:
|
||||
|
||||
|
||||
```
String fileName = "stratum_areas.bsv";
Stream<String> inputLineStream = Files.lines(Paths.get(fileName)); // (1)

final Map<Integer,Double> stratumAreas =          // (2)
    inputLineStream                               // (3)
        .skip(1)                                  // (4)
        .map(l -> l.split("\\|"))                 // (5)
        .collect(                                 // (6)
            Collectors.toMap(                     // (7)
                a -> Integer.parseInt(a[0]),      // (8)
                a -> Double.parseDouble(a[1])     // (9)
            )
        );
inputLineStream.close();                          // (10)

System.out.println("stratumAreas = " + stratumAreas); // (11)
```
|
||||
|
||||
I'll take this a line or two at a time, where the numbers in comments following the lines above—e.g., _// (3)_— correspond to the numbers below:
|
||||
|
||||
1. java.nio.Files.lines() gives a stream of strings corresponding to lines in the file.
|
||||
2. The goal is to create the lookup table, **stratumAreas**, which is a **Map<Integer,Double>**. Therefore, I can get the **double** value area for stratum 2 as **stratumAreas.get(2)**.
|
||||
3. This is the beginning of the stream "pipeline."
|
||||
4. Skip the first line in the pipeline since it's the header line containing the column names.
|
||||
5. Use **map()** to split the **String** input line into an array of **String** fields, with the first field being the stratum # and the second being the stratum area.
|
||||
6. Use **collect()** to [materialize the results][9].
|
||||
7. The materialized result will be produced as a sequence of **Map** entries.
|
||||
8. The key of each map entry is the first element of the array in the pipeline—the **int** stratum number. By the way, this is a _Java lambda_ expression—[an anonymous function][10] that takes an argument and returns that argument converted to an **int**.
|
||||
9. The value of each map entry is the second element of the array in the pipeline—the **double** stratum area.
|
||||
10. Don't forget to close the stream (file).
|
||||
11. Print out the result, which looks like: `stratumAreas = {1=114.89, 2=207.72, 3=29.77}`
|
||||
### Build the measurements table and accumulate the measurements into the sample totals
|
||||
|
||||
Now that I have the stratum areas, I can start processing the main body of data—the measurements. I combine the two tasks of building the measurements table and accumulating the measurements into the sample totals since I don't have any interest in the measurement data per se.
|
||||
```
fileName = "sample_data_for_testing.bsv";
inputLineStream = Files.lines(Paths.get(fileName));

final Map<Integer,Map<Integer,Double>> sampleValues =
    inputLineStream
        .skip(1)
        .map(l -> l.split("\\|"))
        .collect(                                                    // (1)
            Collectors.groupingBy(a -> Integer.parseInt(a[0]),       // (2)
                Collectors.groupingBy(b -> Integer.parseInt(b[1]),   // (3)
                    Collectors.summingDouble(                        // (4)
                        c -> {                                       // (5)
                            double rm = (Double.parseDouble(c[5]) +
                                Double.parseDouble(c[6]))/4d;        // (6)
                            return rm*rm * Math.PI / 500d;           // (7)
                        })
                )
            )
        );
inputLineStream.close();

System.out.println("sampleValues = " + sampleValues);               // (8)
```
|
||||
Again, a line or two or so at a time:
|
||||
|
||||
1. The first seven lines are the same in this task and the previous, except the name of this lookup table is **sampleValues**; and it is a **Map** of **Map**s.
|
||||
2. The measurement data is grouped into samples (by sample #), which are, in turn, grouped into strata (by stratum #), so I use **Collectors.groupingBy()** at the topmost level [to separate data][12] into strata, with **a[0]** here being the stratum number.
|
||||
3. I use **Collectors.groupingBy()** once more to separate data into samples, with **b[1]** here being the sample number.
|
||||
4. I use the handy **Collectors.summingDouble()** [to accumulate the data][13] for each measurement within the sample within the stratum.
|
||||
5. Again, a Java lambda or anonymous function whose argument **c** is the array of fields, where this lambda has several lines of code that are surrounded by **{** and **}** with a **return** statement just before the **}**.
|
||||
6. Calculate the mean crown radius of the measurement.
|
||||
7. Calculate the crown area of the measurement as a proportion of the total sample area and return that value as the result of the lambda (the arithmetic behind points 6 and 7 is spelled out in the note after this list).
|
||||
8. Again, similar to the previous task. The result looks like this (with some numbers elided): `sampleValues = {1={1=0.09083231861452731, 66=0.06088002082602869, ... 28=0.0837823490804228}, 2={65=0.14738326403381743, 2=0.16961183847374103, ... 63=0.25083064794883453}, 3={64=0.3306323635177101, 32=0.25911911184680053, ... 30=0.2642668470291564}}`
|
||||
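For readers wondering where the `/4d` and `/500d` in points 6 and 7 come from, here is my reading of the arithmetic (an assumption on my part: **c[5]** and **c[6]** appear to be two crown diameter measurements in meters, and 500 m² appears to be the area of each sample plot):

$$ r = \frac{c_5 + c_6}{4} \qquad\qquad p = \frac{\pi r^2}{500} $$

That is, the two diameters are averaged and halved to give a radius, and the resulting crown area is expressed as a fraction of the plot area.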
|
||||
|
||||
|
||||
|
||||
The output in point 8 above shows the **Map** of **Map**s structure clearly—there are three entries at the top level corresponding to strata 1, 2, and 3, and each stratum has subentries corresponding to the proportional area of each sample covered by tree crowns.
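If the nested **groupingBy()** still feels abstract, here is a tiny self-contained sketch of my own (invented data, not the survey file) that produces the same **Map<Integer,Map<Integer,Double>>** shape:

```
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class GroupingDemo {
    public static void main(String[] args) {
        // each row: { stratum, sample, value }
        List<double[]> rows = List.of(
            new double[]{1, 1, 0.25}, new double[]{1, 1, 0.50},
            new double[]{1, 2, 0.10}, new double[]{2, 7, 0.75});

        Map<Integer, Map<Integer, Double>> totals = rows.stream()
            .collect(Collectors.groupingBy(r -> (int) r[0],          // outer key: stratum
                     Collectors.groupingBy(r -> (int) r[1],          // inner key: sample
                     Collectors.summingDouble(r -> r[2]))));         // sum the values per sample

        System.out.println(totals); // {1={1=0.75, 2=0.1}, 2={7=0.75}} (order may vary)
    }
}
```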
|
||||
|
||||
#### Accumulate the sample totals into the stratum means and standard errors
|
||||
|
||||
At this point, the task becomes more complex; I need to count the number of samples, sum up the sample values in preparation for calculating the sample mean, and sum up the squares of the sample values in preparation for calculating the standard error of the mean. I may as well incorporate the stratum area into this grouping of data as well, as I'll need it shortly to weight the stratum results together.
|
||||
|
||||
So the first thing to do is create a class, **StratumAccumulator**, to handle the accumulation and provide the calculation of the interesting results. This class implements **java.util.function.DoubleConsumer**, which can be passed to **collect()** to handle accumulation:
|
||||
|
||||
|
||||
```
|
||||
class StratumAccumulator implements DoubleConsumer {
|
||||
private double ha;
|
||||
private int n;
|
||||
private double sum;
|
||||
private double ssq;
|
||||
public StratumAccumulator(double ha) { // (1)
|
||||
this.ha = ha;
|
||||
this.n = 0;
|
||||
this.sum = 0d;
|
||||
this.ssq = 0d;
|
||||
}
|
||||
public void accept(double d) { // (2)
|
||||
this.sum += d;
|
||||
this.ssq += d*d;
|
||||
this.n++;
|
||||
}
|
||||
public void combine(StratumAccumulator other) { // (3)
|
||||
this.sum += other.sum;
|
||||
this.ssq += other.ssq;
|
||||
this.n += other.n;
|
||||
}
|
||||
public double getHa() { // (4)
|
||||
return this.ha;
|
||||
}
|
||||
public int getN() { // (5)
|
||||
return this.n;
|
||||
}
|
||||
public double getMean() { // (6)
|
||||
return this.n > 0 ? this.sum / this.n : 0d;
|
||||
}
|
||||
public double getStandardError() { // (7)
|
||||
double mean = this.getMean();
|
||||
double variance = this.n > 1 ? (this.ssq - mean*mean*n)/(this.n - 1) : 0d;
|
||||
return this.n > 0 ? [Math][11].sqrt(variance/this.n) : 0d;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Line-by-line:
|
||||
|
||||
1. The constructor **StratumAccumulator(double ha)** takes an argument, the area of the stratum in hectares, which allows me to merge the stratum area lookup table into instances of this class.
|
||||
2. The **accept(double d)** method is used to accumulate the stream of double values, and I use it to:
|
||||
a. Count the number of values.
|
||||
b. Sum the values in preparation for computing the sample mean.
|
||||
c. Sum the squares of the values in preparation for computing the standard error of the mean.
|
||||
3. The **combine()** method is used to merge substreams of **StratumAccumulator**s (in case I want to process in parallel).
|
||||
4. The getter for the area of the stratum
|
||||
5. The getter for the number of samples in the stratum
|
||||
6. The getter for the mean sample value in the stratum
|
||||
7. The getter for the standard error of the mean in the stratum (a short usage sketch follows this list).
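To see what the class does in isolation, a hypothetical stand-alone check (values invented by me, not from the survey data) might look like this:

```
// Hypothetical stand-alone check of StratumAccumulator.
StratumAccumulator demo = new StratumAccumulator(114.89); // stratum area in hectares
demo.accept(0.08);
demo.accept(0.10);
demo.accept(0.12);
System.out.printf("n %d mean %g se %g ha %g%n",
        demo.getN(), demo.getMean(), demo.getStandardError(), demo.getHa());
// prints roughly: n 3 mean 0.10 se 0.0115 ha 114.89
```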
|
||||
|
||||
|
||||
|
||||
Once I have this accumulator, I can use it to accumulate the sample values pertaining to each stratum:
|
||||
|
||||
|
||||
```
|
||||
final Map<[Integer][6],StratumAccumulator> stratumValues = // (1)
|
||||
sampleValues.entrySet().stream() // (2)
|
||||
.collect( // (3)
|
||||
Collectors.toMap( // (4)
|
||||
e -> e.getKey(), // (5)
|
||||
e -> e.getValue().entrySet().stream() // (6)
|
||||
.map([Map.Entry][14]::getValue) // (7)
|
||||
.collect( // (8)
|
||||
() -> new StratumAccumulator(stratumAreas.get(e.getKey())), // (9)
|
||||
StratumAccumulator::accept, // (10)
|
||||
StratumAccumulator::combine) // (11)
|
||||
)
|
||||
);
|
||||
```
|
||||
|
||||
Line-by-line:
|
||||
|
||||
1. This time, I'm using the pipeline to build **stratumValues**, which is a **Map<Integer,StratumAccumulator>**, so **stratumValues.get(3)** will return the **StratumAccumulator** instance for stratum 3.
|
||||
2. Here, I'm using the **entrySet().stream()** method provided by **Map** to get a stream of (key, value) pairs; recall these are **Map**s of sample values by stratum.
|
||||
3. Again, I'm using **collect()** to gather the pipeline results by stratum…
|
||||
4. using **Collectors.toMap()** to generate a stream of **Map** entries…
|
||||
5. whose keys are the key of the incoming stream (that is, the stratum #)…
|
||||
6. and whose values are the Map of sample values, and I again use **entrySet().stream()** to convert to a stream of Map entries, one for each sample.
|
||||
7. Using **map()** to get the value of the sample **Map** entry; I'm not interested in the key by this point.
|
||||
8. Yet again, using **collect()** to accumulate the sample results into the **StratumAccumulator** instances.
|
||||
9. Telling **collect()** how to create a new **StratumAccumulator**—I need to pass the stratum area into the constructor here, so I can't just use **StratumAccumulator::new**.
|
||||
10. Telling **collect()** to use the **accept()** method of **StratumAccumulator** to accumulate the stream of sample values.
|
||||
11. Telling **collect()** to use the **combine()** method of **StratumAccumulator** to merge **StratumAccumulator** instances.
|
||||
|
||||
|
||||
|
||||
#### Summarize the stratum figures
|
||||
|
||||
Whew! After all of that, printing out the stratum figures is pretty straightforward:
|
||||
|
||||
|
||||
```
|
||||
stratumValues.entrySet().stream()
|
||||
.forEach(e -> {
|
||||
StratumAccumulator sa = e.getValue();
|
||||
int n = sa.getN();
|
||||
double se66 = sa.getStandardError();
|
||||
double t = new TDistribution(n - 1).inverseCumulativeProbability(0.975d);
|
||||
[System][8].out.printf("stratum %d n %d mean %g se66 %g t %g se95 %g ha %g\n",
|
||||
e.getKey(), n, sa.getMean(), se66, t, se66 * t, sa.getHa());
|
||||
});
|
||||
```
|
||||
|
||||
In the above, once again, I use **entrySet().stream()** to transform the **stratumValues** Map to a stream, and then apply the **forEach()** method to the stream. **forEach()** is pretty much what it sounds like—a loop! But the business of finding the head of the stream, finding the next element, and checking to see if it hits the end is all handled by Java Streams. So, I just get to say what I want to do for each record, which is basically to print it out.
|
||||
|
||||
My code looks a bit more complicated because I declare some local variables to hold some intermediate results that I use more than once—**n**, the number of samples, and **se66**, the standard error of the mean. I also calculate the inverse T value to [convert my standard error of the mean to a 95% confidence interval][15].
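A note on **TDistribution**: the article doesn't show its import, but it appears to be the Apache Commons Math class; if that assumption is right, the relevant pieces would be roughly:

```
// Assumption: TDistribution is org.apache.commons.math3.distribution.TDistribution,
// i.e., the dependency would be org.apache.commons:commons-math3.
import org.apache.commons.math3.distribution.TDistribution;

double t95 = new TDistribution(23).inverseCumulativeProbability(0.975d); // ~2.069 for 24 samples
double halfWidth = 0.0107786 * t95; // se66 * t gives the 95% half-width printed below
```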
|
||||
|
||||
The result looks like this:
|
||||
|
||||
|
||||
```
|
||||
stratum 1 n 24 mean 0.0903355 se66 0.0107786 t 2.06866 se95 0.0222973 ha 114.890
|
||||
stratum 2 n 38 mean 0.154612 se66 0.00880498 t 2.02619 se95 0.0178406 ha 207.720
|
||||
stratum 3 n 11 mean 0.223634 se66 0.0261662 t 2.22814 se95 0.0583020 ha 29.7700
|
||||
```
|
||||
|
||||
#### Accumulate the stratum means and standard errors into the total
|
||||
|
||||
Once again, the task becomes more complex, so I create a class, **TotalAccumulator**, to handle the accumulation and provide the calculation of the interesting results. This class implements **java.util.function.Consumer<T>**, which can be passed to **collect()** to handle accumulation:
|
||||
|
||||
|
||||
```
|
||||
class TotalAccumulator implements Consumer<StratumAccumulator> {
|
||||
private double ha;
|
||||
private int n;
|
||||
private double sumWtdMeans;
|
||||
private double ssqWtdStandardErrors;
|
||||
public TotalAccumulator() {
|
||||
this.ha = 0d;
|
||||
this.n = 0;
|
||||
this.sumWtdMeans = 0d;
|
||||
this.ssqWtdStandardErrors = 0d;
|
||||
}
|
||||
public void accept(StratumAccumulator sa) {
|
||||
double saha = sa.getHa();
|
||||
double sase = sa.getStandardError();
|
||||
this.ha += saha;
|
||||
this.n += sa.getN();
|
||||
this.sumWtdMeans += saha * sa.getMean();
|
||||
this.ssqWtdStandardErrors += saha * saha * sase * sase;
|
||||
}
|
||||
public void combine(TotalAccumulator other) {
|
||||
this.ha += other.ha;
|
||||
this.n += other.n;
|
||||
this.sumWtdMeans += other.sumWtdMeans;
|
||||
this.ssqWtdStandardErrors += other.ssqWtdStandardErrors;
|
||||
}
|
||||
public double getHa() {
|
||||
return this.ha;
|
||||
}
|
||||
public int getN() {
|
||||
return this.n;
|
||||
}
|
||||
public double getMean() {
|
||||
return this.ha > 0 ? this.sumWtdMeans / this.ha : 0d;
|
||||
}
|
||||
public double getStandardError() {
|
||||
return this.ha > 0 ? [Math][11].sqrt(this.ssqWtdStandardErrors) / this.ha : 0;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
I'm not going to go into much detail on this, since it's structurally pretty similar to **StratumAccumulator**. Of main interest:
|
||||
|
||||
1. The constructor takes no arguments, which simplifies its use.
|
||||
2. The **accept()** method accumulates instances of **StratumAccumulator**, not **double** values, hence the use of the **Consumer<T>** interface.
|
||||
3. As for the calculations, they assemble an area-weighted average of the **StratumAccumulator** instances, so they make use of the stratum areas; the formulas might look a bit strange to anyone who's not used to stratified sampling (a quick numeric check follows this list).
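As a sanity check on those formulas, you can recompute the total line from the per-stratum figures printed earlier; this is my own back-of-the-envelope verification, not part of the original program:

```
// Recompute the stratified total from the stratum results printed above.
double[] ha   = {114.89, 207.72, 29.77};             // stratum areas
double[] mean = {0.0903355, 0.154612, 0.223634};     // stratum means
double[] se   = {0.0107786, 0.00880498, 0.0261662};  // stratum standard errors

double haTotal = 0d, sumWtdMeans = 0d, ssqWtdSe = 0d;
for (int i = 0; i < ha.length; i++) {
    haTotal     += ha[i];
    sumWtdMeans += ha[i] * mean[i];
    ssqWtdSe    += ha[i] * ha[i] * se[i] * se[i];
}
System.out.printf("mean %g se %g%n",
        sumWtdMeans / haTotal, Math.sqrt(ssqWtdSe) / haTotal);
// prints roughly: mean 0.139487 se 0.00664653 -- matching the total line further below
```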
|
||||
|
||||
|
||||
|
||||
As for actually carrying out the work, it's easy-peasy:
|
||||
|
||||
|
||||
```
|
||||
final TotalAccumulator totalValues =
|
||||
stratumValues.entrySet().stream()
|
||||
.map([Map.Entry][14]::getValue)
|
||||
.collect(TotalAccumulator::new, TotalAccumulator::accept, TotalAccumulator::combine);
|
||||
```
|
||||
|
||||
Same old stuff as before:
|
||||
|
||||
1. Use **entrySet().stream()** to convert the **stratumValue Map** entries to a stream.
|
||||
2. Use **map()** to replace the **Map** entries with their values—the instances of **StratumAccumulator**.
|
||||
3. Use **collect()** to apply the **TotalAccumulator** to the instances of **StratumAccumulator**.
|
||||
|
||||
|
||||
|
||||
#### Summarize the total figures
|
||||
|
||||
Getting the interesting bits out of the **TotalAccumulator** instance is also pretty straightforward:
|
||||
|
||||
|
||||
```
|
||||
int nT = totalValues.getN();
|
||||
double se66T = totalValues.getStandardError();
|
||||
double tT = new TDistribution(nT - stratumValues.size()).inverseCumulativeProbability(0.975d);
|
||||
[System][8].out.printf("total n %d mean %g se66 %g t %g se95 %g ha %g\n",
|
||||
nT, totalValues.getMean(), se66T, tT, se66T * tT, totalValues.getHa());
|
||||
```
|
||||
|
||||
Similar to the **StratumAccumulator**, I just call the relevant getters to pick out the number of samples **nT** and the standard error **se66T**. I calculate the T value **tT** (using "n – 3" here since there are three strata), and then I print the result, which looks like this:
|
||||
|
||||
|
||||
```
|
||||
`total n 73 mean 0.139487 se66 0.00664653 t 1.99444 se95 0.0132561 ha 352.380`
|
||||
```
|
||||
|
||||
### In conclusion
|
||||
|
||||
Wow, that looks like a bit of a marathon. It feels like it, too. As is often the case, there is a great deal of information about how to use Java Streams, all illustrated with toy examples, which kind of help, but not really. I found that getting this to work with a real-world (albeit very simple) example was difficult.
|
||||
|
||||
Because I've been working in [Groovy][16] a lot lately, I kept finding myself wanting to accumulate into "maps of maps of maps" rather than creating accumulator classes, but I was never able to pull that off except in the case of totaling up the measurements in the sample. So, I worked with accumulator classes instead of maps of maps, and maps of accumulator classes instead of maps of maps of maps.
|
||||
|
||||
I don't feel like any kind of master of Java Streams at this point, but I do feel I have a pretty solid understanding of **collect()**, which is deeply important, along with various methods to reformat data structures into streams and to reformat stream elements themselves. So yeah, more to learn!
|
||||
|
||||
Speaking of **collect()**, the examples above move from a very simple use of this fundamental method (the **Collectors.summingDouble()** accumulator), through defining an accumulator class that implements one of the predefined interfaces (in this case, **DoubleConsumer**), to defining a full-blown accumulator of our own that gathers up the intermediate stratum accumulators. I was tempted, sort of, to work backward and implement fully custom accumulators for the stratum and sample stages as well, but the point of this exercise was to learn more about Java Streams, not to become an expert in one single part of it all.
|
||||
|
||||
What's your experience with Java Streams? Done anything big and complicated yet? Please share it in the comments.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/java-streams
|
||||
|
||||
作者:[Chris Hermansen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/clhermansen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_tea_laptop_computer_work_desk.png?itok=D5yMx_Dr (Person drinking a hat drink at the computer)
|
||||
[2]: https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html
|
||||
[3]: https://opensource.com/sites/default/files/uploads/landcover.png (Image of land cover in an area)
|
||||
[4]: https://opensource.com/sites/default/files/uploads/foresters.jpg (Survey team assessing land cover)
|
||||
[5]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string
|
||||
[6]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer
|
||||
[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+double
|
||||
[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system
|
||||
[9]: https://www.baeldung.com/java-8-collectors
|
||||
[10]: https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html
|
||||
[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+math
|
||||
[12]: https://www.baeldung.com/java-groupingby-collector
|
||||
[13]: http://www.java2s.com/Tutorials/Java/java.util.stream/Collectors/Collectors.summingDouble_ToDoubleFunction_super_T_mapper_.htm
|
||||
[14]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+map.entry
|
||||
[15]: https://en.wikipedia.org/wiki/Standard_error
|
||||
[16]: http://groovy-lang.org/
|
@ -0,0 +1,171 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Live video streaming with open source Video.js)
|
||||
[#]: via: (https://opensource.com/article/20/2/video-streaming-tools)
|
||||
[#]: author: (Aaron J. Prisk https://opensource.com/users/ricepriskytreat)
|
||||
|
||||
Live video streaming with open source Video.js
|
||||
======
|
||||
Video.js is a widely used open source player framework that will serve your live video
|
||||
stream to a wide range of devices.
|
||||
![video editing dashboard][1]
|
||||
|
||||
Last year, I wrote about [creating a video streaming server with Linux][2]. That project uses the Real-Time Messaging Protocol (RTMP), the Nginx web server, Open Broadcaster Software (OBS), and the VLC media player.
|
||||
|
||||
I used VLC to play our video stream, which may be fine for a small local deployment but isn't very practical on a large scale. First, your viewers have to use VLC, and RTMP streams can provide inconsistent playback. This is where [Video.js][3] comes into play! Video.js is an open source JavaScript framework for creating custom HTML5 video players. Video.js is incredibly powerful, and it's used by a host of very popular websites—largely due to its open nature and how easy it is to get up and running.
|
||||
|
||||
### Get started with Video.js
|
||||
|
||||
This project is based on the video streaming project I wrote about last year. Since that project was set up to serve RTMP streams, you'll need to make some adjustments to that Nginx configuration to use Video.js. HTTP Live Streaming ([HLS][4]) is a widely used protocol developed by Apple that will serve your stream to a multitude of devices more reliably. HLS takes your stream, breaks it into chunks, and serves it via a specialized playlist. This allows for a more fault-tolerant stream that can play on more devices.
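To make the "chunks plus playlist" idea concrete, the playlist Nginx generates is just a small text file (an .m3u8). A trimmed, hypothetical example might look something like this; the segment names and durations here are illustrative only, not output copied from a real server:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:3
#EXT-X-MEDIA-SEQUENCE:142
#EXTINF:3.000,
STREAM-KEY-142.ts
#EXTINF:3.000,
STREAM-KEY-143.ts
#EXTINF:3.000,
STREAM-KEY-144.ts
```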
|
||||
|
||||
First, create a directory that will house the HLS stream and give Nginx permission to write to it:
|
||||
|
||||
|
||||
```
|
||||
mkdir /mnt/hls
|
||||
chown www:www /mnt/hls
|
||||
```
|
||||
|
||||
Next, fire up your text editor, open the nginx.conf file, and add the following under the **application live** section:
|
||||
|
||||
|
||||
```
|
||||
application live {
|
||||
live on;
|
||||
# Turn on HLS
|
||||
hls on;
|
||||
hls_path /mnt/hls/;
|
||||
hls_fragment 3;
|
||||
hls_playlist_length 60;
|
||||
# disable consuming the stream from nginx as rtmp
|
||||
deny play all;
|
||||
}
|
||||
```
|
||||
|
||||
Take note of the HLS fragment and playlist length settings. You may want to adjust them later, depending on your streaming needs, but this is a good baseline to start with. Next, we need to ensure that Nginx is able to listen for requests from our player and understand how to present it to the user. So, we'll want to add a new section at the bottom of our nginx.conf file.
|
||||
|
||||
|
||||
```
|
||||
server {
|
||||
listen 8080;
|
||||
|
||||
location / {
|
||||
# Disable cache
|
||||
add_header 'Cache-Control' 'no-cache';
|
||||
|
||||
# CORS setup
|
||||
add_header 'Access-Control-Allow-Origin' '*' always;
|
||||
add_header 'Access-Control-Expose-Headers' 'Content-Length';
|
||||
|
||||
# allow CORS preflight requests
|
||||
if ($request_method = 'OPTIONS') {
|
||||
add_header 'Access-Control-Allow-Origin' '*';
|
||||
add_header 'Access-Control-Max-Age' 1728000;
|
||||
add_header 'Content-Type' 'text/plain charset=UTF-8';
|
||||
add_header 'Content-Length' 0;
|
||||
return 204;
|
||||
}
|
||||
|
||||
types {
|
||||
application/dash+xml mpd;
|
||||
application/vnd.apple.mpegurl m3u8;
|
||||
video/mp2t ts;
|
||||
}
|
||||
|
||||
root /mnt/;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Visit Video.js's [Getting started][5] page to download the latest release and check out the release notes. Also on that page, Video.js has a great introductory template you can use to create a very basic web player. I'll break down the important bits of that template and insert the pieces you need to get your new HTML player to use your stream.
|
||||
|
||||
The **head** links in the Video.js library from a content-delivery network (CDN). You can also opt to download and store Video.js locally on your web server if you want.
|
||||
|
||||
|
||||
```
|
||||
<head>
|
||||
<link href="https://vjs.zencdn.net/7.5.5/video-js.css" rel="stylesheet" />
|
||||
|
||||
<!-- If you'd like to support IE8 (for Video.js versions prior to v7) -->
|
||||
<script src="https://vjs.zencdn.net/ie8/1.1.2/videojs-ie8.min.js"></script>
|
||||
</head>
|
||||
```
|
||||
|
||||
Now to the real meat of the player. The **body** section sets the parameters of how the video player will be displayed. Within the **video** element, you need to define the properties of your player. How big do you want it to be? Do you want it to have a poster (i.e., a thumbnail)? Does it need any special player controls? This example defines a simple 600x600 pixel player with an appropriate (to me) thumbnail featuring Beastie (the BSD Demon) and Tux (the Linux penguin).
|
||||
|
||||
|
||||
```
|
||||
<body>
|
||||
<video
|
||||
id="my-video"
|
||||
class="video-js"
|
||||
controls
|
||||
preload="auto"
|
||||
width="600"
|
||||
height="600"
|
||||
poster="BEASTIE-TUX.jpg"
|
||||
data-setup="{}"
|
||||
>
|
||||
```
|
||||
|
||||
Now that you've set how you want your player to look, you need to tell it what to play. Video.js can handle a large number of different formats, including HLS streams.
|
||||
|
||||
|
||||
```
|
||||
<source src="http://MY-WEB-SERVER:8080/hls/STREAM-KEY.m3u8" type="application/x-mpegURL" />
|
||||
<p class="vjs-no-js">
|
||||
To view this video please enable JavaScript, and consider upgrading to a
|
||||
web browser that
|
||||
<a href="https://videojs.com/html5-video-support/" target="_blank"
|
||||
>supports HTML5 video</a
|
||||
>
|
||||
</p>
|
||||
</video>
|
||||
```
|
||||
|
||||
### Record your streams
|
||||
|
||||
Keeping a copy of your streams is super easy. Just add the following at the bottom of your **application live** section in the nginx.conf file:
|
||||
|
||||
|
||||
```
|
||||
# Enable stream recording
|
||||
record all;
|
||||
record_path /mnt/recordings/;
|
||||
record_unique on;
|
||||
```
|
||||
|
||||
Make sure that **record_path** exists and that Nginx has permissions to write to it:
|
||||
|
||||
|
||||
```
|
||||
`chown -R www:www /mnt/recordings`
|
||||
```
|
||||
|
||||
### Down the stream
|
||||
|
||||
That's it! You should now have a spiffy new HTML5-friendly live video player. There are lots of great resources out there on how to expand all your video-making adventures. If you have any questions or suggestions, feel free to reach out to me on [Twitter][7] or leave a comment below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/video-streaming-tools
|
||||
|
||||
作者:[Aaron J. Prisk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/ricepriskytreat
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard)
|
||||
[2]: https://opensource.com/article/19/1/basic-live-video-streaming-server
|
||||
[3]: https://videojs.com/
|
||||
[4]: https://en.wikipedia.org/wiki/HTTP_Live_Streaming
|
||||
[5]: https://videojs.com/getting-started
|
||||
[6]: https://vjs.zencdn.net/ie8/1.1.2/videojs-ie8.min.js
|
||||
[7]: https://twitter.com/AKernelPanic
|
245
sources/tech/20200222 How to install TT-RSS on a Raspberry Pi.md
Normal file
@ -0,0 +1,245 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to install TT-RSS on a Raspberry Pi)
|
||||
[#]: via: (https://opensource.com/article/20/2/ttrss-raspberry-pi)
|
||||
[#]: author: (Patrick H. Mullins https://opensource.com/users/pmullins)
|
||||
|
||||
How to install TT-RSS on a Raspberry Pi
|
||||
======
|
||||
Read your news feeds while keeping your privacy intact with Tiny Tiny
|
||||
RSS.
|
||||
![Raspberries with pi symbol overlay][1]
|
||||
|
||||
[Tiny Tiny RSS][2] (TT-RSS) is a free and open source web-based news feed (RSS/Atom) reader and aggregator. It's ideally suited to those who are privacy-focused and still rely on RSS for their daily news. Tiny Tiny RSS is self-hosted software, so you have 100% control of the server, your data, and your overall privacy. It also supports a wide range of plugins, add-ons, and themes. Want a dark mode interface? No problem. Want to filter your incoming news based on keywords? TT-RSS has you covered there, as well.
|
||||
|
||||
![Tiny Tiny RSS screenshot][3]
|
||||
|
||||
Now that you know what TT-RSS is and why you may want to use it, I'll explain everything you need to know about installing it on a Raspberry Pi or a Debian 10 server.
|
||||
|
||||
### Install and configure TT-RSS
|
||||
|
||||
To install TT-RSS on a Raspberry Pi, you must also install and configure the latest version of PHP (7.3 as of this writing), PostgreSQL for the database backend, the Nginx web server, Git, and finally, TT-RSS.
|
||||
|
||||
#### 1\. Install PHP 7
|
||||
|
||||
Installing PHP 7 is, by far, the most involved part of this process. Thankfully, it's not as difficult as it might appear. Start by installing the following support packages:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install -y ca-certificates apt-transport-https`
|
||||
```
|
||||
|
||||
Now, add the repository PGP key:
|
||||
|
||||
|
||||
```
|
||||
`$ wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add -`
|
||||
```
|
||||
|
||||
Next, add the PHP repository to your apt sources:
|
||||
|
||||
|
||||
```
|
||||
`$ echo "deb https://packages.sury.org/php/ buster main" | sudo tee /etc/apt/sources.list.d/php.list`
|
||||
```
|
||||
|
||||
Then update your repository index:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt update`
|
||||
```
|
||||
|
||||
Finally, install PHP 7.3 (or the latest version) and some common components:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install -y php7.3 php7.3-cli php7.3-fpm php7.3-opcache php7.3-curl php7.3-mbstring php7.3-pgsql php7.3-zip php7.3-xml php7.3-gd php7.3-intl`
|
||||
```
|
||||
|
||||
The command above assumes you're using PostgreSQL as your database backend and installs **php7.3-pgsql**. If you'd rather use MySQL or MariaDB, you can easily change this to **php7.3-mysql**.
|
||||
|
||||
Next, verify that PHP is installed and running on your Raspberry Pi:
|
||||
|
||||
|
||||
```
|
||||
`$ php -v`
|
||||
```
|
||||
|
||||
Now it's time to install and configure the webserver.
|
||||
|
||||
#### 2\. Install Nginx
|
||||
|
||||
Nginx can be installed via apt with:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install -y nginx`
|
||||
```
|
||||
|
||||
Modify the default Nginx virtual host configuration so that the webserver will recognize PHP files and know what to do with them:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo nano /etc/nginx/sites-available/default`
|
||||
```
|
||||
|
||||
You can safely delete everything in the original file and replace it with:
|
||||
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80 default_server;
|
||||
listen [::]:80 default_server;
|
||||
|
||||
root /var/www/html;
|
||||
index index.html index.htm index.php;
|
||||
server_name _;
|
||||
|
||||
location / {
|
||||
try_files $uri $uri/ =404;
|
||||
}
|
||||
|
||||
location ~ \\.php$ {
|
||||
include snippets/fastcgi-php.conf;
|
||||
fastcgi_pass unix:/run/php/php7.3-fpm.sock;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
Use **Ctrl+O** to save your new configuration file and then **Ctrl+X** to exit Nano. You can test your new configuration with:
|
||||
|
||||
|
||||
```
|
||||
`$ nginx -t`
|
||||
```
|
||||
|
||||
If there are no errors, restart the Nginx service:
|
||||
|
||||
|
||||
```
|
||||
`$ systemctl restart nginx`
|
||||
```
|
||||
|
||||
#### 3\. Install PostgreSQL
|
||||
|
||||
Next up is installing the database server. Installing PostgreSQL on the Raspberry Pi is super easy:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install -y postgresql postgresql-client postgis`
|
||||
```
|
||||
|
||||
Check to see if the database server was successfully installed by entering:
|
||||
|
||||
|
||||
```
|
||||
`$ psql --version`
|
||||
```
|
||||
|
||||
#### 4\. Create the Tiny Tiny RSS database
|
||||
|
||||
Before you can do anything else, you need to create a database that the TT-RSS software will use to store data. First, log into the PostgreSQL server:
|
||||
|
||||
|
||||
```
|
||||
`sudo -u postgres psql`
|
||||
```
|
||||
|
||||
Next, create a new user and assign a password:
|
||||
|
||||
|
||||
```
|
||||
`CREATE USER username WITH PASSWORD 'your_password' VALID UNTIL 'infinity';`
|
||||
```
|
||||
|
||||
Then create the database that will be used by TT-RSS:
|
||||
|
||||
|
||||
```
|
||||
`CREATE DATABASE tinyrss;`
|
||||
```
|
||||
|
||||
Finally, grant full permissions to the new user:
|
||||
|
||||
|
||||
```
|
||||
`GRANT ALL PRIVILEGES ON DATABASE tinyrss TO username;`
|
||||
```
|
||||
|
||||
That's it for the database. You can exit the **psql** app by typing **\q**.
|
||||
|
||||
#### 5\. Install Git
|
||||
|
||||
Installing TT-RSS requires Git, so install Git with:
|
||||
|
||||
|
||||
```
|
||||
`$ sudo apt install git -y`
|
||||
```
|
||||
|
||||
Now, change directory to wherever Nginx serves web pages:
|
||||
|
||||
|
||||
```
|
||||
`$ cd /var/www/html`
|
||||
```
|
||||
|
||||
Then download the latest source for TT-RSS:
|
||||
|
||||
|
||||
```
|
||||
`$ git clone https://git.tt-rss.org/fox/tt-rss.git tt-rss`
|
||||
```
|
||||
|
||||
Note that this process creates a new **tt-rss** folder.
|
||||
|
||||
#### 6\. Install and configure Tiny Tiny RSS
|
||||
|
||||
It's finally time to install and configure your new TT-RSS server. First, verify that you can open **<http://your.site/tt-rss/install/index.php>** in a web browser. If you get a **403 Forbidden** error, your permissions are not set properly on the **/var/www/html** folder. The following will usually fix this issue:
|
||||
|
||||
|
||||
```
|
||||
`$ chmod 755 /var/www/html/ -v`
|
||||
```
|
||||
|
||||
If everything goes as planned, you'll see the TT-RSS Installer page, and it will ask you for some database information. Just tell it the database username and password that you created earlier; the database name; **localhost** for the hostname; and **5432** for the port.
|
||||
|
||||
Click **Test Configuration** to continue. If all went well, you should see a red button labeled **Initialize Database.** Click on it to begin the installation. Once finished, you'll have a configuration file that you can copy and save as **config.php** in the TT-RSS directory.
|
||||
|
||||
After finishing with the installer, open your TT-RSS installation at **<http://yoursite/tt-rss/>** and log in with the default credentials (username: **admin**, password: **password**). The system will recommend that you change the admin password as soon as you log in. I highly recommend that you follow that advice and change it as soon as possible.
|
||||
|
||||
### Set up TT-RSS
|
||||
|
||||
If all went well, you can start using TT-RSS right away. It's recommended that you create a new non-admin user, log in as the new user, and start importing your feeds, subscribing, and configuring it as you see fit.
|
||||
|
||||
Finally, and this is super important, don't forget to read the [Updating Feeds][4] section on TT-RSS's wiki. It describes how to create a simple systemd service that will update your feeds. If you skip this step, your RSS feeds will not update automatically.
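As a rough illustration of what that wiki page walks you through, a minimal unit file might look something like the sketch below. Treat the path, the `--daemon` flag, and the `www-data` user as assumptions on my part, and follow the wiki for the exact invocation on your system:

```
# /etc/systemd/system/ttrss-update.service  (sketch; verify against the TT-RSS wiki)
[Unit]
Description=Tiny Tiny RSS feed update daemon
After=network.target postgresql.service

[Service]
# Use whichever user owns your TT-RSS files
User=www-data
ExecStart=/usr/bin/php /var/www/html/tt-rss/update.php --daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

You would then enable it with `sudo systemctl daemon-reload && sudo systemctl enable --now ttrss-update.service`.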
|
||||
|
||||
### Conclusion
|
||||
|
||||
Whew! That was a lot of work, but you did it! You now have your very own RSS aggregation server. Want to learn more about TT-RSS? I recommend checking out the official [FAQ][5], the [support][6] forum, and the detailed [installation][7] notes. Feel free to comment below if you have any questions or issues.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/20/2/ttrss-raspberry-pi
|
||||
|
||||
作者:[Patrick H. Mullins][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/pmullins
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life-raspberrypi_0.png?itok=Kczz87J2 (Raspberries with pi symbol overlay)
|
||||
[2]: https://tt-rss.org/
|
||||
[3]: https://opensource.com/sites/default/files/uploads/tt-rss.jpeg (Tiny Tiny RSS screenshot)
|
||||
[4]: https://tt-rss.org/wiki/UpdatingFeeds
|
||||
[5]: https://tt-rss.org/wiki/FAQ
|
||||
[6]: https://community.tt-rss.org/c/tiny-tiny-rss/support
|
||||
[7]: https://tt-rss.org/wiki/InstallationNotes
|
243
translated/tech/20170918 Fun and Games in Emacs.md
Normal file
@ -0,0 +1,243 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (lujun9972)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Fun and Games in Emacs)
|
||||
[#]: via: (https://www.masteringemacs.org/article/fun-games-in-emacs)
|
||||
[#]: author: (Mickey Petersen https://www.masteringemacs.org/about)
|
||||
|
||||
Emacs 中的游戏与乐趣
|
||||
======
|
||||
|
||||
又是周一,你正在为你的老板 Lumbergh 努力倒腾那些 [无聊之极的文档 ][1]。为什么不玩玩 Emacs 中类似 zork 的文本冒险游戏来让你的大脑从单调的工作中解脱出来呢?
|
||||
|
||||
但说真的,Emacs 中既有游戏,也有古怪的玩物。有些你可能有所耳闻。这些玩意唯一的共同点就是,它们大多是很久以前就添加到 Emacs 中的:有些东西真的是相当古怪(如您将在下面看到的),而另一些则显然是由无聊的员工或毕业生编写的。
|
||||
它们中都带着一种奇思妙想和随意性,这在今天的 Emacs 中很少见。
|
||||
Emacs 现在变得十分严肃,在某种程度上,它已经与 20 世纪 80 年代那些玩意被编写出来的时候大不一样。
|
||||
|
||||
|
||||
### 汉诺塔
|
||||
|
||||
[汉诺塔 ][2] 是一款古老的数学解谜游戏,有些人可能对它很熟悉,因为它的递归和迭代解法经常被用作计算机科学的教学辅助。
|
||||
|
||||
|
||||

|
||||
|
||||
Emacs 中有三个命令可以运行汉诺塔:`M-x hanoi` 默认为 3 个碟子;`M-x hanoi-unix` 和 `M-x hanoi-unix-64` 使用 unix 时间戳的位数(32 位或 64 位)作为默认碟子的个数,并且每秒钟自动移动一次,两者的不同之处在于后者假装使用 64 位时钟(因此有 64 个碟子)。
|
||||
|
||||
Emacs 中汉诺塔的实现可以追溯到 20 世纪 80 年代中期——确实是久得可怕。它有一些自定义选项 (`M-x customize-group RET hanoi RET`),如启用彩色碟子等。
|
||||
当你离开汉诺塔缓冲区或输入一个字符,你会收到一个讽刺的告别信息(见上文)。
|
||||
|
||||
### 5x5
|
||||
|
||||

|
||||
|
||||
|
||||
5x5 是一款逻辑解谜游戏:你有一个 5x5 的网格,中间的十字被填满;你的目标是通过按正确的顺序切换单元格的空满状态来填满所有的单元格,从而获得胜利。这并不像听起来那么容易!
|
||||
|
||||
输入 `M-x 5x5` 就可以开始玩了,使用可选的数字参数可以改变网格的大小。
|
||||
这款游戏的有趣之处在于它能向你建议下一步行动,并尝试求解整个游戏网格。它用到了 Emacs 自己的一款非常酷的符号 RPN 计算器 `M-x calc`(在 [Fun with Emacs Calc][3] 这篇文章中,我使用它来解决了一个简单的问题)。
|
||||
|
||||
所以我喜欢这个游戏的原因是它提供了一个非常复杂的解决器——真的,你应该通过 `M-x find-library RET 5x5` 来阅读其源代码——和一个试图通过强力破解游戏的“破解器”。
|
||||
|
||||
创建一个更大的游戏网格,例如输入 `M-10 M-x 5x5`,然后运行下面某个 `crack` 命令。破解器将尝试通过迭代获得最佳解决方案。它会实时运行该游戏,观看起来非常有趣:
|
||||
|
||||
- `M-x 5x5-crack-mutating-best`: 试图通过修改最佳解决方案来破解 5x5。
|
||||
|
||||
- `M-x 5x5-crack-mutating-current`: 试图通过修改当前解决方案来破解 5x5。
|
||||
|
||||
- `M-x 5x5-crack-random`: 尝试使用随机方案解破解 5x5。
|
||||
|
||||
- `M-x 5x5-crack-xor-mutate`: 尝试通过将当前方案和最佳方案进行异或运算来破解 5x5。
|
||||
|
||||
### 文本动画
|
||||
|
||||
您可以通过运行 `M-x animate-birthday-present` 并给出名字来显示一个奇特的生日礼物动画。它看起来很酷!
|
||||
|
||||

|
||||
|
||||
`M-x butterfly` 命令中也使用了 `animate` 包,butterfly 命令被添加到 Emacs 中,以向上面的 [XKCD][4] 漫画致敬。当然,漫画中的 Emacs 命令在技术上是无效的,但它的幽默足以弥补这一点。
|
||||
|
||||
### 黑箱
|
||||
|
||||
我将逐字引用这款游戏的目标:
|
||||
|
||||
> 游戏的目标是通过向黑盒子发射光线来找到四个隐藏的球。有四种可能:
|
||||
> 1) 射线将通过盒子不受干扰,
|
||||
> 2) 它将击中一个球并被吸收,
|
||||
> 3) 它将偏转并退出盒子,或
|
||||
> 4) 立即偏转,甚至不被允许进入盒子。
|
||||
|
||||
所以,这有点像我们小时候玩的 [Battleship][5],但是……是专为物理专业高学历的人准备的?
|
||||
|
||||
这是另一款添加于 20 世纪 80 年代的游戏。我建议你输入 `C-h f blackbox` 来阅读玩法说明(文档巨大)。
|
||||
|
||||
|
||||
### 泡泡
|
||||
|
||||

|
||||
|
||||
|
||||
`M-x bubbles` 游戏相当简单:你必须用尽可能少的移动清除尽可能多的"泡泡"。当你移除气泡时,其他气泡会掉落并粘在一起。
|
||||
这是一款有趣的游戏,此外如果你使用 Emacs 的图形界面,它还支持图像显示。而且它还支持鼠标。
|
||||
|
||||
您可以通过调用 `M-x bubbles-set-game-<难度>` 来设置难度,其中 `<难度>` 可以是:`easy`、`medium`、`difficult`、`hard` 或 `userdefined`。
|
||||
此外,您可以使用 `M-x customize-group RET bubbles RET` 来更改图形、网格大小和颜色。
|
||||
|
||||
由于它既简单又有趣,这是 Emacs 中我最喜欢的游戏之一。
|
||||
|
||||
### 幸运饼干
|
||||
|
||||
我喜欢 `fortune` 命令。每当我启动一个新 shell 时,那些刻薄、无益、常常带有讽刺意味的"建议"(以及文学摘录、谜语)就会点亮我的一天。
|
||||
|
||||
令人困惑的是,Emacs 中有两个包做了类似的事情:`fortune` 和 `cookie`。前者主要用于在电子邮件签名中添加幸运饼干消息,而后者只是一个简单的 fortune 格式阅读器。
|
||||
|
||||
不管怎样,在使用 Emacs 的 `cookie` 包前,你首先需要通过 `M-x customize-option RET cookie-file RET` 来自定义变量 `cookie-file`,告诉它从哪里找到 fortune 文件。
|
||||
|
||||
如果你的操作系统是 Ubuntu,那么先安装 `fortune` 软件包,然后就能在 `/usr/share/games/fortunes/` 目录中找到这些文件了。
|
||||
|
||||
之后你就可以调用 `M-x cookie` 随机显示 fortune 内容,或者,如果你想的话,也可以调用 `M-x cookie-apropos` 查找所有匹配的 cookie。
|
||||
|
||||
### Decipher
|
||||
|
||||
这个包完美地抓住了 Emacs 的实用本质:这个包为你破解简单的替换密码(如密码谜题)提供了一个很有用的界面。
|
||||
你知道,二十多年前,某人确实迫切需要破解很多基础密码。正是像这个模块这样的小玩意让我非常高兴地用起 Emacs 来:这个模块只对少数人有用,但是,如果你突然需要它了,那么它就在那里等着你。
|
||||
|
||||
那么如何使用它呢?让我们假设使用 “rot13” 密码:在 26 个字符的字母表中,将字符旋转 13 个位置。
|
||||
通过 `M-x ielm` (Emacs 用于 [运行 Elisp][6] 的 REPL 环境)可以很容易在 Emacs 中进行尝试:
|
||||
|
||||
|
||||
```
|
||||
*** Welcome to IELM *** Type (describe-mode) for help.
|
||||
ELISP> (rot13 "Hello, World")
|
||||
"Uryyb, Jbeyq"
|
||||
ELISP> (rot13 "Uryyb, Jbeyq")
|
||||
"Hello, World"
|
||||
ELISP>
|
||||
```
|
||||
|
||||
那么,decipher 模块又是如何帮助我们的呢?让我们创建一个新的缓冲区 `test-cipher`,并输入您的密文(在我的例子中是 `Uryyb, Jbeyq`)。
|
||||
|
||||

|
||||
|
||||
您现在面对的是一个相当复杂的界面。把光标放在密文(紫色行)中的任意字符上,猜猜这个字符可能是什么:Emacs 将根据你的选择更新其他明文的猜测结果,并告诉你字母表中的字符是如何分配的。
|
||||
|
||||
您现在可以使用下面的各种辅助命令,来帮助推断密文字符可能对应的明文字符:
|
||||
|
||||
- **`D`:** 显示双字母组合(密文中两个字符的组合)及其频率的列表
|
||||
|
||||
- **`F`:** 显示每个密文字母的频率
|
||||
|
||||
- **`N`:** 显示字符的邻接信息。我不确定这是干啥的。
|
||||
|
||||
- **`M` 和 `R`:** 保存和恢复一个检查点,允许你对工作进行分支以探索破解密码的不同方法。
|
||||
|
||||
总而言之,对于这样一个深奥的任务,这个包是相当令人印象深刻的!如果你经常破解密码,也许这个程序包能帮上忙?
|
||||
|
||||
### 医生
|
||||
|
||||

|
||||
|
||||
啊,Emacs 医生。其基于最初的 [ELIZA][7],"医生"试图对你说的话进行心理分析,并试图把问题复述给你。体验它的那几分钟相当有趣,它也是 Emacs 中最著名的古怪玩意之一。你可以使用 `M-x doctor` 来运行它。
|
||||
|
||||
### Dunnet
|
||||
|
||||
Emacs 自己特有的类 Zork 文本冒险游戏。输入 `M-x dunnet` 就能玩了。
|
||||
这是一款相当不错的游戏,虽然时间不长,但非常著名,很少有人真正玩到最后。
|
||||
|
||||
如果你发现自己能在无聊的文档工作之间挤出一点时间,那么这是一款超级棒的游戏:由于它是纯文本的,相当于自带"老板屏幕"。
|
||||
|
||||
哦,还有,不要吃掉那块 CPU 卡 :)
|
||||
|
||||
### 五子棋
|
||||
|
||||

|
||||
|
||||
另一款写于 20 世纪 80 年代的游戏。你必须连接 5 个方块,井字游戏风格。你可以运行 `M-x gomoku` 来与 Emacs 对抗。游戏还支持鼠标,非常方便。您也可以自定义 `gomoku` 组来调整网格的大小。
|
||||
|
||||
### 生命游戏
|
||||
|
||||
[Conway 的生命游戏 ][8] 是细胞自动机的一个著名例子。Emacs 版本提供了一些启动模式,你可以(通过 elisp 编程)通过调整 `life-patterns` 变量来更改这些模式。
|
||||
|
||||
你可以用 `M-x life` 触发生命游戏。事实上,它的全部内容,包括代码和注释,总共还不到 300 行,这也让人印象深刻。
|
||||
|
||||
### 乒乓,贪吃蛇和俄罗斯方块
|
||||
|
||||

|
||||
|
||||
这些经典游戏都是使用 Emacs 包 `gamegrid` 实现的,这是一个用于构建网格游戏(如俄罗斯方块和贪吃蛇)的通用框架。gamegrid 包的伟大之处在于它同时兼容图形化和终端 Emacs: 如果你在 GUI 中运行 Emacs,你会得到精美的图形;如果你没有,你得到简单的 ASCII 艺术。
|
||||
|
||||
你可以通过输入 `M-x pong`,`M-x snake`,`M-x tetris` 来运行这些游戏。
|
||||
|
||||
特别是俄罗斯方块实现得非常到位,速度会逐渐增加,并且支持滑动方块。而且既然你已经有了源代码,你完全可以移除那个讨厌的 Z 形块,没人喜欢它!
|
||||
|
||||
### Solitaire
|
||||
|
||||

|
||||
|
||||
可惜,这不是纸牌游戏,而是一款基于孔钉(peg)的游戏:你可以选择一块石头(`o`)并"跳过"相邻的石头进入洞中(`.`),同时移除被跳过的那块石头,
|
||||
重复该过程,直到棋盘被清空(只留下一块石头)。
|
||||
|
||||
如果你卡住了,可以使用内置的解题器 `M-x solitaire-solve`。
|
||||
|
||||
### Zone
|
||||
|
||||
我的另一个最爱。这是一个屏幕保护程序——或者更确切地说,是一系列的屏幕保护程序。
|
||||
|
||||
输入 `M-x zone`,然后看看屏幕上发生了什么!
|
||||
|
||||
您可以通过运行 `M-x zone-when-idle` (或从 elisp 调用它)来配置屏幕保护程序的空闲时间,时间以秒为单位。
|
||||
您也可以通过 `M-x zone-leave-me-alone` 来关闭它。
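下面是一个简单的示例(空闲时长 120 秒只是举例,可按需调整),可以放进你的 init 文件中:

```
(require 'zone)
(zone-when-idle 120)  ; 空闲 120 秒后自动启动 zone 屏幕保护程序
```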
|
||||
|
||||
如果它在你的同事看着的时候被启动,你的同事肯定会抓狂的。
|
||||
|
||||
### 乘法解谜
|
||||
|
||||

|
||||
|
||||
这是另一个脑筋急转弯的益智游戏。当您运行 `M-x mpuz` 时,将看到一道乘法谜题,你必须将字母替换为对应的数字,并确保这些数字相加(相乘?)后与结果相符。
|
||||
|
||||
如果遇到难题,可以运行 `M-x mpuz-show-solution` 来解决。
|
||||
|
||||
### 杂项
|
||||
|
||||
还有更多好玩的东西,但它们就不如刚才那些那么好玩好用了:
|
||||
|
||||
- 你可以通过 `M-x morse-region` 和 `M-x unmorse-region` 将一个区域翻译成莫尔斯电码。
|
||||
- Dissociated Press 是一个非常简单的命令,它将一个类似随机游走的马尔可夫链生成器应用到缓冲区中的文本上,以此生成无意义的文本。试一下 `M-x dissociated-press`。
|
||||
- Gamegrid 包是构建网格游戏的通用框架。到目前为止,只有俄罗斯方块,乒乓和贪吃蛇使用了它。其名为 `gamegrid`。
|
||||
- `gametree` 软件包是一个通过电子邮件记录和跟踪国际象棋游戏的复杂方法。
|
||||
- `M-x spook` 命令插入随机单词(通常是在电子邮件中),目的是混淆/超载 "NSA trunk trawler"——记住,这个模块可以追溯到 20 世纪 80 年代和 90 年代——那时应该有间谍们在监听各种单词。当然,即使是在十年前,这样做也会显得非常偏执和古怪,不过现在看来已经不那么奇怪了……
|
||||
|
||||
|
||||
### 结论
|
||||
|
||||
我喜欢 Emacs 附带的游戏和玩具。它们大多来自于,嗯,我们姑且称之为一个不同的时代:一个允许或甚至鼓励奇思妙想的时代。
|
||||
有些玩意非常经典(如俄罗斯方块和汉诺塔),有些对经典游戏进行了有趣的变种(如黑盒)——但我很高兴这么多年后他们依然在 Emacs 中。
|
||||
我想知道时至今日,这些玩意是否还会纳入 Emacs 的代码库中;嗯,它们很可能不会——它们将被归入包管理仓库中,而在这个干净而贫瘠的世界中,它们无疑属于包管理仓库。
|
||||
|
||||
Emacs 要求将对 Emacs 体验不重要的内容转移到包管理仓库 ELPA 中。我的意思是,作为一个开发者,这是有道理的,但是……对于每一个被移出并流放到 ELPA 的包,我们都在蚕食 Emacs 的精髓。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
|
||||
via: https://www.masteringemacs.org/article/fun-games-in-emacs
|
||||
|
||||
作者:[Mickey Petersen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.masteringemacs.org/about
|
||||
[b]:https://github.com/lujun9972
|
||||
[1]:https://en.wikipedia.org/wiki/Office_Space
|
||||
[2]:https://en.wikipedia.org/wiki/Tower_of_Hanoi
|
||||
[3]:https://www.masteringemacs.org/article/fun-emacs-calc
|
||||
[4]:http://www.xkcd.com
|
||||
[5]:https://en.wikipedia.org/wiki/Battleship_(game)
|
||||
[6]:https://www.masteringemacs.org/article/evaluating-elisp-emacs
|
||||
[7]:https://en.wikipedia.org/wiki/ELIZA
|
||||
[8]:https://en.wikipedia.org/wiki/Conway's_Game_of_Life
|
@ -7,16 +7,16 @@
|
||||
[#]: via: (https://acidwords.com/posts/2019-12-04-handle-chromium-and-firefox-sessions-with-org-mode.html)
|
||||
[#]: author: (Sanel Z https://acidwords.com/)
|
||||
|
||||
Handle Chromium & Firefox sessions with org-mode
|
||||
通过 org-mode 管理 Chromium 和 Firefox 会话
|
||||
======
|
||||
|
||||
I was big fan of [Session Manager][1], small addon for Chrome and Chromium that will save all open tabs, assign the name to session and, when is needed, restore it.
|
||||
我是 [Session Manager][1] 的大粉丝,它是 Chrome 和 Chromium 的小插件,可以保存所有打开的选项卡,为会话命名,并在需要时恢复会话。
|
||||
|
||||
Very useful, especially if you are like me, switching between multiple "mind sessions" during the day - research, development or maybe news reading. Or simply, you'd like to remember workflow (and tabs) you had few days ago.
|
||||
它非常有用,特别是如果你像我一样,白天的时候需要在多个“思维活动”之间切换——研究、开发或者新闻阅读。或者您只是单纯地希望记住几天前的工作流(和选项卡)。
|
||||
|
||||
After I decided to ditch all extensions from Chromium except [uBlock Origin][2], it was time to look for alternative. My main goal was it to be browser agnostic and session links had to be stored in text file, so I can enjoy all the goodies of plain text file. What would be better for that than good old [org-mode][3] ;)
|
||||
在我决定放弃 Chromium 上除了 [uBlock Origin][2] 之外的所有扩展后,也到了寻找替代品的时候了。我的主要目标是:它要与浏览器无关,同时会话链接需要保存在文本文件中,这样我就可以享受纯文本的全部好处了。还有什么比 [org-mode][3] 更好呢 ;)
|
||||
|
||||
Long time ago I found this trick: [Get the currently open tabs in Google Chrome via the command line][4] and with some elisp sugar and coffee, here is the code:
|
||||
很久以前我就发现了这个小诀窍:[通过命令行获取当前在谷歌 Chrome 中打开的标签 ][4] 再加上些 elisp 代码:
|
||||
|
||||
```
|
||||
(require 'cl-lib)
|
||||
@ -57,9 +57,9 @@ Make sure to put cursor on date heading that contains list of urls."
|
||||
(forward-line 1)))))
|
||||
```
|
||||
|
||||
So, how does it work?
|
||||
那么,它的工作原理是什么呢?
|
||||
|
||||
Evaluate above code, open new org-mode file and call `M-x save-chromium-session`. It will create something like this:
|
||||
运行上述代码,打开一个新 org-mode 文件并调用 `M-x save-chromium-session`。它会创建类似这样的东西:
|
||||
|
||||
```
|
||||
* [2019-12-04 12:14:02]
|
||||
@ -68,9 +68,9 @@ Evaluate above code, open new org-mode file and call `M-x save-chromium-session`
|
||||
- https://news.ycombinator.com
|
||||
```
|
||||
|
||||
or whatever urls are running in Chromium instance. To restore it back, put cursor on desired date and run `M-x restore-chromium-session`. All tabs should be back.
|
||||
也就是任何在 chromium 实例中运行着的 URL。要还原的话,则将光标置于所需日期上然后运行 `M-x restore-chromium-session`。所有标签都应该恢复了。
|
||||
|
||||
Here is how I use it, with randomly generated data for the purpose of this text:
|
||||
以下是我的使用案例,其中的数据是随机生成的:
|
||||
|
||||
```
|
||||
#+TITLE: Browser sessions
|
||||
@ -88,27 +88,28 @@ Here is how I use it, with randomly generated data for the purpose of this text:
|
||||
- https://news.ycombinator.com
|
||||
```
|
||||
|
||||
Note that hack for reading Chromium session isn't perfect: `strings` will read whatever looks like string and url from binary database and sometimes that will yield small artifacts in urls. But, you can easily edit those and keep session file lean and clean.
|
||||
请注意,用于读取 Chromium 会话的方法并不完美:`strings` 将从二进制数据库中读取任何类似 URL 字符串的内容,有时这会产生不完整的 URL。不过,您可以很方便地编辑它们,从而保持会话文件简洁。
|
||||
|
||||
To actually open tabs, elisp code will use [browse-url][5] and it can be further customized to run Chromium, Firefox or any other browser with `browse-url-browser-function` variable. Make sure to read documentation for this variable.
|
||||
为了真正打开标签页,elisp 代码中使用了 [browse-url][5],它可以通过 `browse-url-browser-function` 变量进一步定制成运行 Chromium、Firefox 或任何其他浏览器。请务必阅读该变量的相关文档。
|
||||
|
||||
Don't forget to put session file in git, mercurial or svn and enjoy the fact that you will never loose your session history again :)
|
||||
别忘了把会话文件放在 git、mercurial 或 svn 中,这样你就再也不会丢失会话历史记录了 :)
|
||||
|
||||
### What about Firefox?
|
||||
### 那么 Firefox 呢?
|
||||
|
||||
If you are using Firefox (recent versions) and would like to pull session urls, here is how to do it.
|
||||
如果您正在使用 Firefox(最近的版本),并且想要获取会话 URL,下面是操作方法。
|
||||
|
||||
First, download and compile [lz4json][6], small tool that will decompress Mozilla lz4json format, where Firefox stores session data. Session data (at the time of writing this post) is stored in `$HOME/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4`.
|
||||
首先,下载并编译 [lz4json][6],这是一个可以解压缩 Mozilla lz4json 格式的小工具,Firefox 以这种格式来存储会话数据。会话数据(在撰写本文时)存储在 `$HOME/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4` 中。
|
||||
|
||||
If Firefox is not running, `recovery.jsonlz4` will not be present, but use `previous.jsonlz4` instead.
|
||||
如果 Firefox 没有运行,则没有 `recovery.jsonlz4`,这种情况下用 `previous.jsonlz4` 代替。
|
||||
|
||||
|
||||
To extract urls, try this in terminal:
|
||||
要提取网址,尝试在终端运行:
|
||||
|
||||
```
|
||||
$ lz4jsoncat recovery.jsonlz4 | grep -oP '"(http.+?)"' | sed 's/"//g' | sort | uniq
|
||||
```
|
||||
|
||||
and update `save-chromium-session` with:
|
||||
然后更新 `save-chromium-session` 为:
|
||||
|
||||
```
|
||||
(defun save-chromium-session ()
|
||||
@ -122,7 +123,7 @@ and update `save-chromium-session` with:
|
||||
;; rest of the code is unchanged
|
||||
```
|
||||
|
||||
Updating documentation strings, function name and any further refactoring is left for exercise.
|
||||
更新本函数的文档字符串、函数名以及进一步的重构都留作练习。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
@ -0,0 +1,50 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (heguangzhi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How Kubernetes Became the Standard for Compute Resources)
|
||||
[#]: via: (https://www.linux.com/articles/how-kubernetes-became-the-standard-for-compute-resources/)
|
||||
[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/)
|
||||
|
||||
Kubernetes 如何成为计算资源的标准
|
||||
======
|
||||
|
||||
|
||||
<https://www.linux.com/wp-content/uploads/2019/08/elevator-1598431_1920.jpg>
|
||||
|
||||
对于云原生生态系统来说,2019 年是改变游戏规则的一年。既有大型的[并购][1],如 Red Hat、Docker 和 Pivotal 的交易,也出现了 Rancher Labs 和 Mirantis 这样的其他玩家。
|
||||
|
||||
“所有这些并购”,Rancher Labs(一家为采用容器的团队提供完整软件栈的公司)的联合创始人兼首席执行官盛亮表示,“以及这一领域的成功,都表明市场成熟的速度很快。”
|
||||
|
||||
传统上,像 Kubernetes 和 Docker 这样的新兴技术只会吸引开发者和像脸书、谷歌这样的超级用户,这群人之外没有多少人感兴趣。然而,这两种技术都在企业层面得到了广泛采用。突然之间,出现了一个巨大的市场和巨大的机会,几乎每个人都跳了进去。有人带来了创新的解决方案,也有人试图追赶其他人。这个领域很快变得非常拥挤和热闹。
|
||||
|
||||
它也改变了创新的方式。[早期采用者通常是精通技术的公司。][2]现在,几乎每个人都在使用它,即使是在不被认为是 Kubernetes 地盘的地方。它改变了市场动态,像 Rancher Labs 这样的公司见证了独特的用例。
|
||||
|
||||
梁补充道:“我从来没有经历过像 Kubernetes 这样快速、动态的市场,或这样的技术演进。当我们五年前起步的时候,这是一个非常拥挤的领域。随着时间的推移,我们的大多数同行因为这样或那样的原因消失了:他们要么无法适应变化,要么选择不去适应某些变化。”
|
||||
|
||||
在 Kubernetes 的早期,最明显的机会是构建 Kubernetes 发行版和围绕 Kubernetes 的业务。这是一项新技术,众所周知,它的安装、升级和操作都相当复杂。
|
||||
|
||||
当谷歌、AWS 和微软进入市场时,一切都变了。当时,有一大批供应商蜂拥而至,为这个平台提供解决方案。梁表示:“一旦像谷歌这样的云提供商决定把 Kubernetes 作为一项服务来提供,甚至把它当作亏本出售的商品免费提供、以推动基础设施的消费,我们就知道,运营和支持 Kubernetes 业务的优势将非常有限了。”
|
||||
|
||||
对非谷歌玩家来说,也并非一切都不好。由于云供应商把 Kubernetes 作为服务来提供,消除了它带来的所有复杂性,这意味着该技术会得到更广泛的采用,即使是那些先前因运营成本而不愿使用它的人也会用起来。这也意味着 Kubernetes 将变得无处不在,并成为一项行业标准。
|
||||
|
||||
“Rancher Labs 是极少数将此视为机遇、并比其他公司看得更远的公司之一。我们意识到 Kubernetes 将成为新的计算标准,就像 TCP/IP 成为网络标准一样。”梁说。
|
||||
|
||||
CNCF 在围绕 Kubernetes 构建充满活力的生态系统方面发挥着至关重要的作用,它创建了一个庞大的社区来构建、培育和商业化云原生开源技术。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/articles/how-kubernetes-became-the-standard-for-compute-resources/
|
||||
|
||||
作者:[Swapnil Bhartiya][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[heguangzhi](https://github.com/heguangzhi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/author/swapnil/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.cloudfoundry.org/blog/2019-is-the-year-of-consolidation-why-ibms-deal-with-red-hat-is-a-harbinger-of-things-to-come/
|
||||
[2]: https://www.packet.com/blog/open-source-season-on-the-kubernetes-highway/
|