Mirror of https://github.com/LCTT/TranslateProject.git
Synced 2024-12-26 21:30:55 +08:00

Merge remote-tracking branch 'LCTT/master'
Commit 83667f9742

published/20190301 Emacs for (even more of) the win.md (new file, 83 lines)
@@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: (oneforalone)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11046-1.html)
[#]: subject: (Emacs for (even more of) the win)
[#]: via: (https://so.nwalsh.com/2019/03/01/emacs)
[#]: author: (Norman Walsh https://so.nwalsh.com)
Emacs for (even more of) the win
======

![](https://img.linux.net.cn/data/attachment/album/201907/02/002550x2ol48004hx6e0od.jpg)

I use Emacs every day, though I rarely stop to think about it. But whenever I do use it, it brings me a great deal of joy.

> If you are a professional writer… Emacs outshines all other editors in the way the noonday sun does the stars. It is not just bigger and brighter; it effortlessly makes everything else vanish.

I've been using [Emacs][1] for more than twenty years. I use it to write almost everything (I edit Scala and Java in [IntelliJ][2]). Wherever possible, I read my email in it too.

Although I've used Emacs for decades, I realized around New Year's that my use of it had barely changed over the past ten-plus years. New editing modes had come along and I'd picked up a plugin or two; a few years ago I did adopt [Helm][3]. But most of the time it simply did all the heavy lifting I needed, day after day, without complaint and without getting in my way. On the one hand, that's a testament to how good it is. On the other hand, it was an invitation to dig deeper and see what I'd been missing.

At the same time, I resolved to improve the way I work in a few areas:

* **Better agenda management.** I'm responsible for several projects at work, with both regular and ad hoc meetings; some I run, and some I only need to attend.

I realized I had become somewhat halfhearted about attending meetings. It's easy to sit in the room while actually reading email or working on something else. (I strongly oppose "no laptops" rules for meetings, but that's another topic.)

Attending meetings halfheartedly has several problems. First, it's disrespectful to the person running the meeting and to the other participants. That alone is reason enough not to do it, but I realized there's another problem: it disguises the cost of meetings.

If you're in a meeting but you also answered an email, and maybe fixed a bug, then the meeting costs nothing (or not as much). And if meetings are cheap, there will be more of them.

I want fewer, shorter meetings. I don't want to disguise their cost; I want meetings to be valuable, and unless one is absolutely necessary, simply not held.

Sometimes a meeting is absolutely necessary. And I think a short meeting can sometimes resolve an issue quickly. But if I'm attending ten short meetings a day, let's not pretend I'm accomplishing anything.

I decided to take notes at every meeting I attend. I'm not saying I'll always produce formal minutes, but I'll certainly spend at least a few minutes on it. That keeps my attention on the meeting and off everything else.

* **Better time management.** Between work and home, I have a lot of things I need to do and want to do. I've been tracking some of them in issue lists, some in saved email threads (in both Emacs and [Gmail][4], for slightly different kinds of reminders), and some in calendars, assorted "to-do lists" on my phone, and on scraps of paper. There are probably other places too.

I decided to keep them all in one place. Not because I think one consistent place is inherently better, but because I want to accomplish two things: first, with everything in one place I can get a better overall picture of where I'm investing my effort; second, I want to develop the habit of recording, tracking, and preserving them (a "habit" being a settled or regular tendency or practice, especially one that is hard to give up).

* **Better accountability.** If you work in certain scientific or engineering fields, you develop the habit of keeping a [lab notebook][5]. Alas, I never did. But I decided to start.

I'm not interested in the legal requirements that encourage bound pages or redaction with a permanent marker. What I'm interested in is building the habit of keeping a record. My goal is to have one place to jot down ideas, design sketches, and so on. If inspiration suddenly strikes, or I think of an edge case that isn't in the test suite, I want my instinct to be to write it in my journal, not to scribble it on a scrap of paper or to trust that I'll remember it.

These resolutions pointed me, fairly quickly, more or less straight at [Org][6] mode. Org mode has a huge, active, and devoted user community. I'd used it before (incidentally, I'd even [written][7] about it a few years back), and I once spent a good while [integrating MarkLogic][8] into it. (That paid off in the last week or two!)

But I'd never used Org mode in earnest.

I'm using it now. I use it for a few minutes here and there: I record everything I need to do in it, and I keep a journal. I'm not sure there's much value in my trying to argue for it or list all its features; a quick web search will turn up plenty.

If you use Emacs, you should be using Org mode. If you've never used Emacs, I'm sure you wouldn't be the first person to take up Emacs because of Org mode. It can do a great deal. It takes a little time to learn your way around and to learn the shortcuts, but I think it's worth it. (If you carry an [iOS][9] device in your pocket, I recommend [beorg][10] for capturing notes on the go.)

Naturally, I worked out how to [extract XML from it][11] (where "working out" is an entertaining euphemism for "programming in elisp"), and then how to turn that back into the markup my blog uses (done, of course, with a single keypress in Emacs). This is the first post written in Org mode. It won't be the last.

P.S. Happy birthday, [little blog][12].

--------------------------------------------------------------------------------

via: https://so.nwalsh.com/2019/03/01/emacs

Author: [Norman Walsh][a]
Topic selection: [lujun9972][b]
Translator: [oneforalone](https://github.com/oneforalone)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://so.nwalsh.com
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Emacs
[2]: https://en.wikipedia.org/wiki/IntelliJ_IDEA
[3]: https://emacs-helm.github.io/helm/
[4]: https://en.wikipedia.org/wiki/Gmail
[5]: https://en.wikipedia.org/wiki/Lab_notebook
[6]: https://en.wikipedia.org/wiki/Org-mode
[7]: https://www.balisage.net/Proceedings/vol17/html/Walsh01/BalisageVol17-Walsh01.html
[8]: https://github.com/ndw/ob-ml-marklogic/
[9]: https://en.wikipedia.org/wiki/IOS
[10]: https://beorgapp.com/
[11]: https://github.com/ndw/org-to-xml
[12]: https://so.nwalsh.com/2017/03/01/helloWorld
@@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@@ -1,88 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BitTorrent Client Deluge 2.0 Released: Here’s What’s New)
[#]: via: (https://itsfoss.com/deluge-2-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

BitTorrent Client Deluge 2.0 Released: Here’s What’s New
======
You probably already know that [Deluge][1] is one of the [best torrent clients available for Linux users][2]. However, the last stable release was almost two years ago.

Even though it remained under active development, there was no major stable release – until recently. The latest version as of this writing is 2.0.2. So, if you haven’t downloaded the latest stable version yet – do try it out.

In either case, if you’re curious, let’s talk about what’s new.

![Deluge][3]

### Major improvements in Deluge 2.0

The new release introduces multi-user support – a much-needed addition.

In addition to that, there have been several performance improvements to handle more torrents with faster loading times.

Also, with version 2.0, Deluge moved to Python 3, with only minimal support for Python 2.7 remaining. The user interface also migrated from GTK to GTK3.
As per the release notes, there are several more significant additions and improvements, including:

* Multi-user support.
* Performance updates to handle thousands of torrents with faster loading times.
* A new console UI that emulates the GTK/Web UIs.
* GTK UI migrated to GTK3, with UI improvements and additions.
* Magnet pre-fetching to allow file selection when adding a torrent.
* Full support for the libtorrent 1.2 release.
* Language-switching support.
* Improved documentation, hosted on ReadTheDocs.
* An AutoAdd plugin that replaces the built-in functionality.
### How to install or upgrade to Deluge 2.0

![][4]

You should follow the official [installation guide][5] (using the PPA or PyPI) for your Linux distro. However, if you are upgrading, you should read the note in the release notes:

“_Deluge 2.0 is not compatible with Deluge 1.x clients or daemons, so these will require upgrading too. Also, third-party Python scripts may not be compatible if they directly connect to the Deluge client and will need migrating._”

In other words, they insist that you always back up your [config][6] before a major version upgrade to guard against data loss.

And if you are the author of a plugin, you will need to upgrade it to make it compatible with the new release.

Directly downloadable app packages are not yet available for Windows and macOS. However, the release notes mention that they are being worked on.

As an alternative, you can install them manually by following the [installation guide][5] in the updated official documentation.

**Wrapping Up**

What do you think about the latest stable release? Do you use Deluge as your BitTorrent client? Or do you find something else to be a better alternative?

Let us know your thoughts in the comments below.
--------------------------------------------------------------------------------

via: https://itsfoss.com/deluge-2-release/

Author: [Ankush Das][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://dev.deluge-torrent.org/
[2]: https://itsfoss.com/best-torrent-ubuntu/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/deluge.jpg?fit=800%2C410&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/Deluge-2-release.png?resize=800%2C450&ssl=1
[5]: https://deluge.readthedocs.io/en/latest/intro/01-install.html
[6]: https://dev.deluge-torrent.org/wiki/Faq#WheredoesDelugestoreitssettingsconfig
[7]: https://itsfoss.com/snap-store/
@@ -0,0 +1,99 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tempered Networks simplifies secure network connectivity and microsegmentation)
[#]: via: (https://www.networkworld.com/article/3405853/tempered-networks-simplifies-secure-network-connectivity-and-microsegmentation.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)

Tempered Networks simplifies secure network connectivity and microsegmentation
======
Tempered Networks’ Identity Defined Network platform uses the Host Identity Protocol to partition and isolate the network into trusted microsegments, providing an easy and cost-effective way to secure the network.
![Thinkstock][1]
The TCP/IP protocol is the foundation of the internet and pretty much every single network out there. The protocol was designed 45 years ago and was originally created only for connectivity. There’s nothing in the protocol for security, mobility, or trusted authentication.

The fundamental problem with TCP/IP is that the IP address within the protocol represents both the device location and the device identity on a network. This dual functionality of the address lacks the basic mechanisms for security and mobility of devices on a network.

This is one of the reasons networks are so complicated today. To connect to things on a network or over the internet, you need VPNs, firewalls, routers, cell modems, etc., and you have all the configurations that come with ACLs, VLANs, certificates, and so on. The nightmare grows exponentially when you factor in internet of things (IoT) device connectivity and security. It’s all unsustainable at scale.

Clearly, we need a more efficient and effective way to take on network connectivity, mobility, and security.

**[ Also read: [What is microsegmentation? How getting granular improves network security][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]**

The Internet Engineering Task Force (IETF) tackled this problem with the Host Identity Protocol (HIP). It provides a method of separating the endpoint-identifier and locator roles of IP addresses. It introduces a new Host Identity (HI) name space, based on public keys, from which endpoint identifiers are taken. HIP uses existing IP addressing and forwarding for locators and packet delivery. The protocol is compatible with IPv4 and IPv6 applications and utilizes a customized IPsec tunnel mode for confidentiality, authentication, and integrity of network applications.

Ratified by the IETF in 2015, HIP represents a new security networking layer within the OSI stack. Think of it as Layer 3.5. It flips the trust model: TCP/IP is inherently promiscuous and will answer anything that wants to talk to a device on that network, whereas HIP is a trust protocol that will not answer anything on the network unless that connection has been authenticated and authorized based on its cryptographic identity. It is, in effect, a form of [software-defined perimeter][4] around specific network resources. This is also known as [microsegmentation][5].

![][6]

### Tempered Networks’ IDN platform creates a segmented, encrypted network

[Tempered Networks][7] has created a platform utilizing HIP and a variety of technologies that partitions and isolates the network into trusted microsegments. Tempered Networks’ Identity Defined Networking (IDN) platform is deployed as an overlay technology that layers on top of any IP network. HIP was designed to be both forward and backward compatible with any IP network, without requiring any changes to the underlay network. The overlay network creates a direct tunnel between the two things you want to connect.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][8] ]**

The IDN platform uses three components to create a segmented and encrypted network: an orchestration engine called the Conductor, the HIPrelay identity-based router, and HIP Services enforcement points.

The Conductor is a centralized orchestration and intelligence engine that connects, protects, and disconnects any resource globally through a single pane of glass. The Conductor is used to define and enforce policies for HIP Services. Policy configuration is done in a simple point-and-click manner. The Conductor is available as a physical or virtual appliance or in the Amazon Web Services (AWS) cloud.

HIP Services provide software-based policy enforcement, enabling secure connectivity among IDN-protected devices, as well as cloaking, segmentation, identity-based routing, and IP mobility. They can be deployed on or in-line to any device or system and come in the form of HIPswitch hardware, HIPserver, HIPclient, Cloud HIPswitch, or Virtual HIPswitch. HIP Services can also be embedded in customer hardware or applications.

Placing HIPswitches in front of any connected device renders the device HIP-enabled and immediately microsegments the traffic, isolating inbound and outbound traffic from the underlying network. HIPswitches deployed on the network automatically register with the Conductor using their cryptographic identity.

HIPrelay works with the HIP Service-enabled endpoints to deliver peer-to-peer connectivity for any device or system across all networks and transport options. Rather than using Layer 3 or 4 rule sets or traditional routing protocols, HIPrelay routes and connects encrypted communications based on provable cryptographic identities traversing existing infrastructure.

It sounds complicated, but it really isn’t. A use case example should demonstrate the ease and power of this solution.
### Use case: Smart Ships

An international cruise line recently installed Tempered Networks’ IDN solution to provide tighter security around its critical maritime systems. Prior to deployment, the systems for fuel, propulsion, navigation, ballast, weather, and incinerators were on a flat Layer 2 network, which basically allowed authorized users of the network to see everything.

Given that vendors of the different maritime systems had access to their own systems, the lack of microsegmentation allowed them to see the other systems as well. The cruise line needed a simple way to segment access to these different systems — isolating them from each other — and it wanted to do so without having to put the ships in dry dock for the network reconfiguration.

The original configuration looked like this:

![][9]

The company implemented microsegmentation of the network based on the functionality of the systems. This isolated and segmented vendor access to only their own systems — everything else was hidden from them. The implementation involved installing HIPrelay identity routing in the cloud, several HIPswitch wireless devices onboard the ships, and HIPclient software on the vendors’ and crew members’ devices. The Conductor appliance that managed the entire deployment was installed in AWS.

All of that was done without impacting the underlying network, and no dry dock time was required for the deployment. In addition, the cruise line was able to eliminate the internal firewalls and VPNs that had previously been used for segmentation and remote access. The resulting configuration looks like this:

![][10]

The color coding of the illustration above indicates which systems are now able to directly see and communicate with their corresponding controllers and sensors. Everything else on the network is hidden from view of those systems.

The acquisition cost of the Tempered Networks solution was one-tenth that of a traditional microsegmentation solution. The deployment time was 2 FTE days per ship, compared to the 40 FTE days a traditional solution would have needed. No additional staffing was required to support the solution, and no changes were made to the underlying network.

### A time-tested microsegmentation solution

This technology came out of Boeing and was deployed for over 12 years within its manufacturing facilities until 2014, when Boeing allowed the technology to become commercialized. Tempered Networks took HIP and developed the full platform with easy, centralized management. It was purpose-built to provide secure connectivity to networks. The solution has been successfully deployed in industrial domains such as the utilities sector, oil and gas, electricity generation, and aircraft manufacturing, as well as in enterprise domains and healthcare.

Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3405853/tempered-networks-simplifies-secure-network-connectivity-and-microsegmentation.html

Author: [Linda Musthaler][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/network_security_hacker_virus_crime-100745979-large.jpg
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3359363/software-defined-perimeter-brings-trusted-access-to-multi-cloud-applications-network-resources.html
[5]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[6]: https://images.idgesg.net/images/article/2019/07/hip-slide-100800735-large.jpg
[7]: https://www.temperednetworks.com/
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://images.idgesg.net/images/article/2019/07/cruise-ship-before-100800736-large.jpg
[10]: https://images.idgesg.net/images/article/2019/07/cruise-ship-after-100800738-large.jpg
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
@@ -1,106 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7]
======

![Public vs Private blockchain][1]
The previous part of the [**Blockchain 2.0**][2] series explored [**the state of smart contracts**][3] today. This post intends to shed some light on the different types of blockchains that can be created. Each of these is used for vastly different applications, and depending on the use case, the protocol each follows differs. Now let us go ahead and learn about the **public vs. private blockchain comparison**, along with open source and proprietary technology.

The fundamental three-layer structure of a blockchain-based distributed ledger, as we know it, is as follows:

![][4]

Figure 1 – Fundamental structure of Blockchain-based ledgers

The differences between the types mentioned here are attributable primarily to the protocol that rests on the underlying blockchain. The protocol dictates the rules for participants and the behavior of the blockchain in response to that participation.

Remember to keep the following things in mind while reading through this article:

* Platforms such as these are always created to solve a use-case requirement. There is no single best direction for the technology to take. Blockchains, for instance, have tremendous applications, and some of these might require dropping features that seem significant in other settings. **Decentralized storage** is a major example in this regard.
* Blockchains are basically database systems that keep track of information by timestamping and organizing data in the form of blocks. The creators of a blockchain can choose who has the right to make these blocks and perform alterations.
* Blockchains can be “centralized” as well, and participation, to varying extents, can be limited to those whom this “central authority” deems eligible.

Most blockchains are either **public** or **private**. Broadly speaking, public blockchains can be considered the equivalent of open source software, and most private blockchains can be seen as proprietary platforms derived from the public ones. The figure below should make the basic difference obvious to most of you.

![][5]

Figure 2 – Public vs Private blockchain comparison with Open source and Proprietary Technology

This is not to say that all private blockchains are derived from open public ones; the most popular ones, however, usually are.
### Public Blockchains

A public blockchain can be considered a **permissionless platform** or network. Anyone with the know-how and computing resources can participate in it. This has the following implications:

* Anyone can join and participate in a public blockchain network. All a “participant” needs is a stable internet connection and computing resources.
* Participation includes reading, writing, verifying, and providing consensus during transactions. An example of participating individuals would be **Bitcoin miners**, who, in exchange for participating in the network, are paid in bitcoins.
* The platform is completely decentralized and fully redundant.
* Because of the decentralized nature, no one entity has complete control over the data recorded in the ledger. To validate a block, all (or most) participants need to vet the data.
* This means that once information is verified and recorded, it cannot be altered easily. Even if it is, it's impossible not to leave traces.
* The identity of participants remains anonymous by design on platforms such as **Bitcoin** and **Litecoin**. These platforms aim to protect and secure user identities, a feature provided primarily by the overlying protocol stack.
* Examples of public blockchain networks are **Bitcoin**, **Litecoin**, **Ethereum**, etc.
* Extensive decentralization means that gaining consensus on transactions can take a while compared to what is typically possible in permissioned ledger networks, and throughput can be a challenge for large enterprises that need to push a very high number of transactions at every instant.
* The open participation, and often the high number of participants, in open chains such as Bitcoin adds up to considerable initial investment in computing equipment and energy costs.
### Private Blockchain

In contrast, a private blockchain is a **permissioned blockchain**. Meaning:

* Permission to participate in the network is restricted and presided over by the owner or institution overseeing the network. This means that even though an individual can store data and transact (send and receive payments, for example), the validation and storage of these transactions is done only by select participants.
* Even once permission is granted by the central authority, participation is limited by terms. For instance, in the case of a private blockchain network run by a financial institution, not every customer will have access to the entire ledger, and even among those with permission, not everyone will be able to access everything. Permission to access select services is granted by the central figure; this is often referred to as **“channeling”**.
* Such systems have significantly larger throughput capabilities and much faster transaction speeds than their public counterparts, because a block of information only needs to be validated by a select few.
* Security by design is something public blockchains are renowned for. They achieve it by:
  * anonymizing participants,
  * distributed and redundant but encrypted storage on multiple nodes, and
  * requiring mass consensus for creating and altering data.

Private blockchains usually don’t feature any of these in their protocol. This makes the system only as secure as most cloud-based database systems currently in use.
### A note for the wise

An important point to note is that whether a blockchain is called public or private (or open or closed) has nothing to do with the underlying code base. The code, or the literal foundation the platform is built on, may or may not be publicly available or openly developed in either case. **R3** is a **DLT** (distributed ledger technology) company that leads a public consortium of over 200 multinational institutions. Its aim is to further the development of blockchain and related distributed ledger technology in the domain of finance and commerce. **Corda** is the product of this joint effort. R3 defines Corda as a blockchain platform built specially for businesses. Its codebase is open source, and developers all over the world are encouraged to contribute to the project. However, given its business-facing nature and the needs it is meant to address, Corda would be categorized as a permissioned, closed blockchain platform: businesses can choose the participants of the network once it is deployed, and choose the kind of information those participants can access through natively available smart-contract tools.

While it is a reality that public platforms like Bitcoin and Ethereum are responsible for the widespread awareness of and development going on in the space, it can still be argued that private blockchains designed for specific use cases in enterprise or business settings will lead monetary investment in the short run. These are the platforms most of us will see implemented in practical ways in the near future.

Read the next guide in this series, about the Hyperledger project:

* [**Blockchain 2.0 – An Introduction To Hyperledger Project (HLP)**][6]

We are working on many interesting topics on blockchain technology. Stay tuned!
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/

Author: [editor][a]
Topic selection: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Public-Vs-Private-Blockchain-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/blockchain-architecture.png
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/Public-vs-Private-blockchain-comparison.png
[6]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
@@ -1,118 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (qfzy1233)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginner's guide to Linux permissions)
[#]: via: (https://opensource.com/article/19/6/understanding-linux-permissions)
[#]: author: (Bryant Son https://opensource.com/users/brson/users/greg-p/users/tj)

A beginner's guide to Linux permissions
======
Linux security permissions designate who can do what with a file or directory.
![Hand putting a Linux file folder into a drawer][1]
One of the main benefits of Linux systems is that they are known to be less prone to security vulnerabilities and exploits than other systems. Linux definitely gives users more flexibility and granular control over its file systems' security permissions. This may imply that it's critical for Linux users to understand security permissions. That isn't necessarily true, but it's still wise for beginning users to understand the basics of Linux permissions.

### View Linux security permissions

To start learning about Linux permissions, imagine we have a newly created directory called **PermissionDemo**. Run **cd** inside the directory and use the **ls -l** command to view the Linux security permissions. If you want to sort them by time modified, add the **-t** option.

```
ls -lt
```

Since there are no files inside this new directory, this command returns nothing.

![No output from ls -l command][2]

To learn more about the **ls** options, access its man page by entering **man ls** on the command line.

![ls man page][3]
Now, let's create two files, **cat.txt** and **dog.txt**, with empty content; this is easy to do using the **touch** command. Let's also create an empty directory called **Pets** with the **mkdir** command. We can use the **ls -l** command again to see the permissions for these new files.

![Creating new files and directory][4]
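The steps above can be reproduced in any terminal; a minimal sketch using the file and directory names from the article:

```shell
mkdir PermissionDemo      # the demo directory from earlier
cd PermissionDemo
touch cat.txt dog.txt     # create two empty files
mkdir Pets                # create an empty directory
ls -l                     # list the new entries with their permissions
```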
|
||||
|
||||
We need to pay attention to two sections of output from this command.
|
||||
|
||||
### Who has permission?
|
||||
|
||||
The first thing to examine indicates _who_ has permission to access the file/directory. Note the section highlighted in the red box below. The first column refers to the _user_ who has access, while the second column refers to the _group_ that has access.
|
||||
|
||||
![Output from -ls command][5]
|
||||
|
||||
There are three main types of users: **user** , **group** ; and **other** (essentially neither a user nor a group). There is one more: **all** , which means practically everyone.
|
||||
|
||||
![User types][6]
|
||||
|
||||
Because we are using **root** as the user, we can access any file or directory because **root** is the superuser. However, this is generally not the case, and you will probably be restricted to your username. A list of all users is stored in the **/etc/passwd** file.
|
||||
|
||||
![/etc/passwd file][7]
|
||||
|
||||
Groups are maintained in the **/etc/group** file.
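As an aside (not from the original article), Python's standard **pwd** and **grp** modules read these same databases, which makes it easy to inspect users and groups programmatically:

```python
# The pwd and grp modules expose the /etc/passwd and /etc/group
# databases discussed above as structured records.
import pwd
import grp

# First few user accounts, as stored in /etc/passwd
for user in pwd.getpwall()[:3]:
    print(user.pw_name, user.pw_uid, user.pw_dir)

# First few groups, as stored in /etc/group
for group in grp.getgrall()[:3]:
    print(group.gr_name, group.gr_gid)
```

These modules are Unix-only, but on any Linux system they avoid hand-parsing the colon-separated files.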
![/etc/group file][8]

### What permissions do they have?

The other section of the output from **ls -l** that we need to pay attention to relates to enforcing permissions. Above, we confirmed that the owner and group permissions for the files dog.txt and cat.txt and the directory Pets we created belong to the **root** account. We can use that information about who owns what to enforce permissions for the different user ownership types, as highlighted in the red box below.

![Enforcing permissions for different user ownership types][9]

We can dissect each line into five bits of information. The first part indicates whether it is a file or a directory; files are labeled with a **-** (hyphen), and directories are labeled with **d**. The next three parts refer to permissions for **user**, **group**, and **other**, respectively. The last part is a flag for the [**access-control list**][10] (ACL), a list of permissions for an object.

![Different Linux permissions][11]

Linux permission levels can be identified with letters or numbers. There are three privilege types:

  * **read**: r or 4
  * **write**: w or 2
  * **execute**: x or 1

![Privilege types][12]

The presence of each letter symbol (**r**, **w**, or **x**) means that the permission exists, while **-** indicates it does not. In the example below, the file is readable and writeable by the owner, only readable if the user belongs to the group, and readable and executable by anyone else. Converted to numeric notation, this would be 645 (see the image below for an explanation of how this is calculated).
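The letters-to-numbers conversion is mechanical enough to sketch in a few lines of Python (an illustration, not part of the original article): each **rwx** triplet sums read=4, write=2, and execute=1 into one octal digit.

```python
def rwx_to_octal(perms: str) -> str:
    """Convert a 9-character string like 'rw-r--r-x' to '645'."""
    assert len(perms) == 9
    values = {"r": 4, "w": 2, "x": 1, "-": 0}
    digits = []
    # Walk the string three characters at a time: user, group, other.
    for i in range(0, 9, 3):
        triplet = perms[i:i + 3]
        digits.append(sum(values[c] for c in triplet))
    return "".join(str(d) for d in digits)

print(rwx_to_octal("rw-r--r-x"))  # the example above: 645
print(rwx_to_octal("rwxr-xr-x"))  # a common default for directories: 755
```

(This simple version ignores special bits such as setuid **s** or sticky **t**.)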
![Permission type example][13]

Here are a few more examples:

![Permission type examples][14]

Test your knowledge by going through the following exercises.

![Permission type examples][15]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/6/understanding-linux-permissions

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/brson/users/greg-p/users/tj
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/sites/default/files/uploads/1_3.jpg (No output from ls -l command)
[3]: https://opensource.com/sites/default/files/uploads/1_man.jpg (ls man page)
[4]: https://opensource.com/sites/default/files/uploads/2_6.jpg (Creating new files and directory)
[5]: https://opensource.com/sites/default/files/uploads/3_2.jpg (Output from -ls command)
[6]: https://opensource.com/sites/default/files/uploads/4_0.jpg (User types)
[7]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_passwd.jpg (/etc/passwd file)
[8]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_group.jpg (/etc/group file)
[9]: https://opensource.com/sites/default/files/uploads/linuxpermissions_5.jpg (Enforcing permissions for different user ownership types)
[10]: https://en.wikipedia.org/wiki/Access-control_list
[11]: https://opensource.com/sites/default/files/uploads/linuxpermissions_6.jpg (Different Linux permissions)
[12]: https://opensource.com/sites/default/files/uploads/linuxpermissions_7.jpg (Privilege types)
[13]: https://opensource.com/sites/default/files/uploads/linuxpermissions_8.jpg (Permission type example)
[14]: https://opensource.com/sites/default/files/uploads/linuxpermissions_9.jpg (Permission type examples)
[15]: https://opensource.com/sites/default/files/uploads/linuxpermissions_10.jpg (Permission type examples)
@ -1,93 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The innovation delusion)
[#]: via: (https://opensource.com/open-organization/19/6/innovation-delusion)
[#]: author: (Jim Whitehurst https://opensource.com/users/jwhitehurst/users/jwhitehurst/users/n8chz/users/dhdeans)

The innovation delusion
======

Innovation is a messy process. Our stories about it aren't. We shouldn't confuse the two.

![gears and lightbulb to represent innovation][1]
If [traditional planning is dead][2], then why do so many organizations still invest in planning techniques optimized for the Industrial Revolution?

One reason might be that we trick ourselves into thinking innovation is the kind of thing we can accomplish with a structured, linear process. When we do this, I think we're confusing our _stories_ about innovation with the _process_ of innovation itself—and the two are very different.

The _process_ of innovation is chaotic and unpredictable. It doesn't operate according to clean, regimented timelines. It's filled with iterative phases, sudden changes in direction, various starts and stops, dead ends, (hopefully productive) failures, and unknowable variables. It's messy.

But the stories we tell ourselves about innovation, including the books and articles we read about great inventions and the tales we tell each other about our successes in the workplace, tidy that process up. Think about how many social media posts you've seen that feature nothing but the "high points."

That's the nature of good storytelling. It takes a naturally scattered collection of moments and puts them neatly into a beginning, middle, and end. It smoothes out all the rough patches and makes a result seem inevitable from the start, despite whatever moments of uncertainty, panic, even despair we experienced along the way.

We shouldn't confuse messy process with simplified story. When we do, we might mistakenly assume we can approach innovation challenges with the same practices we bring to neat and linear processes. In other words, we apply a set of management techniques appropriate for one set of activities (for more rote, mechanical, and prescriptive tasks) to a set of activities they aren't really suited for (more creative, non-linear work requiring autonomy and experimentation).

If traditional planning is dead, then why do so many organizations still invest in planning techniques optimized for the Industrial Revolution?

### An innovation story

Here's [one of my favorite examples][2] of this idea in action.

In the 1970s, the British motorcycle industry was desperately trying to figure out why its U.S. market share was plummeting while Honda's was skyrocketing. The industry hired my former employer, the Boston Consulting Group, to help it figure out what was going wrong. BCG gathered some historical data, reviewed a two-decade sequence of events, and developed a neat, linear story explaining Honda's success.

Honda, [BCG concluded][3], had executed an ingenious strategy: enter the U.S. market with smaller motorcycles it could sell at lower cost, use the economies of scale it had developed in the Japanese market to set low prices and grow a market, then further leverage those economies of scale to grow its share in the States as demand grew. By all accounts, Honda had done it brilliantly, playing to its strengths while thoroughly and accurately assessing the new, target U.S. consumer. It had outsmarted, outflanked, and outperformed competitors with a well-executed plan.

It _sounded_ great. But the reality was much less straightforward.
Yes, Honda _did_ want to enter the U.S. motorcycle market. It initially attempted to [copy its competitors there][4], building the larger bikes Americans seemed to favor. But bikes like that weren't one of Honda's strengths, and its versions had reliability issues. To make matters worse, its models didn't look much different than other offerings already in the market, so they weren't standing out. Suffice it to say, sales were not booming.

But in a happy coincidence, Honda's Japanese representatives visiting the States had brought their own motorcycles with them. Those bikes were different than the ones the company was attempting to sell to the American market. They were smaller, zippier, less bulky, more efficient, and generally less expensive. Sears took notice, contacted the reps, and the companies struck a deal that let Sears carry this new motorcycle—called the "Super Cub"—in its American stores.

And the rest, as they say, is history. The Super Cub would go on to become the [best-selling motorized vehicle of all time][5], and Honda [continues to produce it today][6].

In hindsight, the events that brought the Super Cub to the U.S. seem logical, almost boring. But Honda owed its success less to an ingenious master plan and much more to serendipity and happenstance than most people care to admit.

When success depends on things we don't or can't predict, is getting exactly what you've planned for good enough?

### Open (and messy) innovation

Organizations (and especially leaders) like to think that success is always planned—that they've become masters of chaos and can almost predict the future. But they're often making those assessments with the benefit of hindsight, telling the stories of their haphazard journey in a way that organizes the chaos, essentially reflecting on a period of uncertainty and saying "we meant to do that."

But as I said, we shouldn't assume those stories are mirror reflections of the innovation process itself and build future initiatives or experiments on that mistaken assumption.

Imagine another motorcycle manufacturer looking to replicate Honda's success with the Super Cub by following BCG's narrative to the letter. Because the _story_ of Honda's success seems so logical and linear, the new company might assume it could use similar processes and get the same results: plan objectives, prescribe behaviors, and execute against knowable outcomes. But we know that Honda didn't really win its market with that kind of "plan, prescribe, execute" mentality. It won through flexibility and a bit of blind luck—something more like "[try, learn, modify][7]."

When we're able to appreciate and accept that the innovation process is messy, we allow ourselves to think differently about approaching innovation in our organizations. We can begin building the kinds of open and agile organizations capable of _responding to innovation as it happens_ instead of over-investing resources into pre-formed plans that try to _force_ innovation into a linear timeline.

I saw this kind of approach several years ago, when Red Hat released a new version of a product that included a major technology update. [Version 5.4 of Red Hat Enterprise Linux][8] was the first to include full support for a technology called the Kernel-based Virtual Machine (or "KVM"). For us it was a significant innovation that promised to deliver immense value not only to customers and partners, but also to open source software communities.

The technology was evolving quickly. Luckily, because we're an open organization, we were adaptable enough to respond to that innovation as it was happening and help our customers and partners take advantage of it. It was too important, and the competitive landscape too volatile, to justify withholding it just so we could "save" it for a milestone moment like version 6.0.

When you go back and review [the archived release notes][9] for Red Hat Enterprise Linux, you'll see that it doesn't "read" like a typical software innovation tale. A game-changing development pops up at an unpredicted and unremarkable moment (version 5.4), rather than a pre-planned blockbuster milestone (version 6.0). In hindsight, we now know that KVM _was_ the kind of "big bang" advancement that could have warranted a milestone release name like "6.0." But that's just not how the innovation process unfolded.

Don't get me wrong, organizations still need to maintain operational excellence and perform execution-oriented tasks well. But [different kinds of challenges require different kinds of approaches][10], and we need to get better at building flexible organizations just as capable of [responding to the unforeseen or unknowable][11].

An organization great at planning (and executing against that plan) will quite likely get the results it planned for. But when success depends on things we _don't_ or _can't_ predict, is getting exactly what you've planned for good enough?

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/6/innovation-delusion

作者:[Jim Whitehurst][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jwhitehurst/users/jwhitehurst/users/n8chz/users/dhdeans
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: https://www.youtube.com/watch?v=8MCbJmZQM9c
[3]: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/235319/0532.pdf
[4]: http://www.howardyu.org/the-revolutionary-approach-honda-took-to-rise-above-competition/
[5]: https://autoweek.com/article/motorcycles/first-ride-honda-super-cub-c125-abs-all-new-and-still-super-cute
[6]: https://www.autoblog.com/2019/02/13/2019-honda-super-cub-first-ride-review/
[7]: https://opensource.com/open-organization/18/3/try-learn-modify
[8]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/5.4_release_notes/index
[9]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/5.0_release_notes/index
[10]: https://opensource.com/open-organization/19/4/managed-enabled-empowered
[11]: https://www.linkedin.com/pulse/how-plan-world-full-unknowns-jim-whitehurst/
@ -1,139 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (LuuMing)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tracking down library injections on Linux)
[#]: via: (https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Tracking down library injections on Linux
======

Library injections are less common on Linux than they are on Windows, but they're still a problem. Here's a look at how they work and how to identify them.

![Sandra Henry-Stocker][1]

While not nearly as commonly seen on Linux systems as on Windows, library (shared object file) injections are still a serious threat. After interviewing Jaime Blasco from AT&T's Alien Labs, I've become more aware of how easily some of these attacks are conducted.

In this post, I'll cover one method of attack and some ways that it can be detected. I'll also provide some links that will provide more details on both attack methods and detection tools. First, a little background.

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][2] ]**

### Shared library vulnerability

Both DLL and .so files are shared library files that allow code (and sometimes data) to be shared by various processes. Commonly used code might be put into one of these files so that it can be reused rather than rewritten many times over for each process that requires it. This also facilitates management of commonly used code.

Linux processes often make use of many of these shared libraries. The **ldd** (display shared object dependencies) command can display them for any program file. Here are some examples:
```
$ ldd /bin/date
    linux-vdso.so.1 (0x00007ffc5f179000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f02bea15000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f02bec3a000)
$ ldd /bin/netstat
    linux-vdso.so.1 (0x00007ffcb67cd000)
    libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f45e5d7b000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f45e5b90000)
    libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f45e5b1c000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f45e5b16000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f45e5dec000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f45e5af5000)
```

The **linux-vdso.so.1** file (which may have a different name on some systems) is one that the kernel automatically maps into the address space of every process. Its job is to find and locate other shared libraries that the process requires.

One way that this library-loading mechanism is exploited is through the use of an environment variable called **LD_PRELOAD**. As Jaime Blasco explains in his research, "LD_PRELOAD is the easiest and most popular way to load a shared library in a process at startup. This environmental variable can be configured with a path to the shared library to be loaded before any other shared object."

To illustrate how easily this is done, I created an extremely simple shared library and assigned it to my (formerly non-existent) LD_PRELOAD environment variable. Then I used the **ldd** command to see how this would affect a commonly used Linux command.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][3] ]**

```
$ export LD_PRELOAD=/home/shs/shownum.so
$ ldd /bin/date
    linux-vdso.so.1 (0x00007ffe005ce000)
    /home/shs/shownum.so (0x00007f1e6b65f000)    <== there it is
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1e6b458000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f1e6b682000)
```

Note that doing nothing more than assigning my new library to LD_PRELOAD now affects any process that I run.

Since the libraries specified by the LD_PRELOAD setting are the first to load (following linux-vdso.so.1), those libraries could significantly change a process. They could, for example, redirect system calls to their own resources or make unexpected changes in how the process being run behaves.
### The osquery tool can detect library injections

The **osquery** tool (downloadable from [osquery.io][4]) provides a very different way of looking at Linux systems. It basically represents the operating system as a high-performance relational database. As you probably suspect, that means you can run SQL queries against tables that provide details on such things as:

  * Running processes
  * Loaded kernel modules
  * Open network connections

One table that provides information on running processes is called **process_envs**. It provides details on environment variables used by various processes. With a fairly complicated query provided by Jaime Blasco, you can get osquery to identify processes that are using LD_PRELOAD.

Note that this query pulls data from the **process_envs** table. The attack ID (T1055) is a reference to [Mitre's explanation of the attack method][5]:

```
SELECT process_envs.pid as source_process_id, process_envs.key as environment_variable_key, process_envs.value as environment_variable_value, processes.name as source_process, processes.path as file_path, processes.cmdline as source_process_commandline, processes.cwd as current_working_directory, 'T1055' as event_attack_id, 'Process Injection' as event_attack_technique, 'Defense Evasion, Privilege Escalation' as event_attack_tactic FROM process_envs join processes USING (pid) WHERE key = 'LD_PRELOAD';
```

Note that the LD_PRELOAD environment variable is at times used legitimately. Various security monitoring tools, for example, could use it, as might developers while they are troubleshooting, debugging or doing performance analysis. However, its use is still quite uncommon and should be viewed with some suspicion.

It's also worth noting that osquery can be used interactively or be run as a daemon (**osqueryd**) for scheduled queries. See the reference at the bottom of this post for more on this.

You might also be able to locate use of LD_PRELOAD by examining users' environment settings. If LD_PRELOAD is configured in a user account, you might determine that with a command like this (after assuming the individual's identity):

```
$ env | grep PRELOAD
LD_PRELOAD=/home/username/userlib.so
```
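For a system-wide version of this check (a sketch of my own, not from the article), Linux exposes each process's environment in **/proc/&lt;pid&gt;/environ** as NUL-separated entries, so a short Python script can flag any running process that was started with LD_PRELOAD set:

```python
# Scan /proc for processes whose environment contains LD_PRELOAD.
# Reading another user's environ file requires sufficient privileges.
import os

def parse_environ(raw: bytes) -> dict:
    """Parse the NUL-separated contents of /proc/<pid>/environ."""
    env = {}
    for entry in raw.split(b"\x00"):
        if b"=" in entry:
            key, _, value = entry.partition(b"=")
            env[key.decode(errors="replace")] = value.decode(errors="replace")
    return env

def find_ld_preload():
    hits = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/environ", "rb") as f:
                env = parse_environ(f.read())
        except OSError:
            continue  # process exited, or we lack permission
        if "LD_PRELOAD" in env:
            hits.append((pid, env["LD_PRELOAD"]))
    return hits

if __name__ == "__main__":
    for pid, lib in find_ld_preload():
        print(f"pid {pid}: LD_PRELOAD={lib}")
```

Unlike the per-user **env** check, this covers every process you are allowed to inspect; osquery's **process_envs** table draws on the same data.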
If you've not previously heard of osquery, don't take it too hard. It's now in the process of becoming a more popular tool. Just last week, in fact, the Linux Foundation announced its intention to support the osquery community with a brand-new [osquery foundation][6].

#### Wrap-up

While library injection remains a serious threat, it's helpful to know that some excellent tools are available to help detect its use on your systems.

#### Additional resources

Links to important references and tools:

  * [Hunting for Linux library injection with osquery][7] from AT&T Cybersecurity
  * [Linux: How's My Memory?][8] from TrustedSec
  * [Download site for osquery][4]
  * [osquery schema][9]
  * [osqueryd (osquery daemon)][10]
  * [Mitre's attack framework][11]
  * [New osquery foundation announced][6]

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/dll-injection-100800196-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[4]: https://osquery.io/
[5]: https://attack.mitre.org/techniques/T1055/
[6]: https://www.linuxfoundation.org/press-release/2019/06/the-linux-foundation-announces-intent-to-form-new-foundation-to-support-osquery-community/
[7]: https://www.alienvault.com/blogs/labs-research/hunting-for-linux-library-injection-with-osquery
[8]: https://www.trustedsec.com/2018/09/linux-hows-my-memory/
[9]: https://osquery.io/schema/3.3.2
[10]: https://osquery.readthedocs.io/en/stable/deployment/configuration/#schedule
[11]: https://attack.mitre.org/
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (guevaraya)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
337
sources/tech/20190701 Get modular with Python functions.md
Normal file
@ -0,0 +1,337 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get modular with Python functions)
[#]: via: (https://opensource.com/article/19/7/get-modular-python-functions)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/xd-deng/users/nhuntwalker/users/don-watkins)

Get modular with Python functions
======

Minimize your coding workload by using Python functions for repeating tasks.

![OpenStack source code \(Python\) in VIM][1]

Are you confused by fancy programming terms like functions, classes, methods, libraries, and modules? Do you struggle with the scope of variables? Whether you're a self-taught programmer or a formally trained code monkey, the modularity of code can be confusing. But classes and libraries encourage modular code, and modular code can mean building up a collection of multipurpose code blocks that you can use across many projects to reduce your coding workload. In other words, if you follow along with this article's study of [Python][2] functions, you'll find ways to work smarter, and working smarter means working less.

This article assumes enough Python familiarity to write and run a simple script. If you haven't used Python, read my [intro to Python][3] article first.

### Functions

Functions are an important step toward modularity because they are formalized methods of repetition. If there is a task that needs to be done again and again in your program, you can group the code into a function and call the function as often as you need it. This way, you only have to write the code once, but you can use it as often as you like.

Here is an example of a simple function:
```
#!/usr/bin/env python3

import time

def Timer():
    print("Time is " + str(time.time() ) )
```

Create a folder called **mymodularity** and save the function code as **timestamp.py**.

In addition to this function, create a file called **__init__.py** in the **mymodularity** directory. You can do this in a file manager or a Bash shell:

```
$ touch mymodularity/__init__.py
```

You have now created your own Python library (a "module," in Python lingo) in your Python package called **mymodularity**. It's not a very useful module, because all it does is import the **time** module and print a timestamp, but it's a start.

To use your function, treat it just like any other Python module. Here's a small application that tests the accuracy of Python's **sleep()** function, using your **mymodularity** package for support. Save this file as **sleeptest.py** _outside_ the **mymodularity** directory (if you put this _into_ **mymodularity**, then it becomes a module in your package, and you don't want that).
```
#!/usr/bin/env python3

import time
from mymodularity import timestamp

print("Testing Python sleep()...")

# modularity
timestamp.Timer()
time.sleep(3)
timestamp.Timer()
```

In this simple script, you are calling your **timestamp** module from your **mymodularity** package (twice). When you import a module from a package, the usual syntax is to import the module you want from the package and then use the _module name + a dot + the name of the function you want to call_ (e.g., **timestamp.Timer()**).

You're calling your **Timer()** function twice, so if your **timestamp** module were more complicated than this simple example, you'd be saving yourself quite a lot of repeated code.

Save the file and run it:

```
$ python3 ./sleeptest.py
Testing Python sleep()...
Time is 1560711266.1526039
Time is 1560711269.1557732
```

According to your test, the sleep function in Python is pretty accurate: after three seconds of sleep, the timestamp advanced by almost exactly three seconds, with a little variance in the microseconds.
The structure of a Python library might seem confusing, but it's not magic. Python is _programmed_ to treat a folder full of Python code accompanied by an **__init__.py** file as a package, and it's programmed to look for available modules in its current directory _first_. This is why the statement **from mymodularity import timestamp** works: Python looks in the current directory for a folder called **mymodularity**, then looks for a **timestamp** file ending in **.py**.

What you have done in this example is functionally the same as this less modular version:

```
#!/usr/bin/env python3

import time

print("Testing Python sleep()...")

# no modularity
print("Time is " + str(time.time() ) )
time.sleep(3)
print("Time is " + str(time.time() ) )
```

For a simple example like this, there's not really a reason you wouldn't write your sleep test that way, but the best part about writing your own module is that your code is generic so you can reuse it for other projects.

You can make the code more generic by passing information into the function when you call it. For instance, suppose you want to use your module to test not the _computer's_ sleep function, but a _user's_ sleep function. Change your **timestamp** code so it accepts an incoming variable called **msg**, which will be a string of text controlling how the **timestamp** is presented each time it is called:
```
#!/usr/bin/env python3

import time

# updated code
def Timer(msg):
    print(str(msg) + str(time.time() ) )
```

Now your function is more abstract than before. It still prints a timestamp, but what it prints for the user is undefined. That means you need to define it when calling the function.

The **msg** parameter your **Timer** function accepts is arbitrarily named. You could call the parameter **m** or **message** or **text** or anything that makes sense to you. The important thing is that when the **timestamp.Timer** function is called, it accepts some text as its input, places whatever it receives into a variable, and uses the variable to accomplish its task.

Here's a new application to test the user's ability to sense the passage of time correctly:

```
#!/usr/bin/env python3

from mymodularity import timestamp

print("Press the RETURN key. Count to 3, and press RETURN again.")

input()
timestamp.Timer("Started timer at ")

print("Count to 3...")

input()
timestamp.Timer("You slept until ")
```

Save your new application as **response.py** and run it:

Save your new application as **response.py** and run it:

```
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.

Started timer at 1560714482.3772075
Count to 3...

You slept until 1560714484.1628013
```

### Functions and required parameters

The new version of your timestamp module now _requires_ a **msg** parameter. That's significant: your first application is now broken, because it doesn't pass a string to the **timestamp.Timer** function:

```
$ python3 ./sleeptest.py
Testing Python sleep()...
Traceback (most recent call last):
  File "./sleeptest.py", line 8, in <module>
    timestamp.Timer()
TypeError: Timer() missing 1 required positional argument: 'msg'
```

Can you fix your **sleeptest.py** application so it runs correctly with the updated version of your module?
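One possible fix is simply to supply a message string. In this sketch, **Timer** is defined inline as a stand-in for **mymodularity.timestamp.Timer** so the snippet runs on its own, and the sleep is shortened to one second:

```python
# Possible fix for sleeptest.py: pass the now-required msg argument.
import time

def Timer(msg):  # same body as the updated module's Timer
    print(str(msg) + str(time.time()))

print("Testing Python sleep()...")
Timer("Time is ")  # the required msg argument is now supplied
time.sleep(1)
Timer("Time is ")
```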
### Variables and functions

By design, functions limit the scope of variables. In other words, if a variable is created within a function, that variable is available to _only_ that function. If you try to use a variable that appears in a function outside the function, an error occurs.

Here's a modification of the **response.py** application, with an attempt to print the **msg** variable from the **timestamp.Timer()** function:

```
#!/usr/bin/env python3

from mymodularity import timestamp

print("Press the RETURN key. Count to 3, and press RETURN again.")

input()
timestamp.Timer("Started timer at ")

print("Count to 3...")

input()
timestamp.Timer("You slept for ")

print(msg)
```

Try running it to see the error:

Try running it to see the error:

```
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.

Started timer at 1560719527.7862902
Count to 3...

You slept for 1560719528.135406
Traceback (most recent call last):
  File "./response.py", line 15, in <module>
    print(msg)
NameError: name 'msg' is not defined
```

The application returns a **NameError** message because **msg** is not defined. This might seem confusing because you wrote code that defined **msg**, but you have greater insight into your code than Python does. Code that calls a function, whether the function appears within the same file or is packaged up as a module, doesn't know what happens inside the function. A function independently performs its calculations and returns what it has been programmed to return. Any variables involved are _local_ only: they exist only within the function and only as long as it takes the function to accomplish its purpose.
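A minimal illustration of that rule, with a hypothetical function **f** and local variable **local_msg**:

```python
# Local scope in miniature: the returned value escapes the function,
# but the variable itself does not.
def f():
    local_msg = "inside"  # exists only while f() is running
    return local_msg

print(f())                # prints "inside", the returned value

try:
    print(local_msg)      # the local variable is gone out here
except NameError as err:
    print("NameError:", err)
```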
#### Return statements

If your application needs information contained only in a function, use a **return** statement to have the function provide meaningful data after it runs.

They say time is money, so modify your timestamp function to allow for an imaginary charging system:

```
#!/usr/bin/env python3

import time

def Timer(msg):
    print(str(msg) + str(time.time()))
    charge = .02
    return charge
```

The **timestamp** module now charges two cents for each call, but most importantly, it returns the amount charged each time it is called.

Here's a demonstration of how a return statement can be used:

```
#!/usr/bin/env python3

from mymodularity import timestamp

print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")

total = 0

while True:
    kbd = input()
    if kbd.lower() == "q":
        print("You owe $" + str(total))
        exit()
    else:
        charge = timestamp.Timer("Time is ")
        total = total + charge
```

In this sample code, the return value of the **timestamp.Timer()** function is assigned to the variable **charge**, so **charge** receives whatever the function returns. In this case, the function returns a number, so a new variable called **total** is used to keep a running total of the charges. When the application receives the signal to quit, it prints the total charges:

```
$ python3 ./charge.py
Press RETURN for the time (costs 2 cents).
Press Q RETURN to quit.

Time is 1560722430.345412

Time is 1560722430.933996

Time is 1560722434.6027434

Time is 1560722438.612629

Time is 1560722439.3649364
q
You owe $0.1
```

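A non-interactive sketch of the same pattern, accumulating a function's return value the way **charge.py** accumulates the values returned by **timestamp.Timer()**. One caveat worth knowing: adding binary floats can leave tiny rounding residue, which is why real billing code tends to use **decimal.Decimal** rather than **float**.

```python
# Accumulating return values without the interactive loop.
# timer_charge is an illustrative stand-in for the module's Timer().
def timer_charge():
    return 0.02  # the same flat fee Timer() returns

total = 0
for _ in range(5):
    total += timer_charge()

print("You owe $" + str(round(total, 2)))  # rounding hides float residue
```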
#### Inline functions

Functions don't have to be created in separate files. If you're writing a short script specific to one task, it may make more sense to write your functions in the same file. The only difference is that you don't have to import your own module; otherwise, the function works the same way. Here's the latest iteration of the time test application as one file:

```
#!/usr/bin/env python3

import time

total = 0

def Timer(msg):
    print(str(msg) + str(time.time()))
    charge = .02
    return charge

print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")

while True:
    kbd = input()
    if kbd.lower() == "q":
        print("You owe $" + str(total))
        exit()
    else:
        charge = Timer("Time is ")
        total = total + charge
```

It has no external dependencies (the **time** module is included in the Python distribution), and produces the same results as the modular version. The advantage is that everything is located in one file, and the disadvantage is that you cannot use the **Timer()** function in some other script you are writing unless you copy and paste it manually.
#### Global variables

A variable created outside a function has nothing limiting its scope, so it is considered a _global_ variable.

An example of a global variable is the **total** variable in the **charge.py** example used to track current charges. The running total is created outside any function, so it is bound to the application rather than to a specific function.

A function within the application has access to your global variable, but to get the variable into your imported module, you must send it there the same way you send your **msg** variable.

Global variables are convenient because they seem to be available whenever and wherever you need them, but it can be difficult to keep track of their scope and to know which ones are still hanging around in system memory long after they're no longer needed (although Python generally has very good garbage collection).

Global variables are important, though, because not all variables can be local to a function or class. That's easy now that you know how to send variables to functions and get values back.

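A short sketch of those scope rules: reading a global works anywhere in the file, but assigning to one inside a function requires the **global** keyword (the **add_charge** helper is illustrative):

```python
# Reading vs. assigning a global inside a function.
total = 0  # global, bound to the whole file

def add_charge():
    # Without this declaration, 'total += 0.02' would raise
    # UnboundLocalError, because assignment makes a name local.
    global total
    total += 0.02

add_charge()
add_charge()
print(round(total, 2))
```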
### Wrapping up functions

You've learned a lot about functions, so start putting them into your scripts—if not as separate modules, then as blocks of code you don't have to write multiple times within one script. In the next article in this series, I'll get into Python classes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/get-modular-python-functions

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth/users/xd-deng/users/nhuntwalker/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_1.jpg?itok=lHQK5zpm (OpenStack source code (Python) in VIM)
[2]: https://www.python.org/
[3]: https://opensource.com/article/17/10/python-101
373
sources/tech/20190701 How to use to infrastructure as code.md
Normal file
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use to infrastructure as code)
[#]: via: (https://opensource.com/article/19/7/infrastructure-code)
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)

How to use to infrastructure as code
======

As your servers and applications grow, it becomes harder to maintain and keep track of them if you don't treat your infrastructure as code.

![Magnifying glass on code][1]

My previous article about [setting up a homelab][2] described many options for building a personal lab to learn new technology. Regardless of which solution you choose, as your servers and applications grow, it will become harder and harder to maintain and keep track of them if you don't establish control. To avoid this, it's essential to treat your infrastructure as code.

This article is about infrastructure as code (IaC) best practices and includes a sample project automating the deployment of two virtual machines (VMs) and installing [Keepalived][3] and [Nginx][4] while implementing these practices. You can find all the [code for this project][5] on GitHub.

### Why is IaC important? Aren't my scripts enough?

No, scripts are not enough. Over time, scripts become hard to maintain and hard to keep track of. IaC can help you maintain uniformity and scalability while saving lots of time that you would waste if you did every task manually.

One of the problems with the culture of managing servers manually or with partial automation is the lack of consistency and control, which (more often than not) causes configuration drift and undocumented changes to applications or servers. If a server or a virtual machine has to be replaced, it's time-consuming to manually install every piece of software and do every bit of configuration.

With IaC, hundreds of servers can be provisioned, deployed, and configured, usually from a centralized location, and every configuration can be tracked in a version-control system. If a configuration file has to be modified, instead of connecting to every server, the file can be altered locally and the code pushed to the version-control system. The same is true with scaling up or replacing damaged servers. The entire infrastructure is managed centrally, all the code is kept in a version-control repository like Git, and any changes required by the servers are done using this code alone. No more unique unicorns! (Sorry, unicorns!)

One of IaC's main benefits is its integration with [CI/CD][6] tools like [Jenkins][7], which allows you to test more often and create deployment pipelines that automate moving versions of applications from one environment to the next.

### So, how do you start?

Start by doing an inventory of every application, service, and configuration needed by a server; review every piece of software installed, collect the configuration files, verify their parameters, and find where you can replicate the server.

When you have identified everything you need, remember:

  * Use version control; everything should be tracked using version control.
  * Code everything; nothing should be done manually. Use code to describe the desired state.
  * Idempotence: the code you write should always yield the same result, no matter how many times it is executed.
  * Make your code modular.
  * Test, test, test.
  * Again: Use version control. Don't _ever_ forget this.

#### Prerequisites

You need two virtual machines with CentOS 7 installed. SSH login with keys should be working.

Create a directory called **homelab**. This will be your work directory, and this tutorial will refer to it as **$PWD**. Create two other directories inside this directory: **roles** and **group_vars**:

```
$ mkdir -p homelab/{roles,group_vars}
$ cd homelab
```

#### Version control

The first best practice is to always keep track of everything: automation, configuration files, templates. Version-control systems like Git make it easy for users to collaborate by providing a centralized repository where all the code, configurations, templates, etc. can be found. It also lets users review or restore older versions of files.

If you don't have one, create an account in GitHub or GitLab (or use any other version-control provider of your choice).

Inside **$PWD**, initialize your Git repo:

```
$ echo "# IaC example" >> README.md
$ git init
$ git add README.md
$ git commit -m "First commit"
$ git remote add origin <your Git URL>
$ git push -u origin master
```

#### Code everything

The main idea of IaC is to manage—as much as possible—all your infrastructure with code. Any change required in a server, application, or configuration must be defined in the code. Configuration files can be converted into templates to enable greater flexibility and reusability. Settings specific to applications or servers must also be coded, usually in variable files.

When creating the automation, it is crucial to remember idempotence: No matter how many times the code is executed, it should always have the same result. Same input, same result. For example, when writing a piece of code that modifies a file, you must ensure that if the same code is executed again, the file will look the same.

The following steps are automated, and the code is idempotent.

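The idempotence rule can be sketched in a few lines of Python (the throwaway temp file stands in for a managed config file; this mirrors why the motd task in this project uses a template rather than appending):

```python
# Idempotent vs. non-idempotent file updates.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "motd")

def append_line(line):
    # NOT idempotent: each run adds another copy of the line
    with open(path, "a") as f:
        f.write(line + "\n")

def write_template(content):
    # Idempotent: writing the full desired content converges to one state
    with open(path, "w") as f:
        f.write(content)

for _ in range(3):
    write_template("Welcome\n")

with open(path) as f:
    print(f.read().count("Welcome"))  # 1, no matter how many runs
```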
### Modularity

When writing infrastructure as code, it is imperative to think about reusability. Most of the code you write should be reusable and scalable.

When writing [Ansible][8] roles, the best approach is to follow the Unix philosophy: "Write programs that do one thing and do it well." Therefore, create multiple roles, one for each piece of software: 1) a "base" or "common" role that prepares each VM regardless of its purpose; 2) a role to install and configure Keepalived (for high availability); 3) a role to install and configure Nginx (web server). This method allows each role to be reused for different kinds of servers and will save a lot of coding in the future.

#### Create the base role

This role will prepare the VM with all the steps it needs after it is provisioned. Think about any configurations or software each server needs; they should go in this module. In this example, the base role will:

  * Change the hostname
  * Install security updates
  * Enable [EPEL][9] and install utilities
  * Customize the welcome message

Create the basic role skeleton inside **$PWD/roles**:

```
$ ansible-galaxy init --offline base
```

The main file for the role is **$PWD/roles/base/tasks/main.yml**. Modify it with the following content:

```
---
# We set the hostname to the value in the inventory
- name: Change hostname
  hostname:
    name: "{{ inventory_hostname }}"

- name: Update the system
  yum:
    name: "*"
    state: latest

- name: Install basic utilities
  yum:
    name: ['epel-release', 'tmux', 'vim', 'wget', 'nfs-utils']
    state: present

- name: Copy motd
  template:
    src: motd.j2
    dest: /etc/motd
```

Create the template that will replace **/etc/motd** by creating the file **$PWD/roles/base/templates/motd.j2**:

```
UNAUTHORIZED ACCESS TO THIS DEVICE IS PROHIBITED

You must have explicit, authorized permission to access or configure "{{ inventory_hostname }}". Unauthorized attempts and actions to access or use this system may result in civil and/or criminal penalties. All activities performed on this device are logged and monitored.
```

Every task in this code is idempotent. No matter how many times the code is executed, it will always yield exactly the same result. Notice how **/etc/motd** is modified; if the file were modified by adding or appending content (instead of using a template), it would have failed the idempotence rule, because a new line would be added every time it was executed.

#### Create the Keepalived role

You could create a role that includes both **Keepalived** and **Nginx**. But what would happen if you ever needed to install Keepalived without a web server? The code would have to be duplicated, wasting time, effort, and simplicity. Keeping roles minimal and straightforward is the way to go.

The automation code should always handle configuration files so they can be tracked in your version-control system. But how do you handle configuration files when settings can have different values per host? Use templates! Templates allow you to use variables and facts, giving you flexibility with the benefits of uniformity.

Create the Keepalived role skeleton within **$PWD/roles**:

```
$ ansible-galaxy init --offline keepalived
```

Modify the main task file **$PWD/roles/keepalived/tasks/main.yml** as follows:

```
---
- name: Install keepalived
  yum:
    name: "keepalived"
    state: latest

- name: Configure keepalived with the right settings
  template:
    src: keepalived.j2
    dest: /etc/keepalived/keepalived.conf
  notify: restart keepalived
```

**$PWD/roles/keepalived/handlers/main.yml**:

```
---
# handlers file for keepalived
- name: restart keepalived
  service:
    name: keepalived
    enabled: yes
    state: restarted
```

And create the configuration template file **$PWD/roles/keepalived/templates/keepalived.j2**:

```
#### File handled by Ansible.

vrrp_script chk_nginx {
    script "pidof nginx"    # check the nginx process
    interval 2              # every 2 seconds
    weight 2                # add 2 points if OK
}

vrrp_instance LAB {
    interface {{ keepalived_nic }}    # interface to monitor
    state {{ keepalived_state }}
    virtual_router_id {{ keepalived_vri }}
    priority {{ keepalived_priority }}
    virtual_ipaddress {
        {{ keepalived_vip }}
    }
    track_script {
        chk_nginx
    }
}
```

The Keepalived configuration file was converted into a template. It is a typical Keepalived configuration file, but instead of hardcoding values, it is parameterized.

When automating infrastructure and configuration files, it's vital to analyze application configuration files carefully, noting which values are the same across the environments and which settings are unique to each server. Again, every time the template is processed, it should yield the same result. Create variables, use Ansible facts; this adds up to modularity and flexibility.

#### Create the Nginx role

This simple role will install and configure Nginx using a template, following the same principles discussed above. A template will be used for this role to generate an **index.html** with the host's internet protocol (IP) address. [Other facts][10] can be used, too.

Create the Nginx role skeleton within **$PWD/roles** as before (`ansible-galaxy init --offline nginx`), then modify the main task file **$PWD/roles/nginx/tasks/main.yml** as follows:

```
---
# tasks file for nginx
- name: Install nginx
  yum:
    name: 'nginx'
    state: 'latest'
  notify: start nginx

- name: Create web directory
  file:
    path: /var/www
    state: directory
    mode: '0755'

- name: Create index.html
  template:
    src: index.html.j2
    dest: /var/www/index.html

- name: Configure nginx
  template:
    src: site.conf.j2
    dest: /etc/nginx/conf.d/lb.conf
  notify: restart nginx
```

Modify the handlers file **$PWD/roles/nginx/handlers/main.yml** as follows:

```
---
- name: start nginx
  systemd:
    name: 'nginx'
    state: 'started'
    enabled: yes

- name: restart nginx
  systemd:
    name: 'nginx'
    state: 'restarted'
```

And create the following two configuration template files:

**$PWD/roles/nginx/templates/site.conf.j2:**

```
server {
    listen {{ keepalived_vip }}:80;
    root /var/www;
    location / {
    }
}
```

**$PWD/roles/nginx/templates/index.html.j2**:

```
Hello, my ip is {{ ansible_default_ipv4.address }}
```

### Put it all together

You've created several roles; they are ready to be used, so create a playbook to use them.

Create a file called **$PWD/main.yml**:

```
---
- hosts: webservers
  become: yes
  roles:
    - base
    - nginx
    - keepalived
```

This file defines what roles go where. If more roles are available, they can be included to create different combinations as needed. Some servers can be web servers only, for example. This flexibility is one of the main reasons it's so essential to write minimal functional units.

The previous roles require variables to work. Ansible is really flexible and lets you define variable files. This example creates a file called **all** inside **group_vars** (**$PWD/group_vars/all**). If more flexibility is needed, variables can be defined per host inside a folder called **host_vars**:

```
---
keepalived_nic: eth0
keepalived_vri: 51
keepalived_vip: 192.168.2.180
```

Configure **keepalived_nic** with your preferred Keepalived interface, usually **eth0**. The variable **keepalived_vip** should contain the IP address to use as the virtual IP.

And finally, define the inventory. This inventory should keep track of your entire infrastructure. It's best to use dynamic inventories that gather all the information directly from the hypervisor so it doesn't have to be updated manually. Create a file called **inventory** with a section called **webservers** containing information about the two VMs:

```
[webservers]
webserver01 ansible_user=centos ansible_host=192.168.2.101 keepalived_state=MASTER keepalived_priority=101
webserver02 ansible_user=centos ansible_host=192.168.2.102 keepalived_state=BACKUP keepalived_priority=100
```

The variable **ansible_user** should have the user Ansible will use to connect to the server. The variable **keepalived_state** should indicate if the host will be configured as a Master or Backup (as required in the Keepalived template file). Finally, set the variable **keepalived_priority** here because the master should have a higher priority than the backup.

And that's it; you've automated configuration of two VMs, installing Keepalived and Nginx.

Now save your changes:

```
$ git add .
$ git commit -m "IaC playbook"
$ git push -u origin master
```

and deploy:

```
$ ansible-playbook -i inventory main.yml
```

This project investigated basic IaC concepts, but it doesn't end here. Learn more by exploring how to do automated server provisioning, unit testing, and integration with CI/CD tools and pipelines. It's a long process, but it's worth it, both technically and career-wise.

This project investigated basic IaC concepts, but it doesn't end here. Learn more by exploring how to do automated server provisioning, unit testing, and integration with CI/CD tools and pipelines. It's a long process, but it's worth it, both technically and career-wise.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/infrastructure-code

作者:[Michael Zamot][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mzamot
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0 (Magnifying glass on code)
[2]: https://opensource.com/article/19/3/home-lab
[3]: https://www.keepalived.org/
[4]: https://www.nginx.com/
[5]: https://github.com/mzamot/os-homelab-example
[6]: https://en.wikipedia.org/wiki/CI/CD
[7]: https://jenkins.io/
[8]: https://www.ansible.com/
[9]: https://fedoraproject.org/wiki/EPEL
[10]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Learn how to Record and Replay Linux Terminal Sessions Activity)
[#]: via: (https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/)
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)

Learn how to Record and Replay Linux Terminal Sessions Activity
======

Generally, all Linux administrators use the **history** command to track which commands were executed in previous sessions, but one limitation of the history command is that it doesn't store the commands' output. There can be scenarios where we want to check the output of commands from a previous session and compare it with the current session. Apart from this, there are situations where we are troubleshooting issues on Linux production boxes and want to save all terminal session activity for future reference; in such cases the script command comes in handy.

<https://www.linuxtechi.com/wp-content/uploads/2019/06/Record-linux-terminal-session-activity.jpg>

Script is a command line tool used to capture or record your Linux server terminal session activity; the recorded session can later be replayed using the scriptreplay command. In this article we will demonstrate how to install the script command line tool, how to record Linux server terminal session activity, and then how the recorded session can be replayed using the **scriptreplay** command.

### Installation of Script tool on RHEL 7 / CentOS 7

The script command is provided by the rpm package "**util-linux**"; in case it is not installed on your CentOS 7 / RHEL 7 system, run the following yum command:

```
[root@linuxtechi ~]# yum install util-linux -y
```

**On RHEL 8 / CentOS 8**

Run the following dnf command to install the script utility on a RHEL 8 or CentOS 8 system:

```
[root@linuxtechi ~]# dnf install util-linux -y
```

**Installation of Script tool on Debian based systems (Ubuntu / Linux Mint)**

Execute the beneath apt-get command to install the script utility (on Debian-based systems the **script** binary is shipped in the **bsdutils** package, which is installed by default on most systems):

```
[root@linuxtechi ~]# apt-get install bsdutils -y
```

### How to Use script utility

Use of the script command is straightforward: type **script** on the terminal and hit Enter; it will start capturing your current terminal session activity inside a file called "**typescript**":

```
[root@linuxtechi ~]# script
Script started, file is typescript
[root@linuxtechi ~]#
```

To stop recording the session activity, type the **exit** command and hit Enter:

```
[root@linuxtechi ~]# exit
exit
Script done, file is typescript
[root@linuxtechi ~]#
```

Syntax of Script command:

```
~]# script {options} {file_name}
```

Different options used in the script command:

![options-script-command][1]

Let's start recording your Linux terminal session by executing the script command, and then execute a couple of commands like '**w**', '**route -n**', '[**df -h**][2]' and '**free -h**'; an example is shown below:

![script-examples-linux-server][3]

As we can see above, the terminal session logs are saved in the file "typescript".

Now view the contents of the typescript file using the [cat][4] / vi command:

```
[root@linuxtechi ~]# ls -l typescript
-rw-r--r--. 1 root root 1861 Jun 21 00:50 typescript
[root@linuxtechi ~]#
```

![typescript-file-content-linux][5]

The above confirms that whatever commands we execute on the terminal have been saved inside the file "typescript".

### Use Custom File name in script command

Let's assume we want to use our own file name with the script command, so specify the file name after the script command. In the below example we are using the file name "sessions-log-(current-date-time).txt":

```
[root@linuxtechi ~]# script sessions-log-$(date +%d-%m-%Y-%T).txt
Script started, file is sessions-log-21-06-2019-01:37:39.txt
[root@linuxtechi ~]#
```

Now run the commands and then type exit:

```
[root@linuxtechi ~]# exit
exit
Script done, file is sessions-log-21-06-2019-01:37:39.txt
[root@linuxtechi ~]#
```

### Append the commands output to script file

Let's assume the script command has already recorded command output to a file called sessions-log.txt, and now we want to append the output of a new session's commands to this file; in that case use the "**-a**" option of the script command:

```
[root@linuxtechi ~]# script -a sessions-log.txt
Script started, file is sessions-log.txt
[root@linuxtechi ~]# xfs_info /dev/mapper/centos-root
meta-data=/dev/mapper/centos-root isize=512    agcount=4, agsize=2746624 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=10986496, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=5364, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@linuxtechi ~]# exit
exit
Script done, file is sessions-log.txt
[root@linuxtechi ~]#
```

To view the updated session logs, use "cat sessions-log.txt".

### Capture commands output to a script file without an interactive shell

Let’s assume we want to capture a command’s output to a script file without an interactive shell; for that, use the **-c** option, as shown below:

```
[root@linuxtechi ~]# script -c "uptime && hostname && date" root-session.txt
Script started, file is root-session.txt
01:57:40 up 2:30, 3 users, load average: 0.00, 0.01, 0.05
linuxtechi
Fri Jun 21 01:57:40 EDT 2019
Script done, file is root-session.txt
[root@linuxtechi ~]#
```
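Because **-c** needs no interactive shell, it is also handy inside scripts and cron jobs. A minimal, runnable sketch (the file name demo-session.txt is arbitrary; this assumes util-linux’s script is installed):

```shell
# Capture a single command's output non-interactively and quietly
script -q -c "echo hello from script" demo-session.txt

# The typescript file now holds the command's output
grep "hello from script" demo-session.txt
```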
### Run the script command in quiet mode

To run the script command in quiet mode, use the **-q** option; it suppresses the “Script started” and “Script done” messages, as shown below:

```
[root@linuxtechi ~]# script -c "uptime && date" -q root-session.txt
02:01:10 up 2:33, 3 users, load average: 0.00, 0.01, 0.05
Fri Jun 21 02:01:10 EDT 2019
[root@linuxtechi ~]#
```
### Record timing information of a terminal session

To record timing information to one file and capture the commands’ output in a separate file, pass a timing file to the script command with the **--timing** option, as shown below.

Syntax:

~ ]# script --timing=&lt;timing-file-name&gt; {file_name}

```
[root@linuxtechi ~]# script --timing=timing.txt session.log
Script started, file is session.log
[root@linuxtechi ~]# uptime
 02:27:59 up  3:00,  3 users,  load average: 0.00, 0.01, 0.05
[root@linuxtechi ~]# date
Fri Jun 21 02:28:02 EDT 2019
[root@linuxtechi ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           3.9G        171M        2.0G        8.6M        1.7G        3.3G
Swap:          3.9G          0B        3.9G
[root@linuxtechi ~]# whoami
root
[root@linuxtechi ~]# exit
exit
Script done, file is session.log
[root@linuxtechi ~]#
[root@linuxtechi ~]# ls -l session.log timing.txt
-rw-r--r--. 1 root root 673 Jun 21 02:28 session.log
-rw-r--r--. 1 root root 414 Jun 21 02:28 timing.txt
[root@linuxtechi ~]#
```
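Each line of the timing file pairs a delay in seconds (time elapsed since the previous chunk of output) with a byte count (how many bytes of the session file that chunk occupies). A small sketch of the format with made-up numbers, not taken from the recording above:

```shell
# A fabricated timing file in script's "<delay> <byte-count>" format
printf '0.104 12\n0.003 5\n1.250 42\n' > sample-timing.txt

# Summing the first column estimates the total duration of the session
awk '{ total += $1 } END { printf "%.3f seconds\n", total }' sample-timing.txt
```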
### Replay recorded Linux terminal session activity

Now replay the recorded terminal session activities using the scriptreplay command.

**Note:** scriptreplay is also provided by the “**util-linux**” rpm package, and it requires the timing file to work.

```
[root@linuxtechi ~]# scriptreplay --timing=timing.txt session.log
```

The output of the above command will look something like this:

<https://www.linuxtechi.com/wp-content/uploads/2019/06/scriptreplay-linux.gif>
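To see what scriptreplay actually does with the two files, you can even fabricate a tiny pair by hand: scriptreplay skips the header line of the session file, then prints each recorded chunk after its (optionally scaled) delay. A sketch with hypothetical file names, assuming a reasonably recent util-linux whose scriptreplay supports **--divisor**:

```shell
# Hand-built session file: header line (skipped on replay) plus 6 bytes of output
printf 'Script started on Fri Jun 21 02:27:59 2019\nhello\n' > demo.log
# Matching timing file: one chunk, 0.5 s delay, 6 bytes ("hello\n")
printf '0.5 6\n' > demo.tm

# Replay at 10x speed; --divisor divides every recorded delay
scriptreplay --timing=demo.tm --divisor 10 demo.log
```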
### Record all users’ Linux terminal session activities

On some business-critical Linux servers we want to keep track of all users’ activity. This can be accomplished using the script command: place the following content in the /etc/profile file:

```
[root@linuxtechi ~]# vi /etc/profile
……………………………………………………
if [ "x$SESSION_RECORD" = "x" ]
then
    timestamp=$(date +%d-%m-%Y-%T)
    session_log=/var/log/session/session.$USER.$$.$timestamp
    SESSION_RECORD=started
    export SESSION_RECORD
    script -t -f -q 2>${session_log}.timing $session_log
    exit
fi
……………………………………………………
```

Save & exit the file.

Create the session directory under the /var/log folder:

```
[root@linuxtechi ~]# mkdir /var/log/session
```

Assign the permissions to the session folder:

```
[root@linuxtechi ~]# chmod 777 /var/log/session/
[root@linuxtechi ~]#
```
Now verify whether the above code is working. Log in to the Linux server as an ordinary user; in my case I am using the pkumar user:

```
~ ] # ssh root@linuxtechi
root@linuxtechi's password:
[root@linuxtechi ~]$ uptime
 04:34:09 up  5:06,  3 users,  load average: 0.00, 0.01, 0.05
[root@linuxtechi ~]$ date
Fri Jun 21 04:34:11 EDT 2019
[root@linuxtechi ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3.9G        172M        2.0G        8.6M        1.7G        3.3G
Swap:          3.9G          0B        3.9G
[root@linuxtechi ~]$ id
uid=1001(pkumar) gid=1002(pkumar) groups=1002(pkumar) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[root@linuxtechi ~]$ whoami
pkumar
[root@linuxtechi ~]$ exit
```

Log in as root and view the user’s Linux terminal session activity:

```
[root@linuxtechi ~]# cd /var/log/session/
[root@linuxtechi session]# ls -l | grep pkumar
-rw-rw-r--. 1 pkumar pkumar 870 Jun 21 04:34 session.pkumar.19785.21-06-2019-04:34:05
-rw-rw-r--. 1 pkumar pkumar 494 Jun 21 04:34 session.pkumar.19785.21-06-2019-04:34:05.timing
[root@linuxtechi session]#
```

![Session-output-file-linux][6]
We can also use the scriptreplay command to replay a user’s terminal session activities:

```
[root@linuxtechi session]# scriptreplay --timing session.pkumar.19785.21-06-2019-04\:34\:05.timing session.pkumar.19785.21-06-2019-04\:34\:05
```

That’s all from this tutorial; please do share your feedback and comments in the comments section below.
--------------------------------------------------------------------------------

via: https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/

作者:[Pradeep Kumar][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linuxtechi.com/author/pradeep/
[b]: https://github.com/lujun9972
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/06/options-script-command.png
[2]: https://www.linuxtechi.com/11-df-command-examples-in-linux/
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/script-examples-linux-server-1024x736.jpg
[4]: https://www.linuxtechi.com/cat-command-examples-for-beginners-in-linux/
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/typescript-file-content-linux-1024x794.jpg
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Session-output-file-linux-1024x353.jpg

@ -0,0 +1,190 @@
[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Ubuntu or Fedora: Which One Should You Use and Why)
[#]: via: (https://itsfoss.com/ubuntu-vs-fedora/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Ubuntu or Fedora: Which One Should You Use and Why
======

_**Brief: Ubuntu or Fedora? What’s the difference? Which is better? Which one should you use? Read this comparison of Ubuntu and Fedora.**_

[Ubuntu][1] and [Fedora][2] are two of the most popular Linux distributions out there. Deciding between them is not easy, so I’ll try to help you make your decision by comparing various features of Ubuntu and Fedora.

Do note that this comparison is primarily from the desktop point of view. I am not going to focus on the container-specific versions of Fedora or Ubuntu.
### Ubuntu vs Fedora: Which one is better?

![Ubuntu Vs Fedora][3]

Almost all Linux distributions differ from one another primarily on these points:

  * Base distribution (Debian, Red Hat, Arch or built from scratch)
  * Installation
  * Supported desktop environments
  * Package management, software support and updates
  * Hardware support
  * Development team (backed by a corporation or created by hobbyists)
  * Release cycle
  * Community and support

Let’s see how similar or different Ubuntu and Fedora are from each other. Once you know that, it should be easier for you to make a choice.
#### Installation

Ubuntu’s Ubiquity installer is one of the easiest installers out there. I believe it played an important role in Ubuntu’s popularity, because when Ubuntu was created in 2004, installing Linux itself was considered a huge task.

The Ubuntu installer lets you install Ubuntu in around 10 minutes. In most cases it can identify a Windows installation on your system and lets you dual boot Ubuntu and Windows in a matter of clicks.

You can also install updates and third-party codecs while installing Ubuntu. That’s an added advantage.

![Ubuntu Installer][4]

Fedora uses the Anaconda installer. It too simplifies the installation process with an easy-to-use interface.

![Fedora Installer | Image Credit Fedora Magazine][5]

Fedora also provides a media writer tool for downloading Fedora and creating a live USB on the Windows operating system. When I last tried it, around two years ago, it didn’t work, and I had to use regular live-USB creation software.

In my experience, installing Ubuntu is easier than installing Fedora. That doesn’t mean installing Fedora is a complex process; Ubuntu is just simpler.
#### Desktop environments

Both Ubuntu and Fedora use the GNOME desktop environment by default.

![GNOME Desktop in Fedora][6]

While Fedora uses the stock GNOME desktop, Ubuntu has customized it to look and behave like its previous Unity desktop.

![GNOME desktop customized by Ubuntu][7]

Apart from GNOME, both Ubuntu and Fedora offer several other desktop variants.

Ubuntu has Kubuntu, Xubuntu, Lubuntu etc., offering various desktop flavors. While they are official flavors of Ubuntu, they are not directly developed by the Ubuntu team at Canonical; the teams are separate.

Fedora offers various desktop choices in the form of [Fedora Spins][8]. Unlike Kubuntu, Lubuntu etc., they are not created and maintained by separate teams; they come from the core Fedora team.
#### Package management and software availability

Ubuntu uses the APT package manager to provide and manage software (applications, libraries and other required code), while Fedora uses the DNF package manager.

[Ubuntu has vast software repositories][10] allowing you to easily install thousands of programs, both FOSS and non-FOSS. Fedora, on the other hand, focuses on providing only open source software. This is changing in newer versions, but Fedora’s repositories are still not as big as Ubuntu’s.

Some third-party software developers also provide click-to-install, .exe-like packages for Linux. In Ubuntu, these packages are in .deb format, while Fedora supports .rpm packages.

Most software vendors provide both DEB and RPM files for Linux users, but I have seen cases where a vendor provides only a DEB file. For example, the SEO tool [Screaming Frog][11] has only DEB packages. It is extremely rare for software to be available in RPM but not in DEB format.
#### Hardware support

Linux in general has its fair share of trouble with some WiFi adapters and graphics cards, and both Ubuntu and Fedora are affected by that. Take the example of Nvidia: its [open source Nouveau driver often results in troubles like the system hanging at boot][12].

Ubuntu provides an easy way of installing additional proprietary drivers, which results in better hardware support in many cases.

![Installing proprietary driver is easier in Ubuntu][13]

Fedora, on the other hand, sticks to open source software, so installing proprietary drivers on Fedora is a difficult task.
#### Support and userbase

Both Ubuntu and Fedora provide support through community forums. Ubuntu has two main forums: [UbuntuForums][14] and [Ask Ubuntu][15]. Fedora has one main forum, [Ask Fedora][16].

In terms of userbase, Fedora has a large following, but Ubuntu is more popular and has an even larger one.

The popularity of Ubuntu has prompted a number of websites and blogs focused primarily on Ubuntu, so you get more troubleshooting tips and learning material for Ubuntu than for Fedora.
#### Release cycle

A new Fedora version is released every six months, and each Fedora release is supported for nine months only. That means that at some point between six and nine months after installing, you must perform an upgrade. Upgrading a Fedora version is simple, but it does require a good internet connection; not everyone will be happy with 1.5 GB of version upgrades every nine months.

Ubuntu has two kinds of versions: the regular release and the long term support (LTS) release. A regular release is similar to Fedora: it comes out at six-month intervals and is supported for nine months.

An LTS release comes at an interval of two years and is supported for five years. Regular releases bring new features and new software versions, while the LTS release holds on to older versions. This makes it a great choice for people who don’t like frequent changes and prefer stability.
#### Solid base distributions

Ubuntu is based on [Debian][17]. Debian is one of the biggest community projects and one of the most respected projects in the [free software][18] world.

Fedora is a community project from Red Hat. Red Hat is an enterprise-focused Linux distribution, and Fedora works as a ‘testing ground’ ([upstream][19], in technical terms) for new features before those features are included in Red Hat Enterprise Linux.
#### Backed by enterprises

Both Ubuntu and Fedora are backed by their parent corporations: Ubuntu by [Canonical][21] and Fedora by [Red Hat][22] (now [part of IBM][23]). Enterprise backing is important because it helps ensure that a Linux distribution is well maintained.

Hobbyist distributions created by a group of individuals often crumble under the workload. You might have seen reasonably popular distribution projects being shut down for this sole reason. [Antergos][24] and Korora are just two of the many examples of distributions that were discontinued because the developers couldn’t find enough free time to work on the project.

The fact that both Ubuntu and Fedora are supported by two Linux-based enterprises makes them a viable choice over other, independent distributions.
#### Ubuntu vs Fedora as a server

The comparison between Ubuntu and Fedora has so far been aimed primarily at desktop users. But a discussion about Linux is not complete until you include servers.

![Ubuntu Server][25]

Ubuntu is not only popular on the desktop, it also has a strong presence on the server side. If you are familiar with Ubuntu as a desktop, the Ubuntu server edition will not feel uncomfortable. I started with the Ubuntu desktop, and now my websites are hosted on Linux servers running Ubuntu.

Fedora has a server edition too, and some people use it. But most sysadmins won’t prefer a server that has to be upgraded and rebooted every nine months.

Knowing Fedora helps you in using Red Hat Enterprise Linux (RHEL). RHEL is a paid product, and you have to purchase a subscription. If you want a server operating system close to Fedora/Red Hat, I advise using [CentOS][26]. CentOS is also a community project affiliated with Red Hat, but it is focused on servers.
#### Conclusion

As you can see, Ubuntu and Fedora are similar to each other on several points. Ubuntu does take the lead when it comes to software availability, driver installation and online support. _**These are the points that make Ubuntu a better choice, especially for inexperienced Linux users.**_

If you want to get familiar with Red Hat, Fedora is a good starting point. If you have some experience with Linux, or if you want to use only open source software, Fedora is an excellent choice.

In the end, it is really up to you to decide between Fedora and Ubuntu. I would suggest creating live USBs of both distributions, or trying them out in a virtual machine.

What’s your opinion on Ubuntu vs Fedora? Which distribution do you prefer and why? Do share your views in the comment section.
--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-vs-fedora/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://ubuntu.com/
[2]: https://getfedora.org/
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/ubuntu-vs-fedora.png?resize=800%2C450&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/03/install-linux-inside-windows-10.jpg?resize=800%2C479&ssl=1
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/fedora-installer.png?resize=800%2C598&ssl=1
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/gnome-desktop-fedora.png?resize=800%2C450&ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/applications_menu.jpg?resize=800%2C450&ssl=1
[8]: https://spins.fedoraproject.org/
[9]: https://itsfoss.com/system-76-galago-pro/
[10]: https://itsfoss.com/ubuntu-repositories/
[11]: https://www.screamingfrog.co.uk/seo-spider/#download
[12]: https://itsfoss.com/fix-ubuntu-freezing/
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/software_updates_additional_drivers_nvidia.png?resize=800%2C523&ssl=1
[14]: https://ubuntuforums.org/
[15]: https://askubuntu.com/
[16]: https://ask.fedoraproject.org/
[17]: https://www.debian.org/
[18]: https://www.fsf.org/
[19]: https://en.wikipedia.org/wiki/Upstream_(software_development)
[20]: https://itsfoss.com/manage-startup-applications-ubuntu/
[21]: https://canonical.com/
[22]: https://www.redhat.com/en
[23]: https://itsfoss.com/ibm-red-hat-acquisition/
[24]: https://itsfoss.com/antergos-linux-discontinued/
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/ubuntu-server.png?resize=800%2C232&ssl=1
[26]: https://centos.org/

@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (BitTorrent Client Deluge 2.0 Released: Here’s What’s New)
[#]: via: (https://itsfoss.com/deluge-2-release/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)

BitTorrent 客户端 Deluge 2.0 发布:新功能介绍
======

你可能已经知道 [Deluge][1] 是[最适合 Linux 用户的 Torrent 客户端][2]之一。然而,上一个稳定版本差不多是两年前发布的了。

尽管它一直在积极开发,但直到最近才发布了一个新的主要稳定版本。我们写这篇文章时,最新版本恰好是 2.0.2。如果你还没有下载最新的稳定版本,不妨尝试一下。

不管怎样,如果你好奇的话,让我们来看看有哪些新功能。

![Deluge][3]
### Deluge 2.0 的主要改进

新版本引入了多用户支持,这是一个非常需要的功能。

除此之外,还有一些性能改进,可以更快地加载更多的种子。

此外,在 2.0 版本中,Deluge 改用了 Python 3,仅对 Python 2.7 保留最低限度的支持。用户界面也从 GTK UI 迁移到了 GTK3。

根据发行说明,还有一些更重要的补充/改进,包括:

  * 多用户支持。
  * 性能提升,可以更快地加载数千个种子。
  * 一个模拟 GTK/Web UI 的新控制台 UI。
  * GTK UI 迁移到 GTK3,并伴随 UI 改进和新增内容。
  * 磁力链接预获取功能,以便在添加种子时选择文件。
  * 完全支持 libtorrent 1.2。
  * 语言切换支持。
  * 改进了托管在 ReadTheDocs 上的文档。
  * 用 AutoAdd 插件取代了内置功能。
### 如何安装或升级到 Deluge 2.0

![][4]

对于任何 Linux 发行版,你都应该遵循官方[安装指南][5](使用 PPA 或 PyPi)。但是,如果你要升级,你应该留意发行说明中提到的内容:

“_Deluge 2.0 与 Deluge 1.x 的客户端或守护进程不兼容,因此它们也需要升级。如果有第三方脚本直接连接到 Deluge 客户端,那么它们可能也不兼容,需要迁移。_”

因此,在升级主版本之前,记得备份你的[配置][6]以免数据丢失。

而且,如果你是插件作者,则需要升级插件以使其与新版本兼容。

直接下载的安装包尚不支持 Windows 和 Mac OS。不过,发行说明中提到这些正在进行中。

除此之外,你可以按照更新后的官方文档中的[安装指南][5]手动安装它们。

**总结**

你如何看待这个最新的稳定版本?你是否将 Deluge 用作 BitTorrent 客户端?或者你找到了其他更好的选择?

请在下面的评论栏告诉我们你的想法。
--------------------------------------------------------------------------------

via: https://itsfoss.com/deluge-2-release/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://dev.deluge-torrent.org/
[2]: https://itsfoss.com/best-torrent-ubuntu/
[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/deluge.jpg?fit=800%2C410&ssl=1
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/Deluge-2-release.png?resize=800%2C450&ssl=1
[5]: https://deluge.readthedocs.io/en/latest/intro/01-install.html
[6]: https://dev.deluge-torrent.org/wiki/Faq#WheredoesDelugestoreitssettingsconfig

@ -1,83 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (oneforalone)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Emacs for (even more of) the win)
[#]: via: (https://so.nwalsh.com/2019/03/01/emacs)
[#]: author: (Norman Walsh https://so.nwalsh.com)

Emacs 的胜利(或是更多)
======

我天天用 Emacs,但我却从意识到。但是每当我用 Emacs 时,它都给我带来了很多乐趣。

>如果你是个职业作家……Emacs 与其它的编辑器的相比就如皓日与群星一样。不仅更大、更亮,它轻而易举就让其他所有的东西都消失了。

我用 [Emacs][1] 已有二十多年了。我用它来写几乎所有的东西(Scala 和 Java 我用 [IntelliJ][2])。看邮件的话我是能在 Emacs 里看就在里面看。

尽管我用 Emacs 已有数十年,我在新年前后才意识到,在过去10年或更长时间里,我对 Emacs 的使用几乎没有什么变化。当然,新的编辑模式出现了,我就会选一两个插件,几年前我确实是用了 [Helm][3],但大多数时候,它只是完成了我需要的所有繁重工作,日复一日,没有抱怨,也没有妨碍我。一方面,这证明了它有多好。另一方面,这是一个邀请,让我深入挖掘,看看我错过了什么。

于此同时,我也决定从以下几方面改进我的工作方式:

* **更好的议程管理** 我在工作中负责几个项目,这些项目有定期和临时的会议;有些我是我主持的,有些我只要参加就可以。

我意识到我对开会变得草率起来了了。坐在一个有会议要开的房间里实在是太容易了,但实际上你可以阅读电子邮件,处理其他事情。(我强烈反对在会议中“禁止携带笔记本电脑”的这条规定,但这就是另一个话题。)

草率地去开会有几个问题。首先,这是对主持会议的人和其他参与者的不尊重。实际上这是不这么做的完美理由,但我还有意识到令一个问题:它忽视了会议的成本。

如果你在开会,但同时还要回复电子邮件,也许还要改 bug,那么这个会议就不需要花费任何东西(或同样多的钱)。如果会议成本低廉,那么会议数量将会更多。

我想要少点、短些的会议。我不想忽视它们的成本,我想让开会变得很有价值,除非绝对必要,否则就可以避免。

有时,开会是很有必要的。而且我认为一个简短的会能够很快的解决问题。但是,如果我一天有十个短会的话,那还是不要说我做了些有成果的事吧。

我决定在我参加的所有的会上做笔记。我并不是说一定要做会议记录,而是我在做某种会议记录。这会让我把注意力集中在开会上,而忽略其他事。

* **更好的时间管理** 我有很多要做和想做的事,或工作的或私人的。之前,我有在问题清单和邮件进程(Emacs 和 [Gmail][4] 中,用于一些稍微不同的提醒)、日历、手机上各种各样的“待办事项列表”和小纸片上记录过它们。可能还有其他地方。

我决定把它们放在一起。不是说我认为有一个地方就最好或更好,而是说我想完成两件事。首先,把它们都放在一个地方,我能够对我把精力放在哪里有一个更好、更全面的看法。第二,也是因为我想养成一个习惯。固定的或有规律的倾向或行为,尤指难以放弃的。记录、跟踪并保存它们。

* **更好的说明** 如果你在某些科学或工程领域工作,你就会养成记笔记的习惯。唉,我没有。但我决定这么做。

我对法律上鼓励装订页面或做永久标记并不感兴趣。我感兴趣的是养成做记录的习惯。我的目标是有一个地方记下想法和设计草图等。如果我突然有了灵感,或者我想到了一个不在测试套件中的边缘案例,我希望我的本能是把它写在我的日志中,而不是草草写在一张小纸片上,或者向自己保证我会记住它。

这些决心让我很快或多或少地转到了 [Org][6]。Org 有一个庞大的、活跃的、忠诚的用户社区。我以前也用过它(顺带一提,我有[写过][7]它,至少在几年前),我花了很长的一段时间(将 [MarkLogic 集成][8]到其中。(天哪,这在过去的一两个星期里得到了回报!)

但我从没用过 Org。

我现在正在用它。我用了几分钟,我把所有要做的事情都记录下来,我还记了日记。我不确定我试图对它进行边界或列举它的所有特性有多大价值,你可以通过网页快速地搜索找到很多。

如果你用 Emacs,那你也应该用 Org。如果没用过Emacs,我相信你不会是第一个因 Org 而使用 Emacs 的人。Org 可以做很多。它需要一点时间来学习你的方法和快捷键,但我认为这是值得的。(如果你的口袋中有一台 [iOS][9] 设备,我推荐你在忙的时候使用 [beorg][10] 来记录。)

当然,我想出了如何[将 XML 从其中提取出来][11]⊕“working out” 确实是“用 elisp 来编程”的一种有趣的拼写方式。然后,如何将它转换回我的 weblog 期望的标记(当然,在 Emacs 中按下一个按钮就可以做到)。这是第一次用 Org 写的帖子。这也不会是最后一次。

附注:生日快乐,[小博客][12]。

--------------------------------------------------------------------------------

via: https://so.nwalsh.com/2019/03/01/emacs

作者:[Norman Walsh][a]
选题:[lujun9972][b]
译者:[oneforalone](https://github.com/oneforalone)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://so.nwalsh.com
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Emacs
[2]: https://en.wikipedia.org/wiki/IntelliJ_IDEA
[3]: https://emacs-helm.github.io/helm/
[4]: https://en.wikipedia.org/wiki/Gmail
[5]: https://en.wikipedia.org/wiki/Lab_notebook
[6]: https://en.wikipedia.org/wiki/Org-mode
[7]: https://www.balisage.net/Proceedings/vol17/html/Walsh01/BalisageVol17-Walsh01.html
[8]: https://github.com/ndw/ob-ml-marklogic/
[9]: https://en.wikipedia.org/wiki/IOS
[10]: https://beorgapp.com/
[11]: https://github.com/ndw/org-to-xml
[12]: https://so.nwalsh.com/2017/03/01/helloWorld

@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (zionfuo)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

区块链 2.0:公有链 Vs 私有链(七)
======

![Public vs Private blockchain][1]

[**区块链 2.0**][2] 系列的前一篇文章探讨了[**智能合约的现状**][3]。本文旨在介绍可以创建的不同类型的区块链。每种协议都面向不同的应用,根据用例的不同,各应用所遵循的协议也不同。现在,让我们把公有链(开源软件)与私有链(专有技术)做个比较。

正如我们所知,基于区块链的分布式账本的基本三层结构如下:

![][4]

图1 – 区块链分布式账本的基本结构

这里提到的各类型之间的差异,主要源自底层区块链的协议。协议规定了参与者的规则和参与的方式。

阅读本文时,请记住以下几点:

- 任何平台都是为解决某种需求而生,技术应该朝最适合的方向发展。例如,区块链具有巨大的应用价值,其中一些应用可能需要舍弃在其他场景中看起来很重要的功能。**分布式存储**就是这方面最好的例子。
- 区块链本质上是数据库系统,通过时间戳和区块的形式组织数据来跟踪信息。此类区块链的创建者可以选择谁有权产出这些区块并进行修改。
- 区块链也可以“中心化”,通过设置不同的参与程度,可以确定哪些参与者是符合条件的“中心”节点。

大多数区块链要么是公有的,要么是私有的。一般来说,公有链可以被认为是开源软件的等价物,而大多数私有链可以被视为源自公有链的专有平台。下图应该能让大多数人清楚地看出基本的区别。

![][5]

图2 – 公有链/私有链与开源/专有技术的对比

虽然这是最流行的理解方式,但这并不是说所有的私有链都是从公有链中衍生出来的。
### 公有链

公有链可以被视为一个开放的平台或网络。任何拥有专业知识和计算资源的人都可以参与其中。这会产生以下影响:

- 任何人都可以加入公有链网络并参与其中。“参与者”所需要的只是稳定的网络资源和计算资源。
- 参与的方式包括读取、写入、验证以及在交易期间达成共识。比特币矿工就是很好的例子:作为网络的参与者,矿工会得到比特币作为回报。
- 平台完全去中心化,也完全冗余。
- 由于去中心化,没有一个主体可以完全控制账本中记录的数据。所有(或大多数)参与者都需要通过验证区块的方式检查数据。
- 这意味着,一旦信息被验证和记录,就不能轻易更改;即使改了,也不可能不留下痕迹。
- 在比特币和莱特币等平台上,参与者的身份保持匿名。这些平台的设计旨在保护用户身份,这主要是由上层协议栈提供的功能。
- 公有链有比特币、莱特币、以太坊等不同的网络。
- 广泛的去中心化意味着,与中心化的实现相比,在区块链分布式网络中对交易达成共识可能需要一段时间;对于希望随时推送大量交易的大型企业来说,吞吐量可能是一个挑战。
- 开放式参与使得比特币等公有链中的参与者众多,这往往会抬高在计算设备和能源成本方面的初始投资。
### 私有链

相比之下,私有链是需要许可的区块链。这意味着:

- 参与网络需要获得许可,并受监督网络的所有者或机构的控制。这意味着,即使个人能够存储数据并进行交易(例如,发送和接收付款),这些交易的验证和存储也只能由选定的参与者来完成。
- 参与者一旦获得中心机构的许可,就会受到相应条款的约束。例如,在金融机构运营的私有链网络中,并不是每个客户都可以访问整个区块链的分布式账本;即使在获得许可的客户中,也不是每个人都能访问所有内容。在这种情况下,中心机构会授予访问特定服务的权限,这通常被称为“通道”。
- 与公有链相比,这种系统具有更大的吞吐量,交易速度也更快,因为区块只需要由少数几个参与者验证。
- 相比之下,公有链以设计上的安全性著称,其实现依靠以下几点:
  - 匿名参与者
  - 在多个节点上分布式、冗余的加密存储
  - 创建和更改数据需要大量共识

而私有链的协议中通常不具备上述这些特征,这使得此类系统的安全性仅与目前使用的大多数基于云的数据库系统相当。
### 智者的观点

需要注意的一点是,公有或私有(或者说开放或封闭)的命名与底层代码库无关。在这两种情况下,平台所基于的代码库都可能是公开的,也可能不是。R3 是一家分布式账本技术(DLT,**D**istributed **L**edger **T**echnology)公司,领导着由 200 多家跨国机构组成的公开联盟,其目标是在金融和商业领域进一步发展区块链和相关的分布式账本技术。Corda 就是这一共同努力的产物。R3 将 Corda 定义为专门为企业构建的区块链平台。其代码库是开源的,鼓励世界各地的开发人员为这个项目做出贡献。然而,考虑到 Corda 面向的业务性质和旨在满足的需求,Corda 会被归类为需要许可的封闭区块链平台。这意味着企业可以在部署后选择网络的参与者,并通过原生可用的智能合约工具选择这些参与者可以访问的信息类型。

虽然比特币和以太坊这样的公有链推动了这一领域的广泛认知和发展,但仍然可以认为,为企业或商业环境中的特定用例设计的私有链将在短期内引领资金投入。这些平台才是我们大多数人在不久的将来会看到以实际方式运用起来的。

请阅读本系列中关于 Hyperledger 项目的这一篇文章:

- [**Blockchain 2.0 – An Introduction To Hyperledger Project (HLP)**][6]

我们正在研究更多有趣的区块链技术话题,敬请期待!
--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/

作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Public-Vs-Private-Blockchain-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/blockchain-architecture.png
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/Public-vs-Private-blockchain-comparison.png
[6]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/

@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: (qfzy1233)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (A beginner's guide to Linux permissions)
[#]: via: (https://opensource.com/article/19/6/understanding-linux-permissions)
[#]: author: (Bryant Son https://opensource.com/users/brson/users/greg-p/users/tj)

Linux 权限入门指南
======

Linux 安全权限能够指定谁可以对文件或目录执行什么操作。

![Hand putting a Linux file folder into a drawer][1]

与其他系统相比,Linux 系统的众多优点中最主要的一个,便是它有着更少的安全漏洞和被攻击的隐患。Linux 无疑为用户提供了更为灵活和精细的文件系统安全权限控制。这意味着,理解安全权限对 Linux 用户来说至关重要。虽然并非必须,但对于初学者来说,理解 Linux 权限的基础知识仍是明智之选。
### 查看 Linux 安全权限
|
||||
|
||||
在开始 Linux 权限的相关学习之前,假设我们新建了一个名为 **PermissionDemo**的目录。使用 **cd** 命令进入这个目录,然后使用 **ls -l** 命令查看 Linux 安全管理权限信息。如果你想以时间为序排列,加上 **-t** 选项
|
||||
|
||||
|
||||
```
|
||||
`ls -lt`
|
||||
```
|
||||
|
||||
因为这一目录下没有文件,所以这一命令执行不会返回结果。
|
||||
|
||||
![No output from ls -l command][2]
|
||||
|
||||
要了解关于 **ls** 命令的更多信息,请通过在命令行中输入 **man ls** 来查看命令手册。
|
||||
|
||||
![ls man page][3]
|
||||
|
||||
现在,让我们创建两个名为 **cat.txt** 和 **dog.txt** 的空白文件;这一步使用 **touch** 命令将更为简便。然后继续使用 **mkdir** 命令创建一个名为 **Pets** 的空目录。我们可以再次使用**ls -l**命令查看这些新文件的权限。
![Creating new files and directory][4]

我们需要留意这个命令输出结果的两个部分。

### 谁拥有权限?

首先要注意的是 **谁** 具有访问文件/目录的权限。请注意下面红色框中突出显示的部分:第一列是指具有访问权限的 **user**(用户),而第二列是指具有访问权限的 **group**(组)。

![Output from -ls command][5]

用户的类型主要有三种:**user**(用户)、**group**(组)和 **other**(其他人,本质上既不属于前两者)。此外还有一个 **all**,指的是所有人。

![User types][6]

由于我们使用 **root** 作为当前用户,所以我们可以访问任何文件或目录,因为 **root** 是超级用户。然而,通常情况并非如此,你可能会被限定使用自己的普通用户登录。所有的用户都存储在 **/etc/passwd** 文件中。

![/etc/passwd file][7]

“组”的相关信息保存在 **/etc/group** 文件中。
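这两份文件都是纯文本,可以直接用 grep 查看某条记录(下面以 root 为例,注释中的字段说明是补充解释):

```shell
grep '^root:' /etc/passwd   # root 用户的账户记录(UID、主目录、shell 等)
grep '^root:' /etc/group    # root 组的记录(GID 及组成员)
```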
![/etc/group file][8]

### 他们有什么权限?

**ls -l** 命令输出结果中需要注意的另一部分与权限本身有关。上面我们看到,新建的 cat.txt、dog.txt 文件以及 Pets 目录的所有者和所属组都是 **root** 用户。我们可以通过下面红色框中标示的信息,了解不同用户类型所拥有的相应权限。

![Enforcing permissions for different user ownership types][9]

我们可以把每一行分解成五部分。第一部分标志着它是文件还是目录:文件用 **-**(连字符)标记,目录用 **d** 标记。接下来的三个部分分别是 **user**、**group** 和 **other** 的对应权限。最后一部分是 [**access-control list**][10](ACL,访问控制列表)的标志,ACL 是记录特定用户或用户组对该文件操作权限的列表。

![Different Linux permissions][11]

Linux 的权限级别可以用字母或数字标识。有三种权限类型:

* **read(读):** r 或 4
* **write(写):** w 或 2
* **executable(可执行):** x 或 1

(LCTT 译注:原文此处对应的字母标示 **x** 误写为 **e**,已更正)

![Privilege types][12]

每个字母符号(**r**、**w** 或 **x**)表示有该项权限,而 **-** 表示无该项权限。在下面的示例中,文件的所有者可读可写,用户组成员仅可读,其他人可读可执行。转换成数字表示法,对应的是 645(如何计算,请参见下图的图示)。
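可以用 chmod 命令亲手验证这种数字表示法(文件名 example.txt 只是随意取的示例):

```shell
touch example.txt
chmod 645 example.txt    # 6 = rw-(所有者),4 = r--(组),5 = r-x(其他人)
ls -l example.txt        # 第一列显示 -rw-r--r-x
```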
![Permission type example][13]

以下是一些示例:

![Permission type examples][14]

完成下面的测试,检查你是否掌握了权限管理的相关知识。

![Permission type examples][15]

--------------------------------------------------------------------------------
via: https://opensource.com/article/19/6/understanding-linux-permissions

作者:[Bryant Son][a]
选题:[lujun9972][b]
译者:[qfzy1233](https://github.com/qfzy1233)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/brson/users/greg-p/users/tj
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
[2]: https://opensource.com/sites/default/files/uploads/1_3.jpg (No output from ls -l command)
[3]: https://opensource.com/sites/default/files/uploads/1_man.jpg (ls man page)
[4]: https://opensource.com/sites/default/files/uploads/2_6.jpg (Creating new files and directory)
[5]: https://opensource.com/sites/default/files/uploads/3_2.jpg (Output from -ls command)
[6]: https://opensource.com/sites/default/files/uploads/4_0.jpg (User types)
[7]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_passwd.jpg (/etc/passwd file)
[8]: https://opensource.com/sites/default/files/uploads/linuxpermissions_4_group.jpg (/etc/group file)
[9]: https://opensource.com/sites/default/files/uploads/linuxpermissions_5.jpg (Enforcing permissions for different user ownership types)
[10]: https://en.wikipedia.org/wiki/Access-control_list
[11]: https://opensource.com/sites/default/files/uploads/linuxpermissions_6.jpg (Different Linux permissions)
[12]: https://opensource.com/sites/default/files/uploads/linuxpermissions_7.jpg (Privilege types)
[13]: https://opensource.com/sites/default/files/uploads/linuxpermissions_8.jpg (Permission type example)
[14]: https://opensource.com/sites/default/files/uploads/linuxpermissions_9.jpg (Permission type examples)
[15]: https://opensource.com/sites/default/files/uploads/linuxpermissions_10.jpg (Permission type examples)
88
translated/tech/20190625 The innovation delusion.md
Normal file
@ -0,0 +1,88 @@

[#]: collector: (lujun9972)
[#]: translator: (chen-ni)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The innovation delusion)
[#]: via: (https://opensource.com/open-organization/19/6/innovation-delusion)
[#]: author: (Jim Whitehurst https://opensource.com/users/jwhitehurst/users/jwhitehurst/users/n8chz/users/dhdeans)
创新的幻觉
======

创新是一个混乱的过程,但是关于创新的故事却很有条理。我们不应该把两者搞混。

![gears and lightbulb to represent innovation][1]

如果说 [传统的规划方法已经消亡了][2],为什么这么多机构还在孜孜不倦地运用那些针对工业革命所设计的规划方法呢?

其中的一个原因是,我们错误地认为创新是可以通过结构化的、线性的过程实现的。我觉得这样做是在混淆关于创新的 **故事** 和创新这个 **过程** 本身 —— 两者是截然不同的东西。

创新的过程是混乱和不可预测的,并不遵循井然有序的时间线。过程中充满了反复迭代、突然的方向改变、各式各样的启动和终止、死胡同、失败(但愿是有用的失败),以及很多不可知的变量。创新是混乱的。

但是关于创新的故事却让事情显得很简单:从讲述伟大发明的书籍和文章,到我们(简化了整个过程之后)讲述的自己在工作上的成功,都是如此。想想你在社交媒体上读到的帖子里,有多少都只写了最好的、最愉快的部分。

好故事就是这样。我们把一些原本分散的时间点干净利落地整理到一起,有开头、有发展、也有结尾。尽管我们一路上经历了很多没有把握、恐慌、甚至是绝望的时刻,在故事里这些坎坷却都被抹平了,让事情看起来像是从一开始就注定会成功。

我们不应该把混乱的过程和简化了的故事搞混。否则,我们会错误地认为,应对创新的挑战可以使用与简单线性过程相同的方法。换句话说,我们会把适用于一种类型的活动(偏记忆性、机械性和规则性的任务)的管理技术,应用到并不适用的另一种类型的活动(更具创造性、非线性、需要自主性和试验的工作)上。

### 一个创新的故事
下面这个故事可以很好地说明这一点,这是我 [最喜欢举的例子][2] 之一。

在 1970 年代,英国的摩托车行业怎么也想不明白,为什么他们在美国的市场份额急剧下降,而本田公司的市场份额却急速攀升。他们雇佣了波士顿咨询公司(刚好是我之前的雇主)帮助他们找出问题所在。波士顿咨询搜集了一些历史数据,回顾了二十年间的历史事件,总结出了一个井井有条的、线性的故事,可以很好地解释本田公司的成功。

[波士顿咨询的结论是][3],本田公司使用了一个巧妙的策略:通过一种可以以更低价格销售的小型摩托车进入美国市场,并且借助于他们在日本本土市场发展出来的规模经济,在美国市场使用低价策略扩大市场份额,然后等到需求进一步增长的时候,再利用更大的规模经济继续扩大他们在美国的市场份额。在所有人的眼中,本田公司都表现得非常出色,不仅发挥了自己的优势,还非常透彻和精准地理解了新的目标顾客:美国消费者。他们棋高一着,通过一个执行得很好的计划占领了先机,胜过了竞争对手们。

这个故事 **听上去** 很厉害,但是实际情况没有这么简单。

没错,本田公司 **确实** 是想进入美国摩托车市场。他们最初是想 [效仿在美国的竞争对手][4],也制造美国人似乎更喜欢的大型摩托车。但是大型摩托车并不是本田公司的强项,他们生产的大型摩托车存在可靠性上的问题。更糟的是,他们的产品和市面上的其它产品没有什么差别,并不能脱颖而出。简单来说,他们的产品销售表现平平。

但是在一次奇妙的巧合里,本田公司出访美国的日本代表们带了几辆自己骑的摩托车。这些摩托车和本田公司试图在美国市场上销售的摩托车完全不同:它们更为小巧灵活、不那么笨重、更有效率,并且一般来说也更便宜。西尔斯公司(LCTT 译注:美国零售业巨头)注意到了这些小巧的摩托车,并且和日本代表达成了一项协议,让西尔斯公司可以在他们在美国的商店里出售这种被称为“超级幼兽”的新型摩托车。

剩下的故事已经载入史册。超级幼兽成为了 [史上最畅销的机动车][5],并且本田公司 [至今仍然在生产超级幼兽][6]。

事后看来,将超级幼兽带到美国的一连串事件似乎很有逻辑,甚至近乎平淡。但是本田公司的成功和“巧妙的计划”没有什么关系,而更应该归功于一些(大多数人不愿意承认的)机缘巧合。

### 开放(并且混乱的)创新
机构(特别是领导们)喜欢把成功说成是一件计划之内的事情 —— 好像成功人士可以驾驭混乱,并且几乎可以预测未来。但是这些言论都是事后诸葛亮罢了:他们在讲述自己充满偶然性的经历的时候,会刻意将无序的事情整理一番;对于毫无确定性的事情,也会说“我们就是想要那么做的”。

但是正如我前面说的,我们不应该相信这些故事是创新过程的真实还原,也不应该在这种错误假设的基础之上去构建未来的方案或者实验。

试想有另一家摩托车制造商想要复制本田公司在超级幼兽上的成功,就逐字逐句地照搬波士顿咨询总结的故事。由于本田公司成功的 **故事** 听上去是如此有逻辑,并且是线性的,这家新公司也许会假定他们可以通过类似的程序得到同样的结果:制定目标、谋划行动,然后针对可预期的结果进行执行。但是我们知道,本田公司并不是真的靠这种“制定、谋划、执行”的方式赢得市场份额的。他们是通过灵活性和一点运气获得成功的 —— 更像是“尝试、学习、修改”。

当我们可以真正理解并且接受“创新过程是混乱的”这个事实的时候,我们就可以换种方式思考如何让我们的机构实现创新了。与其将资源浪费在预先制定的计划上,**强迫** 创新以一种线性时间线的方式发生,我们不如去构建一些开放并且敏捷的机构,可以 **在创新发生的时候做出及时的响应**。

几年前,红帽公司发布一个包含重大技术升级的产品新版本的时候,我就看到了这样的方法。[红帽企业级 Linux 5.4 版本][8] 首次完全支持了一种被称为“基于内核的虚拟机”(KVM)的技术。这对于我们来说是一个重大的创新,不仅可以为顾客和合作伙伴带来巨大的价值,也有望为开源社区带来巨大的价值。

这项技术正在快速演进。幸运的是,因为我们是一个开放的机构,我们具有足够的适应能力,可以在这项创新发生的时候做出响应,从而帮助我们的顾客和合作伙伴更好地利用它。这项技术太重要了,并且竞争格局也太不稳定了,我们没有理由等到像 6.0 版本这样的里程碑时刻才出手。

如果你回看红帽企业级 Linux [已经存档的发行说明][9],你会发现它读起来并不像一个典型的软件创新故事。一次改变游戏规则的进展,突然出现在一个没有预料到的、并不起眼的时刻(5.4 版本),而不是一个事先计划好的重要里程碑时刻(6.0 版本)。事后看来,我们现在知道 KVM 这种“大爆炸”级别的进步,足够担得起“6.0 版本”这样的里程碑式的版本号。但是创新并不是按照这样的剧本发生的。

不要误解我,机构仍然需要保持出色的运转,高效完成执行性的任务。但是 [不同的挑战需要不同的方法去应对][10],我们需要更好地建立灵活的机构,以便对 [意想不到和不可知的事情][11] 做出更好的响应。

一个在计划工作(以及按照计划执行)上做得很出色的公司,很可能会得到他们计划要得到的结果。但是如果成功还取决于我们没有预测或者无法预测的事情,那么仅仅精准地按照计划执行,是不是就不够了?

--------------------------------------------------------------------------------
via: https://opensource.com/open-organization/19/6/innovation-delusion

作者:[Jim Whitehurst][a]
选题:[lujun9972][b]
译者:[chen-ni](https://github.com/chen-ni)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/jwhitehurst/users/jwhitehurst/users/n8chz/users/dhdeans
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation)
[2]: https://www.youtube.com/watch?v=8MCbJmZQM9c
[3]: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/235319/0532.pdf
[4]: http://www.howardyu.org/the-revolutionary-approach-honda-took-to-rise-above-competition/
[5]: https://autoweek.com/article/motorcycles/first-ride-honda-super-cub-c125-abs-all-new-and-still-super-cute
[6]: https://www.autoblog.com/2019/02/13/2019-honda-super-cub-first-ride-review/
[7]: https://opensource.com/open-organization/18/3/try-learn-modify
[8]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/5.4_release_notes/index
[9]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/5.0_release_notes/index
[10]: https://opensource.com/open-organization/19/4/managed-enabled-empowered
[11]: https://www.linkedin.com/pulse/how-plan-world-full-unknowns-jim-whitehurst/

@ -0,0 +1,131 @@
[#]: collector: (lujun9972)
[#]: translator: (LuuMing)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tracking down library injections on Linux)
[#]: via: (https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
追溯 Linux 上的库注入
======

<ruby>库注入<rt>Library injections</rt></ruby>在 Linux 上不如在 Windows 上常见,但它仍然是一个问题。接下来看看它们是如何工作的,以及如何鉴别它们。

![Sandra Henry-Stocker][1]

尽管库(Linux 上的共享对象文件)注入在 Linux 系统上远不如在 Windows 上常见,它仍是一个严峻的威胁。在采访了来自 AT&T 公司 Alien 实验室的 Jaime Blasco 之后,我更加意识到其中一些攻击是多么容易实施。

在这篇文章中,我会介绍一种攻击方法和它的几种检测手段,也会提供一些展示攻击细节的链接和一些检测工具。首先,介绍一点背景知识。

### 共享库漏洞
DLL(Windows 上)和 .so 文件(Linux 上)都是共享库文件,允许代码(有时还有数据)被不同的进程共享。公用的代码可以放进一个文件中,使得每个需要它的进程可以重复使用,而不必多次重写。这也便于对公用代码的管理。

Linux 进程经常使用这些共享库。`ldd`(显示共享对象依赖)命令可以显示任何程序所依赖的共享库。这里有一些例子:

```
$ ldd /bin/date
        linux-vdso.so.1 (0x00007ffc5f179000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f02bea15000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f02bec3a000)
$ ldd /bin/netstat
        linux-vdso.so.1 (0x00007ffcb67cd000)
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f45e5d7b000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f45e5b90000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f45e5b1c000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f45e5b16000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f45e5dec000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f45e5af5000)
```
`linux-vdso.so.1`(在一些系统上也许会有不同的名字)是内核自动映射到每个进程地址空间的文件,它的工作是找到并定位进程所需的其他共享库。

利用这种库加载机制的一种方法是使用 `LD_PRELOAD` 环境变量。正如 Jaime Blasco 在他的研究中所解释的那样:“`LD_PRELOAD` 是在进程启动时加载共享库的最简单也最常用的方法。可以将这个环境变量配置为某个共享库的路径,以便在加载其他共享对象之前先加载该共享库。”
为了展示这有多么简单,我创建了一个极其简单的共享库,并将其路径赋给(之前并不存在的)`LD_PRELOAD` 环境变量。之后我使用 `ldd` 命令查看它对常用 Linux 命令的影响:

```
$ export LD_PRELOAD=/home/shs/shownum.so
$ ldd /bin/date
        linux-vdso.so.1 (0x00007ffe005ce000)
        /home/shs/shownum.so (0x00007f1e6b65f000)        <== 就是它
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1e6b458000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f1e6b682000)
```
注意,仅仅将新的库赋给 `LD_PRELOAD`,就会影响之后运行的任何程序。

通过 `LD_PRELOAD` 指定的共享库会首先被加载(仅次于 linux-vdso.so.1),这些库可以极大程度上改变一个进程。例如,它们可以把系统调用重定向到自己的实现,或者以意想不到的方式改变程序的运行行为。

### osquery 工具可以检测库注入

`osquery` 工具(可以在 [osquery.io][4] 下载)提供了一个非常独特的方式来审视 Linux 系统:它基本上将操作系统表示为一个高性能的关系数据库。然后,也许你已经猜到了,这意味着可以用 SQL 查询来获取诸如以下的详细信息:

  * 运行中的进程
  * 加载的内核模块
  * 打开的网络连接

维护进程环境变量信息的 osquery 表叫做 `process_envs`。Jaime Blasco 提供了一个相当复杂的查询,可以使用 `osquery` 标识出使用 `LD_PRELOAD` 的进程。

注意,这个查询是从 `process_envs` 表中获取数据的。攻击 ID(T1055)参考了 [Mitre 对该攻击手段的解释][5]。
```
SELECT process_envs.pid AS source_process_id,
       process_envs.key AS environment_variable_key,
       process_envs.value AS environment_variable_value,
       processes.name AS source_process,
       processes.path AS file_path,
       processes.cmdline AS source_process_commandline,
       processes.cwd AS current_working_directory,
       'T1055' AS event_attack_id,
       'Process Injection' AS event_attack_technique,
       'Defense Evasion, Privilege Escalation' AS event_attack_tactic
FROM process_envs
JOIN processes USING (pid)
WHERE key = 'LD_PRELOAD';
```
注意,`LD_PRELOAD` 环境变量有时也有合法用途。例如,各种安全监控工具可能会用到它,开发人员在故障排除、调试或性能分析时也会用到。不过,它的使用仍然很少见,应当加以审查。

同样值得注意的是,osquery 既可以交互式使用,也可以作为守护进程定期执行查询。了解更多请查阅文章末尾给出的参考。

你也可以通过查看用户的环境设置来定位 `LD_PRELOAD` 的使用。如果某个用户账户配置了 `LD_PRELOAD`,你可以在以该用户身份认证之后,用这样的命令来查看:

```
$ env | grep PRELOAD
LD_PRELOAD=/home/username/userlib.so
```
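除了检查当前 shell 的环境,也可以(在权限允许的前提下)扫描 `/proc` 下所有进程的环境,找出设置了 `LD_PRELOAD` 的进程。下面是一个简单的示意脚本,并非 Jaime Blasco 的原始方法:

```shell
# 遍历所有进程的 environ 文件(其内容以 NUL 分隔),查找 LD_PRELOAD
for e in /proc/[0-9]*/environ; do
    if tr '\0' '\n' < "$e" 2>/dev/null | grep -q '^LD_PRELOAD='; then
        pid=${e#/proc/}; pid=${pid%/environ}
        echo "PID $pid: $(tr '\0' '\n' < "$e" 2>/dev/null | grep '^LD_PRELOAD=')"
    fi
done
```

注意,普通用户只能读取自己进程的 environ 文件,全面扫描需要 root 权限。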
如果你之前没有听说过 osquery,也别太在意,它正在成为一个越来越受欢迎的工具。事实上就在上周,Linux 基金会宣布成立新的 [osquery 基金会][6]来支持 osquery 社区。

#### 总结

尽管库注入是一个严重的威胁,但了解一些能帮助你检测它的优秀工具,大有裨益。

#### 扩展阅读

重要的参考和工具的链接:

  * [用 osquery 追寻 Linux 库注入][7],AT&T 网络安全
  * [Linux:我的内存怎么了?][8],TrustedSec
  * [osquery 下载网站][4]
  * [osquery 关系模式][9]
  * [osqueryd(osquery 守护进程)][10]
  * [Mitre 的攻击框架][11]
  * [新的 osquery 基金会成立公告][6]

在 [Facebook][12] 和 [LinkedIn][13] 上加入 Network World 社区,参与话题讨论。

--------------------------------------------------------------------------------
via: https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[LuuMing](https://github.com/LuuMing)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/dll-injection-100800196-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[4]: https://osquery.io/
[5]: https://attack.mitre.org/techniques/T1055/
[6]: https://www.linuxfoundation.org/press-release/2019/06/the-linux-foundation-announces-intent-to-form-new-foundation-to-support-osquery-community/
[7]: https://www.alienvault.com/blogs/labs-research/hunting-for-linux-library-injection-with-osquery
[8]: https://www.trustedsec.com/2018/09/linux-hows-my-memory/
[9]: https://osquery.io/schema/3.3.2
[10]: https://osquery.readthedocs.io/en/stable/deployment/configuration/#schedule
[11]: https://attack.mitre.org/
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world