Mirror of https://github.com/LCTT/TranslateProject.git, synced 2025-03-21 02:10:11 +08:00, commit bb4d94d71c.

published/20190301 Emacs for (even more of) the win.md (new file, 83 lines)
[#]: collector: (lujun9972)
[#]: translator: (oneforalone)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-11046-1.html)
[#]: subject: (Emacs for (even more of) the win)
[#]: via: (https://so.nwalsh.com/2019/03/01/emacs)
[#]: author: (Norman Walsh https://so.nwalsh.com)

Emacs for (even more of) the win
======

![](https://img.linux.net.cn/data/attachment/album/201907/01/122617o875ozm6zzwmouxz.jpg)
I use Emacs every day, though I rarely even notice it. But whenever I do use it, it brings me a great deal of joy.

> If you are a professional writer… Emacs outshines all other editing software in approximately the same way that the noonday sun does the stars. It is not just bigger and brighter; it simply makes everything else vanish.

I've been using [Emacs][1] for more than twenty years. I use it to write almost everything (I edit Scala and Java with [IntelliJ][2]). As for email, I read it in Emacs whenever I can.

Even though I've used Emacs for decades, I realized around New Year's that my use of it had changed very little over the past ten-plus years. New editing modes had appeared and I'd pick up a plugin or two, and a few years ago I did adopt [Helm][3], but most of the time it just did all the heavy lifting I needed, day after day, without complaint and without getting in my way. On the one hand, that's a testament to how good it is. On the other hand, it was an invitation to dig deeper and see what I'd been missing.

At the same time, I decided to improve the way I work in a few areas:

* **Better agenda management.** I'm responsible for several projects at work, with both regular and ad hoc meetings; some I run, others I only need to attend.

I realized I had become somewhat perfunctory about attending meetings. It's easy to sit in the room while actually reading email or working on something else. (I strongly oppose "no laptops" rules for meetings, but that's another topic.)

Attending meetings halfheartedly causes several problems. First, it's disrespectful to the person running the meeting and to the other participants. That is actually reason enough not to do it, but I realized there's another problem: it hides the cost of the meeting.

If you're in a meeting but also answering an email, and perhaps fixing a bug, then the meeting costs nothing (or not as much). If meetings are cheap, there will be more of them.

I want fewer, shorter meetings. I don't want to hide their cost; I want meetings to be genuinely valuable, or simply not held unless absolutely necessary.

Sometimes a meeting is absolutely necessary. And I think a short meeting can sometimes resolve an issue quickly. But if I'm going to have ten short meetings a day, let's not pretend anything much is being accomplished.

I decided to take notes at every meeting I attend. I'm not saying I'll always take minutes, but I will certainly jot down at least a few lines. That will keep my attention on the meeting and off everything else.

* **Better time management.** Whether for work or in my personal life, I have lots of things I have to do and want to do. I've been tracking some of them in issue lists, some in saved email threads (in both Emacs and [Gmail][4], for slightly different kinds of reminders), and some in calendars, assorted "to-do lists" on my phone, and on scraps of paper. There are probably other places, too.

I decided to put them all in one place. Not because I think one consistent place is intrinsically better, but because I want to accomplish two things: first, with everything together in one place, I can get a better, more holistic view of where I'm putting my energy; second, I want to develop the habit of recording, tracking, and preserving them (where a habit is "a settled or regular tendency or practice, especially one that is hard to give up").

* **Better accountability.** If you work in certain scientific or engineering fields, you develop the habit of keeping a [notebook][5]. Alas, I never did. But I decided to start.

I'm not interested in the legal benefits of bound pages or writing in permanent marker. What I'm interested in is developing the habit of keeping a record. My goal is to have one place to jot down ideas, design sketches, and the like. If I suddenly have an inspiration, or think of an edge case that isn't in the test suite, I want my instinct to be to write it in my journal rather than scribbling it on a scrap of paper or telling myself I'll remember it.

These resolutions quickly pointed me, more or less directly, to [Org][6] mode. Org mode has a huge, active, and devoted user community. I'd used it before (in passing, I've even [written][7] about it, years ago), and I spent quite a while [integrating MarkLogic with it][8]. (That has paid off in the past week or two!)

But I'd never used Org mode in earnest.

I'm using it now. I've only been using it for a short while, but I already record all my to-dos in it, and I keep a journal as well. I'm not sure how much value there would be in my trying to enumerate or catalog all of its features; a quick web search will turn up plenty.

If you use Emacs, you should be using Org mode too. If you've never used Emacs, I'm sure you wouldn't be the first person to take up Emacs because of Org mode. It can do a lot. It takes a little time to learn your way around and pick up the shortcuts, but I think it's worth it. (If there's an [iOS][9] device in your pocket, I recommend [beorg][10] for capturing entries on the go.)

Naturally, I worked out how to [extract XML from it][11] (where "working out" was really an enjoyable bit of elisp programming), and then how to convert that back into the markup my blog uses (which, of course, takes just one keystroke in Emacs). This is the first post written in Org mode. It won't be the last.

P.S. Happy birthday, [little blog][12].
--------------------------------------------------------------------------------

via: https://so.nwalsh.com/2019/03/01/emacs

Author: [Norman Walsh][a]
Selected by: [lujun9972][b]
Translated by: [oneforalone](https://github.com/oneforalone)
Proofread by: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]: https://so.nwalsh.com
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Emacs
[2]: https://en.wikipedia.org/wiki/IntelliJ_IDEA
[3]: https://emacs-helm.github.io/helm/
[4]: https://en.wikipedia.org/wiki/Gmail
[5]: https://en.wikipedia.org/wiki/Lab_notebook
[6]: https://en.wikipedia.org/wiki/Org-mode
[7]: https://www.balisage.net/Proceedings/vol17/html/Walsh01/BalisageVol17-Walsh01.html
[8]: https://github.com/ndw/ob-ml-marklogic/
[9]: https://en.wikipedia.org/wiki/IOS
[10]: https://beorgapp.com/
[11]: https://github.com/ndw/org-to-xml
[12]: https://so.nwalsh.com/2017/03/01/helloWorld
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tempered Networks simplifies secure network connectivity and microsegmentation)
[#]: via: (https://www.networkworld.com/article/3405853/tempered-networks-simplifies-secure-network-connectivity-and-microsegmentation.html)
[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/)

Tempered Networks simplifies secure network connectivity and microsegmentation
======
Tempered Networks’ Identity Defined Network platform uses the Host Identity Protocol to partition and isolate the network into trusted microsegments, providing an easy and cost-effective way to secure the network.
![Thinkstock][1]
The TCP/IP protocol is the foundation of the internet and pretty much every network out there. The protocol was designed 45 years ago and was originally created only for connectivity. There’s nothing in the protocol for security, mobility, or trusted authentication.

The fundamental problem with TCP/IP is that the IP address within the protocol represents both the device location and the device identity on a network. This dual role of a single address leaves the protocol without basic mechanisms for securing devices or moving them around a network.

This is one of the reasons networks are so complicated today. To connect to things on a network or over the internet, you need VPNs, firewalls, routers, cell modems, etc., and you have all the configuration that comes with ACLs, VLANs, certificates, and so on. The nightmare grows exponentially when you factor in internet of things (IoT) device connectivity and security. It’s all unsustainable at scale.

Clearly, we need a more efficient and effective way to take on network connectivity, mobility, and security.

The Internet Engineering Task Force (IETF) tackled this problem with the Host Identity Protocol (HIP). It provides a method of separating the endpoint identifier and the locator roles of IP addresses. It introduces a new Host Identity (HI) name space, based on public keys, from which endpoint identifiers are taken. HIP uses existing IP addressing and forwarding for locators and packet delivery. The protocol is compatible with IPv4 and IPv6 applications and utilizes a customized IPsec tunnel mode for confidentiality, authentication, and integrity of network applications.
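Since HIP endpoint identifiers come from a public-key name space, the flavor of such an identity can be illustrated with ordinary tools. This is a loose sketch only, not Tempered Networks' actual tooling; the file names are mine, and the digest step is merely analogous to how a Host Identity Tag names a host:

```shell
# Illustration: a Host Identity in HIP is essentially a public key.
# Generate an RSA keypair, extract the public half, and take a SHA-256
# digest of it -- loosely analogous to deriving a Host Identity Tag (HIT).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out host_identity.pem 2>/dev/null
openssl pkey -in host_identity.pem -pubout -out host_identity.pub
openssl dgst -sha256 host_identity.pub
```

A HIP implementation then authenticates peers against such keys rather than against spoofable IP addresses.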
Ratified by the IETF in 2015, HIP represents a new security networking layer within the OSI stack; think of it as Layer 3.5. It flips the trust model: TCP/IP is inherently promiscuous and will answer anything that wants to talk to a device on the network, whereas HIP will not answer anything on the network unless that connection has been authenticated and authorized based on its cryptographic identity. It is, in effect, a form of [software-defined perimeter][4] around specific network resources. This is also known as [microsegmentation][5].

![][6]

### Tempered Networks’ IDN platform creates segmented, encrypted network

[Tempered Networks][7] has created a platform utilizing HIP and a variety of technologies that partitions and isolates the network into trusted microsegments. Tempered Networks’ Identity Defined Networking (IDN) platform is deployed as an overlay technology that layers on top of any IP network. HIP was designed to be both forward and backward compatible with any IP network without requiring changes to the underlay network. The overlay network creates a direct tunnel between the two things you want to connect.
The IDN platform uses three components to create a segmented and encrypted network: an orchestration engine called the Conductor, the HIPrelay identity-based router, and HIP Services enforcement points.

The Conductor is a centralized orchestration and intelligence engine that connects, protects, and disconnects any resource globally through a single pane of glass. The Conductor is used to define and enforce policies for HIP Services. Policy configuration is done in a simple point-and-click manner. The Conductor is available as a physical or virtual appliance or in the Amazon Web Services (AWS) cloud.

HIP Services provide software-based policy enforcement, enabling secure connectivity among IDN-protected devices, as well as cloaking, segmentation, identity-based routing, and IP mobility. They can be deployed on or in-line to any device or system and come in the form of HIPswitch hardware, HIPserver, HIPclient, Cloud HIPswitch, or Virtual HIPswitch. HIP Services also can be embedded in customer hardware or applications.

Placing HIPswitches in front of any connected device renders the device HIP-enabled and immediately microsegments the traffic, isolating inbound and outbound traffic from the underlying network. HIPswitches deployed on the network automatically register with the Conductor using their cryptographic identity.

HIPrelay works with the HIP Service-enabled endpoints to deliver peer-to-peer connectivity for any device or system across all networks and transport options. Rather than using Layer 3 or 4 rule sets or traditional routing protocols, HIPrelay routes and connects encrypted communications based on provable cryptographic identities traversing existing infrastructure.

It sounds complicated, but it really isn’t. A use case example should demonstrate the ease and power of this solution.

### Use case: Smart Ships

An international cruise line recently installed Tempered Networks’ IDN solution to provide tighter security around its critical maritime systems. Prior to deployment, the systems for fuel, propulsion, navigation, ballast, weather, and incinerators were on a flat Layer 2 network, which basically allowed authorized users of the network to see everything.

Given that vendors of the different maritime systems had access to their own system, the lack of microsegmentation allowed them to see the other systems as well. The cruise line needed a simple way to segment access to these different systems — isolating them from each other — and they wanted to do it without having to put the ships in dry dock for the network reconfiguration.

The original configuration looked like this:

![][9]

The company implemented microsegmentation of the network based on the functionality of the systems. This isolated and segmented vendor access to only their own systems — everything else was hidden to them. The implementation involved installing HIPrelay identity routing in the cloud, several HIPswitch wireless devices onboard the ships, and HIPclient software on the vendors’ and crew members’ devices. The Conductor appliance that managed the entire deployment was installed in AWS.

All of that was done without impacting the underlying network, and no dry dock time was required for the deployment. In addition, the cruise line was able to eliminate internal firewalls and VPNs that had previously been used for segmentation and remote access. The resulting configuration looks like this:

![][10]

The color coding of the illustration above indicates what systems are now able to directly see and communicate with their corresponding controllers and sensors. Everything else on the network is hidden from view of those systems.

The acquisition cost of the Tempered Networks’ solution was one-tenth that of a traditional microsegmentation solution. The deployment time was 2 FTE days per ship compared to the 40 FTE days a traditional solution would have needed. No additional staffing was required to support the solution, and no changes were made to the underlying network.

### A time-tested microsegmentation solution

This technology came out of Boeing and was deployed for over 12 years within their manufacturing facilities until 2014, when Boeing allowed the technology to become commercialized. Tempered Networks took the HIP and developed the full platform with easy, centralized management. It was purpose-built to provide secure connectivity to networks. The solution has been successfully deployed in industrial domains such as the utilities sector, oil and gas, electricity generation, and aircraft manufacturing, as well as in enterprise domains and healthcare.
Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3405853/tempered-networks-simplifies-secure-network-connectivity-and-microsegmentation.html

Author: [Linda Musthaler][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]: https://www.networkworld.com/author/Linda-Musthaler/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/01/network_security_hacker_virus_crime-100745979-large.jpg
[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html
[3]: https://www.networkworld.com/newsletters/signup.html
[4]: https://www.networkworld.com/article/3359363/software-defined-perimeter-brings-trusted-access-to-multi-cloud-applications-network-resources.html
[5]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[6]: https://images.idgesg.net/images/article/2019/07/hip-slide-100800735-large.jpg
[7]: https://www.temperednetworks.com/
[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[9]: https://images.idgesg.net/images/article/2019/07/cruise-ship-before-100800736-large.jpg
[10]: https://images.idgesg.net/images/article/2019/07/cruise-ship-after-100800738-large.jpg
[11]: https://www.facebook.com/NetworkWorld/
[12]: https://www.linkedin.com/company/network-world
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7]
======

![Public vs Private blockchain][1]
The previous part of the [**Blockchain 2.0**][2] series explored [**the state of smart contracts**][3]. This post intends to throw some light on the different types of blockchains that can be created. Each of these is used for vastly different applications, and depending on the use case, the protocol each follows differs. Now let us compare **public and private blockchains**, and how they relate to open source and proprietary technology.

The fundamental three-layer structure of a blockchain-based distributed ledger, as we know it, is as follows:

![][4]

Figure 1 – Fundamental structure of blockchain-based ledgers

The differences between the types mentioned here are attributable primarily to the protocol that rests on the underlying blockchain. The protocol dictates rules for the participants and the behavior of the blockchain in response to that participation.

Remember to keep the following things in mind while reading through this article:

* Platforms such as these are always created to solve a use-case requirement. There is no single best direction for the technology to take. Blockchains, for instance, have tremendous applications, and some of these might require dropping features that seem significant in other settings. **Decentralized storage** is a major example in this regard.
* Blockchains are basically database systems keeping track of information by timestamping and organizing data in the form of blocks. Creators of such blockchains can choose who has the right to make these blocks and perform alterations.
* Blockchains can be “centralized” as well, and participation in varying extents can be limited to those whom this “central authority” deems eligible.
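The "blocks chained by timestamps and hashes" idea in the second bullet can be sketched with standard command-line tools. This is a toy illustration of hash chaining only, not a real blockchain; the transaction strings are made up:

```shell
# Toy hash chain: each "block" stores a timestamp, some data, and the
# previous block's SHA-256 digest, so altering any earlier block would
# change every digest that follows it.
prev="genesis"
for data in "tx: alice -> bob: 5" "tx: bob -> carol: 2"; do
    block="$(date -u +%s)|$data|$prev"
    prev=$(printf '%s' "$block" | sha256sum | cut -d' ' -f1)
    echo "block: $block"
    echo "hash:  $prev"
done
```

A real blockchain adds the consensus and permission rules discussed below on top of this basic chaining structure.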
Most blockchains are either **public** or **private**. Broadly speaking, public blockchains can be considered the equivalent of open source software, and most private blockchains can be seen as proprietary platforms derived from the public ones. The figure below should make the basic difference obvious to most of you.

![][5]

Figure 2 – Public vs private blockchain comparison with open source and proprietary technology

This is not to say that all private blockchains are derived from open public ones. The most popular ones, however, usually are.

### Public Blockchains

A public blockchain can be considered a **permissionless platform** or **network**. Anyone with the know-how and computing resources can participate in it. This has the following implications:

* Anyone can join and participate in a public blockchain network. All a “participant” needs is a stable internet connection along with computing resources.
* Participation includes reading, writing, verifying, and providing consensus during transactions. **Bitcoin miners** are an example of participating individuals; in exchange for participating in the network, the miners are, in this case, paid in bitcoin.
* The platform is completely decentralized and fully redundant.
* Because of the decentralized nature, no one entity has complete control over the data recorded in the ledger. To validate a block, all (or most) participants need to vet the data.
* This means that once information is verified and recorded, it cannot be altered easily. Even if it is, it's impossible not to leave traces.
* The identity of participants remains anonymous by design in platforms such as **Bitcoin** and **Litecoin**. These platforms aim to protect and secure user identities, primarily a feature provided by the overlying protocol stack.
* Examples of public blockchain networks are **Bitcoin**, **Litecoin**, and **Ethereum**.
* Extensive decentralization means that gaining consensus on transactions can take a while compared to permissioned networks, and throughput can be a challenge for large enterprises aiming to push a very high number of transactions at every instant.
* The open participation, and often the high number of participants, in open chains such as Bitcoin adds up to considerable initial investments in computing equipment and energy costs.
### Private Blockchains

In contrast, a private blockchain is a **permissioned blockchain**. This means:

* Permission to participate in the network is restricted and presided over by the owner or institution overseeing the network. So even though an individual can store data and transact (send and receive payments, for example), the validation and storage of these transactions is done only by select participants.
* Participation, even once permission is granted by the central authority, is limited by terms. For instance, in the case of a private blockchain network run by a financial institution, not every customer will have access to the entire ledger, and even among those with permission, not everyone will be able to access everything. Permission to access select services is granted by the central figure in this case. This is often referred to as **“channeling”**.
* Such systems have significantly larger throughput capabilities and much faster transaction speeds than their public counterparts, because a block of information only needs to be validated by a select few.
* Security by design is something public blockchains are renowned for. They achieve this by:
  * anonymizing participants,
  * distributed and redundant but encrypted storage on multiple nodes, and
  * the mass consensus required for creating and altering data.

Private blockchains usually don’t feature any of these in their protocol. This makes the system only as secure as most cloud-based database systems currently in use.

### A note for the wise

An important point to note: the fact that they’re called public or private (or open or closed) has nothing to do with the underlying code base. The code, or the literal foundations on which the platforms are built, may or may not be publicly available or openly developed in either case. **R3** is a **DLT** (**D**istributed **L**edger **T**echnology) company that leads a public consortium of over 200 multinational institutions. Their aim is to further the development of blockchain and related distributed ledger technology in the domain of finance and commerce. **Corda** is the product of this joint effort. R3 defines Corda as a blockchain platform built specially for businesses. Its codebase is open source, and developers all over the world are encouraged to contribute to the project. However, given its business-facing nature and the needs it is meant to address, Corda would be categorized as a permissioned, closed blockchain platform: businesses can choose the participants of the network once it is deployed and choose what information those participants can access through the use of natively available smart contract tools.

While it is a reality that public platforms like Bitcoin and Ethereum are responsible for the widespread awareness and development going on in the space, it can still be argued that private blockchains designed for specific use cases in enterprise or business settings are what will attract monetary investment in the short run. These are the platforms most of us will see implemented in practical ways in the near future.

Read the next guide in this series, about the Hyperledger project:

* [**Blockchain 2.0 – An Introduction To Hyperledger Project (HLP)**][6]
We are working on many interesting topics on blockchain technology. Stay tuned!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/

Author: [editor][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux.cn](https://linux.cn/).

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Public-Vs-Private-Blockchain-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/blockchain-architecture.png
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/Public-vs-Private-blockchain-comparison.png
[6]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[#]: collector: (lujun9972)
[#]: translator: (LuuMing)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tracking down library injections on Linux)
[#]: via: (https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Tracking down library injections on Linux
======
Library injections are less common on Linux than they are on Windows, but they're still a problem. Here's a look at how they work and how to identify them.
![Sandra Henry-Stocker][1]
While not nearly as commonly seen on Linux systems as on Windows, library injections (involving shared object files on Linux) are still a serious threat. After interviewing Jaime Blasco from AT&T's Alien Labs, I've become more aware of how easily some of these attacks are conducted.

In this post, I'll cover one method of attack and some ways that it can be detected. I'll also provide some links that offer more details on both attack methods and detection tools. First, a little background.
### Shared library vulnerability

Both DLL and .so files are shared library files that allow code (and sometimes data) to be shared by various processes. Commonly used code might be put into one of these files so that it can be reused rather than rewritten many times over for each process that requires it. This also facilitates management of commonly used code.

Linux processes often make use of many of these shared libraries. The **ldd** (display shared object dependencies) command can display these for any program file. Here are some examples:
```
$ ldd /bin/date
        linux-vdso.so.1 (0x00007ffc5f179000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f02bea15000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f02bec3a000)
$ ldd /bin/netstat
        linux-vdso.so.1 (0x00007ffcb67cd000)
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f45e5d7b000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f45e5b90000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f45e5b1c000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f45e5b16000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f45e5dec000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f45e5af5000)
```
The **linux-vdso.so.1** file (which may have a different name on some systems) is a virtual library that the kernel automatically maps into the address space of every process; the dynamic loader (**/lib64/ld-linux-x86-64.so.2** in the listings above) is what finds and loads the other shared libraries a process requires.

One way that this library-loading mechanism is exploited is through the use of an environment variable called **LD_PRELOAD**. As Jaime Blasco explains in his research, "LD_PRELOAD is the easiest and most popular way to load a shared library in a process at startup. This environmental variable can be configured with a path to the shared library to be loaded before any other shared object."

To illustrate how easily this is done, I created an extremely simple shared library and assigned it to my (formerly non-existent) LD_PRELOAD environment variable. Then I used the **ldd** command to see how this would affect a commonly used Linux command.
```
$ export LD_PRELOAD=/home/shs/shownum.so
$ ldd /bin/date
        linux-vdso.so.1 (0x00007ffe005ce000)
        /home/shs/shownum.so (0x00007f1e6b65f000)     <== there it is
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1e6b458000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f1e6b682000)
```
Note that doing nothing more than assigning my new library to LD_PRELOAD now affects any process that I run.

Since the libraries specified by the LD_PRELOAD setting are the first to load (following linux-vdso.so.1), those libraries could significantly change a process. They could, for example, redirect system calls to their own resources or make unexpected changes in how the process being run behaves.

### The osquery tool can detect library injections

The **osquery** tool (downloadable from [osquery.io][4]) provides a unique way of looking at Linux systems. It essentially represents the operating system as a high-performance relational database. And, as you probably suspect, that means it can be queried, with SQL tables providing details on such things as:

* Running processes
* Loaded kernel modules
* Open network connections
One such table, which provides information on running processes, is called **process_envs**. It provides details on environment variables used by various processes. With a fairly complicated query provided by Jaime Blasco, you can get osquery to identify processes that are using LD_PRELOAD.

Note that this query pulls data from the **process_envs** table. The attack ID (T1055) is a reference to [Mitre's explanation of the attack method][5]:
```
SELECT
    process_envs.pid AS source_process_id,
    process_envs.key AS environment_variable_key,
    process_envs.value AS environment_variable_value,
    processes.name AS source_process,
    processes.path AS file_path,
    processes.cmdline AS source_process_commandline,
    processes.cwd AS current_working_directory,
    'T1055' AS event_attack_id,
    'Process Injection' AS event_attack_technique,
    'Defense Evasion, Privilege Escalation' AS event_attack_tactic
FROM process_envs
JOIN processes USING (pid)
WHERE key = 'LD_PRELOAD';
```
Note that the LD_PRELOAD environment variable is at times used legitimately. Various security monitoring tools, for example, could use it, as might developers while they are troubleshooting, debugging or doing performance analysis. However, its use is still quite uncommon and should be viewed with some suspicion.

It's also worth noting that osquery can be used interactively or be run as a daemon (osqueryd) for scheduled queries. See the reference at the bottom of this post for more on this.
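For scheduled use, a query like the one above can be packaged into an osquery query pack. A sketch follows; the file name, query name, and interval are illustrative choices of mine, so check the osquery documentation for your deployment before relying on them:

```shell
# Write a minimal osquery query pack that schedules an LD_PRELOAD check
# every 300 seconds; osqueryd picks it up when the pack file is listed
# in the "packs" section of osquery.conf.
cat > ld_preload_pack.conf <<'EOF'
{
  "queries": {
    "ld_preload_hunt": {
      "query": "SELECT pid, key, value FROM process_envs WHERE key = 'LD_PRELOAD';",
      "interval": 300,
      "description": "Processes started with LD_PRELOAD set"
    }
  }
}
EOF
```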
You might also be able to locate use of LD_PRELOAD by examining users' environment settings. If LD_PRELOAD is configured in a user account, you might determine that with a command like this (after assuming the individual's identity):

```
$ env | grep PRELOAD
LD_PRELOAD=/home/username/userlib.so
```
If you've not previously heard of osquery, don't take it too hard. It's now in the process of becoming a more popular tool. Just last week, in fact, the Linux Foundation announced its intention to support the osquery commmunity with a brand-new [osquery foundation][6].
|
||||
|
||||
#### Wrap-up
|
||||
|
||||
While library injection remains a serious threat, it's helpful to know that some excellent tools are available to help detect its use on your systems.
|
||||
|
||||
#### Additional resources
|
||||
|
||||
Links to important references and tools:
|
||||
|
||||
* [Hunting for Linux library injection with osquery][7] from AT&T Cybersecurity
|
||||
* [Linux: How's My Memory?][8] from TrustedSec
|
||||
* [Download site for osquery][4]
|
||||
* [osquery schema][9]
|
||||
* [osqueryd (osquery deamon)][10]
|
||||
* [Mitre's attack framework][11]
|
||||
* [New osquery foundation announced][6]
|
||||
|
||||
|
||||
|
||||
Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://images.idgesg.net/images/article/2019/06/dll-injection-100800196-large.jpg
|
||||
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
|
||||
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
|
||||
[4]: https://osquery.io/
|
||||
[5]: https://attack.mitre.org/techniques/T1055/
|
||||
[6]: https://www.linuxfoundation.org/press-release/2019/06/the-linux-foundation-announces-intent-to-form-new-foundation-to-support-osquery-community/
|
||||
[7]: https://www.alienvault.com/blogs/labs-research/hunting-for-linux-library-injection-with-osquery
|
||||
[8]: https://www.trustedsec.com/2018/09/linux-hows-my-memory/
|
||||
[9]: https://osquery.io/schema/3.3.2
|
||||
[10]: https://osquery.readthedocs.io/en/stable/deployment/configuration/#schedule
|
||||
[11]: https://attack.mitre.org/
|
||||
[12]: https://www.facebook.com/NetworkWorld/
|
||||
[13]: https://www.linkedin.com/company/network-world
337
sources/tech/20190701 Get modular with Python functions.md
Normal file
@ -0,0 +1,337 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get modular with Python functions)
[#]: via: (https://opensource.com/article/19/7/get-modular-python-functions)
[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/xd-deng/users/nhuntwalker/users/don-watkins)

Get modular with Python functions
======

Minimize your coding workload by using Python functions for repeating tasks.

![OpenStack source code \(Python\) in VIM][1]

Are you confused by fancy programming terms like functions, classes, methods, libraries, and modules? Do you struggle with the scope of variables? Whether you're a self-taught programmer or a formally trained code monkey, the modularity of code can be confusing. But classes and libraries encourage modular code, and modular code can mean building up a collection of multipurpose code blocks that you can use across many projects to reduce your coding workload. In other words, if you follow along with this article's study of [Python][2] functions, you'll find ways to work smarter, and working smarter means working less.

This article assumes enough Python familiarity to write and run a simple script. If you haven't used Python, read my [intro to Python][3] article first.

### Functions

Functions are an important step toward modularity because they are formalized methods of repetition. If there is a task that needs to be done again and again in your program, you can group the code into a function and call the function as often as you need it. This way, you only have to write the code once, but you can use it as often as you like.

Here is an example of a simple function:

```
#!/usr/bin/env python3

import time

def Timer():
    print("Time is " + str(time.time()))
```

Create a folder called **mymodularity** and save the function code as **timestamp.py**.

In addition to this function, create a file called **__init__.py** in the **mymodularity** directory. You can do this in a file manager or a Bash shell:

```
$ touch mymodularity/__init__.py
```

You have now created your own Python library (a "module," in Python lingo) in your Python package called **mymodularity**. It's not a very useful module, because all it does is import the **time** module and print a timestamp, but it's a start.

To use your function, treat it just like any other Python module. Here's a small application that tests the accuracy of Python's **sleep()** function, using your **mymodularity** package for support. Save this file as **sleeptest.py** _outside_ the **mymodularity** directory (if you put this _into_ **mymodularity**, then it becomes a module in your package, and you don't want that).

```
#!/usr/bin/env python3

import time
from mymodularity import timestamp

print("Testing Python sleep()...")

# modularity
timestamp.Timer()
time.sleep(3)
timestamp.Timer()
```

In this simple script, you are calling your **timestamp** module from your **mymodularity** package (twice). When you import a module from a package, the usual syntax is to import the module you want from the package and then use the _module name + a dot + the name of the function you want to call_ (e.g., **timestamp.Timer()**).

You're calling your **Timer()** function twice, so if your **timestamp** module were more complicated than this simple example, you'd be saving yourself quite a lot of repeated code.

Save the file and run it:

```
$ python3 ./sleeptest.py
Testing Python sleep()...
Time is 1560711266.1526039
Time is 1560711269.1557732
```

According to your test, the sleep function in Python is pretty accurate: after three seconds of sleep, the timestamp was correctly incremented by three seconds, with a little variance in microseconds.

The structure of a Python library might seem confusing, but it's not magic. Python is _programmed_ to treat a folder full of Python code accompanied by an **__init__.py** file as a package, and it's programmed to look for available modules in its current directory _first_. This is why the statement **from mymodularity import timestamp** works: Python looks in the current directory for a folder called **mymodularity**, then looks for a **timestamp** file ending in **.py**.

What you have done in this example is functionally the same as this less modular version:

```
#!/usr/bin/env python3

import time

print("Testing Python sleep()...")

# no modularity
print("Time is " + str(time.time()))
time.sleep(3)
print("Time is " + str(time.time()))
```

For a simple example like this, there's not really a reason you wouldn't write your sleep test that way, but the best part about writing your own module is that your code is generic, so you can reuse it for other projects.

You can make the code more generic by passing information into the function when you call it. For instance, suppose you want to use your module to test not the _computer's_ sleep function, but a _user's_ sleep function. Change your **timestamp** code so it accepts an incoming variable called **msg**, which will be a string of text controlling how the **timestamp** is presented each time it is called:

```
#!/usr/bin/env python3

import time

# updated code
def Timer(msg):
    print(str(msg) + str(time.time()))
```

Now your function is more abstract than before. It still prints a timestamp, but what it prints for the user is undefined. That means you need to define it when calling the function.

The **msg** parameter your **Timer** function accepts is arbitrarily named. You could call the parameter **m** or **message** or **text** or anything that makes sense to you. The important thing is that when the **timestamp.Timer** function is called, it accepts some text as its input, places whatever it receives into a variable, and uses the variable to accomplish its task.
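As a small variation (my suggestion, not from the original article), you can also give **msg** a default value, so callers that don't care about the label can still omit the argument:

```python
import time

# Variation on mymodularity/timestamp.py: msg now has a default value,
# so Timer() and Timer("Started at ") both work.
def Timer(msg="Time is "):
    print(str(msg) + str(time.time()))
```

Default parameter values are a common way to add flexibility to a function without breaking existing callers.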
Here's a new application to test the user's ability to sense the passage of time correctly:

```
#!/usr/bin/env python3

from mymodularity import timestamp

print("Press the RETURN key. Count to 3, and press RETURN again.")

input()
timestamp.Timer("Started timer at ")

print("Count to 3...")

input()
timestamp.Timer("You slept until ")
```

Save your new application as **response.py** and run it:

```
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.

Started timer at 1560714482.3772075
Count to 3...

You slept until 1560714484.1628013
```

### Functions and required parameters

The new version of your **timestamp** module now _requires_ a **msg** parameter. That's significant because your first application is now broken: it doesn't pass a string to the **timestamp.Timer** function:

```
$ python3 ./sleeptest.py
Testing Python sleep()...
Traceback (most recent call last):
  File "./sleeptest.py", line 8, in <module>
    timestamp.Timer()
TypeError: Timer() missing 1 required positional argument: 'msg'
```

Can you fix your **sleeptest.py** application so it runs correctly with the updated version of your module?
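One possible fix (mine, not the article's) is simply to pass a label on each call. It is shown here self-contained, with **Timer** defined inline so the snippet runs anywhere; in the real layout the function stays in **mymodularity/timestamp.py** and you only change the two calls:

```python
#!/usr/bin/env python3

import time

def Timer(msg):
    # same function as in mymodularity/timestamp.py
    print(str(msg) + str(time.time()))

print("Testing Python sleep()...")

Timer("Time is ")  # the fix: pass the now-required msg argument
time.sleep(3)
Timer("Time is ")
```

The output is identical to the original sleeptest.py run.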
### Variables and functions

By design, functions limit the scope of variables. In other words, if a variable is created within a function, that variable is available _only_ to that function. If you try to use a variable that appears in a function outside the function, an error occurs.

Here's a modification of the **response.py** application, with an attempt to print the **msg** variable from the **timestamp.Timer()** function:

```
#!/usr/bin/env python3

from mymodularity import timestamp

print("Press the RETURN key. Count to 3, and press RETURN again.")

input()
timestamp.Timer("Started timer at ")

print("Count to 3...")

input()
timestamp.Timer("You slept for ")

print(msg)
```

Try running it to see the error:

```
$ python3 ./response.py
Press the RETURN key. Count to 3, and press RETURN again.

Started timer at 1560719527.7862902
Count to 3...

You slept for 1560719528.135406
Traceback (most recent call last):
  File "./response.py", line 15, in <module>
    print(msg)
NameError: name 'msg' is not defined
```

The application returns a **NameError** message because **msg** is not defined. This might seem confusing because you wrote code that defined **msg**, but you have greater insight into your code than Python does. Code that calls a function, whether the function appears within the same file or is packaged up as a module, doesn't know what happens inside the function. A function independently performs its calculations and returns what it has been programmed to return. Any variables involved are _local_ only: they exist only within the function and only as long as it takes the function to accomplish its purpose.

#### Return statements

If your application needs information contained only in a function, use a **return** statement to have the function provide meaningful data after it runs.

They say time is money, so modify your **timestamp** function to allow for an imaginary charging system:

```
#!/usr/bin/env python3

import time

def Timer(msg):
    print(str(msg) + str(time.time()))
    charge = .02
    return charge
```

The **timestamp** module now charges two cents for each call, but most importantly, it returns the amount charged each time it is called.

Here's a demonstration of how a return statement can be used:

```
#!/usr/bin/env python3

from mymodularity import timestamp

print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")

total = 0

while True:
    kbd = input()
    if kbd.lower() == "q":
        print("You owe $" + str(total))
        exit()
    else:
        charge = timestamp.Timer("Time is ")
        total = total + charge
```

In this sample code, the variable **charge** receives whatever the **timestamp.Timer()** function returns. The function returns a number, so a new variable called **total** is used to keep track of how much has been charged. When the application receives the signal to quit, it prints the total charges:

```
$ python3 ./charge.py
Press RETURN for the time (costs 2 cents).
Press Q RETURN to quit.

Time is 1560722430.345412

Time is 1560722430.933996

Time is 1560722434.6027434

Time is 1560722438.612629

Time is 1560722439.3649364
q
You owe $0.1
```
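A side note that is not in the original article: **total** is a float, and repeated binary floating-point addition of **.02** can accumulate tiny representation errors (sums like 0.060000000000000005). For money-like values, the standard library's **decimal** module avoids this; a minimal sketch:

```python
from decimal import Decimal

charge = Decimal("0.02")  # exact decimal value; the float 0.02 is not exact
total = Decimal("0")

for _ in range(5):  # five simulated calls to Timer()
    total += charge

print("You owe $" + str(total))  # prints "You owe $0.10"
```

Note that the Decimal is constructed from the string "0.02", not the float 0.02, which would already carry the rounding error.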
#### Inline functions

Functions don't have to be created in separate files. If you're just writing a short script specific to one task, it may make more sense to write your functions in the same file. The only difference is that you don't have to import your own module; otherwise the function works the same way. Here's the latest iteration of the time test application as one file:

```
#!/usr/bin/env python3

import time

total = 0

def Timer(msg):
    print(str(msg) + str(time.time()))
    charge = .02
    return charge

print("Press RETURN for the time (costs 2 cents).")
print("Press Q RETURN to quit.")

while True:
    kbd = input()
    if kbd.lower() == "q":
        print("You owe $" + str(total))
        exit()
    else:
        charge = Timer("Time is ")
        total = total + charge
```

It has no external dependencies (the **time** module is included in the Python distribution) and produces the same results as the modular version. The advantage is that everything is located in one file; the disadvantage is that you cannot use the **Timer()** function in another script you are writing unless you copy and paste it manually.

#### Global variables

A variable created outside a function has nothing limiting its scope, so it is considered a _global_ variable.

An example of a global variable is the **total** variable in the **charge.py** example used to track current charges. The running total is created outside any function, so it is bound to the application rather than to a specific function.

A function within the application has access to your global variable, but to get the variable into your imported module, you must send it there the same way you send your **msg** variable.

Global variables are convenient because they seem to be available whenever and wherever you need them, but it can be difficult to keep track of their scope and to know which ones are still hanging around in system memory long after they're no longer needed (although Python generally has very good garbage collection).

Global variables are important, though, because not all variables can be local to a function or class. That's easy now that you know how to send variables to functions and get values back.
### Wrapping up functions

You've learned a lot about functions, so start putting them into your scripts—if not as separate modules, then as blocks of code you don't have to write multiple times within one script. In the next article in this series, I'll get into Python classes.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/get-modular-python-functions

作者:[Seth Kenlon][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/seth/users/xd-deng/users/nhuntwalker/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_1.jpg?itok=lHQK5zpm (OpenStack source code (Python) in VIM)
[2]: https://www.python.org/
[3]: https://opensource.com/article/17/10/python-101
373
sources/tech/20190701 How to use to infrastructure as code.md
Normal file
@ -0,0 +1,373 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use to infrastructure as code)
[#]: via: (https://opensource.com/article/19/7/infrastructure-code)
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)

How to use to infrastructure as code
======

As your servers and applications grow, it becomes harder to maintain and keep track of them if you don't treat your infrastructure as code.

![Magnifying glass on code][1]

My previous article about [setting up a homelab][2] described many options for building a personal lab to learn new technology. Whichever solution you choose, as your servers and applications grow, it will become harder and harder to maintain and keep track of them if you don't establish control. To avoid this, it's essential to treat your infrastructure as code.

This article is about infrastructure as code (IaC) best practices and includes a sample project automating the deployment of two virtual machines (VMs) and installing [Keepalived][3] and [Nginx][4] while implementing these practices. You can find all the [code for this project][5] on GitHub.

### Why is IaC important? Aren't my scripts enough?

No, scripts are not enough. Over time, scripts become hard to maintain and hard to keep track of. IaC can help you maintain uniformity and scalability while saving lots of time that you would otherwise waste doing every task manually.

One of the problems with the culture of managing servers manually or with partial automation is the lack of consistency and control, which (more often than not) causes configuration drift and undocumented changes to applications or servers. If a server or a virtual machine has to be replaced, it's time-consuming to manually install every piece of software and do every bit of configuration.

With IaC, hundreds of servers can be provisioned, deployed, and configured, usually from a centralized location, and every configuration can be tracked in a version-control system. If a configuration file has to be modified, instead of connecting to every server, the file can be altered locally and the code pushed to the version-control system. The same is true for scaling up or replacing damaged servers. The entire infrastructure is managed centrally, all the code is kept in a version-control repository like Git, and any changes required by the servers are done using this code alone. No more unique unicorns! (Sorry, unicorns!)

One of IaC's main benefits is its integration with [CI/CD][6] tools like [Jenkins][7], which allows you to test more often and create deployment pipelines that automate moving versions of applications from one environment to the next.

### So, how do you start?

Start by doing an inventory of every application, service, and configuration needed by a server: review every piece of software installed, collect the configuration files, verify their parameters, and find out what you need to replicate the server.

When you have identified everything you need, remember:

* Use version control; everything should be tracked using version control.
* Code everything; nothing should be done manually. Use code to describe the desired state.
* Idempotence: the code you write should always yield the same result, no matter how many times it is executed.
* Make your code modular.
* Test, test, test.
* Again: use version control. Don't _ever_ forget this.
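Idempotence is easiest to see in a toy example (mine, not from the article): appending to a file is not idempotent, while writing the complete desired state is:

```python
import tempfile
from pathlib import Path

def append_line(path: Path, line: str) -> None:
    # NOT idempotent: every run adds another copy of the line
    with path.open("a") as f:
        f.write(line + "\n")

def set_content(path: Path, content: str) -> None:
    # Idempotent: declares the full desired state, so running it
    # once or a hundred times leaves the file identical
    if not path.exists() or path.read_text() != content:
        path.write_text(content)

demo = Path(tempfile.gettempdir()) / "motd.demo"  # throwaway demo path
for _ in range(3):
    set_content(demo, "WELCOME\n")
print(demo.read_text())  # one WELCOME line, no matter how many runs
```

Ansible's **template** and **file** modules follow the second pattern, which is why the roles below can be re-run safely.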
#### Prerequisites

You need two virtual machines with CentOS 7 installed. SSH login with keys should be working.

Create a directory called **homelab**. This will be your work directory, and this tutorial will refer to it as **$PWD**. Create two other directories inside this directory: **roles** and **group_vars**:

```
$ mkdir -p homelab/{roles,group_vars}
$ cd homelab
```

#### Version control

The first best practice is to always keep track of everything: automation, configuration files, templates. Version-control systems like Git make it easy for users to collaborate by providing a centralized repository where all the code, configurations, templates, etc. can be found. They also let users review or restore older versions of files.

If you don't have one, create an account in GitHub or GitLab (or use any other version-control provider of your choice).

Inside **$PWD**, initialize your Git repo:

```
$ echo "# IaC example" >> README.md
$ git init
$ git add README.md
$ git commit -m "First commit"
$ git remote add origin <your Git URL>
$ git push -u origin master
```

#### Code everything

The main idea of IaC is to manage—as much as possible—all your infrastructure with code. Any change required in a server, application, or configuration must be defined in the code. Configuration files can be converted into templates to enable greater flexibility and reusability. Settings specific to applications or servers must also be coded, usually in variable files.

When creating the automation, it is crucial to remember idempotence: No matter how many times the code is executed, it should always have the same result. Same input, same result. For example, when writing a piece of code that modifies a file, you must ensure that if the same code is executed again, the file will look the same.

The following steps are automated, and the code is idempotent.

### Modularity

When writing infrastructure as code, it is imperative to think about reusability. Most of the code you write should be reusable and scalable.

When writing [Ansible][8] roles, the best approach is to follow the Unix philosophy: "Write programs that do one thing and do it well." Therefore, create multiple roles, one for each piece of software: 1) a "base" or "common" role that prepares each VM regardless of its purpose; 2) a role to install and configure Keepalived (for high availability); 3) a role to install and configure Nginx (the web server). This method allows each role to be reused for different kinds of servers and will save a lot of coding in the future.

#### Create the base role

This role will prepare the VM with all the steps it needs after it is provisioned. Think about any configurations or software each server needs; they should go in this module. In this example, the base role will:

* Change the hostname
* Install security updates
* Enable [EPEL][9] and install utilities
* Customize the welcome message

Create the basic role skeleton inside **$PWD/roles**:

```
$ ansible-galaxy init --offline base
```

The main file for the role is **$PWD/roles/base/tasks/main.yml**. Modify it with the following content:

```
---
# We set the hostname to the value in the inventory
- name: Change hostname
  hostname:
    name: "{{ inventory_hostname }}"

- name: Update the system
  yum:
    name: "*"
    state: latest

- name: Install basic utilities
  yum:
    name: ['epel-release', 'tmux', 'vim', 'wget', 'nfs-utils']
    state: present

- name: Copy motd
  template:
    src: motd.j2
    dest: /etc/motd
```

Create the template file that will replace **/etc/motd** by creating the file **$PWD/roles/base/templates/motd.j2**:

```
UNAUTHORIZED ACCESS TO THIS DEVICE IS PROHIBITED

You must have explicit, authorized permission to access or configure "{{ inventory_hostname }}". Unauthorized attempts and actions to access or use this system may result in civil and/or criminal penalties. All activities performed on this device are logged and monitored.
```

Every task in this code is idempotent. No matter how many times the code is executed, it will always yield exactly the same result. Notice how **/etc/motd** is handled: if the file were modified by adding or appending content (instead of using a template), it would fail the idempotence rule, because a new line would be added every time the code was executed.

#### Create the Keepalived role

You could create a role that includes both Keepalived and Nginx. But what would happen if you ever needed to install Keepalived without a web server? The code would have to be duplicated, wasting time, effort, and simplicity. Keeping roles minimal and straightforward is the way to go.

The automation code should always handle configuration files so they can be tracked in your version-control system. But how do you handle configuration files when settings can have different values per host? Use templates! Templates allow you to use variables and facts, giving you flexibility with the benefits of uniformity.
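The templating idea can be sketched with nothing but the Python standard library (**string.Template** below is only a stand-in for Jinja2, which is what Ansible actually uses):

```python
from string import Template

# A stripped-down stand-in for keepalived.j2
template = Template("""vrrp_instance LAB {
    interface $nic
    state $state
    priority $priority
}""")

# Per-host values that would normally come from the inventory or group_vars
rendered = template.substitute(nic="eth0", state="MASTER", priority="101")
print(rendered)
```

Rendering the same variables always yields the same text, which is exactly the idempotence property the roles below rely on: one template, many per-host configuration files.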
|
||||
|
||||
Create the Keepalived role skeleton within **$PWD/roles**:
|
||||
|
||||
|
||||
```
|
||||
`$ ansible-galaxy init --offline keepalived`
|
||||
```
|
||||
|
||||
Modify the main task file **$PWD/roles/keepalived/tasks/main.yml** as follows:
|
||||
|
||||
|
||||
```
|
||||
\---
|
||||
\- name: Install keepalived
|
||||
yum:
|
||||
name: "keepalived"
|
||||
state: latest
|
||||
|
||||
\- name: Configure keepalived with the right settings
|
||||
template:
|
||||
src: keepalived.j2
|
||||
dest: /etc/keepalived/keepalived.conf
|
||||
notify: restart keepalived
|
||||
```
|
||||
|
||||
**$PWD/roles/keepalived/handlers/main.yml**:
|
||||
|
||||
|
||||
```
|
||||
\---
|
||||
# handlers file for keepalived
|
||||
\- name: restart keepalived
|
||||
service:
|
||||
name: keepalived
|
||||
enabled: yes
|
||||
state: restarted
|
||||
```
|
||||
|
||||
And create the configuration template file **$PWD/roles/keepalived/templates/keepalived.j2**:
|
||||
|
||||
|
||||
```
|
||||
#### File handled by Ansible.
|
||||
|
||||
vrrp_script chk_nginx {
|
||||
script "pidof nginx" # check the nginx process
|
||||
interval 2 # every 2 seconds
|
||||
weight 2 # add 2 points if OK
|
||||
}
|
||||
|
||||
vrrp_instance LAB {
|
||||
interface {{ keepalived_nic }} # interface to monitor
|
||||
state {{ keepalived_state }}
|
||||
virtual_router_id {{ keepalived_vri }}
|
||||
priority {{ keepalived_priority }}
|
||||
virtual_ipaddress {
|
||||
{{ keepalived_vip }}
|
||||
}
|
||||
track_script {
|
||||
chk_nginx
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The Keepalived configuration file was converted into a template. It is a typical Keepalived configuration file, but instead of hardcoding values, it is parameterized.
|
||||
|
||||
When automating infrastructure and configuration files, it's vital to analyze application configuration files carefully, noting which values are the same across the environments and which settings are unique to each server. Again, every time the template is processed, it should yield the same result. Create variables, use Ansible facts; this adds up to modularity and flexibility.
|
||||
|
||||
#### Create the Nginx role
|
||||
|
||||
This simple role will install and configure Nginx using a template, following the same principles discussed above. A template will be used for this role to generate an **index.html** with the host's internet protocol (IP). [Other facts][10] can be used, too.
|
||||
|
||||
Modify the main task file **$PWD/roles/nginx/tasks/main.yml** as follows:
|
||||
|
||||
|
||||
```
|
||||
\---
|
||||
# tasks file for nginx
|
||||
\- name: Install nginx
|
||||
yum:
|
||||
name: 'nginx'
|
||||
state: 'latest'
|
||||
notify: start nginx
|
||||
|
||||
\- name: Create web directory
|
||||
file:
|
||||
path: /var/www
|
||||
state: directory
|
||||
mode: '0755'
|
||||
|
||||
\- name: Create index.html
|
||||
template:
|
||||
src: index.html.j2
|
||||
dest: /var/www/index.html
|
||||
|
||||
\- name: Configure nginx
|
||||
template:
|
||||
src: lb.conf.j2
|
||||
dest: /etc/nginx/conf.d/lb.conf
|
||||
notify: restart nginx
|
||||
```
|
||||
|
||||
Modify the main task file **$PWD/roles/nginx/handlers/main.yml** as follows:
|
||||
|
||||
|
||||
```
|
||||
\---
|
||||
\- name: start nginx
|
||||
systemd:
|
||||
name: 'nginx'
|
||||
state: 'started'
|
||||
enabled: yes
|
||||
|
||||
\- name: restart nginx
|
||||
systemd:
|
||||
name: 'nginx'
|
||||
state: 'restarted'
|
||||
```
|
||||
|
||||
And create the following two configuration template files:
|
||||
|
||||
**$PWD/roles/nginx/templates/site.conf.j2:**
|
||||
|
||||
|
||||
```
|
||||
server {
|
||||
listen {{ keepalived_vip }}:80;
|
||||
root /var/www;
|
||||
location / {
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**$PWD/roles/nginx/templates/index.html.j2**:
|
||||
|
||||
|
||||
```
Hello, my ip is {{ ansible_default_ipv4.address }}
```
|
||||
|
||||
### Put it all together
|
||||
|
||||
You've created several roles; they are ready to be used, so create a playbook to use them.
|
||||
|
||||
Create a file called **$PWD/main.yml**:
|
||||
|
||||
|
||||
```
---
- hosts: webservers
  become: yes
  roles:
    - base
    - nginx
    - keepalived
```
|
||||
|
||||
This file defines what roles go where. If more roles are available, they can be included to create different combinations as needed. Some servers can be web servers only, for example. This flexibility is one of the main reasons it's so essential to write minimal functional units.
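As a hypothetical extension of the playbook above, a second inventory group could reuse the same roles in a different combination (the **dbservers** group and **mariadb** role below are illustrative, not part of this article):

```
---
- hosts: webservers
  become: yes
  roles:
    - base
    - nginx
    - keepalived

- hosts: dbservers      # hypothetical extra inventory group
  become: yes
  roles:
    - base              # shared baseline role
    - mariadb           # hypothetical database role
```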
|
||||
|
||||
The previous roles require variables to work. Ansible is really flexible and lets you define variable files. This example creates a file called **all** inside the **group_vars** folder (**$PWD/group_vars/all**). If more flexibility is needed, variables can be defined per host inside a folder called **host_vars**:
|
||||
|
||||
|
||||
```
---
keepalived_nic: eth0
keepalived_vri: 51
keepalived_vip: 192.168.2.180
```
|
||||
|
||||
Configure **keepalived_nic** with your preferred Keepalived interface, usually **eth0**. The variable **keepalived_vip** should hold the IP address to use as the virtual IP.
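If a single host needs a different value, a per-host override can be sketched like this (the file below is hypothetical; **webserver01** matches a host in the article's inventory):

```
# Hypothetical override file: $PWD/host_vars/webserver01
---
keepalived_nic: eth1    # this host uses a different interface than the group default
```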
|
||||
|
||||
And finally, define the inventory. This inventory should keep track of your entire infrastructure. It's best to use dynamic inventories that gather all the information directly from the hypervisor so it doesn't have to be updated manually. Create a file called **inventory** with a section called **webservers** containing information about the two VMs:
|
||||
|
||||
|
||||
```
[webservers]
webserver01 ansible_user=centos ansible_host=192.168.2.101 keepalived_state=MASTER keepalived_priority=101
webserver02 ansible_user=centos ansible_host=192.168.2.102 keepalived_state=BACKUP keepalived_priority=100
```
|
||||
|
||||
The variable **ansible_user** should have the user Ansible will use to connect to the server. The variable **keepalived_state** should indicate if the host will be configured as a Master or Backup (as required in the Keepalived template file). Finally, set the variable **keepalived_priority** here because the master should have a higher priority than the backup.
|
||||
|
||||
And that's it; you've automated configuration of two VMs, installing Keepalived and Nginx.
|
||||
|
||||
Now save your changes:
|
||||
|
||||
|
||||
```
$ git add .
$ git commit -m "IaC playbook"
$ git push -u origin master
```
|
||||
|
||||
and deploy:
|
||||
|
||||
|
||||
```
$ ansible-playbook -i inventory main.yml
```
|
||||
|
||||
This project investigated basic IaC concepts, but it doesn't end here. Learn more by exploring how to do automated server provisioning, unit testing, and integration with CI/CD tools and pipelines. It's a long process, but it's worth it, both technically and career-wise.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/7/infrastructure-code
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0 (Magnifying glass on code)
|
||||
[2]: https://opensource.com/article/19/3/home-lab
|
||||
[3]: https://www.keepalived.org/
|
||||
[4]: https://www.nginx.com/
|
||||
[5]: https://github.com/mzamot/os-homelab-example
|
||||
[6]: https://en.wikipedia.org/wiki/CI/CD
|
||||
[7]: https://jenkins.io/
|
||||
[8]: https://www.ansible.com/
|
||||
[9]: https://fedoraproject.org/wiki/EPEL
|
||||
[10]: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts
|
@ -0,0 +1,292 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Learn how to Record and Replay Linux Terminal Sessions Activity)
|
||||
[#]: via: (https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/)
|
||||
[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/)
|
||||
|
||||
Learn how to Record and Replay Linux Terminal Sessions Activity
|
||||
======
|
||||
|
||||
Generally, Linux administrators use the **history** command to track which commands were executed in previous sessions, but one limitation of the history command is that it doesn't store a command's output. There are scenarios where we want to check the output of commands from a previous session and compare it with the current session. Apart from this, there are situations where we are troubleshooting issues on Linux production boxes and want to save all terminal session activity for future reference. In such cases, the script command comes in handy.
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/Record-linux-terminal-session-activity.jpg>
|
||||
|
||||
Script is a command-line tool used to capture or record your Linux server terminal session activity; the recorded session can later be replayed with the scriptreplay command. In this article we will demonstrate how to install the script command-line tool and how to record Linux server terminal session activity, and then we will see how the recorded session can be replayed using the **scriptreplay** command.
|
||||
|
||||
### Installation of Script tool on RHEL 7/ CentOS 7
|
||||
|
||||
The script command is provided by the rpm package "**util-linux**". In case it is not installed on your CentOS 7 / RHEL 7 system, run the following yum command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# yum install util-linux -y
|
||||
```
|
||||
|
||||
**On RHEL 8 / CentOS 8**
|
||||
|
||||
Run the following dnf command to install the script utility on RHEL 8 and CentOS 8 systems:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# dnf install util-linux -y
|
||||
```
|
||||
|
||||
**Installation of Script tool on Debian based systems (Ubuntu / Linux Mint)**
|
||||
|
||||
Execute the following apt-get command to install the script utility:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# apt-get install util-linux -y
|
||||
```
|
||||
|
||||
### How to Use script utility
|
||||
|
||||
Using the script command is straightforward: type script on the terminal and hit enter. It will start capturing your current terminal session activity inside a file called "**typescript**":
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# script
|
||||
Script started, file is typescript
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
To stop recording the session activity, type the exit command and hit enter:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# exit
|
||||
exit
|
||||
Script done, file is typescript
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Syntax of Script command:
|
||||
|
||||
```
|
||||
~ ] # script {options} {file_name}
|
||||
```
|
||||
|
||||
Different options used in script command,
|
||||
|
||||
![options-script-command][1]
|
||||
|
||||
Let's start recording your Linux terminal session by executing the script command, and then execute a couple of commands like '**w**', '**route -n**', '[**df -h**][2]' and '**free -h**'; an example is shown below:
|
||||
|
||||
![script-examples-linux-server][3]
|
||||
|
||||
As we can see above, the terminal session logs are saved in the file "typescript".
|
||||
|
||||
Now view the contents of typescript file using [cat][4] / vi command,
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# ls -l typescript
|
||||
-rw-r--r--. 1 root root 1861 Jun 21 00:50 typescript
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
![typescript-file-content-linux][5]
|
||||
|
||||
The above confirms that whatever commands we executed on the terminal have been saved inside the file "typescript".
|
||||
|
||||
### Use Custom File name in script command
|
||||
|
||||
Let's assume we want script to use a custom file name; in that case, specify the file name after the script command. In the example below we are using the file name "sessions-log-(current-date-time).txt":
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# script sessions-log-$(date +%d-%m-%Y-%T).txt
|
||||
Script started, file is sessions-log-21-06-2019-01:37:39.txt
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Now run the commands and then type exit,
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# exit
|
||||
exit
|
||||
Script done, file is sessions-log-21-06-2019-01:37:39.txt
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Append the commands output to script file
|
||||
|
||||
Let's assume the script command has already recorded command output to a file called sessions-log.txt, and now we want to append the output of a new session's commands to this file; in that case, use the "**-a**" option of the script command:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# script -a sessions-log.txt
|
||||
Script started, file is sessions-log.txt
|
||||
[root@linuxtechi ~]# xfs_info /dev/mapper/centos-root
|
||||
meta-data=/dev/mapper/centos-root isize=512 agcount=4, agsize=2746624 blks
|
||||
= sectsz=512 attr=2, projid32bit=1
|
||||
= crc=1 finobt=0 spinodes=0
|
||||
data = bsize=4096 blocks=10986496, imaxpct=25
|
||||
= sunit=0 swidth=0 blks
|
||||
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
|
||||
log =internal bsize=4096 blocks=5364, version=2
|
||||
= sectsz=512 sunit=0 blks, lazy-count=1
|
||||
realtime =none extsz=4096 blocks=0, rtextents=0
|
||||
[root@linuxtechi ~]# exit
|
||||
exit
|
||||
Script done, file is sessions-log.txt
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
To view the updated session logs, use "cat sessions-log.txt".
|
||||
|
||||
### Capture commands output to script file without interactive shell
|
||||
|
||||
Let's assume we want to capture a command's output to a script file without an interactive shell; in that case, use the **-c** option. An example is shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# script -c "uptime && hostname && date" root-session.txt
|
||||
Script started, file is root-session.txt
|
||||
01:57:40 up 2:30, 3 users, load average: 0.00, 0.01, 0.05
|
||||
linuxtechi
|
||||
Fri Jun 21 01:57:40 EDT 2019
|
||||
Script done, file is root-session.txt
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
### Run script command in quiet mode
|
||||
|
||||
To run the script command in quiet mode, use the **-q** option. This option suppresses the "Script started" and "Script done" messages. An example is shown below:
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# script -c "uptime && date" -q root-session.txt
|
||||
02:01:10 up 2:33, 3 users, load average: 0.00, 0.01, 0.05
|
||||
Fri Jun 21 02:01:10 EDT 2019
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
To record timing information to one file and capture command output to a separate file, pass a timing file (**\--timing**) to the script command. An example is shown below.
|
||||
|
||||
Syntax:
|
||||
|
||||
~ ]# script --timing=&lt;timing-file-name&gt; {file_name}
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# script --timing=timing.txt session.log
|
||||
Script started, file is session.log
|
||||
[root@linuxtechi ~]# uptime
|
||||
02:27:59 up 3:00, 3 users, load average: 0.00, 0.01, 0.05
|
||||
[root@linuxtechi ~]# date
|
||||
Fri Jun 21 02:28:02 EDT 2019
|
||||
[root@linuxtechi ~]# free -h
|
||||
total used free shared buff/cache available
|
||||
Mem: 3.9G 171M 2.0G 8.6M 1.7G 3.3G
|
||||
Swap: 3.9G 0B 3.9G
|
||||
[root@linuxtechi ~]# whoami
|
||||
root
|
||||
[root@linuxtechi ~]# exit
|
||||
exit
|
||||
Script done, file is session.log
|
||||
[root@linuxtechi ~]#
|
||||
[root@linuxtechi ~]# ls -l session.log timing.txt
|
||||
-rw-r--r--. 1 root root 673 Jun 21 02:28 session.log
|
||||
-rw-r--r--. 1 root root 414 Jun 21 02:28 timing.txt
|
||||
[root@linuxtechi ~]#
|
||||
```
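The timing file produced above holds two columns per line: the delay in seconds before a chunk of output, and the number of bytes in that chunk. As a small illustration (the sample file below stands in for a real timing file), summing the first column gives the total length of a recording:

```shell
# Create a tiny stand-in timing file (columns: delay-in-seconds  bytes).
printf '0.5 10\n1.5 20\n' > sample-timing.txt

# Sum the delays to get the total recorded duration.
awk '{ total += $1 } END { printf "Session length: %.2f seconds\n", total }' sample-timing.txt
```

scriptreplay can also take an optional speed divisor as a final argument (for example, 2 to replay at roughly double speed).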
|
||||
|
||||
### Replay recorded Linux terminal session activity
|
||||
|
||||
Now replay the recorded terminal session activities using scriptreplay command,
|
||||
|
||||
**Note:** scriptreplay is also provided by the rpm package "**util-linux**". The scriptreplay command requires the timing file to work.
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# scriptreplay --timing=timing.txt session.log
|
||||
```
|
||||
|
||||
Output of above command would be something like below,
|
||||
|
||||
<https://www.linuxtechi.com/wp-content/uploads/2019/06/scriptreplay-linux.gif>
|
||||
|
||||
### Record all User’s Linux terminal session activities
|
||||
|
||||
There are some business-critical Linux servers where we want to keep track of all users' activity. This can be accomplished using the script command: place the following content in the /etc/profile file,
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# vi /etc/profile
|
||||
……………………………………………………
|
||||
if [ "x$SESSION_RECORD" = "x" ]
|
||||
then
|
||||
timestamp=$(date +%d-%m-%Y-%T)
|
||||
session_log=/var/log/session/session.$USER.$$.$timestamp
|
||||
SESSION_RECORD=started
|
||||
export SESSION_RECORD
|
||||
script -t -f -q 2>${session_log}.timing $session_log
|
||||
exit
|
||||
fi
|
||||
……………………………………………………
|
||||
```
|
||||
|
||||
Save & exit the file.
|
||||
|
||||
Create the session directory under /var/log folder,
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# mkdir /var/log/session
|
||||
```
|
||||
|
||||
Assign the permissions to session folder,
|
||||
|
||||
```
|
||||
[root@linuxtechi ~]# chmod 777 /var/log/session/
|
||||
[root@linuxtechi ~]#
|
||||
```
|
||||
|
||||
Now verify whether the above code is working or not. Log in to the Linux server as an ordinary user; in my case, I am using the pkumar user:
|
||||
|
||||
```
~ ]# ssh pkumar@linuxtechi
pkumar@linuxtechi's password:
[pkumar@linuxtechi ~]$ uptime
 04:34:09 up  5:06,  3 users,  load average: 0.00, 0.01, 0.05
[pkumar@linuxtechi ~]$ date
Fri Jun 21 04:34:11 EDT 2019
[pkumar@linuxtechi ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3.9G        172M        2.0G        8.6M        1.7G        3.3G
Swap:          3.9G          0B        3.9G
[pkumar@linuxtechi ~]$ id
uid=1001(pkumar) gid=1002(pkumar) groups=1002(pkumar) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[pkumar@linuxtechi ~]$ whoami
pkumar
[pkumar@linuxtechi ~]$ exit
```

Log in as root and view the user's Linux terminal session activity:

```
[root@linuxtechi ~]# cd /var/log/session/
[root@linuxtechi session]# ls -l | grep pkumar
-rw-rw-r--. 1 pkumar pkumar 870 Jun 21 04:34 session.pkumar.19785.21-06-2019-04:34:05
-rw-rw-r--. 1 pkumar pkumar 494 Jun 21 04:34 session.pkumar.19785.21-06-2019-04:34:05.timing
[root@linuxtechi session]#
```
|
||||
|
||||
![Session-output-file-linux][6]
|
||||
|
||||
We can also use the scriptreplay command to replay a user's terminal session activity:
|
||||
|
||||
```
|
||||
[root@linuxtechi session]# scriptreplay --timing session.pkumar.19785.21-06-2019-04\:34\:05.timing session.pkumar.19785.21-06-2019-04\:34\:05
|
||||
```
|
||||
|
||||
That’s all from this tutorial, please do share your feedback and comments in the comments section below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/record-replay-linux-terminal-sessions-activity/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linuxtechi.com/author/pradeep/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.linuxtechi.com/wp-content/uploads/2019/06/options-script-command.png
|
||||
[2]: https://www.linuxtechi.com/11-df-command-examples-in-linux/
|
||||
[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/script-examples-linux-server-1024x736.jpg
|
||||
[4]: https://www.linuxtechi.com/cat-command-examples-for-beginners-in-linux/
|
||||
[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/typescript-file-content-linux-1024x794.jpg
|
||||
[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Session-output-file-linux-1024x353.jpg
|
@ -0,0 +1,190 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Ubuntu or Fedora: Which One Should You Use and Why)
|
||||
[#]: via: (https://itsfoss.com/ubuntu-vs-fedora/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Ubuntu or Fedora: Which One Should You Use and Why
|
||||
======
|
||||
|
||||
_**Brief: Ubuntu or Fedora? What’s the difference? Which is better? Which one should you use? Read this comparison of Ubuntu and Fedora.**_
|
||||
|
||||
[Ubuntu][1] and [Fedora][2] are two of the most popular Linux distributions out there. Deciding between Ubuntu and Fedora is not easy. I'll try to help you make your decision by comparing various features of Ubuntu and Fedora.
|
||||
|
||||
Do note that this comparison is primarily from the desktop point of view. I am not going to focus on the container specific versions of Fedora or Ubuntu.
|
||||
|
||||
### Ubuntu vs Fedora: Which one is better?
|
||||
|
||||
![Ubuntu Vs Fedora][3]
|
||||
|
||||
Almost all Linux distributions differ from one another primarily on these points:
|
||||
|
||||
* Base distribution (Debian, Red Hat, Arch or from scratch)
|
||||
* Installation
|
||||
* Supported desktop environments
|
||||
* Package management, software support and updates
|
||||
* Hardware support
|
||||
* Development team (backed by corporate or created by hobbyists)
|
||||
* Release cycle
|
||||
* Community and support
|
||||
|
||||
|
||||
|
||||
Let's see how similar or different Ubuntu and Fedora are from each other. Once you know that, it should be easier for you to make a choice.
|
||||
|
||||
#### Installation
|
||||
|
||||
Ubuntu's Ubiquity installer is one of the easiest installers out there. I believe it played an important role in Ubuntu's popularity, because when Ubuntu was created in 2004, installing Linux itself was considered a huge task.
|
||||
|
||||
The Ubuntu installer allows you to install Ubuntu in around 10 minutes. In most cases, it can identify Windows installed on your system and allows you to dual boot Ubuntu and Windows in a matter of clicks.
|
||||
|
||||
You can also install updates and third-party codecs while installing Ubuntu. That’s an added advantage.
|
||||
|
||||
![Ubuntu Installer][4]
|
||||
|
||||
Fedora uses the Anaconda installer. This too simplifies the installation process with an easy-to-use interface.
|
||||
|
||||
![Fedora Installer | Image Credit Fedora Magazine][5]
|
||||
|
||||
Fedora also provides a media writer tool for downloading and creating the live USB of Fedora on Windows operating system. When I last tried to use it around two years ago, it didn’t work and I had to use the regular live USB creating software.
|
||||
|
||||
In my experience, installing Ubuntu is easier than installing Fedora. That doesn’t mean installing Fedora is a complex process. Just that Ubuntu is simpler.
|
||||
|
||||
#### Desktop environments
|
||||
|
||||
Both Ubuntu and Fedora use GNOME desktop environment by default.
|
||||
|
||||
![GNOME Desktop in Fedora][6]
|
||||
|
||||
While Fedora uses the stock GNOME desktop, Ubuntu has customized it to look and behave like its previous Unity desktop.
|
||||
|
||||
![GNOME desktop customized by Ubuntu][7]
|
||||
|
||||
Apart from GNOME, both Ubuntu and Fedora offer several other desktop variants.
|
||||
|
||||
Ubuntu has Kubuntu, Xubuntu, Lubuntu, etc., offering various desktop flavors. While they are official flavors of Ubuntu, they are not directly developed by the Ubuntu team at Canonical; the teams are separate.
|
||||
|
||||
Fedora offers various desktop choices in the form of [Fedora Spins][8]. Unlike Kubuntu, Lubuntu, etc., they are not created and maintained by separate teams; they come from the core Fedora team.
|
||||
|
||||
#### Package management and software availability
|
||||
|
||||
Ubuntu uses the APT package manager to provide and manage software (applications, libraries and other required code), while Fedora uses the DNF package manager.
|
||||
|
||||
|
||||
|
||||
[Ubuntu has vast software repositories][10], allowing you to easily install thousands of programs, both FOSS and non-FOSS. Fedora, on the other hand, focuses on providing only open source software. This is changing in newer versions, but Fedora's repositories are still not as big as Ubuntu's.
|
||||
|
||||
Some third-party software developers also provide click-to-install, .exe-like packages for Linux. In Ubuntu, these packages come in the .deb format, while Fedora supports .rpm packages.
|
||||
|
||||
Most software vendors provide both DEB and RPM files for Linux users, but I have experienced that sometimes a vendor provides only a DEB file. For example, the SEO tool [Screaming Frog][11] has only DEB packages. It's extremely rare that software is available in RPM but not in DEB format.
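As a toy illustration of the two package managers side by side (the package name htop is just an example, and the helper function below is purely illustrative, not a real command from either distribution):

```shell
# Print the install command that matches a distribution family.
install_cmd() {
    case "$1" in
        ubuntu) echo "sudo apt install htop" ;;   # Debian/Ubuntu, .deb packages
        fedora) echo "sudo dnf install htop" ;;   # Fedora, .rpm packages
        *)      echo "unknown distribution" ;;
    esac
}

install_cmd ubuntu
install_cmd fedora
```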
|
||||
|
||||
#### Hardware support
|
||||
|
||||
Linux in general has its fair share of trouble with some WiFi adapters and graphics cards, and both Ubuntu and Fedora are impacted by that. Take the example of Nvidia: its [open source Nouveau driver often results in troubles like the system hanging at boot][12].
|
||||
|
||||
Ubuntu provides an easy way of installing additional proprietary drivers. This results in better hardware support in many cases.
|
||||
|
||||
![Installing proprietary driver is easier in Ubuntu][13]
|
||||
|
||||
Fedora, on the other hand, sticks to open source software, and thus installing proprietary drivers on Fedora is a difficult task.
|
||||
|
||||
#### Support and userbase
|
||||
|
||||
Both Ubuntu and Fedora provide support through community forums. Ubuntu has two main forums: [UbuntuForums][14] and [Ask Ubuntu][15]. Fedora has one main forum [Ask Fedora][16].
|
||||
|
||||
In terms of userbase, both have large followings, but Ubuntu is more popular and has a larger following than Fedora.
|
||||
|
||||
The popularity of Ubuntu has prompted a number of websites and blogs focused primarily on Ubuntu. This way, you get more troubleshooting tips and learning material on Ubuntu than Fedora.
|
||||
|
||||
#### Release cycle
|
||||
|
||||
A new Fedora version is released every six months, and each Fedora release is supported for only nine months, which means that between six and nine months after installing, you must perform an upgrade. Upgrading the Fedora version is simple, but it does require a good internet connection. Not everyone is happy with 1.5 GB of version upgrades every nine months.
|
||||
|
||||
Ubuntu has two versions: regular release and the long term support (LTS) release. Regular release is similar to Fedora. It’s released at the interval of six months and is supported for nine months.
|
||||
|
||||
The LTS release comes at an interval of two years and is supported for five years. Regular releases bring new features, new software versions while the LTS release holds on to the older versions. This makes it a great choice for people who don’t like frequent changes and prefer stability.
|
||||
|
||||
#### Solid base distributions
|
||||
|
||||
Ubuntu is based on [Debian][17]. Debian is one of the biggest community projects and one of the most respected projects in the [free software][18] world.
|
||||
|
||||
Fedora is a community project from Red Hat. Red Hat is an enterprise-focused Linux company. Fedora works as a 'testing ground' ([upstream][19] in technical terms) for new features before those features are included in Red Hat Enterprise Linux.
|
||||
|
||||
|
||||
|
||||
#### Backed by enterprises
|
||||
|
||||
Both Ubuntu and Fedora are backed by their parent corporations. Ubuntu is from [Canonical][21] while Fedora is from [Red Hat][22] (now [part of IBM][23]). Enterprise backing is important because it ensures that the Linux distribution is well-maintained.
|
||||
|
||||
Hobbyist distributions created by a group of individuals often crumble under the workload. You might have seen reasonably popular distribution projects being shut down for this sole reason. [Antergos][24] and Korora are just two of the many examples of distributions that were discontinued because the developers couldn't get enough free time to work on the project.
|
||||
|
||||
The fact that both Ubuntu and Fedora are backed by two Linux-based enterprises makes them a viable choice over other independent distributions.
|
||||
|
||||
#### Ubuntu vs Fedora as server
|
||||
|
||||
The comparison between Ubuntu and Fedora was primarily aimed at desktop users so far. But a discussion about Linux is not complete until you include servers.
|
||||
|
||||
![Ubuntu Server][25]
|
||||
|
||||
Ubuntu is not only popular on the desktop, it also has a good presence on the server side. If you are familiar with the Ubuntu desktop, you won't feel uncomfortable with the Ubuntu server edition. I started with the Ubuntu desktop, and now my websites are hosted on Linux servers running Ubuntu.
|
||||
|
||||
Fedora also has a server edition, and some people use it. But most sysadmins won't prefer a server that has to be upgraded and rebooted every nine months.
|
||||
|
||||
Knowing Fedora helps you in using Red Hat Enterprise Linux (RHEL). RHEL is a paid product and you’ll have to purchase a subscription. If you want an operating system for running server that is close to Fedora/Red Hat, I advise using CentOS. [CentOS][26] is also a community project affiliated with Red Hat but this one is focused on servers.
|
||||
|
||||
#### Conclusion
|
||||
|
||||
As you can see, Ubuntu and Fedora are similar to each other on several points. Ubuntu does take the lead when it comes to software availability, driver installation and online support. And _**these are the points that make Ubuntu a better choice, especially for inexperienced Linux users.**_
|
||||
|
||||
If you want to get familiar with Red Hat, Fedora is a good starting point. If you have some experience with Linux or if you want to use only open source software, Fedora is an excellent choice.
|
||||
|
||||
In the end, it is really up to you to decide whether you want to use Fedora or Ubuntu. I would suggest creating live USBs of both distributions, or trying them out in a virtual machine.
|
||||
|
||||
What’s your opinion on Ubuntu vs Fedora? Which distribution do you prefer and why? Do share your views in the comment section.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/ubuntu-vs-fedora/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://ubuntu.com/
|
||||
[2]: https://getfedora.org/
|
||||
[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/07/ubuntu-vs-fedora.png?resize=800%2C450&ssl=1
|
||||
[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/03/install-linux-inside-windows-10.jpg?resize=800%2C479&ssl=1
|
||||
[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/07/fedora-installer.png?resize=800%2C598&ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/gnome-desktop-fedora.png?resize=800%2C450&ssl=1
|
||||
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/applications_menu.jpg?resize=800%2C450&ssl=1
|
||||
[8]: https://spins.fedoraproject.org/
|
||||
[9]: https://itsfoss.com/system-76-galago-pro/
|
||||
[10]: https://itsfoss.com/ubuntu-repositories/
|
||||
[11]: https://www.screamingfrog.co.uk/seo-spider/#download
|
||||
[12]: https://itsfoss.com/fix-ubuntu-freezing/
|
||||
[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/software_updates_additional_drivers_nvidia.png?resize=800%2C523&ssl=1
|
||||
[14]: https://ubuntuforums.org/
|
||||
[15]: https://askubuntu.com/
|
||||
[16]: https://ask.fedoraproject.org/
|
||||
[17]: https://www.debian.org/
|
||||
[18]: https://www.fsf.org/
|
||||
[19]: https://en.wikipedia.org/wiki/Upstream_(software_development)
|
||||
[20]: https://itsfoss.com/manage-startup-applications-ubuntu/
|
||||
[21]: https://canonical.com/
|
||||
[22]: https://www.redhat.com/en
|
||||
[23]: https://itsfoss.com/ibm-red-hat-acquisition/
|
||||
[24]: https://itsfoss.com/antergos-linux-discontinued/
|
||||
[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/07/ubuntu-server.png?resize=800%2C232&ssl=1
|
||||
[26]: https://centos.org/
|
@ -1,83 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (oneforalone)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Emacs for (even more of) the win)
|
||||
[#]: via: (https://so.nwalsh.com/2019/03/01/emacs)
|
||||
[#]: author: (Norman Walsh https://so.nwalsh.com)
|
||||
|
||||
Emacs 的胜利(或是更多)
|
||||
======
|
||||
|
||||
我天天用 Emacs,但我却从未意识到这一点。每当我用 Emacs 时,它都给我带来了很多乐趣。
|
||||
|
||||
> 如果你是个职业作家……Emacs 与其它编辑器相比就如皓日与群星一样。不仅更大、更亮,它轻而易举就让其他所有的东西都消失了。
|
||||
|
||||
我用 [Emacs][1] 已有二十多年了。我用它来写几乎所有的东西(Scala 和 Java 我用 [IntelliJ][2])。看邮件的话我是能在 Emacs 里看就在里面看。
|
||||
|
||||
尽管我用 Emacs 已有数十年,我在新年前后才意识到,在过去10年或更长时间里,我对 Emacs 的使用几乎没有什么变化。当然,新的编辑模式出现了,我就会选一两个插件,几年前我确实是用了 [Helm][3],但大多数时候,它只是完成了我需要的所有繁重工作,日复一日,没有抱怨,也没有妨碍我。一方面,这证明了它有多好。另一方面,这是一个邀请,让我深入挖掘,看看我错过了什么。
|
||||
|
||||
于此同时,我也决定从以下几方面改进我的工作方式:
|
||||
|
||||
* **更好的议程管理** 我在工作中负责几个项目,这些项目有定期和临时的会议;有些我是我主持的,有些我只要参加就可以。
|
||||
|
||||
我意识到我对参加会议变得有些敷衍。往会议室里一坐很简单,但实际上我是在阅读电子邮件或处理其他事情。(我强烈反对在会议中“禁止携带笔记本电脑”的这条规定,但这是另一个话题。)
|
||||
|
||||
敷衍地去参加会议有几个问题。首先,这是对主持会议的人和其他参会者的不尊重。实际上这已是不应该这么做的充分理由,但我还意识到另一个问题:它掩盖了会议的成本。
|
||||
|
||||
如果你在开会,但同时回复了一封电子邮件,也许还修复了一个 bug,那么这个会议就像没什么成本(或没那么多)。如果会议成本低廉,那么会议数量将会更多。
|
||||
|
||||
我想要更少、更短的会议。我不想掩盖它们的成本,我想让开会变得很有价值,除非绝对必要,否则就干脆不要开。
|
||||
|
||||
有时,开会是绝对有必要的。而且我认为一个简短的会有时候能够很快地解决问题。但是,如果我一天要开十个短会的话,那我觉得还是不要假装取得了什么成果吧。
|
||||
|
||||
我决定在我参加的所有的会上做笔记。我并不是说一定要做会议记录,但是我肯定会花上几分钟。这会让我把注意力集中在开会上,而忽略其他事。
|
||||
|
||||
* **更好的时间管理** 无论是工作的还是私人的,我有很多要做和想做的事。我一直在问题列表中跟踪其中的一些,一些在保存的电子邮件线索中(Emacs 和 [Gmail][4] 中,用于一些稍微不同的提醒),还有一些在日历、手机上各种各样的“待办事项列表”和小纸片上。可能还有其他地方。
|
||||
|
||||
我决定把它们放在一起。不是说我认为放到一个一致的地方就一定更好,而是我想完成两件事:首先,把它们都集中在一个地方,我能够更好更全面地了解我在哪里投入了更多的精力;其次,我想养成一个记录、跟踪并保存它们的习惯(习惯指“固定或规律的倾向或做法,尤指难以放弃的倾向或做法”)。
|
||||
|
||||
* **更好的笔记** 如果你在某些科学或工程领域工作,你就会养成记笔记的习惯。唉,我没有。但我决定这么做。
|
||||
|
||||
我对法律上鼓励装订页面或做永久标记并不感兴趣。我感兴趣的是养成做记录的习惯。我的目标是有一个地方记下想法和设计草图等。如果我突然有了灵感,或者我想到了一个不在测试套件中的边缘案例,我希望我的本能是把它写在我的日志中,而不是草草写在一张小纸片上,或者向自己保证我会记住它。
|
||||
|
||||
|
||||
|
||||
这些决心让我或多或少很快地转到了 [Org][6]。Org 有一个庞大、活跃而忠诚的用户社区。我以前也用过它(顺带一提,至少在几年前我有[写过][7]它),还花了很长一段时间将 [MarkLogic][8] 集成到其中。(天哪,这在过去一两个星期里得到了回报!)
|
||||
|
||||
但我从没用过 Org。
|
||||
|
||||
我现在正在用它。我只花了几分钟,就把所有要做的事情都记录了下来,我还记了日记。我不确定由我来概括或列举它的所有特性有多大价值,你可以通过网页搜索快速找到很多介绍。
|
||||
|
||||
如果你用 Emacs,那你也应该用 Org。如果没用过 Emacs,我相信你不会是第一个因 Org 而使用 Emacs 的人。Org 可以做很多事。它需要一点时间来学习它的方式和快捷键,但我认为这是值得的。(如果你的口袋里有一台 [iOS][9] 设备,我推荐你在外出时使用 [beorg][10] 来记录。)
|
||||
|
||||
当然,我想出了如何[将 XML 从其中提取出来][11](“想出”确实是“用 elisp 来编程”的一种有趣说法),然后将它转换回我的博客所需的标记(当然,在 Emacs 中按一个键就可以做到)。这是第一篇用 Org 写的帖子,也不会是最后一篇。
|
||||
|
||||
附注:生日快乐,[小博客][12]。
|
||||
|
||||
--------------------------------------------------------------------------------
via: https://so.nwalsh.com/2019/03/01/emacs
作者:[Norman Walsh][a]
选题:[lujun9972][b]
译者:[oneforalone](https://github.com/oneforalone)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://so.nwalsh.com
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/Emacs
[2]: https://en.wikipedia.org/wiki/IntelliJ_IDEA
[3]: https://emacs-helm.github.io/helm/
[4]: https://en.wikipedia.org/wiki/Gmail
[5]: https://en.wikipedia.org/wiki/Lab_notebook
[6]: https://en.wikipedia.org/wiki/Org-mode
[7]: https://www.balisage.net/Proceedings/vol17/html/Walsh01/BalisageVol17-Walsh01.html
[8]: https://github.com/ndw/ob-ml-marklogic/
[9]: https://en.wikipedia.org/wiki/IOS
[10]: https://beorgapp.com/
[11]: https://github.com/ndw/org-to-xml
[12]: https://so.nwalsh.com/2017/03/01/helloWorld
@ -0,0 +1,96 @@
[#]: collector: (lujun9972)
[#]: translator: (zionfuo)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

区块链 2.0:公有链 Vs 私有链(七)
======

![Public vs Private blockchain][1]

[**区块链 2.0**][2] 系列的前一篇文章探讨了[**智能合约的现状**][3]。这篇文章旨在揭示可以创建的不同类型的区块链。每种协议都用于与众不同的应用程序,并且根据用例的不同,每个应用程序所遵循的协议也不同。现在,让我们来比较一下公有链(类似于开源软件)与私有链(类似于专有技术)。

正如我们所知,基于区块链的分布式账本的基本三层结构如下:

![][4]

图 1 – 区块链分布式账本的基本结构
这里提到的几种类型之间的差异,主要源于底层区块链所遵循的协议。协议规定了参与者的范围以及参与的方式。

阅读本文时,请记住以下几点:
- 任何平台都是为了解决某种需求而生,技术应该朝最适合的方向发展。例如,区块链具有巨大的应用价值,其中一些应用可能需要舍弃在其他场景中看似重要的功能。在这方面,**分布式存储**就是最好的例子。
- 区块链本质上是数据库系统,以带时间戳的区块的形式组织数据来跟踪信息。这类区块链的创建者可以选择谁有权产出这些区块并进行修改。
- 区块链也可以是“中心化”的:通过设置不同的参与程度,来确定哪些参与者有资格成为“中心”节点。
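上面所说的“以带时间戳的区块组织数据”,可以用一个极简的哈希链示意(纯属示意草稿,与任何真实区块链平台的实现无关,其中的数据和时间戳均为假设):

```shell
# 极简“区块链”示意:每个区块记录时间戳、数据以及前一个区块的哈希
prev="genesis"
ts=1000
for data in "tx1" "tx2" "tx3"; do
    block="$ts $data $prev"
    # 当前区块的哈希由其全部内容(包括前一个区块的哈希)计算得出
    prev=$(printf '%s' "$block" | sha256sum | cut -d' ' -f1)
    echo "区块: [$block] 哈希: $prev"
    ts=$((ts + 1))
done
```

可以看到,任何一个区块的数据被改动后,它的哈希以及其后所有区块的哈希都会随之变化,账本因此难以被悄无声息地篡改。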
大多数区块链要么是公有的,要么是私有的。一般来说,公有链可以被视为开源软件的等价物,而大多数私有链可以被视为源自公有链的专有平台。下图可以让大多数人直观地看出两者的基本区别。

![][5]

图 2 – 公有链/私有链与开源/专有技术的对比

虽然这是最普遍的理解,但这并不是说所有的私有链都是从公有链中衍生出来的。
### 公有链
公有链可以被视为一个开放的平台或网络,任何拥有专业知识和计算资源的人都可以参与其中。这将产生以下影响:

- 任何人都可以加入公有链网络并参与到其中。“参与者”所需要的只是稳定的网络连接和计算资源。
- 参与行为包括读取、写入、验证数据,以及在交易时达成共识。比特币矿工就是很好的例子,作为网络的参与者,矿工会得到比特币作为回报。
- 平台是完全去中心化的,也是完全冗余的。
- 由于去中心化,没有一个主体可以完全控制账本中记录的数据。所有(或大多数)参与者都需要通过验证区块的方式检查数据。
- 这意味着,一旦信息被验证和记录,就不能轻易更改;即使改了,也不可能不留下痕迹。
- 在比特币和莱特币等平台上,参与者的身份是匿名的。设计这些平台的目的就是保护用户身份的隐私,这主要是由上层协议栈提供的功能。
- 公有链有比特币、莱特币、以太坊等不同的网络。
- 广泛的去中心化意味着,与中心化的网络相比,在交易上达成共识可能需要更长时间;对于希望随时处理大量交易的大型企业来说,吞吐量可能是一个挑战。
- 开放式的参与以及比特币等公有链中庞大的参与者数量,往往意味着在计算设备和能源成本上的初始投入不断攀升。
### 私有链
相比之下,私有链是需要许可才能参与的区块链。这意味着:
- 参与网络需要获得许可,并受到监督网络的所有者或机构的控制。这意味着,即使个人能够存储数据并进行交易(例如,发送和接收付款),这些交易的验证和存储也只能由选定的参与者来完成。
- 参与者一旦获得中心机构的许可,就会受到相应条款的限制。例如,在金融机构运营的私有链网络中,并不是每个客户都可以访问整个区块链的分布式账本;即使在获得许可的客户中,也不是每个人都能访问所有内容。在这种情况下,中心机构会按所选服务授予访问权限,这通常被称为“通道”。
- 由于区块只需要由少数几个参与者验证,这种系统与公有链相比具有更大的吞吐能力,交易速度也更快。
- 公有链以“安全即设计”著称,它们的安全性依靠以下几点:
  - 匿名参与者
  - 在多个节点上分布式、冗余地加密存储数据
  - 创建和更改数据需要大量参与者的共识

而私有链的协议中通常不具备上述任何特性,这使得这类系统的安全性仅与目前使用的大多数基于云的数据库系统相当。
### 智者的观点

需要注意的一点是,一个链被称为公有还是私有(或者开放还是封闭),与其底层代码库无关。在这两种情况下,平台所基于的代码库都可能是公开的,也可能不是。R3 是一家<ruby>分布式账本技术<rt>Distributed Ledger Technology</rt></ruby>(DLT)公司,领导着由 200 多家跨国机构组成的联盟,其目标是在金融和商业领域进一步发展区块链和相关的分布式账本技术。Corda 就是这一共同努力的产物。R3 将 Corda 定义为专门为企业构建的区块链平台。它的代码库是开源的,鼓励世界各地的开发人员为这个项目做出贡献。然而,考虑到 Corda 所面向的业务性质和要满足的需求,Corda 会被归类为需要许可的封闭区块链平台。这意味着企业可以在部署后选择网络的参与者,并通过自带的智能合约工具选择这些参与者可以访问的信息类型。

虽然比特币和以太坊这样的公有链对这个领域的广泛认知和发展功不可没,但仍然可以说,针对企业或商业环境中特定用例设计的私有链,将在短期内引领资金投入。这些平台才是我们大多数人在不久的将来会看到以实际方式运用起来的平台。

阅读本系列的下一篇文章,它介绍了 Hyperledger 项目:
- [**Blockchain 2.0 – An Introduction To Hyperledger Project (HLP)**][6]

我们正在撰写更多有趣的区块链技术话题,敬请期待!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
作者:[editor][a]
选题:[lujun9972][b]
译者:[zionfuo](https://github.com/zionfuo)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Public-Vs-Private-Blockchain-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/blockchain-architecture.png
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/Public-vs-Private-blockchain-comparison.png
[6]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
@ -0,0 +1,131 @@

[#]: collector: (lujun9972)
[#]: translator: (LuuMing)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tracking down library injections on Linux)
[#]: via: (https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

追溯 Linux 上的库注入
======

<ruby>库注入<rt>Library injections</rt></ruby>在 Linux 上不如在 Windows 上常见,但它仍然是一个问题。下面来看看它们是如何工作的,以及如何鉴别它们。

![Sandra Henry-Stocker][1]

尽管在 Linux 系统上几乎见不到,但库(Linux 上的共享对象文件)注入仍是一个严峻的威胁。在采访了来自 AT&T 公司 Alien 实验室的 Jaime Blasco 后,我更加意识到了其中一些攻击是多么容易实施。

在这篇文章中,我会介绍一种攻击方法和它的几种检测手段,也会提供一些展示攻击细节的链接和一些检测工具。首先,介绍一点背景知识。
### 共享库漏洞

DLL 和 .so 文件都是共享库文件,它们允许代码(有时还有数据)被不同的进程共享。公用的代码可以放进一个文件中,使每个需要它的进程可以重复使用,而不必多次重写,这也便于对公用代码的管理。

Linux 进程经常使用这些共享库。`ldd`(显示共享对象依赖)命令可以显示任何程序所使用的共享库。这里有一些例子:
```
$ ldd /bin/date
	linux-vdso.so.1 (0x00007ffc5f179000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f02bea15000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f02bec3a000)
$ ldd /bin/netstat
	linux-vdso.so.1 (0x00007ffcb67cd000)
	libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f45e5d7b000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f45e5b90000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f45e5b1c000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f45e5b16000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f45e5dec000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f45e5af5000)
```
`linux-vdso.so.1`(在一些系统上也许会有不同的名字)是内核自动映射到每个进程地址空间的文件。它的工作是找到并定位进程所需的其他共享库。

利用这种库加载机制发动攻击的一种方法是使用 `LD_PRELOAD` 环境变量。正如 Jaime Blasco 在他的研究中所解释的那样:“`LD_PRELOAD` 是在进程启动时加载共享库的最简单且最常用的方法。可以将这个环境变量配置为某个共享库的路径,使该共享库先于其他共享对象被加载。”

为了展示这有多简单,我创建了一个极其简单的共享库,并将其赋给我的(之前不存在的)`LD_PRELOAD` 环境变量。之后我使用 `ldd` 命令查看它对常用 Linux 命令的影响。
```
$ export LD_PRELOAD=/home/shs/shownum.so
$ ldd /bin/date
	linux-vdso.so.1 (0x00007ffe005ce000)
	/home/shs/shownum.so (0x00007f1e6b65f000)     <== 它就在这里
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1e6b458000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f1e6b682000)
```
注意,仅仅将新的库赋给 `LD_PRELOAD`,就影响到了之后运行的任何程序。
由于通过设置 `LD_PRELOAD` 指定的共享库会首先被加载(仅次于 linux-vdso.so.1),这些库可以极大程度上改变一个进程。例如,它们可以把对系统函数的调用重定向到自己的实现,或者对程序的运行行为做出意想不到的更改。
### osquery 工具可以检测库注入

`osquery` 工具(可以在 [osquery.io][4] 下载)提供了一种非常独特的方式来监控 Linux 系统:它基本上将操作系统表示为一个高性能的关系数据库。然后,也许你已经猜到了,这意味着它可以用 SQL 查询生成数据表,这些表提供了诸如以下的详细信息:
* 运行中的进程
* 加载的内核模块
* 打开的网络连接
其中一个提供进程信息的表叫做 `process_envs`,它提供了各个进程所使用的环境变量的详细信息。Jaime Blasco 提供了一个相当复杂的查询,可以使用 `osquery` 识别出使用了 `LD_PRELOAD` 的进程。

注意,这个查询从 `process_envs` 表中获取数据。攻击 ID(T1055)参考了 [Mitre 对该攻击方法的解释][5]。
```
SELECT process_envs.pid as source_process_id,
       process_envs.key as environment_variable_key,
       process_envs.value as environment_variable_value,
       processes.name as source_process,
       processes.path as file_path,
       processes.cmdline as source_process_commandline,
       processes.cwd as current_working_directory,
       'T1055' as event_attack_id,
       'Process Injection' as event_attack_technique,
       'Defense Evasion, Privilege Escalation' as event_attack_tactic
FROM process_envs join processes USING (pid)
WHERE key = 'LD_PRELOAD';
```
注意,`LD_PRELOAD` 环境变量有时是合法使用的。例如,各种安全监控工具可能会用到它,开发人员在进行故障排除、调试或性能分析时也会用到它。然而,它的使用仍然很少见,应当加以提防。

同样值得注意的是,osquery 既可以交互式使用,也可以作为守护进程运行定期查询。了解更多请查阅文章末尾给出的参考资料。

你也能够通过查看用户的环境设置来定位 `LD_PRELOAD` 的使用。如果某个用户账户配置了 `LD_PRELOAD`,你可以使用这样的命令来查看(在以该用户身份认证之后):
```
$ env | grep PRELOAD
LD_PRELOAD=/home/username/userlib.so
```
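除了检查当前 shell 的环境,还可以扫描所有正在运行的进程的环境(`/proc/<pid>/environ`,条目以 NUL 字节分隔)来排查 `LD_PRELOAD`。下面是一个简单的排查脚本草稿(非文中工具,仅作思路演示):

```shell
# 草稿:扫描所有进程的环境变量,找出设置了 LD_PRELOAD 的进程
# /proc/<pid>/environ 中的条目以 NUL 分隔,先用 tr 转换成按行排列
for p in /proc/[0-9]*; do
    if tr '\0' '\n' < "$p/environ" 2>/dev/null | grep -q '^LD_PRELOAD='; then
        echo "PID ${p#/proc/} 设置了 LD_PRELOAD"
    fi
done
```

注意:只能读取有权限访问的进程的 environ 文件,普通用户通常只能看到自己的进程,全面排查需要 root 权限。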
如果你之前没有听说过 osquery,也别太在意,它正在成为一个越来越受欢迎的工具。事实上就在上周,Linux 基金会宣布成立新的 [osquery 基金会][6]来支持 osquery 社区。
#### 总结

尽管库注入是一个严重的威胁,但了解一些能帮助你检测它的优秀工具,会让你受益匪浅。
#### 扩展阅读

重要的参考和工具的链接:
* [用 osquery 追寻 Linux 库注入][7](AT&T 网络安全)
* [Linux:我的内存怎么了?][8](TrustedSec)
* [osquery 下载网站][4]
* [osquery 的关系模式][9]
* [osqueryd(osquery 守护进程)][10]
* [Mitre 的攻击框架][11]
* [新 osquery 基金会成立的公告][6]

在 [Facebook][12] 和 [LinkedIn][13] 上加入 Network World 社区,参与相关话题的讨论。
--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3404621/tracking-down-library-injections-on-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[LuuMing](https://github.com/LuuMing)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/06/dll-injection-100800196-large.jpg
[2]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[3]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[4]: https://osquery.io/
[5]: https://attack.mitre.org/techniques/T1055/
[6]: https://www.linuxfoundation.org/press-release/2019/06/the-linux-foundation-announces-intent-to-form-new-foundation-to-support-osquery-community/
[7]: https://www.alienvault.com/blogs/labs-research/hunting-for-linux-library-injection-with-osquery
[8]: https://www.trustedsec.com/2018/09/linux-hows-my-memory/
[9]: https://osquery.io/schema/3.3.2
[10]: https://osquery.readthedocs.io/en/stable/deployment/configuration/#schedule
[11]: https://attack.mitre.org/
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world