Mirror of https://github.com/LCTT/TranslateProject.git (synced 2025-01-25 23:11:02 +08:00)

Commit 4148e73bb5: Merge remote-tracking branch 'LCTT/master'

.gitmodules (vendored) | 3

@@ -1,3 +0,0 @@
-[submodule "comic"]
-	path = comic
-	url = https://wxy@github.com/LCTT/comic.git

.travis.yml

@@ -2,7 +2,13 @@ language: c
script:
  - sh ./scripts/check.sh
  - ./scripts/badge.sh

branches:
  only:
    - master
  except:
    - gh-pages

git:
  submodules: false

deploy:
  provider: pages
  skip_cleanup: true

README.md | 21

@@ -6,9 +6,9 @@
 ![待校正](https://lctt.github.io/TranslateProject/badge/translated.svg)
 ![已发布](https://lctt.github.io/TranslateProject/badge/published.svg)

-LCTT 是“Linux中国”([https://linux.cn/](https://linux.cn/))的翻译组,负责从国外优秀媒体翻译 Linux 相关的技术、资讯、杂文等内容。
+[LCTT](https://linux.cn/lctt/) 是“Linux中国”([https://linux.cn/](https://linux.cn/))的翻译组,负责从国外优秀媒体翻译 Linux 相关的技术、资讯、杂文等内容。

-LCTT 已经拥有几百名活跃成员,并欢迎更多的Linux志愿者加入我们的团队。
+LCTT 已经拥有几百名活跃成员,并欢迎更多的 Linux 志愿者加入我们的团队。

 ![logo](https://linux.cn/static/image/common/lctt_logo.png)

@@ -70,6 +70,10 @@ LCTT 的组成
 * 2018/01/11 提升 lujun9972 成为核心成员,并加入选题组。
 * 2018/02/20 遭遇 DMCA 仓库被封。
 * 2018/05/15 提升 MjSeven 为核心成员。
+* 2018/08/01 [发布 Linux 中国通证:LCCN](https://linux.cn/article-9886-1.html)。
+* 2018/08/17 提升 pityonline 为核心成员,担任校对,并接受他的建议采用 PR 审核模式。
+* 2018/09/10 [LCTT 五周年](https://linux.cn/article-9999-1.html)。
+* 2018/10/25 重构了 CI,感谢 vizv、lujun9972、bestony。

 核心成员
 -------------------------------

@@ -78,13 +82,16 @@ LCTT 的组成

- 组长 @wxy,
- 选题 @oska874,
- 选题 @lujun9972,
- 技术 @bestony,
- 校对 @jasminepeng,
- 校对 @pityonline,
- 钻石译者 @geekpi,
- 钻石译者 @qhwdw,
- 钻石译者 @GOLinux,
- 钻石译者 @ictlyh,
- 技术组长 @bestony,
- 漫画组长 @GHLandy,
- LFS 组长 @martin2011qi,
- 核心成员 @GHLandy,
- 核心成员 @martin2011qi,
- 核心成员 @ictlyh,
- 核心成员 @strugglingyouth,
- 核心成员 @FSSlc,
- 核心成员 @zpl1025,

@@ -96,8 +103,6 @@ LCTT 的组成

- 核心成员 @Locez,
- 核心成员 @ucasFL,
- 核心成员 @rusking,
- 核心成员 @qhwdw,
- 核心成员 @lujun9972
- 核心成员 @MjSeven
- 前任选题 @DeadFire,
- 前任校对 @reinoir222,

comic (submodule) | 1

@@ -1 +0,0 @@
-Subproject commit e5db5b880dac1302ee0571ecaaa1f8ea7cf61901

@@ -0,0 +1,50 @@
The Most Important Database You've Never Heard of
======

In 1962, JFK challenged Americans to send a man to the moon by the end of the decade, inspiring a heroic engineering effort that culminated in Neil Armstrong’s first steps on the lunar surface. Many of the fruits of this engineering effort were highly visible and sexy—there were new spacecraft, new spacesuits, and moon buggies. But the Apollo Program was so staggeringly complex that new technologies had to be invented even to do the mundane things. One of these technologies was IBM’s Information Management System (IMS).

IMS is a database management system. NASA needed one in order to keep track of all the parts that went into building a Saturn V rocket, which—because there were two million of them—was expected to be a challenge. Databases were a new idea in the 1960s and there weren’t any already available for NASA to use, so, in 1965, NASA asked IBM to work with North American Aviation and Caterpillar Tractor to create one. By 1968, IBM had installed a working version of IMS at NASA, though at the time it was called ICS/DL/I for “Informational Control System and Data Language/Interface.” (IBM seems to have gone through a brief, unfortunate infatuation with the slash; see [PL/I][1].) Two years later, IBM rebranded ICS/DL/I as “IMS” and began selling it to other customers. It was one of the first commercially available database management systems.

The incredible thing about IMS is that it is still in use today. And not just on a small scale: Banks, insurance companies, hospitals, and government agencies still use IMS for all sorts of critical tasks. Over 95% of Fortune 1000 companies use IMS in some capacity, as do all of the top five US banks. Whenever you withdraw cash from an ATM, the odds are exceedingly good that you are interacting with IMS at some point in the course of your transaction. In a world where the relational database is an old workhorse increasingly in competition with trendy new NoSQL databases, IMS is a freaking dinosaur. It is a relic from an era before the relational database was even invented, which didn’t happen until 1970. And yet it seems to be the database system in charge of all the important stuff.

I think this makes IMS pretty interesting. Depending on how you feel about relational databases, it either offers insight into how the relational model improved on its predecessors or else exemplifies an alternative model better suited to certain problems.

IMS works according to a hierarchical model, meaning that, instead of thinking about data as tables that can be brought together using JOIN operations, IMS thinks about data as trees. Each kind of record you store can have other kinds of records as children; these child record types represent additional information that you might be interested in given a record of the parent type.

To take an example, say that you want to store information about bank customers. You might have one type of record to represent customers and another type of record to represent accounts. As in a relational database, where each table has columns, these records will have different fields; we might want to have a first name field, a last name field, and a city field for each customer. We must then decide whether we are likely to first look up a customer and then information about that customer’s account, or whether we are likely to first look up an account and then information about that account’s owner. Assuming we decide that we will access customers first, then we will make our account record type a child of our customer record type. Diagrammed, our database model would look something like this:

![][2]

And an actual database might look like:

![][3]

By modeling our data this way, we are hewing close to the reality of how our data is stored. Each parent record includes pointers to its children, meaning that moving down our tree from the root node is efficient. (Actually, each parent basically stores just one pointer to the first of its children. The children in turn contain pointers to their siblings. This ensures that the size of a record does not vary with the number of children it has.) This efficiency can make data accesses very fast, provided that we are accessing our data in ways that we anticipated when we first structured our database. According to IBM, an IMS instance can process over 100,000 transactions a second, which is probably a large part of why IMS is still used, particularly at banks. But the downside is that we have lost a lot of flexibility. If we want to access our data in ways we did not anticipate, we will have a hard time.
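
To make the pointer layout concrete, here is a minimal sketch in Python (not IMS itself; the record types and field names are invented for illustration): each record stores a single pointer to its first child, and children link to their siblings, so a record's size does not depend on how many children it has.

```
# A toy sketch of the first-child / next-sibling pointer layout described
# above. Each record holds exactly one child pointer; siblings form a
# linked list, so record size stays fixed regardless of child count.

class Record:
    def __init__(self, fields):
        self.fields = fields        # e.g. {"first_name": "Ada", ...}
        self.first_child = None     # single pointer to the first child
        self.next_sibling = None    # siblings chain off one another

    def add_child(self, child):
        child.next_sibling = self.first_child
        self.first_child = child

    def children(self):
        node = self.first_child
        while node is not None:
            yield node
            node = node.next_sibling

# Moving down the tree from a parent is efficient:
customer = Record({"first_name": "Ada", "last_name": "Lovelace", "city": "London"})
customer.add_child(Record({"account_number": "1001", "balance": 500}))
customer.add_child(Record({"account_number": "1002", "balance": 250}))

for account in customer.children():
    print(account.fields["account_number"])
```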
To illustrate this, consider what might happen if we decide that we would like to access accounts before customers. Perhaps customers are calling in to update their addresses, and we would like them to uniquely identify themselves using their account numbers. So we want to use an account number to find an account, and then from there find the account’s owner. But since all accesses start at the root of our tree, there’s no way for us to get to an account efficiently without first deciding on a customer. To fix this problem, we could introduce a second tree or hierarchy starting with account records; these account records would then have customer records as children. This would let us access accounts and then customers efficiently. But it would involve duplicating information that we already have stored in our database—we would have two trees storing the same information in different orders. Another option would be to establish an index of accounts that could point us to the right account record given an account number. That would work too, but it would entail extra work during insert and update operations in the future.
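
Continuing the toy sketch above, the index option might look like the following: a lookup table from account numbers to records restores fast account-first access, at the price of extra bookkeeping on every insert. (This is a hypothetical illustration, not IMS's actual secondary-index mechanism.)

```
# Builds on the Record class from the previous sketch. The index maps
# account numbers straight to (account, owner) pairs, bypassing the tree.
account_index = {}

def insert_account(owner, account):
    # the insert now does double duty: link the child and maintain the index
    owner.add_child(account)
    account_index[account.fields["account_number"]] = (account, owner)

insert_account(customer, Record({"account_number": "1003", "balance": 75}))

account, owner = account_index["1003"]   # find the account first...
print(owner.fields["last_name"])         # ...then its owner, without a tree walk
```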
It was precisely this inflexibility and the problem of duplicated information that pushed E. F. Codd to propose the relational model. In his 1970 paper, “A Relational Model of Data for Large Shared Data Banks,” he states at the outset that he intends to present a model for data storage that can protect users from having to know anything about how their data is stored. Looked at one way, the hierarchical model is entirely an artifact of how the designers of IMS chose to store data. It is a bottom-up model, the implication of a physical reality. The relational model, on the other hand, is an abstract model based on relational algebra, and is top-down in that the data storage scheme can be anything provided it accommodates the model. The relational model’s great advantage is that, just because you’ve made decisions that have caused the database to store your data in a particular way, you won’t find yourself effectively unable to make certain queries.

All that said, the relational model is an abstraction, and we all know abstractions aren’t free. Banks and large institutions have stuck with IMS partly because of the performance benefits, though it’s hard to say if those benefits would be enough to keep them from switching to a modern database if they weren’t also trying to avoid rewriting mission-critical legacy code. However, today’s popular NoSQL databases demonstrate that there are people willing to drop the conveniences of the relational model in return for better performance. Something like MongoDB, which encourages its users to store data in a denormalized form, isn’t all that different from IMS. If you choose to store some entity inside of another JSON record, then in effect you have created something like the IMS hierarchy, and you have constrained your ability to query for that data in the future. But perhaps that’s a tradeoff you’re willing to make. So, even if IMS hadn’t predated E. F. Codd’s relational model by several years, there are still reasons why IMS’ creators might not have adopted the relational model wholesale.

Unfortunately, IMS isn’t something that you can download and take for a spin on your own computer. First of all, IMS is not free, so you would have to buy it from IBM. But the bigger problem is that IMS only runs on IBM mainframes like the IBM z13. That’s a shame, because it would be a joy to play around with IMS and get a sense for exactly how it differs from something like MySQL. But even without that opportunity, it’s interesting to think about software systems that work in ways we don’t expect or aren’t used to. And it’s especially interesting when those systems, alien as they are, turn out to undergird your local hospital, the entire financial sector, and even the federal government.

If you enjoyed this post, more posts like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.

--------------------------------------------------------------------------------

via: https://twobithistory.org/2017/10/07/the-most-important-database.html

Author: [Two-Bit History][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://en.wikipedia.org/wiki/PL/I
[2]: https://twobithistory.org/images/hierarchical-model.png
[3]: https://twobithistory.org/images/hierarchical-db.png
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

sources/talk/20171119 The Ruby Story.md | 84 (new file)

@@ -0,0 +1,84 @@
The Ruby Story
======

Ruby has always been one of my favorite languages, though I’ve sometimes found it hard to express why that is. The best I’ve been able to do is this musical analogy: Whereas Python feels to me like punk rock—it’s simple, predictable, but rigid—Ruby feels like jazz. Ruby gives programmers a radical freedom to express themselves, though that comes at the cost of added complexity and can lead to programmers writing programs that don’t make immediate sense to other people.

I’ve always been aware that freedom of expression is a core value of the Ruby community. But what I didn’t appreciate is how deeply important it was to the development and popularization of Ruby in the first place. One might create a programming language in pursuit of better performance, or perhaps timesaving abstractions—the Ruby story is interesting because instead the goal was, from the very beginning, nothing more or less than the happiness of the programmer.

### Yukihiro Matsumoto

Yukihiro Matsumoto, also known as “Matz,” graduated from the University of Tsukuba in 1990. Tsukuba is a small town just northeast of Tokyo, known as a center for scientific research and technological development. The University of Tsukuba is particularly well-regarded for its STEM programs. Matsumoto studied Information Science, with a focus on programming languages. For a time he worked in a programming language lab run by Ikuo Nakata.

Matsumoto started working on Ruby in 1993, only a few years after graduating. He began working on Ruby because he was looking for a scripting language with features that no existing scripting language could provide. He was using Perl at the time, but felt that it was too much of a “toy language.” Python also fell short; in his own words:

> I knew Python then. But I didn’t like it, because I didn’t think it was a true object-oriented language—OO features appeared to be an add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for one, but couldn’t find one.

So one way of understanding Matsumoto’s motivations in creating Ruby is that he was trying to create a better, object-oriented version of Perl.

But at other times, Matsumoto has said that his primary motivation in creating Ruby was simply to make himself and others happier. Toward the end of a Google tech talk that Matsumoto gave in 2008, he showed the following slide:

![][1]

He told his audience,

> I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of the Ruby language.

Matsumoto goes on to joke that he created Ruby for selfish reasons, because he was so underwhelmed by other languages that he just wanted to create something that would make him happy.

The slide epitomizes Matsumoto’s humble style. Matsumoto, it turns out, is a practicing Mormon, and I’ve wondered whether his religious commitments have any bearing on his legendary kindness. In any case, this kindness is so well known that the Ruby community has a principle known as MINASWAN, or “Matz Is Nice And So We Are Nice.” The slide must have struck the audience at Google as an unusual one—I imagine that any random slide drawn from a Google tech talk is dense with code samples and metrics showing how one engineering solution is faster or more efficient than another. Few, I suspect, come close to stating nobler goals more simply.

Ruby was influenced primarily by Perl. Perl was created by Larry Wall in the late 1980s as a means of processing and transforming text-based reports. It became well-known for its text processing and regular expression capabilities. A Perl program contains many syntactic elements that would be familiar to a Ruby programmer—there are `$` signs, `@` signs, and even `elsif`s, which I’d always thought were one of Ruby’s less felicitous idiosyncrasies. On a deeper level, Ruby borrows much of Perl’s regular expression handling and standard library.

But Perl was by no means the only influence on Ruby. Prior to beginning work on Ruby, Matsumoto worked on a mail client written entirely in Emacs Lisp. The experience taught him a lot about the inner workings of Emacs and the Lisp language, which Matsumoto has said influenced the underlying object model of Ruby. On top of that he added a Smalltalk-style message passing system which forms the basis for any behavior relying on Ruby’s `#method_missing`. Matsumoto has also claimed Ada and Eiffel as influences on Ruby.
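
For readers unfamiliar with `#method_missing`: in Ruby, sending an object a message it does not understand invokes `method_missing` instead of failing outright. As a loose analogy only (sketched in Python rather than Ruby, with a made-up class), something similar can be expressed by intercepting failed attribute lookups:

```
# A Python analogy for Ruby's #method_missing: __getattr__ runs only when
# normal attribute lookup fails, letting the object handle "messages" that
# were never explicitly defined as methods.

class DynamicFinder:
    def __getattr__(self, name):
        if name.startswith("find_by_"):
            field = name[len("find_by_"):]
            return lambda value: f"looking up records where {field} = {value!r}"
        raise AttributeError(name)

finder = DynamicFinder()
print(finder.find_by_city("Tokyo"))  # no such method was ever defined
```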
When it came time to decide on a name for Ruby, Matsumoto and a colleague, Keiju Ishitsuka, considered several alternatives. They were looking for something that suggested Ruby’s relationship to Perl and also to shell scripting. In an [instant message exchange][2] that is well worth reading, Ishitsuka and Matsumoto probably spend too much time thinking about the relationship between shells, clams, oysters, and pearls and get close to calling the Ruby language “Coral” or “Bisque” instead. Thankfully, they decided to go with “Ruby”, the idea being that it was, like “pearl”, the name of a valuable jewel. It also turns out that the birthstone for June is a pearl while the birthstone for July is a ruby, meaning that the name “Ruby” is another tongue-in-cheek “incremental improvement” name like C++ or C#.

### Ruby Goes West

Ruby grew popular in Japan very quickly. Soon after its initial release in 1995, Matz was hired by a Japanese software consulting group called Netlab (also known as Network Applied Communication Laboratory) to work on Ruby full-time. By 2000, only five years after it was initially released, Ruby was more popular in Japan than Python. But it was only just beginning to make its way to English-speaking countries. There had been a Japanese-language mailing list for Ruby discussion since almost the very beginning of Ruby’s existence, but the English-language mailing list wasn’t started until 1998. Initially, the English-language mailing list was used by Japanese Rubyists writing in English, but this gradually changed as awareness of Ruby grew.

In 2000, Dave Thomas published Programming Ruby, the first English-language book to cover Ruby. The book became known as the “pickaxe” book for the pickaxe it featured on its cover. It introduced Ruby to many programmers in the West for the first time. As it had in Japan, Ruby spread quickly, and by 2002 the English-language Ruby mailing list had more traffic than the original Japanese-language mailing list.

By 2005, Ruby had become more popular, but it was still not a mainstream programming language. That changed with the release of Ruby on Rails. Ruby on Rails was the “killer app” for Ruby, and it did more than any other project to popularize Ruby. After the release of Ruby on Rails, interest in Ruby shot up across the board, as measured by the TIOBE language index:

![][3]

It’s sometimes joked that the only programs anybody writes in Ruby are Ruby-on-Rails web applications. That makes it sound as if Ruby on Rails completely took over the Ruby community, which is only partly true. While Ruby has certainly come to be known as that language people write Rails apps in, Rails owes as much to Ruby as Ruby owes to Rails.

The Ruby philosophy heavily informed the design and implementation of Rails. David Heinemeier Hansson, who created Rails, often talks about how his first contact with Ruby was an almost religious experience. He has said that the encounter was so transformative that it “imbued him with a calling to do missionary work in service of Matz’s creation.” For Hansson, Ruby’s no-shackles approach was a politically courageous rebellion against the top-down impositions made by languages like Python and Java. He appreciated that the language trusted him and empowered him to make his own judgements about how best to express his programs.

Like Matsumoto, Hansson claims that he created Rails out of a frustration with the status quo and a desire to make things better for himself. He, like Matsumoto, prioritized programmer happiness above all else, evaluating additions to Rails by what he calls “The Principle of The Bigger Smile.” Whatever made Hansson smile more was what made it into the Rails codebase. As a result, Rails would come to include unorthodox features like the “Inflector” class (which tries to map singular class names to plural database table names automatically) and Rails’ `Time` extensions (allowing programmers to write cute expressions like `2.days.ago`). To some, these features were truly weird, but the success of Rails is testament to the number of people who found it made their lives much easier.
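
To give a flavor of what the Inflector does, here is a toy sketch (in Python rather than Ruby, with only a few illustrative rules; Rails' real inflector is far more thorough) that guesses a plural table name from a singular class name:

```
# A toy inflector: map a singular class name to a plural table name.
IRREGULAR = {"person": "people", "child": "children"}

def tableize(class_name):
    """Guess a plural, lowercase table name from a singular class name."""
    word = class_name.lower()
    if word in IRREGULAR:
        return IRREGULAR[word]
    if word.endswith("y") and word[-2] not in "aeiou":
        return word[:-1] + "ies"          # "Category" -> "categories"
    if word.endswith(("s", "sh", "ch", "x")):
        return word + "es"                # "Address" -> "addresses"
    return word + "s"                     # "Customer" -> "customers"

print(tableize("Person"))    # people
print(tableize("Category"))  # categories
```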
And so, while it might seem that Rails was an incidental application of Ruby that happened to become extremely popular, Rails in fact embodies many of Ruby’s core principles. Furthermore, it’s hard to see how Rails could have been built in any other language, given its dependence on Ruby’s macro-like class method calls to implement things like model associations. Some people might take the fact that so much of Ruby development revolves around Ruby on Rails as a sign of an unhealthy ecosystem, but there are good reasons that Ruby and Ruby on Rails are so intertwined.

### The Future of Ruby

People seem to have an inordinate amount of interest in whether or not Ruby (and Ruby on Rails) are dying. Since as early as 2011, it seems that Stack Overflow and Quora have been full of programmers asking whether or not they should bother learning Ruby if it will no longer be around in the next few years. These concerns are not unjustified; according to the TIOBE index and to Stack Overflow trends, Ruby and Ruby on Rails have been shrinking in popularity. Though Ruby on Rails was once the hot new thing, it has since been eclipsed by hotter and newer frameworks.

One theory for why this has happened is that programmers are abandoning dynamically typed languages for statically typed ones. Analysts at the TIOBE index figure that a rise in quality requirements has made runtime exceptions increasingly unacceptable. They cite TypeScript as an example of this trend—a whole new version of JavaScript was created just to ensure that client-side code could be written with the benefit of compile-time safety guarantees.

A more likely answer, I think, is just that Ruby on Rails now has many more competitors than it once did. When Rails was first introduced in 2005, there weren’t that many ways to create web applications—the main alternative was Java. Today, you can create web applications using great frameworks built for Go, JavaScript, or Python, to name only the most popular options. The web world also seems to be moving toward a more distributed architecture for applications, meaning that, rather than having one codebase responsible for everything from database access to view rendering, responsibilities are split between different components that focus on doing one thing well. Rails feels overbroad and bloated for something as focused as a JSON API that talks to a JavaScript frontend.

All that said, there are reasons to be optimistic about Ruby’s future. Both Rails and Ruby continue to be actively developed. Matsumoto and others are working hard on Ruby’s third major release, which they aim to make three times faster than the existing version of Ruby, possibly alleviating the performance concerns that have always dogged Ruby. And even if the world of web frameworks has become more diverse since 2005, that doesn’t mean that there won’t always be room for Ruby on Rails. It is now a mature tool with an enormous amount of built-in power that will always be a good choice for certain kinds of applications.

But even if Ruby and Rails go the way of the dinosaurs, one thing that seems certain to survive is the Ruby ethos of programmer happiness. Ruby has had a profound influence on the design of many new programming languages, which have adopted many of its best ideas. Other new languages have tried to be “more modern” interpretations of Ruby: Elixir, for example, is a version of Ruby that emphasizes the functional programming paradigm, while Crystal, which is still in development, aims to be a statically typed version of Ruby. Many programmers around the world have fallen in love with Ruby and its syntax, so we can count on its influence persisting for a long while to come.

If you enjoyed this post, more posts like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.

--------------------------------------------------------------------------------

via: https://twobithistory.org/2017/11/19/the-ruby-story.html

Author: [Two-Bit History][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/matz.png
[2]: http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/88819
[3]: https://twobithistory.org/images/tiobe_ruby.png
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

@@ -0,0 +1,44 @@
Important Papers: Codd and the Relational Model
======

It’s hard to believe today, but the relational database was once the cool new kid on the block. In 2017, the relational model competes with all sorts of cutting-edge NoSQL technologies that make relational database systems seem old-fashioned and boring. Yet, 50 years ago, none of the dominant database systems were relational. Nobody had thought to structure their data that way. When the relational model did come along, it was a radical new idea that revolutionized the database world and spawned a multi-billion dollar industry.

The relational model was introduced in 1970. Edgar F. Codd, a researcher at IBM, published a [paper][1] called “A Relational Model of Data for Large Shared Data Banks.” The paper was a rewrite of a paper he had circulated internally at IBM a year earlier. The paper is unassuming; Codd does not announce in his abstract that he has discovered a brilliant new approach to storing data. He only claims to have employed a novel tool (the mathematical notion of a “relation”) to address some of the inadequacies of the prevailing database models.

In 1970, there were two schools of thought about how to structure a database: the hierarchical model and the network model. The hierarchical model was used by IBM’s Information Management System (IMS), the dominant database system at the time. The network model had been specified by a standards committee called CODASYL (which also—random tidbit—specified COBOL) and implemented by several other database system vendors. The two models were not really that different; both could be called “navigational” models. They persisted tree or graph data structures to disk using pointers to preserve the links between the data. Retrieving a record stored toward the bottom of the tree would involve first navigating through all of its ancestor records. These databases were fast (IMS is still used by many financial institutions partly for this reason, see [this excellent blog post][2]) but inflexible. Woe unto those database administrators who suddenly found themselves needing to query records from the bottom of the tree without having an obvious place to start at the top.

Codd saw this inflexibility as a symptom of a larger problem. Programs using a hierarchical or network database had to know about how the stored data was structured. Programs had to know this because they were responsible for navigating down this structure to find the information they needed. This was so true that when Charles Bachman, a major pioneer of the network model, received a Turing Award for his work in 1973, he gave a speech titled “[The Programmer as Navigator][3].” Of course, if programs were saddled with this responsibility, then they would immediately break if the structure of the database ever changed. In the introduction to his 1970 paper, Codd motivates the search for a better model by arguing that we need “data independence,” which he defines as “the independence of application programs and terminal activities from growth in data types and changes in data representation.” The relational model, he argues, “appears to be superior in several respects to the graph or network model presently in vogue,” partly because, among other benefits, the relational model “provides a means of describing data with its natural structure only.” By this he meant that programs could safely ignore any artificial structures (like trees) imposed upon the data for storage and retrieval purposes only.

To further illustrate the problem with the navigational models, Codd devotes the first section of his paper to an example data set involving machine parts and assembly projects. This dataset, he says, could be represented in existing systems in at least five different ways. Any program that is developed assuming one of the five structures will fail when run against at least three of the other structures. The program could instead try to figure out ahead of time which of the structures it might be dealing with, but it would be difficult to do so in this specific case and practically impossible in the general case. So, as long as the program needs to know about how the data is structured, we cannot switch to an alternative structure without breaking the program. This is a real bummer because (and this is from the abstract) “changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.”

Codd then introduces his relational model. This model would be refined and expanded in subsequent papers: In 1971, Codd wrote about ALPHA, a SQL-like query language he created; in another 1971 paper, he introduced the first three normal forms we know and love today; and in 1972, he further developed relational algebra and relational calculus, the mathematically rigorous underpinnings of the relational model. But Codd’s 1970 paper contains the kernel of the relational idea:

> The term relation is used here in its accepted mathematical sense. Given sets S₁, S₂, …, Sₙ (not necessarily distinct), R is a relation on these n sets if it is a set of n-tuples each of which has its first element from S₁, its second element from S₂, and so on. We shall refer to Sⱼ as the jth domain of R. As defined above, R is said to have degree n. Relations of degree 1 are often called unary, degree 2 binary, degree 3 ternary, and degree n n-ary.

Today, we call a relation a table, and a domain an attribute or a column. The word “table” actually appears nowhere in the paper, though Codd’s visual representations of relations (which he calls “arrays”) do resemble tables. Codd defines several more terms, some of which we continue to use and others we have replaced. He explains primary and foreign keys, as well as what he calls the “active domain,” which is the set of all distinct values that actually appear in a given domain or column. He then spends some time distinguishing between a “simple” and a “nonsimple” domain. A simple domain contains “atomic” or “nondecomposable” values, like integers. A nonsimple domain has relations as elements. The example Codd gives here is that of an employee with a salary history. The salary history is not one salary but a collection of salaries each associated with a date. So a salary history cannot be represented by a single number or string.

It’s not obvious how one could store a nonsimple domain in a multi-dimensional array, AKA a table. The temptation might be to denote the nonsimple relationship using some kind of pointer, but then we would be repeating the mistakes of the navigational models. Instead, Codd introduces normalization, which at least in the 1970 paper involves nothing more than turning nonsimple domains into simple ones. This is done by expanding the child relation so that it includes the primary key of the parent. Each tuple of the child relation references its parent using simple domains, eliminating the need for a nonsimple domain in the parent. Normalization means no pointers, sidestepping all the problems they cause in the navigational models.
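
A small sketch of that normalization step may help (Python used purely for illustration; the relation and attribute names are invented): the nested salary history disappears, and each child tuple instead carries the parent's primary key.

```
# Unnormalized: the salary_history domain is nonsimple, because its
# values are themselves little relations of (date, salary) pairs.
employee_unnormalized = {
    "employee_id": 7,
    "name": "E. F. Codd",
    "salary_history": [("1969-01-01", 20000), ("1970-01-01", 24000)],
}

# Normalized: two flat relations. Each salary tuple references its
# employee through the simple employee_id domain, with no pointers.
employees = [(7, "E. F. Codd")]
salary_history = [
    (7, "1969-01-01", 20000),   # (employee_id, date, salary)
    (7, "1970-01-01", 24000),
]
```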
At this point, anyone reading Codd’s paper would have several questions, such as “Okay, how would I actually query such a system?” Codd mentions the possibility of creating a universal sublanguage for querying relational databases from other programs, but declines to define such a language in this particular paper. He does explain, in mathematical terms, many of the fundamental operations such a language would have to support, like joins, “projection” (`SELECT` in SQL), and “restriction” (`WHERE`). The amazing thing about Codd’s 1970 paper is that, really, all the ideas are there—we’ve been writing `SELECT` statements and joins for almost half a century now.
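
Those operations are simple enough to sketch directly. The snippet below is an illustration in Python, not Codd's notation; relations are modeled as lists of dicts, and the sample data is invented.

```
employees = [
    {"employee_id": 7, "name": "Codd", "dept": "Research"},
    {"employee_id": 8, "name": "Bachman", "dept": "Engineering"},
]
salaries = [
    {"employee_id": 7, "salary": 24000},
    {"employee_id": 8, "salary": 23000},
]

def project(relation, attrs):          # "projection" = SELECT in SQL
    return [{a: row[a] for a in attrs} for row in relation]

def restrict(relation, predicate):     # "restriction" = WHERE in SQL
    return [row for row in relation if predicate(row)]

def join(r1, r2, attr):                # match tuples on a shared attribute
    return [{**a, **b} for a in r1 for b in r2 if a[attr] == b[attr]]

print(project(employees, ["name"]))
print(restrict(salaries, lambda row: row["salary"] > 23000))
print(join(employees, salaries, "employee_id"))
```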
Codd wraps up the paper by discussing ways in which a normalized relational database, on top of its other benefits, can reduce redundancy and improve consistency in data storage. Altogether, the paper is only 11 pages long and not that difficult a read. I encourage you to look through it yourself. It would be another ten years before Codd’s ideas were properly implemented in a functioning system, but, when they finally were, those systems were so obviously better than previous systems that they took the world by storm.

If you enjoyed this post, more posts like it come out every two weeks! Follow [@TwoBitHistory][4] on Twitter or subscribe to the [RSS feed][5] to make sure you know when a new post is out.

--------------------------------------------------------------------------------

via: https://twobithistory.org/2017/12/29/codd-relational-model.html

Author: [Two-Bit History][a]
Selected by: [lujun9972][b]
Translated by: [译者ID](https://github.com/译者ID)
Proofread by: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/)

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://cs.uwaterloo.ca/~david/cs848s14/codd-relational.pdf
[2]: https://twobithistory.org/2017/10/07/the-most-important-database.html
[3]: https://pdfs.semanticscholar.org/f371/d196bf0e7b43df6dcbbc44de461925a21709.pdf
[4]: https://twitter.com/TwoBitHistory
[5]: https://twobithistory.org/feed.xml

@@ -1,4 +1,4 @@
-How writing can change your career for the better, even if you don't identify as a writer Translating by FelixYFZ
+How writing can change your career for the better, even if you don't identify as a writer
 ======
 Have you read Marie Kondo's book [The Life-Changing Magic of Tidying Up][1]? Or did you, like me, buy it and read a little bit and then add it to the pile of clutter next to your bed?

sources/talk/20180527 Whatever Happened to the Semantic Web.md | 106 (new file)

@@ -0,0 +1,106 @@
Whatever Happened to the Semantic Web?
======

In 2001, Tim Berners-Lee, inventor of the World Wide Web, published an article in Scientific American. Berners-Lee, along with two other researchers, Ora Lassila and James Hendler, wanted to give the world a preview of the revolutionary new changes they saw coming to the web. Since its introduction only a decade before, the web had fast become the world’s best means for sharing documents with other people. Now, the authors promised, the web would evolve to encompass not just documents but every kind of data one could imagine.

They called this new web the Semantic Web. The great promise of the Semantic Web was that it would be readable not just by humans but also by machines. Pages on the web would be meaningful to software programs—they would have semantics—allowing programs to interact with the web the same way that people do. Programs could exchange data across the Semantic Web without having to be explicitly engineered to talk to each other. According to Berners-Lee, Lassila, and Hendler, a typical day living with the myriad conveniences of the Semantic Web might look something like this:

> The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office: “Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. I’m going to have my agent set up the appointments.” Pete immediately agreed to share the chauffeuring. At the doctor’s office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved the information about Mom’s prescribed treatment within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services. It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Pete’s and Lucy’s busy schedules.

The vision was that the Semantic Web would become a playground for intelligent “agents.” These agents would automate much of the work that the world had only just learned to do on the web.

![][1]

For a while, this vision enticed a lot of people. After new technologies such as AJAX led to the rise of what Silicon Valley called Web 2.0, Berners-Lee began referring to the Semantic Web as Web 3.0. Many thought that the Semantic Web was indeed the inevitable next step. A New York Times article published in 2006 quotes a speech Berners-Lee gave at a conference in which he said that the extant web would, twenty years in the future, be seen as only the “embryonic” form of something far greater. A venture capitalist, also quoted in the article, claimed that the Semantic Web would be “profound,” and ultimately “as obvious as the web seems obvious to us today.”

Of course, the Semantic Web we were promised has yet to be delivered. In 2018, we have “agents” like Siri that can do certain tasks for us. But Siri can only do what it can because engineers at Apple have manually hooked it up to a medley of web services each capable of answering only a narrow category of questions. An important consequence is that, unless your company is large and important enough for Apple to care, you cannot advertise your services directly to Siri from your own website. Unlike the physical therapists that Berners-Lee and his co-authors imagined would be able to hang out their shingles on the web, today we are stuck with giant, centralized repositories of information. Today’s physical therapists must enter information about their practice into Google or Yelp, because those are the only services that the smartphone agents know how to use and the only ones human beings will bother to check. The key difference between our current reality and the promised Semantic future is best captured by this throwaway aside in the excerpt above: “…appointment times (supplied by the agents of individual providers through **their** Web sites)…”

In fact, over the last decade, the web has not only failed to become the Semantic Web but also threatened to recede as an idea altogether. We now hardly ever talk about “the web” and instead talk about “the internet,” which as of 2016 has become such a common term that newspapers no longer capitalize it. (To be fair, they stopped capitalizing “web” too.) Some might still protest that the web and the internet are two different things, but the distinction gets less clear all the time. The web we have today is slowly becoming a glorified app store, just the easiest way among many to download software that communicates with distant servers using closed protocols and schemas, making it functionally identical to the software ecosystem that existed before the web. How did we get here? If the effort to build a Semantic Web had succeeded, would the web have looked different today? Or have there been so many forces working against a decentralized web for so long that the Semantic Web was always going to be stillborn?

### Semweb Hucksters and Their Metacrap

To some more practically minded engineers, the Semantic Web was, from the outset, a utopian dream.

The basic idea behind the Semantic Web was that everyone would use a new set of standards to annotate their webpages with little bits of XML. These little bits of XML would have no effect on the presentation of the webpage, but they could be read by software programs to divine meaning that otherwise would only be available to humans.

The bits of XML were a way of expressing metadata about the webpage. We are all familiar with metadata in the context of a file system: When we look at a file on our computers, we can see when it was created, when it was last updated, and whom it was originally created by. Likewise, webpages on the Semantic Web would be able to tell your browser who authored the page and perhaps even where that person went to school, or where that person is currently employed. In theory, this information would allow Semantic Web browsers to answer queries across a large collection of webpages. In their article for Scientific American, Berners-Lee and his co-authors explain that you could, for example, use the Semantic Web to look up a person you met at a conference whose name you only partially remember.

Cory Doctorow, a blogger and digital rights activist, published an influential essay in 2001 that pointed out the many problems with depending on voluntarily supplied metadata. A world of “exhaustive, reliable” metadata would be wonderful, he argued, but such a world was “a pipe-dream, founded on self-delusion, nerd hubris, and hysterically inflated market opportunities.” Doctorow had found himself in a series of debates over the Semantic Web at tech conferences and wanted to catalog the serious issues that the Semantic Web enthusiasts (Doctorow calls them “semweb hucksters”) were overlooking. The essay, titled “Metacrap,” identifies seven problems, among them the obvious fact that most web users were likely to provide either no metadata at all or else lots of misleading metadata meant to draw clicks. Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users.

Indeed, the web had already seen people abusing the HTML `<meta>` tag (introduced at least as early as HTML 4) in an attempt to improve the visibility of their webpages in search results. In a 2004 paper, Ben Munat, then an academic at Evergreen State College, explains how search engines once experimented with using keywords supplied via the `<meta>` tag to index results, but soon discovered that unscrupulous webpage authors were including tags unrelated to the actual content of their webpage. As a result, search engines came to ignore the `<meta>` tag in favor of using complex algorithms to analyze the actual content of a webpage. Munat concludes that a general-purpose Semantic Web is unworkable, and that the focus should be on specific domains within medicine and science.

Others have also seen the Semantic Web project as tragically flawed, though they have located the flaw elsewhere. Aaron Swartz, the famous programmer and another digital rights activist, wrote in an unfinished book about the Semantic Web published after his death that Doctorow was “attacking a strawman.” Nobody expected that metadata on the web would be thoroughly accurate and reliable, but the Semantic Web, or at least a more realistically scoped version of it, remained possible. The problem, in Swartz’ view, was the “formalizing mindset of mathematics and the institutional structure of academics” that the “semantic Webheads” brought to bear on the challenge. In forums like the World Wide Web Consortium (W3C), a huge amount of effort and discussion went into creating standards before there were any applications out there to standardize. And the standards that emerged from these “Talmudic debates” were so abstract that few of them ever saw widespread adoption. The few that did, like XML, were “uniformly scourges on the planet, offenses against hardworking programmers that have pushed out sensible formats (like JSON) in favor of overly-complicated hairballs with no basis in reality.” The Semantic Web might have thrived if, like the original web, its standards were eagerly adopted by everyone. But that never happened because—as [has been discussed][2] on this blog before—the putative benefits of something like XML are not easy to sell to a programmer when the alternatives are both entirely sufficient and much easier to understand.

### Building the Semantic Web

If the Semantic Web was not an outright impossibility, it was always going to require the contributions of lots of clever people working in concert.

The long effort to build the Semantic Web has been said to consist of four phases. The first phase, which lasted from 2001 to 2005, was the golden age of Semantic Web activity. Between 2001 and 2005, the W3C issued a slew of new standards laying out the foundational technologies of the Semantic future.

The most important of these was the Resource Description Framework (RDF). The W3C issued the first version of the RDF standard in 2004, but RDF had been floating around since 1997, when a W3C working group introduced it in a draft specification. RDF was originally conceived of as a tool for modeling metadata and was partly based on earlier attempts by Ramanathan Guha, an Apple engineer, to develop a metadata system for files stored on Apple computers. The Semantic Web working groups at W3C repurposed RDF to represent arbitrary kinds of general knowledge.

RDF would be the grammar in which Semantic webpages expressed information. The grammar is a simple one: Facts about the world are expressed in RDF as triplets of subject, predicate, and object. Tim Bray, who worked with Ramanathan Guha on an early version of RDF, gives the following example, describing TV shows and movies:

```
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex: <http://www.example.org/> .

ex:vincent_donofrio ex:starred_in ex:law_and_order_ci .
ex:law_and_order_ci rdf:type ex:tv_show .
ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix .
```

The syntax is not important, especially since RDF can be represented in a number of formats, including XML and JSON. This example is in a format called Turtle, which expresses RDF triplets as straightforward sentences terminated by periods. The three essential sentences, which appear above after the `@prefix` preamble, state three facts: Vincent Donofrio starred in Law and Order, Law and Order is a type of TV Show, and the movie The Thirteenth Floor has a similar plot as The Matrix. (If you don’t know who Vincent Donofrio is and have never seen The Thirteenth Floor, I, too, was watching Nickelodeon and sipping Capri Suns in 1999.)
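
If you want to experiment with triples like these, one way to load and query them (assuming the third-party rdflib library, which the original post does not mention) looks like this:

```
import rdflib

# the same Turtle document as the example above
turtle = """
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex: <http://www.example.org/> .

ex:vincent_donofrio ex:starred_in ex:law_and_order_ci .
ex:law_and_order_ci rdf:type ex:tv_show .
ex:the_thirteenth_floor ex:similar_plot_as ex:the_matrix .
"""

g = rdflib.Graph()
g.parse(data=turtle, format="turtle")

# ask the graph: what did ex:vincent_donofrio star in?
ex = rdflib.Namespace("http://www.example.org/")
for show in g.objects(ex.vincent_donofrio, ex.starred_in):
    print(show)  # http://www.example.org/law_and_order_ci
```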
Other specifications finalized and drafted during this first era of Semantic Web development describe all the ways in which RDF can be used. RDF in Attributes (RDFa) defines how RDF can be embedded in HTML so that browsers, search engines, and other programs can glean meaning from a webpage. RDF Schema and another standard called OWL allow RDF authors to demarcate the boundary between valid and invalid RDF statements in their RDF documents. RDF Schema and OWL, in other words, are tools for creating what are known as ontologies, explicit specifications of what can and cannot be said within a specific domain. An ontology might include a rule, for example, expressing that no person can be the mother of another person without also being a parent of that person. The hope was that these ontologies would be widely used not only to check the accuracy of RDF found in the wild but also to make inferences about omitted information.
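
The mother-parent rule gives a feel for the inference side. Here is a toy sketch (plain Python over subject-predicate-object tuples; the names are made up, and this is nothing like a real OWL reasoner) that derives an omitted fact from a stated one:

```
# One stated fact, as a subject-predicate-object triple.
triples = {("ex:alice", "ex:mother_of", "ex:bob")}

# The ontology rule: anything that holds via mother_of also holds via parent_of.
RULES = [("ex:mother_of", "ex:parent_of")]

def infer(triples, rules):
    derived = set(triples)
    for subj, pred, obj in triples:
        for if_pred, then_pred in rules:
            if pred == if_pred:
                derived.add((subj, then_pred, obj))
    return derived

for triple in sorted(infer(triples, RULES)):
    print(triple)
# ('ex:alice', 'ex:mother_of', 'ex:bob')
# ('ex:alice', 'ex:parent_of', 'ex:bob')
```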
In 2006, Tim Berners-Lee posted a short article in which he argued that the existing work on Semantic Web standards needed to be supplemented by a concerted effort to make semantic data available on the web. Furthermore, once on the web, it was important that semantic data link to other kinds of semantic data, ensuring the rise of a data-based web as interconnected as the existing web. Berners-Lee used the term “linked data” to describe this ideal scenario. Though “linked data” was in one sense just a recapitulation of the original vision for the Semantic Web, it became a term that people could rally around and thus amounted to a rebranding of the Semantic Web project.

Berners-Lee’s article launched the second phase of the Semantic Web’s development, where the focus shifted from setting standards and building toy examples to creating and popularizing large RDF datasets. Perhaps the most successful of these datasets was [DBpedia][3], a giant repository of RDF triplets extracted from Wikipedia articles. DBpedia, which made heavy use of the Semantic Web standards that had been developed in the first half of the 2000s, was a standout example of what could be accomplished using the W3C’s new formats. Today DBpedia describes 4.58 million entities and is used by organizations like the NY Times, BBC, and IBM, which employed DBpedia as a knowledge source for IBM Watson, the Jeopardy-winning artificial intelligence system.

![][4]

The third phase of the Semantic Web’s development involved adapting the W3C’s standards to fit the actual practices and preferences of web developers. By 2008, JSON had begun its meteoric rise to popularity. Whereas XML came packaged with a bunch of associated technologies of indeterminate purpose (XSLT, XPath, XQuery, XLink), JSON was just JSON. It was less verbose and more readable. Manu Sporny, an entrepreneur and member of the W3C, had already started using JSON at his company and wanted to find an easy way for RDFa and JSON to work together. The result would be JSON-LD, which in essence was RDF reimagined for a world that had chosen JSON over XML. Sporny, together with his CTO, Dave Longley, issued a draft specification of JSON-LD in 2010. For the next few years, JSON-LD and an updated RDF specification would be the primary focus of Semantic Web work at the W3C. JSON-LD could be used on its own or it could be embedded within a `<script>` tag on an HTML page, making it an alternative to both RDF and RDFa.
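
For a sense of what that looks like, here is a small JSON-LD document, built as a Python dict for convenience (the vocabulary URL is illustrative): the `@context` maps short names to URIs, which is how JSON-LD recovers RDF's globally unique identifiers from otherwise ordinary-looking JSON.

```
import json

# The same triple as the Turtle example, now as plain JSON.
# "@context" and "@id" are JSON-LD keywords; the ex: vocabulary is made up.
doc = {
    "@context": {"ex": "http://www.example.org/"},   # short name -> URI
    "@id": "ex:vincent_donofrio",
    "ex:starred_in": {"@id": "ex:law_and_order_ci"},
}

print(json.dumps(doc, indent=2))
```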
|
||||
|
||||
Work on JSON-LD coincided with the development of [schema.org][5], a centralized collection of simple schemas for describing things that might exist on the web. schema.org was started by Google, Bing, and Yahoo with the express purpose of delivering better search results by agreeing to a common set of vocabularies. schema.org vocabularies, together with JSON-LD, are now used to drive features like Google’s Knowledge Graph. The approach was a more practical and less abstract one, where immediate applications in search results were the focus. The schema.org team are careful to state on their website that they are not attempting to create a “universal ontology.”
|
||||
|
||||
Today, work on the Semantic Web seems to have petered out. The W3C still does some work on the Semantic Web under the heading of “Data Activity,” which might charitably be called the fourth phase of the Semantic Web project. But it’s telling that the most recent “Data Activity” project is a study of what the W3C must do to improve its standardization process. Even the W3C now appears to recognize that few of its Semantic Web standards have been widely adopted and that simpler standards would have been more successful. The attitude at the W3C seems to be one of retrenchment and introspection, perhaps in the hope of being better prepared when the Semantic Web looks promising again.
|
||||
|
||||
### A Lingering Legacy
|
||||
|
||||
And so the Semantic Web, as colorfully described by one person, is “as dead as last year’s roadkill.” At least, the version of the Semantic Web originally proposed by Tim Berners-Lee, which once seemed to be the imminent future of the web, is unlikely to emerge soon. That said, many of the technologies and ideas that were developed amid the push to create the Semantic Web have been repurposed and live on in various applications. As already mentioned, Google relies on Semantic Web technologies—now primarily JSON-LD—to generate useful conceptual summaries next to search results. schema.org maintains a list of “vocabularies” that web developers can use to publish easily understood data for a wide audience—it is a new, more practical imagining of what a public, shared ontology might look like. And to some degree, the many REST APIs now available constitute a diminished Semantic Web. What wasn’t possible in 2001 now is: You can easily build applications that make use of data from across the web. The difference is that you must sign up for each API one by one beforehand, which in addition to being wearisome also gives whoever hosts the API enormous control over how you access their data.
|
||||
|
||||
Another modern application of Semantic Web technologies, perhaps the most popular and successful in recent years outside of Google, is Facebook’s [OpenGraph][6] protocol. The OpenGraph protocol defines a schema that web developers can use (via RDFa) to determine how a web page is displayed when shared in a social media application. For example, a web developer working at the New York Times might use OpenGraph to specify the title and thumbnail that should appear when a New York Times article is shared in Facebook. In one sense, this is an application of Semantic Web technologies true to the Semantic Web’s origins in research on metadata. Tagging a webpage with extra information about who wrote it and what it is about is exactly the kind of metadata authoring the Semantic Web was going to depend on. But in another sense, OpenGraph is an application of Semantic Web technologies to further a purpose somewhat at odds with the philosophy of the web. The metadata isn’t meant to be general-purpose, after all. People tag their webpages using OpenGraph because they want links to their content to unfurl properly in Facebook. And the more information Facebook knows about your website, the closer Facebook gets to simply reproducing your entire website within Facebook, portending a future where the open web is a mythical land beyond Facebook’s towering blue walls.
|
||||
|
||||
What’s fascinating about JSON-LD and OpenGraph is that you can use them without knowing anything about subject-predicate-object triplets, RDF, RDF Schema, ontologies, OWL, or really any other Semantic Web technologies—you don’t even have to know XML. Manu Sporny has even said that the JSON-LD working group at W3C made a special effort to avoid references to RDF in the JSON-LD specification. This is almost certainly why these technologies have succeeded and continue to be popular. Nobody wants to use a tool that can only be fully understood by reading a whole family of specifications.
It’s interesting to consider what might have happened if simple formats like JSON-LD had appeared earlier. The Semantic Web could have sidestepped its fatal association with XML. More people might have been tempted to mark up their websites with RDF, but even that may not have saved the Semantic Web. Sean B. Palmer, an Internet Person who has scrubbed all biographical information about himself from the internet but who claims to have worked in the Semantic Web world for a while in the 2000s, posits that the real problem was the lack of a truly decentralized infrastructure to host the Semantic Web on. To host your own website, you need to buy a domain name from ICANN, configure it correctly using DNS, and then pay someone to host your content if you don’t already have a server of your own. We shouldn’t be surprised if the average person finds it easier to enter their information into a giant, corporate data repository. And in a web of giant, corporate data repositories, there are no compelling use cases for Semantic Web technologies.

So the problems that confronted the Semantic Web were more numerous and profound than just “XML sucks.” All the same, it’s hard to believe that the Semantic Web is truly dead and gone. Some of the particular technologies that the W3C dreamed up in the early 2000s may not have a future, but the decentralized vision of the web that Tim Berners-Lee and his fellow researchers described in Scientific American is too compelling to simply disappear. Imagine a web where, rather than filling out the same tedious form every time you register for a service, you were somehow able to authorize services to get that information from your own website. Imagine a Facebook that keeps your list of friends, hosted on your own website, up-to-date, rather than vice-versa. Basically, the Semantic Web was going to be a web where everyone gets to have their own personal REST API, whether they know the first thing about computers or not. Conceived of that way, it’s easy to see why the Semantic Web hasn’t yet been realized. There are so many engineering and security issues to sort out between here and there. But it’s also easy to see why the dream of the Semantic Web seduced so many people.

If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][7] on Twitter or subscribe to the [RSS feed][8] to make sure you know when a new post is out.

--------------------------------------------------------------------------------

via: https://twobithistory.org/2018/05/27/semantic-web.html

作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/images/scientific_american_cover.jpg
[2]: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
[3]: http://wiki.dbpedia.org/
[4]: https://twobithistory.org/images/linked_data.png
[5]: http://schema.org/
[6]: http://ogp.me/
[7]: https://twitter.com/TwoBitHistory
[8]: https://twobithistory.org/feed.xml
@ -1,110 +0,0 @@
|
||||
HankChow translating
|
||||
|
||||
3 areas to drive DevOps change
|
||||
======
|
||||
Driving large-scale organizational change is painful, but when it comes to DevOps, the payoff is worth the pain.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
|
||||
|
||||
Pain avoidance is a powerful motivator. Some studies hint that even [plants experience a type of pain][1] and take steps to defend themselves. Yet we have plenty of examples of humans enduring pain on purpose—exercise often hurts, but we still do it. When we believe the payoff is worth the pain, we'll endure almost anything.
|
||||
|
||||
The truth is that driving large-scale organizational change is painful. It hurts for those having to change their values and behaviors, it hurts for leadership, and it hurts for the people just trying to do their jobs. In the case of DevOps, though, I can tell you the pain is worth it.
|
||||
|
||||
I've seen firsthand how teams learn they must spend time improving their technical processes, take ownership of their automation pipelines, and become masters of their fate. They gain the tools they need to be successful.
|
||||
|
||||
![Improvements after DevOps transformation][3]
|
||||
|
||||
Image by Lee Eason. CC BY-SA 4.0
|
||||
|
||||
This chart shows the value of that change. In a company where I directed a DevOps transformation, its 60+ teams submitted more than 900 requests per month to release management. If you add up the time those tickets stayed open, it came to more than 350 days per month. What could your company do with an extra 350 person-days per month? In addition to the improvements seen above, they went from 100 to 9,000 deployments per month, a 24% decrease in high-severity bugs, happier engineers, and improved net promoter scores (NPS). The biggest NPS improvements link to the teams furthest along on their DevOps journey, as the [Puppet State of DevOps][4] report predicted. The bottom line is that investments into technical process improvement translate into better business outcomes.
|
||||
|
||||
DevOps leaders must focus on three main areas to drive this change: executives, culture, and team health.
|
||||
|
||||
### Executives
|
||||
|
||||
The bottom line is that investments into technical process improvement translate into better business outcomes.
|
||||
|
||||
The larger your organization, the greater the distance (and opportunities for misunderstanding) between business leadership and the individuals delivering services to your customers. To make things worse, the landscape of tools and practices in technology is changing at an accelerating rate. This makes it practically impossible for business leaders to understand on their own how transformations like DevOps or agile work.
|
||||
|
||||
DevOps leaders must help executives come along for the ride. Educating leaders gives them options when they're making decisions and makes it more likely they'll choose paths that help your company.
|
||||
|
||||
For example, let's say your executives believe DevOps is going to improve how you deploy your products into production, but they don't understand how. You've been working with a software team to help automate their deployment. When an executive hears about a deploy failure (and there will be failures), they will want to understand how it occurred. When they learn the software team did the deployment rather than the release management team, they may try to protect the business by decreeing all production releases must go through traditional change controls. You will lose credibility, and teams will be far less likely to trust you and accept further changes.
|
||||
|
||||
It takes longer to rebuild trust with executives and get their support after an incident than it would have taken to educate them in the first place. Put the time in upfront to build alignment, and it will pay off as you implement tactical changes.
|
||||
|
||||
Two pieces of advice when building that alignment:
|
||||
|
||||
* First, **don't ignore any constraints** they raise. If they have worries about contracts or security, make the heads of legal and security your new best friends. By partnering with them, you'll build their trust and avoid making costly mistakes.
|
||||
* Second, **use metrics to build a bridge** between what your delivery teams are doing and your executives' concerns. If the business has a goal to reduce customer churn, and you know from research that many customers leave because of unplanned downtime, reinforce that your teams are committed to tracking and improving Mean Time To Detection and Resolution (MTTD and MTTR). You can use those key metrics to show meaningful progress that teams and executives understand and get behind.
|
||||
|
||||
|
||||
|
||||
### Culture
|
||||
|
||||
DevOps is a culture of continuous improvement focused on code, build, deploy, and operational processes. Culture describes the organization's values and behaviors. Essentially, we're talking about changing how people behave, which is never easy.
|
||||
|
||||
I recommend reading [The Wolf in CIO's Clothing][5]. Spend time thinking about psychology and motivation. Read [Drive][6] or at least watch Daniel Pink's excellent [TED Talk][7]. Read [The Hero with a Thousand Faces][8] and learn to identify the different journeys everyone is on. If none of these things sound interesting, you are not the right person to drive change in your company. Otherwise, read on!
|
||||
|
||||
Essentially, we're talking about changing how people behave, which is never easy.
|
||||
|
||||
Most rational people behave according to their values. Most organizations don't have explicit values everyone understands and lives by. Therefore, you'll need to identify the organization's values that have led to the behaviors that have led to the current state. You also need to make sure you can tell the story about how those values came to be and how they led to where you are. When you tell that story, be careful not to demonize those values—they aren't immoral or evil. People did the best they could at the time, given what they knew and what resources they had.
|
||||
|
||||
Explain that the company and its organizational goals are changing, and the team must alter its values. It's helpful to express this in terms of contrast. For example, your company may have historically valued cost savings above all else. That value is there for a reason—the company was cash-strapped. To get new products out, the infrastructure group had to tightly couple services by sharing database clusters or servers. Over time, those practices created a real mess that became hard to maintain. Simple changes started breaking things in unexpected ways. This led to tight change-control processes that were painful for delivery teams, so they stopped changing things.
|
||||
|
||||
Play that movie for five years, and you end up with little to no innovation, legacy technology, attraction and retention problems, and poor-quality products. You've grown the company, but you've hit a ceiling, and you can't continue to grow with those same values and behaviors. Now you must put engineering efficiency above cost saving. If one option will help teams maintain their service easier, but the other option is cheaper in the short term, you go with the first option.
|
||||
|
||||
You must tell this story again and again. Then you must celebrate any time a team expresses the new value through their behavior—even if they make a mistake. When a team has a deploy failure, congratulate them for taking the risk and encourage them to keep learning. Explain how their behavior is leading to the right outcome and support them. Over time, teams will see the message is real, and they'll feel safe altering their behavior.
|
||||
|
||||
### Team health
|
||||
|
||||
Have you ever been in a planning meeting and heard something like this: "We can't really estimate that story until John gets back from vacation. He's the only one who knows that area of the code well enough." Or: "We can't get this task done because it's got a cross-team dependency on network engineering, and the guy that set up the firewall is out sick." Or: "John knows that system best; if he estimated the story at a 3, then let's just go with that." When the team works on that story, who will most likely do the work? That's right, John will, and the cycle will continue.
|
||||
|
||||
For a long time, we've accepted that this is just the nature of software development. If we don't solve for it, we perpetuate the cycle.
|
||||
|
||||
Entropy will always drive teams naturally towards disorder and bad health. Our job as team members and leaders is to intentionally manage against that entropy and keep our teams healthy. Transformations like DevOps, agile, moving to the cloud, or refactoring a legacy application all amplify and accelerate that entropy. That's because transformations add new skills and expertise needed for the team to take on that new type of work.
|
||||
|
||||
Let's look at an example of a product team refactoring its legacy monolith. As usual, they build those new services in AWS. The legacy monolith was deployed to the data center, monitored, and backed up by IT. IT made sure the application's infosec requirements were met at the infrastructure layer. They conducted disaster recovery tests, patched the servers, and installed and configured required intrusion detection and antivirus agents. And they kept change control records, required for the annual audit process, of everything was done to the application's infrastructure.
|
||||
|
||||
I often see product teams make the fatal mistake of thinking IT is all cost and bottleneck. They're hungry to shed the skin of IT and use the public cloud, but they never stop to appreciate the critical services IT provides. Moving to the cloud means you implement these things differently; they don't go away. AWS is still a data center, and any team utilizing it accepts the related responsibilities.
|
||||
|
||||
In practice, this means product teams must learn how to do those IT services when they move to the cloud. So, when our fictional product team starts refactoring its legacy application and putting new services in the cloud, it will need a vastly expanded skillset to be successful. Those skills don’t magically appear—they’re learned or hired—and team leaders and managers must actively manage the process.
|
||||
|
||||
I built [Tekata.io][9] because I couldn't find any tools to support me as I helped my teams evolve. Tekata is free and easy to use, but the tool is not as important as the people and process. Make sure you build continuous learning into your cadence and keep track of your team's weak spots. Those weak spots affect your ability to deliver, and filling them usually involves learning new things, so there's a wonderful synergy here. In fact, 76% of millennials think professional development opportunities are [one of the most important elements][10] of company culture.
|
||||
|
||||
### Proof is in the payoff
|
||||
|
||||
DevOps transformations involve altering the behavior, and therefore the culture, of your teams. That must be done with executive support and understanding. At the same time, those behavior changes mean learning new skills, and that process must also be managed carefully. But the payoff for pulling this off is more productive teams, happier and more engaged team members, higher quality products, and happier customers.
|
||||
|
||||
Lee Eason will present [Tales From A DevOps Transformation][11] at [All Things Open][12], October 21-23 in Raleigh, N.C.
|
||||
|
||||
Disclaimer: All opinions and statements in this article are exclusively those of Lee Eason and are not representative of Ipreo or IHS Markit.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/10/tales-devops-transformation
|
||||
|
||||
作者:[Lee Eason][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/leeeason
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6
|
||||
[2]: /file/411061
|
||||
[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png (Improvements after DevOps transformation)
|
||||
[4]: https://puppet.com/resources/whitepaper/state-of-devops-report
|
||||
[5]: https://www.gartner.com/en/publications/wolf-cio
|
||||
[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us
|
||||
[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094
|
||||
[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces
|
||||
[9]: https://tekata.io/
|
||||
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
|
||||
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
|
||||
[12]: https://allthingsopen.org/
|
113
sources/tech/20170928 The Lineage of Man.md
Normal file
113
sources/tech/20170928 The Lineage of Man.md
Normal file
@ -0,0 +1,113 @@
|
||||
The Lineage of Man
|
||||
======
|
||||
I’ve always found man pages fascinating. Formatted as strangely as they are and accessible primarily through the terminal, they have always felt to me like relics of an ancient past. Some man pages probably are ancient: I’d love to know how many times the man page for `cat` or say `tee` has been revised since the early days of Unix, but I’m willing to bet it’s not many. Man pages are mysterious—it’s not obvious where they come from, where they live on your computer, or what kind of file they might be stored in—and yet it’s hard to believe that something so fundamental and so obviously governed by rigid conventions could remain so inscrutable. Where did the man page conventions come from? Where are they codified? If I wanted to write my own man page, where would I even begin?
|
||||
|
||||
The story of `man` is inextricably tied to the story of Unix. The very first version of Unix, completed in 1971 but only available internally at Bell Labs, did not provide a `man` command. But Douglas McIlroy, who at the time was head of the Computing Techniques Research Department and managed the Unix project, insisted that some kind of documentation be made available. He pushed Ken Thompson and Dennis Ritchie, the two programmers commonly credited with creating Unix, to write some. The result was the [first edition][1] of the Unix Programmer’s Manual.
|
||||
|
||||
The first edition of the Unix Programmer’s Manual consisted of (physical!) pages collected together in a single binder. It documented only 61 different commands, along with a couple dozen system calls and a few library routines. Though the `man` command itself was not to come until later, the first edition of the Unix Programmer’s Manual established many of the conventions that man pages adhere to today, even in the absence of an official specification. The documentation for each command included the well-known NAME, SYNOPSIS, DESCRIPTION, and SEE ALSO headings. Optional flags were enclosed in square brackets and meta-arguments (for example, “file” where a file path is expected) were underlined. The manual also established the canonical manual sections such as Section 1 for General Commands, Section 2 for System Calls, and so on; these sections were, at the time, simply sections of a very long printed document. Thompson and Ritchie could not have known that they were establishing a tradition that would survive for decades and decades, but that is what they did.
|
||||
|
||||
McIlroy later speculated about why the man page format has survived as long as it has. In a technical report about the conceptual development of Unix, he noted that the original man pages were written in a “terse, yet informal, prose style” that together with the alphabetical ordering of information “encouraged accurate on-line documentation.” In a nod to an experience with man pages that all programmers have had at one time or another, he added that the man page format “was popular with initiates who needed to look up facts, albeit sometimes frustrating for beginners who didn’t know what facts to look for.” McIlroy was highlighting the sometimes-overlooked distinction between tutorial and reference; man pages may not be much use for the former, but they are perfect for the latter.
|
||||
|
||||
The `man` command was a part of Unix by the time the [second edition][2] of the Unix Programmer’s Manual was printed. It made the entire manual available “on-line”, meaning interactively, which was seen as enormously useful. The `man` command has its own manual page in the second edition (this page is the original `man man`), which explains that `man` can be used to “run off a particular section of this manual.” Among the original Unix wizards, the term “run off” referred to the physical act of printing a document but also to the program they used to typeset documents, `roff`. The `roff` program had been used to typeset both the first and second editions of the Unix Programmer’s Manual before they were printed, but it was now also used by `man` to process man pages before they were displayed. The man pages themselves were stored on every Unix system in a file format meant to be read by `roff`.
|
||||
|
||||
`roff` was the first in a long lineage of typesetting programs that have always been used to format man pages. Its own development can be traced back to a program called `RUNOFF` that was written in the mid-60s. At Bell Labs, `roff` spawned several successors including `nroff` (en-roff) and `troff` (tee-roff). `nroff` was designed to improve on `roff` and better output text to terminals, while `troff` tackled the problem of printing using a CAT phototypesetter. (If you don’t know what phototypesetting is, as I did not, I refer you to [this][3] eminently watchable film.) All of these programs were based on a kind of markup language consisting of two-letter commands inserted at the beginning of every line in a document. These commands could control such things as font size, text positioning, line spacing, and so on. Today, the most common implementation of the `roff` system is `groff`, a part of the GNU project.
|
||||
|
||||
It’s easy to get a sense of what `roff` input files look like by just taking a gander at some of the man pages stored on your own computer. At least on a BSD-derived system like MacOS, you can use the `--path` argument to `man` to find out where the man page for a particular command is stored. Typically this will be under `/usr/share/man` or `/usr/local/share/man`. Using `man` this way, you can find the path for the `man` man page itself and then open it in a text editor.
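
As a quick sketch (the path shown here is illustrative and will differ between systems and `man` implementations):

```
$ man --path man
/usr/share/man/man1/man.1
$ less /usr/share/man/man1/man.1
```

It will not look anything like what you’re used to looking at with `man`. On my system, the first couple dozen lines are: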

```
.TH man 1 "September 19, 2005"
.LO 1
.SH NAME
man \- format and display the on-line manual pages
.SH SYNOPSIS
.B man
.RB [ \-acdfFhkKtwW ]
.RB [ --path ]
.RB [ \-m
.IR system ]
.RB [ \-p
.IR string ]
.RB [ \-C
.IR config_file ]
.RB [ \-M
.IR pathlist ]
.RB [ \-P
.IR pager ]
.RB [ \-B
.IR browser ]
.RB [ \-H
.IR htmlpager ]
.RB [ \-S
.IR section_list ]
.RI [ section ]
.I "name ..."

.SH DESCRIPTION
.B man
formats and displays the on-line manual pages. If you specify
.IR section ,
.B man
only looks in that section of the manual.
.I name
is normally the name of the manual page, which is typically the name
of a command, function, or file.
However, if
.I name
contains a slash
.RB ( / )
then
.B man
interprets it as a file specification, so that you can do
.B "man ./foo.5"
or even
.B "man /cd/foo/bar.1.gz\fR.\fP"
.PP
See below for a description of where
.B man
looks for the manual page files.
```
|
||||
|
||||
You can make out, for example, that all of the section headings are preceded by `.SH`, and everything that would appear in bold is preceded by `.B`. These commands are `roff` macros specifically designed for writing man pages. The macros used here are part of a package called `man`, but there are other packages such as `mdoc` that you can use for the same purpose. The macros make writing man pages much simpler than it would otherwise be. They also enforce consistency by always compiling down to the same set of lower-level `roff` commands. The `man` and `mdoc` packages are now documented under [GROFF_MAN(7)][4] and [GROFF_MDOC(7)][5] respectively.
|
||||
|
||||
The entire `roff` system is reminiscent of LaTeX, another typesetting tool that today enjoys much more popularity. LaTeX is essentially a big bucket of macros built on top of the core TeX system designed by Donald Knuth. Like with `roff`, there are many other macro packages that you can incorporate into your LaTeX documents. These macro packages mean that you almost never have to write raw TeX yourself. LaTeX has superseded `roff` in many domains, but it is poorly suited to formatting text for a terminal, so nobody uses it to write man pages.
|
||||
|
||||
If you were to write a man page today, in 2017, how would you go about it? You certainly could write a man page using a `roff` macro package like `man` or `mdoc`. The syntax is unfamiliar and unwieldy. But the macros abstract away so much of the complexity that you can write a reasonably complete man page without learning very many commands. That said, there are now other options worth considering.
|
||||
|
||||
[Pandoc][6] is a widely used software tool for converting documents from one format to another. You can use Pandoc to convert Markdown files into `man`-macro-based man pages, meaning that you can now write your man pages in something as straightforward as Markdown. Pandoc supports many more Markdown constructs than most Markdown converters, giving you lots of ways to format your man page. While this convenience comes at the cost of some control, it’s unlikely that you will ever need something that would warrant dropping down to the `roff` macro level. If you’re curious about what these Markdown files might look like, I’ve written [a few of my own][7] to document a tool I created for keeping notes on how to use different command-line utilities. NPM’s [documentation][8] is also written in Markdown and converted to a `roff` man format later, though they use a JavaScript package called [marked-man][9] instead of Pandoc to do the conversion.
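
As a rough sketch of that workflow (the file name and contents here are invented, and the title-block conventions may vary between Pandoc versions):

```
$ cat hello.1.md
% HELLO(1) hello 1.0
# NAME
hello - print a friendly greeting
# SYNOPSIS
**hello** [**-n** *name*]
# DESCRIPTION
Prints a greeting to standard output.
$ pandoc -s -t man hello.1.md -o hello.1
$ man ./hello.1
```

The `-s` flag asks Pandoc for a standalone document, so the output starts with the `.TH` line a man page needs, and `man ./hello.1` works because, as we saw above, a name containing a slash is treated as a file specification.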
|
||||
|
||||
So there are now plenty of ways to write man pages, giving you lots of freedom to choose the tool you think best. That said, you’d better ensure that your man page reads like every other man page that has ever been written. Even though there is now so much flexibility in tooling, the man page conventions are as strong as ever. And you might be tempted to skip writing a man page altogether—after all, you probably have documentation on the web, or maybe you just want to rely on the `--help` flag—but you’re forgoing the patina of respectability a man page can provide. The man page is an institution that doesn’t seem likely to disappear or evolve soon, which is fascinating, because there are so many ways in which we could do man pages better. XML didn’t come off well in my [last post][10], but it would be the perfect format here, and it would allow us to do something like query `man` about an option:
|
||||
|
||||
```
$ man grep -v
Selected lines are those not matching any of the specified patterns.
```
|
||||
|
||||
Imagine that! But it seems that we’re all too used to man pages the way they are. In a field where rapid change is the norm, maybe some stability—particularly in a documentation system we all turn to in moments of ignorance and confusion—is a good thing.
|
||||
|
||||
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][11] on Twitter or subscribe to the [RSS feed][12] to make sure you know when a new post is out.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://twobithistory.org/2017/09/28/the-lineage-of-man.html
|
||||
|
||||
作者:[Two-Bit History][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://twobithistory.org
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.bell-labs.com/usr/dmr/www/1stEdman.html
|
||||
[2]: http://bitsavers.informatik.uni-stuttgart.de/pdf/att/unix/2nd_Edition/UNIX_Programmers_Manual_2ed_Jun72.pdf
|
||||
[3]: https://vimeo.com/127605644
|
||||
[4]: http://man7.org/linux/man-pages/man7/groff_man.7.html
|
||||
[5]: http://man7.org/linux/man-pages/man7/groff_mdoc.7.html
|
||||
[6]: http://pandoc.org/
|
||||
[7]: https://github.com/sinclairtarget/um/tree/02365bd0c0a229efb936b3d6234294e512e8a218/doc
|
||||
[8]: https://github.com/npm/npm/blob/20589f4b028d3e8a617800ac6289d27f39e548e8/doc/cli/npm.md
|
||||
[9]: https://www.npmjs.com/package/marked-man
|
||||
[10]: https://twobithistory.org/2017/09/21/the-rise-and-rise-of-json.html
|
||||
[11]: https://twitter.com/TwoBitHistory
|
||||
[12]: https://twobithistory.org/feed.xml
|
70
sources/tech/20171202 Simulating the Altair.md
Normal file
70
sources/tech/20171202 Simulating the Altair.md
Normal file
@ -0,0 +1,70 @@
|
||||
translating---geekpi
|
||||
|
||||
Simulating the Altair
|
||||
======
|
||||
The [Altair 8800][1] was a build-it-yourself home computer kit released in 1975. The Altair was basically the first personal computer, though it predated the advent of that term by several years. It is Adam (or Eve) to every Dell, HP, or Macbook out there.
|
||||
|
||||
Some people thought it’d be awesome to write an emulator for the Z80—a processor closely related to the Altair’s Intel 8080—and then thought it needed a simulation of the Altair’s control panel on top of it. So if you’ve ever wondered what it was like to use a computer in 1975, you can run the Altair on your Macbook:
|
||||
|
||||
![Altair 8800][2]
|
||||
|
||||
### Installing it
|
||||
|
||||
You can download Z80 pack from the FTP server available [here][3]. You’re looking for the latest Z80 pack release, something like `z80pack-1.26.tgz`.
|
||||
|
||||
First unpack the file:
|
||||
|
||||
```
$ tar -xvf z80pack-1.26.tgz
```
|
||||
|
||||
Move into the unpacked directory:
|
||||
|
||||
```
$ cd z80pack-1.26
```
|
||||
|
||||
The control panel simulation is based on a library called `frontpanel`. You’ll have to compile that library first. If you move into the `frontpanel` directory, you will find a `README` file listing the library’s own dependencies. Your experience here will almost certainly differ from mine, but perhaps my travails will be illustrative. I had the dependencies installed, but via [Homebrew][4]. To get the library to compile I just had to make sure that `/usr/local/include` was added to Clang’s include path in `Makefile.osx`.
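
The exact variable names depend on how `Makefile.osx` is written, so treat this as a sketch rather than a literal patch, but the change amounts to adding the Homebrew include directory to the compiler flags:

```
# Hypothetical line in z80pack-1.26/frontpanel/Makefile.osx:
CFLAGS += -I/usr/local/include
```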
|
||||
|
||||
If you’ve satisfied the dependencies, you should be able to compile the library (we’re now in `z80pack-1.26/frontpanel`):
|
||||
|
||||
```
$ make -f Makefile.osx
...
$ make -f Makefile.osx clean
```
|
||||
|
||||
You should end up with `libfrontpanel.so`. I copied this to `/usr/local/lib`.
|
||||
|
||||
The Altair simulator is under `z80pack-1.26/altairsim`. You now need to compile the simulator itself. Move into `z80pack-1.26/altairsim/srcsim` and run `make` once more:
|
||||
|
||||
```
$ make -f Makefile.osx
...
$ make -f Makefile.osx clean
```
|
||||
|
||||
That process will create an executable called `altairsim` one level up in `z80pack-1.26/altairsim`. Run that executable and you should see that iconic Altair control panel!
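
In other words, assuming the build succeeded:

```
$ cd ..
$ ./altairsim
```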
|
||||
|
||||
And if you really want to nerd out, read through the original [Altair manual][5].
|
||||
|
||||
If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://twobithistory.org/2017/12/02/simulating-the-altair.html
|
||||
|
||||
作者:[Two-Bit History][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://twobithistory.org
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Altair_8800
|
||||
[2]: https://www.autometer.de/unix4fun/z80pack/altair.png
|
||||
[3]: http://www.autometer.de/unix4fun/z80pack/ftp/
|
||||
[4]: http://brew.sh/
|
||||
[5]: http://www.classiccmp.org/dunfield/altair/d/88opman.pdf
|
||||
[6]: https://twitter.com/TwoBitHistory
|
||||
[7]: https://twobithistory.org/feed.xml
|
@ -1,3 +1,5 @@
|
||||
Translating by cycoe...
|
||||
cycoe 翻译中
|
||||
24 Must Have Essential Linux Applications In 2017
|
||||
======
|
||||
Brief: What are the must-have applications for Linux? The answer is subjective, and it depends on what you use your desktop Linux for. But there are still some essential Linux apps that are more likely to be used by most Linux users. We have listed the best Linux applications that you should have installed in every Linux distribution you use.
|
||||
|
@ -1,249 +0,0 @@
|
||||
translating by dianbanjiu
|
||||
Manage Your Games Using Lutris In Linux
|
||||
======
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg)
|
||||
|
||||
Let us start the first day of 2018 with games! Today, we are going to talk about **Lutris**, an open source gaming platform for Linux. You can install, remove, configure, launch and manage your games using Lutris. It helps you to manage your Linux games, Windows games, emulated console games and browser games in a single interface. It also includes community-written installer scripts to make a game's installation process a lot easier.
|
||||
|
||||
Lutris comes with more than 20 emulators installed automatically (or you can install them in a single click) that provide support for most gaming systems from the late '70s to the present day. The currently supported gaming platforms are:
|
||||
|
||||
* Native Linux
|
||||
* Windows
|
||||
* Steam (Linux and Windows)
|
||||
* MS-DOS
|
||||
* Arcade machines
|
||||
* Amiga computers
|
||||
  * Atari 8- and 16-bit computers and consoles
|
||||
* Browsers (Flash or HTML5 games)
|
||||
  * Commodore 8-bit computers
|
||||
* SCUMM based games and other point and click adventure games
|
||||
* Magnavox Odyssey², Videopac+
|
||||
* Mattel Intellivision
|
||||
* NEC PC-Engine Turbographx 16, Supergraphx, PC-FX
|
||||
* Nintendo NES, SNES, Game Boy, Game Boy Advance, DS
|
||||
* Game Cube and Wii
|
||||
  * Sega Master System, Game Gear, Genesis, Dreamcast
|
||||
* SNK Neo Geo, Neo Geo Pocket
|
||||
* Sony PlayStation
|
||||
* Sony PlayStation 2
|
||||
* Sony PSP
|
||||
* Z-Machine games like Zork
|
||||
* And more yet to come.
|
||||
|
||||
|
||||
|
||||
### Installing Lutris
|
||||
|
||||
Like Steam, Lutris consists of two parts; the website and the client application. From the website you can browse for the available games, add your favorite games to your personal library and install them using the installation link.
|
||||
|
||||
First, let us install Lutris client. It currently supports Arch Linux, Debian, Fedora, Gentoo, openSUSE, and Ubuntu.
|
||||
|
||||
For Arch Linux and its derivatives like Antergos, Manjaro Linux, it is available in [**AUR**][1]. So you can install it using any AUR helper programs.
|
||||
|
||||
Using [**Pacaur**][2]:
|
||||
```
|
||||
pacaur -S lutris
|
||||
```
|
||||
|
||||
Using **[Packer][3]** :
|
||||
```
|
||||
packer -S lutris
|
||||
```
|
||||
|
||||
Using [**Yaourt**][4]:
|
||||
```
|
||||
yaourt -S lutris
|
||||
```
|
||||
|
||||
Using [**Yay**][5]:
|
||||
```
|
||||
yay -S lutris
|
||||
```
|
||||
|
||||
**On Debian:**
|
||||
|
||||
On **Debian 9.0** run the following commands as **root** :
|
||||
```
|
||||
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_9.0/ /' > /etc/apt/sources.list.d/lutris.list
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_9.0/Release.key -O Release.key
|
||||
apt-key add - < Release.key
|
||||
apt-get update
|
||||
apt-get install lutris
|
||||
```
|
||||
|
||||
On **Debian 8.0** run the following as **root** :
|
||||
```
|
||||
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_8.0/ /' > /etc/apt/sources.list.d/lutris.list
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_8.0/Release.key -O Release.key
|
||||
apt-key add - < Release.key
|
||||
apt-get update
|
||||
apt-get install lutris
|
||||
```
|
||||
|
||||
On **Fedora 27** run the following as **root** :
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_27/home:strycore.repo
|
||||
dnf install lutris
|
||||
```
|
||||
|
||||
On **Fedora 26** run the following as **root** :
|
||||
```
|
||||
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_26/home:strycore.repo
|
||||
dnf install lutris
|
||||
```
|
||||
|
||||
On **openSUSE Tumbleweed** run the following as **root** :
|
||||
```
|
||||
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Tumbleweed/home:strycore.repo
|
||||
zypper refresh
|
||||
zypper install lutris
|
||||
```
|
||||
|
||||
On **openSUSE Leap 42.3** run the following as **root** :
|
||||
```
|
||||
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Leap_42.3/home:strycore.repo
|
||||
zypper refresh
|
||||
zypper install lutris
|
||||
```
|
||||
|
||||
On **Ubuntu 17.10**:
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.10/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.10/Release.key -O Release.key
|
||||
sudo apt-key add - < Release.key
|
||||
sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
On **Ubuntu 17.04**:
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.04/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.04/Release.key -O Release.key
|
||||
sudo apt-key add - < Release.key
|
||||
sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
On **Ubuntu 16.10**:
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.10/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.10/Release.key -O Release.key
|
||||
sudo apt-key add - < Release.key
|
||||
sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
On **Ubuntu 16.04**:
|
||||
```
|
||||
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/lutris.list"
|
||||
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.04/Release.key -O Release.key
|
||||
sudo apt-key add - < Release.key
|
||||
sudo apt-get update
|
||||
sudo apt-get install lutris
|
||||
```
|
||||
|
||||
For other platforms, refer to the [**Lutris download link**][6].
|
||||
|
||||
### Manage Your Games Using Lutris
|
||||
|
||||
Once installed, open Lutris from your Menu or Application launcher. At first launch, the default interface of Lutris will look like below.
|
||||
|
||||
[![][7]][8]
|
||||
|
||||
**Connecting to your Lutris.net account**
|
||||
|
||||
Next, you need to connect your Lutris.net account to your client in order to sync the games from your personal library. To do so, [**register a new account**][9] if you don't have one already. Then, click on the **"Connecting to your Lutris.net account to sync your library"** link in the Lutris client.
|
||||
|
||||
Enter your user credentials and click **Connect**.
|
||||
|
||||
[![][7]][10]
|
||||
|
||||
Now you're connected to your Lutris.net account.
|
||||
|
||||
[![][7]][11]**Browse Games**
|
||||
|
||||
To search for any game, click on the Browse icon (the game controller icon) in the toolbar. It will automatically take you to the Games page of the Lutris website, where you can see all available games in alphabetical order. The Lutris website has a lot of games, and more are being added constantly.
|
||||
|
||||
[![][7]][12]
|
||||
|
||||
Choose any games of your choice and add them to your library.
|
||||
|
||||
[![][7]][13]
|
||||
|
||||
Then, go back to your Lutris client and click **Menu -> Lutris -> Synchronize library**. Now you will see all the games in your library in your local Lutris client interface.
|
||||
|
||||
[![][7]][14]
|
||||
|
||||
If you don't see the games, just restart Lutris client once.
|
||||
|
||||
**Installing Games**
|
||||
|
||||
To install a game, just right-click on it and click the **Install** button. For example, I am going to install the [**2048 game**][15] on my system. As you see in the screenshot below, it asks me to choose the version to install. Since it has only one version (i.e., online), it was selected automatically. Click **Continue**.
|
||||
|
||||
[![][7]][16]Click Install:
|
||||
|
||||
[![][7]][17]
|
||||
|
||||
After installation completed, you can either launch the newly installed game or simply close the window and continue installing other games in your library.
|
||||
|
||||
**Import Steam library**
|
||||
|
||||
You can also import your Steam library. To do so, go to your Lutris profile and click the **" Sign in through Steam"** button. You will then be redirected to Steam and will be asked to enter your user credentials. Once you authorized it, your Steam account will be connected with your Lutris account. Please be mindful that your Steam account should be public in order to sync the games from the library. You can switch it back to private after the sync is completed.
|
||||
|
||||
**Adding games manually**
|
||||
|
||||
Lutris has the option to add games manually. To do so, click the plus (+) sign on the toolbar.
|
||||
|
||||
[![][7]][18]
|
||||
|
||||
In the next window, enter the game's name and choose a runner in the Game info tab. Runners are programs such as Wine or Steam for Linux that help you launch a game. You can install runners from Menu -> Manage runners.
|
||||
|
||||
[![][7]][19]
|
||||
|
||||
Then go to the next tab and choose the game's main executable or ISO. Finally, click Save. The good thing is that you can add multiple versions of the same game.
|
||||
|
||||
**Removing games**
|
||||
|
||||
To remove any installed game, just right click on it in the local library of your Lutris client application. Choose "Remove" and then "Apply".
|
||||
|
||||
[![][7]][20]
|
||||
|
||||
Lutris is just like Steam. Just add the games to your library in the website and the client will install them for you!
|
||||
|
||||
And, that's all for today, folks! We will be posting more good and useful stuffs in this year. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/manage-games-using-lutris-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://aur.archlinux.org/packages/lutris/
|
||||
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
|
||||
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
|
||||
[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
|
||||
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[6]:https://lutris.net/downloads/
|
||||
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png ()
|
||||
[9]:https://lutris.net/user/register/
|
||||
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png ()
|
||||
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png ()
|
||||
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png ()
|
||||
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png ()
|
||||
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png ()
|
||||
[15]:https://www.ostechnix.com/let-us-play-2048-game-terminal/
|
||||
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png ()
|
||||
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png ()
|
||||
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png ()
|
||||
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png ()
|
||||
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png ()
|
@ -1,3 +1,6 @@
|
||||
Translating by MjSeven
|
||||
|
||||
|
||||
Getting started with Python for data science
|
||||
======
|
||||
|
||||
|
@ -1,131 +0,0 @@
|
||||
translating---geekpi
|
||||
|
||||
How To Lock Virtual Console Sessions On Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)
|
||||
|
||||
When you’re working on a shared system, you might not want other users to sneak a peek at your console to see what you’re actually doing. If so, I know a simple trick to lock your own session while still allowing other users to use the system on other virtual consoles. This is possible thanks to **Vlock**, which stands for **V**irtual Console **lock**, a command line program to lock one or more sessions on the Linux console. If necessary, you can lock the entire console and disable the virtual console switching functionality altogether. Vlock is especially useful for shared Linux systems which have multiple users with access to the console.
|
||||
|
||||
### Installing Vlock
|
||||
|
||||
On Arch-based systems, the Vlock package has been replaced by the **kbd** package, which is preinstalled by default, so you don't need to bother with installation.
|
||||
|
||||
On Debian, Ubuntu, Linux Mint, run the following command to install Vlock:
|
||||
|
||||
```
|
||||
$ sudo apt-get install vlock
|
||||
```
|
||||
|
||||
On Fedora:
|
||||
|
||||
```
|
||||
$ sudo dnf install vlock
|
||||
```
|
||||
|
||||
On RHEL, CentOS:
|
||||
|
||||
```
|
||||
$ sudo yum install vlock
|
||||
```
|
||||
|
||||
### Lock Virtual Console Sessions On Linux
|
||||
|
||||
The general syntax for Vlock is:
|
||||
|
||||
```
|
||||
vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
|
||||
```
|
||||
|
||||
Where,
|
||||
|
||||
* **a** – Lock all virtual console sessions,
|
||||
* **c** – Lock current virtual console session,
|
||||
* **n** – Switch to new empty console before locking all sessions,
|
||||
* **s** – Disable SysRq key mechanism,
|
||||
* **t** – Specify the timeout for the screensaver plugins,
|
||||
* **h** – Display help section,
|
||||
* **v** – Display version.
|
||||
|
||||
|
||||
|
||||
Let me show you some examples.
|
||||
|
||||
**1\. Lock current console session**
|
||||
|
||||
When running Vlock without any arguments, it locks the current console session (TTY) by default. To unlock the session, you need to enter either the current user’s password or the root password.
|
||||
|
||||
```
|
||||
$ vlock
|
||||
```
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)
|
||||
|
||||
You can also use **-c** flag to lock the current console session.
|
||||
|
||||
```
|
||||
$ vlock -c
|
||||
```
|
||||
|
||||
Please note that this command will only lock the current console. You can switch to other consoles by pressing **ALT+F2**. For more details about switching between TTYs, refer to the following guide.
|
||||
|
||||
Also, if the system has multiple users, the other users can still access their respective TTYs.
|
||||
|
||||
**2\. Lock all console sessions**
|
||||
|
||||
To lock all TTYs at the same time and also disable the virtual console switching functionality, run:
|
||||
|
||||
```
|
||||
$ vlock -a
|
||||
```
|
||||
|
||||
Again, to unlock the console sessions, just press ENTER key and type your current user’s password or root user password.
|
||||
|
||||
Please keep in mind that the **root user can always unlock any vlock session** at any time, unless disabled at compile time.
|
||||
|
||||
**3\. Switch to new virtual console before locking all consoles**
|
||||
|
||||
It is also possible to make Vlock switch to a new empty virtual console from the X session before locking all consoles. To do so, use the **-n** flag.
|
||||
|
||||
```
|
||||
$ vlock -n
|
||||
```
|
||||
|
||||
**4\. Disable SysRq mechanism**
|
||||
|
||||
As you may know, the Magic SysRq key mechanism allows users to perform some operations when the system freezes. So users could unlock the consoles using SysRq. In order to prevent this, pass the **-s** option to disable the SysRq mechanism. Please remember, this option only works when the **-a** option is given.
|
||||
|
||||
```
|
||||
$ vlock -sa
|
||||
```
|
||||
|
||||
For more options and usage details, refer to the help section or the man page.
|
||||
|
||||
```
|
||||
$ vlock -h
|
||||
$ man vlock
|
||||
```
|
||||
|
||||
Vlock prevents unauthorized users from gaining console access. If you’re looking for a simple console locking mechanism for your Linux machine, Vlock is worth checking out!
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
@ -1,3 +1,5 @@
|
||||
HankChow translating
|
||||
|
||||
5 tips for choosing the right open source database
|
||||
======
|
||||
When selecting a mission-critical application, you can't afford to make mistakes.
|
||||
|
99
translated/talk/20181008 3 areas to drive DevOps change.md
Normal file
99
translated/talk/20181008 3 areas to drive DevOps change.md
Normal file
@ -0,0 +1,99 @@
|
||||
推动 DevOps 变革的三个方面
|
||||
======
|
||||
推动大规模的组织变革是一个痛苦的过程。对于 DevOps 来说,尽管也有阵痛,但变革带来的价值则相当可观。
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/diversity-inclusion-transformation-change_20180927.png?itok=2E-g10hJ)
|
||||
|
||||
避免痛苦是一种强大的动力。一些研究表明,[植物也会通过遭受疼痛的过程][1]以采取措施来保护自己。我们人类有时也会刻意让自己受苦——在剧烈运动之后,身体可能会发生酸痛,但我们仍然坚持运动。那是因为当人认为整个过程利大于弊时,几乎可以忍受任何事情。
|
||||
|
||||
推动大规模的组织变革的过程确实是痛苦的。有人可能会因难以改变价值观和行为而感到痛苦,有人可能会因难以带领团队而感到痛苦,也有人可能会因难以开展工作而感到痛苦。但就 DevOps 而言,我可以说这些痛苦都是值得的。
|
||||
|
||||
我也曾经关注过一个团队耗费大量时间优化技术流程的过程,在这个过程中,团队逐渐将流程进行自动化改造,并最终获得了成功。
|
||||
|
||||
![Improvements after DevOps transformation][3]
|
||||
|
||||
图片来源:Lee Eason. CC BY-SA 4.0
|
||||
|
||||
这张图表充分表明了变革的价值。一家公司在我主导实行了 DevOps 转型之后,60 多个团队每月提交了超过 900 个发布请求。这些工作量的原耗时高达每个月 350 天,而这么多的工作量对于任何公司来说都是不可忽视的。除此以外,他们每月的部署次数从 100 次增加到了 9000 次,高危 bug 减少了 24%,工程师们更轻松了,<ruby>净推荐值<rt>Net Promoter Score</rt></ruby>(NPS)也提高了,而 NPS 提高反过来也让团队的 DevOps 转型更加顺利。正如 [Puppet 发布的 DevOps 报告][4]所预测的,用在技术流程改进上的投资可以在业务成果上明显地体现出来。
|
||||
|
||||
而 DevOps 主导者在推动变革时必须关注这三个方面:团队管理、团队文化和团队活力。
|
||||
|
||||
### 团队管理
|
||||
|
||||
组织架构越大,业务领导与一线员工之间的距离就会越大,当然发生误解的可能性也会越大。而且各种技术工具和实际应用都在以日新月异的速度变化,这就导致业务领导几乎不可能对 DevOps 或敏捷开发的转型方向有一个亲身的了解。
|
||||
|
||||
DevOps 主导者必须和管理层密切合作,在进行决策的时候给出相关的意见,以帮助他们做出正确的决策。
|
||||
|
||||
公司的管理层只知道 DevOps 会改进产品部署的方式,而并不了解其中的具体过程。假设你一直在协助某个软件团队实现部署自动化,当管理层听说部署失败(而失败总是会发生)时,就会想要了解事情的细节。如果管理层了解到执行部署的是软件团队而不是专门的发布管理团队,就可能会坚持要求所有生产发布都走传统的变更控制流程,以保证业务的正常运作。这样你就会失去信誉,团队也会更不愿意信任你并接受进一步的改变。
|
||||
|
||||
如果没有和管理层做好心理上的预期,一旦发生意外的生产事件,都会对你和管理层之间的信任造成难以消除的影响。所以,最好事先和管理层之间在各方面协调好,这会让你在后续的工作中避免很多麻烦。
|
||||
|
||||
对于和管理层之间的协调,这里有两条建议:
|
||||
|
||||
* 一是**重视所有规章制度**。如果管理层对合同、安全等各方面有任何疑问,你都可以向法务或安全负责人咨询,这样做可以避免犯下后果严重的错误。
|
||||
* 二是**将管理层的重点关注的方面输出为量化指标**。举个例子,如果公司的目标是减少客户流失,而你调查得出计划外的停机是造成客户流失的主要原因,那么就可以让团队对故障的<ruby>平均检测时间<rt>Mean Time To Detection</rt></ruby>(MTTD)和<ruby>平均解决时间<rt>Mean Time To Resolution</rt></ruby>(MTTR)实行重点优化。你可以使用这些关键指标来量化团队的工作成果,而管理层对此也可以有一个直观的了解。
|
||||
|
||||
|
||||
|
||||
### 团队文化
|
||||
|
||||
DevOps 是一种专注于持续改进代码、构建、部署和操作流程的文化,而团队文化代表了团队的价值观和行为。从本质上说,团队文化是要塑造团队成员的行为方式,而这并不是一件容易的事。
|
||||
|
||||
我推荐一本叫做《[披着狼皮的 CIO][5]》的书。另外,研究心理学、阅读《[Drive][6]》、观看 Daniel Pink 的 [TED 演讲][7]、阅读《[千面英雄][7]》、了解每个人的心路历程,以上这些都是你推动公司技术变革所应该尝试去做的事情。
|
||||
|
||||
理性的人大多都按照自己的价值观行事,然而团队通常没有让每个人都能达成共识的明确价值观。因此,你需要明确团队目前的价值观,包括这些价值观的形成过程以及它们如何导致了现在的状况。在讲述这个故事时,注意不要把这些价值观妖魔化:它们并非不道德或邪恶,人们只是在当时的认知和资源条件下尽力做到了最好。
|
||||
|
||||
同时需要向团队成员阐明,公司正在发生组织上的变化,团队的价值观也随之改变,最好也厘清整个过程中将会作出什么变化。例如,公司以往或许是由于资金有限,一直将节约成本的原则放在首位,在研发新产品的时候,基础架构团队不得不通过共享数据库集群或服务器,从而导致了服务之间的紧密耦合。然而随着时间的推移,这种做法会产生难以维护的混乱,即使是一个小小的变化也可能造成无法预料的后果。这就导致交付团队难以执行变更控制流程,进而令变更停滞不前。
|
||||
|
||||
如果这种状况持续多年,最终的结果将会是毫无创新、技术老旧、问题繁多以及产品品质低下,公司的发展到达了瓶颈,原本的价值观已经不再适用。所以,工作效率的优先级必须高于节约成本。
|
||||
|
||||
你必须强调团队的价值观。每当团队按照价值观取得了一定的工作进展,都应该对团队作出激励。在团队部署出现失败时,鼓励他们承担风险、继续学习,同时指导团队如何改进他们的工作并表示支持。长此下来,团队成员就会对你产生信任,并逐渐切合团队的价值观。
|
||||
|
||||
### 团队活力
|
||||
|
||||
你有没有在会议上听过类似这样的话?“在张三度假回来之前,我们无法对这件事情做出评估,他是唯一一个了解这部分代码的人”,或者是“我们完成不了这项任务,它需要网络工程团队的跨团队协作,而当初设置防火墙的人刚好请病假了”,又或者是“张三最了解这个系统,他说估算是 3,那就按 3 来吧”。那么当团队开始处理这项工作时,最有可能由谁来做?没错,是张三,而且以后也一直会是他。
|
||||
|
||||
我们一直都认为这就是软件开发的本质。但是如果我们不作出改变,这种循环就会一直保持下去。
|
||||
|
||||
熵的存在会让团队自发地变得混乱和缺乏活力,团队的成员和主导者的都有责任控制这个熵并保持团队的活力。DevOps、敏捷开发、上云、代码重构这些行为都会令熵增加速,这是因为转型让团队需要学习更多新技能和专业知识以开展新工作。
|
||||
|
||||
我们来看一个产品团队重构遗留代码的例子。像往常一样,他们在 AWS 上构建新的服务。而传统的系统则在数据中心部署,并由 IT 部门进行监控和备份。IT 部门会确保在基础架构的层面上满足应用的安全需求、进行灾难恢复测试、系统补丁、安装配置了入侵检测和防病毒代理,而且 IT 部门还保留了年度审计流程所需的变更控制记录。
|
||||
|
||||
产品团队经常会犯一个致命的错误,就是认为 IT 部门是需要突破的瓶颈。他们希望脱离已有的 IT 部门并使用公有云,但实际上是他们忽视了 IT 部门提供的关键服务。迁移到云上只是以不同的方式实现这些关键服务,因为 AWS 也是一个数据中心,团队即使使用 AWS 也需要完成 IT 运维任务。
|
||||
|
||||
实际上,产品团队在迁移到云时候也必须学习如何使用这些 IT 服务。因此,当产品团队开始重构遗留的代码并部署到云上时,也需要学习大量的技能才能正常运作。这些技能不会无师自通,必须自行学习或者聘用相关的人员,团队的主导者也必须积极进行管理。
|
||||
|
||||
在带领团队时,我找不到任何适合我的工具,因此我建立了 [Tekata.io][9] 这个项目。Tekata 免费而且容易使用,但相比工具,人员和流程更为重要。你需要持续学习,并持续关注团队的弱项,因为它们会影响团队的交付能力,而弥补这些弱项通常又意味着学习大量新知识,这两者之间存在着很好的协同效应。事实上,76% 的千禧一代认为个人发展机会是公司文化中[最重要的一环][10]。
|
||||
|
||||
### 效果就是最好的证明
|
||||
|
||||
DevOps 转型会改变团队的工作方式和文化,这需要得到管理层的支持和理解。同时,工作方式的改变意味着新技术的引入,所以在管理上也必须谨慎。但转型的最终结果是团队变得更高效、成员变得更积极、产品变得更优质,客户也变得更快乐。
|
||||
|
||||
免责声明:本文中的内容仅为 Lee Eason 的个人立场,不代表 Ipreo 或 IHS Markit。
|
||||
|
||||
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/10/tales-devops-transformation

作者:[Lee Eason][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/leeeason
[b]: https://github.com/lujun9972
[1]: https://link.springer.com/article/10.1007%2Fs00442-014-2995-6
[2]: /file/411061
[3]: https://opensource.com/sites/default/files/uploads/devops-delays.png "Improvements after DevOps transformation"
[4]: https://puppet.com/resources/whitepaper/state-of-devops-report
[5]: https://www.gartner.com/en/publications/wolf-cio
[6]: https://en.wikipedia.org/wiki/Drive:_The_Surprising_Truth_About_What_Motivates_Us
[7]: https://www.ted.com/talks/dan_pink_on_motivation?language=en#t-2094
[8]: https://en.wikipedia.org/wiki/The_Hero_with_a_Thousand_Faces
[9]: https://tekata.io/
[10]: https://www.execu-search.com/~/media/Resources/pdf/2017_Hiring_Outlook_eBook
[11]: https://allthingsopen.org/talk/tales-from-a-devops-transformation/
[12]: https://allthingsopen.org/

@ -0,0 +1,250 @@
在 Linux 上使用 Lutris 管理你的游戏
======
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-720x340.jpg)

让我们用游戏开始 2018 年的第一天吧!今天我们要讨论的是 **Lutris**,一个 Linux 上的开源游戏平台。你可以使用 Lutris 安装、移除、配置、启动和管理你的游戏。它可以在同一个界面中帮你管理 Linux 游戏、Windows 游戏、模拟器游戏和浏览器游戏。它还包含社区编写的安装脚本,使得游戏的安装过程更加简单。

Lutris 可以自动安装(或者让你一键安装)超过 20 个模拟器,覆盖了从上世纪七十年代至今的大多数游戏系统。目前支持的游戏系统如下:

* Linux 原生游戏
* Windows
* Steam(Linux 和 Windows)
* MS-DOS
* 街机
* Amiga 电脑
* Atari 8 位和 16 位计算机和游戏机
* 浏览器(Flash 或者 HTML5 游戏)
* Commodore 8 位计算机
* 基于 SCUMM 的游戏和其他点击式冒险游戏
* Magnavox Odyssey²、Videopac+
* Mattel Intellivision
* NEC PC-Engine Turbographx 16、Supergraphx、PC-FX
* Nintendo NES、SNES、Game Boy、Game Boy Advance、DS
* GameCube 和 Wii
* Sega Master System、Game Gear、Genesis、Dreamcast
* SNK Neo Geo、Neo Geo Pocket
* Sony PlayStation
* Sony PlayStation 2
* Sony PSP
* 像 Zork 这样的 Z-Machine 游戏
* 还有更多

### 安装 Lutris

就像 Steam 一样,Lutris 包含两部分:网站和客户端程序。在网站上你可以浏览可用的游戏、把喜欢的游戏添加到个人库中,以及使用安装链接安装它们。

首先,我们还是来安装客户端。它目前支持 Arch Linux、Debian、Fedora、Gentoo、openSUSE 和 Ubuntu。

对于 Arch Linux 和它的衍生版本,比如 Antergos、Manjaro Linux,可以在 [**AUR**][1] 中找到 Lutris。因此,你可以使用 AUR 助手来安装它。

使用 [**Pacaur**][2]:
```
pacaur -S lutris
```

使用 **[Packer][3]**:
```
packer -S lutris
```

使用 [**Yaourt**][4]:
```
yaourt -S lutris
```

使用 [**Yay**][5]:
```
yay -S lutris
```

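如果你不使用任何 AUR 助手,也可以手动构建。下面是一个基于 makepkg 的示意(假设系统中已经安装了 base-devel 和 git):

```
# 克隆 Lutris 的 AUR 构建脚本,然后构建并安装
git clone https://aur.archlinux.org/lutris.git
cd lutris
makepkg -si
```
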
**Debian:**

在 **Debian 9.0** 上以 **root** 身份运行以下命令:
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_9.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_9.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```

在 **Debian 8.0** 上以 **root** 身份运行以下命令:
```
echo 'deb http://download.opensuse.org/repositories/home:/strycore/Debian_8.0/ /' > /etc/apt/sources.list.d/lutris.list
wget -nv https://download.opensuse.org/repositories/home:strycore/Debian_8.0/Release.key -O Release.key
apt-key add - < Release.key
apt-get update
apt-get install lutris
```

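添加软件源并安装完成后,如果想确认 apt 识别到的 Lutris 版本和来源,可以运行下面这个可选的检查命令:

```
apt-cache policy lutris
```
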
在 **Fedora 27** 上以 **root** 身份运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_27/home:strycore.repo
dnf install lutris
```

在 **Fedora 26** 上以 **root** 身份运行以下命令:
```
dnf config-manager --add-repo https://download.opensuse.org/repositories/home:strycore/Fedora_26/home:strycore.repo
dnf install lutris
```

在 **openSUSE Tumbleweed** 上以 **root** 身份运行以下命令:
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Tumbleweed/home:strycore.repo
zypper refresh
zypper install lutris
```

在 **openSUSE Leap 42.3** 上以 **root** 身份运行以下命令:
```
zypper addrepo https://download.opensuse.org/repositories/home:strycore/openSUSE_Leap_42.3/home:strycore.repo
zypper refresh
zypper install lutris
```

**Ubuntu 17.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```

**Ubuntu 17.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_17.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_17.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```

**Ubuntu 16.10**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.10/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.10/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```

**Ubuntu 16.04**:
```
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/strycore/xUbuntu_16.04/ /' > /etc/apt/sources.list.d/lutris.list"
wget -nv https://download.opensuse.org/repositories/home:strycore/xUbuntu_16.04/Release.key -O Release.key
sudo apt-key add - < Release.key
sudo apt-get update
sudo apt-get install lutris
```

对于其他平台,请参考 [**Lutris 下载页面**][6]。

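安装完成后,你可以快速验证一下客户端是否就绪。这里假设你安装的版本提供了 `--version` 参数(仅为示意,具体取决于客户端版本):

```
lutris --version
```
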
### 使用 Lutris 管理你的游戏

安装完成后,从菜单或者应用启动器里打开 Lutris。首次启动时,Lutris 的默认界面像下面这样:

[![][7]][8]

**登录你的 Lutris.net 账号**

为了能同步你个人库中的游戏,下一步你需要在客户端中登录你的 Lutris.net 账号。如果还没有账号,先[**注册一个新账号**][9]。然后点击 **“连接到你的 Lutris.net 账号并同步你的库”**,连接到 Lutris 客户端。

输入你的账号信息然后点击 **继续**。

[![][7]][10]

现在你已经连接到你的 Lutris.net 账号了。

[![][7]][11]

**浏览游戏**

点击工具栏里的浏览图标(游戏控制器图标)可以搜索任何游戏。它会自动跳转到 Lutris 网站的游戏页面,你可以按字母顺序查看所有可用的游戏。Lutris 上已经有相当多的游戏,而且还在不断地增加。

[![][7]][12]

任选一个游戏,添加到你的库中。

[![][7]][13]

然后返回到你的 Lutris 客户端,点击 **菜单 -> Lutris -> 同步库**。现在你就可以在本地的 Lutris 客户端中看到库中的所有游戏了。

[![][7]][14]

如果你没有看到游戏,只需要重启一次客户端。

**安装游戏**

要安装游戏,只需要点击游戏,然后点击 **安装** 按钮。例如,我想在我的系统上安装 [**2048**][15]。就像你在下面的截图中看到的,它会要求选择一个安装的版本。由于它只有一个版本(即在线版),就会被自动选中。点击 **继续**。

[![][7]][16]

然后点击 **安装**:

[![][7]][17]

安装完成之后,你可以启动新安装的游戏,或是关闭这个窗口,继续从你的库中安装其他游戏。

**导入 Steam 库**

你也可以导入你的 Steam 库。点击你的头像处的 **“通过 Steam 登录”** 按钮,接下来你会被重定向到 Steam 页面,输入你的账号信息。验证通过后,你的 Steam 账号就会关联到 Lutris 账号。请注意,为了同步库中的游戏,这里你的 Steam 账号将被设为公开;同步完成之后,你可以将其重新设为私密状态。

**手动添加游戏**

Lutris 也提供手动添加游戏的选项。点击工具栏中的 + 号即可。

[![][7]][18]

在下一个窗口中,输入游戏名,并在游戏信息栏中选择一个运行器。运行器是指 Linux 上类似 Wine、Steam 之类的程序,它们可以帮助你启动游戏。你可以从 **菜单 -> 管理运行器** 中安装运行器。

[![][7]][19]

然后在下一栏中选择可执行文件或者 ISO,最后点击保存。好消息是,你可以添加同一个游戏的多个版本。

**移除游戏**

要移除任何已安装的游戏,只需在 Lutris 客户端的本地库中点击对应的游戏,选择 **移除**,然后点击 **应用**。

[![][7]][20]

Lutris 用起来就像 Steam:在网站上把游戏添加到你的库中,然后在客户端中安装并运行它们。

各位,这就是今天的全部内容了。我们将会在今年发表更多优秀而实用的文章,敬请关注!

干杯!

:)

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/manage-games-using-lutris-linux/

作者:[SK][a]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/lutris/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/install-yaourt-arch-linux/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://lutris.net/downloads/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-1-1.png ()
[9]:https://lutris.net/user/register/
[10]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-2.png ()
[11]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-3.png ()
[12]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-15-1.png ()
[13]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-16.png ()
[14]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-6.png ()
[15]:https://www.ostechnix.com/let-us-play-2048-game-terminal/
[16]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-12.png ()
[17]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-13.png ()
[18]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-18-1.png ()
[19]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-19.png ()
[20]:http://www.ostechnix.com/wp-content/uploads/2018/01/Lutris-14-1.png ()

@ -0,0 +1,127 @@
如何在 Linux 上锁定虚拟控制台会话
======

![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-720x340.png)

当你在共享系统上工作时,你可能不希望其他用户在你的控制台中偷看你在做什么。如果是这样,我知道有个简单的技巧可以锁定自己的会话,同时仍然允许其他用户在其他虚拟控制台上使用该系统。这要感谢 **Vlock**(**V**irtual Console **lock**),它是一个命令行程序,用于锁定 Linux 控制台上的一个或多个会话。如有必要,你可以锁定整个控制台并完全禁用虚拟控制台切换功能。Vlock 对于有多个用户访问控制台的共享 Linux 系统特别有用。

### 安装 Vlock

在基于 Arch 的系统上,Vlock 已被并入系统默认预装的 **kbd** 包中,因此你无需再单独安装它。

在 Debian、Ubuntu、Linux Mint 上,运行以下命令来安装 Vlock:

```
$ sudo apt-get install vlock
```

在 Fedora 上:

```
$ sudo dnf install vlock
```

在 RHEL、CentOS 上:

```
$ sudo yum install vlock
```

### 在 Linux 上锁定虚拟控制台会话

Vlock 的一般语法是:

```
vlock [ -acnshv ] [ -t <timeout> ] [ plugins... ]
```

这里:

* **a** – 锁定所有虚拟控制台会话,
* **c** – 锁定当前虚拟控制台会话,
* **n** – 在锁定所有会话之前切换到新的空控制台,
* **s** – 禁用 SysRq 键机制,
* **t** – 指定屏保插件的超时时间,
* **h** – 显示帮助,
* **v** – 显示版本。

让我举几个例子。

**1\. 锁定当前控制台会话**

在没有任何参数的情况下运行 Vlock 时,它默认锁定当前控制台会话(TTY)。要解锁会话,你需要输入当前用户的密码或 root 密码。

```
$ vlock
```

![](https://www.ostechnix.com/wp-content/uploads/2018/10/vlock-1-1.gif)

你还可以使用 **-c** 标志来锁定当前的控制台会话。

```
$ vlock -c
```

请注意,此命令仅锁定当前控制台。你可以按 **ALT+F2** 切换到其他控制台。有关在 TTY 之间切换的更多详细信息,请参阅以下指南。

此外,如果系统有多个用户,则其他用户仍可以访问其各自的 TTY。

**2\. 锁定所有控制台会话**

要同时锁定所有 TTY 并禁用虚拟控制台切换功能,请运行:

```
$ vlock -a
```

同样,要解锁控制台会话,只需按下回车键并输入当前用户的密码或 root 用户密码。

请记住,**root 用户可以随时解锁任何 vlock 会话**,除非在编译时禁用了该功能。

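另外,锁定所有 TTY 会影响到其他已登录的用户。作为一个简单的预防措施,你可以在运行 `vlock -a` 之前先用 `w` 命令看看当前有哪些用户在线:

```
$ w
```
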
**3\. 在锁定所有控制台之前切换到新的虚拟控制台**

在锁定所有控制台之前,还可以让 Vlock 从 X 会话切换到一个新的空虚拟控制台。为此,请使用 **-n** 标志。

```
$ vlock -n
```

**4\. 禁用 SysRq 机制**

你也许知道,魔术 SysRq 键机制允许用户在系统死机时执行某些操作,因此用户有可能利用 SysRq 解锁控制台。为了防止这种情况,请传递 **-s** 选项以禁用 SysRq 机制。请记住,这个选项只在和 **-a** 选项一起使用时才有效。

```
$ vlock -sa
```

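前面语法中还出现了 **-t** 选项,它用于为屏保插件指定超时时间。下面是一个示意用法(这里假设超时以秒为单位,实际效果取决于编译时启用了哪些屏保插件):

```
$ vlock -t 300 -a
```
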
有关更多选项及其用法,请参阅帮助或手册页。

```
$ vlock -h
$ man vlock
```

Vlock 可以防止未经授权的用户获得控制台访问权限。如果你在为 Linux 寻找一个简单的控制台锁定机制,那么 Vlock 值得一试!

就是这些了。希望这篇文章对你有用。还有更多好东西,敬请关注!

干杯!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-lock-virtual-console-sessions-on-linux/

作者:[SK][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972