Mirror of https://github.com/LCTT/TranslateProject.git

Merge remote-tracking branch 'LCTT/master'

Commit: 781f74b9e1
@@ -1,3 +1,4 @@
translated by lixinyuxx
6 common questions about agile development practices for teams
======

@@ -1,3 +1,4 @@
translated by lixinyuxx
5 guiding principles you should know before you design a microservice
======
@@ -0,0 +1,64 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (5 resolutions for open source project maintainers)
[#]: via: (https://opensource.com/article/18/12/resolutions-open-source-project-maintainers)
[#]: author: (Ben Cotton https://opensource.com/users/bcotton)

5 resolutions for open source project maintainers
======
No matter how you say it, good communication is essential to strong open source communities.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/spark_sparkler_fire_new_year_idea.png?itok=rnyMpVP8)

I'm generally not big on New Year's resolutions. I have no problem with self-improvement, of course, but I tend to anchor around other parts of the calendar. Even so, there's something about taking down this year's free calendar and replacing it with next year's that inspires some introspection.

In 2017, I resolved to not share articles on social media until I'd read them. I've kept to that pretty well, and I'd like to think it has made me a better citizen of the internet. For 2019, I'm thinking about resolutions to make me a better open source software maintainer.

Here are some resolutions I'll try to stick to on the projects where I'm a maintainer or co-maintainer.

### 1\. Include a code of conduct

Jono Bacon included "not enforcing the code of conduct" in his article "[7 mistakes you're probably making][1]." Of course, to enforce a code of conduct, you must first have a code of conduct. I plan on defaulting to the [Contributor Covenant][2], but you can use whatever you like. As with licenses, it's probably best to use one that's already written instead of writing your own. But the important thing is to find something that defines how you want your community to behave, whatever that looks like. Once it's written down and enforced, people can decide for themselves if it looks like the kind of community they want to be a part of.

### 2\. Make the license clear and specific

You know what really stinks? Unclear licenses. "This software is licensed under the GPL" with no further text doesn't tell me much. Which version of the [GPL][3]? Do I get to pick? For non-code portions of a project, "licensed under a Creative Commons license" is even worse. I love the [Creative Commons licenses][4], but there are several different licenses with significantly different rights and obligations. So, I will make it very clear which variant and version of a license applies to my projects. I will include the full text of the license in the repo and a concise note in the other files.
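What might that "concise note in the other files" look like? Below is a hedged sketch using the SPDX identifier convention; the license identifier, year, and project name are placeholders rather than anything this article prescribes.

```
#!/bin/sh
# SPDX-License-Identifier: GPL-3.0-or-later
# Copyright (C) 2019 Example Project contributors
#
# The full license text lives in the LICENSE file at the root of this repository.
echo "hello from a clearly licensed script"
```

A one-line identifier in every file, plus the complete text in one well-known place, answers both questions above: which license, and which version.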
Sort of related to this is using an [OSI][5]-approved license. It's tempting to come up with a new license that says exactly what you want it to say, but good luck if you ever need to enforce it. Will it hold up? Will the people using your project understand it?

### 3\. Triage bug reports and questions quickly

Few things in technology scale as poorly as open source maintainers. Even on small projects, it can be hard to find the time to answer every question and fix every bug. But that doesn't mean I can't at least acknowledge the person. It doesn't have to be a multi-paragraph reply. Even just labeling the GitHub issue shows that I saw it. Maybe I'll get to it right away. Maybe I'll get to it a year later. But it's important for the community to see that, yes, there is still someone here.
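As a rough sketch of how little that acknowledgment has to be, here is a hypothetical example using GitHub's `gh` command-line tool; the tool, issue number, and label name are my assumptions, not something the article calls for.

```
# Label the new report so the reporter and the community can see it was noticed,
# then leave a one-line acknowledgment. Issue number and label name are made up.
gh issue edit 42 --add-label "needs-triage"
gh issue comment 42 --body "Thanks for the report! Triaged; I'll dig in as soon as I can."
```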
### 4\. Don't push features or bug fixes without accompanying documentation

For as much as my open source contributions over the years have revolved around documentation, my projects don't reflect the importance I put on it. There aren't many commits I can push that don't require some form of documentation. New features should obviously be documented at (or before!) the time they're committed. But even bug fixes should get an entry in the release notes. If nothing else, a push is a good opportunity to also make a commit to improving the docs.

### 5\. Make it clear when I'm abandoning a project

I'm really bad at saying "no" to things. I told the editors I'd write one or two articles for [Opensource.com][6] and here I am almost 60 articles later. Oops. But at some point, the things that once held my interest no longer do. Maybe the project is unnecessary because its functionality got absorbed into a larger project. Maybe I'm just tired of it. But it's unfair to the community (and potentially dangerous, as the recent [event-stream malware injection][7] showed) to leave a project in limbo. Maintainers have the right to walk away whenever and for whatever reason, but it should be clear that they have.

Whether you're an open source maintainer or contributor, if you know other resolutions project maintainers should make, please share them in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/resolutions-open-source-project-maintainers

作者:[Ben Cotton][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/bcotton
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/17/8/mistakes-open-source-avoid
[2]: https://www.contributor-covenant.org/
[3]: https://opensource.org/licenses/gpl-license
[4]: https://creativecommons.org/share-your-work/licensing-types-examples/
[5]: https://opensource.org/
[6]: http://Opensource.com
[7]: https://arstechnica.com/information-technology/2018/11/hacker-backdoors-widely-used-open-source-software-to-steal-bitcoin/
@@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (8 tips to help non-techies move to Linux)
[#]: via: (https://opensource.com/article/18/12/help-non-techies)
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)

8 tips to help non-techies move to Linux
======
Help your friends dump their proprietary operating systems and make the move to open source.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_team_community_group.png?itok=Nc_lTsUK)

Back in 2016, I took down the shingle for my technology coaching business. Permanently. Or so I thought.

Over the last 10 months, a handful of friends and acquaintances have pulled me back into that realm. How? With their desire to dump That Other Operating System™ and move to Linux.

This has been an interesting experience, in no small part because most of the people aren't at all technical. They know how to use a computer to do what they need to do. Beyond that, they're not interested in delving deeper. That said, they were (and are) attracted to Linux for a number of reasons—probably because I constantly prattle on about it.

While bringing them to the Linux side of the computing world, I learned a few things about helping non-techies move to Linux. If someone asks you to help them make the jump to Linux, these eight tips can help you.

### 1\. Be honest about Linux.

Linux is great. It's not perfect, though. It can be perplexing and sometimes frustrating for new users. It's best to prepare the person you're helping with a short pep talk.

What should you talk about? Briefly explain what Linux is and how it differs from other operating systems. Explain what you can and _can't_ do with it. Let them know some of the pain points they might encounter when using Linux daily.

If you take a bit of time to [ease them into][1] Linux and open source, the switch won't be as jarring.

### 2\. It's not about you.

It's easy to fall into what I call the _power user fallacy_: the idea that everyone uses technology the same way you do. That's rarely, if ever, the case.

This isn't about you. It's not about your needs or how you use a computer. It's about the needs and intentions of the person you're helping. Their needs, especially if they're not particularly technical, will be different from yours.

It doesn't matter if Ubuntu or Elementary or Manjaro aren't your distros of choice. It doesn't matter if you turn your nose up at desktop environments like GNOME, KDE, or Pantheon in favor of i3 or Ratpoison. The person you're helping might think otherwise.

Put your needs and prejudices aside and help them find the right Linux distribution for them. Find out what they use their computer for and tailor your recommendations for a distribution or three based on that.

### 3\. Not everyone's a techie.

And not everyone wants to be. None of the people I've helped move to Linux in the last 10 months has any interest in compiling kernels or code, or in editing and tweaking configuration files. Most of them will never crack open a terminal window. I don't expect them to be interested in doing any of that in the future, either.

Guess what? There's nothing wrong with that. Maybe they won't _get the most out of_ Linux (whatever that means) by not embracing their inner geeks. Not everyone will want to take on the challenges of, say, installing and configuring Slackware or Arch. They need something that will work out of the box.

### 4\. Take stock of their hardware.

In an ideal world, we'd all have tricked-out, high-powered laptops or desktops with everything maxed out. Sadly, that world doesn't exist.

The person you're helping move to Linux probably doesn't live in that world, either. They may have slightly (maybe more than slightly) older hardware that they're comfortable with and that works for them. Hardware that they might not be able to afford to upgrade or replace.

Also, remember that not everyone needs a system for heavy-duty development or gaming or audio and video production. They just need a computer for browsing the web, editing photos, running personal productivity software, and the like.

One person I recently helped adopt Linux had an Acer Aspire 1 laptop with 4GB of RAM and a 64GB SSD. That helped inform my recommendations, which revolved around a few lightweight Linux distributions.

### 5\. Help them test-drive some distros.

The [DistroWatch][2] database contains close to 900 Linux distributions, so you should be able to find three to five of them to recommend. Make a short list of the distributions you think would be a good fit for them. Also, point them to reviews so they can get other perspectives on those distributions.

When it comes time to take those Linux distributions for a spin, don't just hand someone a bunch of flash drives and walk away. You might be surprised to learn that most people have never run a live Linux distribution or installed an operating system. Any operating system. Beyond plugging the flash drives in, they probably won't know what to do.

Instead, show them how to [create bootable flash drives][3] and set up their computer's BIOS to start from those drives. Then, let them spend some time running the distros off the flash drives. That will give them a rudimentary feel for the distros and their desktop environments' quirks.
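The link above covers a graphical tool (Etcher); if you're more comfortable on the command line, the same job is a short sequence like the sketch below. The ISO path and the device name `/dev/sdX` are placeholders, and `dd` will overwrite whatever device you point it at, so double-check with `lsblk` first.

```
# Identify the flash drive first; /dev/sdX below is a placeholder, not a real device.
lsblk

# Write the downloaded image to the drive and flush caches before unplugging it.
sudo dd if=~/Downloads/distro.iso of=/dev/sdX bs=4M status=progress conv=fsync
```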
### 6\. Walk them through an installation.

Running a live session with a flash drive tells someone only so much. They need to work with a Linux distribution for two or three weeks to really form an opinion of it and to understand its quirks and strengths.

There's a myth that Linux is difficult to install. That might have been true back in the mid-1990s, but today most Linux distributions are easy to install. You follow a few graphical prompts and let the software do the rest.

For someone who's never installed any operating system, installing Linux can be a bit daunting. They might not know what to choose when, say, they're asked which filesystem to use or whether or not to encrypt their hard disk.

Guide them through at least one installation. While you should let them do most of the work, be there to answer questions.

### 7\. Be prepared to do a couple of installs.

As I mentioned a paragraph or two ago, using a Linux distribution for two or three weeks gives someone ample time to regularly interact with it and see if it can be their daily driver. It often works out. Sometimes, though, it doesn't.

Remember the person with the Acer Aspire 1 laptop? She thought Xubuntu was the right distribution for her. After a few weeks of working with it, that wasn't the case. There wasn't a technical reason—Xubuntu ran smoothly on her laptop. It was just a matter of feel. Instead, she switched back to the first distro she test-drove: [MX Linux][4]. She's been happily using MX ever since.

### 8\. Teach them to fish.

You can't always be there to be the guiding hand. Or to be the mechanic or plumber who can fix any problems the person encounters. You have a life, too.

Once they've settled on a Linux distribution, explain that you'll offer a helping hand for two or three weeks. After that, they're on their own. Don't completely abandon them. Be around to help with big problems, but let them know they'll have to learn to do things for themselves.

Introduce them to websites that can help them solve their problems. Point them to useful articles and books. Doing that will help make them more confident and competent users of Linux—and of computers and technology in general.

### Final thoughts

Helping someone move to Linux from another, more familiar operating system can be a challenge—a challenge for them and for you. If you take it slowly and follow the advice in this article, you can make the process smoother.

Do you have other tips for helping a non-techie switch to Linux? Feel free to share them by leaving a comment.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/help-non-techies

作者:[Scott Nesbitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/scottnesbitt
[b]: https://github.com/lujun9972
[1]: https://opensource.com/business/15/2/ato2014-lightning-talks-scott-nesbitt
[2]: https://distrowatch.com
[3]: https://opensource.com/article/18/7/getting-started-etcherio
[4]: https://opensource.com/article/18/2/mx-linux-17-distro-beginners
146 sources/talk/20181218 The Rise and Demise of RSS.md Normal file

@@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Rise and Demise of RSS)
[#]: via: (https://twobithistory.org/2018/12/18/rss.html)
[#]: author: (Two-Bit History https://twobithistory.org)

The Rise and Demise of RSS
======

This post was originally published on [September 16th, 2018][1]. What follows is a revision that includes additional information gleaned from interviews with Ramanathan Guha, Ian Davis, Dan Libby, and Kevin Werbach.
About a decade ago, the average internet user might well have heard of RSS. Really Simple Syndication, or Rich Site Summary—what the acronym stands for depends on who you ask—is a standard that websites and podcasts can use to offer a feed of content to their users, one easily understood by lots of different computer programs. Today, though RSS continues to power many applications on the web, it has become, for most people, an obscure technology.

The story of how this happened is really two stories. The first is a story about a broad vision for the web’s future that never quite came to fruition. The second is a story about how a collaborative effort to improve a popular standard devolved into one of the most contentious forks in the history of open-source software development.

In the late 1990s, in the go-go years between Netscape’s IPO and the Dot-com crash, everyone could see that the web was going to be an even bigger deal than it already was, even if they didn’t know exactly how it was going to get there. One theory was that the web was about to be revolutionized by syndication. The web, originally built to enable a simple transaction between two parties—a client fetching a document from a single host server—would be broken open by new standards that could be used to repackage and redistribute entire websites through a variety of channels. Kevin Werbach, writing for Release 1.0, a newsletter influential among investors in the 1990s, predicted that syndication “would evolve into the core model for the Internet economy, allowing businesses and individuals to retain control over their online personae while enjoying the benefits of massive scale and scope.”

He invited his readers to imagine a future in which fencing aficionados, rather than going directly to an “online sporting goods site” or “fencing equipment retailer,” could buy a new épée directly through e-commerce widgets embedded into their favorite website about fencing. Just like in the television world, where big networks syndicate their shows to smaller local stations, syndication on the web would allow businesses and publications to reach consumers through a multitude of intermediary sites. This would mean, as a corollary, that consumers would gain significant control over where and how they interacted with any given business or publication on the web.

RSS was one of the standards that promised to deliver this syndicated future. To Werbach, RSS was “the leading example of a lightweight syndication protocol.” Another contemporaneous article called RSS the first protocol to realize the potential of XML. It was going to be a way for both users and content aggregators to create their own customized channels out of everything the web had to offer. And yet, two decades later, after the rise of social media and Google’s decision to shut down Google Reader, RSS appears to be [a slowly dying technology][2], now used chiefly by podcasters, programmers with tech blogs, and the occasional journalist. Though of course some people really do still rely on RSS readers, stubbornly adding an RSS feed to your blog, even in 2018, is a political statement. That little tangerine bubble has become a wistful symbol of defiance against a centralized web increasingly controlled by a handful of corporations, a web that hardly resembles the syndicated web of Werbach’s imagining.

The future once looked so bright for RSS. What happened? Was its downfall inevitable, or was it precipitated by the bitter infighting that thwarted the development of a single RSS standard?
### Muddied Water

RSS was invented twice. This meant it never had an obvious owner, a state of affairs that spawned endless debate and acrimony. But it also suggests that RSS was an important idea whose time had come.

In 1998, Netscape was struggling to envision a future for itself. Its flagship product, the Netscape Navigator web browser—once preferred by over 80 percent of web users—was quickly losing ground to Microsoft’s Internet Explorer. So Netscape decided to compete in a new arena. In May, a team was brought together to start work on what was known internally as “Project 60.” Two months later, Netscape announced “My Netscape,” a web portal that would fight it out with other portals like Yahoo, MSN, and Excite.

The following year, in March, Netscape announced an addition to the My Netscape portal called the “My Netscape Network.” My Netscape users could now customize their My Netscape page so that it contained “channels” featuring the most recent headlines from sites around the web. As long as your favorite website published a special file in a format dictated by Netscape, you could add that website to your My Netscape page, typically by clicking an “Add Channel” button that participating websites were supposed to add to their interfaces. A little box containing a list of linked headlines would then appear.

![A My Netscape Network Channel][3] A My Netscape Network channel for Mozilla.org, as it might look to users about to add it to their My Netscape page.

The special file that participating websites had to publish was an RSS file. In the My Netscape Network announcement, Netscape explained that RSS stood for “RDF Site Summary.” This was somewhat of a misnomer. RDF, or the Resource Description Framework, is basically a grammar for describing certain properties of arbitrary resources. (See [my article about the Semantic Web][4] if that sounds really exciting to you.) In 1999, a draft specification for RDF was being considered by the World Wide Web Consortium (W3C), the web’s main standards body. Though RSS was supposed to be based on RDF, the example RSS document Netscape actually released didn’t use any RDF tags at all. In a document that accompanied the Netscape RSS specification, Dan Libby, one of the specification’s authors, explained that “in this release of MNN, Netscape has intentionally limited the complexity of the RSS format.” The specification was given the 0.90 version number, the idea being that subsequent versions would bring RSS more in line with the W3C’s XML specification and the evolving draft of the RDF specification.

RSS had been created by Libby and two other Netscape employees, Eckart Walther and Ramanathan Guha. According to an email to me from Guha, he and Walther cooked up RSS in the beginning with some input from Libby; after AOL bought Netscape in 1998, he and Walther left and it became Libby’s responsibility. Before Netscape, Guha had worked for Apple, where he came up with something called the Meta Content Framework. MCF was a format for representing metadata about anything from web pages to local files. Guha demonstrated its power by developing an application called [HotSauce][5] that visualized relationships between files as a network of nodes suspended in 3D space. Immediately after leaving Apple for Netscape, Guha worked with a Netscape consultant named Tim Bray, who in a post on his blog said that he and Guha eventually produced an XML-based version of MCF that in turn became the foundation for the W3C’s RDF draft. It’s no surprise, then, that Guha, Walther, and Libby were keen to build on Guha’s prior work and incorporate RDF into RSS. But Libby later wrote that the original vision for an RDF-based RSS was pared back because of time constraints and the perception that RDF was “‘too complex’ for the ‘average user.’”
While Netscape was trying to win eyeballs in what became known as the “portal wars,” elsewhere on the web a new phenomenon known as “weblogging” was being pioneered. One of these pioneers was Dave Winer, CEO of a company called UserLand Software, which developed early content management systems that made blogging accessible to people without deep technical fluency. Winer ran his own blog, [Scripting News][6], which today is one of the oldest blogs on the internet. More than a year before Netscape announced My Netscape Network, on December 15, 1997, Winer published a post announcing that the blog would now be available in XML as well as HTML.

Dave Winer’s XML format became known as the Scripting News format. It was supposedly similar to Microsoft’s Channel Definition Format (a “push technology” standard submitted to the W3C in March, 1997), but I haven’t been able to find a file in the original format to verify that claim. Like Netscape’s RSS, it structured the content of Winer’s blog so that it could be understood by other software applications. When Netscape released RSS 0.90, Winer and UserLand Software began to support both formats. But Winer believed that Netscape’s format was “woefully inadequate” and “missing the key thing web writers and readers need.” It could only represent a list of links, whereas the Scripting News format could represent a series of paragraphs, each containing one or more links.

In June 1999, two months after Netscape’s My Netscape Network announcement, Winer introduced a new version of the Scripting News format, called ScriptingNews 2.0b1. Winer claimed that he decided to move ahead with his own format only after trying but failing to get anyone at Netscape to care about RSS 0.90’s deficiencies. The new version of the Scripting News format added several items to the `<header>` element that brought the Scripting News format to parity with RSS. But the two formats continued to differ in that the Scripting News format, which Winer nicknamed the “fat” syndication format, could include entire paragraphs and not just links.

Netscape got around to releasing RSS 0.91 the very next month. The updated specification was a major about-face. RSS no longer stood for “RDF Site Summary”; it now stood for “Rich Site Summary.” All the RDF—and there was almost none anyway—was stripped out. Many of the Scripting News tags were incorporated. In the text of the new specification, Libby explained:

> RDF references removed. RSS was originally conceived as a metadata format providing a summary of a website. Two things have become clear: the first is that providers want more of a syndication format than a metadata format. The structure of an RDF file is very precise and must conform to the RDF data model in order to be valid. This is not easily human-understandable and can make it difficult to create useful RDF files. The second is that few tools are available for RDF generation, validation and processing. For these reasons, we have decided to go with a standard XML approach.

Winer was enormously pleased with RSS 0.91, calling it “even better than I thought it would be.” UserLand Software adopted it as a replacement for the existing ScriptingNews 2.0b1 format. For a while, it seemed that RSS finally had a single authoritative specification.
### The Great Fork

A year later, the RSS 0.91 specification had become woefully inadequate. There were all sorts of things people were trying to do with RSS that the specification did not address. There were other parts of the specification that seemed unnecessarily constraining—each RSS channel could only contain a maximum of 15 items, for example.

By that point, RSS had been adopted by several more organizations. Other than Netscape, which seems to have lost interest after RSS 0.91, the big players were Dave Winer’s UserLand Software; O’Reilly Net, which ran an RSS aggregator called Meerkat; and Moreover.com, which also ran an RSS aggregator focused on news. Via mailing list, representatives from these organizations and others regularly discussed how to improve on RSS 0.91. But there were deep disagreements about what those improvements should look like.

The mailing list in which most of the discussion occurred was called the Syndication mailing list. [An archive of the Syndication mailing list][7] is still available. It is an amazing historical resource. It provides a moment-by-moment account of how those deep disagreements eventually led to a political rupture of the RSS community.

On one side of the coming rupture was Winer. Winer was impatient to evolve RSS, but he wanted to change it only in relatively conservative ways. In June, 2000, he published his own RSS 0.91 specification on the UserLand website, meant to be a starting point for further development of RSS. It made no significant changes to the 0.91 specification published by Netscape. Winer claimed in a blog post that accompanied his specification that it was only a “cleanup” documenting how RSS was actually being used in the wild, which was needed because the Netscape specification was no longer being maintained. In the same post, he argued that RSS had succeeded so far because it was simple, and that by adding namespaces (a way to explicitly distinguish between different RSS vocabularies) or RDF back to the format—some had suggested this be done in the Syndication mailing list—it “would become vastly more complex, and IMHO, at the content provider level, would buy us almost nothing for the added complexity.” In a message to the Syndication mailing list sent around the same time, Winer suggested that these issues were important enough that they might lead him to create a fork:

> I’m still pondering how to move RSS forward. I definitely want ICE-like stuff in RSS2, publish and subscribe is at the top of my list, but I am going to fight tooth and nail for simplicity. I love optional elements. I don’t want to go down the namespaces and schema road, or try to make it a dialect of RDF. I understand other people want to do this, and therefore I guess we’re going to get a fork. I have my own opinion about where the other fork will lead, but I’ll keep those to myself for the moment at least.

Arrayed against Winer were several other people, including Rael Dornfest of O’Reilly, Ian Davis (responsible for a search startup called Calaba), and a precocious, 14-year-old Aaron Swartz. This is the same Aaron Swartz that would later co-found Reddit and become famous for his hacktivism. (In 2000, according to an email to me from Davis, his dad often accompanied him to technology meetups.) Dornfest, Davis, and Swartz all thought that RSS needed namespaces in order to accommodate the many different things everyone wanted to do with it. On another mailing list hosted by O’Reilly, Davis proposed a namespace-based module system, writing that such a system would “make RSS as extensible as we like rather than packing in new features that over-complicate the spec.” The “namespace camp” believed that RSS would soon be used for much more than the syndication of blog posts, so namespaces, rather than being a complication, were the only way to keep RSS from becoming unmanageable as it supported more and more use cases.

At the root of this disagreement about namespaces was a deeper disagreement about what RSS was even for. Winer had invented his Scripting News format to syndicate the posts he wrote for his blog. Netscape had released RSS as “RDF Site Summary” because it was a way of recreating a site in miniature within the My Netscape online portal. Some people felt that Netscape’s original vision should be honored. Writing to the Syndication mailing list, Davis explained his view that RSS was “originally conceived as a way of building mini sitemaps,” and that now he and others wanted to expand RSS “to encompass more types of information than simple news headlines and to cater for the new uses of RSS that have emerged over the last 12 months.” This was a sensible point to make because the goal of the Netscape RSS project in the beginning was even loftier than Davis suggests: Guha told me that he wanted to create a technology that could support not just website channels but feeds about arbitrary entities such as, for example, Madonna. Further developing RSS so that it could do this would indeed be in keeping with that original motivation. But Davis’ argument also overstates the degree to which there was a unified vision at Netscape by the time the RSS specification was published. According to Libby, who I talked to via email, there was eventually contention between a “Let’s Build the Semantic Web” group and “Let’s Make This Simple for People to Author” group even within Netscape.

For his part, Winer argued that Netscape’s original goals were irrelevant because his Scripting News format was in fact the first RSS and it had been meant for a very different purpose. Given that the people most involved in the development of RSS disagreed about who had created RSS and why, a fork seems to have been inevitable.
The fork happened after Dornfest announced a proposed RSS 1.0 specification and formed the RSS-DEV Working Group—which would include Davis, Swartz, and several others but not Winer—to get it ready for publication. In the proposed specification, RSS once again stood for “RDF Site Summary,” because RDF had been added back in to represent metadata properties of certain RSS elements. The specification acknowledged Winer by name, giving him credit for popularizing RSS through his “evangelism.” But it also argued that RSS could not be improved in the way that Winer was advocating. Just adding more elements to RSS without providing for extensibility with a module system would “sacrifice scalability.” The specification went on to define a module system for RSS based on XML namespaces.

Winer felt that it was “unfair” that the RSS-DEV Working Group had arrogated the “RSS 1.0” name for themselves. In another mailing list about decentralization, he wrote that he had “recently had a standard stolen by a big name,” presumably meaning O’Reilly, which had convened the RSS-DEV Working Group. Other members of the Syndication mailing list also felt that the RSS-DEV Working Group should not have used the name “RSS” without unanimous agreement from the community on how to move RSS forward. But the Working Group stuck with the name. Dan Brickley, another member of the RSS-DEV Working Group, defended this decision by arguing that “RSS 1.0 as proposed is solidly grounded in the original RSS vision, which itself had a long heritage going back to MCF (an RDF precursor) and related specs (CDF etc).” He essentially felt that the RSS 1.0 effort had a better claim to the RSS name than Winer did, since RDF had originally been a part of RSS. The RSS-DEV Working Group published a final version of their specification in December. That same month, Winer published his own improvement to RSS 0.91, which he called RSS 0.92, on UserLand’s website. RSS 0.92 made several small optional improvements to RSS, among which was the addition of the `<enclosure>` tag soon used by podcasters everywhere. RSS had officially forked.

The fork might have been avoided if a better effort had been made to include Winer in the RSS-DEV Working Group. He obviously belonged there. He was a prominent contributor to the Syndication mailing list and responsible for much of RSS’ popularity, as the members of the Working Group themselves acknowledged. But, as Davis wrote in an email to me, Winer “wanted control and wanted RSS to be his legacy so was reluctant to work with us.” Tim O’Reilly, founder and CEO of O’Reilly, explained in a UserLand discussion group in September, 2000 that Winer basically refused to participate:

> A group of people involved in RSS got together to start thinking about its future evolution. Dave was part of the group. When the consensus of the group turned in a direction he didn’t like, Dave stopped participating, and characterized it as a plot by O’Reilly to take over RSS from him, despite the fact that Rael Dornfest of O’Reilly was only one of about a dozen authors of the proposed RSS 1.0 spec, and that many of those who were part of its development had at least as long a history with RSS as Dave had.

To this, Winer said:

> I met with Dale [Dougherty] two weeks before the announcement, and he didn’t say anything about it being called RSS 1.0. I spoke on the phone with Rael the Friday before it was announced, again he didn’t say that they were calling it RSS 1.0. The first I found out about it was when it was publicly announced.
>
> Let me ask you a straight question. If it turns out that the plan to call the new spec “RSS 1.0” was done in private, without any heads-up or consultation, or for a chance for the Syndication list members to agree or disagree, not just me, what are you going to do?
>
> UserLand did a lot of work to create and popularize and support RSS. We walked away from that, and let your guys have the name. That’s the top level. If I want to do any further work in Web syndication, I have to use a different name. Why and how did that happen Tim?

I have not been able to find a discussion in the Syndication mailing list about using the RSS 1.0 name prior to the announcement of the RSS 1.0 proposal. Winer, in a message to me, said that he was not trying to control RSS and just wanted to use it in his products.

RSS would fork again in 2003, when several developers frustrated with the bickering in the RSS community sought to create an entirely new format. These developers created Atom, a format that did away with RDF but embraced XML namespaces. Atom would eventually be specified by [a proposed IETF standard][8]. After the introduction of Atom, there were three competing versions of RSS: Winer’s RSS 0.92 (updated to RSS 2.0 in 2002 and renamed “Really Simple Syndication”), the RSS-DEV Working Group’s RSS 1.0, and Atom.
### Decline

The proliferation of competing RSS specifications may have hampered RSS in other ways that I’ll discuss shortly. But it did not stop RSS from becoming enormously popular during the 2000s. By 2004, the New York Times had started offering its headlines in RSS and had written an article explaining to the layperson what RSS was and how to use it. Google Reader, the RSS aggregator ultimately used by millions, was launched in 2005. By 2013, RSS seemed popular enough that the New York Times, in its obituary for Aaron Swartz, called the technology “ubiquitous.” For a while, before a third of the planet had signed up for Facebook, RSS was simply how many people stayed abreast of news on the internet.

The New York Times published Swartz’ obituary in January 2013. By that point, though, RSS had actually turned a corner and was well on its way to becoming an obscure technology. Google Reader was shut down in July 2013, ostensibly because user numbers had been falling “over the years.” This prompted several articles from various outlets declaring that RSS was dead. But people had been declaring that RSS was dead for years, even before Google Reader’s shuttering. Steve Gillmor, writing for TechCrunch in May 2009, advised that “it’s time to get completely off RSS and switch to Twitter” because “RSS just doesn’t cut it anymore.” He pointed out that Twitter was basically a better RSS feed, since it could show you what people thought about an article in addition to the article itself. It allowed you to follow people and not just channels. Gillmor told his readers that it was time to let RSS recede into the background. He ended his article with a verse from Bob Dylan’s “Forever Young.”

Today, RSS is not dead. But neither is it anywhere near as popular as it once was. Lots of people have offered explanations for why RSS lost its broad appeal. Perhaps the most persuasive explanation is exactly the one offered by Gillmor in 2009. Social networks, just like RSS, provide a feed featuring all the latest news on the internet. Social networks took over from RSS because they were simply better feeds. They also provide more benefits to the companies that own them. Some people have accused Google, for example, of shutting down Google Reader in order to encourage people to use Google+. Google might have been able to monetize Google+ in a way that it could never have monetized Google Reader. Marco Arment, the creator of Instapaper, wrote on his blog in 2013:

> Google Reader is just the latest casualty of the war that Facebook started, seemingly accidentally: the battle to own everything. While Google did technically “own” Reader and could make some use of the huge amount of news and attention data flowing through it, it conflicted with their far more important Google+ strategy: they need everyone reading and sharing everything through Google+ so they can compete with Facebook for ad-targeting data, ad dollars, growth, and relevance.

So both users and technology companies realized that they got more out of using social networks than they did out of RSS.

Another theory is that RSS was always too geeky for regular people. Even the New York Times, which seems to have been eager to adopt RSS and promote it to its audience, complained in 2006 that RSS is a “not particularly user friendly” acronym coined by “computer geeks.” Before the RSS icon was designed in 2004, websites like the New York Times linked to their RSS feeds using little orange boxes labeled “XML,” which can only have been intimidating. The label was perfectly accurate though, because back then clicking the link would take a hapless user to a page full of XML. [This great tweet][9] captures the essence of this explanation for RSS’ demise. Regular people never felt comfortable using RSS; it hadn’t really been designed as a consumer-facing technology and involved too many hurdles; people jumped ship as soon as something better came along.

RSS might have been able to overcome some of these limitations if it had been further developed. Maybe RSS could have been extended somehow so that friends subscribed to the same channel could syndicate their thoughts about an article to each other. Maybe browser support could have been improved. But whereas a company like Facebook was able to “move fast and break things,” the RSS developer community was stuck trying to achieve consensus. When they failed to agree on a single standard, effort that could have gone into improving RSS was instead squandered on duplicating work that had already been done. Davis told me, for example, that Atom would not have been necessary if the members of the Syndication mailing list had been able to compromise and collaborate, and “all that cleanup work could have been put into RSS to strengthen it.” So if we are asking ourselves why RSS is no longer popular, a good first-order explanation is that social networks supplanted it. If we ask ourselves why social networks were able to supplant it, then the answer may be that the people trying to make RSS succeed faced a problem much harder than, say, building Facebook. As Dornfest wrote to the Syndication mailing list at one point, “currently it’s the politics far more than the serialization that’s far from simple.”

So today we are left with centralized silos of information. Even so, the syndicated web that Werbach foresaw in 1999 has been realized, just not in the way he thought it would be. After all, The Onion is a publication that relies on syndication through Facebook and Twitter the same way that Seinfeld relied on syndication to rake in millions after the end of its original run. I asked Werbach what he thinks about this and he more or less agrees. He told me that RSS, on one level, was clearly a failure, because it isn’t now “a technology that is really the core of the whole blogging world or content world or world of assembling different elements of things into sites.” But, on another level, “the whole social media revolution is partly about the ability to aggregate different content and resources” in a manner reminiscent of RSS and his original vision for a syndicated web. To Werbach, “it’s the legacy of RSS, even if it’s not built on RSS.”

Unfortunately, syndication on the modern web still only happens through one of a very small number of channels, meaning that none of us “retain control over our online personae” the way that Werbach imagined we would. One reason this happened is garden-variety corporate rapaciousness—RSS, an open format, didn’t give technology companies the control over data and eyeballs that they needed to sell ads, so they did not support it. But the more mundane reason is that centralized silos are just easier to design than common standards. Consensus is difficult to achieve and it takes time, but without consensus spurned developers will go off and create competing standards. The lesson here may be that if we want to see a better, more open web, we have to get better at not screwing each other over.
If you enjoyed this post, more like it come out every four weeks! Follow [@TwoBitHistory][10] on Twitter or subscribe to the [RSS feed][11] to make sure you know when a new post is out.

Previously on TwoBitHistory…

> I've long wondered if the Unix commands on my Macbook are built from the same code that they were built from 20 or 30 years ago. The answer, it turns out, is "kinda"!
>
> My latest post, on how the implementation of cat has changed over the years: <https://t.co/dHizjK50ES>
>
> — TwoBitHistory (@TwoBitHistory) [November 12, 2018][12]

--------------------------------------------------------------------------------

via: https://twobithistory.org/2018/12/18/rss.html

作者:[Two-Bit History][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://twobithistory.org
[b]: https://github.com/lujun9972
[1]: https://twobithistory.org/2018/09/16/the-rise-and-demise-of-rss.html
[2]: https://trends.google.com/trends/explore?date=all&geo=US&q=rss
[3]: https://twobithistory.org/images/mnn-channel.gif
[4]: https://twobithistory.org/2018/05/27/semantic-web.html
[5]: http://web.archive.org/web/19970703020212/http://mcf.research.apple.com:80/hs/screen_shot.html
[6]: http://scripting.com
[7]: https://groups.yahoo.com/neo/groups/syndication/info
[8]: https://tools.ietf.org/html/rfc4287
[9]: https://twitter.com/mgsiegler/status/311992206716203008
[10]: https://twitter.com/TwoBitHistory
[11]: https://twobithistory.org/feed.xml
[12]: https://twitter.com/TwoBitHistory/status/1062114484209311746?ref_src=twsrc%5Etfw
@@ -1,3 +1,4 @@
Translating by qhwdw
Protecting Code Integrity with PGP — Part 4: Moving Your Master Key to Offline Storage
======

@@ -1,3 +1,4 @@
translated by lixinyuxx
4 Firefox extensions worth checking out
======

@@ -1,3 +1,4 @@
translated by lixinyuxx
The life cycle of a software bug
======
@@ -1,142 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (11 Uses for a Raspberry Pi Around the Office)
[#]: via: (https://blog.dxmtechsupport.com.au/11-uses-for-a-raspberry-pi-around-the-office/)
[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/)

11 Uses for a Raspberry Pi Around the Office
======

Look, I know what you’re thinking: a Raspberry Pi is really just for tinkering, prototyping and hobby use. It’s not actually meant for running a business on.

And it’s definitely true that this computer’s relatively low processing power, corruptible SD card, lack of battery backup and the DIY nature of the support mean it’s not going to be a viable replacement for a [professionally installed and configured business server][1] for your most mission-critical operations any time soon.

But the board is affordable, incredibly frugal with power, small enough to fit just about anywhere and endlessly flexible – it’s actually a pretty great way to handle some basic tasks around the office.

And, even better, there’s a whole world of people out there who have done these projects before and are happy to share how they did it.

### DNS Server

Every time you type a website address or click a link in your browser, it needs to convert the domain name into a numeric IP address before it can show you anything.

Normally this means a request to a DNS server somewhere on the internet – but you can speed up your browsing by handling this locally.

You can also assign your own subdomains for local access to machines around the office.

[Here’s how to get this working.][2]
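The linked guide has the full walkthrough. Just to sketch the idea (this isn't necessarily the tool that guide uses), a small caching resolver with a couple of office-only names can be done with dnsmasq; the hostnames and addresses below are placeholders for your own network.

```
sudo apt-get install dnsmasq -y

# Cache lookups locally and answer for a few office-only names.
# The names and addresses here are examples -- substitute your own machines.
echo 'cache-size=1000' | sudo tee -a /etc/dnsmasq.conf
echo 'address=/printer.office.lan/192.168.1.50' | sudo tee -a /etc/dnsmasq.conf
echo 'address=/nas.office.lan/192.168.1.60' | sudo tee -a /etc/dnsmasq.conf

sudo systemctl restart dnsmasq
```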
### Toilet Occupied Sign

Ever get queues at the loos?

That’s annoying for those left waiting, and the time spent dealing with it is a drain on your whole office’s productivity.

I guess you could always hang those signs they have on airplanes all through your office.

[Occu-pi][3] is a much simpler solution, using a magnetic switch and a Raspberry Pi to tell when the bolt is closed and update a Slack channel when the cubicle is in use – meaning that the whole office can tell at a glance at their computer or mobile device whether there’s a cubicle free.

### Honeypot Trap for Hackers

It should scare most business owners just a little bit that their first clue that a hacker’s breached the network is when something goes badly wrong.

That’s where it can help to have a honeypot: a computer that serves no purpose except to sit on your network with certain ports open to masquerade as a juicy target to hackers.

Security researchers often deploy honeypots on the exterior of a network, to collect data on what attackers are doing.

But for the average small business, these are more usefully deployed in the interior, to serve as kind of a tripwire. Because no ordinary user has any real reason to want to connect to the honeypot, any login attempts that occur are a very good indicator that mischief is afoot.

This can provide early warning of outsider intrusion, and of trusted insiders up to no good.

In larger, client/server networks, it might be more practical to run something like this as a virtual machine. But in small-office/home-office situations with a peer-to-peer network running on a wireless router, something like [HoneyPi][4] is a great little burglar alarm.
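This is not HoneyPi itself, but as a bare-bones sketch of the tripwire idea: listen on a port no legitimate user should ever touch and log every connection attempt. The port number and log path are arbitrary choices here.

```
# Minimal tripwire: anything that connects to this port is, by definition, suspicious.
# (Netcat flag syntax varies between variants; some need "nc -l -p 2222" instead.)
while true; do
    nc -l 2222 </dev/null >/dev/null
    echo "$(date): connection attempt on port 2222" >> /var/log/honeypot.log
done
```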
### Print Server

Network-attached printers are so much more convenient.

But it can be expensive to replace all your printers – especially if you’re otherwise happy with them.

It might make a lot more sense to [set up a Raspberry Pi as a print server][5].
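The linked article covers the details; the usual engine for this is CUPS, and a rough sketch of the setup looks something like the following. The "pi" user name and the decision to open administration to the whole LAN are assumptions to adjust for your office.

```
sudo apt-get install cups -y

# Let the default "pi" user administer printers, then share them on the local network.
sudo usermod -a -G lpadmin pi
sudo cupsctl --remote-admin --remote-any --share-printers
sudo systemctl restart cups
```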
### Network Attached Storage

Turning simple hard drives into network attached storage was one of the earliest practical uses for a Raspberry Pi, and it’s still one of the best.

[Here’s how to create a NAS with your Raspberry Pi.][6]
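The linked how-to walks through it with Samba. As a minimal sketch (the share name, mount point, and user are placeholders), the heart of it is one entry in `smb.conf`:

```
sudo apt-get install samba -y

# Append a share definition for a drive mounted at /mnt/usbdrive (example path).
sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'
[office-nas]
   path = /mnt/usbdrive
   browseable = yes
   read only = no
EOF

sudo smbpasswd -a pi          # give the share user a password
sudo systemctl restart smbd
```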
### Ticketing Server

Looking for a way to manage the support tickets for your help desk on a shoestring budget?

There’s a totally open source ticketing program called osTicket that you can install on your Pi, and it’s even available as [a ready-to-go SD card image][7].

### Digital Signage

Whether it’s for events, advertising, a menu, or something else entirely, a lot of businesses need a way to display digital signage – and the Pi’s affordability and frugal electricity needs make it a very attractive choice.

[There are a wealth of options to choose from here.][8]

### Directories and Kiosks

[FullPageOS][9] is a Linux distribution based on Raspbian that boots straight into a full-screen version of Chromium – ideal for shopping directories, library catalogues and so on.

### Basic Intranet Web Server

For hosting a public-facing website, you’re really much better off just getting a hosting account. A Raspberry Pi is not really built to serve any real volume of web traffic.

But for small offices, it can host an internal business wiki or basic company intranet. It can also work as a sandbox environment for experimenting with code and server configurations.

[Here’s how to get Apache, MySQL and PHP running on a Pi.][10]
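The linked guide covers the full setup; as a rough sketch of how little it takes, something like this gets a test page up for the office LAN. On recent Raspbian releases MariaDB stands in for MySQL, and the exact package names may differ slightly between releases.

```
sudo apt-get install apache2 mariadb-server php libapache2-mod-php php-mysql -y

# Drop a test page into the web root, then browse to http://<pi-address>/info.php
# from another machine on the office network.
echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php
```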
### Penetration Tester

Kali Linux is an operating system built specifically to probe networks for security vulnerabilities. By installing it on a Pi, you’ve got a super portable penetration tester with more than 600 tools included.

[You can find a torrent link for the Raspberry Pi image here.][11]

Be absolutely scrupulous about only using this on your own network or on networks you’ve got permission to perform a security audit on – using it to hack other networks is a serious crime.

### VPN Server

When you’re out on the road, relying on public wireless internet, you’ve not really any say in who else might be on the network, snooping on all your traffic. That’s why it can be reassuring to encrypt everything with a VPN connection.

There are any number of commercial VPN services you can subscribe to – and you can install your own in the cloud – but by running one from your office, you can also access the local network from anywhere.

For light use – say, the occasional bit of business travel – a Raspberry Pi is a great, power-efficient way to set up a VPN server. (It’s also worth checking first that your router doesn’t offer this functionality already – very many do.)

[Here’s how to install OpenVPN on a Raspberry Pi.][12]
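The guide linked above walks through a manual OpenVPN setup. A popular shortcut the article doesn't mention is the PiVPN installer; the URL below is the one the PiVPN project documents, but as with any downloaded installer script, read it before running it.

```
# Fetch the PiVPN installer, look it over, then run it (it walks you through OpenVPN setup).
curl -L https://install.pivpn.io -o pivpn-install.sh
less pivpn-install.sh
bash pivpn-install.sh
```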
### Wireless Coffee Machine

Ahh, ambrosia: sweet nectar of the gods and the backbone of all productive enterprise.

So why not [hack the office coffee machine into a smart coffee machine][13] for precision temperature control and wireless network connectivity?

--------------------------------------------------------------------------------

via: https://blog.dxmtechsupport.com.au/11-uses-for-a-raspberry-pi-around-the-office/

作者:[James Mawson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/
[b]: https://github.com/lujun9972
[1]: https://dxmtechsupport.com.au/server-configuration
[2]: https://www.1and1.com/digitalguide/server/configuration/how-to-make-your-raspberry-pi-into-a-dns-server/
[3]: https://blog.usejournal.com/occu-pi-the-bathroom-of-the-future-ed69b84e21d5
[4]: https://trustfoundry.net/honeypi-easy-honeypot-raspberry-pi/
[5]: https://opensource.com/article/18/3/print-server-raspberry-pi
[6]: https://howtoraspberrypi.com/create-a-nas-with-your-raspberry-pi-and-samba/
[7]: https://everyday-tech.com/a-raspberry-pi-ticketing-system-image-with-osticket/
[8]: https://blog.capterra.com/7-free-and-open-source-digital-signage-software-options-for-your-next-event/
[9]: https://github.com/guysoft/FullPageOS
[10]: https://maker.pro/raspberry-pi/projects/raspberry-pi-web-server
[11]: https://www.offensive-security.com/kali-linux-arm-images/
[12]: https://medium.freecodecamp.org/running-your-own-openvpn-server-on-a-raspberry-pi-8b78043ccdea
[13]: https://www.techradar.com/au/how-to/how-to-build-your-own-smart-coffee-machine
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: ( Auk7F7)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: subject: (Arch-Audit : A Tool To Check Vulnerable Packages In Arch Linux)
|
||||
@ -7,6 +7,7 @@
|
||||
[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/)
|
||||
[#]: url: ( )
|
||||
|
||||
|
||||
Arch-Audit : A Tool To Check Vulnerable Packages In Arch Linux
|
||||
======
|
||||
|
||||
|
169
sources/tech/20181204 4 Unique Terminal Emulators for Linux.md
Normal file
@ -0,0 +1,169 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (4 Unique Terminal Emulators for Linux)
|
||||
[#]: via: (https://www.linux.com/blog/learn/2018/12/4-unique-terminals-linux)
|
||||
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
|
||||
|
||||
4 Unique Terminal Emulators for Linux
|
||||
======
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_main.jpg?itok=e6av-5VO)
|
||||
Let’s face it, if you’re a Linux administrator, you’re going to work with the command line. To do that, you’ll be using a terminal emulator. Most likely, your distribution of choice came pre-installed with a default terminal emulator that gets the job done. But this is Linux, so you have a wealth of choices to pick from, and that ideology holds true for terminal emulators as well. In fact, if you open up your distribution’s GUI package manager (or search from the command line), you’ll find a trove of possible options. Of those, many are pretty straightforward tools; however, some are truly unique.
|
||||
|
||||
In this article, I'll highlight four such terminal emulators that will not only get the job done but also make the job a bit more interesting or fun. So, let's take a look at these terminals.
|
||||
|
||||
### Tilda
|
||||
|
||||
[Tilda][1] is designed for Gtk and is a member of the cool drop-down family of terminals (alongside the likes of Guake and Yakuake). That means the terminal is always running in the background, ready to drop down from the top of your monitor. What makes Tilda rise above many of the others is the number of configuration options available for the terminal (Figure 1).
|
||||
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_1.jpg?itok=bra6qb6X)
|
||||
|
||||
Tilda can be installed from the standard repositories. On an Ubuntu- (or Debian-) based distribution, the installation is as simple as:
|
||||
|
||||
```
|
||||
sudo apt-get install tilda -y
|
||||
```
|
||||
|
||||
Once installed, open Tilda from your desktop menu, which will also open the configuration window. Configure the app to suit your taste and then close the configuration window. You can then open and close Tilda by hitting the F1 hotkey. One caveat to using Tilda is that, after the first run, you won't find any indication as to how to reach the configuration wizard. No worries: if you run the command tilda -C, it will open the configuration window while still retaining the options you've previously set.
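In other words, whenever you want the configuration window back:

```
tilda -C
```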
|
||||
|
||||
Available options include:
|
||||
|
||||
* Terminal size and location
|
||||
|
||||
* Font and color configurations
|
||||
|
||||
* Auto Hide
|
||||
|
||||
* Title
|
||||
|
||||
* Custom commands
|
||||
|
||||
* URL Handling
|
||||
|
||||
* Transparency
|
||||
|
||||
* Animation
|
||||
|
||||
* Scrolling
|
||||
|
||||
* And more
|
||||
|
||||
|
||||
|
||||
|
||||
What I like about these types of terminals is that they easily get out of the way when you don't need them and are just a button click away when you do. For those who hop in and out of the terminal, a tool like Tilda is ideal.
|
||||
|
||||
### Aterm
|
||||
|
||||
Aterm holds a special place in my heart, as it was one of the first terminals I used that made me realize how flexible Linux was. This was back when AfterStep was my window manager of choice (which dates me a bit) and I was new to the command line. What Aterm offered was a terminal emulator that was highly customizable, while helping me learn the ins and outs of using the terminal (how to add options and switches to a command). “How?” you ask. Because Aterm never had a GUI for customization. To run Aterm with any special options, it had to run as a command. For example, say you want to open Aterm with transparency enabled, green text, white highlights, and no scroll bar. To do this, issue the command:
|
||||
|
||||
```
|
||||
aterm -tr -fg green -bg white +xb
|
||||
```
|
||||
|
||||
The end result (with the top command running for illustration) would look like that shown in Figure 2.
|
||||
|
||||
![Aterm][3]
|
||||
|
||||
Figure 2: Aterm with a few custom options.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
Of course, you must first install Aterm. Fortunately, the application is still found in the standard repositories, so installing on the likes of Ubuntu is as simple as:
|
||||
|
||||
```
|
||||
sudo apt-get install aterm -y
|
||||
```
|
||||
|
||||
If you want to always open Aterm with those options, your best bet is to create an alias in your ~/.bashrc file like so:
|
||||
|
||||
```
|
||||
alias aterm='aterm -tr -fg green -bg white +sb'
|
||||
```
|
||||
|
||||
Save that file and, when you issue the command aterm, it will always open with those options. For more about creating aliases, check out [this tutorial][5].
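For the alias to take effect in your current shell (rather than only in newly opened ones), reload the file – a standard Bash step, nothing Aterm-specific:

```
source ~/.bashrc
```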
|
||||
|
||||
### Eterm
|
||||
|
||||
Eterm is the second terminal that really showed me how much fun the Linux command line could be. Eterm is the default terminal emulator for the Enlightenment desktop. When I eventually migrated from AfterStep to Enlightenment (back in the early 2000s), I was afraid I’d lose out on all those cool aesthetic options. That turned out to not be the case. In fact, Eterm offered plenty of unique options, while making the task easier with a terminal toolbar. With Eterm, you can easily select from a large number of background images (should you want one - Figure 3) by selecting from the Background > Pixmap menu entry.
|
||||
|
||||
![Eterm][7]
|
||||
|
||||
Figure 3: Selecting from one of the many background images for Eterm.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
There are a number of other options to configure (such as font size, map alerts, toggling the scrollbar, and the brightness, contrast, and gamma of background images). The one thing you'll want to make sure of is to click Eterm > Save User Settings once you've configured Eterm to suit your tastes (otherwise, all settings will be lost when you close the app).
|
||||
|
||||
Eterm can be installed from the standard repositories, with a command such as:
|
||||
|
||||
```
|
||||
sudo apt-get install eterm
|
||||
```
|
||||
|
||||
### Extraterm
|
||||
|
||||
[Extraterm][8] should probably win a few awards for coolest feature set of any terminal window project available today. The most unique feature of Extraterm is the ability to wrap commands in color-coded frames (blue for successful commands and red for failed commands - Figure 4).
|
||||
|
||||
![Extraterm][10]
|
||||
|
||||
Figure 4: Extraterm showing two failed command frames.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
When you run a command, Extraterm will wrap the command in an isolated frame. If the command succeeds, the frame will be outlined in blue. Should the command fail, the frame will be outlined in red.
|
||||
|
||||
Extraterm cannot be installed via the standard repositories. In fact, the only way to run Extraterm on Linux (at the moment) is to [download the precompiled binary][11] from the project’s GitHub page, extract the file, change into the newly created directory, and issue the command ./extraterm.
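Put together, those steps look roughly like the sketch below – the archive and directory names are illustrative, since the exact filename depends on the release you download:

```
# Download the latest Linux zip from the releases page first; names below are illustrative
unzip extraterm-*-linux-x64.zip
cd extraterm-*-linux-x64
./extraterm
```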
|
||||
|
||||
Once the app is running, to enable frames you must first enable bash integration. To do that, open Extraterm and then right-click anywhere in the window to reveal the popup menu. Scroll until you see the entry for Inject Bash shell Integration (Figure 5). Select that entry and you can then begin using the frames option.
|
||||
|
||||
![Extraterm][13]
|
||||
|
||||
Figure 5: Injecting Bash integration for Extraterm.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
If you run a command, and don’t see a frame appear, you probably have to create a new frame for the command (as Extraterm only ships with a few default frames). To do that, click on the Extraterm menu button (three horizontal lines in the top right corner of the window), select Settings, and then click the Frames tab. In this window, scroll down and click the New Rule button. You can then add a command you want to work with the frames option (Figure 6).
|
||||
|
||||
![frames][15]
|
||||
|
||||
Figure 6: Adding a new rule for frames.
|
||||
|
||||
[Used with permission][4]
|
||||
|
||||
If, after this, you still don’t see frames appearing, download the extraterm-commands file from the [Download page][11], extract the file, change into the newly created directory, and issue the command sh setup_extraterm_bash.sh. That should enable frames for Extraterm.
|
||||
There are plenty more options available for Extraterm. I'm convinced that once you start playing around with this new take on the terminal window, you won't want to go back to a standard terminal. Hopefully, the developer will make this app available in the standard repositories soon (as it could easily become one of the most popular terminal emulators in use).
|
||||
|
||||
### And Many More
|
||||
|
||||
As you probably expected, there are quite a lot of terminals available for Linux. These four represent (at least for me) four unique takes on the task, each of which does a great job of helping you run the commands every Linux admin needs to run. If you aren't satisfied with one of these, give your package manager a look to see what's available. You are sure to find something that works perfectly for you.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linux.com/blog/learn/2018/12/4-unique-terminals-linux
|
||||
|
||||
作者:[Jack Wallen][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.linux.com/users/jlwallen
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://tilda.sourceforge.net/tildadoc.php
|
||||
[2]: https://www.linux.com/files/images/terminals2jpg
|
||||
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_2.jpg?itok=gBkRLwDI (Aterm)
|
||||
[4]: https://www.linux.com/licenses/category/used-permission
|
||||
[5]: https://www.linux.com/blog/learn/2018/12/aliases-diy-shell-commands
|
||||
[6]: https://www.linux.com/files/images/terminals3jpg
|
||||
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_3.jpg?itok=RVPTJAtK (Eterm)
|
||||
[8]: http://extraterm.org
|
||||
[9]: https://www.linux.com/files/images/terminals4jpg
|
||||
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_4.jpg?itok=2n01qdwO (Extraterm)
|
||||
[11]: https://github.com/sedwards2009/extraterm/releases
|
||||
[12]: https://www.linux.com/files/images/terminals5jpg
|
||||
[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_5.jpg?itok=FdaE1Mpf (Extraterm)
|
||||
[14]: https://www.linux.com/files/images/terminals6jpg
|
||||
[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_6.jpg?itok=lQ1Zv5wq (frames)
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,745 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (TLP – An Advanced Power Management Tool That Improve Battery Life On Linux Laptop)
|
||||
[#]: via: (https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
TLP – An Advanced Power Management Tool That Improves Battery Life On Linux Laptops
|
||||
======
|
||||
|
||||
Laptop batteries are highly optimized for Windows; I realized that when I was using Windows on my laptop, but it's not the same for Linux.
|
||||
|
||||
Over the years, Linux has improved a lot when it comes to battery optimization, but we still need to take a few steps to get good laptop battery life on Linux.
|
||||
|
||||
When I looked into battery life, I found a few options, but TLP felt like the best solution for me, so that's what I'm going with.
|
||||
|
||||
In this tutorial, we are going to discuss TLP in detail to help improve battery life.
|
||||
|
||||
We have previously written three related articles on our site: **[laptop battery saving utilities][1]** for Linux, **[PowerTOP][2]**, and **[Battery Charging State][3]**.
|
||||
|
||||
### What is TLP?
|
||||
|
||||
[TLP][4] is a free, open source, advanced power management tool that improves your battery life without requiring any configuration changes.
|
||||
|
||||
It comes with a default configuration that is already optimized for battery life, so you can just install it and forget about it.
|
||||
|
||||
Also, it is highly customizable to fulfill your specific requirements. TLP is a pure command line tool with automated background tasks. It does not contain a GUI.
|
||||
|
||||
TLP runs on every laptop brand. Setting the battery charge thresholds is available for IBM/Lenovo ThinkPads only.
|
||||
|
||||
All TLP settings are stored in `/etc/default/tlp`. The default configuration provides optimized power saving out of the box.
|
||||
|
||||
The following TLP settings are available for customization; you only need to make changes if the defaults don't suit you.
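As a quick illustration (using one of the variables you'll see in the tlp-stat output later in this article), you would edit the file and then re-apply the settings:

```
sudo nano /etc/default/tlp    # e.g. change WIFI_PWR_ON_BAT=on
sudo tlp start                # apply the new settings immediately
```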
|
||||
|
||||
### TLP Features
|
||||
|
||||
* Kernel laptop mode and dirty buffer timeouts
|
||||
* Processor frequency scaling including “turbo boost” / “turbo core”
|
||||
* Limit max/min P-state to control power dissipation of the CPU
|
||||
* HWP energy performance hints
|
||||
* Power aware process scheduler for multi-core/hyper-threading
|
||||
* Processor performance versus energy savings policy (x86_energy_perf_policy)
|
||||
  * Hard disk advanced power management level (APM) and spin-down timeout (per disk)
|
||||
* AHCI link power management (ALPM) with device blacklist
|
||||
* PCIe active state power management (PCIe ASPM)
|
||||
* Runtime power management for PCI(e) bus devices
|
||||
* Radeon graphics power management (KMS and DPM)
|
||||
* Wifi power saving mode
|
||||
* Power off optical drive in drive bay
|
||||
* Audio power saving mode
|
||||
* I/O scheduler (per disk)
|
||||
* USB autosuspend with device blacklist/whitelist (input devices excluded automatically)
|
||||
* Enable or disable integrated wifi, bluetooth or wwan devices upon system startup and shutdown
|
||||
* Restore radio device state on system startup (from previous shutdown).
|
||||
* Radio device wizard: switch radios upon network connect/disconnect and dock/undock
|
||||
* Disable Wake On LAN
|
||||
* Integrated WWAN and bluetooth state is restored after suspend/hibernate
|
||||
  * Undervolting of Intel processors – requires a kernel with the PHC patch
|
||||
* Battery charge thresholds – ThinkPads only
|
||||
* Recalibrate battery – ThinkPads only
|
||||
|
||||
|
||||
|
||||
### How to Install TLP in Linux
|
||||
|
||||
The TLP package is available in most distributions' official repositories, so use the distribution's **[Package Manager][5]** to install it.
|
||||
|
||||
For **`Fedora`** systems, use the **[DNF Command][6]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo dnf install tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPads require additional packages.
|
||||
|
||||
```
|
||||
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
|
||||
$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release.fc$(rpm -E %fedora).noarch.rpm
|
||||
$ sudo dnf install akmod-tp_smapi akmod-acpi_call kernel-devel
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo dnf install smartmontools
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][7]** or **[APT Command][8]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo apt install tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPads require additional packages.
|
||||
|
||||
```
|
||||
$ sudo apt-get install tp-smapi-dkms acpi-call-dkms
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo apt-get install smartmontools
|
||||
```
|
||||
|
||||
If the official package becomes outdated on Ubuntu-based systems, use the following PPA repository, which provides an up-to-date version. Run the following commands to install TLP using the PPA.
|
||||
|
||||
```
|
||||
$ sudo apt-get install tlp tlp-rdw
|
||||
```
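Note that the command above assumes the PPA has already been added; to the best of my knowledge the project's PPA is linrunner/tlp, so adding it would look like this (verify the PPA name before using it):

```
sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update
```

After that, run the `apt-get install` command shown above.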
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo pacman -S tlp tlp-rdw
|
||||
```
|
||||
|
||||
ThinkPads require additional packages.
|
||||
|
||||
```
|
||||
$ sudo pacman -S tp_smapi acpi_call
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo pacman -S smartmontools
|
||||
```
|
||||
|
||||
Enable TLP & TLP-Sleep service on boot for Arch Linux based systems.
|
||||
|
||||
```
|
||||
$ sudo systemctl enable tlp.service
|
||||
$ sudo systemctl enable tlp-sleep.service
|
||||
```
|
||||
|
||||
You should also mask the following services to avoid conflicts and assure proper operation of TLP’s radio device switching options for Arch Linux based systems.
|
||||
|
||||
```
|
||||
$ sudo systemctl mask systemd-rfkill.service
|
||||
$ sudo systemctl mask systemd-rfkill.socket
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][10]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo yum install tlp tlp-rdw
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo yum install smartmontools
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** systems, use the **[Zypper Command][11]** to install TLP.
|
||||
|
||||
```
|
||||
$ sudo zypper install TLP
|
||||
```
|
||||
|
||||
Install smartmontools to display S.M.A.R.T. data in tlp-stat.
|
||||
|
||||
```
|
||||
$ sudo zypper install smartmontools
|
||||
```
|
||||
|
||||
After TLP is successfully installed, use the following command to start the service.
|
||||
|
||||
```
|
||||
$ systemctl start tlp.service
|
||||
```
|
||||
|
||||
To show battery information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -b
|
||||
or
|
||||
$ sudo tlp-stat --battery
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Battery Status
|
||||
/sys/class/power_supply/BAT0/manufacturer = SMP
|
||||
/sys/class/power_supply/BAT0/model_name = L14M4P23
|
||||
/sys/class/power_supply/BAT0/cycle_count = (not supported)
|
||||
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_full = 48850 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_now = 48850 [mWh]
|
||||
/sys/class/power_supply/BAT0/power_now = 0 [mW]
|
||||
/sys/class/power_supply/BAT0/status = Full
|
||||
|
||||
Charge = 100.0 [%]
|
||||
Capacity = 81.4 [%]
|
||||
```
|
||||
|
||||
To show disk information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -d
|
||||
or
|
||||
$ sudo tlp-stat --disk
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Storage Devices
|
||||
/dev/sda:
|
||||
Model = WDC WD10SPCX-24HWST1
|
||||
Firmware = 02.01A02
|
||||
APM Level = 128
|
||||
Status = active/idle
|
||||
Scheduler = mq-deadline
|
||||
|
||||
Runtime PM: control = on, autosuspend_delay = (not available)
|
||||
|
||||
SMART info:
|
||||
4 Start_Stop_Count = 18787
|
||||
5 Reallocated_Sector_Ct = 0
|
||||
9 Power_On_Hours = 606 [h]
|
||||
12 Power_Cycle_Count = 1792
|
||||
193 Load_Cycle_Count = 25775
|
||||
194 Temperature_Celsius = 31 [°C]
|
||||
|
||||
|
||||
+++ AHCI Link Power Management (ALPM)
|
||||
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
|
||||
|
||||
+++ AHCI Host Controller Runtime Power Management
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
|
||||
```
|
||||
|
||||
To show PCI device information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -e
|
||||
or
|
||||
$ sudo tlp-stat --pcie
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Runtime Power Management
|
||||
Device blacklist = (not configured)
|
||||
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
|
||||
|
||||
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
|
||||
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
|
||||
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
|
||||
/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me)
|
||||
/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci)
|
||||
/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel)
|
||||
/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus)
|
||||
/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau)
|
||||
/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci)
|
||||
/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi)
|
||||
/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168)
|
||||
/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme)
|
||||
```
|
||||
|
||||
To show graphics card information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -g
|
||||
or
|
||||
$ sudo tlp-stat --graphics
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Intel Graphics
|
||||
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
|
||||
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
|
||||
/sys/module/i915/parameters/enable_psr = 0 (disabled)
|
||||
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
|
||||
```
|
||||
|
||||
To show Processor information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -p
|
||||
or
|
||||
$ sudo tlp-stat --processor
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Processor
|
||||
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
|
||||
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
|
||||
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
|
||||
|
||||
x86_energy_perf_policy: program not installed.
|
||||
|
||||
/sys/module/workqueue/parameters/power_efficient = Y
|
||||
/proc/sys/kernel/nmi_watchdog = 0
|
||||
|
||||
+++ Undervolting
|
||||
PHC kernel not available.
|
||||
```
|
||||
|
||||
To show system data information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -s
|
||||
or
|
||||
$ sudo tlp-stat --system
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ System Info
|
||||
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
|
||||
BIOS = CDCN35WW
|
||||
Release = "Manjaro Linux"
|
||||
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
|
||||
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
|
||||
Init system = systemd
|
||||
Boot mode = BIOS (CSM, Legacy)
|
||||
|
||||
+++ TLP Status
|
||||
State = enabled
|
||||
Last run = 11:04:00 IST, 596 sec(s) ago
|
||||
Mode = battery
|
||||
Power source = battery
|
||||
```
|
||||
|
||||
To show temperatures and fan speed information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -t
|
||||
or
|
||||
$ sudo tlp-stat --temp
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Temperatures
|
||||
CPU temp = 36 [°C]
|
||||
Fan speed = (not available)
|
||||
```
|
||||
|
||||
To show USB device data information.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -u
|
||||
or
|
||||
$ sudo tlp-stat --usb
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ USB
|
||||
Autosuspend = disabled
|
||||
Device whitelist = (not configured)
|
||||
Device blacklist = (not configured)
|
||||
Bluetooth blacklist = disabled
|
||||
Phone blacklist = disabled
|
||||
WWAN blacklist = enabled
|
||||
|
||||
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
|
||||
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
|
||||
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
|
||||
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
|
||||
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
|
||||
```
|
||||
|
||||
To show warnings.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat -w
|
||||
or
|
||||
$ sudo tlp-stat --warn
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
No warnings detected.
|
||||
```
|
||||
|
||||
Status report with configuration and all active settings.
|
||||
|
||||
```
|
||||
$ sudo tlp-stat
|
||||
|
||||
--- TLP 1.1 --------------------------------------------
|
||||
|
||||
+++ Configured Settings: /etc/default/tlp
|
||||
TLP_ENABLE=1
|
||||
TLP_DEFAULT_MODE=AC
|
||||
TLP_PERSISTENT_DEFAULT=0
|
||||
DISK_IDLE_SECS_ON_AC=0
|
||||
DISK_IDLE_SECS_ON_BAT=2
|
||||
MAX_LOST_WORK_SECS_ON_AC=15
|
||||
MAX_LOST_WORK_SECS_ON_BAT=60
|
||||
CPU_HWP_ON_AC=balance_performance
|
||||
CPU_HWP_ON_BAT=balance_power
|
||||
SCHED_POWERSAVE_ON_AC=0
|
||||
SCHED_POWERSAVE_ON_BAT=1
|
||||
NMI_WATCHDOG=0
|
||||
ENERGY_PERF_POLICY_ON_AC=performance
|
||||
ENERGY_PERF_POLICY_ON_BAT=power
|
||||
DISK_DEVICES="sda sdb"
|
||||
DISK_APM_LEVEL_ON_AC="254 254"
|
||||
DISK_APM_LEVEL_ON_BAT="128 128"
|
||||
SATA_LINKPWR_ON_AC="med_power_with_dipm max_performance"
|
||||
SATA_LINKPWR_ON_BAT="med_power_with_dipm max_performance"
|
||||
AHCI_RUNTIME_PM_TIMEOUT=15
|
||||
PCIE_ASPM_ON_AC=performance
|
||||
PCIE_ASPM_ON_BAT=powersave
|
||||
RADEON_POWER_PROFILE_ON_AC=default
|
||||
RADEON_POWER_PROFILE_ON_BAT=low
|
||||
RADEON_DPM_STATE_ON_AC=performance
|
||||
RADEON_DPM_STATE_ON_BAT=battery
|
||||
RADEON_DPM_PERF_LEVEL_ON_AC=auto
|
||||
RADEON_DPM_PERF_LEVEL_ON_BAT=auto
|
||||
WIFI_PWR_ON_AC=off
|
||||
WIFI_PWR_ON_BAT=on
|
||||
WOL_DISABLE=Y
|
||||
SOUND_POWER_SAVE_ON_AC=0
|
||||
SOUND_POWER_SAVE_ON_BAT=1
|
||||
SOUND_POWER_SAVE_CONTROLLER=Y
|
||||
BAY_POWEROFF_ON_AC=0
|
||||
BAY_POWEROFF_ON_BAT=0
|
||||
BAY_DEVICE="sr0"
|
||||
RUNTIME_PM_ON_AC=on
|
||||
RUNTIME_PM_ON_BAT=auto
|
||||
RUNTIME_PM_DRIVER_BLACKLIST="amdgpu nouveau nvidia radeon pcieport"
|
||||
USB_AUTOSUSPEND=0
|
||||
USB_BLACKLIST_BTUSB=0
|
||||
USB_BLACKLIST_PHONE=0
|
||||
USB_BLACKLIST_PRINTER=1
|
||||
USB_BLACKLIST_WWAN=1
|
||||
RESTORE_DEVICE_STATE_ON_STARTUP=0
|
||||
|
||||
+++ System Info
|
||||
System = LENOVO Lenovo ideapad Y700-15ISK 80NV
|
||||
BIOS = CDCN35WW
|
||||
Release = "Manjaro Linux"
|
||||
Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64
|
||||
/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b
|
||||
Init system = systemd
|
||||
Boot mode = BIOS (CSM, Legacy)
|
||||
|
||||
+++ TLP Status
|
||||
State = enabled
|
||||
Last run = 11:04:00 IST, 684 sec(s) ago
|
||||
Mode = battery
|
||||
Power source = battery
|
||||
|
||||
+++ Processor
|
||||
CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
|
||||
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz]
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power
|
||||
/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power
|
||||
|
||||
/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/no_turbo = 0
|
||||
/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%]
|
||||
/sys/devices/system/cpu/intel_pstate/num_pstates = 28
|
||||
|
||||
x86_energy_perf_policy: program not installed.
|
||||
|
||||
/sys/module/workqueue/parameters/power_efficient = Y
|
||||
/proc/sys/kernel/nmi_watchdog = 0
|
||||
|
||||
+++ Undervolting
|
||||
PHC kernel not available.
|
||||
|
||||
+++ Temperatures
|
||||
CPU temp = 42 [°C]
|
||||
Fan speed = (not available)
|
||||
|
||||
+++ File System
|
||||
/proc/sys/vm/laptop_mode = 2
|
||||
/proc/sys/vm/dirty_writeback_centisecs = 6000
|
||||
/proc/sys/vm/dirty_expire_centisecs = 6000
|
||||
/proc/sys/vm/dirty_ratio = 20
|
||||
/proc/sys/vm/dirty_background_ratio = 10
|
||||
|
||||
+++ Storage Devices
|
||||
/dev/sda:
|
||||
Model = WDC WD10SPCX-24HWST1
|
||||
Firmware = 02.01A02
|
||||
APM Level = 128
|
||||
Status = active/idle
|
||||
Scheduler = mq-deadline
|
||||
|
||||
Runtime PM: control = on, autosuspend_delay = (not available)
|
||||
|
||||
SMART info:
|
||||
4 Start_Stop_Count = 18787
|
||||
5 Reallocated_Sector_Ct = 0
|
||||
9 Power_On_Hours = 606 [h]
|
||||
12 Power_Cycle_Count = 1792
|
||||
193 Load_Cycle_Count = 25777
|
||||
194 Temperature_Celsius = 31 [°C]
|
||||
|
||||
|
||||
+++ AHCI Link Power Management (ALPM)
|
||||
/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm
|
||||
/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm
|
||||
|
||||
+++ AHCI Host Controller Runtime Power Management
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on
|
||||
/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on
|
||||
|
||||
+++ PCIe Active State Power Management
|
||||
/sys/module/pcie_aspm/parameters/policy = powersave
|
||||
|
||||
+++ Intel Graphics
|
||||
/sys/module/i915/parameters/enable_dc = -1 (use per-chip default)
|
||||
/sys/module/i915/parameters/enable_fbc = 1 (enabled)
|
||||
/sys/module/i915/parameters/enable_psr = 0 (disabled)
|
||||
/sys/module/i915/parameters/modeset = -1 (use per-chip default)
|
||||
|
||||
+++ Wireless
|
||||
bluetooth = on
|
||||
wifi = on
|
||||
wwan = none (no device)
|
||||
|
||||
hci0(btusb) : bluetooth, not connected
|
||||
wlp8s0(iwlwifi) : wifi, connected, power management = on
|
||||
|
||||
+++ Audio
|
||||
/sys/module/snd_hda_intel/parameters/power_save = 1
|
||||
/sys/module/snd_hda_intel/parameters/power_save_controller = Y
|
||||
|
||||
+++ Runtime Power Management
|
||||
Device blacklist = (not configured)
|
||||
Driver blacklist = amdgpu nouveau nvidia radeon pcieport
|
||||
|
||||
/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore)
|
||||
/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915)
|
||||
/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd)
|
||||
/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me)
|
||||
/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci)
|
||||
/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport)
|
||||
/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver)
|
||||
/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel)
|
||||
/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus)
|
||||
/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau)
|
||||
/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci)
|
||||
/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi)
|
||||
/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168)
|
||||
/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme)
|
||||
|
||||
+++ USB
|
||||
Autosuspend = disabled
|
||||
Device whitelist = (not configured)
|
||||
Device blacklist = (not configured)
|
||||
Bluetooth blacklist = disabled
|
||||
Phone blacklist = disabled
|
||||
WWAN blacklist = enabled
|
||||
|
||||
Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub)
|
||||
Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo)
|
||||
Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid)
|
||||
Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb)
|
||||
Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub)
|
||||
|
||||
+++ Battery Status
|
||||
/sys/class/power_supply/BAT0/manufacturer = SMP
|
||||
/sys/class/power_supply/BAT0/model_name = L14M4P23
|
||||
/sys/class/power_supply/BAT0/cycle_count = (not supported)
|
||||
/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_full = 51690 [mWh]
|
||||
/sys/class/power_supply/BAT0/energy_now = 50140 [mWh]
|
||||
/sys/class/power_supply/BAT0/power_now = 12185 [mW]
|
||||
/sys/class/power_supply/BAT0/status = Discharging
|
||||
|
||||
Charge = 97.0 [%]
|
||||
Capacity = 86.2 [%]
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/check-laptop-battery-status-and-charging-state-in-linux-terminal/
|
||||
[2]: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/
|
||||
[3]: https://www.2daygeek.com/monitor-laptop-battery-charging-state-linux/
|
||||
[4]: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html
|
||||
[5]: https://www.2daygeek.com/category/package-management/
|
||||
[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -0,0 +1,145 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Podman and user namespaces: A marriage made in heaven)
|
||||
[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces)
|
||||
[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)
|
||||
|
||||
Podman and user namespaces: A marriage made in heaven
|
||||
======
|
||||
Learn how to use Podman to run containers in separate user namespaces.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct)
|
||||
|
||||
[Podman][1], part of the [libpod][2] library, enables users to manage pods, containers, and container images. In my last article, I wrote about [Podman as a more secure way to run containers][3]. Here, I'll explain how to use Podman to run containers in separate user namespaces.
|
||||
|
||||
I have always thought of [user namespace][4], primarily developed by Red Hat's Eric Biederman, as a great feature for separating containers. User namespace allows you to specify a user identifier (UID) and group identifier (GID) mapping to run your containers. This means you can run as UID 0 inside the container and UID 100000 outside the container. If your container processes escape the container, the kernel will treat them as UID 100000. Not only that, but any file object owned by a UID that isn't mapped into the user namespace will be treated as owned by "nobody" (65534, kernel.overflowuid), and the container process will not be allowed access unless the object is accessible by "other" (world readable/writable).
|
||||
|
||||
If you have a file owned by "real" root with permissions [660][5], and the container processes in the user namespace attempt to read it, they will be prevented from accessing it and will see the file as owned by nobody.
|
||||
|
||||
### An example
|
||||
|
||||
Here's how that might work. First, I create a file in my system owned by root.
|
||||
|
||||
```
|
||||
$ sudo bash -c "echo Test > /tmp/test"
|
||||
$ sudo chmod 600 /tmp/test
|
||||
$ sudo ls -l /tmp/test
|
||||
-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
|
||||
```
|
||||
|
||||
Next, I volume-mount the file into a container running with a user namespace map 0:100000:5000.
|
||||
|
||||
```
|
||||
$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
|
||||
# id
|
||||
uid=0(root) gid=0(root) groups=0(root)
|
||||
# ls -l /tmp/test
|
||||
-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
|
||||
# cat /tmp/test
|
||||
cat: /tmp/test: Permission denied
|
||||
```
|
||||
|
||||
The **\--uidmap** setting above tells Podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the host range is 100000-104999), to a range starting at UID 0 inside the container (so the container range is 0-4999). Inside the container, if my process is running as UID 1, it is UID 100001 on the host.
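A quick way to see that mapping from inside the container is to print the kernel's uid_map for the container's own process (a standard procfs file; the output below is a sketch of what you'd expect – container start, host start, length):

```
$ sudo podman run --uidmap 0:100000:5000 fedora cat /proc/self/uid_map
         0     100000       5000
```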
|
||||
|
||||
Since the real UID=0 is not mapped into the container, any file owned by root will be treated as owned by nobody. Even if the process inside the container has **CAP_DAC_OVERRIDE**, it can't override this protection. **DAC_OVERRIDE** enables root processes to read/write any file on the system, even if the file is not owned by root and is not world readable or writable.
|
||||
|
||||
User namespace capabilities are not the same as capabilities on the host. They are namespaced capabilities. This means my container root has capabilities only within the container—really only across the range of UIDs that were mapped into the user namespace. If a container process escaped the container, it wouldn't have any capabilities over UIDs not mapped into the user namespace, including UID=0. Even if the processes could somehow enter another container, they would not have those capabilities if the container uses a different range of UIDs.
|
||||
|
||||
Note that SELinux and other technologies also limit what would happen if a container process broke out of the container.
|
||||
|
||||
### Using `podman top` to show user namespaces
|
||||
|
||||
We have added features to **podman top** to allow you to examine the usernames of processes running inside a container and identify their real UIDs on the host.
|
||||
|
||||
Let's start by running a sleep container using our UID mapping.
|
||||
|
||||
```
|
||||
$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000
|
||||
```
|
||||
|
||||
Now run **podman top** :
|
||||
|
||||
```
|
||||
$ sudo podman top --latest user huser
|
||||
USER HUSER
|
||||
root 100000
|
||||
|
||||
$ ps -ef | grep sleep
|
||||
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
```
|
||||
|
||||
Notice **podman top** reports that the user process is running as root inside the container but as UID 100000 on the host (HUSER). Also the **ps** command confirms that the sleep process is running as UID 100000.
|
||||
|
||||
Now let's run a second container, but this time we will choose a separate UID map starting at 200000.
|
||||
|
||||
```
|
||||
$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000
|
||||
$ sudo podman top --latest user huser
|
||||
USER HUSER
|
||||
root 200000
|
||||
|
||||
$ ps -ef | grep sleep
|
||||
100000 21821 21809 0 08:04 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
200000 23644 23632 1 08:08 ? 00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000
|
||||
```
|
||||
|
||||
Notice that **podman top** reports the second container is running as root inside the container but as UID=200000 on the host.
|
||||
|
||||
Also look at the **ps** command—it shows both sleep processes running: one as 100000 and the other as 200000.
|
||||
|
||||
This means running the containers inside separate user namespaces gives you traditional UID separation between processes, which has been the standard security tool of Linux/Unix from the beginning.
|
||||
|
||||
### Problems with user namespaces
|
||||
|
||||
For several years, I've advocated user namespaces as the security tool everyone wants but hardly anyone has used. The reason is that there hasn't been any filesystem support for them, such as a UID-shifting filesystem.
|
||||
|
||||
In containers, you want to share the **base** image between lots of containers. The examples above all use the Fedora base image. Most of the files in the Fedora image are owned by real UID=0. If I run a container on this image with the user namespace 0:100000:5000, by default it sees all of these files as owned by nobody, so we need to shift all of these UIDs to match the user namespace. For years, I've wanted a mount option to tell the kernel to remap these file UIDs to match the user namespace. Upstream kernel storage developers continue to investigate and make progress on this feature, but it is a difficult problem.
|
||||
|
||||
|
||||
Podman can use different user namespaces on the same image because of automatic [chowning][6] built into [containers/storage][7] by a team led by Nalin Dahyabhai. Podman uses containers/storage, and the first time Podman uses a container image in a new user namespace, containers/storage "chowns" (i.e., changes ownership for) all files in the image to the UIDs mapped in the user namespace and creates a new image. Think of this as the **fedora:0:100000:5000** image.
|
||||
|
||||
When Podman runs another container on the image with the same UID mappings, it uses the "pre-chowned" image. When I run the second container on 0:200000:5000, containers/storage creates a second image, let's call it **fedora:0:200000:5000**.
|
||||
|
||||
Note that if you do a **podman build** or **podman commit** and push the newly created image to a container registry, Podman will use containers/storage to reverse the shift and push the image with all files chowned back to real UID=0.
|
||||
|
||||
This can cause a real slowdown in creating containers in new UID mappings since the **chown** can be slow depending on the number of files in the image. Also, on a normal [OverlayFS][8], every file in the image gets copied up. The normal Fedora image can take up to 30 seconds to finish the chown and start the container.
|
||||
|
||||
Luckily, the Red Hat kernel storage team, primarily Vivek Goyal and Miklos Szeredi, added a new feature to OverlayFS in kernel 4.19. The feature is called **metadata only copy-up**. If you mount an overlay filesystem with **metacopy=on** as a mount option, it will not copy up the contents of the lower layers when you change file attributes; the kernel creates new inodes that include the attributes with references pointing at the lower-level data. It will still copy up the contents if the content changes. This functionality is available in the Red Hat Enterprise Linux 8 Beta, if you want to try it out.
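For reference, a metacopy-enabled overlay mount looks roughly like the following sketch, with all directory paths as placeholders:

```
# All paths below are placeholders for real lower/upper/work/merged directories
sudo mount -t overlay overlay \
  -o lowerdir=/path/lower,upperdir=/path/upper,workdir=/path/work,metacopy=on \
  /path/merged
```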
|
||||
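As a minimal sketch of what that mount option looks like (the directories below are placeholders, and your kernel must be 4.19 or newer with metacopy support enabled):

```
# Mount an overlay filesystem with metadata-only copy-up enabled
$ sudo mount -t overlay overlay \
    -o lowerdir=/lower,upperdir=/upper,workdir=/work,metacopy=on /merged
```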
|
||||
This means container chowning can happen in a couple of seconds, and you won't double the storage space for each container.
|
||||
|
||||
This makes running containers with tools like Podman in separate user namespaces viable, greatly increasing the security of the system.
|
||||
|
||||
### Going forward
|
||||
|
||||
I want to add a new flag, like **\--userns=auto**, to Podman that will tell it to automatically pick a unique user namespace for each container you run. This is similar to the way SELinux works with separate multi-category security (MCS) labels. If you set the environment variable **PODMAN_USERNS=auto**, you won't even need to set the flag.
|
||||
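If the flag lands as proposed, usage might look something like the sketch below. This is hypothetical, since the flag is only being proposed here:

```
# Hypothetical: let Podman pick a unique user namespace per container
$ sudo podman run -d --userns=auto fedora sleep 1000

# Hypothetical: the same behavior via the proposed environment variable
$ export PODMAN_USERNS=auto
$ sudo podman run -d fedora sleep 1000
```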
|
||||
Podman is finally allowing users to run containers in separate user namespaces. Tools like [Buildah][9] and [CRI-O][10] will also be able to take advantage of user namespaces. For CRI-O, however, Kubernetes needs to understand which user namespace will run the container engine, and the upstream is working on that.
|
||||
|
||||
In my next article, I will explain how to run Podman as non-root in a user namespace.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/podman-and-user-namespaces
|
||||
|
||||
作者:[Daniel J Walsh][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/rhatdan
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://podman.io/
|
||||
[2]: https://github.com/containers/libpod
|
||||
[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
|
||||
[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html
|
||||
[5]: https://chmodcommand.com/chmod-660/
|
||||
[6]: https://en.wikipedia.org/wiki/Chown
|
||||
[7]: https://github.com/containers/storage
|
||||
[8]: https://en.wikipedia.org/wiki/OverlayFS
|
||||
[9]: https://buildah.io/
|
||||
[10]: http://cri-o.io/
|
@ -0,0 +1,57 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Relax by the fire at your Linux terminal)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-aafire)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
Relax by the fire at your Linux terminal
|
||||
======
|
||||
Chestnuts roasting on an open command prompt? Why not, with this fun Linux toy.
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-aafire.png?itok=pAttiVvG)
|
||||
|
||||
Welcome back. Here we are, just past the halfway mark at day 13 of our 24 days of Linux command-line toys. If this is your first visit to the series, see the link to the previous article at the bottom of this one, and take a look back to learn what it's all about. In short, our command-line toys are anything that's a fun diversion at the terminal.
|
||||
|
||||
Maybe some are familiar, and some aren't. Either way, we hope you have fun.
|
||||
|
||||
If you're in the northern hemisphere outside of the tropics, perhaps winter is starting to rear its frigid face outside. At least it is where I live. And some days I'd love nothing more than to curl up by the fire with a cup of tea and my favorite book (or a digital equivalent).
|
||||
|
||||
The bad news is my house lacks a fireplace. The good news is that I can still pretend, thanks to the Linux terminal and today's command-line toy, **aafire**.
|
||||
|
||||
On my system, I found **aafire** packaged with **aalib**, a delightful library for converting visual images into the style of ASCII art and making it available at your terminal (or elsewhere). **aalib** enables all sorts of fun graphics at the Linux terminal, so we may revisit a toy or two that make use of it before the end of our series. On Fedora, this meant installation was as simple as:
|
||||
|
||||
```
|
||||
$ sudo dnf install aalib
|
||||
```
|
||||
|
||||
Then, it was simple to launch with the **aafire** command. By default, **aalib** attempted to draw to my GUI, so I had to manually override it to keep my fire in the terminal (this is a command-line series, after all). Fortunately, it comes with a [curses][1] driver, so this meant I just had to run the following to get my fire going:
|
||||
|
||||
```
|
||||
$ aafire -driver curses
|
||||
```
|
||||
![](https://opensource.com/sites/default/files/uploads/linux-toy-aafire-animated.gif)
|
||||
You can find out more about the **aalib** library and download the source on [SourceForge][2], under an LGPLv2 license.
|
||||
|
||||
Do you have a favorite command-line toy that you think I ought to include? The calendar for this series is mostly filled out but I've got a few spots left. Let me know in the comments below, and I'll check it out. If there's space, I'll try to include it. If not, but I get some good submissions, I'll do a round-up of honorable mentions at the end.
|
||||
|
||||
Check out yesterday's toy, [Patch into The Matrix at the Linux command line][3] , and check back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-aafire
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Curses_(programming_library)
|
||||
[2]: http://aa-project.sourceforge.net/aalib/
|
||||
[3]: https://opensource.com/article/18/12/linux-toy-cmatrix
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (jlztan)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -0,0 +1,179 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Install Rust Programming Language In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-install-rust-programming-language-in-linux/)
|
||||
[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/)
|
||||
|
||||
How To Install Rust Programming Language In Linux
|
||||
======
|
||||
|
||||
Rust is often called rust-lang.
|
||||
|
||||
Rust is a general-purpose, multi-paradigm, modern, cross-platform, and open source systems programming language sponsored by Mozilla Research.
|
||||
|
||||
It was designed to achieve goals such as safety, speed, and concurrency.
|
||||
|
||||
Rust is syntactically similar to C++, but its designers intend it to provide better memory safety while still maintaining performance.
|
||||
|
||||
Rust is currently used by many organizations and projects, such as Firefox, Chef, Dropbox, Oracle, and GNOME.
|
||||
|
||||
### How to Install the Rust Language in Linux?
|
||||
|
||||
There are many ways to install Rust, but the officially recommended way is shown below.
|
||||
|
||||
```
|
||||
$ curl https://sh.rustup.rs -sSf | sh
|
||||
info: downloading installer
|
||||
|
||||
Welcome to Rust!
|
||||
|
||||
This will download and install the official compiler for the Rust programming
|
||||
language, and its package manager, Cargo.
|
||||
|
||||
It will add the cargo, rustc, rustup and other commands to Cargo's bin
|
||||
directory, located at:
|
||||
|
||||
/home/daygeek/.cargo/bin
|
||||
|
||||
This path will then be added to your PATH environment variable by modifying the
|
||||
profile files located at:
|
||||
|
||||
/home/daygeek/.profile
|
||||
/home/daygeek/.bash_profile
|
||||
|
||||
You can uninstall at any time with rustup self uninstall and these changes will
|
||||
be reverted.
|
||||
|
||||
Current installation options:
|
||||
|
||||
default host triple: x86_64-unknown-linux-gnu
|
||||
default toolchain: stable
|
||||
modify PATH variable: yes
|
||||
|
||||
1) Proceed with installation (default)
|
||||
2) Customize installation
|
||||
3) Cancel installation
|
||||
>1
|
||||
|
||||
info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
|
||||
info: latest update on 2018-12-06, rust version 1.31.0 (abe02cefd 2018-12-04)
|
||||
info: downloading component 'rustc'
|
||||
77.7 MiB / 77.7 MiB (100 %) 1.2 MiB/s ETA: 0 s
|
||||
info: downloading component 'rust-std'
|
||||
54.2 MiB / 54.2 MiB (100 %) 1.2 MiB/s ETA: 0 s
|
||||
info: downloading component 'cargo'
|
||||
4.7 MiB / 4.7 MiB (100 %) 1.2 MiB/s ETA: 0 s
|
||||
info: downloading component 'rust-docs'
|
||||
8.5 MiB / 8.5 MiB (100 %) 1.2 MiB/s ETA: 0 s
|
||||
info: installing component 'rustc'
|
||||
info: installing component 'rust-std'
|
||||
info: installing component 'cargo'
|
||||
info: installing component 'rust-docs'
|
||||
info: default toolchain set to 'stable'
|
||||
|
||||
stable installed - rustc 1.31.0 (abe02cefd 2018-12-04)
|
||||
|
||||
|
||||
Rust is installed now. Great!
|
||||
|
||||
To get started you need Cargo's bin directory ($HOME/.cargo/bin) in your PATH
|
||||
environment variable. Next time you log in this will be done automatically.
|
||||
|
||||
To configure your current shell run source $HOME/.cargo/env
|
||||
```
|
||||
|
||||
Run the following command to configure your current shell.
|
||||
|
||||
```
|
||||
$ source $HOME/.cargo/env
|
||||
```
|
||||
|
||||
Run the following command to verify the installed Rust version.
|
||||
|
||||
```
|
||||
$ rustc --version
|
||||
rustc 1.31.0 (abe02cefd 2018-12-04)
|
||||
```
|
||||
|
||||
### How to Test the Rust Programming Language?
|
||||
|
||||
Once you have installed Rust, follow the steps below to check whether the Rust programming language is working correctly.
|
||||
|
||||
```
|
||||
$ mkdir ~/projects
|
||||
$ cd ~/projects
|
||||
$ mkdir hello_world
|
||||
$ cd hello_world
|
||||
```
|
||||
|
||||
Create a file, add the code below, and save it. Note that Rust source files always end with a .rs extension.
|
||||
|
||||
```
|
||||
$ vi 2g.rs
|
||||
|
||||
fn main() {
|
||||
println!("Hello, It's 2DayGeek.com - Best Linux Practical Blog!");
|
||||
}
|
||||
```
|
||||
|
||||
Run the following command to compile the Rust code.
|
||||
|
||||
```
|
||||
$ rustc 2g.rs
|
||||
```
|
||||
|
||||
The above command will create an executable Rust program in the same directory.
|
||||
|
||||
```
|
||||
$ ls -lh
|
||||
total 3.9M
|
||||
-rwxr-xr-x 1 daygeek daygeek 3.9M Dec 14 11:09 2g
|
||||
-rw-r--r-- 1 daygeek daygeek 86 Dec 14 11:09 2g.rs
|
||||
```
|
||||
|
||||
Run the Rust executable file to get the output.
|
||||
|
||||
```
|
||||
$ ./2g
|
||||
Hello, It's 2DayGeek.com - Best Linux Practical Blog!
|
||||
```
|
||||
|
||||
Yup! It's working fine.
|
||||
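Since rustup also installs Cargo, Rust's package manager and build tool, you could build and run the same kind of program as a Cargo project instead of invoking rustc directly. A minimal sketch (the project name here is just an example):

```
$ cargo new hello_cargo    # creates a new project with a "Hello, world!" template
$ cd hello_cargo
$ cargo run                # compiles and runs the project in one step
```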
|
||||
To update Rust to the latest version, run:
|
||||
|
||||
```
|
||||
$ rustup update
|
||||
info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
|
||||
info: checking for self-updates
|
||||
|
||||
stable-x86_64-unknown-linux-gnu unchanged - rustc 1.31.0 (abe02cefd 2018-12-04)
|
||||
```
|
||||
|
||||
Run the following command to remove the Rust package from your system.
|
||||
|
||||
```
|
||||
$ rustup self uninstall
|
||||
```
|
||||
|
||||
Once you have uninstalled the Rust package, you can also remove the Rust project directory.
|
||||
|
||||
```
|
||||
$ rm -fr ~/projects
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-install-rust-programming-language-in-linux/
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/prakash/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,66 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (The Linux terminal is no one-trick pony)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-ponysay)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
The Linux terminal is no one-trick pony
|
||||
======
|
||||
Bring the magic of My Little Pony to your Linux command line.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-ponysay.png?itok=ehl6pTr_)
|
||||
|
||||
Welcome to another day of the Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
|
||||
|
||||
Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.
|
||||
|
||||
Reader [Lori][1] made the suggestion of today's toy in a comment on my previous article on [cowsay][2]:
|
||||
|
||||
"Hmmm, I've been playing with something called ponysay which seems to be a full-color variant on your cowsay."
|
||||
|
||||
Intrigued, I had to check it out, and I was not disappointed with what I found.
|
||||
|
||||
In a nutshell, **[ponysay][3]** is exactly that: a rewrite of **cowsay** that includes many full-color characters from [My Little Pony][4], which you can use to output phrases at the Linux command line. It's actually a really well-done project that features over 400 characters and character combinations, and it is incredibly well documented in a [78-page PDF][5] covering full usage.
|
||||
|
||||
To install **ponysay**, you'll want to check out the project [README][6] to select the installation method that works best for your distribution and situation. Since ponysay didn't appear to be packaged for my distribution, Fedora, I opted to try out the Docker container image, but do what works best for you; installation from source may also be an option.
|
||||
|
||||
I was curious to try out [**podman**][7] as a drop-in replacement for **docker** for casual container use, and for me at least, it just worked!
|
||||
|
||||
```
|
||||
$ podman run -ti --rm mpepping/ponysay 'Ponytastic'
|
||||
```
|
||||
|
||||
The outputs are amazing, and I challenge you to try it out and let me know your favorite. Here was one of mine:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux-toy-ponysay-output.png)
|
||||
|
||||
Its developers chose to write the code in [Pony][8]! (Update: Sadly, I was wrong about this. It's written in Python, though GitHub believes it to be Pony because of the file extensions.) Ponysay is licensed under the GPL version 3, and you can pick up its source code [on GitHub][3].
|
||||
|
||||
Do you have a favorite command-line toy that you think I ought to profile? The calendar for this series is mostly filled out but I've got a few spots left. Let me know in the comments below, and I'll check it out. If there's space, I'll try to include it. If not, but I get some good submissions, I'll do a round-up of honorable mentions at the end.
|
||||
|
||||
Check out yesterday's toy, [Relax by the fire at your Linux terminal][9], and check back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-ponysay
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/users/n8chz
|
||||
[2]: https://opensource.com/article/18/12/linux-toy-cowsay
|
||||
[3]: https://github.com/erkin/ponysay
|
||||
[4]: https://en.wikipedia.org/wiki/My_Little_Pony
|
||||
[5]: https://github.com/erkin/ponysay/blob/master/ponysay.pdf?raw=true
|
||||
[6]: https://github.com/erkin/ponysay/blob/master/README.md
|
||||
[7]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers
|
||||
[8]: https://opensource.com/article/18/5/pony
|
||||
[9]: https://opensource.com/article/18/12/linux-toy-aafire
|
@ -0,0 +1,180 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Tips for using Flood Element for performance testing)
|
||||
[#]: via: (https://opensource.com/article/18/12/tips-flood-element-testing)
|
||||
[#]: author: (Nicole van der Hoeven https://opensource.com/users/nicolevanderhoeven)
|
||||
|
||||
Tips for using Flood Element for performance testing
|
||||
======
|
||||
Get started with this powerful, intuitive load testing tool.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn)
|
||||
|
||||
In case you missed it, there’s a new performance test tool on the block: [Flood Element][1]. It’s a scalable, browser-based tool that allows you to write scripts in JavaScript that interact with web pages like a real user would.
|
||||
|
||||
Browser Level Users is a [newer approach to load testing][2] that overcomes many of the common challenges we hear about traditional methods of testing. It offers:
|
||||
|
||||
* Scripting that is akin to common functional tools like Selenium and easier to learn
|
||||
* More realistic results that are based on true browser performance rather than API response
|
||||
* The ability to test against all components of your web app, including things like JavaScript that are rendered via the browser
|
||||
|
||||
|
||||
|
||||
Given the above benefits, it’s a no-brainer to check out Flood Element for your web load testing, especially if you have struggled with existing tools like JMeter or HP LoadRunner.
|
||||
|
||||
Pairing Element with [Flood][3] turns it into a pretty powerful load test tool. We have a [great guide here][4] that you can follow if you’d like to get started. I’ve been using and testing Element for several months now, and I’d like to share some tips I’ve learned along the way.
|
||||
|
||||
### Initializing your script
|
||||
|
||||
You can always start from scratch, but the quickest way to get started is to type `element init myfirstelementtest` from your terminal, filling in your preferred project name.
|
||||
|
||||
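In other words, the whole initialization step is a single command (substitute your own project name):

```
$ element init myfirstelementtest
```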
You’ll then be asked to type the title of your test as well as the URL you’d like to script against. After a minute, you’ll see that a new directory has been created:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/image_1_-_new_directory.png)
|
||||
|
||||
Element will automatically create a file called **test.ts**. This file contains the skeleton of a script, along with some sample code to help you find a button and then click on it. But before you open it, let’s move on to…
|
||||
|
||||
### Choosing the right text editor
|
||||
|
||||
Scripting in Element is already pretty simple, but two things that help are syntax highlighting and code completion. Syntax highlighting will greatly improve the experience of learning a new test tool like Element, and code completion will make your scripting lightning-fast as you become more experienced. My text editor of choice is [Visual Studio Code][5], which has both of those features. It’s slick and clean, and it does the job.
|
||||
|
||||
Syntax highlighting is when the text editor intelligently changes the font color of your code according to its role in the programming language you’re using. Here’s a screenshot of the **test.ts** file we generated earlier in VS Code to show you what I mean:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/image_2_test.ts_.png)
|
||||
|
||||
This makes it easier to make sense of the code at a glance: Comments are in green, values and labels are in orange, etc.
|
||||
|
||||
Code completion is when you start to type something, and VS Code helpfully opens a context menu with suggestions for methods you can use.
|
||||
|
||||
![][6]
|
||||
|
||||
I love this because it means I don’t need to remember the exact name of the method. It also suggests names of variables you’ve already defined and highlights code that doesn’t make sense. This will help to make your tests more maintainable and readable for others, which is a great benefit as you look to scale your testing out in the future.
|
||||
|
||||
![](https://opensource.com/sites/default/files/image-4-element-visible-copy.gif)
|
||||
|
||||
### Taking screenshots
|
||||
|
||||
One of the most powerful features of Element is its ability to take screenshots. I find it immensely useful when debugging because sometimes it’s just easier to see what’s going on visually. With protocol-based tools, debugging can be a much more involved and technical process.
|
||||
|
||||
There are two ways to take screenshots in Element:
|
||||
|
||||
1. Add a setting to automatically take a screenshot when an error is encountered. You can do this by setting `screenshotOnFailure` to "true" in `TestSettings`:
|
||||
|
||||
|
||||
|
||||
```
|
||||
export const settings: TestSettings = {
|
||||
device: Device.iPadLandscape,
|
||||
userAgent: 'flood-chrome-test',
|
||||
clearCache: true,
|
||||
disableCache: true,
|
||||
screenshotOnFailure: true,
|
||||
}
|
||||
```
|
||||
|
||||
2. Explicitly take a screenshot at a particular point in the script. You can do this by adding
|
||||
|
||||
|
||||
|
||||
```
|
||||
await browser.takeScreenshot()
|
||||
```
|
||||
|
||||
to your code.
|
||||
|
||||
### Viewing screenshots
|
||||
|
||||
Once you’ve taken screenshots within your tests, you will probably want to view them and know that they will be stored for future safekeeping. Whether you are running your test locally or have uploaded it to Flood to run with increased concurrency, Flood Element has you covered.
|
||||
|
||||
**Locally run tests**
|
||||
|
||||
Screenshots will be saved as .jpg files in a timestamped folder corresponding to your run. It should look something like this: **…myfirstelementtest/tmp/element-results/test/2018-11-20T135700.595Z/flood/screenshots/**. The screenshots will be uniquely named so that new screenshots, even for the same step, don’t overwrite older ones.
|
||||
|
||||
However, I rarely need to look up the screenshots in that folder because I prefer to see them in iTerm2 for macOS. iTerm is an alternative to the terminal that works particularly well with Element. When you take a screenshot, iTerm actually shows it in-line:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/image_5_iterm_inline.png)
|
||||
|
||||
**Tests run in Flood**
|
||||
|
||||
Running an Element script on Flood is ideal when you need larger concurrency. Rather than accessing your screenshot locally, Flood will centralize the images into your account, so the images remain even after the cloud load injectors are destroyed. You can get to the screenshot files by downloading Archived Results:
|
||||
|
||||
![](https://opensource.com/sites/default/files/image_6_archived_results.png)
|
||||
|
||||
You can also click on a step on the dashboard to see a filmstrip of your test:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/image_7_filmstrip_view.png)
|
||||
|
||||
### Using logs
|
||||
|
||||
You may need to check out the logs for more technical debugging, especially when the screenshots don’t tell the whole story. Again, whether you are running your test locally or have uploaded it to Flood to run with increased concurrency, Flood Element has you covered.
|
||||
|
||||
**Locally run tests**
|
||||
|
||||
You can print to the console by typing, for example: `console.log('orderValues = ' + orderValues)`
|
||||
|
||||
This will print the value of the variable `orderValues` at that point in the script. You would see this in your terminal if you’re running Element locally.
|
||||
|
||||
**Tests run in Flood**
|
||||
|
||||
If you’re running the script on Flood, you can either download the log (in the same Archived Results zipped file mentioned earlier) or click on the Logs tab:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/image_8_logs_tab.png)
|
||||
|
||||
### Fun with flags
|
||||
|
||||
Element comes with a few flags that give you more control over how the script is run locally. Here are a few of my favorites:
|
||||
|
||||
**Headless flag**
|
||||
|
||||
When in doubt, run Element in non-headless mode to see the script actually opening the web app on Chrome and interacting with the page. This is only possible locally, but there’s nothing like actually seeing for yourself what’s happening in real time instead of relying on screenshots and logs after the fact. To enable this mode, add the flag when running your test:
|
||||
|
||||
```
|
||||
element run myfirstelementtest.ts --no-headless
|
||||
```
|
||||
|
||||
**Watch flag**
|
||||
|
||||
Element will automatically close the browser window when it encounters an error or finishes the iteration. Adding `--watch` will leave the browser window open and then monitor the script. As soon as the script is saved, Element will automatically re-run it in the same window from the beginning. Simply add this flag as in the example above:
|
||||
|
||||
```
|
||||
element run myfirstelementtest.ts --watch
|
||||
```
|
||||
|
||||
**Dev tools flag**
|
||||
|
||||
This opens a browser instance and runs the script with the Chrome Dev Tools open, allowing you to find locators for the next action you want to script. Simply add this flag as in the first example:
|
||||
|
||||
```
|
||||
element run myfirstelementtest.ts --dev-tools
|
||||
```
|
||||
|
||||
For more flags, use `element run --help`.
|
||||
|
||||
### Try Element
|
||||
|
||||
You’ve just gotten a crash course on Flood Element and are ready to get started. [Download Element][1] to start writing functional test scripts and reusing them as load test scripts on Flood. If you don’t have a Flood account, you can easily sign up for a free trial [on the Flood website][7].
|
||||
|
||||
We’re proud to contribute to the open source community and can’t wait to have you try this new addition to the Flood line.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/tips-flood-element-testing
|
||||
|
||||
作者:[Nicole van der Hoeven][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/nicolevanderhoeven
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://element.flood.io/
|
||||
[2]: https://flood.io/blog/why-you-should-load-test-with-browsers/
|
||||
[3]: https://flood.io/
|
||||
[4]: https://help.flood.io/getting-started-with-load-testing/step-by-step-guide-flood-element
|
||||
[5]: https://code.visualstudio.com/
|
||||
[6]: https://flood.io/wp-content/uploads/2018/11/vscode-codecompletion2.gif
|
||||
[7]: https://flood.io/load-performance-testing-tool/free-load-testing-trial/
|
@ -0,0 +1,52 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Head to the arcade in your Linux terminal with this Pac-man clone)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-myman)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
Head to the arcade in your Linux terminal with this Pac-man clone
|
||||
======
|
||||
Want to recreate the magic of your favorite arcade game? Today's command-line toy will transport you back in time.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-myman.png?itok=9j1DFgH0)
|
||||
|
||||
Welcome back to another day of the Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what command-line toys are all about. Basically, they're games and simple diversions that help you have fun at the terminal.
|
||||
|
||||
Some are new, and some are old classics. We hope you enjoy.
|
||||
|
||||
Today's toy, MyMan, is a fun clone of the classic arcade game [Pac-Man][1]. (You didn't think this was going to be about the [similarly-named][2] Linux package manager, did you?) If you're anything like me, you spent more than your fair share of quarters trying to hit a high score in Pac-Man back in the day, and you still give it a go whenever you get a chance.
|
||||
|
||||
MyMan isn't the only Pac-Man clone for the Linux terminal, but it's the one I chose to include because 1) I like its visual style, which rings true to the original, and 2) it's conveniently packaged for my Linux distribution, so it was an easy install. But you should check out your other options as well. Here's [another one][3] that looks like it may be promising, but I haven't tried it.
|
||||
|
||||
Since MyMan was packaged for Fedora, installation was as simple as:
|
||||
|
||||
```
|
||||
$ dnf install myman
|
||||
```
|
||||
|
||||
MyMan is made available under an MIT license and you can check out the source code on [SourceForge][4].
|
||||
![](https://opensource.com/sites/default/files/uploads/linux-toy-myman-animated.gif)
|
||||
Do you have a favorite command-line toy that you think I ought to profile? The calendar for this series is mostly filled out but I've got a few spots left. Let me know in the comments below, and I'll check it out. If there's space, I'll try to include it. If not, but I get some good submissions, I'll do a round-up of honorable mentions at the end.
|
||||
|
||||
Check out yesterday's toy, [The Linux terminal is no one-trick pony][5], and check back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-myman
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/Pac-Man
|
||||
[2]: https://wiki.archlinux.org/index.php/pacman
|
||||
[3]: https://github.com/YoctoForBeaglebone/pacman4console
|
||||
[4]: https://myman.sourceforge.io/
|
||||
[5]: https://opensource.com/article/18/12/linux-toy-ponysay
|
@ -0,0 +1,62 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Schedule a visit with the Emacs psychiatrist)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-eliza)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
Schedule a visit with the Emacs psychiatrist
|
||||
======
|
||||
Eliza is a natural language processing chatbot hidden inside of one of Linux's most popular text editors.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-eliza.png?itok=3ioiBik_)
|
||||
|
||||
Welcome to another day of the 24-day-long Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
|
||||
|
||||
Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.
|
||||
|
||||
Today's selection is a hidden gem inside of Emacs: Eliza, the Rogerian psychotherapist, a terminal toy ready to listen to everything you have to say.
|
||||
|
||||
A brief aside: While this toy is amusing, your health is no laughing matter. Please take care of yourself this holiday season, physically and mentally, and if stress and anxiety from the holidays are having a negative impact on your wellbeing, please consider seeing a professional for guidance. It really can help.
|
||||
|
||||
To launch [Eliza][1], first, you'll need to launch Emacs. There's a good chance Emacs is already installed on your system, but if it's not, it's almost certainly in your default repositories.
|
||||
|
||||
Since I've been pretty fastidious about keeping this series in the terminal, launch Emacs with the **-nw** flag to keep it within your terminal emulator.
|
||||
|
||||
```
|
||||
$ emacs -nw
|
||||
```
|
||||
|
||||
Inside of Emacs, type **M-x doctor** to launch Eliza. For those of you who, like me, come from a Vim background and have no idea what this means, just hit Escape, type **x**, and then type **doctor**. Then, share all of your holiday frustrations.
|
||||
|
||||
Eliza goes way back, all the way to the mid-1960s at the MIT Artificial Intelligence Lab. [Wikipedia][2] has a rather fascinating look at her history.
|
||||
|
||||
Eliza isn't the only amusement inside of Emacs. Check out the [manual][3] for a whole list of fun toys.
|
||||
|
||||
|
||||
![Linux toy: eliza animated][5]
|
||||
|
||||
Do you have a favorite command-line toy that you think I ought to profile? We're running out of time, but I'd still love to hear your suggestions. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement.
|
||||
|
||||
Be sure to check out yesterday's toy, [Head to the arcade in your Linux terminal with this Pac-man clone][6], and come back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-eliza
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.emacswiki.org/emacs/EmacsDoctor
|
||||
[2]: https://en.wikipedia.org/wiki/ELIZA
|
||||
[3]: https://www.gnu.org/software/emacs/manual/html_node/emacs/Amusements.html
|
||||
[4]: /file/417326
|
||||
[5]: https://opensource.com/sites/default/files/uploads/linux-toy-eliza-animated.gif (Linux toy: eliza animated)
|
||||
[6]: https://opensource.com/article/18/12/linux-toy-myman
|
@ -0,0 +1,93 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (4 cool new projects to try in COPR for December 2018)
|
||||
[#]: via: (https://fedoramagazine.org/4-try-copr-december-2018/)
|
||||
[#]: author: (Dominik Turecek https://fedoramagazine.org)
|
||||
|
||||
4 cool new projects to try in COPR for December 2018
|
||||
======
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg)
|
||||
|
||||
COPR is a [collection][1] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
|
||||
|
||||
Here’s a set of new and interesting projects in COPR.
|
||||
|
||||
### MindForger
|
||||
|
||||
[MindForger][2] is a Markdown editor and a notebook. In addition to features you’d expect from a Markdown editor, MindForger lets you split a single file into multiple notes. It’s easy to organize the notes and move them around between files, as well as search through them. I’ve been using MindForger for some time for my study notes, so it’s nice that it’s available through COPR now.![][3]
|
||||
|
||||
#### Installation instructions
|
||||
|
||||
The repo currently provides MindForger for Fedora 29 and Rawhide. To install MindForger, use these commands:
|
||||
|
||||
```
|
||||
sudo dnf copr enable deadmozay/mindforger
|
||||
sudo dnf install mindforger
|
||||
```
|
||||
|
||||
### Clingo
|
||||
|
||||
[Clingo][4] is a program for solving logical problems using [answer set programming][5] (ASP) modeling language. With ASP, you can declaratively describe a problem as a logical program that Clingo then solves. As a result, Clingo produces solutions to the problem in the form of logical models, called answer sets.
|
||||
|
||||
#### Installation instructions
|
||||
|
||||
The repo currently provides Clingo for Fedora 28 and 29. To install Clingo, use these commands:
|
||||
|
||||
```
|
||||
sudo dnf copr enable timn/clingo
|
||||
sudo dnf install clingo
|
||||
```
|
||||
|
||||
### SGVrecord
|
||||
|
||||
[SGVrecord][6] is a simple tool for recording your screen. It allows you to either capture the whole screen or select just a part of it. Furthermore, it is possible to record with or without sound. SGVrecord produces files in WebM format.![][7]
|
||||
|
||||
#### Installation instructions
|
||||
|
||||
The repo currently provides SGVrecord for Fedora 28, 29, and Rawhide. To install SGVrecord, use these commands:
|
||||
|
||||
```
|
||||
sudo dnf copr enable youssefmsourani/sgvrecord
|
||||
sudo dnf install sgvrecord
|
||||
```
|
||||
|
||||
### Watchman
|
||||
|
||||
[Watchman][8] is a service for monitoring and recording when changes are made to files. You can specify directory trees for Watchman to monitor, as well as define actions that are triggered when specified files are changed.
|
||||
|
||||
#### Installation instructions
|
||||
|
||||
The repo currently provides Watchman for Fedora 29 and Rawhide. To install Watchman, use these commands:
|
||||
|
||||
```
|
||||
sudo dnf copr enable eklitzke/watchman
|
||||
sudo dnf install watchman
|
||||
```
|
||||
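Once installed, a minimal sketch of pointing Watchman at a directory and wiring up a trigger might look like this (the path, trigger name, and command below are all illustrative):

```
# Start watching a directory tree
$ watchman watch ~/myproject

# Run `make` whenever a .c file under it changes
$ watchman -- trigger ~/myproject rebuild '*.c' -- make
```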
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/4-try-copr-december-2018/
|
||||
|
||||
作者:[Dominik Turecek][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://copr.fedorainfracloud.org/
|
||||
[2]: https://www.mindforger.com/
|
||||
[3]: https://fedoramagazine.org/wp-content/uploads/2018/12/mindforger.png
|
||||
[4]: https://potassco.org/clingo/
|
||||
[5]: https://en.wikipedia.org/wiki/Answer_set_programming
|
||||
[6]: https://github.com/yucefsourani/sgvrecord
|
||||
[7]: https://fedoramagazine.org/wp-content/uploads/2018/12/SGVrecord.png
|
||||
[8]: https://facebook.github.io/watchman/
|
@ -0,0 +1,78 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (6 tips and tricks for using KeePassX to secure your passwords)
|
||||
[#]: via: (https://opensource.com/article/18/12/keepassx-security-best-practices)
|
||||
[#]: author: (Michael McCune https://opensource.com/users/elmiko)
|
||||
|
||||
6 tips and tricks for using KeePassX to secure your passwords
|
||||
======
|
||||
Get more out of your password manager by following these best practices.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
|
||||
|
||||
Our increasingly interconnected digital world makes security an essential and common discussion topic. We hear about [data breaches][1] with alarming regularity and are often on our own to make informed decisions about how to use technology securely. Although security is a deep and nuanced topic, there are some easy daily habits you can keep to reduce your attack surface.
|
||||
|
||||
Securing passwords and account information is something that affects anyone today. Technologies like [OAuth][2] help make our lives simpler by reducing the number of accounts we need to create, but we are still left with a staggering number of places where we need new, unique information to keep our records secure. An easy way to deal with the increased mental load of organizing all this sensitive information is to use a password manager like [KeePassX][3].
|
||||
|
||||
In this article, I will explain the importance of keeping your password information secure and offer suggestions for getting the most out of KeePassX. For an introduction to KeePassX and its features, I highly recommend Ricardo Frydman's article "[Managing passwords in Linux with KeePassX][4]."
|
||||
|
||||
### Why are unique passwords important?
|
||||
|
||||
Using a different password for each account is the first step in ensuring that your accounts are not vulnerable to shared information leaks. Generating new credentials for every account is time-consuming, and it is extremely common for people to fall into the trap of using the same password on several accounts. The main problem with reusing passwords is that you increase the number of accounts an attacker could access if one of them experiences a credential breach.
|
||||
|
||||
It may seem like a burden to create new credentials for each account, but the few minutes you spend creating and recording this information will pay for itself many times over in the event of a data breach. This is where password management tools like KeePassX are invaluable for providing convenience and reliability in securing your logins.
|
||||
|
||||
### 3 tips for getting the most out of KeePassX
|
||||
|
||||
I have been using KeePassX to manage my password information for many years, and it has become a primary resource in my digital toolbox. Overall, it's fairly simple to use, but there are a few best practices I've learned that I think are worth highlighting.
|
||||
|
||||
1. Add the direct login URL for each account entry. KeePassX has a very convenient shortcut to open the URL listed with an entry. (It's Control+Shift+U on Linux.) When creating a new account entry for a website, I spend some time locating the site's direct login URL. Although most websites have a login widget in their navigation toolbars, they also usually have direct pages for login forms. By putting this URL into the URL field on the account entry setup form, I can use the shortcut to directly open the login page in my browser.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/keepassx-tip1.png)
|
||||
|
||||
2. Use the Notes field to record extra security information. In addition to passwords, most websites will ask several questions to create additional authentication factors for an account. I use the Notes sections in my account entries to record these additional factors.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/keepassx-tip2.png)
|
||||
|
||||
3. Turn on automatic database locking. In the **Application Settings** under the **Tools** menu, there is an option to lock the database after a period of inactivity. Enabling this option is a good common-sense measure, similar to enabling a password-protected screen lock, that will help ensure your password database is not left open and unprotected if someone else gains access to your computer.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/keepassx_application-settings.png)
|
||||
|
||||
### Food for thought
|
||||
|
||||
Protecting your accounts with better password practices and daily habits is just the beginning. Once you start using a password manager, you need to consider issues like protecting the password database file and ensuring you don't forget or lose the master credentials.
|
||||
|
||||
The cloud-native world of disconnected devices and edge computing makes having a central password store essential. The practices and methodologies you adopt will help minimize your risk while you explore and work in the digital world.
|
||||
|
||||
1. Be aware of retention policies when storing your database in the cloud. KeePassX's database has an open format used by several tools on multiple platforms. Sooner or later, you will want to transfer your database to another device. As you do this, consider the medium you will use to transfer the file. The best option is to use some sort of direct transfer between devices, but this is not always convenient. Always think about where the database file might be stored as it winds its way through the information superhighway; an email may get cached on a server, an object store may move old files to a trash folder. Learn about these interactions for the platforms you are using before deciding where and how you will share your database file.
|
||||
|
||||
2. Consider the source of truth for your database while you're making edits. After you share your database file between devices, you might need to create accounts for new services or change information for existing services while using a device. To ensure your information is always correct across all your devices, you need to make sure any edits you make on one device end up in all copies of the database file. There is no easy solution to this problem, but you might think about making all edits from a single device or storing the master copy in a location where all your devices can make edits.
|
||||
|
||||
3. Do you really need to know your passwords? This is more of a philosophical question that touches on the nature of memorable passwords, convenience, and secrecy. I hardly look at passwords as I create them for new accounts; in most cases, I don't even click the "Show Password" checkbox. There is an idea that you can be more secure by not knowing your passwords, as it would be impossible to compel you to provide them. This may seem like a worrisome idea at first, but consider that you can recover or reset passwords for most accounts through alternate verification methods. When you consider that you might want to change your passwords on a semi-regular basis, it almost makes more sense to treat them as ephemeral information that can be regenerated or replaced.
|
||||
|
||||
|
||||
|
||||
|
||||
Here are a few more ideas to consider as you develop your best practices.
|
||||
|
||||
I hope these tips and tricks have helped expand your knowledge of password management and KeePassX. You can find tools that support the KeePass database format on nearly every platform. If you are not currently using a password manager or have never tried KeePassX, I highly recommend doing so now!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/keepassx-security-best-practices
|
||||
|
||||
作者:[Michael McCune][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/elmiko
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://vigilante.pw/
|
||||
[2]: https://en.wikipedia.org/wiki/OAuth
|
||||
[3]: https://www.keepassx.org/
|
||||
[4]: https://opensource.com/business/16/5/keepassx
|
@ -0,0 +1,48 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Take a swim at your Linux terminal with asciiquarium)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-asciiquarium)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
Take a swim at your Linux terminal with asciiquarium
|
||||
======
|
||||
Darling it's better, when your command line is wetter, thanks to ASCII.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-asciiquarium.png?itok=ZhJ9P2Ft)
|
||||
|
||||
We're now nearing the end of our 24-day-long Linux command-line toys advent calendar. Just one week left after today! If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
|
||||
|
||||
Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.
|
||||
|
||||
Today's selection is a fishy one. Say hello to **asciiquarium** , an undersea adventure for your terminal. I found **asciiquarium** in my Fedora repositories, so installing it was as simple as:
|
||||
|
||||
```
|
||||
$ sudo dnf install asciiquarium
|
||||
```
|
||||
|
||||
If you're running a different distribution, chances are it's packaged for you too. Just run **asciiquarium** at your terminal to feel happy as a clam. The project has been translated outside of the terminal as well, with screensavers of all of the aquatic pals being made for several non-Linux operating systems, and even an Android live wallpaper version is floating around out there.
|
||||
|
||||
Visit the **asciiquarium** [homepage][1] for more information or to download the Perl source code. The project is open source under a GPL version 2 license. And if you want to learn more about how open source, open data, and open science are making a difference in the actual oceans, take a moment to go learn about the [Ocean Health Index][2].
|
||||
![](https://opensource.com/sites/default/files/uploads/linux-toy-asciiquarium-animated.gif)
|
||||
Do you have a favorite command-line toy that you think I ought to profile? We're running out of time, but I'd still love to hear your suggestions. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement.
|
||||
|
||||
Be sure to check out yesterday's toy, [Schedule a visit with the Emacs psychiatrist][3], and come back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-asciiquarium
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://robobunny.com/projects/asciiquarium/html/
|
||||
[2]: https://opensource.com/article/18/12/protecting-world-oceans
|
||||
[3]: https://opensource.com/article/18/12/linux-toy-eliza
|
100
sources/tech/20181217 Working with tarballs on Linux.md
Normal file
100
sources/tech/20181217 Working with tarballs on Linux.md
Normal file
@ -0,0 +1,100 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Working with tarballs on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3328840/linux/working-with-tarballs-on-linux.html)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
Working with tarballs on Linux
|
||||
======
|
||||
![](https://images.idgesg.net/images/article/2018/12/tarball-100783148-large.jpg)
|
||||
|
||||
The word “tarball” is often used to describe the type of file used to back up a select group of files and join them into a single file. The name comes from the **.tar** file extension and the **tar** command that is used to group together the files into a single file that is then sometimes compressed to make it smaller for its move to another system.
|
||||
|
||||
Tarballs are often used to back up personal or system files in place to create an archive, especially prior to making changes that might have to be reversed. Linux sysadmins, for example, will often create a tarball containing a series of configuration files before making changes to an application just in case they have to reverse those changes. Extracting the files from a tarball that’s sitting in place will generally be faster than having to retrieve the files from backups.
|
||||
|
||||
### How to create a tarball on Linux
|
||||
|
||||
You can create a tarball and compress it in a single step if you use a command like this one:
|
||||
|
||||
```
|
||||
$ tar -cvzf PDFs.tar.gz *.pdf
|
||||
```
|
||||
|
||||
The result in this case is a compressed (gzipped) file that contains all of the PDF files that are in the current directory. The compression is optional, of course. A slightly simpler command would just put all of the PDF files into an uncompressed tarball:
|
||||
|
||||
```
|
||||
$ tar -cvf PDFs.tar *.pdf
|
||||
```
|
||||
|
||||
Note that it’s the **z** in that list of options that causes the file to be compressed or “zipped”. The **c** specifies that you are creating the file and the **v** (verbose) indicates that you want some feedback while the command is running. Omit the **v** if you don't want to see the files listed.
|
||||
|
||||
Another common naming convention is to give zipped tarballs the extension **.tgz** instead of the double extension **.tar.gz** as shown in this command:
|
||||
|
||||
```
|
||||
$ tar cvzf MyPDFs.tgz *.pdf
|
||||
```
|
||||
|
||||
### How to extract files from a tarball
|
||||
|
||||
To extract all of the files from a gzipped tarball, you would use a command like this:
|
||||
|
||||
```
|
||||
$ tar -xvzf file.tar.gz
|
||||
```
|
||||
|
||||
If you use the .tgz naming convention, that command would look like this:
|
||||
|
||||
```
|
||||
$ tar -xvzf MyPDFs.tgz
|
||||
```
|
||||
|
||||
To extract an individual file from a gzipped tarball, you do almost the same thing but add the file name:
|
||||
|
||||
```
|
||||
$ tar -xvzf PDFs.tar.gz ShenTix.pdf
|
||||
ShenTix.pdf
|
||||
$ ls -l ShenTix.pdf
|
||||
-rw-rw-r-- 1 shs shs 122057 Dec 14 14:43 ShenTix.pdf
|
||||
```
|
||||
|
||||
You can even delete files from a tarball if the tarball is not compressed. For example, if we wanted to remove the file that we extracted above from the PDFs.tar.gz file, we would do it like this:
|
||||
|
||||
```
|
||||
$ gunzip PDFs.tar.gz
|
||||
$ ls -l PDFs.tar
|
||||
-rw-rw-r-- 1 shs shs 10700800 Dec 15 11:51 PDFs.tar
|
||||
$ tar -vf PDFs.tar --delete ShenTix.pdf
|
||||
$ ls -l PDFs.tar
|
||||
-rw-rw-r-- 1 shs shs 10577920 Dec 15 11:45 PDFs.tar
|
||||
```
|
||||
|
||||
Notice that we shaved a little space off the tar file while deleting the ShenTix.pdf file. We can then compress the file again if we want:
|
||||
|
||||
```
|
||||
$ gzip -f PDFs.tar
|
||||
$ ls -l PDFs.tar.gz
-rw-rw-r-- 1 shs shs 10134499 Dec 15 11:51 PDFs.tar.gz
|
||||
```
|
||||
|
||||
The versatility of the command line options makes working with tarballs easy and very convenient.
|
||||
|
||||
Join the Network World communities on [Facebook][1] and [LinkedIn][2] to comment on topics that are top of mind.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.networkworld.com/article/3328840/linux/working-with-tarballs-on-linux.html
|
||||
|
||||
作者:[Sandra Henry-Stocker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.facebook.com/NetworkWorld/
|
||||
[2]: https://www.linkedin.com/company/network-world
|
@ -0,0 +1,137 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Insync: The Hassleless Way of Using Google Drive on Linux)
|
||||
[#]: via: (https://itsfoss.com/insync-linux-review/)
|
||||
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)
|
||||
|
||||
Insync: The Hassleless Way of Using Google Drive on Linux
|
||||
======
|
||||
|
||||
Using Google Drive on Linux is a pain and you probably already know that. There is no official desktop client of Google Drive for Linux. It’s been [more than six years since Google promised Google Drive on Linux][1] but it doesn’t seem to be happening.
|
||||
|
||||
In the absence of the official Google Drive client on Linux, you have no option other than trying the alternatives. I have already discussed a number of [tools that allow you to use Google Drive on Linux][2]. One of those tools is [Insync][3], and in my opinion, this is your best bet for a native Google Drive experience on desktop Linux.
|
||||
|
||||
Note that Insync is not open source software. Heck, it is not even free to use.
|
||||
|
||||
But it has so many features that it becomes an essential tool for those Linux users who rely heavily on Google Drive.
|
||||
|
||||
I briefly discussed Insync in the old article about [Google Drive and Linux][2]. In this article, I’ll discuss Insync features in detail.
|
||||
|
||||
### Insync brings native Google Drive experience to Linux desktop
|
||||
|
||||
![Use insync to access Google Drive in Linux][4]
|
||||
|
||||
The core competency of Insync is syncing your Google Drive, but the app is much more than that. It has features to help you maximize and control your productivity, your Google Drive and your files such as:
|
||||
|
||||
* Cross-platform access (supports Linux, Windows and macOS)
|
||||
* Easy multiple Google Drive accounts access
|
||||
* Choose your syncing location. Sync files to your hard drive, external drives and NAS!
|
||||
* Support for features like file matching, symlink and ignore list
|
||||
|
||||
|
||||
|
||||
Let me show you some of the main features in action:
|
||||
|
||||
#### Cross-platform in true sense
|
||||
|
||||
Insync claims to run the same app across all operating systems i.e., Linux, Windows, and macOS. That means that you can access the same UI across different OSes, making it easy for you to manage your files across multiple machines.
|
||||
|
||||
![The UI of Insync and the default location of the Insync folder.][5]The UI of Insync and the default location of the Insync folder.
|
||||
|
||||
#### Multiple Google account management
|
||||
|
||||
Insync interface allows you to manage multiple Google Drive accounts seamlessly. You can easily switch between several accounts just by clicking your Google account.
|
||||
|
||||
![Switching between multiple Google accounts in Insync][6]Switching between multiple Google accounts
|
||||
|
||||
#### Custom sync folders
|
||||
|
||||
Customize the way you sync your files and folders. You can easily set your syncing destination anywhere on your machine, including external drives and network drives.
|
||||
|
||||
![Customize sync location in Insync][7]Customize sync location
|
||||
|
||||
The selective syncing mode also allows you to easily select a number of files and folders you’d want to sync (or unsync) in your local machine. This includes selectively syncing files within folders.
|
||||
|
||||
![Selective synchronization in Insync][8]Selective synchronization
|
||||
|
||||
It has features like file matching and ‘ignore list’ to help you filter files you don’t want to sync or files that you already have on your machine.
|
||||
|
||||
![File matching feature in Insync][9]Avoids duplication of files
|
||||
|
||||
The ‘ignore list’ allows you to set rules to exclude certain types of files from synchronization.
|
||||
|
||||
![Selective syncing based on rules in Insync][10]Selective syncing based on rules
|
||||
|
||||
If you prefer to work out of the desktop, you have an “Add to Insync” feature that will allow you to add any local file to your Drive.
|
||||
|
||||
![Sync files right from your desktop][11]Sync files right from your desktop
|
||||
|
||||
Insync also supports symlinks for those with workflows that use symbolic links. To learn more about Insync and symlinks, you can refer to [this article.][12]
|
||||
|
||||
#### Exclusive features for Linux
|
||||
|
||||
Insync supports the most commonly used 64-bit Linux distributions like **Ubuntu, Debian and Fedora**. You can check out the full list of distribution support [here][13].
|
||||
|
||||
Insync also has [headless][14] support for those looking to sync through the command line interface. This is perfect if you use a distro that is not fully supported by the GUI app, if you are working with servers, or if you simply prefer the CLI.
|
||||
|
||||
![Insync CLI][15]Command Line Interface
|
||||
|
||||
You can learn more about installing and running Insync headless [here][16].
|
||||
|
||||
### Insync pricing and special discount
|
||||
|
||||
Insync is a premium tool and it comes with a [price tag][17]. You have 2 licenses to choose from:
|
||||
|
||||
* **Prime** is priced at $29.99 per Google account. You’ll get access to: cross-platform syncing, multiple accounts access and **support**.
|
||||
* **Teams** is priced at $49.99 per Google account. You’ll be able to access all the Prime features + Team Drives syncing
|
||||
|
||||
|
||||
|
||||
It’s a one-time fee, which means that once you buy it, you don’t have to pay again. In a world where everything is billed monthly, it’s refreshing to find software you can still pay for just once!
|
||||
|
||||
Each Google account has a 15-day free trial that will allow you to test the full suite of features, including [Team Drives][18] syncing.
|
||||
|
||||
If you think it’s a bit expensive for your budget, I have good news for you. As an It’s FOSS reader, you get Insync at 25% discount.
|
||||
|
||||
Just use the code ITSFOSS25 at checkout and you will get an immediate 25% discount on any license. Isn’t that cool?
|
||||
|
||||
If you are not certain yet, you can try Insync free for 15 days. And if you think it’s worth the money, purchase the license with **ITSFOSS25** coupon code.
|
||||
|
||||
You can download Insync from their website.
|
||||
|
||||
I have used Insync since the time it was available for free, and I have always liked it. They have added more features over time and improved its UI and performance. Overall, it’s a nice-to-have application if you use Google Drive a lot and do not mind paying for the efforts of the developers.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/insync-linux-review/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://abevoelker.github.io/how-long-since-google-said-a-google-drive-linux-client-is-coming/
|
||||
[2]: https://itsfoss.com/use-google-drive-linux/
|
||||
[3]: https://www.insynchq.com
|
||||
[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/google-drive-linux-insync.jpeg?resize=800%2C450&ssl=1
|
||||
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_interface.jpeg?fit=800%2C501&ssl=1
|
||||
[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_multiple_google_account.jpeg?ssl=1
|
||||
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_folder_settings.png?ssl=1
|
||||
[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_selective_sync.png?ssl=1
|
||||
[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_file_matching.jpeg?ssl=1
|
||||
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_ignore_list_1.png?ssl=1
|
||||
[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/add-to-insync-shortcut.jpeg?ssl=1
|
||||
[12]: https://help.insynchq.com/key-features-and-syncing-explained/syncing-superpowers/using-symlinks-on-google-drive-with-insync
|
||||
[13]: https://www.insynchq.com/downloads
|
||||
[14]: https://en.wikipedia.org/wiki/Headless_software
|
||||
[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_cli.jpeg?fit=800%2C478&ssl=1
|
||||
[16]: https://help.insynchq.com/installation-on-windows-linux-and-macos/advanced/linux-controlling-insync-via-command-line-cli
|
||||
[17]: https://www.insynchq.com/pricing
|
||||
[18]: https://gsuite.google.com/learning-center/products/drive/get-started-team-drive/#!/
|
@ -0,0 +1,170 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Termtosvg – Record Your Terminal Sessions As SVG Animations In Linux)
|
||||
[#]: via: (https://www.2daygeek.com/termtosvg-record-your-terminal-sessions-as-svg-animations-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Termtosvg – Record Your Terminal Sessions As SVG Animations In Linux
|
||||
======
|
||||
|
||||
By default, most of us rely on the history command to review or recall commands previously entered in the terminal.
|
||||
|
||||
Unfortunately, that shows only the commands we ran; it doesn’t show the output those commands produced.
|
||||
|
||||
There are many utilities available on Linux for recording terminal session activity.
|
||||
|
||||
Such a tool helps us record a user’s terminal activity and also helps us pull other useful information out of the recorded output.
|
||||
|
||||
We have written about a few of these utilities in the past, and today we are going to discuss another tool of the same kind.
|
||||
|
||||
If you would like to check out other utilities for recording your Linux terminal session activity, you can give the **[Script Command][1]** and the **[Terminalizer Tool][2]** a try.
|
||||
|
||||
But if you are looking for a **[GIF recorder][3]**, then try the **[Gifine][4]**, **[Kgif][5]**, and **[Peek][6]** utilities.
|
||||
|
||||
Script is one of the best utilities for recording a terminal session on a headless server.
|
||||
|
||||
Script is a Unix command-line utility that records a terminal session (in other words, it records everything displayed on your terminal).
|
||||
|
||||
It stores the output as a text file in the current directory; the default file name is typescript.
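As a quick illustration (the file name here is just an example; script writes to typescript if you don't supply one), a recording session looks something like this:

```
$ script mysession.log    # start recording; everything shown on the terminal is captured
Script started, file is mysession.log
$ uname -r                # run whatever commands you want to capture
$ exit                    # or press CTRL+D to stop recording
Script done, file is mysession.log
```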
|
||||
|
||||
### What is Termtosvg
|
||||
|
||||
Termtosvg is a Unix terminal recorder written in Python that renders your command line sessions as standalone SVG animations.
|
||||
|
||||
### Termtosvg Features
|
||||
|
||||
* Produce lightweight and clean looking animations embeddable on a project page.
|
||||
* Custom color themes, terminal UI and animation controls via SVG templates.
|
||||
* Compatible with asciinema recording format.
|
||||
* It requires Python >= 3.5
|
||||
|
||||
|
||||
|
||||
### How to Install Termtosvg In Linux
|
||||
|
||||
Termtosvg is written in Python, and installing it with pip is the recommended method on Linux.
|
||||
|
||||
Make sure the python-pip package is installed on your system. If it isn’t, use the appropriate command below to install it.
|
||||
|
||||
For Debian/Ubuntu users, use **[Apt Command][7]** or **[Apt-Get Command][8]** to install pip package.
|
||||
|
||||
```
|
||||
$ sudo apt install python-pip
|
||||
```
|
||||
|
||||
For Archlinux users, use **[Pacman Command][9]** to install pip package.
|
||||
|
||||
```
|
||||
$ sudo pacman -S python-pip
|
||||
```
|
||||
|
||||
For Fedora users, use **[DNF Command][10]** to install pip package.
|
||||
|
||||
```
|
||||
$ sudo dnf install python-pip
|
||||
```
|
||||
|
||||
For CentOS/RHEL users, use **[YUM Command][11]** to install pip package.
|
||||
|
||||
```
|
||||
$ sudo yum install python-pip
|
||||
```
|
||||
|
||||
For openSUSE users, use **[Zypper Command][12]** to install pip package.
|
||||
|
||||
```
|
||||
$ sudo zypper install python-pip
|
||||
```
|
||||
|
||||
Finally run the following **[pip command][13]** to install Termtosvg tool in Linux.
|
||||
|
||||
```
|
||||
$ sudo pip3 install termtosvg pyte python-xlib svgwrite
|
||||
```
|
||||
|
||||
### How to Record Your Terminal Session Using Termtosvg
|
||||
|
||||
Once you have successfully installed Termtosvg, just run the following command to start recording.
|
||||
|
||||
```
|
||||
$ termtosvg
|
||||
Recording started, enter "exit" command or Control-D to end
|
||||
```
|
||||
|
||||
For testing purposes, run a few commands and check whether everything is working as expected.
|
||||
|
||||
```
|
||||
$ uname -a
|
||||
Linux daygeek-Y700 4.19.8-2-MANJARO #1 SMP PREEMPT Sat Dec 8 14:45:36 UTC 2018 x86_64 GNU/Linux
|
||||
$ hostname
|
||||
daygeek-Y700
|
||||
$ cat /etc/*-release
|
||||
Manjaro Linux
|
||||
DISTRIB_ID=ManjaroLinux
|
||||
DISTRIB_RELEASE=18.0
|
||||
DISTRIB_CODENAME=Illyria
|
||||
DISTRIB_DESCRIPTION="Manjaro Linux"
|
||||
Manjaro Linux
|
||||
NAME="Manjaro Linux"
|
||||
ID=manjaro
|
||||
ID_LIKE=arch
|
||||
PRETTY_NAME="Manjaro Linux"
|
||||
ANSI_COLOR="1;32"
|
||||
HOME_URL="https://www.manjaro.org/"
|
||||
SUPPORT_URL="https://www.manjaro.org/"
|
||||
BUG_REPORT_URL="https://bugs.manjaro.org/"
|
||||
$ free -g
|
||||
free: Multiple unit options doesn't make sense.
|
||||
$ free -m
|
||||
free: Multiple unit options doesn't make sense.
|
||||
$ pip3 --version
|
||||
pip 18.1 from /usr/lib/python3.7/site-packages/pip (python 3.7)
|
||||
```
|
||||
|
||||
Once you are done, simply press `CTRL+D` or type `exit` to stop the recording. The result will be saved in the `/tmp` folder with a unique name.
|
||||
|
||||
```
|
||||
$ exit
|
||||
exit
|
||||
Recording ended, SVG animation is /tmp/termtosvg_5waorper.svg
|
||||
```
|
||||
|
||||
We can open the output SVG file in any web browser.
|
||||
|
||||
```
|
||||
$ firefox /tmp/termtosvg_5waorper.svg
|
||||
```
|
||||
|
||||
![][15]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/termtosvg-record-your-terminal-sessions-as-svg-animations-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/
|
||||
[2]: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/
|
||||
[3]: https://www.2daygeek.com/category/gif-recorder/
|
||||
[4]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/
|
||||
[5]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/
|
||||
[6]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/
|
||||
[7]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[10]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[11]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[12]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
||||
[13]: https://www.2daygeek.com/install-pip-manage-python-packages-linux/
|
||||
[14]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/Termtosvg-Record-Your-Terminal-Sessions-As-SVG-Animations-In-Linux-1.gif
|
@ -0,0 +1,100 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Use your Linux terminal to celebrate a banner year)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-figlet)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
Use your Linux terminal to celebrate a banner year
|
||||
======
|
||||
Need to make sure your command is heard? Pipe it to a banner and it won't be missed.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-figlet.png?itok=o4XmTL-b)
|
||||
|
||||
|
||||
Hello again for another installment in our 24-day-long Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
|
||||
|
||||
Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.
|
||||
|
||||
Today's toy is **figlet**, a utility for printing text in banner form across your Linux terminal.
|
||||
|
||||
You'll likely find **figlet** packaged in your standard repositories. For me on Fedora, this meant installation was as simple as:
|
||||
|
||||
```
|
||||
$ sudo dnf install figlet
|
||||
```
|
||||
|
||||
After that, simply use the program's name to invoke it. You can either use it interactively or pipe some text to it, as below:
|
||||
|
||||
```
|
||||
echo "Hello world" | figlet
|
||||
_ _ _ _ _ _
|
||||
| | | | ___| | | ___ __ _____ _ __| | __| |
|
||||
| |_| |/ _ \ | |/ _ \ \ \ /\ / / _ \| '__| |/ _` |
|
||||
| _ | __/ | | (_) | \ V V / (_) | | | | (_| |
|
||||
|_| |_|\___|_|_|\___/ \_/\_/ \___/|_| |_|\__,_|
|
||||
```
|
||||
|
||||
There are a number of different font options available for **figlet**. To see the options available to you, try the command **showfigfonts**. For me, this displayed over a dozen. I've copied out a few of my favorites below.
|
||||
|
||||
```
|
||||
block :
|
||||
|
||||
_| _| _|
|
||||
_|_|_| _| _|_| _|_|_| _| _|
|
||||
_| _| _| _| _| _| _|_|
|
||||
_| _| _| _| _| _| _| _|
|
||||
_|_|_| _| _|_| _|_|_| _| _|
|
||||
|
||||
|
||||
bubble :
|
||||
_ _ _ _ _ _
|
||||
/ \ / \ / \ / \ / \ / \
|
||||
( b | u | b | b | l | e )
|
||||
\_/ \_/ \_/ \_/ \_/ \_/
|
||||
|
||||
|
||||
lean :
|
||||
|
||||
_/
|
||||
_/ _/_/ _/_/_/ _/_/_/
|
||||
_/ _/_/_/_/ _/ _/ _/ _/
|
||||
_/ _/ _/ _/ _/ _/
|
||||
_/ _/_/_/ _/_/_/ _/ _/
|
||||
|
||||
|
||||
script :
|
||||
|
||||
o
|
||||
, __ ,_ _ _|_
|
||||
/ \_/ / | | |/ \_|
|
||||
\/ \___/ |_/|_/|__/ |_/
|
||||
/|
|
||||
\|
|
||||
```
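If one of these fonts appeals to you, figlet's **-f** option lets you select it by name. For example, assuming the script font shown above is installed on your system:

```
$ echo "Happy New Year" | figlet -f script    # render the text using the script font
```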
|
||||
|
||||
You can find out more about **figlet** on the project's [homepage][1]. The version I downloaded was made available as open source under an MIT license.
|
||||
|
||||
You'll find that **figlet** isn't the only banner-printer available for the Linux terminal. Another option that you may choose to check out is [toilet][2], which comes with its own set of ASCII-art style printing options.
|
||||
|
||||
Do you have a favorite command-line toy that you think we should have included? Our calendar is basically set for the remainder of the series, but we'd still love to feature some cool command-line toys in the new year. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement.
|
||||
|
||||
Be sure to check out yesterday's toy, [Take a swim at your Linux terminal with asciiquarium][3], and come back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-figlet
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: http://www.figlet.org/
|
||||
[2]: http://caca.zoy.org/wiki/toilet
|
||||
[3]: https://opensource.com/article/18/12/linux-toy-asciiquarium
|
114
sources/tech/20181219 How to open source your Python library.md
Normal file
@ -0,0 +1,114 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (HankChow)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to open source your Python library)
|
||||
[#]: via: (https://opensource.com/article/18/12/tips-open-sourcing-python-libraries)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
|
||||
|
||||
How to open source your Python library
|
||||
======
|
||||
This 12-step checklist will ensure a successful launch.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
|
||||
|
||||
You wrote a Python library. I'm sure it's amazing! Wouldn't it be neat if it was easy for people to use it? Here is a checklist of things to think about and concrete steps to take when open sourcing your Python library.
|
||||
|
||||
### 1\. Source
|
||||
|
||||
Put the code up on [GitHub][1], where most open source projects happen and where it is easiest for people to submit pull requests.
|
||||
|
||||
### 2\. License
|
||||
|
||||
Choose an open source license. A good, permissive default is the [MIT License][2]. If you have specific requirements, Creative Commons' [Choose a License][3] can guide you through the alternatives. Most importantly, there are three rules to keep in mind when choosing a license:
|
||||
|
||||
* Don't create your own license.
|
||||
* Don't create your own license.
|
||||
* Don't create your own license.
|
||||
|
||||
|
||||
|
||||
### 3\. README
|
||||
|
||||
Put a file called README.rst, formatted with ReStructured Text, at the top of your tree.
|
||||
|
||||
GitHub will render ReStructured Text just as well as Markdown, and ReST plays better with Python's documentation ecosystem.
|
||||
|
||||
### 4\. Tests
|
||||
|
||||
Write tests. This is not useful just for you: it is useful for people who want to make patches that avoid breaking related functionality.
|
||||
|
||||
Tests help collaborators collaborate.
|
||||
|
||||
Usually, it is best if they are runnable with [**pytest**][4]. There are other test runners—but very little reason to use them.
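As a minimal sketch, assuming your tests live in a conventional tests/ directory (that path is just an example, not a requirement of pytest):

```
$ pip install pytest    # install the test runner
$ pytest tests/         # discover and run files named test_*.py
```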
|
||||
|
||||
### 5\. Style
|
||||
|
||||
Enforce style with a linter: PyLint, Flake8, or Black with **\--check**. Unless you use Black, make sure to specify configuration options in a file checked into source control.
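For example, a typical invocation might look like the following; the src/ and tests/ paths are placeholders for your own layout:

```
$ pip install flake8 black    # install the linters you chose
$ flake8 src/ tests/          # report style violations without modifying any files
$ black --check src/          # --check makes Black report, rather than rewrite, badly formatted files
```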
|
||||
|
||||
### 6\. API documentation
|
||||
|
||||
Use docstrings to document modules, functions, classes, and methods.
|
||||
|
||||
There are a few styles you can use. I prefer the [Google-style docstrings][5], but [ReST docstrings][6] are an option.
|
||||
|
||||
Both Google-style and ReST docstrings can be processed by Sphinx to integrate API documentation with prose documentation.
|
||||
|
||||
### 7\. Prose documentation
|
||||
|
||||
Use [Sphinx][7]. (Read [our article on it][8].) A tutorial is useful, but it is also important to specify what this thing is, what it is good for, what it is bad for, and any special considerations.
|
||||
|
||||
### 8\. Building
|
||||
|
||||
Use **tox** or **nox** to automatically run your tests and linter and build the documentation. These tools support a "dependency matrix." These matrices tend to explode fast, but try to test against a reasonable sample, such as Python versions, versions of dependencies, and possibly optional dependencies you install.
|
||||
|
||||
### 9\. Packaging
|
||||
|
||||
Use [setuptools][9]. Write a **setup.py** and a **setup.cfg**. If you support both Python 2 and 3, specify universal wheels in the **setup.cfg**.
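As a sketch of the universal-wheel setting and a local build, assuming a setuptools project that really does support both Python 2 and 3 (the wheel package must be installed for bdist_wheel to work):

```
$ cat >> setup.cfg <<EOF
[bdist_wheel]
universal = 1
EOF
$ python setup.py bdist_wheel    # the built wheel lands in dist/
```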
|
||||
|
||||
One thing **tox** or **nox** should do is build a wheel and run tests against the installed wheel.
|
||||
|
||||
Avoid C extensions. If you absolutely need them for performance or binding reasons, put them in a separate package. Properly packaging C extensions deserves its own post. There are a lot of gotchas!
|
||||
|
||||
### 10\. Continuous integration
|
||||
|
||||
Use a public continuous integration runner. [TravisCI][10] and [CircleCI][11] offer free tiers for open source projects. Configure GitHub or other repo to require passing checks before merging pull requests, and you'll never have to worry about telling people to fix their tests or their style in code reviews.

### 11\. Versions

Use either [SemVer][12] or [CalVer][13]. There are many tools to help manage versions: [incremental][14], [bumpversion][15], and [setuptools_scm][16] are all packages on PyPI that help manage versions for you.
|
||||
|
||||
### 12\. Release
|
||||
|
||||
Release by running **tox** or **nox** and using **twine** to upload the artifacts to PyPI. You can do a "test upload" by running [DevPI][17].
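A hand-run release might look roughly like this sketch; in practice the build step would live in your tox or nox configuration:

```
$ python setup.py sdist bdist_wheel    # build the source distribution and wheel into dist/
$ twine upload dist/*                  # upload the artifacts to PyPI
```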
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/tips-open-sourcing-python-libraries
|
||||
|
||||
作者:[Moshe Zadka][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://github.com/
|
||||
[2]: https://en.wikipedia.org/wiki/MIT_License
|
||||
[3]: https://choosealicense.com/
|
||||
[4]: https://docs.pytest.org/en/latest/
|
||||
[5]: https://github.com/google/styleguide/blob/gh-pages/pyguide.md
|
||||
[6]: https://www.python.org/dev/peps/pep-0287/
|
||||
[7]: http://www.sphinx-doc.org/en/master/
|
||||
[8]: https://opensource.com/article/18/11/building-custom-workflows-sphinx
|
||||
[9]: https://pypi.org/project/setuptools/
|
||||
[10]: https://travis-ci.org/
|
||||
[11]: https://circleci.com/
|
||||
[12]: https://semver.org/
|
||||
[13]: https://calver.org/
|
||||
[14]: https://pypi.org/project/incremental/
|
||||
[15]: https://pypi.org/project/bumpversion/
|
||||
[16]: https://pypi.org/project/setuptools_scm/
|
||||
[17]: https://opensource.com/article/18/7/setting-devpi
|
@ -0,0 +1,54 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Solve a puzzle at the Linux command line with nudoku)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-nudoku)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
Solve a puzzle at the Linux command line with nudoku
|
||||
======
|
||||
Sudokus are simple logic games that can be enjoyed just about anywhere, including in your Linux terminal.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-nudoku.png?itok=OS2o4Rot)
|
||||
|
||||
Welcome back to another installment in our 24-day-long Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
|
||||
|
||||
Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.
|
||||
|
||||
Every year for Christmas, my mother-in-law gives my wife a Sudoku calendar. It sits on our coffee table for the year to follow. Each day is a separate sheet (except for Saturday and Sunday, which are combined onto one page), with the idea being that you have a new puzzle every day while also having a functioning calendar.
|
||||
|
||||
The problem, in practice, is that it's a great pad of puzzles but not a great calendar because it turns out some days are harder than others and we just don't get through them at the necessary rate of one a day. Then, we may have a week's worth that gets batched on a lazy Sunday.
|
||||
|
||||
Since I've already given you a [calendar][1] as a part of this series, I figure it's only fair to give you a Sudoku puzzle as well, except our command-line versions are decoupled so there's no pressure to complete exactly one a day.
|
||||
|
||||
I found **nudoku** in my default repositories on Fedora, so installing it was as simple as:
|
||||
|
||||
```
|
||||
$ sudo dnf install nudoku
|
||||
```
|
||||
|
||||
Once installed, just invoke **nudoku** by name to launch it, and it should be fairly self-explanatory from there. If you've never played Sudoku before, it's fairly simple: You need to make sure that each row, each column, and each of the nine 3x3 squares that make up the large square each have one of every digit, 1-9.
|
||||
|
||||
You can find **nudoku**'s C source code [on GitHub][2] under a GPLv3 license.
|
||||
![](https://opensource.com/sites/default/files/uploads/linux-toy-nudoku-animated.gif)
|
||||
Do you have a favorite command-line toy that you think we should have included? Our calendar is basically set for the remainder of the series, but we'd still love to feature some cool command-line toys in the new year. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement.
|
||||
|
||||
Be sure to check out yesterday's toy, [Use your Linux terminal to celebrate a banner year][3], and come back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-nudoku
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/article/18/12/linux-toy-cal
|
||||
[2]: https://github.com/jubalh/nudoku
|
||||
[3]: https://opensource.com/article/18/12/linux-toy-figlet
|
166
sources/tech/20181220 Getting started with Prometheus.md
Normal file
@ -0,0 +1,166 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Getting started with Prometheus)
|
||||
[#]: via: (https://opensource.com/article/18/12/introduction-prometheus)
|
||||
[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
|
||||
|
||||
Getting started with Prometheus
|
||||
======
|
||||
Learn to install and write queries for the Prometheus monitoring and alerting system.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn)
|
||||
|
||||
[Prometheus][1] is an open source monitoring and alerting system that directly scrapes metrics from agents running on the target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like **collectd_exporter** —although this is not Prometheus' default behavior, it may be useful in some environments where hosts are behind a firewall or prohibited from opening ports by security policy.
|
||||
|
||||
Prometheus, a project of the [Cloud Native Computing Foundation][2], scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. This allows creation of a hierarchical topology, where a central system or higher-level Prometheus server can scrape aggregated data already collected from subordinate instances.
|
||||
|
||||
Besides the Prometheus server, its most common components are its [Alertmanager][3] and its exporters.
|
||||
|
||||
Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms like email or third-party services like [PagerDuty][4].
|
||||
|
||||
Prometheus' exporters can be libraries, processes, devices, or anything else that exposes the metrics that will be scraped by Prometheus. The metrics are available at the endpoint **/metrics** , which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses **node_exporter** to expose the target hosts' hardware and operating system metrics. Exporters' outputs are plaintext and highly readable, which is one of Prometheus' strengths.
|
||||
|
||||
In addition, you can configure [Grafana][5] to use Prometheus as a backend to provide data visualization and dashboarding functions.
|
||||
|
||||
### Making sense of Prometheus' configuration file
|
||||
|
||||
The number of seconds between when **/metrics** is scraped controls the granularity of the time-series database. This is defined in the configuration file as the **scrape_interval** parameter, which by default is set to 60 seconds.
|
||||
|
||||
Targets are set for each scrape job in the **scrape_configs** section. Each job has its own name and a set of labels that can help filter, categorize, and make it easier to identify the target. One job can have many targets.
|
||||
|
||||
### Installing Prometheus
|
||||
|
||||
In this tutorial, for simplicity, we will install a Prometheus server and **node_exporter** with docker. Docker should already be installed and configured properly on your system. For a more in-depth, automated method, I recommend Steve Ovens' article [How to use Ansible to set up system monitoring with Prometheus][6].
|
||||
|
||||
Before starting, create the Prometheus configuration file **prometheus.yml** in your work directory as follows:
|
||||
|
||||
```
|
||||
global:
|
||||
scrape_interval: 15s
|
||||
evaluation_interval: 15s
|
||||
|
||||
scrape_configs:
|
||||
- job_name: 'prometheus'
|
||||
|
||||
static_configs:
|
||||
- targets: ['localhost:9090']
|
||||
|
||||
- job_name: 'webservers'
|
||||
|
||||
static_configs:
|
||||
- targets: ['<node exporter node IP>:9100']
|
||||
```
|
||||
|
||||
Start Prometheus with Docker by running the following command:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -p 9090:9090 -v
|
||||
/path/to/prometheus.yml:/etc/prometheus/prometheus.yml
|
||||
prom/prometheus
|
||||
```
|
||||
|
||||
By default, the Prometheus server will use port 9090. If this port is already in use, you can change it by adding the parameter **\--web.listen-address="<IP of machine>:<port>"** at the end of the previous command.
|
||||
|
||||
In the machine you want to monitor, download and run the **node_exporter** container by using the following command:
|
||||
|
||||
```
|
||||
$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" -v
|
||||
"/:/rootfs" --net="host" prom/node-exporter --path.procfs
|
||||
/host/proc --path.sysfs /host/sys --collector.filesystem.ignored-
|
||||
mount-points "^/(sys|proc|dev|host|etc)($|/)"
|
||||
```
|
||||
|
||||
For the purposes of this learning exercise, you can install **node_exporter** and Prometheus on the same machine. Please note that it's not wise to run **node_exporter** under Docker in production—this is for testing purposes only.
|
||||
|
||||
To verify that **node_exporter** is running, open your browser and navigate to **http:// <IP of Node exporter host>:9100/metrics**. All the metrics collected will be displayed; these are the same metrics Prometheus will scrape.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)
|
||||
|
||||
To verify the Prometheus server installation, open your browser and navigate to <http://localhost:9090>.
|
||||
|
||||
You should see the Prometheus interface. Click on **Status** and then **Targets**. Under State, you should see your machines listed as **UP**.
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/targets-up.png)
|
||||
|
||||
### Using Prometheus queries
|
||||
|
||||
It's time to get familiar with [PromQL][7], Prometheus' query syntax, and its graphing web interface. Go to **<http://localhost:9090/graph>** on your Prometheus server. You will see a query editor and two tabs: Graph and Console.
|
||||
|
||||
Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric **node_filesystem_avail_bytes** shows the available filesystem space. The metric's name can be used in the expression box to select all of the time series with this name and produce an instant vector. If desired, these time series can be filtered using selectors and labels—a set of key-value pairs—for example:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype="ext4"}
|
||||
```
|
||||
|
||||
When filtering, you can match "exactly equal" ( **=** ), "not equal" ( **!=** ), "regex-match" ( **=~** ), and "do not regex-match" ( **!~** ). The following examples illustrate this:
|
||||
|
||||
To filter **node_filesystem_avail_bytes** to show both ext4 and XFS filesystems:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
|
||||
```
|
||||
|
||||
To exclude a match:
|
||||
|
||||
```
|
||||
node_filesystem_avail_bytes{fstype!="xfs"}
|
||||
```
|
||||
|
||||
You can also get a range of samples back from the current time by using square brackets. You can use **s** to represent seconds, **m** for minutes, **h** for hours, **d** for days, **w** for weeks, and **y** for years. When using time ranges, the vector returned will be a range vector.
|
||||
|
||||
For example, the following command produces the samples from five minutes to the present:
|
||||
|
||||
```
|
||||
node_memory_MemAvailable_bytes[5m]
|
||||
```
|
||||
|
||||
Prometheus also includes functions to allow advanced queries, such as this:
|
||||
|
||||
```
|
||||
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
|
||||
```
|
||||
|
||||
Notice how the labels are used to filter the job and the mode. The metric **node_cpu_seconds_total** returns a counter, and the **irate()** function calculates the per-second rate of change based on the last two data points of the range interval (meaning the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the idle mode of the **node_cpu_seconds_total** metric. The idle percent of a processor is the opposite of a busy processor, so the **irate** value is subtracted from 1. To make it a percentage, multiply it by 100.
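If you prefer the command line to the graphing web interface, the same expression can also be evaluated through Prometheus' HTTP query API; this is a sketch that assumes the server is listening on its default port 9090:

```
$ curl -sG 'http://localhost:9090/api/v1/query' \
    --data-urlencode "query=100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))"
```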
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)
|
||||
|
||||
### Learn more
|
||||
|
||||
Prometheus is a powerful, scalable, lightweight, and easy to use and deploy monitoring tool that is indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.
|
||||
|
||||
To learn more about Prometheus and its functions, I recommend the following resources:
|
||||
|
||||
+ About [PromQL][8]
|
||||
+ What [node_exporters collects][9]
|
||||
+ [Prometheus functions][10]
|
||||
+ [4 open source monitoring tools][11]
|
||||
+ [Now available: The open source guide to DevOps monitoring tools][12]
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/introduction-prometheus
|
||||
|
||||
作者:[Michael Zamot][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mzamot
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://prometheus.io/
|
||||
[2]: https://www.cncf.io/
|
||||
[3]: https://prometheus.io/docs/alerting/alertmanager/
|
||||
[4]: https://en.wikipedia.org/wiki/PagerDuty
|
||||
[5]: https://grafana.com/
|
||||
[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus
|
||||
[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/
|
||||
[9]: https://github.com/prometheus/node_exporter#collectors
|
||||
[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/
|
||||
[11]: https://opensource.com/article/18/8/open-source-monitoring-tools
|
||||
[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools
|
@ -0,0 +1,62 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Let your Linux terminal speak its mind)
|
||||
[#]: via: (https://opensource.com/article/18/12/linux-toy-espeak)
|
||||
[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
|
||||
|
||||
Let your Linux terminal speak its mind
|
||||
======
|
||||
eSpeak is an open source text-to-speech synthesizer that can be invoked from the Linux command line.
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-cava.png?itok=4EWYL8uZ)
|
||||
|
||||
Greetings from another day in our 24-day-long Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
|
||||
|
||||
We hope that even if you've seen some of these before, there will be something new for everybody in our series.
|
||||
|
||||
Some of you may be too young to remember, but before there was Alexa, Siri, or the Google Assistant, computers still had voices.
|
||||
|
||||
Many of us will never forget HAL 9000 from [2001: A Space Odyssey][1] helpfully conversing with the crew (sorry, Dave). But between 1960s science fiction and today, there was a whole generation of speaking computers. Some of them were great; most of them, not so great.
|
||||
|
||||
One of my favorites is the open source project [eSpeak][2]. It's available in many forms, including a library version you can use to include speech technology in your own project, but it also comes as a command-line program that you can install and use easily. In my distribution, this was as simple as:
|
||||
|
||||
```
|
||||
$ sudo dnf install espeak
|
||||
```
|
||||
|
||||
eSpeak can then be invoked either interactively or by piping text to it, using the output of another program or a simple echo command. There are a number of [voice files][3] available for eSpeak, and if you're especially bored over the holidays, you could even create your own.
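For example, you can list the voices installed on your system and pick one, along with a speaking rate; the variant and speed below are only illustrative values:

```
$ espeak --voices | head                                # list available voices
$ echo "Season's greetings" | espeak -v en+f3 -s 140    # a female English variant at a slightly slower rate
```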
|
||||
|
||||
A fork of eSpeak called eSpeak NG ("Next Generation") was created in 2015 by some developers who wanted to continue development of the otherwise lightly updated eSpeak. eSpeak is made available as open source under a GPL version 3 license, and you can find out more about the project and download the source code [on SourceForge][2].
|
||||
|
||||
I'll also throw in a bonus toy today, [cava][4]. Because I've been eager to give each of these articles a unique screenshot as the lead image, and today's toy outputs sound rather than something visual, I needed to find something to fill the space. Short for "console-based audio visualizer for ALSA" (although it supports more than just ALSA now), cava is a nice MIT-licensed terminal audio visualization tool that's fun to watch. Below is a visualization of eSpeak's output of the following:
|
||||
|
||||
```
|
||||
$ echo "Rudolph, the red-nosed reindeer, had a very shiny nose." | espeak
|
||||
```
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/linux-toy-cava.gif)
|
||||
|
||||
Do you have a favorite command-line toy that you think we should have included? Our calendar is basically set for the remainder of the series, but we'd still love to feature some cool command-line toys in the new year. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement.
|
||||
|
||||
Be sure to check out yesterday's toy, [Solve a puzzle at the Linux command line with nudoku][5], and come back tomorrow for another!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/12/linux-toy-espeak
|
||||
|
||||
作者:[Jason Baker][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/jason-baker
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)
|
||||
[2]: http://espeak.sourceforge.net/
|
||||
[3]: http://espeak.sourceforge.net/voices.html
|
||||
[4]: https://github.com/karlstav/cava
|
||||
[5]: https://opensource.com/article/18/12/linux-toy-nudoku
|
@ -0,0 +1,142 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (11 Uses for a Raspberry Pi Around the Office)
|
||||
[#]: via: (https://blog.dxmtechsupport.com.au/11-uses-for-a-raspberry-pi-around-the-office/)
|
||||
[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/)
|
||||
|
||||
树莓派在办公室的 11 种用法
|
||||
======
|
||||
|
||||
我知道你在想什么:树莓派只能用在修修补补、原型设计和个人爱好中。它实际不能用在业务中。
|
||||
|
||||
毫无疑问,这台电脑的处理能力相对较低、SD 卡易损坏、缺乏电池备份,再加上其 DIY 的特性,这意味着它无法成为能随时执行最关键任务的[专业的已安装和已配置的商业服务器][1]的可行替代。
|
||||
|
||||
但是它电路板便宜、功耗很小、很小几乎适合任何地方、无限灵活 - 这实际上是处理办公室一些基本任务的好方法。
|
||||
|
||||
而且,更好的是,已经有一些人完成了这些项目并很乐意分享他们是如何做到的。
|
||||
|
||||
### DNS 服务器
|
||||
|
||||
每次在浏览器中输入网站地址或者点击链接时,都需要将域名转换为数字 IP 地址,然后才能显示内容。
|
||||
|
||||
通常这意味着向互联网上某处 DNS 服务器发出请求 - 但你可以通过本地处理来加快浏览速度。
|
||||
|
||||
你还可以分配自己的子域,以便本地访问办公室中的计算机。
|
||||
|
||||
[这里是如何让这它工作。][2]
|
||||
|
||||
### 厕所占用标志
|
||||
|
||||
在厕所排过队吗?
|
||||
|
||||
这对于那些等待的人来说很烦人,花在处理它上面的时间会耗费你在办公室的工作效率。
|
||||
|
||||
我想你希望在办公室里也悬挂飞机上有的标志。
|
||||
|
||||
[Occu-pi][3] 是一个更简单的解决方案,使用磁性开关和树莓派来判断螺栓何时关闭并在 Slack 频道中更新厕所在使用中 - 这意味着整个办公室的人都可以看一眼电脑或者移动设备知道是否有空闲的隔间。
|
||||
|
||||
### 针对黑客的蜜罐陷阱
|
||||
|
||||
黑客破坏了网络的第一个线索是一些事情变得糟糕,这应该会吓到大多数企业主。
|
||||
|
||||
这就是可以用到蜜罐的地方:一台没有任何服务的计算机位于你的网络,将特定端口打开伪装成黑客喜欢的目标。
|
||||
|
||||
安全研究人员经常在网络外部部署蜜罐,以收集攻击者正在做的事情的数据。
|
||||
|
||||
但对于普通的小型企业来说,这些作为一种绊脚石部署在内部更有用。因为普通用户没有真正的理由想要连接到蜜罐,所以任何发生的登录尝试都是正在进行捣乱的非常好的指示。
|
||||
|
||||
这可以提供对外部人员入侵的预警,并且可信赖的内部人员也没有任何好处。
|
||||
|
||||
在较大的客户端/服务器网络中,将它作为虚拟机运行可能更为实际。但是在无线路由器上运行的点对点的小型办公室/家庭办公网络中,[HoneyPi][4] 之类的东西是一个很小的防盗报警器。
|
||||
|
||||
### 打印服务器
|
||||
|
||||
网络连接的打印机更方便。
|
||||
|
||||
但更换所有打印机可能会很昂贵 - 特别是如果你对它们感到满意的话。
|
||||
|
||||
[将树莓派设置为打印服务器][5]可能会更有意义。
|
||||
|
||||
### 网络附加存储 (NAS)
|
||||
|
||||
将硬盘变为 NAS 是树莓派最早的实际应用之一,并且它仍然是最好的之一。
|
||||
|
||||
[这是如何使用树莓派创建NAS。][6]
|
||||
|
||||
### 工单服务器
|
||||
|
||||
想要在预算不足的情况下在服务台中支持工单?
|
||||
|
||||
有一个名为 osTicket 的完全开源的工单程序,它可以安装在你的树莓派上,它甚至还有[随时可用的 SD 卡镜像][7]。
|
||||
|
||||
### 数字标牌
|
||||
|
||||
无论是用于活动、广告、菜单还是其他任何东西,许多企业都需要一种显示数字标牌的方式 - 而树莓派的廉价和省电使其成为一个非常有吸引力的选择。
|
||||
|
||||
[这有很多可供选择的选项。][8]
|
||||
|
||||
### 目录和信息亭
|
||||
|
||||
[FullPageOS][9] 是一个基于 Raspbian 的 Linux 发行版,它直接引导到 Chromium 的全屏版本 - 这非常适合导购、图书馆目录等。
|
||||
|
||||
### 基本的内联网 Web 服务器
|
||||
|
||||
对于托管一个面向公众的网站,你最好有一个托管帐户。树莓派不适合面对真正的网络流量。
|
||||
|
||||
但对于小型办公室,它可以托管内部业务维基或基本的公司内网。它还可以用作沙箱环境,用于试验代码和服务器配置。
|
||||
|
||||
[这里是如何在树莓派上运行 Apache、MySQL 和 PHP。][10]
|
||||
|
||||
### 渗透测试器
|
||||
|
||||
Kali Linux 是专为探测网络安全漏洞而构建的操作系统。通过将其安装在树莓派上,你就拥有了一个超便携式穿透测试器,其中包含 600 多种工具。
|
||||
|
||||
[你可以在这里找到树莓派镜像的种子链接。][11]
|
||||
|
||||
绝对小心只在你自己的网络或你有权对它安全审计的网络中使用它 - 使用此方法来破解其他网络是严重的犯罪行为。
|
||||
|
||||
### VPN 服务器
|
||||
|
||||
当你外出时,依靠的是公共无线互联网,你无法控制还有谁在网络中、谁在窥探你的所有流量。这就是为什么通过 VPN 连接加密所有内容可以让人放心。
|
||||
|
||||
你可以订阅任意数量的商业 VPN 服务,并且你可以在云中安装自己的服务,但是在办公室运行一个 VPN,这样你也可以从任何地方访问本地网络。
|
||||
|
||||
对于轻度使用 - 比如偶尔的商务旅行 - 树莓派是一种强大的,节约能源的设置 VPN 服务器的方式。(首先要检查一下你的路由器是不是不支持这个功能,许多路由器是支持的。)
|
||||
|
||||
[这是如何在树莓派上安装 OpenVPN。][12]
|
||||
|
||||
### 无线咖啡机
|
||||
|
||||
啊,美味:美味的饮料还是公司内工作效率的支柱。
|
||||
|
||||
那么, 为什么不[将办公室的咖啡机变成可以精确控制温度和无线连接的智能咖啡机呢?][13]
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://blog.dxmtechsupport.com.au/11-uses-for-a-raspberry-pi-around-the-office/
|
||||
|
||||
作者:[James Mawson][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://dxmtechsupport.com.au/server-configuration
|
||||
[2]: https://www.1and1.com/digitalguide/server/configuration/how-to-make-your-raspberry-pi-into-a-dns-server/
|
||||
[3]: https://blog.usejournal.com/occu-pi-the-bathroom-of-the-future-ed69b84e21d5
|
||||
[4]: https://trustfoundry.net/honeypi-easy-honeypot-raspberry-pi/
|
||||
[5]: https://opensource.com/article/18/3/print-server-raspberry-pi
|
||||
[6]: https://howtoraspberrypi.com/create-a-nas-with-your-raspberry-pi-and-samba/
|
||||
[7]: https://everyday-tech.com/a-raspberry-pi-ticketing-system-image-with-osticket/
|
||||
[8]: https://blog.capterra.com/7-free-and-open-source-digital-signage-software-options-for-your-next-event/
|
||||
[9]: https://github.com/guysoft/FullPageOS
|
||||
[10]: https://maker.pro/raspberry-pi/projects/raspberry-pi-web-server
|
||||
[11]: https://www.offensive-security.com/kali-linux-arm-images/
|
||||
[12]: https://medium.freecodecamp.org/running-your-own-openvpn-server-on-a-raspberry-pi-8b78043ccdea
|
||||
[13]: https://www.techradar.com/au/how-to/how-to-build-your-own-smart-coffee-machine
|
@ -1,28 +1,28 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (qhwdw)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: reviewer: ()
|
||||
[#]: publisher: ()
|
||||
[#]: url: ()
|
||||
[#]: subject: (How to Build a Netboot Server, Part 2)
|
||||
[#]: via: (https://fedoramagazine.org/how-to-build-a-netboot-server-part-2/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
How to Build a Netboot Server, Part 2
|
||||
如何构建一台网络引导服务器(第二部分)
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/12/netboot2-816x345.jpg)
|
||||
|
||||
The article [How to Build a Netboot Server, Part 1][1] showed you how to create a netboot image with a “liveuser” account whose home directory lives in volatile memory. Most users probably want to preserve files and settings across reboots, though. So this second part of the netboot series shows how to reconfigure the netboot image from part one so that [Active Directory][2] user accounts can log in and their home directories can be automatically mounted from a NFS server.
|
||||
在 [如何构建一台网络引导服务器(第一部分)][1] 的文章中,我们展示了如何创建一个网络引导镜像,在那个镜像中使用了一个名为 “liveuser” 帐户,它的 home 目录位于内存中,重启后 home 中的内容将全部消失。然而很多用户都希望机器重启后保存他们的文件和设置。因此,在本系列的第二部分,我们将向你展示如何在第一部分的基础上,重新配置网络引导镜像,使它能够使用 [活动目录][2] 中的用户帐户进行登陆,然后能够从一个 NFS 服务器上自动挂载他们的 home 目录。
|
||||
|
||||
Part 3 of this series will show how to make an interactive and centrally-configurable iPXE boot menu for the netboot clients.
|
||||
本系列的第三部分,我们将向你展示网络引导客户端如何与中心化配置的 iPXE 引导菜单进行交互。
|
||||
|
||||
### Setup NFS4 Home Directories with KRB5 Authentication
|
||||
### 设置使用 KRB5 认证的 NFS4 Home 目录
|
||||
|
||||
Follow the directions from the previous post “[Share NFS Home Directories Securely with Kerberos][3],” then return here.
|
||||
按以前的文章 “[使用 Kerberos 强化共享的 NFS Home 目录安全性][3]” 的指导来做这个设置。
|
||||
|
||||
### Remove the Liveuser Account
|
||||
### 删除 Liveuser 帐户
|
||||
|
||||
Remove the “liveuser” account created in part one of this series:
|
||||
删除本系列文章第一部分中创建的 “liveuser” 帐户:
|
||||
|
||||
```
|
||||
$ sudo -i
|
||||
@ -31,9 +31,9 @@ $ sudo -i
|
||||
# for i in passwd shadow group gshadow; do sed -i '/^liveuser:/d' /fc28/etc/$i; done
|
||||
```
|
||||
|
||||
### Configure NTP, KRB5 and SSSD
|
||||
### 配置 NTP、KRB5 和 SSSD
|
||||
|
||||
Next, we will need to duplicate the NTP, KRB5, and SSSD configuration that we set up on the server in the client image so that the same accounts will be available:
|
||||
接下来,我们需要将 NTP、KRB5、和 SSSD 的配置文件复制进客户端使用的镜像中,以便于它们能够使用同一个帐户:
|
||||
|
||||
```
|
||||
# MY_HOSTNAME=$(</etc/hostname)
|
||||
@ -45,27 +45,27 @@ Next, we will need to duplicate the NTP, KRB5, and SSSD configuration that we se
|
||||
# cp /etc/sssd/sssd.conf /fc28/etc/sssd
|
||||
```
|
||||
|
||||
Reconfigure sssd to provide authentication services, in addition to the identification service already configured:
|
||||
重新配置 sssd 在已配置的识别服务的基础上去提供认证服务:
|
||||
|
||||
```
|
||||
# sed -i '/services =/s/$/, pam/' /fc28/etc/sssd/sssd.conf
|
||||
```
|
||||
|
||||
Also, ensure none of the clients attempt to update the computer account password:
|
||||
另外,配置成确保客户端不能更改这个帐户密码:
|
||||
|
||||
```
|
||||
# sed -i '/id_provider/a \ \ ad_maximum_machine_account_password_age = 0' /fc28/etc/sssd/sssd.conf
|
||||
```
|
||||
|
||||
Also, copy the nfsnobody definitions:
|
||||
另外,复制 nfsnobody 的定义:
|
||||
|
||||
```
|
||||
# for i in passwd shadow group gshadow; do grep "^nfsnobody:" /etc/$i >> /fc28/etc/$i; done
|
||||
```
|
||||
|
||||
### Join Active Directory
|
||||
### 连接活动目录
|
||||
|
||||
Next, you’ll perform a chroot to join the client image to Active Directory. Begin by deleting any pre-existing computer account with the same name your netboot image will use:
|
||||
接下来,你将执行一个 chroot 将客户端镜像连接到活动目录。从删除预置在网络引导镜像中相同的计算机帐户开始:
|
||||
|
||||
```
|
||||
# MY_USERNAME=jsmith
|
||||
@ -73,20 +73,20 @@ Next, you’ll perform a chroot to join the client image to Active Directory. Be
|
||||
# adcli delete-computer "${MY_CLIENT_HOSTNAME%%.*}" -U "$MY_USERNAME"
|
||||
```
|
||||
|
||||
Also delete the krb5.keytab file from the netboot image if it exists:
|
||||
在网络引导镜像中如果有 krb5.keytab 文件,也删除它:
|
||||
|
||||
```
|
||||
# rm -f /fc28/etc/krb5.keytab
|
||||
```
|
||||
|
||||
Perform a chroot into the netboot image:
|
||||
在网络引导镜像中执行一个 chroot 操作:
|
||||
|
||||
```
|
||||
# for i in dev dev/pts dev/shm proc sys run; do mount -o bind /$i /fc28/$i; done
|
||||
# chroot /fc28 /usr/bin/bash --login
|
||||
```
|
||||
|
||||
Perform the join:
|
||||
执行一个 join 操作:
|
||||
|
||||
```
|
||||
# MY_USERNAME=jsmith
|
||||
@ -97,7 +97,7 @@ Perform the join:
|
||||
# adcli join $MY_DOMAIN --login-user="$MY_USERNAME" --computer-name="${MY_HOSTNAME%%.*}" --host-fqdn="$MY_HOSTNAME" --user-principal="host/$MY_HOSTNAME@$MY_REALM" --domain-ou="$MY_OU"
|
||||
```
|
||||
|
||||
Now log out of the chroot and clear the root user’s command history:
|
||||
现在登出 chroot,并清除命令历史:
|
||||
|
||||
```
|
||||
# logout
|
||||
@ -105,9 +105,9 @@ Now log out of the chroot and clear the root user’s command history:
|
||||
# > /fc28/root/.bash_history
|
||||
```
|
||||
|
||||
### Install and Configure PAM Mount
|
||||
### 安装和配置 PAM Mount
|
||||
|
||||
We want our clients to automatically mount the user’s home directory when they log in. To accomplish this, we’ll use the “pam_mount” module. Install and configure pam_mount:
|
||||
我们希望客户端登入后自动挂载它的 home 目录。为实现这个目的,我们将要使用 “pam_mount” 模块。安装和配置 pam_mount:
|
||||
|
||||
```
|
||||
# dnf install -y --installroot=/fc28 pam_mount
|
||||
@ -123,7 +123,7 @@ We want our clients to automatically mount the user’s home directory when they
|
||||
END
|
||||
```
|
||||
|
||||
Reconfigure PAM to use pam_mount:

```
# dnf install -y patch
@ -152,24 +152,24 @@ END
# chroot /fc28 authselect select custom/sssd with-pammount --force
```

Also ensure the NFS server’s hostname is always resolvable from the client:

```
# MY_IP=$(host -t A $MY_HOSTNAME | awk '{print $4}')
# echo "$MY_IP $MY_HOSTNAME ${MY_HOSTNAME%%.*}" >> /fc28/etc/hosts
```

Optionally, allow all users to run sudo:

```
# echo '%users ALL=(ALL) NOPASSWD: ALL' > /fc28/etc/sudoers.d/users
```
### Convert the NFS Root to an iSCSI Backing-Store

Current versions of nfs-utils may have difficulty establishing a second connection from the client back to the NFS server for home directories when an nfsroot connection is already established. The client hangs when attempting to access the home directory. So, we will work around the problem by using a different protocol (iSCSI) for sharing our netboot image.

First, chroot into the image to reconfigure its initramfs for booting from an iSCSI root:

```
# for i in dev dev/pts dev/shm proc sys run; do mount -o bind /$i /fc28/$i; done
@ -186,18 +186,18 @@ First chroot into the image to reconfigure its initramfs for booting from an iSC
# > /fc28/root/.bash_history
```

The qedi driver broke iSCSI during testing, so it has been disabled here.
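The exclusion itself presumably happens in the portion of the chroot session elided by this diff. As a rough sketch only (one possible way to express it, not necessarily the author’s), a dracut drop-in that omits the driver before the initramfs is regenerated could look like this:

```
# cat << END > /fc28/etc/dracut.conf.d/omit-qedi.conf
omit_drivers+=" qedi "
END
```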
Next, create an fc28.img [sparse file][4]. This file serves as the iSCSI target’s backing store:

```
# FC28_SIZE=$(du -ms /fc28 | cut -f 1)
# dd if=/dev/zero of=/fc28.img bs=1MiB count=0 seek=$(($FC28_SIZE*2))
```

(If you have one available, a separate partition or disk drive can be used instead of creating a file; see the sketch below.)
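For example, with a hypothetical spare partition /dev/sdb1 (adjust the device name to your system), the format-and-copy step could target the device directly, and the iSCSI target configured below would then serve /dev/sdb1 instead of /fc28.img:

```
# mkfs -t xfs -L NETROOT /dev/sdb1
# TEMP_MNT=$(mktemp -d)
# mount /dev/sdb1 $TEMP_MNT
# cp -a /fc28/. $TEMP_MNT/
# umount $TEMP_MNT
```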
Next, format the image with a filesystem, mount it, and copy the netboot image into it:

```
# mkfs -t xfs -L NETROOT /fc28.img
@ -207,19 +207,19 @@ Next, format the image with a filesystem, mount it, and copy the netboot image i
# umount $TEMP_MNT
```

During testing with SquashFS, the client would occasionally stutter. It seems that SquashFS does not perform well when doing random I/O from a multiprocessor client. (See also [The curious case of stalled squashfs reads][5].) If you want to improve throughput performance with filesystem compression, [ZFS][6] is probably a better option.

If you need extremely high throughput from the iSCSI server (say, for hundreds of clients), it might be possible to [load balance][7] a [Ceph][8] cluster. For more information, see [Load Balancing Ceph Object Gateway Servers with HAProxy and Keepalived][9].
### Install and Configure iSCSI

Install the scsi-target-utils package, which will provide the iSCSI daemon for serving our image out to our clients:

```
# dnf install -y scsi-target-utils
```

Configure the iSCSI daemon to serve the fc28.img file:

```
# MY_REVERSE_HOSTNAME=$(echo $MY_HOSTNAME | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_HOSTNAME})
@ -231,9 +231,9 @@ Configure the iSCSI daemon to serve the fc28.img file:
END
```

The leading “iqn.” prefix is expected by /usr/lib/dracut/modules.d/40network/net-lib.sh; a worked example of the resulting target name is shown below.
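For instance, with the server hostname used elsewhere in this series, the reverse-hostname computation above produces the following (an illustration only, not an extra configuration step):

```
$ MY_HOSTNAME=server-01.example.edu
$ echo $MY_HOSTNAME | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_HOSTNAME}
edu.example.server-01
```

The target therefore ends up named iqn.edu.example.server-01:fc28, which is the name that appears in the tgtadm output below.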
Add an exception to the firewall and enable and start the service:

```
# firewall-cmd --add-service=iscsi-target
@ -242,13 +242,13 @@ Add an exception to the firewall and enable and start the service:
# systemctl start tgtd.service
```

You should now be able to see the image being shared with the tgtadm command:

```
# tgtadm --mode target --op show
```

The above command should output something similar to the following:

```
Target 1: iqn.edu.example.server-01:fc28
@ -290,7 +290,7 @@ Target 1: iqn.edu.example.server-01:fc28
ALL
```

We can now remove the NFS share that we created in part one of this series:

```
# rm -f /etc/exports.d/fc28.exports
@ -300,11 +300,11 @@ We can now remove the NFS share that we created in part one of this series:
# sed -i '/^\/fc28 /d' /etc/fstab
```

You can also delete the /fc28 filesystem, but you may want to keep it for performing future updates.
### Update the ESP to use the iSCSI Kernel

Update the ESP to contain the iSCSI-enabled initramfs:

```
$ rm -vf $HOME/esp/linux/*.fc28.*
@ -313,7 +313,7 @@ $ cp $(find /fc28/lib/modules -maxdepth 2 -name 'vmlinuz' | grep -m 1 $MY_KRNL)
$ cp $(find /fc28/boot -name 'init*' | grep -m 1 $MY_KRNL) $HOME/esp/linux/initramfs-$MY_KRNL.img
```

Update the boot.cfg file to pass the new root and netroot parameters:

```
$ MY_NAME=server-01.example.edu
@ -322,52 +322,52 @@ $ MY_ADDR=$(host -t A $MY_NAME | awk '{print $4}')
$ sed -i "s! root=[^ ]*! root=/dev/disk/by-path/ip-$MY_ADDR:3260-iscsi-iqn.$MY_EMAN:fc28-lun-1 netroot=iscsi:$MY_ADDR::::iqn.$MY_EMAN:fc28!" $HOME/esp/linux/boot.cfg
```
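After the substitution, the relevant portion of the kernel command line in boot.cfg should look roughly like the following (using the example hostname from this series and 192.0.2.158 as a placeholder server address):

```
root=/dev/disk/by-path/ip-192.0.2.158:3260-iscsi-iqn.edu.example.server-01:fc28-lun-1 netroot=iscsi:192.0.2.158::::iqn.edu.example.server-01:fc28
```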
Now you just need to copy the updated files from your $HOME/esp/linux directory out to the ESPs of all your client systems. You should see results similar to what is shown in the screenshot below:

![][10]
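How you get the files onto each client’s ESP depends on your environment. As a minimal sketch, assuming the clients are reachable over SSH with their ESPs mounted at /boot/efi (hypothetical hostnames shown), something like this would work:

```
$ for HOST in client-01.example.edu client-02.example.edu; do rsync -rv $HOME/esp/linux/ root@$HOST:/boot/efi/linux/; done
```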
### Upgrading the Image

First, make a copy of the current image:

```
# cp -a /fc28 /fc29
```

Chroot into the new copy of the image:

```
# for i in dev dev/pts dev/shm proc sys run; do mount -o bind /$i /fc29/$i; done
# chroot /fc29 /usr/bin/bash --login
```

Allow updating the kernel:

```
# sed -i 's/^exclude=kernel-\*$/#exclude=kernel-*/' /etc/dnf/dnf.conf
```

Perform the upgrade:

```
# dnf distro-sync -y --releasever=29
```

Prevent the kernel from being updated:

```
# sed -i 's/^#exclude=kernel-\*$/exclude=kernel-*/' /etc/dnf/dnf.conf
```

The above command is optional, but it saves you from having to copy a new kernel out to the clients if you add or update a few packages in the image at some future time.

Clean up dnf’s package cache:

```
# dnf clean all
```

Exit the chroot and clear root’s command history:

```
# logout
@ -375,7 +375,7 @@ Exit the chroot and clear root’s command history:
# > /fc29/root/.bash_history
```

Create the iSCSI image:

```
# FC29_SIZE=$(du -ms /fc29 | cut -f 1)
@ -387,7 +387,7 @@ Create the iSCSI image:
# umount $TEMP_MNT
```

Define a new iSCSI target that points to our new image and export it:

```
# MY_HOSTNAME=$(</etc/hostname)
@ -401,7 +401,7 @@ END
# tgt-admin --update ALL
```

Add the new kernel and initramfs to the ESP:

```
$ MY_KRNL=$(ls -c /fc29/lib/modules | head -n 1)
@ -409,7 +409,7 @@ $ cp $(find /fc29/lib/modules -maxdepth 2 -name 'vmlinuz' | grep -m 1 $MY_KRNL)
$ cp $(find /fc29/boot -name 'init*' | grep -m 1 $MY_KRNL) $HOME/esp/linux/initramfs-$MY_KRNL.img
```

Update the boot.cfg in the ESP:

```
$ MY_DNS1=192.0.2.91
@ -426,8 +426,7 @@ boot || exit
END
```
Finally, copy the files from your $HOME/esp/linux directory out to the ESPs of all your client systems and enjoy!

--------------------------------------------------------------------------------
@ -435,7 +434,7 @@ via: https://fedoramagazine.org/how-to-build-a-netboot-server-part-2/

Author: [Gregory Bartholomew][a]
Topic selection: [lujun9972][b]
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).