Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
wxy 2018-02-06 18:52:33 +08:00
commit 2d15da7e22
30 changed files with 2038 additions and 420 deletions

View File

@ -1,117 +0,0 @@
XLCYun is translating
Manjaro Gaming: Gaming on Linux Meets Manjaro's Awesomeness
======
[![Meet Manjaro Gaming, a Linux distro designed for gamers with the power of Manjaro][1]][1]
[Gaming on Linux][2]? Yes, that's very much possible, and we have a dedicated new Linux distribution aimed at gamers.
Manjaro Gaming is a Linux distro designed for gamers with the power of Manjaro. Those who have used Manjaro Linux before know exactly why this is such good news for gamers.
[Manjaro][3] is a Linux distro based on one of the most popular distros, [Arch Linux][4]. Arch Linux is widely known for its bleeding-edge nature, offering a lightweight, powerful, extensively customizable, and up-to-date experience. And while all of those are absolutely great, the main drawback is that Arch Linux embraces the DIY (do it yourself) approach, where users need a certain level of technical expertise to get along with it.
Manjaro strips that requirement and makes Arch accessible to newcomers, while at the same time providing all the advanced and powerful features of Arch for experienced users as well. In short, Manjaro is a user-friendly Linux distro that works straight out of the box.
The reasons why Manjaro makes a great and extremely suitable distro for gaming are:
* Manjaro automatically detects the computer's hardware (e.g., graphics cards)
* Automatically installs the necessary drivers and software (e.g., graphics drivers)
* Various codecs for media file playback come pre-installed
* Has dedicated repositories that deliver fully tested and stable packages
Manjaro Gaming is packed with all of Manjaro's awesomeness, with the addition of various tweaks and software packages dedicated to making gaming on Linux smooth and enjoyable.
![Inside Manjaro Gaming][5]
#### Tweaks
Some of the tweaks made on Manjaro Gaming are:
* Manjaro Gaming uses the highly customizable XFCE desktop environment with an overall dark theme.
* Sleep mode is disabled to prevent the computer from sleeping while playing games with a gamepad or watching long cutscenes.
#### Software
Maintaining Manjaro's tradition of working straight out of the box, Manjaro Gaming comes bundled with various open source software to provide functionality gamers often need. Some of the software included:
* [**KdenLIVE**][6]: Video editing software for editing gaming videos
* [**Mumble**][7]: Voice chatting software for gamers
* [**OBS Studio**][8]: Software for recording and live-streaming game videos on [Twitch][9]
* **[OpenShot][10]** : Powerful video editor for Linux
* [**PlayOnLinux**][11]: For running Windows games on Linux with the [Wine][12] backend
* [**Shutter**][13]: Feature-rich screenshot tool
#### Emulators
Manjaro Gaming comes with a long list of gaming emulators:
* **[DeSmuME][14]** : Nintendo DS emulator
* **[Dolphin Emulator][15]** : GameCube and Wii emulator
* [**DOSBox**][16]: DOS Games emulator
* **[FCEUX][17]** : Nintendo Entertainment System (NES), Famicom, and Famicom Disk System (FDS) emulator
* **Gens/GS** : Sega Mega Drive emulator
* **[PCSXR][18]** : PlayStation emulator
* [**PCSX2**][19]: Playstation 2 emulator
* [**PPSSPP**][20]: PSP emulator
* **[Stella][21]** : Atari 2600 VCS emulator
* [**VBA-M**][22]: Game Boy and Game Boy Advance emulator
* [**Yabause**][23]: Sega Saturn emulator
* **[ZSNES][24]** : Super Nintendo emulator
#### Others
There are some terminal add-ons: Color, ILoveCandy, and Screenfetch. [Conky Manager][25] with a Retro Conky theme is also included.
**Note: Not all of the features mentioned are included in the current release of Manjaro Gaming (16.03). Some of them are scheduled for the next release, Manjaro Gaming 16.06.**
### Downloads
Manjaro Gaming 16.06 is going to be the first proper release of Manjaro Gaming. But if you are interested enough to try it now, Manjaro Gaming 16.03 is available for download on the SourceForge [project page][26]. Go there and grab the ISO.
How do you feel about this new gaming Linux distro? Are you thinking of giving it a try? Let us know!
--------------------------------------------------------------------------------
via: https://itsfoss.com/manjaro-gaming-linux/
Author: [Munif Tanjim][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]: https://itsfoss.com/author/munif/
[1]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming.jpg
[2]:https://itsfoss.com/linux-gaming-guide/
[3]:https://manjaro.github.io/
[4]:https://www.archlinux.org/
[5]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming-Inside-1024x576.png
[6]:https://kdenlive.org/
[7]:https://www.mumble.info
[8]:https://obsproject.com/
[9]:https://www.twitch.tv/
[10]:http://www.openshot.org/
[11]:https://www.playonlinux.com
[12]:https://www.winehq.org/
[13]:http://shutter-project.org/
[14]:http://desmume.org/
[15]:https://dolphin-emu.org
[16]:https://www.dosbox.com/
[17]:http://www.fceux.com/
[18]:https://pcsxr.codeplex.com
[19]:http://pcsx2.net/
[20]:http://www.ppsspp.org/
[21]:http://stella.sourceforge.net/
[22]:http://vba-m.com/
[23]:https://yabause.org/
[24]:http://www.zsnes.com/
[25]:https://itsfoss.com/conky-gui-ubuntu-1304/
[26]:https://sourceforge.net/projects/mgame/

View File

@ -0,0 +1,56 @@
Your API is missing Swagger
======
![](https://ryanmccue.ca/content/images/2017/11/top-20mobileapps--3-.png)
We have all struggled through thrown-together, convoluted API documentation. It is frustrating and, in the worst case, can lead to bad requests. Understanding an API is something most developers do on a regular basis, so it is a wonder that the majority of APIs have such horrific documentation.
[Swagger][1] is the solution to this problem. Swagger came out in 2011 and is an open source software framework with many tools that help developers design, build, document, and consume RESTful APIs. Designing an API with Swagger, or documenting it with Swagger afterwards, helps everyone consume your API seamlessly. One of the amazing features many people do not know about is that you can actually **generate** a client from Swagger! That's right: if a service you're consuming has Swagger documentation, you can generate a client to consume it!
All major languages support Swagger and can connect it to your API. Depending on the language your API is written in, the Swagger documentation can even be generated from the actual code. Here are some of the standout Swagger libraries I've seen recently.
### Golang
Golang has a couple of great tools for integrating Swagger into your API. The first is [go-swagger][2], a tool that lets you generate the scaffolding for an API from a Swagger file. This is a fundamentally different way of thinking about APIs. Instead of building endpoints and thinking up new ones on the fly, go-swagger gets you to think through your API before you write a single line of code. This can help you visualize what you want the API to do first. Another Golang tool is called [Goa][3]. A quote from their website sums up what Goa is:
> goa provides a novel approach for developing microservices that saves time when working on independent services and helps with keeping the overall system consistent. goa uses code generation to handle both the boilerplate and ancillary artifacts such as documentation, client modules, and client tools.
They take designing the API before implementing it to a new level. Goa has a DSL to help you programmatically describe your entire API, from endpoints to payloads to responses. From this DSL, Goa generates a Swagger file for anyone who consumes your API, and it will enforce that your endpoints output the correct data, which keeps your API and documentation in sync. This feels counter-intuitive when you start, but after actually implementing an API with Goa, you will not know how you ever did it before.
### Python
[Flask][4] has a great extension for building an API with Swagger called [Flask-RESTPlus][5].
> If you are familiar with Flask, Flask-RESTPlus should be easy to pick up. It provides a coherent collection of decorators and tools to describe your API and expose its documentation properly using Swagger.
It uses Python decorators to generate Swagger documentation and can be used to enforce endpoint output, similar to Goa. It can be very powerful and makes generating Swagger documentation from an API stupidly easy.
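To make this concrete, here is a minimal Flask-RESTPlus sketch of my own (the `Task` model and endpoint are illustrative, not from any particular project; it assumes `flask` and `flask-restplus` are installed):

```python
# A minimal sketch: the model and decorators below are what generate the
# Swagger documentation, which Flask-RESTPlus serves at the API root.
from flask import Flask
from flask_restplus import Api, Resource, fields

app = Flask(__name__)
api = Api(app, version="1.0", title="Task API",
          description="A toy API to illustrate Swagger generation")

# The model documents the payload in Swagger and drives marshalling.
task = api.model("Task", {
    "id": fields.Integer(readonly=True, description="Unique identifier"),
    "name": fields.String(required=True, description="Task name"),
})

@api.route("/tasks")
class TaskList(Resource):
    @api.marshal_list_with(task)
    def get(self):
        """List all tasks"""  # docstrings become endpoint descriptions
        return [{"id": 1, "name": "write the docs"}]

if __name__ == "__main__":
    app.run(debug=True)
```

Run it and the interactive Swagger UI, generated entirely from those decorators, should appear at the root URL.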
### NodeJS
Finally, NodeJS has a powerful tool for working with Swagger called [swagger-js-codegen][6]. It can generate both servers and clients from a Swagger file.
> This package generates a nodejs, reactjs or angularjs class from a swagger specification file. The code is generated using mustache templates and is quality checked by jshint and beautified by js-beautify.
It is not quite as easy to use as Goa or Flask-RESTPlus, but if Node is your thing, it will do the job. It shines when it comes to generating frontend code to interface with your API, which is perfect if you're developing a web app to go along with the API.
### Conclusion
Swagger is a simple yet powerful representation of your RESTful API. When used properly, it can help flesh out your API design and make the API easier to consume. Harnessing its full power can save you time by forming and visualizing your API before you write a line of code, and then generating the boilerplate surrounding the core logic. With tools like [Goa][3], [Flask-RESTPlus][5], and [swagger-js-codegen][6] making the whole experience of architecting and implementing an API painless, there is no excuse not to have Swagger.
--------------------------------------------------------------------------------
via: https://ryanmccue.ca/your-api-is-missing-swagger/
Author: [Ryan McCue][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://ryanmccue.ca/author/ryan/
[1]:http://swagger.io
[2]:https://github.com/go-swagger/go-swagger
[3]:https://goa.design/
[4]:http://flask.pocoo.org/
[5]:https://github.com/noirbizarre/flask-restplus
[6]:https://github.com/wcandillon/swagger-js-codegen

View File

@ -0,0 +1,54 @@
5 Podcasts Every Dev Should Listen to
======
![](https://ryanmccue.ca/content/images/2017/11/Electric-Love.png)
Being a developer is a tough job: the landscape is constantly changing, and new frameworks and best practices come out every month. Having a great go-to list of podcasts keeping you up to date on the industry can make a huge difference. I've done some of the hard work and created a list of the top 5 podcasts I personally listen to.
### This Developer's Life
Unlike many developer-focused podcasts, there is no talk of code or explanations of software architecture in [This Developer's Life][1]. There are just relatable stories from other developers. This Developer's Life dives into the issues developers face in their daily lives, from a developer's point of view. [Rob Conery][2] and [Scott Hanselman][3] host the show, and it focuses on all aspects of a developer's life: for example, what it feels like to get fired, to hit a home run, to be competitive. It is a very well made podcast, and it isn't just for developers; it can also be enjoyed by those who love and live with them.
### Developer Tea
Don't have a lot of time? [Developer Tea][4] is "a podcast for developers designed to fit inside your tea break." The podcast exists to help driven developers connect with their purpose and excel at their work so that they can make an impact. Hosted by [Jonathan Cutrell][5], the director of technology at Whiteboard, Developer Tea breaks down the news and gives useful insights into all aspects of a developer's life, in and out of work. Cutrell answers listener questions mixed in with news, interviews, and career advice, and the show releases multiple episodes every week.
### Software Engineering Daily
[Software Engineering Daily][6] is a daily podcast which focuses on heavily technical topics like software development and system architecture. It covers a range of topics, from load balancing at scale and serverless event-driven architecture to augmented reality. Hosted by [Jeff Meyerson][7], this podcast is great for developers with a passion for learning about complicated software topics and expanding their knowledge base.
### Talking Code
The [Talking Code][8] podcast is from 2015 and contains 24 episodes of "short expert interviews that help you decode what developers are saying." The hosts, [Josh Smith][9] and [Venkat Dinavahi][10], cover diverse web development topics, from how to become an effective junior developer and how to go from junior to senior developer, to building modern web applications and making the most of your analytics. This podcast is perfect for those getting into web development and those looking to level up their web development skills.
### The Laracasts Snippet
[The Laracasts Snippet][11] is a bite-size podcast where each episode offers a single thought on some aspect of web development. The host, [Jeffrey Way][12], is a prominent figure in the Laravel community and runs the site [Laracasts][12]. His insights are broad and useful for developers of all backgrounds.
### Conclusion
Podcasts are on the rise, and more and more developers are listening to them. With such a rapidly expanding list of new podcasts coming out, it can be tough to pick the top 5, but if you listen to these podcasts, you will have a competitive edge as a developer.
--------------------------------------------------------------------------------
via: https://ryanmccue.ca/podcasts-every-developer-should-listen-too/
Author: [Ryan McCue][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://ryanmccue.ca/author/ryan/
[1]:http://thisdeveloperslife.com/
[2]:https://rob.conery.io/
[3]:https://www.hanselman.com/
[4]:https://developertea.com/
[5]:http://jonathancutrell.com/
[6]:https://softwareengineeringdaily.com/
[7]:http://jeffmeyerson.com/
[8]:http://talkingcode.com/
[9]:https://twitter.com/joshsmith
[10]:https://twitter.com/venkatdinavahi
[11]:https://laracasts.simplecast.fm/
[12]:https://laracasts.com

View File

@ -0,0 +1,48 @@
Blueprint for Simple Scalable Microservices
======
![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Electric-Love--1-.png)
When you're building a microservice, what do you value? A fully managed and scalable system? It's hard to know where to start with AWS; there are so many options for hosting code: EC2, ECS, Elastic Beanstalk, Lambda. Everyone has their own patterns for deploying microservices. Using the pattern below will provide a great structure for a scalable microservice architecture.
### Elastic Beanstalk
The first and most important piece is [Elastic Beanstalk][1]. It is a great, simple way to deploy auto-scaling microservices. All you need to do is upload your code to Elastic Beanstalk via their command line tool or management console. Once it's in Elastic Beanstalk, deployment, capacity provisioning, load balancing, and auto-scaling are handled by AWS.
### S3
Another important service is [S3][2]: object storage built to store and retrieve data. S3 has lots of uses, from storing images to backups. One particular use case is storing sensitive files, such as private keys or environment-variable files, which will be accessed and used by multiple instances or services. S3 also works for less sensitive, publicly accessible files like configuration files, Dockerfiles, and images.
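As a sketch of that pattern (mine, not part of the original article), a service might pull a shared environment file from S3 at start-up with boto3; the bucket and key names are illustrative, and configured AWS credentials are assumed:

```python
# Fetch a shared config/secret file from S3 at service start-up.
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-service-config", Key="production/.env")
env_file = obj["Body"].read().decode("utf-8")

# Every instance reads the same object, so secrets and environment
# variables stay consistent across the whole fleet.
print(env_file)
```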
### Kinesis
[Kinesis][3] is a tool that allows microservices to communicate with each other and with other services like Lambda, which we will discuss further down. Kinesis does this with real-time, persistent data streaming, which enables microservices to emit events. Data can be persisted for up to 7 days for replay and batch processing.
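A minimal sketch of a microservice emitting an event (the stream name and event shape are illustrative assumptions, and the stream is assumed to already exist):

```python
# Publish a microservice event to a shared Kinesis stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def emit_event(event_type, payload):
    kinesis.put_record(
        StreamName="user-events",   # assumed, pre-created stream
        Data=json.dumps({"type": event_type, "payload": payload}),
        PartitionKey=event_type,    # same key -> same shard, preserving order per type
    )

emit_event("user.signup", {"user_id": 42})
```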
### RDS
[Amazon RDS][4] is a great, fully managed relational database service hosted by AWS. Using RDS over your own database server is beneficial because AWS manages everything. It makes it easy to set up, operate, and scale a relational database.
### Lambda
Finally, [AWS Lambda][5] lets you run code without provisioning or managing servers. Lambda has many uses; you can even create whole APIs with it. Some great uses for it in a microservice architecture are cron jobs and image manipulation. Crons can be scheduled with [CloudWatch][6].
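For illustration, a scheduled Lambda "cron" can be as small as the sketch below; the work it does is a stand-in, but the `(event, context)` signature and the `time` field of a CloudWatch scheduled event are real:

```python
# A Lambda handler invoked by a CloudWatch Events schedule (a "cron").
def handler(event, context):
    # CloudWatch scheduled events carry the trigger time under "time".
    print("cron fired at", event.get("time"))
    # ... do the periodic work here: cleanup, report generation, etc. ...
    return {"status": "ok"}

if __name__ == "__main__":
    # Local smoke test with a fake scheduled event.
    print(handler({"time": "2018-02-06T00:00:00Z"}, None))
```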
### Conclusion
With these AWS products, you can create fully scalable, stateless microservices that communicate with each other: Elastic Beanstalk to run the microservices, S3 to store files, Kinesis to emit events, and Lambdas to subscribe to those events and run other tasks. Finally, RDS makes it easy to manage and scale relational databases.
--------------------------------------------------------------------------------
via: https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/
Author: [Ryan McCue][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://ryanmccue.ca/author/ryan/
[1]:https://aws.amazon.com/elasticbeanstalk/?nc2=h_m1
[2]:https://aws.amazon.com/s3/?nc2=h_m1
[3]:https://aws.amazon.com/kinesis/?nc2=h_m1
[4]:https://aws.amazon.com/rds/?nc2=h_m1
[5]:https://aws.amazon.com/lambda/?nc2=h_m1
[6]:https://aws.amazon.com/cloudwatch/?nc2=h_m1

View File

@ -0,0 +1,60 @@
5 Things to Look for When You Contract Out the Backend of Your App
======
![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)
For many app developers, it can be hard to know what to do when it comes to the backend of your app. There are a few options: Firebase, throwing together a quick Node API, or contracting it out. I am going to write a blog post soon weighing the pros and cons of each of these options, but for now, let's assume you want the API done professionally.
You are going to want to look for specific things before you give the contract to some freelancer or agency.
### 1. Documentation
Documentation is one of the most important pieces here. The API could be amazing, but if it is impossible to understand which endpoints are available, what parameters they accept, and what they respond with, you won't have much luck integrating the API into your app. Surprisingly, this is one of the pieces that most contractors get wrong.
So what are you looking for? First, make sure they understand the importance of documentation; this alone makes a huge difference. Second, they should preferably be using an open standard like [Swagger][1] for documentation. If they do both of these things, you should have documentation covered.
### 2. Communication
You know the saying "communication is key"; well, it applies to API development too. This is harder to gauge, but sometimes a developer will get the contract and then disappear. This doesn't mean they aren't working on it, but it does mean there isn't a good feedback loop to sort out problems before they get too large.
A good way to get around this is to have a meeting, weekly or however often you want, to go over progress and make sure the API is shaping up the way you want, even if the meeting is just going over the endpoints and confirming they return the data you need.
### 3. Error Handling
Error handling is crucial. This basically means that if there is an error on the backend, whether it's an invalid request or an unexpected internal server error, it will be handled properly and a useful response will be given to the client. It's important that errors are handled gracefully; this often gets overlooked in the API development process.
This is a tricky thing to look out for, but by letting them know you expect useful error messages, and perhaps putting it into the contract, you should get the error messages you need. This may seem like a small thing, but being able to present the user of your app with the actual thing they've done wrong, like "Passwords must be between 6-64 characters", improves the UX immensely.
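As a concrete sketch of what a useful response looks like (a minimal Flask example of my own, reusing the password rule above):

```python
# Return errors the client app can show to users; never leak stack traces.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/signup", methods=["POST"])
def signup():
    body = request.get_json(silent=True) or {}
    password = body.get("password", "")
    if not 6 <= len(password) <= 64:
        # 400 with a message the app can display verbatim
        return jsonify(error="Passwords must be between 6-64 characters"), 400
    return jsonify(status="created"), 201

@app.errorhandler(500)
def internal_error(exc):
    # Log the details server-side; give the client something safe and useful.
    return jsonify(error="Unexpected server error, please try again"), 500
```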
### 4. Database
This section may be a bit controversial, but I think that 90% of apps really just need a SQL database. I know NoSQL is sexy, but you get so many extra benefits from using SQL I feel that's what you should use for the backend of your app. Of course, there are cases where NoSQL is the better option, but broadly speaking you should probably just use a SQL database.
SQL adds so much flexibility by letting you add, modify, and remove columns. The option to aggregate data with a simple query is also immensely useful. And finally, the ability to use transactions and be sure all your data is valid will help you sleep better at night.
The reason I say all the above is because I would recommend looking for someone who is willing to build your API with a SQL database.
### 5. Infrastructure
The last major thing to look for when contracting out your backend is infrastructure. This is essential because you want your app to scale. If 10,000 users join your app in one day for some reason, you want your backend to handle that. Using services like [AWS Elastic Beanstalk][2] or [Heroku][3], you can create APIs which will scale up automatically with load. That means if your app takes off overnight, your API will scale with the load and not buckle under it.
Making sure your contractor is building it with scalability in mind is key. I wrote a [post on scalable APIs][4] if you're interested in learning more about a good AWS stack.
### Conclusion
It is important to get a quality backend when you contract it out. You're paying for a professional to design and build the backend of your app, so if they're lacking in any of the above points, it will reduce the chance of success not only for the backend, but for your app. If you make a checklist of these points and go over them with contractors, you should be able to weed out the under-qualified applicants and focus your attention on the contractors who know what they're doing.
--------------------------------------------------------------------------------
via: https://ryanmccue.ca/things-to-look-for-when-you-contract-out-the-backend-your-app/
Author: [Ryan McCue][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://ryanmccue.ca/author/ryan/
[1]:https://swagger.io/
[2]:https://aws.amazon.com/elasticbeanstalk/
[3]:https://www.heroku.com/
[4]:https://ryanmccue.ca/blueprint-for-simple-scalable-microservices/

View File

@ -0,0 +1,88 @@
Where to Get Your App Backend Built
======
![](https://ryanmccue.ca/content/images/2017/12/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)
Building a great app takes lots of work, from designing the views to adding the right transitions and images. One thing which is often overlooked is the backend: connecting your app to the outside world. A backend which is not up to the same quality as your app can wreck even the most perfect user interface. That is why choosing the right option for your backend budget and needs is essential.
There are three main choices you have when you're getting it built. First, you have agencies: companies with salespeople, project managers, and developers. Second, you have market-rate freelancers: developers who charge market rate for their work and are often in North America or Western Europe. Finally, there are budget freelancers, who are inexpensive and usually in parts of Asia and South America.
I am going to break down the pros and cons of each of these options.
### Agency
Agencies are often a safe bet. If you're looking for a more hands-off approach, they are usually the way to go: they have project managers who will manage your project and communicate your requirements to developers. This takes some of the work off of your plate and frees you up to work on your app. Agencies also often have a team of developers at their disposal, so if the developer working on your project takes a vacation, they can swap another developer in without much hassle.
With all these upsides there is a downside: price. Having a sales team, a project management team, and a developer team isn't cheap. Agencies often cost quite a bit of money compared to freelancers.
So in summary:
#### Pros
* Hands Off
* No Single Point of Failure
#### Cons
* Very expensive
### Market Rate Freelancer
Another option you have is market-rate freelancers: highly skilled developers who have often worked in agencies but decided to go their own way and find clients themselves. They generally produce high-quality work at a lower cost than agencies.
The downside to freelancers is that, since they're only one person, they might not be available right away to start your work. For high-demand freelancers especially, you may have to wait a few weeks or months before they start development. They are also hard to replace: if they get sick or go on vacation, it can often be hard to find someone to continue the work, unless you get a good recommendation from the freelancer.
#### Pros
* Cost Effective
* Similar quality to agency
* Great for short term
#### Cons
* May not be available
* Hard to replace
### Budget Freelancer
The last option I'm going over is budget freelancers, who are often found on job boards such as Fiverr and Upwork. They work for very cheap, but that often comes at the cost of quality and communication. Often you will not get what you're looking for, or you will get very brittle code which buckles under strain.
If you're on a very tight budget, it may be worth rolling the dice on a highly rated budget freelancer, although you must be okay with the risk of potentially throwing the code away.
#### Pros
* Very cheap
#### Cons
* Often low quality
* May not be what you asked for
### Conclusion
Getting the right backend for your app is important. It is often a good idea to stick with agencies or market-rate freelancers because of the predictability and higher-quality code, but if you're on a very tight budget, rolling the dice with budget freelancers could pay off. At the end of the day, it doesn't matter where the code is from, as long as it works and does what it's supposed to do.
--------------------------------------------------------------------------------
via: https://ryanmccue.ca/where-to-get-your-app-backend-built/
Author: [Ryan McCue][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://ryanmccue.ca/author/ryan/

View File

@ -1,84 +0,0 @@
Why isn't open source hot among computer science students?
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_OpenClass_520x292_FINAL_JD.png?itok=ly78pMqu)
Image by : opensource.com
The technical savvy and inventive energy of young programmers is alive and well.
This was clear from the diligent work that I witnessed while participating in this year's [PennApps][1], the nation's largest college hackathon. Over the course of 48 hours, my high school- and college-age peers created projects ranging from a [blink-based communication device for shut-in patients][2] to a [burrito maker with IoT connectivity][3]. The spirit of open source was tangible throughout the event, as diverse groups bonded over a mutual desire to build, the free flow of ideas and tech know-how, fearless experimentation and rapid prototyping, and an overwhelming eagerness to participate.
Why then, I wondered, wasn't open source a hot topic among my tech geek peers?
To learn more about what college students think when they hear "open source," I surveyed several college students who are members of the same professional computer science organization I belong to. All members of this community must apply during high school or college and are selected based on their computer science-specific achievements and leadership--whether that means leading a school robotics team, founding a nonprofit to bring coding into insufficiently funded classrooms, or some other worthy endeavor. Given these individuals' accomplishments in computer science, I thought that their perspectives would help in understanding what young programmers find appealing (or unappealing) about open source projects.
The online survey I prepared and disseminated included the following questions:
* Do you like to code personal projects? Have you ever contributed to an open source project?
* Do you feel like it's more beneficial to you to start your own programming projects, or to contribute to existing open source efforts?
* How would you compare the prestige associated with coding for an organization that produces open source software versus proprietary software?
Though the overwhelming majority said that they at least occasionally enjoyed coding personal projects in their spare time, most had never contributed to an open source project. When I explored this trend further, a few common preconceptions about open source projects and organizations came to light. To persuade my peers that open source projects are worth their time, and to give educators and open source organizations insight into their students, I'll address the three top preconceptions.
### Preconception #1: Creating personal projects from scratch is better experience than contributing to an existing open source project.
Of the college-age programmers I surveyed, 24 out of 26 asserted that starting their own personal projects felt potentially more beneficial than building on open source ones.
As a bright-eyed freshman in computer science, I believed this too. I had often heard from older peers that personal projects would make me more appealing to intern recruiters. No one ever mentioned the possibility of contributing to open source projects--so in my mind, it wasn't relevant.
I now realize that open source projects offer powerful preparation for the real world. Contributing to open source projects cultivates [an awareness of how tools and languages piece together][4] in a way that even individual projects cannot. Moreover, open source is an exercise in coordination and collaboration, building students' [professional skills in communication, teamwork, and problem-solving][5].
### Preconception #2: My coding skills just won't cut it.
A few respondents said they were intimidated by open source projects, unsure of where to contribute, or fearful of stunting project progress. Unfortunately, feelings of inferiority, which all too often affect female programmers especially, do not stop at the open source community. In fact, "Imposter Syndrome" may even be magnified, as [open source advocates typically reject bureaucracy][6]--and as difficult as bureaucracy makes internal mobility, it helps newcomers know their place in an organization.
I remember how intimidated I felt by contribution guidelines while looking through open source projects on GitHub for the first time. However, guidelines are not intended to encourage exclusivity, but to provide a [guiding hand][7]. To that end, I think of guidelines as a way of establishing expectations without relying on a hierarchical structure.
Several open source projects actively carve a place for new project contributors. [TEAMMATES][8], an educational feedback management tool, is one of the many open source projects that marks issues "up for grabs" for first-timers. In the comments, programmers of all skill levels iron out implementation details, demonstrating that open source is a place for eager new programmers and seasoned software veterans alike. For young programmers who are still hesitant, [a few open source projects][9] have been thoughtful enough to adopt an [Imposter Syndrome disclaimer][10].
### Preconception #3: Proprietary software firms do better work than open source software organizations.
Only five of the 26 respondents I surveyed thought that open and proprietary software organizations were considered equal in prestige. This is likely due to the misperception that "open" means "profitless," and thus low-quality (see [Doesn't 'open source' just mean something is free of charge?][11]).
However, open source software and profitable software are not mutually exclusive. In fact, small and large businesses alike often pay for free open source software to receive technical support services. As [Red Hat CEO Jim Whitehurst explains][12], "We have engineering teams that track every single change--a bug fix, security enhancement, or whatever--made to Linux, and ensure our customers' mission-critical systems remain up-to-date and stable."
Moreover, the nature of openness facilitates rather than hinders quality by enabling more people to examine source code. [Igor Faletski, CEO of Mobify][13], writes that Mobify's team of "25 software developers and quality assurance professionals" is "no match for the all the software developers in the world who might make use of [Mobify's open source] platform. Each of them is a potential tester of, or contributor to, the project."
Another problem may be that young programmers are not aware of the open source software they interact with every day. I used many tools--including MySQL, Eclipse, Atom, Audacity, and WordPress--for months or even years without realizing they were open source. College students, who often rush to download syllabus-specified software to complete class assignments, may be unaware of which software is open source. This makes open source seem more foreign than it is.
So students, don't knock open source before you try it. Check out this [list of beginner-friendly projects][14] and [these six starting points][15] to begin your open source journey.
Educators, remind your students of the open source community's history of successful innovation, and lead them toward open source projects outside the classroom. You will help develop sharper, better-prepared, and more confident students.
### About the author
Susie Choi - Susie is an undergraduate student studying computer science at Duke University. She is interested in the implications of technological innovation and open source principles for issues relating to education and socioeconomic inequality.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/students-and-open-source-3-common-preconceptions
Author: [Susie Choi][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/susiechoi
[1]:http://pennapps.com/
[2]:https://devpost.com/software/blink-9o2iln
[3]:https://devpost.com/software/daburrito
[4]:https://hackernoon.com/benefits-of-contributing-to-open-source-2c97b6f529e9
[5]:https://opensource.com/education/16/8/5-reasons-student-involvement-open-source
[6]:https://opensource.com/open-organization/17/7/open-thinking-curb-bureaucracy
[7]:https://opensource.com/life/16/3/contributor-guidelines-template-and-tips
[8]:https://github.com/TEAMMATES/teammates/issues?q=is%3Aissue+is%3Aopen+label%3Ad.FirstTimers
[9]:https://github.com/adriennefriend/imposter-syndrome-disclaimer/blob/master/examples.md
[10]:https://github.com/adriennefriend/imposter-syndrome-disclaimer
[11]:https://opensource.com/resources/what-open-source
[12]:https://hbr.org/2013/01/yes-you-can-make-money-with-op
[13]:https://hbr.org/2012/10/open-sourcing-may-be-worth
[14]:https://github.com/MunGell/awesome-for-beginners
[15]:https://opensource.com/life/16/1/6-beginner-open-source

View File

@ -1,3 +1,4 @@
translating by wyxplus
How to price cryptocurrencies
======

View File

@ -1,3 +1,4 @@
translating by leowang
Moving to Linux from dated Windows machines
======

View File

@ -1,76 +0,0 @@
##Name1e5s Translating##
Open source software: 20 years and counting
============================================================
### On the 20th anniversary of the coining of the term "open source software," how did it rise to dominance and what's next?
![Open source software: 20 years and counting](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/2cents.png?itok=XlT7kFNY "Open source software: 20 years and counting")
Image by : opensource.com
Twenty years ago, in February 1998, the term "open source" was first applied to software. Soon afterwards, the Open Source Definition was created and the seeds that became the Open Source Initiative (OSI) were sown. As the OSD's author [Bruce Perens relates][9],
> “Open source” is the proper name of a campaign to promote the pre-existing concept of free software to business, and to certify licenses to a rule set.
Twenty years later, that campaign has proven wildly successful, beyond the imagination of anyone involved at the time. Today open source software is literally everywhere. It is the foundation for the internet and the web. It powers the computers and mobile devices we all use, as well as the networks they connect to. Without it, cloud computing and the nascent Internet of Things would be impossible to scale and perhaps to create. It has enabled new ways of doing business to be tested and proven, allowing giant corporations like Google and Facebook to start from the top of a mountain others already climbed.
Like any human creation, it has a dark side as well. It has also unlocked dystopian possibilities for surveillance and the inevitably consequent authoritarian control. It has provided criminals with new ways to cheat their victims and unleashed the darkness of bullying delivered anonymously and at scale. It allows destructive fanatics to organize in secret without the inconvenience of meeting. All of these are shadows cast by useful capabilities, just as every human tool throughout history has been used both to feed and care and to harm and control. We need to help the upcoming generation strive for irreproachable innovation. As [Richard Feynman said][10],
> To every man is given the key to the gates of heaven. The same key opens the gates of hell.
As open source has matured, the way it is discussed and understood has also matured. The first decade was one of advocacy and controversy, while the second was marked by adoption and adaptation.
1. In the first decade, the key question concerned business models—“how can I contribute freely yet still be paid?”—while during the second, more people asked about governance—“how can I participate yet keep control/not be controlled?”
2. Open source projects of the first decade were predominantly replacements for off-the-shelf products; in the second decade, they were increasingly components of larger solutions.
3. Projects of the first decade were often run by informal groups of individuals; in the second decade, they were frequently run by charities created on a project-by-project basis.
4. Open source developers of the first decade were frequently devoted to a single project and often worked in their spare time. In the second decade, they were increasingly employed to work on a specific technology—professional specialists.
5. While open source was always intended as a way to promote software freedom, during the first decade, conflict arose with those preferring the term “free software.” In the second decade, this conflict was largely ignored as open source adoption accelerated.
So what will the third decade bring?
1. _The complexity business model_ —The predominant business model will involve monetizing the solution of the complexity arising from the integration of many open source parts, especially from deployment and scaling. Governance needs will reflect this.
2. _Open source mosaics_ —Open source projects will be predominantly families of component parts, together with being built into stacks of components. The resultant larger solutions will be a mosaic of open source parts.
3. _Families of projects_ —More and more projects will be hosted by consortia/trade associations like the Linux Foundation and OpenStack, and by general-purpose charities like Apache and the Software Freedom Conservancy.
4. _Professional generalists_ —Open source developers will increasingly be employed to integrate many technologies into complex solutions and will contribute to a range of projects.
5. _Software freedom redux_ —As new problems arise, software freedom (the application of the Four Freedoms to user and developer flexibility) will increasingly be applied to identify solutions that work for collaborative communities and independent deployers.
I'll be expounding on all this in conference keynotes around the world during 2018. Watch for [OSI's 20th Anniversary World Tour][11]!
_This article was originally published on [Meshed Insights Ltd.][2] and is reprinted with permission. This article, as well as my work at OSI, is supported by [Patreon patrons][3]._
### About the author
[![Simon Phipps (smiling)](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-2305.jpg?itok=CefW_OYh)][12] Simon Phipps - Computer industry and open source veteran Simon Phipps started [Public Software][4], a European host for open source projects, and volunteers as President at OSI and a director at The Document Foundation. His posts are sponsored by [Patreon patrons][5] - become one if you'd like to see more! Over a 30+ year career he has been involved at a strategic level in some of the world's leading... [more about Simon Phipps][6]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/open-source-20-years-and-counting
Author: [Simon Phipps][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://opensource.com/users/simonphipps
[1]:https://opensource.com/article/18/2/open-source-20-years-and-counting?rate=TZxa8jxR6VBcYukor0FDsTH38HxUrr7Mt8QRcn0sC2I
[2]:https://meshedinsights.com/2017/12/21/20-years-and-counting/
[3]:https://patreon.com/webmink
[4]:https://publicsoftware.eu/
[5]:https://patreon.com/webmink
[6]:https://opensource.com/users/simonphipps
[7]:https://opensource.com/users/simonphipps
[8]:https://opensource.com/user/12532/feed
[9]:https://perens.com/2017/09/26/on-usage-of-the-phrase-open-source/
[10]:https://www.brainpickings.org/2013/07/19/richard-feynman-science-morality-poem/
[11]:https://opensource.org/node/905
[12]:https://opensource.com/users/simonphipps
[13]:https://opensource.com/users/simonphipps
[14]:https://opensource.com/users/simonphipps

View File

@ -0,0 +1,62 @@
Security Is Not an Absolute
======
If there's one thing I wish people from outside the security industry knew when dealing with information security, it's that **security is not an absolute**. Most of the time, it's not even quantifiable. Even in the case of particular threat models, it's often impossible to make statements about the security of a system with certainty.
At work, I deal with a lot of very smart people who are not "security people", but are well-meaning and trying to do the right thing. Online, I sometimes find myself in conversations on [/r/netsec][1], [/r/netsecstudents][2], [/r/asknetsec][3], or [security.stackexchange][4] where someone wants to know something about information security. Either way, it's quite common that someone asks the fateful question: "Is this secure?". There are actually only two answers to this question, and neither one is "Yes."
The first answer is, fairly obviously, "No." There are some ideas that are not secure under any reasonable definition of security. Imagine an employer that makes the PIN for your payroll system the day and month on which you started your new job. Clearly, all it takes is someone posting "started my new job today!" to social media, and their PIN has been outed. Consider transporting an encrypted hard drive with the password on a sticky note attached to the outside of the drive. Both of these systems have employed some form of "security control" (even if I use the term loosely), and both are clearly insecure to even the most rudimentary of attackers. Consequently, answering "Is this secure?" with a firm "No" seems appropriate.
The second answer is more nuanced: "It depends." What it depends on, and whether those conditions exist in the system in use, are what many security professionals get paid to evaluate. For example, consider the employer in the previous paragraph. Instead of using a fixed scheme for PINs, they now generate a random 4-digit PIN and mail it to each new employee. Is this secure? That all depends on the threat model being applied to the scenario. If we allow an attacker unlimited attempts to log in as that user, then no 4-digit PIN (random or deterministic) is reasonably secure. On average, an attacker will need no more than 5000 requests to find the valid PIN. That can be done by a very basic script in tens of minutes. If, on the other hand, we lock the account after 10 failed attempts, then we've reduced the attacker to a 0.1% chance of success for a given account. Is this secure? For a single account, this is probably reasonably secure (although most users might be uncomfortable at even a 1 in 1000 chance of an attacker succeeding against their personal account), but what if the attacker has a list of 1000 usernames? The attacker now has a **63%** chance of successfully accessing at least one account. I think most businesses would find those odds very much against their favor.
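The arithmetic above is easy to verify yourself (a quick sketch of mine; it is where the roughly two-in-three odds come from):

```python
# Check the PIN-guessing odds from the example above.
attempts_per_account = 10   # lockout threshold
pin_space = 10_000          # 4-digit PINs: 0000-9999

p_single = attempts_per_account / pin_space   # 0.001, i.e. 0.1%
p_any = 1 - (1 - p_single) ** 1000            # across 1000 usernames

print(f"per account: {p_single:.1%}, at least one of 1000: {p_any:.0%}")
# per account: 0.1%, at least one of 1000: 63%
```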
So why can't we ever come up with an answer of "Yes, this is a secure system"? Well, there are several factors at play here. The first is that very little in life in general is an absolute:
* Your doctor cannot tell you with certainty that you will be alive tomorrow.
* A seismologist can't say that there absolutely won't be a 9.0 earthquake that levels a big chunk of the West Coast.
* Your car manufacturer cannot guarantee that the 4 wheels on your car do not fall off on your way to work tomorrow.
However, all of these possibilities are very remote events. Most people are comfortable with these probabilities, largely because they do not think much about them, but even if they did, they would believe that it would not happen to them. (And almost always, they would be correct in that assumption.)
Unfortunately, in information security, we have three things working against us:
* The risks are much less understood by those seeking to understand them.
* The reality is that there are enough security threats that are **much** more common than the events above.
* The threats against which security must guard are **adaptive**.
Because most people have a hard time reasoning about the likelihood of attacks and threats against them, they seek absolute reassurance. They don't want to be told "it depends"; they just want to hear "yes, you're fine." Many of these individuals are the hypochondriacs of the information security world: they think every possible attack will get them, and they want absolute reassurance that they're safe from those attacks. Alternatively, they don't understand that there are degrees of security and threat models, and just want to be reassured that they are perfectly secure. Either way, the effect is the same: they don't understand, but are afraid, and so want the reassurance of complete security.
We're in an era where security breaches are unfortunately common, and developers and users alike are hearing about these vulnerabilities and breaches all the time. This causes them to pay far more attention to security than they otherwise would. By itself, this isn't bad: all of us in the industry have been trying to get everyone's attention about security issues for decades, and getting it now is better late than never. But because we're so far behind the curve, with breaches being common, everyone is rushing to find out their risk and get reassurance now. Rather than consider the nuances of the situation, they just want a simple answer to "Am I secure?"
The last of these issues, however, is also the most unique to information security. For decades, we've looked for the formula to make a system perfectly secure. However, each countermeasure or security system is quickly defeated by attackers. We're in a cat-and-mouse game, rather than an engineering discipline.
This isn't to say that security is not an engineering practice: it certainly is in many ways (and my official title claims that I am an engineer), but it differs from other engineering areas. The forces faced by a building do not change in the face of design changes by the structural engineer. Gravity remains a constant, wind forces are predictable for a given design, and the seismic nature of an area is approximately known. Making the building have stronger doors does not suddenly increase the wind forces on the windows. In security, however, when we "strengthen the doors", the attackers do turn to the "windows" of our system. Our threats are **adaptive**: for each control we implement, they adapt to attempt to circumvent that control. For this reason, a system that was believed secure against the known threats one year is completely broken the next.
Another form of security absolutism comes from those who realize there are degrees of security, but want to take it to an almost ridiculous level of paranoia. Nearly always, these people seem to be interested in forms of cryptography, perhaps because cryptography offers numbers that can be tweaked, giving an impression of differing levels of security.
* Generating RSA encryption keys over 4k bits in length, even though all cryptographers agree this is pointless.
* Asking why AES-512 doesn't exist, even though SHA-512 does. (Because the length of a hash and the length of a key are not equivalent in effective strength against attacks.)
* Setting up bizarre browser settings and then complaining about websites being broken. (Disabling all JavaScript, all cookies, and all ciphers that use keys shorter than 256 bits or lack perfect forward secrecy, etc.)
So the next time you want to know "Is this secure?", consider the threat model: what are you trying to defend against? Recognize that there are no security absolutes and guarantees, and that good security engineering practice often involves compromise. Sometimes the compromise is one of usability or utility; sometimes it involves working in a less-than-perfect world.
--------------------------------------------------------------------------------
via: https://systemoverlord.com/2018/02/05/security-is-not-an-absolute.html
Author: [David][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article is translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/)
[a]:https://systemoverlord.com/about
[1]:https://reddit.com/r/netsec
[2]:https://reddit.com/r/netsecstudents
[3]:https://reddit.com/r/asknetsec
[4]:https://security.stackexchange.com

View File

@ -1,3 +1,5 @@
translating---geekpi
How to use lftp to accelerate ftp/https download speed on Linux/UNIX
======
lftp is a file transfer program. It allows sophisticated FTP, HTTP/HTTPS, and other connections. If the site URL is specified, then lftp will connect to that site otherwise a connection has to be established with the open command. It is an essential tool for all a Linux/Unix command line users. I have already written about [Linux ultra fast command line download accelerator][1] such as Axel and prozilla. lftp is another tool for the same job with more features. lftp can handle seven file access methods:

View File

@ -0,0 +1,94 @@
How To Safely Generate A Random Number
======
### Use urandom
Use [urandom][1]. Use [urandom][2]. Use [urandom][3]. Use [urandom][4]. Use [urandom][5]. Use [urandom][6].
### But what about for crypto keys?
Still [urandom][6].
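In application code you rarely open the device yourself; most languages wrap it. A Python sketch (my example, not the author's):

```python
# Both calls below draw from the kernel CSPRNG (urandom) under the hood.
import os
import secrets

key = os.urandom(32)             # 256 bits of key material
token = secrets.token_hex(16)    # convenience wrapper over the same source
print(key.hex(), token)
```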
### Why not {SecureRandom, OpenSSL, havaged, &c}?
These are userspace CSPRNGs. You want to use the kernel's CSPRNG, because:
* The kernel has access to raw device entropy.
* It can promise not to share the same state between applications.
* A good kernel CSPRNG, like FreeBSD's, can also promise not to feed you random data before it's seeded.
Study the last ten years of randomness failures and you'll read a litany of userspace randomness failures. [Debian's OpenSSH debacle][7]? Userspace random. Android Bitcoin wallets [repeating ECDSA k's][8]? Userspace random. Gambling sites with predictable shuffles? Userspace random.
Userspace generators (userspace OpenSSL, for instance, also seeds itself "from uninitialized memory, magical fairy dust and unicorn horns") almost always depend on the kernel's generator anyways. Even if they don't, the security of your whole system sure does. **A userspace CSPRNG doesn't add defense-in-depth; instead, it creates two single points of failure.**
### Doesn't the man page say to use /dev/random?
You should ignore the man page (but more on this later; stay your pitchforks). Don't use /dev/random. The distinction between /dev/random and /dev/urandom is a Unix design wart. The man page doesn't want to admit that, so it invents a security concern that doesn't really exist. Consider the cryptographic advice in random(4) an urban legend and get on with your life.
### But what if I need real random values, not pseudorandom values?
Both urandom and /dev/random provide the same kind of randomness. Contrary to popular belief, /dev/random doesn't provide "true random" data. For cryptography, you don't usually want "true random".
Both urandom and /dev/random are based on a simple idea. Their design is closely related to that of a stream cipher: a small secret is stretched into an indefinite stream of unpredictable values. Here the secrets are “entropy”, and the stream is “output”.
Only on Linux are /dev/random and urandom still meaningfully different. The Linux kernel CSPRNG rekeys itself regularly (by collecting more entropy). But /dev/random also tries to keep track of how much entropy remains in its kernel pool, and will occasionally go on strike if it decides not enough remains. This design is as silly as I've made it sound; it's akin to AES-CTR blocking based on how much "key" is left in the "keystream".
If you use /dev/random instead of urandom, your program will unpredictably (or, if you're an attacker, very predictably) hang when Linux gets confused about how its own RNG works. Using /dev/random will make your programs less stable, but it won't make them any more cryptographically safe.
### There's a catch here, isn't there?
No, but there's a Linux kernel bug you might want to know about, even though it doesn't change which RNG you should use.
On Linux, if your software runs immediately at boot, and/or the OS has just been installed, your code might be in a race with the RNG. That's bad, because if you win the race, there could be a window of time where you get predictable outputs from urandom. This is a bug in Linux, and you need to know about it if you're building platform-level code for a Linux embedded device.
This is indeed a problem with urandom (and not /dev/random) on Linux. It's also a [bug in the Linux kernel][9]. But it's also easily fixed in userland: at boot, seed urandom explicitly. Most Linux distributions have done this for a long time. But don't switch to a different CSPRNG.
### What about on other operating systems?
FreeBSD and OS X do away with the distinction between urandom and /dev/random; the two devices behave identically. Unfortunately, the man page does a poor job of explaining why this is, and perpetuates the myth that Linux urandom is scary.
FreeBSD's kernel crypto RNG doesn't block regardless of whether you use /dev/random or urandom. Unless it hasn't been seeded, in which case both block. This behavior, unlike Linux's, makes sense. Linux should adopt it. But if you're an app developer, this makes little difference to you: Linux, FreeBSD, iOS, whatever: use urandom.
### tl;dr
Use urandom.
### Epilog
[ruby-trunk Feature #9569][10]
> Right now, SecureRandom.random_bytes tries to detect an OpenSSL to use before it tries to detect /dev/urandom. I think it should be the other way around. In both cases, you just need random bytes to unpack, so SecureRandom could skip the middleman (and second point of failure) and just talk to /dev/urandom directly if it's available.
Resolution:
> /dev/urandom is not suitable to be used to generate directly session keys and other application level random data which is generated frequently.
>
> [the] random(4) [man page] on GNU/Linux [says]…
Thanks to Matthew Green, Nate Lawson, Sean Devlin, Coda Hale, and Alex Balducci for reading drafts of this. Fair warning: Matthew only mostly agrees with me.
--------------------------------------------------------------------------------
via: https://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/
Author: [Thomas;Erin;Matasano][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://sockpuppet.org/blog
[1]:http://blog.cr.yp.to/20140205-entropy.html
[2]:http://cr.yp.to/talks/2011.09.28/slides.pdf
[3]:http://golang.org/src/pkg/crypto/rand/rand_unix.go
[4]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
[5]:http://stackoverflow.com/a/5639631
[6]:https://twitter.com/bramcohen/status/206146075487240194
[7]:http://research.swtch.com/openssl
[8]:http://arstechnica.com/security/2013/08/google-confirms-critical-android-crypto-flaw-used-in-5700-bitcoin-heist/
[9]:https://factorable.net/weakkeys12.extended.pdf
[10]:https://bugs.ruby-lang.org/issues/9569

View File

@ -0,0 +1,289 @@
Myths about /dev/urandom
======
There are a few things about /dev/urandom and /dev/random that are repeated again and again. Still, they are false.
I'm mostly talking about reasonably recent Linux systems, not other UNIX-like systems.
### /dev/urandom is insecure. Always use /dev/random for cryptographic purposes.
Fact: /dev/urandom is the preferred source of cryptographic randomness on UNIX-like systems.
### /dev/urandom is a pseudo random number generator, a PRNG, while /dev/random is a “true” random number generator.
Fact: Both /dev/urandom and /dev/random are using the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They only differ in very few ways that have nothing to do with “true” randomness.
### /dev/random is unambiguously the better choice for cryptography. Even if /dev/urandom were comparably secure, there's no reason to choose the latter.
Fact: /dev/random has a very nasty problem: it blocks.
### But that's good! /dev/random gives out exactly as much randomness as it has entropy in its pool. /dev/urandom will give you insecure random numbers, even though it has long run out of entropy.
Fact: No. Even disregarding issues like availability and subsequent manipulation by users, the issue of entropy “running low” is a straw man. About 256 bits of entropy are enough to get computationally secure numbers for a long, long time.
And the fun only starts here: how does /dev/random know how much entropy there is available to give out? Stay tuned!
### But cryptographers always talk about constant re-seeding. Doesn't that contradict your last point?
Fact: You got me! Kind of. It is true, the random number generator is constantly re-seeded using whatever entropy the system can lay its hands on. But that has (partly) other reasons.
Look, I don't claim that injecting entropy is bad. It's good. I just claim that it's bad to block when the entropy estimate is low.
### That's all good and nice, but even the man page for /dev/(u)random contradicts you! Does anyone who knows about this stuff actually agree with you?
Fact: No, it really doesn't. It seems to imply that /dev/urandom is insecure for cryptographic use, unless you really understand all that cryptographic jargon.
The man page does recommend the use of /dev/random in some cases (it doesn't hurt, in my opinion, but is not strictly necessary), but it also recommends /dev/urandom as the device to use for “normal” cryptographic use.
And while appeal to authority is usually nothing to be proud of, in cryptographic issues you're generally right to be careful and try to get the opinion of a domain expert.
And yes, quite a few experts share my view that /dev/urandom is the go-to solution for your random number needs in a cryptography context on UNIX-like systems. Obviously, their opinions influenced mine, not the other way around.
Hard to believe, right? I must certainly be wrong! Well, read on and let me try to convince you.
I tried to keep it out, but I fear there are two preliminaries to be taken care of, before we can really tackle all those points.
Namely, what is randomness, or better: what kind of randomness am I talking about here?
And, even more important, I'm really not being condescending. I have written this document to have a thing to point to, when this discussion comes up again. More than 140 characters. Without repeating myself again and again. Being able to hone the writing and the arguments itself, benefitting many discussions in many venues.
And I'm certainly willing to hear differing opinions. I'm just saying that it won't be enough to state that /dev/urandom is bad. You need to identify the points you're disagreeing with and engage them.
### You're saying I'm stupid!
Emphatically no!
Actually, I used to believe that /dev/urandom was insecure myself, a few years ago. And it's something you and I almost had to believe, because all those highly respected people on Usenet, in web forums and today on Twitter told us. Even the man page seems to say so. Who were we to dismiss their convincing argument about “entropy running low”?
This misconception isn't so rampant because people are stupid, it is because with a little knowledge about cryptography (namely some vague idea what entropy is) it's very easy to be convinced of it. Intuition almost forces us there. Unfortunately intuition is often wrong in cryptography. So it is here.
### True randomness
What does it mean for random numbers to be “truly random”?
I don't want to dive into that issue too deeply, because it quickly gets philosophical. Discussions have been known to unravel fast, because everyone can wax about their favorite model of randomness without paying attention to anyone else, or even making themselves understood.
I believe that the “gold standard” for “true randomness” is quantum effects. Observe a photon pass through a semi-transparent mirror. Or not. Observe some radioactive material emit alpha particles. It's the best idea we have when it comes to randomness in the world. Other people might reasonably believe that those effects aren't truly random. Or even that there is no randomness in the world at all. Let a million flowers bloom.
Cryptographers often circumvent this philosophical debate by disregarding what it means for randomness to be “true”. They care about unpredictability. As long as nobody can get any information about the next random number, we're fine. And when you're talking about random numbers as a prerequisite in using cryptography, that's what you should aim for, in my opinion.
Anyway, I don't care much about those “philosophically secure” random numbers, as I like to think of your “true” random numbers.
### Two kinds of security, one that matters
But let's assume you've obtained those “true” random numbers. What are you going to do with them?
You print them out, frame them and hang them on your living-room wall, to revel in the beauty of a quantum universe? That's great, and I certainly understand.
Wait, what? You're using them? For cryptographic purposes? Well, that spoils everything, because now things get a bit ugly.
You see, your truly-random, quantum effect blessed random numbers are put into some less respectable, real-world tarnished algorithms.
Because almost all of the cryptographic algorithms we use do not hold up to **information-theoretic security**. They can “only” offer **computational security**. The two exceptions that come to my mind are Shamir's Secret Sharing and the One-time pad. And while the first one may be a valid counterpoint (if you actually intend to use it), the latter is utterly impractical.
But all those algorithms you know about, AES, RSA, Diffie-Hellman, Elliptic curves, and all those crypto packages you're using, OpenSSL, GnuTLS, Keyczar, your operating system's crypto API, these are only computationally secure.
What's the difference? While information-theoretically secure algorithms are secure, period, those other algorithms cannot guarantee security against an adversary with unlimited computational power who's trying all possibilities for keys. We still use them because breaking them would take all the computers in the world, taken together, longer than the universe has existed so far. That's the level of “insecurity” we're talking about here.
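To get a feeling for that level of “insecurity”, a quick back-of-the-envelope calculation in Python, assuming an absurdly generous guess rate:

```
guesses_per_second = 10**18            # assumed: a wildly optimistic attacker
keyspace = 2**128                      # possibilities for a 128-bit key
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / guesses_per_second / seconds_per_year
print(f"{years:.2e} years")            # ~1.1e13 years, hundreds of times
                                       # the current age of the universe
```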
Unless some clever guy breaks the algorithm itself, using much less computational power. Even computational power achievable today. That's the big prize every cryptanalyst dreams about: breaking AES itself, breaking RSA itself and so on.
So now we're at the point where you don't trust the inner building blocks of the random number generator, insisting on “true randomness” instead of “pseudo randomness”. But then you're using those “true” random numbers in algorithms that you so despise that you didn't want them near your random number generator in the first place!
Truth is, when state-of-the-art hash algorithms are broken, or when state-of-the-art block ciphers are broken, it doesn't matter that you get “philosophically insecure” random numbers because of them. You've got nothing left to securely use them for anyway.
So just use those computationally-secure random numbers for your computationally-secure algorithms. In other words: use /dev/urandom.
### Structure of Linux's random number generator
#### An incorrect view
Chances are, your idea of the kernel's random number generator is something similar to this:
![image: mythical structure of the kernel's random number generator][1]
“True randomness”, albeit possibly skewed and biased, enters the system and its entropy is precisely counted and immediately added to an internal entropy counter. After de-biasing and whitening it enters the kernel's entropy pool, where both /dev/random and /dev/urandom get their random numbers from.
The “true” random number generator, /dev/random, takes those random numbers straight out of the pool, if the entropy count is sufficient for the number of requested numbers, decreasing the entropy counter, of course. If not, it blocks until new entropy has entered the system.
The important thing in this narrative is that /dev/random basically yields the numbers that have been input by those randomness sources outside, after only the necessary whitening. Nothing more, just pure randomness.
/dev/urandom, so the story goes, is doing the same thing. Except when there isn't sufficient entropy in the system. In contrast to /dev/random, it does not block, but gets “low quality random” numbers from a pseudorandom number generator (conceded, a cryptographically secure one) that is running alongside the rest of the random number machinery. This CSPRNG is just seeded once (or maybe every now and then, it doesn't matter) with “true randomness” from the randomness pool, but you can't really trust it.
In this view, which seems to be in a lot of people's minds when they're talking about random numbers on Linux, avoiding /dev/urandom is plausible.
Because either there is enough entropy left, then you get the same you'd have gotten from /dev/random. Or there isn't, then you get those low-quality random numbers from a CSPRNG that almost never saw high-entropy input.
Devilish, right? Unfortunately, also utterly wrong. In reality, the internal structure of the random number generator looks like this.
#### A better simplification
##### Before Linux 4.8
![image: actual structure of the kernel's random number generator before Linux 4.8][2] This is a pretty rough simplification. In fact, there isn't just one, but three pools filled with entropy. One primary pool, and one for /dev/random and /dev/urandom each, feeding off the primary pool. Those three pools all have their own entropy counts, but the counts of the secondary pools (for /dev/random and /dev/urandom) are mostly close to zero, and “fresh” entropy flows from the primary pool when needed, decreasing its entropy count. Also there is a lot of mixing and re-injecting outputs back into the system going on. All of this is far more detail than is necessary for this document.
See the big difference? The CSPRNG is not running alongside the random number generator, filling in for those times when /dev/urandom wants to output something, but has nothing good to output. The CSPRNG is an integral part of the random number generation process. There is no /dev/random handing out “good and pure” random numbers straight from the whitener. Every randomness source's input is thoroughly mixed and hashed inside the CSPRNG, before it emerges as random numbers, either via /dev/urandom or /dev/random.
Another important difference is that there is no entropy counting going on here, but estimation. The amount of entropy some source is giving you isn't something obvious that you just get, along with the data. It has to be estimated. Please note that when your estimate is too optimistic, the dearly held property of /dev/random, that it's only giving out as many random numbers as available entropy allows, is gone. Unfortunately, it's hard to estimate the amount of entropy.
The Linux kernel uses only the arrival times of events to estimate their entropy. It does that by interpolating polynomials of those arrival times, to calculate “how surprising” the actual arrival time was, according to the model. Whether this polynomial interpolation model is the best way to estimate entropy is an interesting question. There is also the problem that internal hardware restrictions might influence those arrival times. The sampling rates of all kinds of hardware components may also play a role, because it directly influences the values and the granularity of those event arrival times.
In the end, to the best of our knowledge, the kernel's entropy estimate is pretty good. Which means it's conservative. People argue about how good it really is, but that issue is far above my head. Still, if you insist on never handing out random numbers that are not “backed” by sufficient entropy, you might be nervous here. I sleep soundly because I don't care about the entropy estimate.
So to make one thing crystal clear: both /dev/random and /dev/urandom are fed by the same CSPRNG. Only the behavior when their respective pool runs out of entropy, according to some estimate, differs: /dev/random blocks, while /dev/urandom does not.
##### From Linux 4.8 onward
In Linux 4.8 the equivalency between /dev/urandom and /dev/random was given up. Now /dev/urandom output does not come from an entropy pool, but directly from a CSPRNG.
![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
We will see shortly why that is not a security problem.
### What's wrong with blocking?
Have you ever waited for /dev/random to give you more random numbers? Generating a PGP key inside a virtual machine maybe? Connecting to a web server that's waiting for more random numbers to create an ephemeral session key?
That's the problem. It inherently runs counter to availability. So your system is not working. It's not doing what you built it to do. Obviously, that's bad. You wouldn't have built it if you didn't need it.
I'm working on safety-related systems in factory automation. Can you guess what the main reason for failures of safety systems is? Manipulation. Simple as that. Something about the safety measure bugged the worker. It took too much time, was too inconvenient, whatever. People are very resourceful when it comes to finding “unofficial solutions”.
But the problem runs even deeper: people don't like to be stopped in their ways. They will devise workarounds, concoct bizarre machinations to just get it running. People who don't know anything about cryptography. Normal people.
Why not patch out the call to `random()`? Why not have some guy in a web forum tell you how to use some strange ioctl to increase the entropy counter? Why not switch off SSL altogether?
In the end you just educate your users to do foolish things that compromise your system's security without you ever knowing about it.
It's easy to disregard availability, usability or other nice properties. Security trumps everything, right? So better be inconvenient, unavailable or unusable than feign security.
But that's a false dichotomy. Blocking is not necessary for security. As we saw, /dev/urandom gives you the same kind of random numbers as /dev/random, straight out of a CSPRNG. Use it!
### The CSPRNGs are alright
But now everything sounds really bleak. If even the high-quality random numbers from /dev/random are coming out of a CSPRNG, how can we use them for high-security purposes?
It turns out that “looking random” is the basic requirement for a lot of our cryptographic building blocks. If you take the output of a cryptographic hash, it has to be indistinguishable from a random string so that cryptographers will accept it. If you take a block cipher, its output (without knowing the key) must also be indistinguishable from random data.
If anyone could gain an advantage over brute force breaking of cryptographic building blocks, using some perceived weakness of those CSPRNGs over “true” randomness, then it's the same old story: you don't have anything left. Block ciphers, hashes, everything is based on the same mathematical fundament as CSPRNGs. So don't be afraid.
### What about entropy running low?
It doesn't matter.
The underlying cryptographic building blocks are designed such that an attacker cannot predict the outcome, as long as there was enough randomness (a.k.a. entropy) in the beginning. A usual lower limit for “enough” may be 256 bits. No more.
Considering that we were pretty hand-wavey about the term “entropy” in the first place, it feels right. As we saw, the kernel's random number generator cannot even precisely know the amount of entropy entering the system. Only an estimate. And whether the model that's the basis for the estimate is good enough is pretty unclear, too.
### Re-seeding
But if entropy is so unimportant, why is fresh entropy constantly being injected into the random number generator?
djb [remarked][4] that more entropy actually can hurt, under rather specific assumptions about maliciously controlled entropy sources.
First, in the usual case it cannot hurt: if you've got more randomness just lying around, by all means use it!
There is another reason why re-seeding the random number generator every now and then is important:
Imagine an attacker knows everything about your random number generator's internal state. That's the most severe security compromise you can imagine, the attacker has full access to the system.
You've totally lost now, because the attacker can compute all future outputs from this point on.
But over time, with more and more fresh entropy being mixed into it, the internal state gets more and more random again. So such a random number generator's design is, in a way, self-healing.
But this is about injecting entropy into the generator's internal state; it has nothing to do with blocking its output.
### The random and urandom man page
The man page for /dev/random and /dev/urandom is pretty effective when it comes to instilling fear into the gullible programmer's mind:
> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.
Such an attack is not known in “unclassified literature”, but the NSA certainly has one in store, right? And if you're really concerned about this (you should!), please use /dev/random, and all your problems are solved.
The truth is, while there may be such an attack available to secret services, evil hackers or the Bogeyman, it's simply not rational to take it as a given.
And even if you need that peace of mind, let me tell you a secret: no practical attacks on AES, SHA-3 or other solid ciphers and hashes are known in the “unclassified” literature, either. Are you going to stop using those, as well? Of course not!
Now the fun part: “use /dev/random instead”. While /dev/urandom does not block, its random number output comes from the very same CSPRNG as /dev/random's.
If you really need information-theoretically secure random numbers (you don't!), and that's about the only reason why the entropy of the CSPRNGs input matters, you can't use /dev/random, either!
The man page is silly, that's all. At least it tries to redeem itself with this:
> If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter. As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.
Fine. I think it's unnecessary, but if you want to use /dev/random for your “long-lived keys”, by all means, do so! You'll be waiting a few seconds while typing stuff on your keyboard; that's no problem.
But please don't make connections to a mail server hang forever, just because you “wanted to be safe”.
### Orthodoxy
The view espoused here is certainly a tiny minority's opinion on the Internet. But ask a real cryptographer, and you'll be hard-pressed to find one who sympathizes much with the blocking /dev/random.
Let's take [Daniel Bernstein][5], better known as djb:
> Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
>
> * (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
>
> * (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
>
>
>
> For a cryptographer this doesn't even pass the laugh test.
Or [Thomas Pornin][6], who is probably one of the most helpful persons I've ever encountered on the Stackexchange sites:
> The short answer is yes. The long answer is also yes. /dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithm, which is not your case (you would know it).
>
> The man page for urandom is somewhat misleading, arguably downright wrong, when it suggests that /dev/urandom may "run out of entropy" and /dev/random should be preferred;
Or maybe [Thomas Ptacek][7], who is not a real cryptographer in the sense of designing cryptographic algorithms or building cryptographic systems, but still the founder of a well-reputed security consultancy that's doing a lot of penetration testing and breaking bad cryptography:
> Use urandom. Use urandom. Use urandom. Use urandom. Use urandom. Use urandom.
### Not everything is perfect
/dev/urandom isn't perfect. The problems are twofold:
On Linux, unlike FreeBSD, /dev/urandom never blocks. Remember that the whole security rested on some starting randomness, a seed?
Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.
FreeBSD does the right thing: they don't have the distinction between /dev/random and /dev/urandom, both are the same device. At startup /dev/random blocks once until enough starting entropy has been gathered. Then it won't block ever again.
In the meantime, Linux has implemented a new syscall, originally introduced by OpenBSD as getentropy(2): getrandom(2). This syscall does the right thing: blocking until it has gathered enough initial entropy, and never blocking after that point. Of course, it is a syscall, not a character device, so it isn't as easily accessible from shell or script languages. It is available from Linux 3.17 onward.
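Where your language exposes it, getrandom(2) is the convenient way to get exactly this behavior. Python 3.6 and later, for example, wrap it as `os.getrandom` (Linux-only):

```
import os

# Blocks only if the kernel CSPRNG has not yet been initially seeded
# (very early boot); after that point it never blocks again.
key = os.getrandom(32)

# With os.GRND_NONBLOCK it fails fast instead of blocking.
try:
    key = os.getrandom(32, os.GRND_NONBLOCK)
except BlockingIOError:
    pass  # pool not initialized yet; decide how you want to handle this
```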
On Linux it isn't too bad: Linux distributions save some random numbers when booting up the system (but only after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read the next time the machine boots. So you carry over the randomness from the last run of the machine.
Obviously that isn't as good as letting the shutdown scripts write out the seed, because that would leave much more time to gather entropy. The advantage, obviously, is that this does not depend on a proper shutdown with execution of the shutdown scripts (in case the computer crashes, for example).
And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer. So that's mostly okay.
Virtual machines are the other problem. Because people like to clone them, or rewind them to a previously saved checkpoint, this seed file doesn't help you.
But the solution still isn't using /dev/random everywhere, but properly seeding each and every virtual machine after cloning, restoring a checkpoint, whatever.
### tl;dr
Just use /dev/urandom!
--------------------------------------------------------------------------------
via: https://www.2uo.de/myths-about-urandom/
Author: [Thomas Hühn][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.2uo.de/
[1]:https://www.2uo.de/myths-about-urandom/structure-no.png
[2]:https://www.2uo.de/myths-about-urandom/structure-yes.png
[3]:https://www.2uo.de/myths-about-urandom/structure-new.png
[4]:http://blog.cr.yp.to/20140205-entropy.html
[5]:http://www.mail-archive.com/cryptography@randombit.net/msg04763.html
[6]:http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key/3939#3939
[7]:http://sockpuppet.org/blog/2014/02/25/safely-generate-random-numbers/

View File

@ -1,3 +1,4 @@
Translating by stevenzdg988
How To Find The Installed Proprietary Packages In Arch Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/01/Absolutely-Proprietary-720x340.jpg)

View File

@ -0,0 +1,41 @@
What is the deal with GraphQL?
======
![](https://ryanmccue.ca/content/images/2018/01/Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Copy-of-Electric-Love.png)
There has been a lot of talk lately about this thing called [GraphQL][1]. It is a relatively new technology coming out of Facebook and is starting to be widely adopted by large companies like [Github][2], Facebook, Twitter, Yelp, and many others. Basically, GraphQL is an alternative to REST: it replaces many dumb endpoints, like `/user/1` and `/user/1/comments`, with a single `/graphql` endpoint, and you use the POST body or query string to request exactly the data you need, like `/graphql?query={user(id:1){id,username,comments{text}}}`. You pick the pieces of data you need and can nest down into relations to avoid multiple calls. This is a different way of thinking about a backend, but in some situations it makes practical sense.
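To make that concrete, here is a hedged sketch using Python's `requests` library; the host `api.example.com`, the schema, and the field names are all hypothetical:

```
import requests

# REST: two round trips, each returning full objects with unused fields.
user = requests.get("https://api.example.com/user/1").json()
comments = requests.get("https://api.example.com/user/1/comments").json()

# GraphQL: one round trip, returning exactly the fields asked for.
query = "{ user(id: 1) { id username comments { text } } }"
resp = requests.post("https://api.example.com/graphql", json={"query": query})
print(resp.json()["data"]["user"])
```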
### My Experience with GraphQL
Originally, when I heard about it, I was very skeptical, and after dabbling in [Apollo Server][3] I was not convinced. Why would you use some silly new technology when you can simply build REST endpoints! But after digging deeper and learning more about its use cases, I came around. I still think REST has a place and will be important for the foreseeable future, but with how bad many APIs and their documentation are, this can be a breath of fresh air...
### Why Use GraphQL Over REST?
Although I have used GraphQL, and think it is a compelling and exciting technology, I believe it does not replace REST. That being said, there are compelling reasons to pick GraphQL over REST in some situations. When you are building mobile apps, or web apps made with high mobile traffic in mind, GraphQL really shines. The reason for this is mobile data: REST uses many calls and often returns unused data, whereas with GraphQL you can define precisely what you want returned, for minimal data usage.
You can do all of the above with REST by making multiple endpoints available, but that also adds complexity to the project. It also means more back and forth between the frontend and backend teams.
### What Should You Use?
GraphQL is a new technology which is now mainstream. But many developers are not aware of it or choose not to learn it because they think it's a fad. I feel like for most projects you can get away with using either REST or GraphQL. Developing with GraphQL has great benefits like enforced documentation, which helps teams work better together and provides clear expectations for each query. This will likely speed up development after the initial hurdle of wrapping your head around GraphQL.
Although I have been comparing GraphQL and REST, I think in most cases a mixture of the two will produce the best results. Combine the strengths of both instead of seeing it strictly as using just GraphQL or just REST.
### Final Thoughts
Both technologies are here to stay. And done right both technologies can make fast and efficient backends. GraphQL has an edge up because it allows the client to query only the data they need by default, but that is at a potential sacrifice of endpoint speed. Ultimately, if I were starting a new project, I would go with a mix of both GraphQL and REST.
--------------------------------------------------------------------------------
via: https://ryanmccue.ca/what-is-the-deal-with-graphql/
Author: [Ryan McCue][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://ryanmccue.ca/author/ryan/
[1]:http://graphql.org/
[2]:https://developer.github.com/v4/
[3]:https://github.com/apollographql/apollo-server

View File

@ -1,84 +0,0 @@
translating---geekpi
Configuring MSMTP On Ubuntu 16.04 (Again)
======
This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I'm posting it as-is for posterity, and have no idea if it'll work on later versions. As I'm not hosting my own Ubuntu/MSMTP server anymore I can't see any updates being made to this, but if I ever do have to set this up again I'll create an updated post! Anyway, here's what I had…
I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you're using Apache as the web server, but I'm sure it shouldn't be too different if your web server of choice is something else.
I use [msmtp][1] for sending emails from this blog to notify me of comments and upgrades etc. Here I'm going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.
To begin, we need to install 3 packages:
`sudo apt-get install msmtp msmtp-mta ca-certificates`
Once these are installed, a default config is required. By default msmtp will look at `/etc/msmtprc`, so I created that using vim, though any text editor will do the trick. This file looked something like this:
```
# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account
host smtp.gmail.com
port 587
auth login
user
password
from
logfile /var/log/msmtp/msmtp.log
account default :
```
Any of the uppercase items (i.e. ``) are things that need replacing with values specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.
Once that file is saved, we'll update the permissions on the above configuration file -- msmtp won't run if the permissions on that file are too open -- and create the directory for the log file.
```
sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
```
Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don't get too large as well as keeping the log directory a little tidier. To do this, we create `/etc/logrotate.d/msmtp` and configure it with the following file. Note that this is optional; you may choose not to do this, or to configure the logs differently.
```
/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}
```
Now that the logging is configured, we need to tell PHP to use msmtp by editing `/etc/php/7.0/apache2/php.ini` and updating the sendmail path from
`sendmail_path =`
to
`sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a -t"`
Here I did run into an issue where even though I specified the account name it wasn't sending emails correctly when I tested it. This is why the line `account default : ` was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run `sudo service apache2 restart`, then run `php -a` and execute the following
```
mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();
```
Any errors that occur at this point will be displayed in the output, so diagnosing any problems after the test should be relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
I make no claims that this is the most secure configuration, so if you come across this and realise it's grossly insecure or something is drastically wrong please let me know and I'll update it accordingly.
--------------------------------------------------------------------------------
via: https://codingproductivity.wordpress.com/2018/01/18/configuring-msmtp-on-ubuntu-16-04-again/
Author: [JOE][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://codingproductivity.wordpress.com/author/joeb454/
[1]:http://msmtp.sourceforge.net/

View File

@ -1,3 +1,5 @@
translated by cyleft
How do I edit files on the command line?
======

View File

@ -1,59 +0,0 @@
Which Linux Kernel Version Is Stable?
============================================================
![Linux kernel ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/apple1.jpg?itok=PGRxOQz_ "Linux kernel")
Konstantin Ryabitsev explains which Linux kernel versions are considered "stable" and how to choose what's right for you. [Creative Commons Zero][1]
Almost every time Linus Torvalds releases [a new mainline Linux kernel][4], there's inevitable confusion about which kernel is the "stable" one now. Is it the brand new X.Y one, or the previous X.Y-1.Z one? Is the brand new kernel too new? Should you stick to the previous release?
The [kernel.org][5] page doesn't really help clear up this confusion. Currently, right at the top of the page, we see that 4.15 is the latest stable kernel -- but then in the table below, 4.14.16 is listed as "stable," and 4.15 as "mainline." Frustrating, eh?
Unfortunately, there are no easy answers. We use the word "stable" for two different things here: as the name of the Git tree where the release originated, and as an indicator of whether the kernel should be considered “stable” as in “production-ready.”
Due to the distributed nature of Git, Linux development happens in a number of [various forked repositories][6]. All bug fixes and new features are first collected and prepared by subsystem maintainers and then submitted to Linus Torvalds for inclusion into [his own Linux tree][7], which is considered the “master” Git repository. We call this the “mainline” Linux tree.
### Release Candidates
Before each new kernel version is released, it goes through several “release candidate” cycles, which are used by developers to test and polish all the cool new features. Based on the feedback he receives during this cycle, Linus decides whether the final version is ready to go yet or not. Usually, there are 7 weekly pre-releases, but that number routinely goes up to -rc8, and sometimes even up to -rc9 and above. When Linus is convinced that the new kernel is ready to go, he makes the final release, and we call this release “stable” to indicate that it's not a “release candidate.”
### Bug Fixes
As any kind of complex software written by imperfect human beings, each new version of the Linux kernel contains bugs, and those bugs require fixing. The rule for bug fixes in the Linux Kernel is very straightforward: all fixes must first go into Linuss tree. Once the bug is fixed in the mainline repository, it may then be applied to previously released kernels that are still maintained by the Kernel development community. All fixes backported to stable releases must meet a [set of important criteria][8] before they are considered -- and one of them is that they “must already exist in Linuss tree.” There is a [separate Git repository][9] used for the purpose of maintaining backported bug fixes, and it is called the “stable” tree -- because it is used to track previously released stable kernels. It is maintained and curated by Greg Kroah-Hartman.
### Latest Stable Kernel
So, whenever you visit kernel.org looking for the latest stable kernel, you should use the version that is in the Big Yellow Button that says “Latest Stable Kernel.”
![sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8s](https://lh6.googleusercontent.com/sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8sYqZsmL6e0h8AiyJwqtWYC-MoxWpRWHpdIEpKji0hJ5xxeYshK9QkbTfubFb2TFaMeFNmtJ5ypQNt8lAHC2zniEEe8O4v7MZh)
Ah, but now you may wonder -- if both 4.15 and 4.14.16 are stable, then which one is more stable? Some people avoid using ".0" releases of kernel because they think a particular version is not stable enough until there is at least a ".1". It's hard to either prove or disprove this, and there are pro and con arguments for both, so it's pretty much up to you to decide which you prefer.
On the one hand, anything that goes into a stable tree release must first be accepted into the mainline kernel and then backported. This means that mainline kernels will always have fresher bug fixes than what is released in the stable tree, and therefore you should always use mainline “.0” releases if you want fewest “known bugs.”
On the other hand, mainline is where all the cool new features are added -- and new features bring with them an unknown quantity of “new bugs” that are not in the older stable releases. Whether new, unknown bugs are more worrisome than older, known, but yet unfixed bugs -- well, that is entirely your call. However, it is worth pointing out that many bug fixes are only thoroughly tested against mainline kernels. When patches are backported into older kernels, chances are they will work just fine, but there are fewer integration tests performed against older stable releases. More often than not, it is assumed that "previous stable" is close enough to current mainline that things will likely "just work." And they usually do, of course, but this yet again shows how hard it is to say "which kernel is actually more stable."
So, basically, there is no quantitative or qualitative metric we can use to definitively say which kernel is more stable -- 4.15 or 4.14.16. The most we can do is to unhelpfully state that they are “differently stable.”
_Learn more about Linux through the free ["Introduction to Linux" ][3]course from The Linux Foundation and edX._
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/which-linux-kernel-version-stable
Author: [KONSTANTIN RYABITSEV ][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/apple1jpg
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[4]:https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
[5]:https://www.kernel.org/
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/
[7]:https://git.kernel.org/torvalds/c/v4.15
[8]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
[9]:https://git.kernel.org/stable/linux-stable/c/v4.14.16

View File

@ -1,3 +1,5 @@
translated by cyleft
How to print filename with awk on Linux / Unix
======

View File

@ -0,0 +1,66 @@
LKRG: Linux to Get a Loadable Kernel Module for Runtime Integrity Checking
======
![LKRG logo][1]
Members of the open source community are working on a new security-focused project for the Linux kernel. Named Linux Kernel Runtime Guard (LKRG), this is a loadable kernel module that will perform runtime integrity checking of the Linux kernel.
Its purpose is to detect exploitation attempts for known and unknown security vulnerabilities against the Linux kernel and attempt to block attacks.
LKRG will also detect privilege escalation for running processes, and kill the running process before the exploit code runs.
### Project under development since 2011. First versions released.
Since the project is in such early development, current versions of LKRG will only report kernel integrity violations via kernel messages, but a full exploit mitigation system will be deployed as the system matures.
Work on this project started in 2011, and LKRG has gone through a "re-development" phase, as LKRG member Alexander Peslyak described the process.
The first public version of LKRG —LKRG v0.0— is now live and available for download on [this page][2]. A wiki is also available [here][3], and a [Patreon page][4] for supporting the project has also been set up.
While LKRG will remain an open source project, LKRG maintainers also have plans for an LKRG Pro version that will include distro-specific LKRG builds and support for the detection of specific exploits, such as container escapes. The team plans to use the funds from LKRG Pro to fund the rest of the project.
### LKRG is a kernel module. Not a patch.
A similar project is Additional Kernel Observer (AKO), but LKRG differs because it is a loadable kernel module and not a patch. The LKRG team chose to create a kernel module because patching the kernel has a direct impact on security, system stability, and performance.
By offering a kernel module this also makes LKRG easier to deploy on a per-system basis without having to tinker with core kernel code, a very complicated and error-prone process.
LKRG kernel modules are currently available for main Linux distros such as RHEL7, OpenVZ 7, Virtuozzo 7, and Ubuntu 16.04 to latest mainlines.
### Not a perfect solution
But LKRG's creators are warning users not to consider their tool as unbreakable and 100% secure. They said LKRG is "bypassable by design," and only provides "security through diversity."
```
While LKRG defeats many pre-existing exploits of Linux kernel vulnerabilities, and will likely defeat many future exploits (including of yet unknown vulnerabilities) that do not specifically attempt to bypass LKRG, it is bypassable by design (albeit sometimes at the expense of more complicated and/or less reliable exploits). Thus, it can be said that LKRG provides security through diversity, much like running an uncommon OS kernel would, yet without the usability drawbacks of actually running an uncommon OS.
```
LKRG is similar to Windows-based antivirus software, which also works at the kernel level to detect exploits and malware. Nonetheless, the LKRG team says their product is much safer than antivirus and other endpoint security software because it has a much smaller codebase, hence a smaller footprint for introducing new bugs and vulnerabilities at the kernel level.
### Current LKRG version adds a 6.5% performance dip
Peslyak says LKRG is most suited for Linux machines that can't be rebooted right in the aftermath of a security flaw to patch the kernel. LKRG allows owners to continue to run the machine with a security measure in place until patches for critical vulnerabilities can be tested and deployed during planned maintenance windows.
Tests showed that installing LKRG v0.0 added a 6.5% performance impact, but Peslyak says this will be reduced as development moves forward.
Tests also showed that LKRG detected exploitation attempts for CVE-2014-9322 (BadIRET), CVE-2017-5123 (waitid(2) missing access_ok), and CVE-2017-6074 (use-after-free in DCCP protocol), but failed to detect CVE-2016-5195 (Dirty COW). The team said LKRG failed to spot the Dirty COW privilege escalation because of the previously mentioned "bypassable by design" strategy.
```
In case of Dirty COW the LKRG "bypass" happened due to the nature of the bug and this being the way to exploit it, it's also a way for future exploits to bypass LKRG by similarly directly targeting userspace. It remains to be seen whether such exploits become common (unlikely unless LKRG or similar become popular?) and what (negative?) effect on their reliability this will have (for kernel vulnerabilities where directly targeting the userspace isn't essential and possibly not straightforward).
```
--------------------------------------------------------------------------------
via: https://www.bleepingcomputer.com/news/linux/lkrg-linux-to-get-a-loadable-kernel-module-for-runtime-integrity-checking/
Author: [Catalin Cimpanu][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:https://www.bleepingcomputer.com/author/catalin-cimpanu/
[1]:https://www.bleepstatic.com/content/posts/2018/02/04/LKRG-logo.png
[2]:http://www.openwall.com/lkrg/
[3]:http://openwall.info/wiki/p_lkrg/Main
[4]:https://www.patreon.com/p_lkrg

View File

@ -0,0 +1,217 @@
translating by wenwensnow
Getting Started with the openbox windows manager in Fedora
======
![](https://fedoramagazine.org/wp-content/uploads/2017/10/openbox.png-945x400.jpg)
Openbox is [a lightweight, next generation window manager][1] for users who want a minimal environment for their [Fedora][2] desktop. It's well known for its minimalistic appearance, low resource usage and the ability to run applications the way they were designed to work. Openbox is highly configurable. It allows you to change almost every aspect of how you interact with your desktop. This article covers a basic setup of Openbox on Fedora.
### Installing Openbox in Fedora
This tutorial assumes you're already working in a traditional desktop environment like [GNOME][3] or [Plasma][4] over the [Wayland][5] compositor. First, open a terminal and run the following command [using sudo][6].
```
sudo dnf install openbox xbacklight feh conky xorg-x11-drv-libinput tint2 volumeicon xorg-x11-server-utils network-manager-applet
```
Curious about the packages this command installs? Here is the package-by-package breakdown.
* **openbox** is the main window manager package
* **xbacklight** is a utility to set laptop screen brightness
* **feh** is a utility to set a wallpaper for the desktop
* **conky** is a utility to display system information
* **tint2** is a system panel/taskbar
* **xorg-x11-drv-libinput** is a driver that lets the system activate clicks on tap in a laptop touchpad
* **volumeicon** is a volume control for the system tray
* **xorg-x11-server-utils** provides the xinput tool
* **network-manager-applet** provides the nm-applet tool for the system tray
Once you install these packages, restart your computer. After the system restarts, choose your user name to login. Before you enter your password, click the gear icon to select the Openbox session. Then enter your password to start Openbox.
If you ever want to switch back, simply use this gear icon to return to the selection for your desired desktop session.
### Using Openbox
The first time you login to your Openbox session, a mouse pointer appears over a black desktop. Don't worry, this is the default look and feel of the desktop. First, right-click your mouse to access a handy menu to launch your apps. You can use the shortcut **Ctrl + Alt + LeftArrow / RightArrow** to switch between four virtual screens.
![][7]
If your laptop has a touchpad, you may want to configure tap to click for an improved experience. Fedora features libinput to handle input from the touchpad. First, get a list of input devices in your computer:
```
$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ ETPS/2 Elantech Touchpad id=11 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Power Button id=8 [slave keyboard (3)]
↳ WebCam SC-13HDL11939N: WebCam S id=9 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)]
```
In the example laptop, the touchpad is the device with ID 11. With this info you can list your trackpad properties:
```
$ xinput list-props 11
Device 'ETPS/2 Elantech Touchpad':
Device Enabled (141): 1
Coordinate Transformation Matrix (143): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
libinput Tapping Enabled (278): 0
libinput Tapping Enabled Default (279): 0
```
In this example, the touchpad has the Tapping Enabled property set to false (0).
Now you know your trackpad device ID (11) and the property to configure (278). This means you can enable tapping with the command:
```
xinput set-prop <device> <property> <value>
```
For the example above:
```
xinput set-prop 11 278 1
```
You should now be able to successfully click in your touchpad with a tap. Now configure this option at the Openbox session start. First, create the config file with an editor:
```
vi ~/.config/openbox/autostart
```
This example uses the vi text editor, but you can use any editor you want, like gedit or kwrite. In this file add the following lines:
```
# Set tapping on touchpad on:
xinput set-prop 11 278 1 &
```
Save the file, logout of the current session, and login again to verify your touchpad works.
### Configuring the session
Here are some examples of how you can configure your Openbox session to your preferences. To use feh to set the desktop wallpaper at startup, just add these lines to your ~/.config/openbox/autostart file:
```
# Set desktop wallpaper:
feh --bg-scale ~/path/to/wallpaper.png &
```
To use tint2 to show a task bar in the desktop, add these lines to the autostart file:
```
# Show system tray
tint2 &
```
Add these lines to the autostart file to start conky when you login:
```
# Show system info
conky &
```
Now you can add your own services to your Openbox session. Just add entries to your autostart file. For instance, add the NetworkManager applet and volume control with these lines:
```
#NetworkManager
nm-applet &
#Volume control in system tray
volumeicon &
```
The conky configuration used in this post is available [here][8]; you can copy and paste it into a file called .conkyrc in your home directory.
The conky utility is a highly configurable way to show system information. You can set up a preferred profile of settings in a ~/.conkyrc file. Here's [an example conkyrc file][9]. You can find many more on the web.
You are now able to customize your Openbox installation in exciting ways. Here's a screenshot of the author's Openbox desktop:
![][10]
### Configuring tint2
You can also configure the look and feel of the panel with tint2. The configuration file is available in ~/.config/tint2/tint2rc. Use your favorite editor to open this file:
```
vi ~/.config/tint2/tint2rc
```
Look for these lines first:
```
#-------------------------------------
#Panel
panel_items = LTSCB
```
These are the elements that will be included in the bar, where:
* **L** = Launchers
* **T** = Task bar
* **S** = Systray
* **C** = Clock
* **B** = Battery
Then look for those lines to configure the launchers in the task bar:
```
#-------------------------------------
#Launcher
launcher_padding = 2 4 2
launcher_background_id = 0
launcher_icon_background_id = 0
launcher_icon_size = 24
launcher_icon_asb = 100 0 0
launcher_icon_theme_override = 0
startup_notifications = 1
launcher_tooltip = 1
launcher_item_app = /usr/share/applications/tint2conf.desktop
launcher_item_app = /usr/local/share/applications/tint2conf.desktop
launcher_item_app = /usr/share/applications/firefox.desktop
launcher_item_app = /usr/share/applications/iceweasel.desktop
launcher_item_app = /usr/share/applications/chromium-browser.desktop
launcher_item_app = /usr/share/applications/google-chrome.desktop
```
Here you can add shortcuts to your favorite applications as launcher_item_app elements. These items accept .desktop files, not executables. You can get a list of your system-wide desktop files with this command:
```
ls /usr/share/applications/
```
As an exercise for the reader, see if you can find and install a theme for either the Openbox [window manager][11] or [tint2][12]. Enjoy getting started with Openbox as a Fedora desktop.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/openbox-fedora/
Author: [William Moreno][a]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]:http://williamjmorenor.id.fedoraproject.org/
[1]:http://openbox.org/wiki/Main_Page
[2]:https://getfedora.org/
[3]:https://getfedora.org/es/workstation/
[4]:https://spins.fedoraproject.org/kde/
[5]:https://wayland.freedesktop.org/
[6]:https://fedoramagazine.org/howto-use-sudo/
[7]:https://fedoramagazine.org/wp-content/uploads/2017/10/openbox-01-300x169.png
[8]:https://gist.github.com/williamjmorenor/96399defad35e24a8f1843e2c256b4a4
[9]:https://github.com/zenzire/conkyrc/blob/master/conkyrc
[10]:https://fedoramagazine.org/wp-content/uploads/2017/10/openbox-02-300x169.png
[11]:https://www.deviantart.com/customization/skins/linuxutil/winmanagers/openbox/whats-hot/?order=9&offset=0
[12]:https://github.com/addy-dclxvi/Tint2-Theme-Collections

View File

@ -0,0 +1,51 @@
Linear Regression Classifier from scratch using Numpy and Stochastic gradient descent as an optimization technique
======
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/cKkX2ryQteXTdZYSR6t7)
In statistics, linear regression is a linear approach for modelling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.
As you may know, the equation of a line with slope **m** and intercept **c** is given by **y = mx + c**. In our dataset, **x** is a feature and **y** is the label, that is, the output.
Now we will start with some random values of m and c and by using our classifier we will adjust their values so that we obtain a line with the best fit.
Suppose we have a dataset with a single feature given by **X=[1,2,3,4,5,6,7,8,9,10]** and the label/output being **Y=[1,4,9,16,25,36,49,64,81,100]**. We start with a random value of **m** being **1** and **c** being **0**. Starting with the first data point, which is **x=1**, we calculate its corresponding output, **y = m*x + c** -> **y = 1*1 + 0** -> **y = 1**.
This is our guess for the given input. Now we subtract the actual output from the calculated y, which is our guess, to get the error: **error = y(guess) - y(original)**. The mean of its square is our cost function, and our aim is to minimize this cost.
After each iteration through the data points we will change our values of **m** and **c** such that the obtained m and c give the line with the best fit. Now how can we do this?
The answer is using **Gradient Descent Technique**.
![Gd_demystified.png][1]
In gradient descent we look to minimize the cost function, and in order to minimize the cost function we need to minimize the error, which is given by **error = y(guess) - y(original)**.
Now the error depends on two values, **m** and **c**. If we take the partial derivative of the error with respect to **m** and **c**, we can learn the orientation, i.e. whether we need to increase the values of m and c or decrease them in order to obtain the line of best fit.
Taking the partial derivative of the error with respect to **m** gives **x**, and taking the partial derivative with respect to **c** gives a constant.
So if we apply the two changes **m = m - error*x** and **c = c - error*1** after every iteration, we can adjust the values of m and c to obtain the line with the best fit.
Now error can be negative as well as positive.When the error is negative it means our **m** and **c** are smaller than the actual **m** and **c** and hence we would need to increase their values and if the error is positive we would need to decrease their values that is what we are doing.
But wait we also need a constant called the learning_rate so that we don't increase or decrease the values of **m** and **c** with a steep rate .so we need to multiply **m=m-error * x * learning_rate** and **c=c-error * 1 * learning_rate** so as to make the process smooth.
So we need to update **m** to **m=m-error * x * learning_rate** and **c** to **c=c-error * 1 * learning_rate** to obtain the line with the best fit and this is our linear regreesion model using stochastic gradient descent meaning of stochastic being that we are updating the values of m and c in every iteration.
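Putting the pieces together, a minimal sketch of this classifier might look like the following (the learning rate and number of passes are arbitrary choices of mine, not values from the article, and the full code linked below may differ; for this data the exact least-squares fit is y = 11x - 22):
```
import numpy as np

X = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
Y = np.array([1, 4, 9, 16, 25, 36, 49, 64, 81, 100], dtype=float)

m, c = 1.0, 0.0            # initial guesses for slope and intercept
learning_rate = 0.005      # keeps each update small and the process smooth

for _ in range(2000):              # repeated passes over the data
    for x, y in zip(X, Y):         # stochastic: update after every point
        error = (m * x + c) - y    # y(guess) - y(original)
        m -= error * x * learning_rate
        c -= error * 1 * learning_rate

print(m, c)  # should end up close to the best-fit line (about m=11, c=-22)
```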
You can check the full code in Python here: [https://github.com/assassinsurvivor/MachineLearning/blob/master/Regression.py][2]
--------------------------------------------------------------------------------
via: https://www.codementor.io/prakharthapak/linear-regression-classifier-from-scratch-using-numpy-and-stochastic-gradient-descent-as-an-optimization-technique-gf5gm9yti
作者:[Prakhar Thapak][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.codementor.io/prakharthapak
[1]:https://process.filestackapi.com/cache=expiry:max/5TXRH28rSo27kTNZLgdN
[2]:https://www.codementor.io/prakharthapak/here


@ -0,0 +1,262 @@
Locust.io: Load-testing using vagrant
======
![](https://process.filestackapi.com/cache=expiry:max/resize=width:700/compress/Rm2HlpyYQc6ma5BnUGRO)
What could possibly go wrong when you release an application to the public without testing it? You could either wait to find out, or you can find out before releasing the product.
In this tutorial, we will be considering the art of load-testing, one of the several types of [non-functional test][1] required for a system.
According to Wikipedia:
> [Load testing][2] is the process of putting demand on a software system or computing device and measuring its response. Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation.
### What the heck is locust.io?
[Locust][3] is an open source load-testing tool that can be used to simulate millions of simultaneous users. It has other cool features that let you visualize the data generated from the test, and it has been proven and battle-tested ![😃][4]
### Why Vagrant?
Because [vagrant][5] allows us to build and maintain a near-replica of our production environment with the right parameters for memory, CPU, storage, and disk I/O.
### Why VirtualBox?
VirtualBox here will act as our hypervisor, the computer software that will create and run our virtual machine(s).
### So, what is the plan here?
* Download [Vagrant][6] and [VirtualBox][7]
* Set up a near-production replica environment using **vagrant** and **virtualbox** [SOURCE_CODE_APPLICATION][8]
* Set up locust to run our load test [SOURCE_CODE_LOCUST][9]
* Execute test against our production replica environment and check performance
### Some context
Vagrant uses "Provisioners" and "Providers" as building blocks to manage the development environments.
> Provisioners are tools that allow users to customize the configuration of virtual environments. Puppet and Chef are the two most widely used provisioners in the Vagrant ecosystem.
> Providers are the services that Vagrant uses to set up and create virtual environments.
Reference can be found [here][10]
That said, for our Vagrant configuration we will make use of the Shell provisioner and VirtualBox as our provider; just a simple setup for now ![😉][11]
One more thing: the machine and software requirements are written in a file called `Vagrantfile`, which executes the necessary steps to create a development-ready box. So let's get down to business.
### A near production environment using Vagrant and Virtualbox
I used a past project of mine, a very minimal Python/Django application called Bookshelf, to create a near-production environment. Here is the link to the [repository][8].
Let's create our environment using a Vagrantfile.
Use the command `vagrant init --minimal hashicorp/precise64` to create a vagrant file, where `hashicorp` is the username and `precise64` is the box name.
More about getting started with vagrant can be found [here][12]
```
# vagrant file
# set our environment to use our host private and public key to access the VM,
# as vagrant provides an insecure key pair for SSH public key authentication
# so that `vagrant ssh` works
# https://stackoverflow.com/questions/14715678/vagrant-insecure-by-default
private_key_path = File.join(Dir.home, ".ssh", "id_rsa")
public_key_path = File.join(Dir.home, ".ssh", "id_rsa.pub")
insecure_key_path = File.join(Dir.home, ".vagrant.d", "insecure_private_key")
private_key = IO.read(private_key_path)
public_key = IO.read(public_key_path)
# Set the environment details here
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/precise64"
config.vm.hostname = "bookshelf-dev"
# using a private network here, so don't forget to update your /etc/hosts file.
# 192.168.50.4 bookshelf.example
config.vm.network "private_network", ip: "192.168.50.4"
config.ssh.insert_key = false
config.ssh.private_key_path = [
private_key_path,
insecure_key_path # to provision the first time
]
# reference: https://github.com/hashicorp/vagrant/issues/992 @dwickern
# use host/personal public and private key for security reasons
config.vm.provision :shell, :inline => <<-SCRIPT
set -e
mkdir -p /vagrant/.ssh/
echo '#{private_key}' > /vagrant/.ssh/id_rsa
chmod 600 /vagrant/.ssh/id_rsa
echo '#{public_key}' > /vagrant/.ssh/authorized_keys
chmod 600 /vagrant/.ssh/authorized_keys
SCRIPT
# Use a shell provisioner here
config.vm.provision "shell" do |s|
s.path = ".provision/setup_env.sh"
s.args = ["set_up_python"]
end
config.vm.provision "shell" do |s|
s.path = ".provision/setup_nginx.sh"
s.args = ["set_up_nginx"]
end
if Vagrant.has_plugin?("vagrant-vbguest")
config.vbguest.auto_update = false
end
# set your environment parameters here
config.vm.provider 'virtualbox' do |v|
v.memory = 2048
v.cpus = 2
end
config.vm.post_up_message = "At this point use `vagrant ssh` to ssh into the development environment"
end
```
Something to note here: notice the line `config.vm.network "private_network", ip: "192.168.50.4"`, where I configured the virtual machine network to use the private IP address "192.168.50.4". I edited my `/etc/hosts` file to map that IP address to `bookshelf.example`, the fully qualified domain name (FQDN) of the application. Don't forget to edit your `/etc/hosts` file as well; it should look like this:
```
##
# /etc/hosts
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
192.168.50.4 bookshelf.example
```
The provision scripts can be found in the `.provision` [folder][13] of the repository
![provision_sd.png][14]
There you will see all the scripts used in the setup; `start_app.sh` runs the application once you are in the virtual machine via SSH.
To start the process, run `vagrant up && vagrant ssh`. This boots the VM and takes you into it via SSH; inside the VM, navigate to the `/vagrant/` folder and start the app with `./start_app.sh`.
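In one place, the whole startup flow described above looks like this:
```
# on the host: boot the VM and SSH into it
vagrant up && vagrant ssh

# inside the VM: start the application
cd /vagrant/
./start_app.sh
```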
With our application up and running, next would be to create a load testing script to run against our setup.
### NB: The current application setup uses sqlite3 for the database; you can switch to Postgres by uncommenting the relevant lines in the settings file. Also, `setup_env.sh` provisions the environment to use Postgres.
To set up a more comprehensive and robust production replica environment I would suggest you reference the docs [here][15], you can also check out [vagrant][5] to understand and play with vagrant.
### Set up locust for load-testing
In order to perform the load test we are going to make use of locust. The source code can be found [here][9].
First, we create our locust file
```
# locustfile.py
# script used against vagrant set up on bookshelf git repo
# url to repo: https://github.com/andela-sjames/bookshelf
from locust import HttpLocust, TaskSet, task
class SampleTrafficTask(TaskSet):
@task(2)
def index(self):
self.client.get("/")
@task(1)
def search_for_book_that_contains_string_space(self):
self.client.get("/?q=space")
@task(1)
def search_for_book_that_contains_string_man(self):
self.client.get("/?q=man")
class WebsiteUser(HttpLocust):
host = "http://bookshelf.example"
task_set = SampleTrafficTask
min_wait = 5000
max_wait = 9000
```
Here is a simple locust file called `locustfile.py`, where we define a number of locust tasks grouped under a `TaskSet` class. Then we have the `HttpLocust` class, which represents a user; there we define how long a simulated user should wait between executing tasks, as well as which `TaskSet` class defines the user's “behavior”.
Using the filename `locustfile.py` allows us to start the process by simply running the command `locust`. If you choose to give your file a different name, you just need to reference its path with `locust -f /path/to/the/locust/file` to start the script.
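Concretely, both ways of launching it look like this (the second path is only a placeholder):
```
# picks up ./locustfile.py automatically
locust

# or point locust at a differently named or located file
locust -f /path/to/the/locust/file
```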
If you're getting excited and want to know more, the [quick start][16] guide will get you up to speed.
### Execute the test and check performance
It's time to see some action ![😮][17]
Bookshelf app:
Run the application via `vagrant up && vagrant ssh`, navigate to `/vagrant`, and run `./start_app.sh`.
Vagrant allows you to shut down the running machine using `vagrant halt` and to destroy the machine and all the resources that were created with it using `vagrant destroy`. Use this [link][18] to know more about the vagrant command line.
![bookshelf_str.png][14]
Go to your browser and use the private IP address `192.168.50.4`, or preferably `http://bookshelf.example`, which we set in the system's `/etc/hosts` file:
`192.168.50.4 bookshelf.example`
![bookshelf_app_web.png][14]
Locust Swarm:
Within your load-testing folder, activate your `virtualenv`, install the dependencies via `pip install -r requirements.txt`, and run `locust`.
![locust_str.png][14]
We're almost done:
Now go to `http://127.0.0.1:8089/` in your browser.
![locust_rate.png][14]
Enter the number of users you want to simulate and the hatch rate (i.e., how many users to spawn per second), then start swarming your development environment.
**NB: You can also run locust against a development environment hosted via a cloud service if that is your use case. You don't have to confine yourself to vagrant.**
With the report and metrics generated by the process, you should be able to make a well-informed decision regarding your system architecture, or at least know the limits of your system and prepare for anticipated events.
![locust_1.png][14]
![locust_a.png][14]
![locust_b.png][14]
![locust_error.png][14]
### Conclusion
Congrats if you made it to the end! As a recap, we talked about what load-testing is, why you would want to load-test your application, and how to do it using locust and vagrant with a VirtualBox provider and a Shell provisioner. We also looked at the metrics and data generated by the test.
**NB: If you want a more concise vagrant production environment you can reference the docs [here][15].**
Thanks for reading and feel free to like/share this post.
--------------------------------------------------------------------------------
via: https://www.codementor.io/samueljames/locust-io-load-testing-using-vagrant-ffwnjger9
作者:[Samuel James][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:
[1]:https://en.wikipedia.org/wiki/Non-functional_testing
[2]:https://en.wikipedia.org/wiki/Load_testing
[3]:https://locust.io/
[4]:https://twemoji.maxcdn.com/2/72x72/1f603.png
[5]:https://www.vagrantup.com/intro/index.html
[6]:https://www.vagrantup.com/downloads.html
[7]:https://www.virtualbox.org/wiki/Downloads
[8]:https://github.com/andela-sjames/bookshelf
[9]:https://github.com/andela-sjames/load-testing
[10]:https://en.wikipedia.org/wiki/Vagrant_(software)
[11]:https://twemoji.maxcdn.com/2/72x72/1f609.png
[12]:https://www.vagrantup.com/intro/getting-started/install.html
[13]:https://github.com/andela-sjames/bookshelf/tree/master/.provision
[14]:data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==
[15]:http://vagrant-django.readthedocs.io/en/latest/intro.html
[16]:https://docs.locust.io/en/latest/quickstart.html
[17]:https://twemoji.maxcdn.com/2/72x72/1f62e.png
[18]:https://www.vagrantup.com/docs/cli/


@ -0,0 +1,221 @@
Rancher - Container Management Application
======
Docker is cutting-edge containerization software, used in most IT companies to reduce infrastructure costs.
By default, Docker comes without any GUI. That is easy for a Linux administrator to manage, but quite difficult for developers, and when it comes to production it gets difficult for Linux admins too. So, what would be the best solution to manage Docker without any trouble?
The answer is a GUI. The Docker API allows third-party applications to interface with Docker, and there are many Docker GUI applications available on the market. We already wrote an article about the Portainer application; today we are going to discuss Rancher.
Containers make software development easier, enabling you to write code faster and run it better. However, running containers in production can be hard.
**Suggested Read :** [Portainer A Simple Docker Management GUI][1]
### What is Rancher
[Rancher][2] is a complete container management platform that makes it easy to deploy and run containers in production on any infrastructure. It provides infrastructure services such as multi-host networking, global and local load balancing, and volume snapshots. It integrates native Docker management capabilities such as Docker Machine and Docker Swarm. It offers a rich user experience that enables devops admins to operate Docker in production at large scale.
See the following articles for Docker installation on Linux.
**Suggested Read :**
**(#)** [How to install Docker in Linux][3]
**(#)** [How to play with Docker images on Linux][4]
**(#)** [How to play with Docker containers on Linux][5]
**(#)** [How to Install, Run Applications inside Docker Containers][6]
### Rancher Features
* Set up Kubernetes in two minutes
* Launch apps with a single click (90 popular Docker applications)
* Deploy and manage Docker easily
* Complete container management platform for production environments
* Quickly deploy containers in production
* Automate container deployment and operations with robust technology
* Modular infrastructure services
* Rich set of orchestration tools
* Support for multiple authentication mechanisms
### How to install Rancher
Rancher installation is very simple since it runs as lightweight Docker containers. Rancher is deployed as a set of Docker containers: running it is as simple as launching two containers, one as the management server and another on a node as an agent. Simply run one of the following commands to deploy Rancher on Linux.
The Rancher server offers two different image tags, `stable` and `latest`. The commands below will pull the corresponding Rancher image and install it on your system. It will only take a couple of minutes for the Rancher server to start up.
* `stable` : The latest stable release build, which is recommended for production environments.
* `latest` : The latest development build. These builds have been validated through Rancher's CI automation framework but are not advisable for deployment in production.
Rancher can be installed in several ways. In this tutorial we are going to discuss two variants:
* Install the Rancher server in a single container (built-in Rancher database)
* Install the Rancher server in a single container (external database)
### Method-1
Run one of the following commands to install the Rancher server in a single container (built-in Rancher database), depending on the tag you want:
```
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable    # stable tag
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:latest    # latest tag
```
### Method-2
Instead of using the internal database that comes with the Rancher server, you can start the Rancher server pointing to an external database. First create the required database and a database user for it:
```
> CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8';
> GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'cattle';
> GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'cattle';
```
Run the following command to start Rancher connecting to an external database.
```
$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server \
--db-host myhost.example.com --db-port 3306 --db-user username --db-pass password --db-name cattle
```
If you want to test Rancher 2.0, use the following command to start it:
```
$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/server:preview
```
### Access & Setup Rancher Through GUI
Navigate to `http://hostname:8080` or `http://server_ip:8080` to access the Rancher GUI.
[![][7]![][7]][8]
### How To Register the Host
Register your host URL, which allows hosts to connect to the Rancher API; this is a one-time setup.
To do so, click the “Add a Host” link under the main menu, or go to >> Infrastructure >> Add Hosts, then hit the `Save` button.
[![][7]![][7]][9]
By default, access control authentication is disabled in Rancher, so first we have to enable it through one of the available methods; otherwise anyone can access the GUI.
Go to >> Admin >> Access Control, input the following values, and finally hit the `Enable Authentication` button to enable it. In my case I'm enabling `local authentication`.
* **`Login UserName`** : input your desired login username
* **`Full Name`** : input your full name
* **`Password`** : input your desired password
* **`Confirm Password`** : confirm the password once again
[![][7]![][7]][10]
Log out and log back in with your new credentials.
[![][7]![][7]][11]
Now I can see that local authentication is enabled.
[![][7]![][7]][12]
### How To Add Hosts
After registering your host URL you will be taken to the next page, where you can choose Linux machines from various cloud providers. We are going to add the host that is running the Rancher server, so select the `Custom` option and input the required information.
Enter your server's public IP address in the 4th step, run the command displayed in the 5th step in your terminal, then finally hit the `Close` button.
```
$ sudo docker run -e CATTLE_AGENT_IP="192.168.1.112" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://192.168.1.112:8080/v1/scripts/3F8217A1DCF01A7B7F8A:1514678400000:D7WeLUcEUnqZOt8rWjrvoaUE
INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.1.112:8080/v1
INFO: Attempting to connect to: http://66.70.189.137:8080/v1
INFO: http://192.168.1.112:8080/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=A35151AB87C15633DFB4
INFO: ENV: CATTLE_AGENT_IP=192.168.1.112
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://192.168.1.112:8080/v1
INFO: ENV: DETECTED_CATTLE_AGENT_IP=172.17.0.1
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
INFO: Deleting container rancher-agent
INFO: Launched Rancher Agent: 3415a1fd101f3c57d9cff6aef373c0ce66a3e20772122d2ca832039dcefd92fd
```
[![][7]![][7]][13]
Wait a few seconds and the newly added host will be visible. To see it, go to the Infrastructure >> Hosts page.
[![][7]![][7]][14]
### How To View Containers
To view a list of running containers, navigate to >> Infrastructure >> Containers.
[![][7]![][7]][15]
### How To Create Container
It's very simple; just navigate to the following location to create a container.
Go to >> Infrastructure >> Containers >> “Add Container” and input the required information as per your requirements. To test this, I'm going to create a CentOS container with the latest image.
[![][7]![][7]][16]
The new container is listed under Infrastructure >> Containers.
[![][7]![][7]][17]
Click the container name to view its performance information, such as CPU, memory, network, and storage.
[![][7]![][7]][18]
To manage a container (stop, start, clone, restart, etc.), choose it, then hit the three-dots button on the left side of the container, or the `Actions` button, to perform the operation.
[![][7]![][7]][19]
If you want console access to the container, just hit the `Execute Shell` option in the actions menu.
[![][7]![][7]][20]
### How To Deploy Container From Application Catalog
Rancher provides a catalog of application templates that make it easy to deploy applications in a single click. It maintains nearly 90 popular applications contributed by the Rancher community.
[![][7]![][7]][21]
Go to >> Catalog >> All, choose the required application, and finally hit the “Launch” button to deploy it.
[![][7]![][7]][22]
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/
作者:[Magesh Maruthamuthu][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.2daygeek.com/author/magesh/
[1]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/
[2]:http://rancher.com/
[3]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/
[4]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/
[5]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/
[6]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/
[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[8]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-1.png
[9]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-2.png
[10]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3.png
[11]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3a.png
[12]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-4.png
[13]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-5.png
[14]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-6.png
[15]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-7.png
[16]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-8.png
[17]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-9.png
[18]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-10.png
[19]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-11.png
[20]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-12.png
[21]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-13.png
[22]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-14.png


@ -0,0 +1,117 @@
Manjaro Gaming当 Manjaro 的才华遇上 Linux 游戏
======
[![见见 Manjaro Gaming, 一个专门为游戏者设计的 Linux 发行版,带有 Manjaro 的所有才能。][1]][1]
[在 Linux 上玩转游戏][2]?没错,这是非常可行的,而且现在有一个专门面向游戏玩家的新 Linux 发行版。
Manjaro Gaming 是一个专门为游戏玩家设计的、带有 Manjaro 所有才能的发行版。之前用过 Manjaro Linux 的人一定明白,为什么这对游戏玩家来说是一个天大的好消息。
[Manjaro][3] 是一个 Linux 发行版,它基于最流行的 Linux 发行版—— [Arch Linux][4]。 Arch Linux 因它的前沿性所带来的轻量、强大、高度定制和最新的体验而闻名于世。尽管这些都非常赞,但是也正是因为 Arch Linux 提倡这种 DIY (do it yourself)方式,导致一个主要的缺点,那就是用户想用好它,需要处理一定的技术问题。
Manjaro 把这些要求全都剥开了去,让 Arch 对新手更亲切,同时也为老手提供了 Arch 所有的高端与强大功能。总之Manjaro 是一个用户友好型的 Linux 发行版,工作起来行云流水。
Manjaro 成为一个强大且极其适合游戏的发行版的原因有:
* Manjaro 自动检测计算机的硬件(例如显卡)
* 自动安装必要的驱动和软件(例如,显示驱动)
* 预安装播放媒体文件的编码器
* 专用的软件库提供完整测试过的稳定软件包。
Manjaro Gaming 打包了 Manjaro 的所有强大特性,并加入了各种调整和专用软件包,致力于让在 Linux 上玩游戏既顺畅又享受。
![Manjaro Gaming 中][5]
#### 调整
Manjaro Gaming 中所做的一些调整有:
* Manjaro Gaming 使用高度定制化的 XFCE 桌面环境,拥有一个黑暗风格主题。
* 禁用了睡眠模式,防止用手柄玩游戏或观看长过场动画时计算机自动休眠。
#### 软件
维持 Manjaro 工作起来行云流水的传统Manjaro Gaming 打包了各种开源软件包,提供游戏人群经常需要用到的功能。其中一部分软件有:
* [**KdenLIVE**][6]: 用于编辑游戏视频的视频编辑软件
* [**Mumble**][7]: 给游戏玩家使用的语音聊天软件
* [**OBS Studio**][8]: 用于录制视频或在 [Twitch][9] 上直播游戏用的软件
* **[OpenShot][10]**: Linux 上强大的视频编辑器
* [**PlayOnLinux**][11]: 使用 [Wine][12] 作为后端,在 Linux 上运行 Windows 游戏的软件
* [**Shutter**][13]: 多种功能的截图工具
#### 模拟器
Manjaro Gaming 自带很多游戏模拟器:
* **[DeSmuME][14]** : Nintendo DS 任天堂 DS 模拟器
* **[Dolphin Emulator][15]** : GameCube 和 Wii 模拟器
* [**DOSBox**][16]: DOS 游戏模拟器
* **[FCEUX][17]** : Nintendo Entertainment System (任天堂娱乐系统, NES), FamicomFC, 红白机), 和 Famicom Disk System (FC磁盘系统, FDS) 模拟器
* **Gens/GS** : Sega Mega Drive世嘉模拟器
* **[PCSXR][18]** : PlayStation 模拟器
* [**PCSX2**][19]: Playstation 2 模拟器
* [**PPSSPP**][20]: PSP 模拟器
* **[Stella][21]** : Atari 2600 VCS (雅达利)模拟器
* [**VBA-M**][22]: Gameboy 和 GameboyAdvance 模拟器
* [**Yabause**][23]: Sega Saturn (世嘉土星)模拟器
* **[ZSNES][24]** : Super Nintendo (超级任天堂)模拟器
#### 其它
还有一些终端插件——Color, ILoveCandy 和 Screenfetch。也包括带有 Retro Conky (译:复古 Conky风格的 [Conky 管理器][25]。
**注意:上面提到的所有功能并没有全部包含在 Manjaro Gaming 的现行发行版中(版本 16.03 )。部分功能计划将在下一版本中纳入——Manjaro Gaming 16.06。**
### 下载
Manjaro Gaming 16.06 将会是 Manjaro Gaming 的第一个正式版本。如果你现在就有兴趣尝试,可以在 SourceForge 的[项目页面][26]下载它的 ISO 文件。
你觉得 Gaming Linux 发行版怎么样?想尝试吗?告诉我们!
--------------------------------------------------------------------------------
via: https://itsfoss.com/manjaro-gaming-linux/
作者:[Munif Tanjim][a]
译者:[译者ID](https://github.com/XLCYun)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/munif/
[1]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming.jpg
[2]:https://itsfoss.com/linux-gaming-guide/
[3]:https://manjaro.github.io/
[4]:https://www.archlinux.org/
[5]:https://itsfoss.com/wp-content/uploads/2016/06/Manjaro-Gaming-Inside-1024x576.png
[6]:https://kdenlive.org/
[7]:https://www.mumble.info
[8]:https://obsproject.com/
[9]:https://www.twitch.tv/
[10]:http://www.openshot.org/
[11]:https://www.playonlinux.com
[12]:https://www.winehq.org/
[13]:http://shutter-project.org/
[14]:http://desmume.org/
[15]:https://dolphin-emu.org
[16]:https://www.dosbox.com/
[17]:http://www.fceux.com/
[18]:https://pcsxr.codeplex.com
[19]:http://pcsx2.net/
[20]:http://www.ppsspp.org/
[21]:http://stella.sourceforge.net/
[22]:http://vba-m.com/
[23]:https://yabause.org/
[24]:http://www.zsnes.com/
[25]:https://itsfoss.com/conky-gui-ubuntu-1304/
[26]:https://sourceforge.net/projects/mgame/


@ -0,0 +1,87 @@
# 为什么开源在计算机专业的学生中不那么流行?
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OSDC_OpenClass_520x292_FINAL_JD.png?itok=ly78pMqu)
图片来自opensource.com
年轻程序员的技术悟性和创造力是充满活力的。
这一点可以从我参加今年的全球最大的黑客马拉松 [PennApps][1] 时所目睹的他们的勤奋工作中看出。我的高中和大学年龄的同龄人创建的项目,从[可以让不能说话或行动不便的人通过眨眼来交流的设备][2],到[带有物联网功能的煎饼机][3],不一而足。在整个过程中,开源的精神是切实可见的:不同群体之间建立起共同的愿望,思想和技术诀窍自由流通,无畏的实验和快速的原型设计,以及热切的参与渴望。
那么我想知道,为什么在我的技术极客同行中,开源并不是一个热门话题?
为了更多地了解大学生在听到“开源”时的想法,我调查了几个大学生,他们都是我所属的专业计算机科学组织的成员。这个社区的所有成员都必须在高中或大学期间申请,并根据他们的计算机科学成就和领导能力进行选择,无论是领导学校的机器人团队、建立将编程带入资金不足课堂的非营利组织,还是其他一些值得努力的事情。鉴于这些人在计算机科学方面的成就,我认为他们的观点将有助于理解开源项目对年轻程序员的吸引力(或缺乏吸引力)。
我编写和发布的在线调查包括以下问题:
* 你喜欢编写个人项目吗?您是否曾经参与过开源项目?
* 你觉得自己开发自己的编程项目,还是对现有的开源工作做出贡献会更有益处?
* 你将如何比较为开源软件组织和专有软件的组织编码的声望?
尽管绝大多数人表示,他们至少偶尔会喜欢在业余时间编写个人项目,但大多数人从未参与过开源项目。当我进一步探索这一趋势时,一些关于开源项目和组织的常见偏见逐渐浮出水面。为了说服我的同行们相信开源项目值得他们投入时间,也为了让教育工作者和开源组织了解学生们对开源的看法,我将谈谈三个最主要的偏见。
### 偏见1:从零开始创建个人项目比为现有的开源项目做贡献更好。
在我所调查的 26 位大学年龄的程序员中,有 24 人声称,开发自己的个人项目比为开源项目做贡献更有益。
作为一名计算机科学专业的大一新生,我也相信这一点。我经常听到年长的同行说,个人项目会让我成为更有吸引力的实习生。没有人提到过为开源项目做出贡献的可能性——所以在我看来,这是无关紧要的。
我现在意识到,开源项目是对现实世界的强大准备。为开源项目做贡献能培养一种对[工具和语言如何组合在一起][4]的认识,而单打独斗的个人项目做不到这一点。而且,开源是一种协调与协作的练习,能培养[学生的沟通、团队合作和解决问题的专业技能][5]。
### 偏见2我的编码技能是不够的。
一些受访者表示,他们被开源项目吓倒了,不知道该从哪里开始贡献,或者担心拖慢项目进展。不幸的是,这种自卑感往往会影响女性程序员,而且这种感觉并不止于开源社区。事实上,“冒名顶替综合症”在开源社区甚至可能会被放大,因为[开源的倡导者通常会拒绝官僚主义][6],而官僚主义虽然会阻碍内部流动,却有助于新加入的人了解自己在组织中的位置。
我还记得第一次在 GitHub 上查看开源项目时,我对贡献指南感到害怕。然而,指导方针并非旨在鼓励排他性,而是提供[指导][7]。为此,我认为指导方针是建立期望而不依赖于等级结构的一种方式。
几个开源项目积极地为新贡献者创造了一席之地。[TEAMMATES][8] 是一种教育反馈管理工具,它是众多为新手标记适合入门的问题的开源项目之一。在评论中,各种技能水平的程序员都在详细讨论实现细节,这表明开源项目既欢迎新手程序员,也欢迎经验丰富的软件老手。对于那些还在犹豫的年轻程序员来说,一些[开源项目][9]还考虑周全地采用了[冒名顶替综合症免责声明][10]。
### 偏见3专有软件公司比开源软件组织做得更好。
在接受调查的 26 位受访者中,只有 5 位认为开源组织和专有软件组织在声望上是平等的。这可能是源于一种误解,即“开放”意味着“无利可图”,因此质量低下(参见[“开源”不只是意味着免费][11])。
然而,开源软件和盈利并不相互排斥。事实上,小型和大型企业通常都会为免费的开源软件付费,以获得技术支持服务。正如[红帽公司首席执行官 Jim Whitehurst][12] 所解释的那样:“我们拥有一批工程团队,负责跟踪 Linux 的每一项变更(错误修复、安全性增强等),确保我们客户的关键任务系统保持最新和稳定。”
另外,开放的本质是让更多的人能够检查源代码,从而提升而不是阻碍质量。[Mobify 首席执行官 Igor Faletski][13] 写道Mobify 这支“由 25 位软件开发人员和专业质量保证人员组成的团队”,“无法与世界上所有可能使用 [Mobify 开源]平台的软件开发者相比,他们每个人都是该项目的潜在测试者或贡献者”。
另一个问题可能是,年轻的程序员不知道他们每天使用的正是开源软件。我使用过许多工具,包括 MySQL、Eclipse、Atom、Audacity 和 WordPress好几个月甚至好几年却没有意识到它们是开源的。经常急于下载教学大纲指定软件以完成课堂作业的大学生可能不知道哪些软件是开源的。这使得开源看起来比实际上更加陌生。
所以,同学们,在尝试开源之前,不要急着否定它。可以看看这个[对初学者友好的项目][14]列表和这[六个起点][15],开始你的开源之旅。
教育工作者们,提醒您的学生开源社区的成功创新的历史,并引导他们走向课堂之外的开源项目。你将帮助培养更敏锐、更有准备、更自信的学生。
### 关于作者
Susie Choi - Susie 是杜克大学计算机科学专业的本科生。她对技术革新和开放源码原则对教育和社会经济不平等问题的影响非常感兴趣。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/students-and-open-source-3-common-preconceptions
作者:[Susie Choi][a]
译者:[amwps290](https://github.com/amwps290)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/susiechoi
[1]:http://pennapps.com/
[2]:https://devpost.com/software/blink-9o2iln
[3]:https://devpost.com/software/daburrito
[4]:https://hackernoon.com/benefits-of-contributing-to-open-source-2c97b6f529e9
[5]:https://opensource.com/education/16/8/5-reasons-student-involvement-open-source
[6]:https://opensource.com/open-organization/17/7/open-thinking-curb-bureaucracy
[7]:https://opensource.com/life/16/3/contributor-guidelines-template-and-tips
[8]:https://github.com/TEAMMATES/teammates/issues?q=is%3Aissue+is%3Aopen+label%3Ad.FirstTimers
[9]:https://github.com/adriennefriend/imposter-syndrome-disclaimer/blob/master/examples.md
[10]:https://github.com/adriennefriend/imposter-syndrome-disclaimer
[11]:https://opensource.com/resources/what-open-source
[12]:https://hbr.org/2013/01/yes-you-can-make-money-with-op
[13]:https://hbr.org/2012/10/open-sourcing-may-be-worth
[14]:https://github.com/MunGell/awesome-for-beginners
[15]:https://opensource.com/life/16/1/6-beginner-open-source


@ -0,0 +1,75 @@
开源软件二十年 —— 过去,现在,未来
=================================================================
### 谨以此文纪念“开源软件”一词诞生二十周年:开源软件是如何取得主导地位的?今后又将如何发展?
![Open source software: 20 years and counting](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/2cents.png?itok=XlT7kFNY "Open source software: 20 years and counting")
图片来自 : opensource.com
二十年以前,在 1998 年 2 月,“开源”这个词汇第一次被用于软件。不久之后,正如《开源的定义》OSD的作者 [Bruce Perens 所说][9]OSD 这一文档被创建,开放源代码促进会( OSI )的种子也被播下。
> “开源”是一场运动的正式名称,它旨在向商业界宣传自由软件这一既有概念,并依据一套规则对许可证进行认证。
二十年后,我们能看到这一运动是非常成功的,甚至超出了当时参与这一活动的任何人的想象。 如今,开源软件无处不在。它是互联网和网络的基础。它为我们所有使用的电脑和移动设备,以及它们所连接的网络提供动力。没有它,云计算和新兴的物联网将不可能发展,甚至不可能出现。它使新的业务方式能被测试和验证,还可以让像谷歌和 Facebook 这样的大公司使用别人的成果继续发展。
如任何人类的创造物一样,它也有黑暗的一面。它让反乌托邦式的监视及其必然导致的专制控制成为可能;它为犯罪分子提供了欺骗受害者的新途径;它让匿名而大规模的欺凌得以存在;它还让有破坏性的狂热分子可以暗中组织而不感到任何不便。这些都是开源能力投下的阴影。所有的人类工具都是如此,既可以造福人类,亦可以加害于人类。我们需要帮助下一代,让他们能争取无可取代的创新。就如[费曼所说][10]:
> 每个人都掌握着一把开启天堂之门的钥匙,但这把钥匙亦能打开地狱之门。
开源运动已经渐渐成熟,我们讨论和理解它的方式也渐渐成熟。如果说第一个十年是拥护与非议对立的十年,那么第二个十年就是接纳与适应并存的十年。
1. 在第一个十年里面,关键问题就是商业模型 - “我怎样才能自由的贡献代码,且从中受益?” - 之后,还有更多的人提出了有关管理的难题- “我怎么才能参与进来,且不受控制 ?”
2. 第一个十年的开源项目主要是替代现有的产品; 在第二个十年中,它们更多地是作为更大的解决方案的组成部分。
3. 第一个十年的项目往往由非正式的个人组织进行; 在第二个十年中,它们经常由逐个项目地创建的机构经营。
4. 第一个十年的开源开发人员经常是投入于单一的项目,并经常在业余时间工作。 在第二个十年里,他们越来越多地受雇于一个专门的技术 —— 他们成了专业人员。
5. 尽管开源一直被认为是提升软件自由度的一种方式,但在第一个十年中,这个运动与那些更喜欢使用“自由软件”的人产生了冲突。在第二个十年里,随着开源运动的加速发展,这个冲突基本上被忽略了。
第三个十年会带来什么?
1. _更复杂的商业模式_ - 主要的商业模式将是通过整合许多开源组件(特别是在部署和扩展环节)而产生的复杂解决方案,管理的需求也将反映这一点。
2. _开源拼图_ - 开源项目将主要是一堆组件,由此产生的解决方案将是开源组件的拼图。
3. _项目族_ - 越来越多的项目将由诸如 Linux Foundation 和 OpenStack 等联盟/行业协会以及 Apache 和 Software Freedom Conservancy 等机构主办。
4. _专业通才_ - 开源开发人员将越来越多地被雇来将诸多技术集成到复杂的解决方案里,这将有助于一系列的项目的开发。
5. _软件自由度降低_ - 随着新问题的出现,软件自由(将四项自由应用于用户和开发人员之间的灵活性)将越来越多地应用于识别适用于协作社区和独立部署人员的解决方案。
2018年我将在全球各地的主题演讲中阐述这些内容。欢迎观看 [OSI 20周年纪念全球巡演][11]
_本文最初发表于 [Meshed Insights Ltd.][2],已获转载授权。本文以及我在 OSI 的工作由 [Patreon patrons][3] 支持。_
### 关于作者
[![Simon Phipps (smiling)](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/picture-2305.jpg?itok=CefW_OYh)][12] Simon Phipps - 计算机行业和开源软件专家 Simon Phipps 创办了 [Public Software][4](一个欧洲的开源项目托管机构),并以志愿者身份担任 OSI 总裁和 The Document Foundation 的董事。他的工作由 [Patreon patrons][5] 赞助,如果你想看到更多,请来做赞助人吧!在超过 30 年的职业生涯中,他一直在世界领先企业的战略层面参与开发……[关于 Simon Phipps][6][关于我][7]
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/2/open-source-20-years-and-counting
作者:[Simon Phipps ][a]
译者:[name1e5s](https://github.com/name1e5s)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/simonphipps
[1]:https://opensource.com/article/18/2/open-source-20-years-and-counting?rate=TZxa8jxR6VBcYukor0FDsTH38HxUrr7Mt8QRcn0sC2I
[2]:https://meshedinsights.com/2017/12/21/20-years-and-counting/
[3]:https://patreon.com/webmink
[4]:https://publicsoftware.eu/
[5]:https://patreon.com/webmink
[6]:https://opensource.com/users/simonphipps
[7]:https://opensource.com/users/simonphipps
[8]:https://opensource.com/user/12532/feed
[9]:https://perens.com/2017/09/26/on-usage-of-the-phrase-open-source/
[10]:https://www.brainpickings.org/2013/07/19/richard-feynman-science-morality-poem/
[11]:https://opensource.org/node/905
[12]:https://opensource.com/users/simonphipps
[13]:https://opensource.com/users/simonphipps
[14]:https://opensource.com/users/simonphipps


@ -0,0 +1,82 @@
(再次)在 Ubuntu 16.04 上配置 MSMTP
======
这篇文章是我之前的博客上那篇《在 Ubuntu 16.04 上配置 MSMTP》的副本再次发表以作存档。我不知道它是否能在更高的版本上工作由于我不再托管自己的 Ubuntu/MSMTP 服务器了,现在也无法验证。但如果以后需要重新设置,我会写一篇更新的帖子!无论如何,以下是我现有的配置。
我之前写了一篇在 Ubuntu 12.04 上配置 msmtp 的文章,但是正如我在之前的文章中暗示的那样,当我升级到 Ubuntu 16.04 后出现了一些问题。接下来的内容基本上是一样的,但 16.04 有一些小的更新。和以前一样,这里假定你使用 Apache 作为 Web 服务器,但是我相信如果你选择其他的 Web 服务器,也应该相差不多。
我使用 [msmtp][1] 发送来自这个博客的邮件,用来通知我评论和更新等事件。这里我会记录如何配置它通过 Google Apps 帐户发送电子邮件,不过标准的 Gmail 帐户应该也是一样的。
首先,我们需要安装 3 个软件包:
`sudo apt-get install msmtp msmtp-mta ca-certificates`
安装完成后就需要一个默认配置。默认情况下msmtp 会在 `/etc/msmtprc` 中查找,所以我使用 vim 创建了这个文件,尽管任何文本编辑器都可以做到这一点。这个文件看起来像这样:
```
# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>
```
任何大写的占位项(如 `<PASSWORD>`)都需要替换为你自己的配置。日志文件是一个例外,当然,你也可以把活动/警告/错误日志放在任何你想要的地方。
文件保存后,我们需要收紧上述配置文件的权限(如果该文件的权限过于开放msmtp 将拒绝运行),并创建日志文件的目录。
```
sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
```
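在继续配置 PHP 之前,也可以先从命令行直接验证 msmtp 本身能否发信。下面只是一个示意(收件人地址是示例;由于上面把 `/etc/msmtprc` 的权限设成了 0600这里用 sudo 运行以确保能读到配置;`<MSMTP_ACCOUNT_NAME>` 对应你配置文件中的账户名):
```
# 从标准输入读取一封简单邮件,并通过指定账户发送
echo -e "Subject: msmtp test\n\ntest body" | sudo msmtp -a <MSMTP_ACCOUNT_NAME> personal@email.com
```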
接下来,我选择为 msmtp 日志配置 logrotate以确保日志文件不会变得太大并让日志目录更加整洁。为此我们创建 `/etc/logrotate.d/msmtp` 并按以下内容配置。请注意,这一步是可选的,你可以选择跳过,或者以不同的方式配置日志。
```
/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}
```
现在配置了日志,我们需要通过编辑 `/etc/php/7.0/apache2/php.ini` 告诉 PHP 使用 msmtp并将 sendmail 路径从
`sendmail_path =`
变成
`sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"`
这里我遇到了一个问题:即使我指定了帐户名称,测试时它也没有正确发送电子邮件,这就是为什么 `account default : <MSMTP_ACCOUNT_NAME>` 这行被放在 msmtp 配置文件末尾的原因。要测试配置,请确保 php.ini 已保存,运行 `sudo service apache2 restart`,然后运行 `php -a` 并执行以下命令:
```
mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();
```
此时发生的任何错误都会显示在输出中,因此错误诊断相对容易。如果一切顺利,你现在应该可以在 PHP至少 WordPress 可以)中使用 sendmail通过 Gmail或 Google Apps从 Ubuntu 服务器发送电子邮件了。
我并不是说这是最安全的配置,所以如果你发现其中有非常不安全的地方,或者有其他严重的错误,请告诉我,我会相应地更新。
--------------------------------------------------------------------------------
via: https://codingproductivity.wordpress.com/2018/01/18/configuring-msmtp-on-ubuntu-16-04-again/
作者:[JOE][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://codingproductivity.wordpress.com/author/joeb454/
[1]:http://msmtp.sourceforge.net/


@ -0,0 +1,59 @@
哪个 Linux 内核版本是 “稳定的”?
============================================================
![Linux kernel ](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/apple1.jpg?itok=PGRxOQz_ "Linux kernel")
Konstantin Ryabitsev 为你讲解哪个 Linux 内核版本才算“稳定版”,以及如何选择适合你的内核版本。[Creative Commons Zero][1]
每次 Linus Torvalds 发布 [一个新 Linux 内核的主线版本][4],几乎都会引起这种困惑,那就是到底哪个内核版本才是最新的“稳定版”?是新的那个 X.Y还是前面的那个 X.Y-1.Z ?最新的内核版本是不是太“新”了?你是不是应该坚持使用以前的版本?
[kernel.org][5] 网页上的信息并不会帮你解开这个困惑。目前,在页面的最顶部,我们看到是最新稳定版内核是 4.15 — 但是在这个表格的下面4.14.16 也被列为“稳定版”,而 4.15 被列为“主线版本”,很困惑,是吧?
不幸的是,这个问题并不好回答。我们在这里使用“稳定”这个词有两个不同的意思:一是,作为最初发布的 Git 树的名字,二是,表示这个内核已经被考虑为“稳定版”,可以作为“生产系统”使用了。
由于 Git 的分布式特性Linux 的开发工作在许多 [不同的 fork 仓库中][6] 进行。所有的 bug 修复和新特性也是由子系统维护者首次收集和准备的,然后提交给 Linus Torvalds由 Linus Torvalds 包含进 [他的 Linux 树][7] 中,它的 Git 树被认为是 Git 仓库的 “master”。我们称这个树为 ”主线" Linux 树。
### 候选发布版
在每个新的内核版本发布之前它都要经过几轮的“候选发布”它由开发者进行测试并“打磨”所有的这些很酷的新特性。基于他们这几轮测试的反馈Linus 决定最终版本是否准备就绪。通常每周发布一个候选版本,但是,这个数字经常走到 -rc8并且有时候甚至达到 -rc9 及以上。当 Linus 确信那个新内核已经没有问题了,他就制作最终发行版,我们称这个版本为“稳定版”,表示它不再是一个“候选发布版”。
### Bug 修复
就像任何一个由不是十全十美的人所写的复杂软件一样,任何一个 Linux 内核的新版本都包含 bug并且这些 bug 必须被修复。Linux 内核的 bug 修复规则是非常简单的:所有修复必须首先进入到 Linus 的树。一旦在主线仓库中 bug 被修复后,它接着会被应用到由内核开发社区仍然在维护的已发布的内核中。在它们被考虑回迁到已发布的稳定版本之前,所有的 bug 修复必须满足 [一套重要的标准][8] — 标准的其中之一是,它们 “必须已经存在于 Linus 的树中”。这是一个 [独立的 Git 仓库][9],维护它的用途是回迁 bug 修复,而它也被称为“稳定”树 — 因为它用于跟踪以前发布的稳定内核。这个树由 Greg Kroah-Hartman 策划和维护。
### 最新的稳定内核
因此,无论在什么时候,当你为了查看最新的稳定内核而访问 kernel.org 网站时,都应该使用大黄色按钮所指出的那个“最新稳定内核”。
![sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8s](https://lh6.googleusercontent.com/sWnmAYf0BgxjGdAHshK61CE9GdQQCPBkmSF9MG8sYqZsmL6e0h8AiyJwqtWYC-MoxWpRWHpdIEpKji0hJ5xxeYshK9QkbTfubFb2TFaMeFNmtJ5ypQNt8lAHC2zniEEe8O4v7MZh)
但是,你可能会惊奇地发现 -- 4.15 和 4.14.16 都是稳定版本,那么到底哪一个更“稳定”呢?有些人不愿意使用 ".0" 的内核发行版,因为他们认为这个版本并不足够“稳定”,直到最新的是 ".1" 的为止。很难证明或者反驳这种观点的对与错,并且这两种观点都有赞成或者反对的理由,因此,具体选择哪一个取决于你的喜好。
一方面,任何一个进入到稳定树的发行版都必须首先被接受进入主线内核版本中,并且随后会被回迁到已发行版本中。这意味着内核的主线版本相比稳定树中的发行版本来说,总包含有最新的 bug 修复,因此,如果你想使用的发行版包含的“**已知 bug**”最少,那么使用 “.0” 的主线发行版是最佳选择。
另一方面,主线版本增加了所有很酷的新特性 — 而新特性也给它们带来了**数量未知的“新 bug”**,而这些“新 bug”在老的稳定版中是**不会存在**的。而新的、未知的 bug 是否比旧的、已知的但尚未修复的 bug 更加令人担心呢? -- 这取决于你的选择。不过需要说明的一点是,许多 bug 修复仅对内核的主线版本进行了彻底的测试。当补丁回迁到旧内核时,它们**可能**会工作的很好,但是它们**很少**做与旧内核的集成测试工作。通常都假定,“以前的稳定版本”足够接近当前的确信可用于生产系统的主线版本。而实际上也确实是这样做的,当然,这也更加说明了为什么选择”哪个内核版本更稳定“是件**非常困难**的事情了。
因此,从根本上说,我们并没有定量的或者定性的手段去明确的告诉你哪个内核版本更加稳定 -- 4.15 还是 4.14.16?我们能够做到的只是告诉你,它们具有”**不同的**稳定性“,(这个答案可能没有帮到你,但是,至少你明白了这些版本的差别是什么?)。
_学习更多的 Linux 的知识,可以通过来自 Linux 基金会和 edX 的免费课程 ["认识 Linux" ][3]。_
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/2/which-linux-kernel-version-stable
作者:[KONSTANTIN RYABITSEV ][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/mricon
[1]:https://www.linux.com/licenses/category/creative-commons-zero
[2]:https://www.linux.com/files/images/apple1jpg
[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
[4]:https://www.linux.com/blog/intro-to-linux/2018/1/linux-kernel-415-unusual-release-cycle
[5]:https://www.kernel.org/
[6]:https://git.kernel.org/pub/scm/linux/kernel/git/
[7]:https://git.kernel.org/torvalds/c/v4.15
[8]:https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
[9]:https://git.kernel.org/stable/linux-stable/c/v4.14.16