Commit 1fa69a661b: Merge branch 'master' of https://github.com/LCTT/TranslateProject into new
How to Install and Use FreeDOS on VirtualBox
======

This step-by-step guide shows you how to install FreeDOS on VirtualBox in Linux.

### Installing FreeDOS on VirtualBox in Linux

<https://www.youtube.com/embed/p1MegqzFAqA?enablejsapi=1&autoplay=0&cc_load_policy=0&iv_load_policy=1&loop=0&modestbranding=1&rel=0&showinfo=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=2&>

In November of 2017, I [interviewed Jim Hall][1] about the history behind the [FreeDOS project][2]. Today, I’m going to tell you how to install and use FreeDOS. Please note: I will be using [VirtualBox][3] 5.2.14 on [Solus][4].
Note: I used Solus as the host operating system for this tutorial because it is very easy to set up. One thing you should keep in mind is that Solus’ Software Center contains two versions of VirtualBox: `virtualbox` and `virtualbox-current`. Solus gives you the option to use either the linux-lts kernel or the linux-current kernel. `virtualbox` is modified for linux-lts and `virtualbox-current` is for linux-current.

#### Step 1 – Create New Virtual Machine

![][5]

Once you open VirtualBox, press the “New” button to create a new virtual machine. You can name it whatever you want; I just use “FreeDOS”. You can use the label to specify what version of FreeDOS you are installing. You also need to select the type and version of the operating system you will be installing. Select “Other” and “DOS”.
#### Step 2 – Select Memory Size

![][6]

The next dialog box will ask you how much of the host computer’s memory you want to make available to FreeDOS. The default is 32 MB. Don’t change it. Back in the day, this would be a huge amount of RAM for a DOS machine. If you need to, you can increase it later by right-clicking on the virtual machine you created for FreeDOS and selecting Settings -> System.

![][7]

#### Step 3 – Create Virtual Hard Disk

![][8]

Next, you will be asked to create a virtual hard drive where FreeDOS and its files will be stored. Since you haven’t created one yet, just click “Create”.

The next dialog box will ask you what hard disk file type you want to use. The default (VirtualBox Disk Image) works just fine. Click “Next”.

The next question you will encounter is how you want the virtual disk to act. Do you want it to start small and gradually grow to its full size as you create files and install programs? Then choose dynamically allocated. If you prefer that the virtual hard drive (VHD) is created at full size, then choose fixed size. Dynamically allocated is nice if you don’t plan to use the whole VHD or if you don’t have very much free space on your hard drive. (Keep in mind that while the size of a dynamically allocated VHD increases as you add files, it will not drop when you remove files.) I prefer dynamically allocated, but you can choose the option that serves your needs best and click “Next”.

![][9]

Now, you can choose the size and location of the VHD. 500 MB should be plenty of space. Remember, most of the programs you will be using will be text-based, and thus fairly small. Once you make your adjustments, click “Create”.
#### Step 4 – Attach the .iso file

Before we continue, you will need to [download][10] the FreeDOS .iso file. You will need to choose the CDROM “standard” installer.

![][11]

Once the file has been downloaded, return to VirtualBox. Select your virtual machine and open its settings. You can do this either by right-clicking on the virtual machine and selecting “Settings” or by highlighting the virtual machine and clicking the “Settings” button.

Now, click the “Storage” tab. Under “Storage Devices”, select the CD icon. (It should say “Empty” next to it.) In the “Attributes” panel on the right, click on the CD icon and select the location of the .iso file you just downloaded.

Note: Typically, after you install an operating system on VirtualBox, you can delete the original .iso file. Not with FreeDOS. You need the .iso file if you want to install applications via the FreeDOS package manager. I generally keep the .iso file attached to the virtual machine in case I want to install something. If you do that, you have to make sure that you tell FreeDOS you want to boot from the hard drive each time you boot it up, because it defaults to the attached CD/iso. If you forget to attach the .iso, don’t worry. You can do so by selecting “Devices” at the top of your FreeDOS virtual machine window. The .iso files are listed under “Optical Drives”.
#### Step 5 – Install FreeDOS

![][12]

Now that we’ve completed all of the preparations, let’s install FreeDOS.

First, you need to be aware of a bug in the most recent version of VirtualBox. If you start the virtual machine that we just created and select “Install to harddisk” when the FreeDOS welcome screen appears, you will see an unending, scrolling mass of machine code. I’ve only run into this issue recently, and it affects both the Linux and Windows versions of VirtualBox. (I know firsthand.)

To get around this, you need to make a simple edit. When you see the FreeDOS welcome screen, press Tab. (Make sure that the “Install to harddisk” option is selected.) Type the word `raw` after “fdboot.img” and hit Enter. The FreeDOS installer will then start.
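The edited boot line should look roughly like this (a minimal sketch; the exact image name on your screen may differ):

```
fdboot.img raw
```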
![][13]

The first part of the installer will handle formatting your virtual drive. Once formatting is completed, the installer will reboot. When the FreeDOS welcome screen appears again, you will have to re-enter the `raw` command you used earlier.

Make sure that you select “Yes” on all of the questions in the installer. One important question that doesn’t have a “Yes” or “No” answer is: “What FreeDOS packages do you want to install?” The two options are “Base packages” or “Full installation”. Base packages are for those who want a DOS experience most like the original MS-DOS. The Full installation includes a bunch of tools and utilities to improve DOS.

At the end of the installation, you will be given the option to reboot or stay on DOS. Select “reboot”.
#### Step 6 – Set Up Networking

Unlike the original DOS, FreeDOS can access the internet. You can install new packages and update the ones you have already installed. In order to use networking, you need to install several applications in FreeDOS.

![][14]

First, boot into your newly created FreeDOS virtual machine. At the FreeDOS selection screen, select “Boot from System harddrive”.

![][15]

Now, to access the FreeDOS package manager, type `fdimples`. You can navigate around the package manager with the arrow keys and select categories or packages with the space bar. From the “Networking” category, you need to select `fdnet`. The FreeDOS Project also recommends installing `mtcp` and `wget`. Hit “Tab” several times until “OK” is selected and press “Enter”. Once the installation is complete, type `reboot` and hit Enter. After the system reboots, boot to your system drive. If the network installation was successful, you will see several new messages at the terminal listing your network information.
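In outline, the session looks something like this (the package selection itself happens inside the `fdimples` text UI):

```
C:\> fdimples
  (select the "Networking" category, mark fdnet, mtcp and wget, then Tab to OK)
C:\> reboot
```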
![][16]

##### Note

Sometimes the default VirtualBox setup doesn’t work. If that happens, close your FreeDOS VirtualBox window. Right-click your virtual machine on the main VirtualBox screen and select “Settings”. The default VirtualBox network setting is “NAT”. Change it to “Bridged Adapter” and retry installing the FreeDOS packages. It should work now.

#### Step 7 – Basic Usage of FreeDOS

##### Common Commands

Now that you have installed FreeDOS, let’s look at a few basic commands. If you have ever used the Command Prompt on Windows, you will be familiar with some of these commands. A short sample session follows the list.
  * `DIR` – display the contents of the current directory
  * `CD` – change the directory you are currently in
  * `COPY OLD.TXT NEW.TXT` – copy files
  * `TYPE TEST.TXT` – display the content of a file
  * `DEL TEST.TXT` – delete a file
  * `XCOPY DIR NEWDIR` – copy a directory and all of its contents
  * `EDIT TEST.TXT` – edit a file
  * `MKDIR NEWDIR` – create a new directory
  * `CLS` – clear the screen
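For illustration, here is a hypothetical session using several of these commands (the file and directory names are made up):

```
C:\> MKDIR NEWDIR
C:\> CD NEWDIR
C:\NEWDIR> EDIT TEST.TXT
C:\NEWDIR> TYPE TEST.TXT
Hello from FreeDOS!
C:\NEWDIR> COPY TEST.TXT BACKUP.TXT
C:\NEWDIR> DEL TEST.TXT
C:\NEWDIR> CLS
```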
You can find more basic DOS commands on the web or in the [handy cheat sheet][17] created by Jim Hall.

##### Running a Program

Running a program on FreeDOS is fairly easy. When you install an application with the `fdimples` package manager, be sure to note where the application’s .EXE file is located. This is shown in the application’s details. To run the application, you generally need to navigate to the application’s folder and type the application’s name.

For example, FreeDOS has an editor named `FED` that you can install. After installing it, all you need to do is navigate to `C:\FED` and type `FED`.
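That is, assuming the default install location shown in the package details:

```
C:\> CD \FED
C:\FED> FED
```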
Sometimes a program, such as Pico, is stored in the `\bin` folder. These programs can be called up from any folder.

Games usually have an .EXE program or two that you have to run before you can play the game. These setup files usually fix sound, video, or control issues.

If you run into problems that this tutorial didn’t cover, don’t forget to visit the [home of FreeDOS][2]. They have a wiki and several other support options.

Have you ever used FreeDOS? What tutorials would you like to see in the future? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][18].

--------------------------------------------------------------------------------
via: https://itsfoss.com/install-freedos/

Author: [John Paul][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://itsfoss.com/author/john/
[1]: https://itsfoss.com/interview-freedos-jim-hall/
[2]: http://www.freedos.org/
[3]: https://www.virtualbox.org/
[4]: https://solus-project.com/home/
[5]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-1.jpg
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-2.jpg
[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-3.jpg
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-4.jpg
[9]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-6.jpg
[10]: http://www.freedos.org/download/
[11]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-7.jpg
[12]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-8.png
[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-9.png
[14]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-10.png
[15]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-11.png
[16]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/07/freedos-tutorial-12.png
[17]: https://opensource.com/article/18/6/freedos-commands-cheat-sheet
[18]: http://reddit.com/r/linuxusersgroup
idea2act translating
How to use VS Code for your Python projects
======

![](https://fedoramagazine.org/wp-content/uploads/2018/07/pythonvscode-816x345.jpg)

Visual Studio Code, or VS Code, is an open source code editor that also includes tools for building and debugging an application. With the Python extension enabled, VS Code becomes a great working environment for any Python developer. This article shows you which extensions are useful, and how to configure VS Code to get the most out of it.

If you don’t have it installed, check out our previous article, [Using Visual Studio Code on Fedora][1]:

[Using Visual Studio Code on Fedora](https://fedoramagazine.org/using-visual-studio-code-fedora/)
### Install the VS Code Python extension

First, to make VS Code Python friendly, install the Python extension from the marketplace.

![][2]

Once the Python extension is installed, you can configure it.
VS Code manages its configuration inside JSON files. Two files are used:

  * One for the global settings that apply to all projects
  * One for project-specific settings

Press **Ctrl+,** (comma) to open the global settings.
#### Set up the Python Path

You can configure VS Code to automatically select the best Python interpreter for each of your projects. To do this, configure the `python.pythonPath` key in the global settings.

```
// Place your settings in this file to overwrite default and user settings.
{
    "python.pythonPath": "${workspaceRoot}/.venv/bin/python",
}
```

This sets VS Code to use the Python interpreter located in the project root directory under the `.venv` virtual environment directory.
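For that path to resolve to a real interpreter, the project needs a virtual environment in `.venv`. A minimal sketch, assuming `python3` is available on the host:

```
# create the virtual environment VS Code will pick up
python3 -m venv .venv
# activate it for the current shell session
source .venv/bin/activate
```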
#### Use environment variables

By default, VS Code uses environment variables defined in a `.env` file in the project root directory. This is useful for setting environment variables like:

```
PYTHONWARNINGS="once"
```

That setting ensures that warnings are displayed when your program is running.

To change this default location, set the `python.envFile` configuration key as follows:

```
"python.envFile": "${workspaceFolder}/.env",
```
### Code Linting

The Python extension also supports different code linters (pep8, flake8, pylint). To enable your favorite linter, or the one used by the project you’re working on, you need to set a few configuration items.

By default, pylint is enabled. But for this example, configure flake8:

```
"python.linting.pylintEnabled": false,
"python.linting.flake8Path": "${workspaceRoot}/.venv/bin/flake8",
"python.linting.flake8Enabled": true,
"python.linting.flake8Args": ["--max-line-length=90"],
```

After enabling the linter, your code is underlined to show where it doesn’t meet criteria enforced by the linter. Note that for this example to work, you need to install flake8 in the virtual environment of the project.
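A quick sketch of that installation step, assuming the `.venv` layout used above (the same pattern applies to black and pytest later in this article):

```
# install flake8 into the project's virtual environment
source .venv/bin/activate
pip install flake8
```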
![][3]

### Code Formatting

VS Code also lets you configure automatic code formatting. The extension currently supports autopep8, black and yapf. Here’s how to configure black:

```
"python.formatting.provider": "black",
"python.formatting.blackPath": "${workspaceRoot}/.venv/bin/black",
"python.formatting.blackArgs": ["--line-length=90"],
"editor.formatOnSave": true,
```

If you don’t want the editor to format your file on save, set the option to false and use **Ctrl+Shift+I** to format the current document. Note that for this example to work, you need to install black in the virtual environment of the project.
### Running Tasks

Another great feature of VS Code is that it can run tasks. These tasks are also defined in a JSON file saved in the project root directory.

#### Run a development Flask server

In this example, you’ll create a task to run a Flask development server. Create a new build task using the basic template that can run an external command:

![][4]
Edit the tasks.json file as follows to create a new task that runs the Flask development server:

```
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run Debug Server",
            "type": "shell",
            "command": "${workspaceRoot}/.venv/bin/flask run -h 0.0.0.0 -p 5000",
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}
```

The Flask development server uses an environment variable to get the entrypoint of the application. Use the .env file to declare these variables. For example:

```
FLASK_APP=wsgi.py
FLASK_DEBUG=True
```

Now you can execute the task using **Ctrl+Shift+B**.
### Unit tests

VS Code also has the unit test runners pytest, unittest, and nosetest integrated out of the box. After you enable a test runner, VS Code discovers the unit tests and lets you run them individually, by test suite, or simply run all the tests.

For example, to enable pytest:

```
"python.unitTest.pyTestEnabled": true,
"python.unitTest.pyTestPath": "${workspaceRoot}/.venv/bin/pytest",
```

Note that for this example to work, you need to install pytest in the virtual environment of the project.

![][5]
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/vscode-python-howto/

Author: [Clément Verna][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://fedoramagazine.org
[1]: https://fedoramagazine.org/using-visual-studio-code-fedora/
[2]: https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-09-44.gif
[3]: https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-12-05.gif
[4]: https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-13-26.gif
[5]: https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-15-33.gif
translating---geekpi

5 applications to manage your to-do list on Fedora
======
MPV Player: A Minimalist Video Player for Linux
======

MPV is an open source, cross-platform video player that comes with a minimalist GUI and a feature-rich command line version.

VLC is probably the best video player for Linux or any other operating system. I have been using VLC for years and it is still my favorite.

However, lately, I am more inclined towards minimalist applications with a clean UI. This is how I came across MPV. I loved it so much that I added it to the list of [best Ubuntu applications][1].

[MPV][2] is an open source video player available for Linux, Windows, macOS, BSD and Android. It is actually a fork of [MPlayer][3].

The graphical user interface is sleek and minimalist.

![MPV Player Interface in Linux][4]
MPV Player
### MPV Features

MPV has all the features required of a standard video player. You can play a variety of videos and control the playback with the usual shortcuts.

  * Minimalist GUI with only the necessary controls.
  * Video codecs support.
  * High quality video output and GPU video decoding.
  * Supports subtitles.
  * Can play YouTube and other streaming videos through the command line.
  * CLI version of MPV can be embedded in web and other applications.

Though MPV player has a minimal UI with limited options, don’t underestimate its capabilities. Its main power lies in the command line version.

Just type the command `mpv --list-options` and you’ll see that it provides 447 different kinds of options. But this article is not about utilizing the advanced settings of MPV. Let’s see how good it is as a regular desktop video player.
### Installing MPV in Linux

MPV is a popular application and it should be found in the default repositories of most Linux distributions. Just look for it in the Software Center application.

I can confirm that it is available in Ubuntu’s Software Center. You can install it from there or simply use the following command:

```
sudo apt install mpv
```

You can find installation instructions for other platforms on the [MPV website][5].
### Using MPV Video Player

Once installed, you can open a video file with MPV by right-clicking and choosing MPV.

![MPV Player Interface][6]
MPV Player Interface

The interface has only a control panel, which is visible only when you hover your mouse over the player. As you can see, the control panel provides options to pause/play, change track, change audio track, toggle subtitles, and switch to full screen.

MPV’s default window size depends on the quality of the video you are playing. For a 240p video, the application window will be small, while a 1080p video will result in an almost full-screen window on a Full-HD screen. You can always double-click on the player to make it full screen, irrespective of the video size.
#### The subtitle struggle

If your video has a subtitle file, MPV will [automatically play subtitles][7], and you can choose to disable them. However, if you want to use an external subtitle file, that option is not directly available from the player.

You can rename the external subtitle file to exactly match the video file’s name and keep it in the same folder as the video file. MPV should then play your subtitles.

An easier option to play external subtitles is to simply drag and drop the file into the player.
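If you prefer the command line, mpv also accepts an external subtitle file via its `--sub-file` option; a quick sketch (the file names are illustrative):

```
mpv video.mp4 --sub-file=subtitles.srt
```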
#### Playing YouTube and other online video content

To play online videos, you’ll have to use the command line version of MPV.

Open a terminal and use it in the following fashion:

```
mpv <URL_of_Video>
```

![Playing YouTube videos on Linux desktop using MPV][8]
Playing YouTube videos with MPV

I didn’t find playing YouTube videos in MPV player a pleasant experience. It kept on buffering, and that was utterly frustrating.
#### Should you use MPV player?

That depends on you. If you like to experiment with applications, you should give MPV a go. Otherwise, the default video player and VLC are always good enough.

Earlier, when I wrote about [Sayonara][9], I wasn’t sure if people would like an obscure music player over the popular ones, but it was loved by It’s FOSS readers.

Try MPV and see if it is something you would like to use as your default video player.

If you like MPV but want slightly more features in the graphical interface, I suggest using [GNOME MPV Player][10].

Have you used MPV video player? How was your experience with it? What did you like or dislike about it? Do share your views in the comments below.

--------------------------------------------------------------------------------
via: https://itsfoss.com/mpv-video-player/

Author: [Abhishek Prakash][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://itsfoss.com/author/abhishek/
[1]: https://itsfoss.com/best-ubuntu-apps/
[2]: https://mpv.io/
[3]: http://www.mplayerhq.hu/design7/news.html
[4]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player.jpg
[5]: https://mpv.io/installation/
[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player-interface.png
[7]: https://itsfoss.com/how-to-play-movie-with-subtitles-on-samsung-tv-via-usb/
[8]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/play-youtube-videos-on-mpv-player.jpeg
[9]: https://itsfoss.com/sayonara-music-player/
[10]: https://gnome-mpv.github.io/
How I recorded user behaviour on my competitor’s websites
======

### Update

Google’s team has tracked down my test site, most likely using the source code I shared, and de-indexed the whole domain.

Last time [I publicly exposed a flaw][1], Google issued a [manual penalty][2] and devalued a single offending page. This time, there is no notice in Search Console. The site is completely removed from their index without any notification.

I’ve received a lot of criticism of the way I’ve handled this. Many are suggesting the right way is to approach Google directly with security flaws like this, instead of writing about them publicly. Others are suggesting I acted unethically, or even illegally, by running this test. I think it should be obvious that if I intended to exploit this method I wouldn’t write about it. With so much risk and so little gain, is this even worth doing in practice? Of course not. I’d be more concerned about those who do unethical things and don’t write about it.
### My wish list:

a) Manipulating the back button in Chrome shouldn’t be possible in 2018.
b) Websites that employ this tactic should be detected and penalised by Google’s algorithms.
c) If still found in Google’s results, such pages should be labelled with a “this page may be harmful” notice.
### Here’s what I did:

  1. A user lands on my page (referrer: google)
  2. When they hit the “back” button in Chrome, JS sends them to my copy of the SERP
  3. A click on any competitor takes them to my mirror of the competitor’s site (noindex)
  4. I then generate heatmaps and scrollmaps, and record screen interactions and typing

![][3]

![script][4]
![][5]
![][6]

Interestingly, only about 50% of users found anything suspicious, partly due to the fact that I used https on all my pages, which is one of the main [trust factors on the web][7].
Many users are just happy to see the “padlock” in their browser.

At this point I was able to:

  * Generate heatmaps (clicks, moves, scroll depth)
  * Record actual sessions (mouse movement, clicks, typing)

I gasped when I realised I could actually **capture all form submissions and send them to my own email**.

Note: I never actually tried that.

Yikes!
### Wouldn’t a website doing this be penalised?

You would think so.

I had this implemented for a **very brief period of time** (and for ethical reasons took it down almost immediately, realising that this could cause trouble). After that, I changed the topic of the page completely and moved the test to one of my disposable domains, where it **remained** for five years and ranked really well, though for completely different search terms with rather low search volumes. Its new purpose was to mess with conspiracy theory people.

### Alternative Technique

You don’t have to spoof Google SERPs to generate a competitor’s heatmaps; you can simply A/B test your landing page vs. your clone of theirs through paid traffic (e.g. social media). Is the A/B testing version of this ethically OK? I don’t know, but it may get you in legal trouble depending on where you live.
### What did I learn?

Users seldom read home page “fluff” and often look for things like testimonials, case studies, pricing levels and staff profiles / company information in search of credibility and trust. One of my upcoming tests will be to combine the home page with “about us”, “testimonials”, “case studies” and “packages”. This would give users all they really want on a single page.

### Reader Suggestions

“I would’ve thrown in an exit pop-up to let users know what they’d just been subjected to.”
<https://twitter.com/marcnashaat/status/1031915003224309760>
### From Hacker News

> Howdy, former Matasano pentester here.
> FWIW, I would probably have done something similar to them before I’d worked in the security industry. It’s an easy mistake to make, because it’s one you make by default: intellectual curiosity doesn’t absolve you from legal judgement, and people on the internet tend to flip out if you do something illegal and say anything but “You’re right, I was mistaken. I’ve learned my lesson.”
>
> To the author: The reason you pattern-matched into the blackhat category instead of whitehat/grayhat (grayhat?) category is that in the security industry, whenever we discover a vuln, we PoC it and then write it up in the report and tell them immediately. The report typically includes background info, reproduction steps, and recommended actions. The whole thing is typically clinical and detached.
>
> Most notably, the PoC is usually as simple as possible. alert(1) suffices to demonstrate XSS, for example, rather than implementing a fully-working cookie swipe. The latter is more fun, but the former is more impactful.
>
> One interesting idea would’ve been to create a fake competitor — e.g. “VirtualBagel: Just download your bagels and enjoy.” Once it’s ranking on Google, run this same experiment and see if you could rank higher.
>
> That experiment would demonstrate two things: (1) the history vulnerability exists, and (2) it’s possible for someone to clone a competitor and outrank them with this vulnerability, thereby raising it from sev:low to sev:hi.
>
> So to be clear, the crux of the issue was running the exploit on a live site without their blessing.
>
> But again, don’t worry too much. I would have made similar errors without formal training. It’s easy for everyone to say “Oh well it’s obvious,” but when you feel like you have good intent, it’s not obvious at all.
>
> I remind everyone that RTM once ran afoul of the law due to similar intellectual curiosity. (In fairness, his experiment exploded half the internet, but still.)

Source: <https://news.ycombinator.com/item?id=17826106>
### About the author

[Dan Petrovic][9]

Dan Petrovic, the managing director of DEJAN, is Australia’s best-known name in the field of search engine optimisation. Dan is a web author, innovator and a highly regarded search industry event speaker.
--------------------------------------------------------------------------------

via: https://dejanseo.com.au/competitor-hack/

Author: [Dan Petrovic][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://dejanseo.com.au/dan-petrovic/
[1]: https://dejanseo.com.au/hijack/
[2]: https://dejanseo.com.au/google-against-content-scrapers/
[3]: https://dejanseo.com.au/wp-content/uploads/2018/08/step-1.png
[4]: https://dejanseo.com.au/wp-content/uploads/2018/08/script.gif
[5]: https://dejanseo.com.au/wp-content/uploads/2018/08/step-2.png
[6]: https://dejanseo.com.au/wp-content/uploads/2018/08/step-3.png
[7]: https://dejanseo.com.au/trust/
[8]: https://secure.gravatar.com/avatar/9068275e6d3863b7dc11f7dff0974ced?s=100&d=mm&r=g
[9]: https://dejanseo.com.au/dan-petrovic/ (Dan Petrovic)
[10]: https://dejanseo.com.au/author/admin/ (More posts by Dan Petrovic)
What is a Makefile and how does it work?
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_liberate%20docs_1109ay.png?itok=xQOLreya)

If you want to run or update a task when certain files are updated, the `make` utility can come in handy. The `make` utility requires a file, `Makefile` (or `makefile`), which defines a set of tasks to be executed. You may have used `make` to compile a program from source code. Most open source projects use `make` to compile a final executable binary, which can then be installed using `make install`.

In this article, we'll explore `make` and `Makefile` using basic and advanced examples. Before you start, ensure that `make` is installed on your system.

### Basic examples

Let's start by printing the classic "Hello World" on the terminal. Create a directory `myproject` containing a file `Makefile` with this content:
```
say_hello:
    echo "Hello World"
```

Now run the file by typing `make` inside the directory `myproject`. The output will be:

```
$ make
echo "Hello World"
Hello World
```

In the example above, `say_hello` behaves like a function name, as in any programming language. This is called the target. The prerequisites or dependencies follow the target. For the sake of simplicity, we have not defined any prerequisites in this example. The command `echo "Hello World"` is called the recipe. The recipe uses prerequisites to make a target. The target, prerequisites, and recipes together make a rule.

To summarize, below is the syntax of a typical rule:

```
target: prerequisites
<TAB> recipe
```

As an example, a target might be a binary file that depends on prerequisites (source files). On the other hand, a prerequisite can also be a target that depends on other dependencies:

```
final_target: sub_target final_target.c
    Recipe_to_create_final_target

sub_target: sub_target.c
    Recipe_to_create_sub_target
```

It is not necessary for the target to be a file; it could be just a name for the recipe, as in our example. We call these "phony targets."
Going back to the example above, when `make` was executed, the entire command `echo "Hello World"` was displayed, followed by the actual command output. We often don't want that. To suppress echoing the actual command, we need to start `echo` with `@`:

```
say_hello:
    @echo "Hello World"
```

Now try to run `make` again. The output should display only this:

```
$ make
Hello World
```

Let's add a few more phony targets, `generate` and `clean`, to the `Makefile`:

```
say_hello:
    @echo "Hello World"

generate:
    @echo "Creating empty text files..."
    touch file-{1..10}.txt

clean:
    @echo "Cleaning up..."
    rm *.txt
```

If we try to run `make` after the changes, only the target `say_hello` will be executed. That's because only the first target in the makefile is the default target. Often called the default goal, this is the reason you will see `all` as the first target in most projects. It is the responsibility of `all` to call other targets. We can override this behavior using a special variable called `.DEFAULT_GOAL`.

Let's include that at the beginning of our makefile:

```
.DEFAULT_GOAL := generate
```

This will run the target `generate` as the default:

```
$ make
Creating empty text files...
touch file-{1..10}.txt
```

As the name suggests, `.DEFAULT_GOAL` can specify only one target at a time. This is why most makefiles include `all` as a target that can call as many targets as needed.

Let's include the phony target `all` and remove `.DEFAULT_GOAL`:
```
all: say_hello generate

say_hello:
    @echo "Hello World"

generate:
    @echo "Creating empty text files..."
    touch file-{1..10}.txt

clean:
    @echo "Cleaning up..."
    rm *.txt
```

Before running `make`, let's include another special phony target, `.PHONY`, where we define all the targets that are not files. `make` will run its recipe regardless of whether a file with that name exists or what its last modification time is. Here is the complete makefile:

```
.PHONY: all say_hello generate clean

all: say_hello generate

say_hello:
    @echo "Hello World"

generate:
    @echo "Creating empty text files..."
    touch file-{1..10}.txt

clean:
    @echo "Cleaning up..."
    rm *.txt
```

Running `make` should call `say_hello` and `generate`:

```
$ make
Hello World
Creating empty text files...
touch file-{1..10}.txt
```

It is good practice not to call `clean` in `all`, nor to put it as the first target. `clean` should be called manually when cleaning is needed, as a first argument to `make`:

```
$ make clean
Cleaning up...
rm *.txt
```

Now that you have an idea of how a basic makefile works and how to write a simple makefile, let's look at some more advanced examples.
### Advanced examples

#### Variables

In the above example, most target and prerequisite values are hard-coded, but in real projects, these are replaced with variables and patterns.

The simplest way to define a variable in a makefile is to use the `=` operator. For example, to assign the command `gcc` to a variable `CC`:

```
CC = gcc
```

This is also called a recursively expanded variable, and it is used in a rule as shown below:

```
hello: hello.c
    ${CC} hello.c -o hello
```

As you may have guessed, the recipe expands as below when it is passed to the terminal:

```
gcc hello.c -o hello
```
Both `${CC}` and `$(CC)` are valid references to call `gcc`. But if one tries to reassign a variable to itself, it will cause an infinite loop. Let's verify this:

```
CC = gcc
CC = ${CC}

all:
    @echo ${CC}
```

Running `make` will result in:

```
$ make
Makefile:8: *** Recursive variable 'CC' references itself (eventually). Stop.
```

To avoid this scenario, we can use the `:=` operator (this is also called the simply expanded variable). We should have no problem running the makefile below:

```
CC := gcc
CC := ${CC}

all:
    @echo ${CC}
```
#### Patterns and functions

The following makefile can compile all C programs by using variables, patterns, and functions. Let's explore it line by line:

```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects

.PHONY = all clean

CC = gcc                        # compiler to use

LINKERFLAG = -lm

SRCS := $(wildcard *.c)
BINS := $(SRCS:%.c=%)

all: ${BINS}

%: %.o
    @echo "Checking.."
    ${CC} ${LINKERFLAG} $< -o $@

%.o: %.c
    @echo "Creating object.."
    ${CC} -c $<

clean:
    @echo "Cleaning up..."
    rm -rvf *.o ${BINS}
```
  * Lines starting with `#` are comments.

  * The line `.PHONY = all clean` defines phony targets `all` and `clean`.

  * The variable `LINKERFLAG` defines flags to be used with `gcc` in a recipe.

  * `SRCS := $(wildcard *.c)`: `$(wildcard pattern)` is one of the functions for filenames. In this case, all files with the `.c` extension will be stored in a variable `SRCS`.

  * `BINS := $(SRCS:%.c=%)`: This is called a substitution reference. In this case, if `SRCS` has the values `'foo.c bar.c'`, `BINS` will have `'foo bar'`.

  * The line `all: ${BINS}`: The phony target `all` calls the values in `${BINS}` as individual targets.

  * Rule:

```
%: %.o
    @echo "Checking.."
    ${CC} ${LINKERFLAG} $< -o $@
```

Let's look at an example to understand this rule. Suppose `foo` is one of the values in `${BINS}`. Then `%` will match `foo` (`%` can match any target name). Below is the rule in its expanded form:

```
foo: foo.o
    @echo "Checking.."
    gcc -lm foo.o -o foo
```

As shown, `%` is replaced by `foo`, and `$<` is replaced by `foo.o`. `$<` is patterned to match prerequisites, and `$@` matches the target. This rule will be called for every value in `${BINS}`.

  * Rule:

```
%.o: %.c
    @echo "Creating object.."
    ${CC} -c $<
```

Every prerequisite in the previous rule is considered a target for this rule. Below is the rule in its expanded form:

```
foo.o: foo.c
    @echo "Creating object.."
    gcc -c foo.c
```

  * Finally, we remove all binaries and object files in target `clean`.

Below is a rewrite of the above makefile, assuming it is placed in a directory containing a single file `foo.c`:
```
# Usage:
# make        # compile all binary
# make clean  # remove ALL binaries and objects

.PHONY = all clean

CC = gcc                        # compiler to use

LINKERFLAG = -lm

SRCS := foo.c
BINS := foo

all: foo

foo: foo.o
    @echo "Checking.."
    gcc -lm foo.o -o foo

foo.o: foo.c
    @echo "Creating object.."
    gcc -c foo.c

clean:
    @echo "Cleaning up..."
    rm -rvf foo.o foo
```
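Assuming the directory contains just `foo.c` and this makefile, a run should look roughly like this (the default goal `all` builds `foo`, which first requires `foo.o`):

```
$ make
Creating object..
gcc -c foo.c
Checking..
gcc -lm foo.o -o foo
```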
For more on makefiles, refer to the [GNU Make manual][1], which offers a complete reference and examples.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/what-how-makefile

Author: [Sachin Patil][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/psachin
[1]: https://www.gnu.org/software/make/manual/make.pdf
An introduction to pipes and named pipes in Linux
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe)

In Linux, a pipe lets you send the output of one command to another. Piping, as the term suggests, can redirect the standard output, input, or error of one process to another for further processing.

The syntax for a pipe (or unnamed pipe) is the `|` character between any two commands:

`Command-1 | Command-2 | … | Command-N`

Here, the pipe cannot be accessed via another session; it is created temporarily to accommodate the execution of `Command-1` and redirect the standard output. It is deleted after successful execution.
![](https://opensource.com/sites/default/files/uploads/pipe.png)

In the example above, `contents.txt` contains a list of all files in a particular directory—specifically, the output of the `ls -al` command. We first grep the filenames containing the "file" keyword from `contents.txt` by piping (as shown), so the output of the `cat` command is provided as the input for the `grep` command. Next, we add piping to execute the `awk` command, which displays the 9th column from the filtered output of the `grep` command. We can also count the number of rows in `contents.txt` using the `wc -l` command.
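Reproduced at the shell, under the assumption that `contents.txt` holds the output of `ls -al`, the sequence looks like this:

```
# save a directory listing, then filter it and extract the file names (9th column)
ls -al > contents.txt
cat contents.txt | grep "file" | awk '{print $9}'
# count the number of lines in contents.txt
wc -l contents.txt
```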
A named pipe, on the other hand, can last as long as the system is up and running, or until it is deleted. It is a special file that follows the [FIFO][1] (first in, first out) mechanism. It can be used just like a normal file; i.e., you can write to it, read from it, and open or close it. To create a named pipe, the command is:

```
mkfifo <pipe-name>
```

This creates a named pipe file that can be used even over multiple shell sessions.

Another way to create a FIFO named pipe is to use this command:

```
mknod p <pipe-name>
```
To redirect the standard output of any command to another process, use the `>` symbol. To redirect the standard input of any command, use the `<` symbol.

![](https://opensource.com/sites/default/files/uploads/redirection.png)

As shown above, the output of the `ls -al` command is redirected to `contents.txt` and inserted in the file. Similarly, the input for the `tail` command is provided as `contents.txt` via the `<` symbol.

![](https://opensource.com/sites/default/files/uploads/create-named-pipe.png)

![](https://opensource.com/sites/default/files/uploads/verify-output.png)

Here, we have created a named pipe, `my-named-pipe`, and redirected the output of the `ls -al` command into the named pipe. We can then open a new shell session and `cat` the contents of the named pipe, which shows the output of the `ls -al` command, as previously supplied. Notice that the size of the named pipe is zero and it has a designation of "p".
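As a sketch of that two-session workflow:

```
# session 1: create the pipe and write to it (the writer blocks until a reader appears)
mkfifo my-named-pipe
ls -al > my-named-pipe

# session 2: read the listing out of the pipe
cat my-named-pipe
```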
So, next time you're working with commands at the Linux terminal and find yourself moving data between commands, hopefully a pipe will make the process quick and easy.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/introduction-pipes-linux

Author: [Archit Modi][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/architmodi
[1]: https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)
Getting started with Sensu monitoring
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)

Sensu is an open source infrastructure and application monitoring solution that monitors servers, services, and application health, and sends alerts and notifications with third-party integration. Written in Ruby, Sensu can use either [RabbitMQ][1] or [Redis][2] to handle messages. It uses Redis to store data.

If you want to monitor your cloud infrastructure in a simple and efficient manner, Sensu is a good option. It can be integrated with many of the modern DevOps stacks your organization may already be using, such as [Slack][3], [HipChat][4], or [IRC][5], and it can even send mobile/pager alerts with [PagerDuty][6].

Sensu's [modular architecture][7] means every component can be installed on the same server or on completely separate machines.

### Architecture

Sensu's main communication mechanism is the Transport. Every Sensu component must connect to the Transport in order to send messages to each other. The Transport can use either RabbitMQ (recommended in production) or Redis.

Sensu Server processes event data and takes action. It registers clients and processes check results and monitoring events using filters, mutators, and handlers. The server publishes check definitions to the clients, and the Sensu API provides a RESTful API, providing access to monitoring data and core functionality.

[Sensu Client][8] executes checks either scheduled by Sensu Server or defined by local check definitions. Sensu uses a data store (Redis) to keep all the persistent data. Finally, [Uchiwa][9] is the web interface to communicate with the Sensu API.

![sensu_system.png][11]
### Installing Sensu

#### Prerequisites

  * One Linux installation to act as the server node (I used CentOS 7 for this article)
  * One or more Linux machines to monitor (clients)

#### Server side

Sensu requires Redis to be installed. To install Redis, enable the EPEL repository:

```
$ sudo yum install epel-release -y
```

Then install Redis:

```
$ sudo yum install redis -y
```

Modify `/etc/redis.conf` to disable protected mode, listen on every interface, and set a password:

```
$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf
$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf
$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf
```

Enable and start the Redis service:

```
$ sudo systemctl enable redis
$ sudo systemctl start redis
```

Redis is now installed and ready to be used by Sensu.

Now let’s install Sensu.
First, configure the Sensu repository and install the packages:

```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF

$ sudo yum install sensu uchiwa -y
```

Let’s create the bare minimum configuration files for Sensu:

```
$ sudo tee /etc/sensu/conf.d/api.json << EOF
{
  "api": {
    "host": "127.0.0.1",
    "port": 4567
  }
}
EOF
```

This configures `sensu-api` to listen on localhost, on port 4567. Next, configure the Redis connection and the transport:
```
$ sudo tee /etc/sensu/conf.d/redis.json << EOF
{
  "redis": {
    "host": "<IP of server>",
    "port": 6379,
    "password": "password123"
  }
}
EOF

$ sudo tee /etc/sensu/conf.d/transport.json << EOF
{
  "transport": {
    "name": "redis"
  }
}
EOF
```

In these two files, we configure Sensu to use Redis as the transport mechanism and the address where Redis will listen. Clients need to connect directly to the transport mechanism. These two files will be required on each client machine.
```
$ sudo tee /etc/sensu/uchiwa.json << EOF
{
  "sensu": [
    {
      "name": "sensu",
      "host": "127.0.0.1",
      "port": 4567
    }
  ],
  "uchiwa": {
    "host": "0.0.0.0",
    "port": 3000
  }
}
EOF
```

In this file, we configure Uchiwa to listen on every interface (0.0.0.0) on port 3000. We also configure Uchiwa to use `sensu-api` (already configured).

For security reasons, change the owner of the configuration files you just created:

```
$ sudo chown -R sensu:sensu /etc/sensu
```

Enable and start the Sensu services:

```
$ sudo systemctl enable sensu-server sensu-api sensu-client
$ sudo systemctl start sensu-server sensu-api sensu-client
$ sudo systemctl enable uchiwa
$ sudo systemctl start uchiwa
```

Try accessing the Uchiwa website: `http://<IP of server>:3000`

For production environments, it’s recommended to run a cluster of RabbitMQ as the Transport instead of Redis (a Redis cluster can be used in production too), and to run more than one instance of Sensu Server and API for load balancing and high availability.

Sensu is now installed. Now let’s configure the clients.
#### Client side

To add a new client, you will need to enable the Sensu repository on the client machines by creating the file `/etc/yum.repos.d/sensu.repo`:

```
$ sudo tee /etc/yum.repos.d/sensu.repo << EOF
[sensu]
name=sensu
baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/
gpgcheck=0
enabled=1
EOF
```

With the repository enabled, install the Sensu package:

```
$ sudo yum install sensu -y
```

To configure `sensu-client`, create the same `redis.json` and `transport.json` you created on the server machine, as well as the `client.json` configuration file:

```
$ sudo tee /etc/sensu/conf.d/client.json << EOF
{
  "client": {
    "name": "rhel-client",
    "environment": "development",
    "subscriptions": [
      "frontend"
    ]
  }
}
EOF
```

In the name field, specify a name to identify this client (typically the hostname). The environment field can help you filter, and subscriptions defines which monitoring checks will be executed by the client.

Finally, enable and start the services and check in Uchiwa, as the new client will register automatically:

```
$ sudo systemctl enable sensu-client
$ sudo systemctl start sensu-client
```
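If you prefer the command line to Uchiwa, you can also list the registered clients through the Sensu API on the server; a quick check, assuming the API is on localhost port 4567 as configured above:

```
# returns a JSON array of registered clients
$ curl -s http://127.0.0.1:4567/clients
```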
### Sensu checks

Sensu checks have two components: a plugin and a definition.

Sensu is compatible with the [Nagios check plugin specification][12], so any check for Nagios can be used without modification. Checks are executable files and are run by the Sensu client.

Check definitions let Sensu know how, where, and when to run the plugin.

#### Client side

Let’s install one check plugin on the client machine. Remember, this plugin will be executed on the clients.

Enable EPEL and install `nagios-plugins-http`:

```
$ sudo yum install -y epel-release
$ sudo yum install -y nagios-plugins-http
```

Now let’s explore the plugin by executing it manually. Try checking the status of a web server running on the client machine. It should fail, as we don’t have a web server running:

```
$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1
connect to address 127.0.0.1 and port 80: Connection refused
HTTP CRITICAL - Unable to open TCP socket
```

It failed, as expected. Check the return code of the execution:

```
$ echo $?
2
```

The Nagios check plugin specification defines four return codes for the plugin execution:

| **Plugin return code** | **State** |
|------------------------|-----------|
| 0                      | OK        |
| 1                      | WARNING   |
| 2                      | CRITICAL  |
| 3                      | UNKNOWN   |
||||
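To see this mapping in action, here is a small illustrative wrapper (not part of Sensu, just a sketch) that translates the plugin’s exit code into its state name:
```
#!/bin/bash
# Run the plugin, then map its exit code to the state
# names defined by the Nagios plugin specification.
/usr/lib64/nagios/plugins/check_http -I 127.0.0.1
case $? in
    0) echo "state: OK" ;;
    1) echo "state: WARNING" ;;
    2) echo "state: CRITICAL" ;;
    *) echo "state: UNKNOWN" ;;
esac
```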
With this information, we can now create the check definition on the server.

#### Server side

On the server machine, create the file `/etc/sensu/conf.d/check_http.json`:
```
{
  "checks": {
    "check_http": {
      "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1",
      "interval": 10,
      "subscribers": [
        "frontend"
      ]
    }
  }
}
```

In the command field, use the command we tested before. `interval` tells Sensu how frequently, in seconds, this check should be executed. Finally, `subscribers` defines which clients will execute the check.

Restart both sensu-api and sensu-server and confirm that the new check is available in Uchiwa.
```
$ sudo systemctl restart sensu-api sensu-server
```
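You can also watch results arrive from the command line, again assuming the API’s default port 4567:
```
# List the latest check results; check_http for rhel-client
# should appear once the first 10-second interval has elapsed.
$ curl -s http://<IP of server>:4567/results
```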
### What’s next?

Sensu is a powerful tool, and this article offers just a glimpse of what it can do. See the [documentation][13] to learn more, and visit the Sensu site to learn more about the [Sensu community][14].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution

作者:[Michael Zamot][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/mzamot
[1]:https://www.rabbitmq.com/
[2]:https://redis.io/topics/config
[3]:https://slack.com/
[4]:https://en.wikipedia.org/wiki/HipChat
[5]:http://www.irc.org/
[6]:https://www.pagerduty.com/
[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/
[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/
[9]:https://uchiwa.io/#/
[10]:/file/406576
[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png)
[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html
[13]:https://docs.sensu.io/
[14]:https://sensu.io/community
@ -0,0 +1,131 @@
How To Easily And Safely Manage Cron Jobs In Linux
======

![](https://www.ostechnix.com/wp-content/uploads/2018/08/Crontab-UI-720x340.jpg)

When it comes to scheduling tasks in Linux, which utility comes to mind first? Yeah, you guessed it right. **Cron!** The cron utility helps you schedule commands/tasks to run at specific times in Unix-like operating systems. We already published a [**beginner’s guide to Cron jobs**][1]. I have a few years of experience with Linux, so setting up cron jobs is no big deal for me. But it is not a piece of cake for newbies. Newcomers may unknowingly make small mistakes while editing the plain-text crontab and bring down all their cron jobs. If you think you might mess up your cron jobs, there is a good alternative. Say hello to **Crontab UI**, a web-based tool to easily and safely manage cron jobs in Unix-like operating systems.

You don’t need to manually edit the crontab file to create, delete, and manage cron jobs. Everything can be done via a web browser with a couple of mouse clicks. Crontab UI allows you to easily create, edit, pause, delete, and back up cron jobs, and even import, export, and deploy jobs to other machines without much hassle. It also supports error logs, mailing, and hooks. It is free, open source, and written in NodeJS.

### Installing Crontab UI

Installing Crontab UI is just a one-liner. Make sure you have npm installed. If you haven’t installed npm yet, refer to the following link.

Next, run the following command to install Crontab UI.
```
$ npm install -g crontab-ui
```

It’s that simple. Let us go ahead and see how to manage cron jobs using Crontab UI.
### Easily And Safely Manage Cron Jobs In Linux

To launch Crontab UI, simply run:
```
$ crontab-ui
```

You will see the following output:
```
Node version: 10.8.0
Crontab UI is running at http://127.0.0.1:8000
```

Now, open your web browser and navigate to **<http://127.0.0.1:8000>**. Make sure port 8000 is allowed in your firewall/router.

Please note that you can only access the Crontab UI web dashboard from the local system itself.

If you want to run Crontab UI with your system’s IP and a custom port (so you can access it from any remote system on the network), use the following command instead:
```
$ HOST=0.0.0.0 PORT=9000 crontab-ui
Node version: 10.8.0
Crontab UI is running at http://0.0.0.0:9000
```

Now, Crontab UI can be accessed from any system on the network using the URL **http://<IP-Address>:9000**.
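If the dashboard doesn’t load from a remote machine, the port is likely still blocked. On a firewalld-based system, for example, you could open it like this (this assumes firewalld on your side; adjust the port to whatever you chose):
```
# Open the Crontab UI port in the firewall and apply the change
$ sudo firewall-cmd --add-port=9000/tcp --permanent
$ sudo firewall-cmd --reload
```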
This is what the Crontab UI dashboard looks like.

![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard.png)

As you can see in the above screenshot, the Crontab UI dashboard is very simple. All options are self-explanatory.

To exit Crontab UI, press **CTRL+C**.

**Create, edit, run, stop, delete a cron job**

To create a new cron job, click the “New” button. Enter your cron job details and click Save.

  1. Name the cron job. This is optional.
  2. Enter the full command you want to run.
  3. Choose the schedule. You can either pick a quick schedule (such as Startup, Hourly, Daily, Weekly, Monthly, or Yearly) or set the exact time to run the command. After you choose the schedule, the cron syntax for the job is shown in the **Jobs** field.
  4. Choose whether you want to enable error logging for the particular job.

Here is my sample cron job.

![](https://www.ostechnix.com/wp-content/uploads/2018/08/create-new-cron-job.png)

As you can see, I have set up a cron job to clear the pacman cache every month.
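For reference, the same job expressed directly in crontab syntax would look like this (using the @monthly shortcut that corresponds to the Monthly quick schedule):
```
# Equivalent crontab entry for the sample job above
@monthly rm -rf /var/cache/pacman
```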
Similarly, you can create any number of jobs as you want. You will see all cron jobs in the dashboard.
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/08/crontab-ui-dashboard-1.png)
|
||||
|
||||
If you wanted to change any parameter in a cron job, just click on the **Edit** button below the job and modify the parameters as you wish. To run a job immediately, click on the button that says **Run**. To stop a job, click **Stop** button. You can view the log details of any job by clicking on the **Log** button. If the job is no longer required, simply press **Delete** button.
|
||||
|
||||
**Backup cron jobs**
|
||||
|
||||
To backup all cron jobs, press the Backup from main dashboard and choose OK to confirm the backup.
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/08/backup-cron-jobs.png)
|
||||
|
||||
You can use this backup in case you messed with the contents of the crontab file.
|
||||
|
||||
**Import/Export cron jobs to other systems**
|
||||
|
||||
Another notable feature of Crontab UI is you can import, export and deploy cron jobs to other systems. If you have multiple systems on your network that requires the same cron jobs, just press **Export** button and choose the location to save the file. All contents of crontab file will be saved in a file named **crontab.db**.
|
||||
|
||||
Here is the contents of the crontab.db file.
|
||||
```
|
||||
$ cat Downloads/crontab.db
|
||||
{"name":"Remove Pacman Cache","command":"rm -rf /var/cache/pacman","schedule":"@monthly","stopped":false,"timestamp":"Thu Aug 23 2018 10:34:19 GMT+0000 (Coordinated Universal Time)","logging":"true","mailing":{},"created":1535020459093,"_id":"lcVc1nSdaceqS1ut"}
|
||||
|
||||
```
|
||||
|
||||
Then you can transfer the entire crontab.db file to some other system and import its to the new system. You don’t need to manually create cron jobs in all systems. Just create them in one system and export and import all of them to every system on the network.
|
||||
|
||||
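One way to move the database, assuming you have SSH access to the other machine (the hostname here is just an example):
```
# Copy the exported job database to another system, then use
# the dashboard's import option there to load the jobs.
$ scp ~/Downloads/crontab.db user@other-system:~/
```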
**Get the contents from or save to existing crontab file**

Chances are you have already created some cron jobs using the **crontab** command. If so, you can retrieve the contents of the existing crontab file by clicking the **“Get from crontab”** button on the main dashboard.

![](https://www.ostechnix.com/wp-content/uploads/2018/08/get-from-crontab.png)

Similarly, you can save jobs newly created with the Crontab UI utility to the existing crontab file on your system. To do so, just click the **Save to crontab** option in the dashboard.

See? Managing cron jobs is not that complicated. Any newbie user can easily maintain any number of jobs without much hassle using Crontab UI. Give it a try and let us know what you think about this tool. I am all ears!

And that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-easily-and-safely-manage-cron-jobs-in-linux/

作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/a-beginners-guide-to-cron-jobs/
@ -0,0 +1,90 @@
How to publish a WordPress blog to a static GitLab Pages site
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0)

A long time ago, I set up a WordPress blog for a family member. There are lots of options these days, but back then there were few decent choices if you needed a web-based CMS with a WYSIWYG editor. An unfortunate side effect of things working well is that the blog has generated a lot of content over time. That means I was also regularly updating WordPress to protect against the exploits that are constantly popping up.

So I decided to convince the family member that switching to [Hugo][1] would be relatively easy, and the blog could then be hosted on [GitLab][2]. But trying to extract all that content and convert it to [Markdown][3] turned into a huge hassle. There were automated scripts that got me 95% there, but nothing worked perfectly. Manually updating all the posts was not something I wanted to do, so eventually, I gave up trying to move the blog.

Recently, I started thinking about this again and realized there was a solution I hadn't considered: I could continue maintaining the WordPress server but set it up to publish a static mirror and serve that with [GitLab Pages][4] (or [GitHub Pages][5] if you like). This would allow me to automate [Let's Encrypt][6] certificate renewals as well as eliminate the security concerns associated with hosting a WordPress site. It would, however, mean comments would stop working, but that feels like a minor loss in this case because the blog did not garner many comments.

Here's the solution I came up with, which so far seems to be working well:

  * Host the WordPress site at a URL that is not linked to from anywhere else, to reduce the odds of it being exploited. In this example, we'll use <http://private.localconspiracy.com> (even though this site is actually built with Pelican).
  * [Set up hosting on GitLab Pages][7] for the public URL <https://www.localconspiracy.com>.
  * Add a [cron job][8] that determines whether the last-built date differs between the two URLs; if the build dates differ, mirror the WordPress version.
  * After mirroring with `wget`, update all links from the "private" version to the "public" version.
  * Do a `git push` to publish the new content.

These are the two scripts I use:

`check-diff.sh` (called by cron every 15 minutes)
```
#!/bin/bash

ORIGINDATE="$(curl -v --silent http://private.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"
PUBDATE="$(curl -v --silent https://www.localconspiracy.com/feed/ 2>&1|grep lastBuildDate)"

if [ "$ORIGINDATE" != "$PUBDATE" ]
then
  /home/doc/repos/localconspiracy/mirror.sh
fi
```
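For reference, the cron entry driving that script could look something like this (a sketch; adjust the path to wherever you keep the script):
```
# Check every 15 minutes whether the blog needs re-mirroring
*/15 * * * * /home/doc/repos/localconspiracy/check-diff.sh
```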
`mirror.sh`:
```
#!/bin/sh

cd /home/doc/repos/localconspiracy

wget \
    --mirror \
    --convert-links \
    --adjust-extension \
    --page-requisites \
    --retry-connrefused \
    --exclude-directories=comments \
    --execute robots=off \
    http://private.localconspiracy.com

git rm -rf public/*
mv private.localconspiracy.com/* public/.
rmdir private.localconspiracy.com
find ./public/ -type f -exec sed -i -e 's|http://private.localconspiracy|https://www.localconspiracy|g' {} \;
find ./public/ -type f -exec sed -i -e 's|http://www.localconspiracy|https://www.localconspiracy|g' {} \;
git add public/*
git commit -m "new snapshot"
git push origin master
```
That's it! Now, when the blog is changed, within 15 minutes the site is mirrored to a static version and pushed up to the repo where it will be reflected in GitLab pages.
|
||||
|
||||
This concept could be extended a little further if you wanted to [run WordPress locally][9]. In that case, you would not need a server to host your WordPress blog; you could just run it on your local machine. In that scenario, there's no chance of your blog getting exploited. As long as you can run `wget` against it locally, you could use the approach outlined above to have a WordPress site hosted on GitLab Pages.
|
||||
|
||||
_This article was originally posted at[Local Conspiracy][10]. Reposted with permission._
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/publish-wordpress-static-gitlab-pages-site
|
||||
|
||||
作者:[Christopher Aedo][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/docaedo
|
||||
[1]:https://gohugo.io/
|
||||
[2]:https://gitlab.com/
|
||||
[3]:https://en.wikipedia.org/wiki/Markdown
|
||||
[4]:https://docs.gitlab.com/ee/user/project/pages/
|
||||
[5]:https://pages.github.com/
|
||||
[6]:https://letsencrypt.org/
|
||||
[7]:https://about.gitlab.com/2016/04/07/gitlab-pages-setup/
|
||||
[8]:https://en.wikipedia.org/wiki/Cron
|
||||
[9]:https://codex.wordpress.org/Installing_WordPress_Locally_on_Your_Mac_With_MAMP
|
||||
[10]:https://localconspiracy.com/2018/08/wp-on-gitlab.html
|
108
sources/tech/20180824 5 cool music player apps.md
Normal file
@ -0,0 +1,108 @@

5 cool music player apps
======

![](https://fedoramagazine.org/wp-content/uploads/2018/08/5-cool-music-apps-816x345.jpg)

Do you like music? Then Fedora may have just what you’re looking for. This article introduces different music player apps that run on Fedora. You’re covered whether you have an extensive music library, a small one, or none at all. Here are four graphical applications and one terminal-based music player that will have you jamming.

### Quod Libet

Quod Libet is a complete manager for your large audio library. If you have an extensive audio library that you would like not just to listen to, but also manage, Quod Libet might be a good choice for you.

![][1]

Quod Libet can import music from multiple locations on your disk, and allows you to edit the tags of your audio files — so everything is under your control. As a bonus, there are various plugins available for anything from a simple equalizer to [last.fm][2] sync. You can also search and play music directly from [Soundcloud][3].

Quod Libet works great on HiDPI screens, and is available as an RPM in Fedora or on [Flathub][4] in case you run [Silverblue][5]. Install it using Gnome Software or the command line:
```
$ sudo dnf install quodlibet
```
### Audacious

If you like a simple music player that can even look like the legendary Winamp, Audacious might be a good choice for you.

![][6]

Audacious probably won’t manage all your music at once, but it works great if you like to organize your music as files. You can also export and import playlists without reorganizing the music files themselves.

As a bonus, you can make it look like Winamp. To make it look the same as in the screenshot above, go to Settings / Appearance, select Winamp Classic Interface at the top, and choose the Refugee skin right below. And Bob’s your uncle!

Audacious is available as an RPM in Fedora, and can be installed using the Gnome Software app or the following command in the terminal:
```
$ sudo dnf install audacious
```
### Lollypop

Lollypop is a music player that provides great integration with GNOME. If you enjoy how GNOME looks, and would like a music player that’s nicely integrated, Lollypop could be for you.

![][7]

Apart from nice visual integration with the GNOME Shell, it works nicely on HiDPI screens, and supports a dark theme.

As a bonus, Lollypop has an integrated cover art downloader, and a so-called Party Mode (the note button at the top-right corner) that selects and plays music automatically for you. It also integrates with online services such as [last.fm][2] or [libre.fm][8].

Available both as an RPM in Fedora and on [Flathub][4] for your [Silverblue][5] workstation, install it using the Gnome Software app or the terminal:
```
$ sudo dnf install lollypop
```
### Gradio

What if you don’t own any music, but still like to listen to it? Or do you simply love radio? Then Gradio is here for you.

![][9]

Gradio is a simple radio player that allows you to search for and play internet radio stations. You can find them by country, language, or simply using search. As a bonus, it’s visually integrated into GNOME Shell, works great on HiDPI screens, and has an option for a dark theme.

Gradio is available on [Flathub][4], which works with both Fedora Workstation and [Silverblue][5]. Install it using the Gnome Software app.
### sox

Do you prefer using the terminal, and want to listen to some music while you work? You don’t have to leave the terminal, thanks to sox.

![][10]

sox is a very simple, terminal-based music player. All you need to do is run a command such as:
```
$ play file.mp3
```

…and sox will play it for you. Apart from individual audio files, sox also supports playlists in the m3u format.

As a bonus, because sox is a terminal-based application, you can run it over ssh. Do you have a home server with speakers attached to it? Or do you want to play music from a different computer? Try using it together with [tmux][11], so you can keep listening even when the session closes.
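A minimal example of the remote use case, assuming sox is installed on the remote host (the hostname and file path are just placeholders):
```
# Play a file through the remote machine's speakers over SSH
$ ssh user@homeserver 'play ~/Music/song.mp3'
```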
sox is available in Fedora as an RPM. Install it by running:
```
$ sudo dnf install sox
```

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/5-cool-music-player-apps/

作者:[Adam Šamalík][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/asamalik/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/08/qodlibet-300x217.png
[2]:https://last.fm
[3]:https://soundcloud.com/
[4]:https://flathub.org/home
[5]:https://teamsilverblue.org/
[6]:https://fedoramagazine.org/wp-content/uploads/2018/08/audacious-300x136.png
[7]:https://fedoramagazine.org/wp-content/uploads/2018/08/lollypop-300x172.png
[8]:https://libre.fm
[9]:https://fedoramagazine.org/wp-content/uploads/2018/08/gradio.png
[10]:https://fedoramagazine.org/wp-content/uploads/2018/08/sox-300x179.png
[11]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/
@ -0,0 +1,183 @@
Add free books to your eReader: Formatting tips
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_library_reading_list_colorful.jpg?itok=jJtnyniB)

In my recent article, [A handy way to add free books to your eReader][1], I explained how to convert the plaintext indexes at [Project Gutenberg][2] to HTML and then EPUBs. But as one commenter noted, there is a problem in older indexes, where individual books are not always separated by an extra newline character.

I saw quite vividly the extent of the problem when I was working on the index for 2007, where you see things like this:
```
Audio: The General Epistle of James 22931
Audio: The Epistle to the Hebrews 22930
Audio: The Epistle of Philemon 22929

Sacrifice, by Stephen French Whitman 22928
The Atlantic Monthly, Volume 18, No. 105, July 1866, by Various 22927
The Continental Monthly, Vol. 6, No 3, September 1864, by Various 22926

The Story of Young Abraham Lincoln, by Wayne Whipple 22925
Pathfinder, by Alan Douglas 22924
[Subtitle: or, The Missing Tenderfoot]
Pieni helmivyo, by Various 22923
[Subtitle: Suomen runoja koulunuorisolle]
[Editor: J. Waananen] [Language: Finnish]
The Posy Ring, by Various 22922
```
My first reaction was, "Well, how bad can it be to just add newlines where needed?" The answer: "Really bad." After days of working this way and stopping only when the cramps in my hand became too annoying, I decided to revisit the problem. I thought I might need to do multiple Find-Replace passes, maybe keyed on things like `[Language: Finnish]` or maybe just the `]` bracket, but this seemed almost as laborious as the manual method.

Then I noticed a particular feature: For most instances where a newline was needed, the newline character was immediately followed by the capital letter of the next title. For lines where there was still more information about the book, the newline was followed by spaces. So I tried this: In the Find text box in [KWrite][3] (remember, we’re using regex), I put:
```
(\n[A-Z])
```

and in Replace, I put:
```
\n\1
```

For every match inside the parentheses, this added a preceding newline, retaining whatever the capital letter was. It worked extremely well. The few instances where it failed involved book titles beginning with a number or with quotes. I fixed these manually, but I could have put this:
```
(\n[0-9])
```

in Find and run Replace All again. Later, I also tried it with the quotes—these require a backslash, like this:
```
(\n\") and (\n\')
```

One side effect is that a number of the listings were separated by three newline characters. Not an issue for XHTML, but easily fixed by putting this in Find:
```
\n\n\n
```

and this in Replace:
```
\n\n
```
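If you would rather script these passes than run them one at a time in an editor, roughly the same substitutions can be chained outside KWrite. A sketch using a perl one-liner (slurping the whole file so the patterns can match across newlines; the filename is just an example):
```
# Add a blank line before each new title (newline followed by a capital),
# then collapse any resulting triple newlines back to double.
$ perl -0777 -pe 's/\n([A-Z])/\n\n$1/g; s/\n{3}/\n\n/g' GUTINDEX.2007 > GUTINDEX.2007.fixed
```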
To review the process with the new features:

  1. Remove the preamble and other text you don’t want
  2. Add extra newlines with the method shown above
  3. Convert three consecutive newlines to two (optional)
  4. Add the appropriate HTML tags at the beginning and end
  5. Create the links based on finding `(\d\d\d\d\d)`, replacing with `<a href="http://www.gutenberg.org/ebooks/\1">\1</a>`
  6. Add paragraph tags by finding `\n\n` and replacing with `</p>\n\n<p>`
  7. Add a `</p>` just before the `</body>` tag at the end
  8. Fix the headers, preceding each with `<h3>` and changing the `</p>` to `</h3>` – the older indexes have only a single header
  9. Save the file with an `.xhtml` suffix, then import it into [Sigil][4] to make your EPUB.
The next issue that comes up is when the eBook numbers include only four digits. This is a problem since there are many four-digit numbers in the listings, many of which are dates. The answer comes from modifying our strategy in point 5 of the above listing.

In Find, put:

`(\d\d\d\d)\n`

and in Replace, put:

`<a href="http://www.gutenberg.org/ebooks/\1">\1</a>\n`

Notice that the `\n` is outside the parentheses; therefore, we need to add it at the end of the new replacement. Now we see another problem resulting from this new method: Some of the eBook numbers are followed by a C (copyrighted). So we need to do another pass, in Find:

`(\d\d\d\d)C\n`

and in Replace:

`<a href="http://www.gutenberg.org/ebooks/\1">\1</a>C\n`

I noticed that as of the 2002 index, the lack of extra newlines between listings was no longer a problem, and this continued all the way back to the very first index, so steps 2 and 3 became unnecessary.

I’ve now taken the process all the way back to the beginning, GUTINDEX.1996, and it works the whole way. At one point three-digit eBook numbers appear, so you must begin to find:

`(\d\d\d)\n` and then `(\d\d\d)C\n`

Then later:

`(\d\d)\n` and then `(\d\d)C\n`

And finally:

`(\d)\n`

The only glitch was in one book, eBook number 2, where the date "1798" was snagged by the three-digit search. At this point, I now have eBooks of the entire Gutenberg catalog, not counting new books presently being added.
### Troubleshooting and a bonus

I strongly advise you to test your XHTML files by trying to load them in a browser. Your browser should tell you if your XHTML is not properly formatted, in which case the file won’t show in your browser window. Two particular problems I found (having initially ignored my own advice) resulted from improper characters. I copied the link specification tags from my first article. If you do that, you will find that the typewriter quotes are substituted with typographic (curly) quotes. Fixing this was just a matter of doing a Find/Replace.

Second, there are a number of ampersands (&) in the listings, and these need to be replaced by `&amp;` for the browser to make sense of them. Some recent listings also use the Unicode non-breaking space, and these should be replaced with a regular space. (Hint: Copy one, put it in Find, put a regular space in Replace, then Replace All.)
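Assuming the file contains no entities yet, a naive sed pass can handle all the ampersands at once (run it before adding any other entities, or those will be double-escaped):
```
# Replace every bare ampersand with its XHTML entity
$ sed -i 's/&/\&amp;/g' GUTINDEX.2007.xhtml
```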
Finally, there may be some accented characters lurking, and the browser feedback should help locate them. Example: a mangled "Ibáñez" needed to be fixed to "Ibáñez".

And now the bonus: Once your XHTML is well formed, you can use your browser to comb Project Gutenberg just like on your e-reader. I also found that [Calibre][5] would not make the links properly until the quotes were fixed.

Finally, here is a template for a separate web page you can place on your system to easily link to each year’s listing. Make sure you fix the locations for your personal directory structure and filenames. Also, make sure all these quotes are typewriter quotes, not curly quotes.
```
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
  "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>GutIndexes</title>
</head>
<body leftmargin="100">
<h2>GutIndexes</h2>
<font size="5">
<table cellpadding="20"><tr>
<td><a href="/home/gregp/Documents/GUTINDEX.1996.xhtml">1996</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.1997.xhtml">1997</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.1998.xhtml">1998</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.1999.xhtml">1999</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2000.xhtml">2000</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2001.xhtml">2001</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2002.xhtml">2002</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2003.xhtml">2003</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2004.xhtml">2004</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2005.xhtml">2005</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2006.xhtml">2006</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2007.xhtml">2007</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2008.xhtml">2008</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2009.xhtml">2009</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2010.xhtml">2010</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2011.xhtml">2011</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2012.xhtml">2012</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2013.xhtml">2013</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2014.xhtml">2014</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2015.xhtml">2015</a></td></tr>
<tr><td><a href="/home/gregp/Documents/GUTINDEX.2016.xhtml">2016</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2017.xhtml">2017</a></td>
<td><a href="/home/gregp/Documents/GUTINDEX.2018.xhtml">2018</a></td>
</tr>
</table>
</font>
</body>
</html>
```

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/more-books-your-ereader

作者:[Greg Pittman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/greg-p
[1]:https://opensource.com/article/18/4/browse-project-gutenberg-library
[2]:https://www.gutenberg.org/
[3]:https://www.kde.org/applications/utilities/kwrite/
[4]:https://sigil-ebook.com/
[5]:https://calibre-ebook.com/
@ -0,0 +1,106 @@
How to install software from the Linux command line
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/suitcase_container_bag.png?itok=q40lKCBY)

If you use Linux for any amount of time, you'll soon learn there are many different ways to do the same thing. This includes installing applications on a Linux machine via the command line. I have been a Linux user for roughly 25 years, and time and time again I find myself going back to the command line to install my apps.

The most common method of installing apps from the command line is through software repositories (a place where software is stored) using what's called a package manager. All Linux apps are distributed as packages, which are nothing more than files associated with a package management system. Every Linux distribution comes with a package management system, but they are not all the same.

### What is a package management system?

A package management system comprises sets of tools and file formats that are used together to install, update, and uninstall Linux apps. The two most common package management systems are from Red Hat and Debian. Red Hat, CentOS, and Fedora all use the `rpm` system (.rpm files), while Debian, Ubuntu, and Mint use `dpkg` (.deb files). Gentoo Linux uses a system called Portage, and Arch Linux uses pacman, whose packages are compressed tarballs (.tar files). The primary difference between these systems is how they install and maintain apps.

You might be wondering what's inside an `.rpm`, `.deb`, or `.tar` file. You might be surprised to learn that all are nothing more than plain old archive files (like `.zip`) that contain an application's code, instructions on how to install it, dependencies (what other apps it may depend on), and where its configuration files should be placed. The software that reads and executes all of those instructions is called a package manager.
### Debian, Ubuntu, Mint, and others

Debian, Ubuntu, Mint, and other Debian-based distributions all use `.deb` files and the `dpkg` package management system. There are two ways to install apps via this system. You can use the `apt` application to install from a repository, or you can use the `dpkg` app to install apps from `.deb` files. Let's take a look at how to do both.

Installing apps using `apt` is as easy as:
```
$ sudo apt install app_name
```

Uninstalling an app via `apt` is also super easy:
```
$ sudo apt remove app_name
```

To upgrade your installed apps, you'll first need to update the repository index:
```
$ sudo apt update
```

Once finished, you can update any apps that need updating with the following:
```
$ sudo apt upgrade
```

What if you want to update only a single app? No problem.
```
$ sudo apt install --only-upgrade app_name
```

Finally, let's say the app you want to install is not available in the Debian repository, but it is available as a `.deb` download.
```
$ sudo dpkg -i app_name.deb
```
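One caveat worth knowing: `dpkg` does not resolve dependencies on its own, so if the command above complains about missing packages, this follow-up usually repairs the installation:
```
# Fetch and install any dependencies the .deb file needed
$ sudo apt install -f
```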
### Red Hat, CentOS, and Fedora

Red Hat, by default, uses several package management systems. These systems, while using their own terminology, are still very similar to each other and to the one used in Debian. For example, we can use either the `yum` or `dnf` manager to install apps.
```
$ sudo yum install app_name
$ sudo dnf install app_name
```

Apps in the `.rpm` format can also be installed with the `rpm` command.
```
$ sudo rpm -i app_name.rpm
```

Removing unwanted applications is just as easy.
```
$ sudo yum remove app_name
$ sudo dnf remove app_name
```

Updating apps is similarly easy.
```
$ sudo yum update
$ sudo dnf upgrade --refresh
```

As you can see, installing, uninstalling, and updating Linux apps from the command line isn't hard at all. In fact, once you get used to it, you'll find it's faster than using desktop GUI-based management tools!

For more information on installing apps from the command line, please visit the Debian [Apt wiki][1], the [Yum cheat sheet][2], and the [DNF wiki][3].

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/how-install-software-linux-command-line

作者:[Patrick H.Mullins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/pmullins
[1]:https://wiki.debian.org/Apt
[2]:https://access.redhat.com/articles/yum-cheat-sheet
[3]:https://fedoraproject.org/wiki/DNF?rd=Dnf
@ -0,0 +1,72 @@
Steam Makes it Easier to Play Windows Games on Linux
======

![Steam Wallpaper][1]

It’s no secret that the [Linux gaming][2] library offers only a fraction of what the Windows library offers. In fact, many people wouldn’t even consider [switching to Linux][3] simply because most of the games they want to play aren’t available on the platform.

At the time of writing this article, Linux has just over 5,000 games available on Steam compared to the library’s almost 27,000 total games. Now, 5,000 games may be a lot, but it isn’t 27,000 games, that’s for sure.

And though almost every new indie game seems to launch with a Linux release, we are still left without a way to play many [Triple-A][4] titles. For me, though there are many titles I would love the opportunity to play, this has never been a make-or-break problem, since almost all of my favorite titles are available on Linux, as I primarily play indie and [retro games][5] anyway.

### Meet Proton: a WINE Fork by Steam

Now, that problem is a thing of the past, since this week Valve [announced][6] a new update to Steam Play that adds Proton, a forked version of Wine, to the Linux and Mac Steam clients. Yes, the tool is open source, and Valve has made the source code available on [Github][7]. The feature is still in beta, though, so you must opt into the Steam beta client in order to take advantage of this functionality.

#### With Proton, more Windows games are available for Linux on Steam

What does that actually mean for us Linux users? In short, it means that both Linux and Mac computers can now play all 27,000 of those games without needing to configure something like [PlayOnLinux][8] or [Lutris][9] to do so! Which, let me tell you, can be quite the headache at times.

The more complicated answer is that it sounds too good to be true for a reason. Though, in theory, you can play literally every Windows game on Linux this way, only a short list of games is officially supported at launch, including DOOM, Final Fantasy VI, Tekken 7, Star Wars: Battlefront 2, and several more.

#### You can play all Windows games on Linux (in theory)

Though the list only has about 30 games thus far, you can force Steam to install and play any game through Proton by marking the “Enable Steam Play for all titles” checkbox. But don’t get your hopes too high. They do not guarantee the stability and performance you may be hoping for, so keep your expectations reasonable.

![Steam Play][10]

#### Experiencing Proton: Not as bad as I expected

I installed a few moderately taxing games to put Proton through its paces. One of these was The Elder Scrolls IV: Oblivion, and in the two hours I played the game, it only crashed once, and that was almost immediately after an autosave point during the tutorial.

I have an Nvidia GTX 1050 Ti, so I was able to play the game at 1080p with high settings, and I didn’t see a single problem outside of that one crash. The only negative feedback I really have is that the framerate was not nearly as high as it would have been in a native game. I got above 60 frames per second 90% of the time, but I admit it could have been better.

Every other game that I have installed and launched has also worked flawlessly, granted I haven’t played any of them for an extended amount of time yet. Some games I installed include The Forest, Dead Rising 4, H1Z1, and Assassin’s Creed II (can you tell I like horror games?).
#### Why is Steam (still) betting on Linux?

Now, this is all fine and dandy, but why did this happen? Why would Valve spend the time, money, and resources needed to implement something like this? I like to think they did so because they value the Linux community, but if I am honest, I don’t believe we had anything to do with it.

If I had to put money on it, I would say Valve has developed Proton because they haven’t given up on [Steam Machines][11] yet. And since [Steam OS][12] runs on Linux, it is in their best financial interest to invest in something like this. The more games available on Steam OS, the more people might be willing to buy a Steam Machine.

Maybe I am wrong, but I bet this means we will see a new wave of Steam Machines coming in the not-so-distant future. Maybe we will see them in one year, or perhaps we won’t see them for another five, who knows!

Either way, all I know is that I am beyond excited to finally play the games from my Steam library that I have slowly accumulated over the years from all of the Humble Bundles, promo codes, and random sales, bought just in case I ever got them running in Lutris.

#### Excited for more gaming on Linux?

What do you think? Are you excited about this, or are you afraid fewer developers will create native Linux games because there is almost no need to now? Does Valve love the Linux community, or do they love money? Let us know what you think in the comment section below, and check back for more FOSS content like this.

--------------------------------------------------------------------------------

via: https://itsfoss.com/steam-play-proton/

作者:[Phillip Prado][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://itsfoss.com/author/phillip/
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/steam-wallpaper.jpeg
[2]:https://itsfoss.com/linux-gaming-guide/
[3]:https://itsfoss.com/reasons-switch-linux-windows-xp/
[4]:https://itsfoss.com/triplea-game-review/
[5]:https://itsfoss.com/play-retro-games-linux/
[6]:https://steamcommunity.com/games/221410
[7]:https://github.com/ValveSoftware/Proton/
[8]:https://www.playonlinux.com/en/
[9]:https://lutris.net/
[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/SteamProton.jpg
[11]:https://store.steampowered.com/sale/steam_machines
[12]:https://itsfoss.com/valve-annouces-linux-based-gaming-operating-system-steamos/
139
sources/tech/20180824 What Stable Kernel Should I Use.md
Normal file
@ -0,0 +1,139 @@

What Stable Kernel Should I Use?
======

I get a lot of questions from people asking me what stable kernel they should use for their product/device/laptop/server/etc. Especially given the now-extended length of time that some kernels are supported by me and others, this isn’t always an obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use whatever kernel version you want, but here’s what I recommend.

As always, the opinions written here are my own; I speak for no one but myself.

### What kernel to pick

Here’s my short list of what kernel you should use, ranked from best to worst options. I’ll go into the details of all of these below, but if you just want the summary, here it is:

Hierarchy of what kernel to use, from best solution to worst:

  * Supported kernel from your favorite Linux distribution
  * Latest stable release
  * Latest LTS release
  * Older LTS release that is still being maintained

What kernel to never use:

  * Unmaintained kernel release

To put numbers to the above: today, as of August 24, 2018, the front page of kernel.org looks like this:

![][1]

So, based on the above list, that would mean:

  * 4.18.5 is the latest stable release
  * 4.14.67 is the latest LTS release
  * 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
  * 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.

Quite easy, right?
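To see where your own machine falls in this hierarchy, compare the version printed by a quick command against the releases above:
```
# Print the version of the currently running kernel
$ uname -r
```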
Ok, now for some justification for all of this:

### Distribution kernels

The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community-based Linux distributions that constantly roll along with the latest updated kernel, supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.

All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is one of the most solid and best kernels you can use when it comes to having the latest fixes ([remember, all fixes are security fixes][2]) in it.

There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.

Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.

Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS, or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes and sometimes new features to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.

So again, the best kernel you can use is one that someone else supports, and that you can turn to for help. Use that support; usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.

But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the latest stable release:

### Latest stable release

This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the following three months, bugfixes that go into the next kernel release are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.

This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.

After three months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.

If you have new hardware that was purchased after the last LTS release came out, you are almost guaranteed to have to run this kernel in order to have it supported. So for desktops or new servers, this is usually the recommended kernel to be running.

### Latest LTS release

If your hardware relies on a vendor’s out-of-tree patch in order to work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that go into the stable releases where applicable, and lots of users test and use it.

Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.

Also, this release is common for users that do not like to worry about “major” upgrades happening every three months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.

The downside of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That could be significant for some workloads, so be very aware of this.

Also, if you have problems with this kernel release, the first thing any developer you report the issue to will ask is, “Does the latest stable release have this problem?” So be aware that support might not be as easy to get as with the latest stable releases.

Now, if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:
### Older LTS release

These releases have traditionally been supported by the community for two years, sometimes longer when a major distribution relies on them (like Debian or SLES). However, in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, [kernelci.org][3], and others, these kernels are starting to be supported for much longer.

Here are the latest LTS releases and how long they will be supported, as shown at [kernel.org/category/releases.html][4] on August 24, 2018:

![][5]

The reason that Google and other companies want to have these kernels live longer is due to the crazy (some will say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released, yet that code is never merged upstream, resulting in a brand new chip being released based on a 2-year-old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels.

If the LTS releases stop happening after two years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, which is not good for any ecosystem.

Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release version (i.e. every 4.9.y release that happens). An example of this is the Android kernel requirements for new devices: the “O” and now “P” releases specified the minimum kernel version allowed, and Android security releases might start to require those “.y” releases to happen more frequently on devices.

I will note that some manufacturers are already doing this today. Sony is one great example, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential, which has been tracking the 4.4.y releases faster than anyone I know of.

There is one huge caveat when using a kernel like this. The number of security fixes that get backported is not as great as with the latest LTS release, because the traditional usage model for devices running these older LTS kernels is much more restricted. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to do some of the recent Spectre-type fixes for older releases is greatly reduced, if present at all in some branches.

So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.

Also, support from the community for these older LTS releases is greatly reduced even from the normal LTS releases, if it is available at all. If you use these kernels, you really are on your own, and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…).

### Unmaintained kernel release

Surprisingly, many companies do just grab a random kernel release, slap it into their product, and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems, which for some unknown reason shipped a random -rc release of a kernel in their device. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.

You are of course free to do this if you want, but note that you really are on your own here. The community can not support you, as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. For some companies and systems, that could be just fine, but be aware of the “hidden” cost this might incur if you do not plan for it up front.

### Summary

So, here’s a short list of different types of devices, and what I would recommend for their kernels:

  * Laptop / Desktop: Latest stable release
  * Server: Latest stable release or latest LTS release
  * Embedded device: Latest LTS release, or an older LTS release if the security model used is very strong and tight.

And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e. Linus’s development tree) plus whatever kernel changes I am currently working on, and my servers run the latest stable release. So despite being in charge of the LTS releases, I don’t run them myself, except on testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time.

--------------------------------------------------------------------------------

via: http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/

作者:[Greg Kroah-Hartman][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://kroah.com
[1]:https://s3.amazonaws.com/kroah.com/images/kernel.org_2018_08_24.png
[2]:http://kroah.com/log/blog/2018/02/05/linux-kernel-release-model/
[3]:https://kernelci.org/
[4]:https://www.kernel.org/category/releases.html
[5]:https://s3.amazonaws.com/kroah.com/images/kernel.org_releases_2018_08_24.png
@ -0,0 +1,116 @@
|
||||
[Solved] “sub process usr bin dpkg returned an error code 1” Error in Ubuntu
|
||||
======
|
||||
If you are encountering “sub process usr bin dpkg returned an error code 1” while installing software on Ubuntu Linux, here is how you can fix it.
|
||||
|
||||
One of the common issues in Ubuntu and other Debian-based distributions is broken packages. You try to update the system or install a new package and you encounter an error like ‘Sub-process /usr/bin/dpkg returned an error code’.
|
||||
|
||||
That’s what happened to me the other day. I was trying to install a radio application in Ubuntu when it threw me this error:
|
||||
```
|
||||
Unpacking python-gst-1.0 (1.6.2-1build1) ...
|
||||
Selecting previously unselected package radiotray.
|
||||
Preparing to unpack .../radiotray_0.7.3-5ubuntu1_all.deb ...
|
||||
Unpacking radiotray (0.7.3-5ubuntu1) ...
|
||||
Processing triggers for man-db (2.7.5-1) ...
|
||||
Processing triggers for desktop-file-utils (0.22-1ubuntu5.2) ...
|
||||
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20180209-0ubuntu1) ...
|
||||
Rebuilding /usr/share/applications/bamf-2.index...
|
||||
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
|
||||
Processing triggers for mime-support (3.59ubuntu1) ...
|
||||
Setting up polar-bookshelf (1.0.0-beta56) ...
|
||||
ln: failed to create symbolic link '/usr/local/bin/polar-bookshelf': No such file or directory
|
||||
dpkg: error processing package polar-bookshelf (--configure):
|
||||
subprocess installed post-installation script returned error exit status 1
|
||||
Setting up python-appindicator (12.10.1+16.04.20170215-0ubuntu1) ...
|
||||
Setting up python-gst-1.0 (1.6.2-1build1) ...
|
||||
Setting up radiotray (0.7.3-5ubuntu1) ...
|
||||
Errors were encountered while processing:
|
||||
polar-bookshelf
|
||||
E: Sub-process /usr/bin/dpkg returned an error code (1)
|
||||
|
||||
```
|
||||
|
||||
The last three lines are of the utmost importance here.
|
||||
```
|
||||
Errors were encountered while processing:
|
||||
polar-bookshelf
|
||||
E: Sub-process /usr/bin/dpkg returned an error code (1)
|
||||
|
||||
```
|
||||
|
||||
It tells me that the package polar-bookshelf is causing the issue. Identifying the problematic package is crucial to fixing this error.
|
||||
|
||||
### Fixing Sub-process /usr/bin/dpkg returned an error code (1)
|
||||
|
||||
![Fix update errors in Ubuntu Linux][1]
|
||||
|
||||
Let’s try to fix this broken package error. I’ll show several methods that you can try one by one. The initial ones are easy to use and practically no-brainers.
|
||||
|
||||
After trying each of the methods discussed here, run sudo apt update and then try to install a new package or upgrade, as shown below.
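For example, a minimal verification sketch looks like this (the package name is just a placeholder for whatever you were trying to install):

```
sudo apt update
sudo apt install <package-name>
```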
|
||||
|
||||
#### Method 1: Reconfigure Package Database
|
||||
|
||||
The first method you can try is to reconfigure the package database. Probably the database got corrupted while installing a package. Reconfiguring often fixes the problem.
|
||||
```
|
||||
sudo dpkg --configure -a
|
||||
|
||||
```
|
||||
|
||||
#### Method 2: Use force install
|
||||
|
||||
If a package installation was interrupted previously, you may try a force install (the -f option here stands for --fix-broken):
|
||||
```
|
||||
sudo apt-get install -f
|
||||
|
||||
```
|
||||
|
||||
#### Method 3: Try removing the troublesome package
|
||||
|
||||
If losing the package isn’t an issue for you, you may try to remove it manually. Please don’t do this for Linux kernels (packages starting with linux-).
|
||||
```
|
||||
sudo apt remove <package-name>
|
||||
|
||||
```
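In this article’s example, the troublesome package is polar-bookshelf, so the command would be:

```
sudo apt remove polar-bookshelf
```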
|
||||
|
||||
#### Method 4: Remove post info files of the troublesome package
|
||||
|
||||
This should be your last resort. You can try removing the files associated with the package in question from /var/lib/dpkg/info.
|
||||
|
||||
**You need to know a few basic Linux commands to figure out what’s happening and how to apply the same approach to your problem.**
|
||||
|
||||
In my case, I had an issue with polar-bookshelf. So I looked for the files associated with it:
|
||||
```
|
||||
ls -l /var/lib/dpkg/info | grep -i polar-bookshelf
|
||||
-rw-r--r-- 1 root root 2324811 Aug 14 19:29 polar-bookshelf.list
|
||||
-rw-r--r-- 1 root root 2822824 Aug 10 04:28 polar-bookshelf.md5sums
|
||||
-rwxr-xr-x 1 root root 113 Aug 10 04:28 polar-bookshelf.postinst
|
||||
-rwxr-xr-x 1 root root 84 Aug 10 04:28 polar-bookshelf.postrm
|
||||
|
||||
```
|
||||
|
||||
Now all I needed to do was to move these files out of the way:
|
||||
```
|
||||
sudo mv /var/lib/dpkg/info/polar-bookshelf.* /tmp
|
||||
|
||||
```
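Since the files were only moved to /tmp rather than deleted, you can restore them if anything goes wrong (a minimal sketch, assuming the move above):

```
sudo mv /tmp/polar-bookshelf.* /var/lib/dpkg/info/
```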
|
||||
|
||||
Run sudo apt update, and then you should be able to install software as usual.
|
||||
|
||||
#### Which method worked for you (if it worked)?
|
||||
|
||||
I hope this quick article helps you in fixing the ‘E: Sub-process /usr/bin/dpkg returned an error code (1)’ error.
|
||||
|
||||
If it did work for you, which method was it? Did you manage to fix this error with some other method? If yes, please share that to help others with this issue.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/dpkg-returned-an-error-code-1/
|
||||
|
||||
作者:[Abhishek Prakash][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/abhishek/
|
||||
[1]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/fix-common-update-errors-ubuntu.jpeg
|
@ -0,0 +1,417 @@
|
||||
How to capture and analyze packets with tcpdump command on Linux
|
||||
======
|
||||
tcpdump is a well-known command line **packet analyzer** tool. Using the tcpdump command, we can capture live TCP/IP packets, and these packets can also be saved to a file and analyzed later. The tcpdump command becomes very handy when it comes to troubleshooting at the network level.
|
||||
|
||||
![](https://www.linuxtechi.com/wp-content/uploads/2018/08/tcpdump-command-examples-linux.jpg)
|
||||
|
||||
tcpdump is available in most Linux distributions. On Debian-based Linux, it can be installed using the apt command:
|
||||
```
|
||||
# apt install tcpdump -y
|
||||
|
||||
```
|
||||
|
||||
On RPM-based Linux distributions, tcpdump can be installed using the yum command below:
|
||||
```
|
||||
# yum install tcpdump -y
|
||||
|
||||
```
|
||||
|
||||
When we run the tcpdump command without any options, it captures packets on the first available interface. To stop or cancel a running tcpdump command, press “**ctrl+c**”. In this tutorial we will discuss how to capture and analyze packets using different practical examples.
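For example, to explicitly capture on all interfaces at once, you can use the ‘any’ pseudo-device (run as root, and press ctrl+c to stop):

```
# tcpdump -i any
```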
|
||||
|
||||
### Example:1) Capturing packets from a specific interface
|
||||
|
||||
When we run the tcpdump command without any options, it captures packets on the default interface, so to capture packets from a specific interface use the option ‘**-i**’ followed by the interface name.
|
||||
|
||||
Syntax :
|
||||
|
||||
```
|
||||
# tcpdump -i {interface-name}
|
||||
```
|
||||
|
||||
Let’s assume I want to capture packets from the interface “enp0s3”.
|
||||
|
||||
Output would be something like below,
|
||||
```
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
06:43:22.905890 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952160:21952540, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 380
|
||||
06:43:22.906045 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952540:21952760, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
|
||||
06:43:22.906150 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952760:21952980, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
|
||||
06:43:22.906291 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [.], ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 0
|
||||
06:43:22.906303 IP 169.144.0.1.39374 > compute-0-1.example.com.ssh: Flags [P.], seq 13537:13609, ack 21952980, win 13094, options [nop,nop,TS val 6580205 ecr 26164373], length 72
|
||||
06:43:22.906322 IP compute-0-1.example.com.ssh > 169.144.0.1.39374: Flags [P.], seq 21952980:21953200, ack 13537, win 291, options [nop,nop,TS val 26164373 ecr 6580205], length 220
|
||||
^C
|
||||
109930 packets captured
|
||||
110065 packets received by filter
|
||||
133 packets dropped by kernel
|
||||
[root@compute-0-1 ~]#
|
||||
|
||||
```
|
||||
|
||||
### Example:2) Capturing a specific number of packets from a specific interface
|
||||
|
||||
Let’s assume we want to capture 12 packets from a specific interface like “enp0s3”; this can be easily achieved using the options “**-c {number} -i {interface-name}**”:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -c 12 -i enp0s3
|
||||
|
||||
```
|
||||
|
||||
The above command will generate output something like below:
|
||||
|
||||
[![N-Number-Packsets-tcpdump-interface][1]][2]
|
||||
|
||||
### Example:3) Display all the available Interfaces for tcpdump
|
||||
|
||||
Use ‘ **-D** ‘ option to display all the available interfaces for tcpdump command,
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -D
|
||||
1.enp0s3
|
||||
2.enp0s8
|
||||
3.ovs-system
|
||||
4.br-int
|
||||
5.br-tun
|
||||
6.nflog (Linux netfilter log (NFLOG) interface)
|
||||
7.nfqueue (Linux netfilter queue (NFQUEUE) interface)
|
||||
8.usbmon1 (USB bus number 1)
|
||||
9.usbmon2 (USB bus number 2)
|
||||
10.qbra692e993-28
|
||||
11.qvoa692e993-28
|
||||
12.qvba692e993-28
|
||||
13.tapa692e993-28
|
||||
14.vxlan_sys_4789
|
||||
15.any (Pseudo-device that captures on all interfaces)
|
||||
16.lo [Loopback]
|
||||
[root@compute-0-1 ~]#
|
||||
|
||||
```
|
||||
|
||||
I am running the tcpdump command on one of my OpenStack compute nodes; that’s why in the output you see a number of interfaces: tap interfaces, bridges, and a VXLAN interface.
|
||||
|
||||
### Example:4) Capturing packets with human readable timestamp (-tttt option)
|
||||
|
||||
By default, there is no human-readable timestamp in the tcpdump command output. If you want to associate a human-readable timestamp with each captured packet, use the ‘**-tttt**’ option. An example is shown below:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -c 8 -tttt -i enp0s3
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
2018-08-25 23:23:36.954883 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1449206247:1449206435, ack 3062020950, win 291, options [nop,nop,TS val 86178422 ecr 21583714], length 188
|
||||
2018-08-25 23:23:36.955046 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13585, options [nop,nop,TS val 21583717 ecr 86178422], length 0
|
||||
2018-08-25 23:23:37.140097 IP controller0.example.com.amqp > compute-0-1.example.com.57818: Flags [P.], seq 814607956:814607964, ack 2387094506, win 252, options [nop,nop,TS val 86172228 ecr 86176695], length 8
|
||||
2018-08-25 23:23:37.140175 IP compute-0-1.example.com.57818 > controller0.example.com.amqp: Flags [.], ack 8, win 237, options [nop,nop,TS val 86178607 ecr 86172228], length 0
|
||||
2018-08-25 23:23:37.355238 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [P.], seq 1080415080:1080417400, ack 1690909362, win 237, options [nop,nop,TS val 86178822 ecr 86163054], length 2320
|
||||
2018-08-25 23:23:37.357119 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 2320, win 1432, options [nop,nop,TS val 86172448 ecr 86178822], length 0
|
||||
2018-08-25 23:23:37.357545 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [P.], seq 1:22, ack 2320, win 1432, options [nop,nop,TS val 86172449 ecr 86178822], length 21
|
||||
2018-08-25 23:23:37.357572 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 22, win 237, options [nop,nop,TS val 86178825 ecr 86172449], length 0
|
||||
8 packets captured
|
||||
134 packets received by filter
|
||||
69 packets dropped by kernel
|
||||
[root@compute-0-1 ~]#
|
||||
|
||||
```
|
||||
|
||||
### Example:5) Capturing and saving packets to a file (-w option)
|
||||
|
||||
Use the “**-w**” option in the tcpdump command to save the captured TCP/IP packets to a file, so that we can analyze those packets later.
|
||||
|
||||
Syntax :
|
||||
|
||||
```
|
||||
# tcpdump -w file_name.pcap -i {interface-name}
|
||||
```
|
||||
|
||||
Note: By convention, the file extension should be **.pcap** so that other analysis tools recognize the capture.
|
||||
|
||||
Let’s assume I want to save the captured packets of the interface “**enp0s3**” to a file named **enp0s3-26082018.pcap**:
|
||||
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
|
||||
```
|
||||
|
||||
The above command will generate output something like below:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018.pcap -i enp0s3
|
||||
tcpdump: listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
^C841 packets captured
|
||||
845 packets received by filter
|
||||
0 packets dropped by kernel
|
||||
[root@compute-0-1 ~]# ls
|
||||
anaconda-ks.cfg enp0s3-26082018.pcap
|
||||
[root@compute-0-1 ~]#
|
||||
|
||||
```
|
||||
|
||||
Capturing and saving the packets whose size is **greater** than **N bytes**:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-2.pcap greater 1024
|
||||
|
||||
```
|
||||
|
||||
Capturing and saving the packets whose size is **less** than **N bytes**:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -w enp0s3-26082018-3.pcap less 1024
|
||||
|
||||
```
|
||||
|
||||
### Example:6) Reading packets from the saved file ( -r option)
|
||||
|
||||
In the above example we saved the captured packets to a file. We can read those packets from the file using the option ‘**-r**’; an example is shown below:
|
||||
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -r enp0s3-26082018.pcap
|
||||
```
|
||||
|
||||
Reading the packets with human-readable timestamps:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -tttt -r enp0s3-26082018.pcap
|
||||
reading from file enp0s3-26082018.pcap, link-type EN10MB (Ethernet)
|
||||
2018-08-25 22:03:17.249648 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1426167803:1426167927, ack 3061962134, win 291, options [nop,nop,TS val 81358717 ecr 20378789], length 124
|
||||
2018-08-25 22:03:17.249840 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 124, win 564, options [nop,nop,TS val 20378791 ecr 81358717], length 0
|
||||
2018-08-25 22:03:17.454559 IP controller0.example.com.amqp > compute-0-1.example.com.57836: Flags [.], ack 1079416895, win 1432, options [nop,nop,TS val 81352560 ecr 81353913], length 0
|
||||
2018-08-25 22:03:17.454642 IP compute-0-1.example.com.57836 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 81358922 ecr 81317504], length 0
|
||||
2018-08-25 22:03:17.646945 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [.], seq 106760587:106762035, ack 688390730, win 237, options [nop,nop,TS val 81359114 ecr 81350901], length 1448
|
||||
2018-08-25 22:03:17.647043 IP compute-0-1.example.com.57788 > controller0.example.com.amqp: Flags [P.], seq 1448:1956, ack 1, win 237, options [nop,nop,TS val 81359114 ecr 81350901], length 508
|
||||
2018-08-25 22:03:17.647502 IP controller0.example.com.amqp > compute-0-1.example.com.57788: Flags [.], ack 1956, win 1432, options [nop,nop,TS val 81352753 ecr 81359114], length 0
|
||||
.........................................................................................................................
|
||||
|
||||
```
|
||||
|
||||
### Example:7) Showing numeric IP addresses instead of hostnames (-n option)
|
||||
|
||||
The -n option stops tcpdump from resolving hostnames, so the source and destination of each packet are shown as numeric IP addresses. An example is shown below:
|
||||
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -n -i enp0s3
|
||||
```
|
||||
|
||||
The output of the above command would be something like below:
|
||||
```
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
22:22:28.537904 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433301395:1433301583, ack 3061976250, win 291, options [nop,nop,TS val 82510005 ecr 20666610], length 188
|
||||
22:22:28.538173 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20666613 ecr 82510005], length 0
|
||||
22:22:28.538573 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 364
|
||||
22:22:28.538736 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
|
||||
22:22:28.538874 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
|
||||
22:22:28.539042 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20666613 ecr 82510006], length 0
|
||||
22:22:28.539178 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666613], length 340
|
||||
22:22:28.539282 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
|
||||
22:22:28.539479 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 82510006 ecr 20666614], length 340
|
||||
22:22:28.539595 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1572, win 9086, options [nop,nop,TS val 20666614 ecr 82510006], length 0
|
||||
22:22:28.539760 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1572:1912, ack 1, win 291, options [nop,nop,TS val 82510007 ecr 20666614], length 340
|
||||
.........................................................................
|
||||
|
||||
```
|
||||
|
||||
You can also capture a fixed number of packets with numeric addresses by combining the -c and -n options in the tcpdump command:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -c 25 -n -i enp0s3
|
||||
|
||||
```
|
||||
|
||||
### Example:8) Capturing only TCP packets on a specific interface
|
||||
|
||||
With the tcpdump command, we can capture only TCP packets using the ‘**tcp**’ filter:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -i enp0s3 tcp
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
22:36:54.521053 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1433336467:1433336655, ack 3061986618, win 291, options [nop,nop,TS val 83375988 ecr 20883106], length 188
|
||||
22:36:54.521474 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 188, win 9086, options [nop,nop,TS val 20883109 ecr 83375988], length 0
|
||||
22:36:54.522214 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:552, ack 1, win 291, options [nop,nop,TS val 83375989 ecr 20883109], length 364
|
||||
22:36:54.522508 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 552, win 9086, options [nop,nop,TS val 20883109 ecr 83375989], length 0
|
||||
22:36:54.522867 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 552:892, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
|
||||
22:36:54.523006 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 892, win 9086, options [nop,nop,TS val 20883109 ecr 83375990], length 0
|
||||
22:36:54.523304 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 892:1232, ack 1, win 291, options [nop,nop,TS val 83375990 ecr 20883109], length 340
|
||||
22:36:54.523461 IP 169.144.0.1.39406 > 169.144.0.20.ssh: Flags [.], ack 1232, win 9086, options [nop,nop,TS val 20883110 ecr 83375990], length 0
|
||||
22:36:54.523604 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1232:1572, ack 1, win 291, options [nop,nop,TS val 83375991 ecr 20883110], length 340
|
||||
...................................................................................................................................................
|
||||
|
||||
```
|
||||
|
||||
### Example:9) Capturing packets from a specific port on a specific interface
|
||||
|
||||
Using the tcpdump command, we can capture packets for a specific port (e.g. 22) on a specific interface such as enp0s3.
|
||||
|
||||
Syntax :
|
||||
|
||||
```
|
||||
# tcpdump -i {interface-name} port {Port_Number}
|
||||
```
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -i enp0s3 port 22
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
22:54:45.032412 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1435010787:1435010975, ack 3061993834, win 291, options [nop,nop,TS val 84446499 ecr 21150734], length 188
|
||||
22:54:45.032631 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 9131, options [nop,nop,TS val 21150737 ecr 84446499], length 0
|
||||
22:54:55.037926 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 188:576, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21150737], length 388
|
||||
22:54:55.038106 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 576, win 9154, options [nop,nop,TS val 21153238 ecr 84456505], length 0
|
||||
22:54:55.038286 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 576:940, ack 1, win 291, options [nop,nop,TS val 84456505 ecr 21153238], length 364
|
||||
22:54:55.038564 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 940, win 9177, options [nop,nop,TS val 21153238 ecr 84456505], length 0
|
||||
22:54:55.038708 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 940:1304, ack 1, win 291, options [nop,nop,TS val 84456506 ecr 21153238], length 364
|
||||
............................................................................................................................
|
||||
[root@compute-0-1 ~]#
|
||||
|
||||
```
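Besides a single port, the pcap filter syntax also accepts a port range. For example, to capture traffic on ports 22 through 125 (a minimal sketch):

```
[root@compute-0-1 ~]# tcpdump -i enp0s3 portrange 22-125
```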
|
||||
|
||||
### Example:10) Capturing the packets from a Specific Source IP on a Specific Interface
|
||||
|
||||
Using the “**src**” keyword followed by an “**IP address**” in the tcpdump command, we can capture the packets from a specific source IP.
|
||||
|
||||
syntax :
|
||||
|
||||
```
|
||||
# tcpdump -n -i {interface-name} src {ip-address}
|
||||
```
|
||||
|
||||
An example is shown below:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 src 169.144.0.10
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
23:03:45.912733 IP 169.144.0.10.amqp > 169.144.0.20.57800: Flags [.], ack 526623844, win 243, options [nop,nop,TS val 84981008 ecr 84982372], length 0
|
||||
23:03:46.136757 IP 169.144.0.10.amqp > 169.144.0.20.57796: Flags [.], ack 2535995970, win 252, options [nop,nop,TS val 84981232 ecr 84982596], length 0
|
||||
23:03:46.153398 IP 169.144.0.10.amqp > 169.144.0.20.57798: Flags [.], ack 3623063621, win 243, options [nop,nop,TS val 84981248 ecr 84982612], length 0
|
||||
23:03:46.361160 IP 169.144.0.10.amqp > 169.144.0.20.57802: Flags [.], ack 2140263945, win 252, options [nop,nop,TS val 84981456 ecr 84982821], length 0
|
||||
23:03:46.376926 IP 169.144.0.10.amqp > 169.144.0.20.57808: Flags [.], ack 175946224, win 252, options [nop,nop,TS val 84981472 ecr 84982836], length 0
|
||||
23:03:46.505242 IP 169.144.0.10.amqp > 169.144.0.20.57810: Flags [.], ack 1016089556, win 252, options [nop,nop,TS val 84981600 ecr 84982965], length 0
|
||||
23:03:46.616994 IP 169.144.0.10.amqp > 169.144.0.20.57812: Flags [.], ack 832263835, win 252, options [nop,nop,TS val 84981712 ecr 84983076], length 0
|
||||
23:03:46.809344 IP 169.144.0.10.amqp > 169.144.0.20.57814: Flags [.], ack 2781799939, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
|
||||
23:03:46.809485 IP 169.144.0.10.amqp > 169.144.0.20.57816: Flags [.], ack 1662816815, win 252, options [nop,nop,TS val 84981904 ecr 84983268], length 0
|
||||
23:03:47.033301 IP 169.144.0.10.amqp > 169.144.0.20.57818: Flags [.], ack 2387094362, win 252, options [nop,nop,TS val 84982128 ecr 84983492], length 0
|
||||
^C
|
||||
10 packets captured
|
||||
12 packets received by filter
|
||||
0 packets dropped by kernel
|
||||
[root@compute-0-1 ~]#
|
||||
|
||||
```
|
||||
|
||||
### Example:11) Capturing packets from a specific destination IP on a specific Interface
|
||||
|
||||
Syntax :
|
||||
|
||||
```
|
||||
# tcpdump -n -i {interface-name} dst {IP-address}
|
||||
```
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -n -i enp0s3 dst 169.144.0.1
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
23:10:43.520967 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 1439564171:1439564359, ack 3062005550, win 291, options [nop,nop,TS val 85404988 ecr 21390356], length 188
|
||||
23:10:43.521441 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 188:408, ack 1, win 291, options [nop,nop,TS val 85404988 ecr 21390359], length 220
|
||||
23:10:43.521719 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 408:604, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
|
||||
23:10:43.521993 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 604:800, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
|
||||
23:10:43.522157 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 800:996, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
|
||||
23:10:43.522346 IP 169.144.0.20.ssh > 169.144.0.1.39406: Flags [P.], seq 996:1192, ack 1, win 291, options [nop,nop,TS val 85404989 ecr 21390359], length 196
|
||||
.........................................................................................
|
||||
|
||||
```
|
||||
|
||||
### Example:12) Capturing TCP packet communication between two Hosts
|
||||
|
||||
Let’s assume I want to capture TCP packets between the two hosts 169.144.0.1 & 169.144.0.20; an example is shown below:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -w two-host-tcp-comm.pcap -i enp0s3 tcp and \(host 169.144.0.1 or host 169.144.0.20\)
|
||||
|
||||
```
|
||||
|
||||
Capturing only the SSH packet flow between two hosts using the tcpdump command:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -w ssh-comm-two-hosts.pcap -i enp0s3 src 169.144.0.1 and port 22 and dst 169.144.0.20 and port 22
|
||||
|
||||
```
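Since both captures above are written to .pcap files, you can read them back later and even apply a filter while reading. For example, assuming the file created above:

```
[root@compute-0-1 ~]# tcpdump -n -r two-host-tcp-comm.pcap port 22
```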
|
||||
|
||||
### Example:13) Capturing the UDP network packets (to & fro) between two hosts
|
||||
|
||||
Syntax :
|
||||
|
||||
```
|
||||
# tcpdump -w {file-name.pcap} -s {snap-length} -i {interface-name} udp and \(host {ip-address-1} and host {ip-address-2}\)
|
||||
```
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -w two-host-comm.pcap -s 1000 -i enp0s3 udp and \(host 169.144.0.10 and host 169.144.0.20\)
|
||||
|
||||
```
|
||||
|
||||
### Example:14) Capturing packets in HEX and ASCII Format
|
||||
|
||||
Using the tcpdump command, we can capture TCP/IP packets in ASCII and HEX format.
|
||||
|
||||
To capture the packets in ASCII format, use the **-A** option. An example is shown below:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -c 10 -A -i enp0s3
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
00:37:10.520060 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452637331:1452637519, ack 3062125586, win 333, options [nop,nop,TS val 90591987 ecr 22687106], length 188
|
||||
E...[root@compute-0-1 @...............V.|...T....MT......
|
||||
.fR..Z-....b.:..Z5...{.'p....]."}...Z..9.?......."root@compute-0-1 <.....V..C.....{,...OKP.2.*...`..-sS..1S...........:.O[.....{G..%ze.Pn.T..N.... ....qB..5...n.....`...:=...[..0....k.....S.:..5!.9..G....!-..'..
|
||||
00:37:10.520319 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13930, options [nop,nop,TS val 22687109 ecr 90591987], length 0
|
||||
root@compute-0-1 @.|+..............T.V.}O..6j.d.....
|
||||
.Z-..fR.
|
||||
00:37:11.687543 IP controller0.example.com.amqp > compute-0-1.example.com.57800: Flags [.], ack 526624548, win 243, options [nop,nop,TS val 90586768 ecr 90588146], length 0
|
||||
root@compute-0-1 @.!L...
|
||||
.....(..g....c.$...........
|
||||
.f>..fC.
|
||||
00:37:11.687612 IP compute-0-1.example.com.57800 > controller0.example.com.amqp: Flags [.], ack 1, win 237, options [nop,nop,TS val 90593155 ecr 90551716], length 0
|
||||
root@compute-0-1 @..........
|
||||
...(.c.$g.......Se.....
|
||||
.fW..e..
|
||||
..................................................................................................................................................
|
||||
|
||||
```
|
||||
|
||||
To capture the packets in both HEX and ASCII format, use the **-XX** option:
|
||||
```
|
||||
[root@compute-0-1 ~]# tcpdump -c 10 -XX -i enp0s3
|
||||
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
|
||||
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
|
||||
00:39:15.124363 IP compute-0-1.example.com.ssh > 169.144.0.1.39406: Flags [P.], seq 1452640859:1452641047, ack 3062126346, win 333, options [nop,nop,TS val 90716591 ecr 22718257], length 188
|
||||
0x0000: 0a00 2700 0000 0800 27f4 f935 0800 4510 ..'.....'..5..E.
|
||||
0x0010: 00f0 5bc6 4000 4006 8afc a990 0014 a990 ..[.@.@.........
|
||||
0x0020: 0001 0016 99ee 5695 8a5b b684 570a 8018 ......V..[..W...
|
||||
0x0030: 014d 5418 0000 0101 080a 0568 39af 015a .MT........h9..Z
|
||||
0x0040: a731 adb7 58b6 1a0f 2006 df67 c9b6 4479 .1..X......g..Dy
|
||||
0x0050: 19fd 2c3d 2042 3313 35b9 a160 fa87 d42c ..,=.B3.5..`...,
|
||||
0x0060: 89a9 3d7d dfbf 980d 2596 4f2a 99ba c92a ..=}....%.O*...*
|
||||
0x0070: 3e1e 7bf7 3af2 a5cc ee4f 10bc 7dfc 630d >.{.:....O..}.c.
|
||||
0x0080: 898a 0e16 6825 56c7 b683 1de4 3526 ff04 ....h%V.....5&..
|
||||
0x0090: 68d1 4f7d babd 27ba 84ae c5d3 750b 01bd h.O}..'.....u...
|
||||
0x00a0: 9c43 e10a 33a6 8df2 a9f0 c052 c7ed 2ff5 .C..3......R../.
|
||||
0x00b0: bfb1 ce84 edfc c141 6dad fa19 0702 62a7 .......Am.....b.
|
||||
0x00c0: 306c db6b 2eea 824e eea5 acd7 f92e 6de3 0l.k...N......m.
|
||||
0x00d0: 85d0 222d f8bf 9051 2c37 93c8 506d 5cb5 .."-...Q,7..Pm\.
|
||||
0x00e0: 3b4a 2a80 d027 49f2 c996 d2d9 a9eb c1c4 ;J*..'I.........
|
||||
0x00f0: 7719 c615 8486 d84c e42d 0ba3 698c w......L.-..i.
|
||||
00:39:15.124648 IP 169.144.0.1.39406 > compute-0-1.example.com.ssh: Flags [.], ack 188, win 13971, options [nop,nop,TS val 22718260 ecr 90716591], length 0
|
||||
0x0000: 0800 27f4 f935 0a00 2700 0000 0800 4510 ..'..5..'.....E.
|
||||
0x0010: 0034 6b70 4000 4006 7c0e a990 0001 a990 .4kp@.@.|.......
|
||||
0x0020: 0014 99ee 0016 b684 570a 5695 8b17 8010 ........W.V.....
|
||||
0x0030: 3693 7c0e 0000 0101 080a 015a a734 0568 6.|........Z.4.h
|
||||
0x0040: 39af
|
||||
.......................................................................
|
||||
|
||||
```
|
||||
|
||||
That’s all from this article. I hope you got an idea of how to capture and analyze TCP/IP packets using the tcpdump command. Please do share your feedback and comments.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.linuxtechi.com/capture-analyze-packets-tcpdump-command-linux/
|
||||
|
||||
作者:[Pradeep Kumar][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:http://www.linuxtechi.com/author/pradeep/
|
||||
[1]:https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface-1024x422.jpg
|
||||
[2]:https://www.linuxtechi.com/wp-content/uploads/2018/08/N-Number-Packsets-tcpdump-interface.jpg
|
89
sources/tech/20180827 4 tips for better tmux sessions.md
Normal file
@ -0,0 +1,89 @@
|
||||
translating by lujun9972
|
||||
4 tips for better tmux sessions
|
||||
======
|
||||
|
||||
![](https://fedoramagazine.org/wp-content/uploads/2018/08/tmux-4-tips-816x345.jpg)
|
||||
|
||||
The tmux utility, a terminal multiplexer, lets you treat your terminal as a multi-paned window into your system. You can arrange the panes in the configuration you want, run different processes in each, and generally make better use of your screen. We introduced some readers to this powerful tool [in this earlier article][1]. Here are some tips that will help you get more out of tmux if you’re getting started.
|
||||
|
||||
This article assumes your current prefix key is Ctrl+b. If you’ve remapped that prefix, simply substitute your prefix in its place.
|
||||
|
||||
### Set your terminal to automatically use tmux
|
||||
|
||||
One of the biggest benefits of tmux is being able to disconnect and reconnect to sessions at will. This makes remote login sessions more powerful. Have you ever lost a connection and wished you could get back the work you were doing on the remote system? With tmux this problem is solved.
|
||||
|
||||
However, you may sometimes find yourself doing work on a remote system and realize you didn’t start a session. One way to avoid this is to have tmux start or attach every time you log in to a system with an interactive shell.
|
||||
|
||||
Add this to your remote system’s ~/.bash_profile file:
|
||||
|
||||
```
|
||||
if [ -z "$TMUX" ]; then
|
||||
tmux attach -t default || tmux new -s default
|
||||
fi
|
||||
```
|
||||
|
||||
Then log out of the remote system, and log back in with SSH. You’ll find you’re in a tmux session named default. This session will be regenerated at next login if you exit it. But more importantly, if you detach from it as normal, your work is waiting for you next time you log in — especially useful if your connection is interrupted.
|
||||
|
||||
Of course you can add this to your local system as well. Note that terminals inside most GUIs won’t use the default session automatically, because they aren’t login shells. While you can change that behavior, it may result in nesting that makes the session less usable, so proceed with caution.
|
||||
|
||||
### Use zoom to focus on a single process
|
||||
|
||||
While the point of tmux is to offer multiple windows, panes, and processes in a single session, sometimes you need to focus. If you’re in a process and need more space, or to focus on a single task, the zoom command works well. It expands the current pane to take up the entire current window space.
|
||||
|
||||
Zoom can be useful in other situations too. For instance, imagine you’re using a terminal window in a graphical desktop. Panes can make it harder to copy and paste multiple lines from inside your tmux session. If you zoom the pane, you can do a clean copy/paste of multiple lines of data with ease.
|
||||
|
||||
To zoom into the current pane, hit Ctrl+b, z. When you’re finished with the zoom function, hit the same key combo to unzoom the pane.
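The same zoom toggle is also available as a regular tmux command, which can be handy in scripts or custom bindings:

```
tmux resize-pane -Z
```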
|
||||
|
||||
### Bind some useful commands
|
||||
|
||||
By default tmux has numerous commands available. But it’s helpful to have some of the more common operations bound to keys you can easily remember. Here are some examples you can add to your ~/.tmux.conf file to make sessions more enjoyable:
|
||||
|
||||
```
|
||||
bind r source-file ~/.tmux.conf \; display "Reloaded config"
|
||||
```
|
||||
|
||||
This command rereads the commands and bindings in your config file. Once you add this binding, exit any tmux sessions and then restart one. Now after you make any other future changes, simply run Ctrl+b, r and the changes will be part of your existing session.
|
||||
|
||||
```
|
||||
bind V split-window -h
|
||||
bind H split-window
|
||||
```
|
||||
|
||||
These commands make it easier to split the current window across a vertical axis (note that’s Shift+V) or across a horizontal axis (Shift+H).
|
||||
|
||||
If you want to see how all keys are bound, use Ctrl+b, ? to see a list. You may see keys bound in copy-mode first, for when you’re working with copy and paste inside tmux. The prefix mode bindings are where you’ll see the ones you’ve added above, as in the shell example below. Feel free to experiment with your own!
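You can also inspect bindings from a shell prompt. For example, to check the split bindings added above:

```
tmux list-keys | grep split
```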
|
||||
|
||||
### Use powerline for great justice
|
||||
|
||||
[As reported in a previous Fedora Magazine article][2], the powerline utility is a fantastic addition to your shell. But it also has capabilities when used with tmux. Because tmux takes over the entire terminal space, the powerline window can provide more than just a better shell prompt.
|
||||
|
||||
[![Screenshot of tmux powerline in git folder](https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53-1024x690.png)][3]
|
||||
|
||||
If you haven’t already, follow the instructions in the [Magazine’s powerline article][4] to install that utility. Then, install the addon [using sudo][5]:
|
||||
|
||||
```
|
||||
sudo dnf install tmux-powerline
|
||||
```
|
||||
|
||||
Now restart your session, and you’ll see a spiffy new status line at the bottom. Depending on the terminal width, the default status line now shows your current session ID, open windows, system information, date and time, and hostname. If you change directory into a git-controlled project, you’ll see the branch and color-coded status as well.
|
||||
|
||||
Of course, this status bar is highly configurable as well. Enjoy your new supercharged tmux session, and have fun experimenting with it.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/4-tips-better-tmux-sessions/
|
||||
|
||||
作者:[Paul W. Frields][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://fedoramagazine.org/author/pfrields/
|
||||
[1]:https://fedoramagazine.org/use-tmux-more-powerful-terminal/
|
||||
[2]:https://fedoramagazine.org/add-power-terminal-powerline/
|
||||
[3]:https://fedoramagazine.org/wp-content/uploads/2018/08/Screenshot-from-2018-08-25-19-36-53.png
|
||||
[4]:https://fedoramagazine.org/add-power-terminal-powerline/
|
||||
[5]:https://fedoramagazine.org/howto-use-sudo/
|
112
sources/tech/20180827 An introduction to diffs and patches.md
Normal file
@ -0,0 +1,112 @@
|
||||
An introduction to diffs and patches
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
|
||||
|
||||
If you’ve ever worked on a large codebase with a distributed development model, you’ve probably heard people say things like “Sue just sent a patch,” or “Rajiv is checking out the diff.” Maybe those terms were new to you and you wondered what they meant. Open source has had an impact here, as the main development model of large projects, from the Apache web server to the Linux kernel, has been “patch-based” development throughout their lifetimes. In fact, did you know that Apache’s name originated from the set of patches that were collected and collated against the original [NCSA HTTPd server source code][1]?
|
||||
|
||||
You might think this is folklore, but an early [capture of the Apache website][2] claims that the name was derived from this original “patch” collection; hence “APAtCHy” server, which was then simplified to Apache.
|
||||
|
||||
But enough history trivia. What exactly are these patches and diffs that developers talk about?
|
||||
|
||||
First, for the sake of this article, let’s assume that these two terms reference one and the same thing. “Diff” is simply short for “difference”; a Unix utility by the same name reveals the differences between files. We will look at a diff utility example below.
|
||||
|
||||
A “patch” refers to a specific collection of differences between files that can be applied to a source code tree using the Unix diff utility. So we can create diffs (or patches) using the diff tool and apply them to an unpatched version of that same source code using the patch tool. As an aside (and breaking my rule of no more history trivia), the word “patch” comes from the physical covering of punchcard holes to make software changes in the early computing days, when punchcards represented the program executed by the computer’s processor. The image below, found on this [Wikipedia page][3] describing software patches, shows this original “patching” concept:
|
||||
|
||||
![](https://opensource.com/sites/default/files/uploads/360px-harvard_mark_i_program_tape.agr_.jpg)
|
||||
|
||||
Now that you have a basic understanding of patches and diffs, let’s explore how software developers use these tools. If you haven’t used a source code control system like [Git][4] or [Subversion][5], I will set the stage for how most non-trivial software projects are developed. If you think of the life of a software project as a set of actions along a timeline, you might visualize changes to the software—such as adding a feature or a function to a source code file or fixing a bug—appearing at different points on the timeline, with each discrete point representing the state of all the source code files at that time. We will call these points of change “commits,” using the same nomenclature that today’s most popular source code control tool, Git, uses. When you want to see the difference between the source code before and after a certain commit, or between many commits, you can use a tool to show diffs, or differences.
|
||||
|
||||
If you are developing software using this same source code control tool, Git, you may have changes in your local system that you want to provide for others to potentially add as commits to their own tree. One way to provide local changes to others is to create a diff of your local tree's changes and send this “patch” to others who are working on the same source code. This lets others patch their tree and see the source code tree with your changes applied.
|
||||
|
||||
### Linux, Git, and GitHub
|
||||
|
||||
This model of sharing patch files is how the Linux kernel community operates regarding proposed changes today. If you look at the archives for any of the popular Linux kernel mailing lists—[LKML][6] is the primary one, but others include [linux-containers][7], [fs-devel][8], [Netdev][9], to name a few—you’ll find many developers posting patches that they wish to have others review, test, and possibly bring into the official Linux kernel Git tree at some point. It is outside of the scope of this article to discuss Git, the source code control system written by Linus Torvalds, in more detail, but it's worth noting that Git enables this distributed development model, allowing patches to live separately from a main repository, pushing and pulling into different trees and following their specific development flow.
|
||||
|
||||
Before moving on, we can’t ignore the most popular service in which patches and diffs are relevant: [GitHub][10]. Given its name, you can probably guess that GitHub is based on Git, but it offers a web- and API-based workflow around the Git tool for distributed open source project development. One of the main ways that patches are shared in GitHub is not via email, like the Linux kernel, but by creating a **pull request**. When you commit changes on your own copy of a source code tree, you can share those changes by creating a pull request against a commonly shared repository for that software project. GitHub is used by many active and popular open source projects today, such as [Kubernetes][11], [Docker][12], [the Container Network Interface (CNI)][13], [Istio][14], and many others. In the GitHub world, users tend to use the web-based interface to review the diffs or patches that comprise a pull request, but you can still access the raw patch files and use them at the command line with the patch utility.
|
||||
|
||||
### Getting down to business
|
||||
|
||||
Now that we’ve covered patches and diffs and how they are used in popular open source communities or tools, let's look at a few examples.
|
||||
|
||||
The first example includes two copies of a source tree, and one has changes that we want to visualize using the diff utility. In our examples, we will look at “unified” diffs because that is the expected view for patches in most of the modern software development world. Check the diff manual page for more information on options and ways to produce differences. The original source code is located in sources-orig and our second, modified codebase is located in a directory named sources-fixed. To show the differences in a unified diff format in your terminal, use the following command:
|
||||
```
|
||||
$ diff -Naur sources-orig/ sources-fixed/
|
||||
```
|
||||
|
||||
...which then shows the following diff command output:
|
||||
```
|
||||
diff -Naur sources-orig/officespace/interest.go sources-fixed/officespace/interest.go
|
||||
--- sources-orig/officespace/interest.go 2018-08-10 16:39:11.000000000 -0400
|
||||
+++ sources-fixed/officespace/interest.go 2018-08-10 16:39:40.000000000 -0400
|
||||
@@ -11,15 +11,13 @@
|
||||
InterestRate float64
|
||||
}
|
||||
|
||||
+// compute the rounded interest for a transaction
|
||||
func computeInterest(acct *Account, t Transaction) float64 {
|
||||
|
||||
interest := t.Amount * t.InterestRate
|
||||
roundedInterest := math.Floor(interest*100) / 100.0
|
||||
remainingInterest := interest - roundedInterest
|
||||
|
||||
- // a little extra..
|
||||
- remainingInterest *= 1000
|
||||
-
|
||||
// Save the remaining interest into an account we control:
|
||||
acct.Balance = acct.Balance + remainingInterest
|
||||
```
|
||||
|
||||
The first few lines of the diff command output could use some explanation: The three `---` signs show the original filename; any lines that exist in the original file but not in the compared new file will be prefixed with a single `-` to note that this line was “subtracted” from the sources. The `+++` signs show the opposite: The compared new file and additions found in this file are marked with a single `+` symbol to show they were added in the new version of the file. Each “hunk” (that’s what sections prefixed by `@@` are called) of the difference patch file has contextual line numbers that help the patch tool (or other processors) know where to apply this change. You can see from the "Office Space" movie reference function that we’ve corrected (by removing three lines) the greed of one of our software developers, who added a bit to the rounded-out interest calculation along with a comment to our function.
|
||||
|
||||
If you want someone else to test the changes from this tree, you could save this output from diff into a patch file:
|
||||
```
|
||||
$ diff -Naur sources-orig/ sources-fixed/ >myfixes.patch
|
||||
```
|
||||
|
||||
Now you have a patch file, myfixes.patch, which can be shared with another developer to apply and test this set of changes. A fellow developer can apply the changes using the patch tool, given that their current working directory is in the base of the source code tree:
|
||||
```
|
||||
$ patch -p1 < ../myfixes.patch
|
||||
patching file officespace/interest.go
|
||||
```
|
||||
|
||||
Now your fellow developer’s source tree is patched and ready to build and test the changes that were applied via the patch. What if this developer had made changes to interest.go separately? As long as the changes do not conflict directly—for example, change the same exact lines—the patch tool should be able to solve where to merge the changes in. As an example, an interest.go file with several other changes is used in the following example run of patch:
|
||||
```
|
||||
$ patch -p1 < ../myfixes.patch
|
||||
patching file officespace/interest.go
|
||||
Hunk #1 succeeded at 26 (offset 15 lines).
|
||||
```
|
||||
|
||||
In this case, patch warns that the changes did not apply at the original location in the file, but were offset by 15 lines. If you have heavily changed files, patch may give up trying to find where the changes fit, but it does provide options (with requisite warnings in the documentation) for turning up the matching “fuzziness” (which are beyond the scope of this article).
|
||||
|
||||
If you are using Git and/or GitHub, you will probably not use the diff or patch tools as standalone tools. Git offers much of this functionality so you can use the built-in capabilities of working on a shared source tree with merging and pulling other developers’ changes. One similar capability is to use git diff to provide the unified diff output in your local tree or between any two references (a commit identifier, the name of a tag or branch, and so on). You can even create a patch file that someone not using Git might find useful by simply piping the git diff output to a file, given that it uses the exact format of the diff command that patch can consume. Of course, GitHub takes these capabilities into a web-based user interface so you can view file changes on a pull request. In this view, you will note that it is effectively a unified diff view in your web browser, and GitHub allows you to download these changes as a raw patch file.
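For instance, here is a minimal sketch of producing a patch file with git diff and applying it with the standalone patch tool (the branch name is illustrative):

```
$ git diff master > myfixes.patch
$ patch -p1 < myfixes.patch
```

The -p1 strip level works because git diff prefixes paths with a/ and b/ by default.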
|
||||
|
||||
### Summary
|
||||
|
||||
You’ve learned what a diff and a patch are, as well as the common Unix/Linux command line tools that interact with them. Unless you are a developer on a project still using a patch file-based development method—like the Linux kernel—you will consume these capabilities primarily through a source code control system like Git. But it’s helpful to know the background and underpinnings of features many developers use daily through higher-level tools like GitHub. And who knows—they may come in handy someday when you need to work with patches from a mailing list in the Linux world.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/8/diffs-patches
|
||||
|
||||
作者:[Phil Estes][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/estesp
|
||||
[1]:https://github.com/TooDumbForAName/ncsa-httpd
|
||||
[2]:https://web.archive.org/web/19970615081902/http:/www.apache.org/info.html
|
||||
[3]:https://en.wikipedia.org/wiki/Patch_(computing)
|
||||
[4]:https://git-scm.com/
|
||||
[5]:https://subversion.apache.org/
|
||||
[6]:https://lkml.org/
|
||||
[7]:https://lists.linuxfoundation.org/pipermail/containers/
|
||||
[8]:https://patchwork.kernel.org/project/linux-fsdevel/list/
|
||||
[9]:https://www.spinics.net/lists/netdev/
|
||||
[10]:https://github.com/
|
||||
[11]:https://kubernetes.io/
|
||||
[12]:https://www.docker.com/
|
||||
[13]:https://github.com/containernetworking/cni
|
||||
[14]:https://istio.io/
|
@ -0,0 +1,50 @@
|
||||
translating by lujun9972
|
||||
Solve "error: failed to commit transaction (conflicting files)" In Arch Linux
|
||||
======
|
||||
|
||||
![](https://www.ostechnix.com/wp-content/uploads/2018/06/arch_linux_wallpaper-720x340.png)
|
||||
|
||||
It’s been a month since I upgraded my Arch Linux desktop. Today, I tried to update my Arch Linux system, and ran into an error that said **“error: failed to commit transaction (conflicting files) stfl: /usr/lib/libstfl.so.0 exists in filesystem”**. It looks like one library (/usr/lib/libstfl.so.0) already exists on my filesystem, and pacman can’t upgrade it. If you’ve encountered the same error, here is a quick fix to resolve it.
|
||||
|
||||
### Solve “error: failed to commit transaction (conflicting files)” In Arch Linux
|
||||
|
||||
You have three options.
|
||||
|
||||
1. Simply exclude the problematic **stfl** package from being upgraded and try to update the system again. Refer to this guide to learn [**how to ignore a package from being upgraded**][1].
|
||||
|
||||
2. Overwrite the conflicting file using the command:
|
||||
```
|
||||
$ sudo pacman -Syu --overwrite /usr/lib/libstfl.so.0
|
||||
```
|
||||
|
||||
3. Remove the stfl library file manually and try to upgrade the system again. Please make sure the file is not needed by any important package (you can check which package owns it, as shown below), and check archlinux.org for any mentions of this conflict.
|
||||
```
|
||||
$ sudo rm /usr/lib/libstfl.so.0
|
||||
```
|
||||
|
||||
Now, try to update the system:
|
||||
```
|
||||
$ sudo pacman -Syu
|
||||
```
|
||||
|
||||
I chose the third option and just deleted the file and upgraded my Arch Linux system. It works now!
|
||||
|
||||
Hope this helps. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-solve-error-failed-to-commit-transaction-conflicting-files-in-arch-linux/
|
||||
|
||||
作者:[SK][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[lujun9972](https://github.com/lujun9972)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.ostechnix.com/author/sk/
|
||||
[1]:https://www.ostechnix.com/safely-ignore-package-upgraded-arch-linux/
|
92
sources/tech/20180827 Top 10 Raspberry Pi blogs to follow.md
Normal file
@ -0,0 +1,92 @@
|
||||
Top 10 Raspberry Pi blogs to follow
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberry-pi-juggle.png?itok=oTgGGSRA)
|
||||
|
||||
There are plenty of great Raspberry Pi fan sites, tutorials, repositories, YouTube channels, and other resources on the web. Here are my top 10 favorite Raspberry Pi blogs, in no particular order.
|
||||
|
||||
### 1. Raspberry Pi Spy
|
||||
|
||||
Raspberry Pi fan Matt Hawkins has been writing a broad range of comprehensive and informative tutorials on his site, Raspberry Pi Spy, since the early days. I have learned a lot directly from this site, and I reached out to it for help many times during my first three years in the world of hacking and making with Raspberry Pi.
|
||||
|
||||
Fortunately for everyone, this early adopter site is still going strong. I hope to see it live on, giving new community members a helping hand when they need it.
|
||||
|
||||
### 2. Adafruit
|
||||
|
||||
Adafruit is one of the biggest names in hardware hacking. The company makes and sells beautiful hardware and provides excellent tutorials written by staff, community members, and even the wonderful Lady Ada herself.
|
||||
|
||||
As well as being a webshop, Adafruit also run a blog, which is full to the brim of great content from around the world. Check out the Raspberry Pi category, especially at the end of the work week, as [Friday is Pi Day][1] at Adafruit Towers.
|
||||
|
||||
### 3. Recantha's Raspberry Pi Pod
|
||||
|
||||
Mike Horne (Recantha) is a key Pi community member in the UK who runs the [CamJam and Potton Pi & Pint][2] (two Raspberry Jams in Cambridge) and [Pi Wars][3] (an annual Pi robotics competition). He gives advice to others setting up Jams and always has time to help beginners. With his co-organizer Tim Richardson, Horne developed the CamJam Edu Kit (a series of small and affordable kits for beginners to learn physical computing with Python).
|
||||
|
||||
On top of all this, he runs the Pi Pod, a blog full of anything and everything Pi-related from around the world. It's probably the most regularly updated Pi blog on this list, so it's a great way to keep your finger on the pulse of the Pi community.
|
||||
|
||||
### 4. Raspberry Pi blog
|
||||
|
||||
Not forgetting the official [Raspberry Pi Foundation][4], this blog covers a range of content from the Foundation's world of hardware, software, education, community, and charity and youth coding clubs. Big themes on the blog are digital making at home, empowerment through education, as well as official news on hardware releases and software updates.
|
||||
|
||||
The blog has been running [since 2011][5] and provides an [archive][6] of all 1800+ posts since that time. You can also follow [@raspberrypi_otd][7] on Twitter, which is a bot I created in [Python][8] (for an [Opensource.com tutorial][9], of course). The bot tweets links to blog posts from the current day in previous years from the Raspberry Pi blog archive.
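For the curious, the core of such an "on this day" bot fits in a few lines. This is a minimal sketch rather than the real bot's code (the actual source is linked above); the `tweepy` credentials and the `posts_for_today` helper are placeholders:

```
# Hypothetical sketch of an "on this day" blog bot -- not the real bot's code.
from datetime import date

import tweepy


def posts_for_today():
    """Placeholder: return archive post URLs published on today's
    month/day in previous years (the real bot parses the blog archive)."""
    return ["https://www.raspberrypi.org/blog/first-post/"]


# Placeholder credentials -- supply your own Twitter API keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

for url in posts_for_today():
    api.update_status(f"On this day ({date.today():%d %b}) in a previous year: {url}")
```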
### 5. RasPi.tv

Another seminal Raspberry Pi community member is Alex Eames, who got on board early with his blog and YouTube channel, RasPi.tv. The site is packed with high-quality, well-produced video tutorials and written guides covering maker projects for all.

Alex makes a series of add-on boards and accessories for the Pi as [RasP.iO][10], including a handy GPIO port label, reference rulers, and more. His blog branches out into [Arduino][11], [WEMO][12], and other small boards too.

### 6. pyimagesearch

Though not strictly a Raspberry Pi blog (the "py" in the name is for "Python," not "Raspberry Pi"), this site features an extensive [Raspberry Pi category][13]. Adrian Rosebrock earned a PhD in the fields of computer vision and machine learning. His blog aims to share the machine learning tricks he picked up while studying and building his own computer vision projects.

If you want to learn about facial or object recognition using the Pi camera module, this is the place to be. Adrian's knowledge and practical application of deep learning and AI for image recognition are second to none—and he writes up his projects so that anyone can try them.

### 7. Raspberry Pi Roundup

One of the UK's official Raspberry Pi resellers, The Pi Hut, maintains a blog curating the finds of the week. It's another great resource for keeping up with what's on in the Pi world, and it's worth looking back through past issues too.

### 8. Dave Akerman

A leading expert in high-altitude ballooning, Dave Akerman shares his knowledge and experience with balloon launches at minimal cost using Raspberry Pi. He publishes writeups of his launches with photos from the stratosphere and offers tips on how to launch a Pi balloon yourself.

Check out Dave's blog for amazing photography from near space.

### 9. Pimoroni

A world-renowned Raspberry Pi reseller based in Sheffield in the UK, Pimoroni made the famous [Pibow Rainbow case][14] and followed it up with a host of incredible custom add-on boards and accessories.

Pimoroni's blog is laid out as beautifully as its hardware design and branding, and it provides great content for makers and hobbyists at home. The blog accompanies the company's entertaining YouTube channel, [Bilge Tank][15].

### 10. Stuff About Code

Martin O'Hanlon is a Pi community member-turned-Foundation employee who started out hacking Minecraft on the Pi for fun and recently joined the Foundation as a content writer. Luckily, Martin's new job hasn't stopped him from updating his blog and sharing useful tidbits with the world. As well as lots on Minecraft, you'll find material on the Python libraries [Blue Dot][16] and [guizero][17], along with general Raspberry Pi tips.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/8/top-10-raspberry-pi-blogs-follow

作者:[Ben Nuttall][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bennuttall
[1]:https://blog.adafruit.com/category/raspberry-pi/
[2]:https://camjam.me/?page_id=753
[3]:https://piwars.org/
[4]:https://www.raspberrypi-spy.co.uk/
[5]:https://www.raspberrypi.org/blog/first-post/
[6]:https://www.raspberrypi.org/blog/archive/
[7]:https://twitter.com/raspberrypi_otd
[8]:https://github.com/bennuttall/rpi-otd-bot/blob/master/src/bot.py
[9]:https://opensource.com/article/17/8/raspberry-pi-twitter-bot
[10]:https://rasp.io/
[11]:https://www.arduino.cc/
[12]:http://community.wemo.com/
[13]:https://www.pyimagesearch.com/category/raspberry-pi/
[14]:https://shop.pimoroni.com/products/pibow-for-raspberry-pi-3-b-plus
[15]:https://www.youtube.com/channel/UCuiDNTaTdPTGZZzHm0iriGQ
[16]:https://bluedot.readthedocs.io/en/latest/#
[17]:https://lawsie.github.io/guizero/
@ -3,33 +3,33 @@
|
||||
|
||||
> 看着我们在纽约的办公大楼,我们发现了一种观察不断变化的云原生领域的完美方式。
|
||||
|
||||
在 Packet,我们的工作价值(基础设施自动化)是非常基础的。因此,我们花费大量的时间来研究我们之上所有生态系统中的参与者和趋势 —— 以及之下的极少数!
|
||||
在 Packet,我们的工作价值(<ruby>基础设施<rt>infrastructure</rt></ruby>自动化)是非常基础的。因此,我们花费大量的时间来研究我们之上所有生态系统中的参与者和趋势 —— 以及之下的极少数!
|
||||
|
||||
当你在任何生态系统的汪洋大海中徜徉时,很容易困惑或迷失方向。我知道这是事实,因为当我去年进入 Packet 工作时,从 Bryn Mawr 获得的英语学位,并没有让我完全得到一个 Kubernetes 的认证。 :)
|
||||
当你在任何生态系统的汪洋大海中徜徉时,很容易困惑或迷失方向。我知道这是事实,因为当我去年进入 Packet 工作时,从 Bryn Mawr 获得的英语学位,并没有让我完全得到一个 [Kubernetes][Kubernetes] 的认证。:)
|
||||
|
||||
由于它超快的演进和巨大的影响,云原生生态系统打破了先例。似乎每眨一次眼睛,之前全新的技术(更不用说所有相关的理念了)就变得有意义 ... 或至少有趣了。和其他许多人一样,我依据无处不在的 CNCF 的 “[云原生蓝图][1]” 作为我去了解这个空间的参考标准。尽管如此,如果有一个定义这个生态系统的元素,那它一定是贡献和控制它们的人。
|
||||
由于它超快的演进和巨大的影响,云原生生态系统打破了先例。似乎每眨一次眼睛,之前全新的技术(更不用说所有相关的理念了)就变得有意义……或至少有趣了。和其他许多人一样,我依据无处不在的 [CNCF][CNCF] 的 “[云原生蓝图][1]” 作为我去了解这个空间的参考标准。尽管如此,如果有一个定义这个生态系统的元素,那它一定是贡献和引领它们的人。
|
||||
|
||||
所以,在 12 月份一个很冷的下午,当我们走回办公室时,我们偶然发现了一个给投资人解释“云原生”的创新方式,当我们谈到从 Aporeto 中区分 Cilium 的细微差别时,以及为什么从 CoreDNS 和 Spiffe 到 Digital Rebar 和 Fission 的所有这些都这么有趣时,他的眼睛中充满了兴趣。
|
||||
所以,在 12 月份一个很冷的下午,当我们走回办公室时,我们偶然发现了一个给投资人解释“云原生”的创新方式,当我们谈到从 [Aporeto][Aporeto] 中区分 [Cilium][Cilium] 的细微差别时,以及为什么从 [CoreDNS][CoreDNS] 和 [Spiffe][Spiffe] 到 [Digital Rebar][Digital Rebar] 和 [Fission][Fission] 的所有这些都这么有趣时,他的眼里充满了兴趣。
|
||||
|
||||
在新世贸中心的阴影下,看到我们位于 13 层的狭窄办公室,我突然想到一个把我们带到那个神奇世界的好主意:为什么不把它画出来呢?(LCTT 译注:“rabbit hole” 有多种含义,此处采用“爱丽丝梦游仙境”中的“兔子洞”含义。)
|
||||
在新世贸中心的影子里向我们位于 13 层的狭窄办公室望去,我们突然想到一个把我们带到那个神奇世界的好主意:为什么不把它画出来呢?(LCTT 译注:“rabbit hole” 有多种含义,此处采用“爱丽丝梦游仙境”中的“兔子洞”含义。)
|
||||
|
||||
![][2]
|
||||
|
||||
于是,我们开始了把云原生栈逐层拼接起来的旅程。让我们一起探索它,给你一个“仅限今日有效”的福利。(LCTT 译注:意即云原生领域变化很快,可能本文/本图中所述很快过时。)
|
||||
|
||||
[[查看高清大图][3]] (25Mb)或给我们发邮件索取副本。
|
||||
[查看高清大图][3](25Mb)或给我们发邮件索取副本。
|
||||
|
||||
### 从最底层开始
|
||||
|
||||
当我们开始下笔的时候,我们知道,我们希望首先亮出的是我们每天都与之交互的而对用户却是基本上不可见的那一部分:硬件。就像任何投资于下一个伟大的(通常是私有的)东西的秘密实验室一样,我们认为地下室是其最好的地点。
|
||||
当我们开始下笔的时候,我们希望首先亮出的是我们每天都在打交道的那一部分:硬件,但我们知道那对用户却是基本上不可见的。就像任何投资于下一个伟大的(通常是私有的)东西的秘密实验室一样,我们认为地下室是其最好的地点。
|
||||
|
||||
从大家公认的像 Intel、AMD 和华为(传言他们雇佣的工程师接近 80000 名)这样的巨头,到像 Mellanox 这样的细分市场参与者,硬件生态系统现在非常火。事实上,随着数十亿美元投入去攻克新的 offload(LCTT 译注:网卡的 offload 特性是将本来该操作系统进行的一些诸如数据包分片、重组等处理任务放到网卡硬件中去做,降低系统 CPU 消耗的同时,提高处理的性能)、GPU、定制协处理器,我们可能正在进入硬件的黄金时代。
|
||||
从大家公认的像 Intel、AMD 和华为(传言他们雇佣的工程师接近 80000 名)这样的巨头,到像 [Mellanox][Mellanox] 这样的细分市场参与者,硬件生态系统现在非常火。事实上,随着数十亿美元投入去攻克新的 offload(LCTT 译注:offload 泛指以前由软件及 CPU 来完成的工作,现在通过硬件来完成,以提升速度并降低 CPU 负载的做法)、GPU、定制协处理器,我们可能正在进入硬件的黄金时代。
|
||||
|
||||
著名的软件先驱<ruby>阿伦凯<rt>Alan Kay</rt></ruby> 在 25 年前说过:“对软件非常认真的人都应该去制造他自己的硬件” ,为阿伦凯打 call!
|
||||
著名的软件先驱[艾伦·凯][Alan Kay](Alan Kay)在 25 年前说过:“真正认真对待软件的人应该自己创造硬件”。说得不错,Alan!
|
||||
|
||||
### 云即资本
|
||||
|
||||
就像我们的 CEO Zac Smith 多次告诉我:所有都是钱的问题。不仅要制造它,还要消费它!在云中,数十亿美元的投入才能让数据中心出现计算机,这样才能让开发者使用软件去消费它。换句话说(根本没云,它只是别人的电脑而已):
|
||||
就像我们的 CEO Zac Smith 多次跟我说的:都是钱的问题。不仅要制造它,还要消费它!在云中,数十亿美元的投入才能让数据中心出现计算机,这样才能让开发者消费它。换句话说(根本没云,它只是别人的电脑而已):
|
||||
|
||||
![][4]
|
||||
|
||||
@ -39,45 +39,45 @@
|
||||
|
||||
### 连通和动力
|
||||
|
||||
如果金钱是燃料,那么消耗大量燃料的引擎就是数据中心供应商和连接它们的网络。我们称他们为“动力”和“连通”。
|
||||
如果金钱是燃料,那么消耗大量燃料的引擎就是数据中心供应商和连接它们的网络。我们称他们为“连通”和“动力”。
|
||||
|
||||
从像 Equinix 这样处于核心的和像 Vapor.io 这样的接入新贵,到 Verizon、Crown Castle 和其它的处于地下(或海底)的“管道”,这是我们所有的栈都依赖但很少有人能看到的一部分。
|
||||
从像 [Equinix][Equinix] 这样处于核心地位的接入商的和像 [Vapor.io][Vapor.io] 这样的接入新贵,到 [Verizon][Verizon]、[Crown Castle][Crown Castle] 和其它接入商铺设在地下(或海底)的“管道”,这是我们所有的栈都依赖但很少有人能看到的一部分。
|
||||
|
||||
因为我们花费大量的时间去研究数据中心和连通性,需要注意的一件事情是,这一部分的变化非常快,尤其是在 5G 正式商用时,某些负载开始不再那么依赖中心化的基础设施了。
|
||||
|
||||
接入即将到来! :-)
|
||||
边缘接入即将到来!:-)
|
||||
|
||||
![][6]
|
||||
|
||||
### 嗨,它就是基础设施!
|
||||
|
||||
居于“连接”和“动力”之上的这一层,我们爱称为“处理器们”。这是奇迹发生的地方 —— 我们将来自下层的创新和实物投资转变成一个 API 尽头的某些东西。
|
||||
居于“连通”和“动力”之上的这一层,我们爱称为“处理器层”。这是奇迹发生的地方 —— 我们将来自下层的创新和实物投资转变成一个 API 终端的某些东西。
|
||||
|
||||
由于这是纽约的一个大楼,我们让在这里的云供应商处于纽约的中心。这就是为什么你会看到(Digital Ocean 系的)鲨鱼 Sammy,以及 Google 出现在会客室中的原因了。
|
||||
由于这是纽约的一个大楼,我们让在这里的云供应商处于纽约的中心。这就是为什么你会看到([Digital Ocean][Digital Ocean] 系的)鲨鱼 Sammy 和在 Google 之上的 “meet me” 的房间中和我打招呼的原因了。
|
||||
|
||||
正如你所见,这个场景是非常写实的。它就是一垛一垛堆起来的。尽管我们爱 EWR1 的设备经理(Michael Pedrazzini),我们努力去尽可能减少这种体力劳动。毕竟布线专业的博士学位是很难拿到的。
|
||||
正如你所见,这个场景是非常写实的。它是由多层机架堆叠起来的。尽管我们爱 EWR1 的设备经理(Michael Pedrazzini),我们努力去尽可能减少这种体力劳动。毕竟布线专业的博士学位是很难拿到的。
|
||||
|
||||
![][7]
|
||||
|
||||
### 供给
|
||||
|
||||
再上一层,在基础设施层之上是供给层。这是我们最喜欢的地方之一,它以前被我们称为“配置管理”。但是现在到处都是一开始就是<ruby>不可变基础设施<rt>immutable infrastructure</rt></ruby>和自动化:Terraform、Ansible、Quay.io 等等类似的东西。你可以看出软件是按它的方式来工作的,对吗?
|
||||
再上一层,在基础设施层之上是供给层。这是我们最喜欢的地方之一,它以前被我们称为<ruby>配置管理<rt>config management</rt></ruby>。但是现在到处都是一开始就是<ruby>不可变基础设施<rt>immutable infrastructure</rt></ruby>和自动化:[Terraform][Terraform]、[Ansible][Ansible]、[Quay.io][Quay.io] 等等类似的东西。你可以看出软件是按它的方式来工作的,对吗?
|
||||
|
||||
Kelsey Hightower 最近写道“呆在无聊的基础设施中是一个让人兴奋的时刻”,我不认为它说的是物理部分(虽然我们认为它非常让人兴奋),但是由于软件持续侵入到栈的所有层,那必将是一个疯狂的旅程。
|
||||
Kelsey Hightower 最近写道“呆在无聊的基础设施中是一个让人兴奋的时刻”,我不认为这说的是物理部分(虽然我们认为它非常让人兴奋),但是由于软件持续侵入到栈的所有层,那必将是一个疯狂的旅程。
|
||||
|
||||
![][8]
|
||||
|
||||
### 操作系统
|
||||
|
||||
供应就绪后,我们来到操作系统层。在这里你可以看到我们打趣一些我们最喜欢的同事:注意上面 Brian Redbeard 的瑜珈姿势。:)
|
||||
供应就绪后,我们来到操作系统层。在这里你可以看到我们打趣一些我们最喜欢的同事:注意上面 Brian Redbeard 那超众的瑜珈姿势。:)
|
||||
|
||||
Packet 为我们的客户提供了 11 种主要的操作系统可供选择,包括一些你在图中看到的:Ubuntu、CoreOS、FreeBSD、Suse、和各种 Red Hat 的作品。我们看到越来越多的人们在这一层上有了他们自己的看法:从定制的内核和用于不可变部署中的惯用发行版光盘,到像 NixOS 和 LinuxKit 这样的项目。
|
||||
Packet 为客户提供了 11 种主要的操作系统可供选择,包括一些你在图中看到的:[Ubuntu][Ubuntu]、[CoreOS][CoreOS]、[FreeBSD][FreeBSD]、[Suse][Suse] 和各种 [Red Hat][Red Hat] 系的发行版。我们看到越来越多的人们在这一层上加入了他们自己的看法:从定制内核和用于不可变部署的<ruby>黄金镜像<rt>golden images</rt></ruby>(LCTT 译注:golden image 指定型的镜像或模板,一般是经过一些定制,并做快照和版本控制,由此可拷贝出大量与此镜像一致的开发、测试或部署环境,也有人称作 master image),到像 [NixOS][NixOS] 和 [LinuxKit][LinuxKit] 这样的项目。
|
||||
|
||||
![][9]
|
||||
|
||||
### 运行时
|
||||
|
||||
为了有趣些,我们将运行时放在了体育馆内,并为 CoreOS 赞助的 rkt 和 Docker 的容器化举行了一次比赛。而无论如何赢家都是 CNCF!
|
||||
为了有趣些,我们将<ruby>运行时<rt>runtime</rt></ruby>放在了体育馆内,并为 CoreOS 赞助的 [rkt][rkt] 和 [Docker][Docker] 的容器化举行了一次比赛。而无论如何赢家都是 CNCF!
|
||||
|
||||
我们认为快速演进的存储生态系统应该是一些可上锁的储物柜。关于存储部分有趣的地方在于许多的新玩家尝试去解决持久性的挑战问题,以及性能和灵活性问题。就像他们说的:存储很简单。
|
||||
|
||||
@ -85,7 +85,7 @@ Packet 为我们的客户提供了 11 种主要的操作系统可供选择,包
|
||||
|
||||
### 编排
|
||||
|
||||
在过去的这些年里,编排层全是 Kubernetes 了,因此我们选取了其中一位著名的布道者(Kelsey Hightower),并在这个古怪的会议场景中给他一个特写。在我们的团队中有一些 Nomad (LCTT 译注:一个管理机器集群并在集群上运行应用程序的工具)的忠实粉丝,并且如果抛开 Docker 和它的工具集的影响,就无从谈起云原生。
|
||||
在过去的这一年里,编排层全是 Kubernetes 了,因此我们选取了其中一位著名的布道者(Kelsey Hightower),并在这个古怪的会议场景中给他一个特写。在我们的团队中有一些 [Nomad][Nomad](LCTT 译注:一个管理机器集群并在集群上运行应用程序的工具)的忠实粉丝,并且如果抛开 Docker 和它的工具集的影响,就无从谈起云原生。
|
||||
|
||||
虽然负载编排应用程序在我们栈中的地位非常高,我们看到的各种各样的证据表明,这些强大的工具开始去深入到栈中,以帮助用户利用 GPU 和其它特定硬件的优势。请继续关注 —— 我们正处于容器化革命的早期阶段!
|
||||
|
||||
@ -93,9 +93,9 @@ Packet 为我们的客户提供了 11 种主要的操作系统可供选择,包
|
||||
|
||||
### 平台
|
||||
|
||||
这是栈中我们喜欢的层之一,因为每个平台都有如此多的工具帮助用户去完成他们想要做的事情(顺便说一下,不是去运行容器,而是运行应用程序)。从 Rancher 和 Kontena,到 Tectonic 和 Redshift 都是像 Cycle.io 和 Flynn.io 一样是完全不同的方法 —— 我们看到这些项目如何以不同的方式为用户提供服务,总是激动不已。
|
||||
这是栈中我们喜欢的层之一,因为每个平台都有如此多的工具帮助用户去完成他们想要做的事情(顺便说一下,不是去运行容器,而是运行应用程序)。从 [Rancher][Rancher] 和 [Kontena][Kontena],到 [Tectonic][Tectonic] 和 [Redshift][Redshift] 都是像 [Cycle.io][Cycle.io] 和 [Flynn.io][Flynn.io] 一样是完全不同的方法 —— 我们看到这些项目如何以不同的方式为用户提供服务,总是激动不已。
|
||||
|
||||
关键点:这些平台是帮助去转化各种各样的快速变化的云原生生态系统给用户。很高兴能看到他们每个人带来的东西!
|
||||
关键点:这些平台是帮助用户转化云原生生态系统中各种各样的快速变化的部分。很高兴能看到他们各自带来的东西!
|
||||
|
||||
![][12]
|
||||
|
||||
@ -103,39 +103,39 @@ Packet 为我们的客户提供了 11 种主要的操作系统可供选择,包
|
||||
|
||||
当说到安全时,今年真是很忙的一年!我们尝试去展示一些很著名的攻击,并说明随着工作负载变得更加分散和更加可迁移(当然,同时攻击者也变得更加智能),这些各式各样的工具是如何去帮助保护我们的。
|
||||
|
||||
我们看到一个用于不可信环境(如 Aporeto)和低级安全(Cilium)的强大动作,以及尝试在网络级别上的像 Tigera 这样的可信方法。不管你的方法如何,记住这一点:安全无止境。:0
|
||||
我们看到一个用于不可信环境(如 Aporeto)和低级安全(Cilium)的强大动作,以及尝试在网络级别上的像 [Tigera][Tigera] 这样的可信方法。不管你的方法如何,记住这一点:安全无止境。:0
|
||||
|
||||
![][13]
|
||||
|
||||
### 应用程序
|
||||
|
||||
如何去表示海量的、无限的应用程序生态系统?在这个案例中,很容易:我们在纽约,选我们最喜欢的。;) 从 Postgres “房间里的大象” 和 Timescale 时钟,到暗藏的 ScyllaDB 垃圾桶和无所事事的《特拉维斯兄弟》—— 我们把这个片子拼到一起很有趣。
|
||||
如何去表示海量的、无限的应用程序生态系统?在这个案例中,很容易:我们在纽约,选我们最喜欢的。;) 从 [Postgres][Postgres] “房间里的大象” 和 [Timescale][Timescale] 时钟,到鬼鬼祟祟的 [ScyllaDB][ScyllaDB] 垃圾桶和那个悠闲的 [Travis][Travis] 哥们 —— 我们把这个片子拼到一起很有趣。
|
||||
|
||||
让我们感到很惊奇的一件事情是:很少有人注意到那个复印他的屁股的家伙。我想现在复印机已经不常见了吧?
|
||||
让我们感到很惊奇的一件事情是:很少有人注意到那个复印屁股的家伙。我想现在复印机已经不常见了吧?
|
||||
|
||||
![][14]
|
||||
|
||||
### 可观测性
|
||||
|
||||
由于我们的工作负载开始到处移动,规模也越来越大,这里没有一件事情能够像一个非常好用的 Grafana 仪表盘、或方便的 Datadog 代理让人更加欣慰了。由于复杂度的提升,“SRE” 一代开始越来越多地依赖警报和其它智能事件去帮我们感知发生的事件,出现越来越多的自我修复的基础设施和应用程序。
|
||||
由于我们的工作负载开始到处移动,规模也越来越大,这里没有一件事情能够像一个非常好用的 [Grafana][Grafana] 仪表盘、或方便的 [Datadog][Datadog] 代理让人更加欣慰了。由于复杂度的提升,[SRE][SRE] 时代开始越来越多地依赖监控告警和其它智能事件去帮我们感知发生的事件,出现越来越多的自我修复的基础设施和应用程序。
|
||||
|
||||
在未来的几个月或几年中,我们将看到什么样的公司进入这一领域 … 或许是一些人工智能、区块链、机器学习支撑的仪表盘?:-)
|
||||
在未来的几个月或几年中,我们将看到什么样的面孔进入这一领域……或许是一些人工智能、区块链、机器学习支撑的仪表盘?:-)
|
||||
|
||||
![][15]
|
||||
|
||||
### 流量管理
|
||||
|
||||
人们往往认为互联网“就该这样工作”,但事实上,我们很惊讶于它能工作。我的意思是,这是大规模的、不同的网络间的松散连接 —— 你不是在开玩笑吧?
|
||||
人们往往认为互联网“只是能工作而已”,但事实上,我们很惊讶于它居然能如此工作。我的意思是,就这些大规模的、不同的网络间的松散连接 —— 你不是在开玩笑吧?
|
||||
|
||||
能够把所有的这些独立的网络拼接到一起的一个原因是流量管理、DNS 和类似的东西。随着规模越来越大,这些让互联网变得更快、更安全、同时更具弹性。我们尤其高兴的是看到像 Fly.io 和 NS1 这样的新贵与优秀的老牌玩家进行竞争,最后的结果是整个生态系统都得以提升。让竞争来的更激烈吧!
|
||||
能够把所有的这些独立的网络拼接到一起的一个原因是流量管理、DNS 和类似的东西。随着规模越来越大,这些让互联网变得更快、更安全、同时更具弹性。我们尤其高兴的是看到像 [Fly.io][Fly.io] 和 [NS1][NS1] 这样的新贵与优秀的老牌玩家进行竞争,最后的结果是整个生态系统都得以提升。让竞争来的更激烈吧!
|
||||
|
||||
![][16]
|
||||
|
||||
### 用户
|
||||
|
||||
如果没有非常棒的用户,技术栈还有什么用呢?确实,他们享受了大量的创新,但在云原生的世界里,他们所做的远不止消费这么简单:他们创立并贡献了很多。从像 Kubernetes 这样的大量的贡献者到越来越多的(但同样重要)更多方面,而我们都是其中的非常棒的一份子。
|
||||
如果没有非常棒的用户,技术栈还有什么用呢?确实,他们享受了大量的创新,但在云原生的世界里,他们所做的远不止消费这么简单:他们也创造并贡献了很多。从像 Kubernetes 这样的大量的贡献者到越来越多的(但同样重要)更多方面,而我们都是其中的非常棒的一份子。
|
||||
|
||||
在我们屋顶的客厅上的许多用户,比如 Ticketmaster 和《纽约时报》,而不仅仅是新贵:这些组织采用了一种新的方式去部署和管理他们的应用程序,并且他们自己的用户正在收获回报。
|
||||
在我们屋顶上有许多悠闲的用户,比如 [Ticketmaster][Ticketmaster] 和[《纽约时报》][New York Times],而不仅仅是新贵:这些组织拥抱了部署和管理应用程序的方法的变革,并且他们的用户正在享受变革带来的回报。
|
||||
|
||||
![][17]
|
||||
|
||||
@ -143,7 +143,7 @@ Packet 为我们的客户提供了 11 种主要的操作系统可供选择,包
|
||||
|
||||
在以前的生态系统中,基金会扮演了一个非常被动的“幕后”角色。而 CNCF 不是!他们的目标(构建一个健壮的云原生生态系统),勇立潮流之先 —— 他们不仅已迎头赶上还一路领先。
|
||||
|
||||
从坚实的治理和经过深思熟虑的项目组,到提出像 CNCF 这样的蓝图,CNCF 横跨云 CI、Kubernetes 认证、和演讲者委员会 —— CNCF 已不再是 “仅仅” 受欢迎的 KubeCon + CloudNativeCon 了。
|
||||
从坚实的治理和经过深思熟虑的项目组,到提出像 CNCF 这样的蓝图,CNCF 横跨云 CI、Kubernetes 认证、和讲师团 —— CNCF 已不再是 “仅仅” 受欢迎的 [KubeCon + CloudNativeCon][KCCNC] 了。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
@ -152,25 +152,72 @@ via: https://www.packet.net/blog/splicing-the-cloud-native-stack/
|
||||
作者:[Zoe Allen][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[qhwdw](https://github.com/qhwdw)
|
||||
校对:[wxy](https://github.com/wxy)
|
||||
校对:[wxy](https://github.com/wxy)、[pityonline](https://github.com/pityonline)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://www.packet.net/about/zoe-allen/
|
||||
[1]:https://landscape.cncf.io/landscape=cloud
|
||||
[2]:https://assets.packet.net/media/images/PIFg-30.vesey.street.ny.jpg
|
||||
[3]:https://www.dropbox.com/s/ujxk3mw6qyhmway/Packet_Cloud_Native_Building_Stack.jpg?dl=0
|
||||
[4]:https://assets.packet.net/media/images/3vVx-there.is.no.cloud.jpg
|
||||
[5]:https://assets.packet.net/media/images/X0b9-the.bank.jpg
|
||||
[6]:https://assets.packet.net/media/images/2Etm-ping.and.power.jpg
|
||||
[7]:https://assets.packet.net/media/images/C800-infrastructure.jpg
|
||||
[8]:https://assets.packet.net/media/images/0V4O-provisioning.jpg
|
||||
[9]:https://assets.packet.net/media/images/eMYp-operating.system.jpg
|
||||
[10]:https://assets.packet.net/media/images/9BII-run.time.jpg
|
||||
[11]:https://assets.packet.net/media/images/njak-orchestration.jpg
|
||||
[12]:https://assets.packet.net/media/images/1QUS-platforms.jpg
|
||||
[13]:https://assets.packet.net/media/images/TeS9-security.jpg
|
||||
[14]:https://assets.packet.net/media/images/SFgF-apps.jpg
|
||||
[15]:https://assets.packet.net/media/images/SXoj-observability.jpg
|
||||
[16]:https://assets.packet.net/media/images/tKhf-traffic.management.jpg
|
||||
[17]:https://assets.packet.net/media/images/7cpe-users.jpg
|
||||
[a]: https://www.packet.net/about/zoe-allen/
|
||||
[1]: https://landscape.cncf.io/landscape=cloud
|
||||
[2]: https://assets.packet.net/media/images/PIFg-30.vesey.street.ny.jpg
|
||||
[3]: https://www.dropbox.com/s/ujxk3mw6qyhmway/Packet_Cloud_Native_Building_Stack.jpg?dl=0
|
||||
[4]: https://assets.packet.net/media/images/3vVx-there.is.no.cloud.jpg
|
||||
[5]: https://assets.packet.net/media/images/X0b9-the.bank.jpg
|
||||
[6]: https://assets.packet.net/media/images/2Etm-ping.and.power.jpg
|
||||
[7]: https://assets.packet.net/media/images/C800-infrastructure.jpg
|
||||
[8]: https://assets.packet.net/media/images/0V4O-provisioning.jpg
|
||||
[9]: https://assets.packet.net/media/images/eMYp-operating.system.jpg
|
||||
[10]: https://assets.packet.net/media/images/9BII-run.time.jpg
|
||||
[11]: https://assets.packet.net/media/images/njak-orchestration.jpg
|
||||
[12]: https://assets.packet.net/media/images/1QUS-platforms.jpg
|
||||
[13]: https://assets.packet.net/media/images/TeS9-security.jpg
|
||||
[14]: https://assets.packet.net/media/images/SFgF-apps.jpg
|
||||
[15]: https://assets.packet.net/media/images/SXoj-observability.jpg
|
||||
[16]: https://assets.packet.net/media/images/tKhf-traffic.management.jpg
|
||||
[17]: https://assets.packet.net/media/images/7cpe-users.jpg
|
||||
[Kubernetes]: https://kubernetes.io/
|
||||
[CNCF]: https://www.cncf.io/
|
||||
[Aporeto]: https://www.aporeto.com/
|
||||
[Cilium]: https://cilium.io/
|
||||
[CoreDNS]: https://coredns.io/
|
||||
[Spiffe]: https://spiffe.io/
|
||||
[Digital Rebar]: http://rebar.digital/
|
||||
[Fission]: https://fission.io/
|
||||
[Mellanox]: http://www.mellanox.com/
|
||||
[Alan Kay]: https://en.wikipedia.org/wiki/Alan_Kay
|
||||
[Equinix]: https://www.equinix.com/
|
||||
[Vapor.io]: https://www.vapor.io/
|
||||
[Verizon]: https://www.verizon.com/
|
||||
[Crown Castle]: http://www.crowncastle.com/
|
||||
[Digital Ocean]: https://www.digitalocean.com/
|
||||
[Terraform]: https://www.terraform.io/
|
||||
[Ansible]: https://www.ansible.com/
|
||||
[Quay.io]: https://quay.io/
|
||||
[Ubuntu]: https://www.ubuntu.com/
|
||||
[CoreOS]: https://coreos.com/
|
||||
[FreeBSD]: https://www.freebsd.org/
|
||||
[Suse]: https://www.suse.com/
|
||||
[Red Hat]: https://www.redhat.com/
|
||||
[NixOS]: https://nixos.org/
|
||||
[LinuxKit]: https://github.com/linuxkit/linuxkit
|
||||
[rkt]: https://coreos.com/rkt/
|
||||
[Docker]: https://www.docker.com/
|
||||
[Nomad]: https://www.nomadproject.io/
|
||||
[Rancher]: https://rancher.com/
|
||||
[Kontena]: https://kontena.io/
|
||||
[Tectonic]: https://coreos.com/tectonic/
|
||||
[Redshift]: https://aws.amazon.com/redshift/
|
||||
[Cycle.io]: https://cycle.io/
|
||||
[Flynn.io]: https://flynn.io/
|
||||
[Tigera]: https://www.tigera.io/
|
||||
[Postgres]: https://www.postgresql.org/
|
||||
[Timescale]: https://www.timescale.com/
|
||||
[ScyllaDB]: https://www.scylladb.com/
|
||||
[Travis]: https://travis-ci.com/
|
||||
[Grafana]: https://grafana.com/
|
||||
[Datadog]: https://www.datadoghq.com/
|
||||
[SRE]: https://en.wikipedia.org/wiki/Site_Reliability_Engineering
|
||||
[Fly.io]: https://fly.io/
|
||||
[NS1]: https://ns1.com/
|
||||
[Ticketmaster]: https://www.ticketmaster.com/
|
||||
[New York Times]: https://www.nytimes.com/
|
||||
[KCCNC]: https://www.cncf.io/community/kubecon-cloudnativecon-events/
|
||||
|
@ -1,32 +1,36 @@
|
||||
Translating by DavidChenLiang
|
||||
|
||||
|
||||
|
||||
How To View Detailed Information About A Package In Linux
|
||||
如何在 Linux 上查看一个包(package)的详细信息
|
||||
======
|
||||
This is know topic and we can write so many articles because most of the time we would stick with package managers for many reasons.
|
||||
|
||||
Each distribution clones has their own package manager, each has comes with their unique features that allow users to perform many actions such as installing new software packages, removing unnecessary software packages, updating the existing software packages, searching for specific software packages, and updating the system to latest available version, etc.
|
||||
|
||||
Whoever is sticking with command-line most of the time they would preferring the CLI based package managers. The major CLI package managers for Linux are Yum, Dnf, Rpm,Apt, Apt-Get, Deb, pacman and zypper.
|
||||
我们可以就这个已经被广泛讨论的话题写出大量的文档,大多数情况下,因为各种各样的原因,我们都愿意让包管理器(package manager)来帮我们做这些事情。
|
||||
|
||||
**Suggested Read :**
|
||||
每个Linux发行版都有自己的包管理器,并且每个都有各自有不同的特性,这些特性包括允许用户执行安装新软件包,删除无用的软件包,更新现存的软件包,搜索某些具体的软件包,以及更新整个系统到其最新的状态之类的操作。
|
||||
|
||||
习惯于命令行的用户大多数时间都会使用基于命令行方式的包管理器。对于Linux而言,这些基于命令行的包管理器有Yum,Dnf, Rpm, Apt, Apt-Get, Deb, pacman 和zypper.
|
||||
|
||||
|
||||
**推荐阅读**
|
||||
**(#)** [List of Command line Package Managers For Linux & Usage][1]
|
||||
**(#)** [A Graphical frontend tool for Linux Package Manager][2]
|
||||
**(#)** [How To Search If A Package Is Available On Your Linux Distribution Or Not][3]
|
||||
**(#)** [How To Add, Enable And Disable A Repository By Using The DNF/YUM Config Manager Command On Linux][4]
|
||||
|
||||
As a system administrator you should aware of from where packages are coming, which repository, version of the package, size of the package, release, package source url, license info, etc,.
|
||||
|
||||
This will help you to understand the package usage in simple way since it’s coming with package summary & Description. Run the below commands based on your distribution to get detailed information about given package.
|
||||
作为一个系统管理员你应该熟知以下事实:安装包来自何方,具体来自哪个软件仓库,包的具体版本,包的大小,发行版的版本,包的源URL,包的许可证信息,等等等等。
|
||||
|
||||
### [YUM Command][5] : View Package Information On RHEL & CentOS Systems
|
||||
|
||||
YUM stands for Yellowdog Updater, Modified is an open-source command-line front-end package-management utility for RPM based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.
|
||||
这篇短文将用尽可能简单的方式帮你理解包管理器的用法,这些用法正是来自随包自带的总结和描述文件。按你所使用的Linux发行版的不同,运行下面相应的命令,你能得到你所使用的发行版下的包的详细信息。
|
||||
|
||||
### [YUM 命令][5] : 在RHEL和CentOS系统上获得包的信息
|
||||
|
||||
|
||||
YUM 英文直译是黄狗更新器--修改版,它是一个开源的基于命令行的包管理器前端实用工具。它被广泛应用在基于RPM的系统上,例如:RHEL和CentOS。
|
||||
|
||||
Yum是用于在官方发行版仓库以及其他第三方发行版仓库下获取,安装,删除,查询RPM包的主要工具。
|
||||
|
||||
Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.
|
||||
```
|
||||
# yum info python
|
||||
# yum info python(LCTT译注:用yum info 获取python包的信息)
|
||||
Loaded plugins: fastestmirror, security
|
||||
Loading mirror speeds from cached hostfile
|
||||
* epel: epel.mirror.constant.com
|
||||
@ -60,11 +64,13 @@ Description : Python is an interpreted, interactive, object-oriented programming
|
||||
|
||||
```
|
||||
|
||||
### YUMDB Command : View Package Information On RHEL & CentOS Systems
|
||||
### YUMDB 命令: 查看RHEL和CentOS系统上的包信息
|
||||
|
||||
|
||||
Yumdb info这个命令提供与yum info相类似的的信息,不过它还额外提供了诸如包校验值,包类型,用户信息(由何人安装)。从yum 3.2.26版本后,yum开始在rpm数据库外储存额外的信息了(下文输出的用户信息指该python由该用户安装,而dep说明该包是被作为被依赖的包而被安装的)。
|
||||
|
||||
Yumdb info provides information similar to yum info but additionally it provides package checksum data, type, user info (who installed the package). Since yum 3.2.26 yum has started storing additional information outside of the rpmdatabase (where user indicates it was installed by the user, and dep means it was brought in as a dependency).
|
||||
```
|
||||
# yumdb info python
|
||||
# yumdb info python(LCTT译注:用yumdb info 来获取Python的信息)
|
||||
Loaded plugins: fastestmirror
|
||||
python-2.6.6-66.el6_8.x86_64
|
||||
changed_by = 4294967295
|
||||
@ -81,11 +87,13 @@ python-2.6.6-66.el6_8.x86_64
|
||||
|
||||
```
|
||||
|
||||
### [RPM Command][6] : View Package Information On RHEL/CentOS/Fedora Systems
|
||||
### [RPM 命令][6] : 在RHEL/CentOS/Fedora系统上查看包的信息
|
||||
|
||||
|
||||
RPM 英文直译为红帽包管理器,这是一个在 RedHat 及其变种发行版(如 RHEL, CentOS, Fedora, openSUSE, Mageia)下的功能强大的命令行包管理工具。它能让你轻松地安装、升级、删除、查询以及校验你的系统或服务器上的软件。RPM 文件以 .rpm 结尾。RPM 包由它所依赖的软件库以及其他依赖构成,它不会与系统上已经安装的包冲突。
|
||||
|
||||
RPM stands for Red Hat Package Manager is a powerful, command line Package Management utility for Red Hat based system such as (RHEL, CentOS, Fedora, openSUSE & Mageia) distributions. The utility allow you to install, upgrade, remove, query & verify the software on your Linux system/server. RPM files comes with .rpm extension. RPM package built with required libraries and dependency which will not conflicts other packages were installed on your system.
|
||||
```
|
||||
# rpm -qi nano
|
||||
# rpm -qi nano (LCTT译注:用RPM -qi 查询nano包的具体信息)
|
||||
Name : nano Relocations: (not relocatable)
|
||||
Version : 2.0.9 Vendor: CentOS
|
||||
Release : 7.el6 Build Date: Fri 12 Nov 2010 02:18:36 AM EST
|
||||
@ -101,11 +109,13 @@ GNU nano is a small and friendly text editor.
|
||||
|
||||
```
|
||||
|
||||
### [DNF Command][7] : View Package Information On Fedora System
|
||||
### [DNF 命令][7] : 在Fedora系统上查看包信息
|
||||
|
||||
|
||||
DNF指时髦版的Yum,我们也可以认为DNF是下一代的YUM包管理器(Yum的一个分支),它在后台使用了hawkey/libsolv库。Aleš Kozumplík在Fedora 18上开始开发DNF,在Fedora 22上正式最后发布。 DNF命令用来在Fedora 22及以后系统安装, 更新,搜索以及删除包。它能自动的解决包安装过程中的包依赖问题。
|
||||
|
||||
DNF stands for Dandified yum. We can tell DNF, the next generation of yum package manager (Fork of Yum) using hawkey/libsolv library for backend. Aleš Kozumplík started working on DNF since Fedora 18 and its implemented/launched in Fedora 22 finally. Dnf command is used to install, update, search & remove packages on Fedora 22 and later system. It automatically resolve dependencies and make it smooth package installation without any trouble.
|
||||
```
|
||||
$ dnf info tilix
|
||||
$ dnf info tilix (LCTT译注: 用dnf info 查看tilix的包信息)
|
||||
Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST.
|
||||
Installed Packages
|
||||
Name : tilix
|
||||
@ -139,11 +149,13 @@ Description : Tilix is a tiling terminal emulator with the following features:
|
||||
|
||||
```
|
||||
|
||||
### [Zypper Command][8] : View Package Information On openSUSE System
|
||||
### [Zypper 命令][8] : 在openSUSE系统上查看包信息
|
||||
|
||||
|
||||
Zypper是一个使用libzypp库的命令行包管理器。Zypper提供诸如软件仓库访问,安装依赖解决,软件包安装等等功能。
|
||||
|
||||
Zypper is a command line package manager which makes use of libzypp. Zypper provides functions like repository access, dependency solving, package installation, etc.
|
||||
```
|
||||
$ zypper info nano
|
||||
$ zypper info nano (LCTT译注: 用zypper info查询nano的信息)
|
||||
|
||||
Loading repository data...
|
||||
Reading installed packages...
|
||||
@ -167,11 +179,12 @@ Description :
|
||||
|
||||
```
|
||||
|
||||
### [pacman Command][9] : View Package Information On Arch Linux & Manjaro Systems
|
||||
### [pacman 命令][9] :在ArchLinux及Manjaro系统上查看包信息
|
||||
|
||||
Pacman指包管理器实用工具。pacman是一个用于安装,构建,删除,管理Arch Linux上包的命令行工具。它后端使用libalpm(Arch Linux package Manager(ALPM)库)来完成所有功能。
|
||||
|
||||
Pacman stands for package manager utility. pacman is a simple command-line utility to install, build, remove and manage Arch Linux packages. Pacman uses libalpm (Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions.
|
||||
```
|
||||
$ pacman -Qi bash
|
||||
$ pacman -Qi bash (LCTT译注: 用pacman -Qi 来查询bash)
|
||||
Name : bash
|
||||
Version : 4.4.012-2
|
||||
Description : The GNU Bourne Again shell
|
||||
@ -203,11 +216,14 @@ Validated By : Signature
|
||||
|
||||
```
|
||||
|
||||
### [Apt-Cache Command][10] : View Package Information On Debian/Ubuntu/Mint Systems
|
||||
### [Apt-Cache 命令][10] :在Debian/Ubuntu/Mint系统上查看包信息
|
||||
|
||||
|
||||
apt-cache命令能显示Apt内部数据库中的大量信息。这些信息是从sources.list中的不同的软件源中搜集而来,因此从某种意义上这些信息也可以被认为是某种缓存。
|
||||
这些信息搜集工作是在运行apt update命令时执行的。
|
||||
|
||||
The apt-cache command can display much of the information stored in APT’s internal database. This information is a sort of cache since it is gathered from the different sources listed in the sources.list file. This happens during the apt update operation.
|
||||
```
|
||||
$ sudo apt-cache show apache2
|
||||
$ sudo apt-cache show apache2 (LCTT译注:用管理员权限查询apache2的信息)
|
||||
Package: apache2
|
||||
Priority: optional
|
||||
Section: web
|
||||
@ -244,11 +260,13 @@ Task: lamp-server, mythbuntu-frontend, mythbuntu-desktop, mythbuntu-backend-slav
|
||||
|
||||
```
|
||||
|
||||
### [APT Command][11] : View Package Information On Debian/Ubuntu/Mint Systems
|
||||
### [APT 命令][11] : 查看Debian/Ubuntu/Mint系统上的包信息
|
||||
|
||||
|
||||
APT 意为高级打包工具(Advanced Packaging Tool),就像 DNF 取代 YUM 一样,APT 是 apt-get 的替代品。它是一个功能丰富的命令行工具,将 apt-cache、apt-search、dpkg、apt-cdrom、apt-config、apt-key 等命令的常用功能集成到了一个命令(apt)中,并增加了一些独有的特性。例如,我们可以方便地通过 apt 安装 .dpkg 软件包,而 apt-get 做不到这一点;正是因为 apt-get 始终未能补齐这些功能上的欠缺,它才逐渐被 apt 所取代。
|
||||
|
||||
APT stands for Advanced Packaging Tool (APT) which is replacement for apt-get, like how DNF came to picture instead of YUM. It’s feature rich command-line tools with included all the futures in one command (APT) such as apt-cache, apt-search, dpkg, apt-cdrom, apt-config, apt-key, etc..,. and several other unique features. For example we can easily install .dpkg packages through APT but we can’t do through Apt-Get similar more features are included into APT command. APT-GET replaced by APT Due to lock of futures missing in apt-get which was not solved.
|
||||
```
|
||||
$ apt show nano
|
||||
$ apt show nano (LCTT译注: 用apt show查看nano)
|
||||
Package: nano
|
||||
Version: 2.8.6-3
|
||||
Priority: standard
|
||||
@ -290,11 +308,13 @@ Description: small, friendly text editor inspired by Pico
|
||||
|
||||
```
|
||||
|
||||
### [dpkg Command][12] : View Package Information On Debian/Ubuntu/Mint Systems
|
||||
### [dpkg 命令][12] : 查看Debian/Ubuntu/Mint系统上的包信息
|
||||
|
||||
|
||||
dpkg 意指 Debian 包管理器(Debian package manager),是用于在 Debian 系统上安装、构建、移除以及管理 Debian 软件包的命令行工具;dpkg-deb 和 dpkg-query 等实用程序都以它为基础来完成各自的功能。如今大多数管理员会使用更主流、更用户友好的 Apt、Apt-Get 及 Aptitude 作为前端来管理软件包(它们在底层仍然调用 dpkg),不过在必要的场合,我们仍需直接使用 dpkg 来完成某些软件安装任务。
|
||||
|
||||
dpkg stands for Debian package manager (dpkg). dpkg is a command-line tool to install, build, remove and manage Debian packages. dpkg uses Aptitude (primary and more user-friendly) as a front-end to perform all the actions. Other utility such as dpkg-deb and dpkg-query uses dpkg as a front-end to perform some action. Now a days most of the administrator using Apt, Apt-Get & Aptitude to manage packages easily without headache and its robust management too. Even though still we need to use dpkg to perform some software installation where it’s necessary.
|
||||
```
|
||||
$ dpkg -s python
|
||||
$ dpkg -s python (LCTT译注: 用dpkg -s查看python)
|
||||
Package: python
|
||||
Status: install ok installed
|
||||
Priority: optional
|
||||
@ -324,9 +344,11 @@ Original-Maintainer: Matthias Klose
|
||||
|
||||
```
|
||||
|
||||
Alternatively we can use `-p` option with dpkg that provides information similar to `dpkg -s` info but additionally it provides package checksum data and type.
|
||||
|
||||
我们也可使用dpkg的‘-p’选项,这个选项提供和‘dpkg -s’相类似的信息,但是它还提供了包的校验值和包类型。
|
||||
|
||||
```
|
||||
$ dpkg -p python3
|
||||
$ dpkg -p python3 (LCTT译注: 用dpkg -p查看python3的信息)
|
||||
Package: python3
|
||||
Priority: important
|
||||
Section: python
|
||||
@ -357,11 +379,13 @@ Supported: 9m
|
||||
|
||||
```
|
||||
|
||||
### Aptitude Command : View Package Information On Debian/Ubuntu/Mint Systems
|
||||
### Aptitude 命令 : 查看Debian/Ubuntu/Mint 系统上的包信息
|
||||
|
||||
|
||||
aptitude是Debian GNU/Linux包管理系统的面向文本的接口。它允许用户查看已安装的包的列表,以及完成诸如安装,升级,删除包之类的包管理任务。这些管理行为也能从图形接口来执行。
|
||||
|
||||
aptitude is a text-based interface to the Debian GNU/Linux package system. It allows the user to view the list of packages and to perform package management tasks such as installing, upgrading, and removing packages. Actions may be performed from a visual interface or from the command-line.
|
||||
```
|
||||
$ aptitude show htop
|
||||
$ aptitude show htop (LCTT译注: 用aptitude show查看htop信息)
|
||||
Package: htop
|
||||
Version: 2.0.2-1
|
||||
State: installed
|
||||
@ -388,7 +412,7 @@ via: https://www.2daygeek.com/how-to-view-detailed-information-about-a-package-i
|
||||
|
||||
作者:[Prakash Subramanian][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[DavidChenLiang](https://github.com/davidchenliang)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -0,0 +1,152 @@
使用 VS Code 进行 Python 编程
======

![](https://fedoramagazine.org/wp-content/uploads/2018/07/pythonvscode-816x345.jpg)

Visual Studio Code,简称 VS Code,是一个开源的文本编辑器,包含用于构建和调试应用程序的工具。安装启用 Python 扩展后,VS Code 可以配置成 Python 开发的理想工作环境。本文将介绍一些有用的 VS Code 扩展,并配置它们以充分提高 Python 开发效率。

如果你的计算机上还没有安装 VS Code,可以参考文章 [Using Visual Studio Code on Fedora](https://fedoramagazine.org/using-visual-studio-code-fedora/) 安装。

### 在 VS Code 中安装 Python 扩展

首先,为了更方便地在 VS Code 中进行 Python 开发,需要从 VS Code 扩展商店中安装 Python 扩展。

![][2]

Python 扩展安装完成后,就可以开始配置 Python 扩展了。

VS Code 通过两个 JSON 文件管理设置:

  * 一个文件用于 VS Code 的全局设置,作用于所有的项目
  * 另一个文件用于工作区设置,只作用于单独项目

可以用快捷键 **Ctrl+,** (逗号)打开全局设置,也可以通过 **文件 -> 首选项 -> 设置** 来打开。

#### 设置 Python 路径

您可以在全局设置中配置 python.pythonPath,使 VS Code 自动为每个项目选择最适合的 Python 解释器。

```
// 将设置放在此处以覆盖默认设置和用户设置。
// Python 的路径;将此设置修改为完整路径,即可使用自定义版本的 Python。
{
    "python.pythonPath":"${workspaceRoot}/.venv/bin/python",
}
```

这样,VS Code 将使用项目根目录下的虚拟环境目录 .venv 中的 Python 解释器。

#### 使用环境变量

默认情况下,VS Code 使用项目根目录下的 .env 文件中定义的环境变量。这对于设置环境变量很有用,如:

```
PYTHONWARNINGS="once"
```

可使程序在运行时显示警告。

可以通过设置 python.envFile 来加载其他的默认环境变量文件:

```
// 包含环境变量定义的文件的绝对路径。
"python.envFile": "${workspaceFolder}/.env",
```

### 代码分析

Python 扩展还支持不同的代码分析工具(pep8、flake8、pylint)。要启用你喜欢的或者正在进行的项目所使用的分析工具,只需要进行一些简单的配置。

扩展默认使用 pylint 进行代码分析。你可以这样配置以使用 flake8 进行分析:

```
"python.linting.pylintEnabled": false,
"python.linting.flake8Path": "${workspaceRoot}/.venv/bin/flake8",
"python.linting.flake8Enabled": true,
"python.linting.flake8Args": ["--max-line-length=90"],
```

启用代码分析后,分析器会在不符合要求的位置加上波浪线,鼠标置于该位置,将弹窗提示其原因。注意,项目的虚拟环境中需要安装有 flake8,此示例方能有效。
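下面给出一种可行的准备方式(仅为示意,假设项目使用上文提到的 .venv 虚拟环境;后文用到的 black 和 pytest 也可以用同样方式安装):

```
$ python3 -m venv .venv
$ .venv/bin/pip install flake8
```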
![][3]

### 格式化代码

可以配置 VS Code 使其自动格式化代码。目前支持 autopep8、black 和 yapf。下面的设置将启用 “black” 模式。

```
// 格式化工具,可选项包括 'autopep8'、'black' 和 'yapf'。
"python.formatting.provider": "black",
"python.formatting.blackPath": "${workspaceRoot}/.venv/bin/black",
"python.formatting.blackArgs": ["--line-length=90"],
"editor.formatOnSave": true,
```

如果不需要编辑器在保存时自动格式化代码,可以将 editor.formatOnSave 设置为 false 并手动使用快捷键 **Ctrl + Shift + I** 格式化当前文档中的代码。注意,项目的虚拟环境中需要安装有 black,此示例方能有效。

### 运行任务

VS Code 的一个重要特点是它可以运行任务。需要运行的任务保存在项目根目录中的 JSON 文件中。

#### 运行 flask 开发服务

这个例子将创建一个任务来运行 Flask 开发服务器。使用一个可以运行外部命令的基本模板来创建新的工程:

![][4]

编辑如下所示的 tasks.json 文件,创建新任务来运行 Flask 开发服务:

```
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Run Debug Server",
            "type": "shell",
            "command": "${workspaceRoot}/.venv/bin/flask run -h 0.0.0.0 -p 5000",
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}
```

Flask 开发服务使用环境变量来获取应用程序的入口点。如 **使用环境变量** 一节所说,可以在 .env 文件中声明这些变量:

```
FLASK_APP=wsgi.py
FLASK_DEBUG=True
```

这样就可以使用快捷键 **Ctrl + Shift + B** 来执行任务了。

### 单元测试

VS Code 还支持单元测试框架 pytest、unittest 和 nosetest。启用测试框架后,可以在 VS Code 中单独运行搜索到的单元测试,通过测试套件运行测试或者运行所有的测试。

例如,可以这样启用 pytest 测试框架:

```
"python.unitTest.pyTestEnabled": true,
"python.unitTest.pyTestPath": "${workspaceRoot}/.venv/bin/pytest",
```

注意,项目的虚拟环境中需要安装有 pytest,此示例方能有效。

![][5]

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/vscode-python-howto/

作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[idea2act](https://github.com/idea2act)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org
[1]:https://fedoramagazine.org/using-visual-studio-code-fedora/
[2]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-09-44.gif
[3]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-12-05.gif
[4]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-13-26.gif
[5]:https://fedoramagazine.org/wp-content/uploads/2018/07/Peek-2018-07-27-15-33.gif
@ -0,0 +1,112 @@
MPV 播放器:Linux 下的极简视频播放器
======

MPV 是一个开源的、跨平台的视频播放器,带有极简的 GUI 界面以及丰富的命令行控制。

VLC 可能是 Linux 或者其他平台下最好的视频播放器。我已经使用 VLC 很多年了,它现在仍是我最喜欢的播放器。

不过最近,我倾向于使用简洁界面的极简应用。这也是我偶然发现 MPV 的原因。我太喜欢这个软件了,并把它加入了 [Ubuntu 最佳应用][1]列表里。

[MPV][2] 是一个开源的视频播放器,有 Linux、Windows、MacOS、BSD 以及 Android 等平台下的版本。它实际上是从 [MPlayer][3] 分支出来的。

它的图形界面只有必需的元素,而且非常整洁。

![MPV 播放器在 Linux 下的界面][4]

MPV 播放器

### MPV 的功能

MPV 有标准播放器该有的所有功能。你可以播放各种视频,以及通过常用快捷键来控制播放。

  * 极简图形界面以及必需的控件。
  * 自带视频解码器。
  * 高质量视频输出以及支持 GPU 硬件视频解码。
  * 支持字幕。
  * 可以通过命令行播放 YouTube 等流媒体视频。
  * 命令行模式的 MPV 可以嵌入到网页或其他应用中。

尽管 MPV 播放器只有极简的界面以及有限的选项,但请不要怀疑它的功能。它主要的能力都来自命令行版本。

只需要输入命令 mpv --list-options,然后你会看到它所提供的 447 个不同的选项。但是本文不会介绍 MPV 的高级应用。让我们看看作为一个普通的桌面视频播放器,它能有多么优秀。
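例如,可以在终端里这样浏览或筛选这份很长的选项列表(仅为示意):

```
$ mpv --list-options | less
$ mpv --list-options | grep -i sub
```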
### 在 Linux 上安装 MPV

MPV 是一个常用应用,已经收录在大多数 Linux 发行版的默认仓库里。在软件中心里搜索一下就可以了。

我可以确认在 Ubuntu 的软件中心里能找到。你可以在里面选择安装,或者通过下面的命令安装:

```
sudo apt install mpv
```

你可以在 [MPV 网站][5]上查看其他平台的安装指引。

### 使用 MPV 视频播放器

在安装完成以后,你可以通过鼠标右键点击视频文件,然后在列表里选择 MPV 来播放。

![MPV 播放器界面][6]

MPV 播放器界面

整个界面只有一个控制面板,只有在鼠标移动到播放窗口上时才会显示出来。控制面板上有播放/暂停、选择视频轨道、切换音轨、字幕以及全屏等选项。

MPV 的默认窗口大小取决于你所播放视频的画质。比如一个 240p 的视频,播放窗口会比较小,而在全高清显示器上播放 1080p 视频时,会几乎占满整个屏幕。不管视频大小,你总是可以在播放窗口上双击鼠标切换成全屏。

#### 字幕的处理

如果你的视频带有字幕,MPV 会[自动加载字幕][7],你也可以选择关闭。不过,如果你想使用其他外挂字幕文件,不能直接在播放器界面上操作。

你可以将额外的字幕文件名改成和视频文件一样,并且将它们放在同一个目录下,MPV 会自动加载你的字幕文件。

更简单的播放外挂字幕的方式是,用鼠标选中文件拖到播放窗口里放开。
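另外,也可以在用命令行启动 MPV 时直接指定外挂字幕文件(仅为示意,文件名是假设的):

```
$ mpv --sub-file=movie.srt movie.mkv
```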
#### 播放 YouTube 或其他在线视频

要播放在线视频,你只能使用命令行模式的 MPV。

打开终端窗口,然后用类似下面的方式来播放:

```
mpv <URL_of_Video>
```

![在 Linux 桌面上使用 MPV 播放 YouTube 视频][8]

在 Linux 桌面上使用 MPV 播放 YouTube 视频

用 MPV 播放 YouTube 视频的体验不怎么好。它总是在不停地缓冲,有点烦。
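一个可能有用的小技巧(仅为示意;--ytdl-format 是 MPV 的真实选项,但具体的格式串取决于你安装的 youtube-dl 版本):限制在线视频的清晰度,可以减少缓冲:

```
$ mpv --ytdl-format="best[height<=480]" <URL_of_Video>
```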
#### 是否需要安装 MPV 播放器?

这个看你自己。如果你想体验各种应用,大可以试试 MPV。否则,默认的视频播放器或者 VLC 就足够了。

我在早些时候写关于 [Sayonara][9] 的文章时,并不确定大家会不会喜欢一个相对不常用的音乐播放器,但是 It's FOSS 的读者觉得很好。

试一下 MPV,然后看看你会不会将它作为你的默认视频播放器。

如果你喜欢 MPV,但又觉得它的图形界面需要更多功能,我推荐你使用 [GNOME MPV 播放器][10]。

你用过 MPV 视频播放器吗?体验怎么样?喜欢还是不喜欢?欢迎在下面的评论区留言。

--------------------------------------------------------------------------------

via: https://itsfoss.com/mpv-video-player/

作者:[Abhishek Prakash][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[zpl1025](https://github.com/zpl1025)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[1]:https://itsfoss.com/best-ubuntu-apps/
[2]:https://mpv.io/
[3]:http://www.mplayerhq.hu/design7/news.html
[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player.jpg
[5]:https://mpv.io/installation/
[6]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/mpv-player-interface.png
[7]:https://itsfoss.com/how-to-play-movie-with-subtitles-on-samsung-tv-via-usb/
[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/play-youtube-videos-on-mpv-player.jpeg
[9]:https://itsfoss.com/sayonara-music-player/
[10]:https://gnome-mpv.github.io/