Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2018-09-13 18:07:51 +08:00
commit 24221ff55c
7 changed files with 1121 additions and 0 deletions

View File

@ -1,3 +1,5 @@
XiatianSummer translating
13 Keyboard Shortcuts Every Ubuntu 18.04 User Should Know
======
Knowing keyboard shortcuts increases your productivity. Here are some useful Ubuntu shortcut keys that will help you use Ubuntu like a pro.

View File

@ -0,0 +1,62 @@
Know Your Storage: Block, File & Object
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/block2_1920.jpg?itok=s1y6RLhT)
Dealing with the tremendous amount of data generated today presents a big challenge for companies that create or consume that data. It's also a challenge for the tech companies that must deal with the related storage issues.
“Data is growing exponentially each year, and we find that the majority of data growth is due to increased consumption and industries adopting transformational projects to expand value. Certainly, the Internet of Things (IoT) has contributed greatly to data growth, but the key challenge for software-defined storage is how to address the use cases associated with data growth,” said Michael St. Jean, principal product marketing manager, Red Hat Storage.
Every challenge is an opportunity. “The deluge of data being generated by old and new sources today is certainly presenting us with opportunities to meet our customers' escalating needs in the areas of scale, performance, resiliency, and governance,” said Tad Brockway, General Manager for Azure Storage, Media and Edge.
### Trinity of modern software-defined storage
There are three different kinds of storage solutions -- block, file, and object -- each serving a different purpose while working with the others.
Block storage is the oldest form of data storage, where data is stored in fixed-length blocks or chunks of data. Block storage is used in enterprise storage environments and is usually accessed over a Fibre Channel or iSCSI interface. “Block storage requires an application to map where the data is stored on the storage device,” according to SUSE's Larry Morris, Sr. Product Manager, Software Defined Storage.
Block storage is virtualized in storage area networks and software-defined storage systems, which are abstracted logical devices that reside on a shared hardware infrastructure and are created and presented to the host operating system of a server, virtual server, or hypervisor via protocols like SCSI, SATA, SAS, FCP, FCoE, or iSCSI.
“Block storage splits a single storage volume (like a virtual or cloud storage node, or a good old fashioned hard disk) into individual instances known as blocks,” said St. Jean.
Each block exists independently and can be formatted with its own data transfer protocol and operating system — giving users complete configuration autonomy. Because block storage systems aren't burdened with the same investigative file-finding duties as file storage systems, block storage is faster. Pairing that speed with configuration flexibility makes block storage ideal for raw server storage or rich media databases.
Block storage can be used to host operating systems, applications, databases, entire virtual machines, and containers. Traditionally, block storage can only be accessed by the individual machine, or the machines in a cluster, to which it has been presented.
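The fixed-length-block idea can be sketched with ordinary shell tools (a toy illustration on a plain file, not a real block device):

```
# toy illustration: chop a "volume" into fixed-length 4-byte blocks,
# the way block storage divides a volume into addressable chunks
printf 'ABCDEFGHIJ' > /tmp/volume.img
split -b 4 /tmp/volume.img /tmp/blk_
cat /tmp/blk_aa   # first block: ABCD
cat /tmp/blk_ac   # last, partial block: IJ
```

Each piece stands alone, exactly like the independent blocks St. Jean describes; it is the layer above (a filesystem or application) that knows how to reassemble them.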
### File-based storage
File-based storage uses a filesystem to map where the data is stored on the storage device. It's the dominant technology used on direct-attached and network-attached storage systems, and it takes care of two things: organizing data and representing it to users. “With file storage, data is arranged on the server side in the exact same format as the clients see it. This allows the user to request a file by some unique identifier — like a name, location, or URL — which is communicated to the storage system using specific data transfer protocols,” said St. Jean.
The result is a type of hierarchical file structure that can be navigated from top to bottom. File storage is layered on top of block storage, allowing users to see and access data as files and folders, but restricting access to the blocks that stand up those files and folders.
“File storage is typically represented by shared filesystems like NFS and CIFS/SMB that can be accessed by many servers over an IP network. Access can be controlled at a file, directory, and export level via user and group permissions. File storage can be used to store files needed by multiple users and machines, application binaries, databases, virtual machines, and can be used by containers,” explained Brockway.
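The path-plus-permissions model is easy to see with plain filesystem commands (a local stand-in for what NFS or SMB expose over the network):

```
# file storage: data is addressed by a hierarchical path, and access
# is controlled by user/group permissions on files and directories
mkdir -p /tmp/share/projects/reports
echo "Q3 numbers" > /tmp/share/projects/reports/q3.txt
chmod 640 /tmp/share/projects/reports/q3.txt   # owner rw, group r, others none
cat /tmp/share/projects/reports/q3.txt         # Q3 numbers
```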
### Object storage
Object storage is the newest form of data storage, and it provides a repository for unstructured data which separates the content from the indexing and allows the concatenation of multiple files into an object. An object is a piece of data paired with any associated metadata that provides context about the bytes contained within the object (things like how old or big the data is). Those two things together — the data and metadata — make an object.
One advantage of object storage is the unique identifier associated with each piece of data. Accessing the data involves using the unique identifier and does not require the application or user to know where the data is actually stored. Object data is accessed through APIs.
“The data stored in objects is uncompressed and unencrypted, and the objects themselves are arranged in object stores (a central repository filled with many other objects) or containers (a package that contains all of the files an application needs to run). Objects, object stores, and containers are very flat in nature — compared to the hierarchical structure of file storage systems — which allow them to be accessed very quickly at huge scale,” explained St. Jean.
Object stores can scale to many petabytes to accommodate the largest datasets and are a great choice for images, audio, video, logs, backups, and data used by analytics services.
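The access-by-identifier model described above can be mimicked with a toy shell sketch (purely illustrative; real object stores sit behind HTTP APIs and manage placement themselves):

```
# toy object store: put() returns an ID derived from the content itself;
# get() retrieves by that ID with no knowledge of where the bytes live
store=$(mktemp -d)
put() { id=$(printf '%s' "$1" | sha256sum | cut -d' ' -f1); printf '%s' "$1" > "$store/$id"; echo "$id"; }
get() { cat "$store/$1"; }

id=$(put "hello object storage")
get "$id"   # hello object storage
```

Note the flat namespace: every object lives at the top level of the store, keyed only by its identifier, with no directory hierarchy to traverse.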
### Conclusion
Now you know about the various types of storage and how they are used. Stay tuned to learn more about software-defined storage as we examine the topic in the future.
Join us at [Open Source Summit + Embedded Linux Conference Europe][1] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/2018/9/know-your-storage-block-file-object
Author: [Swapnil Bhartiya][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.linux.com/users/arnieswap
[1]: https://events.linuxfoundation.org/events/elc-openiot-europe-2018/

View File

@ -1,3 +1,5 @@
XiatianSummer translating
Visualize Disk Usage On Your Linux System
======

View File

@ -0,0 +1,124 @@
How To Configure Mouse Support For Linux Virtual Consoles
======
![](https://www.ostechnix.com/wp-content/uploads/2018/09/GPM-1-720x340.png)
I use Oracle VirtualBox to test various Unix-like operating systems. Most of my VMs are headless servers that do not have a graphical desktop environment. For a long time, I wondered how we could use the mouse in the text-based terminals of headless Linux servers. Thanks to **GPM**, today I learned that we can use the mouse in virtual consoles for copy and paste operations. **GPM**, an acronym for **G**eneral **P**urpose **M**ouse, is a daemon that helps you configure mouse support for Linux virtual consoles. Please do not confuse GPM with **GDM** (GNOME Display Manager); they serve entirely different purposes.
GPM is especially useful in the following scenarios:
* New Linux server installations, or systems that cannot or do not use an X Window System by default, like Arch Linux and Gentoo.
* Using copy/paste operations in the virtual terminals/consoles.
* Using copy/paste in text-based editors and browsers (e.g., emacs, lynx).
* Using copy/paste in text-based file managers (e.g., Ranger, Midnight Commander).
In this brief tutorial, we will see how to use the mouse in text-based terminals in Unix-like operating systems.
### Installing GPM
To enable mouse support in text-only Linux systems, install the GPM package. It is available in the default repositories of most Linux distributions.
On Arch Linux and its variants such as Antergos and Manjaro Linux, run the following command to install GPM:
```
$ sudo pacman -S gpm
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt install gpm
```
On Fedora:
```
$ sudo dnf install gpm
```
On openSUSE:
```
$ sudo zypper install gpm
```
Once installed, enable and start GPM service using the following commands:
```
$ sudo systemctl enable gpm
$ sudo systemctl start gpm
```
On Debian-based systems, the gpm service starts automatically after installation, so you need not start it manually as shown above.
### Configure Mouse Support For Linux Virtual Consoles
No special configuration is required. GPM starts working as soon as the package is installed and the gpm service is started.
Have a look at the following screenshot of my Ubuntu 18.04 LTS server before installing GPM:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/Ubuntu-18.04-CLI.png)
As you can see in the above screenshot, there is no visible mouse pointer in my Ubuntu 18.04 LTS headless server; there is only a blinking cursor, and I cannot select text or copy/paste it with the mouse. In CLI-only Linux servers, the mouse is all but useless.
Now check the following screenshot of Ubuntu 18.04 LTS server after installing GPM:
![](https://www.ostechnix.com/wp-content/uploads/2018/09/GPM.png)
See? Now I am able to select text.
To select, copy and paste text, do the following:
* To select text, press the left mouse button and drag the mouse.
* Once you selected the text, release the left mouse button and paste text in the same or another console by pressing the middle mouse button.
* The right button is used to extend the selection, like in `xterm`.
* If you're using a two-button mouse, use the right button to paste text.
It's that simple!
As I already said, GPM works just fine and no extra configuration is needed. Here are the sample contents of the GPM configuration file **/etc/gpm.conf** (or `/etc/conf.d/gpm` in some distributions):
```
# protected from evaluation (i.e. by quoting them).
#
# This file is used by /etc/init.d/gpm and can be modified by
# "dpkg-reconfigure gpm" or by hand at your option.
#
device=/dev/input/mice
responsiveness=
repeat_type=none
type=exps2
append=''
sample_rate=
```
In my example, I am using a USB mouse. If you're using a different mouse, you might have to change the values of the **device=/dev/input/mice** and **type=exps2** parameters.
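For example, an older PS/2 mouse on the dedicated PS/2 port might use settings like the following (hypothetical values for illustration; consult `man gpm` for the protocol types your version actually supports):

```
# hypothetical settings for a PS/2 mouse on the dedicated port
device=/dev/psaux
responsiveness=
repeat_type=none
type=ps2
append=''
sample_rate=
```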
For more details, refer to the man page:
```
$ man gpm
```
And that's all for now. Hope this was useful. More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-configure-mouse-support-for-linux-virtual-consoles/
Author: [SK][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://www.ostechnix.com/author/sk/

View File

@ -0,0 +1,335 @@
How subroutine signatures work in Perl 6
======
In the fourth article in this series comparing Perl 5 to Perl 6, learn how signatures work in Perl 6.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/collab-team-pair-programming-code-keyboard2.png?itok=WnKfsl-G)
In the [first article][1] in this series comparing Perl 5 to Perl 6, we looked into some of the issues you might encounter when migrating code into Perl 6. In the [second article][2], we examined how garbage collection works in Perl 6, and in the [third article][3], we looked at how containers replaced references in Perl 6. Here in the fourth article, we will focus on (subroutine) signatures in Perl 6 and how they differ from those in Perl 5.
### Experimental signatures in Perl 5
If you're migrating from Perl 5 code to Perl 6, you're probably not using the [experimental signature feature][4] that became available in Perl 5.20 or any of the older CPAN modules like [signatures][5], [Function::Parameters][6], or any of the other Perl 5 modules on CPAN with ["signature" in their name][7].
Also, in my experience, [prototypes][8] haven't been used very often in the Perl programs out in the world (e.g., the [DarkPAN][9]).
For these reasons, I will compare Perl 6 functionality only with the most common use of "classic" Perl 5 argument passing.
### Argument passing in Perl 5
All arguments you pass to a Perl 5 subroutine are flattened and put into the automatically defined `@_` array variable inside. That is basically all Perl 5 does with passing arguments to subroutines. Nothing more, nothing less. There are, however, several idioms in Perl 5 that take it from there. The most common (I would say "standard") idiom in my experience is:
```
# Perl 5
sub do_something {
    my ($foo, $bar) = @_;
    # actually do something with $foo and $bar
}
```
This idiom performs a list assignment (copy) to two (new) lexical variables. This way of accessing the arguments to a subroutine is also supported in Perl 6, but it's intended just as a way to make migrations easier.
If you expect a fixed number of arguments followed by a variable number of arguments, the following idiom is typically used:
```
# Perl 5
sub do_something {
    my $foo = shift;
    my $bar = shift;
    for (@_) {
        # do something for each element in @_
    }
}
```
This idiom depends on the magic behavior of [shift][10], which shifts from `@_` in this context. If the subroutine is intended to be called as a method, something like this is usually seen:
```
# Perl 5
sub do_something {
    my $self = shift;
    # do something with $self
}
```
as the first argument passed is the [invocant][11] in Perl 5.
By the way, this idiom can also be written in the first idiom:
```
# Perl 5
sub do_something {
    my ($foo, $bar, @rest) = @_;
    for (@rest) {
        # do something for each element in @rest
    }
}
```
But that would be less efficient, as it would involve copying a potentially long list of values.
The third idiom revolves around directly accessing the `@_` array.
```
# Perl 5
sub sum_two {
    return $_[0] + $_[1];  # return the sum of the two parameters
}
```
This idiom is typically used for small, one-line subroutines, as it is one of the most efficient ways of handling arguments because no copying takes place.
This idiom is also used if you want to change any variable that is passed as a parameter. Since the elements in `@_` are aliases to any variables specified (in Perl 6 you would say: "are bound to the variables"), it is possible to change the contents:
```
# Perl 5
sub make42 {
    $_[0] = 42;
}
my $a = 666;
make42($a);
say $a;      # 42
```
### Named arguments in Perl 5
Named arguments (as such) don't exist in Perl 5. But there is an often-used idiom that effectively mimics named arguments:
```
# Perl 5
sub do_something {
    my %named = @_;
    if (exists $named{bar}) {
        # do stuff if named variable "bar" exists
    }
}
```
This initializes the hash `%named` by alternately taking a key and a value from the `@_` array. If you call a subroutine with arguments using the fat-comma syntax:
```
# Perl 5
frobnicate( bar => 42 );
```
it will pass two values, `"bar"` and `42`, which will be placed into the `%named` hash as the value `42` associated with the key `"bar"`. But the same thing would have happened if you had specified:
```
# Perl 5
frobnicate( "bar", 42 );
```
The `=>` is syntactic sugar for automatically quoting the left side. Otherwise, it functions just like a comma (hence the name "fat comma").
If a subroutine is called as a method with named arguments, this idiom is combined with the standard idiom:
```
# Perl 5
sub do_something {
    my ($self, %named) = @_;
    # do something with $self and %named
}
```
alternatively:
```
# Perl 5
sub do_something {
    my $self  = shift;
    my %named = @_;
    # do something with $self and %named
}
```
### Argument passing in Perl 6
In their simplest form, subroutine signatures in Perl 6 are very much like the "standard" idiom of Perl 5. But instead of being part of the code, they are part of the definition of the subroutine, and you don't need to do the assignment:
```
# Perl 6
sub do-something($foo, $bar) {
    # actually do something with $foo and $bar
}
```
versus:
```
# Perl 5
sub do_something {
    my ($foo, $bar) = @_;
    # actually do something with $foo and $bar
}
```
In Perl 6, the `($foo, $bar)` part is called the signature of the subroutine.
Since Perl 6 has an actual `method` keyword, it is not necessary to take the invocant into account, as that is automatically available with the `self` term:
```
# Perl 6
class Foo {
    method do-something-else($foo, $bar) {
        # do something else with self, $foo and $bar
    }
}
```
Such parameters are called positional parameters in Perl 6. Unless indicated otherwise, positional parameters must be specified when calling the subroutine.
If you need the aliasing behavior of using `$_[0]` directly in Perl 5, you can mark the parameter as writable by specifying the `is rw` trait:
```
# Perl 6
sub make42($foo is rw) {
    $foo = 42;
}
my $a = 666;
make42($a);
say $a;      # 42
```
When you pass an array as an argument to a subroutine, it doesn't get flattened in Perl 6. You only need to accept an array as an array in the signature:
```
# Perl 6
sub handle-array(@a) {
    # do something with @a
}
my @foo = "a" .. "z";
handle-array(@foo);
```
You can pass any number of arrays:
```
# Perl 6
sub handle-two-arrays(@a, @b) {
    # do something with @a and @b
}
my @bar = 1..26;
handle-two-arrays(@foo, @bar);
```
If you want the ([variadic][12]) flattening semantics of Perl 5, you can indicate this with a so-called "slurpy array" by prefixing the array with an asterisk in the signature:
```
# Perl 6
sub slurp-an-array(*@values) {
    # do something with @values
}
slurp-an-array("foo", 42, "baz");
```
A slurpy array can occur only as the last positional parameter in a signature.
If you prefer to use the Perl 5 way of specifying parameters in Perl 6, you can do this by specifying a slurpy array `*@_` in the signature:
```
# Perl 6
sub do-like-5(*@_) {
    my ($foo, $bar) = @_;
}
```
### Named arguments in Perl 6
On the calling side, named arguments in Perl 6 can be expressed very similarly to how they are expressed in Perl 5:
```
# Perl 5 and Perl 6
frobnicate( bar => 42 );
```
However, on the definition side of the subroutine, things are very different:
```
# Perl 6
sub frobnicate(:$bar) {
    # do something with $bar
}
```
The difference between an ordinary (positional) parameter and a named parameter is the colon, which precedes the [sigil][13] and the variable name in the definition:
```
$foo      # positional parameter, receives in $foo
:$bar     # named parameter "bar", receives in $bar
```
Unless otherwise specified, named parameters are optional. If a named argument is not specified, the associated variable will contain the default value, which usually is the type object `Any`.
If you want to catch any (other) named arguments, you can use a so-called "slurpy hash." Just like the slurpy array, it is indicated with an asterisk before a hash:
```
# Perl 6
sub slurp-nameds(*%nameds) {
    say "Received: " ~ join ", ", sort keys %nameds;
}
slurp-nameds(foo => 42, bar => 666); # Received: bar, foo
```
As with the slurpy array, there can be only one slurpy hash in a signature, and it must be specified after any other named parameters.
Often you want to pass a named argument to a subroutine from a variable with the same name. In Perl 5 this looks like: `do_something(bar => $bar)`. In Perl 6, you can specify this in the same way: `do-something(bar => $bar)`. But you can also use a shortcut: `do-something(:$bar)`. This means less typing, and less chance of typos.
### Default values in Perl 6
Perl 5 has the following idiom for making parameters optional with a default value:
```
# Perl 5
sub dosomething_with_defaults {
    my $foo = @_ ? shift : 42;
    my $bar = @_ ? shift : 666;
    # actually do something with $foo and $bar
}
```
In Perl 6, you can specify default values as part of the signature by specifying an equal sign and an expression:
```
# Perl 6
sub dosomething-with-defaults($foo = 42, :$bar = 666) {
    # actually do something with $foo and $bar
}
```
Positional parameters become optional if a default value is specified for them. Named parameters stay optional regardless of any default value.
### Summary
Perl 6 has a way of describing how arguments to a subroutine should be captured into parameters of that subroutine. Positional parameters are indicated by their name and the appropriate sigil (e.g., `$foo`). Named parameters are prefixed with a colon (e.g. `:$bar`). Positional parameters can be marked as `is rw` to allow changing variables in the caller's scope.
Positional arguments can be flattened in a slurpy array, which is prefixed by an asterisk (e.g., `*@values`). Unexpected named arguments can be collected using a slurpy hash, which is also prefixed with an asterisk (e.g., `*%nameds`).
Default values can be specified inside the signature by adding an expression after an equal sign (e.g., `$foo = 42`), which makes that parameter optional.
Signatures in Perl 6 have many other interesting features, aside from the ones summarized here; if you want to know more about them, check out the Perl 6 [signature object documentation][14].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/signatures-perl-6
Author: [Elizabeth Mattijsen][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)
[a]: https://opensource.com/users/lizmat
[1]: https://opensource.com/article/18/7/migrating-perl-5-perl-6
[2]: https://opensource.com/article/18/7/garbage-collection-perl-6
[3]: https://opensource.com/article/18/7/containers-perl-6
[4]: https://metacpan.org/pod/distribution/perl/pod/perlsub.pod#Signatures
[5]: https://metacpan.org/pod/signatures
[6]: https://metacpan.org/pod/Function::Parameters
[7]: https://metacpan.org/search?q=signature
[8]: https://metacpan.org/pod/perlsub#Prototypes
[9]: http://modernperlbooks.com/mt/2009/02/the-darkpan-dependency-management-and-support-problem.html
[10]: https://perldoc.perl.org/functions/shift.html
[11]: https://docs.perl6.org/routine/invocant
[12]: https://en.wikipedia.org/wiki/Variadic_function
[13]: https://www.perl.com/article/on-sigils/
[14]: https://docs.perl6.org/type/Signature

View File

@ -0,0 +1,395 @@
How to build rpm packages
======
Save time and effort installing files and scripts across multiple hosts.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_gift_giveaway_box_520x292.png?itok=w1YQhNH1)
I have used rpm-based package managers to install software on Red Hat and Fedora Linux since I started using Linux more than 20 years ago. I have used the **rpm** program itself, **yum**, and **DNF**, which is a close descendant of yum, to install and update packages on my Linux hosts. The yum and DNF tools are wrappers around the rpm utility that provide additional functionality, such as the ability to find and install package dependencies.
Over the years I have created a number of Bash scripts, some of which have separate configuration files, that I like to install on most of my new computers and virtual machines. It reached the point that it took a great deal of time to install all of these packages, so I decided to automate that process by creating an rpm package that I could copy to the target hosts to install all of these files in their proper locations. Although the **rpm** tool was formerly used to build rpm packages, that function was removed and a new tool, **rpmbuild**, was created to build new rpms.
When I started this project, I found very little information about creating rpm packages, but I managed to find a book, Maximum RPM, that helped me figure it out. That book is now somewhat out of date, as is the vast majority of information I have found. It is also out of print, and used copies go for hundreds of dollars. The online version of [Maximum RPM][1] is available at no charge and is kept up to date. The [RPM website][2] also has links to other websites that have a lot of documentation about rpm. What other information there is tends to be brief and apparently assumes that you already have a good deal of knowledge about the process.
In addition, every one of the documents I found assumes that the code needs to be compiled from sources as in a development environment. I am not a developer. I am a sysadmin, and we sysadmins have different needs because we don't—or we shouldn't—compile code to use for administrative tasks; we should use shell scripts. So we have no source code in the sense that it is something that needs to be compiled into binary executables. What we have is a source that is also the executable.
For the most part, this project should be performed as the non-root user student. Rpms should never be built by root, but only by non-privileged users. I will indicate which parts should be performed as root and which by a non-root, unprivileged user.
### Preparation
First, open one terminal session and `su` to root. Be sure to use the `-` option to ensure that the complete root environment is enabled. I do not believe that sysadmins should use `sudo` for any administrative tasks. Find out why in my personal blog post: [Real SysAdmins don't sudo][3].
```
[student@testvm1 ~]$ su -
Password:
[root@testvm1 ~]#
```
Create a student user that can be used for this project and set a password for that user.
```
[root@testvm1 ~]# useradd -c "Student User" student
[root@testvm1 ~]# passwd student
Changing password for user student.
New password: <Enter the password>
Retype new password: <Enter the password>
passwd: all authentication tokens updated successfully.
[root@testvm1 ~]#
```
Building rpm packages requires the `rpm-build` package, which is likely not already installed. Install it now as root. Note that this command will also install several dependencies. The number may vary, depending upon the packages already installed on your host; it installed a total of 17 packages on my test VM, which is pretty minimal.
```
dnf install -y rpm-build
```
The rest of this project should be performed as the user student unless otherwise explicitly directed. Open another terminal session and use `su` to switch to that user to perform the rest of these steps. Download a tarball that I have prepared of a development directory structure, utils.tar, from GitHub using the following command:
```
wget https://github.com/opensourceway/how-to-rpm/raw/master/utils.tar
```
This tarball includes all of the files and Bash scripts that will be installed by the final rpm. There is also a complete spec file, which you can use to build the rpm. We will go into detail about each section of the spec file.
As user student, using your home directory as your present working directory (pwd), untar the tarball.
```
[student@testvm1 ~]$ cd ; tar -xvf utils.tar
```
Use the `tree` command to verify that the directory structure of ~/development and the contained files looks like the following output:
```
[student@testvm1 ~]$ tree development/
development/
├── license
│   ├── Copyright.and.GPL.Notice.txt
│   └── GPL_LICENSE.txt
├── scripts
│   ├── create_motd
│   ├── die
│   ├── mymotd
│   └── sysdata
└── spec
    └── utils.spec
3 directories, 7 files
[student@testvm1 ~]$
```
The `mymotd` script creates a “Message Of The Day” data stream that is sent to stdout. The `create_motd` script runs the `mymotd` script and redirects the output to the /etc/motd file. This file is used to display a daily message to users who log in remotely using SSH.
The `die` script is my own script that wraps the `kill` command in a bit of code that can find running programs that match a specified string and kill them. It uses `kill -9` to ensure that they cannot ignore the kill message.
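A wrapper along those lines can be sketched in a few lines of shell (a hypothetical simplification for illustration; the author's actual `die` script is the one shipped in the tarball):

```
# find processes whose command lines match a pattern and force-kill them
die() {
    pids=$(pgrep -f "$1")
    if [ -n "$pids" ]; then
        # SIGKILL (-9) cannot be caught or ignored by the target process
        kill -9 $pids
        echo "Killed: $pids"
    else
        echo "No process matches: $1"
    fi
}
```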
The `sysdata` script can spew tens of thousands of lines of data about your computer hardware, the installed version of Linux, all installed packages, and the metadata of your hard drives. I use it to document the state of a host at a point in time, which I can refer to later. I used to do this to maintain a record of hosts that I installed for customers.
You may need to change ownership of these files and directories to student.student. Do this, if necessary, using the following command:
```
chown -R student.student development
```
Most of the files and directories in this tree will be installed on Fedora systems by the rpm you create during this project.
### Creating the build directory structure
The `rpmbuild` command requires a very specific directory structure. You must create this directory structure yourself because no automated way is provided. Create the following directory structure in your home directory:
```
~ ─ rpmbuild
    ├── RPMS
    │   └── noarch
    ├── SOURCES
    ├── SPECS
    └── SRPMS
```
We will not create the rpmbuild/RPMS/x86_64 directory because that would be architecture-specific for 64-bit compiled binaries. We have shell scripts that are not architecture-specific. In reality, we won't be using the SRPMS directory either, which would contain source files for the compiler.
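One way to create that tree is a single `mkdir -p` invocation, since `-p` creates any missing parent directories (shown here with explicit paths; bash users can shorten it with brace expansion):

```
# create the required rpmbuild directory tree for a noarch package
mkdir -p ~/rpmbuild/RPMS/noarch \
         ~/rpmbuild/SOURCES \
         ~/rpmbuild/SPECS \
         ~/rpmbuild/SRPMS
ls ~/rpmbuild
```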
### Examining the spec file
Each spec file has a number of sections, some of which may be ignored or omitted, depending upon the specific circumstances of the rpm build. This particular spec file is not an example of a minimal file required to work, but it is a good example of a moderately complex spec file that packages files that do not need to be compiled. If a compile were required, it would be performed in the `%build` section, which is omitted from this spec file because it is not required.
#### Preamble
This is the only section of the spec file that does not have a label. It consists of much of the information you see when the command `rpm -qi [Package Name]` is run. Each datum is a single line consisting of a tag, which identifies it, and the text value of that tag.
```
###############################################################################
# Spec file for utils
################################################################################
# Configured to be built by user student or other non-root user
################################################################################
#
Summary: Utility scripts for testing RPM creation
Name: utils
Version: 1.0.0
Release: 1
License: GPL
URL: http://www.both.org
Group: System
Packager: David Both
Requires: bash
Requires: screen
Requires: mc
Requires: dmidecode
BuildRoot: ~/rpmbuild/
# Build with the following syntax:
# rpmbuild --target noarch -bb utils.spec
```
Comment lines are ignored by the `rpmbuild` program. I always like to add a comment to this section that contains the exact syntax of the `rpmbuild` command required to create the package. The Summary tag is a short description of the package. The Name, Version, and Release tags are used to create the name of the rpm file, as in utils-1.0.0-1.rpm. Incrementing the release and version numbers lets you create rpms that can be used to update older ones.
The License tag defines the license under which the package is released. I always use a variation of the GPL. Specifying the license is important to clarify the fact that the software contained in the package is open source. This is also why I included the license and GPL statement in the files that will be installed.
The URL is usually the web page of the project or project owner. In this case, it is my personal web page.
The Group tag is interesting and is usually used for GUI applications. The value of the Group tag determines which group of icons in the applications menu will contain the icon for the executable in this package. Used in conjunction with the Icon tag (which we are not using here), the Group tag allows adding the icon and the required information to launch a program into the applications menu structure.
The Packager tag is used to specify the person or organization responsible for maintaining and creating the package.
The Requires statements define the dependencies for this rpm. Each is a package name. If one of the specified packages is not present, the DNF installation utility will try to locate it in one of the repositories defined in /etc/yum.repos.d and install it if it exists. If DNF cannot find one or more of the required packages, it will throw an error indicating which packages are missing and terminate.
The BuildRoot line specifies the top-level directory in which the `rpmbuild` tool will find the spec file and in which it will create temporary directories while it builds the package. The finished package will be stored in the noarch subdirectory that we specified earlier. The comment showing the command syntax used to build this package includes the option `--target noarch`, which defines the target architecture. Because these are Bash scripts, they are not associated with a specific CPU architecture. If this option were omitted, the build would be targeted to the architecture of the CPU on which the build is being performed.
The `rpmbuild` program can target many different architectures, and using the `--target` option allows us to build architecture-specific packages on a host with a different architecture from the one on which the build is performed. So I could build a package intended for use on an i686 architecture on an x86_64 host, and vice versa.
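A quick way to see what the default target would be is to ask the host for its architecture. The cross-build command shown in the comment below is hypothetical, assuming `rpmbuild` is installed:

```shell
# The default --target is the host architecture reported here
uname -m    # e.g. x86_64

# Cross-packaging for i686 on that host would then look like:
# rpmbuild --target i686 -bb utils.spec
```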
Change the packager name to yours and the URL to your own website if you have one.
#### %description
The `%description` section of the spec file contains a description of the rpm package. It can be very short or can contain many lines of information. Our `%description` section is rather terse.
```
%description
A collection of utility scripts for testing RPM creation.
```
#### %prep
The `%prep` section is the first script that is executed during the build process. This script is not executed during the installation of the package.
This script is just a Bash shell script. It prepares the build directory, creating directories used for the build as required and copying the appropriate files into their respective directories. This would include the sources required for a complete compile as part of the build.
The $RPM_BUILD_ROOT directory represents the root directory of an installed system. The directories created in the $RPM_BUILD_ROOT directory are fully qualified paths, such as /usr/local/share/utils, /usr/local/bin, and so on, in a live filesystem.
In the case of our package, we have no pre-compile sources as all of our programs are Bash scripts. So we simply copy those scripts and other files into the directories where they belong in the installed system.
```
%prep
################################################################################
# Create the build tree and copy the files from the development directories    #
# into the build tree.                                                         #
################################################################################
echo "BUILDROOT = $RPM_BUILD_ROOT"
mkdir -p $RPM_BUILD_ROOT/usr/local/bin/
mkdir -p $RPM_BUILD_ROOT/usr/local/share/utils
cp /home/student/development/utils/scripts/* $RPM_BUILD_ROOT/usr/local/bin
cp /home/student/development/utils/license/* $RPM_BUILD_ROOT/usr/local/share/utils
cp /home/student/development/utils/spec/* $RPM_BUILD_ROOT/usr/local/share/utils
exit
```
Note that the exit statement at the end of this section is required.
#### %files
This section of the spec file defines the files to be installed and their locations in the directory tree. It also specifies the file attributes and the owner and group owner for each file to be installed. The file permissions and ownerships are optional, but I recommend that they be explicitly set to eliminate any chance for those attributes to be incorrect or ambiguous when installed. Directories are created as required during the installation if they do not already exist.
```
%files
%attr(0744, root, root) /usr/local/bin/*
%attr(0644, root, root) /usr/local/share/utils/*
```
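The octal modes passed to `%attr` are ordinary chmod modes: 0744, for example, makes a script executable by its owner (root, once installed) and readable by everyone else. A quick sketch in plain shell, using a throwaway file:

```shell
# 0744: owner rwx, group r, others r. This is what
# %attr(0744, root, root) will apply to the installed scripts.
demo=$(mktemp)
chmod 0744 "$demo"
stat -c '%a' "$demo"    # prints 744 (GNU coreutils)
rm -f "$demo"
```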
#### %pre
This section is empty in our lab projects spec file. This would be the place to put any scripts that are required to run during installation of the rpm but prior to the installation of the files.
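If pre-installation work were needed, it would take the same shape as the other script sections. The fragment below is purely hypothetical, only to illustrate the idea: it refuses to install over an unpackaged copy of one of our scripts.

```
%pre
################################################################################
# Hypothetical example: abort the installation if an unpackaged copy of       #
# mymotd already exists on the target system.                                  #
################################################################################
if [ -e /usr/local/bin/mymotd ]
then
   echo "An unpackaged copy of mymotd already exists; remove it first."
   exit 1
fi
```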
#### %post
This section of the spec file is another Bash script. This one runs after the installation of files. This section can be pretty much anything you need or want it to be, including creating files, running system commands, and restarting services to reinitialize them after making configuration changes. The `%post` script for our rpm package performs some of those tasks.
```
%post
################################################################################
# Set up MOTD scripts                                                          #
################################################################################
cd /etc
# Save the old MOTD if it exists
if [ -e motd ]
then
   cp motd motd.orig
fi
# If not there already, add a link to create_motd in cron.daily
cd /etc/cron.daily
if [ ! -e create_motd ]
then
   ln -s /usr/local/bin/create_motd
fi
# create the MOTD for the first time
/usr/local/bin/mymotd > /etc/motd
```
The comments included in this script should make its purpose clear.
#### %postun
This section contains a script that would be run after the rpm package is uninstalled. Using rpm or DNF to remove a package removes all of the files listed in the `%files` section, but it does not remove files or links created by the `%post` section, so we need to handle that in this section.
This script usually consists of cleanup tasks that cannot be accomplished simply by erasing the files previously installed by the rpm. In the case of our package, it includes removing the link created by the `%post` script and restoring the saved original of the motd file.
```
%postun
# remove installed files and links
rm /etc/cron.daily/create_motd
# Restore the original MOTD if it was backed up
if [ -e /etc/motd.orig ]
then
   mv -f /etc/motd.orig /etc/motd
fi
```
#### %clean
This Bash script performs cleanup after the rpm build process. The two lines in the `%clean` section below remove the build directories created by the `rpmbuild` command. In many cases, additional cleanup may also be required.
```
%clean
rm -rf $RPM_BUILD_ROOT/usr/local/bin
rm -rf $RPM_BUILD_ROOT/usr/local/share/utils
```
#### %changelog
This optional text section contains a list of changes to the rpm and files it contains. The newest changes are recorded at the top of this section.
```
%changelog
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
  - The original package includes several useful scripts. It is
    primarily intended to be used to illustrate the process of
    building an RPM.
```
Replace the data in the header line with your own name and email address.
### Building the rpm
The spec file must be in the SPECS directory of the rpmbuild tree. I find it easiest to create a link to the actual spec file in that directory so that it can be edited in the development directory and there is no need to copy it to the SPECS directory. Make the SPECS directory your pwd, then create the link.
```
cd ~/rpmbuild/SPECS/
ln -s ~/development/spec/utils.spec
```
Run the following command to build the rpm. It should only take a moment to create the rpm if no errors occur.
```
rpmbuild --target noarch -bb utils.spec
```
Check in the ~/rpmbuild/RPMS/noarch directory to verify that the new rpm exists there.
```
[student@testvm1 ~]$ cd rpmbuild/RPMS/noarch/
[student@testvm1 noarch]$ ll
total 24
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
[student@testvm1 noarch]$
```
### Testing the rpm
As root, install the rpm to verify that it installs correctly and that the files are installed in the correct directories. The exact name of the rpm will depend upon the values you used for the tags in the Preamble section, but if you used the ones in the sample, the rpm name will be as shown in the sample command below:
```
[root@testvm1 ~]# cd /home/student/rpmbuild/RPMS/noarch/
[root@testvm1 noarch]# ll
total 24
-rw-rw-r--. 1 student student 24364 Aug 30 10:00 utils-1.0.0-1.noarch.rpm
[root@testvm1 noarch]# rpm -ivh utils-1.0.0-1.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:utils-1.0.0-1                    ################################# [100%]
```
Check /usr/local/bin to ensure that the new files are there. You should also verify that the create_motd link in /etc/cron.daily has been created.
Use the `rpm -q --changelog utils` command to view the changelog. View the files installed by the package using the `rpm -ql utils` command (that is a lowercase L in `ql`).
```
[root@testvm1 noarch]# rpm -q --changelog utils
* Wed Aug 29 2018 Your Name <Youremail@yourdomain.com>
- The original package includes several useful scripts. It is
    primarily intended to be used to illustrate the process of
    building an RPM.
[root@testvm1 noarch]# rpm -ql utils
/usr/local/bin/create_motd
/usr/local/bin/die
/usr/local/bin/mymotd
/usr/local/bin/sysdata
/usr/local/share/utils/Copyright.and.GPL.Notice.txt
/usr/local/share/utils/GPL_LICENSE.txt
/usr/local/share/utils/utils.spec
[root@testvm1 noarch]#
```
Remove the package.
```
rpm -e utils
```
### Experimenting
Now you will change the spec file to require a package that does not exist. This will simulate a dependency that cannot be met. Add the following line immediately under the existing Requires line:
```
Requires: badrequire
```
Build the package and attempt to install it. What message is displayed?
We used the `rpm` command to install and delete the `utils` package. Try installing the package with yum or DNF. You must be in the same directory as the package or specify the full path to the package for this to work.
### Conclusion
There are many tags and a couple of sections that we did not cover in this look at the basics of creating an rpm package. The resources listed below can provide more information. Building rpm packages is not difficult; you just need the right information. I hope this helps you; it took me months to figure things out on my own.
We did not cover building from source code, but if you are a developer, that should be a simple step from this point.
Creating rpm packages is another good way to be a lazy sysadmin and save time and effort. It provides an easy method for distributing and installing the scripts and other files that we as sysadmins need to install on many hosts.
### Resources
  * Edward C. Bailey, Maximum RPM, Sams Publishing, 2000, ISBN 0-672-31105-4
  * Edward C. Bailey, [Maximum RPM][1], updated online version
* [RPM Documentation][4]: This web page lists most of the available online documentation for rpm. It includes many links to other websites and information about rpm.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/9/how-build-rpm-packages
Author: [David Both][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国 (Linux China)](https://linux.cn/)
[a]: https://opensource.com/users/dboth
[1]: http://ftp.rpm.org/max-rpm/
[2]: http://rpm.org/index.html
[3]: http://www.both.org/?p=960
[4]: http://rpm.org/documentation.html

How to turn on an LED with Fedora IoT
======
![](https://fedoramagazine.org/wp-content/uploads/2018/08/LED-IoT-816x345.jpg)
Do you enjoy running Fedora and containers, and do you have a Raspberry Pi? What about using all three together to play with LEDs? This article introduces Fedora IoT and shows you how to install a preview image on a Raspberry Pi. You'll also learn how to interact with GPIO in order to light up an LED.
### What is Fedora IoT?
Fedora IoT is one of the current Fedora Project objectives, with a plan to become a full Fedora Edition. The result will be a system that runs on ARM (aarch64 only at the moment) devices such as the Raspberry Pi, as well as on the x86_64 architecture.
![][1]
Fedora IoT is based on OSTree, like [Fedora Silverblue][2] and the former [Atomic Host][3].
### Download and install Fedora IoT
The official Fedora IoT images are coming with the Fedora 29 release. However, in the meantime you can download a [Fedora 28-based image][4] for this experiment.
You have two options to install the system: either flash the SD card using the `dd` command, or use the `fedora-arm-installer` tool. The Fedora Wiki offers more information about [setting up a physical device][5] for IoT. Also, remember that you might need to resize the third partition.
Once you insert the SD card into the device, you'll need to complete the installation by creating a user. This step requires either a serial connection, or an HDMI display with a keyboard to interact with the device.
When the system is installed and ready, the next step is to configure a network connection. Log in to the system with the user you have just created, and choose one of the following options:
* If you need to configure your network manually, run a command similar to the following. Remember to use the right addresses for your network:
```
$ nmcli connection add con-name cable ipv4.addresses \
192.168.0.10/24 ipv4.gateway 192.168.0.1 \
connection.autoconnect true ipv4.dns "8.8.8.8,1.1.1.1" \
type ethernet ifname eth0 ipv4.method manual
```
* If theres a DHCP service on your network, run a command like this:
```
$ nmcli con add type ethernet con-name cable ifname eth0
```
### **The GPIO interface in Fedora**
Many tutorials about GPIO on Linux focus on the legacy GPIO sysfs interface. This interface is deprecated, and the upstream Linux kernel community plans to remove it completely, due to security and other issues.
The Fedora kernel is already compiled without this legacy interface, so there's no /sys/class/gpio on the system. This tutorial uses a new character device /dev/gpiochipN provided by the upstream kernel. This is the current way of interacting with GPIO.
To interact with this new device, you need to use a library and a set of command line tools. The common command line tools such as echo or cat won't work with this device.
You can install the CLI tools by installing the libgpiod-utils package. A corresponding Python library is provided by the python3-libgpiod package.
### **Creating a container with Podman**
[Podman][6] is a container runtime with a command line interface similar to Docker. The big advantage of Podman is that it doesn't run any daemon in the background. That's especially useful for devices with limited resources. Podman also allows you to start containerized services with systemd unit files. Plus, it has many additional features.
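Because Podman runs no daemon, such a containerized service can be driven directly by systemd. The unit file below is only a sketch: the container name `gpioexperiment` matches the one created later in this article, the binary path assumes a default Fedora install, and you would place the file in /etc/systemd/system/ yourself.

```
[Unit]
Description=GPIO experiment container
After=network.target

[Service]
ExecStart=/usr/bin/podman start -a gpioexperiment
ExecStop=/usr/bin/podman stop -t 10 gpioexperiment
Restart=on-failure

[Install]
WantedBy=multi-user.target
```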
Well create a container in these two steps:
1. Create a layered image containing the required packages.
2. Create a new container starting from our image.
First, create a file named Dockerfile with the content below. This tells podman to build an image based on the latest Fedora image available in the registry. Then it updates the system inside and installs some packages:
```
FROM fedora:latest
RUN  dnf -y update
RUN  dnf -y install libgpiod-utils python3-libgpiod
```
You have created a build recipe of a container image based on the latest Fedora with updates, plus packages to interact with GPIO.
Now, run the following command to build your base image:
```
$ sudo podman build --tag fedora:gpiobase -f ./Dockerfile
```
You have just created your custom image with all the bits in place. You can play with this base container image as many times as you want without installing the packages every time you run it.
### Working with Podman
To verify the image is present, run the following command:
```
$ sudo podman images
REPOSITORY                 TAG        IMAGE ID       CREATED          SIZE
localhost/fedora           gpiobase   67a2b2b93b4b   10 minutes ago  488MB
docker.io/library/fedora   latest     c18042d7fac6   2 days ago     300MB
```
Now, start the container and do some actual experiments. Containers are normally isolated and don't have access to the host system, including the GPIO interface. Therefore, you need to mount it inside while starting the container. To do this, use the `--device` option in the following command:
```
$ sudo podman run -it --name gpioexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash
```
You are now inside the running container. Before you move on, here are some more container commands. For now, exit the container by typing exit or pressing **Ctrl+D**.
To list the existing containers, including those not currently running, such as the one you just created, run:
```
$ sudo podman container ls -a
CONTAINER ID   IMAGE                       COMMAND     CREATED         STATUS                             PORTS   NAMES
64e661d5d4e8   localhost/fedora:gpiobase   /bin/bash   37 seconds ago  Exited (0) Less than a second ago          gpioexperiment
```
To create a new container, run this command:
```
$ sudo podman run -it --name newexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash
```
Delete it with the following command:
```
$ sudo podman rm newexperiment
```
### **Turn on an LED**
Now you can use the container you already created. If you exited from the container, start it again with this command:
```
$ sudo podman start -ia gpioexperiment
```
As already discussed, you can use the CLI tools provided by the libgpiod-utils package in Fedora. To list the available GPIO chips, run:
```
$ gpiodetect
gpiochip0 [pinctrl-bcm2835] (54 lines)
```
To get the list of the lines exposed by a specific chip, run:
```
$ gpioinfo gpiochip0
```
Notice there's no correlation between the number of physical pins and the number of lines printed by the previous command. What's important is the BCM number, as shown on [pinout.xyz][7]. It is not advised to play with the lines that don't have a corresponding BCM number.
Now, connect an LED to physical pin 40, which is BCM 21. Remember: the shorter leg of the LED (the negative leg, called the cathode) must be connected to a GND pin of the Raspberry Pi through a 330 ohm resistor, and the longer leg (the anode) to physical pin 40.
To turn the LED on, run the following command. It will stay on until you press **Ctrl+C**:
```
$ gpioset --mode=wait gpiochip0 21=1
```
To light it up for a certain period of time, add the `-b` (run in the background) and `-s NUM` (how many seconds) parameters, as shown below. For example, to light the LED for 5 seconds, run:
```
$ gpioset -b -s 5 --mode=time gpiochip0 21=1
```
Another useful command is `gpioget`. It gets the status of a pin (high or low), and can be useful to detect buttons and switches.
![Closeup of LED connection with GPIO][8]
### **Conclusion**
You can also play with LEDs using Python: [there are some examples here][9]. You can use i2c devices inside the container as well. In addition, Podman is not strictly related to this Fedora edition. You can install it on any existing Fedora Edition, or try it on the two new OSTree-based systems in Fedora: [Fedora Silverblue][2] and [Fedora CoreOS][10].
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/turnon-led-fedora-iot/
Author: [Alessio Ciregia][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国 (Linux China)](https://linux.cn/)
[a]: http://alciregi.id.fedoraproject.org/
[1]: https://fedoramagazine.org/wp-content/uploads/2018/08/oled-1024x768.png
[2]: https://teamsilverblue.org/
[3]: https://www.projectatomic.io/
[4]: https://kojipkgs.fedoraproject.org/compose/iot/latest-Fedora-IoT-28/compose/IoT/
[5]: https://fedoraproject.org/wiki/InternetOfThings/GettingStarted#Setting_up_a_Physical_Device
[6]: https://github.com/containers/libpod
[7]: https://pinout.xyz/
[8]: https://fedoramagazine.org/wp-content/uploads/2018/08/breadboard-1024x768.png
[9]: https://github.com/brgl/libgpiod/tree/master/bindings/python/examples
[10]: https://coreos.fedoraproject.org/