Merge remote-tracking branch 'LCTT/master'

commit 94b7b4339b

@@ -1,189 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (32-bit life support: Cross-compiling with GCC)
[#]: via: (https://opensource.com/article/19/7/cross-compiling-gcc)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

32-bit life support: Cross-compiling with GCC
======

Use GCC to cross-compile binaries for different architectures from a single build machine.

![Ratchet set tools][1]

If you're a developer creating binary packages, like an RPM, DEB, Flatpak, or Snap, you have to compile code for a variety of different target platforms. Typical targets include 32-bit and 64-bit x86 and ARM. You could do your builds on different physical or virtual machines, but that means maintaining several systems. Instead, you can use the GNU Compiler Collection ([GCC][2]) to cross-compile, producing binaries for several different architectures from a single build machine.

Assume you have a simple dice-rolling game that you want to cross-compile. Something written in C is relatively easy to build on most systems, so for the sake of realism I wrote this example in C++, which makes the program depend on something not present in C (**iostream**, specifically).

```
#include <iostream>
#include <cstdlib>
#include <ctime>   // for time(), used to seed the random generator

using namespace std;

void lose(int c);
void win(int c);
void draw();

int main() {
    // seed rand() so each run produces a different sequence of rolls
    srand(time(nullptr));

    int i;
    do {
        cout << "Pick a number between 1 and 20: \n";
        cin >> i;
        int c = rand() % 21;    // the computer "rolls" 0 through 20
        if (i > 20) lose(c);
        else if (i < c) lose(c);
        else if (i > c) win(c);
        else draw();
    } while (1 == 1);           // play forever; quit with Ctrl+C
}

void lose(int c) {
    cout << "You lose! Computer rolled " << c << "\n";
}

void win(int c) {
    cout << "You win!! Computer rolled " << c << "\n";
}

void draw() {
    cout << "What are the chances. You tied. Try again, I dare you! \n";
}
```

Compile it on your system using the **g++** command:

```
$ g++ dice.cpp -o dice
```

Then run it to confirm that it works:

```
$ ./dice
Pick a number between 1 and 20:
[...]
```

You can see what kind of binary you just produced with the **file** command:

```
$ file ./dice
dice: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically
linked (uses shared libs), for GNU/Linux 5.1.15, not stripped
```

And, just as important, check which libraries it links to with **ldd**:

```
$ ldd dice
linux-vdso.so.1 => (0x00007ffe0d1dc000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fce8410e000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fce83d4f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fce83a52000)
/lib64/ld-linux-x86-64.so.2 (0x00007fce84449000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fce8383c000)
```

You have confirmed two things from these tests: The binary you just ran is 64-bit, and it is linked to 64-bit libraries.

That means that, in order to cross-compile for 32-bit, you must tell **g++** to:

  1. Produce a 32-bit binary
  2. Link to 32-bit libraries instead of the default 64-bit libraries

### Setting up your dev environment

To compile to 32-bit, you need 32-bit libraries and headers installed on your system. If you run a pure 64-bit system, then you have no 32-bit libraries or headers and need to install a base set. At the very least, you need the C and C++ libraries (**glibc** and **libstdc++**) along with the 32-bit version of the GCC support library (**libgcc**). The names of these packages may vary from distribution to distribution. On Slackware, a pure 64-bit distribution, 32-bit compatibility is available from the **multilib** packages provided by [Alien BOB][3]. On Fedora, CentOS, and RHEL:

```
$ yum install libstdc++-*.i686
$ yum install glibc-*.i686
$ yum install libgcc.i686
```

Regardless of the system you're using, you also must install any 32-bit libraries your project uses. For instance, if you include **yaml-cpp** in your project, then you must install the 32-bit version of **yaml-cpp** or, on many systems, its development package (for instance, **yaml-cpp-devel** on Fedora) before compiling your project.
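
For example, on Fedora that might look like the following (the exact 32-bit package name, **yaml-cpp-devel.i686** here, is an assumption; search your distribution's repositories to confirm it):

```
# assumed package name; verify with `yum search yaml-cpp`
$ sudo yum install yaml-cpp-devel.i686
```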

Once that's taken care of, the compilation is fairly simple:

```
$ g++ -m32 dice.cpp -o dice32 -L /usr/lib -march=i686
```

The **-m32** flag tells GCC to compile in 32-bit mode. The **-march=i686** option further defines the target CPU, and therefore which instructions and optimizations GCC may use (refer to **info gcc** for a list of options). The **-L** flag sets the path to the libraries you want GCC to link to. This is usually **/usr/lib** for 32-bit although, depending on how your system is set up, it could be **/usr/lib32**, or even **/opt/usr/lib**, or any place you know you keep your 32-bit libraries.

After the code compiles, verify your build:

```
$ file ./dice32
dice32: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs) [...]
```

And, of course, **ldd ./dice32** points to your 32-bit libraries.

### Different architectures

Compiling 32-bit on 64-bit for the same processor family allows GCC to make many assumptions about how to compile the code. If you need to compile for an entirely different processor, you must install the appropriate cross-build GCC utilities. Which utility you install depends on what you are compiling. This process is a little more complex than compiling for the same CPU family.

When you're cross-compiling for the same family, you can expect to find the same set of 32-bit libraries as 64-bit libraries, because your Linux distribution maintains both. When compiling for an entirely different architecture, you may have to hunt down the libraries required by your code. The versions you need may not be in your distribution's repositories, because your distribution may not provide packages for your target system, or it may not mirror all of its packages in a convenient location. If the code you're compiling is yours, then you probably have a good idea of what its dependencies are and possibly where to find them. If the code is something you have downloaded and need to compile, then you probably aren't as familiar with its requirements. In that case, investigate what the code requires to build correctly (the requirements are usually listed in the README or INSTALL files, and certainly in the source code itself), then gather the components.

For example, if you need to compile C code for ARM, you must first install **gcc-arm-linux-gnu** (32-bit) or **gcc-aarch64-linux-gnu** (64-bit) on Fedora or RHEL, or **arm-linux-gnueabi-gcc** and **binutils-arm-linux-gnueabi** on Ubuntu. This provides the commands and libraries you need to build (at least) a simple C program. Additionally, you need whatever libraries your code uses. You can place header files in the usual location (**/usr/include** on most systems), or you can place them in a directory of your choice and point GCC to it with the **-I** option.
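
At the package-manager level, that might look like the following sketch. The Fedora and RHEL package names are the ones given above; on Ubuntu, the package that provides the **arm-linux-gnueabi-gcc** command is **gcc-arm-linux-gnueabi**:

```
# Fedora or RHEL
$ sudo dnf install gcc-arm-linux-gnu

# Ubuntu
$ sudo apt install gcc-arm-linux-gnueabi binutils-arm-linux-gnueabi
```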

When compiling, don't use the standard **gcc** or **g++** command. Instead, use the GCC utility you installed. For example:

```
$ arm-linux-gnu-g++ dice.cpp \
  -I/home/seth/src/crossbuild/arm/cpp \
  -o armdice.bin
```

Verify what you've built:

```
$ file armdice.bin
armdice.bin: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV) [...]
```

### Libraries and deliverables

This was a simple example of how to use cross-compiling. In real life, your source code may produce more than just a single binary. While you can manage this manually, there's probably no good reason to do that. In my next article, I'll demonstrate GNU Autotools, which does most of the work required to make your code portable.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/cross-compiling-gcc

Author: [Seth Kenlon][a]
Selector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4 (Ratchet set tools)
[2]: https://gcc.gnu.org/
[3]: http://www.slackware.com/~alien/multilib/

@@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Become a lifelong learner and succeed at work)
[#]: via: (https://opensource.com/open-organization/19/7/informal-learning-adaptability)
[#]: author: (Colin Willis https://opensource.com/users/colinwillis)

Become a lifelong learner and succeed at work
======

In open organizations with cultures of adaptability, learning should be continuous—and won't always happen in a formal setting. Do we really understand how it works?

![Writing in a notebook][1]

Continuous learning refers to the ongoing, career-driven, intentional learning process people undertake to develop themselves. For people who consider themselves continuous learners, learning never stops—and these people see learning opportunities in everyday experiences. Engaging with one's colleagues in debate, reflecting on feedback, scouring the internet for a solution to a frustrating problem, trying something new, or taking a risk are all examples of the informal learning activities one can perform on the job.

Continuous learning is a core competency for anyone in an open organization. After all, open organizations are built upon peers thinking, arguing, and acting alongside one another. Thriving in the ambiguous, discourse-driven world of the open organization requires these sorts of skills from employees daily.

Unfortunately, the scientific literature has done a poor job of disseminating what we know about learning at work in a way that helps individuals appreciate and develop their own learning abilities. So in this article series, I'll introduce you to informal learning and help you understand how viewing learning as a skill can help you thrive—in any organization, but _especially_ open organizations.

### Why so formal?

To date, the scientific study of learning in organizations has focused primarily on the design, delivery, and evaluation of _formal_ training as opposed to _informal_ learning.

Investing in the development of the knowledge, skills, and abilities of its workforce is an important way an organization maintains its edge over its competitors. Organizations _formalize_ learning opportunities by creating or purchasing classes, online courses, workshops, and so on, which are meant to instruct an individual on job-related content—much like a class at a school. Providing a class is an easy (if expensive) way for an organization to ensure the skills or knowledge of its workforce remains current. Likewise, classroom settings are natural experiment rooms for researchers, making training-based research and work not only possible but also powerful.

Recent estimates suggest that between 70% and 80% of all job-related knowledge isn't learned in training but rather informally on the job.

Of course, people don't _need_ training to learn something; often, people learn by researching answers, talking to colleagues, reflecting, experimenting, or adapting to changes. In fact, [recent estimates suggest][2] that between 70% and 80% of all job-related knowledge isn't learned in training but rather _informally_ on the job. That isn't to say that formal training isn't effective; training can be _very_ effective, but it is a precise type of intervention. It simply isn't practical to formally train someone on most aspects of a job, especially as those jobs become more complex.

Informal learning, or any learning that occurs outside a structured learning environment, is therefore incredibly important to the workplace. In fact, [recent scientific evidence][3] suggests that informal learning is a better predictor of job performance than formal training.

So why do organizations and the scientific community focus so much on training?

### A cyclical process

Apart from the reasons I mentioned earlier, researching informal learning can be very difficult. Unlike formal training, informal learning occurs in unstructured environments, is highly dependent on the individual, and can be difficult or impossible to observe.

Until recently, most of the research on informal learning focused on defining the qualities characteristic of informal learning and identifying how informal learning is theoretically connected to work experience. Researchers have described a [dynamic, cyclical process][4] by which individuals learn informally in organizations.

Unlike formal training, informal learning occurs in unstructured environments, is highly dependent on the individual, and can be difficult or impossible to observe.

In the process, both the individual and the organization have agency to create learning opportunities. For example, an individual may be interested in learning something and performs learning behaviors to do so. The organization, in the form of feedback delivered to the individual, may signal that learning is needed. This could be a poor performance review, a comment made during a project, or a broader change in the organizational environment that isn't personally directed. These forces interact in the organizational environment (e.g., someone experiments with a new idea and his or her colleagues recognize and reward that behavior) or in the mind of the individual via reflection (e.g., someone reflects on feedback about his or her performance and decides to put more effort into learning the job). Unlike training, informal learning does not follow a formal, linear process. An individual can experience any part of the process at any time and experience multiple parts of the process simultaneously.

### Informal learning in the open organization

In open organizations specifically, both a decreased emphasis on hierarchy and an increased focus on a participatory culture fuel this informal learning process. In short, open organizations simply present more opportunities for individuals and the organizational environment to interact and spark learning moments. Moreover, ideas and change require a broader level of buy-in among employees in an open organization—and buy-in requires an appreciation for the adaptability and insight of others.

That said, simply increasing the number of opportunities to learn does not guarantee that learning will occur or be successful. One might even argue that the ambiguity and open discourse common in an open organization could _prevent_ someone who is _not_ skilled at continuous learning—again, that habit of learning over time and a core competency of the open organization—from contributing to the organization as effectively as they could in more traditional organizations.

Addressing these kinds of concerns requires a way of tracking informal learning in a consistent manner. Recently, there have been calls in the scientific community to create ways of measuring informal learning, so systematic research can be conducted to address questions about the antecedents and outcomes of informal learning. My own research has focused on this call, and I have spent several years developing and refining our understanding of informal learning behaviors so that they can be measured.

In the second part of this article series, I'll focus on findings from a recent study I conducted inside an open organization, where I tested my measure of informal learning behaviors and connected them to the broader workplace environment and individual work outcomes.

--------------------------------------------------------------------------------

via: https://opensource.com/open-organization/19/7/informal-learning-adaptability

Author: [Colin Willis][a]
Selector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/colinwillis
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/notebook-writing-pen.jpg?itok=uA3dCfu_ (Writing in a notebook)
[2]: https://www.groupoe.com/images/Accelerating_On-the-Job-Learning_-_White_Paper.pdf
[3]: https://www.researchgate.net/publication/316490244_Antecedents_and_Outcomes_of_Informal_Learning_Behaviors_a_Meta-Analysis
[4]: https://psycnet.apa.org/record/2008-13469-009

134  sources/tech/20190716 Save and load Python data with JSON.md  Normal file

@@ -0,0 +1,134 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Save and load Python data with JSON)
[#]: via: (https://opensource.com/article/19/7/save-and-load-data-python-json)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

Save and load Python data with JSON
======

The JSON format saves you from creating your own data formats, and is particularly easy to learn if you already know Python. Here's how to use it with Python.

![Cloud and database icons][1]

[JSON][2] stands for JavaScript Object Notation. This format is a popular method of storing data in key-value arrangements so it can be parsed easily later. Don't let the name fool you, though: You can use JSON in Python—not just JavaScript—as an easy way to store data, and this article demonstrates how to get started.

First, take a look at this simple JSON snippet:

```
{
    "name": "tux",
    "health": "23",
    "level": "4"
}
```

That's pure JSON and has not been altered for Python or any other language. Yet if you're familiar with Python, you might notice that this example JSON code looks an awful lot like a Python dictionary. In fact, the two are very similar: If you are comfortable with Python lists and dictionaries, then JSON is a natural fit for you.
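
For instance, here's a minimal sketch using **json.loads**, the standard-library function that parses JSON from a string, showing that the snippet above maps directly onto a Python dict:

```
#!/usr/bin/env python3

import json

snippet = '{"name": "tux", "health": "23", "level": "4"}'
data = json.loads(snippet)  # parse JSON text into a Python dict
print(data['name'])         # prints: tux
```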

### Storing data in JSON format

You might consider using JSON if your application needs to store somewhat complex data. While you may have previously resorted to custom text configuration files or data formats, JSON offers you structured, recursive storage, and Python's JSON module offers all of the parsing libraries necessary for getting this data in and out of your application. So, you don't have to write parsing code yourself, and other programmers don't have to decode a new data format when interacting with your application. For this reason, JSON is easy to use, and ubiquitous.

Here is some sample Python code using a dictionary within a dictionary:

```
#!/usr/bin/env python3

import json

# instantiate an empty dict
team = {}

# add a team member
team['tux'] = {'health': 23, 'level': 4}
team['beastie'] = {'health': 13, 'level': 6}
team['konqi'] = {'health': 18, 'level': 7}
```

This code creates a Python dictionary called **team**. It's empty initially (you can create one that's already populated, but that's impossible if you don't yet have the data to put into the dictionary).

To add to the **dict** object, you create a key, such as **tux**, **beastie**, or **konqi** in the example code, and then provide a value. In this case, the value is _another_ dictionary full of player statistics.

Dictionaries are mutable. You can add, remove, and update the data they contain as often as you please. This format is ideal storage for data that your application frequently uses.

### Saving data in JSON format

If the data you're storing in your dictionary is user data that needs to persist after the application quits, then you must write the data to a file on disk. This is where the JSON Python module comes in:

```
with open('mydata.json', 'w') as f:
    json.dump(team, f)
```

This code block creates a file called **mydata.json** and opens it in write mode. The file is represented by the variable **f** (a completely arbitrary designation; you can use whatever variable name you like, such as **file**, **FILE**, **output**, or practically anything). Meanwhile, the JSON module's **dump** function is used to dump the data from the **dict** into the data file.

Saving data from your application is as simple as that, and the best part is that the data is structured and predictable. To see, take a look at the resulting file:

```
$ cat mydata.json
{"tux": {"health": 23, "level": 4}, "beastie": {"health": 13, "level": 6}, "konqi": {"health": 18, "level": 7}}
```
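
The single-line output is compact but hard for humans to scan. If you want a more readable file, **json.dump** also accepts an **indent** argument:

```
with open('mydata.json', 'w') as f:
    json.dump(team, f, indent=2)  # pretty-print with two-space indentation
```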

### Reading data from a JSON file

If you are saving data to JSON format, you probably want to read the data back into Python eventually. To do this, use the Python JSON module's **json.load** function:

```
#!/usr/bin/env python3

import json

f = open('mydata.json')
team = json.load(f)

print(team['tux'])
print(team['tux']['health'])
print(team['tux']['level'])

print(team['beastie'])
print(team['beastie']['health'])
print(team['beastie']['level'])

# when finished, close the file
f.close()
```

This function implements the inverse, more or less, of saving the file: an arbitrary variable (**f**) represents the data file, and then the JSON module's **load** function dumps the data from the file into the arbitrary **team** variable.

The **print** statements in the code sample demonstrate how to use the data. It can be confusing to compound **dict** key upon **dict** key, but as long as you are familiar with your own dataset, or else can read the JSON source to get a mental map of it, the logic makes sense.

Of course, the **print** statements don't have to be hard-coded. You could rewrite the sample application using a **for** loop:

```
for i in team.values():
    print(i)
```
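
If you also want the keys, iterate over **items()** instead, which yields each key-value pair:

```
for name, stats in team.items():
    print(name, stats)  # e.g., tux {'health': 23, 'level': 4}
```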

### Using JSON

As you can see, JSON integrates surprisingly well with Python, so it's a great format when your data fits its model. JSON is flexible and simple to use, and learning one basically means you're learning the other, so consider it for data storage the next time you're working on a Python application.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/save-and-load-data-python-json

Author: [Seth Kenlon][a]
Selector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg (Cloud and database icons)
[2]: https://json.org

488  sources/tech/20190716 Security scanning your DevOps pipeline.md  Normal file

@@ -0,0 +1,488 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Security scanning your DevOps pipeline)
[#]: via: (https://opensource.com/article/19/7/security-scanning-your-devops-pipeline)
[#]: author: (Jessica Repka https://opensource.com/users/jrepka)

Security scanning your DevOps pipeline
======

A hands-on introduction to container security using Anchore with Jenkins on Kubernetes.

![Target practice][1]

Security is one of the most important considerations for running software in any environment, and using open source tools is a great way to handle security without going over budget, whether in your corporate environment or your home setup. It is easy to talk about the concepts of security, but it's another thing to understand the tools that will get you there. This tutorial explains how to set up security scanning using [Jenkins][2] with [Anchore][3].

There are many ways to run [Kubernetes][4]. Using [Minikube][5], a prepackaged virtual machine (VM) environment designed for local testing, reduces the complexity of running an environment.

Technology | What is it?
---|---
[Jenkins][2] | An open source automation server
[Anchore][3] | A centralized service for inspection, analysis, and certification of container images
[Minikube][5] | A single-node Kubernetes cluster inside a VM

In this tutorial, you'll learn how to add Jenkins and Anchore to Kubernetes and configure a scanning pipeline for new container images and registries.

_Note: For best performance in this tutorial, Minikube requires at least four CPUs._

### Basic requirements

#### Knowledge

  * Docker (including a [Docker Hub][6] account)
  * Minikube
  * Jenkins
  * Helm
  * Kubectl

#### Software

  * Minikube
  * Helm
  * Kubectl client
  * Anchore CLI installed locally

### Set up the environment

[Install Minikube][7] in whatever way makes sense for your environment. If you have enough resources, I recommend giving a bit more than the default memory and CPU power to your VM:

```
$ minikube config set memory 8192
⚠️  These changes will take effect upon a minikube delete and then a minikube start
$ minikube config set cpus 4
⚠️  These changes will take effect upon a minikube delete and then a minikube start
```
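
You can confirm the settings before starting the cluster:

```
$ minikube config view
```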

If you are already running a Minikube instance, you must delete it using **minikube delete** before continuing.

Next, [install Helm][8], the standard Kubernetes package manager, in whatever way makes sense for your operating system.

Now you're ready to install the applications.

### Install and configure Anchore and Jenkins

To begin, start Minikube and its dashboard.

```
$ minikube start
😄  minikube v1.1.0 on darwin (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
🐳  Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
🔄  Relaunching Kubernetes v1.14.2 using kubeadm ...
⌛  Verifying: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

$ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:52646/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/ in your default browser...
```

As long as you stay connected to this terminal session, you will have access to a visual dashboard for Minikube at **127.0.0.1:52646**.

![Minikube dashboard][9]

### Create namespace and install Jenkins

The next step is to get the Jenkins build environment up and running. To start, ensure your storage is configured for persistence so you can reuse it later. Set the storage class for **Persistent Volumes** before you install Helm, so its installation will persist across reboots.

Either exit the dashboard using CTRL+C or open a new terminal to run:

```
$ minikube addons enable default-storageclass
✅  default-storageclass was successfully enabled
```

**Using namespaces**

I test quite a few different applications, and I find it incredibly helpful to use [namespaces][10] in Kubernetes. Leaving everything in the default namespace can overcrowd it and make it challenging to uninstall a Helm-installed application. If you stick to this for Jenkins, you can remove it by running **helm del --purge jenkins --namespace jenkins** then **kubectl delete ns jenkins**. This is much easier than manually hunting and pecking through a long list of containers.
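
As a quick sketch, that cleanup is just the same two commands named above:

```
$ helm del --purge jenkins --namespace jenkins
$ kubectl delete ns jenkins
```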

### Install Helm

To use Helm, Kubernetes' default package manager, initialize an environment and install Jenkins.

```
$ kubectl create ns jenkins
namespace "jenkins" created
$ helm init
Creating /Users/alleycat/.helm
Creating /Users/alleycat/.helm/repository
Creating /Users/alleycat/.helm/repository/cache
Creating /Users/alleycat/.helm/repository/local
Creating /Users/alleycat/.helm/plugins
Creating /Users/alleycat/.helm/starters
Creating /Users/alleycat/.helm/cache/archive
Creating /Users/alleycat/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/alleycat/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
$ helm install --name jenkins stable/jenkins --namespace jenkins
NAME: jenkins
LAST DEPLOYED: Tue May 28 11:12:39 2019
NAMESPACE: jenkins
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME           DATA  AGE
jenkins        5     0s
jenkins-tests  1     0s

==> v1/Deployment
NAME     READY  UP-TO-DATE  AVAILABLE  AGE
jenkins  0/1    1           0          0s

==> v1/PersistentVolumeClaim
NAME     STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
jenkins  Pending  standard  0s

==> v1/Pod(related)
NAME                      READY  STATUS   RESTARTS  AGE
jenkins-7565554b8f-cvhbd  0/1    Pending  0         0s

==> v1/Role
NAME                     AGE
jenkins-schedule-agents  0s

==> v1/RoleBinding
NAME                     AGE
jenkins-schedule-agents  0s

==> v1/Secret
NAME     TYPE    DATA  AGE
jenkins  Opaque  2     0s

==> v1/Service
NAME           TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
jenkins        LoadBalancer  10.96.90.0    <pending>    8080:32015/TCP  0s
jenkins-agent  ClusterIP     10.103.85.49  <none>       50000/TCP       0s

==> v1/ServiceAccount
NAME     SECRETS  AGE
jenkins  1        0s

NOTES:
1. Get your 'admin' user password by running:
   printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
   You can watch its status by running 'kubectl get svc --namespace jenkins -w jenkins'
   export SERVICE_IP=$(kubectl get svc --namespace jenkins jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
   echo http://$SERVICE_IP:8080/login

3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
```

Note the Bash one-liner above that begins with **printf**; it allows you to query for the Jenkins password, and it can be challenging to find your [default Jenkins password][11] without it. Take note of the password and save it for later.

### Set up port forwarding to log into the UI

Now that you've installed Minikube and Jenkins, log in to configure Jenkins. You'll need the Pod name for port forwarding:

```
$ kubectl get pods --namespace jenkins
NAME                       READY  STATUS   RESTARTS  AGE
jenkins-7565554b8f-cvhbd   1/1    Running  0         9m
```

Run the following to set up port forwarding (using your Jenkins pod name, which will be different from mine below):

```
# verify your pod name from the namespace named jenkins
$ kubectl get pods --namespace jenkins
NAME                       READY  STATUS   RESTARTS  AGE
jenkins-7565554b8f-cvhbd   1/1    Running  0         37m
# then forward it
$ kubectl port-forward jenkins-7565554b8f-cvhbd 8088:8080 -n jenkins
Forwarding from 127.0.0.1:8088 -> 8080
Forwarding from [::1]:8088 -> 8080
```

Note that you will need multiple terminal tabs once you run the port-forwarding command. Leave this tab open going forward to maintain your port-forwarding session.

Navigate to Jenkins in your preferred browser by going to **localhost:8088**. The default username is **admin**, and the password is stored in Kubernetes Secrets. Use the command from the end of the **helm install jenkins** step:

```
$ printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
Jfstacz2vy
```

After logging in, the UI will display **Welcome to Jenkins!**

![Jenkins UI][12]

From here, we'll have to install some plugins for our pipeline to work properly. From the main page, choose **Manage Jenkins** on the left-hand side.

![][13]

Then choose **Manage Plugins**.

![][14]

Then choose **Available**.

![][15]

Then check the boxes beside the plugins shown below.

![][16]

![][17]

Once you have checked the boxes, scroll to the bottom of the page and choose **Install without Restart**.

![][18]

#### Deploy Anchore

[Anchore Engine][19] "is an open source project that provides a centralized service for inspection, analysis, and certification of container images." Deploy it within Minikube to do some security inspection on your Jenkins pipeline. Add a security namespace for the Helm install, then run the installation:

```
$ kubectl create ns security
namespace "security" created
$ helm install --name anchore-engine stable/anchore-engine --namespace security
NAME: anchore-engine
LAST DEPLOYED: Wed May 29 12:22:25 2019
NAMESPACE: security
STATUS: DEPLOYED
## And a lot more output
```

Confirm that the service is up and running with this command:

```
$ kubectl run -i --tty anchore-cli --restart=Always --image anchore/engine-cli --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=${ANCHORE_CLI_PASS} --env ANCHORE_CLI_URL=http://anchore-engine-anchore-engine-api.security.svc.cluster.local:8228/v1/
If you don't see a command prompt, try pressing enter.
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$
```

If you are logged into an Anchore container (similar to above), then the system is online. The default credentials for Anchore are **admin/foobar**. Type **exit** to leave the terminal.

Use port forwarding again to access the Anchore Engine API from your host system:

```
$ kubectl get pods --namespace security
NAME                                                         READY  STATUS   RESTARTS  AGE
anchore-engine-anchore-engine-analyzer-7cf5958795-wtw69      1/1    Running  0         3m
anchore-engine-anchore-engine-api-5c4cdb5587-mxkd7           1/1    Running  0         3m
anchore-engine-anchore-engine-catalog-648fcf54fd-b8thl       1/1    Running  0         3m
anchore-engine-anchore-engine-policy-7b78dd57f4-5dwsx        1/1    Running  0         3m
anchore-engine-anchore-engine-simplequeue-859c989f99-5dwgf   1/1    Running  0         3m
anchore-engine-postgresql-844dfcc468-s92c5                   1/1    Running  0         3m
# Find the API pod name above and add it to the command below
$ kubectl port-forward anchore-engine-anchore-engine-api-5c4cdb5587-mxkd7 8228:8228 --namespace security
```

### Join Anchore and Jenkins

Go back to the Jenkins UI at **http://127.0.0.1:8088/**. On the main menu, click **Manage Jenkins > Manage Plugins**. Choose the **Available** tab, then scroll down or search for the **Anchore Container Image Scanner Plugin**. Check the box next to the plugin, then choose **Install without restart**.

![Jenkins plugin manager][20]

Once the installation completes, go back to the main menu in Jenkins and choose **Manage Jenkins**, then **Configure System**. Scroll down to **Anchore Configuration**. Confirm **Engine Mode** is selected and a URL is entered, which is output from the Helm installation. Add the username and password (default **admin/foobar**). For debugging purposes, check **Enable DEBUG logging**.

![Anchore plugin mode][21]

Now that the plugin is configured, you can set up a Jenkins pipeline to scan your container builds.

### Jenkins pipeline and Anchore scanning

The purpose of this setup is to be able to inspect container images on the fly to ensure they meet security requirements. To do so, use Anchore Engine and give it permission to access your images. In this example, they are on Docker Hub, but they could also be on Quay or any other [container registry supported by Anchore][22].

In order to run the necessary commands on the command line, we need to find our Anchore pod name and then open a shell in it with **kubectl exec**:

```
$ kubectl get all
NAME                               READY  STATUS   RESTARTS  AGE
pod/anchore-cli-86d7fd9568-rmknw   1/1    Running  2         2d

NAME                 TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
service/kubernetes   ClusterIP  10.96.0.1   <none>       443/TCP  7d

NAME                          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
deployment.apps/anchore-cli   1        1        1           1          2d

NAME                                     DESIRED  CURRENT  READY  AGE
replicaset.apps/anchore-cli-86d7fd9568   1        1        1      2d
# Let's connect to our anchore-cli pod
$ kubectl exec -it anchore-cli-86d7fd9568-rmknw -- bash
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin --p foobar registry add index.docker.io <username> <password>
Registry: index.docker.io
User: jrepka
Type: docker_v2
Verify TLS: True
Created: 2019-05-14T22:37:59Z
Updated: 2019-05-14T22:37:59Z
```

Anchore Engine is now ready to work with your registry. There are [several ways][23] it can do so, including:

  * Analyzing images
  * Inspecting image content
  * Scanning repositories
  * Viewing security vulnerabilities

Point Anchore Engine toward an image to analyze it against your policy. For our testing, we'll use the publicly available [Cassandra][24] image:

```
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin --p foobar image add docker.io/library/cassandra:latest

Image Digest: sha256:7f7afff84384e36593b085d62e087674029de9aced4482c7780f155d8ee55fad
Parent Digest: sha256:800084987d58c2a62daeea4662ecdd79fd4928d449279bd410ef7690ef482469
Analysis Status: not_analyzed
Image Type: docker
Analyzed At: None
Image ID: a34c036183d18527684cdb613fbb1c806c7e1bc26f6911dcc25e918aa7b093fc
Dockerfile Mode: None
Distro: None
Distro Version: None
Size: None
Architecture: None
Layer Count: None

Full Tag: docker.io/library/cassandra:latest
Tag Detected At: 2019-07-09T17:44:45Z
```
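
Analysis runs in the background. As a sketch (these are standard **anchore-cli** subcommands, but verify them against your installed version), you can block until the analysis finishes and then list any operating-system-level vulnerabilities it found:

```
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin --p foobar image wait docker.io/library/cassandra:latest
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin --p foobar image vuln docker.io/library/cassandra:latest os
```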

You will also need to grab a default policy ID to test against for your pipeline. (In a future article, I will go into customizing policy and whitelist rules.)

Run the following command to get the policy ID:

```
[anchore@anchore-cli-86d7fd9568-rmknw anchore-cli]$ anchore-cli --u admin --p foobar policy list

Policy ID                             Active  Created               Updated
2c53a13c-1765-11e8-82ef-23527761d060  True    2019-05-14T22:12:05Z  2019-05-14T22:12:05Z
```

Now that you have added a registry and the image you want, you can build a pipeline to scan it continuously.

Scanning works in this order: **Build, push, scan.** To prevent images that do not meet security requirements from making it into production, I recommend a tiered approach to security scanning: promote a container image to a separate development environment and promote it to production only once it passes the Anchore Engine scan.

We can't do anything too exciting until we configure our custom policy, so we will make sure a scan completes successfully by running a Hello World version of it. Below is an example workflow written in Groovy:

```
node {
    echo 'Hello World'
}
```

To run this code, log back into the Jenkins UI at **localhost:8088**, choose **New Item**, then **Pipeline**, and place this code block into the Pipeline Script area.

![The "Hello World" of Jenkins][25]

It will take some time to complete since we're building the entire Cassandra image added above. You will see a blinking red icon in the meantime.

![Jenkins building][26]

It will eventually finish and pass, which means we have set everything up correctly.
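
Once the plumbing works, a real job hands an image list to the Anchore Container Image Scanner plugin's **anchore** pipeline step. The following is a minimal sketch, not this article's final pipeline: the **anchore** step with its **name** and **bailOnFail** parameters follows the plugin's documented usage, and the image tag is the Cassandra image added earlier.

```
node {
    // Write the image(s) to scan, one per line, to a file the plugin reads.
    // Assumes the image is already in a registry Anchore can reach.
    writeFile file: 'anchore_images', text: 'docker.io/library/cassandra:latest'

    // Run the Anchore scan; bailOnFail fails the build if the
    // policy evaluation returns a STOP result.
    anchore name: 'anchore_images', bailOnFail: true
}
```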

### That's a wrap

If you made it this far, you have a running Minikube configuration with Jenkins and Anchore Engine. You also have one or more images hosted on a container registry service and a way for Jenkins to show errors when images don't meet the default policy. In the next article, we will build a custom pipeline that verifies security policies set by Anchore Engine.

Anchore can also be used to scan large-scale Amazon Elastic Container Registries (ECRs), as long as the credentials are configured properly in Jenkins.

### Other resources

This is a lot of information for one article. If you'd like more details, the following links (which include my GitHub repo for all the examples in this tutorial) may help:

  * [Anchore scan example][27]
  * [Anchore Engine][28]
  * [Running Kubernetes locally via Minikube][5]
  * [Jenkins Helm Chart][29]

Are there any specific pipelines you want me to build in the next tutorial? Let me know in the comments.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/security-scanning-your-devops-pipeline

Author: [Jessica Repka][a]
Selector: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/jrepka
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/target-security.png?itok=Ca5-F6GW (Target practice)
[2]: https://jenkins.io/
[3]: https://anchore.com/
[4]: https://opensource.com/resources/what-is-kubernetes
[5]: https://kubernetes.io/docs/setup/minikube/
[6]: https://hub.docker.com/
[7]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[8]: https://helm.sh/docs/using_helm/#installing-helm
[9]: https://opensource.com/sites/default/files/uploads/minikube-dashboard.png (Minikube dashboard)
[10]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
[11]: https://opensource.com/article/19/6/jenkins-admin-password-helm-kubernetes
[12]: https://opensource.com/sites/default/files/uploads/welcometojenkins.png (Jenkins UI)
[13]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_4.52.06_pm.png
[14]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_4.52.30_pm.png
[15]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_4.59.20_pm.png
[16]: https://opensource.com/sites/default/files/resize/lead-images/screen_shot_2019-06-14_at_8.26.55_am-500x288.png
[17]: https://opensource.com/sites/default/files/resize/lead-images/screen_shot_2019-06-14_at_8.26.25_am-500x451.png
[18]: https://opensource.com/sites/default/files/lead-images/screen_shot_2019-06-20_at_5.05.10_pm.png
[19]: https://github.com/anchore/anchore-engine
[20]: https://opensource.com/sites/default/files/uploads/jenkins-install-without-restart.png (Jenkins plugin manager)
[21]: https://opensource.com/sites/default/files/uploads/anchore-configuration.png (Anchore plugin mode)
[22]: https://github.com/anchore/enterprise-docs/blob/master/content/docs/using/ui_usage/registries/_index.md
[23]: https://docs.anchore.com/current/docs/using/cli_usage/
[24]: http://cassandra.apache.org/
[25]: https://opensource.com/sites/default/files/articles/jenkins_hello_world_pipeline_opensourcecom.png (The "Hello World" of Jenkins)
[26]: https://opensource.com/sites/default/files/jenkins_build_opensourcecom.png (Jenkins building)
[27]: https://github.com/Alynder/anchore_example
[28]: https://github.com/anchore/anchore-engine/wiki
[29]: https://github.com/helm/charts/tree/master/stable/jenkins

@@ -0,0 +1,188 @@

[#]: collector: (lujun9972)
[#]: translator: (robsean)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (32-bit life support: Cross-compiling with GCC)
[#]: via: (https://opensource.com/article/19/7/cross-compiling-gcc)
[#]: author: (Seth Kenlon https://opensource.com/users/seth)

32-bit life support: Cross-compiling with GCC
======

Use GCC to cross-compile binaries for different architectures from a single build machine.

![Ratchet set tools][1]

If you're a developer creating binary packages, like an RPM, DEB, Flatpak, or Snap, you have to compile code for a variety of different target platforms. Typical targets include 32-bit and 64-bit x86 and ARM. You could do your builds on different physical or virtual machines, but that means maintaining several systems. Instead, you can use the GNU Compiler Collection ([GCC][2]) to cross-compile, producing binaries for several different architectures from a single build machine.

Assume you have a simple dice-rolling game that you want to cross-compile. Something written in C is relatively easy to build on most systems, so for the sake of realism this example is written in C++, which makes the program depend on something not present in C (**iostream**, specifically).

```
#include <iostream>
#include <cstdlib>
#include <ctime>   // for time(), used to seed the random generator

using namespace std;

void lose(int c);
void win(int c);
void draw();

int main() {
    // seed rand() so each run produces a different sequence of rolls
    srand(time(nullptr));

    int i;
    do {
        cout << "Pick a number between 1 and 20: \n";
        cin >> i;
        int c = rand() % 21;    // the computer "rolls" 0 through 20
        if (i > 20) lose(c);
        else if (i < c) lose(c);
        else if (i > c) win(c);
        else draw();
    } while (1 == 1);           // play forever; quit with Ctrl+C
}

void lose(int c) {
    cout << "You lose! Computer rolled " << c << "\n";
}

void win(int c) {
    cout << "You win!! Computer rolled " << c << "\n";
}

void draw() {
    cout << "What are the chances. You tied. Try again, I dare you! \n";
}
```

Compile it on your system using the **g++** command:

```
$ g++ dice.cpp -o dice
```

Then, run it to confirm that it works:

```
$ ./dice
Pick a number between 1 and 20:
[...]
```

You can use the **file** command to see what kind of binary you just produced:

```
$ file ./dice
dice: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically
linked (uses shared libs), for GNU/Linux 5.1.15, not stripped
```

Just as important, use **ldd** to check which libraries it links to:

```
$ ldd dice
linux-vdso.so.1 => (0x00007ffe0d1dc000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fce8410e000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fce83d4f000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fce83a52000)
/lib64/ld-linux-x86-64.so.2 (0x00007fce84449000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fce8383c000)
```

These tests confirm two things: the binary you just ran is 64-bit, and it links to 64-bit libraries.

That means that, to cross-compile for 32-bit, you must tell **g++** to:

  1. Produce a 32-bit binary
  2. Link to 32-bit libraries instead of the default 64-bit libraries

### Setting up your dev environment

To compile to 32-bit, you need 32-bit libraries and headers installed on your system. If you run a pure 64-bit system, then you have no 32-bit libraries or headers and need to install a base set. At the very least, you need the C and C++ libraries (**glibc** and **libstdc++**) along with the 32-bit version of the GCC support library (**libgcc**). The names of these packages may vary from distribution to distribution. On Slackware, a pure 64-bit distribution, 32-bit compatibility is available from the **multilib** packages provided by [Alien BOB][3]. On Fedora, CentOS, and RHEL:

```
$ yum install libstdc++-*.i686
$ yum install glibc-*.i686
$ yum install libgcc.i686
```

Regardless of the system you're using, you also must install any 32-bit libraries your project uses. For example, if you include **yaml-cpp** in your project, then you must install the 32-bit version of **yaml-cpp** or, on many systems, its development package (for example, **yaml-cpp-devel** on Fedora) before compiling your project.

Once that's taken care of, the compilation is fairly simple:

```
$ g++ -m32 dice.cpp -o dice32 -L /usr/lib -march=i686
```

The **-m32** flag tells GCC to compile in 32-bit mode. The **-march=i686** option further defines the target CPU, and therefore which instructions and optimizations GCC may use (refer to **info gcc** for a list of options). The **-L** flag sets the path to the libraries you want GCC to link to. This is usually **/usr/lib** for 32-bit although, depending on how your system is set up, it could be **/usr/lib32**, or even **/opt/usr/lib**, or any place you know you keep your 32-bit libraries.

After the code compiles, verify your build:

```
$ file ./dice32
dice32: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),
dynamically linked (uses shared libs) [...]
```

And, of course, **ldd ./dice32** points to your 32-bit libraries.

### Different architectures

Compiling 32-bit on 64-bit for the same processor family allows GCC to make many assumptions about how to compile the code. If you need to compile for an entirely different processor, you must install the appropriate cross-build GCC utilities. Which utility you install depends on what you are compiling. This process is a little more complex than compiling for the same CPU family.

When you're cross-compiling for the same family, you can expect to find the same set of 32-bit libraries as 64-bit libraries, because your Linux distribution maintains both. When compiling for an entirely different architecture, you may have to hunt down the libraries required by your code. The versions you need may not be in your distribution's repositories, because your distribution may not provide packages for your target system, or it may not mirror all of its packages in a convenient location. If the code you're compiling is yours, then you probably have a good idea of what its dependencies are and possibly where to find them. If the code is something you have downloaded and need to compile, then you probably aren't as familiar with its requirements. In that case, investigate what the code requires to build correctly (the requirements are usually listed in the README or INSTALL files, and certainly in the source code itself), then gather the components.

For example, if you need to compile C code for ARM, you must first install **gcc-arm-linux-gnu** (32-bit) or **gcc-aarch64-linux-gnu** (64-bit) on Fedora or RHEL, or **arm-linux-gnueabi-gcc** and **binutils-arm-linux-gnueabi** on Ubuntu. This provides the commands and libraries you need to build (at least) a simple C program. Additionally, you need whatever libraries your code uses. You can place header files in the usual location (**/usr/include** on most systems), or you can place them in a directory of your choice and point GCC to it with the **-I** option.

When compiling, don't use the standard **gcc** or **g++** command. Instead, use the GCC utility you installed. For example:

```
$ arm-linux-gnu-g++ dice.cpp \
  -I/home/seth/src/crossbuild/arm/cpp \
  -o armdice.bin
```

Verify what you've built:

```
$ file armdice.bin
armdice.bin: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV) [...]
```

### Libraries and deliverables

This was a simple example of how to use cross-compiling. In real life, your source code may produce more than just a single binary. While you can manage this manually, there's probably no good reason to do that. In my next article, I'll demonstrate GNU Autotools, which does most of the work required to make your code portable.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/7/cross-compiling-gcc

Author: [Seth Kenlon][a]
Selector: [lujun9972][b]
Translator: [robsean](https://github.com/robsean)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]: https://opensource.com/users/seth
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4 (Ratchet set tools)
[2]: https://gcc.gnu.org/
[3]: http://www.slackware.com/~alien/multilib/