[#]: collector: (lujun9972)
[#]: translator: (LazyWolfLin)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How I containerize a build system)
[#]: via: (https://opensource.com/article/20/4/how-containerize-build-system)
[#]: author: (Ravi Chandran https://opensource.com/users/ravichandran)

How I containerize a build system
======

Building a repeatable structure to deliver applications as containers can be complicated. Here is one way to do it effectively.

![Containers on a ship on the ocean][1]

A build system comprises the tools and processes used to transition from source code to a running application. This transition also involves changing the code's audience from the software developer to the end user, whether that end user is a colleague in operations or a deployment system.

After creating a few build systems using containers, I think I have a decent, repeatable approach that's worth sharing. These build systems were used for generating loadable software images for embedded hardware and for compiling machine learning algorithms, but the approach is abstract enough to apply to any container-based build system.

This approach is about creating or organizing the build system in a way that makes it easy to use and maintain. It's not about the tricks needed to containerize any particular compiler or tool. It applies to the common use case of software developers building software to hand off a maintainable image to other technical users (whether they are sysadmins, DevOps engineers, or some other title). The build system is abstracted away from the end users so that they can focus on the software.

### Why containerize a build system?

Creating a repeatable, container-based build system can provide a number of benefits to a software team:

  * **Focus:** I want to focus on writing my application. When I call a tool to "build," I want the toolset to deliver a ready-to-use binary. I don't want to spend time troubleshooting the build system. In fact, I'd rather not know or care about the build system.
  * **Identical build behavior:** Whatever the use case, I want to ensure that the entire team uses the same versions of the toolset and gets the same results when building. Otherwise, I am constantly dealing with the case of "it works on my PC but not yours." Using the same toolset version and getting identical output for a given input source file set is critical in a team project.
  * **Easy setup and future migration:** Even if a detailed set of instructions is given to everyone to install a toolset for a project, chances are someone will get it wrong. Or there could be issues due to how each person has customized their Linux environment. This can be further compounded by the use of different Linux distributions across the team (or other operating systems). The issues can get uglier quickly when it comes time to move to the next version of the toolset. Using containers and the guidelines in this article will make migration to newer versions much easier.

Containerizing the build systems that I use on my projects has certainly been valuable in my experience, as it has alleviated the problems above. I tend to use Docker for my container tooling, but there can still be issues because installation and network configuration vary from environment to environment, especially if you work in a corporate environment with complex proxy settings. Even so, I now have far fewer build system problems to deal with.

### Walking through a containerized build system

I created a [tutorial repository][2] that you can clone and examine later, or you can follow along through this article. I'll walk through all the files in the repository. The build system is deliberately trivial (it runs **gcc**) to keep the focus on the build system architecture.

### Build system requirements

Two key aspects that I think are desirable in a build system are:

  * **Standard build invocation:** I want to be able to build code by pointing to some work directory whose path is **/path/to/workdir**. I want to invoke the build as `./build.sh /path/to/workdir`. To keep the example architecture simple (for the sake of explanation), I'll assume that the output is also generated somewhere within **/path/to/workdir**. (Otherwise, it would increase the number of volumes exposed to the container, which is not difficult, but more cumbersome to explain.)
  * **Custom build invocation via shell:** Sometimes, the toolset needs to be used in unforeseen ways. In addition to the standard **build.sh** invocation of the toolset, some of these could be added as options to **build.sh** if needed. But I always want to be able to get to a shell where I can invoke toolset commands directly. In this trivial example, say I sometimes want to try out different **gcc** optimization options to see their effects. To achieve this, I want to invoke `./shell.sh /path/to/workdir`. This should get me to a Bash shell inside the container with access to the toolset and to my **workdir**, so I can experiment as I please with the toolset.

### Build system architecture

To meet the basic requirements above, here is how I architect the build system:

![Container build system architecture][3]

At the bottom, the **workdir** represents any software source code that needs to be built by the software developer end users. Typically, this **workdir** will be a source-code repository. The end users can manipulate this source code repository in any way they want before invoking a build. For example, if they're using **git** for version control, they could **git checkout** the feature branch they are working on and add or modify files. This keeps the build system independent of the **workdir**.

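For example, an end user's flow might look like this (the branch name and repository layout here are hypothetical, for illustration only):

```
# prepare the source to build; any manipulation of workdir is fine
cd ~/repos/example_workdir
git checkout my-feature-branch

# then invoke the containerized build from the build system folder
cd ~/repos/dockerize-tutorial
./build.sh ~/repos/example_workdir
```
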
The three blocks at the top collectively represent the containerized build system. The left-most (yellow) block at the top represents the scripts (**build.sh** and **shell.sh**) that the end user will use to interact with the build system.

In the middle (the red block) is the Dockerfile and the associated script **build_docker_image.sh**. The development operations people (me, in this case) will typically execute this script to generate the container image. (In fact, I'll execute it many, many times until I get everything working right, but that's another story.) I then distribute the image to the end users, such as through a trusted container registry. The end users will need this image. In addition, they will clone the build system repository (i.e., one that is equivalent to the [tutorial repository][2]).

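As a sketch of that distribution step, pushing the image to a registry might look like the following; the registry hostname is a placeholder, not part of the tutorial:

```
# tag the local image for a hypothetical internal registry, then push it
docker image tag swbuilder:v1 registry.example.com/swbuilder:v1
docker image push registry.example.com/swbuilder:v1
```
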
The **run_build.sh** script on the right is executed inside the container when the end user invokes either **build.sh** or **shell.sh**. I'll explain these scripts in detail next. The key here is that the end user does not need to know anything about the red or blue blocks, or how a container works, in order to use any of this.

### Build system details

The tutorial repository's file structure maps to this architecture. I've used this prototype structure for relatively complex build systems, so its simplicity is not a limitation in any way. Below, I've listed the tree structure of the relevant files from the repository. The **dockerize-tutorial** folder could be replaced with any other name corresponding to a build system. From within this folder, I invoke either **build.sh** or **shell.sh** with one argument: the path to the **workdir**.

```
dockerize-tutorial/
├── build.sh
├── shell.sh
└── swbuilder
    ├── build_docker_image.sh
    ├── install_swbuilder.dockerfile
    └── scripts
        └── run_build.sh
```

Note that I've deliberately excluded **example_workdir** from the tree above; you'll find it in the tutorial repository. Actual source code would typically reside in a separate repository and not be part of the build tool repository; I included it in this repository so I wouldn't have to deal with two repositories in the tutorial.

Doing the tutorial is not necessary if you're only interested in the concepts, as I'll explain all the files. But if you want to follow along (and have Docker installed), first build the container image **swbuilder:v1** with:

```
cd dockerize-tutorial/swbuilder/
./build_docker_image.sh
docker image ls  # the resulting image will be swbuilder:v1
```

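The article doesn't reproduce **build_docker_image.sh** inline; a minimal sketch of what such a script could contain, assuming only the Dockerfile name and image tag seen above, is:

```
#!/bin/bash
# build the toolset image from the Dockerfile in this folder,
# tagging it so the wrapper scripts can reference it by name
docker image build \
    --file install_swbuilder.dockerfile \
    --tag swbuilder:v1 \
    .
```
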
Then invoke **build.sh** as:

```
cd dockerize-tutorial
./build.sh ~/repos/dockerize-tutorial/example_workdir
```

The code for [build.sh][4] is below. This script instantiates a container from the container image **swbuilder:v1**. It performs two volume mappings: one from the **example_workdir** folder to **/workdir** inside the container, and the second from **dockerize-tutorial/swbuilder/scripts** outside the container to **/scripts** inside the container.

```
docker container run \
    --volume $(pwd)/swbuilder/scripts:/scripts \
    --volume $1:/workdir \
    --user $(id -u ${USER}):$(id -g ${USER}) \
    --rm -it --name build_swbuilder swbuilder:v1 \
    build
```

In addition, **build.sh** runs the container with your username (and group, which the tutorial assumes to be the same) so that you will not have issues with file permissions when accessing the generated build output.

Note that [**shell.sh**][5] is identical except for two things. First, **build.sh** creates a container named **build_swbuilder** while **shell.sh** creates one named **shell_swbuilder**, so that there are no conflicts if either script is invoked while the other one is running.

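A sketch of what **shell.sh** looks like under that description (its final argument, **shell**, is the second difference, explained next):

```
docker container run \
    --volume $(pwd)/swbuilder/scripts:/scripts \
    --volume $1:/workdir \
    --user $(id -u ${USER}):$(id -g ${USER}) \
    --rm -it --name shell_swbuilder swbuilder:v1 \
    shell
```
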
The other key difference between the two scripts is the last argument: **build.sh** passes in the argument **build** while **shell.sh** passes in **shell**. If you look at the [Dockerfile][6] that is used to create the container image, its last line contains the following **ENTRYPOINT**. This means that the **docker container run** invocation above will execute the **run_build.sh** script with either **build** or **shell** as the sole input argument.

```
# run bash script and process the input command
ENTRYPOINT [ "/bin/bash", "/scripts/run_build.sh"]
```

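For context, here is a minimal sketch of what such a Dockerfile could look like; the base image and installed packages are assumptions for illustration, not necessarily what the tutorial's [install_swbuilder.dockerfile][6] actually uses:

```
# assumed base image; the tutorial's actual Dockerfile may differ
FROM ubuntu:18.04

# bake the (relatively stable) toolset into the image: here, just gcc
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc libc6-dev && \
    rm -rf /var/lib/apt/lists/*

# run bash script and process the input command
# (run_build.sh itself arrives via the /scripts volume at run time)
ENTRYPOINT [ "/bin/bash", "/scripts/run_build.sh"]
```
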
[**run_build.sh**][7] uses this input argument either to start the Bash shell or to invoke **gcc** to build the trivial **helloworld.c** project. A real build system would typically invoke a Makefile rather than run **gcc** directly.

```
cd /workdir

if [ $1 = "shell" ]; then
    echo "Starting Bash Shell"
    /bin/bash
elif [ $1 = "build" ]; then
    echo "Performing SW Build"
    gcc helloworld.c -o helloworld -Wall
fi
```

You could certainly pass more than one argument if your use case demands it. For the build systems I've dealt with, the build is usually for a given project with a specific **make** invocation. In the case of a build system where the build invocation is complex, you can have **run_build.sh** call a specific script inside **workdir** that the end user has to write, as sketched below.

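A sketch of that delegation pattern; the script name **project_build.sh** is a hypothetical convention, not part of the tutorial:

```
cd /workdir

if [ $1 = "shell" ]; then
    echo "Starting Bash Shell"
    /bin/bash
elif [ $1 = "build" ]; then
    # delegate the complex build invocation to a script that the
    # end user writes and maintains inside their own workdir
    ./project_build.sh
fi
```
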
### A note about the scripts folder

You may be wondering why the **scripts** folder is located deep in the tree structure rather than at the top level of the repository. Either approach would work, but I didn't want to encourage the end user to poke around and change things there, and placing it deeper makes that a bit more difficult. I could also have added a **.dockerignore** file to exclude the **scripts** folder, as it doesn't need to be part of the container context, but since it's tiny, I didn't bother.

### Simple yet flexible

While the approach is simple, I've used it for a few rather different build systems and found it to be quite flexible. The aspects that are relatively stable (e.g., a given toolset that changes only a few times a year) are fixed inside the container image. The aspects that are more fluid are kept outside the container image as scripts. This allows me to easily modify how the toolset is invoked by updating a script and pushing the changes to the build system repository. All the user needs to do is pull the changes to their local build system repository, which is typically quite fast (unlike updating a Docker image). The structure lends itself to having as many volumes and scripts as are needed while abstracting the complexity away from the end user.

How will you need to modify your application to optimize it for a containerized environment?

--------------------------------------------------------------------------------

via: https://opensource.com/article/20/4/how-containerize-build-system

Author: [Ravi Chandran][a]
Collector: [lujun9972][b]
Translator: [LazyWolfLin](https://github.com/LazyWolfLin)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux China](https://linux.cn/).

[a]: https://opensource.com/users/ravichandran
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_2015-3-osdc-lead.png?itok=O6aivM_W (Containers on a ship on the ocean)
[2]: https://github.com/ravi-chandran/dockerize-tutorial
[3]: https://opensource.com/sites/default/files/uploads/build_sys_arch.jpg (Container build system architecture)
[4]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/build.sh
[5]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/shell.sh
[6]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/install_swbuilder.dockerfile
[7]: https://github.com/ravi-chandran/dockerize-tutorial/blob/master/swbuilder/scripts/run_build.sh